[
  {
    "path": ".cursor/rules/C/core.mdc",
    "content": "---\ndescription: Enforce C++11 coding standards for Fledge plugins.\nglobs: [\"*.cpp\", \"*.h\"]\nalwaysApply: true\nauthor: \"Devki Nandan Ghildiyal\"\n---\n\n## Fledge Project Context\n\n- **Language Focus**: C++11\n- **Primary Target**: South/North/Filter plugins\n- **Tech Stack**\n  - **Languages** : C++11\n  - **Libraries** : GTest version 1.10.0, boost version 1.71\n  - **Database** : SQLite version 3, Postgres version 12\n\n\n# Roles\n\n## Senior Architect\n- Focus: System design, scalability, security, module boundaries, third-party integrations\n- Responsibilities:\n  - Review requirements and flag any gaps in them.\n  - Suggest design patterns, scalability strategies, and deployment models.\n  - Validate alignment with NFRs (non-functional requirements).\n\n## Senior Developer\n- Focus: Implementation, performance, maintainability, code quality\n- Responsibilities:\n  - Review core logic, refactoring, and REST API contracts.\n  - Ensure idiomatic usage of C++11 features.\n  - Enforce clean coding practices and SOLID principles.\n  - Fledge plugin API and extension points\n  - Filter pipeline structure and reading processing logic\n  - South/North plugin lifecycle\n\n## Senior QA Engineer\n- Focus: Test coverage, edge cases, negative scenarios, automation\n- Responsibilities:\n  - Review requirements and prepare a test plan.\n  - Validate test plans and coverage.\n  - Suggest boundary tests, failure modes, stress scenarios.\n  - Generate unit tests using GTest version 1.10.0.\n  - Review unit test structure.\n\nYou are playing **three roles** while reviewing, commenting, or helping, with deep knowledge of Fledge:\n\n1. **Senior Architect** – Guide the system and module design.\n2. **Senior Developer** – Evaluate the code quality and implementation.\n3. 
**Senior QA Engineer** – Think from a test and validation standpoint.\n\nRespond with comments or suggestions **clearly labeled** by the role, e.g.:\n\n- `[Architect] Analyze requirements.md file to find out gaps in requirements if any.`\n- `[Developer] Use range-based loops.`\n- `[QA] Add unit tests for empty asset name.`\n\n\n## Code Style and Structure\n- Write concise, idiomatic C++ code with accurate examples.\n- Follow modern C++11 conventions and best practices.\n- Use object-oriented, procedural, or functional programming patterns as appropriate.\n- Leverage STL and standard algorithms for collection operations.\n- Use descriptive variable and method names (e.g., 'isUserSignedIn', 'calculateTotal').\n- Structure files into headers (*.h) and implementation files (*.cpp) with logical separation of concerns.\n\n## Naming Conventions\n- Use PascalCase for class names.\n- Use camelCase for variable names and methods.\n- Use SCREAMING_SNAKE_CASE for constants and macros.\n- Prefix member variables with an m_ (e.g., `m_userId`).\n\n## C++ Features Usage\n\n- Prefer modern C++11 features (e.g., auto, range-based loops).\n- Use `constexpr` and `const` to optimize compile-time computations.\n\n## Syntax and Formatting\n- Follow a consistent coding style, such as Google C++ Style Guide.\n- Place braces on the same line for control structures and methods.\n- Use clear and consistent commenting practices.\n\n## JSON Parsing\n- Fledge uses RapidJSON for JSON parsing by using C++ '*.h' files from '../../../C/thirdparty/rapidjson/include/rapidjson/'\n\n## REST API Support\n- Fledge supports REST API by using C++ files from '../../../C/thirdparty/Simple-Web-Server'\n\n## Error Handling and Validation\n- Use exceptions for error handling (e.g., `std::runtime_error`, `std::invalid_argument`).\n- Use RAII for resource management to avoid memory leaks.\n- Validate inputs at function boundaries.\n- Log errors using a logging class logger.h from 
'../../../C/common/include/logger.h'\n\n## Performance Optimization\n- Avoid unnecessary heap allocations; prefer stack-based objects where possible.\n- Use `std::move` to enable move semantics and avoid copies.\n- Optimize loops with algorithms from `<algorithm>` (e.g., `std::sort`, `std::for_each`).\n- Profile and optimize critical sections with tools like Valgrind.\n\n## Key Conventions\n- Do not use smart pointers.\n- Avoid global variables; use singletons sparingly.\n- Use `enum class` for strongly typed enumerations.\n- Separate interface from implementation in classes.\n- Use templates and metaprogramming judiciously for generic solutions.\n\n## Testing\n- Write unit tests using frameworks like Google Test (GTest version 1.10.0).\n- Mock dependencies with libraries like Google Mock.\n- Implement integration tests for system components.\n\n## Security\n- Use secure coding practices to avoid vulnerabilities (e.g., buffer overflows, dangling pointers).\n- Prefer `std::array` or `std::vector` over raw arrays.\n- Avoid C-style casts; use `static_cast`, `dynamic_cast`, or `reinterpret_cast` when necessary.\n- Enforce const-correctness in functions and member variables.\n\n## Documentation\n- Write clear comments for classes, methods, and critical logic.\n- Use Doxygen for generating API documentation.\n- Document assumptions, constraints, and expected behavior of code. All public classes and methods must include Doxygen comments specifying assumptions, constraints, and expected input/output.\n\nFollow the official ISO C++ standards and guidelines for best practices in modern C++11 development.\n\n## C++ south plugin development\n\n- Use 'plugins/south.mdc'\n\n## C++ north plugin development\n\n- Use 'plugins/north.mdc'\n\n## C++ Filter plugin development\n\n- Use 'plugins/filter.mdc'\n\n## Log Levels\n\nFledge supports five levels of logging, which can be considered in descending order of severity: fatal, error, warning, info and debug. 
Each of these has a defined use and a targeted audience. By default, only the three most severe levels of log will be written and presented to the user.\n\n| Log Level | Intended Audience | Usage |\n| :---- | :---- | :---- |\n| fatal | End-user | This is the most severe error level and is reserved for situations whereby the service that raises them cannot continue. It is not for transient failures. |\n| error | End-user | Errors should be used when a transient issue prevents the service continuing in the short term, but may be recovered without the service restarting. |\n| warning | End-user | A warning message should be used if the user needs to be aware of some reduction in service or non-fatal issue that does not stop the flow of data. |\n| info | Code/Pipeline Developer | Informational messages should be used to give the user or developer more information as to the progress of a process or task, but do not impact the result of that task. They can be considered more as a progress tracking aid. |\n| debug | Code Developer | Debug messages are reserved for the code developers working on a plugin or core features of the Fledge services. |\n\n# Message Content for Logs\n\n- All log entries should be written to be human readable and standalone from the code that raises them. The reader of the log message should not need to have access to the source code in order to understand a log message. 
They should not include internal code references or variable names, but rather be descriptive regarding any variables printed in the log.\n- Log messages should not contain source file names, line numbers or function names, as these have little to no meaning to the intended audience for the majority of log messages. These also take up valuable space that can be better used to give a more in-depth description of the issue.\n- Not only are the lengths of messages limited in syslog, but special characters such as new lines and carriage returns are mapped to hash codes and hence do not format correctly. Messages should not include such characters and should be simple strings.\n\n"
  },
  {
    "path": ".cursor/rules/C/plugins/filter.mdc",
    "content": "---\ndescription: C++ Filter Plugin Architecture.\nglobs: [\"*.cpp\", \"*.h\"]\nalwaysApply: true\nauthor: \"Devki Nandan Ghildiyal\"\n---\n\n## Filter Plugin\n\n- Filter plugin provides a mechanism to alter data as it flows from a sensor to Fledge, or from Fledge to external systems.\n\n\n## General plugin guidelines\n\n- General guidelines for writing a Fledge plugin are in the '../../../docs/plugin_developers_guide/02_writing_plugins.rst' file\n\n## Filter plugin Guidelines\n\n- Specific guidelines for writing a filter plugin are at '../../../docs/plugin_developers_guide/06_filter_plugins.rst'\n\n\n## Common support classes\n\n- Information about common support classes used by plugins is at '../../../docs/plugin_developers_guide/035_CPP.rst'\n\n## Mutex and Locking\n\n- Thread Safety: The Fledge filter plugin can receive data (ingest()) and configuration changes (reconfigure()) simultaneously from different threads\n- Data Consistency: Prevents reading configuration while it's being modified\n- RAII Pattern: std::lock_guard automatically unlocks when going out of scope, so the mutex is released even if an exception is thrown\n\n- The following sample code demonstrates use of mutex and locks when doing ingestion\n\n```\nvoid ingest(std::vector<Reading *> *readings, std::vector<Reading *>& outReadings)\n{\n    std::lock_guard<std::mutex> guard(m_configMutex);\n    IngestData(readings, outReadings);\n    readings->clear();\n}\n\n```\n\n- The following sample code demonstrates use of mutex and locks when doing configuration changes\n\n```\nvoid reconfigure(const std::string& conf)\n{\n    std::lock_guard<std::mutex> guard(m_configMutex);\n    setConfig(conf);\n    handleConfig(m_config);\n}\n\n```\n\n## Implementation details of the plugin\n\n- Filter plugin receives readings, alters them as required, and passes them on down the pipeline\n- Common C++ classes used in Fledge framework are at the following locations '../../../C/common/include' and '../../../C/common/'\n- C++ class to handle reading in Fledge is at 
'../../../C/common/include/reading.h'\n- C++ class to handle datapoint in Fledge is at '../../../C/common/include/datapoint.h'\n- C++ class to handle logging in Fledge is at '../../../C/common/include/logger.h'\n- C++ plugin must have a 'plugin.cpp' file\n- 'plugin.cpp' file must define the plugin's default configuration and entry points\n- Implementation of the plugin's requirements is kept in a separate header and class implementation file, which is used by the 'plugin.cpp' file\n- Every plugin has 'docs' and 'tests' directories\n\n## Fledge plugin configuration\n\nEvery Fledge plugin has a default configuration represented as JSON.\n\nThe following example demonstrates the minimal configuration for every plugin. The configuration JSON for each plugin must have an element called \"plugin\"\n\n\n```\nconst char *default_config = QUOTE({\n              \"plugin\" : {\n                      \"description\" : \"My example plugin in C++\",\n                      \"type\" : \"string\",\n                      \"default\" : \"MyPlugin\",\n                      \"readonly\" : \"true\"\n                      }\n});\n```\n\n- The constant default_config is a string that contains the JSON configuration document.\n- The QUOTE macro is used to manage the JSON document easily\n- The configuration JSON document will have one element for each configuration item.\n- Fledge plugins support the following types\n\n| Type | Description | \n|:-----|:------------|\n|integer|An integer numeric value. The minimum and maximum properties may be used to control the limits of the values assigned to an integer.|\n|float|A floating point numeric item. The minimum and maximum properties may be used to control the limits of the values assigned to a float.|\n|string|An alpha-numeric array of characters that may contain any printable characters. The length property can be used to constrain the maximum length of the string.|\n|password|It is the same as the string type. 
User interfaces do not show this in plain text.|\n|boolean|A boolean value that can be assigned the values true or false.|\n|enumeration|The item can be assigned one of a fixed set of values. These values are defined in the options property of the item.|\n|list|A list of items, the items can be of type string, integer, float, enumeration or object. The type of the items within the list must all be the same, and this is defined via the items property of the list. A limit on the maximum number of entries allowed in the list can be enforced by use of the listSize property.|\n|kvlist|A key value pair list. The key is always a string value, but the value of the item in the list may be of type string, enumeration, float, integer or object. The type of the values in the kvlist is defined by the items property of the configuration item. A limit on the maximum number of entries allowed in the list can be enforced by use of the listSize property.|\n|object|A complex configuration type with multiple elements that may be used within list and kvlist items only; it is not possible to have object type items outside of a list. 
Object type configuration items have a set of properties defined, each of which is itself a configuration item.|\n\n## Example for integer type\n\nSample configuration item \"register\"\n\n```\n \"register\" : {\n\t\t\t  \"description\" : \"The register number to read\",\n\t\t\t  \"displayName\" : \"Register\",\n\t\t\t  \"type\" : \"integer\",\n\t\t\t  \"default\" : \"0\",\n\t\t\t  \"order\" : \"1\"\n\t\t\t  }\n```\n\n## Example for float type\n\nSample configuration item \"temperature\"\n\n```\n \"temperature\" : {\n\t\t\t  \"description\" : \"Temperature of PLC\",\n\t\t\t  \"displayName\" : \"PLC Temperature\",\n\t\t\t  \"type\" : \"float\",\n\t\t\t  \"default\" : \"0\",\n\t\t\t  \"order\" : \"2\"\n\t\t\t  }\n\n```\n\n## Example for string type\n\nSample configuration item \"asset\"\n\n```\n\"asset\" : {\n\t\t  \"description\" : \"The name of the asset the plugin will produce\",\n\t\t  \"displayName\" : \"Asset Name\",\n\t\t  \"type\" : \"string\",\n\t\t  \"default\" : \"MyAsset\",\n\t\t  \"order\" : \"3\"\n\t\t  }\n```\n\n## Example of password type\n\nSample configuration item \"db_password\"\n\n```\n\"db_password\" : {\n\t\t  \"description\" : \"Password of the database\",\n\t\t  \"displayName\" : \"Database Password\",\n\t\t  \"type\" : \"password\",\n\t\t  \"default\" : \"\",\n\t\t  \"order\" : \"4\"\n\t\t  }\n```\n\n## Example of boolean type\n\nSample configuration item \"apply_scaling\"\n\n```\n\"apply_scaling\": {\n\t\t\t\t\"description\": \"Option to apply scaling\",\n\t\t\t\t\"displayName\": \"Use Scaling\",\n\t\t\t\t\"type\": \"boolean\",\n\t\t\t\t\"default\": \"true\",\n\t\t\t\t\"order\" : \"5\"\n\t\t\t}\n```\n\n## Example of enumeration type\n\nSample configuration item \"authentication\"\n\n```\n\"authentication\": {\n  \"description\": \"Server Authentication\",\n  \"displayName\": \"Authentication\",\n  \"type\": \"enumeration\",\n  \"options\": [\n    \"mandatory\",\n    \"optional\"\n  ],\n  \"default\": \"optional\",\n  \"order\" : 
\"6\"\n  \n}\n```\n\n## Example of list type\n\nSample configuration item \"tags\"\n\n```\n\"tags\" : {\n               \"description\" : \"A set of tag names on which to operate\",\n\t\t\t   \"displayName\" : \"Labels\",\n               \"type\" : \"list\",\n               \"items\" : \"string\",\n               \"default\" : \"[ \\\"speed\\\", \\\"temperature\\\", \\\"voltage\\\" ]\",\n               \"order\" : \"7\"\n               \n          }\n\n```\n\n## Example of kvlist type\n\nSample configuration item \"expressions\"\n\n```\n\"expressions\" : {\n              \"description\" : \"A set of expressions used to evaluate and label data\",\n\t\t\t  \"displayName\" : \"Labels\",\n              \"type\" : \"kvlist\",\n              \"items\" : \"string\",\n              \"default\" : \"{\\\"idle\\\" : \\\"speed == 0\\\"}\",\n              \"order\" : \"8\"\n              \n              }\n```\n\n## Example of object type\n\nSample configuration item \"map\" \n\n```\n\"map\": {\n      \"description\": \"A list of datapoints to read and PLC register definitions\",\n      \"type\": \"list\",\n      \"items\" : \"object\",\n      \"default\": \"[ { \\\"datapoint\\\" : \\\"speed\\\", \\\"register\\\" : \\\"10\\\", \\\"width\\\" : \\\"1\\\", \\\"type\\\" : \\\"integer\\\"} ]\",\n      \"order\" : \"3\",\n      \"displayName\" : \"PLC Map\",\n      \"properties\" : {\n              \"datapoint\" : {\n                      \"description\" : \"The name of the datapoint to create for the map entry\",\n                      \"displayName\" : \"Datapoint\",\n                      \"type\" : \"string\",\n                      \"default\" : \"datapoint\"\n                      },\n              \"register\" : {\n                      \"description\" : \"The register number to read\",\n                      \"displayName\" : \"Register\",\n                      \"type\" : \"integer\",\n                      \"default\" : \"0\"\n                      },\n              
\"width\" : {\n                      \"description\" : \"Number of registers to read\",\n                      \"displayName\" : \"Width\",\n                      \"type\" : \"integer\",\n                      \"maximum\" : \"4\",\n                      \"default\" : \"1\"\n                      },\n              \"type\" : {\n                      \"description\" : \"The data type to read\",\n                      \"displayName\" : \"Data Type\",\n                      \"type\" : \"enumeration\",\n                      \"options\" : [ \"integer\",\"float\", \"boolean\" ],\n                      \"default\" : \"integer\"\n                      }\n              }\n      }\n```\n\n## Supported properties of configuration items in the configuration JSON document\n\n|Property|Description|\n|:-----|:------------|\n|default|The default value for the configuration item. This is always expressed as a string regardless of the type of the configuration item.|\n|deprecated|A boolean flag to indicate that this item is no longer used and will be removed in a future release.|\n|description|A description of the configuration item used in the user interface to give more details of the item. Commonly used as a mouse over help prompt.|\n|displayName|The string to use in the user interface when presenting the configuration item. Generally a more user friendly form of the item name. Item names are referenced within the code.|\n|items|The type of the items in a list or kvlist configuration item.|\n|length|The maximum length of the string value of the item.|\n|listSize|The maximum number of entries allowed in a list or kvlist item.|\n|mandatory|A boolean flag to indicate that this item cannot be left blank.|\n|maximum|The maximum value for a numeric configuration item.|\n|minimum|The minimum value for a numeric configuration item.|\n|options|Only used for enumeration type elements. 
This is a JSON array of strings that contains the options in the enumeration.|\n|order|Used in the user interface to give an indication of how high up in the dialogue to place this item.|\n|group|Used to group related items together. The main use of this is within the GUI, which will turn each group into a tab in the creation and edit screens.|\n|readonly|A boolean property that can be used to include items that cannot be altered by the API.|\n|rule|A validation rule that will be run against the value. This must evaluate to true for the new value to be accepted by the API.|\n|type|The type of the configuration item. The list of types supported is: integer, float, string, password, enumeration, boolean, list, kvlist, JSON, URL, IPV4, IPV6, script, code, X509 certificate and northTask.|\n|validity|An expression used to determine if the configuration item is valid. Used in the UI to gray out one value based on the value of others.|\n|value|The current value of the configuration item. This is not included when defining a set of default configuration in, for example, a plugin.|\n|properties|A set of items that are used in list and kvlist type items to create a list of groups of configuration items.|\n|keyName|A display name to be used for entry and display of the key in the key-value list type, with item being an object.|\n|keyDescription|A description of the key value in the key-value list type, with item being an object.|\n|permissions|An array of user roles that are allowed to update this configuration item. If not given then the configuration item can be updated by any user. If the permissions property is included in a configuration item the array must have at least one entry.|\n\n"
  },
  {
    "path": ".cursor/rules/C/plugins/north.mdc",
    "content": "---\ndescription: C++ North Plugin Architecture.\nglobs: [\"*.cpp\", \"*.h\"]\nalwaysApply: true\nauthor: \"Devki Nandan Ghildiyal\"\n---\n\n## North Plugin\n\n- North plugin extracts data stored in Fledge and sends it to systems outside of Fledge.\n- North plugin can send data to a server, a service in the cloud, or another Fledge instance.\n\n\n## General plugin guidelines\n\n- General guidelines for writing a Fledge plugin are in the '../../../docs/plugin_developers_guide/02_writing_plugins.rst' file\n\n## North plugin Guidelines\n\n- Specific guidelines for writing a north plugin are at '../../../docs/plugin_developers_guide/04_north_plugins.rst'\n\n## Persisting Data\n\n- Persistence feature can be implemented in the plugin to persist state between executions of the plugin.\n\n- Guidelines to implement the persistence feature are at '../../../docs/plugin_developers_guide/02_persisting_data.rst'\n\n## Common support classes\n\n- Information about common support classes used by plugins is at '../../../docs/plugin_developers_guide/035_CPP.rst'\n\n## Mutex and Locking\n\n- Thread Safety: The Fledge north plugin can send data (send()) and receive configuration changes (reconfigure()) simultaneously from different threads\n- Data Consistency: Prevents reading configuration while it's being modified\n- RAII Pattern: std::lock_guard automatically unlocks when going out of scope, so the mutex is released even if an exception is thrown\n\n- The following sample code demonstrates use of mutex and locks when sending data\n\n```\nvoid send(std::vector<Reading *> *readings, std::vector<Reading *>& outReadings)\n{\n    std::lock_guard<std::mutex> guard(m_configMutex);\n    sendData(readings, outReadings);\n    readings->clear();\n}\n\n```\n\n- The following sample code demonstrates use of mutex and locks when doing configuration changes\n\n```\nvoid reconfigure(const std::string& conf)\n{\n    std::lock_guard<std::mutex> guard(m_configMutex);\n    setConfig(conf);\n    handleConfig(m_config);\n}\n\n```\n\n## Implementation details 
of the plugin\n\n- North plugin extracts data stored in Fledge and sends it to external systems\n- Common C++ classes used in Fledge framework are at the following locations '../../../C/common/include' and '../../../C/common/'\n- C++ class to handle reading in Fledge is at '../../../C/common/include/reading.h'\n- C++ class to handle datapoint in Fledge is at '../../../C/common/include/datapoint.h'\n- C++ class to handle logging in Fledge is at '../../../C/common/include/logger.h'\n- C++ plugin must have a 'plugin.cpp' file\n- 'plugin.cpp' file must define the plugin's default configuration and entry points\n- Implementation of the plugin's requirements is kept in a separate header and class implementation file, which is used by the 'plugin.cpp' file\n- Every plugin has 'docs' and 'tests' directories\n\n## Fledge plugin configuration\n\nEvery Fledge plugin has a default configuration represented as JSON.\n\nThe following example demonstrates the minimal configuration for every plugin. The configuration JSON for each plugin must have an element called \"plugin\"\n\n\n```\nconst char *default_config = QUOTE({\n              \"plugin\" : {\n                      \"description\" : \"My example plugin in C++\",\n                      \"type\" : \"string\",\n                      \"default\" : \"MyPlugin\",\n                      \"readonly\" : \"true\"\n                      }\n});\n```\n\n- The constant default_config is a string that contains the JSON configuration document.\n- The QUOTE macro is used to manage the JSON document easily\n- The configuration JSON document will have one element for each configuration item.\n- Fledge plugins support the following types\n\n| Type | Description | \n|:-----|:------------|\n|integer|An integer numeric value. The minimum and maximum properties may be used to control the limits of the values assigned to an integer.|\n|float|A floating point numeric item. 
The minimum and maximum properties may be used to control the limits of the values assigned to a float.|\n|string|An alpha-numeric array of characters that may contain any printable characters. The length property can be used to constrain the maximum length of the string.|\n|password|It is the same as the string type. User interfaces do not show this in plain text.|\n|boolean|A boolean value that can be assigned the values true or false.|\n|enumeration|The item can be assigned one of a fixed set of values. These values are defined in the options property of the item.|\n|list|A list of items, the items can be of type string, integer, float, enumeration or object. The type of the items within the list must all be the same, and this is defined via the items property of the list. A limit on the maximum number of entries allowed in the list can be enforced by use of the listSize property.|\n|kvlist|A key value pair list. The key is always a string value, but the value of the item in the list may be of type string, enumeration, float, integer or object. The type of the values in the kvlist is defined by the items property of the configuration item. A limit on the maximum number of entries allowed in the list can be enforced by use of the listSize property.|\n|object|A complex configuration type with multiple elements that may be used within list and kvlist items only; it is not possible to have object type items outside of a list. 
Object type configuration items have a set of properties defined, each of which is itself a configuration item.|\n\n## Example for integer type\n\nSample configuration item \"register\"\n\n```\n \"register\" : {\n\t\t\t  \"description\" : \"The register number to read\",\n\t\t\t  \"displayName\" : \"Register\",\n\t\t\t  \"type\" : \"integer\",\n\t\t\t  \"default\" : \"0\",\n\t\t\t  \"order\" : \"1\"\n\t\t\t  }\n```\n\n## Example for float type\n\nSample configuration item \"temperature\"\n\n```\n \"temperature\" : {\n\t\t\t  \"description\" : \"Temperature of PLC\",\n\t\t\t  \"displayName\" : \"PLC Temperature\",\n\t\t\t  \"type\" : \"float\",\n\t\t\t  \"default\" : \"0\",\n\t\t\t  \"order\" : \"2\"\n\t\t\t  }\n\n```\n\n## Example for string type\n\nSample configuration item \"asset\"\n\n```\n\"asset\" : {\n\t\t  \"description\" : \"The name of the asset the plugin will produce\",\n\t\t  \"displayName\" : \"Asset Name\",\n\t\t  \"type\" : \"string\",\n\t\t  \"default\" : \"MyAsset\",\n\t\t  \"order\" : \"3\"\n\t\t  }\n```\n\n## Example of password type\n\nSample configuration item \"db_password\"\n\n```\n\"db_password\" : {\n\t\t  \"description\" : \"Password of the database\",\n\t\t  \"displayName\" : \"Database Password\",\n\t\t  \"type\" : \"password\",\n\t\t  \"default\" : \"\",\n\t\t  \"order\" : \"4\"\n\t\t  }\n```\n\n## Example of boolean type\n\nSample configuration item \"apply_scaling\"\n\n```\n\"apply_scaling\": {\n\t\t\t\t\"description\": \"Option to apply scaling\",\n\t\t\t\t\"displayName\": \"Use Scaling\",\n\t\t\t\t\"type\": \"boolean\",\n\t\t\t\t\"default\": \"true\",\n\t\t\t\t\"order\" : \"5\"\n\t\t\t}\n```\n\n## Example of enumeration type\n\nSample configuration item \"authentication\"\n\n```\n\"authentication\": {\n  \"description\": \"Server Authentication\",\n  \"displayName\": \"Authentication\",\n  \"type\": \"enumeration\",\n  \"options\": [\n    \"mandatory\",\n    \"optional\"\n  ],\n  \"default\": \"optional\",\n  \"order\" : 
\"6\"\n  \n}\n```\n\n## Example of list type\n\nSample configuration item \"tags\"\n\n```\n\"tags\" : {\n               \"description\" : \"A set of tag names on which to operate\",\n\t\t\t   \"displayName\" : \"Labels\",\n               \"type\" : \"list\",\n               \"items\" : \"string\",\n               \"default\" : \"[ \\\"speed\\\", \\\"temperature\\\", \\\"voltage\\\" ]\",\n               \"order\" : \"7\"\n               \n          }\n\n```\n\n## Example of kvlist type\n\nSample configuration item \"expressions\"\n\n```\n\"expressions\" : {\n              \"description\" : \"A set of expressions used to evaluate and label data\",\n\t\t\t  \"displayName\" : \"Labels\",\n              \"type\" : \"kvlist\",\n              \"items\" : \"string\",\n              \"default\" : \"{\\\"idle\\\" : \\\"speed == 0\\\"}\",\n              \"order\" : \"8\"\n              \n              }\n```\n\n## Example of object type\n\nSample configuration item \"map\" \n\n```\n\"map\": {\n      \"description\": \"A list of datapoints to read and PLC register definitions\",\n      \"type\": \"list\",\n      \"items\" : \"object\",\n      \"default\": \"[ { \\\"datapoint\\\" : \\\"speed\\\", \\\"register\\\" : \\\"10\\\", \\\"width\\\" : \\\"1\\\", \\\"type\\\" : \\\"integer\\\"} ]\",\n      \"order\" : \"3\",\n      \"displayName\" : \"PLC Map\",\n      \"properties\" : {\n              \"datapoint\" : {\n                      \"description\" : \"The name of the datapoint to create for the map entry\",\n                      \"displayName\" : \"Datapoint\",\n                      \"type\" : \"string\",\n                      \"default\" : \"datapoint\"\n                      },\n              \"register\" : {\n                      \"description\" : \"The register number to read\",\n                      \"displayName\" : \"Register\",\n                      \"type\" : \"integer\",\n                      \"default\" : \"0\"\n                      },\n              
\"width\" : {\n                      \"description\" : \"Number of registers to read\",\n                      \"displayName\" : \"Width\",\n                      \"type\" : \"integer\",\n                      \"maximum\" : \"4\",\n                      \"default\" : \"1\"\n                      },\n              \"type\" : {\n                      \"description\" : \"The data type to read\",\n                      \"displayName\" : \"Data Type\",\n                      \"type\" : \"enumeration\",\n                      \"options\" : [ \"integer\",\"float\", \"boolean\" ],\n                      \"default\" : \"integer\"\n                      }\n              }\n      }\n```\n\n## Supported properties of configuration items in the configuration JSON document\n\n|Property|Description|\n|:-----|:------------|\n|default|The default value for the configuration item. This is always expressed as a string regardless of the type of the configuration item.|\n|deprecated|A boolean flag to indicate that this item is no longer used and will be removed in a future release.|\n|description|A description of the configuration item used in the user interface to give more details of the item. Commonly used as a mouse over help prompt.|\n|displayName|The string to use in the user interface when presenting the configuration item. Generally a more user friendly form of the item name. Item names are referenced within the code.|\n|items|The type of the items in a list or kvlist configuration item.|\n|length|The maximum length of the string value of the item.|\n|listSize|The maximum number of entries allowed in a list or kvlist item.|\n|mandatory|A boolean flag to indicate that this item cannot be left blank.|\n|maximum|The maximum value for a numeric configuration item.|\n|minimum|The minimum value for a numeric configuration item.|\n|options|Only used for enumeration type elements. 
This is a JSON array of strings that contains the options in the enumeration.|\n|order|Used in the user interface to give an indication of how high up in the dialogue to place this item.|\n|group|Used to group related items together. The main use of this is within the GUI, which will turn each group into a tab in the creation and edit screens.|\n|readonly|A boolean property that can be used to include items that cannot be altered by the API.|\n|rule|A validation rule that will be run against the value. This must evaluate to true for the new value to be accepted by the API.|\n|type|The type of the configuration item. The list of types supported is: integer, float, string, password, enumeration, boolean, list, kvlist, JSON, URL, IPV4, IPV6, script, code, X509 certificate and northTask.|\n|validity|An expression used to determine if the configuration item is valid. Used in the UI to gray out one value based on the value of others.|\n|value|The current value of the configuration item. This is not included when defining a set of default configuration in, for example, a plugin.|\n|properties|A set of items that are used in list and kvlist type items to create a list of groups of configuration items.|\n|keyName|A display name to be used for entry and display of the key in the key-value list type, with item being an object.|\n|keyDescription|A description of the key value in the key-value list type, with item being an object.|\n|permissions|An array of user roles that are allowed to update this configuration item. If not given then the configuration item can be updated by any user. If the permissions property is included in a configuration item the array must have at least one entry.|\n\n"
  },
  {
    "path": ".cursor/rules/C/plugins/south.mdc",
    "content": "---\ndescription: C++ South Plugin Architecture.\nglobs: [\"*.cpp\", \"*.h\"]\nalwaysApply: true\nauthor: \"Devki Nandan Ghildiyal\"\n---\n\n## South Plugin\n\n- South plugins are of two types: poll plugins and async plugins\n\n- A poll type plugin calls the plugin_poll method at the defined interval to collect data from a sensor.\n\n- An async type plugin uses an incoming event from the device or a callback mechanism\n\n- The SP_ASYNC flag is used to support the async feature\n\n- Plugin interface version 1.0.0 is used to fetch a single reading\n\n- Plugin interface version 2.0.0 is used to fetch multiple readings\n\n- A south plugin can also be used to exercise control (Set Point Control) on the underlying device to which it is connected\n\n- The SP_CONTROL flag is used to support the Set Point Control feature\n\n\n## General plugin guidelines\n\n- General guidelines for writing a Fledge plugin are in the '../../../docs/plugin_developers_guide/02_writing_plugins.rst' file\n\n## South plugin Guidelines\n\n- Specific guidelines for writing a south plugin are in '../../../docs/plugin_developers_guide/03_south_C_plugins.rst'\n\n## Persisting Data\n\n- A persistence feature can be implemented in the plugin to persist state between executions of the plugin.\n\n- The SP_PERSIST_DATA flag is used to support the persist data feature\n\n- Guidelines for implementing the persistence feature are in '../../../docs/plugin_developers_guide/02_persisting_data.rst'\n\n## Common support classes\n\n- Information about the common support classes used by plugins is in '../../../docs/plugin_developers_guide/035_CPP.rst'\n\n## Mutex and Locking\n\n- Thread Safety: A Fledge south plugin can receive data (ingest()) and configuration changes (reconfigure()) simultaneously from different threads\n- Data Consistency: Prevents reading the configuration while it is being modified\n- RAII Pattern: std::lock_guard automatically unlocks when going out of scope, preventing deadlocks\n\n- The following sample code demonstrates the use of a mutex and locks when doing 
ingestion\n\n```\nvoid ingest(std::vector<Reading *> *readings, std::vector<Reading *>& outReadings)\n{\n    std::lock_guard<std::mutex> guard(m_configMutex);\n    IngestData(readings, outReadings);\n    readings->clear();\n}\n\n```\n\n- The following sample code demonstrates the use of a mutex and locks when making configuration changes\n\n```\nvoid reconfigure(const std::string& conf)\n{\n    std::lock_guard<std::mutex> guard(m_configMutex);\n    setConfig(conf);\n    handleConfig(m_config);\n}\n\n```\n\n## Implementation details of plugin\n\n- A south plugin fetches data from sensors or external sources and stores it in Fledge\n- Common C++ classes used in the Fledge framework are at the following locations: '../../../C/common/include' and '../../../C/common/'\n- The C++ class to handle a reading in Fledge is at '../../../C/common/include/reading.h'\n- The C++ class to handle a datapoint in Fledge is at '../../../C/common/include/datapoint.h'\n- The C++ class to handle logging in Fledge is at '../../../C/common/include/logger.h'\n- A C++ plugin must have a 'plugin.cpp' file\n- The 'plugin.cpp' file must contain the plugin configuration and the plugin entry points\n- The implementation of the plugin's requirements is kept in a separate header and class implementation file, which is used by the 'plugin.cpp' file\n- Every plugin has 'docs' and 'tests' directories\n- 'plugin.cpp' must define the default configuration of the plugin\n\n## Fledge plugin configuration\n\nEvery Fledge plugin has a default configuration represented by a JSON document.\n\nThe following example demonstrates the minimal configuration required for every plugin. 
The configuration JSON for each plugin must have an element called \"plugin\"\n\n\n```\nconst char *default_config = QUOTE({\n              \"plugin\" : {\n                      \"description\" : \"My example plugin in C++\",\n                      \"type\" : \"string\",\n                      \"default\" : \"MyPlugin\",\n                      \"readonly\" : \"true\"\n                      }\n});\n```\n\n- The constant default_config is a string that contains the JSON configuration document.\n- The QUOTE macro is used to manage the JSON document easily\n- The configuration JSON document will have one element for each configuration item.\n- Fledge plugins support the following types\n\n| Type | Description | \n|:-----|:------------|\n|integer|An integer numeric value. The minimum and maximum properties may be used to control the limits of the values assigned to an integer.|\n|float|A floating point numeric item. The minimum and maximum properties may be used to control the limits of the values assigned to a float.|\n|string|An alpha-numeric array of characters that may contain any printable characters. The length property can be used to constrain the maximum length of the string.|\n|password|It is the same as the string type, but user interfaces do not show its value in plain text.|\n|boolean|A boolean value that can be assigned the values true or false.|\n|enumeration|The item can be assigned one of a fixed set of values. These values are defined in the options property of the item.|\n|list|A list of items, the items can be of type string, integer, float, enumeration or object. The type of the items within the list must all be the same, and this is defined via the items property of the list. A limit on the maximum number of entries allowed in the list can be enforced by use of the listSize property.|\n|kvlist|A key value pair list. The key is always a string value but the value of the item in the list may be of type string, enumeration, float, integer or object. 
The type of the values in the kvlist is defined by the items property of the configuration item. A limit on the maximum number of entries allowed in the list can be enforced by use of the listSize property.|\n|object|A complex configuration type with multiple elements that may be used within list and kvlist items only, it is not possible to have object type items outside of a list. Object type configuration items have a set of properties defined, each of which is itself a configuration item.|\n\n## Example for integer type\n\nSample configuration item \"register\"\n\n```\n \"register\" : {\n\t\t\t  \"description\" : \"The register number to read\",\n\t\t\t  \"displayName\" : \"Register\",\n\t\t\t  \"type\" : \"integer\",\n\t\t\t  \"default\" : \"0\",\n\t\t\t  \"order\" : \"1\"\n\t\t\t  }\n```\n\n## Example for float type\n\nSample configuration item \"temperature\"\n\n```\n \"temperature\" : {\n\t\t\t  \"description\" : \"Temperature of the PLC\",\n\t\t\t  \"displayName\" : \"PLC Temperature\",\n\t\t\t  \"type\" : \"float\",\n\t\t\t  \"default\" : \"0\",\n\t\t\t   \"order\" : \"2\"\n\t\t\t  }\n\n```\n\n## Example for string type\n\nSample configuration item \"asset\"\n\n```\n\"asset\" : {\n\t\t  \"description\" : \"The name of the asset the plugin will produce\",\n\t\t  \"displayName\" : \"Asset Name\",\n\t\t  \"type\" : \"string\",\n\t\t  \"default\" : \"MyAsset\",\n\t\t   \"order\" : \"3\"\n\t\t  }\n```\n\n## Example of password type\n\nSample configuration item \"db_password\"\n\n```\n\"db_password\" : {\n\t\t  \"description\" : \"Password of the database\",\n\t\t  \"displayName\" : \"Database Password\",\n\t\t  \"type\" : \"password\",\n\t\t  \"default\" : \"\",\n\t\t   \"order\" : \"4\"\n\t\t  }\n```\n\n## Example of boolean type\n\nSample configuration item \"apply_scaling\"\n\n```\n\"apply_scaling\": {\n\t\t\t\t\"description\": \"Option to apply scaling\",\n\t\t\t\t\"displayName\": \"Use Scaling\",\n\t\t\t\t\"type\": \"boolean\",\n\t\t\t\t\"default\": 
\"true\",\n\t\t\t\t \"order\" : \"5\"\n\t\t\t}\n```\n\n## Example of enumeration type\n\nSample configuration item \"authentication\"\n\n```\n\"authentication\": {\n  \"description\": \"Server Authentication\",\n  \"displayName\": \"Authentication\",\n  \"type\": \"enumeration\",\n  \"options\": [\n    \"mandatory\",\n    \"optional\"\n  ],\n  \"default\": \"optional\",\n   \"order\" : \"6\"\n  \n}\n```\n\n## Example of list type\n\nSample configuration item \"tags\"\n\n```\n\"tags\" : {\n               \"description\" : \"A set of tag names on which to operate\",\n\t\t\t   \"displayName\" : \"Labels\",\n               \"type\" : \"list\",\n               \"items\" : \"string\",\n               \"default\" : \"[ \\\"speed\\\", \\\"temperature\\\", \\\"voltage\\\" ]\",\n               \"order\" : \"7\"\n               \n          }\n\n```\n\n## Example of kvlist type\n\nSample configuration item \"expressions\"\n\n```\n\"expressions\" : {\n              \"description\" : \"A set of expressions used to evaluate and label data\",\n\t\t\t  \"displayName\" : \"Labels\",\n              \"type\" : \"kvlist\",\n              \"items\" : \"string\",\n              \"default\" : \"{\\\"idle\\\" : \\\"speed == 0\\\"}\",\n              \"order\" : \"8\"\n              \n              }\n```\n\n## Example of object type\n\nSample configuration item \"map\" \n\n```\n\"map\": {\n      \"description\": \"A list of datapoints to read and PLC register definitions\",\n      \"type\": \"list\",\n      \"items\" : \"object\",\n      \"default\": \"[ { \\\"datapoint\\\" : \\\"speed\\\", \\\"register\\\" : \\\"10\\\", \\\"width\\\" : \\\"1\\\", \\\"type\\\" : \\\"integer\\\"} ]\",\n      \"order\" : \"3\",\n      \"displayName\" : \"PLC Map\",\n      \"properties\" : {\n              \"datapoint\" : {\n                      \"description\" : \"The name of the datapoint to create for the map entry\",\n                      \"displayName\" : \"Datapoint\",\n                      \"type\" : 
\"string\",\n                      \"default\" : \"datapoint\"\n                      },\n              \"register\" : {\n                      \"description\" : \"The register number to read\",\n                      \"displayName\" : \"Register\",\n                      \"type\" : \"integer\",\n                      \"default\" : \"0\"\n                      },\n              \"width\" : {\n                      \"description\" : \"Number of registers to read\",\n                      \"displayName\" : \"Width\",\n                      \"type\" : \"integer\",\n                      \"maximum\" : \"4\",\n                      \"default\" : \"1\"\n                      },\n              \"type\" : {\n                      \"description\" : \"The data type to read\",\n                      \"displayName\" : \"Data Type\",\n                      \"type\" : \"enumeration\",\n                      \"options\" : [ \"integer\",\"float\", \"boolean\" ],\n                      \"default\" : \"integer\"\n                      }\n              }\n      }\n```\n\n## Supported Properties for configuration items in the configuration JSON document\n\n|Property|Description|\n|:-----|:------------|\n|default|The default value for the configuration item. This is always expressed as a string regardless of the type of the configuration item.|\n|deprecated|A boolean flag to indicate that this item is no longer used and will be removed in a future release.|\n|description|A description of the configuration item used in the user interface to give more details of the item. Commonly used as a mouse over help prompt.|\n|displayName|The string to use in the user interface when presenting the configuration item. Generally a more user friendly form of the item name. 
Item names are referenced within the code.|\n|items|The type of the items in a list or kvlist configuration item.|\n|length|The maximum length of the string value of the item.|\n|listSize|The maximum number of entries allowed in a list or kvlist item.|\n|mandatory|A boolean flag to indicate that this item can not be left blank.|\n|maximum|The maximum value for a numeric configuration item.|\n|minimum|The minimum value for a numeric configuration item.|\n|options|Only used for enumeration type elements. This is a JSON array of strings that contains the options in the enumeration.|\n|order|Used in the user interface to give an indication of how high up in the dialogue to place this item.|\n|group|Used to group related items together. The main use of this is within the GUI which will turn each group into a tab in the creation and edit screens.|\n|readonly|A boolean property that can be used to include items that can not be altered by the API.|\n|rule|A validation rule that will be run against the value. This must evaluate to true for the new value to be accepted by the API.|\n|type|The type of the configuration item. The list of types supported are: integer, float, string, password, enumeration, boolean, list, kvlist, JSON, URL, IPV4, IPV6, script, code, X509 certificate and northTask.|\n|validity|An expression used to determine if the configuration item is valid. Used in the UI to gray out one value based on the value of others.|\n|value|The current value of the configuration item. 
This is not included when defining a set of default configuration in, for example, a plugin.|\n|properties|A set of items that are used in list and kvlist type items to create a list of groups of configuration items.|\n|keyName|A display name to be used for entry and display of key in the key-value list type, with item being an object.|\n|keyDescription|A description of key value in the key-value list type, with item being an object.|\n|permissions|An array of user roles that are allowed to update this configuration item. If not given then the configuration item can be updated by any user. If the permissions property is included in a configuration item the array must have at least one entry.|\n\n"
  },
  {
    "path": ".cursor/rules/README.md",
    "content": "# 🎯 How to Use Cursor Rules with AI Prompts\n\nThis guide explains how to effectively use the Fledge Cursor rules for C++ and Python development and documentation in your AI prompts and development workflow.\n\n## 📁 Directory Structure\n\nRules are organized for C++ and Python development and documentation:\n\n```\n.cursor/rules/\n├── C\n│   ├── core.mdc          # Core C++ standards + platform requirements\n│   └── plugins\n│       ├── filter.mdc    # C++ filter plugin rules\n│       ├── north.mdc     # C++ north plugin rules\n│       └── south.mdc     # C++ south plugin rules\n├── README.md             # This usage guide\n├── python/               # Python-specific rules (Python 3.8.10-3.12, Ubuntu LTS 20.04+, Raspberry Pi)\n│   ├── core.mdc          # Core Python standards + platform requirements\n│   ├── api.mdc           # REST API + web framework dependencies\n│   ├── config.mdc        # Configuration management + validation deps\n│   └── quality.mdc       # Dependencies, logging, performance + requirements.txt\n├── tests/                # Testing-specific rules\n│   └── python/           # Python testing rules\n│       ├── unit.mdc      # Unit testing rules - pytest, coverage, best practices\n│       └── api.mdc       # API integration testing rules - conftest fixtures, http.client patterns\n└── docs.mdc              # Documentation guidelines\n```\n\n## 📋 Available Rule Files\n\n| Rule File | Purpose | Applies To |\n|-----------|---------|------------|\n| `@C/core` | Core C++ standards | `*.h`, `*.cpp` |\n| `@C/plugins/south` | C++ south plugin rules | `*.h`, `*.cpp` |\n| `@C/plugins/north` | C++ north plugin rules | `*.h`, `*.cpp` |\n| `@C/plugins/filter` | C++ filter plugin rules | `*.h`, `*.cpp` |\n| `@python/core` | Core Python standards, naming, imports | `*.py`, `python/**/*` |\n| `@python/api` | REST APIs, routes, middleware | API files, routes.py, web middleware |\n| `@python/config` | Configuration system, data formats | Config files, configuration modules |\n| 
`@python/quality` | Dependencies, logging, performance | Requirements files |\n| `@tests/python/unit` | Unit testing with pytest | Unit test files, test configuration |\n| `@tests/python/api` | API integration testing with http.client | API integration test files, conftest.py |\n| `@docs` | Documentation writing | `docs/**/*`, `*.rst` |\n\n## 🏗️ Shared Platform & Dependencies\n\nAll Python rules include consistent platform and dependency information:\n\n### **Platform Requirements** (Built into all Python rules)\n- **C++ Standard**: C++11\n- **Python Versions**: 3.8.10 - 3.12 (inclusive)\n- **Ubuntu**: LTS versions, 20.04 onwards (x86_64 & aarch64)\n- **Raspberry Pi OS**: Bullseye and Bookworm (aarch64 & armv7l)\n\n### **Dependencies Management** (Referenced in all Python rules)\n- **[python/requirements.txt](python/requirements.txt)** - Runtime dependencies\n- **[python/requirements-dev.txt](python/requirements-dev.txt)** - Development dependencies  \n- **[python/requirements-test.txt](python/requirements-test.txt)** - Testing dependencies\n\n### **Automatic Context** (No need to repeat in prompts)\nWhen you use any `@python/*` rule, the AI automatically knows:\n```bash\n# Instead of writing this every time:\n\"Create a Python function that works on Python 3.8.10-3.12, Ubuntu LTS 20.04+, Raspberry Pi, uses requirements.txt for dependencies...\"\n\n# You can simply write:\n@python/core \"Create a Python function\"\n# The AI already knows the platform and dependency constraints!\n```\n\n## 🔄 Automatic Rule Application\n\nCursor automatically applies rules based on the files you're working with:\n\n```yaml\n# Example: Working on Python files automatically applies python/core rules\npython/fledge/services/core/server.py → @python/core rules active\n\n# Working on API files applies both core and API rules  \npython/fledge/services/core/api/auth.py → @python/core + @python/api rules active\n\n# Documentation files apply docs rules\ndocs/quick_start/installing.rst → 
@docs rules active\n```\n\n## 🎯 Explicit Rule References in Prompts\n\n### Direct Rule Invocation\n```\n@python/core Can you help me write a function that follows Fledge Python standards?\n\n@python/api I need to create a new REST endpoint for device management\n\n@docs Help me write documentation for this new feature\n```\n\n### Multiple Rule References\n```\n@python/core @python/quality Help me refactor this code with proper error handling\n\n@python/api @python/config Create an API endpoint for configuration management\n\n@docs @python/api Document this REST API following both documentation and API standards\n\n@python/core @tests/python/unit Create a service class with comprehensive unit tests\n\n@tests/python/api @python/api Create API integration tests for new REST endpoints\n```\n\n## 💡 Context-Aware Prompts\n\n### When Working on Python Files\n```\n# Cursor automatically knows to apply Python rules\n\"Create a new service class that handles sensor data processing\"\n\n# The AI will automatically follow:\n- snake_case naming conventions\n- Type hints and docstrings  \n- FLCoreLogger usage\n- Async/await patterns\n- Error handling standards\n- Python 3.8.10-3.12 compatibility\n```\n\n### When Working on Documentation\n```\n# In docs/ directory, rules automatically apply\n\"Document this new plugin API\"\n\n# The AI will automatically:\n- Use reStructuredText format\n- Follow Sphinx conventions\n- Avoid \"Fledge\" in headings where possible\n- Include proper cross-references\n- Use correct heading hierarchy\n```\n\n## 🛠️ Specific Rule-Based Requests\n\n### Configuration Management\n```\nUsing @python/config rules, create a configuration category for my new plugin with:\n- String, integer, and boolean parameters\n- Proper validation\n- Default values wrapped in quotes\n- Reserved category name checking\n```\n\n### API Development\n```\nFollowing @python/api rules, create a REST endpoint that:\n- Handles role-based access through middleware\n- Returns camelCase 
JSON responses\n- Includes proper error handling\n- Checks for route conflicts\n- Uses FLCoreLogger for logging\n```\n\n### Unit Testing\n```\nUsing @tests/python/unit rules, create unit tests that:\n- Use pytest framework\n- Include proper mocking with pytest-mock\n- Test both success and failure cases\n- Follow the test file naming conventions\n- Include code coverage setup\n```\n\n### API integration Testing\n```\nUsing @tests/python/api rules, create API integration tests that:\n- Use http.client library exclusively (no requests)\n- Leverage conftest.py fixtures like reset_and_start_fledge\n- Test API endpoints with proper authentication\n- Use fledge_url and storage_plugin fixtures\n- Follow system test organization patterns\n```\n\n### Documentation\n```\nFollowing @docs rules, create documentation that:\n- Uses reStructuredText format\n- Includes proper Sphinx directives\n- Avoids excessive \"Fledge\" branding\n- Has correct heading hierarchy\n- Includes cross-references to related docs\n```\n\n## 🔀 Advanced Rule Usage\n\n### API Documentation\n```\nUsing @docs rules, create documentation for this Python API (@python/api) \nthat includes proper Sphinx directives and avoids excessive Fledge branding.\n```\n\n### Complete Feature Development\n```\nI'm creating a new Fledge service that includes:\n- Python backend (@python/core @python/api)\n- Configuration management (@python/config)  \n- Unit testing (@tests/python/unit)\n- API integration testing (@tests/python/api)\n- Complete documentation (@docs)\n```\n\n## 🔍 Rule-Aware Code Reviews\n\n```\nReview this code against @python/core and @python/quality rules:\n- Check naming conventions (snake_case vs camelCase)\n- Verify proper logging usage (FLCoreLogger)\n- Ensure type hints are present\n- Validate error handling patterns\n- Check Python version compatibility\n\nReview this test code against @tests/python/unit rules:\n- Validate pytest usage and fixture patterns\n- Check mocking strategies and test 
isolation\n- Ensure proper test organization and naming\n- Verify code coverage approach\n```\n\n## 🚀 Platform-Specific Development\n\n```\n# Old way (verbose, repetitive):\nUsing @python/core rules, help me optimize this code for:\n- Raspberry Pi ARM architecture (aarch64, armv7l)\n- Python 3.8.10-3.12 compatibility\n- Edge device memory constraints\n- Ubuntu LTS 20.04+ deployment\n\n# New way (automatic platform context):\n@python/core Optimize this code for edge device performance\n\n# The AI automatically knows:\n# - Python 3.8.10-3.12 compatibility\n# - Ubuntu LTS 20.04+ (x86_64 & aarch64)\n# - Raspberry Pi OS (aarch64 & armv7l)\n# - Edge device memory constraints\n# - Requirements.txt dependency management\n```\n\n## 🐛 Troubleshooting with Rules\n\n```\nThis code isn't following @python/api middleware patterns. \nHelp me fix the authentication and role validation.\n\nThis documentation doesn't follow @docs anti-branding guidelines.\nHelp me remove excessive \"Fledge\" references while maintaining clarity.\n```\n\n## 🔧 Pro Tips for Using Rules Effectively\n\n### 1. Let Rules Work Automatically\n- Just open files in the appropriate directories\n- Cursor applies rules based on file patterns (globs)\n- No need to explicitly mention rules for basic tasks\n- Rules are automatically in context\n\n### 2. Use Rule Names for Specific Guidance\n- When you need specific standards applied\n- When working across multiple technologies\n- When you want to ensure compliance with particular guidelines\n- When combining multiple rule sets\n\n### 3. Combine Rules for Complex Tasks\n- Use multiple @ references for cross-cutting concerns\n- Leverage rule interactions (e.g., API + Config + Testing)\n- Apply domain-specific and quality rules together\n\n### 4. Rule-Based Learning\n```\nExplain the difference between @python/core naming conventions \nand @python/api response formatting.\n\nHow do @python/config validation rules work with @python/api endpoints?\n```\n\n### 5. 
Validation Against Rules\n```\nDoes this code follow @python/quality standards for:\n- Dependencies management\n- Logging practices  \n- Performance optimization\n\nDoes this testing code follow @tests/python/unit standards for:\n- pytest usage and fixtures\n- Mocking patterns\n- Test coverage\n- Unit testing best practices\n\nDoes this API test follow @tests/python/api standards for:\n- http.client usage\n- conftest.py fixture usage\n- API testing patterns\n\nValidate this documentation against @docs standards for:\n- reStructuredText formatting\n- Sphinx directives\n- Cross-references\n- Branding guidelines\n```\n\n## 📖 Rule-Specific Examples\n\n### Python Core (@python/core)\n```\nCreate a device manager class that:\n- Uses snake_case naming\n- Includes proper docstrings\n- Has type hints for all methods\n- Uses FLCoreLogger for logging\n- Follows the server.py architectural pattern\n```\n\n### API Development (@python/api)\n```\nCreate a REST endpoint for asset management that:\n- Uses role-based middleware validation\n- Returns camelCase JSON responses\n- Handles route conflicts\n- Includes proper error handling\n- Uses async/await patterns\n```\n\n### Configuration (@python/config)\n```\nDesign a configuration category that:\n- Includes string, integer, boolean, and JSON types\n- Has proper validation rules\n- Uses quoted default values\n- Avoids reserved category names\n- Includes optional validation constraints\n```\n\n### Documentation (@docs)\n```\nWrite API documentation that:\n- Uses reStructuredText format\n- Includes proper Sphinx directives\n- Avoids excessive \"Fledge\" branding\n- Has correct heading hierarchy\n- Includes cross-references to related docs\n```\n\n### Unit Testing (@tests/python/unit)\n```\nCreate comprehensive unit tests that:\n- Use pytest with proper fixtures\n- Mock external dependencies appropriately\n- Achieve meaningful test coverage\n- Follow unit testing best practices\n- Test both success and failure scenarios\n```\n\n### 
API integration Testing (@tests/python/api)\n```\nCreate API integration tests that:\n- Use http.client library exclusively\n- Leverage conftest.py fixtures for environment setup\n- Test API endpoints with authentication flows\n- Use reset_and_start_fledge for clean test environments\n- Follow system test organization patterns\n```\n\n### Dependencies & Quality (@python/quality)\n```\nManage dependencies and code quality:\n- Use requirements.txt for dependency management\n- Follow FLCoreLogger patterns for logging\n- Optimize for edge device performance\n- Ensure Python version compatibility\n- Document dependency constraints\n```\n\n## 🎯 Best Practices Summary\n\n1. **Trust Automatic Application**: Let Cursor apply rules based on file context\n2. **Use @ References Explicitly**: When you need specific rule compliance\n3. **Combine Rules Strategically**: For Python development with documentation\n4. **Validate Against Rules**: Use rules for code review and quality checks\n5. **Focus on Core Technologies**: Leverage Python and documentation rules together\n\nThe rules work best when you let them guide development naturally - they'll automatically apply standards and catch issues as you code! "
  },
  {
    "path": ".cursor/rules/docs.mdc",
    "content": "---\ndescription: \"Cursor AI rules for Fledge documentation - covers reStructuredText, Sphinx, file naming, and content guidelines\"\nglobs: \n  - \"docs/**/*\"\n  - \"*.rst\"\nalwaysApply: false\nauthor: \"Ashish Jabble\"\n---\n\n# Documentation Directory Cursor Rules\n\n## Overview\nThis file contains specific rules for working with documentation in the `/docs` directory of the Fledge project. These rules supplement the main project cursor rules and focus on documentation-specific patterns and conventions.\n\n## Documentation Framework\n- **Format**: reStructuredText (.rst) format exclusively\n- **Build System**: Sphinx documentation generator\n- **Theme**: sphinx_rtd_theme (Read the Docs theme)\n- **Configuration**: All settings in `docs/conf.py`\n\n## 🚫 \"Fledge\" Branding Guidelines - MINIMIZE USAGE\n\n**CRITICAL RULE: Avoid \"Fledge\" in naming wherever possible**\n\n### What to Avoid:\n- ❌ Image files: `fledge_architecture.png`\n- ❌ Directory names: `fledge_authentication/`\n- ❌ File names: `fledge_configuration.rst`\n- ❌ Headings: \"Fledge Authentication Setup\"\n- ❌ Repetitive content: \"Fledge does this... 
Fledge provides that...\"\n\n### What to Use Instead:\n- ✅ Image files: `architecture_overview.png`, `auth_flow.png`\n- ✅ Directory names: `authentication/`, `configuration/`, `monitoring/`\n- ✅ File names: `authentication.rst`, `configuration.rst`\n- ✅ Headings: \"Authentication Setup\", \"Configuration Guide\"\n- ✅ Content alternatives: \"the platform\", \"the system\", \"this feature\"\n\n### When \"Fledge\" IS Appropriate:\n- Main title pages and introductory content\n- External references and comparisons\n- Installation package names\n- API endpoint references where it's part of the actual name\n\n## File Organization\n\n### Directory Structure\n- `/docs/` - Main documentation root\n- `/docs/_static/` - Static assets (CSS, images that aren't content)\n- `/docs/_templates/` - Custom Sphinx templates\n- `/docs/images/` - Documentation images and screenshots\n- `/docs/quick_start/` - Getting started guides\n- `/docs/plugin_developers_guide/` - Plugin development documentation\n- `/docs/rest_api_guide/` - REST API documentation\n- `/docs/building_fledge/` - Build and installation guides\n- `/docs/monitoring/` - System monitoring documentation\n- `/docs/fledge-rule-DataAvailability/` - Built-in Data Availability rule plugin docs\n- `/docs/fledge-rule-Threshold/` - Built-in Threshold rule plugin docs\n- `/docs/fledge-north-OMF.rst` - Built-in OMF north plugin documentation\n- `/docs/keywords/` - Plugin categorization keywords and mappings\n- `/docs/fledge_plugins.rst` - Master plugin list with conditional hyperlinks\n\n**DIRECTORY NAMING GUIDELINES:**\n- **AVOID \"fledge\" in new directory names** - use functional descriptions\n- Use topic-based naming: `authentication/` instead of `fledge_authentication/`\n- Keep directory names lowercase with underscores\n- Focus on the purpose/feature rather than product branding\n\n### File Naming Conventions\n- Use lowercase with underscores: `file_name.rst`\n- Index files: `index.rst` for each directory\n- Numbered files for 
version/download info: `91_version_history.rst`, `92_downloads.rst`\n- Descriptive names reflecting content: `securing.rst`, `troubleshooting_pi_server_integration.rst`\n- **AVOID \"fledge\" in filenames** - use functional descriptions: `authentication.rst` instead of `fledge_authentication.rst`\n- Focus on the topic/feature being documented\n\n## reStructuredText Style Guidelines\n\n### Heading Hierarchy\nFollow this exact hierarchy for consistency (adornment lines must be at least as long as the title text):\n```rst\n************************\nDocument Title (Level 1)\n************************\n\n=======================\nMajor Section (Level 2)\n=======================\n\nMinor Section (Level 3)\n-----------------------\n\nSubsection (Level 4)\n^^^^^^^^^^^^^^^^^^^^\n\nSub-subsection (Level 5)\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n```\n\n**IMPORTANT NAMING CONVENTIONS:**\n- **AVOID \"Fledge\" in headings** unless absolutely necessary for context\n- Use descriptive, functional titles: \"Authentication Configuration\" instead of \"Fledge Authentication Configuration\"\n- Focus on the feature/functionality rather than the product name\n- Keep headings concise and user-focused\n\n### Document Structure\n1. **Title**: Use level 1 heading with asterisks above and below\n2. **Introduction**: Brief overview of the document's purpose\n3. **Table of Contents**: Use `.. toctree::` for sections with multiple pages\n4. **Main Content**: Organized with appropriate heading levels\n5. **Cross-references**: Link to related documentation\n\n### Code Blocks\n```rst\n.. code-block:: language\n   :linenos:\n   :emphasize-lines: 2,3\n\n   code here\n```\n\n### Common Directives\n- `.. note::` - Important information\n- `.. warning::` - Critical warnings\n- `.. code-block::` - Code examples\n- `.. image::` - Images with proper alt text\n- `.. 
toctree::` - Table of contents trees\n\n### Images and Media\n- Store images in `/docs/images/` directory\n- Use descriptive filenames: `architecture_overview.png` (avoid \"fledge_\" prefix)\n- **AVOID \"Fledge\" in image filenames** - use descriptive terms like `architecture_overview.png` instead of `fledge_architecture_overview.png`\n- Always include alt text: `.. image:: images/filename.png :alt: Description`\n- Optimize images for web (reasonable file sizes)\n- Use subdirectories in images/ for organization by topic\n- Keep image names concise and topic-focused\n\n### Cross-References and Links\n- Internal references: `:doc:`filename`` or `:ref:`label``\n- External links: `Link text <URL>`_\n- API references: Follow existing patterns for REST API documentation\n\n## Content Guidelines\n\n### Writing Style\n- Use clear, concise language suitable for technical documentation\n- Write in active voice when possible\n- Use present tense for current functionality\n- Include step-by-step instructions for procedures\n- Provide context and examples\n- **MINIMIZE use of \"Fledge\" in content** - focus on functionality and features\n- Use \"the platform\", \"the system\", or specific feature names instead of repetitive \"Fledge\" references\n\n### Code Examples\n- Include complete, runnable examples when possible\n- Show both input and expected output\n- Use realistic data that represents actual Fledge usage\n- Comment code examples appropriately\n- Test code examples to ensure they work\n\n### API Documentation\n- Document all public APIs, parameters, and return values\n- Include HTTP status codes for REST APIs\n- Provide curl examples for API endpoints\n- Show JSON request/response examples\n- Document error conditions and responses\n\n### Configuration Documentation\n- Show complete configuration examples\n- Explain all configuration parameters\n- Provide default values where applicable\n- Include configuration validation rules\n- Link to related configuration 
sections\n\n## Sphinx Configuration\n\n### Extensions\n- Keep extensions minimal and focused\n- Document any new extensions added\n- Ensure extensions are available in build environment\n\n### Build Process\n- Use `make html` for local builds\n- Check for build warnings and errors\n- Test documentation locally before committing\n- Verify all links work correctly\n\n### Documentation Generation Scripts\n- **Location**: `/scripts/` directory contains documentation generation utilities\n- **Plugin Discovery**: Scripts automatically scan plugin repositories for `docs/` directories\n- **Content Aggregation**: Pulls documentation from external plugin repos during build\n- **Branch Management**: Handles DOCBRANCH parameter for version-specific documentation\n- **Integration**: Merges external plugin docs with core Fledge documentation seamlessly\n\n### Keywords and Categorization System\n- **Keywords Directory**: `/docs/keywords/` contains category definition files\n- **Category Mapping**: Each keyword file defines a plugin category (e.g., `Augmentation`, `Cleansing`, `Cloud`)\n- **Plugin Keywords**: Plugin repositories contain keyword files that reference category keywords\n- **Automatic Categorization**: Build scripts match plugin keywords with category definitions\n- **Dynamic Organization**: Plugin list automatically organized into categorical sections\n- **Conditional Display**: Categories only appear if plugins with matching keywords exist\n\n### Version Management\n- Version information managed in `conf.py`\n- DOCBRANCH parameter for plugin documentation\n- Update version info during releases\n\n### DOCBRANCH System\n- **Purpose**: Generates documentation from both core Fledge and external plugin repositories\n- **Core Documentation**: Always included from the main Fledge repository\n- **Plugin Documentation**: Pulled from individual plugin repositories if they have a `docs/` directory\n- **Branch Control**: Uses `DOCBRANCH='develop'` parameter (set to actual 
version during releases)\n- **Auto-Discovery**: Only includes plugins that have documentation - ignores repos without `docs/` directory\n- **Generation Scripts**: Scripts in the `/scripts/` directory handle the plugin documentation aggregation\n- **Build Command**: `subprocess.run([\"make generated DOCBRANCH='develop'\"], shell=True, check=True)` in `conf.py`\n\n## Plugin Documentation\n\n### Plugin Repository Documentation\n- **External Plugins**: Each plugin repository can have its own `docs/` directory\n- **Auto-Discovery**: Build system automatically includes plugin docs if `docs/` directory exists\n- **Repository Requirement**: Plugin repos without `docs/` directory are ignored during documentation generation\n- **Branch Synchronization**: Uses same DOCBRANCH parameter as core documentation\n- **Integration**: Plugin docs are seamlessly integrated into the main documentation site\n\n### Built-in Plugins (In Core Repository)\nThe following plugins have documentation included directly in the core Fledge repository:\n- **`fledge-rule-DataAvailability/`** - Data availability rule plugin documentation\n- **`fledge-rule-Threshold/`** - Threshold rule plugin documentation\n- **`fledge-north-OMF.rst`** - OMF north plugin documentation\n\n### Plugin Documentation Standards\n- Each plugin should have its own documentation section\n- Follow the pattern established in existing plugin docs\n- Include installation, configuration, and usage instructions\n- Provide troubleshooting sections\n- Use the same reStructuredText format and style guidelines\n\n### Auto-Generated Content\n- Plugin lists and references may be auto-generated by scripts in `/scripts/` directory\n- Don't manually edit generated content\n- Use the build system's generation capabilities\n- Generated content includes plugin discovery from external repositories\n\n### Plugin Listing System (`fledge_plugins.rst`)\n- **Master List**: All plugins are listed with name and description in `fledge_plugins.rst`\n- 
**Smart Hyperlinking**: \n  - ✅ **With Documentation**: Plugin names become hyperlinks if `docs/` directory exists in plugin repo\n  - ❌ **Without Documentation**: Plugin names remain as plain text (no hyperlink)\n- **Automatic Detection**: Build system checks for documentation availability during generation\n- **Comprehensive Coverage**: Includes all available Fledge plugins regardless of documentation status\n\n### Plugin Categorization System\n- **Keyword-Based Organization**: Plugins organized by categories using keyword mapping\n- **Keywords Directory**: `/docs/keywords/` contains category definitions and mappings\n- **Plugin Keywords**: Each plugin repository can have a keywords file defining its categories\n- **Categorical Display**: Plugins grouped and displayed under appropriate category sections\n- **Dynamic Categorization**: Categories are automatically generated based on available keywords\n\n### Plugin Documentation Sources\n- **Core Repository Plugins**: Documentation in `/docs/` for built-in plugins\n- **External Plugin Repos**: Each plugin repository can maintain its own `docs/` directory\n- **Plugin Directory Reference**: All Fledge-based plugins available in main `plugins/` directory\n- **Detailed Documentation**: Comprehensive plugin docs when `docs/` directory exists in plugin repo\n\n### Plugin Documentation Workflow\n1. **Plugin Discovery**: Build system scans all available Fledge plugin repositories\n2. **Documentation Check**: Determines if plugin repo has `docs/` directory\n3. **List Generation**: All plugins added to `fledge_plugins.rst` with name and description\n4. **Hyperlink Decision**: \n   - Plugins WITH docs → Name becomes clickable hyperlink\n   - Plugins WITHOUT docs → Name remains as plain text\n5. **Category Organization**: Plugins grouped by keywords into categorical sections\n6. 
**Integration**: Plugin docs seamlessly integrated into main documentation site\n\n## Quality Standards\n\n### Content Review\n- Ensure accuracy of all technical information\n- Verify code examples work with current Fledge version\n- Check that screenshots are current and accurate\n- Review for clarity and completeness\n\n### Accessibility\n- Use proper heading hierarchy for screen readers\n- Include alt text for all images\n- Ensure good color contrast in custom CSS\n- Test with accessibility tools\n\n### Maintenance\n- Update documentation when features change\n- Remove or update deprecated information\n- Keep external links current\n- Regular review of troubleshooting sections\n\n## Build and Deployment\n\n### Local Testing\n```bash\ncd docs\nmake html\n# Check _build/html/index.html in browser\n```\n\n### Build Warnings\n- Address all Sphinx build warnings\n- Fix broken internal references\n- Verify external links periodically\n- Check image references\n\n### Dependencies\n- Document build dependencies in `requirements.txt`\n- Keep Sphinx version constraints appropriate\n- Test builds in clean environments\n\n## Documentation Contribution Guidelines\n\n### New Documentation\n- Create comprehensive documentation for new features\n- Follow existing patterns and conventions\n- Include in appropriate toctree structures\n- Add cross-references to related content\n\n### Plugin Documentation Contributions\n- **Adding Plugin Docs**: Create `docs/` directory in plugin repository with proper structure\n- **Hyperlink Generation**: Plugin names in `fledge_plugins.rst` automatically become hyperlinks when docs exist\n- **Keywords Assignment**: Add appropriate keyword files to enable categorical organization\n- **Content Standards**: Follow same reStructuredText standards as core documentation\n- **Testing**: Verify plugin documentation builds correctly with main documentation site\n\n### Updates\n- Update documentation when code changes\n- Maintain backwards compatibility 
information\n- Add migration guides for breaking changes\n- Update version history appropriately\n\n### Review Process\n- Technical accuracy review\n- Editorial review for clarity\n- Build verification\n- Link checking\n\n## Common Patterns\n\n### Getting Started Guides\n- Step-by-step instructions\n- Prerequisites clearly stated\n- Expected outcomes described\n- Troubleshooting section included\n\n### Reference Documentation\n- Comprehensive parameter listings\n- Example configurations\n- Default values documented\n- Related settings cross-referenced\n\n### Tutorial Content\n- Progressive complexity\n- Complete working examples\n- Clear learning objectives\n- Summary and next steps\n\n## Troubleshooting Documentation\n\n### Error Messages\n- Include exact error message text\n- Provide context for when errors occur\n- Give specific resolution steps\n- Link to related configuration\n\n### Common Issues\n- Document frequently reported problems\n- Provide multiple solution approaches\n- Include preventive measures\n- Reference community resources\n\nThis documentation should be treated as living guidelines that evolve with the project's needs while maintaining consistency and quality standards.\n"
  },
  {
    "path": ".cursor/rules/python/api.mdc",
    "content": "---\ndescription: \"Python API development rules for Fledge - REST APIs, routes, middleware, and web services\"\nglobs: \"python/fledge/services/*/api/**/*,python/fledge/*/routes.py,python/fledge/common/web/**/*\"\nalwaysApply: false\nauthor: \"Ashish Jabble\"\n---\n\n# Python API Development Rules\n\n## Python Version & Platform Requirements\n\n### Supported Python Versions\n- **Target Range**: Python 3.8.10 - 3.12 (inclusive)\n- Always test API compatibility across the full supported range\n\n### Deployment Platforms\n- **Ubuntu**: LTS versions, 20.04 onwards\n  - **Architectures**: x86_64 & aarch64\n- **Raspberry Pi OS**: Bullseye and Bookworm distributions\n  - **Architectures**: aarch64 & armv7l\n\n### Dependencies Management\n- **Runtime Dependencies**: [python/requirements.txt](mdc:python/requirements.txt) - Include aiohttp, web framework deps\n- **Development Dependencies**: [python/requirements-dev.txt](mdc:python/requirements-dev.txt)\n- **Testing Dependencies**: [python/requirements-test.txt](mdc:python/requirements-test.txt) - Include pytest-aiohttp\n- **Use exact versions** for production dependencies\n- **Consider ARM compatibility** for Raspberry Pi deployment\n- **Test across Python versions** and target architectures\n\n## REST API Development\n\n### API Route Definition & Management\n- **Core Service Routes**: Defined in [python/fledge/services/core/routes.py](mdc:python/fledge/services/core/routes.py)\n- **Microservice Routes**: Exposed at [python/fledge/services/common/microservice_management/routes.py](mdc:python/fledge/services/common/microservice_management/routes.py)\n- **API Handlers**: Follow existing patterns in [python/fledge/services/core/api/](mdc:python/fledge/services/core/api/)\n\n### Route Definition Best Practices\n```python\n# Example route definition with handler\nfrom aiohttp import web\nfrom fledge.services.core.api import my_handler\n\ndef setup_routes(app):\n    \"\"\"Setup API routes for the service.\"\"\"\n 
   \n    # Build a set of (method, path) pairs already registered; the same\n    # path may legitimately serve several HTTP methods\n    existing_routes = {(route.method, route.resource.canonical)\n                       for route in app.router.routes() if route.resource}\n    \n    # Define new routes with conflict checking\n    new_routes = [\n        ('GET', '/fledge/my_endpoint', my_handler.get_data),\n        ('POST', '/fledge/my_endpoint', my_handler.create_data),\n        ('PUT', '/fledge/my_endpoint/{id}', my_handler.update_data),\n        ('DELETE', '/fledge/my_endpoint/{id}', my_handler.delete_data)\n    ]\n    \n    for method, path, handler in new_routes:\n        # A conflict is the same method AND path already registered\n        if (method, path) in existing_routes:\n            raise ValueError(f\"Route conflict: {method} {path} already exists\")\n        \n        app.router.add_route(method, path, handler)\n        existing_routes.add((method, path))\n```\n\n### Route Conflict Prevention\n- **Endpoint Uniqueness**: Ensure each HTTP method and path combination is unique across the application\n- **Method Specificity**: Same path can have different HTTP methods (GET, POST, PUT, DELETE)\n- **Conflict Detection**: Check existing routes before adding new ones\n- **Route Inspection**: Use `app.router.routes()` to inspect existing routes\n- **Naming Conventions**: Use descriptive, hierarchical path naming\n\n### Route Organization Guidelines\n```python\n# Core service routes (routes.py)\nCORE_ROUTES = [\n    # System management\n    '/fledge/ping',\n    '/fledge/shutdown',\n    '/fledge/restart',\n    \n    # Configuration management  \n    '/fledge/category',\n    '/fledge/category/{category_name}',\n    \n    # Plugin management\n    '/fledge/plugins',\n    '/fledge/plugins/{plugin_type}',\n    \n    # Service management\n    '/fledge/service',\n    '/fledge/service/{service_name}'\n]\n\n# Microservice routes (microservice_management/routes.py)\nMICROSERVICE_ROUTES = [\n    # Service\n    '/fledge/service/register',\n    '/fledge/service/unregister', \n    '/fledge/service/ping',\n    '/fledge/service/shutdown',\n    \n    # Health monitoring\n    
'/fledge/service/health',\n    '/fledge/service/status'\n]\n```\n\n### API Handler Implementation\n```python\n# Example API handler with proper structure\nfrom aiohttp import web\nfrom fledge.common.logger import FLCoreLogger\n\n_logger = FLCoreLogger().get_logger(__name__)\n\nasync def get_data_handler(request):\n    \"\"\"Handle GET requests for data retrieval.\"\"\"\n    try:\n        # Extract parameters\n        param_id = request.match_info.get('id')\n        query_params = request.query\n        \n        # Validate input\n        if param_id and not param_id.isdigit():\n            raise web.HTTPBadRequest(reason=\"Invalid ID parameter\")\n        \n        # Process request\n        result = await process_data_request(param_id, query_params)\n        \n        # Return consistent JSON response (camelCase keys)\n        response = {\n            \"message\": \"Data retrieved successfully\",\n            \"data\": result,\n            \"count\": len(result) if isinstance(result, list) else 1\n        }\n        \n        return web.json_response(response, status=200)\n        \n    except web.HTTPException:\n        # Let aiohttp HTTP errors (e.g., the HTTPBadRequest above) propagate\n        # with their own status codes instead of being masked as 500 below\n        raise\n    except ValueError as ex:\n        _logger.error(ex, \"Invalid input parameter\")\n        return web.json_response({\"message\": str(ex)}, status=400)\n    except Exception as ex:\n        _logger.error(ex, \"Failed to retrieve data\")\n        return web.json_response({\"message\": \"Internal server error\"}, status=500)\n```\n\n### Middleware-Based Security & Role Validation\nFledge implements endpoint security and role validation through middleware:\n- **Middleware Location**: [python/fledge/common/web/middleware.py](mdc:python/fledge/common/web/middleware.py)\n- **Role-Based Access Control**: All endpoint role restrictions handled in middleware\n- **Centralized Validation**: Security logic centralized in middleware layer\n- **Automatic Enforcement**: Middleware automatically validates roles for protected endpoints\n\n### Role Validation 
Implementation\n```python\n# Role validation is handled in middleware.py, not in individual handlers\n# The middleware intercepts requests and validates user roles before reaching handlers\n\n# Example of how middleware handles role validation:\nasync def role_validation_middleware(request, handler):\n    \"\"\"Middleware that validates user roles for protected endpoints.\"\"\"\n    \n    # Extract endpoint information\n    endpoint = request.path\n    method = request.method\n    \n    # Check if endpoint requires specific roles\n    required_roles = get_endpoint_roles(endpoint, method)\n    \n    if required_roles:\n        # Validate user authentication and roles\n        user_roles = await get_user_roles(request)\n        \n        if not any(role in user_roles for role in required_roles):\n            return web.json_response(\n                {\"message\": \"Insufficient permissions\"}, \n                status=403\n            )\n    \n    # Continue to handler if validation passes\n    return await handler(request)\n```\n\n### Endpoint Role Configuration\nRole requirements for endpoints are configured and enforced by middleware:\n```python\n# Middleware handles role mapping for different endpoints\nENDPOINT_ROLES = {\n    # Administrative operations - admin only\n    '/fledge/shutdown': ['admin'],\n    '/fledge/restart': ['admin'],\n    '/fledge/service': ['admin'],\n    '/fledge/certificate': ['admin'],\n    \n    # Configuration management - admin and editor\n    '/fledge/configuration': ['admin', 'editor'],\n    '/fledge/plugins': ['admin', 'editor'],\n    '/fledge/category': ['admin', 'editor'],\n    \n    # Control operations - admin, editor, and control\n    '/fledge/schedule': ['admin', 'editor', 'control'],\n    '/fledge/notification': ['admin', 'editor', 'control'],\n    '/fledge/control/pipeline': ['admin', 'editor', 'control'],\n    '/fledge/control/script': ['admin', 'editor', 'control'],\n    \n    # Data viewing - admin, editor, control, 
data-view, and view\n    '/fledge/asset': ['admin', 'editor', 'control', 'data-view', 'view'],\n    '/fledge/reading': ['admin', 'editor', 'control', 'data-view', 'view'],\n    '/fledge/statistics': ['admin', 'editor', 'control', 'data-view', 'view'],\n    \n    # General viewing - all authenticated roles\n    '/fledge/ping': ['admin', 'editor', 'control', 'data-view', 'view'],\n    '/fledge/health': ['admin', 'editor', 'control', 'data-view', 'view'],\n    '/fledge/audit': ['admin', 'editor', 'control', 'data-view', 'view'],\n    \n    # Public endpoints - no authentication required\n    '/fledge/login': [],\n    '/fledge/logout': []\n}\n\n# Role Hierarchy (from most to least privileged):\n# admin      - Full system access, can modify everything\n# editor     - Can modify configurations and data processing\n# control    - Can manage control operations and pipelines  \n# data-view  - Can view data and readings but not modify\n# view       - Basic viewing access, limited data access\n\n# Middleware automatically enforces these role requirements\n# No need to implement role checking in individual handlers\n```\n\n### Security Best Practices for Middleware\n- **Centralized Role Management**: All role validation handled in middleware.py\n- **Handler Independence**: Handlers focus on business logic, not security\n- **Consistent Enforcement**: Same security rules applied across all endpoints\n- **Configuration-Based**: Role requirements configured, not hardcoded\n- **Audit Trail**: Middleware logs all security validation attempts\n- **Error Handling**: Proper HTTP status codes for authorization failures\n\n### REST API Standards\n- Use proper HTTP status codes (200, 201, 400, 401, 403, 404, 500)\n- Implement proper authentication/authorization through middleware\n- Validate all input parameters thoroughly\n- Return consistent JSON response formats (camelCase for API keys)\n- Include comprehensive error handling and logging\n- Follow RESTful conventions for HTTP methods 
and endpoints\n- Apply security middleware to all protected endpoints\n- Use role-based access control for sensitive operations\n\n## Microservice Communication\n- Use the microservice management client from [python/fledge/common/microservice_management_client/](mdc:python/fledge/common/microservice_management_client/)\n- Handle service discovery properly\n- Implement proper retry logic with exponential backoff\n- Use appropriate timeouts for service calls\n\n## Async/Await Patterns\n- Use async/await for I/O operations (database, HTTP, file operations)\n- Properly handle asyncio event loops\n- Use aiohttp for HTTP client operations\n- Follow existing async patterns in the codebase\n- Be mindful of blocking operations in async contexts\n"
  },
  {
    "path": ".cursor/rules/python/config.mdc",
    "content": "---\ndescription: \"Python configuration and data management rules for Fledge - config system, reading objects, and data types\"\nglobs: \"python/fledge/common/configuration*,python/fledge/services/*/configuration*,**/config*.py\"\nalwaysApply: false\nauthor: \"Ashish Jabble\"\n---\n\n# Python Configuration & Data Management Rules\n\n## Python Version & Platform Requirements\n\n### Supported Python Versions\n- **Target Range**: Python 3.8.10 - 3.12 (inclusive)\n- Always test configuration handling across the full supported range\n\n### Deployment Platforms\n- **Ubuntu**: LTS versions, 20.04 onwards\n  - **Architectures**: x86_64 & aarch64\n- **Raspberry Pi OS**: Bullseye and Bookworm distributions\n  - **Architectures**: aarch64 & armv7l\n\n### Dependencies Management\n- **Runtime Dependencies**: [python/requirements.txt](mdc:python/requirements.txt) - Include config validation deps\n- **Development Dependencies**: [python/requirements-dev.txt](mdc:python/requirements-dev.txt)\n- **Testing Dependencies**: [python/requirements-test.txt](mdc:python/requirements-test.txt) - Include config testing tools\n- **Use exact versions** for production dependencies\n- **Consider ARM compatibility** for Raspberry Pi deployment\n- **Test across Python versions** and target architectures\n\n## Configuration Management\n\n### Configuration System\n- Use the Fledge configuration management system consistently\n- Store configuration in JSON format\n- Validate configuration parameters thoroughly\n- Provide sensible defaults for all configuration options\n- Handle configuration changes gracefully\n\n### Configuration Categories & Types\nFledge configuration manager supports various types with specific formatting requirements:\n\n```python\n# Example configuration category definitions\nconfig_category = {\n    \"string_example\": {\n        \"description\": \"A string configuration parameter\",\n        \"type\": \"string\",\n        \"default\": \"default_value\",  # 
Always wrap in quotes\n        \"value\": \"current_value\"     # Always wrap in quotes\n    },\n    \"integer_example\": {\n        \"description\": \"An integer configuration parameter\", \n        \"type\": \"integer\",\n        \"default\": \"42\",        # Wrap in quotes but validate as integer\n        \"value\": \"100\"          # Wrap in quotes but validate as integer\n    },\n    \"boolean_example\": {\n        \"description\": \"A boolean configuration parameter\",\n        \"type\": \"boolean\", \n        \"default\": \"false\",     # Wrap in quotes: \"true\" or \"false\"\n        \"value\": \"true\"         # Wrap in quotes: \"true\" or \"false\"\n    },\n    \"float_example\": {\n        \"description\": \"A float configuration parameter\",\n        \"type\": \"float\",\n        \"default\": \"3.14\",      # Wrap in quotes but validate as float\n        \"value\": \"2.71\"         # Wrap in quotes but validate as float\n    },\n    \"enumeration_example\": {\n        \"description\": \"An enumeration configuration parameter\",\n        \"type\": \"enumeration\",\n        \"options\": [\"option1\", \"option2\", \"option3\"],\n        \"default\": \"option1\",   # Wrap in quotes, must be from options\n        \"value\": \"option2\"      # Wrap in quotes, must be from options\n    },\n    \"JSON_example\": {\n        \"description\": \"A JSON object configuration parameter\",\n        \"type\": \"JSON\",\n        \"default\": \"{\\\"key\\\": \\\"value\\\"}\",  # JSON string wrapped in quotes\n        \"value\": \"{\\\"key\\\": \\\"updated\\\"}\"   # JSON string wrapped in quotes  \n    }\n}\n```\n\n### Supported Configuration Types\nFledge configuration manager supports the following types:\n\n- **string**: Text values (default type if not specified)\n- **integer**: Whole numbers\n- **float**: Decimal numbers  \n- **boolean**: True/false values (\"true\"/\"false\" as strings)\n- **enumeration**: Predefined list of options\n- **JSON**: Complex objects or 
arrays\n- **password**: Encrypted/masked string values\n- **X509 certificate**: Certificate data\n- **code**: Editable code blocks (Python, JavaScript, etc.)\n\n### Optional Configuration Items\nConfiguration parameters can include optional properties:\n\n```python\nconfig_with_optional_items = {\n    \"advanced_parameter\": {\n        \"description\": \"An advanced configuration parameter\",\n        \"type\": \"integer\",\n        \"default\": \"100\",\n        \"value\": \"150\",\n        # Optional items\n        \"minimum\": \"1\",           # Minimum allowed value\n        \"maximum\": \"1000\",        # Maximum allowed value\n        \"length\": \"10\",           # Maximum string length (for string types)\n        \"rule\": \"value > 0\",      # Validation rule expression\n        \"order\": \"10\",            # Display order in UI\n        \"readonly\": \"false\",      # Whether value can be modified\n        \"options\": [\"opt1\", \"opt2\"],  # Available options (for enumeration)\n        \"displayName\": \"Advanced Parameter\",  # UI display name\n        \"validity\": \"valid\",      # Validation status\n        \"group\": \"advanced\"       # Grouping for UI organization\n    }\n}\n```\n\n### Reserved Configuration Categories\nCertain category names are reserved by the Fledge configuration manager:\n\n```python\n# Reserved category names - DO NOT USE for custom configurations\nRESERVED_CATEGORIES = [\n    \"General\",           # Core system configuration\n    \"Advanced\",          # Advanced system settings  \n    \"Utilities\",         # System utilities configuration\n    \"Security\",          # Security-related settings\n    \"rest_api\",          # REST API configuration\n    \"service\",           # Service-level configuration\n    \"storage\",           # Storage plugin configuration\n    \"scheduler\",         # Task scheduler configuration\n    \"dispatcher\",        # Dispatcher service configuration\n    \"logging\",           # Logging 
configuration\n    \"authentication\",    # Authentication settings\n    \"authorization\",     # Authorization settings\n    \"certificate_store\", # Certificate management\n    \"audit\",            # Audit logging configuration\n    \"performance\"       # Performance monitoring settings\n]\n\n# Use descriptive, specific names for plugin/service configurations\n# Good examples:\nGOOD_CATEGORY_NAMES = [\n    \"modbus_tcp\",        # Specific protocol/plugin name\n    \"opcua_client\",      # Specific functionality\n    \"http_north\",        # Service type and direction\n    \"pi_web_api\",        # Specific target system\n    \"temperature_filter\", # Specific filter purpose\n]\n```\n\n### Configuration Value Handling Rules\n- **Quote Wrapping**: Always wrap `default` and `value` in double quotes (`\"\"`)\n- **Type Validation**: Explicitly validate values against their declared type\n- **Type Conversion**: Convert string values to appropriate types during processing\n- **Validation Examples**:\n  ```python\n  def validate_config_value(value: str, config_type: str) -> bool:\n      \"\"\"Validate configuration value against its type.\"\"\"\n      try:\n          if config_type == \"integer\":\n              int(value)  # Must be convertible to int\n          elif config_type == \"float\": \n              float(value)  # Must be convertible to float\n          elif config_type == \"boolean\":\n              return value.lower() in [\"true\", \"false\"]\n          elif config_type == \"JSON\":\n              json.loads(value)  # Must be valid JSON\n          # string and enumeration are validated separately\n          return True\n      except (ValueError, json.JSONDecodeError):\n          return False\n  ```\n\n### Reading Object Format\nFledge uses a specific reading object format for sensor data:\n\n```python\n# Standard Fledge reading object structure\nreading_object = {\n    \"asset\": \"sensor_name\",           # Asset name (string)\n    \"timestamp\": 
\"2024-01-01 12:00:00.000\",  # ISO timestamp with milliseconds\n    \"readings\": {                     # Dictionary of sensor readings\n        \"temperature\": 25.5,          # Numeric values (int/float)\n        \"humidity\": 60.2,\n        \"pressure\": 1013.25,\n        \"status\": \"online\"            # String values allowed\n    }\n}\n\n# Multiple readings in a single object\nreading_batch = [\n    {\n        \"asset\": \"sensor01\",\n        \"timestamp\": \"2024-01-01 12:00:00.000\",\n        \"readings\": {\"temperature\": 25.5, \"humidity\": 60.2}\n    },\n    {\n        \"asset\": \"sensor02\", \n        \"timestamp\": \"2024-01-01 12:00:01.000\",\n        \"readings\": {\"pressure\": 1013.25, \"wind_speed\": 12.3}\n    }\n]\n```\n\n### Configuration Best Practices\n- **Descriptive Names**: Use clear, descriptive configuration parameter names\n- **Validation**: Always validate configuration values against their types\n- **Error Handling**: Provide clear error messages for invalid configurations\n- **Documentation**: Include comprehensive descriptions for all parameters\n- **Defaults**: Provide sensible defaults that work in most environments\n- **Type Safety**: Ensure type conversion is handled safely with proper error checking\n- **Reading Format**: Follow standard Fledge reading object structure for sensor data\n- **Reserved Categories**: Avoid using reserved category names for custom configurations\n- **Optional Items**: Use optional configuration properties for advanced validation and UI control\n- **Type Selection**: Choose appropriate types (use \"password\" for sensitive data, \"code\" for scripts)\n- **Validation Rules**: Implement proper minimum/maximum constraints and custom validation rules\n\n## Database Operations\n- Use the storage service abstraction from [python/fledge/common/storage_client/](mdc:python/fledge/common/storage_client/)\n- Handle database connection errors gracefully\n- Use prepared statements for SQL queries\n- Consider 
performance for large datasets\n- Implement proper connection pooling\n\n## Error Handling\n- Use specific exception types rather than generic Exception\n- Log errors with appropriate severity levels using Fledge logging framework\n- Handle async operations properly with try/catch blocks\n- Return meaningful error messages for API responses\n- Consider edge computing constraints when handling errors\n\n## Security Considerations\n- Validate all input parameters to prevent injection attacks\n- Use secure defaults for configuration\n- Handle certificates and keys securely\n- Follow authentication/authorization patterns\n- Sanitize data before database operations\n- Never log sensitive information\n"
  },
  {
    "path": ".cursor/rules/python/core.mdc",
    "content": "---\ndescription: \"Core Python development rules for Fledge - code style, imports, file structure, and naming conventions\"\nglobs: \"*.py,python/**/*\"\nalwaysApply: false\nauthor: \"Ashish Jabble\"\n---\n\n# Python Core Development Rules\n\n## Python Version & Platform Requirements\n\n### Supported Python Versions\n- **Minimum**: Python 3.8.10\n- **Maximum**: Python 3.12\n- **Target Range**: Python 3.8.10 - 3.12 (inclusive)\n- Always test compatibility across the full supported range\n\n### Deployment Platforms\n- **Ubuntu**: LTS versions, 20.04 onwards\n  - **Architectures**: x86_64 & aarch64\n- **Raspberry Pi OS**: Bullseye and Bookworm distributions\n  - **Architectures**: aarch64 & armv7l\n\n### Dependencies Management\n- **Runtime Dependencies**: [python/requirements.txt](mdc:python/requirements.txt)\n- **Development Dependencies**: [python/requirements-dev.txt](mdc:python/requirements-dev.txt)\n- **Testing Dependencies**: [python/requirements-test.txt](mdc:python/requirements-test.txt)\n- **Use exact versions** for production dependencies\n- **Consider ARM compatibility** for Raspberry Pi deployment\n- **Test across Python versions** and target architectures\n- **Edge Devices**: Resource-constrained environments with limited CPU/memory\n- Consider platform-specific limitations and optimizations\n\n### Supported Architectures\n- **Ubuntu**: x86_64 & aarch64 architectures\n- **Raspberry Pi OS**: aarch64 & armv7l architectures\n- **Cross-Platform Testing**: Ensure compatibility across all supported architectures\n- **Performance Considerations**: ARM-based architectures may have different performance characteristics\n\n### Compatibility Guidelines\n- Use language features available in Python 3.8.10+ only\n- Test on all supported architectures: x86_64, aarch64, and armv7l\n- Consider performance implications on ARM-based edge devices (aarch64, armv7l)\n- Handle platform-specific dependencies appropriately\n- Validate package availability across 
all target architectures\n\n## Code Style & Standards\n- Follow PEP 8 style guidelines strictly\n- Use type hints where appropriate for better code documentation (Python 3.8.10+ compatible)\n- Maximum line length: 120 characters\n- Use 4 spaces for indentation (never tabs)\n- Import order: standard library, third-party, local imports\n- Use double quotes for strings consistently\n- Use descriptive variable and function names\n\n### Import Management & Circular Dependencies\n- **Avoid Circular Imports**: Prevent circular dependency issues that can cause import failures\n- **Local Imports**: Use local imports only as a last resort when no other option exists\n- **Import Testing**: Ensure local imports actually work and don't break during runtime\n- **Dependency Design**: Restructure code to eliminate need for circular imports when possible\n\n### Circular Import Prevention Strategies\n```python\n# BAD: Top-level import causing circular dependency\nfrom fledge.services.core import server  # This can cause circular import\n\n# BETTER: Local import as last resort (with comment explaining why)\ndef get_server_instance():\n    \"\"\"Get server instance with local import to avoid circular dependency.\"\"\"\n    # Require a local import in order to avoid circular import references\n    from fledge.services.core import server\n    return server\n\n# BEST: Dependency injection or refactoring to avoid the need\nclass MyHandler:\n    def __init__(self, server_instance=None):\n        self._server = server_instance\n    \n    def process_request(self):\n        if self._server:\n            return self._server.get_info()\n\n# ALTERNATIVE: Use interfaces/protocols to break dependencies\nfrom typing import Protocol\n\nclass ServerProtocol(Protocol):\n    def get_info(self) -> dict: ...\n    def shutdown(self) -> None: ...\n\nclass MyService:\n    def __init__(self, server: ServerProtocol):\n        self._server = server\n```\n\n### Local Import Guidelines\nWhen local imports are 
unavoidable:\n- **Document Reasoning**: Always comment why local import is necessary\n- **Test Thoroughly**: Ensure the import works in all execution contexts\n- **Minimal Scope**: Keep local imports as close to usage as possible\n- **Error Handling**: Handle potential import failures gracefully\n- **Consider Alternatives**: Always look for architectural solutions first\n\n```python\ndef handle_server_operation():\n    \"\"\"Handle operation requiring server access.\"\"\"\n    try:\n        # Local import to avoid circular dependency - server imports this module\n        from fledge.services.core import server\n        \n        # Verify the import worked\n        if not hasattr(server, 'expected_method'):\n            raise ImportError(\"Server module not properly initialized\")\n            \n        return server.expected_method()\n        \n    except ImportError as ex:\n        _logger.error(ex, \"Failed to import server module\")\n        raise RuntimeError(\"Server not available\") from ex\n```\n\n### Documentation Standards\n- **Docstrings**: Use pydoc-compatible docstrings for all public functions, classes, and modules\n- **Docstring Format**: Follow PEP 257 and Google/NumPy style for consistency\n- **Mandatory Docstrings**: Required for:\n  - All public functions and methods\n  - All public classes\n  - All modules (module-level docstring)\n  - Complex private functions (use judgment)\n- **Missing Docstrings**: Complain/flag when docstrings are missing for public APIs\n\n### Docstring Examples\n```python\ndef get_user_data(user_id: int, include_settings: bool = False) -> dict:\n    \"\"\"Retrieve user data from the database.\n    \n    Args:\n        user_id: The unique identifier for the user\n        include_settings: Whether to include user settings in response\n        \n    Returns:\n        dict: User data with keys 'id', 'name', 'email', and optionally 'settings'\n        \n    Raises:\n        ValueError: If user_id is not a positive integer\n     
   DatabaseError: If database connection fails\n        \n    Example:\n        >>> user_data = get_user_data(123, include_settings=True)\n        >>> print(user_data['name'])\n        'John Doe'\n    \"\"\"\n    pass\n\nclass DeviceManager:\n    \"\"\"Manages IoT device connections and data collection.\n    \n    This class handles the lifecycle of IoT devices including discovery,\n    connection management, and data retrieval from various sensor types.\n    \n    Attributes:\n        device_count: Number of currently connected devices\n        connection_timeout: Timeout in seconds for device connections\n        \n    Example:\n        >>> manager = DeviceManager()\n        >>> manager.connect_device(\"sensor_01\")\n        >>> data = manager.get_device_data(\"sensor_01\")\n    \"\"\"\n    pass\n```\n\n### Docstring Quality Standards\n- **Clear and Concise**: Describe what the function/class does, not how\n- **Parameter Documentation**: Document all parameters with types and descriptions\n- **Return Value Documentation**: Describe return types and structure\n- **Exception Documentation**: List all exceptions that may be raised\n- **Usage Examples**: Include practical examples for complex functions\n- **pydoc Compatibility**: Ensure docstrings render correctly with `python -m pydoc module_name`\n\n### Naming Conventions\n- **Variables**: Use snake_case for all variable names\n  ```python\n  user_id = 123\n  device_name = \"sensor_01\"\n  connection_timeout = 30\n  ```\n- **Functions/Methods**: Use snake_case for all function and method names\n  ```python\n  def get_user_data():\n      pass\n  \n  def process_sensor_reading(reading_value):\n      pass\n  ```\n- **Classes**: Use PascalCase for class names (following PEP 8)\n  ```python\n  class DeviceManager:\n      pass\n  \n  class SensorDataProcessor:\n      pass\n  ```\n- **Constants**: Use UPPER_SNAKE_CASE for constants\n  ```python\n  MAX_RETRY_COUNT = 3\n  DEFAULT_TIMEOUT_SECONDS = 30\n  API_BASE_URL = 
\"https://api.example.com\"\n  ```\n\n### API Response Naming\n- **API Response Keys**: Always use camelCase for JSON response keys\n  ```python\n  # Correct API response format\n  response = {\n      \"userId\": 123,\n      \"deviceName\": \"sensor_01\",\n      \"lastReadingTime\": \"2024-01-01T12:00:00Z\",\n      \"sensorData\": {\n          \"temperature\": 25.5,\n          \"humidity\": 60.2\n      }\n  }\n  ```\n- **Internal Code**: Use snake_case everywhere else (variables, function names, etc.)\n- **Database Fields**: Use snake_case for database column names and internal data structures\n\n## File Structure & Organization\n- Python code lives in [python/fledge/](mdc:python/fledge/) directory\n- Follow the existing module structure:\n  - [python/fledge/common/](mdc:python/fledge/common/) - shared utilities and common functionality\n  - [python/fledge/services/](mdc:python/fledge/services/) - core services implementation\n  - [python/fledge/tasks/](mdc:python/fledge/tasks/) - background tasks and scheduled operations\n  - [python/fledge/plugins/](mdc:python/fledge/plugins/) - plugin interfaces and base classes\n\n### Core Microservice Architecture\n- **`server.py`**: The heart of the core microservice - central orchestration and service management\n- **Core Service Location**: [python/fledge/services/core/server.py](mdc:python/fledge/services/core/server.py)\n- **Critical Component**: Handles microservice lifecycle, service discovery, and core coordination\n- **Service Entry Point**: Primary entry point for the core Fledge service\n- **Integration Hub**: Coordinates between all other services, plugins, and components\n"
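Pulling the conventions above together (import ordering, snake_case internals, UPPER_SNAKE_CASE constants, camelCase API keys, Google-style docstrings), a minimal module might look like the following sketch. All names here are illustrative, not real Fledge APIs.

```python
"""Hypothetical sensor-reading helper illustrating the conventions above."""
# Standard library imports first
import json
from datetime import datetime, timezone

# Third-party imports would follow here, then local `fledge.*` imports last.

# Constants: UPPER_SNAKE_CASE
MAX_RETRY_COUNT = 3
DEFAULT_TIMEOUT_SECONDS = 30


def build_reading_response(device_name: str, temperature: float) -> str:
    """Serialize a reading using camelCase keys for the API payload.

    Args:
        device_name: Internal snake_case identifier of the device.
        temperature: Latest temperature reading in degrees Celsius.

    Returns:
        str: JSON document whose keys follow the camelCase API convention.
    """
    # Internal variables stay snake_case; only the JSON keys are camelCase.
    reading_time = datetime.now(timezone.utc).isoformat()
    payload = {
        "deviceName": device_name,
        "lastReadingTime": reading_time,
        "sensorData": {"temperature": temperature},
    }
    return json.dumps(payload)
```

Note the split: `device_name` and `reading_time` stay snake_case inside the code, while the serialized keys (`deviceName`, `lastReadingTime`) follow the camelCase API response rule.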
  },
  {
    "path": ".cursor/rules/python/quality.mdc",
    "content": "---\ndescription: \"Python quality rules for Fledge - dependencies, logging, performance, and version compatibility\"\nglobs: \"python/requirements*.txt\"\nalwaysApply: false\nauthor: \"Ashish Jabble\"\n---\n\n# Python Quality & Best Practices Rules\n\n## Dependencies & Requirements Management\n\n### Requirements Files Structure\n- **[python/requirements.txt](mdc:python/requirements.txt)** - Runtime dependencies for production\n- **[python/requirements-dev.txt](mdc:python/requirements-dev.txt)** - Development dependencies (includes runtime + test)\n- **[python/requirements-test.txt](mdc:python/requirements-test.txt)** - Testing framework dependencies\n\n### Version-Specific Dependencies\n- Use Python version markers for compatibility: `package==version;python_version>=\"3.12\"`\n- Example patterns from existing requirements:\n  ```\n  aiohttp==3.8.6;python_version<\"3.12\"\n  aiohttp==3.10.11;python_version>=\"3.12\"\n  yarl==1.7.2;python_version<=\"3.10\"\n  yarl==1.9.4;python_version>=\"3.11\" and python_version<\"3.12\"\n  ```\n\n### Dependency Guidelines\n- **Always specify exact versions** for production dependencies\n- **Use version markers** when different Python versions require different package versions\n- **Test across Python versions** to ensure compatibility\n- **Keep requirements files in sync** - test dependencies should match runtime versions\n- **Document reasoning** for version-specific constraints in comments\n- **Consider ARM compatibility** for Raspberry Pi deployment\n\n### Requirements File Documentation\n- **Add comments** explaining version constraints and Python version markers\n- **Document platform-specific requirements** (e.g., ARM vs x86_64)\n- **Explain version ranges** when multiple versions are supported for different Python versions\n- **Include installation notes** for complex dependencies\n- **Reference upstream issues** when using specific versions due to bugs or compatibility\n\n## Logging\n\n### Logging 
Framework\n- **Use FLCoreLogger class** for all logging operations\n- **Multi-stacktrace support**: FLCoreLogger handles complex stacktrace scenarios\n- Import from fledge.common.logger: `from fledge.common.logger import FLCoreLogger`\n- Create logger instance: `_logger = FLCoreLogger().get_logger(__name__)`\n\n### Logging Best Practices\n- Use the Fledge logging framework consistently across all components\n- Include appropriate context in log messages for debugging\n- Use proper log levels (DEBUG, INFO, WARNING, ERROR)\n- Avoid logging sensitive information (passwords, tokens, API keys, etc.)\n- Structure log messages for easy parsing and analysis\n\n### FLCoreLogger Usage Examples\n```python\nfrom fledge.common.logger import FLCoreLogger\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n# Standard logging\n_logger.info(\"Service started successfully\")\n_logger.warning(\"Configuration parameter missing, using default\")\n_logger.error(\"Failed to connect to database\")\n\n# Multi-stacktrace scenarios (handled automatically by FLCoreLogger)\ntry:\n    # Complex operation that might have nested exceptions\n    result = complex_operation()\nexcept Exception as ex:\n    _logger.error(ex, \"Complex operation failed\")\n```\n\n### Log Message Guidelines\n- Use descriptive messages that provide context\n- Include relevant data (IDs, names, values) but avoid sensitive information\n- Use consistent formatting for similar log types\n- Consider log aggregation and searching when structuring messages\n\n## Testing Integration\n- **Unit Testing**: Detailed unit testing guidelines are in [python/unit.mdc](mdc:python/unit.mdc)\n- **System Testing**: Located in [tests/system/python/](mdc:tests/system/python/)\n- **Test Documentation**: Follow [tests/README.rst](mdc:tests/README.rst) for complete testing instructions\n\n## Performance Guidelines\n\n### Edge Computing Optimization\n- **Memory Constraints**: Raspberry Pi and edge devices have limited RAM\n- **CPU Limitations**: 
ARM processors may be slower than x86_64\n- **Storage Constraints**: Limited disk space on edge devices\n- **Network Considerations**: Potentially unreliable or slow connections\n\n### Performance Best Practices\n- Optimize database queries and use appropriate indexes\n- Use appropriate caching strategies (consider memory limits)\n- Monitor resource usage in production environments\n- Profile performance-critical code paths on target platforms\n- Minimize blocking operations in async code\n- **Test on Raspberry Pi** for realistic performance validation\n- Consider memory-efficient data structures and algorithms\n- Implement graceful degradation under resource pressure\n\n## Version Compatibility & Testing\n\n### Python Version Testing\n- **Test across full range**: Python 3.8.10 through 3.12\n- **Use CI/CD matrices** to validate on multiple Python versions\n- **Avoid deprecated features** that may be removed in newer versions\n- **Use version-specific workarounds** when necessary with clear documentation\n\n### Platform Testing\n- **Ubuntu Testing**: Primary development and deployment platform (LTS 20.04+)\n- **Raspberry Pi Testing**: ARM architecture and resource constraints\n- **Cross-Architecture**: Ensure code works on x86_64, aarch64, and armv7l\n- **Performance Validation**: Test on actual Raspberry Pi hardware when possible\n- **Architecture-Specific Testing**: Validate on all supported architectures:\n  - x86_64 (Ubuntu)\n  - aarch64 (Ubuntu & Raspberry Pi OS)\n  - armv7l (Raspberry Pi OS)\n\n### Compatibility Guidelines\n- Maintain backwards compatibility for APIs across supported Python versions\n- Use `sys.version_info` checks for version-specific code paths\n- Document breaking changes clearly in commit messages and documentation\n- Consider dependency availability across Python versions and platforms\n- Test package installation on both Ubuntu and Raspberry Pi OS\n\n## Plugin Development\n- Implement proper plugin interfaces defined in 
[python/fledge/plugins/](mdc:python/fledge/plugins/)\n- Handle plugin lifecycle properly (start, stop, reconfigure)\n- Use the plugin configuration system\n- Follow existing plugin patterns for consistency\n- Consider performance implications for edge devices\n"
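As a rough illustration of the lifecycle bullets above, a Python plugin module exposes entry points along these lines. This is a sketch of the general shape only; exact metadata fields and signatures vary by plugin type, so mirror an existing plugin under `python/fledge/plugins/` rather than copying this verbatim, and treat the plugin name and config keys here as hypothetical.

```python
"""Sketch of the lifecycle entry points a south-style plugin module exposes."""
import copy

_DEFAULT_CONFIG = {
    "assetName": {
        "description": "Asset name attached to readings",
        "type": "string",
        "default": "sensor",
    }
}


def plugin_info() -> dict:
    """Describe the plugin so it can be discovered and configured."""
    return {
        "name": "example_south",  # hypothetical plugin name
        "version": "1.0.0",
        "mode": "poll",
        "type": "south",
        "interface": "1.0",
        "config": _DEFAULT_CONFIG,
    }


def plugin_init(config: dict) -> dict:
    """Create the plugin handle from the resolved configuration."""
    return copy.deepcopy(config)


def plugin_reconfigure(handle: dict, new_config: dict) -> dict:
    """Apply a new configuration, returning a fresh handle."""
    return copy.deepcopy(new_config)


def plugin_shutdown(handle: dict) -> None:
    """Release any resources held by the handle."""
    handle.clear()
```

Keeping the handle an immutable-by-convention deep copy of the configuration makes `plugin_reconfigure` cheap to reason about, which matters on resource-constrained edge devices.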
  },
  {
    "path": ".cursor/rules/tests/python/api.mdc",
"content": "---\ndescription: \"Python API integration testing rules for Fledge - conftest fixtures, HTTP client patterns, and API test organization\"\nglobs: \"tests/system/python/api/**/*.py,tests/system/python/conftest.py\"\nalwaysApply: false\nauthor: \"Ashish Jabble\"\n---\n\n# API Integration Testing Guidelines\n\n## Test Organization & Structure\n\n### Directory Structure\n- **API Integration Tests**: Located in [tests/system/python/api/](mdc:tests/system/python/api/)\n- **Main conftest**: [tests/system/python/conftest.py](mdc:tests/system/python/conftest.py) - Contains shared fixtures\n- **pytest Configuration**: [tests/system/python/pytest.ini](mdc:tests/system/python/pytest.ini)\n- **Test Documentation**: Follow [tests/README.rst](mdc:tests/README.rst) for execution instructions\n\n### Test File Conventions\n- **Naming**: Test files must begin with `test_` for pytest auto-discovery\n- **Pattern**: `test_<api_area>.py` (e.g., `test_authentication.py`, `test_configuration.py`)\n- **Location**: All API tests in `tests/system/python/api/` directory\n- **Imports**: Always include `import http.client` for HTTP connections\n- **Documentation**: Include module docstrings describing the API area being tested\n\n### Test Class Organization\n- **Class Naming**: Use `TestClassName` pattern (e.g., `TestAuthenticationAPI`, `TestCommon`)\n- **Method Naming**: Use descriptive names like `test_login_username_regular_user`\n- **Test Flow**: Organize tests to follow logical API workflows\n- **Dependencies**: Use fixtures to manage test prerequisites and cleanup\n\n## HTTP Client Standards\n\n### Required HTTP Library\n- **MUST USE**: `http.client` library only for HTTP/HTTPS connections\n- **NO requests library**: Do not use `requests` - system tests use `http.client` exclusively\n- **Import Pattern**: Always include `import http.client` at the top of test files\n\n### HTTP Connection Patterns\n\n#### Basic Connection Setup\n```python\nimport http.client\nimport 
json\n\ndef test_api_endpoint(self, fledge_url):\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"GET\", \"/fledge/ping\")\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n```\n\n#### POST Requests with JSON Data\n```python\ndef test_post_with_data(self, fledge_url):\n    conn = http.client.HTTPConnection(fledge_url)\n    data = {\"key\": \"value\"}\n    conn.request(\"POST\", \"/fledge/endpoint\", json.dumps(data))\n    r = conn.getresponse()\n    assert 200 == r.status\n    response_data = json.loads(r.read().decode())\n```\n\n#### Authenticated Requests\n```python\ndef test_authenticated_request(self, fledge_url):\n    conn = http.client.HTTPConnection(fledge_url)\n    headers = {\"authorization\": TOKEN}\n    conn.request(\"GET\", \"/fledge/protected\", headers=headers)\n    r = conn.getresponse()\n    assert 200 == r.status\n```\n\n### HTTP Best Practices\n- **Status Code Validation**: Always assert expected HTTP status codes\n- **Response Decoding**: Use `r.read().decode()` to get response text\n- **JSON Handling**: Use `json.loads()` and `json.dumps()` for JSON data\n- **Connection Reuse**: Create new connections per test method for isolation\n- **Error Handling**: Test both success and error response scenarios\n\n## Core Conftest Fixtures\n\n### Essential Fixtures for API Testing\n\n#### `reset_and_start_fledge`\nPrimary fixture for test environment setup:\n```python\ndef test_api_method(self, reset_and_start_fledge, fledge_url):\n    # Test runs with fresh Fledge instance\n```\n- **Purpose**: Kills Fledge, resets database, and starts fresh instance\n- **Parameters**: Uses `storage_plugin`, `readings_plugin`, `authentication` fixtures\n- **Usage**: Include as first parameter in test methods that need clean environment\n\n#### `fledge_url`\nProvides Fledge server connection details:\n```python\ndef test_connection(self, fledge_url):\n    conn = 
http.client.HTTPConnection(fledge_url)\n```\n- **Default**: \"localhost:8081\"\n- **Override**: Use `--fledge-url` command line option\n- **Usage**: Required for all HTTP connections to Fledge\n\n#### `storage_plugin`\nSpecifies database plugin for tests:\n```python\n@pytest.fixture\ndef storage_plugin(request):\n    return request.config.getoption(\"--storage-plugin\")\n```\n- **Default**: \"sqlite\"\n- **Options**: \"sqlite\", \"postgres\", \"sqlitelb\"\n- **Usage**: Used by `reset_and_start_fledge` fixture\n\n#### `authentication`\nDefines authentication mode:\n```python\n@pytest.fixture\ndef authentication():\n    return \"optional\"  # or \"mandatory\"\n```\n- **Default**: \"optional\"\n- **Override**: Define in individual test files for specific auth requirements\n- **Usage**: Controls Fledge authentication configuration\n\n### Additional Service Management Fixtures\n\n#### `add_south`\nAdds and configures south services:\n```python\ndef test_with_south_service(self, add_south, fledge_url):\n    south_service = add_south(\"sinusoid\", None, fledge_url, \"test_service\")\n```\n\n#### `add_north`\nAdds and configures north services/tasks:\n```python\ndef test_with_north_task(self, add_north, fledge_url):\n    north_task = add_north(fledge_url, \"http_north\", None, \"make\", \"test_task\")\n```\n\n#### `add_service`\nAdds generic services:\n```python\ndef test_with_service(self, add_service, fledge_url):\n    service = add_service(fledge_url, \"notification\", None, 3, \"make\", \"test_svc\")\n```\n\n### Utility Fixtures\n\n#### `wait_time` and `retries`\nControl test timing and retry behavior:\n```python\ndef test_with_timing(self, fledge_url, wait_time, retries):\n    time.sleep(wait_time)  # Default: 5 seconds\n    # Retry logic using retries count (default: 3)\n```\n\n#### `remove_data_file` and `remove_directories`\nCleanup utilities for test data:\n```python\ndef test_with_cleanup(self, remove_data_file, remove_directories):\n    # Test creates 
files/directories\n    remove_data_file(\"/path/to/test/file\")\n    remove_directories(\"/path/to/test/dir\")\n```\n\n## Test Configuration\n\n### pytest Configuration\n- **File**: [tests/system/python/pytest.ini](mdc:tests/system/python/pytest.ini)\n- **Default Options**: `--wait-time=6 --retries=4`\n- **Command Line Options**: Extensive options for test customization\n\n### Common Command Line Options\n```bash\n# Basic test execution\npytest tests/system/python/api/test_authentication.py\n\n# With custom Fledge URL\npytest --fledge-url=192.168.1.100:8081 tests/system/python/api/\n\n# With different storage plugin\npytest --storage-plugin=postgres tests/system/python/api/\n\n# With custom timing\npytest --wait-time=10 --retries=5 tests/system/python/api/\n```\n\n### Available Command Line Arguments\n- `--storage-plugin`: Database plugin (\"sqlite\", \"postgres\", \"sqlitelb\")\n- `--readings-plugin`: Readings plugin (\"Use main plugin\", \"sqlitememory\", etc.)\n- `--fledge-url`: Fledge server URL (default: \"localhost:8081\")\n- `--wait-time`: Generic wait time between processes (default: 5)\n- `--retries`: Number of retry attempts (default: 3)\n- `--south-branch`, `--north-branch`: Plugin branch names for installation\n- `--use-pip-cache`: Use pip cache for plugin installations\n\n## API Testing Best Practices\n\n### Test Design Principles\n- **Environment Isolation**: Use `reset_and_start_fledge` for clean test environments\n- **Fixture Dependencies**: Properly order fixtures based on dependencies\n- **Response Validation**: Validate both status codes and response content\n- **Error Scenarios**: Test both success and failure paths\n- **Authentication**: Handle both authenticated and unauthenticated scenarios\n\n### Common Testing Patterns\n\n#### Basic API Endpoint Test\n```python\nclass TestEndpoint:\n    def test_get_endpoint(self, reset_and_start_fledge, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", 
\"/fledge/endpoint\")\n        r = conn.getresponse()\n        assert 200 == r.status\n        jdoc = json.loads(r.read().decode())\n        assert \"expected_key\" in jdoc\n```\n\n#### Authentication Flow Test\n```python\nclass TestAuthentication:\n    def test_login_flow(self, fledge_url, authentication, reset_and_start_fledge):\n        # Login\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", \n                    json.dumps({\"username\": \"user\", \"password\": \"password\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        jdoc = json.loads(r.read().decode())\n        token = jdoc[\"token\"]\n        \n        # Use token for authenticated request\n        conn.request(\"GET\", \"/fledge/protected\", headers={\"authorization\": token})\n        r = conn.getresponse()\n        assert 200 == r.status\n```\n\n#### Service Configuration Test\n```python\nclass TestServiceConfiguration:\n    def test_service_creation(self, add_south, fledge_url):\n        service_name = \"test_service\"\n        config = {\"key\": \"value\"}\n        service = add_south(\"plugin_name\", None, fledge_url, service_name, config)\n        assert service[\"name\"] == service_name\n```\n\n### Error Testing Guidelines\n- **Invalid Requests**: Test malformed JSON, missing parameters\n- **Authentication Errors**: Test unauthorized access, invalid tokens\n- **Resource Not Found**: Test non-existent endpoints and resources\n- **Constraint Violations**: Test duplicate names, invalid configurations\n- **Server Errors**: Test scenarios that trigger 500-level responses\n\n### Data Validation Patterns\n- **JSON Structure**: Validate response JSON contains expected keys\n- **Data Types**: Verify correct data types in responses\n- **Value Ranges**: Check that numeric values are within expected ranges\n- **String Formats**: Validate UUIDs, timestamps, and formatted strings\n- **Array Contents**: Check array lengths and 
element structures\n\n## Integration with System Testing\n\n### Plugin Installation Testing\n- Use fixtures like `add_south`, `add_north` for plugin integration tests\n- Test both `make` and `package` installation types\n- Validate plugin discovery and configuration\n\n### Multi-Service Testing\n- Combine multiple fixtures to test service interactions\n- Test data flow between south and north services\n- Validate configuration propagation across services\n\n### Performance and Timing\n- Use `wait_time` and `retries` for timing-sensitive tests\n- Account for Fledge startup and shutdown times\n- Test timeout scenarios and retry mechanisms\n\n## Test Execution Guidelines\n\n### Local Development\n```bash\n# Run all API tests\ncd tests/system/python\npytest api/\n\n# Run specific test file\npytest api/test_authentication.py\n\n# Run with output capturing disabled\npytest -s api/test_authentication.py\n\n# Run with verbose output\npytest -v api/test_configuration.py\n\n# Run with very verbose output\npytest -vv api/test_configuration.py\n\n# Run with custom Fledge instance\npytest --fledge-url=localhost:8082 api/\n```\n\n### Continuous Integration\n- Use matrix testing for different storage plugins\n- Test across multiple Python versions\n- Validate on different deployment architectures\n- Include both authenticated and unauthenticated test runs\n"
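The `wait_time` and `retries` fixtures described above are usually consumed by an inline polling loop. A minimal sketch of that pattern is below; the helper name `poll_until` and the injectable `sleeper` parameter are hypothetical conveniences, not part of the Fledge test suite.

```python
"""Illustrative retry helper matching the wait_time/retries fixture pattern."""
import time


def poll_until(check, retries=3, wait_time=5, sleeper=time.sleep):
    """Call `check` until it returns a truthy value or attempts run out.

    Args:
        check: Zero-argument callable, e.g. one that opens an
            http.client.HTTPConnection and returns the parsed JSON
            once the response status is 200.
        retries: Number of attempts (the --retries option, default 3).
        wait_time: Seconds between attempts (the --wait-time option, default 5).
        sleeper: Injectable sleep function, overridable in unit tests.

    Returns:
        The first truthy value produced by `check`.

    Raises:
        AssertionError: If no attempt succeeds.
    """
    for attempt in range(retries):
        result = check()
        if result:
            return result
        # Only sleep between attempts, not after the final failure.
        if attempt < retries - 1:
            sleeper(wait_time)
    raise AssertionError("condition not met after {} retries".format(retries))
```

In a test this might wrap a ping check, e.g. a lambda that requests `/fledge/ping` and returns the decoded JSON document only when the status is 200.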
  },
  {
    "path": ".cursor/rules/tests/python/unit.mdc",
    "content": "---\ndescription: \"Python unit testing rules for Fledge - test organization, framework, execution, and coverage\"\nglobs: \"tests/unit/python/**/*.py,**/test_*.py,python/requirements-test.txt\"\nalwaysApply: false\nauthor: \"Ashish Jabble\"\n---\n\n# Python Unit Testing Guidelines\n\n## Test Organization & Structure\n\n### Directory Structure\n- **Unit Tests**: Located in [tests/unit/python/](mdc:tests/unit/python/) \n- **Test Instructions**: Follow detailed guidelines in [tests/README.rst](mdc:tests/README.rst)\n- **File Structure**: Tests should mirror the component structure under `tests/unit/python/fledge/<component>`\n- **Template**: Use [tests/unit/python/__template__.py](mdc:tests/unit/python/__template__.py) as starting point\n\n### Test File Conventions\n- **Naming**: Test files must begin with `test_` for pytest auto-discovery\n- **Pattern**: `test_<module_name>.py`\n- **Location**: Place tests in correct directory matching component structure\n- **Imports**: Follow Fledge import patterns and avoid circular dependencies\n- **Docstrings**: Include Pydoc-compatible docstrings for test classes and methods\n\n### Test Class & Method Organization\n- Group related tests in classes using `TestClassName` pattern\n- Use descriptive test method names: `test_should_return_success_when_valid_input`\n- Organize tests logically: happy path, edge cases, error conditions\n- Use pytest fixtures for common setup and teardown\n- Keep tests focused and atomic - one assertion per test when possible\n\n## Testing Framework & Dependencies\n\n### Primary Framework\n- **Framework**: pytest (version specified in [python/requirements-test.txt](mdc:python/requirements-test.txt))\n- **Dependencies**: All testing dependencies are managed in requirements-test.txt\n- **Dependency Management**: Reference requirements-test.txt for current versions - do not hardcode versions in documentation\n\n### Core Testing Dependencies\nKey testing packages (see 
[python/requirements-test.txt](mdc:python/requirements-test.txt) for current versions):\n- `pytest` - Main testing framework\n- `pytest-asyncio` - For async testing support\n- `pytest-mock` - Mocking framework integration\n- `pytest-cov` - Code coverage reporting\n- `pytest-aiohttp` - aiohttp testing utilities\n- `pylint` - Code quality and linting\n\n### Additional Testing Dependencies\n- `requests` - For HTTP client testing\n- `pyserial` - For RTU serial testing\n- `pytz` - Timezone handling in tests\n- `aiohttp` and `yarl` - Keep versions synchronized with main requirements\n\n## Test Configuration\n\n### pytest Configuration\n- **Configuration File**: [tests/unit/python/.pytest.ini](mdc:tests/unit/python/.pytest.ini)\n- **Minimum Version**: Check requirements-test.txt for current pytest version\n- **Excluded Directories**: Plugin directories excluded from test recursion\n- **Test Discovery**: Automatic discovery of test_*.py files\n\n### Coverage Configuration\n- **Configuration File**: [tests/unit/python/.coveragerc](mdc:tests/unit/python/.coveragerc)\n- **Omitted Files**: \n  - `__init__.py` and `__template__.py` files\n  - Setup files and plugin directories\n  - Test directories themselves\n- **Coverage Scope**: Focus on core Fledge components, exclude plugin frameworks\n\n## Test Execution\n\n### Basic pytest Commands\nRefer to [tests/README.rst](mdc:tests/README.rst) for complete instructions:\n\n```bash\n# Execute all tests in specific file\npytest test_filename.py\n\n# Execute specific test class\npytest test_filename.py::TestClass\n\n# Execute specific test method\npytest test_filename.py::TestClass::test_case\n\n# Verbose output with detailed information\npytest -s -vv\n\n# Run tests with coverage\npytest --cov=. --cov-report=html\n```\n\n### Advanced Test Execution\n```bash\n# Run tests with full coverage report\npytest -s -vv tests/unit/python/fledge/ --cov=. 
--cov-report=html --cov-config tests/unit/python/.coveragerc\n\n# Run tests with XML coverage for CI/CD\npytest --cov=. --cov-report html:coverage_html --cov-report xml:coverage.xml\n\n# Run specific test patterns\npytest -k \"test_pattern_name\"\n\n```\n\n## Code Coverage\n\n### Coverage Configuration\n- **Tool**: pytest-cov framework integration\n- **Config File**: [tests/unit/python/.coveragerc](mdc:tests/unit/python/.coveragerc)\n- **Output Formats**: HTML, XML, and terminal reports\n- **Exclusions**: Configured to omit template files, plugins, and test directories\n\n### Coverage Commands\n\n#### Basic Coverage Reports\n```bash\n# Terminal coverage report (default)\npytest --cov=. --cov-report=term\n\n# Terminal with missing lines shown\npytest --cov=. --cov-report=term-missing\n\n# HTML coverage report (recommended for development)\npytest --cov=. --cov-report=html\n\n# JSON coverage report for tools integration\npytest --cov=. --cov-report=json\n\n# XML coverage report for CI/CD systems\npytest --cov=. --cov-report=xml\n```\n\n#### Comprehensive Coverage Commands\n```bash\n# Full coverage with HTML and XML (for CI/CD)\npytest --cov=. --cov-report=html:coverage_html --cov-report=xml:coverage.xml --cov-config=tests/unit/python/.coveragerc\n\n# Coverage with specific source directory and custom config\npytest tests/unit/python/fledge/ --cov=fledge --cov-report=html --cov-config=tests/unit/python/.coveragerc\n\n# Coverage with minimum percentage threshold (fail if below)\npytest --cov=. --cov-report=term --cov-fail-under=80\n\n# Coverage with detailed terminal output and HTML\npytest --cov=. 
--cov-report=term-missing --cov-report=html:htmlcov\n\n# Coverage for specific modules only\npytest --cov=fledge.services.core --cov=fledge.common --cov-report=html\n```\n\n#### Coverage Report Analysis\n```bash\n# Generate coverage report after test run\ncoverage report\n\n# Generate detailed HTML report\ncoverage html\n\n# Show missing lines for specific file\ncoverage report --show-missing\n\n# Coverage report with branch coverage\npytest --cov=. --cov-branch --cov-report=html\n```\n\n### Coverage Best Practices\n\n#### Coverage Targets & Thresholds\n- **Meaningful Coverage**: Aim for meaningful test coverage, not just high numbers\n- **Minimum Thresholds**: Set reasonable minimum coverage (e.g., 80% for core modules)\n- **Critical Paths**: Require higher coverage (90%+) for business logic and critical code paths\n- **New Code**: Ensure new code has high test coverage before merging\n- **Branch Coverage**: Include branch coverage for conditional logic testing\n\n#### Coverage Configuration\n- **Exclude Appropriately**: Use .coveragerc to exclude boilerplate and framework code\n- **Include Patterns**: Focus coverage on source code, exclude tests and third-party code\n- **Source Directories**: Specify source directories to avoid including test files in coverage\n- **Precision**: Set appropriate precision for coverage reporting (e.g., 1 decimal place)\n\n#### Coverage Monitoring & Reporting\n- **Regular Monitoring**: Track coverage trends over time in CI/CD\n- **Coverage Reports**: Generate reports for code review processes\n- **Failed Builds**: Fail builds if coverage drops below threshold\n- **Coverage Badges**: Display coverage status in repository README\n- **Trend Analysis**: Monitor coverage changes across commits and releases\n\n#### Coverage Quality Guidelines\n- **Test Quality Over Quantity**: High coverage with poor tests is worse than lower coverage with good tests\n- **Uncovered Code Review**: Regularly review uncovered code to determine if tests are 
needed\n- **Coverage Gaps**: Identify and address significant coverage gaps in critical modules\n- **Integration vs Unit**: Distinguish between unit test coverage and integration test coverage\n- **Documentation**: Document rationale for excluding files from coverage\n\n#### Coverage Anti-Patterns to Avoid\n- **Coverage Gaming**: Writing tests just to increase coverage percentage\n- **Shallow Testing**: Tests that call code but don't verify behavior\n- **Ignoring Branches**: Only testing happy paths without error conditions\n- **Over-Mocking**: Mocking so extensively that tests don't verify real behavior\n- **Coverage-Only Metrics**: Using coverage as the only quality metric\n\n#### Coverage Integration Examples\n\n##### CI/CD Pipeline Integration\n```bash\n# In GitHub Actions, GitLab CI, etc.\npytest --cov=fledge --cov-report=xml --cov-report=html --cov-fail-under=80\n```\n\n##### Coverage with Multiple Output Formats\n```bash\n# Generate multiple report formats simultaneously\npytest --cov=. 
\\\n  --cov-report=term-missing \\\n  --cov-report=html:htmlcov \\\n  --cov-report=xml:coverage.xml \\\n  --cov-report=json:coverage.json \\\n  --cov-fail-under=80\n```\n\n##### Coverage Configuration in pytest.ini\n```ini\n[tool:pytest]\naddopts = --cov=fledge --cov-report=term-missing --cov-report=html --cov-fail-under=80\n```\n\n##### Coverage Badge Generation\n```bash\n# Generate coverage badge (requires coverage-badge package)\ncoverage-badge -o coverage.svg\n```\n\n## Unit Testing Best Practices\n\n### Test Design Principles\n- **Isolation**: Each test should be independent and not rely on other tests\n- **Repeatability**: Tests should produce consistent results across runs\n- **Fast Execution**: Keep unit tests fast for quick feedback loops (performance is not the focus, but speed aids development)\n- **Clear Assertions**: Use descriptive assertion messages\n- **Focused Scope**: Test one behavior per test method\n- **Deterministic**: Tests should not rely on random data or external timing\n- **Self-Contained**: Tests should set up their own data and clean up afterwards\n\n### Mocking & Fixtures\n- **External Dependencies**: Mock all external dependencies (databases, APIs, file system)\n- **pytest-mock**: Use pytest-mock for integration with pytest fixtures\n- **Fixture Scope**: Use appropriate fixture scopes (function, class, module, session)\n- **Test Data**: Create reusable test data through fixtures\n- **Cleanup**: Ensure proper cleanup of resources and mocks\n\n### Async Testing\n- **pytest-asyncio**: Use pytest-asyncio for testing async functions\n- **Event Loops**: Properly handle event loop lifecycle in tests\n- **Async Fixtures**: Use async fixtures for async setup/teardown\n- **Timeout Handling**: Set appropriate timeouts for async operations\n- **Mock Async**: Properly mock async functions and coroutines\n\n### Error Testing\n- **Exception Testing**: Test both success and failure scenarios\n- **Error Messages**: Verify error messages and types\n- 
**Edge Cases**: Test boundary conditions and edge cases\n- **Input Validation**: Test invalid inputs and malformed data\n- **Resource Exhaustion**: Test behavior under resource constraints\n\n## Platform & Version Testing\n\n### Python Version Compatibility\n- **Target Versions**: Test across Python 3.8.10 through 3.12\n- **Version-Specific**: Use version markers in requirements-test.txt for compatibility\n- **CI/CD Integration**: Use matrices to validate multiple Python versions\n- **Version Checks**: Use `sys.version_info` for version-specific test behavior\n\n### Platform Testing Guidelines\n- **Ubuntu Testing**: Primary development platform (LTS 20.04+)\n- **Raspberry Pi**: ARM architecture testing for deployment compatibility\n- **Cross-Architecture**: Validate functionality on x86_64, aarch64, and armv7l\n- **Dependency Availability**: Ensure test dependencies install correctly across platforms\n- **Functional Validation**: Focus on correctness, not performance characteristics\n\n### Architecture-Specific Testing\n- **x86_64**: Standard Ubuntu development and production\n- **aarch64**: Ubuntu ARM64 and Raspberry Pi OS 64-bit\n- **armv7l**: Raspberry Pi OS 32-bit\n- **Dependencies**: Ensure test dependencies are available across platforms\n- **Compatibility**: Verify unit tests pass consistently across architectures\n- **Environment Differences**: Account for platform-specific behaviors in mocks and fixtures\n"
  },
  {
    "path": ".cursor/services/notification.mdc",
    "content": "# Fledge Notification Service - Feature Development Rules (MDC Format)\n\n---\nmetadata:\n  version: \"1.0.0\"\n  last_updated: \"2024-01-01\"\n  author: \"Fledge Development Team\"\n  service: \"notification-service\"\n  language: \"cpp\"\n  framework: \"custom\"\n  license: \"MIT\"\n---\n\n## Configuration\n\n### Development Environment\n```yaml\ndevelopment:\n  language: \"cpp\"\n  standard: \"c++11\"\n  compiler: \"gcc-11\"\n  build_system: \"cmake\"\n  testing_framework: \"gtest\"\n  linting: \"clang-tidy\"\n  formatting: \"clang-format\"\n  documentation: \"doxygen\"\n```\n\n### Project Structure\n```yaml\nproject_structure:\n  root: \"fledge-service-notification\"\n  directories:\n    src:\n      - core/           # Core business logic\n      - api/            # API endpoints and handlers\n      - storage/        # Data persistence layer\n      - utils/          # Utility functions\n      - tests/          # Unit and integration tests\n    include:            # Header files\n    docs:              # Documentation\n    scripts:           # Build and deployment scripts\n    config:            # Configuration files\n```\n\n### Naming Conventions\n```yaml\nnaming_conventions:\n  classes: \"PascalCase\"\n  methods: \"camelCase\"\n  member_variables: \"m_camelCase\"\n  constants: \"UPPER_SNAKE_CASE\"\n  namespaces: \"lowercase\"\n  files: \"snake_case\"\n  examples:\n    classes: [\"NotificationManager\", \"EmailService\"]\n    methods: [\"sendNotification\", \"validateRecipient\"]\n    member_variables: [\"m_notificationQueue\", \"m_config\"]\n    constants: [\"MAX_RETRY_ATTEMPTS\", \"DEFAULT_TIMEOUT\"]\n```\n\n## Development Rules\n\n### Pre-Development Checklist\n```yaml\npre_development_checklist:\n  architecture:\n    - \"Review existing architecture and module boundaries\"\n    - \"Identify affected components and dependencies\"\n    - \"Plan integration points with existing services\"\n  requirements:\n    - \"Define clear acceptance 
criteria\"\n    - \"Consider backward compatibility requirements\"\n    - \"Plan error handling and edge cases\"\n  observability:\n    - \"Design logging and observability strategy\"\n    - \"Plan metrics collection\"\n    - \"Define monitoring alerts\"\n```\n\n### Code Quality Standards\n```yaml\ncode_quality:\n  complexity:\n    max_function_lines: 50\n    max_nesting_levels: 3\n    max_cyclomatic_complexity: 10\n  memory_management:\n    required: \"raii_smart_pointers\"\n  thread_safety:\n    required: \"mutex_protection\"\n    patterns: [\"lock_guard\", \"unique_lock\", \"atomic_operations\"]\n  error_handling:\n    required: \"exception_based\"\n    forbidden: [\"silent_failures\", \"error_ignoring\"]\n    patterns: [\"try_catch\", \"custom_exceptions\", \"error_logging\"]\n```\n\n### C++ Development Standards\n```yaml\ncpp_standards:\n  memory_management:\n    preferred:\n      - \"std::unique_ptr<Notification> notification(new Notification())\"\n      - \"auto shared = std::make_shared<Notification>()\"\n  \n  error_handling:\n    preferred:\n      - \"class NotificationException : public std::runtime_error\"\n      - \"throw NotificationException(\\\"Invalid recipient: \\\" + recipient)\"\n    patterns:\n      - \"exception_based\"\n      - \"meaningful_error_messages\"\n      - \"proper_logging\"\n  \n  thread_safety:\n    required:\n      - \"mutex_protection_for_shared_resources\"\n      - \"atomic_operations_where_appropriate\"\n    patterns:\n      - \"std::lock_guard<std::mutex> lock(m_mutex)\"\n      - \"std::atomic<int> counter\"\n```\n\n### API Design Standards\n```yaml\napi_design:\n  restful_endpoints:\n    base_path: \"/api/v1\"\n    patterns:\n      - \"GET /notifications\"\n      - \"POST /notifications\"\n      - \"GET /notifications/{id}\"\n      - \"PUT /notifications/{id}\"\n      - \"DELETE /notifications/{id}\"\n  \n  request_models:\n    required_fields:\n      - \"recipient\"\n      - \"subject\"\n      - \"message\"\n      - \"type\"\n    optional_fields:\n      - \"templateId\"\n 
     - \"metadata\"\n    validation:\n      - \"recipient_format\"\n      - \"message_length\"\n      - \"type_enumeration\"\n  \n  response_models:\n    standard_fields:\n      - \"id\"\n      - \"status\"\n      - \"createdAt\"\n      - \"updatedAt\"\n    error_response:\n      - \"error_code\"\n      - \"error_message\"\n      - \"timestamp\"\n```\n\n### Testing Standards\n```yaml\ntesting_standards:\n  unit_tests:\n    required:\n      - \"success_paths\"\n      - \"failure_paths\"\n      - \"edge_cases\"\n      - \"boundary_conditions\"\n    patterns:\n      - \"Arrange-Act-Assert\"\n      - \"Given-When-Then\"\n    naming: \"MethodName_Scenario_ExpectedResult\"\n  \n  integration_tests:\n    required:\n      - \"end_to_end_flows\"\n      - \"api_endpoints\"\n      - \"database_operations\"\n    patterns:\n      - \"TestHttpClient\"\n      - \"TestDatabase\"\n      - \"MockServices\"\n  \n  test_coverage:\n    minimum: 80\n    critical_paths: 100\n    new_features: 90\n```\n\n### Logging and Observability\n```yaml\nlogging_standards:\n  levels:\n    - \"TRACE\"\n    - \"DEBUG\"\n    - \"INFO\"\n    - \"WARN\"\n    - \"ERROR\"\n    - \"FATAL\"\n  \n  structured_logging:\n    required_fields:\n      - \"timestamp\"\n      - \"level\"\n      - \"service\"\n      - \"operation\"\n    optional_fields:\n      - \"request_id\"\n      - \"user_id\"\n      - \"duration\"\n      - \"metadata\"\n  \n  sensitive_data:\n    forbidden_in_logs:\n      - \"passwords\"\n      - \"api_keys\"\n      - \"personal_identifiers\"\n      - \"credit_card_numbers\"\n    redaction_patterns:\n      - \"password=***\"\n      - \"key=***\"\n```\n\n### Security Standards\n```yaml\nsecurity_standards:\n  input_validation:\n    required:\n      - \"recipient_format\"\n      - \"message_length\"\n      - \"type_enumeration\"\n      - \"sql_injection_prevention\"\n    patterns:\n      - \"whitelist_validation\"\n      - \"regex_validation\"\n      - \"length_limits\"\n  \n  authentication:\n    
required:\n      - \"token_validation\"\n      - \"permission_checks\"\n      - \"session_management\"\n    patterns:\n      - \"JWT_tokens\"\n      - \"OAuth2\"\n      - \"API_keys\"\n  \n  authorization:\n    required:\n      - \"role_based_access\"\n      - \"resource_permissions\"\n      - \"audit_logging\"\n    patterns:\n      - \"RBAC\"\n      - \"ABAC\"\n      - \"Permission_matrix\"\n```\n\n### Performance Guidelines\n```yaml\nperformance_guidelines:\n  memory_management:\n    preferred:\n      - \"RAII_principles\"\n      - \"smart_pointers\"\n      - \"move_semantics\"\n    avoid:\n      - \"unnecessary_copies\"\n      - \"memory_leaks\"\n      - \"fragmentation\"\n  \n  async_processing:\n    patterns:\n      - \"thread_pools\"\n      - \"async_await\"\n      - \"future_promise\"\n    use_cases:\n      - \"notification_sending\"\n      - \"batch_processing\"\n      - \"external_api_calls\"\n  \n  caching:\n    strategies:\n      - \"in_memory_cache\"\n      - \"distributed_cache\"\n      - \"cache_invalidation\"\n    patterns:\n      - \"LRU_cache\"\n      - \"TTL_expiration\"\n      - \"cache_warming\"\n```\n\n### Configuration Management\n```yaml\nconfiguration_management:\n  structure:\n    required_sections:\n      - \"database\"\n      - \"email\"\n      - \"logging\"\n      - \"security\"\n    optional_sections:\n      - \"caching\"\n      - \"monitoring\"\n      - \"external_services\"\n  \n  validation:\n    required:\n      - \"type_safety\"\n      - \"value_ranges\"\n      - \"required_fields\"\n    patterns:\n      - \"schema_validation\"\n      - \"environment_validation\"\n      - \"dependency_validation\"\n  \n  environment_specific:\n    development:\n      - \"debug_logging\"\n      - \"mock_services\"\n      - \"local_database\"\n    production:\n      - \"error_logging_only\"\n      - \"real_services\"\n      - \"clustered_database\"\n```\n\n### Documentation Standards\n```yaml\ndocumentation_standards:\n  code_documentation:\n    
required:\n      - \"public_apis\"\n      - \"complex_algorithms\"\n      - \"business_logic\"\n    format: \"doxygen\"\n    tags:\n      - \"@brief\"\n      - \"@param\"\n      - \"@return\"\n      - \"@throws\"\n      - \"@example\"\n  \n  api_documentation:\n    required:\n      - \"endpoint_descriptions\"\n      - \"request_response_examples\"\n      - \"error_codes\"\n    format: \"OpenAPI_3.0\"\n  \n  deployment_documentation:\n    required:\n      - \"build_instructions\"\n      - \"deployment_steps\"\n      - \"configuration_guide\"\n      - \"troubleshooting\"\n```\n\n### Deployment and DevOps\n```yaml\ndeployment_standards:\n  build_system:\n    tool: \"cmake\"\n    minimum_version: \"3.16\"\n    cpp_standard: \"11\"\n    dependencies:\n      - \"gtest\"\n      - \"spdlog\"\n      - \"nlohmann_json\"\n  \n  containerization:\n    base_image: \"gcc:11\"\n    runtime_image: \"debian:bullseye-slim\"\n    multi_stage: true\n    security_scanning: true\n  \n  ci_cd:\n    required_stages:\n      - \"build\"\n      - \"test\"\n      - \"lint\"\n      - \"security_scan\"\n      - \"deploy\"\n    quality_gates:\n      - \"test_coverage >= 80%\"\n      - \"no_critical_vulnerabilities\"\n      - \"build_success\"\n```\n\n## Code Review Checklist\n\n### Architecture & Design\n```yaml\narchitecture_checklist:\n  - \"Code follows established architectural patterns\"\n  - \"No unnecessary coupling between modules\"\n  - \"Clear separation of concerns\"\n  - \"Proper use of inheritance vs composition\"\n  - \"Module boundaries respected\"\n```\n\n### Code Quality\n```yaml\ncode_quality_checklist:\n  - \"All functions are under 50 lines\"\n  - \"No more than 3 levels of nesting\"\n  - \"No code duplication (DRY principle)\"\n  - \"Meaningful variable and function names\"\n  - \"Consistent coding style\"\n  - \"No magic numbers without constants\"\n```\n\n### Testing\n```yaml\ntesting_checklist:\n  - \"Unit tests cover all new functionality\"\n  - \"Integration tests for 
API endpoints\"\n  - \"Edge cases and error conditions tested\"\n  - \"Test names clearly describe behavior\"\n  - \"Mock objects used appropriately\"\n  - \"Test coverage meets minimum requirements\"\n```\n\n### Security\n```yaml\nsecurity_checklist:\n  - \"Input validation implemented\"\n  - \"Authentication/authorization checks\"\n  - \"No sensitive data in logs\"\n  - \"Secure error handling\"\n  - \"No SQL injection vulnerabilities\"\n  - \"Proper secrets management\"\n```\n\n### Performance\n```yaml\nperformance_checklist:\n  - \"No N+1 query patterns\"\n  - \"Efficient algorithms used\"\n  - \"Memory management follows RAII\"\n  - \"Async operations where appropriate\"\n  - \"No blocking operations in hot paths\"\n  - \"Resource cleanup implemented\"\n```\n\n### Documentation\n```yaml\ndocumentation_checklist:\n  - \"Public APIs documented with Doxygen\"\n  - \"README updated if needed\"\n  - \"Code comments explain 'why' not 'what'\"\n  - \"Configuration documented\"\n  - \"Deployment instructions updated\"\n  - \"API documentation current\"\n```\n\n## Anti-Patterns\n\n### Memory Management Anti-Patterns\n```yaml\nmemory_anti_patterns:\n  - \"Missing cleanup in destructors\"\n  - \"Resource leaks in error paths\"\n  - \"Improper ownership semantics\"\n```\n\n### Thread Safety Anti-Patterns\n```yaml\nthread_safety_anti_patterns:\n  - \"Shared mutable state without protection\"\n  - \"Missing mutex locks\"\n  - \"Race conditions in concurrent access\"\n  - \"Improper atomic operation usage\"\n  - \"Deadlock scenarios\"\n```\n\n### Error Handling Anti-Patterns\n```yaml\nerror_handling_anti_patterns:\n  - \"Silent failures\"\n  - \"Catching all exceptions without handling\"\n  - \"Incomplete error recovery\"\n  - \"Insufficient error logging\"\n  - \"Error swallowing\"\n```\n\n### Performance Anti-Patterns\n```yaml\nperformance_anti_patterns:\n  - \"Unnecessary object copying\"\n  - \"Inefficient algorithms\"\n  - \"Blocking operations in hot paths\"\n  - 
\"Memory allocation in performance-critical code\"\n  - \"N+1 query patterns\"\n```\n\n## Feature Development Template\n\n### Template Structure\n```yaml\nfeature_template:\n  interface_definition:\n    - \"Define the feature interface\"\n    - \"Specify public API methods\"\n    - \"Document method signatures\"\n  \n  implementation:\n    - \"Implement the feature class\"\n    - \"Add proper error handling\"\n    - \"Include logging and metrics\"\n    - \"Follow thread safety patterns\"\n  \n  testing:\n    - \"Create unit tests\"\n    - \"Add integration tests\"\n    - \"Test edge cases\"\n    - \"Verify error conditions\"\n  \n  documentation:\n    - \"Document public APIs\"\n    - \"Add usage examples\"\n    - \"Update README if needed\"\n    - \"Include configuration docs\"\n```\n\n### Template Code Structure\n```yaml\ntemplate_code:\n  header_file: \"include/[FeatureName]Service.h\"\n  implementation_file: \"src/core/[FeatureName]Service.cpp\"\n  test_file: \"src/tests/[FeatureName]ServiceTest.cpp\"\n  documentation_file: \"docs/[FeatureName]Service.md\"\n  \n  class_structure:\n    - \"Public interface methods\"\n    - \"Private helper methods\"\n    - \"Member variables\"\n    - \"Constructor and destructor\"\n  \n  test_structure:\n    - \"Setup and teardown\"\n    - \"Success path tests\"\n    - \"Failure path tests\"\n    - \"Edge case tests\"\n```\n\n## Validation Rules\n\n### Code Validation\n```yaml\ncode_validation:\n  static_analysis:\n    - \"clang-tidy\"\n    - \"cppcheck\"\n    - \"sonarqube\"\n  \n  dynamic_analysis:\n    - \"valgrind\"\n    - \"asan\"\n    - \"tsan\"\n  \n  style_checking:\n    - \"clang-format\"\n    - \"cpplint\"\n    - \"custom_style_rules\"\n```\n\n### Test Validation\n```yaml\ntest_validation:\n  coverage_requirements:\n    line_coverage: 80\n    branch_coverage: 70\n    function_coverage: 90\n  \n  test_quality:\n    - \"No flaky tests\"\n    - \"Fast execution\"\n    - \"Clear assertions\"\n    - \"Proper 
mocking\"\n```\n\n### Security Validation\n```yaml\nsecurity_validation:\n  static_analysis:\n    - \"semgrep\"\n    - \"bandit\"\n    - \"custom_security_rules\"\n  \n  dependency_checking:\n    - \"safety\"\n    - \"snyk\"\n    - \"vulnerability_scanning\"\n```\n\n## Compliance\n\n### Standards Compliance\n```yaml\ncompliance:\n  coding_standards:\n    - \"C++11 standard\"\n    - \"MISRA C++ guidelines\"\n    - \"Google C++ Style Guide\"\n    - \"Project-specific conventions\"\n  \n  security_standards:\n    - \"OWASP guidelines\"\n    - \"CWE/SANS Top 25\"\n    - \"Industry best practices\"\n  \n  performance_standards:\n    - \"Response time requirements\"\n    - \"Throughput requirements\"\n    - \"Resource utilization limits\"\n```\n\n### Quality Gates\n```yaml\nquality_gates:\n  build:\n    - \"Successful compilation\"\n    - \"No warnings\"\n    - \"Static analysis passed\"\n  \n  test:\n    - \"All tests passing\"\n    - \"Coverage requirements met\"\n    - \"No flaky tests\"\n  \n  security:\n    - \"No critical vulnerabilities\"\n    - \"Security scan passed\"\n    - \"Dependency audit clean\"\n  \n  performance:\n    - \"Performance benchmarks passed\"\n    - \"Memory usage within limits\"\n    - \"Response time requirements met\"\n```\n\n---\n# End of MDC Configuration\n"
  },
  {
    "path": ".cursor/services/notification_code_review.mdc",
    "content": "---\ndescription: \nglobs: \nalwaysApply: true\n---\n# Fledge Notification Service - Multi-Document Context (MDC)\n\n### Project Overview\nThis MDC file contains comprehensive rules, guidelines, and documentation for AI-assisted development and code review in the Fledge Notification Service project. It combines code review evaluation criteria, git diff analysis techniques, and project-specific standards.\n\n---\n\n## 1. Code Review Evaluation Criteria\n\n### 1.1 Design & Architecture\n- Verify the change fits your system's architectural patterns\n- Avoid unnecessary coupling or speculative features\n- Enforce clear separation of concerns\n- Align with defined module boundaries\n- Check for proper inheritance vs composition decisions\n\n### 1.2 Complexity & Maintainability\n- Ensure control flow remains flat\n- Keep cyclomatic complexity low\n- Abstract duplicate logic (DRY principle)\n- Remove dead or unreachable code\n- Refactor dense logic into testable helper methods\n- Break down complex methods into smaller, focused functions\n\n### 1.3 Functionality & Correctness\n- Confirm new code paths behave correctly under valid and invalid inputs\n- Cover all edge cases\n- Maintain idempotency for retry-safe operations\n- Satisfy all functional requirements or user stories\n- Include robust error-handling semantics\n- Validate input parameters and configuration\n\n### 1.4 Readability & Naming\n- Check that identifiers clearly convey intent\n- Comments should explain *why* (not *what*)\n- Code blocks should be logically ordered\n- No surprising side-effects hide behind deceptively simple names\n- Use consistent naming conventions\n\n### 1.5 Best Practices & Patterns\n- Validate use of language- or framework-specific idioms\n- Adhere to SOLID principles\n- Ensure proper resource cleanup\n- Maintain consistent logging/tracing\n- Clear separation of responsibilities across layers\n- Use RAII and smart pointers for memory management\n\n### 1.6 Test Coverage & 
Quality\n- Verify unit tests for both success and failure paths\n- Include integration tests exercising end-to-end flows\n- Use appropriate mocks/stubs\n- Include meaningful assertions (including edge-case inputs)\n- Test names should accurately describe behavior\n\n### 1.7 Standardization & Style\n- Ensure conformance to style guides (indentation, import/order, naming conventions)\n- Maintain consistent project structure (folder/file placement)\n- Zero new linter or formatter warnings\n- Follow C++11 standards and project conventions\n\n### 1.8 Documentation & Comments\n- Confirm public APIs or complex algorithms have clear in-code documentation\n- Update README, Swagger/OpenAPI, CHANGELOG, or other user-facing docs\n- Use Doxygen-style comments for all public APIs\n- Include `@brief`, `@param`, `@return`, `@throws` tags\n\n### 1.9 Security & Compliance\n- Check input validation and sanitization against injection attacks\n- Ensure proper output encoding\n- Implement secure error handling\n- Check dependency license and vulnerability checks\n- Follow secrets management best practices\n- Enforce authZ/authN where applicable\n\n### 1.10 Performance & Scalability\n- Identify N+1 query patterns or inefficient I/O\n- Check memory management concerns\n- Avoid heavy hot-path computations\n- Consider caching, batching, memoization, async patterns\n- Optimize algorithms where necessary\n\n### 1.11 Observability & Logging\n- Verify that key events emit metrics or tracing spans\n- Use appropriate log levels\n- Redact sensitive data\n- Include contextual information for monitoring and debugging\n- Support post-mortem analysis\n\n### 1.12 CI/CD & DevOps\n- Validate build pipeline integrity\n- Ensure automated test gating\n- Check artifact creation\n- Verify dependency declarations\n- Follow organizational DevOps best practices\n\n---\n\n## 2. 
C++ Specific Standards\n\n### 2.1 Naming Conventions\n- **Classes**: PascalCase (e.g., `NotificationManager`)\n- **Methods**: camelCase (e.g., `setupFilterPipeline()`)\n- **Member variables**: m_camelCase (e.g., `m_filterPipeline`)\n- **Constants**: UPPER_SNAKE_CASE (e.g., `DEFAULT_RETRIGGER_TIME`)\n- **Namespaces**: lowercase (e.g., `std`)\n\n### 2.2 Memory Management\n- Follow RAII principles\n- Avoid manual memory management where possible\n- Ensure proper cleanup in destructors\n\n### 2.3 Error Handling\n- Use exceptions for exceptional conditions\n- Log errors with appropriate log levels\n- Provide meaningful error messages\n- Handle resource failures gracefully\n\n### 2.4 Thread Safety\n- Use mutex for shared resource protection\n- Consider atomic operations where appropriate\n- Document thread safety guarantees\n- Avoid race conditions in concurrent code\n\n---\n\n## 3. Git Diff Analysis Techniques\n\n### 3.1 Understanding Git Diff Commands\n```bash\n# Show file names only\ngit diff --name-only origin/develop...origin/feature-branch\n\n# Show statistics\ngit diff --stat origin/develop...origin/feature-branch\n\n# Show detailed statistics\ngit diff --numstat origin/develop...origin/feature-branch\n\n# Show short statistics\ngit diff --shortstat origin/develop...origin/feature-branch\n\n# Show word-level changes\ngit diff --word-diff origin/develop...origin/feature-branch\n\n# Show context with more lines\ngit diff -U10 origin/develop...origin/feature-branch\n```\n\n### 3.2 Diff Output Interpretation\n\n#### File Statistics Analysis\n```bash\ngit diff --numstat origin/develop...origin/feature-branch\n```\nOutput format: `[insertions] [deletions] [filename]`\n\n#### Change Pattern Recognition\n- **High insertion count**: New functionality or major refactoring\n- **High deletion count**: Code cleanup or breaking changes\n- **Balanced changes**: Refactoring or feature updates\n- **Low net change**: Bug fixes or minor improvements\n\n### 3.3 Three-Way Merge 
Analysis\n```bash\n# Show changes on branch2 since its common ancestor (merge base) with branch1\ngit diff branch1...branch2\n\n# Compare the two branch tips directly\ngit diff branch1..branch2\n```\n\n### 3.4 Change Impact Assessment\n\n#### File-Level Analysis\n1. **Header Files**: Interface changes, new dependencies\n2. **Source Files**: Implementation changes, new functionality\n3. **Test Files**: Test coverage, validation logic\n4. **Configuration Files**: Settings, defaults, options\n\n#### Line-Level Analysis\n1. **Additions (+)**: New code, features, methods\n2. **Deletions (-)**: Removed code, cleanup, breaking changes\n3. **Context**: Surrounding code for understanding changes\n\n### 3.5 Pattern Recognition in Diffs\n\n#### Common Patterns\n1. **New Includes**: `#include <new_header.h>`\n2. **Method Signatures**: Parameter changes, return type changes\n3. **Class Inheritance**: `class X : public Y`\n4. **Member Variables**: `Type* m_variable;`\n5. **Configuration**: JSON structures, default values\n\n#### Red Flags in Diffs\n1. **Manual Memory Management**: `new`/`delete` without smart pointers\n2. **Missing Error Handling**: No try-catch blocks\n3. **Inconsistent Naming**: Mixed naming conventions\n4. **Large Methods**: Methods with many lines added\n5. **Missing Documentation**: New methods without comments\n\n---\n\n## 4. 
Issue Severity Levels\n\n### 4.1 Critical\n- Memory leaks or resource leaks\n- Race conditions or thread safety issues\n- Security vulnerabilities\n- Data corruption risks\n- Build failures or compilation errors\n\n### 4.2 Major\n- Performance issues affecting scalability\n- Architectural violations\n- Missing error handling\n- Incomplete functionality\n- Breaking changes without proper migration\n\n### 4.3 Minor\n- Code style violations\n- Missing documentation\n- Inefficient algorithms\n- Code duplication\n- Minor bugs with workarounds\n\n### 4.4 Enhancement\n- Missing test coverage\n- Performance optimizations\n- Code refactoring opportunities\n- Additional features or improvements\n- Better error messages or logging\n\n---\n\n## 5. Code Review Process\n\n### 5.1 High-Level Summary\nDescribe product impact and engineering approach in 2-3 sentences:\n- **Product impact**: What does this change deliver for users or customers?\n- **Engineering approach**: Key patterns, frameworks, or best practices in use\n\n### 5.2 Fetch and Scope the Diff\n1. Run `git fetch origin` to ensure latest code\n2. Compute `git diff --name-only --diff-filter=M origin/develop...origin/feature-branch`\n3. For each file, run `git diff --quiet origin/develop...origin/feature-branch -- <file>`\n4. Skip files that produce no actual diff hunks\n\n### 5.3 Evaluate Against Criteria\nFor each truly changed file and each diffed hunk, evaluate against the 12 evaluation criteria listed above.\n\n### 5.4 Report Issues\nFor each validated issue, output a nested bullet like this:\n- File: `<path>:<line-range>`\n  - Issue: [One-line summary of the root problem]\n  - Fix: [Concise suggested change or code snippet]\n\n### 5.5 Prioritize Issues\nGroup issues by severity in this order:\n- Critical\n- Major\n- Minor\n- Enhancement\n\n### 5.6 Highlight Positives\nInclude a brief bulleted list of positive findings or well-implemented patterns observed in the diff.\n\n---\n\n## 6. 
Common Issues to Watch For\n\n### 6.1 Memory Management\n- Manual `new`/`delete` without smart pointers\n- Missing cleanup in destructors\n- Resource leaks in error paths\n- Improper ownership semantics\n\n### 6.2 Thread Safety\n- Shared mutable state without protection\n- Missing mutex locks\n- Race conditions in concurrent access\n- Improper atomic operation usage\n\n### 6.3 Error Handling\n- Missing exception handling\n- Incomplete error recovery\n- Silent failures\n- Insufficient error logging\n\n### 6.4 Performance\n- Unnecessary object copying\n- Inefficient algorithms\n- Blocking operations in hot paths\n- Memory allocation in performance-critical code\n\n### 6.5 Code Quality\n- Code duplication\n- Complex methods (>50 lines)\n- Deep nesting (>4 levels)\n- Magic numbers without constants\n- Inconsistent naming\n\n---\n\n## 7. Positive Patterns to Recognize\n\n- Proper use of RAII and smart pointers\n- Clear separation of concerns\n- Comprehensive error handling\n- Good logging and observability\n- Consistent coding style\n- Thorough documentation\n- Appropriate test coverage\n- Performance-conscious design\n- Thread-safe implementations\n- Backward compatibility maintenance\n\n---\n\n## 8. 
Advanced Git Commands for Analysis\n\n```bash\n# Show only function changes\ngit diff -p origin/develop...origin/feature-branch | grep -A 5 -B 5 \"^[+-].*(\"\n\n# Show only structural changes\ngit diff --stat --summary origin/develop...origin/feature-branch\n\n# Show changes with context\ngit diff -U5 origin/develop...origin/feature-branch\n\n# Show only additions\ngit diff --diff-filter=A origin/develop...origin/feature-branch\n\n# Show only deletions\ngit diff --diff-filter=D origin/develop...origin/feature-branch\n\n# Show only modifications\ngit diff --diff-filter=M origin/develop...origin/feature-branch\n\n# Count lines by type\ngit diff origin/develop...origin/feature-branch | grep -c \"^+\"\ngit diff origin/develop...origin/feature-branch | grep -c \"^-\"\n\n# Find new includes\ngit diff origin/develop...origin/feature-branch | grep \"^+#include\"\n\n# Find new class definitions\ngit diff origin/develop...origin/feature-branch | grep \"^+class\"\n\n# Find new method definitions\ngit diff origin/develop...origin/feature-branch | grep \"^+.*(\"\n```\n\n---\n\n## 9. 
Fledge-Specific Guidelines\n\n### 9.1 Notification Service Architecture\n- Follow existing notification patterns and conventions\n- Maintain backward compatibility with existing configurations\n- Use proper plugin architecture for extensibility\n- Implement proper resource cleanup for notification instances\n\n### 9.2 Filter Pipeline Integration\n- Ensure thread-safe filter pipeline operations\n- Implement proper error handling for filter setup\n- Use smart pointers for filter pipeline management\n- Add comprehensive logging for filter operations\n\n### 9.3 Configuration Management\n- Follow Fledge configuration patterns\n- Implement proper category registration/unregistration\n- Handle configuration changes gracefully\n- Validate configuration parameters\n\n### 9.4 Testing Requirements\n- Unit tests for all new functionality\n- Integration tests for filter pipeline workflows\n- Performance tests for large datasets\n- Error condition testing\n\n---\n\n## 10. AI-Assisted Development Guidelines\n\n### 10.1 Code Generation\n- Follow established naming conventions\n- Include proper error handling\n- Add comprehensive documentation\n- Ensure thread safety where applicable\n\n### 10.2 Code Review Assistance\n- Analyze git diffs systematically\n- Identify potential issues early\n- Suggest improvements and optimizations\n- Maintain consistency with existing codebase\n\n### 10.3 Documentation Generation\n- Create clear, concise documentation\n- Include examples and usage patterns\n- Document API changes and breaking changes\n- Maintain up-to-date README files\n\n### 10.4 Testing Assistance\n- Generate comprehensive test cases\n- Include edge case testing\n- Ensure proper test coverage\n- Create integration test scenarios\n\n---\n\n## 11. 
Project-Specific Rules\n\n### 11.1 File Organization\n- Keep header files in `include/` directories\n- Organize source files logically\n- Maintain consistent file naming\n- Group related functionality together\n\n### 11.2 Build System\n- Follow CMake conventions\n- Maintain proper dependency management\n- Ensure cross-platform compatibility\n- Include proper version information\n\n### 11.3 Version Control\n- Use meaningful commit messages\n- Create feature branches for new development\n- Maintain clean git history\n- Follow branching strategies\n\n### 11.4 Code Quality\n- Pass all linting checks\n- Maintain consistent formatting\n- Follow coding standards\n- Include proper comments\n\n---\n\nThis MDC file serves as a comprehensive guide for AI-assisted development and code review in the Fledge Notification Service project. It provides structured evaluation criteria, git diff analysis techniques, and project-specific guidelines to ensure high-quality code development and review processes.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.yml",
    "content": "name: \"🐛 Bug Report\"\ndescription: Create a new ticket for a bug.\ntitle: \"🐛 [BUG] - <title>\"\nlabels: [\"bug\"]\nassignees: Mark Riddoch\nbody:\n  - type: markdown\n    attributes:\n      value: |\n        ### Please read, before you post!\n\n        This is a **BUG REPORT for issues in the existing code**.\n\n        If you have general questions, code handling problems, or ideas, please use:\n\n        - Discussion-board: https://github.com/fledge-iot/fledge/discussions\n        - Slack-Channel: Use the fledge or fledge-help Slack Channel on https://slack.lfedge.org\n\n        Verify first that your issue is not already reported on https://github.com/fledge-iot/fledge/issues\n\n        ---\n  - type: textarea\n    id: description\n    attributes:\n      label: \"Description\"\n      description: Please enter an explicit description of your issue\n      placeholder: Short and explicit description of your incident...\n    validations:\n      required: true\n  - type: input\n    id: platform\n    attributes:\n      label: \"Environment Platform\"\n      description: Please enter the environment details\n      placeholder: Information about the system or platform (e.g., OS, version, architecture).\n    validations:\n      required: true\n  - type: input\n    id: version\n    attributes:\n      label: \"Fledge Version\"\n      description: Please enter the version details\n      placeholder: The specific version of Fledge you are using.\n    validations:\n      required: true\n  - type: dropdown\n    id: installation-method\n    attributes:\n      label: \"Installation\"\n      description: Fledge installation via\n      options:\n        - Source Code\n        - Package based\n        - Docker Container\n    validations:\n      required: true\n  - type: textarea\n    id: reprod\n    attributes:\n      label: \"Steps To Reproduce\"\n      description: Please list the steps required to reproduce the issue\n      value: |\n        1. \n        2. \n        3. \n        4. See error\n      render: bash\n    validations:\n      required: true\n  - type: textarea\n    id: behavior\n    attributes:\n      label: \"Expected Behavior\"\n      description: A clear and concise description of what you expected to happen.\n    validations:\n      required: true\n  - type: textarea\n    id: screenshot\n    attributes:\n      label: \"Screenshots\"\n      description: If applicable, add screenshots to help explain your problem.\n    validations:\n      required: false\n  - type: textarea\n    id: logs\n    attributes:\n      label: \"Logs\"\n      description: Please copy and paste any relevant log (e.g. syslog) output. This will be automatically formatted into code, so no need for backticks.\n      render: bash\n    validations:\n      required: false\n  - type: textarea\n    id: support-bundle\n    attributes:\n      label: \"Support bundle\"\n      description: Please share the support bundle. It would be highly appreciated, as it is essential for further troubleshooting.\n      placeholder: Use the Fledge GUI interface to collect the support bundle. Navigate to the left menu, select the 'Support' menu item, click on 'Request New,' and then download the bundle.\n    validations:\n      required: true\n  - type: markdown\n    attributes:\n      value: |\n        #### Thank you for taking the time to file a bug report! Your report will be reviewed by the team.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/config.yml",
    "content": "blank_issues_enabled: false\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/doc_issue.yml",
    "content": "name: \"📝 Report a documentation issue\"\ndescription: \"Is something wrong, confusing or missing in the docs?\"\nlabels: [\"documentation\"]\nassignees: Mark Riddoch\nbody:\n  - type: input\n    id: version\n    attributes:\n      label: \"Version\"\n      description: Please enter the version from https://fledge-iot.readthedocs.io\n      placeholder: Obtain information about the version (e.g., latest, nightly)\n    validations:\n      required: true\n  - type: textarea\n    id: describe-issue\n    attributes:\n      label: \"Describe the documentation issue\"\n    validations:\n      required: true\n  - type: textarea\n    id: what-solution\n    attributes:\n      label: \"What solution would you like to see?\"\n    validations:\n      required: true\n  - type: markdown\n    attributes:\n      value: |\n        #### Thank you for taking the time to file a docs issue report! Your request will be reviewed by the team.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.yml",
    "content": "name: \"💡 Feature Request\"\ndescription: Create a new ticket for a new feature request\ntitle: \"💡 [REQUEST] - <title>\"\nlabels: [\"question\"]\nassignees: Mark Riddoch\nbody:\n  - type: textarea\n    id: summary\n    attributes:\n      label: \"Summary\"\n      description: Provide a brief explanation of the feature\n      placeholder: Describe in a few lines your feature request\n    validations:\n      required: true\n  - type: textarea\n    id: basic_example\n    attributes:\n      label: \"Basic Example\"\n      description: Indicate here some basic examples of your feature.\n      placeholder: A few specific words about your feature request.\n    validations:\n      required: true\n  - type: textarea\n    id: drawbacks\n    attributes:\n      label: \"Drawbacks\"\n      description: What are the drawbacks/impacts of your feature request?\n      placeholder: Identify the drawbacks and impacts while being neutral on your feature request\n    validations:\n      required: true\n  - type: textarea\n    id: unresolved_question\n    attributes:\n      label: \"Unresolved questions\"\n      description: What questions still remain unresolved?\n      placeholder: Identify any unresolved issues.\n    validations:\n      required: false\n  - type: textarea\n    id: implementation_pr\n    attributes:\n      label: \"Implementation PR\"\n      description: Pull request used\n      placeholder: \"#Pull Request ID\"\n    validations:\n      required: false\n  - type: textarea\n    id: reference_issues\n    attributes:\n      label: \"Reference Issues\"\n      description: Common issues\n      placeholder: \"#Issues IDs\"\n    validations:\n      required: false\n  - type: markdown\n    attributes:\n      value: |\n        #### Thank you for helping us improve the project! Your feature request will be reviewed by the team.\n"
  },
  {
    "path": ".github/workflows/checker.yml",
    "content": "name: Checker\n\non:\n  push:\n    branches: ['**']\n\njobs:\n  test:\n    name: 🛠️ Build on ${{ matrix.os }}\n    runs-on: ${{ matrix.os }}\n    strategy:\n      fail-fast: false\n      matrix:\n        os: [ubuntu-22.04, ubuntu-24.04]\n\n    env:\n      FLEDGE_ROOT: ${{ github.workspace }}\n      PYTHONPATH: ${{ github.workspace }}/python\n\n    steps:\n      - name: 🛎️ Checkout code\n        uses: actions/checkout@v4\n\n      - name: ⚙️ Compile Fledge Core\n        id: make_fledge\n        run: |\n          set -e\n          echo \"⚠️ APT is misinterpreting the mirror+file: scheme as a URL 🌐, causing 404 errors ❌ due to a missing or invalid /etc/apt/apt-mirrors.txt file 📄.\"\n          RELEASE=$(lsb_release -cs)\n          echo \"Using release: $RELEASE\"\n          cat <<EOF | sudo tee /etc/apt/sources.list\n          deb http://archive.ubuntu.com/ubuntu $RELEASE main restricted universe multiverse\n          deb http://archive.ubuntu.com/ubuntu $RELEASE-updates main restricted universe multiverse\n          deb http://archive.ubuntu.com/ubuntu $RELEASE-backports main restricted universe multiverse\n          deb http://security.ubuntu.com/ubuntu $RELEASE-security main restricted universe multiverse\n          EOF\n          sudo apt-get update\n          sudo apt-get install -y --fix-missing\n\n          echo \"🔧 Run setup prerequisites 📦 and compilation of code 🛠️\"\n          cd \"$FLEDGE_ROOT\"\n          sudo ./requirements.sh\n          make -j\"$(nproc)\"\n\n      - name: 🧪 Run C Unit Tests\n        if: steps.make_fledge.outcome == 'success'\n        continue-on-error: true\n        run: |\n          set +e\n          cd \"$FLEDGE_ROOT/tests/unit/C\"\n\n          echo \"🛠️ Installing C test dependencies...\"\n          chmod +x requirements.sh && ./requirements.sh\n\n          echo \"📋 Running C tests...\"\n          chmod +x scripts/RunAllTests.sh && ./scripts/RunAllTests.sh\n\n          mkdir -p \"$FLEDGE_ROOT/reports\"\n          cp -v 
results/*.xml \"$FLEDGE_ROOT/reports/\" || echo \"⚠️ No C test reports found\"\n\n      - name: 🧪 Run Python Unit Tests\n        if: steps.make_fledge.outcome == 'success'\n        continue-on-error: true\n        run: |\n          set +e\n          echo \"🛠️ Installing Python test dependencies...\"\n          python3 -m pip install -Ir python/requirements-test.txt\n\n          echo \"📋 Running Python tests...\"\n          python3 -m pytest -s -vv \\\n            --junit-xml=\"$FLEDGE_ROOT/tests/unit/python/fledge/python_test_output.xml\" \\\n            \"$FLEDGE_ROOT/tests/unit/python/fledge\" \\\n            --tb=line\n\n          mkdir -p \"$FLEDGE_ROOT/reports\"\n          cp -v \"$FLEDGE_ROOT/tests/unit/python/fledge/\"*.xml \"$FLEDGE_ROOT/reports/\" || echo \"⚠️ No Python test report found\"\n\n      # Publish test results to GitHub UI using a third-party action\n      # Note: GitHub Actions does not yet support native test report publishing in the UI\n      # This step uses dorny/test-reporter to visualize test results in the Actions tab\n      - name: 📤 Publish Test Report to GitHub\n        if: steps.make_fledge.outcome == 'success'\n        continue-on-error: true\n        uses: dorny/test-reporter@v1\n        with:\n          name: 📊 Test Results on ${{ matrix.os }}\n          path: ${{ env.FLEDGE_ROOT }}/reports/*.xml\n          reporter: java-junit\n          fail-on-error: true\n\n"
  },
  {
    "path": ".gitignore",
    "content": "\n# vi\n*.swp\n\n# MacOS Finder\n.DS_Store\n._*\n\n# IDE\n*.idea\n.vscode/\n\n# Data / cache files\ndata/etc/storage.json\ndata/etc/sqlite.json\ndata/etc/sqlitelb.json\ndata/etc/certs/*\ndata/var\ndata/support\ndata/scripts\ndata/plugins\ndata/snapshots\ndata/logs\ndata/configure_repo_output.txt\n\n# SQLite3 default db location and after migration\ndata/*.db\ndata/*.db-wal\ndata/*.db-shm\ndata/*.db-journal\nscripts/extras/*.db\n\n/etc/storage.json\n/etc/certs/*\nstorage.json\n\n# Docs\ndocs/_build\ndocs/__pycache__/\ndocs/.cache/\ndocs/plugins\ndocs/services\ndocs/fledge_plugins.rst\n\n# Compiled Object files\n*.pyc\n\n# build specific\n/C/plugins/storage/build\n/C/services/storage/build\n/cmake_build\n/plugins\n/python_build_dir\n/services\n/tasks\n\n# Error Logs\n*.err\n\n# Test files\n*.result\n*.temp\n\n# Keys and certificates\n*.cert\n*.csr\n*.key\n*.cer\n*.crt\n\n.cache/\n\n# Backup\ndata/backup\ndata/etc/backup_postgres_configuration_cache.json\n\n# test\n.pytest_cache\n.coverage\n\n# Async ingest pymodule\npython/async_ingest.so*\n# Filter ingest pymodule\npython/filter_ingest.so*\n\n# Python south & filter plugins\npython/fledge/plugins/south/*\npython/fledge/plugins/filter/*\npython/fledge/plugins/notificationDelivery/*\npython/fledge/plugins/notificationRule/*\n\n# doxygen build\ndoxy/\n\n# aspell backups\n*.bak\ntests/unit/C/build\ntests/unit/C/lib\ntests/unit/C/*/build\n"
  },
  {
    "path": ".readthedocs.yaml",
    "content": "# Read the Docs configuration file for Sphinx projects\n# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details\n\n# Required\nversion: 2\n\n# Set the OS, Python version and other tools you might need\nbuild:\n  os: ubuntu-20.04\n  tools:\n    python: \"3.8\"\n    # You can also specify other tool versions:\n    # nodejs: \"20\"\n    # rust: \"1.70\"\n    # golang: \"1.20\"\n\n# Build documentation in the \"docs/\" directory with Sphinx\nsphinx:\n  configuration: docs/conf.py\n  # You can configure Sphinx to use a different builder, for instance use the dirhtml builder for simpler URLs\n  # builder: \"dirhtml\"\n  # Fail on all warnings to avoid broken references\n  # fail_on_warning: true\n\n# Optionally build your docs in additional formats such as PDF and ePub\n# formats:\n#    - pdf\n#    - epub\n\n# Optional but recommended, declare the Python requirements required\n# to build your documentation\n# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html\npython:\n  install:\n    - requirements: docs/requirements.txt\n"
  },
  {
    "path": "ADOPTERS.MD",
    "content": "# Fledge Adopters\n\n- Beckhoff - PLC Vendor\n- Dianomic - IIoT Software\n- Flir - IR/Gas Cameras\n- General Atomics - Predator Drone\n- Google - Search-ML-Cloud-TPUs\n- JEA - Energy/Water Company\n- [Motorsports.ai](http://motorsports.ai/) - Racing Digital Twins\n- Nexcom - Industrial Gateways\n- Nokia - Wireless Communications\n- OSIsoft - Data Infrastructure\n- Rovisys - Industrial SI\n- Transpara - HMI for Process Manufacturers\n- Wago - PLC Vendor\n- Zededa - VMs for IoT\n- RTE France - T&D\n- Neuman Aluminium"
  },
  {
    "path": "C/common/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.4.0)\n\nif(COMMAND cmake_policy)\n    cmake_policy(SET CMP0003 NEW)\nendif(COMMAND cmake_policy)\n\n# Get the os name\nexecute_process(COMMAND bash -c \"cat /etc/os-release | grep -w ID | cut -f2 -d'='\"\n                                OUTPUT_VARIABLE\n                                OS_NAME\n                                OUTPUT_STRIP_TRAILING_WHITESPACE)\n\nif( POLICY CMP0007 )\n    cmake_policy( SET CMP0007 NEW )\nendif()\nproject(common-lib)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\nset(UUIDLIB -luuid)\n\nset(BOOST_COMPONENTS system thread)\n# Late 2017 TODO: remove the following checks and always use std::regex\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        set(BOOST_COMPONENTS ${BOOST_COMPONENTS} regex)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DUSE_BOOST_REGEX\")\n    endif()\nendif()\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\n# Find python3.x dev/lib package\nfind_package(PkgConfig REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 REQUIRED COMPONENTS Interpreter Development NumPy)\nendif()\n\n# Find source files\nfile(GLOB SOURCES *.cpp)\n\n# Include header files\ninclude_directories(include ../services/common/include ../common/include ../thirdparty/rapidjson/include ../thirdparty/Simple-Web-Server)\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS} ${Python3_NUMPY_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\nset(CMAKE_LIBRARY_OUTPUT_DIRECTORY 
${PROJECT_BINARY_DIR}/../lib)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\n# Link the Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    target_link_libraries(${PROJECT_NAME} ${PYTHON_LIBRARIES})\nelse()\n    target_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES} Python3::NumPy)\nendif()\n\ntarget_link_libraries(${PROJECT_NAME} ${UUIDLIB})\ntarget_link_libraries(${PROJECT_NAME} ${Boost_LIBRARIES})\ntarget_link_libraries(${PROJECT_NAME} -lcrypto)\n\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/lib)\n"
  },
  {
    "path": "C/common/JSONPath.cpp",
    "content": "/*\n * Fledge RapidJSON JSONPath search helper\n *\n * Copyright (c) 2020 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <JSONPath.h>\n#include <logger.h>\n#include <cstring>\n#include <stdexcept>\n\nusing namespace std;\nusing namespace rapidjson;\n\nJSONPath::JSONPath(const string& path) : m_path(path)\n{\n\tm_logger = Logger::getLogger();\n}\n\n/**\n * Destructor for the JSONPath\n *\n * Reclaim the vector of components.\n */\nJSONPath::~JSONPath()\n{\n\tfor (size_t i = 0; i < m_parsed.size(); i++)\n\t{\n\t\tdelete m_parsed[i];\n\t}\n}\n\n/**\n * Find the matching node in the JSON document\n *\n * @param root\tThe root node to search from\n * @return the matching node. Throws an exception if there was no match\n */\nValue *JSONPath::findNode(Value& root)\n{\n\tif (m_parsed.size() == 0)\n\t{\n\t\tparse();\n\t}\n\n\tValue *node = &root;\n\n\tfor (size_t i = 0; i < m_parsed.size(); i++)\n\t{\n\t\tnode = m_parsed[i]->match(node);\n\t}\n\treturn node;\n}\n\n/**\n * Parse the m_path JSON path. 
Throws an exception if there\n * was a parse error.\n *\n * The supported elements are\n * \tLiteral object name\t\t/a\n * \tArray Index\t\t\ta[1]\n * \tArray with matching predicate\ta[name==value]\n */\nvoid JSONPath::parse()\n{\nchar *path, *ptr, *sp;\n\n\tpath = strdup(m_path.c_str());\n\tptr = strtok_r(path, \"/\", &sp);\n\twhile (ptr)\n\t{\n\t\tchar *p = ptr;\n\t\tchar *bstart = NULL, *bend = NULL, *bequal = NULL;\n\t\twhile (*p)\n\t\t{\n\t\t\tif (*p == '[')\n\t\t\t{\n\t\t\t\tbstart = p + 1;\n\t\t\t}\n\t\t\tif (*p == ']')\n\t\t\t{\n\t\t\t\tbend = p - 1;\n\t\t\t}\n\t\t\tif (*p == '=' && *(p+1) == '=')\n\t\t\t{\n\t\t\t\tbequal = p;\n\t\t\t}\n\t\t\tp++;\n\t\t}\n\t\tif (bstart == NULL && bend == NULL && bequal == NULL)\n\t\t{\n\t\t\tstring s(ptr);\n\t\t\tm_parsed.push_back(new LiteralPathComponent(s));\n\t\t}\n\t\tif (bstart != NULL && bend != NULL)\n\t\t{\n\t\t\tif (bstart > bend)\n\t\t\t{\n\t\t\t\tm_logger->error(\"Invalid JSONPath '%s', malformed selector\", m_path.c_str());\n\t\t\t\tgoto done;\n\t\t\t}\n\t\t\t*(bstart - 1) = 0;\n\t\t\tstring name(ptr);\n\t\t\tif (bequal == NULL)\n\t\t\t{\n\t\t\t\tchar *eptr;\n\t\t\t\tlong index = strtol(bstart, &eptr, 10);\n\t\t\t\tif (eptr != bend + 1)\n\t\t\t\t{\n\t\t\t\t\tm_logger->error(\"Invalid JSONPath '%s', expected numeric selector\", m_path.c_str());\n\t\t\t\t\tgoto done;\n\t\t\t\t}\n\t\t\t\tm_parsed.push_back(new IndexPathComponent(name, index));\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tchar *property = bstart;\n\t\t\t\tchar *value = bequal + 2;\n\t\t\t\t*(bend + 1) = 0;\n\t\t\t\t*bequal = 0;\n\t\t\t\tstring p(property), v(value);\n\t\t\t\tm_parsed.push_back(new MatchPathComponent(name, p, v));\n\t\t\t}\n\t\t}\n\n\t\tptr = strtok_r(NULL, \"/\", &sp);\n\t}\ndone:\n\tfree(path);\n}\n\n/**\n * A match against a literal path component\n */\nJSONPath::LiteralPathComponent::LiteralPathComponent(string& name) : m_name(name)\n{\n}\n\n/**\n * Return the child object of node that matches the literal name given\n *\n * @param node\tThe node to match\n 
* @return pointer to the matching node\n */\nrapidjson::Value *JSONPath::LiteralPathComponent::match(rapidjson::Value *node)\n{\n\tif (node->IsObject() && node->HasMember(m_name.c_str()))\n\t{\n\t\treturn &((*node)[m_name.c_str()]);\n\t}\n\tthrow runtime_error(\"Document has no member \" + m_name);\n}\n\n/**\n * A match against an array index\n */\nJSONPath::IndexPathComponent::IndexPathComponent(string& name, int index) : m_name(name), m_index(index)\n{\n}\n\n/**\n * Return the object at the index position of the specified array\n *\n * @param node\tThe node to match\n * @return pointer to the matching node\n */\nrapidjson::Value *JSONPath::IndexPathComponent::match(rapidjson::Value *node)\n{\n\tif (node->IsObject() && node->HasMember(m_name.c_str()))\n\t{\n\t\tValue& n  = (*node)[m_name.c_str()];\n\t\tif (n.IsArray() && m_index >= 0 && (SizeType)m_index < n.Size())\n\t\t{\n\t\t\treturn &n[(SizeType)m_index];\n\t\t}\n\t}\n\tthrow runtime_error(\"Document has no member \" + m_name + \" or it is not an array\");\n}\n\n/**\n * A match against an object that has a particular name/value pair\n */\nJSONPath::MatchPathComponent::MatchPathComponent(string& name, string& property, string& value) : m_name(name), m_property(property), m_value(value)\n{\n}\n\n/**\n * Match a node within an array or object\n *\n * @param node\tThe node to match\n * @return pointer to the matching node\n */\nrapidjson::Value *JSONPath::MatchPathComponent::match(rapidjson::Value *node)\n{\n\tif (node->IsObject() && node->HasMember(m_name.c_str()))\n\t{\n\t\tValue& n  = (*node)[m_name.c_str()];\n\t\tif (n.IsArray())\n\t\t{\n\t\t\tfor (auto& v : n.GetArray())\n\t\t\t{\n\t\t\t\tif (v.IsObject())\n\t\t\t\t{\n\t\t\t\t\tif (v.HasMember(m_property.c_str()))\n\t\t\t\t\t{\n\t\t\t\t\t\tif (v[m_property.c_str()].IsString() \n\t\t\t\t\t\t\t\t&& m_value.compare(v[m_property.c_str()].GetString()) == 0)\n\t\t\t\t\t\t\treturn &v;\n\t\t\t\t\t\tif (v[m_property.c_str()].IsInt())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tlong val = v[m_property.c_str()].GetInt();\n\t\t\t\t\t\t\tlong tval = strtol(m_value.c_str(), NULL, 10);\n\t\t\t\t\t\t\tif (val == tval)\n\t\t\t\t\t\t\t\treturn &v;\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (v[m_property.c_str()].IsDouble())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tdouble val = v[m_property.c_str()].GetDouble();\n\t\t\t\t\t\t\tdouble tval = strtod(m_value.c_str(), NULL);\n\t\t\t\t\t\t\tif (val == tval)\n\t\t\t\t\t\t\t\treturn &v;\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (v[m_property.c_str()].IsBool())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tbool val = v[m_property.c_str()].GetBool();\n\t\t\t\t\t\t\tif (val && (m_value.compare(\"true\") == 0 || m_value.compare(\"TRUE\") == 0))\n\t\t\t\t\t\t\t\treturn &v;\n\t\t\t\t\t\t\tif (val == false && (m_value.compare(\"false\") == 0 || m_value.compare(\"FALSE\") == 0))\n\t\t\t\t\t\t\t\treturn &v;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tthrow runtime_error(string(\"Document has no member \") + m_name + string(\" or it does not have a \") + m_property + \" property\");\n}\n"
  },
  {
    "path": "C/common/acl.cpp",
    "content": "/*\n * Fledge category management\n *\n * Copyright (c) 2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <logger.h>\n#include <stdexcept>\n#include <acl.h>\n#include <rapidjson/document.h>\n#include \"rapidjson/error/error.h\"\n#include \"rapidjson/error/en.h\"\n#include <storage_client.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\n/**\n * ACLReason constructor:\n * parse input JSON for ACL change reason.\n *\n * JSON should have string attributes 'reason' and 'argument'\n *\n * @param  json\t\tThe JSON reason string to parse\n * @throws \t\texception ACLReasonMalformed\n */\nACL::ACLReason::ACLReason(const string& json)\n{\n\tDocument doc;\n\tdoc.Parse(json.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tLogger::getLogger()->error(\"ACL Reason parse error in %s: %s at %d\",\n\t\t\t\t\tjson.c_str(),\n\t\t\t\t\tGetParseError_En(doc.GetParseError()),\n\t\t\t\t\t\t\t(unsigned)doc.GetErrorOffset());\n\t\tthrow new ACLReasonMalformed();\n\t}\n\n\tif (!doc.IsObject())\n\t{\n\t\tLogger::getLogger()->error(\"ACL Reason is not a JSON object: %s\",\n\t\t\t\t\tjson.c_str());\n\t\tthrow new ACLReasonMalformed();\n\t}\n\n\tif (doc.HasMember(\"reason\") && doc[\"reason\"].IsString())\n\t{\n\t\tm_reason = doc[\"reason\"].GetString();\n\t}\n\tif (doc.HasMember(\"argument\") && doc[\"argument\"].IsString())\n\t{\n\t\tm_argument = doc[\"argument\"].GetString();\n\t}\n}\n\n/**\n * ACL constructor:\n * parse input JSON for ACL content.\n *\n * JSON should have string attributes 'name' and 'service' and 'url' arrays\n *\n * @param  json\t\tThe JSON ACL content to parse\n * @throws \t\texception ACLMalformed\n */\nACL::ACL(const string& json)\n{\n\tDocument doc;\n\tdoc.Parse(json.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tLogger::getLogger()->error(\"ACL parse error in %s: %s at 
%d\",\n\t\t\t\t\tjson.c_str(),\n\t\t\t\t\tGetParseError_En(doc.GetParseError()),\n\t\t\t\t\t\t\t(unsigned)doc.GetErrorOffset());\n\t\tthrow new ACLMalformed();\n\t}\n\n\tLogger::getLogger()->debug(\"ACL content is %s\", json.c_str());\n\n\tif (!doc.HasMember(\"name\"))\n\t{\n\t\tLogger::getLogger()->error(\"Missing 'name' attribute in ACL JSON data\");\n\t\tthrow new ACLMalformed();\n\t}\n\tif (doc.HasMember(\"name\") && doc[\"name\"].IsString())\n\t{\n\t\tm_name = doc[\"name\"].GetString();\n\t}\n\n\t// Check for service array item\n\tif (doc.HasMember(\"service\") && doc[\"service\"].IsArray())\n\t{\n\t\tauto &items = doc[\"service\"];\n\t\tfor (auto& item : items.GetArray())\n\t\t{\n\t\t\tif (!item.IsObject())\n\t\t\t{\n\t\t\t\tthrow new ACLMalformed();\n\t\t\t}\n\t\t\tfor (Value::ConstMemberIterator itr = item.MemberBegin();\n\t\t\t\t\t\t\titr != item.MemberEnd();\n\t\t\t\t\t\t\t++itr)\n\t\t\t{\n\t\t\t\t// Construct KeyValueItem object\n\t\t\t\tKeyValueItem i(itr->name.GetString(),\n\t\t\t\t\t\titr->value.GetString());\n\n\t\t\t\t// Add object to the vector\n\t\t\t\tm_service.push_back(i);\n\t\t\t}\n\t\t}\n\t}\n\n\t// Check for url array item\n\tif (doc.HasMember(\"url\") && doc[\"url\"].IsArray())\n\t{\n\t\tauto &items = doc[\"url\"];\n\t\tfor (auto& item : items.GetArray())\n\t\t{\n\t\t\tif (!item.IsObject())\n\t\t\t{\n\t\t\t\tthrow new ACLMalformed();\n\t\t\t}\n\n\t\t\tstring url = item[\"url\"].GetString();\n\t\t\tValue &acl = item[\"acl\"]; \n\t\t\tvector<KeyValueItem> v_acl;\t\n\n\t\t\t// Check for acl array\n\t\t\tif (acl.IsArray())\n\t\t\t{\n\t\t\t\tfor (auto& item : acl.GetArray())\n\t\t\t\t{\n\t\t\t\t\tif (!item.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tthrow new ACLMalformed();\n\t\t\t\t\t}\n\t\t\n\t\t\t\t\tfor (Value::ConstMemberIterator itr = item.MemberBegin();\n\t\t\t\t\t\t\t\t\titr != item.MemberEnd();\n\t\t\t\t\t\t\t\t\t++itr)\n\t\t\t\t\t{\n\t\t\t\t\t\t// Construct KeyValueItem object\n\t\t\t\t\t\tKeyValueItem 
item(itr->name.GetString(),\n\t\t\t\t\t\t\t\titr->value.GetString());\n\n\t\t\t\t\t\t// Add object to the ACL vector\n\t\t\t\t\t\tv_acl.push_back(item);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t}\n\n\t\t\t// Construct UrlItem with url and ACL vector\n\t\t\tUrlItem u(url, v_acl);\n\n\t\t\t// Add object to the URL vector\n\t\t\tm_url.push_back(u);\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "C/common/aggregate.cpp",
    "content": "/*\n * Fledge storage service client\n *\n * Copyright (c) 2018 OSIsoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <aggregate.h>\n#include <string>\n#include <sstream>\n#include <iostream>\n\nusing namespace std;\n\n\n/**\n * Return the JSON payload for a where clause\n */\nstring Aggregate::toJSON()\n{\nostringstream json;\n\n\tjson << \"{ \\\"column\\\" : \\\"\" << m_column << \"\\\",\";\n\tjson << \" \\\"operation\\\" : \\\"\" << m_operation << \"\\\" }\";\n\treturn json.str();\n}\n"
  },
  {
    "path": "C/common/asset_tracking.cpp",
    "content": "/*\n * Fledge asset tracking related\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora, Massimiliano Pinto\n */\n\n#include <logger.h>\n#include <asset_tracking.h>\n#include <config_category.h>\n#include \"string_utils.h\"\n\nusing namespace std;\n\n\nAssetTracker *AssetTracker::instance = 0;\n\n/**\n * Worker thread entry point\n */\nstatic void worker(void *arg)\n{\n\tAssetTracker *tracker = (AssetTracker *)arg;\n\ttracker->workerThread();\n}\n\n/**\n * Get asset tracker singleton instance for the current south service\n *\n * @return\tSingleton asset tracker instance\n */\nAssetTracker *AssetTracker::getAssetTracker()\n{\n\treturn instance;\n}\n\n/**\n * AssetTracker class constructor\n *\n * @param mgtClient\t\tManagement client object for this south service\n * @param service  \t\tService name\n */\nAssetTracker::AssetTracker(ManagementClient *mgtClient, string service) \n\t: m_mgtClient(mgtClient), m_service(service), m_updateInterval(MIN_ASSET_TRACKER_UPDATE)\n{\n\tinstance = this;\n\tm_shutdown = false;\n\tm_storageClient = NULL;\n\tm_thread = new thread(worker, this);\n\n\ttry {\n\t\t// Find out the name of the fledge service\n\t\tConfigCategory category = mgtClient->getCategory(\"service\");\n\t\tif (category.itemExists(\"name\"))\n\t\t{\n\t\t\tm_fledgeName = category.getValue(\"name\");\n\t\t}\n\t} catch (exception& ex) {\n\t\tLogger::getLogger()->error(\"Unable to fetch the service category, %s\", ex.what());\n\t}\n\n\ttry {\n\t\t// Get a handle on the storage layer\n\t\tServiceRecord storageRecord(\"Fledge Storage\");\n\t\tif (!m_mgtClient->getService(storageRecord))\n\t\t{\n\t\t\tLogger::getLogger()->fatal(\"Unable to find storage service\");\n\t\t\treturn;\n\t\t}\n\t\tLogger::getLogger()->info(\"Connect to storage on %s:%d\",\n\t\t\t\tstorageRecord.getAddress().c_str(),\n\t\t\t\tstorageRecord.getPort());\n\n\t\t\n\t\tm_storageClient = new 
StorageClient(storageRecord.getAddress(),\n\t\t\t\t\t\tstorageRecord.getPort());\n\t} catch (exception& ex) {\n\t\tLogger::getLogger()->error(\"Failed to create storage client: %s\", ex.what());\n\t}\n\n}\n\n/**\n * Destructor for the asset tracker. We must make sure any pending\n * tuples are written out before the asset tracker is destroyed.\n */\nAssetTracker::~AssetTracker()\n{\n\tm_shutdown = true;\n\t// Signal the worker thread to flush the queue\n\t{\n\t\tunique_lock<mutex> lck(m_mutex);\n\t\tm_cv.notify_all();\n\t}\n\twhile (m_pending.size())\n\t{\n\t\t// Wait for pending queue to drain\n\t\tthis_thread::sleep_for(chrono::milliseconds(10));\n\t}\n\tif (m_thread)\n\t{\n\t\tm_thread->join();\n\t\tdelete m_thread;\n\t\tm_thread = NULL;\n\t}\n\n\tif (m_storageClient)\n\t{\n\t\tdelete m_storageClient;\n\t\tm_storageClient = NULL;\n\t}\n\n\tfor (auto& item : assetTrackerTuplesCache)\n\t{\n\t\tdelete item;\n\t}\n\tassetTrackerTuplesCache.clear();\n\n\tfor (auto& store : storageAssetTrackerTuplesCache)\n\t{\n\t\tdelete store.first;\n\t}\n\tstorageAssetTrackerTuplesCache.clear();\n}\n\n/**\n * Fetch all asset tracking tuples from DB and populate local cache\n *\n * @param plugin  \tPlugin name (unused)\n * @param event  \tEvent name (unused)\n */\nvoid AssetTracker::populateAssetTrackingCache(string /*plugin*/, string /*event*/)\n{\n\ttry {\n\t\tstd::vector<AssetTrackingTuple*>& vec = m_mgtClient->getAssetTrackingTuples(m_service);\n\t\tfor (AssetTrackingTuple* & rec : vec)\n\t\t{\n\t\t\tassetTrackerTuplesCache.emplace(rec);\n\t\t}\n\t\tdelete (&vec);\n\t}\n\tcatch (...)\n\t{\n\t\tLogger::getLogger()->error(\"Failed to populate asset tracking tuples' cache\");\n\t\treturn;\n\t}\n\n\treturn;\n}\n\n/**\n * Check local cache for a given asset tracking tuple\n *\n * @param tuple\t\tTuple to find in cache\n * @return\t\t\tReturns whether tuple is present in cache\n */\nbool AssetTracker::checkAssetTrackingCache(AssetTrackingTuple& 
tuple)\t\n{\n\tAssetTrackingTuple *ptr = &tuple;\n\tstd::unordered_set<AssetTrackingTuple*>::const_iterator it = assetTrackerTuplesCache.find(ptr);\n\tif (it == assetTrackerTuplesCache.end())\n\t{\n\t\treturn false;\n\t}\n\telse\n\t\treturn true;\n}\n\n/**\n * Lookup tuple in the asset tracker cache\n *\n * @param tuple\t\tThe tuple to lookup\n * @return\t\tNULL if the tuple is not in the cache or the tuple from the cache\n */\nAssetTrackingTuple* AssetTracker::findAssetTrackingCache(AssetTrackingTuple& tuple)\t\n{\n\tAssetTrackingTuple *ptr = &tuple;\n\tstd::unordered_set<AssetTrackingTuple*>::const_iterator it = assetTrackerTuplesCache.find(ptr);\n\tif (it == assetTrackerTuplesCache.end())\n\t{\n\t\treturn NULL;\n\t}\n\telse\n\t{\n\t\treturn *it;\n\t}\n}\n\n/**\n * Add asset tracking tuple via microservice management API and in cache\n *\n * @param tuple\t\tNew tuple to add in DB and in cache\n */\nvoid AssetTracker::addAssetTrackingTuple(AssetTrackingTuple& tuple)\n{\n\tstd::unordered_set<AssetTrackingTuple*>::const_iterator it = assetTrackerTuplesCache.find(&tuple);\n\tif (it == assetTrackerTuplesCache.end())\n\t{\n\t\tAssetTrackingTuple *ptr = new AssetTrackingTuple(tuple);\n\n\t\tassetTrackerTuplesCache.emplace(ptr);\n\n\t\tqueue(ptr);\n\n\t\tLogger::getLogger()->debug(\"addAssetTrackingTuple(): Added tuple to cache: '%s'\", tuple.assetToString().c_str());\n\t}\n}\n\n/**\n * Add asset tracking tuple via microservice management API and in cache\n *\n * @param plugin\tPlugin name\n * @param asset\t\tAsset name\n * @param event\t\tEvent name\n */\nvoid AssetTracker::addAssetTrackingTuple(string plugin, string asset, string event)\n{\n\t// in case of \"Filter\" event, 'plugin' input argument is category name, so remove service name (prefix) & '_' from it\n\tif (event == string(\"Filter\"))\n\t{\n\t\tstring pattern  = m_service + \"_\";\n\t\tif (plugin.find(pattern) != string::npos)\n\t\t\tplugin.erase(plugin.begin(), plugin.begin() + m_service.length() + 
1);\n\n\t}\n\n\tasset = escape(asset);\n\tAssetTrackingTuple tuple(m_service, plugin, asset, event);\n\taddAssetTrackingTuple(tuple);\n}\n\n/**\n * Return the name of the service responsible for a particular event of the named asset\n *\n * @param event\tThe event of interest\n * @param asset\tThe asset we are interested in\n * @return string\tThe service name of the service that ingests the asset\n * @throws exception \tIf the service could not be found\n */\nstring AssetTracker::getService(const std::string& event, const std::string& asset)\n{\n\t// Fetch all asset tracker records\n\tstd::vector<AssetTrackingTuple*>& vec = m_mgtClient->getAssetTrackingTuples();\n\tstring foundService;\n\tfor (AssetTrackingTuple* &rec : vec)\n\t{\n\t\t// Return first service name with given asset and event\n\t\tif (rec->m_assetName == asset && rec->m_eventName == event)\n\t\t{\n\t\t\tfoundService = rec->m_serviceName;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tdelete (&vec);\n\n\t// Return found service or raise an exception\n\tif (foundService != \"\")\n\t{\n\t\treturn foundService;\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"No service found for asset '%s' and event '%s'\",\n\t\t\t\t\tasset.c_str(),\n\t\t\t\t\tevent.c_str());\n\t\tthrow runtime_error(\"No service found for asset '\" + asset + \"' and event '\" + event + \"'\");\n\t}\n}\n\n/**\n * Constructor for an asset tracking tuple table\n */\nAssetTrackingTable::AssetTrackingTable()\n{\n}\n\n/**\n * Destructor for asset tracking tuple table\n */\nAssetTrackingTable::~AssetTrackingTable()\n{\n\tfor (auto t : m_tuples)\n\t{\n\t\tdelete t.second;\n\t}\n}\n\n/**\n * Add a tuple to an asset tracking table\n *\n * @param tuple\tPointer to the asset tracking tuple to add\n */\nvoid\tAssetTrackingTable::add(AssetTrackingTuple *tuple)\n{\n\tauto ret = m_tuples.insert(pair<string, AssetTrackingTuple *>(tuple->getAssetName(), tuple));\n\tif (ret.second == false)\n\t\tdelete tuple;\t// Already exists\n}\n\n/**\n * Find the named asset tuple and return a pointer to 
the asset\n *\n * @param name\tThe name of the asset to lookup\n * @return AssetTrackingTuple*\tThe matching tuple or NULL\n */\nAssetTrackingTuple *AssetTrackingTable::find(const string& name)\n{\n\tauto ret = m_tuples.find(name);\n\tif (ret != m_tuples.end())\n\t\treturn ret->second;\n\treturn NULL;\n}\n\n/**\n * Remove an asset tracking tuple from the table\n */\nvoid AssetTrackingTable::remove(const string& name)\n{\n\tauto ret = m_tuples.find(name);\n\tif (ret != m_tuples.end())\n\t{\n\t\tdelete ret->second;\t// Free the tuple before erase invalidates the iterator\n\t\tm_tuples.erase(ret);\n\t}\n}\n\n/**\n * Queue an asset tuple for writing to the database.\n */\nvoid AssetTracker::queue(TrackingTuple *tuple)\n{\n\tunique_lock<mutex> lck(m_mutex);\n\tm_pending.emplace(tuple);\n\tm_cv.notify_all();\n}\n\n/**\n * Set the update interval for the asset tracker.\n *\n * @param interval The number of milliseconds between update of the asset tracker\n * @return bool\tWas the update accepted\n */\nbool AssetTracker::tune(unsigned long interval)\n{\n\tunique_lock<mutex> lck(m_mutex);\n\tif (interval >= MIN_ASSET_TRACKER_UPDATE)\n\t{\n\t\tm_updateInterval = interval;\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"Attempt to set asset tracker update to less than minimum interval\");\n\t\treturn false;\n\t}\n\treturn true;\n}\n\n/**\n * The worker thread that will flush any pending asset tuples to\n * the database.\n */\nvoid AssetTracker::workerThread()\n{\n\tunique_lock<mutex> lck(m_mutex);\n\twhile (m_pending.empty() && m_shutdown == false)\n\t{\n\t\tm_cv.wait_for(lck, chrono::milliseconds(m_updateInterval));\n\t\tprocessQueue();\n\t}\n\t// Process any items left in the queue at shutdown\n\tprocessQueue();\n}\n\n/**\n * Process the queue of asset tracking tuples\n */\nvoid AssetTracker::processQueue()\n{\nvector<InsertValues>\tvalues;\nstatic bool warned = false;\n\n\twhile (!m_pending.empty())\n\t{\n\t\t// Get first element as TrackingTuple class\n\t\tTrackingTuple *tuple = 
m_pending.front();\n\n\t\t// Write the tuple - ideally we would like a bulk update here or to go direct to the\n\t\t// database. However we need the Fledge service name for that, which is now in\n\t\t// the member variable m_fledgeName\n\n\t\tbool warn = warned;\n\t\t// Call the class specialised processData routine to either:\n\t\t// - insert the asset tracker data via the Fledge API as a fallback\n\t\t// or\n\t\t// - get values for a direct DB operation\n\n\t\tInsertValues iValue = tuple->processData(m_storageClient != NULL,\n\t\t\t\t\t\t\tm_mgtClient,\n\t\t\t\t\t\t\twarn,\n\t\t\t\t\t\t\tm_fledgeName);\n\t\twarned = warn;\n\n\t\t// Collect values for a bulk DB insert once the queue is drained\n\t\tif (iValue.size() > 0)\n\t\t{\n\t\t\tvalues.push_back(iValue);\n\t\t}\n\n\t\t// Remove element\n\t\tm_pending.pop();\n\t}\n\n\t// Queue processed, bulk direct DB data insert could be done\n\tif (m_storageClient && values.size() > 0)\n\t{\n\t\t// Bulk DB insert\n\t\tint n_rows = m_storageClient->insertTable(\"asset_tracker\", values);\n\t\tif (n_rows != (int)values.size())\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"The asset tracker failed to insert all records, %d of %d inserted\",\n\t\t\t\t\tn_rows, (int)values.size());\n\t\t}\n\t}\n}\n\n/**\n * Fetch all storage asset tracking tuples from DB and populate local cache\n */\nvoid AssetTracker::populateStorageAssetTrackingCache()\n{\n\n\ttry {\n\t\tstd::vector<StorageAssetTrackingTuple*>& vec =\n\t\t\t(std::vector<StorageAssetTrackingTuple*>&) m_mgtClient->getStorageAssetTrackingTuples(m_service);\n\n\t\tfor (StorageAssetTrackingTuple* & rec : vec)\n\t\t{\n\t\t\tset<string> setOfDPs = getDataPointsSet(rec->m_datapoints);\n\t\t\tif (setOfDPs.size() == 0)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"%s:%d Datapoints unavailable for service %s\",\n\t\t\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t\t\t__LINE__,\n\t\t\t\t\t\t\tm_service.c_str());\n\t\t\t}\n\t\t\t// Add item into 
cache\n\t\t\tstorageAssetTrackerTuplesCache.emplace(rec, setOfDPs);\n\t\t}\n\t\tdelete (&vec);\n\t}\n\tcatch (...)\n\t{\n\t\tLogger::getLogger()->error(\"%s:%d Failed to populate storage asset \" \\\n\t\t\t\t\t\"tracking tuples' cache\",\n\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t__LINE__);\n\t\treturn;\n\t}\n\n\treturn;\n}\n\n/**\n * Take a string of datapoints in comma-separated format and return\n * the set of datapoint name strings\n *\n * @param strDatapoints\tComma-separated datapoint names\n * @return\t\tSet of datapoint name strings\n */\nstd::set<std::string> AssetTracker::getDataPointsSet(std::string strDatapoints)\n{\n\tstd::set<std::string> tokens;\n\tstringstream st(strDatapoints);\n\tstd::string temp;\n\n\twhile (getline(st, temp, ','))\n\t{\n\t\ttokens.insert(temp);\n\t}\n\n\treturn tokens;\n}\n\n/**\n * Return Plugin Information in the Fledge configuration\n *\n * @return bool True if the plugin info could be obtained\n */\nbool AssetTracker::getFledgeConfigInfo()\n{\n\tLogger::getLogger()->debug(\"AssetTracker::getFledgeConfigInfo start\");\n\ttry {\n\t\tstring url = \"/fledge/category/service\";\n\t\tif (!m_mgtClient)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"%s:%d, m_mgtClient Ptr is NULL\",\n\t\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t\t__LINE__);\n\t\t\treturn false;\n\t\t}\n\n\t\tauto res = m_mgtClient->getHttpClient()->request(\"GET\", url.c_str());\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) &&\n\t\t\t\t\tisdigit(response[1]) &&\n\t\t\t\t\tisdigit(response[2]) &&\n\t\t\t\t\tresponse[3]==':');\n\t\t\tLogger::getLogger()->error(\"%s fetching service record: %s\\n\",\n\t\t\t\t\thttpError?\"HTTP error while\":\"Failed to parse result of\",\n\t\t\t\t\tresponse.c_str());\n\t\t\treturn false;\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Failed to fetch /fledge/category/service %s.\",\n\t\t\t\t\tdoc[\"message\"].GetString());\n\t\t\treturn false;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tValue& 
serviceName = doc[\"name\"];\n\t\t\tif (!serviceName.IsObject())\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"%s:%d, serviceName is not an object\",\n\t\t\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t\t\t__LINE__);\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tif (!serviceName.HasMember(\"value\"))\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"%s:%d, serviceName has no member value\",\n\t\t\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t\t\t__LINE__);\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tValue& serviceVal = serviceName[\"value\"];\n\t\t\tif (!serviceVal.IsString())\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"%s:%d, serviceVal is not a string\",\n\t\t\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t\t\t__LINE__);\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tm_fledgeName = serviceVal.GetString();\n\t\t\tLogger::getLogger()->debug(\"%s:%d, m_fledgeName value = %s\",\n\t\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t\t__LINE__,\n\t\t\t\t\t\tm_fledgeName.c_str());\n\t\t\treturn true;\n\t\t}\n\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tLogger::getLogger()->error(\"Get service failed %s.\", e.what());\n\t\treturn false;\n\t}\n\n\treturn false;\n}\n\n/**\n * This function takes a StorageAssetTrackingTuple pointer and searches for\n * it in cache, if found then returns its Deprecated status\n *\n * @param ptr\tStorageAssetTrackingTuple*, as key in cache (map)\n * @return bool\tDeprecation status\n */\nbool AssetTracker::getDeprecated(StorageAssetTrackingTuple* ptr)\n{\n\tStorageAssetCacheMapItr it = storageAssetTrackerTuplesCache.find(ptr);\n\n\tif (it == storageAssetTrackerTuplesCache.end())\n\t{\n\t\tLogger::getLogger()->debug(\"%s:%d :tuple not found in cache\",\n\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t__LINE__);\n\t\treturn false;\n\t}\n\telse\n\t{\n\t\treturn (it->first)->isDeprecated();\n\t}\n}\n\n/**\n * Updates datapoints present in the arg dpSet in the cache\n *\n * @param dpSet\t\tset of datapoints 
string values to be updated in cache\n * @param ptr\t\tStorageAssetTrackingTuple*, as key in cache (map)\n * @return\tvoid\n */\nvoid AssetTracker::updateCache(std::set<std::string> dpSet, StorageAssetTrackingTuple* ptr)\n{\n\tif (ptr == nullptr)\n\t{\n\t\tLogger::getLogger()->error(\"%s:%d: StorageAssetTrackingTuple should not be NULL pointer\",\n\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t__LINE__);\n\t\treturn;\n\t}\n\n\tStorageAssetCacheMapItr it = storageAssetTrackerTuplesCache.find(ptr);\n\t// search for the record in cache, if not present, simply update cache and return\n\tif (it == storageAssetTrackerTuplesCache.end())\n\t{\n\t\tLogger::getLogger()->debug(\"%s:%d :tuple not found in cache '%s', ptr '%p'\",\n\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t__LINE__,\n\t\t\t\t\tptr->assetToString().c_str(),\n\t\t\t\t\tptr);\n\n\t\t// Create new tuple, add it to processing queue and to cache\n\t\taddStorageAssetTrackingTuple(*ptr, dpSet, true);\n\n\t\treturn;\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->debug(\"%s:%d :tuple found in cache '%p', '%s': datapoints '%d'\",\n\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t__LINE__,\n\t\t\t\t\t(it->first),\n\t\t\t\t\t(it->first)->assetToString().c_str(),\n\t\t\t\t\t(it->second).size());\n\n\t\t// record is found in cache, compare the datapoints of the argument ptr to that present in the cache\n\t\t// update the cache with datapoints present in argument record but absent in cache\n\n\t\tstd::set<std::string> &cacheRecord = it->second;\n\t\tunsigned int sizeOfCacheRecord = cacheRecord.size();\n\n\t\t// store all the datapoints to be updated in string strDatapoints which is sent to management_client\n\t\tstd::string strDatapoints;\n\t\tunsigned int count = 0;\n\t\tfor (auto itr : cacheRecord)\n\t\t{\n\t\t\tstrDatapoints.append(itr);\n\t\t\tstrDatapoints.append(\",\");\n\t\t\tcount++;\n\t\t}\n\n\t\t// check which datapoints are not present in cache record, and need to be updated\n\t\t// in cache and db, store them in string strDatapoints, in 
comma-separated format\n\t\tfor (auto itr : dpSet)\n\t\t{\n\t\t\tif (cacheRecord.find(itr) == cacheRecord.end())\n\t\t\t{\n\t\t\t\tstrDatapoints.append(itr);\n\t\t\t\tstrDatapoints.append(\",\");\n\t\t\t\tcount++;\n\t\t\t}\n\t\t}\n\n\t\t// remove the last comma\n\t\tif (!strDatapoints.empty() && strDatapoints[strDatapoints.size()-1] == ',')\n\t\t{\n\t\t\tstrDatapoints.pop_back();\n\t\t}\n\n\t\tif (count <= sizeOfCacheRecord)\n\t\t{\n\t\t\t// No need to update as count of cache record is not getting increased\n\t\t\treturn;\n\t\t}\n\n\t\t// Add current StorageAssetTrackingTuple to the process queue\n\t\taddStorageAssetTrackingTuple(*(it->first), dpSet);\n\n\t\t// if update of DB successful, then update the CacheRecord\n\t\tfor (auto itr : dpSet)\n\t\t{\n\t\t\tif (cacheRecord.find(itr) == cacheRecord.end())\n\t\t\t{\n\t\t\t\tcacheRecord.insert(itr);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * Add asset tracking tuple via microservice management API and in cache\n *\n * @param tuple         New tuple to add to the queue\n * @param dpSet\t\tSet of datapoints to handle\n * @param addObj\tCreate a new obj for cache and queue if true.\n * \t\t\tOtherwise just add current tuple to processing queue.\n */\nvoid AssetTracker::addStorageAssetTrackingTuple(StorageAssetTrackingTuple& tuple,\n\t\t\t\t\t\tstd::set<std::string>& dpSet,\n\t\t\t\t\t\tbool addObj)\n{\n\t// Create a comma separated list of datapoints\n\tstd::string strDatapoints;\n\tunsigned int count = 0;\n\tfor (auto itr : dpSet)\n\t{\n\t\tstrDatapoints.append(itr);\n\t\tstrDatapoints.append(\",\");\n\t\tcount++;\n\t}\n\tif (!strDatapoints.empty() && strDatapoints[strDatapoints.size()-1] == ',')\n\t{\n\t\tstrDatapoints.pop_back();\n\t}\n\n\tif (addObj)\n\t{\n\t\t// Create new tuple from input one\n\t\tStorageAssetTrackingTuple *ptr = new StorageAssetTrackingTuple(tuple);\n\n\t\t// Add new tuple to storage asset cache\n\t\tstorageAssetTrackerTuplesCache.emplace(ptr, dpSet);\n\n\t\t// Add datapoints and count needed for data insert\n\t\tptr->m_datapoints = 
strDatapoints;\n\t\tptr->m_maxCount = count;\n\n\t\t// Add new tuple to processing queue\n\t\tqueue(ptr);\n\t}\n\telse\n\t{\n\t\t// Add datapoints and count needed for data insert\n\t\ttuple.m_datapoints = strDatapoints;\n\t\ttuple.m_maxCount = count;\n\n\t\t// Just add current tuple to processing queue\n\t\tqueue(&tuple);\n\t}\n}\n\n/**\n * Insert AssetTrackingTuple data via Fledge core API\n * or prepare InsertValues object for direct DB operation\n *\n * @param storage\tBoolean for storage being available\n * @param mgtClient\tManagementClient object pointer\n * @param warned\tBoolean reference updated for logging operation\n * @param instanceName\tFledge instance name\n * @return \t\tInsertValues object\n */\nInsertValues AssetTrackingTuple::processData(bool storage,\n\t\t\t\t\tManagementClient *mgtClient,\n\t\t\t\t\tbool &warned,\n\t\t\t\t\tstring &instanceName)\n{\n\tInsertValues iValue;\n\n\t// Write the tuple - ideally we would like a bulk update here or to go direct to the\n\t// database. 
However we need the Fledge service name passed in instanceName\n\tif (!storage)\n\t{\n\t\t// Fall back to using interface to the core\n\t\tif (!warned)\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"Asset tracker falling back to core API\");\n\t\t}\n\t\twarned = true;\n\n\t\tmgtClient->addAssetTrackingTuple(m_serviceName,\n\t\t\t\t\tm_pluginName,\n\t\t\t\t\tm_assetName,\n\t\t\t\t\tm_eventName);\n\t}\n\telse\n\t{\n\t\tiValue.push_back(InsertValue(\"asset\",   m_assetName));\n\t\tiValue.push_back(InsertValue(\"event\",   m_eventName));\n\t\tiValue.push_back(InsertValue(\"service\", m_serviceName));\n\t\tiValue.push_back(InsertValue(\"fledge\",  instanceName));\n\t\tiValue.push_back(InsertValue(\"plugin\",  m_pluginName));\n\t}\n\n\treturn iValue;\n}\n\n/**\n * Insert StorageAssetTrackingTuple data via Fledge core API\n * or prepare InsertValues object for direct DB operation\n *\n * @param storage\tBoolean for storage being available\n * @param mgtClient\tManagementClient object pointer\n * @param warned\tBoolean reference updated for logging operation\n * @param instanceName\tFledge instance name\n * @return \t\tInsertValues object\n */\nInsertValues StorageAssetTrackingTuple::processData(bool storage,\n\t\t\t\t\t\tManagementClient *mgtClient,\n\t\t\t\t\t\tbool &warned,\n\t\t\t\t\t\tstring &instanceName)\n{\n\tInsertValues iValue;\n\n\t// Write the tuple - ideally we would like a bulk update here or to go direct to the\n\t// database. 
However we need the Fledge service name for that, which is now in\n\t// the member variable m_fledgeName\n\tif (!storage)\n\t{\n\t\t// Fall back to using interface to the core\n\t\tif (!warned)\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"Storage Asset tracker falling back to core API\");\n\t\t}\n\t\twarned = true;\n\n\t\t// Insert tuple via Fledge core API\n\t\tmgtClient->addStorageAssetTrackingTuple(m_serviceName,\n\t\t\t\t\t\t\tm_pluginName,\n\t\t\t\t\t\t\tm_assetName,\n\t\t\t\t\t\t\tm_eventName,\n\t\t\t\t\t\t\tfalse,\n\t\t\t\t\t\t\tm_datapoints,\n\t\t\t\t\t\t\tm_maxCount);\n\t}\n\telse\n\t{\n\t\tiValue.push_back(InsertValue(\"asset\",\tm_assetName));\n\t\tiValue.push_back(InsertValue(\"event\",\tm_eventName));\n\t\tiValue.push_back(InsertValue(\"service\",\tm_serviceName));\n\t\tiValue.push_back(InsertValue(\"fledge\",\tinstanceName));\n\t\tiValue.push_back(InsertValue(\"plugin\",\tm_pluginName));\n\n\t\t// prepare JSON datapoints\n\t\tstring datapoints = \"\\\"\";\n\t\tfor (size_t i = 0; i < m_datapoints.size(); ++i)\n\t\t{\n\t\t\tif (m_datapoints[i] == ',')\n\t\t\t{\n\t\t\t\tdatapoints.append(\"\\\",\\\"\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tdatapoints.append(1, m_datapoints[i]);\n\t\t\t}\n\t\t}\n\t\tdatapoints.append(\"\\\"\");\n\n\t\tDocument doc;\n\t\tstring jsonData = \"{\\\"count\\\": \" +\n\t\t\t\tstd::to_string(m_maxCount) +\n\t\t\t\t\", \\\"datapoints\\\": [\" +\n\t\t\t\tdatapoints + \"]}\";\n\t\tdoc.Parse(jsonData.c_str());\n\t\tiValue.push_back(InsertValue(\"data\", doc));\n\t}\n\n\treturn iValue;\n}\n\n/**\n * Check if a StorageAssetTrackingTuple is in cache\n *\n * @param tuple\tThe StorageAssetTrackingTuple to find\n * @return\tPointer to found tuple or NULL\n */\nStorageAssetTrackingTuple* AssetTracker::findStorageAssetTrackingCache(StorageAssetTrackingTuple& tuple)\n{\n\tStorageAssetCacheMapItr it = storageAssetTrackerTuplesCache.find(&tuple);\n\n\tif (it == storageAssetTrackerTuplesCache.end())\n\t{\n\t\treturn 
NULL;\n\t}\n\telse\n\t{\n\t\treturn it->first;\n\t}\n}\n\n/**\n * Get stored value in the StorageAssetTrackingTuple cache for the given tuple\n *\n * @param tuple\tThe StorageAssetTrackingTuple to find\n * @return\tPointer to found std::set<std::string> result or NULL if tuple does not exist\n */\nstd::set<std::string>* AssetTracker::getStorageAssetTrackingCacheData(StorageAssetTrackingTuple* tuple)\n{\n\tStorageAssetCacheMapItr it = storageAssetTrackerTuplesCache.find(tuple);\n\n\tif (it == storageAssetTrackerTuplesCache.end())\n\t{\n\t\treturn NULL;\n\t}\n\telse\n\t{\n\t\treturn &(it->second);\n\t}\n}\n"
  },
  {
    "path": "C/common/audit_logger.cpp",
    "content": "/*\n * Fledge Singleton Audit Logger interface\n *\n * Copyright (c) 2023 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <audit_logger.h>\n\nAuditLogger *AuditLogger::m_instance = 0;\n\nusing namespace std;\n\n/**\n * Constructor for an audit logger that is passed\n * the management client. This must be called early in\n * a service or task creation before any audit logs are\n * created.\n *\n * @param mgmt\tPointer to the management client\n */\nAuditLogger::AuditLogger(ManagementClient *mgmt) : m_mgmt(mgmt)\n{\n\tm_instance = this;\n}\n\n/**\n * Destructor for an audit logger\n */\nAuditLogger::~AuditLogger()\n{\n}\n\n/**\n * Get the audit logger singleton\n */\nAuditLogger *AuditLogger::getLogger()\n{\n\tif (!m_instance)\n\t{\n\t\tLogger::getLogger()->error(\"An attempt has been made to obtain the audit logger before it has been created.\");\n\t}\n\treturn m_instance;\n}\n\nvoid AuditLogger::auditLog(const string& code,\n\t\t\tconst string& level,\n\t\t\tconst string& data)\n{\n\tif (m_instance)\n\t{\n\t\tm_instance->audit(code, level, data);\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"An attempt has been made to log an audit event when no audit logger is available\");\n\t\tLogger::getLogger()->error(\"Audit event is: %s, %s, %s\", code.c_str(), level.c_str(), data.c_str());\n\t}\n}\n\n/**\n * Log an audit message\n *\n * @param code\tThe audit code\n * @param level\tThe audit level\n * @param data\tOptional data associated with the audit entry\n */\nvoid AuditLogger::audit(const string& code,\n\t\t\tconst string& level,\n\t\t\tconst string& data)\n{\n\tm_mgmt->addAuditEntry(code, level, data);\n}\n"
  },
  {
    "path": "C/common/base64databuffer.cpp",
    "content": "/*\n * Fledge Base64 encoded DataBuffer\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <base64databuffer.h>\n\nusing namespace std;\n\n/**\n * Construct a DataBuffer by decoding a Base64 encoded buffer\n */\nBase64DataBuffer::Base64DataBuffer(const string& encoded)\n{\n\tm_data = NULL;\n\tm_itemSize = encoded[0] - '0';\n\tsize_t in_len = encoded.size() - 1;\n\tif (in_len % 4 != 0)\n\t{\n\t\tthrow runtime_error(\"Base64DataBuffer string is incorrect length\");\n\t}\n\tsize_t maxLen = in_len / 4 * 3;\n\tif (encoded[in_len - 1] == '=')\n\t\tmaxLen--;\n\tif (encoded[in_len - 2] == '=')\n\t\tmaxLen--;\n\tm_len = maxLen / m_itemSize;\n\tif ((m_data = malloc(maxLen)) == NULL)\n\t{\n\t\tthrow runtime_error(\"Base64DataBuffer insufficient memory to store data\");\n\t}\n\tuint8_t *data = (uint8_t *)m_data;\n\n\tfor (size_t i = 0, j = 0; i < in_len;)\n\t{\n\t\tuint32_t a = encoded[i] == '=' ? 0 & i++ : decodingTable[static_cast<int>(encoded[i++])];\n\t\tuint32_t b = encoded[i] == '=' ? 0 & i++ : decodingTable[static_cast<int>(encoded[i++])];\n\t\tuint32_t c = encoded[i] == '=' ? 0 & i++ : decodingTable[static_cast<int>(encoded[i++])];\n\t\tuint32_t d = encoded[i] == '=' ? 0 & i++ : decodingTable[static_cast<int>(encoded[i++])];\n\n\t\tuint32_t triple = (a << 3 * 6) + (b << 2 * 6) + (c << 1 * 6) + (d << 0 * 6);\n\n\t\tif (j < maxLen)\n\t\t\tdata[j++] = (triple >> 2 * 8) & 0xFF;\n\t\tif (j < maxLen)\n\t\t\tdata[j++] = (triple >> 1 * 8) & 0xFF;\n\t\tif (j < maxLen)\n\t\t\tdata[j++] = (triple >> 0 * 8) & 0xFF;\n\t}\n}\n\n/**\n * Base 64 encode the DataBuffer. 
Note that the first character is\n * not the data itself but an unencoded value for itemSize\n */\nstring Base64DataBuffer::encode()\n{\n\n\tsize_t nBytes = m_itemSize * m_len;\n\tsize_t encoded = 4 * ((nBytes + 2) / 3);\n\tchar *ret = (char *)malloc(encoded + 1);\n\tif (ret == NULL)\n\t{\n\t\tthrow runtime_error(\"Base64DataBuffer insufficient memory to encode data\");\n\t}\n\tchar *p = ret;\n\t*p++ = m_itemSize + '0';\n\tuint8_t *data = (uint8_t *)m_data;\n\tint i;\n\tfor (i = 0; i + 2 < (int)nBytes; i += 3)\n\t{\n\t\t*p++ = encodingTable[(*data >> 2) & 0x3F];\n\t\t*p++ = encodingTable[((*data & 0x3) << 4) | ((int) (*(data + 1) & 0xF0) >> 4)];\n\t\t*p++ = encodingTable[((*(data + 1) & 0xF) << 2) | ((int) (*(data + 2) & 0xC0) >> 6)];\n\t\t*p++ = encodingTable[*(data + 2) & 0x3F];\n\t\tdata += 3;\n\t}\n\tif (i < (int)nBytes)\n\t{\n\t\t*p++ = encodingTable[(*data >> 2) & 0x3F];\n\t\tif (i == (int)nBytes - 1)\n\t\t{\n\t\t\t*p++ = encodingTable[((*data & 0x3) << 4)];\n\t\t\t*p++ = '=';\n\t\t}\n\t\telse\n\t\t{\n\t\t\t*p++ = encodingTable[((*data & 0x3) << 4) | ((int) (*(data + 1) & 0xF0) >> 4)];\n\t\t\t*p++ = encodingTable[((*(data + 1) & 0xF) << 2)];\n\t\t}\n\t\t*p++ = '=';\n\t}\n\t*p = '\\0';\n\tstring r = string(ret);\n\tfree(ret);\n\treturn r;\n}\n"
  },
  {
    "path": "C/common/base64image.cpp",
    "content": "/*\n * Fledge Base64 encoded datapoint image\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <base64dpimage.h>\n#include <logger.h>\n#include <string.h>\n#include <sys/time.h>\n\nusing namespace std;\n\n/**\n * Construct a DPImage by decoding a Base64 encoded buffer\n */\nBase64DPImage::Base64DPImage(const string& data)\n{\n\tif (sscanf(data.c_str(), \"%d,%d,%d_\", &m_width, &m_height, &m_depth) != 3)\n\t{\n\t\tthrow runtime_error(\"Base64DPImage header is malformed\");\n\t}\n\tm_byteSize = m_width * m_height * (m_depth / 8);\n\tsize_t pos = data.find_first_of(\"_\");\n\tstring encoded;\n\tif (pos != string::npos)\n\t{\n\t\tencoded = data.substr(pos + 1);\n\t}\n\tsize_t in_len = encoded.size();\n\tif (in_len % 4 != 0)\n\t{\n\t\tthrow runtime_error(\"Base64DPImage string is incorrect length\");\n\t}\n\tif ((m_pixels = malloc(m_byteSize)) == NULL)\n\t{\n\t\tthrow runtime_error(\"Base64DPImage insufficient memory to store data\");\n\t}\n\tuint8_t *ptr = (uint8_t *)m_pixels;\n\n\tfor (size_t i = 0, j = 0; i < in_len;)\n\t{\n\t\tuint32_t a = encoded[i] == '=' ? 0 & i++ : decodingTable[(uint8_t)(encoded[i++])];\n\t\tuint32_t b = encoded[i] == '=' ? 0 & i++ : decodingTable[(uint8_t)(encoded[i++])];\n\t\tuint32_t c = encoded[i] == '=' ? 0 & i++ : decodingTable[(uint8_t)(encoded[i++])];\n\t\tuint32_t d = encoded[i] == '=' ? 0 & i++ : decodingTable[(uint8_t)(encoded[i++])];\n\n\t\tuint32_t triple = (a << 3 * 6) + (b << 2 * 6) + (c << 1 * 6) + (d << 0 * 6);\n\n\t\tif (j < m_byteSize)\n\t\t\tptr[j++] = (triple >> 2 * 8) & 0xFF;\n\t\tif (j < m_byteSize)\n\t\t\tptr[j++] = (triple >> 1 * 8) & 0xFF;\n\t\tif (j < m_byteSize)\n\t\t\tptr[j++] = (triple >> 0 * 8) & 0xFF;\n\t}\n}\n\n/**\n * Base 64 encode the DPImage. 
Note the first character is\n * not the data itself but an unencoded value for itemSize\n */\nstring Base64DPImage::encode()\n{\n\tchar buf[80];\n\tint hlen = snprintf(buf, sizeof(buf), \"%d,%d,%d_\", m_width, m_height, m_depth);\n\tsize_t nBytes = m_byteSize;\n\tsize_t encoded = 4 * ((nBytes + 2) / 3);\n\tuint8_t *ret = (uint8_t *)malloc(hlen + encoded + 1);\n\tstrcpy((char *)ret, buf);\n\tregister uint8_t *p = ret + hlen;\n\tregister uint8_t *data = (uint8_t *)m_pixels;\n\tint i;\n\tfor (i = 0; i < m_byteSize - 2; i += 3)\n\t{\n\t\t*p++ = encodingTable[(*data >> 2) & 0x3F];\n\t\t*p++ = encodingTable[((*data & 0x3) << 4) | ((unsigned int) (*(data + 1) & 0xF0) >> 4)];\n\t\t*p++ = encodingTable[((*(data + 1) & 0xF) << 2) | ((unsigned int) (*(data + 2) & 0xC0) >> 6)];\n\t\t*p++ = encodingTable[*(data + 2) & 0x3F];\n\t\tdata += 3;\n\t}\n\tif (i < nBytes)\n\t{\n\t\t*p++ = encodingTable[(*data >> 2) & 0x3F];\n\t\tif (i == (nBytes - 1))\n\t\t{\n\t\t\t*p++ = encodingTable[((*data & 0x3) << 4)];\n\t\t\t*p++ = '=';\n\t\t}\n\t\telse\n\t\t{\n\t\t\t*p++ = encodingTable[((*data & 0x3) << 4) | ((unsigned int) (*(data + 1) & 0xF0) >> 4)];\n\t\t\t*p++ = encodingTable[((*(data + 1) & 0xF) << 2)];\n\t\t}\n\t\t*p++ = '=';\n\t}\n\t*p = '\\0';\n\tstring rstr((char *)ret);\n\tfree(ret);\n\t\n\treturn rstr;\n}\n"
  },
  {
    "path": "C/common/bearer_token.cpp",
    "content": "/*\n * Fledge bearer token utilities\n *\n * Copyright (c) 2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include \"bearer_token.h\"\n#include <rapidjson/document.h>\n#include <logger.h>\n\nusing namespace rapidjson;\nusing namespace std;\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\n\n/**\n * BearerToken constructor with request object\n *\n * @param request\tHTTP request object\n */\nBearerToken::BearerToken(shared_ptr<HttpServer::Request> request)\n{\n\t// Extract access bearer token from request headers\n\tfor (auto &field : request->header)\n\t{\n\t\tif (field.first == AUTH_HEADER)\n\t\t{\n\t\t\tstd::size_t pos = field.second.rfind(BEARER_SCHEMA);\n\t\t\tif (pos != string::npos)\n\t\t\t{\n\t\t\t\tpos += strlen(BEARER_SCHEMA);\n\t\t\t\tm_bearer_token = field.second.substr(pos);\n\t\t\t}\n\t\t}\n\t}\n\n\tm_expiration = 0;\n\tm_verified = false;\n}\n\n/**\n * BearerToken constructor with string reference\n * @param token\t\tBearer token string\n */\nBearerToken::BearerToken(std::string& token) :\n\t\t\tm_bearer_token(token)\n{\n\tm_expiration = 0;\n\tm_verified = false;\n}\n\n/**\n * BearerToken verification from JSON string reference\n *\n * Known token claims are stored as strings\n *\n * @param response\tJSON string from token verification endpoint\n * @return\t\tTrue on success\n * \t\t\tFalse otherwise\n */\nbool BearerToken::verify(const string& response)\n{\n\tif (m_bearer_token.length() == 0)\n\t{\n\t\treturn false;\n\t}\n\n\tLogger *log = Logger::getLogger();\n\tDocument doc;\n\tdoc.Parse(response.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tbool httpError = (isdigit(response[0]) &&\n\t\t\t\t  isdigit(response[1]) &&\n\t\t\t\t  isdigit(response[2]) &&\n\t\t\t\t  response[3]==':');\n\t\tlog->error(\"%s error in service token verification: %s\\n\",\n\t\t\t\thttpError?\"HTTP error during\":\"Failed to parse result 
of\",\n\t\t\t\tresponse.c_str());\n\t\treturn false;\n\t}\n\n\t// Check JSON error item\n\tif (doc.HasMember(\"error\"))\n\t{\n\t\tif (doc[\"error\"].IsString())\n\t\t{\n\t\t\tstring error = doc[\"error\"].GetString();\n\t\t\tlog->error(\"Failed to parse token verification result, error %s\",\n\t\t\t\t\terror.c_str());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tlog->error(\"Failed to parse token verification result: %s\",\n\t\t\t\t\tresponse.c_str());\n\t\t}\n\n\t\treturn false;\n\t}\n\n\t// Check JSON claim items\n\tif (doc.HasMember(\"aud\") &&\n\t    doc.HasMember(\"sub\") &&\n\t    doc.HasMember(\"iss\") &&\n\t    doc.HasMember(\"exp\"))\n\t{\n\t\t// Set token claims in the input map\n\t\tif (doc[\"aud\"].IsString() &&\n\t\t    doc[\"sub\"].IsString() &&\n\t\t    doc[\"iss\"].IsString() &&\n\t\t    doc[\"exp\"].IsUint())\n\t\t{\n\t\t\t// Valid data: set claim values, expiration and verified\n\t\t\tm_audience = doc[\"aud\"].GetString();\n\t\t\tm_subject = doc[\"sub\"].GetString();\n\t\t\tm_issuer = doc[\"iss\"].GetString();\n\t\t\tm_expiration = doc[\"exp\"].GetUint();\n\n\t\t\tm_verified = true;\n\n\t\t\tlog->debug(\"Token verified %s:%s, expiration %ld\",\n\t\t\t\tm_audience.c_str(),\n\t\t\t\tm_subject.c_str(),\n\t\t\t\tm_expiration);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tlog->error(\"Token claims do not contain valid values: %s\",\n\t\t\t\tresponse.c_str());\n\t\t}\n\t}\n\telse\n\t{\n\t\tlog->error(\"Needed token claims not found: %s\", response.c_str());\n\t}\n\n\treturn m_verified;\n}\n"
  },
  {
    "path": "C/common/config_category.cpp",
    "content": "/*\n * Fledge category management\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <config_category.h>\n#include <string>\n#include <rapidjson/document.h>\n#include <rapidjson/ostreamwrapper.h>\n#include <rapidjson/writer.h>\n#include \"rapidjson/error/error.h\"\n#include \"rapidjson/error/en.h\"\n#include <sstream>\n#include <iostream>\n#include <time.h>\n#include <stdlib.h>\n#include <logger.h>\n#include <stdexcept>\n#include <string_utils.h>\n#include <boost/algorithm/string/replace.hpp>\n\n\nusing namespace std;\nusing namespace rapidjson;\n\n/**\n * ConfigCategories constructor without parameters\n *\n * Elements can be added with ConfigCategories::addCategoryDescription\n */\nConfigCategories::ConfigCategories()\n{\n}\n\n/**\n * Construct a ConfigCategories object from a JSON document returned from\n * the Fledge configuration service.\n */\nConfigCategories::ConfigCategories(const std::string& json)\n{\n\tDocument doc;\n\tdoc.Parse(json.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tLogger::getLogger()->error(\"Configuration parse error in %s: %s at %d, '%s'\", json.c_str(),\n\t\t\tGetParseError_En(doc.GetParseError()), (unsigned)doc.GetErrorOffset(), StringAround(json, (unsigned)doc.GetErrorOffset()).c_str());\n\t\tthrow new ConfigMalformed();\n\t}\n\tif (doc.HasMember(\"categories\"))\n\t{\n\t\tconst Value& categories = doc[\"categories\"];\n\t\tif (categories.IsArray())\n\t\t{\n\t\t\t// Process every row and create the result set\n\t\t\tfor (auto& cat : categories.GetArray())\n\t\t\t{\n\t\t\t\tif (!cat.IsObject())\n\t\t\t\t{\n\t\t\t\t\tthrow new ConfigMalformed();\n\t\t\t\t}\n\t\t\t\tConfigCategoryDescription *value = new ConfigCategoryDescription(cat[\"key\"].GetString(),\n\t\t\t\t\t\t\tcat[\"description\"].GetString());\n\t\t\t\tm_categories.push_back(value);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tthrow new 
ConfigMalformed();\n\t\t}\n\t}\n}\n\n/**\n * ConfigCategories destructor\n */\nConfigCategories::~ConfigCategories()\n{\n\tfor (auto it = m_categories.cbegin(); it != m_categories.cend(); it++)\n\t{\n\t\tdelete *it;\n\t}\n}\n\n/**\n * Add a ConfigCategoryDescription element\n *\n * @param  elem    The ConfigCategoryDescription element to add\n */\nvoid ConfigCategories::addCategoryDescription(ConfigCategoryDescription* elem)\n{\n\tm_categories.push_back(elem);\n}\n\n/**\n * Return the JSON string of a ConfigCategoryDescription element\n */\nstring ConfigCategoryDescription::toJSON() const\n{\n\tostringstream convert;\n\n\tconvert << \"{\\\"key\\\": \\\"\" << JSONescape(m_name) << \"\\\", \";\n\tconvert << \"\\\"description\\\" : \\\"\" << JSONescape(m_description) << \"\\\"}\";\n\n\treturn convert.str();\n}\n\n/**\n * Return the JSON string of all ConfigCategoryDescription\n * elements in m_categories\n */\nstring ConfigCategories::toJSON() const\n{\n\tostringstream convert;\n\n\tconvert << \"[\";\n\tfor (auto it = m_categories.cbegin(); it != m_categories.cend(); it++)\n\t{\n\t\tconvert << (*it)->toJSON();\n\t\tif (it + 1 != m_categories.cend() )\n\t\t{\n\t\t\tconvert << \", \";\n\t\t}\n\t}\n\tconvert << \"]\";\n\n\treturn convert.str();\n}\n\n/**\n * Configuration Category constructor\n *\n * @param name\tThe name of the configuration category\n * @param json\tJSON content of the configuration category\n */\nConfigCategory::ConfigCategory(const string& name, const string& json) : m_name(name)\n{\n\tDocument doc;\n\tdoc.Parse(json.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tLogger::getLogger()->error(\"Configuration parse error in category '%s', %s: %s at %d, '%s'\",\n\t\t\tname.c_str(), json.c_str(),\n\t\t\tGetParseError_En(doc.GetParseError()), (unsigned)doc.GetErrorOffset(),\n\t\t\tStringAround(json, (unsigned)doc.GetErrorOffset()).c_str());\n\t\tthrow new ConfigMalformed();\n\t}\n\t\n\tfor (Value::ConstMemberIterator itr = 
doc.MemberBegin(); itr != doc.MemberEnd(); ++itr)\n\t{\n\t\ttry\n\t\t{\n\t\t\tm_items.push_back(new CategoryItem(itr->name.GetString(), itr->value));\n\t\t}\n\t\tcatch (exception* e)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Configuration parse error in category '%s' item '%s', %s: %s\",\n\t\t\t\tname.c_str(),\n\t\t\t\titr->name.GetString(),\n\t\t\t\tjson.c_str(),\n\t\t\t\te->what());\n\t\t\tdelete e;\n\t\t\tthrow ConfigMalformed();\n\t\t}\n\t\tcatch (...)\n\t\t{\n\t\t\tthrow;\n\t\t}\n\t}\n}\n\n/**\n * Copy constructor for a configuration category\n *\n * @param rhs\tThe configuration category to copy\n */\nConfigCategory::ConfigCategory(ConfigCategory const& rhs)\n{\n\tm_name = rhs.m_name;\n\tm_description = rhs.m_description;\n\n\tfor (auto it = rhs.m_items.cbegin(); it != rhs.m_items.cend(); it++)\n\t{\n\t\tm_items.push_back(new CategoryItem(**it));\n\t}\n}\n\n/**\n * Copy constructor for a configuration category when copying from a pointer\n *\n * @param rhs\tThe configuration category to copy\n */\nConfigCategory::ConfigCategory(ConfigCategory const *rhs)\n{\n\tm_name = rhs->m_name;\n\tm_description = rhs->m_description;\n\n\tfor (auto it = rhs->m_items.cbegin(); it != rhs->m_items.cend(); it++)\n\t{\n\t\tm_items.push_back(new CategoryItem(**it));\n\t}\n}\n\n/**\n * Configuration category destructor\n */\nConfigCategory::~ConfigCategory()\n{\n\tfor (auto it = m_items.cbegin(); it != m_items.cend(); it++)\n\t{\n\t\tdelete *it;\n\t}\n}\n\n/**\n * Operator= for ConfigCategory\n */\nConfigCategory& ConfigCategory::operator=(ConfigCategory const& rhs)\n{\n\t// Guard against self-assignment, which would otherwise delete our own items\n\tif (this == &rhs)\n\t{\n\t\treturn *this;\n\t}\n\tm_name = rhs.m_name;\n\tm_description = rhs.m_description;\n\n\tfor (auto it = m_items.cbegin(); it != m_items.cend(); it++)\n\t{\n\t\tdelete *it;\n\t}\n\tm_items.clear();\n\tfor (auto it = rhs.m_items.cbegin(); it != rhs.m_items.cend(); it++)\n\t{\n\t\tm_items.push_back(new CategoryItem(**it));\n\t}\n\treturn *this;\n}\n\n/**\n * Operator+= for ConfigCategory\n */\nConfigCategory& 
ConfigCategory::operator+=(ConfigCategory const& rhs)\n{\n\tm_name = rhs.m_name;\n\tm_description = rhs.m_description;\n\n\tfor (auto it = rhs.m_items.cbegin(); it != rhs.m_items.cend(); it++)\n\t{\n\t\tm_items.push_back(new CategoryItem(**it));\n\t}\n\treturn *this;\n}\n\n/**\n * Set the m_value from m_default for each item\n */\nvoid ConfigCategory::setItemsValueFromDefault()\n{\n\tfor (auto it = m_items.cbegin(); it != m_items.cend(); it++)\n\t{\n\t\t(*it)->m_value = string((*it)->m_default);\n\t}\n}\n\n/**\n * Check whether at least one item in the category object\n * has both 'value' and 'default' set.\n *\n * @throws ConfigValueFoundWithDefault\n */\nvoid ConfigCategory::checkDefaultValuesOnly() const\n{\n\tfor (auto it = m_items.cbegin(); it != m_items.cend(); it++)\n\t{\n\t\tif (!(*it)->m_value.empty())\n\t\t{\n\t\t\tthrow new ConfigValueFoundWithDefault((*it)->m_name);\n\t\t}\n\t}\n}\n\n/**\n * Add an item to a configuration category\n */\nvoid ConfigCategory::addItem(const std::string& name, const std::string description,\n                             const std::string& type, const std::string def,\n                             const std::string& value)\n{\n\tm_items.push_back(new CategoryItem(name, description, type, def, value));\n}\n\n/**\n * Add an item to a configuration category\n */\nvoid ConfigCategory::addItem(const std::string& name, const std::string description,\n                             const std::string def, const std::string& value,\n\t\t\t     const vector<string> options)\n{\n\tm_items.push_back(new CategoryItem(name, description, def, value, options));\n}\n\n/**\n * Set the display name of an item\n *\n * @param name\tThe item name in the category\n * @param displayName\tThe display name to set\n * @return true if the item was found\n */\nbool ConfigCategory::setItemDisplayName(const std::string& name, const std::string& displayName)\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif 
(name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\tm_items[i]->m_displayName = displayName;\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\n/**\n * Delete all the items from the configuration category having a specific type\n *\n * @param type  Type to delete\n */\nvoid ConfigCategory::removeItemsType(ConfigCategory::ItemType type)\n{\n\tfor (auto it = m_items.begin(); it != m_items.end(); )\n\t{\n\t\tif ((*it)->m_itemType == type)\n\t\t{\n\t\t\tdelete *it;\n\t\t\tit = m_items.erase(it);\n\t\t}\n\t\telse\n\t\t{\n\t\t\t++it;\n\t\t}\n\t}\n}\n\n/**\n * Delete all the items from the configuration category\n *\n */\nvoid ConfigCategory::removeItems()\n{\n\tfor (auto it = m_items.begin(); it != m_items.end(); )\n\t{\n\t\tdelete *it;\n\t\tit = m_items.erase(it);\n\t}\n}\n\n/**\n * Delete all the items from the configuration category not having a specific type\n *\n * @param type  Type to maintain\n */\nvoid ConfigCategory::keepItemsType(ConfigCategory::ItemType type)\n{\n\n\tfor (auto it = m_items.begin(); it != m_items.end(); )\n\t{\n\t\tif ((*it)->m_itemType != type)\n\t\t{\n\t\t\tdelete *it;\n\t\t\tit = m_items.erase(it);\n\t\t}\n\t\telse\n\t\t{\n\t\t\t++it;\n\t\t}\n\t}\n}\n\n/**\n * Extracts, processes, and adds subcategory information from a given category to the current instance\n *\n * @param subCategories Configuration category from which the subcategories information should be extracted\n */\nbool ConfigCategory::extractSubcategory(ConfigCategory &subCategories)\n{\n\n\tbool extracted;\n\n\tauto it = subCategories.m_items.begin();\n\n\tif (it != subCategories.m_items.end())\n\t{\n\t\t// Generates a new temporary category from the JSON in m_default\n\t\tConfigCategory tmpCategory = ConfigCategory(\"tmpCategory\", (*it)->m_default);\n\n\t\t// Extracts all the items generated from m_default and adds them to the category\n\t\tfor(auto item : tmpCategory.m_items)\n\t\t{\n\n\t\t\tm_items.push_back(new CategoryItem(*item));\n\t\t}\n\n\t\tm_name = 
(*it)->m_name;\n\t\tm_description = (*it)->m_description;\n\n\t\t// Replaces the %N escape sequence with the instance name of this plugin\n\t\tstring instanceName = subCategories.m_name;\n\t\tstring pattern  = \"%N\";\n\n\t\tif (m_name.find(pattern) != string::npos)\n\t\t\tm_name.replace(m_name.find(pattern), pattern.length(), instanceName);\n\n\t\t// Removes the element just processed\n\t\tdelete *it;\n\t\tsubCategories.m_items.erase(it);\n\t\textracted = true;\n\t}\n\telse\n\t{\n\t\textracted = false;\n\t}\n\n\treturn extracted;\n\n}\n\n/**\n * Check for the existence of an item within the configuration category\n *\n * @param name\tItem name to check within the category\n */\nbool ConfigCategory::itemExists(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\n/**\n * Return the value of the configuration category item\n *\n * @param name\tThe name of the configuration item to return\n * @return string\tThe configuration item value\n * @throws exception if the item does not exist in the category\n */\nstring ConfigCategory::getValue(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_value;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return the value of the configuration category item with a default\n *\n * @param name         The name of the configuration item to return\n * @param defaultValue The default value to return if the item does not exist\n * @return string      The configuration item value or the default\n */\nstring ConfigCategory::getValue(const std::string& name, const std::string& defaultValue) const\n{\n\ttry\n\t{\n\t\treturn getValue(name);\n\t}\n\tcatch (ConfigItemNotFound* e)\n\t{\n\t\tLogger::getLogger()->info(\"'%s' %s , returning default value '%s'\", name.c_str(), 
e->what(), defaultValue.c_str());\n\t\tdelete e;\n\t\treturn defaultValue;\n\t}\n}\n\n/**\n * Return a boolean value from a configuration category item\n *\n * @param name         The name of the item\n * @param defaultValue The value to return if item is not found or invalid\n * @return bool        The boolean value\n */\nbool ConfigCategory::getBoolValue(const std::string& name, bool defaultValue) const\n{\n\ttry\n\t{\n\t\tstring val = getValue(name);\n\t\tstd::string lower = val;\n\t\tstd::transform(lower.begin(), lower.end(), lower.begin(), ::tolower);\n\t\tif (lower == \"true\" || lower == \"1\") return true;\n\t\tif (lower == \"false\" || lower == \"0\") return false;\n\t\tLogger::getLogger()->info(\"Config item '%s' expected to be boolean but got '%s'\", name.c_str(), val.c_str());\n\t\treturn defaultValue;\n\t}\n\tcatch (ConfigItemNotFound* e)\n\t{\n\t\tLogger::getLogger()->info(\"'%s' %s , returning default value '%d'\", name.c_str(), e->what(), defaultValue);\n\t\tdelete e;\n\t\treturn defaultValue;\n\t}\n}\n\n/**\n * Return an integer value from a configuration category item\n */\nint ConfigCategory::getIntegerValue(const std::string& name, int defaultValue) const\n{\n\ttry\n\t{\n\t\tstring val = getValue(name);\n\t\treturn stoi(val);\n\t}\n\tcatch (ConfigItemNotFound* e)\n\t{\n\t\tLogger::getLogger()->info(\"'%s' %s , returning default value '%d'\", name.c_str(), e->what(), defaultValue);\n\t\tdelete e;\n\t\treturn defaultValue;\n\t}\n\tcatch (std::invalid_argument& e)\n\t{\n\t\tLogger::getLogger()->info(\"Config item '%s' expected to be integer but got '%s', returning default value '%d'\", \n\t\t\tname.c_str(), e.what(), defaultValue);\n\t\treturn defaultValue;\n\t}\n\tcatch (std::out_of_range& e)\n\t{\n\t\tLogger::getLogger()->info(\"Config item '%s' out of range: %s, returning default value '%d'\", \n\t\t\tname.c_str(), e.what(), defaultValue);\n\t\treturn defaultValue;\n\t}\n}\n\n/**\n * Return a long value from a configuration category item\n 
*/\nlong ConfigCategory::getLongValue(const std::string& name, long defaultValue) const\n{\n\ttry\n\t{\n\t\tstring val = getValue(name);\n\t\treturn stol(val);\n\t}\n\tcatch (ConfigItemNotFound* e)\n\t{\n\t\tLogger::getLogger()->info(\"'%s' %s , returning default value '%ld'\", name.c_str(), e->what(), defaultValue);\n\t\tdelete e;\n\t\treturn defaultValue;\n\t}\n\tcatch (std::invalid_argument& e)\n\t{\n\t\tLogger::getLogger()->info(\"Config item '%s' expected to be long but got '%s', returning default value '%ld'\", \n\t\t\tname.c_str(), e.what(), defaultValue);\n\t\treturn defaultValue;\n\t}\n\tcatch (std::out_of_range& e)\n\t{\n\t\tLogger::getLogger()->info(\"Config item '%s' out of range: %s, returning default value '%ld'\", \n\t\t\tname.c_str(), e.what(), defaultValue);\n\t\treturn defaultValue;\n\t}\n}\n\n/**\n * Return a double value from a configuration category item\n */\ndouble ConfigCategory::getDoubleValue(const std::string& name, double defaultValue) const\n{\n\ttry\n\t{\n\t\tstring val = getValue(name);\n\t\treturn stod(val);\n\t}\n\tcatch (ConfigItemNotFound* e)\n\t{\n\t\tLogger::getLogger()->info(\"'%s' %s , returning default value '%lf'\", name.c_str(), e->what(), defaultValue);\n\t\tdelete e;\n\t\treturn defaultValue;\n\t}\n\tcatch (std::invalid_argument& e)\n\t{\n\t\tLogger::getLogger()->info(\"Config item '%s' expected to be double but got '%s', returning default value '%lf'\", \n\t\t\tname.c_str(), e.what(), defaultValue);\n\t\treturn defaultValue;\n\t}\n\tcatch (std::out_of_range& e)\n\t{\n\t\tLogger::getLogger()->info(\"Config item '%s' out of range: %s, returning default value '%lf'\", \n\t\t\tname.c_str(), e.what(), defaultValue);\n\t\treturn defaultValue;\n\t}\n}\n\n/**\n * Return the value of the configuration category item list, this\n * is a convenience function used when simple lists are defined\n * and allows for central processing of the list values\n *\n * @param name\tThe name of the configuration item to return\n * @return 
vector<string>\tThe list of string values for the item\n * @throws exception if the item does not exist in the category\n */\nvector<string> ConfigCategory::getValueList(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\tif (m_items[i]->m_type.compare(\"list\"))\n\t\t\t{\n\t\t\t\tthrow new ConfigItemNotAList();\n\t\t\t}\n\t\t\tDocument d;\n\t\t\tvector<string> list;\n\t\t\td.Parse(m_items[i]->m_value.c_str());\n\t\t\tif (d.HasParseError())\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"The JSON value for a list item %s has a parse error: %s, %s\",\n\t\t\t\t\tname.c_str(), GetParseError_En(d.GetParseError()), m_items[i]->m_value.c_str());\n\t\t\t\treturn list;\n\t\t\t}\n\t\t\tif (d.IsArray())\n\t\t\t{\n\t\t\t\tfor (auto& v : d.GetArray())\n\t\t\t\t{\n\t\t\t\t\tif (v.IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tlist.push_back(v.GetString());\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"The value of the list item %s should be a JSON array and it is not\", name.c_str());\n\t\t\t}\n\t\t\treturn list;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return the value of the configuration category item kvlist, this\n * is a convenience function used when key/value lists are defined\n * and allows for central processing of the list values\n *\n * @param name\tThe name of the configuration item to return\n * @return map<string, string>\tThe key/value pairs of the item\n * @throws exception if the item does not exist in the category\n */\nmap<string, string> ConfigCategory::getValueKVList(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\tif (m_items[i]->m_type.compare(\"kvlist\"))\n\t\t\t{\n\t\t\t\tthrow new ConfigItemNotAList();\n\t\t\t}\n\t\t\tmap<string, string> list;\n\t\t\tDocument d;\n\t\t\td.Parse(m_items[i]->m_value.c_str());\n\t\t\tif 
(d.HasParseError())\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"The JSON value for a kvlist item %s has a parse error: %s, %s\",\n\t\t\t\t\tname.c_str(), GetParseError_En(d.GetParseError()), m_items[i]->m_value.c_str());\n\t\t\t\treturn list;\n\t\t\t}\n\t\t\tfor (auto& v : d.GetObject())\n\t\t\t{\n\t\t\t\tstring key = v.name.GetString();\n\t\t\t\tstring value = to_string(v.value);\n\t\t\t\tlist.insert(pair<string, string>(key, value));\n\t\t\t}\n\t\t\treturn list;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Convert a RapidJSON value to a string\n *\n * @param v\tThe RapidJSON value\n */\nstd::string ConfigCategory::to_string(const rapidjson::Value& v) const\n{\n\tif (v.IsString())\n\t{\n\t\treturn { v.GetString(), v.GetStringLength() };\n\t}\n\telse\n\t{\n\t\tStringBuffer strbuf;\n\t\tWriter<rapidjson::StringBuffer> writer(strbuf);\n\t\tv.Accept(writer);\n\t\treturn { strbuf.GetString(), strbuf.GetLength() };\n\t}\n}\n\n/**\n * Return the requested attribute of a configuration category item\n *\n * @param name\tThe name of the configuration item to return\n * @param itemAttribute\tThe item attribute (such as \"file\", \"order\", \"readonly\"\n * @return\tThe configuration item attribute as string\n * @throws\tConfigItemNotFound if the item does not exist in the category\n *\t\tConfigItemAttributeNotFound if the requested attribute\n *\t\tdoes not exist for the found item.\n */\nstring ConfigCategory::getItemAttribute(const string& itemName,\n\t\t\t\t\tconst ItemAttribute itemAttribute) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (itemName.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\tswitch (itemAttribute)\n\t\t\t{\n\t\t\t\tcase ORDER_ATTR:\n\t\t\t\t\treturn m_items[i]->m_order;\n\t\t\t\tcase READONLY_ATTR:\n\t\t\t\t\treturn m_items[i]->m_readonly;\n\t\t\t\tcase MANDATORY_ATTR:\n\t\t\t\t    return m_items[i]->m_mandatory;\n\t\t\t\tcase FILE_ATTR:\n\t\t\t\t\treturn m_items[i]->m_file;\n\t\t\t\tcase 
VALIDITY_ATTR:\n\t\t\t\t\treturn m_items[i]->m_validity;\n\t\t\t\tcase GROUP_ATTR:\n\t\t\t\t\treturn m_items[i]->m_group;\n\t\t\t\tcase DISPLAY_NAME_ATTR:\n\t\t\t\t\treturn m_items[i]->m_displayName;\n\t\t\t\tcase DEPRECATED_ATTR:\n\t\t\t\t\treturn m_items[i]->m_deprecated;\n\t\t\t\tcase RULE_ATTR:\n\t\t\t\t\treturn m_items[i]->m_rule;\n\t\t\t\tcase BUCKET_PROPERTIES_ATTR:\n\t\t\t\t\treturn m_items[i]->m_bucketProperties;\n\t\t\t\tcase LIST_SIZE_ATTR:\n\t\t\t\t\treturn m_items[i]->m_listSize;\n\t\t\t\tcase ITEM_TYPE_ATTR:\n\t\t\t\t\treturn m_items[i]->m_listItemType;\n\t\t\t\tcase LIST_NAME_ATTR:\n\t\t\t\t    return m_items[i]->m_listName;\n\t\t\t\tcase KVLIST_KEY_NAME_ATTR:\n\t\t\t\t    return m_items[i]->m_kvlistKeyName;\n\t\t\t\tcase KVLIST_KEY_DESCRIPTION_ATTR:\n\t\t\t\t    return m_items[i]->m_kvlistKeyDescription;\n\t\t\t\tcase JSON_SCHEMA_ATTR:\n\t\t\t\t\treturn m_items[i]->m_jsonSchema;\n\t\t\t\tdefault:\n\t\t\t\t\tthrow new ConfigItemAttributeNotFound();\n\t\t\t}\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Set the requested attribute of a configuration category item\n *\n * @param name\tThe name of the configuration item to return\n * @param itemAttribute\tThe item attribute (such as \"file\", \"order\", \"readonly\"\n * @param value\tThe value to set\n * @return\tThe configuration item attribute as string\n * @throws\tConfigItemNotFound if the item does not exist in the category\n *\t\tConfigItemAttributeNotFound if the requested attribute\n *\t\tdoes not exist for the found item.\n */\nbool ConfigCategory::setItemAttribute(const string& itemName,\n\t\t\t\t\tconst ItemAttribute itemAttribute,\n\t\t\t\t\tconst string& value)\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (itemName.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\tswitch (itemAttribute)\n\t\t\t{\n\t\t\t\tcase ORDER_ATTR:\n\t\t\t\t\tm_items[i]->m_order = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase READONLY_ATTR:\n\t\t\t\t\tm_items[i]->m_readonly = 
value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase MANDATORY_ATTR:\n\t\t\t\t    m_items[i]->m_mandatory = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase FILE_ATTR:\n\t\t\t\t\tm_items[i]->m_file = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase MINIMUM_ATTR:\n\t\t\t\t\tm_items[i]->m_minimum = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase MAXIMUM_ATTR:\n\t\t\t\t\tm_items[i]->m_maximum = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase LENGTH_ATTR:\n\t\t\t\t\tm_items[i]->m_length = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase VALIDITY_ATTR:\n\t\t\t\t\tm_items[i]->m_validity = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase GROUP_ATTR:\n\t\t\t\t\tm_items[i]->m_group = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase DISPLAY_NAME_ATTR:\n\t\t\t\t\tm_items[i]->m_displayName = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase DEPRECATED_ATTR:\n\t\t\t\t\tm_items[i]->m_deprecated = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase RULE_ATTR:\n\t\t\t\t\tm_items[i]->m_rule = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase BUCKET_PROPERTIES_ATTR:\n\t\t\t\t\tm_items[i]->m_bucketProperties = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase LIST_SIZE_ATTR:\n\t\t\t\t\tm_items[i]->m_listSize = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase ITEM_TYPE_ATTR:\n\t\t\t\t\tm_items[i]->m_listItemType = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase LIST_NAME_ATTR:\n\t\t\t\t\tm_items[i]->m_listName = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase KVLIST_KEY_NAME_ATTR:\n\t\t\t\t\tm_items[i]->m_kvlistKeyName = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase KVLIST_KEY_DESCRIPTION_ATTR:\n\t\t\t\t\tm_items[i]->m_kvlistKeyDescription = value;\n\t\t\t\t\treturn true;\n\t\t\t\tcase JSON_SCHEMA_ATTR:\n\t\t\t\t    m_items[i]->m_jsonSchema = value;\n\t\t\t\t    return true;\n\t\t\t\tdefault:\n\t\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t}\n\treturn false;\n}\n\n/**\n * Return the type of the configuration category item\n *\n * @param name\tThe name of the configuration item to return\n * @return string\tThe configuration item name\n * @throws 
exception if the item does not exist in the category\n */\nstring ConfigCategory::getType(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_type;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return the description of the configuration category item\n *\n * @param name\tThe name of the configuration item to return\n * @return string\tThe configuration item name\n * @throws exception if the item does not exist in the category\n */\nstring ConfigCategory::getDescription(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_description;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return the default value of the configuration category item\n *\n * @param name\tThe name of the configuration item to return\n * @return string\tThe configuration item name\n * @throws exception if the item does not exist in the category\n */\nstring ConfigCategory::getDefault(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_default;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Update the default value of the configuration category item\n *\n * @param name\tThe name of the configuration item to update\n * @param value\tNew value of the configuration item\n * @return bool\tWhether update succeeded\n */\nbool ConfigCategory::setDefault(const string& name, const string& value)\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\tm_items[i]->m_default = value;\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\n/**\n * Update the value of the configuration category item\n *\n * @param name\tThe name of the configuration item 
to update\n * @param value\tNew value of the configuration item\n * @return bool\tWhether update succeeded\n */\nbool ConfigCategory::setValue(const string& name, const string& value)\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\tm_items[i]->m_value = value;\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\n\n/**\n * Return the display name of the configuration category item\n *\n * @param name\tThe name of the configuration item to return\n * @return string\tThe configuration item name\n * @throws exception if the item does not exist in the category\n */\nstring ConfigCategory::getDisplayName(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_displayName;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return the length value of the configuration category item\n *\n * @param name\tThe name of the configuration item to return\n * @return string\tThe configuration item name\n * @throws exception if the item does not exist in the category\n */\nstring ConfigCategory::getLength(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_length;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return the minimum value of the configuration category item\n *\n * @param name\tThe name of the configuration item to return\n * @return string\tThe configuration item name\n * @throws exception if the item does not exist in the category\n */\nstring ConfigCategory::getMinimum(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_minimum;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return the maximum of the 
configuration category item\n *\n * @param name\tThe name of the configuration item to return\n * @return string\tThe configuration item name\n * @throws exception if the item does not exist in the category\n */\nstring ConfigCategory::getMaximum(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_maximum;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n\n\n/**\n * Return the options of the configuration category item\n *\n * @param name\tThe name of the configuration item to return\n * @return string\tThe configuration item name\n * @throws exception if the item does not exist in the category\n */\nvector<string> ConfigCategory::getOptions(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_options;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return the permissions of the configuration category item\n *\n * @param name\tThe name of the configuration item to return\n * @return vector<string>\tThe configuration item permissions\n * @throws exception if the item does not exist in the category\n */\nvector<string> ConfigCategory::getPermissions(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_permissions;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return true if the user has permission to update the named item\n *\n * @param name\tThe name of the configuration item to return\n * @param rolename\tThe name of the user role to test\n * @return bool\tTrue if the named user can update the configuration item\n * @throws exception if the item does not exist in the category\n */\nbool ConfigCategory::hasPermission(const std::string& name, const std::string& rolename) const\n{\n\tfor 
(unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\tif (m_items[i]->m_permissions.empty())\n\t\t\t\treturn true;\n\t\t\tfor (auto& perm : m_items[i]->m_permissions)\n\t\t\t\tif (rolename.compare(perm) == 0)\n\t\t\t\t\treturn true;\n\t\t\treturn false;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return if the configuration item is a string item\n *\n * @param name\t\tThe name of the item to test\n * @return bool\t\tTrue if the item is a string type\n * @throws exception\tIf the item was not found in the configuration category\n */\nbool ConfigCategory::isString(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_itemType == StringItem;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return if the configuration item is an enumeration item\n *\n * @param name\t\tThe name of the item to test\n * @return bool\t\tTrue if the item is a string type\n * @throws exception\tIf the item was not found in the configuration category\n */\nbool ConfigCategory::isEnumeration(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_itemType == EnumerationItem;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return if the configuration item is a JSON item\n *\n * @param name\t\tThe name of the item to test\n * @return bool\t\tTrue if the item is a JSON type\n * @throws exception\tIf the item was not found in the configuration category\n */\nbool ConfigCategory::isJSON(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_itemType == JsonItem;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return if the configuration item is a 
Bool item\n *\n * @param name\t\tThe name of the item to test\n * @return bool\t\tTrue if the item is a Bool type\n * @throws exception\tIf the item was not found in the configuration category\n */\nbool ConfigCategory::isBool(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_itemType == BoolItem;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return if the configuration item is a Numeric item\n *\n * @param name\t\tThe name of the item to test\n * @return bool\t\tTrue if the item is a Numeric type\n * @throws exception\tIf the item was not found in the configuration category\n */\nbool ConfigCategory::isNumber(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_itemType == NumberItem;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return if the configuration item is a Double item\n *\n * @param name\t\tThe name of the item to test\n * @return bool\t\tTrue if the item is a Double type\n * @throws exception\tIf the item was not found in the configuration category\n */\nbool ConfigCategory::isDouble(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn m_items[i]->m_itemType == DoubleItem;\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return if the configuration item is a deprecated item\n *\n * @param name\t\tThe name of the item to test\n * @return bool\t\tTrue if the item is deprecated\n * @throws exception\tIf the item was not found in the configuration category\n */\nbool ConfigCategory::isDeprecated(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn ! 
m_items[i]->m_deprecated.empty();\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return if the configuration item is a list item\n *\n * @param name\t\tThe name of the item to test\n * @return bool\t\tTrue if the item is a list type\n * @throws exception\tIf the item was not found in the configuration category\n */\nbool ConfigCategory::isList(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn (m_items[i]->m_type.compare(\"list\") == 0);\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Return if the configuration item is a kvlist item\n *\n * @param name\t\tThe name of the item to test\n * @return bool\t\tTrue if the item is a kvlist type\n * @throws exception\tIf the item was not found in the configuration category\n */\nbool ConfigCategory::isKVList(const string& name) const\n{\n\tfor (unsigned int i = 0; i < m_items.size(); i++)\n\t{\n\t\tif (name.compare(m_items[i]->m_name) == 0)\n\t\t{\n\t\t\treturn (m_items[i]->m_type.compare(\"kvlist\") == 0);\n\t\t}\n\t}\n\tthrow new ConfigItemNotFound();\n}\n\n/**\n * Set the description for the configuration category\n *\n * @param description\tThe configuration category description\n */\nvoid ConfigCategory::setDescription(const string& description)\n{\n\tm_description = description;\n}\n\n/**\n * Return JSON string of all category components\n *\n * @param full\tfalse is the default, true evaluates all the members of the CategoryItems\n *\n */\nstring ConfigCategory::toJSON(const bool full) const\n{\nostringstream convert;\n\n\tconvert << \"{ \\\"key\\\" : \\\"\" << JSONescape(m_name) << \"\\\", \";\n\tconvert << \"\\\"description\\\" : \\\"\" << JSONescape(m_description) << \"\\\", \\\"value\\\" : \";\n\t// Add items\n\tconvert << ConfigCategory::itemsToJSON(full);\n\tconvert << \" }\";\n\n\treturn convert.str();\n}\n\n/**\n * Return JSON string of category items only\n *\n * @param 
full\tfalse is the default, true evaluates all the members of the CategoryItems\n *\n */\nstring ConfigCategory::itemsToJSON(const bool full) const\n{\nostringstream convert;\n\n\tconvert << \"{\";\n\tfor (auto it = m_items.cbegin(); it != m_items.cend(); it++)\n\t{\n\t\tconvert << (*it)->toJSON(full);\n\t\tif (it + 1 != m_items.cend() )\n\t\t{\n\t\t\tconvert << \", \";\n\t\t}\n\t}\n\tconvert << \"}\";\n\n\treturn convert.str();\n}\n\n/**\n * Constructor for a configuration item\n * @param name\tThe category item name\n * @param item\tThe item object to add\n * @throw\tConfigMalformed exception\n * @throw\truntime_error exception\n */\nConfigCategory::CategoryItem::CategoryItem(const string& name,\n\t\t\t\t\t   const Value& item)\n{\n\tm_name = name;\n\tm_itemType = UnknownType;\n\tif (! item.IsObject())\n\t{\n\t\tthrow new ConfigMalformed();\n\t}\n\tif (item.HasMember(\"type\"))\n\t{\n\t\tm_type = item[\"type\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_type = \"\";\n\t}\n\n\tif (item.HasMember(\"description\"))\n\t{\n\t\tm_description = item[\"description\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_description = \"\";\n\t}\n\n\tif (item.HasMember(\"order\"))\n\t{\n\t\tm_order = item[\"order\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_order = \"\";\n\t}\n\n\tif (item.HasMember(\"length\"))\n\t{\n\t\tm_length = item[\"length\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_length = \"\";\n\t}\n\n\tif (item.HasMember(\"minimum\"))\n\t{\n\t\tm_minimum = item[\"minimum\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_minimum = \"\";\n\t}\n\n\tif (item.HasMember(\"maximum\"))\n\t{\n\t\tm_maximum = item[\"maximum\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_maximum = \"\";\n\t}\n\n\tif (item.HasMember(\"file\"))\n\t{\n\t\tm_file = item[\"file\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_file = \"\";\n\t}\n\n\tif (item.HasMember(\"readonly\"))\n\t{\n\t\tm_readonly = item[\"readonly\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_readonly = \"\";\n\t}\n\n\tif (item.HasMember(\"mandatory\"))\n\t{\n\t\tm_mandatory 
= item[\"mandatory\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_mandatory = \"\";\n\t}\n\tif (m_type.compare(\"category\") == 0)\n\t{\n\t\tm_itemType = CategoryType;\n\t}\n\tif (m_type.compare(\"script\") == 0)\n\t{\n\t\tm_itemType = ScriptItem;\n\t}\n\tif (m_type.compare(\"code\") == 0)\n\t{\n\t\tm_itemType = CodeItem;\n\t}\n\tif (m_type.compare(\"bucket\") == 0)\n\t{\n\t\tm_itemType = BucketItem;\n\t}\n\tif (m_type.compare(\"list\") == 0)\n\t{\n\t\tm_itemType = ListItem;\n\t}\n\tif (m_type.compare(\"kvlist\") == 0)\n\t{\n\t\tm_itemType = KVListItem;\n\t}\n\n\tif (item.HasMember(\"deprecated\"))\n\t{\n\t\tm_deprecated = item[\"deprecated\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_deprecated = \"\";\n\t}\n\n\tif (item.HasMember(\"displayName\"))\n\t{\n\t\tm_displayName = item[\"displayName\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_displayName = \"\";\n\t}\n\n\tif (item.HasMember(\"validity\"))\n\t{\n\t\tm_validity = item[\"validity\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_validity = \"\";\n\t}\n\tif (item.HasMember(\"group\"))\n\t{\n\t\tm_group = item[\"group\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_group = \"\";\n\t}\n\n\tif (item.HasMember(\"rule\"))\n\t{\n\t\tm_rule = item[\"rule\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_rule = \"\";\n\t}\n\n\tif (item.HasMember(\"properties\"))\n\t{\n\t\tLogger::getLogger()->debug(\"item['properties'].IsString()=%s, item['properties'].IsObject()=%s\", \n\t\t\t\t\t\t\t\t\t\titem[\"properties\"].IsString()?\"true\":\"false\",\n\t\t\t\t\t\t\t\t\t\titem[\"properties\"].IsObject()?\"true\":\"false\");\n\n\t\trapidjson::StringBuffer strbuf;\n\t\trapidjson::Writer<rapidjson::StringBuffer> writer(strbuf);\n\t\titem[\"properties\"].Accept(writer);\n\t\tm_bucketProperties = item[\"properties\"].IsObject() ?\n\t\t\t  // use current string\n\t\t\t  strbuf.GetString() :\n\t\t\t  // Unescape the string\n\t\t\t  JSONunescape(strbuf.GetString());\n\n\t\tLogger::getLogger()->debug(\"m_bucketProperties=%s\", 
m_bucketProperties.c_str());\n\t}\n\telse\n\t{\n\t\tm_bucketProperties = \"\";\n\t}\n\t\n\tif (m_itemType == BucketItem && m_bucketProperties.empty())\n\t{\n\t\tthrow new runtime_error(\"Bucket configuration item is missing the \\\"properties\\\" attribute\");\n\t}\n\n\tif (item.HasMember(\"options\"))\n\t{\n\t\tconst Value& options = item[\"options\"];\n\t\tif (options.IsArray())\n\t\t{\n\t\t\tfor (SizeType i = 0; i < options.Size(); i++)\n\t\t\t{\n\t\t\t\tm_options.push_back(string(options[i].GetString()));\n\t\t\t}\n\t\t}\n\t}\n\n\tif (item.HasMember(\"permissions\"))\n\t{\n\t\tconst Value& permissions = item[\"permissions\"];\n\t\tif (permissions.IsArray())\n\t\t{\n\t\t\tfor (SizeType i = 0; i < permissions.Size(); i++)\n\t\t\t{\n\t\t\t\tm_permissions.push_back(string(permissions[i].GetString()));\n\t\t\t}\n\t\t}\n\t}\n\n\tif (item.HasMember(\"schema\"))\n\t{\n\t\tLogger::getLogger()->debug(\"item['schema'].IsString()=%s, item['schema'].IsObject()=%s\",\n\t\t\t\t\t\t\t\t\t\titem[\"schema\"].IsString()?\"true\":\"false\",\n\t\t\t\t\t\t\t\t\t\titem[\"schema\"].IsObject()?\"true\":\"false\");\n\n\t\trapidjson::StringBuffer strbuf;\n\t\trapidjson::Writer<rapidjson::StringBuffer> writer(strbuf);\n\t\titem[\"schema\"].Accept(writer);\n\t\tm_jsonSchema = item[\"schema\"].IsObject() ?\n\t\t\t  // use current string\n\t\t\t  strbuf.GetString() :\n\t\t\t  // Unescape the string\n\t\t\t  JSONunescape(strbuf.GetString());\n\n\t\tLogger::getLogger()->debug(\"m_jsonSchema=%s\", m_jsonSchema.c_str());\n\t}\n\telse\n\t{\n\t\tm_jsonSchema = \"\";\n\t}\n\n\tif (item.HasMember(\"items\"))\n\t{\n\t\tif (item[\"items\"].IsString())\n\t\t{\n\t\t\tm_listItemType = item[\"items\"].GetString();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tthrow new runtime_error(\"Items configuration item property is not a string\");\n\t\t}\n\t}\n\telse if (m_itemType == ListItem || m_itemType == KVListItem)\n\t{\n\t\tthrow new runtime_error(\"List configuration item is missing the \\\"items\\\" 
attribute\");\n\t}\n\tif (item.HasMember(\"listSize\"))\n\t{\n\t\tif (item[\"listSize\"].IsString())\n\t\t{\n\t\t\tm_listSize = item[\"listSize\"].GetString();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tthrow new runtime_error(\"ListSize configuration item property is not a string\");\n\t\t}\n\t}\n\tif (item.HasMember(\"listName\"))\n\t{\n\t\tif (item[\"listName\"].IsString())\n\t\t{\n\t\t\tm_listName = item[\"listName\"].GetString();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tthrow new runtime_error(\"ListName configuration item property is not a string\");\n\t\t}\n\t}\n\tif (item.HasMember(\"keyName\"))\n\t{\n\t\tif (item[\"keyName\"].IsString())\n\t\t{\n\t\t\tm_kvlistKeyName = item[\"keyName\"].GetString();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tthrow new runtime_error(\"keyName configuration item property is not a string\");\n\t\t}\n\t}\n\tif (item.HasMember(\"keyDescription\"))\n\t{\n\t\tif (item[\"keyDescription\"].IsString())\n\t\t{\n\t\t\tm_kvlistKeyDescription = item[\"keyDescription\"].GetString();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tthrow new runtime_error(\"keyDescription configuration item property is not a string\");\n\t\t}\n\t}\n\tstd::string m_typeUpperCase = m_type;\n\tfor (auto & c: m_typeUpperCase) c = toupper(c);\n\n\t// Item \"value\" can be an escaped JSON string, so check m_type JSON as well\n\tif (item.HasMember(\"value\") &&\n\t    (item[\"value\"].IsObject() || m_typeUpperCase.compare(\"JSON\") == 0))\n\n\t{\n\t\trapidjson::StringBuffer strbuf;\n\t\trapidjson::Writer<rapidjson::StringBuffer> writer(strbuf);\n\t\titem[\"value\"].Accept(writer);\n\t\tm_value = item[\"value\"].IsObject() ?\n\t\t\t  // use current string\n\t\t\t  strbuf.GetString() :\n\t\t\t  // Unescape the string\n\t\t\t  JSONunescape(strbuf.GetString());\n\n\t\t// If it's not a real eject, check the string buffer it is:\n\t\tif (!item[\"value\"].IsObject())\n\t\t{\n\t\t\tboost::replace_all(m_value, \"\\\\n\", \"\");\n\t\t\tDocument check;\n\t\t\tcheck.Parse(m_value.c_str());\n\t\t\tif 
(check.HasParseError())\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"The JSON configuration item %s has a parse error: %s\",\n\t\t\t\t\tm_name.c_str(), GetParseError_En(check.GetParseError()));\n\t\t\t\tthrow new runtime_error(GetParseError_En(check.GetParseError()));\n\t\t\t}\n\t\t\tif (!check.IsObject())\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"The JSON configuration item %s is not a valid JSON object\",\n\t\t\t\t\t\tm_name.c_str());\n\t\t\t\tthrow new runtime_error(\"'value' JSON property is not an object\");\n\t\t\t}\n\t\t}\n\t\tif (m_typeUpperCase.compare(\"JSON\") == 0)\n\t\t{\n\t\t\tm_itemType = JsonItem;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Avoid overwriting if it has already been set\n\t\t\tif (m_itemType == StringItem)\n\t\t\t{\n\t\t\t\tm_itemType = JsonItem;\n\t\t\t}\n\t\t}\n\t}\n\t// Item \"value\" is a Bool or m_type is boolean\n\telse if (item.HasMember(\"value\") &&\n\t\t (item[\"value\"].IsBool() || m_type.compare(\"boolean\") == 0))\n\t{\n\t\tm_value = !item[\"value\"].IsBool() ?\n\t\t\t  // use string value\n\t\t\t  item[\"value\"].GetString() :\n\t\t\t  // use bool value\n\t\t\t  item[\"value\"].GetBool() ? 
\"true\" : \"false\";\t\n\t\t\n\t\tm_itemType = BoolItem;\n\t}\n\t// Item \"value\" is just a string\n\telse if (item.HasMember(\"value\") && item[\"value\"].IsString())\n\t{\n\t\t// Get content of script type item as is\n\t\trapidjson::StringBuffer strbuf;\n\t\trapidjson::Writer<rapidjson::StringBuffer> writer(strbuf);\n\t\titem[\"value\"].Accept(writer);\n\n\t\tif (m_itemType == ScriptItem ||\n\t\t    m_itemType == CodeItem)\n\t\t{\n\t\t\tm_value = strbuf.GetString();\n\t\t\tif (m_value.empty())\n\t\t\t{\n\t\t\t\tm_value = \"\\\"\\\"\";\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_value = JSONunescape(strbuf.GetString());\n\n\t\t\tif (m_options.size() == 0)\n\t\t\t\tm_itemType = StringItem;\n\t\t\telse\n\t\t\t\tm_itemType = EnumerationItem;\n\t\t}\n\t}\n\t// Item \"value\" is a Double\n\telse if (item.HasMember(\"value\") && item[\"value\"].IsDouble())\n\t{\n\t\trapidjson::StringBuffer strbuf;\n\t\trapidjson::Writer<rapidjson::StringBuffer> writer(strbuf);\n\t\titem[\"value\"].Accept(writer);\n\t\tm_value = strbuf.GetString();\n\t\tm_itemType = DoubleItem;\n\t}\n\t// Item \"value\" is a Number\n\telse if (item.HasMember(\"value\") && item[\"value\"].IsNumber())\n\t{\n\t\t// Don't check Uint/Int/Long etc: just get the string value\n\t\trapidjson::StringBuffer strbuf;\n\t\trapidjson::Writer<rapidjson::StringBuffer> writer(strbuf);\n\t\titem[\"value\"].Accept(writer);\n\t\tm_value = strbuf.GetString();\n\t\tm_itemType = NumberItem;\n\t}\n\t// Item \"value\" has an unknwon type so far: set empty string\n\telse\n\t{\n\t\tm_value = \"\";\n\t}\n\n\t// Item \"default\" can be an escaped JSON string, so check m_type JSON as well\n\tif (item.HasMember(\"default\") &&\n\t    (item[\"default\"].IsObject() || m_typeUpperCase.compare(\"JSON\") == 0))\n\t{\n\t\trapidjson::StringBuffer strbuf;\n\t\trapidjson::Writer<rapidjson::StringBuffer> writer(strbuf);\n\t\titem[\"default\"].Accept(writer);\n\t\tm_default = item[\"default\"].IsObject() ?\n\t\t\t  // use current string\n\t\t\t  
strbuf.GetString() :\n\t\t\t  // Unescape the string\n\t\t\t  JSONunescape(strbuf.GetString());\n\n\t\t// If it's not a real object, check that the string buffer contains a valid JSON object:\n\t\tif (!item[\"default\"].IsObject())\n\t\t{\n\t\t\tboost::replace_all(m_default, \"\\\\n\", \"\");\n\t\t\tDocument check;\n\t\t\tcheck.Parse(m_default.c_str());\n\t\t\tif (check.HasParseError())\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"The JSON configuration item %s has a parse error in the default value: %s\",\n\t\t\t\t\tm_name.c_str(), GetParseError_En(check.GetParseError()));\n\t\t\t\tthrow new runtime_error(GetParseError_En(check.GetParseError()));\n\t\t\t}\n\t\t\tif (!check.IsObject())\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"The JSON configuration item %s default is not a valid JSON object\",\n\t\t\t\t\t\tm_name.c_str());\n\t\t\t\tthrow new runtime_error(\"'default' JSON property is not an object\");\n\t\t\t}\n\t\t}\n\t\tif (m_typeUpperCase.compare(\"JSON\") == 0)\n\t\t{\n\n\t\t\tm_itemType = JsonItem;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Avoid overwriting if it has already been set\n\t\t\tif (m_itemType == StringItem)\n\t\t\t{\n\t\t\t\tm_itemType = JsonItem;\n\t\t\t}\n\t\t}\n\t}\n\t// Item \"default\" is a Bool or m_type is boolean\n\telse if (item.HasMember(\"default\") &&\n\t\t (item[\"default\"].IsBool() || m_type.compare(\"boolean\") == 0))\n\t{\n\t\tm_default = !item[\"default\"].IsBool() ?\n\t\t\t    // use string value\n\t\t\t    item[\"default\"].GetString() :\n\t\t\t    // use bool value\n\t\t\t    item[\"default\"].GetBool() ? 
\"true\" : \"false\";\t\n\t\t\n\t\tm_itemType = BoolItem;\n\t}\n\t// Item \"default\" is just a string\n\telse if (item.HasMember(\"default\") && item[\"default\"].IsString())\n\t{\n\t\t// Get content of script type item as is\n\t\trapidjson::StringBuffer strbuf;\n\t\trapidjson::Writer<rapidjson::StringBuffer> writer(strbuf);\n\t\titem[\"default\"].Accept(writer);\n\t\tif (m_itemType == ScriptItem ||\n\t\t    m_itemType == CodeItem)\n\t\t{\n\t\t\tm_default = strbuf.GetString();\n\t\t\tif (m_default.empty())\n\t\t\t{\n\t\t\t\tm_default = \"\\\"\\\"\";\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_default = JSONunescape(strbuf.GetString());\n\t\t\tif (m_options.size() == 0)\n\t\t\t\tm_itemType = StringItem;\n\t\t\telse\n\t\t\t\tm_itemType = EnumerationItem;\n\t\t}\n\t}\n\t// Item \"default\" is a Double\n\telse if (item.HasMember(\"default\") && item[\"default\"].IsDouble())\n\t{\n\t\trapidjson::StringBuffer strbuf;\n\t\trapidjson::Writer<rapidjson::StringBuffer> writer(strbuf);\n\t\titem[\"default\"].Accept(writer);\n\t\tm_default = strbuf.GetString();\n\t\tm_itemType = DoubleItem;\n\t}\n\t// Item \"default\" is a Number\n\telse if (item.HasMember(\"default\") && item[\"default\"].IsNumber())\n\t{\n\t\t// Don't check Uint/Int/Long etc: just get the string value\n\t\trapidjson::StringBuffer strbuf;\n\t\trapidjson::Writer<rapidjson::StringBuffer> writer(strbuf);\n\t\titem[\"default\"].Accept(writer);\n\t\tm_default = strbuf.GetString();\n\t\tm_itemType = NumberItem;\n\t}\n\telse\n\t// Item \"default\" has an unknwon type so far: set empty string\n\t{\n\t\tm_default = \"\";\n\t}\n}\n\n/**\n * Constructor for a configuration item\n */\nConfigCategory::CategoryItem::CategoryItem(const string& name, const std::string& description,\n                                           const std::string& type, const std::string def,\n                                           const std::string& value)\n{\n\tm_name = name;\n\tm_description = description;\n\tm_type = type;\n\tm_default = 
def;\n\tm_value = value;\n\tm_itemType = StringItem;\n}\n\n/**\n * Constructor for a configuration item\n */\nConfigCategory::CategoryItem::CategoryItem(const string& name, const std::string& description,\n                                           const std::string def, const std::string& value,\n\t\t\t\t\t   const vector<string> options)\n{\n\tm_name = name;\n\tm_description = description;\n\tm_type = \"enumeration\";\n\tm_default = def;\n\tm_value = value;\n\tm_itemType = StringItem;\n\tfor (auto it = options.cbegin(); it != options.cend(); it++)\n\t{\n\t\tm_options.push_back(*it);\n\t}\n}\n\n/**\n * Copy constructor for configuration item\n */\nConfigCategory::CategoryItem::CategoryItem(const CategoryItem& rhs)\n{\n\tm_name = rhs.m_name;\n\tm_displayName = rhs.m_displayName;\n\tm_type = rhs.m_type;\n\tm_default = rhs.m_default;\n\tm_value = rhs.m_value;\n\tm_description = rhs.m_description;\n       \tm_order = rhs.m_order;\n       \tm_readonly = rhs.m_readonly;\n       \tm_mandatory = rhs.m_mandatory;\n       \tm_deprecated = rhs.m_deprecated;\n       \tm_length = rhs.m_length;\n       \tm_minimum = rhs.m_minimum;\n       \tm_maximum = rhs.m_maximum;\n       \tm_filename = rhs.m_filename;\n\tfor (auto it = rhs.m_options.cbegin(); it != rhs.m_options.cend(); it++)\n\t{\n\t\tm_options.push_back(*it);\n\t}\n       \tm_file = rhs.m_file;\n       \tm_itemType = rhs.m_itemType;\n\tm_validity = rhs.m_validity;\n\tm_group = rhs.m_group;\n\tm_rule = rhs.m_rule;\n\tm_bucketProperties = rhs.m_bucketProperties;\n\tm_listSize = rhs.m_listSize;\n\tm_listItemType = rhs.m_listItemType;\n\tm_listName = rhs.m_listName;\n\tm_kvlistKeyName = rhs.m_kvlistKeyName;\n\tm_kvlistKeyDescription = rhs.m_kvlistKeyDescription;\n\tfor (auto it = rhs.m_permissions.cbegin(); it != rhs.m_permissions.cend(); it++)\n\t{\n\t\tm_permissions.push_back(*it);\n\t}\n\tm_jsonSchema = rhs.m_jsonSchema;\n}\n\n/**\n * Create a JSON representation of the configuration item\n *\n * @param full\tfalse is the 
default, true evaluates all the members of the CategoryItem\n *\n */\nstring ConfigCategory::CategoryItem::toJSON(const bool full) const\n{\nostringstream convert;\n\n\tconvert << \"\\\"\" << JSONescape(m_name) << \"\\\" : { \";\n\tconvert << \"\\\"description\\\" : \\\"\" << JSONescape(m_description) << \"\\\", \";\n\tif (! m_displayName.empty())\n\t{\n\t\tconvert << \"\\\"displayName\\\" : \\\"\" << m_displayName << \"\\\", \";\n\t}\n\tconvert << \"\\\"type\\\" : \\\"\" << m_type << \"\\\", \";\n\tif (m_options.size() > 0)\n\t{\n\t\tconvert << \"\\\"options\\\" : [ \";\n\t\tfor (int i = 0; i < m_options.size(); i++)\n\t\t{\n\t\t\tif (i > 0)\n\t\t\t\tconvert << \",\";\n\t\t\tconvert << \"\\\"\" << m_options[i] << \"\\\"\";\n\t\t}\n\t\tconvert << \"], \";\n\t}\n\n\tif (m_permissions.size() > 0)\n\t{\n\t\tconvert << \"\\\"permissions\\\" : [ \";\n\t\tfor (int i = 0; i < m_permissions.size(); i++)\n\t\t{\n\t\t\tif (i > 0)\n\t\t\t\tconvert << \",\";\n\t\t\tconvert << \"\\\"\" << m_permissions[i] << \"\\\"\";\n\t\t}\n\t\tconvert << \"], \";\n\t}\n\n\tif (m_itemType == StringItem ||\n\t    m_itemType == BoolItem ||\n\t    m_itemType == EnumerationItem ||\n\t    m_itemType == BucketItem ||\n\t    m_itemType == ListItem ||\n\t    m_itemType == KVListItem)\n\t{\n\t\tconvert << \"\\\"value\\\" : \\\"\" << JSONescape(m_value) << \"\\\", \";\n\t\tconvert << \"\\\"default\\\" : \\\"\" << JSONescape(m_default) << \"\\\"\";\n\t}\n\telse if (m_itemType == JsonItem ||\n\t\t m_itemType == NumberItem ||\n\t\t m_itemType == DoubleItem ||\n\t\t m_itemType == ScriptItem ||\n\t\t m_itemType == CodeItem)\n\t{\n\t\tconvert << \"\\\"value\\\" : \" << m_value << \", \";\n\t\tconvert << \"\\\"default\\\" : \" << m_default;\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"Unknown item type in configuration category\");\n\t}\n\n\tif (full)\n\t{\n\t\tif (!m_order.empty())\n\t\t{\n\t\t\tconvert << \", \\\"order\\\" : \\\"\" << m_order << \"\\\"\";\n\t\t}\n\n\t        if 
(!m_length.empty())\n\t\t{\n\t\t\tconvert << \", \\\"length\\\" : \\\"\" << m_length << \"\\\"\";\n\t\t}\n\n\t\tif (!m_minimum.empty())\n\t\t{\n\t\t\tconvert << \", \\\"minimum\\\" : \\\"\" << m_minimum << \"\\\"\";\n\t\t}\n\n\t\tif (!m_maximum.empty())\n\t\t{\n\t\t\tconvert << \", \\\"maximum\\\" : \\\"\" << m_maximum << \"\\\"\";\n\t\t}\n\n\t\tif (!m_readonly.empty())\n\t\t{\n\t\t\tconvert << \", \\\"readonly\\\" : \\\"\" << m_readonly << \"\\\"\";\n\t\t}\n\n\t\tif (!m_mandatory.empty())\n\t\t{\n\t\t\tconvert << \", \\\"mandatory\\\" : \\\"\" << m_mandatory << \"\\\"\";\n\t\t}\n\n\t\tif (!m_validity.empty())\n\t\t{\n\t\t\tconvert << \", \\\"validity\\\" : \\\"\" << JSONescape(m_validity) << \"\\\"\";\n\t\t}\n\n\t\tif (!m_rule.empty())\n\t\t{\n\t\t\tconvert << \", \\\"rule\\\" : \\\"\" << JSONescape(m_rule) << \"\\\"\";\n\t\t}\n\n\t\tif (!m_bucketProperties.empty())\n\t\t{\n\t\t\tconvert << \", \\\"properties\\\" : \" << m_bucketProperties;\n\t\t}\n\n\t\tif (!m_group.empty())\n\t\t{\n\t\t\tconvert << \", \\\"group\\\" : \\\"\" << m_group << \"\\\"\";\n\t\t}\n\n\t\tif (!m_file.empty())\n\t\t{\n\t\t\tconvert << \", \\\"file\\\" : \\\"\" << m_file << \"\\\"\";\n\t\t}\n\n\t\tif (!m_listSize.empty())\n\t\t{\n\t\t\tconvert << \", \\\"listSize\\\" : \\\"\" << m_listSize << \"\\\"\";\n\t\t}\n\t\tif (!m_listItemType.empty())\n\t\t{\n\t\t\tconvert << \", \\\"items\\\" : \\\"\" << m_listItemType << \"\\\"\";\n\t\t}\n\t\tif (!m_listName.empty())\n\t\t{\n\t\t\tconvert << \", \\\"listName\\\" : \\\"\" << m_listName << \"\\\"\";\n\t\t}\n\t\tif (!m_kvlistKeyName.empty())\n\t\t{\n\t\t\tconvert << \", \\\"keyName\\\" : \\\"\" << m_kvlistKeyName << \"\\\"\";\n\t\t}\n\t\tif (!m_kvlistKeyDescription.empty())\n\t\t{\n\t\t\tconvert << \", \\\"keyDescription\\\" : \\\"\" << m_kvlistKeyDescription << \"\\\"\";\n\t\t}\n\t\tif (!m_jsonSchema.empty())\n\t\t{\n\t\t\tconvert << \", \\\"schema\\\" : \" << m_jsonSchema;\n\t\t}\n\t}\n\tconvert << \" }\";\n\n\treturn convert.str();\n}\n\n/**\n * 
Return only \"default\" item values\n */\nstring ConfigCategory::CategoryItem::defaultToJSON() const\n{\nostringstream convert;\n\n\tconvert << \"\\\"\" << JSONescape(m_name) << \"\\\" : { \";\n\tconvert << \"\\\"description\\\" : \\\"\" << JSONescape(m_description) << \"\\\", \";\n\tconvert << \"\\\"type\\\" : \\\"\" << m_type << \"\\\"\";\n\n\tif (!m_order.empty())\n\t{\n\t\tconvert << \", \\\"order\\\" : \\\"\" << m_order << \"\\\"\";\n\t}\n\n\tif (!m_displayName.empty())\n\t{\n\t\tconvert << \", \\\"displayName\\\" : \\\"\" << m_displayName << \"\\\"\";\n\t}\n\n\tif (!m_length.empty())\n\t{\n\t\tconvert << \", \\\"length\\\" : \\\"\" << m_length << \"\\\"\";\n\t}\n\n\tif (!m_minimum.empty())\n\t{\n\t\tconvert << \", \\\"minimum\\\" : \\\"\" << m_minimum << \"\\\"\";\n\t}\n\n\tif (!m_maximum.empty())\n\t{\n\t\tconvert << \", \\\"maximum\\\" : \\\"\" << m_maximum << \"\\\"\";\n\t}\n\n\tif (!m_readonly.empty())\n\t{\n\t\tconvert << \", \\\"readonly\\\" : \\\"\" << m_readonly << \"\\\"\";\n\t}\n\n\tif (!m_mandatory.empty())\n\t{\n\t\tconvert << \", \\\"mandatory\\\" : \\\"\" << m_mandatory << \"\\\"\";\n\t}\n\n\tif (!m_validity.empty())\n\t{\n\t\tconvert << \", \\\"validity\\\" : \\\"\" << JSONescape(m_validity) << \"\\\"\";\n\t}\n\n\tif (!m_rule.empty())\n\t{\n\t\tconvert << \", \\\"rule\\\" : \\\"\" << JSONescape(m_rule) << \"\\\"\";\n\t}\n\n\tif (!m_bucketProperties.empty())\n\t{\n\t\tconvert << \", \\\"properties\\\" : \" << m_bucketProperties;\n\t}\n\n\tif (!m_group.empty())\n\t{\n\t\tconvert << \", \\\"group\\\" : \\\"\" << m_group << \"\\\"\";\n\t}\n\n\tif (!m_file.empty())\n\t{\n\t\tconvert << \", \\\"file\\\" : \\\"\" << m_file << \"\\\"\";\n\t}\n\tif (m_options.size() > 0)\n\t{\n\t\tconvert << \", \\\"options\\\" : [ \";\n\t\tfor (int i = 0; i < m_options.size(); i++)\n\t\t{\n\t\t\tif (i > 0)\n\t\t\t\tconvert << \",\";\n\t\t\tconvert << \"\\\"\" << m_options[i] << \"\\\"\";\n\t\t}\n\t\tconvert << \"]\";\n\t}\n\tif (m_permissions.size() > 
0)\n\t{\n\t\tconvert << \", \\\"permissions\\\" : [ \";\n\t\tfor (int i = 0; i < m_permissions.size(); i++)\n\t\t{\n\t\t\tif (i > 0)\n\t\t\t\tconvert << \",\";\n\t\t\tconvert << \"\\\"\" << m_permissions[i] << \"\\\"\";\n\t\t}\n\t\tconvert << \"]\";\n\t}\n\tif (!m_listSize.empty())\n\t{\n\t\tconvert << \", \\\"listSize\\\" : \\\"\" << m_listSize << \"\\\"\";\n\t}\n\tif (!m_listItemType.empty())\n\t{\n\t\tconvert << \", \\\"items\\\" : \\\"\" << m_listItemType << \"\\\"\";\n\t}\n\tif (!m_listName.empty())\n\t{\n\t    convert << \", \\\"listName\\\" : \\\"\" << m_listName << \"\\\"\";\n\t}\n\tif (!m_kvlistKeyName.empty())\n\t{\n\t    convert << \", \\\"keyName\\\" : \\\"\" << m_kvlistKeyName << \"\\\"\";\n\t}\n\tif (!m_kvlistKeyDescription.empty())\n\t{\n\t    convert << \", \\\"keyDescription\\\" : \\\"\" << m_kvlistKeyDescription << \"\\\"\";\n\t}\n\tif (!m_jsonSchema.empty())\n\t{\n\t\tconvert << \", \\\"schema\\\" : \" << m_jsonSchema;\n\t}\n\n\tif (m_itemType == StringItem ||\n\t    m_itemType == EnumerationItem ||\n\t    m_itemType == BoolItem ||\n\t    m_itemType == BucketItem ||\n\t    m_itemType == ListItem ||\n\t    m_itemType == KVListItem)\n\t{\n\t\tconvert << \", \\\"default\\\" : \\\"\" << JSONescape(m_default) << \"\\\" }\";\n\t}\n\t/**\n\t * NOTE:\n\t * These data types must be all escaped.\n\t * \"default\" items in the DefaultConfigCategory class are sent to\n\t * ConfigurationManager interface which requires string values only:\n\t *\n\t * examples:\n\t * we must use \"100\" not 100\n\t * and for JSON\n\t * \"{\\\"pipeline\\\":[\\\"scale\\\"]}\" not {\"pipeline\":[\"scale\"]}\n\t */\n\telse if (m_itemType == JsonItem ||\n\t\t m_itemType == NumberItem ||\n\t\t m_itemType == DoubleItem ||\n\t\t m_itemType == ScriptItem ||\n\t\t m_itemType == CodeItem)\n\t{\n\t\tconvert << \", \\\"default\\\" : \\\"\" << JSONescape(m_default) << \"\\\" }\";\n\t}\n\treturn convert.str();\n}\n\n/**\n * Parse BucketItem value in JSON dict format and return the key value 
pairs within that\n *\n * @param json\tJSON string representing the BucketItem value\n * @return\t\tVector with pairs of found key/value string pairs in BucketItem value\n */\nvector<pair<string,string>>* ConfigCategory::parseBucketItemValue(const string & json)\n{\n\tDocument document;\n\tif (document.Parse(json.c_str()).HasParseError())\n\t{\n\t\tLogger::getLogger()->error(\"parseBucketItemValue(): The provided JSON string has a parse error: %s\",\n\t\t\t\tGetParseError_En(document.GetParseError()));\n\t\treturn NULL;\n\t}\n\t\n\tvector<pair<string,string>> *vec = new vector<pair<string,string>>;\n\t\n\tfor (const auto & m : document.GetObject())\n\t\tvec->emplace_back(make_pair<string,string>(m.name.GetString(), m.value.GetString()));\n\n\treturn vec;\n}\n\n\n// DefaultConfigCategory constructor\nDefaultConfigCategory::DefaultConfigCategory(const string& name, const string& json) :\n                                            ConfigCategory::ConfigCategory(name, json)\n{\n}\n\n/**\n * Destructor for the default configuration category. 
Simply call the base class\n * destructor.\n */\nDefaultConfigCategory::~DefaultConfigCategory()\n{\n}\n\n\n/**\n * Return JSON string of all category components\n * of a DefaultConfigCategory class\n */\nstring DefaultConfigCategory::toJSON() const\n{\nostringstream convert;\n\n\tconvert << \"{ \";\n\tconvert << \"\\\"key\\\" : \\\"\" << JSONescape(m_name) << \"\\\", \";\n\tconvert << \"\\\"description\\\" : \\\"\" << JSONescape(m_description) << \"\\\", \\\"value\\\" : \";\n\t// Add items\n\tconvert << DefaultConfigCategory::itemsToJSON();\n\tconvert << \" }\";\n\n\treturn convert.str();\n}\n\n/**\n * Return DefaultConfigCategory \"default\" items only\n */\nstring DefaultConfigCategory::itemsToJSON() const\n{\nostringstream convert;\n        \n\tconvert << \"{\";\n\tfor (auto it = m_items.cbegin(); it != m_items.cend(); it++)\n\t{       \n\t\tconvert << (*it)->defaultToJSON();\n\t\tif (it + 1 != m_items.cend() )\n\t\t{       \n\t\t\tconvert << \", \";\n\t\t}\n\t}\n\tconvert << \"}\";\n\n\treturn convert.str();\n}\n\n/**\n * Return JSON string of a category item\n * @param itemName\tThe given item within current category\n * @return\t\tThe JSON string version of itemName\n *\t\t\tIf not found {} is returned\n */\nstring ConfigCategory::itemToJSON(const string& itemName) const\n{\n\tostringstream convert;\n        \n        convert << \"{\";\n        for (auto it = m_items.cbegin(); it != m_items.cend(); it++)\n        {\n\t\tif ((*it)->m_name.compare(itemName) == 0)\n\t\t{\n                \tconvert << (*it)->toJSON();\n\t\t}\n\t}\n\tconvert << \"}\";\n        \n\treturn convert.str();\n}\n\n/**\n * Configuration category change constructor\n *\n * @param json\tJSON content of the configuration category change\n */\nConfigCategoryChange::ConfigCategoryChange(const string& json)\n{\n\tDocument doc;\n\tdoc.Parse(json.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tLogger::getLogger()->error(\"Configuration parse error in 
category change %s: %s at %d\",\n\t\t\tjson.c_str(), GetParseError_En(doc.GetParseError()),\n\t\t\t(unsigned)doc.GetErrorOffset());\n\t\tthrow new ConfigMalformed();\n\t}\n\tif (!doc.HasMember(\"category\"))\n\t{\n\t\tLogger::getLogger()->error(\"Configuration change is missing a category element '%s'\",\n\t\t\tjson.c_str());\n\t\tthrow new ConfigMalformed();\n\t}\n\n\tif (doc.HasMember(\"parent_category\"))\n\t{\n\t\tm_parent_name=doc[\"parent_category\"].GetString();\n\t} else {\n\t\tm_parent_name=\"\";\n\t}\n\n\tif (!doc.HasMember(\"items\"))\n\t{\n\t\tLogger::getLogger()->error(\"Configuration change is missing an items element '%s'\",\n\t\t\tjson.c_str());\n\t\tthrow new ConfigMalformed();\n\t}\n\n\tm_name = doc[\"category\"].GetString();\n\tconst Value& items = doc[\"items\"];\n\tfor (Value::ConstMemberIterator itr = items.MemberBegin(); itr != items.MemberEnd(); ++itr)\n\t{\n\t\ttry\n\t\t{\n\t\t\tm_items.push_back(new CategoryItem(itr->name.GetString(), itr->value));\n\t\t}\n\t\tcatch (exception* e)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Configuration parse error in category %s item '%s', %s: %s\",\n\t\t\t\tm_name.c_str(),\n\t\t\t\titr->name.GetString(),\n\t\t\t\tjson.c_str(),\n\t\t\t\te->what());\n\t\t\tdelete e;\n\t\t\tthrow ConfigMalformed();\n\t\t}\n\t\tcatch (...)\n\t\t{\n\t\t\tthrow;\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "C/common/cryptography_utils.cpp",
    "content": "/*\n * Fledge utilities functions for generating cryptographic hash\n *\n * Copyright (c) 2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Devki Nandan Ghildiyal\n */\n\n#include <sstream>\n#include <iomanip>\n#include <stdexcept>\n#include \"cryptography_utils.h\"\n\n/*\n*\n* Generates SHA256 Hash\n*\n* @param input\tJSON string for the reading\n* @return SHA256 Hash String\n*/\n\nstd::string compute_sha256(const std::string& input)\n{\n    #ifdef OPENSSL_VERSION_NUMBER\n      #if OPENSSL_VERSION_NUMBER >= 0x30000000L\n        // Code for OpenSSL 3.0.x\n        unsigned char digest[SHA256_DIGEST_LENGTH];\n        EVP_MD_CTX *ctx = EVP_MD_CTX_new();\n        \n        if (!ctx) {\n            throw std::runtime_error(\"Failed to create OpenSSL EVP_MD_CTX\");\n        }\n\n        if (EVP_DigestInit_ex(ctx, EVP_sha256(), nullptr) != 1 ||\n            EVP_DigestUpdate(ctx, input.data(), input.size()) != 1 ||\n            EVP_DigestFinal_ex(ctx, digest, nullptr) != 1) \n        {\n            EVP_MD_CTX_free(ctx);\n            throw std::runtime_error(\"OpenSSL SHA-256 computation failed\");\n        }\n\n        EVP_MD_CTX_free(ctx);\n\n        std::ostringstream ss;\n        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++) \n        {\n            ss << std::setw(2) << std::setfill('0') << std::hex << (int)digest[i];\n        }\n\n        return ss.str();\n      #else\n        // Code for OpenSSL 1.1.x\n        unsigned char digest[SHA256_DIGEST_LENGTH];\n        SHA256_CTX sha256Context;\n        SHA256_Init(&sha256Context);\n        SHA256_Update(&sha256Context, input.c_str(), input.length());\n        SHA256_Final(digest, &sha256Context);\n        std::ostringstream ss;\n        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)\n        {\n            ss << std::setw(2) << std::setfill('0') << std::hex << (int)digest[i];\n        }\n        return ss.str();\n      #endif\n    #else\n        // No OpenSSL at compile time: fail explicitly rather than fall off the end of the function\n        throw std::runtime_error(\"OpenSSL is required for SHA-256 support\");\n    #endif\n}\n\nstd::string compute_md5(const std::string& 
input)\n{\n#ifdef OPENSSL_VERSION_NUMBER\n  #if OPENSSL_VERSION_NUMBER >= 0x30000000L\n    // Code for OpenSSL 3.0.x\n    unsigned char digest[MD5_DIGEST_LENGTH];\n    EVP_MD_CTX *ctx = EVP_MD_CTX_new();\n\n    if (!ctx) {\n        throw std::runtime_error(\"Failed to create OpenSSL EVP_MD_CTX\");\n    }\n\n    if (EVP_DigestInit_ex(ctx, EVP_md5(), nullptr) != 1 ||\n        EVP_DigestUpdate(ctx, input.data(), input.size()) != 1 ||\n        EVP_DigestFinal_ex(ctx, digest, nullptr) != 1) \n    {\n        EVP_MD_CTX_free(ctx);\n        throw std::runtime_error(\"OpenSSL MD5 computation failed\");\n    }\n\n    EVP_MD_CTX_free(ctx);\n\n    std::ostringstream ss;\n    for (int i = 0; i < MD5_DIGEST_LENGTH; i++) \n    {\n        ss << std::setw(2) << std::setfill('0') << std::hex << (int)digest[i];\n    }\n\n    return ss.str();\n  #else\n    // Code for OpenSSL 1.1.x\n    unsigned char digest[MD5_DIGEST_LENGTH];\n    MD5_CTX md5Context;\n    MD5_Init(&md5Context);\n    MD5_Update(&md5Context, input.c_str(), input.length());\n    MD5_Final(digest, &md5Context);\n\n    std::ostringstream ss;\n    for (int i = 0; i < MD5_DIGEST_LENGTH; i++)\n    {\n        ss << std::setw(2) << std::setfill('0') << std::hex << (int)digest[i];\n    }\n\n    return ss.str();\n  #endif\n#else\n    // No OpenSSL at compile time: fail explicitly rather than fall off the end of the function\n    throw std::runtime_error(\"OpenSSL is required for MD5 support\");\n#endif\n}\n"
  },
  {
    "path": "C/common/databuffer.cpp",
    "content": "/*\n * Fledge\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <databuffer.h>\n#include <exception>\n#include <stdexcept>\n#include <stdlib.h>\n#include <string.h>\n\nusing namespace std;\n/**\n * Buffer constructor\n *\n * @param itemSize\tThe size of each item in the buffer\n * @param len\t\tThe length of the buffer, i.e. how many items can it hold\n */\nDataBuffer::DataBuffer(size_t itemSize, size_t len) : m_itemSize(itemSize), m_len(len)\n{\n\tm_data = calloc(len, itemSize);\n\tif (m_data == NULL)\n\t\tthrow runtime_error(\"Insufficient memory to create buffer\");\n}\n\n/**\n * DataBuffer destructor\n */\nDataBuffer::~DataBuffer()\n{\n\tif (m_data)\n\t\tfree(m_data);\n\tm_data = NULL;\n}\n\n/**\n * DataBuffer copy constructor\n *\n * @param rhs\tDataBuffer to copy\n */\nDataBuffer::DataBuffer(const DataBuffer& rhs)\n{\n\tm_itemSize = rhs.m_itemSize;\n\tm_len = rhs.m_len;\n\tm_data = calloc(m_len, m_itemSize);\n\tif (m_data)\n\t\tmemcpy(m_data, rhs.m_data, m_itemSize * m_len);\n\telse\n\t\tthrow runtime_error(\"Insufficient memory to copy databuffer\");\n}\n\n/**\n * Populate the contents of a DataBuffer\n *\n * @param src\t\tSource of the data\n * @param len\t\tNumber of bytes in the source to copy\n */\nvoid DataBuffer::populate(void *src, int len)\n{\n\tsize_t toCopy = min((size_t)len, m_len * m_itemSize);\n\tmemcpy(m_data, src, toCopy);\n}\n"
  },
  {
    "path": "C/common/datapoint.cpp",
    "content": "/*\n * Fledge\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora\n */\n\n#include <string>\n#include <sstream>\n#include <iomanip>\n#include <cfloat>\n#include <vector>\n#include <logger.h>\n#include <datapoint.h>\n#include <exception>\n#include <base64databuffer.h>\n#include <base64dpimage.h>\n\n /**\n * Return the value as a string\n *\n * @return\tString representing the DatapointValue object\n */\nstd::string DatapointValue::toString() const\n{\n\tstd::ostringstream ss;\n\n\tswitch (m_type)\n\t{\n\tcase T_INTEGER:\n\t\tss << m_value.i;\n\t\treturn ss.str();\n\tcase T_FLOAT:\n\t\t{\n\t\t\tchar tmpBuffer[100];\n\t\t\tstd::string s;\n\n\t\t\tsnprintf(tmpBuffer, sizeof(tmpBuffer), \"%.10f\", m_value.f);\n\t\t\ts= tmpBuffer;\n\n\t\t\t// remove trailing 0's\n\t\t\tif (s[s.size()-1]== '0') {\n\t\t\t\ts.erase(s.find_last_not_of('0') + 1, std::string::npos);\n\n\t\t\t\t// add '0' i\n\t\t\t\tif (s[s.size()-1]== '.')\n\t\t\t\t\ts.append(\"0\");\n\n\t\t\t}\n\n\t\t\treturn s;\n\t\t}\n\tcase T_FLOAT_ARRAY:\n\t\tss << \"[\";\n\t\tfor (auto it = m_value.a->begin();\n\t\t     it != m_value.a->end();\n\t\t     ++it)\n\t\t{\n\t\t\tif (it != m_value.a->begin())\n\t\t\t{\n\t\t\t\tss << \", \";\n\t\t\t}\n\t\t\tss << *it;\n\t\t}\n\t\tss << \"]\";\n\t\treturn ss.str();\n\tcase T_DP_DICT:\n\tcase T_DP_LIST:\n\t\tss << ((m_type==T_DP_DICT)?'{':'[');\n\t\tfor (auto it = m_value.dpa->begin(); // std::vector<Datapoint *>*\tdpa;\n\t\t     it != m_value.dpa->end();\n\t\t     ++it)\n\t\t{\n\t\t\tif (it != m_value.dpa->begin())\n\t\t\t{\n\t\t\t\tss << \", \";\n\t\t\t}\n\t\t\tss << ((m_type==T_DP_DICT)?(*it)->toJSONProperty():(*it)->getData().toString());\n\t\t}\n\t\tss << ((m_type==T_DP_DICT)?'}':']');\n\t\treturn ss.str();\n\tcase T_STRING:\n\t\tss << \"\\\"\";\n\t\tss << escape(*m_value.str);\n\t\tss << \"\\\"\";\n\t\treturn ss.str();\n\tcase T_DATABUFFER:\n\t\tss << \"\\\"__DATABUFFER:\" \n\t\t\t<< 
((Base64DataBuffer *)m_value.dataBuffer)->encode()\n\t\t\t<< \"\\\"\";\n\t\treturn ss.str();\n\tcase T_IMAGE:\n\t\tss << \"\\\"__DPIMAGE:\" \n\t\t\t<< ((Base64DPImage *)m_value.image)->encode()\n\t\t\t<< \"\\\"\";\n\t\treturn ss.str();\n\tcase T_2D_FLOAT_ARRAY:\n\t\t{\n\t\tss << \"[ \";\n\t\tbool first = true;\n\t\tfor (auto row : *(m_value.a2d))\n\t\t{\n\t\t\tif (first)\n\t\t\t\tfirst = false;\n\t\t\telse\n\t\t\t\tss << \", \";\n\t\t\tss << \"[\";\n\t\t\tfor (auto it = row->begin();\n\t\t\t     it != row->end();\n\t\t\t     ++it)\n\t\t\t{\n\t\t\t\tif (it != row->begin())\n\t\t\t\t{\n\t\t\t\t\tss << \", \";\n\t\t\t\t}\n\t\t\t\tss << *it;\n\t\t\t}\n\t\t\tss << \"]\";\n\t\t}\n\t\tss << \" ]\";\n\t\treturn ss.str();\n\t\t}\n\tdefault:\n\t\tthrow std::runtime_error(\"No string representation for datapoint type\");\n\t}\n}\n\n/**\n * Delete the DatapointValue along with possibly nested Datapoint objects\n */\nvoid DatapointValue::deleteNestedDPV()\n{\n\tif (m_type == T_STRING)\n\t{\n\t\tdelete m_value.str;\n\t\tm_value.str = NULL;\n\t}\n\telse if (m_type == T_FLOAT_ARRAY)\n\t{\n\t\tdelete m_value.a;\n\t\tm_value.a = NULL;\n\t}\n\telse if (m_type == T_DATABUFFER)\n\t{\n\t\tdelete m_value.dataBuffer;\n\t\tm_value.dataBuffer = NULL;\n\t}\n\telse if (m_type == T_IMAGE)\n\t{\n\t\tdelete m_value.image;\n\t\tm_value.image = NULL;\n\t}\n\telse if (m_type == T_DP_DICT ||\n\t\t m_type == T_DP_LIST)\n\t{\n\t\tif (m_value.dpa) {\n\t\t\tfor (auto it = m_value.dpa->begin();\n\t\t\t\t it != m_value.dpa->end();\n\t\t\t\t ++it)\n\t\t\t{\n\t\t\t\t// Call DatapointValue destructor\n\t\t\t\tdelete(*it);\n\t\t\t}\n\n\t\t\t// Remove vector pointer\n\t\t\tdelete m_value.dpa;\n\t\t\tm_value.dpa = NULL;\n\t\t}\n\t}\n\telse if (m_type == T_2D_FLOAT_ARRAY)\n\t{\n\t\tfor (auto it = m_value.a2d->begin();\n\t\t\t\t it != m_value.a2d->end();\n\t\t\t\t ++it)\n\t\t{\n\t\t\tdelete(*it);\n\t\t}\n\t\tdelete m_value.a2d;\n\t\tm_value.a2d = NULL;\n\t}\n}\n\n/**\n * DatapointValue class destructor\n 
*/\nDatapointValue::~DatapointValue()\n{\n\t// Remove memory allocated by datapoints\n\t// along with possibly nested Datapoint objects\n\tdeleteNestedDPV();\n}\n\n/**\n * Copy constructor\n */\nDatapointValue::DatapointValue(const DatapointValue& obj)\n{\n\tm_type = obj.m_type;\n\tswitch (m_type)\n\t{\n\t\tcase T_STRING:\n\t\t\tm_value.str = new std::string(*(obj.m_value.str));\n\t\t\tbreak;\n\t\tcase T_FLOAT_ARRAY:\n\t\t\tm_value.a = new std::vector<double>(*(obj.m_value.a));\n\t\t\tbreak;\n\t\tcase T_DP_DICT:\n\t\tcase T_DP_LIST:\n\t\t\tm_value.dpa = new std::vector<Datapoint*>();\n\t\t\tfor (auto it = obj.m_value.dpa->begin();\n\t\t\t\tit != obj.m_value.dpa->end();\n\t\t\t\t++it)\n\t\t\t{\n\t\t\t\t// Add a newly allocated datapoint to the vector\n\t\t\t\t// using the copy constructor\n\t\t\t\tm_value.dpa->emplace_back(new Datapoint(**it));\n\t\t\t}\n\t\t\tbreak;\n\t\tcase T_IMAGE:\n\t\t\tm_value.image = new DPImage(*(obj.m_value.image));\n\t\t\tbreak;\n\t\tcase T_DATABUFFER:\n\t\t\tm_value.dataBuffer = new DataBuffer(*(obj.m_value.dataBuffer));\n\t\t\tbreak;\n\t\tcase T_2D_FLOAT_ARRAY:\n\t\t\tm_value.a2d = new std::vector< std::vector<double>* >;\n\t\t\tfor (auto row : *obj.m_value.a2d)\n\t\t\t{\n\t\t\t\t// Deep copy each row\n\t\t\t\tm_value.a2d->push_back(new std::vector<double>(*row));\n\t\t\t}\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tm_value = obj.m_value;\n\t\t\tbreak;\n\t}\n}\n\n/**\n * Assignment Operator\n */\nDatapointValue& DatapointValue::operator=(const DatapointValue& rhs)\n{\n\tif (this == &rhs)\n\t{\n\t\t// Guard against self-assignment\n\t\treturn *this;\n\t}\n\n\t// Remove the previous value, including any nested\n\t// Datapoint objects\n\tdeleteNestedDPV();\n\n\tm_type = rhs.m_type;\n\n\tswitch (m_type)\n\t{\n\tcase T_STRING:\n\t\tm_value.str = new std::string(*(rhs.m_value.str));\n\t\tbreak;\n\tcase T_FLOAT_ARRAY:\n\t\tm_value.a = new std::vector<double>(*(rhs.m_value.a));\n\t\tbreak;\n\tcase T_DP_DICT:\n\tcase T_DP_LIST:\n\t\t// Deep copy the nested datapoints; a shallow copy of the\n\t\t// pointers would lead to a double free\n\t\tm_value.dpa = new std::vector<Datapoint*>();\n\t\tfor (auto it = rhs.m_value.dpa->begin();\n\t\t\tit != rhs.m_value.dpa->end();\n\t\t\t++it)\n\t\t{\n\t\t\tm_value.dpa->emplace_back(new Datapoint(**it));\n\t\t}\n\t\tbreak;\n\tcase T_IMAGE:\n\t\tm_value.image = new DPImage(*(rhs.m_value.image));\n\t\tbreak;\n\tcase T_DATABUFFER:\n\t\tm_value.dataBuffer = new DataBuffer(*(rhs.m_value.dataBuffer));\n\t\tbreak;\n\tcase T_2D_FLOAT_ARRAY:\n\t\tm_value.a2d = new std::vector< std::vector<double>* >;\n\t\tfor (auto row : *(rhs.m_value.a2d))\n\t\t{\n\t\t\t// Deep copy each row\n\t\t\tm_value.a2d->push_back(new std::vector<double>(*row));\n\t\t}\n\t\tbreak;\n\tdefault:\n\t\tm_value = rhs.m_value;\n\t\tbreak;\n\t}\n\n\treturn *this;\n}\n\n/**\n * Escape quotes etc to allow the string to be a property value within\n * a JSON document\n *\n * @param str\tThe string to escape\n * @return The escaped string\n */\nconst std::string DatapointValue::escape(const std::string& str) const\n{\nstd::string rval;\nint bscount = 0;\n\n\tfor (size_t i = 0; i < str.length(); i++)\n\t{\n\t\tif (str[i] == '\\\\')\n\t\t{\n\t\t\tif (i + 1 < str.length() && (str[i + 1] == '\"' || str[i + 1] == '\\\\' || str[i + 1] == '/' || (i > 0 && str[i - 1] == '\\\\')))\n\t\t\t{\n\t\t\t\trval += '\\\\';\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\trval += \"\\\\\\\\\";\n\t\t\t}\n\t\t\tbscount++;\n\t\t}\n\t\telse if (str[i] == '\\\"')\n\t\t{\n\t\t\tif ((bscount & 1) == 0)\t// not already escaped\n\t\t\t{\n\t\t\t\trval += \"\\\\\";\t// Add escape of \"\n\t\t\t}\n\t\t\trval += str[i];\n\t\t\tbscount = 0;\n\t\t}\n\t\telse\n\t\t{\n\t\t\trval += str[i];\n\t\t\tbscount = 0;\n\t\t}\n\t}\n\treturn rval;\n}\n\n/**\n * Parse a JSON string\n * \n * @param json : JSON string\n * @return vector of datapoints, or nullptr on a parse error\n*/\nstd::vector<Datapoint*> *Datapoint::parseJson(const std::string& json) {\n\n\trapidjson::Document document;\n\n\tconst auto& parseResult = document.Parse(json.c_str());\n\tif (parseResult.HasParseError()) {\n\t\tLogger::getLogger()->fatal(\"Parsing error %d (%s).\", parseResult.GetParseError(), json.c_str());\n\t\treturn nullptr;\n\t}\n\n\tif (!document.IsObject()) {\n\t\treturn nullptr;\n\t}\n\treturn recursiveJson(document);\n}\n\n/**\n * Recursive method to convert a JSON object to a vector of datapoints\n * \n * @param document : rapidjson value\n * @return vector of datapoints\n*/\nstd::vector<Datapoint*> *Datapoint::recursiveJson(const rapidjson::Value& document) {\n\tstd::vector<Datapoint*>* p = new std::vector<Datapoint*>();\n\n\tfor (rapidjson::Value::ConstMemberIterator itr = document.MemberBegin(); itr != document.MemberEnd(); ++itr)\n\t{\n\t\tif (itr->value.IsObject()) {\n\t\t\tstd::vector<Datapoint*> * vec = recursiveJson(itr->value);\n\t\t\tDatapointValue d(vec, true);\n\t\t\tp->push_back(new Datapoint(itr->name.GetString(), d));\n\t\t}\n\t\telse if (itr->value.IsString()) {\n\t\t\tDatapointValue d(itr->value.GetString());\n\t\t\tp->push_back(new Datapoint(itr->name.GetString(), d));\n\t\t}\n\t\telse if (itr->value.IsDouble()) {\n\t\t\tDatapointValue d(itr->value.GetDouble());\n\t\t\tp->push_back(new Datapoint(itr->name.GetString(), d));\n\t\t}\n\t\telse if (itr->value.IsNumber() && itr->value.IsInt()) {\n\t\t\tDatapointValue d((long)itr->value.GetInt());\n\t\t\tp->push_back(new Datapoint(itr->name.GetString(), d));\n\t\t}\n\t\telse if (itr->value.IsNumber() && itr->value.IsUint()) {\n\t\t\tDatapointValue d((long)itr->value.GetUint());\n\t\t\tp->push_back(new Datapoint(itr->name.GetString(), d));\n\t\t}\n\t\telse if (itr->value.IsNumber() && itr->value.IsInt64()) {\n\t\t\tDatapointValue d((long)itr->value.GetInt64());\n\t\t\tp->push_back(new Datapoint(itr->name.GetString(), d));\n\t\t}\n\t\telse if (itr->value.IsNumber() && itr->value.IsUint64()) {\n\t\t\tDatapointValue d((long)itr->value.GetUint64());\n\t\t\tp->push_back(new Datapoint(itr->name.GetString(), d));\n\t\t}\n\t}\n\n\treturn p;\n}\n\n"
  },
  {
    "path": "C/common/datapoint_utility.cpp",
    "content": "/*\n * Datapoint utility.\n *\n * Copyright (c) 2020, RTE (https://www.rte-france.com)\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Yannick Marchetaux\n * \n */\n#include <datapoint_utility.h>\n#include <vector>\n\nusing namespace std;\n\n/**\n * Search a dictionary from a key\n *\n * @param dict : parent dictionary\n * @param key : key to research\n * @return vector of datapoint otherwise null pointer\n*/\nDatapointUtility::Datapoints *DatapointUtility::findDictElement(Datapoints *dict, const string& key) {\n\treturn findDictOrListElement(dict, key, DatapointValue::T_DP_DICT);\n}\n\n/**\n * Search a array from a key\n *\n * @param dict : parent dictionary\n * @param key : key to research\n * @return vector of datapoint otherwise null pointer\n*/\nDatapointUtility::Datapoints *DatapointUtility::findListElement(Datapoints *dict, const string& key) {\n\treturn findDictOrListElement(dict, key, DatapointValue::T_DP_LIST);\n}\n\n/**\n * Search a list or dictionary from a key\n *\n * @param dict : parent dictionary\n * @param key : key to research\n * @param type : type of data searched\n * @return vector of datapoint otherwise null pointer\n*/\nDatapointUtility::Datapoints *DatapointUtility::findDictOrListElement(Datapoints *dict, const string& key, DatapointValue::dataTagType type) {\n\tDatapoint *dp = findDatapointElement(dict, key);\n\t\n\tif (dp == nullptr) {\n\t\treturn nullptr;\n\t}\n\n\tDatapointValue& data = dp->getData();\n\tif (data.getType() == type) {\n\t\treturn data.getDpVec();\n\t}\n\t\n\treturn nullptr;\n}\n\n/**\n * Search a DatapointValue from a key\n *\n * @param dict : parent dictionary\n * @param key : key to research \n * @return corresponding datapointValue otherwise null pointer\n*/\nDatapointValue *DatapointUtility::findValueElement(Datapoints *dict, const string& key) {\n\t\n\tDatapoint *dp = findDatapointElement(dict, key);\n\t\n\tif (dp == nullptr) {\n\t\treturn nullptr;\n\t}\n\n\treturn 
&dp->getData();\n}\n\n/**\n * Search a Datapoint from a key\n *\n * @param dict : parent dictionary\n * @param key : key to research\n * @return corresponding datapoint otherwise null pointer\n*/\nDatapoint *DatapointUtility::findDatapointElement(Datapoints *dict, const string& key) {\n\tif (dict == nullptr) {\n\t\treturn nullptr;\n\t}\n\t\n\tfor (Datapoint *dp : *dict) {\n\t\tif (dp->getName() == key) {\n\t\t\treturn dp;\n\t\t}\n\t}\n\treturn nullptr;\n}\n\n/**\n * Search a string from a key\n *\n * @param dict : parent dictionary\n * @param key : key to research\n * @return corresponding string otherwise empty string\n*/\nstring DatapointUtility::findStringElement(Datapoints *dict, const string& key) {\n\t\n\tDatapoint *dp = findDatapointElement(dict, key);\n\t\n\tif (dp == nullptr) {\n\t\treturn \"\";\n\t}\n\n\tDatapointValue& data = dp->getData();\n\tconst DatapointValue::dataTagType dType(data.getType());\n\tif (dType == DatapointValue::T_STRING) {\n\t\treturn data.toStringValue();\n\t}\n\n\treturn \"\";\n}\n\n/**\n * Method to delete and to free elements from a vector\n * \n * @param dps dict of values \n * @param key key of dict \n*/\nvoid DatapointUtility::deleteValue(Datapoints *dps, const string& key) {\n\tfor (Datapoints::iterator it = dps->begin(); it != dps->end(); it++){\n\t\tif ((*it)->getName() == key) {\n\t\t\tdelete (*it);\n\t\t\tdps->erase(it);\n\t\t\tbreak;\n\t\t}\n\t}\n}\n\n/**\n * Generate default attribute integer on Datapoint\n * \n * @param dps dict of values \n * @param key key of dict\n * @param valueDefault value attribute of dict\n * @return pointer of the created datapoint\n */\nDatapoint *DatapointUtility::createIntegerElement(Datapoints *dps, const string& key, long valueDefault) {\n\n\tdeleteValue(dps, key);\n\n\tDatapointValue dv(valueDefault);\n\tDatapoint *dp = new Datapoint(key, dv);\n\tdps->push_back(dp);\n\n\treturn dp;\n}\n\n/**\n * Generate default attribute string on Datapoint\n * \n * @param dps dict of values \n * @param 
key key of dict\n * @param valueDefault value attribute of dict\n * @return pointer of the created datapoint\n */\nDatapoint *DatapointUtility::createStringElement(Datapoints *dps, const string& key, const string& valueDefault) {\n\n\tdeleteValue(dps, key);\n\n\tDatapointValue dv(valueDefault);\n\tDatapoint *dp = new Datapoint(key, dv);\n\tdps->push_back(dp);\n\n\treturn dp;\n}\n\n/**\n * Generate default attribute dict on Datapoint\n * \n * @param dps dict of values \n * @param key key of dict\n * @param dict if the element is a dictionary\n * @return pointer of the created datapoint\n */\nDatapoint *DatapointUtility::createDictOrListElement(Datapoints* dps, const string& key, bool dict) {\n\n\tdeleteValue(dps, key);\n\n\tDatapoints *newVec = new Datapoints;\n\tDatapointValue dv(newVec, dict);\n\tDatapoint *dp = new Datapoint(key, dv);\n\tdps->push_back(dp);\n\n\treturn dp;\n}\n\n/**\n * Generate default attribute dict on Datapoint\n * \n * @param dps dict of values \n * @param key key of dict\n * @return pointer of the created datapoint\n */\nDatapoint *DatapointUtility::createDictElement(Datapoints* dps, const string& key) {\n\treturn createDictOrListElement(dps, key, true);\n}\n\n/**\n * Generate default attribute list on Datapoint\n * \n * @param dps dict of values \n * @param key key of dict\n * @return pointer of the created datapoint\n */\nDatapoint *DatapointUtility::createListElement(Datapoints* dps, const string& key) {\n   return createDictOrListElement(dps, key, false);\n}"
  },
  {
    "path": "C/common/file_utils.cpp",
    "content": "/*\n * Fledge utilities functions for handling files and directories\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Ray Verhoeff\n */\n\n#include <stdio.h>\n#include <unistd.h>\n#include <fcntl.h>\n#include <ftw.h>\n#include <stdexcept>\n#include \"file_utils.h\"\n\n/**\n * Callback for Linux file walk routine 'nftw'\n *\n * @param filePath\tFile full path\n * @param sb\t\tstruct stat to hold file information\n * @param typeflag\tFile type flag: FTW_F = file, FTW_D = directory\n * @param ftwbuf\tstruct FTW to hold name offset and file depth\n * @return\t\t\tZero if successful\n */\nstatic int fileDeleteCallback(const char *filePath, const struct stat *sb, int typeflag, struct FTW *ftwbuf)\n{\n    return remove(filePath);\n}\n\n/**\n * Copy a file\n * \n * @param to\tFull path of the destination file\n * @param from\tFull path of the source file\n * @return\t\tZero if successful\n */\nint copyFile(const char *to, const char *from)\n{\n\tint fd_to, fd_from;\n\tchar buf[4096];\n\tssize_t nread;\n\tint saved_errno;\n\n\tfd_from = open(from, O_RDONLY);\n\tif (fd_from < 0)\n\t\treturn -1;\n\n\tfd_to = open(to, O_WRONLY | O_CREAT | O_EXCL, 0666);\n\tif (fd_to < 0)\n\t\tgoto out_error;\n\n\twhile (nread = read(fd_from, buf, sizeof buf), nread > 0)\n\t{\n\t\tchar *out_ptr = buf;\n\t\tssize_t nwritten;\n\n\t\tdo\n\t\t{\n\t\t\tnwritten = write(fd_to, out_ptr, nread);\n\n\t\t\tif (nwritten >= 0)\n\t\t\t{\n\t\t\t\tnread -= nwritten;\n\t\t\t\tout_ptr += nwritten;\n\t\t\t}\n\t\t\telse if (errno != EINTR)\n\t\t\t{\n\t\t\t\tgoto out_error;\n\t\t\t}\n\t\t} while (nread > 0);\n\t}\n\n\tif (nread == 0)\n\t{\n\t\tif (close(fd_to) < 0)\n\t\t{\n\t\t\tfd_to = -1;\n\t\t\tgoto out_error;\n\t\t}\n\t\tclose(fd_from);\n\n\t\t/* Success! 
*/\n\t\treturn 0;\n\t}\n\nout_error:\n\tsaved_errno = errno;\n\n\tclose(fd_from);\n\tif (fd_to >= 0)\n\t\tclose(fd_to);\n\n\terrno = saved_errno;\n\treturn -1;\n}\n\n/**\n * Create a single directory.\n * This routine cannot create a directory tree from a full path.\n * This routine throws a std::runtime_error exception if the directory cannot be created.\n *\n * @param directoryName\t\tFull path of the directory to create\n */\nvoid createDirectory(const std::string &directoryName)\n{\n\tconst char *path = directoryName.c_str();\n\tstruct stat sb;\n\tif (stat(path, &sb) == 0)\n\t{\n\t\tif (sb.st_mode & S_IFDIR)\n\t\t{\n\t\t\treturn; // Directory exists\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstd::string exceptionMessage = \"Path exists but is not a directory: \" + directoryName;\n\t\t\tthrow std::runtime_error(exceptionMessage.c_str());\n\t\t}\n\t}\n\telse\n\t{\n\t\tint retcode;\n\t\tif ((retcode = mkdir(path, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)) != 0)\n\t\t{\n\t\t\tstd::string exceptionMessage = \"Unable to create directory \" + directoryName + \": error: \" + std::to_string(retcode);\n\t\t\tthrow std::runtime_error(exceptionMessage.c_str());\n\t\t}\n\t}\n}\n\n/**\n * Remove a directory with all subdirectories and files\n *\n * @param path\t\tFull path of the directory\n * @return\t\t\tZero if successful\n */\nint removeDirectory(const char *path)\n{\n    return nftw(path, fileDeleteCallback, 64, FTW_DEPTH | FTW_PHYS);\n}\n"
  },
  {
    "path": "C/common/filter_pipeline.cpp",
    "content": "/*\n * Fledge plugin filter class\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora\n */\n\n#include <filter_pipeline.h>\n#include <config_handler.h>\n#include <service_handler.h>\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n\n#define JSON_CONFIG_FILTER_ELEM \"filter\"\n#define JSON_CONFIG_PIPELINE_ELEM \"pipeline\"\n\nusing namespace std;\n\n/**\n * FilterPipeline class constructor\n *\n * This class abstracts the filter pipeline interface\n *\n * @param mgtClient\tManagement client handle\n * @param storage\tStorage client handle\n * @param serviceName\tName of the service to which this pipeline applies\n */\nFilterPipeline::FilterPipeline(ManagementClient* mgtClient, StorageClient& storage, string serviceName) : \n\t\t\tmgtClient(mgtClient), storage(storage), serviceName(serviceName), m_ready(false), m_shutdown(false)\n{\n}\n\n/**\n * FilterPipeline destructor\n */\nFilterPipeline::~FilterPipeline()\n{\n}\n\n/**\n * Load the specified filter plugin\n *\n * @param filterName\tThe filter plugin to load\n * @return\t\tPlugin handle on success, NULL otherwise \n *\n */\nPLUGIN_HANDLE FilterPipeline::loadFilterPlugin(const string& filterName)\n{\n\tif (filterName.empty())\n\t{\n\t\tLogger::getLogger()->error(\"Unable to fetch filter plugin '%s' from configuration.\",\n\t\t\tfilterName.c_str());\n\t\t// Failure\n\t\treturn NULL;\n\t}\n\tLogger::getLogger()->info(\"Loading filter plugin '%s'.\", filterName.c_str());\n\n\tPluginManager* manager = PluginManager::getInstance();\n\tPLUGIN_HANDLE handle;\n\tif ((handle = manager->loadPlugin(filterName, PLUGIN_TYPE_FILTER)) != NULL)\n\t{\n\t\t// Suceess\n\t\tLogger::getLogger()->info(\"Loaded filter plugin '%s'.\", filterName.c_str());\n\t}\n\treturn handle;\n}\n\n/**\n * Load all filter plugins in the pipeline\n *\n * @param categoryName\tConfiguration category name\n * @return\t\tTrue if filters are 
loaded (or no filters at all)\n *\t\t\tFalse otherwise\n */\nbool FilterPipeline::loadFilters(const string& categoryName)\n{\n\tvector<string> children;\t// The Child categories of 'Filters'\n\ttry\n\t{\n\t\t// Get the category with values and defaults\n\t\tConfigCategory config = mgtClient->getCategory(categoryName);\n\t\tstring filter = config.getValue(JSON_CONFIG_FILTER_ELEM);\n\t\tLogger::getLogger()->info(\"FilterPipeline::loadFilters(): categoryName=%s, filters=%s\", categoryName.c_str(), filter.c_str());\n\t\tif (!filter.empty())\n\t\t{\n\t\t\tstd::vector<pair<string, PLUGIN_HANDLE>> filterInfo;\n\n\t\t\t// Remove \\\" and leading/trailing \"\n\t\t\t// TODO: improve/change this\n\t\t\tfilter.erase(remove(filter.begin(), filter.end(), '\\\\' ), filter.end());\n\t\t\tsize_t i;\n\t\t\twhile (! (i = filter.find('\"')) || (i = filter.rfind('\"')) == static_cast<unsigned char>(filter.size() - 1))\n\t\t\t{\n\t\t\t\tfilter.erase(i, 1);\n\t\t\t}\n\n\t\t\t//Parse JSON object for filters\n\t\t\tDocument theFilters;\n\t\t\ttheFilters.Parse(filter.c_str());\n\t\t\t// The \"pipeline\" property must be an array\n\t\t\tif (theFilters.HasParseError() ||\n\t\t\t\t!theFilters.HasMember(JSON_CONFIG_PIPELINE_ELEM) ||\n\t\t\t\t!theFilters[JSON_CONFIG_PIPELINE_ELEM].IsArray())\n\t\t\t{\n\t\t\t\tstring errMsg(\"loadFilters: can not parse JSON '\");\n\t\t\t\terrMsg += string(JSON_CONFIG_FILTER_ELEM) + \"' property\";\n\t\t\t\tLogger::getLogger()->fatal(errMsg.c_str());\n\t\t\t\tthrow runtime_error(errMsg);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tconst Value& filterList = theFilters[JSON_CONFIG_PIPELINE_ELEM];\n\t\t\t\tif (!filterList.Size())\n\t\t\t\t{\n\t\t\t\t\t// Empty array, just return true\n\t\t\t\t\treturn true;\n\t\t\t\t}\n\n\t\t\t\t// Prepare printable list of filters\n\t\t\t\tStringBuffer buffer;\n\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\tfilterList.Accept(writer);\n\t\t\t\tstring printableList(buffer.GetString());\n\n\t\t\t\tstring logMsg(\"loadFilters: found 
filter(s) \");\n\t\t\t\tlogMsg += printableList + \" for plugin '\";\n\t\t\t\tlogMsg += categoryName + \"'\";\n\n\t\t\t\tLogger::getLogger()->info(logMsg.c_str());\n\n\t\t\t\tloadPipeline(filterList, m_filters);\n\n\t\t\t\t// We have kept filter default config in the filterInfo map\n\t\t\t\t// Handle configuration for each filter\n\t\t\t\tfor (auto& itr : m_filters)\n\t\t\t\t{\n\t\t\t\t\titr->setupConfiguration(mgtClient, children);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tm_pipeline = filter;\n\t\t/*\n\t\t * Put all the new catregories in the Filter category parent\n\t\t * Create an empty South category if one doesn't exist\n\t\t */\n\t\tstring parentName = categoryName + \" Filters\";\n\t\tDefaultConfigCategory filterConfig(parentName, string(\"{}\"));\n\t\tfilterConfig.setDescription(\"Filters for \" + categoryName);\n\t\tmgtClient->addCategory(filterConfig, true);\n\t\tmgtClient->addChildCategories(parentName, children);\n\t\tvector<string> children1;\n\t\tchildren1.push_back(parentName);\n\t\tmgtClient->addChildCategories(categoryName, children1);\n\t\treturn true;\n\t}\n\tcatch (ConfigItemNotFound* e)\n\t{\n\t\tdelete e;\n\t\tLogger::getLogger()->info(\"loadFilters: no filters configured for '\" + categoryName + \"'\");\n\t\treturn true;\n\t}\n\tcatch (exception& e)\n\t{\n\t\tLogger::getLogger()->fatal(\"loadFilters: failed to handle '\" + categoryName + \"' filters.\");\n\t\treturn false;\n\t}\n\tcatch (...)\n\t{\n\t\tLogger::getLogger()->fatal(\"loadFilters: generic exception while loading '\" + categoryName + \"' filters.\");\n\t\treturn false;\n\t}\n}\n\nvoid FilterPipeline::loadPipeline(const Value& filterList, vector<PipelineElement *>& pipeline)\n{\n\t// Try loading all filter plugins: abort on any error\n\tfor (Value::ConstValueIterator itr = filterList.Begin(); itr != filterList.End(); ++itr)\n\t{\n\t\tif (itr->IsString())\n\t\t{\n\t\t\t// Get \"plugin\" item from filterCategoryName\n\t\t\tstring filterCategoryName = 
itr->GetString();\n\t\t\tLogger::getLogger()->info(\"Creating pipeline filter %s\", filterCategoryName.c_str());\n\t\t\ttry {\n\t\t\t\tConfigCategory filterDetails = mgtClient->getCategory(filterCategoryName);\n\n\t\t\t\tPipelineFilter *element = new PipelineFilter(filterCategoryName, filterDetails);\n\t\t\t\telement->setServiceName(serviceName);\n\t\t\t\telement->setStorage(&storage);\n\t\t\t\tpipeline.emplace_back(element);\n\t\t\t} catch (exception& e) {\n\t\t\t\tLogger::getLogger()->error(\"Failed to create filter %s: %s\",\n\t\t\t\t\t\tfilterCategoryName.c_str(), e.what());\n\t\t\t} catch (exception *e) {\n\t\t\t\tLogger::getLogger()->error(\"Failed to create filter %s: %s\",\n\t\t\t\t\t\tfilterCategoryName.c_str(), e->what());\n\t\t\t}\n\t\t}\n\t\telse if (itr->IsArray())\n\t\t{\n\t\t\t// Sub pipeline\n\t\t\tLogger::getLogger()->info(\"Creating pipeline branch\");\n\t\t\tPipelineBranch *element = new PipelineBranch(this);\n\t\t\tloadPipeline(*itr, element->getBranchElements());\n\t\t\tpipeline.emplace_back(element);\n\t\t}\n\t\telse if (itr->IsObject())\n\t\t{\n\t\t\t// An object, probably the write destination\n\t\t\tLogger::getLogger()->warn(\"This version of Fledge does not support pipelines with different destinations. 
The destination will be ignored and the data written to the default storage service.\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Unexpected object in pipeline definition, ignoring\");\n\t\t}\n\t}\n\n\t// End the pipeline with a writer element that sends data to the\n\t// ingest of the storage system\n\tPipelineWriter *element = new PipelineWriter();\n\tpipeline.emplace_back(element);\n}\n\n/**\n * Set the filter pipeline\n * \n * This method calls the method \"plugin_init\" for all loaded filters.\n * Up-to-date filter configurations and Ingest filtering methods\n * are passed to \"plugin_init\"\n *\n * @param passToOnwardFilter\tPtr to function that passes data to next filter\n * @param useFilteredData\tPtr to function that gets final filtered data\n * @param ingest\t\tThe ingest class handle\n * @return \t\tTrue on success,\n *\t\t\tFalse otherwise.\n * @throws\t\tAny caught exception\n */\nbool FilterPipeline::setupFiltersPipeline(void *passToOnwardFilter, void *useFilteredData, void *ingest)\n{\n\tbool initErrors = false;\n\tstring errMsg = \"'plugin_init' failed for filter '\";\n\tfor (auto it = m_filters.begin(); it != m_filters.end(); ++it)\n\t{\n\t\ttry\n\t\t{\n\t\t\tif ((*it)->isBranch())\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info(\"Set branch functions\");\n\t\t\t\tPipelineBranch *branch = (PipelineBranch *)(*it);\n\t\t\t\tbranch->setFunctions(passToOnwardFilter, useFilteredData, ingest);\n\t\t\t}\n\t\t\tLogger::getLogger()->info(\"Setup element %s\", (*it)->getName().c_str());\n\t\t\t(*it)->setup(mgtClient, ingest, m_filterCategories);\n\t\t\t// Iterate the loaded filters set in the Ingest class m_filters member\n\t\t\tif ((it + 1) != m_filters.end())\n\t\t\t{\n\t\t\t\t(*it)->setNext(*(it + 1));\n\t\t\t\t// Set next filter pointer as OUTPUT_HANDLE\n\t\t\t\ttry {\n\t\t\t\t\tif (!(*it)->init((OUTPUT_HANDLE *)(*(it + 1)),\n\t\t\t\t\t\t\tfilterReadingSetFn(passToOnwardFilter)))\n\t\t\t\t\t{\n\t\t\t\t\t\terrMsg += (*it)->getName() + 
\"'\";\n\t\t\t\t\t\tinitErrors = true;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t} catch (exception& e) {\n\t\t\t\t\tLogger::getLogger()->error(\"Unable to initialise plugin %s, %s\", (*it)->getName().c_str(), e.what());\n\t\t\t\t\tinitErrors = true;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// Set the Ingest class pointer as OUTPUT_HANDLE\n\t\t\t\ttry {\n\t\t\t\t\tif (!(*it)->init((OUTPUT_HANDLE *)(ingest),\n\t\t\t\t\t\t\t filterReadingSetFn(useFilteredData)))\n\t\t\t\t\t{\n\t\t\t\t\t\terrMsg += (*it)->getName() + \"'\";\n\t\t\t\t\t\tinitErrors = true;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t} catch (exception& e) {\n\t\t\t\t\tLogger::getLogger()->error(\"Unable to initialise plugin %s, %s\", (*it)->getName().c_str(), e.what());\n\t\t\t\t\tinitErrors = true;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\t// TODO catch specific exceptions\n\t\tcatch (...)\n\t\t{\t\t\n\t\t\tthrow;\t\t\n\t\t}\n\t}\n\n\tif (initErrors)\n\t{\n\t\t// Failure\n\t\tLogger::getLogger()->fatal(\"Failed to create pipeline,  %s\", errMsg.c_str());\n\t\treturn false;\n\t}\n\n\t// Set filter pipeline is ready for data ingest\n\tm_ready = true;\n\t// Set the service handler for the pipeline\n\tm_serviceHandler = (ServiceHandler *)ingest;\n\n\t//Success\n\treturn true;\n}\n\n/**\n * Cleanup all the loaded filters\n *\n * Call \"plugin_shutdown\" method and free the FilterPlugin object\n *\n * @param categoryName\t\tConfiguration category name\n *\n */\nvoid FilterPipeline::cleanupFilters(const string& categoryName)\n{\n\n\t// Shutdown filters - do this down the pipeline, starting\n\t// from the first filter in the pipeline. 
This allows a filter\n\t// to asynchronously send data in the shutdown call to the\n\t// next element in the pipeline since that next element has\n\t// not yet been asked to shutdown.\n\t//\n\t// This is not behaviour that is encouraged or designed, but a\n\t// small number of Python filters have implemented sending data\n\t// during shutdown, hence the need to ensure that data has\n\t// somewhere to go.\n\tfor (auto it = m_filters.begin(); it != m_filters.end(); ++it)\n\t{\n\t\tPipelineElement *element = *it;\n\t\tConfigHandler *configHandler = ConfigHandler::getInstance(mgtClient);\n\t\telement->shutdown(m_serviceHandler, configHandler);\n\t}\n\t// Delete filters, in reverse order\n\tfor (auto it = m_filters.rbegin(); it != m_filters.rend(); ++it)\n\t{\n\t\tPipelineElement *element = *it;\n\t\t// Free filter\n\t\tdelete element;\n\t}\n}\n\n/**\n * Configuration change for one of the filters. Lookup the category name and\n * find the plugin to call. Call the reconfigure method of that plugin with\n * the new configuration.\n *\n * @param category\tThe name of the configuration category\n * @param newConfig\tThe new category contents\n */\nvoid FilterPipeline::configChange(const string& category, const string& newConfig)\n{\n\tauto it = m_filterCategories.find(category);\n\tif (it != m_filterCategories.end())\n\t{\n\t\tit->second->reconfigure(newConfig);\n\t}\n}\n\n/**\n * Called when we pass the data into the pipeline. 
Set the\n * number of active branches to 1\n */\nvoid FilterPipeline::execute()\n{\n\tunique_lock<mutex> lck(m_actives);\n\tm_activeBranches = 1;\n}\n\n/**\n * Wait for all active branches of the pipeline to complete\n */\nvoid FilterPipeline::awaitCompletion()\n{\n\tunique_lock<mutex> lck(m_actives);\n\twhile (m_activeBranches > 0)\n\t{\n\t\tm_branchActivations.wait(lck);\n\t}\n}\n\n/**\n * A new branch has started in the pipeline\n */\nvoid FilterPipeline::startBranch()\n{\n\tunique_lock<mutex> lck(m_actives);\n\tm_activeBranches++;\n}\n\n/**\n * A branch in the pipeline has completed\n */\nvoid FilterPipeline::completeBranch()\n{\n\tunique_lock<mutex> lck(m_actives);\n\tm_activeBranches--;\n\tif (m_activeBranches == 0)\n\t{\n\t\tm_branchActivations.notify_all();\n\t}\n}\n\n/**\n * Attach the debugger to the pipeline elements\n *\n * @return bool\tTrue if the pipeline was attached\n */\nbool FilterPipeline::attachDebugger()\n{\n\tbool rval =  attachDebugger(m_filters);\n\tsetDebuggerBuffer(1);\n\treturn rval;\n}\n\n/**\n * Attach the debugger to the pipeline elements\n *\n * @param pipeline\tThe pipeline (or branch) to attach the debugger\n * @return bool\t\tTrue if the debugger was attached\n */\nbool FilterPipeline::attachDebugger(const vector<PipelineElement *>& pipeline)\n{\n\tbool ret = true;\n\tif (pipeline.size() == 0)\n\t{\n\t\t// Makes no sense to attach the debugger to an empty pipeline\n\t\treturn false;\n\t}\n\tfor (auto& elem : pipeline)\n\t{\n\t\tif (!elem->attachDebugger())\n\t\t{\n\t\t\tret = false;\n\t\t\tbreak;\n\t\t}\n\t\tif (elem->isBranch())\n\t\t{\n\t\t\tPipelineBranch *branch = (PipelineBranch *)elem;\n\t\t\tif (!attachDebugger(branch->getBranchElements()))\n\t\t\t{\n\t\t\t\tret = false;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\tif (!ret)\n\t{\n\t\t// Detach any partially attached pipeline\n\t\tdetachDebugger(pipeline);\n\t}\n\treturn ret;\n}\n\n/**\n * Detach the debugger from the pipeline elements\n */\nvoid 
FilterPipeline::detachDebugger()\n{\n\tdetachDebugger(m_filters);\n}\n\n/**\n * Detach the debugger from the pipeline elements\n *\n * @param pipeline\tThe pipeline or branch to detach the debugger from\n */\nvoid FilterPipeline::detachDebugger(const vector<PipelineElement *>& pipeline)\n{\n\tfor (auto& elem : pipeline)\n\t{\n\t\telem->detachDebugger();\n\t\tif (elem->isBranch())\n\t\t{\n\t\t\tPipelineBranch *branch = (PipelineBranch *)elem;\n\t\t\tdetachDebugger(branch->getBranchElements());\n\t\t}\n\t}\n}\n\n/**\n * Set the debugger buffer size to the pipeline elements\n *\n * @param size\tThe request number of readings to buffer\n */\nvoid FilterPipeline::setDebuggerBuffer(unsigned int size)\n{\n\tsetDebuggerBuffer(m_filters, size);\n}\n\n/**\n * Set the debugger buffer size to the pipeline elements\n *\n * @param pipeline\tThe pipeline or branch to set the buffer size for\n * @param size\t\tThe desired number of readings to buffer\n */\nvoid FilterPipeline::setDebuggerBuffer(const vector<PipelineElement *>& pipeline, unsigned int size)\n{\n\tfor (auto& elem : pipeline)\n\t{\n\t\telem->setDebuggerBuffer(size);\n\t\tif (elem->isBranch())\n\t\t{\n\t\t\tPipelineBranch *branch = (PipelineBranch *)elem;\n\t\t\tsetDebuggerBuffer(branch->getBranchElements(), size);\n\t\t}\n\t}\n}\n\n/**\n * Get the debugger buffer contents for all the pipeline elements\n *\n * @return string\tJSON document with all the buffer contents\n */\nstring FilterPipeline::getDebuggerBuffer()\n{\n\tstring\trval = \"{ \\\"data\\\" : [\";\n\trval += getDebuggerBuffer(m_filters);\n\trval += \"]}\";\n\treturn rval;\n}\n\n\n\n/**\n * Get the debugger buffer contents for all the pipeline elements\n *\n * @param pipeline\tThe pipeline to fetch the buffered data from\n * @return string\tJSON document with all the buffer contents\n */\nstring FilterPipeline::getDebuggerBuffer(const vector<PipelineElement *>& pipeline)\n{\n\tstring rval;\n\n\tfor (auto& elem : 
pipeline)\n\t{\n\t\tvector<shared_ptr<Reading>> buf = elem->getDebuggerBuffer();\n\t\trval += \"{ \\\"name\\\" : \\\"\";\n\t\trval += elem->getName();\n\t\trval += \"\\\", \\\"readings\\\" : [ \";\n\t\trval += readingsToJSON(buf);\n\t\trval += \"] }\";\n\t\tif (elem->getNext())\n\t\t\trval += \",\";\n\t\tif (elem->isBranch())\n\t\t{\n\t\t\tPipelineBranch *branch = (PipelineBranch *)elem;\n\t\t\trval += \"[ \";\n\t\t\trval += getDebuggerBuffer(branch->getBranchElements());\n\t\t\trval += \"], \";\n\t\t}\n\t}\n\n\treturn rval;\n}\n\n/**\n * Get the debugger buffer contents for a named pipeline element\n *\n * @param name\t\tThe name of the filter element we return the buffer from\n * @return string\tJSON document with all the buffer contents\n */\nstring FilterPipeline::getDebuggerBuffer(const string& name)\n{\n\tstring\trval;\n\n\tfor (auto& elem : m_filters)\n\t{\n\t\tif (elem->getName().compare(name) == 0)\n\t\t{\n\t\t\tvector<shared_ptr<Reading>> buf = elem->getDebuggerBuffer();\n\t\t\trval += \"{ \\\"name\\\" : \\\"\";\n\t\t\trval += name;\n\t\t\trval += \"\\\", \";\n\t\t\trval += readingsToJSON(buf);\n\t\t\trval += \"}\";\n\t\t}\n\t}\n\treturn rval;\n}\n\n/**\n * Convert a vector of readings into JSON that we can use to return \n * the buffered data held at each stage within the filter pipeline.\n *\n * @param readings\tA vector of shared pointers to readings\n * @return string\tA JSON structure containing the pipeline buffers\n */\nstring FilterPipeline::readingsToJSON(vector<shared_ptr<Reading>> readings)\n{\n\tstring rval;\n\n\tfor (int j = 0; j < readings.size(); j++)\n\t{\n\t\tshared_ptr<Reading> reading = readings[j];\n\t\trval += reading->toJSON();\n\t\tif (j < readings.size() - 1)\n\t\t\t\trval += \",\";\n\t}\n\n\treturn rval;\n}\n\n/**\n * Replay the data in the first saved buffer to the filter pipeline\n *\n * @return bool\tReturns true if data has been replayed, otherwise returns false\n */\nbool FilterPipeline::replayDebugger()\n{\nReadingSet 
\t\t*replay;\nvector<Reading *>\t*readings = new vector<Reading *>;\nPipelineElement\t\t*first;\n       \n\tif (m_filters.size() > 0)\n\t{\n\t\tfirst = m_filters[0]; \n\t}\n\telse\n\t{\n\t\t// No filters to replay to\n\t\treturn false;\n\t}\n\n\tif (first)\n\t{\n\t\tvector<shared_ptr<Reading>> buf = first->getDebuggerBuffer();\n\t\tfor (int i = 0; i < buf.size(); i++)\n\t\t{\n\t\t\tif (buf[i])\n\t\t\t{\n\t\t\t\treadings->emplace_back(new Reading(*buf[i].get()));\n\t\t\t}\n\t\t}\n\t\treplay = new ReadingSet(readings);\n\t\t\t\n\t\tif (replay)\n\t\t{\n\t\t\tfirst->ingest(replay);\n\t\t}\n\t\telse\n\t\t{\n\t\t\treturn false;\n\t\t}\n\t}\n\telse\n\t{\n\t\treturn false;\n\t}\n\treturn true;\n}\n"
  },
  {
    "path": "C/common/filter_plugin.cpp",
    "content": "/*\n * Fledge plugin filter class\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <filter_plugin.h>\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n\n#define JSON_CONFIG_FILTER_ELEM \"filter\"\n#define JSON_CONFIG_PIPELINE_ELEM \"pipeline\"\n\nusing namespace std;\n\n/**\n * FilterPlugin class constructor\n *\n * This class wraps the filter plugin C interface and creates\n * set of function pointers that resolve to the loaded plugin and\n * enclose in the class.\n *\n * @param name\t\tThe filter name\n * @param handle\tThe loaded plugin handle\n *\n * Set the function pointers to Filter Plugin C API\n */\nFilterPlugin::FilterPlugin(const std::string& name,\n\t\t\t   PLUGIN_HANDLE handle) : Plugin(handle), m_name(name)\n{\n\t// Setup the function pointers to the plugin\n\tpluginInit = (PLUGIN_HANDLE (*)(const ConfigCategory *,\n\t\t\t\t\tOUTPUT_HANDLE *,\n\t\t\t\t\tOUTPUT_STREAM output))\n\t\t\t\t\tmanager->resolveSymbol(handle,\n\t\t\t\t\t\t\t       \"plugin_init\");\n  \tpluginShutdownPtr = (void (*)(PLUGIN_HANDLE))\n\t\t\t\t      manager->resolveSymbol(handle,\n\t\t\t\t\t\t\t     \"plugin_shutdown\");\n  \tpluginIngestPtr = (void (*)(PLUGIN_HANDLE, READINGSET *))\n\t\t\t\t      manager->resolveSymbol(handle,\n\t\t\t\t\t\t\t     \"plugin_ingest\");\n\tpluginShutdownDataPtr = (string (*)(const PLUGIN_HANDLE))\n\t\t\t\t manager->resolveSymbol(handle, \"plugin_shutdown\");\n\tpluginStartDataPtr = (void (*)(const PLUGIN_HANDLE, const string& storedData))\n\t\t\t      manager->resolveSymbol(handle, \"plugin_start\");\n\tpluginStartPtr = (void (*)(const PLUGIN_HANDLE))\n\t\t\t      manager->resolveSymbol(handle, \"plugin_start\");\n  \tpluginReconfigurePtr = (void (*)(PLUGIN_HANDLE, const string&))\n\t\t\t\t      manager->resolveSymbol(handle,\n\t\t\t\t\t\t\t     \"plugin_reconfigure\");\n\n\t// Set m_instance default value\n\tm_instance 
= NULL;\n\n\t// Persist data initialised\n\tm_plugin_data = NULL;\t\n}\n\n/**\n * FilterPlugin destructor\n */\nFilterPlugin::~FilterPlugin()\n{\n\tdelete m_plugin_data;\n}\n\n/**\n * Call the loaded plugin \"plugin_init\" method\n *\n * @param config\tThe filter configuration\n * @param outHandle\tThe output handle passed with\n *\t\t\tfiltered data to the OUTPUT_STREAM function\n * @param outputFunc\tThe output_stream function pointer\n * \t\t\tthe filter uses to pass data out\n * @return\t\tThe PLUGIN_HANDLE object\n */\nPLUGIN_HANDLE FilterPlugin::init(const ConfigCategory& config,\n\t\t\t\t OUTPUT_HANDLE *outHandle,\n\t\t\t\t OUTPUT_STREAM outputFunc)\n{\n\tm_instance = this->pluginInit(&config,\n\t\t\t\t      outHandle,\n\t\t\t\t      outputFunc);\n\treturn (m_instance ? &m_instance : NULL);\n}\n\n/**\n * Call the loaded plugin \"plugin_shutdown\" method\n */\nvoid FilterPlugin::shutdown()\n{\n\t// Check if m_instance has been set\n\t// and function pointer exists\n\tif (m_instance && this->pluginShutdownPtr)\n\t{\n\t\treturn this->pluginShutdownPtr(m_instance);\n\t}\n}\n\n/**\n * Call the loaded plugin \"plugin_shutdown\" method\n * returning plugin data (as string)\n *\n * @return\tPlugin data as JSON string (to be saved into storage layer)\n */\nstring FilterPlugin::shutdownSaveData()\n{\n\tstring ret(\"\");\n\t// Check if m_instance has been set\n\t// and function pointer exists\n\tif (m_instance && this->pluginShutdownDataPtr)\n\t{\n\t\tret = this->pluginShutdownDataPtr(m_instance);\n\t}\n\treturn ret;\n}\n\n/**\n * Call plugin_start\n */\nvoid FilterPlugin::start()\n{\n\tif (pluginStartPtr)\n\t{\n        \treturn this->pluginStartPtr(m_instance);\n\t}\n}\n\n/**\n * Call plugin_reconfigure method\n *\n * @param configuration\tThe new filter configuration\n */\nvoid FilterPlugin::reconfigure(const string& configuration)\n{\n\tif (pluginReconfigurePtr)\n\t{\n        \treturn this->pluginReconfigurePtr(m_instance, configuration);\n\t}\n}\n\n/**\n * Call 
plugin_start passing plugin data.\n *\n * @param storedData\tPlugin data to pass (from storage layer)\n */\nvoid FilterPlugin::startData(const string& storedData)\n{\n\t// Check pluginStartData function pointer exists\n\tif (this->pluginStartDataPtr)\n\t{\n\t\tthis->pluginStartDataPtr(m_instance, storedData);\n\t}\n}\n\n/**\n * Call the loaded plugin \"plugin_ingest\" method\n *\n * This call ingest the readings through the filters chain\n *\n * @param readings\tThe reading set to ingest\n */\nvoid FilterPlugin::ingest(READINGSET* readings)\n{\n\tif (this->pluginIngestPtr)\n\t{\n        \treturn this->pluginIngestPtr(m_instance, readings);\n\t}\n}\n\n"
  },
  {
    "path": "C/common/form_data.cpp",
"content": "/*\n * Fledge utility functions for handling HTTP form data upload\n * with multipart data\n *\n * Copyright (c) 2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <form_data.h>\n#include <errno.h>\n\nusing namespace std;\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\n\n// Class constructor with HTTP request object\nFormData::FormData(shared_ptr<HttpServer::Request> request)\n{\n\t// Boundary in the content has two additional '-' chars\n\tm_boundary = \"--\";\n\n\t// Get Content-Length from input header, if not found use request size\n\tauto header_it = request->header.find(\"Content-Length\");\n\tif (header_it != request->header.end()) {\n\t\tm_size = std::stoull(header_it->second);\n\t}\n\telse\n\t{\n\t\tm_size = request->content.size();\n\t}\n\n\t// Get \"Content-Type\" which has content like:\n\t// Content-Type: multipart/form-data; boundary=------------------------XYZ\n\theader_it = request->header.find(\"Content-Type\");\n\tif (header_it != request->header.end())\n\t{\n\t\t// Fetch multipart/form-data and boundary\n\t\tauto fileData = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(header_it->second.c_str());\n\t\tfor (auto it = fileData.begin();\n\t\t\tit != fileData.end();\n\t\t\t++it)\n\t\t{\n\t\t\tif (it->first == \"boundary\")\n\t\t\t{\n\t\t\t\tm_boundary += it->second.c_str();\n\t\t\t}\n\t\t}\n\t}\n\n\t// Get raw data (const) from client request\n\tm_buffer = request->content.data();\n}\n\n/**\n * Skip a \\r\\n sequence\n *\n * @param b\tCurrent buffer pointer\n * @return\tPointer after \\r\\n sequence\n */\nuint8_t *FormData::skipSeparator(uint8_t *b)\n{\n\tif ((b + 1) != NULL && (*b == CR && *(b + 1) == LF))\n\t{\n\t\tb += 2;\n\t}\n\treturn b;\n}\n\n/**\n * Skip a double \\r\\n sequence\n *\n * @param b\tCurrent buffer pointer\n * @return\tPointer after the double \\r\\n sequence\n */\nuint8_t *FormData::skipDoubleSeparator(uint8_t 
*b)\n{\n\t// Look for \\r\\n\n\tconst uint8_t* ptr_end = m_buffer + m_size;\n\tfor (; b < ptr_end; b++)\n\t{\n\t\tif ((b + 1) != NULL && (*b == CR && *(b + 1) == LF))\n\t\t{\n\t\t\tbreak;\n\t\t}\n\t}\n\n\t// Skip double \\r\\n sequence\n\tif (b && *b == CR && ((b + 1) && *(b + 1) == LF))\n\t{\n\t\tb += 2;\n\t\tif (b && *b == CR && ((b + 1) && *(b + 1) == LF))\n\t\t{\n\t\t\tb += 2;\n\t\t}\n\t}\n\n\treturn b;\n}\n\n/**\n * Get end of content block, which can be binary data\n *\n * @param b\tCurrent buffer pointer\n * @return\tPointer after the \\r\\n sequence + boundary\n */\nuint8_t *FormData::getContentEnd(uint8_t *b)\n{\n\tif (!b)\n\t{\n\t\treturn NULL;\n\t}\n\n\t// Check content bytes\n\t// Look for boundary after \\r\\n as content end\n\tuint8_t *endOfContent = NULL;\n\tconst uint8_t* ptr_end = m_buffer + m_size;\n\tfor (; b < ptr_end; b++)\n\t{\n\t\t// Found \\r\\n\n\t\tif ((b + 2) != NULL && (*b == CR && *(b + 1) == LF))\n\t\t{\n\t\t\tendOfContent = (uint8_t *)strstr((char *)(b + 2), m_boundary.c_str());\n\t\t\tif (endOfContent)\n\t\t\t{\n\t\t\t\t// Found boundary: content ends here\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\t// Boundary found\n\tif (endOfContent && (endOfContent - 2))\n\t{\n\t\t// Remove \\r\\n from end location\n\t\tendOfContent -= 2;\n\t}\n\n\treturn endOfContent;\n}\n\n/**\n * Get given field name in the data buffer\n *\n * @param buffer\tCurrent buffer pointer\n * @param field\t\tThe field name to find\n * @return\t\tPointer to field value\n * \t\t\tif found or NULL otherwise\n */\nuint8_t *FormData::findDataFormField(uint8_t* buffer, const string& field)\n{\n\t// Find first Content-Disposition: field\n\tuint8_t* b = buffer;\n\tuint8_t* ptr = b;\n\tconst uint8_t* ptr_end = m_buffer + m_size;\n\tstring name = \"\\\"\" + field + \"\\\"\";\n\tstring find = \"form-data; name=\" + name;\n\tbool found = false;\n\n\twhile (ptr < ptr_end)\n\t{\n\t\t// Look for boundary in content data\n\t\tchar *boundaryEnd = strstr((char *)ptr, 
m_boundary.c_str());\n\t\tif (boundaryEnd == NULL)\n\t\t{\n\t\t\t// No boundary, return NULL\n\t\t\treturn NULL;\n\t\t}\n\n\t\t// Point to end of boundary\n\t\tptr += m_boundary.length();\n\n\t\t// Skip single \\r\\n\n\t\tb = this->skipSeparator(ptr);\n\n\t\tptr = (uint8_t *)strstr((char *)b, \"Content-Disposition:\");\n\t\tif (ptr == NULL)\n\t\t{\n\t\t\tbreak;\n\t\t}\n\n\t\tb = ptr + strlen(\"Content-Disposition:\");\n\n\t\t// Look for \"form-data; \" and \"name=\" as per input field\n\t\tptr = (uint8_t *)strstr((char *)b, find.c_str());\n\n\t\t// Given field name found ?\n\t\tif (ptr != NULL)\n\t\t{\n\t\t\t// Point to the end of matched string\n\t\t\tptr += find.length();\n\n\t\t\tfound = true;\n\t\t\tbreak;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Field name not found: look for next boundary after \\r\\n\n\t\t\tfor (; b < ptr_end; b++)\n\t\t\t{\n\t\t\t\t// Look for \\r\\n\n\t\t\t\tif ((b + 2) != NULL && (*b == CR && *(b + 1) == LF))\n\t\t\t\t{\n\t\t\t\t\tif (strstr((char *)(b + 2), m_boundary.c_str()) != NULL)\n\t\t\t\t\t{\n\t\t\t\t\t\t// Look for boundary\n\t\t\t\t\t\tuint8_t *foundBoundary = (uint8_t *)strstr((char *)(b + 2), m_boundary.c_str());\n\n\t\t\t\t\t\tif (foundBoundary)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tb = foundBoundary;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tptr = b;\n\t\t}\n\t}\n\n\treturn (found ? 
ptr : NULL);\n}\n\n/**\n * Fetch content data uploaded without file, example\n * curl -v -v -v --output - -X POST -F 'attributes={\"name\": \"B1\", \"type\": \"model\"}' 127.0.0.1:43605/fledge/south/uploadA\n *\n * @param field\t\tThe field name to fetch\n * @param data\t\tThe value reference to fill on success\n */\nvoid FormData::getUploadedData(const string &field, FieldValue& data)\n{\n\t// Point to buffer start\n\tuint8_t* b = (uint8_t *)m_buffer;\n\n\t// Get field name if it exists\n\tuint8_t* ptr = this->findDataFormField(b, field);\n\tif (ptr == NULL)\n\t{\n\t\treturn;\n\t}\n\n\tb = ptr;\n\tuint8_t *endContent = this->getContentEnd(b);\n\n\t// Look for Content-Type, if present within the \n\t// same part of the message, i.e. not beyond endContent\n\tptr = (uint8_t *)strstr((char *)b, \"Content-Type:\");\n\tif (ptr != NULL && ptr < endContent)\n\t{\n\t\tb = ptr + strlen(\"Content-Type:\");\n\t}\n\n\t// Check for \\r\\n sequence\n\tb = this->skipDoubleSeparator(b);\n\n\t// Content starts here\n\tuint8_t *startContent = b;\n\n\t// Find end of content\n\tif (endContent)\n\t{\n\t\t// Set output data\n\t\t// Buffer start and size\n\t\tdata.start = startContent;\n\t\tdata.size = (size_t)(endContent - startContent);\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"Closing boundary not found for data content\");\n\t}\n}\n\n/**\n * Fetch content data uploaded as file, example\n * curl -v -v -v --output - -X POST -F \"bucket=@/some_path/file.bin\" 127.0.0.1:43605/fledge/south/uploadA\n *\n * @param field\t\tThe field name (filename type) to fetch\n * @param data\t\tThe value reference to fill on success\n */\nvoid FormData::getUploadedFile(const string& field, FieldValue& data)\n{\n\t// Point to buffer start\n\tuint8_t* b = (uint8_t *)m_buffer;\n\n\t// Get field name if it exists\n\tuint8_t* ptr = findDataFormField(b, field);\n\tif (ptr == NULL)\n\t{\n\t\treturn;\n\t}\n\n\tb = ptr;\n\n\t// Check for ';' after name\n\tif (*b != ';')\n\t{\n\t\treturn;\n\t}\n\n\t// 
Look for filename\n\tptr = (uint8_t *)strstr((char *)b, \"filename=\");\n\tif (ptr == NULL)\n\t{\n\t\treturn;\n\t}\n\tb = ptr + strlen(\"filename=\");\n\n\t// Look for Content-Type\n\tptr = (uint8_t *)strstr((char *)b, \"Content-Type:\");\n\tif (ptr == NULL)\n\t{\n\t\treturn;\n\t}\n\n\t// Get filename\n\tstring fileName;\n\tif (*(ptr - 2) == CR && (*(ptr - 1) == LF))\n\t{\n\t\tsize_t fNameSize = (ptr - 2) - b;\n\t\t// Skip leading and trailing '\"' \n\t\tif (*b == '\"')\n\t\t{\n\t\t\t// Filename starts after '\"'\n\t\t\tb++;\n\t\t\t// Size - 1\n\t\t\tfNameSize--;\n\t\t}\n\t\tif (*(ptr - 2 - 1) == '\"')\n\t\t{\n\t\t\t// Size - 1\n\t\t\tfNameSize--;\n\t\t}\n\n\t\t// Set filename as in uploaded content\n\t\t// Caller might use this or select another name\n\t\t// while saving the content into a file\n\t\tfileName.assign((char *)b, fNameSize);\n\t}\n\n\tb = ptr + strlen(\"Content-Type:\");\n\n\t// Check for \\r\\n sequence\n\tb = this->skipDoubleSeparator(b);\n\n\t// File content starts here\n\tuint8_t *startContent = b;\n\n\t// Find end of content\n\tuint8_t *endContent = this->getContentEnd(b);\n\tif (endContent)\n\t{\n\t\t// Set output data\n\t\t// Buffer start and size\n\t\tdata.start = startContent;\n\t\tdata.size = (size_t)(endContent - startContent);\n\t\t// Set filename\n\t\tdata.filename = fileName;\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"Closing boundary not found for file content\");\n\t}\n}\n\n/**\n * Save the uploaded file\n *\n * @param v\tThe Field value data\n * @return\tReturns true if the file was successfully saved\n */\nbool FormData::saveFile(FormData::FieldValue& v, const string& fileName)\n{\n\tLogger::getLogger()->debug(\"Uploaded filename is '%s'\", v.filename.c_str());\n\n\t// v.filename holds the file name as per upload content\n\tLogger::getLogger()->debug(\"Saving uploaded file as '%s', size is %ld bytes\",\n\t\t\t\tfileName.c_str(), v.size);\n\n\t// Create file\n\tint fd = open(fileName.c_str(),\n\t\t\tO_RDWR | O_CREAT | 
O_TRUNC,\n\t\t\t(mode_t)0644);\n\tif (fd == -1)\n\t{\n\t\t// An error occurred\n\t\tchar errBuf[128];\n\t\tchar *e = strerror_r(errno, errBuf, sizeof(errBuf));\n\t\tLogger::getLogger()->error(\"Error while creating filename '%s': %s\",\n\t\t\t\t\tfileName.c_str(), e);\n\n\t\treturn false;\n\t}\n\n\t// Write file from v.start, v.size bytes\n\tif (write(fd, (const void *)v.start, v.size) == -1)\n\t{\n\t\t// An error occurred\n\t\tchar errBuf[128];\n\t\tchar *e = strerror_r(errno, errBuf, sizeof(errBuf));\n\t\tLogger::getLogger()->error(\"Error while writing to file '%s': %s\",\n\t\t\t\t\tfileName.c_str(), e);\n\t\tclose(fd);\n\t\treturn false;\n\t}\n\n\t// Close file\n\tclose(fd);\n\n\treturn true;\n}\n\n"
  },
  {
    "path": "C/common/image.cpp",
    "content": "/*\n * Fledge DPImage class \n *\n * Copyright (c) 2020 Dianomic System\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <dpimage.h>\n#include <logger.h>\n#include <string.h>\n#include <exception>\n#include <stdexcept>\n\nusing namespace std;\n\n/**\n * DPImage constructor\n *\n * @param width\t\tThe image width\n * @param height\tThe image height\n * @param depth\t\tThe image depth\n * @param data\t\tThe actual image data\n */\nDPImage::DPImage(int width, int height, int depth, void *data) : m_width(width),\n\tm_height(height), m_depth(depth)\n{\n\tm_byteSize = width * height * (depth / 8);\n\tm_pixels = (void *)malloc(m_byteSize);\n\tif (m_pixels)\n\t{\n\t\tmemcpy(m_pixels, data, m_byteSize);\n\t}\n\telse\n\t{\n\t\tthrow runtime_error(\"Insufficient memory to store image\");\n\t}\n}\n\n/**\n * Copy constructor\n *\n * @param DPImage\t\tThe image to copy\n */\nDPImage::DPImage(const DPImage& rhs)\n{\n\tm_width = rhs.m_width;\n\tm_height = rhs.m_height;\n\tm_depth = rhs.m_depth;\n\n\tm_byteSize = m_width * m_height * (m_depth / 8);\n\tm_pixels = (void *)malloc(m_byteSize);\n\tif (m_pixels)\n\t{\n\t\tmemcpy(m_pixels, rhs.m_pixels, m_byteSize);\n\t}\n\telse\n\t{\n\t\tthrow runtime_error(\"Insufficient memory to store image\");\n\t}\n}\n\n/**\n * Assignment operator\n * @param rhs\tRighthand side of equals operator\n */\nDPImage& DPImage::operator=(const DPImage& rhs)\n{\n\t// Free any old data\n\tif (m_pixels)\n\t\tfree(m_pixels);\n    \n\tm_width = rhs.m_width;\n\tm_height = rhs.m_height;\n\tm_depth = rhs.m_depth;\n\n\tm_byteSize = m_width * m_height * (m_depth / 8);\n\tm_pixels = (void *)malloc(m_byteSize);\n\tif (m_pixels)\n\t{\n\t\tmemcpy(m_pixels, rhs.m_pixels, m_byteSize);\n\t}\n\telse\n\t{\n\t\tthrow runtime_error(\"Insufficient memory to store image\");\n\t}\n\treturn *this;\n}\n\n/**\n * Destructor for the image\n */\nDPImage::~DPImage()\n{\n\tif (m_pixels)\n\t\tfree(m_pixels);\n\tm_pixels = NULL;\n}\n"
  },
  {
    "path": "C/common/include/JSONPath.h",
"content": "/*\n * Fledge RapidJSON JSONPath search helper\n *\n * Copyright (c) 2020 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#ifndef _JSONPATH_H\n#define _JSONPATH_H\n\n#include <rapidjson/document.h>\n#include <string>\n#include <vector>\n#include <logger.h>\n\n/**\n * A simple implementation of a JSON Path search mechanism to use\n * alongside RapidJSON\n */\nclass JSONPath {\n\tpublic:\n\t\tJSONPath(const std::string& path);\n\t\t~JSONPath();\n\t\trapidjson::Value *findNode(rapidjson::Value& root);\n\tprivate:\n\t\tclass PathComponent {\n\t\t\tpublic:\n\t\t\t\tvirtual rapidjson::Value *match(rapidjson::Value *node) = 0;\n\t\t};\n\t\tclass LiteralPathComponent : public PathComponent {\n\t\t\tpublic:\n\t\t\t\tLiteralPathComponent(std::string& name);\n\t\t\t\trapidjson::Value *match(rapidjson::Value *node);\n\t\t\tprivate:\n\t\t\t\tstd::string\tm_name;\n\t\t};\n\t\tclass IndexPathComponent : public PathComponent {\n\t\t\tpublic:\n\t\t\t\tIndexPathComponent(std::string& name, int index);\n\t\t\t\trapidjson::Value *match(rapidjson::Value *node);\n\t\t\tprivate:\n\t\t\t\tstd::string\tm_name;\n\t\t\t\tint\t\tm_index;\n\t\t};\n\t\tclass MatchPathComponent : public PathComponent {\n\t\t\tpublic:\n\t\t\t\tMatchPathComponent(std::string& name, std::string& property, std::string& value);\n\t\t\t\trapidjson::Value *match(rapidjson::Value *node);\n\t\t\tprivate:\n\t\t\t\tstd::string\tm_name;\n\t\t\t\tstd::string\tm_property;\n\t\t\t\tstd::string\tm_value;\n\t\t};\n\t\tvoid\t\tparse();\n\t\tstd::string\tm_path;\n\t\tstd::vector<PathComponent *>\n\t\t\t\tm_parsed;\n\t\tLogger\t\t*m_logger;\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/acl.h",
"content": "#ifndef _ACL_H\n#define _ACL_H\n\n/*\n * Fledge ACL management\n *\n * Copyright (c) 2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <string>\n#include <vector>\n#include <exception>\n\n/**\n * This class represents the ACL (Access Control List)\n * as JSON object fetched from Fledge Storage\n *\n * There are utility methods along with the ACLReason class for change handling\n */\nclass ACL {\n\tpublic:\n\t\tACL() {};\n\t\tACL(const std::string &json);\n\t\tconst std::string&\n\t\t\tgetName() { return m_name; };\n\n\t\tclass KeyValueItem {\n\t\t\tpublic:\n\t\t\t\tKeyValueItem(const std::string& k,\n\t\t\t\t\tconst std::string& v) :\n\t\t\t\t\tkey(k), value(v) {};\n\t\t\t\tstd::string\tkey;\n\t\t\t\tstd::string\tvalue;\n\t\t};\n\t\tclass UrlItem {\n\t\t\tpublic:\n\t\t\t\tUrlItem(const std::string& url,\n\t\t\t\t\tconst std::vector<KeyValueItem>& acl) :\n\t\t\t\t\turl(url), acl(acl) {};\n\n\t\t\t\tstd::string\turl;\n\t\t\t\tstd::vector<KeyValueItem>\n\t\t\t\t\t\tacl;\n\t\t};\n\n\tpublic:\n\t\tconst std::vector<KeyValueItem>&\n\t\t       getService() { return m_service; };\t       \n\t\tconst std::vector<UrlItem>&\n\t\t\tgetURL() { return m_url; };\t       \n\tprivate:\n\t\tstd::string\tm_name;\n\t\tstd::vector<KeyValueItem>\n\t\t\t\tm_service;\n\t\tstd::vector<UrlItem>\n\t\t\t\tm_url;\n\n\tpublic:\n\t\t/**\n\t\t * This class represents the ACL security change request\n\t\t *\n\t\t * Parsed JSON should have string attributes 'reason' and 'argument'\n\t\t */\n\t\tclass ACLReason {\n\t\t\tpublic:\n\t\t\t\tACLReason(const std::string &reason);\n\t\t\t\tconst std::string&\n\t\t\t\t\tgetReason() { return m_reason; };\n\t\t\t\tconst std::string&\n\t\t\t\t\tgetArgument() { return m_argument; };\n\t\t\tprivate:\n\t\t\t\tstd::string m_reason;\n\t\t\t\tstd::string m_argument;\n\t\t};\n};\n\n\n/**\n * Custom exception ACLMalformed\n */\nclass ACLMalformed : public std::exception {\n\tpublic:\n\t\tvirtual const char 
*what() const throw()\n\t\t{\n\t\t\treturn \"ACL JSON is malformed\";\n\t\t}\n};\n\n/**\n * Custom exception ACLReasonMalformed\n */\nclass ACLReasonMalformed : public std::exception {\n\tpublic:\n\t\tvirtual const char *what() const throw()\n\t\t{\n\t\t\treturn \"ACL Reason JSON is malformed\";\n\t\t}\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/aggregate.h",
    "content": "#ifndef _AGGREGRATE_H\n#define _AGGREGRATE_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n\n\n/**\n * Aggregate clause in a selection of records\n */\nclass Aggregate {\n\tpublic:\n\t\tAggregate(const std::string& operation, const std::string& column) :\n\t\t\t\tm_column(column), m_operation(operation) {};\n\t\t~Aggregate() {};\n\t\tstd::string\ttoJSON();\n\tprivate:\n\t\tconst std::string\tm_column;\n\t\tconst std::string\tm_operation;\n};\n#endif\n\n"
  },
  {
    "path": "C/common/include/asset_tracking.h",
    "content": "#ifndef _ASSET_TRACKING_H\n#define _ASSET_TRACKING_H\n/*\n * Fledge asset tracking related\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora, Massimiliano Pinto\n */\n#include <logger.h>\n#include <vector>\n#include <set>\n#include <sstream>\n#include <unordered_set>\n#include <management_client.h>\n#include <queue>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\n#include <storage_client.h>\n\n#define MIN_ASSET_TRACKER_UPDATE\t500 // The minimum interval for asset tracker updates\n\n/**\n * Tracking abstract base class to be passed in the process data queue\n */\nclass TrackingTuple {\npublic:\n\tTrackingTuple() {};\n\tvirtual ~TrackingTuple() = default;\n\tvirtual InsertValues processData(bool storage_connected,\n\t\t\t\t\tManagementClient *mgtClient,\n\t\t\t\t\tbool &warned,\n\t\t\t\t\tstd::string &instanceName) = 0;\n\tvirtual std::string assetToString() = 0;\n};\n\n\n/**\n * The AssetTrackingTuple class is used to represent an asset\n * tracking tuple. 
Hash function and '==' operator are defined for\n * this class and pointer to this class that would be required\n * to create an unordered_set of this class.\n */\nclass AssetTrackingTuple : public TrackingTuple {\n\npublic:\n\tstd::string\tassetToString()\n\t{\n\t\tstd::ostringstream o;\n\t\to << \"service:\" << m_serviceName <<\n\t\t\t\", plugin:\" << m_pluginName <<\n\t\t\t\", asset:\" << m_assetName <<\n\t\t\t\", event:\" << m_eventName <<\n\t\t\t\", deprecated:\" << m_deprecated;\n\t\treturn o.str();\n\t}\n\n\tinline bool operator==(const AssetTrackingTuple& x) const\n\t{\n\t\treturn ( x.m_serviceName==m_serviceName &&\n\t\t\tx.m_pluginName==m_pluginName &&\n\t\t\tx.m_assetName==m_assetName &&\n\t\t\tx.m_eventName==m_eventName);\n\t};\n\n\tAssetTrackingTuple(const std::string& service,\n\t\t\tconst std::string& plugin, \n\t\t\tconst std::string& asset,\n\t\t\tconst std::string& event,\n\t\t\tconst bool& deprecated = false) :\n\t\t\tm_serviceName(service),\n\t\t\tm_pluginName(plugin), \n\t\t\tm_assetName(asset),\n\t\t\tm_eventName(event),\n\t\t\tm_deprecated(deprecated) {}\n\n\tstd::string\t&getAssetName() { return m_assetName; };\n\tstd::string     getPluginName() { return m_pluginName;}\n\tstd::string     getEventName()  { return m_eventName;}\n\tstd::string\tgetServiceName() { return m_serviceName;}\n\tbool\t\tisDeprecated() { return m_deprecated; };\n\tvoid\t\tunDeprecate() { m_deprecated = false; };\n\n\tInsertValues\tprocessData(bool storage_connected,\n\t\t\t\tManagementClient *mgtClient,\n\t\t\t\tbool &warned,\n\t\t\t\tstd::string &instanceName);\n\npublic:\n\tstd::string \tm_serviceName;\n\tstd::string \tm_pluginName;\n\tstd::string \tm_assetName;\n\tstd::string \tm_eventName;\n\nprivate:\n\tbool\t\tm_deprecated;\n};\n\nstruct AssetTrackingTuplePtrEqual {\n    bool operator()(AssetTrackingTuple const* a, AssetTrackingTuple const* b) const {\n        return *a == *b;\n    }\n};\n\nnamespace std\n{\n    template <>\n    struct hash<AssetTrackingTuple>\n  
  {\n        size_t operator()(const AssetTrackingTuple& t) const\n        {\n            return (std::hash<std::string>()(t.m_serviceName + t.m_pluginName + t.m_assetName + t.m_eventName));\n        }\n    };\n\n\ttemplate <>\n    struct hash<AssetTrackingTuple*>\n    {\n        size_t operator()(AssetTrackingTuple* t) const\n        {\n            return (std::hash<std::string>()(t->m_serviceName + t->m_pluginName + t->m_assetName + t->m_eventName));\n        }\n    };\n}\n\nclass StorageAssetTrackingTuple : public TrackingTuple {\npublic:\n\tStorageAssetTrackingTuple(const std::string& service,\n\t\t\t\tconst std::string& plugin,\n\t\t\t\tconst std::string& asset,\n\t\t\t\tconst std::string& event,\n\t\t\t\tconst bool& deprecated = false,\n\t\t\t\tconst std::string& datapoints = \"\",\n\t\t\t\tunsigned int c = 0) : m_datapoints(datapoints),\n\t\t\t\t\tm_maxCount(c),\n\t\t\t\t\tm_serviceName(service),\n\t\t\t\t\tm_pluginName(plugin),\n\t\t\t\t\tm_assetName(asset),\n\t\t\t\t\tm_eventName(event),\n\t\t\t\t\tm_deprecated(deprecated)\n\t\t\t\t{};\n\n\tinline bool operator==(const StorageAssetTrackingTuple& x) const\n\t{\n\t\treturn ( x.m_serviceName==m_serviceName &&\n\t\t\tx.m_pluginName==m_pluginName &&\n\t\t\tx.m_assetName==m_assetName &&\n\t\t\tx.m_eventName==m_eventName);\n\t};\n\tstd::string\tassetToString()\n\t{\n\t\tstd::ostringstream o;\n\t\to << \"service:\" << m_serviceName <<\n\t\t\t\", plugin:\" << m_pluginName <<\n\t\t\t\", asset:\" << m_assetName <<\n\t\t\t\", event:\" << m_eventName <<\n\t\t\t\", deprecated:\" << m_deprecated <<\n\t\t\t\", m_datapoints:\" << m_datapoints <<\n\t\t\t\", m_maxCount:\" << m_maxCount;\n\t\treturn o.str();\n\t};\n\n\tbool\t\tisDeprecated() { return m_deprecated; };\n\n\tunsigned int\tgetMaxCount() { return m_maxCount; }\n\tstd::string\tgetDataPoints() { return m_datapoints; }\n\tvoid\t\tunDeprecate() { m_deprecated = false; };\n\tvoid\t\tsetDeprecate() { m_deprecated = true; };\n\n\tInsertValues\tprocessData(bool 
storage,\n\t\t\t\tManagementClient *mgtClient,\n\t\t\t\tbool &warned,\n\t\t\t\tstd::string &instanceName);\n\npublic:\n\tstd::string\tm_datapoints;\n\tunsigned int\tm_maxCount;\n\tstd::string\tm_serviceName;\n\tstd::string\tm_pluginName;\n\tstd::string\tm_assetName;\n\tstd::string\tm_eventName;\n\nprivate:\n\tbool\t\tm_deprecated;\n};\n\nstruct StorageAssetTrackingTuplePtrEqual {\n\tbool operator()(StorageAssetTrackingTuple const* a, StorageAssetTrackingTuple const* b) const {\n\t\treturn *a == *b;\n\t}\n};\n\nnamespace std\n{\n\ttemplate <>\n\tstruct hash<StorageAssetTrackingTuple>\n\t{\n\t\tsize_t operator()(const StorageAssetTrackingTuple& t) const\n\t\t{\n\t\t\treturn (std::hash<std::string>()(t.m_serviceName +\n\t\t\t\t\t\t\tt.m_pluginName +\n\t\t\t\t\t\t\tt.m_assetName +\n\t\t\t\t\t\t\tt.m_eventName));\n\t\t}\n\t};\n\n\ttemplate <>\n\tstruct hash<StorageAssetTrackingTuple*>\n\t{\n\t\tsize_t operator()(StorageAssetTrackingTuple* t) const\n\t\t{\n\t\t\treturn (std::hash<std::string>()(t->m_serviceName +\n\t\t\t\t\t\t\tt->m_pluginName +\n\t\t\t\t\t\t\tt->m_assetName +\n\t\t\t\t\t\t\tt->m_eventName));\n\t\t}\n\t};\n}\n\ntypedef std::unordered_map<StorageAssetTrackingTuple*,\n\t\t\tstd::set<std::string>,\n\t\t\tstd::hash<StorageAssetTrackingTuple*>,\n\t\t\tStorageAssetTrackingTuplePtrEqual> StorageAssetCacheMap;\ntypedef std::unordered_map<StorageAssetTrackingTuple*,\n\t\t\tstd::set<std::string>,\n\t\t\tstd::hash<StorageAssetTrackingTuple*>,\n\t\t\tStorageAssetTrackingTuplePtrEqual>::iterator StorageAssetCacheMapItr;\n\nclass ManagementClient;\n\n/**\n * The AssetTracker class provides the asset tracking functionality.\n * There are methods to populate asset tracking cache from asset_tracker DB table,\n * and methods to check/add asset tracking tuples to DB and to cache\n */\nclass AssetTracker {\n\npublic:\n\tAssetTracker(ManagementClient *mgtClient, std::string service);\n\t~AssetTracker();\n\tstatic AssetTracker 
*getAssetTracker();\n\tvoid\tpopulateAssetTrackingCache(std::string plugin, std::string event);\n\tvoid\tpopulateStorageAssetTrackingCache();\n\tbool\tcheckAssetTrackingCache(AssetTrackingTuple& tuple);\n\tAssetTrackingTuple*\n\t\tfindAssetTrackingCache(AssetTrackingTuple& tuple);\n\tvoid\taddAssetTrackingTuple(AssetTrackingTuple& tuple);\n\tvoid\taddAssetTrackingTuple(std::string plugin, std::string asset, std::string event);\n\tvoid\taddStorageAssetTrackingTuple(StorageAssetTrackingTuple& tuple,\n\t\t\t\t\tstd::set<std::string>& dpSet,\n\t\t\t\t\tbool addObj = false);\n\tStorageAssetTrackingTuple*\n\t\tfindStorageAssetTrackingCache(StorageAssetTrackingTuple& tuple);\n\tstd::string\n\t\tgetIngestService(const std::string& asset)\n\t\t{\n\t\t\treturn getService(\"Ingest\", asset);\n\t\t};\n\tstd::string\n\t\tgetEgressService(const std::string& asset)\n\t\t{\n\t\t\treturn getService(\"Egress\", asset);\n\t\t};\n\tvoid\tworkerThread();\n\n\tbool\tgetDeprecated(StorageAssetTrackingTuple* ptr);\n\tvoid\tupdateCache(std::set<std::string> dpSet, StorageAssetTrackingTuple* ptr);\n\tstd::set<std::string>\n\t\t*getStorageAssetTrackingCacheData(StorageAssetTrackingTuple* tuple);\n\tbool\ttune(unsigned long updateInterval);\n\nprivate:\n\tstd::string\n\t\tgetService(const std::string& event, const std::string& asset);\n\tvoid\tqueue(TrackingTuple *tuple);\n\tvoid\tprocessQueue();\n\tstd::set<std::string>\n\t\tgetDataPointsSet(std::string strDatapoints);\n\tbool\tgetFledgeConfigInfo();\n\nprivate:\n\tstatic AssetTracker\t\t\t*instance;\n\tManagementClient\t\t\t*m_mgtClient;\n\tstd::string\t\t\t\tm_service;\n\tstd::unordered_set<AssetTrackingTuple*, std::hash<AssetTrackingTuple*>, AssetTrackingTuplePtrEqual>\n\t\t\t\t\t\tassetTrackerTuplesCache;\n\tstd::queue<TrackingTuple *>\t\tm_pending;\t// Tuples that are not yet written to the 
storage\n\tstd::thread\t\t\t\t*m_thread;\n\tbool\t\t\t\t\tm_shutdown;\n\tstd::condition_variable\t\t\tm_cv;\n\tstd::mutex\t\t\t\tm_mutex;\n\tstd::string\t\t\t\tm_fledgeName;\n\tStorageClient\t\t\t\t*m_storageClient;\n\tStorageAssetCacheMap\t\t\tstorageAssetTrackerTuplesCache;\n\tunsigned int\t\t\t\tm_updateInterval;\n};\n\n/**\n * A class to hold a set of asset tracking tuples that allows\n * lookup by name.\n */\nclass AssetTrackingTable {\n\tpublic:\n\t\tAssetTrackingTable();\n\t\t~AssetTrackingTable();\n\t\tvoid\t\t\tadd(AssetTrackingTuple *tuple);\n\t\tvoid\t\t\tremove(const std::string& name);\n\t\tAssetTrackingTuple\t*find(const std::string& name);\n\tprivate:\n\t\tstd::map<std::string, AssetTrackingTuple *>\n\t\t\t\tm_tuples;\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/audit_logger.h",
    "content": "#ifndef _AUDIT_LOGGER_H\n#define _AUDIT_LOGGER_H\n/*\n * Fledge Singleton Audit Logger interface\n *\n * Copyright (c) 2023 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <logger.h>\n#include <management_client.h>\n#include <string>\n\n/**\n * A singleton class for access to the audit logger within services. The\n * service must create this with the maagement client before any access to it is used.\n */\nclass AuditLogger {\n\tpublic:\n\t\tAuditLogger(ManagementClient *mgmt);\n\t\t~AuditLogger();\n\n\t\tstatic AuditLogger\t*getLogger();\n\t\tstatic void\t\tauditLog(const std::string& code,\n\t\t\t\t\t\tconst std::string& level,\n\t\t\t\t\t\tconst std::string& data = \"\");\n\n\t\tvoid\t\t\taudit(const std::string& code,\n\t\t\t\t\t\tconst std::string& level,\n\t\t\t\t\t\tconst std::string& data = \"\");\n\n\tprivate:\n\t\tstatic AuditLogger\t*m_instance;\n\t\tManagementClient\t*m_mgmt;\n};\n#endif\n"
  },
  {
    "path": "C/common/include/base64.h",
    "content": "#ifndef _BASE64_H_\n#define _BASE64_H_\n#include <cstdint>\n/*\n * Fledge Base64 encoding and decoding tables\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\nstatic const char encodingTable[] = {\n      'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',\n      'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',\n      'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',\n      'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',\n      'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n',\n      'o', 'p', 'q', 'r', 's', 't', 'u', 'v',\n      'w', 'x', 'y', 'z', '0', '1', '2', '3',\n      '4', '5', '6', '7', '8', '9', '+', '/'\n};\n\nstatic const uint8_t  decodingTable[] = {\n\t64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64,\n\t64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64,\n\t64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 62, 64, 64, 64, 63,\n\t52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 64, 64, 64,  0, 64, 64,\n\t64,  0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14,\n\t15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 64, 64, 64, 64, 64,\n\t64, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,\n\t41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 64, 64, 64, 64, 64,\n\t64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64,\n\t64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64,\n\t64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64,\n\t64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64,\n\t64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64,\n\t64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64,\n\t64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64,\n\t64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64\n};\n#endif\n"
  },
  {
    "path": "C/common/include/base64databuffer.h",
    "content": "#ifndef _BASE64_DATA_BUFFER_H_\n#define _BASE64_DATA_BUFFER_H_\n/*\n * Fledge Base64DataBuffer encoding\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <databuffer.h>\n#include <string>\n#include <stdexcept>\n#include <base64.h>\n\n/**\n * The Base64DataBuffer class provide functionality on top of the\n * simple DataBuffer class that is used to encode the buffer in\n * base64 such that it may be stored as string data.\n */\nclass Base64DataBuffer : public DataBuffer {\n\n\tpublic:\n\t\tBase64DataBuffer(const std::string& encoded);\n  \t\tstd::string \t\tencode();\n};\n#endif\n"
  },
  {
    "path": "C/common/include/base64dpimage.h",
    "content": "#ifndef _BASE64_DPIMAGE_H_\n#define _BASE64_DPIMAGE_H_\n/*\n * Fledge Base64 encoded data point image\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <dpimage.h>\n#include <string>\n#include <stdexcept>\n#include <base64.h>\n\n/**\n * The Base64DPImage provide functionality on top of the \n * simple DPImage class that is used to encode the buffer in\n * base64 such that it may be stored as string data.\n */\nclass Base64DPImage : public DPImage {\n\tpublic:\n\t\tBase64DPImage(const std::string& encoded);\n  \t\tstd::string \t\tencode();\n};\n#endif\n"
  },
  {
    "path": "C/common/include/bearer_token.h",
    "content": "#ifndef _BEARER_TOKEN_H\n#define _BEARER_TOKEN_H\n/*\n * Fledge bearer token utilities\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n#include <server_http.hpp>\n#include <string>\n\n#define AUTH_HEADER \"Authorization\"\n#define BEARER_SCHEMA \"Bearer \"\n\n/**\n * This class represents a JWT bearer token\n *\n * The claims are stored after verification to core service API endpoint\n *\n */\nclass BearerToken\n{\n\tpublic:\n\t\tBearerToken(std::shared_ptr<SimpleWeb::Server<SimpleWeb::HTTP>::Request> request);\n\t\tBearerToken(std::string& token);\n\t\t~BearerToken() {};\n\t\tbool\t\texists()\n\t\t{\n\t\t\treturn m_bearer_token.length() > 0;\n\t\t};\n\t\t// Return string reference\n\t\tconst std::string&\n\t\t\t\ttoken() { return m_bearer_token; };\n\t\tbool\t\tverify(const std::string& serverResponse);\n\t\tunsigned long\tgetExpiration() { return m_expiration; };\n\t\t// Return string references\n\t\tconst std::string&\n\t\t\t\tgetAudience() { return m_audience; };\n\t\tconst std::string&\n\t\t\t\tgetSubject() { return m_subject; };\n\t\tconst std::string&\n\t\t\t\tgetIssuer() { return m_issuer; };\n\n\tprivate:\n\t\tbool\t\tm_verified;\n\t\tunsigned long\tm_expiration;\n\t\tstd::string\tm_bearer_token;\n\t\tstd::string\tm_audience;\n\t\tstd::string\tm_subject;\n\t\tstd::string\tm_issuer;\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/config_category.h",
    "content": "#ifndef _CONFIG_CATEGORY_H\n#define _CONFIG_CATEGORY_H\n\n/*\n * Fledge category management\n *\n * Copyright (c) 2017-2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n\n#include <string>\n#include <vector>\n#include <map>\n#include <rapidjson/document.h>\n#include <json_utils.h>\n\nclass ConfigCategoryDescription {\n\tpublic:\n\t\tConfigCategoryDescription(const std::string& name, const std::string& description) :\n\t\t\t\tm_name(name), m_displayName(name), m_description(description) {};\n\t\tConfigCategoryDescription(const std::string& name, const std::string& displayName, const std::string& description) :\n\t\t\t\tm_name(name), m_displayName(displayName), m_description(description) {};\n\t\tstd::string\tgetName() const { return m_name; };\n\t\tstd::string\tgetDisplayName() const { return m_displayName; };\n\t\tstd::string\tgetDescription() const { return m_description; };\n\t\t// JSON string with m_name and m_description\n\t\tstd::string \ttoJSON() const;\n\tprivate:\n\t\tconst std::string\tm_name;\n\t\tconst std::string\tm_displayName;\n\t\tconst std::string\tm_description;\n};\n\nclass ConfigCategories {\n\tpublic:\n\t\tConfigCategories(const std::string& json); \n\t\tConfigCategories(); // Constructor without parameters\n\t\t~ConfigCategories();\n\t\tunsigned int\t\t\tlength() { return m_categories.size(); };\n\t\tConfigCategoryDescription \t*operator[] (const unsigned int idx) {\n\t\t\t\t\t\treturn m_categories[idx];\n\t\t\t\t\t};\n\t\t// Add one category name with description\n\t\tvoid\t\t\taddCategoryDescription(ConfigCategoryDescription* elem);\n\t\t// JSON string of all categories\n\t\tstd::string\t\ttoJSON() const;\n\n\tprivate:\n\t\tstd::vector<ConfigCategoryDescription *> \tm_categories;\n\t\n};\n\nclass ConfigCategory {\n\tpublic:\n\t\tenum ItemType 
{\n\t\t\tUnknownType,\n\t\t\tStringItem,\n\t\t\tEnumerationItem,\n\t\t\tJsonItem,\n\t\t\tBoolItem,\n\t\t\tNumberItem,\n\t\t\tDoubleItem,\n\t\t\tScriptItem,\n\t\t\tCategoryType,\n\t\t\tCodeItem,\n\t\t\tBucketItem,\n\t\t\tListItem,\n\t\t\tKVListItem\n\t\t};\n\n\t\tConfigCategory(const std::string& name, const std::string& json);\n\t\tConfigCategory() {};\n\t\tConfigCategory(const ConfigCategory& orig);\n\t\tConfigCategory(const ConfigCategory *orig);\n\t\t~ConfigCategory();\n\t\tvoid\t\t\t\taddItem(const std::string& name, const std::string description,\n\t\t\t\t\t\t\tconst std::string& type, const std::string def,\n\t\t\t\t\t\t\tconst std::string& value);\n\t\tvoid\t\t\t\taddItem(const std::string& name, const std::string description,\n\t\t\t\t\t\t\tconst std::string def, const std::string& value,\n\t\t\t\t\t\t\tconst std::vector<std::string> options);\n    \t\tvoid \t\t\t\tremoveItems();\n\t\tvoid \t\t\t\tremoveItemsType(ItemType type);\n\t\tvoid \t\t\t\tkeepItemsType(ItemType type);\n\t\tbool                            extractSubcategory(ConfigCategory &subCategories);\n\t\tvoid\t\t\t\tsetDescription(const std::string& description);\n\t\tstd::string                     getName() const { return m_name; };\n\t\tstd::string                     getDescription() const { return m_description; };\n\n\t\tstd::string                     getDisplayName() const { return m_displayName; };\n\t\tvoid                            setDisplayName(const std::string& displayName) {m_displayName = displayName;};\n\n\t\tunsigned int\t\t\tgetCount() const { return m_items.size(); };\n\t\tbool\t\t\t\titemExists(const std::string& name) const;\n\t\tbool\t\t\t\tsetItemDisplayName(const std::string& name, const std::string& displayName);\n\t\tstd::string\t\t\tgetValue(const std::string& name) const;\n\t\tstd::string\t\t\tgetValue(const std::string& name, const std::string& defaultValue) const;\n\t\tbool\t\t\t\tgetBoolValue(const std::string& name, bool defaultValue = false) 
const;\n\t\tint\t\t\t\tgetIntegerValue(const std::string& name, int defaultValue = 0) const;\n\t\tlong\t\t\t\tgetLongValue(const std::string& name, long defaultValue = 0) const;\n\t\tdouble\t\t\t\tgetDoubleValue(const std::string& name, double defaultValue = 0) const;\n\t\tstd::vector<std::string>\tgetValueList(const std::string& name) const;\n\t\tstd::map<std::string, std::string>\tgetValueKVList(const std::string& name) const;\n\t\tstd::string\t\t\tgetType(const std::string& name) const;\n\t\tstd::string\t\t\tgetDescription(const std::string& name) const;\n\t\tstd::string\t\t\tgetDefault(const std::string& name) const;\n\t\tbool\t\t\t\tsetDefault(const std::string& name, const std::string& value);\n\t\tbool\t\t\t\tsetValue(const std::string& name, const std::string& value);\n\t\tstd::string\t\t\tgetDisplayName(const std::string& name) const;\n\t\tstd::string\t\t\tgetmParentName() const {return (m_parent_name);};\n\t\tstd::vector<std::string>\tgetOptions(const std::string& name) const;\n\t\tstd::string\t\t\tgetLength(const std::string& name) const;\n\t\tstd::string\t\t\tgetMinimum(const std::string& name) const;\n\t\tstd::string\t\t\tgetMaximum(const std::string& name) const;\n\t\tbool\t\t\t\tisString(const std::string& name) const;\n\t\tbool\t\t\t\tisEnumeration(const std::string& name) const;\n\t\tbool\t\t\t\tisJSON(const std::string& name) const;\n\t\tbool\t\t\t\tisBool(const std::string& name) const;\n\t\tbool\t\t\t\tisNumber(const std::string& name) const;\n\t\tbool\t\t\t\tisDouble(const std::string& name) const;\n\t\tbool\t\t\t\tisList(const std::string& name) const;\n\t\tbool\t\t\t\tisKVList(const std::string& name) const;\n\t\tbool\t\t\t\tisDeprecated(const std::string& name) const;\n\t\tstd::string\t\t\ttoJSON(const bool full=false) const;\n\t\tstd::string\t\t\titemsToJSON(const bool full=false) const;\n\t\tConfigCategory& \t\toperator=(ConfigCategory const& rhs);\n\t\tConfigCategory& \t\toperator+=(ConfigCategory const& 
rhs);\n\t\tvoid\t\t\t\tsetItemsValueFromDefault();\n\t\tvoid\t\t\t\tcheckDefaultValuesOnly() const;\n\t\tstd::string \t\t\titemToJSON(const std::string& itemName) const;\n\t\tstd::string\t\t\tto_string(const rapidjson::Value& v) const;\n\t\tstd::vector<std::string>\tgetPermissions(const std::string& name) const;\n\t\tbool\t\t\t\thasPermission(const std::string& name, const std::string& username) const;\n\t\tenum ItemAttribute {\n\t\t\t\t\tORDER_ATTR,\n\t\t\t\t\tREADONLY_ATTR,\n\t\t\t\t\tMANDATORY_ATTR,\n\t\t\t\t\tFILE_ATTR,\n\t\t\t\t\tMINIMUM_ATTR,\n\t\t\t\t\tMAXIMUM_ATTR,\n\t\t\t\t\tLENGTH_ATTR,\n\t\t\t\t\tVALIDITY_ATTR,\n\t\t\t\t\tGROUP_ATTR,\n\t\t\t\t\tDISPLAY_NAME_ATTR,\n\t\t\t\t\tDEPRECATED_ATTR,\n\t\t\t\t\tRULE_ATTR,\n\t\t\t\t\tBUCKET_PROPERTIES_ATTR,\n\t\t\t\t\tLIST_SIZE_ATTR,\n\t\t\t\t\tITEM_TYPE_ATTR,\n\t\t\t\t\tLIST_NAME_ATTR,\n\t\t\t\t\tKVLIST_KEY_NAME_ATTR,\n\t\t\t\t\tKVLIST_KEY_DESCRIPTION_ATTR,\n\t\t\t\t\tJSON_SCHEMA_ATTR\n\t\t\t\t\t};\n\t\tstd::string\t\t\tgetItemAttribute(const std::string& itemName,\n\t\t\t\t\t\t\t\t ItemAttribute itemAttribute) const;\n\n\t\tbool\t\t\t\tsetItemAttribute(const std::string& itemName,\n\t\t\t\t\t\t\t\t ItemAttribute itemAttribute, const std::string& value);\n\n        std::vector<std::pair<std::string,std::string>>* parseBucketItemValue(const std::string &);\n\n\tprotected:\n\t\tclass CategoryItem {\n\t\t\tpublic:\n\t\t\t\tCategoryItem(const std::string& name, const rapidjson::Value& item);\n\t\t\t\tCategoryItem(const std::string& name, const std::string& description,\n\t\t\t\t\t     const std::string& type, const std::string def,\n\t\t\t\t\t     const std::string& value);\n\t\t\t\tCategoryItem(const std::string& name, const std::string& description,\n\t\t\t\t\t     const std::string def, const std::string& value,\n\t\t\t\t\t     const std::vector<std::string> options);\n\t\t\t\tCategoryItem(const CategoryItem& rhs);\n\t\t\t\t// Return both \"value\" and \"default\" items\n\t\t\t\tstd::string\ttoJSON(const bool 
full=false) const;\n\t\t\t\t// Return only \"default\" items\n\t\t\t\tstd::string\tdefaultToJSON() const;\n\n\t\t\tpublic:\n\t\t\t\tstd::string \tm_name;\n\t\t\t\tstd::string\tm_displayName;\n\t\t\t\tstd::string \tm_type;\n\t\t\t\tstd::string \tm_default;\n\t\t\t\tstd::string \tm_value;\n\t\t\t\tstd::string \tm_description;\n\t\t\t\tstd::string \tm_order;\n\t\t\t\tstd::string \tm_readonly;\n\t\t\t\tstd::string \tm_mandatory;\n\t\t\t\tstd::string \tm_deprecated;\n\t\t\t\tstd::string\tm_length;\n\t\t\t\tstd::string\tm_minimum;\n\t\t\t\tstd::string\tm_maximum;\n\t\t\t\tstd::string \tm_filename;\n\t\t\t\tstd::vector<std::string>\n\t\t\t\t\t\tm_options;\n\t\t\t\tstd::string \tm_file;\n\t\t\t\tItemType\tm_itemType;\n\t\t\t\tstd::string\tm_validity;\n\t\t\t\tstd::string\tm_group;\n\t\t\t\tstd::string\tm_rule;\n\t\t\t\tstd::string\tm_bucketProperties;\n\t\t\t\tstd::string\tm_listSize;\n\t\t\t\tstd::string\tm_listItemType;\n\t\t\t\tstd::string\tm_listName;\n\t\t\t\tstd::string\tm_kvlistKeyName;\n\t\t\t\tstd::string\tm_kvlistKeyDescription;\n\t\t\t\tstd::vector<std::string>\n\t\t\t\t\t\tm_permissions;\n\t\t\t\tstd::string\tm_jsonSchema;\n\t\t};\n\t\tstd::vector<CategoryItem *>\tm_items;\n\t\tstd::string\t\t\tm_name;\n\t\tstd::string         \t\tm_parent_name;\n\t\tstd::string\t\t\tm_description;\n\t\tstd::string\t\t\tm_displayName;\n\n\tpublic:\n\t\tusing iterator = std::vector<CategoryItem *>::iterator;\n  \t\tusing const_iterator = std::vector<CategoryItem *>::const_iterator;\n\n\t\tconst_iterator begin() const { return m_items.begin(); }\n\t\tconst_iterator end() const { return m_items.end(); }\n\t\tconst_iterator cbegin() const { return m_items.cbegin(); }\n\t\tconst_iterator cend() const { return m_items.cend(); }\n\t\t\n};\n\n/**\n * DefaultConfigCategory\n *\n * json input parameter must contain only \"default\" items.\n * itemsToJSON() reports only \"defaults\"\n *\n * This class must be used when creating/updating a category\n * via 
ManagementClient::addCategoryDefault(DefaultConfigCategory categoryDefault)\n */\n\nclass DefaultConfigCategory : public ConfigCategory\n{\n\tpublic:\n\t\tDefaultConfigCategory(const std::string& name, const std::string& json);\n\t\tDefaultConfigCategory(const ConfigCategory& orig) : ConfigCategory(orig)\n\t\t{\n\t\t};\n\t\t~DefaultConfigCategory();\n\t\tstd::string\ttoJSON() const;\n\t\tstd::string\titemsToJSON() const;\n};\n\nclass ConfigCategoryChange : public ConfigCategory\n{\n\tpublic:\n\t\tConfigCategoryChange(const std::string& json);\n};\n\nclass ConfigItemNotFound : public std::exception {\n\tpublic:\n\t\tvirtual const char *what() const throw()\n\t\t{\n\t\t\treturn \"Configuration item not found in configuration category\";\n\t\t}\n};\n\nclass ConfigMalformed : public std::exception {\n\tpublic:\n\t\tvirtual const char *what() const throw()\n\t\t{\n\t\t\treturn \"Configuration category JSON is malformed\";\n\t\t}\n};\n\n/**\n * This exception must be raised when at least one of the JSON items of a\n * newly created category has both \"value\" and \"default\" fields.\n */\nclass ConfigValueFoundWithDefault : public std::exception {\n\tpublic:\n\t\t// Constructor with parameter\n\t\tConfigValueFoundWithDefault(const std::string& item)\n\t\t{\n\t\t\tm_errmsg = \"Configuration item '\";\n\t\t\tm_errmsg.append(item);\n\t\t\tm_errmsg += \"' has both 'value' and 'default' fields.\";\n\t\t};\n\n\t\tvirtual const char *what() const throw()\n\t\t{\n\t\t\treturn m_errmsg.c_str();\n\t\t}\n\tprivate:\n\t\tstd::string\tm_errmsg;\n};\n\n/**\n * This exception must be raised when a requested item attribute\n * does not exist.\n * Supported item attributes: \"order\", \"readonly\", \"file\".\n */\nclass ConfigItemAttributeNotFound : public std::exception {\n\tpublic:\n\t\tvirtual const char *what() const throw()\n\t\t{\n\t\t\treturn \"Configuration item attribute not found in configuration category\";\n\t\t}\n};\n\n/**\n * An attempt has been made to access a 
configuration item as a list when the\n * item is not of type list\n */\nclass ConfigItemNotAList : public std::exception {\n        public:\n                virtual const char *what() const throw()\n                {\n                        return \"Configuration item is not a list type item\";\n                }\n};\n#endif\n"
  },
  {
    "path": "C/common/include/cryptography_utils.h",
    "content": "#ifndef _CRYPTOGRAPHY_UTILS_H\n#define _CRYPTOGRAPHY_UTILS_H\n/*\n * Fledge utilities functions for generating cryptographic hash\n *\n * Copyright (c) 2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Devki Nandan Ghildiyal\n */\n\n#include <openssl/sha.h>\n#include <openssl/md5.h>\n#include <openssl/opensslv.h>\n#ifdef OPENSSL_VERSION_NUMBER\n\t#if OPENSSL_VERSION_NUMBER >= 0x30000000L\n\t#include <openssl/evp.h>\n\t#endif\n#endif\n\n#include <string>\nstd::string compute_sha256(const std::string& input);\nstd::string compute_md5(const std::string& input);\n\n#endif\n\n"
  },
  {
    "path": "C/common/include/databuffer.h",
    "content": "#ifndef _DATABUFFER_H\n#define _DATABUFFER_H\n/*\n * Fledge Databuffer type for datapoints\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <unistd.h>\n\n/**\n * Buffer type for storage of arbitrary buffers of data within a datapoint.\n * A DataBuffer is essentially a 1 dimensional array of a memory primitive of\n * itemSize.\n */\nclass DataBuffer {\n\tpublic:\n\t\tDataBuffer(size_t itemSize, size_t len);\n\t\tDataBuffer(const DataBuffer& rhs);\n\t\tDataBuffer& operator=(const DataBuffer& rhs);\n\t\t~DataBuffer();\n\t\tvoid\t\tpopulate(void *src, int len);\n\t\t/**\n\t\t * Return the size of each item in the buffer\n\t\t */\n\t\tsize_t\t\tgetItemSize() { return m_itemSize; };\n\t\t/**\n\t\t * Return the number of items in the buffer\n\t\t */\n\t\tsize_t\t\tgetItemCount() { return m_len; };\n\t\t/**\n\t\t * Return a pointer to the raw data in the data buffer\n\t\t */\n\t\tvoid\t\t*getData() { return m_data; };\n\tprotected:\n\t\tDataBuffer()\t{};\n\t\tsize_t\t\tm_itemSize;\n\t\tsize_t\t\tm_len;\n\t\tvoid\t\t*m_data;\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/datapoint.h",
    "content": "#ifndef _DATAPOINT_H\n#define _DATAPOINT_H\n/*\n * Fledge\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <string>\n#include <sstream>\n#include <iomanip>\n#include <cfloat>\n#include <vector>\n#include <logger.h>\n#include <dpimage.h>\n#include <databuffer.h>\n#include <rapidjson/document.h>\n#include \"string_utils.h\"\n\nclass Datapoint;\n/**\n * Class to hold an actual reading value.\n * The class is simply a tagged union that also contains\n * methods to return the value as a string for encoding\n * in a JSON document.\n */\nclass DatapointValue {\n\tpublic:\n\t\t/**\n\t\t * Construct with a string\n\t\t */\n\t\tDatapointValue(const std::string& value)\n\t\t{\n\t\t\tm_value.str = new std::string(value);\n\t\t\tm_type = T_STRING;\n\t\t};\n\t\t/**\n \t\t * Construct with an integer value\n\t\t */\n\t\tDatapointValue(const long value)\n\t\t{\n\t\t\tm_value.i = value;\n\t\t\tm_type = T_INTEGER;\n\t\t};\n\t\t/**\n\t\t * Construct with a floating point value\n\t\t */\n\t\tDatapointValue(const double value)\n\t\t{\n\t\t\tm_value.f = value;\n\t\t\tm_type = T_FLOAT;\n\t\t};\n\t\t/**\n\t\t * Construct with an array of floating point values\n\t\t */\n\t\tDatapointValue(const std::vector<double>& values)\n\t\t{\n\t\t\tm_value.a = new std::vector<double>(values);\n\t\t\tm_type = T_FLOAT_ARRAY;\n\t\t};\n\n\t\t/**\n\t\t * Construct with an array of Datapoints\n\t\t */\n\t\tDatapointValue(std::vector<Datapoint*>*& values, bool isDict)\n\t\t{\n\t\t\tm_value.dpa = values;\n\t\t\tm_type = isDict? 
T_DP_DICT : T_DP_LIST;\n\t\t}\n\n\t\t/**\n\t\t * Construct with an Image\n\t\t */\n\t\tDatapointValue(const DPImage& value)\n\t\t{\n\t\t\tm_value.image = new DPImage(value);\n\t\t\tm_type = T_IMAGE;\n\t\t}\n\n\t\t/**\n\t\t * Construct with a DataBuffer\n\t\t */\n\t\tDatapointValue(const DataBuffer& value)\n\t\t{\n\t\t\tm_value.dataBuffer = new DataBuffer(value);\n\t\t\tm_type = T_DATABUFFER;\n\t\t}\n\n\t\t/**\n\t\t * Construct with an Image pointer, the\n\t\t * image becomes owned by the DatapointValue\n\t\t */\n\t\tDatapointValue(DPImage *value)\n\t\t{\n\t\t\tm_value.image = value;\n\t\t\tm_type = T_IMAGE;\n\t\t}\n\n\t\t/**\n\t\t * Construct with a DataBuffer pointer, the\n\t\t * buffer becomes owned by the DatapointValue\n\t\t */\n\t\tDatapointValue(DataBuffer *value)\n\t\t{\n\t\t\tm_value.dataBuffer = value;\n\t\t\tm_type = T_DATABUFFER;\n\t\t}\n\n\t\t/**\n\t\t * Construct with a 2 dimensional array of floating point values\n\t\t */\n\t\tDatapointValue(const std::vector< std::vector<double> *>& values)\n\t\t{\n\t\t\tm_value.a2d = new std::vector< std::vector<double>* >;\n\t\t\tfor (auto row : values)\n\t\t\t{\n\t\t\t\tstd::vector<double> *nrow = new std::vector<double>;\n\t\t\t\tfor (auto& d : *row)\n\t\t\t\t{\n\t\t\t\t\tnrow->push_back(d);\n\t\t\t\t}\n\t\t\t\tm_value.a2d->push_back(nrow);\n\t\t\t}\n\t\t\tm_type = T_2D_FLOAT_ARRAY;\n\t\t};\n\n\t\t/**\n\t\t * Copy constructor\n\t\t */\n\t\tDatapointValue(const DatapointValue& obj);\n\n\t\t/**\n\t\t * Assignment Operator\n\t\t */\n\t\tDatapointValue& operator=(const DatapointValue& rhs);\n\n\t\t/**\n\t\t * Destructor\n\t\t */\n\t\t~DatapointValue();\n\n\t\t/**\n                 * Set the value of a datapoint, this may\n                 * also cause the type to be changed.\n                 * @param value A string value to set\n                 */\n                void setValue(std::string value)\n                {\n                        if(m_value.str)\n                        {\n                                delete m_value.str;\n                        }\n               
         m_value.str = new std::string(value);\n                        m_type = T_STRING;\n                }\n\t\n\t\t/**\n\t\t * Set the value of a datapoint, this may\n\t\t * also cause the type to be changed.\n\t\t * @param value\tAn integer value to set\n\t\t */\n\t\tvoid setValue(long value)\n\t\t{\n\t\t\tm_value.i = value;\n\t\t\tm_type = T_INTEGER;\n\t\t}\n\n\t\t/**\n\t\t * Set the value of a datapoint, this may\n\t\t * also cause the type to be changed.\n\t\t * @param value\tA floating point value to set\n\t\t */\n\t\tvoid setValue(double value)\n\t\t{\n\t\t\tm_value.f = value;\n\t\t\tm_type = T_FLOAT;\n\t\t}\n\n\t\t/** Set the value of a datapoint to be an image\n\t\t * @param value The image to set in the data point\n\t\t */\n\t\tvoid setValue(const DPImage& value)\n\t\t{\n\t\t\tm_value.image = new DPImage(value);\n\t\t\tm_type = T_IMAGE;\n\t\t}\n\n\t\t/**\n\t\t * Return the value as a string\n\t\t */\n\t\tstd::string\ttoString() const;\n\n\t\t/**\n\t\t * Return string value without trailing/leading quotes\n\t\t */\n\t\tstd::string\ttoStringValue() const { return *m_value.str; };\n\n\t\t/**\n\t\t * Return long value\n\t\t */\n\t\tlong toInt() const { return m_value.i; };\n\t\t/**\n\t\t * Return double value\n\t\t */\n\t\tdouble toDouble() const { return m_value.f; };\n\n\t\t// Supported Data Tag Types\n\t\ttypedef enum DatapointTag\n\t\t{\n\t\t\tT_STRING,\n\t\t\tT_INTEGER,\n\t\t\tT_FLOAT,\n\t\t\tT_FLOAT_ARRAY,\n\t\t\tT_DP_DICT,\n\t\t\tT_DP_LIST,\n\t\t\tT_IMAGE,\n\t\t\tT_DATABUFFER,\n\t\t\tT_2D_FLOAT_ARRAY\n\t\t} dataTagType;\n\n\t\t/**\n\t\t * Return the Tag type\n\t\t */\n\t\tdataTagType getType() const\n\t\t{\n\t\t\treturn m_type;\n\t\t}\n\n\t\tstd::string getTypeStr() const\n\t\t{\n\t\t\tswitch(m_type)\n\t\t\t{\n\t\t\t\tcase T_STRING: return std::string(\"STRING\");\n\t\t\t\tcase T_INTEGER: return std::string(\"INTEGER\");\n\t\t\t\tcase T_FLOAT: return std::string(\"FLOAT\");\n\t\t\t\tcase T_FLOAT_ARRAY: return 
std::string(\"FLOAT_ARRAY\");\n\t\t\t\tcase T_DP_DICT: return std::string(\"DP_DICT\");\n\t\t\t\tcase T_DP_LIST: return std::string(\"DP_LIST\");\n\t\t\t\tcase T_IMAGE: return std::string(\"IMAGE\");\n\t\t\t\tcase T_DATABUFFER: return std::string(\"DATABUFFER\");\n\t\t\t\tcase T_2D_FLOAT_ARRAY: return std::string(\"2D_FLOAT_ARRAY\");\n\t\t\t\tdefault: return std::string(\"INVALID\");\n\t\t\t}\n\t\t}\n\n\t\t/**\n\t\t * Return array of datapoints\n\t\t */\n\t\tstd::vector<Datapoint*>*& getDpVec()\n\t\t{\n\t\t\treturn m_value.dpa;\n\t\t}\n\n\t\t/**\n\t\t * Return array of float\n\t\t */\n\t\tstd::vector<double>*& getDpArr()\n\t\t{\n\t\t\treturn m_value.a;\n\t\t}\n\n\t\t/**\n\t\t * Return 2D array of float\n\t\t */\n\t\tstd::vector<std::vector<double>* >*& getDp2DArr()\n\t\t{\n\t\t\treturn m_value.a2d;\n\t\t}\n\n\t\t/**\n\t\t * Return the Image\n\t\t */\n\t\tDPImage *getImage()\n\t\t{\n\t\t\treturn m_value.image;\n\t\t}\n\n\t\t/**\n\t\t * Return the DataBuffer\n\t\t */\n\t\tDataBuffer *getDataBuffer()\n\t\t{\n\t\t\treturn m_value.dataBuffer;\n\t\t}\n\n\tprivate:\n\t\tvoid deleteNestedDPV();\n\t\tconst std::string\tescape(const std::string& str) const;\n\t\tunion data_t {\n\t\t\tstd::string*\t\tstr;\n\t\t\tlong\t\t\ti;\n\t\t\tdouble\t\t\tf;\n\t\t\tstd::vector<double>*\ta;\n\t\t\tstd::vector<Datapoint*>\n\t\t\t\t\t\t*dpa;\n\t\t\tDPImage\t\t\t*image;\n\t\t\tDataBuffer\t\t*dataBuffer;\n\t\t\tstd::vector< std::vector<double>* >\n\t\t\t\t\t\t*a2d;\n\t\t\t} m_value;\n\t\tDatapointTag\tm_type;\n};\n\n/**\n * Name and value pair used to represent a data value\n * within an asset reading.\n */\nclass Datapoint {\n\tpublic:\n\t\t/**\n\t\t * Construct with a data point value\n\t\t */\n\t\tDatapoint(const std::string& name, DatapointValue& value) : m_name(name), m_value(value)\n\t\t{\n\t\t}\n\n\t\t~Datapoint()\n\t\t{\n\t\t}\n\t\t/**\n\t\t * Return asset reading data point as a JSON\n\t\t * property that can be included within a JSON\n\t\t * document.\n\t\t 
 */\n\t\tstd::string\ttoJSONProperty()\n\t\t{\n\t\t\tstd::string rval = \"\\\"\" + escape(m_name) + \"\\\":\";\n\t\t\trval += m_value.toString();\n\n\t\t\treturn rval;\n\t\t}\n\n\t\t/**\n\t\t * Return the Datapoint name\n\t\t */\n\t\tconst std::string getName() const\n\t\t{\n\t\t\treturn m_name;\n\t\t}\n\n\t\t/**\n\t\t * Rename the datapoint\n\t\t */\n\t\tvoid setName(std::string name)\n\t\t{\n\t\t\tm_name = name;\n\t\t}\n\n\t\t/**\n\t\t * Return Datapoint value\n\t\t */\n\t\tconst DatapointValue getData() const\n\t\t{\n\t\t\treturn m_value;\n\t\t}\n\n\t\t/**\n\t\t * Return reference to Datapoint value\n\t\t */\n\t\tDatapointValue& getData()\n\t\t{\n\t\t\treturn m_value;\n\t\t}\n\n\t\t/**\n\t\t * Parse a JSON string and generate\n\t\t * a corresponding datapoint vector\n\t\t */\n\t\tstd::vector<Datapoint*>* parseJson(const std::string& json);\n\t\tstd::vector<Datapoint*>* recursiveJson(const rapidjson::Value& document);\n\n\tprivate:\n\t\tstd::string\t\tm_name;\n\t\tDatapointValue\t\tm_value;\n};\n#endif\n\n"
  },
  {
    "path": "C/common/include/datapoint_utility.h",
    "content": "#ifndef INCLUDE_DATAPOINT_UTILITY_H_\n#define INCLUDE_DATAPOINT_UTILITY_H_\n/*\n * Fledge\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Yannick Marchetaux\n * \n */\n\n#include <vector>\n#include <string>\n#include \"datapoint.h\"\n#include \"reading.h\"\n\nnamespace DatapointUtility {  \n\t// Define type\n\tusing Datapoints\t= std::vector<Datapoint*>;\n\tusing Readings\t\t= std::vector<Reading*>;\n\n\t// Function for search value\n\tDatapoints\t\t*findDictElement\t\t(Datapoints* dict, const std::string& key);\n\tDatapointValue \t*findValueElement\t\t(Datapoints* dict, const std::string& key);\n\tDatapoint\t  \t*findDatapointElement\t(Datapoints* dict, const std::string& key);\n\tDatapoints\t\t*findDictOrListElement\t(Datapoints *dict, const std::string& key, DatapointValue::dataTagType type);\n\tDatapoints\t\t*findListElement\t\t(Datapoints *dict, const std::string& key);\n\tstd::string\t \tfindStringElement\t\t(Datapoints* dict, const std::string& key);\n\n\t// delete\n\tvoid deleteValue(Datapoints *dps, const std::string& key);\n\n\t// Function for create element\n\tDatapoint *createStringElement  \t(Datapoints *dps, const std::string& key, const std::string& valueDefault);\n\tDatapoint *createIntegerElement \t(Datapoints *dps, const std::string& key, long valueDefault);\n\tDatapoint *createDictElement\t\t(Datapoints *dps, const std::string& key);\n\tDatapoint *createListElement\t\t(Datapoints *dps, const std::string& key);\n\tDatapoint *createDictOrListElement\t(Datapoints* dps, const std::string& key, bool dict);\n};\n\n#endif  // INCLUDE_DATAPOINT_UTILITY_H_"
  },
  {
    "path": "C/common/include/dpimage.h",
    "content": "#ifndef _DPIMAGE_H\n#define _DPIMAGE_H\n/*\n * Fledge datapoint image\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n/**\n * Simple Image class that will be used within data points to store image data.\n *\n * This class merely acts to encapsulate the data of a simple image in memory, \n * complex functionality will be supported elsewhere. Images within the class\n * are stored as a simple, single area of memory the size of which is defined\n * by the width, hieght and depth of the image.\n */\nclass DPImage {\n\tpublic:\n\t\tDPImage() : m_width(0), m_height(0), m_depth(0), m_pixels(0), m_byteSize(0) {};\n\t\tDPImage(int width, int height, int depth, void *data);\n\t\tDPImage(const DPImage& rhs);\n\t\tDPImage& operator=(const DPImage& rhs);\n\t\t~DPImage();\n\t\t/**\n\t\t * Return the height of the image\n\t\t */\n\t\tint\t\tgetHeight() { return m_height; };\n\t\t/**\n\t\t * Return the width of the image\n\t\t */\n\t\tint\t\tgetWidth() { return m_width; };\n\t\t/**\n\t\t * Return the depth of the image in bits\n\t\t */\n\t\tint\t\tgetDepth() { return m_depth; };\n\t\t/**\n\t\t * Return a pointer to the raw data of the image\n\t\t */\n\t\tvoid\t\t*getData() { return m_pixels; };\n\tprotected:\n\t\tint\t\tm_width;\n\t\tint\t\tm_height;\n\t\tint\t\tm_depth;\n\t\tvoid\t\t*m_pixels;\n\t\tint\t\tm_byteSize;\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/expression.h",
    "content": "#ifndef _EXPRESSION_H\n#define _EXPRESSION_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n#include <sstream>\n#include <iostream>\n#include <vector>\n#include <resultset.h>\n\n\n/**\n * Class that defines data to be inserted or updated in a column within the table\n */\nclass Expression {\n\tpublic:\n\t\tExpression(const std::string& column, const std::string& op, int value) :\n\t\t\tm_column(column), m_op(op), m_type(INT_COLUMN)\n\t\t{\n\t\t\tm_value.ival = value;\n\t\t};\n\t\tExpression(const std::string& column, const std::string& op, double value) :\n\t\t\tm_column(column), m_op(op), m_type(NUMBER_COLUMN)\n\t\t{\n\t\t\tm_value.fval = value;\n\t\t};\n\t\tconst std::string\ttoJSON() const\n\t\t{\n\t\tstd::ostringstream json;\n\n\t\t\tjson << \"{ \\\"column\\\" : \\\"\" << m_column << \"\\\", \";\n\t\t\tjson << \"\\\"operator\\\" : \\\"\" << m_op << \"\\\", \";\n\t\t\tjson << \"\\\"value\\\" : \";\n\t\t\tswitch (m_type)\n\t\t\t{\n\t\t\tcase JSON_COLUMN:\n\t\t\tcase BOOL_COLUMN:\n\t\t\tcase STRING_COLUMN:\n\t\t\tcase NULL_COLUMN:\n\t\t\t\tbreak;\n\t\t\tcase INT_COLUMN:\n\t\t\t\tjson << m_value.ival;\n\t\t\t\tbreak;\n\t\t\tcase NUMBER_COLUMN:\n\t\t\t\tjson << m_value.fval;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tjson << \"}\";\n\t\t\treturn json.str();\n\t\t}\n\tprivate:\n\t\tconst std::string\tm_column;\n\t\tconst std::string\tm_op;\n\t\tColumnType\t\tm_type;\n\t\tunion {\n\t\t\tlong\tival;\n\t\t\tdouble\tfval;\n\t\t\t}\t\tm_value;\n};\n\nclass ExpressionValues : public std::vector<Expression>\n{\n\tpublic:\n\t\tconst std::string\ttoJSON() const\n\t\t{\n\t\tstd::ostringstream json;\n\n\t\t\tjson << \"[ \";\n\t\t\tfor (std::vector<Expression>::const_iterator it = this->cbegin();\n\t\t\t\t it != this->cend(); ++it)\n\n\t\t\t{\n\t\t\t\tjson << it->toJSON();\n\t\t\t\tif (it + 1 != this->cend())\n\t\t\t\t\tjson << \", 
\";\n\t\t\t\telse\n\t\t\t\t\tjson << \" \";\n\t\t\t}\n\t\t\tjson << \"]\";\n\t\t\treturn json.str();\n\t\t};\n};\n#endif\n"
  },
  {
    "path": "C/common/include/exprtk.hpp",
    "content": "/*\n ******************************************************************\n *           C++ Mathematical Expression Toolkit Library          *\n *                                                                *\n * Author: Arash Partow (1999-2018)                               *\n * URL: http://www.partow.net/programming/exprtk/index.html       *\n *                                                                *\n * Copyright notice:                                              *\n * Free use of the C++ Mathematical Expression Toolkit Library is *\n * permitted under the guidelines and in accordance with the most *\n * current version of the MIT License.                            *\n * http://www.opensource.org/licenses/MIT                         *\n *                                                                *\n * Example expressions:                                           *\n * (00) (y + x / y) * (x - y / x)                                 *\n * (01) (x^2 / sin(2 * pi / y)) - x / 2                           *\n * (02) sqrt(1 - (x^2))                                           *\n * (03) 1 - sin(2 * x) + cos(pi / y)                              *\n * (04) a * exp(2 * t) + c                                        *\n * (05) if(((x + 2) == 3) and ((y + 5) <= 9),1 + w, 2 / z)        *\n * (06) (avg(x,y) <= x + y ? 
x - y : x * y) + 2 * pi / x          *\n * (07) z := x + sin(2 * pi / y)                                  *\n * (08) u := 2 * (pi * z) / (w := x + cos(y / pi))                *\n * (09) clamp(-1,sin(2 * pi * x) + cos(y / 2 * pi),+1)            *\n * (10) inrange(-2,m,+2) == if(({-2 <= m} and [m <= +2]),1,0)     *\n * (11) (2sin(x)cos(2y)7 + 1) == (2 * sin(x) * cos(2*y) * 7 + 1)  *\n * (12) (x ilike 's*ri?g') and [y < (3 z^7 + w)]                  *\n *                                                                *\n ******************************************************************\n*/\n\n/*\n * Mark Riddoch\t3rd September 2019\n *\n * A modified version of the mathematical expression toolkit is\n * included here for use by various Fledge plugins. The modification\n * allows for the use of escape characters in the definition and use\n * of symbols.\n */\n\n\n#ifndef INCLUDE_EXPRTK_HPP\n#define INCLUDE_EXPRTK_HPP\n\n#define allow_escaped_symbols\t1\n\n#include <algorithm>\n#include <cctype>\n#include <cmath>\n#include <complex>\n#include <cstdio>\n#include <cstdlib>\n#include <cstring>\n#include <deque>\n#include <exception>\n#include <functional>\n#include <iterator>\n#include <limits>\n#include <list>\n#include <map>\n#include <set>\n#include <stack>\n#include <stdexcept>\n#include <string>\n#include <utility>\n#include <vector>\n\n\nnamespace exprtk\n{\n   #ifdef exprtk_enable_debugging\n     #define exprtk_debug(params) printf params\n   #else\n     #define exprtk_debug(params) (void)0\n   #endif\n\n   #define exprtk_error_location             \\\n   \"exprtk.hpp:\" + details::to_str(__LINE__) \\\n\n   #if defined(__GNUC__) && (__GNUC__  >= 7)\n\n      #define exprtk_disable_fallthrough_begin                      \\\n      _Pragma (\"GCC diagnostic push\")                               \\\n      _Pragma (\"GCC diagnostic ignored \\\"-Wimplicit-fallthrough\\\"\") \\\n\n      #define exprtk_disable_fallthrough_end                        \\\n      _Pragma (\"GCC 
diagnostic pop\")                                \\\n\n   #else\n      #define exprtk_disable_fallthrough_begin (void)0;\n      #define exprtk_disable_fallthrough_end   (void)0;\n   #endif\n\n   namespace details\n   {\n      typedef unsigned char     uchar_t;\n      typedef char               char_t;\n      typedef uchar_t*        uchar_ptr;\n      typedef char_t*          char_ptr;\n      typedef uchar_t const* uchar_cptr;\n      typedef char_t const*   char_cptr;\n\n      inline bool is_whitespace(const char_t c)\n      {\n         return (' '  == c) || ('\\n' == c) ||\n                ('\\r' == c) || ('\\t' == c) ||\n                ('\\b' == c) || ('\\v' == c) ||\n                ('\\f' == c) ;\n      }\n\n      inline bool is_operator_char(const char_t c)\n      {\n         return ('+' == c) || ('-' == c) ||\n                ('*' == c) || ('/' == c) ||\n                ('^' == c) || ('<' == c) ||\n                ('>' == c) || ('=' == c) ||\n                (',' == c) || ('!' == c) ||\n                ('(' == c) || (')' == c) ||\n                ('[' == c) || (']' == c) ||\n                ('{' == c) || ('}' == c) ||\n                ('%' == c) || (':' == c) ||\n                ('?' 
== c) || ('&' == c) ||\n                ('|' == c) || (';' == c) ;\n      }\n\n      inline bool is_letter(const char_t c)\n      {\n         return (('a' <= c) && (c <= 'z')) ||\n                (('A' <= c) && (c <= 'Z')) ;\n      }\n\n      inline bool is_digit(const char_t c)\n      {\n         return ('0' <= c) && (c <= '9');\n      }\n\n      inline bool is_letter_or_digit(const char_t c)\n      {\n         return is_letter(c) || is_digit(c);\n      }\n\n      inline bool is_left_bracket(const char_t c)\n      {\n         return ('(' == c) || ('[' == c) || ('{' == c);\n      }\n\n      inline bool is_right_bracket(const char_t c)\n      {\n         return (')' == c) || (']' == c) || ('}' == c);\n      }\n\n      inline bool is_bracket(const char_t c)\n      {\n         return is_left_bracket(c) || is_right_bracket(c);\n      }\n\n      inline bool is_sign(const char_t c)\n      {\n         return ('+' == c) || ('-' == c);\n      }\n\n      inline bool is_invalid(const char_t c)\n      {\n         return !is_whitespace   (c) &&\n                !is_operator_char(c) &&\n                !is_letter       (c) &&\n                !is_digit        (c) &&\n                ('.'  
!= c)          &&\n                ('_'  != c)          &&\n                ('$'  != c)          &&\n                ('~'  != c)          &&\n                ('\\'' != c);\n      }\n\n      #ifndef exprtk_disable_caseinsensitivity\n      inline void case_normalise(std::string& s)\n      {\n         for (std::size_t i = 0; i < s.size(); ++i)\n         {\n            s[i] = static_cast<std::string::value_type>(std::tolower(s[i]));\n         }\n      }\n\n      inline bool imatch(const char_t c1, const char_t c2)\n      {\n         return std::tolower(c1) == std::tolower(c2);\n      }\n\n      inline bool imatch(const std::string& s1, const std::string& s2)\n      {\n         if (s1.size() == s2.size())\n         {\n            for (std::size_t i = 0; i < s1.size(); ++i)\n            {\n               if (std::tolower(s1[i]) != std::tolower(s2[i]))\n               {\n                  return false;\n               }\n            }\n\n            return true;\n         }\n\n         return false;\n      }\n\n      struct ilesscompare\n      {\n         inline bool operator() (const std::string& s1, const std::string& s2) const\n         {\n            const std::size_t length = std::min(s1.size(),s2.size());\n\n            for (std::size_t i = 0; i < length;  ++i)\n            {\n               const char_t c1 = static_cast<char>(std::tolower(s1[i]));\n               const char_t c2 = static_cast<char>(std::tolower(s2[i]));\n\n               if (c1 > c2)\n                  return false;\n               else if (c1 < c2)\n                  return true;\n            }\n\n            return s1.size() < s2.size();\n         }\n      };\n\n      #else\n      inline void case_normalise(std::string&)\n      {}\n\n      inline bool imatch(const char_t c1, const char_t c2)\n      {\n         return c1 == c2;\n      }\n\n      inline bool imatch(const std::string& s1, const std::string& s2)\n      {\n         return s1 == s2;\n      }\n\n      struct ilesscompare\n      {\n      
   inline bool operator() (const std::string& s1, const std::string& s2) const\n         {\n            return s1 < s2;\n         }\n      };\n      #endif\n\n      inline bool is_valid_sf_symbol(const std::string& symbol)\n      {\n         // Special function: $f12 or $F34\n         return (4 == symbol.size())  &&\n                ('$' == symbol[0])    &&\n                imatch('f',symbol[1]) &&\n                is_digit(symbol[2])   &&\n                is_digit(symbol[3]);\n      }\n\n      inline const char_t& front(const std::string& s)\n      {\n         return s[0];\n      }\n\n      inline const char_t& back(const std::string& s)\n      {\n         return s[s.size() - 1];\n      }\n\n      inline std::string to_str(int i)\n      {\n         if (0 == i)\n            return std::string(\"0\");\n\n         std::string result;\n\n         if (i < 0)\n         {\n            for ( ; i; i /= 10)\n            {\n               result += '0' + char(-(i % 10));\n            }\n\n            result += '-';\n         }\n         else\n         {\n            for ( ; i; i /= 10)\n            {\n               result += '0' + char(i % 10);\n            }\n         }\n\n         std::reverse(result.begin(), result.end());\n\n         return result;\n      }\n\n      inline std::string to_str(std::size_t i)\n      {\n         return to_str(static_cast<int>(i));\n      }\n\n      inline bool is_hex_digit(const std::string::value_type digit)\n      {\n         return (('0' <= digit) && (digit <= '9')) ||\n                (('A' <= digit) && (digit <= 'F')) ||\n                (('a' <= digit) && (digit <= 'f')) ;\n      }\n\n      inline uchar_t hex_to_bin(uchar_t h)\n      {\n         if (('0' <= h) && (h <= '9'))\n            return (h - '0');\n         else\n            return static_cast<unsigned char>(std::toupper(h) - 'A');\n      }\n\n      template <typename Iterator>\n      inline void parse_hex(Iterator& itr, Iterator end, std::string::value_type& result)\n      
{\n         if (\n              (end !=  (itr    )) &&\n              (end !=  (itr + 1)) &&\n              (end !=  (itr + 2)) &&\n              (end !=  (itr + 3)) &&\n              ('0' == *(itr    )) &&\n              (\n                ('x' == *(itr + 1)) ||\n                ('X' == *(itr + 1))\n              ) &&\n              (is_hex_digit(*(itr + 2))) &&\n              (is_hex_digit(*(itr + 3)))\n            )\n         {\n            result = hex_to_bin(static_cast<uchar_t>(*(itr + 2))) << 4 |\n                     hex_to_bin(static_cast<uchar_t>(*(itr + 3))) ;\n            itr += 3;\n         }\n         else\n            result = '\\0';\n      }\n\n      inline void cleanup_escapes(std::string& s)\n      {\n         typedef std::string::iterator str_itr_t;\n\n         str_itr_t itr1 = s.begin();\n         str_itr_t itr2 = s.begin();\n         str_itr_t end  = s.end  ();\n\n         std::size_t removal_count  = 0;\n\n         while (end != itr1)\n         {\n            if ('\\\\' == (*itr1))\n            {\n               ++removal_count;\n\n               if (end == ++itr1)\n                  break;\n               else if ('\\\\' != (*itr1))\n               {\n                  switch (*itr1)\n                  {\n                     case 'n' : (*itr1) = '\\n'; break;\n                     case 'r' : (*itr1) = '\\r'; break;\n                     case 't' : (*itr1) = '\\t'; break;\n                     case '0' : parse_hex(itr1, end, (*itr1));\n                                removal_count += 3;\n                                break;\n                  }\n\n                  continue;\n               }\n            }\n\n            if (itr1 != itr2)\n            {\n               (*itr2) = (*itr1);\n            }\n\n            ++itr1;\n            ++itr2;\n         }\n\n         s.resize(s.size() - removal_count);\n      }\n\n      class build_string\n      {\n      public:\n\n         build_string(const std::size_t& initial_size = 64)\n         {\n 
           data_.reserve(initial_size);\n         }\n\n         inline build_string& operator << (const std::string& s)\n         {\n            data_ += s;\n            return (*this);\n         }\n\n         inline build_string& operator << (char_cptr s)\n         {\n            data_ += std::string(s);\n            return (*this);\n         }\n\n         inline operator std::string () const\n         {\n            return data_;\n         }\n\n         inline std::string as_string() const\n         {\n            return data_;\n         }\n\n      private:\n\n         std::string data_;\n      };\n\n      static const std::string reserved_words[] =\n                                  {\n                                    \"break\",  \"case\",  \"continue\",  \"default\",  \"false\",  \"for\",\n                                    \"if\", \"else\", \"ilike\",  \"in\", \"like\", \"and\",  \"nand\", \"nor\",\n                                    \"not\",  \"null\",  \"or\",   \"repeat\", \"return\",  \"shl\",  \"shr\",\n                                    \"swap\", \"switch\", \"true\",  \"until\", \"var\",  \"while\", \"xnor\",\n                                    \"xor\", \"&\", \"|\"\n                                  };\n\n      static const std::size_t reserved_words_size = sizeof(reserved_words) / sizeof(std::string);\n\n      static const std::string reserved_symbols[] =\n                                  {\n                                    \"abs\",  \"acos\",  \"acosh\",  \"and\",  \"asin\",  \"asinh\", \"atan\",\n                                    \"atanh\", \"atan2\", \"avg\",  \"break\", \"case\", \"ceil\",  \"clamp\",\n                                    \"continue\",   \"cos\",   \"cosh\",   \"cot\",   \"csc\",  \"default\",\n                                    \"deg2grad\",  \"deg2rad\",   \"equal\",  \"erf\",   \"erfc\",  \"exp\",\n                                    \"expm1\",  \"false\",   \"floor\",  \"for\",   \"frac\",  \"grad2deg\",\n         
                           \"hypot\", \"iclamp\", \"if\",  \"else\", \"ilike\", \"in\",  \"inrange\",\n                                    \"like\",  \"log\",  \"log10\", \"log2\",  \"logn\",  \"log1p\", \"mand\",\n                                    \"max\", \"min\",  \"mod\", \"mor\",  \"mul\", \"ncdf\",  \"nand\", \"nor\",\n                                    \"not\",   \"not_equal\",   \"null\",   \"or\",   \"pow\",  \"rad2deg\",\n                                    \"repeat\", \"return\", \"root\", \"round\", \"roundn\", \"sec\", \"sgn\",\n                                    \"shl\", \"shr\", \"sin\", \"sinc\", \"sinh\", \"sqrt\",  \"sum\", \"swap\",\n                                    \"switch\", \"tan\",  \"tanh\", \"true\",  \"trunc\", \"until\",  \"var\",\n                                    \"while\", \"xnor\", \"xor\", \"&\", \"|\"\n                                  };\n\n      static const std::size_t reserved_symbols_size = sizeof(reserved_symbols) / sizeof(std::string);\n\n      static const std::string base_function_list[] =\n                                  {\n                                    \"abs\", \"acos\",  \"acosh\", \"asin\",  \"asinh\", \"atan\",  \"atanh\",\n                                    \"atan2\",  \"avg\",  \"ceil\",  \"clamp\",  \"cos\",  \"cosh\",  \"cot\",\n                                    \"csc\",  \"equal\",  \"erf\",  \"erfc\",  \"exp\",  \"expm1\", \"floor\",\n                                    \"frac\", \"hypot\", \"iclamp\",  \"like\", \"log\", \"log10\",  \"log2\",\n                                    \"logn\", \"log1p\", \"mand\", \"max\", \"min\", \"mod\", \"mor\",  \"mul\",\n                                    \"ncdf\",  \"pow\",  \"root\",  \"round\",  \"roundn\",  \"sec\", \"sgn\",\n                                    \"sin\", \"sinc\", \"sinh\", \"sqrt\", \"sum\", \"swap\", \"tan\", \"tanh\",\n                                    \"trunc\",  \"not_equal\",  \"inrange\",  \"deg2grad\",   \"deg2rad\",\n         
                           \"rad2deg\", \"grad2deg\"\n                                  };\n\n      static const std::size_t base_function_list_size = sizeof(base_function_list) / sizeof(std::string);\n\n      static const std::string logic_ops_list[] =\n                                  {\n                                    \"and\", \"nand\", \"nor\", \"not\", \"or\",  \"xnor\", \"xor\", \"&\", \"|\"\n                                  };\n\n      static const std::size_t logic_ops_list_size = sizeof(logic_ops_list) / sizeof(std::string);\n\n      static const std::string cntrl_struct_list[] =\n                                  {\n                                     \"if\", \"switch\", \"for\", \"while\", \"repeat\", \"return\"\n                                  };\n\n      static const std::size_t cntrl_struct_list_size = sizeof(cntrl_struct_list) / sizeof(std::string);\n\n      static const std::string arithmetic_ops_list[] =\n                                  {\n                                    \"+\", \"-\", \"*\", \"/\", \"%\", \"^\"\n                                  };\n\n      static const std::size_t arithmetic_ops_list_size = sizeof(arithmetic_ops_list) / sizeof(std::string);\n\n      static const std::string assignment_ops_list[] =\n                                  {\n                                    \":=\", \"+=\", \"-=\",\n                                    \"*=\", \"/=\", \"%=\"\n                                  };\n\n      static const std::size_t assignment_ops_list_size = sizeof(assignment_ops_list) / sizeof(std::string);\n\n      static const std::string inequality_ops_list[] =\n                                  {\n                                     \"<\",  \"<=\", \"==\",\n                                     \"=\",  \"!=\", \"<>\",\n                                    \">=\",  \">\"\n                                  };\n\n      static const std::size_t inequality_ops_list_size = sizeof(inequality_ops_list) / sizeof(std::string);\n\n 
     inline bool is_reserved_word(const std::string& symbol)\n      {\n         for (std::size_t i = 0; i < reserved_words_size; ++i)\n         {\n            if (imatch(symbol, reserved_words[i]))\n            {\n               return true;\n            }\n         }\n\n         return false;\n      }\n\n      inline bool is_reserved_symbol(const std::string& symbol)\n      {\n         for (std::size_t i = 0; i < reserved_symbols_size; ++i)\n         {\n            if (imatch(symbol, reserved_symbols[i]))\n            {\n               return true;\n            }\n         }\n\n         return false;\n      }\n\n      inline bool is_base_function(const std::string& function_name)\n      {\n         for (std::size_t i = 0; i < base_function_list_size; ++i)\n         {\n            if (imatch(function_name, base_function_list[i]))\n            {\n               return true;\n            }\n         }\n\n         return false;\n      }\n\n      inline bool is_control_struct(const std::string& cntrl_strct)\n      {\n         for (std::size_t i = 0; i < cntrl_struct_list_size; ++i)\n         {\n            if (imatch(cntrl_strct, cntrl_struct_list[i]))\n            {\n               return true;\n            }\n         }\n\n         return false;\n      }\n\n      inline bool is_logic_opr(const std::string& lgc_opr)\n      {\n         for (std::size_t i = 0; i < logic_ops_list_size; ++i)\n         {\n            if (imatch(lgc_opr, logic_ops_list[i]))\n            {\n               return true;\n            }\n         }\n\n         return false;\n      }\n\n      struct cs_match\n      {\n         static inline bool cmp(const char_t c0, const char_t c1)\n         {\n            return (c0 == c1);\n         }\n      };\n\n      struct cis_match\n      {\n         static inline bool cmp(const char_t c0, const char_t c1)\n         {\n            return (std::tolower(c0) == std::tolower(c1));\n         }\n      };\n\n      template <typename Iterator, typename Compare>\n 
     inline bool match_impl(const Iterator pattern_begin,\n                             const Iterator pattern_end,\n                             const Iterator data_begin,\n                             const Iterator data_end,\n                             const typename std::iterator_traits<Iterator>::value_type& zero_or_more,\n                             const typename std::iterator_traits<Iterator>::value_type& zero_or_one)\n      {\n         Iterator d_itr = data_begin;\n         Iterator p_itr = pattern_begin;\n\n         while ((p_itr != pattern_end) && (d_itr != data_end))\n         {\n            if (zero_or_more == *p_itr)\n            {\n               while ((p_itr != pattern_end) && (*p_itr == zero_or_more || *p_itr == zero_or_one))\n               {\n                  ++p_itr;\n               }\n\n               if (p_itr == pattern_end)\n                  return true;\n\n               const typename std::iterator_traits<Iterator>::value_type c = *(p_itr++);\n\n               while ((d_itr != data_end) && !Compare::cmp(c,*d_itr))\n               {\n                  ++d_itr;\n               }\n\n               ++d_itr;\n            }\n            else if ((*p_itr == zero_or_one) || Compare::cmp(*p_itr, *d_itr))\n            {\n               ++d_itr;\n               ++p_itr;\n            }\n            else\n               return false;\n         }\n\n         if (d_itr != data_end)\n            return false;\n         else if (p_itr == pattern_end)\n            return true;\n         else if ((zero_or_more == *p_itr) || (zero_or_one == *p_itr))\n            ++p_itr;\n\n         return pattern_end == p_itr;\n      }\n\n      inline bool wc_match(const std::string& wild_card,\n                           const std::string& str)\n      {\n         return match_impl<char_cptr,cs_match>(wild_card.data(),\n                                               wild_card.data() + wild_card.size(),\n                                               str.data(),\n       
                                        str.data() + str.size(),\n                                               '*',\n                                               '?');\n      }\n\n      inline bool wc_imatch(const std::string& wild_card,\n                            const std::string& str)\n      {\n         return match_impl<char_cptr,cis_match>(wild_card.data(),\n                                                wild_card.data() + wild_card.size(),\n                                                str.data(),\n                                                str.data() + str.size(),\n                                                '*',\n                                                '?');\n      }\n\n      inline bool sequence_match(const std::string& pattern,\n                                 const std::string& str,\n                                 std::size_t&       diff_index,\n                                 char_t&            diff_value)\n      {\n         if (str.empty())\n         {\n            return (\"Z\" == pattern);\n         }\n         else if ('*' == pattern[0])\n            return false;\n\n         typedef std::string::const_iterator itr_t;\n\n         itr_t p_itr = pattern.begin();\n         itr_t s_itr = str    .begin();\n\n         itr_t p_end = pattern.end();\n         itr_t s_end = str    .end();\n\n         while ((s_end != s_itr) && (p_end != p_itr))\n         {\n            if ('*' == (*p_itr))\n            {\n               const char_t target = static_cast<char>(std::toupper(*(p_itr - 1)));\n\n               if ('*' == target)\n               {\n                  diff_index = static_cast<std::size_t>(std::distance(str.begin(),s_itr));\n                  diff_value = static_cast<char>(std::toupper(*p_itr));\n\n                  return false;\n               }\n               else\n                  ++p_itr;\n\n               while (s_itr != s_end)\n               {\n                  if (target != std::toupper(*s_itr))\n              
       break;\n                  else\n                     ++s_itr;\n               }\n\n               continue;\n            }\n            else if (\n                      ('?' != *p_itr) &&\n                      std::toupper(*p_itr) != std::toupper(*s_itr)\n                    )\n            {\n               diff_index = static_cast<std::size_t>(std::distance(str.begin(),s_itr));\n               diff_value = static_cast<char>(std::toupper(*p_itr));\n\n               return false;\n            }\n\n            ++p_itr;\n            ++s_itr;\n         }\n\n         return (\n                  (s_end == s_itr) &&\n                  (\n                    (p_end ==  p_itr) ||\n                    ('*'   == *p_itr)\n                  )\n                );\n      }\n\n      static const double pow10[] = {\n                                      1.0,\n                                      1.0E+001, 1.0E+002, 1.0E+003, 1.0E+004,\n                                      1.0E+005, 1.0E+006, 1.0E+007, 1.0E+008,\n                                      1.0E+009, 1.0E+010, 1.0E+011, 1.0E+012,\n                                      1.0E+013, 1.0E+014, 1.0E+015, 1.0E+016\n                                    };\n\n      static const std::size_t pow10_size = sizeof(pow10) / sizeof(double);\n\n      namespace numeric\n      {\n         namespace constant\n         {\n            static const double e       =  2.71828182845904523536028747135266249775724709369996;\n            static const double pi      =  3.14159265358979323846264338327950288419716939937510;\n            static const double pi_2    =  1.57079632679489661923132169163975144209858469968755;\n            static const double pi_4    =  0.78539816339744830961566084581987572104929234984378;\n            static const double pi_180  =  0.01745329251994329576923690768488612713442871888542;\n            static const double _1_pi   =  0.31830988618379067153776752674502872406891929148091;\n            static const double _2_pi 
  =  0.63661977236758134307553505349005744813783858296183;\n            static const double _180_pi = 57.29577951308232087679815481410517033240547246656443;\n            static const double log2    =  0.69314718055994530941723212145817656807550013436026;\n            static const double sqrt2   =  1.41421356237309504880168872420969807856967187537695;\n         }\n\n         namespace details\n         {\n            struct unknown_type_tag { unknown_type_tag() {} };\n            struct real_type_tag    { real_type_tag   () {} };\n            struct complex_type_tag { complex_type_tag() {} };\n            struct int_type_tag     { int_type_tag    () {} };\n\n            template <typename T>\n            struct number_type\n            {\n               typedef unknown_type_tag type;\n               number_type() {}\n            };\n\n            #define exprtk_register_real_type_tag(T)             \\\n            template<> struct number_type<T>                     \\\n            { typedef real_type_tag type; number_type() {} };    \\\n\n            #define exprtk_register_complex_type_tag(T)          \\\n            template<> struct number_type<std::complex<T> >      \\\n            { typedef complex_type_tag type; number_type() {} }; \\\n\n            #define exprtk_register_int_type_tag(T)              \\\n            template<> struct number_type<T>                     \\\n            { typedef int_type_tag type; number_type() {} };     \\\n\n            exprtk_register_real_type_tag(double     )\n            exprtk_register_real_type_tag(long double)\n            exprtk_register_real_type_tag(float      )\n\n            exprtk_register_complex_type_tag(double     )\n            exprtk_register_complex_type_tag(long double)\n            exprtk_register_complex_type_tag(float      )\n\n            exprtk_register_int_type_tag(short                 )\n            exprtk_register_int_type_tag(int                   )\n            exprtk_register_int_type_tag(long 
long int         )\n            exprtk_register_int_type_tag(unsigned short        )\n            exprtk_register_int_type_tag(unsigned int          )\n            exprtk_register_int_type_tag(unsigned long long int)\n\n            #undef exprtk_register_real_type_tag\n            #undef exprtk_register_complex_type_tag\n            #undef exprtk_register_int_type_tag\n\n            template <typename T>\n            struct epsilon_type\n            {\n               static inline T value()\n               {\n                  const T epsilon = T(0.0000000001);\n                  return epsilon;\n               }\n            };\n\n            template <>\n            struct epsilon_type <float>\n            {\n               static inline float value()\n               {\n                  const float epsilon = float(0.000001f);\n                  return epsilon;\n               }\n            };\n\n            template <>\n            struct epsilon_type <long double>\n            {\n               static inline long double value()\n               {\n                  const long double epsilon = (long double)(0.000000000001);\n                  return epsilon;\n               }\n            };\n\n            template <typename T>\n            inline bool is_nan_impl(const T v, real_type_tag)\n            {\n               return std::not_equal_to<T>()(v,v);\n            }\n\n            template <typename T>\n            inline int to_int32_impl(const T v, real_type_tag)\n            {\n               return static_cast<int>(v);\n            }\n\n            template <typename T>\n            inline long long int to_int64_impl(const T v, real_type_tag)\n            {\n               return static_cast<long long int>(v);\n            }\n\n            template <typename T>\n            inline bool is_true_impl(const T v)\n            {\n               return std::not_equal_to<T>()(T(0),v);\n            }\n\n            template <typename T>\n            inline bool is_false_impl(const T v)\n            {\n             
  return std::equal_to<T>()(T(0),v);\n            }\n\n            template <typename T>\n            inline T abs_impl(const T v, real_type_tag)\n            {\n               return ((v < T(0)) ? -v : v);\n            }\n\n            template <typename T>\n            inline T min_impl(const T v0, const T v1, real_type_tag)\n            {\n               return std::min<T>(v0,v1);\n            }\n\n            template <typename T>\n            inline T max_impl(const T v0, const T v1, real_type_tag)\n            {\n               return std::max<T>(v0,v1);\n            }\n\n            template <typename T>\n            inline T equal_impl(const T v0, const T v1, real_type_tag)\n            {\n               const T epsilon = epsilon_type<T>::value();\n               return (abs_impl(v0 - v1,real_type_tag()) <= (std::max(T(1),std::max(abs_impl(v0,real_type_tag()),abs_impl(v1,real_type_tag()))) * epsilon)) ? T(1) : T(0);\n            }\n\n            inline float equal_impl(const float v0, const float v1, real_type_tag)\n            {\n               const float epsilon = epsilon_type<float>::value();\n               return (abs_impl(v0 - v1,real_type_tag()) <= (std::max(1.0f,std::max(abs_impl(v0,real_type_tag()),abs_impl(v1,real_type_tag()))) * epsilon)) ? 1.0f : 0.0f;\n            }\n\n            template <typename T>\n            inline T equal_impl(const T v0, const T v1, int_type_tag)\n            {\n               return (v0 == v1) ? 
1 : 0;\n            }\n\n            template <typename T>\n            inline T expm1_impl(const T v, real_type_tag)\n            {\n               // return std::expm1<T>(v);\n               if (abs_impl(v,real_type_tag()) < T(0.00001))\n                  return v + (T(0.5) * v * v);\n               else\n                  return std::exp(v) - T(1);\n            }\n\n            template <typename T>\n            inline T expm1_impl(const T v, int_type_tag)\n            {\n               return T(std::exp(static_cast<double>(v))) - T(1);\n            }\n\n            template <typename T>\n            inline T nequal_impl(const T v0, const T v1, real_type_tag)\n            {\n               typedef real_type_tag rtg;\n               const T epsilon = epsilon_type<T>::value();\n               return (abs_impl(v0 - v1,rtg()) > (std::max(T(1),std::max(abs_impl(v0,rtg()),abs_impl(v1,rtg()))) * epsilon)) ? T(1) : T(0);\n            }\n\n            inline float nequal_impl(const float v0, const float v1, real_type_tag)\n            {\n               typedef real_type_tag rtg;\n               const float epsilon = epsilon_type<float>::value();\n               return (abs_impl(v0 - v1,rtg()) > (std::max(1.0f,std::max(abs_impl(v0,rtg()),abs_impl(v1,rtg()))) * epsilon)) ? 1.0f : 0.0f;\n            }\n\n            template <typename T>\n            inline T nequal_impl(const T v0, const T v1, int_type_tag)\n            {\n               return (v0 != v1) ? 
1 : 0;\n            }\n\n            template <typename T>\n            inline T modulus_impl(const T v0, const T v1, real_type_tag)\n            {\n               return std::fmod(v0,v1);\n            }\n\n            template <typename T>\n            inline T modulus_impl(const T v0, const T v1, int_type_tag)\n            {\n               return v0 % v1;\n            }\n\n            template <typename T>\n            inline T pow_impl(const T v0, const T v1, real_type_tag)\n            {\n               return std::pow(v0,v1);\n            }\n\n            template <typename T>\n            inline T pow_impl(const T v0, const T v1, int_type_tag)\n            {\n               return std::pow(static_cast<double>(v0),static_cast<double>(v1));\n            }\n\n            template <typename T>\n            inline T logn_impl(const T v0, const T v1, real_type_tag)\n            {\n               return std::log(v0) / std::log(v1);\n            }\n\n            template <typename T>\n            inline T logn_impl(const T v0, const T v1, int_type_tag)\n            {\n               return static_cast<T>(logn_impl<double>(static_cast<double>(v0),static_cast<double>(v1),real_type_tag()));\n            }\n\n            template <typename T>\n            inline T log1p_impl(const T v, real_type_tag)\n            {\n               if (v > T(-1))\n               {\n                  if (abs_impl(v,real_type_tag()) > T(0.0001))\n                  {\n                     return std::log(T(1) + v);\n                  }\n                  else\n                     return (T(-0.5) * v + T(1)) * v;\n               }\n               else\n                  return std::numeric_limits<T>::quiet_NaN();\n            }\n\n            template <typename T>\n            inline T log1p_impl(const T v, int_type_tag)\n            {\n               if (v > T(-1))\n               {\n                  return std::log(T(1) + v);\n               }\n               else\n                  
return std::numeric_limits<T>::quiet_NaN();\n            }\n\n            template <typename T>\n            inline T root_impl(const T v0, const T v1, real_type_tag)\n            {\n               if (v1 < T(0))\n                  return std::numeric_limits<T>::quiet_NaN();\n\n               const std::size_t n = static_cast<std::size_t>(v1);\n\n               if ((v0 < T(0)) && (0 == (n % 2)))\n                  return std::numeric_limits<T>::quiet_NaN();\n\n               return std::pow(v0, T(1) / n);\n            }\n\n            template <typename T>\n            inline T root_impl(const T v0, const T v1, int_type_tag)\n            {\n               return root_impl<double>(static_cast<double>(v0),static_cast<double>(v1),real_type_tag());\n            }\n\n            template <typename T>\n            inline T round_impl(const T v, real_type_tag)\n            {\n               return ((v < T(0)) ? std::ceil(v - T(0.5)) : std::floor(v + T(0.5)));\n            }\n\n            template <typename T>\n            inline T roundn_impl(const T v0, const T v1, real_type_tag)\n            {\n               const int index = std::max<int>(0, std::min<int>(pow10_size - 1, (int)std::floor(v1)));\n               const T p10 = T(pow10[index]);\n\n               if (v0 < T(0))\n                  return T(std::ceil ((v0 * p10) - T(0.5)) / p10);\n               else\n                  return T(std::floor((v0 * p10) + T(0.5)) / p10);\n            }\n\n            template <typename T>\n            inline T roundn_impl(const T v0, const T, int_type_tag)\n            {\n               return v0;\n            }\n\n            template <typename T>\n            inline T hypot_impl(const T v0, const T v1, real_type_tag)\n            {\n               return std::sqrt((v0 * v0) + (v1 * v1));\n            }\n\n            template <typename T>\n            inline T hypot_impl(const T v0, const T v1, int_type_tag)\n            {\n               return 
static_cast<T>(std::sqrt(static_cast<double>((v0 * v0) + (v1 * v1))));\n            }\n\n            template <typename T>\n            inline T atan2_impl(const T v0, const T v1, real_type_tag)\n            {\n               return std::atan2(v0,v1);\n            }\n\n            template <typename T>\n            inline T atan2_impl(const T, const T, int_type_tag)\n            {\n               return 0;\n            }\n\n            template <typename T>\n            inline T shr_impl(const T v0, const T v1, real_type_tag)\n            {\n               return v0 * (T(1) / std::pow(T(2),static_cast<T>(static_cast<int>(v1))));\n            }\n\n            template <typename T>\n            inline T shr_impl(const T v0, const T v1, int_type_tag)\n            {\n               return v0 >> v1;\n            }\n\n            template <typename T>\n            inline T shl_impl(const T v0, const T v1, real_type_tag)\n            {\n               return v0 * std::pow(T(2),static_cast<T>(static_cast<int>(v1)));\n            }\n\n            template <typename T>\n            inline T shl_impl(const T v0, const T v1, int_type_tag)\n            {\n               return v0 << v1;\n            }\n\n            template <typename T>\n            inline T sgn_impl(const T v, real_type_tag)\n            {\n                    if (v > T(0)) return T(+1);\n               else if (v < T(0)) return T(-1);\n               else               return T( 0);\n            }\n\n            template <typename T>\n            inline T sgn_impl(const T v, int_type_tag)\n            {\n                    if (v > T(0)) return T(+1);\n               else if (v < T(0)) return T(-1);\n               else               return T( 0);\n            }\n\n            template <typename T>\n            inline T and_impl(const T v0, const T v1, real_type_tag)\n            {\n               return (is_true_impl(v0) && is_true_impl(v1)) ? 
T(1) : T(0);\n            }\n\n            template <typename T>\n            inline T and_impl(const T v0, const T v1, int_type_tag)\n            {\n               return v0 && v1;\n            }\n\n            template <typename T>\n            inline T nand_impl(const T v0, const T v1, real_type_tag)\n            {\n               return (is_false_impl(v0) || is_false_impl(v1)) ? T(1) : T(0);\n            }\n\n            template <typename T>\n            inline T nand_impl(const T v0, const T v1, int_type_tag)\n            {\n               return !(v0 && v1);\n            }\n\n            template <typename T>\n            inline T or_impl(const T v0, const T v1, real_type_tag)\n            {\n               return (is_true_impl(v0) || is_true_impl(v1)) ? T(1) : T(0);\n            }\n\n            template <typename T>\n            inline T or_impl(const T v0, const T v1, int_type_tag)\n            {\n               return (v0 || v1);\n            }\n\n            template <typename T>\n            inline T nor_impl(const T v0, const T v1, real_type_tag)\n            {\n               return (is_false_impl(v0) && is_false_impl(v1)) ? T(1) : T(0);\n            }\n\n            template <typename T>\n            inline T nor_impl(const T v0, const T v1, int_type_tag)\n            {\n               return !(v0 || v1);\n            }\n\n            template <typename T>\n            inline T xor_impl(const T v0, const T v1, real_type_tag)\n            {\n               return (is_false_impl(v0) != is_false_impl(v1)) ? 
T(1) : T(0);\n            }\n\n            template <typename T>\n            inline T xor_impl(const T v0, const T v1, int_type_tag)\n            {\n               return v0 ^ v1;\n            }\n\n            template <typename T>\n            inline T xnor_impl(const T v0, const T v1, real_type_tag)\n            {\n               const bool v0_true = is_true_impl(v0);\n               const bool v1_true = is_true_impl(v1);\n\n               if ((v0_true &&  v1_true) || (!v0_true && !v1_true))\n                  return T(1);\n               else\n                  return T(0);\n            }\n\n            template <typename T>\n            inline T xnor_impl(const T v0, const T v1, int_type_tag)\n            {\n               const bool v0_true = is_true_impl(v0);\n               const bool v1_true = is_true_impl(v1);\n\n               if ((v0_true &&  v1_true) || (!v0_true && !v1_true))\n                  return T(1);\n               else\n                  return T(0);\n            }\n\n            #if (defined(_MSC_VER) && (_MSC_VER >= 1900)) || !defined(_MSC_VER)\n            #define exprtk_define_erf(TT,impl)           \\\n            inline TT erf_impl(TT v) { return impl(v); } \\\n\n            exprtk_define_erf(      float,::erff)\n            exprtk_define_erf(     double,::erf )\n            exprtk_define_erf(long double,::erfl)\n            #undef exprtk_define_erf\n            #endif\n\n            template <typename T>\n            inline T erf_impl(T v, real_type_tag)\n            {\n               #if defined(_MSC_VER) && (_MSC_VER < 1900)\n               // Credits: Abramowitz & Stegun Equations 7.1.25-28\n               static const T c[] = {\n                                      T( 1.26551223), T(1.00002368),\n                                      T( 0.37409196), T(0.09678418),\n                                      T(-0.18628806), T(0.27886807),\n                                      T(-1.13520398), T(1.48851587),\n                             
         T(-0.82215223), T(0.17087277)\n                                    };\n\n               const T t = T(1) / (T(1) + T(0.5) * abs_impl(v,real_type_tag()));\n\n               T result = T(1) - t * std::exp((-v * v) -\n                                      c[0] + t * (c[1] + t *\n                                     (c[2] + t * (c[3] + t *\n                                     (c[4] + t * (c[5] + t *\n                                     (c[6] + t * (c[7] + t *\n                                     (c[8] + t * (c[9]))))))))));\n\n               return (v >= T(0)) ? result : -result;\n               #else\n               return erf_impl(v);\n               #endif\n            }\n\n            template <typename T>\n            inline T erf_impl(T v, int_type_tag)\n            {\n               return erf_impl(static_cast<double>(v),real_type_tag());\n            }\n\n            #if (defined(_MSC_VER) && (_MSC_VER >= 1900)) || !defined(_MSC_VER)\n            #define exprtk_define_erfc(TT,impl)           \\\n            inline TT erfc_impl(TT v) { return impl(v); } \\\n\n            exprtk_define_erfc(      float,::erfcf)\n            exprtk_define_erfc(     double,::erfc )\n            exprtk_define_erfc(long double,::erfcl)\n            #undef exprtk_define_erfc\n            #endif\n\n            template <typename T>\n            inline T erfc_impl(T v, real_type_tag)\n            {\n               #if defined(_MSC_VER) && (_MSC_VER < 1900)\n               return T(1) - erf_impl(v,real_type_tag());\n               #else\n               return erfc_impl(v);\n               #endif\n            }\n\n            template <typename T>\n            inline T erfc_impl(T v, int_type_tag)\n            {\n               return erfc_impl(static_cast<double>(v),real_type_tag());\n            }\n\n            template <typename T>\n            inline T ncdf_impl(T v, real_type_tag)\n            {\n               T cnd = T(0.5) * (T(1) + erf_impl(\n                         
                  abs_impl(v,real_type_tag()) /\n                                           T(numeric::constant::sqrt2),real_type_tag()));\n               return  (v < T(0)) ? (T(1) - cnd) : cnd;\n            }\n\n            template <typename T>\n            inline T ncdf_impl(T v, int_type_tag)\n            {\n               return ncdf_impl(static_cast<double>(v),real_type_tag());\n            }\n\n            template <typename T>\n            inline T sinc_impl(T v, real_type_tag)\n            {\n               if (std::abs(v) >= std::numeric_limits<T>::epsilon())\n                   return(std::sin(v) / v);\n               else\n                  return T(1);\n            }\n\n            template <typename T>\n            inline T sinc_impl(T v, int_type_tag)\n            {\n               return sinc_impl(static_cast<double>(v),real_type_tag());\n            }\n\n            template <typename T> inline T  acos_impl(const T v, real_type_tag) { return std::acos (v); }\n            template <typename T> inline T acosh_impl(const T v, real_type_tag) { return std::log(v + std::sqrt((v * v) - T(1))); }\n            template <typename T> inline T  asin_impl(const T v, real_type_tag) { return std::asin (v); }\n            template <typename T> inline T asinh_impl(const T v, real_type_tag) { return std::log(v + std::sqrt((v * v) + T(1))); }\n            template <typename T> inline T  atan_impl(const T v, real_type_tag) { return std::atan (v); }\n            template <typename T> inline T atanh_impl(const T v, real_type_tag) { return (std::log(T(1) + v) - std::log(T(1) - v)) / T(2); }\n            template <typename T> inline T  ceil_impl(const T v, real_type_tag) { return std::ceil (v); }\n            template <typename T> inline T   cos_impl(const T v, real_type_tag) { return std::cos  (v); }\n            template <typename T> inline T  cosh_impl(const T v, real_type_tag) { return std::cosh (v); }\n            template <typename T> inline T   exp_impl(const T v, 
real_type_tag) { return std::exp  (v); }\n            template <typename T> inline T floor_impl(const T v, real_type_tag) { return std::floor(v); }\n            template <typename T> inline T   log_impl(const T v, real_type_tag) { return std::log  (v); }\n            template <typename T> inline T log10_impl(const T v, real_type_tag) { return std::log10(v); }\n            template <typename T> inline T  log2_impl(const T v, real_type_tag) { return std::log(v)/T(numeric::constant::log2); }\n            template <typename T> inline T   neg_impl(const T v, real_type_tag) { return -v;            }\n            template <typename T> inline T   pos_impl(const T v, real_type_tag) { return +v;            }\n            template <typename T> inline T   sin_impl(const T v, real_type_tag) { return std::sin  (v); }\n            template <typename T> inline T  sinh_impl(const T v, real_type_tag) { return std::sinh (v); }\n            template <typename T> inline T  sqrt_impl(const T v, real_type_tag) { return std::sqrt (v); }\n            template <typename T> inline T   tan_impl(const T v, real_type_tag) { return std::tan  (v); }\n            template <typename T> inline T  tanh_impl(const T v, real_type_tag) { return std::tanh (v); }\n            template <typename T> inline T   cot_impl(const T v, real_type_tag) { return T(1) / std::tan(v); }\n            template <typename T> inline T   sec_impl(const T v, real_type_tag) { return T(1) / std::cos(v); }\n            template <typename T> inline T   csc_impl(const T v, real_type_tag) { return T(1) / std::sin(v); }\n            template <typename T> inline T   r2d_impl(const T v, real_type_tag) { return (v * T(numeric::constant::_180_pi)); }\n            template <typename T> inline T   d2r_impl(const T v, real_type_tag) { return (v * T(numeric::constant::pi_180));  }\n            template <typename T> inline T   d2g_impl(const T v, real_type_tag) { return (v * T(20.0/9.0)); }\n            template <typename T> inline T   
g2d_impl(const T v, real_type_tag) { return (v * T(9.0/20.0)); }\n            template <typename T> inline T  notl_impl(const T v, real_type_tag) { return (std::not_equal_to<T>()(T(0),v) ? T(0) : T(1)); }\n            template <typename T> inline T  frac_impl(const T v, real_type_tag) { return (v - static_cast<long long>(v)); }\n            template <typename T> inline T trunc_impl(const T v, real_type_tag) { return T(static_cast<long long>(v));    }\n\n            template <typename T> inline T const_pi_impl(real_type_tag) { return T(numeric::constant::pi); }\n            template <typename T> inline T const_e_impl (real_type_tag) { return T(numeric::constant::e);  }\n\n            template <typename T> inline T   abs_impl(const T v, int_type_tag) { return ((v >= T(0)) ? v : -v); }\n            template <typename T> inline T   exp_impl(const T v, int_type_tag) { return std::exp  (v); }\n            template <typename T> inline T   log_impl(const T v, int_type_tag) { return std::log  (v); }\n            template <typename T> inline T log10_impl(const T v, int_type_tag) { return std::log10(v); }\n            template <typename T> inline T  log2_impl(const T v, int_type_tag) { return std::log(v)/T(numeric::constant::log2); }\n            template <typename T> inline T   neg_impl(const T v, int_type_tag) { return -v;            }\n            template <typename T> inline T   pos_impl(const T v, int_type_tag) { return +v;            }\n            template <typename T> inline T  ceil_impl(const T v, int_type_tag) { return v;             }\n            template <typename T> inline T floor_impl(const T v, int_type_tag) { return v;             }\n            template <typename T> inline T round_impl(const T v, int_type_tag) { return v;             }\n            template <typename T> inline T  notl_impl(const T v, int_type_tag) { return !v;            }\n            template <typename T> inline T  sqrt_impl(const T v, int_type_tag) { return std::sqrt (v); }\n            
template <typename T> inline T  frac_impl(const T  , int_type_tag) { return T(0);          }\n            template <typename T> inline T trunc_impl(const T v, int_type_tag) { return v;             }\n            template <typename T> inline T  acos_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n            template <typename T> inline T acosh_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n            template <typename T> inline T  asin_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n            template <typename T> inline T asinh_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n            template <typename T> inline T  atan_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n            template <typename T> inline T atanh_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n            template <typename T> inline T   cos_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n            template <typename T> inline T  cosh_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n            template <typename T> inline T   sin_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n            template <typename T> inline T  sinh_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n            template <typename T> inline T   tan_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n            template <typename T> inline T  tanh_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n            template <typename T> inline T   cot_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n            template <typename T> inline T   sec_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n        
    template <typename T> inline T   csc_impl(const T  , int_type_tag) { return std::numeric_limits<T>::quiet_NaN(); }\n\n            template <typename T>\n            inline bool is_integer_impl(const T& v, real_type_tag)\n            {\n               return std::equal_to<T>()(T(0),std::fmod(v,T(1)));\n            }\n\n            template <typename T>\n            inline bool is_integer_impl(const T&, int_type_tag)\n            {\n               return true;\n            }\n         }\n\n         template <typename Type>\n         struct numeric_info { enum { length = 0, size = 32, bound_length = 0, min_exp = 0, max_exp = 0 }; };\n\n         template<> struct numeric_info<int>         { enum { length = 10, size = 16, bound_length = 9}; };\n         template<> struct numeric_info<float>       { enum { min_exp =  -38, max_exp =  +38}; };\n         template<> struct numeric_info<double>      { enum { min_exp = -308, max_exp = +308}; };\n         template<> struct numeric_info<long double> { enum { min_exp = -308, max_exp = +308}; };\n\n         template <typename T>\n         inline int to_int32(const T v)\n         {\n            const typename details::number_type<T>::type num_type;\n            return to_int32_impl(v, num_type);\n         }\n\n         template <typename T>\n         inline long long int to_int64(const T v)\n         {\n            const typename details::number_type<T>::type num_type;\n            return to_int64_impl(v, num_type);\n         }\n\n         template <typename T>\n         inline bool is_nan(const T v)\n         {\n            const typename details::number_type<T>::type num_type;\n            return is_nan_impl(v, num_type);\n         }\n\n         template <typename T>\n         inline T min(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return min_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T max(const T v0, const T 
v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return max_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T equal(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return equal_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T nequal(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return nequal_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T modulus(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return modulus_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T pow(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return pow_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T logn(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return logn_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T root(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return root_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T roundn(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return roundn_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T hypot(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return hypot_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T 
atan2(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return atan2_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T shr(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return shr_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T shl(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return shl_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T and_opr(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return and_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T nand_opr(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return nand_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T or_opr(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return or_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T nor_opr(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return nor_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T xor_opr(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return xor_impl(v0, v1, num_type);\n         }\n\n         template <typename T>\n         inline T xnor_opr(const T v0, const T v1)\n         {\n            const typename details::number_type<T>::type num_type;\n            return xnor_impl(v0, v1, num_type);\n         }\n\n         template <typename 
T>\n         inline bool is_integer(const T v)\n         {\n            const typename details::number_type<T>::type num_type;\n            return is_integer_impl(v, num_type);\n         }\n\n         template <typename T, unsigned int N>\n         struct fast_exp\n         {\n            static inline T result(T v)\n            {\n               unsigned int k = N;\n               T l = T(1);\n\n               while (k)\n               {\n                  if (k & 1)\n                  {\n                     l *= v;\n                     --k;\n                  }\n\n                  v *= v;\n                  k >>= 1;\n               }\n\n               return l;\n            }\n         };\n\n         template <typename T> struct fast_exp<T,10> { static inline T result(T v) { T v_5 = fast_exp<T,5>::result(v); return v_5 * v_5; } };\n         template <typename T> struct fast_exp<T, 9> { static inline T result(T v) { return fast_exp<T,8>::result(v) * v; } };\n         template <typename T> struct fast_exp<T, 8> { static inline T result(T v) { T v_4 = fast_exp<T,4>::result(v); return v_4 * v_4; } };\n         template <typename T> struct fast_exp<T, 7> { static inline T result(T v) { return fast_exp<T,6>::result(v) * v; } };\n         template <typename T> struct fast_exp<T, 6> { static inline T result(T v) { T v_3 = fast_exp<T,3>::result(v); return v_3 * v_3; } };\n         template <typename T> struct fast_exp<T, 5> { static inline T result(T v) { return fast_exp<T,4>::result(v) * v; } };\n         template <typename T> struct fast_exp<T, 4> { static inline T result(T v) { T v_2 = v * v; return v_2 * v_2; } };\n         template <typename T> struct fast_exp<T, 3> { static inline T result(T v) { return v * v * v; } };\n         template <typename T> struct fast_exp<T, 2> { static inline T result(T v) { return v * v;     } };\n         template <typename T> struct fast_exp<T, 1> { static inline T result(T v) { return v;         } };\n         template <typename 
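\n         // Note (annotation): fast_exp<T,N>::result computes v^N by binary\n         // (square-and-multiply) exponentiation in O(log N) multiplies; the\n         // specialisations for N <= 10 unroll the loop entirely, e.g.\n         // fast_exp<T,4> is (v*v)*(v*v), i.e. two multiplies.\n         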
T> struct fast_exp<T, 0> { static inline T result(T  ) { return T(1);      } };\n\n         #define exprtk_define_unary_function(FunctionName)        \\\n         template <typename T>                                     \\\n         inline T FunctionName (const T v)                         \\\n         {                                                         \\\n            const typename details::number_type<T>::type num_type; \\\n            return  FunctionName##_impl(v,num_type);               \\\n         }                                                         \\\n\n         exprtk_define_unary_function(abs  )\n         exprtk_define_unary_function(acos )\n         exprtk_define_unary_function(acosh)\n         exprtk_define_unary_function(asin )\n         exprtk_define_unary_function(asinh)\n         exprtk_define_unary_function(atan )\n         exprtk_define_unary_function(atanh)\n         exprtk_define_unary_function(ceil )\n         exprtk_define_unary_function(cos  )\n         exprtk_define_unary_function(cosh )\n         exprtk_define_unary_function(exp  )\n         exprtk_define_unary_function(expm1)\n         exprtk_define_unary_function(floor)\n         exprtk_define_unary_function(log  )\n         exprtk_define_unary_function(log10)\n         exprtk_define_unary_function(log2 )\n         exprtk_define_unary_function(log1p)\n         exprtk_define_unary_function(neg  )\n         exprtk_define_unary_function(pos  )\n         exprtk_define_unary_function(round)\n         exprtk_define_unary_function(sin  )\n         exprtk_define_unary_function(sinc )\n         exprtk_define_unary_function(sinh )\n         exprtk_define_unary_function(sqrt )\n         exprtk_define_unary_function(tan  )\n         exprtk_define_unary_function(tanh )\n         exprtk_define_unary_function(cot  )\n         exprtk_define_unary_function(sec  )\n         exprtk_define_unary_function(csc  )\n         exprtk_define_unary_function(r2d  )\n         
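// Note (annotation), terse names for reference: r2d/d2r = radians<->degrees,\n         // d2g/g2d = degrees<->gradians, notl = logical NOT, sgn = signum,\n         // ncdf = standard normal CDF, frac/trunc = fractional/integral part.\n         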
exprtk_define_unary_function(d2r  )\n         exprtk_define_unary_function(d2g  )\n         exprtk_define_unary_function(g2d  )\n         exprtk_define_unary_function(notl )\n         exprtk_define_unary_function(sgn  )\n         exprtk_define_unary_function(erf  )\n         exprtk_define_unary_function(erfc )\n         exprtk_define_unary_function(ncdf )\n         exprtk_define_unary_function(frac )\n         exprtk_define_unary_function(trunc)\n         #undef exprtk_define_unary_function\n      }\n\n      template <typename T>\n      inline T compute_pow10(T d, const int exponent)\n      {\n         static const double fract10[] =\n         {\n           0.0,\n           1.0E+001, 1.0E+002, 1.0E+003, 1.0E+004, 1.0E+005, 1.0E+006, 1.0E+007, 1.0E+008, 1.0E+009, 1.0E+010,\n           1.0E+011, 1.0E+012, 1.0E+013, 1.0E+014, 1.0E+015, 1.0E+016, 1.0E+017, 1.0E+018, 1.0E+019, 1.0E+020,\n           1.0E+021, 1.0E+022, 1.0E+023, 1.0E+024, 1.0E+025, 1.0E+026, 1.0E+027, 1.0E+028, 1.0E+029, 1.0E+030,\n           1.0E+031, 1.0E+032, 1.0E+033, 1.0E+034, 1.0E+035, 1.0E+036, 1.0E+037, 1.0E+038, 1.0E+039, 1.0E+040,\n           1.0E+041, 1.0E+042, 1.0E+043, 1.0E+044, 1.0E+045, 1.0E+046, 1.0E+047, 1.0E+048, 1.0E+049, 1.0E+050,\n           1.0E+051, 1.0E+052, 1.0E+053, 1.0E+054, 1.0E+055, 1.0E+056, 1.0E+057, 1.0E+058, 1.0E+059, 1.0E+060,\n           1.0E+061, 1.0E+062, 1.0E+063, 1.0E+064, 1.0E+065, 1.0E+066, 1.0E+067, 1.0E+068, 1.0E+069, 1.0E+070,\n           1.0E+071, 1.0E+072, 1.0E+073, 1.0E+074, 1.0E+075, 1.0E+076, 1.0E+077, 1.0E+078, 1.0E+079, 1.0E+080,\n           1.0E+081, 1.0E+082, 1.0E+083, 1.0E+084, 1.0E+085, 1.0E+086, 1.0E+087, 1.0E+088, 1.0E+089, 1.0E+090,\n           1.0E+091, 1.0E+092, 1.0E+093, 1.0E+094, 1.0E+095, 1.0E+096, 1.0E+097, 1.0E+098, 1.0E+099, 1.0E+100,\n           1.0E+101, 1.0E+102, 1.0E+103, 1.0E+104, 1.0E+105, 1.0E+106, 1.0E+107, 1.0E+108, 1.0E+109, 1.0E+110,\n           1.0E+111, 1.0E+112, 1.0E+113, 1.0E+114, 1.0E+115, 1.0E+116, 1.0E+117, 1.0E+118, 
1.0E+119, 1.0E+120,\n           1.0E+121, 1.0E+122, 1.0E+123, 1.0E+124, 1.0E+125, 1.0E+126, 1.0E+127, 1.0E+128, 1.0E+129, 1.0E+130,\n           1.0E+131, 1.0E+132, 1.0E+133, 1.0E+134, 1.0E+135, 1.0E+136, 1.0E+137, 1.0E+138, 1.0E+139, 1.0E+140,\n           1.0E+141, 1.0E+142, 1.0E+143, 1.0E+144, 1.0E+145, 1.0E+146, 1.0E+147, 1.0E+148, 1.0E+149, 1.0E+150,\n           1.0E+151, 1.0E+152, 1.0E+153, 1.0E+154, 1.0E+155, 1.0E+156, 1.0E+157, 1.0E+158, 1.0E+159, 1.0E+160,\n           1.0E+161, 1.0E+162, 1.0E+163, 1.0E+164, 1.0E+165, 1.0E+166, 1.0E+167, 1.0E+168, 1.0E+169, 1.0E+170,\n           1.0E+171, 1.0E+172, 1.0E+173, 1.0E+174, 1.0E+175, 1.0E+176, 1.0E+177, 1.0E+178, 1.0E+179, 1.0E+180,\n           1.0E+181, 1.0E+182, 1.0E+183, 1.0E+184, 1.0E+185, 1.0E+186, 1.0E+187, 1.0E+188, 1.0E+189, 1.0E+190,\n           1.0E+191, 1.0E+192, 1.0E+193, 1.0E+194, 1.0E+195, 1.0E+196, 1.0E+197, 1.0E+198, 1.0E+199, 1.0E+200,\n           1.0E+201, 1.0E+202, 1.0E+203, 1.0E+204, 1.0E+205, 1.0E+206, 1.0E+207, 1.0E+208, 1.0E+209, 1.0E+210,\n           1.0E+211, 1.0E+212, 1.0E+213, 1.0E+214, 1.0E+215, 1.0E+216, 1.0E+217, 1.0E+218, 1.0E+219, 1.0E+220,\n           1.0E+221, 1.0E+222, 1.0E+223, 1.0E+224, 1.0E+225, 1.0E+226, 1.0E+227, 1.0E+228, 1.0E+229, 1.0E+230,\n           1.0E+231, 1.0E+232, 1.0E+233, 1.0E+234, 1.0E+235, 1.0E+236, 1.0E+237, 1.0E+238, 1.0E+239, 1.0E+240,\n           1.0E+241, 1.0E+242, 1.0E+243, 1.0E+244, 1.0E+245, 1.0E+246, 1.0E+247, 1.0E+248, 1.0E+249, 1.0E+250,\n           1.0E+251, 1.0E+252, 1.0E+253, 1.0E+254, 1.0E+255, 1.0E+256, 1.0E+257, 1.0E+258, 1.0E+259, 1.0E+260,\n           1.0E+261, 1.0E+262, 1.0E+263, 1.0E+264, 1.0E+265, 1.0E+266, 1.0E+267, 1.0E+268, 1.0E+269, 1.0E+270,\n           1.0E+271, 1.0E+272, 1.0E+273, 1.0E+274, 1.0E+275, 1.0E+276, 1.0E+277, 1.0E+278, 1.0E+279, 1.0E+280,\n           1.0E+281, 1.0E+282, 1.0E+283, 1.0E+284, 1.0E+285, 1.0E+286, 1.0E+287, 1.0E+288, 1.0E+289, 1.0E+290,\n           1.0E+291, 1.0E+292, 1.0E+293, 1.0E+294, 1.0E+295, 1.0E+296, 
1.0E+297, 1.0E+298, 1.0E+299, 1.0E+300,\n           1.0E+301, 1.0E+302, 1.0E+303, 1.0E+304, 1.0E+305, 1.0E+306, 1.0E+307, 1.0E+308\n         };\n\n         static const int fract10_size = static_cast<int>(sizeof(fract10) / sizeof(double));\n\n         const int e = std::abs(exponent);\n\n         if (exponent >= std::numeric_limits<T>::min_exponent10)\n         {\n            if (e < fract10_size)\n            {\n               if (exponent > 0)\n                  return T(d * fract10[e]);\n               else\n                  return T(d / fract10[e]);\n            }\n            else\n               return T(d * std::pow(10.0, 10.0 * exponent));\n         }\n         else\n         {\n                     d /= T(fract10[           -std::numeric_limits<T>::min_exponent10]);\n            return T(d /    fract10[-exponent + std::numeric_limits<T>::min_exponent10]);\n         }\n      }\n\n      template <typename Iterator, typename T>\n      inline bool string_to_type_converter_impl_ref(Iterator& itr, const Iterator end, T& result)\n      {\n         if (itr == end)\n            return false;\n\n         const bool negative = ('-' == (*itr));\n\n         if (negative || ('+' == (*itr)))\n         {\n            if (end == ++itr)\n               return false;\n         }\n\n         static const uchar_t zero = static_cast<uchar_t>('0');\n\n         while ((end != itr) && (zero == (*itr))) ++itr;\n\n         bool return_result = true;\n         unsigned int digit = 0;\n         const std::size_t length  = static_cast<std::size_t>(std::distance(itr,end));\n\n         if (length <= 4)\n         {\n            exprtk_disable_fallthrough_begin\n            switch (length)\n            {\n               #ifdef exprtk_use_lut\n\n               #define exprtk_process_digit                          \\\n               if ((digit = details::digit_table[(int)*itr++]) < 10) \\\n                  result = result * 10 + (digit);                    \\\n               else           
                                       \\\n               {                                                     \\\n                  return_result = false;                             \\\n                  break;                                             \\\n               }                                                     \\\n\n               #else\n\n               #define exprtk_process_digit         \\\n               if ((digit = (*itr++ - zero)) < 10)  \\\n                  result = result * T(10) + digit;  \\\n               else                                 \\\n               {                                    \\\n                  return_result = false;            \\\n                  break;                            \\\n               }                                    \\\n\n               #endif\n\n               case  4 : exprtk_process_digit\n               case  3 : exprtk_process_digit\n               case  2 : exprtk_process_digit\n               case  1 : if ((digit = (*itr - zero))>= 10) { digit = 0; return_result = false; }\n\n               #undef exprtk_process_digit\n            }\n            exprtk_disable_fallthrough_end\n         }\n         else\n            return_result = false;\n\n         if (length && return_result)\n         {\n            result = result * 10 + static_cast<T>(digit);\n            ++itr;\n         }\n\n         result = negative ? 
-result : result;\n         return return_result;\n      }\n\n      template <typename Iterator, typename T>\n      static inline bool parse_nan(Iterator& itr, const Iterator end, T& t)\n      {\n         typedef typename std::iterator_traits<Iterator>::value_type type;\n\n         static const std::size_t nan_length = 3;\n\n         if (std::distance(itr,end) != static_cast<int>(nan_length))\n            return false;\n\n         if (static_cast<type>('n') == (*itr))\n         {\n            if (\n                 (static_cast<type>('a') != *(itr + 1)) ||\n                 (static_cast<type>('n') != *(itr + 2))\n               )\n            {\n               return false;\n            }\n         }\n         else if (\n                   (static_cast<type>('A') != *(itr + 1)) ||\n                   (static_cast<type>('N') != *(itr + 2))\n                 )\n         {\n            return false;\n         }\n\n         t = std::numeric_limits<T>::quiet_NaN();\n\n         return true;\n      }\n\n      template <typename Iterator, typename T>\n      static inline bool parse_inf(Iterator& itr, const Iterator end, T& t, bool negative)\n      {\n         static const char_t inf_uc[] = \"INFINITY\";\n         static const char_t inf_lc[] = \"infinity\";\n         static const std::size_t inf_length = 8;\n\n         const std::size_t length = static_cast<std::size_t>(std::distance(itr,end));\n\n         if ((3 != length) && (inf_length != length))\n            return false;\n\n         char_cptr inf_itr = ('i' == (*itr)) ? 
inf_lc : inf_uc;\n\n         while (end != itr)\n         {\n            if (*inf_itr == static_cast<char>(*itr))\n            {\n               ++itr;\n               ++inf_itr;\n               continue;\n            }\n            else\n               return false;\n         }\n\n         if (negative)\n            t = -std::numeric_limits<T>::infinity();\n         else\n            t =  std::numeric_limits<T>::infinity();\n\n         return true;\n      }\n\n      template <typename Iterator, typename T>\n      inline bool string_to_real(Iterator& itr_external, const Iterator end, T& t, numeric::details::real_type_tag)\n      {\n         if (end == itr_external) return false;\n\n         Iterator itr = itr_external;\n\n         T d = T(0);\n\n         const bool negative = ('-' == (*itr));\n\n         if (negative || '+' == (*itr))\n         {\n            if (end == ++itr)\n               return false;\n         }\n\n         bool instate = false;\n\n         static const char zero = static_cast<uchar_t>('0');\n\n         #define parse_digit_1(d)          \\\n         if ((digit = (*itr - zero)) < 10) \\\n            { d = d * T(10) + digit; }     \\\n         else                              \\\n            { break; }                     \\\n         if (end == ++itr) break;          \\\n\n         #define parse_digit_2(d)          \\\n         if ((digit = (*itr - zero)) < 10) \\\n            { d = d * T(10) + digit; }     \\\n         else { break; }                   \\\n            ++itr;                         \\\n\n         if ('.' 
!= (*itr))\n         {\n            const Iterator curr = itr;\n\n            while ((end != itr) && (zero == (*itr))) ++itr;\n\n            unsigned int digit;\n\n            while (end != itr)\n            {\n               // Note: For 'physical' superscalar architectures it\n               // is advised that the following loop be: 4xPD1 and 1xPD2\n               #ifdef exprtk_enable_superscalar\n               parse_digit_1(d)\n               parse_digit_1(d)\n               #endif\n               parse_digit_1(d)\n               parse_digit_1(d)\n               parse_digit_2(d)\n            }\n\n            if (curr != itr) instate = true;\n         }\n\n         int exponent = 0;\n\n         if (end != itr)\n         {\n            if ('.' == (*itr))\n            {\n               const Iterator curr = ++itr;\n               unsigned int digit;\n               T tmp_d = T(0);\n\n               while (end != itr)\n               {\n                  #ifdef exprtk_enable_superscalar\n                  parse_digit_1(tmp_d)\n                  parse_digit_1(tmp_d)\n                  parse_digit_1(tmp_d)\n                  #endif\n                  parse_digit_1(tmp_d)\n                  parse_digit_1(tmp_d)\n                  parse_digit_2(tmp_d)\n               }\n\n               if (curr != itr)\n               {\n                  instate = true;\n                  d += compute_pow10(tmp_d,static_cast<int>(-std::distance(curr,itr)));\n               }\n\n               #undef parse_digit_1\n               #undef parse_digit_2\n            }\n\n            if (end != itr)\n            {\n               typename std::iterator_traits<Iterator>::value_type c = (*itr);\n\n               if (('e' == c) || ('E' == c))\n               {\n                  int exp = 0;\n\n                  if (!details::string_to_type_converter_impl_ref(++itr, end, exp))\n                  {\n                     if (end == itr)\n                        return false;\n                  
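   // Note (annotation): exponent digits failed to parse but input remains;\n                     // re-read the current character, since it may be a float\n                     // suffix ('f','F','l','L') or the start of inf/nan, both\n                     // handled below.\n                  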
   else\n                        c = (*itr);\n                  }\n\n                  exponent += exp;\n               }\n\n               if (end != itr)\n               {\n                  if (('f' == c) || ('F' == c) || ('l' == c) || ('L' == c))\n                     ++itr;\n                  else if ('#' == c)\n                  {\n                     if (end == ++itr)\n                        return false;\n                     else if (('I' <= (*itr)) && ((*itr) <= 'n'))\n                     {\n                        if (('i' == (*itr)) || ('I' == (*itr)))\n                        {\n                           return parse_inf(itr, end, t, negative);\n                        }\n                        else if (('n' == (*itr)) || ('N' == (*itr)))\n                        {\n                           return parse_nan(itr, end, t);\n                        }\n                        else\n                           return false;\n                     }\n                     else\n                        return false;\n                  }\n                  else if (('I' <= (*itr)) && ((*itr) <= 'n'))\n                  {\n                     if (('i' == (*itr)) || ('I' == (*itr)))\n                     {\n                        return parse_inf(itr, end, t, negative);\n                     }\n                     else if (('n' == (*itr)) || ('N' == (*itr)))\n                     {\n                        return parse_nan(itr, end, t);\n                     }\n                     else\n                        return false;\n                  }\n                  else\n                     return false;\n               }\n            }\n         }\n\n         if ((end != itr) || (!instate))\n            return false;\n         else if (exponent)\n            d = compute_pow10(d,exponent);\n\n         t = static_cast<T>((negative) ? 
-d : d);\n         return true;\n      }\n\n      template <typename T>\n      inline bool string_to_real(const std::string& s, T& t)\n      {\n         const typename numeric::details::number_type<T>::type num_type;\n\n         char_cptr begin = s.data();\n         char_cptr end   = s.data() + s.size();\n\n         return string_to_real(begin, end, t, num_type);\n      }\n\n      template <typename T>\n      struct functor_t\n      {\n         /*\n            Note: The following definitions for Type, may require tweaking\n                  based on the compiler and target architecture. The benchmark\n                  should provide enough information to make the right choice.\n         */\n         //typedef T Type;\n         //typedef const T Type;\n         typedef const T& Type;\n         typedef       T& RefType;\n         typedef T (*qfunc_t)(Type t0, Type t1, Type t2, Type t3);\n         typedef T (*tfunc_t)(Type t0, Type t1, Type t2);\n         typedef T (*bfunc_t)(Type t0, Type t1);\n         typedef T (*ufunc_t)(Type t0);\n      };\n\n   } // namespace details\n\n   namespace lexer\n   {\n      struct token\n      {\n         enum token_type\n         {\n            e_none        =   0, e_error       =   1, e_err_symbol  =   2,\n            e_err_number  =   3, e_err_string  =   4, e_err_sfunc   =   5,\n            e_eof         =   6, e_number      =   7, e_symbol      =   8,\n            e_string      =   9, e_assign      =  10, e_addass      =  11,\n            e_subass      =  12, e_mulass      =  13, e_divass      =  14,\n            e_modass      =  15, e_shr         =  16, e_shl         =  17,\n            e_lte         =  18, e_ne          =  19, e_gte         =  20,\n            e_swap        =  21, e_lt          = '<', e_gt          = '>',\n            e_eq          = '=', e_rbracket    = ')', e_lbracket    = '(',\n            e_rsqrbracket = ']', e_lsqrbracket = '[', e_rcrlbracket = '}',\n            e_lcrlbracket = '{', e_comma       = ',', 
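\n            // Note (annotation): single-character operator tokens are encoded\n            // as their ASCII values; multi-character and sentinel tokens use\n            // the small enumerators above.\n            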
e_add         = '+',\n            e_sub         = '-', e_div         = '/', e_mul         = '*',\n            e_mod         = '%', e_pow         = '^', e_colon       = ':',\n            e_ternary     = '?'\n         };\n\n         token()\n         : type(e_none),\n           value(\"\"),\n           position(std::numeric_limits<std::size_t>::max())\n         {}\n\n         void clear()\n         {\n            type     = e_none;\n            value    = \"\";\n            position = std::numeric_limits<std::size_t>::max();\n         }\n\n         template <typename Iterator>\n         inline token& set_operator(const token_type tt,\n                                    const Iterator begin, const Iterator end,\n                                    const Iterator base_begin = Iterator(0))\n         {\n            type = tt;\n            value.assign(begin,end);\n            if (base_begin)\n               position = static_cast<std::size_t>(std::distance(base_begin,begin));\n            return (*this);\n         }\n\n         template <typename Iterator>\n         inline token& set_symbol(const Iterator begin, const Iterator end, const Iterator base_begin = Iterator(0))\n         {\n            type = e_symbol;\n            value.assign(begin,end);\n            if (base_begin)\n               position = static_cast<std::size_t>(std::distance(base_begin,begin));\n            return (*this);\n         }\n\n         inline token& set_symbol(const std::string& s, const std::size_t p)\n         {\n            type     = e_symbol;\n            value    = s;\n            position = p;\n            return (*this);\n         }\n\n         template <typename Iterator>\n         inline token& set_numeric(const Iterator begin, const Iterator end, const Iterator base_begin = Iterator(0))\n         {\n            type = e_number;\n            value.assign(begin,end);\n            if (base_begin)\n               position = static_cast<std::size_t>(std::distance(base_begin,begin));\n 
           return (*this);\n         }\n\n         template <typename Iterator>\n         inline token& set_string(const Iterator begin, const Iterator end, const Iterator base_begin = Iterator(0))\n         {\n            type = e_string;\n            value.assign(begin,end);\n            if (base_begin)\n               position = static_cast<std::size_t>(std::distance(base_begin,begin));\n            return (*this);\n         }\n\n         inline token& set_string(const std::string& s, const std::size_t p)\n         {\n            type     = e_string;\n            value    = s;\n            position = p;\n            return (*this);\n         }\n\n         template <typename Iterator>\n         inline token& set_error(const token_type et,\n                                 const Iterator begin, const Iterator end,\n                                 const Iterator base_begin = Iterator(0))\n         {\n            if (\n                 (e_error      == et) ||\n                 (e_err_symbol == et) ||\n                 (e_err_number == et) ||\n                 (e_err_string == et) ||\n                 (e_err_sfunc  == et)\n               )\n            {\n               type = et;\n            }\n            else\n               type = e_error;\n\n            value.assign(begin,end);\n\n            if (base_begin)\n               position = static_cast<std::size_t>(std::distance(base_begin,begin));\n\n            return (*this);\n         }\n\n         static inline std::string to_str(token_type t)\n         {\n            switch (t)\n            {\n               case e_none        : return \"NONE\";\n               case e_error       : return \"ERROR\";\n               case e_err_symbol  : return \"ERROR_SYMBOL\";\n               case e_err_number  : return \"ERROR_NUMBER\";\n               case e_err_string  : return \"ERROR_STRING\";\n               case e_eof         : return \"EOF\";\n               case e_number      : return \"NUMBER\";\n               case 
e_symbol      : return \"SYMBOL\";\n               case e_string      : return \"STRING\";\n               case e_assign      : return \":=\";\n               case e_addass      : return \"+=\";\n               case e_subass      : return \"-=\";\n               case e_mulass      : return \"*=\";\n               case e_divass      : return \"/=\";\n               case e_modass      : return \"%=\";\n               case e_shr         : return \">>\";\n               case e_shl         : return \"<<\";\n               case e_lte         : return \"<=\";\n               case e_ne          : return \"!=\";\n               case e_gte         : return \">=\";\n               case e_lt          : return \"<\";\n               case e_gt          : return \">\";\n               case e_eq          : return \"=\";\n               case e_rbracket    : return \")\";\n               case e_lbracket    : return \"(\";\n               case e_rsqrbracket : return \"]\";\n               case e_lsqrbracket : return \"[\";\n               case e_rcrlbracket : return \"}\";\n               case e_lcrlbracket : return \"{\";\n               case e_comma       : return \",\";\n               case e_add         : return \"+\";\n               case e_sub         : return \"-\";\n               case e_div         : return \"/\";\n               case e_mul         : return \"*\";\n               case e_mod         : return \"%\";\n               case e_pow         : return \"^\";\n               case e_colon       : return \":\";\n               case e_ternary     : return \"?\";\n               case e_swap        : return \"<=>\";\n               default            : return \"UNKNOWN\";\n            }\n         }\n\n         inline bool is_error() const\n         {\n            return (\n                     (e_error      == type) ||\n                     (e_err_symbol == type) ||\n                     (e_err_number == type) ||\n                     (e_err_string == type) ||\n              
       (e_err_sfunc  == type)\n                   );\n         }\n\n         token_type type;\n         std::string value;\n         std::size_t position;\n      };\n\n      class generator\n      {\n      public:\n\n         typedef token token_t;\n         typedef std::vector<token_t> token_list_t;\n         typedef std::vector<token_t>::iterator token_list_itr_t;\n         typedef details::char_t char_t;\n\n         generator()\n         : base_itr_(0),\n           s_itr_   (0),\n           s_end_   (0)\n         {\n            clear();\n         }\n\n         inline void clear()\n         {\n            base_itr_ = 0;\n            s_itr_    = 0;\n            s_end_    = 0;\n            token_list_.clear();\n            token_itr_ = token_list_.end();\n            store_token_itr_ = token_list_.end();\n         }\n\n         inline bool process(const std::string& str)\n         {\n            base_itr_ = str.data();\n            s_itr_    = str.data();\n            s_end_    = str.data() + str.size();\n\n            eof_token_.set_operator(token_t::e_eof,s_end_,s_end_,base_itr_);\n            token_list_.clear();\n\n            while (!is_end(s_itr_))\n            {\n               scan_token();\n\n               if (!token_list_.empty() && token_list_.back().is_error())\n                  return false;\n            }\n\n            return true;\n         }\n\n         inline bool empty() const\n         {\n            return token_list_.empty();\n         }\n\n         inline std::size_t size() const\n         {\n            return token_list_.size();\n         }\n\n         inline void begin()\n         {\n            token_itr_ = token_list_.begin();\n            store_token_itr_ = token_list_.begin();\n         }\n\n         inline void store()\n         {\n            store_token_itr_ = token_itr_;\n         }\n\n         inline void restore()\n         {\n            token_itr_ = store_token_itr_;\n         }\n\n         inline token_t& next_token()\n      
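   // Note (annotation): returns the token under the cursor and advances the\n         // cursor; once the list is exhausted, the shared eof_token_ is\n         // returned (repeatedly, by reference).\n      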
   {\n            if (token_list_.end() != token_itr_)\n            {\n               return *token_itr_++;\n            }\n            else\n               return eof_token_;\n         }\n\n         inline token_t& peek_next_token()\n         {\n            if (token_list_.end() != token_itr_)\n            {\n               return *token_itr_;\n            }\n            else\n               return eof_token_;\n         }\n\n         inline token_t& operator[](const std::size_t& index)\n         {\n            if (index < token_list_.size())\n               return token_list_[index];\n            else\n               return eof_token_;\n         }\n\n         inline token_t operator[](const std::size_t& index) const\n         {\n            if (index < token_list_.size())\n               return token_list_[index];\n            else\n               return eof_token_;\n         }\n\n         inline bool finished() const\n         {\n            return (token_list_.end() == token_itr_);\n         }\n\n         inline void insert_front(token_t::token_type tk_type)\n         {\n            if (\n                 !token_list_.empty() &&\n                 (token_list_.end() != token_itr_)\n               )\n            {\n               token_t t = *token_itr_;\n\n               t.type     = tk_type;\n               token_itr_ = token_list_.insert(token_itr_,t);\n            }\n         }\n\n         inline std::string substr(const std::size_t& begin, const std::size_t& end)\n         {\n            const details::char_cptr begin_itr = ((base_itr_ + begin) < s_end_) ? (base_itr_ + begin) : s_end_;\n            const details::char_cptr end_itr   = ((base_itr_ +   end) < s_end_) ? 
(base_itr_ +   end) : s_end_;\n\n            return std::string(begin_itr,end_itr);\n         }\n\n         inline std::string remaining() const\n         {\n            if (finished())\n               return \"\";\n            else if (token_list_.begin() != token_itr_)\n               return std::string(base_itr_ + (token_itr_ - 1)->position,s_end_);\n            else\n               return std::string(base_itr_ + token_itr_->position,s_end_);\n         }\n\n      private:\n\n         inline bool is_end(details::char_cptr itr)\n         {\n            return (s_end_ == itr);\n         }\n\n         inline bool is_comment_start(details::char_cptr itr)\n         {\n            #ifndef exprtk_disable_comments\n            const char_t c0 = *(itr + 0);\n            const char_t c1 = *(itr + 1);\n\n            if ('#' == c0)\n               return true;\n            else if (!is_end(itr + 1))\n            {\n               if (('/' == c0) && ('/' == c1)) return true;\n               if (('/' == c0) && ('*' == c1)) return true;\n            }\n            #endif\n            return false;\n         }\n\n         inline void skip_whitespace()\n         {\n            while (!is_end(s_itr_) && details::is_whitespace(*s_itr_))\n            {\n               ++s_itr_;\n            }\n         }\n\n         inline void skip_comments()\n         {\n            #ifndef exprtk_disable_comments\n            // The following comment styles are supported:\n            // 1. // .... \\n\n            // 2. #  .... \\n\n            // 3. /* .... 
*/\n            struct test\n            {\n               static inline bool comment_start(const char_t c0, const char_t c1, int& mode, int& incr)\n               {\n                  mode = 0;\n                       if ('#' == c0)    { mode = 1; incr = 1; }\n                  else if ('/' == c0)\n                  {\n                          if ('/' == c1) { mode = 1; incr = 2; }\n                     else if ('*' == c1) { mode = 2; incr = 2; }\n                  }\n                  return (0 != mode);\n               }\n\n               static inline bool comment_end(const char_t c0, const char_t c1, int& mode)\n               {\n                  if (\n                       ((1 == mode) && ('\\n' == c0)) ||\n                       ((2 == mode) && ( '*' == c0) && ('/' == c1))\n                     )\n                  {\n                     mode = 0;\n                     return true;\n                  }\n                  else\n                     return false;\n               }\n            };\n\n            int mode      = 0;\n            int increment = 0;\n\n            if (is_end(s_itr_))\n               return;\n            else if (!test::comment_start(*s_itr_, *(s_itr_ + 1), mode, increment))\n               return;\n\n            details::char_cptr cmt_start = s_itr_;\n\n            s_itr_ += increment;\n\n            while (!is_end(s_itr_))\n            {\n               if ((1 == mode) && test::comment_end(*s_itr_, 0, mode))\n               {\n                  ++s_itr_;\n                  return;\n               }\n\n               if ((2 == mode))\n               {\n                  if (!is_end((s_itr_ + 1)) && test::comment_end(*s_itr_, *(s_itr_ + 1), mode))\n                  {\n                     s_itr_ += 2;\n                     return;\n                  }\n               }\n\n                ++s_itr_;\n            }\n\n            if (2 == mode)\n            {\n               token_t t;\n               t.set_error(token::e_error, 
cmt_start, cmt_start + mode, base_itr_);\n               token_list_.push_back(t);\n            }\n            #endif\n         }\n\n         inline void scan_token()\n         {\n            if (details::is_whitespace(*s_itr_))\n            {\n               skip_whitespace();\n               return;\n            }\n            else if (is_comment_start(s_itr_))\n            {\n               skip_comments();\n               return;\n            }\n            else if (details::is_operator_char(*s_itr_))\n            {\n               scan_operator();\n               return;\n            }\n            else if (details::is_letter(*s_itr_))\n            {\n               scan_symbol();\n               return;\n            }\n            else if (details::is_digit((*s_itr_)) || ('.' == (*s_itr_)))\n            {\n               scan_number();\n               return;\n            }\n            else if ('$' == (*s_itr_))\n            {\n               scan_special_function();\n               return;\n            }\n            #ifndef exprtk_disable_string_capabilities\n            else if ('\\'' == (*s_itr_))\n            {\n               scan_string();\n               return;\n            }\n            #endif\n            else if ('~' == (*s_itr_))\n            {\n               token_t t;\n               t.set_symbol(s_itr_, s_itr_ + 1, base_itr_);\n               token_list_.push_back(t);\n               ++s_itr_;\n               return;\n            }\n            else\n            {\n               token_t t;\n               t.set_error(token::e_error, s_itr_, s_itr_ + 2, base_itr_);\n               token_list_.push_back(t);\n               ++s_itr_;\n            }\n         }\n\n         inline void scan_operator()\n         {\n            token_t t;\n\n            const char_t c0 = s_itr_[0];\n\n            if (!is_end(s_itr_ + 1))\n            {\n               const char_t c1 = s_itr_[1];\n\n               if (!is_end(s_itr_ + 2))\n               {\n      
            const char_t c2 = s_itr_[2];\n\n                  if ((c0 == '<') && (c1 == '=') && (c2 == '>'))\n                  {\n                     t.set_operator(token_t::e_swap, s_itr_, s_itr_ + 3, base_itr_);\n                     token_list_.push_back(t);\n                     s_itr_ += 3;\n                     return;\n                  }\n               }\n\n               token_t::token_type ttype = token_t::e_none;\n\n                    if ((c0 == '<') && (c1 == '=')) ttype = token_t::e_lte;\n               else if ((c0 == '>') && (c1 == '=')) ttype = token_t::e_gte;\n               else if ((c0 == '<') && (c1 == '>')) ttype = token_t::e_ne;\n               else if ((c0 == '!') && (c1 == '=')) ttype = token_t::e_ne;\n               else if ((c0 == '=') && (c1 == '=')) ttype = token_t::e_eq;\n               else if ((c0 == ':') && (c1 == '=')) ttype = token_t::e_assign;\n               else if ((c0 == '<') && (c1 == '<')) ttype = token_t::e_shl;\n               else if ((c0 == '>') && (c1 == '>')) ttype = token_t::e_shr;\n               else if ((c0 == '+') && (c1 == '=')) ttype = token_t::e_addass;\n               else if ((c0 == '-') && (c1 == '=')) ttype = token_t::e_subass;\n               else if ((c0 == '*') && (c1 == '=')) ttype = token_t::e_mulass;\n               else if ((c0 == '/') && (c1 == '=')) ttype = token_t::e_divass;\n               else if ((c0 == '%') && (c1 == '=')) ttype = token_t::e_modass;\n\n               if (token_t::e_none != ttype)\n               {\n                  t.set_operator(ttype, s_itr_, s_itr_ + 2, base_itr_);\n                  token_list_.push_back(t);\n                  s_itr_ += 2;\n                  return;\n               }\n            }\n\n            if ('<' == c0)\n               t.set_operator(token_t::e_lt , s_itr_, s_itr_ + 1, base_itr_);\n            else if ('>' == c0)\n               t.set_operator(token_t::e_gt , s_itr_, s_itr_ + 1, base_itr_);\n            else if (';' == c0)\n               
t.set_operator(token_t::e_eof, s_itr_, s_itr_ + 1, base_itr_);\n            else if ('&' == c0)\n               t.set_symbol(s_itr_, s_itr_ + 1, base_itr_);\n            else if ('|' == c0)\n               t.set_symbol(s_itr_, s_itr_ + 1, base_itr_);\n            else\n               t.set_operator(token_t::token_type(c0), s_itr_, s_itr_ + 1, base_itr_);\n\n            token_list_.push_back(t);\n            ++s_itr_;\n         }\n\n         inline void scan_symbol()\n         {\n            details::char_cptr initial_itr = s_itr_;\n\n            bool escaped    = false;\n            bool has_escape = false;\n\n            while (!is_end(s_itr_))\n            {\n               if ('\\\\' == (*s_itr_))\n               {\n                  escaped    = true;\n                  has_escape = true;\n               }\n               else if (escaped)\n               {\n                  escaped = false;\n               }\n               else if (!details::is_letter_or_digit(*s_itr_) && ('_' != (*s_itr_)))\n               {\n                  if ('.' != (*s_itr_))\n                     break;\n                  /*\n                     Permit symbols that contain a 'dot'\n                     Allowed   : abc.xyz, a123.xyz, abc.123, abc_.xyz a123_.xyz abc._123\n                     Disallowed: .abc, abc.<white-space>, abc.<eof>, abc.<operator +,-,*,/...>\n                  */\n                  if (\n                       (s_itr_ != initial_itr)                     &&\n                       !is_end(s_itr_ + 1)                         &&\n                       !details::is_letter_or_digit(*(s_itr_ + 1)) &&\n                       ('_' != (*(s_itr_ + 1)))\n                     )\n                     break;\n               }\n\n               ++s_itr_;\n            }\n\n            token_t t;\n\n            if (!has_escape)\n               t.set_symbol(initial_itr,s_itr_,base_itr_);\n            else\n            {\n               std::string parsed_string(initial_itr,s_itr_);\n\n               details::cleanup_escapes(parsed_string);\n               
t.set_symbol(parsed_string, \n                    static_cast<std::size_t>(std::distance(base_itr_,initial_itr)));\n            }\n\n            token_list_.push_back(t);\n         }\n\n         inline void scan_number()\n         {\n            /*\n               Attempt to match a valid numeric value in one of the following formats:\n               (01) 123456\n               (02) 123456.\n               (03) 123.456\n               (04) 123.456e3\n               (05) 123.456E3\n               (06) 123.456e+3\n               (07) 123.456E+3\n               (08) 123.456e-3\n               (09) 123.456E-3\n               (10) .1234\n               (11) .1234e3\n               (12) .1234E+3\n               (13) .1234e+3\n               (14) .1234E-3\n               (15) .1234e-3\n            */\n\n            details::char_cptr initial_itr = s_itr_;\n            bool dot_found                 = false;\n            bool e_found                   = false;\n            bool post_e_sign_found         = false;\n            bool post_e_digit_found        = false;\n            token_t t;\n\n            while (!is_end(s_itr_))\n            {\n               if ('.' 
== (*s_itr_))\n               {\n                  if (dot_found)\n                  {\n                     t.set_error(token::e_err_number, initial_itr, s_itr_, base_itr_);\n                     token_list_.push_back(t);\n                     return;\n                  }\n\n                  dot_found = true;\n                  ++s_itr_;\n\n                  continue;\n               }\n               else if ('e' == std::tolower(static_cast<unsigned char>(*s_itr_)))\n               {\n                  if (is_end(s_itr_ + 1))\n                  {\n                     t.set_error(token::e_err_number, initial_itr, s_itr_, base_itr_);\n                     token_list_.push_back(t);\n\n                     return;\n                  }\n\n                  // Safe to read the character following the exponent marker now\n                  const char_t c = *(s_itr_ + 1);\n\n                  if (\n                       ('+' != c) &&\n                       ('-' != c) &&\n                       !details::is_digit(c)\n                     )\n                  {\n                     t.set_error(token::e_err_number, initial_itr, s_itr_, base_itr_);\n                     token_list_.push_back(t);\n\n                     return;\n                  }\n\n                  e_found = true;\n                  ++s_itr_;\n\n                  continue;\n               }\n               else if (e_found && details::is_sign(*s_itr_) && !post_e_digit_found)\n               {\n                  if (post_e_sign_found)\n                  {\n                     t.set_error(token::e_err_number, initial_itr, s_itr_, base_itr_);\n                     token_list_.push_back(t);\n\n                     return;\n                  }\n\n                  post_e_sign_found = true;\n                  ++s_itr_;\n\n                  continue;\n               }\n               else if (e_found && details::is_digit(*s_itr_))\n               {\n                  post_e_digit_found = true;\n                  ++s_itr_;\n\n                  continue;\n               }\n               else 
if (('.' != (*s_itr_)) && !details::is_digit(*s_itr_))\n                  break;\n               else\n                  ++s_itr_;\n            }\n\n            t.set_numeric(initial_itr, s_itr_, base_itr_);\n            token_list_.push_back(t);\n\n            return;\n         }\n\n         inline void scan_special_function()\n         {\n            details::char_cptr initial_itr = s_itr_;\n            token_t t;\n\n            // $fdd(x,x,x) = at least 11 chars\n            if (std::distance(s_itr_,s_end_) < 11)\n            {\n               t.set_error(token::e_err_sfunc, initial_itr, s_itr_, base_itr_);\n               token_list_.push_back(t);\n\n               return;\n            }\n\n            if (\n                 !(('$' == *s_itr_)                       &&\n                   (details::imatch  ('f',*(s_itr_ + 1))) &&\n                   (details::is_digit(*(s_itr_ + 2)))     &&\n                   (details::is_digit(*(s_itr_ + 3))))\n               )\n            {\n               t.set_error(token::e_err_sfunc, initial_itr, s_itr_, base_itr_);\n               token_list_.push_back(t);\n\n               return;\n            }\n\n            s_itr_ += 4; // $fdd = 4chars\n\n            t.set_symbol(initial_itr, s_itr_, base_itr_);\n            token_list_.push_back(t);\n\n            return;\n         }\n\n         #ifndef exprtk_disable_string_capabilities\n         inline void scan_string()\n         {\n            details::char_cptr initial_itr = s_itr_ + 1;\n            token_t t;\n\n            if (std::distance(s_itr_,s_end_) < 2)\n            {\n               t.set_error(token::e_err_string, s_itr_, s_end_, base_itr_);\n               token_list_.push_back(t);\n               return;\n            }\n\n            ++s_itr_;\n\n            bool escaped_found = false;\n            bool escaped = false;\n\n            while (!is_end(s_itr_))\n            {\n               if (!escaped && ('\\\\' == *s_itr_))\n               {\n                  
escaped_found = true;\n                  escaped = true;\n                  ++s_itr_;\n\n                  continue;\n               }\n               else if (!escaped)\n               {\n                  if ('\\'' == *s_itr_)\n                     break;\n               }\n               else if (escaped)\n               {\n                  if (!is_end(s_itr_) && ('0' == *(s_itr_)))\n                  {\n                     /*\n                        Note: The following 'awkward' conditional is\n                              due to various broken msvc compilers.\n                     */\n                     #if defined(_MSC_VER) && (_MSC_VER == 1600)\n                     const bool within_range = !is_end(s_itr_ + 2) &&\n                                               !is_end(s_itr_ + 3) ;\n                     #else\n                     const bool within_range = !is_end(s_itr_ + 1) &&\n                                               !is_end(s_itr_ + 2) &&\n                                               !is_end(s_itr_ + 3) ;\n                     #endif\n\n                     // Short-circuit on within_range so no character past s_end_ is read\n                     const bool x_separator  = within_range &&\n                                               (('x' == *(s_itr_ + 1)) ||\n                                                ('X' == *(s_itr_ + 1))) ;\n\n                     const bool both_digits  = within_range &&\n                                               details::is_hex_digit(*(s_itr_ + 2)) &&\n                                               details::is_hex_digit(*(s_itr_ + 3)) ;\n\n                     if (!within_range || !x_separator || !both_digits)\n                     {\n                        t.set_error(token::e_err_string, initial_itr, s_itr_, base_itr_);\n                        token_list_.push_back(t);\n\n                        return;\n                     }\n                     else\n                        s_itr_ += 3;\n                  }\n\n                  escaped = false;\n               }\n\n               ++s_itr_;\n            }\n\n            if (is_end(s_itr_))\n            {\n               t.set_error(token::e_err_string, initial_itr, 
s_itr_, base_itr_);\n               token_list_.push_back(t);\n\n               return;\n            }\n\n            if (!escaped_found)\n               t.set_string(initial_itr, s_itr_, base_itr_);\n            else\n            {\n               std::string parsed_string(initial_itr,s_itr_);\n\n               details::cleanup_escapes(parsed_string);\n\n               t.set_string(\n                    parsed_string,\n                    static_cast<std::size_t>(std::distance(base_itr_,initial_itr)));\n            }\n\n            token_list_.push_back(t);\n            ++s_itr_;\n\n            return;\n         }\n         #endif\n\n      private:\n\n         token_list_t     token_list_;\n         token_list_itr_t token_itr_;\n         token_list_itr_t store_token_itr_;\n         token_t eof_token_;\n         details::char_cptr base_itr_;\n         details::char_cptr s_itr_;\n         details::char_cptr s_end_;\n\n         friend class token_scanner;\n         friend class token_modifier;\n         friend class token_inserter;\n         friend class token_joiner;\n      };\n\n      class helper_interface\n      {\n      public:\n\n         virtual void init()                     {              }\n         virtual void reset()                    {              }\n         virtual bool result()                   { return true; }\n         virtual std::size_t process(generator&) { return 0;    }\n         virtual ~helper_interface()             {              }\n      };\n\n      class token_scanner : public helper_interface\n      {\n      public:\n\n         virtual ~token_scanner()\n         {}\n\n         explicit token_scanner(const std::size_t& stride)\n         : stride_(stride)\n         {\n            if (stride > 4)\n            {\n               throw std::invalid_argument(\"token_scanner() - Invalid stride value\");\n            }\n         }\n\n         inline std::size_t process(generator& g)\n         {\n            if (g.token_list_.size() >= 
stride_)\n            {\n               for (std::size_t i = 0; i < (g.token_list_.size() - stride_ + 1); ++i)\n               {\n                  token t;\n\n                  switch (stride_)\n                  {\n                     case 1 :\n                              {\n                                 const token& t0 = g.token_list_[i];\n\n                                 if (!operator()(t0))\n                                 {\n                                    return i;\n                                 }\n                              }\n                              break;\n\n                     case 2 :\n                              {\n                                 const token& t0 = g.token_list_[i    ];\n                                 const token& t1 = g.token_list_[i + 1];\n\n                                 if (!operator()(t0, t1))\n                                 {\n                                    return i;\n                                 }\n                              }\n                              break;\n\n                     case 3 :\n                              {\n                                 const token& t0 = g.token_list_[i    ];\n                                 const token& t1 = g.token_list_[i + 1];\n                                 const token& t2 = g.token_list_[i + 2];\n\n                                 if (!operator()(t0, t1, t2))\n                                 {\n                                    return i;\n                                 }\n                              }\n                              break;\n\n                     case 4 :\n                              {\n                                 const token& t0 = g.token_list_[i    ];\n                                 const token& t1 = g.token_list_[i + 1];\n                                 const token& t2 = g.token_list_[i + 2];\n                                 const token& t3 = g.token_list_[i + 3];\n\n                              
   if (!operator()(t0, t1, t2, t3))\n                                 {\n                                    return i;\n                                 }\n                              }\n                              break;\n                  }\n               }\n            }\n\n            return (g.token_list_.size() - stride_ + 1);\n         }\n\n         virtual bool operator() (const token&)\n         {\n            return false;\n         }\n\n         virtual bool operator() (const token&, const token&)\n         {\n            return false;\n         }\n\n         virtual bool operator() (const token&, const token&, const token&)\n         {\n            return false;\n         }\n\n         virtual bool operator() (const token&, const token&, const token&, const token&)\n         {\n            return false;\n         }\n\n      private:\n\n         const std::size_t stride_;\n      };\n\n      class token_modifier : public helper_interface\n      {\n      public:\n\n         inline std::size_t process(generator& g)\n         {\n            std::size_t changes = 0;\n\n            for (std::size_t i = 0; i < g.token_list_.size(); ++i)\n            {\n               if (modify(g.token_list_[i])) changes++;\n            }\n\n            return changes;\n         }\n\n         virtual bool modify(token& t) = 0;\n      };\n\n      class token_inserter : public helper_interface\n      {\n      public:\n\n         explicit token_inserter(const std::size_t& stride)\n         : stride_(stride)\n         {\n            if (stride > 5)\n            {\n               throw std::invalid_argument(\"token_inserter() - Invalid stride value\");\n            }\n         }\n\n         inline std::size_t process(generator& g)\n         {\n            if (g.token_list_.empty())\n               return 0;\n            else if (g.token_list_.size() < stride_)\n               return 0;\n\n            std::size_t changes = 0;\n\n            for (std::size_t i = 0; i < 
(g.token_list_.size() - stride_ + 1); ++i)\n            {\n               int insert_index = -1;\n               token t;\n\n               switch (stride_)\n               {\n                  case 1 : insert_index = insert(g.token_list_[i],t);\n                           break;\n\n                  case 2 : insert_index = insert(g.token_list_[i], g.token_list_[i + 1], t);\n                           break;\n\n                  case 3 : insert_index = insert(g.token_list_[i], g.token_list_[i + 1], g.token_list_[i + 2], t);\n                           break;\n\n                  case 4 : insert_index = insert(g.token_list_[i], g.token_list_[i + 1], g.token_list_[i + 2], g.token_list_[i + 3], t);\n                           break;\n\n                  case 5 : insert_index = insert(g.token_list_[i], g.token_list_[i + 1], g.token_list_[i + 2], g.token_list_[i + 3], g.token_list_[i + 4], t);\n                           break;\n               }\n\n               typedef std::iterator_traits<generator::token_list_t::iterator>::difference_type diff_t;\n\n               if ((insert_index >= 0) && (insert_index <= (static_cast<int>(stride_) + 1)))\n               {\n                  g.token_list_.insert(\n                     g.token_list_.begin() + static_cast<diff_t>(i + static_cast<std::size_t>(insert_index)), t);\n\n                  changes++;\n               }\n            }\n\n            return changes;\n         }\n\n         #define token_inserter_empty_body \\\n         {                                 \\\n            return -1;                     \\\n         }                                 \\\n\n         inline virtual int insert(const token&, token&)\n         token_inserter_empty_body\n\n         inline virtual int insert(const token&, const token&, token&)\n         token_inserter_empty_body\n\n         inline virtual int insert(const token&, const token&, const token&, token&)\n         token_inserter_empty_body\n\n         inline virtual int 
insert(const token&, const token&, const token&, const token&, token&)\n         token_inserter_empty_body\n\n         inline virtual int insert(const token&, const token&, const token&, const token&, const token&, token&)\n         token_inserter_empty_body\n\n         #undef token_inserter_empty_body\n\n      private:\n\n         const std::size_t stride_;\n      };\n\n      class token_joiner : public helper_interface\n      {\n      public:\n\n         token_joiner(const std::size_t& stride)\n         : stride_(stride)\n         {}\n\n         inline std::size_t process(generator& g)\n         {\n            if (g.token_list_.empty())\n               return 0;\n\n            switch (stride_)\n            {\n               case 2  : return process_stride_2(g);\n               case 3  : return process_stride_3(g);\n               default : return 0;\n            }\n         }\n\n         virtual bool join(const token&, const token&, token&)               { return false; }\n         virtual bool join(const token&, const token&, const token&, token&) { return false; }\n\n      private:\n\n         inline std::size_t process_stride_2(generator& g)\n         {\n            typedef std::iterator_traits<generator::token_list_t::iterator>::difference_type diff_t;\n\n            if (g.token_list_.size() < 2)\n               return 0;\n\n            std::size_t changes = 0;\n\n            for (std::size_t i = 0; i < (g.token_list_.size() - 1); ++i)\n            {\n               token t;\n\n               while (join(g[i], g[i + 1], t))\n               {\n                  g.token_list_[i] = t;\n\n                  g.token_list_.erase(g.token_list_.begin() + static_cast<diff_t>(i + 1));\n\n                  ++changes;\n               }\n            }\n\n            return changes;\n         }\n\n         inline std::size_t process_stride_3(generator& g)\n         {\n            typedef std::iterator_traits<generator::token_list_t::iterator>::difference_type diff_t;\n\n    
        if (g.token_list_.size() < 3)\n               return 0;\n\n            std::size_t changes = 0;\n\n            for (std::size_t i = 0; i < (g.token_list_.size() - 2); ++i)\n            {\n               token t;\n\n               while (join(g[i], g[i + 1], g[i + 2], t))\n               {\n                  g.token_list_[i] = t;\n\n                  g.token_list_.erase(g.token_list_.begin() + static_cast<diff_t>(i + 1),\n                                      g.token_list_.begin() + static_cast<diff_t>(i + 3));\n                  ++changes;\n               }\n            }\n\n            return changes;\n         }\n\n         const std::size_t stride_;\n      };\n\n      namespace helper\n      {\n\n         inline void dump(lexer::generator& generator)\n         {\n            for (std::size_t i = 0; i < generator.size(); ++i)\n            {\n               lexer::token t = generator[i];\n               printf(\"Token[%02d] @ %03d  %6s  -->  '%s'\\n\",\n                      static_cast<int>(i),\n                      static_cast<int>(t.position),\n                      t.to_str(t.type).c_str(),\n                      t.value.c_str());\n            }\n         }\n\n         class commutative_inserter : public lexer::token_inserter\n         {\n         public:\n\n            using lexer::token_inserter::insert;\n\n            commutative_inserter()\n            : lexer::token_inserter(2)\n            {}\n\n            inline void ignore_symbol(const std::string& symbol)\n            {\n               ignore_set_.insert(symbol);\n            }\n\n            inline int insert(const lexer::token& t0, const lexer::token& t1, lexer::token& new_token)\n            {\n               bool match         = false;\n               new_token.type     = lexer::token::e_mul;\n               new_token.value    = \"*\";\n               new_token.position = t1.position;\n\n               if (t0.type == lexer::token::e_symbol)\n               {\n                  if 
(ignore_set_.end() != ignore_set_.find(t0.value))\n                  {\n                     return -1;\n                  }\n                  else if (!t0.value.empty() && ('$' == t0.value[0]))\n                  {\n                     return -1;\n                  }\n               }\n\n               if (t1.type == lexer::token::e_symbol)\n               {\n                  if (ignore_set_.end() != ignore_set_.find(t1.value))\n                  {\n                     return -1;\n                  }\n               }\n                    if ((t0.type == lexer::token::e_number     ) && (t1.type == lexer::token::e_symbol     )) match = true;\n               else if ((t0.type == lexer::token::e_number     ) && (t1.type == lexer::token::e_lbracket   )) match = true;\n               else if ((t0.type == lexer::token::e_number     ) && (t1.type == lexer::token::e_lcrlbracket)) match = true;\n               else if ((t0.type == lexer::token::e_number     ) && (t1.type == lexer::token::e_lsqrbracket)) match = true;\n               else if ((t0.type == lexer::token::e_symbol     ) && (t1.type == lexer::token::e_number     )) match = true;\n               else if ((t0.type == lexer::token::e_rbracket   ) && (t1.type == lexer::token::e_number     )) match = true;\n               else if ((t0.type == lexer::token::e_rcrlbracket) && (t1.type == lexer::token::e_number     )) match = true;\n               else if ((t0.type == lexer::token::e_rsqrbracket) && (t1.type == lexer::token::e_number     )) match = true;\n               else if ((t0.type == lexer::token::e_rbracket   ) && (t1.type == lexer::token::e_symbol     )) match = true;\n               else if ((t0.type == lexer::token::e_rcrlbracket) && (t1.type == lexer::token::e_symbol     )) match = true;\n               else if ((t0.type == lexer::token::e_rsqrbracket) && (t1.type == lexer::token::e_symbol     )) match = true;\n\n               return (match) ? 
1 : -1;\n            }\n\n         private:\n\n            std::set<std::string,details::ilesscompare> ignore_set_;\n         };\n\n         class operator_joiner : public token_joiner\n         {\n         public:\n\n            operator_joiner(const std::size_t& stride)\n            : token_joiner(stride)\n            {}\n\n            inline bool join(const lexer::token& t0, const lexer::token& t1, lexer::token& t)\n            {\n               // ': =' --> ':='\n               if ((t0.type == lexer::token::e_colon) && (t1.type == lexer::token::e_eq))\n               {\n                  t.type     = lexer::token::e_assign;\n                  t.value    = \":=\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '+ =' --> '+='\n               else if ((t0.type == lexer::token::e_add) && (t1.type == lexer::token::e_eq))\n               {\n                  t.type     = lexer::token::e_addass;\n                  t.value    = \"+=\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '- =' --> '-='\n               else if ((t0.type == lexer::token::e_sub) && (t1.type == lexer::token::e_eq))\n               {\n                  t.type     = lexer::token::e_subass;\n                  t.value    = \"-=\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '* =' --> '*='\n               else if ((t0.type == lexer::token::e_mul) && (t1.type == lexer::token::e_eq))\n               {\n                  t.type     = lexer::token::e_mulass;\n                  t.value    = \"*=\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '/ =' --> '/='\n               else if ((t0.type == lexer::token::e_div) && (t1.type == lexer::token::e_eq))\n               {\n                  t.type     = lexer::token::e_divass;\n       
           t.value    = \"/=\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '% =' --> '%='\n               else if ((t0.type == lexer::token::e_mod) && (t1.type == lexer::token::e_eq))\n               {\n                  t.type     = lexer::token::e_modass;\n                  t.value    = \"%=\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '> =' --> '>='\n               else if ((t0.type == lexer::token::e_gt) && (t1.type == lexer::token::e_eq))\n               {\n                  t.type     = lexer::token::e_gte;\n                  t.value    = \">=\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '< =' --> '<='\n               else if ((t0.type == lexer::token::e_lt) && (t1.type == lexer::token::e_eq))\n               {\n                  t.type     = lexer::token::e_lte;\n                  t.value    = \"<=\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '= =' --> '=='\n               else if ((t0.type == lexer::token::e_eq) && (t1.type == lexer::token::e_eq))\n               {\n                  t.type     = lexer::token::e_eq;\n                  t.value    = \"==\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '! 
=' --> '!='\n               else if ((static_cast<char>(t0.type) == '!') && (t1.type == lexer::token::e_eq))\n               {\n                  t.type     = lexer::token::e_ne;\n                  t.value    = \"!=\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '< >' --> '<>'\n               else if ((t0.type == lexer::token::e_lt) && (t1.type == lexer::token::e_gt))\n               {\n                  t.type     = lexer::token::e_ne;\n                  t.value    = \"<>\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '<= >' --> '<=>'\n               else if ((t0.type == lexer::token::e_lte) && (t1.type == lexer::token::e_gt))\n               {\n                  t.type     = lexer::token::e_swap;\n                  t.value    = \"<=>\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '+ -' --> '-'\n               else if ((t0.type == lexer::token::e_add) && (t1.type == lexer::token::e_sub))\n               {\n                  t.type     = lexer::token::e_sub;\n                  t.value    = \"-\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '- +' --> '-'\n               else if ((t0.type == lexer::token::e_sub) && (t1.type == lexer::token::e_add))\n               {\n                  t.type     = lexer::token::e_sub;\n                  t.value    = \"-\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               // '- -' --> '+'\n               else if ((t0.type == lexer::token::e_sub) && (t1.type == lexer::token::e_sub))\n               {\n                  /*\n                     Note: May need to reconsider this when wanting to implement\n                     pre/postfix decrement operator\n                  */\n              
    t.type     = lexer::token::e_add;\n                  t.value    = \"+\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               else\n                  return false;\n            }\n\n            inline bool join(const lexer::token& t0, const lexer::token& t1, const lexer::token& t2, lexer::token& t)\n            {\n               // '[ * ]' --> '[*]'\n               if (\n                    (t0.type == lexer::token::e_lsqrbracket) &&\n                    (t1.type == lexer::token::e_mul        ) &&\n                    (t2.type == lexer::token::e_rsqrbracket)\n                  )\n               {\n                  t.type     = lexer::token::e_symbol;\n                  t.value    = \"[*]\";\n                  t.position = t0.position;\n\n                  return true;\n               }\n               else\n                  return false;\n            }\n         };\n\n         class bracket_checker : public lexer::token_scanner\n         {\n         public:\n\n            using lexer::token_scanner::operator();\n\n            bracket_checker()\n            : token_scanner(1),\n              state_(true)\n            {}\n\n            bool result()\n            {\n               if (!stack_.empty())\n               {\n                  lexer::token t;\n                  t.value      = stack_.top().first;\n                  t.position   = stack_.top().second;\n                  error_token_ = t;\n                  state_       = false;\n\n                  return false;\n               }\n               else\n                  return state_;\n            }\n\n            lexer::token error_token()\n            {\n               return error_token_;\n            }\n\n            void reset()\n            {\n               // Why? 
because msvc doesn't support swap properly.\n               stack_ = std::stack<std::pair<char,std::size_t> >();\n               state_ = true;\n               error_token_.clear();\n            }\n\n            bool operator() (const lexer::token& t)\n            {\n               if (\n                    !t.value.empty()                       &&\n                    (lexer::token::e_string != t.type)     &&\n                    (lexer::token::e_symbol != t.type)     &&\n                    exprtk::details::is_bracket(t.value[0])\n                  )\n               {\n                  details::char_t c = t.value[0];\n\n                       if (t.type == lexer::token::e_lbracket   ) stack_.push(std::make_pair(')',t.position));\n                  else if (t.type == lexer::token::e_lcrlbracket) stack_.push(std::make_pair('}',t.position));\n                  else if (t.type == lexer::token::e_lsqrbracket) stack_.push(std::make_pair(']',t.position));\n                  else if (exprtk::details::is_right_bracket(c))\n                  {\n                     if (stack_.empty())\n                     {\n                        state_       = false;\n                        error_token_ = t;\n\n                        return false;\n                     }\n                     else if (c != stack_.top().first)\n                     {\n                        state_       = false;\n                        error_token_ = t;\n\n                        return false;\n                     }\n                     else\n                        stack_.pop();\n                  }\n               }\n\n               return true;\n            }\n\n         private:\n\n            bool state_;\n            std::stack<std::pair<char,std::size_t> > stack_;\n            lexer::token error_token_;\n         };\n\n         class numeric_checker : public lexer::token_scanner\n         {\n         public:\n\n            using lexer::token_scanner::operator();\n\n            
numeric_checker()\n            : token_scanner (1),\n              current_index_(0)\n            {}\n\n            bool result()\n            {\n               return error_list_.empty();\n            }\n\n            void reset()\n            {\n               error_list_.clear();\n               current_index_ = 0;\n            }\n\n            bool operator() (const lexer::token& t)\n            {\n               if (token::e_number == t.type)\n               {\n                  double v;\n\n                  if (!exprtk::details::string_to_real(t.value,v))\n                  {\n                     error_list_.push_back(current_index_);\n                  }\n               }\n\n               ++current_index_;\n\n               return true;\n            }\n\n            std::size_t error_count() const\n            {\n               return error_list_.size();\n            }\n\n            std::size_t error_index(const std::size_t& i)\n            {\n               if (i < error_list_.size())\n                  return error_list_[i];\n               else\n                  return std::numeric_limits<std::size_t>::max();\n            }\n\n            void clear_errors()\n            {\n               error_list_.clear();\n            }\n\n         private:\n\n            std::size_t current_index_;\n            std::vector<std::size_t> error_list_;\n         };\n\n         class symbol_replacer : public lexer::token_modifier\n         {\n         private:\n\n            typedef std::map<std::string,std::pair<std::string,token::token_type>,details::ilesscompare> replace_map_t;\n\n         public:\n\n            bool remove(const std::string& target_symbol)\n            {\n               const replace_map_t::iterator itr = replace_map_.find(target_symbol);\n\n               if (replace_map_.end() == itr)\n                  return false;\n\n               replace_map_.erase(itr);\n\n               return true;\n            }\n\n            bool add_replace(const 
std::string& target_symbol,\n                             const std::string& replace_symbol,\n                             const lexer::token::token_type token_type = lexer::token::e_symbol)\n            {\n               const replace_map_t::iterator itr = replace_map_.find(target_symbol);\n\n               if (replace_map_.end() != itr)\n               {\n                  return false;\n               }\n\n               replace_map_[target_symbol] = std::make_pair(replace_symbol,token_type);\n\n               return true;\n            }\n\n            void clear()\n            {\n               replace_map_.clear();\n            }\n\n         private:\n\n            bool modify(lexer::token& t)\n            {\n               if (lexer::token::e_symbol == t.type)\n               {\n                  if (replace_map_.empty())\n                     return false;\n\n                  const replace_map_t::iterator itr = replace_map_.find(t.value);\n\n                  if (replace_map_.end() != itr)\n                  {\n                     t.value = itr->second.first;\n                     t.type  = itr->second.second;\n\n                     return true;\n                  }\n               }\n\n               return false;\n            }\n\n            replace_map_t replace_map_;\n         };\n\n         class sequence_validator : public lexer::token_scanner\n         {\n         private:\n\n            typedef std::pair<lexer::token::token_type,lexer::token::token_type> token_pair_t;\n            typedef std::set<token_pair_t> set_t;\n\n         public:\n\n            using lexer::token_scanner::operator();\n\n            sequence_validator()\n            : lexer::token_scanner(2)\n            {\n               add_invalid(lexer::token::e_number ,lexer::token::e_number );\n               add_invalid(lexer::token::e_string ,lexer::token::e_string );\n               add_invalid(lexer::token::e_number ,lexer::token::e_string );\n               
add_invalid(lexer::token::e_string ,lexer::token::e_number );\n               add_invalid_set1(lexer::token::e_assign );\n               add_invalid_set1(lexer::token::e_shr    );\n               add_invalid_set1(lexer::token::e_shl    );\n               add_invalid_set1(lexer::token::e_lte    );\n               add_invalid_set1(lexer::token::e_ne     );\n               add_invalid_set1(lexer::token::e_gte    );\n               add_invalid_set1(lexer::token::e_lt     );\n               add_invalid_set1(lexer::token::e_gt     );\n               add_invalid_set1(lexer::token::e_eq     );\n               add_invalid_set1(lexer::token::e_comma  );\n               add_invalid_set1(lexer::token::e_add    );\n               add_invalid_set1(lexer::token::e_sub    );\n               add_invalid_set1(lexer::token::e_div    );\n               add_invalid_set1(lexer::token::e_mul    );\n               add_invalid_set1(lexer::token::e_mod    );\n               add_invalid_set1(lexer::token::e_pow    );\n               add_invalid_set1(lexer::token::e_colon  );\n               add_invalid_set1(lexer::token::e_ternary);\n            }\n\n            bool result()\n            {\n               return error_list_.empty();\n            }\n\n            bool operator() (const lexer::token& t0, const lexer::token& t1)\n            {\n               const set_t::value_type p = std::make_pair(t0.type,t1.type);\n\n               if (invalid_bracket_check(t0.type,t1.type))\n               {\n                  error_list_.push_back(std::make_pair(t0,t1));\n               }\n               else if (invalid_comb_.find(p) != invalid_comb_.end())\n               {\n                  error_list_.push_back(std::make_pair(t0,t1));\n               }\n\n               return true;\n            }\n\n            std::size_t error_count() const\n            {\n               return error_list_.size();\n            }\n\n            std::pair<lexer::token,lexer::token> error(const std::size_t index)\n 
           {\n               if (index < error_list_.size())\n               {\n                  return error_list_[index];\n               }\n               else\n               {\n                  static const lexer::token error_token;\n                  return std::make_pair(error_token,error_token);\n               }\n            }\n\n            void clear_errors()\n            {\n               error_list_.clear();\n            }\n\n         private:\n\n            void add_invalid(lexer::token::token_type base, lexer::token::token_type t)\n            {\n               invalid_comb_.insert(std::make_pair(base,t));\n            }\n\n            void add_invalid_set1(lexer::token::token_type t)\n            {\n               add_invalid(t,lexer::token::e_assign);\n               add_invalid(t,lexer::token::e_shr   );\n               add_invalid(t,lexer::token::e_shl   );\n               add_invalid(t,lexer::token::e_lte   );\n               add_invalid(t,lexer::token::e_ne    );\n               add_invalid(t,lexer::token::e_gte   );\n               add_invalid(t,lexer::token::e_lt    );\n               add_invalid(t,lexer::token::e_gt    );\n               add_invalid(t,lexer::token::e_eq    );\n               add_invalid(t,lexer::token::e_comma );\n               add_invalid(t,lexer::token::e_div   );\n               add_invalid(t,lexer::token::e_mul   );\n               add_invalid(t,lexer::token::e_mod   );\n               add_invalid(t,lexer::token::e_pow   );\n               add_invalid(t,lexer::token::e_colon );\n            }\n\n            bool invalid_bracket_check(lexer::token::token_type base, lexer::token::token_type t)\n            {\n               if (details::is_right_bracket(static_cast<char>(base)))\n               {\n                  switch (t)\n                  {\n                     case lexer::token::e_assign : return (']' != base);\n                     case lexer::token::e_string : return true;\n                     default         
            : return false;\n                  }\n               }\n               else if (details::is_left_bracket(static_cast<char>(base)))\n               {\n                  if (details::is_right_bracket(static_cast<char>(t)))\n                     return false;\n                  else if (details::is_left_bracket(static_cast<char>(t)))\n                     return false;\n                  else\n                  {\n                     switch (t)\n                     {\n                        case lexer::token::e_number  : return false;\n                        case lexer::token::e_symbol  : return false;\n                        case lexer::token::e_string  : return false;\n                        case lexer::token::e_add     : return false;\n                        case lexer::token::e_sub     : return false;\n                        case lexer::token::e_colon   : return false;\n                        case lexer::token::e_ternary : return false;\n                        default                      : return true ;\n                     }\n                  }\n               }\n               else if (details::is_right_bracket(static_cast<char>(t)))\n               {\n                  switch (base)\n                  {\n                     case lexer::token::e_number  : return false;\n                     case lexer::token::e_symbol  : return false;\n                     case lexer::token::e_string  : return false;\n                     case lexer::token::e_eof     : return false;\n                     case lexer::token::e_colon   : return false;\n                     case lexer::token::e_ternary : return false;\n                     default                      : return true ;\n                  }\n               }\n               else if (details::is_left_bracket(static_cast<char>(t)))\n               {\n                  switch (base)\n                  {\n                     case lexer::token::e_rbracket    : return true;\n                     
case lexer::token::e_rsqrbracket : return true;\n                     case lexer::token::e_rcrlbracket : return true;\n                     default                          : return false;\n                  }\n               }\n\n               return false;\n            }\n\n            set_t invalid_comb_;\n            std::vector<std::pair<lexer::token,lexer::token> > error_list_;\n         };\n\n         struct helper_assembly\n         {\n            inline bool register_scanner(lexer::token_scanner* scanner)\n            {\n               if (token_scanner_list.end() != std::find(token_scanner_list.begin(),\n                                                         token_scanner_list.end  (),\n                                                         scanner))\n               {\n                  return false;\n               }\n\n               token_scanner_list.push_back(scanner);\n\n               return true;\n            }\n\n            inline bool register_modifier(lexer::token_modifier* modifier)\n            {\n               if (token_modifier_list.end() != std::find(token_modifier_list.begin(),\n                                                          token_modifier_list.end  (),\n                                                          modifier))\n               {\n                  return false;\n               }\n\n               token_modifier_list.push_back(modifier);\n\n               return true;\n            }\n\n            inline bool register_joiner(lexer::token_joiner* joiner)\n            {\n               if (token_joiner_list.end() != std::find(token_joiner_list.begin(),\n                                                        token_joiner_list.end  (),\n                                                        joiner))\n               {\n                  return false;\n               }\n\n               token_joiner_list.push_back(joiner);\n\n               return true;\n            }\n\n            inline bool 
register_inserter(lexer::token_inserter* inserter)\n            {\n               if (token_inserter_list.end() != std::find(token_inserter_list.begin(),\n                                                          token_inserter_list.end  (),\n                                                          inserter))\n               {\n                  return false;\n               }\n\n               token_inserter_list.push_back(inserter);\n\n               return true;\n            }\n\n            inline bool run_modifiers(lexer::generator& g)\n            {\n               error_token_modifier = reinterpret_cast<lexer::token_modifier*>(0);\n\n               for (std::size_t i = 0; i < token_modifier_list.size(); ++i)\n               {\n                  lexer::token_modifier& modifier = (*token_modifier_list[i]);\n\n                  modifier.reset();\n                  modifier.process(g);\n\n                  if (!modifier.result())\n                  {\n                     error_token_modifier = token_modifier_list[i];\n\n                     return false;\n                  }\n               }\n\n               return true;\n            }\n\n            inline bool run_joiners(lexer::generator& g)\n            {\n               error_token_joiner = reinterpret_cast<lexer::token_joiner*>(0);\n\n               for (std::size_t i = 0; i < token_joiner_list.size(); ++i)\n               {\n                  lexer::token_joiner& joiner = (*token_joiner_list[i]);\n\n                  joiner.reset();\n                  joiner.process(g);\n\n                  if (!joiner.result())\n                  {\n                     error_token_joiner = token_joiner_list[i];\n\n                     return false;\n                  }\n               }\n\n               return true;\n            }\n\n            inline bool run_inserters(lexer::generator& g)\n            {\n               error_token_inserter = reinterpret_cast<lexer::token_inserter*>(0);\n\n               for 
(std::size_t i = 0; i < token_inserter_list.size(); ++i)\n               {\n                  lexer::token_inserter& inserter = (*token_inserter_list[i]);\n\n                  inserter.reset();\n                  inserter.process(g);\n\n                  if (!inserter.result())\n                  {\n                     error_token_inserter = token_inserter_list[i];\n\n                     return false;\n                  }\n               }\n\n               return true;\n            }\n\n            inline bool run_scanners(lexer::generator& g)\n            {\n               error_token_scanner = reinterpret_cast<lexer::token_scanner*>(0);\n\n               for (std::size_t i = 0; i < token_scanner_list.size(); ++i)\n               {\n                  lexer::token_scanner& scanner = (*token_scanner_list[i]);\n\n                  scanner.reset();\n                  scanner.process(g);\n\n                  if (!scanner.result())\n                  {\n                     error_token_scanner = token_scanner_list[i];\n\n                     return false;\n                  }\n               }\n\n               return true;\n            }\n\n            std::vector<lexer::token_scanner*>  token_scanner_list;\n            std::vector<lexer::token_modifier*> token_modifier_list;\n            std::vector<lexer::token_joiner*>   token_joiner_list;\n            std::vector<lexer::token_inserter*> token_inserter_list;\n\n            lexer::token_scanner*  error_token_scanner;\n            lexer::token_modifier* error_token_modifier;\n            lexer::token_joiner*   error_token_joiner;\n            lexer::token_inserter* error_token_inserter;\n         };\n      }\n\n      class parser_helper\n      {\n      public:\n\n         typedef token         token_t;\n         typedef generator generator_t;\n\n         inline bool init(const std::string& str)\n         {\n            if (!lexer_.process(str))\n            {\n               return false;\n            }\n\n         
   lexer_.begin();\n\n            next_token();\n\n            return true;\n         }\n\n         inline generator_t& lexer()\n         {\n            return lexer_;\n         }\n\n         inline const generator_t& lexer() const\n         {\n            return lexer_;\n         }\n\n         inline void store_token()\n         {\n            lexer_.store();\n            store_current_token_ = current_token_;\n         }\n\n         inline void restore_token()\n         {\n            lexer_.restore();\n            current_token_ = store_current_token_;\n         }\n\n         inline void next_token()\n         {\n            current_token_ = lexer_.next_token();\n         }\n\n         inline const token_t& current_token() const\n         {\n            return current_token_;\n         }\n\n         enum token_advance_mode\n         {\n            e_hold    = 0,\n            e_advance = 1\n         };\n\n         inline void advance_token(const token_advance_mode mode)\n         {\n            if (e_advance == mode)\n            {\n               next_token();\n            }\n         }\n\n         inline bool token_is(const token_t::token_type& ttype, const token_advance_mode mode = e_advance)\n         {\n            if (current_token().type != ttype)\n            {\n               return false;\n            }\n\n            advance_token(mode);\n\n            return true;\n         }\n\n         inline bool token_is(const token_t::token_type& ttype,\n                              const std::string& value,\n                              const token_advance_mode mode = e_advance)\n         {\n            if (\n                 (current_token().type != ttype) ||\n                 !exprtk::details::imatch(value,current_token().value)\n               )\n            {\n               return false;\n            }\n\n            advance_token(mode);\n\n            return true;\n         }\n\n         inline bool peek_token_is(const token_t::token_type& ttype)\n       
  {\n            return (lexer_.peek_next_token().type == ttype);\n         }\n\n         inline bool peek_token_is(const std::string& s)\n         {\n            return (exprtk::details::imatch(lexer_.peek_next_token().value,s));\n         }\n\n      private:\n\n         generator_t lexer_;\n         token_t     current_token_;\n         token_t     store_current_token_;\n      };\n   }\n\n   template <typename T>\n   class vector_view\n   {\n   public:\n\n      typedef T* data_ptr_t;\n\n      vector_view(data_ptr_t data, const std::size_t& size)\n      : size_(size),\n        data_(data),\n        data_ref_(0)\n      {}\n\n      vector_view(const vector_view<T>& vv)\n      : size_(vv.size_),\n        data_(vv.data_),\n        data_ref_(0)\n      {}\n\n      inline void rebase(data_ptr_t data)\n      {\n         data_ = data;\n\n         if (!data_ref_.empty())\n         {\n            for (std::size_t i = 0; i < data_ref_.size(); ++i)\n            {\n               (*data_ref_[i]) = data;\n            }\n         }\n      }\n\n      inline data_ptr_t data() const\n      {\n         return data_;\n      }\n\n      inline std::size_t size() const\n      {\n         return size_;\n      }\n\n      inline const T& operator[](const std::size_t index) const\n      {\n         return data_[index];\n      }\n\n      inline T& operator[](const std::size_t index)\n      {\n         return data_[index];\n      }\n\n      void set_ref(data_ptr_t* data_ref)\n      {\n         data_ref_.push_back(data_ref);\n      }\n\n   private:\n\n      const std::size_t size_;\n      data_ptr_t  data_;\n      std::vector<data_ptr_t*> data_ref_;\n   };\n\n   template <typename T>\n   inline vector_view<T> make_vector_view(T* data,\n                                          const std::size_t size, const std::size_t offset = 0)\n   {\n      return vector_view<T>(data + offset,size);\n   }\n\n   template <typename T>\n   inline vector_view<T> make_vector_view(std::vector<T>& v,\n               
                           const std::size_t size, const std::size_t offset = 0)\n   {\n      return vector_view<T>(v.data() + offset,size);\n   }\n\n   template <typename T> class results_context;\n\n   template <typename T>\n   struct type_store\n   {\n      enum store_type\n      {\n         e_unknown,\n         e_scalar ,\n         e_vector ,\n         e_string\n      };\n\n      type_store()\n      : size(0),\n        data(0),\n        type(e_unknown)\n      {}\n\n      std::size_t size;\n      void*       data;\n      store_type  type;\n\n      class parameter_list\n      {\n      public:\n\n         parameter_list(std::vector<type_store>& pl)\n         : parameter_list_(pl)\n         {}\n\n         inline bool empty() const\n         {\n            return parameter_list_.empty();\n         }\n\n         inline std::size_t size() const\n         {\n            return parameter_list_.size();\n         }\n\n         inline type_store& operator[](const std::size_t& index)\n         {\n            return parameter_list_[index];\n         }\n\n         inline const type_store& operator[](const std::size_t& index) const\n         {\n            return parameter_list_[index];\n         }\n\n         inline type_store& front()\n         {\n            return parameter_list_[0];\n         }\n\n         inline const type_store& front() const\n         {\n            return parameter_list_[0];\n         }\n\n         inline type_store& back()\n         {\n            return parameter_list_.back();\n         }\n\n         inline const type_store& back() const\n         {\n            return parameter_list_.back();\n         }\n\n      private:\n\n         std::vector<type_store>& parameter_list_;\n\n         friend class results_context<T>;\n      };\n\n      template <typename ViewType>\n      struct type_view\n      {\n         typedef type_store<T> type_store_t;\n         typedef ViewType      value_t;\n\n         type_view(type_store_t& ts)\n         : ts_(ts),\n     
      data_(reinterpret_cast<value_t*>(ts_.data))\n         {}\n\n         type_view(const type_store_t& ts)\n         : ts_(const_cast<type_store_t&>(ts)),\n           data_(reinterpret_cast<value_t*>(ts_.data))\n         {}\n\n         inline std::size_t size() const\n         {\n            return ts_.size;\n         }\n\n         inline value_t& operator[](const std::size_t& i)\n         {\n            return data_[i];\n         }\n\n         inline const value_t& operator[](const std::size_t& i) const\n         {\n            return data_[i];\n         }\n\n         inline const value_t* begin() const { return data_; }\n         inline       value_t* begin()       { return data_; }\n\n         inline const value_t* end() const\n         {\n            return static_cast<value_t*>(data_ + ts_.size);\n         }\n\n         inline value_t* end()\n         {\n            return static_cast<value_t*>(data_ + ts_.size);\n         }\n\n         type_store_t& ts_;\n         value_t* data_;\n      };\n\n      typedef type_view<T>    vector_view;\n      typedef type_view<char> string_view;\n\n      struct scalar_view\n      {\n         typedef type_store<T> type_store_t;\n         typedef T value_t;\n\n         scalar_view(type_store_t& ts)\n         : v_(*reinterpret_cast<value_t*>(ts.data))\n         {}\n\n         scalar_view(const type_store_t& ts)\n         : v_(*reinterpret_cast<value_t*>(const_cast<type_store_t&>(ts).data))\n         {}\n\n         inline value_t& operator() ()\n         {\n            return v_;\n         }\n\n         inline const value_t& operator() () const\n         {\n            return v_;\n         }\n\n         template <typename IntType>\n         inline bool to_int(IntType& i) const\n         {\n            if (!exprtk::details::numeric::is_integer(v_))\n               return false;\n\n            i = static_cast<IntType>(v_);\n\n            return true;\n         }\n\n         template <typename UIntType>\n         inline bool 
to_uint(UIntType& u) const\n         {\n            if (v_ < T(0))\n               return false;\n            else if (!exprtk::details::numeric::is_integer(v_))\n               return false;\n\n            u = static_cast<UIntType>(v_);\n\n            return true;\n         }\n\n         T& v_;\n      };\n   };\n\n   template <typename StringView>\n   inline std::string to_str(const StringView& view)\n   {\n      return std::string(view.begin(),view.size());\n   }\n\n   #ifndef exprtk_disable_return_statement\n   namespace details\n   {\n      template <typename T> class return_node;\n      template <typename T> class return_envelope_node;\n   }\n   #endif\n\n   template <typename T>\n   class results_context\n   {\n   public:\n\n      typedef type_store<T> type_store_t;\n\n      results_context()\n      : results_available_(false)\n      {}\n\n      inline std::size_t count() const\n      {\n         if (results_available_)\n            return parameter_list_.size();\n         else\n            return 0;\n      }\n\n      inline type_store_t& operator[](const std::size_t& index)\n      {\n         return parameter_list_[index];\n      }\n\n      inline const type_store_t& operator[](const std::size_t& index) const\n      {\n         return parameter_list_[index];\n      }\n\n   private:\n\n      inline void clear()\n      {\n         results_available_ = false;\n      }\n\n      typedef std::vector<type_store_t> ts_list_t;\n      typedef typename type_store_t::parameter_list parameter_list_t;\n\n      inline void assign(const parameter_list_t& pl)\n      {\n         parameter_list_    = pl.parameter_list_;\n         results_available_ = true;\n      }\n\n      bool results_available_;\n      ts_list_t parameter_list_;\n\n      #ifndef exprtk_disable_return_statement\n      friend class details::return_node<T>;\n      friend class details::return_envelope_node<T>;\n      #endif\n   };\n\n   namespace details\n   {\n      enum operator_type\n      {\n         
e_default , e_null    , e_add     , e_sub     ,\n         e_mul     , e_div     , e_mod     , e_pow     ,\n         e_atan2   , e_min     , e_max     , e_avg     ,\n         e_sum     , e_prod    , e_lt      , e_lte     ,\n         e_eq      , e_equal   , e_ne      , e_nequal  ,\n         e_gte     , e_gt      , e_and     , e_nand    ,\n         e_or      , e_nor     , e_xor     , e_xnor    ,\n         e_mand    , e_mor     , e_scand   , e_scor    ,\n         e_shr     , e_shl     , e_abs     , e_acos    ,\n         e_acosh   , e_asin    , e_asinh   , e_atan    ,\n         e_atanh   , e_ceil    , e_cos     , e_cosh    ,\n         e_exp     , e_expm1   , e_floor   , e_log     ,\n         e_log10   , e_log2    , e_log1p   , e_logn    ,\n         e_neg     , e_pos     , e_round   , e_roundn  ,\n         e_root    , e_sqrt    , e_sin     , e_sinc    ,\n         e_sinh    , e_sec     , e_csc     , e_tan     ,\n         e_tanh    , e_cot     , e_clamp   , e_iclamp  ,\n         e_inrange , e_sgn     , e_r2d     , e_d2r     ,\n         e_d2g     , e_g2d     , e_hypot   , e_notl    ,\n         e_erf     , e_erfc    , e_ncdf    , e_frac    ,\n         e_trunc   , e_assign  , e_addass  , e_subass  ,\n         e_mulass  , e_divass  , e_modass  , e_in      ,\n         e_like    , e_ilike   , e_multi   , e_smulti  ,\n         e_swap    ,\n\n         // Do not add new functions/operators after this point.\n         e_sf00 = 1000, e_sf01 = 1001, e_sf02 = 1002, e_sf03 = 1003,\n         e_sf04 = 1004, e_sf05 = 1005, e_sf06 = 1006, e_sf07 = 1007,\n         e_sf08 = 1008, e_sf09 = 1009, e_sf10 = 1010, e_sf11 = 1011,\n         e_sf12 = 1012, e_sf13 = 1013, e_sf14 = 1014, e_sf15 = 1015,\n         e_sf16 = 1016, e_sf17 = 1017, e_sf18 = 1018, e_sf19 = 1019,\n         e_sf20 = 1020, e_sf21 = 1021, e_sf22 = 1022, e_sf23 = 1023,\n         e_sf24 = 1024, e_sf25 = 1025, e_sf26 = 1026, e_sf27 = 1027,\n         e_sf28 = 1028, e_sf29 = 1029, e_sf30 = 1030, e_sf31 = 1031,\n         e_sf32 = 1032, 
e_sf33 = 1033, e_sf34 = 1034, e_sf35 = 1035,\n         e_sf36 = 1036, e_sf37 = 1037, e_sf38 = 1038, e_sf39 = 1039,\n         e_sf40 = 1040, e_sf41 = 1041, e_sf42 = 1042, e_sf43 = 1043,\n         e_sf44 = 1044, e_sf45 = 1045, e_sf46 = 1046, e_sf47 = 1047,\n         e_sf48 = 1048, e_sf49 = 1049, e_sf50 = 1050, e_sf51 = 1051,\n         e_sf52 = 1052, e_sf53 = 1053, e_sf54 = 1054, e_sf55 = 1055,\n         e_sf56 = 1056, e_sf57 = 1057, e_sf58 = 1058, e_sf59 = 1059,\n         e_sf60 = 1060, e_sf61 = 1061, e_sf62 = 1062, e_sf63 = 1063,\n         e_sf64 = 1064, e_sf65 = 1065, e_sf66 = 1066, e_sf67 = 1067,\n         e_sf68 = 1068, e_sf69 = 1069, e_sf70 = 1070, e_sf71 = 1071,\n         e_sf72 = 1072, e_sf73 = 1073, e_sf74 = 1074, e_sf75 = 1075,\n         e_sf76 = 1076, e_sf77 = 1077, e_sf78 = 1078, e_sf79 = 1079,\n         e_sf80 = 1080, e_sf81 = 1081, e_sf82 = 1082, e_sf83 = 1083,\n         e_sf84 = 1084, e_sf85 = 1085, e_sf86 = 1086, e_sf87 = 1087,\n         e_sf88 = 1088, e_sf89 = 1089, e_sf90 = 1090, e_sf91 = 1091,\n         e_sf92 = 1092, e_sf93 = 1093, e_sf94 = 1094, e_sf95 = 1095,\n         e_sf96 = 1096, e_sf97 = 1097, e_sf98 = 1098, e_sf99 = 1099,\n         e_sffinal  = 1100,\n         e_sf4ext00 = 2000, e_sf4ext01 = 2001, e_sf4ext02 = 2002, e_sf4ext03 = 2003,\n         e_sf4ext04 = 2004, e_sf4ext05 = 2005, e_sf4ext06 = 2006, e_sf4ext07 = 2007,\n         e_sf4ext08 = 2008, e_sf4ext09 = 2009, e_sf4ext10 = 2010, e_sf4ext11 = 2011,\n         e_sf4ext12 = 2012, e_sf4ext13 = 2013, e_sf4ext14 = 2014, e_sf4ext15 = 2015,\n         e_sf4ext16 = 2016, e_sf4ext17 = 2017, e_sf4ext18 = 2018, e_sf4ext19 = 2019,\n         e_sf4ext20 = 2020, e_sf4ext21 = 2021, e_sf4ext22 = 2022, e_sf4ext23 = 2023,\n         e_sf4ext24 = 2024, e_sf4ext25 = 2025, e_sf4ext26 = 2026, e_sf4ext27 = 2027,\n         e_sf4ext28 = 2028, e_sf4ext29 = 2029, e_sf4ext30 = 2030, e_sf4ext31 = 2031,\n         e_sf4ext32 = 2032, e_sf4ext33 = 2033, e_sf4ext34 = 2034, e_sf4ext35 = 2035,\n         e_sf4ext36 = 2036, 
e_sf4ext37 = 2037, e_sf4ext38 = 2038, e_sf4ext39 = 2039,\n         e_sf4ext40 = 2040, e_sf4ext41 = 2041, e_sf4ext42 = 2042, e_sf4ext43 = 2043,\n         e_sf4ext44 = 2044, e_sf4ext45 = 2045, e_sf4ext46 = 2046, e_sf4ext47 = 2047,\n         e_sf4ext48 = 2048, e_sf4ext49 = 2049, e_sf4ext50 = 2050, e_sf4ext51 = 2051,\n         e_sf4ext52 = 2052, e_sf4ext53 = 2053, e_sf4ext54 = 2054, e_sf4ext55 = 2055,\n         e_sf4ext56 = 2056, e_sf4ext57 = 2057, e_sf4ext58 = 2058, e_sf4ext59 = 2059,\n         e_sf4ext60 = 2060, e_sf4ext61 = 2061\n      };\n\n      inline std::string to_str(const operator_type opr)\n      {\n         switch (opr)\n         {\n            case e_add    : return  \"+\";\n            case e_sub    : return  \"-\";\n            case e_mul    : return  \"*\";\n            case e_div    : return  \"/\";\n            case e_mod    : return  \"%\";\n            case e_pow    : return  \"^\";\n            case e_assign : return \":=\";\n            case e_addass : return \"+=\";\n            case e_subass : return \"-=\";\n            case e_mulass : return \"*=\";\n            case e_divass : return \"/=\";\n            case e_modass : return \"%=\";\n            case e_lt     : return  \"<\";\n            case e_lte    : return \"<=\";\n            case e_eq     : return \"==\";\n            case e_equal  : return  \"=\";\n            case e_ne     : return \"!=\";\n            case e_nequal : return \"<>\";\n            case e_gte    : return \">=\";\n            case e_gt     : return  \">\";\n            default       : return \"N/A\";\n         }\n      }\n\n      struct base_operation_t\n      {\n         base_operation_t(const operator_type t, const unsigned int& np)\n         : type(t),\n           num_params(np)\n         {}\n\n         operator_type type;\n         unsigned int num_params;\n      };\n\n      namespace loop_unroll\n      {\n         #ifndef exprtk_disable_superscalar_unroll\n         const unsigned int global_loop_batch_size = 16;\n  
       #else\n         const unsigned int global_loop_batch_size = 4;\n         #endif\n\n         struct details\n         {\n            details(const std::size_t& vsize,\n                    const unsigned int loop_batch_size = global_loop_batch_size)\n            : batch_size(loop_batch_size   ),\n              remainder (vsize % batch_size),\n              upper_bound(static_cast<int>(vsize - (remainder ? loop_batch_size : 0)))\n            {}\n\n            unsigned int batch_size;\n            int remainder;\n            int upper_bound;\n         };\n      }\n\n      #ifdef exprtk_enable_debugging\n      inline void dump_ptr(const std::string& s, const void* ptr, const std::size_t size = 0)\n      {\n         if (size)\n            exprtk_debug((\"%s - addr: %p size: %d\\n\",\n                          s.c_str(),\n                          ptr,\n                          static_cast<unsigned int>(size)));\n         else\n            exprtk_debug((\"%s - addr: %p\\n\",s.c_str(),ptr));\n      }\n      #else\n      inline void dump_ptr(const std::string&, const void*) {}\n      inline void dump_ptr(const std::string&, const void*, const std::size_t) {}\n      #endif\n\n      template <typename T>\n      class vec_data_store\n      {\n      public:\n\n         typedef vec_data_store<T> type;\n         typedef T* data_t;\n\n      private:\n\n         struct control_block\n         {\n            control_block()\n            : ref_count(1),\n              size     (0),\n              data     (0),\n              destruct (true)\n            {}\n\n            control_block(const std::size_t& dsize)\n            : ref_count(1    ),\n              size     (dsize),\n              data     (0    ),\n              destruct (true )\n            { create_data(); }\n\n            control_block(const std::size_t& dsize, data_t dptr, bool dstrct = false)\n            : ref_count(1     ),\n              size     (dsize ),\n              data     (dptr  ),\n              
destruct (dstrct)\n            {}\n\n           ~control_block()\n            {\n               if (data && destruct && (0 == ref_count))\n               {\n                  dump_ptr(\"~control_block() data\",data);\n                  delete[] data;\n                  data = reinterpret_cast<data_t>(0);\n               }\n            }\n\n            static inline control_block* create(const std::size_t& dsize, data_t data_ptr = data_t(0), bool dstrct = false)\n            {\n               if (dsize)\n               {\n                  if (0 == data_ptr)\n                     return (new control_block(dsize));\n                  else\n                     return (new control_block(dsize, data_ptr, dstrct));\n               }\n               else\n                  return (new control_block);\n            }\n\n            static inline void destroy(control_block*& cntrl_blck)\n            {\n               if (cntrl_blck)\n               {\n                  if (\n                       (0 !=   cntrl_blck->ref_count) &&\n                       (0 == --cntrl_blck->ref_count)\n                     )\n                  {\n                     delete cntrl_blck;\n                  }\n\n                  cntrl_blck = 0;\n               }\n            }\n\n            std::size_t ref_count;\n            std::size_t size;\n            data_t      data;\n            bool        destruct;\n\n         private:\n\n            control_block(const control_block&);\n            control_block& operator=(const control_block&);\n\n            inline void create_data()\n            {\n               destruct = true;\n               data     = new T[size];\n               std::fill_n(data,size,T(0));\n               dump_ptr(\"control_block::create_data() - data\",data,size);\n            }\n         };\n\n      public:\n\n         vec_data_store()\n         : control_block_(control_block::create(0))\n         {}\n\n         vec_data_store(const std::size_t& size)\n         : 
control_block_(control_block::create(size,(data_t)(0),true))\n         {}\n\n         vec_data_store(const std::size_t& size, data_t data, bool dstrct = false)\n         : control_block_(control_block::create(size, data, dstrct))\n         {}\n\n         vec_data_store(const type& vds)\n         {\n            control_block_ = vds.control_block_;\n            control_block_->ref_count++;\n         }\n\n        ~vec_data_store()\n         {\n            control_block::destroy(control_block_);\n         }\n\n         type& operator=(const type& vds)\n         {\n            if (this != &vds)\n            {\n               std::size_t final_size = min_size(control_block_, vds.control_block_);\n\n               vds.control_block_->size = final_size;\n                   control_block_->size = final_size;\n\n               if (control_block_->destruct || (0 == control_block_->data))\n               {\n                  control_block::destroy(control_block_);\n\n                  control_block_ = vds.control_block_;\n                  control_block_->ref_count++;\n               }\n            }\n\n            return (*this);\n         }\n\n         inline data_t data()\n         {\n            return control_block_->data;\n         }\n\n         inline data_t data() const\n         {\n            return control_block_->data;\n         }\n\n         inline std::size_t size()\n         {\n            return control_block_->size;\n         }\n\n         inline std::size_t size() const\n         {\n            return control_block_->size;\n         }\n\n         inline data_t& ref()\n         {\n            return control_block_->data;\n         }\n\n         inline void dump() const\n         {\n            #ifdef exprtk_enable_debugging\n            exprtk_debug((\"size: %d\\taddress:%p\\tdestruct:%c\\n\",\n                          size(),\n                          data(),\n                          (control_block_->destruct ? 
'T' : 'F')));\n\n            for (std::size_t i = 0; i < size(); ++i)\n            {\n               if (5 == i)\n                  exprtk_debug((\"\\n\"));\n\n               exprtk_debug((\"%15.10f \",data()[i]));\n            }\n            exprtk_debug((\"\\n\"));\n            #endif\n         }\n\n         static inline void match_sizes(type& vds0, type& vds1)\n         {\n            std::size_t size = min_size(vds0.control_block_,vds1.control_block_);\n            vds0.control_block_->size = size;\n            vds1.control_block_->size = size;\n         }\n\n      private:\n\n         static inline std::size_t min_size(control_block* cb0, control_block* cb1)\n         {\n            const std::size_t size0 = cb0->size;\n            const std::size_t size1 = cb1->size;\n\n            if (size0 && size1)\n               return std::min(size0,size1);\n            else\n               return (size0) ? size0 : size1;\n         }\n\n         control_block* control_block_;\n      };\n\n      namespace numeric\n      {\n         namespace details\n         {\n            template <typename T>\n            inline T process_impl(const operator_type operation, const T arg)\n            {\n               switch (operation)\n               {\n                  case e_abs   : return numeric::abs  (arg);\n                  case e_acos  : return numeric::acos (arg);\n                  case e_acosh : return numeric::acosh(arg);\n                  case e_asin  : return numeric::asin (arg);\n                  case e_asinh : return numeric::asinh(arg);\n                  case e_atan  : return numeric::atan (arg);\n                  case e_atanh : return numeric::atanh(arg);\n                  case e_ceil  : return numeric::ceil (arg);\n                  case e_cos   : return numeric::cos  (arg);\n                  case e_cosh  : return numeric::cosh (arg);\n                  case e_exp   : return numeric::exp  (arg);\n                  case e_expm1 : return 
numeric::expm1(arg);\n                  case e_floor : return numeric::floor(arg);\n                  case e_log   : return numeric::log  (arg);\n                  case e_log10 : return numeric::log10(arg);\n                  case e_log2  : return numeric::log2 (arg);\n                  case e_log1p : return numeric::log1p(arg);\n                  case e_neg   : return numeric::neg  (arg);\n                  case e_pos   : return numeric::pos  (arg);\n                  case e_round : return numeric::round(arg);\n                  case e_sin   : return numeric::sin  (arg);\n                  case e_sinc  : return numeric::sinc (arg);\n                  case e_sinh  : return numeric::sinh (arg);\n                  case e_sqrt  : return numeric::sqrt (arg);\n                  case e_tan   : return numeric::tan  (arg);\n                  case e_tanh  : return numeric::tanh (arg);\n                  case e_cot   : return numeric::cot  (arg);\n                  case e_sec   : return numeric::sec  (arg);\n                  case e_csc   : return numeric::csc  (arg);\n                  case e_r2d   : return numeric::r2d  (arg);\n                  case e_d2r   : return numeric::d2r  (arg);\n                  case e_d2g   : return numeric::d2g  (arg);\n                  case e_g2d   : return numeric::g2d  (arg);\n                  case e_notl  : return numeric::notl (arg);\n                  case e_sgn   : return numeric::sgn  (arg);\n                  case e_erf   : return numeric::erf  (arg);\n                  case e_erfc  : return numeric::erfc (arg);\n                  case e_ncdf  : return numeric::ncdf (arg);\n                  case e_frac  : return numeric::frac (arg);\n                  case e_trunc : return numeric::trunc(arg);\n\n                  default      : exprtk_debug((\"numeric::details::process_impl<T> - Invalid unary operation.\\n\"));\n                                 return std::numeric_limits<T>::quiet_NaN();\n               }\n            }\n\n        
    template <typename T>\n            inline T process_impl(const operator_type operation, const T arg0, const T arg1)\n            {\n               switch (operation)\n               {\n                  case e_add    : return (arg0 + arg1);\n                  case e_sub    : return (arg0 - arg1);\n                  case e_mul    : return (arg0 * arg1);\n                  case e_div    : return (arg0 / arg1);\n                  case e_mod    : return modulus<T>(arg0,arg1);\n                  case e_pow    : return pow<T>(arg0,arg1);\n                  case e_atan2  : return atan2<T>(arg0,arg1);\n                  case e_min    : return std::min<T>(arg0,arg1);\n                  case e_max    : return std::max<T>(arg0,arg1);\n                  case e_logn   : return logn<T>(arg0,arg1);\n                  case e_lt     : return (arg0 <  arg1) ? T(1) : T(0);\n                  case e_lte    : return (arg0 <= arg1) ? T(1) : T(0);\n                  case e_eq     : return std::equal_to<T>()(arg0,arg1) ? T(1) : T(0);\n                  case e_ne     : return std::not_equal_to<T>()(arg0,arg1) ? T(1) : T(0);\n                  case e_gte    : return (arg0 >= arg1) ? T(1) : T(0);\n                  case e_gt     : return (arg0 >  arg1) ? 
T(1) : T(0);\n                  case e_and    : return and_opr <T>(arg0,arg1);\n                  case e_nand   : return nand_opr<T>(arg0,arg1);\n                  case e_or     : return or_opr  <T>(arg0,arg1);\n                  case e_nor    : return nor_opr <T>(arg0,arg1);\n                  case e_xor    : return xor_opr <T>(arg0,arg1);\n                  case e_xnor   : return xnor_opr<T>(arg0,arg1);\n                  case e_root   : return root    <T>(arg0,arg1);\n                  case e_roundn : return roundn  <T>(arg0,arg1);\n                  case e_equal  : return equal      (arg0,arg1);\n                  case e_nequal : return nequal     (arg0,arg1);\n                  case e_hypot  : return hypot   <T>(arg0,arg1);\n                  case e_shr    : return shr     <T>(arg0,arg1);\n                  case e_shl    : return shl     <T>(arg0,arg1);\n\n                  default       : exprtk_debug((\"numeric::details::process_impl<T> - Invalid binary operation.\\n\"));\n                                  return std::numeric_limits<T>::quiet_NaN();\n               }\n            }\n\n            template <typename T>\n            inline T process_impl(const operator_type operation, const T arg0, const T arg1, int_type_tag)\n            {\n               switch (operation)\n               {\n                  case e_add    : return (arg0 + arg1);\n                  case e_sub    : return (arg0 - arg1);\n                  case e_mul    : return (arg0 * arg1);\n                  case e_div    : return (arg0 / arg1);\n                  case e_mod    : return arg0 % arg1;\n                  case e_pow    : return pow<T>(arg0,arg1);\n                  case e_min    : return std::min<T>(arg0,arg1);\n                  case e_max    : return std::max<T>(arg0,arg1);\n                  case e_logn   : return logn<T>(arg0,arg1);\n                  case e_lt     : return (arg0 <  arg1) ? T(1) : T(0);\n                  case e_lte    : return (arg0 <= arg1) ? 
T(1) : T(0);\n                  case e_eq     : return (arg0 == arg1) ? T(1) : T(0);\n                  case e_ne     : return (arg0 != arg1) ? T(1) : T(0);\n                  case e_gte    : return (arg0 >= arg1) ? T(1) : T(0);\n                  case e_gt     : return (arg0 >  arg1) ? T(1) : T(0);\n                  case e_and    : return ((arg0 != T(0)) && (arg1 != T(0))) ? T(1) : T(0);\n                  case e_nand   : return ((arg0 != T(0)) && (arg1 != T(0))) ? T(0) : T(1);\n                  case e_or     : return ((arg0 != T(0)) || (arg1 != T(0))) ? T(1) : T(0);\n                  case e_nor    : return ((arg0 != T(0)) || (arg1 != T(0))) ? T(0) : T(1);\n                  case e_xor    : return arg0 ^ arg1;\n                  case e_xnor   : return !(arg0 ^ arg1);\n                  case e_root   : return root<T>(arg0,arg1);\n                  case e_equal  : return arg0 == arg1;\n                  case e_nequal : return arg0 != arg1;\n                  case e_hypot  : return hypot<T>(arg0,arg1);\n                  case e_shr    : return arg0 >> arg1;\n                  case e_shl    : return arg0 << arg1;\n\n                  default       : exprtk_debug((\"numeric::details::process_impl<IntType> - Invalid binary operation.\\n\"));\n                                  return std::numeric_limits<T>::quiet_NaN();\n               }\n            }\n         }\n\n         template <typename T>\n         inline T process(const operator_type operation, const T arg)\n         {\n            return exprtk::details::numeric::details::process_impl(operation,arg);\n         }\n\n         template <typename T>\n         inline T process(const operator_type operation, const T arg0, const T arg1)\n         {\n            return exprtk::details::numeric::details::process_impl(operation,arg0,arg1);\n         }\n      }\n\n      template <typename T>\n      class expression_node\n      {\n      public:\n\n         enum node_type\n         {\n            e_none         , e_null 
        , e_constant     , e_unary        ,\n            e_binary       , e_binary_ext   , e_trinary      , e_quaternary   ,\n            e_vararg       , e_conditional  , e_while        , e_repeat       ,\n            e_for          , e_switch       , e_mswitch      , e_return       ,\n            e_retenv       , e_variable     , e_stringvar    , e_stringconst  ,\n            e_stringvarrng , e_cstringvarrng, e_strgenrange  , e_strconcat    ,\n            e_stringvarsize, e_strswap      , e_stringsize   , e_stringvararg ,\n            e_function     , e_vafunction   , e_genfunction  , e_strfunction  ,\n            e_strcondition , e_strccondition, e_add          , e_sub          ,\n            e_mul          , e_div          , e_mod          , e_pow          ,\n            e_lt           , e_lte          , e_gt           , e_gte          ,\n            e_eq           , e_ne           , e_and          , e_nand         ,\n            e_or           , e_nor          , e_xor          , e_xnor         ,\n            e_in           , e_like         , e_ilike        , e_inranges     ,\n            e_ipow         , e_ipowinv      , e_abs          , e_acos         ,\n            e_acosh        , e_asin         , e_asinh        , e_atan         ,\n            e_atanh        , e_ceil         , e_cos          , e_cosh         ,\n            e_exp          , e_expm1        , e_floor        , e_log          ,\n            e_log10        , e_log2         , e_log1p        , e_neg          ,\n            e_pos          , e_round        , e_sin          , e_sinc         ,\n            e_sinh         , e_sqrt         , e_tan          , e_tanh         ,\n            e_cot          , e_sec          , e_csc          , e_r2d          ,\n            e_d2r          , e_d2g          , e_g2d          , e_notl         ,\n            e_sgn          , e_erf          , e_erfc         , e_ncdf         ,\n            e_frac         , e_trunc        , e_uvouv        , e_vov          ,\n           
 e_cov          , e_voc          , e_vob          , e_bov          ,\n            e_cob          , e_boc          , e_vovov        , e_vovoc        ,\n            e_vocov        , e_covov        , e_covoc        , e_vovovov      ,\n            e_vovovoc      , e_vovocov      , e_vocovov      , e_covovov      ,\n            e_covocov      , e_vocovoc      , e_covovoc      , e_vococov      ,\n            e_sf3ext       , e_sf4ext       , e_nulleq       , e_strass       ,\n            e_vector       , e_vecelem      , e_rbvecelem    , e_rbveccelem   ,\n            e_vecdefass    , e_vecvalass    , e_vecvecass    , e_vecopvalass  ,\n            e_vecopvecass  , e_vecfunc      , e_vecvecswap   , e_vecvecineq   ,\n            e_vecvalineq   , e_valvecineq   , e_vecvecarith  , e_vecvalarith  ,\n            e_valvecarith  , e_vecunaryop   , e_break        , e_continue     ,\n            e_swap\n         };\n\n         typedef T value_type;\n         typedef expression_node<T>* expression_ptr;\n\n         virtual ~expression_node()\n         {}\n\n         inline virtual T value() const\n         {\n            return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline virtual expression_node<T>* branch(const std::size_t& index = 0) const\n         {\n            return reinterpret_cast<expression_ptr>(index * 0);\n         }\n\n         inline virtual node_type type() const\n         {\n            return e_none;\n         }\n      };\n\n      template <typename T>\n      inline bool is_generally_string_node(const expression_node<T>* node);\n\n      inline bool is_true(const double v)\n      {\n         return std::not_equal_to<double>()(0.0,v);\n      }\n\n      inline bool is_true(const long double v)\n      {\n         return std::not_equal_to<long double>()(0.0L,v);\n      }\n\n      inline bool is_true(const float v)\n      {\n         return std::not_equal_to<float>()(0.0f,v);\n      }\n\n      template <typename T>\n      inline bool is_true(const 
std::complex<T>& v)\n      {\n         return std::not_equal_to<std::complex<T> >()(std::complex<T>(0),v);\n      }\n\n      template <typename T>\n      inline bool is_true(const expression_node<T>* node)\n      {\n         return std::not_equal_to<T>()(T(0),node->value());\n      }\n\n      template <typename T>\n      inline bool is_false(const expression_node<T>* node)\n      {\n         return std::equal_to<T>()(T(0),node->value());\n      }\n\n      template <typename T>\n      inline bool is_unary_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_unary == node->type());\n      }\n\n      template <typename T>\n      inline bool is_neg_unary_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_neg == node->type());\n      }\n\n      template <typename T>\n      inline bool is_binary_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_binary == node->type());\n      }\n\n      template <typename T>\n      inline bool is_variable_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_variable == node->type());\n      }\n\n      template <typename T>\n      inline bool is_ivariable_node(const expression_node<T>* node)\n      {\n         return node &&\n                (\n                  details::expression_node<T>::e_variable   == node->type() ||\n                  details::expression_node<T>::e_vecelem    == node->type() ||\n                  details::expression_node<T>::e_rbvecelem  == node->type() ||\n                  details::expression_node<T>::e_rbveccelem == node->type()\n                );\n      }\n\n      template <typename T>\n      inline bool is_vector_elem_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_vecelem == node->type());\n      }\n\n      template <typename T>\n      inline bool 
is_rebasevector_elem_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_rbvecelem == node->type());\n      }\n\n      template <typename T>\n      inline bool is_rebasevector_celem_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_rbveccelem == node->type());\n      }\n\n      template <typename T>\n      inline bool is_vector_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_vector == node->type());\n      }\n\n      template <typename T>\n      inline bool is_ivector_node(const expression_node<T>* node)\n      {\n         if (node)\n         {\n            switch (node->type())\n            {\n               case details::expression_node<T>::e_vector      :\n               case details::expression_node<T>::e_vecvalass   :\n               case details::expression_node<T>::e_vecvecass   :\n               case details::expression_node<T>::e_vecopvalass :\n               case details::expression_node<T>::e_vecopvecass :\n               case details::expression_node<T>::e_vecvecswap  :\n               case details::expression_node<T>::e_vecvecarith :\n               case details::expression_node<T>::e_vecvalarith :\n               case details::expression_node<T>::e_valvecarith :\n               case details::expression_node<T>::e_vecunaryop  : return true;\n               default                                         : return false;\n            }\n         }\n         else\n            return false;\n      }\n\n      template <typename T>\n      inline bool is_constant_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_constant == node->type());\n      }\n\n      template <typename T>\n      inline bool is_null_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_null == node->type());\n      }\n\n      
template <typename T>\n      inline bool is_break_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_break == node->type());\n      }\n\n      template <typename T>\n      inline bool is_continue_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_continue == node->type());\n      }\n\n      template <typename T>\n      inline bool is_swap_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_swap == node->type());\n      }\n\n      template <typename T>\n      inline bool is_function(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_function == node->type());\n      }\n\n      template <typename T>\n      inline bool is_return_node(const expression_node<T>* node)\n      {\n         return node && (details::expression_node<T>::e_return == node->type());\n      }\n\n      template <typename T> class unary_node;\n\n      template <typename T>\n      inline bool is_negate_node(const expression_node<T>* node)\n      {\n         if (node && is_unary_node(node))\n         {\n            return (details::e_neg == static_cast<const unary_node<T>*>(node)->operation());\n         }\n         else\n            return false;\n      }\n\n      template <typename T>\n      inline bool branch_deletable(expression_node<T>* node)\n      {\n         return !is_variable_node(node) &&\n                !is_string_node  (node) ;\n      }\n\n      template <std::size_t N, typename T>\n      inline bool all_nodes_valid(expression_node<T>* (&b)[N])\n      {\n         for (std::size_t i = 0; i < N; ++i)\n         {\n            if (0 == b[i]) return false;\n         }\n\n         return true;\n      }\n\n      template <typename T,\n                typename Allocator,\n                template <typename,typename> class Sequence>\n      inline bool all_nodes_valid(const 
Sequence<expression_node<T>*,Allocator>& b)\n      {\n         for (std::size_t i = 0; i < b.size(); ++i)\n         {\n            if (0 == b[i]) return false;\n         }\n\n         return true;\n      }\n\n      template <std::size_t N, typename T>\n      inline bool all_nodes_variables(expression_node<T>* (&b)[N])\n      {\n         for (std::size_t i = 0; i < N; ++i)\n         {\n            if (0 == b[i])\n               return false;\n            else if (!is_variable_node(b[i]))\n               return false;\n         }\n\n         return true;\n      }\n\n      template <typename T,\n                typename Allocator,\n                template <typename,typename> class Sequence>\n      inline bool all_nodes_variables(Sequence<expression_node<T>*,Allocator>& b)\n      {\n         for (std::size_t i = 0; i < b.size(); ++i)\n         {\n            if (0 == b[i])\n               return false;\n            else if (!is_variable_node(b[i]))\n               return false;\n         }\n\n         return true;\n      }\n\n      template <typename NodeAllocator, typename T, std::size_t N>\n      inline void free_all_nodes(NodeAllocator& node_allocator, expression_node<T>* (&b)[N])\n      {\n         for (std::size_t i = 0; i < N; ++i)\n         {\n            free_node(node_allocator,b[i]);\n         }\n      }\n\n      template <typename NodeAllocator,\n                typename T,\n                typename Allocator,\n                template <typename,typename> class Sequence>\n      inline void free_all_nodes(NodeAllocator& node_allocator, Sequence<expression_node<T>*,Allocator>& b)\n      {\n         for (std::size_t i = 0; i < b.size(); ++i)\n         {\n            free_node(node_allocator,b[i]);\n         }\n\n         b.clear();\n      }\n\n      template <typename NodeAllocator, typename T>\n      inline void free_node(NodeAllocator& node_allocator, expression_node<T>*& node, const bool force_delete = false)\n      {\n         if (0 != node)\n         {\n  
          if (\n                 (is_variable_node(node) || is_string_node(node)) &&\n                 !force_delete\n               )\n               return;\n\n            node_allocator.free(node);\n            node = reinterpret_cast<expression_node<T>*>(0);\n         }\n      }\n\n      template <typename T>\n      inline void destroy_node(expression_node<T>*& node)\n      {\n         delete node;\n         node = reinterpret_cast<expression_node<T>*>(0);\n      }\n\n      template <typename Type>\n      class vector_holder\n      {\n      private:\n\n         typedef Type value_type;\n         typedef value_type* value_ptr;\n         typedef const value_ptr const_value_ptr;\n\n         class vector_holder_base\n         {\n         public:\n\n            virtual ~vector_holder_base() {}\n\n            inline value_ptr operator[](const std::size_t& index) const\n            {\n               return value_at(index);\n            }\n\n            inline std::size_t size() const\n            {\n               return vector_size();\n            }\n\n            inline value_ptr data() const\n            {\n               return value_at(0);\n            }\n\n            virtual inline bool rebaseable() const\n            {\n               return false;\n            }\n\n            virtual void set_ref(value_ptr*) {}\n\n         protected:\n\n            virtual value_ptr value_at(const std::size_t&) const = 0;\n            virtual std::size_t vector_size()              const = 0;\n         };\n\n         class array_vector_impl : public vector_holder_base\n         {\n         public:\n\n            array_vector_impl(const Type* vec, const std::size_t& vec_size)\n            : vec_(vec),\n              size_(vec_size)\n            {}\n\n         protected:\n\n            value_ptr value_at(const std::size_t& index) const\n            {\n               if (index < size_)\n                  return const_cast<const_value_ptr>(vec_ + index);\n               else\n     
             return const_value_ptr(0);\n            }\n\n            std::size_t vector_size() const\n            {\n               return size_;\n            }\n\n         private:\n\n            array_vector_impl operator=(const array_vector_impl&);\n\n            const Type* vec_;\n            const std::size_t size_;\n         };\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         class sequence_vector_impl : public vector_holder_base\n         {\n         public:\n\n            typedef Sequence<Type,Allocator> sequence_t;\n\n            sequence_vector_impl(sequence_t& seq)\n            : sequence_(seq)\n            {}\n\n         protected:\n\n            value_ptr value_at(const std::size_t& index) const\n            {\n               return (index < sequence_.size()) ? (&sequence_[index]) : const_value_ptr(0);\n            }\n\n            std::size_t vector_size() const\n            {\n               return sequence_.size();\n            }\n\n         private:\n\n            sequence_vector_impl operator=(const sequence_vector_impl&);\n\n            sequence_t& sequence_;\n         };\n\n         class vector_view_impl : public vector_holder_base\n         {\n         public:\n\n            typedef exprtk::vector_view<Type> vector_view_t;\n\n            vector_view_impl(vector_view_t& vec_view)\n            : vec_view_(vec_view)\n            {}\n\n            void set_ref(value_ptr* ref)\n            {\n               vec_view_.set_ref(ref);\n            }\n\n            virtual inline bool rebaseable() const\n            {\n               return true;\n            }\n\n         protected:\n\n            value_ptr value_at(const std::size_t& index) const\n            {\n               return (index < vec_view_.size()) ? 
(&vec_view_[index]) : const_value_ptr(0);\n            }\n\n            std::size_t vector_size() const\n            {\n               return vec_view_.size();\n            }\n\n         private:\n\n            vector_view_impl operator=(const vector_view_impl&);\n\n            vector_view_t& vec_view_;\n         };\n\n      public:\n\n         typedef typename details::vec_data_store<Type> vds_t;\n\n         vector_holder(Type* vec, const std::size_t& vec_size)\n         : vector_holder_base_(new(buffer)array_vector_impl(vec,vec_size))\n         {}\n\n         vector_holder(const vds_t& vds)\n         : vector_holder_base_(new(buffer)array_vector_impl(vds.data(),vds.size()))\n         {}\n\n         template <typename Allocator>\n         vector_holder(std::vector<Type,Allocator>& vec)\n         : vector_holder_base_(new(buffer)sequence_vector_impl<Allocator,std::vector>(vec))\n         {}\n\n         vector_holder(exprtk::vector_view<Type>& vec)\n         : vector_holder_base_(new(buffer)vector_view_impl(vec))\n         {}\n\n         inline value_ptr operator[](const std::size_t& index) const\n         {\n            return (*vector_holder_base_)[index];\n         }\n\n         inline std::size_t size() const\n         {\n            return vector_holder_base_->size();\n         }\n\n         inline value_ptr data() const\n         {\n            return vector_holder_base_->data();\n         }\n\n         void set_ref(value_ptr* ref)\n         {\n            vector_holder_base_->set_ref(ref);\n         }\n\n         bool rebaseable() const\n         {\n            return vector_holder_base_->rebaseable();\n         }\n\n      private:\n\n         mutable vector_holder_base* vector_holder_base_;\n         uchar_t buffer[64];\n      };\n\n      template <typename T>\n      class null_node : public expression_node<T>\n      {\n      public:\n\n         inline T value() const\n         {\n            return std::numeric_limits<T>::quiet_NaN();\n         }\n\n        
 inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_null;\n         }\n      };\n\n      template <typename T>\n      class null_eq_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         null_eq_node(expression_ptr brnch, const bool equality = true)\n         : branch_(brnch),\n           branch_deletable_(branch_deletable(branch_)),\n           equality_(equality)\n         {}\n\n        ~null_eq_node()\n         {\n            if (branch_ && branch_deletable_)\n            {\n               destroy_node(branch_);\n            }\n         }\n\n         inline T value() const\n         {\n            const T v = branch_->value();\n            const bool result = details::numeric::is_nan(v);\n\n            if (result)\n               return (equality_) ? T(1) : T(0);\n            else\n               return (equality_) ? T(0) : T(1);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_nulleq;\n         }\n\n         inline operator_type operation() const\n         {\n            return details::e_eq;\n         }\n\n         inline expression_node<T>* branch(const std::size_t&) const\n         {\n            return branch_;\n         }\n\n      private:\n\n         expression_ptr branch_;\n         const bool branch_deletable_;\n         bool equality_;\n      };\n\n      template <typename T>\n      class literal_node : public expression_node<T>\n      {\n      public:\n\n         explicit literal_node(const T& v)\n         : value_(v)\n         {}\n\n         inline T value() const\n         {\n            return value_;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_constant;\n         }\n\n         inline expression_node<T>* branch(const std::size_t&) const\n    
     {\n            return reinterpret_cast<expression_node<T>*>(0);\n         }\n\n      private:\n\n         literal_node(literal_node<T>&) {}\n         literal_node<T>& operator=(literal_node<T>&) { return (*this); }\n\n         const T value_;\n      };\n\n      template <typename T>\n      struct range_pack;\n\n      template <typename T>\n      struct range_data_type;\n\n      template <typename T>\n      class range_interface\n      {\n      public:\n\n         typedef range_pack<T> range_t;\n\n         virtual ~range_interface()\n         {}\n\n         virtual range_t& range_ref() = 0;\n\n         virtual const range_t& range_ref() const = 0;\n      };\n\n      #ifndef exprtk_disable_string_capabilities\n      template <typename T>\n      class string_base_node\n      {\n      public:\n\n         typedef range_data_type<T> range_data_type_t;\n\n         virtual ~string_base_node()\n         {}\n\n         virtual std::string str () const = 0;\n\n         virtual char_cptr   base() const = 0;\n\n         virtual std::size_t size() const = 0;\n      };\n\n      template <typename T>\n      class string_literal_node : public expression_node <T>,\n                                  public string_base_node<T>,\n                                  public range_interface <T>\n      {\n      public:\n\n         typedef range_pack<T> range_t;\n\n         explicit string_literal_node(const std::string& v)\n         : value_(v)\n         {\n            rp_.n0_c = std::make_pair<bool,std::size_t>(true,0);\n            rp_.n1_c = std::make_pair<bool,std::size_t>(true,v.size() - 1);\n            rp_.cache.first  = rp_.n0_c.second;\n            rp_.cache.second = rp_.n1_c.second;\n         }\n\n         inline T value() const\n         {\n            return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_stringconst;\n         }\n\n         inline 
expression_node<T>* branch(const std::size_t&) const\n         {\n            return reinterpret_cast<expression_node<T>*>(0);\n         }\n\n         std::string str() const\n         {\n            return value_;\n         }\n\n         char_cptr base() const\n         {\n            return value_.data();\n         }\n\n         std::size_t size() const\n         {\n            return value_.size();\n         }\n\n         range_t& range_ref()\n         {\n            return rp_;\n         }\n\n         const range_t& range_ref() const\n         {\n            return rp_;\n         }\n\n      private:\n\n         string_literal_node(const string_literal_node<T>&);\n         string_literal_node<T>& operator=(const string_literal_node<T>&);\n\n         const std::string value_;\n         range_t rp_;\n      };\n      #endif\n\n      template <typename T>\n      class unary_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         unary_node(const operator_type& opr,\n                    expression_ptr brnch)\n         : operation_(opr),\n           branch_(brnch),\n           branch_deletable_(branch_deletable(branch_))\n         {}\n\n        ~unary_node()\n         {\n            if (branch_ && branch_deletable_)\n            {\n               destroy_node(branch_);\n            }\n         }\n\n         inline T value() const\n         {\n            const T arg = branch_->value();\n\n            return numeric::process<T>(operation_,arg);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_unary;\n         }\n\n         inline operator_type operation() const\n         {\n            return operation_;\n         }\n\n         inline expression_node<T>* branch(const std::size_t&) const\n         {\n            return branch_;\n         }\n\n         inline void release()\n         {\n            branch_deletable_ 
= false;\n         }\n\n      protected:\n\n         operator_type operation_;\n         expression_ptr branch_;\n         bool branch_deletable_;\n      };\n\n      template <typename T, std::size_t D, bool B>\n      struct construct_branch_pair\n      {\n         template <std::size_t N>\n         static inline void process(std::pair<expression_node<T>*,bool> (&)[N], expression_node<T>*)\n         {}\n      };\n\n      template <typename T, std::size_t D>\n      struct construct_branch_pair<T,D,true>\n      {\n         template <std::size_t N>\n         static inline void process(std::pair<expression_node<T>*,bool> (&branch)[N], expression_node<T>* b)\n         {\n            if (b)\n            {\n               branch[D] = std::make_pair(b,branch_deletable(b));\n            }\n         }\n      };\n\n      template <std::size_t N, typename T>\n      inline void init_branches(std::pair<expression_node<T>*,bool> (&branch)[N],\n                                expression_node<T>* b0,\n                                expression_node<T>* b1 = reinterpret_cast<expression_node<T>*>(0),\n                                expression_node<T>* b2 = reinterpret_cast<expression_node<T>*>(0),\n                                expression_node<T>* b3 = reinterpret_cast<expression_node<T>*>(0),\n                                expression_node<T>* b4 = reinterpret_cast<expression_node<T>*>(0),\n                                expression_node<T>* b5 = reinterpret_cast<expression_node<T>*>(0),\n                                expression_node<T>* b6 = reinterpret_cast<expression_node<T>*>(0),\n                                expression_node<T>* b7 = reinterpret_cast<expression_node<T>*>(0),\n                                expression_node<T>* b8 = reinterpret_cast<expression_node<T>*>(0),\n                                expression_node<T>* b9 = reinterpret_cast<expression_node<T>*>(0))\n      {\n         construct_branch_pair<T,0,(N > 0)>::process(branch,b0);\n         
construct_branch_pair<T,1,(N > 1)>::process(branch,b1);\n         construct_branch_pair<T,2,(N > 2)>::process(branch,b2);\n         construct_branch_pair<T,3,(N > 3)>::process(branch,b3);\n         construct_branch_pair<T,4,(N > 4)>::process(branch,b4);\n         construct_branch_pair<T,5,(N > 5)>::process(branch,b5);\n         construct_branch_pair<T,6,(N > 6)>::process(branch,b6);\n         construct_branch_pair<T,7,(N > 7)>::process(branch,b7);\n         construct_branch_pair<T,8,(N > 8)>::process(branch,b8);\n         construct_branch_pair<T,9,(N > 9)>::process(branch,b9);\n      }\n\n      struct cleanup_branches\n      {\n         template <typename T, std::size_t N>\n         static inline void execute(std::pair<expression_node<T>*,bool> (&branch)[N])\n         {\n            for (std::size_t i = 0; i < N; ++i)\n            {\n               if (branch[i].first && branch[i].second)\n               {\n                  destroy_node(branch[i].first);\n               }\n            }\n         }\n\n         template <typename T,\n                   typename Allocator,\n                   template <typename,typename> class Sequence>\n         static inline void execute(Sequence<std::pair<expression_node<T>*,bool>,Allocator>& branch)\n         {\n            for (std::size_t i = 0; i < branch.size(); ++i)\n            {\n               if (branch[i].first && branch[i].second)\n               {\n                  destroy_node(branch[i].first);\n               }\n            }\n         }\n      };\n\n      template <typename T>\n      class binary_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef std::pair<expression_ptr,bool> branch_t;\n\n         binary_node(const operator_type& opr,\n                     expression_ptr branch0,\n                     expression_ptr branch1)\n         : operation_(opr)\n         {\n            init_branches<2>(branch_, branch0, branch1);\n         
}\n\n        ~binary_node()\n         {\n            cleanup_branches::execute<T,2>(branch_);\n         }\n\n         inline T value() const\n         {\n            const T arg0 = branch_[0].first->value();\n            const T arg1 = branch_[1].first->value();\n\n            return numeric::process<T>(operation_,arg0,arg1);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_binary;\n         }\n\n         inline operator_type operation()\n         {\n            return operation_;\n         }\n\n         inline expression_node<T>* branch(const std::size_t& index = 0) const\n         {\n            if (0 == index)\n               return branch_[0].first;\n            else if (1 == index)\n               return branch_[1].first;\n            else\n               return reinterpret_cast<expression_ptr>(0);\n         }\n\n      protected:\n\n         operator_type operation_;\n         branch_t branch_[2];\n      };\n\n      template <typename T, typename Operation>\n      class binary_ext_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef std::pair<expression_ptr,bool> branch_t;\n\n         binary_ext_node(expression_ptr branch0, expression_ptr branch1)\n         {\n            init_branches<2>(branch_, branch0, branch1);\n         }\n\n        ~binary_ext_node()\n         {\n            cleanup_branches::execute<T,2>(branch_);\n         }\n\n         inline T value() const\n         {\n            const T arg0 = branch_[0].first->value();\n            const T arg1 = branch_[1].first->value();\n\n            return Operation::process(arg0,arg1);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_binary_ext;\n         }\n\n         inline operator_type operation()\n         {\n            return 
Operation::operation();\n         }\n\n         inline expression_node<T>* branch(const std::size_t& index = 0) const\n         {\n            if (0 == index)\n               return branch_[0].first;\n            else if (1 == index)\n               return branch_[1].first;\n            else\n               return reinterpret_cast<expression_ptr>(0);\n         }\n\n      protected:\n\n         branch_t branch_[2];\n      };\n\n      template <typename T>\n      class trinary_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef std::pair<expression_ptr,bool> branch_t;\n\n         trinary_node(const operator_type& opr,\n                      expression_ptr branch0,\n                      expression_ptr branch1,\n                      expression_ptr branch2)\n         : operation_(opr)\n         {\n            init_branches<3>(branch_, branch0, branch1, branch2);\n         }\n\n        ~trinary_node()\n         {\n            cleanup_branches::execute<T,3>(branch_);\n         }\n\n         inline T value() const\n         {\n            const T arg0 = branch_[0].first->value();\n            const T arg1 = branch_[1].first->value();\n            const T arg2 = branch_[2].first->value();\n\n            switch (operation_)\n            {\n               case e_inrange : return (arg1 < arg0) ? T(0) : ((arg1 > arg2) ? T(0) : T(1));\n\n               case e_clamp   : return (arg1 < arg0) ? arg0 : (arg1 > arg2 ? arg2 : arg1);\n\n               case e_iclamp  : if ((arg1 <= arg0) || (arg1 >= arg2))\n                                   return arg1;\n                                else\n                                   return ((T(2) * arg1  <= (arg2 + arg0)) ? 
arg0 : arg2);\n\n               default        : exprtk_debug((\"trinary_node::value() - Error: Invalid operation\\n\"));\n                                return std::numeric_limits<T>::quiet_NaN();\n            }\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_trinary;\n         }\n\n      protected:\n\n         operator_type operation_;\n         branch_t branch_[3];\n      };\n\n      template <typename T>\n      class quaternary_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef std::pair<expression_ptr,bool> branch_t;\n\n         quaternary_node(const operator_type& opr,\n                         expression_ptr branch0,\n                         expression_ptr branch1,\n                         expression_ptr branch2,\n                         expression_ptr branch3)\n         : operation_(opr)\n         {\n            init_branches<4>(branch_, branch0, branch1, branch2, branch3);\n         }\n\n        ~quaternary_node()\n         {\n            cleanup_branches::execute<T,4>(branch_);\n         }\n\n         inline T value() const\n         {\n            return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_quaternary;\n         }\n\n      protected:\n\n         operator_type operation_;\n         branch_t branch_[4];\n      };\n\n      template <typename T>\n      class conditional_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         conditional_node(expression_ptr test,\n                          expression_ptr consequent,\n                          expression_ptr alternative)\n         : test_(test),\n           consequent_(consequent),\n           alternative_(alternative),\n           
test_deletable_(branch_deletable(test_)),\n           consequent_deletable_(branch_deletable(consequent_)),\n           alternative_deletable_(branch_deletable(alternative_))\n         {}\n\n        ~conditional_node()\n         {\n            if (test_ && test_deletable_)\n            {\n               destroy_node(test_);\n            }\n\n            if (consequent_ && consequent_deletable_ )\n            {\n               destroy_node(consequent_);\n            }\n\n            if (alternative_ && alternative_deletable_)\n            {\n               destroy_node(alternative_);\n            }\n         }\n\n         inline T value() const\n         {\n            if (is_true(test_))\n               return consequent_->value();\n            else\n               return alternative_->value();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_conditional;\n         }\n\n      private:\n\n         expression_ptr test_;\n         expression_ptr consequent_;\n         expression_ptr alternative_;\n         const bool test_deletable_;\n         const bool consequent_deletable_;\n         const bool alternative_deletable_;\n      };\n\n      template <typename T>\n      class cons_conditional_node : public expression_node<T>\n      {\n      public:\n\n         // Consequent only conditional statement node\n         typedef expression_node<T>* expression_ptr;\n\n         cons_conditional_node(expression_ptr test,\n                               expression_ptr consequent)\n         : test_(test),\n           consequent_(consequent),\n           test_deletable_(branch_deletable(test_)),\n           consequent_deletable_(branch_deletable(consequent_))\n         {}\n\n        ~cons_conditional_node()\n         {\n            if (test_ && test_deletable_)\n            {\n               destroy_node(test_);\n            }\n\n            if (consequent_ && consequent_deletable_)\n            
{\n               destroy_node(consequent_);\n            }\n         }\n\n         inline T value() const\n         {\n            if (is_true(test_))\n               return consequent_->value();\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_conditional;\n         }\n\n      private:\n\n         expression_ptr test_;\n         expression_ptr consequent_;\n         const bool test_deletable_;\n         const bool consequent_deletable_;\n      };\n\n      #ifndef exprtk_disable_break_continue\n      template <typename T>\n      class break_exception\n      {\n      public:\n\n         break_exception(const T& v)\n         : value(v)\n         {}\n\n         T value;\n      };\n\n      class continue_exception\n      {};\n\n      template <typename T>\n      class break_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         break_node(expression_ptr ret = expression_ptr(0))\n         : return_(ret),\n           return_deletable_(branch_deletable(return_))\n         {}\n\n        ~break_node()\n         {\n            if (return_deletable_)\n            {\n               destroy_node(return_);\n            }\n         }\n\n         inline T value() const\n         {\n            throw break_exception<T>(return_ ? 
return_->value() : std::numeric_limits<T>::quiet_NaN());\n            #ifndef _MSC_VER\n            return std::numeric_limits<T>::quiet_NaN();\n            #endif\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_break;\n         }\n\n      private:\n\n         expression_ptr return_;\n         const bool return_deletable_;\n      };\n\n      template <typename T>\n      class continue_node : public expression_node<T>\n      {\n      public:\n\n         inline T value() const\n         {\n            throw continue_exception();\n            #ifndef _MSC_VER\n            return std::numeric_limits<T>::quiet_NaN();\n            #endif\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_break;\n         }\n      };\n      #endif\n\n      template <typename T>\n      class while_loop_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         while_loop_node(expression_ptr condition, expression_ptr loop_body)\n         : condition_(condition),\n           loop_body_(loop_body),\n           condition_deletable_(branch_deletable(condition_)),\n           loop_body_deletable_(branch_deletable(loop_body_))\n         {}\n\n        ~while_loop_node()\n         {\n            if (condition_ && condition_deletable_)\n            {\n               destroy_node(condition_);\n            }\n\n            if (loop_body_ && loop_body_deletable_)\n            {\n               destroy_node(loop_body_);\n            }\n         }\n\n         inline T value() const\n         {\n            T result = T(0);\n\n            while (is_true(condition_))\n            {\n               result = loop_body_->value();\n            }\n\n            return result;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n     
       return expression_node<T>::e_while;\n         }\n\n      private:\n\n         expression_ptr condition_;\n         expression_ptr loop_body_;\n         const bool condition_deletable_;\n         const bool loop_body_deletable_;\n      };\n\n      template <typename T>\n      class repeat_until_loop_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         repeat_until_loop_node(expression_ptr condition, expression_ptr loop_body)\n         : condition_(condition),\n           loop_body_(loop_body),\n           condition_deletable_(branch_deletable(condition_)),\n           loop_body_deletable_(branch_deletable(loop_body_))\n         {}\n\n        ~repeat_until_loop_node()\n         {\n            if (condition_ && condition_deletable_)\n            {\n               destroy_node(condition_);\n            }\n\n            if (loop_body_ && loop_body_deletable_)\n            {\n               destroy_node(loop_body_);\n            }\n         }\n\n         inline T value() const\n         {\n            T result = T(0);\n\n            do\n            {\n               result = loop_body_->value();\n            }\n            while (is_false(condition_));\n\n            return result;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_repeat;\n         }\n\n      private:\n\n         expression_ptr condition_;\n         expression_ptr loop_body_;\n         const bool condition_deletable_;\n         const bool loop_body_deletable_;\n      };\n\n      template <typename T>\n      class for_loop_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         for_loop_node(expression_ptr initialiser,\n                       expression_ptr condition,\n                       expression_ptr incrementor,\n                       expression_ptr loop_body)\n      
   : initialiser_(initialiser),\n           condition_  (condition  ),\n           incrementor_(incrementor),\n           loop_body_  (loop_body  ),\n           initialiser_deletable_(branch_deletable(initialiser_)),\n           condition_deletable_  (branch_deletable(condition_  )),\n           incrementor_deletable_(branch_deletable(incrementor_)),\n           loop_body_deletable_  (branch_deletable(loop_body_  ))\n         {}\n\n        ~for_loop_node()\n         {\n            if (initialiser_ && initialiser_deletable_)\n            {\n               destroy_node(initialiser_);\n            }\n\n            if (condition_ && condition_deletable_)\n            {\n               destroy_node(condition_);\n            }\n\n            if (incrementor_ && incrementor_deletable_)\n            {\n               destroy_node(incrementor_);\n            }\n\n            if (loop_body_ && loop_body_deletable_)\n            {\n               destroy_node(loop_body_);\n            }\n         }\n\n         inline T value() const\n         {\n            T result = T(0);\n\n            if (initialiser_)\n               initialiser_->value();\n\n            if (incrementor_)\n            {\n               while (is_true(condition_))\n               {\n                  result = loop_body_->value();\n                  incrementor_->value();\n               }\n            }\n            else\n            {\n               while (is_true(condition_))\n               {\n                  result = loop_body_->value();\n               }\n            }\n\n            return result;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_for;\n         }\n\n      private:\n\n         expression_ptr initialiser_      ;\n         expression_ptr condition_        ;\n         expression_ptr incrementor_      ;\n         expression_ptr loop_body_        ;\n         const bool initialiser_deletable_;\n       
  const bool condition_deletable_  ;\n         const bool incrementor_deletable_;\n         const bool loop_body_deletable_  ;\n      };\n\n      #ifndef exprtk_disable_break_continue\n      template <typename T>\n      class while_loop_bc_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         while_loop_bc_node(expression_ptr condition, expression_ptr loop_body)\n         : condition_(condition),\n           loop_body_(loop_body),\n           condition_deletable_(branch_deletable(condition_)),\n           loop_body_deletable_(branch_deletable(loop_body_))\n         {}\n\n        ~while_loop_bc_node()\n         {\n            if (condition_ && condition_deletable_)\n            {\n               destroy_node(condition_);\n            }\n\n            if (loop_body_ && loop_body_deletable_)\n            {\n               destroy_node(loop_body_);\n            }\n         }\n\n         inline T value() const\n         {\n            T result = T(0);\n\n            while (is_true(condition_))\n            {\n               try\n               {\n                  result = loop_body_->value();\n               }\n               catch(const break_exception<T>& e)\n               {\n                  return e.value;\n               }\n               catch(const continue_exception&)\n               {}\n            }\n\n            return result;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_while;\n         }\n\n      private:\n\n         expression_ptr condition_;\n         expression_ptr loop_body_;\n         const bool condition_deletable_;\n         const bool loop_body_deletable_;\n      };\n\n      template <typename T>\n      class repeat_until_loop_bc_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         
repeat_until_loop_bc_node(expression_ptr condition, expression_ptr loop_body)\n         : condition_(condition),\n           loop_body_(loop_body),\n           condition_deletable_(branch_deletable(condition_)),\n           loop_body_deletable_(branch_deletable(loop_body_))\n         {}\n\n        ~repeat_until_loop_bc_node()\n         {\n            if (condition_ && condition_deletable_)\n            {\n               destroy_node(condition_);\n            }\n\n            if (loop_body_ && loop_body_deletable_)\n            {\n               destroy_node(loop_body_);\n            }\n         }\n\n         inline T value() const\n         {\n            T result = T(0);\n\n            do\n            {\n               try\n               {\n                  result = loop_body_->value();\n               }\n               catch(const break_exception<T>& e)\n               {\n                  return e.value;\n               }\n               catch(const continue_exception&)\n               {}\n            }\n            while (is_false(condition_));\n\n            return result;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_repeat;\n         }\n\n      private:\n\n         expression_ptr condition_;\n         expression_ptr loop_body_;\n         const bool condition_deletable_;\n         const bool loop_body_deletable_;\n      };\n\n      template <typename T>\n      class for_loop_bc_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         for_loop_bc_node(expression_ptr initialiser,\n                       expression_ptr condition,\n                       expression_ptr incrementor,\n                       expression_ptr loop_body)\n         : initialiser_(initialiser),\n           condition_  (condition  ),\n           incrementor_(incrementor),\n           loop_body_  (loop_body  ),\n           
initialiser_deletable_(branch_deletable(initialiser_)),\n           condition_deletable_  (branch_deletable(condition_  )),\n           incrementor_deletable_(branch_deletable(incrementor_)),\n           loop_body_deletable_  (branch_deletable(loop_body_  ))\n         {}\n\n        ~for_loop_bc_node()\n         {\n            if (initialiser_ && initialiser_deletable_)\n            {\n               destroy_node(initialiser_);\n            }\n\n            if (condition_ && condition_deletable_)\n            {\n               destroy_node(condition_);\n            }\n\n            if (incrementor_ && incrementor_deletable_)\n            {\n               destroy_node(incrementor_);\n            }\n\n            if (loop_body_ && loop_body_deletable_)\n            {\n               destroy_node(loop_body_);\n            }\n         }\n\n         inline T value() const\n         {\n            T result = T(0);\n\n            if (initialiser_)\n               initialiser_->value();\n\n            if (incrementor_)\n            {\n               while (is_true(condition_))\n               {\n                  try\n                  {\n                     result = loop_body_->value();\n                  }\n                  catch(const break_exception<T>& e)\n                  {\n                     return e.value;\n                  }\n                  catch(const continue_exception&)\n                  {}\n\n                  incrementor_->value();\n               }\n            }\n            else\n            {\n               while (is_true(condition_))\n               {\n                  try\n                  {\n                     result = loop_body_->value();\n                  }\n                  catch(const break_exception<T>& e)\n                  {\n                     return e.value;\n                  }\n                  catch(const continue_exception&)\n                  {}\n               }\n            }\n\n            return result;\n         
}\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_for;\n         }\n\n      private:\n\n         expression_ptr initialiser_;\n         expression_ptr condition_  ;\n         expression_ptr incrementor_;\n         expression_ptr loop_body_  ;\n         const bool initialiser_deletable_;\n         const bool condition_deletable_  ;\n         const bool incrementor_deletable_;\n         const bool loop_body_deletable_  ;\n      };\n      #endif\n\n      template <typename T>\n      class switch_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         switch_node(const Sequence<expression_ptr,Allocator>& arg_list)\n         {\n            if (1 != (arg_list.size() & 1))\n               return;\n\n            arg_list_.resize(arg_list.size());\n            delete_branch_.resize(arg_list.size());\n\n            for (std::size_t i = 0; i < arg_list.size(); ++i)\n            {\n               if (arg_list[i])\n               {\n                       arg_list_[i] = arg_list[i];\n                  delete_branch_[i] = static_cast<unsigned char>(branch_deletable(arg_list_[i]) ? 
1 : 0);\n               }\n               else\n               {\n                  arg_list_.clear();\n                  delete_branch_.clear();\n                  return;\n               }\n            }\n         }\n\n        ~switch_node()\n         {\n            for (std::size_t i = 0; i < arg_list_.size(); ++i)\n            {\n               if (arg_list_[i] && delete_branch_[i])\n               {\n                  destroy_node(arg_list_[i]);\n               }\n            }\n         }\n\n         inline T value() const\n         {\n            if (!arg_list_.empty())\n            {\n               const std::size_t upper_bound = (arg_list_.size() - 1);\n\n               for (std::size_t i = 0; i < upper_bound; i += 2)\n               {\n                  expression_ptr condition  = arg_list_[i    ];\n                  expression_ptr consequent = arg_list_[i + 1];\n\n                  if (is_true(condition))\n                  {\n                     return consequent->value();\n                  }\n               }\n\n               return arg_list_[upper_bound]->value();\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_switch;\n         }\n\n      protected:\n\n         std::vector<expression_ptr> arg_list_;\n         std::vector<unsigned char> delete_branch_;\n      };\n\n      template <typename T, typename Switch_N>\n      class switch_n_node : public switch_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         switch_n_node(const Sequence<expression_ptr,Allocator>& arg_list)\n         : switch_node<T>(arg_list)\n         {}\n\n         inline T value() const\n         {\n            return 
Switch_N::process(switch_node<T>::arg_list_);\n         }\n      };\n\n      template <typename T>\n      class multi_switch_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         multi_switch_node(const Sequence<expression_ptr,Allocator>& arg_list)\n         {\n            if (0 != (arg_list.size() & 1))\n               return;\n\n            arg_list_.resize(arg_list.size());\n            delete_branch_.resize(arg_list.size());\n\n            for (std::size_t i = 0; i < arg_list.size(); ++i)\n            {\n               if (arg_list[i])\n               {\n                       arg_list_[i] = arg_list[i];\n                  delete_branch_[i] = static_cast<unsigned char>(branch_deletable(arg_list_[i]) ? 1 : 0);\n               }\n               else\n               {\n                  arg_list_.clear();\n                  delete_branch_.clear();\n                  return;\n               }\n            }\n         }\n\n        ~multi_switch_node()\n         {\n            for (std::size_t i = 0; i < arg_list_.size(); ++i)\n            {\n               if (arg_list_[i] && delete_branch_[i])\n               {\n                  destroy_node(arg_list_[i]);\n               }\n            }\n         }\n\n         inline T value() const\n         {\n            T result = T(0);\n\n            if (arg_list_.empty())\n            {\n               return std::numeric_limits<T>::quiet_NaN();\n            }\n\n            const std::size_t upper_bound = (arg_list_.size() - 1);\n\n            for (std::size_t i = 0; i < upper_bound; i += 2)\n            {\n               expression_ptr condition  = arg_list_[i    ];\n               expression_ptr consequent = arg_list_[i + 1];\n\n               if (is_true(condition))\n               {\n                  result = consequent->value();\n              
 }\n            }\n\n            return result;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_mswitch;\n         }\n\n      private:\n\n         std::vector<expression_ptr> arg_list_;\n         std::vector<unsigned char> delete_branch_;\n      };\n\n      template <typename T>\n      class ivariable\n      {\n      public:\n\n         virtual ~ivariable()\n         {}\n\n         virtual T& ref() = 0;\n         virtual const T& ref() const = 0;\n      };\n\n      template <typename T>\n      class variable_node : public expression_node<T>,\n                            public ivariable      <T>\n      {\n      public:\n\n         static T null_value;\n\n         explicit variable_node()\n         : value_(&null_value)\n         {}\n\n         variable_node(T& v)\n         : value_(&v)\n         {}\n\n         inline bool operator <(const variable_node<T>& v) const\n         {\n            return this < (&v);\n         }\n\n         inline T value() const\n         {\n            return (*value_);\n         }\n\n         inline T& ref()\n         {\n            return (*value_);\n         }\n\n         inline const T& ref() const\n         {\n            return (*value_);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_variable;\n         }\n\n      private:\n\n         T* value_;\n      };\n\n      template <typename T>\n      T variable_node<T>::null_value = T(std::numeric_limits<T>::quiet_NaN());\n\n      template <typename T>\n      struct range_pack\n      {\n         typedef expression_node<T>*           expression_node_ptr;\n         typedef std::pair<std::size_t,std::size_t> cached_range_t;\n\n         range_pack()\n         : n0_e (std::make_pair(false,expression_node_ptr(0))),\n           n1_e (std::make_pair(false,expression_node_ptr(0))),\n           n0_c 
(std::make_pair(false,0)),\n           n1_c (std::make_pair(false,0)),\n           cache(std::make_pair(0,0))\n         {}\n\n         void clear()\n         {\n            n0_e  = std::make_pair(false,expression_node_ptr(0));\n            n1_e  = std::make_pair(false,expression_node_ptr(0));\n            n0_c  = std::make_pair(false,0);\n            n1_c  = std::make_pair(false,0);\n            cache = std::make_pair(0,0);\n         }\n\n         void free()\n         {\n            if (n0_e.first && n0_e.second)\n            {\n               n0_e.first = false;\n\n               if (\n                    !is_variable_node(n0_e.second) &&\n                    !is_string_node  (n0_e.second)\n                  )\n               {\n                  destroy_node(n0_e.second);\n               }\n            }\n\n            if (n1_e.first && n1_e.second)\n            {\n               n1_e.first = false;\n\n               if (\n                    !is_variable_node(n1_e.second) &&\n                    !is_string_node  (n1_e.second)\n                  )\n               {\n                  destroy_node(n1_e.second);\n               }\n            }\n         }\n\n         bool const_range()\n         {\n           return ( n0_c.first &&  n1_c.first) &&\n                  (!n0_e.first && !n1_e.first);\n         }\n\n         bool var_range()\n         {\n           return ( n0_e.first &&  n1_e.first) &&\n                  (!n0_c.first && !n1_c.first);\n         }\n\n         bool operator() (std::size_t& r0, std::size_t& r1, const std::size_t& size = std::numeric_limits<std::size_t>::max()) const\n         {\n            if (n0_c.first)\n               r0 = n0_c.second;\n            else if (n0_e.first)\n            {\n               T r0_value = n0_e.second->value();\n\n               if (r0_value < 0)\n                  return false;\n               else\n                  r0 = static_cast<std::size_t>(details::numeric::to_int64(r0_value));\n            }\n           
 else\n               return false;\n\n            if (n1_c.first)\n               r1 = n1_c.second;\n            else if (n1_e.first)\n            {\n               T r1_value = n1_e.second->value();\n\n               if (r1_value < 0)\n                  return false;\n               else\n                  r1 = static_cast<std::size_t>(details::numeric::to_int64(r1_value));\n            }\n            else\n               return false;\n\n            if (\n                 (std::numeric_limits<std::size_t>::max() != size) &&\n                 (std::numeric_limits<std::size_t>::max() == r1  )\n               )\n            {\n               r1 = size - 1;\n            }\n\n            cache.first  = r0;\n            cache.second = r1;\n\n            return (r0 <= r1);\n         }\n\n         inline std::size_t const_size() const\n         {\n            return (n1_c.second - n0_c.second + 1);\n         }\n\n         inline std::size_t cache_size() const\n         {\n            return (cache.second - cache.first + 1);\n         }\n\n         std::pair<bool,expression_node_ptr> n0_e;\n         std::pair<bool,expression_node_ptr> n1_e;\n         std::pair<bool,std::size_t        > n0_c;\n         std::pair<bool,std::size_t        > n1_c;\n         mutable cached_range_t             cache;\n      };\n\n      template <typename T>\n      class string_base_node;\n\n      template <typename T>\n      struct range_data_type\n      {\n         typedef range_pack<T> range_t;\n         typedef string_base_node<T>* strbase_ptr_t;\n\n         range_data_type()\n         : range(0),\n           data (0),\n           size (0),\n           type_size(0),\n           str_node (0)\n         {}\n\n         range_t*      range;\n         void*         data;\n         std::size_t   size;\n         std::size_t   type_size;\n         strbase_ptr_t str_node;\n      };\n\n      template <typename T> class vector_node;\n\n      template <typename T>\n      class vector_interface\n      {\n 
     public:\n\n         typedef vector_node<T>*   vector_node_ptr;\n         typedef vec_data_store<T>           vds_t;\n\n         virtual ~vector_interface()\n         {}\n\n         virtual std::size_t size   () const = 0;\n\n         virtual vector_node_ptr vec() const = 0;\n\n         virtual vector_node_ptr vec()       = 0;\n\n         virtual       vds_t& vds   ()       = 0;\n\n         virtual const vds_t& vds   () const = 0;\n\n         virtual bool side_effect   () const { return false; }\n      };\n\n      template <typename T>\n      class vector_node : public expression_node <T>,\n                          public vector_interface<T>\n      {\n      public:\n\n         typedef expression_node<T>*  expression_ptr;\n         typedef vector_holder<T>    vector_holder_t;\n         typedef vector_node<T>*     vector_node_ptr;\n         typedef vec_data_store<T>             vds_t;\n\n         vector_node(vector_holder_t* vh)\n         : vector_holder_(vh),\n           vds_((*vector_holder_).size(),(*vector_holder_)[0])\n         {\n            vector_holder_->set_ref(&vds_.ref());\n         }\n\n         vector_node(const vds_t& vds, vector_holder_t* vh)\n         : vector_holder_(vh),\n           vds_(vds)\n         {}\n\n         inline T value() const\n         {\n            return vds().data()[0];\n         }\n\n         vector_node_ptr vec() const\n         {\n            return const_cast<vector_node_ptr>(this);\n         }\n\n         vector_node_ptr vec()\n         {\n            return this;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vector;\n         }\n\n         std::size_t size() const\n         {\n            return vds().size();\n         }\n\n         vds_t& vds()\n         {\n            return vds_;\n         }\n\n         const vds_t& vds() const\n         {\n            return vds_;\n         }\n\n         inline vector_holder_t& vec_holder()\n  
       {\n            return (*vector_holder_);\n         }\n\n      private:\n\n         vector_holder_t* vector_holder_;\n         vds_t                      vds_;\n      };\n\n      template <typename T>\n      class vector_elem_node : public expression_node<T>,\n                               public ivariable      <T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef vector_holder<T>    vector_holder_t;\n         typedef vector_holder_t*    vector_holder_ptr;\n\n         vector_elem_node(expression_ptr index, vector_holder_ptr vec_holder)\n         : index_(index),\n           vec_holder_(vec_holder),\n           vector_base_((*vec_holder)[0]),\n           index_deletable_(branch_deletable(index_))\n         {}\n\n        ~vector_elem_node()\n         {\n            if (index_ && index_deletable_)\n            {\n               destroy_node(index_);\n            }\n         }\n\n         inline T value() const\n         {\n            return *(vector_base_ + static_cast<std::size_t>(details::numeric::to_int64(index_->value())));\n         }\n\n         inline T& ref()\n         {\n            return *(vector_base_ + static_cast<std::size_t>(details::numeric::to_int64(index_->value())));\n         }\n\n         inline const T& ref() const\n         {\n            return *(vector_base_ + static_cast<std::size_t>(details::numeric::to_int64(index_->value())));\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vecelem;\n         }\n\n         inline vector_holder_t& vec_holder()\n         {\n            return (*vec_holder_);\n         }\n\n      private:\n\n         expression_ptr index_;\n         vector_holder_ptr vec_holder_;\n         T* vector_base_;\n         const bool index_deletable_;\n      };\n\n      template <typename T>\n      class rebasevector_elem_node : public expression_node<T>,\n                                   
  public ivariable      <T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef vector_holder<T>    vector_holder_t;\n         typedef vector_holder_t*    vector_holder_ptr;\n         typedef vec_data_store<T>   vds_t;\n\n         rebasevector_elem_node(expression_ptr index, vector_holder_ptr vec_holder)\n         : index_(index),\n           index_deletable_(branch_deletable(index_)),\n           vector_holder_(vec_holder),\n           vds_((*vector_holder_).size(),(*vector_holder_)[0])\n         {\n            vector_holder_->set_ref(&vds_.ref());\n         }\n\n        ~rebasevector_elem_node()\n         {\n            if (index_ && index_deletable_)\n            {\n               destroy_node(index_);\n            }\n         }\n\n         inline T value() const\n         {\n            return *(vds_.data() + static_cast<std::size_t>(details::numeric::to_int64(index_->value())));\n         }\n\n         inline T& ref()\n         {\n            return *(vds_.data() + static_cast<std::size_t>(details::numeric::to_int64(index_->value())));\n         }\n\n         inline const T& ref() const\n         {\n            return *(vds_.data() + static_cast<std::size_t>(details::numeric::to_int64(index_->value())));\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_rbvecelem;\n         }\n\n         inline vector_holder_t& vec_holder()\n         {\n            return (*vector_holder_);\n         }\n\n      private:\n\n         expression_ptr index_;\n         const bool index_deletable_;\n         vector_holder_ptr vector_holder_;\n         vds_t             vds_;\n      };\n\n      template <typename T>\n      class rebasevector_celem_node : public expression_node<T>,\n                                      public ivariable      <T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef 
vector_holder<T>    vector_holder_t;\n         typedef vector_holder_t*    vector_holder_ptr;\n         typedef vec_data_store<T>   vds_t;\n\n         rebasevector_celem_node(const std::size_t index, vector_holder_ptr vec_holder)\n         : index_(index),\n           vector_holder_(vec_holder),\n           vds_((*vector_holder_).size(),(*vector_holder_)[0])\n         {\n            vector_holder_->set_ref(&vds_.ref());\n         }\n\n         inline T value() const\n         {\n            return *(vds_.data() + index_);\n         }\n\n         inline T& ref()\n         {\n            return *(vds_.data() + index_);\n         }\n\n         inline const T& ref() const\n         {\n            return *(vds_.data() + index_);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_rbveccelem;\n         }\n\n         inline vector_holder_t& vec_holder()\n         {\n            return (*vector_holder_);\n         }\n\n      private:\n\n         const std::size_t index_;\n         vector_holder_ptr vector_holder_;\n         vds_t vds_;\n      };\n\n      template <typename T>\n      class vector_assignment_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         vector_assignment_node(T* vector_base,\n                                const std::size_t& size,\n                                const std::vector<expression_ptr>& initialiser_list,\n                                const bool single_value_initialise)\n         : vector_base_(vector_base),\n           initialiser_list_(initialiser_list),\n           size_(size),\n           single_value_initialise_(single_value_initialise)\n         {}\n\n        ~vector_assignment_node()\n         {\n            for (std::size_t i = 0; i < initialiser_list_.size(); ++i)\n            {\n               if (branch_deletable(initialiser_list_[i]))\n               {\n                  destroy_node(initialiser_list_[i]);\n               }\n            }\n         }\n\n         inline T value() const\n         {\n            if (single_value_initialise_)\n            {\n               for (std::size_t i = 0; i < size_; ++i)\n               {\n                  *(vector_base_ + i) = initialiser_list_[0]->value();\n               }\n            }\n            else\n            {\n               const std::size_t il_size = initialiser_list_.size();\n\n               for (std::size_t i = 0; i < il_size; ++i)\n               {\n                  *(vector_base_ + i) = initialiser_list_[i]->value();\n               }\n\n               if (il_size < size_)\n               {\n                  for (std::size_t i = il_size; i < size_; ++i)\n                  {\n                     *(vector_base_ + i) = T(0);\n                  }\n               }\n            }\n\n            return *(vector_base_);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vecdefass;\n         }\n\n      private:\n\n         vector_assignment_node<T>& operator=(const vector_assignment_node<T>&);\n\n         mutable T* vector_base_;\n         std::vector<expression_ptr> initialiser_list_;\n         const std::size_t size_;\n         const bool single_value_initialise_;\n      };\n\n      template <typename T>\n      class swap_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef variable_node<T>*   variable_node_ptr;\n\n         swap_node(variable_node_ptr var0, variable_node_ptr var1)\n         : var0_(var0),\n           var1_(var1)\n         {}\n\n         inline T value() const\n         {\n            std::swap(var0_->ref(),var1_->ref());\n            return var1_->ref();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_swap;\n         
}\n\n      private:\n\n         variable_node_ptr var0_;\n         variable_node_ptr var1_;\n      };\n\n      template <typename T>\n      class swap_generic_node : public binary_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef ivariable<T>* ivariable_ptr;\n\n         swap_generic_node(expression_ptr var0, expression_ptr var1)\n         : binary_node<T>(details::e_swap, var0, var1),\n           var0_(dynamic_cast<ivariable_ptr>(var0)),\n           var1_(dynamic_cast<ivariable_ptr>(var1))\n         {}\n\n         inline T value() const\n         {\n            std::swap(var0_->ref(),var1_->ref());\n            return var1_->ref();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_swap;\n         }\n\n      private:\n\n         ivariable_ptr var0_;\n         ivariable_ptr var1_;\n      };\n\n      template <typename T>\n      class swap_vecvec_node : public binary_node     <T>,\n                               public vector_interface<T>\n      {\n      public:\n\n         typedef expression_node<T>*  expression_ptr;\n         typedef vector_node<T>*     vector_node_ptr;\n         typedef vec_data_store<T>             vds_t;\n\n         swap_vecvec_node(expression_ptr branch0,\n                          expression_ptr branch1)\n         : binary_node<T>(details::e_swap, branch0, branch1),\n           vec0_node_ptr_(0),\n           vec1_node_ptr_(0),\n           vec_size_     (0),\n           initialised_  (false)\n         {\n            if (is_ivector_node(binary_node<T>::branch_[0].first))\n            {\n               vector_interface<T>* vi = reinterpret_cast<vector_interface<T>*>(0);\n\n               if (0 != (vi = dynamic_cast<vector_interface<T>*>(binary_node<T>::branch_[0].first)))\n               {\n                  vec0_node_ptr_ = vi->vec();\n                  vds()          = vi->vds();\n               }\n       
     }\n\n            if (is_ivector_node(binary_node<T>::branch_[1].first))\n            {\n               vector_interface<T>* vi = reinterpret_cast<vector_interface<T>*>(0);\n\n               if (0 != (vi = dynamic_cast<vector_interface<T>*>(binary_node<T>::branch_[1].first)))\n               {\n                  vec1_node_ptr_ = vi->vec();\n               }\n            }\n\n            if (vec0_node_ptr_ && vec1_node_ptr_)\n            {\n               vec_size_ = std::min(vec0_node_ptr_->vds().size(),\n                                    vec1_node_ptr_->vds().size());\n\n               initialised_ = true;\n            }\n         }\n\n         inline T value() const\n         {\n            if (initialised_)\n            {\n               binary_node<T>::branch_[0].first->value();\n               binary_node<T>::branch_[1].first->value();\n\n               T* vec0 = vec0_node_ptr_->vds().data();\n               T* vec1 = vec1_node_ptr_->vds().data();\n\n               for (std::size_t i = 0; i < vec_size_; ++i)\n               {\n                  std::swap(vec0[i],vec1[i]);\n               }\n\n               return vec1_node_ptr_->value();\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         vector_node_ptr vec() const\n         {\n            return vec0_node_ptr_;\n         }\n\n         vector_node_ptr vec()\n         {\n            return vec0_node_ptr_;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vecvecswap;\n         }\n\n         std::size_t size() const\n         {\n            return vec_size_;\n         }\n\n         vds_t& vds()\n         {\n            return vds_;\n         }\n\n         const vds_t& vds() const\n         {\n            return vds_;\n         }\n\n      private:\n\n         vector_node<T>* vec0_node_ptr_;\n         vector_node<T>* vec1_node_ptr_;\n         std::size_t     
vec_size_;\n         bool            initialised_;\n         vds_t           vds_;\n      };\n\n      #ifndef exprtk_disable_string_capabilities\n      template <typename T>\n      class stringvar_node : public expression_node <T>,\n                             public string_base_node<T>,\n                             public range_interface <T>\n      {\n      public:\n\n         typedef range_pack<T> range_t;\n\n         static std::string null_value;\n\n         explicit stringvar_node()\n         : value_(&null_value)\n         {}\n\n         explicit stringvar_node(std::string& v)\n         : value_(&v)\n         {\n            rp_.n0_c = std::make_pair<bool,std::size_t>(true,0);\n            rp_.n1_c = std::make_pair<bool,std::size_t>(true,v.size() - 1);\n            rp_.cache.first  = rp_.n0_c.second;\n            rp_.cache.second = rp_.n1_c.second;\n         }\n\n         inline bool operator <(const stringvar_node<T>& v) const\n         {\n            return this < (&v);\n         }\n\n         inline T value() const\n         {\n            rp_.n1_c.second  = (*value_).size() - 1;\n            rp_.cache.second = rp_.n1_c.second;\n\n            return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         std::string str() const\n         {\n            return ref();\n         }\n\n         char_cptr base() const\n         {\n            return &(*value_)[0];\n         }\n\n         std::size_t size() const\n         {\n            return ref().size();\n         }\n\n         std::string& ref()\n         {\n            return (*value_);\n         }\n\n         const std::string& ref() const\n         {\n            return (*value_);\n         }\n\n         range_t& range_ref()\n         {\n            return rp_;\n         }\n\n         const range_t& range_ref() const\n         {\n            return rp_;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return 
expression_node<T>::e_stringvar;\n         }\n\n      private:\n\n         std::string* value_;\n         mutable range_t rp_;\n      };\n\n      template <typename T>\n      std::string stringvar_node<T>::null_value = std::string(\"\");\n\n      template <typename T>\n      class string_range_node : public expression_node <T>,\n                                public string_base_node<T>,\n                                public range_interface <T>\n      {\n      public:\n\n         typedef range_pack<T> range_t;\n\n         static std::string null_value;\n\n         explicit string_range_node(std::string& v, const range_t& rp)\n         : value_(&v),\n           rp_(rp)\n         {}\n\n         virtual ~string_range_node()\n         {\n            rp_.free();\n         }\n\n         inline bool operator <(const string_range_node<T>& v) const\n         {\n            return this < (&v);\n         }\n\n         inline T value() const\n         {\n            return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline std::string str() const\n         {\n            return (*value_);\n         }\n\n         char_cptr base() const\n         {\n            return &(*value_)[0];\n         }\n\n         std::size_t size() const\n         {\n            return ref().size();\n         }\n\n         inline range_t range() const\n         {\n            return rp_;\n         }\n\n         inline virtual std::string& ref()\n         {\n            return (*value_);\n         }\n\n         inline virtual const std::string& ref() const\n         {\n            return (*value_);\n         }\n\n         inline range_t& range_ref()\n         {\n            return rp_;\n         }\n\n         inline const range_t& range_ref() const\n         {\n            return rp_;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_stringvarrng;\n         }\n\n      private:\n\n         
std::string* value_;\n         range_t      rp_;\n      };\n\n      template <typename T>\n      std::string string_range_node<T>::null_value = std::string(\"\");\n\n      template <typename T>\n      class const_string_range_node : public expression_node <T>,\n                                      public string_base_node<T>,\n                                      public range_interface <T>\n      {\n      public:\n\n         typedef range_pack<T> range_t;\n\n         explicit const_string_range_node(const std::string& v, const range_t& rp)\n         : value_(v),\n           rp_(rp)\n         {}\n\n        ~const_string_range_node()\n         {\n            rp_.free();\n         }\n\n         inline T value() const\n         {\n            return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         std::string str() const\n         {\n            return value_;\n         }\n\n         char_cptr base() const\n         {\n            return value_.data();\n         }\n\n         std::size_t size() const\n         {\n            return value_.size();\n         }\n\n         range_t range() const\n         {\n            return rp_;\n         }\n\n         range_t& range_ref()\n         {\n            return rp_;\n         }\n\n         const range_t& range_ref() const\n         {\n            return rp_;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_cstringvarrng;\n         }\n\n      private:\n\n         const_string_range_node<T>& operator=(const const_string_range_node<T>&);\n\n         const std::string value_;\n         range_t rp_;\n      };\n\n      template <typename T>\n      class generic_string_range_node : public expression_node <T>,\n                                        public string_base_node<T>,\n                                        public range_interface <T>\n      {\n      public:\n\n         typedef expression_node <T>*  expression_ptr;\n         
typedef stringvar_node  <T>* strvar_node_ptr;
         typedef string_base_node<T>*    str_base_ptr;
         typedef range_pack      <T>          range_t;
         typedef range_t*                   range_ptr;
         typedef range_interface<T>          irange_t;
         typedef irange_t*                 irange_ptr;

         generic_string_range_node(expression_ptr str_branch, const range_t& brange)
         : initialised_(false),
           branch_(str_branch),
           branch_deletable_(branch_deletable(branch_)),
           str_base_ptr_ (0),
           str_range_ptr_(0),
           base_range_(brange)
         {
            range_.n0_c = std::make_pair<bool,std::size_t>(true,0);
            range_.n1_c = std::make_pair<bool,std::size_t>(true,0);
            range_.cache.first  = range_.n0_c.second;
            range_.cache.second = range_.n1_c.second;

            if (is_generally_string_node(branch_))
            {
               str_base_ptr_ = dynamic_cast<str_base_ptr>(branch_);

               if (0 == str_base_ptr_)
                  return;

               str_range_ptr_ = dynamic_cast<irange_ptr>(branch_);

               if (0 == str_range_ptr_)
                  return;
            }

            initialised_ = (str_base_ptr_ && str_range_ptr_);
         }

        ~generic_string_range_node()
         {
            base_range_.free();

            if (branch_ && branch_deletable_)
            {
               destroy_node(branch_);
            }
         }

         inline T value() const
         {
            if (initialised_)
            {
               branch_->value();

               std::size_t str_r0 = 0;
               std::size_t str_r1 = 0;

               std::size_t r0 = 0;
               std::size_t r1 = 0;

               range_t& range = str_range_ptr_->range_ref();

               const std::size_t base_str_size = str_base_ptr_->size();

               if (
                    range      (str_r0,str_r1,base_str_size) &&
                    base_range_(    r0,    r1,base_str_size)
                  )
               {
                  const std::size_t size = (r1 - r0) + 1;

                  range_.n1_c.second  = size - 1;
                  range_.cache.second = range_.n1_c.second;

                  value_.assign(str_base_ptr_->base() + str_r0 + r0, size);
               }
            }

            return std::numeric_limits<T>::quiet_NaN();
         }

         std::string str() const
         {
            return value_;
         }

         char_cptr base() const
         {
            return &value_[0];
         }

         std::size_t size() const
         {
            return value_.size();
         }

         range_t& range_ref()
         {
            return range_;
         }

         const range_t& range_ref() const
         {
            return range_;
         }

         inline typename expression_node<T>::node_type type() const
         {
            return expression_node<T>::e_strgenrange;
         }

      private:

         bool                initialised_;
         expression_ptr           branch_;
         const bool     branch_deletable_;
         str_base_ptr       str_base_ptr_;
         irange_ptr        str_range_ptr_;
         mutable range_t      base_range_;
         mutable range_t           range_;
         mutable std::string       value_;
      };

      template <typename T>
      class string_concat_node : public binary_node     <T>,
                                 public string_base_node<T>,
                                 public range_interface <T>
      {
      public:

         typedef expression_node <T>*  expression_ptr;
         typedef string_base_node<T>*    str_base_ptr;
         typedef range_pack      <T>          range_t;
         typedef range_t*                   range_ptr;
         typedef range_interface<T>          irange_t;
         typedef irange_t*                 irange_ptr;

         string_concat_node(const operator_type& opr,
                            expression_ptr branch0,
                            expression_ptr branch1)
         : binary_node<T>(opr, branch0, branch1),
           initialised_(false),
           str0_base_ptr_ (0),
           str1_base_ptr_ (0),
           str0_range_ptr_(0),
           str1_range_ptr_(0)
         {
            range_.n0_c = std::make_pair<bool,std::size_t>(true,0);
            range_.n1_c = std::make_pair<bool,std::size_t>(true,0);

            range_.cache.first  = range_.n0_c.second;
            range_.cache.second = range_.n1_c.second;

            if (is_generally_string_node(binary_node<T>::branch_[0].first))
            {
               str0_base_ptr_ = dynamic_cast<str_base_ptr>(binary_node<T>::branch_[0].first);

               if (0 == str0_base_ptr_)
                  return;

               str0_range_ptr_ = dynamic_cast<irange_ptr>(binary_node<T>::branch_[0].first);

               if (0 == str0_range_ptr_)
                  return;
            }

            if (is_generally_string_node(binary_node<T>::branch_[1].first))
            {
               str1_base_ptr_ = dynamic_cast<str_base_ptr>(binary_node<T>::branch_[1].first);

               if (0 == str1_base_ptr_)
                  return;

               str1_range_ptr_ = dynamic_cast<irange_ptr>(binary_node<T>::branch_[1].first);

               if (0 == str1_range_ptr_)
                  return;
            }

            initialised_ = str0_base_ptr_  &&
                           str1_base_ptr_  &&
                           str0_range_ptr_ &&
                           str1_range_ptr_ ;
         }

         inline T value() const
         {
            if (initialised_)
            {
               binary_node<T>::branch_[0].first->value();
               binary_node<T>::branch_[1].first->value();

               std::size_t str0_r0 = 0;
               std::size_t str0_r1 = 0;

               std::size_t str1_r0 = 0;
               std::size_t str1_r1 = 0;

               range_t& range0 = str0_range_ptr_->range_ref();
               range_t& range1 = str1_range_ptr_->range_ref();

               if (
                    range0(str0_r0,str0_r1,str0_base_ptr_->size()) &&
                    range1(str1_r0,str1_r1,str1_base_ptr_->size())
                  )
               {
                  const std::size_t size0 = (str0_r1 - str0_r0) + 1;
                  const std::size_t size1 = (str1_r1 - str1_r0) + 1;

                  value_.assign(str0_base_ptr_->base() + str0_r0, size0);
                  value_.append(str1_base_ptr_->base() + str1_r0, size1);

                  range_.n1_c.second  = value_.size() - 1;
                  range_.cache.second = range_.n1_c.second;
               }
            }

            return std::numeric_limits<T>::quiet_NaN();
         }

         std::string str() const
         {
            return value_;
         }

         char_cptr base() const
         {
            return &value_[0];
         }

         std::size_t size() const
         {
            return value_.size();
         }

         range_t& range_ref()
         {
            return range_;
         }

         const range_t& range_ref() const
         {
            return range_;
         }

         inline typename expression_node<T>::node_type type() const
         {
            return expression_node<T>::e_strconcat;
         }

      private:

         bool initialised_;
         str_base_ptr str0_base_ptr_;
         str_base_ptr str1_base_ptr_;
         irange_ptr   str0_range_ptr_;
         irange_ptr   str1_range_ptr_;
         mutable range_t     range_;
         mutable std::string value_;
      };

      template <typename T>
      class swap_string_node : public binary_node     <T>,
                               public string_base_node<T>,
                               public range_interface <T>
      {
      public:

         typedef expression_node <T>*  expression_ptr;
         typedef stringvar_node  <T>* strvar_node_ptr;
         typedef string_base_node<T>*    str_base_ptr;
         typedef range_pack      <T>          range_t;
         typedef range_t*                   range_ptr;
         typedef range_interface<T>          irange_t;
         typedef irange_t*                 irange_ptr;

         swap_string_node(expression_ptr branch0, expression_ptr branch1)
         : binary_node<T>(details::e_swap, branch0, branch1),
           initialised_(false),
           str0_node_ptr_(0),
           str1_node_ptr_(0)
         {
            if (is_string_node(binary_node<T>::branch_[0].first))
            {
               str0_node_ptr_ = static_cast<strvar_node_ptr>(binary_node<T>::branch_[0].first);
            }

            if (is_string_node(binary_node<T>::branch_[1].first))
            {
               str1_node_ptr_ = static_cast<strvar_node_ptr>(binary_node<T>::branch_[1].first);
            }

            initialised_ = (str0_node_ptr_ && str1_node_ptr_);
         }

         inline T value() const
         {
            if (initialised_)
            {
               binary_node<T>::branch_[0].first->value();
               binary_node<T>::branch_[1].first->value();

               std::swap(str0_node_ptr_->ref(),str1_node_ptr_->ref());
            }

            return std::numeric_limits<T>::quiet_NaN();
         }

         std::string str() const
         {
            return str0_node_ptr_->str();
         }

         char_cptr base() const
         {
           return str0_node_ptr_->base();
         }

         std::size_t size() const
         {
            return str0_node_ptr_->size();
         }

         range_t& range_ref()
         {
            return str0_node_ptr_->range_ref();
         }

         const range_t& range_ref() const
         {
            return str0_node_ptr_->range_ref();
         }

         inline typename expression_node<T>::node_type type() const
         {
            return expression_node<T>::e_strswap;
         }

      private:

         bool initialised_;
         strvar_node_ptr str0_node_ptr_;
         strvar_node_ptr str1_node_ptr_;
      };

      template <typename T>
      class swap_genstrings_node : public binary_node<T>
      {
      public:

         typedef expression_node <T>* expression_ptr;
         typedef string_base_node<T>*   str_base_ptr;
         typedef range_pack      <T>         range_t;
         typedef range_t*                  range_ptr;
         typedef range_interface<T>         irange_t;
         typedef irange_t*                irange_ptr;

         swap_genstrings_node(expression_ptr branch0,
                              expression_ptr branch1)
         : binary_node<T>(details::e_default, branch0, branch1),
           str0_base_ptr_ (0),
           str1_base_ptr_ (0),
           str0_range_ptr_(0),
           str1_range_ptr_(0),
           initialised_(false)
         {
            if (is_generally_string_node(binary_node<T>::branch_[0].first))
            {
               str0_base_ptr_ = dynamic_cast<str_base_ptr>(binary_node<T>::branch_[0].first);

               if (0 == str0_base_ptr_)
                  return;

               irange_ptr range = dynamic_cast<irange_ptr>(binary_node<T>::branch_[0].first);

               if (0 == range)
                  return;

               str0_range_ptr_ = &(range->range_ref());
            }

            if (is_generally_string_node(binary_node<T>::branch_[1].first))
            {
               str1_base_ptr_ = dynamic_cast<str_base_ptr>(binary_node<T>::branch_[1].first);

               if (0 == str1_base_ptr_)
                  return;

               irange_ptr range = dynamic_cast<irange_ptr>(binary_node<T>::branch_[1].first);

               if (0 == range)
                  return;

               str1_range_ptr_ = &(range->range_ref());
            }

            initialised_ = str0_base_ptr_  &&
                           str1_base_ptr_  &&
                           str0_range_ptr_ &&
                           str1_range_ptr_ ;
         }

         inline T value() const
         {
            if (initialised_)
            {
               binary_node<T>::branch_[0].first->value();
               binary_node<T>::branch_[1].first->value();

               std::size_t str0_r0 = 0;
               std::size_t str0_r1 = 0;

               std::size_t str1_r0 = 0;
               std::size_t str1_r1 = 0;

               range_t& range0 = (*str0_range_ptr_);
               range_t& range1 = (*str1_range_ptr_);

               if (
                    range0(str0_r0,str0_r1,str0_base_ptr_->size()) &&
                    range1(str1_r0,str1_r1,str1_base_ptr_->size())
                  )
               {
                  const std::size_t size0    = range0.cache_size();
                  const std::size_t size1    = range1.cache_size();
                  const std::size_t max_size = std::min(size0,size1);

                  char_ptr s0 = const_cast<char_ptr>(str0_base_ptr_->base() + str0_r0);
                  char_ptr s1 = const_cast<char_ptr>(str1_base_ptr_->base() + str1_r0);

                  loop_unroll::details lud(max_size);
                  char_cptr upper_bound = s0 + lud.upper_bound;

                  while (s0 < upper_bound)
                  {
                     #define exprtk_loop(N)   \
                     std::swap(s0[N], s1[N]); \

                     exprtk_loop( 0) exprtk_loop( 1)
                     exprtk_loop( 2) exprtk_loop( 3)
                     #ifndef exprtk_disable_superscalar_unroll
                     exprtk_loop( 4) exprtk_loop( 5)
                     exprtk_loop( 6) exprtk_loop( 7)
                     exprtk_loop( 8) exprtk_loop( 9)
                     exprtk_loop(10) exprtk_loop(11)
                     exprtk_loop(12) exprtk_loop(13)
                     exprtk_loop(14) exprtk_loop(15)
                     #endif

                     s0 += lud.batch_size;
                     s1 += lud.batch_size;
                  }

                  int i = 0;

                  exprtk_disable_fallthrough_begin
                  switch (lud.remainder)
                  {
                     #define case_stmt(N)                      \
                     case N : { std::swap(s0[i],s1[i]); ++i; } \

                     #ifndef exprtk_disable_superscalar_unroll
                     case_stmt(15) case_stmt(14)
                     case_stmt(13) case_stmt(12)
                     case_stmt(11) case_stmt(10)
                     case_stmt( 9) case_stmt( 8)
                     case_stmt( 7) case_stmt( 6)
                     case_stmt( 5) case_stmt( 4)
                     #endif
                     case_stmt( 3) case_stmt( 2)
                     case_stmt( 1)
                  }
                  exprtk_disable_fallthrough_end

                  #undef exprtk_loop
                  #undef case_stmt
               }
            }

            return std::numeric_limits<T>::quiet_NaN();
         }

         inline typename expression_node<T>::node_type type() const
         {
            return expression_node<T>::e_strswap;
         }

      private:

         swap_genstrings_node(swap_genstrings_node<T>&);
         swap_genstrings_node<T>& operator=(swap_genstrings_node<T>&);

         str_base_ptr str0_base_ptr_;
         str_base_ptr str1_base_ptr_;
         range_ptr    str0_range_ptr_;
         range_ptr    str1_range_ptr_;
         bool         initialised_;
      };

      template <typename T>
      class stringvar_size_node : public expression_node<T>
      {
      public:

         static std::string null_value;

         explicit stringvar_size_node()
         : value_(&null_value)
         {}

         explicit stringvar_size_node(std::string& v)
         : value_(&v)
         {}

         inline T value() const
         {
            return T((*value_).size());
         }

         inline typename expression_node<T>::node_type type() const
         {
            return expression_node<T>::e_stringvarsize;
         }

      private:

         std::string* value_;
      };

      template <typename T>
      std::string stringvar_size_node<T>::null_value = std::string("");

      template <typename T>
      class string_size_node : public expression_node<T>
      {
      public:

         typedef expression_node <T>* expression_ptr;
         typedef string_base_node<T>*   str_base_ptr;

         string_size_node(expression_ptr brnch)
         : branch_(brnch),
           branch_deletable_(branch_deletable(branch_)),
           str_base_ptr_(0)
         {
            if (is_generally_string_node(branch_))
            {
               str_base_ptr_ = dynamic_cast<str_base_ptr>(branch_);

               if (0 == str_base_ptr_)
                  return;
            }
         }

        ~string_size_node()
         {
            if (branch_ && branch_deletable_)
            {
               destroy_node(branch_);
            }
         }

         inline T value() const
         {
            T result = std::numeric_limits<T>::quiet_NaN();

            if (str_base_ptr_)
            {
               branch_->value();
               result = T(str_base_ptr_->size());
            }

            return result;
         }

         inline typename expression_node<T>::node_type type() const
         {
            return
               expression_node<T>::e_stringsize;
         }

      private:

         expression_ptr           branch_;
         const bool     branch_deletable_;
         str_base_ptr       str_base_ptr_;
      };

      struct asn_assignment
      {
         static inline void execute(std::string& s, char_cptr data, const std::size_t size)
         { s.assign(data,size); }
      };

      struct asn_addassignment
      {
         static inline void execute(std::string& s, char_cptr data, const std::size_t size)
         { s.append(data,size); }
      };

      template <typename T, typename AssignmentProcess = asn_assignment>
      class assignment_string_node : public binary_node     <T>,
                                     public string_base_node<T>,
                                     public range_interface <T>
      {
      public:

         typedef expression_node <T>*  expression_ptr;
         typedef stringvar_node  <T>* strvar_node_ptr;
         typedef string_base_node<T>*    str_base_ptr;
         typedef range_pack      <T>          range_t;
         typedef range_t*                   range_ptr;
         typedef range_interface<T>          irange_t;
         typedef irange_t*                 irange_ptr;

         assignment_string_node(const operator_type& opr,
                                expression_ptr branch0,
                                expression_ptr branch1)
         : binary_node<T>(opr, branch0, branch1),
           initialised_(false),
           str0_base_ptr_ (0),
           str1_base_ptr_ (0),
           str0_node_ptr_ (0),
           str1_range_ptr_(0)
         {
            if (is_string_node(binary_node<T>::branch_[0].first))
            {
               str0_node_ptr_ = static_cast<strvar_node_ptr>(binary_node<T>::branch_[0].first);

               str0_base_ptr_ = dynamic_cast<str_base_ptr>(binary_node<T>::branch_[0].first);
            }

            if (is_generally_string_node(binary_node<T>::branch_[1].first))
            {
               str1_base_ptr_ = dynamic_cast<str_base_ptr>(binary_node<T>::branch_[1].first);

               if (0 == str1_base_ptr_)
                  return;

               irange_ptr range = dynamic_cast<irange_ptr>(binary_node<T>::branch_[1].first);

               if (0 == range)
                  return;

               str1_range_ptr_ = &(range->range_ref());
            }

            initialised_ = str0_base_ptr_  &&
                           str1_base_ptr_  &&
                           str0_node_ptr_  &&
                           str1_range_ptr_ ;
         }

         inline T value() const
         {
            if (initialised_)
            {
               binary_node<T>::branch_[1].first->value();

               std::size_t r0 = 0;
               std::size_t r1 = 0;

               range_t& range = (*str1_range_ptr_);

               if (range(r0, r1, str1_base_ptr_->size()))
               {
                  AssignmentProcess::execute(str0_node_ptr_->ref(),
                                             str1_base_ptr_->base() + r0,
                                             (r1 - r0) + 1);

                  binary_node<T>::branch_[0].first->value();
               }
            }

            return std::numeric_limits<T>::quiet_NaN();
         }

         std::string str() const
         {
            return str0_node_ptr_->str();
         }

         char_cptr base() const
         {
           return str0_node_ptr_->base();
         }

         std::size_t size() const
         {
            return str0_node_ptr_->size();
         }

         range_t& range_ref()
         {
            return str0_node_ptr_->range_ref();
         }

         const range_t& range_ref() const
         {
            return str0_node_ptr_->range_ref();
         }

         inline typename expression_node<T>::node_type type() const
         {
            return expression_node<T>::e_strass;
         }

      private:

         bool            initialised_;
         str_base_ptr    str0_base_ptr_;
         str_base_ptr    str1_base_ptr_;
         strvar_node_ptr str0_node_ptr_;
         range_ptr       str1_range_ptr_;
      };

      template <typename T, typename AssignmentProcess = asn_assignment>
      class assignment_string_range_node : public binary_node     <T>,
                                           public string_base_node<T>,
                                           public range_interface <T>
      {
      public:

         typedef expression_node <T>*  expression_ptr;
         typedef stringvar_node  <T>* strvar_node_ptr;
         typedef string_base_node<T>*    str_base_ptr;
         typedef range_pack      <T>          range_t;
         typedef range_t*                   range_ptr;
         typedef range_interface<T>          irange_t;
         typedef irange_t*                 irange_ptr;

         assignment_string_range_node(const operator_type& opr,
                                      expression_ptr branch0,
                                      expression_ptr branch1)
         : binary_node<T>(opr, branch0, branch1),
           initialised_(false),
           str0_base_ptr_ (0),
           str1_base_ptr_ (0),
           str0_node_ptr_ (0),
           str0_range_ptr_(0),
           str1_range_ptr_(0)
         {
            if (is_string_range_node(binary_node<T>::branch_[0].first))
            {
               str0_node_ptr_ = static_cast<strvar_node_ptr>(binary_node<T>::branch_[0].first);

               str0_base_ptr_ = dynamic_cast<str_base_ptr>(binary_node<T>::branch_[0].first);

               irange_ptr range = dynamic_cast<irange_ptr>(binary_node<T>::branch_[0].first);

               if (0 == range)
                  return;

               str0_range_ptr_ = &(range->range_ref());
            }

            if (is_generally_string_node(binary_node<T>::branch_[1].first))
            {
               str1_base_ptr_ = dynamic_cast<str_base_ptr>(binary_node<T>::branch_[1].first);

               if (0 == str1_base_ptr_)
                  return;

               irange_ptr range = dynamic_cast<irange_ptr>(binary_node<T>::branch_[1].first);

               if (0 == range)
                  return;

               str1_range_ptr_ = &(range->range_ref());
            }

            initialised_ = str0_base_ptr_  &&
                           str1_base_ptr_  &&
                           str0_node_ptr_  &&
                           str0_range_ptr_ &&
                           str1_range_ptr_ ;
         }

         inline T value() const
         {
            if (initialised_)
            {
               binary_node<T>::branch_[0].first->value();
               binary_node<T>::branch_[1].first->value();

               std::size_t s0_r0 = 0;
               std::size_t s0_r1 = 0;

               std::size_t s1_r0 = 0;
               std::size_t s1_r1 = 0;

               range_t& range0 = (*str0_range_ptr_);
               range_t& range1 = (*str1_range_ptr_);

               if (
                    range0(s0_r0, s0_r1, str0_base_ptr_->size()) &&
                    range1(s1_r0, s1_r1, str1_base_ptr_->size())
                  )
               {
                  std::size_t size = std::min((s0_r1 - s0_r0),(s1_r1 - s1_r0)) + 1;

                  std::copy(str1_base_ptr_->base() + s1_r0,
                            str1_base_ptr_->base() + s1_r0 + size,
                            const_cast<char_ptr>(base() + s0_r0));
               }
            }

            return std::numeric_limits<T>::quiet_NaN();
         }

         std::string str() const
         {
            return str0_node_ptr_->str();
         }

         char_cptr base() const
         {
           return str0_node_ptr_->base();
         }

         std::size_t size() const
         {
            return str0_node_ptr_->size();
         }

         range_t& range_ref()
         {
            return str0_node_ptr_->range_ref();
         }

         const range_t& range_ref() const
         {
            return str0_node_ptr_->range_ref();
         }

         inline typename expression_node<T>::node_type type() const
         {
            return expression_node<T>::e_strass;
         }

      private:

         bool            initialised_;
         str_base_ptr    str0_base_ptr_;
         str_base_ptr    str1_base_ptr_;
         strvar_node_ptr str0_node_ptr_;
         range_ptr       str0_range_ptr_;
         range_ptr       str1_range_ptr_;
      };

      template <typename T>
      class conditional_string_node : public trinary_node    <T>,
                                      public string_base_node<T>,
                                      public range_interface <T>
      {
      public:

         typedef expression_node <T>* expression_ptr;
         typedef string_base_node<T>*   str_base_ptr;
         typedef range_pack      <T>         range_t;
         typedef range_t*                  range_ptr;
         typedef range_interface<T>         irange_t;
         typedef irange_t*                irange_ptr;

         conditional_string_node(expression_ptr test,
                                 expression_ptr consequent,
                                 expression_ptr alternative)
         : trinary_node<T>(details::e_default,consequent,alternative,test),
           initialised_(false),
           str0_base_ptr_ (0),
           str1_base_ptr_ (0),
           str0_range_ptr_(0),
           str1_range_ptr_(0),
           test_              (test),
           consequent_  (consequent),
           alternative_(alternative)
         {
            range_.n0_c = std::make_pair<bool,std::size_t>(true,0);
            range_.n1_c = std::make_pair<bool,std::size_t>(true,0);

            range_.cache.first  = range_.n0_c.second;
            range_.cache.second = range_.n1_c.second;

            if (is_generally_string_node(trinary_node<T>::branch_[0].first))
            {
               str0_base_ptr_ = dynamic_cast<str_base_ptr>(trinary_node<T>::branch_[0].first);

               if (0 == str0_base_ptr_)
                  return;

               str0_range_ptr_ = dynamic_cast<irange_ptr>(trinary_node<T>::branch_[0].first);

               if (0 == str0_range_ptr_)
                  return;
            }

            if (is_generally_string_node(trinary_node<T>::branch_[1].first))
            {
               str1_base_ptr_ = dynamic_cast<str_base_ptr>(trinary_node<T>::branch_[1].first);

               if (0 == str1_base_ptr_)
                  return;

               str1_range_ptr_ = dynamic_cast<irange_ptr>(trinary_node<T>::branch_[1].first);

               if (0 == str1_range_ptr_)
                  return;
            }

            initialised_ = str0_base_ptr_  &&
                           str1_base_ptr_  &&
                           str0_range_ptr_ &&
                           str1_range_ptr_ ;

         }

         inline T value() const
         {
            if (initialised_)
            {
               std::size_t r0 = 0;
               std::size_t r1 = 0;

               if (is_true(test_))
               {
                  consequent_->value();

                  range_t& range = str0_range_ptr_->range_ref();

                  if (range(r0, r1, str0_base_ptr_->size()))
                  {
                     const std::size_t size = (r1 - r0) + 1;

                     value_.assign(str0_base_ptr_->base() + r0, size);

                     range_.n1_c.second  = value_.size() - 1;
                     range_.cache.second = range_.n1_c.second;

                     return T(1);
                  }
               }
               else
               {
                  alternative_->value();

                  range_t& range = str1_range_ptr_->range_ref();

                  if (range(r0, r1, str1_base_ptr_->size()))
                  {
                     const std::size_t size = (r1 - r0) + 1;

                     value_.assign(str1_base_ptr_->base() + r0, size);

                     range_.n1_c.second  = value_.size() - 1;
                     range_.cache.second = range_.n1_c.second;

                     return T(0);
                  }
               }
            }

            return std::numeric_limits<T>::quiet_NaN();
         }

         std::string str() const
         {
            return value_;
         }

         char_cptr base() const
         {
            return &value_[0];
         }

         std::size_t size() const
         {
            return value_.size();
         }

         range_t& range_ref()
         {
            return range_;
         }

         const range_t& range_ref() const
         {
            return range_;
         }

         inline typename expression_node<T>::node_type type() const
         {
            return expression_node<T>::e_strcondition;
         }

      private:

         bool initialised_;
         str_base_ptr str0_base_ptr_;
         str_base_ptr str1_base_ptr_;
         irange_ptr   str0_range_ptr_;
         irange_ptr   str1_range_ptr_;
         mutable range_t     range_;
         mutable std::string value_;

         expression_ptr test_;
         expression_ptr consequent_;
         expression_ptr alternative_;
      };

      template <typename T>
      class cons_conditional_str_node : public binary_node     <T>,
                                        public string_base_node<T>,
                                        public range_interface <T>
      {
      public:

         typedef expression_node <T>* expression_ptr;
         typedef string_base_node<T>*   str_base_ptr;
         typedef range_pack      <T>         range_t;
         typedef range_t*                  range_ptr;
         typedef range_interface<T>         irange_t;
         typedef irange_t*                irange_ptr;

         cons_conditional_str_node(expression_ptr test,
                                   expression_ptr consequent)
         : binary_node<T>(details::e_default, consequent, test),
           initialised_(false),
           str0_base_ptr_ (0),
           str0_range_ptr_(0),
           test_      (test),
           consequent_(consequent)
         {
            range_.n0_c = std::make_pair<bool,std::size_t>(true,0);
            range_.n1_c = std::make_pair<bool,std::size_t>(true,0);

            range_.cache.first  = range_.n0_c.second;
            range_.cache.second = range_.n1_c.second;

            if (is_generally_string_node(binary_node<T>::branch_[0].first))
            {
               str0_base_ptr_ = dynamic_cast<str_base_ptr>(binary_node<T>::branch_[0].first);

               if (0 == str0_base_ptr_)
                  return;

               str0_range_ptr_ = dynamic_cast<irange_ptr>(binary_node<T>::branch_[0].first);

               if (0 == str0_range_ptr_)
                  return;
            }

            initialised_ = str0_base_ptr_ && str0_range_ptr_ ;
         }

         inline T value() const
         {
            if (initialised_)
            {
               if (is_true(test_))
               {
                  consequent_->value();

                  range_t& range = str0_range_ptr_->range_ref();

                  std::size_t r0 = 0;
                  std::size_t r1 = 0;

                  if (range(r0, r1, str0_base_ptr_->size()))
                  {
                     const std::size_t size = (r1 - r0) + 1;

                     value_.assign(str0_base_ptr_->base() + r0, size);

                     range_.n1_c.second  = value_.size() - 1;
                     range_.cache.second = range_.n1_c.second;

                     return T(1);
                  }
               }
            }

            return std::numeric_limits<T>::quiet_NaN();
         }

         std::string str() const
         {
            return value_;
         }

         char_cptr base() const
         {
            return &value_[0];
         }

         std::size_t size() const
         {
            return value_.size();
         }

         range_t& range_ref()
         {
            return range_;
         }

         const range_t& range_ref() const
         {
            return range_;
         }

         inline typename expression_node<T>::node_type type() const
         {
            return expression_node<T>::e_strccondition;
         }

      private:

         bool initialised_;
         str_base_ptr str0_base_ptr_;
         irange_ptr   str0_range_ptr_;
         mutable range_t     range_;
         mutable std::string value_;

         expression_ptr test_;
         expression_ptr consequent_;
      };

      template <typename T, typename VarArgFunction>
      class str_vararg_node  : public expression_node <T>,
                               public string_base_node<T>,
                               public range_interface <T>
      {
      public:

         typedef expression_node <T>*  expression_ptr;
         typedef string_base_node<T>*    str_base_ptr;
         typedef range_pack      <T>          range_t;
         typedef range_t*                   range_ptr;
         typedef range_interface<T>          irange_t;
         typedef irange_t*                 irange_ptr;

         template <typename Allocator,
                   template <typename,typename> class Sequence>
         str_vararg_node(const Sequence<expression_ptr,Allocator>& arg_list)
         : final_node_(arg_list.back()),
   final_deletable_(branch_deletable(final_node_)),\n           initialised_(false),\n           str_base_ptr_ (0),\n           str_range_ptr_(0)\n         {\n            if (0 == final_node_)\n               return;\n            else if (!is_generally_string_node(final_node_))\n               return;\n\n            str_base_ptr_ = dynamic_cast<str_base_ptr>(final_node_);\n\n            if (0 == str_base_ptr_)\n               return;\n\n            str_range_ptr_ = dynamic_cast<irange_ptr>(final_node_);\n\n            if (0 == str_range_ptr_)\n               return;\n\n            initialised_ = str_base_ptr_  && str_range_ptr_;\n\n            if (arg_list.size() > 1)\n            {\n               const std::size_t arg_list_size = arg_list.size() - 1;\n\n               arg_list_.resize(arg_list_size);\n               delete_branch_.resize(arg_list_size);\n\n               for (std::size_t i = 0; i < arg_list_size; ++i)\n               {\n                  if (arg_list[i])\n                  {\n                          arg_list_[i] = arg_list[i];\n                     delete_branch_[i] = static_cast<unsigned char>(branch_deletable(arg_list_[i]) ? 
1 : 0);\n                  }\n                  else\n                  {\n                     arg_list_     .clear();\n                     delete_branch_.clear();\n                     return;\n                  }\n               }\n            }\n         }\n\n        ~str_vararg_node()\n         {\n            if (final_node_ && final_deletable_)\n            {\n               destroy_node(final_node_);\n            }\n\n            for (std::size_t i = 0; i < arg_list_.size(); ++i)\n            {\n               if (arg_list_[i] && delete_branch_[i])\n               {\n                  destroy_node(arg_list_[i]);\n               }\n            }\n         }\n\n         inline T value() const\n         {\n            if (!arg_list_.empty())\n            {\n               VarArgFunction::process(arg_list_);\n            }\n\n            final_node_->value();\n\n            return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         std::string str() const\n         {\n            return str_base_ptr_->str();\n         }\n\n         char_cptr base() const\n         {\n            return str_base_ptr_->base();\n         }\n\n         std::size_t size() const\n         {\n            return str_base_ptr_->size();\n         }\n\n         range_t& range_ref()\n         {\n            return str_range_ptr_->range_ref();\n         }\n\n         const range_t& range_ref() const\n         {\n            return str_range_ptr_->range_ref();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_stringvararg;\n         }\n\n      private:\n\n         expression_ptr final_node_;\n         bool           final_deletable_;\n         bool           initialised_;\n         str_base_ptr   str_base_ptr_;\n         irange_ptr     str_range_ptr_;\n         std::vector<expression_ptr> arg_list_;\n         std::vector<unsigned char> delete_branch_;\n      };\n      #endif\n\n      template 
<typename T, std::size_t N>\n      inline T axn(T a, T x)\n      {\n         // a*x^n\n         return a * exprtk::details::numeric::fast_exp<T,N>::result(x);\n      }\n\n      template <typename T, std::size_t N>\n      inline T axnb(T a, T x, T b)\n      {\n         // a*x^n+b\n         return a * exprtk::details::numeric::fast_exp<T,N>::result(x) + b;\n      }\n\n      template <typename T>\n      struct sf_base\n      {\n         typedef typename details::functor_t<T>::Type Type;\n         typedef typename details::functor_t<T> functor_t;\n         typedef typename functor_t::qfunc_t quaternary_functor_t;\n         typedef typename functor_t::tfunc_t    trinary_functor_t;\n         typedef typename functor_t::bfunc_t     binary_functor_t;\n         typedef typename functor_t::ufunc_t      unary_functor_t;\n      };\n\n      #define define_sfop3(NN,OP0,OP1)                   \\\n      template <typename T>                              \\\n      struct sf##NN##_op : public sf_base<T>             \\\n      {                                                  \\\n         typedef typename sf_base<T>::Type Type;         \\\n         static inline T process(Type x, Type y, Type z) \\\n         {                                               \\\n            return (OP0);                                \\\n         }                                               \\\n         static inline std::string id()                  \\\n         {                                               \\\n            return OP1;                                  \\\n         }                                               \\\n      };                                                 \\\n\n      define_sfop3(00,(x + y) / z       ,\"(t+t)/t\")\n      define_sfop3(01,(x + y) * z       ,\"(t+t)*t\")\n      define_sfop3(02,(x + y) - z       ,\"(t+t)-t\")\n      define_sfop3(03,(x + y) + z       ,\"(t+t)+t\")\n      define_sfop3(04,(x - y) + z       ,\"(t-t)+t\")\n      define_sfop3(05,(x - y) / z  
     ,\"(t-t)/t\")\n      define_sfop3(06,(x - y) * z       ,\"(t-t)*t\")\n      define_sfop3(07,(x * y) + z       ,\"(t*t)+t\")\n      define_sfop3(08,(x * y) - z       ,\"(t*t)-t\")\n      define_sfop3(09,(x * y) / z       ,\"(t*t)/t\")\n      define_sfop3(10,(x * y) * z       ,\"(t*t)*t\")\n      define_sfop3(11,(x / y) + z       ,\"(t/t)+t\")\n      define_sfop3(12,(x / y) - z       ,\"(t/t)-t\")\n      define_sfop3(13,(x / y) / z       ,\"(t/t)/t\")\n      define_sfop3(14,(x / y) * z       ,\"(t/t)*t\")\n      define_sfop3(15,x / (y + z)       ,\"t/(t+t)\")\n      define_sfop3(16,x / (y - z)       ,\"t/(t-t)\")\n      define_sfop3(17,x / (y * z)       ,\"t/(t*t)\")\n      define_sfop3(18,x / (y / z)       ,\"t/(t/t)\")\n      define_sfop3(19,x * (y + z)       ,\"t*(t+t)\")\n      define_sfop3(20,x * (y - z)       ,\"t*(t-t)\")\n      define_sfop3(21,x * (y * z)       ,\"t*(t*t)\")\n      define_sfop3(22,x * (y / z)       ,\"t*(t/t)\")\n      define_sfop3(23,x - (y + z)       ,\"t-(t+t)\")\n      define_sfop3(24,x - (y - z)       ,\"t-(t-t)\")\n      define_sfop3(25,x - (y / z)       ,\"t-(t/t)\")\n      define_sfop3(26,x - (y * z)       ,\"t-(t*t)\")\n      define_sfop3(27,x + (y * z)       ,\"t+(t*t)\")\n      define_sfop3(28,x + (y / z)       ,\"t+(t/t)\")\n      define_sfop3(29,x + (y + z)       ,\"t+(t+t)\")\n      define_sfop3(30,x + (y - z)       ,\"t+(t-t)\")\n      define_sfop3(31,(axnb<T,2>(x,y,z)),\"       \")\n      define_sfop3(32,(axnb<T,3>(x,y,z)),\"       \")\n      define_sfop3(33,(axnb<T,4>(x,y,z)),\"       \")\n      define_sfop3(34,(axnb<T,5>(x,y,z)),\"       \")\n      define_sfop3(35,(axnb<T,6>(x,y,z)),\"       \")\n      define_sfop3(36,(axnb<T,7>(x,y,z)),\"       \")\n      define_sfop3(37,(axnb<T,8>(x,y,z)),\"       \")\n      define_sfop3(38,(axnb<T,9>(x,y,z)),\"       \")\n      define_sfop3(39,x * numeric::log(y)   + z,\"\")\n      define_sfop3(40,x * numeric::log(y)   - z,\"\")\n      define_sfop3(41,x * numeric::log10(y) + 
z,\"\")\n      define_sfop3(42,x * numeric::log10(y) - z,\"\")\n      define_sfop3(43,x * numeric::sin(y) + z  ,\"\")\n      define_sfop3(44,x * numeric::sin(y) - z  ,\"\")\n      define_sfop3(45,x * numeric::cos(y) + z  ,\"\")\n      define_sfop3(46,x * numeric::cos(y) - z  ,\"\")\n      define_sfop3(47,details::is_true(x) ? y : z,\"\")\n\n      #define define_sfop4(NN,OP0,OP1)                           \\\n      template <typename T>                                      \\\n      struct sf##NN##_op : public sf_base<T>                     \\\n      {                                                          \\\n         typedef typename sf_base<T>::Type Type;                 \\\n         static inline T process(Type x, Type y, Type z, Type w) \\\n         {                                                       \\\n            return (OP0);                                        \\\n         }                                                       \\\n         static inline std::string id() { return OP1; }          \\\n      };                                                         \\\n\n      define_sfop4(48,(x + ((y + z) / w)),\"t+((t+t)/t)\")\n      define_sfop4(49,(x + ((y + z) * w)),\"t+((t+t)*t)\")\n      define_sfop4(50,(x + ((y - z) / w)),\"t+((t-t)/t)\")\n      define_sfop4(51,(x + ((y - z) * w)),\"t+((t-t)*t)\")\n      define_sfop4(52,(x + ((y * z) / w)),\"t+((t*t)/t)\")\n      define_sfop4(53,(x + ((y * z) * w)),\"t+((t*t)*t)\")\n      define_sfop4(54,(x + ((y / z) + w)),\"t+((t/t)+t)\")\n      define_sfop4(55,(x + ((y / z) / w)),\"t+((t/t)/t)\")\n      define_sfop4(56,(x + ((y / z) * w)),\"t+((t/t)*t)\")\n      define_sfop4(57,(x - ((y + z) / w)),\"t-((t+t)/t)\")\n      define_sfop4(58,(x - ((y + z) * w)),\"t-((t+t)*t)\")\n      define_sfop4(59,(x - ((y - z) / w)),\"t-((t-t)/t)\")\n      define_sfop4(60,(x - ((y - z) * w)),\"t-((t-t)*t)\")\n      define_sfop4(61,(x - ((y * z) / w)),\"t-((t*t)/t)\")\n      define_sfop4(62,(x - ((y * z) * 
w)),\"t-((t*t)*t)\")\n      define_sfop4(63,(x - ((y / z) / w)),\"t-((t/t)/t)\")\n      define_sfop4(64,(x - ((y / z) * w)),\"t-((t/t)*t)\")\n      define_sfop4(65,(((x + y) * z) - w),\"((t+t)*t)-t\")\n      define_sfop4(66,(((x - y) * z) - w),\"((t-t)*t)-t\")\n      define_sfop4(67,(((x * y) * z) - w),\"((t*t)*t)-t\")\n      define_sfop4(68,(((x / y) * z) - w),\"((t/t)*t)-t\")\n      define_sfop4(69,(((x + y) / z) - w),\"((t+t)/t)-t\")\n      define_sfop4(70,(((x - y) / z) - w),\"((t-t)/t)-t\")\n      define_sfop4(71,(((x * y) / z) - w),\"((t*t)/t)-t\")\n      define_sfop4(72,(((x / y) / z) - w),\"((t/t)/t)-t\")\n      define_sfop4(73,((x * y) + (z * w)),\"(t*t)+(t*t)\")\n      define_sfop4(74,((x * y) - (z * w)),\"(t*t)-(t*t)\")\n      define_sfop4(75,((x * y) + (z / w)),\"(t*t)+(t/t)\")\n      define_sfop4(76,((x * y) - (z / w)),\"(t*t)-(t/t)\")\n      define_sfop4(77,((x / y) + (z / w)),\"(t/t)+(t/t)\")\n      define_sfop4(78,((x / y) - (z / w)),\"(t/t)-(t/t)\")\n      define_sfop4(79,((x / y) - (z * w)),\"(t/t)-(t*t)\")\n      define_sfop4(80,(x / (y + (z * w))),\"t/(t+(t*t))\")\n      define_sfop4(81,(x / (y - (z * w))),\"t/(t-(t*t))\")\n      define_sfop4(82,(x * (y + (z * w))),\"t*(t+(t*t))\")\n      define_sfop4(83,(x * (y - (z * w))),\"t*(t-(t*t))\")\n\n      define_sfop4(84,(axn<T,2>(x,y) + axn<T,2>(z,w)),\"\")\n      define_sfop4(85,(axn<T,3>(x,y) + axn<T,3>(z,w)),\"\")\n      define_sfop4(86,(axn<T,4>(x,y) + axn<T,4>(z,w)),\"\")\n      define_sfop4(87,(axn<T,5>(x,y) + axn<T,5>(z,w)),\"\")\n      define_sfop4(88,(axn<T,6>(x,y) + axn<T,6>(z,w)),\"\")\n      define_sfop4(89,(axn<T,7>(x,y) + axn<T,7>(z,w)),\"\")\n      define_sfop4(90,(axn<T,8>(x,y) + axn<T,8>(z,w)),\"\")\n      define_sfop4(91,(axn<T,9>(x,y) + axn<T,9>(z,w)),\"\")\n      define_sfop4(92,((details::is_true(x) && details::is_true(y)) ? z : w),\"\")\n      define_sfop4(93,((details::is_true(x) || details::is_true(y)) ? z : w),\"\")\n      define_sfop4(94,((x <  y) ? 
z : w),\"\")\n      define_sfop4(95,((x <= y) ? z : w),\"\")\n      define_sfop4(96,((x >  y) ? z : w),\"\")\n      define_sfop4(97,((x >= y) ? z : w),\"\")\n      define_sfop4(98,(details::is_true(numeric::equal(x,y)) ? z : w),\"\")\n      define_sfop4(99,(x * numeric::sin(y) + z * numeric::cos(w)),\"\")\n\n      define_sfop4(ext00,((x + y) - (z * w)),\"(t+t)-(t*t)\")\n      define_sfop4(ext01,((x + y) - (z / w)),\"(t+t)-(t/t)\")\n      define_sfop4(ext02,((x + y) + (z * w)),\"(t+t)+(t*t)\")\n      define_sfop4(ext03,((x + y) + (z / w)),\"(t+t)+(t/t)\")\n      define_sfop4(ext04,((x - y) + (z * w)),\"(t-t)+(t*t)\")\n      define_sfop4(ext05,((x - y) + (z / w)),\"(t-t)+(t/t)\")\n      define_sfop4(ext06,((x - y) - (z * w)),\"(t-t)-(t*t)\")\n      define_sfop4(ext07,((x - y) - (z / w)),\"(t-t)-(t/t)\")\n      define_sfop4(ext08,((x + y) - (z - w)),\"(t+t)-(t-t)\")\n      define_sfop4(ext09,((x + y) + (z - w)),\"(t+t)+(t-t)\")\n      define_sfop4(ext10,((x + y) + (z + w)),\"(t+t)+(t+t)\")\n      define_sfop4(ext11,((x + y) * (z - w)),\"(t+t)*(t-t)\")\n      define_sfop4(ext12,((x + y) / (z - w)),\"(t+t)/(t-t)\")\n      define_sfop4(ext13,((x - y) - (z + w)),\"(t-t)-(t+t)\")\n      define_sfop4(ext14,((x - y) + (z + w)),\"(t-t)+(t+t)\")\n      define_sfop4(ext15,((x - y) * (z + w)),\"(t-t)*(t+t)\")\n      define_sfop4(ext16,((x - y) / (z + w)),\"(t-t)/(t+t)\")\n      define_sfop4(ext17,((x * y) - (z + w)),\"(t*t)-(t+t)\")\n      define_sfop4(ext18,((x / y) - (z + w)),\"(t/t)-(t+t)\")\n      define_sfop4(ext19,((x * y) + (z + w)),\"(t*t)+(t+t)\")\n      define_sfop4(ext20,((x / y) + (z + w)),\"(t/t)+(t+t)\")\n      define_sfop4(ext21,((x * y) + (z - w)),\"(t*t)+(t-t)\")\n      define_sfop4(ext22,((x / y) + (z - w)),\"(t/t)+(t-t)\")\n      define_sfop4(ext23,((x * y) - (z - w)),\"(t*t)-(t-t)\")\n      define_sfop4(ext24,((x / y) - (z - w)),\"(t/t)-(t-t)\")\n      define_sfop4(ext25,((x + y) * (z * w)),\"(t+t)*(t*t)\")\n      define_sfop4(ext26,((x + y) * (z / 
w)),\"(t+t)*(t/t)\")\n      define_sfop4(ext27,((x + y) / (z * w)),\"(t+t)/(t*t)\")\n      define_sfop4(ext28,((x + y) / (z / w)),\"(t+t)/(t/t)\")\n      define_sfop4(ext29,((x - y) / (z * w)),\"(t-t)/(t*t)\")\n      define_sfop4(ext30,((x - y) / (z / w)),\"(t-t)/(t/t)\")\n      define_sfop4(ext31,((x - y) * (z * w)),\"(t-t)*(t*t)\")\n      define_sfop4(ext32,((x - y) * (z / w)),\"(t-t)*(t/t)\")\n      define_sfop4(ext33,((x * y) * (z + w)),\"(t*t)*(t+t)\")\n      define_sfop4(ext34,((x / y) * (z + w)),\"(t/t)*(t+t)\")\n      define_sfop4(ext35,((x * y) / (z + w)),\"(t*t)/(t+t)\")\n      define_sfop4(ext36,((x / y) / (z + w)),\"(t/t)/(t+t)\")\n      define_sfop4(ext37,((x * y) / (z - w)),\"(t*t)/(t-t)\")\n      define_sfop4(ext38,((x / y) / (z - w)),\"(t/t)/(t-t)\")\n      define_sfop4(ext39,((x * y) * (z - w)),\"(t*t)*(t-t)\")\n      define_sfop4(ext40,((x * y) / (z * w)),\"(t*t)/(t*t)\")\n      define_sfop4(ext41,((x / y) * (z / w)),\"(t/t)*(t/t)\")\n      define_sfop4(ext42,((x / y) * (z - w)),\"(t/t)*(t-t)\")\n      define_sfop4(ext43,((x * y) * (z * w)),\"(t*t)*(t*t)\")\n      define_sfop4(ext44,(x + (y * (z / w))),\"t+(t*(t/t))\")\n      define_sfop4(ext45,(x - (y * (z / w))),\"t-(t*(t/t))\")\n      define_sfop4(ext46,(x + (y / (z * w))),\"t+(t/(t*t))\")\n      define_sfop4(ext47,(x - (y / (z * w))),\"t-(t/(t*t))\")\n      define_sfop4(ext48,(((x - y) - z) * w),\"((t-t)-t)*t\")\n      define_sfop4(ext49,(((x - y) - z) / w),\"((t-t)-t)/t\")\n      define_sfop4(ext50,(((x - y) + z) * w),\"((t-t)+t)*t\")\n      define_sfop4(ext51,(((x - y) + z) / w),\"((t-t)+t)/t\")\n      define_sfop4(ext52,((x + (y - z)) * w),\"(t+(t-t))*t\")\n      define_sfop4(ext53,((x + (y - z)) / w),\"(t+(t-t))/t\")\n      define_sfop4(ext54,((x + y) / (z + w)),\"(t+t)/(t+t)\")\n      define_sfop4(ext55,((x - y) / (z - w)),\"(t-t)/(t-t)\")\n      define_sfop4(ext56,((x + y) * (z + w)),\"(t+t)*(t+t)\")\n      define_sfop4(ext57,((x - y) * (z - w)),\"(t-t)*(t-t)\")\n      
define_sfop4(ext58,((x - y) + (z - w)),\"(t-t)+(t-t)\")\n      define_sfop4(ext59,((x - y) - (z - w)),\"(t-t)-(t-t)\")\n      define_sfop4(ext60,((x / y) + (z * w)),\"(t/t)+(t*t)\")\n      define_sfop4(ext61,(((x * y) * z) / w),\"((t*t)*t)/t\")\n\n      #undef define_sfop3\n      #undef define_sfop4\n\n      template <typename T, typename SpecialFunction>\n      class sf3_node : public trinary_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         sf3_node(const operator_type& opr,\n                  expression_ptr branch0,\n                  expression_ptr branch1,\n                  expression_ptr branch2)\n         : trinary_node<T>(opr, branch0, branch1, branch2)\n         {}\n\n         inline T value() const\n         {\n            const T x = trinary_node<T>::branch_[0].first->value();\n            const T y = trinary_node<T>::branch_[1].first->value();\n            const T z = trinary_node<T>::branch_[2].first->value();\n\n            return SpecialFunction::process(x, y, z);\n         }\n      };\n\n      template <typename T, typename SpecialFunction>\n      class sf4_node : public quaternary_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         sf4_node(const operator_type& opr,\n                  expression_ptr branch0,\n                  expression_ptr branch1,\n                  expression_ptr branch2,\n                  expression_ptr branch3)\n         : quaternary_node<T>(opr, branch0, branch1, branch2, branch3)\n         {}\n\n         inline T value() const\n         {\n            const T x = quaternary_node<T>::branch_[0].first->value();\n            const T y = quaternary_node<T>::branch_[1].first->value();\n            const T z = quaternary_node<T>::branch_[2].first->value();\n            const T w = quaternary_node<T>::branch_[3].first->value();\n\n            return SpecialFunction::process(x, y, z, w);\n         }\n      };\n\n      template 
<typename T, typename SpecialFunction>\n      class sf3_var_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         sf3_var_node(const T& v0, const T& v1, const T& v2)\n         : v0_(v0),\n           v1_(v1),\n           v2_(v2)\n         {}\n\n         inline T value() const\n         {\n            return SpecialFunction::process(v0_, v1_, v2_);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_trinary;\n         }\n\n      private:\n\n         sf3_var_node(sf3_var_node<T,SpecialFunction>&);\n         sf3_var_node<T,SpecialFunction>& operator=(sf3_var_node<T,SpecialFunction>&);\n\n         const T& v0_;\n         const T& v1_;\n         const T& v2_;\n      };\n\n      template <typename T, typename SpecialFunction>\n      class sf4_var_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         sf4_var_node(const T& v0, const T& v1, const T& v2, const T& v3)\n         : v0_(v0),\n           v1_(v1),\n           v2_(v2),\n           v3_(v3)\n         {}\n\n         inline T value() const\n         {\n            return SpecialFunction::process(v0_, v1_, v2_, v3_);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_trinary;\n         }\n\n      private:\n\n         sf4_var_node(sf4_var_node<T,SpecialFunction>&);\n         sf4_var_node<T,SpecialFunction>& operator=(sf4_var_node<T,SpecialFunction>&);\n\n         const T& v0_;\n         const T& v1_;\n         const T& v2_;\n         const T& v3_;\n      };\n\n      template <typename T, typename VarArgFunction>\n      class vararg_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         template <typename Allocator,\n                   template 
<typename,typename> class Sequence>\n         vararg_node(const Sequence<expression_ptr,Allocator>& arg_list)\n         {\n            arg_list_     .resize(arg_list.size());\n            delete_branch_.resize(arg_list.size());\n\n            for (std::size_t i = 0; i < arg_list.size(); ++i)\n            {\n               if (arg_list[i])\n               {\n                       arg_list_[i] = arg_list[i];\n                  delete_branch_[i] = static_cast<unsigned char>(branch_deletable(arg_list_[i]) ? 1 : 0);\n               }\n               else\n               {\n                  arg_list_.clear();\n                  delete_branch_.clear();\n                  return;\n               }\n            }\n         }\n\n        ~vararg_node()\n         {\n            for (std::size_t i = 0; i < arg_list_.size(); ++i)\n            {\n               if (arg_list_[i] && delete_branch_[i])\n               {\n                  destroy_node(arg_list_[i]);\n               }\n            }\n         }\n\n         inline T value() const\n         {\n            if (!arg_list_.empty())\n               return VarArgFunction::process(arg_list_);\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vararg;\n         }\n\n      private:\n\n         std::vector<expression_ptr> arg_list_;\n         std::vector<unsigned char> delete_branch_;\n      };\n\n      template <typename T, typename VarArgFunction>\n      class vararg_varnode : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         vararg_varnode(const Sequence<expression_ptr,Allocator>& arg_list)\n         {\n            arg_list_.resize(arg_list.size());\n\n            for (std::size_t i = 0; i < 
arg_list.size(); ++i)\n            {\n               if (arg_list[i] && is_variable_node(arg_list[i]))\n               {\n                  variable_node<T>* var_node_ptr = static_cast<variable_node<T>*>(arg_list[i]);\n                  arg_list_[i] = (&var_node_ptr->ref());\n               }\n               else\n               {\n                  arg_list_.clear();\n                  return;\n               }\n            }\n         }\n\n         inline T value() const\n         {\n            if (!arg_list_.empty())\n               return VarArgFunction::process(arg_list_);\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vararg;\n         }\n\n      private:\n\n         std::vector<const T*> arg_list_;\n      };\n\n      template <typename T, typename VecFunction>\n      class vectorize_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         vectorize_node(const expression_ptr v)\n         : ivec_ptr_(0),\n           v_(v),\n           v_deletable_(branch_deletable(v_))\n         {\n            if (is_ivector_node(v))\n            {\n               ivec_ptr_ = dynamic_cast<vector_interface<T>*>(v);\n            }\n            else\n               ivec_ptr_ = 0;\n         }\n\n        ~vectorize_node()\n         {\n            if (v_ && v_deletable_)\n            {\n               destroy_node(v_);\n            }\n         }\n\n         inline T value() const\n         {\n            if (ivec_ptr_)\n            {\n               v_->value();\n               return VecFunction::process(ivec_ptr_);\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return 
expression_node<T>::e_vecfunc;\n         }\n\n      private:\n\n         vector_interface<T>* ivec_ptr_;\n         expression_ptr              v_;\n         const bool        v_deletable_;\n      };\n\n      template <typename T>\n      class assignment_node : public binary_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         assignment_node(const operator_type& opr,\n                         expression_ptr branch0,\n                         expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           var_node_ptr_(0)\n         {\n            if (is_variable_node(binary_node<T>::branch_[0].first))\n            {\n               var_node_ptr_ = static_cast<variable_node<T>*>(binary_node<T>::branch_[0].first);\n            }\n         }\n\n         inline T value() const\n         {\n            if (var_node_ptr_)\n            {\n               T& result = var_node_ptr_->ref();\n\n               result = binary_node<T>::branch_[1].first->value();\n\n               return result;\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n      private:\n\n         variable_node<T>* var_node_ptr_;\n      };\n\n      template <typename T>\n      class assignment_vec_elem_node : public binary_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         assignment_vec_elem_node(const operator_type& opr,\n                                  expression_ptr branch0,\n                                  expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           vec_node_ptr_(0)\n         {\n            if (is_vector_elem_node(binary_node<T>::branch_[0].first))\n            {\n               vec_node_ptr_ = static_cast<vector_elem_node<T>*>(binary_node<T>::branch_[0].first);\n            }\n         }\n\n         inline T value() const\n         {\n            if (vec_node_ptr_)\n            {\n 
              T& result = vec_node_ptr_->ref();\n\n               result = binary_node<T>::branch_[1].first->value();\n\n               return result;\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n      private:\n\n         vector_elem_node<T>* vec_node_ptr_;\n      };\n\n      template <typename T>\n      class assignment_rebasevec_elem_node : public binary_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         assignment_rebasevec_elem_node(const operator_type& opr,\n                                        expression_ptr branch0,\n                                        expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           rbvec_node_ptr_(0)\n         {\n            if (is_rebasevector_elem_node(binary_node<T>::branch_[0].first))\n            {\n               rbvec_node_ptr_ = static_cast<rebasevector_elem_node<T>*>(binary_node<T>::branch_[0].first);\n            }\n         }\n\n         inline T value() const\n         {\n            if (rbvec_node_ptr_)\n            {\n               T& result = rbvec_node_ptr_->ref();\n\n               result = binary_node<T>::branch_[1].first->value();\n\n               return result;\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n      private:\n\n         rebasevector_elem_node<T>* rbvec_node_ptr_;\n      };\n\n      template <typename T>\n      class assignment_rebasevec_celem_node : public binary_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         assignment_rebasevec_celem_node(const operator_type& opr,\n                                         expression_ptr branch0,\n                                         expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           rbvec_node_ptr_(0)\n         {\n            if 
(is_rebasevector_celem_node(binary_node<T>::branch_[0].first))\n            {\n               rbvec_node_ptr_ = static_cast<rebasevector_celem_node<T>*>(binary_node<T>::branch_[0].first);\n            }\n         }\n\n         inline T value() const\n         {\n            if (rbvec_node_ptr_)\n            {\n               T& result = rbvec_node_ptr_->ref();\n\n               result = binary_node<T>::branch_[1].first->value();\n\n               return result;\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n      private:\n\n         rebasevector_celem_node<T>* rbvec_node_ptr_;\n      };\n\n      template <typename T>\n      class assignment_vec_node : public binary_node     <T>,\n                                  public vector_interface<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef vector_node<T>*    vector_node_ptr;\n         typedef vec_data_store<T>            vds_t;\n\n         assignment_vec_node(const operator_type& opr,\n                             expression_ptr branch0,\n                             expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           vec_node_ptr_(0)\n         {\n            if (is_vector_node(binary_node<T>::branch_[0].first))\n            {\n               vec_node_ptr_ = static_cast<vector_node<T>*>(binary_node<T>::branch_[0].first);\n               vds()         = vec_node_ptr_->vds();\n            }\n         }\n\n         inline T value() const\n         {\n            if (vec_node_ptr_)\n            {\n               const T v = binary_node<T>::branch_[1].first->value();\n\n               T* vec = vds().data();\n\n               loop_unroll::details lud(size());\n               const T* upper_bound = vec + lud.upper_bound;\n\n               while (vec < upper_bound)\n               {\n                 #define exprtk_loop(N) \\\n                  vec[N] = v;           \\\n\n          
        exprtk_loop( 0) exprtk_loop( 1)\n                  exprtk_loop( 2) exprtk_loop( 3)\n                  #ifndef exprtk_disable_superscalar_unroll\n                  exprtk_loop( 4) exprtk_loop( 5)\n                  exprtk_loop( 6) exprtk_loop( 7)\n                  exprtk_loop( 8) exprtk_loop( 9)\n                  exprtk_loop(10) exprtk_loop(11)\n                  exprtk_loop(12) exprtk_loop(13)\n                  exprtk_loop(14) exprtk_loop(15)\n                  #endif\n\n                  vec += lud.batch_size;\n               }\n\n               exprtk_disable_fallthrough_begin\n               switch (lud.remainder)\n               {\n                  #define case_stmt(N) \\\n                  case N : *vec++ = v; \\\n\n                  #ifndef exprtk_disable_superscalar_unroll\n                  case_stmt(15) case_stmt(14)\n                  case_stmt(13) case_stmt(12)\n                  case_stmt(11) case_stmt(10)\n                  case_stmt( 9) case_stmt( 8)\n                  case_stmt( 7) case_stmt( 6)\n                  case_stmt( 5) case_stmt( 4)\n                  #endif\n                  case_stmt( 3) case_stmt( 2)\n                  case_stmt( 1)\n               }\n               exprtk_disable_fallthrough_end\n\n               #undef exprtk_loop\n               #undef case_stmt\n\n               return vec_node_ptr_->value();\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         vector_node_ptr vec() const\n         {\n            return vec_node_ptr_;\n         }\n\n         vector_node_ptr vec()\n         {\n            return vec_node_ptr_;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vecvalass;\n         }\n\n         std::size_t size() const\n         {\n            return vds().size();\n         }\n\n         vds_t& vds()\n         {\n            return vds_;\n         }\n\n       
  const vds_t& vds() const\n         {\n            return vds_;\n         }\n\n      private:\n\n         vector_node<T>* vec_node_ptr_;\n         vds_t           vds_;\n      };\n\n      template <typename T>\n      class assignment_vecvec_node : public binary_node     <T>,\n                                     public vector_interface<T>\n      {\n      public:\n\n         typedef expression_node<T>*  expression_ptr;\n         typedef vector_node<T>*     vector_node_ptr;\n         typedef vec_data_store<T>             vds_t;\n\n         assignment_vecvec_node(const operator_type& opr,\n                                expression_ptr branch0,\n                                expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           vec0_node_ptr_(0),\n           vec1_node_ptr_(0),\n           initialised_(false),\n           src_is_ivec_(false)\n         {\n            if (is_vector_node(binary_node<T>::branch_[0].first))\n            {\n               vec0_node_ptr_ = static_cast<vector_node<T>*>(binary_node<T>::branch_[0].first);\n               vds()          = vec0_node_ptr_->vds();\n            }\n\n            if (is_vector_node(binary_node<T>::branch_[1].first))\n            {\n               vec1_node_ptr_ = static_cast<vector_node<T>*>(binary_node<T>::branch_[1].first);\n               vds_t::match_sizes(vds(),vec1_node_ptr_->vds());\n            }\n            else if (is_ivector_node(binary_node<T>::branch_[1].first))\n            {\n               vector_interface<T>* vi = reinterpret_cast<vector_interface<T>*>(0);\n\n               if (0 != (vi = dynamic_cast<vector_interface<T>*>(binary_node<T>::branch_[1].first)))\n               {\n                  vec1_node_ptr_ = vi->vec();\n\n                  if (!vi->side_effect())\n                  {\n                     vi->vds()    = vds();\n                     src_is_ivec_ = true;\n                  }\n                  else\n                     
vds_t::match_sizes(vds(),vi->vds());\n               }\n            }\n\n            initialised_ = (vec0_node_ptr_ && vec1_node_ptr_);\n         }\n\n         inline T value() const\n         {\n            if (initialised_)\n            {\n               binary_node<T>::branch_[1].first->value();\n\n               if (src_is_ivec_)\n                  return vec0_node_ptr_->value();\n\n               T* vec0 = vec0_node_ptr_->vds().data();\n               T* vec1 = vec1_node_ptr_->vds().data();\n\n               loop_unroll::details lud(size());\n               const T* upper_bound = vec0 + lud.upper_bound;\n\n               while (vec0 < upper_bound)\n               {\n                  #define exprtk_loop(N) \\\n                  vec0[N] = vec1[N];     \\\n\n                  exprtk_loop( 0) exprtk_loop( 1)\n                  exprtk_loop( 2) exprtk_loop( 3)\n                  #ifndef exprtk_disable_superscalar_unroll\n                  exprtk_loop( 4) exprtk_loop( 5)\n                  exprtk_loop( 6) exprtk_loop( 7)\n                  exprtk_loop( 8) exprtk_loop( 9)\n                  exprtk_loop(10) exprtk_loop(11)\n                  exprtk_loop(12) exprtk_loop(13)\n                  exprtk_loop(14) exprtk_loop(15)\n                  #endif\n\n                  vec0 += lud.batch_size;\n                  vec1 += lud.batch_size;\n               }\n\n               exprtk_disable_fallthrough_begin\n               switch (lud.remainder)\n               {\n                  #define case_stmt(N)        \\\n                  case N : *vec0++ = *vec1++; \\\n\n                  #ifndef exprtk_disable_superscalar_unroll\n                  case_stmt(15) case_stmt(14)\n                  case_stmt(13) case_stmt(12)\n                  case_stmt(11) case_stmt(10)\n                  case_stmt( 9) case_stmt( 8)\n                  case_stmt( 7) case_stmt( 6)\n                  case_stmt( 5) case_stmt( 4)\n                  #endif\n                  case_stmt( 3) case_stmt( 2)\n 
                 case_stmt( 1)\n               }\n               exprtk_disable_fallthrough_end\n\n               #undef exprtk_loop\n               #undef case_stmt\n\n               return vec0_node_ptr_->value();\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         vector_node_ptr vec() const\n         {\n            return vec0_node_ptr_;\n         }\n\n         vector_node_ptr vec()\n         {\n            return vec0_node_ptr_;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vecvecass;\n         }\n\n         std::size_t size() const\n         {\n            return vds().size();\n         }\n\n         vds_t& vds()\n         {\n            return vds_;\n         }\n\n         const vds_t& vds() const\n         {\n            return vds_;\n         }\n\n      private:\n\n         vector_node<T>* vec0_node_ptr_;\n         vector_node<T>* vec1_node_ptr_;\n         bool            initialised_;\n         bool            src_is_ivec_;\n         vds_t           vds_;\n      };\n\n      template <typename T, typename Operation>\n      class assignment_op_node : public binary_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         assignment_op_node(const operator_type& opr,\n                            expression_ptr branch0,\n                            expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           var_node_ptr_(0)\n         {\n            if (is_variable_node(binary_node<T>::branch_[0].first))\n            {\n               var_node_ptr_ = static_cast<variable_node<T>*>(binary_node<T>::branch_[0].first);\n            }\n         }\n\n         inline T value() const\n         {\n            if (var_node_ptr_)\n            {\n               T& v = var_node_ptr_->ref();\n               v = 
Operation::process(v,binary_node<T>::branch_[1].first->value());\n\n               return v;\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n      private:\n\n         variable_node<T>* var_node_ptr_;\n      };\n\n      template <typename T, typename Operation>\n      class assignment_vec_elem_op_node : public binary_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         assignment_vec_elem_op_node(const operator_type& opr,\n                                     expression_ptr branch0,\n                                     expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           vec_node_ptr_(0)\n         {\n            if (is_vector_elem_node(binary_node<T>::branch_[0].first))\n            {\n               vec_node_ptr_ = static_cast<vector_elem_node<T>*>(binary_node<T>::branch_[0].first);\n            }\n         }\n\n         inline T value() const\n         {\n            if (vec_node_ptr_)\n            {\n               T& v = vec_node_ptr_->ref();\n                  v = Operation::process(v,binary_node<T>::branch_[1].first->value());\n\n               return v;\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n      private:\n\n         vector_elem_node<T>* vec_node_ptr_;\n      };\n\n      template <typename T, typename Operation>\n      class assignment_rebasevec_elem_op_node : public binary_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         assignment_rebasevec_elem_op_node(const operator_type& opr,\n                                           expression_ptr branch0,\n                                           expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           rbvec_node_ptr_(0)\n         {\n            if (is_rebasevector_elem_node(binary_node<T>::branch_[0].first))\n            {\n  
             rbvec_node_ptr_ = static_cast<rebasevector_elem_node<T>*>(binary_node<T>::branch_[0].first);\n            }\n         }\n\n         inline T value() const\n         {\n            if (rbvec_node_ptr_)\n            {\n               T& v = rbvec_node_ptr_->ref();\n                  v = Operation::process(v,binary_node<T>::branch_[1].first->value());\n\n               return v;\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n      private:\n\n         rebasevector_elem_node<T>* rbvec_node_ptr_;\n      };\n\n      template <typename T, typename Operation>\n      class assignment_rebasevec_celem_op_node : public binary_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         assignment_rebasevec_celem_op_node(const operator_type& opr,\n                                            expression_ptr branch0,\n                                            expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           rbvec_node_ptr_(0)\n         {\n            if (is_rebasevector_celem_node(binary_node<T>::branch_[0].first))\n            {\n               rbvec_node_ptr_ = static_cast<rebasevector_celem_node<T>*>(binary_node<T>::branch_[0].first);\n            }\n         }\n\n         inline T value() const\n         {\n            if (rbvec_node_ptr_)\n            {\n               T& v = rbvec_node_ptr_->ref();\n                  v = Operation::process(v,binary_node<T>::branch_[1].first->value());\n\n               return v;\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n      private:\n\n         rebasevector_celem_node<T>* rbvec_node_ptr_;\n      };\n\n      template <typename T, typename Operation>\n      class assignment_vec_op_node : public binary_node     <T>,\n                                     public vector_interface<T>\n      {\n      public:\n\n         typedef 
expression_node<T>*  expression_ptr;\n         typedef vector_node<T>*     vector_node_ptr;\n         typedef vec_data_store<T>             vds_t;\n\n         assignment_vec_op_node(const operator_type& opr,\n                                expression_ptr branch0,\n                                expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           vec_node_ptr_(0)\n         {\n            if (is_vector_node(binary_node<T>::branch_[0].first))\n            {\n               vec_node_ptr_ = static_cast<vector_node<T>*>(binary_node<T>::branch_[0].first);\n               vds()         = vec_node_ptr_->vds();\n            }\n         }\n\n         inline T value() const\n         {\n            if (vec_node_ptr_)\n            {\n               const T v = binary_node<T>::branch_[1].first->value();\n\n               T* vec = vds().data();\n\n               loop_unroll::details lud(size());\n               const T* upper_bound = vec + lud.upper_bound;\n\n               while (vec < upper_bound)\n               {\n                  #define exprtk_loop(N)       \\\n                  Operation::assign(vec[N],v); \\\n\n                  exprtk_loop( 0) exprtk_loop( 1)\n                  exprtk_loop( 2) exprtk_loop( 3)\n                  #ifndef exprtk_disable_superscalar_unroll\n                  exprtk_loop( 4) exprtk_loop( 5)\n                  exprtk_loop( 6) exprtk_loop( 7)\n                  exprtk_loop( 8) exprtk_loop( 9)\n                  exprtk_loop(10) exprtk_loop(11)\n                  exprtk_loop(12) exprtk_loop(13)\n                  exprtk_loop(14) exprtk_loop(15)\n                  #endif\n\n                  vec += lud.batch_size;\n               }\n\n               exprtk_disable_fallthrough_begin\n               switch (lud.remainder)\n               {\n                  #define case_stmt(N)                  \\\n                  case N : Operation::assign(*vec++,v); \\\n\n                  #ifndef 
exprtk_disable_superscalar_unroll\n                  case_stmt(15) case_stmt(14)\n                  case_stmt(13) case_stmt(12)\n                  case_stmt(11) case_stmt(10)\n                  case_stmt( 9) case_stmt( 8)\n                  case_stmt( 7) case_stmt( 6)\n                  case_stmt( 5) case_stmt( 4)\n                  #endif\n                  case_stmt( 3) case_stmt( 2)\n                  case_stmt( 1)\n               }\n               exprtk_disable_fallthrough_end\n\n\n               #undef exprtk_loop\n               #undef case_stmt\n\n               return vec_node_ptr_->value();\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         vector_node_ptr vec() const\n         {\n            return vec_node_ptr_;\n         }\n\n         vector_node_ptr vec()\n         {\n            return vec_node_ptr_;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vecopvalass;\n         }\n\n         std::size_t size() const\n         {\n            return vds().size();\n         }\n\n         vds_t& vds()\n         {\n            return vds_;\n         }\n\n         const vds_t& vds() const\n         {\n            return vds_;\n         }\n\n         bool side_effect() const\n         {\n            return true;\n         }\n\n      private:\n\n         vector_node<T>* vec_node_ptr_;\n         vds_t           vds_;\n      };\n\n      template <typename T, typename Operation>\n      class assignment_vecvec_op_node : public binary_node     <T>,\n                                        public vector_interface<T>\n      {\n      public:\n\n         typedef expression_node<T>*  expression_ptr;\n         typedef vector_node<T>*     vector_node_ptr;\n         typedef vec_data_store<T>             vds_t;\n\n         assignment_vecvec_op_node(const operator_type& opr,\n                                   expression_ptr 
branch0,\n                                   expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           vec0_node_ptr_(0),\n           vec1_node_ptr_(0),\n           initialised_(false)\n         {\n            if (is_vector_node(binary_node<T>::branch_[0].first))\n            {\n               vec0_node_ptr_ = static_cast<vector_node<T>*>(binary_node<T>::branch_[0].first);\n               vds()          = vec0_node_ptr_->vds();\n            }\n\n            if (is_vector_node(binary_node<T>::branch_[1].first))\n            {\n               vec1_node_ptr_ = static_cast<vector_node<T>*>(binary_node<T>::branch_[1].first);\n               vec1_node_ptr_->vds() = vds();\n            }\n            else if (is_ivector_node(binary_node<T>::branch_[1].first))\n            {\n               vector_interface<T>* vi = reinterpret_cast<vector_interface<T>*>(0);\n\n               if (0 != (vi = dynamic_cast<vector_interface<T>*>(binary_node<T>::branch_[1].first)))\n               {\n                  vec1_node_ptr_ = vi->vec();\n                  vec1_node_ptr_->vds() = vds();\n               }\n               else\n                  vds_t::match_sizes(vds(),vec1_node_ptr_->vds());\n            }\n\n            initialised_ = (vec0_node_ptr_ && vec1_node_ptr_);\n         }\n\n         inline T value() const\n         {\n            if (initialised_)\n            {\n               binary_node<T>::branch_[0].first->value();\n               binary_node<T>::branch_[1].first->value();\n\n               T* vec0 = vec0_node_ptr_->vds().data();\n               T* vec1 = vec1_node_ptr_->vds().data();\n\n               loop_unroll::details lud(size());\n               const T* upper_bound = vec0 + lud.upper_bound;\n\n               while (vec0 < upper_bound)\n               {\n                  #define exprtk_loop(N)                         \\\n                  vec0[N] = Operation::process(vec0[N],vec1[N]); \\\n\n                  exprtk_loop( 0) 
exprtk_loop( 1)\n                  exprtk_loop( 2) exprtk_loop( 3)\n                  #ifndef exprtk_disable_superscalar_unroll\n                  exprtk_loop( 4) exprtk_loop( 5)\n                  exprtk_loop( 6) exprtk_loop( 7)\n                  exprtk_loop( 8) exprtk_loop( 9)\n                  exprtk_loop(10) exprtk_loop(11)\n                  exprtk_loop(12) exprtk_loop(13)\n                  exprtk_loop(14) exprtk_loop(15)\n                  #endif\n\n                  vec0 += lud.batch_size;\n                  vec1 += lud.batch_size;\n               }\n\n               int i = 0;\n\n               exprtk_disable_fallthrough_begin\n               switch (lud.remainder)\n               {\n                  #define case_stmt(N)                                             \\\n                  case N : { vec0[i] = Operation::process(vec0[i],vec1[i]); ++i; } \\\n\n                  #ifndef exprtk_disable_superscalar_unroll\n                  case_stmt(15) case_stmt(14)\n                  case_stmt(13) case_stmt(12)\n                  case_stmt(11) case_stmt(10)\n                  case_stmt( 9) case_stmt( 8)\n                  case_stmt( 7) case_stmt( 6)\n                  case_stmt( 5) case_stmt( 4)\n                  #endif\n                  case_stmt( 3) case_stmt( 2)\n                  case_stmt( 1)\n               }\n               exprtk_disable_fallthrough_end\n\n               #undef exprtk_loop\n               #undef case_stmt\n\n               return vec0_node_ptr_->value();\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         vector_node_ptr vec() const\n         {\n            return vec0_node_ptr_;\n         }\n\n         vector_node_ptr vec()\n         {\n            return vec0_node_ptr_;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vecopvecass;\n         }\n\n         std::size_t size() 
const\n         {\n            return vds().size();\n         }\n\n         vds_t& vds()\n         {\n            return vds_;\n         }\n\n         const vds_t& vds() const\n         {\n            return vds_;\n         }\n\n         bool side_effect() const\n         {\n            return true;\n         }\n\n      private:\n\n         vector_node<T>* vec0_node_ptr_;\n         vector_node<T>* vec1_node_ptr_;\n         bool            initialised_;\n         vds_t           vds_;\n      };\n\n      template <typename T, typename Operation>\n      class vec_binop_vecvec_node : public binary_node     <T>,\n                                    public vector_interface<T>\n      {\n      public:\n\n         typedef expression_node<T>*    expression_ptr;\n         typedef vector_node<T>*       vector_node_ptr;\n         typedef vector_holder<T>*   vector_holder_ptr;\n         typedef vec_data_store<T>               vds_t;\n\n         vec_binop_vecvec_node(const operator_type& opr,\n                               expression_ptr branch0,\n                               expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           vec0_node_ptr_(0),\n           vec1_node_ptr_(0),\n           temp_         (0),\n           temp_vec_node_(0),\n           initialised_(false)\n         {\n            bool v0_is_ivec = false;\n            bool v1_is_ivec = false;\n\n            if (is_vector_node(binary_node<T>::branch_[0].first))\n            {\n               vec0_node_ptr_ = static_cast<vector_node_ptr>(binary_node<T>::branch_[0].first);\n            }\n            else if (is_ivector_node(binary_node<T>::branch_[0].first))\n            {\n               vector_interface<T>* vi = reinterpret_cast<vector_interface<T>*>(0);\n\n               if (0 != (vi = dynamic_cast<vector_interface<T>*>(binary_node<T>::branch_[0].first)))\n               {\n                  vec0_node_ptr_ = vi->vec();\n                  v0_is_ivec     = true;\n               }\n 
           }\n\n            if (is_vector_node(binary_node<T>::branch_[1].first))\n            {\n               vec1_node_ptr_ = static_cast<vector_node_ptr>(binary_node<T>::branch_[1].first);\n            }\n            else if (is_ivector_node(binary_node<T>::branch_[1].first))\n            {\n               vector_interface<T>* vi = reinterpret_cast<vector_interface<T>*>(0);\n\n               if (0 != (vi = dynamic_cast<vector_interface<T>*>(binary_node<T>::branch_[1].first)))\n               {\n                  vec1_node_ptr_ = vi->vec();\n                  v1_is_ivec     = true;\n               }\n            }\n\n            if (vec0_node_ptr_ && vec1_node_ptr_)\n            {\n               vector_holder<T>& vec0 = vec0_node_ptr_->vec_holder();\n               vector_holder<T>& vec1 = vec1_node_ptr_->vec_holder();\n\n               if (v0_is_ivec && (vec0.size() <= vec1.size()))\n                  vds_ = vds_t(vec0_node_ptr_->vds());\n               else if (v1_is_ivec && (vec1.size() <= vec0.size()))\n                  vds_ = vds_t(vec1_node_ptr_->vds());\n               else\n                  vds_ = vds_t(std::min(vec0.size(),vec1.size()));\n\n               temp_          = new vector_holder<T>(vds().data(),vds().size());\n               temp_vec_node_ = new vector_node<T>  (vds(),temp_);\n\n               initialised_ = true;\n            }\n         }\n\n        ~vec_binop_vecvec_node()\n         {\n            delete temp_;\n            delete temp_vec_node_;\n         }\n\n         inline T value() const\n         {\n            if (initialised_)\n            {\n               binary_node<T>::branch_[0].first->value();\n               binary_node<T>::branch_[1].first->value();\n\n               T* vec0 = vec0_node_ptr_->vds().data();\n               T* vec1 = vec1_node_ptr_->vds().data();\n               T* vec2 = vds().data();\n\n               loop_unroll::details lud(size());\n               const T* upper_bound = vec2 + lud.upper_bound;\n\n    
           while (vec2 < upper_bound)\n               {\n                  #define exprtk_loop(N)                         \\\n                  vec2[N] = Operation::process(vec0[N],vec1[N]); \\\n\n                  exprtk_loop( 0) exprtk_loop( 1)\n                  exprtk_loop( 2) exprtk_loop( 3)\n                  #ifndef exprtk_disable_superscalar_unroll\n                  exprtk_loop( 4) exprtk_loop( 5)\n                  exprtk_loop( 6) exprtk_loop( 7)\n                  exprtk_loop( 8) exprtk_loop( 9)\n                  exprtk_loop(10) exprtk_loop(11)\n                  exprtk_loop(12) exprtk_loop(13)\n                  exprtk_loop(14) exprtk_loop(15)\n                  #endif\n\n                  vec0 += lud.batch_size;\n                  vec1 += lud.batch_size;\n                  vec2 += lud.batch_size;\n               }\n\n               int i = 0;\n\n               exprtk_disable_fallthrough_begin\n               switch (lud.remainder)\n               {\n                  #define case_stmt(N)                                             \\\n                  case N : { vec2[i] = Operation::process(vec0[i],vec1[i]); ++i; } \\\n\n                  #ifndef exprtk_disable_superscalar_unroll\n                  case_stmt(15) case_stmt(14)\n                  case_stmt(13) case_stmt(12)\n                  case_stmt(11) case_stmt(10)\n                  case_stmt( 9) case_stmt( 8)\n                  case_stmt( 7) case_stmt( 6)\n                  case_stmt( 5) case_stmt( 4)\n                  #endif\n                  case_stmt( 3) case_stmt( 2)\n                  case_stmt( 1)\n               }\n               exprtk_disable_fallthrough_end\n\n               #undef exprtk_loop\n               #undef case_stmt\n\n               return (vds().data())[0];\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         vector_node_ptr vec() const\n         {\n            return temp_vec_node_;\n         }\n\n         
vector_node_ptr vec()\n         {\n            return temp_vec_node_;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vecvecarith;\n         }\n\n         std::size_t size() const\n         {\n            return vds_.size();\n         }\n\n         vds_t& vds()\n         {\n            return vds_;\n         }\n\n         const vds_t& vds() const\n         {\n            return vds_;\n         }\n\n      private:\n\n         vector_node_ptr   vec0_node_ptr_;\n         vector_node_ptr   vec1_node_ptr_;\n         vector_holder_ptr temp_;\n         vector_node_ptr   temp_vec_node_;\n         bool              initialised_;\n         vds_t             vds_;\n      };\n\n      template <typename T, typename Operation>\n      class vec_binop_vecval_node : public binary_node     <T>,\n                                    public vector_interface<T>\n      {\n      public:\n\n         typedef expression_node<T>*    expression_ptr;\n         typedef vector_node<T>*       vector_node_ptr;\n         typedef vector_holder<T>*   vector_holder_ptr;\n         typedef vec_data_store<T>               vds_t;\n\n         vec_binop_vecval_node(const operator_type& opr,\n                               expression_ptr branch0,\n                               expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           vec0_node_ptr_(0),\n           temp_         (0),\n           temp_vec_node_(0)\n         {\n            bool v0_is_ivec = false;\n\n            if (is_vector_node(binary_node<T>::branch_[0].first))\n            {\n               vec0_node_ptr_ = static_cast<vector_node_ptr>(binary_node<T>::branch_[0].first);\n            }\n            else if (is_ivector_node(binary_node<T>::branch_[0].first))\n            {\n               vector_interface<T>* vi = reinterpret_cast<vector_interface<T>*>(0);\n\n               if (0 != (vi = 
dynamic_cast<vector_interface<T>*>(binary_node<T>::branch_[0].first)))\n               {\n                  vec0_node_ptr_ = vi->vec();\n                  v0_is_ivec     = true;\n               }\n            }\n\n            if (vec0_node_ptr_)\n            {\n               if (v0_is_ivec)\n                  vds() = vec0_node_ptr_->vds();\n               else\n                  vds() = vds_t(vec0_node_ptr_->size());\n\n               temp_          = new vector_holder<T>(vds());\n               temp_vec_node_ = new vector_node<T>  (vds(),temp_);\n            }\n         }\n\n        ~vec_binop_vecval_node()\n         {\n            delete temp_;\n            delete temp_vec_node_;\n         }\n\n         inline T value() const\n         {\n            if (vec0_node_ptr_)\n            {\n                           binary_node<T>::branch_[0].first->value();\n               const T v = binary_node<T>::branch_[1].first->value();\n\n               T* vec0 = vec0_node_ptr_->vds().data();\n               T* vec1 = vds().data();\n\n               loop_unroll::details lud(size());\n               const T* upper_bound = vec0 + lud.upper_bound;\n\n               while (vec0 < upper_bound)\n               {\n                  #define exprtk_loop(N)                   \\\n                  vec1[N] = Operation::process(vec0[N],v); \\\n\n                  exprtk_loop( 0) exprtk_loop( 1)\n                  exprtk_loop( 2) exprtk_loop( 3)\n                  #ifndef exprtk_disable_superscalar_unroll\n                  exprtk_loop( 4) exprtk_loop( 5)\n                  exprtk_loop( 6) exprtk_loop( 7)\n                  exprtk_loop( 8) exprtk_loop( 9)\n                  exprtk_loop(10) exprtk_loop(11)\n                  exprtk_loop(12) exprtk_loop(13)\n                  exprtk_loop(14) exprtk_loop(15)\n                  #endif\n\n                  vec0 += lud.batch_size;\n                  vec1 += lud.batch_size;\n               }\n\n               int i = 0;\n\n               
exprtk_disable_fallthrough_begin\n               switch (lud.remainder)\n               {\n                  #define case_stmt(N)                                       \\\n                  case N : { vec1[i] = Operation::process(vec0[i],v); ++i; } \\\n\n                  #ifndef exprtk_disable_superscalar_unroll\n                  case_stmt(15) case_stmt(14)\n                  case_stmt(13) case_stmt(12)\n                  case_stmt(11) case_stmt(10)\n                  case_stmt( 9) case_stmt( 8)\n                  case_stmt( 7) case_stmt( 6)\n                  case_stmt( 5) case_stmt( 4)\n                  #endif\n                  case_stmt( 3) case_stmt( 2)\n                  case_stmt( 1)\n               }\n               exprtk_disable_fallthrough_end\n\n               #undef exprtk_loop\n               #undef case_stmt\n\n               return (vds().data())[0];\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         vector_node_ptr vec() const\n         {\n            return temp_vec_node_;\n         }\n\n         vector_node_ptr vec()\n         {\n            return temp_vec_node_;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vecvalarith;\n         }\n\n         std::size_t size() const\n         {\n            return vds().size();\n         }\n\n         vds_t& vds()\n         {\n            return vds_;\n         }\n\n         const vds_t& vds() const\n         {\n            return vds_;\n         }\n\n      private:\n\n         vector_node_ptr   vec0_node_ptr_;\n         vector_holder_ptr temp_;\n         vector_node_ptr   temp_vec_node_;\n         vds_t             vds_;\n      };\n\n      template <typename T, typename Operation>\n      class vec_binop_valvec_node : public binary_node     <T>,\n                                    public vector_interface<T>\n      {\n      public:\n\n         typedef 
expression_node<T>*    expression_ptr;\n         typedef vector_node<T>*       vector_node_ptr;\n         typedef vector_holder<T>*   vector_holder_ptr;\n         typedef vec_data_store<T>               vds_t;\n\n         vec_binop_valvec_node(const operator_type& opr,\n                               expression_ptr branch0,\n                               expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           vec1_node_ptr_(0),\n           temp_         (0),\n           temp_vec_node_(0)\n         {\n            bool v1_is_ivec = false;\n\n            if (is_vector_node(binary_node<T>::branch_[1].first))\n            {\n               vec1_node_ptr_ = static_cast<vector_node_ptr>(binary_node<T>::branch_[1].first);\n            }\n            else if (is_ivector_node(binary_node<T>::branch_[1].first))\n            {\n               vector_interface<T>* vi = reinterpret_cast<vector_interface<T>*>(0);\n\n               if (0 != (vi = dynamic_cast<vector_interface<T>*>(binary_node<T>::branch_[1].first)))\n               {\n                  vec1_node_ptr_ = vi->vec();\n                  v1_is_ivec     = true;\n               }\n            }\n\n            if (vec1_node_ptr_)\n            {\n               if (v1_is_ivec)\n                  vds() = vec1_node_ptr_->vds();\n               else\n                  vds() = vds_t(vec1_node_ptr_->size());\n\n               temp_          = new vector_holder<T>(vds());\n               temp_vec_node_ = new vector_node<T>  (vds(),temp_);\n            }\n         }\n\n        ~vec_binop_valvec_node()\n         {\n            delete temp_;\n            delete temp_vec_node_;\n         }\n\n         inline T value() const\n         {\n            if (vec1_node_ptr_)\n            {\n               const T v = binary_node<T>::branch_[0].first->value();\n                           binary_node<T>::branch_[1].first->value();\n\n               T* vec0 = vds().data();\n               T* vec1 = 
vec1_node_ptr_->vds().data();\n\n               loop_unroll::details lud(size());\n               const T* upper_bound = vec0 + lud.upper_bound;\n\n               while (vec0 < upper_bound)\n               {\n                  #define exprtk_loop(N)                   \\\n                  vec0[N] = Operation::process(v,vec1[N]); \\\n\n                  exprtk_loop( 0) exprtk_loop( 1)\n                  exprtk_loop( 2) exprtk_loop( 3)\n                  #ifndef exprtk_disable_superscalar_unroll\n                  exprtk_loop( 4) exprtk_loop( 5)\n                  exprtk_loop( 6) exprtk_loop( 7)\n                  exprtk_loop( 8) exprtk_loop( 9)\n                  exprtk_loop(10) exprtk_loop(11)\n                  exprtk_loop(12) exprtk_loop(13)\n                  exprtk_loop(14) exprtk_loop(15)\n                  #endif\n\n                  vec0 += lud.batch_size;\n                  vec1 += lud.batch_size;\n               }\n\n               int i = 0;\n\n               exprtk_disable_fallthrough_begin\n               switch (lud.remainder)\n               {\n                  #define case_stmt(N)                                       \\\n                  case N : { vec0[i] = Operation::process(v,vec1[i]); ++i; } \\\n\n                  #ifndef exprtk_disable_superscalar_unroll\n                  case_stmt(15) case_stmt(14)\n                  case_stmt(13) case_stmt(12)\n                  case_stmt(11) case_stmt(10)\n                  case_stmt( 9) case_stmt( 8)\n                  case_stmt( 7) case_stmt( 6)\n                  case_stmt( 5) case_stmt( 4)\n                  #endif\n                  case_stmt( 3) case_stmt( 2)\n                  case_stmt( 1)\n               }\n               exprtk_disable_fallthrough_end\n\n               #undef exprtk_loop\n               #undef case_stmt\n\n               return (vds().data())[0];\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         vector_node_ptr 
vec() const\n         {\n            return temp_vec_node_;\n         }\n\n         vector_node_ptr vec()\n         {\n            return temp_vec_node_;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vecvalarith;\n         }\n\n         std::size_t size() const\n         {\n            return vds().size();\n         }\n\n         vds_t& vds()\n         {\n            return vds_;\n         }\n\n         const vds_t& vds() const\n         {\n            return vds_;\n         }\n\n      private:\n\n         vector_node_ptr   vec1_node_ptr_;\n         vector_holder_ptr temp_;\n         vector_node_ptr   temp_vec_node_;\n         vds_t             vds_;\n      };\n\n      template <typename T, typename Operation>\n      class unary_vector_node : public unary_node      <T>,\n                                public vector_interface<T>\n      {\n      public:\n\n         typedef expression_node<T>*    expression_ptr;\n         typedef vector_node<T>*       vector_node_ptr;\n         typedef vector_holder<T>*   vector_holder_ptr;\n         typedef vec_data_store<T>               vds_t;\n\n         unary_vector_node(const operator_type& opr, expression_ptr branch0)\n         : unary_node<T>(opr, branch0),\n           vec0_node_ptr_(0),\n           temp_         (0),\n           temp_vec_node_(0)\n         {\n            bool vec0_is_ivec = false;\n\n            if (is_vector_node(unary_node<T>::branch_))\n            {\n               vec0_node_ptr_ = static_cast<vector_node_ptr>(unary_node<T>::branch_);\n            }\n            else if (is_ivector_node(unary_node<T>::branch_))\n            {\n               vector_interface<T>* vi = reinterpret_cast<vector_interface<T>*>(0);\n\n               if (0 != (vi = dynamic_cast<vector_interface<T>*>(unary_node<T>::branch_)))\n               {\n                  vec0_node_ptr_ = vi->vec();\n                  vec0_is_ivec   = true;\n        
       }\n            }\n\n            if (vec0_node_ptr_)\n            {\n               if (vec0_is_ivec)\n                  vds_ = vec0_node_ptr_->vds();\n               else\n                  vds_ = vds_t(vec0_node_ptr_->size());\n\n               temp_          = new vector_holder<T>(vds());\n               temp_vec_node_ = new vector_node<T>  (vds(),temp_);\n            }\n         }\n\n        ~unary_vector_node()\n         {\n            delete temp_;\n            delete temp_vec_node_;\n         }\n\n         inline T value() const\n         {\n            unary_node<T>::branch_->value();\n\n            if (vec0_node_ptr_)\n            {\n               T* vec0 = vec0_node_ptr_->vds().data();\n               T* vec1 = vds().data();\n\n               loop_unroll::details lud(size());\n               const T* upper_bound = vec0 + lud.upper_bound;\n\n               while (vec0 < upper_bound)\n               {\n                  #define exprtk_loop(N)                 \\\n                  vec1[N] = Operation::process(vec0[N]); \\\n\n                  exprtk_loop( 0) exprtk_loop( 1)\n                  exprtk_loop( 2) exprtk_loop( 3)\n                  #ifndef exprtk_disable_superscalar_unroll\n                  exprtk_loop( 4) exprtk_loop( 5)\n                  exprtk_loop( 6) exprtk_loop( 7)\n                  exprtk_loop( 8) exprtk_loop( 9)\n                  exprtk_loop(10) exprtk_loop(11)\n                  exprtk_loop(12) exprtk_loop(13)\n                  exprtk_loop(14) exprtk_loop(15)\n                  #endif\n\n                  vec0 += lud.batch_size;\n                  vec1 += lud.batch_size;\n               }\n\n               int i = 0;\n\n               exprtk_disable_fallthrough_begin\n               switch (lud.remainder)\n               {\n                  #define case_stmt(N)                                     \\\n                  case N : { vec1[i] = Operation::process(vec0[i]); ++i; } \\\n\n                  #ifndef 
exprtk_disable_superscalar_unroll\n                  case_stmt(15) case_stmt(14)\n                  case_stmt(13) case_stmt(12)\n                  case_stmt(11) case_stmt(10)\n                  case_stmt( 9) case_stmt( 8)\n                  case_stmt( 7) case_stmt( 6)\n                  case_stmt( 5) case_stmt( 4)\n                  #endif\n                  case_stmt( 3) case_stmt( 2)\n                  case_stmt( 1)\n               }\n               exprtk_disable_fallthrough_end\n\n               #undef exprtk_loop\n               #undef case_stmt\n\n               return (vds().data())[0];\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         vector_node_ptr vec() const\n         {\n            return temp_vec_node_;\n         }\n\n         vector_node_ptr vec()\n         {\n            return temp_vec_node_;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vecunaryop;\n         }\n\n         std::size_t size() const\n         {\n            return vds().size();\n         }\n\n         vds_t& vds()\n         {\n            return vds_;\n         }\n\n         const vds_t& vds() const\n         {\n            return vds_;\n         }\n\n      private:\n\n         vector_node_ptr   vec0_node_ptr_;\n         vector_holder_ptr temp_;\n         vector_node_ptr   temp_vec_node_;\n         vds_t             vds_;\n      };\n\n      template <typename T>\n      class scand_node : public binary_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         scand_node(const operator_type& opr,\n                    expression_ptr branch0,\n                    expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1)\n         {}\n\n         inline T value() const\n         {\n            return (\n                     std::not_equal_to<T>()\n                        
(T(0),binary_node<T>::branch_[0].first->value()) &&\n                     std::not_equal_to<T>()\n                        (T(0),binary_node<T>::branch_[1].first->value())\n                   ) ? T(1) : T(0);\n         }\n      };\n\n      template <typename T>\n      class scor_node : public binary_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         scor_node(const operator_type& opr,\n                   expression_ptr branch0,\n                   expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1)\n         {}\n\n         inline T value() const\n         {\n            return (\n                     std::not_equal_to<T>()\n                        (T(0),binary_node<T>::branch_[0].first->value()) ||\n                     std::not_equal_to<T>()\n                        (T(0),binary_node<T>::branch_[1].first->value())\n                   ) ? T(1) : T(0);\n         }\n      };\n\n      template <typename T, typename IFunction, std::size_t N>\n      class function_N_node : public expression_node<T>\n      {\n      public:\n\n         // Function of N parameters.\n         typedef expression_node<T>* expression_ptr;\n         typedef std::pair<expression_ptr,bool> branch_t;\n         typedef IFunction ifunction;\n\n         function_N_node(ifunction* func)\n         : function_((N == func->param_count) ? 
func : reinterpret_cast<ifunction*>(0)),\n           parameter_count_(func->param_count)\n         {}\n\n        ~function_N_node()\n         {\n            cleanup_branches::execute<T,N>(branch_);\n         }\n\n         template <std::size_t NumBranches>\n         bool init_branches(expression_ptr (&b)[NumBranches])\n         {\n            // Needed for incompetent and broken msvc compiler versions\n            #ifdef _MSC_VER\n             #pragma warning(push)\n             #pragma warning(disable: 4127)\n            #endif\n            if (N != NumBranches)\n               return false;\n            else\n            {\n               for (std::size_t i = 0; i < NumBranches; ++i)\n               {\n                  if (b[i])\n                     branch_[i] = std::make_pair(b[i],branch_deletable(b[i]));\n                  else\n                     return false;\n               }\n               return true;\n            }\n            #ifdef _MSC_VER\n             #pragma warning(pop)\n            #endif\n         }\n\n         inline bool operator <(const function_N_node<T,IFunction,N>& fn) const\n         {\n            return this < (&fn);\n         }\n\n         inline T value() const\n         {\n            // Needed for incompetent and broken msvc compiler versions\n            #ifdef _MSC_VER\n             #pragma warning(push)\n             #pragma warning(disable: 4127)\n            #endif\n            if ((0 == function_) || (0 == N))\n               return std::numeric_limits<T>::quiet_NaN();\n            else\n            {\n               T v[N];\n               evaluate_branches<T,N>::execute(v,branch_);\n               return invoke<T,N>::execute(*function_,v);\n            }\n            #ifdef _MSC_VER\n             #pragma warning(pop)\n            #endif\n         }\n\n         template <typename T_, std::size_t BranchCount>\n         struct evaluate_branches\n         {\n            static inline void execute(T_ (&v)[BranchCount], const 
branch_t (&b)[BranchCount])\n            {\n               for (std::size_t i = 0; i < BranchCount; ++i)\n               {\n                  v[i] = b[i].first->value();\n               }\n            }\n         };\n\n         template <typename T_>\n         struct evaluate_branches <T_,5>\n         {\n            static inline void execute(T_ (&v)[5], const branch_t (&b)[5])\n            {\n               v[0] = b[0].first->value();\n               v[1] = b[1].first->value();\n               v[2] = b[2].first->value();\n               v[3] = b[3].first->value();\n               v[4] = b[4].first->value();\n            }\n         };\n\n         template <typename T_>\n         struct evaluate_branches <T_,4>\n         {\n            static inline void execute(T_ (&v)[4], const branch_t (&b)[4])\n            {\n               v[0] = b[0].first->value();\n               v[1] = b[1].first->value();\n               v[2] = b[2].first->value();\n               v[3] = b[3].first->value();\n            }\n         };\n\n         template <typename T_>\n         struct evaluate_branches <T_,3>\n         {\n            static inline void execute(T_ (&v)[3], const branch_t (&b)[3])\n            {\n               v[0] = b[0].first->value();\n               v[1] = b[1].first->value();\n               v[2] = b[2].first->value();\n            }\n         };\n\n         template <typename T_>\n         struct evaluate_branches <T_,2>\n         {\n            static inline void execute(T_ (&v)[2], const branch_t (&b)[2])\n            {\n               v[0] = b[0].first->value();\n               v[1] = b[1].first->value();\n            }\n         };\n\n         template <typename T_>\n         struct evaluate_branches <T_,1>\n         {\n            static inline void execute(T_ (&v)[1], const branch_t (&b)[1])\n            {\n               v[0] = b[0].first->value();\n            }\n         };\n\n         template <typename T_, std::size_t ParamCount>\n         struct invoke 
{ static inline T_ execute(ifunction&, T_ (&)[ParamCount]) { return std::numeric_limits<T_>::quiet_NaN(); } };\n\n         template <typename T_>\n         struct invoke<T_,20>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[20])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],v[10],v[11],v[12],v[13],v[14],v[15],v[16],v[17],v[18],v[19]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,19>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[19])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],v[10],v[11],v[12],v[13],v[14],v[15],v[16],v[17],v[18]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,18>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[18])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],v[10],v[11],v[12],v[13],v[14],v[15],v[16],v[17]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,17>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[17])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],v[10],v[11],v[12],v[13],v[14],v[15],v[16]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,16>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[16])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],v[10],v[11],v[12],v[13],v[14],v[15]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,15>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[15])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],v[10],v[11],v[12],v[13],v[14]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,14>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[14])\n            { return 
f(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],v[10],v[11],v[12],v[13]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,13>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[13])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],v[10],v[11],v[12]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,12>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[12])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],v[10],v[11]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,11>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[11])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],v[10]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,10>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[10])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,9>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[9])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,8>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[8])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,7>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[7])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5],v[6]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,6>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[6])\n            { return f(v[0],v[1],v[2],v[3],v[4],v[5]); }\n         };\n\n         template <typename T_>\n         struct 
invoke<T_,5>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[5])\n            { return f(v[0],v[1],v[2],v[3],v[4]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,4>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[4])\n            { return f(v[0],v[1],v[2],v[3]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,3>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[3])\n            { return f(v[0],v[1],v[2]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,2>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[2])\n            { return f(v[0],v[1]); }\n         };\n\n         template <typename T_>\n         struct invoke<T_,1>\n         {\n            static inline T_ execute(ifunction& f, T_ (&v)[1])\n            { return f(v[0]); }\n         };\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_function;\n         }\n\n      private:\n\n         ifunction*  function_;\n         std::size_t parameter_count_;\n         branch_t    branch_[N];\n      };\n\n      template <typename T, typename IFunction>\n      class function_N_node<T,IFunction,0> : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef IFunction ifunction;\n\n         function_N_node(ifunction* func)\n         : function_((0 == func->param_count) ? 
func : reinterpret_cast<ifunction*>(0))\n         {}\n\n         inline bool operator <(const function_N_node<T,IFunction,0>& fn) const\n         {\n            return this < (&fn);\n         }\n\n         inline T value() const\n         {\n            if (function_)\n               return (*function_)();\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_function;\n         }\n\n      private:\n\n         ifunction* function_;\n      };\n\n      template <typename T, typename VarArgFunction>\n      class vararg_function_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n\n         vararg_function_node(VarArgFunction*  func,\n                              const std::vector<expression_ptr>& arg_list)\n         : function_(func),\n           arg_list_(arg_list)\n         {\n            value_list_.resize(arg_list.size(),std::numeric_limits<T>::quiet_NaN());\n         }\n\n        ~vararg_function_node()\n         {\n            for (std::size_t i = 0; i < arg_list_.size(); ++i)\n            {\n               if (arg_list_[i] && !details::is_variable_node(arg_list_[i]))\n               {\n                  destroy_node(arg_list_[i]);\n               }\n            }\n         }\n\n         inline bool operator <(const vararg_function_node<T,VarArgFunction>& fn) const\n         {\n            return this < (&fn);\n         }\n\n         inline T value() const\n         {\n            if (function_)\n            {\n               populate_value_list();\n               return (*function_)(value_list_);\n            }\n            else\n               return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_vafunction;\n       
  }\n\n      private:\n\n         inline void populate_value_list() const\n         {\n            for (std::size_t i = 0; i < arg_list_.size(); ++i)\n            {\n               value_list_[i] = arg_list_[i]->value();\n            }\n         }\n\n         VarArgFunction* function_;\n         std::vector<expression_ptr> arg_list_;\n         mutable std::vector<T> value_list_;\n      };\n\n      template <typename T, typename GenericFunction>\n      class generic_function_node : public expression_node<T>\n      {\n      public:\n\n         typedef type_store<T>                         type_store_t;\n         typedef expression_node<T>*                 expression_ptr;\n         typedef variable_node<T>                   variable_node_t;\n         typedef vector_node<T>                       vector_node_t;\n         typedef variable_node_t*               variable_node_ptr_t;\n         typedef vector_node_t*                   vector_node_ptr_t;\n         typedef range_interface<T>               range_interface_t;\n         typedef range_data_type<T>               range_data_type_t;\n         typedef range_pack<T>                              range_t;\n         typedef std::pair<expression_ptr,bool>            branch_t;\n         typedef std::pair<void*,std::size_t>                void_t;\n         typedef std::vector<T>                            tmp_vs_t;\n         typedef std::vector<type_store_t>         typestore_list_t;\n         typedef std::vector<range_data_type_t>        range_list_t;\n\n         generic_function_node(const std::vector<expression_ptr>& arg_list,\n                               GenericFunction* func = (GenericFunction*)(0))\n         : function_(func),\n           arg_list_(arg_list)\n         {}\n\n         virtual ~generic_function_node()\n         {\n            cleanup_branches::execute(branch_);\n         }\n\n         virtual bool init_branches()\n         {\n            expr_as_vec1_store_.resize(arg_list_.size(),T(0)               
);\n            typestore_list_    .resize(arg_list_.size(),type_store_t()     );\n            range_list_        .resize(arg_list_.size(),range_data_type_t());\n            branch_            .resize(arg_list_.size(),branch_t((expression_ptr)0,false));\n\n            for (std::size_t i = 0; i < arg_list_.size(); ++i)\n            {\n               type_store_t& ts = typestore_list_[i];\n\n               if (0 == arg_list_[i])\n                  return false;\n               else if (is_ivector_node(arg_list_[i]))\n               {\n                  vector_interface<T>* vi = reinterpret_cast<vector_interface<T>*>(0);\n\n                  if (0 == (vi = dynamic_cast<vector_interface<T>*>(arg_list_[i])))\n                     return false;\n\n                  ts.size = vi->size();\n                  ts.data = vi->vds().data();\n                  ts.type = type_store_t::e_vector;\n               }\n               #ifndef exprtk_disable_string_capabilities\n               else if (is_generally_string_node(arg_list_[i]))\n               {\n                  string_base_node<T>* sbn = reinterpret_cast<string_base_node<T>*>(0);\n\n                  if (0 == (sbn = dynamic_cast<string_base_node<T>*>(arg_list_[i])))\n                     return false;\n\n                  ts.size = sbn->size();\n                  ts.data = reinterpret_cast<void*>(const_cast<char_ptr>(sbn->base()));\n                  ts.type = type_store_t::e_string;\n\n                  range_list_[i].data      = ts.data;\n                  range_list_[i].size      = ts.size;\n                  range_list_[i].type_size = sizeof(char);\n                  range_list_[i].str_node  = sbn;\n\n                  range_interface_t* ri = reinterpret_cast<range_interface_t*>(0);\n\n                  if (0 == (ri = dynamic_cast<range_interface_t*>(arg_list_[i])))\n                     return false;\n\n                  range_t& rp = ri->range_ref();\n\n                  if (\n                       rp.const_range() 
&&\n                       is_const_string_range_node(arg_list_[i])\n                     )\n                  {\n                     ts.size = rp.const_size();\n                     ts.data = static_cast<char_ptr>(ts.data) + rp.n0_c.second;\n                     range_list_[i].range = reinterpret_cast<range_t*>(0);\n                  }\n                  else\n                     range_list_[i].range = &(ri->range_ref());\n               }\n               #endif\n               else if (is_variable_node(arg_list_[i]))\n               {\n                  variable_node_ptr_t var = variable_node_ptr_t(0);\n\n                  if (0 == (var = dynamic_cast<variable_node_ptr_t>(arg_list_[i])))\n                     return false;\n\n                  ts.size = 1;\n                  ts.data = &var->ref();\n                  ts.type = type_store_t::e_scalar;\n               }\n               else\n               {\n                  ts.size = 1;\n                  ts.data = reinterpret_cast<void*>(&expr_as_vec1_store_[i]);\n                  ts.type = type_store_t::e_scalar;\n               }\n\n               branch_[i] = std::make_pair(arg_list_[i],branch_deletable(arg_list_[i]));\n            }\n\n            return true;\n         }\n\n         inline bool operator <(const generic_function_node<T,GenericFunction>& fn) const\n         {\n            return this < (&fn);\n         }\n\n         inline T value() const\n         {\n            if (function_)\n            {\n               if (populate_value_list())\n               {\n                  typedef typename GenericFunction::parameter_list_t parameter_list_t;\n\n                  return (*function_)(parameter_list_t(typestore_list_));\n               }\n            }\n\n            return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_genfunction;\n         }\n\n      protected:\n\n    
     inline virtual bool populate_value_list() const\n         {\n            for (std::size_t i = 0; i < branch_.size(); ++i)\n            {\n               expr_as_vec1_store_[i] = branch_[i].first->value();\n            }\n\n            for (std::size_t i = 0; i < branch_.size(); ++i)\n            {\n               range_data_type_t& rdt = range_list_[i];\n\n               if (rdt.range)\n               {\n                  range_t&    rp = (*rdt.range);\n                  std::size_t r0 = 0;\n                  std::size_t r1 = 0;\n\n                  if (rp(r0,r1,rdt.size))\n                  {\n                     type_store_t& ts = typestore_list_[i];\n\n                     ts.size = rp.cache_size();\n                     #ifndef exprtk_disable_string_capabilities\n                     if (ts.type == type_store_t::e_string)\n                        ts.data = const_cast<char_ptr>(rdt.str_node->base()) + rp.cache.first;\n                     else\n                     #endif\n                        ts.data = static_cast<char_ptr>(rdt.data) + (rp.cache.first * rdt.type_size);\n                  }\n                  else\n                     return false;\n               }\n            }\n\n            return true;\n         }\n\n         GenericFunction* function_;\n         mutable typestore_list_t typestore_list_;\n\n      private:\n\n         std::vector<expression_ptr> arg_list_;\n         std::vector<branch_t>         branch_;\n         mutable tmp_vs_t  expr_as_vec1_store_;\n         mutable range_list_t      range_list_;\n      };\n\n      #ifndef exprtk_disable_string_capabilities\n      template <typename T, typename StringFunction>\n      class string_function_node : public generic_function_node<T,StringFunction>,\n                                   public string_base_node<T>,\n                                   public range_interface <T>\n      {\n      public:\n\n         typedef generic_function_node<T,StringFunction> gen_function_t;\n         
typedef range_pack<T> range_t;\n\n         string_function_node(StringFunction* func,\n                              const std::vector<typename gen_function_t::expression_ptr>& arg_list)\n         : gen_function_t(arg_list,func)\n         {\n            range_.n0_c = std::make_pair<bool,std::size_t>(true,0);\n            range_.n1_c = std::make_pair<bool,std::size_t>(true,0);\n            range_.cache.first  = range_.n0_c.second;\n            range_.cache.second = range_.n1_c.second;\n         }\n\n         inline bool operator <(const string_function_node<T,StringFunction>& fn) const\n         {\n            return this < (&fn);\n         }\n\n         inline T value() const\n         {\n            T result = std::numeric_limits<T>::quiet_NaN();\n\n            if (gen_function_t::function_)\n            {\n               if (gen_function_t::populate_value_list())\n               {\n                  typedef typename StringFunction::parameter_list_t parameter_list_t;\n\n                  result = (*gen_function_t::function_)(ret_string_,\n                                                        parameter_list_t(gen_function_t::typestore_list_));\n\n                  range_.n1_c.second  = ret_string_.size() - 1;\n                  range_.cache.second = range_.n1_c.second;\n\n                  return result;\n               }\n            }\n\n            return result;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_strfunction;\n         }\n\n         std::string str() const\n         {\n            return ret_string_;\n         }\n\n         char_cptr base() const\n         {\n           return &ret_string_[0];\n         }\n\n         std::size_t size() const\n         {\n            return ret_string_.size();\n         }\n\n         range_t& range_ref()\n         {\n            return range_;\n         }\n\n         const range_t& range_ref() const\n         {\n            
return range_;\n         }\n\n      protected:\n\n         mutable range_t     range_;\n         mutable std::string ret_string_;\n      };\n      #endif\n\n      template <typename T, typename GenericFunction>\n      class multimode_genfunction_node : public generic_function_node<T,GenericFunction>\n      {\n      public:\n\n         typedef generic_function_node<T,GenericFunction> gen_function_t;\n         typedef range_pack<T> range_t;\n\n         multimode_genfunction_node(GenericFunction* func,\n                                    const std::size_t& param_seq_index,\n                                    const std::vector<typename gen_function_t::expression_ptr>& arg_list)\n         : gen_function_t(arg_list,func),\n           param_seq_index_(param_seq_index)\n         {}\n\n         inline T value() const\n         {\n            T result = std::numeric_limits<T>::quiet_NaN();\n\n            if (gen_function_t::function_)\n            {\n               if (gen_function_t::populate_value_list())\n               {\n                  typedef typename GenericFunction::parameter_list_t parameter_list_t;\n\n                  return (*gen_function_t::function_)(param_seq_index_,\n                                                      parameter_list_t(gen_function_t::typestore_list_));\n               }\n            }\n\n            return result;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_genfunction;\n         }\n\n      private:\n\n         std::size_t param_seq_index_;\n      };\n\n      #ifndef exprtk_disable_string_capabilities\n      template <typename T, typename StringFunction>\n      class multimode_strfunction_node : public string_function_node<T,StringFunction>\n      {\n      public:\n\n         typedef string_function_node<T,StringFunction> str_function_t;\n         typedef range_pack<T> range_t;\n\n         multimode_strfunction_node(StringFunction* func,\n      
                              const std::size_t& param_seq_index,\n                                    const std::vector<typename str_function_t::expression_ptr>& arg_list)\n         : str_function_t(func,arg_list),\n           param_seq_index_(param_seq_index)\n         {}\n\n         inline T value() const\n         {\n            T result = std::numeric_limits<T>::quiet_NaN();\n\n            if (str_function_t::function_)\n            {\n               if (str_function_t::populate_value_list())\n               {\n                  typedef typename StringFunction::parameter_list_t parameter_list_t;\n\n                  result = (*str_function_t::function_)(param_seq_index_,\n                                                        str_function_t::ret_string_,\n                                                        parameter_list_t(str_function_t::typestore_list_));\n\n                  str_function_t::range_.n1_c.second  = str_function_t::ret_string_.size() - 1;\n                  str_function_t::range_.cache.second = str_function_t::range_.n1_c.second;\n\n                  return result;\n               }\n            }\n\n            return result;\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_strfunction;\n         }\n\n      private:\n\n         const std::size_t param_seq_index_;\n      };\n      #endif\n\n      class return_exception\n      {};\n\n      template <typename T>\n      class null_igenfunc\n      {\n      public:\n\n         virtual ~null_igenfunc()\n         {}\n\n         typedef type_store<T> generic_type;\n         typedef typename generic_type::parameter_list parameter_list_t;\n\n         inline virtual T operator() (parameter_list_t)\n         {\n            return std::numeric_limits<T>::quiet_NaN();\n         }\n      };\n\n      #ifndef exprtk_disable_return_statement\n      template <typename T>\n      class return_node : public 
generic_function_node<T,null_igenfunc<T> >\n      {\n      public:\n\n         typedef null_igenfunc<T> igeneric_function_t;\n         typedef igeneric_function_t* igeneric_function_ptr;\n         typedef generic_function_node<T,igeneric_function_t> gen_function_t;\n         typedef results_context<T> results_context_t;\n\n         return_node(const std::vector<typename gen_function_t::expression_ptr>& arg_list,\n                     results_context_t& rc)\n         : gen_function_t  (arg_list),\n           results_context_(&rc)\n         {}\n\n         inline T value() const\n         {\n            if (\n                 (0 != results_context_) &&\n                 gen_function_t::populate_value_list()\n               )\n            {\n               typedef typename type_store<T>::parameter_list parameter_list_t;\n\n               results_context_->\n                  assign(parameter_list_t(gen_function_t::typestore_list_));\n\n               throw return_exception();\n            }\n\n            return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_return;\n         }\n\n      private:\n\n         results_context_t* results_context_;\n      };\n\n      template <typename T>\n      class return_envelope_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef results_context<T>  results_context_t;\n\n         return_envelope_node(expression_ptr body, results_context_t& rc)\n         : results_context_(&rc  ),\n           return_invoked_ (false),\n           body_           (body ),\n           body_deletable_ (branch_deletable(body_))\n         {}\n\n        ~return_envelope_node()\n         {\n            if (body_ && body_deletable_)\n            {\n               destroy_node(body_);\n            }\n         }\n\n         inline T value() const\n         {\n   
         try\n            {\n               return_invoked_ = false;\n               results_context_->clear();\n\n               return body_->value();\n            }\n            catch(const return_exception&)\n            {\n               return_invoked_ = true;\n               return std::numeric_limits<T>::quiet_NaN();\n            }\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_retenv;\n         }\n\n         inline bool* retinvk_ptr()\n         {\n            return &return_invoked_;\n         }\n\n      private:\n\n         results_context_t* results_context_;\n         mutable bool       return_invoked_;\n         expression_ptr     body_;\n         const bool         body_deletable_;\n      };\n      #endif\n\n      #define exprtk_define_unary_op(OpName)                    \\\n      template <typename T>                                     \\\n      struct OpName##_op                                        \\\n      {                                                         \\\n         typedef typename functor_t<T>::Type Type;              \\\n         typedef typename expression_node<T>::node_type node_t; \\\n                                                                \\\n         static inline T process(Type v)                        \\\n         {                                                      \\\n            return numeric:: OpName (v);                        \\\n         }                                                      \\\n                                                                \\\n         static inline node_t type()                            \\\n         {                                                      \\\n            return expression_node<T>::e_##OpName;              \\\n         }                                                      \\\n                                                                \\\n         static 
inline details::operator_type operation()       \\\n         {                                                      \\\n            return details::e_##OpName;                         \\\n         }                                                      \\\n      };                                                        \\\n\n      exprtk_define_unary_op(abs  )\n      exprtk_define_unary_op(acos )\n      exprtk_define_unary_op(acosh)\n      exprtk_define_unary_op(asin )\n      exprtk_define_unary_op(asinh)\n      exprtk_define_unary_op(atan )\n      exprtk_define_unary_op(atanh)\n      exprtk_define_unary_op(ceil )\n      exprtk_define_unary_op(cos  )\n      exprtk_define_unary_op(cosh )\n      exprtk_define_unary_op(cot  )\n      exprtk_define_unary_op(csc  )\n      exprtk_define_unary_op(d2g  )\n      exprtk_define_unary_op(d2r  )\n      exprtk_define_unary_op(erf  )\n      exprtk_define_unary_op(erfc )\n      exprtk_define_unary_op(exp  )\n      exprtk_define_unary_op(expm1)\n      exprtk_define_unary_op(floor)\n      exprtk_define_unary_op(frac )\n      exprtk_define_unary_op(g2d  )\n      exprtk_define_unary_op(log  )\n      exprtk_define_unary_op(log10)\n      exprtk_define_unary_op(log2 )\n      exprtk_define_unary_op(log1p)\n      exprtk_define_unary_op(ncdf )\n      exprtk_define_unary_op(neg  )\n      exprtk_define_unary_op(notl )\n      exprtk_define_unary_op(pos  )\n      exprtk_define_unary_op(r2d  )\n      exprtk_define_unary_op(round)\n      exprtk_define_unary_op(sec  )\n      exprtk_define_unary_op(sgn  )\n      exprtk_define_unary_op(sin  )\n      exprtk_define_unary_op(sinc )\n      exprtk_define_unary_op(sinh )\n      exprtk_define_unary_op(sqrt )\n      exprtk_define_unary_op(tan  )\n      exprtk_define_unary_op(tanh )\n      exprtk_define_unary_op(trunc)\n      #undef exprtk_define_unary_op\n\n      template <typename T>\n      struct opr_base\n      {\n         typedef typename details::functor_t<T>::Type Type;\n         typedef typename 
details::functor_t<T>::RefType RefType;\n         typedef typename details::functor_t<T> functor_t;\n         typedef typename functor_t::qfunc_t quaternary_functor_t;\n         typedef typename functor_t::tfunc_t    trinary_functor_t;\n         typedef typename functor_t::bfunc_t     binary_functor_t;\n         typedef typename functor_t::ufunc_t      unary_functor_t;\n      };\n\n      template <typename T>\n      struct add_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         typedef typename opr_base<T>::RefType RefType;\n         static inline T process(Type t1, Type t2) { return t1 + t2; }\n         static inline T process(Type t1, Type t2, Type t3) { return t1 + t2 + t3; }\n         static inline void assign(RefType t1, Type t2) { t1 += t2; }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_add; }\n         static inline details::operator_type operation() { return details::e_add; }\n      };\n\n      template <typename T>\n      struct mul_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         typedef typename opr_base<T>::RefType RefType;\n         static inline T process(Type t1, Type t2) { return t1 * t2; }\n         static inline T process(Type t1, Type t2, Type t3) { return t1 * t2 * t3; }\n         static inline void assign(RefType t1, Type t2) { t1 *= t2; }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_mul; }\n         static inline details::operator_type operation() { return details::e_mul; }\n      };\n\n      template <typename T>\n      struct sub_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         typedef typename opr_base<T>::RefType RefType;\n         static inline T process(Type t1, Type t2) { return t1 - t2; }\n         static inline T process(Type t1, Type t2, Type t3) { return t1 - t2 - t3; }\n         static inline 
void assign(RefType t1, Type t2) { t1 -= t2; }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_sub; }\n         static inline details::operator_type operation() { return details::e_sub; }\n      };\n\n      template <typename T>\n      struct div_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         typedef typename opr_base<T>::RefType RefType;\n         static inline T process(Type t1, Type t2) { return t1 / t2; }\n         static inline T process(Type t1, Type t2, Type t3) { return t1 / t2 / t3; }\n         static inline void assign(RefType t1, Type t2) { t1 /= t2; }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_div; }\n         static inline details::operator_type operation() { return details::e_div; }\n      };\n\n      template <typename T>\n      struct mod_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         typedef typename opr_base<T>::RefType RefType;\n         static inline T process(Type t1, Type t2) { return numeric::modulus<T>(t1,t2); }\n         static inline void assign(RefType t1, Type t2) { t1 = numeric::modulus<T>(t1,t2); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_mod; }\n         static inline details::operator_type operation() { return details::e_mod; }\n      };\n\n      template <typename T>\n      struct pow_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         typedef typename opr_base<T>::RefType RefType;\n         static inline T process(Type t1, Type t2) { return numeric::pow<T>(t1,t2); }\n         static inline void assign(RefType t1, Type t2) { t1 = numeric::pow<T>(t1,t2); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_pow; }\n         static inline details::operator_type operation() { return 
details::e_pow; }\n      };\n\n      template <typename T>\n      struct lt_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(Type t1, Type t2) { return ((t1 < t2) ? T(1) : T(0)); }\n         static inline T process(const std::string& t1, const std::string& t2) { return ((t1 < t2) ? T(1) : T(0)); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_lt; }\n         static inline details::operator_type operation() { return details::e_lt; }\n      };\n\n      template <typename T>\n      struct lte_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(Type t1, Type t2) { return ((t1 <= t2) ? T(1) : T(0)); }\n         static inline T process(const std::string& t1, const std::string& t2) { return ((t1 <= t2) ? T(1) : T(0)); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_lte; }\n         static inline details::operator_type operation() { return details::e_lte; }\n      };\n\n      template <typename T>\n      struct gt_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(Type t1, Type t2) { return ((t1 > t2) ? T(1) : T(0)); }\n         static inline T process(const std::string& t1, const std::string& t2) { return ((t1 > t2) ? T(1) : T(0)); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_gt; }\n         static inline details::operator_type operation() { return details::e_gt; }\n      };\n\n      template <typename T>\n      struct gte_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(Type t1, Type t2) { return ((t1 >= t2) ? T(1) : T(0)); }\n         static inline T process(const std::string& t1, const std::string& t2) { return ((t1 >= t2) ? 
T(1) : T(0)); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_gte; }\n         static inline details::operator_type operation() { return details::e_gte; }\n      };\n\n      template <typename T>\n      struct eq_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(Type t1, Type t2) { return (std::equal_to<T>()(t1,t2) ? T(1) : T(0)); }\n         static inline T process(const std::string& t1, const std::string& t2) { return ((t1 == t2) ? T(1) : T(0)); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_eq; }\n         static inline details::operator_type operation() { return details::e_eq; }\n      };\n\n      template <typename T>\n      struct equal_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(Type t1, Type t2) { return numeric::equal(t1,t2); }\n         static inline T process(const std::string& t1, const std::string& t2) { return ((t1 == t2) ? T(1) : T(0)); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_eq; }\n         static inline details::operator_type operation() { return details::e_equal; }\n      };\n\n      template <typename T>\n      struct ne_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(Type t1, Type t2) { return (std::not_equal_to<T>()(t1,t2) ? T(1) : T(0)); }\n         static inline T process(const std::string& t1, const std::string& t2) { return ((t1 != t2) ? 
T(1) : T(0)); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_ne; }\n         static inline details::operator_type operation() { return details::e_ne; }\n      };\n\n      template <typename T>\n      struct and_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(Type t1, Type t2) { return (details::is_true(t1) && details::is_true(t2)) ? T(1) : T(0); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_and; }\n         static inline details::operator_type operation() { return details::e_and; }\n      };\n\n      template <typename T>\n      struct nand_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(Type t1, Type t2) { return (details::is_true(t1) && details::is_true(t2)) ? T(0) : T(1); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_nand; }\n         static inline details::operator_type operation() { return details::e_nand; }\n      };\n\n      template <typename T>\n      struct or_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(Type t1, Type t2) { return (details::is_true(t1) || details::is_true(t2)) ? T(1) : T(0); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_or; }\n         static inline details::operator_type operation() { return details::e_or; }\n      };\n\n      template <typename T>\n      struct nor_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(Type t1, Type t2) { return (details::is_true(t1) || details::is_true(t2)) ? 
T(0) : T(1); }
         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_nor; }
         static inline details::operator_type operation() { return details::e_nor; }
      };

      template <typename T>
      struct xor_op : public opr_base<T>
      {
         typedef typename opr_base<T>::Type Type;
         static inline T process(Type t1, Type t2) { return numeric::xor_opr<T>(t1,t2); }
         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_xor; }
         static inline details::operator_type operation() { return details::e_xor; }
      };

      template <typename T>
      struct xnor_op : public opr_base<T>
      {
         typedef typename opr_base<T>::Type Type;
         static inline T process(Type t1, Type t2) { return numeric::xnor_opr<T>(t1,t2); }
         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_xnor; }
         static inline details::operator_type operation() { return details::e_xnor; }
      };

      template <typename T>
      struct in_op : public opr_base<T>
      {
         typedef typename opr_base<T>::Type Type;
         static inline T process(const T&, const T&) { return std::numeric_limits<T>::quiet_NaN(); }
         static inline T process(const std::string& t1, const std::string& t2) { return ((std::string::npos != t2.find(t1)) ? 
T(1) : T(0)); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_in; }\n         static inline details::operator_type operation() { return details::e_in; }\n      };\n\n      template <typename T>\n      struct like_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(const T&, const T&) { return std::numeric_limits<T>::quiet_NaN(); }\n         static inline T process(const std::string& t1, const std::string& t2) { return (details::wc_match(t2,t1) ? T(1) : T(0)); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_like; }\n         static inline details::operator_type operation() { return details::e_like; }\n      };\n\n      template <typename T>\n      struct ilike_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(const T&, const T&) { return std::numeric_limits<T>::quiet_NaN(); }\n         static inline T process(const std::string& t1, const std::string& t2) { return (details::wc_imatch(t2,t1) ? T(1) : T(0)); }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_ilike; }\n         static inline details::operator_type operation() { return details::e_ilike; }\n      };\n\n      template <typename T>\n      struct inrange_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n         static inline T process(const T& t0, const T& t1, const T& t2) { return ((t0 <= t1) && (t1 <= t2)) ? T(1) : T(0); }\n         static inline T process(const std::string& t0, const std::string& t1, const std::string& t2)\n         {\n            return ((t0 <= t1) && (t1 <= t2)) ? 
T(1) : T(0);\n         }\n         static inline typename expression_node<T>::node_type type() { return expression_node<T>::e_inranges; }\n         static inline details::operator_type operation() { return details::e_inrange; }\n      };\n\n      template <typename T>\n      inline T value(details::expression_node<T>* n)\n      {\n         return n->value();\n      }\n\n      template <typename T>\n      inline T value(T* t)\n      {\n         return (*t);\n      }\n\n      template <typename T>\n      struct vararg_add_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n\n         template <typename Type,\n                   typename Allocator,\n                   template <typename,typename> class Sequence>\n         static inline T process(const Sequence<Type,Allocator>& arg_list)\n         {\n            switch (arg_list.size())\n            {\n               case 0  : return T(0);\n               case 1  : return process_1(arg_list);\n               case 2  : return process_2(arg_list);\n               case 3  : return process_3(arg_list);\n               case 4  : return process_4(arg_list);\n               case 5  : return process_5(arg_list);\n               default :\n                         {\n                            T result = T(0);\n\n                            for (std::size_t i = 0; i < arg_list.size(); ++i)\n                            {\n                              result += value(arg_list[i]);\n                            }\n\n                            return result;\n                         }\n            }\n         }\n\n         template <typename Sequence>\n         static inline T process_1(const Sequence& arg_list)\n         {\n            return value(arg_list[0]);\n         }\n\n         template <typename Sequence>\n         static inline T process_2(const Sequence& arg_list)\n         {\n            return value(arg_list[0]) + value(arg_list[1]);\n         }\n\n         template <typename 
Sequence>\n         static inline T process_3(const Sequence& arg_list)\n         {\n            return value(arg_list[0]) + value(arg_list[1]) +\n                   value(arg_list[2]) ;\n         }\n\n         template <typename Sequence>\n         static inline T process_4(const Sequence& arg_list)\n         {\n            return value(arg_list[0]) + value(arg_list[1]) +\n                   value(arg_list[2]) + value(arg_list[3]) ;\n         }\n\n         template <typename Sequence>\n         static inline T process_5(const Sequence& arg_list)\n         {\n            return value(arg_list[0]) + value(arg_list[1]) +\n                   value(arg_list[2]) + value(arg_list[3]) +\n                   value(arg_list[4]) ;\n         }\n      };\n\n      template <typename T>\n      struct vararg_mul_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n\n         template <typename Type,\n                   typename Allocator,\n                   template <typename,typename> class Sequence>\n         static inline T process(const Sequence<Type,Allocator>& arg_list)\n         {\n            switch (arg_list.size())\n            {\n               case 0  : return T(0);\n               case 1  : return process_1(arg_list);\n               case 2  : return process_2(arg_list);\n               case 3  : return process_3(arg_list);\n               case 4  : return process_4(arg_list);\n               case 5  : return process_5(arg_list);\n               default :\n                         {\n                            T result = T(value(arg_list[0]));\n\n                            for (std::size_t i = 1; i < arg_list.size(); ++i)\n                            {\n                               result *= value(arg_list[i]);\n                            }\n\n                            return result;\n                         }\n            }\n         }\n\n         template <typename Sequence>\n         static inline T process_1(const Sequence& 
arg_list)\n         {\n            return value(arg_list[0]);\n         }\n\n         template <typename Sequence>\n         static inline T process_2(const Sequence& arg_list)\n         {\n            return value(arg_list[0]) * value(arg_list[1]);\n         }\n\n         template <typename Sequence>\n         static inline T process_3(const Sequence& arg_list)\n         {\n            return value(arg_list[0]) * value(arg_list[1]) *\n                   value(arg_list[2]) ;\n         }\n\n         template <typename Sequence>\n         static inline T process_4(const Sequence& arg_list)\n         {\n            return value(arg_list[0]) * value(arg_list[1]) *\n                   value(arg_list[2]) * value(arg_list[3]) ;\n         }\n\n         template <typename Sequence>\n         static inline T process_5(const Sequence& arg_list)\n         {\n            return value(arg_list[0]) * value(arg_list[1]) *\n                   value(arg_list[2]) * value(arg_list[3]) *\n                   value(arg_list[4]) ;\n         }\n      };\n\n      template <typename T>\n      struct vararg_avg_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n\n         template <typename Type,\n                   typename Allocator,\n                   template <typename,typename> class Sequence>\n         static inline T process(const Sequence<Type,Allocator>& arg_list)\n         {\n            switch (arg_list.size())\n            {\n               case 0  : return T(0);\n               case 1  : return process_1(arg_list);\n               case 2  : return process_2(arg_list);\n               case 3  : return process_3(arg_list);\n               case 4  : return process_4(arg_list);\n               case 5  : return process_5(arg_list);\n               default : return vararg_add_op<T>::process(arg_list) / arg_list.size();\n            }\n         }\n\n         template <typename Sequence>\n         static inline T process_1(const Sequence& arg_list)\n    
     {\n            return value(arg_list[0]);\n         }\n\n         template <typename Sequence>\n         static inline T process_2(const Sequence& arg_list)\n         {\n            return (value(arg_list[0]) + value(arg_list[1])) / T(2);\n         }\n\n         template <typename Sequence>\n         static inline T process_3(const Sequence& arg_list)\n         {\n            return (value(arg_list[0]) + value(arg_list[1]) + value(arg_list[2])) / T(3);\n         }\n\n         template <typename Sequence>\n         static inline T process_4(const Sequence& arg_list)\n         {\n            return (value(arg_list[0]) + value(arg_list[1]) +\n                    value(arg_list[2]) + value(arg_list[3])) / T(4);\n         }\n\n         template <typename Sequence>\n         static inline T process_5(const Sequence& arg_list)\n         {\n            return (value(arg_list[0]) + value(arg_list[1]) +\n                    value(arg_list[2]) + value(arg_list[3]) +\n                    value(arg_list[4])) / T(5);\n         }\n      };\n\n      template <typename T>\n      struct vararg_min_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n\n         template <typename Type,\n                   typename Allocator,\n                   template <typename,typename> class Sequence>\n         static inline T process(const Sequence<Type,Allocator>& arg_list)\n         {\n            switch (arg_list.size())\n            {\n               case 0  : return T(0);\n               case 1  : return process_1(arg_list);\n               case 2  : return process_2(arg_list);\n               case 3  : return process_3(arg_list);\n               case 4  : return process_4(arg_list);\n               case 5  : return process_5(arg_list);\n               default :\n                         {\n                            T result = T(value(arg_list[0]));\n\n                            for (std::size_t i = 1; i < arg_list.size(); ++i)\n                       
     {\n                               const T v = value(arg_list[i]);\n\n                               if (v < result)\n                                  result = v;\n                            }\n\n                            return result;\n                         }\n            }\n         }\n\n         template <typename Sequence>\n         static inline T process_1(const Sequence& arg_list)\n         {\n            return value(arg_list[0]);\n         }\n\n         template <typename Sequence>\n         static inline T process_2(const Sequence& arg_list)\n         {\n            return std::min<T>(value(arg_list[0]),value(arg_list[1]));\n         }\n\n         template <typename Sequence>\n         static inline T process_3(const Sequence& arg_list)\n         {\n            return std::min<T>(std::min<T>(value(arg_list[0]),value(arg_list[1])),value(arg_list[2]));\n         }\n\n         template <typename Sequence>\n         static inline T process_4(const Sequence& arg_list)\n         {\n            return std::min<T>(\n                        std::min<T>(value(arg_list[0]),value(arg_list[1])),\n                        std::min<T>(value(arg_list[2]),value(arg_list[3])));\n         }\n\n         template <typename Sequence>\n         static inline T process_5(const Sequence& arg_list)\n         {\n            return std::min<T>(\n                   std::min<T>(std::min<T>(value(arg_list[0]),value(arg_list[1])),\n                               std::min<T>(value(arg_list[2]),value(arg_list[3]))),\n                               value(arg_list[4]));\n         }\n      };\n\n      template <typename T>\n      struct vararg_max_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n\n         template <typename Type,\n                   typename Allocator,\n                   template <typename,typename> class Sequence>\n         static inline T process(const Sequence<Type,Allocator>& arg_list)\n         {\n            switch 
(arg_list.size())\n            {\n               case 0  : return T(0);\n               case 1  : return process_1(arg_list);\n               case 2  : return process_2(arg_list);\n               case 3  : return process_3(arg_list);\n               case 4  : return process_4(arg_list);\n               case 5  : return process_5(arg_list);\n               default :\n                         {\n                            T result = T(value(arg_list[0]));\n\n                            for (std::size_t i = 1; i < arg_list.size(); ++i)\n                            {\n                               const T v = value(arg_list[i]);\n\n                               if (v > result)\n                                  result = v;\n                            }\n\n                            return result;\n                         }\n            }\n         }\n\n         template <typename Sequence>\n         static inline T process_1(const Sequence& arg_list)\n         {\n            return value(arg_list[0]);\n         }\n\n         template <typename Sequence>\n         static inline T process_2(const Sequence& arg_list)\n         {\n            return std::max<T>(value(arg_list[0]),value(arg_list[1]));\n         }\n\n         template <typename Sequence>\n         static inline T process_3(const Sequence& arg_list)\n         {\n            return std::max<T>(std::max<T>(value(arg_list[0]),value(arg_list[1])),value(arg_list[2]));\n         }\n\n         template <typename Sequence>\n         static inline T process_4(const Sequence& arg_list)\n         {\n            return std::max<T>(\n                        std::max<T>(value(arg_list[0]),value(arg_list[1])),\n                        std::max<T>(value(arg_list[2]),value(arg_list[3])));\n         }\n\n         template <typename Sequence>\n         static inline T process_5(const Sequence& arg_list)\n         {\n            return std::max<T>(\n                   
std::max<T>(std::max<T>(value(arg_list[0]),value(arg_list[1])),\n                               std::max<T>(value(arg_list[2]),value(arg_list[3]))),\n                               value(arg_list[4]));\n         }\n      };\n\n      template <typename T>\n      struct vararg_mand_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n\n         template <typename Type,\n                   typename Allocator,\n                   template <typename,typename> class Sequence>\n         static inline T process(const Sequence<Type,Allocator>& arg_list)\n         {\n            switch (arg_list.size())\n            {\n               case 1  : return process_1(arg_list);\n               case 2  : return process_2(arg_list);\n               case 3  : return process_3(arg_list);\n               case 4  : return process_4(arg_list);\n               case 5  : return process_5(arg_list);\n               default :\n                         {\n                            for (std::size_t i = 0; i < arg_list.size(); ++i)\n                            {\n                               if (std::equal_to<T>()(T(0),value(arg_list[i])))\n                                  return T(0);\n                            }\n\n                            return T(1);\n                         }\n            }\n         }\n\n         template <typename Sequence>\n         static inline T process_1(const Sequence& arg_list)\n         {\n            return std::not_equal_to<T>()\n                      (T(0),value(arg_list[0])) ? T(1) : T(0);\n         }\n\n         template <typename Sequence>\n         static inline T process_2(const Sequence& arg_list)\n         {\n            return (\n                     std::not_equal_to<T>()(T(0),value(arg_list[0])) &&\n                     std::not_equal_to<T>()(T(0),value(arg_list[1]))\n                   ) ? 
T(1) : T(0);\n         }\n\n         template <typename Sequence>\n         static inline T process_3(const Sequence& arg_list)\n         {\n            return (\n                     std::not_equal_to<T>()(T(0),value(arg_list[0])) &&\n                     std::not_equal_to<T>()(T(0),value(arg_list[1])) &&\n                     std::not_equal_to<T>()(T(0),value(arg_list[2]))\n                   ) ? T(1) : T(0);\n         }\n\n         template <typename Sequence>\n         static inline T process_4(const Sequence& arg_list)\n         {\n            return (\n                     std::not_equal_to<T>()(T(0),value(arg_list[0])) &&\n                     std::not_equal_to<T>()(T(0),value(arg_list[1])) &&\n                     std::not_equal_to<T>()(T(0),value(arg_list[2])) &&\n                     std::not_equal_to<T>()(T(0),value(arg_list[3]))\n                   ) ? T(1) : T(0);\n         }\n\n         template <typename Sequence>\n         static inline T process_5(const Sequence& arg_list)\n         {\n            return (\n                     std::not_equal_to<T>()(T(0),value(arg_list[0])) &&\n                     std::not_equal_to<T>()(T(0),value(arg_list[1])) &&\n                     std::not_equal_to<T>()(T(0),value(arg_list[2])) &&\n                     std::not_equal_to<T>()(T(0),value(arg_list[3])) &&\n                     std::not_equal_to<T>()(T(0),value(arg_list[4]))\n                   ) ? 
T(1) : T(0);\n         }\n      };\n\n      template <typename T>\n      struct vararg_mor_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n\n         template <typename Type,\n                   typename Allocator,\n                   template <typename,typename> class Sequence>\n         static inline T process(const Sequence<Type,Allocator>& arg_list)\n         {\n            switch (arg_list.size())\n            {\n               case 1  : return process_1(arg_list);\n               case 2  : return process_2(arg_list);\n               case 3  : return process_3(arg_list);\n               case 4  : return process_4(arg_list);\n               case 5  : return process_5(arg_list);\n               default :\n                         {\n                            for (std::size_t i = 0; i < arg_list.size(); ++i)\n                            {\n                               if (std::not_equal_to<T>()(T(0),value(arg_list[i])))\n                                  return T(1);\n                            }\n\n                            return T(0);\n                         }\n            }\n         }\n\n         template <typename Sequence>\n         static inline T process_1(const Sequence& arg_list)\n         {\n            return std::not_equal_to<T>()\n                      (T(0),value(arg_list[0])) ? T(1) : T(0);\n         }\n\n         template <typename Sequence>\n         static inline T process_2(const Sequence& arg_list)\n         {\n            return (\n                     std::not_equal_to<T>()(T(0),value(arg_list[0])) ||\n                     std::not_equal_to<T>()(T(0),value(arg_list[1]))\n                   ) ? 
T(1) : T(0);\n         }\n\n         template <typename Sequence>\n         static inline T process_3(const Sequence& arg_list)\n         {\n            return (\n                     std::not_equal_to<T>()(T(0),value(arg_list[0])) ||\n                     std::not_equal_to<T>()(T(0),value(arg_list[1])) ||\n                     std::not_equal_to<T>()(T(0),value(arg_list[2]))\n                   ) ? T(1) : T(0);\n         }\n\n         template <typename Sequence>\n         static inline T process_4(const Sequence& arg_list)\n         {\n            return (\n                     std::not_equal_to<T>()(T(0),value(arg_list[0])) ||\n                     std::not_equal_to<T>()(T(0),value(arg_list[1])) ||\n                     std::not_equal_to<T>()(T(0),value(arg_list[2])) ||\n                     std::not_equal_to<T>()(T(0),value(arg_list[3]))\n                   ) ? T(1) : T(0);\n         }\n\n         template <typename Sequence>\n         static inline T process_5(const Sequence& arg_list)\n         {\n            return (\n                     std::not_equal_to<T>()(T(0),value(arg_list[0])) ||\n                     std::not_equal_to<T>()(T(0),value(arg_list[1])) ||\n                     std::not_equal_to<T>()(T(0),value(arg_list[2])) ||\n                     std::not_equal_to<T>()(T(0),value(arg_list[3])) ||\n                     std::not_equal_to<T>()(T(0),value(arg_list[4]))\n                   ) ? 
T(1) : T(0);\n         }\n      };\n\n      template <typename T>\n      struct vararg_multi_op : public opr_base<T>\n      {\n         typedef typename opr_base<T>::Type Type;\n\n         template <typename Type,\n                   typename Allocator,\n                   template <typename,typename> class Sequence>\n         static inline T process(const Sequence<Type,Allocator>& arg_list)\n         {\n            switch (arg_list.size())\n            {\n               case 0  : return std::numeric_limits<T>::quiet_NaN();\n               case 1  : return process_1(arg_list);\n               case 2  : return process_2(arg_list);\n               case 3  : return process_3(arg_list);\n               case 4  : return process_4(arg_list);\n               case 5  : return process_5(arg_list);\n               case 6  : return process_6(arg_list);\n               case 7  : return process_7(arg_list);\n               case 8  : return process_8(arg_list);\n               default :\n                         {\n                            for (std::size_t i = 0; i < (arg_list.size() - 1); ++i)\n                            {\n                               value(arg_list[i]);\n                            }\n\n                            return value(arg_list.back());\n                         }\n            }\n         }\n\n         template <typename Sequence>\n         static inline T process_1(const Sequence& arg_list)\n         {\n            return value(arg_list[0]);\n         }\n\n         template <typename Sequence>\n         static inline T process_2(const Sequence& arg_list)\n         {\n                   value(arg_list[0]);\n            return value(arg_list[1]);\n         }\n\n         template <typename Sequence>\n         static inline T process_3(const Sequence& arg_list)\n         {\n                   value(arg_list[0]);\n                   value(arg_list[1]);\n            return value(arg_list[2]);\n         }\n\n         template <typename Sequence>\n     
    static inline T process_4(const Sequence& arg_list)\n         {\n                   value(arg_list[0]);\n                   value(arg_list[1]);\n                   value(arg_list[2]);\n            return value(arg_list[3]);\n         }\n\n         template <typename Sequence>\n         static inline T process_5(const Sequence& arg_list)\n         {\n                   value(arg_list[0]);\n                   value(arg_list[1]);\n                   value(arg_list[2]);\n                   value(arg_list[3]);\n            return value(arg_list[4]);\n         }\n\n         template <typename Sequence>\n         static inline T process_6(const Sequence& arg_list)\n         {\n                   value(arg_list[0]);\n                   value(arg_list[1]);\n                   value(arg_list[2]);\n                   value(arg_list[3]);\n                   value(arg_list[4]);\n            return value(arg_list[5]);\n         }\n\n         template <typename Sequence>\n         static inline T process_7(const Sequence& arg_list)\n         {\n                   value(arg_list[0]);\n                   value(arg_list[1]);\n                   value(arg_list[2]);\n                   value(arg_list[3]);\n                   value(arg_list[4]);\n                   value(arg_list[5]);\n            return value(arg_list[6]);\n         }\n\n         template <typename Sequence>\n         static inline T process_8(const Sequence& arg_list)\n         {\n                   value(arg_list[0]);\n                   value(arg_list[1]);\n                   value(arg_list[2]);\n                   value(arg_list[3]);\n                   value(arg_list[4]);\n                   value(arg_list[5]);\n                   value(arg_list[6]);\n            return value(arg_list[7]);\n         }\n      };\n\n      template <typename T>\n      struct vec_add_op\n      {\n         typedef vector_interface<T>* ivector_ptr;\n\n         static inline T process(const ivector_ptr v)\n         {\n            
const T* vec = v->vec()->vds().data();\n            const std::size_t vec_size = v->vec()->vds().size();\n\n            loop_unroll::details lud(vec_size);\n\n            if (vec_size <= static_cast<std::size_t>(lud.batch_size))\n            {\n               T result = T(0);\n               int i    = 0;\n\n               exprtk_disable_fallthrough_begin\n               switch (vec_size)\n               {\n                  #define case_stmt(N)         \\\n                  case N : result += vec[i++]; \\\n\n                  #ifndef exprtk_disable_superscalar_unroll\n                  case_stmt(16) case_stmt(15)\n                  case_stmt(14) case_stmt(13)\n                  case_stmt(12) case_stmt(11)\n                  case_stmt(10) case_stmt( 9)\n                  case_stmt( 8) case_stmt( 7)\n                  case_stmt( 6) case_stmt( 5)\n                  #endif\n                  case_stmt( 4) case_stmt( 3)\n                  case_stmt( 2) case_stmt( 1)\n               }\n               exprtk_disable_fallthrough_end\n\n               #undef case_stmt\n\n               return result;\n            }\n\n            T r[] = {\n                      T(0), T(0), T(0), T(0), T(0), T(0), T(0), T(0),\n                      T(0), T(0), T(0), T(0), T(0), T(0), T(0), T(0)\n                    };\n\n            const T* upper_bound = vec + lud.upper_bound;\n\n            while (vec < upper_bound)\n            {\n               #define exprtk_loop(N) \\\n               r[N] += vec[N];        \\\n\n               exprtk_loop( 0) exprtk_loop( 1)\n               exprtk_loop( 2) exprtk_loop( 3)\n               #ifndef exprtk_disable_superscalar_unroll\n               exprtk_loop( 4) exprtk_loop( 5)\n               exprtk_loop( 6) exprtk_loop( 7)\n               exprtk_loop( 8) exprtk_loop( 9)\n               exprtk_loop(10) exprtk_loop(11)\n               exprtk_loop(12) exprtk_loop(13)\n               exprtk_loop(14) exprtk_loop(15)\n               #endif\n\n              
 vec += lud.batch_size;\n            }\n\n            int i = 0;\n\n            exprtk_disable_fallthrough_begin\n            switch (lud.remainder)\n            {\n               #define case_stmt(N)       \\\n               case N : r[0] += vec[i++]; \\\n\n               #ifndef exprtk_disable_superscalar_unroll\n               case_stmt(15) case_stmt(14)\n               case_stmt(13) case_stmt(12)\n               case_stmt(11) case_stmt(10)\n               case_stmt( 9) case_stmt( 8)\n               case_stmt( 7) case_stmt( 6)\n               case_stmt( 5) case_stmt( 4)\n               #endif\n               case_stmt( 3) case_stmt( 2)\n               case_stmt( 1)\n            }\n            exprtk_disable_fallthrough_end\n\n            #undef exprtk_loop\n            #undef case_stmt\n\n            return (r[ 0] + r[ 1] + r[ 2] + r[ 3])\n                   #ifndef exprtk_disable_superscalar_unroll\n                 + (r[ 4] + r[ 5] + r[ 6] + r[ 7])\n                 + (r[ 8] + r[ 9] + r[10] + r[11])\n                 + (r[12] + r[13] + r[14] + r[15])\n                   #endif\n                   ;\n         }\n      };\n\n      template <typename T>\n      struct vec_mul_op\n      {\n         typedef vector_interface<T>* ivector_ptr;\n\n         static inline T process(const ivector_ptr v)\n         {\n            const T* vec = v->vec()->vds().data();\n            const std::size_t vec_size = v->vec()->vds().size();\n\n            loop_unroll::details lud(vec_size);\n\n            if (vec_size <= static_cast<std::size_t>(lud.batch_size))\n            {\n               T result = T(1);\n               int i    = 0;\n\n               exprtk_disable_fallthrough_begin\n               switch (vec_size)\n               {\n                  #define case_stmt(N)         \\\n                  case N : result *= vec[i++]; \\\n\n                  #ifndef exprtk_disable_superscalar_unroll\n                  case_stmt(16) case_stmt(15)\n                  case_stmt(14) 
case_stmt(13)\n                  case_stmt(12) case_stmt(11)\n                  case_stmt(10) case_stmt( 9)\n                  case_stmt( 8) case_stmt( 7)\n                  case_stmt( 6) case_stmt( 5)\n                  #endif\n                  case_stmt( 4) case_stmt( 3)\n                  case_stmt( 2) case_stmt( 1)\n               }\n               exprtk_disable_fallthrough_end\n\n               #undef case_stmt\n\n               return result;\n            }\n\n            T r[] = {\n                      T(1), T(1), T(1), T(1), T(1), T(1), T(1), T(1),\n                      T(1), T(1), T(1), T(1), T(1), T(1), T(1), T(1)\n                    };\n\n            const T* upper_bound = vec + lud.upper_bound;\n\n            while (vec < upper_bound)\n            {\n               #define exprtk_loop(N) \\\n               r[N] *= vec[N];        \\\n\n               exprtk_loop( 0) exprtk_loop( 1)\n               exprtk_loop( 2) exprtk_loop( 3)\n               #ifndef exprtk_disable_superscalar_unroll\n               exprtk_loop( 4) exprtk_loop( 5)\n               exprtk_loop( 6) exprtk_loop( 7)\n               exprtk_loop( 8) exprtk_loop( 9)\n               exprtk_loop(10) exprtk_loop(11)\n               exprtk_loop(12) exprtk_loop(13)\n               exprtk_loop(14) exprtk_loop(15)\n               #endif\n\n               vec += lud.batch_size;\n            }\n\n            int i = 0;\n\n            exprtk_disable_fallthrough_begin\n            switch (lud.remainder)\n            {\n               #define case_stmt(N)       \\\n               case N : r[0] *= vec[i++]; \\\n\n               #ifndef exprtk_disable_superscalar_unroll\n               case_stmt(15) case_stmt(14)\n               case_stmt(13) case_stmt(12)\n               case_stmt(11) case_stmt(10)\n               case_stmt( 9) case_stmt( 8)\n               case_stmt( 7) case_stmt( 6)\n               case_stmt( 5) case_stmt( 4)\n               #endif\n               case_stmt( 3) case_stmt( 2)\n       
        case_stmt( 1)\n            }\n            exprtk_disable_fallthrough_end\n\n            #undef exprtk_loop\n            #undef case_stmt\n\n            return (r[ 0] * r[ 1] * r[ 2] * r[ 3])\n                   #ifndef exprtk_disable_superscalar_unroll\n                 * (r[ 4] * r[ 5] * r[ 6] * r[ 7])\n                 * (r[ 8] * r[ 9] * r[10] * r[11])\n                 * (r[12] * r[13] * r[14] * r[15])\n                   #endif\n                   ;\n         }\n      };\n\n      template <typename T>\n      struct vec_avg_op\n      {\n         typedef vector_interface<T>* ivector_ptr;\n\n         static inline T process(const ivector_ptr v)\n         {\n            const std::size_t vec_size = v->vec()->vds().size();\n\n            return vec_add_op<T>::process(v) / vec_size;\n         }\n      };\n\n      template <typename T>\n      struct vec_min_op\n      {\n         typedef vector_interface<T>* ivector_ptr;\n\n         static inline T process(const ivector_ptr v)\n         {\n            const T* vec = v->vec()->vds().data();\n            const std::size_t vec_size = v->vec()->vds().size();\n\n            T result = vec[0];\n\n            for (std::size_t i = 1; i < vec_size; ++i)\n            {\n               T v_i = vec[i];\n\n               if (v_i < result)\n                  result = v_i;\n            }\n\n            return result;\n         }\n      };\n\n      template <typename T>\n      struct vec_max_op\n      {\n         typedef vector_interface<T>* ivector_ptr;\n\n         static inline T process(const ivector_ptr v)\n         {\n            const T* vec = v->vec()->vds().data();\n            const std::size_t vec_size = v->vec()->vds().size();\n\n            T result = vec[0];\n\n            for (std::size_t i = 1; i < vec_size; ++i)\n            {\n               T v_i = vec[i];\n\n               if (v_i > result)\n                  result = v_i;\n            }\n\n            return result;\n         }\n      };\n\n      template 
<typename T>\n      class vov_base_node : public expression_node<T>\n      {\n      public:\n\n         virtual ~vov_base_node()\n         {}\n\n         inline virtual operator_type operation() const\n         {\n            return details::e_default;\n         }\n\n         virtual const T& v0() const = 0;\n\n         virtual const T& v1() const = 0;\n      };\n\n      template <typename T>\n      class cov_base_node : public expression_node<T>\n      {\n      public:\n\n         virtual ~cov_base_node()\n         {}\n\n         inline virtual operator_type operation() const\n         {\n            return details::e_default;\n         }\n\n         virtual const T c() const = 0;\n\n         virtual const T& v() const = 0;\n      };\n\n      template <typename T>\n      class voc_base_node : public expression_node<T>\n      {\n      public:\n\n         virtual ~voc_base_node()\n         {}\n\n         inline virtual operator_type operation() const\n         {\n            return details::e_default;\n         }\n\n         virtual const T c() const = 0;\n\n         virtual const T& v() const = 0;\n      };\n\n      template <typename T>\n      class vob_base_node : public expression_node<T>\n      {\n      public:\n\n         virtual ~vob_base_node()\n         {}\n\n         virtual const T& v() const = 0;\n      };\n\n      template <typename T>\n      class bov_base_node : public expression_node<T>\n      {\n      public:\n\n         virtual ~bov_base_node()\n         {}\n\n         virtual const T& v() const = 0;\n      };\n\n      template <typename T>\n      class cob_base_node : public expression_node<T>\n      {\n      public:\n\n         virtual ~cob_base_node()\n         {}\n\n         inline virtual operator_type operation() const\n         {\n            return details::e_default;\n         }\n\n         virtual const T c() const = 0;\n\n         virtual void set_c(const T) = 0;\n\n         virtual expression_node<T>* move_branch(const std::size_t& index) = 
0;\n      };\n\n      template <typename T>\n      class boc_base_node : public expression_node<T>\n      {\n      public:\n\n         virtual ~boc_base_node()\n         {}\n\n         inline virtual operator_type operation() const\n         {\n            return details::e_default;\n         }\n\n         virtual const T c() const = 0;\n\n         virtual void set_c(const T) = 0;\n\n         virtual expression_node<T>* move_branch(const std::size_t& index) = 0;\n      };\n\n      template <typename T>\n      class uv_base_node : public expression_node<T>\n      {\n      public:\n\n         virtual ~uv_base_node()\n         {}\n\n         inline virtual operator_type operation() const\n         {\n            return details::e_default;\n         }\n\n         virtual const T& v() const = 0;\n      };\n\n      template <typename T>\n      class sos_base_node : public expression_node<T>\n      {\n      public:\n\n         virtual ~sos_base_node()\n         {}\n\n         inline virtual operator_type operation() const\n         {\n            return details::e_default;\n         }\n      };\n\n      template <typename T>\n      class sosos_base_node : public expression_node<T>\n      {\n      public:\n\n         virtual ~sosos_base_node()\n         {}\n\n         inline virtual operator_type operation() const\n         {\n            return details::e_default;\n         }\n      };\n\n      template <typename T>\n      class T0oT1oT2_base_node : public expression_node<T>\n      {\n      public:\n\n         virtual ~T0oT1oT2_base_node()\n         {}\n\n         virtual std::string type_id() const = 0;\n      };\n\n      template <typename T>\n      class T0oT1oT2oT3_base_node : public expression_node<T>\n      {\n      public:\n\n         virtual ~T0oT1oT2oT3_base_node()\n         {}\n\n         virtual std::string type_id() const = 0;\n      };\n\n      template <typename T, typename Operation>\n      class unary_variable_node : public uv_base_node<T>\n      {\n      
public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef Operation operation_t;\n\n         explicit unary_variable_node(const T& var)\n         : v_(var)\n         {}\n\n         inline T value() const\n         {\n            return Operation::process(v_);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return Operation::type();\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline const T& v() const\n         {\n            return v_;\n         }\n\n      private:\n\n         unary_variable_node(unary_variable_node<T,Operation>&);\n         unary_variable_node<T,Operation>& operator=(unary_variable_node<T,Operation>&);\n\n         const T& v_;\n      };\n\n      template <typename T>\n      class uvouv_node : public expression_node<T>\n      {\n      public:\n\n         // UOpr1(v0) Op UOpr2(v1)\n\n         typedef expression_node<T>* expression_ptr;\n         typedef typename details::functor_t<T> functor_t;\n         typedef typename functor_t::bfunc_t      bfunc_t;\n         typedef typename functor_t::ufunc_t      ufunc_t;\n\n         explicit uvouv_node(const T& var0,const T& var1,\n                             ufunc_t uf0, ufunc_t uf1, bfunc_t bf)\n         : v0_(var0),\n           v1_(var1),\n           u0_(uf0 ),\n           u1_(uf1 ),\n           f_ (bf  )\n         {}\n\n         inline T value() const\n         {\n            return f_(u0_(v0_),u1_(v1_));\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_uvouv;\n         }\n\n         inline operator_type operation() const\n         {\n            return details::e_default;\n         }\n\n         inline const T& v0()\n         {\n            return v0_;\n         }\n\n         inline const T& v1()\n         {\n            return v1_;\n   
      }\n\n         inline ufunc_t u0()\n         {\n            return u0_;\n         }\n\n         inline ufunc_t u1()\n         {\n            return u1_;\n         }\n\n         inline bfunc_t f()\n         {\n            return f_;\n         }\n\n      private:\n\n         uvouv_node(uvouv_node<T>&);\n         uvouv_node<T>& operator=(uvouv_node<T>&);\n\n         const T& v0_;\n         const T& v1_;\n         const ufunc_t u0_;\n         const ufunc_t u1_;\n         const bfunc_t f_;\n      };\n\n      template <typename T, typename Operation>\n      class unary_branch_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef Operation operation_t;\n\n         explicit unary_branch_node(expression_ptr brnch)\n         : branch_(brnch),\n           branch_deletable_(branch_deletable(branch_))\n         {}\n\n        ~unary_branch_node()\n         {\n            if (branch_ && branch_deletable_)\n            {\n               destroy_node(branch_);\n            }\n         }\n\n         inline T value() const\n         {\n            return Operation::process(branch_->value());\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return Operation::type();\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline expression_node<T>* branch(const std::size_t&) const\n         {\n            return branch_;\n         }\n\n         inline void release()\n         {\n            branch_deletable_ = false;\n         }\n\n      private:\n\n         unary_branch_node(unary_branch_node<T,Operation>&);\n         unary_branch_node<T,Operation>& operator=(unary_branch_node<T,Operation>&);\n\n         expression_ptr branch_;\n         bool           branch_deletable_;\n      };\n\n      template <typename T> struct is_const                { enum {result = 
0}; };\n      template <typename T> struct is_const <const T>      { enum {result = 1}; };\n      template <typename T> struct is_const_ref            { enum {result = 0}; };\n      template <typename T> struct is_const_ref <const T&> { enum {result = 1}; };\n      template <typename T> struct is_ref                  { enum {result = 0}; };\n      template <typename T> struct is_ref<T&>              { enum {result = 1}; };\n      template <typename T> struct is_ref<const T&>        { enum {result = 0}; };\n\n      template <std::size_t State>\n      struct param_to_str { static std::string result() { static const std::string r(\"v\"); return r; } };\n\n      template <>\n      struct param_to_str<0> { static std::string result() { static const std::string r(\"c\"); return r; } };\n\n      #define exprtk_crtype(Type)                          \\\n      param_to_str<is_const_ref< Type >::result>::result() \\\n\n      template <typename T>\n      struct T0oT1oT2process\n      {\n         typedef typename details::functor_t<T> functor_t;\n         typedef typename functor_t::bfunc_t      bfunc_t;\n\n         struct mode0\n         {\n            static inline T process(const T& t0, const T& t1, const T& t2, const bfunc_t bf0, const bfunc_t bf1)\n            {\n               // (T0 o0 T1) o1 T2\n               return bf1(bf0(t0,t1),t2);\n            }\n\n            template <typename T0, typename T1, typename T2>\n            static inline std::string id()\n            {\n               static const std::string result = \"(\" + exprtk_crtype(T0) + \"o\"   +\n                                                       exprtk_crtype(T1) + \")o(\" +\n                                                       exprtk_crtype(T2) + \")\"   ;\n               return result;\n            }\n         };\n\n         struct mode1\n         {\n            static inline T process(const T& t0, const T& t1, const T& t2, const bfunc_t bf0, const bfunc_t bf1)\n            {\n               // T0 
o0 (T1 o1 T2)\n               return bf0(t0,bf1(t1,t2));\n            }\n\n            template <typename T0, typename T1, typename T2>\n            static inline std::string id()\n            {\n               static const std::string result = \"(\" + exprtk_crtype(T0) + \")o(\" +\n                                                       exprtk_crtype(T1) + \"o\"   +\n                                                       exprtk_crtype(T2) + \")\"   ;\n               return result;\n            }\n         };\n      };\n\n      template <typename T>\n      struct T0oT1oT20T3process\n      {\n         typedef typename details::functor_t<T> functor_t;\n         typedef typename functor_t::bfunc_t      bfunc_t;\n\n         struct mode0\n         {\n            static inline T process(const T& t0, const T& t1,\n                                    const T& t2, const T& t3,\n                                    const bfunc_t bf0, const bfunc_t bf1, const bfunc_t bf2)\n            {\n               // (T0 o0 T1) o1 (T2 o2 T3)\n               return bf1(bf0(t0,t1),bf2(t2,t3));\n            }\n\n            template <typename T0, typename T1, typename T2, typename T3>\n            static inline std::string id()\n            {\n               static const std::string result = \"(\" + exprtk_crtype(T0) + \"o\"  +\n                                                       exprtk_crtype(T1) + \")o\" +\n                                                 \"(\" + exprtk_crtype(T2) + \"o\"  +\n                                                       exprtk_crtype(T3) + \")\"  ;\n               return result;\n            }\n         };\n\n         struct mode1\n         {\n            static inline T process(const T& t0, const T& t1,\n                                    const T& t2, const T& t3,\n                                    const bfunc_t bf0, const bfunc_t bf1, const bfunc_t bf2)\n            {\n               // T0 o0 (T1 o1 (T2 o2 T3))\n               return 
bf0(t0,bf1(t1,bf2(t2,t3)));\n            }\n            template <typename T0, typename T1, typename T2, typename T3>\n            static inline std::string id()\n            {\n               static const std::string result = \"(\" + exprtk_crtype(T0) +  \")o((\" +\n                                                       exprtk_crtype(T1) +  \")o(\"  +\n                                                       exprtk_crtype(T2) +  \"o\"    +\n                                                       exprtk_crtype(T3) +  \"))\"   ;\n               return result;\n            }\n         };\n\n         struct mode2\n         {\n            static inline T process(const T& t0, const T& t1,\n                                    const T& t2, const T& t3,\n                                    const bfunc_t bf0, const bfunc_t bf1, const bfunc_t bf2)\n            {\n               // (T0 o0 ((T1 o1 T2) o2 T3)\n               return bf0(t0,bf2(bf1(t1,t2),t3));\n            }\n\n            template <typename T0, typename T1, typename T2, typename T3>\n            static inline std::string id()\n            {\n               static const std::string result = \"(\" + exprtk_crtype(T0) + \")o((\" +\n                                                       exprtk_crtype(T1) + \"o\"    +\n                                                       exprtk_crtype(T2) + \")o(\"  +\n                                                       exprtk_crtype(T3) + \"))\"   ;\n               return result;\n            }\n         };\n\n         struct mode3\n         {\n            static inline T process(const T& t0, const T& t1,\n                                    const T& t2, const T& t3,\n                                    const bfunc_t bf0, const bfunc_t bf1, const bfunc_t bf2)\n            {\n               // (((T0 o0 T1) o1 T2) o2 T3)\n               return bf2(bf1(bf0(t0,t1),t2),t3);\n            }\n\n            template <typename T0, typename T1, typename T2, typename T3>\n            static 
inline std::string id()\n            {\n               static const std::string result = \"((\" + exprtk_crtype(T0) + \"o\"    +\n                                                        exprtk_crtype(T1) + \")o(\"  +\n                                                        exprtk_crtype(T2) + \"))o(\" +\n                                                        exprtk_crtype(T3) + \")\";\n               return result;\n            }\n         };\n\n         struct mode4\n         {\n            static inline T process(const T& t0, const T& t1,\n                                    const T& t2, const T& t3,\n                                    const bfunc_t bf0, const bfunc_t bf1, const bfunc_t bf2)\n            {\n               // ((T0 o0 (T1 o1 T2)) o2 T3\n               return bf2(bf0(t0,bf1(t1,t2)),t3);\n            }\n\n            template <typename T0, typename T1, typename T2, typename T3>\n            static inline std::string id()\n            {\n               static const std::string result = \"((\" + exprtk_crtype(T0) + \")o(\"  +\n                                                        exprtk_crtype(T1) + \"o\"    +\n                                                        exprtk_crtype(T2) + \"))o(\" +\n                                                        exprtk_crtype(T3) + \")\"    ;\n               return result;\n            }\n         };\n      };\n\n      #undef exprtk_crtype\n\n      template <typename T, typename T0, typename T1>\n      struct nodetype_T0oT1 { static const typename expression_node<T>::node_type result; };\n      template <typename T, typename T0, typename T1>\n      const typename expression_node<T>::node_type nodetype_T0oT1<T,T0,T1>::result = expression_node<T>::e_none;\n\n      #define synthesis_node_type_define(T0_,T1_,v_)                                                            \\\n      template <typename T, typename T0, typename T1>                                                           \\\n      struct 
nodetype_T0oT1<T,T0_,T1_> { static const typename expression_node<T>::node_type result; };         \\\n      template <typename T, typename T0, typename T1>                                                           \\\n      const typename expression_node<T>::node_type nodetype_T0oT1<T,T0_,T1_>::result = expression_node<T>:: v_; \\\n\n      synthesis_node_type_define(const T0&,const T1&, e_vov)\n      synthesis_node_type_define(const T0&,const T1 , e_voc)\n      synthesis_node_type_define(const T0 ,const T1&, e_cov)\n      synthesis_node_type_define(      T0&,      T1&,e_none)\n      synthesis_node_type_define(const T0 ,const T1 ,e_none)\n      synthesis_node_type_define(      T0&,const T1 ,e_none)\n      synthesis_node_type_define(const T0 ,      T1&,e_none)\n      synthesis_node_type_define(const T0&,      T1&,e_none)\n      synthesis_node_type_define(      T0&,const T1&,e_none)\n      #undef synthesis_node_type_define\n\n      template <typename T, typename T0, typename T1, typename T2>\n      struct nodetype_T0oT1oT2 { static const typename expression_node<T>::node_type result; };\n      template <typename T, typename T0, typename T1, typename T2>\n      const typename expression_node<T>::node_type nodetype_T0oT1oT2<T,T0,T1,T2>::result = expression_node<T>::e_none;\n\n      #define synthesis_node_type_define(T0_,T1_,T2_,v_)                                                               \\\n      template <typename T, typename T0, typename T1, typename T2>                                                     \\\n      struct nodetype_T0oT1oT2<T,T0_,T1_,T2_> { static const typename expression_node<T>::node_type result; };         \\\n      template <typename T, typename T0, typename T1, typename T2>                                                     \\\n      const typename expression_node<T>::node_type nodetype_T0oT1oT2<T,T0_,T1_,T2_>::result = expression_node<T>:: v_; \\\n\n      synthesis_node_type_define(const T0&,const T1&,const T2&, e_vovov)\n      
synthesis_node_type_define(const T0&,const T1&,const T2 , e_vovoc)\n      synthesis_node_type_define(const T0&,const T1 ,const T2&, e_vocov)\n      synthesis_node_type_define(const T0 ,const T1&,const T2&, e_covov)\n      synthesis_node_type_define(const T0 ,const T1&,const T2 , e_covoc)\n      synthesis_node_type_define(const T0 ,const T1 ,const T2 , e_none )\n      synthesis_node_type_define(const T0 ,const T1 ,const T2&, e_none )\n      synthesis_node_type_define(const T0&,const T1 ,const T2 , e_none )\n      synthesis_node_type_define(      T0&,      T1&,      T2&, e_none )\n      #undef synthesis_node_type_define\n\n      template <typename T, typename T0, typename T1, typename T2, typename T3>\n      struct nodetype_T0oT1oT2oT3 { static const typename expression_node<T>::node_type result; };\n      template <typename T, typename T0, typename T1, typename T2, typename T3>\n      const typename expression_node<T>::node_type nodetype_T0oT1oT2oT3<T,T0,T1,T2,T3>::result = expression_node<T>::e_none;\n\n      #define synthesis_node_type_define(T0_,T1_,T2_,T3_,v_)                                                                  \\\n      template <typename T, typename T0, typename T1, typename T2, typename T3>                                               \\\n      struct nodetype_T0oT1oT2oT3<T,T0_,T1_,T2_,T3_> { static const typename expression_node<T>::node_type result; };         \\\n      template <typename T, typename T0, typename T1, typename T2, typename T3>                                               \\\n      const typename expression_node<T>::node_type nodetype_T0oT1oT2oT3<T,T0_,T1_,T2_,T3_>::result = expression_node<T>:: v_; \\\n\n      synthesis_node_type_define(const T0&,const T1&,const T2&, const T3&,e_vovovov)\n      synthesis_node_type_define(const T0&,const T1&,const T2&, const T3 ,e_vovovoc)\n      synthesis_node_type_define(const T0&,const T1&,const T2 , const T3&,e_vovocov)\n      synthesis_node_type_define(const T0&,const T1 ,const T2&, const 
T3&,e_vocovov)\n      synthesis_node_type_define(const T0 ,const T1&,const T2&, const T3&,e_covovov)\n      synthesis_node_type_define(const T0 ,const T1&,const T2 , const T3&,e_covocov)\n      synthesis_node_type_define(const T0&,const T1 ,const T2&, const T3 ,e_vocovoc)\n      synthesis_node_type_define(const T0 ,const T1&,const T2&, const T3 ,e_covovoc)\n      synthesis_node_type_define(const T0&,const T1 ,const T2 , const T3&,e_vococov)\n      synthesis_node_type_define(const T0 ,const T1 ,const T2 , const T3 ,e_none   )\n      synthesis_node_type_define(const T0 ,const T1 ,const T2 , const T3&,e_none   )\n      synthesis_node_type_define(const T0 ,const T1 ,const T2&, const T3 ,e_none   )\n      synthesis_node_type_define(const T0 ,const T1&,const T2 , const T3 ,e_none   )\n      synthesis_node_type_define(const T0&,const T1 ,const T2 , const T3 ,e_none   )\n      synthesis_node_type_define(const T0 ,const T1 ,const T2&, const T3&,e_none   )\n      synthesis_node_type_define(const T0&,const T1&,const T2 , const T3 ,e_none   )\n      #undef synthesis_node_type_define\n\n      template <typename T, typename T0, typename T1>\n      class T0oT1 : public expression_node<T>\n      {\n      public:\n\n         typedef typename details::functor_t<T> functor_t;\n         typedef typename functor_t::bfunc_t      bfunc_t;\n         typedef T value_type;\n         typedef T0oT1<T,T0,T1> node_type;\n\n         T0oT1(T0 p0, T1 p1, const bfunc_t p2)\n         : t0_(p0),\n           t1_(p1),\n           f_ (p2)\n         {}\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            static const typename expression_node<T>::node_type result = nodetype_T0oT1<T,T0,T1>::result;\n            return result;\n         }\n\n         inline operator_type operation() const\n         {\n            return e_default;\n         }\n\n         inline T value() const\n         {\n            return f_(t0_,t1_);\n         }\n\n         inline T0 t0() 
const\n         {\n            return t0_;\n         }\n\n         inline T1 t1() const\n         {\n            return t1_;\n         }\n\n         inline bfunc_t f() const\n         {\n            return f_;\n         }\n\n         template <typename Allocator>\n         static inline expression_node<T>* allocate(Allocator& allocator,\n                                                    T0 p0, T1 p1,\n                                                    bfunc_t p2)\n         {\n            return allocator\n                     .template allocate_type<node_type,T0,T1,bfunc_t&>\n                        (p0, p1, p2);\n         }\n\n      private:\n\n         T0oT1(T0oT1<T,T0,T1>&) {}\n         T0oT1<T,T0,T1>& operator=(T0oT1<T,T0,T1>&) { return (*this); }\n\n         T0 t0_;\n         T1 t1_;\n         const bfunc_t f_;\n      };\n\n      template <typename T, typename T0, typename T1, typename T2, typename ProcessMode>\n      class T0oT1oT2 : public T0oT1oT2_base_node<T>\n      {\n      public:\n\n         typedef typename details::functor_t<T> functor_t;\n         typedef typename functor_t::bfunc_t      bfunc_t;\n         typedef T value_type;\n         typedef T0oT1oT2<T,T0,T1,T2,ProcessMode> node_type;\n         typedef ProcessMode process_mode_t;\n\n         T0oT1oT2(T0 p0, T1 p1, T2 p2, const bfunc_t p3, const bfunc_t p4)\n         : t0_(p0),\n           t1_(p1),\n           t2_(p2),\n           f0_(p3),\n           f1_(p4)\n         {}\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            static const typename expression_node<T>::node_type result = nodetype_T0oT1oT2<T,T0,T1,T2>::result;\n            return result;\n         }\n\n         inline operator_type operation() const\n         {\n            return e_default;\n         }\n\n         inline T value() const\n         {\n            return ProcessMode::process(t0_,t1_,t2_,f0_,f1_);\n         }\n\n         inline T0 t0() const\n         {\n            return 
t0_;\n         }\n\n         inline T1 t1() const\n         {\n            return t1_;\n         }\n\n         inline T2 t2() const\n         {\n            return t2_;\n         }\n\n         bfunc_t f0() const\n         {\n            return f0_;\n         }\n\n         bfunc_t f1() const\n         {\n            return f1_;\n         }\n\n         std::string type_id() const\n         {\n            return id();\n         }\n\n         static inline std::string id()\n         {\n            return process_mode_t::template id<T0,T1,T2>();\n         }\n\n         template <typename Allocator>\n         static inline expression_node<T>* allocate(Allocator& allocator, T0 p0, T1 p1, T2 p2, bfunc_t p3, bfunc_t p4)\n         {\n            return allocator\n                      .template allocate_type<node_type,T0,T1,T2,bfunc_t,bfunc_t>\n                         (p0, p1, p2, p3, p4);\n         }\n\n      private:\n\n         T0oT1oT2(node_type&) {}\n         node_type& operator=(node_type&) { return (*this); }\n\n         T0 t0_;\n         T1 t1_;\n         T2 t2_;\n         const bfunc_t f0_;\n         const bfunc_t f1_;\n      };\n\n      template <typename T, typename T0_, typename T1_, typename T2_, typename T3_, typename ProcessMode>\n      class T0oT1oT2oT3 : public T0oT1oT2oT3_base_node<T>\n      {\n      public:\n\n         typedef typename details::functor_t<T> functor_t;\n         typedef typename functor_t::bfunc_t      bfunc_t;\n         typedef T value_type;\n         typedef T0_ T0;\n         typedef T1_ T1;\n         typedef T2_ T2;\n         typedef T3_ T3;\n         typedef T0oT1oT2oT3<T,T0,T1,T2,T3,ProcessMode> node_type;\n         typedef ProcessMode process_mode_t;\n\n         T0oT1oT2oT3(T0 p0, T1 p1, T2 p2, T3 p3, bfunc_t p4, bfunc_t p5, bfunc_t p6)\n         : t0_(p0),\n           t1_(p1),\n           t2_(p2),\n           t3_(p3),\n           f0_(p4),\n           f1_(p5),\n           f2_(p6)\n         {}\n\n         inline T value() const\n      
   {\n            return ProcessMode::process(t0_, t1_, t2_, t3_, f0_, f1_, f2_);\n         }\n\n         inline T0 t0() const\n         {\n            return t0_;\n         }\n\n         inline T1 t1() const\n         {\n            return t1_;\n         }\n\n         inline T2 t2() const\n         {\n            return t2_;\n         }\n\n         inline T3 t3() const\n         {\n            return t3_;\n         }\n\n         inline bfunc_t f0() const\n         {\n            return f0_;\n         }\n\n         inline bfunc_t f1() const\n         {\n            return f1_;\n         }\n\n         inline bfunc_t f2() const\n         {\n            return f2_;\n         }\n\n         inline std::string type_id() const\n         {\n            return id();\n         }\n\n         static inline std::string id()\n         {\n            return process_mode_t::template id<T0,T1,T2,T3>();\n         }\n\n         template <typename Allocator>\n         static inline expression_node<T>* allocate(Allocator& allocator,\n                                                    T0 p0, T1 p1, T2 p2, T3 p3,\n                                                    bfunc_t p4, bfunc_t p5, bfunc_t p6)\n         {\n            return allocator\n                      .template allocate_type<node_type,T0,T1,T2,T3,bfunc_t,bfunc_t>\n                         (p0, p1, p2, p3, p4, p5, p6);\n         }\n\n      private:\n\n         T0oT1oT2oT3(node_type&) {}\n         node_type& operator=(node_type&) { return (*this); }\n\n         T0 t0_;\n         T1 t1_;\n         T2 t2_;\n         T3 t3_;\n         const bfunc_t f0_;\n         const bfunc_t f1_;\n         const bfunc_t f2_;\n      };\n\n      template <typename T, typename T0, typename T1, typename T2>\n      class T0oT1oT2_sf3 : public T0oT1oT2_base_node<T>\n      {\n      public:\n\n         typedef typename details::functor_t<T> functor_t;\n         typedef typename functor_t::tfunc_t      tfunc_t;\n         typedef T value_type;\n         
typedef T0oT1oT2_sf3<T,T0,T1,T2> node_type;\n\n         T0oT1oT2_sf3(T0 p0, T1 p1, T2 p2, const tfunc_t p3)\n         : t0_(p0),\n           t1_(p1),\n           t2_(p2),\n           f_ (p3)\n         {}\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            static const typename expression_node<T>::node_type result = nodetype_T0oT1oT2<T,T0,T1,T2>::result;\n            return result;\n         }\n\n         inline operator_type operation() const\n         {\n            return e_default;\n         }\n\n         inline T value() const\n         {\n            return f_(t0_, t1_, t2_);\n         }\n\n         inline T0 t0() const\n         {\n            return t0_;\n         }\n\n         inline T1 t1() const\n         {\n            return t1_;\n         }\n\n         inline T2 t2() const\n         {\n            return t2_;\n         }\n\n         tfunc_t f() const\n         {\n            return f_;\n         }\n\n         std::string type_id() const\n         {\n            return id();\n         }\n\n         static inline std::string id()\n         {\n            return \"sf3\";\n         }\n\n         template <typename Allocator>\n         static inline expression_node<T>* allocate(Allocator& allocator, T0 p0, T1 p1, T2 p2, tfunc_t p3)\n         {\n            return allocator\n                     .template allocate_type<node_type,T0,T1,T2,tfunc_t>\n                        (p0, p1, p2, p3);\n         }\n\n      private:\n\n         T0oT1oT2_sf3(node_type&) {}\n         node_type& operator=(node_type&) { return (*this); }\n\n         T0 t0_;\n         T1 t1_;\n         T2 t2_;\n         const tfunc_t f_;\n      };\n\n      template <typename T, typename T0, typename T1, typename T2>\n      class sf3ext_type_node : public T0oT1oT2_base_node<T>\n      {\n      public:\n\n         virtual ~sf3ext_type_node()\n         {}\n\n         virtual T0 t0() const = 0;\n\n         virtual T1 t1() const = 0;\n\n         virtual T2 
t2() const = 0;\n      };\n\n      template <typename T, typename T0, typename T1, typename T2, typename SF3Operation>\n      class T0oT1oT2_sf3ext : public sf3ext_type_node<T,T0,T1,T2>\n      {\n      public:\n\n         typedef typename details::functor_t<T> functor_t;\n         typedef typename functor_t::tfunc_t      tfunc_t;\n         typedef T value_type;\n         typedef T0oT1oT2_sf3ext<T,T0,T1,T2,SF3Operation> node_type;\n\n         T0oT1oT2_sf3ext(T0 p0, T1 p1, T2 p2)\n         : t0_(p0),\n           t1_(p1),\n           t2_(p2)\n         {}\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            static const typename expression_node<T>::node_type result = nodetype_T0oT1oT2<T,T0,T1,T2>::result;\n            return result;\n         }\n\n         inline operator_type operation() const\n         {\n            return e_default;\n         }\n\n         inline T value() const\n         {\n            return SF3Operation::process(t0_, t1_, t2_);\n         }\n\n         T0 t0() const\n         {\n            return t0_;\n         }\n\n         T1 t1() const\n         {\n            return t1_;\n         }\n\n         T2 t2() const\n         {\n            return t2_;\n         }\n\n         std::string type_id() const\n         {\n            return id();\n         }\n\n         static inline std::string id()\n         {\n            return SF3Operation::id();\n         }\n\n         template <typename Allocator>\n         static inline expression_node<T>* allocate(Allocator& allocator, T0 p0, T1 p1, T2 p2)\n         {\n            return allocator\n                     .template allocate_type<node_type,T0,T1,T2>\n                        (p0, p1, p2);\n         }\n\n      private:\n\n         T0oT1oT2_sf3ext(node_type&) {}\n         node_type& operator=(node_type&) { return (*this); }\n\n         T0 t0_;\n         T1 t1_;\n         T2 t2_;\n      };\n\n      template <typename T>\n      inline bool is_sf3ext_node(const 
expression_node<T>* n)\n      {\n         switch (n->type())\n         {\n            case expression_node<T>::e_vovov : return true;\n            case expression_node<T>::e_vovoc : return true;\n            case expression_node<T>::e_vocov : return true;\n            case expression_node<T>::e_covov : return true;\n            case expression_node<T>::e_covoc : return true;\n            default                          : return false;\n         }\n      }\n\n      template <typename T, typename T0, typename T1, typename T2, typename T3>\n      class T0oT1oT2oT3_sf4 : public T0oT1oT2_base_node<T>\n      {\n      public:\n\n         typedef typename details::functor_t<T> functor_t;\n         typedef typename functor_t::qfunc_t      qfunc_t;\n         typedef T value_type;\n         typedef T0oT1oT2oT3_sf4<T,T0,T1,T2,T3> node_type;\n\n         T0oT1oT2oT3_sf4(T0 p0, T1 p1, T2 p2, T3 p3, const qfunc_t p4)\n         : t0_(p0),\n           t1_(p1),\n           t2_(p2),\n           t3_(p3),\n           f_ (p4)\n         {}\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            static const typename expression_node<T>::node_type result = nodetype_T0oT1oT2oT3<T,T0,T1,T2,T3>::result;\n            return result;\n         }\n\n         inline operator_type operation() const\n         {\n            return e_default;\n         }\n\n         inline T value() const\n         {\n            return f_(t0_, t1_, t2_, t3_);\n         }\n\n         inline T0 t0() const\n         {\n            return t0_;\n         }\n\n         inline T1 t1() const\n         {\n            return t1_;\n         }\n\n         inline T2 t2() const\n         {\n            return t2_;\n         }\n\n         inline T3 t3() const\n         {\n            return t3_;\n         }\n\n         qfunc_t f() const\n         {\n            return f_;\n         }\n\n         std::string type_id() const\n         {\n            return id();\n         }\n\n         static 
inline std::string id()\n         {\n            return \"sf4\";\n         }\n\n         template <typename Allocator>\n         static inline expression_node<T>* allocate(Allocator& allocator, T0 p0, T1 p1, T2 p2, T3 p3, qfunc_t p4)\n         {\n            return allocator\n                     .template allocate_type<node_type,T0,T1,T2,T3,qfunc_t>\n                        (p0, p1, p2, p3, p4);\n         }\n\n      private:\n\n         T0oT1oT2oT3_sf4(node_type&) {}\n         node_type& operator=(node_type&) { return (*this); }\n\n         T0 t0_;\n         T1 t1_;\n         T2 t2_;\n         T3 t3_;\n         const qfunc_t f_;\n      };\n\n      template <typename T, typename T0, typename T1, typename T2, typename T3, typename SF4Operation>\n      class T0oT1oT2oT3_sf4ext : public T0oT1oT2oT3_base_node<T>\n      {\n      public:\n\n         typedef typename details::functor_t<T> functor_t;\n         typedef typename functor_t::tfunc_t      tfunc_t;\n         typedef T value_type;\n         typedef T0oT1oT2oT3_sf4ext<T,T0,T1,T2,T3,SF4Operation> node_type;\n\n         T0oT1oT2oT3_sf4ext(T0 p0, T1 p1, T2 p2, T3 p3)\n         : t0_(p0),\n           t1_(p1),\n           t2_(p2),\n           t3_(p3)\n         {}\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            static const typename expression_node<T>::node_type result = nodetype_T0oT1oT2oT3<T,T0,T1,T2,T3>::result;\n            return result;\n         }\n\n         inline operator_type operation() const\n         {\n            return e_default;\n         }\n\n         inline T value() const\n         {\n            return SF4Operation::process(t0_, t1_, t2_, t3_);\n         }\n\n         inline T0 t0() const\n         {\n            return t0_;\n         }\n\n         inline T1 t1() const\n         {\n            return t1_;\n         }\n\n         inline T2 t2() const\n         {\n            return t2_;\n         }\n\n         inline T3 t3() const\n         {\n         
   return t3_;\n         }\n\n         std::string type_id() const\n         {\n            return id();\n         }\n\n         static inline std::string id()\n         {\n            return SF4Operation::id();\n         }\n\n         template <typename Allocator>\n         static inline expression_node<T>* allocate(Allocator& allocator, T0 p0, T1 p1, T2 p2, T3 p3)\n         {\n            return allocator\n                     .template allocate_type<node_type,T0,T1,T2,T3>\n                        (p0, p1, p2, p3);\n         }\n\n      private:\n\n         T0oT1oT2oT3_sf4ext(node_type&) {}\n         node_type& operator=(node_type&) { return (*this); }\n\n         T0 t0_;\n         T1 t1_;\n         T2 t2_;\n         T3 t3_;\n      };\n\n      template <typename T>\n      inline bool is_sf4ext_node(const expression_node<T>* n)\n      {\n         switch (n->type())\n         {\n            case expression_node<T>::e_vovovov : return true;\n            case expression_node<T>::e_vovovoc : return true;\n            case expression_node<T>::e_vovocov : return true;\n            case expression_node<T>::e_vocovov : return true;\n            case expression_node<T>::e_covovov : return true;\n            case expression_node<T>::e_covocov : return true;\n            case expression_node<T>::e_vocovoc : return true;\n            case expression_node<T>::e_covovoc : return true;\n            case expression_node<T>::e_vococov : return true;\n            default                            : return false;\n         }\n      }\n\n      template <typename T, typename T0, typename T1>\n      struct T0oT1_define\n      {\n         typedef details::T0oT1<T,T0,T1> type0;\n      };\n\n      template <typename T, typename T0, typename T1, typename T2>\n      struct T0oT1oT2_define\n      {\n         typedef details::T0oT1oT2<T,T0,T1,T2,typename T0oT1oT2process<T>::mode0> type0;\n         typedef details::T0oT1oT2<T,T0,T1,T2,typename T0oT1oT2process<T>::mode1> type1;\n         
typedef details::T0oT1oT2_sf3<T,T0,T1,T2> sf3_type;\n         typedef details::sf3ext_type_node<T,T0,T1,T2> sf3_type_node;\n      };\n\n      template <typename T, typename T0, typename T1, typename T2, typename T3>\n      struct T0oT1oT2oT3_define\n      {\n         typedef details::T0oT1oT2oT3<T,T0,T1,T2,T3,typename T0oT1oT20T3process<T>::mode0> type0;\n         typedef details::T0oT1oT2oT3<T,T0,T1,T2,T3,typename T0oT1oT20T3process<T>::mode1> type1;\n         typedef details::T0oT1oT2oT3<T,T0,T1,T2,T3,typename T0oT1oT20T3process<T>::mode2> type2;\n         typedef details::T0oT1oT2oT3<T,T0,T1,T2,T3,typename T0oT1oT20T3process<T>::mode3> type3;\n         typedef details::T0oT1oT2oT3<T,T0,T1,T2,T3,typename T0oT1oT20T3process<T>::mode4> type4;\n         typedef details::T0oT1oT2oT3_sf4<T,T0,T1,T2,T3> sf4_type;\n      };\n\n      template <typename T, typename Operation>\n      class vov_node : public vov_base_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef Operation operation_t;\n\n         // variable op variable node\n         explicit vov_node(const T& var0, const T& var1)\n         : v0_(var0),\n           v1_(var1)\n         {}\n\n         inline T value() const\n         {\n            return Operation::process(v0_,v1_);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return Operation::type();\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline const T& v0() const\n         {\n            return v0_;\n         }\n\n         inline const T& v1() const\n         {\n            return v1_;\n         }\n\n      protected:\n\n         const T& v0_;\n         const T& v1_;\n\n      private:\n\n         vov_node(vov_node<T,Operation>&);\n         vov_node<T,Operation>& operator=(vov_node<T,Operation>&);\n      };\n\n      template <typename T, typename 
Operation>\n      class cov_node : public cov_base_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef Operation operation_t;\n\n         // constant op variable node\n         explicit cov_node(const T& const_var, const T& var)\n         : c_(const_var),\n           v_(var)\n         {}\n\n         inline T value() const\n         {\n            return Operation::process(c_,v_);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return Operation::type();\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline const T c() const\n         {\n            return c_;\n         }\n\n         inline const T& v() const\n         {\n            return v_;\n         }\n\n      protected:\n\n         const T  c_;\n         const T& v_;\n\n      private:\n\n         cov_node(const cov_node<T,Operation>&);\n         cov_node<T,Operation>& operator=(const cov_node<T,Operation>&);\n      };\n\n      template <typename T, typename Operation>\n      class voc_node : public voc_base_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef Operation operation_t;\n\n         // variable op constant node\n         explicit voc_node(const T& var, const T& const_var)\n         : v_(var),\n           c_(const_var)\n         {}\n\n         inline T value() const\n         {\n            return Operation::process(v_,c_);\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline const T c() const\n         {\n            return c_;\n         }\n\n         inline const T& v() const\n         {\n            return v_;\n         }\n\n      protected:\n\n         const T& v_;\n         const T  c_;\n\n      private:\n\n         voc_node(const 
voc_node<T,Operation>&);\n         voc_node<T,Operation>& operator=(const voc_node<T,Operation>&);\n      };\n\n      template <typename T, typename Operation>\n      class vob_node : public vob_base_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef std::pair<expression_ptr,bool> branch_t;\n         typedef Operation operation_t;\n\n         // variable op branch node\n         explicit vob_node(const T& var, const expression_ptr brnch)\n         : v_(var)\n         {\n            init_branches<1>(branch_,brnch);\n         }\n\n        ~vob_node()\n         {\n            cleanup_branches::execute<T,1>(branch_);\n         }\n\n         inline T value() const\n         {\n            return Operation::process(v_,branch_[0].first->value());\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline const T& v() const\n         {\n            return v_;\n         }\n\n         inline expression_node<T>* branch(const std::size_t&) const\n         {\n            return branch_[0].first;\n         }\n\n      private:\n\n         vob_node(const vob_node<T,Operation>&);\n         vob_node<T,Operation>& operator=(const vob_node<T,Operation>&);\n\n         const T& v_;\n         branch_t branch_[1];\n      };\n\n      template <typename T, typename Operation>\n      class bov_node : public bov_base_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef std::pair<expression_ptr,bool> branch_t;\n         typedef Operation operation_t;\n\n         // branch op variable node\n         explicit bov_node(const expression_ptr brnch, const T& var)\n         : v_(var)\n         {\n            init_branches<1>(branch_,brnch);\n         }\n\n        ~bov_node()\n         {\n            cleanup_branches::execute<T,1>(branch_);\n         }\n\n         inline T value() const\n         {\n   
         return Operation::process(branch_[0].first->value(),v_);\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline const T& v() const\n         {\n            return v_;\n         }\n\n         inline expression_node<T>* branch(const std::size_t&) const\n         {\n            return branch_[0].first;\n         }\n\n      private:\n\n         bov_node(const bov_node<T,Operation>&);\n         bov_node<T,Operation>& operator=(const bov_node<T,Operation>&);\n\n         const T& v_;\n         branch_t branch_[1];\n      };\n\n      template <typename T, typename Operation>\n      class cob_node : public cob_base_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef std::pair<expression_ptr,bool> branch_t;\n         typedef Operation operation_t;\n\n         // constant op branch node\n         explicit cob_node(const T const_var, const expression_ptr brnch)\n         : c_(const_var)\n         {\n            init_branches<1>(branch_,brnch);\n         }\n\n        ~cob_node()\n         {\n            cleanup_branches::execute<T,1>(branch_);\n         }\n\n         inline T value() const\n         {\n            return Operation::process(c_,branch_[0].first->value());\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline const T c() const\n         {\n            return c_;\n         }\n\n         inline void set_c(const T new_c)\n         {\n            (*const_cast<T*>(&c_)) = new_c;\n         }\n\n         inline expression_node<T>* branch(const std::size_t&) const\n         {\n            return branch_[0].first;\n         }\n\n         inline expression_node<T>* move_branch(const std::size_t&)\n         {\n            branch_[0].second = false;\n            return branch_[0].first;\n         }\n\n      private:\n\n      
   cob_node(const cob_node<T,Operation>&);\n         cob_node<T,Operation>& operator=(const cob_node<T,Operation>&);\n\n         const T  c_;\n         branch_t branch_[1];\n      };\n\n      template <typename T, typename Operation>\n      class boc_node : public boc_base_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef std::pair<expression_ptr,bool> branch_t;\n         typedef Operation operation_t;\n\n         // branch op constant node\n         explicit boc_node(const expression_ptr brnch, const T const_var)\n         : c_(const_var)\n         {\n            init_branches<1>(branch_,brnch);\n         }\n\n        ~boc_node()\n         {\n            cleanup_branches::execute<T,1>(branch_);\n         }\n\n         inline T value() const\n         {\n            return Operation::process(branch_[0].first->value(),c_);\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline const T c() const\n         {\n            return c_;\n         }\n\n         inline void set_c(const T new_c)\n         {\n            (*const_cast<T*>(&c_)) = new_c;\n         }\n\n         inline expression_node<T>* branch(const std::size_t&) const\n         {\n            return branch_[0].first;\n         }\n\n         inline expression_node<T>* move_branch(const std::size_t&)\n         {\n            branch_[0].second = false;\n            return branch_[0].first;\n         }\n\n      private:\n\n         boc_node(const boc_node<T,Operation>&);\n         boc_node<T,Operation>& operator=(const boc_node<T,Operation>&);\n\n         const T  c_;\n         branch_t branch_[1];\n      };\n\n      #ifndef exprtk_disable_string_capabilities\n      template <typename T, typename SType0, typename SType1, typename Operation>\n      class sos_node : public sos_base_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* 
expression_ptr;\n         typedef Operation operation_t;\n\n         // string op string node\n         explicit sos_node(SType0 p0, SType1 p1)\n         : s0_(p0),\n           s1_(p1)\n         {}\n\n         inline T value() const\n         {\n            return Operation::process(s0_,s1_);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return Operation::type();\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline std::string& s0()\n         {\n            return s0_;\n         }\n\n         inline std::string& s1()\n         {\n            return s1_;\n         }\n\n      protected:\n\n         SType0 s0_;\n         SType1 s1_;\n\n      private:\n\n         sos_node(sos_node<T,SType0,SType1,Operation>&);\n         sos_node<T,SType0,SType1,Operation>& operator=(sos_node<T,SType0,SType1,Operation>&);\n      };\n\n      template <typename T, typename SType0, typename SType1, typename RangePack, typename Operation>\n      class str_xrox_node : public sos_base_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef Operation operation_t;\n\n         // string-range op string node\n         explicit str_xrox_node(SType0 p0, SType1 p1, RangePack rp0)\n         : s0_ (p0 ),\n           s1_ (p1 ),\n           rp0_(rp0)\n         {}\n\n        ~str_xrox_node()\n         {\n            rp0_.free();\n         }\n\n         inline T value() const\n         {\n            std::size_t r0 = 0;\n            std::size_t r1 = 0;\n\n            if (rp0_(r0, r1, s0_.size()))\n               return Operation::process(s0_.substr(r0, (r1 - r0) + 1), s1_);\n            else\n               return T(0);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return Operation::type();\n         }\n\n         inline operator_type 
operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline std::string& s0()\n         {\n            return s0_;\n         }\n\n         inline std::string& s1()\n         {\n            return s1_;\n         }\n\n      protected:\n\n         SType0    s0_;\n         SType1    s1_;\n         RangePack rp0_;\n\n      private:\n\n         str_xrox_node(str_xrox_node<T,SType0,SType1,RangePack,Operation>&);\n         str_xrox_node<T,SType0,SType1,RangePack,Operation>& operator=(str_xrox_node<T,SType0,SType1,RangePack,Operation>&);\n      };\n\n      template <typename T, typename SType0, typename SType1, typename RangePack, typename Operation>\n      class str_xoxr_node : public sos_base_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef Operation operation_t;\n\n         // string op string range node\n         explicit str_xoxr_node(SType0 p0, SType1 p1, RangePack rp1)\n         : s0_ (p0 ),\n           s1_ (p1 ),\n           rp1_(rp1)\n         {}\n\n        ~str_xoxr_node()\n         {\n            rp1_.free();\n         }\n\n         inline T value() const\n         {\n            std::size_t r0 = 0;\n            std::size_t r1 = 0;\n\n            if (rp1_(r0, r1, s1_.size()))\n               return Operation::process(s0_, s1_.substr(r0, (r1 - r0) + 1));\n            else\n               return T(0);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return Operation::type();\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline std::string& s0()\n         {\n            return s0_;\n         }\n\n         inline std::string& s1()\n         {\n            return s1_;\n         }\n\n      protected:\n\n         SType0    s0_;\n         SType1    s1_;\n         RangePack rp1_;\n\n      private:\n\n         
str_xoxr_node(str_xoxr_node<T,SType0,SType1,RangePack,Operation>&);\n         str_xoxr_node<T,SType0,SType1,RangePack,Operation>& operator=(str_xoxr_node<T,SType0,SType1,RangePack,Operation>&);\n      };\n\n      template <typename T, typename SType0, typename SType1, typename RangePack, typename Operation>\n      class str_xroxr_node : public sos_base_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef Operation operation_t;\n\n         // string-range op string-range node\n         explicit str_xroxr_node(SType0 p0, SType1 p1, RangePack rp0, RangePack rp1)\n         : s0_ (p0 ),\n           s1_ (p1 ),\n           rp0_(rp0),\n           rp1_(rp1)\n         {}\n\n        ~str_xroxr_node()\n         {\n            rp0_.free();\n            rp1_.free();\n         }\n\n         inline T value() const\n         {\n            std::size_t r0_0 = 0;\n            std::size_t r0_1 = 0;\n            std::size_t r1_0 = 0;\n            std::size_t r1_1 = 0;\n\n            if (\n                 rp0_(r0_0, r1_0, s0_.size()) &&\n                 rp1_(r0_1, r1_1, s1_.size())\n               )\n            {\n               return Operation::process(\n                                          s0_.substr(r0_0, (r1_0 - r0_0) + 1),\n                                          s1_.substr(r0_1, (r1_1 - r0_1) + 1)\n                                        );\n            }\n            else\n               return T(0);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return Operation::type();\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline std::string& s0()\n         {\n            return s0_;\n         }\n\n         inline std::string& s1()\n         {\n            return s1_;\n         }\n\n      protected:\n\n         SType0    s0_;\n         SType1    s1_;\n         
RangePack rp0_;\n         RangePack rp1_;\n\n      private:\n\n         str_xroxr_node(str_xroxr_node<T,SType0,SType1,RangePack,Operation>&);\n         str_xroxr_node<T,SType0,SType1,RangePack,Operation>& operator=(str_xroxr_node<T,SType0,SType1,RangePack,Operation>&);\n      };\n\n      template <typename T, typename Operation>\n      class str_sogens_node : public binary_node<T>\n      {\n      public:\n\n         typedef expression_node <T>* expression_ptr;\n         typedef string_base_node<T>*   str_base_ptr;\n         typedef range_pack      <T>         range_t;\n         typedef range_t*                  range_ptr;\n         typedef range_interface<T>         irange_t;\n         typedef irange_t*                irange_ptr;\n\n         str_sogens_node(const operator_type& opr,\n                         expression_ptr branch0,\n                         expression_ptr branch1)\n         : binary_node<T>(opr, branch0, branch1),\n           str0_base_ptr_ (0),\n           str1_base_ptr_ (0),\n           str0_range_ptr_(0),\n           str1_range_ptr_(0)\n         {\n            if (is_generally_string_node(binary_node<T>::branch_[0].first))\n            {\n               str0_base_ptr_ = dynamic_cast<str_base_ptr>(binary_node<T>::branch_[0].first);\n\n               if (0 == str0_base_ptr_)\n                  return;\n\n               irange_ptr range = dynamic_cast<irange_ptr>(binary_node<T>::branch_[0].first);\n\n               if (0 == range)\n                  return;\n\n               str0_range_ptr_ = &(range->range_ref());\n            }\n\n            if (is_generally_string_node(binary_node<T>::branch_[1].first))\n            {\n               str1_base_ptr_ = dynamic_cast<str_base_ptr>(binary_node<T>::branch_[1].first);\n\n               if (0 == str1_base_ptr_)\n                  return;\n\n               irange_ptr range = dynamic_cast<irange_ptr>(binary_node<T>::branch_[1].first);\n\n               if (0 == range)\n                  return;\n\n       
        str1_range_ptr_ = &(range->range_ref());\n            }\n         }\n\n         inline T value() const\n         {\n            if (\n                 str0_base_ptr_  &&\n                 str1_base_ptr_  &&\n                 str0_range_ptr_ &&\n                 str1_range_ptr_\n               )\n            {\n               binary_node<T>::branch_[0].first->value();\n               binary_node<T>::branch_[1].first->value();\n\n               std::size_t str0_r0 = 0;\n               std::size_t str0_r1 = 0;\n\n               std::size_t str1_r0 = 0;\n               std::size_t str1_r1 = 0;\n\n               range_t& range0 = (*str0_range_ptr_);\n               range_t& range1 = (*str1_range_ptr_);\n\n               if (\n                    range0(str0_r0, str0_r1, str0_base_ptr_->size()) &&\n                    range1(str1_r0, str1_r1, str1_base_ptr_->size())\n                  )\n               {\n                  return Operation::process(\n                                             str0_base_ptr_->str().substr(str0_r0,(str0_r1 - str0_r0) + 1),\n                                             str1_base_ptr_->str().substr(str1_r0,(str1_r1 - str1_r0) + 1)\n                                           );\n               }\n            }\n\n            return std::numeric_limits<T>::quiet_NaN();\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return Operation::type();\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n      private:\n\n         str_sogens_node(str_sogens_node<T,Operation>&);\n         str_sogens_node<T,Operation>& operator=(str_sogens_node<T,Operation>&);\n\n         str_base_ptr str0_base_ptr_;\n         str_base_ptr str1_base_ptr_;\n         range_ptr    str0_range_ptr_;\n         range_ptr    str1_range_ptr_;\n      };\n\n      template <typename T, typename SType0, typename SType1, typename SType2, 
typename Operation>\n      class sosos_node : public sosos_base_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef Operation operation_t;\n\n         // string op string op string node\n         explicit sosos_node(SType0 p0, SType1 p1, SType2 p2)\n         : s0_(p0),\n           s1_(p1),\n           s2_(p2)\n         {}\n\n         inline T value() const\n         {\n            return Operation::process(s0_,s1_,s2_);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return Operation::type();\n         }\n\n         inline operator_type operation() const\n         {\n            return Operation::operation();\n         }\n\n         inline std::string& s0()\n         {\n            return s0_;\n         }\n\n         inline std::string& s1()\n         {\n            return s1_;\n         }\n\n         inline std::string& s2()\n         {\n            return s2_;\n         }\n\n      protected:\n\n         SType0 s0_;\n         SType1 s1_;\n         SType2 s2_;\n\n      private:\n\n         sosos_node(sosos_node<T,SType0,SType1,SType2,Operation>&);\n         sosos_node<T,SType0,SType1,SType2,Operation>& operator=(sosos_node<T,SType0,SType1,SType2,Operation>&);\n      };\n      #endif\n\n      template <typename T, typename PowOp>\n      class ipow_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef PowOp operation_t;\n\n         explicit ipow_node(const T& v)\n         : v_(v)\n         {}\n\n         inline T value() const\n         {\n            return PowOp::result(v_);\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_ipow;\n         }\n\n      private:\n\n         ipow_node(const ipow_node<T,PowOp>&);\n         ipow_node<T,PowOp>& operator=(const ipow_node<T,PowOp>&);\n\n         const T& 
v_;\n      };\n\n      template <typename T, typename PowOp>\n      class bipow_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef std::pair<expression_ptr, bool> branch_t;\n         typedef PowOp operation_t;\n\n         explicit bipow_node(expression_ptr brnch)\n         {\n            init_branches<1>(branch_, brnch);\n         }\n\n        ~bipow_node()\n         {\n            cleanup_branches::execute<T,1>(branch_);\n         }\n\n         inline T value() const\n         {\n            return PowOp::result(branch_[0].first->value());\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_ipow;\n         }\n\n      private:\n\n         bipow_node(const bipow_node<T,PowOp>&);\n         bipow_node<T,PowOp>& operator=(const bipow_node<T,PowOp>&);\n\n         branch_t branch_[1];\n      };\n\n      template <typename T, typename PowOp>\n      class ipowinv_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef PowOp operation_t;\n\n         explicit ipowinv_node(const T& v)\n         : v_(v)\n         {}\n\n         inline T value() const\n         {\n            return (T(1) / PowOp::result(v_));\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_ipowinv;\n         }\n\n      private:\n\n         ipowinv_node(const ipowinv_node<T,PowOp>&);\n         ipowinv_node<T,PowOp>& operator=(const ipowinv_node<T,PowOp>&);\n\n         const T& v_;\n      };\n\n      template <typename T, typename PowOp>\n      class bipowninv_node : public expression_node<T>\n      {\n      public:\n\n         typedef expression_node<T>* expression_ptr;\n         typedef std::pair<expression_ptr, bool> branch_t;\n         typedef PowOp operation_t;\n\n         explicit 
bipowninv_node(expression_ptr brnch)\n         {\n            init_branches<1>(branch_, brnch);\n         }\n\n        ~bipowninv_node()\n         {\n            cleanup_branches::execute<T,1>(branch_);\n         }\n\n         inline T value() const\n         {\n            return (T(1) / PowOp::result(branch_[0].first->value()));\n         }\n\n         inline typename expression_node<T>::node_type type() const\n         {\n            return expression_node<T>::e_ipowinv;\n         }\n\n      private:\n\n         bipowninv_node(const bipowninv_node<T,PowOp>&);\n         bipowninv_node<T,PowOp>& operator=(const bipowninv_node<T,PowOp>&);\n\n         branch_t branch_[1];\n      };\n\n      template <typename T>\n      inline bool is_vov_node(const expression_node<T>* node)\n      {\n         return (0 != dynamic_cast<const vov_base_node<T>*>(node));\n      }\n\n      template <typename T>\n      inline bool is_cov_node(const expression_node<T>* node)\n      {\n         return (0 != dynamic_cast<const cov_base_node<T>*>(node));\n      }\n\n      template <typename T>\n      inline bool is_voc_node(const expression_node<T>* node)\n      {\n         return (0 != dynamic_cast<const voc_base_node<T>*>(node));\n      }\n\n      template <typename T>\n      inline bool is_cob_node(const expression_node<T>* node)\n      {\n         return (0 != dynamic_cast<const cob_base_node<T>*>(node));\n      }\n\n      template <typename T>\n      inline bool is_boc_node(const expression_node<T>* node)\n      {\n         return (0 != dynamic_cast<const boc_base_node<T>*>(node));\n      }\n\n      template <typename T>\n      inline bool is_t0ot1ot2_node(const expression_node<T>* node)\n      {\n         return (0 != dynamic_cast<const T0oT1oT2_base_node<T>*>(node));\n      }\n\n      template <typename T>\n      inline bool is_t0ot1ot2ot3_node(const expression_node<T>* node)\n      {\n         return (0 != dynamic_cast<const T0oT1oT2oT3_base_node<T>*>(node));\n      }\n\n      
template <typename T>\n      inline bool is_uv_node(const expression_node<T>* node)\n      {\n         return (0 != dynamic_cast<const uv_base_node<T>*>(node));\n      }\n\n      template <typename T>\n      inline bool is_string_node(const expression_node<T>* node)\n      {\n         return node && (expression_node<T>::e_stringvar == node->type());\n      }\n\n      template <typename T>\n      inline bool is_string_range_node(const expression_node<T>* node)\n      {\n         return node && (expression_node<T>::e_stringvarrng == node->type());\n      }\n\n      template <typename T>\n      inline bool is_const_string_node(const expression_node<T>* node)\n      {\n         return node && (expression_node<T>::e_stringconst == node->type());\n      }\n\n      template <typename T>\n      inline bool is_const_string_range_node(const expression_node<T>* node)\n      {\n         return node && (expression_node<T>::e_cstringvarrng == node->type());\n      }\n\n      template <typename T>\n      inline bool is_string_assignment_node(const expression_node<T>* node)\n      {\n         return node && (expression_node<T>::e_strass == node->type());\n      }\n\n      template <typename T>\n      inline bool is_string_concat_node(const expression_node<T>* node)\n      {\n         return node && (expression_node<T>::e_strconcat == node->type());\n      }\n\n      template <typename T>\n      inline bool is_string_function_node(const expression_node<T>* node)\n      {\n         return node && (expression_node<T>::e_strfunction == node->type());\n      }\n\n      template <typename T>\n      inline bool is_string_condition_node(const expression_node<T>* node)\n      {\n         return node && (expression_node<T>::e_strcondition == node->type());\n      }\n\n      template <typename T>\n      inline bool is_string_ccondition_node(const expression_node<T>* node)\n      {\n         return node && (expression_node<T>::e_strccondition == node->type());\n      }\n\n      template 
<typename T>\n      inline bool is_string_vararg_node(const expression_node<T>* node)\n      {\n         return node && (expression_node<T>::e_stringvararg == node->type());\n      }\n\n      template <typename T>\n      inline bool is_genricstring_range_node(const expression_node<T>* node)\n      {\n         return node && (expression_node<T>::e_strgenrange == node->type());\n      }\n\n      template <typename T>\n      inline bool is_generally_string_node(const expression_node<T>* node)\n      {\n         if (node)\n         {\n            switch (node->type())\n            {\n               case expression_node<T>::e_stringvar     :\n               case expression_node<T>::e_stringconst   :\n               case expression_node<T>::e_stringvarrng  :\n               case expression_node<T>::e_cstringvarrng :\n               case expression_node<T>::e_strgenrange   :\n               case expression_node<T>::e_strass        :\n               case expression_node<T>::e_strconcat     :\n               case expression_node<T>::e_strfunction   :\n               case expression_node<T>::e_strcondition  :\n               case expression_node<T>::e_strccondition :\n               case expression_node<T>::e_stringvararg  : return true;\n               default                                  : return false;\n            }\n         }\n\n         return false;\n      }\n\n      class node_allocator\n      {\n      public:\n\n         template <typename ResultNode, typename OpType, typename ExprNode>\n         inline expression_node<typename ResultNode::value_type>* allocate(OpType& operation, ExprNode (&branch)[1])\n         {\n            return allocate<ResultNode>(operation,branch[0]);\n         }\n\n         template <typename ResultNode, typename OpType, typename ExprNode>\n         inline expression_node<typename ResultNode::value_type>* allocate(OpType& operation, ExprNode (&branch)[2])\n         {\n            return 
allocate<ResultNode>(operation,branch[0],branch[1]);\n         }\n\n         template <typename ResultNode, typename OpType, typename ExprNode>\n         inline expression_node<typename ResultNode::value_type>* allocate(OpType& operation, ExprNode (&branch)[3])\n         {\n            return allocate<ResultNode>(operation,branch[0],branch[1],branch[2]);\n         }\n\n         template <typename ResultNode, typename OpType, typename ExprNode>\n         inline expression_node<typename ResultNode::value_type>* allocate(OpType& operation, ExprNode (&branch)[4])\n         {\n            return allocate<ResultNode>(operation,branch[0],branch[1],branch[2],branch[3]);\n         }\n\n         template <typename ResultNode, typename OpType, typename ExprNode>\n         inline expression_node<typename ResultNode::value_type>* allocate(OpType& operation, ExprNode (&branch)[5])\n         {\n            return allocate<ResultNode>(operation,branch[0],branch[1],branch[2],branch[3],branch[4]);\n         }\n\n         template <typename ResultNode, typename OpType, typename ExprNode>\n         inline expression_node<typename ResultNode::value_type>* allocate(OpType& operation, ExprNode (&branch)[6])\n         {\n            return allocate<ResultNode>(operation,branch[0],branch[1],branch[2],branch[3],branch[4],branch[5]);\n         }\n\n         template <typename node_type>\n         inline expression_node<typename node_type::value_type>* allocate() const\n         {\n            return (new node_type());\n         }\n\n         template <typename node_type,\n                   typename Type,\n                   typename Allocator,\n                   template <typename,typename> class Sequence>\n         inline expression_node<typename node_type::value_type>* allocate(const Sequence<Type,Allocator>& seq) const\n         {\n            return (new node_type(seq));\n         }\n\n         template <typename node_type, typename T1>\n         inline expression_node<typename 
node_type::value_type>* allocate(T1& t1) const\n         {\n            return (new node_type(t1));\n         }\n\n         template <typename node_type, typename T1>\n         inline expression_node<typename node_type::value_type>* allocate_c(const T1& t1) const\n         {\n            return (new node_type(t1));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2>\n         inline expression_node<typename node_type::value_type>* allocate(const T1& t1, const T2& t2) const\n         {\n            return (new node_type(t1, t2));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2>\n         inline expression_node<typename node_type::value_type>* allocate_cr(const T1& t1, T2& t2) const\n         {\n            return (new node_type(t1, t2));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2>\n         inline expression_node<typename node_type::value_type>* allocate_rc(T1& t1, const T2& t2) const\n         {\n            return (new node_type(t1, t2));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2>\n         inline expression_node<typename node_type::value_type>* allocate_rr(T1& t1, T2& t2) const\n         {\n            return (new node_type(t1, t2));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2>\n         inline expression_node<typename node_type::value_type>* allocate_tt(T1 t1, T2 t2) const\n         {\n            return (new node_type(t1, t2));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2, typename T3>\n         inline expression_node<typename node_type::value_type>* allocate_ttt(T1 t1, T2 t2, T3 t3) const\n         {\n            return (new node_type(t1, t2, t3));\n         }\n\n         template <typename node_type,\n                   typename T1, typename 
T2, typename T3, typename T4>\n         inline expression_node<typename node_type::value_type>* allocate_tttt(T1 t1, T2 t2, T3 t3, T4 t4) const\n         {\n            return (new node_type(t1, t2, t3, t4));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2, typename T3>\n         inline expression_node<typename node_type::value_type>* allocate_rrr(T1& t1, T2& t2, T3& t3) const\n         {\n            return (new node_type(t1, t2, t3));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2, typename T3, typename T4>\n         inline expression_node<typename node_type::value_type>* allocate_rrrr(T1& t1, T2& t2, T3& t3, T4& t4) const\n         {\n            return (new node_type(t1, t2, t3, t4));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2, typename T3, typename T4, typename T5>\n         inline expression_node<typename node_type::value_type>* allocate_rrrrr(T1& t1, T2& t2, T3& t3, T4& t4, T5& t5) const\n         {\n            return (new node_type(t1, t2, t3, t4, t5));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2, typename T3>\n         inline expression_node<typename node_type::value_type>* allocate(const T1& t1, const T2& t2,\n                                                                          const T3& t3) const\n         {\n            return (new node_type(t1, t2, t3));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2,\n                   typename T3, typename T4>\n         inline expression_node<typename node_type::value_type>* allocate(const T1& t1, const T2& t2,\n                                                                          const T3& t3, const T4& t4) const\n         {\n            return (new node_type(t1, t2, t3, t4));\n         }\n\n         template <typename node_type,\n              
     typename T1, typename T2,\n                   typename T3, typename T4, typename T5>\n         inline expression_node<typename node_type::value_type>* allocate(const T1& t1, const T2& t2,\n                                                                          const T3& t3, const T4& t4,\n                                                                          const T5& t5) const\n         {\n            return (new node_type(t1, t2, t3, t4, t5));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2,\n                   typename T3, typename T4, typename T5, typename T6>\n         inline expression_node<typename node_type::value_type>* allocate(const T1& t1, const T2& t2,\n                                                                          const T3& t3, const T4& t4,\n                                                                          const T5& t5, const T6& t6) const\n         {\n            return (new node_type(t1, t2, t3, t4, t5, t6));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2,\n                   typename T3, typename T4,\n                   typename T5, typename T6, typename T7>\n         inline expression_node<typename node_type::value_type>* allocate(const T1& t1, const T2& t2,\n                                                                          const T3& t3, const T4& t4,\n                                                                          const T5& t5, const T6& t6,\n                                                                          const T7& t7) const\n         {\n            return (new node_type(t1, t2, t3, t4, t5, t6, t7));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2,\n                   typename T3, typename T4,\n                   typename T5, typename T6,\n                   typename T7, typename T8>\n         inline expression_node<typename 
node_type::value_type>* allocate(const T1& t1, const T2& t2,\n                                                                          const T3& t3, const T4& t4,\n                                                                          const T5& t5, const T6& t6,\n                                                                          const T7& t7, const T8& t8) const\n         {\n            return (new node_type(t1, t2, t3, t4, t5, t6, t7, t8));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2,\n                   typename T3, typename T4,\n                   typename T5, typename T6,\n                   typename T7, typename T8, typename T9>\n         inline expression_node<typename node_type::value_type>* allocate(const T1& t1, const T2& t2,\n                                                                          const T3& t3, const T4& t4,\n                                                                          const T5& t5, const T6& t6,\n                                                                          const T7& t7, const T8& t8,\n                                                                          const T9& t9) const\n         {\n            return (new node_type(t1, t2, t3, t4, t5, t6, t7, t8, t9));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2,\n                   typename T3, typename T4,\n                   typename T5, typename T6,\n                   typename T7, typename T8,\n                   typename T9, typename T10>\n         inline expression_node<typename node_type::value_type>* allocate(const T1& t1, const  T2&  t2,\n                                                                          const T3& t3, const  T4&  t4,\n                                                                          const T5& t5, const  T6&  t6,\n                                                                          const T7& t7, const  T8& 
 t8,\n                                                                          const T9& t9, const T10& t10) const\n         {\n            return (new node_type(t1, t2, t3, t4, t5, t6, t7, t8, t9, t10));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2, typename T3>\n         inline expression_node<typename node_type::value_type>* allocate_type(T1 t1, T2 t2, T3 t3) const\n         {\n            return (new node_type(t1, t2, t3));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2,\n                   typename T3, typename T4>\n         inline expression_node<typename node_type::value_type>* allocate_type(T1 t1, T2 t2,\n                                                                               T3 t3, T4 t4) const\n         {\n            return (new node_type(t1, t2, t3, t4));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2,\n                   typename T3, typename T4,\n                   typename T5>\n         inline expression_node<typename node_type::value_type>* allocate_type(T1 t1, T2 t2,\n                                                                               T3 t3, T4 t4,\n                                                                               T5 t5) const\n         {\n            return (new node_type(t1, t2, t3, t4, t5));\n         }\n\n         template <typename node_type,\n                   typename T1, typename T2,\n                   typename T3, typename T4,\n                   typename T5, typename T6>\n         inline expression_node<typename node_type::value_type>* allocate_type(T1 t1, T2 t2,\n                                                                               T3 t3, T4 t4,\n                                                                               T5 t5, T6 t6) const\n         {\n            return (new node_type(t1, t2, t3, t4, t5, t6));\n         }\n\n         
template <typename node_type,\n                   typename T1, typename T2,\n                   typename T3, typename T4,\n                   typename T5, typename T6, typename T7>\n         inline expression_node<typename node_type::value_type>* allocate_type(T1 t1, T2 t2,\n                                                                               T3 t3, T4 t4,\n                                                                               T5 t5, T6 t6,\n                                                                               T7 t7) const\n         {\n            return (new node_type(t1, t2, t3, t4, t5, t6, t7));\n         }\n\n         template <typename T>\n         void inline free(expression_node<T>*& e) const\n         {\n            delete e;\n            e = 0;\n         }\n      };\n\n      inline void load_operations_map(std::multimap<std::string,details::base_operation_t,details::ilesscompare>& m)\n      {\n         #define register_op(Symbol,Type,Args)                                               \\\n         m.insert(std::make_pair(std::string(Symbol),details::base_operation_t(Type,Args))); \\\n\n         register_op(      \"abs\", e_abs     , 1)\n         register_op(     \"acos\", e_acos    , 1)\n         register_op(    \"acosh\", e_acosh   , 1)\n         register_op(     \"asin\", e_asin    , 1)\n         register_op(    \"asinh\", e_asinh   , 1)\n         register_op(     \"atan\", e_atan    , 1)\n         register_op(    \"atanh\", e_atanh   , 1)\n         register_op(     \"ceil\", e_ceil    , 1)\n         register_op(      \"cos\", e_cos     , 1)\n         register_op(     \"cosh\", e_cosh    , 1)\n         register_op(      \"exp\", e_exp     , 1)\n         register_op(    \"expm1\", e_expm1   , 1)\n         register_op(    \"floor\", e_floor   , 1)\n         register_op(      \"log\", e_log     , 1)\n         register_op(    \"log10\", e_log10   , 1)\n         register_op(     \"log2\", e_log2    , 1)\n         register_op(    
\"log1p\", e_log1p   , 1)\n         register_op(    \"round\", e_round   , 1)\n         register_op(      \"sin\", e_sin     , 1)\n         register_op(     \"sinc\", e_sinc    , 1)\n         register_op(     \"sinh\", e_sinh    , 1)\n         register_op(      \"sec\", e_sec     , 1)\n         register_op(      \"csc\", e_csc     , 1)\n         register_op(     \"sqrt\", e_sqrt    , 1)\n         register_op(      \"tan\", e_tan     , 1)\n         register_op(     \"tanh\", e_tanh    , 1)\n         register_op(      \"cot\", e_cot     , 1)\n         register_op(  \"rad2deg\", e_r2d     , 1)\n         register_op(  \"deg2rad\", e_d2r     , 1)\n         register_op( \"deg2grad\", e_d2g     , 1)\n         register_op( \"grad2deg\", e_g2d     , 1)\n         register_op(      \"sgn\", e_sgn     , 1)\n         register_op(      \"not\", e_notl    , 1)\n         register_op(      \"erf\", e_erf     , 1)\n         register_op(     \"erfc\", e_erfc    , 1)\n         register_op(     \"ncdf\", e_ncdf    , 1)\n         register_op(     \"frac\", e_frac    , 1)\n         register_op(    \"trunc\", e_trunc   , 1)\n         register_op(    \"atan2\", e_atan2   , 2)\n         register_op(      \"mod\", e_mod     , 2)\n         register_op(     \"logn\", e_logn    , 2)\n         register_op(      \"pow\", e_pow     , 2)\n         register_op(     \"root\", e_root    , 2)\n         register_op(   \"roundn\", e_roundn  , 2)\n         register_op(    \"equal\", e_equal   , 2)\n         register_op(\"not_equal\", e_nequal  , 2)\n         register_op(    \"hypot\", e_hypot   , 2)\n         register_op(      \"shr\", e_shr     , 2)\n         register_op(      \"shl\", e_shl     , 2)\n         register_op(    \"clamp\", e_clamp   , 3)\n         register_op(   \"iclamp\", e_iclamp  , 3)\n         register_op(  \"inrange\", e_inrange , 3)\n         #undef register_op\n      }\n\n   } // namespace details\n\n   class function_traits\n   {\n   public:\n\n      function_traits()\n      : 
allow_zero_parameters_(false),\n        has_side_effects_(true),\n        min_num_args_(0),\n        max_num_args_(std::numeric_limits<std::size_t>::max())\n      {}\n\n      inline bool& allow_zero_parameters()\n      {\n         return allow_zero_parameters_;\n      }\n\n      inline bool& has_side_effects()\n      {\n         return has_side_effects_;\n      }\n\n      std::size_t& min_num_args()\n      {\n         return min_num_args_;\n      }\n\n      std::size_t& max_num_args()\n      {\n         return max_num_args_;\n      }\n\n   private:\n\n      bool allow_zero_parameters_;\n      bool has_side_effects_;\n      std::size_t min_num_args_;\n      std::size_t max_num_args_;\n   };\n\n   template <typename FunctionType>\n   void enable_zero_parameters(FunctionType& func)\n   {\n      func.allow_zero_parameters() = true;\n\n      if (0 != func.min_num_args())\n      {\n         func.min_num_args() = 0;\n      }\n   }\n\n   template <typename FunctionType>\n   void disable_zero_parameters(FunctionType& func)\n   {\n      func.allow_zero_parameters() = false;\n   }\n\n   template <typename FunctionType>\n   void enable_has_side_effects(FunctionType& func)\n   {\n      func.has_side_effects() = true;\n   }\n\n   template <typename FunctionType>\n   void disable_has_side_effects(FunctionType& func)\n   {\n      func.has_side_effects() = false;\n   }\n\n   template <typename FunctionType>\n   void set_min_num_args(FunctionType& func, const std::size_t& num_args)\n   {\n      func.min_num_args() = num_args;\n\n      if ((0 != func.min_num_args()) && func.allow_zero_parameters())\n         func.allow_zero_parameters() = false;\n   }\n\n   template <typename FunctionType>\n   void set_max_num_args(FunctionType& func, const std::size_t& num_args)\n   {\n      func.max_num_args() = num_args;\n   }\n\n   template <typename T>\n   class ifunction : public function_traits\n   {\n   public:\n\n      explicit ifunction(const std::size_t& pc)\n      : param_count(pc)\n      
{}\n\n      virtual ~ifunction()\n      {}\n\n      #define empty_method_body                      \\\n      {                                              \\\n         return std::numeric_limits<T>::quiet_NaN(); \\\n      }                                              \\\n\n      inline virtual T operator() ()\n      empty_method_body\n\n      inline virtual T operator() (const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&,\n                                   const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&,\n                                   const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, 
const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&,\n                                   const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&,\n                                   const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&,\n                                   const T&, const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&,\n                                   const T&, const T&, const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&,\n                                   const T&, const T&, const T&, const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&,\n                                   const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&,\n                                   const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      inline virtual T operator() (const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&, const T&,\n                                   const T&, const T&, const T&, const T&, const T&, const T&, 
const T&, const T&, const T&, const T&)\n      empty_method_body\n\n      #undef empty_method_body\n\n      std::size_t param_count;\n   };\n\n   template <typename T>\n   class ivararg_function : public function_traits\n   {\n   public:\n\n      virtual ~ivararg_function()\n      {}\n\n      inline virtual T operator() (const std::vector<T>&)\n      {\n         exprtk_debug((\"ivararg_function::operator() - Operator has not been overridden.\\n\"));\n         return std::numeric_limits<T>::quiet_NaN();\n      }\n   };\n\n   template <typename T>\n   class igeneric_function : public function_traits\n   {\n   public:\n\n      enum return_type\n      {\n         e_rtrn_scalar = 0,\n         e_rtrn_string = 1\n      };\n\n      typedef T type;\n      typedef type_store<T> generic_type;\n      typedef typename generic_type::parameter_list parameter_list_t;\n\n      igeneric_function(const std::string& param_seq = \"\", const return_type rtr_type = e_rtrn_scalar)\n      : parameter_sequence(param_seq),\n        rtrn_type(rtr_type)\n      {}\n\n      virtual ~igeneric_function()\n      {}\n\n      #define igeneric_function_empty_body(N)        \\\n      {                                              \\\n         exprtk_debug((\"igeneric_function::operator() - Operator has not been overridden. 
[\"#N\"]\\n\")); \\\n         return std::numeric_limits<T>::quiet_NaN(); \\\n      }                                              \\\n\n      // f(i_0,i_1,....,i_N) --> Scalar\n      inline virtual T operator() (parameter_list_t)\n      igeneric_function_empty_body(1)\n\n      // f(i_0,i_1,....,i_N) --> String\n      inline virtual T operator() (std::string&, parameter_list_t)\n      igeneric_function_empty_body(2)\n\n      // f(psi,i_0,i_1,....,i_N) --> Scalar\n      inline virtual T operator() (const std::size_t&, parameter_list_t)\n      igeneric_function_empty_body(3)\n\n      // f(psi,i_0,i_1,....,i_N) --> String\n      inline virtual T operator() (const std::size_t&, std::string&, parameter_list_t)\n      igeneric_function_empty_body(4)\n\n      std::string parameter_sequence;\n      return_type rtrn_type;\n   };\n\n   template <typename T> class parser;\n   template <typename T> class expression_helper;\n\n   template <typename T>\n   class symbol_table\n   {\n   public:\n\n      typedef T (*ff00_functor)();\n      typedef T (*ff01_functor)(T);\n      typedef T (*ff02_functor)(T,T);\n      typedef T (*ff03_functor)(T,T,T);\n      typedef T (*ff04_functor)(T,T,T,T);\n      typedef T (*ff05_functor)(T,T,T,T,T);\n      typedef T (*ff06_functor)(T,T,T,T,T,T);\n      typedef T (*ff07_functor)(T,T,T,T,T,T,T);\n      typedef T (*ff08_functor)(T,T,T,T,T,T,T,T);\n      typedef T (*ff09_functor)(T,T,T,T,T,T,T,T,T);\n      typedef T (*ff10_functor)(T,T,T,T,T,T,T,T,T,T);\n      typedef T (*ff11_functor)(T,T,T,T,T,T,T,T,T,T,T);\n      typedef T (*ff12_functor)(T,T,T,T,T,T,T,T,T,T,T,T);\n      typedef T (*ff13_functor)(T,T,T,T,T,T,T,T,T,T,T,T,T);\n      typedef T (*ff14_functor)(T,T,T,T,T,T,T,T,T,T,T,T,T,T);\n      typedef T (*ff15_functor)(T,T,T,T,T,T,T,T,T,T,T,T,T,T,T);\n\n   protected:\n\n      struct freefunc00 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc00(ff00_functor ff) : exprtk::ifunction<T>(0), f(ff) {}\n         inline T operator() ()\n         { return f(); }\n         ff00_functor f;\n      };\n\n      struct freefunc01 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc01(ff01_functor ff) : exprtk::ifunction<T>(1), f(ff) {}\n         inline T operator() (const T& v0)\n         { return f(v0); }\n         ff01_functor f;\n      };\n\n      struct freefunc02 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc02(ff02_functor ff) : exprtk::ifunction<T>(2), f(ff) {}\n         inline T operator() (const T& v0, const T& v1)\n         { return f(v0, v1); }\n         ff02_functor f;\n      };\n\n      struct freefunc03 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc03(ff03_functor ff) : exprtk::ifunction<T>(3), f(ff) {}\n         inline T operator() (const T& v0, const T& v1, const T& v2)\n         { return f(v0, v1, v2); }\n         ff03_functor f;\n      };\n\n      struct freefunc04 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc04(ff04_functor ff) : exprtk::ifunction<T>(4), f(ff) {}\n         inline T operator() (const T& v0, const T& v1, const T& v2, const T& v3)\n         { return f(v0, v1, v2, v3); }\n         ff04_functor f;\n      };\n\n      struct freefunc05 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc05(ff05_functor ff) : exprtk::ifunction<T>(5), f(ff) {}\n         inline T operator() (const T& v0, const T& v1, const T& v2, const T& v3, const T& v4)\n         { return f(v0, v1, v2, v3, v4); }\n         ff05_functor f;\n      };\n\n      struct freefunc06 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc06(ff06_functor ff) : exprtk::ifunction<T>(6), f(ff) {}\n    
     inline T operator() (const T& v0, const T& v1, const T& v2, const T& v3, const T& v4, const T& v5)\n         { return f(v0, v1, v2, v3, v4, v5); }\n         ff06_functor f;\n      };\n\n      struct freefunc07 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc07(ff07_functor ff) : exprtk::ifunction<T>(7), f(ff) {}\n         inline T operator() (const T& v0, const T& v1, const T& v2, const T& v3, const T& v4,\n                              const T& v5, const T& v6)\n         { return f(v0, v1, v2, v3, v4, v5, v6); }\n         ff07_functor f;\n      };\n\n      struct freefunc08 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc08(ff08_functor ff) : exprtk::ifunction<T>(8), f(ff) {}\n         inline T operator() (const T& v0, const T& v1, const T& v2, const T& v3, const T& v4,\n                              const T& v5, const T& v6, const T& v7)\n         { return f(v0, v1, v2, v3, v4, v5, v6, v7); }\n         ff08_functor f;\n      };\n\n      struct freefunc09 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc09(ff09_functor ff) : exprtk::ifunction<T>(9), f(ff) {}\n         inline T operator() (const T& v0, const T& v1, const T& v2, const T& v3, const T& v4,\n                              const T& v5, const T& v6, const T& v7, const T& v8)\n         { return f(v0, v1, v2, v3, v4, v5, v6, v7, v8); }\n         ff09_functor f;\n      };\n\n      struct freefunc10 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc10(ff10_functor ff) : exprtk::ifunction<T>(10), f(ff) {}\n         inline T operator() (const T& v0, const T& v1, const T& v2, const T& v3, const T& v4,\n                              const T& v5, const T& v6, const T& v7, const T& v8, const T& v9)\n         { return f(v0, v1, v2, v3, v4, v5, v6, v7, v8, v9); }\n         
ff10_functor f;\n      };\n\n      struct freefunc11 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc11(ff11_functor ff) : exprtk::ifunction<T>(11), f(ff) {}\n         inline T operator() (const T& v0, const T& v1, const T& v2, const T& v3, const T& v4,\n                              const T& v5, const T& v6, const T& v7, const T& v8, const T& v9, const T& v10)\n         { return f(v0, v1, v2, v3, v4, v5, v6, v7, v8, v9, v10); }\n         ff11_functor f;\n      };\n\n      struct freefunc12 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc12(ff12_functor ff) : exprtk::ifunction<T>(12), f(ff) {}\n         inline T operator() (const T& v00, const T& v01, const T& v02, const T& v03, const T& v04,\n                              const T& v05, const T& v06, const T& v07, const T& v08, const T& v09,\n                              const T& v10, const T& v11)\n         { return f(v00, v01, v02, v03, v04, v05, v06, v07, v08, v09, v10, v11); }\n         ff12_functor f;\n      };\n\n      struct freefunc13 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc13(ff13_functor ff) : exprtk::ifunction<T>(13), f(ff) {}\n         inline T operator() (const T& v00, const T& v01, const T& v02, const T& v03, const T& v04,\n                              const T& v05, const T& v06, const T& v07, const T& v08, const T& v09,\n                              const T& v10, const T& v11, const T& v12)\n         { return f(v00, v01, v02, v03, v04, v05, v06, v07, v08, v09, v10, v11, v12); }\n         ff13_functor f;\n      };\n\n      struct freefunc14 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc14(ff14_functor ff) : exprtk::ifunction<T>(14), f(ff) {}\n         inline T operator() (const T& v00, const T& v01, const T& v02, const T& v03, const T& v04,\n        
                      const T& v05, const T& v06, const T& v07, const T& v08, const T& v09,\n                              const T& v10, const T& v11, const T& v12, const T& v13)\n         { return f(v00, v01, v02, v03, v04, v05, v06, v07, v08, v09, v10, v11, v12, v13); }\n         ff14_functor f;\n      };\n\n      struct freefunc15 : public exprtk::ifunction<T>\n      {\n         using exprtk::ifunction<T>::operator();\n\n         freefunc15(ff15_functor ff) : exprtk::ifunction<T>(15), f(ff) {}\n         inline T operator() (const T& v00, const T& v01, const T& v02, const T& v03, const T& v04,\n                              const T& v05, const T& v06, const T& v07, const T& v08, const T& v09,\n                              const T& v10, const T& v11, const T& v12, const T& v13, const T& v14)\n         { return f(v00, v01, v02, v03, v04, v05, v06, v07, v08, v09, v10, v11, v12, v13, v14); }\n         ff15_functor f;\n      };\n\n      template <typename Type, typename RawType>\n      struct type_store\n      {\n         typedef details::expression_node<T>*        expression_ptr;\n         typedef typename details::variable_node<T>  variable_node_t;\n         typedef ifunction<T>                        ifunction_t;\n         typedef ivararg_function<T>                 ivararg_function_t;\n         typedef igeneric_function<T>                igeneric_function_t;\n         typedef details::vector_holder<T>           vector_t;\n         #ifndef exprtk_disable_string_capabilities\n         typedef typename details::stringvar_node<T> stringvar_node_t;\n         #endif\n\n         typedef Type type_t;\n         typedef type_t* type_ptr;\n         typedef std::pair<bool,type_ptr> type_pair_t;\n         typedef std::map<std::string,type_pair_t,details::ilesscompare> type_map_t;\n         typedef typename type_map_t::iterator tm_itr_t;\n         typedef typename type_map_t::const_iterator tm_const_itr_t;\n\n         enum { lut_size = 256 };\n\n         type_map_t  map;\n     
    std::size_t size;\n\n         type_store()\n         : size(0)\n         {}\n\n         inline bool symbol_exists(const std::string& symbol_name) const\n         {\n            if (symbol_name.empty())\n               return false;\n            else if (map.end() != map.find(symbol_name))\n               return true;\n            else\n               return false;\n         }\n\n         template <typename PtrType>\n         inline std::string entity_name(const PtrType& ptr) const\n         {\n            if (map.empty())\n               return std::string();\n\n            tm_const_itr_t itr = map.begin();\n\n            while (map.end() != itr)\n            {\n               if (itr->second.second == ptr)\n               {\n                  return itr->first;\n               }\n               else\n                  ++itr;\n            }\n\n            return std::string();\n         }\n\n         inline bool is_constant(const std::string& symbol_name) const\n         {\n            if (symbol_name.empty())\n               return false;\n            else\n            {\n               const tm_const_itr_t itr = map.find(symbol_name);\n\n               if (map.end() == itr)\n                  return false;\n               else\n                  return (*itr).second.first;\n            }\n         }\n\n         template <typename Tie, typename RType>\n         inline bool add_impl(const std::string& symbol_name, RType t, const bool is_const)\n         {\n            if (symbol_name.size() > 1)\n            {\n               for (std::size_t i = 0; i < details::reserved_symbols_size; ++i)\n               {\n                  if (details::imatch(symbol_name, details::reserved_symbols[i]))\n                  {\n                     return false;\n                  }\n               }\n            }\n\n            const tm_itr_t itr = map.find(symbol_name);\n\n            if (map.end() == itr)\n            {\n               map[symbol_name] = 
Tie::make(t,is_const);\n               ++size;\n            }\n\n            return true;\n         }\n\n         struct tie_array\n         {\n            static inline std::pair<bool,vector_t*> make(std::pair<T*,std::size_t> v, const bool is_const = false)\n            {\n               return std::make_pair(is_const, new vector_t(v.first, v.second));\n            }\n         };\n\n         struct tie_stdvec\n         {\n            template <typename Allocator>\n            static inline std::pair<bool,vector_t*> make(std::vector<T,Allocator>& v, const bool is_const = false)\n            {\n               return std::make_pair(is_const, new vector_t(v));\n            }\n         };\n\n         struct tie_vecview\n         {\n            static inline std::pair<bool,vector_t*> make(exprtk::vector_view<T>& v, const bool is_const = false)\n            {\n               return std::make_pair(is_const, new vector_t(v));\n            }\n         };\n\n         struct tie_stddeq\n         {\n            template <typename Allocator>\n            static inline std::pair<bool,vector_t*> make(std::deque<T,Allocator>& v, const bool is_const = false)\n            {\n               return std::make_pair(is_const, new vector_t(v));\n            }\n         };\n\n         template <std::size_t v_size>\n         inline bool add(const std::string& symbol_name, T (&v)[v_size], const bool is_const = false)\n         {\n            return add_impl<tie_array,std::pair<T*,std::size_t> >\n                      (symbol_name, std::make_pair(v,v_size), is_const);\n         }\n\n         inline bool add(const std::string& symbol_name, T* v, const std::size_t v_size, const bool is_const = false)\n         {\n            return add_impl<tie_array,std::pair<T*,std::size_t> >\n                     (symbol_name, std::make_pair(v,v_size), is_const);\n         }\n\n         template <typename Allocator>\n         inline bool add(const std::string& symbol_name, std::vector<T,Allocator>& v, const 
bool is_const = false)\n         {\n            return add_impl<tie_stdvec,std::vector<T,Allocator>&>\n                      (symbol_name, v, is_const);\n         }\n\n         inline bool add(const std::string& symbol_name, exprtk::vector_view<T>& v, const bool is_const = false)\n         {\n            return add_impl<tie_vecview,exprtk::vector_view<T>&>\n                      (symbol_name, v, is_const);\n         }\n\n         template <typename Allocator>\n         inline bool add(const std::string& symbol_name, std::deque<T,Allocator>& v, const bool is_const = false)\n         {\n            return add_impl<tie_stddeq,std::deque<T,Allocator>&>\n                      (symbol_name, v, is_const);\n         }\n\n         inline bool add(const std::string& symbol_name, RawType& t, const bool is_const = false)\n         {\n            struct tie\n            {\n               static inline std::pair<bool,variable_node_t*> make(T& t,const bool is_const = false)\n               {\n                  return std::make_pair(is_const, new variable_node_t(t));\n               }\n\n               #ifndef exprtk_disable_string_capabilities\n               static inline std::pair<bool,stringvar_node_t*> make(std::string& t,const bool is_const = false)\n               {\n                  return std::make_pair(is_const, new stringvar_node_t(t));\n               }\n               #endif\n\n               static inline std::pair<bool,function_t*> make(function_t& t, const bool is_constant = false)\n               {\n                  return std::make_pair(is_constant,&t);\n               }\n\n               static inline std::pair<bool,vararg_function_t*> make(vararg_function_t& t, const bool is_const = false)\n               {\n                  return std::make_pair(is_const,&t);\n               }\n\n               static inline std::pair<bool,generic_function_t*> make(generic_function_t& t, const bool is_constant = false)\n               {\n                  return 
std::make_pair(is_constant,&t);\n               }\n            };\n\n            const tm_itr_t itr = map.find(symbol_name);\n\n            if (map.end() == itr)\n            {\n               map[symbol_name] = tie::make(t,is_const);\n               ++size;\n            }\n\n            return true;\n         }\n\n         inline type_ptr get(const std::string& symbol_name) const\n         {\n            const tm_const_itr_t itr = map.find(symbol_name);\n\n            if (map.end() == itr)\n               return reinterpret_cast<type_ptr>(0);\n            else\n               return itr->second.second;\n         }\n\n         template <typename TType, typename TRawType, typename PtrType>\n         struct ptr_match\n         {\n            static inline bool test(const PtrType, const void*)\n            {\n               return false;\n            }\n         };\n\n         template <typename TType, typename TRawType>\n         struct ptr_match<TType,TRawType,variable_node_t*>\n         {\n            static inline bool test(const variable_node_t* p, const void* ptr)\n            {\n               exprtk_debug((\"ptr_match::test() - %p <--> %p\\n\",(void*)(&(p->ref())),ptr));\n               return (&(p->ref()) == ptr);\n            }\n         };\n\n         inline type_ptr get_from_varptr(const void* ptr) const\n         {\n            tm_const_itr_t itr = map.begin();\n\n            while (map.end() != itr)\n            {\n               type_ptr ret_ptr = itr->second.second;\n\n               if (ptr_match<Type,RawType,type_ptr>::test(ret_ptr,ptr))\n               {\n                  return ret_ptr;\n               }\n\n               ++itr;\n            }\n\n            return type_ptr(0);\n         }\n\n         inline bool remove(const std::string& symbol_name, const bool delete_node = true)\n         {\n            const tm_itr_t itr = map.find(symbol_name);\n\n            if (map.end() != itr)\n            {\n               struct deleter\n               
{\n                  static inline void process(std::pair<bool,variable_node_t*>& n)  { delete n.second; }\n                  static inline void process(std::pair<bool,vector_t*>& n)         { delete n.second; }\n                  #ifndef exprtk_disable_string_capabilities\n                  static inline void process(std::pair<bool,stringvar_node_t*>& n) { delete n.second; }\n                  #endif\n                  static inline void process(std::pair<bool,function_t*>&)         {                  }\n               };\n\n               if (delete_node)\n               {\n                  deleter::process((*itr).second);\n               }\n\n               map.erase(itr);\n               --size;\n\n               return true;\n            }\n            else\n               return false;\n         }\n\n         inline RawType& type_ref(const std::string& symbol_name)\n         {\n            struct init_type\n            {\n               static inline double set(double)           { return (0.0);           }\n               static inline double set(long double)      { return (0.0);           }\n               static inline float  set(float)            { return (0.0f);          }\n               static inline std::string set(std::string) { return std::string(\"\"); }\n            };\n\n            static RawType null_type = init_type::set(RawType());\n\n            const tm_const_itr_t itr = map.find(symbol_name);\n\n            if (map.end() == itr)\n               return null_type;\n            else\n               return itr->second.second->ref();\n         }\n\n         inline void clear(const bool delete_node = true)\n         {\n            struct deleter\n            {\n               static inline void process(std::pair<bool,variable_node_t*>& n)  { delete n.second; }\n               static inline void process(std::pair<bool,vector_t*>& n)         { delete n.second; }\n               static inline void process(std::pair<bool,function_t*>&)         {     
             }\n               #ifndef exprtk_disable_string_capabilities\n               static inline void process(std::pair<bool,stringvar_node_t*>& n) { delete n.second; }\n               #endif\n            };\n\n            if (!map.empty())\n            {\n               if (delete_node)\n               {\n                  tm_itr_t itr = map.begin();\n                  tm_itr_t end = map.end  ();\n\n                  while (end != itr)\n                  {\n                     deleter::process((*itr).second);\n                     ++itr;\n                  }\n               }\n\n               map.clear();\n            }\n\n            size = 0;\n         }\n\n         template <typename Allocator,\n                   template <typename, typename> class Sequence>\n         inline std::size_t get_list(Sequence<std::pair<std::string,RawType>,Allocator>& list) const\n         {\n            std::size_t count = 0;\n\n            if (!map.empty())\n            {\n               tm_const_itr_t itr = map.begin();\n               tm_const_itr_t end = map.end  ();\n\n               while (end != itr)\n               {\n                  list.push_back(std::make_pair((*itr).first,itr->second.second->ref()));\n                  ++itr;\n                  ++count;\n               }\n            }\n\n            return count;\n         }\n\n         template <typename Allocator,\n                   template <typename, typename> class Sequence>\n         inline std::size_t get_list(Sequence<std::string,Allocator>& vlist) const\n         {\n            std::size_t count = 0;\n\n            if (!map.empty())\n            {\n               tm_const_itr_t itr = map.begin();\n               tm_const_itr_t end = map.end  ();\n\n               while (end != itr)\n               {\n                  vlist.push_back((*itr).first);\n                  ++itr;\n                  ++count;\n               }\n            }\n\n            return count;\n         }\n      };\n\n      
typedef details::expression_node<T>* expression_ptr;\n      typedef typename details::variable_node<T> variable_t;\n      typedef typename details::vector_holder<T> vector_holder_t;\n      typedef variable_t* variable_ptr;\n      #ifndef exprtk_disable_string_capabilities\n      typedef typename details::stringvar_node<T> stringvar_t;\n      typedef stringvar_t* stringvar_ptr;\n      #endif\n      typedef ifunction        <T> function_t;\n      typedef ivararg_function <T> vararg_function_t;\n      typedef igeneric_function<T> generic_function_t;\n      typedef function_t* function_ptr;\n      typedef vararg_function_t*  vararg_function_ptr;\n      typedef generic_function_t* generic_function_ptr;\n\n      static const std::size_t lut_size = 256;\n\n      // Symbol Table Holder\n      struct control_block\n      {\n         struct st_data\n         {\n            type_store<typename details::variable_node<T>,T> variable_store;\n            #ifndef exprtk_disable_string_capabilities\n            type_store<typename details::stringvar_node<T>,std::string> stringvar_store;\n            #endif\n            type_store<ifunction<T>,ifunction<T> >                 function_store;\n            type_store<ivararg_function <T>,ivararg_function <T> > vararg_function_store;\n            type_store<igeneric_function<T>,igeneric_function<T> > generic_function_store;\n            type_store<igeneric_function<T>,igeneric_function<T> > string_function_store;\n            type_store<vector_holder_t,vector_holder_t>            vector_store;\n\n            st_data()\n            {\n               for (std::size_t i = 0; i < details::reserved_words_size; ++i)\n               {\n                  reserved_symbol_table_.insert(details::reserved_words[i]);\n               }\n\n               for (std::size_t i = 0; i < details::reserved_symbols_size; ++i)\n               {\n                  reserved_symbol_table_.insert(details::reserved_symbols[i]);\n               }\n            }\n\n   
        ~st_data()\n            {\n               for (std::size_t i = 0; i < free_function_list_.size(); ++i)\n               {\n                  delete free_function_list_[i];\n               }\n            }\n\n            inline bool is_reserved_symbol(const std::string& symbol) const\n            {\n               return (reserved_symbol_table_.end() != reserved_symbol_table_.find(symbol));\n            }\n\n            static inline st_data* create()\n            {\n               return (new st_data);\n            }\n\n            static inline void destroy(st_data*& sd)\n            {\n               delete sd;\n               sd = reinterpret_cast<st_data*>(0);\n            }\n\n            std::list<T>               local_symbol_list_;\n            std::list<std::string>     local_stringvar_list_;\n            std::set<std::string>      reserved_symbol_table_;\n            std::vector<ifunction<T>*> free_function_list_;\n         };\n\n         control_block()\n         : ref_count(1),\n           data_(st_data::create())\n         {}\n\n         control_block(st_data* data)\n         : ref_count(1),\n           data_(data)\n         {}\n\n        ~control_block()\n         {\n            if (data_ && (0 == ref_count))\n            {\n               st_data::destroy(data_);\n            }\n         }\n\n         static inline control_block* create()\n         {\n            return (new control_block);\n         }\n\n         template <typename SymTab>\n         static inline void destroy(control_block*& cntrl_blck, SymTab* sym_tab)\n         {\n            if (cntrl_blck)\n            {\n               if (\n                    (0 !=   cntrl_blck->ref_count) &&\n                    (0 == --cntrl_blck->ref_count)\n                  )\n               {\n                  if (sym_tab)\n                     sym_tab->clear();\n\n                  delete cntrl_blck;\n               }\n\n               cntrl_blck = 0;\n            }\n         }\n\n         
std::size_t ref_count;\n         st_data* data_;\n      };\n\n   public:\n\n      symbol_table()\n      : control_block_(control_block::create())\n      {\n         clear();\n      }\n\n     ~symbol_table()\n      {\n         control_block::destroy(control_block_,this);\n      }\n\n      symbol_table(const symbol_table<T>& st)\n      {\n         control_block_ = st.control_block_;\n         control_block_->ref_count++;\n      }\n\n      inline symbol_table<T>& operator=(const symbol_table<T>& st)\n      {\n         if (this != &st)\n         {\n            control_block::destroy(control_block_,reinterpret_cast<symbol_table<T>*>(0));\n\n            control_block_ = st.control_block_;\n            control_block_->ref_count++;\n         }\n\n         return (*this);\n      }\n\n      inline bool operator==(const symbol_table<T>& st)\n      {\n         return (this == &st) || (control_block_ == st.control_block_);\n      }\n\n      inline void clear_variables(const bool delete_node = true)\n      {\n         local_data().variable_store.clear(delete_node);\n      }\n\n      inline void clear_functions()\n      {\n         local_data().function_store.clear();\n      }\n\n      inline void clear_strings()\n      {\n         #ifndef exprtk_disable_string_capabilities\n         local_data().stringvar_store.clear();\n         #endif\n      }\n\n      inline void clear_vectors()\n      {\n         local_data().vector_store.clear();\n      }\n\n      inline void clear_local_constants()\n      {\n         local_data().local_symbol_list_.clear();\n      }\n\n      inline void clear()\n      {\n         if (!valid()) return;\n         clear_variables      ();\n         clear_functions      ();\n         clear_strings        ();\n         clear_vectors        ();\n         clear_local_constants();\n      }\n\n      inline std::size_t variable_count() const\n      {\n         if (valid())\n            return local_data().variable_store.size;\n         else\n            return 0;\n  
    }\n\n      #ifndef exprtk_disable_string_capabilities\n      inline std::size_t stringvar_count() const\n      {\n         if (valid())\n            return local_data().stringvar_store.size;\n         else\n            return 0;\n      }\n      #endif\n\n      inline std::size_t function_count() const\n      {\n         if (valid())\n            return local_data().function_store.size;\n         else\n            return 0;\n      }\n\n      inline std::size_t vector_count() const\n      {\n         if (valid())\n            return local_data().vector_store.size;\n         else\n            return 0;\n      }\n\n      inline variable_ptr get_variable(const std::string& variable_name) const\n      {\n         if (!valid())\n            return reinterpret_cast<variable_ptr>(0);\n         else if (!valid_symbol(variable_name))\n            return reinterpret_cast<variable_ptr>(0);\n         else\n            return local_data().variable_store.get(variable_name);\n      }\n\n      inline variable_ptr get_variable(const T& var_ref) const\n      {\n         if (!valid())\n            return reinterpret_cast<variable_ptr>(0);\n         else\n            return local_data().variable_store.get_from_varptr(\n                                                  reinterpret_cast<const void*>(&var_ref));\n      }\n\n      #ifndef exprtk_disable_string_capabilities\n      inline stringvar_ptr get_stringvar(const std::string& string_name) const\n      {\n         if (!valid())\n            return reinterpret_cast<stringvar_ptr>(0);\n         else if (!valid_symbol(string_name))\n            return reinterpret_cast<stringvar_ptr>(0);\n         else\n            return local_data().stringvar_store.get(string_name);\n      }\n      #endif\n\n      inline function_ptr get_function(const std::string& function_name) const\n      {\n         if (!valid())\n            return reinterpret_cast<function_ptr>(0);\n         else if (!valid_symbol(function_name))\n            return 
reinterpret_cast<function_ptr>(0);\n         else\n            return local_data().function_store.get(function_name);\n      }\n\n      inline vararg_function_ptr get_vararg_function(const std::string& vararg_function_name) const\n      {\n         if (!valid())\n            return reinterpret_cast<vararg_function_ptr>(0);\n         else if (!valid_symbol(vararg_function_name))\n            return reinterpret_cast<vararg_function_ptr>(0);\n         else\n            return local_data().vararg_function_store.get(vararg_function_name);\n      }\n\n      inline generic_function_ptr get_generic_function(const std::string& function_name) const\n      {\n         if (!valid())\n            return reinterpret_cast<generic_function_ptr>(0);\n         else if (!valid_symbol(function_name))\n            return reinterpret_cast<generic_function_ptr>(0);\n         else\n            return local_data().generic_function_store.get(function_name);\n      }\n\n      inline generic_function_ptr get_string_function(const std::string& function_name) const\n      {\n         if (!valid())\n            return reinterpret_cast<generic_function_ptr>(0);\n         else if (!valid_symbol(function_name))\n            return reinterpret_cast<generic_function_ptr>(0);\n         else\n            return local_data().string_function_store.get(function_name);\n      }\n\n      typedef vector_holder_t* vector_holder_ptr;\n\n      inline vector_holder_ptr get_vector(const std::string& vector_name) const\n      {\n         if (!valid())\n            return reinterpret_cast<vector_holder_ptr>(0);\n         else if (!valid_symbol(vector_name))\n            return reinterpret_cast<vector_holder_ptr>(0);\n         else\n            return local_data().vector_store.get(vector_name);\n      }\n\n      inline T& variable_ref(const std::string& symbol_name)\n      {\n         static T null_var = T(0);\n         if (!valid())\n            return null_var;\n         else if (!valid_symbol(symbol_name))\n      
      return null_var;\n         else\n            return local_data().variable_store.type_ref(symbol_name);\n      }\n\n      #ifndef exprtk_disable_string_capabilities\n      inline std::string& stringvar_ref(const std::string& symbol_name)\n      {\n         static std::string null_stringvar;\n         if (!valid())\n            return null_stringvar;\n         else if (!valid_symbol(symbol_name))\n            return null_stringvar;\n         else\n            return local_data().stringvar_store.type_ref(symbol_name);\n      }\n      #endif\n\n      inline bool is_constant_node(const std::string& symbol_name) const\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(symbol_name))\n            return false;\n         else\n            return local_data().variable_store.is_constant(symbol_name);\n      }\n\n      #ifndef exprtk_disable_string_capabilities\n      inline bool is_constant_string(const std::string& symbol_name) const\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(symbol_name))\n            return false;\n         else if (!local_data().stringvar_store.symbol_exists(symbol_name))\n            return false;\n         else\n            return local_data().stringvar_store.is_constant(symbol_name);\n      }\n      #endif\n\n      inline bool create_variable(const std::string& variable_name, const T& value = T(0))\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(variable_name))\n            return false;\n         else if (symbol_exists(variable_name))\n            return false;\n\n         local_data().local_symbol_list_.push_back(value);\n         T& t = local_data().local_symbol_list_.back();\n\n         return add_variable(variable_name,t);\n      }\n\n      #ifndef exprtk_disable_string_capabilities\n      inline bool create_stringvar(const std::string& stringvar_name, const std::string& value = std::string(\"\"))\n      
{\n         if (!valid())\n            return false;\n         else if (!valid_symbol(stringvar_name))\n            return false;\n         else if (symbol_exists(stringvar_name))\n            return false;\n\n         local_data().local_stringvar_list_.push_back(value);\n         std::string& s = local_data().local_stringvar_list_.back();\n\n         return add_stringvar(stringvar_name,s);\n      }\n      #endif\n\n      inline bool add_raw_variable(const std::string& variable_name, T& t, const bool is_constant = false)\n      {\n         if (!valid())\n            return false;\n         else if (symbol_exists(variable_name))\n            return false;\n         else\n            return local_data().variable_store.add(variable_name,t,is_constant);\n      }\n\n      inline bool add_variable(const std::string& variable_name, T& t, const bool is_constant = false)\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(variable_name))\n            return false;\n         else if (symbol_exists(variable_name))\n            return false;\n         else\n            return local_data().variable_store.add(variable_name,t,is_constant);\n      }\n\n      inline bool add_constant(const std::string& constant_name, const T& value)\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(constant_name))\n            return false;\n         else if (symbol_exists(constant_name))\n            return false;\n\n         local_data().local_symbol_list_.push_back(value);\n         T& t = local_data().local_symbol_list_.back();\n\n         return add_variable(constant_name,t,true);\n      }\n\n      #ifndef exprtk_disable_string_capabilities\n      inline bool add_stringvar(const std::string& stringvar_name, std::string& s, const bool is_constant = false)\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(stringvar_name))\n            return false;\n         else if 
(symbol_exists(stringvar_name))\n            return false;\n         else\n            return local_data().stringvar_store.add(stringvar_name,s,is_constant);\n      }\n      #endif\n\n      inline bool add_function(const std::string& function_name, function_t& function)\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(function_name))\n            return false;\n         else if (symbol_exists(function_name))\n            return false;\n         else\n            return local_data().function_store.add(function_name,function);\n      }\n\n      inline bool add_function(const std::string& vararg_function_name, vararg_function_t& vararg_function)\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(vararg_function_name))\n            return false;\n         else if (symbol_exists(vararg_function_name))\n            return false;\n         else\n            return local_data().vararg_function_store.add(vararg_function_name,vararg_function);\n      }\n\n      inline bool add_function(const std::string& function_name, generic_function_t& function)\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(function_name))\n            return false;\n         else if (symbol_exists(function_name))\n            return false;\n         else if (std::string::npos != function.parameter_sequence.find_first_not_of(\"STVZ*?|\"))\n            return false;\n         else if (generic_function_t::e_rtrn_scalar == function.rtrn_type)\n            return local_data().generic_function_store.add(function_name,function);\n         else if (generic_function_t::e_rtrn_string == function.rtrn_type)\n            return local_data().string_function_store.add(function_name, function);\n         else\n            return false;\n      }\n\n      #define exprtk_define_freefunction(NN)                                                \\\n      inline bool add_function(const 
std::string& function_name, ff##NN##_functor function) \\\n      {                                                                                     \\\n         if (!valid())                                                                      \\\n         { return false; }                                                                  \\\n         if (!valid_symbol(function_name))                                                  \\\n         { return false; }                                                                  \\\n         if (symbol_exists(function_name))                                                  \\\n         { return false; }                                                                  \\\n                                                                                            \\\n         exprtk::ifunction<T>* ifunc = new freefunc##NN(function);                          \\\n                                                                                            \\\n         local_data().free_function_list_.push_back(ifunc);                                 \\\n                                                                                            \\\n         return add_function(function_name,(*local_data().free_function_list_.back()));     \\\n      }                                                                                     \\\n\n      exprtk_define_freefunction(00) exprtk_define_freefunction(01)\n      exprtk_define_freefunction(02) exprtk_define_freefunction(03)\n      exprtk_define_freefunction(04) exprtk_define_freefunction(05)\n      exprtk_define_freefunction(06) exprtk_define_freefunction(07)\n      exprtk_define_freefunction(08) exprtk_define_freefunction(09)\n      exprtk_define_freefunction(10) exprtk_define_freefunction(11)\n      exprtk_define_freefunction(12) exprtk_define_freefunction(13)\n      exprtk_define_freefunction(14) exprtk_define_freefunction(15)\n\n      #undef 
exprtk_define_freefunction\n\n      inline bool add_reserved_function(const std::string& function_name, function_t& function)\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(function_name,false))\n            return false;\n         else if (symbol_exists(function_name,false))\n            return false;\n         else\n            return local_data().function_store.add(function_name,function);\n      }\n\n      inline bool add_reserved_function(const std::string& vararg_function_name, vararg_function_t& vararg_function)\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(vararg_function_name,false))\n            return false;\n         else if (symbol_exists(vararg_function_name,false))\n            return false;\n         else\n            return local_data().vararg_function_store.add(vararg_function_name,vararg_function);\n      }\n\n      inline bool add_reserved_function(const std::string& function_name, generic_function_t& function)\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(function_name,false))\n            return false;\n         else if (symbol_exists(function_name,false))\n            return false;\n         else if (std::string::npos != function.parameter_sequence.find_first_not_of(\"STV*?|\"))\n            return false;\n         else if (generic_function_t::e_rtrn_scalar == function.rtrn_type)\n            return local_data().generic_function_store.add(function_name,function);\n         else if (generic_function_t::e_rtrn_string == function.rtrn_type)\n            return local_data().string_function_store.add(function_name, function);\n         else\n            return false;\n      }\n\n      template <std::size_t N>\n      inline bool add_vector(const std::string& vector_name, T (&v)[N])\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(vector_name))\n            return 
false;\n         else if (symbol_exists(vector_name))\n            return false;\n         else\n            return local_data().vector_store.add(vector_name,v);\n      }\n\n      inline bool add_vector(const std::string& vector_name, T* v, const std::size_t& v_size)\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(vector_name))\n            return false;\n         else if (symbol_exists(vector_name))\n            return false;\n         else if (0 == v_size)\n            return false;\n         else\n            return local_data().vector_store.add(vector_name,v,v_size);\n      }\n\n      template <typename Allocator>\n      inline bool add_vector(const std::string& vector_name, std::vector<T,Allocator>& v)\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(vector_name))\n            return false;\n         else if (symbol_exists(vector_name))\n            return false;\n         else if (0 == v.size())\n            return false;\n         else\n            return local_data().vector_store.add(vector_name,v);\n      }\n\n      inline bool add_vector(const std::string& vector_name, exprtk::vector_view<T>& v)\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(vector_name))\n            return false;\n         else if (symbol_exists(vector_name))\n            return false;\n         else if (0 == v.size())\n            return false;\n         else\n            return local_data().vector_store.add(vector_name,v);\n      }\n\n      inline bool remove_variable(const std::string& variable_name, const bool delete_node = true)\n      {\n         if (!valid())\n            return false;\n         else\n            return local_data().variable_store.remove(variable_name, delete_node);\n      }\n\n      #ifndef exprtk_disable_string_capabilities\n      inline bool remove_stringvar(const std::string& string_name)\n      {\n         if (!valid())\n     
       return false;\n         else\n            return local_data().stringvar_store.remove(string_name);\n      }\n      #endif\n\n      inline bool remove_function(const std::string& function_name)\n      {\n         if (!valid())\n            return false;\n         else\n            return local_data().function_store.remove(function_name);\n      }\n\n      inline bool remove_vararg_function(const std::string& vararg_function_name)\n      {\n         if (!valid())\n            return false;\n         else\n            return local_data().vararg_function_store.remove(vararg_function_name);\n      }\n\n      inline bool remove_vector(const std::string& vector_name)\n      {\n         if (!valid())\n            return false;\n         else\n            return local_data().vector_store.remove(vector_name);\n      }\n\n      inline bool add_constants()\n      {\n         return add_pi      () &&\n                add_epsilon () &&\n                add_infinity() ;\n      }\n\n      inline bool add_pi()\n      {\n         const typename details::numeric::details::number_type<T>::type num_type;\n         static const T local_pi = details::numeric::details::const_pi_impl<T>(num_type);\n         return add_constant(\"pi\",local_pi);\n      }\n\n      inline bool add_epsilon()\n      {\n         static const T local_epsilon = details::numeric::details::epsilon_type<T>::value();\n         return add_constant(\"epsilon\",local_epsilon);\n      }\n\n      inline bool add_infinity()\n      {\n         static const T local_infinity = std::numeric_limits<T>::infinity();\n         return add_constant(\"inf\",local_infinity);\n      }\n\n      template <typename Package>\n      inline bool add_package(Package& package)\n      {\n         return package.register_package(*this);\n      }\n\n      template <typename Allocator,\n                template <typename, typename> class Sequence>\n      inline std::size_t get_variable_list(Sequence<std::pair<std::string,T>,Allocator>& 
vlist) const\n      {\n         if (!valid())\n            return 0;\n         else\n            return local_data().variable_store.get_list(vlist);\n      }\n\n      template <typename Allocator,\n                template <typename, typename> class Sequence>\n      inline std::size_t get_variable_list(Sequence<std::string,Allocator>& vlist) const\n      {\n         if (!valid())\n            return 0;\n         else\n            return local_data().variable_store.get_list(vlist);\n      }\n\n      #ifndef exprtk_disable_string_capabilities\n      template <typename Allocator,\n                template <typename, typename> class Sequence>\n      inline std::size_t get_stringvar_list(Sequence<std::pair<std::string,std::string>,Allocator>& svlist) const\n      {\n         if (!valid())\n            return 0;\n         else\n            return local_data().stringvar_store.get_list(svlist);\n      }\n\n      template <typename Allocator,\n                template <typename, typename> class Sequence>\n      inline std::size_t get_stringvar_list(Sequence<std::string,Allocator>& svlist) const\n      {\n         if (!valid())\n            return 0;\n         else\n            return local_data().stringvar_store.get_list(svlist);\n      }\n      #endif\n\n      template <typename Allocator,\n                template <typename, typename> class Sequence>\n      inline std::size_t get_vector_list(Sequence<std::string,Allocator>& vlist) const\n      {\n         if (!valid())\n            return 0;\n         else\n            return local_data().vector_store.get_list(vlist);\n      }\n\n      inline bool symbol_exists(const std::string& symbol_name, const bool check_reserved_symb = true) const\n      {\n         /*\n            Function will return true if symbol_name exists as either a\n            reserved symbol, variable, stringvar, vector or function name\n            in any of the type stores.\n         */\n         if (!valid())\n            return false;\n         else 
if (local_data().variable_store.symbol_exists(symbol_name))\n            return true;\n         #ifndef exprtk_disable_string_capabilities\n         else if (local_data().stringvar_store.symbol_exists(symbol_name))\n            return true;\n         #endif\n         else if (local_data().vector_store.symbol_exists(symbol_name))\n            return true;\n         else if (local_data().function_store.symbol_exists(symbol_name))\n            return true;\n         else if (check_reserved_symb && local_data().is_reserved_symbol(symbol_name))\n            return true;\n         else\n            return false;\n      }\n\n      inline bool is_variable(const std::string& variable_name) const\n      {\n         if (!valid())\n            return false;\n         else\n            return local_data().variable_store.symbol_exists(variable_name);\n      }\n\n      #ifndef exprtk_disable_string_capabilities\n      inline bool is_stringvar(const std::string& stringvar_name) const\n      {\n         if (!valid())\n            return false;\n         else\n            return local_data().stringvar_store.symbol_exists(stringvar_name);\n      }\n\n      inline bool is_conststr_stringvar(const std::string& symbol_name) const\n      {\n         if (!valid())\n            return false;\n         else if (!valid_symbol(symbol_name))\n            return false;\n         else if (!local_data().stringvar_store.symbol_exists(symbol_name))\n            return false;\n\n         return (\n                  local_data().stringvar_store.symbol_exists(symbol_name) ||\n                  local_data().stringvar_store.is_constant  (symbol_name)\n                );\n      }\n      #endif\n\n      inline bool is_function(const std::string& function_name) const\n      {\n         if (!valid())\n            return false;\n         else\n            return local_data().function_store.symbol_exists(function_name);\n      }\n\n      inline bool is_vararg_function(const std::string& vararg_function_name) 
const\n      {\n         if (!valid())\n            return false;\n         else\n            return local_data().vararg_function_store.symbol_exists(vararg_function_name);\n      }\n\n      inline bool is_vector(const std::string& vector_name) const\n      {\n         if (!valid())\n            return false;\n         else\n            return local_data().vector_store.symbol_exists(vector_name);\n      }\n\n      inline std::string get_variable_name(const expression_ptr& ptr) const\n      {\n         return local_data().variable_store.entity_name(ptr);\n      }\n\n      inline std::string get_vector_name(const vector_holder_ptr& ptr) const\n      {\n         return local_data().vector_store.entity_name(ptr);\n      }\n\n      #ifndef exprtk_disable_string_capabilities\n      inline std::string get_stringvar_name(const expression_ptr& ptr) const\n      {\n         return local_data().stringvar_store.entity_name(ptr);\n      }\n\n      inline std::string get_conststr_stringvar_name(const expression_ptr& ptr) const\n      {\n         return local_data().stringvar_store.entity_name(ptr);\n      }\n      #endif\n\n      inline bool valid() const\n      {\n         // Symbol table sanity check.\n         return control_block_ && control_block_->data_;\n      }\n\n      inline void load_from(const symbol_table<T>& st)\n      {\n         {\n            std::vector<std::string> name_list;\n\n            st.local_data().function_store.get_list(name_list);\n\n            if (!name_list.empty())\n            {\n               for (std::size_t i = 0; i < name_list.size(); ++i)\n               {\n                  exprtk::ifunction<T>& ifunc = *st.get_function(name_list[i]);\n                  add_function(name_list[i],ifunc);\n               }\n            }\n         }\n\n         {\n            std::vector<std::string> name_list;\n\n            st.local_data().vararg_function_store.get_list(name_list);\n\n            if (!name_list.empty())\n            {\n               for 
(std::size_t i = 0; i < name_list.size(); ++i)\n               {\n                  exprtk::ivararg_function<T>& ivafunc = *st.get_vararg_function(name_list[i]);\n                  add_function(name_list[i],ivafunc);\n               }\n            }\n         }\n\n         {\n            std::vector<std::string> name_list;\n\n            st.local_data().generic_function_store.get_list(name_list);\n\n            if (!name_list.empty())\n            {\n               for (std::size_t i = 0; i < name_list.size(); ++i)\n               {\n                  exprtk::igeneric_function<T>& ifunc = *st.get_generic_function(name_list[i]);\n                  add_function(name_list[i],ifunc);\n               }\n            }\n         }\n\n         {\n            std::vector<std::string> name_list;\n\n            st.local_data().string_function_store.get_list(name_list);\n\n            if (!name_list.empty())\n            {\n               for (std::size_t i = 0; i < name_list.size(); ++i)\n               {\n                  exprtk::igeneric_function<T>& ifunc = *st.get_string_function(name_list[i]);\n                  add_function(name_list[i],ifunc);\n               }\n            }\n         }\n      }\n\n   private:\n\n      inline bool valid_symbol(const std::string& symbol, const bool check_reserved_symb = true) const\n      {\n         if (symbol.empty())\n            return false;\n         else if (!details::is_letter(symbol[0]))\n            return false;\n#if !allow_escaped_symbols\n         else if (symbol.size() > 1)\n         {\n            bool escaped = false;\n            for (std::size_t i = 1; i < symbol.size(); ++i)\n            {\n               if ('\\\\' == symbol[i])\n                  escaped = true;\n               else if (escaped)\n                  escaped = false;\n               else if (\n                    !details::is_letter_or_digit(symbol[i]) &&\n                    ('_' != symbol[i])\n                  )\n               {\n                  if (('.' 
== symbol[i]) && (i < (symbol.size() - 1)))\n                     continue;\n                  else\n                     return false;\n               }\n            }\n         }\n#endif\n\n         return (check_reserved_symb) ? (!local_data().is_reserved_symbol(symbol)) : true;\n      }\n\n      inline bool valid_function(const std::string& symbol) const\n      {\n         if (symbol.empty())\n            return false;\n         else if (!details::is_letter(symbol[0]))\n            return false;\n         else if (symbol.size() > 1)\n         {\n            for (std::size_t i = 1; i < symbol.size(); ++i)\n            {\n               if (\n                    !details::is_letter_or_digit(symbol[i]) &&\n                    ('_' != symbol[i])\n                  )\n               {\n                  if (('.' == symbol[i]) && (i < (symbol.size() - 1)))\n                     continue;\n                  else\n                     return false;\n               }\n            }\n         }\n\n         return true;\n      }\n\n      typedef typename control_block::st_data local_data_t;\n\n      inline local_data_t& local_data()\n      {\n         return *(control_block_->data_);\n      }\n\n      inline const local_data_t& local_data() const\n      {\n         return *(control_block_->data_);\n      }\n\n      control_block* control_block_;\n\n      friend class parser<T>;\n   };\n\n   template <typename T>\n   class function_compositor;\n\n   template <typename T>\n   class expression\n   {\n   private:\n\n      typedef details::expression_node<T>*  expression_ptr;\n      typedef details::vector_holder<T>* vector_holder_ptr;\n      typedef std::vector<symbol_table<T> >  symtab_list_t;\n\n      struct control_block\n      {\n         enum data_type\n         {\n            e_unknown  ,\n            e_expr     ,\n            e_vecholder,\n            e_data     ,\n            e_vecdata  ,\n            e_string\n         };\n\n         struct data_pack\n         {\n    
        data_pack()\n            : pointer(0),\n              type(e_unknown),\n              size(0)\n            {}\n\n            data_pack(void* ptr, const data_type dt, const std::size_t sz = 0)\n            : pointer(ptr),\n              type(dt),\n              size(sz)\n            {}\n\n            void*       pointer;\n            data_type   type;\n            std::size_t size;\n         };\n\n         typedef std::vector<data_pack> local_data_list_t;\n         typedef results_context<T>     results_context_t;\n\n         control_block()\n         : ref_count(0),\n           expr     (0),\n           results  (0),\n           retinv_null(false),\n           return_invoked(&retinv_null)\n         {}\n\n         control_block(expression_ptr e)\n         : ref_count(1),\n           expr     (e),\n           results  (0),\n           retinv_null(false),\n           return_invoked(&retinv_null)\n         {}\n\n        ~control_block()\n         {\n            if (expr && details::branch_deletable(expr))\n            {\n               destroy_node(expr);\n            }\n\n            if (!local_data_list.empty())\n            {\n               for (std::size_t i = 0; i < local_data_list.size(); ++i)\n               {\n                  switch (local_data_list[i].type)\n                  {\n                     case e_expr      : delete reinterpret_cast<expression_ptr>(local_data_list[i].pointer);\n                                        break;\n\n                     case e_vecholder : delete reinterpret_cast<vector_holder_ptr>(local_data_list[i].pointer);\n                                        break;\n\n                     case e_data      : delete (T*)(local_data_list[i].pointer);\n                                        break;\n\n                     case e_vecdata   : delete [] (T*)(local_data_list[i].pointer);\n                                        break;\n\n                     case e_string    : delete (std::string*)(local_data_list[i].pointer);\n  
                                      break;\n\n                     default          : break;\n                  }\n               }\n            }\n\n            if (results)\n            {\n               delete results;\n            }\n         }\n\n         static inline control_block* create(expression_ptr e)\n         {\n            return new control_block(e);\n         }\n\n         static inline void destroy(control_block*& cntrl_blck)\n         {\n            if (cntrl_blck)\n            {\n               if (\n                    (0 !=   cntrl_blck->ref_count) &&\n                    (0 == --cntrl_blck->ref_count)\n                  )\n               {\n                  delete cntrl_blck;\n               }\n\n               cntrl_blck = 0;\n            }\n         }\n\n         std::size_t ref_count;\n         expression_ptr expr;\n         local_data_list_t local_data_list;\n         results_context_t* results;\n         bool  retinv_null;\n         bool* return_invoked;\n\n         friend class function_compositor<T>;\n      };\n\n   public:\n\n      expression()\n      : control_block_(0)\n      {\n         set_expression(new details::null_node<T>());\n      }\n\n      expression(const expression<T>& e)\n      : control_block_    (e.control_block_    ),\n        symbol_table_list_(e.symbol_table_list_)\n      {\n         control_block_->ref_count++;\n      }\n\n      expression(const symbol_table<T>& symbol_table)\n      : control_block_(0)\n      {\n         set_expression(new details::null_node<T>());\n         symbol_table_list_.push_back(symbol_table);\n      }\n\n      inline expression<T>& operator=(const expression<T>& e)\n      {\n         if (this != &e)\n         {\n            if (control_block_)\n            {\n               if (\n                    (0 !=   control_block_->ref_count) &&\n                    (0 == --control_block_->ref_count)\n                  )\n               {\n                  delete control_block_;\n              
 }\n\n               control_block_ = 0;\n            }\n\n            control_block_ = e.control_block_;\n            control_block_->ref_count++;\n            symbol_table_list_ = e.symbol_table_list_;\n         }\n\n         return *this;\n      }\n\n      inline bool operator==(const expression<T>& e)\n      {\n         return (this == &e);\n      }\n\n      inline bool operator!() const\n      {\n         return (\n                  (0 == control_block_      ) ||\n                  (0 == control_block_->expr)\n                );\n      }\n\n      inline expression<T>& release()\n      {\n         control_block::destroy(control_block_);\n\n         return (*this);\n      }\n\n     ~expression()\n      {\n         control_block::destroy(control_block_);\n      }\n\n      inline T value() const\n      {\n         return control_block_->expr->value();\n      }\n\n      inline T operator() () const\n      {\n         return value();\n      }\n\n      inline operator T() const\n      {\n         return value();\n      }\n\n      inline operator bool() const\n      {\n         return details::is_true(value());\n      }\n\n      inline void register_symbol_table(symbol_table<T>& st)\n      {\n         symbol_table_list_.push_back(st);\n      }\n\n      inline const symbol_table<T>& get_symbol_table(const std::size_t& index = 0) const\n      {\n         return symbol_table_list_[index];\n      }\n\n      inline symbol_table<T>& get_symbol_table(const std::size_t& index = 0)\n      {\n         return symbol_table_list_[index];\n      }\n\n      typedef results_context<T> results_context_t;\n\n      inline const results_context_t& results() const\n      {\n         if (control_block_->results)\n            return (*control_block_->results);\n         else\n         {\n            static const results_context_t null_results;\n            return null_results;\n         }\n      }\n\n      inline bool return_invoked() const\n      {\n         return 
(*control_block_->return_invoked);\n      }\n\n   private:\n\n      inline symtab_list_t get_symbol_table_list() const\n      {\n         return symbol_table_list_;\n      }\n\n      inline void set_expression(const expression_ptr expr)\n      {\n         if (expr)\n         {\n            if (control_block_)\n            {\n               if (0 == --control_block_->ref_count)\n               {\n                  delete control_block_;\n               }\n            }\n\n            control_block_ = control_block::create(expr);\n         }\n      }\n\n      inline void register_local_var(expression_ptr expr)\n      {\n         if (expr)\n         {\n            if (control_block_)\n            {\n               control_block_->\n                  local_data_list.push_back(\n                     typename expression<T>::control_block::\n                        data_pack(reinterpret_cast<void*>(expr),\n                                  control_block::e_expr));\n            }\n         }\n      }\n\n      inline void register_local_var(vector_holder_ptr vec_holder)\n      {\n         if (vec_holder)\n         {\n            if (control_block_)\n            {\n               control_block_->\n                  local_data_list.push_back(\n                     typename expression<T>::control_block::\n                        data_pack(reinterpret_cast<void*>(vec_holder),\n                                  control_block::e_vecholder));\n            }\n         }\n      }\n\n      inline void register_local_data(void* data, const std::size_t& size = 0, const std::size_t data_mode = 0)\n      {\n         if (data)\n         {\n            if (control_block_)\n            {\n               typename control_block::data_type dt = control_block::e_data;\n\n               switch (data_mode)\n               {\n                  case 0 : dt = control_block::e_data;    break;\n                  case 1 : dt = control_block::e_vecdata; break;\n                  case 2 : dt = 
control_block::e_string;  break;\n               }\n\n               control_block_->\n                  local_data_list.push_back(\n                     typename expression<T>::control_block::\n                        data_pack(reinterpret_cast<void*>(data), dt, size));\n            }\n         }\n      }\n\n      inline const typename control_block::local_data_list_t& local_data_list()\n      {\n         if (control_block_)\n         {\n            return control_block_->local_data_list;\n         }\n         else\n         {\n            static typename control_block::local_data_list_t null_local_data_list;\n            return null_local_data_list;\n         }\n      }\n\n      inline void register_return_results(results_context_t* rc)\n      {\n         if (control_block_ && rc)\n         {\n            control_block_->results = rc;\n         }\n      }\n\n      inline void set_retinvk(bool* retinvk_ptr)\n      {\n         if (control_block_)\n         {\n            control_block_->return_invoked = retinvk_ptr;\n         }\n      }\n\n      control_block* control_block_;\n      symtab_list_t      symbol_table_list_;\n\n      friend class parser<T>;\n      friend class expression_helper<T>;\n      friend class function_compositor<T>;\n   };\n\n   template <typename T>\n   class expression_helper\n   {\n   public:\n\n      static inline bool is_constant(const expression<T>& expr)\n      {\n         return details::is_constant_node(expr.control_block_->expr);\n      }\n\n      static inline bool is_variable(const expression<T>& expr)\n      {\n         return details::is_variable_node(expr.control_block_->expr);\n      }\n\n      static inline bool is_unary(const expression<T>& expr)\n      {\n         return details::is_unary_node(expr.control_block_->expr);\n      }\n\n      static inline bool is_binary(const expression<T>& expr)\n      {\n         return details::is_binary_node(expr.control_block_->expr);\n      }\n\n      static inline bool is_function(const 
expression<T>& expr)\n      {\n         return details::is_function(expr.control_block_->expr);\n      }\n\n      static inline bool is_null(const expression<T>& expr)\n      {\n         return details::is_null_node(expr.control_block_->expr);\n      }\n   };\n\n   template <typename T>\n   inline bool is_valid(const expression<T>& expr)\n   {\n      return !expression_helper<T>::is_null(expr);\n   }\n\n   namespace parser_error\n   {\n      enum error_mode\n      {\n         e_unknown = 0,\n         e_syntax  = 1,\n         e_token   = 2,\n         e_numeric = 4,\n         e_symtab  = 5,\n         e_lexer   = 6,\n         e_helper  = 7\n      };\n\n      struct type\n      {\n         type()\n         : mode(parser_error::e_unknown),\n           line_no  (0),\n           column_no(0)\n         {}\n\n         lexer::token token;\n         error_mode mode;\n         std::string diagnostic;\n         std::string src_location;\n         std::string error_line;\n         std::size_t line_no;\n         std::size_t column_no;\n      };\n\n      inline type make_error(const error_mode mode,\n                             const std::string& diagnostic   = \"\",\n                             const std::string& src_location = \"\")\n      {\n         type t;\n         t.mode         = mode;\n         t.token.type   = lexer::token::e_error;\n         t.diagnostic   = diagnostic;\n         t.src_location = src_location;\n         exprtk_debug((\"%s\\n\",diagnostic.c_str()));\n         return t;\n      }\n\n      inline type make_error(const error_mode mode,\n                             const lexer::token& tk,\n                             const std::string& diagnostic   = \"\",\n                             const std::string& src_location = \"\")\n      {\n         type t;\n         t.mode       = mode;\n         t.token      = tk;\n         t.diagnostic = diagnostic;\n         t.src_location = src_location;\n         exprtk_debug((\"%s\\n\",diagnostic.c_str()));\n         
return t;\n      }\n\n      inline std::string to_str(error_mode mode)\n      {\n         switch (mode)\n         {\n            case e_unknown : return std::string(\"Unknown Error\");\n            case e_syntax  : return std::string(\"Syntax Error\" );\n            case e_token   : return std::string(\"Token Error\"  );\n            case e_numeric : return std::string(\"Numeric Error\");\n            case e_symtab  : return std::string(\"Symbol Error\" );\n            case e_lexer   : return std::string(\"Lexer Error\"  );\n            case e_helper  : return std::string(\"Helper Error\" );\n            default        : return std::string(\"Unknown Error\");\n         }\n      }\n\n      inline bool update_error(type& error, const std::string& expression)\n      {\n         if (\n              expression.empty()                         ||\n              (error.token.position > expression.size()) ||\n              (std::numeric_limits<std::size_t>::max() == error.token.position)\n            )\n         {\n            return false;\n         }\n\n         std::size_t error_line_start = 0;\n\n         for (std::size_t i = error.token.position; i > 0; --i)\n         {\n            const details::char_t c = expression[i];\n\n            if (('\\n' == c) || ('\\r' == c))\n            {\n               error_line_start = i + 1;\n               break;\n            }\n         }\n\n         std::size_t next_nl_position = std::min(expression.size(),\n                                                 expression.find_first_of('\\n',error.token.position + 1));\n\n         error.column_no  = error.token.position - error_line_start;\n         error.error_line = expression.substr(error_line_start,\n                                              next_nl_position - error_line_start);\n\n         error.line_no = 0;\n\n         for (std::size_t i = 0; i < next_nl_position; ++i)\n         {\n            if ('\\n' == expression[i])\n               ++error.line_no;\n         }\n\n        
 return true;\n      }\n\n      inline void dump_error(const type& error)\n      {\n         printf(\"Position: %02d   Type: [%s]   Msg: %s\\n\",\n                static_cast<int>(error.token.position),\n                exprtk::parser_error::to_str(error.mode).c_str(),\n                error.diagnostic.c_str());\n      }\n   }\n\n   namespace details\n   {\n      template <typename Parser>\n      inline void disable_type_checking(Parser& p)\n      {\n         p.state_.type_check_enabled = false;\n      }\n   }\n\n   template <typename T>\n   class parser : public lexer::parser_helper\n   {\n   private:\n\n      enum precedence_level\n      {\n         e_level00,\n         e_level01,\n         e_level02,\n         e_level03,\n         e_level04,\n         e_level05,\n         e_level06,\n         e_level07,\n         e_level08,\n         e_level09,\n         e_level10,\n         e_level11,\n         e_level12,\n         e_level13,\n         e_level14\n      };\n\n      typedef const T&                                               cref_t;\n      typedef const T                                               const_t;\n      typedef ifunction                <T>                                F;\n      typedef ivararg_function         <T>                              VAF;\n      typedef igeneric_function        <T>                               GF;\n      typedef ifunction                <T>                      ifunction_t;\n      typedef ivararg_function         <T>               ivararg_function_t;\n      typedef igeneric_function        <T>              igeneric_function_t;\n      typedef details::expression_node <T>                expression_node_t;\n      typedef details::literal_node    <T>                   literal_node_t;\n      typedef details::unary_node      <T>                     unary_node_t;\n      typedef details::binary_node     <T>                    binary_node_t;\n      typedef details::trinary_node    <T>                   trinary_node_t;\n      
typedef details::quaternary_node <T>                quaternary_node_t;\n      typedef details::conditional_node<T>               conditional_node_t;\n      typedef details::cons_conditional_node<T>     cons_conditional_node_t;\n      typedef details::while_loop_node <T>                while_loop_node_t;\n      typedef details::repeat_until_loop_node<T>   repeat_until_loop_node_t;\n      typedef details::for_loop_node   <T>                  for_loop_node_t;\n      #ifndef exprtk_disable_break_continue\n      typedef details::while_loop_bc_node <T>          while_loop_bc_node_t;\n      typedef details::repeat_until_loop_bc_node<T> repeat_until_loop_bc_node_t;\n      typedef details::for_loop_bc_node<T>               for_loop_bc_node_t;\n      #endif\n      typedef details::switch_node     <T>                    switch_node_t;\n      typedef details::variable_node   <T>                  variable_node_t;\n      typedef details::vector_elem_node<T>               vector_elem_node_t;\n      typedef details::rebasevector_elem_node<T>   rebasevector_elem_node_t;\n      typedef details::rebasevector_celem_node<T> rebasevector_celem_node_t;\n      typedef details::vector_node     <T>                    vector_node_t;\n      typedef details::range_pack      <T>                          range_t;\n      #ifndef exprtk_disable_string_capabilities\n      typedef details::stringvar_node     <T>              stringvar_node_t;\n      typedef details::string_literal_node<T>         string_literal_node_t;\n      typedef details::string_range_node  <T>           string_range_node_t;\n      typedef details::const_string_range_node<T> const_string_range_node_t;\n      typedef details::generic_string_range_node<T> generic_string_range_node_t;\n      typedef details::string_concat_node <T>          string_concat_node_t;\n      typedef details::assignment_string_node<T>   assignment_string_node_t;\n      typedef details::assignment_string_range_node<T> assignment_string_range_node_t;\n      
typedef details::conditional_string_node<T>  conditional_string_node_t;\n      typedef details::cons_conditional_str_node<T> cons_conditional_str_node_t;\n      #endif\n      typedef details::assignment_node<T>                 assignment_node_t;\n      typedef details::assignment_vec_elem_node       <T> assignment_vec_elem_node_t;\n      typedef details::assignment_rebasevec_elem_node <T> assignment_rebasevec_elem_node_t;\n      typedef details::assignment_rebasevec_celem_node<T> assignment_rebasevec_celem_node_t;\n      typedef details::assignment_vec_node     <T>    assignment_vec_node_t;\n      typedef details::assignment_vecvec_node  <T> assignment_vecvec_node_t;\n      typedef details::scand_node<T>                           scand_node_t;\n      typedef details::scor_node<T>                             scor_node_t;\n      typedef lexer::token                                          token_t;\n      typedef expression_node_t*                        expression_node_ptr;\n      typedef expression<T>                                    expression_t;\n      typedef symbol_table<T>                                symbol_table_t;\n      typedef typename expression<T>::symtab_list_t     symbol_table_list_t;\n      typedef details::vector_holder<T>*                  vector_holder_ptr;\n\n      typedef typename details::functor_t<T>            functor_t;\n      typedef typename functor_t::qfunc_t    quaternary_functor_t;\n      typedef typename functor_t::tfunc_t       trinary_functor_t;\n      typedef typename functor_t::bfunc_t        binary_functor_t;\n      typedef typename functor_t::ufunc_t         unary_functor_t;\n\n      typedef details::operator_type operator_t;\n\n      typedef std::map<operator_t,  unary_functor_t>   unary_op_map_t;\n      typedef std::map<operator_t, binary_functor_t>  binary_op_map_t;\n      typedef std::map<operator_t,trinary_functor_t> trinary_op_map_t;\n\n      typedef std::map<std::string,std::pair<trinary_functor_t   ,operator_t> > 
sf3_map_t;\n      typedef std::map<std::string,std::pair<quaternary_functor_t,operator_t> > sf4_map_t;\n\n      typedef std::map<binary_functor_t,operator_t> inv_binary_op_map_t;\n      typedef std::multimap<std::string,details::base_operation_t,details::ilesscompare> base_ops_map_t;\n      typedef std::set<std::string,details::ilesscompare> disabled_func_set_t;\n\n      typedef details::T0oT1_define<T,  cref_t,  cref_t> vov_t;\n      typedef details::T0oT1_define<T, const_t,  cref_t> cov_t;\n      typedef details::T0oT1_define<T,  cref_t, const_t> voc_t;\n\n      typedef details::T0oT1oT2_define<T,  cref_t,  cref_t,  cref_t> vovov_t;\n      typedef details::T0oT1oT2_define<T,  cref_t,  cref_t, const_t> vovoc_t;\n      typedef details::T0oT1oT2_define<T,  cref_t, const_t,  cref_t> vocov_t;\n      typedef details::T0oT1oT2_define<T, const_t,  cref_t,  cref_t> covov_t;\n      typedef details::T0oT1oT2_define<T, const_t,  cref_t, const_t> covoc_t;\n      typedef details::T0oT1oT2_define<T, const_t, const_t,  cref_t> cocov_t;\n      typedef details::T0oT1oT2_define<T,  cref_t, const_t, const_t> vococ_t;\n\n      typedef details::T0oT1oT2oT3_define<T,  cref_t,  cref_t,  cref_t,  cref_t> vovovov_t;\n      typedef details::T0oT1oT2oT3_define<T,  cref_t,  cref_t,  cref_t, const_t> vovovoc_t;\n      typedef details::T0oT1oT2oT3_define<T,  cref_t,  cref_t, const_t,  cref_t> vovocov_t;\n      typedef details::T0oT1oT2oT3_define<T,  cref_t, const_t,  cref_t,  cref_t> vocovov_t;\n      typedef details::T0oT1oT2oT3_define<T, const_t,  cref_t,  cref_t,  cref_t> covovov_t;\n\n      typedef details::T0oT1oT2oT3_define<T, const_t,  cref_t, const_t,  cref_t> covocov_t;\n      typedef details::T0oT1oT2oT3_define<T,  cref_t, const_t,  cref_t, const_t> vocovoc_t;\n      typedef details::T0oT1oT2oT3_define<T, const_t,  cref_t,  cref_t, const_t> covovoc_t;\n      typedef details::T0oT1oT2oT3_define<T,  cref_t, const_t, const_t,  cref_t> vococov_t;\n\n      typedef results_context<T> 
results_context_t;\n\n      typedef parser_helper prsrhlpr_t;\n\n      struct scope_element\n      {\n         enum element_type\n         {\n            e_none    ,\n            e_variable,\n            e_vector  ,\n            e_vecelem ,\n            e_string\n         };\n\n         typedef details::vector_holder<T> vector_holder_t;\n         typedef variable_node_t*        variable_node_ptr;\n         typedef vector_holder_t*        vector_holder_ptr;\n         typedef expression_node_t*    expression_node_ptr;\n         #ifndef exprtk_disable_string_capabilities\n         typedef stringvar_node_t*      stringvar_node_ptr;\n         #endif\n\n         scope_element()\n         : name(\"???\"),\n           size (std::numeric_limits<std::size_t>::max()),\n           index(std::numeric_limits<std::size_t>::max()),\n           depth(std::numeric_limits<std::size_t>::max()),\n           ref_count(0),\n           ip_index (0),\n           type (e_none),\n           active(false),\n           data    (0),\n           var_node(0),\n           vec_node(0)\n           #ifndef exprtk_disable_string_capabilities\n           ,str_node(0)\n           #endif\n         {}\n\n         bool operator < (const scope_element& se) const\n         {\n            if (ip_index < se.ip_index)\n               return true;\n            else if (ip_index > se.ip_index)\n               return false;\n            else if (depth < se.depth)\n               return true;\n            else if (depth > se.depth)\n               return false;\n            else if (index < se.index)\n               return true;\n            else if (index > se.index)\n               return false;\n            else\n               return (name < se.name);\n         }\n\n         void clear()\n         {\n            name   = \"???\";\n            size   = std::numeric_limits<std::size_t>::max();\n            index  = std::numeric_limits<std::size_t>::max();\n            depth  = 
std::numeric_limits<std::size_t>::max();\n            type   = e_none;\n            active = false;\n            ref_count = 0;\n            ip_index  = 0;\n            data      = 0;\n            var_node  = 0;\n            vec_node  = 0;\n            #ifndef exprtk_disable_string_capabilities\n            str_node  = 0;\n            #endif\n         }\n\n         std::string  name;\n         std::size_t  size;\n         std::size_t  index;\n         std::size_t  depth;\n         std::size_t  ref_count;\n         std::size_t  ip_index;\n         element_type type;\n         bool         active;\n         void*        data;\n         expression_node_ptr var_node;\n         vector_holder_ptr   vec_node;\n         #ifndef exprtk_disable_string_capabilities\n         stringvar_node_ptr str_node;\n         #endif\n      };\n\n      class scope_element_manager\n      {\n      public:\n\n         typedef expression_node_t* expression_node_ptr;\n         typedef variable_node_t*     variable_node_ptr;\n         typedef parser<T>                     parser_t;\n\n         scope_element_manager(parser<T>& p)\n         : parser_(p),\n           input_param_cnt_(0)\n         {}\n\n         inline std::size_t size() const\n         {\n            return element_.size();\n         }\n\n         inline bool empty() const\n         {\n            return element_.empty();\n         }\n\n         inline scope_element& get_element(const std::size_t& index)\n         {\n            if (index < element_.size())\n               return element_[index];\n            else\n               return null_element_;\n         }\n\n         inline scope_element& get_element(const std::string& var_name,\n                                           const std::size_t index = std::numeric_limits<std::size_t>::max())\n         {\n            const std::size_t current_depth = parser_.state_.scope_depth;\n\n            for (std::size_t i = 0; i < element_.size(); ++i)\n            {\n               
scope_element& se = element_[i];\n\n               if (se.depth > current_depth)\n                  continue;\n               else if (\n                         details::imatch(se.name, var_name) &&\n                         (se.index == index)\n                       )\n                  return se;\n            }\n\n            return null_element_;\n         }\n\n         inline scope_element& get_active_element(const std::string& var_name,\n                                                  const std::size_t index = std::numeric_limits<std::size_t>::max())\n         {\n            const std::size_t current_depth = parser_.state_.scope_depth;\n\n            for (std::size_t i = 0; i < element_.size(); ++i)\n            {\n               scope_element& se = element_[i];\n\n               if (se.depth > current_depth)\n                  continue;\n               else if (\n                         details::imatch(se.name, var_name) &&\n                         (se.index == index)                &&\n                         (se.active)\n                       )\n                  return se;\n            }\n\n            return null_element_;\n         }\n\n         inline bool add_element(const scope_element& se)\n         {\n            for (std::size_t i = 0; i < element_.size(); ++i)\n            {\n               scope_element& cse = element_[i];\n\n               if (\n                    details::imatch(cse.name, se.name) &&\n                    (cse.depth <= se.depth)            &&\n                    (cse.index == se.index)            &&\n                    (cse.size  == se.size )            &&\n                    (cse.type  == se.type )            &&\n                    (cse.active)\n                  )\n                  return false;\n            }\n\n            element_.push_back(se);\n            std::sort(element_.begin(),element_.end());\n\n            return true;\n         }\n\n         inline void deactivate(const std::size_t& scope_depth)\n   
      {\n            exprtk_debug((\"deactivate() - Scope depth: %d\\n\",\n                          static_cast<int>(parser_.state_.scope_depth)));\n\n            for (std::size_t i = 0; i < element_.size(); ++i)\n            {\n               scope_element& se = element_[i];\n\n               if (se.active && (se.depth >= scope_depth))\n               {\n                  exprtk_debug((\"deactivate() - element[%02d] '%s'\\n\",\n                                static_cast<int>(i),\n                                se.name.c_str()));\n\n                  se.active = false;\n               }\n            }\n         }\n\n         inline void free_element(scope_element& se)\n         {\n            switch (se.type)\n            {\n               case scope_element::e_variable   : if (se.data    ) delete (T*) se.data;\n                                                  if (se.var_node) delete se.var_node;\n                                                  break;\n\n               case scope_element::e_vector     : if (se.data    ) delete[] (T*) se.data;\n                                                  if (se.vec_node) delete se.vec_node;\n                                                  break;\n\n               case scope_element::e_vecelem    : if (se.var_node) delete se.var_node;\n                                                  break;\n\n               #ifndef exprtk_disable_string_capabilities\n               case scope_element::e_string     : if (se.data    ) delete (std::string*) se.data;\n                                                  if (se.str_node) delete se.str_node;\n                                                  break;\n               #endif\n\n               default                          : return;\n            }\n\n            se.clear();\n         }\n\n         inline void cleanup()\n         {\n            for (std::size_t i = 0; i < element_.size(); ++i)\n            {\n               free_element(element_[i]);\n            }\n\n            
element_.clear();\n\n            input_param_cnt_ = 0;\n         }\n\n         inline std::size_t next_ip_index()\n         {\n            return ++input_param_cnt_;\n         }\n\n         inline expression_node_ptr get_variable(const T& v)\n         {\n            for (std::size_t i = 0; i < element_.size(); ++i)\n            {\n               scope_element& se = element_[i];\n\n               if (\n                    se.active   &&\n                    se.var_node &&\n                    details::is_variable_node(se.var_node)\n                  )\n               {\n                  variable_node_ptr vn = reinterpret_cast<variable_node_ptr>(se.var_node);\n\n                  if (&(vn->ref()) == (&v))\n                  {\n                     return se.var_node;\n                  }\n               }\n            }\n\n            return expression_node_ptr(0);\n         }\n\n      private:\n\n         scope_element_manager& operator=(const scope_element_manager&);\n\n         parser_t& parser_;\n         std::vector<scope_element> element_;\n         scope_element null_element_;\n         std::size_t input_param_cnt_;\n      };\n\n      class scope_handler\n      {\n      public:\n\n         typedef parser<T> parser_t;\n\n         scope_handler(parser<T>& p)\n         : parser_(p)\n         {\n            parser_.state_.scope_depth++;\n            #ifdef exprtk_enable_debugging\n            std::string depth(2 * parser_.state_.scope_depth,'-');\n            exprtk_debug((\"%s> Scope Depth: %02d\\n\",\n                          depth.c_str(),\n                          static_cast<int>(parser_.state_.scope_depth)));\n            #endif\n         }\n\n        ~scope_handler()\n         {\n            parser_.sem_.deactivate(parser_.state_.scope_depth);\n            parser_.state_.scope_depth--;\n            #ifdef exprtk_enable_debugging\n            std::string depth(2 * parser_.state_.scope_depth,'-');\n            exprtk_debug((\"<%s Scope Depth: %02d\\n\",\n  
                        depth.c_str(),\n                          static_cast<int>(parser_.state_.scope_depth)));\n            #endif\n         }\n\n      private:\n\n         scope_handler& operator=(const scope_handler&);\n\n         parser_t& parser_;\n      };\n\n      struct symtab_store\n      {\n         symbol_table_list_t symtab_list_;\n\n         typedef typename symbol_table_t::local_data_t   local_data_t;\n         typedef typename symbol_table_t::variable_ptr   variable_ptr;\n         typedef typename symbol_table_t::function_ptr   function_ptr;\n         #ifndef exprtk_disable_string_capabilities\n         typedef typename symbol_table_t::stringvar_ptr stringvar_ptr;\n         #endif\n         typedef typename symbol_table_t::vector_holder_ptr       vector_holder_ptr;\n         typedef typename symbol_table_t::vararg_function_ptr   vararg_function_ptr;\n         typedef typename symbol_table_t::generic_function_ptr generic_function_ptr;\n\n         inline bool empty() const\n         {\n            return symtab_list_.empty();\n         }\n\n         inline void clear()\n         {\n            symtab_list_.clear();\n         }\n\n         inline bool valid() const\n         {\n            if (!empty())\n            {\n               for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n               {\n                  if (symtab_list_[i].valid())\n                     return true;\n               }\n            }\n\n            return false;\n         }\n\n         inline bool valid_symbol(const std::string& symbol) const\n         {\n            if (!symtab_list_.empty())\n               return symtab_list_[0].valid_symbol(symbol);\n            else\n               return false;\n         }\n\n         inline bool valid_function_name(const std::string& symbol) const\n         {\n            if (!symtab_list_.empty())\n               return symtab_list_[0].valid_function(symbol);\n            else\n               return false;\n         }\n\n     
    inline variable_ptr get_variable(const std::string& variable_name) const\n         {\n            if (!valid_symbol(variable_name))\n               return reinterpret_cast<variable_ptr>(0);\n\n            variable_ptr result = reinterpret_cast<variable_ptr>(0);\n\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else\n                  result = local_data(i)\n                              .variable_store.get(variable_name);\n\n               if (result) break;\n            }\n\n            return result;\n         }\n\n         inline variable_ptr get_variable(const T& var_ref) const\n         {\n            variable_ptr result = reinterpret_cast<variable_ptr>(0);\n\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else\n                  result = local_data(i).variable_store\n                              .get_from_varptr(reinterpret_cast<const void*>(&var_ref));\n\n               if (result) break;\n            }\n\n            return result;\n         }\n\n         #ifndef exprtk_disable_string_capabilities\n         inline stringvar_ptr get_stringvar(const std::string& string_name) const\n         {\n            if (!valid_symbol(string_name))\n               return reinterpret_cast<stringvar_ptr>(0);\n\n            stringvar_ptr result = reinterpret_cast<stringvar_ptr>(0);\n\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else\n                  result = local_data(i)\n                              .stringvar_store.get(string_name);\n\n               if (result) break;\n            }\n\n            return result;\n         }\n         #endif\n\n         inline function_ptr get_function(const 
std::string& function_name) const\n         {\n            if (!valid_function_name(function_name))\n               return reinterpret_cast<function_ptr>(0);\n\n            function_ptr result = reinterpret_cast<function_ptr>(0);\n\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else\n                  result = local_data(i)\n                              .function_store.get(function_name);\n\n               if (result) break;\n            }\n\n            return result;\n         }\n\n         inline vararg_function_ptr get_vararg_function(const std::string& vararg_function_name) const\n         {\n            if (!valid_function_name(vararg_function_name))\n               return reinterpret_cast<vararg_function_ptr>(0);\n\n            vararg_function_ptr result = reinterpret_cast<vararg_function_ptr>(0);\n\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else\n                  result = local_data(i)\n                              .vararg_function_store.get(vararg_function_name);\n\n               if (result) break;\n            }\n\n            return result;\n         }\n\n         inline generic_function_ptr get_generic_function(const std::string& function_name) const\n         {\n            if (!valid_function_name(function_name))\n               return reinterpret_cast<generic_function_ptr>(0);\n\n            generic_function_ptr result = reinterpret_cast<generic_function_ptr>(0);\n\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else\n                  result = local_data(i)\n                              .generic_function_store.get(function_name);\n\n               if (result) break;\n    
        }\n\n            return result;\n         }\n\n         inline generic_function_ptr get_string_function(const std::string& function_name) const\n         {\n            if (!valid_function_name(function_name))\n               return reinterpret_cast<generic_function_ptr>(0);\n\n            generic_function_ptr result = reinterpret_cast<generic_function_ptr>(0);\n\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else\n                  result =\n                     local_data(i).string_function_store.get(function_name);\n\n               if (result) break;\n            }\n\n            return result;\n         }\n\n         inline vector_holder_ptr get_vector(const std::string& vector_name) const\n         {\n            if (!valid_symbol(vector_name))\n               return reinterpret_cast<vector_holder_ptr>(0);\n\n            vector_holder_ptr result = reinterpret_cast<vector_holder_ptr>(0);\n\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else\n                  result =\n                     local_data(i).vector_store.get(vector_name);\n\n               if (result) break;\n            }\n\n            return result;\n         }\n\n         inline bool is_constant_node(const std::string& symbol_name) const\n         {\n            if (!valid_symbol(symbol_name))\n               return false;\n\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else if (local_data(i).variable_store.is_constant(symbol_name))\n                  return true;\n            }\n\n            return false;\n         }\n\n         #ifndef exprtk_disable_string_capabilities\n         inline bool 
is_constant_string(const std::string& symbol_name) const\n         {\n            if (!valid_symbol(symbol_name))\n               return false;\n\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else if (!local_data(i).stringvar_store.symbol_exists(symbol_name))\n                  continue;\n               else if ( local_data(i).stringvar_store.is_constant(symbol_name))\n                  return true;\n            }\n\n            return false;\n         }\n         #endif\n\n         inline bool symbol_exists(const std::string& symbol) const\n         {\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else if (symtab_list_[i].symbol_exists(symbol))\n                  return true;\n            }\n\n            return false;\n         }\n\n         inline bool is_variable(const std::string& variable_name) const\n         {\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else if (\n                         symtab_list_[i].local_data().variable_store\n                           .symbol_exists(variable_name)\n                       )\n                  return true;\n            }\n\n            return false;\n         }\n\n         #ifndef exprtk_disable_string_capabilities\n         inline bool is_stringvar(const std::string& stringvar_name) const\n         {\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else if (\n                         symtab_list_[i].local_data().stringvar_store\n                           .symbol_exists(stringvar_name)\n                       )\n         
         return true;\n            }\n\n            return false;\n         }\n\n         inline bool is_conststr_stringvar(const std::string& symbol_name) const\n         {\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else if (\n                         symtab_list_[i].local_data().stringvar_store\n                           .symbol_exists(symbol_name)\n                       )\n               {\n                  return (\n                           local_data(i).stringvar_store.symbol_exists(symbol_name) &&\n                           local_data(i).stringvar_store.is_constant  (symbol_name)\n                         );\n               }\n            }\n\n            return false;\n         }\n         #endif\n\n         inline bool is_function(const std::string& function_name) const\n         {\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else if (\n                         local_data(i).function_store\n                           .symbol_exists(function_name)\n                       )\n                  return true;\n            }\n\n            return false;\n         }\n\n         inline bool is_vararg_function(const std::string& vararg_function_name) const\n         {\n            for (std::size_t i = 0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else if (\n                         local_data(i).vararg_function_store\n                           .symbol_exists(vararg_function_name)\n                       )\n                  return true;\n            }\n\n            return false;\n         }\n\n         inline bool is_vector(const std::string& vector_name) const\n         {\n            for (std::size_t i = 
0; i < symtab_list_.size(); ++i)\n            {\n               if (!symtab_list_[i].valid())\n                  continue;\n               else if (\n                         local_data(i).vector_store\n                           .symbol_exists(vector_name)\n                       )\n                  return true;\n            }\n\n            return false;\n         }\n\n         inline std::string get_variable_name(const expression_node_ptr& ptr) const\n         {\n            return local_data().variable_store.entity_name(ptr);\n         }\n\n         inline std::string get_vector_name(const vector_holder_ptr& ptr) const\n         {\n            return local_data().vector_store.entity_name(ptr);\n         }\n\n         #ifndef exprtk_disable_string_capabilities\n         inline std::string get_stringvar_name(const expression_node_ptr& ptr) const\n         {\n            return local_data().stringvar_store.entity_name(ptr);\n         }\n\n         inline std::string get_conststr_stringvar_name(const expression_node_ptr& ptr) const\n         {\n            return local_data().stringvar_store.entity_name(ptr);\n         }\n         #endif\n\n         inline local_data_t& local_data(const std::size_t& index = 0)\n         {\n            return symtab_list_[index].local_data();\n         }\n\n         inline const local_data_t& local_data(const std::size_t& index = 0) const\n         {\n            return symtab_list_[index].local_data();\n         }\n\n         inline symbol_table_t& get_symbol_table(const std::size_t& index = 0)\n         {\n            return symtab_list_[index];\n         }\n      };\n\n      struct parser_state\n      {\n         parser_state()\n         : type_check_enabled(true)\n         {\n            reset();\n         }\n\n         void reset()\n         {\n            parsing_return_stmt = false;\n            parsing_break_stmt  = false;\n            return_stmt_present = false;\n            side_effect_present = false;\n            
scope_depth         = 0;\n         }\n\n         #ifndef exprtk_enable_debugging\n         void activate_side_effect(const std::string&)\n         #else\n         void activate_side_effect(const std::string& source)\n         #endif\n         {\n            if (!side_effect_present)\n            {\n               side_effect_present = true;\n\n               exprtk_debug((\"activate_side_effect() - caller: %s\\n\",source.c_str()));\n            }\n         }\n\n         bool parsing_return_stmt;\n         bool parsing_break_stmt;\n         bool return_stmt_present;\n         bool side_effect_present;\n         bool type_check_enabled;\n         std::size_t scope_depth;\n      };\n\n   public:\n\n      struct unknown_symbol_resolver\n      {\n\n         enum usr_symbol_type\n         {\n            e_usr_variable_type = 0,\n            e_usr_constant_type = 1\n         };\n\n         enum usr_mode\n         {\n            e_usrmode_default  = 0,\n            e_usrmode_extended = 1\n         };\n\n         usr_mode mode;\n\n         unknown_symbol_resolver(const usr_mode m = e_usrmode_default)\n         : mode(m)\n         {}\n\n         virtual ~unknown_symbol_resolver()\n         {}\n\n         virtual bool process(const std::string& /*unknown_symbol*/,\n                              usr_symbol_type&   st,\n                              T&                 default_value,\n                              std::string&       error_message)\n         {\n            if (e_usrmode_default != mode)\n               return false;\n\n            st = e_usr_variable_type;\n            default_value = T(0);\n            error_message.clear();\n\n            return true;\n         }\n\n         virtual bool process(const std::string& /* unknown_symbol */,\n                              symbol_table_t&    /* symbol_table   */,\n                              std::string&       /* error_message  */)\n         {\n            return false;\n         }\n      };\n\n      enum 
collect_type\n      {\n         e_ct_none        = 0,\n         e_ct_variables   = 1,\n         e_ct_functions   = 2,\n         e_ct_assignments = 4\n      };\n\n      enum symbol_type\n      {\n         e_st_unknown        = 0,\n         e_st_variable       = 1,\n         e_st_vector         = 2,\n         e_st_vecelem        = 3,\n         e_st_string         = 4,\n         e_st_function       = 5,\n         e_st_local_variable = 6,\n         e_st_local_vector   = 7,\n         e_st_local_string   = 8\n      };\n\n      class dependent_entity_collector\n      {\n      public:\n\n         typedef std::pair<std::string,symbol_type> symbol_t;\n         typedef std::vector<symbol_t> symbol_list_t;\n\n         dependent_entity_collector(const std::size_t options = e_ct_none)\n         : options_(options),\n           collect_variables_  ((options_ & e_ct_variables  ) == e_ct_variables  ),\n           collect_functions_  ((options_ & e_ct_functions  ) == e_ct_functions  ),\n           collect_assignments_((options_ & e_ct_assignments) == e_ct_assignments),\n           return_present_   (false),\n           final_stmt_return_(false)\n         {}\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         inline std::size_t symbols(Sequence<symbol_t,Allocator>& symbols_list)\n         {\n            if (!collect_variables_ && !collect_functions_)\n               return 0;\n            else if (symbol_name_list_.empty())\n               return 0;\n\n            for (std::size_t i = 0; i < symbol_name_list_.size(); ++i)\n            {\n               details::case_normalise(symbol_name_list_[i].first);\n            }\n\n            std::sort(symbol_name_list_.begin(),symbol_name_list_.end());\n\n            std::unique_copy(symbol_name_list_.begin(),\n                             symbol_name_list_.end  (),\n                             std::back_inserter(symbols_list));\n\n            return symbols_list.size();\n     
    }\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         inline std::size_t assignment_symbols(Sequence<symbol_t,Allocator>& assignment_list)\n         {\n            if (!collect_assignments_)\n               return 0;\n            else if (assignment_name_list_.empty())\n               return 0;\n\n            for (std::size_t i = 0; i < assignment_name_list_.size(); ++i)\n            {\n               details::case_normalise(assignment_name_list_[i].first);\n            }\n\n            std::sort(assignment_name_list_.begin(),assignment_name_list_.end());\n\n            std::unique_copy(assignment_name_list_.begin(),\n                             assignment_name_list_.end  (),\n                             std::back_inserter(assignment_list));\n\n            return assignment_list.size();\n         }\n\n         void clear()\n         {\n            symbol_name_list_    .clear();\n            assignment_name_list_.clear();\n            retparam_list_       .clear();\n            return_present_    = false;\n            final_stmt_return_ = false;\n         }\n\n         bool& collect_variables()\n         {\n            return collect_variables_;\n         }\n\n         bool& collect_functions()\n         {\n            return collect_functions_;\n         }\n\n         bool& collect_assignments()\n         {\n            return collect_assignments_;\n         }\n\n         bool return_present() const\n         {\n            return return_present_;\n         }\n\n         bool final_stmt_return() const\n         {\n            return final_stmt_return_;\n         }\n\n         typedef std::vector<std::string> retparam_list_t;\n\n         retparam_list_t return_param_type_list() const\n         {\n            return retparam_list_;\n         }\n\n      private:\n\n         inline void add_symbol(const std::string& symbol, const symbol_type st)\n         {\n            switch (st)\n            {\n  
             case e_st_variable       :\n               case e_st_vector         :\n               case e_st_string         :\n               case e_st_local_variable :\n               case e_st_local_vector   :\n               case e_st_local_string   : if (collect_variables_)\n                                             symbol_name_list_\n                                                .push_back(std::make_pair(symbol, st));\n                                          break;\n\n               case e_st_function       : if (collect_functions_)\n                                             symbol_name_list_\n                                                .push_back(std::make_pair(symbol, st));\n                                          break;\n\n               default                  : return;\n            }\n         }\n\n         inline void add_assignment(const std::string& symbol, const symbol_type st)\n         {\n            switch (st)\n            {\n               case e_st_variable       :\n               case e_st_vector         :\n               case e_st_string         : if (collect_assignments_)\n                                             assignment_name_list_\n                                                .push_back(std::make_pair(symbol, st));\n                                          break;\n\n               default                  : return;\n            }\n         }\n\n         std::size_t options_;\n         bool collect_variables_;\n         bool collect_functions_;\n         bool collect_assignments_;\n         bool return_present_;\n         bool final_stmt_return_;\n         symbol_list_t symbol_name_list_;\n         symbol_list_t assignment_name_list_;\n         retparam_list_t retparam_list_;\n\n         friend class parser<T>;\n      };\n\n      class settings_store\n      {\n      private:\n\n         typedef std::set<std::string,details::ilesscompare> disabled_entity_set_t;\n         typedef disabled_entity_set_t::iterator 
des_itr_t;\n\n      public:\n\n         enum settings_compilation_options\n         {\n            e_unknown              =    0,\n            e_replacer             =    1,\n            e_joiner               =    2,\n            e_numeric_check        =    4,\n            e_bracket_check        =    8,\n            e_sequence_check       =   16,\n            e_commutative_check    =   32,\n            e_strength_reduction   =   64,\n            e_disable_vardef       =  128,\n            e_collect_vars         =  256,\n            e_collect_funcs        =  512,\n            e_collect_assings      = 1024,\n            e_disable_usr_on_rsrvd = 2048,\n            e_disable_zero_return  = 4096\n         };\n\n         enum settings_base_funcs\n         {\n            e_bf_unknown = 0,\n            e_bf_abs       , e_bf_acos     , e_bf_acosh    , e_bf_asin   ,\n            e_bf_asinh     , e_bf_atan     , e_bf_atan2    , e_bf_atanh  ,\n            e_bf_avg       , e_bf_ceil     , e_bf_clamp    , e_bf_cos    ,\n            e_bf_cosh      , e_bf_cot      , e_bf_csc      , e_bf_equal  ,\n            e_bf_erf       , e_bf_erfc     , e_bf_exp      , e_bf_expm1  ,\n            e_bf_floor     , e_bf_frac     , e_bf_hypot    , e_bf_iclamp ,\n            e_bf_like      , e_bf_log      , e_bf_log10    , e_bf_log1p  ,\n            e_bf_log2      , e_bf_logn     , e_bf_mand     , e_bf_max    ,\n            e_bf_min       , e_bf_mod      , e_bf_mor      , e_bf_mul    ,\n            e_bf_ncdf      , e_bf_pow      , e_bf_root     , e_bf_round  ,\n            e_bf_roundn    , e_bf_sec      , e_bf_sgn      , e_bf_sin    ,\n            e_bf_sinc      , e_bf_sinh     , e_bf_sqrt     , e_bf_sum    ,\n            e_bf_swap      , e_bf_tan      , e_bf_tanh     , e_bf_trunc  ,\n            e_bf_not_equal , e_bf_inrange  , e_bf_deg2grad , e_bf_deg2rad,\n            e_bf_rad2deg   , e_bf_grad2deg\n         };\n\n         enum settings_control_structs\n         {\n            e_ctrl_unknown = 
0,\n            e_ctrl_ifelse,\n            e_ctrl_switch,\n            e_ctrl_for_loop,\n            e_ctrl_while_loop,\n            e_ctrl_repeat_loop,\n            e_ctrl_return\n         };\n\n         enum settings_logic_opr\n         {\n            e_logic_unknown = 0,\n            e_logic_and, e_logic_nand,  e_logic_nor,\n            e_logic_not, e_logic_or,    e_logic_xnor,\n            e_logic_xor, e_logic_scand, e_logic_scor\n         };\n\n         enum settings_arithmetic_opr\n         {\n            e_arith_unknown = 0,\n            e_arith_add, e_arith_sub, e_arith_mul,\n            e_arith_div, e_arith_mod, e_arith_pow\n         };\n\n         enum settings_assignment_opr\n         {\n            e_assign_unknown = 0,\n            e_assign_assign, e_assign_addass, e_assign_subass,\n            e_assign_mulass, e_assign_divass, e_assign_modass\n         };\n\n         enum settings_inequality_opr\n         {\n            e_ineq_unknown = 0,\n            e_ineq_lt,    e_ineq_lte, e_ineq_eq,\n            e_ineq_equal, e_ineq_ne,  e_ineq_nequal,\n            e_ineq_gte,   e_ineq_gt\n         };\n\n         static const std::size_t compile_all_opts = e_replacer          +\n                                                     e_joiner            +\n                                                     e_numeric_check     +\n                                                     e_bracket_check     +\n                                                     e_sequence_check    +\n                                                     e_commutative_check +\n                                                     e_strength_reduction;\n\n         settings_store(const std::size_t compile_options = compile_all_opts)\n         {\n           load_compile_options(compile_options);\n         }\n\n         settings_store& enable_all_base_functions()\n         {\n            disabled_func_set_.clear();\n            return (*this);\n         }\n\n         settings_store& 
enable_all_control_structures()\n         {\n            disabled_ctrl_set_.clear();\n            return (*this);\n         }\n\n         settings_store& enable_all_logic_ops()\n         {\n            disabled_logic_set_.clear();\n            return (*this);\n         }\n\n         settings_store& enable_all_arithmetic_ops()\n         {\n            disabled_arithmetic_set_.clear();\n            return (*this);\n         }\n\n         settings_store& enable_all_assignment_ops()\n         {\n            disabled_assignment_set_.clear();\n            return (*this);\n         }\n\n         settings_store& enable_all_inequality_ops()\n         {\n            disabled_inequality_set_.clear();\n            return (*this);\n         }\n\n         settings_store& enable_local_vardef()\n         {\n            disable_vardef_ = false;\n            return (*this);\n         }\n\n         settings_store& disable_all_base_functions()\n         {\n            std::copy(details::base_function_list,\n                      details::base_function_list + details::base_function_list_size,\n                      std::insert_iterator<disabled_entity_set_t>\n                        (disabled_func_set_, disabled_func_set_.begin()));\n            return (*this);\n         }\n\n         settings_store& disable_all_control_structures()\n         {\n            std::copy(details::cntrl_struct_list,\n                      details::cntrl_struct_list + details::cntrl_struct_list_size,\n                      std::insert_iterator<disabled_entity_set_t>\n                        (disabled_ctrl_set_, disabled_ctrl_set_.begin()));\n            return (*this);\n         }\n\n         settings_store& disable_all_logic_ops()\n         {\n            std::copy(details::logic_ops_list,\n                      details::logic_ops_list + details::logic_ops_list_size,\n                      std::insert_iterator<disabled_entity_set_t>\n                        (disabled_logic_set_, 
disabled_logic_set_.begin()));\n            return (*this);\n         }\n\n         settings_store& disable_all_arithmetic_ops()\n         {\n            std::copy(details::arithmetic_ops_list,\n                      details::arithmetic_ops_list + details::arithmetic_ops_list_size,\n                      std::insert_iterator<disabled_entity_set_t>\n                        (disabled_arithmetic_set_, disabled_arithmetic_set_.begin()));\n            return (*this);\n         }\n\n         settings_store& disable_all_assignment_ops()\n         {\n            std::copy(details::assignment_ops_list,\n                      details::assignment_ops_list + details::assignment_ops_list_size,\n                      std::insert_iterator<disabled_entity_set_t>\n                        (disabled_assignment_set_, disabled_assignment_set_.begin()));\n            return (*this);\n         }\n\n         settings_store& disable_all_inequality_ops()\n         {\n            std::copy(details::inequality_ops_list,\n                      details::inequality_ops_list + details::inequality_ops_list_size,\n                      std::insert_iterator<disabled_entity_set_t>\n                        (disabled_inequality_set_, disabled_inequality_set_.begin()));\n            return (*this);\n         }\n\n         settings_store& disable_local_vardef()\n         {\n            disable_vardef_ = true;\n            return (*this);\n         }\n\n         bool replacer_enabled           () const { return enable_replacer_;           }\n         bool commutative_check_enabled  () const { return enable_commutative_check_;  }\n         bool joiner_enabled             () const { return enable_joiner_;             }\n         bool numeric_check_enabled      () const { return enable_numeric_check_;      }\n         bool bracket_check_enabled      () const { return enable_bracket_check_;      }\n         bool sequence_check_enabled     () const { return enable_sequence_check_;     }\n         bool 
strength_reduction_enabled () const { return enable_strength_reduction_; }\n         bool collect_variables_enabled  () const { return enable_collect_vars_;       }\n         bool collect_functions_enabled  () const { return enable_collect_funcs_;      }\n         bool collect_assignments_enabled() const { return enable_collect_assings_;    }\n         bool vardef_disabled            () const { return disable_vardef_;            }\n         bool rsrvd_sym_usr_disabled     () const { return disable_rsrvd_sym_usr_;     }\n         bool zero_return_disabled       () const { return disable_zero_return_;       }\n\n         bool function_enabled(const std::string& function_name)\n         {\n            if (disabled_func_set_.empty())\n               return true;\n            else\n               return (disabled_func_set_.end() == disabled_func_set_.find(function_name));\n         }\n\n         bool control_struct_enabled(const std::string& control_struct)\n         {\n            if (disabled_ctrl_set_.empty())\n               return true;\n            else\n               return (disabled_ctrl_set_.end() == disabled_ctrl_set_.find(control_struct));\n         }\n\n         bool logic_enabled(const std::string& logic_operation)\n         {\n            if (disabled_logic_set_.empty())\n               return true;\n            else\n               return (disabled_logic_set_.end() == disabled_logic_set_.find(logic_operation));\n         }\n\n         bool arithmetic_enabled(const details::operator_type& arithmetic_operation)\n         {\n            if (disabled_arithmetic_set_.empty())\n               return true;\n            else\n               return disabled_arithmetic_set_.end() == disabled_arithmetic_set_\n                                                            .find(arith_opr_to_string(arithmetic_operation));\n         }\n\n         bool assignment_enabled(const details::operator_type& assignment)\n         {\n            if (disabled_assignment_set_.empty())\n  
             return true;\n            else\n               return disabled_assignment_set_.end() == disabled_assignment_set_\n                                                           .find(assign_opr_to_string(assignment));\n         }\n\n         bool inequality_enabled(const details::operator_type& inequality)\n         {\n            if (disabled_inequality_set_.empty())\n               return true;\n            else\n               return disabled_inequality_set_.end() == disabled_inequality_set_\n                                                           .find(inequality_opr_to_string(inequality));\n         }\n\n         bool function_disabled(const std::string& function_name)\n         {\n            if (disabled_func_set_.empty())\n               return false;\n            else\n               return (disabled_func_set_.end() != disabled_func_set_.find(function_name));\n         }\n\n         bool control_struct_disabled(const std::string& control_struct)\n         {\n            if (disabled_ctrl_set_.empty())\n               return false;\n            else\n               return (disabled_ctrl_set_.end() != disabled_ctrl_set_.find(control_struct));\n         }\n\n         bool logic_disabled(const std::string& logic_operation)\n         {\n            if (disabled_logic_set_.empty())\n               return false;\n            else\n               return (disabled_logic_set_.end() != disabled_logic_set_.find(logic_operation));\n         }\n\n         bool assignment_disabled(const details::operator_type assignment_operation)\n         {\n            if (disabled_assignment_set_.empty())\n               return false;\n            else\n               return disabled_assignment_set_.end() != disabled_assignment_set_\n                                                           .find(assign_opr_to_string(assignment_operation));\n         }\n\n         bool arithmetic_disabled(const details::operator_type arithmetic_operation)\n         {\n            if 
(disabled_arithmetic_set_.empty())\n               return false;\n            else\n               return disabled_arithmetic_set_.end() != disabled_arithmetic_set_\n                                                           .find(arith_opr_to_string(arithmetic_operation));\n         }\n\n         bool inequality_disabled(const details::operator_type& inequality)\n         {\n            if (disabled_inequality_set_.empty())\n               return false;\n            else\n               return disabled_inequality_set_.end() != disabled_inequality_set_\n                                                           .find(inequality_opr_to_string(inequality));\n         }\n\n         settings_store& disable_base_function(settings_base_funcs bf)\n         {\n            if (\n                 (e_bf_unknown != bf) &&\n                 (static_cast<std::size_t>(bf) < (details::base_function_list_size + 1))\n               )\n            {\n               disabled_func_set_.insert(details::base_function_list[bf - 1]);\n            }\n\n            return (*this);\n         }\n\n         settings_store& disable_control_structure(settings_control_structs ctrl_struct)\n         {\n            if (\n                 (e_ctrl_unknown != ctrl_struct) &&\n                 (static_cast<std::size_t>(ctrl_struct) < (details::cntrl_struct_list_size + 1))\n               )\n            {\n               disabled_ctrl_set_.insert(details::cntrl_struct_list[ctrl_struct - 1]);\n            }\n\n            return (*this);\n         }\n\n         settings_store& disable_logic_operation(settings_logic_opr logic)\n         {\n            if (\n                 (e_logic_unknown != logic) &&\n                 (static_cast<std::size_t>(logic) < (details::logic_ops_list_size + 1))\n               )\n            {\n               disabled_logic_set_.insert(details::logic_ops_list[logic - 1]);\n            }\n\n            return (*this);\n         }\n\n         settings_store& 
disable_arithmetic_operation(settings_arithmetic_opr arithmetic)\n         {\n            if (\n                 (e_arith_unknown != arithmetic) &&\n                 (static_cast<std::size_t>(arithmetic) < (details::arithmetic_ops_list_size + 1))\n               )\n            {\n               disabled_arithmetic_set_.insert(details::arithmetic_ops_list[arithmetic - 1]);\n            }\n\n            return (*this);\n         }\n\n         settings_store& disable_assignment_operation(settings_assignment_opr assignment)\n         {\n            if (\n                 (e_assign_unknown != assignment) &&\n                 (static_cast<std::size_t>(assignment) < (details::assignment_ops_list_size + 1))\n               )\n            {\n               disabled_assignment_set_.insert(details::assignment_ops_list[assignment - 1]);\n            }\n\n            return (*this);\n         }\n\n         settings_store& disable_inequality_operation(settings_inequality_opr inequality)\n         {\n            if (\n                 (e_ineq_unknown != inequality) &&\n                 (static_cast<std::size_t>(inequality) < (details::inequality_ops_list_size + 1))\n               )\n            {\n               disabled_inequality_set_.insert(details::inequality_ops_list[inequality - 1]);\n            }\n\n            return (*this);\n         }\n\n         settings_store& enable_base_function(settings_base_funcs bf)\n         {\n            if (\n                 (e_bf_unknown != bf) &&\n                 (static_cast<std::size_t>(bf) < (details::base_function_list_size + 1))\n               )\n            {\n               const des_itr_t itr = disabled_func_set_.find(details::base_function_list[bf - 1]);\n\n               if (disabled_func_set_.end() != itr)\n               {\n                  disabled_func_set_.erase(itr);\n               }\n            }\n\n            return (*this);\n         }\n\n         settings_store& enable_control_structure(settings_control_structs 
ctrl_struct)\n         {\n            if (\n                 (e_ctrl_unknown != ctrl_struct) &&\n                 (static_cast<std::size_t>(ctrl_struct) < (details::cntrl_struct_list_size + 1))\n               )\n            {\n               const des_itr_t itr = disabled_ctrl_set_.find(details::cntrl_struct_list[ctrl_struct - 1]);\n\n               if (disabled_ctrl_set_.end() != itr)\n               {\n                  disabled_ctrl_set_.erase(itr);\n               }\n            }\n\n            return (*this);\n         }\n\n         settings_store& enable_logic_operation(settings_logic_opr logic)\n         {\n            if (\n                 (e_logic_unknown != logic) &&\n                 (static_cast<std::size_t>(logic) < (details::logic_ops_list_size + 1))\n               )\n            {\n               const des_itr_t itr = disabled_logic_set_.find(details::logic_ops_list[logic - 1]);\n\n               if (disabled_logic_set_.end() != itr)\n               {\n                  disabled_logic_set_.erase(itr);\n               }\n            }\n\n            return (*this);\n         }\n\n         settings_store& enable_arithmetic_operation(settings_arithmetic_opr arithmetic)\n         {\n            if (\n                 (e_arith_unknown != arithmetic) &&\n                 (static_cast<std::size_t>(arithmetic) < (details::arithmetic_ops_list_size + 1))\n               )\n            {\n               const des_itr_t itr = disabled_arithmetic_set_.find(details::arithmetic_ops_list[arithmetic - 1]);\n\n               if (disabled_arithmetic_set_.end() != itr)\n               {\n                  disabled_arithmetic_set_.erase(itr);\n               }\n            }\n\n            return (*this);\n         }\n\n         settings_store& enable_assignment_operation(settings_assignment_opr assignment)\n         {\n            if (\n                 (e_assign_unknown != assignment) &&\n                 (static_cast<std::size_t>(assignment) < 
(details::assignment_ops_list_size + 1))\n               )\n            {\n               const des_itr_t itr = disabled_assignment_set_.find(details::assignment_ops_list[assignment - 1]);\n\n               if (disabled_assignment_set_.end() != itr)\n               {\n                  disabled_assignment_set_.erase(itr);\n               }\n            }\n\n            return (*this);\n         }\n\n         settings_store& enable_inequality_operation(settings_inequality_opr inequality)\n         {\n            if (\n                 (e_ineq_unknown != inequality) &&\n                 (static_cast<std::size_t>(inequality) < (details::inequality_ops_list_size + 1))\n               )\n            {\n               const des_itr_t itr = disabled_inequality_set_.find(details::inequality_ops_list[inequality - 1]);\n\n               if (disabled_inequality_set_.end() != itr)\n               {\n                  disabled_inequality_set_.erase(itr);\n               }\n            }\n\n            return (*this);\n         }\n\n      private:\n\n         void load_compile_options(const std::size_t compile_options)\n         {\n            enable_replacer_           = (compile_options & e_replacer            ) == e_replacer;\n            enable_joiner_             = (compile_options & e_joiner              ) == e_joiner;\n            enable_numeric_check_      = (compile_options & e_numeric_check       ) == e_numeric_check;\n            enable_bracket_check_      = (compile_options & e_bracket_check       ) == e_bracket_check;\n            enable_sequence_check_     = (compile_options & e_sequence_check      ) == e_sequence_check;\n            enable_commutative_check_  = (compile_options & e_commutative_check   ) == e_commutative_check;\n            enable_strength_reduction_ = (compile_options & e_strength_reduction  ) == e_strength_reduction;\n            enable_collect_vars_       = (compile_options & e_collect_vars        ) == e_collect_vars;\n            
enable_collect_funcs_      = (compile_options & e_collect_funcs       ) == e_collect_funcs;\n            enable_collect_assings_    = (compile_options & e_collect_assings     ) == e_collect_assings;\n            disable_vardef_            = (compile_options & e_disable_vardef      ) == e_disable_vardef;\n            disable_rsrvd_sym_usr_     = (compile_options & e_disable_usr_on_rsrvd) == e_disable_usr_on_rsrvd;\n            disable_zero_return_       = (compile_options & e_disable_zero_return ) == e_disable_zero_return;\n         }\n\n         std::string assign_opr_to_string(details::operator_type opr)\n         {\n            switch (opr)\n            {\n               case details::e_assign : return \":=\";\n               case details::e_addass : return \"+=\";\n               case details::e_subass : return \"-=\";\n               case details::e_mulass : return \"*=\";\n               case details::e_divass : return \"/=\";\n               case details::e_modass : return \"%=\";\n               default                : return   \"\";\n            }\n         }\n\n         std::string arith_opr_to_string(details::operator_type opr)\n         {\n            switch (opr)\n            {\n               case details::e_add : return \"+\";\n               case details::e_sub : return \"-\";\n               case details::e_mul : return \"*\";\n               case details::e_div : return \"/\";\n               case details::e_mod : return \"%\";\n               default             : return  \"\";\n            }\n         }\n\n         std::string inequality_opr_to_string(details::operator_type opr)\n         {\n            switch (opr)\n            {\n               case details::e_lt    : return  \"<\";\n               case details::e_lte   : return \"<=\";\n               case details::e_eq    : return \"==\";\n               case details::e_equal : return  \"=\";\n               case details::e_ne    : return \"!=\";\n               case details::e_nequal: 
return \"<>\";\n               case details::e_gte   : return \">=\";\n               case details::e_gt    : return  \">\";\n               default               : return   \"\";\n            }\n         }\n\n         bool enable_replacer_;\n         bool enable_joiner_;\n         bool enable_numeric_check_;\n         bool enable_bracket_check_;\n         bool enable_sequence_check_;\n         bool enable_commutative_check_;\n         bool enable_strength_reduction_;\n         bool enable_collect_vars_;\n         bool enable_collect_funcs_;\n         bool enable_collect_assings_;\n         bool disable_vardef_;\n         bool disable_rsrvd_sym_usr_;\n         bool disable_zero_return_;\n\n         disabled_entity_set_t disabled_func_set_ ;\n         disabled_entity_set_t disabled_ctrl_set_ ;\n         disabled_entity_set_t disabled_logic_set_;\n         disabled_entity_set_t disabled_arithmetic_set_;\n         disabled_entity_set_t disabled_assignment_set_;\n         disabled_entity_set_t disabled_inequality_set_;\n\n         friend class parser<T>;\n      };\n\n      typedef settings_store settings_t;\n\n      parser(const settings_t& settings = settings_t())\n      : settings_(settings),\n        resolve_unknown_symbol_(false),\n        results_context_(0),\n        unknown_symbol_resolver_(reinterpret_cast<unknown_symbol_resolver*>(0)),\n        #ifdef _MSC_VER\n        #pragma warning(push)\n        #pragma warning (disable:4355)\n        #endif\n        sem_(*this),\n        #ifdef _MSC_VER\n        #pragma warning(pop)\n        #endif\n        operator_joiner_2_(2),\n        operator_joiner_3_(3)\n      {\n         init_precompilation();\n\n         load_operations_map           (base_ops_map_     );\n         load_unary_operations_map     (unary_op_map_     );\n         load_binary_operations_map    (binary_op_map_    );\n         load_inv_binary_operations_map(inv_binary_op_map_);\n         load_sf3_map                  (sf3_map_          );\n         
load_sf4_map                  (sf4_map_          );\n\n         expression_generator_.init_synthesize_map();\n         expression_generator_.set_parser(*this);\n         expression_generator_.set_uom(unary_op_map_);\n         expression_generator_.set_bom(binary_op_map_);\n         expression_generator_.set_ibom(inv_binary_op_map_);\n         expression_generator_.set_sf3m(sf3_map_);\n         expression_generator_.set_sf4m(sf4_map_);\n         expression_generator_.set_strength_reduction_state(settings_.strength_reduction_enabled());\n      }\n\n     ~parser()\n      {}\n\n      inline void init_precompilation()\n      {\n         if (settings_.collect_variables_enabled())\n            dec_.collect_variables() = true;\n\n         if (settings_.collect_functions_enabled())\n            dec_.collect_functions() = true;\n\n         if (settings_.collect_assignments_enabled())\n            dec_.collect_assignments() = true;\n\n         if (settings_.replacer_enabled())\n         {\n            symbol_replacer_.clear();\n            symbol_replacer_.add_replace(\"true\" ,\"1\",lexer::token::e_number);\n            symbol_replacer_.add_replace(\"false\",\"0\",lexer::token::e_number);\n            helper_assembly_.token_modifier_list.clear();\n            helper_assembly_.register_modifier(&symbol_replacer_);\n         }\n\n         if (settings_.commutative_check_enabled())\n         {\n            for (std::size_t i = 0; i < details::reserved_words_size; ++i)\n            {\n               commutative_inserter_.ignore_symbol(details::reserved_words[i]);\n            }\n\n            helper_assembly_.token_inserter_list.clear();\n            helper_assembly_.register_inserter(&commutative_inserter_);\n         }\n\n         if (settings_.joiner_enabled())\n         {\n            helper_assembly_.token_joiner_list.clear();\n            helper_assembly_.register_joiner(&operator_joiner_2_);\n            helper_assembly_.register_joiner(&operator_joiner_3_);\n         
}\n\n         if (\n              settings_.numeric_check_enabled () ||\n              settings_.bracket_check_enabled () ||\n              settings_.sequence_check_enabled()\n            )\n         {\n            helper_assembly_.token_scanner_list.clear();\n\n            if (settings_.numeric_check_enabled())\n            {\n               helper_assembly_.register_scanner(&numeric_checker_);\n            }\n\n            if (settings_.bracket_check_enabled())\n            {\n               helper_assembly_.register_scanner(&bracket_checker_);\n            }\n\n            if (settings_.sequence_check_enabled())\n            {\n               helper_assembly_.register_scanner(&sequence_validator_);\n            }\n         }\n      }\n\n      inline bool compile(const std::string& expression_string, expression<T>& expr)\n      {\n         state_          .reset();\n         error_list_     .clear();\n         brkcnt_list_    .clear();\n         synthesis_error_.clear();\n         sem_            .cleanup();\n\n         return_cleanup();\n\n         expression_generator_.set_allocator(node_allocator_);\n\n         if (expression_string.empty())\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          \"ERR000 - Empty expression!\",\n                          exprtk_error_location));\n\n            return false;\n         }\n\n         if (!init(expression_string))\n         {\n            process_lexer_errors();\n            return false;\n         }\n\n         if (lexer().empty())\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          \"ERR001 - Empty expression!\",\n                          exprtk_error_location));\n\n            return false;\n         }\n\n         if (!run_assemblies())\n         {\n            return false;\n         }\n\n         symtab_store_.symtab_list_ = expr.get_symbol_table_list();\n         dec_.clear();\n\n         
lexer().begin();\n\n         next_token();\n\n         expression_node_ptr e = parse_corpus();\n\n         if ((0 != e) && (token_t::e_eof == current_token().type))\n         {\n            bool* retinvk_ptr = 0;\n\n            if (state_.return_stmt_present)\n            {\n               dec_.return_present_ = true;\n\n               e = expression_generator_\n                     .return_envelope(e,results_context_,retinvk_ptr);\n            }\n\n            expr.set_expression(e);\n            expr.set_retinvk(retinvk_ptr);\n\n            register_local_vars(expr);\n            register_return_results(expr);\n\n            return !(!expr);\n         }\n         else\n         {\n            if (error_list_.empty())\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR002 - Invalid expression encountered\",\n                             exprtk_error_location));\n            }\n\n            dec_.clear    ();\n            sem_.cleanup  ();\n            return_cleanup();\n\n            if ((0 != e) && branch_deletable(e))\n            {\n               destroy_node(e);\n            }\n\n            return false;\n         }\n      }\n\n      inline expression_t compile(const std::string& expression_string, symbol_table_t& symtab)\n      {\n         expression_t expr;\n\n         expr.register_symbol_table(symtab);\n\n         compile(expression_string,expr);\n\n         return expr;\n      }\n\n      void process_lexer_errors()\n      {\n         for (std::size_t i = 0; i < lexer().size(); ++i)\n         {\n            if (lexer()[i].is_error())\n            {\n               std::string diagnostic = \"ERR003 - \";\n\n               switch (lexer()[i].type)\n               {\n                  case lexer::token::e_error      : diagnostic += \"General token error\";\n                                                    break;\n\n               
   case lexer::token::e_err_symbol : diagnostic += \"Symbol error\";\n                                                    break;\n\n                  case lexer::token::e_err_number : diagnostic += \"Invalid numeric token\";\n                                                    break;\n\n                  case lexer::token::e_err_string : diagnostic += \"Invalid string token\";\n                                                    break;\n\n                  case lexer::token::e_err_sfunc  : diagnostic += \"Invalid special function token\";\n                                                    break;\n\n                  default                         : diagnostic += \"Unknown compiler error\";\n               }\n\n               set_error(\n                  make_error(parser_error::e_lexer,\n                             lexer()[i],\n                             diagnostic + \": \" + lexer()[i].value,\n                             exprtk_error_location));\n            }\n         }\n      }\n\n      inline bool run_assemblies()\n      {\n         if (settings_.commutative_check_enabled())\n         {\n            helper_assembly_.run_inserters(lexer());\n         }\n\n         if (settings_.joiner_enabled())\n         {\n            helper_assembly_.run_joiners(lexer());\n         }\n\n         if (settings_.replacer_enabled())\n         {\n            helper_assembly_.run_modifiers(lexer());\n         }\n\n         if (\n              settings_.numeric_check_enabled () ||\n              settings_.bracket_check_enabled () ||\n              settings_.sequence_check_enabled()\n            )\n         {\n            if (!helper_assembly_.run_scanners(lexer()))\n            {\n               if (helper_assembly_.error_token_scanner)\n               {\n                  lexer::helper::bracket_checker*    bracket_checker_ptr    = 0;\n                  lexer::helper::numeric_checker*    numeric_checker_ptr    = 0;\n                  lexer::helper::sequence_validator* 
sequence_validator_ptr = 0;\n\n                  if (0 != (bracket_checker_ptr = dynamic_cast<lexer::helper::bracket_checker*>(helper_assembly_.error_token_scanner)))\n                  {\n                     set_error(\n                        make_error(parser_error::e_token,\n                                   bracket_checker_ptr->error_token(),\n                                   \"ERR004 - Mismatched brackets: '\" + bracket_checker_ptr->error_token().value + \"'\",\n                                   exprtk_error_location));\n                  }\n                  else if (0 != (numeric_checker_ptr = dynamic_cast<lexer::helper::numeric_checker*>(helper_assembly_.error_token_scanner)))\n                  {\n                     for (std::size_t i = 0; i < numeric_checker_ptr->error_count(); ++i)\n                     {\n                        lexer::token error_token = lexer()[numeric_checker_ptr->error_index(i)];\n\n                        set_error(\n                           make_error(parser_error::e_token,\n                                      error_token,\n                                      \"ERR005 - Invalid numeric token: '\" + error_token.value + \"'\",\n                                      exprtk_error_location));\n                     }\n\n                     if (numeric_checker_ptr->error_count())\n                     {\n                        numeric_checker_ptr->clear_errors();\n                     }\n                  }\n                  else if (0 != (sequence_validator_ptr = dynamic_cast<lexer::helper::sequence_validator*>(helper_assembly_.error_token_scanner)))\n                  {\n                     for (std::size_t i = 0; i < sequence_validator_ptr->error_count(); ++i)\n                     {\n                        std::pair<lexer::token,lexer::token> error_token = sequence_validator_ptr->error(i);\n\n                        set_error(\n                           make_error(parser_error::e_token,\n                           
           error_token.first,\n                                      \"ERR006 - Invalid token sequence: '\" +\n                                      error_token.first.value  + \"' and '\" +\n                                      error_token.second.value + \"'\",\n                                      exprtk_error_location));\n                     }\n\n                     if (sequence_validator_ptr->error_count())\n                     {\n                        sequence_validator_ptr->clear_errors();\n                     }\n                  }\n               }\n\n               return false;\n            }\n         }\n\n         return true;\n      }\n\n      inline settings_store& settings()\n      {\n         return settings_;\n      }\n\n      inline parser_error::type get_error(const std::size_t& index)\n      {\n         if (index < error_list_.size())\n            return error_list_[index];\n         else\n            throw std::invalid_argument(\"parser::get_error() - Invalid error index specified\");\n      }\n\n      inline std::string error() const\n      {\n         if (!error_list_.empty())\n         {\n            return error_list_[0].diagnostic;\n         }\n         else\n            return std::string(\"No Error\");\n      }\n\n      inline std::size_t error_count() const\n      {\n         return error_list_.size();\n      }\n\n      inline dependent_entity_collector& dec()\n      {\n         return dec_;\n      }\n\n      inline bool replace_symbol(const std::string& old_symbol, const std::string& new_symbol)\n      {\n         if (!settings_.replacer_enabled())\n            return false;\n         else if (details::is_reserved_word(old_symbol))\n            return false;\n         else\n            return symbol_replacer_.add_replace(old_symbol,new_symbol,lexer::token::e_symbol);\n      }\n\n      inline bool remove_replace_symbol(const std::string& symbol)\n      {\n         if (!settings_.replacer_enabled())\n            return false;\n   
      else if (details::is_reserved_word(symbol))\n            return false;\n         else\n            return symbol_replacer_.remove(symbol);\n      }\n\n      inline void enable_unknown_symbol_resolver(unknown_symbol_resolver* usr = reinterpret_cast<unknown_symbol_resolver*>(0))\n      {\n         resolve_unknown_symbol_ = true;\n\n         if (usr)\n            unknown_symbol_resolver_ = usr;\n         else\n            unknown_symbol_resolver_ = &default_usr_;\n      }\n\n      inline void enable_unknown_symbol_resolver(unknown_symbol_resolver& usr)\n      {\n         enable_unknown_symbol_resolver(&usr);\n      }\n\n      inline void disable_unknown_symbol_resolver()\n      {\n         resolve_unknown_symbol_  = false;\n         unknown_symbol_resolver_ = &default_usr_;\n      }\n\n   private:\n\n      inline bool valid_base_operation(const std::string& symbol)\n      {\n         const std::size_t length = symbol.size();\n\n         if (\n              (length < 3) || // Shortest base op symbol length\n              (length > 9)    // Longest base op symbol length\n            )\n            return false;\n         else\n            return settings_.function_enabled(symbol) &&\n                   (base_ops_map_.end() != base_ops_map_.find(symbol));\n      }\n\n      inline bool valid_vararg_operation(const std::string& symbol)\n      {\n         static const std::string s_sum     = \"sum\" ;\n         static const std::string s_mul     = \"mul\" ;\n         static const std::string s_avg     = \"avg\" ;\n         static const std::string s_min     = \"min\" ;\n         static const std::string s_max     = \"max\" ;\n         static const std::string s_mand    = \"mand\";\n         static const std::string s_mor     = \"mor\" ;\n         static const std::string s_multi   = \"~\"   ;\n         static const std::string s_mswitch = \"[*]\" ;\n\n         return\n               (\n                  details::imatch(symbol,s_sum    ) ||\n                  
details::imatch(symbol,s_mul    ) ||\n                  details::imatch(symbol,s_avg    ) ||\n                  details::imatch(symbol,s_min    ) ||\n                  details::imatch(symbol,s_max    ) ||\n                  details::imatch(symbol,s_mand   ) ||\n                  details::imatch(symbol,s_mor    ) ||\n                  details::imatch(symbol,s_multi  ) ||\n                  details::imatch(symbol,s_mswitch)\n               ) &&\n               settings_.function_enabled(symbol);\n      }\n\n      bool is_invalid_arithmetic_operation(const details::operator_type operation)\n      {\n         return settings_.arithmetic_disabled(operation);\n      }\n\n      bool is_invalid_assignment_operation(const details::operator_type operation)\n      {\n         return settings_.assignment_disabled(operation);\n      }\n\n      bool is_invalid_inequality_operation(const details::operator_type operation)\n      {\n         return settings_.inequality_disabled(operation);\n      }\n\n      #ifdef exprtk_enable_debugging\n      inline void next_token()\n      {\n         std::string ct_str = current_token().value;\n         parser_helper::next_token();\n         std::string depth(2 * state_.scope_depth,' ');\n         exprtk_debug((\"%s\"\n                       \"prev[%s] --> curr[%s]\\n\",\n                       depth.c_str(),\n                       ct_str.c_str(),\n                       current_token().value.c_str()));\n      }\n      #endif\n\n      inline expression_node_ptr parse_corpus()\n      {\n         std::vector<expression_node_ptr> arg_list;\n         std::vector<bool> side_effect_list;\n\n         expression_node_ptr result = error_node();\n\n         scoped_vec_delete<expression_node_t> sdd((*this),arg_list);\n\n         lexer::token begin_token;\n         lexer::token   end_token;\n\n         for ( ; ; )\n         {\n            state_.side_effect_present = false;\n\n            begin_token = current_token();\n\n            expression_node_ptr 
arg = parse_expression();\n\n            if (0 == arg)\n            {\n               if (error_list_.empty())\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR007 - Invalid expression encountered\",\n                                exprtk_error_location));\n               }\n\n               return error_node();\n            }\n            else\n            {\n               arg_list.push_back(arg);\n\n               side_effect_list.push_back(state_.side_effect_present);\n\n               end_token = current_token();\n\n               std::string sub_expr = construct_subexpr(begin_token,end_token);\n\n               exprtk_debug((\"parse_corpus(%02d) Subexpr: %s\\n\",\n                             static_cast<int>(arg_list.size() - 1),\n                             sub_expr.c_str()));\n\n               exprtk_debug((\"parse_corpus(%02d) - Side effect present: %s\\n\",\n                             static_cast<int>(arg_list.size() - 1),\n                             state_.side_effect_present ? 
\"true\" : \"false\"));\n\n               exprtk_debug((\"-------------------------------------------------\\n\"));\n            }\n\n            if (lexer().finished())\n               break;\n            else if (token_is(token_t::e_eof,prsrhlpr_t::e_hold))\n            {\n               if (lexer().finished())\n                  break;\n               else\n                  next_token();\n            }\n         }\n\n         if (\n              !arg_list.empty() &&\n              is_return_node(arg_list.back())\n            )\n         {\n            dec_.final_stmt_return_ = true;\n         }\n\n         result = simplify(arg_list,side_effect_list);\n\n         sdd.delete_ptr = (0 == result);\n\n         return result;\n      }\n\n      std::string construct_subexpr(lexer::token& begin_token, lexer::token& end_token)\n      {\n         std::string result = lexer().substr(begin_token.position,end_token.position);\n\n         for (std::size_t i = 0; i < result.size(); ++i)\n         {\n            if (details::is_whitespace(result[i])) result[i] = ' ';\n         }\n\n         return result;\n      }\n\n      static const precedence_level default_precedence = e_level00;\n\n      struct state_t\n      {\n         inline void set(const precedence_level& l,\n                         const precedence_level& r,\n                         const details::operator_type& o)\n         {\n            left  = l;\n            right = r;\n            operation = o;\n         }\n\n         inline void reset()\n         {\n            left      = e_level00;\n            right     = e_level00;\n            operation = details::e_default;\n         }\n\n         precedence_level left;\n         precedence_level right;\n         details::operator_type operation;\n      };\n\n      inline expression_node_ptr parse_expression(precedence_level precedence = e_level00)\n      {\n         expression_node_ptr expression = parse_branch(precedence);\n\n         if (0 == expression)\n        
 {\n            return error_node();\n         }\n\n         bool break_loop = false;\n\n         state_t current_state;\n\n         for ( ; ; )\n         {\n            current_state.reset();\n\n            switch (current_token().type)\n            {\n               case token_t::e_assign : current_state.set(e_level00,e_level00,details::e_assign); break;\n               case token_t::e_addass : current_state.set(e_level00,e_level00,details::e_addass); break;\n               case token_t::e_subass : current_state.set(e_level00,e_level00,details::e_subass); break;\n               case token_t::e_mulass : current_state.set(e_level00,e_level00,details::e_mulass); break;\n               case token_t::e_divass : current_state.set(e_level00,e_level00,details::e_divass); break;\n               case token_t::e_modass : current_state.set(e_level00,e_level00,details::e_modass); break;\n               case token_t::e_swap   : current_state.set(e_level00,e_level00,details::e_swap  ); break;\n               case token_t::e_lt     : current_state.set(e_level05,e_level06,details::    e_lt); break;\n               case token_t::e_lte    : current_state.set(e_level05,e_level06,details::   e_lte); break;\n               case token_t::e_eq     : current_state.set(e_level05,e_level06,details::    e_eq); break;\n               case token_t::e_ne     : current_state.set(e_level05,e_level06,details::    e_ne); break;\n               case token_t::e_gte    : current_state.set(e_level05,e_level06,details::   e_gte); break;\n               case token_t::e_gt     : current_state.set(e_level05,e_level06,details::    e_gt); break;\n               case token_t::e_add    : current_state.set(e_level07,e_level08,details::   e_add); break;\n               case token_t::e_sub    : current_state.set(e_level07,e_level08,details::   e_sub); break;\n               case token_t::e_div    : current_state.set(e_level10,e_level11,details::   e_div); break;\n               case token_t::e_mul    : 
current_state.set(e_level10,e_level11,details::   e_mul); break;\n               case token_t::e_mod    : current_state.set(e_level10,e_level11,details::   e_mod); break;\n               case token_t::e_pow    : current_state.set(e_level12,e_level12,details::   e_pow); break;\n               default                : if (token_t::e_symbol == current_token().type)\n                                        {\n                                           static const std::string s_and   =   \"and\";\n                                           static const std::string s_nand  =  \"nand\";\n                                           static const std::string s_or    =    \"or\";\n                                           static const std::string s_nor   =   \"nor\";\n                                           static const std::string s_xor   =   \"xor\";\n                                           static const std::string s_xnor  =  \"xnor\";\n                                           static const std::string s_in    =    \"in\";\n                                           static const std::string s_like  =  \"like\";\n                                           static const std::string s_ilike = \"ilike\";\n                                           static const std::string s_and1  =     \"&\";\n                                           static const std::string s_or1   =     \"|\";\n                                           static const std::string s_not   =   \"not\";\n\n                                           if (details::imatch(current_token().value,s_and))\n                                           {\n                                              current_state.set(e_level03, e_level04, details::e_and);\n                                              break;\n                                           }\n                                           else if (details::imatch(current_token().value,s_and1))\n                                           {\n                   
                           #ifndef exprtk_disable_sc_andor\n                                              current_state.set(e_level03, e_level04, details::e_scand);\n                                              #else\n                                              current_state.set(e_level03, e_level04, details::e_and);\n                                              #endif\n                                              break;\n                                           }\n                                           else if (details::imatch(current_token().value,s_nand))\n                                           {\n                                              current_state.set(e_level03, e_level04, details::e_nand);\n                                              break;\n                                           }\n                                           else if (details::imatch(current_token().value,s_or))\n                                           {\n                                              current_state.set(e_level01, e_level02, details::e_or);\n                                              break;\n                                           }\n                                           else if (details::imatch(current_token().value,s_or1))\n                                           {\n                                              #ifndef exprtk_disable_sc_andor\n                                              current_state.set(e_level01, e_level02, details::e_scor);\n                                              #else\n                                              current_state.set(e_level01, e_level02, details::e_or);\n                                              #endif\n                                              break;\n                                           }\n                                           else if (details::imatch(current_token().value,s_nor))\n                                           {\n                                         
     current_state.set(e_level01, e_level02, details::e_nor);\n                                              break;\n                                           }\n                                           else if (details::imatch(current_token().value,s_xor))\n                                           {\n                                              current_state.set(e_level01, e_level02, details::e_xor);\n                                              break;\n                                           }\n                                           else if (details::imatch(current_token().value,s_xnor))\n                                           {\n                                              current_state.set(e_level01, e_level02, details::e_xnor);\n                                              break;\n                                           }\n                                           else if (details::imatch(current_token().value,s_in))\n                                           {\n                                              current_state.set(e_level04, e_level04, details::e_in);\n                                              break;\n                                           }\n                                           else if (details::imatch(current_token().value,s_like))\n                                           {\n                                              current_state.set(e_level04, e_level04, details::e_like);\n                                              break;\n                                           }\n                                           else if (details::imatch(current_token().value,s_ilike))\n                                           {\n                                              current_state.set(e_level04, e_level04, details::e_ilike);\n                                              break;\n                                           }\n                                           else if 
(details::imatch(current_token().value,s_not))\n                                           {\n                                              break;\n                                           }\n                                        }\n\n                                        break_loop = true;\n            }\n\n            if (break_loop)\n            {\n               parse_pending_string_rangesize(expression);\n               break;\n            }\n            else if (current_state.left < precedence)\n               break;\n\n            lexer::token prev_token = current_token();\n\n            next_token();\n\n            expression_node_ptr right_branch   = error_node();\n            expression_node_ptr new_expression = error_node();\n\n            if (is_invalid_arithmetic_operation(current_state.operation))\n            {\n               free_node(node_allocator_,expression);\n\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             prev_token,\n                             \"ERR008 - Invalid arithmetic operation '\" + details::to_str(current_state.operation) + \"'\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n            else if (is_invalid_inequality_operation(current_state.operation))\n            {\n               free_node(node_allocator_,expression);\n\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             prev_token,\n                             \"ERR009 - Invalid inequality operation '\" + details::to_str(current_state.operation) + \"'\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n            else if (is_invalid_assignment_operation(current_state.operation))\n            {\n               free_node(node_allocator_,expression);\n\n               set_error(\n                  
make_error(parser_error::e_syntax,\n                             prev_token,\n                             \"ERR010 - Invalid assignment operation '\" + details::to_str(current_state.operation) + \"'\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n\n            if (0 != (right_branch = parse_expression(current_state.right)))\n            {\n               if (\n                    details::is_return_node(  expression) ||\n                    details::is_return_node(right_branch)\n                  )\n               {\n                  free_node(node_allocator_,   expression);\n                  free_node(node_allocator_, right_branch);\n\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                prev_token,\n                                \"ERR011 - Return statements cannot be part of sub-expressions\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n\n               new_expression = expression_generator_\n                                  (\n                                    current_state.operation,\n                                    expression,\n                                    right_branch\n                                  );\n            }\n\n            if (0 == new_expression)\n            {\n               if (error_list_.empty())\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                prev_token,\n                                !synthesis_error_.empty() ?\n                                synthesis_error_ :\n                                \"ERR012 - General parsing error at token: '\" + prev_token.value + \"'\",\n                                exprtk_error_location));\n               }\n\n               free_node(node_allocator_,   expression);\n               
free_node(node_allocator_, right_branch);\n\n               return error_node();\n            }\n            else\n            {\n               if (\n                    token_is(token_t::e_ternary,prsrhlpr_t::e_hold) &&\n                    (precedence == e_level00)\n                  )\n               {\n                  expression = parse_ternary_conditional_statement(new_expression);\n               }\n               else\n                  expression = new_expression;\n\n               parse_pending_string_rangesize(expression);\n            }\n         }\n\n         return expression;\n      }\n\n      bool simplify_unary_negation_branch(expression_node_ptr& node)\n      {\n         {\n            typedef details::unary_branch_node<T,details::neg_op<T> > ubn_t;\n            ubn_t* n = dynamic_cast<ubn_t*>(node);\n\n            if (n)\n            {\n               expression_node_ptr un_r = n->branch(0);\n               n->release();\n               free_node(node_allocator_,node);\n               node = un_r;\n\n               return true;\n            }\n         }\n\n         {\n            typedef details::unary_variable_node<T,details::neg_op<T> > uvn_t;\n\n            uvn_t* n = dynamic_cast<uvn_t*>(node);\n\n            if (n)\n            {\n               const T& v = n->v();\n               expression_node_ptr return_node = error_node();\n\n               if (\n                    (0 != (return_node = symtab_store_.get_variable(v))) ||\n                    (0 != (return_node = sem_         .get_variable(v)))\n                  )\n               {\n                  free_node(node_allocator_,node);\n                  node = return_node;\n\n                  return true;\n               }\n               else\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR013 - Failed to find variable node in symbol 
table\",\n                                exprtk_error_location));\n\n                  free_node(node_allocator_,node);\n\n                  return false;\n               }\n            }\n         }\n\n         return false;\n      }\n\n      static inline expression_node_ptr error_node()\n      {\n         return reinterpret_cast<expression_node_ptr>(0);\n      }\n\n      template <typename Type, std::size_t N>\n      struct scoped_delete\n      {\n         typedef Type* ptr_t;\n\n         scoped_delete(parser<T>& pr, ptr_t& p)\n         : delete_ptr(true),\n           parser_(pr),\n           p_(&p)\n         {}\n\n         scoped_delete(parser<T>& pr, ptr_t (&p)[N])\n         : delete_ptr(true),\n           parser_(pr),\n           p_(&p[0])\n         {}\n\n        ~scoped_delete()\n         {\n            if (delete_ptr)\n            {\n               for (std::size_t i = 0; i < N; ++i)\n               {\n                  free_node(parser_.node_allocator_,p_[i]);\n               }\n            }\n         }\n\n         bool delete_ptr;\n         parser<T>& parser_;\n         ptr_t* p_;\n\n      private:\n\n         scoped_delete<Type,N>& operator=(const scoped_delete<Type,N>&);\n      };\n\n      template <typename Type>\n      struct scoped_deq_delete\n      {\n         typedef Type* ptr_t;\n\n         scoped_deq_delete(parser<T>& pr, std::deque<ptr_t>& deq)\n         : delete_ptr(true),\n           parser_(pr),\n           deq_(deq)\n         {}\n\n        ~scoped_deq_delete()\n         {\n            if (delete_ptr && !deq_.empty())\n            {\n               for (std::size_t i = 0; i < deq_.size(); ++i)\n               {\n                  free_node(parser_.node_allocator_,deq_[i]);\n               }\n\n               deq_.clear();\n            }\n         }\n\n         bool delete_ptr;\n         parser<T>& parser_;\n         std::deque<ptr_t>& deq_;\n\n      private:\n\n         scoped_deq_delete<Type>& operator=(const scoped_deq_delete<Type>&);\n   
   };\n\n      template <typename Type>\n      struct scoped_vec_delete\n      {\n         typedef Type* ptr_t;\n\n         scoped_vec_delete(parser<T>& pr, std::vector<ptr_t>& vec)\n         : delete_ptr(true),\n           parser_(pr),\n           vec_(vec)\n         {}\n\n        ~scoped_vec_delete()\n         {\n            if (delete_ptr && !vec_.empty())\n            {\n               for (std::size_t i = 0; i < vec_.size(); ++i)\n               {\n                  free_node(parser_.node_allocator_,vec_[i]);\n               }\n\n               vec_.clear();\n            }\n         }\n\n         bool delete_ptr;\n         parser<T>& parser_;\n         std::vector<ptr_t>& vec_;\n\n      private:\n\n         scoped_vec_delete<Type>& operator=(const scoped_vec_delete<Type>&);\n      };\n\n      struct scoped_bool_negator\n      {\n         scoped_bool_negator(bool& bb)\n         : b(bb)\n         { b = !b; }\n\n        ~scoped_bool_negator()\n         { b = !b; }\n\n         bool& b;\n      };\n\n      struct scoped_bool_or_restorer\n      {\n         scoped_bool_or_restorer(bool& bb)\n         : b(bb),\n           original_value_(bb)\n         {}\n\n        ~scoped_bool_or_restorer()\n         {\n            b = b || original_value_;\n         }\n\n         bool& b;\n         bool original_value_;\n      };\n\n      inline expression_node_ptr parse_function_invocation(ifunction<T>* function, const std::string& function_name)\n      {\n         expression_node_ptr func_node = reinterpret_cast<expression_node_ptr>(0);\n\n         switch (function->param_count)\n         {\n            case  0 : func_node = parse_function_call_0  (function,function_name); break;\n            case  1 : func_node = parse_function_call< 1>(function,function_name); break;\n            case  2 : func_node = parse_function_call< 2>(function,function_name); break;\n            case  3 : func_node = parse_function_call< 3>(function,function_name); break;\n            case  4 : func_node = 
parse_function_call< 4>(function,function_name); break;\n            case  5 : func_node = parse_function_call< 5>(function,function_name); break;\n            case  6 : func_node = parse_function_call< 6>(function,function_name); break;\n            case  7 : func_node = parse_function_call< 7>(function,function_name); break;\n            case  8 : func_node = parse_function_call< 8>(function,function_name); break;\n            case  9 : func_node = parse_function_call< 9>(function,function_name); break;\n            case 10 : func_node = parse_function_call<10>(function,function_name); break;\n            case 11 : func_node = parse_function_call<11>(function,function_name); break;\n            case 12 : func_node = parse_function_call<12>(function,function_name); break;\n            case 13 : func_node = parse_function_call<13>(function,function_name); break;\n            case 14 : func_node = parse_function_call<14>(function,function_name); break;\n            case 15 : func_node = parse_function_call<15>(function,function_name); break;\n            case 16 : func_node = parse_function_call<16>(function,function_name); break;\n            case 17 : func_node = parse_function_call<17>(function,function_name); break;\n            case 18 : func_node = parse_function_call<18>(function,function_name); break;\n            case 19 : func_node = parse_function_call<19>(function,function_name); break;\n            case 20 : func_node = parse_function_call<20>(function,function_name); break;\n            default : {\n                         set_error(\n                            make_error(parser_error::e_syntax,\n                                       current_token(),\n                                       \"ERR014 - Invalid number of parameters for function: '\" + function_name + \"'\",\n                                       exprtk_error_location));\n\n                         return error_node();\n                      }\n         }\n\n         if (func_node)\n   
         return func_node;\n         else\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR015 - Failed to generate call to function: '\" + function_name + \"'\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n      }\n\n      template <std::size_t NumberofParameters>\n      inline expression_node_ptr parse_function_call(ifunction<T>* function, const std::string& function_name)\n      {\n         #ifdef _MSC_VER\n            #pragma warning(push)\n            #pragma warning(disable: 4127)\n         #endif\n         if (0 == NumberofParameters)\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR016 - Expecting ifunction '\" + function_name + \"' to have non-zero parameter count\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         #ifdef _MSC_VER\n            #pragma warning(pop)\n         #endif\n\n         expression_node_ptr branch[NumberofParameters];\n         expression_node_ptr result  = error_node();\n\n         std::fill_n(branch, NumberofParameters, reinterpret_cast<expression_node_ptr>(0));\n\n         scoped_delete<expression_node_t,NumberofParameters> sd((*this),branch);\n\n         next_token();\n\n         if (!token_is(token_t::e_lbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR017 - Expecting argument list for function: '\" + function_name + \"'\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         for (int i = 0; i < static_cast<int>(NumberofParameters); ++i)\n         {\n            branch[i] = parse_expression();\n\n            if 
(0 == branch[i])\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR018 - Failed to parse argument \" + details::to_str(i) + \" for function: '\" + function_name + \"'\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n            else if (i < static_cast<int>(NumberofParameters - 1))\n            {\n               if (!token_is(token_t::e_comma))\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR019 - Invalid number of arguments for function: '\" + function_name + \"'\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n            }\n         }\n\n         if (!token_is(token_t::e_rbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR020 - Invalid number of arguments for function: '\" + function_name + \"'\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else\n            result = expression_generator_.function(function,branch);\n\n         sd.delete_ptr = false;\n\n         return result;\n      }\n\n      inline expression_node_ptr parse_function_call_0(ifunction<T>* function, const std::string& function_name)\n      {\n         expression_node_ptr result = expression_generator_.function(function);\n\n         state_.side_effect_present = function->has_side_effects();\n\n         next_token();\n\n         if (\n               token_is(token_t::e_lbracket) &&\n              !token_is(token_t::e_rbracket)\n            )\n         {\n            set_error(\n               
make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR021 - Expecting '()' to proceed call to function: '\" + function_name + \"'\",\n                          exprtk_error_location));\n\n            free_node(node_allocator_,result);\n\n            return error_node();\n         }\n         else\n            return result;\n      }\n\n      template <std::size_t MaxNumberofParameters>\n      inline std::size_t parse_base_function_call(expression_node_ptr (&param_list)[MaxNumberofParameters], const std::string& function_name = \"\")\n      {\n         std::fill_n(param_list, MaxNumberofParameters, reinterpret_cast<expression_node_ptr>(0));\n\n         scoped_delete<expression_node_t,MaxNumberofParameters> sd((*this),param_list);\n\n         next_token();\n\n         if (!token_is(token_t::e_lbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR022 - Expected a '(' at start of function call to '\" + function_name  +\n                          \"', instead got: '\" + current_token().value + \"'\",\n                          exprtk_error_location));\n\n            return 0;\n         }\n\n         if (token_is(token_t::e_rbracket, e_hold))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR023 - Expected at least one input parameter for function call '\" + function_name + \"'\",\n                          exprtk_error_location));\n\n            return 0;\n         }\n\n         std::size_t param_index = 0;\n\n         for (; param_index < MaxNumberofParameters; ++param_index)\n         {\n            param_list[param_index] = parse_expression();\n\n            if (0 == param_list[param_index])\n               return 0;\n            else if (token_is(token_t::e_rbracket))\n            
{\n               sd.delete_ptr = false;\n               break;\n            }\n            else if (token_is(token_t::e_comma))\n               continue;\n            else\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR024 - Expected a ',' between function input parameters, instead got: '\" + current_token().value + \"'\",\n                             exprtk_error_location));\n\n               return 0;\n            }\n         }\n\n         if (sd.delete_ptr)\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR025 - Invalid number of input parameters passed to function '\" + function_name  + \"'\",\n                          exprtk_error_location));\n\n            return 0;\n         }\n\n         return (param_index + 1);\n      }\n\n      inline expression_node_ptr parse_base_operation()\n      {\n         typedef std::pair<base_ops_map_t::iterator,base_ops_map_t::iterator> map_range_t;\n\n         const std::string operation_name   = current_token().value;\n         const token_t     diagnostic_token = current_token();\n\n         map_range_t itr_range = base_ops_map_.equal_range(operation_name);\n\n         if (0 == std::distance(itr_range.first,itr_range.second))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          diagnostic_token,\n                          \"ERR026 - No entry found for base operation: \" + operation_name,\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         static const std::size_t MaxNumberofParameters = 4;\n         expression_node_ptr param_list[MaxNumberofParameters] = {0};\n\n         const std::size_t parameter_count = parse_base_function_call(param_list, operation_name);\n\n  
       if ((parameter_count > 0) && (parameter_count <= MaxNumberofParameters))\n         {\n            for (base_ops_map_t::iterator itr = itr_range.first; itr != itr_range.second; ++itr)\n            {\n               details::base_operation_t& operation = itr->second;\n\n               if (operation.num_params == parameter_count)\n               {\n                  switch (parameter_count)\n                  {\n                     #define base_opr_case(N)                                         \\\n                     case N : {                                                       \\\n                                 expression_node_ptr pl##N[N] = {0};                  \\\n                                 std::copy(param_list, param_list + N, pl##N);        \\\n                                 lodge_symbol(operation_name, e_st_function);         \\\n                                 return expression_generator_(operation.type, pl##N); \\\n                              }                                                       \\\n\n                     base_opr_case(1)\n                     base_opr_case(2)\n                     base_opr_case(3)\n                     base_opr_case(4)\n                     #undef base_opr_case\n                  }\n               }\n            }\n         }\n\n         for (std::size_t i = 0; i < MaxNumberofParameters; ++i)\n         {\n            free_node(node_allocator_, param_list[i]);\n         }\n\n         set_error(\n            make_error(parser_error::e_syntax,\n                       diagnostic_token,\n                       \"ERR027 - Invalid number of input parameters for call to function: '\" + operation_name + \"'\",\n                       exprtk_error_location));\n\n         return error_node();\n      }\n\n      inline expression_node_ptr parse_conditional_statement_01(expression_node_ptr condition)\n      {\n         // Parse: [if][(][condition][,][consequent][,][alternative][)]\n\n         
expression_node_ptr consequent  = error_node();\n         expression_node_ptr alternative = error_node();\n\n         bool result = true;\n\n         if (!token_is(token_t::e_comma))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR028 - Expected ',' between if-statement condition and consequent\",\n                          exprtk_error_location));\n            result = false;\n         }\n         else if (0 == (consequent = parse_expression()))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR029 - Failed to parse consequent for if-statement\",\n                          exprtk_error_location));\n            result = false;\n         }\n         else if (!token_is(token_t::e_comma))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR030 - Expected ',' between if-statement consequent and alternative\",\n                          exprtk_error_location));\n            result = false;\n         }\n         else if (0 == (alternative = parse_expression()))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR031 - Failed to parse alternative for if-statement\",\n                          exprtk_error_location));\n            result = false;\n         }\n         else if (!token_is(token_t::e_rbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR032 - Expected ')' at the end of if-statement\",\n                          exprtk_error_location));\n            result = false;\n         }\n\n         #ifndef 
exprtk_disable_string_capabilities\n         if (result)\n         {\n            const bool consq_is_str = is_generally_string_node( consequent);\n            const bool alter_is_str = is_generally_string_node(alternative);\n\n            if (consq_is_str || alter_is_str)\n            {\n               if (consq_is_str && alter_is_str)\n               {\n                  return expression_generator_\n                           .conditional_string(condition,consequent,alternative);\n               }\n\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR033 - Return types of ternary if-statement differ\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n         }\n         #endif\n\n         if (!result)\n         {\n            free_node(node_allocator_,  condition);\n            free_node(node_allocator_, consequent);\n            free_node(node_allocator_,alternative);\n\n            return error_node();\n         }\n         else\n            return expression_generator_\n                      .conditional(condition,consequent,alternative);\n      }\n\n      inline expression_node_ptr parse_conditional_statement_02(expression_node_ptr condition)\n      {\n         expression_node_ptr consequent  = error_node();\n         expression_node_ptr alternative = error_node();\n\n         bool result = true;\n\n         if (token_is(token_t::e_lcrlbracket,prsrhlpr_t::e_hold))\n         {\n            if (0 == (consequent = parse_multi_sequence(\"if-statement-01\")))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR034 - Failed to parse body of consequent for if-statement\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n  
       }\n         else\n         {\n            if (\n                 settings_.commutative_check_enabled() &&\n                 token_is(token_t::e_mul,prsrhlpr_t::e_hold)\n               )\n            {\n               next_token();\n            }\n\n            if (0 != (consequent = parse_expression()))\n            {\n               if (!token_is(token_t::e_eof))\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR035 - Expected ';' at the end of the consequent for if-statement\",\n                                exprtk_error_location));\n\n                  result = false;\n               }\n            }\n            else\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR036 - Failed to parse body of consequent for if-statement\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n         }\n\n         if (result)\n         {\n            if (details::imatch(current_token().value,\"else\"))\n            {\n               next_token();\n\n               if (token_is(token_t::e_lcrlbracket,prsrhlpr_t::e_hold))\n               {\n                  if (0 == (alternative = parse_multi_sequence(\"else-statement-01\")))\n                  {\n                     set_error(\n                        make_error(parser_error::e_syntax,\n                                   current_token(),\n                                   \"ERR037 - Failed to parse body of the 'else' for if-statement\",\n                                   exprtk_error_location));\n\n                     result = false;\n                  }\n               }\n               else if (details::imatch(current_token().value,\"if\"))\n               {\n                  if (0 == 
(alternative = parse_conditional_statement()))\n                  {\n                     set_error(\n                        make_error(parser_error::e_syntax,\n                                   current_token(),\n                                   \"ERR038 - Failed to parse body of if-else statement\",\n                                   exprtk_error_location));\n\n                     result = false;\n                  }\n               }\n               else if (0 != (alternative = parse_expression()))\n               {\n                  if (!token_is(token_t::e_eof))\n                  {\n                     set_error(\n                        make_error(parser_error::e_syntax,\n                                   current_token(),\n                                   \"ERR039 - Expected ';' at the end of the 'else-if' for the if-statement\",\n                                   exprtk_error_location));\n\n                     result = false;\n                  }\n               }\n               else\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR040 - Failed to parse body of the 'else' for if-statement\",\n                                exprtk_error_location));\n\n                  result = false;\n               }\n            }\n         }\n\n         #ifndef exprtk_disable_string_capabilities\n         if (result)\n         {\n            const bool consq_is_str = is_generally_string_node( consequent);\n            const bool alter_is_str = is_generally_string_node(alternative);\n\n            if (consq_is_str || alter_is_str)\n            {\n               if (consq_is_str && alter_is_str)\n               {\n                  return expression_generator_\n                           .conditional_string(condition, consequent, alternative);\n               }\n\n               set_error(\n                  
make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR041 - Return types of ternary if-statement differ\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n         }\n         #endif\n\n         if (!result)\n         {\n            free_node(node_allocator_,   condition);\n            free_node(node_allocator_,  consequent);\n            free_node(node_allocator_, alternative);\n\n            return error_node();\n         }\n         else\n            return expression_generator_\n                      .conditional(condition, consequent, alternative);\n      }\n\n      inline expression_node_ptr parse_conditional_statement()\n      {\n         expression_node_ptr condition = error_node();\n\n         next_token();\n\n         if (!token_is(token_t::e_lbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR042 - Expected '(' at start of if-statement, instead got: '\" + current_token().value + \"'\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (0 == (condition = parse_expression()))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR043 - Failed to parse condition for if-statement\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (token_is(token_t::e_comma,prsrhlpr_t::e_hold))\n         {\n            // if (x,y,z)\n            return parse_conditional_statement_01(condition);\n         }\n         else if (token_is(token_t::e_rbracket))\n         {\n            // 00. if (x) y;\n            // 01. if (x) y; else z;\n            // 02. if (x) y; else {z0; ... 
zn;}\n            // 03. if (x) y; else if (z) w;\n            // 04. if (x) y; else if (z) w; else u;\n            // 05. if (x) y; else if (z) w; else {u0; ... un;}\n            // 06. if (x) y; else if (z) {w0; ... wn;}\n            // 07. if (x) {y0; ... yn;}\n            // 08. if (x) {y0; ... yn;} else z;\n            // 09. if (x) {y0; ... yn;} else {z0; ... zn;};\n            // 10. if (x) {y0; ... yn;} else if (z) w;\n            // 11. if (x) {y0; ... yn;} else if (z) w; else u;\n            // 12. if (x) {y0; ... yn;} else if (z) w; else {u0 ... un;}\n            // 13. if (x) {y0; ... yn;} else if (z) {w0; ... wn;}\n            return parse_conditional_statement_02(condition);\n         }\n\n         set_error(\n            make_error(parser_error::e_syntax,\n                       current_token(),\n                       \"ERR044 - Invalid if-statement\",\n                       exprtk_error_location));\n\n         free_node(node_allocator_,condition);\n\n         return error_node();\n      }\n\n      inline expression_node_ptr parse_ternary_conditional_statement(expression_node_ptr condition)\n      {\n         // Parse: [condition][?][consequent][:][alternative]\n         expression_node_ptr consequent  = error_node();\n         expression_node_ptr alternative = error_node();\n\n         bool result = true;\n\n         if (0 == condition)\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR045 - Encountered invalid condition branch for ternary if-statement\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (!token_is(token_t::e_ternary))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR046 - Expected '?' 
after condition of ternary if-statement\",\n                          exprtk_error_location));\n\n            result = false;\n         }\n         else if (0 == (consequent = parse_expression()))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR047 - Failed to parse consequent for ternary if-statement\",\n                          exprtk_error_location));\n\n            result = false;\n         }\n         else if (!token_is(token_t::e_colon))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR048 - Expected ':' between ternary if-statement consequent and alternative\",\n                          exprtk_error_location));\n\n            result = false;\n         }\n         else if (0 == (alternative = parse_expression()))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR049 - Failed to parse alternative for ternary if-statement\",\n                          exprtk_error_location));\n\n            result = false;\n         }\n\n         #ifndef exprtk_disable_string_capabilities\n         if (result)\n         {\n            const bool consq_is_str = is_generally_string_node( consequent);\n            const bool alter_is_str = is_generally_string_node(alternative);\n\n            if (consq_is_str || alter_is_str)\n            {\n               if (consq_is_str && alter_is_str)\n               {\n                  return expression_generator_\n                           .conditional_string(condition, consequent, alternative);\n               }\n\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR050 - Return types of 
ternary if-statement differ\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n         }\n         #endif\n\n         if (!result)\n         {\n            free_node(node_allocator_,   condition);\n            free_node(node_allocator_,  consequent);\n            free_node(node_allocator_, alternative);\n\n            return error_node();\n         }\n         else\n            return expression_generator_\n                      .conditional(condition, consequent, alternative);\n      }\n\n      inline expression_node_ptr parse_while_loop()\n      {\n         // Parse: [while][(][test expr][)][{][expression][}]\n         expression_node_ptr condition   = error_node();\n         expression_node_ptr branch      = error_node();\n         expression_node_ptr result_node = error_node();\n\n         bool result = true;\n\n         next_token();\n\n         if (!token_is(token_t::e_lbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR051 - Expected '(' at start of while-loop condition statement\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (0 == (condition = parse_expression()))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR052 - Failed to parse condition for while-loop\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (!token_is(token_t::e_rbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR053 - Expected ')' at end of while-loop condition statement\",\n                          exprtk_error_location));\n\n            
result = false;\n         }\n\n         brkcnt_list_.push_front(false);\n\n         if (result)\n         {\n            if (0 == (branch = parse_multi_sequence(\"while-loop\")))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR054 - Failed to parse body of while-loop\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n            else if (0 == (result_node = expression_generator_.while_loop(condition,\n                                                                          branch,\n                                                                          brkcnt_list_.front())))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR055 - Failed to synthesize while-loop\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n         }\n\n         if (!result)\n         {\n            free_node(node_allocator_,      branch);\n            free_node(node_allocator_,   condition);\n            free_node(node_allocator_, result_node);\n\n            brkcnt_list_.pop_front();\n\n            return error_node();\n         }\n         else\n            return result_node;\n      }\n\n      inline expression_node_ptr parse_repeat_until_loop()\n      {\n         // Parse: [repeat][{][expression][}][until][(][test expr][)]\n         expression_node_ptr condition = error_node();\n         expression_node_ptr branch    = error_node();\n         next_token();\n\n         std::vector<expression_node_ptr> arg_list;\n         std::vector<bool> side_effect_list;\n\n         scoped_vec_delete<expression_node_t> sdd((*this),arg_list);\n\n         brkcnt_list_.push_front(false);\n\n         if (details::imatch(current_token().value,\"until\"))\n         {\n            next_token();\n 
           branch = node_allocator_.allocate<details::null_node<T> >();\n         }\n         else\n         {\n            token_t::token_type seperator = token_t::e_eof;\n\n            scope_handler sh(*this);\n\n            scoped_bool_or_restorer sbr(state_.side_effect_present);\n\n            for ( ; ; )\n            {\n               state_.side_effect_present = false;\n\n               expression_node_ptr arg = parse_expression();\n\n               if (0 == arg)\n                  return error_node();\n               else\n               {\n                  arg_list.push_back(arg);\n                  side_effect_list.push_back(state_.side_effect_present);\n               }\n\n               if (details::imatch(current_token().value,\"until\"))\n               {\n                  next_token();\n                  break;\n               }\n\n               bool is_next_until = peek_token_is(token_t::e_symbol) &&\n                                    peek_token_is(\"until\");\n\n               if (!token_is(seperator) && is_next_until)\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR056 - Expected '\" + token_t::to_str(seperator) + \"' in body of repeat until loop\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n\n               if (details::imatch(current_token().value,\"until\"))\n               {\n                  next_token();\n                  break;\n               }\n            }\n\n            branch = simplify(arg_list,side_effect_list);\n\n            sdd.delete_ptr = (0 == branch);\n\n            if (sdd.delete_ptr)\n            {\n               brkcnt_list_.pop_front();\n\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             
\"ERR057 - Failed to parse body of repeat until loop\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n         }\n\n         if (!token_is(token_t::e_lbracket))\n         {\n            brkcnt_list_.pop_front();\n\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR058 - Expected '(' before condition statement of repeat until loop\",\n                          exprtk_error_location));\n\n            free_node(node_allocator_,branch);\n\n            return error_node();\n         }\n         else if (0 == (condition = parse_expression()))\n         {\n            brkcnt_list_.pop_front();\n\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR059 - Failed to parse condition for repeat until loop\",\n                          exprtk_error_location));\n\n            free_node(node_allocator_,branch);\n\n            return error_node();\n         }\n         else if (!token_is(token_t::e_rbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR060 - Expected ')' after condition of repeat until loop\",\n                          exprtk_error_location));\n\n            free_node(node_allocator_,    branch);\n            free_node(node_allocator_, condition);\n\n            brkcnt_list_.pop_front();\n\n            return error_node();\n         }\n\n         expression_node_ptr result;\n\n         result = expression_generator_\n                     .repeat_until_loop(condition, branch, brkcnt_list_.front());\n\n         if (0 == result)\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR061 
- Failed to synthesize repeat until loop\",\n                          exprtk_error_location));\n\n            free_node(node_allocator_,condition);\n\n            brkcnt_list_.pop_front();\n\n            return error_node();\n         }\n         else\n         {\n            brkcnt_list_.pop_front();\n            return result;\n         }\n      }\n\n      inline expression_node_ptr parse_for_loop()\n      {\n         expression_node_ptr initialiser = error_node();\n         expression_node_ptr condition   = error_node();\n         expression_node_ptr incrementor = error_node();\n         expression_node_ptr loop_body   = error_node();\n\n         scope_element* se = 0;\n         bool result       = true;\n         std::string loop_counter_symbol;\n\n         next_token();\n\n         scope_handler sh(*this);\n\n         if (!token_is(token_t::e_lbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR062 - Expected '(' at start of for-loop\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         if (!token_is(token_t::e_eof))\n         {\n            if (\n                 token_is(token_t::e_symbol,prsrhlpr_t::e_hold) &&\n                 details::imatch(current_token().value,\"var\")\n               )\n            {\n               next_token();\n\n               if (!token_is(token_t::e_symbol,prsrhlpr_t::e_hold))\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR063 - Expected a variable at the start of initialiser section of for-loop\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n               else if (!peek_token_is(token_t::e_assign))\n               {\n              
    set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR064 - Expected variable assignment of initialiser section of for-loop\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n\n               loop_counter_symbol = current_token().value;\n\n               se = &sem_.get_element(loop_counter_symbol);\n\n               if ((se->name == loop_counter_symbol) && se->active)\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR065 - For-loop variable '\" + loop_counter_symbol+ \"' is being shadowed by a previous declaration\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n               else if (!symtab_store_.is_variable(loop_counter_symbol))\n               {\n                  if (\n                       !se->active &&\n                       (se->name == loop_counter_symbol) &&\n                       (se->type ==  scope_element::e_variable)\n                     )\n                  {\n                     se->active = true;\n                     se->ref_count++;\n                  }\n                  else\n                  {\n                     scope_element nse;\n                     nse.name      = loop_counter_symbol;\n                     nse.active    = true;\n                     nse.ref_count = 1;\n                     nse.type      = scope_element::e_variable;\n                     nse.depth     = state_.scope_depth;\n                     nse.data      = new T(T(0));\n                     nse.var_node  = node_allocator_.allocate<variable_node_t>(*(T*)(nse.data));\n\n                     if (!sem_.add_element(nse))\n                     {\n                        
set_error(\n                           make_error(parser_error::e_syntax,\n                                      current_token(),\n                                      \"ERR066 - Failed to add new local variable '\" + loop_counter_symbol + \"' to SEM\",\n                                      exprtk_error_location));\n\n                        sem_.free_element(nse);\n\n                        result = false;\n                     }\n                     else\n                     {\n                        exprtk_debug((\"parse_for_loop() - INFO - Added new local variable: %s\\n\",nse.name.c_str()));\n\n                        state_.activate_side_effect(\"parse_for_loop()\");\n                     }\n                  }\n               }\n            }\n\n            if (0 == (initialiser = parse_expression()))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR067 - Failed to parse initialiser of for-loop\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n            else if (!token_is(token_t::e_eof))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR068 - Expected ';' after initialiser of for-loop\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n         }\n\n         if (!token_is(token_t::e_eof))\n         {\n            if (0 == (condition = parse_expression()))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR069 - Failed to parse condition of for-loop\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n        
    else if (!token_is(token_t::e_eof))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR070 - Expected ';' after condition section of for-loop\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n         }\n\n         if (!token_is(token_t::e_rbracket))\n         {\n            if (0 == (incrementor = parse_expression()))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR071 - Failed to parse incrementor of for-loop\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n            else if (!token_is(token_t::e_rbracket))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR072 - Expected ')' after incrementor section of for-loop\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n         }\n\n         if (result)\n         {\n            brkcnt_list_.push_front(false);\n\n            if (0 == (loop_body = parse_multi_sequence(\"for-loop\")))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR073 - Failed to parse body of for-loop\",\n                             exprtk_error_location));\n\n               result = false;\n            }\n         }\n\n         if (!result)\n         {\n            if (se)\n            {\n               se->ref_count--;\n            }\n\n            sem_.cleanup();\n\n            free_node(node_allocator_, initialiser);\n            free_node(node_allocator_,   
condition);\n            free_node(node_allocator_, incrementor);\n            free_node(node_allocator_,   loop_body);\n\n            if (!brkcnt_list_.empty())\n            {\n               brkcnt_list_.pop_front();\n            }\n\n            return error_node();\n         }\n         else\n         {\n            expression_node_ptr result_node =\n               expression_generator_.for_loop(initialiser,\n                                              condition,\n                                              incrementor,\n                                              loop_body,\n                                              brkcnt_list_.front());\n            brkcnt_list_.pop_front();\n\n            return result_node;\n         }\n      }\n\n      inline expression_node_ptr parse_switch_statement()\n      {\n         std::vector<expression_node_ptr> arg_list;\n         expression_node_ptr result = error_node();\n\n         if (!details::imatch(current_token().value,\"switch\"))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR074 - Expected keyword 'switch'\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         scoped_vec_delete<expression_node_t> svd((*this),arg_list);\n\n         next_token();\n\n         if (!token_is(token_t::e_lcrlbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR075 - Expected '{' for call to switch statement\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         for ( ; ; )\n         {\n            if (!details::imatch(\"case\",current_token().value))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             
current_token(),\n                             \"ERR076 - Expected either a 'case' or 'default' statement\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n\n            next_token();\n\n            expression_node_ptr condition = parse_expression();\n\n            if (0 == condition)\n               return error_node();\n            else if (!token_is(token_t::e_colon))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR077 - Expected ':' for case of switch statement\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n\n            expression_node_ptr consequent = parse_expression();\n\n            if (0 == consequent)\n               return error_node();\n            else if (!token_is(token_t::e_eof))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR078 - Expected ';' at end of case for switch statement\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n\n            // Can we optimise away the case statement?\n            if (is_constant_node(condition) && is_false(condition))\n            {\n               free_node(node_allocator_,  condition);\n               free_node(node_allocator_, consequent);\n            }\n            else\n            {\n               arg_list.push_back( condition);\n               arg_list.push_back(consequent);\n            }\n\n            if (details::imatch(\"default\",current_token().value))\n            {\n               next_token();\n               if (!token_is(token_t::e_colon))\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n     
                           current_token(),\n                                \"ERR079 - Expected ':' for default of switch statement\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n\n               expression_node_ptr default_statement = error_node();\n\n               if (token_is(token_t::e_lcrlbracket,prsrhlpr_t::e_hold))\n                  default_statement = parse_multi_sequence(\"switch-default\");\n               else\n                  default_statement = parse_expression();\n\n               if (0 == default_statement)\n                  return error_node();\n               else if (!token_is(token_t::e_eof))\n               {\n                  free_node(node_allocator_,default_statement);\n\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR080 - Expected ';' at end of default for switch statement\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n\n               arg_list.push_back(default_statement);\n               break;\n            }\n         }\n\n         if (!token_is(token_t::e_rcrlbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR081 - Expected '}' at end of switch statement\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         result = expression_generator_.switch_statement(arg_list);\n\n         svd.delete_ptr = (0 == result);\n\n         return result;\n      }\n\n      inline expression_node_ptr parse_multi_switch_statement()\n      {\n         std::vector<expression_node_ptr> arg_list;\n         expression_node_ptr result = error_node();\n\n         if 
(!details::imatch(current_token().value,\"[*]\"))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR082 - Expected token '[*]'\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         scoped_vec_delete<expression_node_t> svd((*this),arg_list);\n\n         next_token();\n\n         if (!token_is(token_t::e_lcrlbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR083 - Expected '{' for call to [*] statement\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         for ( ; ; )\n         {\n            if (!details::imatch(\"case\",current_token().value))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR084 - Expected a 'case' statement for multi-switch\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n\n            next_token();\n\n            expression_node_ptr condition = parse_expression();\n\n            if (0 == condition)\n               return error_node();\n\n            if (!token_is(token_t::e_colon))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR085 - Expected ':' for case of [*] statement\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n\n            expression_node_ptr consequent = parse_expression();\n\n            if (0 == consequent)\n               return error_node();\n\n            if (!token_is(token_t::e_eof))\n            
{\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR086 - Expected ';' at end of case for [*] statement\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n\n            // Can we optimise away the case statement?\n            if (is_constant_node(condition) && is_false(condition))\n            {\n               free_node(node_allocator_,  condition);\n               free_node(node_allocator_, consequent);\n            }\n            else\n            {\n               arg_list.push_back(condition);\n               arg_list.push_back(consequent);\n            }\n\n            if (token_is(token_t::e_rcrlbracket,prsrhlpr_t::e_hold))\n            {\n               break;\n            }\n         }\n\n         if (!token_is(token_t::e_rcrlbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR087 - Expected '}' at end of [*] statement\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         result = expression_generator_.multi_switch_statement(arg_list);\n\n         svd.delete_ptr = (0 == result);\n\n         return result;\n      }\n\n      inline expression_node_ptr parse_vararg_function()\n      {\n         std::vector<expression_node_ptr> arg_list;\n         expression_node_ptr result = error_node();\n\n         details::operator_type opt_type = details::e_default;\n         const std::string symbol = current_token().value;\n\n         if (details::imatch(symbol,\"~\"))\n         {\n            next_token();\n            return parse_multi_sequence();\n         }\n         else if (details::imatch(symbol,\"[*]\"))\n         {\n            return parse_multi_switch_statement();\n         }\n         else if 
(details::imatch(symbol, \"avg\" )) opt_type = details::e_avg ;\n         else if (details::imatch(symbol, \"mand\")) opt_type = details::e_mand;\n         else if (details::imatch(symbol, \"max\" )) opt_type = details::e_max ;\n         else if (details::imatch(symbol, \"min\" )) opt_type = details::e_min ;\n         else if (details::imatch(symbol, \"mor\" )) opt_type = details::e_mor ;\n         else if (details::imatch(symbol, \"mul\" )) opt_type = details::e_prod;\n         else if (details::imatch(symbol, \"sum\" )) opt_type = details::e_sum ;\n         else\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR088 - Unsupported vararg function: \" + symbol,\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         scoped_vec_delete<expression_node_t> sdd((*this),arg_list);\n\n         lodge_symbol(symbol,e_st_function);\n\n         next_token();\n\n         if (!token_is(token_t::e_lbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR089 - Expected '(' for call to vararg function: \" + symbol,\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         for ( ; ; )\n         {\n            expression_node_ptr arg = parse_expression();\n\n            if (0 == arg)\n               return error_node();\n            else\n               arg_list.push_back(arg);\n\n            if (token_is(token_t::e_rbracket))\n               break;\n            else if (!token_is(token_t::e_comma))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR090 - Expected ',' for call to vararg function: \" + symbol,\n                
             exprtk_error_location));\n\n               return error_node();\n            }\n         }\n\n         result = expression_generator_.vararg_function(opt_type,arg_list);\n\n         sdd.delete_ptr = (0 == result);\n         return result;\n      }\n\n      #ifndef exprtk_disable_string_capabilities\n      inline expression_node_ptr parse_string_range_statement(expression_node_ptr& expression)\n      {\n         if (!token_is(token_t::e_lsqrbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR091 - Expected '[' as start of string range definition\",\n                          exprtk_error_location));\n\n            free_node(node_allocator_,expression);\n\n            return error_node();\n         }\n         else if (token_is(token_t::e_rsqrbracket))\n         {\n            return node_allocator_.allocate<details::string_size_node<T> >(expression);\n         }\n\n         range_t rp;\n\n         if (!parse_range(rp,true))\n         {\n            free_node(node_allocator_,expression);\n\n            return error_node();\n         }\n\n         expression_node_ptr result = expression_generator_(expression,rp);\n\n         if (0 == result)\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR092 - Failed to generate string range node\",\n                          exprtk_error_location));\n\n            free_node(node_allocator_,expression);\n         }\n\n         rp.clear();\n\n         return result;\n      }\n      #else\n      inline expression_node_ptr parse_string_range_statement(expression_node_ptr&)\n      {\n         return error_node();\n      }\n      #endif\n\n      inline void parse_pending_string_rangesize(expression_node_ptr& expression)\n      {\n         // Allow no more than 100 range calls, eg: 
s[][][]...[][]\n         const std::size_t max_rangesize_parses = 100;\n\n         std::size_t i = 0;\n\n         while\n            (\n              (0 != expression)                     &&\n              (i++ < max_rangesize_parses)          &&\n              error_list_.empty()                   &&\n              is_generally_string_node(expression)  &&\n              token_is(token_t::e_lsqrbracket,prsrhlpr_t::e_hold)\n            )\n         {\n            expression = parse_string_range_statement(expression);\n         }\n      }\n\n      template <typename Allocator1,\n                typename Allocator2,\n                template <typename,typename> class Sequence>\n      inline expression_node_ptr simplify(Sequence<expression_node_ptr,Allocator1>& expression_list,\n                                          Sequence<bool,Allocator2>& side_effect_list,\n                                          const bool specialise_on_final_type = false)\n      {\n         if (expression_list.empty())\n            return error_node();\n         else if (1 == expression_list.size())\n            return expression_list[0];\n\n         Sequence<expression_node_ptr,Allocator1> tmp_expression_list;\n\n         bool return_node_present = false;\n\n         for (std::size_t i = 0; i < (expression_list.size() - 1); ++i)\n         {\n            if (is_variable_node(expression_list[i]))\n               continue;\n            else if (\n                      is_return_node  (expression_list[i]) ||\n                      is_break_node   (expression_list[i]) ||\n                      is_continue_node(expression_list[i])\n                    )\n            {\n               tmp_expression_list.push_back(expression_list[i]);\n\n               // Remove all subexpressions after first short-circuit\n               // node has been encountered.\n\n               for (std::size_t j = i + 1; j < expression_list.size(); ++j)\n               {\n                  
free_node(node_allocator_,expression_list[j]);\n               }\n\n               return_node_present = true;\n\n               break;\n            }\n            else if (\n                      is_constant_node(expression_list[i]) ||\n                      is_null_node    (expression_list[i]) ||\n                      !side_effect_list[i]\n                    )\n            {\n               free_node(node_allocator_,expression_list[i]);\n               continue;\n            }\n            else\n               tmp_expression_list.push_back(expression_list[i]);\n         }\n\n         if (!return_node_present)\n         {\n            tmp_expression_list.push_back(expression_list.back());\n         }\n\n         expression_list.swap(tmp_expression_list);\n\n         if (tmp_expression_list.size() > expression_list.size())\n         {\n            exprtk_debug((\"simplify() - Reduced subexpressions from %d to %d\\n\",\n                          static_cast<int>(tmp_expression_list.size()),\n                          static_cast<int>(expression_list    .size())));\n         }\n\n         if (\n              return_node_present          ||\n              side_effect_list.back()      ||\n              (expression_list.size() > 1)\n            )\n            state_.activate_side_effect(\"simplify()\");\n\n         if (1 == expression_list.size())\n            return expression_list[0];\n         else if (specialise_on_final_type && is_generally_string_node(expression_list.back()))\n            return expression_generator_.vararg_function(details::e_smulti,expression_list);\n         else\n            return expression_generator_.vararg_function(details::e_multi,expression_list);\n      }\n\n      inline expression_node_ptr parse_multi_sequence(const std::string& source = \"\")\n      {\n         token_t::token_type close_bracket = token_t::e_rcrlbracket;\n         token_t::token_type seperator     = token_t::e_eof;\n\n         if (!token_is(token_t::e_lcrlbracket))\n 
        {\n            if (token_is(token_t::e_lbracket))\n            {\n               close_bracket = token_t::e_rbracket;\n               seperator     = token_t::e_comma;\n            }\n            else\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR093 - Expected '\" + token_t::to_str(close_bracket) + \"' for call to multi-sequence\" +\n                             ((!source.empty()) ? std::string(\" section of \" + source): \"\"),\n                             exprtk_error_location));\n\n               return error_node();\n            }\n         }\n         else if (token_is(token_t::e_rcrlbracket))\n         {\n            return node_allocator_.allocate<details::null_node<T> >();\n         }\n\n         std::vector<expression_node_ptr> arg_list;\n         std::vector<bool> side_effect_list;\n\n         expression_node_ptr result = error_node();\n\n         scoped_vec_delete<expression_node_t> sdd((*this),arg_list);\n\n         scope_handler sh(*this);\n\n         scoped_bool_or_restorer sbr(state_.side_effect_present);\n\n         for ( ; ; )\n         {\n            state_.side_effect_present = false;\n\n            expression_node_ptr arg = parse_expression();\n\n            if (0 == arg)\n               return error_node();\n            else\n            {\n               arg_list.push_back(arg);\n               side_effect_list.push_back(state_.side_effect_present);\n            }\n\n            if (token_is(close_bracket))\n               break;\n\n            bool is_next_close = peek_token_is(close_bracket);\n\n            if (!token_is(seperator) && is_next_close)\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR094 - Expected '\" + token_t::to_str(seperator) + \"' for call to 
multi-sequence section of \" + source,\n                             exprtk_error_location));\n\n               return error_node();\n            }\n\n            if (token_is(close_bracket))\n               break;\n         }\n\n         result = simplify(arg_list,side_effect_list,source.empty());\n\n         sdd.delete_ptr = (0 == result);\n         return result;\n      }\n\n      inline bool parse_range(range_t& rp, const bool skip_lsqr = false)\n      {\n         // Examples of valid ranges:\n         // 1. [1:5]     -> 1..5\n         // 2. [ :5]     -> 0..5\n         // 3. [1: ]     -> 1..end\n         // 4. [x:y]     -> x..y where x <= y\n         // 5. [x+1:y/2] -> x+1..y/2 where x+1 <= y/2\n         // 6. [ :y]     -> 0..y where 0 <= y\n         // 7. [x: ]     -> x..end where x <= end\n\n         rp.clear();\n\n         if (!skip_lsqr && !token_is(token_t::e_lsqrbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR095 - Expected '[' for start of range\",\n                          exprtk_error_location));\n\n            return false;\n         }\n\n         if (token_is(token_t::e_colon))\n         {\n            rp.n0_c.first  = true;\n            rp.n0_c.second = 0;\n            rp.cache.first = 0;\n         }\n         else\n         {\n            expression_node_ptr r0 = parse_expression();\n\n            if (0 == r0)\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR096 - Failed to parse begin section of range\",\n                             exprtk_error_location));\n\n               return false;\n            }\n            else if (is_constant_node(r0))\n            {\n               const T r0_value = r0->value();\n\n               if (r0_value >= T(0))\n               {\n                  rp.n0_c.first  
= true;\n                  rp.n0_c.second = static_cast<std::size_t>(details::numeric::to_int64(r0_value));\n                  rp.cache.first = rp.n0_c.second;\n               }\n\n               free_node(node_allocator_,r0);\n\n               if (r0_value < T(0))\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR097 - Range lower bound less than zero! Constraint: r0 >= 0\",\n                                exprtk_error_location));\n\n                  return false;\n               }\n            }\n            else\n            {\n               rp.n0_e.first  = true;\n               rp.n0_e.second = r0;\n            }\n\n            if (!token_is(token_t::e_colon))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR098 - Expected ':' for break in range\",\n                             exprtk_error_location));\n\n               rp.free();\n\n               return false;\n            }\n         }\n\n         if (token_is(token_t::e_rsqrbracket))\n         {\n            rp.n1_c.first  = true;\n            rp.n1_c.second = std::numeric_limits<std::size_t>::max();\n         }\n         else\n         {\n            expression_node_ptr r1 = parse_expression();\n\n            if (0 == r1)\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR099 - Failed to parse end section of range\",\n                             exprtk_error_location));\n\n               rp.free();\n\n               return false;\n            }\n            else if (is_constant_node(r1))\n            {\n               const T r1_value = r1->value();\n\n               if (r1_value >= T(0))\n               {\n     
             rp.n1_c.first   = true;\n                  rp.n1_c.second  = static_cast<std::size_t>(details::numeric::to_int64(r1_value));\n                  rp.cache.second = rp.n1_c.second;\n               }\n\n               free_node(node_allocator_,r1);\n\n               if (r1_value < T(0))\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR100 - Range upper bound less than zero! Constraint: r1 >= 0\",\n                                exprtk_error_location));\n\n                  return false;\n               }\n            }\n            else\n            {\n               rp.n1_e.first  = true;\n               rp.n1_e.second = r1;\n            }\n\n            if (!token_is(token_t::e_rsqrbracket))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR101 - Expected ']' for end of range\",\n                             exprtk_error_location));\n\n               rp.free();\n\n               return false;\n            }\n         }\n\n         if (rp.const_range())\n         {\n            std::size_t r0 = 0;\n            std::size_t r1 = 0;\n\n            const bool rp_result = rp(r0,r1);\n\n            if (!rp_result || (r0 > r1))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR102 - Invalid range, Constraint: r0 <= r1\",\n                             exprtk_error_location));\n\n               return false;\n            }\n         }\n\n         return true;\n      }\n\n      inline void lodge_symbol(const std::string& symbol,\n                               const symbol_type st)\n      {\n         dec_.add_symbol(symbol,st);\n      }\n\n      #ifndef 
exprtk_disable_string_capabilities\n      inline expression_node_ptr parse_string()\n      {\n         const std::string symbol = current_token().value;\n\n         typedef details::stringvar_node<T>* strvar_node_t;\n\n         expression_node_ptr result   = error_node();\n         strvar_node_t const_str_node = static_cast<strvar_node_t>(0);\n\n         scope_element& se = sem_.get_active_element(symbol);\n\n         if (scope_element::e_string == se.type)\n         {\n            se.active = true;\n            result    = se.str_node;\n            lodge_symbol(symbol,e_st_local_string);\n         }\n         else\n         {\n            if (!symtab_store_.is_conststr_stringvar(symbol))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR103 - Unknown string symbol\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n\n            result = symtab_store_.get_stringvar(symbol);\n\n            if (symtab_store_.is_constant_string(symbol))\n            {\n               const_str_node = static_cast<strvar_node_t>(result);\n               result = expression_generator_(const_str_node->str());\n            }\n\n            lodge_symbol(symbol,e_st_string);\n         }\n\n         if (peek_token_is(token_t::e_lsqrbracket))\n         {\n            next_token();\n\n            if (peek_token_is(token_t::e_rsqrbracket))\n            {\n               next_token();\n               next_token();\n\n               if (const_str_node)\n               {\n                  free_node(node_allocator_,result);\n\n                  return expression_generator_(T(const_str_node->size()));\n               }\n               else\n                  return node_allocator_.allocate<details::stringvar_size_node<T> >\n                            (static_cast<details::stringvar_node<T>*>(result)->ref());\n   
         }\n\n            range_t rp;\n\n            if (!parse_range(rp))\n            {\n               free_node(node_allocator_,result);\n\n               return error_node();\n            }\n            else if (const_str_node)\n            {\n               free_node(node_allocator_,result);\n               result = expression_generator_(const_str_node->ref(),rp);\n            }\n            else\n               result = expression_generator_(static_cast<details::stringvar_node<T>*>\n                           (result)->ref(), rp);\n\n            if (result)\n               rp.clear();\n         }\n         else\n            next_token();\n\n         return result;\n      }\n      #else\n      inline expression_node_ptr parse_string()\n      {\n         return error_node();\n      }\n      #endif\n\n      #ifndef exprtk_disable_string_capabilities\n      inline expression_node_ptr parse_const_string()\n      {\n         const std::string   const_str = current_token().value;\n         expression_node_ptr result    = expression_generator_(const_str);\n\n         if (peek_token_is(token_t::e_lsqrbracket))\n         {\n            next_token();\n\n            if (peek_token_is(token_t::e_rsqrbracket))\n            {\n               next_token();\n               next_token();\n\n               free_node(node_allocator_,result);\n\n               return expression_generator_(T(const_str.size()));\n            }\n\n            range_t rp;\n\n            if (!parse_range(rp))\n            {\n               free_node(node_allocator_,result);\n\n               return error_node();\n            }\n\n            free_node(node_allocator_,result);\n\n            if (rp.n1_c.first && (rp.n1_c.second == std::numeric_limits<std::size_t>::max()))\n            {\n               rp.n1_c.second  = const_str.size() - 1;\n               rp.cache.second = rp.n1_c.second;\n            }\n\n            if (\n                 (rp.n0_c.first && (rp.n0_c.second >= const_str.size())) 
||\n                 (rp.n1_c.first && (rp.n1_c.second >= const_str.size()))\n               )\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR104 - Overflow in range for string: '\" + const_str + \"'[\" +\n                             (rp.n0_c.first ? details::to_str(static_cast<int>(rp.n0_c.second)) : \"?\") + \":\" +\n                             (rp.n1_c.first ? details::to_str(static_cast<int>(rp.n1_c.second)) : \"?\") + \"]\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n\n            result = expression_generator_(const_str,rp);\n\n            if (result)\n               rp.clear();\n         }\n         else\n            next_token();\n\n         return result;\n      }\n      #else\n      inline expression_node_ptr parse_const_string()\n      {\n         return error_node();\n      }\n      #endif\n\n      inline expression_node_ptr parse_vector()\n      {\n         const std::string symbol = current_token().value;\n\n         vector_holder_ptr vec = vector_holder_ptr(0);\n\n         const scope_element& se = sem_.get_active_element(symbol);\n\n         if (\n              !details::imatch(se.name, symbol) ||\n              (se.depth > state_.scope_depth)   ||\n              (scope_element::e_vector != se.type)\n            )\n         {\n            if (0 == (vec = symtab_store_.get_vector(symbol)))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR105 - Symbol '\" + symbol + \"' not a vector\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n         }\n         else\n            vec = se.vec_node;\n\n         expression_node_ptr index_expr = error_node();\n\n         
next_token();\n\n         if (!token_is(token_t::e_lsqrbracket))\n         {\n            return node_allocator_.allocate<vector_node_t>(vec);\n         }\n         else if (token_is(token_t::e_rsqrbracket))\n         {\n            return expression_generator_(T(vec->size()));\n         }\n         else if (0 == (index_expr = parse_expression()))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR106 - Failed to parse index for vector: '\" + symbol + \"'\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (!token_is(token_t::e_rsqrbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR107 - Expected ']' for index of vector: '\" + symbol + \"'\",\n                          exprtk_error_location));\n\n            free_node(node_allocator_,index_expr);\n\n            return error_node();\n         }\n\n         // Perform compile-time range check\n         if (details::is_constant_node(index_expr))\n         {\n            const std::size_t index    = static_cast<std::size_t>(details::numeric::to_int32(index_expr->value()));\n            const std::size_t vec_size = vec->size();\n\n            if (index >= vec_size)\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR108 - Index of \" + details::to_str(index) + \" out of range for \"\n                             \"vector '\" + symbol + \"' of size \" + details::to_str(vec_size),\n                             exprtk_error_location));\n\n               free_node(node_allocator_,index_expr);\n\n               return error_node();\n            }\n         }\n\n         return 
expression_generator_.vector_element(symbol,vec,index_expr);\n      }\n\n      inline expression_node_ptr parse_vararg_function_call(ivararg_function<T>* vararg_function, const std::string& vararg_function_name)\n      {\n         std::vector<expression_node_ptr> arg_list;\n\n         expression_node_ptr result = error_node();\n\n         scoped_vec_delete<expression_node_t> sdd((*this),arg_list);\n\n         next_token();\n\n         if (token_is(token_t::e_lbracket))\n         {\n            if (token_is(token_t::e_rbracket))\n            {\n               if (!vararg_function->allow_zero_parameters())\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR109 - Zero parameter call to vararg function: \"\n                                + vararg_function_name + \" not allowed\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n            }\n            else\n            {\n               for ( ; ; )\n               {\n                  expression_node_ptr arg = parse_expression();\n\n                  if (0 == arg)\n                     return error_node();\n                  else\n                     arg_list.push_back(arg);\n\n                  if (token_is(token_t::e_rbracket))\n                     break;\n                  else if (!token_is(token_t::e_comma))\n                  {\n                     set_error(\n                        make_error(parser_error::e_syntax,\n                                   current_token(),\n                                   \"ERR110 - Expected ',' for call to vararg function: \"\n                                   + vararg_function_name,\n                                   exprtk_error_location));\n\n                     return error_node();\n                  }\n               }\n            }\n         }\n         
else if (!vararg_function->allow_zero_parameters())\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR111 - Zero parameter call to vararg function: \"\n                          + vararg_function_name + \" not allowed\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         if (arg_list.size() < vararg_function->min_num_args())\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR112 - Invalid number of parameters to call to vararg function: \"\n                          + vararg_function_name + \", require at least \"\n                          + details::to_str(static_cast<int>(vararg_function->min_num_args())) + \" parameters\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (arg_list.size() > vararg_function->max_num_args())\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR113 - Invalid number of parameters to call to vararg function: \"\n                          + vararg_function_name + \", require no more than \"\n                          + details::to_str(static_cast<int>(vararg_function->max_num_args())) + \" parameters\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         result = expression_generator_.vararg_function_call(vararg_function,arg_list);\n\n         sdd.delete_ptr = (0 == result);\n\n         return result;\n      }\n\n      class type_checker\n      {\n      public:\n\n         typedef parser<T> parser_t;\n         typedef std::vector<std::string> param_seq_list_t;\n\n         type_checker(parser_t& p,\n                      
const std::string& func_name,\n                      const std::string& param_seq)\n         : invalid_state_(true),\n           parser_(p),\n           function_name_(func_name)\n         {\n            split(param_seq);\n         }\n\n         bool verify(const std::string& param_seq, std::size_t& pseq_index)\n         {\n            if (param_seq_list_.empty())\n               return true;\n\n            std::vector<std::pair<std::size_t,char> > error_list;\n\n            for (std::size_t i = 0; i < param_seq_list_.size(); ++i)\n            {\n               details::char_t diff_value = 0;\n               std::size_t     diff_index = 0;\n\n               bool result = details::sequence_match(param_seq_list_[i],\n                                                     param_seq,\n                                                     diff_index,diff_value);\n\n              if (result)\n              {\n                 pseq_index = i;\n                 return true;\n              }\n              else\n                 error_list.push_back(std::make_pair(diff_index,diff_value));\n            }\n\n            if (1 == error_list.size())\n            {\n               parser_.\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                parser_.current_token(),\n                                \"ERR114 - Failed parameter type check for function '\" + function_name_ + \"', \"\n                                \"Expected '\" + param_seq_list_[0] + \"'  call set: '\" + param_seq +\"'\",\n                                exprtk_error_location));\n            }\n            else\n            {\n               // find first with largest diff_index;\n               std::size_t max_diff_index = 0;\n\n               for (std::size_t i = 1; i < error_list.size(); ++i)\n               {\n                  if (error_list[i].first > error_list[max_diff_index].first)\n                  {\n                     max_diff_index = 
i;\n                  }\n               }\n\n               parser_.\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                parser_.current_token(),\n                                \"ERR115 - Failed parameter type check for function '\" + function_name_ + \"', \"\n                                \"Best match: '\" + param_seq_list_[max_diff_index] + \"'  call set: '\" + param_seq +\"'\",\n                                exprtk_error_location));\n            }\n\n            return false;\n         }\n\n         std::size_t paramseq_count() const\n         {\n            return param_seq_list_.size();\n         }\n\n         std::string paramseq(const std::size_t& index) const\n         {\n            return param_seq_list_[index];\n         }\n\n         bool invalid() const\n         {\n            return !invalid_state_;\n         }\n\n         bool allow_zero_parameters() const\n         {\n            return\n               param_seq_list_.end() != std::find(param_seq_list_.begin(),\n                                                  param_seq_list_.end(),\n                                                  \"Z\");\n         }\n\n      private:\n\n         void split(const std::string& s)\n         {\n            if (s.empty())\n               return;\n\n            std::size_t start = 0;\n            std::size_t end   = 0;\n\n            param_seq_list_t param_seq_list;\n\n            struct token_validator\n            {\n               static inline bool process(const std::string& str,\n                                          std::size_t s, std::size_t e,\n                                          param_seq_list_t& psl)\n               {\n                  if (\n                       (e - s) &&\n                       (std::string::npos == str.find(\"?*\")) &&\n                       (std::string::npos == str.find(\"**\"))\n                     )\n                  {\n                     
const std::string curr_str = str.substr(s, e - s);\n\n                     if (\"Z\" == curr_str)\n                     {\n                        psl.push_back(curr_str);\n                        return true;\n                     }\n                     else if (std::string::npos == curr_str.find_first_not_of(\"STV*?|\"))\n                     {\n                        psl.push_back(curr_str);\n                        return true;\n                     }\n                  }\n\n                  return false;\n               }\n            };\n\n            while (std::string::npos != (end = s.find('|',start)))\n            {\n               if (!token_validator::process(s, start, end, param_seq_list))\n               {\n                  invalid_state_ = false;\n\n                  const std::string err_param_seq = s.substr(start, end - start);\n\n                  parser_.\n                     set_error(\n                        make_error(parser_error::e_syntax,\n                                   parser_.current_token(),\n                                   \"ERR116 - Invalid parameter sequence of '\" + err_param_seq +\n                                   \"'  for function: \" + function_name_,\n                                   exprtk_error_location));\n\n                  return;\n               }\n               else\n                  start = end + 1;\n            }\n\n            if (start < s.size())\n            {\n               if (token_validator::process(s, start, s.size(), param_seq_list))\n                  param_seq_list_ = param_seq_list;\n               else\n               {\n                  const std::string err_param_seq = s.substr(start, s.size() - start);\n\n                  parser_.\n                     set_error(\n                        make_error(parser_error::e_syntax,\n                                   parser_.current_token(),\n                                   \"ERR117 - Invalid parameter sequence of '\" + err_param_seq +\n  
                                 \"'  for function: \" + function_name_,\n                                   exprtk_error_location));\n                  return;\n               }\n            }\n         }\n\n         type_checker(const type_checker&);\n         type_checker& operator=(const type_checker&);\n\n         bool invalid_state_;\n         parser_t& parser_;\n         std::string function_name_;\n         param_seq_list_t param_seq_list_;\n      };\n\n      inline expression_node_ptr parse_generic_function_call(igeneric_function<T>* function, const std::string& function_name)\n      {\n         std::vector<expression_node_ptr> arg_list;\n\n         scoped_vec_delete<expression_node_t> sdd((*this),arg_list);\n\n         next_token();\n\n         std::string param_type_list;\n\n         type_checker tc((*this), function_name, function->parameter_sequence);\n\n         if (tc.invalid())\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR118 - Type checker instantiation failure for generic function: \" + function_name,\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         if (\n              !function->parameter_sequence.empty() &&\n              function->allow_zero_parameters    () &&\n              !tc      .allow_zero_parameters    ()\n            )\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR119 - Mismatch in zero parameter condition for generic function: \"\n                          + function_name,\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         if (token_is(token_t::e_lbracket))\n         {\n            if (token_is(token_t::e_rbracket))\n            {\n               if (\n                    
!function->allow_zero_parameters() &&\n                    !tc       .allow_zero_parameters()\n                  )\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR120 - Zero parameter call to generic function: \"\n                                + function_name + \" not allowed\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n            }\n            else\n            {\n               for ( ; ; )\n               {\n                  expression_node_ptr arg = parse_expression();\n\n                  if (0 == arg)\n                     return error_node();\n\n                  if (is_ivector_node(arg))\n                     param_type_list += 'V';\n                  else if (is_generally_string_node(arg))\n                     param_type_list += 'S';\n                  else // Everything else is assumed to be a scalar returning expression\n                     param_type_list += 'T';\n\n                  arg_list.push_back(arg);\n\n                  if (token_is(token_t::e_rbracket))\n                     break;\n                  else if (!token_is(token_t::e_comma))\n                  {\n                     set_error(\n                        make_error(parser_error::e_syntax,\n                                   current_token(),\n                                   \"ERR121 - Expected ',' for call to generic function: \" + function_name,\n                                   exprtk_error_location));\n\n                     return error_node();\n                  }\n               }\n            }\n         }\n         else if (\n                   !function->parameter_sequence.empty() &&\n                   function->allow_zero_parameters    () &&\n                   !tc      .allow_zero_parameters    ()\n                 )\n         {\n            
set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR122 - Zero parameter call to generic function: \"\n                          + function_name + \" not allowed\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         std::size_t param_seq_index = 0;\n\n         if (\n              state_.type_check_enabled &&\n              !tc.verify(param_type_list, param_seq_index)\n            )\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR123 - Invalid input parameter sequence for call to generic function: \" + function_name,\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         expression_node_ptr result = error_node();\n\n         if (tc.paramseq_count() <= 1)\n            result = expression_generator_\n                       .generic_function_call(function, arg_list);\n         else\n            result = expression_generator_\n                       .generic_function_call(function, arg_list, param_seq_index);\n\n         sdd.delete_ptr = (0 == result);\n\n         return result;\n      }\n\n      #ifndef exprtk_disable_string_capabilities\n      inline expression_node_ptr parse_string_function_call(igeneric_function<T>* function, const std::string& function_name)\n      {\n         std::vector<expression_node_ptr> arg_list;\n\n         scoped_vec_delete<expression_node_t> sdd((*this),arg_list);\n\n         next_token();\n\n         std::string param_type_list;\n\n         type_checker tc((*this), function_name, function->parameter_sequence);\n\n         if (\n              (!function->parameter_sequence.empty()) &&\n              (0 == tc.paramseq_count())\n            )\n         {\n            return error_node();\n         }\n\n         if (token_is(token_t::e_lbracket))\n  
       {\n            if (!token_is(token_t::e_rbracket))\n            {\n               for ( ; ; )\n               {\n                  expression_node_ptr arg = parse_expression();\n\n                  if (0 == arg)\n                     return error_node();\n\n                  if (is_ivector_node(arg))\n                     param_type_list += 'V';\n                  else if (is_generally_string_node(arg))\n                     param_type_list += 'S';\n                  else // Everything else is a scalar returning expression\n                     param_type_list += 'T';\n\n                  arg_list.push_back(arg);\n\n                  if (token_is(token_t::e_rbracket))\n                     break;\n                  else if (!token_is(token_t::e_comma))\n                  {\n                     set_error(\n                        make_error(parser_error::e_syntax,\n                                   current_token(),\n                                   \"ERR124 - Expected ',' for call to string function: \" + function_name,\n                                   exprtk_error_location));\n\n                     return error_node();\n                  }\n               }\n            }\n         }\n\n         std::size_t param_seq_index = 0;\n\n         if (!tc.verify(param_type_list, param_seq_index))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR125 - Invalid input parameter sequence for call to string function: \" + function_name,\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         expression_node_ptr result = error_node();\n\n         if (tc.paramseq_count() <= 1)\n            result = expression_generator_\n                       .string_function_call(function, arg_list);\n         else\n            result = expression_generator_\n                       .string_function_call(function, arg_list, 
param_seq_index);\n\n         sdd.delete_ptr = (0 == result);\n\n         return result;\n      }\n      #endif\n\n      template <typename Type, std::size_t NumberOfParameters>\n      struct parse_special_function_impl\n      {\n         static inline expression_node_ptr process(parser<Type>& p,const details::operator_type opt_type, const std::string& sf_name)\n         {\n            expression_node_ptr branch[NumberOfParameters];\n            expression_node_ptr result  = error_node();\n\n            std::fill_n(branch,NumberOfParameters,reinterpret_cast<expression_node_ptr>(0));\n\n            scoped_delete<expression_node_t,NumberOfParameters> sd(p,branch);\n\n            p.next_token();\n\n            if (!p.token_is(token_t::e_lbracket))\n            {\n               p.set_error(\n                    make_error(parser_error::e_syntax,\n                               p.current_token(),\n                               \"ERR126 - Expected '(' for special function '\" + sf_name + \"'\",\n                               exprtk_error_location));\n\n               return error_node();\n            }\n\n            for (std::size_t i = 0; i < NumberOfParameters; ++i)\n            {\n               branch[i] = p.parse_expression();\n\n               if (0 == branch[i])\n               {\n                  return p.error_node();\n               }\n               else if (i < (NumberOfParameters - 1))\n               {\n                  if (!p.token_is(token_t::e_comma))\n                  {\n                     p.set_error(\n                          make_error(parser_error::e_syntax,\n                                     p.current_token(),\n                                     \"ERR127 - Expected ',' before next parameter of special function '\" + sf_name + \"'\",\n                                     exprtk_error_location));\n\n                     return p.error_node();\n                  }\n               }\n            }\n\n            if 
(!p.token_is(token_t::e_rbracket))\n            {\n               p.set_error(\n                    make_error(parser_error::e_syntax,\n                               p.current_token(),\n                               \"ERR128 - Invalid number of parameters for special function '\" + sf_name + \"'\",\n                               exprtk_error_location));\n\n               return p.error_node();\n            }\n            else\n               result = p.expression_generator_.special_function(opt_type,branch);\n\n            sd.delete_ptr = (0 == result);\n\n            return result;\n         }\n      };\n\n      inline expression_node_ptr parse_special_function()\n      {\n         const std::string sf_name = current_token().value;\n\n         // Expect: $fDD(expr0,expr1,expr2) or $fDD(expr0,expr1,expr2,expr3)\n         if (\n              !details::is_digit(sf_name[2]) ||\n              !details::is_digit(sf_name[3])\n            )\n         {\n            set_error(\n               make_error(parser_error::e_token,\n                          current_token(),\n                          \"ERR129 - Invalid special function[1]: \" + sf_name,\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         const int id = (sf_name[2] - '0') * 10 +\n                        (sf_name[3] - '0');\n\n         if (id >= details::e_sffinal)\n         {\n            set_error(\n               make_error(parser_error::e_token,\n                          current_token(),\n                          \"ERR130 - Invalid special function[2]: \" + sf_name,\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         const int sf_3_to_4                   = details::e_sf48;\n         const details::operator_type opt_type = details::operator_type(id + 1000);\n         const std::size_t NumberOfParameters  = (id < (sf_3_to_4 - 1000)) ? 
3U : 4U;\n\n         switch (NumberOfParameters)\n         {\n            case 3  : return parse_special_function_impl<T,3>::process((*this), opt_type, sf_name);\n            case 4  : return parse_special_function_impl<T,4>::process((*this), opt_type, sf_name);\n            default : return error_node();\n         }\n      }\n\n      inline expression_node_ptr parse_null_statement()\n      {\n         next_token();\n         return node_allocator_.allocate<details::null_node<T> >();\n      }\n\n      #ifndef exprtk_disable_break_continue\n      inline expression_node_ptr parse_break_statement()\n      {\n         if (state_.parsing_break_stmt)\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR131 - Break call within a break call is not allowed\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         scoped_bool_negator sbn(state_.parsing_break_stmt);\n\n         if (!brkcnt_list_.empty())\n         {\n            next_token();\n\n            brkcnt_list_.front() = true;\n\n            expression_node_ptr return_expr = error_node();\n\n            if (token_is(token_t::e_lsqrbracket))\n            {\n               if (0 == (return_expr = parse_expression()))\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR132 - Failed to parse return expression for 'break' statement\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n               else if (!token_is(token_t::e_rsqrbracket))\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR133 - Expected ']' at 
the completion of break's return expression\",\n                                exprtk_error_location));\n\n                  free_node(node_allocator_,return_expr);\n\n                  return error_node();\n               }\n            }\n\n            state_.activate_side_effect(\"parse_break_statement()\");\n\n            return node_allocator_.allocate<details::break_node<T> >(return_expr);\n         }\n         else\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR134 - Invalid use of 'break', allowed only in the scope of a loop\",\n                          exprtk_error_location));\n         }\n\n         return error_node();\n      }\n\n      inline expression_node_ptr parse_continue_statement()\n      {\n         if (!brkcnt_list_.empty())\n         {\n            next_token();\n\n            brkcnt_list_.front() = true;\n            state_.activate_side_effect(\"parse_continue_statement()\");\n\n            return node_allocator_.allocate<details::continue_node<T> >();\n         }\n         else\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR135 - Invalid use of 'continue', allowed only in the scope of a loop\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n      }\n      #endif\n\n      inline expression_node_ptr parse_define_vector_statement(const std::string& vec_name)\n      {\n         expression_node_ptr size_expr = error_node();\n\n         if (!token_is(token_t::e_lsqrbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR136 - Expected '[' as part of vector size definition\",\n                          exprtk_error_location));\n\n            return 
error_node();\n         }\n         else if (0 == (size_expr = parse_expression()))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR137 - Failed to determine size of vector '\" + vec_name + \"'\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (!is_constant_node(size_expr))\n         {\n            free_node(node_allocator_,size_expr);\n\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR138 - Expected a literal number as size of vector '\" + vec_name + \"'\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         T vector_size = size_expr->value();\n\n         free_node(node_allocator_,size_expr);\n\n         if (\n              (vector_size <= T(0)) ||\n              std::not_equal_to<T>()\n              (T(0),vector_size - details::numeric::trunc(vector_size))\n            )\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR139 - Invalid vector size. 
Must be an integer greater than zero, size: \" +\n                          details::to_str(details::numeric::to_int32(vector_size)),\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         std::vector<expression_node_ptr> vec_initilizer_list;\n\n         scoped_vec_delete<expression_node_t> svd((*this),vec_initilizer_list);\n\n         bool single_value_initialiser = false;\n         bool vec_to_vec_initialiser   = false;\n         bool null_initialisation      = false;\n\n         if (!token_is(token_t::e_rsqrbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR140 - Expected ']' as part of vector size definition\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (!token_is(token_t::e_eof))\n         {\n            if (!token_is(token_t::e_assign))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR141 - Expected ':=' as part of vector definition\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n            else if (token_is(token_t::e_lsqrbracket))\n            {\n               expression_node_ptr initialiser = parse_expression();\n\n               if (0 == initialiser)\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR142 - Failed to parse single vector initialiser\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n\n               vec_initilizer_list.push_back(initialiser);\n\n               if 
(!token_is(token_t::e_rsqrbracket))\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR143 - Expected ']' to close single value vector initialiser\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n\n               single_value_initialiser = true;\n            }\n            else if (!token_is(token_t::e_lcrlbracket))\n            {\n               expression_node_ptr initialiser = error_node();\n\n               // Is this a vector to vector assignment and initialisation?\n               if (token_t::e_symbol == current_token().type)\n               {\n                  // Is it a locally defined vector?\n                  scope_element& se = sem_.get_active_element(current_token().value);\n\n                  if (scope_element::e_vector == se.type)\n                  {\n                     if (0 != (initialiser = parse_expression()))\n                        vec_initilizer_list.push_back(initialiser);\n                     else\n                        return error_node();\n                  }\n                  // Are we dealing with a user defined vector?\n                  else if (symtab_store_.is_vector(current_token().value))\n                  {\n                     lodge_symbol(current_token().value,e_st_vector);\n\n                     if (0 != (initialiser = parse_expression()))\n                        vec_initilizer_list.push_back(initialiser);\n                     else\n                        return error_node();\n                  }\n                  // Are we dealing with a null initialisation vector definition?\n                  else if (token_is(token_t::e_symbol,\"null\"))\n                     null_initialisation = true;\n               }\n\n               if (!null_initialisation)\n               {\n                  if (0 == 
initialiser)\n                  {\n                     set_error(\n                        make_error(parser_error::e_syntax,\n                                   current_token(),\n                                   \"ERR144 - Expected '{' as part of vector initialiser list\",\n                                   exprtk_error_location));\n\n                     return error_node();\n                  }\n                  else\n                     vec_to_vec_initialiser = true;\n               }\n            }\n            else if (!token_is(token_t::e_rcrlbracket))\n            {\n               for ( ; ; )\n               {\n                  expression_node_ptr initialiser = parse_expression();\n\n                  if (0 == initialiser)\n                  {\n                     set_error(\n                        make_error(parser_error::e_syntax,\n                                   current_token(),\n                                   \"ERR145 - Expected '{' as part of vector initialiser list\",\n                                   exprtk_error_location));\n\n                     return error_node();\n                  }\n                  else\n                     vec_initilizer_list.push_back(initialiser);\n\n                  if (token_is(token_t::e_rcrlbracket))\n                     break;\n\n                  bool is_next_close = peek_token_is(token_t::e_rcrlbracket);\n\n                  if (!token_is(token_t::e_comma) && is_next_close)\n                  {\n                     set_error(\n                        make_error(parser_error::e_syntax,\n                                   current_token(),\n                                   \"ERR146 - Expected ',' between vector initialisers\",\n                                   exprtk_error_location));\n\n                     return error_node();\n                  }\n\n                  if (token_is(token_t::e_rcrlbracket))\n                     break;\n               }\n            }\n\n            if (\n  
               !token_is(token_t::e_rbracket   ,prsrhlpr_t::e_hold) &&\n                 !token_is(token_t::e_rcrlbracket,prsrhlpr_t::e_hold) &&\n                 !token_is(token_t::e_rsqrbracket,prsrhlpr_t::e_hold)\n               )\n            {\n               if (!token_is(token_t::e_eof))\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR147 - Expected ';' at end of vector definition\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n            }\n\n            if (vec_initilizer_list.size() > vector_size)\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR148 - Initialiser list larger than the number of elements in the vector: '\" + vec_name + \"'\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n         }\n\n         typename symbol_table_t::vector_holder_ptr vec_holder = typename symbol_table_t::vector_holder_ptr(0);\n\n         const std::size_t vec_size = static_cast<std::size_t>(details::numeric::to_int32(vector_size));\n\n         scope_element& se = sem_.get_element(vec_name);\n\n         if (se.name == vec_name)\n         {\n            if (se.active)\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR149 - Illegal redefinition of local vector: '\" + vec_name + \"'\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n            else if (\n                      (se.size == vec_size) &&\n                      (scope_element::e_vector == se.type)\n                    )\n 
           {\n               vec_holder = se.vec_node;\n               se.active  = true;\n               se.depth   = state_.scope_depth;\n               se.ref_count++;\n            }\n         }\n\n         if (0 == vec_holder)\n         {\n            scope_element nse;\n            nse.name      = vec_name;\n            nse.active    = true;\n            nse.ref_count = 1;\n            nse.type      = scope_element::e_vector;\n            nse.depth     = state_.scope_depth;\n            nse.size      = vec_size;\n            nse.data      = new T[vec_size];\n            nse.vec_node  = new typename scope_element::vector_holder_t((T*)(nse.data),nse.size);\n\n            if (!sem_.add_element(nse))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR150 - Failed to add new local vector '\" + vec_name + \"' to SEM\",\n                             exprtk_error_location));\n\n               sem_.free_element(nse);\n\n               return error_node();\n            }\n\n            vec_holder = nse.vec_node;\n\n            exprtk_debug((\"parse_define_vector_statement() - INFO - Added new local vector: %s[%d]\\n\",\n                          nse.name.c_str(),\n                          static_cast<int>(nse.size)));\n         }\n\n         state_.activate_side_effect(\"parse_define_vector_statement()\");\n\n         lodge_symbol(vec_name,e_st_local_vector);\n\n         expression_node_ptr result = error_node();\n\n         if (null_initialisation)\n            result = expression_generator_(T(0.0));\n         else if (vec_to_vec_initialiser)\n            result = expression_generator_(\n                        details::e_assign,\n                        node_allocator_.allocate<vector_node_t>(vec_holder),\n                        vec_initilizer_list[0]);\n         else\n            result = node_allocator_\n                        
.allocate<details::vector_assignment_node<T> >(\n                           (*vec_holder)[0],\n                           vec_size,\n                           vec_initilizer_list,\n                           single_value_initialiser);\n\n         svd.delete_ptr = (0 == result);\n\n         return result;\n      }\n\n      #ifndef exprtk_disable_string_capabilities\n      inline expression_node_ptr parse_define_string_statement(const std::string& str_name, expression_node_ptr initialisation_expression)\n      {\n         stringvar_node_t* str_node = reinterpret_cast<stringvar_node_t*>(0);\n\n         scope_element& se = sem_.get_element(str_name);\n\n         if (se.name == str_name)\n         {\n            if (se.active)\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR151 - Illegal redefinition of local variable: '\" + str_name + \"'\",\n                             exprtk_error_location));\n\n               free_node(node_allocator_,initialisation_expression);\n\n               return error_node();\n            }\n            else if (scope_element::e_string == se.type)\n            {\n               str_node  = se.str_node;\n               se.active = true;\n               se.depth  = state_.scope_depth;\n               se.ref_count++;\n            }\n         }\n\n         if (0 == str_node)\n         {\n            scope_element nse;\n            nse.name      = str_name;\n            nse.active    = true;\n            nse.ref_count = 1;\n            nse.type      = scope_element::e_string;\n            nse.depth     = state_.scope_depth;\n            nse.data      = new std::string;\n            nse.str_node  = new stringvar_node_t(*(std::string*)(nse.data));\n\n            if (!sem_.add_element(nse))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             
current_token(),\n                             \"ERR152 - Failed to add new local string variable '\" + str_name + \"' to SEM\",\n                             exprtk_error_location));\n\n               free_node(node_allocator_,initialisation_expression);\n\n               sem_.free_element(nse);\n\n               return error_node();\n            }\n\n            str_node = nse.str_node;\n\n            exprtk_debug((\"parse_define_string_statement() - INFO - Added new local string variable: %s\\n\",nse.name.c_str()));\n         }\n\n         lodge_symbol(str_name,e_st_local_string);\n\n         state_.activate_side_effect(\"parse_define_string_statement()\");\n\n         expression_node_ptr branch[2] = {0};\n\n         branch[0] = str_node;\n         branch[1] = initialisation_expression;\n\n         return expression_generator_(details::e_assign,branch);\n      }\n      #else\n      inline expression_node_ptr parse_define_string_statement(const std::string&, expression_node_ptr)\n      {\n         return error_node();\n      }\n      #endif\n\n      inline bool local_variable_is_shadowed(const std::string& symbol)\n      {\n         const scope_element& se = sem_.get_element(symbol);\n         return (se.name == symbol) && se.active;\n      }\n\n      inline expression_node_ptr parse_define_var_statement()\n      {\n         if (settings_.vardef_disabled())\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR153 - Illegal variable definition\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (!details::imatch(current_token().value,\"var\"))\n         {\n            return error_node();\n         }\n         else\n            next_token();\n\n         const std::string var_name = current_token().value;\n\n         expression_node_ptr initialisation_expression = error_node();\n\n         
if (!token_is(token_t::e_symbol))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR154 - Expected a symbol for variable definition\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (details::is_reserved_symbol(var_name))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR155 - Illegal redefinition of reserved keyword: '\" + var_name + \"'\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (symtab_store_.symbol_exists(var_name))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR156 - Illegal redefinition of variable '\" + var_name + \"'\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (local_variable_is_shadowed(var_name))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR157 - Illegal redefinition of local variable: '\" + var_name + \"'\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (token_is(token_t::e_lsqrbracket,prsrhlpr_t::e_hold))\n         {\n            return parse_define_vector_statement(var_name);\n         }\n         else if (token_is(token_t::e_lcrlbracket,prsrhlpr_t::e_hold))\n         {\n            return parse_uninitialised_var_statement(var_name);\n         }\n         else if (token_is(token_t::e_assign))\n         {\n            if (0 == (initialisation_expression = parse_expression()))\n            {\n               
set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR158 - Failed to parse initialisation expression\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n         }\n\n         if (\n              !token_is(token_t::e_rbracket   ,prsrhlpr_t::e_hold) &&\n              !token_is(token_t::e_rcrlbracket,prsrhlpr_t::e_hold) &&\n              !token_is(token_t::e_rsqrbracket,prsrhlpr_t::e_hold)\n            )\n         {\n            if (!token_is(token_t::e_eof,prsrhlpr_t::e_hold))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR159 - Expected ';' after variable definition\",\n                             exprtk_error_location));\n\n               free_node(node_allocator_,initialisation_expression);\n\n               return error_node();\n            }\n         }\n\n         if (\n              (0 != initialisation_expression) &&\n              details::is_generally_string_node(initialisation_expression)\n            )\n         {\n            return parse_define_string_statement(var_name,initialisation_expression);\n         }\n\n         expression_node_ptr var_node = reinterpret_cast<expression_node_ptr>(0);\n\n         scope_element& se = sem_.get_element(var_name);\n\n         if (se.name == var_name)\n         {\n            if (se.active)\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR160 - Illegal redefinition of local variable: '\" + var_name + \"'\",\n                             exprtk_error_location));\n\n               free_node(node_allocator_, initialisation_expression);\n\n               return error_node();\n            }\n            else if 
(scope_element::e_variable == se.type)\n            {\n               var_node  = se.var_node;\n               se.active = true;\n               se.depth  = state_.scope_depth;\n               se.ref_count++;\n            }\n         }\n\n         if (0 == var_node)\n         {\n            scope_element nse;\n            nse.name      = var_name;\n            nse.active    = true;\n            nse.ref_count = 1;\n            nse.type      = scope_element::e_variable;\n            nse.depth     = state_.scope_depth;\n            nse.data      = new T(T(0));\n            nse.var_node  = node_allocator_.allocate<variable_node_t>(*(T*)(nse.data));\n\n            if (!sem_.add_element(nse))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR161 - Failed to add new local variable '\" + var_name + \"' to SEM\",\n                             exprtk_error_location));\n\n               free_node(node_allocator_, initialisation_expression);\n\n               sem_.free_element(nse);\n\n               return error_node();\n            }\n\n            var_node = nse.var_node;\n\n            exprtk_debug((\"parse_define_var_statement() - INFO - Added new local variable: %s\\n\",nse.name.c_str()));\n         }\n\n         state_.activate_side_effect(\"parse_define_var_statement()\");\n\n         lodge_symbol(var_name,e_st_local_variable);\n\n         expression_node_ptr branch[2] = {0};\n\n         branch[0] = var_node;\n         branch[1] = initialisation_expression ? 
initialisation_expression : expression_generator_(T(0));\n\n         return expression_generator_(details::e_assign,branch);\n      }\n\n      inline expression_node_ptr parse_uninitialised_var_statement(const std::string& var_name)\n      {\n         if (\n              !token_is(token_t::e_lcrlbracket) ||\n              !token_is(token_t::e_rcrlbracket)\n            )\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR162 - Expected a '{}' for uninitialised var definition\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (!token_is(token_t::e_eof,prsrhlpr_t::e_hold))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR163 - Expected ';' after uninitialised variable definition\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         expression_node_ptr var_node = reinterpret_cast<expression_node_ptr>(0);\n\n         scope_element& se = sem_.get_element(var_name);\n\n         if (se.name == var_name)\n         {\n            if (se.active)\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR164 - Illegal redefinition of local variable: '\" + var_name + \"'\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n            else if (scope_element::e_variable == se.type)\n            {\n               var_node  = se.var_node;\n               se.active = true;\n               se.ref_count++;\n            }\n         }\n\n         if (0 == var_node)\n         {\n            scope_element nse;\n            nse.name      = var_name;\n            
nse.active    = true;\n            nse.ref_count = 1;\n            nse.type      = scope_element::e_variable;\n            nse.depth     = state_.scope_depth;\n            nse.ip_index  = sem_.next_ip_index();\n            nse.data      = new T(T(0));\n            nse.var_node  = node_allocator_.allocate<variable_node_t>(*(T*)(nse.data));\n\n            if (!sem_.add_element(nse))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR165 - Failed to add new local variable '\" + var_name + \"' to SEM\",\n                             exprtk_error_location));\n\n               sem_.free_element(nse);\n\n               return error_node();\n            }\n\n            exprtk_debug((\"parse_uninitialised_var_statement() - INFO - Added new local variable: %s\\n\",\n                          nse.name.c_str()));\n         }\n\n         lodge_symbol(var_name,e_st_local_variable);\n\n         state_.activate_side_effect(\"parse_uninitialised_var_statement()\");\n\n         return expression_generator_(T(0));\n      }\n\n      inline expression_node_ptr parse_swap_statement()\n      {\n         if (!details::imatch(current_token().value,\"swap\"))\n         {\n            return error_node();\n         }\n         else\n            next_token();\n\n         if (!token_is(token_t::e_lbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR166 - Expected '(' at start of swap statement\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         expression_node_ptr variable0 = error_node();\n         expression_node_ptr variable1 = error_node();\n\n         bool variable0_generated = false;\n         bool variable1_generated = false;\n\n         const std::string var0_name = 
current_token().value;\n\n         if (!token_is(token_t::e_symbol,prsrhlpr_t::e_hold))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR167 - Expected a symbol for variable or vector element definition\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (peek_token_is(token_t::e_lsqrbracket))\n         {\n            if (0 == (variable0 = parse_vector()))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR168 - First parameter to swap is an invalid vector element: '\" + var0_name + \"'\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n\n            variable0_generated = true;\n         }\n         else\n         {\n            if (symtab_store_.is_variable(var0_name))\n            {\n               variable0 = symtab_store_.get_variable(var0_name);\n            }\n\n            scope_element& se = sem_.get_element(var0_name);\n\n            if (\n                 (se.active)            &&\n                 (se.name == var0_name) &&\n                 (scope_element::e_variable == se.type)\n               )\n            {\n               variable0 = se.var_node;\n            }\n\n            lodge_symbol(var0_name,e_st_variable);\n\n            if (0 == variable0)\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR169 - First parameter to swap is an invalid variable: '\" + var0_name + \"'\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n            else\n               next_token();\n         }\n\n         if 
(!token_is(token_t::e_comma))\n         {\n            set_error(\n                make_error(parser_error::e_syntax,\n                           current_token(),\n                           \"ERR170 - Expected ',' between parameters to swap\",\n                           exprtk_error_location));\n\n            if (variable0_generated)\n            {\n               free_node(node_allocator_,variable0);\n            }\n\n            return error_node();\n         }\n\n         const std::string var1_name = current_token().value;\n\n         if (!token_is(token_t::e_symbol,prsrhlpr_t::e_hold))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR171 - Expected a symbol for variable or vector element definition\",\n                          exprtk_error_location));\n\n            if (variable0_generated)\n            {\n               free_node(node_allocator_,variable0);\n            }\n\n            return error_node();\n         }\n         else if (peek_token_is(token_t::e_lsqrbracket))\n         {\n            if (0 == (variable1 = parse_vector()))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR172 - Second parameter to swap is an invalid vector element: '\" + var1_name + \"'\",\n                             exprtk_error_location));\n\n               if (variable0_generated)\n               {\n                  free_node(node_allocator_,variable0);\n               }\n\n               return error_node();\n            }\n\n            variable1_generated = true;\n         }\n         else\n         {\n            if (symtab_store_.is_variable(var1_name))\n            {\n               variable1 = symtab_store_.get_variable(var1_name);\n            }\n\n            scope_element& se = sem_.get_element(var1_name);\n\n            
if (\n                 (se.active) &&\n                 (se.name == var1_name) &&\n                 (scope_element::e_variable == se.type)\n               )\n            {\n               variable1 = se.var_node;\n            }\n\n            lodge_symbol(var1_name,e_st_variable);\n\n            if (0 == variable1)\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR173 - Second parameter to swap is an invalid variable: '\" + var1_name + \"'\",\n                             exprtk_error_location));\n\n               if (variable0_generated)\n               {\n                  free_node(node_allocator_,variable0);\n               }\n\n               return error_node();\n            }\n            else\n               next_token();\n         }\n\n         if (!token_is(token_t::e_rbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR174 - Expected ')' at end of swap statement\",\n                          exprtk_error_location));\n\n            if (variable0_generated)\n            {\n               free_node(node_allocator_,variable0);\n            }\n\n            if (variable1_generated)\n            {\n               free_node(node_allocator_,variable1);\n            }\n\n            return error_node();\n         }\n\n         typedef details::variable_node<T>* variable_node_ptr;\n\n         variable_node_ptr v0 = variable_node_ptr(0);\n         variable_node_ptr v1 = variable_node_ptr(0);\n\n         expression_node_ptr result = error_node();\n\n         if (\n              (0 != (v0 = dynamic_cast<variable_node_ptr>(variable0))) &&\n              (0 != (v1 = dynamic_cast<variable_node_ptr>(variable1)))\n            )\n         {\n            result = node_allocator_.allocate<details::swap_node<T> >(v0, v1);\n\n       
     if (variable0_generated)\n            {\n               free_node(node_allocator_,variable0);\n            }\n\n            if (variable1_generated)\n            {\n               free_node(node_allocator_,variable1);\n            }\n         }\n         else\n            result = node_allocator_.allocate<details::swap_generic_node<T> >\n                        (variable0, variable1);\n\n         state_.activate_side_effect(\"parse_swap_statement()\");\n\n         return result;\n      }\n\n      #ifndef exprtk_disable_return_statement\n      inline expression_node_ptr parse_return_statement()\n      {\n         if (state_.parsing_return_stmt)\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR175 - Return call within a return call is not allowed\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         scoped_bool_negator sbn(state_.parsing_return_stmt);\n\n         std::vector<expression_node_ptr> arg_list;\n\n         scoped_vec_delete<expression_node_t> sdd((*this),arg_list);\n\n         if (!details::imatch(current_token().value,\"return\"))\n         {\n            return error_node();\n         }\n         else\n            next_token();\n\n         if (!token_is(token_t::e_lsqrbracket))\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR176 - Expected '[' at start of return statement\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else if (!token_is(token_t::e_rsqrbracket))\n         {\n            for ( ; ; )\n            {\n               expression_node_ptr arg = parse_expression();\n\n               if (0 == arg)\n                  return error_node();\n\n               arg_list.push_back(arg);\n\n            
   if (token_is(token_t::e_rsqrbracket))\n                  break;\n               else if (!token_is(token_t::e_comma))\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR177 - Expected ',' between values during call to return\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n            }\n         }\n         else if (settings_.zero_return_disabled())\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR178 - Zero parameter return statement not allowed\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         lexer::token prev_token = current_token();\n\n         if (token_is(token_t::e_rsqrbracket))\n         {\n            if (!arg_list.empty())\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             prev_token,\n                             \"ERR179 - Invalid ']' found during return call\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n         }\n\n         std::string ret_param_type_list;\n\n         for (std::size_t i = 0; i < arg_list.size(); ++i)\n         {\n            if (0 == arg_list[i])\n               return error_node();\n            else if (is_ivector_node(arg_list[i]))\n               ret_param_type_list += 'V';\n            else if (is_generally_string_node(arg_list[i]))\n               ret_param_type_list += 'S';\n            else\n               ret_param_type_list += 'T';\n         }\n\n         dec_.retparam_list_.push_back(ret_param_type_list);\n\n         expression_node_ptr result = expression_generator_.return_call(arg_list);\n\n 
        sdd.delete_ptr = (0 == result);\n\n         state_.return_stmt_present = true;\n\n         state_.activate_side_effect(\"parse_return_statement()\");\n\n         return result;\n      }\n      #else\n      inline expression_node_ptr parse_return_statement()\n      {\n         return error_node();\n      }\n      #endif\n\n      inline bool post_variable_process(const std::string& symbol)\n      {\n         if (\n              peek_token_is(token_t::e_lbracket   ) ||\n              peek_token_is(token_t::e_lcrlbracket) ||\n              peek_token_is(token_t::e_lsqrbracket)\n            )\n         {\n            if (!settings_.commutative_check_enabled())\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR180 - Invalid sequence of variable '\"+ symbol + \"' and bracket\",\n                             exprtk_error_location));\n\n               return false;\n            }\n\n            lexer().insert_front(token_t::e_mul);\n         }\n\n         return true;\n      }\n\n      inline bool post_bracket_process(const typename token_t::token_type& token, expression_node_ptr& branch)\n      {\n         bool implied_mul = false;\n\n         if (is_generally_string_node(branch))\n            return true;\n\n         const lexer::parser_helper::token_advance_mode hold = prsrhlpr_t::e_hold;\n\n         switch (token)\n         {\n            case token_t::e_lcrlbracket : implied_mul = token_is(token_t::e_lbracket   ,hold) ||\n                                                        token_is(token_t::e_lcrlbracket,hold) ||\n                                                        token_is(token_t::e_lsqrbracket,hold) ;\n                                          break;\n\n            case token_t::e_lbracket    : implied_mul = token_is(token_t::e_lbracket   ,hold) ||\n                                                        
token_is(token_t::e_lcrlbracket,hold) ||\n                                                        token_is(token_t::e_lsqrbracket,hold) ;\n                                          break;\n\n            case token_t::e_lsqrbracket : implied_mul = token_is(token_t::e_lbracket   ,hold) ||\n                                                        token_is(token_t::e_lcrlbracket,hold) ||\n                                                        token_is(token_t::e_lsqrbracket,hold) ;\n                                          break;\n\n            default                     : return true;\n         }\n\n         if (implied_mul)\n         {\n            if (!settings_.commutative_check_enabled())\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR181 - Invalid sequence of brackets\",\n                             exprtk_error_location));\n\n               return false;\n            }\n            else if (token_t::e_eof != current_token().type)\n            {\n               lexer().insert_front(current_token().type);\n               lexer().insert_front(token_t::e_mul);\n               next_token();\n            }\n         }\n\n         return true;\n      }\n\n      inline expression_node_ptr parse_symtab_symbol()\n      {\n         const std::string symbol = current_token().value;\n\n         // Are we dealing with a variable or a special constant?\n         expression_node_ptr variable = symtab_store_.get_variable(symbol);\n\n         if (variable)\n         {\n            if (symtab_store_.is_constant_node(symbol))\n            {\n               variable = expression_generator_(variable->value());\n            }\n\n            if (!post_variable_process(symbol))\n               return error_node();\n\n            lodge_symbol(symbol,e_st_variable);\n            next_token();\n\n            return variable;\n         }\n\n         // Are we 
dealing with a locally defined variable, vector or string?\n         if (!sem_.empty())\n         {\n            scope_element& se = sem_.get_active_element(symbol);\n\n            if (se.active && details::imatch(se.name, symbol))\n            {\n               if (scope_element::e_variable == se.type)\n               {\n                  se.active = true;\n                  lodge_symbol(symbol,e_st_local_variable);\n\n                  if (!post_variable_process(symbol))\n                     return error_node();\n\n                  next_token();\n\n                  return se.var_node;\n               }\n               else if (scope_element::e_vector == se.type)\n               {\n                  return parse_vector();\n               }\n               #ifndef exprtk_disable_string_capabilities\n               else if (scope_element::e_string == se.type)\n               {\n                  return parse_string();\n               }\n               #endif\n            }\n         }\n\n         #ifndef exprtk_disable_string_capabilities\n         // Are we dealing with a string variable?\n         if (symtab_store_.is_stringvar(symbol))\n         {\n            return parse_string();\n         }\n         #endif\n\n         {\n            // Are we dealing with a function?\n            ifunction<T>* function = symtab_store_.get_function(symbol);\n\n            if (function)\n            {\n               lodge_symbol(symbol,e_st_function);\n\n               expression_node_ptr func_node =\n                                      parse_function_invocation(function,symbol);\n\n               if (func_node)\n                  return func_node;\n               else\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR182 - Failed to generate node for function: '\" + symbol + \"'\",\n                                
exprtk_error_location));\n\n                  return error_node();\n               }\n            }\n         }\n\n         {\n            // Are we dealing with a vararg function?\n            ivararg_function<T>* vararg_function = symtab_store_.get_vararg_function(symbol);\n\n            if (vararg_function)\n            {\n               lodge_symbol(symbol,e_st_function);\n\n               expression_node_ptr vararg_func_node =\n                                      parse_vararg_function_call(vararg_function, symbol);\n\n               if (vararg_func_node)\n                  return vararg_func_node;\n               else\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR183 - Failed to generate node for vararg function: '\" + symbol + \"'\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n            }\n         }\n\n         {\n            // Are we dealing with a vararg generic function?\n            igeneric_function<T>* generic_function = symtab_store_.get_generic_function(symbol);\n\n            if (generic_function)\n            {\n               lodge_symbol(symbol,e_st_function);\n\n               expression_node_ptr genericfunc_node =\n                                      parse_generic_function_call(generic_function, symbol);\n\n               if (genericfunc_node)\n                  return genericfunc_node;\n               else\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR184 - Failed to generate node for generic function: '\" + symbol + \"'\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n            }\n         }\n\n      
   #ifndef exprtk_disable_string_capabilities\n         {\n            // Are we dealing with a vararg string returning function?\n            igeneric_function<T>* string_function = symtab_store_.get_string_function(symbol);\n\n            if (string_function)\n            {\n               lodge_symbol(symbol,e_st_function);\n\n               expression_node_ptr stringfunc_node =\n                                      parse_string_function_call(string_function, symbol);\n\n               if (stringfunc_node)\n                  return stringfunc_node;\n               else\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR185 - Failed to generate node for string function: '\" + symbol + \"'\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n            }\n         }\n         #endif\n\n         // Are we dealing with a vector?\n         if (symtab_store_.is_vector(symbol))\n         {\n            lodge_symbol(symbol,e_st_vector);\n            return parse_vector();\n         }\n\n         if (details::is_reserved_symbol(symbol))\n         {\n               if (\n                    settings_.function_enabled(symbol) ||\n                    !details::is_base_function(symbol)\n                  )\n               {\n                  set_error(\n                     make_error(parser_error::e_syntax,\n                                current_token(),\n                                \"ERR186 - Invalid use of reserved symbol '\" + symbol + \"'\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n         }\n\n         // Should we handle unknown symbols?\n         if (resolve_unknown_symbol_ && unknown_symbol_resolver_)\n         {\n            if (!(settings_.rsrvd_sym_usr_disabled() && 
details::is_reserved_symbol(symbol)))\n            {\n               symbol_table_t& symtab = symtab_store_.get_symbol_table();\n\n               std::string error_message;\n\n               if (unknown_symbol_resolver::e_usrmode_default == unknown_symbol_resolver_->mode)\n               {\n                  T default_value = T(0);\n\n                  typename unknown_symbol_resolver::usr_symbol_type usr_symbol_type;\n\n                  if (unknown_symbol_resolver_->process(symbol, usr_symbol_type, default_value, error_message))\n                  {\n                     bool create_result = false;\n\n                     switch (usr_symbol_type)\n                     {\n                        case unknown_symbol_resolver::e_usr_variable_type : create_result = symtab.create_variable(symbol, default_value);\n                                                                            break;\n\n                        case unknown_symbol_resolver::e_usr_constant_type : create_result = symtab.add_constant(symbol, default_value);\n                                                                            break;\n\n                        default                                           : create_result = false;\n                     }\n\n                     if (create_result)\n                     {\n                        expression_node_ptr var = symtab_store_.get_variable(symbol);\n\n                        if (var)\n                        {\n                           if (symtab_store_.is_constant_node(symbol))\n                           {\n                              var = expression_generator_(var->value());\n                           }\n\n                           lodge_symbol(symbol,e_st_variable);\n\n                           if (!post_variable_process(symbol))\n                              return error_node();\n\n                           next_token();\n\n                           return var;\n                        }\n                     }\n 
                 }\n\n                  set_error(\n                     make_error(parser_error::e_symtab,\n                                current_token(),\n                                \"ERR187 - Failed to create variable: '\" + symbol + \"'\" +\n                                (error_message.empty() ? \"\" : \" - \" + error_message),\n                                exprtk_error_location));\n\n               }\n               else if (unknown_symbol_resolver::e_usrmode_extended == unknown_symbol_resolver_->mode)\n               {\n                  if (unknown_symbol_resolver_->process(symbol, symtab, error_message))\n                  {\n                     expression_node_ptr result = parse_symtab_symbol();\n\n                     if (result)\n                     {\n                        return result;\n                     }\n                  }\n\n                  set_error(\n                     make_error(parser_error::e_symtab,\n                                current_token(),\n                                \"ERR188 - Failed to resolve symbol: '\" + symbol + \"'\" +\n                                (error_message.empty() ? 
\"\" : \" - \" + error_message),\n                                exprtk_error_location));\n               }\n\n               return error_node();\n            }\n         }\n\n         set_error(\n            make_error(parser_error::e_syntax,\n                       current_token(),\n                       \"ERR189 - Undefined symbol: '\" + symbol + \"'\",\n                       exprtk_error_location));\n\n         return error_node();\n      }\n\n      inline expression_node_ptr parse_symbol()\n      {\n         static const std::string symbol_if       = \"if\"      ;\n         static const std::string symbol_while    = \"while\"   ;\n         static const std::string symbol_repeat   = \"repeat\"  ;\n         static const std::string symbol_for      = \"for\"     ;\n         static const std::string symbol_switch   = \"switch\"  ;\n         static const std::string symbol_null     = \"null\"    ;\n         static const std::string symbol_break    = \"break\"   ;\n         static const std::string symbol_continue = \"continue\";\n         static const std::string symbol_var      = \"var\"     ;\n         static const std::string symbol_swap     = \"swap\"    ;\n         static const std::string symbol_return   = \"return\"  ;\n\n         if (valid_vararg_operation(current_token().value))\n         {\n            return parse_vararg_function();\n         }\n         else if (valid_base_operation(current_token().value))\n         {\n            return parse_base_operation();\n         }\n         else if (\n                   details::imatch(current_token().value, symbol_if) &&\n                   settings_.control_struct_enabled(current_token().value)\n                 )\n         {\n            return parse_conditional_statement();\n         }\n         else if (\n                   details::imatch(current_token().value, symbol_while) &&\n                   settings_.control_struct_enabled(current_token().value)\n                 )\n         {\n            
return parse_while_loop();\n         }\n         else if (\n                   details::imatch(current_token().value, symbol_repeat) &&\n                   settings_.control_struct_enabled(current_token().value)\n                 )\n         {\n            return parse_repeat_until_loop();\n         }\n         else if (\n                   details::imatch(current_token().value, symbol_for) &&\n                   settings_.control_struct_enabled(current_token().value)\n                 )\n         {\n            return parse_for_loop();\n         }\n         else if (\n                   details::imatch(current_token().value, symbol_switch) &&\n                   settings_.control_struct_enabled(current_token().value)\n                 )\n         {\n            return parse_switch_statement();\n         }\n         else if (details::is_valid_sf_symbol(current_token().value))\n         {\n            return parse_special_function();\n         }\n         else if (details::imatch(current_token().value, symbol_null))\n         {\n            return parse_null_statement();\n         }\n         #ifndef exprtk_disable_break_continue\n         else if (details::imatch(current_token().value, symbol_break))\n         {\n            return parse_break_statement();\n         }\n         else if (details::imatch(current_token().value, symbol_continue))\n         {\n            return parse_continue_statement();\n         }\n         #endif\n         else if (details::imatch(current_token().value, symbol_var))\n         {\n            return parse_define_var_statement();\n         }\n         else if (details::imatch(current_token().value, symbol_swap))\n         {\n            return parse_swap_statement();\n         }\n         #ifndef exprtk_disable_return_statement\n         else if (\n                   details::imatch(current_token().value, symbol_return) &&\n                   settings_.control_struct_enabled(current_token().value)\n                 )\n         {\n     
       return parse_return_statement();\n         }\n         #endif\n         else if (symtab_store_.valid() || !sem_.empty())\n         {\n            return parse_symtab_symbol();\n         }\n         else\n         {\n            set_error(\n               make_error(parser_error::e_symtab,\n                          current_token(),\n                          \"ERR190 - Variable or function detected, yet symbol-table is invalid, Symbol: \" + current_token().value,\n                          exprtk_error_location));\n\n            return error_node();\n         }\n      }\n\n      inline expression_node_ptr parse_branch(precedence_level precedence = e_level00)\n      {\n         expression_node_ptr branch = error_node();\n\n         if (token_t::e_number == current_token().type)\n         {\n            T numeric_value = T(0);\n\n            if (details::string_to_real(current_token().value, numeric_value))\n            {\n               expression_node_ptr literal_exp = expression_generator_(numeric_value);\n\n               if (0 == literal_exp)\n               {\n                  set_error(\n                     make_error(parser_error::e_numeric,\n                                current_token(),\n                                \"ERR191 - Failed generate node for scalar: '\" + current_token().value + \"'\",\n                                exprtk_error_location));\n\n                  return error_node();\n               }\n\n               next_token();\n               branch = literal_exp;\n            }\n            else\n            {\n               set_error(\n                  make_error(parser_error::e_numeric,\n                             current_token(),\n                             \"ERR192 - Failed to convert '\" + current_token().value + \"' to a number\",\n                             exprtk_error_location));\n\n               return error_node();\n            }\n         }\n         else if (token_t::e_symbol == current_token().type)\n    
     {\n            branch = parse_symbol();\n         }\n         #ifndef exprtk_disable_string_capabilities\n         else if (token_t::e_string == current_token().type)\n         {\n            branch = parse_const_string();\n         }\n         #endif\n         else if (token_t::e_lbracket == current_token().type)\n         {\n            next_token();\n\n            if (0 == (branch = parse_expression()))\n               return error_node();\n            else if (!token_is(token_t::e_rbracket))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR193 - Expected ')' instead of: '\" + current_token().value + \"'\",\n                             exprtk_error_location));\n\n               free_node(node_allocator_,branch);\n\n               return error_node();\n            }\n            else if (!post_bracket_process(token_t::e_lbracket,branch))\n            {\n               free_node(node_allocator_,branch);\n\n               return error_node();\n            }\n         }\n         else if (token_t::e_lsqrbracket == current_token().type)\n         {\n            next_token();\n\n            if (0 == (branch = parse_expression()))\n               return error_node();\n            else if (!token_is(token_t::e_rsqrbracket))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR194 - Expected ']' instead of: '\" + current_token().value + \"'\",\n                             exprtk_error_location));\n\n               free_node(node_allocator_,branch);\n\n               return error_node();\n            }\n            else if (!post_bracket_process(token_t::e_lsqrbracket,branch))\n            {\n               free_node(node_allocator_,branch);\n\n               return error_node();\n            }\n         }\n         
else if (token_t::e_lcrlbracket == current_token().type)\n         {\n            next_token();\n\n            if (0 == (branch = parse_expression()))\n               return error_node();\n            else if (!token_is(token_t::e_rcrlbracket))\n            {\n               set_error(\n                  make_error(parser_error::e_syntax,\n                             current_token(),\n                             \"ERR195 - Expected '}' instead of: '\" + current_token().value + \"'\",\n                             exprtk_error_location));\n\n               free_node(node_allocator_,branch);\n\n               return error_node();\n            }\n            else if (!post_bracket_process(token_t::e_lcrlbracket,branch))\n            {\n               free_node(node_allocator_,branch);\n\n               return error_node();\n            }\n         }\n         else if (token_t::e_sub == current_token().type)\n         {\n            next_token();\n            branch = parse_expression(e_level11);\n\n            if (\n                 branch &&\n                 !(\n                    details::is_neg_unary_node    (branch) &&\n                    simplify_unary_negation_branch(branch)\n                  )\n               )\n            {\n               branch = expression_generator_(details::e_neg,branch);\n            }\n         }\n         else if (token_t::e_add == current_token().type)\n         {\n            next_token();\n            branch = parse_expression(e_level13);\n         }\n         else if (token_t::e_eof == current_token().type)\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                          current_token(),\n                          \"ERR196 - Premature end of expression[1]\",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n         else\n         {\n            set_error(\n               make_error(parser_error::e_syntax,\n                    
      current_token(),\n                          "ERR197 - Premature end of expression[2]",\n                          exprtk_error_location));\n\n            return error_node();\n         }\n\n         if (\n              branch                    &&\n              (e_level00 == precedence) &&\n              token_is(token_t::e_ternary,prsrhlpr_t::e_hold)\n            )\n         {\n            branch = parse_ternary_conditional_statement(branch);\n         }\n\n         parse_pending_string_rangesize(branch);\n\n         return branch;\n      }\n\n      template <typename Type>\n      class expression_generator\n      {\n      public:\n\n         typedef details::expression_node<Type>* expression_node_ptr;\n         typedef expression_node_ptr (*synthesize_functor_t)(expression_generator<T>&, const details::operator_type& operation, expression_node_ptr (&branch)[2]);\n         typedef std::map<std::string,synthesize_functor_t> synthesize_map_t;\n         typedef typename exprtk::parser<Type> parser_t;\n         typedef const Type& vtype;\n         typedef const Type  ctype;\n\n         inline void init_synthesize_map()\n         {\n            #ifndef exprtk_disable_enhanced_features\n            synthesize_map_["(v)o(v)"] = synthesize_vov_expression::process;\n            synthesize_map_["(c)o(v)"] = synthesize_cov_expression::process;\n            synthesize_map_["(v)o(c)"] = synthesize_voc_expression::process;\n\n            #define register_synthesizer(S)                     \\n            synthesize_map_[S ::node_type::id()] = S ::process; \\n\n            register_synthesizer(synthesize_vovov_expression0)\n            register_synthesizer(synthesize_vovov_expression1)\n            register_synthesizer(synthesize_vovoc_expression0)\n            register_synthesizer(synthesize_vovoc_expression1)\n            register_synthesizer(synthesize_vocov_expression0)\n            register_synthesizer(synthesize_vocov_expression1)\n            register_synthesizer(synthesize_covov_expression0)\n            register_synthesizer(synthesize_covov_expression1)\n            register_synthesizer(synthesize_covoc_expression0)\n            register_synthesizer(synthesize_covoc_expression1)\n            register_synthesizer(synthesize_cocov_expression1)\n            register_synthesizer(synthesize_vococ_expression0)\n\n            register_synthesizer(synthesize_vovovov_expression0)\n            register_synthesizer(synthesize_vovovoc_expression0)\n            register_synthesizer(synthesize_vovocov_expression0)\n            register_synthesizer(synthesize_vocovov_expression0)\n            register_synthesizer(synthesize_covovov_expression0)\n            register_synthesizer(synthesize_covocov_expression0)\n            register_synthesizer(synthesize_vocovoc_expression0)\n            register_synthesizer(synthesize_covovoc_expression0)\n            register_synthesizer(synthesize_vococov_expression0)\n\n            register_synthesizer(synthesize_vovovov_expression1)\n            register_synthesizer(synthesize_vovovoc_expression1)\n            register_synthesizer(synthesize_vovocov_expression1)\n            register_synthesizer(synthesize_vocovov_expression1)\n            register_synthesizer(synthesize_covovov_expression1)\n            register_synthesizer(synthesize_covocov_expression1)\n            register_synthesizer(synthesize_vocovoc_expression1)\n            register_synthesizer(synthesize_covovoc_expression1)\n            register_synthesizer(synthesize_vococov_expression1)\n\n            register_synthesizer(synthesize_vovovov_expression2)\n            register_synthesizer(synthesize_vovovoc_expression2)\n            register_synthesizer(synthesize_vovocov_expression2)\n            register_synthesizer(synthesize_vocovov_expression2)\n            register_synthesizer(synthesize_covovov_expression2)\n            register_synthesizer(synthesize_covocov_expression2)\n            register_synthesizer(synthesize_vocovoc_expression2)\n            register_synthesizer(synthesize_covovoc_expression2)\n\n            register_synthesizer(synthesize_vovovov_expression3)\n            register_synthesizer(synthesize_vovovoc_expression3)\n            register_synthesizer(synthesize_vovocov_expression3)\n            register_synthesizer(synthesize_vocovov_expression3)\n            register_synthesizer(synthesize_covovov_expression3)\n            register_synthesizer(synthesize_covocov_expression3)\n            register_synthesizer(synthesize_vocovoc_expression3)\n            register_synthesizer(synthesize_covovoc_expression3)\n            register_synthesizer(synthesize_vococov_expression3)\n\n            register_synthesizer(synthesize_vovovov_expression4)\n            register_synthesizer(synthesize_vovovoc_expression4)\n            register_synthesizer(synthesize_vovocov_expression4)\n            register_synthesizer(synthesize_vocovov_expression4)\n            register_synthesizer(synthesize_covovov_expression4)\n            register_synthesizer(synthesize_covocov_expression4)\n            register_synthesizer(synthesize_vocovoc_expression4)\n            register_synthesizer(synthesize_covovoc_expression4)\n            #endif\n         }\n\n         inline void set_parser(parser_t& p)\n         {\n            parser_ = &p;\n         }\n\n         inline void set_uom(unary_op_map_t& unary_op_map)\n         {\n            unary_op_map_ = &unary_op_map;\n         }\n\n         inline void set_bom(binary_op_map_t& binary_op_map)\n         {\n            binary_op_map_ = &binary_op_map;\n         }\n\n         inline void set_ibom(inv_binary_op_map_t& inv_binary_op_map)\n         {\n            inv_binary_op_map_ = &inv_binary_op_map;\n         }\n\n         inline void set_sf3m(sf3_map_t& sf3_map)\n         {\n            sf3_map_ = &sf3_map;\n         }\n\n         inline void set_sf4m(sf4_map_t& sf4_map)\n         {\n            sf4_map_ = &sf4_map;\n         }\n\n         inline void 
set_allocator(details::node_allocator& na)\n         {\n            node_allocator_ = &na;\n         }\n\n         inline void set_strength_reduction_state(const bool enabled)\n         {\n            strength_reduction_enabled_ = enabled;\n         }\n\n         inline bool strength_reduction_enabled() const\n         {\n            return strength_reduction_enabled_;\n         }\n\n         inline bool valid_operator(const details::operator_type& operation, binary_functor_t& bop)\n         {\n            typename binary_op_map_t::iterator bop_itr = binary_op_map_->find(operation);\n\n            if ((*binary_op_map_).end() == bop_itr)\n               return false;\n\n            bop = bop_itr->second;\n\n            return true;\n         }\n\n         inline bool valid_operator(const details::operator_type& operation, unary_functor_t& uop)\n         {\n            typename unary_op_map_t::iterator uop_itr = unary_op_map_->find(operation);\n\n            if ((*unary_op_map_).end() == uop_itr)\n               return false;\n\n            uop = uop_itr->second;\n\n            return true;\n         }\n\n         inline details::operator_type get_operator(const binary_functor_t& bop)\n         {\n            return (*inv_binary_op_map_).find(bop)->second;\n         }\n\n         inline expression_node_ptr operator() (const Type& v) const\n         {\n            return node_allocator_->allocate<literal_node_t>(v);\n         }\n\n         #ifndef exprtk_disable_string_capabilities\n         inline expression_node_ptr operator() (const std::string& s) const\n         {\n            return node_allocator_->allocate<string_literal_node_t>(s);\n         }\n\n         inline expression_node_ptr operator() (std::string& s, range_t& rp) const\n         {\n            return node_allocator_->allocate_rr<string_range_node_t>(s,rp);\n         }\n\n         inline expression_node_ptr operator() (const std::string& s, range_t& rp) const\n         {\n            return 
node_allocator_->allocate_tt<const_string_range_node_t>(s,rp);\n         }\n\n         inline expression_node_ptr operator() (expression_node_ptr branch, range_t& rp) const\n         {\n            if (is_generally_string_node(branch))\n               return node_allocator_->allocate_tt<generic_string_range_node_t>(branch,rp);\n            else\n               return error_node();\n         }\n         #endif\n\n         inline bool unary_optimisable(const details::operator_type& operation) const\n         {\n            return (details::e_abs   == operation) || (details::e_acos  == operation) ||\n                   (details::e_acosh == operation) || (details::e_asin  == operation) ||\n                   (details::e_asinh == operation) || (details::e_atan  == operation) ||\n                   (details::e_atanh == operation) || (details::e_ceil  == operation) ||\n                   (details::e_cos   == operation) || (details::e_cosh  == operation) ||\n                   (details::e_exp   == operation) || (details::e_expm1 == operation) ||\n                   (details::e_floor == operation) || (details::e_log   == operation) ||\n                   (details::e_log10 == operation) || (details::e_log2  == operation) ||\n                   (details::e_log1p == operation) || (details::e_neg   == operation) ||\n                   (details::e_pos   == operation) || (details::e_round == operation) ||\n                   (details::e_sin   == operation) || (details::e_sinc  == operation) ||\n                   (details::e_sinh  == operation) || (details::e_sqrt  == operation) ||\n                   (details::e_tan   == operation) || (details::e_tanh  == operation) ||\n                   (details::e_cot   == operation) || (details::e_sec   == operation) ||\n                   (details::e_csc   == operation) || (details::e_r2d   == operation) ||\n                   (details::e_d2r   == operation) || (details::e_d2g   == operation) ||\n                   (details::e_g2d   == 
operation) || (details::e_notl  == operation) ||\n                   (details::e_sgn   == operation) || (details::e_erf   == operation) ||\n                   (details::e_erfc  == operation) || (details::e_ncdf  == operation) ||\n                   (details::e_frac  == operation) || (details::e_trunc == operation) ;\n         }\n\n         inline bool sf3_optimisable(const std::string& sf3id, trinary_functor_t& tfunc)\n         {\n            typename sf3_map_t::iterator itr = sf3_map_->find(sf3id);\n\n            if (sf3_map_->end() == itr)\n               return false;\n            else\n               tfunc = itr->second.first;\n\n            return true;\n         }\n\n         inline bool sf4_optimisable(const std::string& sf4id, quaternary_functor_t& qfunc)\n         {\n            typename sf4_map_t::iterator itr = sf4_map_->find(sf4id);\n\n            if (sf4_map_->end() == itr)\n               return false;\n            else\n               qfunc = itr->second.first;\n\n            return true;\n         }\n\n         inline bool sf3_optimisable(const std::string& sf3id, details::operator_type& operation)\n         {\n            typename sf3_map_t::iterator itr = sf3_map_->find(sf3id);\n\n            if (sf3_map_->end() == itr)\n               return false;\n            else\n               operation = itr->second.second;\n\n            return true;\n         }\n\n         inline bool sf4_optimisable(const std::string& sf4id, details::operator_type& operation)\n         {\n            typename sf4_map_t::iterator itr = sf4_map_->find(sf4id);\n\n            if (sf4_map_->end() == itr)\n               return false;\n            else\n               operation = itr->second.second;\n\n            return true;\n         }\n\n         inline expression_node_ptr operator() (const details::operator_type& operation, expression_node_ptr (&branch)[1])\n         {\n            if (0 == branch[0])\n            {\n               return error_node();\n            }\n    
        else if (details::is_null_node(branch[0]))\n            {\n               return branch[0];\n            }\n            else if (details::is_break_node(branch[0]))\n            {\n               return error_node();\n            }\n            else if (details::is_continue_node(branch[0]))\n            {\n               return error_node();\n            }\n            else if (details::is_constant_node(branch[0]))\n            {\n               return synthesize_expression<unary_node_t,1>(operation,branch);\n            }\n            else if (unary_optimisable(operation) && details::is_variable_node(branch[0]))\n            {\n               return synthesize_uv_expression(operation,branch);\n            }\n            else if (unary_optimisable(operation) && details::is_ivector_node(branch[0]))\n            {\n               return synthesize_uvec_expression(operation,branch);\n            }\n            else\n               return synthesize_unary_expression(operation,branch);\n         }\n\n         inline bool is_assignment_operation(const details::operator_type& operation) const\n         {\n            return (\n                     (details::e_addass == operation) ||\n                     (details::e_subass == operation) ||\n                     (details::e_mulass == operation) ||\n                     (details::e_divass == operation) ||\n                     (details::e_modass == operation)\n                   ) &&\n                   parser_->settings_.assignment_enabled(operation);\n         }\n\n         #ifndef exprtk_disable_string_capabilities\n         inline bool valid_string_operation(const details::operator_type& operation) const\n         {\n            return (details::e_add    == operation) ||\n                   (details::e_lt     == operation) ||\n                   (details::e_lte    == operation) ||\n                   (details::e_gt     == operation) ||\n                   (details::e_gte    == operation) ||\n                   
(details::e_eq     == operation) ||\n                   (details::e_ne     == operation) ||\n                   (details::e_in     == operation) ||\n                   (details::e_like   == operation) ||\n                   (details::e_ilike  == operation) ||\n                   (details::e_assign == operation) ||\n                   (details::e_addass == operation) ||\n                   (details::e_swap   == operation) ;\n         }\n         #else\n         inline bool valid_string_operation(const details::operator_type&) const\n         {\n            return false;\n         }\n         #endif\n\n         inline std::string to_str(const details::operator_type& operation) const\n         {\n            switch (operation)\n            {\n               case details::e_add  : return \"+\"      ;\n               case details::e_sub  : return \"-\"      ;\n               case details::e_mul  : return \"*\"      ;\n               case details::e_div  : return \"/\"      ;\n               case details::e_mod  : return \"%\"      ;\n               case details::e_pow  : return \"^\"      ;\n               case details::e_lt   : return \"<\"      ;\n               case details::e_lte  : return \"<=\"     ;\n               case details::e_gt   : return \">\"      ;\n               case details::e_gte  : return \">=\"     ;\n               case details::e_eq   : return \"==\"     ;\n               case details::e_ne   : return \"!=\"     ;\n               case details::e_and  : return \"and\"    ;\n               case details::e_nand : return \"nand\"   ;\n               case details::e_or   : return \"or\"     ;\n               case details::e_nor  : return \"nor\"    ;\n               case details::e_xor  : return \"xor\"    ;\n               case details::e_xnor : return \"xnor\"   ;\n               default              : return \"UNKNOWN\";\n            }\n         }\n\n         inline bool operation_optimisable(const details::operator_type& operation) const\n         
{\n            return (details::e_add  == operation) ||\n                   (details::e_sub  == operation) ||\n                   (details::e_mul  == operation) ||\n                   (details::e_div  == operation) ||\n                   (details::e_mod  == operation) ||\n                   (details::e_pow  == operation) ||\n                   (details::e_lt   == operation) ||\n                   (details::e_lte  == operation) ||\n                   (details::e_gt   == operation) ||\n                   (details::e_gte  == operation) ||\n                   (details::e_eq   == operation) ||\n                   (details::e_ne   == operation) ||\n                   (details::e_and  == operation) ||\n                   (details::e_nand == operation) ||\n                   (details::e_or   == operation) ||\n                   (details::e_nor  == operation) ||\n                   (details::e_xor  == operation) ||\n                   (details::e_xnor == operation) ;\n         }\n\n         inline std::string branch_to_id(expression_node_ptr branch)\n         {\n            static const std::string null_str   (\"(null)\" );\n            static const std::string const_str  (\"(c)\"    );\n            static const std::string var_str    (\"(v)\"    );\n            static const std::string vov_str    (\"(vov)\"  );\n            static const std::string cov_str    (\"(cov)\"  );\n            static const std::string voc_str    (\"(voc)\"  );\n            static const std::string str_str    (\"(s)\"    );\n            static const std::string strrng_str (\"(rngs)\" );\n            static const std::string cs_str     (\"(cs)\"   );\n            static const std::string cstrrng_str(\"(crngs)\");\n\n            if (details::is_null_node(branch))\n               return null_str;\n            else if (details::is_constant_node(branch))\n               return const_str;\n            else if (details::is_variable_node(branch))\n               return var_str;\n            else if 
(details::is_vov_node(branch))\n               return vov_str;\n            else if (details::is_cov_node(branch))\n               return cov_str;\n            else if (details::is_voc_node(branch))\n               return voc_str;\n            else if (details::is_string_node(branch))\n               return str_str;\n            else if (details::is_const_string_node(branch))\n               return cs_str;\n            else if (details::is_string_range_node(branch))\n               return strrng_str;\n            else if (details::is_const_string_range_node(branch))\n               return cstrrng_str;\n            else if (details::is_t0ot1ot2_node(branch))\n               return \"(\" + dynamic_cast<details::T0oT1oT2_base_node<T>*>(branch)->type_id() + \")\";\n            else if (details::is_t0ot1ot2ot3_node(branch))\n               return \"(\" + dynamic_cast<details::T0oT1oT2oT3_base_node<T>*>(branch)->type_id() + \")\";\n            else\n               return \"ERROR\";\n         }\n\n         inline std::string branch_to_id(expression_node_ptr (&branch)[2])\n         {\n            return branch_to_id(branch[0]) + std::string(\"o\") + branch_to_id(branch[1]);\n         }\n\n         inline bool cov_optimisable(const details::operator_type& operation, expression_node_ptr (&branch)[2]) const\n         {\n            if (!operation_optimisable(operation))\n               return false;\n            else\n               return details::is_constant_node(branch[0]) &&\n                      details::is_variable_node(branch[1]) ;\n         }\n\n         inline bool voc_optimisable(const details::operator_type& operation, expression_node_ptr (&branch)[2]) const\n         {\n            if (!operation_optimisable(operation))\n               return false;\n            else\n               return details::is_variable_node(branch[0]) &&\n                      details::is_constant_node(branch[1]) ;\n         }\n\n         inline bool vov_optimisable(const 
details::operator_type& operation, expression_node_ptr (&branch)[2]) const\n         {\n            if (!operation_optimisable(operation))\n               return false;\n            else\n               return details::is_variable_node(branch[0]) &&\n                      details::is_variable_node(branch[1]) ;\n         }\n\n         inline bool cob_optimisable(const details::operator_type& operation, expression_node_ptr (&branch)[2]) const\n         {\n            if (!operation_optimisable(operation))\n               return false;\n            else\n               return details::is_constant_node(branch[0]) &&\n                     !details::is_constant_node(branch[1]) ;\n         }\n\n         inline bool boc_optimisable(const details::operator_type& operation, expression_node_ptr (&branch)[2]) const\n         {\n            if (!operation_optimisable(operation))\n               return false;\n            else\n               return !details::is_constant_node(branch[0]) &&\n                       details::is_constant_node(branch[1]) ;\n         }\n\n         inline bool cocob_optimisable(const details::operator_type& operation, expression_node_ptr (&branch)[2]) const\n         {\n            if (\n                 (details::e_add == operation) ||\n                 (details::e_sub == operation) ||\n                 (details::e_mul == operation) ||\n                 (details::e_div == operation)\n               )\n            {\n               return (details::is_constant_node(branch[0]) && details::is_cob_node(branch[1])) ||\n                      (details::is_constant_node(branch[1]) && details::is_cob_node(branch[0])) ;\n            }\n            else\n               return false;\n         }\n\n         inline bool coboc_optimisable(const details::operator_type& operation, expression_node_ptr (&branch)[2]) const\n         {\n            if (\n                 (details::e_add == operation) ||\n                 (details::e_sub == operation) ||\n                 
(details::e_mul == operation) ||\n                 (details::e_div == operation)\n               )\n            {\n               return (details::is_constant_node(branch[0]) && details::is_boc_node(branch[1])) ||\n                      (details::is_constant_node(branch[1]) && details::is_boc_node(branch[0])) ;\n            }\n            else\n               return false;\n         }\n\n         inline bool uvouv_optimisable(const details::operator_type& operation, expression_node_ptr (&branch)[2]) const\n         {\n            if (!operation_optimisable(operation))\n               return false;\n            else\n               return details::is_uv_node(branch[0]) &&\n                      details::is_uv_node(branch[1]) ;\n         }\n\n         inline bool vob_optimisable(const details::operator_type& operation, expression_node_ptr (&branch)[2]) const\n         {\n            if (!operation_optimisable(operation))\n               return false;\n            else\n               return details::is_variable_node(branch[0]) &&\n                     !details::is_variable_node(branch[1]) ;\n         }\n\n         inline bool bov_optimisable(const details::operator_type& operation, expression_node_ptr (&branch)[2]) const\n         {\n            if (!operation_optimisable(operation))\n               return false;\n            else\n               return !details::is_variable_node(branch[0]) &&\n                       details::is_variable_node(branch[1]) ;\n         }\n\n         inline bool binext_optimisable(const details::operator_type& operation, expression_node_ptr (&branch)[2]) const\n         {\n            if (!operation_optimisable(operation))\n               return false;\n            else\n               return !details::is_constant_node(branch[0]) ||\n                      !details::is_constant_node(branch[1]) ;\n         }\n\n         inline bool is_invalid_assignment_op(const details::operator_type& operation, expression_node_ptr (&branch)[2])\n         
{\n            if (is_assignment_operation(operation))\n            {\n               const bool b1_is_genstring = details::is_generally_string_node(branch[1]);\n\n               if (details::is_string_node(branch[0]))\n                  return !b1_is_genstring;\n               else\n                  return (\n                           !details::is_variable_node          (branch[0]) &&\n                           !details::is_vector_elem_node       (branch[0]) &&\n                           !details::is_rebasevector_elem_node (branch[0]) &&\n                           !details::is_rebasevector_celem_node(branch[0]) &&\n                           !details::is_vector_node            (branch[0])\n                         )\n                         || b1_is_genstring;\n            }\n            else\n               return false;\n         }\n\n         inline bool is_constpow_operation(const details::operator_type& operation, expression_node_ptr(&branch)[2])\n         {\n            if (\n                 !details::is_constant_node(branch[1]) ||\n                  details::is_constant_node(branch[0]) ||\n                  details::is_variable_node(branch[0]) ||\n                  details::is_vector_node  (branch[0]) ||\n                  details::is_generally_string_node(branch[0])\n               )\n               return false;\n\n            const Type c = static_cast<details::literal_node<Type>*>(branch[1])->value();\n\n            return cardinal_pow_optimisable(operation, c);\n         }\n\n         inline bool is_invalid_break_continue_op(expression_node_ptr (&branch)[2])\n         {\n            return (\n                     details::is_break_node   (branch[0]) ||\n                     details::is_break_node   (branch[1]) ||\n                     details::is_continue_node(branch[0]) ||\n                     details::is_continue_node(branch[1])\n                   );\n         }\n\n         inline bool is_invalid_string_op(const details::operator_type& 
operation, expression_node_ptr (&branch)[2])\n         {\n            const bool b0_string = is_generally_string_node(branch[0]);\n            const bool b1_string = is_generally_string_node(branch[1]);\n\n            bool result = false;\n\n            if (b0_string != b1_string)\n               result = true;\n            else if (!valid_string_operation(operation) && b0_string && b1_string)\n               result = true;\n\n            if (result)\n            {\n               parser_->set_synthesis_error(\"Invalid string operation\");\n            }\n\n            return result;\n         }\n\n         inline bool is_invalid_string_op(const details::operator_type& operation, expression_node_ptr (&branch)[3])\n         {\n            const bool b0_string = is_generally_string_node(branch[0]);\n            const bool b1_string = is_generally_string_node(branch[1]);\n            const bool b2_string = is_generally_string_node(branch[2]);\n\n            bool result = false;\n\n            if ((b0_string != b1_string) || (b1_string != b2_string))\n               result = true;\n            else if ((details::e_inrange != operation) && b0_string && b1_string && b2_string)\n               result = true;\n\n            if (result)\n            {\n               parser_->set_synthesis_error(\"Invalid string operation\");\n            }\n\n            return result;\n         }\n\n         inline bool is_string_operation(const details::operator_type& operation, expression_node_ptr (&branch)[2])\n         {\n            const bool b0_string = is_generally_string_node(branch[0]);\n            const bool b1_string = is_generally_string_node(branch[1]);\n\n            return (b0_string && b1_string && valid_string_operation(operation));\n         }\n\n         inline bool is_string_operation(const details::operator_type& operation, expression_node_ptr (&branch)[3])\n         {\n            const bool b0_string = is_generally_string_node(branch[0]);\n            const bool 
b1_string = is_generally_string_node(branch[1]);\n            const bool b2_string = is_generally_string_node(branch[2]);\n\n            return (b0_string && b1_string && b2_string && (details::e_inrange == operation));\n         }\n\n         #ifndef exprtk_disable_sc_andor\n         inline bool is_shortcircuit_expression(const details::operator_type& operation) const\n         {\n            return (\n                     (details::e_scand == operation) ||\n                     (details::e_scor  == operation)\n                   );\n         }\n         #else\n         inline bool is_shortcircuit_expression(const details::operator_type&) const\n         {\n            return false;\n         }\n         #endif\n\n         inline bool is_null_present(expression_node_ptr (&branch)[2]) const\n         {\n            return (\n                     details::is_null_node(branch[0]) ||\n                     details::is_null_node(branch[1])\n                   );\n         }\n\n         inline bool is_vector_eqineq_logic_operation(const details::operator_type& operation, expression_node_ptr (&branch)[2]) const\n         {\n            if (!is_ivector_node(branch[0]) && !is_ivector_node(branch[1]))\n               return false;\n            else\n               return (\n                        (details::e_lt    == operation) ||\n                        (details::e_lte   == operation) ||\n                        (details::e_gt    == operation) ||\n                        (details::e_gte   == operation) ||\n                        (details::e_eq    == operation) ||\n                        (details::e_ne    == operation) ||\n                        (details::e_equal == operation) ||\n                        (details::e_and   == operation) ||\n                        (details::e_nand  == operation) ||\n                        (details::e_or    == operation) ||\n                        (details::e_nor   == operation) ||\n                        (details::e_xor   == operation) ||\n                        (details::e_xnor  == operation)\n                      );\n         }\n\n         inline bool is_vector_arithmetic_operation(const details::operator_type& operation, expression_node_ptr (&branch)[2]) const\n         {\n            if (!is_ivector_node(branch[0]) && !is_ivector_node(branch[1]))\n               return false;\n            else\n               return (\n                        (details::e_add == operation) ||\n                        (details::e_sub == operation) ||\n                        (details::e_mul == operation) ||\n                        (details::e_div == operation) ||\n                        (details::e_pow == operation)\n                      );\n         }\n\n         inline expression_node_ptr operator() (const details::operator_type& operation, expression_node_ptr (&branch)[2])\n         {\n            if ((0 == branch[0]) || (0 == branch[1]))\n            {\n               return error_node();\n            }\n            else if (is_invalid_string_op(operation,branch))\n            {\n               return error_node();\n            }\n            else if (is_invalid_assignment_op(operation,branch))\n            {\n               return error_node();\n            }\n            else if (is_invalid_break_continue_op(branch))\n            {\n               return error_node();\n            }\n            else if (details::e_assign == operation)\n            {\n               return synthesize_assignment_expression(operation, branch);\n            }\n            else if (details::e_swap == operation)\n            {\n               return synthesize_swap_expression(branch);\n            }\n            else if (is_assignment_operation(operation))\n            {\n               return synthesize_assignment_operation_expression(operation, branch);\n            }\n            else if (is_vector_eqineq_logic_operation(operation, branch))\n            {\n               return 
synthesize_veceqineqlogic_operation_expression(operation, branch);\n            }\n            else if (is_vector_arithmetic_operation(operation, branch))\n            {\n               return synthesize_vecarithmetic_operation_expression(operation, branch);\n            }\n            else if (is_shortcircuit_expression(operation))\n            {\n               return synthesize_shortcircuit_expression(operation, branch);\n            }\n            else if (is_string_operation(operation, branch))\n            {\n               return synthesize_string_expression(operation, branch);\n            }\n            else if (is_null_present(branch))\n            {\n               return synthesize_null_expression(operation, branch);\n            }\n            #ifndef exprtk_disable_cardinal_pow_optimisation\n            else if (is_constpow_operation(operation, branch))\n            {\n               return cardinal_pow_optimisation(branch);\n            }\n            #endif\n\n            expression_node_ptr result = error_node();\n\n            #ifndef exprtk_disable_enhanced_features\n            if (synthesize_expression(operation, branch, result))\n            {\n               return result;\n            }\n            else\n            #endif\n\n            {\n               /*\n                  Possible reductions:\n                  1. c o cob -> cob\n                  2. cob o c -> cob\n                  3. c o boc -> boc\n                  4. 
boc o c -> boc\n               */\n               result = error_node();\n\n               if (cocob_optimisable(operation, branch))\n               {\n                  result = synthesize_cocob_expression::process((*this), operation, branch);\n               }\n               else if (coboc_optimisable(operation, branch) && (0 == result))\n               {\n                  result = synthesize_coboc_expression::process((*this), operation, branch);\n               }\n\n               if (result)\n                  return result;\n            }\n\n            if (uvouv_optimisable(operation, branch))\n            {\n               return synthesize_uvouv_expression(operation, branch);\n            }\n            else if (vob_optimisable(operation, branch))\n            {\n               return synthesize_vob_expression::process((*this), operation, branch);\n            }\n            else if (bov_optimisable(operation, branch))\n            {\n               return synthesize_bov_expression::process((*this), operation, branch);\n            }\n            else if (cob_optimisable(operation, branch))\n            {\n               return synthesize_cob_expression::process((*this), operation, branch);\n            }\n            else if (boc_optimisable(operation, branch))\n            {\n               return synthesize_boc_expression::process((*this), operation, branch);\n            }\n            #ifndef exprtk_disable_enhanced_features\n            else if (cov_optimisable(operation, branch))\n            {\n               return synthesize_cov_expression::process((*this), operation, branch);\n            }\n            #endif\n            else if (binext_optimisable(operation, branch))\n            {\n               return synthesize_binary_ext_expression::process((*this), operation, branch);\n            }\n            else\n               return synthesize_expression<binary_node_t,2>(operation, branch);\n         }\n\n         inline expression_node_ptr 
operator() (const details::operator_type& operation, expression_node_ptr (&branch)[3])\n         {\n            if (\n                 (0 == branch[0]) ||\n                 (0 == branch[1]) ||\n                 (0 == branch[2])\n               )\n            {\n               details::free_all_nodes(*node_allocator_,branch);\n\n               return error_node();\n            }\n            else if (is_invalid_string_op(operation, branch))\n            {\n               return error_node();\n            }\n            else if (is_string_operation(operation, branch))\n            {\n               return synthesize_string_expression(operation, branch);\n            }\n            else\n               return synthesize_expression<trinary_node_t,3>(operation, branch);\n         }\n\n         inline expression_node_ptr operator() (const details::operator_type& operation, expression_node_ptr (&branch)[4])\n         {\n            return synthesize_expression<quaternary_node_t,4>(operation,branch);\n         }\n\n         inline expression_node_ptr operator() (const details::operator_type& operation, expression_node_ptr b0)\n         {\n            expression_node_ptr branch[1] = { b0 };\n            return (*this)(operation,branch);\n         }\n\n         inline expression_node_ptr operator() (const details::operator_type& operation, expression_node_ptr b0, expression_node_ptr b1)\n         {\n            if ((0 == b0) || (0 == b1))\n               return error_node();\n            else\n            {\n               expression_node_ptr branch[2] = { b0, b1 };\n               return expression_generator<Type>::operator()(operation,branch);\n            }\n         }\n\n         inline expression_node_ptr conditional(expression_node_ptr condition,\n                                                expression_node_ptr consequent,\n                                                expression_node_ptr alternative) const\n         {\n            if ((0 == condition) || (0 == 
consequent))\n            {\n               free_node(*node_allocator_,   condition);\n               free_node(*node_allocator_,  consequent);\n               free_node(*node_allocator_, alternative);\n\n               return error_node();\n            }\n            // Can the condition be immediately evaluated? if so optimise.\n            else if (details::is_constant_node(condition))\n            {\n               // True branch\n               if (details::is_true(condition))\n               {\n                  free_node(*node_allocator_,   condition);\n                  free_node(*node_allocator_, alternative);\n\n                  return consequent;\n               }\n               // False branch\n               else\n               {\n                  free_node(*node_allocator_,  condition);\n                  free_node(*node_allocator_, consequent);\n\n                  if (alternative)\n                     return alternative;\n                  else\n                     return node_allocator_->allocate<details::null_node<T> >();\n               }\n            }\n            else if ((0 != consequent) && (0 != alternative))\n            {\n               return node_allocator_->\n                        allocate<conditional_node_t>(condition, consequent, alternative);\n            }\n            else\n               return node_allocator_->\n                        allocate<cons_conditional_node_t>(condition, consequent);\n         }\n\n         #ifndef exprtk_disable_string_capabilities\n         inline expression_node_ptr conditional_string(expression_node_ptr condition,\n                                                       expression_node_ptr consequent,\n                                                       expression_node_ptr alternative) const\n         {\n            if ((0 == condition) || (0 == consequent))\n            {\n               free_node(*node_allocator_,   condition);\n               free_node(*node_allocator_,  consequent);\n 
              free_node(*node_allocator_, alternative);\n\n               return error_node();\n            }\n            // Can the condition be immediately evaluated? if so optimise.\n            else if (details::is_constant_node(condition))\n            {\n               // True branch\n               if (details::is_true(condition))\n               {\n                  free_node(*node_allocator_,   condition);\n                  free_node(*node_allocator_, alternative);\n\n                  return consequent;\n               }\n               // False branch\n               else\n               {\n                  free_node(*node_allocator_,  condition);\n                  free_node(*node_allocator_, consequent);\n\n                  if (alternative)\n                     return alternative;\n                  else\n                     return node_allocator_->\n                              allocate_c<details::string_literal_node<Type> >(\"\");\n               }\n            }\n            else if ((0 != consequent) && (0 != alternative))\n               return node_allocator_->\n                        allocate<conditional_string_node_t>(condition, consequent, alternative);\n            else\n               return error_node();\n         }\n         #else\n         inline expression_node_ptr conditional_string(expression_node_ptr,\n                                                       expression_node_ptr,\n                                                       expression_node_ptr) const\n         {\n            return error_node();\n         }\n         #endif\n\n         inline expression_node_ptr while_loop(expression_node_ptr& condition,\n                                               expression_node_ptr& branch,\n                                               const bool brkcont = false) const\n         {\n            if (!brkcont && details::is_constant_node(condition))\n            {\n               expression_node_ptr result = error_node();\n        
       if (details::is_true(condition))\n                  // Infinite loops are not allowed.\n                  result = error_node();\n               else\n                  result = node_allocator_->allocate<details::null_node<Type> >();\n\n               free_node(*node_allocator_, condition);\n               free_node(*node_allocator_,    branch);\n\n               return result;\n            }\n            else if (details::is_null_node(condition))\n            {\n               free_node(*node_allocator_,condition);\n\n               return branch;\n            }\n            else if (!brkcont)\n               return node_allocator_->allocate<while_loop_node_t>(condition,branch);\n            #ifndef exprtk_disable_break_continue\n            else\n               return node_allocator_->allocate<while_loop_bc_node_t>(condition,branch);\n            #else\n               return error_node();\n            #endif\n         }\n\n         inline expression_node_ptr repeat_until_loop(expression_node_ptr& condition,\n                                                      expression_node_ptr& branch,\n                                                      const bool brkcont = false) const\n         {\n            if (!brkcont && details::is_constant_node(condition))\n            {\n               if (\n                    details::is_true(condition) &&\n                    details::is_constant_node(branch)\n                  )\n               {\n                  free_node(*node_allocator_,condition);\n\n                  return branch;\n               }\n\n               free_node(*node_allocator_, condition);\n               free_node(*node_allocator_,    branch);\n\n               return error_node();\n            }\n            else if (details::is_null_node(condition))\n            {\n               free_node(*node_allocator_,condition);\n\n               return branch;\n            }\n            else if (!brkcont)\n               return 
node_allocator_->allocate<repeat_until_loop_node_t>(condition,branch);\n            #ifndef exprtk_disable_break_continue\n            else\n               return node_allocator_->allocate<repeat_until_loop_bc_node_t>(condition,branch);\n            #else\n               return error_node();\n            #endif\n         }\n\n         inline expression_node_ptr for_loop(expression_node_ptr& initialiser,\n                                             expression_node_ptr& condition,\n                                             expression_node_ptr& incrementor,\n                                             expression_node_ptr& loop_body,\n                                             bool brkcont = false) const\n         {\n            if (!brkcont && details::is_constant_node(condition))\n            {\n               expression_node_ptr result = error_node();\n\n               if (details::is_true(condition))\n                  // Infinite loops are not allowed.\n                  result = error_node();\n               else\n                  result = node_allocator_->allocate<details::null_node<Type> >();\n\n               free_node(*node_allocator_, initialiser);\n               free_node(*node_allocator_,   condition);\n               free_node(*node_allocator_, incrementor);\n               free_node(*node_allocator_,   loop_body);\n\n               return result;\n            }\n            else if (details::is_null_node(condition))\n            {\n               free_node(*node_allocator_, initialiser);\n               free_node(*node_allocator_,   condition);\n               free_node(*node_allocator_, incrementor);\n\n               return loop_body;\n            }\n            else if (!brkcont)\n               return node_allocator_->allocate<for_loop_node_t>\n                                       (\n                                         initialiser,\n                                         condition,\n                                         
incrementor,\n                                         loop_body\n                                       );\n\n            #ifndef exprtk_disable_break_continue\n            else\n               return node_allocator_->allocate<for_loop_bc_node_t>\n                                       (\n                                         initialiser,\n                                         condition,\n                                         incrementor,\n                                         loop_body\n                                       );\n            #else\n            return error_node();\n            #endif\n         }\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         inline expression_node_ptr const_optimise_switch(Sequence<expression_node_ptr,Allocator>& arg_list)\n         {\n            expression_node_ptr result = error_node();\n\n            for (std::size_t i = 0; i < (arg_list.size() / 2); ++i)\n            {\n               expression_node_ptr condition  = arg_list[(2 * i)    ];\n               expression_node_ptr consequent = arg_list[(2 * i) + 1];\n\n               if ((0 == result) && details::is_true(condition))\n               {\n                  result = consequent;\n                  break;\n               }\n            }\n\n            if (0 == result)\n            {\n               result = arg_list.back();\n            }\n\n            for (std::size_t i = 0; i < arg_list.size(); ++i)\n            {\n               expression_node_ptr current_expr = arg_list[i];\n\n               if (current_expr && (current_expr != result))\n               {\n                  free_node(*node_allocator_,current_expr);\n               }\n            }\n\n            return result;\n         }\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         inline expression_node_ptr 
const_optimise_mswitch(Sequence<expression_node_ptr,Allocator>& arg_list)\n         {\n            expression_node_ptr result = error_node();\n\n            for (std::size_t i = 0; i < (arg_list.size() / 2); ++i)\n            {\n               expression_node_ptr condition  = arg_list[(2 * i)    ];\n               expression_node_ptr consequent = arg_list[(2 * i) + 1];\n\n               if (details::is_true(condition))\n               {\n                  result = consequent;\n               }\n            }\n\n            if (0 == result)\n            {\n               T zero = T(0);\n               result = node_allocator_->allocate<literal_node_t>(zero);\n            }\n\n            for (std::size_t i = 0; i < arg_list.size(); ++i)\n            {\n               expression_node_ptr& current_expr = arg_list[i];\n\n               if (current_expr && (current_expr != result))\n               {\n                  free_node(*node_allocator_,current_expr);\n               }\n            }\n\n            return result;\n         }\n\n         struct switch_nodes\n         {\n            typedef std::vector<expression_node_ptr> arg_list_t;\n\n            #define case_stmt(N)                                             \\\n            if (is_true(arg[(2 * N)])) { return arg[(2 * N) + 1]->value(); } \\\n\n            struct switch_1\n            {\n               static inline T process(const arg_list_t& arg)\n               {\n                  case_stmt(0)\n\n                  return arg.back()->value();\n               }\n            };\n\n            struct switch_2\n            {\n               static inline T process(const arg_list_t& arg)\n               {\n                  case_stmt(0) case_stmt(1)\n\n                  return arg.back()->value();\n               }\n            };\n\n            struct switch_3\n            {\n               static inline T process(const arg_list_t& arg)\n               {\n                  case_stmt(0) case_stmt(1)\n            
      case_stmt(2)\n\n                  return arg.back()->value();\n               }\n            };\n\n            struct switch_4\n            {\n               static inline T process(const arg_list_t& arg)\n               {\n                  case_stmt(0) case_stmt(1)\n                  case_stmt(2) case_stmt(3)\n\n                  return arg.back()->value();\n               }\n            };\n\n            struct switch_5\n            {\n               static inline T process(const arg_list_t& arg)\n               {\n                  case_stmt(0) case_stmt(1)\n                  case_stmt(2) case_stmt(3)\n                  case_stmt(4)\n\n                  return arg.back()->value();\n               }\n            };\n\n            struct switch_6\n            {\n               static inline T process(const arg_list_t& arg)\n               {\n                  case_stmt(0) case_stmt(1)\n                  case_stmt(2) case_stmt(3)\n                  case_stmt(4) case_stmt(5)\n\n                  return arg.back()->value();\n               }\n            };\n\n            struct switch_7\n            {\n               static inline T process(const arg_list_t& arg)\n               {\n                  case_stmt(0) case_stmt(1)\n                  case_stmt(2) case_stmt(3)\n                  case_stmt(4) case_stmt(5)\n                  case_stmt(6)\n\n                  return arg.back()->value();\n               }\n            };\n\n            #undef case_stmt\n         };\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         inline expression_node_ptr switch_statement(Sequence<expression_node_ptr,Allocator>& arg_list)\n         {\n            if (arg_list.empty())\n               return error_node();\n            else if (\n                      !all_nodes_valid(arg_list)   ||\n                      (arg_list.size() < 3)        ||\n                      ((arg_list.size() % 2) != 1)\n                 
   )\n            {\n               details::free_all_nodes(*node_allocator_,arg_list);\n\n               return error_node();\n            }\n            else if (is_constant_foldable(arg_list))\n               return const_optimise_switch(arg_list);\n\n            switch ((arg_list.size() - 1) / 2)\n            {\n               #define case_stmt(N)                                                 \\\n               case N :                                                             \\\n                  return node_allocator_->                                          \\\n                            allocate<details::switch_n_node                         \\\n                              <Type,typename switch_nodes::switch_##N> >(arg_list); \\\n\n               case_stmt(1)\n               case_stmt(2)\n               case_stmt(3)\n               case_stmt(4)\n               case_stmt(5)\n               case_stmt(6)\n               case_stmt(7)\n               #undef case_stmt\n\n               default : return node_allocator_->allocate<details::switch_node<Type> >(arg_list);\n            }\n         }\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         inline expression_node_ptr multi_switch_statement(Sequence<expression_node_ptr,Allocator>& arg_list)\n         {\n            if (!all_nodes_valid(arg_list))\n            {\n               details::free_all_nodes(*node_allocator_,arg_list);\n\n               return error_node();\n            }\n            else if (is_constant_foldable(arg_list))\n               return const_optimise_mswitch(arg_list);\n            else\n               return node_allocator_->allocate<details::multi_switch_node<Type> >(arg_list);\n         }\n\n         #define unary_opr_switch_statements            \\\n         case_stmt(details::  e_abs, details::  abs_op) \\\n         case_stmt(details:: e_acos, details:: acos_op) \\\n         case_stmt(details::e_acosh, 
details::acosh_op) \\\n         case_stmt(details:: e_asin, details:: asin_op) \\\n         case_stmt(details::e_asinh, details::asinh_op) \\\n         case_stmt(details:: e_atan, details:: atan_op) \\\n         case_stmt(details::e_atanh, details::atanh_op) \\\n         case_stmt(details:: e_ceil, details:: ceil_op) \\\n         case_stmt(details::  e_cos, details::  cos_op) \\\n         case_stmt(details:: e_cosh, details:: cosh_op) \\\n         case_stmt(details::  e_exp, details::  exp_op) \\\n         case_stmt(details::e_expm1, details::expm1_op) \\\n         case_stmt(details::e_floor, details::floor_op) \\\n         case_stmt(details::  e_log, details::  log_op) \\\n         case_stmt(details::e_log10, details::log10_op) \\\n         case_stmt(details:: e_log2, details:: log2_op) \\\n         case_stmt(details::e_log1p, details::log1p_op) \\\n         case_stmt(details::  e_neg, details::  neg_op) \\\n         case_stmt(details::  e_pos, details::  pos_op) \\\n         case_stmt(details::e_round, details::round_op) \\\n         case_stmt(details::  e_sin, details::  sin_op) \\\n         case_stmt(details:: e_sinc, details:: sinc_op) \\\n         case_stmt(details:: e_sinh, details:: sinh_op) \\\n         case_stmt(details:: e_sqrt, details:: sqrt_op) \\\n         case_stmt(details::  e_tan, details::  tan_op) \\\n         case_stmt(details:: e_tanh, details:: tanh_op) \\\n         case_stmt(details::  e_cot, details::  cot_op) \\\n         case_stmt(details::  e_sec, details::  sec_op) \\\n         case_stmt(details::  e_csc, details::  csc_op) \\\n         case_stmt(details::  e_r2d, details::  r2d_op) \\\n         case_stmt(details::  e_d2r, details::  d2r_op) \\\n         case_stmt(details::  e_d2g, details::  d2g_op) \\\n         case_stmt(details::  e_g2d, details::  g2d_op) \\\n         case_stmt(details:: e_notl, details:: notl_op) \\\n         case_stmt(details::  e_sgn, details::  sgn_op) \\\n         case_stmt(details::  e_erf, details::  erf_op) 
\\\n         case_stmt(details:: e_erfc, details:: erfc_op) \\\n         case_stmt(details:: e_ncdf, details:: ncdf_op) \\\n         case_stmt(details:: e_frac, details:: frac_op) \\\n         case_stmt(details::e_trunc, details::trunc_op) \\\n\n         inline expression_node_ptr synthesize_uv_expression(const details::operator_type& operation,\n                                                             expression_node_ptr (&branch)[1])\n         {\n            T& v = static_cast<details::variable_node<T>*>(branch[0])->ref();\n\n            switch (operation)\n            {\n               #define case_stmt(op0,op1)                                                          \\\n               case op0 : return node_allocator_->                                                 \\\n                             allocate<typename details::unary_variable_node<Type,op1<Type> > >(v); \\\n\n               unary_opr_switch_statements\n               #undef case_stmt\n               default : return error_node();\n            }\n         }\n\n         inline expression_node_ptr synthesize_uvec_expression(const details::operator_type& operation,\n                                                               expression_node_ptr (&branch)[1])\n         {\n            switch (operation)\n            {\n               #define case_stmt(op0,op1)                                                    \\\n               case op0 : return node_allocator_->                                           \\\n                             allocate<typename details::unary_vector_node<Type,op1<Type> > > \\\n                                (operation, branch[0]);                                      \\\n\n               unary_opr_switch_statements\n               #undef case_stmt\n               default : return error_node();\n            }\n         }\n\n         inline expression_node_ptr synthesize_unary_expression(const details::operator_type& operation,\n                                        
                        expression_node_ptr (&branch)[1])\n         {\n            switch (operation)\n            {\n               #define case_stmt(op0,op1)                                                                \\\n               case op0 : return node_allocator_->                                                       \\\n                             allocate<typename details::unary_branch_node<Type,op1<Type> > >(branch[0]); \\\n\n               unary_opr_switch_statements\n               #undef case_stmt\n               default : return error_node();\n            }\n         }\n\n         inline expression_node_ptr const_optimise_sf3(const details::operator_type& operation,\n                                                       expression_node_ptr (&branch)[3])\n         {\n            expression_node_ptr temp_node = error_node();\n\n            switch (operation)\n            {\n               #define case_stmt(op)                                                        \\\n               case details::e_sf##op : temp_node = node_allocator_->                       \\\n                             allocate<details::sf3_node<Type,details::sf##op##_op<Type> > > \\\n                                (operation, branch);                                        \\\n                             break;                                                         \\\n\n               case_stmt(00) case_stmt(01) case_stmt(02) case_stmt(03)\n               case_stmt(04) case_stmt(05) case_stmt(06) case_stmt(07)\n               case_stmt(08) case_stmt(09) case_stmt(10) case_stmt(11)\n               case_stmt(12) case_stmt(13) case_stmt(14) case_stmt(15)\n               case_stmt(16) case_stmt(17) case_stmt(18) case_stmt(19)\n               case_stmt(20) case_stmt(21) case_stmt(22) case_stmt(23)\n               case_stmt(24) case_stmt(25) case_stmt(26) case_stmt(27)\n               case_stmt(28) case_stmt(29) case_stmt(30) case_stmt(31)\n               case_stmt(32) 
case_stmt(33) case_stmt(34) case_stmt(35)\n               case_stmt(36) case_stmt(37) case_stmt(38) case_stmt(39)\n               case_stmt(40) case_stmt(41) case_stmt(42) case_stmt(43)\n               case_stmt(44) case_stmt(45) case_stmt(46) case_stmt(47)\n               #undef case_stmt\n               default : return error_node();\n            }\n\n            const T v = temp_node->value();\n\n            details::free_node(*node_allocator_,temp_node);\n\n            return node_allocator_->allocate<literal_node_t>(v);\n         }\n\n         inline expression_node_ptr varnode_optimise_sf3(const details::operator_type& operation, expression_node_ptr (&branch)[3])\n         {\n            typedef details::variable_node<Type>* variable_ptr;\n\n            const Type& v0 = static_cast<variable_ptr>(branch[0])->ref();\n            const Type& v1 = static_cast<variable_ptr>(branch[1])->ref();\n            const Type& v2 = static_cast<variable_ptr>(branch[2])->ref();\n\n            switch (operation)\n            {\n               #define case_stmt(op)                                                                \\\n               case details::e_sf##op : return node_allocator_->                                    \\\n                             allocate_rrr<details::sf3_var_node<Type,details::sf##op##_op<Type> > > \\\n                                (v0, v1, v2);                                                       \\\n\n               case_stmt(00) case_stmt(01) case_stmt(02) case_stmt(03)\n               case_stmt(04) case_stmt(05) case_stmt(06) case_stmt(07)\n               case_stmt(08) case_stmt(09) case_stmt(10) case_stmt(11)\n               case_stmt(12) case_stmt(13) case_stmt(14) case_stmt(15)\n               case_stmt(16) case_stmt(17) case_stmt(18) case_stmt(19)\n               case_stmt(20) case_stmt(21) case_stmt(22) case_stmt(23)\n               case_stmt(24) case_stmt(25) case_stmt(26) case_stmt(27)\n               case_stmt(28) case_stmt(29) 
case_stmt(30) case_stmt(31)\n               case_stmt(32) case_stmt(33) case_stmt(34) case_stmt(35)\n               case_stmt(36) case_stmt(37) case_stmt(38) case_stmt(39)\n               case_stmt(40) case_stmt(41) case_stmt(42) case_stmt(43)\n               case_stmt(44) case_stmt(45) case_stmt(46) case_stmt(47)\n               #undef case_stmt\n               default : return error_node();\n            }\n         }\n\n         inline expression_node_ptr special_function(const details::operator_type& operation, expression_node_ptr (&branch)[3])\n         {\n            if (!all_nodes_valid(branch))\n               return error_node();\n            else if (is_constant_foldable(branch))\n               return const_optimise_sf3(operation,branch);\n            else if (all_nodes_variables(branch))\n               return varnode_optimise_sf3(operation,branch);\n            else\n            {\n               switch (operation)\n               {\n                  #define case_stmt(op)                                                        \\\n                  case details::e_sf##op : return node_allocator_->                            \\\n                                allocate<details::sf3_node<Type,details::sf##op##_op<Type> > > \\\n                                   (operation, branch);                                        \\\n\n                  case_stmt(00) case_stmt(01) case_stmt(02) case_stmt(03)\n                  case_stmt(04) case_stmt(05) case_stmt(06) case_stmt(07)\n                  case_stmt(08) case_stmt(09) case_stmt(10) case_stmt(11)\n                  case_stmt(12) case_stmt(13) case_stmt(14) case_stmt(15)\n                  case_stmt(16) case_stmt(17) case_stmt(18) case_stmt(19)\n                  case_stmt(20) case_stmt(21) case_stmt(22) case_stmt(23)\n                  case_stmt(24) case_stmt(25) case_stmt(26) case_stmt(27)\n                  case_stmt(28) case_stmt(29) case_stmt(30) case_stmt(31)\n                  case_stmt(32) 
case_stmt(33) case_stmt(34) case_stmt(35)\n                  case_stmt(36) case_stmt(37) case_stmt(38) case_stmt(39)\n                  case_stmt(40) case_stmt(41) case_stmt(42) case_stmt(43)\n                  case_stmt(44) case_stmt(45) case_stmt(46) case_stmt(47)\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n         }\n\n         inline expression_node_ptr const_optimise_sf4(const details::operator_type& operation, expression_node_ptr (&branch)[4])\n         {\n            expression_node_ptr temp_node = error_node();\n\n            switch (operation)\n            {\n               #define case_stmt(op)                                                                    \\\n               case details::e_sf##op : temp_node = node_allocator_->                                   \\\n                                         allocate<details::sf4_node<Type,details::sf##op##_op<Type> > > \\\n                                            (operation, branch);                                        \\\n                                        break;                                                          \\\n\n               case_stmt(48) case_stmt(49) case_stmt(50) case_stmt(51)\n               case_stmt(52) case_stmt(53) case_stmt(54) case_stmt(55)\n               case_stmt(56) case_stmt(57) case_stmt(58) case_stmt(59)\n               case_stmt(60) case_stmt(61) case_stmt(62) case_stmt(63)\n               case_stmt(64) case_stmt(65) case_stmt(66) case_stmt(67)\n               case_stmt(68) case_stmt(69) case_stmt(70) case_stmt(71)\n               case_stmt(72) case_stmt(73) case_stmt(74) case_stmt(75)\n               case_stmt(76) case_stmt(77) case_stmt(78) case_stmt(79)\n               case_stmt(80) case_stmt(81) case_stmt(82) case_stmt(83)\n               case_stmt(84) case_stmt(85) case_stmt(86) case_stmt(87)\n               case_stmt(88) case_stmt(89) case_stmt(90) case_stmt(91)\n              
 case_stmt(92) case_stmt(93) case_stmt(94) case_stmt(95)\n               case_stmt(96) case_stmt(97) case_stmt(98) case_stmt(99)\n               #undef case_stmt\n               default : return error_node();\n            }\n\n            const T v = temp_node->value();\n\n            details::free_node(*node_allocator_,temp_node);\n\n            return node_allocator_->allocate<literal_node_t>(v);\n         }\n\n         inline expression_node_ptr varnode_optimise_sf4(const details::operator_type& operation, expression_node_ptr (&branch)[4])\n         {\n            typedef details::variable_node<Type>* variable_ptr;\n\n            const Type& v0 = static_cast<variable_ptr>(branch[0])->ref();\n            const Type& v1 = static_cast<variable_ptr>(branch[1])->ref();\n            const Type& v2 = static_cast<variable_ptr>(branch[2])->ref();\n            const Type& v3 = static_cast<variable_ptr>(branch[3])->ref();\n\n            switch (operation)\n            {\n               #define case_stmt(op)                                                                 \\\n               case details::e_sf##op : return node_allocator_->                                     \\\n                             allocate_rrrr<details::sf4_var_node<Type,details::sf##op##_op<Type> > > \\\n                                (v0, v1, v2, v3);                                                    \\\n\n               case_stmt(48) case_stmt(49) case_stmt(50) case_stmt(51)\n               case_stmt(52) case_stmt(53) case_stmt(54) case_stmt(55)\n               case_stmt(56) case_stmt(57) case_stmt(58) case_stmt(59)\n               case_stmt(60) case_stmt(61) case_stmt(62) case_stmt(63)\n               case_stmt(64) case_stmt(65) case_stmt(66) case_stmt(67)\n               case_stmt(68) case_stmt(69) case_stmt(70) case_stmt(71)\n               case_stmt(72) case_stmt(73) case_stmt(74) case_stmt(75)\n               case_stmt(76) case_stmt(77) case_stmt(78) case_stmt(79)\n               
case_stmt(80) case_stmt(81) case_stmt(82) case_stmt(83)\n               case_stmt(84) case_stmt(85) case_stmt(86) case_stmt(87)\n               case_stmt(88) case_stmt(89) case_stmt(90) case_stmt(91)\n               case_stmt(92) case_stmt(93) case_stmt(94) case_stmt(95)\n               case_stmt(96) case_stmt(97) case_stmt(98) case_stmt(99)\n               #undef case_stmt\n               default : return error_node();\n            }\n         }\n\n         inline expression_node_ptr special_function(const details::operator_type& operation, expression_node_ptr (&branch)[4])\n         {\n            if (!all_nodes_valid(branch))\n               return error_node();\n            else if (is_constant_foldable(branch))\n               return const_optimise_sf4(operation,branch);\n            else if (all_nodes_variables(branch))\n               return varnode_optimise_sf4(operation,branch);\n            switch (operation)\n            {\n               #define case_stmt(op)                                                        \\\n               case details::e_sf##op : return node_allocator_->                            \\\n                             allocate<details::sf4_node<Type,details::sf##op##_op<Type> > > \\\n                                (operation, branch);                                        \\\n\n               case_stmt(48) case_stmt(49) case_stmt(50) case_stmt(51)\n               case_stmt(52) case_stmt(53) case_stmt(54) case_stmt(55)\n               case_stmt(56) case_stmt(57) case_stmt(58) case_stmt(59)\n               case_stmt(60) case_stmt(61) case_stmt(62) case_stmt(63)\n               case_stmt(64) case_stmt(65) case_stmt(66) case_stmt(67)\n               case_stmt(68) case_stmt(69) case_stmt(70) case_stmt(71)\n               case_stmt(72) case_stmt(73) case_stmt(74) case_stmt(75)\n               case_stmt(76) case_stmt(77) case_stmt(78) case_stmt(79)\n               case_stmt(80) case_stmt(81) case_stmt(82) case_stmt(83)\n               
case_stmt(84) case_stmt(85) case_stmt(86) case_stmt(87)\n               case_stmt(88) case_stmt(89) case_stmt(90) case_stmt(91)\n               case_stmt(92) case_stmt(93) case_stmt(94) case_stmt(95)\n               case_stmt(96) case_stmt(97) case_stmt(98) case_stmt(99)\n               #undef case_stmt\n               default : return error_node();\n            }\n         }\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         inline expression_node_ptr const_optimise_varargfunc(const details::operator_type& operation, Sequence<expression_node_ptr,Allocator>& arg_list)\n         {\n            expression_node_ptr temp_node = error_node();\n\n            switch (operation)\n            {\n               #define case_stmt(op0,op1)                                                 \\\n               case op0 : temp_node = node_allocator_->                                   \\\n                                         allocate<details::vararg_node<Type,op1<Type> > > \\\n                                            (arg_list);                                   \\\n                          break;                                                          \\\n\n               case_stmt(details::e_sum   , details::vararg_add_op  )\n               case_stmt(details::e_prod  , details::vararg_mul_op  )\n               case_stmt(details::e_avg   , details::vararg_avg_op  )\n               case_stmt(details::e_min   , details::vararg_min_op  )\n               case_stmt(details::e_max   , details::vararg_max_op  )\n               case_stmt(details::e_mand  , details::vararg_mand_op )\n               case_stmt(details::e_mor   , details::vararg_mor_op  )\n               case_stmt(details::e_multi , details::vararg_multi_op)\n               #undef case_stmt\n               default : return error_node();\n            }\n\n            const T v = temp_node->value();\n\n            
details::free_node(*node_allocator_,temp_node);\n\n            return node_allocator_->allocate<literal_node_t>(v);\n         }\n\n         inline bool special_one_parameter_vararg(const details::operator_type& operation)\n         {\n            return (\n                     (details::e_sum  == operation) ||\n                     (details::e_prod == operation) ||\n                     (details::e_avg  == operation) ||\n                     (details::e_min  == operation) ||\n                     (details::e_max  == operation)\n                   );\n         }\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         inline expression_node_ptr varnode_optimise_varargfunc(const details::operator_type& operation, Sequence<expression_node_ptr,Allocator>& arg_list)\n         {\n            switch (operation)\n            {\n               #define case_stmt(op0,op1)                                                   \\\n               case op0 : return node_allocator_->                                          \\\n                             allocate<details::vararg_varnode<Type,op1<Type> > >(arg_list); \\\n\n               case_stmt(details::e_sum   , details::vararg_add_op  )\n               case_stmt(details::e_prod  , details::vararg_mul_op  )\n               case_stmt(details::e_avg   , details::vararg_avg_op  )\n               case_stmt(details::e_min   , details::vararg_min_op  )\n               case_stmt(details::e_max   , details::vararg_max_op  )\n               case_stmt(details::e_mand  , details::vararg_mand_op )\n               case_stmt(details::e_mor   , details::vararg_mor_op  )\n               case_stmt(details::e_multi , details::vararg_multi_op)\n               #undef case_stmt\n               default : return error_node();\n            }\n         }\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         inline expression_node_ptr 
vectorize_func(const details::operator_type& operation, Sequence<expression_node_ptr,Allocator>& arg_list)\n         {\n            if (1 == arg_list.size())\n            {\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                      \\\n                  case op0 : return node_allocator_->                                             \\\n                                allocate<details::vectorize_node<Type,op1<Type> > >(arg_list[0]); \\\n\n                  case_stmt(details::e_sum  , details::vec_add_op)\n                  case_stmt(details::e_prod , details::vec_mul_op)\n                  case_stmt(details::e_avg  , details::vec_avg_op)\n                  case_stmt(details::e_min  , details::vec_min_op)\n                  case_stmt(details::e_max  , details::vec_max_op)\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n            else\n               return error_node();\n         }\n\n         template <typename Allocator,\n                   template <typename,typename> class Sequence>\n         inline expression_node_ptr vararg_function(const details::operator_type& operation, Sequence<expression_node_ptr,Allocator>& arg_list)\n         {\n            if (!all_nodes_valid(arg_list))\n            {\n               details::free_all_nodes(*node_allocator_,arg_list);\n\n               return error_node();\n            }\n            else if (is_constant_foldable(arg_list))\n               return const_optimise_varargfunc(operation,arg_list);\n            else if ((arg_list.size() == 1) && details::is_ivector_node(arg_list[0]))\n               return vectorize_func(operation,arg_list);\n            else if ((arg_list.size() == 1) && special_one_parameter_vararg(operation))\n               return arg_list[0];\n            else if (all_nodes_variables(arg_list))\n               return 
varnode_optimise_varargfunc(operation,arg_list);\n\n            #ifndef exprtk_disable_string_capabilities\n            if (details::e_smulti == operation)\n            {\n               return node_allocator_->\n                 allocate<details::str_vararg_node<Type,details::vararg_multi_op<Type> > >(arg_list);\n            }\n            else\n            #endif\n            {\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                \\\n                  case op0 : return node_allocator_->                                       \\\n                                allocate<details::vararg_node<Type,op1<Type> > >(arg_list); \\\n\n                  case_stmt(details::e_sum   , details::vararg_add_op  )\n                  case_stmt(details::e_prod  , details::vararg_mul_op  )\n                  case_stmt(details::e_avg   , details::vararg_avg_op  )\n                  case_stmt(details::e_min   , details::vararg_min_op  )\n                  case_stmt(details::e_max   , details::vararg_max_op  )\n                  case_stmt(details::e_mand  , details::vararg_mand_op )\n                  case_stmt(details::e_mor   , details::vararg_mor_op  )\n                  case_stmt(details::e_multi , details::vararg_multi_op)\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n         }\n\n         template <std::size_t N>\n         inline expression_node_ptr function(ifunction_t* f, expression_node_ptr (&b)[N])\n         {\n            typedef typename details::function_N_node<T,ifunction_t,N> function_N_node_t;\n            expression_node_ptr result = synthesize_expression<function_N_node_t,N>(f,b);\n\n            if (0 == result)\n               return error_node();\n            else\n            {\n               // Can the function call be completely optimised?\n               if (details::is_constant_node(result))\n     
             return result;\n               else if (!all_nodes_valid(b))\n                  return error_node();\n               else if (N != f->param_count)\n               {\n                  details::free_all_nodes(*node_allocator_,b);\n\n                  return error_node();\n               }\n\n               function_N_node_t* func_node_ptr = static_cast<function_N_node_t*>(result);\n\n               if (func_node_ptr->init_branches(b))\n                  return result;\n               else\n               {\n                  details::free_all_nodes(*node_allocator_,b);\n\n                  return error_node();\n               }\n            }\n         }\n\n         inline expression_node_ptr function(ifunction_t* f)\n         {\n            typedef typename details::function_N_node<Type,ifunction_t,0> function_N_node_t;\n            return node_allocator_->allocate<function_N_node_t>(f);\n         }\n\n         inline expression_node_ptr vararg_function_call(ivararg_function_t* vaf,\n                                                         std::vector<expression_node_ptr>& arg_list)\n         {\n            if (!all_nodes_valid(arg_list))\n            {\n               details::free_all_nodes(*node_allocator_,arg_list);\n\n               return error_node();\n            }\n\n            typedef details::vararg_function_node<Type,ivararg_function_t> alloc_type;\n\n            expression_node_ptr result = node_allocator_->allocate<alloc_type>(vaf,arg_list);\n\n            if (\n                 !arg_list.empty()        &&\n                 !vaf->has_side_effects() &&\n                 is_constant_foldable(arg_list)\n               )\n            {\n               const Type v = result->value();\n               details::free_node(*node_allocator_,result);\n               result = node_allocator_->allocate<literal_node_t>(v);\n            }\n\n            parser_->state_.activate_side_effect(\"vararg_function_call()\");\n\n            return result;\n     
    }\n\n         inline expression_node_ptr generic_function_call(igeneric_function_t* gf,\n                                                          std::vector<expression_node_ptr>& arg_list,\n                                                          const std::size_t& param_seq_index = std::numeric_limits<std::size_t>::max())\n         {\n            if (!all_nodes_valid(arg_list))\n            {\n               details::free_all_nodes(*node_allocator_,arg_list);\n               return error_node();\n            }\n\n            typedef details::generic_function_node     <Type,igeneric_function_t> alloc_type1;\n            typedef details::multimode_genfunction_node<Type,igeneric_function_t> alloc_type2;\n\n            const std::size_t no_psi = std::numeric_limits<std::size_t>::max();\n\n            expression_node_ptr result = error_node();\n\n            if (no_psi == param_seq_index)\n               result = node_allocator_->allocate<alloc_type1>(arg_list,gf);\n            else\n               result = node_allocator_->allocate<alloc_type2>(gf, param_seq_index, arg_list);\n\n            alloc_type1* genfunc_node_ptr = static_cast<alloc_type1*>(result);\n\n            if (\n                 !arg_list.empty()                  &&\n                 !gf->has_side_effects()            &&\n                 parser_->state_.type_check_enabled &&\n                 is_constant_foldable(arg_list)\n               )\n            {\n               genfunc_node_ptr->init_branches();\n\n               const Type v = result->value();\n\n               details::free_node(*node_allocator_,result);\n\n               return node_allocator_->allocate<literal_node_t>(v);\n            }\n            else if (genfunc_node_ptr->init_branches())\n            {\n               parser_->state_.activate_side_effect(\"generic_function_call()\");\n\n               return result;\n            }\n            else\n            {\n               details::free_node(*node_allocator_, result);\n  
             details::free_all_nodes(*node_allocator_, arg_list);\n\n               return error_node();\n            }\n         }\n\n         #ifndef exprtk_disable_string_capabilities\n         inline expression_node_ptr string_function_call(igeneric_function_t* gf,\n                                                         std::vector<expression_node_ptr>& arg_list,\n                                                         const std::size_t& param_seq_index = std::numeric_limits<std::size_t>::max())\n         {\n            if (!all_nodes_valid(arg_list))\n            {\n               details::free_all_nodes(*node_allocator_,arg_list);\n               return error_node();\n            }\n\n            typedef details::string_function_node      <Type,igeneric_function_t> alloc_type1;\n            typedef details::multimode_strfunction_node<Type,igeneric_function_t> alloc_type2;\n\n            const std::size_t no_psi = std::numeric_limits<std::size_t>::max();\n\n            expression_node_ptr result = error_node();\n\n            if (no_psi == param_seq_index)\n               result = node_allocator_->allocate<alloc_type1>(gf,arg_list);\n            else\n               result = node_allocator_->allocate<alloc_type2>(gf, param_seq_index, arg_list);\n\n            alloc_type1* strfunc_node_ptr = static_cast<alloc_type1*>(result);\n\n            if (\n                 !arg_list.empty()       &&\n                 !gf->has_side_effects() &&\n                 is_constant_foldable(arg_list)\n               )\n            {\n               strfunc_node_ptr->init_branches();\n\n               const Type v = result->value();\n\n               details::free_node(*node_allocator_,result);\n\n               return node_allocator_->allocate<literal_node_t>(v);\n            }\n            else if (strfunc_node_ptr->init_branches())\n            {\n               parser_->state_.activate_side_effect(\"string_function_call()\");\n\n               return result;\n            
}\n            else\n            {\n               details::free_node     (*node_allocator_,result  );\n               details::free_all_nodes(*node_allocator_,arg_list);\n\n               return error_node();\n            }\n         }\n         #endif\n\n         #ifndef exprtk_disable_return_statement\n         inline expression_node_ptr return_call(std::vector<expression_node_ptr>& arg_list)\n         {\n            if (!all_nodes_valid(arg_list))\n            {\n               details::free_all_nodes(*node_allocator_,arg_list);\n               return error_node();\n            }\n\n            typedef details::return_node<Type> alloc_type;\n\n            expression_node_ptr result = node_allocator_->\n                                            allocate_rr<alloc_type>(arg_list,parser_->results_ctx());\n\n            alloc_type* return_node_ptr = static_cast<alloc_type*>(result);\n\n            if (return_node_ptr->init_branches())\n            {\n               parser_->state_.activate_side_effect(\"return_call()\");\n\n               return result;\n            }\n            else\n            {\n               details::free_node     (*node_allocator_,result  );\n               details::free_all_nodes(*node_allocator_,arg_list);\n\n               return error_node();\n            }\n         }\n\n         inline expression_node_ptr return_envelope(expression_node_ptr body,\n                                                    results_context_t* rc,\n                                                    bool*& return_invoked)\n         {\n            typedef details::return_envelope_node<Type> alloc_type;\n\n            expression_node_ptr result = node_allocator_->\n                                            allocate_cr<alloc_type>(body,(*rc));\n\n            return_invoked = static_cast<alloc_type*>(result)->retinvk_ptr();\n\n            return result;\n         }\n         #else\n         inline expression_node_ptr 
return_call(std::vector<expression_node_ptr>&)\n         {\n            return error_node();\n         }\n\n         inline expression_node_ptr return_envelope(expression_node_ptr,\n                                                    results_context_t*,\n                                                    bool*&)\n         {\n            return error_node();\n         }\n         #endif\n\n         inline expression_node_ptr vector_element(const std::string& symbol,\n                                                   vector_holder_ptr vector_base,\n                                                   expression_node_ptr index)\n         {\n            expression_node_ptr result = error_node();\n\n            if (details::is_constant_node(index))\n            {\n               std::size_t i = static_cast<std::size_t>(details::numeric::to_int64(index->value()));\n\n               details::free_node(*node_allocator_,index);\n\n               if (vector_base->rebaseable())\n               {\n                  return node_allocator_->allocate<rebasevector_celem_node_t>(i,vector_base);\n               }\n\n               scope_element& se = parser_->sem_.get_element(symbol,i);\n\n               if (se.index == i)\n               {\n                  result = se.var_node;\n               }\n               else\n               {\n                  scope_element nse;\n                  nse.name      = symbol;\n                  nse.active    = true;\n                  nse.ref_count = 1;\n                  nse.type      = scope_element::e_vecelem;\n                  nse.index     = i;\n                  nse.depth     = parser_->state_.scope_depth;\n                  nse.data      = 0;\n                  nse.var_node  = node_allocator_->allocate<variable_node_t>((*(*vector_base)[i]));\n\n                  if (!parser_->sem_.add_element(nse))\n                  {\n                     parser_->set_synthesis_error(\"Failed to add new local vector element to SEM [1]\");\n\n        
             parser_->sem_.free_element(nse);\n\n                     return error_node();\n                  }\n\n                  exprtk_debug((\"vector_element() - INFO - Added new local vector element: %s\\n\",nse.name.c_str()));\n\n                  parser_->state_.activate_side_effect(\"vector_element()\");\n\n                  result = nse.var_node;\n               }\n            }\n            else if (vector_base->rebaseable())\n               result = node_allocator_->allocate<rebasevector_elem_node_t>(index,vector_base);\n            else\n               result = node_allocator_->allocate<vector_elem_node_t>(index,vector_base);\n\n            return result;\n         }\n\n      private:\n\n         template <std::size_t N, typename NodePtr>\n         inline bool is_constant_foldable(NodePtr (&b)[N]) const\n         {\n            for (std::size_t i = 0; i < N; ++i)\n            {\n               if (0 == b[i])\n                  return false;\n               else if (!details::is_constant_node(b[i]))\n                  return false;\n            }\n\n            return true;\n         }\n\n         template <typename NodePtr,\n                   typename Allocator,\n                   template <typename,typename> class Sequence>\n         inline bool is_constant_foldable(const Sequence<NodePtr,Allocator>& b) const\n         {\n            for (std::size_t i = 0; i < b.size(); ++i)\n            {\n               if (0 == b[i])\n                  return false;\n               else if (!details::is_constant_node(b[i]))\n                  return false;\n            }\n\n            return true;\n         }\n\n         void lodge_assignment(symbol_type cst, expression_node_ptr node)\n         {\n            parser_->state_.activate_side_effect(\"lodge_assignment()\");\n\n            if (!parser_->dec_.collect_assignments())\n               return;\n\n            std::string symbol_name;\n\n            switch (cst)\n            {\n               case 
e_st_variable : symbol_name = parser_->symtab_store_\n                                                     .get_variable_name(node);\n                                    break;\n\n               #ifndef exprtk_disable_string_capabilities\n               case e_st_string   : symbol_name = parser_->symtab_store_\n                                                     .get_stringvar_name(node);\n                                    break;\n               #endif\n\n               case e_st_vector   : {\n                                       typedef details::vector_holder<T> vector_holder_t;\n\n                                       vector_holder_t& vh = static_cast<vector_node_t*>(node)->vec_holder();\n\n                                       symbol_name = parser_->symtab_store_.get_vector_name(&vh);\n                                    }\n                                    break;\n\n               case e_st_vecelem  : {\n                                       typedef details::vector_holder<T> vector_holder_t;\n\n                                       vector_holder_t& vh = static_cast<vector_elem_node_t*>(node)->vec_holder();\n\n                                       symbol_name = parser_->symtab_store_.get_vector_name(&vh);\n\n                                       cst = e_st_vector;\n                                    }\n                                    break;\n\n               default            : return;\n            }\n\n            if (!symbol_name.empty())\n            {\n               parser_->dec_.add_assignment(symbol_name,cst);\n            }\n         }\n\n         inline expression_node_ptr synthesize_assignment_expression(const details::operator_type& operation, expression_node_ptr (&branch)[2])\n         {\n            if (details::is_variable_node(branch[0]))\n            {\n               lodge_assignment(e_st_variable,branch[0]);\n\n               return synthesize_expression<assignment_node_t,2>(operation,branch);\n            }\n            else 
if (details::is_vector_elem_node(branch[0]))\n            {\n               lodge_assignment(e_st_vecelem,branch[0]);\n\n               return synthesize_expression<assignment_vec_elem_node_t, 2>(operation, branch);\n            }\n            else if (details::is_rebasevector_elem_node(branch[0]))\n            {\n               lodge_assignment(e_st_vecelem,branch[0]);\n\n               return synthesize_expression<assignment_rebasevec_elem_node_t, 2>(operation, branch);\n            }\n            else if (details::is_rebasevector_celem_node(branch[0]))\n            {\n               lodge_assignment(e_st_vecelem,branch[0]);\n\n               return synthesize_expression<assignment_rebasevec_celem_node_t, 2>(operation, branch);\n            }\n            #ifndef exprtk_disable_string_capabilities\n            else if (details::is_string_node(branch[0]))\n            {\n               lodge_assignment(e_st_string,branch[0]);\n\n               return synthesize_expression<assignment_string_node_t,2>(operation, branch);\n            }\n            else if (details::is_string_range_node(branch[0]))\n            {\n               lodge_assignment(e_st_string,branch[0]);\n\n               return synthesize_expression<assignment_string_range_node_t,2>(operation, branch);\n            }\n            #endif\n            else if (details::is_vector_node(branch[0]))\n            {\n               lodge_assignment(e_st_vector,branch[0]);\n\n               if (details::is_ivector_node(branch[1]))\n                  return synthesize_expression<assignment_vecvec_node_t,2>(operation, branch);\n               else\n                  return synthesize_expression<assignment_vec_node_t,2>(operation, branch);\n            }\n            else\n            {\n               parser_->set_synthesis_error(\"Invalid assignment operation[1]\");\n\n               return error_node();\n            }\n         }\n\n         inline expression_node_ptr 
synthesize_assignment_operation_expression(const details::operator_type& operation,\n                                                                               expression_node_ptr (&branch)[2])\n         {\n            if (details::is_variable_node(branch[0]))\n            {\n               lodge_assignment(e_st_variable,branch[0]);\n\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                                  \\\n                  case op0 : return node_allocator_->                                                         \\\n                                template allocate_rrr<typename details::assignment_op_node<Type,op1<Type> > > \\\n                                   (operation, branch[0], branch[1]);                                         \\\n\n                  case_stmt(details::e_addass,details::add_op)\n                  case_stmt(details::e_subass,details::sub_op)\n                  case_stmt(details::e_mulass,details::mul_op)\n                  case_stmt(details::e_divass,details::div_op)\n                  case_stmt(details::e_modass,details::mod_op)\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n            else if (details::is_vector_elem_node(branch[0]))\n            {\n               lodge_assignment(e_st_vecelem,branch[0]);\n\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                                            \\\n                  case op0 : return node_allocator_->                                                                   \\\n                                 template allocate_rrr<typename details::assignment_vec_elem_op_node<Type,op1<Type> > > \\\n                                    (operation, branch[0], branch[1]);                                                  \\\n\n                  
case_stmt(details::e_addass,details::add_op)\n                  case_stmt(details::e_subass,details::sub_op)\n                  case_stmt(details::e_mulass,details::mul_op)\n                  case_stmt(details::e_divass,details::div_op)\n                  case_stmt(details::e_modass,details::mod_op)\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n            else if (details::is_rebasevector_elem_node(branch[0]))\n            {\n               lodge_assignment(e_st_vecelem,branch[0]);\n\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                                                  \\\n                  case op0 : return node_allocator_->                                                                         \\\n                                 template allocate_rrr<typename details::assignment_rebasevec_elem_op_node<Type,op1<Type> > > \\\n                                    (operation, branch[0], branch[1]);                                                        \\\n\n                  case_stmt(details::e_addass,details::add_op)\n                  case_stmt(details::e_subass,details::sub_op)\n                  case_stmt(details::e_mulass,details::mul_op)\n                  case_stmt(details::e_divass,details::div_op)\n                  case_stmt(details::e_modass,details::mod_op)\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n            else if (details::is_rebasevector_celem_node(branch[0]))\n            {\n               lodge_assignment(e_st_vecelem,branch[0]);\n\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                                                   \\\n                  case op0 : return node_allocator_->                                       
                                   \\\n                                 template allocate_rrr<typename details::assignment_rebasevec_celem_op_node<Type,op1<Type> > > \\\n                                    (operation, branch[0], branch[1]);                                                         \\\n\n                  case_stmt(details::e_addass,details::add_op)\n                  case_stmt(details::e_subass,details::sub_op)\n                  case_stmt(details::e_mulass,details::mul_op)\n                  case_stmt(details::e_divass,details::div_op)\n                  case_stmt(details::e_modass,details::mod_op)\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n            else if (details::is_vector_node(branch[0]))\n            {\n               lodge_assignment(e_st_vector,branch[0]);\n\n               if (details::is_ivector_node(branch[1]))\n               {\n                  switch (operation)\n                  {\n                     #define case_stmt(op0,op1)                                                                         \\\n                     case op0 : return node_allocator_->                                                                \\\n                                   template allocate_rrr<typename details::assignment_vecvec_op_node<Type,op1<Type> > > \\\n                                      (operation, branch[0], branch[1]);                                                \\\n\n                     case_stmt(details::e_addass,details::add_op)\n                     case_stmt(details::e_subass,details::sub_op)\n                     case_stmt(details::e_mulass,details::mul_op)\n                     case_stmt(details::e_divass,details::div_op)\n                     case_stmt(details::e_modass,details::mod_op)\n                     #undef case_stmt\n                     default : return error_node();\n                  }\n               }\n               else\n       
        {\n                  switch (operation)\n                  {\n                     #define case_stmt(op0,op1)                                                                      \\\n                     case op0 : return node_allocator_->                                                             \\\n                                   template allocate_rrr<typename details::assignment_vec_op_node<Type,op1<Type> > > \\\n                                      (operation, branch[0], branch[1]);                                             \\\n\n                     case_stmt(details::e_addass,details::add_op)\n                     case_stmt(details::e_subass,details::sub_op)\n                     case_stmt(details::e_mulass,details::mul_op)\n                     case_stmt(details::e_divass,details::div_op)\n                     case_stmt(details::e_modass,details::mod_op)\n                     #undef case_stmt\n                     default : return error_node();\n                  }\n               }\n            }\n            #ifndef exprtk_disable_string_capabilities\n            else if (\n                      (details::e_addass == operation) &&\n                      details::is_string_node(branch[0])\n                    )\n            {\n               typedef details::assignment_string_node<T,details::asn_addassignment> addass_t;\n\n               lodge_assignment(e_st_string,branch[0]);\n\n               return synthesize_expression<addass_t,2>(operation,branch);\n            }\n            #endif\n            else\n            {\n               parser_->set_synthesis_error(\"Invalid assignment operation[2]\");\n\n               return error_node();\n            }\n         }\n\n         inline expression_node_ptr synthesize_veceqineqlogic_operation_expression(const details::operator_type& operation,\n                                                                                   expression_node_ptr (&branch)[2])\n         {\n            const 
bool is_b0_ivec = details::is_ivector_node(branch[0]);\n            const bool is_b1_ivec = details::is_ivector_node(branch[1]);\n\n            #define batch_eqineq_logic_case                \\\n            case_stmt(details::   e_lt, details::   lt_op) \\\n            case_stmt(details::  e_lte, details::  lte_op) \\\n            case_stmt(details::   e_gt, details::   gt_op) \\\n            case_stmt(details::  e_gte, details::  gte_op) \\\n            case_stmt(details::   e_eq, details::   eq_op) \\\n            case_stmt(details::   e_ne, details::   ne_op) \\\n            case_stmt(details::e_equal, details::equal_op) \\\n            case_stmt(details::  e_and, details::  and_op) \\\n            case_stmt(details:: e_nand, details:: nand_op) \\\n            case_stmt(details::   e_or, details::   or_op) \\\n            case_stmt(details::  e_nor, details::  nor_op) \\\n            case_stmt(details::  e_xor, details::  xor_op) \\\n            case_stmt(details:: e_xnor, details:: xnor_op) \\\n\n            if (is_b0_ivec && is_b1_ivec)\n            {\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                                     \\\n                  case op0 : return node_allocator_->                                                            \\\n                                template allocate_rrr<typename details::vec_binop_vecvec_node<Type,op1<Type> > > \\\n                                   (operation, branch[0], branch[1]);                                            \\\n\n                  batch_eqineq_logic_case\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n            else if (is_b0_ivec && !is_b1_ivec)\n            {\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                                     \\\n       
           case op0 : return node_allocator_->                                                            \\\n                                template allocate_rrr<typename details::vec_binop_vecval_node<Type,op1<Type> > > \\\n                                   (operation, branch[0], branch[1]);                                            \\\n\n                  batch_eqineq_logic_case\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n            else if (!is_b0_ivec && is_b1_ivec)\n            {\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                                     \\\n                  case op0 : return node_allocator_->                                                            \\\n                                template allocate_rrr<typename details::vec_binop_valvec_node<Type,op1<Type> > > \\\n                                   (operation, branch[0], branch[1]);                                            \\\n\n                  batch_eqineq_logic_case\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n            else\n               return error_node();\n\n            #undef batch_eqineq_logic_case\n         }\n\n         inline expression_node_ptr synthesize_vecarithmetic_operation_expression(const details::operator_type& operation,\n                                                                                  expression_node_ptr (&branch)[2])\n         {\n            const bool is_b0_ivec = details::is_ivector_node(branch[0]);\n            const bool is_b1_ivec = details::is_ivector_node(branch[1]);\n\n            #define vector_ops                        \\\n            case_stmt(details::e_add,details::add_op) \\\n            case_stmt(details::e_sub,details::sub_op) \\\n            
case_stmt(details::e_mul,details::mul_op) \\\n            case_stmt(details::e_div,details::div_op) \\\n            case_stmt(details::e_mod,details::mod_op) \\\n\n            if (is_b0_ivec && is_b1_ivec)\n            {\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                                     \\\n                  case op0 : return node_allocator_->                                                            \\\n                                template allocate_rrr<typename details::vec_binop_vecvec_node<Type,op1<Type> > > \\\n                                   (operation, branch[0], branch[1]);                                            \\\n\n                  vector_ops\n                  case_stmt(details::e_pow,details:: pow_op)\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n            else if (is_b0_ivec && !is_b1_ivec)\n            {\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                                     \\\n                  case op0 : return node_allocator_->                                                            \\\n                                template allocate_rrr<typename details::vec_binop_vecval_node<Type,op1<Type> > > \\\n                                   (operation, branch[0], branch[1]);                                            \\\n\n                  vector_ops\n                  case_stmt(details::e_pow,details:: pow_op)\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n            else if (!is_b0_ivec && is_b1_ivec)\n            {\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                                     \\\n            
      case op0 : return node_allocator_->                                                            \\\n                                template allocate_rrr<typename details::vec_binop_valvec_node<Type,op1<Type> > > \\\n                                   (operation, branch[0], branch[1]);                                            \\\n\n                  vector_ops\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n            else\n               return error_node();\n\n            #undef vector_ops\n         }\n\n         inline expression_node_ptr synthesize_swap_expression(expression_node_ptr (&branch)[2])\n         {\n            const bool v0_is_ivar = details::is_ivariable_node(branch[0]);\n            const bool v1_is_ivar = details::is_ivariable_node(branch[1]);\n\n            const bool v0_is_ivec = details::is_ivector_node  (branch[0]);\n            const bool v1_is_ivec = details::is_ivector_node  (branch[1]);\n\n            #ifndef exprtk_disable_string_capabilities\n            const bool v0_is_str = details::is_generally_string_node(branch[0]);\n            const bool v1_is_str = details::is_generally_string_node(branch[1]);\n            #endif\n\n            expression_node_ptr result = error_node();\n\n            if (v0_is_ivar && v1_is_ivar)\n            {\n               typedef details::variable_node<T>* variable_node_ptr;\n\n               variable_node_ptr v0 = variable_node_ptr(0);\n               variable_node_ptr v1 = variable_node_ptr(0);\n\n               if (\n                    (0 != (v0 = dynamic_cast<variable_node_ptr>(branch[0]))) &&\n                    (0 != (v1 = dynamic_cast<variable_node_ptr>(branch[1])))\n                  )\n               {\n                  result = node_allocator_->allocate<details::swap_node<T> >(v0,v1);\n               }\n               else\n                  result = node_allocator_->allocate<details::swap_generic_node<T> 
>(branch[0],branch[1]);\n            }\n            else if (v0_is_ivec && v1_is_ivec)\n            {\n               result = node_allocator_->allocate<details::swap_vecvec_node<T> >(branch[0],branch[1]);\n            }\n            #ifndef exprtk_disable_string_capabilities\n            else if (v0_is_str && v1_is_str)\n            {\n               if (is_string_node(branch[0]) && is_string_node(branch[1]))\n                  result = node_allocator_->allocate<details::swap_string_node<T> >\n                                               (branch[0], branch[1]);\n               else\n                  result = node_allocator_->allocate<details::swap_genstrings_node<T> >\n                                               (branch[0], branch[1]);\n            }\n            #endif\n            else\n            {\n               parser_->set_synthesis_error(\"Only variables, strings, vectors or vector elements can be swapped\");\n\n               return error_node();\n            }\n\n            parser_->state_.activate_side_effect(\"synthesize_swap_expression()\");\n\n            return result;\n         }\n\n         #ifndef exprtk_disable_sc_andor\n         inline expression_node_ptr synthesize_shortcircuit_expression(const details::operator_type& operation, expression_node_ptr (&branch)[2])\n         {\n            expression_node_ptr result = error_node();\n\n            if (details::is_constant_node(branch[0]))\n            {\n               if (\n                    (details::e_scand == operation) &&\n                    std::equal_to<T>()(T(0),branch[0]->value())\n                  )\n                  result = node_allocator_->allocate_c<literal_node_t>(T(0));\n               else if (\n                         (details::e_scor == operation) &&\n                         std::not_equal_to<T>()(T(0),branch[0]->value())\n                       )\n                  result = node_allocator_->allocate_c<literal_node_t>(T(1));\n            }\n\n            if 
(details::is_constant_node(branch[1]) && (0 == result))\n            {\n               if (\n                    (details::e_scand == operation) &&\n                    std::equal_to<T>()(T(0),branch[1]->value())\n                  )\n                  result = node_allocator_->allocate_c<literal_node_t>(T(0));\n               else if (\n                         (details::e_scor == operation) &&\n                         std::not_equal_to<T>()(T(0),branch[1]->value())\n                       )\n                  result = node_allocator_->allocate_c<literal_node_t>(T(1));\n            }\n\n            if (result)\n            {\n               free_node(*node_allocator_, branch[0]);\n               free_node(*node_allocator_, branch[1]);\n\n               return result;\n            }\n            else if (details::e_scand == operation)\n            {\n               return synthesize_expression<scand_node_t,2>(operation, branch);\n            }\n            else if (details::e_scor == operation)\n            {\n               return synthesize_expression<scor_node_t,2>(operation, branch);\n            }\n            else\n               return error_node();\n         }\n         #else\n         inline expression_node_ptr synthesize_shortcircuit_expression(const details::operator_type&, expression_node_ptr (&)[2])\n         {\n            return error_node();\n         }\n         #endif\n\n         #define basic_opr_switch_statements        \\\n         case_stmt(details::e_add, details::add_op) \\\n         case_stmt(details::e_sub, details::sub_op) \\\n         case_stmt(details::e_mul, details::mul_op) \\\n         case_stmt(details::e_div, details::div_op) \\\n         case_stmt(details::e_mod, details::mod_op) \\\n         case_stmt(details::e_pow, details::pow_op) \\\n\n         #define extended_opr_switch_statements       \\\n         case_stmt(details::  e_lt, details::  lt_op) \\\n         case_stmt(details:: e_lte, details:: lte_op) \\\n         
case_stmt(details::  e_gt, details::  gt_op) \\\n         case_stmt(details:: e_gte, details:: gte_op) \\\n         case_stmt(details::  e_eq, details::  eq_op) \\\n         case_stmt(details::  e_ne, details::  ne_op) \\\n         case_stmt(details:: e_and, details:: and_op) \\\n         case_stmt(details::e_nand, details::nand_op) \\\n         case_stmt(details::  e_or, details::  or_op) \\\n         case_stmt(details:: e_nor, details:: nor_op) \\\n         case_stmt(details:: e_xor, details:: xor_op) \\\n         case_stmt(details::e_xnor, details::xnor_op) \\\n\n         #ifndef exprtk_disable_cardinal_pow_optimisation\n         template <typename TType, template <typename,typename> class IPowNode>\n         inline expression_node_ptr cardinal_pow_optimisation_impl(const TType& v, const unsigned int& p)\n         {\n            switch (p)\n            {\n               #define case_stmt(cp)                                                     \\\n               case cp : return node_allocator_->                                        \\\n                            allocate<IPowNode<T,details::numeric::fast_exp<T,cp> > >(v); \\\n\n               case_stmt( 1) case_stmt( 2) case_stmt( 3) case_stmt( 4)\n               case_stmt( 5) case_stmt( 6) case_stmt( 7) case_stmt( 8)\n               case_stmt( 9) case_stmt(10) case_stmt(11) case_stmt(12)\n               case_stmt(13) case_stmt(14) case_stmt(15) case_stmt(16)\n               case_stmt(17) case_stmt(18) case_stmt(19) case_stmt(20)\n               case_stmt(21) case_stmt(22) case_stmt(23) case_stmt(24)\n               case_stmt(25) case_stmt(26) case_stmt(27) case_stmt(28)\n               case_stmt(29) case_stmt(30) case_stmt(31) case_stmt(32)\n               case_stmt(33) case_stmt(34) case_stmt(35) case_stmt(36)\n               case_stmt(37) case_stmt(38) case_stmt(39) case_stmt(40)\n               case_stmt(41) case_stmt(42) case_stmt(43) case_stmt(44)\n               case_stmt(45) case_stmt(46) 
case_stmt(47) case_stmt(48)\n               case_stmt(49) case_stmt(50) case_stmt(51) case_stmt(52)\n               case_stmt(53) case_stmt(54) case_stmt(55) case_stmt(56)\n               case_stmt(57) case_stmt(58) case_stmt(59) case_stmt(60)\n               #undef case_stmt\n               default : return error_node();\n            }\n         }\n\n         inline expression_node_ptr cardinal_pow_optimisation(const T& v, const T& c)\n         {\n            const bool not_recipricol = (c >= T(0));\n            const unsigned int p = static_cast<unsigned int>(details::numeric::to_int32(details::numeric::abs(c)));\n\n            if (0 == p)\n               return node_allocator_->allocate_c<literal_node_t>(T(1));\n            else if (std::equal_to<T>()(T(2),c))\n            {\n               return node_allocator_->\n                  template allocate_rr<typename details::vov_node<Type,details::mul_op<Type> > >(v,v);\n            }\n            else\n            {\n               if (not_recipricol)\n                  return cardinal_pow_optimisation_impl<T,details::ipow_node>(v,p);\n               else\n                  return cardinal_pow_optimisation_impl<T,details::ipowinv_node>(v,p);\n            }\n         }\n\n         inline bool cardinal_pow_optimisable(const details::operator_type& operation, const T& c)\n         {\n            return (details::e_pow == operation) && (details::numeric::abs(c) <= T(60)) && details::numeric::is_integer(c);\n         }\n\n         inline expression_node_ptr cardinal_pow_optimisation(expression_node_ptr (&branch)[2])\n         {\n            const Type c = static_cast<details::literal_node<Type>*>(branch[1])->value();\n            const bool not_recipricol = (c >= T(0));\n            const unsigned int p = static_cast<unsigned int>(details::numeric::to_int32(details::numeric::abs(c)));\n\n            node_allocator_->free(branch[1]);\n\n            if (0 == p)\n            {\n               
details::free_all_nodes(*node_allocator_, branch);\n\n               return node_allocator_->allocate_c<literal_node_t>(T(1));\n            }\n            else if (not_recipricol)\n               return cardinal_pow_optimisation_impl<expression_node_ptr,details::bipow_node>(branch[0],p);\n            else\n               return cardinal_pow_optimisation_impl<expression_node_ptr,details::bipowninv_node>(branch[0],p);\n         }\n         #else\n         inline expression_node_ptr cardinal_pow_optimisation(T&, const T&)\n         {\n            return error_node();\n         }\n\n         inline bool cardinal_pow_optimisable(const details::operator_type&, const T&)\n         {\n            return false;\n         }\n\n         inline expression_node_ptr cardinal_pow_optimisation(expression_node_ptr(&)[2])\n         {\n            return error_node();\n         }\n         #endif\n\n         struct synthesize_binary_ext_expression\n         {\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               const bool left_neg  = is_neg_unary_node(branch[0]);\n               const bool right_neg = is_neg_unary_node(branch[1]);\n\n               if (left_neg && right_neg)\n               {\n                  if (\n                       (details::e_add == operation) ||\n                       (details::e_sub == operation) ||\n                       (details::e_mul == operation) ||\n                       (details::e_div == operation)\n                     )\n                  {\n                     if (\n                          !expr_gen.parser_->simplify_unary_negation_branch(branch[0]) ||\n                          !expr_gen.parser_->simplify_unary_negation_branch(branch[1])\n                        )\n                     
{\n                        details::free_all_nodes(*expr_gen.node_allocator_,branch);\n\n                        return error_node();\n                     }\n                  }\n\n                  switch (operation)\n                  {\n                                           // -f(x + 1) + -g(y + 1) --> -(f(x + 1) + g(y + 1))\n                     case details::e_add : return expr_gen(details::e_neg,\n                                              expr_gen.node_allocator_->\n                                                 template allocate<typename details::binary_ext_node<Type,details::add_op<Type> > >\n                                                    (branch[0],branch[1]));\n\n                                           // -f(x + 1) - -g(y + 1) --> g(y + 1) - f(x + 1)\n                     case details::e_sub : return expr_gen.node_allocator_->\n                                              template allocate<typename details::binary_ext_node<Type,details::sub_op<Type> > >\n                                                 (branch[1],branch[0]);\n\n                     default             : break;\n                  }\n               }\n               else if (left_neg && !right_neg)\n               {\n                  if (\n                       (details::e_add == operation) ||\n                       (details::e_sub == operation) ||\n                       (details::e_mul == operation) ||\n                       (details::e_div == operation)\n                     )\n                  {\n                     if (!expr_gen.parser_->simplify_unary_negation_branch(branch[0]))\n                     {\n                        details::free_all_nodes(*expr_gen.node_allocator_,branch);\n\n                        return error_node();\n                     }\n\n                     switch (operation)\n                     {\n                                              // -f(x + 1) + g(y + 1) --> g(y + 1) - f(x + 1)\n                        case details::e_add 
: return expr_gen.node_allocator_->\n                                                 template allocate<typename details::binary_ext_node<Type,details::sub_op<Type> > >\n                                                   (branch[1], branch[0]);\n\n                                              // -f(x + 1) - g(y + 1) --> -(f(x + 1) + g(y + 1))\n                        case details::e_sub : return expr_gen(details::e_neg,\n                                                 expr_gen.node_allocator_->\n                                                    template allocate<typename details::binary_ext_node<Type,details::add_op<Type> > >\n                                                       (branch[0], branch[1]));\n\n                                              // -f(x + 1) * g(y + 1) --> -(f(x + 1) * g(y + 1))\n                        case details::e_mul : return expr_gen(details::e_neg,\n                                                 expr_gen.node_allocator_->\n                                                    template allocate<typename details::binary_ext_node<Type,details::mul_op<Type> > >\n                                                       (branch[0], branch[1]));\n\n                                              // -f(x + 1) / g(y + 1) --> -(f(x + 1) / g(y + 1))\n                        case details::e_div : return expr_gen(details::e_neg,\n                                                 expr_gen.node_allocator_->\n                                                    template allocate<typename details::binary_ext_node<Type,details::div_op<Type> > >\n                                                       (branch[0], branch[1]));\n\n                        default             : return error_node();\n                     }\n                  }\n               }\n               else if (!left_neg && right_neg)\n               {\n                  if (\n                       (details::e_add == operation) ||\n                       (details::e_sub == operation) 
||\n                       (details::e_mul == operation) ||\n                       (details::e_div == operation)\n                     )\n                  {\n                     if (!expr_gen.parser_->simplify_unary_negation_branch(branch[1]))\n                     {\n                        details::free_all_nodes(*expr_gen.node_allocator_,branch);\n\n                        return error_node();\n                     }\n\n                     switch (operation)\n                     {\n                                              // f(x + 1) + -g(y + 1) --> f(x + 1) - g(y + 1)\n                        case details::e_add : return expr_gen.node_allocator_->\n                                                 template allocate<typename details::binary_ext_node<Type,details::sub_op<Type> > >\n                                                   (branch[0], branch[1]);\n\n                                              // f(x + 1) - - g(y + 1) --> f(x + 1) + g(y + 1)\n                        case details::e_sub : return expr_gen.node_allocator_->\n                                                 template allocate<typename details::binary_ext_node<Type,details::add_op<Type> > >\n                                                   (branch[0], branch[1]);\n\n                                              // f(x + 1) * -g(y + 1) --> -(f(x + 1) * g(y + 1))\n                        case details::e_mul : return expr_gen(details::e_neg,\n                                                 expr_gen.node_allocator_->\n                                                    template allocate<typename details::binary_ext_node<Type,details::mul_op<Type> > >\n                                                       (branch[0], branch[1]));\n\n                                              // f(x + 1) / -g(y + 1) --> -(f(x + 1) / g(y + 1))\n                        case details::e_div : return expr_gen(details::e_neg,\n                                                 expr_gen.node_allocator_->\n   
                                                 template allocate<typename details::binary_ext_node<Type,details::div_op<Type> > >\n                                                       (branch[0], branch[1]));\n\n                        default             : return error_node();\n                     }\n                  }\n               }\n\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                           \\\n                  case op0 : return expr_gen.node_allocator_->                                         \\\n                                template allocate<typename details::binary_ext_node<Type,op1<Type> > > \\\n                                   (branch[0], branch[1]);                                             \\\n\n                  basic_opr_switch_statements\n                  extended_opr_switch_statements\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n         };\n\n         struct synthesize_vob_expression\n         {\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               const Type& v = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n\n               #ifndef exprtk_disable_enhanced_features\n               if (details::is_sf3ext_node(branch[1]))\n               {\n                  expression_node_ptr result = error_node();\n\n                  const bool synthesis_result = synthesize_sf4ext_expression::template compile_right<vtype>\n                                                  (expr_gen, v, operation, branch[1], result);\n\n                  if (synthesis_result)\n                  {\n                     
free_node(*expr_gen.node_allocator_,branch[1]);\n                     return result;\n                  }\n               }\n               #endif\n\n               if (\n                    (details::e_mul == operation) ||\n                    (details::e_div == operation)\n                  )\n               {\n                  if (details::is_uv_node(branch[1]))\n                  {\n                     typedef details::uv_base_node<Type>* uvbn_ptr_t;\n\n                     details::operator_type o = static_cast<uvbn_ptr_t>(branch[1])->operation();\n\n                     if (details::e_neg == o)\n                     {\n                        const Type& v1 = static_cast<uvbn_ptr_t>(branch[1])->v();\n\n                        free_node(*expr_gen.node_allocator_,branch[1]);\n\n                        switch (operation)\n                        {\n                           case details::e_mul : return expr_gen(details::e_neg,\n                                                    expr_gen.node_allocator_->\n                                                       template allocate_rr<typename details::\n                                                          vov_node<Type,details::mul_op<Type> > >(v,v1));\n\n                           case details::e_div : return expr_gen(details::e_neg,\n                                                    expr_gen.node_allocator_->\n                                                       template allocate_rr<typename details::\n                                                          vov_node<Type,details::div_op<Type> > >(v,v1));\n\n                           default             : break;\n                        }\n                     }\n                  }\n               }\n\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                       \\\n                  case op0 : return expr_gen.node_allocator_->                                 
    \\\n                                template allocate_rc<typename details::vob_node<Type,op1<Type> > > \\\n                                   (v, branch[1]);                                                 \\\n\n                  basic_opr_switch_statements\n                  extended_opr_switch_statements\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n         };\n\n         struct synthesize_bov_expression\n         {\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               const Type& v = static_cast<details::variable_node<Type>*>(branch[1])->ref();\n\n               #ifndef exprtk_disable_enhanced_features\n               if (details::is_sf3ext_node(branch[0]))\n               {\n                  expression_node_ptr result = error_node();\n\n                  const bool synthesis_result = synthesize_sf4ext_expression::template compile_left<vtype>\n                                                   (expr_gen, v, operation, branch[0], result);\n\n                  if (synthesis_result)\n                  {\n                     free_node(*expr_gen.node_allocator_, branch[0]);\n\n                     return result;\n                  }\n               }\n               #endif\n\n               if (\n                    (details::e_add == operation) ||\n                    (details::e_sub == operation) ||\n                    (details::e_mul == operation) ||\n                    (details::e_div == operation)\n                  )\n               {\n                  if (details::is_uv_node(branch[0]))\n                  {\n                     typedef details::uv_base_node<Type>* uvbn_ptr_t;\n\n                     details::operator_type o = 
static_cast<uvbn_ptr_t>(branch[0])->operation();\n\n                     if (details::e_neg == o)\n                     {\n                        const Type& v0 = static_cast<uvbn_ptr_t>(branch[0])->v();\n\n                        free_node(*expr_gen.node_allocator_,branch[0]);\n\n                        switch (operation)\n                        {\n                           case details::e_add : return expr_gen.node_allocator_->\n                                                    template allocate_rr<typename details::\n                                                       vov_node<Type,details::sub_op<Type> > >(v,v0);\n\n                           case details::e_sub : return expr_gen(details::e_neg,\n                                                    expr_gen.node_allocator_->\n                                                       template allocate_rr<typename details::\n                                                          vov_node<Type,details::add_op<Type> > >(v0,v));\n\n                           case details::e_mul : return expr_gen(details::e_neg,\n                                                    expr_gen.node_allocator_->\n                                                       template allocate_rr<typename details::\n                                                          vov_node<Type,details::mul_op<Type> > >(v0,v));\n\n                           case details::e_div : return expr_gen(details::e_neg,\n                                                    expr_gen.node_allocator_->\n                                                       template allocate_rr<typename details::\n                                                          vov_node<Type,details::div_op<Type> > >(v0,v));\n                           default : break;\n                        }\n                     }\n                  }\n               }\n\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                               
                        \\\n                  case op0 : return expr_gen.node_allocator_->                                     \\\n                                template allocate_cr<typename details::bov_node<Type,op1<Type> > > \\\n                                   (branch[0], v);                                                 \\\n\n                  basic_opr_switch_statements\n                  extended_opr_switch_statements\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n         };\n\n         struct synthesize_cob_expression\n         {\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               const Type c = static_cast<details::literal_node<Type>*>(branch[0])->value();\n\n               free_node(*expr_gen.node_allocator_,branch[0]);\n\n               if (std::equal_to<T>()(T(0),c) && (details::e_mul == operation))\n               {\n                  free_node(*expr_gen.node_allocator_,branch[1]);\n\n                  return expr_gen(T(0));\n               }\n               else if (std::equal_to<T>()(T(0),c) && (details::e_div == operation))\n               {\n                  free_node(*expr_gen.node_allocator_, branch[1]);\n\n                  return expr_gen(T(0));\n               }\n               else if (std::equal_to<T>()(T(0),c) && (details::e_add == operation))\n                  return branch[1];\n               else if (std::equal_to<T>()(T(1),c) && (details::e_mul == operation))\n                  return branch[1];\n\n               if (details::is_cob_node(branch[1]))\n               {\n                  // Simplify expressions of the form:\n                  // 1. 
(1 * (2 * (3 * (4 * (5 * (6 * (7 * (8 * (9 + x))))))))) --> 40320 * (9 + x)\n                  // 2. (1 + (2 + (3 + (4 + (5 + (6 + (7 + (8 + (9 + x))))))))) --> 45 + x\n                  if (\n                       (operation == details::e_mul) ||\n                       (operation == details::e_add)\n                     )\n                  {\n                     details::cob_base_node<Type>* cobnode = static_cast<details::cob_base_node<Type>*>(branch[1]);\n\n                     if (operation == cobnode->operation())\n                     {\n                        switch (operation)\n                        {\n                           case details::e_add : cobnode->set_c(c + cobnode->c()); break;\n                           case details::e_mul : cobnode->set_c(c * cobnode->c()); break;\n                           default             : return error_node();\n                        }\n\n                        return cobnode;\n                     }\n                  }\n\n                  if (operation == details::e_mul)\n                  {\n                     details::cob_base_node<Type>* cobnode = static_cast<details::cob_base_node<Type>*>(branch[1]);\n                     details::operator_type cob_opr = cobnode->operation();\n\n                     if (\n                          (details::e_div == cob_opr) ||\n                          (details::e_mul == cob_opr)\n                        )\n                     {\n                        switch (cob_opr)\n                        {\n                           case details::e_div : cobnode->set_c(c * cobnode->c()); break;\n                           case details::e_mul : cobnode->set_c(cobnode->c() / c); break;\n                           default             : return error_node();\n                        }\n\n                        return cobnode;\n                     }\n                  }\n                  else if (operation == details::e_div)\n                  {\n                     
details::cob_base_node<Type>* cobnode = static_cast<details::cob_base_node<Type>*>(branch[1]);\n                     details::operator_type cob_opr = cobnode->operation();\n\n                     if (\n                          (details::e_div == cob_opr) ||\n                          (details::e_mul == cob_opr)\n                        )\n                     {\n                        details::expression_node<Type>* new_cobnode = error_node();\n\n                        switch (cob_opr)\n                        {\n                           case details::e_div : new_cobnode = expr_gen.node_allocator_->\n                                                    template allocate_tt<typename details::cob_node<Type,details::mul_op<Type> > >\n                                                       (c / cobnode->c(), cobnode->move_branch(0));\n                                                 break;\n\n                           case details::e_mul : new_cobnode = expr_gen.node_allocator_->\n                                                    template allocate_tt<typename details::cob_node<Type,details::div_op<Type> > >\n                                                       (c / cobnode->c(), cobnode->move_branch(0));\n                                                 break;\n\n                           default             : return error_node();\n                        }\n\n                        free_node(*expr_gen.node_allocator_,branch[1]);\n\n                        return new_cobnode;\n                     }\n                  }\n               }\n               #ifndef exprtk_disable_enhanced_features\n               else if (details::is_sf3ext_node(branch[1]))\n               {\n                  expression_node_ptr result = error_node();\n\n                  if (synthesize_sf4ext_expression::template compile_right<ctype>(expr_gen,c,operation,branch[1],result))\n                  {\n                     free_node(*expr_gen.node_allocator_,branch[1]);\n\n             
        return result;\n                  }\n               }\n               #endif\n\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                       \\\n                  case op0 : return expr_gen.node_allocator_->                                     \\\n                                template allocate_tt<typename details::cob_node<Type,op1<Type> > > \\\n                                   (c,  branch[1]);                                                \\\n\n                  basic_opr_switch_statements\n                  extended_opr_switch_statements\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n         };\n\n         struct synthesize_boc_expression\n         {\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               const Type c = static_cast<details::literal_node<Type>*>(branch[1])->value();\n\n               details::free_node(*(expr_gen.node_allocator_), branch[1]);\n\n               if (std::equal_to<T>()(T(0),c) && (details::e_mul == operation))\n               {\n                  free_node(*expr_gen.node_allocator_, branch[0]);\n\n                  return expr_gen(T(0));\n               }\n               else if (std::equal_to<T>()(T(0),c) && (details::e_div == operation))\n               {\n                  free_node(*expr_gen.node_allocator_, branch[0]);\n\n                  return expr_gen(std::numeric_limits<T>::quiet_NaN());\n               }\n               else if (std::equal_to<T>()(T(0),c) && (details::e_add == operation))\n                  return branch[0];\n               else if (std::equal_to<T>()(T(1),c) && (details::e_mul == 
operation))\n                  return branch[0];\n\n               if (details::is_boc_node(branch[0]))\n               {\n                  // Simplify expressions of the form:\n                  // 1. (((((((((x + 9) * 8) * 7) * 6) * 5) * 4) * 3) * 2) * 1) --> (x + 9) * 40320\n                  // 2. (((((((((x + 9) + 8) + 7) + 6) + 5) + 4) + 3) + 2) + 1) --> x + 45\n                  if (\n                       (operation == details::e_mul) ||\n                       (operation == details::e_add)\n                     )\n                  {\n                     details::boc_base_node<Type>* bocnode = static_cast<details::boc_base_node<Type>*>(branch[0]);\n\n                     if (operation == bocnode->operation())\n                     {\n                        switch (operation)\n                        {\n                           case details::e_add : bocnode->set_c(c + bocnode->c()); break;\n                           case details::e_mul : bocnode->set_c(c * bocnode->c()); break;\n                           default             : return error_node();\n                        }\n\n                        return bocnode;\n                     }\n                  }\n                  else if (operation == details::e_div)\n                  {\n                     details::boc_base_node<Type>* bocnode = static_cast<details::boc_base_node<Type>*>(branch[0]);\n                     details::operator_type        boc_opr = bocnode->operation();\n\n                     if (\n                          (details::e_div == boc_opr) ||\n                          (details::e_mul == boc_opr)\n                        )\n                     {\n                        switch (boc_opr)\n                        {\n                           case details::e_div : bocnode->set_c(c * bocnode->c()); break;\n                           case details::e_mul : bocnode->set_c(bocnode->c() / c); break;\n                           default             : return error_node();\n           
             }\n\n                        return bocnode;\n                     }\n                  }\n                  else if (operation == details::e_pow)\n                  {\n                     // (v ^ c0) ^ c1 --> v ^(c0 * c1)\n                     details::boc_base_node<Type>* bocnode = static_cast<details::boc_base_node<Type>*>(branch[0]);\n                     details::operator_type        boc_opr = bocnode->operation();\n\n                     if (details::e_pow == boc_opr)\n                     {\n                        bocnode->set_c(bocnode->c() * c);\n\n                        return bocnode;\n                     }\n                  }\n               }\n\n               #ifndef exprtk_disable_enhanced_features\n               if (details::is_sf3ext_node(branch[0]))\n               {\n                  expression_node_ptr result = error_node();\n\n                  const bool synthesis_result = synthesize_sf4ext_expression::template compile_left<ctype>\n                                                   (expr_gen, c, operation, branch[0], result);\n\n                  if (synthesis_result)\n                  {\n                     free_node(*expr_gen.node_allocator_, branch[0]);\n\n                     return result;\n                  }\n               }\n               #endif\n\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                       \\\n                  case op0 : return expr_gen.node_allocator_->                                     \\\n                                template allocate_cr<typename details::boc_node<Type,op1<Type> > > \\\n                                   (branch[0], c);                                                 \\\n\n                  basic_opr_switch_statements\n                  extended_opr_switch_statements\n                  #undef case_stmt\n                  default : return error_node();\n               }\n         
   }\n         };\n\n         struct synthesize_cocob_expression\n         {\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               expression_node_ptr result = error_node();\n\n               // (cob) o c --> cob\n               if (details::is_cob_node(branch[0]))\n               {\n                  details::cob_base_node<Type>* cobnode = static_cast<details::cob_base_node<Type>*>(branch[0]);\n\n                  const Type c = static_cast<details::literal_node<Type>*>(branch[1])->value();\n\n                  if (std::equal_to<T>()(T(0),c) && (details::e_mul == operation))\n                  {\n                     free_node(*expr_gen.node_allocator_, branch[0]);\n                     free_node(*expr_gen.node_allocator_, branch[1]);\n\n                     return expr_gen(T(0));\n                  }\n                  else if (std::equal_to<T>()(T(0),c) && (details::e_div == operation))\n                  {\n                     free_node(*expr_gen.node_allocator_, branch[0]);\n                     free_node(*expr_gen.node_allocator_, branch[1]);\n\n                     return expr_gen(T(std::numeric_limits<T>::quiet_NaN()));\n                  }\n                  else if (std::equal_to<T>()(T(0),c) && (details::e_add == operation))\n                  {\n                     free_node(*expr_gen.node_allocator_, branch[1]);\n\n                     return branch[0];\n                  }\n                  else if (std::equal_to<T>()(T(1),c) && (details::e_mul == operation))\n                  {\n                     free_node(*expr_gen.node_allocator_, branch[1]);\n\n                     return branch[0];\n                  }\n                  else if (std::equal_to<T>()(T(1),c) && (details::e_div == 
operation))\n                  {\n                     free_node(*expr_gen.node_allocator_, branch[1]);\n\n                     return branch[0];\n                  }\n\n                  const bool op_addsub = (details::e_add == cobnode->operation()) ||\n                                         (details::e_sub == cobnode->operation()) ;\n\n                  if (op_addsub)\n                  {\n                     switch (operation)\n                     {\n                        case details::e_add : cobnode->set_c(cobnode->c() + c); break;\n                        case details::e_sub : cobnode->set_c(cobnode->c() - c); break;\n                        default             : return error_node();\n                     }\n\n                     result = cobnode;\n                  }\n                  else if (details::e_mul == cobnode->operation())\n                  {\n                     switch (operation)\n                     {\n                        case details::e_mul : cobnode->set_c(cobnode->c() * c); break;\n                        case details::e_div : cobnode->set_c(cobnode->c() / c); break;\n                        default             : return error_node();\n                     }\n\n                     result = cobnode;\n                  }\n                  else if (details::e_div == cobnode->operation())\n                  {\n                     if (details::e_mul == operation)\n                     {\n                        cobnode->set_c(cobnode->c() * c);\n                        result = cobnode;\n                     }\n                     else if (details::e_div == operation)\n                     {\n                        result = expr_gen.node_allocator_->\n                                    template allocate_tt<typename details::cob_node<Type,details::div_op<Type> > >\n                                       (cobnode->c() / c, cobnode->move_branch(0));\n\n                        free_node(*expr_gen.node_allocator_, branch[0]);\n     
                }\n                  }\n\n                  if (result)\n                  {\n                     free_node(*expr_gen.node_allocator_,branch[1]);\n                  }\n               }\n\n               // c o (cob) --> cob\n               else if (details::is_cob_node(branch[1]))\n               {\n                  details::cob_base_node<Type>* cobnode = static_cast<details::cob_base_node<Type>*>(branch[1]);\n\n                  const Type c = static_cast<details::literal_node<Type>*>(branch[0])->value();\n\n                  if (std::equal_to<T>()(T(0),c) && (details::e_mul == operation))\n                  {\n                     free_node(*expr_gen.node_allocator_, branch[0]);\n                     free_node(*expr_gen.node_allocator_, branch[1]);\n\n                     return expr_gen(T(0));\n                  }\n                  else if (std::equal_to<T>()(T(0),c) && (details::e_div == operation))\n                  {\n                     free_node(*expr_gen.node_allocator_, branch[0]);\n                     free_node(*expr_gen.node_allocator_, branch[1]);\n\n                     return expr_gen(T(0));\n                  }\n                  else if (std::equal_to<T>()(T(0),c) && (details::e_add == operation))\n                  {\n                     free_node(*expr_gen.node_allocator_, branch[0]);\n\n                     return branch[1];\n                  }\n                  else if (std::equal_to<T>()(T(1),c) && (details::e_mul == operation))\n                  {\n                     free_node(*expr_gen.node_allocator_, branch[0]);\n\n                     return branch[1];\n                  }\n\n                  if (details::e_add == cobnode->operation())\n                  {\n                     if (details::e_add == operation)\n                     {\n                        cobnode->set_c(c + cobnode->c());\n                        result = cobnode;\n                     }\n                     else if (details::e_sub == 
operation)\n                     {\n                        result = expr_gen.node_allocator_->\n                                    template allocate_tt<typename details::cob_node<Type,details::sub_op<Type> > >\n                                       (c - cobnode->c(), cobnode->move_branch(0));\n\n                        free_node(*expr_gen.node_allocator_,branch[1]);\n                     }\n                  }\n                  else if (details::e_sub == cobnode->operation())\n                  {\n                     if (details::e_add == operation)\n                     {\n                        cobnode->set_c(c + cobnode->c());\n                        result = cobnode;\n                     }\n                     else if (details::e_sub == operation)\n                     {\n                        result = expr_gen.node_allocator_->\n                                    template allocate_tt<typename details::cob_node<Type,details::add_op<Type> > >\n                                       (c - cobnode->c(), cobnode->move_branch(0));\n\n                        free_node(*expr_gen.node_allocator_,branch[1]);\n                     }\n                  }\n                  else if (details::e_mul == cobnode->operation())\n                  {\n                     if (details::e_mul == operation)\n                     {\n                        cobnode->set_c(c * cobnode->c());\n                        result = cobnode;\n                     }\n                     else if (details::e_div == operation)\n                     {\n                        result = expr_gen.node_allocator_->\n                                    template allocate_tt<typename details::cob_node<Type,details::div_op<Type> > >\n                                       (c / cobnode->c(), cobnode->move_branch(0));\n\n                        free_node(*expr_gen.node_allocator_,branch[1]);\n                     }\n                  }\n                  else if (details::e_div == 
cobnode->operation())\n                  {\n                     if (details::e_mul == operation)\n                     {\n                        cobnode->set_c(c * cobnode->c());\n                        result = cobnode;\n                     }\n                     else if (details::e_div == operation)\n                     {\n                        result = expr_gen.node_allocator_->\n                                    template allocate_tt<typename details::cob_node<Type,details::mul_op<Type> > >\n                                       (c / cobnode->c(), cobnode->move_branch(0));\n\n                        free_node(*expr_gen.node_allocator_,branch[1]);\n                     }\n                  }\n\n                  if (result)\n                  {\n                     free_node(*expr_gen.node_allocator_,branch[0]);\n                  }\n               }\n\n               return result;\n            }\n         };\n\n         struct synthesize_coboc_expression\n         {\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               expression_node_ptr result = error_node();\n\n               // (boc) o c --> boc\n               if (details::is_boc_node(branch[0]))\n               {\n                  details::boc_base_node<Type>* bocnode = static_cast<details::boc_base_node<Type>*>(branch[0]);\n\n                  const Type c = static_cast<details::literal_node<Type>*>(branch[1])->value();\n\n                  if (details::e_add == bocnode->operation())\n                  {\n                     switch (operation)\n                     {\n                        case details::e_add : bocnode->set_c(bocnode->c() + c); break;\n                        case details::e_sub : bocnode->set_c(bocnode->c() - c); 
break;\n                        default             : return error_node();\n                     }\n\n                     result = bocnode;\n                  }\n                  else if (details::e_mul == bocnode->operation())\n                  {\n                     switch (operation)\n                     {\n                        case details::e_mul : bocnode->set_c(bocnode->c() * c); break;\n                        case details::e_div : bocnode->set_c(bocnode->c() / c); break;\n                        default             : return error_node();\n                     }\n\n                     result = bocnode;\n                  }\n                  else if (details::e_sub == bocnode->operation())\n                  {\n                     if (details::e_add == operation)\n                     {\n                        result = expr_gen.node_allocator_->\n                                    template allocate_tt<typename details::boc_node<Type,details::add_op<Type> > >\n                                       (bocnode->move_branch(0), c - bocnode->c());\n\n                        free_node(*expr_gen.node_allocator_,branch[0]);\n                     }\n                     else if (details::e_sub == operation)\n                     {\n                        bocnode->set_c(bocnode->c() + c);\n                        result = bocnode;\n                     }\n                  }\n                  else if (details::e_div == bocnode->operation())\n                  {\n                     switch (operation)\n                     {\n                        case details::e_div : bocnode->set_c(bocnode->c() * c); break;\n                        case details::e_mul : bocnode->set_c(bocnode->c() / c); break;\n                        default             : return error_node();\n                     }\n\n                     result = bocnode;\n                  }\n\n                  if (result)\n                  {\n                     
free_node(*expr_gen.node_allocator_, branch[1]);\n                  }\n               }\n\n               // c o (boc) --> boc\n               else if (details::is_boc_node(branch[1]))\n               {\n                  details::boc_base_node<Type>* bocnode = static_cast<details::boc_base_node<Type>*>(branch[1]);\n\n                  const Type c = static_cast<details::literal_node<Type>*>(branch[0])->value();\n\n                  if (details::e_add == bocnode->operation())\n                  {\n                     if (details::e_add == operation)\n                     {\n                        bocnode->set_c(c + bocnode->c());\n                        result = bocnode;\n                     }\n                     else if (details::e_sub == operation)\n                     {\n                        result = expr_gen.node_allocator_->\n                                    template allocate_tt<typename details::cob_node<Type,details::sub_op<Type> > >\n                                       (c - bocnode->c(), bocnode->move_branch(0));\n\n                        free_node(*expr_gen.node_allocator_,branch[1]);\n                     }\n                  }\n                  else if (details::e_sub == bocnode->operation())\n                  {\n                     if (details::e_add == operation)\n                     {\n                        result = expr_gen.node_allocator_->\n                                    template allocate_tt<typename details::boc_node<Type,details::add_op<Type> > >\n                                       (bocnode->move_branch(0), c - bocnode->c());\n\n                        free_node(*expr_gen.node_allocator_,branch[1]);\n                     }\n                     else if (details::e_sub == operation)\n                     {\n                        result = expr_gen.node_allocator_->\n                                    template allocate_tt<typename details::cob_node<Type,details::sub_op<Type> > >\n                                    
   (c + bocnode->c(), bocnode->move_branch(0));\n\n                        free_node(*expr_gen.node_allocator_,branch[1]);\n                     }\n                  }\n                  else if (details::e_mul == bocnode->operation())\n                  {\n                     if (details::e_mul == operation)\n                     {\n                        bocnode->set_c(c * bocnode->c());\n                        result = bocnode;\n                     }\n                     else if (details::e_div == operation)\n                     {\n                        result = expr_gen.node_allocator_->\n                                    template allocate_tt<typename details::cob_node<Type,details::div_op<Type> > >\n                                       (c / bocnode->c(), bocnode->move_branch(0));\n\n                        free_node(*expr_gen.node_allocator_,branch[1]);\n                     }\n                  }\n                  else if (details::e_div == bocnode->operation())\n                  {\n                     if (details::e_mul == operation)\n                     {\n                        bocnode->set_c(bocnode->c() / c);\n                        result = bocnode;\n                     }\n                     else if (details::e_div == operation)\n                     {\n                        result = expr_gen.node_allocator_->\n                                    template allocate_tt<typename details::cob_node<Type,details::div_op<Type> > >\n                                       (c * bocnode->c(), bocnode->move_branch(0));\n\n                        free_node(*expr_gen.node_allocator_,branch[1]);\n                     }\n                  }\n\n                  if (result)\n                  {\n                     free_node(*expr_gen.node_allocator_,branch[0]);\n                  }\n               }\n\n               return result;\n            }\n         };\n\n         #ifndef exprtk_disable_enhanced_features\n         inline bool 
synthesize_expression(const details::operator_type& operation,\n                                           expression_node_ptr (&branch)[2],\n                                           expression_node_ptr& result)\n         {\n            result = error_node();\n\n            if (!operation_optimisable(operation))\n               return false;\n\n            const std::string node_id = branch_to_id(branch);\n\n            const typename synthesize_map_t::iterator itr = synthesize_map_.find(node_id);\n\n            if (synthesize_map_.end() != itr)\n            {\n               result = itr->second((*this), operation, branch);\n\n               return true;\n            }\n            else\n               return false;\n         }\n\n         struct synthesize_vov_expression\n         {\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               const Type& v1 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type& v2 = static_cast<details::variable_node<Type>*>(branch[1])->ref();\n\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                       \\\n                  case op0 : return expr_gen.node_allocator_->                                     \\\n                                template allocate_rr<typename details::vov_node<Type,op1<Type> > > \\\n                                   (v1, v2);                                                       \\\n\n                  basic_opr_switch_statements\n                  extended_opr_switch_statements\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n         };\n\n         struct 
synthesize_cov_expression\n         {\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               const Type  c = static_cast<details::literal_node<Type>*> (branch[0])->value();\n               const Type& v = static_cast<details::variable_node<Type>*>(branch[1])->ref  ();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n\n               if (std::equal_to<T>()(T(0),c) && (details::e_mul == operation))\n                  return expr_gen(T(0));\n               else if (std::equal_to<T>()(T(0),c) && (details::e_div == operation))\n                  return expr_gen(T(0));\n               else if (std::equal_to<T>()(T(0),c) && (details::e_add == operation))\n                  return static_cast<details::variable_node<Type>*>(branch[1]);\n               else if (std::equal_to<T>()(T(1),c) && (details::e_mul == operation))\n                  return static_cast<details::variable_node<Type>*>(branch[1]);\n\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                       \\\n                  case op0 : return expr_gen.node_allocator_->                                     \\\n                                template allocate_cr<typename details::cov_node<Type,op1<Type> > > \\\n                                   (c, v);                                                         \\\n\n                  basic_opr_switch_statements\n                  extended_opr_switch_statements\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n         };\n\n         struct synthesize_voc_expression\n         {\n            static inline expression_node_ptr 
process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               const Type& v = static_cast<details::variable_node<Type>*>(branch[0])->ref  ();\n               const Type  c = static_cast<details::literal_node<Type>*> (branch[1])->value();\n\n               details::free_node(*(expr_gen.node_allocator_), branch[1]);\n\n               if (expr_gen.cardinal_pow_optimisable(operation,c))\n               {\n                  if (std::equal_to<T>()(T(1),c))\n                     return branch[0];\n                  else\n                     return expr_gen.cardinal_pow_optimisation(v,c);\n               }\n               else if (std::equal_to<T>()(T(0),c) && (details::e_mul == operation))\n                  return expr_gen(T(0));\n               else if (std::equal_to<T>()(T(0),c) && (details::e_div == operation))\n                  return expr_gen(std::numeric_limits<T>::quiet_NaN());\n               else if (std::equal_to<T>()(T(0),c) && (details::e_add == operation))\n                  return static_cast<details::variable_node<Type>*>(branch[0]);\n               else if (std::equal_to<T>()(T(1),c) && (details::e_mul == operation))\n                  return static_cast<details::variable_node<Type>*>(branch[0]);\n               else if (std::equal_to<T>()(T(1),c) && (details::e_div == operation))\n                  return static_cast<details::variable_node<Type>*>(branch[0]);\n\n               switch (operation)\n               {\n                  #define case_stmt(op0,op1)                                                       \\\n                  case op0 : return expr_gen.node_allocator_->                                     \\\n                                template allocate_rc<typename details::voc_node<Type,op1<Type> > > \\\n                                   (v, 
c);                                                         \\\n\n                  basic_opr_switch_statements\n                  extended_opr_switch_statements\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n         };\n\n         struct synthesize_sf3ext_expression\n         {\n            template <typename T0, typename T1, typename T2>\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& sf3opr,\n                                                      T0 t0, T1 t1, T2 t2)\n            {\n               switch (sf3opr)\n               {\n                  #define case_stmt(op)                                                                              \\\n                  case details::e_sf##op : return details::T0oT1oT2_sf3ext<T,T0,T1,T2,details::sf##op##_op<Type> >:: \\\n                                allocate(*(expr_gen.node_allocator_), t0, t1, t2);                                   \\\n\n                  case_stmt(00) case_stmt(01) case_stmt(02) case_stmt(03)\n                  case_stmt(04) case_stmt(05) case_stmt(06) case_stmt(07)\n                  case_stmt(08) case_stmt(09) case_stmt(10) case_stmt(11)\n                  case_stmt(12) case_stmt(13) case_stmt(14) case_stmt(15)\n                  case_stmt(16) case_stmt(17) case_stmt(18) case_stmt(19)\n                  case_stmt(20) case_stmt(21) case_stmt(22) case_stmt(23)\n                  case_stmt(24) case_stmt(25) case_stmt(26) case_stmt(27)\n                  case_stmt(28) case_stmt(29) case_stmt(30)\n                  #undef case_stmt\n                  default : return error_node();\n               }\n            }\n\n            template <typename T0, typename T1, typename T2>\n            static inline bool compile(expression_generator<Type>& expr_gen, const std::string& id,\n                     
                  T0 t0, T1 t1, T2 t2,\n                                       expression_node_ptr& result)\n            {\n               details::operator_type sf3opr;\n\n               if (!expr_gen.sf3_optimisable(id,sf3opr))\n                  return false;\n               else\n                  result = synthesize_sf3ext_expression::template process<T0,T1,T2>(expr_gen,sf3opr,t0,t1,t2);\n\n               return true;\n            }\n         };\n\n         struct synthesize_sf4ext_expression\n         {\n            template <typename T0, typename T1, typename T2, typename T3>\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& sf4opr,\n                                                      T0 t0, T1 t1, T2 t2, T3 t3)\n            {\n               switch (sf4opr)\n               {\n                  #define case_stmt0(op)                                                                                      \\\n                  case details::e_sf##op : return details::T0oT1oT2oT3_sf4ext<Type,T0,T1,T2,T3,details::sf##op##_op<Type> >:: \\\n                                allocate(*(expr_gen.node_allocator_), t0, t1, t2, t3);                                        \\\n\n\n                  #define case_stmt1(op)                                                                                             \\\n                  case details::e_sf4ext##op : return details::T0oT1oT2oT3_sf4ext<Type,T0,T1,T2,T3,details::sfext##op##_op<Type> >:: \\\n                                allocate(*(expr_gen.node_allocator_), t0, t1, t2, t3);                                               \\\n\n                  case_stmt0(48) case_stmt0(49) case_stmt0(50) case_stmt0(51)\n                  case_stmt0(52) case_stmt0(53) case_stmt0(54) case_stmt0(55)\n                  case_stmt0(56) case_stmt0(57) case_stmt0(58) case_stmt0(59)\n                  
case_stmt0(60) case_stmt0(61) case_stmt0(62) case_stmt0(63)\n                  case_stmt0(64) case_stmt0(65) case_stmt0(66) case_stmt0(67)\n                  case_stmt0(68) case_stmt0(69) case_stmt0(70) case_stmt0(71)\n                  case_stmt0(72) case_stmt0(73) case_stmt0(74) case_stmt0(75)\n                  case_stmt0(76) case_stmt0(77) case_stmt0(78) case_stmt0(79)\n                  case_stmt0(80) case_stmt0(81) case_stmt0(82) case_stmt0(83)\n\n                  case_stmt1(00) case_stmt1(01) case_stmt1(02) case_stmt1(03)\n                  case_stmt1(04) case_stmt1(05) case_stmt1(06) case_stmt1(07)\n                  case_stmt1(08) case_stmt1(09) case_stmt1(10) case_stmt1(11)\n                  case_stmt1(12) case_stmt1(13) case_stmt1(14) case_stmt1(15)\n                  case_stmt1(16) case_stmt1(17) case_stmt1(18) case_stmt1(19)\n                  case_stmt1(20) case_stmt1(21) case_stmt1(22) case_stmt1(23)\n                  case_stmt1(24) case_stmt1(25) case_stmt1(26) case_stmt1(27)\n                  case_stmt1(28) case_stmt1(29) case_stmt1(30) case_stmt1(31)\n                  case_stmt1(32) case_stmt1(33) case_stmt1(34) case_stmt1(35)\n                  case_stmt1(36) case_stmt1(37) case_stmt1(38) case_stmt1(39)\n                  case_stmt1(40) case_stmt1(41) case_stmt1(42) case_stmt1(43)\n                  case_stmt1(44) case_stmt1(45) case_stmt1(46) case_stmt1(47)\n                  case_stmt1(48) case_stmt1(49) case_stmt1(50) case_stmt1(51)\n                  case_stmt1(52) case_stmt1(53) case_stmt1(54) case_stmt1(55)\n                  case_stmt1(56) case_stmt1(57) case_stmt1(58) case_stmt1(59)\n                  case_stmt1(60) case_stmt1(61)\n\n                  #undef case_stmt0\n                  #undef case_stmt1\n                  default : return error_node();\n               }\n            }\n\n            template <typename T0, typename T1, typename T2, typename T3>\n            static inline bool compile(expression_generator<Type>& 
expr_gen, const std::string& id,\n                                       T0 t0, T1 t1, T2 t2, T3 t3,\n                                       expression_node_ptr& result)\n            {\n               details::operator_type sf4opr;\n\n               if (!expr_gen.sf4_optimisable(id,sf4opr))\n                  return false;\n               else\n                  result = synthesize_sf4ext_expression::template process<T0,T1,T2,T3>\n                              (expr_gen, sf4opr, t0, t1, t2, t3);\n\n               return true;\n            }\n\n            // T o (sf3ext)\n            template <typename ExternalType>\n            static inline bool compile_right(expression_generator<Type>& expr_gen,\n                                             ExternalType t,\n                                             const details::operator_type& operation,\n                                             expression_node_ptr& sf3node,\n                                             expression_node_ptr& result)\n            {\n               if (!details::is_sf3ext_node(sf3node))\n                  return false;\n\n               typedef details::T0oT1oT2_base_node<Type>* sf3ext_base_ptr;\n\n               sf3ext_base_ptr n = static_cast<sf3ext_base_ptr>(sf3node);\n               std::string id = \"t\" + expr_gen.to_str(operation) + \"(\" + n->type_id() + \")\";\n\n               switch (n->type())\n               {\n                  case details::expression_node<Type>::e_covoc : return compile_right_impl\n                                                                    <typename covoc_t::sf3_type_node,ExternalType,ctype,vtype,ctype>\n                                                                       (expr_gen, id, t, sf3node, result);\n\n                  case details::expression_node<Type>::e_covov : return compile_right_impl\n                                                                    <typename covov_t::sf3_type_node,ExternalType,ctype,vtype,vtype>\n                
                                                       (expr_gen, id, t, sf3node, result);\n\n                  case details::expression_node<Type>::e_vocov : return compile_right_impl\n                                                                    <typename vocov_t::sf3_type_node,ExternalType,vtype,ctype,vtype>\n                                                                       (expr_gen, id, t, sf3node, result);\n\n                  case details::expression_node<Type>::e_vovoc : return compile_right_impl\n                                                                    <typename vovoc_t::sf3_type_node,ExternalType,vtype,vtype,ctype>\n                                                                       (expr_gen, id, t, sf3node, result);\n\n                  case details::expression_node<Type>::e_vovov : return compile_right_impl\n                                                                    <typename vovov_t::sf3_type_node,ExternalType,vtype,vtype,vtype>\n                                                                       (expr_gen, id, t, sf3node, result);\n\n                  default                                      : return false;\n               }\n            }\n\n            // (sf3ext) o T\n            template <typename ExternalType>\n            static inline bool compile_left(expression_generator<Type>& expr_gen,\n                                            ExternalType t,\n                                            const details::operator_type& operation,\n                                            expression_node_ptr& sf3node,\n                                            expression_node_ptr& result)\n            {\n               if (!details::is_sf3ext_node(sf3node))\n                  return false;\n\n               typedef details::T0oT1oT2_base_node<Type>* sf3ext_base_ptr;\n\n               sf3ext_base_ptr n = static_cast<sf3ext_base_ptr>(sf3node);\n\n               std::string id = \"(\" + n->type_id() + \")\" + 
expr_gen.to_str(operation) + \"t\";\n\n               switch (n->type())\n               {\n                  case details::expression_node<Type>::e_covoc : return compile_left_impl\n                                                                    <typename covoc_t::sf3_type_node,ExternalType,ctype,vtype,ctype>\n                                                                       (expr_gen, id, t, sf3node, result);\n\n                  case details::expression_node<Type>::e_covov : return compile_left_impl\n                                                                    <typename covov_t::sf3_type_node,ExternalType,ctype,vtype,vtype>\n                                                                       (expr_gen, id, t, sf3node, result);\n\n                  case details::expression_node<Type>::e_vocov : return compile_left_impl\n                                                                    <typename vocov_t::sf3_type_node,ExternalType,vtype,ctype,vtype>\n                                                                       (expr_gen, id, t, sf3node, result);\n\n                  case details::expression_node<Type>::e_vovoc : return compile_left_impl\n                                                                    <typename vovoc_t::sf3_type_node,ExternalType,vtype,vtype,ctype>\n                                                                       (expr_gen, id, t, sf3node, result);\n\n                  case details::expression_node<Type>::e_vovov : return compile_left_impl\n                                                                    <typename vovov_t::sf3_type_node,ExternalType,vtype,vtype,vtype>\n                                                                       (expr_gen, id, t, sf3node, result);\n\n                  default                                      : return false;\n               }\n            }\n\n            template <typename SF3TypeNode, typename ExternalType, typename T0, typename T1, typename T2>\n           
 static inline bool compile_right_impl(expression_generator<Type>& expr_gen,\n                                                  const std::string& id,\n                                                  ExternalType t,\n                                                  expression_node_ptr& node,\n                                                  expression_node_ptr& result)\n            {\n               SF3TypeNode* n = dynamic_cast<SF3TypeNode*>(node);\n\n               if (n)\n               {\n                  T0 t0 = n->t0();\n                  T1 t1 = n->t1();\n                  T2 t2 = n->t2();\n\n                  return synthesize_sf4ext_expression::template compile<ExternalType,T0,T1,T2>\n                            (expr_gen, id, t, t0, t1, t2, result);\n               }\n               else\n                  return false;\n            }\n\n            template <typename SF3TypeNode, typename ExternalType, typename T0, typename T1, typename T2>\n            static inline bool compile_left_impl(expression_generator<Type>& expr_gen,\n                                                 const std::string& id,\n                                                 ExternalType t,\n                                                 expression_node_ptr& node,\n                                                 expression_node_ptr& result)\n            {\n               SF3TypeNode* n = dynamic_cast<SF3TypeNode*>(node);\n\n               if (n)\n               {\n                  T0 t0 = n->t0();\n                  T1 t1 = n->t1();\n                  T2 t2 = n->t2();\n\n                  return synthesize_sf4ext_expression::template compile<T0,T1,T2,ExternalType>\n                            (expr_gen, id, t0, t1, t2, t, result);\n               }\n               else\n                  return false;\n            }\n         };\n\n         struct synthesize_vovov_expression0\n         {\n            typedef typename vovov_t::type0 node_type;\n            typedef typename 
vovov_t::sf3_type sf3_type;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (v0 o0 v1) o1 (v2)\n               const details::vov_base_node<Type>* vov = static_cast<details::vov_base_node<Type>*>(branch[0]);\n               const Type& v0 = vov->v0();\n               const Type& v1 = vov->v1();\n               const Type& v2 = static_cast<details::variable_node<Type>*>(branch[1])->ref();\n               const details::operator_type o0 = vov->operation();\n               const details::operator_type o1 = operation;\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (v0 / v1) / v2 --> (vovov) v0 / (v1 * v2)\n                  if ((details::e_div == o0) && (details::e_div == o1))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<vtype,vtype,vtype>(expr_gen, \"t/(t*t)\", v0, v1, v2, result);\n\n                     exprtk_debug((\"(v0 / v1) / v2 --> (vovov) v0 / (v1 * v2)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf3ext_expression::template compile<vtype, vtype, vtype>\n                     (expr_gen, id(expr_gen, o0, o1), v0, v1, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, v2, f0, f1);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0, const details::operator_type o1)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t\");\n            }\n         };\n\n         struct synthesize_vovov_expression1\n         {\n            typedef typename vovov_t::type1 node_type;\n            typedef typename vovov_t::sf3_type sf3_type;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (v0) o0 (v1 o1 v2)\n               const details::vov_base_node<Type>* vov = static_cast<details::vov_base_node<Type>*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type& v1 = vov->v0();\n               const Type& v2 = vov->v1();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = vov->operation();\n\n               binary_functor_t 
f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // v0 / (v1 / v2) --> (vovov) (v0 * v2) / v1\n                  if ((details::e_div == o0) && (details::e_div == o1))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<vtype,vtype,vtype>(expr_gen, \"(t*t)/t\", v0, v2, v1, result);\n\n                     exprtk_debug((\"v0 / (v1 / v2) --> (vovov) (v0 * v2) / v1\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf3ext_expression::template compile<vtype, vtype, vtype>\n                     (expr_gen, id(expr_gen, o0, o1), v0, v1, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, v2, f0, f1);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0, const details::operator_type o1)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"t)\");\n            }\n         };\n\n         struct synthesize_vovoc_expression0\n         {\n            typedef typename 
vovoc_t::type0 node_type;\n            typedef typename vovoc_t::sf3_type sf3_type;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (v0 o0 v1) o1 (c)\n               const details::vov_base_node<Type>* vov = static_cast<details::vov_base_node<Type>*>(branch[0]);\n               const Type& v0 = vov->v0();\n               const Type& v1 = vov->v1();\n               const Type   c = static_cast<details::literal_node<Type>*>(branch[1])->value();\n               const details::operator_type o0 = vov->operation();\n               const details::operator_type o1 = operation;\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (v0 / v1) / c --> (vovoc) v0 / (v1 * c)\n                  if ((details::e_div == o0) && (details::e_div == o1))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<vtype,vtype,ctype>(expr_gen, \"t/(t*t)\", v0, v1, c, result);\n\n                     exprtk_debug((\"(v0 / v1) / c --> (vovoc) v0 / (v1 * c)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf3ext_expression::template compile<vtype, vtype, ctype>\n                     (expr_gen, id(expr_gen, o0, o1), v0, v1, c, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, c, f0, f1);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0, const details::operator_type o1)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t\");\n            }\n         };\n\n         struct synthesize_vovoc_expression1\n         {\n            typedef typename vovoc_t::type1 node_type;\n            typedef typename vovoc_t::sf3_type sf3_type;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (v0) o0 (v1 o1 c)\n               const details::voc_base_node<Type>* voc = static_cast<const details::voc_base_node<Type>*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type& v1 = voc->v();\n               const Type   c = voc->c();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = voc->operation();\n\n               
binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // v0 / (v1 / c) --> (vocov) (v0 * c) / v1\n                  if ((details::e_div == o0) && (details::e_div == o1))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<vtype,ctype,vtype>(expr_gen, \"(t*t)/t\", v0, c, v1, result);\n\n                     exprtk_debug((\"v0 / (v1 / c) --> (vocov) (v0 * c) / v1\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf3ext_expression::template compile<vtype, vtype, ctype>\n                     (expr_gen, id(expr_gen, o0, o1), v0, v1, c, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, c, f0, f1);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0, const details::operator_type o1)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"t)\");\n            }\n         };\n\n         struct synthesize_vocov_expression0\n         {\n            typedef 
typename vocov_t::type0 node_type;\n            typedef typename vocov_t::sf3_type sf3_type;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (v0 o0 c) o1 (v1)\n               const details::voc_base_node<Type>* voc = static_cast<details::voc_base_node<Type>*>(branch[0]);\n               const Type& v0 = voc->v();\n               const Type   c = voc->c();\n               const Type& v1 = static_cast<details::variable_node<Type>*>(branch[1])->ref();\n               const details::operator_type o0 = voc->operation();\n               const details::operator_type o1 = operation;\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (v0 / c) / v1 --> (vovoc) v0 / (v1 * c)\n                  if ((details::e_div == o0) && (details::e_div == o1))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<vtype,vtype,ctype>(expr_gen, \"t/(t*t)\", v0, v1, c, result);\n\n                     exprtk_debug((\"(v0 / c) / v1 --> (vovoc) v0 / (v1 * c)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf3ext_expression::template compile<vtype, ctype, vtype>\n                     (expr_gen, id(expr_gen, o0, o1), v0, c, v1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), v0, c, v1, f0, f1);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0, const details::operator_type o1)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t\");\n            }\n         };\n\n         struct synthesize_vocov_expression1\n         {\n            typedef typename vocov_t::type1 node_type;\n            typedef typename vocov_t::sf3_type sf3_type;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (v0) o0 (c o1 v1)\n               const details::cov_base_node<Type>* cov = static_cast<details::cov_base_node<Type>*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type   c = cov->c();\n               const Type& v1 = cov->v();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = cov->operation();\n\n               binary_functor_t f0 = 
reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // v0 / (c / v1) --> (vovoc) (v0 * v1) / c\n                  if ((details::e_div == o0) && (details::e_div == o1))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<vtype, vtype, ctype>(expr_gen, \"(t*t)/t\", v0, v1, c, result);\n\n                     exprtk_debug((\"v0 / (c / v1) --> (vovoc) (v0 * v1) / c\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf3ext_expression::template compile<vtype, ctype, vtype>\n                     (expr_gen, id(expr_gen, o0, o1), v0, c, v1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), v0, c, v1, f0, f1);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0, const details::operator_type o1)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"t)\");\n            }\n         };\n\n         struct synthesize_covov_expression0\n         {\n            typedef typename 
covov_t::type0 node_type;\n            typedef typename covov_t::sf3_type sf3_type;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (c o0 v0) o1 (v1)\n               const details::cov_base_node<Type>* cov = static_cast<details::cov_base_node<Type>*>(branch[0]);\n               const Type   c = cov->c();\n               const Type& v0 = cov->v();\n               const Type& v1 = static_cast<details::variable_node<Type>*>(branch[1])->ref();\n               const details::operator_type o0 = cov->operation();\n               const details::operator_type o1 = operation;\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (c / v0) / v1 --> (covov) c / (v0 * v1)\n                  if ((details::e_div == o0) && (details::e_div == o1))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype, vtype, vtype>(expr_gen, \"t/(t*t)\", c, v0, v1, result);\n\n                     exprtk_debug((\"(c / v0) / v1 --> (covov) c / (v0 * v1)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf3ext_expression::template compile<ctype, vtype, vtype>\n                     (expr_gen, id(expr_gen, o0, o1), c, v0, v1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), c, v0, v1, f0, f1);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0, const details::operator_type o1)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t\");\n            }\n         };\n\n         struct synthesize_covov_expression1\n         {\n            typedef typename covov_t::type1 node_type;\n            typedef typename covov_t::sf3_type sf3_type;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (c) o0 (v0 o1 v1)\n               const details::vov_base_node<Type>* vov = static_cast<details::vov_base_node<Type>*>(branch[1]);\n               const Type   c = static_cast<details::literal_node<Type>*>(branch[0])->value();\n               const Type& v0 = vov->v0();\n               const Type& v1 = vov->v1();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = vov->operation();\n\n               binary_functor_t 
f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // c / (v0 / v1) --> (covov) (c * v1) / v0\n                  if ((details::e_div == o0) && (details::e_div == o1))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype, vtype, vtype>(expr_gen, \"(t*t)/t\", c, v1, v0, result);\n\n                     exprtk_debug((\"c / (v0 / v1) --> (covov) (c * v1) / v0\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf3ext_expression::template compile<ctype, vtype, vtype>\n                     (expr_gen, id(expr_gen, o0, o1), c, v0, v1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), c, v0, v1, f0, f1);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen, const details::operator_type o0, const details::operator_type o1)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"t)\");\n            }\n         };\n\n         struct synthesize_covoc_expression0\n         {\n     
       typedef typename covoc_t::type0 node_type;\n            typedef typename covoc_t::sf3_type sf3_type;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (c0 o0 v) o1 (c1)\n               const details::cov_base_node<Type>* cov = static_cast<details::cov_base_node<Type>*>(branch[0]);\n               const Type  c0 = cov->c();\n               const Type&  v = cov->v();\n               const Type  c1 = static_cast<details::literal_node<Type>*>(branch[1])->value();\n               const details::operator_type o0 = cov->operation();\n               const details::operator_type o1 = operation;\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (c0 + v) + c1 --> (cov) (c0 + c1) + v\n                  if ((details::e_add == o0) && (details::e_add == o1))\n                  {\n                     exprtk_debug((\"(c0 + v) + c1 --> (cov) (c0 + c1) + v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::add_op<Type> > >(c0 + c1, v);\n                  }\n                  // (c0 + v) - c1 --> (cov) (c0 - c1) + v\n                  else if ((details::e_add == o0) && (details::e_sub == o1))\n                  {\n                     exprtk_debug((\"(c0 + v) - c1 --> (cov) (c0 - 
c1) + v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::add_op<Type> > >(c0 - c1, v);\n                  }\n                  // (c0 - v) + c1 --> (cov) (c0 + c1) - v\n                  else if ((details::e_sub == o0) && (details::e_add == o1))\n                  {\n                     exprtk_debug((\"(c0 - v) + c1 --> (cov) (c0 + c1) - v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::sub_op<Type> > >(c0 + c1, v);\n                  }\n                  // (c0 - v) - c1 --> (cov) (c0 - c1) - v\n                  else if ((details::e_sub == o0) && (details::e_sub == o1))\n                  {\n                     exprtk_debug((\"(c0 - v) - c1 --> (cov) (c0 - c1) - v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::sub_op<Type> > >(c0 - c1, v);\n                  }\n                  // (c0 * v) * c1 --> (cov) (c0 * c1) * v\n                  else if ((details::e_mul == o0) && (details::e_mul == o1))\n                  {\n                     exprtk_debug((\"(c0 * v) * c1 --> (cov) (c0 * c1) * v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::mul_op<Type> > >(c0 * c1, v);\n                  }\n                  // (c0 * v) / c1 --> (cov) (c0 / c1) * v\n                  else if ((details::e_mul == o0) && (details::e_div == o1))\n                  {\n                     exprtk_debug((\"(c0 * v) / c1 --> (cov) (c0 / c1) * v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::mul_op<Type> > >(c0 / c1, v);\n                 
 }\n                  // (c0 / v) * c1 --> (cov) (c0 * c1) / v\n                  else if ((details::e_div == o0) && (details::e_mul == o1))\n                  {\n                     exprtk_debug((\"(c0 / v) * c1 --> (cov) (c0 * c1) / v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::div_op<Type> > >(c0 * c1, v);\n                  }\n                  // (c0 / v) / c1 --> (cov) (c0 / c1) / v\n                  else if ((details::e_div == o0) && (details::e_div == o1))\n                  {\n                     exprtk_debug((\"(c0 / v) / c1 --> (cov) (c0 / c1) / v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::div_op<Type> > >(c0 / c1, v);\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf3ext_expression::template compile<ctype, vtype, ctype>\n                     (expr_gen, id(expr_gen, o0, o1), c0, v, c1,result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), c0, v, c1, f0, f1);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0, const details::operator_type o1)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t\");\n            }\n         };\n\n         struct synthesize_covoc_expression1\n         {\n            typedef typename covoc_t::type1 node_type;\n  
          typedef typename covoc_t::sf3_type sf3_type;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (c0) o0 (v o1 c1)\n               const details::voc_base_node<Type>* voc = static_cast<details::voc_base_node<Type>*>(branch[1]);\n               const Type  c0 = static_cast<details::literal_node<Type>*>(branch[0])->value();\n               const Type&  v = voc->v();\n               const Type  c1 = voc->c();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = voc->operation();\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (c0) + (v + c1) --> (cov) (c0 + c1) + v\n                  if ((details::e_add == o0) && (details::e_add == o1))\n                  {\n                     exprtk_debug((\"(c0) + (v + c1) --> (cov) (c0 + c1) + v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::add_op<Type> > >(c0 + c1, v);\n                  }\n                  // (c0) + (v - c1) --> (cov) (c0 - c1) + v\n                  else if ((details::e_add == o0) && (details::e_sub == o1))\n                  {\n                     exprtk_debug((\"(c0) + (v - c1) --> (cov) (c0 - c1) + v\\n\"));\n\n                     
return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::add_op<Type> > >(c0 - c1, v);\n                  }\n                  // (c0) - (v + c1) --> (cov) (c0 - c1) - v\n                  else if ((details::e_sub == o0) && (details::e_add == o1))\n                  {\n                     exprtk_debug((\"(c0) - (v + c1) --> (cov) (c0 - c1) - v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::sub_op<Type> > >(c0 - c1, v);\n                  }\n                  // (c0) - (v - c1) --> (cov) (c0 + c1) - v\n                  else if ((details::e_sub == o0) && (details::e_sub == o1))\n                  {\n                     exprtk_debug((\"(c0) - (v - c1) --> (cov) (c0 + c1) - v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::sub_op<Type> > >(c0 + c1, v);\n                  }\n                  // (c0) * (v * c1) --> (voc) v * (c0 * c1)\n                  else if ((details::e_mul == o0) && (details::e_mul == o1))\n                  {\n                     exprtk_debug((\"(c0) * (v * c1) --> (voc) v * (c0 * c1)\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::mul_op<Type> > >(c0 * c1, v);\n                  }\n                  // (c0) * (v / c1) --> (cov) (c0 / c1) * v\n                  else if ((details::e_mul == o0) && (details::e_div == o1))\n                  {\n                     exprtk_debug((\"(c0) * (v / c1) --> (cov) (c0 / c1) * v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::mul_op<Type> > >(c0 / c1, v);\n                  }\n                  
// (c0) / (v * c1) --> (cov) (c0 / c1) / v\n                  else if ((details::e_div == o0) && (details::e_mul == o1))\n                  {\n                     exprtk_debug((\"(c0) / (v * c1) --> (cov) (c0 / c1) / v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::div_op<Type> > >(c0 / c1, v);\n                  }\n                  // (c0) / (v / c1) --> (cov) (c0 * c1) / v\n                  else if ((details::e_div == o0) && (details::e_div == o1))\n                  {\n                     exprtk_debug((\"(c0) / (v / c1) --> (cov) (c0 * c1) / v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::div_op<Type> > >(c0 * c1, v);\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf3ext_expression::template compile<ctype, vtype, ctype>\n                     (expr_gen, id(expr_gen, o0, o1), c0, v, c1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), c0, v, c1, f0, f1);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0, const details::operator_type o1)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"t)\");\n            }\n         };\n\n         struct synthesize_cocov_expression0\n         {\n            typedef typename cocov_t::type0 node_type;\n            
static inline expression_node_ptr process(expression_generator<Type>&, const details::operator_type&, expression_node_ptr (&)[2])\n            {\n               // (c0 o0 c1) o1 (v) - Not possible.\n               return error_node();\n            }\n         };\n\n         struct synthesize_cocov_expression1\n         {\n            typedef typename cocov_t::type1 node_type;\n            typedef typename cocov_t::sf3_type sf3_type;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (c0) o0 (c1 o1 v)\n               const details::cov_base_node<Type>* cov = static_cast<details::cov_base_node<Type>*>(branch[1]);\n               const Type  c0 = static_cast<details::literal_node<Type>*>(branch[0])->value();\n               const Type  c1 = cov->c();\n               const Type&  v = cov->v();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = cov->operation();\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (c0) + (c1 + v) --> (cov) (c0 + c1) + v\n                  if ((details::e_add == o0) && (details::e_add == o1))\n                  {\n                     exprtk_debug((\"(c0) + (c1 + v) --> (cov) (c0 + c1) + v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               
template allocate_cr<typename details::cov_node<Type,details::add_op<Type> > >(c0 + c1, v);\n                  }\n                  // (c0) + (c1 - v) --> (cov) (c0 + c1) - v\n                  else if ((details::e_add == o0) && (details::e_sub == o1))\n                  {\n                     exprtk_debug((\"(c0) + (c1 - v) --> (cov) (c0 + c1) - v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::sub_op<Type> > >(c0 + c1, v);\n                  }\n                  // (c0) - (c1 + v) --> (cov) (c0 - c1) - v\n                  else if ((details::e_sub == o0) && (details::e_add == o1))\n                  {\n                     exprtk_debug((\"(c0) - (c1 + v) --> (cov) (c0 - c1) - v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::sub_op<Type> > >(c0 - c1, v);\n                  }\n                  // (c0) - (c1 - v) --> (cov) (c0 - c1) + v\n                  else if ((details::e_sub == o0) && (details::e_sub == o1))\n                  {\n                     exprtk_debug((\"(c0) - (c1 - v) --> (cov) (c0 - c1) + v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::add_op<Type> > >(c0 - c1, v);\n                  }\n                  // (c0) * (c1 * v) --> (cov) (c0 * c1) * v\n                  else if ((details::e_mul == o0) && (details::e_mul == o1))\n                  {\n                     exprtk_debug((\"(c0) * (c1 * v) --> (cov) (c0 * c1) * v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::mul_op<Type> > >(c0 * c1, v);\n                  }\n                  // (c0) * (c1 / v) --> (cov) (c0 * c1) / v\n                  else 
if ((details::e_mul == o0) && (details::e_div == o1))\n                  {\n                     exprtk_debug((\"(c0) * (c1 / v) --> (cov) (c0 * c1) / v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::div_op<Type> > >(c0 * c1, v);\n                  }\n                  // (c0) / (c1 * v) --> (cov) (c0 / c1) / v\n                  else if ((details::e_div == o0) && (details::e_mul == o1))\n                  {\n                     exprtk_debug((\"(c0) / (c1 * v) --> (cov) (c0 / c1) / v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::div_op<Type> > >(c0 / c1, v);\n                  }\n                  // (c0) / (c1 / v) --> (cov) (c0 / c1) * v\n                  else if ((details::e_div == o0) && (details::e_div == o1))\n                  {\n                     exprtk_debug((\"(c0) / (c1 / v) --> (cov) (c0 / c1) * v\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_cr<typename details::cov_node<Type,details::mul_op<Type> > >(c0 / c1, v);\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf3ext_expression::template compile<ctype, ctype, vtype>\n                     (expr_gen, id(expr_gen, o0, o1), c0, c1, v, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), c0, c1, v, f0, f1);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen, const 
details::operator_type o0, const details::operator_type o1)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"t)\");\n            }\n         };\n\n         struct synthesize_vococ_expression0\n         {\n            typedef typename vococ_t::type0 node_type;\n            typedef typename vococ_t::sf3_type sf3_type;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (v o0 c0) o1 (c1)\n               const details::voc_base_node<Type>* voc = static_cast<details::voc_base_node<Type>*>(branch[0]);\n               const Type&  v = voc->v();\n               const Type& c0 = voc->c();\n               const Type& c1 = static_cast<details::literal_node<Type>*>(branch[1])->value();\n               const details::operator_type o0 = voc->operation();\n               const details::operator_type o1 = operation;\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (v + c0) + c1 --> (voc) v + (c0 + c1)\n                  if ((details::e_add == o0) && (details::e_add == o1))\n                  {\n                     exprtk_debug((\"(v + c0) + c1 --> (voc) v + (c0 + c1)\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_rc<typename 
details::voc_node<Type,details::add_op<Type> > >(v, c0 + c1);\n                  }\n                  // (v + c0) - c1 --> (voc) v + (c0 - c1)\n                  else if ((details::e_add == o0) && (details::e_sub == o1))\n                  {\n                     exprtk_debug((\"(v + c0) - c1 --> (voc) v + (c0 - c1)\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_rc<typename details::voc_node<Type,details::add_op<Type> > >(v, c0 - c1);\n                  }\n                  // (v - c0) + c1 --> (voc) v + (c1 - c0)\n                  else if ((details::e_sub == o0) && (details::e_add == o1))\n                  {\n                     exprtk_debug((\"(v - c0) + c1 --> (voc) v + (c1 - c0)\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_rc<typename details::voc_node<Type,details::add_op<Type> > >(v, c1 - c0);\n                  }\n                  // (v - c0) - c1 --> (voc) v - (c0 + c1)\n                  else if ((details::e_sub == o0) && (details::e_sub == o1))\n                  {\n                     exprtk_debug((\"(v - c0) - c1 --> (voc) v - (c0 + c1)\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_rc<typename details::voc_node<Type,details::sub_op<Type> > >(v, c0 + c1);\n                  }\n                  // (v * c0) * c1 --> (voc) v * (c0 * c1)\n                  else if ((details::e_mul == o0) && (details::e_mul == o1))\n                  {\n                     exprtk_debug((\"(v * c0) * c1 --> (voc) v * (c0 * c1)\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_rc<typename details::voc_node<Type,details::mul_op<Type> > >(v, c0 * c1);\n                  }\n                  // (v * c0) / c1 --> (voc) v * (c0 / c1)\n                  else if ((details::e_mul == o0) && (details::e_div == 
o1))\n                  {\n                     exprtk_debug((\"(v * c0) / c1 --> (voc) v * (c0 / c1)\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_rc<typename details::voc_node<Type,details::mul_op<Type> > >(v, c0 / c1);\n                  }\n                  // (v / c0) * c1 --> (voc) v * (c1 / c0)\n                  else if ((details::e_div == o0) && (details::e_mul == o1))\n                  {\n                     exprtk_debug((\"(v / c0) * c1 --> (voc) v * (c1 / c0)\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_rc<typename details::voc_node<Type,details::mul_op<Type> > >(v, c1 / c0);\n                  }\n                  // (v / c0) / c1 --> (voc) v / (c0 * c1)\n                  else if ((details::e_div == o0) && (details::e_div == o1))\n                  {\n                     exprtk_debug((\"(v / c0) / c1 --> (voc) v / (c0 * c1)\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_rc<typename details::voc_node<Type,details::div_op<Type> > >(v, c0 * c1);\n                  }\n                  // (v ^ c0) ^ c1 --> (voc) v ^ (c0 * c1)\n                  else if ((details::e_pow == o0) && (details::e_pow == o1))\n                  {\n                     exprtk_debug((\"(v ^ c0) ^ c1 --> (voc) v ^ (c0 * c1)\\n\"));\n\n                     return expr_gen.node_allocator_->\n                               template allocate_rc<typename details::voc_node<Type,details::pow_op<Type> > >(v, c0 * c1);\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf3ext_expression::template compile<vtype, ctype, ctype>\n                     (expr_gen, id(expr_gen, o0, o1), v, c0, c1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if 
(!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), v, c0, c1, f0, f1);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0, const details::operator_type o1)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t\");\n            }\n         };\n\n         struct synthesize_vococ_expression1\n         {\n            typedef typename vococ_t::type0 node_type;\n\n            static inline expression_node_ptr process(expression_generator<Type>&, const details::operator_type&, expression_node_ptr (&)[2])\n            {\n               // (v) o0 (c0 o1 c1) - Not possible.\n               exprtk_debug((\"(v) o0 (c0 o1 c1) - Not possible.\\n\"));\n               return error_node();\n            }\n         };\n\n         struct synthesize_vovovov_expression0\n         {\n            typedef typename vovovov_t::type0 node_type;\n            typedef typename vovovov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (v0 o0 v1) o1 (v2 o2 v3)\n               const details::vov_base_node<Type>* vov0 = static_cast<details::vov_base_node<Type>*>(branch[0]);\n               const 
details::vov_base_node<Type>* vov1 = static_cast<details::vov_base_node<Type>*>(branch[1]);\n               const Type& v0 = vov0->v0();\n               const Type& v1 = vov0->v1();\n               const Type& v2 = vov1->v0();\n               const Type& v3 = vov1->v1();\n               const details::operator_type o0 = vov0->operation();\n               const details::operator_type o1 = operation;\n               const details::operator_type o2 = vov1->operation();\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (v0 / v1) * (v2 / v3) --> (vovovov) (v0 * v2) / (v1 * v3)\n                  if ((details::e_div == o0) && (details::e_mul == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<vtype,vtype,vtype,vtype>(expr_gen, \"(t*t)/(t*t)\", v0, v2, v1, v3, result);\n\n                     exprtk_debug((\"(v0 / v1) * (v2 / v3) --> (vovovov) (v0 * v2) / (v1 * v3)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (v0 / v1) / (v2 / v3) --> (vovovov) (v0 * v3) / (v1 * v2)\n                  else if ((details::e_div == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<vtype,vtype,vtype,vtype>(expr_gen, \"(t*t)/(t*t)\", v0, v3, v1, v2, result);\n\n                     exprtk_debug((\"(v0 / v1) / (v2 / v3) --> (vovovov) (v0 * v3) / (v1 * v2)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 + v1) / (v2 / v3) --> (vovovov) (v0 + v1) * (v3 / v2)\n                  else if ((details::e_add == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<vtype,vtype,vtype,vtype>(expr_gen, \"(t+t)*(t/t)\", v0, v1, v3, v2, result);\n\n                     exprtk_debug((\"(v0 + v1) / (v2 / v3) --> (vovovov) (v0 + v1) * (v3 / v2)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 - v1) / (v2 / v3) --> (vovovov) (v0 - v1) * (v3 / v2)\n                  else if ((details::e_sub == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<vtype,vtype,vtype,vtype>(expr_gen, \"(t-t)*(t/t)\", v0, v1, v3, v2, result);\n\n                     exprtk_debug((\"(v0 - v1) / (v2 / v3) --> (vovovov) (v0 - v1) * (v3 / v2)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (v0 * v1) / (v2 / v3) --> (vovovov) ((v0 * v1) * v3) / v2\n                  else if ((details::e_mul == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<vtype,vtype,vtype,vtype>(expr_gen, \"((t*t)*t)/t\", v0, v1, v3, v2, result);\n\n                     exprtk_debug((\"(v0 * v1) / (v2 / v3) --> (vovovov) ((v0 * v1) * v3) / v2\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, v2, v3,result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, v2, v3, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_vovovoc_expression0\n         {\n        
    typedef typename vovovoc_t::type0 node_type;\n            typedef typename vovovoc_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (v0 o0 v1) o1 (v2 o2 c)\n               const details::vov_base_node<Type>* vov = static_cast<details::vov_base_node<Type>*>(branch[0]);\n               const details::voc_base_node<Type>* voc = static_cast<details::voc_base_node<Type>*>(branch[1]);\n               const Type& v0 = vov->v0();\n               const Type& v1 = vov->v1();\n               const Type& v2 = voc->v ();\n               const Type   c = voc->c ();\n               const details::operator_type o0 = vov->operation();\n               const details::operator_type o1 = operation;\n               const details::operator_type o2 = voc->operation();\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (v0 / v1) * (v2 / c) --> (vovovoc) (v0 * v2) / (v1 * c)\n                  if ((details::e_div == o0) && (details::e_mul == o1) && (details::e_div == o2))\n                  {\n                   
  const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<vtype,vtype,vtype,ctype>(expr_gen, \"(t*t)/(t*t)\", v0, v2, v1, c, result);\n\n                     exprtk_debug((\"(v0 / v1) * (v2 / c) --> (vovovoc) (v0 * v2) / (v1 * c)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 / v1) / (v2 / c) --> (vocovov) (v0 * c) / (v1 * v2)\n                  if ((details::e_div == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<vtype,ctype,vtype,vtype>(expr_gen, \"(t*t)/(t*t)\", v0, c, v1, v2, result);\n\n                     exprtk_debug((\"(v0 / v1) / (v2 / c) --> (vocovov) (v0 * c) / (v1 * v2)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, v2, c, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, v2, c, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const 
details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_vovocov_expression0\n         {\n            typedef typename vovocov_t::type0 node_type;\n            typedef typename vovocov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (v0 o0 v1) o1 (c o2 v2)\n               const details::vov_base_node<Type>* vov = static_cast<details::vov_base_node<Type>*>(branch[0]);\n               const details::cov_base_node<Type>* cov = static_cast<details::cov_base_node<Type>*>(branch[1]);\n               const Type& v0 = vov->v0();\n               const Type& v1 = vov->v1();\n               const Type& v2 = cov->v ();\n               const Type   c = cov->c ();\n               const details::operator_type o0 = vov->operation();\n               const details::operator_type o1 = operation;\n               const details::operator_type o2 = cov->operation();\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               
details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (v0 / v1) * (c / v2) --> (vocovov) (v0 * c) / (v1 * v2)\n                  if ((details::e_div == o0) && (details::e_mul == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<vtype,ctype,vtype,vtype>(expr_gen, \"(t*t)/(t*t)\", v0, c, v1, v2, result);\n\n                     exprtk_debug((\"(v0 / v1) * (c / v2) --> (vocovov) (v0 * c) / (v1 * v2)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 / v1) / (c / v2) --> (vovovoc) (v0 * v2) / (v1 * c)\n                  if ((details::e_div == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<vtype,vtype,vtype,ctype>(expr_gen, \"(t*t)/(t*t)\", v0, v2, v1, c, result);\n\n                     exprtk_debug((\"(v0 / v1) / (c / v2) --> (vovovoc) (v0 * v2) / (v1 * c)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, c, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, c, v2, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_vocovov_expression0\n         {\n            typedef typename vocovov_t::type0 node_type;\n            typedef typename vocovov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (v0 o0 c) o1 (v1 o2 
v2)\n               const details::voc_base_node<Type>* voc = static_cast<details::voc_base_node<Type>*>(branch[0]);\n               const details::vov_base_node<Type>* vov = static_cast<details::vov_base_node<Type>*>(branch[1]);\n               const Type   c = voc->c ();\n               const Type& v0 = voc->v ();\n               const Type& v1 = vov->v0();\n               const Type& v2 = vov->v1();\n               const details::operator_type o0 = voc->operation();\n               const details::operator_type o1 = operation;\n               const details::operator_type o2 = vov->operation();\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (v0 / c) * (v1 / v2) --> (vovocov) (v0 * v1) / (c * v2)\n                  if ((details::e_div == o0) && (details::e_mul == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<vtype,vtype,ctype,vtype>(expr_gen, \"(t*t)/(t*t)\", v0, v1, c, v2, result);\n\n                     exprtk_debug((\"(v0 / c) * (v1 / v2) --> (vovocov) (v0 * v1) / (c * v2)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (v0 / c) / (v1 / v2) --> (vovocov) (v0 * v2) / (c * v1)\n                  if ((details::e_div == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<vtype,vtype,ctype,vtype>(expr_gen, \"(t*t)/(t*t)\", v0, v2, c, v1, result);\n\n                     exprtk_debug((\"(v0 / c) / (v1 / v2) --> (vovocov) (v0 * v2) / (c * v1)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, c, v1, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), v0, c, v1, v2, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_covovov_expression0\n         {\n            
typedef typename covovov_t::type0 node_type;\n            typedef typename covovov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (c o0 v0) o1 (v1 o2 v2)\n               const details::cov_base_node<Type>* cov = static_cast<details::cov_base_node<Type>*>(branch[0]);\n               const details::vov_base_node<Type>* vov = static_cast<details::vov_base_node<Type>*>(branch[1]);\n               const Type   c = cov->c ();\n               const Type& v0 = cov->v ();\n               const Type& v1 = vov->v0();\n               const Type& v2 = vov->v1();\n               const details::operator_type o0 = cov->operation();\n               const details::operator_type o1 = operation;\n               const details::operator_type o2 = vov->operation();\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (c / v0) * (v1 / v2) --> (covovov) (c * v1) / (v0 * v2)\n                  if ((details::e_div == o0) && (details::e_mul == o1) && (details::e_div == o2))\n                  {\n                     
const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<ctype,vtype,vtype,vtype>(expr_gen, \"(t*t)/(t*t)\", c, v1, v0, v2, result);\n\n                     exprtk_debug((\"(c / v0) * (v1 / v2) --> (covovov) (c * v1) / (v0 * v2)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (c / v0) / (v1 / v2) --> (covovov) (c * v2) / (v0 * v1)\n                  if ((details::e_div == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<ctype,vtype,vtype,vtype>(expr_gen, \"(t*t)/(t*t)\", c, v2, v0, v1, result);\n\n                     exprtk_debug((\"(c / v0) / (v1 / v2) --> (covovov) (c * v2) / (v0 * v1)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), c, v0, v1, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), c, v0, v1, v2, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type 
o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_covocov_expression0\n         {\n            typedef typename covocov_t::type0 node_type;\n            typedef typename covocov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (c0 o0 v0) o1 (c1 o2 v1)\n               const details::cov_base_node<Type>* cov0 = static_cast<details::cov_base_node<Type>*>(branch[0]);\n               const details::cov_base_node<Type>* cov1 = static_cast<details::cov_base_node<Type>*>(branch[1]);\n               const Type  c0 = cov0->c();\n               const Type& v0 = cov0->v();\n               const Type  c1 = cov1->c();\n               const Type& v1 = cov1->v();\n               const details::operator_type o0 = cov0->operation();\n               const details::operator_type o1 = operation;\n               const details::operator_type o2 = cov1->operation();\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               
expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (c0 + v0) + (c1 + v1) --> (covov) (c0 + c1) + v0 + v1\n                  if ((details::e_add == o0) && (details::e_add == o1) && (details::e_add == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t+t)+t\", (c0 + c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 + v0) + (c1 + v1) --> (covov) (c0 + c1) + v0 + v1\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (c0 + v0) - (c1 + v1) --> (covov) (c0 - c1) + v0 - v1\n                  else if ((details::e_add == o0) && (details::e_sub == o1) && (details::e_add == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t+t)-t\", (c0 - c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 + v0) - (c1 + v1) --> (covov) (c0 - c1) + v0 - v1\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (c0 - v0) - (c1 - v1) --> (covov) (c0 - c1) - v0 + v1\n                  else if ((details::e_sub == o0) && (details::e_sub == o1) && (details::e_sub == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t-t)+t\", (c0 - c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 - v0) - (c1 - v1) --> (covov) (c0 - c1) - v0 + v1\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (c0 * v0) * (c1 * v1) --> (covov) (c0 * c1) * v0 * v1\n                  else if ((details::e_mul == o0) && (details::e_mul == o1) && (details::e_mul == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)*t\", (c0 * c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 * v0) * (c1 * v1) --> (covov) (c0 * c1) * v0 * v1\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (c0 * v0) / (c1 * v1) --> (covov) (c0 / c1) * (v0 / v1)\n                  else if ((details::e_mul == o0) && (details::e_div == o1) && (details::e_mul == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)/t\", (c0 / c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 * v0) / (c1 * v1) --> (covov) (c0 / c1) * (v0 / v1)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (c0 / v0) * (c1 / v1) --> (covov) (c0 * c1) / (v0 * v1)\n                  else if ((details::e_div == o0) && (details::e_mul == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"t/(t*t)\", (c0 * c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 / v0) * (c1 / v1) --> (covov) (c0 * c1) / (v0 * v1)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (c0 / v0) / (c1 / v1) --> (covov) ((c0 / c1) * v1) / v0\n                  else if ((details::e_div == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)/t\", (c0 / c1), v1, v0, result);\n\n                     exprtk_debug((\"(c0 / v0) / (c1 / v1) --> (covov) ((c0 / c1) * v1) / v0\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (c0 * v0) / (c1 / v1) --> (covov) (c0 / c1) * (v0 * v1)\n                  else if ((details::e_mul == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"t*(t*t)\", (c0 / c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 * v0) / (c1 / v1) --> (covov) (c0 / c1) * (v0 * v1)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (c0 / v0) / (c1 * v1) --> (covov) (c0 / c1) / (v0 * v1)\n                  else if ((details::e_div == o0) && (details::e_div == o1) && (details::e_mul == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"t/(t*t)\", (c0 / c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 / v0) / (c1 * v1) --> (covov) (c0 / c1) / (v0 * v1)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (c * v0) +/- (c * v1) --> (covov) c * (v0 +/- v1)\n                  else if (\n                            (std::equal_to<T>()(c0,c1)) &&\n                            (details::e_mul == o0)      &&\n                            (details::e_mul == o2)      &&\n                            (\n                              (details::e_add == o1) ||\n                              (details::e_sub == o1)\n                            )\n                          )\n                  {\n                     std::string specfunc;\n\n                     switch (o1)\n                     {\n                        case details::e_add : specfunc = \"t*(t+t)\"; break;\n                        case details::e_sub : specfunc = \"t*(t-t)\"; break;\n                        default             : return error_node();\n                     }\n\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype, vtype, vtype>(expr_gen, specfunc, c0, v0, v1, result);\n\n                     exprtk_debug((\"(c * v0) +/- (c * v1) --> (covov) c * (v0 +/- v1)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), c0, v0, c1, v1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), c0, v0, c1, v1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_vocovoc_expression0\n         {\n            typedef typename vocovoc_t::type0 node_type;\n            typedef typename vocovoc_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (v0 o0 c0) o1 (v1 o2 
c1)\n               const details::voc_base_node<Type>* voc0 = static_cast<details::voc_base_node<Type>*>(branch[0]);\n               const details::voc_base_node<Type>* voc1 = static_cast<details::voc_base_node<Type>*>(branch[1]);\n               const Type  c0 = voc0->c();\n               const Type& v0 = voc0->v();\n               const Type  c1 = voc1->c();\n               const Type& v1 = voc1->v();\n               const details::operator_type o0 = voc0->operation();\n               const details::operator_type o1 = operation;\n               const details::operator_type o2 = voc1->operation();\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (v0 + c0) + (v1 + c1) --> (covov) (c0 + c1) + v0 + v1\n                  if ((details::e_add == o0) && (details::e_add == o1) && (details::e_add == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t+t)+t\", (c0 + c1), v0, v1, result);\n\n                     exprtk_debug((\"(v0 + c0) + (v1 + c1) --> (covov) (c0 + c1) + v0 + v1\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (v0 + c0) - (v1 + c1) --> (covov) (c0 - c1) + v0 - v1\n                  else if ((details::e_add == o0) && (details::e_sub == o1) && (details::e_add == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t+t)-t\", (c0 - c1), v0, v1, result);\n\n                     exprtk_debug((\"(v0 + c0) - (v1 + c1) --> (covov) (c0 - c1) + v0 - v1\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 - c0) - (v1 - c1) --> (covov) (c1 - c0) + v0 - v1\n                  else if ((details::e_sub == o0) && (details::e_sub == o1) && (details::e_sub == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t+t)-t\", (c1 - c0), v0, v1, result);\n\n                     exprtk_debug((\"(v0 - c0) - (v1 - c1) --> (covov) (c1 - c0) + v0 - v1\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 * c0) * (v1 * c1) --> (covov) (c0 * c1) * v0 * v1\n                  else if ((details::e_mul == o0) && (details::e_mul == o1) && (details::e_mul == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)*t\", (c0 * c1), v0, v1, result);\n\n                     exprtk_debug((\"(v0 * c0) * (v1 * c1) --> (covov) (c0 * c1) * v0 * v1\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (v0 * c0) / (v1 * c1) --> (covov) (c0 / c1) * (v0 / v1)\n                  else if ((details::e_mul == o0) && (details::e_div == o1) && (details::e_mul == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)/t\", (c0 / c1), v0, v1, result);\n\n                     exprtk_debug((\"(v0 * c0) / (v1 * c1) --> (covov) (c0 / c1) * (v0 / v1)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 / c0) * (v1 / c1) --> (covov) (1 / (c0 * c1)) * v0 * v1\n                  else if ((details::e_div == o0) && (details::e_mul == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)*t\", Type(1) / (c0 * c1), v0, v1, result);\n\n                     exprtk_debug((\"(v0 / c0) * (v1 / c1) --> (covov) (1 / (c0 * c1)) * v0 * v1\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 / c0) / (v1 / c1) --> (covov) ((c1 / c0) * v0) / v1\n                  else if ((details::e_div == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)/t\", (c1 / c0), v0, v1, result);\n\n                     exprtk_debug((\"(v0 / c0) / (v1 / c1) --> (covov) ((c1 / c0) * v0) / v1\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (v0 * c0) / (v1 / c1) --> (covov) (c0 * c1) * (v0 / v1)\n                  else if ((details::e_mul == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"t*(t/t)\", (c0 * c1), v0, v1, result);\n\n                     exprtk_debug((\"(v0 * c0) / (v1 / c1) --> (covov) (c0 * c1) * (v0 / v1)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 / c0) / (v1 * c1) --> (covov) (1 / (c0 * c1)) * v0 / v1\n                  else if ((details::e_div == o0) && (details::e_div == o1) && (details::e_mul == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"t*(t/t)\", Type(1) / (c0 * c1), v0, v1, result);\n\n                     exprtk_debug((\"(v0 / c0) / (v1 * c1) --> (covov) (1 / (c0 * c1)) * v0 / v1\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 / c0) * (v1 + c1) --> (vocovoc) (v0 * (1 / c0)) * (v1 + c1)\n                  else if ((details::e_div == o0) && (details::e_mul == o1) && (details::e_add == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<vtype,ctype,vtype,ctype>(expr_gen, \"(t*t)*(t+t)\", v0, T(1) / c0, v1, c1, result);\n\n                     exprtk_debug((\"(v0 / c0) * (v1 + c1) --> (vocovoc) (v0 * (1 / c0)) * (v1 + c1)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (v0 / c0) * (v1 - c1) --> (vocovoc) (v0 * (1 / c0)) * (v1 - c1)\n                  else if ((details::e_div == o0) && (details::e_mul == o1) && (details::e_sub == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf4ext_expression::\n                           template compile<vtype,ctype,vtype,ctype>(expr_gen, \"(t*t)*(t-t)\", v0, T(1) / c0, v1, c1, result);\n\n                     exprtk_debug((\"(v0 / c0) * (v1 - c1) --> (vocovoc) (v0 * (1 / c0)) * (v1 - c1)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 * c) +/- (v1 * c) --> (covov) c * (v0 +/- v1)\n                  else if (\n                            (std::equal_to<T>()(c0,c1)) &&\n                            (details::e_mul == o0)      &&\n                            (details::e_mul == o2)      &&\n                            (\n                              (details::e_add == o1) ||\n                              (details::e_sub == o1)\n                            )\n                          )\n                  {\n                     std::string specfunc;\n\n                     switch (o1)\n                     {\n                        case details::e_add : specfunc = \"t*(t+t)\"; break;\n                        case details::e_sub : specfunc = \"t*(t-t)\"; break;\n                        default             : return error_node();\n                     }\n\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, specfunc, c0, v0, v1, result);\n\n                     exprtk_debug((\"(v0 * c) +/- (v1 * c) --> (covov) c * (v0 +/- v1)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (v0 / c) +/- (v1 / c) --> (vovoc) (v0 +/- v1) / c\n                  else if (\n                            (std::equal_to<T>()(c0,c1)) &&\n                            (details::e_div == o0)      &&\n                            (details::e_div == o2)      &&\n                            (\n                              (details::e_add == o1) ||\n                              (details::e_sub == o1)\n                            )\n                          )\n                  {\n                     std::string specfunc;\n\n                     switch (o1)\n                     {\n                        case details::e_add : specfunc = \"(t+t)/t\"; break;\n                        case details::e_sub : specfunc = \"(t-t)/t\"; break;\n                        default             : return error_node();\n                     }\n\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<vtype,vtype,ctype>(expr_gen, specfunc, v0, v1, c0, result);\n\n                     exprtk_debug((\"(v0 / c) +/- (v1 / c) --> (vovoc) (v0 +/- v1) / c\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, c0, v1, c1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), v0, c0, v1, c1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_covovoc_expression0\n         {\n            typedef typename covovoc_t::type0 node_type;\n            typedef typename covovoc_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (c0 o0 v0) o1 (v1 o2 
c1)\n               const details::cov_base_node<Type>* cov = static_cast<details::cov_base_node<Type>*>(branch[0]);\n               const details::voc_base_node<Type>* voc = static_cast<details::voc_base_node<Type>*>(branch[1]);\n               const Type  c0 = cov->c();\n               const Type& v0 = cov->v();\n               const Type  c1 = voc->c();\n               const Type& v1 = voc->v();\n               const details::operator_type o0 = cov->operation();\n               const details::operator_type o1 = operation;\n               const details::operator_type o2 = voc->operation();\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (c0 + v0) + (v1 + c1) --> (covov) (c0 + c1) + v0 + v1\n                  if ((details::e_add == o0) && (details::e_add == o1) && (details::e_add == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t+t)+t\", (c0 + c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 + v0) + (v1 + c1) --> (covov) (c0 + c1) + v0 + v1\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (c0 + v0) - (v1 + c1) --> (covov) (c0 - c1) + v0 - v1\n                  else if ((details::e_add == o0) && (details::e_sub == o1) && (details::e_add == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t+t)-t\", (c0 - c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 + v0) - (v1 + c1) --> (covov) (c0 - c1) + v0 - v1\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (c0 - v0) - (v1 - c1) --> (covov) (c0 + c1) - v0 - v1\n                  else if ((details::e_sub == o0) && (details::e_sub == o1) && (details::e_sub == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"t-(t+t)\", (c0 + c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 - v0) - (v1 - c1) --> (covov) (c0 + c1) - v0 - v1\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (c0 * v0) * (v1 * c1) --> (covov) (c0 * c1) * v0 * v1\n                  else if ((details::e_mul == o0) && (details::e_mul == o1) && (details::e_mul == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)*t\", (c0 * c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 * v0) * (v1 * c1) --> (covov) (c0 * c1) * v0 * v1\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (c0 * v0) / (v1 * c1) --> (covov) (c0 / c1) * (v0 / v1)\n                  else if ((details::e_mul == o0) && (details::e_div == o1) && (details::e_mul == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)/t\", (c0 / c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 * v0) / (v1 * c1) --> (covov) (c0 / c1) * (v0 / v1)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (c0 / v0) * (v1 / c1) --> (covov) (c0 / c1) * (v1 / v0)\n                  else if ((details::e_div == o0) && (details::e_mul == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"t*(t/t)\", (c0 / c1), v1, v0, result);\n\n                     exprtk_debug((\"(c0 / v0) * (v1 / c1) --> (covov) (c0 / c1) * (v1 / v0)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (c0 / v0) / (v1 / c1) --> (covov) (c0 * c1) / (v0 * v1)\n                  else if ((details::e_div == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"t/(t*t)\", (c0 * c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 / v0) / (v1 / c1) --> (covov) (c0 * c1) / (v0 * v1)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (c0 * v0) / (v1 / c1) --> (covov) (c0 * c1) * (v0 / v1)\n                  else if ((details::e_mul == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)/t\", (c0 * c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 * v0) / (v1 / c1) --> (covov) (c0 * c1) * (v0 / v1)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (c0 / v0) / (v1 * c1) --> (covov) (c0 / c1) / (v0 * v1)\n                  else if ((details::e_div == o0) && (details::e_div == o1) && (details::e_mul == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"t/(t*t)\", (c0 / c1), v0, v1, result);\n\n                     exprtk_debug((\"(c0 / v0) / (v1 * c1) --> (covov) (c0 / c1) / (v0 * v1)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (c * v0) +/- (v1 * c) --> (covov) c * (v0 +/- v1)\n                  else if (\n                            (std::equal_to<T>()(c0,c1)) &&\n                            (details::e_mul == o0)      &&\n                            (details::e_mul == o2)      &&\n                            (\n                              (details::e_add == o1) ||\n                              (details::e_sub == o1)\n                            )\n                          )\n                  {\n                     std::string specfunc;\n\n                     switch (o1)\n                     {\n                        case details::e_add : specfunc = \"t*(t+t)\"; break;\n                        case details::e_sub : specfunc = \"t*(t-t)\"; break;\n                        default             : return error_node();\n                     }\n\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, specfunc, c0, v0, v1, result);\n\n                     exprtk_debug((\"(c * v0) +/- (v1 * c) --> (covov) c * (v0 +/- v1)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), c0, v0, v1, c1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), c0, v0, v1, c1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_vococov_expression0\n         {\n            typedef typename vococov_t::type0 node_type;\n            typedef typename vococov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // (v0 o0 c0) o1 (c1 o2 
v1)\n               const details::voc_base_node<Type>* voc = static_cast<details::voc_base_node<Type>*>(branch[0]);\n               const details::cov_base_node<Type>* cov = static_cast<details::cov_base_node<Type>*>(branch[1]);\n               const Type  c0 = voc->c();\n               const Type& v0 = voc->v();\n               const Type  c1 = cov->c();\n               const Type& v1 = cov->v();\n               const details::operator_type o0 = voc->operation();\n               const details::operator_type o1 = operation;\n               const details::operator_type o2 = cov->operation();\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (expr_gen.parser_->settings_.strength_reduction_enabled())\n               {\n                  // (v0 + c0) + (c1 + v1) --> (covov) (c0 + c1) + v0 + v1\n                  if ((details::e_add == o0) && (details::e_add == o1) && (details::e_add == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t+t)+t\", (c0 + c1), v0, v1, result);\n\n                     exprtk_debug((\"(v0 + c0) + (c1 + v1) --> (covov) (c0 + c1) + v0 + v1\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (v0 + c0) - (c1 + v1) --> (covov) (c0 - c1) + v0 - v1\n                  else if ((details::e_add == o0) && (details::e_sub == o1) && (details::e_add == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t+t)-t\", (c0 - c1), v0, v1, result);\n\n                     exprtk_debug((\"(v0 + c0) - (c1 + v1) --> (covov) (c0 - c1) + v0 - v1\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 - c0) - (c1 - v1) --> (vovoc) v0 + v1 - (c1 + c0)\n                  else if ((details::e_sub == o0) && (details::e_sub == o1) && (details::e_sub == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<vtype,vtype,ctype>(expr_gen, \"(t+t)-t\", v0, v1, (c1 + c0), result);\n\n                     exprtk_debug((\"(v0 - c0) - (c1 - v1) --> (vovoc) v0 + v1 - (c1 + c0)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 * c0) * (c1 * v1) --> (covov) (c0 * c1) * v0 * v1\n                  else if ((details::e_mul == o0) && (details::e_mul == o1) && (details::e_mul == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)*t\", (c0 * c1), v0, v1, result);\n\n                     exprtk_debug((\"(v0 * c0) * (c1 * v1) --> (covov) (c0 * c1) * v0 * v1\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (v0 * c0) / (c1 * v1) --> (covov) (c0 / c1) * (v0 / v1)\n                  else if ((details::e_mul == o0) && (details::e_div == o1) && (details::e_mul == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)/t\", (c0 / c1), v0, v1, result);\n\n                     exprtk_debug((\"(v0 * c0) / (c1 * v1) --> (covov) (c0 / c1) * (v0 / v1)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 / c0) * (c1 / v1) --> (covov) (c1 / c0) * (v0 / v1)\n                  else if ((details::e_div == o0) && (details::e_mul == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)/t\", (c1 / c0), v0, v1, result);\n\n                     exprtk_debug((\"(v0 / c0) * (c1 / v1) --> (covov) (c1 / c0) * (v0 / v1)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 * c0) / (c1 / v1) --> (covov) (c0 / c1) * (v0 * v1)\n                  else if ((details::e_mul == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)*t\", (c0 / c1), v0, v1, result);\n\n                     exprtk_debug((\"(v0 * c0) / (c1 / v1) --> (covov) (c0 / c1) * (v0 * v1)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (v0 / c0) / (c1 * v1) --> (covov) (1 / (c0 * c1)) * (v0 / v1)\n                  else if ((details::e_div == o0) && (details::e_div == o1) && (details::e_mul == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, \"(t*t)/t\", Type(1) / (c0 * c1), v0, v1, result);\n\n                     exprtk_debug((\"(v0 / c0) / (c1 * v1) --> (covov) (1 / (c0 * c1)) * (v0 / v1)\\n\"));\n\n                     return (synthesis_result) ? result : error_node();\n                  }\n                  // (v0 / c0) / (c1 / v1) --> (vovoc) (v0 * v1) * (1 / (c0 * c1))\n                  else if ((details::e_div == o0) && (details::e_div == o1) && (details::e_div == o2))\n                  {\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<vtype,vtype,ctype>(expr_gen, \"(t*t)*t\", v0, v1, Type(1) / (c0 * c1), result);\n\n                     exprtk_debug((\"(v0 / c0) / (c1 / v1) --> (vovoc) (v0 * v1) * (1 / (c0 * c1))\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n                  // (v0 * c) +/- (c * v1) --> (covov) c * (v0 +/- v1)\n                  else if (\n                            (std::equal_to<T>()(c0,c1)) &&\n                            (details::e_mul == o0)      &&\n                            (details::e_mul == o2)      &&\n                            (\n                              (details::e_add == o1) || (details::e_sub == o1)\n                            )\n                          )\n                  {\n                     std::string specfunc;\n\n                     switch (o1)\n                     {\n                        case details::e_add : specfunc = \"t*(t+t)\"; break;\n                        case details::e_sub : specfunc = \"t*(t-t)\"; break;\n                        default             : return error_node();\n                     }\n\n                     const bool synthesis_result =\n                        synthesize_sf3ext_expression::\n                           template compile<ctype,vtype,vtype>(expr_gen, specfunc, c0, v0, v1, result);\n\n                     exprtk_debug((\"(v0 * c) +/- (c * v1) --> (covov) c * (v0 +/- v1)\\n\"));\n\n                     return (synthesis_result) ? 
result : error_node();\n                  }\n               }\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, c0, c1, v1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o1,f1))\n                  return error_node();\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n               else\n                  return node_type::allocate(*(expr_gen.node_allocator_), v0, c0, c1, v1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"(t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_vovovov_expression1\n         {\n            typedef typename vovovov_t::type1 node_type;\n            typedef typename vovovov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // v0 o0 (v1 o1 (v2 o2 
v3))\n               typedef typename synthesize_vovov_expression1::node_type lcl_vovov_t;\n\n               const lcl_vovov_t* vovov = static_cast<const lcl_vovov_t*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type& v1 = vovov->t0();\n               const Type& v2 = vovov->t1();\n               const Type& v3 = vovov->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(vovov->f0());\n               const details::operator_type o2 = expr_gen.get_operator(vovov->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = vovov->f0();\n               binary_functor_t f2 = vovov->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               if (synthesize_sf4ext_expression::template compile<T0,T1,T2,T3>(expr_gen,id(expr_gen,o0,o1,o2),v0,v1,v2,v3,result))\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"v0 o0 (v1 o1 (v2 o2 v3))\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_),v0,v1,v2,v3,f0,f1,f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t))\");\n            }\n         };\n\n         struct synthesize_vovovoc_expression1\n         {\n            
typedef typename vovovoc_t::type1 node_type;\n            typedef typename vovovoc_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // v0 o0 (v1 o1 (v2 o2 c))\n               typedef typename synthesize_vovoc_expression1::node_type lcl_vovoc_t;\n\n               const lcl_vovoc_t* vovoc = static_cast<const lcl_vovoc_t*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type& v1 = vovoc->t0();\n               const Type& v2 = vovoc->t1();\n               const Type   c = vovoc->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(vovoc->f0());\n               const details::operator_type o2 = expr_gen.get_operator(vovoc->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = vovoc->f0();\n               binary_functor_t f2 = vovoc->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, v2, c, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               
exprtk_debug((\"v0 o0 (v1 o1 (v2 o2 c))\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, v2, c, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t))\");\n            }\n         };\n\n         struct synthesize_vovocov_expression1\n         {\n            typedef typename vovocov_t::type1 node_type;\n            typedef typename vovocov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // v0 o0 (v1 o1 (c o2 v2))\n               typedef typename synthesize_vocov_expression1::node_type lcl_vocov_t;\n\n               const lcl_vocov_t* vocov = static_cast<const lcl_vocov_t*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type& v1 = vocov->t0();\n               const Type   c = vocov->t1();\n               const Type& v2 = vocov->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(vocov->f0());\n               const details::operator_type o2 = 
expr_gen.get_operator(vocov->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = vocov->f0();\n               binary_functor_t f2 = vocov->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, c, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"v0 o0 (v1 o1 (c o2 v2))\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, c, v2, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t))\");\n            }\n         };\n\n         struct synthesize_vocovov_expression1\n         {\n            typedef typename vocovov_t::type1 node_type;\n            typedef typename vocovov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n         
                                             expression_node_ptr (&branch)[2])\n            {\n               // v0 o0 (c o1 (v1 o2 v2))\n               typedef typename synthesize_covov_expression1::node_type lcl_covov_t;\n\n               const lcl_covov_t* covov = static_cast<const lcl_covov_t*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type   c = covov->t0();\n               const Type& v1 = covov->t1();\n               const Type& v2 = covov->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(covov->f0());\n               const details::operator_type o2 = expr_gen.get_operator(covov->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = covov->f0();\n               binary_functor_t f2 = covov->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, c, v1, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"v0 o0 (c o1 (v1 o2 v2))\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, c, v1, v2, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n            
   return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t))\");\n            }\n         };\n\n         struct synthesize_covovov_expression1\n         {\n            typedef typename covovov_t::type1 node_type;\n            typedef typename covovov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // c o0 (v0 o1 (v1 o2 v2))\n               typedef typename synthesize_vovov_expression1::node_type lcl_vovov_t;\n\n               const lcl_vovov_t* vovov = static_cast<const lcl_vovov_t*>(branch[1]);\n               const Type   c = static_cast<details::literal_node<Type>*>(branch[0])->value();\n               const Type& v0 = vovov->t0();\n               const Type& v1 = vovov->t1();\n               const Type& v2 = vovov->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(vovov->f0());\n               const details::operator_type o2 = expr_gen.get_operator(vovov->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = vovov->f0();\n               binary_functor_t f2 = vovov->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  
synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), c, v0, v1, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"c o0 (v0 o1 (v1 o2 v2))\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), c, v0, v1, v2, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t))\");\n            }\n         };\n\n         struct synthesize_covocov_expression1\n         {\n            typedef typename covocov_t::type1 node_type;\n            typedef typename covocov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // c0 o0 (v0 o1 (c1 o2 v1))\n               typedef typename synthesize_vocov_expression1::node_type lcl_vocov_t;\n\n               const lcl_vocov_t* vocov = static_cast<const lcl_vocov_t*>(branch[1]);\n               const Type  c0 = static_cast<details::literal_node<Type>*>(branch[0])->value();\n            
   const Type& v0 = vocov->t0();\n               const Type  c1 = vocov->t1();\n               const Type& v1 = vocov->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(vocov->f0());\n               const details::operator_type o2 = expr_gen.get_operator(vocov->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = vocov->f0();\n               binary_functor_t f2 = vocov->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), c0, v0, c1, v1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"c0 o0 (v0 o1 (c1 o2 v1))\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), c0, v0, c1, v1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t))\");\n            }\n         };\n\n         struct synthesize_vocovoc_expression1\n         {\n            typedef typename vocovoc_t::type1 node_type;\n            typedef typename 
vocovoc_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // v0 o0 (c0 o1 (v1 o2 c1))\n               typedef typename synthesize_covoc_expression1::node_type lcl_covoc_t;\n\n               const lcl_covoc_t* covoc = static_cast<const lcl_covoc_t*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type  c0 = covoc->t0();\n               const Type& v1 = covoc->t1();\n               const Type  c1 = covoc->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(covoc->f0());\n               const details::operator_type o2 = expr_gen.get_operator(covoc->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = covoc->f0();\n               binary_functor_t f2 = covoc->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, c0, v1, c1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"v0 o0 (c0 o1 (v1 o2 c1))\\n\"));\n\n               return 
node_type::allocate(*(expr_gen.node_allocator_), v0, c0, v1, c1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t))\");\n            }\n         };\n\n         struct synthesize_covovoc_expression1\n         {\n            typedef typename covovoc_t::type1 node_type;\n            typedef typename covovoc_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // c0 o0 (v0 o1 (v1 o2 c1))\n               typedef typename synthesize_vovoc_expression1::node_type lcl_vovoc_t;\n\n               const lcl_vovoc_t* vovoc = static_cast<const lcl_vovoc_t*>(branch[1]);\n               const Type  c0 = static_cast<details::literal_node<Type>*>(branch[0])->value();\n               const Type& v0 = vovoc->t0();\n               const Type& v1 = vovoc->t1();\n               const Type  c1 = vovoc->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(vovoc->f0());\n               const details::operator_type o2 = expr_gen.get_operator(vovoc->f1());\n\n               binary_functor_t f0 = 
reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = vovoc->f0();\n               binary_functor_t f2 = vovoc->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), c0, v0, v1, c1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"c0 o0 (v0 o1 (v1 o2 c1))\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), c0, v0, v1, c1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t))\");\n            }\n         };\n\n         struct synthesize_vococov_expression1\n         {\n            typedef typename vococov_t::type1 node_type;\n            typedef typename vococov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n  
                                                    expression_node_ptr (&branch)[2])\n            {\n               // v0 o0 (c0 o1 (c1 o2 v1))\n               typedef typename synthesize_cocov_expression1::node_type lcl_cocov_t;\n\n               const lcl_cocov_t* cocov = static_cast<const lcl_cocov_t*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type  c0 = cocov->t0();\n               const Type  c1 = cocov->t1();\n               const Type& v1 = cocov->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(cocov->f0());\n               const details::operator_type o2 = expr_gen.get_operator(cocov->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = cocov->f0();\n               binary_functor_t f2 = cocov->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, c0, c1, v1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"v0 o0 (c0 o1 (c1 o2 v1))\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, c0, c1, v1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n 
              return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"(t\" << expr_gen.to_str(o1) << \"(t\" << expr_gen.to_str(o2) << \"t))\");\n            }\n         };\n\n         struct synthesize_vovovov_expression2\n         {\n            typedef typename vovovov_t::type2 node_type;\n            typedef typename vovovov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // v0 o0 ((v1 o1 v2) o2 v3)\n               typedef typename synthesize_vovov_expression0::node_type lcl_vovov_t;\n\n               const lcl_vovov_t* vovov = static_cast<const lcl_vovov_t*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type& v1 = vovov->t0();\n               const Type& v2 = vovov->t1();\n               const Type& v3 = vovov->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(vovov->f0());\n               const details::operator_type o2 = expr_gen.get_operator(vovov->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = vovov->f0();\n               binary_functor_t f2 = vovov->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n             
        (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, v2, v3, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"v0 o0 ((v1 o1 v2) o2 v3)\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, v2, v3, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"((t\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_vovovoc_expression2\n         {\n            typedef typename vovovoc_t::type2 node_type;\n            typedef typename vovovoc_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // v0 o0 ((v1 o1 v2) o2 c)\n               typedef typename synthesize_vovoc_expression0::node_type lcl_vovoc_t;\n\n               const lcl_vovoc_t* vovoc = static_cast<const lcl_vovoc_t*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type& v1 = vovoc->t0();\n               const Type& v2 = 
vovoc->t1();\n               const Type   c = vovoc->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(vovoc->f0());\n               const details::operator_type o2 = expr_gen.get_operator(vovoc->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = vovoc->f0();\n               binary_functor_t f2 = vovoc->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, v2, c, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"v0 o0 ((v1 o1 v2) o2 c)\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, v2, c, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"((t\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_vovocov_expression2\n         {\n            typedef typename vovocov_t::type2 node_type;\n            typedef typename vovocov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename 
node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // v0 o0 ((v1 o1 c) o2 v2)\n               typedef typename synthesize_vocov_expression0::node_type lcl_vocov_t;\n\n               const lcl_vocov_t* vocov = static_cast<const lcl_vocov_t*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type& v1 = vocov->t0();\n               const Type   c = vocov->t1();\n               const Type& v2 = vocov->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(vocov->f0());\n               const details::operator_type o2 = expr_gen.get_operator(vocov->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = vocov->f0();\n               binary_functor_t f2 = vocov->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, c, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"v0 o0 ((v1 o1 c) o2 v2)\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, c, v2, f0, f1, f2);\n            }\n\n            static inline std::string 
id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"((t\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_vocovov_expression2\n         {\n            typedef typename vocovov_t::type2 node_type;\n            typedef typename vocovov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // v0 o0 ((c o1 v1) o2 v2)\n               typedef typename synthesize_covov_expression0::node_type lcl_covov_t;\n\n               const lcl_covov_t* covov = static_cast<const lcl_covov_t*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type   c = covov->t0();\n               const Type& v1 = covov->t1();\n               const Type& v2 = covov->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(covov->f0());\n               const details::operator_type o2 = expr_gen.get_operator(covov->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = covov->f0();\n               binary_functor_t f2 = 
covov->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, c, v1, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"v0 o0 ((c o1 v1) o2 v2)\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, c, v1, v2, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"((t\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_covovov_expression2\n         {\n            typedef typename covovov_t::type2 node_type;\n            typedef typename covovov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // c o0 ((v0 o1 v1) o2 v2)\n               typedef typename 
synthesize_vovov_expression0::node_type lcl_vovov_t;\n\n               const lcl_vovov_t* vovov = static_cast<const lcl_vovov_t*>(branch[1]);\n               const Type   c = static_cast<details::literal_node<Type>*>(branch[0])->value();\n               const Type& v0 = vovov->t0();\n               const Type& v1 = vovov->t1();\n               const Type& v2 = vovov->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(vovov->f0());\n               const details::operator_type o2 = expr_gen.get_operator(vovov->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = vovov->f0();\n               binary_functor_t f2 = vovov->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), c, v0, v1, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"c o0 ((v0 o1 v1) o2 v2)\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), c, v0, v1, v2, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"((t\" << 
expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_covocov_expression2\n         {\n            typedef typename covocov_t::type2 node_type;\n            typedef typename covocov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // c0 o0 ((v0 o1 c1) o2 v1)\n               typedef typename synthesize_vocov_expression0::node_type lcl_vocov_t;\n\n               const lcl_vocov_t* vocov = static_cast<const lcl_vocov_t*>(branch[1]);\n               const Type  c0 = static_cast<details::literal_node<Type>*>(branch[0])->value();\n               const Type& v0 = vocov->t0();\n               const Type  c1 = vocov->t1();\n               const Type& v1 = vocov->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(vocov->f0());\n               const details::operator_type o2 = expr_gen.get_operator(vocov->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = vocov->f0();\n               binary_functor_t f2 = vocov->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     
(expr_gen, id(expr_gen, o0, o1, o2), c0, v0, c1, v1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"c0 o0 ((v0 o1 c1) o2 v1)\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), c0, v0, c1, v1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"((t\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_vocovoc_expression2\n         {\n            typedef typename vocovoc_t::type2 node_type;\n            typedef typename vocovoc_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // v0 o0 ((c0 o1 v1) o2 c1)\n               typedef typename synthesize_covoc_expression0::node_type lcl_covoc_t;\n\n               const lcl_covoc_t* covoc = static_cast<const lcl_covoc_t*>(branch[1]);\n               const Type& v0 = static_cast<details::variable_node<Type>*>(branch[0])->ref();\n               const Type  c0 = covoc->t0();\n               const Type& v1 = 
covoc->t1();\n               const Type  c1 = covoc->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(covoc->f0());\n               const details::operator_type o2 = expr_gen.get_operator(covoc->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = covoc->f0();\n               binary_functor_t f2 = covoc->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, c0, v1, c1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"v0 o0 ((c0 o1 v1) o2 c1)\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, c0, v1, c1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"((t\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_covovoc_expression2\n         {\n            typedef typename covovoc_t::type2 node_type;\n            typedef typename covovoc_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename 
node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // c0 o0 ((v0 o1 v1) o2 c1)\n               typedef typename synthesize_vovoc_expression0::node_type lcl_vovoc_t;\n\n               const lcl_vovoc_t* vovoc = static_cast<const lcl_vovoc_t*>(branch[1]);\n               const Type  c0 = static_cast<details::literal_node<Type>*>(branch[0])->value();\n               const Type& v0 = vovoc->t0();\n               const Type& v1 = vovoc->t1();\n               const Type  c1 = vovoc->t2();\n               const details::operator_type o0 = operation;\n               const details::operator_type o1 = expr_gen.get_operator(vovoc->f0());\n               const details::operator_type o2 = expr_gen.get_operator(vovoc->f1());\n\n               binary_functor_t f0 = reinterpret_cast<binary_functor_t>(0);\n               binary_functor_t f1 = vovoc->f0();\n               binary_functor_t f2 = vovoc->f1();\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), c0, v0, v1, c1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o0,f0))\n                  return error_node();\n\n               exprtk_debug((\"c0 o0 ((v0 o1 v1) o2 c1)\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), c0, v0, v1, c1, f0, f1, f2);\n  
          }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"t\" << expr_gen.to_str(o0) << \"((t\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t)\");\n            }\n         };\n\n         struct synthesize_vococov_expression2\n         {\n            typedef typename vococov_t::type2 node_type;\n            static inline expression_node_ptr process(expression_generator<Type>&, const details::operator_type&, expression_node_ptr (&)[2])\n            {\n               // v0 o0 ((c0 o1 c1) o2 v1) - Not possible\n               exprtk_debug((\"v0 o0 ((c0 o1 c1) o2 v1) - Not possible\\n\"));\n               return error_node();\n            }\n\n            static inline std::string id(expression_generator<Type>&,\n                                         const details::operator_type, const details::operator_type, const details::operator_type)\n            {\n               return \"INVALID\";\n            }\n         };\n\n         struct synthesize_vovovov_expression3\n         {\n            typedef typename vovovov_t::type3 node_type;\n            typedef typename vovovov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // ((v0 o0 v1) o1 v2) o2 v3\n              
 typedef typename synthesize_vovov_expression0::node_type lcl_vovov_t;\n\n               const lcl_vovov_t* vovov = static_cast<const lcl_vovov_t*>(branch[0]);\n               const Type& v0 = vovov->t0();\n               const Type& v1 = vovov->t1();\n               const Type& v2 = vovov->t2();\n               const Type& v3 = static_cast<details::variable_node<Type>*>(branch[1])->ref();\n               const details::operator_type o0 = expr_gen.get_operator(vovov->f0());\n               const details::operator_type o1 = expr_gen.get_operator(vovov->f1());\n               const details::operator_type o2 = operation;\n\n               binary_functor_t f0 = vovov->f0();\n               binary_functor_t f1 = vovov->f1();\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, v2, v3, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n\n               exprtk_debug((\"((v0 o0 v1) o1 v2) o2 v3\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, v2, v3, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"((t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t\");\n  
          }\n         };\n\n         struct synthesize_vovovoc_expression3\n         {\n            typedef typename vovovoc_t::type3 node_type;\n            typedef typename vovovoc_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // ((v0 o0 v1) o1 v2) o2 c\n               typedef typename synthesize_vovov_expression0::node_type lcl_vovov_t;\n\n               const lcl_vovov_t* vovov = static_cast<const lcl_vovov_t*>(branch[0]);\n               const Type& v0 = vovov->t0();\n               const Type& v1 = vovov->t1();\n               const Type& v2 = vovov->t2();\n               const Type   c = static_cast<details::literal_node<Type>*>(branch[1])->value();\n               const details::operator_type o0 = expr_gen.get_operator(vovov->f0());\n               const details::operator_type o1 = expr_gen.get_operator(vovov->f1());\n               const details::operator_type o2 = operation;\n\n               binary_functor_t f0 = vovov->f0();\n               binary_functor_t f1 = vovov->f1();\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, v2, c, result);\n\n             
  if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n\n               exprtk_debug((\"((v0 o0 v1) o1 v2) o2 c\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, v2, c, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"((t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t\");\n            }\n         };\n\n         struct synthesize_vovocov_expression3\n         {\n            typedef typename vovocov_t::type3 node_type;\n            typedef typename vovocov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // ((v0 o0 v1) o1 c) o2 v2\n               typedef typename synthesize_vovoc_expression0::node_type lcl_vovoc_t;\n\n               const lcl_vovoc_t* vovoc = static_cast<const lcl_vovoc_t*>(branch[0]);\n               const Type& v0 = vovoc->t0();\n               const Type& v1 = vovoc->t1();\n               const Type   c = vovoc->t2();\n               const Type& v2 = static_cast<details::variable_node<Type>*>(branch[1])->ref();\n               const 
details::operator_type o0 = expr_gen.get_operator(vovoc->f0());\n               const details::operator_type o1 = expr_gen.get_operator(vovoc->f1());\n               const details::operator_type o2 = operation;\n\n               binary_functor_t f0 = vovoc->f0();\n               binary_functor_t f1 = vovoc->f1();\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, c, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n\n               exprtk_debug((\"((v0 o0 v1) o1 c) o2 v2\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, c, v2, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"((t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t\");\n            }\n         };\n\n         struct synthesize_vocovov_expression3\n         {\n            typedef typename vocovov_t::type3 node_type;\n            typedef typename vocovov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            
static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // ((v0 o0 c) o1 v1) o2 v2\n               typedef typename synthesize_vocov_expression0::node_type lcl_vocov_t;\n\n               const lcl_vocov_t* vocov = static_cast<const lcl_vocov_t*>(branch[0]);\n               const Type& v0 = vocov->t0();\n               const Type   c = vocov->t1();\n               const Type& v1 = vocov->t2();\n               const Type& v2 = static_cast<details::variable_node<Type>*>(branch[1])->ref();\n               const details::operator_type o0 = expr_gen.get_operator(vocov->f0());\n               const details::operator_type o1 = expr_gen.get_operator(vocov->f1());\n               const details::operator_type o2 = operation;\n\n               binary_functor_t f0 = vocov->f0();\n               binary_functor_t f1 = vocov->f1();\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, c, v1, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n\n               exprtk_debug((\"((v0 o0 c) o1 v1) o2 v2\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, c, v1, v2, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const 
details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"((t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t\");\n            }\n         };\n\n         struct synthesize_covovov_expression3\n         {\n            typedef typename covovov_t::type3 node_type;\n            typedef typename covovov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // ((c o0 v0) o1 v1) o2 v2\n               typedef typename synthesize_covov_expression0::node_type lcl_covov_t;\n\n               const lcl_covov_t* covov = static_cast<const lcl_covov_t*>(branch[0]);\n               const Type   c = covov->t0();\n               const Type& v0 = covov->t1();\n               const Type& v1 = covov->t2();\n               const Type& v2 = static_cast<details::variable_node<Type>*>(branch[1])->ref();\n               const details::operator_type o0 = expr_gen.get_operator(covov->f0());\n               const details::operator_type o1 = expr_gen.get_operator(covov->f1());\n               const details::operator_type o2 = operation;\n\n               binary_functor_t f0 = covov->f0();\n               binary_functor_t f1 = covov->f1();\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n\n         
      expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), c, v0, v1, v2, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n\n               exprtk_debug((\"((c o0 v0) o1 v1) o2 v2\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), c, v0, v1, v2, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"((t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t\");\n            }\n         };\n\n         struct synthesize_covocov_expression3\n         {\n            typedef typename covocov_t::type3 node_type;\n            typedef typename covocov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // ((c0 o0 v0) o1 c1) o2 v1\n               typedef typename synthesize_covoc_expression0::node_type lcl_covoc_t;\n\n               const lcl_covoc_t* covoc = static_cast<const 
lcl_covoc_t*>(branch[0]);\n               const Type  c0 = covoc->t0();\n               const Type& v0 = covoc->t1();\n               const Type  c1 = covoc->t2();\n               const Type& v1 = static_cast<details::variable_node<Type>*>(branch[1])->ref();\n               const details::operator_type o0 = expr_gen.get_operator(covoc->f0());\n               const details::operator_type o1 = expr_gen.get_operator(covoc->f1());\n               const details::operator_type o2 = operation;\n\n               binary_functor_t f0 = covoc->f0();\n               binary_functor_t f1 = covoc->f1();\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), c0, v0, c1, v1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n\n               exprtk_debug((\"((c0 o0 v0) o1 c1) o2 v1\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), c0, v0, c1, v1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"((t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t\");\n            }\n         };\n\n         struct synthesize_vocovoc_expression3\n         {\n            typedef typename vocovoc_t::type3 
node_type;\n            typedef typename vocovoc_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // ((v0 o0 c0) o1 v1) o2 c1\n               typedef typename synthesize_vocov_expression0::node_type lcl_vocov_t;\n\n               const lcl_vocov_t* vocov = static_cast<const lcl_vocov_t*>(branch[0]);\n               const Type& v0 = vocov->t0();\n               const Type  c0 = vocov->t1();\n               const Type& v1 = vocov->t2();\n               const Type  c1 = static_cast<details::literal_node<Type>*>(branch[1])->value();\n               const details::operator_type o0 = expr_gen.get_operator(vocov->f0());\n               const details::operator_type o1 = expr_gen.get_operator(vocov->f1());\n               const details::operator_type o2 = operation;\n\n               binary_functor_t f0 = vocov->f0();\n               binary_functor_t f1 = vocov->f1();\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, c0, v1, c1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o2,f2))\n               
   return error_node();\n\n               exprtk_debug((\"((v0 o0 c0) o1 v1) o2 c1\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, c0, v1, c1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"((t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t\");\n            }\n         };\n\n         struct synthesize_covovoc_expression3\n         {\n            typedef typename covovoc_t::type3 node_type;\n            typedef typename covovoc_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // ((c0 o0 v0) o1 v1) o2 c1\n               typedef typename synthesize_covov_expression0::node_type lcl_covov_t;\n\n               const lcl_covov_t* covov = static_cast<const lcl_covov_t*>(branch[0]);\n               const Type  c0 = covov->t0();\n               const Type& v0 = covov->t1();\n               const Type& v1 = covov->t2();\n               const Type  c1 = static_cast<details::literal_node<Type>*>(branch[1])->value();\n               const details::operator_type o0 = expr_gen.get_operator(covov->f0());\n               const details::operator_type o1 = 
expr_gen.get_operator(covov->f1());\n               const details::operator_type o2 = operation;\n\n               binary_functor_t f0 = covov->f0();\n               binary_functor_t f1 = covov->f1();\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n               details::free_node(*(expr_gen.node_allocator_),branch[1]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), c0, v0, v1, c1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n\n               exprtk_debug((\"((c0 o0 v0) o1 v1) o2 c1\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), c0, v0, v1, c1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                                         const details::operator_type o1,\n                                         const details::operator_type o2)\n            {\n               return (details::build_string() << \"((t\" << expr_gen.to_str(o0) << \"t)\" << expr_gen.to_str(o1) << \"t)\" << expr_gen.to_str(o2) << \"t\");\n            }\n         };\n\n         struct synthesize_vococov_expression3\n         {\n            typedef typename vococov_t::type3 node_type;\n            typedef typename vococov_t::sf4_type sf4_type;\n            typedef typename node_type::T0 T0;\n            typedef typename node_type::T1 T1;\n            typedef typename node_type::T2 T2;\n            typedef typename node_type::T3 T3;\n\n            static inline expression_node_ptr 
process(expression_generator<Type>& expr_gen,\n                                                      const details::operator_type& operation,\n                                                      expression_node_ptr (&branch)[2])\n            {\n               // ((v0 o0 c0) o1 c1) o2 v1\n               typedef typename synthesize_vococ_expression0::node_type lcl_vococ_t;\n\n               const lcl_vococ_t* vococ = static_cast<const lcl_vococ_t*>(branch[0]);\n               const Type& v0 = vococ->t0();\n               const Type  c0 = vococ->t1();\n               const Type  c1 = vococ->t2();\n               const Type& v1 = static_cast<details::variable_node<Type>*>(branch[1])->ref();\n               const details::operator_type o0 = expr_gen.get_operator(vococ->f0());\n               const details::operator_type o1 = expr_gen.get_operator(vococ->f1());\n               const details::operator_type o2 = operation;\n\n               binary_functor_t f0 = vococ->f0();\n               binary_functor_t f1 = vococ->f1();\n               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);\n\n               details::free_node(*(expr_gen.node_allocator_),branch[0]);\n\n               expression_node_ptr result = error_node();\n\n               const bool synthesis_result =\n                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>\n                     (expr_gen, id(expr_gen, o0, o1, o2), v0, c0, c1, v1, result);\n\n               if (synthesis_result)\n                  return result;\n               else if (!expr_gen.valid_operator(o2,f2))\n                  return error_node();\n\n               exprtk_debug((\"((v0 o0 c0) o1 c1) o2 v1\\n\"));\n\n               return node_type::allocate(*(expr_gen.node_allocator_), v0, c0, c1, v1, f0, f1, f2);\n            }\n\n            static inline std::string id(expression_generator<Type>& expr_gen,\n                                         const details::operator_type o0,\n                     
                    const details::operator_type o1,
                                         const details::operator_type o2)
            {
               return (details::build_string() << "((t" << expr_gen.to_str(o0) << "t)" << expr_gen.to_str(o1) << "t)" << expr_gen.to_str(o2) << "t");
            }
         };

         struct synthesize_vovovov_expression4
         {
            typedef typename vovovov_t::type4 node_type;
            typedef typename vovovov_t::sf4_type sf4_type;
            typedef typename node_type::T0 T0;
            typedef typename node_type::T1 T1;
            typedef typename node_type::T2 T2;
            typedef typename node_type::T3 T3;

            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,
                                                      const details::operator_type& operation,
                                                      expression_node_ptr (&branch)[2])
            {
               // (v0 o0 (v1 o1 v2)) o2 v3
               typedef typename synthesize_vovov_expression1::node_type lcl_vovov_t;

               const lcl_vovov_t* vovov = static_cast<const lcl_vovov_t*>(branch[0]);
               const Type& v0 = vovov->t0();
               const Type& v1 = vovov->t1();
               const Type& v2 = vovov->t2();
               const Type& v3 = static_cast<details::variable_node<Type>*>(branch[1])->ref();
               const details::operator_type o0 = expr_gen.get_operator(vovov->f0());
               const details::operator_type o1 = expr_gen.get_operator(vovov->f1());
               const details::operator_type o2 = operation;

               binary_functor_t f0 = vovov->f0();
               binary_functor_t f1 = vovov->f1();
               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);

               details::free_node(*(expr_gen.node_allocator_),branch[0]);

               expression_node_ptr result = error_node();

               const bool synthesis_result =
                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>
                     (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, v2, v3, result);

               if (synthesis_result)
                  return result;
               else if (!expr_gen.valid_operator(o2,f2))
                  return error_node();

               exprtk_debug(("(v0 o0 (v1 o1 v2)) o2 v3\n"));

               return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, v2, v3, f0, f1, f2);
            }

            static inline std::string id(expression_generator<Type>& expr_gen,
                                         const details::operator_type o0,
                                         const details::operator_type o1,
                                         const details::operator_type o2)
            {
               return (details::build_string() << "(t" << expr_gen.to_str(o0) << "(t" << expr_gen.to_str(o1) << "t)" << expr_gen.to_str(o2) << "t");
            }
         };

         struct synthesize_vovovoc_expression4
         {
            typedef typename vovovoc_t::type4 node_type;
            typedef typename vovovoc_t::sf4_type sf4_type;
            typedef typename node_type::T0 T0;
            typedef typename node_type::T1 T1;
            typedef typename node_type::T2 T2;
            typedef typename node_type::T3 T3;

            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,
                                                      const details::operator_type& operation,
                                                      expression_node_ptr (&branch)[2])
            {
               // ((v0 o0 (v1 o1 v2)) o2 c)
               typedef typename synthesize_vovov_expression1::node_type lcl_vovov_t;

               const lcl_vovov_t* vovov = static_cast<const lcl_vovov_t*>(branch[0]);
               const Type& v0 = vovov->t0();
               const Type& v1 = vovov->t1();
               const Type& v2 = vovov->t2();
               const Type   c = static_cast<details::literal_node<Type>*>(branch[1])->value();
               const details::operator_type o0 = expr_gen.get_operator(vovov->f0());
               const details::operator_type o1 = expr_gen.get_operator(vovov->f1());
               const details::operator_type o2 = operation;

               binary_functor_t f0 = vovov->f0();
               binary_functor_t f1 = vovov->f1();
               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);

               details::free_node(*(expr_gen.node_allocator_),branch[0]);
               details::free_node(*(expr_gen.node_allocator_),branch[1]);

               expression_node_ptr result = error_node();

               const bool synthesis_result =
                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>
                     (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, v2, c, result);

               if (synthesis_result)
                  return result;
               else if (!expr_gen.valid_operator(o2,f2))
                  return error_node();

               exprtk_debug(("((v0 o0 (v1 o1 v2)) o2 c)\n"));

               return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, v2, c, f0, f1, f2);
            }

            static inline std::string id(expression_generator<Type>& expr_gen,
                                         const details::operator_type o0,
                                         const details::operator_type o1,
                                         const details::operator_type o2)
            {
               return (details::build_string() << "(t" << expr_gen.to_str(o0) << "(t" << expr_gen.to_str(o1) << "t)" << expr_gen.to_str(o2) << "t");
            }
         };

         struct synthesize_vovocov_expression4
         {
            typedef typename vovocov_t::type4 node_type;
            typedef typename vovocov_t::sf4_type sf4_type;
            typedef typename node_type::T0 T0;
            typedef typename node_type::T1 T1;
            typedef typename node_type::T2 T2;
            typedef typename node_type::T3 T3;

            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,
                                                      const details::operator_type& operation,
                                                      expression_node_ptr (&branch)[2])
            {
               // ((v0 o0 (v1 o1 c)) o2 v1)
               typedef typename synthesize_vovoc_expression1::node_type lcl_vovoc_t;

               const lcl_vovoc_t* vovoc = static_cast<const lcl_vovoc_t*>(branch[0]);
               const Type& v0 = vovoc->t0();
               const Type& v1 = vovoc->t1();
               const Type   c = vovoc->t2();
               const Type& v2 = static_cast<details::variable_node<Type>*>(branch[1])->ref();
               const details::operator_type o0 = expr_gen.get_operator(vovoc->f0());
               const details::operator_type o1 = expr_gen.get_operator(vovoc->f1());
               const details::operator_type o2 = operation;

               binary_functor_t f0 = vovoc->f0();
               binary_functor_t f1 = vovoc->f1();
               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);

               details::free_node(*(expr_gen.node_allocator_),branch[0]);

               expression_node_ptr result = error_node();

               const bool synthesis_result =
                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>
                     (expr_gen, id(expr_gen, o0, o1, o2), v0, v1, c, v2, result);

               if (synthesis_result)
                  return result;
               else if (!expr_gen.valid_operator(o2,f2))
                  return error_node();

               exprtk_debug(("((v0 o0 (v1 o1 c)) o2 v1)\n"));

               return node_type::allocate(*(expr_gen.node_allocator_), v0, v1, c, v2, f0, f1, f2);
            }

            static inline std::string id(expression_generator<Type>& expr_gen,
                                         const details::operator_type o0,
                                         const details::operator_type o1,
                                         const details::operator_type o2)
            {
               return (details::build_string() << "(t" << expr_gen.to_str(o0) << "(t" << expr_gen.to_str(o1) << "t)" << expr_gen.to_str(o2) << "t");
            }
         };

         struct synthesize_vocovov_expression4
         {
            typedef typename vocovov_t::type4 node_type;
            typedef typename vocovov_t::sf4_type sf4_type;
            typedef typename node_type::T0 T0;
            typedef typename node_type::T1 T1;
            typedef typename node_type::T2 T2;
            typedef typename node_type::T3 T3;

            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,
                                                      const details::operator_type& operation,
                                                      expression_node_ptr (&branch)[2])
            {
               // ((v0 o0 (c o1 v1)) o2 v2)
               typedef typename synthesize_vocov_expression1::node_type lcl_vocov_t;

               const lcl_vocov_t* vocov = static_cast<const lcl_vocov_t*>(branch[0]);
               const Type& v0 = vocov->t0();
               const Type   c = vocov->t1();
               const Type& v1 = vocov->t2();
               const Type& v2 = static_cast<details::variable_node<Type>*>(branch[1])->ref();
               const details::operator_type o0 = expr_gen.get_operator(vocov->f0());
               const details::operator_type o1 = expr_gen.get_operator(vocov->f1());
               const details::operator_type o2 = operation;

               binary_functor_t f0 = vocov->f0();
               binary_functor_t f1 = vocov->f1();
               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);

               details::free_node(*(expr_gen.node_allocator_),branch[0]);
               expression_node_ptr result = error_node();

               const bool synthesis_result =
                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>
                     (expr_gen, id(expr_gen, o0, o1, o2), v0, c, v1, v2, result);

               if (synthesis_result)
                  return result;
               else if (!expr_gen.valid_operator(o2,f2))
                  return error_node();

               exprtk_debug(("((v0 o0 (c o1 v1)) o2 v2)\n"));

               return node_type::allocate(*(expr_gen.node_allocator_), v0, c, v1, v2, f0, f1, f2);
            }

            static inline std::string id(expression_generator<Type>& expr_gen,
                                         const details::operator_type o0,
                                         const details::operator_type o1,
                                         const details::operator_type o2)
            {
               return (details::build_string() << "(t" << expr_gen.to_str(o0) << "(t" << expr_gen.to_str(o1) << "t)" << expr_gen.to_str(o2) << "t");
            }
         };

         struct synthesize_covovov_expression4
         {
            typedef typename covovov_t::type4 node_type;
            typedef typename covovov_t::sf4_type sf4_type;
            typedef typename node_type::T0 T0;
            typedef typename node_type::T1 T1;
            typedef typename node_type::T2 T2;
            typedef typename node_type::T3 T3;

            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,
                                                      const details::operator_type& operation,
                                                      expression_node_ptr (&branch)[2])
            {
               // ((c o0 (v0 o1 v1)) o2 v2)
               typedef typename synthesize_covov_expression1::node_type lcl_covov_t;

               const lcl_covov_t* covov = static_cast<const lcl_covov_t*>(branch[0]);
               const Type   c = covov->t0();
               const Type& v0 = covov->t1();
               const Type& v1 = covov->t2();
               const Type& v2 = static_cast<details::variable_node<Type>*>(branch[1])->ref();
               const details::operator_type o0 = expr_gen.get_operator(covov->f0());
               const details::operator_type o1 = expr_gen.get_operator(covov->f1());
               const details::operator_type o2 = operation;

               binary_functor_t f0 = covov->f0();
               binary_functor_t f1 = covov->f1();
               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);

               details::free_node(*(expr_gen.node_allocator_),branch[0]);

               expression_node_ptr result = error_node();

               const bool synthesis_result =
                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>
                     (expr_gen, id(expr_gen, o0, o1, o2), c, v0, v1, v2, result);

               if (synthesis_result)
                  return result;
               else if (!expr_gen.valid_operator(o2,f2))
                  return error_node();

               exprtk_debug(("((c o0 (v0 o1 v1)) o2 v2)\n"));

               return node_type::allocate(*(expr_gen.node_allocator_), c, v0, v1, v2, f0, f1, f2);
            }

            static inline std::string id(expression_generator<Type>& expr_gen,
                                         const details::operator_type o0,
                                         const details::operator_type o1,
                                         const details::operator_type o2)
            {
               return (details::build_string() << "(t" << expr_gen.to_str(o0) << "(t" << expr_gen.to_str(o1) << "t)" << expr_gen.to_str(o2) << "t");
            }
         };

         struct synthesize_covocov_expression4
         {
            typedef typename covocov_t::type4 node_type;
            typedef typename covocov_t::sf4_type sf4_type;
            typedef typename node_type::T0 T0;
            typedef typename node_type::T1 T1;
            typedef typename node_type::T2 T2;
            typedef typename node_type::T3 T3;

            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,
                                                      const details::operator_type& operation,
                                                      expression_node_ptr (&branch)[2])
            {
               // ((c0 o0 (v0 o1 c1)) o2 v1)
               typedef typename synthesize_covoc_expression1::node_type lcl_covoc_t;

               const lcl_covoc_t* covoc = static_cast<const lcl_covoc_t*>(branch[0]);
               const Type  c0 = covoc->t0();
               const Type& v0 = covoc->t1();
               const Type  c1 = covoc->t2();
               const Type& v1 = static_cast<details::variable_node<Type>*>(branch[1])->ref();
               const details::operator_type o0 = expr_gen.get_operator(covoc->f0());
               const details::operator_type o1 = expr_gen.get_operator(covoc->f1());
               const details::operator_type o2 = operation;

               binary_functor_t f0 = covoc->f0();
               binary_functor_t f1 = covoc->f1();
               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);

               details::free_node(*(expr_gen.node_allocator_),branch[0]);

               expression_node_ptr result = error_node();

               const bool synthesis_result =
                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>
                     (expr_gen, id(expr_gen, o0, o1, o2), c0, v0, c1, v1, result);

               if (synthesis_result)
                  return result;
               else if (!expr_gen.valid_operator(o2,f2))
                  return error_node();

               exprtk_debug(("((c0 o0 (v0 o1 c1)) o2 v1)\n"));

               return node_type::allocate(*(expr_gen.node_allocator_), c0, v0, c1, v1, f0, f1, f2);
            }

            static inline std::string id(expression_generator<Type>& expr_gen,
                                         const details::operator_type o0,
                                         const details::operator_type o1,
                                         const details::operator_type o2)
            {
               return (details::build_string() << "(t" << expr_gen.to_str(o0) << "(t" << expr_gen.to_str(o1) << "t)" << expr_gen.to_str(o2) << "t");
            }
         };

         struct synthesize_vocovoc_expression4
         {
            typedef typename vocovoc_t::type4 node_type;
            typedef typename vocovoc_t::sf4_type sf4_type;
            typedef typename node_type::T0 T0;
            typedef typename node_type::T1 T1;
            typedef typename node_type::T2 T2;
            typedef typename node_type::T3 T3;

            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,
                                                      const details::operator_type& operation,
                                                      expression_node_ptr (&branch)[2])
            {
               // ((v0 o0 (c0 o1 v1)) o2 c1)
               typedef typename synthesize_vocov_expression1::node_type lcl_vocov_t;

               const lcl_vocov_t* vocov = static_cast<const lcl_vocov_t*>(branch[0]);
               const Type& v0 = vocov->t0();
               const Type  c0 = vocov->t1();
               const Type& v1 = vocov->t2();
               const Type  c1 = static_cast<details::literal_node<Type>*>(branch[1])->value();
               const details::operator_type o0 = expr_gen.get_operator(vocov->f0());
               const details::operator_type o1 = expr_gen.get_operator(vocov->f1());
               const details::operator_type o2 = operation;

               binary_functor_t f0 = vocov->f0();
               binary_functor_t f1 = vocov->f1();
               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);

               details::free_node(*(expr_gen.node_allocator_),branch[0]);
               details::free_node(*(expr_gen.node_allocator_),branch[1]);

               expression_node_ptr result = error_node();

               const bool synthesis_result =
                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>
                     (expr_gen, id(expr_gen, o0, o1, o2), v0, c0, v1, c1, result);

               if (synthesis_result)
                  return result;
               else if (!expr_gen.valid_operator(o2,f2))
                  return error_node();

               exprtk_debug(("((v0 o0 (c0 o1 v1)) o2 c1)\n"));

               return node_type::allocate(*(expr_gen.node_allocator_), v0, c0, v1, c1, f0, f1, f2);
            }

            static inline std::string id(expression_generator<Type>& expr_gen,
                                         const details::operator_type o0,
                                         const details::operator_type o1,
                                         const details::operator_type o2)
            {
               return (details::build_string() << "(t" << expr_gen.to_str(o0) << "(t" << expr_gen.to_str(o1) << "t)" << expr_gen.to_str(o2) << "t");
            }
         };

         struct synthesize_covovoc_expression4
         {
            typedef typename covovoc_t::type4 node_type;
            typedef typename covovoc_t::sf4_type sf4_type;
            typedef typename node_type::T0 T0;
            typedef typename node_type::T1 T1;
            typedef typename node_type::T2 T2;
            typedef typename node_type::T3 T3;

            static inline expression_node_ptr process(expression_generator<Type>& expr_gen,
                                                      const details::operator_type& operation,
                                                      expression_node_ptr (&branch)[2])
            {
               // ((c0 o0 (v0 o1 v1)) o2 c1)
               typedef typename synthesize_covov_expression1::node_type lcl_covov_t;

               const lcl_covov_t* covov = static_cast<const lcl_covov_t*>(branch[0]);
               const Type  c0 = covov->t0();
               const Type& v0 = covov->t1();
               const Type& v1 = covov->t2();
               const Type  c1 = static_cast<details::literal_node<Type>*>(branch[1])->value();
               const details::operator_type o0 = expr_gen.get_operator(covov->f0());
               const details::operator_type o1 = expr_gen.get_operator(covov->f1());
               const details::operator_type o2 = operation;

               binary_functor_t f0 = covov->f0();
               binary_functor_t f1 = covov->f1();
               binary_functor_t f2 = reinterpret_cast<binary_functor_t>(0);

               details::free_node(*(expr_gen.node_allocator_),branch[0]);
               details::free_node(*(expr_gen.node_allocator_),branch[1]);

               expression_node_ptr result = error_node();

               const bool synthesis_result =
                  synthesize_sf4ext_expression::template compile<T0, T1, T2, T3>
                     (expr_gen, id(expr_gen, o0, o1, o2), c0, v0, v1, c1, result);

               if (synthesis_result)
                  return result;
               else if (!expr_gen.valid_operator(o2,f2))
                  return error_node();

               exprtk_debug(("((c0 o0 (v0 o1 v1)) o2 c1)\n"));

               return node_type::allocate(*(expr_gen.node_allocator_), c0, v0, v1, c1, f0, f1, f2);
            }

            static inline std::string id(expression_generator<Type>& expr_gen,
                                         const details::operator_type o0,
                                         const details::operator_type o1,
                                         const details::operator_type o2)
            {
               return (details::build_string() << "(t" << expr_gen.to_str(o0) << "(t" << expr_gen.to_str(o1) << "t)" << expr_gen.to_str(o2) << "t");
            }
         };

         struct synthesize_vococov_expression4
         {
            typedef typename vococov_t::type4 node_type;
            static inline expression_node_ptr process(expression_generator<Type>&, const details::operator_type&, expression_node_ptr (&)[2])
            {
               // ((v0 o0 (c0 o1 c1)) o2 v1) - Not possible
               exprtk_debug(("((v0 o0 (c0 o1 c1)) o2 v1) - Not possible\n"));
               return error_node();
            }

            static inline std::string id(expression_generator<Type>&,
                                         const details::operator_type, const details::operator_type, const details::operator_type)
            {
               return "INVALID";
            }
         };
         #endif

         inline expression_node_ptr synthesize_uvouv_expression(const details::operator_type& operation, expression_node_ptr (&branch)[2])
         {
            // Definition: uv o uv
            details::operator_type o0 = static_cast<details::uv_base_node<Type>*>(branch[0])->operation();
            details::operator_type o1 = static_cast<details::uv_base_node<Type>*>(branch[1])->operation();
            const Type& v0 = static_cast<details::uv_base_node<Type>*>(branch[0])->v();
            const Type& v1 = static_cast<details::uv_base_node<Type>*>(branch[1])->v();
            unary_functor_t u0 = reinterpret_cast<unary_functor_t> (0);
            unary_functor_t u1 = reinterpret_cast<unary_functor_t> (0);
            binary_functor_t f = reinterpret_cast<binary_functor_t>(0);

            if (!valid_operator(o0,u0))
               return error_node();
            else if (!valid_operator(o1,u1))
               return error_node();
            else if (!valid_operator(operation,f))
               return error_node();

            expression_node_ptr result = error_node();

            if (
                 (details::e_neg == o0) &&
                 (details::e_neg == o1)
               )
            {
               switch (operation)
               {
                  // (-v0 + -v1) --> -(v0 + v1)
                  case details::e_add : result = (*this)(details::e_neg,
                                                    node_allocator_->
                                                       allocate_rr<typename details::
                                                          vov_node<Type,details::add_op<Type> > >(v0, v1));
                                        exprtk_debug(("(-v0 + -v1) --> -(v0 + v1)\n"));
                                        break;

                  // (-v0 - -v1) --> (v1 - v0)
                  case details::e_sub : result = node_allocator_->
                                                    allocate_rr<typename details::
                                                       vov_node<Type,details::sub_op<Type> > >(v1, v0);
                                        exprtk_debug(("(-v0 - -v1) --> (v1 - v0)\n"));
                                        break;

                  // (-v0 * -v1) --> (v0 * v1)
                  case details::e_mul : result = node_allocator_->
                                                    allocate_rr<typename details::
                                                       vov_node<Type,details::mul_op<Type> > >(v0, v1);
                                        exprtk_debug(("(-v0 * -v1) --> (v0 * v1)\n"));
                                        break;

                  // (-v0 / -v1) --> (v0 / v1)
                  case details::e_div : result = node_allocator_->
                                                    allocate_rr<typename details::
                                                       vov_node<Type,details::div_op<Type> > >(v0, v1);
                                        exprtk_debug(("(-v0 / -v1) --> (v0 / v1)\n"));
                                        break;

                  default             : break;
               }
            }

            if (0 == result)
            {
               result = node_allocator_->
                            allocate_rrrrr<typename details::uvouv_node<Type> >(v0, v1, u0, u1, f);
            }

            details::free_all_nodes(*node_allocator_,branch);
            return result;
         }

         #undef basic_opr_switch_statements
         #undef extended_opr_switch_statements
         #undef unary_opr_switch_statements

         #ifndef exprtk_disable_string_capabilities

         #define string_opr_switch_statements          \
         case_stmt(details::  e_lt ,details::   lt_op) \
         case_stmt(details:: e_lte ,details::  lte_op) \
         case_stmt(details::  e_gt ,details::   gt_op) \
         case_stmt(details:: e_gte ,details::  gte_op) \
         case_stmt(details::  e_eq ,details::   eq_op) \
         case_stmt(details::  e_ne ,details::   ne_op) \
         case_stmt(details::e_in   ,details::   in_op) \
         case_stmt(details::e_like ,details:: like_op) \
         case_stmt(details::e_ilike,details::ilike_op) \

         template <typename T0, typename T1>
         inline expression_node_ptr synthesize_str_xrox_expression_impl(const details::operator_type& opr,
                                                                        T0 s0, T1 s1,
                                                                        range_t rp0)
         {
            switch (opr)
            {
               #define case_stmt(op0,op1)                                                                       \
               case op0 : return node_allocator_->                                                              \
                             allocate_ttt<typename details::str_xrox_node<Type,T0,T1,range_t,op1<Type> >,T0,T1> \
                                (s0, s1, rp0);                                                                  \

               string_opr_switch_statements
               #undef case_stmt
               default : return error_node();
            }
         }

         template <typename T0, typename T1>
         inline expression_node_ptr synthesize_str_xoxr_expression_impl(const details::operator_type& opr,
                                                                        T0 s0, T1 s1,
                                                                        range_t rp1)
         {
            switch (opr)
            {
               #define case_stmt(op0,op1)                                                                       \
               case op0 : return node_allocator_->                                                              \
                             allocate_ttt<typename details::str_xoxr_node<Type,T0,T1,range_t,op1<Type> >,T0,T1> \
                                (s0, s1, rp1);                                                                  \

               string_opr_switch_statements
               #undef case_stmt
               default : return error_node();
            }
         }

         template <typename T0, typename T1>
         inline expression_node_ptr synthesize_str_xroxr_expression_impl(const details::operator_type& opr,
                                                                         T0 s0, T1 s1,
                                                                         range_t rp0, range_t rp1)
         {
            switch (opr)
            {
               #define case_stmt(op0,op1)                                                                         \
               case op0 : return node_allocator_->                                                                \
                             allocate_tttt<typename details::str_xroxr_node<Type,T0,T1,range_t,op1<Type> >,T0,T1> \
                                (s0, s1, rp0, rp1);                                                               \

               string_opr_switch_statements
               #undef case_stmt
               default : return error_node();
            }
         }

         template <typename T0, typename T1>
         inline expression_node_ptr synthesize_sos_expression_impl(const details::operator_type& opr, T0 s0, T1 s1)
         {
            switch (opr)
            {
               #define case_stmt(op0,op1)                                                                  \
               case op0 : return node_allocator_->                                                         \
                             allocate_tt<typename details::sos_node<Type,T0,T1,op1<Type> >,T0,T1>(s0, s1); \

               string_opr_switch_statements
               #undef case_stmt
               default : return error_node();
            }
         }

         inline expression_node_ptr synthesize_sos_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])
         {
            std::string& s0 = static_cast<details::stringvar_node<Type>*>(branch[0])->ref();
            std::string& s1 = static_cast<details::stringvar_node<Type>*>(branch[1])->ref();

            return synthesize_sos_expression_impl<std::string&,std::string&>(opr, s0, s1);
         }

         inline expression_node_ptr synthesize_sros_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])
         {
            std::string&  s0 = static_cast<details::string_range_node<Type>*>(branch[0])->ref  ();
            std::string&  s1 = static_cast<details::stringvar_node<Type>*>   (branch[1])->ref  ();
            range_t      rp0 = static_cast<details::string_range_node<Type>*>(branch[0])->range();

            static_cast<details::string_range_node<Type>*>(branch[0])->range_ref().clear();

            free_node(*node_allocator_,branch[0]);

            return synthesize_str_xrox_expression_impl<std::string&,std::string&>(opr, s0, s1, rp0);
         }

         inline expression_node_ptr synthesize_sosr_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])
         {
            std::string&  s0 = static_cast<details::stringvar_node<Type>*>   (branch[0])->ref  ();
            std::string&  s1 = static_cast<details::string_range_node<Type>*>(branch[1])->ref  ();
            range_t      rp1 = static_cast<details::string_range_node<Type>*>(branch[1])->range();

            static_cast<details::string_range_node<Type>*>(branch[1])->range_ref().clear();

            free_node(*node_allocator_,branch[1]);

            return synthesize_str_xoxr_expression_impl<std::string&,std::string&>(opr, s0, s1, rp1);
         }

         inline expression_node_ptr synthesize_socsr_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])
         {
            std::string&  s0 = static_cast<details::stringvar_node<Type>*>         (branch[0])->ref  ();
            std::string   s1 = static_cast<details::const_string_range_node<Type>*>(branch[1])->str  ();
            range_t      rp1 = static_cast<details::const_string_range_node<Type>*>(branch[1])->range();

            static_cast<details::const_string_range_node<Type>*>(branch[1])->range_ref().clear();

            free_node(*node_allocator_,branch[1]);

            return synthesize_str_xoxr_expression_impl<std::string&, const std::string>(opr, s0, s1, rp1);
         }

         inline expression_node_ptr synthesize_srosr_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])
         {
            std::string&  s0 = static_cast<details::string_range_node<Type>*>(branch[0])->ref  ();
            std::string&  s1 = static_cast<details::string_range_node<Type>*>(branch[1])->ref  ();
            range_t      rp0 = static_cast<details::string_range_node<Type>*>(branch[0])->range();
            range_t      rp1 = static_cast<details::string_range_node<Type>*>(branch[1])->range();

            static_cast<details::string_range_node<Type>*>(branch[0])->range_ref().clear();
            static_cast<details::string_range_node<Type>*>(branch[1])->range_ref().clear();

            details::free_node(*node_allocator_,branch[0]);
            details::free_node(*node_allocator_,branch[1]);

            return synthesize_str_xroxr_expression_impl<std::string&,std::string&>(opr, s0, s1, rp0, rp1);
         }

         inline expression_node_ptr synthesize_socs_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])
         {
            std::string& s0 = static_cast<     details::stringvar_node<Type>*>(branch[0])->ref();
            std::string  s1 = static_cast<details::string_literal_node<Type>*>(branch[1])->str();

            details::free_node(*node_allocator_,branch[1]);

            return synthesize_sos_expression_impl<std::string&, const std::string>(opr, s0, s1);
         }

         inline expression_node_ptr synthesize_csos_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])
         {
            std::string  s0 = static_cast<details::string_literal_node<Type>*>(branch[0])->str();
            std::string& s1 = static_cast<     details::stringvar_node<Type>*>(branch[1])->ref();

            details::free_node(*node_allocator_,branch[0]);

            return synthesize_sos_expression_impl<const std::string,std::string&>(opr, s0, s1);
         }

         inline expression_node_ptr synthesize_csosr_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])
         {
            std::string   s0 = static_cast<details::string_literal_node<Type>*>(branch[0])->str  ();
            std::string&  s1 = static_cast<details::string_range_node<Type>*>  (branch[1])->ref  ();
            range_t      rp1 = static_cast<details::string_range_node<Type>*>  (branch[1])->range();

            static_cast<details::string_range_node<Type>*>(branch[1])->range_ref().clear();

            details::free_node(*node_allocator_,branch[0]);
            details::free_node(*node_allocator_,branch[1]);

            return synthesize_str_xoxr_expression_impl<const std::string,std::string&>(opr, s0, s1, rp1);
         }

         inline expression_node_ptr synthesize_srocs_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])
         {
            std::string&  s0 = static_cast<details::string_range_node<Type>*>  (branch[0])->ref  ();
            std::string   s1 = static_cast<details::string_literal_node<Type>*>(branch[1])->str  ();
            range_t      rp0 = static_cast<details::string_range_node<Type>*>  (branch[0])->range();

            static_cast<details::string_range_node<Type>*>(branch[0])->range_ref().clear();

            details::free_node(*node_allocator_,branch[0]);
            details::free_node(*node_allocator_,branch[1]);

            return synthesize_str_xrox_expression_impl<std::string&, const std::string>(opr, s0, s1, rp0);
         }

         inline expression_node_ptr synthesize_srocsr_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])
         {
            std::string&  s0 = static_cast<details::string_range_node<Type>*>      (branch[0])->ref  ();
            std::string   s1 = static_cast<details::const_string_range_node<Type>*>(branch[1])->str  ();
            range_t      rp0 = static_cast<details::string_range_node<Type>*>      (branch[0])->range();
            range_t      rp1 = static_cast<details::const_string_range_node<Type>*>(branch[1])->range();

            static_cast<details::string_range_node<Type>*>      (branch[0])->range_ref().clear();
            static_cast<details::const_string_range_node<Type>*>(branch[1])->range_ref().clear();

            details::free_node(*node_allocator_,branch[0]);
            details::free_node(*node_allocator_,branch[1]);

            return synthesize_str_xroxr_expression_impl<std::string&, const std::string>(opr, s0, s1, rp0, rp1);
         }

         inline expression_node_ptr synthesize_csocs_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])
         {
            const std::string s0 = static_cast<details::string_literal_node<Type>*>(branch[0])->str();
            const std::string s1 = static_cast<details::string_literal_node<Type>*>(branch[1])->str();

            expression_node_ptr result = error_node();

            if (details::e_add == opr)
               result = node_allocator_->allocate_c<details::string_literal_node<Type> >(s0 + s1);
            else if (details::e_in == opr)
               result = node_allocator_->allocate_c<details::literal_node<Type> >(details::in_op   <Type>::process(s0,s1));
            else if (details::e_like == opr)
               result = node_allocator_->allocate_c<details::literal_node<Type> >(details::like_op <Type>::process(s0,s1));
            else if (details::e_ilike == opr)
               result = node_allocator_->allocate_c<details::literal_node<Type> >(details::ilike_op<Type>::process(s0,s1));
            else
            {
               expression_node_ptr temp = synthesize_sos_expression_impl<const std::string, const std::string>(opr, s0, s1);

               const Type v = temp->value();

               details::free_node(*node_allocator_,temp);

               result = node_allocator_->allocate<literal_node_t>(v);
            }

            details::free_all_nodes(*node_allocator_,branch);

            return result;
         }

         inline expression_node_ptr synthesize_csocsr_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])
         {
            const std::string s0 = static_cast<details::string_literal_node<Type>*>    (branch[0])->str  ();
                  std::string s1 = static_cast<details::const_string_range_node<Type>*>(branch[1])->str  ();
            range_t          rp1 = static_cast<details::const_string_range_node<Type>*>(branch[1])->range();

            static_cast<details::const_string_range_node<Type>*>(branch[1])->range_ref().clear();

            free_node(*node_allocator_,branch[0]);
            free_node(*node_allocator_,branch[1]);

            return synthesize_str_xoxr_expression_impl<const std::string, const std::string>(opr, s0, s1, rp1);
         }

         inline expression_node_ptr synthesize_csros_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])
         {
            std::string   s0 = static_cast<details::const_string_range_node<Type>*>(branch[0])->str  ();
            std::string&  s1 = static_cast<details::stringvar_node<Type>*>         (branch[1])->ref  ();
            range_t      rp0 = static_cast<details::const_string_range_node<Type>*>(branch[0])->range();

            static_cast<details::const_string_range_node<Type>*>(branch[0])->range_ref().clear();

            free_node(*node_allocator_,branch[0]);

            return synthesize_str_xrox_expression_impl<const std::string,std::string&>(opr, s0, s1, rp0);
         }

         inline expression_node_ptr synthesize_csrosr_expression(const details::operator_type& opr, expression_node_ptr 
(&branch)[2])\n         {\n            const std::string  s0 = static_cast<details::const_string_range_node<Type>*>(branch[0])->str  ();\n                  std::string& s1 = static_cast<details::string_range_node<Type>*>      (branch[1])->ref  ();\n            range_t           rp0 = static_cast<details::const_string_range_node<Type>*>(branch[0])->range();\n            range_t           rp1 = static_cast<details::string_range_node<Type>*>      (branch[1])->range();\n\n            static_cast<details::const_string_range_node<Type>*>(branch[0])->range_ref().clear();\n            static_cast<details::string_range_node<Type>*>      (branch[1])->range_ref().clear();\n\n            free_node(*node_allocator_,branch[0]);\n            free_node(*node_allocator_,branch[1]);\n\n            return synthesize_str_xroxr_expression_impl<const std::string,std::string&>(opr, s0, s1, rp0, rp1);\n         }\n\n         inline expression_node_ptr synthesize_csrocs_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])\n         {\n            std::string       s0 = static_cast<details::const_string_range_node<Type>*>(branch[0])->str  ();\n            const std::string s1 = static_cast<details::string_literal_node<Type>*>    (branch[1])->str  ();\n            range_t          rp0 = static_cast<details::const_string_range_node<Type>*>(branch[0])->range();\n\n            static_cast<details::const_string_range_node<Type>*>(branch[0])->range_ref().clear();\n\n            details::free_all_nodes(*node_allocator_,branch);\n\n            return synthesize_str_xrox_expression_impl<const std::string,std::string>(opr, s0, s1, rp0);\n         }\n\n         inline expression_node_ptr synthesize_csrocsr_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])\n         {\n            std::string   s0 = static_cast<details::const_string_range_node<Type>*>(branch[0])->str  ();\n            std::string   s1 = 
static_cast<details::const_string_range_node<Type>*>(branch[1])->str  ();\n            range_t      rp0 = static_cast<details::const_string_range_node<Type>*>(branch[0])->range();\n            range_t      rp1 = static_cast<details::const_string_range_node<Type>*>(branch[1])->range();\n\n            static_cast<details::const_string_range_node<Type>*>(branch[0])->range_ref().clear();\n            static_cast<details::const_string_range_node<Type>*>(branch[1])->range_ref().clear();\n\n            details::free_all_nodes(*node_allocator_,branch);\n\n            return synthesize_str_xroxr_expression_impl<const std::string, const std::string>(opr, s0, s1, rp0, rp1);\n         }\n\n         inline expression_node_ptr synthesize_strogen_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])\n         {\n            switch (opr)\n            {\n               #define case_stmt(op0,op1)                                                       \\\n               case op0 : return node_allocator_->                                              \\\n                             allocate_ttt<typename details::str_sogens_node<Type,op1<Type> > >  \\\n                                (opr, branch[0], branch[1]);                                    \\\n\n               string_opr_switch_statements\n               #undef case_stmt\n               default : return error_node();\n            }\n         }\n         #endif\n\n         #ifndef exprtk_disable_string_capabilities\n         inline expression_node_ptr synthesize_string_expression(const details::operator_type& opr, expression_node_ptr (&branch)[2])\n         {\n            if ((0 == branch[0]) || (0 == branch[1]))\n            {\n               details::free_all_nodes(*node_allocator_,branch);\n\n               return error_node();\n            }\n\n            const bool b0_is_s   = details::is_string_node            (branch[0]);\n            const bool b0_is_cs  = details::is_const_string_node      
(branch[0]);\n            const bool b0_is_sr  = details::is_string_range_node      (branch[0]);\n            const bool b0_is_csr = details::is_const_string_range_node(branch[0]);\n\n            const bool b1_is_s   = details::is_string_node            (branch[1]);\n            const bool b1_is_cs  = details::is_const_string_node      (branch[1]);\n            const bool b1_is_sr  = details::is_string_range_node      (branch[1]);\n            const bool b1_is_csr = details::is_const_string_range_node(branch[1]);\n\n            const bool b0_is_gen = details::is_string_assignment_node (branch[0]) ||\n                                   details::is_genricstring_range_node(branch[0]) ||\n                                   details::is_string_concat_node     (branch[0]) ||\n                                   details::is_string_function_node   (branch[0]) ||\n                                   details::is_string_condition_node  (branch[0]) ||\n                                   details::is_string_ccondition_node (branch[0]) ||\n                                   details::is_string_vararg_node     (branch[0]) ;\n\n            const bool b1_is_gen = details::is_string_assignment_node (branch[1]) ||\n                                   details::is_genricstring_range_node(branch[1]) ||\n                                   details::is_string_concat_node     (branch[1]) ||\n                                   details::is_string_function_node   (branch[1]) ||\n                                   details::is_string_condition_node  (branch[1]) ||\n                                   details::is_string_ccondition_node (branch[1]) ||\n                                   details::is_string_vararg_node     (branch[1]) ;\n\n            if (details::e_add == opr)\n            {\n               if (!b0_is_cs || !b1_is_cs)\n               {\n                  return synthesize_expression<string_concat_node_t,2>(opr,branch);\n               }\n            }\n\n            if (b0_is_gen || 
b1_is_gen)\n            {\n               return synthesize_strogen_expression(opr,branch);\n            }\n            else if (b0_is_s)\n            {\n                    if (b1_is_s  ) return synthesize_sos_expression   (opr,branch);\n               else if (b1_is_cs ) return synthesize_socs_expression  (opr,branch);\n               else if (b1_is_sr ) return synthesize_sosr_expression  (opr,branch);\n               else if (b1_is_csr) return synthesize_socsr_expression (opr,branch);\n            }\n            else if (b0_is_cs)\n            {\n                    if (b1_is_s  ) return synthesize_csos_expression  (opr,branch);\n               else if (b1_is_cs ) return synthesize_csocs_expression (opr,branch);\n               else if (b1_is_sr ) return synthesize_csosr_expression (opr,branch);\n               else if (b1_is_csr) return synthesize_csocsr_expression(opr,branch);\n            }\n            else if (b0_is_sr)\n            {\n                    if (b1_is_s  ) return synthesize_sros_expression  (opr,branch);\n               else if (b1_is_sr ) return synthesize_srosr_expression (opr,branch);\n               else if (b1_is_cs ) return synthesize_srocs_expression (opr,branch);\n               else if (b1_is_csr) return synthesize_srocsr_expression(opr,branch);\n            }\n            else if (b0_is_csr)\n            {\n                    if (b1_is_s  ) return synthesize_csros_expression  (opr,branch);\n               else if (b1_is_sr ) return synthesize_csrosr_expression (opr,branch);\n               else if (b1_is_cs ) return synthesize_csrocs_expression (opr,branch);\n               else if (b1_is_csr) return synthesize_csrocsr_expression(opr,branch);\n            }\n\n            return error_node();\n         }\n         #else\n         inline expression_node_ptr synthesize_string_expression(const details::operator_type&, expression_node_ptr (&branch)[2])\n         {\n            details::free_all_nodes(*node_allocator_,branch);\n          
  return error_node();\n         }\n         #endif\n\n         #ifndef exprtk_disable_string_capabilities\n         inline expression_node_ptr synthesize_string_expression(const details::operator_type& opr, expression_node_ptr (&branch)[3])\n         {\n            if (details::e_inrange != opr)\n               return error_node();\n            else if ((0 == branch[0]) || (0 == branch[1]) || (0 == branch[2]))\n            {\n               details::free_all_nodes(*node_allocator_,branch);\n\n               return error_node();\n            }\n            else if (\n                      details::is_const_string_node(branch[0]) &&\n                      details::is_const_string_node(branch[1]) &&\n                      details::is_const_string_node(branch[2])\n                    )\n            {\n               const std::string s0 = static_cast<details::string_literal_node<Type>*>(branch[0])->str();\n               const std::string s1 = static_cast<details::string_literal_node<Type>*>(branch[1])->str();\n               const std::string s2 = static_cast<details::string_literal_node<Type>*>(branch[2])->str();\n\n               const Type v = (((s0 <= s1) && (s1 <= s2)) ? 
Type(1) : Type(0));\n\n               details::free_all_nodes(*node_allocator_,branch);\n\n               return node_allocator_->allocate_c<details::literal_node<Type> >(v);\n            }\n            else if (\n                      details::is_string_node(branch[0]) &&\n                      details::is_string_node(branch[1]) &&\n                      details::is_string_node(branch[2])\n                    )\n            {\n               std::string& s0 = static_cast<details::stringvar_node<Type>*>(branch[0])->ref();\n               std::string& s1 = static_cast<details::stringvar_node<Type>*>(branch[1])->ref();\n               std::string& s2 = static_cast<details::stringvar_node<Type>*>(branch[2])->ref();\n\n               typedef typename details::sosos_node<Type,std::string&,std::string&,std::string&,details::inrange_op<Type> > inrange_t;\n\n               return node_allocator_->allocate_type<inrange_t,std::string&,std::string&,std::string&>(s0,s1,s2);\n            }\n            else if (\n                      details::is_const_string_node(branch[0]) &&\n                            details::is_string_node(branch[1]) &&\n                      details::is_const_string_node(branch[2])\n                    )\n            {\n               std::string  s0 = static_cast<details::string_literal_node<Type>*>(branch[0])->str();\n               std::string& s1 = static_cast<     details::stringvar_node<Type>*>(branch[1])->ref();\n               std::string  s2 = static_cast<details::string_literal_node<Type>*>(branch[2])->str();\n\n               typedef typename details::sosos_node<Type,std::string,std::string&,std::string,details::inrange_op<Type> > inrange_t;\n\n               details::free_node(*node_allocator_,branch[0]);\n               details::free_node(*node_allocator_,branch[2]);\n\n               return node_allocator_->allocate_type<inrange_t,std::string,std::string&,std::string>(s0,s1,s2);\n            }\n            else if (\n                       
     details::is_string_node(branch[0]) &&\n                      details::is_const_string_node(branch[1]) &&\n                            details::is_string_node(branch[2])\n                    )\n            {\n               std::string&  s0 = static_cast<     details::stringvar_node<Type>*>(branch[0])->ref();\n               std::string   s1 = static_cast<details::string_literal_node<Type>*>(branch[1])->str();\n               std::string&  s2 = static_cast<     details::stringvar_node<Type>*>(branch[2])->ref();\n\n               typedef typename details::sosos_node<Type,std::string&,std::string,std::string&,details::inrange_op<Type> > inrange_t;\n\n               details::free_node(*node_allocator_,branch[1]);\n\n               return node_allocator_->allocate_type<inrange_t,std::string&,std::string,std::string&>(s0,s1,s2);\n            }\n            else if (\n                      details::is_string_node(branch[0]) &&\n                      details::is_string_node(branch[1]) &&\n                      details::is_const_string_node(branch[2])\n                    )\n            {\n               std::string& s0 = static_cast<     details::stringvar_node<Type>*>(branch[0])->ref();\n               std::string& s1 = static_cast<     details::stringvar_node<Type>*>(branch[1])->ref();\n               std::string  s2 = static_cast<details::string_literal_node<Type>*>(branch[2])->str();\n\n               typedef typename details::sosos_node<Type,std::string&,std::string&,std::string,details::inrange_op<Type> > inrange_t;\n\n               details::free_node(*node_allocator_,branch[2]);\n\n               return node_allocator_->allocate_type<inrange_t,std::string&,std::string&,std::string>(s0,s1,s2);\n            }\n            else if (\n                      details::is_const_string_node(branch[0]) &&\n                      details::      is_string_node(branch[1]) &&\n                      details::      is_string_node(branch[2])\n                    )\n            
{\n               std::string  s0 = static_cast<details::string_literal_node<Type>*>(branch[0])->str();\n               std::string& s1 = static_cast<     details::stringvar_node<Type>*>(branch[1])->ref();\n               std::string& s2 = static_cast<     details::stringvar_node<Type>*>(branch[2])->ref();\n\n               typedef typename details::sosos_node<Type,std::string,std::string&,std::string&,details::inrange_op<Type> > inrange_t;\n\n               details::free_node(*node_allocator_,branch[0]);\n\n               return node_allocator_->allocate_type<inrange_t,std::string,std::string&,std::string&>(s0,s1,s2);\n            }\n            else\n               return error_node();\n         }\n         #else\n         inline expression_node_ptr synthesize_string_expression(const details::operator_type&, expression_node_ptr (&branch)[3])\n         {\n            details::free_all_nodes(*node_allocator_,branch);\n            return error_node();\n         }\n         #endif\n\n         inline expression_node_ptr synthesize_null_expression(const details::operator_type& operation, expression_node_ptr (&branch)[2])\n         {\n            /*\n             Note: The following are the type promotion rules\n             that relate to operations that include 'null':\n             0. null ==/!=     null --> true/false\n             1. null operation null --> null\n             2. x    ==/!=     null --> true/false\n             3. null ==/!=     x    --> true/false\n             4. x   operation  null --> x\n             5. 
null operation x    --> x\n            */\n\n            typedef typename details::null_eq_node<T> nulleq_node_t;\n\n            bool b0_null = details::is_null_node(branch[0]);\n            bool b1_null = details::is_null_node(branch[1]);\n\n            if (b0_null && b1_null)\n            {\n               expression_node_ptr result = error_node();\n\n               if (details::e_eq == operation)\n                  result = node_allocator_->allocate_c<literal_node_t>(T(1));\n               else if (details::e_ne == operation)\n                  result = node_allocator_->allocate_c<literal_node_t>(T(0));\n\n               if (result)\n               {\n                  details::free_node(*node_allocator_,branch[0]);\n                  details::free_node(*node_allocator_,branch[1]);\n\n                  return result;\n               }\n\n               details::free_node(*node_allocator_,branch[1]);\n\n               return branch[0];\n            }\n            else if (details::e_eq == operation)\n            {\n               expression_node_ptr result = node_allocator_->\n                                                allocate_rc<nulleq_node_t>(branch[b0_null ? 0 : 1],true);\n\n               details::free_node(*node_allocator_,branch[b0_null ? 1 : 0]);\n\n               return result;\n            }\n            else if (details::e_ne == operation)\n            {\n               expression_node_ptr result = node_allocator_->\n                                                allocate_rc<nulleq_node_t>(branch[b0_null ? 0 : 1],false);\n\n               details::free_node(*node_allocator_,branch[b0_null ? 
1 : 0]);\n\n               return result;\n            }\n            else if (b0_null)\n            {\n               details::free_node(*node_allocator_,branch[0]);\n               branch[0] = branch[1];\n               branch[1] = error_node();\n            }\n            else if (b1_null)\n            {\n               details::free_node(*node_allocator_,branch[1]);\n               branch[1] = error_node();\n            }\n\n            if (\n                 (details::e_add == operation) || (details::e_sub == operation) ||\n                 (details::e_mul == operation) || (details::e_div == operation) ||\n                 (details::e_mod == operation) || (details::e_pow == operation)\n               )\n            {\n               return branch[0];\n            }\n            else if (\n                      (details::e_lt    == operation) || (details::e_lte  == operation) ||\n                      (details::e_gt    == operation) || (details::e_gte  == operation) ||\n                      (details::e_and   == operation) || (details::e_nand == operation) ||\n                      (details::e_or    == operation) || (details::e_nor  == operation) ||\n                      (details::e_xor   == operation) || (details::e_xnor == operation) ||\n                      (details::e_in    == operation) || (details::e_like == operation) ||\n                      (details::e_ilike == operation)\n                    )\n            {\n               return node_allocator_->allocate_c<literal_node_t>(T(0));\n            }\n\n            details::free_node(*node_allocator_,branch[0]);\n\n            return node_allocator_->allocate<details::null_node<Type> >();\n         }\n\n         template <typename NodeType, std::size_t N>\n         inline expression_node_ptr synthesize_expression(const details::operator_type& operation, expression_node_ptr (&branch)[N])\n         {\n            if (\n                 (details::e_in    == operation) ||\n                 (details::e_like  
== operation) ||\n                 (details::e_ilike == operation)\n               )\n            {\n               free_all_nodes(*node_allocator_,branch);\n\n               return error_node();\n            }\n            else if (!details::all_nodes_valid<N>(branch))\n            {\n               free_all_nodes(*node_allocator_,branch);\n\n               return error_node();\n            }\n            else if ((details::e_default != operation))\n            {\n               // Attempt simple constant folding optimisation.\n               expression_node_ptr expression_point = node_allocator_->allocate<NodeType>(operation,branch);\n\n               if (is_constant_foldable<N>(branch))\n               {\n                  Type v = expression_point->value();\n                  details::free_node(*node_allocator_,expression_point);\n\n                  return node_allocator_->allocate<literal_node_t>(v);\n               }\n               else\n                  return expression_point;\n            }\n            else\n               return error_node();\n         }\n\n         template <typename NodeType, std::size_t N>\n         inline expression_node_ptr synthesize_expression(F* f, expression_node_ptr (&branch)[N])\n         {\n            if (!details::all_nodes_valid<N>(branch))\n            {\n               free_all_nodes(*node_allocator_,branch);\n\n               return error_node();\n            }\n\n            typedef typename details::function_N_node<T,ifunction_t,N> function_N_node_t;\n\n            // Attempt simple constant folding optimisation.\n\n            expression_node_ptr expression_point = node_allocator_->allocate<NodeType>(f);\n            function_N_node_t* func_node_ptr = dynamic_cast<function_N_node_t*>(expression_point);\n\n            if (0 == func_node_ptr)\n            {\n               free_all_nodes(*node_allocator_,branch);\n\n               return error_node();\n            }\n            else\n               
func_node_ptr->init_branches(branch);\n\n            if (is_constant_foldable<N>(branch) && !f->has_side_effects())\n            {\n               Type v = expression_point->value();\n               details::free_node(*node_allocator_,expression_point);\n\n               return node_allocator_->allocate<literal_node_t>(v);\n            }\n\n            parser_->state_.activate_side_effect(\"synthesize_expression(function<NT,N>)\");\n\n            return expression_point;\n         }\n\n         bool                     strength_reduction_enabled_;\n         details::node_allocator* node_allocator_;\n         synthesize_map_t         synthesize_map_;\n         unary_op_map_t*          unary_op_map_;\n         binary_op_map_t*         binary_op_map_;\n         inv_binary_op_map_t*     inv_binary_op_map_;\n         sf3_map_t*               sf3_map_;\n         sf4_map_t*               sf4_map_;\n         parser_t*                parser_;\n      };\n\n      inline void set_error(const parser_error::type& error_type)\n      {\n         error_list_.push_back(error_type);\n      }\n\n      inline void remove_last_error()\n      {\n         if (!error_list_.empty())\n         {\n            error_list_.pop_back();\n         }\n      }\n\n      inline void set_synthesis_error(const std::string& synthesis_error_message)\n      {\n         if (synthesis_error_.empty())\n         {\n            synthesis_error_ = synthesis_error_message;\n         }\n      }\n\n      inline void register_local_vars(expression<T>& e)\n      {\n         for (std::size_t i = 0; i < sem_.size(); ++i)\n         {\n            scope_element& se = sem_.get_element(i);\n\n            if (\n                 (scope_element::e_variable == se.type) ||\n                 (scope_element::e_vecelem  == se.type)\n               )\n            {\n               if (se.var_node)\n               {\n                  e.register_local_var(se.var_node);\n               }\n\n               if (se.data)\n               
{\n                  e.register_local_data(se.data, 1, 0);\n               }\n            }\n            else if (scope_element::e_vector == se.type)\n            {\n               if (se.vec_node)\n               {\n                  e.register_local_var(se.vec_node);\n               }\n\n               if (se.data)\n               {\n                  e.register_local_data(se.data, se.size, 1);\n               }\n            }\n            #ifndef exprtk_disable_string_capabilities\n            else if (scope_element::e_string == se.type)\n            {\n               if (se.str_node)\n               {\n                  e.register_local_var(se.str_node);\n               }\n\n               if (se.data)\n               {\n                  e.register_local_data(se.data, se.size, 2);\n               }\n            }\n            #endif\n\n            se.var_node  = 0;\n            se.vec_node  = 0;\n            #ifndef exprtk_disable_string_capabilities\n            se.str_node  = 0;\n            #endif\n            se.data      = 0;\n            se.ref_count = 0;\n            se.active    = false;\n         }\n      }\n\n      inline void register_return_results(expression<T>& e)\n      {\n         e.register_return_results(results_context_);\n         results_context_ = 0;\n      }\n\n      inline void load_unary_operations_map(unary_op_map_t& m)\n      {\n         #define register_unary_op(Op,UnaryFunctor)             \\\n         m.insert(std::make_pair(Op,UnaryFunctor<T>::process)); \\\n\n         register_unary_op(details::  e_abs, details::  abs_op)\n         register_unary_op(details:: e_acos, details:: acos_op)\n         register_unary_op(details::e_acosh, details::acosh_op)\n         register_unary_op(details:: e_asin, details:: asin_op)\n         register_unary_op(details::e_asinh, details::asinh_op)\n         register_unary_op(details::e_atanh, details::atanh_op)\n         register_unary_op(details:: e_ceil, details:: ceil_op)\n         
register_unary_op(details::  e_cos, details::  cos_op)\n         register_unary_op(details:: e_cosh, details:: cosh_op)\n         register_unary_op(details::  e_exp, details::  exp_op)\n         register_unary_op(details::e_expm1, details::expm1_op)\n         register_unary_op(details::e_floor, details::floor_op)\n         register_unary_op(details::  e_log, details::  log_op)\n         register_unary_op(details::e_log10, details::log10_op)\n         register_unary_op(details:: e_log2, details:: log2_op)\n         register_unary_op(details::e_log1p, details::log1p_op)\n         register_unary_op(details::  e_neg, details::  neg_op)\n         register_unary_op(details::  e_pos, details::  pos_op)\n         register_unary_op(details::e_round, details::round_op)\n         register_unary_op(details::  e_sin, details::  sin_op)\n         register_unary_op(details:: e_sinc, details:: sinc_op)\n         register_unary_op(details:: e_sinh, details:: sinh_op)\n         register_unary_op(details:: e_sqrt, details:: sqrt_op)\n         register_unary_op(details::  e_tan, details::  tan_op)\n         register_unary_op(details:: e_tanh, details:: tanh_op)\n         register_unary_op(details::  e_cot, details::  cot_op)\n         register_unary_op(details::  e_sec, details::  sec_op)\n         register_unary_op(details::  e_csc, details::  csc_op)\n         register_unary_op(details::  e_r2d, details::  r2d_op)\n         register_unary_op(details::  e_d2r, details::  d2r_op)\n         register_unary_op(details::  e_d2g, details::  d2g_op)\n         register_unary_op(details::  e_g2d, details::  g2d_op)\n         register_unary_op(details:: e_notl, details:: notl_op)\n         register_unary_op(details::  e_sgn, details::  sgn_op)\n         register_unary_op(details::  e_erf, details::  erf_op)\n         register_unary_op(details:: e_erfc, details:: erfc_op)\n         register_unary_op(details:: e_ncdf, details:: ncdf_op)\n         register_unary_op(details:: e_frac, details:: 
frac_op)\n         register_unary_op(details::e_trunc, details::trunc_op)\n         #undef register_unary_op\n      }\n\n      inline void load_binary_operations_map(binary_op_map_t& m)\n      {\n         typedef typename binary_op_map_t::value_type value_type;\n\n         #define register_binary_op(Op,BinaryFunctor)        \\\n         m.insert(value_type(Op,BinaryFunctor<T>::process)); \\\n\n         register_binary_op(details:: e_add, details:: add_op)\n         register_binary_op(details:: e_sub, details:: sub_op)\n         register_binary_op(details:: e_mul, details:: mul_op)\n         register_binary_op(details:: e_div, details:: div_op)\n         register_binary_op(details:: e_mod, details:: mod_op)\n         register_binary_op(details:: e_pow, details:: pow_op)\n         register_binary_op(details::  e_lt, details::  lt_op)\n         register_binary_op(details:: e_lte, details:: lte_op)\n         register_binary_op(details::  e_gt, details::  gt_op)\n         register_binary_op(details:: e_gte, details:: gte_op)\n         register_binary_op(details::  e_eq, details::  eq_op)\n         register_binary_op(details::  e_ne, details::  ne_op)\n         register_binary_op(details:: e_and, details:: and_op)\n         register_binary_op(details::e_nand, details::nand_op)\n         register_binary_op(details::  e_or, details::  or_op)\n         register_binary_op(details:: e_nor, details:: nor_op)\n         register_binary_op(details:: e_xor, details:: xor_op)\n         register_binary_op(details::e_xnor, details::xnor_op)\n         #undef register_binary_op\n      }\n\n      inline void load_inv_binary_operations_map(inv_binary_op_map_t& m)\n      {\n         typedef typename inv_binary_op_map_t::value_type value_type;\n\n         #define register_binary_op(Op,BinaryFunctor)        \\\n         m.insert(value_type(BinaryFunctor<T>::process,Op)); \\\n\n         register_binary_op(details:: e_add, details:: add_op)\n         register_binary_op(details:: e_sub, 
details:: sub_op)\n         register_binary_op(details:: e_mul, details:: mul_op)\n         register_binary_op(details:: e_div, details:: div_op)\n         register_binary_op(details:: e_mod, details:: mod_op)\n         register_binary_op(details:: e_pow, details:: pow_op)\n         register_binary_op(details::  e_lt, details::  lt_op)\n         register_binary_op(details:: e_lte, details:: lte_op)\n         register_binary_op(details::  e_gt, details::  gt_op)\n         register_binary_op(details:: e_gte, details:: gte_op)\n         register_binary_op(details::  e_eq, details::  eq_op)\n         register_binary_op(details::  e_ne, details::  ne_op)\n         register_binary_op(details:: e_and, details:: and_op)\n         register_binary_op(details::e_nand, details::nand_op)\n         register_binary_op(details::  e_or, details::  or_op)\n         register_binary_op(details:: e_nor, details:: nor_op)\n         register_binary_op(details:: e_xor, details:: xor_op)\n         register_binary_op(details::e_xnor, details::xnor_op)\n         #undef register_binary_op\n      }\n\n      inline void load_sf3_map(sf3_map_t& sf3_map)\n      {\n         typedef std::pair<trinary_functor_t,details::operator_type> pair_t;\n\n         #define register_sf3(Op)                                                                             \\\n         sf3_map[details::sf##Op##_op<T>::id()] = pair_t(details::sf##Op##_op<T>::process,details::e_sf##Op); \\\n\n         register_sf3(00) register_sf3(01) register_sf3(02) register_sf3(03)\n         register_sf3(04) register_sf3(05) register_sf3(06) register_sf3(07)\n         register_sf3(08) register_sf3(09) register_sf3(10) register_sf3(11)\n         register_sf3(12) register_sf3(13) register_sf3(14) register_sf3(15)\n         register_sf3(16) register_sf3(17) register_sf3(18) register_sf3(19)\n         register_sf3(20) register_sf3(21) register_sf3(22) register_sf3(23)\n         register_sf3(24) register_sf3(25) register_sf3(26) 
register_sf3(27)\n         register_sf3(28) register_sf3(29) register_sf3(30)\n         #undef register_sf3\n\n         #define register_sf3_extid(Id, Op)                                        \\\n         sf3_map[Id] = pair_t(details::sf##Op##_op<T>::process,details::e_sf##Op); \\\n\n         register_sf3_extid(\"(t-t)-t\",23)  // (t-t)-t --> t-(t+t)\n         #undef register_sf3_extid\n      }\n\n      inline void load_sf4_map(sf4_map_t& sf4_map)\n      {\n         typedef std::pair<quaternary_functor_t,details::operator_type> pair_t;\n\n         #define register_sf4(Op)                                                                             \\\n         sf4_map[details::sf##Op##_op<T>::id()] = pair_t(details::sf##Op##_op<T>::process,details::e_sf##Op); \\\n\n         register_sf4(48) register_sf4(49) register_sf4(50) register_sf4(51)\n         register_sf4(52) register_sf4(53) register_sf4(54) register_sf4(55)\n         register_sf4(56) register_sf4(57) register_sf4(58) register_sf4(59)\n         register_sf4(60) register_sf4(61) register_sf4(62) register_sf4(63)\n         register_sf4(64) register_sf4(65) register_sf4(66) register_sf4(67)\n         register_sf4(68) register_sf4(69) register_sf4(70) register_sf4(71)\n         register_sf4(72) register_sf4(73) register_sf4(74) register_sf4(75)\n         register_sf4(76) register_sf4(77) register_sf4(78) register_sf4(79)\n         register_sf4(80) register_sf4(81) register_sf4(82) register_sf4(83)\n         #undef register_sf4\n\n         #define register_sf4ext(Op)                                                                                    \\\n         sf4_map[details::sfext##Op##_op<T>::id()] = pair_t(details::sfext##Op##_op<T>::process,details::e_sf4ext##Op); \\\n\n         register_sf4ext(00) register_sf4ext(01) register_sf4ext(02) register_sf4ext(03)\n         register_sf4ext(04) register_sf4ext(05) register_sf4ext(06) register_sf4ext(07)\n         register_sf4ext(08) register_sf4ext(09) 
register_sf4ext(10) register_sf4ext(11)\n         register_sf4ext(12) register_sf4ext(13) register_sf4ext(14) register_sf4ext(15)\n         register_sf4ext(16) register_sf4ext(17) register_sf4ext(18) register_sf4ext(19)\n         register_sf4ext(20) register_sf4ext(21) register_sf4ext(22) register_sf4ext(23)\n         register_sf4ext(24) register_sf4ext(25) register_sf4ext(26) register_sf4ext(27)\n         register_sf4ext(28) register_sf4ext(29) register_sf4ext(30) register_sf4ext(31)\n         register_sf4ext(32) register_sf4ext(33) register_sf4ext(34) register_sf4ext(35)\n         register_sf4ext(36) register_sf4ext(37) register_sf4ext(38) register_sf4ext(39)\n         register_sf4ext(40) register_sf4ext(41) register_sf4ext(42) register_sf4ext(43)\n         register_sf4ext(44) register_sf4ext(45) register_sf4ext(46) register_sf4ext(47)\n         register_sf4ext(48) register_sf4ext(49) register_sf4ext(50) register_sf4ext(51)\n         register_sf4ext(52) register_sf4ext(53) register_sf4ext(54) register_sf4ext(55)\n         register_sf4ext(56) register_sf4ext(57) register_sf4ext(58) register_sf4ext(59)\n         register_sf4ext(60) register_sf4ext(61)\n         #undef register_sf4ext\n      }\n\n      inline results_context_t& results_ctx()\n      {\n         if (0 == results_context_)\n         {\n            results_context_ = new results_context_t();\n         }\n\n         return (*results_context_);\n      }\n\n      inline void return_cleanup()\n      {\n         #ifndef exprtk_disable_return_statement\n         if (results_context_)\n         {\n            delete results_context_;\n            results_context_ = 0;\n         }\n\n         state_.return_stmt_present = false;\n         #endif\n      }\n\n   private:\n\n      parser(const parser<T>&);\n      parser<T>& operator=(const parser<T>&);\n\n      settings_store settings_;\n      expression_generator<T> expression_generator_;\n      details::node_allocator node_allocator_;\n      symtab_store 
symtab_store_;\n      dependent_entity_collector dec_;\n      std::deque<parser_error::type> error_list_;\n      std::deque<bool> brkcnt_list_;\n      parser_state state_;\n      bool resolve_unknown_symbol_;\n      results_context_t* results_context_;\n      unknown_symbol_resolver* unknown_symbol_resolver_;\n      unknown_symbol_resolver default_usr_;\n      base_ops_map_t base_ops_map_;\n      unary_op_map_t unary_op_map_;\n      binary_op_map_t binary_op_map_;\n      inv_binary_op_map_t inv_binary_op_map_;\n      sf3_map_t sf3_map_;\n      sf4_map_t sf4_map_;\n      std::string synthesis_error_;\n      scope_element_manager sem_;\n\n      lexer::helper::helper_assembly helper_assembly_;\n\n      lexer::helper::commutative_inserter commutative_inserter_;\n      lexer::helper::operator_joiner      operator_joiner_2_;\n      lexer::helper::operator_joiner      operator_joiner_3_;\n      lexer::helper::symbol_replacer      symbol_replacer_;\n      lexer::helper::bracket_checker      bracket_checker_;\n      lexer::helper::numeric_checker      numeric_checker_;\n      lexer::helper::sequence_validator   sequence_validator_;\n\n      template <typename ParserType>\n      friend void details::disable_type_checking(ParserType& p);\n   };\n\n   template <typename Allocator,\n             template <typename, typename> class Sequence>\n   inline bool collect_variables(const std::string& expr_str,\n                                 Sequence<std::string, Allocator>& symbol_list)\n   {\n      typedef double T;\n      typedef exprtk::symbol_table<T> symbol_table_t;\n      typedef exprtk::expression<T>     expression_t;\n      typedef exprtk::parser<T>             parser_t;\n      typedef parser_t::dependent_entity_collector::symbol_t symbol_t;\n\n      symbol_table_t symbol_table;\n      expression_t   expression;\n      parser_t       parser;\n\n      expression.register_symbol_table(symbol_table);\n\n      parser.enable_unknown_symbol_resolver();\n      
parser.dec().collect_variables() = true;\n\n      if (!parser.compile(expr_str, expression))\n         return false;\n\n      std::deque<symbol_t> symb_list;\n\n      parser.dec().symbols(symb_list);\n\n      for (std::size_t i = 0; i < symb_list.size(); ++i)\n      {\n         symbol_list.push_back(symb_list[i].first);\n      }\n\n      return true;\n   }\n\n   template <typename T,\n             typename Allocator,\n             template <typename, typename> class Sequence>\n   inline bool collect_variables(const std::string& expr_str,\n                                 exprtk::symbol_table<T>& extrnl_symbol_table,\n                                 Sequence<std::string, Allocator>& symbol_list)\n   {\n      typedef exprtk::symbol_table<T> symbol_table_t;\n      typedef exprtk::expression<T>     expression_t;\n      typedef exprtk::parser<T>             parser_t;\n      typedef typename parser_t::dependent_entity_collector::symbol_t symbol_t;\n\n      symbol_table_t symbol_table;\n      expression_t   expression;\n      parser_t       parser;\n\n      expression.register_symbol_table(symbol_table);\n      expression.register_symbol_table(extrnl_symbol_table);\n\n      parser.enable_unknown_symbol_resolver();\n      parser.dec().collect_variables() = true;\n\n      details::disable_type_checking(parser);\n\n      if (!parser.compile(expr_str, expression))\n         return false;\n\n      std::deque<symbol_t> symb_list;\n\n      parser.dec().symbols(symb_list);\n\n      for (std::size_t i = 0; i < symb_list.size(); ++i)\n      {\n         symbol_list.push_back(symb_list[i].first);\n      }\n\n      return true;\n   }\n\n   template <typename Allocator,\n             template <typename, typename> class Sequence>\n   inline bool collect_functions(const std::string& expr_str,\n                                 Sequence<std::string, Allocator>& symbol_list)\n   {\n      typedef double T;\n      typedef exprtk::symbol_table<T> symbol_table_t;\n      typedef 
exprtk::expression<T>     expression_t;\n      typedef exprtk::parser<T>             parser_t;\n      typedef parser_t::dependent_entity_collector::symbol_t symbol_t;\n\n      symbol_table_t symbol_table;\n      expression_t   expression;\n      parser_t       parser;\n\n      expression.register_symbol_table(symbol_table);\n\n      parser.enable_unknown_symbol_resolver();\n      parser.dec().collect_functions() = true;\n\n      if (!parser.compile(expr_str, expression))\n         return false;\n\n      std::deque<symbol_t> symb_list;\n\n      parser.dec().symbols(symb_list);\n\n      for (std::size_t i = 0; i < symb_list.size(); ++i)\n      {\n         symbol_list.push_back(symb_list[i].first);\n      }\n\n      return true;\n   }\n\n   template <typename T,\n             typename Allocator,\n             template <typename, typename> class Sequence>\n   inline bool collect_functions(const std::string& expr_str,\n                                 exprtk::symbol_table<T>& extrnl_symbol_table,\n                                 Sequence<std::string, Allocator>& symbol_list)\n   {\n      typedef exprtk::symbol_table<T> symbol_table_t;\n      typedef exprtk::expression<T>     expression_t;\n      typedef exprtk::parser<T>             parser_t;\n      typedef typename parser_t::dependent_entity_collector::symbol_t symbol_t;\n\n      symbol_table_t symbol_table;\n      expression_t   expression;\n      parser_t       parser;\n\n      expression.register_symbol_table(symbol_table);\n      expression.register_symbol_table(extrnl_symbol_table);\n\n      parser.enable_unknown_symbol_resolver();\n      parser.dec().collect_functions() = true;\n\n      details::disable_type_checking(parser);\n\n      if (!parser.compile(expr_str, expression))\n         return false;\n\n      std::deque<symbol_t> symb_list;\n\n      parser.dec().symbols(symb_list);\n\n      for (std::size_t i = 0; i < symb_list.size(); ++i)\n      {\n         symbol_list.push_back(symb_list[i].first);\n      
}\n\n      return true;\n   }\n\n   template <typename T>\n   inline T integrate(const expression<T>& e,\n                      T& x,\n                      const T& r0, const T& r1,\n                      const std::size_t number_of_intervals = 1000000)\n   {\n      if (r0 > r1)\n         return T(0);\n\n      const T h = (r1 - r0) / (T(2) * number_of_intervals);\n      T total_area = T(0);\n\n      for (std::size_t i = 0; i < number_of_intervals; ++i)\n      {\n         x = r0 + T(2) * i * h;\n         const T y0 = e.value(); x += h;\n         const T y1 = e.value(); x += h;\n         const T y2 = e.value(); x += h;\n         total_area += h * (y0 + T(4) * y1 + y2) / T(3);\n      }\n\n      return total_area;\n   }\n\n   template <typename T>\n   inline T integrate(const expression<T>& e,\n                      const std::string& variable_name,\n                      const T& r0, const T& r1,\n                      const std::size_t number_of_intervals = 1000000)\n   {\n      const symbol_table<T>& sym_table = e.get_symbol_table();\n\n      if (!sym_table.valid())\n         return std::numeric_limits<T>::quiet_NaN();\n\n      details::variable_node<T>* var = sym_table.get_variable(variable_name);\n\n      if (var)\n      {\n         T& x = var->ref();\n         T  x_original = x;\n         T result = integrate(e,x,r0,r1,number_of_intervals);\n         x = x_original;\n\n         return result;\n      }\n      else\n         return std::numeric_limits<T>::quiet_NaN();\n   }\n\n   template <typename T>\n   inline T derivative(const expression<T>& e,\n                       T& x,\n                       const T& h = T(0.00000001))\n   {\n      const T x_init = x;\n      const T _2h    = T(2) * h;\n\n      x = x_init + _2h;\n      const T y0 = e.value();\n      x = x_init +   h;\n      const T y1 = e.value();\n      x = x_init -   h;\n      const T y2 = e.value();\n      x = x_init - _2h;\n      const T y3 = e.value();\n      x = x_init;\n\n      return (-y0 + T(8) * 
(y1 - y2) + y3) / (T(12) * h);\n   }\n\n   template <typename T>\n   inline T second_derivative(const expression<T>& e,\n                              T& x,\n                              const T& h = T(0.00001))\n   {\n      const T x_init = x;\n      const T _2h    = T(2) * h;\n\n      const T y = e.value();\n      x = x_init + _2h;\n      const T y0 = e.value();\n      x = x_init +   h;\n      const T y1 = e.value();\n      x = x_init -   h;\n      const T y2 = e.value();\n      x = x_init - _2h;\n      const T y3 = e.value();\n      x = x_init;\n\n      return (-y0 + T(16) * (y1 + y2) - T(30) * y - y3) / (T(12) * h * h);\n   }\n\n   template <typename T>\n   inline T third_derivative(const expression<T>& e,\n                             T& x,\n                             const T& h = T(0.0001))\n   {\n      const T x_init = x;\n      const T _2h    = T(2) * h;\n\n      x = x_init + _2h;\n      const T y0 = e.value();\n      x = x_init +   h;\n      const T y1 = e.value();\n      x = x_init -   h;\n      const T y2 = e.value();\n      x = x_init - _2h;\n      const T y3 = e.value();\n      x = x_init;\n\n      return (y0 + T(2) * (y2 - y1) - y3) / (T(2) * h * h * h);\n   }\n\n   template <typename T>\n   inline T derivative(const expression<T>& e,\n                       const std::string& variable_name,\n                       const T& h = T(0.00000001))\n   {\n      const symbol_table<T>& sym_table = e.get_symbol_table();\n\n      if (!sym_table.valid())\n      {\n         return std::numeric_limits<T>::quiet_NaN();\n      }\n\n      details::variable_node<T>* var = sym_table.get_variable(variable_name);\n\n      if (var)\n      {\n         T& x = var->ref();\n         T x_original = x;\n         T result = derivative(e,x,h);\n         x = x_original;\n\n         return result;\n      }\n      else\n         return std::numeric_limits<T>::quiet_NaN();\n   }\n\n   template <typename T>\n   inline T second_derivative(const expression<T>& e,\n                    
          const std::string& variable_name,\n                              const T& h = T(0.00001))\n   {\n      const symbol_table<T>& sym_table = e.get_symbol_table();\n\n      if (!sym_table.valid())\n      {\n         return std::numeric_limits<T>::quiet_NaN();\n      }\n\n      details::variable_node<T>* var = sym_table.get_variable(variable_name);\n\n      if (var)\n      {\n         T& x = var->ref();\n         const T x_original = x;\n         const T result = second_derivative(e,x,h);\n         x = x_original;\n\n         return result;\n      }\n      else\n         return std::numeric_limits<T>::quiet_NaN();\n   }\n\n   template <typename T>\n   inline T third_derivative(const expression<T>& e,\n                             const std::string& variable_name,\n                             const T& h = T(0.0001))\n   {\n      const symbol_table<T>& sym_table = e.get_symbol_table();\n\n      if (!sym_table.valid())\n      {\n         return std::numeric_limits<T>::quiet_NaN();\n      }\n\n      details::variable_node<T>* var = sym_table.get_variable(variable_name);\n\n      if (var)\n      {\n         T& x = var->ref();\n         const T x_original = x;\n         const T result = third_derivative(e,x,h);\n         x = x_original;\n\n         return result;\n      }\n      else\n         return std::numeric_limits<T>::quiet_NaN();\n   }\n\n   /*\n      Note: The following 'compute' routines are simple helpers,\n      for quickly setting up the required pieces of code in order\n      to evaluate an expression. 
By virtue of how they operate\n      there will be an overhead with regard to their setup and\n      teardown, hence they should not be used in time-critical\n      sections of code.\n      Furthermore they only support a small subset of variables,\n      and no string variables or user-defined functions.\n   */\n   template <typename T>\n   inline bool compute(const std::string& expression_string, T& result)\n   {\n      // No variables\n      symbol_table<T> symbol_table;\n      symbol_table.add_constants();\n\n      expression<T> expression;\n      expression.register_symbol_table(symbol_table);\n\n      parser<T> parser;\n\n      if (parser.compile(expression_string,expression))\n      {\n         result = expression.value();\n\n         return true;\n      }\n      else\n         return false;\n   }\n\n   template <typename T>\n   inline bool compute(const std::string& expression_string,\n                       const T& x,\n                       T& result)\n   {\n      // Only 'x'\n      static const std::string x_var(\"x\");\n\n      symbol_table<T> symbol_table;\n      symbol_table.add_constants();\n      symbol_table.add_constant(x_var,x);\n\n      expression<T> expression;\n      expression.register_symbol_table(symbol_table);\n\n      parser<T> parser;\n\n      if (parser.compile(expression_string,expression))\n      {\n         result = expression.value();\n\n         return true;\n      }\n      else\n         return false;\n   }\n\n   template <typename T>\n   inline bool compute(const std::string& expression_string,\n                       const T& x, const T& y,\n                       T& result)\n   {\n      // Only 'x' and 'y'\n      static const std::string x_var(\"x\");\n      static const std::string y_var(\"y\");\n\n      symbol_table<T> symbol_table;\n      symbol_table.add_constants();\n      symbol_table.add_constant(x_var,x);\n      symbol_table.add_constant(y_var,y);\n\n      expression<T> expression;\n      
expression.register_symbol_table(symbol_table);\n\n      parser<T> parser;\n\n      if (parser.compile(expression_string,expression))\n      {\n         result = expression.value();\n\n         return true;\n      }\n      else\n         return false;\n   }\n\n   template <typename T>\n   inline bool compute(const std::string& expression_string,\n                       const T& x, const T& y, const T& z,\n                       T& result)\n   {\n      // Only 'x', 'y' or 'z'\n      static const std::string x_var(\"x\");\n      static const std::string y_var(\"y\");\n      static const std::string z_var(\"z\");\n\n      symbol_table<T> symbol_table;\n      symbol_table.add_constants();\n      symbol_table.add_constant(x_var,x);\n      symbol_table.add_constant(y_var,y);\n      symbol_table.add_constant(z_var,z);\n\n      expression<T> expression;\n      expression.register_symbol_table(symbol_table);\n\n      parser<T> parser;\n\n      if (parser.compile(expression_string,expression))\n      {\n         result = expression.value();\n\n         return true;\n      }\n      else\n         return false;\n   }\n\n   template <typename T, std::size_t N>\n   class polynomial : public ifunction<T>\n   {\n   private:\n\n      template <typename Type, std::size_t NumberOfCoefficients>\n      struct poly_impl { };\n\n      template <typename Type>\n      struct poly_impl <Type,12>\n      {\n         static inline T evaluate(const Type x,\n                                  const Type c12, const Type c11, const Type c10, const Type c9, const Type c8,\n                                  const Type  c7, const Type  c6, const Type  c5, const Type c4, const Type c3,\n                                  const Type  c2, const Type  c1, const Type  c0)\n         {\n            // p(x) = c_12x^12 + c_11x^11 + c_10x^10 + c_9x^9 + c_8x^8 + c_7x^7 + c_6x^6 + c_5x^5 + c_4x^4 + c_3x^3 + c_2x^2 + c_1x^1 + c_0x^0\n            return ((((((((((((c12 * x + c11) * x + c10) * x + c9) * x + c8) * x + 
c7) * x + c6) * x + c5) * x + c4) * x + c3) * x + c2) * x + c1) * x + c0);\n         }\n      };\n\n      template <typename Type>\n      struct poly_impl <Type,11>\n      {\n         static inline T evaluate(const Type x,\n                                  const Type c11, const Type c10, const Type c9, const Type c8, const Type c7,\n                                  const Type c6,  const Type  c5, const Type c4, const Type c3, const Type c2,\n                                  const Type c1,  const Type  c0)\n         {\n            // p(x) = c_11x^11 + c_10x^10 + c_9x^9 + c_8x^8 + c_7x^7 + c_6x^6 + c_5x^5 + c_4x^4 + c_3x^3 + c_2x^2 + c_1x^1 + c_0x^0\n            return (((((((((((c11 * x + c10) * x + c9) * x + c8) * x + c7) * x + c6) * x + c5) * x + c4) * x + c3) * x + c2) * x + c1) * x + c0);\n         }\n      };\n\n      template <typename Type>\n      struct poly_impl <Type,10>\n      {\n         static inline T evaluate(const Type x,\n                                  const Type c10, const Type c9, const Type c8, const Type c7, const Type c6,\n                                  const Type c5,  const Type c4, const Type c3, const Type c2, const Type c1,\n                                  const Type c0)\n         {\n            // p(x) = c_10x^10 + c_9x^9 + c_8x^8 + c_7x^7 + c_6x^6 + c_5x^5 + c_4x^4 + c_3x^3 + c_2x^2 + c_1x^1 + c_0x^0\n            return ((((((((((c10 * x + c9) * x + c8) * x + c7) * x + c6) * x + c5) * x + c4) * x + c3) * x + c2) * x + c1) * x + c0);\n         }\n      };\n\n      template <typename Type>\n      struct poly_impl <Type,9>\n      {\n         static inline T evaluate(const Type x,\n                                  const Type c9, const Type c8, const Type c7, const Type c6, const Type c5,\n                                  const Type c4, const Type c3, const Type c2, const Type c1, const Type c0)\n         {\n            // p(x) = c_9x^9 + c_8x^8 + c_7x^7 + c_6x^6 + c_5x^5 + c_4x^4 + c_3x^3 + c_2x^2 + c_1x^1 + c_0x^0\n            
return (((((((((c9 * x + c8) * x + c7) * x + c6) * x + c5) * x + c4) * x + c3) * x + c2) * x + c1) * x + c0);\n         }\n      };\n\n      template <typename Type>\n      struct poly_impl <Type,8>\n      {\n         static inline T evaluate(const Type x,\n                                  const Type c8, const Type c7, const Type c6, const Type c5, const Type c4,\n                                  const Type c3, const Type c2, const Type c1, const Type c0)\n         {\n            // p(x) = c_8x^8 + c_7x^7 + c_6x^6 + c_5x^5 + c_4x^4 + c_3x^3 + c_2x^2 + c_1x^1 + c_0x^0\n            return ((((((((c8 * x + c7) * x + c6) * x + c5) * x + c4) * x + c3) * x + c2) * x + c1) * x + c0);\n         }\n      };\n\n      template <typename Type>\n      struct poly_impl <Type,7>\n      {\n         static inline T evaluate(const Type x,\n                                  const Type c7, const Type c6, const Type c5, const Type c4, const Type c3,\n                                  const Type c2, const Type c1, const Type c0)\n         {\n            // p(x) = c_7x^7 + c_6x^6 + c_5x^5 + c_4x^4 + c_3x^3 + c_2x^2 + c_1x^1 + c_0x^0\n            return (((((((c7 * x + c6) * x + c5) * x + c4) * x + c3) * x + c2) * x + c1) * x + c0);\n         }\n      };\n\n      template <typename Type>\n      struct poly_impl <Type,6>\n      {\n         static inline T evaluate(const Type x,\n                                  const Type c6, const Type c5, const Type c4, const Type c3, const Type c2,\n                                  const Type c1, const Type c0)\n         {\n            // p(x) = c_6x^6 + c_5x^5 + c_4x^4 + c_3x^3 + c_2x^2 + c_1x^1 + c_0x^0\n            return ((((((c6 * x + c5) * x + c4) * x + c3) * x + c2) * x + c1) * x + c0);\n         }\n      };\n\n      template <typename Type>\n      struct poly_impl <Type,5>\n      {\n         static inline T evaluate(const Type x,\n                                  const Type c5, const Type c4, const Type c3, const Type c2,\n                  
                const Type c1, const Type c0)\n         {\n            // p(x) = c_5x^5 + c_4x^4 + c_3x^3 + c_2x^2 + c_1x^1 + c_0x^0\n            return (((((c5 * x + c4) * x + c3) * x + c2) * x + c1) * x + c0);\n         }\n      };\n\n      template <typename Type>\n      struct poly_impl <Type,4>\n      {\n         static inline T evaluate(const Type x, const Type c4, const Type c3, const Type c2, const Type c1, const Type c0)\n         {\n            // p(x) = c_4x^4 + c_3x^3 + c_2x^2 + c_1x^1 + c_0x^0\n            return ((((c4 * x + c3) * x + c2) * x + c1) * x + c0);\n         }\n      };\n\n      template <typename Type>\n      struct poly_impl <Type,3>\n      {\n         static inline T evaluate(const Type x, const Type c3, const Type c2, const Type c1, const Type c0)\n         {\n            // p(x) = c_3x^3 + c_2x^2 + c_1x^1 + c_0x^0\n            return (((c3 * x + c2) * x + c1) * x + c0);\n         }\n      };\n\n      template <typename Type>\n      struct poly_impl <Type,2>\n      {\n         static inline T evaluate(const Type x, const Type c2, const Type c1, const Type c0)\n         {\n            // p(x) = c_2x^2 + c_1x^1 + c_0x^0\n            return ((c2 * x + c1) * x + c0);\n         }\n      };\n\n      template <typename Type>\n      struct poly_impl <Type,1>\n      {\n         static inline T evaluate(const Type x, const Type c1, const Type c0)\n         {\n            // p(x) = c_1x^1 + c_0x^0\n            return (c1 * x + c0);\n         }\n      };\n\n   public:\n\n      using ifunction<T>::operator();\n\n      polynomial()\n      : ifunction<T>((N+2 <= 20) ? (N + 2) : std::numeric_limits<std::size_t>::max())\n      {\n         disable_has_side_effects(*this);\n      }\n\n      virtual ~polynomial()\n      {}\n\n      #define poly_rtrn(NN) \\\n      return (NN != N) ? 
std::numeric_limits<T>::quiet_NaN() :\n\n      inline virtual T operator() (const T& x, const T& c1, const T& c0)\n      {\n         poly_rtrn(1) poly_impl<T,1>::evaluate(x,c1,c0);\n      }\n\n      inline virtual T operator() (const T& x, const T& c2, const T& c1, const T& c0)\n      {\n         poly_rtrn(2) poly_impl<T,2>::evaluate(x,c2,c1,c0);\n      }\n\n      inline virtual T operator() (const T& x, const T& c3, const T& c2, const T& c1, const T& c0)\n      {\n         poly_rtrn(3) poly_impl<T,3>::evaluate(x,c3,c2,c1,c0);\n      }\n\n      inline virtual T operator() (const T& x, const T& c4, const T& c3, const T& c2, const T& c1, const T& c0)\n      {\n         poly_rtrn(4) poly_impl<T,4>::evaluate(x,c4,c3,c2,c1,c0);\n      }\n\n      inline virtual T operator() (const T& x, const T& c5, const T& c4, const T& c3, const T& c2, const T& c1, const T& c0)\n      {\n         poly_rtrn(5) poly_impl<T,5>::evaluate(x,c5,c4,c3,c2,c1,c0);\n      }\n\n      inline virtual T operator() (const T& x, const T& c6, const T& c5, const T& c4, const T& c3, const T& c2, const T& c1, const T& c0)\n      {\n         poly_rtrn(6) poly_impl<T,6>::evaluate(x,c6,c5,c4,c3,c2,c1,c0);\n      }\n\n      inline virtual T operator() (const T& x, const T& c7, const T& c6, const T& c5, const T& c4, const T& c3, const T& c2, const T& c1, const T& c0)\n      {\n         poly_rtrn(7) poly_impl<T,7>::evaluate(x,c7,c6,c5,c4,c3,c2,c1,c0);\n      }\n\n      inline virtual T operator() (const T& x, const T& c8, const T& c7, const T& c6, const T& c5, const T& c4, const T& c3, const T& c2, const T& c1, const T& c0)\n      {\n         poly_rtrn(8) poly_impl<T,8>::evaluate(x,c8,c7,c6,c5,c4,c3,c2,c1,c0);\n      }\n\n      inline virtual T operator() (const T& x, const T& c9, const T& c8, const T& c7, const T& c6, const T& c5, const T& c4, const T& c3, const T& c2, const T& c1, const T& c0)\n      {\n         poly_rtrn(9) poly_impl<T,9>::evaluate(x,c9,c8,c7,c6,c5,c4,c3,c2,c1,c0);\n      }\n\n      inline 
virtual T operator() (const T& x, const T& c10, const T& c9, const T& c8, const T& c7, const T& c6, const T& c5, const T& c4, const T& c3, const T& c2, const T& c1, const T& c0)\n      {\n         poly_rtrn(10) poly_impl<T,10>::evaluate(x,c10,c9,c8,c7,c6,c5,c4,c3,c2,c1,c0);\n      }\n\n      inline virtual T operator() (const T& x, const T& c11, const T& c10, const T& c9, const T& c8, const T& c7, const T& c6, const T& c5, const T& c4, const T& c3, const T& c2, const T& c1, const T& c0)\n      {\n         poly_rtrn(11) poly_impl<T,11>::evaluate(x,c11,c10,c9,c8,c7,c6,c5,c4,c3,c2,c1,c0);\n      }\n\n      inline virtual T operator() (const T& x, const T& c12, const T& c11, const T& c10, const T& c9, const T& c8, const T& c7, const T& c6, const T& c5, const T& c4, const T& c3, const T& c2, const T& c1, const T& c0)\n      {\n         poly_rtrn(12) poly_impl<T,12>::evaluate(x,c12,c11,c10,c9,c8,c7,c6,c5,c4,c3,c2,c1,c0);\n      }\n\n      #undef poly_rtrn\n\n      inline virtual T operator() ()\n      {\n         return std::numeric_limits<T>::quiet_NaN();\n      }\n\n      inline virtual T operator() (const T&)\n      {\n         return std::numeric_limits<T>::quiet_NaN();\n      }\n\n      inline virtual T operator() (const T&, const T&)\n      {\n         return std::numeric_limits<T>::quiet_NaN();\n      }\n   };\n\n   template <typename T>\n   class function_compositor\n   {\n   public:\n\n      typedef exprtk::expression<T>             expression_t;\n      typedef exprtk::symbol_table<T>           symbol_table_t;\n      typedef exprtk::parser<T>                 parser_t;\n      typedef typename parser_t::settings_store settings_t;\n\n      struct function\n      {\n         function()\n         {}\n\n         function(const std::string& n)\n         : name_(n)\n         {}\n\n         function(const std::string& name,\n                  const std::string& expression)\n         : name_(name),\n           expression_(expression)\n         {}\n\n         
function(const std::string& name,\n                  const std::string& expression,\n                  const std::string& v0)\n         : name_(name),\n           expression_(expression)\n         {\n            v_.push_back(v0);\n         }\n\n         function(const std::string& name,\n                  const std::string& expression,\n                  const std::string& v0, const std::string& v1)\n         : name_(name),\n           expression_(expression)\n         {\n            v_.push_back(v0); v_.push_back(v1);\n         }\n\n         function(const std::string& name,\n                  const std::string& expression,\n                  const std::string& v0, const std::string& v1,\n                  const std::string& v2)\n         : name_(name),\n           expression_(expression)\n         {\n            v_.push_back(v0); v_.push_back(v1);\n            v_.push_back(v2);\n         }\n\n         function(const std::string& name,\n                  const std::string& expression,\n                  const std::string& v0, const std::string& v1,\n                  const std::string& v2, const std::string& v3)\n         : name_(name),\n           expression_(expression)\n         {\n            v_.push_back(v0); v_.push_back(v1);\n            v_.push_back(v2); v_.push_back(v3);\n         }\n\n         function(const std::string& name,\n                  const std::string& expression,\n                  const std::string& v0, const std::string& v1,\n                  const std::string& v2, const std::string& v3,\n                  const std::string& v4)\n         : name_(name),\n           expression_(expression)\n         {\n            v_.push_back(v0); v_.push_back(v1);\n            v_.push_back(v2); v_.push_back(v3);\n            v_.push_back(v4);\n         }\n\n         inline function& name(const std::string& n)\n         {\n            name_ = n;\n            return (*this);\n         }\n\n         inline function& expression(const std::string& e)\n        
 {\n            expression_ = e;\n            return (*this);\n         }\n\n         inline function& var(const std::string& v)\n         {\n            v_.push_back(v);\n            return (*this);\n         }\n\n         std::string name_;\n         std::string expression_;\n         std::deque<std::string> v_;\n      };\n\n   private:\n\n      struct base_func : public exprtk::ifunction<T>\n      {\n         typedef const T&                       type;\n         typedef exprtk::ifunction<T>     function_t;\n         typedef std::vector<T*>            varref_t;\n         typedef std::vector<T>                var_t;\n         typedef std::pair<T*,std::size_t> lvarref_t;\n         typedef std::vector<lvarref_t>    lvr_vec_t;\n\n         using exprtk::ifunction<T>::operator();\n\n         base_func(const std::size_t& pc = 0)\n         : exprtk::ifunction<T>(pc),\n           local_var_stack_size(0),\n           stack_depth(0)\n         {\n            v.resize(pc);\n         }\n\n         virtual ~base_func()\n         {}\n\n         inline void update(const T& v0)\n         {\n            (*v[0]) = v0;\n         }\n\n         inline void update(const T& v0, const T& v1)\n         {\n            (*v[0]) = v0; (*v[1]) = v1;\n         }\n\n         inline void update(const T& v0, const T& v1, const T& v2)\n         {\n            (*v[0]) = v0; (*v[1]) = v1;\n            (*v[2]) = v2;\n         }\n\n         inline void update(const T& v0, const T& v1, const T& v2, const T& v3)\n         {\n            (*v[0]) = v0; (*v[1]) = v1;\n            (*v[2]) = v2; (*v[3]) = v3;\n         }\n\n         inline void update(const T& v0, const T& v1, const T& v2, const T& v3, const T& v4)\n         {\n            (*v[0]) = v0; (*v[1]) = v1;\n            (*v[2]) = v2; (*v[3]) = v3;\n            (*v[4]) = v4;\n         }\n\n         inline void update(const T& v0, const T& v1, const T& v2, const T& v3, const T& v4, const T& v5)\n         {\n            (*v[0]) = v0; (*v[1]) = v1;\n    
        (*v[2]) = v2; (*v[3]) = v3;\n            (*v[4]) = v4; (*v[5]) = v5;\n         }\n\n         inline function_t& setup(expression_t& expr)\n         {\n            expression = expr;\n\n            typedef typename expression_t::control_block::local_data_list_t ldl_t;\n\n            ldl_t ldl = expr.local_data_list();\n\n            std::vector<std::size_t> index_list;\n\n            for (std::size_t i = 0; i < ldl.size(); ++i)\n            {\n               if (ldl[i].size)\n               {\n                  index_list.push_back(i);\n               }\n            }\n\n            std::size_t input_param_count = 0;\n\n            for (std::size_t i = 0; i < index_list.size(); ++i)\n            {\n               const std::size_t index = index_list[i];\n\n               if (i < (index_list.size() - v.size()))\n               {\n                  lv.push_back(\n                        std::make_pair(\n                           reinterpret_cast<T*>(ldl[index].pointer),\n                           ldl[index].size));\n\n                  local_var_stack_size += ldl[index].size;\n               }\n               else\n                  v[input_param_count++] = reinterpret_cast<T*>(ldl[index].pointer);\n            }\n\n            clear_stack();\n\n            return (*this);\n         }\n\n         inline void pre()\n         {\n            if (stack_depth++)\n            {\n               if (!v.empty())\n               {\n                  var_t var_stack(v.size(),T(0));\n                  copy(v,var_stack);\n                  param_stack.push_back(var_stack);\n               }\n\n               if (!lv.empty())\n               {\n                  var_t local_var_stack(local_var_stack_size,T(0));\n                  copy(lv,local_var_stack);\n                  local_stack.push_back(local_var_stack);\n               }\n            }\n         }\n\n         inline void post()\n         {\n            if (--stack_depth)\n            {\n               if 
(!v.empty())\n               {\n                  copy(param_stack.back(),v);\n                  param_stack.pop_back();\n               }\n\n               if (!lv.empty())\n               {\n                  copy(local_stack.back(),lv);\n                  local_stack.pop_back();\n               }\n            }\n         }\n\n         // Save the current input parameter values into a flat stack frame\n         void copy(const varref_t& src_v, var_t& dest_v)\n         {\n            for (std::size_t i = 0; i < src_v.size(); ++i)\n            {\n               dest_v[i] = (*src_v[i]);\n            }\n         }\n\n         // Restore the input parameter values from a flat stack frame\n         void copy(const var_t& src_v, varref_t& dest_v)\n         {\n            for (std::size_t i = 0; i < src_v.size(); ++i)\n            {\n               (*dest_v[i]) = src_v[i];\n            }\n         }\n\n         // Flatten the local variables (scalars and vectors) into a stack frame\n         void copy(const lvr_vec_t& src_v, var_t& dest_v)\n         {\n            typename var_t::iterator itr = dest_v.begin();\n            typedef  typename std::iterator_traits<typename var_t::iterator>::difference_type diff_t;\n\n            for (std::size_t i = 0; i < src_v.size(); ++i)\n            {\n               lvarref_t vr = src_v[i];\n\n               if (1 == vr.second)\n                  *itr++ = (*vr.first);\n               else\n               {\n                  std::copy(vr.first, vr.first + vr.second, itr);\n                  itr += static_cast<diff_t>(vr.second);\n               }\n            }\n         }\n\n         // Restore the local variables from a flattened stack frame. Iterate\n         // over dest_v (the variable references): src_v holds the flattened\n         // scalars and may be larger than dest_v when vectors are present.\n         void copy(const var_t& src_v, lvr_vec_t& dest_v)\n         {\n            typename var_t::const_iterator itr = src_v.begin();\n            typedef  typename std::iterator_traits<typename var_t::iterator>::difference_type diff_t;\n\n            for (std::size_t i = 0; i < dest_v.size(); ++i)\n            {\n               lvarref_t vr = dest_v[i];\n\n               if (1 == vr.second)\n                  (*vr.first) = *itr++;\n               else\n               {\n                  std::copy(itr, itr + static_cast<diff_t>(vr.second), vr.first);\n                  itr += 
static_cast<diff_t>(vr.second);\n               }\n            }\n         }\n\n         inline void clear_stack()\n         {\n            for (std::size_t i = 0; i < v.size(); ++i)\n            {\n               (*v[i]) = 0;\n            }\n         }\n\n         inline virtual T value(expression_t& e)\n         {\n            return e.value();\n         }\n\n         expression_t expression;\n         varref_t v;\n         lvr_vec_t lv;\n         std::size_t local_var_stack_size;\n         std::size_t stack_depth;\n         std::deque<var_t> param_stack;\n         std::deque<var_t> local_stack;\n      };\n\n      typedef std::map<std::string,base_func*> funcparam_t;\n\n      struct func_0param : public base_func\n      {\n         using exprtk::ifunction<T>::operator();\n\n         func_0param() : base_func(0) {}\n\n         inline T operator() ()\n         {\n            return this->value(base_func::expression);\n         }\n      };\n\n      typedef const T& type;\n\n      template <typename BaseFuncType>\n      struct scoped_bft\n      {\n         scoped_bft(BaseFuncType& bft) : bft_(bft) { bft_.pre (); }\n        ~scoped_bft()                              { bft_.post(); }\n\n         BaseFuncType& bft_;\n\n      private:\n\n         scoped_bft(scoped_bft&);\n         scoped_bft& operator=(scoped_bft&);\n      };\n\n      struct func_1param : public base_func\n      {\n         using exprtk::ifunction<T>::operator();\n\n         func_1param() : base_func(1) {}\n\n         inline T operator() (type v0)\n         {\n            scoped_bft<func_1param> sb(*this);\n            base_func::update(v0);\n            return this->value(base_func::expression);\n         }\n      };\n\n      struct func_2param : public base_func\n      {\n         using exprtk::ifunction<T>::operator();\n\n         func_2param() : base_func(2) {}\n\n         inline T operator() (type v0, type v1)\n         {\n            scoped_bft<func_2param> sb(*this);\n            
base_func::update(v0, v1);\n            return this->value(base_func::expression);\n         }\n      };\n\n      struct func_3param : public base_func\n      {\n         using exprtk::ifunction<T>::operator();\n\n         func_3param() : base_func(3) {}\n\n         inline T operator() (type v0, type v1, type v2)\n         {\n            scoped_bft<func_3param> sb(*this);\n            base_func::update(v0, v1, v2);\n            return this->value(base_func::expression);\n         }\n      };\n\n      struct func_4param : public base_func\n      {\n         using exprtk::ifunction<T>::operator();\n\n         func_4param() : base_func(4) {}\n\n         inline T operator() (type v0, type v1, type v2, type v3)\n         {\n            scoped_bft<func_4param> sb(*this);\n            base_func::update(v0, v1, v2, v3);\n            return this->value(base_func::expression);\n         }\n      };\n\n      struct func_5param : public base_func\n      {\n         using exprtk::ifunction<T>::operator();\n\n         func_5param() : base_func(5) {}\n\n         inline T operator() (type v0, type v1, type v2, type v3, type v4)\n         {\n            scoped_bft<func_5param> sb(*this);\n            base_func::update(v0, v1, v2, v3, v4);\n            return this->value(base_func::expression);\n         }\n      };\n\n      struct func_6param : public base_func\n      {\n         using exprtk::ifunction<T>::operator();\n\n         func_6param() : base_func(6) {}\n\n         inline T operator() (type v0, type v1, type v2, type v3, type v4, type v5)\n         {\n            scoped_bft<func_6param> sb(*this);\n            base_func::update(v0, v1, v2, v3, v4, v5);\n            return this->value(base_func::expression);\n         }\n      };\n\n      static T return_value(expression_t& e)\n      {\n         typedef exprtk::results_context<T> results_context_t;\n         typedef typename results_context_t::type_store_t type_t;\n         typedef typename type_t::scalar_view scalar_t;\n\n 
        T result = e.value();\n\n         if (e.return_invoked())\n         {\n            // Due to the post compilation checks, it can be safely\n            // assumed that there will be at least one parameter\n            // and that the first parameter will always be scalar.\n            return scalar_t(e.results()[0])();\n         }\n\n         return result;\n      }\n\n      #define def_fp_retval(N)                               \\\n      struct func_##N##param_retval : public func_##N##param \\\n      {                                                      \\\n         inline T value(expression_t& e)                     \\\n         {                                                   \\\n            return return_value(e);                          \\\n         }                                                   \\\n      };                                                     \\\n\n      def_fp_retval(0)\n      def_fp_retval(1)\n      def_fp_retval(2)\n      def_fp_retval(3)\n      def_fp_retval(4)\n      def_fp_retval(5)\n      def_fp_retval(6)\n\n      template <typename Allocator,\n                template <typename,typename> class Sequence>\n      inline bool add(const std::string& name,\n                      const std::string& expression,\n                      const Sequence<std::string,Allocator>& var_list,\n                      const bool override = false)\n      {\n         const typename std::map<std::string,expression_t>::iterator itr = expr_map_.find(name);\n\n         if (expr_map_.end() != itr)\n         {\n            if (!override)\n            {\n               exprtk_debug((\"Compositor error(add): function '%s' already defined\\n\",\n                             name.c_str()));\n\n               return false;\n            }\n\n            remove(name, var_list.size());\n         }\n\n         if (compile_expression(name,expression,var_list))\n         {\n            const std::size_t n = var_list.size();\n\n            
fp_map_[n][name]->setup(expr_map_[name]);\n\n            return true;\n         }\n         else\n         {\n            exprtk_debug((\"Compositor error(add): Failed to compile function '%s'\\n\",\n                          name.c_str()));\n\n            return false;\n         }\n      }\n\n   public:\n\n      function_compositor()\n      : parser_(settings_t::compile_all_opts +\n                settings_t::e_disable_zero_return),\n        fp_map_(7)\n      {}\n\n      function_compositor(const symbol_table_t& st)\n      : symbol_table_(st),\n        parser_(settings_t::compile_all_opts +\n                settings_t::e_disable_zero_return),\n        fp_map_(7)\n      {}\n\n     ~function_compositor()\n      {\n         clear();\n      }\n\n      inline symbol_table_t& symbol_table()\n      {\n         return symbol_table_;\n      }\n\n      inline void add_auxiliary_symtab(symbol_table_t& symtab)\n      {\n         auxiliary_symtab_list_.push_back(&symtab);\n      }\n\n      void clear()\n      {\n         symbol_table_.clear();\n         expr_map_    .clear();\n\n         for (std::size_t i = 0; i < fp_map_.size(); ++i)\n         {\n            typename funcparam_t::iterator itr = fp_map_[i].begin();\n            typename funcparam_t::iterator end = fp_map_[i].end  ();\n\n            while (itr != end)\n            {\n               delete itr->second;\n               ++itr;\n            }\n\n            fp_map_[i].clear();\n         }\n      }\n\n      inline bool add(const function& f, const bool override = false)\n      {\n         return add(f.name_, f.expression_, f.v_,override);\n      }\n\n   private:\n\n      template <typename Allocator,\n                template <typename,typename> class Sequence>\n      bool compile_expression(const std::string& name,\n                              const std::string& expression,\n                              const Sequence<std::string,Allocator>& input_var_list,\n                              bool  return_present = 
false)\n      {\n         expression_t compiled_expression;\n         symbol_table_t local_symbol_table;\n\n         local_symbol_table.load_from(symbol_table_);\n         local_symbol_table.add_constants();\n\n         if (!valid(name,input_var_list.size()))\n            return false;\n\n         if (!forward(name,\n                      input_var_list.size(),\n                      local_symbol_table,\n                      return_present))\n            return false;\n\n         compiled_expression.register_symbol_table(local_symbol_table);\n\n         for (std::size_t i = 0; i < auxiliary_symtab_list_.size(); ++i)\n         {\n            compiled_expression.register_symbol_table((*auxiliary_symtab_list_[i]));\n         }\n\n         std::string mod_expression;\n\n         for (std::size_t i = 0; i < input_var_list.size(); ++i)\n         {\n            mod_expression += \" var \" + input_var_list[i] + \"{};\\n\";\n         }\n\n         if (\n              ('{' == details::front(expression)) &&\n              ('}' == details::back (expression))\n            )\n            mod_expression += \"~\" + expression + \";\";\n         else\n            mod_expression += \"~{\" + expression + \"};\";\n\n         if (!parser_.compile(mod_expression,compiled_expression))\n         {\n            exprtk_debug((\"Compositor Error: %s\\n\",parser_.error().c_str()));\n            exprtk_debug((\"Compositor modified expression: \\n%s\\n\",mod_expression.c_str()));\n\n            remove(name,input_var_list.size());\n\n            return false;\n         }\n\n         if (!return_present && parser_.dec().return_present())\n         {\n            remove(name,input_var_list.size());\n\n            return compile_expression(name, expression, input_var_list, true);\n         }\n\n         // Make sure every return point has a scalar as its first parameter\n         if (parser_.dec().return_present())\n         {\n            typedef std::vector<std::string> str_list_t;\n\n           
 str_list_t ret_param_list = parser_.dec().return_param_type_list();\n\n            for (std::size_t i = 0; i < ret_param_list.size(); ++i)\n            {\n               const std::string& params = ret_param_list[i];\n\n               if (params.empty() || ('T' != params[0]))\n               {\n                  exprtk_debug((\"Compositor Error: Return statement in function '%s' is invalid\\n\",\n                                name.c_str()));\n\n                  remove(name,input_var_list.size());\n\n                  return false;\n               }\n            }\n         }\n\n         expr_map_[name] = compiled_expression;\n\n         exprtk::ifunction<T>& ifunc = (*(fp_map_[input_var_list.size()])[name]);\n\n         if (symbol_table_.add_function(name,ifunc))\n            return true;\n         else\n         {\n            exprtk_debug((\"Compositor Error: Failed to add function '%s' to symbol table\\n\",\n                          name.c_str()));\n            return false;\n         }\n      }\n\n      inline bool symbol_used(const std::string& symbol) const\n      {\n         return (\n                  symbol_table_.is_variable       (symbol) ||\n                  symbol_table_.is_stringvar      (symbol) ||\n                  symbol_table_.is_function       (symbol) ||\n                  symbol_table_.is_vector         (symbol) ||\n                  symbol_table_.is_vararg_function(symbol)\n                );\n      }\n\n      inline bool valid(const std::string& name,\n                        const std::size_t& arg_count) const\n      {\n         if (arg_count > 6)\n            return false;\n         else if (symbol_used(name))\n            return false;\n         else if (fp_map_[arg_count].end() != fp_map_[arg_count].find(name))\n            return false;\n         else\n            return true;\n      }\n\n      inline bool forward(const std::string& name,\n                          const std::size_t& arg_count,\n                          
symbol_table_t& sym_table,\n                          const bool ret_present = false)\n      {\n         switch (arg_count)\n         {\n            #define case_stmt(N)                                     \\\n            case N : (fp_map_[arg_count])[name] =                    \\\n                     (!ret_present) ? static_cast<base_func*>        \\\n                                      (new func_##N##param) :        \\\n                                      static_cast<base_func*>        \\\n                                      (new func_##N##param_retval) ; \\\n                     break;                                          \\\n\n            case_stmt(0) case_stmt(1) case_stmt(2)\n            case_stmt(3) case_stmt(4) case_stmt(5)\n            case_stmt(6)\n            #undef case_stmt\n         }\n\n         exprtk::ifunction<T>& ifunc = (*(fp_map_[arg_count])[name]);\n\n         return sym_table.add_function(name,ifunc);\n      }\n\n      inline void remove(const std::string& name, const std::size_t& arg_count)\n      {\n         if (arg_count > 6)\n            return;\n\n         const typename std::map<std::string,expression_t>::iterator em_itr = expr_map_.find(name);\n\n         if (expr_map_.end() != em_itr)\n         {\n            expr_map_.erase(em_itr);\n         }\n\n         const typename funcparam_t::iterator fp_itr = fp_map_[arg_count].find(name);\n\n         if (fp_map_[arg_count].end() != fp_itr)\n         {\n            delete fp_itr->second;\n            fp_map_[arg_count].erase(fp_itr);\n         }\n\n         symbol_table_.remove_function(name);\n      }\n\n   private:\n\n      symbol_table_t symbol_table_;\n      parser_t parser_;\n      std::map<std::string,expression_t> expr_map_;\n      std::vector<funcparam_t> fp_map_;\n      std::vector<symbol_table_t*> auxiliary_symtab_list_;\n   };\n\n   template <typename T>\n   inline bool pgo_primer()\n   {\n      static const std::string expression_list[]\n                                
        = {\n                                             \"(y + x)\",\n                                             \"2 * (y + x)\",\n                                             \"(2 * y + 2 * x)\",\n                                             \"(y + x / y) * (x - y / x)\",\n                                             \"x / ((x + y) * (x - y)) / y\",\n                                             \"1 - ((x * y) + (y / x)) - 3\",\n                                             \"sin(2 * x) + cos(pi / y)\",\n                                             \"1 - sin(2 * x) + cos(pi / y)\",\n                                             \"sqrt(1 - sin(2 * x) + cos(pi / y) / 3)\",\n                                             \"(x^2 / sin(2 * pi / y)) -x / 2\",\n                                             \"x + (cos(y - sin(2 / x * pi)) - sin(x - cos(2 * y / pi))) - y\",\n                                             \"clamp(-1.0, sin(2 * pi * x) + cos(y / 2 * pi), +1.0)\",\n                                             \"iclamp(-1.0, sin(2 * pi * x) + cos(y / 2 * pi), +1.0)\",\n                                             \"max(3.33, min(sqrt(1 - sin(2 * x) + cos(pi / y) / 3), 1.11))\",\n                                             \"if(avg(x,y) <= x + y, x - y, x * y) + 2 * pi / x\",\n                                             \"1.1x^1 + 2.2y^2 - 3.3x^3 + 4.4y^4 - 5.5x^5 + 6.6y^6 - 7.7x^27 + 8.8y^55\",\n                                             \"(yy + xx)\",\n                                             \"2 * (yy + xx)\",\n                                             \"(2 * yy + 2 * xx)\",\n                                             \"(yy + xx / yy) * (xx - yy / xx)\",\n                                             \"xx / ((xx + yy) * (xx - yy)) / yy\",\n                                             \"1 - ((xx * yy) + (yy / xx)) - 3\",\n                                             \"sin(2 * xx) + cos(pi / yy)\",\n                                             \"1 - 
sin(2 * xx) + cos(pi / yy)\",\n                                             \"sqrt(1 - sin(2 * xx) + cos(pi / yy) / 3)\",\n                                             \"(xx^2 / sin(2 * pi / yy)) -xx / 2\",\n                                             \"xx + (cos(yy - sin(2 / xx * pi)) - sin(xx - cos(2 * yy / pi))) - yy\",\n                                             \"clamp(-1.0, sin(2 * pi * xx) + cos(yy / 2 * pi), +1.0)\",\n                                             \"max(3.33, min(sqrt(1 - sin(2 * xx) + cos(pi / yy) / 3), 1.11))\",\n                                             \"if(avg(xx,yy) <= xx + yy, xx - yy, xx * yy) + 2 * pi / xx\",\n                                             \"1.1xx^1 + 2.2yy^2 - 3.3xx^3 + 4.4yy^4 - 5.5xx^5 + 6.6yy^6 - 7.7xx^27 + 8.8yy^55\",\n                                             \"(1.1*(2.2*(3.3*(4.4*(5.5*(6.6*(7.7*(8.8*(9.9+x)))))))))\",\n                                             \"(((((((((x+9.9)*8.8)*7.7)*6.6)*5.5)*4.4)*3.3)*2.2)*1.1)\",\n                                             \"(x + y) * z\", \"x + (y * z)\", \"(x + y) * 7\", \"x + (y * 7)\",\n                                             \"(x + 7) * y\", \"x + (7 * y)\", \"(7 + x) * y\", \"7 + (x * y)\",\n                                             \"(2 + x) * 3\", \"2 + (x * 3)\", \"(2 + 3) * x\", \"2 + (3 * x)\",\n                                             \"(x + 2) * 3\", \"x + (2 * 3)\",\n                                             \"(x + y) * (z / w)\", \"(x + y) * (z / 7)\", \"(x + y) * (7 / z)\", \"(x + 7) * (y / z)\",\n                                             \"(7 + x) * (y / z)\", \"(2 + x) * (y / z)\", \"(x + 2) * (y / 3)\", \"(2 + x) * (y / 3)\",\n                                             \"(x + 2) * (3 / y)\", \"x + (y * (z / w))\", \"x + (y * (z / 7))\", \"x + (y * (7 / z))\",\n                                             \"x + (7 * (y / z))\", \"7 + (x * (y / z))\", \"2 + (x * (3 / y))\", \"x + (2 * (y / 4))\",\n                         
                    \"2 + (x * (y / 3))\", \"x + (2 * (3 / y))\",\n                                             \"x + ((y * z) / w)\", \"x + ((y * z) / 7)\", \"x + ((y * 7) / z)\", \"x + ((7 * y) / z)\",\n                                             \"7 + ((y * z) / w)\", \"2 + ((x * 3) / y)\", \"x + ((2 * y) / 3)\", \"2 + ((x * y) / 3)\",\n                                             \"x + ((2 * 3) / y)\", \"(((x + y) * z) / w)\",\n                                             \"(((x + y) * z) / 7)\", \"(((x + y) * 7) / z)\", \"(((x + 7) * y) / z)\", \"(((7 + x) * y) / z)\",\n                                             \"(((2 + x) * 3) / y)\", \"(((x + 2) * y) / 3)\", \"(((2 + x) * y) / 3)\", \"(((x + 2) * 3) / y)\",\n                                             \"((x + (y * z)) / w)\", \"((x + (y * z)) / 7)\", \"((x + (y * 7)) / y)\", \"((x + (7 * y)) / z)\",\n                                             \"((7 + (x * y)) / z)\", \"((2 + (x * 3)) / y)\", \"((x + (2 * y)) / 3)\", \"((2 + (x * y)) / 3)\",\n                                             \"((x + (2 * 3)) / y)\",\n                                             \"(xx + yy) * zz\", \"xx + (yy * zz)\",\n                                             \"(xx + yy) * 7\", \"xx + (yy * 7)\",\n                                             \"(xx + 7) * yy\", \"xx + (7 * yy)\",\n                                             \"(7 + xx) * yy\", \"7 + (xx * yy)\",\n                                             \"(2 + x) * 3\", \"2 + (x * 3)\",\n                                             \"(2 + 3) * x\", \"2 + (3 * x)\",\n                                             \"(x + 2) * 3\", \"x + (2 * 3)\",\n                                             \"(xx + yy) * (zz / ww)\", \"(xx + yy) * (zz / 7)\",\n                                             \"(xx + yy) * (7 / zz)\", \"(xx + 7) * (yy / zz)\",\n                                             \"(7 + xx) * (yy / zz)\", \"(2 + xx) * (yy / zz)\",\n                                   
          \"(xx + 2) * (yy / 3)\", \"(2 + xx) * (yy / 3)\",\n                                             \"(xx + 2) * (3 / yy)\", \"xx + (yy * (zz / ww))\",\n                                             \"xx + (yy * (zz / 7))\", \"xx + (yy * (7 / zz))\",\n                                             \"xx + (7 * (yy / zz))\", \"7 + (xx * (yy / zz))\",\n                                             \"2 + (xx * (3 / yy))\", \"xx + (2 * (yy / 4))\",\n                                             \"2 + (xx * (yy / 3))\", \"xx + (2 * (3 / yy))\",\n                                             \"xx + ((yy * zz) / ww)\", \"xx + ((yy * zz) / 7)\",\n                                             \"xx + ((yy * 7) / zz)\", \"xx + ((7 * yy) / zz)\",\n                                             \"7 + ((yy * zz) / ww)\", \"2 + ((xx * 3) / yy)\",\n                                             \"xx + ((2 * yy) / 3)\", \"2 + ((xx * yy) / 3)\",\n                                             \"xx + ((2 * 3) / yy)\", \"(((xx + yy) * zz) / ww)\",\n                                             \"(((xx + yy) * zz) / 7)\", \"(((xx + yy) * 7) / zz)\",\n                                             \"(((xx + 7) * yy) / zz)\", \"(((7 + xx) * yy) / zz)\",\n                                             \"(((2 + xx) * 3) / yy)\", \"(((xx + 2) * yy) / 3)\",\n                                             \"(((2 + xx) * yy) / 3)\", \"(((xx + 2) * 3) / yy)\",\n                                             \"((xx + (yy * zz)) / ww)\", \"((xx + (yy * zz)) / 7)\",\n                                             \"((xx + (yy * 7)) / yy)\", \"((xx + (7 * yy)) / zz)\",\n                                             \"((7 + (xx * yy)) / zz)\", \"((2 + (xx * 3)) / yy)\",\n                                             \"((xx + (2 * yy)) / 3)\", \"((2 + (xx * yy)) / 3)\",\n                                             \"((xx + (2 * 3)) / yy)\"\n                                          };\n      static const std::size_t 
expression_list_size = sizeof(expression_list) / sizeof(std::string);\n\n      T  x = T(0);\n      T  y = T(0);\n      T  z = T(0);\n      T  w = T(0);\n      T xx = T(0);\n      T yy = T(0);\n      T zz = T(0);\n      T ww = T(0);\n\n      exprtk::symbol_table<T> symbol_table;\n      symbol_table.add_constants();\n      symbol_table.add_variable( \"x\", x);\n      symbol_table.add_variable( \"y\", y);\n      symbol_table.add_variable( \"z\", z);\n      symbol_table.add_variable( \"w\", w);\n      symbol_table.add_variable(\"xx\",xx);\n      symbol_table.add_variable(\"yy\",yy);\n      symbol_table.add_variable(\"zz\",zz);\n      symbol_table.add_variable(\"ww\",ww);\n\n      typedef typename std::deque<exprtk::expression<T> > expr_list_t;\n      expr_list_t expr_list;\n\n      const std::size_t rounds = 50;\n\n      {\n         for (std::size_t r = 0; r < rounds; ++r)\n         {\n            expr_list.clear();\n            exprtk::parser<T> parser;\n\n            for (std::size_t i = 0; i < expression_list_size; ++i)\n            {\n               exprtk::expression<T> expression;\n               expression.register_symbol_table(symbol_table);\n\n               if (!parser.compile(expression_list[i],expression))\n               {\n                  return false;\n               }\n\n               expr_list.push_back(expression);\n            }\n         }\n      }\n\n      struct execute\n      {\n         static inline T process(T& x, T& y, expression<T>& expression)\n         {\n            static const T lower_bound = T(-20);\n            static const T upper_bound = T(+20);\n            static const T delta       = T(0.1);\n\n            T total = T(0);\n\n            for (x = lower_bound; x <= upper_bound; x += delta)\n            {\n               for (y = lower_bound; y <= upper_bound; y += delta)\n               {\n                  total += expression.value();\n               }\n            }\n\n            return total;\n         }\n      };\n\n      
for (std::size_t i = 0; i < expr_list.size(); ++i)\n      {\n         execute::process( x,  y, expr_list[i]);\n         execute::process(xx, yy, expr_list[i]);\n      }\n\n      {\n         for (std::size_t i = 0; i < 10000; ++i)\n         {\n            const T v = T(123.456 + i);\n\n            if (details::is_true(details::numeric::nequal(details::numeric::fast_exp<T, 1>::result(v),details::numeric::pow(v,T( 1)))))\n               return false;\n\n            #define else_stmt(N)                                                                                                           \\\n            else if (details::is_true(details::numeric::nequal(details::numeric::fast_exp<T,N>::result(v),details::numeric::pow(v,T(N))))) \\\n               return false;                                                                                                               \\\n\n            else_stmt( 2) else_stmt( 3) else_stmt( 4) else_stmt( 5)\n            else_stmt( 6) else_stmt( 7) else_stmt( 8) else_stmt( 9)\n            else_stmt(10) else_stmt(11) else_stmt(12) else_stmt(13)\n            else_stmt(14) else_stmt(15) else_stmt(16) else_stmt(17)\n            else_stmt(18) else_stmt(19) else_stmt(20) else_stmt(21)\n            else_stmt(22) else_stmt(23) else_stmt(24) else_stmt(25)\n            else_stmt(26) else_stmt(27) else_stmt(28) else_stmt(29)\n            else_stmt(30) else_stmt(31) else_stmt(32) else_stmt(33)\n            else_stmt(34) else_stmt(35) else_stmt(36) else_stmt(37)\n            else_stmt(38) else_stmt(39) else_stmt(40) else_stmt(41)\n            else_stmt(42) else_stmt(43) else_stmt(44) else_stmt(45)\n            else_stmt(46) else_stmt(47) else_stmt(48) else_stmt(49)\n            else_stmt(50) else_stmt(51) else_stmt(52) else_stmt(53)\n            else_stmt(54) else_stmt(55) else_stmt(56) else_stmt(57)\n            else_stmt(58) else_stmt(59) else_stmt(60) else_stmt(61)\n         }\n      }\n\n      return true;\n   }\n}\n\n#if defined(_WIN32) || 
defined(__WIN32__) || defined(WIN32)\n#   ifndef NOMINMAX\n#      define NOMINMAX\n#   endif\n#   ifndef WIN32_LEAN_AND_MEAN\n#      define WIN32_LEAN_AND_MEAN\n#   endif\n#   include <windows.h>\n#   include <ctime>\n#else\n#   include <ctime>\n#   include <sys/time.h>\n#   include <sys/types.h>\n#endif\n\nnamespace exprtk\n{\n   class timer\n   {\n   public:\n\n      #if defined(_WIN32) || defined(__WIN32__) || defined(WIN32)\n      timer()\n      : in_use_(false)\n      {\n         QueryPerformanceFrequency(&clock_frequency_);\n      }\n\n      inline void start()\n      {\n         in_use_ = true;\n         QueryPerformanceCounter(&start_time_);\n      }\n\n      inline void stop()\n      {\n         QueryPerformanceCounter(&stop_time_);\n         in_use_ = false;\n      }\n\n      inline double time() const\n      {\n         return (1.0 * (stop_time_.QuadPart - start_time_.QuadPart)) / (1.0 * clock_frequency_.QuadPart);\n      }\n\n      #else\n\n      timer()\n      : in_use_(false)\n      {\n         start_time_.tv_sec  = 0;\n         start_time_.tv_usec = 0;\n         stop_time_.tv_sec   = 0;\n         stop_time_.tv_usec  = 0;\n      }\n\n      inline void start()\n      {\n         in_use_ = true;\n         gettimeofday(&start_time_,0);\n      }\n\n      inline void stop()\n      {\n         gettimeofday(&stop_time_, 0);\n         in_use_ = false;\n      }\n\n      inline unsigned long long int usec_time() const\n      {\n         if (!in_use_)\n         {\n            if (stop_time_.tv_sec >= start_time_.tv_sec)\n            {\n               return 1000000LLU * static_cast<unsigned long long int>(stop_time_.tv_sec  - start_time_.tv_sec ) +\n                                   static_cast<unsigned long long int>(stop_time_.tv_usec - start_time_.tv_usec) ;\n            }\n            else\n               return std::numeric_limits<unsigned long long int>::max();\n         }\n         else\n            return std::numeric_limits<unsigned long long 
int>::max();\n      }\n\n      inline double time() const\n      {\n         return usec_time() * 0.000001;\n      }\n\n      #endif\n\n      inline bool in_use() const\n      {\n         return in_use_;\n      }\n\n   private:\n\n      bool in_use_;\n\n      #if defined(_WIN32) || defined(__WIN32__) || defined(WIN32)\n         LARGE_INTEGER start_time_;\n         LARGE_INTEGER stop_time_;\n         LARGE_INTEGER clock_frequency_;\n      #else\n         struct timeval start_time_;\n         struct timeval stop_time_;\n      #endif\n   };\n\n} // namespace exprtk\n\n#ifndef exprtk_disable_rtl_io\nnamespace exprtk\n{\n   namespace rtl { namespace io { namespace details\n   {\n      template <typename T>\n      inline void print_type(const std::string& fmt,\n                             const T v,\n                             exprtk::details::numeric::details::real_type_tag)\n      {\n         printf(fmt.c_str(),v);\n      }\n\n      template <typename T>\n      struct print_impl\n      {\n         typedef typename igeneric_function<T>::generic_type generic_type;\n         typedef typename igeneric_function<T>::parameter_list_t parameter_list_t;\n         typedef typename generic_type::scalar_view scalar_t;\n         typedef typename generic_type::vector_view vector_t;\n         typedef typename generic_type::string_view string_t;\n         typedef typename exprtk::details::numeric::details::number_type<T>::type num_type;\n\n         static void process(const std::string& scalar_format, parameter_list_t parameters)\n         {\n            for (std::size_t i = 0; i < parameters.size(); ++i)\n            {\n               generic_type& gt = parameters[i];\n\n               switch (gt.type)\n               {\n                  case generic_type::e_scalar : print(scalar_format,scalar_t(gt));\n                                                break;\n\n                  case generic_type::e_vector : print(scalar_format,vector_t(gt));\n                                       
         break;\n\n                  case generic_type::e_string : print(string_t(gt));\n                                                break;\n\n                  default                     : continue;\n               }\n            }\n         }\n\n         static inline void print(const std::string& scalar_format, const scalar_t& s)\n         {\n            print_type(scalar_format,s(),num_type());\n         }\n\n         static inline void print(const std::string& scalar_format, const vector_t& v)\n         {\n            for (std::size_t i = 0; i < v.size(); ++i)\n            {\n               print_type(scalar_format,v[i],num_type());\n\n               if ((i + 1) < v.size())\n                  printf(\" \");\n            }\n         }\n\n         static inline void print(const string_t& s)\n         {\n            printf(\"%s\",to_str(s).c_str());\n         }\n      };\n\n   } // namespace exprtk::rtl::io::details\n\n   template <typename T>\n   struct print : public exprtk::igeneric_function<T>\n   {\n      typedef typename igeneric_function<T>::parameter_list_t parameter_list_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      print(const std::string& scalar_format = \"%10.5f\")\n      : scalar_format_(scalar_format)\n      {\n         exprtk::enable_zero_parameters(*this);\n      }\n\n      inline T operator() (parameter_list_t parameters)\n      {\n         details::print_impl<T>::process(scalar_format_,parameters);\n         return T(0);\n      }\n\n      std::string scalar_format_;\n   };\n\n   template <typename T>\n   struct println : public exprtk::igeneric_function<T>\n   {\n      typedef typename igeneric_function<T>::parameter_list_t parameter_list_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      println(const std::string& scalar_format = \"%10.5f\")\n      : scalar_format_(scalar_format)\n      {\n         exprtk::enable_zero_parameters(*this);\n      }\n\n      inline T operator() (parameter_list_t 
parameters)\n      {\n         details::print_impl<T>::process(scalar_format_,parameters);\n         printf(\"\\n\");\n         return T(0);\n      }\n\n      std::string scalar_format_;\n   };\n\n   template <typename T>\n   struct package\n   {\n      print  <T> p;\n      println<T> pl;\n\n      bool register_package(exprtk::symbol_table<T>& symtab)\n      {\n         #define exprtk_register_function(FunctionName,FunctionType)              \\\n         if (!symtab.add_function(FunctionName,FunctionType))                     \\\n         {                                                                        \\\n            exprtk_debug((                                                        \\\n              \"exprtk::rtl::io::register_package - Failed to add function: %s\\n\", \\\n              FunctionName));                                                     \\\n            return false;                                                         \\\n         }                                                                        \\\n\n         exprtk_register_function(\"print\"  ,  p)\n         exprtk_register_function(\"println\", pl)\n         #undef exprtk_register_function\n\n         return true;\n      }\n   };\n\n   } // namespace exprtk::rtl::io\n   } // namespace exprtk::rtl\n}    // namespace exprtk\n#endif\n\n#ifndef exprtk_disable_rtl_io_file\n#include <fstream>\nnamespace exprtk\n{\n   namespace rtl { namespace io { namespace file { namespace details\n   {\n      enum file_mode\n      {\n         e_error = 0,\n         e_read  = 1,\n         e_write = 2,\n         e_rdwrt = 4\n      };\n\n      struct file_descriptor\n      {\n         file_descriptor(const std::string& fname, const std::string& access)\n         : stream_ptr(0),\n           mode(get_file_mode(access)),\n           file_name(fname)\n         {}\n\n         void*       stream_ptr;\n         file_mode   mode;\n         std::string file_name;\n\n         bool open()\n         {\n    
        if (e_read == mode)\n            {\n               std::ifstream* stream = new std::ifstream(file_name.c_str(),std::ios::binary);\n\n               if (!(*stream))\n               {\n                  file_name.clear();\n                  delete stream;\n\n                  return false;\n               }\n               else\n                  stream_ptr = stream;\n\n               return true;\n            }\n            else if (e_write == mode)\n            {\n               std::ofstream* stream = new std::ofstream(file_name.c_str(),std::ios::binary);\n\n               if (!(*stream))\n               {\n                  file_name.clear();\n                  delete stream;\n\n                  return false;\n               }\n               else\n                  stream_ptr = stream;\n\n               return true;\n            }\n            else if (e_rdwrt == mode)\n            {\n               std::fstream* stream = new std::fstream(file_name.c_str(),std::ios::binary);\n\n               if (!(*stream))\n               {\n                  file_name.clear();\n                  delete stream;\n\n                  return false;\n               }\n               else\n                  stream_ptr = stream;\n\n               return true;\n            }\n            else\n               return false;\n         }\n\n         template <typename Stream, typename Ptr>\n         void close(Ptr& p)\n         {\n            Stream* stream = reinterpret_cast<Stream*>(p);\n            stream->close();\n            delete stream;\n            p = reinterpret_cast<Ptr>(0);\n         }\n\n         bool close()\n         {\n            switch (mode)\n            {\n               case e_read  : close<std::ifstream>(stream_ptr);\n                              break;\n\n               case e_write : close<std::ofstream>(stream_ptr);\n                              break;\n\n               case e_rdwrt : close<std::fstream> (stream_ptr);\n                              
break;\n\n               default      : return false;\n            }\n\n            return true;\n         }\n\n         template <typename View>\n         bool write(const View& view, const std::size_t amount, const std::size_t offset = 0)\n         {\n            switch (mode)\n            {\n               case e_write : reinterpret_cast<std::ofstream*>(stream_ptr)->\n                                 write(reinterpret_cast<const char*>(view.begin() + offset), amount * sizeof(typename View::value_t));\n                              break;\n\n               case e_rdwrt : reinterpret_cast<std::fstream*>(stream_ptr)->\n                                 write(reinterpret_cast<const char*>(view.begin() + offset) , amount * sizeof(typename View::value_t));\n                              break;\n\n               default      : return false;\n            }\n\n            return true;\n         }\n\n         template <typename View>\n         bool read(View& view, const std::size_t amount, const std::size_t offset = 0)\n         {\n            switch (mode)\n            {\n               case e_read  : reinterpret_cast<std::ifstream*>(stream_ptr)->\n                                 read(reinterpret_cast<char*>(view.begin() + offset), amount * sizeof(typename View::value_t));\n                              break;\n\n               case e_rdwrt : reinterpret_cast<std::fstream*>(stream_ptr)->\n                                 read(reinterpret_cast<char*>(view.begin() + offset) , amount * sizeof(typename View::value_t));\n                              break;\n\n               default      : return false;\n            }\n\n            return true;\n         }\n\n         bool getline(std::string& s)\n         {\n            switch (mode)\n            {\n               case e_read  : return (!!std::getline(*reinterpret_cast<std::ifstream*>(stream_ptr),s));\n               case e_rdwrt : return (!!std::getline(*reinterpret_cast<std::fstream* >(stream_ptr),s));\n               
default      : return false;\n            }\n         }\n\n         bool eof()\n         {\n            switch (mode)\n            {\n               case e_read  : return reinterpret_cast<std::ifstream*>(stream_ptr)->eof();\n               case e_write : return reinterpret_cast<std::ofstream*>(stream_ptr)->eof();\n               case e_rdwrt : return reinterpret_cast<std::fstream* >(stream_ptr)->eof();\n               default      : return true;\n            }\n         }\n\n         file_mode get_file_mode(const std::string& access)\n         {\n            if (access.empty() || access.size() > 2)\n               return e_error;\n\n            std::size_t w_cnt = 0;\n            std::size_t r_cnt = 0;\n\n            for (std::size_t i = 0; i < access.size(); ++i)\n            {\n               switch (std::tolower(access[i]))\n               {\n                  case 'r' : r_cnt++; break;\n                  case 'w' : w_cnt++; break;\n                  default  : return e_error;\n               }\n            }\n\n            if ((0 == r_cnt) && (0 == w_cnt))\n               return e_error;\n            else if ((r_cnt > 1) || (w_cnt > 1))\n               return e_error;\n            else if ((1 == r_cnt) && (1 == w_cnt))\n               return e_rdwrt;\n            else if (1 == r_cnt)\n               return e_read;\n            else\n               return e_write;\n         }\n      };\n\n      template <typename T>\n      file_descriptor* make_handle(T v)\n      {\n         file_descriptor* fd = reinterpret_cast<file_descriptor*>(0);\n\n         std::memcpy(reinterpret_cast<char*>(&fd),\n                     reinterpret_cast<const char*>(&v),\n                     sizeof(fd));\n         return fd;\n      }\n\n      template <typename T>\n      void perform_check()\n      {\n         #ifdef _MSC_VER\n         #pragma warning(push)\n         #pragma warning(disable: 4127)\n         #endif\n         if (sizeof(T) < sizeof(void*))\n         {\n            throw 
std::runtime_error(\"exprtk::rtl::io::file - Error - pointer size larger than holder.\");\n         }\n         #ifdef _MSC_VER\n         #pragma warning(pop)\n         #endif\n      }\n\n   } // namespace exprtk::rtl::io::file::details\n\n   template <typename T>\n   class open : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::string_view    string_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      open()\n      : exprtk::igeneric_function<T>(\"S|SS\")\n      { details::perform_check<T>(); }\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         std::string file_name;\n         std::string access;\n\n         file_name = to_str(string_t(parameters[0]));\n\n         if (file_name.empty())\n            return T(0);\n\n         if (0 == ps_index)\n            access = \"r\";\n         else if (0 == string_t(parameters[1]).size())\n            return T(0);\n         else\n            access = to_str(string_t(parameters[1]));\n\n         details::file_descriptor* fd = new details::file_descriptor(file_name,access);\n\n         if (fd->open())\n         {\n            T t = T(0);\n\n            std::memcpy(reinterpret_cast<char*>(&t ),\n                        reinterpret_cast<char*>(&fd),\n                        sizeof(fd));\n            return t;\n         }\n         else\n         {\n            delete fd;\n            return T(0);\n         }\n      }\n   };\n\n   template <typename T>\n   struct close : public exprtk::ifunction<T>\n   {\n      using exprtk::ifunction<T>::operator();\n\n      close()\n      : exprtk::ifunction<T>(1)\n      { details::perform_check<T>(); }\n\n      inline T operator() (const T& v)\n      {\n         
details::file_descriptor* fd = details::make_handle(v);\n\n         if (!fd->close())\n            return T(0);\n\n         delete fd;\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class write : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::string_view    string_t;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      write()\n      : igfun_t(\"TS|TST|TV|TVT\")\n      { details::perform_check<T>(); }\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         details::file_descriptor* fd = details::make_handle(scalar_t(parameters[0])());\n\n         std::size_t amount = 0;\n\n         switch (ps_index)\n         {\n            case 0  : {\n                         const string_t buffer(parameters[1]);\n                         amount = buffer.size();\n                         return T(fd->write(buffer,amount) ? 1 : 0);\n                      }\n\n            case 1  : {\n                         const string_t buffer(parameters[1]);\n                         amount = std::min(buffer.size(),\n                                           static_cast<std::size_t>(scalar_t(parameters[2])()));\n                         return T(fd->write(buffer,amount) ? 1 : 0);\n                      }\n\n            case 2  : {\n                         const vector_t vec(parameters[1]);\n                         amount = vec.size();\n                         return T(fd->write(vec,amount) ? 
1 : 0);\n                      }\n\n            case 3  : {\n                         const vector_t vec(parameters[1]);\n                         amount = std::min(vec.size(),\n                                           static_cast<std::size_t>(scalar_t(parameters[2])()));\n                         return T(fd->write(vec,amount) ? 1 : 0);\n                      }\n         }\n\n         return T(0);\n      }\n   };\n\n   template <typename T>\n   class read : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::string_view    string_t;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      read()\n      : igfun_t(\"TS|TST|TV|TVT\")\n      { details::perform_check<T>(); }\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         details::file_descriptor* fd = details::make_handle(scalar_t(parameters[0])());\n\n         std::size_t amount = 0;\n\n         switch (ps_index)\n         {\n            case 0  : {\n                         string_t buffer(parameters[1]);\n                         amount = buffer.size();\n                         return T(fd->read(buffer,amount) ? 1 : 0);\n                      }\n\n            case 1  : {\n                         string_t buffer(parameters[1]);\n                         amount = std::min(buffer.size(),\n                                           static_cast<std::size_t>(scalar_t(parameters[2])()));\n                         return T(fd->read(buffer,amount) ? 
1 : 0);\n                      }\n\n            case 2  : {\n                         vector_t vec(parameters[1]);\n                         amount = vec.size();\n                         return T(fd->read(vec,amount) ? 1 : 0);\n                      }\n\n            case 3  : {\n                         vector_t vec(parameters[1]);\n                         amount = std::min(vec.size(),\n                                           static_cast<std::size_t>(scalar_t(parameters[2])()));\n                         return T(fd->read(vec,amount) ? 1 : 0);\n                      }\n         }\n\n         return T(0);\n      }\n   };\n\n   template <typename T>\n   class getline : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::string_view    string_t;\n      typedef typename generic_type::scalar_view    scalar_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      getline()\n      : igfun_t(\"T\",igfun_t::e_rtrn_string)\n      { details::perform_check<T>(); }\n\n      inline T operator() (std::string& result,\n                           parameter_list_t parameters)\n      {\n         details::file_descriptor* fd = details::make_handle(scalar_t(parameters[0])());\n         return T(fd->getline(result) ? 1 : 0);\n      }\n   };\n\n   template <typename T>\n   struct eof : public exprtk::ifunction<T>\n   {\n      using exprtk::ifunction<T>::operator();\n\n      eof()\n      : exprtk::ifunction<T>(1)\n      { details::perform_check<T>(); }\n\n      inline T operator() (const T& v)\n      {\n         details::file_descriptor* fd = details::make_handle(v);\n\n         return (fd->eof() ? 
T(1) : T(0));\n      }\n   };\n\n   template <typename T>\n   struct package\n   {\n      open   <T> o;\n      close  <T> c;\n      write  <T> w;\n      read   <T> r;\n      getline<T> g;\n      eof    <T> e;\n\n      bool register_package(exprtk::symbol_table<T>& symtab)\n      {\n         #define exprtk_register_function(FunctionName,FunctionType)                    \\\n         if (!symtab.add_function(FunctionName,FunctionType))                           \\\n         {                                                                              \\\n            exprtk_debug((                                                              \\\n              \"exprtk::rtl::io::file::register_package - Failed to add function: %s\\n\", \\\n              FunctionName));                                                           \\\n            return false;                                                               \\\n         }                                                                              \\\n\n         exprtk_register_function(\"open\"   ,o)\n         exprtk_register_function(\"close\"  ,c)\n         exprtk_register_function(\"write\"  ,w)\n         exprtk_register_function(\"read\"   ,r)\n         exprtk_register_function(\"getline\",g)\n         exprtk_register_function(\"eof\"    ,e)\n         #undef exprtk_register_function\n\n         return true;\n      }\n   };\n\n   } // namespace exprtk::rtl::io::file\n   } // namespace exprtk::rtl::io\n   } // namespace exprtk::rtl\n}    // namespace exprtk\n#endif\n\n#ifndef exprtk_disable_rtl_vecops\nnamespace exprtk\n{\n   namespace rtl { namespace vecops {\n\n   namespace helper\n   {\n      template <typename Vector>\n      inline bool invalid_range(const Vector& v, const std::size_t r0, const std::size_t r1)\n      {\n         if (r0 > (v.size() - 1))\n            return true;\n         else if (r1 > (v.size() - 1))\n            return true;\n         else if (r1 < r0)\n            return true;\n      
   else\n            return false;\n      }\n\n      template <typename T>\n      struct load_vector_range\n      {\n         typedef typename exprtk::igeneric_function<T> igfun_t;\n         typedef typename igfun_t::parameter_list_t    parameter_list_t;\n         typedef typename igfun_t::generic_type        generic_type;\n         typedef typename generic_type::scalar_view    scalar_t;\n         typedef typename generic_type::vector_view    vector_t;\n\n         static inline bool process(parameter_list_t& parameters,\n                                    std::size_t& r0, std::size_t& r1,\n                                    const std::size_t& r0_prmidx,\n                                    const std::size_t& r1_prmidx,\n                                    const std::size_t vec_idx = 0)\n         {\n            if (r0_prmidx >= parameters.size())\n               return false;\n\n            if (r1_prmidx >= parameters.size())\n               return false;\n\n            if (!scalar_t(parameters[r0_prmidx]).to_uint(r0))\n               return false;\n\n            if (!scalar_t(parameters[r1_prmidx]).to_uint(r1))\n               return false;\n\n            return !invalid_range(vector_t(parameters[vec_idx]), r0, r1);\n         }\n      };\n   }\n\n   namespace details\n   {\n      template <typename T>\n      inline void kahan_sum(T& sum, T& error, T v)\n      {\n         T x = v - error;\n         T y = sum + x;\n         error = (y - sum) - x;\n         sum = y;\n      }\n\n   } // namespace exprtk::rtl::details\n\n   template <typename T>\n   class all_true : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      all_true()\n      : 
exprtk::igeneric_function<T>(\"V|VTT\")\n        /*\n           Overloads:\n           0. V   - vector\n           1. VTT - vector, r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t vec(parameters[0]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = vec.size() - 1;\n\n         if (\n              (1 == ps_index) &&\n              !helper::load_vector_range<T>::process(parameters, r0, r1, 1, 2, 0)\n            )\n            return std::numeric_limits<T>::quiet_NaN();\n\n         for (std::size_t i = r0; i <= r1; ++i)\n         {\n            if (vec[i] == T(0))\n            {\n               return T(0);\n            }\n         }\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class all_false : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      all_false()\n      : exprtk::igeneric_function<T>(\"V|VTT\")\n        /*\n           Overloads:\n           0. V   - vector\n           1. 
VTT - vector, r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t vec(parameters[0]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = vec.size() - 1;\n\n         if (\n              (1 == ps_index) &&\n              !helper::load_vector_range<T>::process(parameters, r0, r1, 1, 2, 0)\n            )\n            return std::numeric_limits<T>::quiet_NaN();\n\n         for (std::size_t i = r0; i <= r1; ++i)\n         {\n            if (vec[i] != T(0))\n            {\n               return T(0);\n            }\n         }\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class any_true : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      any_true()\n      : exprtk::igeneric_function<T>(\"V|VTT\")\n        /*\n           Overloads:\n           0. V   - vector\n           1. 
VTT - vector, r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t vec(parameters[0]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = vec.size() - 1;\n\n         if (\n              (1 == ps_index) &&\n              !helper::load_vector_range<T>::process(parameters, r0, r1, 1, 2, 0)\n            )\n            return std::numeric_limits<T>::quiet_NaN();\n\n         for (std::size_t i = r0; i <= r1; ++i)\n         {\n            if (vec[i] != T(0))\n            {\n               return T(1);\n            }\n         }\n\n         return T(0);\n      }\n   };\n\n   template <typename T>\n   class any_false : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      any_false()\n      : exprtk::igeneric_function<T>(\"V|VTT\")\n        /*\n           Overloads:\n           0. V   - vector\n           1. 
VTT - vector, r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t vec(parameters[0]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = vec.size() - 1;\n\n         if (\n              (1 == ps_index) &&\n              !helper::load_vector_range<T>::process(parameters, r0, r1, 1, 2, 0)\n            )\n            return std::numeric_limits<T>::quiet_NaN();\n\n         for (std::size_t i = r0; i <= r1; ++i)\n         {\n            if (vec[i] == T(0))\n            {\n               return T(1);\n            }\n         }\n\n         return T(0);\n      }\n   };\n\n   template <typename T>\n   class count : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      count()\n      : exprtk::igeneric_function<T>(\"V|VTT\")\n        /*\n           Overloads:\n           0. V   - vector\n           1. 
VTT - vector, r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t vec(parameters[0]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = vec.size() - 1;\n\n         if (\n              (1 == ps_index) &&\n              !helper::load_vector_range<T>::process(parameters, r0, r1, 1, 2, 0)\n            )\n            return std::numeric_limits<T>::quiet_NaN();\n\n         std::size_t cnt = 0;\n\n         for (std::size_t i = r0; i <= r1; ++i)\n         {\n            if (vec[i] != T(0)) ++cnt;\n         }\n\n         return T(cnt);\n      }\n   };\n\n   template <typename T>\n   class copy : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      copy()\n      : exprtk::igeneric_function<T>(\"VV|VTTVTT\")\n        /*\n           Overloads:\n           0. VV     - x(vector), y(vector)\n           1. VTTVTT - x(vector), xr0, xr1, y(vector), yr0, yr1,\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t x(parameters[0]);\n               vector_t y(parameters[(0 == ps_index) ? 
1 : 3]);\n\n         std::size_t xr0 = 0;\n         std::size_t xr1 = x.size() - 1;\n\n         std::size_t yr0 = 0;\n         std::size_t yr1 = y.size() - 1;\n\n         if (1 == ps_index)\n         {\n            if (\n                 !helper::load_vector_range<T>::process(parameters, xr0, xr1, 1, 2, 0) ||\n                 !helper::load_vector_range<T>::process(parameters, yr0, yr1, 4, 5, 3)\n               )\n               return T(0);\n         }\n\n         const std::size_t n = std::min(xr1 - xr0 + 1, yr1 - yr0 + 1);\n\n         std::copy(x.begin() + xr0, x.begin() + xr0 + n, y.begin() + yr0);\n\n         return T(n);\n      }\n   };\n\n   template <typename T>\n   class rol : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      rol()\n      : exprtk::igeneric_function<T>(\"VT|VTTT\")\n        /*\n           Overloads:\n           0. VT   - vector, N\n           1. 
VTTT - vector, N, r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         vector_t vec(parameters[0]);\n\n         std::size_t n  = 0;\n         std::size_t r0 = 0;\n         std::size_t r1 = vec.size() - 1;\n\n         if (!scalar_t(parameters[1]).to_uint(n))\n            return T(0);\n\n         if (\n              (1 == ps_index) &&\n              !helper::load_vector_range<T>::process(parameters, r0, r1, 2, 3, 0)\n            )\n            return T(0);\n\n         std::size_t dist  = r1 - r0 + 1;\n         std::size_t shift = n % dist;\n\n         std::rotate(vec.begin() + r0, vec.begin() + r0 + shift, vec.begin() + r1 + 1);\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class ror : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      ror()\n      : exprtk::igeneric_function<T>(\"VT|VTTT\")\n        /*\n           Overloads:\n           0. VT   - vector, N\n           1. 
VTTT - vector, N, r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         vector_t vec(parameters[0]);\n\n         std::size_t n  = 0;\n         std::size_t r0 = 0;\n         std::size_t r1 = vec.size() - 1;\n\n         if (!scalar_t(parameters[1]).to_uint(n))\n            return T(0);\n\n         if (\n              (1 == ps_index) &&\n              !helper::load_vector_range<T>::process(parameters, r0, r1, 2, 3, 0)\n            )\n            return T(0);\n\n         std::size_t dist  = r1 - r0 + 1;\n         std::size_t shift = (dist - (n % dist)) % dist;\n\n         std::rotate(vec.begin() + r0, vec.begin() + r0 + shift, vec.begin() + r1 + 1);\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class shift_left : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      shift_left()\n      : exprtk::igeneric_function<T>(\"VT|VTTT\")\n        /*\n           Overloads:\n           0. VT   - vector, N\n           1. 
VTTT - vector, N, r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         vector_t vec(parameters[0]);\n\n         std::size_t n  = 0;\n         std::size_t r0 = 0;\n         std::size_t r1 = vec.size() - 1;\n\n         if (!scalar_t(parameters[1]).to_uint(n))\n            return T(0);\n\n         if (\n              (1 == ps_index) &&\n              !helper::load_vector_range<T>::process(parameters, r0, r1, 2, 3, 0)\n            )\n            return T(0);\n\n         std::size_t dist  = r1 - r0 + 1;\n\n         if (n > dist)\n            return T(0);\n\n         std::rotate(vec.begin() + r0, vec.begin() + r0 + n, vec.begin() + r1 + 1);\n\n         for (std::size_t i = r1 - n + 1; i <= r1; ++i)\n         {\n            vec[i] = T(0);\n         }\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class shift_right : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      shift_right()\n      : exprtk::igeneric_function<T>(\"VT|VTTT\")\n        /*\n           Overloads:\n           0. VT   - vector, N\n           1. 
VTTT - vector, N, r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         vector_t vec(parameters[0]);\n\n         std::size_t n  = 0;\n         std::size_t r0 = 0;\n         std::size_t r1 = vec.size() - 1;\n\n         if (!scalar_t(parameters[1]).to_uint(n))\n            return T(0);\n\n         if (\n              (1 == ps_index) &&\n              !helper::load_vector_range<T>::process(parameters, r0, r1, 2, 3, 0)\n            )\n            return T(0);\n\n         std::size_t dist  = r1 - r0 + 1;\n\n         if (n > dist)\n            return T(0);\n\n         std::size_t shift = (dist - (n % dist)) % dist;\n\n         std::rotate(vec.begin() + r0, vec.begin() + r0 + shift, vec.begin() + r1 + 1);\n\n         for (std::size_t i = r0; i < r0 + n; ++i)\n         {\n            vec[i] = T(0);\n         }\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class sort : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::string_view    string_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      sort()\n      : exprtk::igeneric_function<T>(\"V|VTT|VS|VSTT\")\n        /*\n           Overloads:\n           0. V    - vector\n           1. VTT  - vector, r0, r1\n           2. VS   - vector, string\n           3. 
VSTT - vector, string, r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         vector_t vec(parameters[0]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = vec.size() - 1;\n\n         if ((1 == ps_index) && !helper::load_vector_range<T>::process(parameters, r0, r1, 1, 2, 0))\n            return T(0);\n         if ((3 == ps_index) && !helper::load_vector_range<T>::process(parameters, r0, r1, 2, 3, 0))\n            return T(0);\n\n         bool ascending = true;\n\n         if ((2 == ps_index) || (3 == ps_index))\n         {\n            if (exprtk::details::imatch(to_str(string_t(parameters[1])),\"ascending\"))\n               ascending = true;\n            else if (exprtk::details::imatch(to_str(string_t(parameters[1])),\"descending\"))\n               ascending = false;\n            else\n               return T(0);\n         }\n\n         if (ascending)\n            std::sort(vec.begin() + r0, vec.begin() + r1 + 1, std::less<T>   ());\n         else\n            std::sort(vec.begin() + r0, vec.begin() + r1 + 1, std::greater<T>());\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class nthelement : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      nthelement()\n      : exprtk::igeneric_function<T>(\"VT|VTTT\")\n        /*\n           Overloads:\n           0. VT   - vector, nth-element\n           1. 
VTTT - vector, nth-element, r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         vector_t vec(parameters[0]);\n\n         std::size_t n  = 0;\n         std::size_t r0 = 0;\n         std::size_t r1 = vec.size() - 1;\n\n         if (!scalar_t(parameters[1]).to_uint(n))\n            return T(0);\n\n         if ((1 == ps_index) && !helper::load_vector_range<T>::process(parameters, r0, r1, 2, 3, 0))\n            return std::numeric_limits<T>::quiet_NaN();\n\n         std::nth_element(vec.begin() + r0, vec.begin() + r0 + n , vec.begin() + r1 + 1);\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class iota : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      iota()\n      : exprtk::igeneric_function<T>(\"VT|VTT|VTTT|VTTTT\")\n        /*\n           Overloads:\n           0. VT    - vector, increment\n           1. VTT   - vector, increment, base\n           2. VTTT  - vector, increment, r0, r1\n           3. VTTTT - vector, increment, base, r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         vector_t vec(parameters[0]);\n\n         T increment = scalar_t(parameters[1])();\n         T base      = ((1 == ps_index) || (3 == ps_index)) ? 
scalar_t(parameters[2])() : T(0);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = vec.size() - 1;\n\n         if ((2 == ps_index) && !helper::load_vector_range<T>::process(parameters, r0, r1, 2, 3, 0))\n            return std::numeric_limits<T>::quiet_NaN();\n         else if ((3 == ps_index) && !helper::load_vector_range<T>::process(parameters, r0, r1, 3, 4, 0))\n            return std::numeric_limits<T>::quiet_NaN();\n         else\n         {\n            long long j = 0;\n\n            for (std::size_t i = r0; i <= r1; ++i, ++j)\n            {\n               vec[i] = base + (increment * j);\n            }\n         }\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class sumk : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      sumk()\n      : exprtk::igeneric_function<T>(\"V|VTT\")\n        /*\n           Overloads:\n           0. V   - vector\n           1. 
VTT - vector, r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t vec(parameters[0]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = vec.size() - 1;\n\n         if ((1 == ps_index) && !helper::load_vector_range<T>::process(parameters, r0, r1, 1, 2, 0))\n            return std::numeric_limits<T>::quiet_NaN();\n\n         T result = T(0);\n         T error  = T(0);\n\n         for (std::size_t i = r0; i <= r1; ++i)\n         {\n            details::kahan_sum(result, error, vec[i]);\n         }\n\n         return result;\n      }\n   };\n\n   template <typename T>\n   class axpy : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      axpy()\n      : exprtk::igeneric_function<T>(\"TVV|TVVTT\")\n        /*\n           y <- ax + y\n           Overloads:\n           0. TVV   - a, x(vector), y(vector)\n           1. 
TVVTT - a, x(vector), y(vector), r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t x(parameters[1]);\n               vector_t y(parameters[2]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = std::min(x.size(),y.size()) - 1;\n\n         if ((1 == ps_index) && !helper::load_vector_range<T>::process(parameters, r0, r1, 3, 4, 1))\n            return std::numeric_limits<T>::quiet_NaN();\n         else if (helper::invalid_range(y, r0, r1))\n            return std::numeric_limits<T>::quiet_NaN();\n\n         T a = scalar_t(parameters[0])();\n\n         for (std::size_t i = r0; i <= r1; ++i)\n         {\n            y[i] = (a * x[i]) + y[i];\n         }\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class axpby : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      axpby()\n      : exprtk::igeneric_function<T>(\"TVTV|TVTVTT\")\n        /*\n           y <- ax + by\n           Overloads:\n           0. TVTV   - a, x(vector), b, y(vector)\n           1. 
TVTVTT - a, x(vector), b, y(vector), r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t x(parameters[1]);\n               vector_t y(parameters[3]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = std::min(x.size(),y.size()) - 1;\n\n         if ((1 == ps_index) && !helper::load_vector_range<T>::process(parameters, r0, r1, 4, 5, 1))\n            return std::numeric_limits<T>::quiet_NaN();\n         else if (helper::invalid_range(y, r0, r1))\n            return std::numeric_limits<T>::quiet_NaN();\n\n         const T a = scalar_t(parameters[0])();\n         const T b = scalar_t(parameters[2])();\n\n         for (std::size_t i = r0; i <= r1; ++i)\n         {\n            y[i] = (a * x[i]) + (b * y[i]);\n         }\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class axpyz : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      axpyz()\n      : exprtk::igeneric_function<T>(\"TVVV|TVVVTT\")\n        /*\n           z <- ax + y\n           Overloads:\n           0. TVVV   - a, x(vector), y(vector), z(vector)\n           1. 
TVVVTT - a, x(vector), y(vector), z(vector), r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t x(parameters[1]);\n         const vector_t y(parameters[2]);\n               vector_t z(parameters[3]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = std::min(x.size(),y.size()) - 1;\n\n         if ((1 == ps_index) && !helper::load_vector_range<T>::process(parameters, r0, r1, 3, 4, 1))\n            return std::numeric_limits<T>::quiet_NaN();\n         else if (helper::invalid_range(y, r0, r1))\n            return std::numeric_limits<T>::quiet_NaN();\n         else if (helper::invalid_range(z, r0, r1))\n            return std::numeric_limits<T>::quiet_NaN();\n\n         T a = scalar_t(parameters[0])();\n\n         for (std::size_t i = r0; i <= r1; ++i)\n         {\n            z[i] = (a * x[i]) + y[i];\n         }\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class axpbyz : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      axpbyz()\n      : exprtk::igeneric_function<T>(\"TVTVV|TVTVVTT\")\n        /*\n           z <- ax + by\n           Overloads:\n           0. TVTVV   - a, x(vector), b, y(vector), z(vector)\n           1. 
TVTVVTT - a, x(vector), b, y(vector), z(vector), r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t x(parameters[1]);\n         const vector_t y(parameters[3]);\n               vector_t z(parameters[4]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = std::min(x.size(),y.size()) - 1;\n\n         if ((1 == ps_index) && !helper::load_vector_range<T>::process(parameters, r0, r1, 4, 5, 1))\n            return std::numeric_limits<T>::quiet_NaN();\n         else if (helper::invalid_range(y, r0, r1))\n            return std::numeric_limits<T>::quiet_NaN();\n         else if (helper::invalid_range(z, r0, r1))\n            return std::numeric_limits<T>::quiet_NaN();\n\n         const T a = scalar_t(parameters[0])();\n         const T b = scalar_t(parameters[2])();\n\n         for (std::size_t i = r0; i <= r1; ++i)\n         {\n            z[i] = (a * x[i]) + (b * y[i]);\n         }\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class axpbz : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      axpbz()\n      : exprtk::igeneric_function<T>(\"TVTV|TVTVTT\")\n        /*\n           z <- ax + b\n           Overloads:\n           0. TVTV   - a, x(vector), b, z(vector)\n           1. 
TVTVTT - a, x(vector), b, z(vector), r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t x(parameters[1]);\n               vector_t z(parameters[3]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = x.size() - 1;\n\n         if ((1 == ps_index) && !helper::load_vector_range<T>::process(parameters, r0, r1, 4, 5, 1))\n            return std::numeric_limits<T>::quiet_NaN();\n         else if (helper::invalid_range(z, r0, r1))\n            return std::numeric_limits<T>::quiet_NaN();\n\n         const T a = scalar_t(parameters[0])();\n         const T b = scalar_t(parameters[2])();\n\n         for (std::size_t i = r0; i <= r1; ++i)\n         {\n            z[i] = (a * x[i]) + b;\n         }\n\n         return T(1);\n      }\n   };\n\n   template <typename T>\n   class dot : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      dot()\n      : exprtk::igeneric_function<T>(\"VV|VVTT\")\n        /*\n           Overloads:\n           0. VV   - x(vector), y(vector)\n           1. 
VVTT - x(vector), y(vector), r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t x(parameters[0]);\n         const vector_t y(parameters[1]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = std::min(x.size(),y.size()) - 1;\n\n         if ((1 == ps_index) && !helper::load_vector_range<T>::process(parameters, r0, r1, 2, 3, 0))\n            return std::numeric_limits<T>::quiet_NaN();\n         else if (helper::invalid_range(y, r0, r1))\n            return std::numeric_limits<T>::quiet_NaN();\n\n         T result = T(0);\n\n         for (std::size_t i = r0; i <= r1; ++i)\n         {\n            result += (x[i] * y[i]);\n         }\n\n         return result;\n      }\n   };\n\n   template <typename T>\n   class dotk : public exprtk::igeneric_function<T>\n   {\n   public:\n\n      typedef typename exprtk::igeneric_function<T> igfun_t;\n      typedef typename igfun_t::parameter_list_t    parameter_list_t;\n      typedef typename igfun_t::generic_type        generic_type;\n      typedef typename generic_type::scalar_view    scalar_t;\n      typedef typename generic_type::vector_view    vector_t;\n\n      using exprtk::igeneric_function<T>::operator();\n\n      dotk()\n      : exprtk::igeneric_function<T>(\"VV|VVTT\")\n        /*\n           Overloads:\n           0. VV   - x(vector), y(vector)\n           1. 
VVTT - x(vector), y(vector), r0, r1\n        */\n      {}\n\n      inline T operator() (const std::size_t& ps_index, parameter_list_t parameters)\n      {\n         const vector_t x(parameters[0]);\n         const vector_t y(parameters[1]);\n\n         std::size_t r0 = 0;\n         std::size_t r1 = std::min(x.size(),y.size()) - 1;\n\n         if ((1 == ps_index) && !helper::load_vector_range<T>::process(parameters, r0, r1, 2, 3, 0))\n            return std::numeric_limits<T>::quiet_NaN();\n         else if (helper::invalid_range(y, r0, r1))\n            return std::numeric_limits<T>::quiet_NaN();\n\n         T result = T(0);\n         T error  = T(0);\n\n         for (std::size_t i = r0; i <= r1; ++i)\n         {\n            details::kahan_sum(result, error, (x[i] * y[i]));\n         }\n\n         return result;\n      }\n   };\n\n   template <typename T>\n   struct package\n   {\n      all_true   <T> at;\n      all_false  <T> af;\n      any_true   <T> nt;\n      any_false  <T> nf;\n      count      <T>  c;\n      copy       <T> cp;\n      rol        <T> rl;\n      ror        <T> rr;\n      shift_left <T> sl;\n      shift_right<T> sr;\n      sort       <T> st;\n      nthelement <T> ne;\n      iota       <T> ia;\n      sumk       <T> sk;\n      axpy       <T> b1_axpy;\n      axpby      <T> b1_axpby;\n      axpyz      <T> b1_axpyz;\n      axpbyz     <T> b1_axpbyz;\n      axpbz      <T> b1_axpbz;\n      dot        <T> dt;\n      dotk       <T> dtk;\n\n      bool register_package(exprtk::symbol_table<T>& symtab)\n      {\n         #define exprtk_register_function(FunctionName,FunctionType)                  \\\n         if (!symtab.add_function(FunctionName,FunctionType))                         \\\n         {                                                                            \\\n            exprtk_debug((                                                            \\\n              \"exprtk::rtl::vecops::register_package - Failed to add function: %s\\n\", \\\n  
            FunctionName));                                                         \\\n            return false;                                                             \\\n         }                                                                            \\\n\n         exprtk_register_function(\"all_true\"     ,at)\n         exprtk_register_function(\"all_false\"    ,af)\n         exprtk_register_function(\"any_true\"     ,nt)\n         exprtk_register_function(\"any_false\"    ,nf)\n         exprtk_register_function(\"count\"        , c)\n         exprtk_register_function(\"copy\"        , cp)\n         exprtk_register_function(\"rotate_left\"  ,rl)\n         exprtk_register_function(\"rol\"          ,rl)\n         exprtk_register_function(\"rotate_right\" ,rr)\n         exprtk_register_function(\"ror\"          ,rr)\n         exprtk_register_function(\"shftl\"        ,sl)\n         exprtk_register_function(\"shftr\"        ,sr)\n         exprtk_register_function(\"sort\"         ,st)\n         exprtk_register_function(\"nth_element\"  ,ne)\n         exprtk_register_function(\"iota\"         ,ia)\n         exprtk_register_function(\"sumk\"         ,sk)\n         exprtk_register_function(\"axpy\"    ,b1_axpy)\n         exprtk_register_function(\"axpby\"  ,b1_axpby)\n         exprtk_register_function(\"axpyz\"  ,b1_axpyz)\n         exprtk_register_function(\"axpbyz\",b1_axpbyz)\n         exprtk_register_function(\"axpbz\"  ,b1_axpbz)\n         exprtk_register_function(\"dot\"          ,dt)\n         exprtk_register_function(\"dotk\"        ,dtk)\n         #undef exprtk_register_function\n\n         return true;\n      }\n   };\n\n   } // namespace exprtk::rtl::vecops\n   } // namespace exprtk::rtl\n}    // namespace exprtk\n#endif\n\nnamespace exprtk\n{\n   namespace information\n   {\n      static const char* library = \"Mathematical Expression Toolkit\";\n      static const char* version = \"2.718281828459045235360287471352662497757247093699\"\n           
                        \"95957496696762772407663035354759457138217852516642\";\n      static const char* date    = \"20180913\";\n\n      static inline std::string data()\n      {\n         static const std::string info_str = std::string(library) +\n                                             std::string(\" v\") + std::string(version) +\n                                             std::string(\" (\") + date + std::string(\")\");\n         return info_str;\n      }\n\n   } // namespace information\n\n   #ifdef exprtk_debug\n   #undef exprtk_debug\n   #endif\n\n   #ifdef exprtk_error_location\n   #undef exprtk_error_location\n   #endif\n\n   #ifdef exprtk_disable_fallthrough_begin\n   #undef exprtk_disable_fallthrough_begin\n   #endif\n\n   #ifdef exprtk_disable_fallthrough_end\n   #undef exprtk_disable_fallthrough_end\n   #endif\n\n} // namespace exprtk\n\n#endif\n"
  },
  {
    "path": "C/common/include/file_utils.h",
    "content": "/*\n * Fledge utilities functions for handling files and directories\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Ray Verhoeff\n */\n\n#pragma once\n#include <string>\n\nint copyFile(const char *to, const char *from);\nvoid createDirectory(const std::string &directoryName);\nint removeDirectory(const char *path);\n"
  },
  {
    "path": "C/common/include/filter_pipeline.h",
    "content": "#ifndef _FILTER_PIPELINE_H\n#define _FILTER_PIPELINE_H\n/*\n * Fledge filter pipeline class.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora\n */\n#include <plugin.h>\n#include <plugin_manager.h>\n#include <config_category.h>\n#include <management_client.h>\n#include <plugin_data.h>\n#include <reading_set.h>\n#include <filter_plugin.h>\n#include <service_handler.h>\n#include <pipeline_element.h>\ntypedef void (*filterReadingSetFn)(OUTPUT_HANDLE *outHandle, READINGSET* readings);\n\n/**\n * The FilterPipeline class is used to represent a pipeline of filters \n * applicable to a task/service. Methods are provided to load filters, \n * setup filtering pipeline and for pipeline/filters cleanup.\n */\nclass FilterPipeline\n{\n\npublic:\n\tFilterPipeline(ManagementClient* mgtClient,\n\t\t\tStorageClient& storage,\n\t\t\tstd::string serviceName);\n\t~FilterPipeline();\n\tPipelineElement *getFirstFilterPlugin()\n\t{\n\t\treturn (m_filters.begin() == m_filters.end()) ?\n\t\t\tNULL : *(m_filters.begin());\n\t};\n\tunsigned int\tgetFilterCount() { return m_filters.size(); }\n\tvoid\t\tconfigChange(const std::string&, const std::string&);\n\t\n\t// Cleanup the loaded filters\n\tvoid \t\tcleanupFilters(const std::string& categoryName);\n\t// Load filters as specified in the configuration\n\tbool\t\tloadFilters(const std::string& categoryName);\n\t// Setup the filter pipeline\n\tbool\t\tsetupFiltersPipeline(void *passToOnwardFilter,\n\t\t\t\t\t     void *useFilteredData,\n\t\t\t\t\t     void *ingest);\n\t// Check FilterPipeline is ready for data ingest\n\tbool\t\tisReady() { return m_ready; };\n\tbool\t\thasChanged(const std::string pipeline) const { return m_pipeline != pipeline; }\n\tbool\t\tisShuttingDown() { return m_shutdown; };\n\tvoid \t\tsetShuttingDown() { m_shutdown = true; 
}\n\tvoid\t\texecute();\n\tvoid\t\tawaitCompletion();\n\tvoid\t\tstartBranch();\n\tvoid\t\tcompleteBranch();\n\t// The filter pipeline debugger entry points\n\tbool\t\tattachDebugger();\n\tvoid\t\tdetachDebugger();\n\tvoid\t\tsetDebuggerBuffer(unsigned int size);\n\tstd::string\tgetDebuggerBuffer();\n\tstd::string\tgetDebuggerBuffer(const std::string& name);\n\tbool\t\treplayDebugger();\n\nprivate:\n\tPLUGIN_HANDLE\tloadFilterPlugin(const std::string& filterName);\n\tvoid\t\tloadPipeline(const rapidjson::Value& filters, std::vector<PipelineElement *>& pipeline);\n\tbool\t\tattachDebugger(const std::vector<PipelineElement *>& pipeline);\n\tvoid\t\tdetachDebugger(const std::vector<PipelineElement *>& pipeline);\n\tvoid\t\tsetDebuggerBuffer(const std::vector<PipelineElement *>& pipeline, unsigned int size);\n\tstd::string\tgetDebuggerBuffer(const std::vector<PipelineElement *>& pipeline);\n\tstd::string\treadingsToJSON(std::vector<std::shared_ptr<Reading>> readings);\n\nprotected:\n\tManagementClient*\tmgtClient;\n\tStorageClient&\t\tstorage;\n\tstd::string\t\tserviceName;\n\tstd::vector<PipelineElement *>\n\t\t\t\tm_filters;\t// Elements in the \"trunk\" pipeline\n\tstd::map<std::string, PipelineElement *>\n\t\t\t\tm_filterCategories;\n\tstd::string\t\tm_pipeline;\n\tbool\t\t\tm_ready;\n\tbool\t\t\tm_shutdown;\n\tServiceHandler\t\t*m_serviceHandler;\n\tint\t\t\tm_activeBranches;\n\tstd::mutex\t\tm_actives;\n\tstd::condition_variable\tm_branchActivations;\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/filter_plugin.h",
    "content": "#ifndef _FILTER_PLUGIN_H\n#define _FILTER_PLUGIN_H\n/*\n * Fledge filter plugin class.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n#include <plugin.h>\n#include <plugin_manager.h>\n#include <config_category.h>\n#include <management_client.h>\n#include <plugin_data.h>\n#include <reading_set.h>\n\n// This is a C++ ReadingSet class instance passed through\ntypedef ReadingSet READINGSET;\n// Data handle passed to function pointer\ntypedef void OUTPUT_HANDLE;\n// Function pointer called by \"plugin_ingest\" plugin method\ntypedef void (*OUTPUT_STREAM)(OUTPUT_HANDLE *, READINGSET *);\n\n// FilterPlugin class\nclass FilterPlugin : public Plugin\n{\n\npublic:\n        FilterPlugin(const std::string& name,\n\t\t     PLUGIN_HANDLE handle);\n        ~FilterPlugin();\n\n\tconst std::string\tgetName() const { return m_name; };\n        PLUGIN_HANDLE\t\tinit(const ConfigCategory& config,\n\t\t\t\t     OUTPUT_HANDLE* outHandle,\n\t\t\t\t     OUTPUT_STREAM outputFunc);\n        void\t\t\tshutdown();\n        void\t\t\tingest(READINGSET *);\n\tbool\t\t\tpersistData() { return info->options & SP_PERSIST_DATA; };\n\tvoid\t\t\tstartData(const std::string& pluginData);\n\tstd::string\t\tshutdownSaveData();\n\tvoid\t\t\tstart();\n\tvoid\t\t\treconfigure(const std::string&);\n\nprivate:\n\tPLUGIN_HANDLE\t(*pluginInit)(const ConfigCategory* config,\n\t\t\t\t      OUTPUT_HANDLE* outHandle,\n\t\t\t\t      OUTPUT_STREAM output);\n        void            (*pluginShutdownPtr)(PLUGIN_HANDLE);\n        void            (*pluginReconfigurePtr)(PLUGIN_HANDLE, const std::string&);\n        void            (*pluginIngestPtr)(PLUGIN_HANDLE,\n\t\t\t\t\t   READINGSET *);\n\tstd::string\t(*pluginShutdownDataPtr)(const PLUGIN_HANDLE);\n\tvoid\t\t(*pluginStartDataPtr)(PLUGIN_HANDLE,\n\t\t\t\t\t      const std::string& pluginData);\n\tvoid\t\t(*pluginStartPtr)(PLUGIN_HANDLE);\n\npublic:\n\t// Persist 
plugin data\n\tPluginData*\tm_plugin_data;\n\nprivate:\n\tstd::string\tm_name;\n        PLUGIN_HANDLE   m_instance;\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/form_data.h",
    "content": "#ifndef _FORM_DATA_H\n#define _FORM_DATA_H\n/*\n * Fledge utilities functions for handling HTTP form data upload\n * with multipart data\n *\n * Copyright (c) 2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <logger.h>\n#include <server_http.hpp>\n\n#define CR '\\r'\n#define LF '\\n'\n\n/**\n * This class represents parsed HTTP form data uploaded\n * to SimpleWeb::Server<SimpleWeb::HTTP>\n *\n * FormData::FieldValue holds the field value as buffer start, size\n * and filename if data comes from a file upload\n *\n * FormData holds the input buffer, size and boundary multipart data\n *\n * Public methods fetch the value of a given field name\n * and save a file to the filesystem\n */\nclass FormData {\n\tpublic:\n\t\tclass FieldValue {\n\t\t\tpublic:\n\t\t\t\tFieldValue()\n\t\t\t\t{\n\t\t\t\t\tsize = 0;\n\t\t\t\t\tstart = NULL;\n\t\t\t\t};\n\t\t\t\tconst uint8_t*  start;\n\t\t\t\tsize_t\t\tsize;\n\t\t\t\tstd::string\tfilename;\n\t\t};\n\n\tpublic:\n\t\tFormData(std::shared_ptr<SimpleWeb::Server<SimpleWeb::HTTP>::Request> request);\n\t\tvoid\t\tgetUploadedData(const std::string& field, FieldValue& data);\n\t\tvoid\t\tgetUploadedFile(const std::string& field, FieldValue& data);\n\t\tbool\t\tsaveFile(FieldValue& b, const std::string& fileName);\n\n\tprivate:\n\t\tuint8_t*\tskipSeparator(uint8_t *b);\n\t\tuint8_t*\tskipDoubleSeparator(uint8_t *b);\n\t\tuint8_t*\tgetContentEnd(uint8_t *b);\n\t\tuint8_t*\tfindDataFormField(uint8_t* buffer, const std::string& field);\n\n\tprivate:\n\t\tconst uint8_t*\tm_buffer; \t// pointer to already allocated buffer data\n\t\tsize_t\t\tm_size; \t// buffer size\n\t\tstd::string\tm_boundary;\t// multipart boundary\n};\n#endif\n"
  },
  {
    "path": "C/common/include/insert.h",
    "content": "#ifndef _INSERT_H\n#define _INSERT_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n#include <sstream>\n#include <iostream>\n#include <vector>\n#include <resultset.h>\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/error/error.h\"\n\n\n/**\n * Class that defines data to be inserted or updated in a column within the table\n */\nclass InsertValue {\n\tpublic:\n\t\tInsertValue(const std::string& column, const std::string& value) :\n\t\t\t\tm_column(column)\n\t\t{\n\t\t\tm_value.str = (char *)malloc(value.length() + 1);\n\t\t\tstrncpy(m_value.str, value.c_str(), value.length() + 1);\n\t\t\tm_type = STRING_COLUMN;\n\t\t};\n\t\tInsertValue(const std::string& column, const int value) :\n\t\t\t\tm_column(column)\n\t\t{\n\t\t\tm_value.ival = value;\n\t\t\tm_type = INT_COLUMN;\n\t\t};\n\t\tInsertValue(const std::string& column, const long value) :\n\t\t\t\tm_column(column)\n\t\t{\n\t\t\tm_value.ival = value;\n\t\t\tm_type = INT_COLUMN;\n\t\t};\n\t\tInsertValue(const std::string& column, const double value) :\n\t\t\t\tm_column(column)\n\t\t{\n\t\t\tm_value.fval = value;\n\t\t\tm_type = NUMBER_COLUMN;\n\t\t};\n\t\tInsertValue(const std::string& column, const rapidjson::Value& value) :\n\t\t\t\tm_column(column)\n\t\t{\n\t\t\trapidjson::StringBuffer sb;\n\t\t\trapidjson::Writer<rapidjson::StringBuffer> writer(sb);\n\t\t\tvalue.Accept(writer);\n\t\t\tstd::string s = sb.GetString();\n\t\t\tm_value.str = (char *)malloc(s.length() + 1);\n\t\t\tstrncpy(m_value.str, s.c_str(), s.length() + 1);\n\t\t\tm_type = JSON_COLUMN;\n\t\t};\n\n\t\t// Insert a NULL value for the given column\n\t\tInsertValue(const std::string& column) :\n\t\t\t\tm_column(column)\n\t\t{\n\t\t\tm_type = NULL_COLUMN;\n\t\t\tm_value.str = NULL;\n\t\t}\n\n\t\tInsertValue(const InsertValue& rhs) : 
m_column(rhs.m_column)\n\t\t{\n\t\t\tm_type = rhs.m_type;\n\t\t\tswitch (rhs.m_type)\n\t\t\t{\n\t\t\tcase INT_COLUMN:\n\t\t\t\tm_value.ival = rhs.m_value.ival;\n\t\t\t\tbreak;\n\t\t\tcase NUMBER_COLUMN:\n\t\t\t\tm_value.fval = rhs.m_value.fval;\n\t\t\t\tbreak;\n\t\t\tcase STRING_COLUMN:\n\t\t\t\tm_value.str = strdup(rhs.m_value.str);\n\t\t\t\tbreak;\n\t\t\tcase JSON_COLUMN:\t// Internally stored as a string\n\t\t\t\tm_value.str = strdup(rhs.m_value.str);\n\t\t\t\tbreak;\n\t\t\tcase NULL_COLUMN:\n\t\t\t\tm_value.str = NULL;\n\t\t\t\tbreak;\n\t\t\tcase BOOL_COLUMN:\n\t\t\t\t// TODO\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\t~InsertValue()\n\t\t{\n\t\t\tif (m_type == STRING_COLUMN || m_type == JSON_COLUMN)\n\t\t\t{\n\t\t\t\tfree(m_value.str);\n\t\t\t}\n\t\t};\n\t\tconst std::string\ttoJSON() const\n\t\t{\n\t\tstd::ostringstream json;\n\n\t\t\tjson << \"\\\"\" << m_column << \"\\\" : \";\n\t\t\tswitch (m_type)\n\t\t\t{\n\t\t\tcase JSON_COLUMN:\n\t\t\t\tjson << m_value.str;\n\t\t\t\tbreak;\n\t\t\tcase BOOL_COLUMN:\n\t\t\t\tjson << m_value.ival;\n\t\t\t\tbreak;\n\t\t\tcase INT_COLUMN:\n\t\t\t\tjson << m_value.ival;\n\t\t\t\tbreak;\n\t\t\tcase NUMBER_COLUMN:\n\t\t\t\tjson << m_value.fval;\n\t\t\t\tbreak;\n\t\t\tcase STRING_COLUMN:\n\t\t\t\tjson << \"\\\"\" << m_value.str << \"\\\"\";\n\t\t\t\tbreak;\n\t\t\tcase NULL_COLUMN:\n\t\t\t\t// JSON output for NULL value\n\t\t\t\tjson << \"null\";\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\treturn json.str();\n\t\t}\n\tprivate:\n\t\tInsertValue&\t\toperator=(InsertValue const& rhs);\n\t\tconst std::string\tm_column;\n\t\tColumnType\t\tm_type;\n\t\tunion {\n\t\t\tchar\t*str;\n\t\t\tlong\tival;\n\t\t\tdouble\tfval;\n\t\t\t}\t\tm_value;\n};\n\nclass InsertValues : public std::vector<InsertValue>\n{\n\tpublic:\n\t\tconst std::string\ttoJSON() const\n\t\t{\n\t\tstd::ostringstream json;\n\n\t\t\tjson << \"{ \";\n\t\t\tfor (std::vector<InsertValue>::const_iterator it = this->cbegin();\n\t\t\t\t it != this->cend(); ++it)\n\n\t\t\t{\n\t\t\t\tjson << 
it->toJSON();\n\t\t\t\tif (it + 1 != this->cend())\n\t\t\t\t\tjson << \", \";\n\t\t\t\telse\n\t\t\t\t\tjson << \" \";\n\t\t\t}\n\t\t\tjson << \"}\";\n\t\t\treturn json.str();\n\t\t};\n};\n#endif\n\n"
  },
  {
    "path": "C/common/include/join.h",
    "content": "#ifndef _JOIN_H\n#define _JOIN_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n\nclass Query;\n\n/**\n * Join clause representation\n */\nclass Join {\n\tpublic:\n\t\tJoin(const std::string& table, const std::string& on, Query *query) :\n\t\t\t\tm_table(table), m_column(on), m_on(on), m_query(query)\n\t\t{\n\t\t};\n\t\tJoin(const std::string& table, const std::string& column, const std::string& on, Query *query) :\n\t\t\t\tm_table(table), m_column(column), m_on(on), m_query(query)\n\t\t{\n\t\t};\n\t\t~Join();\n\t\tconst std::string\ttoJSON() const;\n\tprivate:\n\t\tJoin(const Join&);\n\t\tJoin&\t\t\toperator=(Join const&);\n\t\tconst std::string\tm_table;\n\t\tconst std::string\tm_column;\n\t\tconst std::string\tm_on;\n\t\tQuery\t\t\t*m_query;\n};\n#endif\n\n"
  },
  {
    "path": "C/common/include/json_properties.h",
    "content": "#ifndef _JSON_PROPERTIES_H\n#define _JSON_PROPERTIES_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n#include <sstream>\n#include <iostream>\n#include <vector>\n\nclass JSONProperty {\n\tpublic:\n\t\tJSONProperty(const std::string& column, std::vector<std::string> path, const std::string& value) :\n\t\t\t\t\tm_column(column), m_value(value)\n\t\t{\n\t\t\tfor (std::vector<std::string>::const_iterator it = path.cbegin();\n\t\t\t\t\tit != path.cend(); ++it)\n\t\t\t\tm_path.push_back(*it);\n\t\t}\n\n\t\tconst std::string\ttoJSON() const\n\t\t{\n\t\tstd::ostringstream json;\n\n\t\t\tjson << \"{ \\\"column\\\" : \\\"\" << m_column << \"\\\",\";\n\t\t\tjson << \" \\\"path\\\" : [\";\n\t\t\tfor (std::vector<std::string>::const_iterator it = m_path.cbegin();\n\t\t\t\t\tit != m_path.cend(); ++it)\n\t\t\t{\n\t\t\t\tjson << \"\\\"\" << *it << \"\\\"\";\n\t\t\t\tif ((it + 1) != m_path.cend())\n\t\t\t\t\tjson << \",\";\n\t\t\t}\n\t\t\tjson << \"],\";\n\t\t\tjson << \"\\\"value\\\" : \\\"\" << m_value << \"\\\" }\";\n\t\t\treturn json.str();\n\t\t}\n\tprivate:\n\t\tconst std::string\t\tm_column;\n\t\tconst std::string\t\tm_value;\n\t\tstd::vector<std::string>\tm_path;\n};\n\n/**\n * Class that defines JSON properties for update\n */\nclass JSONProperties : public std::vector<JSONProperty>\n{\n\tpublic:\n\t\tconst std::string\ttoJSON() const\n\t\t{\n\t\tstd::ostringstream json;\n\n\t\t\tjson << \"\\\"json_properties\\\" : [ \";\n\t\t\tfor (std::vector<JSONProperty>::const_iterator it = this->cbegin();\n\t\t\t\t it != this->cend(); ++it)\n\n\t\t\t{\n\t\t\t\tjson << it->toJSON();\n\t\t\t\tif (it + 1 != this->cend())\n\t\t\t\t\tjson << \", \";\n\t\t\t\telse\n\t\t\t\t\tjson << \" \";\n\t\t\t}\n\t\t\tjson << \"]\";\n\t\t\treturn json.str();\n\t\t};\n};\n#endif\n\n"
  },
  {
    "path": "C/common/include/json_provider.h",
    "content": "#ifndef _JSONPROVIDER_H\n#define _JSONPROVIDER_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n\nclass JSONProvider\n{\n\tpublic:\n\t\tvirtual void\tasJSON(std::string &) const = 0;\n};\n#endif\n"
  },
  {
    "path": "C/common/include/json_utils.h",
    "content": "#ifndef _JSON_UTILS_H\n#define _JSON_UTILS_H\n/*\n * Fledge utility functions for handling JSON documents\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli\n */\n\n#include <string>\n#include <vector>\n\nbool JSONStringToVectorString(std::vector<std::string>& vectorString,\n                              const std::string& JSONString,\n                              const std::string& Key);\n\nstd::string JSONescape(const std::string& subject);\nstd::string JSONunescape(const std::string& subject);\n\n#endif\n"
  },
  {
    "path": "C/common/include/logger.h",
    "content": "#ifndef _LOGGER_H\n#define _LOGGER_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017-2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n\n#include <string>\n#include <functional>\n#include <map>\n#include <mutex>\n#include <queue>\n#include <thread>\n#include <condition_variable>\n#include <atomic>\n#include <sys/socket.h>\n#include <arpa/inet.h>\n#define PRINT_FUNC\tLogger::getLogger()->info(\"%s:%d\", __FUNCTION__, __LINE__);\n\n/**\n * Fledge Logger class used to log to syslog\n *\n * At startup this class should be constructed\n * using the standard constructor. To log a message\n * call debug, info, warn etc. using the instance\n * of the class.\n *\n * To obtain that singleton instance call the static\n * method getLogger.\n *\n * It is generally unsafe to delete the logger class\n * as it may be called asynchronously from multiple\n * threads and signal handlers. The destructor has\n * hence been made private to prevent the destruction\n * of the class.\n */\nclass Logger {\n\tpublic:\n\t\tenum class LogLevel\n\t\t{\n\t\t\tERROR,\n\t\t\tWARNING,\n\t\t\tINFO,\n\t\t\tDEBUG,\n\t\t\tFATAL\n\t\t};\n\n\t\tLogger(const std::string& application);\n\t\t~Logger();\n\t\tstatic Logger *getLogger();\n\t\tvoid debug(const std::string& msg, ...);\n\t\tvoid printLongString(const std::string&, LogLevel = LogLevel::DEBUG);\n\t\tvoid info(const std::string& msg, ...);\n\t\tvoid warn(const std::string& msg, ...);\n\t\tvoid error(const std::string& msg, ...);\n\t\tvoid fatal(const std::string& msg, ...);\n\t\tvoid setMinLevel(const std::string& level);\n\t\tstd::string& getMinLevel() { return levelString; }\n\n\t\t// LogInterceptor callback function signature\n\t\ttypedef void (*LogInterceptor)(LogLevel, const std::string&, void*);\n\n\t\t// Register an interceptor\n\t\tbool registerInterceptor(LogLevel level, LogInterceptor callback, void* userData);\n\n\t\t// Unregister an 
interceptor\n\t\tbool unregisterInterceptor(LogLevel level, LogInterceptor callback);\n\n\tprivate:\n\t\tstd::string \t*format(const std::string& msg, va_list ap);\n\t\tstatic Logger   *instance;\n\t\tstd::string     levelString;\n\t\tint\t\tm_level;\n\n\t\tstruct InterceptorData {\n\t\t\tLogInterceptor callback;\n\t\t\tvoid* userData;\n\t\t};\n\n\t\tstd::multimap<LogLevel, InterceptorData> m_interceptors;\n\t\tstd::mutex m_interceptorMapMutex;\n\n\t\tstruct LogTask {\n\t\t\tLogLevel level;\n\t\t\tstd::string message;\n\t\t\tLogInterceptor callback;\n\t\t\tvoid* userData;\n\t\t};\n\n\t\tstd::queue<LogTask>\tm_taskQueue;\n\t\tstd::mutex\t\tm_queueMutex;\n\t\tstd::condition_variable m_condition;\n\t\tstd::atomic<bool>\tm_runWorker;\n\t\tstd::thread \t\t*m_workerThread;\n\n\t\tvoid log(int sysLogLvl, const char * lvlName, LogLevel appLogLvl, const std::string& msg, va_list args);\n\t\tvoid sendToUdpSink(const std::string& msg);\n\t\tvoid executeInterceptor(LogLevel level, const std::string& message);\n\t\tvoid workerThread();\n\t\tint m_UdpSockFD = -1;\n\t\tstruct sockaddr_in m_UdpServerAddr;\n\t\tbool m_SyslogUdpEnabled = false;\n\t\tstd::string m_identifier;\n\t\tstd::string m_hostname;\n};\n\n#endif\n\n"
  },
  {
    "path": "C/common/include/management_client.h",
    "content": "#ifndef _MANAGEMENT_CLIENT_H\n#define _MANAGEMENT_CLIENT_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017-2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n\n#include <client_http.hpp>\n#include <server_http.hpp>\n#include <config_category.h>\n#include <service_record.h>\n#include <logger.h>\n#include <string>\n#include <map>\n#include <vector>\n#include <rapidjson/document.h>\n#include <asset_tracking.h>\n#include <json_utils.h>\n#include <thread>\n#include <bearer_token.h>\n#include <acl.h>\n#include \"utils.h\"\n\nusing HttpClient = SimpleWeb::Client<SimpleWeb::HTTP>;\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\nusing namespace rapidjson;\n\nclass AssetTrackingTuple;\nclass AssetTrackingTable;\nclass StorageAssetTrackingTuple;\n\n/**\n * The management client class used by services and tasks to communicate\n * with the management API of the Fledge core microservice.\n *\n * The class encapsulates the management REST API and provides methods for accessing each\n * of those APIs.\n */\nclass ManagementClient {\n\tpublic:\n\t\tManagementClient(const std::string& hostname, const unsigned short port);\n\t\t~ManagementClient();\n\t\tbool \t\t\tregisterService(const ServiceRecord& service);\n\t\tbool \t\t\tunregisterService();\n\t\tbool \t\t\trestartService();\n\t\tbool \t\t\tgetService(ServiceRecord& service);\n\t\tbool\t\t\tgetServices(std::vector<ServiceRecord *>& services);\n\t\tbool\t\t\tgetServices(std::vector<ServiceRecord *>& services, const std::string& type);\n\t\tbool \t\t\tregisterCategory(const std::string& categoryName);\n\t\tbool \t\t\tregisterCategoryChild(const std::string& categoryName);\n\t\tbool \t\t\tunregisterCategory(const std::string& categoryName);\n\t\tConfigCategories\tgetCategories();\n\t\tConfigCategory\t\tgetCategory(const std::string& categoryName);\n                std::string             setCategoryItemValue(const std::string& 
categoryName,\n                                                             const std::string& itemName,\n                                                             const std::string& itemValue);\n\t\tstd::string\t\taddChildCategories(const std::string& parentCategory,\n\t\t\t\t\t\t\t   const std::vector<std::string>& children);\n\t\tstd::vector<AssetTrackingTuple*>&\n\t\t\t\t\tgetAssetTrackingTuples(const std::string serviceName = \"\");\n\t\tstd::vector<StorageAssetTrackingTuple*>&\n \t\t\t\t\tgetStorageAssetTrackingTuples(const std::string serviceName);\n\n\t\tStorageAssetTrackingTuple* getStorageAssetTrackingTuple(const std::string& serviceName,\n                                                         \tconst std::string& assetName,\n\t\t\t\t\t\t\t\tconst std::string& event, const std::string & dp, const unsigned int& c);\n\n\t\tbool addAssetTrackingTuple(const std::string& service, \n\t\t\t\t\t   const std::string& plugin, \n\t\t\t\t\t   const std::string& asset, \n\t\t\t\t\t   const std::string& event);\n\n\t\tbool addStorageAssetTrackingTuple(const std::string& service,\n                                           const std::string& plugin,\n                                           const std::string& asset,\n                                           const std::string& event,\n\t\t\t\t\t   const bool& deprecated = false,\n\t\t\t\t\t   const std::string& datapoints = \"\",\n\t\t\t\t\t   const int& count = 0);\n\t\tConfigCategories\tgetChildCategories(const std::string& categoryName);\n\t\tHttpClient\t\t*getHttpClient();\n\t\tbool\t\t\taddAuditEntry(const std::string& serviceName,\n\t\t\t\t\t\t      const std::string& severity,\n\t\t\t\t\t\t      const std::string& details);\n\t\tstd::string&\t\tgetRegistrationBearerToken()\n\t\t{\n\t\t\t\t\tstd::lock_guard<std::mutex> guard(m_bearer_token_mtx);\n\t\t\t\t\treturn m_bearer_token;\n\t\t};\n\t\tvoid\t\t\tsetNewBearerToken(const std::string& bearerToken)\n\t\t\t\t\t{\n\t\t\t\t\t\tstd::lock_guard<std::mutex> 
guard(m_bearer_token_mtx);\n\t\t\t\t\t\tm_bearer_token = bearerToken;\n\t\t\t\t\t};\n\t\tbool\t\t\tverifyBearerToken(BearerToken& token);\n\t\tbool\t\t\tverifyAccessBearerToken(BearerToken& bToken);\n\t\tbool\t\t\tverifyAccessBearerToken(std::shared_ptr<HttpServer::Request> request);\n\t\tbool\t\t\trefreshBearerToken(const std::string& currentToken,\n\t\t\t\t\t\t\tstd::string& newToken);\n\t\tstd::string&\t\tgetBearerToken() { return m_bearer_token; };\n\t\tbool\t\t\taddProxy(const std::string& serviceName,\n\t\t\t\t\t\tconst std::string& operation,\n\t\t\t\t\t\tconst std::string& publicEnpoint,\n\t\t\t\t\t\tconst std::string& privateEndpoint);\n\t\tbool\t\t\taddProxy(const std::string& serviceName,\n\t\t\t\t\t\tconst std::map<std::string,\n\t\t\t\t\t\tstd::vector<std::pair<std::string, std::string> > >& endpoints);\n\t\tbool\t\t\tdeleteProxy(const std::string& serviceName);\n\t\tconst std::string \tgetUrlbase() { return m_urlbase.str(); }\n\t        ACL\t\t\tgetACL(const std::string& aclName);\n\t\tAssetTrackingTuple*\tgetAssetTrackingTuple(const std::string& serviceName,\n\t\t\t\t\t\t\t\tconst std::string& assetName,\n\t\t\t\t\t\t\t\tconst std::string& event);\n\t\tint \t\t\tvalidateDatapoints(std::string dp1, std::string dp2);\n\t\tAssetTrackingTable\t*getDeprecatedAssetTrackingTuples();\n\t\tstd::string\t\tgetAlertByKey(const std::string& key);\n\t\tbool\t\t\traiseAlert(const std::string& key, const std::string& message, const std::string& urgency=\"normal\");\n\t\tbool\t\t\tclearAlert(const std::string& key);\n\n\tprivate:\n\t\tstd::ostringstream \t\t\tm_urlbase;\n\t\tstd::map<std::thread::id, HttpClient *> m_client_map;\n\t\tHttpClient\t\t\t\t*m_client;\n\t\tstd::string\t\t\t\t*m_uuid;\n\t\tLogger\t\t\t\t\t*m_logger;\n\t\tstd::map<std::string, std::string>\tm_categories;\n\t\t// Bearer token returned by service registration\n\t\t// if the service startup token has been passed in registration payload\n\t\tstd::string\t\t\t\tm_bearer_token;\n\t\t// Map of 
received and verified access bearer tokens from other microservices\n\t\tstd::map<std::string, BearerToken> \tm_received_tokens;\n\t\t// m_received_tokens lock\n\t\tstd::mutex \t\t\t\tm_mtx_rTokens;\n\t\t// m_client_map lock\n\t\tstd::mutex\t\t\t\tm_mtx_client_map;\n\t\t// Get and set bearer token mutex\n\t\tstd::mutex\t\t\t\tm_bearer_token_mtx;\n  \n\tpublic:\n\t\t// member template must be here and not in .cpp file\n\t\ttemplate<class T> bool\taddCategory(const T& t, bool keepOriginalItems = false)\n\t\t{\n\t\t\ttry {\n\t\t\t\tstd::string blockedCharacter = {};\n\t\t\t\tif (!isValidIdentifier(t.getName(), blockedCharacter))\n\t\t\t\t{\n\t\t\t\t\tm_logger->error(\"The category name %s contains %s invalid character(s).\", t.getName().c_str(), blockedCharacter.c_str());\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tstd::string url = \"/fledge/service/category\";\n\n                                // Build the JSON payload\n                                std::ostringstream payload;\n                                payload << \"{ \\\"key\\\" : \\\"\" << JSONescape(t.getName());\n                                payload << \"\\\", \\\"description\\\" : \\\"\" << JSONescape(t.getDescription());\n                                if (! 
t.getDisplayName().empty() ) {\n                                \tpayload << \"\\\", \\\"display_name\\\" : \\\"\" << JSONescape(t.getDisplayName());\n                                }\n                                payload << \"\\\", \\\"value\\\" : \" << t.itemsToJSON();\n\n\n\t\t\t\t/**\n\t\t\t\t * Note:\n\t\t\t\t * At the time being the keep_original_items is added into payload\n\t\t\t\t * and configuration manager in the Fledge handles it.\n\t\t\t\t *\n\t\t\t\t * In the near future keep_original_items will be passed\n\t\t\t\t * as URL modifier, i.e: 'URL?keep_original_items=true'\n\t\t\t\t */\n\t\t\t\tif (keepOriginalItems)\n\t\t\t\t{\n\t\t\t\t\turl += \"?keep_original_items=true\";\n\t\t\t\t}\n\n\t\t\t\t// Terminate JSON string\n\t\t\t\tpayload << \" }\";\n\n\t\t\t\tauto res = this->getHttpClient()->request(\"POST\", url.c_str(), payload.str());\n\n\t\t\t\tDocument doc;\n\t\t\t\tstd::string response = res->content.string();\n\n\t\t\t\tdoc.Parse(response.c_str());\n\t\t\t\tif (doc.HasParseError())\n\t\t\t\t{\n\t\t\t\t\tm_logger->error(\"Failed to parse result of adding a category: %s\\n\",\n\t\t\t\t\t\t\tresponse.c_str());\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\telse if (doc.HasMember(\"message\"))\n\t\t\t\t{\n\t\t\t\t\tm_logger->error(\"Failed to add configuration category: %s.\",\n\t\t\t\t\t\t\tdoc[\"message\"].GetString());\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\treturn true;\n\t\t\t\t}\n\t\t\t} catch (const SimpleWeb::system_error &e) {\n\t\t\t\tm_logger->error(\"Add config category failed %s.\", e.what());\n\t\t\t}\n\t\t\treturn false;\n\t\t};\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/pipeline_debugger.h",
    "content": "#ifndef _PIPELINE_DEBUGGER_H\n#define _PIPELINE_DEBUGGER_H\n/*\n * Fledge filter pipeline debugger.\n *\n * Copyright (c) 2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <reading_set.h>\n#include <reading.h>\n#include <reading_circularbuffer.h>\n#include <mutex>\n#include <vector>\n#include <memory>\n\n/**\n * The debugger class for elements in a pipeline\n */\nclass PipelineDebugger {\n\tpublic:\n\t\tPipelineDebugger();\n\t\t~PipelineDebugger();\n\t\ttypedef enum debuggerActions\n\t\t{\n\t\t\tNoAction,\n\t\t\tBlock\n\t\t} DebuggerActions;\n\t\tDebuggerActions\t\tprocess(ReadingSet *readingSet);\n\t\tvoid\t\t\tsetBuffer(unsigned int size);\n\t\tvoid\t\t\tclearBuffer();\n\t\tstd::vector<std::shared_ptr<Reading>>\n\t\t\t\t\tfetchBuffer();\n\tprivate:\n\t\tReadingCircularBuffer\t*m_buffer;\n\t\tstd::mutex\t\tm_bufferMutex;\n\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/pipeline_element.h",
    "content": "#ifndef _PIPELINE_ELEMENT_H\n#define _PIPELINE_ELEMENT_H\n/*\n * Fledge filter pipeline elements.\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n#include <config_category.h>\n#include <management_client.h>\n#include <plugin.h>\n#include <plugin_manager.h>\n#include <plugin_data.h>\n#include <reading_set.h>\n#include <filter_plugin.h>\n#include <service_handler.h>\n#include <config_handler.h>\n#include <pipeline_debugger.h>\n\nclass FilterPipeline;\n\n/**\n * The base pipeline element class\n */\nclass PipelineElement {\n\tpublic:\n\t\tPipelineElement() : m_next(NULL), m_storage(NULL), m_debugger(NULL) {};\n\t\tvirtual ~PipelineElement() {};\n\t\tvoid\t\t\tsetNext(PipelineElement *next)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_next = next;\n\t\t\t\t\t};\n\t\tPipelineElement\t\t*getNext()\n\t\t\t\t\t{\n\t\t\t\t\t\treturn m_next;\n\t\t\t\t\t};\n\t\tvoid\t\t\tsetService(const std::string& serviceName)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_serviceName = serviceName;\n\t\t\t\t\t};\n\t\tvoid\t\t\tsetStorage(StorageClient *storage)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_storage = storage;\n\t\t\t\t\t};\n\t\tbool\t\t\tattachDebugger();\n\t\tvoid\t\t\tdetachDebugger();\n\t\tvoid\t\t\tsetDebuggerBuffer(unsigned int size);\n\t\tstd::vector<std::shared_ptr<Reading>>\n\t\t\t\t\tgetDebuggerBuffer();\n\t\tstatic void\t\tingest(void *handle, READINGSET *readings)\n\t\t\t\t\t{\n\t\t\t\t\t       \t((PipelineElement *)handle)->ingest(readings);\n\t\t\t\t\t};\n\t\tvirtual bool\t\tsetupConfiguration(ManagementClient * /* mgtClient */,\n\t\t\t\t\t\tstd::vector<std::string>& /* children */)\n\t\t\t\t\t{\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t};\n\t\tvirtual bool\t\tisFilter()\n\t\t\t       \t\t{\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t};\n\t\tvirtual bool\t\tisBranch()\n\t\t\t       \t\t{\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t};\n\t\tvirtual void\t\tingest(READINGSET *readingSet) = 0;\n\t\tvirtual 
bool\t\tsetup(ManagementClient *mgmt, void *ingest, std::map<std::string, PipelineElement*>& categories) = 0;\n\t\tvirtual bool\t\tinit(OUTPUT_HANDLE* outHandle, OUTPUT_STREAM output) = 0;\n\t\tvirtual void\t\tshutdown(ServiceHandler *serviceHandler, ConfigHandler *configHandler) = 0;\n\t\tvirtual void\t\treconfigure(const std::string& /* newConfig */)\n\t\t\t\t\t{\n\t\t\t\t\t};\n\t\tvirtual std::string\tgetName() = 0;\n\t\tvirtual bool\t\tisReady() = 0;\n\tprotected:\n\t\tstd::string\t\tm_serviceName;\n\t\tPipelineElement\t\t*m_next;\n\t\tStorageClient\t\t*m_storage;\n\t\tPipelineDebugger\t*m_debugger;\n};\n\n/**\n * A pipeline element that runs a filter plugin\n */\nclass PipelineFilter : public PipelineElement {\n\tpublic:\n\t\tPipelineFilter(const std::string& name, const ConfigCategory& filterDetails);\n\t\t~PipelineFilter();\n\t\tbool\t\t\tsetupConfiguration(ManagementClient *mgtClient, std::vector<std::string>& children);\n\t\tvoid\t\t\tingest(READINGSET *readingSet)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (m_debugger)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tPipelineDebugger::DebuggerActions  action =\n\t\t\t\t\t\t\t\tm_debugger->process(readingSet);\n\n\t\t\t\t\t\t\tswitch (action)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\tcase PipelineDebugger::Block:\n\t\t\t\t\t\t\t\tdelete readingSet;\n\t\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t\tcase PipelineDebugger::NoAction:\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (m_plugin)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tm_plugin->ingest(readingSet);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tLogger::getLogger()->error(\"Pipeline filter %s has no plugin associated with it.\", m_name.c_str());\n\t\t\t\t\t\t}\n\t\t\t\t\t};\n\t\tbool\t\t\tsetup(ManagementClient *mgmt, void *ingest, std::map<std::string, PipelineElement*>& categories);\n\t\tbool\t\t\tinit(OUTPUT_HANDLE* outHandle, OUTPUT_STREAM output);\n\t\tvoid\t\t\tshutdown(ServiceHandler *serviceHandler, ConfigHandler *configHandler);\n\t\tvoid\t\t\treconfigure(const std::string& newConfig);\n\t\tbool\t\t\tisFilter() { return true; };\n\t\tstd::string\t\tgetCategoryName() { return m_categoryName; };\n\t\tbool\t\t\tpersistData() { return m_plugin->persistData(); };\n\t\tvoid\t\t\tsetPluginData(PluginData *data) { m_plugin->m_plugin_data = data; };\n\t\tstd::string\t\tgetPluginData() { return m_plugin->m_plugin_data->loadStoredData(m_serviceName + m_name); };\n\t\tvoid\t\t\tsetServiceName(const std::string& name) { m_serviceName = name; };\n\t\tstd::string\t\tgetName() { return m_name; };\n\t\tbool\t\t\tisReady() { return true; };\n\tprivate:\n\t\tPLUGIN_HANDLE\t\tloadFilterPlugin(const std::string& filterName);\n\tprivate:\n\t\tstd::string\t\tm_name;\t\t// The name of the filter instance\n\t\tstd::string\t\tm_categoryName;\n\t\tstd::string\t\tm_pluginName;\n\t\tPLUGIN_HANDLE\t\tm_handle;\n\t\tFilterPlugin\t\t*m_plugin;\n\t\tstd::string\t\tm_serviceName;\n\t\tConfigCategory\t\tm_updatedCfg;\n};\n\n/**\n * A pipeline element that represents a branch in the pipeline\n */\nclass PipelineBranch : public PipelineElement {\n\tpublic:\n\t\tPipelineBranch(FilterPipeline *parent);\n\t\t~PipelineBranch();\n\t\tvoid\t\t\tingest(READINGSET *readingSet);\n\t\tstd::string\t\tgetName() { return \"Branch\"; };\n\t\tbool\t\t\tsetupConfiguration(ManagementClient *mgtClient, std::vector<std::string>& children);\n\t\tbool\t\t\tsetup(ManagementClient *mgmt, void *ingest, std::map<std::string, PipelineElement*>& categories);\n\t\tbool                    init(OUTPUT_HANDLE* outHandle, OUTPUT_STREAM output);\n\t\tvoid                    shutdown(ServiceHandler *serviceHandler, ConfigHandler *configHandler);\n\t\tbool                    isReady();\n\t\tbool\t\t\tisBranch()\n\t\t\t\t\t{\n\t\t\t\t\t\treturn true;\n\t\t\t\t\t};\n\t\tstd::vector<PipelineElement *>&\t\n\t\t\t\t\tgetBranchElements()\n\t\t\t\t\t{\n\t\t\t\t\t\treturn m_branch;\n\t\t\t\t\t};\n\t\tvoid\t\t\tsetFunctions(void *onward, void *use, void *ingest)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_passOnward 
= onward;\n\t\t\t\t\t\tm_useData = use;\n\t\t\t\t\t\tm_ingest = ingest;\n\t\t\t\t\t};\n\tprivate:\n\t\tstatic void\t\tbranchHandler(void *instance);\n\t\tvoid\t\t\thandler();\n\tprivate:\n\t\tstd::vector<PipelineElement *>\t\tm_branch;\n\t\tstd::thread\t\t\t\t*m_thread;\n\t\tstd::queue<READINGSET *>\t\tm_queue;\n\t\tstd::mutex\t\t\t\tm_mutex;\n\t\tstd::condition_variable\t\t\tm_cv;\n\t\tvoid\t\t\t\t\t*m_passOnward;\n\t\tvoid\t\t\t\t\t*m_useData;\n\t\tvoid\t\t\t\t\t*m_ingest;\n\t\tbool\t\t\t\t\tm_shutdownCalled;\n\t\tFilterPipeline\t\t\t\t*m_pipeline;\n};\n\n/**\n * A pipeline element that writes to a storage service or buffer\n */\nclass PipelineWriter : public PipelineElement {\n\tpublic:\n\t\tPipelineWriter();\n\t\tstd::string\t\tgetName() { return \"Writer\"; };\n\t\tvoid\t\t\tingest(READINGSET *readingSet);\n\t\tbool\t\t\tsetup(ManagementClient *mgmt, void *ingest, std::map<std::string, PipelineElement*>& categories);\n\t\tbool                    init(OUTPUT_HANDLE* outHandle, OUTPUT_STREAM output);\n\t\tvoid                    shutdown(ServiceHandler *serviceHandler, ConfigHandler *configHandler);\n\t\tbool                    isReady();\n\tprivate:\n\t\tOUTPUT_STREAM\t\tm_useData;\n\t\tvoid\t\t\t*m_ingest;\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/plugin_data.h",
    "content": "#ifndef _PLUGIN_DATA_H\n#define _PLUGIN_DATA_H\n/*\n * Fledge persist plugin data class.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <storage_client.h>\n\nclass PluginData\n{\n\npublic:\n\tPluginData(StorageClient* client);\n\t~PluginData() {};\n\t// Load data\n\tstd::string loadStoredData(const std::string& key);\n\t// Store data\n\tbool persistPluginData(const std::string& key, const std::string& data, const std::string& service_name);\n\nprivate:\n\tStorageClient*\t\tm_storage;\n\tbool\t\t\tm_dataLoaded;\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/process.h",
    "content": "#ifndef _PROCESS_H\n#define _PROCESS_H\n/*\n * Fledge process base class\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <storage_client.h>\n#include <management_client.h>\n#include <audit_logger.h>\n#include <string.h>\n\n/**\n * Fledge process base class\n */\nclass FledgeProcess\n{\n\tpublic:\n\t\tFledgeProcess(int argc, char** argv);\n\t\tvirtual ~FledgeProcess();\n\t\tStorageClient*          getStorageClient() const;\n\t\tManagementClient*\tgetManagementClient() const;\n\t\tLogger\t\t\t*getLogger() const;\n    \t\tstd::string\t    \tgetName() const { return m_name; };\n\n\t    \ttime_t\t\t\tgetStartTime() const { return m_stime; };\n\n\tprotected:\n\t\tstd::string getArgValue(const std::string& name) const;\n\t\tbool\t\t\tm_dryRun;\n\n\tprivate:\n\t\tconst time_t\t\tm_stime;    // Start time\n\t\tconst int\t\tm_argc;\n\t\tconst char**\t\tm_arg_vals;\n\t\t// Fledge core management service details\n\t\tstd::string\t\tm_name;\n\t\tint\t\t\tm_core_mngt_port;\n\t\tstd::string\t\tm_core_mngt_host;\n\t\tManagementClient* \tm_client;\n\t\tStorageClient*\t\tm_storage;\n\t\tLogger*\t\t\tm_logger;\n\t\tAuditLogger*\t\tm_auditLogger;\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/purge_result.h",
    "content": "#ifndef _PURGE_RESULT_H\n#define _PURGE_RESULT_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n#include <string.h>\n#include <sstream>\n#include <iostream>\n#include <reading.h>\n#include <rapidjson/document.h>\n#include <vector>\n\n/**\n * The result of a purge operation on the readings storage\n */\nclass PurgeResult {\n\tpublic:\n\t\tPurgeResult() : m_removed(0), m_unsentPurged(0), m_unsentRetained(0),\n\t\t\t\tm_remaining(0) {};\n\t\tPurgeResult(const std::string& json);\n\t\tunsigned long\tgetRemoved() const { return m_removed; };\n\t\tunsigned long\tgetUnsentPurged() const { return m_unsentPurged; };\n\t\tunsigned long\tgetUnsentRetained() const { return m_unsentRetained; };\n\t\tunsigned long\tgetRemaining() const { return m_remaining; };\n\tprivate:\n\t\tunsigned long \tm_removed;\n\t\tunsigned long \tm_unsentPurged;\n\t\tunsigned long \tm_unsentRetained;\n\t\tunsigned long \tm_remaining;\n\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/pyruntime.h",
    "content": "#ifndef _PYRUNTIME_H\n#define _PYRUNTIME_H\n/*\n * Fledge Python Runtime.\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <Python.h>\n#include <string>\n\nclass PythonRuntime {\n\tpublic:\n\t\tstatic PythonRuntime\t*getPythonRuntime();\n\t\tstatic bool\t\tinitialised() { return m_instance != NULL; };\n\t\tstatic void\t\tshutdown();\n\t\tvoid \texecute(const std::string& python);\n\t\tPyObject\t*call(const std::string& name, const std::string& fmt, ...);\n\t\tPyObject\t*call(PyObject *module, const std::string& name, const std::string& fmt, ...);\n\t\tPyObject\t*importModule(const std::string& name);\n\tprivate:\n\t\tPythonRuntime();\n\t\t~PythonRuntime();\n\t\tPythonRuntime(const PythonRuntime& rhs);\n\t\tPythonRuntime& operator=(const PythonRuntime& rhs);\n\t\tvoid\t\tlogException(const std::string& name);\n\n\t\tstatic PythonRuntime\t*m_instance;\n\n};\n\n#endif\n\n"
  },
  {
    "path": "C/common/include/pythonconfigcategory.h",
    "content": "#ifndef _PYTHONCONFIGCATEGORY_H\n#define _PYTHONCONFIGCATEGORY_H\n/*\n * Fledge Python Configuration Category\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <config_category.h>\n#include <Python.h>\n\n/**\n * A wrapper class for a ConfigCategory to convert to and from \n * Python objects.\n */\nclass PythonConfigCategory : public ConfigCategory {\n\tpublic:\n\t\tPythonConfigCategory(PyObject *pyConfig);\n\t\tPyObject \t\t*toPython();\n\tprivate:\n\t\tPyObject \t*convertItem(CategoryItem *);\n};\n#endif\n"
  },
  {
    "path": "C/common/include/pythonreading.h",
    "content": "#ifndef _PYTHONREADING_H\n#define _PYTHONREADING_H\n/*\n * Fledge Python Reading\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <reading.h>\n#include <Python.h>\n\n/**\n * A wrapper class for a Reading to convert to and from \n * Python objects.\n */\nclass PythonReading : public Reading {\n\tpublic:\n\t\tPythonReading(PyObject *pyReading);\n\t\t~PythonReading() {};\n\t\tPyObject \t\t*toPython(bool changeKeys = false, bool bytesString = false);\n\t\tstatic std::string\terrorMessage();\n\t\tstatic bool\t\tisArray(PyObject *);\n\t\tstatic bool\t\tdoneNumPyImport;\n\n\tprivate:\n\t\tPyObject\t\t*convertDatapoint(Datapoint *dp, bool bytesString = false);\n\t\tDatapointValue\t\t*getDatapointValue(PyObject *object);\n\t\tvoid \t\t\tfixQuoting(std::string& str);\n\t\tint\t\t\tInitNumPy();\n};\n#endif\n"
  },
  {
    "path": "C/common/include/pythonreadingset.h",
    "content": "#ifndef _PYTHON_READING_SET_H_\n#define _PYTHON_READING_SET_H_\n/*\n * Fledge Python Reading Set\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <reading_set.h>\n#include <Python.h>\n\n/**\n * A wrapper class for the ReadingSet class that allows conversion\n * to and from Python objects.\n */\nclass PythonReadingSet : public ReadingSet {\n\tpublic:\n\t\tPythonReadingSet(PyObject *pySet);\n\t\t~PythonReadingSet() {};\n\t\tPyObject\t*toPython(bool changeKeys = false);\n\tprivate:\n\t\tvoid setReadingAttr(Reading* newReading, PyObject *readingList, bool fillIfMissing);\n};\n#endif\n"
  },
  {
    "path": "C/common/include/query.h",
    "content": "#ifndef _QUERY_H\n#define _QUERY_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <where.h>\n#include <aggregate.h>\n#include <sort.h>\n#include <join.h>\n#include <timebucket.h>\n#include <returns.h>\n#include <string>\n#include <vector>\n\n\n/**\n * Storage layer query container\n */\nclass Query {\n\tpublic:\n\t\tQuery(Where *where);\n\t\tQuery(Aggregate *aggregate, Where *where);\n\t\tQuery(Timebucket *timebucket, Where *where);\n\t\tQuery(Timebucket *timebucket, Where *where, unsigned int limit);\n\t\tQuery(Returns *returns);\n\t\tQuery(std::vector<Returns *> returns);\n\t\tQuery(std::vector<Returns *> returns, Where *where);\n\t\tQuery(std::vector<Returns *> returns, Where *where, unsigned int limit);\n\t\t~Query();\n\t\tvoid\t\t\t\taggregate(Aggregate *aggregate);\n\t\tvoid\t\t\t\tgroup(const std::string& column);\n\t\tvoid\t\t\t\tsort(Sort *sort);\n\t\tvoid\t\t\t\tlimit(unsigned int limit);\n\t\tvoid\t\t\t\ttimebucket(Timebucket*);\n\t\tvoid\t\t\t\treturns(Returns *);\n\t\tvoid\t\t\t\treturns(std::vector<Returns *>);\n\t\tvoid\t\t\t\tdistinct();\n\t\tvoid\t\t\t\tjoin(Join *join);\n\t\tconst std::string\t\ttoJSON() const;\n\tprivate:\n\t\tQuery(const Query&);\t\t// Disable copy of query\n\t\tQuery& \t\t\t\toperator=(Query const&);\n\t\tWhere\t\t\t\t*m_where;\n\t\tstd::vector<Aggregate *>\tm_aggregates;\n\t\tstd::string\t\t\tm_group;\n\t\tstd::vector<Sort *>\t\tm_sort;\n\t\tunsigned int\t\t\tm_limit;\n\t\tTimebucket*\t\t\tm_timebucket;\n\t\tstd::vector<Returns *>\t\tm_returns;\n\t\tbool\t\t\t\tm_distinct;\n\t\tJoin\t\t\t\t*m_join;\n};\n#endif\n\n"
  },
  {
    "path": "C/common/include/reading.h",
    "content": "#ifndef _READING_H\n#define _READING_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <datapoint.h>\n#include <string>\n#include <ctime>\n#include <vector>\n#include <sys/time.h>\n#include <rapidjson/document.h>\n\n#define DEFAULT_DATE_TIME_FORMAT      \"%Y-%m-%d %H:%M:%S\"\n#define COMBINED_DATE_STANDARD_FORMAT \"%Y-%m-%dT%H:%M:%S\"\n#define ISO8601_DATE_TIME_FORMAT      \"%Y-%m-%d %H:%M:%S +0000\"\n#define DATE_TIME_BUFFER_LEN          52\n\n/**\n * An asset reading represented as a class.\n *\n * Each asset reading may have multiple datapoints to represent the\n * multiple values that maybe held within a complex asset.\n *\n * NB The timestamp data held for both the system timestamp and the\n * user timestamp are always held internally as UTC times\n */\nclass Reading {\n\tpublic:\n\t\tReading(const std::string& asset, Datapoint *value);\n\t\tReading(const std::string& asset, std::vector<Datapoint *> values);\n\t\tReading(const std::string& asset, std::vector<Datapoint *> values, const std::string& ts);\n\t\tReading(const std::string& asset, const std::string& datapoints);\n\t\tReading(const Reading& orig);\n\n\t\tvirtual\t~Reading();\n\t\tvoid\t\t\t\taddDatapoint(Datapoint *value);\n\t\tDatapoint\t\t\t*removeDatapoint(const std::string& name);\n\t\tDatapoint\t\t\t*getDatapoint(const std::string& name) const;\n\t\tstd::string\t\t\ttoJSON(bool minimal = false) const;\n\t\tstd::string\t\t\tgetDatapointsJSON() const;\n\t\t// Return AssetName\n\t\tconst std::string&              getAssetName() const { return m_asset; };\n\t\t// Set AssetName\n\t\tvoid\t\t\t\tsetAssetName(std::string assetName) { m_asset = assetName; };\n\t\tunsigned int\t\t\tgetDatapointCount() { return m_values.size(); };\n\t\tvoid\t\t\t\tremoveAllDatapoints();\n\t\t// Return Reading datapoints\n\t\tconst std::vector<Datapoint *>\tgetReadingData() 
const { return m_values; };\n\t\t// Return reference to Reading datapoints\n\t\tstd::vector<Datapoint *>&\tgetReadingData() { return m_values; };\n\t\tbool\t\t\t\thasId() const { return m_has_id; };\n\t\tunsigned long\t\t\tgetId() const { return m_id; };\n\t\tunsigned long\t\t\tgetTimestamp() const { return (unsigned long)m_timestamp.tv_sec; };\n\t\tunsigned long\t\t\tgetUserTimestamp() const { return (unsigned long)m_userTimestamp.tv_sec; };\n\t\tvoid\t\t\t\tsetId(unsigned long id) { m_id = id; };\n\t\tvoid\t\t\t\tsetTimestamp(unsigned long ts) { m_timestamp.tv_sec = (time_t)ts; m_timestamp.tv_usec = 0; };\n\t\tvoid\t\t\t\tsetTimestamp(struct timeval tm) { m_timestamp = tm; };\n\t\tvoid\t\t\t\tsetTimestamp(const std::string& timestamp);\n\t\tvoid\t\t\t\tgetTimestamp(struct timeval *tm) { *tm = m_timestamp; };\n\t\tvoid\t\t\t\tsetUserTimestamp(unsigned long uTs) { m_userTimestamp.tv_sec = (time_t)uTs; m_userTimestamp.tv_usec = 0; };\n\t\tvoid\t\t\t\tsetUserTimestamp(struct timeval tm) { m_userTimestamp = tm; };\n\t\tvoid\t\t\t\tsetUserTimestamp(const std::string& timestamp);\n\t\tvoid\t\t\t\tgetUserTimestamp(struct timeval *tm) { *tm = m_userTimestamp; };\n\n\t\ttypedef enum dateTimeFormat { FMT_DEFAULT, FMT_STANDARD, FMT_ISO8601, FMT_ISO8601MS } readingTimeFormat;\n\n\t\tvoid\tgetFormattedDateTimeStr(const time_t *tv_sec, char *date_time, readingTimeFormat dateFormat) const;\n\t\t// Return Reading asset time - ts time\n\t\tconst std::string getAssetDateTime(readingTimeFormat datetimeFmt = FMT_DEFAULT, bool addMs = true) const;\n\t\t// Return Reading asset time - user_ts time\n\t\tconst std::string getAssetDateUserTime(readingTimeFormat datetimeFmt = FMT_DEFAULT, bool addMs = true) const;\n\t\tstd::string\t\t\tsubstitute(const std::string& str);\n\n\tprotected:\n\t\tReading() {};\n\t\tReading&\t\t\toperator=(Reading const&);\n\t\tvoid\t\t\t\tstringToTimestamp(const std::string& timestamp, struct timeval *ts);\n\t\tconst std::string\t\tescape(const std::string& 
str) const;\n\t\tstd::vector<Datapoint *>\t*JSONtoDatapoints(const rapidjson::Value& json);\n\t\tunsigned long\t\t\tm_id;\n\t\tbool\t\t\t\tm_has_id;\n\t\tstd::string\t\t\tm_asset;\n\t\tstruct timeval\t\t\tm_timestamp;\n\t\tstruct timeval\t\t\tm_userTimestamp;\n\t\tstd::vector<Datapoint *>\tm_values;\n\t\t// Supported date time formats for 'm_timestamp'\n\t\tstatic std::vector<std::string>\tm_dateTypes;\n\tprivate:\n\t\t// Internal class used for macro substitution\n\t\tclass Macro {\n\t\t\tpublic:\n\t\t\t\tMacro(const std::string& dpname, std::string::size_type s,\n\t\t\t\t\t\tconst std::string& defValue) :\n\t\t\t\t\tstart(s), name(dpname), def(defValue)\n\n\t\t\t\t{\n\t\t\t\t};\n\t\t\t\tMacro(const std::string& dpname, std::string::size_type s) :\n\t\t\t\t\tstart(s), name(dpname)\n\n\t\t\t\t{\n\t\t\t\t};\n\t\t\t\t// Start of variable to substitute\n\t\t\t\tstd::string::size_type\t\tstart;\n\t\t\t\t// Name of variable to substitute\n\t\t\t\tstd::string\t\t\tname;\n\t\t\t\t// Default value to substitute\n\t\t\t\tstd::string\t\t\tdef;\n\t\t};\n\t\tvoid\t\tcollectMacroInfo(const std::string& str, std::vector<Macro>& macros);\n};\n#endif\n\n"
  },
  {
    "path": "C/common/include/reading_circularbuffer.h",
    "content": "#ifndef _READING_CIRCULARBUFFER_H\n#define _READING_CIRCULARBUFFER_H\n/*\n * Fledge Reading Circular Buffer.\n *\n * Copyright (c) 2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <reading.h>\n#include <mutex>\n#include <vector>\n#include <memory>\n\n/**\n * A circular buffer of readings. The buffer size is set in the constructor,\n * when it fills the oldest reading will be overwritten by new readings being\n * appended.\n *\n * The user can extract the current state at any point in historic order.\n */\nclass ReadingCircularBuffer {\n\tpublic:\n\t\tReadingCircularBuffer(unsigned int size);\n\t\t~ReadingCircularBuffer();\n\t\tvoid\t\tinsert(Reading *);\n\t\tvoid\t\tinsert(const std::vector<Reading *>& readings);\n\t\tvoid\t\tinsert(const std::vector<Reading *> *readings);\n\t\tint\t\textract(std::vector<std::shared_ptr<Reading>>& vec);\n\tprivate:\n\t\tunsigned int\tm_size;\n\t\tstd::mutex\tm_mutex;\n\t\tstd::vector<std::shared_ptr<Reading>>\n\t\t\t\tm_readings;\n\t\tunsigned int\tm_insert;\n\t\tunsigned int\tm_entries;\n\n};\n#endif\n"
  },
  {
    "path": "C/common/include/reading_set.h",
    "content": "#ifndef _READINGSET_H\n#define _READINGSET_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <string>\n#include <string.h>\n#include <sstream>\n#include <iostream>\n#include <reading.h>\n#include <rapidjson/document.h>\n#include <vector>\n\n/**\n * Reading set class\n *\n * A specialised container for a set of readings that allows\n * creation from a JSON document.\n */\nclass ReadingSet {\n\tpublic:\n\t\tReadingSet();\n\t\tReadingSet(const std::string& json);\n\t\tReadingSet(const std::vector<Reading *>* readings);\n\t\tvirtual ~ReadingSet();\n\n\t\tunsigned long\t\t\tgetCount() const { return m_readings.size(); };\n\t\tconst Reading\t\t\t*operator[] (const unsigned int idx) {\n\t\t\t\t\t\t\treturn m_readings[idx];\n\t\t\t\t\t\t};\n\n\t\t// Return the const reference of readings data\n\t\tconst std::vector<Reading *>&\tgetAllReadings() const { return m_readings; };\n\t\t// Return the reference of readings\n\t\tstd::vector<Reading *>*\t\tgetAllReadingsPtr() { return &m_readings; };\n\n\t\t// Remove readings from reading set and return reference to readings\n\t\tstd::vector<Reading *>* moveAllReadings();\n\t\t// Delete a reading from reading set and return pointer of deleted reading\n\t\tReading* removeReading(unsigned long id);\n\t\t\n\t\t// Return the reading id of the last  data element\n\t\tunsigned long\t\t\tgetLastId() const { return m_last_id; };\n\t\tunsigned long\t\t\tgetReadingId(uint32_t pos);\n\t\tvoid\t\t\t\tappend(ReadingSet *);\n\t\tvoid\t\t\t\tappend(ReadingSet&);\n\t\tvoid\t\t\t\tappend(std::vector<Reading *> &);\n\t\tvoid\t\t\t\tmerge(std::vector<Reading *> *readings);\n\t\tvoid\t\t\t\tremoveAll();\n\t\tvoid\t\t\t\tclear();\n\t\tbool\t\t\t\tcopy(const ReadingSet& src);\n\n\tprotected:\n\t\tunsigned long\t\t\tm_count;\n\t\tReadingSet(const ReadingSet&);\n\t\tReadingSet&\t\t\toperator=(ReadingSet 
const &);\n\t\tstd::vector<Reading *>\t\tm_readings;\n\t\t// Id of last Reading element\n\t\tunsigned long\t\t\tm_last_id;    // Id of the last Reading\n};\n\n/**\n * JSONReading class\n *\n * A specialised reading class that allows creation from a JSON document\n */\nclass JSONReading : public Reading {\n\tpublic:\n\t\tJSONReading(const rapidjson::Value& json);\n\t\t~JSONReading() {};\n\n\t\t// Return the reading id\n\t\tunsigned long\tgetId() const { return m_id; };\n\n\tprivate:\n\t\tDatapoint \t*datapoint(const std::string& name, const rapidjson::Value& json);\n                void \t\tescapeCharacter(std::string& stringToEvaluate, std::string pattern);\n};\n\nclass ReadingSetException : public std::exception\n{\n\tpublic:\n\t\tReadingSetException(const char *what)\n\t\t{\n\t\t\tm_what = strdup(what);\n\t\t};\n\t\t~ReadingSetException()\n\t\t{\n\t\t\tif (m_what)\n\t\t\t\tfree(m_what);\n\t\t};\n\t\tvirtual const char *what() const throw()\n\t\t{\n\t\t\treturn m_what;\n\t\t};\n\tprivate:\n\t\tchar *m_what;\n};\n#endif\n\n"
  },
  {
    "path": "C/common/include/reading_stream.h",
    "content": "#ifndef _READING_STREAM_H\n#define _READING_STREAM_H\n/*\n * Fledge storage reading stream protocol definitions.\n *\n * Copyright (c) 2019 Dianomic Systems Inc.\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <stdint.h>\n#include <sys/time.h>\n\n#define RDS_CONNECTION_MAGIC\t0x344f4e4e\n#define\tRDS_BLOCK_MAGIC\t\t0x5244424b\n#define\tRDS_READING_MAGIC\t0x52444947\n#define RDS_ACK_MAGIC\t\t0x4241434b\n#define RDS_NACK_MAGIC\t\t0x4e41434b\n\ntypedef struct {\n\tuint32_t\tmagic;\n\tuint32_t\ttoken;\n} RDSConnectHeader;\n\ntypedef struct {\n\tuint32_t\tmagic;\n\tuint32_t\tblockNumber;\n\tuint32_t\tcount;\n} RDSBlockHeader;\n\ntypedef struct {\n\tuint32_t\tmagic;\n\tuint32_t\treadingNo;\n\tuint32_t\tassetLength;\n\tuint32_t\tpayloadLength;\n} RDSReadingHeader;\n\ntypedef struct {\n\tuint32_t\tmagic;\n\tuint32_t\tblock;\n} RDSAcknowledge;\n\ntypedef struct {\n\tuint32_t\tassetCodeLength;\n\tuint32_t \tpayloadLength;\n\tstruct timeval\tuserTs;\n\tchar\t\tassetCode[1];\n} ReadingStream;\n\n#endif\n\n"
  },
  {
    "path": "C/common/include/readingset_circularbuffer.h",
    "content": "#ifndef _READINGSETCIRCULARBUFFER_H\n#define _READINGSETCIRCULARBUFFER_H\n/*\n * Fledge ReadingSet Circular Buffer.\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Devki Nandan Ghildiyal\n */\n#include <reading_set.h>\n#include <mutex>\n#include <vector>\n#include <memory>\n\n/**\n * Reading set circular buffer class\n *\n * Reading set circular buffer is a data structure to hold ReadingSet\n * passed to a plugin.\n */\nclass ReadingSetCircularBuffer {\n\tpublic:\n\t\tReadingSetCircularBuffer(unsigned long maxBufferSize=10);\n\t\t~ReadingSetCircularBuffer();\n\n\t\tvoid\tinsert(ReadingSet*);\n\t\tvoid\tinsert(ReadingSet&);\n\t\tstd::vector<std::shared_ptr<ReadingSet>> extract(bool isExtractSingleElement=true);\n\n\tprivate:\n\t\tstd::mutex\tm_mutex;\n\t\tunsigned long\tm_maxBufferSize;\n\t\tunsigned long\tm_nextReadIndex;\n\t\tvoid appendReadingSet(const std::vector<Reading *>& readings);\n\t\tReadingSetCircularBuffer (const ReadingSetCircularBuffer&) = delete;\n\t\tReadingSetCircularBuffer&\toperator=(const ReadingSetCircularBuffer&) = delete;\n\t\tstd::vector<std::shared_ptr<ReadingSet>> m_circularBuffer;\n};\n\n#endif\n\n"
  },
  {
    "path": "C/common/include/resultset.h",
    "content": "#ifndef _RESULTSET_H\n#define _RESULTSET_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n#include <string.h>\n#include <sstream>\n#include <iostream>\n#include <vector>\n#include <rapidjson/document.h>\n\ntypedef enum column_type {\n\tINT_COLUMN = 1,\n\tNUMBER_COLUMN,\n\tSTRING_COLUMN,\n\tBOOL_COLUMN,\n\tJSON_COLUMN,\n\tNULL_COLUMN\n} ColumnType;\n\n\n/**\n * Result set\n */\nclass ResultSet {\n\tpublic:\n\t\tclass ColumnValue {\n\t\t\tpublic:\n\t\t\t\tColumnValue(const std::string& value)\n\t\t\t\t{\n\t\t\t\t\tm_value.str = (char *)malloc(value.length() + 1);\n\t\t\t\t\tstrncpy(m_value.str, value.c_str(), value.length() + 1);\n\t\t\t\t\tm_type = STRING_COLUMN;\n\t\t\t\t};\n\t\t\t\tColumnValue(const int value)\n\t\t\t\t{\n\t\t\t\t\tm_value.ival = value;\n\t\t\t\t\tm_type = INT_COLUMN;\n\t\t\t\t};\n\t\t\t\tColumnValue(const long value)\n\t\t\t\t{\n\t\t\t\t\tm_value.ival = value;\n\t\t\t\t\tm_type = INT_COLUMN;\n\t\t\t\t};\n\t\t\t\tColumnValue(const double value)\n\t\t\t\t{\n\t\t\t\t\tm_value.fval = value;\n\t\t\t\t\tm_type = NUMBER_COLUMN;\n\t\t\t\t};\n\t\t\t\tColumnValue(const rapidjson::Value& value)\n\t\t\t\t{\n\t\t\t\t\tm_doc = new rapidjson::Document();\n\t\t\t\t\trapidjson::Document::AllocatorType& a = m_doc->GetAllocator();\n\t\t\t\t\tm_value.json = new rapidjson::Value(value, a);\n\t\t\t\t\tm_type = JSON_COLUMN;\n\t\t\t\t};\n\t\t\t\t~ColumnValue()\n\t\t\t\t{\n\t\t\t\t\tif (m_type == STRING_COLUMN)\n\t\t\t\t\t\tfree(m_value.str);\n\t\t\t\t\telse if (m_type == JSON_COLUMN)\n\t\t\t\t\t{\n\t\t\t\t\t\tdelete m_doc;\n\t\t\t\t\t\tdelete m_value.json;\n\t\t\t\t\t}\n\t\t\t\t};\n\t\t\t\tColumnType \tgetType() { return m_type; };\n\t\t\t\tlong\tgetInteger() const;\n\t\t\t\tdouble\tgetNumber() const;\n\t\t\t\tchar\t*getString() const;\n\t\t\t\tconst rapidjson::Value *getJSON() const { return m_value.json; 
};\n\t\t\tprivate:\n\t\t\t\tColumnValue(const ColumnValue&);\n\t\t\t\tColumnValue&\toperator=(ColumnValue const&);\n\t\t\t\tColumnType\tm_type;\n\t\t\t\tunion {\n\t\t\t\t\tchar\t\t\t*str;\n\t\t\t\t\tlong\t\t\tival;\n\t\t\t\t\tdouble\t\t\tfval;\n\t\t\t\t\trapidjson::Value\t*json;\n\t\t\t\t\t}\tm_value;\n\t\t\t\trapidjson::Document *m_doc;\n\t\t};\n\n\t\tclass Row {\n\t\t\tpublic:\n\t\t\t\tRow(ResultSet *resultSet) : m_resultSet(resultSet) {};\n\t\t\t\t~Row()\n\t\t\t\t{\n\t\t\t\t\tfor (auto it = m_values.cbegin();\n\t\t\t\t\t\t\tit != m_values.cend(); it++)\n\t\t\t\t\t\tdelete *it;\n\t\t\t\t}\n\t\t\t\tvoid append(ColumnValue *value)\n\t\t\t\t{\n\t\t\t\t\tm_values.push_back(value);\n\t\t\t\t};\n\t\t\t\tColumnType\tgetType(unsigned int column);\n\t\t\t\tColumnType\tgetType(const std::string& name);\n\t\t\t\tColumnValue\t*getColumn(unsigned int column) const;\n\t\t\t\tColumnValue\t*getColumn(const std::string& name) const;\n\t\t\t\tColumnValue \t*operator[] (unsigned long colNo) const {\n\t\t\t\t\t\t\treturn m_values[colNo];\n\t\t\t\t\t\t};\n\t\t\tprivate:\n\t\t\t\tRow(const Row&);\n\t\t\t\tRow&\t\t\t\t\toperator=(Row const&);\n\t\t\t\tstd::vector<ResultSet::ColumnValue *>\tm_values;\n\t\t\t\tconst ResultSet\t\t\t\t*m_resultSet;\n\t\t};\n\n\t\ttypedef std::vector<Row *>::iterator RowIterator;\n\n\t\tResultSet(const std::string& json);\n\t\t~ResultSet();\n\t\tunsigned int\t\t\trowCount() const { return m_rowCount; };\n\t\tunsigned int\t\t\tcolumnCount() const { return m_columns.size(); };\n\t\tconst std::string&\t\tcolumnName(unsigned int column) const;\n\t\tColumnType\t\t\tcolumnType(unsigned int column) const;\n\t\tColumnType\t\t\tcolumnType(const std::string& name) const;\n\t\tRowIterator\t\t\tfirstRow();\n\t\tRowIterator\t\t\tnextRow(RowIterator it);\n\t\tbool\t\t\t\tisLastRow(RowIterator it) const;\n\t\tbool\t\t\t\thasNextRow(RowIterator it) const;\n\t\tunsigned int\t\t\tfindColumn(const std::string& name) const;\n\t\tconst Row *\t\t\toperator[] (unsigned long 
rowNo) {\n\t\t\t\t\t\t\treturn m_rows[rowNo];\n\t\t\t\t\t\t};\n\n\tprivate:\n\t\tResultSet(const ResultSet &);\n\t\tResultSet&\t\t\toperator=(ResultSet const&);\n\t\tclass Column {\n\t\t\tpublic:\n\t\t\t\tColumn(const std::string& name, ColumnType type) : m_name(name), m_type(type) {};\n\t\t\t\tconst std::string& getName() { return m_name; };\n\t\t\t\tColumnType\tgetType() { return m_type; };\n\t\t\tprivate:\n\t\t\t\tconst std::string\tm_name;\n\t\t\t\tColumnType\t\tm_type;\n\t\t};\n\n\n\t\tunsigned int\t\t\t\tm_rowCount;\n\t\tstd::vector<ResultSet::Column *>\tm_columns;\n\t\tstd::vector<ResultSet::Row *>\t\tm_rows;\n\n};\n\nclass ResultException : public std::exception {\n\n\tpublic:\n\t\tResultException(const char *what)\n\t\t{\n\t\t\tm_what = strdup(what);\n\t\t};\n\t\t~ResultException()\n\t\t{\n\t\t\tif (m_what)\n\t\t\t\tfree(m_what);\n\t\t};\n\t\tvirtual const char *what() const throw()\n\t\t{\n\t\t\treturn m_what;\n\t\t};\n\tprivate:\n\t\tchar *m_what;\n};\n\nclass ResultNoSuchColumnException : public std::exception {\n\tpublic:\n\t\tvirtual const char *what() const throw()\n\t\t{\n\t\t\treturn \"Column does not exist\";\n\t\t}\n};\n\nclass ResultNoMoreRowsException : public std::exception {\n\tpublic:\n\t\tvirtual const char *what() const throw()\n\t\t{\n\t\t\treturn \"No more rows in the result set\";\n\t\t}\n};\n\nclass ResultIncorrectTypeException : public std::exception {\n\tpublic:\n\t\tvirtual const char *what() const throw()\n\t\t{\n\t\t\treturn \"Incorrect column type requested\";\n\t\t}\n};\n\n#endif\n\n"
  },
  {
    "path": "C/common/include/returns.h",
    "content": "#ifndef _RETURNS_H\n#define _RETURNS_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n#include <sstream>\n#include <iostream>\n\n\n/**\n * Control a returned column\n */\nclass Returns {\n\tpublic:\n\t\tReturns(const std::string& column) :\n\t\t\t\tm_column(column) {};\n\t\tReturns(const std::string& column, const std::string& alias) :\n\t\t\t\tm_column(column), m_alias(alias) {};\n\t\tReturns(const std::string& column, const std::string& alias, const std::string& format) :\n\t\t\t\tm_column(column), m_alias(alias), m_format(format) {};\n\t\t~Returns() {};\n\t\tvoid\t\tformat(const std::string format)\n\t\t{\n\t\t\tm_format = format;\n\t\t}\n\t\tvoid\t\ttimezone(const std::string timezone)\n\t\t{\n\t\t\tm_timezone = timezone;\n\t\t}\n\t\tstd::string\ttoJSON()\n\t\t{\n\t\tstd::ostringstream json;\n\n\t\t\tif ((! m_alias.empty()) || (! m_format.empty()) || (! m_timezone.empty()))\n\t\t\t{\n\t\t\t\tjson << \"{ \";\n\t\t\t\tjson << \"\\\"column\\\" : \\\"\" << m_column << \"\\\"\";\n\t\t\t\tif (! m_alias.empty())\n\t\t\t\t\tjson << \", \\\"alias\\\" : \\\"\" << m_alias << \"\\\"\";\n\t\t\t\tif (! m_format.empty())\n\t\t\t\t\tjson << \", \\\"format\\\" : \\\"\" << m_format << \"\\\"\";\n\t\t\t\tif (! m_timezone.empty())\n\t\t\t\t\tjson << \", \\\"timezone\\\" : \\\"\" << m_timezone << \"\\\"\";\n\t\t\t\tjson << \" }\";\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tjson << \"\\\"\" << m_column << \"\\\"\";\n\t\t\t}\n\t\t\treturn json.str();\n\t\t}\n\tprivate:\n\t\tconst std::string\tm_column;\n\t\tconst std::string\tm_alias;\n\t\tstd::string\t\tm_format;\n\t\tstd::string\t\tm_timezone;\n};\n#endif\n"
  },
  {
    "path": "C/common/include/service_record.h",
    "content": "#ifndef _SERVICE_RECORD_H\n#define _SERVICE_RECORD_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <json_provider.h>\n#include <string>\n\nclass ServiceRecord : public JSONProvider {\n\tpublic:\n\t\tServiceRecord(const std::string& name);\n\t\tServiceRecord(const std::string& name,\n\t\t\t      const std::string& type);\n\t\tServiceRecord(const std::string& name,\n\t\t\t      const std::string& type,\n\t\t\t      const std::string& protocol,\n\t\t\t      const std::string& address,\n\t\t\t      const unsigned short port,\n\t\t\t      const unsigned short managementPort,\n\t\t\t      const std::string& token = \"\");\n\t\tvoid\t\t\tasJSON(std::string &) const;\n\t\tconst std::string&\tgetName() const\n\t\t\t\t\t{\n\t\t\t\t\t\treturn m_name;\n\t\t\t\t\t}\n\t\tconst std::string&\tgetType() const\n\t\t\t\t\t{\n\t\t\t\t\t\treturn m_type;\n\t\t\t\t\t}\n\t\tvoid\t\t\tsetAddress(const std::string& address)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_address = address;\n\t\t\t\t\t}\n\t\tvoid\t\t\tsetPort(const unsigned short port)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_port = port;\n\t\t\t\t\t}\n\t\tvoid\t\t\tsetProtocol(const std::string& protocol)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_protocol = protocol;\n\t\t\t\t\t}\n\t\tconst std::string&\tgetProtocol() const\n\t\t\t\t\t{\n\t\t\t\t\t\treturn m_protocol;\n\t\t\t\t\t}\n\t\tvoid\t\t\tsetManagementPort(const unsigned short managementPort)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_managementPort = managementPort;\n\t\t\t\t\t}\n\t\tconst std::string&\tgetAddress()\n\t\t\t\t\t{\n\t\t\t\t\t\treturn m_address;\n\t\t\t\t\t}\n\t\tunsigned short\t\tgetPort()\n\t\t\t\t\t{\n\t\t\t\t\t\treturn m_port;\n\t\t\t\t\t}\n\t\tbool\t\t\toperator==(const ServiceRecord& b) const\n\t\t\t\t\t{\n\t\t\t\t\t\treturn m_name.compare(b.m_name) == 0\n\t\t\t\t\t\t\t&& m_type.compare(b.m_type) == 0\n\t\t\t\t\t\t\t&& m_protocol.compare(b.m_protocol) == 0\n\t\t\t\t\t\t\t&& 
m_address.compare(b.m_address) == 0\n\t\t\t\t\t\t\t&& m_port == b.m_port\n\t\t\t\t\t\t\t&& m_managementPort == b.m_managementPort;\n\t\t\t\t\t}\n\tprivate:\n\t\tstd::string\t\tm_name;\n\t\tstd::string\t\tm_type;\n\t\tstd::string\t\tm_protocol;\n\t\tstd::string\t\tm_address;\n\t\tunsigned short\t\tm_port;\n\t\tunsigned short\t\tm_managementPort;\n\t\tstd::string\t\tm_token; // token set by core server at service start\n};\n\n#endif\n"
  },
  {
    "path": "C/common/include/sort.h",
    "content": "#ifndef _SORT_H\n#define _SORT_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n#include <sstream>\n#include <iostream>\n\n\n/**\n * Sort clause in a selection of records\n */\nclass Sort {\n\tpublic:\n\t\tSort(const std::string& column) :\n\t\t\t\tm_column(column), m_reverse(false) {};\n\t\tSort(const std::string& column, bool reverse) :\n\t\t\t\tm_column(column), m_reverse(reverse) {};\n\t\t~Sort() {};\n\t\tstd::string\ttoJSON()\n\t\t{\n\t\tstd::ostringstream json;\n\n\t\t\tjson << \"{ \\\"column\\\" : \\\"\" << m_column << \"\\\", \";\n\t\t\tjson << \"\\\"direction\\\" : \\\"\" << (m_reverse ? \"desc\" : \"asc\") << \"\\\" }\";\n\t\t\treturn json.str();\n\t\t}\n\tprivate:\n\t\tconst std::string\tm_column;\n\t\tbool\t\t\tm_reverse;\n};\n#endif\n\n"
  },
  {
    "path": "C/common/include/storage_client.h",
    "content": "#ifndef _STORAGE_CLIENT_H\n#define _STORAGE_CLIENT_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <client_http.hpp>\n#include <reading.h>\n#include <reading_set.h>\n#include <resultset.h>\n#include <purge_result.h>\n#include <query.h>\n#include <insert.h>\n#include <json_properties.h>\n#include <expression.h>\n#include <update_modifier.h>\n#include <logger.h>\n#include <string>\n#include <vector>\n#include <thread>\n\nusing HttpClient = SimpleWeb::Client<SimpleWeb::HTTP>;\n\n#define STREAM_BLK_SIZE \t100\t// Readings to send per write call to a stream\n#define STREAM_THRESHOLD\t25\t// Switch to streamed mode above this number of readings per second\n\n// Backup values for repeated storage client exception messages\n#define SC_INITIAL_BACKOFF\t100\n#define SC_MAX_BACKOFF\t\t1000\n\n#define DEFAULT_SCHEMA \t\"fledge\"\n\nclass ManagementClient;\n\n/**\n * Client for accessing the storage service\n */\nclass StorageClient {\n\tpublic:\n\t\tStorageClient(HttpClient *client);\n\t\tStorageClient(const std::string& hostname, const unsigned short port);\n\t\t~StorageClient();\n\t\tResultSet\t*queryTable(const std::string& schema, const std::string& tablename, const Query& query);\n\t\tResultSet\t*queryTable(const std::string& tablename, const Query& query);\n\t\tReadingSet\t*queryTableToReadings(const std::string& tableName, const Query& query);\n\t\tint \t\tinsertTable(const std::string& schema, const std::string& tableName, const InsertValues& values);\n\t\tint             insertTable(const std::string& schema, const std::string& tableName,\n                                                const std::vector<InsertValues>& values);\n                int             insertTable(const std::string& tableName, const std::vector<InsertValues>& values);\n\n\n\n\t\tint\t\tupdateTable(const std::string& schema, const std::string& 
tableName, const InsertValues& values,\n\t\t\t\t\tconst Where& where, const UpdateModifier *modifier = NULL);\n\t\tint\t\tupdateTable(const std::string& schema, const std::string& tableName, const JSONProperties& json,\n\t\t\t\t\tconst Where& where, const UpdateModifier *modifier = NULL);\n\t\tint\t\tupdateTable(const std::string& schema, const std::string& tableName, const InsertValues& values,\n\t\t\t\t\tconst JSONProperties& json, const Where& where, const UpdateModifier *modifier = NULL);\n\t\tint\t\tupdateTable(const std::string& schema, const std::string& tableName, const ExpressionValues& values,\n\t\t\t\t\tconst Where& where, const UpdateModifier *modifier = NULL);\n\t\tint\t\tupdateTable(const std::string& schema, const std::string& tableName,\n\t\t\t\t\tstd::vector<std::pair<ExpressionValues *, Where *>>& updates, const UpdateModifier *modifier = NULL);\n\t\tint\t\tupdateTable(const std::string& schema, const std::string& tableName, const InsertValues& values,\n\t\t\t\t\tconst ExpressionValues& expressions, const Where& where, const UpdateModifier *modifier = NULL);\n\t\tint\t\tdeleteTable(const std::string& schema, const std::string& tableName, const Query& query);\n\t\tint \t\tinsertTable(const std::string& tableName, const InsertValues& values);\n\t\tint\t\tupdateTable(const std::string& tableName, const InsertValues& values, const Where& where, const UpdateModifier *modifier = NULL);\n\t\tint\t\tupdateTable(const std::string& tableName, const JSONProperties& json, const Where& where, const UpdateModifier *modifier = NULL);\n\t\tint\t\tupdateTable(const std::string& tableName, const InsertValues& values, const JSONProperties& json,\n\t\t\t\t\tconst Where& where, const UpdateModifier *modifier = NULL);\n\t\tint\t\tupdateTable(const std::string& tableName, const ExpressionValues& values, const Where& where,\n\t\t\t\t\tconst UpdateModifier *modifier = NULL);\n\t\tint\t\tupdateTable(const std::string& tableName, std::vector<std::pair<ExpressionValues *, 
Where *>>& updates,\n\t\t\t\t\tconst UpdateModifier *modifier = NULL);\n\t\tint\t\tupdateTable(const std::string& tableName, const InsertValues& values, const ExpressionValues& expressions,\n\t\t\t\t\tconst Where& where, const UpdateModifier *modifier = NULL);\n\t\tint \t\tupdateTable(const std::string& schema, const std::string& tableName, \n\t\t\t\t\tstd::vector<std::pair<InsertValue*, Where* > > &updates, const UpdateModifier *modifier);\n\n\t\tint \t\tupdateTable(const std::string& tableName, std::vector<std::pair<InsertValue*, Where*> >& updates, \n\t\t\t\t\tconst UpdateModifier *modifier = NULL);\n\n\t\tint\t\tdeleteTable(const std::string& tableName, const Query& query);\n\t\tbool\t\treadingAppend(Reading& reading);\n\t\tbool\t\treadingAppend(const std::vector<Reading *> & readings);\n\t\tResultSet\t*readingQuery(const Query& query);\n\t\tReadingSet \t*readingQueryToReadings(const Query& query);\n\t\tReadingSet\t*readingFetch(const unsigned long readingId, const unsigned long count);\n\t\tPurgeResult\treadingPurgeByAge(unsigned long age, unsigned long sent, bool purgeUnsent);\n\t\tPurgeResult\treadingPurgeBySize(unsigned long size, unsigned long sent, bool purgeUnsent);\n\t\tPurgeResult\treadingPurgeByAsset(const std::string& asset);\n\t\tbool\t\tregisterAssetNotification(const std::string& assetName,\n\t\t\t\t\t\t\t  const std::string& callbackUrl);\n\t\tbool\t\tunregisterAssetNotification(const std::string& assetName,\n\t\t\t\t\t\t\t    const std::string& callbackUrl);\n\t\tbool\t\tregisterTableNotification(const std::string& tableName, const std::string& key, \n\t\t\t\t\t\t\t\tstd::vector<std::string> keyValues, const std::string& operation, const std::string& callbackUrl);\n\t\tbool\t\tunregisterTableNotification(const std::string& tableName, const std::string& key, \n\t\t\t\t\t\t\t\tstd::vector<std::string> keyValues, const std::string& operation, const std::string& callbackUrl);\n\t\tvoid\t\tregisterManagement(ManagementClient *mgmnt) { m_management = 
mgmnt; };\n\t\tbool \t\tcreateSchema(const std::string&);\n\t\tbool\t\tdeleteHttpClient();\n\n\tprivate:\n\t\tvoid\t\thandleUnexpectedResponse(const char *operation,\n\t\t\t\t\t\t\tconst std::string& table,\n\t\t\t\t\t\t\tconst std::string& responseCode,\n\t\t\t\t\t\t\tconst std::string& payload);\n\t\tvoid\t\thandleUnexpectedResponse(const char *operation,\n\t\t\t\t\t\t\tconst std::string& responseCode,\n\t\t\t\t\t\t\tconst std::string& payload);\n\t\tvoid\t\thandleException(const std::exception& ex, const char *operation, ...);\n\t\tHttpClient \t*getHttpClient(void);\n\t\tbool\t\topenStream();\n\t\tbool\t\tstreamReadings(const std::vector<Reading *> & readings);\n\n\t\tstd::ostringstream \t\t\tm_urlbase;\n\t\tstd::string\t\t\t\tm_host;\n\t\tstd::map<std::thread::id, HttpClient *> m_client_map;\n\t\tstd::map<std::thread::id, std::atomic<int>> m_seqnum_map;\n\t\tLogger\t\t\t\t\t*m_logger;\n\t\tpid_t\t\t\t\t\tm_pid;\n\t\tbool\t\t\t\t\tm_streaming;\n\t\tint\t\t\t\t\tm_stream;\n\t\tuint32_t\t\t\t\tm_readingBlock;\n\t\tstd::string\t\t\t\tm_lastException;\n\t\tint\t\t\t\t\tm_exRepeat;\n\t\tint\t\t\t\t\tm_backoff;\n\t\tManagementClient\t\t\t*m_management;\n};\n\n#endif\n\n"
  },
  {
    "path": "C/common/include/string_utils.h",
    "content": "#ifndef _STRING_UTILS_H\n#define _STRING_UTILS_H\n/*\n * Fledge utilities functions for handling stringa\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli, Massimiliano Pinto\n */\n\n#include <string>\n#include <sstream>\n#include <iomanip>\n\n\nvoid StringReplace(std::string& StringToManage,\n\t\t   const std::string& StringToSearch,\n\t\t   const std::string& StringReplacement);\n\nvoid StringReplaceAll(std::string& StringToManage,\n\t\t\t\t\t  const std::string& StringToSearch,\n\t\t\t\t\t  const std::string& StringReplacement);\n\nstd::string StringSlashFix(const std::string& stringToFix);\nstd::string evaluateParentPath(const std::string& path, char separator);\nstd::string extractLastLevel(const std::string& path, char separator);\n\nvoid   StringStripCRLF(std::string& StringToManage);\nstd::string StringStripWhiteSpacesAll(const std::string& original);\nstd::string StringStripWhiteSpacesExtra(const  std::string& original);\nvoid StringStripQuotes(std::string& StringToManage);\n\nstd::string urlEncode(const std::string& s);\nstd::string urlDecode(const std::string& s);\nvoid StringEscapeQuotes(std::string& s);\n\nchar *trim(char *str);\nstd::string StringLTrim(const std::string& str);\nstd::string StringRTrim(const std::string& str);\nstd::string StringTrim(const std::string& str);\n\nbool IsRegex(const std::string &str);\n\nstd::string StringAround(const std::string& str, unsigned int pos,\n\t\tunsigned int after = 30, unsigned int before = 10);\n\nvoid StringReplaceAllEx(std::string& StringToManage,\n\t\t\t\t\t  const std::string& StringToSearch,\n\t\t\t\t\t  const std::string& StringToChange);\n\nstd::string\tescape(const std::string& str);\n\n#endif\n"
  },
  {
    "path": "C/common/include/timebucket.h",
    "content": "#ifndef _TIMEBUCKET_H\n#define _TIMEBUCKET_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n#include <sstream>\n#include <iostream>\n\n\n/**\n * Timebucket clause in a selection of records\n */\nclass Timebucket {\n\tpublic:\n\t\tTimebucket(const std::string& column, unsigned int size,\n\t\t\tconst std::string& format, const std::string& alias) :\n\t\t\t\tm_column(column), m_size(size), m_format(format), m_alias(alias) {};\n\t\tTimebucket(const std::string& column, unsigned int size,\n\t\t\tconst std::string& format) :\n\t\t\t\tm_column(column), m_size(size), m_format(format), m_alias(column) {};\n\t\t~Timebucket() {};\n\t\tstd::string\ttoJSON()\n\t\t{\n\t\tstd::ostringstream json;\n\n\t\t\tjson << \"{ \\\"timestamp\\\" : \\\"\" << m_column << \"\\\", \";\n\t\t\tjson << \"\\\"size\\\" : \\\"\" << m_size << \"\\\", \";\n\t\t\tjson << \"\\\"format\\\" : \\\"\" << m_format << \"\\\", \";\n\t\t\tjson << \"\\\"alias\\\" : \\\"\" << m_alias << \"\\\" }\";\n\t\t\treturn json.str();\n\t\t}\n\tprivate:\n\t\tconst std::string\tm_column;\n\t\tunsigned int\t\tm_size;\n\t\tconst std::string\tm_format;\n\t\tconst std::string\tm_alias;\n};\n#endif\n\n"
  },
  {
    "path": "C/common/include/update_modifier.h",
    "content": "#ifndef _UPDATE_MODIFIER_H\n#define _UPDATE_MODIFIER_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2022 Dianonic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n\n\n/**\n * Update modifier\n */\nclass UpdateModifier {\n\tpublic:\n\t\tUpdateModifier(const std::string& modifier) :\n\t\t\t\tm_modifier(modifier)\n\t\t{\n\t\t};\n\t\t~UpdateModifier();\n\t\tconst std::string\ttoJSON() const { return m_modifier; };\n\tprivate:\n\t\tUpdateModifier(const UpdateModifier&);\n\t\tUpdateModifier&\t\toperator=(UpdateModifier const&);\n\t\tconst std::string\tm_modifier;\n};\n#endif\n\n"
  },
  {
    "path": "C/common/include/utils.h",
    "content": "#ifndef _FLEDGE_UTILS_H\n#define _FLEDGE_UTILS_H\n/*\n * Fledge general utilities\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <string>\n#include <algorithm>\n#include <vector>\n\n#define _FLEDGE_ROOT_PATH    \"/usr/local/fledge\"\n\nusing namespace std;\n\n/**\n * Return Fledge root dir\n *\n * Return current value of FLEDGE_ROOT env var or\n * default path _FLEDGE_ROOT_PATH\n *\n * @return\tReturn Fledge root dir\n */\nstatic const string getRootDir()\n{\n\tconst char* rootDir = getenv(\"FLEDGE_ROOT\");\n\treturn (rootDir ? string(rootDir) : string(_FLEDGE_ROOT_PATH));\n}\n\n/**\n * Return Fledge data dir\n *\n * Return current value of FLEDGE_DATA env var or\n * default value: getRootDir + /data\n *\n * @return\tReturn Fledge data dir\n */\nstatic const string getDataDir()\n{\n\tconst char* dataDir = getenv(\"FLEDGE_DATA\");\n\treturn (dataDir ? string(dataDir) : string(getRootDir() + \"/data\"));\n}\n\n/**\n * @brief Constructs the path for the debug-trace subdirectory in the Fledge data directory.\n *\n * @return A string representing the path to the debug-trace directory.\n */\nstatic std::string getDebugTracePath() \n{\n    return getDataDir() + \"/logs/debug-trace\";\n}\n\n/**\n * @brief Converts a string representation of a boolean value to a boolean type.\n *\n * This function takes a string input and checks if it represents a boolean value.\n * It recognizes \"true\", \"1\", and their case-insensitive variants as true.\n * Any other string will be interpreted as false.\n *\n * @param str The string to convert to a boolean. 
Can be \"true\", \"false\", \"1\", \"0\", etc.\n * @return true if the input string represents a true value; false otherwise.\n *\n * @note This function is case-insensitive and will convert the input string to lowercase\n * before comparison.\n *\n * @example\n * bool result1 = stringToBool(\"True\");    // result1 is true\n * bool result2 = stringToBool(\"false\");   // result2 is false\n * bool result3 = stringToBool(\"1\");       // result3 is true\n * bool result4 = stringToBool(\"0\");       // result4 is false\n */\nstatic bool stringToBool(const std::string& str)\n{\n    std::string lowerStr = str;\n    std::transform(lowerStr.begin(), lowerStr.end(), lowerStr.begin(), ::tolower);\n    return (lowerStr == \"true\" || lowerStr == \"1\");\n}\n\n/** \n * @brief Validates if a given string is a valid identifier.\n * A valid identifier is defined as a string that does not contain any disallowed characters.\n * \n * @param str The string to validate as an identifier.\n * @param blockedCharacter A reference to a string that will be set to the first disallowed character found in the input string, if any.\n * @return true if the string is a valid identifier; false otherwise.\n*/\nstatic bool isValidIdentifier(const std::string& str, std::string& blockedCharacter)\n{\n    if (str.empty()) return false;\n    // Check for disallowed characters\n    static const std::vector<std::string> disallowed_characters = { \"\\\\\" };\n    for (const auto& ch : disallowed_characters)\n    {\n        if (str.find(ch) != std::string::npos)\n        {\n            blockedCharacter = ch;\n            return false;\n        }\n    }\n    return true;\n}\n#endif\n"
  },
  {
    "path": "C/common/include/value.h",
    "content": "#ifndef _VALUE_H\n#define _VALUE_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n#include <sstream>\n#include <iostream>\n\n\n/**\n * A value in an update statement\n */\nclass UpdateValue {\n\tpublic:\n\t\tenum UpdateType {\n\t\t\tStringType,\n\t\t\tIntType,\n\t\t\tDoubleType,\n\t\t\tJSONType };\n\t\tUpdateValue(const std::string& column, const std::string& value) :\n\t\t\t\tm_column(column), m_value.str(value), m_type(UpdateValue::StringType) {};\n\t\tUpdateValue(const std::string& column, const int value) :\n\t\t\t\tm_column(column), m_value.ival(value), m_type(UpdateValue::IntType) {};\n\t\tUpdateValue(const std::string& column, const double value) :\n\t\t\t\tm_column(column), m_value.fval(value), m_type(UpdateValue::DoubleType) {};\n\t\t~UpdateValue() {};\n\t\tstd::string\ttoJSON()\n\t\t{\n\t\tstd::ostringstream json;\n\n\t\t\tjson << \"\\\"\" << m_column << \"\\\" : \";\n\t\t\tswitch (m_type)\n\t\t\t{\n\t\t\tcase UpdateValue::StringType:\n\t\t\t\tjson << \"\\\"\" << m_value.str << \"\\\"\";\n\t\t\t\tbreak;\n\t\t\tcase UpdateValue::IntType:\n\t\t\t\tjson << m_value.ival;\n\t\t\t\tbreak;\n\t\t\tcase UpdateValue::DoubleType:\n\t\t\t\tjson << m_value.fval;\n\t\t\t\tbreak;\n\t\t\tcase UpdateValue::JSONType:\n\t\t\t\tjson << m_value.str;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\treturn json.str();\n\t\t}\n\tprivate:\n\t\tconst std::string\tm_column;\n\t\tenum UpdateType\t\tm_type;\n\t\tunion {\n\t\t\tstd::string\tstr;\n\t\t\tint\t\tival;\n\t\t\tdouble\t\tfval;\n\t\t}\t\t\tm_value;\n};\n#endif\n\n"
  },
  {
    "path": "C/common/include/where.h",
    "content": "#ifndef _WHERE_H\n#define _WHERE_H\n/*\n * Fledge storage client.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <string>\n#include <vector>\n#include <stdexcept>\n\ntypedef enum Conditional {\n\tOlder,\n\tNewer,\n\tEquals,\n\tNotEquals,\n\tGreaterThan,\n\tLessThan,\n\tIn,\n\tIsNull,\n\tNotNull\n} Condition;\n\n/**\n * Where clause in a selection of records\n */\nclass Where {\n\tpublic:\n\t\tWhere(const std::string& column, const Condition condition, const std::string& value) :\n\t\t\t\tm_column(column), m_condition(condition), m_and(0), m_or(0)\n\t\t{\n\t\t\tif (condition != In)\n\t\t\t{\n\t\t\t\tm_value = value;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_in.push_back(value);\n\t\t\t}\n\t\t};\n\t\tWhere(const std::string& column, const Condition condition, const std::string& value, Where *andCondition) :\n\t\t\t\tm_column(column), m_condition(condition), m_and(andCondition), m_or(0)\n\t\t{\n\t\t\tif (condition != In)\n\t\t\t{\n\t\t\t\tm_value = value;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_in.push_back(value);\n\t\t\t}\n\t\t};\n\t\tWhere(const std::string& column, const Condition condition) :\n\t\t\t\tm_column(column), m_condition(condition), m_and(0), m_or(0)\n\t\t{\n\t\t\tif (condition != IsNull && condition != NotNull)\n\t\t\t{\n\t\t\t\tthrow std::runtime_error(\"Missing value in where clause\");\n\t\t\t}\n\t\t};\n\t\tWhere(const std::string& column, const Condition condition, Where *andCondition) :\n\t\t\t\tm_column(column), m_condition(condition), m_and(andCondition), m_or(0)\n\t\t{\n\t\t\tif (condition != IsNull && condition != NotNull)\n\t\t\t{\n\t\t\t\tthrow std::runtime_error(\"Missing value in where clause\");\n\t\t\t}\n\t\t};\n\t\t~Where();\n\t\tvoid\t\tandWhere(Where *condition) { m_and = condition; };\n\t\tvoid\t\torWhere(Where *condition) { m_or = condition; };\n\t\tvoid\t\taddIn(const std::string& value)\n\t\t{\n\t\t\tif 
(m_condition == In)\n\t\t\t{\n\t\t\t\tm_in.push_back(value);\n\t\t\t}\n\t\t};\n\t\tconst std::string\ttoJSON() const;\n\tprivate:\n\t\tWhere(const Where&);\n\t\tWhere&\t\t\toperator=(Where const&);\n\t\tconst std::string\tm_column;\n\t\tconst Condition\t\tm_condition;\n\t\tstd::string\t\tm_value;\n\t\tWhere\t\t\t*m_and;\n\t\tWhere\t\t\t*m_or;\n\t\tstd::vector<std::string>\n\t\t\t\t\tm_in;\n};\n#endif\n\n"
  },
  {
    "path": "C/common/join.cpp",
    "content": "/*\n * Fledge storage service client\n *\n * Copyright (c) 2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <join.h>\n#include <query.h>\n#include <sstream>\n#include <iostream>\n\nusing namespace std;\n\n/**\n * Destructor fo rthe join clause\n */\nJoin::~Join()\n{\n\tdelete m_query;\n}\n\n/**\n * Convert a join clause to its JSON representation\n *\n * @return string\tThe JSON form of the join\n */\nconst string Join::toJSON() const\n{\nostringstream   json;\nbool \t\tfirst = true;\n\n\tjson << \" \\\"join\\\" : {\";\n\tjson << \"\\\"table\\\" : { \\\"name\\\" : \\\"\" << m_table << \"\\\", \";\n\tjson << \"\\\"column\\\" : \\\"\" << m_column << \"\\\" }, \";\n\tjson << \"\\\"on\\\" : \\\"\" << m_on << \"\\\", \";\n\tjson << \"\\\"query\\\" : \" << m_query->toJSON();\n\tjson << \" }\";\n\treturn json.str();\n}\n"
  },
  {
    "path": "C/common/json_utils.cpp",
    "content": "/*\n * Fledge utilities functions for handling JSON document\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli\n */\n\n#include <iostream>\n#include <string>\n#include <vector>\n#include \"json_utils.h\"\n#include \"rapidjson/document.h\"\n\nusing namespace std;\nusing namespace rapidjson;\n\n/**\n * Processes a string containing an array in JSON format and loads a vector of string\n *\n * @param vectorString  vector of string used by reference in which the JSON array will be loaded\n * @param JSONString    string containing an array in JSON format\n * @param Key           key of the JSON from which the array should be evaluated\n *\n */\nbool JSONStringToVectorString(std::vector<std::string>& vectorString,\n\t\t\t      const std::string& JSONString,\n\t\t\t      const std::string& Key)\n{\n\tbool success = true;\n\n\tDocument JSONdoc;\n\n\tJSONdoc.Parse(JSONString.c_str());\n\n\n\tif (JSONdoc.HasParseError())\n\t{\n\t\tsuccess = false;\n\n\t} else if (!JSONdoc.HasMember(Key.c_str()))\n\t{\n\t\tsuccess = false;\n\n\t} else if (!JSONdoc[Key.c_str()].IsArray())\n\t{\n\n\t\tsuccess = false;\n\t}\n\n\tif (success)\n\t{\n\t\tconst Value &filterList = JSONdoc[Key.c_str()];\n\t\tif (!filterList.Size())\n\t\t{\n\t\t\tsuccess = false;\n\t\t} else\n\t\t{\n\t\t\tfor (Value::ConstValueIterator itr = filterList.Begin();\n\t\t\t     itr != filterList.End(); ++itr)\n\t\t\t{\n\t\t\t\tvectorString.emplace_back(itr->GetString());\n\t\t\t}\n\n\t\t}\n\t}\n\n\treturn success;\n}\n\nstring JSONescape(const std::string& subject)\n{\nsize_t pos = 0;\nstring replace(\"\\\\\\\"\");\nstring escaped = subject;\n\n        while ((pos = escaped.find(\"\\\"\", pos)) != std::string::npos)\n        {\n                escaped.replace(pos, 1, replace);\n                pos += replace.length();\n        }\n        return escaped;\n}\n\n/**\n * Return unescaped version of a JSON string\n *\n * Routine removes 
\\\" inside the string\n * and leading and trailing \"\n *\n * @param input         Input string\n * @return              Unescaped string\n */\nstd::string JSONunescape(const std::string& input)\n{\n\tstd::string output;\n\tsize_t inputSize = input.size();\n\toutput.reserve(inputSize);\n\n\tfor (size_t i = 0; i < inputSize; ++i)\n\t{\n\t\t// skip leading or trailing \"\n\t\tif ((i == 0 || i == inputSize -1) && input[i] == '\"')\n\t\t{\n\t\t\tcontinue;\n\t\t}\n\n\t\t// \\\\\\\" -> \\\"\n\t\tif (input[i] == '\\\\' && i + 3 < inputSize && input[i + 1] == '\\\\' && input[i + 2] == '\\\\' && input[i + 3] == '\"')\n\t\t{\n\t\t\toutput.push_back('\\\\');\n\t\t\toutput.push_back('\"');\n\t\t\ti += 3;\n\t\t}\n\t\t// \\\\\" -> \\\"\n\t\t// \\\" -> \"\n\t\telse if (input[i] == '\\\\' && i + 1 < inputSize && input[i + 1] == '\"')\n\t\t{\n\t\t\toutput.push_back('\"');\n\t\t\t++i;\n\t\t}\n\t\telse\n\t\t{\n\t\t\toutput.push_back(input[i]);\n\t\t}\n\t}\n\n\treturn output;\n}\n"
  },
  {
    "path": "C/common/logger.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017-2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <logger.h>\n#include <stdio.h>\n#include <unistd.h>\n#include <syslog.h>\n#include <stdarg.h>\n#include <memory>\n#include <string.h>\n#include <sys/time.h>\n#include <sys/socket.h>\n#include <exception>\n#include <arpa/inet.h>\n#include <stdexcept>\n#include <algorithm>\n\nusing namespace std;\n\n// uncomment line below to get uSec level timestamps\n// #define ADD_USEC_TS\nconst char * DEFALUT_LOG_IP = \"127.0.0.1\";\nconst int DEFAULT_LOG_PORT = 5140;\n\ninline long getCurrTimeUsec()\n{\n\tstruct timeval m_timestamp;\n\tgettimeofday(&m_timestamp, NULL);\n\treturn m_timestamp.tv_usec;\n}\n\n/**\n * The singleton pointer\n */\nLogger *Logger::instance = 0;\n\n/**\n * Constructor for the Logger class.\n *\n * @param application\tThe application name\n */\nLogger::Logger(const string& application) : m_runWorker(true), m_workerThread(NULL)\n{\n\tstatic char ident[80];\n\n\tif (instance)\n\t{\n\t\tinstance->error(\"Attempt to create second singleton instance, original application name %s, current attempt made by %s\", ident, application.c_str());\n\t\tthrow runtime_error(\"Attempt to create secnd Logger instance\");\n\t}\n\t/* Prepend \"Fledge \" in all cases other than Fledge itself and Fledge Storage.\n\t */\n\tif (application.compare(\"Fledge\") != 0 && application.compare(\"Fledge Storage\") != 0)\n\t{\n\t\tsnprintf(ident, sizeof(ident), \"Fledge %s\", application.c_str());\n\t}\n\telse\n\t{\n\t\tstrncpy(ident, application.c_str(), sizeof(ident));\n\t}\n\n\t// Check if SYSLOG_UDP_ENABLED is set via environment variable\n\tconst char* udpEnabledEnv = std::getenv(\"SYSLOG_UDP_ENABLED\");\n\tm_SyslogUdpEnabled = false;\n\n\tauto toLower = [](const std::string& s) {\n        std::string out = s;\n        std::transform(out.begin(), out.end(), out.begin(),\n            
[](unsigned char c) { return std::tolower(c); });\n        return out;\n    };\n\n\tif (udpEnabledEnv != nullptr && toLower(std::string(udpEnabledEnv)) == \"true\") \n\t{\n\t\tm_SyslogUdpEnabled = true;\n\t}\n\n\tif(m_SyslogUdpEnabled)\n\t{\n\t\t// Check LOG_IP and LOG_PORT from environment variables with default values\n\t\tconst char* logIpEnv = std::getenv(\"LOG_IP\");\n\t\tconst char* logPortEnv = std::getenv(\"LOG_PORT\");\n\n\t\tstd::string logIp = logIpEnv ? logIpEnv : DEFALUT_LOG_IP; // Default to 127.0.0.1\n\t\tint logPort = logPortEnv ? std::atoi(logPortEnv) : DEFAULT_LOG_PORT; // Default to port 5140\n\t\t// Initialize the UDP socket\n\t\tm_UdpSockFD = socket(AF_INET, SOCK_DGRAM, 0);\n\t\tif (m_UdpSockFD >= 0) \n\t\t{\n\t\t\tmemset(&m_UdpServerAddr, 0, sizeof(m_UdpServerAddr));\n\t\t\tm_UdpServerAddr.sin_family = AF_INET;\n\t\t\tm_UdpServerAddr.sin_port = htons(logPort); // Use the port from LOG_PORT or default\n\t\t\tif (inet_pton(AF_INET, logIp.c_str(), &m_UdpServerAddr.sin_addr) <= 0)\n\t\t\t{\n\t\t\t\tthrow std::runtime_error(\"Invalid LOG_IP address\");\n\t\t\t}\n\t\t} \n\t\telse \n\t\t{\n\t\t\tthrow std::runtime_error(\"Failed to create UDP socket\");\n\t\t}\n\n\t\tif(m_SyslogUdpEnabled)\n\t\t{\n\t\t\tchar hostname[256]; // Buffer to store the hostname\n\t\n\t\t\t// Retrieve the hostname (localhost name)\n\t\t\tif (gethostname(hostname, sizeof(hostname)) != 0)\n\t\t\t{\n\t\t\t\t// Fallback in case of failure to retrieve hostname\n\t\t\t\tstrncpy(hostname, \"localhost\", sizeof(hostname) - 1);\n\t\t\t\thostname[sizeof(hostname) - 1] = '\\0';\n\t\t\t}\n\t\t\t\n\t\t\tm_hostname = hostname;\n\t\t}\n\t}\n\telse\n\t{\n\t\t// Warning: these flags should be updated with caution because the `m_identifier` used in UDP when `m_SyslogUdpEnabled` is `true` may break\n\t\topenlog(ident, LOG_PID|LOG_CONS, LOG_USER);\n\t}\n\n\tinstance = this;\n\tm_level = LOG_WARNING;\n\tm_identifier = ident;\n}\n\n/**\n * Destructor for the logger class.\n 
*/\nLogger::~Logger()\n{\n\t// Stop the getLogger() call returning a deleted instance\n\tif (instance == this)\n\t\tinstance = NULL;\n\telse if (!instance)\n\t\treturn;\t// Already destroyed\n\tm_runWorker = false;\n\tm_condition.notify_one();\n\tif (m_workerThread && m_workerThread->joinable())\n\t{\n\t\tm_workerThread->join();\n\t\tdelete m_workerThread;\n\t\tm_workerThread = NULL;\n\t}\n\n\tif(!m_SyslogUdpEnabled)\n\t{\n\t\tcloselog();\n\t}\n\telse\n\t{\n\t\tif (m_UdpSockFD >= 0) \n\t\t{\n\t\t\tclose(m_UdpSockFD);\n\t\t\tm_UdpSockFD = -1;\n\t\t}\n\t}\n}\n\n/**\n * Send a message to the UDP sink if enabled\n *\n * @param msg\t\tThe message to send\n */\nvoid Logger::sendToUdpSink(const std::string& msg) \n{\n\tif (m_UdpSockFD >= 0) \n\t{\n\t\tsendto(m_UdpSockFD, msg.c_str(), msg.size(), 0, (struct sockaddr*)&m_UdpServerAddr, sizeof(m_UdpServerAddr));\n\t}\n}\n\n/**\n * Return the singleton instance of the logger class.\n */\nLogger *Logger::getLogger()\n{\n\tif (!instance)\n\t{\n\t\t// Any service should have already created the logger\n\t\t// for the service. If not then create the default logger\n\t\t// and clearly identify this. 
We should ideally avoid\n\t\t// the use of a default as this will not identify the\n\t\t// source of the log message.\n\t\tinstance = new Logger(\"(default)\");\n\t}\n\n\treturn instance;\n}\n\n/**\n * Set the minimum level of logging to write to syslog.\n *\n * @param level\tThe minimum, inclusive, level of logging to write\n */\nvoid Logger::setMinLevel(const string& level)\n{\n\tif (level.compare(\"info\") == 0)\n\t{\n\t\tsetlogmask(LOG_UPTO(LOG_INFO));\n\t\tlevelString = level;\n\t\tm_level = LOG_INFO;\n\t} else if (level.compare(\"warning\") == 0)\n\t{\n\t\tsetlogmask(LOG_UPTO(LOG_WARNING));\n\t\tlevelString = level;\n\t\tm_level = LOG_WARNING;\n\t} else if (level.compare(\"debug\") == 0)\n\t{\n\t\tsetlogmask(LOG_UPTO(LOG_DEBUG));\n\t\tlevelString = level;\n\t\tm_level = LOG_DEBUG;\n\t} else if (level.compare(\"error\") == 0)\n\t{\n\t\tsetlogmask(LOG_UPTO(LOG_ERR));\n\t\tlevelString = level;\n\t\tm_level = LOG_ERR;\n\t} else\n\t{\n\t\terror(\"Request to set unsupported log level %s\", level.c_str());\n\t}\n}\n\n/**\n * Register a callback function to be called when\n * a log message is written that matches the specification\n * given.\n *\n * Note: The callback functions are called on a separate thread.\n * This worker thread is only created when the first callback is\n * registered.\n *\n * @param level\t\tThe level that must be matched\n * @param callback\tThe funtion to be called\n * @param userData\tUser date to pass to the callback function\n * @return bool\t\tReturn true if the callback was registered\n */\nbool Logger::registerInterceptor(LogLevel level, LogInterceptor callback, void* userData)\n{\n\t// Do not register the interceptor if callback function is null\n\tif (callback == nullptr)\n\t{\n\t\treturn false;\n\t}\n\n\tstd::lock_guard<std::mutex> lock(m_interceptorMapMutex);\n\tif (m_workerThread == NULL)\n\t{\n\t\tm_workerThread = new std::thread(&Logger::workerThread, this);\n\t}\n\tauto it = m_interceptors.emplace(level, 
InterceptorData{callback, userData});\n\tif (it != m_interceptors.end())\n\t{\n\t\treturn true;\n\t}\n\treturn false;\n}\n\n/**\n * Remove the registration of a previously registered callback\n *\n * @param level\t\tThe matching log level for the callback\n * @param callback\tThe callback to unregister\n * @return bool\t\tTrue if the callback was unregistered.\n */\nbool Logger::unregisterInterceptor(LogLevel level, LogInterceptor callback)\n{\n\tstd::lock_guard<mutex> lock(m_interceptorMapMutex);\n\tauto range = m_interceptors.equal_range(level);\n\tfor (auto it = range.first; it != range.second; ++it)\n\t{\n\t\tif (it->second.callback == callback)\n\t\t{\n\t\t\tm_interceptors.erase(it);\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\n/**\n * Queue the execution of a callback when a log message is received\n * that matches a registered callback\n *\n * @param level\t\tThe log level\n * @param message\tThe log message\n */\nvoid Logger::executeInterceptor(LogLevel level, const std::string& message)\n{\n\tstd::lock_guard<mutex> lock(m_interceptorMapMutex);\n\tauto range = m_interceptors.equal_range(level);\n\tfor (auto it = range.first; it != range.second; ++it)\n\t{\n\t\tstd::lock_guard<mutex> lock(m_queueMutex);\n\t\tm_taskQueue.push({level, message, it->second.callback, it->second.userData});\n\t}\n\tm_condition.notify_one();\n}\n\n/**\n * The worker thread that processes intercepted log messages and\n * calls the callback function to handle them\n */\nvoid Logger::workerThread()\n{\n\twhile (m_runWorker)\n\t{\n\t\tstd::unique_lock<mutex> lock(m_queueMutex);\n\t\tm_condition.wait(lock, [this] { return !m_taskQueue.empty() || !m_runWorker; });\n\n\t\twhile (!m_taskQueue.empty())\n\t\t{\n\t\t\tif(!m_runWorker) //Exit immediately during shutdown\n\t\t\t{\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tLogTask task = m_taskQueue.front();\n\t\t\tm_taskQueue.pop();\n\t\t\tlock.unlock();\n\n\t\t\tif (task.callback)\n\t\t\t{\n\t\t\t\ttask.callback(task.level, task.message, 
task.userData);\n\t\t\t}\n\n\t\t\tlock.lock();\n\t\t}\n\t}\n}\n\n/**\n * Log a message at the level debug\n *\n * @param msg\t\tA printf format string\n * @param ...\t\tThe variable arguments required by the printf format\n */\nvoid Logger::debug(const string& msg, ...)\n{\n\tva_list args;\n\tva_start(args, msg);\n\n\t// Use the unified log function with the \"DEBUG\" level\n\tlog(LOG_DEBUG, \"DEBUG\", LogLevel::DEBUG, msg, args);\n\n\tva_end(args);\n}\n\n/**\n * Log a long string across multiple syslog entries\n *\n * @param s\tThe string to log\n * @param level\tlevel to log the string at\n */\nvoid Logger::printLongString(const string& s, LogLevel level)\n{\n\tconst int charsPerLine = 950;\n\tint len = s.size();\n\tconst char *cstr = s.c_str();\n\tfor (int i=0; i<(len+charsPerLine-1)/charsPerLine; i++)\n\t{\n\t\tswitch (level)\n\t\t{\n\t\t\tcase LogLevel::FATAL:\n\t\t\t\tthis->fatal(\"%.*s%s\",\n\t\t\t\t\t\tcharsPerLine,\n\t\t\t\t\t\tcstr+i*charsPerLine,\n\t\t\t\t\t\tlen - i > charsPerLine ? \"...\" : \"\");\n\t\t\t\tbreak;\n\t\t\tcase LogLevel::ERROR:\n\t\t\t\tthis->error(\"%.*s%s\",\n\t\t\t\t\t\tcharsPerLine,\n\t\t\t\t\t\tcstr+i*charsPerLine,\n\t\t\t\t\t\tlen - i > charsPerLine ? \"...\" : \"\");\n\t\t\t\tbreak;\n\t\t\tcase LogLevel::WARNING:\n\t\t\t\tthis->warn(\"%.*s%s\",\n\t\t\t\t\t\tcharsPerLine,\n\t\t\t\t\t\tcstr+i*charsPerLine,\n\t\t\t\t\t\tlen - i > charsPerLine ? \"...\" : \"\");\n\t\t\t\tbreak;\n\t\t\tcase LogLevel::INFO:\n\t\t\t\tthis->info(\"%.*s%s\",\n\t\t\t\t\t\tcharsPerLine,\n\t\t\t\t\t\tcstr+i*charsPerLine,\n\t\t\t\t\t\tlen - i > charsPerLine ? \"...\" : \"\");\n\t\t\t\tbreak;\n\t\t\tcase LogLevel::DEBUG:\n\t\t\tdefault:\n\t\t\t\tthis->debug(\"%.*s%s\",\n\t\t\t\t\t\tcharsPerLine,\n\t\t\t\t\t\tcstr+i*charsPerLine,\n\t\t\t\t\t\tlen - i > charsPerLine ? 
\"...\" : \"\");\n\t\t\t\tbreak;\n\t\t}\n\t}\n}\n\n/**\n * Log a message at the level info\n *\n * @param msg\t\tA printf format string\n * @param ...\t\tThe variable arguments required by the printf format\n */\nvoid Logger::info(const std::string& msg, ...)\n{\n\tva_list args;\n\tva_start(args, msg);\n\n\t// Use the unified log function with the \"INFO\" level\n\tlog(LOG_INFO, \"INFO\", LogLevel::INFO, msg, args);\n\n\tva_end(args);\n}\n\n/**\n * Log a message at the level warn\n *\n * @param msg\t\tA printf format string\n * @param ...\t\tThe variable arguments required by the printf format\n */\nvoid Logger::warn(const string& msg, ...)\n{\n\tva_list args;\n\tva_start(args, msg);\n\n\t// Use the unified log function with the \"WARNING\" level\n\tlog(LOG_WARNING, \"WARNING\", LogLevel::WARNING, msg, args);\n\n\tva_end(args);\n}\n\n/**\n * Log a message at the level error\n *\n * @param msg\t\tA printf format string\n * @param ...\t\tThe variable arguments required by the printf format\n */\nvoid Logger::error(const string& msg, ...)\n{\n\tva_list args;\n\tva_start(args, msg);\n\n\t// Use the unified log function with the \"ERROR\" level\n\tlog(LOG_ERR, \"ERROR\", LogLevel::ERROR, msg, args);\n\n\tva_end(args);\n}\n\n\n/**\n * Log a message at the level fatal\n *\n * @param msg\t\tA printf format string\n * @param ...\t\tThe variable arguments required by the printf format\n */\nvoid Logger::fatal(const string& msg, ...)\n{\n\tva_list args;\n\tva_start(args, msg);\n\n\t// Use the unified log function with the \"FATAL\" level\n\tlog(LOG_CRIT, \"FATAL\", LogLevel::FATAL, msg, args);\n\n\tva_end(args);\n}\n\n/**\n * Log a message at the specified level\n *\n * @param sysLogLvl\tThe syslog level to use\n * @param lvlName\t\tThe name of the log level\n * @param appLogLvl\tThe application log level\n * @param msg\t\tA printf format string\n * @param ...\t\tThe variable arguments required by the printf format\n */\nvoid Logger::log(int sysLogLvl, const char * lvlName, 
LogLevel appLogLvl, const std::string& msg, va_list args)\n{\n\t// Check if the current log level allows messages\n\tif (m_level < sysLogLvl) \n\t{\n\t\treturn;\n\t}\n\n\tconstexpr size_t MAX_BUFFER_SIZE = 1024; // Maximum allowed log size\n\tchar buffer[MAX_BUFFER_SIZE]; // Stack-allocated buffer for formatting\n\n\tint copied = 0;\n\n\tif(m_SyslogUdpEnabled)\n\t{\n\t\t// Add the identifier to the message in case udp\n\t\tcopied = snprintf(buffer, sizeof(buffer), \"%s %s[%d]: \", m_hostname.c_str(), m_identifier.c_str(), getpid());\n\t}\n\n#ifdef ADD_USEC_TS\n\tcopied += snprintf(buffer + copied, sizeof(buffer) - copied, \"[.%06ld] %s: \", getCurrTimeUsec(), lvlName);\n#else\n\tcopied += snprintf(buffer + copied, sizeof(buffer) - copied, \"%s: \", lvlName);\n#endif\n\n\t// Format the log message using vsnprintf\n\tvsnprintf(buffer + copied, sizeof(buffer) - copied, msg.c_str(), args);\n\n\tif(m_SyslogUdpEnabled)\n\t{\n\t\t// Send the message to the UDP sink\n\t\tsendToUdpSink(buffer);\n\t}\n\telse\n\t{\n\t\tsyslog(sysLogLvl, \"%s\", buffer);\n\t}\n\n\t// Execute interceptors if any are present\n\tif (!m_interceptors.empty())\n\t{\n\t\texecuteInterceptor(appLogLvl, buffer);\n\t}\n}\n"
  },
  {
    "path": "C/common/management_client.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017-2021 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n\n#include <management_client.h>\n#include <rapidjson/document.h>\n#include <service_record.h>\n#include <string_utils.h>\n#include <asset_tracking.h>\n#include <bearer_token.h>\n#include <crypto.hpp>\n#include <rapidjson/error/en.h>\n\nusing namespace std;\nusing namespace rapidjson;\nusing namespace SimpleWeb;\nusing HttpClient = SimpleWeb::Client<SimpleWeb::HTTP>;\n\n/**\n * Management Client constructor. Creates a class used to send management API requests\n * from a micro service to the Fledge core service.\n *\n * The parameters required here are passed to new services and tasks using the --address=\n * and --port= arguments when the service is started.\n *\n * @param hostname\tThe hostname of the Fledge core micro service\n * @param port\t\tThe port of the management service API listener in the Fledge core\n */\nManagementClient::ManagementClient(const string& hostname, const unsigned short port) : m_uuid(0)\n{\nostringstream urlbase;\n\n\tm_logger = Logger::getLogger();\n\tm_urlbase << hostname << \":\" << port;\n}\n\n/**\n * Destructor for management client\n */\nManagementClient::~ManagementClient()\n{\n\tstd::map<std::thread::id, HttpClient *>::iterator item;\n\n\tif (m_uuid)\n\t{\n\t\tdelete m_uuid;\n\t\tm_uuid = 0;\n\t}\n\n\t// Deletes all the HttpClient objects created in the map\n\tfor (item  = m_client_map.begin() ; item  != m_client_map.end() ; ++item)\n\t{\n\t\tdelete item->second;\n\t}\n}\n\n/**\n * Creates a HttpClient object for each thread\n * it stores/retrieves the reference to the HttpClient and the associated thread id in a map\n *\n * @return HttpClient\tThe HTTP client connection to the core\n */\nHttpClient *ManagementClient::getHttpClient() {\n\n\tstd::map<std::thread::id, HttpClient *>::iterator item;\n\tHttpClient *client;\n\n\tstd::thread::id 
thread_id = std::this_thread::get_id();\n\n\tm_mtx_client_map.lock();\n\titem = m_client_map.find(thread_id);\n\n\tif (item  == m_client_map.end() ) {\n\n\t\t// Adding a new HttpClient\n\t\tclient = new HttpClient(m_urlbase.str());\n\t\tm_client_map[thread_id] = client;\n\t}\n\telse\n\t{\n\t\tclient = item->second;\n\t}\n\n\tm_mtx_client_map.unlock();\n\n\treturn (client);\n}\n\n/**\n * Register this service with the Fledge core\n *\n * @param service\tThe service record of this service\n * @return bool\t\tTrue if the service registration was successful\n */\nbool ManagementClient::registerService(const ServiceRecord& service)\n{\nstring payload;\n\n\ttry {\n\t\tservice.asJSON(payload);\n\n\t\tauto res = this->getHttpClient()->request(\"POST\", \"/fledge/service\", payload);\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) &&\n\t\t\t\t\t  isdigit(response[1]) &&\n\t\t\t\t\t  isdigit(response[2]) &&\n\t\t\t\t\t  response[3]==':');\n\t\t\tm_logger->error(\"%s service registration: %s\\n\", \n\t\t\t\t\thttpError?\"HTTP error during\":\"Failed to parse result of\", \n\t\t\t\t\tresponse.c_str());\n\t\t\treturn false;\n\t\t}\n\t\tif (doc.HasMember(\"id\"))\n\t\t{\n\t\t\tm_uuid = new string(doc[\"id\"].GetString());\n\t\t\tm_logger->info(\"Registered service '%s' with UUID %s.\\n\",\n\t\t\t\t\tservice.getName().c_str(),\n\t\t\t\t\tm_uuid->c_str());\n\t\t\tif (doc.HasMember(\"bearer_token\")){\n\t\t\t\tm_bearer_token = string(doc[\"bearer_token\"].GetString());\n#ifdef DEBUG_BEARER_TOKEN\n\t\t\t\tm_logger->debug(\"Bearer token issued for service '%s': %s\",\n\t\t\t\t\t\tservice.getName().c_str(),\n\t\t\t\t\t\tm_bearer_token.c_str());\n#endif\n\t\t\t}\n\n\t\t\treturn true;\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to register service: 
%s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->error(\"Unexpected result from service registration %s\",\n\t\t\t\t\tresponse.c_str());\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Register service failed %s.\", e.what());\n\t\treturn false;\n\t}\n\treturn false;\n}\n\n/**\n * Unregister this service with the Fledge core\n *\n * @return bool\tTrue if the service successfully unregistered\n */\nbool ManagementClient::unregisterService()\n{\n\n\tif (!m_uuid)\n\t{\n\t\treturn false;\t// Not registered\n\t}\n\ttry {\n\t\tstring url = \"/fledge/service/\";\n\t\turl += urlEncode(*m_uuid);\n\t\tauto res = this->getHttpClient()->request(\"DELETE\", url.c_str());\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) && isdigit(response[1]) && isdigit(response[2]) && response[3]==':');\n\t\t\tm_logger->error(\"%s service unregistration: %s\\n\", \n\t\t\t\t\t\t\t\thttpError?\"HTTP error during\":\"Failed to parse result of\", \n\t\t\t\t\t\t\t\tresponse.c_str());\n\t\t\treturn false;\n\t\t}\n\t\tif (doc.HasMember(\"id\"))\n\t\t{\n\t\t\tdelete m_uuid;\n\t\t\tm_uuid = new string(doc[\"id\"].GetString());\n\t\t\tm_logger->info(\"Unregistered service %s.\\n\", m_uuid->c_str());\n\t\t\treturn true;\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to unregister service: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Unregister service failed %s.\", e.what());\n\t\treturn false;\n\t}\n\treturn false;\n}\n\n/**\n * Restart this service by sendign a request to the Fledge core\n *\n * @return bool\tTrue if the service successfully requested restart\n */\nbool ManagementClient::restartService()\n{\n\n\tif (!m_uuid)\n\t{\n\t\treturn false;\t// Not registered\n\t}\n\ttry 
{\n\t\tstring url = \"/fledge/service/\";\n\t\turl += urlEncode(*m_uuid);\n\t\turl += \"/restart\";\n\t\tauto res = this->getHttpClient()->request(\"PUT\", url.c_str());\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) && isdigit(response[1]) && isdigit(response[2]) && response[3]==':');\n\t\t\tm_logger->error(\"%s service restart: %s\\n\", \n\t\t\t\t\t\t\t\thttpError?\"HTTP error during\":\"Failed to parse result of\", \n\t\t\t\t\t\t\t\tresponse.c_str());\n\t\t\treturn false;\n\t\t}\n\t\tif (doc.HasMember(\"id\"))\n\t\t{\n\t\t\tdelete m_uuid;\n\t\t\tm_uuid = new string(doc[\"id\"].GetString());\n\t\t\tm_logger->info(\"Restart service %s.\\n\", m_uuid->c_str());\n\t\t\treturn true;\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to restart service: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Restart service failed %s.\", e.what());\n\t\treturn false;\n\t}\n\treturn false;\n}\n\n/**\n * Get the specified service. 
Supplied with a service\n * record that must either have the name or the type fields populated.\n * The call will populate the other fields of the service record.\n *\n * Note, if multiple service records match then only the first will be\n * returned.\n *\n * @param service\tA partially filled service record that will be completed\n * @return bool\t\tReturn true if the service record was found\n */\nbool ManagementClient::getService(ServiceRecord& service)\n{\nstring payload;\n\n\ttry {\n\t\tstring url = \"/fledge/service\";\n\t\tif (!service.getName().empty())\n\t\t{\n\t\t\turl += \"?name=\" + urlEncode(service.getName());\n\t\t}\n\t\telse if (!service.getType().empty())\n\t\t{\n\t\t\turl += \"?type=\" + urlEncode(service.getType());\n\t\t}\n\t\tauto res = this->getHttpClient()->request(\"GET\", url.c_str());\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) && isdigit(response[1]) && isdigit(response[2]) && response[3]==':');\n\t\t\tm_logger->error(\"%s fetching service record: %s\\n\", \n\t\t\t\t\t\t\t\thttpError?\"HTTP error while\":\"Failed to parse result of\", \n\t\t\t\t\t\t\t\tresponse.c_str());\n\t\t\treturn false;\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to fetch service record: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t\treturn false;\n\t\t}\n\t\telse if (doc.HasMember(\"services\") && doc[\"services\"].IsArray() && doc[\"services\"].Size() > 0)\n\t\t{\n\t\t\tValue& serviceRecord = doc[\"services\"][0];\n\t\t\tservice.setAddress(serviceRecord[\"address\"].GetString());\n\t\t\tservice.setPort(serviceRecord[\"service_port\"].GetInt());\n\t\t\tservice.setProtocol(serviceRecord[\"protocol\"].GetString());\n\t\t\tservice.setManagementPort(serviceRecord[\"management_port\"].GetInt());\n\t\t\treturn true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// No matching service record in the reply\n\t\t\tm_logger->error(\"Unexpected result from fetching service record: %s\",\n\t\t\t\t\tresponse.c_str());\n\t\t\treturn false;\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Get service failed %s.\", e.what());\n\t\treturn false;\n\t}\n\treturn false;\n}\n\n/**\n * 
Return all services registered with the Fledge core\n *\n * @param services\tA vector of service records that will be populated\n * @return bool\t\tTrue if the vector was populated\n */\nbool ManagementClient::getServices(vector<ServiceRecord *>& services)\n{\nstring payload;\n\n\ttry {\n\t\tstring url = \"/fledge/service\";\n\t\tauto res = this->getHttpClient()->request(\"GET\", url.c_str());\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) && isdigit(response[1]) && isdigit(response[2]) && response[3]==':');\n\t\t\tm_logger->error(\"%s fetching service record: %s\\n\", \n\t\t\t\t\t\t\t\thttpError?\"HTTP error while\":\"Failed to parse result of\", \n\t\t\t\t\t\t\t\tresponse.c_str());\n\t\t\treturn false;\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to fetch services: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t\treturn false;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tValue& records = doc[\"services\"];\n\t\t\tfor (auto& serviceRecord : records.GetArray())\n\t\t\t{\n\t\t\t\tServiceRecord *service = new ServiceRecord(serviceRecord[\"name\"].GetString(),\n\t\t\t\t\t\tserviceRecord[\"type\"].GetString());\n\t\t\t\tservice->setAddress(serviceRecord[\"address\"].GetString());\n\t\t\t\tservice->setPort(serviceRecord[\"service_port\"].GetInt());\n\t\t\t\tservice->setProtocol(serviceRecord[\"protocol\"].GetString());\n\t\t\t\tservice->setManagementPort(serviceRecord[\"management_port\"].GetInt());\n\t\t\t\tservices.push_back(service);\n\t\t\t}\n\t\t\treturn true;\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Get services failed %s.\", e.what());\n\t\treturn false;\n\t}\n\treturn false;\n}\n\n/**\n * Return all services registered with the Fledge core of a specified type\n *\n * @param services\tA vector of service records that will be populated\n * @param type\t\tThe 
type of services to return\n * @return bool\t\tTrue if the vector was populated\n */\nbool ManagementClient::getServices(vector<ServiceRecord *>& services, const string& type)\n{\nstring payload;\n\n\ttry {\n\t\tstring url = \"/fledge/service?type=\";\n\t\turl += urlEncode(type);\n\t\tauto res = this->getHttpClient()->request(\"GET\", url.c_str());\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) && isdigit(response[1]) && isdigit(response[2]) && response[3]==':');\n\t\t\tm_logger->error(\"%s fetching service record: %s\\n\", \n\t\t\t\t\t\t\t\thttpError?\"HTTP error while\":\"Failed to parse result of\", \n\t\t\t\t\t\t\t\tresponse.c_str());\n\t\t\treturn false;\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to fetch services: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t\treturn false;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tValue& records = doc[\"services\"];\n\t\t\tfor (auto& serviceRecord : records.GetArray())\n\t\t\t{\n\t\t\t\tServiceRecord *service = new ServiceRecord(serviceRecord[\"name\"].GetString(),\n\t\t\t\t\t\tserviceRecord[\"type\"].GetString());\n\t\t\t\tservice->setAddress(serviceRecord[\"address\"].GetString());\n\t\t\t\tservice->setPort(serviceRecord[\"service_port\"].GetInt());\n\t\t\t\tservice->setProtocol(serviceRecord[\"protocol\"].GetString());\n\t\t\t\tservice->setManagementPort(serviceRecord[\"management_port\"].GetInt());\n\t\t\t\tservices.push_back(service);\n\t\t\t}\n\t\t\treturn true;\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Get services failed %s.\", e.what());\n\t\treturn false;\n\t}\n\treturn false;\n}\n\n/**\n * Register interest in a configuration category. 
The service will be called \n * with the updated configuration category whenever an item in the category\n * is added, removed or changed.\n *\n * @param category\tThe name of the category to register\n * @return bool\t\tTrue if the registration was successful\n */\nbool ManagementClient::registerCategoryChild(const string& category)\n{\nostringstream convert;\n\n\tif (m_uuid == 0)\n\t{\n\t\t// Not registered with core\n\t\tm_logger->error(\"Service is not registered with the core - not registering configuration interest\");\n\t\treturn true;\n\t}\n\ttry {\n\t\tconvert << \"{ \\\"category\\\" : \\\"\" << JSONescape(category) << \"\\\", \";\n\t\tconvert << \"\\\"child\\\" : \\\"\" << \"True\" << \"\\\", \";\n\t\tconvert << \"\\\"service\\\" : \\\"\" << *m_uuid << \"\\\" }\";\n\n\t\tauto res = this->getHttpClient()->request(\"POST\", \"/fledge/interest\", convert.str());\n\t\tDocument doc;\n\t\tstring content = res->content.string();\n\t\tdoc.Parse(content.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(content[0]) && isdigit(content[1]) && isdigit(content[2]) && content[3]==':');\n\t\t\tm_logger->error(\"%s child category registration: %s\\n\",\n\t\t\t\t\t\t\t\thttpError?\"HTTP error during\":\"Failed to parse result of\",\n\t\t\t\t\t\t\t\tcontent.c_str());\n\t\t\treturn false;\n\t\t}\n\t\tif (doc.HasMember(\"id\"))\n\t\t{\n\t\t\tconst char *reg_id = doc[\"id\"].GetString();\n\t\t\tm_categories[category] = string(reg_id);\n\t\t\tm_logger->info(\"Registered child configuration category %s, registration id %s.\",\n\t\t\t\t\tcategory.c_str(), reg_id);\n\t\t\treturn true;\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to register child configuration category: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->error(\"Failed to register child configuration category: %s.\",\n\t\t\t\t\tcontent.c_str());\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Register child configuration category failed %s.\", e.what());\n\t\treturn false;\n\t}\n\treturn false;\n}\n\n\n/**\n * Register interest in a configuration category\n *\n * @param category\tThe name of the configuration category to register\n * @return bool\t\tTrue if the configuration category has been registered\n */\nbool ManagementClient::registerCategory(const string& category)\n{\nostringstream convert;\n\n\tif (m_uuid == 0)\n\t{\n\t\t// Not registered with core\n\t\tm_logger->error(\"Service is not registered with the core - not registering configuration interest\");\n\t\treturn true;\n\t}\n\ttry {\n\t\tconvert << \"{ \\\"category\\\" : \\\"\" << JSONescape(category) << \"\\\", \";\n\t\tconvert << \"\\\"service\\\" : \\\"\" << *m_uuid << \"\\\" }\";\n\t\tauto res = this->getHttpClient()->request(\"POST\", \"/fledge/interest\", convert.str());\n\t\tDocument doc;\n\t\tstring content = res->content.string();\n\t\tdoc.Parse(content.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(content[0]) && isdigit(content[1]) && isdigit(content[2]) && content[3]==':');\n\t\t\tm_logger->error(\"%s category registration: %s\\n\", \n\t\t\t\t\t\t\t\thttpError?\"HTTP error during\":\"Failed to parse result of\", \n\t\t\t\t\t\t\t\tcontent.c_str());\n\t\t\treturn false;\n\t\t}\n\t\tif (doc.HasMember(\"id\"))\n\t\t{\n\t\t\tconst char *reg_id = doc[\"id\"].GetString();\n\t\t\tm_categories[category] = string(reg_id);\n\t\t\tm_logger->info(\"Registered configuration category %s, registration id %s.\",\n\t\t\t\t\tcategory.c_str(), reg_id);\n\t\t\treturn true;\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to register configuration category: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->error(\"Failed to register configuration category: %s.\",\n\t\t\t\t\tcontent.c_str());\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) 
{\n\t\tm_logger->error(\"Register configuration category failed %s.\", e.what());\n\t\treturn false;\n\t}\n\treturn false;\n}\n\n/**\n * Unregister interest in a configuration category. The service will no\n * longer be called when the configuration category is changed.\n *\n * @param category\tThe name of the configuration category to unregister\n * @return bool\t\tTrue if the configuration category is unregistered\n */\nbool ManagementClient::unregisterCategory(const string& category)\n{\n\ttry {\n\t\tstring url = \"/fledge/interest/\";\n\t\turl += urlEncode(m_categories[category]);\n\t\tthis->getHttpClient()->request(\"DELETE\", url.c_str());\n\t\treturn true;\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Unregister configuration category failed %s.\", e.what());\n\t\treturn false;\n\t}\n}\n\n/**\n * Get the set of all configuration categories from the core micro service.\n *\n * @return ConfigCategories\tThe set of all configuration categories\n */\nConfigCategories ManagementClient::getCategories()\n{\n\ttry {\n\t\tstring url = \"/fledge/service/category\";\n\t\tauto res = this->getHttpClient()->request(\"GET\", url.c_str());\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) && isdigit(response[1]) && isdigit(response[2]) && response[3]==':');\n\t\t\tm_logger->error(\"%s fetching configuration categories: %s\\n\", \n\t\t\t\t\t\t\t\thttpError?\"HTTP error while\":\"Failed to parse result of\", \n\t\t\t\t\t\t\t\tresponse.c_str());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to fetch configuration categories: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t\tthrow new 
exception();\n\t\t}\n\t\telse\n\t\t{\n\t\t\treturn ConfigCategories(response);\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Get config categories failed %s.\", e.what());\n\t\tthrow;\n\t}\n}\n\n/**\n * Return the content of the named category by calling the\n * management API of the Fledge core.\n *\n * @param  categoryName\t\tThe name of the category to return\n * @return ConfigCategory\tThe configuration category\n * @throw  exception\t\tIf the category does not exist or\n *\t\t\t\tthe result cannot be parsed\n */\nConfigCategory ManagementClient::getCategory(const string& categoryName)\n{\n\ttry {\n\t\tstring url = \"/fledge/service/category/\" + urlEncode(categoryName);\n\t\tauto res = this->getHttpClient()->request(\"GET\", url.c_str());\n\t\tstring response = res->content.string();\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\treturn ConfigCategory(categoryName, response);\n\t\t}\n\t\tDocument doc;\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) && isdigit(response[1]) && isdigit(response[2]) && response[3]==':');\n\t\t\tm_logger->error(\"%s fetching configuration category for %s: %s\\n\", \n\t\t\t\t\t\t\t\thttpError?\"HTTP error while\":\"Failed to parse result of\", \n\t\t\t\t\t\t\t\tcategoryName.c_str(), response.c_str());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse if (doc.HasMember(\"message\") && doc[\"message\"].IsString())\n\t\t{\n\t\t\tm_logger->error(\"Failed to fetch configuration category: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->error(\"Failed to fetch configuration category: %s.\",\n\t\t\t\tresponse.c_str());\n\t\t\tthrow new exception();\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Get config category failed %s.\", e.what());\n\t\tthrow;\n\t}\n}\n\n/**\n * Set a category configuration item value\n *\n * @param 
categoryName  The given category name\n * @param itemName      The given item name\n * @param itemValue     The item value to set\n * @return              JSON string of the updated\n *                      category item\n * @throw               std::exception\n */\nstring ManagementClient::setCategoryItemValue(const string& categoryName,\n\t\t\t\t\t      const string& itemName,\n\t\t\t\t\t      const string& itemValue)\n{\n\ttry {\n\t\tstring url = \"/fledge/service/category/\" + urlEncode(categoryName) + \"/\" + urlEncode(itemName);\n\t\tstring payload = \"{ \\\"value\\\" : \\\"\" + JSONescape(itemValue) + \"\\\" }\";\n\t\tauto res = this->getHttpClient()->request(\"PUT\", url.c_str(), payload);\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\treturn response;\n\t\t}\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) && isdigit(response[1]) && isdigit(response[2]) && response[3]==':');\n\t\t\tm_logger->error(\"%s setting configuration category item value: %s\\n\", \n\t\t\t\t\t\t\t\thttpError?\"HTTP error while\":\"Failed to parse result of\", \n\t\t\t\t\t\t\t\tresponse.c_str());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to set configuration category item value: %s.\",\n\t\t\t\t\tdoc[\"message\"].GetString());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->error(\"Failed to set configuration category item value: %s.\",\n\t\t\t\t\tresponse.c_str());\n\t\t\tthrow new exception();\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Set config category item value failed %s.\", e.what());\n\t\tthrow;\n\t}\n}\n\n/**\n * Return child categories of a given category\n *\n * @param categoryName\t\tThe given category name\n * @return\t\t\tJSON string with current child categories\n * @throw\t\t\tstd::exception\n 
*/\nConfigCategories ManagementClient::getChildCategories(const string& categoryName)\n{\n\ttry\n\t{\n\t\tstring url = \"/fledge/service/category/\" + urlEncode(categoryName) + \"/children\";\n\t\tauto res = this->getHttpClient()->request(\"GET\", url.c_str());\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) &&\n\t\t\t\t\t  isdigit(response[1]) &&\n\t\t\t\t\t  isdigit(response[2]) &&\n\t\t\t\t\t  response[3]==':');\n\t\t\tm_logger->error(\"%s fetching child categories of %s: %s\\n\",\n\t\t\t\t\thttpError?\"HTTP error while\":\"Failed to parse result of\",\n\t\t\t\t\tcategoryName.c_str(),\n\t\t\t\t\tresponse.c_str());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to fetch child categories of %s: %s.\",\n\t\t\t\t\tcategoryName.c_str(),\n\t\t\t\t\tdoc[\"message\"].GetString());\n\n\t\t\tthrow new exception();\n\t\t}\n\t\telse\n\t\t{\n\t\t\treturn ConfigCategories(response);\n\t\t}\n\t}\n\tcatch (const SimpleWeb::system_error &e)\n\t{\n\t\tm_logger->error(\"Get child categories of %s failed %s.\",\n\t\t\t\tcategoryName.c_str(),\n\t\t\t\te.what());\n\t\tthrow;\n\t}\n}\n\n/**\n * Add child categories to a (parent) category\n *\n * @param parentCategory\tThe given category name\n * @param children\t\tCategories to add under parent\n * @return\t\t\tJSON string with current child categories\n * @throw\t\t\tstd::exception\n */\nstring ManagementClient::addChildCategories(const string& parentCategory,\n\t\t\t\t\t    const vector<string>& children)\n{\n\ttry {\n\t\tstring url = \"/fledge/service/category/\" + urlEncode(parentCategory) + \"/children\";\n\t\tstring payload = \"{ \\\"children\\\" : [\";\n\n\t\tfor (auto it = children.begin(); it != children.end(); ++it)\n\t\t{\n\t\t\tpayload += \"\\\"\" + JSONescape((*it)) + \"\\\"\";\n\t\t\tif ((it + 1) != 
children.end())\n\t\t\t{\n\t\t\t\tpayload += \", \";\n\t\t\t}\n\t\t}\n\t\tpayload += \"] }\";\n\t\tauto res = this->getHttpClient()->request(\"POST\", url.c_str(), payload);\n\t\tstring response = res->content.string();\n\t\tDocument doc;\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) && isdigit(response[1]) && isdigit(response[2]) && response[3]==':');\n\t\t\tm_logger->error(\"%s adding child categories: %s\\n\", \n\t\t\t\t\t\t\t\thttpError?\"HTTP error while\":\"Failed to parse result of\", \n\t\t\t\t\t\t\t\tresponse.c_str());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to add child categories: %s.\",\n\t\t\t\t\tdoc[\"message\"].GetString());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse if (doc.HasMember(\"children\"))\n\t\t{\n\t\t\treturn response;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->error(\"Unexpected result from adding child categories: %s\",\n\t\t\t\t\tresponse.c_str());\n\t\t\tthrow new exception();\n\t\t}\n\t}\n\tcatch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Add child categories failed %s.\", e.what());\n\t\tthrow;\n\t}\n}\n\n/**\n * Get the asset tracking tuples\n * for a service or all services\n *\n * @param    serviceName\tThe serviceName to restrict data fetch\n *\t\t\t\tIf empty records for all services are fetched\n * @return\t\tA vector of pointers to AssetTrackingTuple objects allocated on heap\n */\nstd::vector<AssetTrackingTuple*>& ManagementClient::getAssetTrackingTuples(const std::string serviceName)\n{\n\tstd::vector<AssetTrackingTuple*> *vec = new std::vector<AssetTrackingTuple*>();\n\t\n\ttry {\n\t\tstring url = \"/fledge/track\";\n\t\tif (serviceName != \"\")\n\t\t{\n\t\t\turl += \"?service=\"+urlEncode(serviceName);\n\t\t}\n\t\tauto res = this->getHttpClient()->request(\"GET\", url.c_str());\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) && isdigit(response[1]) && isdigit(response[2]) && 
response[3]==':');\n\t\t\tm_logger->error(\"%s fetch of asset tracking tuples: %s\\n\", \n\t\t\t\t\t\t\t\thttpError?\"HTTP error during\":\"Failed to parse result of\", \n\t\t\t\t\t\t\t\tresponse.c_str());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to fetch asset tracking tuples: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tconst rapidjson::Value& trackArray = doc[\"track\"];\n\t\t\tif (trackArray.IsArray())\n\t\t\t{\n\t\t\t\t// Process every row and create the AssetTrackingTuple object\n\t\t\t\tfor (auto& rec : trackArray.GetArray())\n\t\t\t\t{\n\t\t\t\t\tif (!rec.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tthrow runtime_error(\"Expected asset tracker tuple to be an object\");\n\t\t\t\t\t}\n\n\t\t\t\t\t// Do not load \"store\" events as they will be loaded by getStorageAssetTrackingTuples()\n\t\t\t\t\t// Compare the string contents; comparing the raw char pointers is always false\n\t\t\t\t\tif (string(rec[\"event\"].GetString()) == \"store\")\n\t\t\t\t\t{\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\n\t\t\t\t\t// Note: deprecatedTimestamp NULL value is returned as \"\"\n\t\t\t\t\t// otherwise it's a string DATE\n\t\t\t\t\tbool deprecated = rec.HasMember(\"deprecatedTimestamp\") &&\n\t\t\t\t\t    strlen(rec[\"deprecatedTimestamp\"].GetString());\n\n\t\t\t\t\tAssetTrackingTuple *tuple = new AssetTrackingTuple(rec[\"service\"].GetString(),\n\t\t\t\t\t\t\t\t\trec[\"plugin\"].GetString(),\n\t\t\t\t\t\t\t\t\trec[\"asset\"].GetString(),\n\t\t\t\t\t\t\t\t\trec[\"event\"].GetString(),\n\t\t\t\t\t\t\t\t\tdeprecated);\n\n\t\t\t\t\tm_logger->debug(\"Adding AssetTracker tuple for service %s: %s:%s:%s, \" \\\n\t\t\t\t\t\t\t\"deprecated state is %d\",\n\t\t\t\t\t\t\trec[\"service\"].GetString(),\n\t\t\t\t\t\t\trec[\"plugin\"].GetString(),\n\t\t\t\t\t\t\trec[\"asset\"].GetString(),\n\t\t\t\t\t\t\trec[\"event\"].GetString(),\n\t\t\t\t\t\t\tdeprecated);\n\t\t\t\t\tvec->push_back(tuple);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tthrow 
runtime_error(\"Expected array of rows in asset track tuples array\");\n\t\t\t}\n\n\t\t\treturn (*vec);\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Fetch/parse of asset tracking tuples for service %s failed: %s.\", serviceName.c_str(), e.what());\n\t\t//throw;\n\t}\n\tcatch (...) {\n\t\tm_logger->error(\"Unexpected exception when retrieving asset tuples for service %s:, serviceName.c_str()\");\n\t}\n\treturn *vec;\n}\n\n/**\n * Add a new asset tracking tuple\n *\n * @param service\tService name\n * @param plugin\tPlugin name\n * @param asset\t\tAsset name\n * @param event\t\tEvent type\n * @return\t\twhether operation was successful\n */\nbool ManagementClient::addAssetTrackingTuple(const std::string& service, \n\t\t\t\t\tconst std::string& plugin,\n\t\t\t\t\tconst std::string& asset,\n\t\t\t\t\tconst std::string& event)\n{\n\tostringstream convert;\n\n\ttry {\n\t\tconvert << \"{ \\\"service\\\" : \\\"\" << JSONescape(service) << \"\\\", \";\n\t\tconvert << \" \\\"plugin\\\" : \\\"\" << plugin << \"\\\", \";\n\t\tconvert << \" \\\"asset\\\" : \\\"\" << asset << \"\\\", \";\n\t\tconvert << \" \\\"event\\\" : \\\"\" << event << \"\\\" }\";\n\n\t\tauto res = this->getHttpClient()->request(\"POST\", \"/fledge/track\", convert.str());\n\t\tDocument doc;\n\t\tstring content = res->content.string();\n\t\tdoc.Parse(content.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(content[0]) && isdigit(content[1]) && isdigit(content[2]) && content[3]==':');\n\t\t\tm_logger->error(\"%s asset tracking tuple addition: %s\\n\", \n\t\t\t\t\t\t\t\thttpError?\"HTTP error during\":\"Failed to parse result of\", \n\t\t\t\t\t\t\t\tcontent.c_str());\n\t\t\treturn false;\n\t\t}\n\t\tif (doc.HasMember(\"fledge\"))\n\t\t{\n\t\t\tconst char *reg_id = doc[\"fledge\"].GetString();\n\t\t\treturn true;\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to add asset tracking tuple: 
%s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->error(\"Failed to add asset tracking tuple: %s.\",\n\t\t\t\t\tcontent.c_str());\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\t\t\tm_logger->error(\"Failed to add asset tracking tuple: %s.\", e.what());\n\t\t\t\treturn false;\n\t\t}\n\t\treturn false;\n}\n\n/**\n * Add an Audit Entry. Called when an auditable event occurs\n * to regsiter that event.\n *\n * Fledge API call example :\n *\n *  curl -X POST -d '{\"source\":\"LMTR\", \"severity\":\"WARNING\",\n *\t\t      \"details\":{\"message\":\"Engine oil pressure low\"}}'\n *  http://localhost:8081/fledge/audit\n *\n * @param   code\tThe log code for the entry \n * @param   severity\tThe severity level\n * @param   message\tThe JSON message to log\n */\nbool ManagementClient::addAuditEntry(const std::string& code,\n\t\t\t\t     const std::string& severity,\n\t\t\t\t     const std::string& message)\n{\n\tostringstream convert;\n\n\ttry {\n\t\tconvert << \"{ \\\"source\\\" : \\\"\" << code << \"\\\", \";\n\t\tconvert << \" \\\"severity\\\" : \\\"\" << severity << \"\\\", \";\n\t\tconvert << \" \\\"details\\\" : \" << message << \" }\";\n\n\t\tauto res = this->getHttpClient()->request(\"POST\",\n\t\t\t\t\t\t\t  \"/fledge/audit\",\n\t\t\t\t\t\t\t  convert.str());\n\t\tDocument doc;\n\t\tstring content = res->content.string();\n\t\tdoc.Parse(content.c_str());\n\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(content[0]) &&\n\t\t\t\t\t  isdigit(content[1]) &&\n\t\t\t\t\t  isdigit(content[2]) &&\n\t\t\t\t\t  content[3]==':');\n\t\t\tm_logger->error(\"%s audit entry: %s\\n\", \n\t\t\t\t\t(httpError ?\n\t\t\t\t\t \"HTTP error during\" :\n\t\t\t\t\t \"Failed to parse result of\"), \n\t\t\t\t\tcontent.c_str());\n\t\t\treturn false;\n\t\t}\n\n\t\tbool ret = false;\n\n\t\t// Check server reply\n\t\tif (doc.HasMember(\"source\"))\n\t\t{\n\t\t\t// OK\n\t\t\tret = true;\n\t\t}\n\t\telse if 
(doc.HasMember(\"message\"))\n\t\t{\n\t\t\t// Erropr\n\t\t\tm_logger->error(\"Failed to add audit entry: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Erropr\n\t\t\tm_logger->error(\"Failed to add audit entry: %s.\",\n\t\t\t\t\tcontent.c_str());\n\t\t}\n\n\t\treturn ret;\n\t}\n\tcatch (const SimpleWeb::system_error &e)\n\t{\n\t\tm_logger->error(\"Failed to add audit entry: %s.\", e.what());\n\t\treturn false;\n\t}\n\treturn false;\n}\n\n/**\n * Checks and validate the JWT bearer token object as reference\n *\n * @param request\tThe bearer token object\n * @return\t\tTrue on success, false otherwise\n */\nbool ManagementClient::verifyAccessBearerToken(BearerToken& token)\n{\n\tif (!token.exists())\n\t{\n\t\tm_logger->warn(\"Access bearer token has empty value\");\n\t\treturn false;\n\t}\n\treturn verifyBearerToken(token);\n}\n\n/**\n * Checks and validate the JWT bearer token coming from HTTP request\n *\n * @param request\tHTTP request object\n * @return\t\tTrue on success, false otherwise\n */\nbool ManagementClient::verifyAccessBearerToken(shared_ptr<HttpServer::Request> request)\n{\n\tBearerToken bT(request);\n\treturn this->verifyBearerToken(bT);\n}\n\n/**\n * Refresh the JWT bearer token string\n *\n * @param currentToken\tCurrent bearer token\n * @param newToken\tNew issued bearer token being set\n * @return              True on success, false otherwise\n */\nbool ManagementClient::refreshBearerToken(const string& currentToken,\n\t\t\t\t\tstring& newToken)\n{\n\tif (currentToken.length() == 0)\n\t{\n\t\tnewToken.clear();\n\t\treturn false;\n\t}\n\n\tbool ret = false;\n\n\t// Refresh it by calling Fledge management endpoint\n\tstring url = \"/fledge/service/refresh_token\";\n\tstring payload;\n\tSimpleWeb::CaseInsensitiveMultimap header;\n\theader.emplace(\"Authorization\", \"Bearer \" + currentToken);\n\tauto res = this->getHttpClient()->request(\"POST\", url.c_str(), payload, header);\n\tDocument doc;\n\tstring response = 
res->content.string();\n\tdoc.Parse(response.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tbool httpError = (isdigit(response[0]) &&\n\t\t\t\tisdigit(response[1]) &&\n\t\t\t\tisdigit(response[2]) &&\n\t\t\t\tresponse[3]==':');\n\t\tm_logger->error(\"%s error in service token refresh: %s\\n\",\n\t\t\t\thttpError?\"HTTP error during\":\"Failed to parse result of\",\n\t\t\t\tresponse.c_str());\n\t\tret = false;\n\t}\n\telse\n\t{\n\t\tif (doc.HasMember(\"error\"))\n\t\t{\n\t\t\tif (doc[\"error\"].IsString())\n\t\t\t{\n\t\t\t\tstring error = doc[\"error\"].GetString();\n\t\t\t\tm_logger->error(\"Failed to refresh bearer token, error %s\",\n\t\t\t\t\t\terror.c_str());\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_logger->error(\"Failed to refresh bearer token, result: %s\",\n\t\t\t\t\t\tresponse.c_str());\n\t\t\t}\n\t\t\tret = false;\n\t\t}\n\t\telse if (doc.HasMember(\"bearer_token\"))\n\t\t{\n\t\t\t// Set new token\n\t\t\tnewToken = doc[\"bearer_token\"].GetString();\n\t\t\tret = true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->error(\"Bearer token not found in token refresh result: %s\",\n\t\t\t\t\tresponse.c_str());\n\t\t\tret = false;\n\t\t}\n\t}\n\n\tm_mtx_rTokens.lock();\n\tif (ret)\n\t{\n\t\t// Remove old token from received ones\n\t\tm_received_tokens.erase(currentToken);\n\t}\n\telse\n\t{\n\t\tnewToken.clear();\n\t}\n\n\tm_mtx_rTokens.unlock();\n\n\treturn ret;\n}\n\n/**\n * Check and validate the JWT bearer token string\n *\n * Input token internal data will be set\n * with new values or cached ones\n *\n * @param bearerToken\tThe bearer token object\n * @return\t\tTrue on success, false otherwise\n */\nbool ManagementClient::verifyBearerToken(BearerToken& bearerToken)\n{\n\tif (!bearerToken.exists())\n\t{\n\t\tm_logger->warn(\"Bearer token has empty value\");\n\t\treturn false;\n\t}\n\n\tbool ret = true;\n\tconst string& token = bearerToken.token();\n\n\t// Check token already exists in cache:\n\tmap<string, BearerToken>::iterator item;\n\t// Acquire 
lock\n\tm_mtx_rTokens.lock();\n\n\titem = m_received_tokens.find(token);\n\tif (item  == m_received_tokens.end())\n\t{\n\t\t// Token is not in the cache\n\t\tbool verified = false;\n\t\t// Token does not exist:\n\t\t// Verify it by calling Fledge management endpoint\n\t\tstring url = \"/fledge/service/verify_token\";\n                string payload;\n                SimpleWeb::CaseInsensitiveMultimap header;\n                header.emplace(\"Authorization\", \"Bearer \" + token);\n                auto res = this->getHttpClient()->request(\"POST\", url.c_str(), payload, header);\n\t\tstring response = res->content.string();\n\n\t\t// Parse JSON message and store claims in input token object\n\t\tverified = bearerToken.verify(response);\n\t\tif (verified)\n\t\t{\n\t\t\t// Token verified, store the token object\n\t\t\tm_received_tokens.emplace(token, bearerToken);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tret = false;\n\t\t\tm_logger->error(\"Micro service bearer token '%s' not verified.\",\n\t\t\t\t\ttoken.c_str());\n\t\t}\n#ifdef DEBUG_BEARER_TOKEN\n\t\tm_logger->debug(\"New token verified by core API endpoint %d, claims %s:%s:%s:%ld\",\n\t\t\t\tret,\n\t\t\t\tbearerToken.getAudience().c_str(),\n\t\t\t\tbearerToken.getSubject().c_str(),\n\t\t\t\tbearerToken.getIssuer().c_str(),\n\t\t\t\tbearerToken.getExpiration());\n#endif\n\t}\n\telse\n\t{\n\t\t// Token is in the cache\n\t\tunsigned long expiration = (*item).second.getExpiration();\n\t\tunsigned long now = time(NULL);\n\n\t\t// Check expiration\n\t\tif (now >= expiration)\n\t\t{\n\t\t\tret = false;\n\t\t\t// Remove token from received ones\n\t\t\tm_received_tokens.erase(token);\n\n\t\t\tm_logger->error(\"Micro service bearer token expired.\");\n\t\t}\n\n\t\t// Set input token object as per cached data\n\t\tbearerToken = (*item).second;\n\n#ifdef DEBUG_BEARER_TOKEN\n\t\tm_logger->debug(\"Existing token already verified %d, claims 
%s:%s:%s:%ld\",\n\t\t\t\tret,\n\t\t\t\t(*item).second.getAudience().c_str(),\n\t\t\t\t(*item).second.getSubject().c_str(),\n\t\t\t\t(*item).second.getIssuer().c_str(),\n\t\t\t\t(*item).second.getExpiration());\n#endif\n\t}\n\n\t// Release lock\n\tm_mtx_rTokens.unlock();\n\n\treturn ret;\n}\n\n/**\n * Request that the core proxy a URL to the service. URLs in the public Fledge API will be forwarded\n * to the service API of the named service.\n *\n * @param serviceName\tThe name of the service to send the request to\n * @param operation\tThe type of operation; post, put, get or delete\n * @param publicEndpoint\tThe URL in the Fledge public API to be proxied\n * @param privateEndpoint\tThe URL in the service API of the named service to which the requests will be proxied.\n * @return\tbool\t\tTrue if the proxy request was accepted\n */\nbool ManagementClient::addProxy(const std::string& serviceName,\n\t\tconst std::string& operation,\n\t\tconst std::string& publicEndpoint,\n\t\tconst std::string& privateEndpoint)\n{\n\tostringstream convert;\n\n\ttry {\n\t\tconvert << \"{ \\\"\" << operation << \"\\\" : { \";\n\t\tconvert << \"\\\"\" << publicEndpoint << \"\\\" : \";\n\t\tconvert << \"\\\"\" << privateEndpoint << \"\\\" }, \";\n\t\tconvert << \"\\\"service_name\\\" : \\\"\" << serviceName << \"\\\" }\";\n\n\t\tauto res = this->getHttpClient()->request(\"POST\",\n\t\t\t\t\t\t\t  \"/fledge/proxy\",\n\t\t\t\t\t\t\t  convert.str());\n\t\tDocument doc;\n\t\tstring content = res->content.string();\n\t\tdoc.Parse(content.c_str());\n\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(content[0]) &&\n\t\t\t\t\t  isdigit(content[1]) &&\n\t\t\t\t\t  isdigit(content[2]) &&\n\t\t\t\t\t  content[3]==':');\n\t\t\tm_logger->error(\"%s proxy addition: %s\\n\", \n\t\t\t\t\t(httpError ?\n\t\t\t\t\t \"HTTP error during\" :\n\t\t\t\t\t \"Failed to parse result of\"), \n\t\t\t\t\tcontent.c_str());\n\t\t\treturn false;\n\t\t}\n\n\n\t\tbool result = false;\n                if 
(res->status_code[0] == '2') // A 2xx response\n\t\t{\n\t\t\tresult = true;\n\t\t}\n\n\t\tif (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Add proxy entry: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t\treturn result;\n\t\t}\n\t\treturn result;\n\t}\n\tcatch (const SimpleWeb::system_error &e)\n\t{\n\t\tm_logger->error(\"Failed to add proxy entry: %s.\", e.what());\n\t\treturn false;\n\t}\n\treturn false;\n}\n\n/**\n * Request that the core proxy a URL to the service. URLs in the public Fledge API will be forwarded\n * to the service API of the named service.\n *\n * @param serviceName\tThe name of the service to send the request to\n * @param endpoints\tThe set of endpoints to be mapped\n * @return\tbool\t\tTrue if the proxy request was accepted\n */\nbool ManagementClient::addProxy(const std::string& serviceName,\n\t\t\tconst map<std::string, vector<pair<string, string> > >& endpoints)\n{\n\tostringstream convert;\n\n\ttry {\n\n\t\tconvert << \"{ \";\n\t\tfor (auto const& op : endpoints)\n\t\t{\n\t\t\tconvert << \"\\\"\" << op.first << \"\\\" : { \";\n\t\t\tbool first = true;\n\t\t\tfor (auto const& ep : op.second)\n\t\t\t{\n\t\t\t\tif (!first)\n\t\t\t\t\tconvert << \", \";\n\t\t\t\tfirst = false;\n\t\t\t\tconvert << \"\\\"\" << ep.first << \"\\\" :\";\n\t\t\t\tconvert << \"\\\"\" << ep.second << \"\\\"\";\n\t\t\t}\n\t\t\tconvert << \"}, \";\n\t\t}\n\t\tconvert << \"\\\"service_name\\\" : \\\"\" << serviceName << \"\\\" }\";\n\n\t\tauto res = this->getHttpClient()->request(\"POST\",\n\t\t\t\t\t\t\t  \"/fledge/proxy\",\n\t\t\t\t\t\t\t  convert.str());\n\t\tDocument doc;\n\t\tstring content = res->content.string();\n\t\tdoc.Parse(content.c_str());\n\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(content[0]) &&\n\t\t\t\t\t  isdigit(content[1]) &&\n\t\t\t\t\t  isdigit(content[2]) &&\n\t\t\t\t\t  content[3]==':');\n\t\t\tm_logger->error(\"%s proxy addition: %s\\n\", \n\t\t\t\t\t(httpError ?\n\t\t\t\t\t \"HTTP error during\" 
:\n\t\t\t\t\t \"Failed to parse result of\"), \n\t\t\t\t\tcontent.c_str());\n\t\t\treturn false;\n\t\t}\n\n\t\tbool result = false;\n                if (res->status_code[0] == '2') // A 2xx response\n\t\t{\n\t\t\tresult = true;\n\t\t}\n\n\t\tif (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Add proxy entries: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t\treturn result;\n\t\t}\n\t\treturn result;\n\t}\n\tcatch (const SimpleWeb::system_error &e)\n\t{\n\t\tm_logger->error(\"Failed to add proxy entry: %s.\", e.what());\n\t\treturn false;\n\t}\n\treturn false;\n}\n\n/**\n * Delete the current proxy endpoint for the named service. Normally called prior\n * to the service shutting down.\n *\n * @param serviceName\tThe name of the service to stop proxying for\n * @return bool\tTrue if the request succeeded\n */\nbool ManagementClient::deleteProxy(const std::string& serviceName)\n{\n\tbool result = false;\n\ttry {\n\t\tstring url = \"/fledge/proxy/\";\n\t\turl += urlEncode(serviceName);\n\t\tauto res = this->getHttpClient()->request(\"DELETE\", url.c_str());\n                if (res->status_code[0] == '2') // A 2xx response\n\t\t{\n\t\t\tresult = true;\n\t\t}\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) && isdigit(response[1]) && isdigit(response[2]) && response[3]==':');\n\t\t\tm_logger->error(\"%s service proxy deletion: %s\\n\", \n\t\t\t\t\thttpError?\"HTTP error during\":\"Failed to parse result of\", \n\t\t\t\t\t\tresponse.c_str());\n\t\t\treturn result;\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Stop proxy of endpoints for service: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t\treturn result;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->info(\"API proxying has been stopped\");\n\t\t\treturn result;\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) 
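Throughout this file the code distinguishes an HTTP error reply from a JSON body by testing whether the response begins with three digits followed by a colon. Below is a minimal sketch of that test factored into a helper; the name `isHttpErrorPrefix` is hypothetical, not part of the Fledge API. It also adds the length guard the inline checks omit: indexing `content[0]` through `content[3]` on a response shorter than four characters is out of bounds.

```cpp
#include <cctype>
#include <string>

// Hypothetical helper: true when the response looks like "NNN: ...",
// i.e. an HTTP status prefix rather than a JSON document. The size
// check guards the character accesses that the inline tests skip.
static bool isHttpErrorPrefix(const std::string& content)
{
	return content.size() > 3 &&
	       isdigit(static_cast<unsigned char>(content[0])) &&
	       isdigit(static_cast<unsigned char>(content[1])) &&
	       isdigit(static_cast<unsigned char>(content[2])) &&
	       content[3] == ':';
}
```

Each parse-error branch in this file could then call the helper once instead of repeating the four character tests.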
{\n\t\tm_logger->error(\"Proxy deletion failed %s.\", e.what());\n\t\treturn false;\n\t}\n\treturn false;\n}\n\n/**\n * Get the asset tracking tuple\n * for a service and asset name\n *\n * @param    serviceName\tThe serviceName to restrict data fetch\n * @param    assetName\t\tThe asset name that belongs to the service\n * @param    event\t\tThe associated event type\n * @return\t\tA pointer to an AssetTrackingTuple object allocated on the heap\n */\nAssetTrackingTuple* ManagementClient::getAssetTrackingTuple(const std::string& serviceName,\n\t\t\t\t\t\t\tconst std::string& assetName,\n\t\t\t\t\t\t\tconst std::string& event)\n{\n\tAssetTrackingTuple* tuple = NULL;\n\ttry {\n\t\tstring url = \"/fledge/track\";\n\t\tif (serviceName == \"\" || assetName == \"\" || event == \"\")\n\t\t{\n\t\t\tm_logger->error(\"Failed to fetch asset tracking tuple: \" \\\n\t\t\t\t\t\"service name, asset name and event type are required.\");\n\t\t\tthrow new exception();\n\t\t}\n\n\t\turl += \"?service=\" + urlEncode(serviceName);\n\t\turl += \"&asset=\" + urlEncode(assetName) + \"&event=\" + event;\n\n\t\tauto res = this->getHttpClient()->request(\"GET\", url.c_str());\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) &&\n\t\t\t\t\tisdigit(response[1]) &&\n\t\t\t\t\tisdigit(response[2]) &&\n\t\t\t\t\tresponse[3]==':');\n\t\t\tm_logger->error(\"%s fetch asset tracking tuple: %s\\n\",\n\t\t\t\t\thttpError?\"HTTP error during\":\"Failed to parse result of\",\n\t\t\t\t\tresponse.c_str());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to fetch asset tracking tuple: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tconst rapidjson::Value& trackArray = doc[\"track\"];\n\t\t\tif (trackArray.IsArray())\n\t\t\t{\n\t\t\t\t// Process 
every row and create the AssetTrackingTuple object\n\t\t\t\tfor (auto& rec : trackArray.GetArray())\n\t\t\t\t{\n\t\t\t\t\tif (!rec.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tthrow runtime_error(\"Expected asset tracker tuple to be an object\");\n\t\t\t\t\t}\n\n\t\t\t\t\t// Note: deprecatedTimestamp NULL value is returned as \"\"\n\t\t\t\t\t// otherwise it's a string DATE\n\t\t\t\t\tbool deprecated = rec.HasMember(\"deprecatedTimestamp\") &&\n\t\t\t\t\t    strlen(rec[\"deprecatedTimestamp\"].GetString());\n\n\t\t\t\t\t// Create a new AssetTrackingTuple object, to be freed by the caller\n\t\t\t\t\ttuple = new AssetTrackingTuple(rec[\"service\"].GetString(),\n\t\t\t\t\t\t\t\t\trec[\"plugin\"].GetString(),\n\t\t\t\t\t\t\t\t\trec[\"asset\"].GetString(),\n\t\t\t\t\t\t\t\t\trec[\"event\"].GetString(),\n\t\t\t\t\t\t\t\t\tdeprecated);\n\n\t\t\t\t\tm_logger->debug(\"Adding AssetTracker tuple for service %s: %s:%s:%s, \" \\\n\t\t\t\t\t\t\t\"deprecated state is %d\",\n\t\t\t\t\t\t\trec[\"service\"].GetString(),\n\t\t\t\t\t\t\trec[\"plugin\"].GetString(),\n\t\t\t\t\t\t\trec[\"asset\"].GetString(),\n\t\t\t\t\t\t\trec[\"event\"].GetString(),\n\t\t\t\t\t\t\tdeprecated);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tthrow runtime_error(\"Expected array of rows in asset track tuples array\");\n\t\t\t}\n\n\t\t\treturn tuple;\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Fetch/parse of asset tracking tuples for service %s failed: %s.\",\n\t\t\t\tserviceName.c_str(),\n\t\t\t\te.what());\n\t} catch (...) 
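The query string above is built by passing each parameter through `urlEncode()` before appending it. As a rough illustration of what that call is assumed to do, here is a minimal RFC 3986-style percent-encoder; `percentEncode` is a hypothetical name, and Fledge's real `urlEncode()` implementation lives elsewhere in this library and may escape a different character set.

```cpp
#include <cctype>
#include <cstdio>
#include <string>

// Minimal percent-encoder sketch (hypothetical stand-in for urlEncode):
// unreserved characters pass through, everything else becomes %XX.
static std::string percentEncode(const std::string& in)
{
	std::string out;
	for (unsigned char c : in)
	{
		if (isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~')
		{
			out += static_cast<char>(c);
		}
		else
		{
			char buf[4];
			snprintf(buf, sizeof(buf), "%%%02X", c);
			out += buf;
		}
	}
	return out;
}
```

For example, an asset name containing a space would be sent as `asset=sine%20wave` in the tracking query string.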
{\n\t\tm_logger->error(\"Unexpected exception when retrieving asset tuples for service %s\",\n\t\t\t\tserviceName.c_str());\n\t}\n\n\treturn tuple;\n}\n\n/**\n * Get the asset tracking tuples for all the deprecated assets\n *\n * @return\t\tA pointer to an AssetTrackingTable object allocated on the heap\n */\nAssetTrackingTable* ManagementClient::getDeprecatedAssetTrackingTuples()\n{\n\tAssetTrackingTable* table = NULL;\n\ttry {\n\t\tstring url = \"/fledge/track?deprecated=true\";\n\n\t\tauto res = this->getHttpClient()->request(\"GET\", url.c_str());\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) &&\n\t\t\t\t\tisdigit(response[1]) &&\n\t\t\t\t\tisdigit(response[2]) &&\n\t\t\t\t\tresponse[3]==':');\n\t\t\tm_logger->error(\"%s fetch asset tracking tuple: %s\\n\",\n\t\t\t\t\thttpError?\"HTTP error during\":\"Failed to parse result of\",\n\t\t\t\t\tresponse.c_str());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to fetch asset tracking tuple: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tconst rapidjson::Value& trackArray = doc[\"track\"];\n\t\t\tif (trackArray.IsArray())\n\t\t\t{\n\t\t\t\ttable = new AssetTrackingTable();\n\t\t\t\t// Process every row and create the AssetTrackingTuple object\n\t\t\t\tfor (auto& rec : trackArray.GetArray())\n\t\t\t\t{\n\t\t\t\t\tif (!rec.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tthrow runtime_error(\"Expected asset tracker tuple to be an object\");\n\t\t\t\t\t}\n\n\t\t\t\t\t// Note: deprecatedTimestamp NULL value is returned as \"\"\n\t\t\t\t\t// otherwise it's a string DATE\n\t\t\t\t\tbool deprecated = rec.HasMember(\"deprecatedTimestamp\") &&\n\t\t\t\t\t    strlen(rec[\"deprecatedTimestamp\"].GetString());\n\n\t\t\t\t\t// Create a new AssetTrackingTuple object, to be 
freed by the caller\n\t\t\t\t\tAssetTrackingTuple *tuple = new AssetTrackingTuple(rec[\"service\"].GetString(),\n\t\t\t\t\t\t\t\t\trec[\"plugin\"].GetString(),\n\t\t\t\t\t\t\t\t\trec[\"asset\"].GetString(),\n\t\t\t\t\t\t\t\t\trec[\"event\"].GetString(),\n\t\t\t\t\t\t\t\t\tdeprecated);\n\n\t\t\t\t\tm_logger->debug(\"Adding AssetTracker tuple for service %s: %s:%s:%s, \" \\\n\t\t\t\t\t\t\t\"deprecated state is %d\",\n\t\t\t\t\t\t\trec[\"service\"].GetString(),\n\t\t\t\t\t\t\trec[\"plugin\"].GetString(),\n\t\t\t\t\t\t\trec[\"asset\"].GetString(),\n\t\t\t\t\t\t\trec[\"event\"].GetString(),\n\t\t\t\t\t\t\tdeprecated);\n\n\t\t\t\t\ttable->add(tuple);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tthrow runtime_error(\"Expected array of rows in asset track tuples array\");\n\t\t\t}\n\n\t\t\treturn table;\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Fetch/parse of deprecated asset tracking tuples failed: %s.\",\n\t\t\t\te.what());\n\t} catch (...) {\n\t\tm_logger->error(\"Unexpected exception when retrieving asset tuples for deprecated assets\");\n\t}\n\n\treturn table;\n}\n\n/**\n * Return the content of the named ACL by calling the\n * management API of the Fledge core.\n *\n * @param  aclName\t\tThe name of the ACL to return\n * @return ACL\t\t\tThe ACL class\n * @throw  exception\t\tIf the ACL does not exist or\n *\t\t\t\tthe JSON result can not be parsed\n */\nACL ManagementClient::getACL(const string& aclName)\n{\n\ttry {\n\t\tstring url = \"/fledge/ACL/\" + urlEncode(aclName);\n\n\t\tauto res = this->getHttpClient()->request(\"GET\", url.c_str());\n\t\tDocument doc;\n\t\tstring response = res->content.string();\n\t\tdoc.Parse(response.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(response[0]) &&\n\t\t\t\t\t  isdigit(response[1]) &&\n\t\t\t\t\t  isdigit(response[2]) && response[3]==':');\n\t\t\tm_logger->error(\"%s fetching ACL for %s: %s\\n\",\n\t\t\t\t\thttpError?\"HTTP error while\":\"Failed to 
parse result of\",\n\t\t\t\t\taclName.c_str(),\n\t\t\t\t\tresponse.c_str());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"Failed to fetch ACL: %s.\",\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t\tthrow new exception();\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Success\n\t\t\treturn ACL(response);\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Get ACL failed %s.\", e.what());\n\t\tthrow;\n\t}\n}\n\n/**\n * Get the storage asset tracking tuple\n * for a service and asset name\n *\n * @param    serviceName        The serviceName to restrict data fetch\n * @param    assetName          The asset name that belongs to the service\n * @param    event              The associated event type\n * @param    dp\t\t\tThe datapoints type\n * @param    c\t\t\tThe count of datapoints\n * @return              \tA pointer to a StorageAssetTrackingTuple object allocated on the heap\n */\nStorageAssetTrackingTuple* ManagementClient::getStorageAssetTrackingTuple(const std::string& serviceName,\n                                                        const std::string& assetName,\n                                                        const std::string& event,\n\t\t\t\t\t\t\tconst std::string& dp,\n\t\t\t\t\t\t\tconst unsigned int& c)\n{\n\t\n        StorageAssetTrackingTuple* tuple = NULL;\n        try {\n                string url = \"/fledge/track\";\n                if (serviceName == \"\" || assetName == \"\" || event == \"\")\n                {\n                        m_logger->error(\"Failed to fetch storage asset tracking tuple: \" \\\n                                        \"service name, asset name and event type are required.\");\n                        throw new exception();\n                }\n\n                url += \"?service=\" + urlEncode(serviceName);\n                url += \"&asset=\" + urlEncode(assetName) + \"&event=\" + event;\n\n                auto res = 
this->getHttpClient()->request(\"GET\", url.c_str());\n                Document doc;\n                string response = res->content.string();\n                doc.Parse(response.c_str());\n                if (doc.HasParseError())\n                {\n                        bool httpError = (isdigit(response[0]) &&\n                                        isdigit(response[1]) &&\n                                        isdigit(response[2]) &&\n                                        response[3]==':');\n                        m_logger->error(\"%s fetch storage asset tracking tuple: %s\\n\",\n                                        httpError?\"HTTP error during\":\"Failed to parse result of\",\n                                        response.c_str());\n                        throw new exception();\n                }\n                else if (doc.HasMember(\"message\"))\n                {\n                        m_logger->error(\"Failed to fetch storage asset tracking tuple: %s.\",\n                                doc[\"message\"].GetString());\n                        throw new exception();\n                }\n                else\n                {\n                        const rapidjson::Value& trackArray = doc[\"track\"];\n                        if (trackArray.IsArray())\n                        {\n                                // Process every row and create the AssetTrackingTuple object\n                                for (auto& rec : trackArray.GetArray())\n                                {\n\t\t\t\t\t m_logger->debug(\"%s:%d Inside for loop of trackArray \", __FUNCTION__, __LINE__);\n\n                                        if (!rec.IsObject())\n                                        {\n                                                throw runtime_error(\"Expected storage asset tracker tuple to be an object\");\n                                        }\n\n                                        // Note: deprecatedTimestamp NULL value is returned as 
\"\"\n                                        // otherwise it's a string DATE\n                                        bool deprecated = rec.HasMember(\"deprecatedTimestamp\") &&\n                                            strlen(rec[\"deprecatedTimestamp\"].GetString());\n\n\t\t\t\t\tstd::string data;\n                                        if (!rec.HasMember(\"data\"))\n                                        {\n                                                throw runtime_error(\"Expected storage asset tracker tuple to contain member data\");\n                                        }\n\n                                        const rapidjson::Value& dataVal = rec[\"data\"];\n                                        if (!dataVal.IsObject())\n                                        {\n                                                throw runtime_error(\"Expected data in storage asset tracker tuple to be an object\");\n                                        }\n\n                                        if (!dataVal.HasMember(\"datapoints\"))\n                                        {\n                                                 throw runtime_error(\"Expected asset tracker tuple to contain datapoints\");\n                                        }\n\n                                        if (dataVal.ObjectEmpty())\n                                        {\n                                                m_logger->error(\"%s:%d dataVal object is empty\", __FUNCTION__, __LINE__);\n                                                continue;\n                                        }\n\n                                        if (!dataVal[\"datapoints\"].IsArray())\n                                        {\n                                                throw runtime_error(\"Expected datapoints to be an array\");\n                                        }\n\n\t\t\t\t\tstd::string datapoints;\n\t\t\t\t\tfor (auto& r : 
dataVal[\"datapoints\"].GetArray())\n\t\t\t\t\t{\n\t\t\t\t\t\tif (!r.IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tthrow runtime_error(\"Expected r to be string\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tdatapoints.append(r.GetString());\n\t\t\t\t\t\t\tdatapoints.append(\",\");\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\n\t\t\t\t\tif (datapoints[datapoints.size()-1] == ',')\n\t\t\t\t\t{\n\t\t\t\t\t\tdatapoints.pop_back();\n\t\t\t\t\t}\n\n\t\t\t\t\tif(validateDatapoints(dp,datapoints))\n\t\t\t\t\t{\n\t\t\t\t\t\t//datapoints in db not same as in arg, continue\n\t\t\t\t\t\tm_logger->debug(\"%s:%d :Datapoints in db not same as in arg\",\n\t\t\t\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t\t\t\t__LINE__);\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\t\n                                        if (!dataVal.HasMember(\"count\"))\n                                        {\n                                                 throw runtime_error(\"Expected asset tracker tuple to contain count\");\n                                        }\n\n                                        if (!dataVal[\"count\"].IsInt())\n                                        {\n                                                throw runtime_error(\"Expected count in data to be int\");\n                                        }\n                                        int count = dataVal[\"count\"].GetInt();\n\t\t\t\t\tif ( count != c)\n\t\t\t\t\t{\n\t\t\t\t\t\t// count not same, continue\n\t\t\t\t\t\tm_logger->debug(\"%s:%d :count in db not same as received in arg\",\n\t\t\t\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t\t\t\t__LINE__);\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\n                                        // Create a new AssetTrackingTuple object, to be freed by the caller\n                                        tuple = new StorageAssetTrackingTuple(rec[\"service\"].GetString(),\n                                                                        rec[\"plugin\"].GetString(),\n                         
                                               rec[\"asset\"].GetString(),\n                                                                        rec[\"event\"].GetString(),\n                                                                        deprecated,\n\t\t\t\t\t\t\t\t\tdatapoints,\n\t\t\t\t\t\t\t\t\tcount);\n\n                                        m_logger->debug(\"%s:%d : Adding StorageAssetTracker tuple for service %s: %s:%s:%s, \" \\\n                                                        \"deprecated state is %d, datapoints %s , count %d\",__FUNCTION__, __LINE__,\n                                                        rec[\"service\"].GetString(),\n                                                        rec[\"plugin\"].GetString(),\n                                                        rec[\"asset\"].GetString(),\n                                                        rec[\"event\"].GetString(),\n                                                        deprecated, datapoints.c_str(), count);\n \n\n                                }\n                        }\n                        else\n                        {\n                                throw runtime_error(\"Expected array of rows in storage asset track tuples array\");\n                        }\n\n                        return tuple;\n                }\n        } catch (const SimpleWeb::system_error &e) {\n                m_logger->error(\"Fetch/parse of storage asset tracking tuples for service %s failed: %s.\",\n                                serviceName.c_str(),\n                                e.what());\n        } catch (...) 
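The datapoint-concatenation loops in the storage asset tracking functions append a `,` after every element and then `pop_back()` the trailing separator. The same join can be written without the trailing-comma fixup; this is a sketch under the assumption that the datapoints are already available as strings, and `joinDatapoints` is a hypothetical helper name, not Fledge API.

```cpp
#include <string>
#include <vector>

// Hypothetical helper: join string values with commas, emitting the
// separator only between elements, so no trailing comma ever needs
// to be popped off the result.
static std::string joinDatapoints(const std::vector<std::string>& points)
{
	std::string joined;
	for (const auto& p : points)
	{
		if (!joined.empty())
		{
			joined += ",";
		}
		joined += p;
	}
	return joined;
}
```

Collecting the RapidJSON array values into a `std::vector<std::string>` first and joining them this way keeps the separator logic in one place.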
{\n                m_logger->error(\"Unexpected exception when retrieving storage asset tuples for service %s\",\n                                serviceName.c_str());\n        }\n\n        return tuple;\n}\n\t\n/**\n * Add a new asset tracking tuple\n *\n * @param service\tService name\n * @param plugin\tPlugin name\n * @param asset\t\tAsset name\n * @param event\t\tEvent type\n * @param deprecated\tDeprecated or not\n * @param datapoints\tComma-separated datapoints list\n * @param count\t\tDatapoint count\n * @return\t\tWhether the operation was successful\n */\nbool ManagementClient::addStorageAssetTrackingTuple(const std::string& service, \n\t\t\t\t\tconst std::string& plugin,\n\t\t\t\t\tconst std::string& asset,\n\t\t\t\t\tconst std::string& event,\n\t\t\t\t\tconst bool& deprecated,\n\t\t\t\t\tconst std::string& datapoints,\n\t\t\t\t\tconst int& count)\n{\n\tostringstream convert;\n\tstd::string d;\n        for (size_t i = 0; i < datapoints.size(); ++i)\n        {\n                if (datapoints[i] == ',')\n                {\n                        d.append(\"\\\",\\\"\");\n                }\n                else\n                        d.append(1, datapoints[i]);\n        }\n\n\ttry {\n\t\tconvert << \"{ \\\"service\\\" : \\\"\" << JSONescape(service) << \"\\\", \";\n\t\tconvert << \" \\\"plugin\\\" : \\\"\" << plugin << \"\\\", \";\n\t\tconvert << \" \\\"asset\\\" : \\\"\" << asset << \"\\\", \";\n\t\tconvert << \" \\\"event\\\" : \\\"\" << event << \"\\\", \";\n\t\tconvert << \" \\\"deprecated\\\" :\\\"\" << deprecated << \"\\\", \";\n\t\tconvert << \" \\\"data\\\"  :  { \\\"datapoints\\\" : [ \\\"\" << d << \"\\\" ], \";\n\t\tconvert << \" \\\"count\\\" : \" << count << \" } }\";\n\n\t\tauto res = this->getHttpClient()->request(\"POST\", \"/fledge/track\", convert.str());\n\t\tif (res->status_code[0] == '2') // A 2xx response\n                {\n                        return true;\n                }\n\n\t\tDocument doc;\n\t\tstring content = 
res->content.string();\n\t\tdoc.Parse(content.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tbool httpError = (isdigit(content[0]) && isdigit(content[1]) && isdigit(content[2]) && content[3]==':');\n\t\t\tm_logger->error(\"%s:%d , %s storage asset tracking tuple addition: %s\\n\",__FUNCTION__, __LINE__, \n\t\t\t\t\t\t\t\thttpError?\"HTTP error during\":\"Failed to parse result of\", \n\t\t\t\t\t\t\t\tcontent.c_str());\n\t\t\treturn false;\n\t\t}\n\t\telse if (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->error(\"%s:%d Failed to add storage asset tracking tuple: %s.\",__FUNCTION__, __LINE__,\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->error(\"%s:%d Failed to add storage asset tracking tuple: %s.\",__FUNCTION__, __LINE__,\n\t\t\t\t\tcontent.c_str());\n\t\t}\n\t} catch (const SimpleWeb::system_error &e) {\n\t\t\t\tm_logger->error(\"%s:%d Failed to add storage asset tracking tuple: %s.\",__FUNCTION__, __LINE__, e.what());\n\t\t\t\treturn false;\n\t\t}\n\t\treturn false;\n}\n\n/**\n * Get the storage asset tracking tuples\n * for a service or all services\n *\n * @param    serviceName        The serviceName to restrict data fetch\n *                              If empty records for all services are fetched\n * @return              \tA vector of pointers to AssetTrackingTuple objects allocated on heap\n */\nstd::vector<StorageAssetTrackingTuple*>& ManagementClient::getStorageAssetTrackingTuples(const std::string serviceName)\n{\n        std::vector<StorageAssetTrackingTuple*> *vec = new std::vector<StorageAssetTrackingTuple*>();\n\n        try {\n                string url = \"/fledge/track\";\n                if (serviceName != \"\")\n                {\n                        url += \"?service=\"+urlEncode(serviceName);\n                }\n                auto res = this->getHttpClient()->request(\"GET\", url.c_str());\n                Document doc;\n                string response = res->content.string();\n            
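addStorageAssetTrackingTuple() above converts a comma-separated datapoint list into the inner text of a JSON string array by replacing each `,` with `","`, so that the surrounding `[ "` and `" ]` written to the stream produce, for example, `["a","b","c"]` from the input `a,b,c`. A self-contained sketch of that transform; the helper name `quoteDatapointList` is hypothetical.

```cpp
#include <string>

// Hypothetical helper mirroring the character loop in
// addStorageAssetTrackingTuple(): each comma in the input becomes
// the three characters ","  so that quoting the whole result yields
// a JSON array of strings.
static std::string quoteDatapointList(const std::string& datapoints)
{
	std::string d;
	for (char c : datapoints)
	{
		if (c == ',')
			d.append("\",\"");
		else
			d.append(1, c);
	}
	return d;
}
```

Note this performs no JSON escaping of the datapoint names themselves; names containing quotes or backslashes would need `JSONescape`-style treatment as well.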
    doc.Parse(response.c_str());\n                if (doc.HasParseError())\n                {\n                        bool httpError = (isdigit(response[0]) && isdigit(response[1]) && isdigit(response[2]) && response[3]==':');\n                        m_logger->error(\"%s fetch asset tracking tuples: %s\\n\",\n                                                                httpError?\"HTTP error during\":\"Failed to parse result of\",\n                                                                response.c_str());\n                        throw new exception();\n                }\n\t\telse if (doc.HasMember(\"message\"))\n                {\n                        m_logger->error(\"Failed to fetch asset tracking tuples: %s.\",\n                                doc[\"message\"].GetString());\n                        throw new exception();\n                }\n                else\n                {\n                        const rapidjson::Value& trackArray = doc[\"track\"];\n                        if (trackArray.IsArray())\n                        {\n                                // Process every row and create the AssetTrackingTuple object\n                                for (auto& rec : trackArray.GetArray())\n                                {\n                                        if (!rec.IsObject())\n                                        {\n                                                throw runtime_error(\"Expected asset tracker tuple to be an object\");\n                                        }\n\n                                        // Note: deprecatedTimestamp NULL value is returned as \"\"\n                                        // otherwise it's a string DATE\n                                        bool deprecated = rec.HasMember(\"deprecatedTimestamp\") &&\n                                            strlen(rec[\"deprecatedTimestamp\"].GetString());\n\n                                        std::string data ;\n                         
               if (!rec.HasMember(\"data\"))\n                                        {\n                                                throw runtime_error(\"Expected asset tracker tuple to contain member data\");\n                                        }\n\n                                        const rapidjson::Value& dataVal = rec[\"data\"];\n                                        if (!dataVal.IsObject())\n                                        {\n                                                throw runtime_error(\"Expected data asset tracker tuple to be an object\");\n                                        }\n\n\t\t\t\t\tif (dataVal.ObjectEmpty())\n\t\t\t\t\t{\n\t\t\t\t\t\tm_logger->debug(\"%s:%d dataVal Object empty \" , __FUNCTION__, __LINE__);\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\n                                        if (!dataVal.HasMember(\"datapoints\"))\n                                        {\n                                                 throw runtime_error(\"Expected asset tracker tuple to contain datapoints\");\n                                        }\n\n\t\t\t\t\tif (!dataVal[\"datapoints\"].IsArray())\n                                        {\n                                                throw runtime_error(\"Expected datapoints to be array\");\n                                        }\n\n                                        std::string datapoints;\n                                        for (auto& r : dataVal[\"datapoints\"].GetArray())\n                                        {\n                                                if (!r.IsString())\n                                                {\n                                                        throw runtime_error(\"Expected individual datapoints in datapoints array to be string\");\n                                                }\n                                                else\n                                                {\n                             
                            datapoints.append(r.GetString());\n                                                        datapoints.append(\",\");\n                                                }\n                                        }\n\n\t\t\t\t\tif (!datapoints.empty() && datapoints.back() == ',')\n\t\t\t\t\t{\n\t\t\t\t\t\tdatapoints.pop_back();\n\t\t\t\t\t}\n\n                                        if (!dataVal.HasMember(\"count\"))\n                                        {\n                                                 throw runtime_error(\"Expected asset tracker tuple to contain count\");\n                                        }\n\n                                        if (!dataVal[\"count\"].IsInt())\n                                        {\n                                                throw runtime_error(\"Expected count in data to be int\");\n                                        }\n                                        int count = dataVal[\"count\"].GetInt();\n                                        m_logger->debug(\"%s:%d count = %d  \", __FUNCTION__, __LINE__, count);\n\n                                        StorageAssetTrackingTuple *tuple = new StorageAssetTrackingTuple(rec[\"service\"].GetString(),\n                                                                        rec[\"plugin\"].GetString(),\n                                                                        rec[\"asset\"].GetString(),\n                                                                        rec[\"event\"].GetString(),\n                                                                        deprecated, datapoints, count);\n\n                                        m_logger->debug(\"%s:%d: Adding StorageAssetTracker tuple for service %s: %s:%s:%s, \" \\\n                                                        \"deprecated state is %d, datapoints %s , count %d\" ,__FUNCTION__, __LINE__,\n                                                        
rec[\"service\"].GetString(),\n                                                        rec[\"plugin\"].GetString(),\n                                                        rec[\"asset\"].GetString(),\n                                                        rec[\"event\"].GetString(),\n                                                        deprecated, datapoints.c_str(), count);\n                                        vec->push_back(tuple);\n                                }\n                        }\n                        else\n                        {\n                                throw runtime_error(\"Expected array of rows in asset track tuples array\");\n                        }\n\n                        return (*vec);\n                }\n        } catch (const SimpleWeb::system_error &e) {\n                m_logger->error(\"Fetch/parse of asset tracking tuples for service %s failed: %s.\", serviceName.c_str(), e.what());\n        }\n        catch (...) {\n                m_logger->error(\"Unexpected exception when retrieving asset tuples for service %s\", serviceName.c_str());\n        }\n        return *vec;\n}\n\n/**\n * Compare two datapoints for equality; the first may be enclosed in '\"'\n *\n * @param    dp1        The datapoint to compare, enclosed in '\"'\n * @param    dp2        The datapoint to compare\n * @return   int        Result of the comparison, 0 when equal\n */\n\nint ManagementClient::validateDatapoints(std::string dp1, std::string dp2)\n{\n\tstd::string temp;\n\tfor (char c : dp1)\n\t{\n\t\tif (c != '\"')\n\t\t\ttemp.push_back(c);\n\t}\n\n\treturn temp.compare(dp2);\n}\n\n/**\n * Get an alert by specific key\n *\n * @param    key        Key to get alert\n * @return   string     Alert\n */\nstd::string ManagementClient::getAlertByKey(const std::string& key)\n{\n\tstd::string response = \"Status: 404 Not found\";\n\ttry\n\t{\n\t\tstd::string url = \"/fledge/alert/\" + urlEncode(key) 
;\n\t\tauto res = this->getHttpClient()->request(\"GET\", url.c_str());\n\t\tstd::string statusCode = res->status_code;\n\t\tif (statusCode.compare(\"200 OK\"))\n\t\t{\n\t\t\tm_logger->error(\"Get alert failed %s.\", statusCode.c_str());\n\t\t\tresponse = \"Status: \" + statusCode;\n\t\t\treturn response;\n\t\t}\n\n\t\tresponse = res->content.string();\n\t}\n\tcatch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Get alert failed %s.\", e.what());\n\t}\n\treturn response;\n}\n\n\n/**\n * Raise an alert\n *\n * @param    key        Alert key\n * @param    message    Alert message\n * @param    urgency    Alert urgency\n * @return   whether operation was successful\n */\nbool ManagementClient::raiseAlert(const std::string& key, const std::string& message, const std::string& urgency)\n{\n\ttry\n\t{\n\t\tstd::string url = \"/fledge/alert\" ;\n\t\tostringstream   payload;\n\t\tpayload << \"{\\\"key\\\":\\\"\" << key  << \"\\\",\"\n\t\t\t\t\t<< \"\\\"message\\\":\\\"\" << message  << \"\\\",\"\n\t\t\t\t\t<< \"\\\"urgency\\\":\\\"\" << urgency  << \"\\\"}\";\n\n\t\tauto res = this->getHttpClient()->request(\"POST\", url.c_str(), payload.str());\n\t\tstd::string statusCode = res->status_code;\n\t\tif (statusCode.compare(\"200 OK\"))\n\t\t{\n\t\t\tm_logger->error(\"Raise alert failed %s.\", statusCode.c_str());\n\t\t\treturn false;\n\t\t}\n\n\t\treturn true;\n\t}\n\tcatch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Raise alert failed %s.\", e.what());\n\t\treturn false;\n\t}\n}\n\n/**\n * Clear an alert\n *\n * @param    key        Alert key\n * @return   whether operation was successful\n */\nbool ManagementClient::clearAlert(const std::string& key)\n{\n\ttry\n\t{\n\t\tstd::string url = \"/fledge/alert/\" + urlEncode(key);\n\t\tauto res = this->getHttpClient()->request(\"DELETE\", url.c_str());\n\t\tstd::string statusCode = res->status_code;\n\t\tif (statusCode.compare(\"200 OK\"))\n\t\t{\n\t\t\tm_logger->error(\"Clear alert failed %s.\", 
statusCode.c_str());\n\t\t\treturn false;\n\t\t}\n\n\t\treturn true;\n\t}\n\tcatch (const SimpleWeb::system_error &e) {\n\t\tm_logger->error(\"Clear alert failed %s.\", e.what());\n\t\treturn false;\n\t}\n}\n\n"
  },
  {
    "path": "C/common/pipeline_branch.cpp",
    "content": "/*\n * Fledge pipeline branch class\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <pipeline_element.h>\n#include <filter_pipeline.h>\n#include <config_handler.h>\n#include <service_handler.h>\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n\nusing namespace std;\n\n/**\n * Constructor for a branch in a filter pipeline\n */\nPipelineBranch::PipelineBranch(FilterPipeline *parent) : PipelineElement(), m_pipeline(parent), m_thread(NULL)\n{\n\tm_shutdownCalled = false;\n}\n\n/**\n * Destructor for the pipeline branch\n *\n * If the pipeline has not already been shut down then shut it down.\n * Delete the thread if it exists.\n */\nPipelineBranch::~PipelineBranch()\n{\n\tif (!m_shutdownCalled)\n\t{\n\t\tm_shutdownCalled = true;\n\t\tm_cv.notify_all();\n\t\tif (m_thread && m_thread->joinable())\n\t\t\tm_thread->join();\n\t}\n\tif (m_thread)\n\t{\n\t\tdelete m_thread;\n\t}\n\n\t// Clear any queued readings\n\twhile (!m_queue.empty())\n\t{\n\t\tReadingSet *readings = m_queue.front();\n\t\tm_queue.pop();\n\t\tdelete readings;\n\t}\n\tfor (auto it = m_branch.begin(); it != m_branch.end(); ++it)\n\t{\n\t\tdelete *it;\n\t}\n}\n\n/**\n * Setup the configuration for a branch in a pipeline\n *\n * @param       mgtClient       The management client\n * @param       children        A vector to fill with child configuration categories\n */\nbool PipelineBranch::setupConfiguration(ManagementClient *mgtClient, vector<string>& children)\n{\n\tfor (auto it = m_branch.begin(); it != m_branch.end(); ++it)\n\t{\n\t\t(*it)->setupConfiguration(mgtClient, children);\n\t}\n\treturn true;\n}\n\n/**\n * Setup the configuration categories for the branch element of\n * a pipeline. 
The branch itself has no category, but it must call\n * the setup method on all items in the child branch of the\n * pipeline.\n *\n * @param\tmgmt\t\tThe management client\n * @param\tingest\t\tThe configuration handler for our service\n * @param\tfilterCategories\tA map of the category names to pipeline elements\n */\nbool PipelineBranch::setup(ManagementClient *mgmt, void *ingest, map<string, PipelineElement *>&  filterCategories)\n{\nvector<string> children;\n\n\tfor (auto it = m_branch.begin(); it != m_branch.end(); ++it)\n\t{\n\t\tif ((*it)->isBranch())\n\t\t{\n\t\t\tPipelineBranch *branch = (PipelineBranch *)(*it);\n\t\t\tbranch->setFunctions(m_passOnward, m_useData, m_ingest);\n\t\t}\n\t\t(*it)->setup(mgmt, ingest, filterCategories);\n\t}\n\treturn true;\n}\n/**\n * Initialise the pipeline branch.\n *\n * Initialise the elements of the child pipeline.\n * Spawn a thread to execute the child pipeline.\n *\n * @param outHandle\tThe pipeline element on the \"main branch\"\n * @param output\tThe output stream used to pass data onwards\n */\nbool PipelineBranch::init(OUTPUT_HANDLE* outHandle, OUTPUT_STREAM output)\n{\n\tbool initErrors = false;\n\tstring errMsg = \"'plugin_init' failed for filter '\";\n\tfor (auto it = m_branch.begin(); it != m_branch.end(); ++it)\n\t{\n\t\ttry\n\t\t{\n\t\t\tLogger::getLogger()->info(\"Initialise %s on pipeline branch\", (*it)->getName().c_str());\n\t\t\t// Iterate the load filters set in the Ingest class m_filters member \n\t\t\tif ((it + 1) != m_branch.end())\n\t\t\t{\n\t\t\t\t(*it)->setNext(*(it + 1));\n\t\t\t\t// Set next filter pointer as OUTPUT_HANDLE\n\t\t\t\tif (!(*it)->init((OUTPUT_HANDLE *)(*(it + 1)),\n\t\t\t\t\t\tfilterReadingSetFn(m_passOnward)))\n\t\t\t\t{\n\t\t\t\t\terrMsg += (*it)->getName() + \"'\";\n\t\t\t\t\tinitErrors = true;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// Set the Ingest class pointer as OUTPUT_HANDLE\n\t\t\t\tif (!(*it)->init((OUTPUT_HANDLE *)(m_ingest),\n\t\t\t\t\t\t 
filterReadingSetFn(m_useData)))\n\t\t\t\t{\n\t\t\t\t\terrMsg += (*it)->getName() + \"'\";\n\t\t\t\t\tinitErrors = true;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\n\t\t}\n\t\t// TODO catch specific exceptions\n\t\tcatch (...)\n\t\t{\n\t\t\tthrow;\n\t\t}\n\t}\n\n\tif (initErrors)\n\t{\n\t\t// Failure\n\t\tLogger::getLogger()->fatal(\"%s error: %s\", __FUNCTION__, errMsg.c_str());\n\t\treturn false;\n\t}\n\n\tLogger::getLogger()->debug(\"Create branch handler thread\");\n\tm_thread = new thread(PipelineBranch::branchHandler, this);\n\n\t// Success\n\treturn true;\n}\n\n/**\n * Ingest a set of readings and pass on in the pipeline. Create a deep copy\n * and queue the copy into the branched pipeline.\n *\n * @param readingSet\tThe set of readings to ingest\n */\nvoid PipelineBranch::ingest(READINGSET *readingSet)\n{\n\tif (m_debugger)\n\t{\n\t\tPipelineDebugger::DebuggerActions action = m_debugger->process(readingSet);\n\n\t\tswitch (action)\n\t\t{\n\t\tcase PipelineDebugger::Block:\n\t\t\tdelete readingSet;\n\t\t\treturn;\n\t\tcase PipelineDebugger::NoAction:\n\t\t\tbreak;\n\t\t}\n\n\t}\n\tm_pipeline->startBranch();\n\tREADINGSET *copy = new ReadingSet();\n\tcopy->copy(*readingSet);\n\tunique_lock<mutex> lck(m_mutex);\n\tm_queue.push(copy);\n\tlck.unlock();\n\tm_cv.notify_one();\n\tif (m_next)\n\t{\n\t\tm_next->ingest(readingSet);\n\t}\n\telse\n\t{\n\t\t// Pipeline branch has no downstream element, write direct to storage\n\t\t(*(OUTPUT_STREAM)m_useData)(m_ingest, readingSet);\n\t}\n}\n\n/**\n * Shutdown the branch element of a pipeline.\n *\n * Stop the branch handler thread, shut down each element on the\n * child branch and discard any readings still queued.\n *\n * @param\tserviceHandler\tThe service handler of the hosting service\n * @param\tconfigHandler\tThe config handler from which categories are unregistered\n */\nvoid PipelineBranch::shutdown(ServiceHandler *serviceHandler, ConfigHandler *configHandler)\n{\n\t// Shutdown the handler thread\n\tm_shutdownCalled = true;\n\tm_cv.notify_all();\n\tif (m_thread)\n\t{\n\t\tm_thread->join();\n\t\tdelete m_thread;\n\t\tm_thread = NULL;\n\t}\n\n\t// Shutdown the filter elements on the branch\n\tfor (auto it = m_branch.begin(); it != m_branch.end(); ++it)\n\t{\n\t\t(*it)->shutdown(serviceHandler, configHandler);\n\t}\n\n\t// Clear any queued readings\n\twhile (!m_queue.empty())\n\t{\n\t\tReadingSet *readings = m_queue.front();\n\t\tm_queue.pop();\n\t\tdelete readings;\n\t}\n}\n\n/**\n * Return if the branch is ready to be executed\n */\nbool PipelineBranch::isReady()\n{\n\treturn true;\n}\n\n/**\n * Static entry point for the thread that handles sending data on the\n * branch\n *\n * @param instance\tThe instance of the PipelineBranch\n */\nvoid PipelineBranch::branchHandler(void *instance)\n{\n\tPipelineBranch *branch = (PipelineBranch *)instance;\n\tbranch->handler();\n}\n\n/**\n * The handler for readings in an instance of a branch.\n * Loop waiting for data or a shutdown signal and pass the\n * queued data to the first filter in the pipeline branch\n */\nvoid PipelineBranch::handler()\n{\n\tLogger::getLogger()->info(\"Starting thread to process branch pipeline\");\n\twhile (!m_shutdownCalled)\n\t{\n\t\tunique_lock<mutex> lck(m_mutex);\n\t\twhile (m_queue.empty())\n\t\t{\n\t\t\tm_cv.wait(lck);\n\t\t\tif (m_shutdownCalled)\n\t\t\t{\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t\tReadingSet *readings = m_queue.front();\n\t\tm_queue.pop();\n\t\tlck.unlock();\n\t\tm_branch[0]->ingest(readings);\n\t\tm_pipeline->completeBranch();\n\t}\n}\n"
  },
  {
    "path": "C/common/pipeline_debugger.cpp",
    "content": "/*\n * Fledge pipeline debugger class\n *\n * Copyright (c) 2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <pipeline_debugger.h>\n\nusing namespace std;\n\n/**\n * Constructor for the pipeline element debugger\n */\nPipelineDebugger::PipelineDebugger() : m_buffer(NULL)\n{\n}\n\n/**\n * Destructor for the pipeline element debugger\n */\nPipelineDebugger::~PipelineDebugger()\n{\n\tif (m_buffer)\n\t\tdelete m_buffer;\n}\n\n/**\n * Process a reading set as it flows through the pipeline.\n * The main purpose here is to buffer the readings in the circular\n * buffer in order to allow later examination of the data.\n *\n * @param readings\t\tThe reading set flowing into the pipeline element\n * @return DebuggerActions\tAction signal to the pipeline\n */\nPipelineDebugger::DebuggerActions PipelineDebugger::process(ReadingSet *readings)\n{\n\tlock_guard<mutex> guard(m_bufferMutex);\n\tif (!m_buffer)\n\t\treturn NoAction;\n\tm_buffer->insert(readings->getAllReadings());\n\treturn NoAction;\n}\n\n/**\n * Set the size of the circular buffer used to buffer\n * the data flowing in the pipeline\n *\n * @param size\t\tThe number of readings to buffer\n */\nvoid PipelineDebugger::setBuffer(unsigned int size)\n{\n\tlock_guard<mutex> guard(m_bufferMutex);\n\tif (m_buffer)\n\t{\n\t\tdelete m_buffer;\n\t}\n\tm_buffer = new ReadingCircularBuffer(size);\n}\n\n/**\n * Remove the circular buffer of readings and stop the\n * process of storing future readings\n */\nvoid PipelineDebugger::clearBuffer()\n{\n\tlock_guard<mutex> guard(m_bufferMutex);\n\tif (m_buffer)\n\t{\n\t\tdelete m_buffer;\n\t\tm_buffer = NULL;\n\t}\n}\n\n/**\n * Fetch the current contents of the circular buffer. 
A vector\n * of shared pointers is returned to alleviate the need to\n * copy the readings.\n *\n * @return vector<shared_ptr<Reading>>\tThe readings that are returned\n */\nstd::vector<std::shared_ptr<Reading>> PipelineDebugger::fetchBuffer()\n{\n\tvector<std::shared_ptr<Reading>> vec;\n\tlock_guard<mutex> guard(m_bufferMutex);\n\tif (m_buffer)\n\t{\n\t\tint extracted = m_buffer->extract(vec);\n\t\tLogger::getLogger()->debug(\"Debugger returned %d readings\", extracted);\n\t}\n\treturn vec;\n}\n"
  },
  {
    "path": "C/common/pipeline_element.cpp",
    "content": "/*\n * Fledge pipeline element classes\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <pipeline_element.h>\n#include <filter_pipeline.h>\n#include <config_handler.h>\n#include <service_handler.h>\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n\nusing namespace std;\n\n/**\n * Attach a debugger class to the pipeline\n *\n * @return bool\t\tTrue if a debugger is attached to the element\n */\nbool PipelineElement::attachDebugger()\n{\n\tif (!m_debugger)\n\t\tm_debugger = new PipelineDebugger();\n\treturn m_debugger != NULL;\n}\n\n/**\n * Detach a pipeline debugger from the pipeline element\n */\nvoid PipelineElement::detachDebugger()\n{\n\tif (m_debugger)\n\t\tdelete m_debugger;\n\tm_debugger = NULL;\n}\n\n/**\n * Setup the size of the debug buffer\n *\n * @param size\t\tNumber of readings to buffer\n */\nvoid PipelineElement::setDebuggerBuffer(unsigned int size)\n{\n\tif (m_debugger)\n\t{\n\t\tif (size)\n\t\t\tm_debugger->setBuffer(size);\n\t\telse\n\t\t\tm_debugger->clearBuffer();\n\t}\n}\n\n/**\n * Fetch the content of the debugger buffer\n *\n * @return vector<shared_ptr<Reading>>\tThe current contents of the debugger buffer\n */\nvector<shared_ptr<Reading>> PipelineElement::getDebuggerBuffer()\n{\n\tif (m_debugger)\n\t{\n\t\treturn m_debugger->fetchBuffer();\n\t}\n\tvector<shared_ptr<Reading>> empty;\n\n\treturn empty;\n}\n"
  },
  {
    "path": "C/common/pipeline_filter.cpp",
    "content": "/*\n * Fledge pipeline filter class\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <pipeline_element.h>\n#include <filter_pipeline.h>\n#include <config_handler.h>\n#include <service_handler.h>\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n\nusing namespace std;\n\n\n/**\n * Construct the PipelineFilter class. This is the\n * specialisation of the PipelineElement that represents\n * a running filter in the pipeline.\n */\nPipelineFilter::PipelineFilter(const string& name, const ConfigCategory& filterDetails) :\n\tPipelineElement(), m_name(name), m_plugin(NULL)\n{\n\tif (!filterDetails.itemExists(\"plugin\"))\n\t{\n\t\tstring errMsg(\"loadFilters: 'plugin' item not found \");\n\t\terrMsg += \"in \" + m_name + \" category\";\n\t\tLogger::getLogger()->fatal(\"%s\", errMsg.c_str());\n\t\tthrow runtime_error(errMsg);\n\t}\n\tm_pluginName = filterDetails.getValue(\"plugin\");\n\t// Load filter plugin only: we don't call any plugin method right now\n\tm_handle = loadFilterPlugin(m_pluginName);\n\tif (!m_handle)\n\t{\n\t\tstring errMsg(\"Cannot load filter plugin '\" + m_pluginName + \"'\");\n\t\tLogger::getLogger()->fatal(\"%s\", errMsg.c_str());\n\t\tthrow runtime_error(errMsg);\n\t}\n}\n\n/**\n * Destructor for the pipeline filter element\n */\nPipelineFilter::~PipelineFilter()\n{\n\tdelete m_plugin;\n}\n\n/**\n * Setup the configuration for a filter in a pipeline\n *\n * @param\tmgtClient\tThe management client\n * @param\tchildren\tA vector to fill with child configuration categories\n */\nbool PipelineFilter::setupConfiguration(ManagementClient *mgtClient, vector<string>& children)\n{\n\tPluginManager *manager = PluginManager::getInstance();\n\tstring filterConfig = manager->getInfo(m_handle)->config;\n\n\tm_categoryName = m_serviceName + \"_\" + m_name;\n\t// Create/Update default filter category items\n\tDefaultConfigCategory 
filterDefConfig(m_categoryName, filterConfig);\n\tstring filterDescription = \"Configuration of '\" + m_name;\n\tfilterDescription += \"' filter for plugin '\" + m_pluginName + \"'\";\n\tfilterDefConfig.setDescription(filterDescription);\n\n\tif (!mgtClient->addCategory(filterDefConfig, true))\n\t{\n\t\tstring errMsg(\"Cannot create/update '\" + \\\n\t\t\t      m_categoryName + \"' filter category\");\n\t\tLogger::getLogger()->fatal(\"%s\", errMsg.c_str());\n\t\treturn false;\n\t}\n\tchildren.push_back(m_categoryName);\n\n\t// Instantiate the FilterPlugin class\n\t// in order to call plugin entry points\n\tm_plugin = new FilterPlugin(m_name, m_handle);\n\tif (!m_plugin)\n\t\treturn false;\n\treturn true;\n}\n\n\n/**\n * Load the specified filter plugin\n *\n * @param filterName\tThe filter plugin to load\n * @return\t\tPlugin handle on success, NULL otherwise \n *\n */\nPLUGIN_HANDLE PipelineFilter::loadFilterPlugin(const string& filterName)\n{\n\tif (filterName.empty())\n\t{\n\t\tLogger::getLogger()->error(\"Unable to fetch filter plugin '%s' from configuration.\",\n\t\t\tfilterName.c_str());\n\t\t// Failure\n\t\treturn NULL;\n\t}\n\tLogger::getLogger()->info(\"Loading filter plugin '%s'.\", filterName.c_str());\n\n\tPluginManager *manager = PluginManager::getInstance();\n\tPLUGIN_HANDLE handle;\n\tif ((handle = manager->loadPlugin(filterName, PLUGIN_TYPE_FILTER)) != NULL)\n\t{\n\t\t// Success\n\t\tLogger::getLogger()->info(\"Loaded filter plugin '%s'.\", filterName.c_str());\n\t}\n\treturn handle;\n}\n\n/**\n * Setup the configuration categories for the filter \n * element in a pipeline\n *\n * @param mgmt\tThe Management client\n * @param ingest\tThe service handler for our service\n * @param filterCategories\tA map of the category name to pipeline element\n */\nbool PipelineFilter::setup(ManagementClient *mgmt, void *ingest, map<string, PipelineElement *>& filterCategories)\n{\nvector<string> children;\n\n\tLogger::getLogger()->info(\"Load plugin 
categoryName %s for %s\", m_categoryName.c_str(), m_name.c_str());\n\t// Fetch up to date filter configuration\n\ttry {\n\t\tm_updatedCfg = mgmt->getCategory(m_categoryName);\n\n\t\t// Pass Management client IP:Port to filter so that it may connect to bucket service\n\t\tm_updatedCfg.addItem(\"mgmt_client_url_base\", \"Management client host and port\",\n\t\t\t\t\t\t\t\t\t\"string\", \"127.0.0.1:0\",\n\t\t\t\t\t\t\t\t\tmgmt->getUrlbase());\n\n\t\t// Add filter category name under service/process config name\n\t\tchildren.push_back(m_categoryName);\n\t\tmgmt->addChildCategories(m_serviceName, children);\n\t} catch (...) {\n\t\tLogger::getLogger()->error(\"Failed to fetch configuration %s for filter %s\", m_categoryName.c_str(), m_name.c_str());\n\t\treturn false;\n\t}\n\t\t\n\tConfigHandler *configHandler = ConfigHandler::getInstance(mgmt);\n\tconfigHandler->registerCategory((ServiceHandler *)ingest, m_categoryName);\n\tfilterCategories[m_categoryName] = this;\n\n\treturn true;\n}\n\n/**\n * Initialise the pipeline filter ready for ingest of data\n *\n * @param outHandle\tThe pipeline element we are sending the data to\n * @param output\t\n */\nbool PipelineFilter::init(OUTPUT_HANDLE* outHandle, OUTPUT_STREAM output)\n{\n\tm_plugin->init(m_updatedCfg, outHandle, output);\n\tif (m_plugin->persistData())\n\t{\n\t\t// Plugin support SP_PERSIST_DATA\n\t\t// Instantiate the PluginData class\n\t\tm_plugin->m_plugin_data = new PluginData(m_storage);\n\t\t// Load plugin data from storage layer\n\t\tstring pluginStoredData = m_plugin->m_plugin_data->loadStoredData(m_serviceName + m_name + m_pluginName);\n\t\t//call 'plugin_start' with plugin data: startData()\n\t\tm_plugin->startData(pluginStoredData);\n\t}\n\treturn true;\n}\n\n/**\n * Shutdown a pipeline element that is a filter.\n *\n * Remove registration for categories of interest, persist and plugin\n * data that needs persisting and call shutdown on the plugin itself.\n *\n * @param\tserviceHandler The service handler 
of the service that is hosting the pipeline\n * @param\tconfigHandler\tThe config handler for the service from which we unregister\n */\nvoid PipelineFilter::shutdown(ServiceHandler *serviceHandler, ConfigHandler *configHandler)\n{\n\tstring filterCategoryName = m_serviceName + \"_\" + m_name;\n\tconfigHandler->unregisterCategory(serviceHandler, filterCategoryName);\n\n\t// If plugin has SP_PERSIST_DATA option:\n\tif (m_plugin->m_plugin_data)\n\t{\n\t\t// 1- call shutdownSaveData and get up-to-date plugin data.\n\t\tstring saveData = m_plugin->shutdownSaveData();\n\t\t// 2- store returned data: key is service/task categoryName + filter category name + pluginName\n\t\tstring key(m_serviceName + m_plugin->getName() + m_pluginName);\n\t\tif (!m_plugin->m_plugin_data->persistPluginData(key, saveData, m_serviceName))\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Filter %s has failed to save data [%s] for key %s and name %s\",\n\t\t\t\t\t\t   m_plugin->getName().c_str(),\n\t\t\t\t\t\t   saveData.c_str(),\n\t\t\t\t\t\t   key.c_str(),\n\t\t\t\t\t\t   m_serviceName.c_str());\n\t\t}\n\t}\n\telse\n\t{\n\t\t// Call filter plugin shutdown\n\t\tm_plugin->shutdown();\n\t}\n}\n\n/**\n * Reconfigure the filter by passing the new configuration to the plugin\n *\n * @param newConfig\tThe new configuration\n */\nvoid PipelineFilter::reconfigure(const string& newConfig)\n{\n\tm_plugin->reconfigure(newConfig);\n}\n"
  },
  {
    "path": "C/common/pipeline_writer.cpp",
    "content": "/*\n * Fledge pipeline writer class\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <pipeline_element.h>\n#include <filter_pipeline.h>\n#include <config_handler.h>\n#include <service_handler.h>\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n\nusing namespace std;\n\n/**\n * Constructor for the pipeline writer, the element that sits\n * at the end of every pipeline and branch\n */\nPipelineWriter::PipelineWriter()\n{\n}\n\n/**\n * Ingest into a pipeline writer\n */\nvoid PipelineWriter::ingest(READINGSET *readingSet)\n{\n\tif (m_debugger)\n\t{\n\t\tPipelineDebugger::DebuggerActions action = m_debugger->process(readingSet);\n\n\t\tswitch (action)\n\t\t{\n\t\tcase PipelineDebugger::Block:\n\t\t\tdelete readingSet;\n\t\t\treturn;\n\t\tcase PipelineDebugger::NoAction:\n\t\t\tbreak;\n\t\t}\n\n\t}\n\t(*m_useData)(m_ingest, readingSet);\n}\n\n/**\n * Setup the pipeline writer\n */\nbool PipelineWriter::setup(ManagementClient *mgmt, void *ingest, std::map<std::string, PipelineElement*>& categories)\n{\n\treturn true;\n}\n\n/**\n * Initialise the pipeline writer\n */\nbool PipelineWriter::init(OUTPUT_HANDLE* outHandle, OUTPUT_STREAM output)\n{\n\tm_useData = output;\n\tm_ingest = outHandle;\n\treturn true;\n}\n\n/**\n * Shutdown the pipeline writer\n */\nvoid PipelineWriter::shutdown(ServiceHandler *serviceHandler, ConfigHandler *configHandler)\n{\n}\n\n/**\n * Return if the pipeline writer is ready to receive data\n */\nbool PipelineWriter::isReady()\n{\n\treturn true;\n}\n"
  },
  {
    "path": "C/common/plugin_data.cpp",
    "content": "/*\n * Fledge persist plugin data class.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include <resultset.h>\n#include <where.h>\n#include <plugin_data.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\n/**\n * PluginData constructor\n * @param client\tStorageClient pointer\n */\nPluginData::PluginData(StorageClient* client) : m_storage(client), m_dataLoaded(false)\n{\n}\n\n/**\n * Load stored data for a given key.\n *\n * @param    key\tGiven key for data load\n * @return   JSON string with found data or empty JSON data\n */\nstring PluginData::loadStoredData(const string& key)\n{\n\t// Set empty JSON document\n\tstring foundData(\"{}\");\n\tconst Condition conditionId(Equals);\n\tWhere* wKey = new Where(\"key\",\n\t\t\t\tconditionId,\n\t\t\t\tkey);\n\n\tResultSet* pluginData = m_storage->queryTable(\"plugin_data\", wKey);\n\tif (pluginData != NULL && pluginData->rowCount())\n\t{\n\t\tm_dataLoaded = true;\n\t\t// Get the first row only\n\t\tResultSet::RowIterator it = pluginData->firstRow();\n\t\t// Access the element\n\t\tResultSet::Row* row = *it;\n\t\tif (row)\n\t\t{\n\t\t\t// Get column value\n\t\t\tResultSet::ColumnValue* theVal = row->getColumn(\"data\");\n\t\t\t// get column type\n\t\t\tColumnType type  = row->getType(\"data\");\n\t\t\tif (type == JSON_COLUMN)\n\t\t\t{\n\t\t\t\t// Convert JSON object to string\n\t\t\t\tconst rapidjson::Value* val = theVal->getJSON();\n\t\t\t\trapidjson::StringBuffer strbuf;\n\t\t\t\trapidjson::Writer<rapidjson::StringBuffer> writer(strbuf);\n\t\t\t\tval->Accept(writer);\n\t\t\t\tfoundData = strbuf.GetString();\n\t\t\t}\n\t\t\telse if (type == STRING_COLUMN)\n\t\t\t{\n\t\t\t\t// just a string\n\t\t\t\tfoundData = theVal->getString();\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// Other column types are not supported\n\t\t\t}\n\t\t}\n\t}\n\n\t// Free 
resultset\n\tdelete pluginData;\n\n\t// Return found data\n\treturn foundData;\n}\n\n/**\n * Store plugin data for a given key.\n *\n * @param      key             The given key\n * @param      data            The JSON data to save (as string)\n * @param      service_name    The name of service\n * @return     true on success, false otherwise.\n */\nbool PluginData::persistPluginData(const string& key, const string& data, const string& service_name)\n{\n\tDocument JSONData;\n\tJSONData.Parse(data.c_str());\n\tif (JSONData.HasParseError())\n\t{\n\t\tLogger::getLogger()->warn(\"Failed to persist data for key: %s and service name: %s, parse error in JSON data\", key.c_str(), service_name.c_str());\n\t\treturn false;\n\t}\t\n\n\tbool ret = true;\n\n\t// Prepare WHERE key =\n\tconst Condition conditionUpdate(Equals);\n\tWhere wKey(\"key\", conditionUpdate, key);\n\tInsertValues updateData;\n\tupdateData.push_back(InsertValue(\"data\", JSONData));\n\tupdateData.push_back(InsertValue(\"service_name\", service_name));\n\n\tif (m_dataLoaded)\n\t{\n\t\t// Try update first\n\t\tif (m_storage->updateTable(\"plugin_data\",\n\t\t\t\t\t   updateData,\n\t\t\t\t\t   wKey) == -1)\n\t\t{\n\t\t\t// Update failure: try insert\n\t\t\tInsertValues insertData;\n\t\t\tinsertData.push_back(InsertValue(\"key\", key));\n\t\t\tinsertData.push_back(InsertValue(\"data\", JSONData));\n\t\t\tinsertData.push_back(InsertValue(\"service_name\", service_name));\n\n\t\t\tif (m_storage->insertTable(\"plugin_data\",\n\t\t\t\t\t\t   insertData) == -1)\n\t\t\t{\n\t\t\t\tret = false;\n\t\t\t}\n\t\t}\n\t}\n\telse\n\t{       // We didn't load the data so do an insert first\n\t\tInsertValues insertData;\n\t\tinsertData.push_back(InsertValue(\"key\", key));\n\t\tinsertData.push_back(InsertValue(\"data\", JSONData));\n\t\tinsertData.push_back(InsertValue(\"service_name\", service_name));\n\n\t\tif (m_storage->insertTable(\"plugin_data\",\n\t\t\t\t\t   insertData) == -1)\n\t\t{\n\t\t\t// The insert failed, so try 
an update before giving up\n\t\t\tif (m_storage->updateTable(\"plugin_data\", updateData, wKey) == -1)\n\t\t\t{\n\t\t\t\tret = false;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_dataLoaded = true;\t// Data is now in the database\n\t\t}\n\t}\n\n\tif (!ret)\n\t{\n\t\tLogger::getLogger()->warn(\"Failed to persist data for key: %s and service name: %s, unable to insert into storage\", key.c_str(), service_name.c_str());\n\t}\n\treturn ret;\n}\n"
  },
  {
    "path": "C/common/process.cpp",
    "content": "/*\n * Fledge process class\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n/**\n * Fledge process base class\n */\n#include <iostream>\n#include <logger.h>\n#include <process.h>\n#include <service_record.h>\n#include <signal.h>\n#include <dlfcn.h>\n#include <execinfo.h>\n#include <cxxabi.h>\n\n\n#define LOG_SERVICE_NAME  \"Fledge Process\"\n\nusing namespace std;\n\n/**\n * Signal handler to log stack traces on fatal signals\n */\nstatic void handler(int sig)\n{\nLogger\t*logger = Logger::getLogger();\nvoid\t*array[20];\nchar\tbuf[1024];\nint\tsize;\n\n\t// get void*'s for all entries on the stack\n\tsize = backtrace(array, 20);\n\n\t// print out all the frames to stderr\n\tlogger->fatal(\"Signal %d (%s) trapped:\\n\", sig, strsignal(sig));\n\tchar **messages = backtrace_symbols(array, size);\n\tfor (int i = 0; i < size; i++)\n\t{\n\t\tDl_info info;\n\t\tif (dladdr(array[i], &info) && info.dli_sname)\n\t\t{\n\t\t    char *demangled = NULL;\n\t\t    int status = -1;\n\t\t    if (info.dli_sname[0] == '_')\n\t\t        demangled = abi::__cxa_demangle(info.dli_sname, NULL, 0, &status);\n\t\t    snprintf(buf, sizeof(buf), \"%-3d %*p %s + %zd---------\",\n\t\t             i, int(2 + sizeof(void*) * 2), array[i],\n\t\t             status == 0 ? demangled :\n\t\t             info.dli_sname == 0 ? 
messages[i] : info.dli_sname,\n\t\t             (char *)array[i] - (char *)info.dli_saddr);\n\t\t    free(demangled);\n\t\t} \n\t\telse\n\t\t{\n\t\t    snprintf(buf, sizeof(buf), \"%-3d %*p %s---------\",\n\t\t             i, int(2 + sizeof(void*) * 2), array[i], messages[i]);\n\t\t}\n\t\tlogger->fatal(\"(%d) %s\", i, buf);\n\t}\n\tfree(messages);\n\texit(1);\n}\n\n// Destructor\nFledgeProcess::~FledgeProcess()\n{\n\tdelete m_client;\n\tdelete m_storage;\n\tdelete m_logger;\n}\n\n// Constructor\nFledgeProcess::FledgeProcess(int argc, char** argv) :\n\t\t\t\tm_stime(time(NULL)),\n\t\t\t\tm_argc(argc),\n\t\t\t\tm_arg_vals((const char**) argv),\n\t\t\t\tm_dryRun(false)\n{\n\tsignal(SIGSEGV, handler);\n\tsignal(SIGILL, handler);\n\tsignal(SIGBUS, handler);\n\tsignal(SIGFPE, handler);\n\tsignal(SIGABRT, handler);\n\n\tstring myName = LOG_SERVICE_NAME;\n\n\ttry\n\t{\n\t\tm_core_mngt_host = getArgValue(\"--address=\");\n\t\tm_core_mngt_port = atoi(getArgValue(\"--port=\").c_str());\n\t\tm_name = getArgValue(\"--name=\");\n\t}\n\tcatch (exception& e)\n\t{\n\t\tthrow runtime_error(string(\"Error while parsing required options: \") + e.what());\n\t}\n\n\t// Look for the --dryrun flag\n\tfor (int i = 1; i < argc; i++)\n\t{\n\t\tif (!strncmp(argv[i], \"--dryrun\", 8))\n\t\t{\n\t\t\tm_dryRun = true;\n\t\t}\n\t}\n\n\tmyName = m_name;\n\tm_logger = new Logger(myName);\n\n\tif (m_core_mngt_host.empty())\n\t{\n\t\tthrow runtime_error(\"Error: --address is not specified\");\n\t}\n\telse if (m_core_mngt_port == 0)\n\t{\n\t\tthrow runtime_error(\"Error: --port is not specified\");\n\t}\n\telse if (m_name.empty())\n\t{\n\t\tthrow runtime_error(\"Error: --name is not specified\");\n\t}\n\n\tm_logger->setMinLevel(\"warning\");\t// Default to warnings, errors and fatal for log messages\n\ttry\n\t{\n\t\tstring minLogLevel = getArgValue(\"--loglevel=\");\n\t\tif (!minLogLevel.empty())\n\t\t{\n\t\t\tm_logger->setMinLevel(minLogLevel);\n\t\t}\n\t}\n\tcatch (exception& e)\n\t{\n\t\tthrow 
runtime_error(string(\"Error while parsing optional options: \") + e.what());\n\t}\n\n\t// Connection to Fledge core microservice\n\tm_client = new ManagementClient(m_core_mngt_host, m_core_mngt_port);\n\n\t// Create Audit Logger\n\tm_auditLogger = new AuditLogger(m_client);\n\n\t// Storage layer handle\n\tServiceRecord storageInfo(\"Fledge Storage\");\n\n\tif (!m_client->getService(storageInfo))\n\t{\n\t\tstring errMsg(\"Unable to find storage service at \");\n\t\terrMsg += m_core_mngt_host;\n\t\terrMsg += ':';\n\t\terrMsg += to_string(m_core_mngt_port);\n\n\t\tthrow runtime_error(errMsg);\n\t}\n\n\tif (!(m_storage = new StorageClient(storageInfo.getAddress(),\n\t\t\t\t\t    storageInfo.getPort())))\n\t{\n\t\tstring errMsg(\"Unable to connect to storage service at \");\n\t\terrMsg.append(storageInfo.getAddress());\n\t\terrMsg += ':';\n\t\terrMsg += to_string(storageInfo.getPort());\n\n\t\tthrow runtime_error(errMsg);\n\t}\n}\n\n/**\n * Get command line argument value like \"--xyz=ABC\"\n * Argument name to pass is \"--xyz=\"\n *\n * @param name    The argument name (--xyz=)\n * @return        The argument value if found or an empty string\n */\nstring FledgeProcess::getArgValue(const string& name) const\n{\n\tfor (int i=1; i < m_argc; i++)\n\t{\n\t\tif (strncmp(m_arg_vals[i], name.c_str(), name.length()) == 0)\n\t\t{\n\t\t\t// Return the option value (after \"--xyz=\")\n\t\t\treturn string(m_arg_vals[i] + name.length());\n\t\t}\n\t}\n\t// Return empty string\n\treturn string(\"\");\n}\n\n/**\n * Return storage client\n */\nStorageClient* FledgeProcess::getStorageClient() const\n{\n\treturn m_storage;\n}\n\n/**\n * Return management client\n */\nManagementClient* FledgeProcess::getManagementClient() const\n{\n\treturn m_client;\n}\n\n/**\n * Return Logger\n */\nLogger *FledgeProcess::getLogger() const\n{\n\treturn m_logger;\n}\n"
  },
  {
    "path": "C/common/purge_result.cpp",
    "content": "/*\n * Fledge storage service client\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <purge_result.h>\n#include <string>\n#include <stdexcept>\n#include <rapidjson/document.h>\n#include <sstream>\n\nusing namespace std;\nusing namespace rapidjson;\n\n/**\n * Construct a purge result from a JSON document returned from\n * the Fledge storage service.\n */\nPurgeResult::PurgeResult(const std::string& json)\n{\n\tDocument doc;\n\tdoc.Parse(json.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tthrow runtime_error(\"Unable to parse purge result JSON\");\n\t}\n\tif (doc.HasMember(\"removed\"))\n\t{\n\t\tm_removed = doc[\"removed\"].GetUint();\n\t}\n\telse\n\t{\n\t\tm_removed = 0;\n\t}\n\tif (doc.HasMember(\"unsentPurged\"))\n\t{\n\t\tm_unsentPurged = doc[\"unsentPurged\"].GetUint();\n\t}\n\telse\n\t{\n\t\tm_unsentPurged = 0;\n\t}\n\tif (doc.HasMember(\"unsentRetained\"))\n\t{\n\t\tm_unsentRetained = doc[\"unsentRetained\"].GetUint();\n\t}\n\telse\n\t{\n\t\tm_unsentRetained = 0;\n\t}\n\tif (doc.HasMember(\"readings\"))\n\t{\n\t\tm_remaining = doc[\"readings\"].GetUint();\n\t}\n\telse\n\t{\n\t\tm_remaining = 0;\n\t}\n}\n"
  },
  {
    "path": "C/common/pyexception.cpp",
    "content": "/*\n * Fledge Python runtime\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <logger.h>\n#include <pyruntime.h>\n#include <Python.h>\n#include <stdexcept>\n#include <stdarg.h>\n\nusing namespace std;\n\n/**\n * Log an exception from a Python routine including the stack trace formatted into the\n * error log.\n *\n * @param name\tThe name to attach to the exception trace.\n */\nvoid PythonRuntime::logException(const string& name)\n{\n\tPyObject* type;\n\tPyObject* value;\n\tPyObject* traceback;\n\n\n\tPyErr_Fetch(&type, &value, &traceback);\n\tPyErr_NormalizeException(&type, &value, &traceback);\n\n\tPyObject* str_exc_value = PyObject_Repr(value);\n\tPyObject* pyExcValueStr = PyUnicode_AsEncodedString(str_exc_value, \"utf-8\", \"Error ~\");\n\tconst char* pErrorMessage = value ?\n\t\t\t\t    PyBytes_AsString(pyExcValueStr) :\n\t\t\t\t    \"no error description.\";\n\tLogger::getLogger()->fatal(\"Python Runtime: %s: Error '%s'\", name.c_str(), pErrorMessage);\n\t\n\t// Check for numpy/pandas import errors\n\tconst char *err1 = \"implement_array_function method already has a docstring\";\n\tconst char *err2 = \"cannot import name 'check_array_indexer' from 'pandas.core.indexers'\";\n\n\t\n\tstd::string fcn = \"\";\n\tfcn += \"def get_pretty_traceback(exc_type, exc_value, exc_tb):\\n\";\n\tfcn += \"    import sys, traceback\\n\";\n\tfcn += \"    lines = []\\n\"; \n\tfcn += \"    lines = traceback.format_exception(exc_type, exc_value, exc_tb)\\n\";\n\tfcn += \"    return lines\\n\";\n\n\tPyRun_SimpleString(fcn.c_str());\n\tPyObject* mod = PyImport_ImportModule(\"__main__\");\n\tif (mod != NULL)\n\t{\n\t\tPyObject* method = PyObject_GetAttrString(mod, \"get_pretty_traceback\");\n\t\tif (method != NULL)\n\t\t{\n\t\t\tPyObject* outList = PyObject_CallObject(method, Py_BuildValue(\"OOO\", type, value, traceback));\n\t\t\tif (outList != NULL)\n\t\t\t{\n\t\t\t\tif 
(PyList_Check(outList))\n\t\t\t\t{\n\t\t\t\t\tPy_ssize_t listSize = PyList_Size(outList);\n\t\t\t\t\tfor (Py_ssize_t i = 0; i < listSize; i++)\n\t\t\t\t\t{\n\t\t\t\t\t\tPyObject *tmp = PyUnicode_AsASCIIString(PyList_GetItem(outList, i));\n\t\t\t\t\t\tLogger::getLogger()->fatal(\"%s\",\n\t\t\t\t\t\t\t\tPyBytes_AsString(tmp));\n\t\t\t\t\t\tPy_CLEAR(tmp);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t\tLogger::getLogger()->error(\"Expected a list\");\n\t\t\t\tPy_CLEAR(outList);\n\t\t\t}\n\t\t}\n\t\tPy_CLEAR(method);\n\t}\n\n\t// Reset error\n\tPyErr_Clear();\n\n\t// Remove references\n\tPy_CLEAR(type);\n\tPy_CLEAR(value);\n\tPy_CLEAR(traceback);\n\tPy_CLEAR(str_exc_value);\n\tPy_CLEAR(pyExcValueStr);\n\tPy_CLEAR(mod);\n}\n"
  },
  {
    "path": "C/common/pyruntime.cpp",
    "content": "/*\n * Fledge Python runtime\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <logger.h>\n#include <pyruntime.h>\n#include <Python.h>\n#include <stdexcept>\n#include <stdarg.h>\n\n\nusing namespace std;\n\n\nPythonRuntime *PythonRuntime::m_instance = 0;\n\n/**\n * Get PythonRuntime singleton instance for the process\n *\n * @return\tSingleton PythonRuntime instance\n */\nPythonRuntime *PythonRuntime::getPythonRuntime()\n{\n\tif (!m_instance)\n\t{\n\t\tm_instance = new PythonRuntime;\n\t}\n\treturn m_instance;\n}\n\n/**\n * Constructor\n */\nPythonRuntime::PythonRuntime()\n{\n\tPy_Initialize();\n\tPyEval_InitThreads();\n\tPyThreadState *save = PyEval_SaveThread();\t// Release the GIL\n}\n\n/**\n * Destructor\n */\nPythonRuntime::~PythonRuntime()\n{\n\tPyGILState_STATE gstate = PyGILState_Ensure();\n\tPy_Finalize();\n}\n\n/**\n * Don't allow a copy constructor to be used\n */\nPythonRuntime::PythonRuntime(const PythonRuntime& rhs)\n{\n\tthrow runtime_error(\"Illegal attempt to copy a Python runtime\");\n}\n\n/**\n * Don't allow an assignment to make a copy\n */\nPythonRuntime& PythonRuntime::operator=(const PythonRuntime& rhs)\n{\n\tthrow runtime_error(\"Illegal attempt to copy a Python runtime via assignment\");\n}\n\n/**\n * Execute simple Python script passed as a string\n *\n * @param python\tThe Python code to run\n */\nvoid PythonRuntime::execute(const string& python)\n{\n\tPyGILState_STATE state = PyGILState_Ensure();\n\ttry {\n\t\tPyRun_SimpleString(python.c_str());\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->error(\"Exception %s executing Python '%s'\", e.what(),\n\t\t\t\tpython.c_str());\n\t}\n\tPyGILState_Release(state);\n}\n\n/**\n * Call a Python function with a set of arguments\n *\n * The characters space, tab, colon and comma are ignored in format\n * strings (but not within format units such as s#). 
This can be used to\n * make long format strings a tad more readable.\n * \n * s (str or None) [const char *]\n * Convert a null-terminated C string to a Python str object using\n * 'utf-8' encoding. If the C string pointer is NULL, None is used.\n * \n * s# (str or None) [const char *, Py_ssize_t]\n * Convert a C string and its length to a Python str object using 'utf-8'\n * encoding. If the C string pointer is NULL, the length is ignored and\n * None is returned.\n * \n * y (bytes) [const char *]\n * This converts a C string to a Python bytes object. If the C string\n * pointer is NULL, None is returned.\n * \n * y# (bytes) [const char *, Py_ssize_t]\n * This converts a C string and its lengths to a Python object. If the\n * C string pointer is NULL, None is returned.\n * \n * z (str or None) [const char *]\n * Same as s.\n * \n * z# (str or None) [const char *, Py_ssize_t]\n * Same as s#.\n * \n * u (str) [const wchar_t *]\n * Convert a null-terminated wchar_t buffer of Unicode (UTF-16 or UCS-4)\n * data to a Python Unicode object. If the Unicode buffer pointer is NULL,\n * None is returned.\n * \n * u# (str) [const wchar_t *, Py_ssize_t]\n * Convert a Unicode (UTF-16 or UCS-4) data buffer and its length to\n * a Python Unicode object. 
If the Unicode buffer pointer is NULL, the\n * length is ignored and None is returned.\n * \n * U (str or None) [const char *]\n * Same as s.\n * \n * U# (str or None) [const char *, Py_ssize_t]\n * Same as s#.\n * \n * i (int) [int]\n * Convert a plain C int to a Python integer object.\n * \n * b (int) [char]\n * Convert a plain C char to a Python integer object.\n * \n * h (int) [short int]\n * Convert a plain C short int to a Python integer object.\n * \n * l (int) [long int]\n * Convert a C long int to a Python integer object.\n * \n * B (int) [unsigned char]\n * Convert a C unsigned char to a Python integer object.\n * \n * H (int) [unsigned short int]\n * Convert a C unsigned short int to a Python integer object.\n * \n * I (int) [unsigned int]\n * Convert a C unsigned int to a Python integer object.\n * \n * k (int) [unsigned long]\n * Convert a C unsigned long to a Python integer object.\n * \n * L (int) [long long]\n * Convert a C long long to a Python integer object.\n * \n * K (int) [unsigned long long]\n * Convert a C unsigned long long to a Python integer object.\n * \n * n (int) [Py_ssize_t]\n * Convert a C Py_ssize_t to a Python integer.\n * \n * c (bytes of length 1) [char]\n * Convert a C int representing a byte to a Python bytes object of length 1.\n * \n * C (str of length 1) [int]\n * Convert a C int representing a character to Python str object of length 1.\n * \n * d (float) [double]\n * Convert a C double to a Python floating point number.\n * \n * f (float) [float]\n * Convert a C float to a Python floating point number.\n * \n * D (complex) [Py_complex *]\n * Convert a C Py_complex structure to a Python complex number.\n * \n * O (object) [PyObject *]\n * Pass a Python object untouched (except for its reference count, which\n * is incremented by one). If the object passed in is a NULL pointer, it\n * is assumed that this was caused because the call producing the argument\n * found an error and set an exception. 
Therefore, Py_BuildValue() will\n * return NULL but won’t raise an exception. If no exception has been\n * raised yet, SystemError is set.\n * \n * S (object) [PyObject *]\n * Same as O.\n * \n * N (object) [PyObject *]\n * Same as O, except it doesn't increment the reference count on the object.\n * Useful when the object is created by a call to an object constructor in\n * the argument list.\n * \n * O& (object) [converter, anything]\n * Convert anything to a Python object through a converter function. The\n * function is called with anything (which should be compatible with void*)\n * as its argument and should return a “new” Python object, or NULL\n * if an error occurred.\n * \n * (items) (tuple) [matching-items]\n * Convert a sequence of C values to a Python tuple with the same number of items.\n * \n * [items] (list) [matching-items]\n * Convert a sequence of C values to a Python list with the same number of items.\n * \n * {items} (dict) [matching-items]\n * Convert a sequence of C values to a Python dictionary. Each pair of\n * consecutive C values adds one item to the dictionary, serving as key\n * and value, respectively.\n * \n *\n * @param fcn\tThe name of the function to call\n * @param fmt\tThe buildValue style format string for the arguments\n * @return PyObject* The function result\n */\nPyObject *PythonRuntime::call(const string& fcn, const string& fmt, ...)\n{\nPyObject *rval = NULL;\nva_list ap;\nPyObject *mod, *method;\n\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tif ((mod = PyImport_ImportModule(\"__main__\")) != NULL)\n\t{\n\t\tif ((method = PyObject_GetAttrString(mod, fcn.c_str())) != NULL)\n\t\t{\n\t\t\tva_start(ap, fmt);\n\t\t\tPyObject *args = Py_VaBuildValue(fmt.c_str(), ap);\n\t\t\tva_end(ap);\n\t\t\trval = PyObject_Call(method, args, NULL);\n\t\t\tif (rval == NULL)\n\t\t\t{\n\t\t\t\tif (PyErr_Occurred())\n\t\t\t\t{\n\t\t\t\t\tlogException(fcn);\n\t\t\t\t\tPyErr_Print();\n\t\t\t\t}\n\t\t\t}\n\t\t\tPy_CLEAR(args);\n\t\t\tPy_CLEAR(method);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->fatal(\"Method '%s' not found\", fcn.c_str());\n\t\t}\n\t\t// Remove 
references\n\t\tPy_CLEAR(mod);\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->fatal(\"Failed to import module\");\n\t}\n\n\t// Reset error\n\tPyErr_Clear();\n\n\tPyGILState_Release(state);\n\n\treturn rval;\n}\n\n/**\n * Call a Python function within a specified module.\n *\n * It uses the same formatting rules as the call method above.\n *\n * @param module\tThe module in which the function was imported\n * @param fcn\tThe name of the function to call\n * @param fmt\tThe buildValue style format string for the arguments\n * @return PyObject* The function result\n */\nPyObject *PythonRuntime::call(PyObject *module, const string& fcn, const string& fmt, ...)\n{\nPyObject *rval = NULL;\nva_list ap;\nPyObject *method;\n\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tif ((method = PyObject_GetAttrString(module, fcn.c_str())) != NULL)\n\t{\n\t\tva_start(ap, fmt);\n\t\tPyObject *args = Py_VaBuildValue(fmt.c_str(), ap);\n\t\tva_end(ap);\n\t\trval = PyObject_Call(method, args, NULL);\n\t\tif (rval == NULL)\n\t\t{\n\t\t\tif (PyErr_Occurred())\n\t\t\t{\n\t\t\t\tlogException(fcn);\n\t\t\t\tPyErr_Print();\n\t\t\t}\n\t\t}\n\t\tPy_CLEAR(args);\n\t\tPy_CLEAR(method);\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->fatal(\"Method '%s' not found\", fcn.c_str());\n\t}\n\n\t// Reset error\n\tPyErr_Clear();\n\n\tPyGILState_Release(state);\n\n\treturn rval;\n}\n\n/**\n * Import a Python module\n *\n * @param name\tThe name of the module to import\n * @return PyObject* The Python module\n */\nPyObject *PythonRuntime::importModule(const string& name)\n{\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *module = PyImport_ImportModule(name.c_str());\n\tif (!module)\n\t{\n\t\tLogger::getLogger()->error(\"Failed to import Python module %s\", name.c_str());\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogException(name);\n\t\t}\n\t}\n\tPyGILState_Release(state);\n\treturn module;\n}\n\n/**\n * Shutdown an instance of a Python runtime if one\n * has been started\n */\nvoid PythonRuntime::shutdown()\n{\n\tif 
(!m_instance)\n\t{\n\t\treturn;\n\t}\n\tdelete m_instance;\n\tm_instance = NULL;\n}\n"
  },
  {
    "path": "C/common/pythonconfigcategory.cpp",
    "content": "/*\n * Fledge Python Config Category\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <pythonconfigcategory.h>\n#include <logger.h>\n#include <stdexcept>\n\n\nusing namespace std;\n\n/**\n * Construct a PythonConfigCategory from a DICT object returned by Python code.\n *\n * The PythonConfigCategory acts as a wrapper on the ConfigCategory class to convert to and\n * from configuration categories in C and Python.\n *\n * @param pyConfig\tThe Python DICT\n */\nPythonConfigCategory::PythonConfigCategory(PyObject *config)\n{\n\tif (!PyDict_Check(config))\n\t{\n\t\t\tthrow runtime_error(\"Invalid configuration category, expected Python DICT\");\n\t}\n\n\t// Fetch all items in configuration dict\t\t\t\n\tPyObject *dKey, *dValue;\n\tPy_ssize_t dPos = 0;\n\n\t// Fetch all Datapoints in 'reading' dict\n\t// dKey and dValue are borrowed references\n\twhile (PyDict_Next(config, &dPos, &dKey, &dValue))\n\t{\n\t\tstring name = PyUnicode_AsUTF8(dKey);\n\t\tstring description, type, def, value;\n\t\tif (!PyDict_Check(dValue))\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Configuration item %s is not an object\", name.c_str());\n\t\t\tthrow runtime_error(\"Malformed configuration item\");\n\t\t}\n\t\tPyObject *obj = PyDict_GetItemString(dValue, \"description\");\n\t\tif (obj)\n\t\t{\n\t\t\tdescription = PyUnicode_AsUTF8(obj);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Configuration item %s is missing a description\", name.c_str());\n\t\t\tthrow runtime_error(\"Malformed configuration item, missing description\");\n\t\t}\n\t\tobj = PyDict_GetItemString(dValue, \"type\");\n\t\tif (obj)\n\t\t{\n\t\t\ttype = PyUnicode_AsUTF8(obj);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Configuration item %s is missing a type\", name.c_str());\n\t\t\tthrow runtime_error(\"Malformed configuration item, missing type\");\n\t\t}\n\t\tobj = PyDict_GetItemString(dValue, 
\"default\");\n\t\tif (obj)\n\t\t{\n\t\t\tdef = PyUnicode_AsUTF8(obj);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Configuration item %s is missing a default value\", name.c_str());\n\t\t\tthrow runtime_error(\"Malformed configuration item, missing default value\");\n\t\t}\n\t\tif (type.compare(\"enumeration\") == 0)\n\t\t{\n\t\t\tvector<string> options;\n\t\t\tobj = PyDict_GetItemString(dValue, \"options\");\n\t\t\tif (obj && PyList_Check(obj))\n\t\t\t{\n\t\t\t\tPy_ssize_t listSize = PyList_Size(obj);\n\t\t\t\tfor (Py_ssize_t i = 0; i < listSize; i++)\n\t\t\t\t{\n\t\t\t\t\tPyObject *str = PyList_GetItem(obj, i);\n\t\t\t\t\tstring s = PyUnicode_AsUTF8(str);\n\t\t\t\t\toptions.push_back(s);\n\t\t\t\t}\n\n\t\t\t\taddItem(name, description, def, value, options);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Configuration item %s is missing an options list\", name.c_str());\n\t\t\t\tthrow runtime_error(\"Malformed configuration item, missing options\");\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\taddItem(name, description, type, def, value);\n\t\t}\n\t}\n}\n\n/**\n * Convert a ConfigCategory into a PyObject\n * structure that can be passed to embedded Python code.\n *\n * @return PyObject*\tThe Python representation of the configuration category as a DICT\n */\nPyObject *PythonConfigCategory::toPython()\n{\n\t// Create a Python DICT to hold the configuration category items\n\tPyObject *category = PyDict_New();\n\n\t// Convert all configuration items\n\tfor (auto it = m_items.begin(); it != m_items.end(); ++it)\n\t{\n\t\tPyObject *value = convertItem(*it);\n\t\t// Add Item: key and value\n\t\tif (value)\n\t\t{\n\t\t\tPyObject *key = PyUnicode_FromString((*it)->m_name.c_str());\n\t\t\tPyDict_SetItem(category, key, value);\n\t\t\n\t\t\tPy_CLEAR(key);\n\t\t\tPy_CLEAR(value);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->info(\"Unable to convert configuration item '%s' of configuration category '%s' to 
Python\",\n\t\t\t\t\t(*it)->m_name.c_str(), m_name.c_str());\n\t\t}\n\t}\n\n\treturn category;\n}\n\n/**\n * Convert a single configuration item into a Python object\n *\n * @param item\tThe configuration item to convert\n * @return The pointer to a converted Python Object or NULL if the conversion failed\n */\nPyObject *PythonConfigCategory::convertItem(CategoryItem *item)\n{\n\tPyObject *pyItem = PyDict_New();\n\n\tPyObject *value = PyUnicode_FromString(item->m_displayName.c_str());\n\tPyObject *key = PyUnicode_FromString(\"displayName\");\n\tPyDict_SetItem(pyItem, key, value);\n\tPy_CLEAR(key);\n\tPy_CLEAR(value);\n\n\tvalue = PyUnicode_FromString(item->m_type.c_str());\n\tkey = PyUnicode_FromString(\"type\");\n\tPyDict_SetItem(pyItem, key, value);\n\tPy_CLEAR(key);\n\tPy_CLEAR(value);\n\n\tvalue = PyUnicode_FromString(item->m_default.c_str());\n\tkey = PyUnicode_FromString(\"default\");\n\tPyDict_SetItem(pyItem, key, value);\n\tPy_CLEAR(key);\n\tPy_CLEAR(value);\n\n\tvalue = PyUnicode_FromString(item->m_value.c_str());\n\tkey = PyUnicode_FromString(\"value\");\n\tPyDict_SetItem(pyItem, key, value);\n\tPy_CLEAR(key);\n\tPy_CLEAR(value);\n\n\tif (item->m_description.length())\n\t{\n\t\tvalue = PyUnicode_FromString(item->m_description.c_str());\n\t\tkey = PyUnicode_FromString(\"description\");\n\t\tPyDict_SetItem(pyItem, key, value);\n\t\tPy_CLEAR(key);\n\t\tPy_CLEAR(value);\n\t}\n\n\tif (item->m_order.length())\n\t{\n\t\tvalue = PyUnicode_FromString(item->m_order.c_str());\n\t\tkey = PyUnicode_FromString(\"order\");\n\t\tPyDict_SetItem(pyItem, key, value);\n\t\tPy_CLEAR(key);\n\t\tPy_CLEAR(value);\n\t}\n\n\tif (item->m_readonly.length())\n\t{\n\t\tvalue = PyUnicode_FromString(item->m_readonly.c_str());\n\t\tkey = PyUnicode_FromString(\"readonly\");\n\t\tPyDict_SetItem(pyItem, key, value);\n\t\tPy_CLEAR(key);\n\t\tPy_CLEAR(value);\n\t}\n\n\tif (item->m_mandatory.length())\n\t{\n\t\tvalue = PyUnicode_FromString(item->m_mandatory.c_str());\n\t\tkey = 
PyUnicode_FromString(\"mandatory\");\n\t\tPyDict_SetItem(pyItem, key, value);\n\t\tPy_CLEAR(key);\n\t\tPy_CLEAR(value);\n\t}\n\n\tif (item->m_deprecated.length())\n\t{\n\t\tvalue = PyUnicode_FromString(item->m_deprecated.c_str());\n\t\tkey = PyUnicode_FromString(\"deprecated\");\n\t\tPyDict_SetItem(pyItem, key, value);\n\t\tPy_CLEAR(key);\n\t\tPy_CLEAR(value);\n\t}\n\n\tif (item->m_length.length())\n\t{\n\t\tvalue = PyUnicode_FromString(item->m_length.c_str());\n\t\tkey = PyUnicode_FromString(\"length\");\n\t\tPyDict_SetItem(pyItem, key, value);\n\t\tPy_CLEAR(key);\n\t\tPy_CLEAR(value);\n\t}\n\n\tif (item->m_minimum.length())\n\t{\n\t\tvalue = PyUnicode_FromString(item->m_minimum.c_str());\n\t\tkey = PyUnicode_FromString(\"minimum\");\n\t\tPyDict_SetItem(pyItem, key, value);\n\t\tPy_CLEAR(key);\n\t\tPy_CLEAR(value);\n\t}\n\n\tif (item->m_maximum.length())\n\t{\n\t\tvalue = PyUnicode_FromString(item->m_maximum.c_str());\n\t\tkey = PyUnicode_FromString(\"maximum\");\n\t\tPyDict_SetItem(pyItem, key, value);\n\t\tPy_CLEAR(key);\n\t\tPy_CLEAR(value);\n\t}\n\n\tif (item->m_filename.length())\n\t{\n\t\tvalue = PyUnicode_FromString(item->m_filename.c_str());\n\t\tkey = PyUnicode_FromString(\"filename\");\n\t\tPyDict_SetItem(pyItem, key, value);\n\t\tPy_CLEAR(key);\n\t\tPy_CLEAR(value);\n\t}\n\n\n\treturn pyItem;\n}\n\n"
  },
  {
    "path": "C/common/pythonreading.cpp",
    "content": "/*\n * Fledge Python Reading\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n *\n * Extreme caution needs to be taken with these Python interface\n * classes, especially with the use of numpy which is not written\n * to support multiple imports of the package due to the use\n * of global variables within numpy itself. Hence we import numpy\n * once by use of the import_array() macro. This macro also has\n * issues as it contains an embedded return statement.\n */\n#include <pythonreading.h>\n#include <pyruntime.h>\n#include <stdexcept>\n\n#define PY_ARRAY_UNIQUE_SYMBOL  PyArray_API_FLEDGE\n#include <numpy/npy_common.h>\n#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION\n#include <numpy/ndarraytypes.h>\n#include <numpy/ndarrayobject.h>\n\n#undef NUMPY_IMPORT_ARRAY_RETVAL\n#define NUMPY_IMPORT_ARRAY_RETVAL       0\n\nbool PythonReading::doneNumPyImport = false;\n\nusing namespace std;\n\n\n/**\n * Construct a PythonReading from a DICT object returned by Python code.\n *\n * The PythonReading acts as a wrapper on the Reading class to convert to and\n * from Readings in C and Python.\n *\n * @param pyReading\tThe Python DICT\n */\nPythonReading::PythonReading(PyObject *pyReading)\n{\n\t// Get 'asset_code' value: borrowed reference.\n\tPyObject *assetCode = PyDict_GetItemString(pyReading,\n\t\t\t\t\t\t   \"asset\");\n\n\tif (!assetCode)\n\t{\n\t\tassetCode = PyDict_GetItemString(pyReading, \"asset_code\");\n\t}\n\n\t// Get 'reading' value: borrowed reference.\n\tPyObject *reading = PyDict_GetItemString(pyReading,\n\t\t\t\t\t\t \"readings\");\n\tif (!reading)\n\t{\n\t\treading = PyDict_GetItemString(pyReading, \"reading\");\n\t}\n\n\t// Keys not found or reading is not a dict\n\tif (!assetCode ||\n\t    !reading ||\n\t    !PyDict_Check(reading))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tthrow runtime_error(errorMessage());\n\t\t}\n\t\tif 
(!assetCode)\n\t\t\tthrow runtime_error(\"Reading has no asset code element.\");\n\t\tif (!reading)\n\t\t\tthrow runtime_error(\"Reading is missing the reading element which should contain the data.\");\n\t\telse\n\t\t\tthrow runtime_error(\"The reading element in the python Reading is of an incorrect type, it should be a Python DICT.\");\n\t}\n\tif (PyUnicode_Check(assetCode))\n\t{\n\t\tm_asset = PyUnicode_AsUTF8(assetCode);\n\t}\n\telse if (PyBytes_Check(assetCode))\n\t{\n\t\tm_asset = PyBytes_AsString(assetCode);\n\t}\n\telse\n\t{\n\t\tthrow runtime_error(\"Unable to parse the asset code value. Asset codes should be a string\");\n\n\t}\n\n\t// Fetch all Datapoints in 'reading' dict\n\tPyObject *dKey, *dValue;\n\tPy_ssize_t dPos = 0;\n\n\t// Fetch all Datapoints in 'readings' dict\n\t// dKey and dValue are borrowed references\n\twhile (PyDict_Next(reading, &dPos, &dKey, &dValue))\n\t{\n\t\tDatapointValue *dataPoint = getDatapointValue(dValue);\n\t\tif (dataPoint)\n\t\t{\n\t\t\t// Detect Python keys like reading[b'ema']\n\t\t\t// or reading['ema']\n\t\t\tif (PyUnicode_Check(dKey))\n\t\t\t{\n\t\t\t\tm_values.emplace_back(new Datapoint(\n\t\t\t\t\tstring(PyUnicode_AsUTF8(dKey)),\n\t\t\t\t\t*dataPoint));\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_values.emplace_back(new Datapoint(\n\t\t\t\t\tstring(PyBytes_AsString(dKey)),\n\t\t\t\t\t*dataPoint));\n\t\t\t}\n\n\t\t\t// Remove temp objects\n\t\t\tdelete dataPoint;\n\t\t}\n\t}\n\n\t/**\n\t * Set id, uuid, ts and user_ts of the original data\n\t */\n\n\t// Get 'id' value: borrowed reference.\n\tPyObject *id = PyDict_GetItemString(pyReading, \"id\");\n\tif (id && PyLong_Check(id))\n\t{\n\t\t// Set id\n\t\tm_id = PyLong_AsUnsignedLong(id);\n\t\tm_has_id = true;\n\t}\n\telse\n\t{\n\t\tm_has_id = false;\n\t\tm_id = 0;\n\t}\n\n\t// New reference, to delete\n\tPyObject *key = PyUnicode_FromString(\"timestamp\");\n\n\t// Get 'ts' value: borrowed reference.\n\t// Need to use PyDict_GetItemWithError in order to avoid an 
exception\n\tPyObject *ts = PyDict_GetItemWithError(pyReading, key);\n\tif (!(ts && PyUnicode_Check(ts)))\n\t{\n\t\tts = PyDict_GetItemString(pyReading, \"ts\");\n\t}\n\tif (ts && PyUnicode_Check(ts))\n\t{\n\t\t// Set timestamp\n\t\tconst char *ts_str = PyUnicode_AsUTF8(ts);\n\t\tsetTimestamp(ts_str);\n\t}\n\telse\n\t{\n\t\tm_timestamp.tv_sec = 0;\n\t\tm_timestamp.tv_usec = 0;\n\t\t// Logger::getLogger()->debug(\"PythonReading c'tor: Couldn't parse 'ts' \");\n\t}\n\n\tPy_CLEAR(key);\n\n\t// New reference, to delete\n\tkey = PyUnicode_FromString(\"user_ts\");\n\n\t// Get 'user_ts' value: borrowed reference.\n\tPyObject *uts = PyDict_GetItemWithError(reading, key);\n\tif (!uts)\n\t{\n\t\tuts = PyDict_GetItemWithError(pyReading, key);\n\t}\n\tif (uts && PyUnicode_Check(uts))\n\t{\n\t\t// Set user timestamp\n\t\tconst char *ts_str = PyUnicode_AsUTF8(uts);\n\t\tsetUserTimestamp(ts_str);\n\t}\n\telse\n\t{\n\t\t//Logger::getLogger()->debug(\"PythonReading c'tor: Couldn't parse 'user_ts' \");\n\t\tm_userTimestamp.tv_sec = 0;\n\t\tm_userTimestamp.tv_usec = 0;\n\t}\n\tPy_CLEAR(key);\n}\n\n/**\n * Given a Python value convert it into a DatapointValue\n *\n * @param value The python object to convert\n * @return The converted DatapointValue or NULL if the conversion was not possible\n */\nDatapointValue *PythonReading::getDatapointValue(PyObject *value)\n{\n\tInitNumPy();\n\tif (!value)\n\t{\n\t\tthrow runtime_error(\"NULL datapoint value in Python reading\");\n\t}\n\n\tDatapointValue *dataPoint = NULL;\n\tif (PyLong_Check(value))\t// Integer\tT_INTEGER\n\t{\n\t\tdataPoint = new DatapointValue((long)PyLong_AsUnsignedLongMask(value));\n\t}\n\telse if (PyFloat_Check(value))\t\t// Float\t\tT_FLOAT\n\t{\n\t\tdataPoint = new DatapointValue(PyFloat_AS_DOUBLE(value));\n\t}\n\telse if (PyBytes_Check(value))\t\t// String\t\tT_STRING\n\t{\n\t\tstring str = PyBytes_AsString(value);\n\t\tfixQuoting(str);\n\t\tdataPoint = new DatapointValue(str);\n\t}\n\telse if 
(PyUnicode_Check(value))\t// String\t\tT_STRING\n\t{\n\t\tstring str = PyUnicode_AsUTF8(value);\n\t\tfixQuoting(str);\n\t\tdataPoint = new DatapointValue(str);\n\t}\n\telse if (PyDict_Check(value))\t\t// Nested object\tT_DP_DICT\n\t{\n\t\tvector<Datapoint *> *values = new vector<Datapoint *>;\n\t\tPy_ssize_t dPos = 0;\n\t\tPyObject *dKey, *dValue;\n\t\twhile (PyDict_Next(value, &dPos, &dKey, &dValue))\n\t\t{\n\t\t\tDatapointValue *dpv = getDatapointValue(dValue);\n\t\t\tif (dpv)\n\t\t\t{\n\t\t               if (PyUnicode_Check(dKey))\n                               {\n                                     values->emplace_back(new Datapoint(string(PyUnicode_AsUTF8(dKey)), *dpv));\n                               }\n                               else\n                               {\n                                     values->emplace_back(new Datapoint(string(PyBytes_AsString(dKey)), *dpv));\n                               }\n\t\t\t\t// Remove temp objects\n\t\t\t\tdelete dpv;\n\t\t\t}\n\t\t}\n\t\tdataPoint = new DatapointValue(values, true);\n\t}\n\telse if (PyList_Check(value))\t// List of data points or floats\n\t{\n\t\tPy_ssize_t listSize = PyList_Size(value);\n\t\t// Find out what the list contains\n\t\tPyObject *item0 = PyList_GetItem(value, 0);\n\t\tif (item0 == NULL)\n\t\t{\n\t\t\treturn NULL;\n\t\t}\n\t\tif (PyFloat_Check(item0))\t// List of floats\tT_FLOAT_ARRAY\n\t\t{\n\t\t\tvector<double> values;\n\t\t\tfor (Py_ssize_t i = 0; i < listSize; i++)\n\t\t\t{\n\t\t\t\tdouble d = PyFloat_AS_DOUBLE(PyList_GetItem(value, i));\n\t\t\t\tvalues.push_back(d);\n\t\t\t}\n\t\t\tdataPoint = new DatapointValue(values);\n\t\t}\n\t\telse if (PyList_Check(item0))\t// 2D array \t\tT_2D_FLOAT_ARRAY\n\t\t{\n\t\t\tvector<vector<double>* > values;\n\t\t\tfor (Py_ssize_t i = 0; i < listSize; i++)\n\t\t\t{\n\t\t\t\tvector<double> *row = new vector<double>;\n\t\t\t\tPyObject *pyRow = PyList_GetItem(value, i);\n\t\t\t\tfor (Py_ssize_t j = 0; j < PyList_Size(pyRow); 
j++)\n\t\t\t\t{\n\t\t\t\t\tdouble d = PyFloat_AS_DOUBLE(PyList_GetItem(pyRow, j));\n\t\t\t\t\trow->push_back(d);\n\t\t\t\t}\n\t\t\t\tvalues.push_back(row);\n\t\t\t}\n\t\t\tdataPoint = new DatapointValue(values);\n\t\t\tfor (auto& row : values)\n\t\t\t\tdelete row;\n\t\t}\n\t\telse if (PyDict_Check(item0))\t// List of datapoints\tT_DP_LIST\n\t\t{\n\t\t\tvector<Datapoint *>* values = new vector<Datapoint *>;\n\t\t\tfor (Py_ssize_t i = 0; i < listSize; i++)\n\t\t\t{\n\t\t\t\tPyObject *item = PyList_GetItem(value, i);\n\t\t\t\tif (PyDict_Check(item))\n\t\t\t\t{\n\t\t\t\t\tPyObject *key, *val;\n\t\t\t\t\tPy_ssize_t pos = 0;\n\t\t\t\t\tPyDict_Next(item, &pos, &key, &val);\n\t\t\t\t\tDatapointValue *dpv = getDatapointValue(val);\n\t\t\t\t\tif (dpv)\n\t\t\t\t\t{\n\t\t\t\t\t\t// Handle both unicode and bytes keys\n\t\t\t\t\t\tif (PyUnicode_Check(key))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tvalues->emplace_back(new Datapoint(string(PyUnicode_AsUTF8(key)), *dpv));\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tvalues->emplace_back(new Datapoint(string(PyBytes_AsString(key)), *dpv));\n\t\t\t\t\t\t}\n\t\t\t\t\t\t// Remove temp objects\n\t\t\t\t\t\tdelete dpv;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tdataPoint = new DatapointValue(values, false);\n\t\t}\n\t}\n\telse if (PyArray_Check(value))\t// Numpy array\n\t{\n\t\tPyArrayObject *array = (PyArrayObject *)value;\n\t\tint item_size = PyArray_ITEMSIZE(array);\n\t\tif (PyArray_NDIM(array) == 1)\t// Databuffer\tT_DATABUFFER\n\t\t{\n\t\t\tnpy_intp *dims = PyArray_DIMS(array);\n\t\t\tint n_items = (int)dims[0];\n\t\t\tDataBuffer *buffer = new DataBuffer(item_size, n_items);\n\t\t\tmemcpy(buffer->getData(), PyArray_DATA(array), n_items * item_size);\n\n\t\t\tdataPoint = new DatapointValue(buffer);\n\t\t}\n\t\telse if (PyArray_NDIM(array) == 2)\t// Image\tT_IMAGE\n\t\t{\n\t\t\tnpy_intp *dims = PyArray_DIMS(array);\n\t\t\tint height = (int)dims[0];\n\t\t\tint width = (int)dims[1];\n\t\t\tint depth = item_size * 8;\t// In bits\n\t\t\tDPImage *image = new DPImage(width, height, depth, PyArray_DATA(array));\n\n\t\t\tdataPoint = new DatapointValue(image);\n\t\t}\n\t\telse if (PyArray_NDIM(array) == 3)\t// RGB Image\tT_IMAGE\n\t\t{\n\t\t\tnpy_intp *dims = PyArray_DIMS(array);\n\t\t\tif ((int)dims[2] == 
3)\n\t\t\t{\n\t\t\t\tint height = (int)dims[0];\n\t\t\t\tint width = (int)dims[1];\n\t\t\t\tint depth = 24;\t// In bits\n\t\t\t\tDPImage *image = new DPImage(width, height, depth, PyArray_DATA(array));\n\n\t\t\t\tdataPoint = new DatapointValue(image);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Received a 3D numpy array that is not an RGB image\");\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Encountered a numpy array with more than 3 dimensions in a Python data point. This is currently not supported\");\n\t\t}\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->info(\"PythonReading::getDatapointValue: UNSUPPORTED\");\n\t\tPyTypeObject *type = value->ob_type;\n\t\tLogger::getLogger()->error(\"Encountered an unsupported type '%s' when creating a reading from Python\", type->tp_name);\n\t}\n\n\treturn dataPoint;\n}\n\n/**\n * Convert a PythonReading, which is just a Reading, into a PyObject\n * structure that can be passed to embedded Python code.\n *\n * @param changeKeys\t\tSet DICT keys as reading/asset_code if true\n *\t\t\t\tor readings/asset if false\n * @param useBytesString\tWhether to use DICT keys as BytesString\n *\t\t\t\tand string values as BytesString\n * @return PyObject*\tThe Python representation of the readings as a DICT\n */\nPyObject *PythonReading::toPython(bool changeKeys, bool useBytesString)\n{\n\t// Create object (dict) for reading Datapoints:\n\t// this will be added as the value for key 'readings'\n\tPyObject *dataPoints = PyDict_New();\n\n\t// Get all datapoints\n\tfor (auto it = m_values.begin(); it != m_values.end(); ++it)\n\t{\n\t\t// Pass BytesString switch\n\t\tPyObject *value = convertDatapoint(*it, useBytesString);\n\t\t// Add Datapoint: key and value\n\t\tif (value)\n\t\t{\n\t\t\tPyObject *key = useBytesString ?\n\t\t\t\t\tPyBytes_FromString((*it)->getName().c_str())\n\t\t\t\t\t:\n\t\t\t\t\tPyUnicode_FromString((*it)->getName().c_str());\n\t\t\tPyDict_SetItem(dataPoints, key, 
value);\n\n\t\t\tPy_CLEAR(key);\n\t\t\tPy_CLEAR(value);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->info(\"Unable to convert datapoint '%s' of reading '%s' to Python\",\n\t\t\t\t\t(*it)->getName().c_str(), m_asset.c_str());\n\t\t}\n\t}\n\n\t// Create an object (dict) with 'asset_code' and 'readings' key\n\tPyObject *readingObject = PyDict_New();\n\n\t// Add reading datapoints\n\tPyObject *key = PyUnicode_FromString(changeKeys ? \"reading\" : \"readings\");\n\tPyDict_SetItem(readingObject, key, dataPoints);\n\tPy_CLEAR(key);\n\n\t// Add reading asset name\n\tPyObject *assetVal = useBytesString ?\n\t\t\tPyBytes_FromString(m_asset.c_str())\n\t\t\t:\n\t\t\tPyUnicode_FromString(m_asset.c_str());\n\n\tkey = PyUnicode_FromString(changeKeys ? \"asset_code\" : \"asset\");\n\tPyDict_SetItem(readingObject, key, assetVal);\n\tPy_CLEAR(key);\n\n\t// Add reading id\n\tPyObject *readingId = PyLong_FromUnsignedLong(m_id);\n\tkey = PyUnicode_FromString(\"id\");\n\tPyDict_SetItem(readingObject, key, readingId);\n\tPy_CLEAR(key);\n\n\t// Add reading timestamp\n\t// PyObject *readingTs = PyLong_FromUnsignedLong(m_timestamp.tv_sec);\n\tstring s = this->getAssetDateTime(FMT_DEFAULT) + \"+00:00\";\n\tPyObject *readingTs = PyUnicode_FromString(s.c_str());\n\tkey = PyUnicode_FromString(\"ts\");\n\tPyDict_SetItem(readingObject, key, readingTs);\n\tPy_CLEAR(key);\n\n\t// Add reading user timestamp\n\t//PyObject *readingUserTs = PyLong_FromUnsignedLong(m_userTimestamp.tv_sec);\n\ts = this->getAssetDateUserTime(FMT_DEFAULT) + \"+00:00\";\n\tPyObject *readingUserTs = PyUnicode_FromString(s.c_str());\n\tkey = PyUnicode_FromString(\"user_ts\");\n\tPyDict_SetItem(readingObject, key, readingUserTs);\n\tPy_CLEAR(key);\n\n\t// Remove temp objects\n\tPy_CLEAR(dataPoints);\n\tPy_CLEAR(assetVal);\n\tPy_CLEAR(readingId);\n\tPy_CLEAR(readingTs);\n\tPy_CLEAR(readingUserTs);\n\n\treturn readingObject;\n}\n\n/**\n * Convert a single datapoint into a Python object\n *\n * @param dp\t\tThe 
datapoint to convert\n * @param bytesString\tWhether to set PyObject string as PyBytes or PyUnicode\n * @return The pointer to a converted Python Object or NULL if the conversion failed\n */\nPyObject *PythonReading::convertDatapoint(Datapoint *dp, bool bytesString)\n{\n\tPyObject *value = NULL;\n\tDatapointValue::dataTagType dataType = dp->getData().getType();\n\n\tif (dataType == DatapointValue::dataTagType::T_INTEGER)\n\t{\n\t\tvalue = PyLong_FromLong(dp->getData().toInt());\n\t}\n\telse if (dataType == DatapointValue::dataTagType::T_FLOAT)\n\t{\n\t\tvalue = PyFloat_FromDouble(dp->getData().toDouble());\n\t}\n\telse if (dataType == DatapointValue::dataTagType::T_STRING)\n\t{\n\t\tvalue = bytesString ?\n\t\t\t\tPyBytes_FromString(dp->getData().toStringValue().c_str())\n\t\t\t\t:\n\t\t\t\tPyUnicode_FromString(dp->getData().toStringValue().c_str());\n\t}\n\telse if (dataType == DatapointValue::dataTagType::T_FLOAT_ARRAY)\n\t{\n\t\tvector<double>* values = dp->getData().getDpArr();\n\t\tint i = 0;\n\t\tvalue = PyList_New(values->size());\n\t\tfor (auto it = values->begin(); it != values->end(); ++it)\n\t\t{\n\t\t\tPyList_SetItem(value, i++, PyFloat_FromDouble(*it));\n\t\t}\n\t}\n\telse if (dataType == DatapointValue::dataTagType::T_2D_FLOAT_ARRAY)\n\t{\n\t\tvector<vector<double>* > *vec = dp->getData().getDp2DArr();\n\t\tvalue = PyList_New(vec->size());\n\t\tint rowNo = 0;\n\t\tfor (auto row : *vec)\n\t\t{\n\t\t\tint i = 0;\n\t\t\tPyObject *pyRow = PyList_New(row->size());\n\t\t\tfor (auto& d : *row)\n\t\t\t{\n\t\t\t\tPyList_SetItem(pyRow, i++, PyFloat_FromDouble(d));\n\t\t\t}\n\t\t\tPyList_SetItem(value, rowNo++, pyRow);\n\t\t}\n\t}\n\telse if (dataType == DatapointValue::dataTagType::T_DATABUFFER)\n\t{\n//\t\tPythonRuntime::getPythonRuntime()->initNumPy();\n\t\tInitNumPy();\n\t\tDataBuffer *dbuf = dp->getData().getDataBuffer();\n\t\tnpy_intp dim = dbuf->getItemCount();\n\t\tenum NPY_TYPES\ttype;\n\t\tswitch (dbuf->getItemSize())\n\t\t{\n\t\t\tcase 1:\n\t\t\t\ttype 
= NPY_UBYTE;\n\t\t\t\tbreak;\n\t\t\tcase 2:\n\t\t\t\ttype = NPY_UINT16;\n\t\t\t\tbreak;\n\t\t\tcase 4:\n\t\t\t\ttype = NPY_UINT32;\n\t\t\t\tbreak;\n\t\t\tcase 8:\n\t\t\t\ttype = NPY_UINT64;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tbreak;\n\t\t}\n\t\tPyGILState_STATE state = PyGILState_Ensure();\n\t\tvalue = PyArray_SimpleNewFromData(1, &dim, type, dbuf->getData());\n\t\tPyGILState_Release(state);\n#if 0\n\t\tPy_buffer *buffer = (Py_buffer *)malloc(sizeof(Py_buffer));\n\t\tDataBuffer *dbuf = (*it)->getData().getDataBuffer();\n\t\tbuffer->buf = dbuf->getData();\n\t\tbuffer->itemsize = dbuf->m_itemSize;\n\t\tbuffer->len = dbuf->len  *dbuf->itemSize;\n#endif\n\n\t}\n\telse if (dataType == DatapointValue::dataTagType::T_IMAGE)\n\t{\n\t\t//PythonRuntime::getPythonRuntime()->initNumPy();\n\t\tInitNumPy();\n\t\tDPImage *image = dp->getData().getImage();\n\t\tif (image->getDepth() == 24)\n\t\t{{\n\t\t\tnpy_intp dim[3];\n\t\t\tdim[0] = image->getHeight();\n\t\t\tdim[1] = image->getWidth();\n\t\t\tdim[2] = 3;\n\t\t\tenum NPY_TYPES\ttype = NPY_UBYTE;\n\t\t\tPyGILState_STATE state = PyGILState_Ensure();\n\t\t\tvalue = PyArray_SimpleNewFromData(3, dim, type, image->getData());\n\t\t\tPyGILState_Release(state);\n\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tnpy_intp dim[2];\n\t\t\tdim[0] = image->getHeight();\n\t\t\tdim[1] = image->getWidth();\n\t\t\tenum NPY_TYPES\ttype;\n\t\t\tswitch (image->getDepth())\n\t\t\t{\n\t\t\t\tcase 8:\n\t\t\t\t\ttype = NPY_UBYTE;\n\t\t\t\t\tbreak;\n\t\t\t\tcase 16:\n\t\t\t\t\ttype = NPY_UINT16;\n\t\t\t\t\tbreak;\n\t\t\t\tcase 32:\n\t\t\t\t\ttype = NPY_UINT32;\n\t\t\t\t\tbreak;\n\t\t\t\tcase 64:\n\t\t\t\t\ttype = NPY_UINT64;\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tPyGILState_STATE state = PyGILState_Ensure();\n\t\t\tvalue = PyArray_SimpleNewFromData(2, dim, type, image->getData());\n\t\t\tPyGILState_Release(state);\n\t\t}\n\t}\n\telse if (dataType == DatapointValue::dataTagType::T_DP_DICT)\n\t{\n\t\tvector<Datapoint *>* children = 
dp->getData().getDpVec();\n\t\tvalue = PyDict_New();\n\t\tfor (auto child = children->begin(); child != children->end(); ++child)\n\t\t{\n\t\t\tPyObject *childValue = convertDatapoint(*child);\n\t\t\t// Add Datapoint: key and value\n\t\t\tPyObject *key = PyUnicode_FromString((*child)->getName().c_str());\n\t\t\tPyDict_SetItem(value, key, childValue);\n\n\t\t\tPy_CLEAR(key);\n\t\t\tPy_CLEAR(childValue);\n\t\t}\n\t}\n\telse if (dataType == DatapointValue::dataTagType::T_DP_LIST)\n\t{\n\t\tvector<Datapoint *>* children = dp->getData().getDpVec();\n\t\tint i = 0;\n\t\tvalue = PyList_New(children->size());\n\t\tfor (auto child = children->begin(); child != children->end(); ++child)\n\t\t{\n\t\t\tPyObject *childValue = convertDatapoint(*child);\n\t\t\t// TODO complete\n\t\t\t// Add Datapoint: key and value\n\t\t\tPyObject *key = PyUnicode_FromString((*child)->getName().c_str());\n\t\t\tPyObject *dict = PyDict_New();\n\t\t\tPyDict_SetItem(dict, key, childValue);\n\t\t\tPyList_SetItem(value, i++, dict);\n\n\t\t\tPy_CLEAR(key);\n\t\t\tPy_CLEAR(childValue);\n\t\t}\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->info(\"Unable to convert datapoint type '%s' to Python, defaulting to string representation\", dp->getData().getTypeStr().c_str());\n\t\tvalue = PyUnicode_FromString(dp->getData().toString().c_str());\n\t}\n\n\treturn value;\n}\n\n/**\n * Retrieve the error message last raised in Python\n *\n * @return string\tThe Python error message\n */\nstring PythonReading::errorMessage()\n{\n\t// Get error message\n\tPyObject *pType, *pValue, *pTraceback;\n\tPyErr_Fetch(&pType, &pValue, &pTraceback);\n\tPyErr_NormalizeException(&pType, &pValue, &pTraceback);\n\n\tPyObject *str_exc_value = PyObject_Repr(pValue);\n\tPyObject *pyExcValueStr = PyUnicode_AsEncodedString(str_exc_value, \"utf-8\", \"Error ~\");\n\n\t// NOTE from:\n\t// https://docs.python.org/2/c-api/exceptions.html\n\t//\n\t// The value and traceback object may be NULL\n\t// even when the type object is 
not.\n\tstring errorMessage = pValue ?\n\t\t\t\t    PyBytes_AsString(pyExcValueStr) :\n\t\t\t\t    \"no error description.\";\n\n\tLogger::getLogger()->error(\"Exception from python interpreter: %s\", errorMessage.c_str());\n\n\t// Reset error\n\tPyErr_Clear();\n\n\t// Remove references\n\tPy_CLEAR(pType);\n\tPy_CLEAR(pValue);\n\tPy_CLEAR(pTraceback);\n\tPy_CLEAR(str_exc_value);\n\tPy_CLEAR(pyExcValueStr);\n\n\treturn errorMessage;\n}\n\n\n\n/**\n * Fix the quoting if the datapoint contains unescaped quotes\n *\n * @param str\tString to fix the quoting of\n */\nvoid PythonReading::fixQuoting(string& str)\n{\nstring newString;\nbool escape = false;\n\n\tfor (size_t i = 0; i < str.length(); i++)\n\t{\n\t\tif (str[i] == '\\\"' && escape == false)\n\t\t{\n\t\t\tnewString += '\\\\';\n\t\t\tnewString += '\\\\';\n\t\t\tnewString += '\\\\';\n\t\t}\n\t\telse if (str[i] == '\\\\')\n\t\t{\n\t\t\tescape = !escape;\n\t\t}\n\t\tnewString += str[i];\n\t}\n\tstr = newString;\n}\n\n\n/**\n * Return true if the Python object is an array. This is mostly for testing\n * and overcomes an issue with including the numpy header files multiple times.\n *\n * @param obj\tThe Python object to test\n * @return true if the Python object is a numpy array\n */\nbool PythonReading::isArray(PyObject *obj)\n{\n\treturn PyArray_Check(obj);\n}\n\n/**\n * Import NumPy. Due to the way numpy uses global variables we must only do\n * this once in a single executable as multiple imports result in crashes.\n */\nint PythonReading::InitNumPy()\n{\n\tif (!PythonReading::doneNumPyImport)\n\t{\n\t\tPythonReading::doneNumPyImport = true;\n\t\t// Note the following is a macro in the numpy header file that has an embedded return\n\t\t// in the case of failure. Hence the need to return a value. 
Assume no code after this\n\t\t// line is run\n\t\tPyGILState_STATE state = PyGILState_Ensure();\n\t\t\n\t\tif (PyImport_ImportModule(\"numpy.core.multiarray\") == NULL)\n\t\t\tthrow runtime_error(errorMessage());\n\n\t\timport_array();\n\t\t\n\t\tPyGILState_Release(state);\n\t}\n\treturn 0;\n};\n\n\n"
  },
  {
    "path": "C/common/pythonreadingset.cpp",
    "content": "/*\n * Fledge Python Reading Set\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <pythonreadingset.h>\n#include <pythonreading.h>\n#include <stdexcept>\n\nusing namespace std;\n\n\n/**\n * Set id, uuid, ts and user_ts in the reading object\n *\n * @param newReading\tReading object to update\n * @param readingList\tPyObject containing this reading object\n * @param fillIfMissing\tIf True, only fill ID/TS fields if not set already\n */\nvoid PythonReadingSet::setReadingAttr(Reading* newReading, PyObject *readingList, bool fillIfMissing)\n{\n\tif (!newReading)\n\t\treturn;\n\t\n\t// Get 'id' value: borrowed reference.\n\tPyObject* id = PyDict_GetItemString(readingList, \"id\");\n\tbool fill = (!fillIfMissing || (fillIfMissing && newReading->getId()==0));\n\tif (fill && id && PyLong_Check(id))\n\t{\n\t\t// Set id\n\t\tnewReading->setId(PyLong_AsUnsignedLong(id));\n\t}\n\n\t// Get 'ts' value: borrowed reference.\n\tPyObject* ts = PyDict_GetItemString(readingList, \"ts\");\n\tfill = (!fillIfMissing || (fillIfMissing && newReading->getTimestamp()==0));\n\tif (fill && ts)\n\t{\n\t\t// Convert a timestamp of the form '2019-01-07 19:06:35.366100+01:00'\n\t\tconst char *ts_str = PyUnicode_AsUTF8(ts);\n\t\tnewReading->setTimestamp(ts_str);\n\t}\n\n\t// Get 'user_ts' value: borrowed reference.\n\tPyObject* uts = PyDict_GetItemString(readingList, \"timestamp\");\n\tfill = (!fillIfMissing || (fillIfMissing && newReading->getUserTimestamp()==0));\n\tif (fill && uts)\n\t{\n\t\t// Convert a timestamp of the form '2019-01-07 19:06:35.366100+01:00'\n\t\tconst char *ts_str = PyUnicode_AsUTF8(uts);\n\t\tnewReading->setUserTimestamp(ts_str);\n\t}\n\t\n\t// Get 'ts' value: borrowed reference.\n\tPyObject* userts = PyDict_GetItemString(readingList, \"user_ts\");\n\tfill = (!fillIfMissing || (fillIfMissing && newReading->getUserTimestamp()==0));\n\tif (fill && userts)\n\t{\n\t\t// Convert a 
timestamp of the form '2019-01-07 19:06:35.366100+01:00'\n\t\tconst char *ts_str = PyUnicode_AsUTF8(userts);\n\t\tnewReading->setUserTimestamp(ts_str);\n\t}\n\n\t// if User TS is still not filled, copy TS into it\n\tfill = (fillIfMissing && newReading->getUserTimestamp()==0);\n\t//Logger::getLogger()->debug(\"fill=%s, newReading->getUserTimestamp()=%d, newReading->getTimestamp()=%d\", fill?\"True\":\"False\", newReading->getUserTimestamp(), newReading->getTimestamp());\n\tif (fill)\n\t{\n\t\tstruct timeval tVal;\n\t\tnewReading->getTimestamp(&tVal);\n\t\tnewReading->setUserTimestamp(tVal);\n\t\tLogger::getLogger()->debug(\"Copied TS into user TS: newReading->getUserTimestamp()=%d\", newReading->getUserTimestamp());\n\t}\n\n\t// if TS is still not filled, copy User TS into it\n\tfill = (fillIfMissing && newReading->getTimestamp()==0);\n\t//Logger::getLogger()->debug(\"fill=%s, newReading->getUserTimestamp()=%d, newReading->getTimestamp()=%d\", fill?\"True\":\"False\", newReading->getUserTimestamp(), newReading->getTimestamp());\n\tif (fill)\n\t{\n\t\tstruct timeval tVal;\n\t\tnewReading->getUserTimestamp(&tVal);\n\t\tnewReading->setTimestamp(tVal);\n\t\tLogger::getLogger()->debug(\"Copied user TS into TS: newReading->getUserTimestamp()=%d\", newReading->getUserTimestamp());\n\t}\n}\n\n\n/**\n * Construct PythonReadingSet from a python list object that contains a\n * list of readings\n *\n * @param set\tA Python object pointer that contians a list of readings\n */\nPythonReadingSet::PythonReadingSet(PyObject *set)\n{\n\tif (PyList_Check(set))\n\t{\n\t\tLogger::getLogger()->debug(\"PythonReadingSet c'tor: LIST of size %d\", PyList_Size(set));\n\t}\n\telse if (PyDict_Check(set))\n\t{\n\t\tLogger::getLogger()->debug(\"PythonReadingSet c'tor: DICT of size %d\", PyDict_Size(set));\n\t}\n    \n\tif (PyList_Check(set))\n\t{\n\t\tPy_ssize_t listSize = PyList_Size(set);\n\t\tfor (Py_ssize_t i = 0; i < listSize; i++)\n\t\t{\n\t\t\tPyObject *pyReading = PyList_GetItem(set, 
i);\n\t\t\tPythonReading *reading = new PythonReading(pyReading);\n\t\t\tsetReadingAttr(reading, set, true);\n\t\t\tm_readings.push_back(reading);\n\t\t\tm_count++;\n\t\t\tm_last_id = reading->getId();\n\t\t}\n\t}\n\telse if (PyDict_Check(set))\n\t{\n\t\tPythonReading *reading = new PythonReading(set);\n\t\tif (reading)\n\t\t{\n\t\t\tsetReadingAttr(reading, set, true);\n\t\t\tm_readings.push_back(reading);\n\t\t\tm_count++;\n\t\t\tm_last_id = reading->getId();\n\t\t}\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"Expected a Python list/dict as a reading set when constructing a PythonReadingSet\");\n\t\tthrow runtime_error(\"Expected a Python list/dict as a reading set when constructing a PythonReadingSet\");\n\t}\n}\n    \n\n/**\n * Convert the ReadingSet to a Python List\n *\n * @return A Python object that contains the set of readings as a Python list\n */\nPyObject *PythonReadingSet::toPython(bool changeKeys)\n{\n\tPyObject *set = PyList_New(m_readings.size());\n\tfor (int i = 0; i < m_readings.size(); i++)\n\t{\n\t\tPythonReading *pyReading = (PythonReading *) m_readings[i];\n\t\tPyList_SetItem(set, i, pyReading->toPython(changeKeys));\n\t}\n\treturn set;\n}\n\n"
  },
  {
    "path": "C/common/query.cpp",
    "content": "/*\n * Fledge storage service client\n *\n * Copyright (c) 2018 OSIsoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <query.h>\n#include <sstream>\n#include <iostream>\n\nusing namespace std;\n\n/**\n * Construct a query with a simple where clause\n *\n * @param where\tA pointer to the where condition\n */\nQuery::Query(Where *where) : m_where(where), m_limit(0), m_timebucket(0), m_distinct(false), m_join(0)\n{\n}\n\n/**\n * Construct a query with a where clause and aggregate response\n *\n * @param aggregate\tA ppointer to the aggregate operation to perform\n * @param where\tA pointer to the where condition\n */\nQuery::Query(Aggregate *aggregate, Where *where) : m_where(where),\n\t\t\t\t\t\t   m_limit(0),\n\t\t\t\t\t\t   m_timebucket(0),\n\t\t\t\t\t\t   m_distinct(false),\n\t\t\t\t\t\t   m_join(0)\n{\n\tm_aggregates.push_back(aggregate);\n}\n\n/**\n * Construct a timebucket query with a simple where clause\n *\n * @param timebuck\tA pointer to the timebucket definition\n * @param where\tA pointer to the where condition\n */\nQuery::Query(Timebucket *timebucket, Where *where) : m_where(where),\n\t\t\t\t\t\t     m_limit(0),\n\t\t\t\t\t\t     m_timebucket(timebucket),\n\t\t\t\t\t\t     m_distinct(false),\n\t\t\t\t\t\t     m_join(0)\n{\n}\n\n/**\n * Construct a timebucket query with a simple where clause and a limit on\n * the rows to return\n *\n * @param timebuck\tA pointer to the timebucket definition\n * @param where\tA pointer to the where condition\n * @param limit\tThe number of rows to return\n */\nQuery::Query(Timebucket *timebucket, Where *where, unsigned int limit) : m_where(where),\n\t\t\t\t\t\t\t\t\t m_limit(limit),\n\t\t\t\t\t\t\t\t\t m_timebucket(timebucket),\n\t\t\t\t\t\t\t\t\t m_distinct(false),\n\t\t\t\t\t\t\t\t\t m_join(0)\n{\n}\n\n/**\n * Construct a query with a fixed set of returned values and a simple where clause\n *\n * @params returns\tThe set of rows to return\n * @params 
where\tThe where clause\n */\nQuery::Query(vector<Returns *> returns, Where *where) : m_where(where),\n\t\t\t\t\t\t\tm_limit(0),\n\t\t\t\t\t\t\tm_timebucket(0),\n\t\t\t\t\t\t\tm_distinct(false),\n\t\t\t\t\t\t\tm_join(0)\n{\n\tfor (auto it = returns.cbegin(); it != returns.cend(); ++it)\n\t{\n\t\tm_returns.push_back(*it);\n\t}\n}\n\n/**\n * Construct a query with a fixed set of returned values and a simple where clause\n * and return a limited set of rows\n *\n * @param returns\tThe set of rows to return\n * @param where\tThe where clause\n * @param limit\t\tThe number of rows to return\n */\nQuery::Query(vector<Returns *> returns, Where *where, unsigned int limit) : m_where(where),\n\t\t\t\t\t\t\t\t\t    m_limit(limit),\n\t\t\t\t\t\t\t\t\t    m_timebucket(0),\n\t\t\t\t\t\t\t\t\t    m_distinct(false),\n\t\t\t\t\t\t\t\t\t    m_join(0)\n{\n\tfor (auto it = returns.cbegin(); it != returns.cend(); ++it)\n\t{\n\t\tm_returns.push_back(*it);\n\t}\n}\n\n/**\n * Construct a simple query to return certain columns from a table\n *\n * @param returns\tThe columns to return\n */\nQuery::Query(vector<Returns *> returns) : m_where(0),\n\t\t\t\t\t  m_limit(0),\n\t\t\t\t\t  m_timebucket(0),\n\t\t\t\t\t  m_distinct(false),\n\t\t\t\t\t  m_join(0)\n{\n\tfor (auto it = returns.cbegin(); it != returns.cend(); ++it)\n\t{\n\t\tm_returns.push_back(*it);\n\t}\n}\n\n/**\n * Construct a simple query to return a certain column from a table\n *\n * @param returns\tThe column to return\n */\nQuery::Query(Returns *returns) : m_where(0),\n\t\t\t\t m_limit(0),\n\t\t\t\t m_timebucket(0),\n\t\t\t\t m_distinct(false),\n\t\t\t\t m_join(0)\n{\n\tm_returns.push_back(returns);\n}\n\n/**\n * Destructor for a query object\n */\nQuery::~Query()\n{\n\tif (m_where)\n\t{\n\t\tdelete m_where;\n\t}\n\tfor (auto it = m_aggregates.cbegin(); it != m_aggregates.cend(); ++it)\n\t{\n\t\tdelete *it;\n\t}\n\tfor (auto it = m_sort.cbegin(); it != m_sort.cend(); ++it)\n\t{\n\t\tdelete *it;\n\t}\n\tfor (auto it = 
m_returns.cbegin(); it != m_returns.cend(); ++it)\n\t{\n\t\tdelete *it;\n\t}\n\tif (m_timebucket)\n\t{\n\t\tdelete m_timebucket;\n\t}\n\tif (m_join)\n\t{\n\t\tdelete m_join;\n\t}\n}\n\n/**\n * Add an aggregate operation to an existing query object\n *\n * @param aggregate\tThe aggregate operation to add\n */\nvoid Query::aggregate(Aggregate *aggregate)\n{\n\tm_aggregates.push_back(aggregate);\n}\n\n/**\n * Add a sort operation to an existing query\n *\n * @param sort\tThe sort operation to add\n */\nvoid Query::sort(Sort *sort)\n{\n\tm_sort.push_back(sort);\n}\n\n/**\n * Add a group operation to a query\n *\n * @param column\tThe column to group by\n */\nvoid Query::group(const string& column)\n{\n\tm_group = column;\n}\n\n/**\n * Limit the number of rows returned by the query\n *\n * @param limit\tThe number of rows to limit the return to\n */\nvoid Query::limit(unsigned int limit)\n{\n\tm_limit = limit;\n}\n\n/**\n * Add a timebucket operation to an existing query\n *\n * @param timebucket\tThe timebucket operation to add to the query\n */\nvoid Query::timebucket(Timebucket *timebucket)\n{\n\tm_timebucket = timebucket;\n}\n\n/**\n * Limit the query to return just a single column\n *\n * @param returns\tThe column to return\n */\nvoid Query::returns(Returns *returns)\n{\n\tm_returns.push_back(returns);\n}\n\n/**\n * Limit the columns returned by the query\n *\n * @param returns\tThe columns to return\n */\nvoid Query::returns(vector<Returns *> returns)\n{\n\tfor (auto it = returns.cbegin(); it != returns.cend(); ++it)\n\t{\n\t\tm_returns.push_back(*it);\n\t}\n}\n\n/**\n * Add a join clause to a query\n *\n * @param join\tA pointer to a Join object\n */\nvoid Query::join(Join *join)\n{\n\tm_join = join;\n}\n\n/**\n * Add a distinct value modifier to the query\n */\nvoid Query::distinct()\n{\n\tm_distinct = true;\n}\n\n/**\n * Return the JSON payload for a where clause\n */\nconst string Query::toJSON() const\n{\nostringstream   json;\nbool \t\tfirst = 
true;\n\n\tjson << \"{ \";\n\tif (m_where)\n\t{\n\t\tif (! first)\n\t\t\tjson << \", \";\n\t\tjson << \"\\\"where\\\" : \" << m_where->toJSON();\n\t\tfirst = false;\n\t}\n\tif (m_join)\n\t{\n\t\tif (! first)\n\t\t\tjson << \", \";\n\t\tfirst = false;\n\t\tjson << m_join->toJSON();\n\t}\n\tswitch (m_aggregates.size())\n\t{\n\tcase 0:\n\t\tbreak;\n\tcase 1:\n\t\tif (! first)\n\t\t\tjson << \", \";\n\t\tjson << \"\\\"aggregate\\\" : \" << m_aggregates.front()->toJSON();\n\t\tfirst = false;\n\t\tbreak;\n\tdefault:\n\t\tif (! first)\n\t\t\tjson << \", \";\n\t\tjson << \"\\\"aggregate\\\" : [ \";\n\t\tfor (auto it = m_aggregates.cbegin(); it != m_aggregates.cend(); ++it)\n\t\t{\n\t\t\tif (it != m_aggregates.cbegin())\n\t\t\t\tjson << \", \";\n\t\t\tjson << (*it)->toJSON();\n\t\t}\n\t\tjson << \" ]\";\n\t\tfirst = false;\n\t\tbreak;\n\t}\n\tif (!m_group.empty())\n\t{\n\t\tif (! first)\n\t\t\tjson << \", \";\n\t\tjson << \"\\\"group\\\" : \\\"\" << m_group << \"\\\"\";\n\t\tfirst = false;\n\t}\n\tswitch (m_sort.size())\n\t{\n\tcase 0:\n\t\tbreak;\n\tcase 1:\n\t\tif (! first)\n\t\t\tjson << \", \";\n\t\tjson << \"\\\"sort\\\" : \" << m_sort.front()->toJSON();\n\t\tfirst = false;\n\t\tbreak;\n\tdefault:\n\t\tif (! first)\n\t\t\tjson << \", \";\n\t\tjson << \"\\\"sort\\\" : [ \";\n\t\tfor (auto it = m_sort.cbegin(); it != m_sort.cend(); ++it)\n\t\t{\n\t\t\tif (it != m_sort.cbegin())\n\t\t\t\tjson << \", \";\n\t\t\tjson << (*it)->toJSON();\n\t\t}\n\t\tjson << \" ]\";\n\t\tfirst = false;\n\t\tbreak;\n\t}\n\tif (m_timebucket)\n\t{\n\t\tif (! first)\n\t\t\tjson << \", \";\n\t\tjson << \"\\\"timebucket\\\" : \" << m_timebucket->toJSON();\n\t\tfirst = false;\n\t}\n\tif (m_limit)\n\t{\n\t\tif (! first)\n\t\t\tjson << \", \";\n\t\tjson << \"\\\"limit\\\" : \" << m_limit;\n\t\tfirst = false;\n\t}\n\tif (m_returns.size())\n\t{\n\t\tif (! 
first)\n\t\t\tjson << \", \";\n\t\tjson << \"\\\"return\\\" : [ \";\n\t\tfor (auto it = m_returns.cbegin(); it != m_returns.cend(); ++it)\n\t\t{\n\t\t\tif (it != m_returns.cbegin())\n\t\t\t\tjson << \", \";\n\t\t\tjson << (*it)->toJSON();\n\t\t}\n\t\tjson << \" ]\";\n\t\tfirst = false;\n\t}\n\tif (m_distinct)\n\t{\n\t\tif (! first)\n\t\t\tjson << \", \";\n\t\tjson << \"\\\"modifier\\\" : \\\"distinct\\\"\";\n\t\tfirst = false;\n\t}\n\tjson << \" }\";\n\treturn json.str();\n}\n"
  },
  {
    "path": "C/common/reading.cpp",
    "content": "/*\n * Fledge storage service client\n *\n * Copyright (c) 2018 OSIsoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <reading.h>\n#include <ctime>\n#include <string>\n#include <sstream>\n#include <iostream>\n#include <mutex>\n#include <time.h>\n#include <string.h>\n#include <logger.h>\n#include <rapidjson/document.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\nstd::vector<std::string> Reading::m_dateTypes = {\n\tDEFAULT_DATE_TIME_FORMAT,\n\tCOMBINED_DATE_STANDARD_FORMAT,\n\tISO8601_DATE_TIME_FORMAT,\n\tISO8601_DATE_TIME_FORMAT\t// Version with milliseconds\n};\n\n/**\n * Reading constructor\n *\n * A reading is a container for the values related to a single asset.\n * Each actual datavalue that relates to that asset is held within an\n * instance of a Datapoint class.\n */\nReading::Reading(const string& asset, Datapoint *value) : m_asset(asset), m_has_id(false), m_id(0)\n{\n\tm_values.push_back(value);\n\t// Store seconds and microseconds\n\tgettimeofday(&m_timestamp, NULL);\n\t// Initialise m_userTimestamp\n\tm_userTimestamp = m_timestamp;\n}\n\n/**\n * Reading constructor\n *\n * A reading is a container for the values related to a single asset.\n * Each actual datavalue that relates to that asset is held within an\n * instance of a Datapoint class.\n */\nReading::Reading(const string& asset, vector<Datapoint *> values) : m_asset(asset), m_has_id(false), m_id(0)\n{\n\tfor (auto it = values.cbegin(); it != values.cend(); it++)\n\t{\n\t\tm_values.push_back(*it);\n\t}\n\t// Store seconds and microseconds\n\tgettimeofday(&m_timestamp, NULL);\n\t// Initialise m_userTimestamp\n\tm_userTimestamp = m_timestamp;\n}\n\n/**\n * Reading constructor\n *\n * A reading is a container for the values related to a single asset.\n * Each actual datavalue that relates to that asset is held within an\n * instance of a Datapoint class.\n */\nReading::Reading(const string& asset, 
vector<Datapoint *> values, const string& ts) : m_asset(asset), m_has_id(false), m_id(0)\n{\n\tfor (auto it = values.cbegin(); it != values.cend(); it++)\n\t{\n\t\tm_values.push_back(*it);\n\t}\n\tstringToTimestamp(ts, &m_timestamp);\n\t// Initialise m_userTimestamp\n\tm_userTimestamp = m_timestamp;\n}\n\n/**\n * Construct a reading with datapoints given as JSON\n */\nReading::Reading(const string& asset, const string& datapoints) : m_asset(asset), m_has_id(false), m_id(0)\n{\n\tDocument d;\n\tif (d.Parse(datapoints.c_str()).HasParseError())\n\t{\n\t\tthrow runtime_error(\"Failed to parse reading datapoints \" + datapoints);\n\t}\n\tfor (Value::ConstMemberIterator itr = d.MemberBegin(); itr != d.MemberEnd(); ++itr)\n\t{\n\t\tstring name = itr->name.GetString();\n\t\tif (itr->value.IsInt64())\n\t\t{\n\t\t\tlong v = itr->value.GetInt64();\n\t\t\tDatapointValue dpv(v);\n\t\t\tm_values.push_back(new Datapoint(name, dpv));\n\t\t}\n\t\telse if (itr->value.IsDouble())\n\t\t{\n\t\t\tdouble v = itr->value.GetDouble();\n\t\t\tDatapointValue dpv(v);\n\t\t\tm_values.push_back(new Datapoint(name, dpv));\n\t\t}\n\t\telse if (itr->value.IsString())\n\t\t{\n\t\t\tstring v = itr->value.GetString();\n\t\t\tDatapointValue dpv(v);\n\t\t\tm_values.push_back(new Datapoint(name, dpv));\n\t\t}\n\t\telse if (itr->value.IsObject())\n\t\t{\n\t\t\t// Map objects as nested datapoints\n\t\t\tvector<Datapoint *> *values = JSONtoDatapoints(itr->value);\n\t\t\tDatapointValue dpv(values, true);\n\t\t\tm_values.push_back(new Datapoint(name, dpv));\n\t\t}\n\t\telse if (itr->value.IsArray())\n\t\t{\n\t\t\tvector<double> arr;\n\t\t\tfor (auto& v : itr->value.GetArray())\n\t\t\t{\n\t\t\t\tif (v.IsNumber())\n\t\t\t\t\tarr.emplace_back(v.GetDouble());\n\t\t\t\telse\n\t\t\t\t\tthrow runtime_error(\"Only numeric lists are currently supported in datapoints\");\n\t\t\t}\n\n\t\t\tDatapointValue dpv(arr);\n\t\t\tm_values.emplace_back(new Datapoint(name, dpv));\n\t\t}\n\t}\n\t// Store seconds and 
microseconds\n\tgettimeofday(&m_timestamp, NULL);\n\t// Initialise m_userTimestamp\n\tm_userTimestamp = m_timestamp;\n}\n\n/**\n * Reading copy constructor\n */\nReading::Reading(const Reading& orig) : m_asset(orig.m_asset),\n\tm_timestamp(orig.m_timestamp),\n\tm_userTimestamp(orig.m_userTimestamp),\n\tm_has_id(orig.m_has_id), m_id(orig.m_id)\n{\n\tfor (auto it = orig.m_values.cbegin(); it != orig.m_values.cend(); it++)\n\t{\n\t\tm_values.emplace_back(new Datapoint(**it));\n\t}\n}\n\n/**\n * Destructor for Reading class\n */\nReading::~Reading()\n{\n\tfor (auto it = m_values.cbegin(); it != m_values.cend(); it++)\n\t{\n\t\tdelete(*it);\n\t}\n}\n\n/**\n * Remove all data points for Reading class\n */\nvoid Reading::removeAllDatapoints()\n{\n\tfor (auto it = m_values.cbegin(); it != m_values.cend(); it++)\n\t{\n\t\tdelete(*it);\n\t}\n\tm_values.clear();\n}\n\n/**\n * Add another data point to an asset reading\n */\nvoid Reading::addDatapoint(Datapoint *value)\n{\n\tm_values.push_back(value);\n}\n\n/**\n * Remove a datapoint from the reading\n *\n * @param name\tName of the datapoint to remove\n * @return\tPointer to the datapoint removed or NULL if it was not found\n */\nDatapoint *Reading::removeDatapoint(const string& name)\n{\nDatapoint *rval;\n\n\tfor (auto it = m_values.begin(); it != m_values.end(); it++)\n\t{\n\t\tif ((*it)->getName().compare(name) == 0)\n\t\t{\n\t\t\trval = *it;\n\t\t\tm_values.erase(it);\n\t\t\treturn rval;\n\t\t}\n\t}\n\treturn NULL;\n}\n\n/**\n * Return a specific data point by name\n *\n * @param name\t\tThe name of the datapoint to return\n * @return\t\tPointer to the named datapoint or NULL if it does not exist\n */\nDatapoint *Reading::getDatapoint(const string& name) const\n{\nDatapoint *rval;\n\n\tfor (auto it = m_values.cbegin(); it != m_values.cend(); it++)\n\t{\n\t\tif ((*it)->getName().compare(name) == 0)\n\t\t{\n\t\t\trval = *it;\n\t\t\treturn rval;\n\t\t}\n\t}\n\treturn NULL;\n}\n\n/**\n * Return the asset reading as a JSON 
structure encoded in a\n * C++ string.\n */\nstring Reading::toJSON(bool minimal) const\n{\nostringstream convert;\n\n\tconvert << \"{\\\"asset_code\\\":\\\"\";\n\tconvert << escape(m_asset);\n\tconvert << \"\\\",\\\"user_ts\\\":\\\"\";\n\n\t// Add date_time with microseconds + timezone UTC:\n\t// YYYY-MM-DD HH24:MM:SS.MS+00:00\n\tconvert << getAssetDateUserTime(FMT_DEFAULT) << \"+00:00\";\n\tif (!minimal)\n\t{\n\t\tconvert << \"\\\",\\\"ts\\\":\\\"\";\n\n\t\t// Add date_time with microseconds + timezone UTC:\n\t\t// YYYY-MM-DD HH24:MM:SS.MS+00:00\n\t\tconvert << getAssetDateTime(FMT_DEFAULT) << \"+00:00\";\n\t}\n\n\t// Add values\n\tconvert << \"\\\",\\\"reading\\\":{\";\n\tfor (auto it = m_values.cbegin(); it != m_values.cend(); it++)\n\t{\n\t\tif (it != m_values.cbegin())\n\t\t{\n\t\t\tconvert << \",\";\n\t\t}\n\t\tconvert << (*it)->toJSONProperty();\n\t}\n\tconvert << \"}}\";\n\treturn convert.str();\n}\n\n/**\n * Return the datapoints of the asset reading as a JSON structure\n * encoded in a C++ string.\n */\nstring Reading::getDatapointsJSON() const\n{\nostringstream convert;\n\n\tconvert << \"{\";\n\tfor (auto it = m_values.cbegin(); it != m_values.cend(); it++)\n\t{\n\t\tif (it != m_values.cbegin())\n\t\t{\n\t\t\tconvert << \",\";\n\t\t}\n\t\tconvert << (*it)->toJSONProperty();\n\t}\n\tconvert << \"}\";\n\treturn convert.str();\n}\n\n/**\n * Convert time since epoch to a formatted m_timestamp DateTime in UTC \n * and use a cache to speed it up\n * @param tv_sec\tSeconds since epoch\n * @param date_time\tBuffer in which to return the formatted timestamp\n * @param dateFormat\tFormat: FMT_DEFAULT or FMT_STANDARD\n */\nvoid Reading::getFormattedDateTimeStr(const time_t *tv_sec, char *date_time, readingTimeFormat dateFormat) const\n{\n\tstatic unsigned long cached_sec_since_epoch = 0;\n\tstatic char cached_date_time_str[DATE_TIME_BUFFER_LEN] = \"\";\n\tstatic readingTimeFormat cachedDateFormat = (readingTimeFormat) 0xff;\n\tstatic std::mutex 
mtx;\n\n\tstd::unique_lock<std::mutex> lck(mtx);\n\n\tif(*cached_date_time_str && cached_sec_since_epoch && *tv_sec == cached_sec_since_epoch && cachedDateFormat == dateFormat)\n\t{\n\t\tstrncpy(date_time, cached_date_time_str, DATE_TIME_BUFFER_LEN);\n\t\tdate_time[DATE_TIME_BUFFER_LEN-1] = '\\0';\n\t\treturn;\n\t}\n\n\tstruct tm timeinfo;\n\tgmtime_r(tv_sec, &timeinfo);\n\n\t/**\n\t * Build date_time with format YYYY-MM-DD HH24:MM:SS.MS+00:00\n\t * this is same as Python3:\n\t * datetime.datetime.now(tz=datetime.timezone.utc)\n\t */\n\n\t// Create datetime with seconds\n\tstd::strftime(date_time, DATE_TIME_BUFFER_LEN,\n\t      m_dateTypes[dateFormat].c_str(),\n                  &timeinfo);\n\n\t// update cache\n\tstrncpy(cached_date_time_str, date_time, DATE_TIME_BUFFER_LEN);\n\tcached_date_time_str[DATE_TIME_BUFFER_LEN-1] = '\\0';\n\tcached_sec_since_epoch = *tv_sec;\n\tcachedDateFormat = dateFormat;\n}\n\n\n/**\n * Return a formatted m_timestamp DateTime in UTC\n * @param dateFormat    Format: FMT_DEFAULT or FMT_STANDARD\n * @return              The formatted datetime string\n */\nconst string Reading::getAssetDateTime(readingTimeFormat dateFormat, bool addMS) const\n{\nchar date_time[DATE_TIME_BUFFER_LEN];\nchar micro_s[10];\nchar  assetTime[DATE_TIME_BUFFER_LEN + 20];\n\n\tgetFormattedDateTimeStr(&m_timestamp.tv_sec, date_time, dateFormat);\n\n\tif (dateFormat != FMT_ISO8601 && addMS)\n\t{\n\t\t// Add microseconds\n\t\tsnprintf(micro_s,\n\t\t\t sizeof(micro_s),\n\t\t\t \".%06lu\",\n\t\t\t m_timestamp.tv_usec);\n\n\t\t// Add date_time + microseconds\n\t\tif (dateFormat != FMT_ISO8601MS)\n\t\t{\n\t\t\tsnprintf(assetTime, sizeof(assetTime), \"%s%s\", date_time, micro_s);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring dt(date_time);\n\t\t\tsize_t pos = dt.find_first_of(\"+\");\n\t\t\tpos--;\n\t\t\tsnprintf(assetTime, sizeof(assetTime), \"%s%s%s\",\n\t\t\t\t\tdt.substr(0, pos).c_str(),\n\t\t\t\t       \tmicro_s, dt.substr(pos).c_str());\n\t\t}\n\t\treturn 
string(assetTime);\n\t}\n\telse\n\t{\n\t\treturn string(date_time);\n\t}\n\n}\n\n/**\n * Return a formatted m_userTimestamp DateTime in UTC\n * @param dateFormat    Format: FMT_DEFAULT or FMT_STANDARD\n * @return              The formatted datetime string\n */\nconst string Reading::getAssetDateUserTime(readingTimeFormat dateFormat, bool addMS) const\n{\n\tchar date_time[DATE_TIME_BUFFER_LEN+10];\n\tchar micro_s[10];\n\tchar  assetTime[DATE_TIME_BUFFER_LEN + 20];\n\n\tgetFormattedDateTimeStr(&m_userTimestamp.tv_sec, date_time, dateFormat);\n\t\n\tif (dateFormat != FMT_ISO8601 && addMS)\n\t{\n\t\t// Add microseconds\n\t\tsnprintf(micro_s,\n\t\t\t sizeof(micro_s),\n\t\t\t \".%06lu\",\n\t\t\t m_userTimestamp.tv_usec);\n\n\t\t// Add date_time + microseconds\n\t\tif (dateFormat != FMT_ISO8601MS)\n\t\t{\n\t\t\tsnprintf(assetTime, sizeof(assetTime), \"%s%s\", date_time, micro_s);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring dt(date_time);\n\t\t\tsize_t pos = dt.find_first_of(\"+\");\n\t\t\tpos--;\n\t\t\tsnprintf(assetTime, sizeof(assetTime), \"%s%s%s\",\n\t\t\t\t\tdt.substr(0, pos).c_str(),\n\t\t\t\t       \tmicro_s, dt.substr(pos).c_str());\n\t\t}\n\t\treturn string(assetTime);\n\t}\n\telse\n\t{\n\t\treturn string(date_time);\n\t}\n}\n\n/**\n * Set the system timestamp from a string of the format\n * 2019-01-01 10:00:00.123456+08:00\n * The timeval is populated in UTC\n *\n * @param timestamp\tThe timestamp string\n */\nvoid Reading::setTimestamp(const string& timestamp)\n{\n\tstringToTimestamp(timestamp, &m_timestamp);\n}\n\n/**\n * Set the user timestamp from a string of the format\n * 2019-01-01 10:00:00.123456+08:00\n * The timeval is populated in UTC\n *\n * @param timestamp\tThe timestamp string\n */\nvoid Reading::setUserTimestamp(const string& timestamp)\n{\n\tstringToTimestamp(timestamp, &m_userTimestamp);\n}\n\n/**\n * Convert a string timestamp, with milliseconds to a \n * struct timeval.\n *\n * Timezone handling\n *    The timezone in the string is extracted to get 
UTC values.\n *    Times within a reading are always stored as UTC\n *\n * @param timestamp\tString timestamp\n * @param ts\t\tStruct timeval to populate\n */\nvoid Reading::stringToTimestamp(const string& timestamp, struct timeval *ts)\n{\n\tstatic std::mutex mtx;\n\tstatic char cached_timestamp_upto_min[32] = \"\";\n\tstatic unsigned long cached_sec_since_epoch = 0;\n\n\tconst int timestamp_str_len_till_min = 16;\n\tconst int timestamp_str_len_till_sec = 19;\n\t\n\tchar date_time [DATE_TIME_BUFFER_LEN];\n\n\tstrcpy (date_time, timestamp.c_str());\n\n\t{\n\t\tlock_guard<mutex> guard(mtx);\n\t\t\n\t\tchar timestamp_sec[32];\n\t\tstrncpy(timestamp_sec, date_time, timestamp_str_len_till_sec);\n\t\ttimestamp_sec[timestamp_str_len_till_sec] = '\\0';\n\t\tif(*cached_timestamp_upto_min && cached_sec_since_epoch && (strncmp(timestamp_sec, cached_timestamp_upto_min, timestamp_str_len_till_min) == 0))\n\t\t{\n\t\t\t// cache hit\n\t\t\tint sec_part = strtoul(timestamp_sec+timestamp_str_len_till_min+1, NULL, 10);\n\t\t\tts->tv_sec = cached_sec_since_epoch + sec_part;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// cache miss\n\t\t\tstruct tm tm;\n\t\t\tmemset(&tm, 0, sizeof(struct tm));\n\t\t\tstrptime(date_time, \"%Y-%m-%d %H:%M:%S\", &tm);\n\t\t\t// Convert time to epoch - mktime assumes localtime so must adjust for that\n\t\t\tts->tv_sec = mktime(&tm);\n\n\t\t\textern long timezone;\n\t\t\tts->tv_sec -= timezone;\n\n\t\t\tstrncpy(cached_timestamp_upto_min, timestamp_sec, timestamp_str_len_till_min);\n\t\t\tcached_timestamp_upto_min[timestamp_str_len_till_min] = '\\0';\n\t\t\tcached_sec_since_epoch = ts->tv_sec - tm.tm_sec;  // store only for upto-minute part\n\t\t}\n\t}\n\t\n\t// Now process the fractional seconds\n\tconst char *ptr = date_time;\n\twhile (*ptr && *ptr != '.')\n\t\tptr++;\n\tif (*ptr)\n\t{\n\t\tchar *eptr;\n\t\tts->tv_usec = strtol(ptr + 1, &eptr, 10);\n\t\tint digits = eptr - (ptr + 1);\t// Number of digits we have\n\t\twhile (digits < 
6)\n\t\t{\n\t\t\tdigits++;\n\t\t\tts->tv_usec *= 10;\n\t\t}\n\t}\n\telse\n\t{\n\t\tts->tv_usec = 0;\n\t}\n\n\t// Get the timezone from the string and convert to UTC\n\tptr = date_time + 10; // Skip date as it contains '-' characters\n\twhile (*ptr && *ptr != '-' && *ptr != '+')\n\t\tptr++;\n\tif (*ptr)\n\t{\n\t\tint h, m;\n\t\tint sign = (*ptr == '+' ? -1 : +1);\n\t\th = strtoul(ptr+1, NULL, 10);\n\t\tm = strtoul(ptr+4, NULL, 10);\n\t\tts->tv_sec += sign * ((3600 * h) + (60 * m));\n\t}\n}\n\n\n/**\n * Escape quotes etc to allow the string to be a property value within\n * a JSON document\n *\n * @param str\tThe string to escape\n * @return The escaped string\n */\nconst string Reading::escape(const string& str) const\n{\nstring rval;\nint bscount = 0;\n\n\tfor (size_t i = 0; i < str.length(); i++)\n\t{\n\t\tif (str[i] == '\\\\')\n\t\t{\n\t\t\t// Guard the look-behind so we never read str[-1] when i == 0\n\t\t\tif (i + 1 < str.length() && (str[i + 1] == '\"' || str[i + 1] == '\\\\' || str[i + 1] == '/' || (i > 0 && str[i-1] == '\\\\')))\n\t\t\t{\n\t\t\t\trval += '\\\\';\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\trval += \"\\\\\\\\\";\n\t\t\t}\n\t\t\tbscount++;\n\t\t}\n\t\telse if (str[i] == '\\\"')\n\t\t{\n\t\t\tif ((bscount & 1) == 0)\t// not already escaped\n\t\t\t{\n\t\t\t\trval += \"\\\\\";\t// Add escape of \"\n\t\t\t}\n\t\t\trval += str[i];\n\t\t\tbscount = 0;\n\t\t}\n\t\telse\n\t\t{\n\t\t\trval += str[i];\n\t\t\tbscount = 0;\n\t\t}\n\t}\n\treturn rval;\n}\n\n/**\n * Convert a JSON Value object to a set of data points\n *\n * @param json\tThe json object to convert\n */\nvector<Datapoint *> *Reading::JSONtoDatapoints(const Value& json)\n{\nvector<Datapoint *> *values = new vector<Datapoint *>;\n\n\tfor (Value::ConstMemberIterator itr = json.MemberBegin(); itr != json.MemberEnd(); ++itr)\n\t{\n\t\tstring name = itr->name.GetString();\n\t\tif (itr->value.IsInt64())\n\t\t{\n\t\t\tlong v = itr->value.GetInt64();\n\t\t\tDatapointValue dpv(v);\n\t\t\tvalues->push_back(new Datapoint(name, dpv));\n\t\t}\n\t\telse if 
(itr->value.IsDouble())\n\t\t{\n\t\t\tdouble v = itr->value.GetDouble();\n\t\t\tDatapointValue dpv(v);\n\t\t\tvalues->push_back(new Datapoint(name, dpv));\n\t\t}\n\t\telse if (itr->value.IsString())\n\t\t{\n\t\t\tstring v = itr->value.GetString();\n\t\t\tDatapointValue dpv(v);\n\t\t\tvalues->push_back(new Datapoint(name, dpv));\n\t\t}\n\t\telse if (itr->value.IsObject())\n\t\t{\n\t\t\t// Map objects as nested datapoints\n\t\t\tvector<Datapoint *> *nestedValues = JSONtoDatapoints(itr->value);\n\t\t\tDatapointValue dpv(nestedValues, true);\n\t\t\tvalues->push_back(new Datapoint(name, dpv));\n\t\t}\n\t\telse if (itr->value.IsArray())\n\t\t{\n\t\t\tvector<double> arr;\n\t\t\tfor (auto& v : itr->value.GetArray())\n\t\t\t{\n\t\t\t\tif (v.IsNumber())\n\t\t\t\t\tarr.emplace_back(v.GetDouble());\n\t\t\t\telse\n\t\t\t\t\tthrow runtime_error(\"Only numeric lists are currently supported in datapoints\");\n\t\t\t}\n\n\t\t\tDatapointValue dpv(arr);\n\t\t\tvalues->emplace_back(new Datapoint(name, dpv));\n\t\t}\n\t}\n\treturn values;\n}\n\n/**\n * Create the information about the macros to substitute in the given string\n *\n * @param str\tThe string we are substituting\n * @param macros\tVector of macros to build up\n */\nvoid Reading::collectMacroInfo(const string& str, vector<Macro>& macros)\n{\n\tstring::size_type start = str.find('$');\n\tstring::size_type end = str.find('$', start + 1);\n\n\twhile (start != string::npos && end != string::npos) \n\t{\n\t\tstring::size_type bar = str.find('|', start + 1);\n\t\tif (bar != string::npos && bar < end && bar > start + 1) \n\t\t{\n\t\t\tstring def = str.substr(bar + 1, end - bar - 1);\n\t\t\tmacros.emplace_back(Macro(str.substr(start + 1, bar - start - 1), start, def));\n\t\t}\n\t\telse if (end > start + 1)\n\t\t{\n\t\t\tmacros.emplace_back(Macro(str.substr(start + 1, end - start - 1), start));\n\t\t}\n\t\tstart = str.find('$', end + 1);\n\t\tend = str.find('$', start + 1);\n\t}\n}\n\n/**\n * Substitute values from this reading into 
the string.\n * Macros are of the form $datapointname$, $ASSET$ or \n * $datapointname|default$\n *\n * @param str\tThe string to substitute into\n * @return string\tThe substituted string\n */\nstring Reading::substitute(const string& str)\n{\n\tstring rval = str;\n\tvector<Macro> macros;\n\tcollectMacroInfo(rval, macros);\n\t// Replace Macros by datapoint value\n\tfor (auto it =  macros.rbegin(); it != macros.rend(); ++it)\n\t{\n\t\t// In case of ASSET Macro, replace it by asset name instead of datapoint value\n\t\tif (it->name == \"ASSET\")\n\t\t{\n\t\t\trval.replace(it->start, it->name.length()+2, this->getAssetName() );\n\t\t\tcontinue;\n\t\t}\n\t\tDatapoint * datapoint = this->getDatapoint(it->name);\n\n\t\tif (datapoint)\n\t\t{\n\t\t\t// Check for datapoint type for string and numbers\n\t\t\tDatapointValue::dataTagType dataType = datapoint->getData().getType();\n\t\t\tif (\n\t\t\t\tdataType != DatapointValue::dataTagType::T_STRING &&\n\t\t\t\tdataType != DatapointValue::dataTagType::T_INTEGER &&\n\t\t\t\tdataType != DatapointValue::dataTagType::T_FLOAT\n\t\t\t)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"The datapoint %s cannot be used as a macro substitution as it is not a string or numeric value\",it->name.c_str());\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tstring datapointValue = \"\";\n\t\t\tswitch (dataType)\n\t\t\t{\n\t\t\t\tcase DatapointValue::dataTagType::T_INTEGER: \n\t\t\t\t\tdatapointValue = std::to_string(datapoint->getData().toInt());\n\t\t\t\t\tbreak;\n\t\t\t\tcase DatapointValue::dataTagType::T_FLOAT: \n\t\t\t\t\tdatapointValue = std::to_string(datapoint->getData().toDouble());\n\t\t\t\t\tbreak;\n\t\t\t\tdefault: \n\t\t\t\tdatapointValue = datapoint->getData().toStringValue();\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\trval.replace(it->start, it->name.length()+2\n\t\t\t\t\t\t+ (it->def.empty() ? 
0 : it->def.length() + 1),\n\t\t\t\t\t\tdatapointValue );\n\t\t}\n\t\telse if (!it->def.empty())\n\t\t{\n\t\t\trval.replace(it->start, it->name.length() + it->def.length() + 3, it->def);\n\t\t}\n\t\telse\n\t\t{\n\t\t\trval.replace(it->start, it->name.length()+2, \"\");\n\t\t}\n\t}\n\treturn rval;\n}\n"
  },
  {
    "path": "C/common/reading_circularbuffer.cpp",
    "content": "/*\n * Fledge reading circular buffer class\n *\n * Copyright (c) 2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <reading_circularbuffer.h>\n\nusing namespace std;\n\n/**\n * Create a circular buffer of readings\n *\n * @param size\tThe number of items to retain in the circular buffer\n */\nReadingCircularBuffer::ReadingCircularBuffer(unsigned int size) : m_size(size),\n       \tm_insert(0), m_entries(0)\n{\n\tm_readings.resize(size, NULL);\n}\n\n/**\n * Destructor for the circular buffer\n */\nReadingCircularBuffer::~ReadingCircularBuffer()\n{\n\tlock_guard<mutex> guard(m_mutex);\n\tfor (int i = 0; i < m_entries; i++)\n\t\tm_readings[i] = NULL;\n}\n\n/**\n * Insert a single reading into the shared buffer\n *\n * @param reading\tThe reading to insert\n */\nvoid ReadingCircularBuffer::insert(Reading *reading)\n{\n\tlock_guard<mutex> guard(m_mutex);\n\tif (m_entries == m_size)\n\t\tm_readings[m_insert] = NULL;\n\telse\n\t\tm_entries++;\n\tshared_ptr<Reading> copy(new Reading(*reading));\n\tm_readings[m_insert] = copy;\n\tm_insert++;\n\tif (m_insert >= m_size)\n\t\tm_insert = 0;\n}\n\n/**\n * Insert a list of readings into the circular buffer\n *\n * @param readings\tThe set of readings to ingest\n */\nvoid ReadingCircularBuffer::insert(const vector<Reading *>& readings)\n{\n\tfor (auto& reading : readings)\n\t\tinsert(reading);\n}\n\n/**\n * Insert a list of readings into the circular buffer\n *\n * @param readings\tThe set of readings to ingest\n */\nvoid ReadingCircularBuffer::insert(const vector<Reading *> *readings)\n{\n\tfor (auto& reading : *readings)\n\t\tinsert(reading);\n}\n\n/**\n * Return the buffered data into a supplied vector\n *\n * @param vec\tThe vector to populate with the shared pointers\n * @return int\tThe number of readings placed in the vector\n */\nint ReadingCircularBuffer::extract(vector<shared_ptr<Reading>>& vec)\n{\n\tint start = 0;\n\tlock_guard<mutex> 
guard(m_mutex);\n\tif (m_entries == m_size)\n\t{\n\t\tstart = (m_insert + 1) % m_size;\n\t}\n\tfor (int i = 0; i < m_entries; i++)\n\t{\n\t\tvec.push_back(m_readings[start % m_size]);\n\t\tstart++;\n\t}\n\treturn m_entries;\n}\n"
  },
  {
    "path": "C/common/reading_set.cpp",
    "content": "/*\n * Fledge storage service client\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <reading_set.h>\n#include <string>\n#include <rapidjson/document.h>\n#include <sstream>\n#include <iostream>\n#include <time.h>\n#include <stdlib.h>\n#include <logger.h>\n#include <base64databuffer.h>\n#include <base64dpimage.h>\n\n#include <boost/algorithm/string/replace.hpp>\n\n#define ASSET_NAME_INVALID_READING \"error_invalid_reading\"\n\nstatic const char* kTypeNames[] =\n    { \"Null\", \"False\", \"True\", \"Object\", \"Array\", \"String\", \"Number\" };\n\nusing namespace std;\nusing namespace rapidjson;\n\n// List of characters to be escaped in JSON\nconst vector<string> JSON_characters_to_be_escaped = {\n\t\"\\\\\",\n\t\"\\\"\"\n};\n\n/**\n * Construct an empty reading set\n */\nReadingSet::ReadingSet() : m_count(0), m_last_id(0)\n{\n}\n\n/**\n * Construct a reading set from a vector<Reading *> pointer\n * NOTE: readings are copied into m_readings\n *\n * @param readings\tThe  vector<Reading *> pointer\n *\t\t\tof readings to be copied\n *\t\t\tinto m_readings vector\n */\nReadingSet::ReadingSet(const vector<Reading *>* readings) : m_last_id(0)\n{\n\tm_count = readings->size();\n\tfor (auto it = readings->begin(); it != readings->end(); ++it)\n\t{\n\t\tif ((*it)->hasId() && (*it)->getId() > m_last_id)\n\t\t\tm_last_id = (*it)->getId();\n\t\tm_readings.push_back(*it);\n\t}\n}\n\n/**\n * Construct a reading set from a JSON document returned from\n * the Fledge storage service query or notification. 
The JSON\n * is parsed using the in-situ RapidJSON parser in order to\n * reduce overhead on what is most likely a large JSON document.\n *\n * WARNING: Although the string passed in is defined as const\n * this call is destructive to this string and the contents\n * of the string should not be used after making this call.\n *\n * @param json\tThe JSON document (as string) with readings data\n */\nReadingSet::ReadingSet(const std::string& json) : m_last_id(0)\n{\n\tunsigned long rows = 0;\n\tDocument doc;\n\tdoc.ParseInsitu((char *)json.c_str());\t// Cast away const in order to use in-situ\n\tif (doc.HasParseError())\n\t{\n\t\tthrow new ReadingSetException(\"Unable to parse results json document\");\n\t}\n\t// Determine whether we have \"rows\" (query) or \"readings\" (notification)\n\tbool docHasRows =  doc.HasMember(\"rows\"); // Query\n\tbool docHasReadings =  doc.HasMember(\"readings\"); // Notification\n\n\t// Check we have \"rows\" or \"readings\"\n\tif (!docHasRows && !docHasReadings)\n\t{\n\t\tthrow new ReadingSetException(\"Missing readings or rows array\");\n\t}\n\n\t// Check we have \"count\" and \"rows\"\n\tif (doc.HasMember(\"count\") && docHasRows)\n\t{\n\t\tm_count = doc[\"count\"].GetUint();\n\t\t// No readings\n\t\tif (!m_count)\n\t\t{\n\t\t\tm_last_id = 0;\n\t\t\treturn;\n\t\t}\n\t}\n\telse\n\t{\n\t\t// These fields might be updated later\n\t\tm_count = 0;\n\t\tm_last_id = 0;\n\t}\n\n\t// Get \"rows\" or \"readings\" data\n\tconst Value& readings = docHasRows ? 
doc[\"rows\"] : doc[\"readings\"];\n\tif (readings.IsArray())\n\t{\n\t\tunsigned long id = 0;\n\t\t// Process every row and create the result set\n\t\tfor (auto& reading : readings.GetArray())\n\t\t{\n\t\t\tif (!reading.IsObject())\n\t\t\t{\n\t\t\t\tthrow new ReadingSetException(\"Expected reading to be an object\");\n\t\t\t}\n\t\t\tJSONReading *value = new JSONReading(reading);\n\t\t\tm_readings.push_back(value);\n\n\t\t\t// Get the Reading Id\n\t\t\tid = value->getId();\n\n\t\t\t// We don't have count information with \"readings\"\n\t\t\tif (docHasReadings)\n\t\t\t{\n\t\t\t\trows++;\n\t\t\t}\n\n\t\t}\n\t\t// Set the last id\n\t\tm_last_id = id;\n\n\t\t// Set count information with \"readings\"\n\t\tif (docHasReadings)\n\t\t{\n\t\t\tm_count = rows;\n\t\t}\n\t}\n\telse\n\t{\n\t\tthrow new ReadingSetException(\"Expected array of rows in result set\");\n\t}\n}\n\n/**\n * Destructor for a result set\n */\nReadingSet::~ReadingSet()\n{\n\t/* Delete the readings */\n\tfor (auto it = m_readings.cbegin(); it != m_readings.cend(); it++)\n\t{\n\t\tdelete *it;\n\t}\n}\n\n/**\n * Append the readings in a second reading set to this reading set.\n * The readings are removed from the original reading set\n *\n * @param readings\tA ReadingSet to append to the current ReadingSet\n */\nvoid\nReadingSet::append(ReadingSet *readings)\n{\n\tvector<Reading *> *vec = readings->getAllReadingsPtr();\n\tappend(*vec);\n\treadings->clear();\n}\n\n/**\n * Append the readings in a second reading set to this reading set.\n * The readings are removed from the original reading set\n *\n * @param readings\tA ReadingSet to append to the current ReadingSet\n */\nvoid\nReadingSet::append(ReadingSet& readings)\n{\n\tvector<Reading *> *vec = readings.getAllReadingsPtr();\n\tappend(*vec);\n\treadings.clear();\n}\n\n/**\n * Append a set of readings to this reading set. 
The\n * readings are not copied, but rather moved from the\n * vector, with the resulting vector having the values\n * removed on return.\n *\n * It is assumed the readings in the vector have been\n * created with the new operator.\n *\n * @param readings\tA vector of Reading pointers to append to the ReadingSet\n */\nvoid\nReadingSet::append(vector<Reading *>& readings)\n{\n\tfor (auto it = readings.cbegin(); it != readings.cend(); it++)\n\t{\n\t\tif ((*it)->getId() > m_last_id)\n\t\t\tm_last_id = (*it)->getId();\n\t\tm_readings.push_back(*it);\n\t\tm_count++;\n\t}\n\treadings.clear();\n}\n\n/**\n * Merge the readings in a vector with the set of readings in the reading set.\n * The input reading vector must be ordered as per timestamp and cleared at the end of the operation.\n * @param readings\tA vector of Reading pointers to merge with the ReadingSet\n*/\nvoid ReadingSet::merge(std::vector<Reading *> *readings)\n{\n\tif (!readings || readings->empty()) {\n\t\treturn;\n\t}\n\n\tsize_t totalSize = m_readings.size() + readings->size();\n\tstd::vector<Reading*> merged;\n\tmerged.reserve(totalSize);\n\tmerged.resize(totalSize);  // make sure we can assign via operator[]\n\n\tsize_t i = 0;\n\tauto p1 = m_readings.begin();\n\tauto p2 = readings->begin();\n\n\twhile (p1 != m_readings.end() || p2 != readings->end()) {\n\t\tif (p1 != m_readings.end() && p2 != readings->end()) {\n\t\t\tstruct timeval ta, tb;\n\t\t\t(*p1)->getUserTimestamp(&ta);\n\t\t\t(*p2)->getUserTimestamp(&tb);\n\n\t\t\t// stable ordering: if equal, p1 wins\n\t\t\tif (timercmp(&ta, &tb, <=)) {\n\t\t\t\tmerged[i++] = *p1++;\n\t\t\t} else {\n\t\t\t\tif ((*p2)->hasId() && (*p2)->getId() > m_last_id) {\n\t\t\t\t\tm_last_id = (*p2)->getId();\n\t\t\t\t}\n\t\t\t\tmerged[i++] = *p2++;\n\t\t\t}\n\t\t} else if (p1 != m_readings.end()) {\n\t\t\tmerged[i++] = *p1++;\n\t\t} else if (p2 != readings->end()) {\n\t\t\tif ((*p2)->hasId() && (*p2)->getId() > m_last_id) {\n\t\t\t\tm_last_id = 
(*p2)->getId();\n\t\t\t}\n\t\t\tmerged[i++] = *p2++;\n\t\t}\n\t}\n\n\tm_readings = std::move(merged);\n\tm_count = m_readings.size();\n\t//Clear input readings vector\n\treadings->clear();\n}\n\n/**\n* Deep copy a set of readings to this reading set.\n*\n* @param src\tThe reading set to copy\n* @return bool\tTrue if the reading set was copied\n*/\nbool\nReadingSet::copy(const ReadingSet& src)\n{\n\tvector<Reading *> readings;\n\tbool copyResult = true;\n\ttry\n\t{\n\t\t// Iterate over all the readings in ReadingSet\n\t\tfor (auto const &reading : src.getAllReadings())\n\t\t{\n\t\t\tstring assetName = reading->getAssetName();\n\t\t\tstring ts = reading->getAssetDateUserTime();\n\t\t\tvector<Datapoint *> dataPoints;\n\t\t\ttry\n\t\t\t{\n\t\t\t\t// Iterate over all the datapoints associated with one reading\n\t\t\t\tfor (auto const &dp : reading->getReadingData())\n\t\t\t\t{\n\t\t\t\t\tstring dataPointName  = dp->getName();\n\t\t\t\t\tDatapointValue dv = dp->getData();\n\t\t\t\t\tdataPoints.emplace_back(new Datapoint(dataPointName, dv));\n\t\t\t\t\t\n\t\t\t\t}\n\t\t\t}\n\t\t\t// Catch exception while copying datapoints\n\t\t\tcatch(std::bad_alloc& ex)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Insufficient memory, failed while copying dataPoints from ReadingSet, %s \", ex.what());\n\t\t\t\tcopyResult = false;\n\t\t\t\tfor (auto const &dp : dataPoints)\n\t\t\t\t{\n\t\t\t\t\tdelete dp;\n\t\t\t\t}\n\t\t\t\tdataPoints.clear();\n\t\t\t\tthrow;\n\t\t\t}\n\t\t\tcatch (std::exception& ex)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Unknown exception, failed while copying datapoint from ReadingSet, %s \", ex.what());\n\t\t\t\tcopyResult = false;\n\t\t\t\tfor (auto const &dp : dataPoints)\n\t\t\t\t{\n\t\t\t\t\tdelete dp;\n\t\t\t\t}\n\t\t\t\tdataPoints.clear();\n\t\t\t\tthrow;\n\t\t\t}\n\t\t\t\n\t\t\tReading *in = new Reading(assetName, dataPoints, ts);\n\t\t\treadings.emplace_back(in);\n\t   }\n   }\n   // Catch exception while copying readings\n   catch (std::bad_alloc& 
ex)\n   {\n\t\tLogger::getLogger()->error(\"Insufficient memory, failed while copying %d reading from ReadingSet, %s \",readings.size()+1, ex.what());\n\t\tcopyResult = false;\n\t\tfor (auto const &r : readings)\n\t\t{\n\t\t\tdelete r;\n\t\t}\n\t\treadings.clear();\n   }\n   catch (std::exception& ex)\n   {\n\t\tLogger::getLogger()->error(\"Unknown exception, failed while copying %d reading from ReadingSet, %s \",readings.size()+1, ex.what());\n\t\tcopyResult = false;\n\t\tfor (auto const &r : readings)\n\t\t{\n\t\t\tdelete r;\n\t\t}\n\t\treadings.clear();\n   }\n\n   //Append if All elements have been copied successfully\n   if (copyResult)\n   {\n\t   append(readings);\n   }\n\n   return copyResult;\n}\n\n/**\n * Remove all readings from the reading set and delete the memory\n * After this call the reading set exists but contains no readings.\n */\nvoid\nReadingSet::removeAll()\n{\n\tfor (auto it = m_readings.cbegin(); it != m_readings.cend(); it++)\n\t{\n\t\tdelete *it;\n\t}\n\tm_readings.clear();\n\tm_count = 0;\n\tm_last_id = 0;\n}\n\n/**\n * Remove the readings from the vector without deleting them\n */\nvoid\nReadingSet::clear()\n{\n\tm_readings.clear();\n\tm_count = 0;\n\tm_last_id = 0;\n}\n\n/**\n * Remove readings from the vector and return a reference to new vector\n * containing readings*\n*/\nstd::vector<Reading*>* ReadingSet::moveAllReadings()\n{\n\tstd::vector<Reading*>* transferredPtr = new std::vector<Reading*>(std::move(m_readings));\n\tm_count = 0;\n\tm_last_id = 0;\n\tm_readings.clear();\n\n\treturn transferredPtr;\n}\n\n/**\n * Remove reading from vector based on index and return its pointer\n*/\nReading* ReadingSet::removeReading(unsigned long id)\n{\n\tif (id >= m_readings.size())\n\t{\n\t\treturn nullptr;\n\t}\n\n\tReading* reading = m_readings[id];\n\tm_readings.erase(m_readings.begin() + id);\n\tm_count--;\n\n\treturn reading;\n}\n\n/**\n * Return the ID of the nth reading in the reading set\n *\n * @param pos\tThe position of the reading 
to return the ID for\n */\nunsigned long ReadingSet::getReadingId(uint32_t pos)\n{\n\tif (pos < m_readings.size())\n\t{\n\t\tReading *reading = m_readings[pos];\n\t\treturn reading->getId();\n\t}\n\treturn m_last_id;\n}\n\n/**\n * Construct a reading from a JSON document\n *\n * The data can be in the \"value\" property as single numeric value\n * or in the JSON \"reading\" with different values and types\n *\n * @param json\tThe JSON document that contains the reading\n */\nJSONReading::JSONReading(const Value& json)\n{\n\tif (json.HasMember(\"id\"))\n\t{\n\t\tm_id = json[\"id\"].GetUint64();\n\t\tm_has_id = true;\n\t}\n\telse\n\t{\n\t\tm_has_id = false;\n\t}\n\tif (json.HasMember(\"asset_code\"))\n\t{\n\t\tm_asset = json[\"asset_code\"].GetString();\n\t}\n\telse\n\t{\n\t\tstring errMsg = \"Malformed JSON reading, missing asset_code '\";\n\t\terrMsg.append(\"value\");\n\t\terrMsg += \"'\";\n\t\tthrow new ReadingSetException(errMsg.c_str());\n\t}\n\tif (json.HasMember(\"user_ts\"))\n\t{\n\t\tstringToTimestamp(json[\"user_ts\"].GetString(), &m_userTimestamp);\n\t}\n\telse\n\t{\n\t\tstring errMsg = \"Malformed JSON reading, missing user timestamp '\";\n\t\terrMsg.append(\"value\");\n\t\terrMsg += \"'\";\n\t\tthrow new ReadingSetException(errMsg.c_str());\n\t}\n\tif (json.HasMember(\"ts\"))\n\t{\n\t\tstringToTimestamp(json[\"ts\"].GetString(), &m_timestamp);\n\t}\n\telse\n\t{\n\t\tm_timestamp = m_userTimestamp;\n\t}\n\n\t// We have a single value here which is a number\n\tif (json.HasMember(\"value\") && json[\"value\"].IsNumber())\n\t{\n\t\tconst Value &m = json[\"value\"];\n\t\t\n\t\tif (m.IsInt() ||\n\t\t    m.IsUint() ||\n\t\t    m.IsInt64() ||\n\t\t    m.IsUint64())\n\t\t{\n\t\t\tDatapointValue* value;\n\t\t\tif (m.IsInt() ||\n\t\t\t    m.IsUint() )\n\t\t\t{\n\t\t\t\tvalue = new DatapointValue((long) m.GetInt());\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tvalue = new DatapointValue((long) m.GetInt64());\n\t\t\t}\n\t\t\tthis->addDatapoint(new 
Datapoint(\"value\",*value));\n\t\t\tdelete value;\n\n\t\t}\n\t\telse if (m.IsDouble())\n\t\t{\n\t\t\tDatapointValue value(m.GetDouble());\n\t\t\tthis->addDatapoint(new Datapoint(\"value\",\n\t\t\t\t\t\t\t value));\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring errMsg = \"Cannot parse the numeric type\";\n\t\t\terrMsg += \" of reading element '\";\n\t\t\terrMsg.append(\"value\");\n\t\t\terrMsg += \"'\";\n\n\t\t\tthrow new ReadingSetException(errMsg.c_str());\n\t\t}\n\t}\n\telse if (json.HasMember(\"reading\"))\n\t{\n\t\tif (json[\"reading\"].IsObject())\n\t\t{\n\t\t\t// Add 'reading' values\n\t\t\tfor (auto &m : json[\"reading\"].GetObject())\n\t\t\t{\n\t\t\t\tDatapoint *dp = datapoint(m.name.GetString(), m.value);\n\t\t\t\tif (dp)\n\t\t\t\t{\n\t\t\t\t\taddDatapoint(dp);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// The reading should be an object at this stage, it is an invalid one if not\n\t\t\t// the asset name ASSET_NAME_INVALID_READING will be created in the PI-Server containing the\n\t\t\t// invalid asset_name/values.\n\t\t\tif (json[\"reading\"].IsString())\n\t\t\t{\n\t\t\t\tstring tmp_reading1 = json[\"reading\"].GetString();\n\n\t\t\t\t// Escape specific characters so the string is properly managed as JSON\n\t\t\t\tfor (const string &item : JSON_characters_to_be_escaped)\n\t\t\t\t{\n\n\t\t\t\t\tescapeCharacter(tmp_reading1, item);\n\t\t\t\t}\n\n\t\t\t\tLogger::getLogger()->error(\n\t\t\t\t\t\"Invalid reading: Asset name |%s| reading value |%s| converted value |%s|\",\n\t\t\t\t\tm_asset.c_str(),\n\t\t\t\t\tjson[\"reading\"].GetString(),\n\t\t\t\t\ttmp_reading1.c_str());\n\n\t\t\t\tDatapointValue value(tmp_reading1);\n\t\t\t\tthis->addDatapoint(new Datapoint(m_asset, value));\n\n\t\t\t} else if (json[\"reading\"].IsInt() ||\n\t\t\t\t   json[\"reading\"].IsUint() ||\n\t\t\t\t   json[\"reading\"].IsInt64() ||\n\t\t\t\t   json[\"reading\"].IsUint64()) {\n\n\t\t\t\tDatapointValue *value;\n\n\t\t\t\tif (json[\"reading\"].IsInt() ||\n\t\t\t\t    json[\"reading\"].IsUint()) 
{\n\t\t\t\t\tvalue = new DatapointValue((long) json[\"reading\"].GetInt());\n\t\t\t\t} else {\n\t\t\t\t\tvalue = new DatapointValue((long) json[\"reading\"].GetInt64());\n\t\t\t\t}\n\t\t\t\tthis->addDatapoint(new Datapoint(m_asset, *value));\n\t\t\t\tdelete value;\n\n\t\t\t} else if (json[\"reading\"].IsDouble())\n\t\t\t{\n\t\t\t\tDatapointValue value(json[\"reading\"].GetDouble());\n\t\t\t\tthis->addDatapoint(new Datapoint(m_asset, value));\n\n\t\t\t}\n\n\t\t\tm_asset = string(ASSET_NAME_INVALID_READING) + string(\"_\") + m_asset.c_str();\n\t\t}\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"Missing reading property for JSON reading, %s\", m_asset.c_str());\n\t}\n}\n\n/**\n * Create a Datapoint from a JSON item in a reading\n *\n * @param name\tThe name of the data point\n * @param item\tThe JSON object for the data point\n * @return Datapoint* The new data point\n */\nDatapoint *JSONReading::datapoint(const string& name, const Value& item)\n{\nDatapoint *rval = NULL;\n\n\tswitch (item.GetType())\n\t{\n\t\t// String\n\t\tcase (kStringType):\n\t\t{\n\t\t\tstring str = item.GetString();\n\t\t\tif (str[0] == '_' && str[1] == '_')\n\t\t\t{\n\t\t\t\t// special encoded type\n\t\t\t\tsize_t pos = str.find_first_of(':');\n\t\t\t\tif (str.compare(2, 10, \"DATABUFFER\") == 0)\n\t\t\t\t{\n\t\t\t\t\ttry {\n\t\t\t\t\t\tDataBuffer *databuffer = new Base64DataBuffer(str.substr(pos + 1));\n\t\t\t\t\t\tDatapointValue value(databuffer);\n\t\t\t\t\t\trval = new Datapoint(name, value);\n\t\t\t\t\t} catch (exception& e) {\n\t\t\t\t\t\tLogger::getLogger()->error(\"Unable to create datapoint %s as the base 64 encoded data is incorrect, %s\",\n\t\t\t\t\t\t\t\tname.c_str(), e.what());\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse if (str.compare(2, 7, \"DPIMAGE\") == 0)\n\t\t\t\t{\n\t\t\t\t\ttry {\n\t\t\t\t\t\tDPImage *image = new Base64DPImage(str.substr(pos + 1));\n\t\t\t\t\t\tDatapointValue value(image);\n\t\t\t\t\t\trval = new Datapoint(name, value);\n\t\t\t\t\t} catch (exception& e) 
{\n\t\t\t\t\t\tLogger::getLogger()->error(\"Unable to create datapoint %s as the base 64 encoded data is incorrect, %s\",\n\t\t\t\t\t\t\t\tname.c_str(), e.what());\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tDatapointValue value(item.GetString());\n\t\t\t\trval = new Datapoint(name, value);\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\n\t\t// Number\n\t\tcase (kNumberType):\n\t\t{\n\t\t\tif (item.IsInt())\n\t\t\t{\n\t\t\t\tDatapointValue value((long)item.GetInt());\n\t\t\t\trval = new Datapoint(name, value);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\telse if (item.IsUint())\n\t\t\t{\n\t\t\t\tDatapointValue value((long)item.GetUint());\n\t\t\t\trval = new Datapoint(name, value);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\telse if (item.IsInt64())\n\t\t\t{\n\t\t\t\tDatapointValue value((long)item.GetInt64());\n\t\t\t\trval = new Datapoint(name, value);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\telse if (item.IsUint64())\n\t\t\t{\n\t\t\t\tDatapointValue value((long)item.GetUint64());\n\t\t\t\trval = new Datapoint(name, value);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\telse if (item.IsDouble())\n\t\t\t{\n\t\t\t\tDatapointValue value(item.GetDouble());\n\t\t\t\trval = new Datapoint(name, value);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring errMsg = \"Cannot parse the numeric type\";\n\t\t\t\terrMsg += \" of reading element '\";\n\t\t\t\terrMsg.append(name);\n\t\t\t\terrMsg += \"'\";\n\n\t\t\t\tthrow new ReadingSetException(errMsg.c_str());\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\t// Arrays\n\t\tcase kArrayType:\n\t\t{\n\t\t\tvector<double> arrayValues;\n\t\t\tfor (auto& v : item.GetArray())\n\t\t\t{\n\t\t\t\tif (v.IsDouble())\n\t\t\t\t{\n\t\t\t\t\tarrayValues.push_back(v.GetDouble());\n\t\t\t\t}\n\t\t\t\telse if (v.IsInt() || v.IsUint())\n\t\t\t\t{\n\t\t\t\t\t// Use the unsigned accessor for values that only fit an unsigned int\n\t\t\t\t\tdouble i = v.IsInt() ? (double)v.GetInt() : (double)v.GetUint();\n\t\t\t\t\tarrayValues.push_back(i);\n\t\t\t\t}\n\t\t\t\telse if (v.IsInt64() || v.IsUint64())\n\t\t\t\t{\n\t\t\t\t\t// Use the unsigned accessor for values that only fit an unsigned 64 bit int\n\t\t\t\t\tdouble i = v.IsInt64() ? (double)v.GetInt64() : (double)v.GetUint64();\n\t\t\t\t\tarrayValues.push_back(i);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// Don't create blank array of datapoint values\n\t\t\tif (!arrayValues.empty())\n\t\t\t{\n\t\t\t\tDatapointValue value(arrayValues);\n\t\t\t\trval = new Datapoint(name, value);\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\n\t\t// Nested object\n\t\tcase kObjectType:\n\t\t{\n\t\t\tvector<Datapoint *> *obj = new vector<Datapoint *>;\n\t\t\tfor (auto &mo : item.GetObject())\n\t\t\t{\n\t\t\t\tDatapoint *dp = datapoint(mo.name.GetString(), mo.value);\n\t\t\t\tif (dp)\n\t\t\t\t{\n\t\t\t\t\tobj->push_back(dp);\n\t\t\t\t}\n\t\t\t}\n\t\t\tDatapointValue value(obj, true);\n\t\t\trval = new Datapoint(name, value);\n\t\t\tbreak;\n\t\t}\n\n\t\tcase kTrueType:\n\t\t{\n\t\t\tDatapointValue value(\"true\");\n\t\t\trval = new Datapoint(name, value);\n\t\t\tbreak;\n\t\t}\n\t\tcase kFalseType:\n\t\t{\n\t\t\tDatapointValue value(\"false\");\n\t\t\trval = new Datapoint(name, value);\n\t\t\tbreak;\n\t\t}\n\n\t\tdefault:\n\t\t{\n\t\t\tchar errMsg[80];\n\t\t\tsnprintf(errMsg, sizeof(errMsg), \"Unhandled type for %s in JSON payload %d\", name.c_str(), item.GetType());\n\t\t\tthrow new ReadingSetException(errMsg);\n\t\t}\n\t}\n\treturn rval;\n}\n\n/**\n * Escapes a character in a string so it is properly handled as JSON\n *\n * @param stringToEvaluate\tThe string to escape in place\n * @param pattern\t\tThe character pattern to escape\n */\nvoid JSONReading::escapeCharacter(string& stringToEvaluate, string pattern)\n{\n\tstring escaped = \"\\\\\" + pattern;\n\n\tboost::replace_all(stringToEvaluate, pattern, escaped);\n}\n"
  },
  {
    "path": "C/common/readingset_circularbuffer.cpp",
    "content": "/*\n * Fledge ReadingSet Circular Buffer.\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Devki Nandan Ghildiyal\n */\n#include <readingset_circularbuffer.h>\n#include <logger.h>\n\n\nusing namespace std;\nusing namespace rapidjson;\n\n/**\n * Construct an empty reading set circular buffer\n *\n * @param maxBufferSize Maximum size of the ReadingSet circular buffer. It must be at least one.\n */\nReadingSetCircularBuffer::ReadingSetCircularBuffer(unsigned long maxBufferSize)\n{\n\tif (maxBufferSize == 0)\n\t{\n\t\tmaxBufferSize = 1;\n\t\tLogger::getLogger()->warn(\"Size of ReadingSetCircularBuffer cannot be less than one, setting buffer size to 1\");\n\t}\n\tm_maxBufferSize = maxBufferSize;\n\tm_nextReadIndex = 0;\n}\n\n/**\n * Destructor for the ReadingSet circular buffer\n */\nReadingSetCircularBuffer::~ReadingSetCircularBuffer()\n{\n\tlock_guard<mutex> guard(m_mutex);\n\t/* Delete the readings */\n\tm_circularBuffer.clear();\n}\n\n/**\n * Insert a ReadingSet into the circular buffer\n *\n * @param readings\t\tReference to the ReadingSet to be inserted into the circular buffer\n */\nvoid ReadingSetCircularBuffer::insert(ReadingSet& readings)\n{\n\tappendReadingSet(readings.getAllReadings());\n}\n\n/**\n * Insert a ReadingSet into the circular buffer\n *\n * @param readings\t\tPointer to the ReadingSet to be inserted into the circular buffer\n */\nvoid ReadingSetCircularBuffer::insert(ReadingSet* readings)\n{\n\tappendReadingSet(readings->getAllReadings());\n}\n\n/**\n * Internal implementation for inserting a ReadingSet into the circular buffer\n *\n * @param readings\t\tVector of readings to append to the circular buffer\n */\nvoid ReadingSetCircularBuffer::appendReadingSet(const std::vector<Reading *>& readings)\n{\n\tlock_guard<mutex> guard(m_mutex);\n\tbool isBufferFull = (m_circularBuffer.size() == m_maxBufferSize);\n\n\t// Check if there is space available to insert a new ReadingSet\n\tif 
(isBufferFull)\n\t{\n\t\tLogger::getLogger()->info(\"ReadingSetCircularBuffer buffer is full, removing first element\");\n\t\t// Make space for new ReadingSet and adjust m_nextReadIndex\n\t\tm_circularBuffer.erase(m_circularBuffer.begin());\n\t\tif (m_nextReadIndex > 0)\n\t\t\tm_nextReadIndex--;\n\t}\n\n\tstd::vector<Reading *> *newReadings = new std::vector<Reading *>;\n\n\t// Iterate over all the readings in ReadingSet\n\tfor (auto const &reading : readings)\n\t{\n\t\tnewReadings->emplace_back(new Reading(*reading));\n\t}\n\t// Insert ReadingSet into buffer\n\tm_circularBuffer.push_back(std::make_shared<ReadingSet>(newReadings));\n\tdelete newReadings;\n}\n\n/**\n * Fetch a vector of ReadingSet from the circular buffer\n *\n * @param isExtractSingleElement True to extract a single ReadingSet, false to extract the entire buffer\n * @return\t\tA vector of shared pointers to ReadingSet\n */\nstd::vector<std::shared_ptr<ReadingSet>> ReadingSetCircularBuffer::extract(bool isExtractSingleElement)\n{\n\tlock_guard<mutex> guard(m_mutex);\n\tbool isNoDataToRead = m_circularBuffer.empty() || (m_nextReadIndex == m_circularBuffer.size());\n\tstd::vector<std::shared_ptr<ReadingSet>> bufferedItem;\n\t// Check for empty buffer\n\tif (isNoDataToRead)\n\t{\n\t\tLogger::getLogger()->info(\"There is no more data to read in ReadingSet circular buffer\");\n\t\treturn bufferedItem;\n\t}\n\n\t// Return a single item from the buffer\n\tif (isExtractSingleElement)\n\t{\n\t\tbufferedItem.emplace_back(m_circularBuffer[m_nextReadIndex]);\n\t\tm_nextReadIndex++;\n\t\treturn bufferedItem;\n\t}\n\n\t// Return the entire buffer\n\tif (m_nextReadIndex == 0)\n\t{\n\t\tm_nextReadIndex = m_circularBuffer.size();\n\t\treturn m_circularBuffer;\n\t}\n\t// Send the remaining items in the buffer\n\tfor (size_t i = m_nextReadIndex; i < m_circularBuffer.size(); i++)\n\t\tbufferedItem.emplace_back(m_circularBuffer[i]);\n\n\tm_nextReadIndex = m_circularBuffer.size();\n\treturn 
bufferedItem;\n}\n"
  },
  {
    "path": "C/common/result_set.cpp",
    "content": "/*\n * Fledge storage service client\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <resultset.h>\n#include <string>\n#include <rapidjson/document.h>\n#include <sstream>\n#include <iostream>\n\nusing namespace std;\nusing namespace rapidjson;\n\n/**\n * Construct a result set from a JSON document returned from\n * the Fledge storage service.\n *\n * @param json\tThe JSON document to construct the result set from\n */\nResultSet::ResultSet(const std::string& json)\n{\n\tDocument doc;\n\tdoc.Parse(json.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tthrow new ResultException(\"Unable to parse results json document\");\n\t}\n\tif (doc.HasMember(\"count\") && doc[\"count\"].IsUint())\n\t{\n\t\tm_rowCount = doc[\"count\"].GetUint();\n\t\tif (m_rowCount)\n\t\t{\n\t\t\t// Check the rows array exists before accessing it\n\t\t\tif (!doc.HasMember(\"rows\"))\n\t\t\t{\n\t\t\t\tthrow new ResultException(\"Missing rows array\");\n\t\t\t}\n\t\t\tconst Value& rows = doc[\"rows\"];\n\t\t\tif (rows.IsArray())\n\t\t\t{\n\t\t\t\t// Process first row to get column names and types\n\t\t\t\tconst Value& firstRow = rows[0];\n\t\t\t\tfor (Value::ConstMemberIterator itr = firstRow.MemberBegin(); itr != firstRow.MemberEnd(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tColumnType type = STRING_COLUMN;\n\t\t\t\t\tif (itr->value.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\ttype = JSON_COLUMN;\n\t\t\t\t\t}\n\t\t\t\t\telse if (itr->value.IsNumber() && itr->value.IsDouble())\n\t\t\t\t\t{\n\t\t\t\t\t\ttype = NUMBER_COLUMN;\n\t\t\t\t\t}\n\t\t\t\t\telse if (itr->value.IsNumber())\n\t\t\t\t\t{\n\t\t\t\t\t\ttype = INT_COLUMN;\n\t\t\t\t\t}\n\t\t\t\t\telse if (itr->value.IsBool())\n\t\t\t\t\t{\n\t\t\t\t\t\ttype = BOOL_COLUMN;\n\t\t\t\t\t}\n\t\t\t\t\telse if (itr->value.IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\ttype = STRING_COLUMN;\n\t\t\t\t\t}\n\t\t\t\t\t// Array of any objects is JSON\n\t\t\t\t\telse if (itr->value.IsArray())\n\t\t\t\t\t{\n\t\t\t\t\t\ttype = 
JSON_COLUMN;\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tthrow new ResultException(\"Unable to determine column type\");\n\t\t\t\t\t}\n\t\t\t\t\tm_columns.push_back(new Column(string(itr->name.GetString()), type));\n\t\t\t\t}\n\t\t\t\t// Process every row and create the result set\n\t\t\t\tfor (auto& row : rows.GetArray())\n\t\t\t\t{\n\t\t\t\t\tif (!row.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tthrow new ResultException(\"Expected row to be an object\");\n\t\t\t\t\t}\n\t\t\t\t\tResultSet::Row\t*rowValue = new ResultSet::Row(this);\n\t\t\t\t\tunsigned int colNo = 0;\n\t\t\t\t\tfor (Value::ConstMemberIterator item = row.MemberBegin(); item != row.MemberEnd(); ++item)\n\t\t\t\t\t{\n\t\t\t\t\t\tswitch (m_columns[colNo]->getType())\n\t\t\t\t\t\t{\n\t\t\t\t\t\tcase STRING_COLUMN:\n\t\t\t\t\t\t\tif (item->value.IsBool())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\trowValue->append(new ColumnValue(item->value.IsTrue() ? \"true\" : \"false\"));\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\trowValue->append(new ColumnValue(string(item->value.GetString())));\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tcase INT_COLUMN:\n\t\t\t\t\t\t\trowValue->append(new ColumnValue((long)(item->value.GetInt64())));\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tcase NUMBER_COLUMN:\n\t\t\t\t\t\t\trowValue->append(new ColumnValue(item->value.GetDouble()));\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tcase JSON_COLUMN:\n\t\t\t\t\t\t\trowValue->append(new ColumnValue(item->value));\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tcase BOOL_COLUMN:\n\t\t\t\t\t\t\tif (item->value.IsString())\n\t\t\t\t\t\t\t\trowValue->append(new ColumnValue(string(item->value.GetString())));\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\trowValue->append(new ColumnValue(item->value.IsTrue() ? 
\"true\" : \"false\"));\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tcolNo++;\n\t\t\t\t\t}\n\t\t\t\t\tm_rows.push_back(rowValue);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tthrow new ResultException(\"Expected array of rows in result set\");\n\t\t\t}\n\t\t}\n\t}\n\telse\n\t{\n\t\tm_rowCount = 0;\n\t}\n}\n\n/**\n * Destructor for a result set\n */\nResultSet::~ResultSet()\n{\n\t/* Delete the columns */\n\tfor (auto it = m_columns.cbegin(); it != m_columns.cend(); it++)\n\t{\n\t\tdelete *it;\n\t}\n\t/* Delete the rows */\n\tfor (auto it = m_rows.cbegin(); it != m_rows.cend(); it++)\n\t{\n\t\tdelete *it;\n\t}\n}\n\n/**\n * Return the name of a specific column\n *\n * @param column - the column number of the column to return.\n * Columns are numbered from 0\n * @return string& The name of the column\n * @throw ResultNoSuchColumnException\tThe specified column does not exist in the result set\n */\nconst string& ResultSet::columnName(unsigned int column) const\n{\n\tif (column >= m_columns.size())\n\t{\n\t\tthrow new ResultNoSuchColumnException();\n\t}\n\treturn m_columns[column]->getName();\n}\n\n/**\n * Return the type of a specific column\n *\n * @param column - the column number of the column to return.\n * Columns are numbered from 0\n * @return ColumnType\tThe type of the specified column\n * @throw ResultNoSuchColumnException\tThe specified column does not exist in the result set\n */\nColumnType ResultSet::columnType(unsigned int column) const\n{\n\tif (column >= m_columns.size())\n\t{\n\t\tthrow new ResultNoSuchColumnException();\n\t}\n\treturn m_columns[column]->getType();\n}\n\n/**\n * Return the type of a specific column\n *\n * @param name - the name of the column to return.\n * @return ColumnType\tThe type of the specified column\n * @throw ResultNoSuchColumnException\tThe specified column does not exist in the result set\n */\nColumnType ResultSet::columnType(const string& name) const\n{\n\tunsigned int column = findColumn(name);\n\treturn 
m_columns[column]->getType();\n}\n\n/**\n * Fetch an iterator for the rows in a result set.\n * The iterator is positioned at the first row in the\n * result set.\n */\nResultSet::RowIterator ResultSet::firstRow()\n{\n\treturn m_rows.begin();\n}\n\n/**\n * Given an iterator over the rows in a result set move to the\n * next row in the result set.\n *\n * @param it\tIterator returned by the firstRow() method\n * @return RowIterator\tNew value of the iterator\n * @throw ResultNoMoreRowsException\tThere are no more rows in the result set\n */\nResultSet::RowIterator ResultSet::nextRow(RowIterator it)\n{\n\tif (it == m_rows.end())\n\t\tthrow new ResultNoMoreRowsException();\n\telse\n\t\treturn ++it;\n}\n\n/**\n * Given an iterator over the rows in a result set return if there\n * are any more rows in the result set.\n *\n * @param it\tIterator returned by the firstRow() method\n * @return bool\tTrue if there are more rows in the result set\n */\nbool ResultSet::hasNextRow(RowIterator it) const\n{\n\treturn (it + 1) != m_rows.end();\n}\n\n/**\n * Given an iterator over the rows in a result set return whether\n * this is the last row in the result set.\n *\n * @param it\tIterator returned by the firstRow() method\n * @return bool\tTrue if there are no more rows in the result set\n */\nbool ResultSet::isLastRow(RowIterator it) const\n{\n\treturn (it + 1) == m_rows.end();\n}\n\n/**\n * Return the type of the given column in this row.\n *\n * @param column\tThe column number in the row, columns are numbered from 0\n * @return ColumnType\tThe column type of the specified column\n * @throw ResultNoSuchColumnException\tThe specified column does not exist in the row\n */\nColumnType ResultSet::Row::getType(unsigned int column)\n{\n\tif (column >= m_values.size())\n\t\tthrow new ResultNoSuchColumnException();\n\treturn m_values[column]->getType();\n}\n\n/**\n * Return the type of the given column in this row.\n *\n * @param name\t\tThe column name in the row\n * @return 
ColumnType\tThe column type of the specified column\n * @throw ResultNoSuchColumnException\tThe specified column does not exist in the row\n */\nColumnType ResultSet::Row::getType(const string& name)\n{\n\tunsigned int column = m_resultSet->findColumn(name);\n\treturn m_values[column]->getType();\n}\n\n/**\n * Return the column value of the given column in this row.\n *\n * @param column\tThe column number in the row, columns are numbered from 0\n * @return ColumnValue\tThe column value of the specified column\n * @throw ResultNoSuchColumnException\tThe specified column does not exist in the row\n */\nResultSet::ColumnValue *ResultSet::Row::getColumn(unsigned int column) const\n{\n\tif (column >= m_values.size())\n\t\tthrow new ResultNoSuchColumnException();\n\treturn m_values[column];\n}\n\n/**\n * Return the column value of the given column in this row.\n *\n * @param name\t\tThe column name in the row\n * @return ColumnValue\tThe column value of the specified column\n * @throw ResultNoSuchColumnException\tThe specified column does not exist in the row\n */\nResultSet::ColumnValue *ResultSet::Row::getColumn(const string& name) const\n{\n\tunsigned int column = m_resultSet->findColumn(name);\n\treturn m_values[column];\n}\n\n/**\n * Find the named column in the result set and return the column index.\n *\n * @param name\tThe name of the column to return\n * @return unsigned int\t\tThe index of the named column\n * @throw ResultNoSuchColumnException\tThe named column does not exist in the result set\n */\nunsigned int ResultSet::findColumn(const string& name) const\n{\n\tfor (unsigned int i = 0; i != m_columns.size(); i++)\n\t{\n\t\tif (m_columns[i]->getName().compare(name) == 0)\n\t\t{\n\t\t\treturn i;\n\t\t}\n\t}\n\tthrow ResultNoSuchColumnException();\n}\n\n/**\n * Retrieve a column value as an integer\n *\n * @return long Integer value\n * @throw ResultIncorrectTypeException\tThe column can not be returned as an integer\n */\nlong 
ResultSet::ColumnValue::getInteger() const\n{\n\tswitch (m_type)\n\t{\n\tcase INT_COLUMN:\n\t\treturn m_value.ival;\n\tcase NUMBER_COLUMN:\n\t\treturn (long)m_value.fval;\n\tdefault:\n\t\tthrow new ResultIncorrectTypeException();\n\t}\n}\n\n/**\n * Retrieve a column value as a floating point number\n *\n * @return double Floating point value\n * @throw ResultIncorrectTypeException\tThe column can not be returned as a double\n */\ndouble ResultSet::ColumnValue::getNumber() const\n{\n\tswitch (m_type)\n\t{\n\tcase INT_COLUMN:\n\t\treturn (double)m_value.ival;\n\tcase NUMBER_COLUMN:\n\t\treturn m_value.fval;\n\tdefault:\n\t\tthrow new ResultIncorrectTypeException();\n\t}\n}\n\n/**\n * Retrieve a column value as a string\n *\n * @return char* String value\n * @throw ResultIncorrectTypeException\tThe column can not be returned as a string\n */\nchar *ResultSet::ColumnValue::getString() const\n{\n\tswitch (m_type)\n\t{\n\tcase STRING_COLUMN:\n\t\treturn m_value.str;\n\tdefault:\n\t\tthrow new ResultIncorrectTypeException();\n\t}\n}\n"
  },
  {
    "path": "C/common/service_record.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <service_record.h>\n#include <string>\n#include <sstream>\n\nusing namespace std;\n\n/**\n * Constructor for the service record\n */\nServiceRecord::ServiceRecord(const string& name,\n\t\t\t     const string& type,\n\t\t\t     const string& protocol,\n\t\t\t     const string& address,\n\t\t\t     const unsigned short port,\n\t\t\t     const unsigned short managementPort,\n\t\t\t     const string& token) : m_name(name),\n\t\t\t\t\t\t\t  m_type(type),\n\t\t\t\t\t\t\t  m_protocol(protocol),\n\t\t\t\t\t\t\t  m_address(address),\n\t\t\t\t\t\t\t  m_port(port),\n\t\t\t\t\t\t\t  m_managementPort(managementPort),\n\t\t\t\t\t\t\t  m_token(token)\n{\n}\n\n/**\n * Construct an incomplete service record with a name and type\n */\nServiceRecord::ServiceRecord(const string& name,\n\t\t\t     const string& type) : m_name(name),\n\t\t\t\t\t\t   m_type(type),\n\t\t\t\t\t\t   m_protocol(\"\"),\n\t\t\t\t\t\t   m_address(\"\"),\n\t\t\t\t\t\t   m_port(0),\n\t\t\t\t\t\t   m_managementPort(0)\n{\n}\n\n/**\n * Construct an incomplete service record with just a name\n */\nServiceRecord::ServiceRecord(const string& name) : m_name(name),\n\t\t\t\t\t\t   m_type(\"\"),\n\t\t\t\t\t\t   m_protocol(\"\"),\n\t\t\t\t\t\t   m_address(\"\"),\n\t\t\t\t\t\t   m_port(0),\n\t\t\t\t\t\t   m_managementPort(0)\n{\n}\n\n/**\n * Serialise the service record to json\n */\nvoid ServiceRecord::asJSON(string& json) const\n{\nostringstream convert;\n\n\tconvert << \"{ \";\n\tconvert << \"\\\"name\\\" : \\\"\" << m_name << \"\\\",\";\n\tconvert << \"\\\"type\\\" : \\\"\" << m_type << \"\\\",\";\n\tconvert << \"\\\"protocol\\\" : \\\"\" << m_protocol << \"\\\",\";\n\tconvert << \"\\\"address\\\" : \\\"\" << m_address << \"\\\",\";\n\tconvert << \"\\\"management_port\\\" : \" << m_managementPort;\n\tif (m_port)\n\t{\n\t\tconvert << 
\",\\\"service_port\\\" : \" << m_port;\n\t}\n\tif (m_token != \"\") {\n\t\tconvert << \",\\\"token\\\" : \\\"\" << m_token << \"\\\"\";\n\t}\n\tconvert << \" }\";\n\n\tjson = convert.str();\n}\n"
  },
  {
    "path": "C/common/storage_client.cpp",
    "content": "/*\n * Fledge storage service client\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <storage_client.h>\n#include <reading.h>\n#include <reading_set.h>\n#include <reading_stream.h>\n#include <rapidjson/document.h>\n#include <rapidjson/error/en.h>\n#include <management_client.h>\n#include <service_record.h>\n#include <string>\n#include <sstream>\n#include <iostream>\n#include <thread>\n#include <map>\n#include <string_utils.h>\n#include <sys/uio.h>\n#include <errno.h>\n#include <stdarg.h>\n\n#define EXCEPTION_BUFFER_SIZE 120\n\n#define INSTRUMENT\t\t0\n// Streaming is currently disabled due to an issue that causes the stream to\n// hang after a period. Set the following to 1 in order to enable streaming\n#define ENABLE_STREAMING\t0\n\n#if INSTRUMENT\n#include <sys/time.h>\n#endif\n\nusing namespace std;\nusing namespace rapidjson;\nusing HttpClient = SimpleWeb::Client<SimpleWeb::HTTP>;\n\n// handles m_client_map access\nstd::mutex sto_mtx_client_map;\n\n/**\n * Storage Client constructor\n */\nStorageClient::StorageClient(const string& hostname, const unsigned short port) : m_streaming(false), m_management(NULL)\n{\n\tm_host = hostname;\n\tm_pid = getpid();\n\tm_logger = Logger::getLogger();\n\tm_urlbase << hostname << \":\" << port;\n}\n\n/**\n * Storage Client constructor\n * stores the provided HttpClient into the map\n */\nStorageClient::StorageClient(HttpClient *client) : m_streaming(false), m_management(NULL)\n{\n\n\tstd::thread::id thread_id = std::this_thread::get_id();\n\n\tsto_mtx_client_map.lock();\n\tm_client_map[thread_id] = client;\n\tsto_mtx_client_map.unlock();\n}\n\n\n/**\n * Destructor for storage client\n */\nStorageClient::~StorageClient()\n{\n\tstd::map<std::thread::id, HttpClient *>::iterator item;\n\n\t// Deletes all the HttpClient objects created in the map\n\tfor (item  = m_client_map.begin() ; item  != m_client_map.end() 
; ++item)\n\t{\n\t\tdelete item->second;\n\t}\n}\n\n\n/**\n * Delete HttpClient object for current thread\n */\nbool StorageClient::deleteHttpClient()\n{\n\tstd::thread::id thread_id = std::this_thread::get_id();\n\n\tlock_guard<mutex> guard(sto_mtx_client_map);\n\n\tif(m_client_map.find(thread_id) == m_client_map.end())\n\t\treturn false;\n\n\tostringstream ss;\n\tss << thread_id;\n\tLogger::getLogger()->debug(\"Storage client deleting HttpClient object @ %p for thread %s\", m_client_map[thread_id], ss.str().c_str());\n\t\n\tdelete m_client_map[thread_id];\n\tm_client_map.erase(thread_id);\n\n\treturn true;\n}\n\n\n/**\n * Creates a HttpClient object for each thread\n * it stores/retrieves the reference to the HttpClient and the associated thread id in a map\n */\nHttpClient *StorageClient::getHttpClient(void) {\n\n\tstd::map<std::thread::id, HttpClient *>::iterator item;\n\tHttpClient *client;\n\n\tstd::thread::id thread_id = std::this_thread::get_id();\n\n\tsto_mtx_client_map.lock();\n\titem = m_client_map.find(thread_id);\n\n\tif (item  == m_client_map.end() ) {\n\n\t\t// Adding a new HttpClient\n\t\tclient = new HttpClient(m_urlbase.str());\n\t\tm_client_map[thread_id] = client;\n\t\tm_seqnum_map[thread_id].store(0);\n\t\tstd::ostringstream ss;\n\t\tss << std::this_thread::get_id();\n\t}\n\telse\n\t{\n\t\tclient = item->second;\n\t}\n\tsto_mtx_client_map.unlock();\n\n\treturn (client);\n}\n\n/**\n * Append a single reading\n */\nbool StorageClient::readingAppend(Reading& reading)\n{\n\ttry {\n\t\tostringstream convert;\n\n\t\tconvert << \"{ \\\"readings\\\" : [ \";\n\t\tconvert << reading.toJSON();\n\t\tconvert << \" ] }\";\n\t\tauto res = this->getHttpClient()->request(\"POST\", \"/storage/reading\", convert.str());\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\treturn true;\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Append readings\", res->status_code, 
resultPayload.str());\n\t\treturn false;\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"append reading\");\n\t}\n\treturn false;\n}\n\n/**\n * Append multiple readings\n *\n * TODO implement a mechanism to force streamed or non-streamed mode\n */\nbool StorageClient::readingAppend(const vector<Reading *>& readings)\n{\n#if INSTRUMENT\n\tstruct timeval\tstart, t1, t2;\n#endif\n\tif (m_streaming)\n\t{\n\t\treturn streamReadings(readings);\n\t}\n\t// See if we should switch to stream mode\n\tstruct timeval tmFirst, tmLast, dur;\n\treadings[0]->getUserTimestamp(&tmFirst);\n\treadings[readings.size()-1]->getUserTimestamp(&tmLast);\n\ttimersub(&tmLast, &tmFirst, &dur);\n\tdouble timeSpan = dur.tv_sec + ((double)dur.tv_usec / 1000000);\n\tdouble rate = (double)readings.size() / timeSpan;\n\t// Stream functionality disabled\n#if ENABLE_STREAMING\n\tif (rate > STREAM_THRESHOLD)\n\t{\n\t\tm_logger->info(\"Reading rate %.1f readings per second above threshold, attempting to switch to stream mode\", rate);\n\t\tif (openStream())\n\t\t{\n\t\t\tm_logger->info(\"Successfully switched to stream mode for readings\");\n\t\t\treturn streamReadings(readings);\n\t\t}\n\t\tm_logger->warn(\"Failed to switch to streaming mode\");\n\t}\n#endif\n\tstatic HttpClient *httpClient = this->getHttpClient(); // to initialize m_seqnum_map[thread_id] for this thread\n\ttry {\n\t\tstd::thread::id thread_id = std::this_thread::get_id();\n\t\tostringstream ss;\n\t\tsto_mtx_client_map.lock();\n\t\tm_seqnum_map[thread_id].fetch_add(1);\n\t\tss << m_pid << \"#\" << thread_id << \"_\" << m_seqnum_map[thread_id].load();\n\t\tsto_mtx_client_map.unlock();\n\n\t\tSimpleWeb::CaseInsensitiveMultimap headers = {{\"SeqNum\", ss.str()}};\n\n#if INSTRUMENT\n\t\tgettimeofday(&start, NULL);\n#endif\n\t\tostringstream convert;\n\t\tconvert << \"{ \\\"readings\\\" : [ \";\n\t\tfor (vector<Reading *>::const_iterator it = readings.cbegin();\n\t\t\t\t\t\t it != readings.cend(); ++it)\n\t\t{\n\t\t\tif (it != 
readings.cbegin())\n\t\t\t{\n\t\t\t\tconvert << \", \";\n\t\t\t}\n\t\t\tconvert << (*it)->toJSON();\n\t\t}\n\t\tconvert << \" ] }\";\n#if INSTRUMENT\n\t\tgettimeofday(&t1, NULL);\n#endif\n\t\tauto res = this->getHttpClient()->request(\"POST\", \"/storage/reading\", convert.str(), headers);\n#if INSTRUMENT\n\t\tgettimeofday(&t2, NULL);\n#endif\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n#if INSTRUMENT\n\t\t\tstruct timeval tm;\n\t\t\ttimersub(&t1, &start, &tm);\n\t\t\tdouble buildTime, requestTime;\n\t\t\tbuildTime = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\t\t\ttimersub(&t2, &t1, &tm);\n\t\t\trequestTime = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\t\t\tm_logger->info(\"Appended %d readings in %.3f seconds. Took %.3f seconds to build request\", readings.size(), requestTime, buildTime);\n\t\t\tm_logger->info(\"%.1f Readings per second, request building %.2f%% of time\", readings.size() / (buildTime + requestTime),\n\t\t\t\t\t(buildTime * 100) / (requestTime + buildTime));\n\t\t\tm_logger->info(\"Request block size %dK\", strlen(convert.str().c_str())/1024);\n#endif\n\t\t\treturn true;\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Append readings\", res->status_code, resultPayload.str());\n\t\treturn false;\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"append readings\");\n\t}\n\treturn false;\n}\n\n/**\n * Perform a generic query against the readings data\n *\n * @param query\t\tThe query to execute\n * @return ResultSet\tThe result of the query\n */\nResultSet *StorageClient::readingQuery(const Query& query)\n{\n\ttry {\n\t\tostringstream convert;\n\n\t\tconvert << query.toJSON();\n\t\tauto res = this->getHttpClient()->request(\"PUT\", \"/storage/reading/query\", convert.str());\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\tostringstream resultPayload;\n\t\t\tresultPayload << res->content.rdbuf();\n\t\t\tResultSet *result = new 
ResultSet(resultPayload.str().c_str());\n\t\t\treturn result;\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Query readings\", res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"query readings\");\n\t\tthrow;\n\t} catch (exception* ex) {\n\t\thandleException(*ex, \"query readings\");\n\t\tdelete ex;\n\t\tthrow exception();\n\t}\n\treturn 0;\n}\n\n/**\n * Perform a generic query against the readings data,\n * returning a ReadingSet object\n *\n * @param query\t\tThe query to execute\n * @return ReadingSet\tThe result of the query\n */\nReadingSet *StorageClient::readingQueryToReadings(const Query& query)\n{\n\ttry {\n\t\tostringstream convert;\n\n\t\tconvert << query.toJSON();\n\t\tauto res = this->getHttpClient()->request(\"PUT\", \"/storage/reading/query\", convert.str());\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\tostringstream resultPayload;\n\t\t\tresultPayload << res->content.rdbuf();\n\t\t\tReadingSet* result = new ReadingSet(resultPayload.str().c_str());\n\t\t\treturn result;\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Query readings\", res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"query readings\");\n\t\tthrow;\n\t} catch (exception* ex) {\n\t\thandleException(*ex, \"query readings\");\n\t\tdelete ex;\n\t\tthrow exception();\n\t}\n\treturn 0;\n}\n\n/**\n * Retrieve a set of readings for sending on the northbound\n * interface of Fledge\n *\n * @param readingId\tThe ID of the reading which should be the first one to send\n * @param count\t\tMaximum number of readings to return\n * @return ReadingSet\tThe set of readings\n */\nReadingSet *StorageClient::readingFetch(const unsigned long readingId, const unsigned long count)\n{\n\ttry {\n\n\t\tchar url[256];\n\t\tsnprintf(url, sizeof(url), 
\"/storage/reading?id=%lu&count=%lu\",\n\t\t\t\treadingId, count);\n\n\t\tauto res = this->getHttpClient()->request(\"GET\", url);\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\tostringstream resultPayload;\n\t\t\tresultPayload << res->content.rdbuf();\n\t\t\tReadingSet *result = new ReadingSet(resultPayload.str());\n\t\t\treturn result;\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Fetch readings\", res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"fetch readings\");\n\t\tthrow;\n\t} catch (exception* ex) {\n\t\thandleException(*ex, \"fetch readings\");\n\t\tdelete ex;\n\t\tthrow exception();\n\t}\n\treturn 0;\n}\n\n/**\n * Purge the readings by age\n *\n * @param age\tNumber of hours old a reading has to be to be considered for purging\n * @param sent\tThe ID of the last reading that was sent\n * @param purgeUnsent\tFlag to control if unsent readings should be purged\n * @return PurgeResult\tData on the readings that were purged\n */\nPurgeResult StorageClient::readingPurgeByAge(unsigned long age, unsigned long sent, bool purgeUnsent)\n{\n\ttry {\n\t\tchar url[256];\n\t\tsnprintf(url, sizeof(url), \"/storage/reading/purge?age=%lu&sent=%lu&flags=%s\",\n\t\t\t\tage, sent, purgeUnsent ? 
\"purge\" : \"retain\");\n\t\tauto res = this->getHttpClient()->request(\"PUT\", url);\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\treturn PurgeResult(resultPayload.str());\n\t\t}\n\t\thandleUnexpectedResponse(\"Purge by age\", res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"purge readings by age\");\n\t\tthrow;\n\t} catch (exception* ex) {\n\t\thandleException(*ex, \"purge readings by age\");\n\t\tdelete ex;\n\t\tthrow exception();\n\t}\n\treturn PurgeResult();\n}\n\n/**\n * Purge the readings by size\n *\n * @param size\t\tDesired maximum size of readings table\n * @param sent\tThe ID of the last reading that was sent\n * @param purgeUnsent\tFlag to control if unsent readings should be purged\n * @return PurgeResult\tData on the readings that were purged\n */\nPurgeResult StorageClient::readingPurgeBySize(unsigned long size, unsigned long sent, bool purgeUnsent)\n{\n\ttry {\n\t\tchar url[256];\n\t\tsnprintf(url, sizeof(url), \"/storage/reading/purge?size=%lu&sent=%lu&flags=%s\",\n\t\t\t\tsize, sent, purgeUnsent ? 
\"purge\" : \"retain\");\n\t\tauto res = this->getHttpClient()->request(\"PUT\", url);\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\treturn PurgeResult(resultPayload.str());\n\t\t}\n\t\thandleUnexpectedResponse(\"Purge by size\", res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"purge readings by size\");\n\t\tthrow;\n\t} catch (exception* ex) {\n\t\thandleException(*ex, \"purge readings by size\");\n\t\tdelete ex;\n\t\tthrow exception();\n\t}\n\treturn PurgeResult();\n}\n\n/**\n * Purge the readings by asset name\n *\n * @param asset\t\tThe name of the asset to purge\n * @return PurgeResult\tData on the readings that were purged\n */\nPurgeResult StorageClient::readingPurgeByAsset(const string& asset)\n{\n\ttry {\n\t\tchar url[256];\n\t\tsnprintf(url, sizeof(url), \"/storage/reading/purge?asset=%s\", urlEncode(asset).c_str());\n\t\tauto res = this->getHttpClient()->request(\"PUT\", url);\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\treturn PurgeResult(resultPayload.str());\n\t\t}\n\t\thandleUnexpectedResponse(\"Purge by asset\", res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"purge readings by asset\");\n\t\tthrow;\n\t} catch (exception* ex) {\n\t\thandleException(*ex, \"purge readings by asset\");\n\t\tdelete ex;\n\t\tthrow exception();\n\t}\n\treturn PurgeResult();\n}\n\n/**\n * Query a table\n *\n * @param tablename\tThe name of the table to query\n * @param query\t\tThe query payload\n * @return ResultSet*\tThe resultset of the query\n */\nResultSet *StorageClient::queryTable(const std::string& tableName, const Query& query)\n{\n\treturn queryTable(DEFAULT_SCHEMA, tableName, query);\n}\n\n/**\n * Query a table\n *\n * @param schema\tThe name of the schema to query\n * @param tablename\tThe name of the table to query\n * @param query\t\tThe query payload\n * @return ResultSet*\tThe resultset of the query\n */\nResultSet *StorageClient::queryTable(const 
std::string& schema, const std::string& tableName, const Query& query)\n{\n\ttry {\n\t\tostringstream convert;\n\n\t\tconvert << query.toJSON();\n\t\tchar url[128];\n\t\tsnprintf(url, sizeof(url), \"/storage/schema/%s/table/%s/query\", schema.c_str(), tableName.c_str());\n\t\tauto res = this->getHttpClient()->request(\"PUT\", url, convert.str());\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\tResultSet *result = new ResultSet(resultPayload.str().c_str());\n\t\t\treturn result;\n\t\t}\n\t\thandleUnexpectedResponse(\"Query table\", res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"query table %s\", tableName.c_str());\n\t\tthrow;\n\t} catch (exception* ex) {\n\t\thandleException(*ex, \"query table %s\", tableName.c_str());\n\t\tdelete ex;\n\t\tthrow exception();\n\t}\n\treturn 0;\n}\n\n/**\n * Query a table and return a ReadingSet pointer\n *\n * @param tablename\tThe name of the table to query\n * @param query\t\tThe query payload\n * @return ReadingSet*\tThe resultset of the query as\n *\t\t\tReadingSet class pointer\n */\nReadingSet* StorageClient::queryTableToReadings(const std::string& tableName,\n\t\t\t\t\t\tconst Query& query)\n{\n\ttry {\n\t\tostringstream convert;\n\n\t\tconvert << query.toJSON();\n\t\tchar url[128];\n\t\tsnprintf(url, sizeof(url), \"/storage/table/%s/query\", tableName.c_str());\n\n\t\tauto res = this->getHttpClient()->request(\"PUT\", url, convert.str());\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\tReadingSet* result = new ReadingSet(resultPayload.str().c_str());\n\t\t\treturn result;\n\t\t}\n\t\thandleUnexpectedResponse(\"Query table\", res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"query table %s to readings\", tableName.c_str());\n\t\tthrow;\n\t} 
catch (exception* ex) {\n\t\thandleException(*ex, \"query table %s to readings\", tableName.c_str());\n\t\tdelete ex;\n\t\tthrow exception();\n\t}\n\treturn 0;\n}\n\n/**\n * Insert data into an arbitrary table\n *\n * @param tableName\tThe name of the table into which data will be added\n * @param values\tThe values to insert into the table\n * @return int\t\tThe number of rows inserted\n */\nint StorageClient::insertTable(const string& tableName, const InsertValues& values)\n{\n\treturn insertTable(DEFAULT_SCHEMA, tableName, values);\n}\n\n/**\n * Insert data into an arbitrary table\n *\n * @param schema\tThe name of the schema to insert into\n * @param tableName\tThe name of the table into which data will be added\n * @param values\tThe values to insert into the table\n * @return int\t\tThe number of rows inserted\n */\nint StorageClient::insertTable(const string& schema, const string& tableName, const InsertValues& values)\n{\n\ttry {\n\t\tostringstream convert;\n\n\t\tconvert << values.toJSON();\n\t\tchar url[128];\n\t\tsnprintf(url, sizeof(url), \"/storage/schema/%s/table/%s\", schema.c_str(), tableName.c_str());\n\t\tauto res = this->getHttpClient()->request(\"POST\", url, convert.str());\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\tif (res->status_code.compare(\"200 OK\") == 0 || res->status_code.compare(\"201 Created\") == 0)\n\t\t{\n\t\t\tDocument doc;\n\t\t\tdoc.Parse(resultPayload.str().c_str());\n\t\t\tif (doc.HasParseError())\n\t\t\t{\n\t\t\t\tm_logger->info(\"POST result %s.\", res->status_code.c_str());\n\t\t\t\tm_logger->error(\"Failed to parse result of insertTable. %s. 
Document is %s\",\n\t\t\t\t\t\tGetParseError_En(doc.GetParseError()),\n\t\t\t\t\t\tresultPayload.str().c_str());\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\telse if (doc.HasMember(\"message\"))\n\t\t\t{\n\t\t\t\tm_logger->error(\"Failed to append table data: %s\",\n\t\t\t\t\tdoc[\"message\"].GetString());\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\treturn doc[\"rows_affected\"].GetInt();\n\t\t}\n\t\thandleUnexpectedResponse(\"Insert table\", res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"insert into table %s\", tableName.c_str());\n\t\tthrow;\n\t}\n\treturn 0;\n}\n\n/**\n * Update data into an arbitrary table\n *\n * @param tableName\tThe name of the table into which data will be added\n * @param values\tThe values to insert into the table\n * @param where\t\tThe conditions to match the updated rows\n * @param modifier\tOptional storage modifier\n * @return int\t\tThe number of rows updated\n */\nint StorageClient::updateTable(const string& tableName, const InsertValues& values, const Where& where, const UpdateModifier *modifier)\n{\n\treturn updateTable(DEFAULT_SCHEMA, tableName, values, where, modifier);\n}\n\n/**\n * Update data into an arbitrary table\n *\n * @param schema\tThe name of the schema into which data will be added\n * @param tableName\tThe name of the table into which data will be added\n * @param values\tThe values to insert into the table\n * @param where\t\tThe conditions to match the updated rows\n * @param modifier\tOptional storage modifier\n * @return int\t\tThe number of rows updated\n */\nint StorageClient::updateTable(const string& schema, const string& tableName, const InsertValues& values, const Where& where, const UpdateModifier *modifier)\n{\n\tstatic HttpClient *httpClient = this->getHttpClient(); // to initialize m_seqnum_map[thread_id] for this thread\n\ttry {\n\t\tstd::thread::id thread_id = std::this_thread::get_id();\n\t\tostringstream 
ss;\n\t\tsto_mtx_client_map.lock();\n\t\tm_seqnum_map[thread_id].fetch_add(1);\n\t\tss << m_pid << \"#\" << thread_id << \"_\" << m_seqnum_map[thread_id].load();\n\t\tsto_mtx_client_map.unlock();\n\n\t\tSimpleWeb::CaseInsensitiveMultimap headers = {{\"SeqNum\", ss.str()}};\n\n\t\tostringstream convert;\n\n\t\tconvert << \"{ \\\"updates\\\" : [ {\";\n\t\tif (modifier)\n\t\t{\n\t\t\tconvert << \"\\\"modifiers\\\" : [ \\\"\" << modifier->toJSON() << \"\\\" ], \";\n\t\t}\n\t\tconvert << \"\\\"where\\\" : \";\n\t\tconvert << where.toJSON();\n\t\tconvert << \", \\\"values\\\" : \";\n\t\tconvert << values.toJSON();\n\t\tconvert << \" }\";\n\t\tconvert << \" ] }\";\n\t\t\n\t\tchar url[128];\n\t\tsnprintf(url, sizeof(url), \"/storage/schema/%s/table/%s\", schema.c_str(), tableName.c_str());\n\t\tauto res = this->getHttpClient()->request(\"PUT\", url, convert.str(), headers);\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\tostringstream resultPayload;\n\t\t\tresultPayload << res->content.rdbuf();\n\t\t\tDocument doc;\n\t\t\tdoc.Parse(resultPayload.str().c_str());\n\t\t\tif (doc.HasParseError())\n\t\t\t{\n\t\t\t\tm_logger->info(\"PUT result %s.\", res->status_code.c_str());\n\t\t\t\tm_logger->error(\"Failed to parse result of updateTable. 
%s\",\n\t\t\t\t\t\tGetParseError_En(doc.GetParseError()));\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\telse if (doc.HasMember(\"message\"))\n\t\t\t{\n\t\t\t\tm_logger->error(\"Failed to update table data: %s\",\n\t\t\t\t\tdoc[\"message\"].GetString());\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\treturn doc[\"rows_affected\"].GetInt();\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Update table\", tableName, res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"update table %s\", tableName.c_str());\n\t\tthrow;\n\t}\n\treturn -1;\n}\n\n/**\n * Update data into an arbitrary table\n *\n * @param tableName\tThe name of the table into which data will be added\n * @param values\tThe expressions to update into the table\n * @param where\t\tThe conditions to match the updated rows\n * @param modifier\tOptional update modifier\n * @return int\t\tThe number of rows updated\n */\nint StorageClient::updateTable(const string& tableName, const ExpressionValues& values, const Where& where, const UpdateModifier *modifier)\n{\n\treturn updateTable(DEFAULT_SCHEMA, tableName, values, where, modifier);\n}\n\n/**\n * Update data into an arbitrary table\n *\n * @param schema\tThe name of the schema into which data will be added\n * @param tableName\tThe name of the table into which data will be added\n * @param values\tThe expressions to update into the table\n * @param where\t\tThe conditions to match the updated rows\n * @param modifier\tOptional update modifier\n * @return int\t\tThe number of rows updated\n */\nint StorageClient::updateTable(const string& schema, const string& tableName, const ExpressionValues& values, const Where& where, const UpdateModifier *modifier)\n{\n\tstatic HttpClient *httpClient = this->getHttpClient(); // to initialize m_seqnum_map[thread_id] for this thread\n\ttry {\n\t\tstd::thread::id thread_id = std::this_thread::get_id();\n\t\tostringstream 
ss;\n\t\tsto_mtx_client_map.lock();\n\t\tm_seqnum_map[thread_id].fetch_add(1);\n\t\tss << m_pid << \"#\" << thread_id << \"_\" << m_seqnum_map[thread_id].load();\n\t\tsto_mtx_client_map.unlock();\n\n\t\tSimpleWeb::CaseInsensitiveMultimap headers = {{\"SeqNum\", ss.str()}};\n\t\t\n\t\tostringstream convert;\n\n\t\tconvert << \"{ \\\"updates\\\" : [ {\";\n\t\tif (modifier)\n\t\t{\n\t\t\tconvert << \"\\\"modifiers\\\" : [ \\\"\" << modifier->toJSON() << \"\\\" ], \";\n\t\t}\n\t\tconvert << \"\\\"where\\\" : \";\n\t\tconvert << where.toJSON();\n\t\tconvert << \", \\\"expressions\\\" : \";\n\t\tconvert << values.toJSON();\n\t\tconvert << \" }\";\n\t\tconvert << \" ] }\";\n\t\t\n\t\tchar url[128];\n\t\tsnprintf(url, sizeof(url), \"/storage/schema/%s/table/%s\", schema.c_str(), tableName.c_str());\n\t\tauto res = this->getHttpClient()->request(\"PUT\", url, convert.str(), headers);\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\tostringstream resultPayload;\n\t\t\tresultPayload << res->content.rdbuf();\n\t\t\tDocument doc;\n\t\t\tdoc.Parse(resultPayload.str().c_str());\n\t\t\tif (doc.HasParseError())\n\t\t\t{\n\t\t\t\tm_logger->info(\"PUT result %s.\", res->status_code.c_str());\n\t\t\t\tm_logger->error(\"Failed to parse result of updateTable. 
%s\",\n\t\t\t\t\t\tGetParseError_En(doc.GetParseError()));\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\telse if (doc.HasMember(\"message\"))\n\t\t\t{\n\t\t\t\tm_logger->error(\"Failed to update table data: %s\",\n\t\t\t\t\tdoc[\"message\"].GetString());\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\treturn doc[\"rows_affected\"].GetInt();\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Update table\", tableName, res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"update table %s\", tableName.c_str());\n\t\tthrow;\n\t}\n\treturn -1;\n}\n\n/**\n * Update data into an arbitrary table\n *\n * @param tableName\tThe name of the table into which data will be added\n * @param updates\tThe expressions and condition pairs to update in the table\n * @param modifier\tOptional update modifier\n * @return int\t\tThe number of rows updated\n */\nint StorageClient::updateTable(const string& tableName, vector<pair<ExpressionValues *, Where *>>& updates, const UpdateModifier *modifier)\n{\n\treturn updateTable(DEFAULT_SCHEMA, tableName, updates, modifier);\n}\n\n/**\n * Update data into an arbitrary table\n *\n * @param schema\tThe name of the schema into which data will be added\n * @param tableName\tThe name of the table into which data will be added\n * @param updates\tThe expressions and condition pairs to update in the table\n * @param modifier\tOptional update modifier\n * @return int\t\tThe number of rows updated\n */\nint StorageClient::updateTable(const string& schema, const string& tableName, vector<pair<ExpressionValues *, Where *>>& updates, const UpdateModifier *modifier)\n{\n\tstatic HttpClient *httpClient = this->getHttpClient(); // to initialize m_seqnum_map[thread_id] for this thread\n\ttry {\n\t\tstd::thread::id thread_id = std::this_thread::get_id();\n\t\tostringstream ss;\n\t\tsto_mtx_client_map.lock();\n\t\tm_seqnum_map[thread_id].fetch_add(1);\n\t\tss << m_pid << \"#\" 
<< thread_id << \"_\" << m_seqnum_map[thread_id].load();\n\t\tsto_mtx_client_map.unlock();\n\n\t\tSimpleWeb::CaseInsensitiveMultimap headers = {{\"SeqNum\", ss.str()}};\n\t\t\n\t\tostringstream convert;\n\t\tconvert << \"{ \\\"updates\\\" : [ \";\n\t\tfor (vector<pair<ExpressionValues *, Where *>>::const_iterator it = updates.cbegin();\n\t\t\t\t\t\t it != updates.cend(); ++it)\n\t\t{\n\t\t\tif (it != updates.cbegin())\n\t\t\t{\n\t\t\t\tconvert << \", \";\n\t\t\t}\n\t\t\tconvert << \"{ \";\n\t\t\tif (modifier)\n\t\t\t{\n\t\t\t\tconvert << \"\\\"modifiers\\\" : [ \\\"\" << modifier->toJSON() << \"\\\" ], \";\n\t\t\t}\n\t\t\tconvert << \"\\\"where\\\" : \";\n\t\t\tconvert << it->second->toJSON();\n\t\t\tconvert << \", \\\"expressions\\\" : \";\n\t\t\tconvert << it->first->toJSON();\n\t\t\tconvert << \" }\";\n\t\t}\n\t\tconvert << \" ] }\";\n\t\t\n\t\tchar url[128];\n\t\tsnprintf(url, sizeof(url), \"/storage/schema/%s/table/%s\", schema.c_str(), tableName.c_str());\n\t\tauto res = this->getHttpClient()->request(\"PUT\", url, convert.str(), headers);\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\tostringstream resultPayload;\n\t\t\tresultPayload << res->content.rdbuf();\n\t\t\tDocument doc;\n\t\t\tdoc.Parse(resultPayload.str().c_str());\n\t\t\tif (doc.HasParseError())\n\t\t\t{\n\t\t\t\tm_logger->info(\"PUT result %s.\", res->status_code.c_str());\n\t\t\t\tm_logger->error(\"Failed to parse result of updateTable. 
%s\",\n\t\t\t\t\t\tGetParseError_En(doc.GetParseError()));\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\telse if (doc.HasMember(\"message\"))\n\t\t\t{\n\t\t\t\tm_logger->error(\"Failed to update table data: %s\",\n\t\t\t\t\tdoc[\"message\"].GetString());\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\treturn doc[\"rows_affected\"].GetInt();\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Update table\", tableName, res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"update table %s\", tableName.c_str());\n\t\tthrow;\n\t}\n\treturn -1;\n}\n\n\n/**\n * Update data into an arbitrary table\n *\n * @param tableName\tThe name of the table into which data will be added\n * @param values\tThe values to insert into the table\n * @param expressions\tThe expressions to update in the table\n * @param where\t\tThe conditions to match the updated rows\n * @param modifier\tOptional update modifier\n * @return int\t\tThe number of rows updated\n */\nint StorageClient::updateTable(const string& tableName, const InsertValues& values, const ExpressionValues& expressions, const Where& where, const UpdateModifier *modifier)\n{\n\treturn updateTable(DEFAULT_SCHEMA, tableName, values, expressions, where, modifier);\n}\n\n/**\n * Update data into an arbitrary table\n *\n * @param schema\tThe name of the schema into which data will be added\n * @param tableName\tThe name of the table into which data will be added\n * @param values\tThe values to insert into the table\n * @param expressions\tThe expressions to update in the table\n * @param where\t\tThe conditions to match the updated rows\n * @param modifier\tOptional update modifier\n * @return int\t\tThe number of rows updated\n */\nint StorageClient::updateTable(const string& schema, const string& tableName, const InsertValues& values, const ExpressionValues& expressions, const Where& where, const UpdateModifier *modifier)\n{\n\ttry 
{\n\t\tostringstream convert;\n\n\t\tconvert << \"{ \\\"updates\\\" : [ { \";\n\t\tif (modifier)\n\t\t{\n\t\t\tconvert << \"\\\"modifiers\\\" : [ \\\"\" << modifier->toJSON() << \"\\\" ], \";\n\t\t}\n\t\tconvert << \"\\\"where\\\" : \";\n\t\tconvert << where.toJSON();\n\t\tconvert << \", \\\"values\\\" : \";\n\t\tconvert << values.toJSON();\n\t\tconvert << \", \\\"expressions\\\" : \";\n\t\tconvert << expressions.toJSON();\n\t\tconvert << \" }\";\n\t\tconvert << \" ] }\";\n\t\t\n\t\tchar url[128];\n\t\tsnprintf(url, sizeof(url), \"/storage/schema/%s/table/%s\", schema.c_str(), tableName.c_str());\n\t\tauto res = this->getHttpClient()->request(\"PUT\", url, convert.str());\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\tostringstream resultPayload;\n\t\t\tresultPayload << res->content.rdbuf();\n\t\t\tDocument doc;\n\t\t\tdoc.Parse(resultPayload.str().c_str());\n\t\t\tif (doc.HasParseError())\n\t\t\t{\n\t\t\t\tm_logger->info(\"PUT result %s.\", res->status_code.c_str());\n\t\t\t\tm_logger->error(\"Failed to parse result of updateTable. 
%s\",\n\t\t\t\t\t\tGetParseError_En(doc.GetParseError()));\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\telse if (doc.HasMember(\"message\"))\n\t\t\t{\n\t\t\t\tm_logger->error(\"Failed to update table data: %s\",\n\t\t\t\t\tdoc[\"message\"].GetString());\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\treturn doc[\"rows_affected\"].GetInt();\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Update table\", tableName, res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"update table %s\", tableName.c_str());\n\t\tthrow;\n\t}\n\treturn -1;\n}\n\n/**\n * Update data into an arbitrary table\n *\n * @param tableName\tThe name of the table into which data will be added\n * @param values\tThe JSON properties to update in the table\n * @param where\t\tThe conditions to match the updated rows\n * @param modifier\tOptional update modifier\n * @return int\t\tThe number of rows updated\n */\nint StorageClient::updateTable(const string& tableName, const JSONProperties& values, const Where& where, const UpdateModifier *modifier)\n{\n\treturn updateTable(DEFAULT_SCHEMA, tableName, values, where, modifier);\n}\n\n/**\n * Update data into an arbitrary table\n *\n * @param schema\tThe name of the schema into which data will be added\n * @param tableName\tThe name of the table into which data will be added\n * @param values\tThe JSON properties to update in the table\n * @param where\t\tThe conditions to match the updated rows\n * @param modifier\tOptional update modifier\n * @return int\t\tThe number of rows updated\n */\nint StorageClient::updateTable(const string& schema, const string& tableName, const JSONProperties& values, const Where& where, const UpdateModifier *modifier)\n{\n\ttry {\n\t\tostringstream convert;\n\n\t\tconvert << \"{ \\\"updates\\\" : [ {\";\n\t\tif (modifier)\n\t\t{\n\t\t\tconvert << \"\\\"modifiers\\\" : [ \\\"\" << modifier->toJSON() << \"\\\" ], \";\n\t\t}\n\t\tconvert << \"\\\"where\\\" : \";\n\t\tconvert << 
where.toJSON();\n\t\tconvert << \", \";\n\t\tconvert << values.toJSON();\n\t\tconvert << \" }\";\n\t\tconvert << \" ] }\";\n\t\t\n\t\tchar url[128];\n\t\tsnprintf(url, sizeof(url), \"/storage/schema/%s/table/%s\", schema.c_str(), tableName.c_str());\n\t\tauto res = this->getHttpClient()->request(\"PUT\", url, convert.str());\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\tostringstream resultPayload;\n\t\t\tresultPayload << res->content.rdbuf();\n\t\t\tDocument doc;\n\t\t\tdoc.Parse(resultPayload.str().c_str());\n\t\t\tif (doc.HasParseError())\n\t\t\t{\n\t\t\t\tm_logger->info(\"PUT result %s.\", res->status_code.c_str());\n\t\t\t\tm_logger->error(\"Failed to parse result of updateTable. %s\",\n\t\t\t\t\t\tGetParseError_En(doc.GetParseError()));\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\telse if (doc.HasMember(\"message\"))\n\t\t\t{\n\t\t\t\tm_logger->error(\"Failed to update table data: %s\",\n\t\t\t\t\tdoc[\"message\"].GetString());\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\treturn doc[\"rows_affected\"].GetInt();\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Update table\", tableName, res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"update table %s\", tableName.c_str());\n\t\tthrow;\n\t}\n\treturn -1;\n}\n\n/**\n * Update data into an arbitrary table\n *\n * @param tableName\tThe name of the table into which data will be added\n * @param values\tThe values to insert into the table\n * @param jsonProp\tThe JSON Properties to update\n * @param where\t\tThe conditions to match the updated rows\n * @param modifier\tOptional update modifier\n * @return int\t\tThe number of rows updated\n */\nint StorageClient::updateTable(const string& tableName, const InsertValues& values, const JSONProperties& jsonProp, const Where& where, const UpdateModifier *modifier)\n{\n\treturn updateTable(DEFAULT_SCHEMA, tableName, values, jsonProp, where, 
modifier);\n}\n\n/**\n * Update data into an arbitrary table\n *\n * @param schema\tThe name of the schema into which data will be added\n * @param tableName\tThe name of the table into which data will be added\n * @param values\tThe values to insert into the table\n * @param jsonProp\tThe JSON Properties to update\n * @param where\t\tThe conditions to match the updated rows\n * @param modifier\tOptional update modifier\n * @return int\t\tThe number of rows updated\n */\nint StorageClient::updateTable(const string& schema, const string& tableName, const InsertValues& values, const JSONProperties& jsonProp, const Where& where, const UpdateModifier *modifier)\n{\n\ttry {\n\t\tostringstream convert;\n\n\t\tconvert << \"{ \\\"updates\\\" : [ {\";\n\t\tif (modifier)\n\t\t{\n\t\t\tconvert << \"\\\"modifiers\\\" : [ \\\"\" << modifier->toJSON() << \"\\\" ], \";\n\t\t}\n\t\tconvert << \"\\\"where\\\" : \";\n\t\tconvert << where.toJSON();\n\t\tconvert << \", \\\"values\\\" : \";\n\t\tconvert << values.toJSON();\n\t\tconvert << \", \";\n\t\tconvert << jsonProp.toJSON();\n\t\tconvert << \" }\";\n\t\tconvert << \" ] }\";\n\t\t\n\t\tchar url[128];\n\t\tsnprintf(url, sizeof(url), \"/storage/schema/%s/table/%s\", schema.c_str(), tableName.c_str());\n\t\tauto res = this->getHttpClient()->request(\"PUT\", url, convert.str());\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\tostringstream resultPayload;\n\t\t\tresultPayload << res->content.rdbuf();\n\t\t\tDocument doc;\n\t\t\tdoc.Parse(resultPayload.str().c_str());\n\t\t\tif (doc.HasParseError())\n\t\t\t{\n\t\t\t\tm_logger->info(\"PUT result %s.\", res->status_code.c_str());\n\t\t\t\tm_logger->error(\"Failed to parse result of updateTable. 
%s\",\n\t\t\t\t\t\tGetParseError_En(doc.GetParseError()));\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\telse if (doc.HasMember(\"message\"))\n\t\t\t{\n\t\t\t\tm_logger->error(\"Failed to update table data: %s\",\n\t\t\t\t\tdoc[\"message\"].GetString());\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\treturn doc[\"rows_affected\"].GetInt();\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Update table\", tableName, res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"update table %s\", tableName.c_str());\n\t\tthrow;\n\t}\n\treturn -1;\n}\n\n/**\n * Delete from a table\n *\n * @param tablename\tThe name of the table to delete from\n * @param query\t\tThe query payload to match rows to delete\n * @return int\tThe number of rows deleted\n */\nint StorageClient::deleteTable(const std::string& tableName, const Query& query)\n{\n\treturn deleteTable(DEFAULT_SCHEMA, tableName, query);\n}\n\n/**\n * Delete from a table\n *\n * @param schema\tThe name of the schema to delete from\n * @param tablename\tThe name of the table to delete from\n * @param query\t\tThe query payload to match rows to delete\n * @return int\tThe number of rows deleted\n */\nint StorageClient::deleteTable(const std::string& schema, const std::string& tableName, const Query& query)\n{\n\ttry {\n\t\tostringstream convert;\n\n\t\tconvert << query.toJSON();\n\t\tchar url[128];\n\t\tsnprintf(url, sizeof(url), \"/storage/schema/%s/table/%s\", schema.c_str(), tableName.c_str());\n\t\tauto res = this->getHttpClient()->request(\"DELETE\", url, convert.str());\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\tostringstream resultPayload;\n\t\t\tresultPayload << res->content.rdbuf();\n\t\t\tDocument doc;\n\t\t\tdoc.Parse(resultPayload.str().c_str());\n\t\t\tif (doc.HasParseError())\n\t\t\t{\n\t\t\t\tm_logger->info(\"DELETE result %s.\", res->status_code.c_str());\n\t\t\t\tm_logger->error(\"Failed to parse 
result of deleteTable. %s\",\n\t\t\t\t\t\tGetParseError_En(doc.GetParseError()));\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\telse if (doc.HasMember(\"message\"))\n\t\t\t{\n\t\t\t\tm_logger->error(\"Failed to delete table data: %s\",\n\t\t\t\t\tdoc[\"message\"].GetString());\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\treturn doc[\"rows_affected\"].GetInt();\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Delete from table\", tableName, res->status_code, resultPayload.str());\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"delete table data in %s\", tableName.c_str());\n\t\tthrow;\n\t}\n\treturn -1;\n}\n\n/**\n * Standard logging method for all interactions\n *\n * @param operation\tThe operation being undertaken\n * @param table\t\tThe name of the table\n * @param responseCode\tThe HTTP response code\n * @param payload\tThe payload in the response message\n */\nvoid StorageClient::handleUnexpectedResponse(const char *operation, const string& table,\n\t\t\tconst string& responseCode,  const string& payload)\n{\n\tstring op(operation);\n\top += \" \";\n\top += table;\n\thandleUnexpectedResponse(op.c_str(), responseCode, payload);\n}\n\n/**\n * Standard logging method for all interactions\n *\n * @param operation\tThe operation being undertaken\n * @param responseCode\tThe HTTP response code\n * @param payload\tThe payload in the response message\n */\nvoid StorageClient::handleUnexpectedResponse(const char *operation,\n\t\t\tconst string& responseCode,  const string& payload)\n{\n\tDocument doc;\n\n\tdoc.Parse(payload.c_str());\n\tif (!doc.HasParseError())\n\t{\n\t\tif (doc.HasMember(\"message\"))\n\t\t{\n\t\t\tm_logger->info(\"%s completed with result %s\", operation,\n\t\t\t\t\t\t\tresponseCode.c_str());\n\t\t\tm_logger->error(\"%s: %s\", operation,\n\t\t\t\tdoc[\"message\"].GetString());\n\t\t}\n\t}\n\telse\n\t{\n\t\tm_logger->error(\"%s completed with result %s\", operation, 
responseCode.c_str());\n\t}\n}\n\n/**\n * Register interest for a Reading asset name\n *\n * @param assetName\tThe asset name to register\n *\t\t\tfor readings data notification\n * @param callbackUrl\tThe callback URL to send readings data.\n * @return\t\tTrue on success, false otherwise.\n */\nbool StorageClient::registerAssetNotification(const string& assetName,\n\t\t\t\t\t      const string& callbackUrl)\n{\n\ttry\n\t{\n\t\tostringstream convert;\n\n\t\tconvert << \"{ \\\"url\\\" : \\\"\";\n\t\tconvert << callbackUrl;\n\t\tconvert << \"\\\" }\";\n\t\tauto res = this->getHttpClient()->request(\"POST\",\n\t\t\t\t\t\t\t  \"/storage/reading/interest/\" + urlEncode(assetName),\n\t\t\t\t\t\t\t  convert.str());\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\treturn true;\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Register asset\",\n\t\t\t\t\t assetName,\n\t\t\t\t\t res->status_code,\n\t\t\t\t\t resultPayload.str());\n\t\tm_logger->error(\"/storage/reading/interest/%s: %s\",\n\t\t\t\turlEncode(assetName).c_str(), res->status_code.c_str());\n\n\t\treturn false;\n\t} catch (exception& ex)\n\t{\n\t\thandleException(ex, \"register asset '%s'\", assetName.c_str());\n\t}\n\treturn false;\n}\n\n/**\n * Unregister interest for a Reading asset name\n *\n * @param assetName\tThe asset name to unregister\n *\t\t\tfor readings data notification\n * @param callbackUrl\tThe callback URL provided in registration.\n * @return\t\tTrue on success, false otherwise.\n */\nbool StorageClient::unregisterAssetNotification(const string& assetName,\n\t\t\t\t\t\tconst string& callbackUrl)\n{\n\ttry\n\t{\n\t\tostringstream convert;\n\n\t\tconvert << \"{ \\\"url\\\" : \\\"\";\n\t\tconvert << callbackUrl;\n\t\tconvert << \"\\\" }\";\n\t\tauto res = this->getHttpClient()->request(\"DELETE\",\n\t\t\t\t\t\t\t  \"/storage/reading/interest/\" + urlEncode(assetName),\n\t\t\t\t\t\t\t  convert.str());\n\t\tif 
(res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\treturn true;\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Unregister asset\",\n\t\t\t\t\t assetName,\n\t\t\t\t\t res->status_code,\n\t\t\t\t\t resultPayload.str());\n\n\t\treturn false;\n\t} catch (exception& ex)\n\t{\n\t\thandleException(ex, \"unregister asset '%s'\", assetName.c_str());\n\t}\n\treturn false;\n}\n\n/**\n * Register interest for a table\n *\n * @param tableName\tThe table name to register for notification\n * @param tableKey\tThe key of interest in the table\n * @param tableKeyValues\tThe key values of interest\n * @param tableOperation\tThe table operation of interest (insert/update/delete)\n * @param callbackUrl\tThe callback URL to send change data\n * @return\t\tTrue on success, false otherwise.\n */\nbool StorageClient::registerTableNotification(const string& tableName, const string& key, std::vector<std::string> keyValues,\n\t\t\t\t\t      const string& operation, const string& callbackUrl)\n{\n\ttry\n\t{\n\t\tostringstream keyValuesStr;\n\t\tfor (auto & s : keyValues)\n\t\t{\n\t\t\tkeyValuesStr << \"\\\"\" << s << \"\\\"\";\n\t\t\tif (&s != &keyValues.back())\n\t\t\t\tkeyValuesStr << \", \";\n\t\t}\n\t\t\n\t\tostringstream convert;\n\n\t\tconvert << \"{ \";\n\t\tconvert << \"\\\"url\\\" : \\\"\" << callbackUrl << \"\\\", \";\n\t\tconvert << \"\\\"key\\\" : \\\"\" << key << \"\\\", \";\n\t\tconvert << \"\\\"values\\\" : [\" << keyValuesStr.str() << \"], \";\n\t\tconvert << \"\\\"operation\\\" : \\\"\" << operation << \"\\\" \";\n\t\tconvert << \"}\";\n\t\t\n\t\tauto res = this->getHttpClient()->request(\"POST\",\n\t\t\t\t\t\t\t  \"/storage/table/interest/\" + urlEncode(tableName),\n\t\t\t\t\t\t\t  convert.str());\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\treturn true;\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Register 
table\",\n\t\t\t\t\t tableName,\n\t\t\t\t\t res->status_code,\n\t\t\t\t\t resultPayload.str());\n\t\tm_logger->error(\"POST /storage/table/interest/%s: %s\",\n\t\t\t\turlEncode(tableName).c_str(), res->status_code.c_str());\n\n\t\treturn false;\n\t} catch (exception& ex)\n\t{\n\t\thandleException(ex, \"register table '%s'\", tableName.c_str());\n\t}\n\treturn false;\n}\n\n/**\n * Unregister interest for a table name\n *\n * @param tableName\tThe table name to unregister interest in\n * @param tableKey\tThe key of interest in the table\n * @param tableKeyValues\tThe key values of interest\n * @param tableOperation\tThe table operation of interest (insert/update/delete)\n * @param callbackUrl\tThe callback URL to send change data\n * @return\t\tTrue on success, false otherwise.\n */\nbool StorageClient::unregisterTableNotification(const string& tableName, const string& key, std::vector<std::string> keyValues,\n\t\t\t\t\t      const string& operation, const string& callbackUrl)\n{\n\ttry\n\t{\n\t\tostringstream keyValuesStr;\n\t\tfor (auto & s : keyValues)\n\t\t{\n\t\t\tkeyValuesStr << \"\\\"\" << s << \"\\\"\";\n\t\t\tif (&s != &keyValues.back())\n\t\t\t\tkeyValuesStr << \", \";\n\t\t}\n\t\t\n\t\tostringstream convert;\n\n\t\tconvert << \"{ \";\n\t\tconvert << \"\\\"url\\\" : \\\"\" << callbackUrl << \"\\\", \";\n\t\tconvert << \"\\\"key\\\" : \\\"\" << key << \"\\\", \";\n\t\tconvert << \"\\\"values\\\" : [\" << keyValuesStr.str() << \"], \";\n\t\tconvert << \"\\\"operation\\\" : \\\"\" << operation << \"\\\" \";\n\t\tconvert << \"}\";\n\t\t\n\t\tauto res = this->getHttpClient()->request(\"DELETE\",\n\t\t\t\t\t\t\t  \"/storage/table/interest/\" + urlEncode(tableName),\n\t\t\t\t\t\t\t  convert.str());\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\treturn true;\n\t\t}\n\t\tostringstream resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Unregister table\",\n\t\t\t\t\t tableName,\n\t\t\t\t\t 
res->status_code,\n\t\t\t\t\t resultPayload.str());\n\t\tm_logger->error(\"DELETE /storage/table/interest/%s: %s\",\n\t\t\t\turlEncode(tableName).c_str(), res->status_code.c_str());\n\n\t\treturn false;\n\t} catch (exception& ex)\n\t{\n\t\thandleException(ex, \"unregister table '%s'\", tableName.c_str());\n\t}\n\treturn false;\n}\n\n/*\n * Attempt to open a streaming connection to the storage service. We use a REST API\n * call to create the stream. If successful this call will return a port and a token\n * to use when sending data via the stream.\n *\n * @return bool\t\tReturn true if the stream was setup\n */\nbool StorageClient::openStream()\n{\n\ttry {\n\t\tauto res = this->getHttpClient()->request(\"POST\", \"/storage/reading/stream\");\n\t\tm_logger->info(\"POST /storage/reading/stream returned: %s\", res->status_code.c_str());\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n\t\t{\n\t\t\tostringstream resultPayload;\n\t\t\tresultPayload << res->content.rdbuf();\n\t\t\tDocument doc;\n\t\t\tdoc.Parse(resultPayload.str().c_str());\n\t\t\tif (doc.HasParseError())\n\t\t\t{\n\t\t\t\tm_logger->info(\"POST result %s.\", res->status_code.c_str());\n\t\t\t\tm_logger->error(\"Failed to parse result of createStream. %s. 
Document is %s\",\n\t\t\t\t\t\tGetParseError_En(doc.GetParseError()),\n\t\t\t\t\t\tresultPayload.str().c_str());\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\telse if (doc.HasMember(\"message\"))\n\t\t\t{\n\t\t\t\tm_logger->error(\"Failed to switch to stream mode: %s\",\n\t\t\t\t\tdoc[\"message\"].GetString());\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tint port, token;\n\t\t\tif ((!doc.HasMember(\"port\")) || (!doc.HasMember(\"token\")))\n\t\t\t{\n\t\t\t\tm_logger->error(\"Missing items in stream creation response\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tport = doc[\"port\"].GetInt();\n\t\t\ttoken = doc[\"token\"].GetInt();\n\t\t\tif ((m_stream = socket(AF_INET, SOCK_STREAM, 0)) == -1)\n\t\t\t{\n\t\t\t\tm_logger->error(\"Unable to create socket\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tstruct sockaddr_in serv_addr;\n\t\t\thostent *server;\n\t\t\tif ((server = gethostbyname(m_host.c_str())) == NULL)\n\t\t\t{\n\t\t\t\tm_logger->error(\"Unable to resolve hostname for reading stream: %s\", m_host.c_str());\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tbzero((char *) &serv_addr, sizeof(serv_addr));\n\t\t\tserv_addr.sin_family = AF_INET;\n\t\t\tbcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length);\n\t\t\tserv_addr.sin_port = htons(port);\n\t\t\tif (connect(m_stream, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Unable to connect to storage streaming server: %s, %d\", m_host.c_str(), port);\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tRDSConnectHeader conhdr;\n\t\t\tconhdr.magic = RDS_CONNECTION_MAGIC;\n\t\t\tconhdr.token = token;\n\t\t\tif (write(m_stream, &conhdr, sizeof(conhdr)) != sizeof(conhdr))\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Failed to write connection header: %s\", strerror(errno));\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tm_streaming = true;\n\t\t\tm_logger->info(\"Storage stream successfully created\");\n\t\t\treturn true;\n\t\t}\n\t\tostringstream 
resultPayload;\n\t\tresultPayload << res->content.rdbuf();\n\t\thandleUnexpectedResponse(\"Create reading stream\", res->status_code, resultPayload.str());\n\t\treturn false;\n\t} catch (exception& ex) {\n\t\thandleException(ex, \"create reading stream\");\n\t}\n\tm_logger->error(\"Fallen through!\");\n\treturn false;\n}\n\n/**\n * Stream a set of readings to the storage service.\n *\n * The stream uses a TCP connection to the storage system; it sends\n * blocks of readings to the storage engine and bypasses the usual\n * JSON conversion and, importantly, parsing on the storage system\n * side.\n *\n * A block of readings is introduced by a block header, the block\n * header contains a magic number, the block number and the count\n * of the number of readings in a block.\n *\n * Each reading within the block is preceded by a reading header\n * that contains a magic number, a reading number within the block,\n * the length of the asset name for the reading, the length of the\n * payload within the reading. The reading itself follows the header\n * and consists of the timestamp as a binary timeval structure, the name\n * of the asset, including the null terminator. If the asset name length\n * is 0 then no asset name is sent and the name of the asset is the same\n * as the previous asset in the block. Following this the payload is included.\n *\n * Each block is sent to the storage layer in a number of chunks rather\n * than a single write per block. 
The implementation makes use of the\n * Linux scatter/gather IO calls to reduce the number of copies of data\n * that are required.\n *\n * Currently there is no acknowledgement handling as TCP is used as the underlying\n * transport and the TCP acknowledgement is assumed to be a good enough\n * indication of delivery.\n *\n * TODO Deal with acknowledgements, add error checking/recovery\n *\n * @param readings\tThe readings to stream\n * @return bool\t\tTrue if the readings have been sent\n */\nbool StorageClient::streamReadings(const std::vector<Reading *> & readings)\n{\nRDSBlockHeader   \t\tblkhdr;\nRDSReadingHeader \t\trdhdrs[STREAM_BLK_SIZE];\nregister RDSReadingHeader\t*phdr;\nstruct iovec\t\t\tiovs[STREAM_BLK_SIZE * 4], *iovp;\nstring\t\t\t\tpayloads[STREAM_BLK_SIZE];\nstruct timeval\t\t\ttm[STREAM_BLK_SIZE];\nssize_t\t\t\t\tn, length = 0;\nstring\t\t\t\tlastAsset;\n\n\n\tif (!m_streaming)\n\t{\n\t\tm_logger->warn(\"Attempt to send data via a storage stream when streaming is not setup\");\n\t\treturn false;\n\t}\n\n\t/*\n\t * Assemble and write the block header. This header contains information\n\t * to synchronise the blocks of data and also the number of readings\n\t * to expect within the block.\n\t */\n\tblkhdr.magic = RDS_BLOCK_MAGIC;\n\tblkhdr.blockNumber = m_readingBlock++;\n\tblkhdr.count = readings.size();\n\tif ((n = write(m_stream, &blkhdr, sizeof(blkhdr))) != sizeof(blkhdr))\n\t{\n\t\tif (errno == EPIPE || errno == ECONNRESET)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Storage service has closed stream unexpectedly\");\n\t\t\tm_streaming = false;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Failed to write block header: %s\", strerror(errno));\n\t\t}\n\t\treturn false;\n\t}\n\n\t/*\n\t * Use the writev scatter/gather interface to send the reading headers and reading data.\n\t * We send chunks of data in order to allow the parallel sending and unpacking process\n\t * at the two ends. 
The chunk size is STREAM_BLK_SIZE readings.\n\t */\n\tiovp = iovs;\n\tphdr = rdhdrs;\n\tint offset = 0;\n\tfor (int i = 0; i < readings.size(); i++)\n\t{\n\t\tphdr->magic = RDS_READING_MAGIC;\n\t\tphdr->readingNo = i;\n\t\tstring assetCode = readings[i]->getAssetName();\n\t\tif (i > 0 && assetCode.compare(lastAsset) == 0)\n\t\t{\n\t\t\t// Asset name is unchanged so don't send it\n\t\t\tphdr->assetLength = 0;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Asset name has changed or this is the first asset in the block\n\t\t\tlastAsset = assetCode;\n\t\t\tphdr->assetLength = assetCode.length() + 1;\n\t\t}\n\n\t\t// Always generate the JSON variant of the data points and send\n\t\tpayloads[offset] = readings[i]->getDatapointsJSON();\n\t\tphdr->payloadLength = payloads[offset].length() + 1;\n\n\t\t// Add the reading header\n\t\tiovp->iov_base = phdr;\n\t\tiovp->iov_len = sizeof(RDSReadingHeader);\n\t\tlength += iovp->iov_len;\n\t\tiovp++;\n\n\t\t// Reading user timestamp\n\t\treadings[i]->getUserTimestamp(&tm[offset]);\n\t\tiovp->iov_base = &tm[offset];\n\t\tiovp->iov_len = sizeof(struct timeval);\n\t\tlength += iovp->iov_len;\n\t\tiovp++;\n\n\t\t// If the asset code has changed then add it\n\t\tif (phdr->assetLength)\n\t\t{\n\t\t\tiovp->iov_base = (void *)(readings[i]->getAssetName().c_str());\t// Cast away const due to iovec definition\n\t\t\tiovp->iov_len = phdr->assetLength;\n\t\t\tlength += iovp->iov_len;\n\t\t\tiovp++;\n\t\t}\n\n\t\t// Add the data points themselves\n\t\tiovp->iov_base = (void *)(payloads[offset].c_str()); // Cast away const due to iovec definition\n\t\tiovp->iov_len = phdr->payloadLength;\n\t\tlength += iovp->iov_len;\n\t\tiovp++;\n\n\t\toffset++;\n\t\tif (offset == STREAM_BLK_SIZE - 1)\n\t\t{\n\t\t\tif (iovp - iovs > STREAM_BLK_SIZE * 4)\n\t\t\t\tLogger::getLogger()->error(\"Too many iov blocks %d\", iovp - iovs);\n\t\t\t// Send a chunk of readings in the block\n\t\t\tn = writev(m_stream, (const iovec *)iovs, iovp - iovs);\n\t\t\tif (n == 
-1)\n\t\t\t{\n\t\t\t\tif (errno == EPIPE || errno == ECONNRESET)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->error(\"Stream has been closed by the storage service\");\n\t\t\t\t\tm_streaming = false;\n\t\t\t\t}\n\t\t\t\tLogger::getLogger()->error(\"Write of block %d failed: %s\",\n\t\t\t\t\t\t\tm_readingBlock - 1, strerror(errno));\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\telse if (n < length)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Write of block short, %d < %d: %s\",\n\t\t\t\t\t\t\tn, length, strerror(errno));\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\telse if (n > length)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->fatal(\"Long write %d < %d\", length, n);\n\t\t\t}\n\t\t\toffset = 0;\n\t\t\tlength = 0;\n\t\t\tiovp = iovs;\n\t\t\tphdr = rdhdrs;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tphdr++;\n\t\t}\n\t}\n\n\tif (length)\t// Remaining data to be sent to finish the block\n\t{\n\t\tn = writev(m_stream, (const iovec *)iovs, iovp - iovs);\n\t\tif (n == -1)\n\t\t{\n\t\t\tif (errno == EPIPE || errno == ECONNRESET)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Stream has been closed by the storage service\");\n\t\t\t\tm_streaming = false;\n\t\t\t}\n\t\t\tLogger::getLogger()->error(\"Write of block %d failed: %s\",\n\t\t\t\t\t\tm_readingBlock - 1, strerror(errno));\n\t\t\treturn false;\n\t\t}\n\t\telse if (n < length)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Write of block short, %d < %d: %s\",\n\t\t\t\t\t\tn, length, strerror(errno));\n\t\t\treturn false;\n\t\t}\n\t\telse if (n > length)\n\t\t{\n\t\t\tLogger::getLogger()->fatal(\"Long write %d < %d\", length, n);\n\t\t}\n\t}\n\tLogger::getLogger()->info(\"Written block of %d readings via streaming connection\", readings.size());\n\treturn true;\n}\n\n/**\n * Handle exceptions encountered when communicating with the storage system\n *\n * @param ex\tThe exception we are handling\n * @param operation\tA printf style format string describing the operation, followed by its arguments\n */\nvoid StorageClient::handleException(const exception& ex, const char *operation, ...)\n{\n\tchar buf[EXCEPTION_BUFFER_SIZE];\n\tva_list ap;\n\tva_start(ap, 
operation);\n\tvsnprintf(buf, sizeof(buf), operation, ap);\n\tva_end(ap);\n\t// Firstly deal with not flooding the log with repeated exceptions\n\tconst char *what = ex.what();\n\tif (m_lastException.empty())\t// First exception\n\t{\n\t\tm_lastException = what;\n\t\tm_exRepeat = 0;\n\t\tm_backoff = SC_INITIAL_BACKOFF;\n\t\tm_logger->error(\"Failed to %s: %s\", buf, m_lastException.c_str());\n\t}\n\telse if (m_lastException.compare(what) == 0)\n\t{\n\t\tm_exRepeat++;\n\t\tif ((m_exRepeat % m_backoff) == 0)\n\t\t{\n\t\t\tif (m_backoff < SC_MAX_BACKOFF)\n\t\t\t\tm_backoff *= 2;\n\t\t\tm_logger->error(\"Storage client repeated failure: %s\", m_lastException.c_str());\n\t\t}\n\t}\n\telse\n\t{\n\t\tm_logger->error(\"Storage client failure: %s repeated %d times\", m_lastException.c_str(), m_exRepeat);\n\t\tm_backoff = SC_INITIAL_BACKOFF;\n\t\tm_lastException = what;\n\t\tm_logger->error(\"Failed to %s: %s\", buf, m_lastException.c_str());\n\t}\n\n\t// Now implement some recovery strategies\n\tif (m_lastException.compare(\"Connection refused\") == 0)\n\t{\n\t\t// This is probably because the storage service has gone down\n\t\tif (m_management)\n\t\t{\n\t\t\t// Get a handle on the storage layer\n\t\t\tServiceRecord storageRecord(\"Fledge Storage\");\n\t\t\tif (!m_management->getService(storageRecord))\n\t\t\t{\n\t\t\t\tm_logger->fatal(\"Unable to find a storage service from service registry, exiting...\");\n\t\t\t\texit(1);\n\t\t\t}\n\t\t\tm_urlbase << storageRecord.getAddress() << \":\" << storageRecord.getPort();\n\t\t}\n\t\tif (m_exRepeat >= SC_INITIAL_BACKOFF * 2)\n\t\t{\n\t\t\t// We clearly tried to recover a number of times without success, simply exit at this stage\n\t\t\tm_logger->fatal(\"Storage service appears to have failed and unable to connect to core, exiting...\");\n\t\t\texit(1);\n\t\t}\n\t}\n}\n\n/**\n * Function to create Storage Schema\n */\nbool StorageClient::createSchema(const std::string& payload)\n{\n        try {\n                auto res = 
this->getHttpClient()->request(\"POST\", \"/storage/schema\", payload.c_str());\n                if (res->status_code.compare(\"200 OK\") == 0)\n                {\n                        return true;\n                }\n                ostringstream resultPayload;\n                resultPayload << res->content.rdbuf();\n                handleUnexpectedResponse(\"Post Storage Schema\", res->status_code, resultPayload.str());\n                return false;\n        } catch (exception& ex) {\n                handleException(ex, \"post storage schema\");\n        }\n        return false;\n}\n\n/**\n * Update data into an arbitrary table\n *\n * @param schema        The name of the schema into which data will be added\n * @param tableName     The name of the table into which data will be added\n * @param updates       The values and condition pairs to update in the table\n * @param modifier      Optional update modifier\n * @return int          The number of rows updated\n */\nint StorageClient::updateTable(const string& schema, const string& tableName, std::vector<std::pair<InsertValue*, Where*> >& updates, const UpdateModifier *modifier)\n{\n        static HttpClient *httpClient = this->getHttpClient(); // to initialize m_seqnum_map[thread_id] for this thread\n        try {\n                std::thread::id thread_id = std::this_thread::get_id();\n                ostringstream ss;\n                sto_mtx_client_map.lock();\n                m_seqnum_map[thread_id].fetch_add(1);\n                ss << m_pid << \"#\" << thread_id << \"_\" << m_seqnum_map[thread_id].load();\n                sto_mtx_client_map.unlock();\n\n                SimpleWeb::CaseInsensitiveMultimap headers = {{\"SeqNum\", ss.str()}};\n\n                ostringstream convert;\n                convert << \"{ \\\"updates\\\" : [ \";\n\n\t\tfor (vector<pair<InsertValue *, Where *>>::const_iterator it = updates.cbegin();\n                                                 it != updates.cend(); ++it)\n    
            {\n                        if (it != updates.cbegin())\n                        {\n                                convert << \", \";\n                        }\n                        convert << \"{ \";\n                        if (modifier)\n                        {\n                                convert << \"\\\"modifiers\\\" : [ \\\"\" << modifier->toJSON() << \"\\\" ], \";\n                        }\n                        convert << \"\\\"where\\\" : \";\n                        convert << it->second->toJSON();\n                        convert << \", \\\"values\\\" : \";\n                        convert << \" { \" << it->first->toJSON() << \" } \";\n                        convert << \" }\";\n                }\n                convert << \" ] }\";\n\n                char url[128];\n                snprintf(url, sizeof(url), \"/storage/schema/%s/table/%s\", schema.c_str(), tableName.c_str());\n                auto res = this->getHttpClient()->request(\"PUT\", url, convert.str(), headers);\n\n\t\tif (res->status_code.compare(\"200 OK\") == 0)\n                {\n                        ostringstream resultPayload;\n                        resultPayload << res->content.rdbuf();\n                        Document doc;\n                        doc.Parse(resultPayload.str().c_str());\n                        if (doc.HasParseError())\n                        {\n                                m_logger->info(\"PUT result %s.\", res->status_code.c_str());\n                                m_logger->error(\"Failed to parse result of updateTable. 
%s\",\n                                                GetParseError_En(doc.GetParseError()));\n                                return -1;\n                        }\n                        else if (doc.HasMember(\"message\"))\n                        {\n                                m_logger->error(\"Failed to update table data: %s\",\n                                        doc[\"message\"].GetString());\n                                return -1;\n                        }\n                        return doc[\"rows_affected\"].GetInt();\n                }\n                ostringstream resultPayload;\n                resultPayload << res->content.rdbuf();\n                handleUnexpectedResponse(\"Update table\", tableName, res->status_code, resultPayload.str());\n        } catch (exception& ex) {\n                handleException(ex, \"update table %s\", tableName.c_str());\n                throw;\n        }\n        return -1;\n}\n\n/**\n * Update data into an arbitrary table\n *\n * @param tableName     The name of the table in which data will be updated\n * @param updates       The values and condition pairs to update in the table\n * @param modifier      Optional update modifier\n * @return int          The number of rows updated\n */\n\nint StorageClient::updateTable(const string& tableName, std::vector<std::pair<InsertValue*, Where*> >& updates, const UpdateModifier *modifier)\n{\n\treturn updateTable(DEFAULT_SCHEMA, tableName, updates, modifier);\n}\n\n/**\n * Insert data into an arbitrary table\n *\n * @param tableName     The name of the table into which data will be added\n * @param values        The values to insert into the table\n * @return int          The number of rows inserted\n */\nint StorageClient::insertTable(const string& tableName, const std::vector<InsertValues>&  values)\n{\n\treturn insertTable(DEFAULT_SCHEMA, tableName, values);\n}\n\n/**\n * Insert data into an arbitrary table\n *\n * @param schema        The name of the schema to insert into\n * 
@param tableName     The name of the table into which data will be added\n * @param values        The values to insert into the table\n * @return int          The number of rows inserted\n */\nint StorageClient::insertTable(const string& schema, const string& tableName, const std::vector<InsertValues>&  values)\n{\n        try {\n\t\tostringstream convert;\n\t\tconvert << \"{ \\\"inserts\\\": [\" ;\n                for (std::vector<InsertValues>::const_iterator it = values.cbegin();\n                                                 it != values.cend(); ++it)\n                {\n                        if (it != values.cbegin())\n                        {\n                                convert << \", \";\n                        }\n                        convert <<  it->toJSON() ;\n                }\n\t\tconvert << \"]}\";\n\n                char url[1000];\n                snprintf(url, sizeof(url), \"/storage/schema/%s/table/%s\", schema.c_str(), tableName.c_str());\n\n                auto res = this->getHttpClient()->request(\"POST\", url, convert.str());\n                ostringstream resultPayload;\n                resultPayload << res->content.rdbuf();\n                if (res->status_code.compare(\"200 OK\") == 0 || res->status_code.compare(\"201 Created\") == 0)\n                {\n\n                        Document doc;\n                        doc.Parse(resultPayload.str().c_str());\n                        if (doc.HasParseError())\n                        {\n                                m_logger->info(\"POST result %s.\", res->status_code.c_str());\n                                m_logger->error(\"Failed to parse result of insertTable. %s. 
Document is %s\",\n                                                GetParseError_En(doc.GetParseError()),\n                                                resultPayload.str().c_str());\n                                return -1;\n                        }\n                        else if (doc.HasMember(\"rows_affected\"))\n                        {\n                                return doc[\"rows_affected\"].GetInt();\n                        }\n                }\n                handleUnexpectedResponse(\"Insert table\", res->status_code, resultPayload.str());\n        } catch (exception& ex) {\n                handleException(ex, \"insert into table %s\", tableName.c_str());\n                throw;\n        }\n        return 0;\n}\n"
  },
  {
    "path": "C/common/string_utils.cpp",
    "content": "/*\n * Fledge utility functions for handling strings\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli, Massimiliano Pinto\n */\n\n#include <iostream>\n#include <string>\n#include \"string_utils.h\"\n#include <logger.h>\n#include <stdio.h>\n#include <string.h>\n\nusing namespace std;\n\n/**\n * Search and replace a string\n *\n * @param out StringToManage     string in which to apply the search and replacement\n * @param     StringToSearch     string to search and replace\n * @param     StringReplacement  substitution string\n *\n */\nvoid StringReplace(std::string& StringToManage,\n\t\t   const std::string& StringToSearch,\n\t\t   const std::string& StringReplacement)\n{\n\tif (StringToManage.find(StringToSearch) != string::npos)\n\t{\n\t\tStringToManage.replace(StringToManage.find(StringToSearch),\n\t\t\t\t       StringToSearch.length(),\n\t\t\t\t       StringReplacement);\n\t}\n}\n\n/**\n * Search and replace all the occurrences of a string\n *\n * @param out StringToManage     string in which to apply the search and replacement\n * @param     StringToSearch     string to search and replace\n * @param     StringReplacement  substitution string\n *\n */\nvoid StringReplaceAll(std::string& StringToManage,\n\t\t\t\t   const std::string& StringToSearch,\n\t\t\t\t   const std::string& StringReplacement)\n{\n\n\twhile (StringToManage.find(StringToSearch) != string::npos)\n\t{\n\t\tStringReplace(StringToManage,StringToSearch, StringReplacement);\n\t}\n}\n\n/**\n * Removes the last level of the hierarchy\n *\n */\nstd::string evaluateParentPath(const std::string& path, char separator)\n{\n\tstd::string parent;\n\n\tparent = path;\n\tif (parent.length() > 1)\n\t{\n\t\tif (parent.find(separator) != string::npos)\n\t\t{\n\t\t\twhile (parent.back() != separator)\n\t\t\t{\n\t\t\t\tparent.erase(parent.size() - 1);\n\t\t\t}\n\t\t\tif (parent.back() == 
separator)\n\t\t\t{\n\t\t\t\tparent.erase(parent.size() - 1);\n\t\t\t}\n\t\t}\n\t}\n\n\treturn parent;\n}\n\n/**\n * Extract last level of the hierarchy\n *\n */\nstd::string extractLastLevel(const std::string& path, char separator)\n{\n\tstd::string level;\n\tstd::string tmpPath;\n\tchar end_char;\n\n\ttmpPath = path;\n\n\tif (tmpPath.length() > 0)\n\t{\n\t\tif (tmpPath.find(separator) != string::npos)\n\t\t{\n\t\t\tend_char = tmpPath.back();\n\t\t\twhile (end_char != separator)\n\t\t\t{\n\t\t\t\tlevel.insert(0, 1, end_char);\n\t\t\t\ttmpPath.erase(tmpPath.size() - 1);\n\t\t\t\tend_char = tmpPath.back();\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tlevel = path;\n\t\t}\n\t}\n\n\treturn level;\n}\n\n\n\n/**\n * Removes slash when not needed, at the beginning and at the end,\n * substitutes // with /\n *\n * @param     stringToFix    string to handle\n *\n */\nstd::string StringSlashFix(const std::string& stringToFix)\n{\n\tstd::string stringFixed;\n\n\tstringFixed = stringToFix;\n\n\tif (!stringFixed.empty()) {\n\n\t\t// Remove leading '/' characters, guarding against the string becoming empty\n\t\twhile (!stringFixed.empty() && stringFixed.front() == '/')\n\t\t{\n\t\t\tstringFixed.erase(0, 1);\n\t\t}\n\n\t\t// Remove trailing '/' characters\n\t\twhile (!stringFixed.empty() && stringFixed.back() == '/')\n\t\t{\n\t\t\tstringFixed.pop_back();\n\t\t}\n\n\t\t// Substitute // with /\n\t\twhile (stringFixed.find(\"//\") != string::npos)\n\t\t{\n\t\t\tStringReplace(stringFixed, \"//\", \"/\");\n\t\t}\n\t}\n\n\treturn stringFixed;\n}\n\n/**\n * Strips Line feed and/or carriage return\n *\n * @param StringToManage The string to strip\n */\nvoid StringStripCRLF(std::string& StringToManage)\n{\n\tstring::size_type pos = 0;\n\n\twhile ((pos = StringToManage.find ('\\r',pos)) != string::npos)\n\t{\n\t\tStringToManage.erase ( pos, 1 );\n\t}\n\n\tpos = 0;\n\twhile ((pos = StringToManage.find ('\\n',pos)) != 
string::npos)\n\t{\n\t\tStringToManage.erase ( pos, 1 );\n\t}\n\n}\n\n/**\n * Strips \" from the string\n *\n */\nvoid StringStripQuotes(std::string& StringToManage)\n{\n\tif ( ! StringToManage.empty())\n\t{\n\t\tStringReplaceAll(StringToManage, \"\\\"\", \"\");\n\t}\n}\n\n/**\n * Removes all the white spaces from a string\n *\n */\nstring StringStripWhiteSpacesAll(const  std::string& original)\n{\n\tstring output;\n\n\toutput = original;\n\n\tfor (size_t i = 0; i < output.length(); )\n\t{\n\t\tif (isspace(output[i]))\n\t\t{\n\t\t\toutput.erase(i, 1);\n\t\t}\n\t\telse\n\t\t{\n\t\t\ti++;\n\t\t}\n\n\t}\n\n\treturn (output);\n}\n\n/**\n * Removes all the spaces at both ends of a string and\n * removes all the white spaces except 1 space\n *\n */\nstring StringStripWhiteSpacesExtra(const  std::string& original)\n{\n\tint cSpace;\n\tstring output;\n\n\toutput = StringRTrim(StringLTrim(original));\n\tcSpace = 0;\n\n\tfor (size_t i = 0; i < output.length(); )\n\t{\n\t\tif (output[i] == ' ')\n\t\t{\n\t\t\tcSpace++;\n\n\t\t\tif (cSpace > 1)\n\t\t\t{\n\t\t\t\toutput.erase(i, 1);\n\t\t\t} else {\n\t\t\t\ti++;\n\t\t\t}\n\t\t} else\n\t\t{\n\t\t\tif (isspace(output[i]))\n\t\t\t{\n\t\t\t\toutput.erase(i, 1);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\ti++;\n\t\t\t\tcSpace = 0;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn (output);\n}\n\n\n/**\n * URL-encode a given string\n *\n * @param s             Input string that is to be URL-encoded\n * @return              URL-encoded output string\n */\nstring urlEncode(const string &s)\n{\n\tostringstream escaped;\n\tescaped.fill('0');\n\tescaped << hex;\n\n\tfor (string::const_iterator i = s.begin(), n = s.end();\n\t\t\t\t    i != n;\n\t\t\t\t    ++i)\n\t{\n\t\tstring::value_type c = (*i);\n\n\t\t// Keep alphanumeric and other accepted characters intact\n\t\tif (isalnum(c) || c == '-' || c == '_' || c == '.' 
|| c == '~') {\n\t\t\tescaped << c;\n\t\t\tcontinue;\n\t\t}\n\n\t\t// Any other characters are percent-encoded\n\t\tescaped << uppercase;\n\t\tescaped << '%' << setw(2) << int((unsigned char) c);\n\t\tescaped << nouppercase;\n\t}\n\n\treturn escaped.str();\n}\n\n/**\n * Check if a char is a hex value\n *\n * @param c\tThe input char\n * @return\tTrue if a hex digit,\n * \t\tfalse otherwise\n */\nstatic inline bool ishex (const char c)\n{\n\t// Accept both upper and lower case hex digits\n\treturn isxdigit((unsigned char)c) != 0;\n}\n\n/**\n * URL decode of a given string\n *\n * @param name\tThe string to decode\n * @return\tThe URL decoded string\n *\n * In case of decoding errors the routine returns\n * current decoded string\n */\nstring urlDecode(const std::string& name)\n{\n\tstd::string decoded;\n\tconst char* s = name.c_str();\n\tconst char* end = s + name.length();\n\tunsigned int c;\n\n\tdecoded.reserve(name.length());\n\twhile (s < end)\n\t{\n\t\tc = (unsigned char)*s++;\n\t\tif (c == '+')\n\t\t{\n\t\t\tc = ' ';\n\t\t}\n\t\telse if (c == '%')\n\t\t{\n\t\t\tif (end - s < 2 || !ishex(s[0]) || !ishex(s[1]) ||\n\t\t\t    sscanf(s, \"%2x\", &c) != 1)\n\t\t\t{\n\t\t\t\tbreak;\t// Malformed %xx escape, return what has been decoded so far\n\t\t\t}\n\t\t\ts += 2;\n\t\t}\n\t\tdecoded += (char)c;\n\t}\n\n\treturn decoded;\n}\n\n/**\n * Escape all double quotes characters in the string\n *\n * @param str\tThe string to escape\n */\nvoid StringEscapeQuotes(std::string& str)\n{\n\tfor (size_t i = 0; i < str.length(); i++)\n\t{\n\t\tif (str[i] == '\\\"' && (i == 0 || str[i-1] != '\\\\'))\n\t\t{\n\t\t\tstr.replace(i, 1, \"\\\\\\\"\");\n\t\t}\n\n\t}\n}\n\n/**\n * Remove space at both ends of a string\n */\nchar *trim(char *str)\n{\n\tchar *ptr;\n\n\twhile (*str && *str == ' ')\n\t\tstr++;\n\n\tptr = str + strlen(str) - 1;\n\twhile (ptr > str && *ptr == ' ')\n\t{\n\t\t*ptr = 0;\n\t\tptr--;\n\t}\n\treturn str;\n}\n\n/**\n * Remove spaces at the left side 
of a string\n */\nstd::string StringLTrim(const std::string& str)\n{\n\tstring output;\n\tsize_t pos = str.find_first_not_of(\" \");\n\n\tif (pos == std::string::npos)\n\t\toutput = \"\";\n\telse\n\t\toutput = str.substr(pos);\n\n\treturn (output);\n}\n\n/**\n * Remove spaces at the right side of a string\n */\nstd::string StringRTrim(const std::string& str)\n{\n\tstring output;\n\tsize_t pos = str.find_last_not_of(\" \");\n\n\tif (pos == std::string::npos)\n\t\toutput =  \"\";\n\telse\n\t\toutput = str.substr(0, pos + 1);\n\n\treturn (output);\n}\n\n/**\n * Remove spaces at both ends of a string\n */\nstd::string StringTrim(const std::string& str)\n{\n\treturn StringRTrim(StringLTrim(str));\n}\n\n/**\n * Evaluates if the input string is a regular expression\n */\nbool IsRegex(const string &str) {\n\n\tsize_t nChar;\n\tnChar = strcspn(str.c_str(), \"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_\");\n\n\treturn (nChar != 0);\n}\n\n/**\n * Return a new string that extracts from the passed in string either side\n * of a position within the string.\n *\n * @param str\tThe string to return a portion of\n * @param pos\tThe position around which to extract a portion\n * @param after\tThe number of characters after the position to return, defaults to 30 if omitted\n * @param before The number of characters before the position to return, defaults to 10\n */\nstd::string StringAround(const std::string& str, unsigned int pos,\n\t\tunsigned int after, unsigned int before)\n{\n\tsize_t\tstart = pos > before ? 
(pos - before) : 0;\n\tsize_t\tlen = before + after;\n\treturn str.substr(start, len);\n}\n\n/**\n * Search and replace all the occurrences of a string\n *\n * @param out StringToManage    string in which to apply the search\n * @param     StringToSearch    string to search\n * @param     StringToChange  substitution string\n *\n */\nvoid StringReplaceAllEx(std::string& StringToManage,\n\t\t\t\t\t  const std::string& StringToSearch,\n\t\t\t\t\t  const std::string& StringToChange)\n{\n\tsize_t pos = 0;\n\twhile ((pos = StringToManage.find(StringToSearch, pos)) != std::string::npos)\n\t{\n\t\tStringToManage.replace(pos, StringToSearch.length(), StringToChange);\n\t\tpos += StringToChange.length(); // Move past the last replaced substring\n\t}\n\n}\n\n/**\n * Escape double quotes, forward and backward slashes\n *\n * @param str\tThe string to escape\n * @return The escaped string\n */\nstd::string\tescape(const std::string& str)\n{\n\tsize_t pos = 0;\n\tif (str.find(\"\\\"\", pos) == std::string::npos && str.find(\"\\\\\", pos) == std::string::npos && str.find(\"/\", pos) == std::string::npos)\n\t\treturn str; // return if none of the characters '\"', '\\\\' or '/' is present\n\n\tstd::string rval;\n\tint bscount = 0;\n\tfor (size_t i = 0; i < str.length(); i++)\n\t{\n\t\tif (str[i] == '\\\\')\n\t\t{\n\t\t\tif (i + 1 < str.length() && (str[i + 1] == '\"' || str[i + 1] == '\\\\' || str[i + 1] == '/' || (i > 0 && str[i-1] == '\\\\')))\n\t\t\t{\n\t\t\t\trval += '\\\\';\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\trval += \"\\\\\\\\\";\n\t\t\t}\n\t\t\tbscount++;\n\t\t}\n\t\telse if (str[i] == '\\\"')\n\t\t{\n\t\t\tif ((bscount & 1) == 0) // not already escaped\n\t\t\t{\n\t\t\t\trval += \"\\\\\"; // Add escape of \"\n\t\t\t}\n\t\t\trval += str[i];\n\t\t\tbscount = 0;\n\t\t}\n\t\telse\n\t\t{\n\t\t\trval += str[i];\n\t\t\tbscount = 0;\n\t\t}\n\t}\n\treturn rval;\n}\n\n"
  },
  {
    "path": "C/common/where.cpp",
    "content": "/*\n * Fledge storage service client\n *\n * Copyright (c) 2018 OSIsoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <where.h>\n#include <string>\n#include <sstream>\n#include <iostream>\n\nusing namespace std;\n\n/**\n * Where clause destructor\n */\nWhere::~Where()\n{\n\tif (m_or)\n\t{\n\t\tdelete m_or;\n\t}\n\tif (m_and)\n\t{\n\t\tdelete m_and;\n\t}\n}\n\n/**\n * Return the JSON payload for a where clause\n */\nconst string Where::toJSON() const\n{\nostringstream json;\n\n\tjson << \"{ \\\"column\\\" : \\\"\" << m_column << \"\\\", \";\n\tjson << \"\\\"condition\\\" : \\\"\";\n\tswitch (m_condition)\n\t{\n\tcase Older:\n\t\tjson << \"older\";\n\t\tbreak;\n\tcase Newer:\n\t\tjson << \"newer\";\n\t\tbreak;\n\tcase Equals:\n\t\tjson << \"=\";\n\t\tbreak;\n\tcase NotEquals:\n\t\tjson << \"!=\";\n\t\tbreak;\n\tcase LessThan:\n\t\tjson << \"<\";\n\t\tbreak;\n\tcase GreaterThan:\n\t\tjson << \">\";\n\t\tbreak;\n\tcase In:\n\t\tjson << \"in\";\n\t\tbreak;\n\tcase IsNull:\n\t\tjson << \"isnull\";\n\t\tbreak;\n\tcase NotNull:\n\t\tjson << \"notnull\";\n\t\tbreak;\n\t}\n\tjson << \"\\\"\";\n\n\tif (m_condition != IsNull && m_condition != NotNull)\n\t{\n\t\tjson << \", \";\n\t\tif ( (m_condition == Older) || (m_condition == Newer) )\n\t\t{\n\t\t\tjson << \"\\\"value\\\" : \" << m_value << \"\";\n\n\t\t}\n\t\telse if (m_condition != In)\n\t\t{\n\t\t\tjson << \"\\\"value\\\" : \\\"\" << m_value << \"\\\"\";\n\t\t}\n\t\telse\n\t\t{\n\t\t\tjson << \"\\\"value\\\" : [\";\n\t\t\tfor (auto v = m_in.begin();\n\t\t\t     v != m_in.end();\n\t\t\t     ++v)\n\t\t\t{\n\t\t\t\tjson << \"\\\"\" << *v << \"\\\"\";\n\t\t\t\tif (next(v, 1) != m_in.end())\n\t\t\t\t{\n\t\t\t\t\tjson << \", \";\n\t\t\t\t}\n\t\t\t}\n\t\t\tjson << \"]\";\n\t\t}\n\t}\n\n\tif (m_and || m_or)\n\t{\n\t\tif (m_and)\n\t\t{\n\t\t\tjson << \", \\\"and\\\" : \" << m_and->toJSON();\n\t\t}\n\t\tif (m_or)\n\t\t{\n\t\t\tjson << \", \\\"or\\\" : \" << 
m_or->toJSON();\n\t\t}\n\t}\n\tjson << \" }\";\n\treturn json.str();\n}\n"
  },
  {
    "path": "C/plugins/common/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6.0)\n\nproject(plugins-common-lib)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\n\nset(NEEDED_FLEDGE_LIBS common-lib services-common-lib)\nset(LIBCURL_LIB -lcurl)\n\nset(BOOST_COMPONENTS system thread)\n# Late 2017 TODO: remove the following checks and always use std::regex\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        set(BOOST_COMPONENTS ${BOOST_COMPONENTS} regex)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DUSE_BOOST_REGEX\")\n    endif()\nendif()\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\n# Find source files\nfile(GLOB SOURCES *.cpp)\n\n# Include header files\ninclude_directories(include)\ninclude_directories(../../common/include)\ninclude_directories(../../services/common/include)\ninclude_directories(../../thirdparty/Simple-Web-Server)\ninclude_directories(../../thirdparty/rapidjson/include)\n\nset(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/../../lib)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\ntarget_link_libraries(${PROJECT_NAME} ${Boost_LIBRARIES} ${NEEDED_FLEDGE_LIBS} z ssl crypto)\ntarget_link_libraries(${PROJECT_NAME} ${LIBCURL_LIB})\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/lib)\n"
  },
  {
    "path": "C/plugins/common/http_sender.cpp",
    "content": "/*\n * Fledge HTTP Sender wrapper.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto, Mark Riddoch\n */\n\n#include <http_sender.h>\n#include <unistd.h>\n#include <string_utils.h>\n#include <sstream>   // for std::stringstream\n#include <fstream> \n#include <logger.h>\n#include <utils.h>\n#include <sys/stat.h>\n#include <sys/types.h>\n\nusing namespace std;\n\n/**\n * Constructor\n */\nHttpSender::HttpSender()\n{\n}\n\n/**\n * Destructor\n */\nHttpSender::~HttpSender()\n{\n}\n\n/**\n * @brief Creates the '/logs/debug-trace' subdirectory in the Fledge data directory.\n * \n * This function ensures that both the 'logs' directory and the 'debug-trace' directory are created if they do not exist.\n */\nbool HttpSender::createDebugTraceDirectory() \n{\n    // Retrieve the 'logs' and 'debug-trace' directory paths\n    std::string logsDir = getDataDir() + \"/logs\";\n    std::string debugTraceDir = logsDir + \"/debug-trace\";\n\n    // Ensure path consistency with getDebugTracePath(). 
Assert is commented out to prevent unexpected runtime interruptions in execution\n    // std::assert(debugTraceDir == getDebugTracePath()); \n\n    auto createDir = [](const std::string& dirPath) -> bool \n    {\n        struct stat dirInfo;\n        if (stat(dirPath.c_str(), &dirInfo) == 0)\n        {\n            if (dirInfo.st_mode & S_IFDIR)\n            {\n                return true; // Directory exists\n            }\n            else\n            {\n                Logger::getLogger()->error(\"Path exists but is not a directory: %s\", dirPath.c_str());\n                return false;\n            }\n        }\n        else\n        {\n            // Directory does not exist, attempt to create it\n            if (mkdir(dirPath.c_str(), 0755) == 0)\n            {\n                return true; // Success\n            }\n            else\n            {\n                return false;\n            }\n        }\n    };\n\n    // Create the logs directory if it does not exist\n    if (!createDir(logsDir))\n    {\n        Logger::getLogger()->error(\"Failed to create directory: %s. 'debug-trace' directory will not be created.\", logsDir.c_str());\n        return false;\n    }\n\n    // Create the debug-trace directory if it does not exist\n    if (!createDir(debugTraceDir))\n    {\n        Logger::getLogger()->error(\"Failed to create 'debug-trace' directory: %s.\", debugTraceDir.c_str());\n        return false;\n    }\n\n    return true; \n}\n\n/**\n * @brief Constructs the file path for the OMF log.\n *\n * @return A string representing the path to the OMF log file.\n */\nstd::string HttpSender::getOMFTracePath() \n{\n    return getDebugTracePath() + \"/omf.log\";\n}\n"
  },
  {
    "path": "C/plugins/common/include/http_sender.h",
    "content": "#ifndef _HTTP_SENDER_H\n#define _HTTP_SENDER_H\n/*\n * Fledge HTTP Sender wrapper.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <string>\n#include <vector>\n\n#define HTTP_SENDER_USER_AGENT     \"Fledge http sender\"\n#define HTTP_SENDER_DEFAULT_METHOD \"GET\"\n#define HTTP_SENDER_DEFAULT_PATH   \"/\"\n\nclass HttpSender\n{\n\tpublic:\n\t\t/**\n\t\t * Constructor:\n\t\t */\n\t\tHttpSender();\n\n\t\t// Destructor\n\t\tvirtual ~HttpSender();\n\n\t\t/**\n\t\t * Set a proxy server\n\t\t */\n\t\tvirtual void setProxy(const std::string& proxy) = 0;\n\n\t\t/**\n\t\t * HTTP(S) request: pass method and path, HTTP headers and POST/PUT payload.\n\t\t */\n\t\tvirtual int sendRequest(\n\t\t\t\tconst std::string& method = std::string(HTTP_SENDER_DEFAULT_METHOD),\n\t\t\t\tconst std::string& path = std::string(HTTP_SENDER_DEFAULT_PATH),\n\t\t\t\tconst std::vector<std::pair<std::string, std::string>>& headers = {},\n\t\t\t\tconst std::string& payload = std::string()\n\t\t) = 0;\n\n\t\tvirtual std::string getHostPort() = 0;\n\t\tvirtual std::string getHTTPResponse() = 0;\n\t\tvirtual unsigned int getMaxRetries() = 0;\n\n\t\tvirtual void setAuthMethod          (std::string& authMethod) = 0;\n\t\tvirtual void setAuthBasicCredentials(std::string& authBasicCredentials) = 0;\n\t\tvirtual void setMaxRetries          (unsigned int retries) = 0;\n\n\t\t// OCS configurations\n\t\tvirtual void setOCSNamespace         (std::string& OCSNamespace) = 0;\n\t\tvirtual void setOCSTenantId          (std::string& OCSTenantId) = 0;\n\t\tvirtual\tvoid setOCSClientId          (std::string& OCSClientId) = 0;\n\t\tvirtual void setOCSClientSecret      (std::string& OCSClientSecret) = 0;\n\t\tvirtual void setOCSToken             (std::string& OCSToken) = 0;\n\n        /**\n         * @brief Constructs the file path for the OMF log.\n         *\n         * @return A string representing the path to 
the OMF log file.\n         */\n        static std::string getOMFTracePath();\n\n        /**\n         * @brief Creates the '/logs/debug-trace' subdirectory in the Fledge data directory.\n         * \n         */\n        static bool createDebugTraceDirectory();\n};\n\n/**\n * BadRequest exception\n */\nclass BadRequest : public std::exception {\n\tpublic:\n\t\t// Constructor with parameter\n\t\tBadRequest(const std::string& serverReply)\n\t\t{\n\t\t\tm_errmsg = serverReply;\n\t\t};\n\n\t\tvirtual const char *what() const throw()\n\t\t{\n\t\t\treturn m_errmsg.c_str();\n\t\t}\n\n\tprivate:\n\t\tstd::string     m_errmsg;\n};\n\n/**\n * Unauthorized  exception\n */\nclass Unauthorized  : public std::exception {\n\tpublic:\n\t\t// Constructor with parameter\n\t\tUnauthorized (const std::string& serverReply)\n\t\t{\n\t\t\tm_errmsg = serverReply;\n\t\t};\n\n\t\tvirtual const char *what() const throw()\n\t\t{\n\t\t\treturn m_errmsg.c_str();\n\t\t}\n\n\tprivate:\n\t\tstd::string     m_errmsg;\n};\n\n/**\n * Conflict  exception\n */\nclass Conflict  : public std::exception {\n\tpublic:\n\t\t// Constructor with parameter\n\t\tConflict (const std::string& serverReply)\n\t\t{\n\t\t\tm_errmsg = serverReply;\n\t\t};\n\n\t\tvirtual const char *what() const throw()\n\t\t{\n\t\t\treturn m_errmsg.c_str();\n\t\t}\n\n\tprivate:\n\t\tstd::string     m_errmsg;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/common/include/libcurl_https.h",
"content": "#ifndef _LIBCURL_HTTPS_H\n#define _LIBCURL_HTTPS_H\n/*\n * Fledge HTTP Sender wrapper.\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli\n */\n\n#include <string>\n#include <vector>\n#include <http_sender.h>\n#include <curl/curl.h>\n#include <fstream>\n\nusing namespace std;\n\nclass LibcurlHttps:  public HttpSender\n{\npublic:\n    /**\n     * Constructor:\n     * pass host:port, connect & request timeouts, retry_sleep_Time, max_retry\n     */\n    LibcurlHttps(const std::string& host_port,\n\t\tunsigned int connect_timeout = 0,\n\t\tunsigned int request_timeout = 0,\n\t\tunsigned int retry_sleep_Time = 1,\n\t\tunsigned int max_retry = 4);\n\n    // Destructor\n    ~LibcurlHttps();\n\n    /**\n     * Set the proxy host and port\n     */\n    void setProxy(const std::string& proxy);\n\n    /**\n     * HTTP(S) request: pass method and path, HTTP headers and POST/PUT payload.\n     */\n    int sendRequest(\n    \t\tconst std::string& method = std::string(HTTP_SENDER_DEFAULT_METHOD),\n\t\t    const std::string& path = std::string(HTTP_SENDER_DEFAULT_PATH),\n\t\t    const std::vector<std::pair<std::string, std::string>>& headers = {},\n\t\t    const std::string& payload = std::string()\n\t);\n\n    void setAuthMethod          (std::string& authMethod)           {m_authMethod = authMethod; }\n    void setAuthBasicCredentials(std::string& authBasicCredentials) {m_authBasicCredentials = authBasicCredentials; }\n\tvoid setMaxRetries          (unsigned int retries)              {m_max_retry = retries; };\n\n\t// OCS configurations\n\tvoid setOCSNamespace         (std::string& OCSNamespace)          {m_OCSNamespace    = OCSNamespace; }\n\tvoid setOCSTenantId          (std::string& OCSTenantId)           {m_OCSTenantId     = OCSTenantId; }\n\tvoid setOCSClientId          (std::string& OCSClientId)           {m_OCSClientId     = OCSClientId; }\n\tvoid setOCSClientSecret      
(std::string& OCSClientSecret)       {m_OCSClientSecret = OCSClientSecret; }\n\tvoid setOCSToken             (std::string& OCSToken)              {m_OCSToken        = OCSToken; }\n\n    std::string getHostPort()     { return m_host_port; };\n\tstd::string getHTTPResponse() { return m_HTTPResponse; };\n\tunsigned int getMaxRetries()  { return m_max_retry; };\n\nprivate:\n\t// Make private the copy constructor and operator=\n\tLibcurlHttps(const LibcurlHttps&);\n\tLibcurlHttps&     operator=(LibcurlHttps const &);\n\n    void setLibCurlOptions(CURL *sender, const std::string& path, const vector<pair<std::string, std::string>>& headers);\n    void setTrace();\n    void resetTrace();\n\nprivate:\n\tCURL               *m_sender;\n\tstd::string         m_HTTPResponse;\n\tstd::string         m_host_port;\n\tunsigned int        m_retry_sleep_time;       // Seconds between each retry\n\tunsigned int        m_max_retry;              // Max number of retries in the communication\n\tstd::string         m_authMethod;             // Authentication method to be used\n\tstd::string         m_authBasicCredentials;   // Credentials is the base64 encoding of id and password joined by a single colon (:)\n    \tstruct curl_slist  *m_chunk = NULL;\n\tunsigned int        m_request_timeout;\n\tunsigned int        m_connect_timeout;\n\n\t// OCS configurations\n\tstd::string\tm_OCSNamespace;\n\tstd::string\tm_OCSTenantId;\n\tstd::string\tm_OCSClientId;\n\tstd::string\tm_OCSClientSecret;\n\tstd::string\tm_OCSToken;\n\tstd::ofstream\tm_ofs;\n\tbool\t\tm_log;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/common/include/piwebapi.h",
    "content": "#ifndef _PIWEBAPI_H\n#define _PIWEBAPI_H\n/*\n * Fledge OSIsoft PI Web API integration.\n *\n * Copyright (c) 2020-2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli\n */\n\n#include <string>\n\nusing namespace std;\n\n#define TIMEOUT_CONNECT   10\n#define TIMEOUT_REQUEST   10\n#define RETRY_SLEEP_TIME  1\n#define MAX_RETRY         3\n\n#define URL_GET_VERSION \"/piwebapi/system\"\n\n/**\n * The PIWebAPI class.\n */\nclass PIWebAPI\n{\n\tpublic:\n\t\tPIWebAPI();\n\n\n\t\t// Destructor\n\t\t~PIWebAPI();\n\n\t\tvoid    setAuthMethod          (std::string& authMethod)           {m_authMethod = authMethod; }\n\t\tvoid    setAuthBasicCredentials(std::string& authBasicCredentials) {m_authBasicCredentials = authBasicCredentials; }\n\n\t\tint     GetVersion(const string& host, string &version, bool logMessage = true);\n\t\tstring  ExtractVersion(const string& response);\n\t\tstring  errorMessageHandler(const string& msg);\n\n\tprivate:\n\t\tstring  extractSection(const string& msg, const string& toSearch);\n\t\tstring  extractMessageFromJSon(const string& json);\n\n\t\tstring  m_authMethod;             // Authentication method to be used\n\t\tstring  m_authBasicCredentials;   // Credentials is the base64 encoding of id and password joined by a single colon (:)\n\n\t\t// Substitute a message with a different one\n\t\tconst vector<pair<string, string>> PIWEB_ERRORS = {\n\t\t\t//   original message       New one\n\t\t\t{\"Noroutetohost\",    \"The PI Web API server is not reachable, verify the network reachability\"},\n\t\t\t{\"No route to host\", \"The PI Web API server is not reachable, verify the network reachability\"},\n\t\t};\n\n\n};\n#endif\n"
  },
  {
    "path": "C/plugins/common/include/simple_http.h",
    "content": "#ifndef _SIMPLE_HTTP_H\n#define _SIMPLE_HTTP_H\n/*\n * Fledge HTTP Sender wrapper.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto, Mark Riddoch\n */\n\n#include <string>\n#include <vector>\n#include <http_sender.h>\n#include <client_http.hpp>\n#include <fstream>\n\nusing HttpClient = SimpleWeb::Client<SimpleWeb::HTTP>;\n\nclass SimpleHttp: public HttpSender\n{\n\tpublic:\n\t\t/**\n\t\t * Constructor: pass host:port & connect & request timeouts\n\t\t */\n\t\tSimpleHttp(const std::string& host_port,\n\t\t\t\tunsigned int connect_timeout = 0,\n\t\t\t\tunsigned int request_timeout = 0,\n\t\t\t\tunsigned int retry_sleep_Time = 1,\n\t\t\t\tunsigned int max_retry = 4);\n\n\n\t\t// Destructor\n\t\t~SimpleHttp();\n\n\t\t/**\n\t\t * Set the host and port of the proxy server\n\t\t */\n\t\tvoid setProxy(const std::string& proxy);\n\n\t\t/**\n\t\t * HTTP(S) request: pass method and path, HTTP headers and POST/PUT payload.\n\t\t */\n\t\tint sendRequest(\n\t\t\t\tconst std::string& method = std::string(HTTP_SENDER_DEFAULT_METHOD),\n\t\t\t\tconst std::string& path = std::string(HTTP_SENDER_DEFAULT_PATH),\n\t\t\t\tconst std::vector<std::pair<std::string, std::string>>& headers = {},\n\t\t\t\tconst std::string& payload = std::string()\n\t\t);\n\n\t\tvoid setAuthMethod          (std::string& authMethod)           {m_authMethod = authMethod; }\n\t\tvoid setAuthBasicCredentials(std::string& authBasicCredentials) {m_authBasicCredentials = authBasicCredentials; }\n\n\t\tstd::string getHostPort()     { return m_host_port; };\n\t\tstd::string getHTTPResponse() { return m_HTTPResponse; };\n\t\tunsigned int getMaxRetries()  { return m_max_retry; };\n\n\t\t// OCS configurations\n\t\tvoid setOCSNamespace         (std::string& OCSNamespace)          {m_OCSNamespace    = OCSNamespace; }\n\t\tvoid setOCSTenantId          (std::string& OCSTenantId)           {m_OCSTenantId     = OCSTenantId; }\n\t\tvoid 
setOCSClientId          (std::string& OCSClientId)           {m_OCSClientId     = OCSClientId; }\n\t\tvoid setOCSClientSecret      (std::string& OCSClientSecret)       {m_OCSClientSecret = OCSClientSecret; }\n\t\tvoid setOCSToken             (std::string& OCSToken)              {m_OCSToken        = OCSToken; }\n\t\tvoid setMaxRetries           (unsigned int retries)               {m_max_retry = retries; };\n\n\tprivate:\n\t\t// Make private the copy constructor and operator=\n\t\tSimpleHttp(const SimpleHttp&);\n\t\tSimpleHttp&\toperator=(SimpleHttp const &);\n        void setTrace();\n        void resetTrace();\n\n\tprivate:\n\t\tstd::string\t    m_host_port;\n\t\tHttpClient\t   *m_sender;\n\t\tstd::string\t    m_HTTPResponse;\n\t\tunsigned int\tm_retry_sleep_time;       // Seconds between each retry\n\t\tunsigned int\tm_max_retry;              // Max number of retries in the communication\n\n\t\tstd::string\tm_authMethod;             // Authentication method to be used\n\t\tstd::string\tm_authBasicCredentials;   // Credentials is the base64 encoding of id and password joined by a single colon (:)\n\n\t\t// OCS configurations\n\t\tstd::string\tm_OCSNamespace;\n\t\tstd::string\tm_OCSTenantId;\n\t\tstd::string\tm_OCSClientId;\n\t\tstd::string\tm_OCSClientSecret;\n\t\tstd::string\tm_OCSToken;\n\t\tbool\t\tm_log;\n\t\tstd::ofstream\tm_ofs;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/common/include/simple_https.h",
"content": "#ifndef _SIMPLE_HTTPS_H\n#define _SIMPLE_HTTPS_H\n/*\n * Fledge HTTP Sender wrapper.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto, Mark Riddoch, Stefano Simonelli\n */\n\n#include <string>\n#include <vector>\n#include <http_sender.h>\n#include <client_https.hpp>\n#include <fstream>\n\nusing HttpsClient = SimpleWeb::Client<SimpleWeb::HTTPS>;\n\nclass SimpleHttps: public HttpSender\n{\n\tpublic:\n\t\t/**\n\t\t * Constructor:\n\t\t * pass host:port, connect & request timeouts\n\t\t */\n\t\tSimpleHttps(const std::string& host_port,\n\t\t\t    unsigned int connect_timeout = 0,\n\t\t\t    unsigned int request_timeout = 0,\n\t\t\t    unsigned int retry_sleep_Time = 1,\n\t\t\t    unsigned int max_retry = 4);\n\n\t\t// Destructor\n\t\t~SimpleHttps();\n\n\t\t/**\n\t\t * Set the host and port of the proxy server\n\t\t */\n\t\tvoid setProxy(const std::string& proxy);\n\n\t\t/**\n\t\t * HTTP(S) request: pass method and path, HTTP headers and POST/PUT payload.\n\t\t */\n\t\tint sendRequest(\n\t\t\t\tconst std::string& method = std::string(HTTP_SENDER_DEFAULT_METHOD),\n\t\t\t\tconst std::string& path = std::string(HTTP_SENDER_DEFAULT_PATH),\n\t\t\t\tconst std::vector<std::pair<std::string, std::string>>& headers = {},\n\t\t\t\tconst std::string& payload = std::string()\n\t\t);\n\n\t\tvoid setAuthMethod          (std::string& authMethod)           {m_authMethod = authMethod; }\n\t\tvoid setAuthBasicCredentials(std::string& authBasicCredentials) {m_authBasicCredentials = authBasicCredentials; }\n\t\tvoid setMaxRetries          (unsigned int retries)              {m_max_retry = retries; };\n\n\t\t// OCS configurations\n\t\tvoid setOCSNamespace         (std::string& OCSNamespace)          {m_OCSNamespace    = OCSNamespace; }\n\t\tvoid setOCSTenantId          (std::string& OCSTenantId)           {m_OCSTenantId     = OCSTenantId; }\n\t\tvoid setOCSClientId          (std::string& 
OCSClientId)           {m_OCSClientId     = OCSClientId; }\n\t\tvoid setOCSClientSecret      (std::string& OCSClientSecret)       {m_OCSClientSecret = OCSClientSecret; }\n\t\tvoid setOCSToken             (std::string& OCSToken)              {m_OCSToken        = OCSToken; }\n\n\t\tstd::string getHTTPResponse() { return m_HTTPResponse; };\n\t\tstd::string getHostPort()     { return m_host_port; };\n\t\tunsigned int getMaxRetries()  { return m_max_retry; };\n\n\tprivate:\n\t\t// Make private the copy constructor and operator=\n\t\tSimpleHttps(const SimpleHttps&);\n\t\tSimpleHttps&     operator=(SimpleHttps const &);\n        void setTrace();\n        void resetTrace();\n\tprivate:\n\t\tstd::string\t    m_host_port;\n\t\tHttpsClient\t   *m_sender;\n\t\tstd::string\t    m_HTTPResponse;\n\t\tunsigned int\tm_retry_sleep_time;       // Seconds between each retry\n\t\tunsigned int\tm_max_retry;              // Max number of retries in the communication\n\n\t\tstd::string\tm_authMethod;             // Authentication method to be used\n\t\tstd::string\tm_authBasicCredentials;   // Credentials is the base64 encoding of id and password joined by a single colon (:)\n\n\t\t// OCS configurations\n\t\tstd::string\tm_OCSNamespace;\n\t\tstd::string\tm_OCSTenantId;\n\t\tstd::string\tm_OCSClientId;\n\t\tstd::string\tm_OCSClientSecret;\n\t\tstd::string\tm_OCSToken;\n\t\tbool\t\tm_log;\n\t\tstd::ofstream\tm_ofs;\n\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/common/libcurl_https.cpp",
    "content": "/*\n * Fledge HTTPS Sender implementation using the libcurl library\n *  - https://curl.haxx.se/libcurl/\n *  - https://github.com/curl/curl\n *\n * Fledge uses the libcurl library to support the Kerberos authentication\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli\n */\n\n#include <thread>\n#include <logger.h>\n#include <vector>\n#include <sstream>\n#include <string.h>\n#include <stdlib.h>\n#include <curl/curl.h>\n#include <unistd.h>\n\n#include \"libcurl_https.h\"\n#include \"string_utils.h\"\n\n#define VERBOSE_LOG      0\n\n#define HTTP_HEADER_LINE 255\n\nusing namespace std;\n\n/**\n * Creates a UTC time string for the current time\n *\n * @return\t\t      Current UTC time\n */\nstatic std::string CurrentTimeString()\n{\n\ttime_t now = time(NULL);\n\tstruct tm timeinfo;\n\tgmtime_r(&now, &timeinfo);\n\tchar timeString[20];\n\tstrftime(timeString, sizeof(timeString), \"%F %T\", &timeinfo);\n\treturn std::string(timeString);\n}\n\n/**\n * Constructor: host:port, connect_timeout, request_timeout,\n *              retry_sleep_Time, max_retry\n *\n * Logs the messages into omf.log if the file is present\n */\nLibcurlHttps::LibcurlHttps(const string& host_port,\n\t\t\t unsigned int connect_timeout,\n\t\t\t unsigned int request_timeout,\n\t\t\t unsigned int retry_sleep_Time,\n\t\t\t unsigned int max_retry) :\n\t\t\tHttpSender(),\n\t\t\tm_connect_timeout(connect_timeout),\n\t\t\tm_request_timeout(request_timeout),\n\t\t\tm_host_port(host_port),\n\t\t\tm_retry_sleep_time(retry_sleep_Time),\n\t\t\tm_max_retry (max_retry)\n{\n\n\tif (curl_global_init(CURL_GLOBAL_DEFAULT) != 0)\n\t{\n\t\tLogger::getLogger()->error(\"libcurl_https - curl_global_init failed, the libcurl library cannot be initialized.\");\n\t}\n\tsetTrace();\n}\n\n/**\n * Destructor\n */\nLibcurlHttps::~LibcurlHttps()\n{\n\tresetTrace();\n\tcurl_global_cleanup();\n}\n\n/**\n * @brief Configures logging for the HTTP 
sender.\n */\nvoid LibcurlHttps::setTrace() \n{\n    // Check if the log file is writable\n    std::string tracePath = HttpSender::getOMFTracePath();\n    if (access(tracePath.c_str(), W_OK) == 0)\n\t{\n\t\tm_log = true;\n\t\tm_ofs.open(tracePath.c_str(), ofstream::app);\n\t}\n\telse\n\t{\n\t\tm_log = false;\n\t}\n}\n\n/**\n * @brief Resets the logging state for the HTTP sender.\n */\nvoid LibcurlHttps::resetTrace() \n{\n    if (m_log) \n    {\n        m_ofs.close(); // Close the log file if it is open\n    }\n}\n\n/**\n * Add a proxy server\n *\n * @param proxy\tThe host and port of the proxy\n */\nvoid LibcurlHttps::setProxy(const string& proxy)\n{\n\tcurl_easy_setopt(m_sender, CURLOPT_PROXY, proxy.c_str());\n}\n\n/**\n * Avoid libcurl debug messages\n */\nsize_t cb_write_data(void *buffer, size_t size, size_t nmemb, void *userp)\n{\n\treturn size * nmemb;\n}\n\n/**\n * Handle the call back header to retrieve the text message in response to an HTTP request\n * this call back is called for all the headers lines\n *\n * received header is nitems * size long in 'buffer' NOT ZERO TERMINATED\n * userdata' is set with CURLOPT_HEADERDATA\n *\n * @param buffer        Header message\n * @param size          (nitems * size) is the size of 'buffer'\n * @param nitems\n * @param userdata out  Buffer to store the data needed\n *\n */\nstatic size_t cb_header(char *buffer, size_t size, size_t nitems, void *userdata)\n{\n\n\tchar *header = (char *)  userdata;\n\tint  numBytes = 0;\n\tbool getHeader = false;\n\n\t// Only the first line of the headers is needed\n\tif (*header == '\\0')\n\t{\n\t\tgetHeader = true;\n\t}\n\telse\n\t{\n\t\t// in some situations as using Kerberos\n\t\t// the last header starting with HTTP contains the real error\n\t\tchar tmpBuffer[10];\n\t\tsprintf(tmpBuffer, \"%.*s\", 4, buffer);\n\n\t\tstring tmpStr = tmpBuffer;\n\t\tfor (auto & c: tmpStr) c = toupper(c);\n\n\t\tif (tmpStr.compare(\"HTTP\") == 0) {\n\n\t\t\tgetHeader = true;\n\t\t}\n\n\t}\n\n\tif 
(getHeader)\n\t{\n\t\tif ((size * nitems) < (HTTP_HEADER_LINE - 1))\n\t\t\tnumBytes = size * nitems;\n\t\telse\n\t\t\tnumBytes = HTTP_HEADER_LINE - 1;\n\n\t\tsprintf(header, \"%.*s\", numBytes, buffer);\n\t}\n\treturn nitems * size;\n}\n\n/**\n * Sets up the libcurl general options used in all the HTTP methods\n *\n * @param sender    libcurl handle on which the options should be configured\n * @param path      The URL path\n * @param headers   The optional headers to send\n *\n */\nvoid LibcurlHttps::setLibCurlOptions(CURL *sender, const string& path, const vector<pair<string, string>>& headers)\n{\n\tstring httpHeader;\n\n#if VERBOSE_LOG\n\tcurl_easy_setopt(m_sender, CURLOPT_VERBOSE, 1L);\n#else\n\tcurl_easy_setopt(m_sender, CURLOPT_VERBOSE, 0L);\n\t// this workaround is needed to avoid all libcurl debug messages\n\tcurl_easy_setopt(m_sender, CURLOPT_WRITEFUNCTION, cb_write_data);\n#endif\n\tcurl_easy_setopt(m_sender, CURLOPT_NOPROGRESS, 1L);\n\tcurl_easy_setopt(m_sender, CURLOPT_TCP_KEEPALIVE, 1L);\n\n\tcurl_easy_setopt(m_sender, CURLOPT_TIMEOUT,        m_request_timeout);\n\tcurl_easy_setopt(m_sender, CURLOPT_CONNECTTIMEOUT, m_connect_timeout);\n\n\t// HTTP headers handling\n\tm_chunk = curl_slist_append(m_chunk, \"User-Agent: \" HTTP_SENDER_USER_AGENT);\n\n\t// Allow PI Web API to keep Cross-Site Request Forgery (CSRF) protection enabled, as in its default configuration\n\tm_chunk = curl_slist_append(m_chunk, \"X-Requested-With: XMLHttpRequest\");\n\n\tfor (auto it = headers.begin(); it != headers.end(); ++it)\n\t{\n\t\thttpHeader = (*it).first + \": \" + (*it).second;\n\t\tm_chunk = curl_slist_append(m_chunk, httpHeader.c_str());\n\t}\n\n\t// Handle basic authentication\n\tif (m_authMethod == \"b\")\n\t{\n\t\thttpHeader = \"Authorization: Basic \" + m_authBasicCredentials;\n\t\tm_chunk = curl_slist_append(m_chunk, httpHeader.c_str());\n\n\t\t/* set user name and password for the authentication */\n\t\t//curl_easy_setopt(m_sender, CURLOPT_USERPWD, 
\"user:pwd\");\n\t}\n\tcurl_easy_setopt(m_sender, CURLOPT_HTTPHEADER, m_chunk);\n\n\t// Handle Kerberos authentication\n\tif (m_authMethod == \"k\")\n\t{\n\t\tLogger::getLogger()->debug(\"Kerberos authentication - keytab file :%s: \", getenv(\"KRB5_CLIENT_KTNAME\"));\n\n\t\tcurl_easy_setopt(m_sender, CURLOPT_HTTPAUTH, CURLAUTH_GSSNEGOTIATE);\n\t\t// The empty user should be defined for Kerberos authentication\n\t\tcurl_easy_setopt(m_sender, CURLOPT_USERPWD, \":\");\n\t}\n\n\t// Configure libcurl\n\tstring url = \"https://\" + m_host_port + path;\n\n\tcurl_easy_setopt(m_sender, CURLOPT_URL, url.c_str());\n\n\t// Setup SSL\n\tcurl_easy_setopt(m_sender, CURLOPT_USE_SSL, CURLUSESSL_ALL);\n\tcurl_easy_setopt(m_sender, CURLOPT_SSL_VERIFYPEER, 0L);\n\tcurl_easy_setopt(m_sender, CURLOPT_SSL_VERIFYHOST, 0L);\n\tcurl_easy_setopt(m_sender, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_2TLS);\n}\n\n/**\n * Send data, it retries the operation m_max_retry times\n * waiting m_retry_sleep_time*2 at each attempt\n *\n * @param method    The HTTP method (GET, POST, ...)\n * @param path      The URL path\n * @param headers   The optional headers to send\n * @param payload   The optional data payload (for POST, PUT)\n * @return          The HTTP code for the cases : 1xx Informational /\n *                                                2xx Success /\n *                                                3xx Redirection\n * @throw\t    BadRequest for HTTP 400 error\n *\t\t    std::exception as generic exception for all the\n *\t\t    cases >= 401 Client errors / 5xx Server errors\n */\nint LibcurlHttps::sendRequest(\n\t\tconst string& method,\n\t\tconst string& path,\n\t\tconst vector<pair<string, string>>& headers,\n\t\tconst string& payload\n)\n{\n\t// Variables definition\n\tlong   httpCode = 0;\n\tstring httpResponseText;\n\tchar   httpHeaderBuffer[HTTP_HEADER_LINE];\n\n\tbool retry = false;\n\tint  retryCount = 1;\n\tint  sleepTime = m_retry_sleep_time;\n\n\tCURLcode res = 
CURLE_OK;\n\n\tenum exceptionType\n\t{\n\t    none, typeBadRequest, typeException\n\t};\n\n\texceptionType exceptionRaised = none;\n\tstring exceptionMessage;\n\tstring errorMessage;\n\n\t// Init libcurl\n\tm_sender = curl_easy_init();\n\tif(m_sender)\n\t{\n\t\tsetLibCurlOptions(m_sender, path, headers);\n\t}\n\telse\n\t{\n\t\tstring errorMessage = \"libcurl_https - curl_easy_init failed, the libcurl library cannot be initialized.\";\n\n\t\tLogger::getLogger()->error(errorMessage);\n\t\tthrow runtime_error(errorMessage.c_str());\n\t}\n\n\t// Select the requested HTTP method\n\tif (method.compare(\"POST\") == 0)\n\t{\n\t\tcurl_easy_setopt(m_sender, CURLOPT_POST, 1L);\n\n\t\tcurl_easy_setopt(m_sender, CURLOPT_POSTFIELDS,           payload.c_str());\n\t\tcurl_easy_setopt(m_sender, CURLOPT_POSTFIELDSIZE, (long) payload.length());\n\t}\n\telse if (method.compare(\"GET\") == 0)\n\t{\n\t\t// TODO : to be implemented\n\t\terrorMessage = \"libcurl_https - method GET is not currently implemented\";\n\t\tLogger::getLogger()->debug(errorMessage);\n\t\tthrow runtime_error(errorMessage);\n\t}\n\telse if (method.compare(\"PUT\") == 0)\n\t{\n\t\t// TODO : to be implemented\n\t\terrorMessage = \"libcurl_https - method PUT currently not implemented\";\n\t\tLogger::getLogger()->debug(\"libcurl_https - method PUT is not currently implemented\");\n\t\tthrow runtime_error(errorMessage);\n\n\t}\n\telse if (method.compare(\"DELETE\") == 0)\n\t{\n\t\t// TODO : to be implemented\n\t\terrorMessage = \"libcurl_https - method DELETE currently not implemented\";\n\t\tLogger::getLogger()->debug(\"libcurl_https - method DELETE is not currently implemented \");\n\t\tthrow runtime_error(errorMessage);\n\t}\n\n\tdo\n\t{\n\t\tstd::chrono::high_resolution_clock::time_point tStart;\n\t\ttry\n\t\t{\n\t\t\texceptionRaised = none;\n\t\t\thttpCode = 0;\n\t\t\thttpResponseText = \"\";\n\t\t\thttpHeaderBuffer[0] = '\\0';\n\n\t\t\t// It is needed to handle the call back header to retrieve the text 
message\n\t\t\t// in response to an HTTP request\n\t\t\t// curl.haxx.se/mail/lib-2013-10/0114.html\n\t\t\tcurl_easy_setopt(m_sender, CURLOPT_HEADERDATA,     httpHeaderBuffer);\n\t\t\tcurl_easy_setopt(m_sender, CURLOPT_HEADERFUNCTION, cb_header);\n\t\t\tif (m_log)\n\t\t\t{\n\t\t\t\tm_ofs << endl << method << \" \" << path << endl;\n\t\t\t\tm_ofs << \"Headers\" << endl;\n\t\t\t\tfor (auto it = headers.begin(); it != headers.end(); it++)\n\t\t\t\t{\n\t\t\t\t\tm_ofs << \"    \" << it->first << \": \" << it->second << endl;\n\t\t\t\t}\n\t\t\t\tm_ofs << \"Payload:\" << endl;\n\t\t\t\tm_ofs << payload << endl;\n\t\t\t\ttStart = std::chrono::high_resolution_clock::now();\n\t\t\t}\n\n\t\t\t// Execute the HTTP method\n\t\t\tres = curl_easy_perform(m_sender);\n\n\t\t\tcurl_easy_getinfo(m_sender, CURLINFO_RESPONSE_CODE, &httpCode);\n\n\t\t\t// fix the text message\n\t\t\t// NOTE : the text should be considered only if the HTTP code is not an ACK\n\t\t\thttpResponseText = httpHeaderBuffer;\n\t\t\tif (m_log)\n\t\t\t{\n\t\t\t\tstd::chrono::high_resolution_clock::time_point tEnd = std::chrono::high_resolution_clock::now();\n\t\t\t\tm_ofs << \"Response:\" << endl;\n\t\t\t\tm_ofs << \"   Code: \" << httpCode << endl;\n\t\t\t\tm_ofs << \"   Time: \" << ((double)std::chrono::duration_cast<std::chrono::microseconds>(tEnd - tStart).count()) / 1.0E6 << \" sec     \" << CurrentTimeString() << endl;\n\t\t\t\tm_ofs << \"   Content: \" << httpResponseText << endl << endl;\n\t\t\t}\n\t\t\tStringStripCRLF(httpResponseText);\n\n\t\t\tm_HTTPResponse = httpResponseText;\n\t\t}\n\t\tcatch (exception &ex)\n\t\t{\n\t\t\texceptionRaised = typeException;\n\t\t\terrorMessage = \"Failed to send data: \";\n\t\t\terrorMessage.append(ex.what());\n\t\t}\n\n\n\t\tif ( (res == CURLE_OK )                          &&\n\t\t     (exceptionRaised == none )                  &&\n\t\t     ((httpCode >= 200) && (httpCode <= 399))\n\t\t     )\n\t\t{\n\t\t\tretry = false;\n#if 
VERBOSE_LOG\n\t\t\tLogger::getLogger()->info(\"HTTPS sendRequest succeeded : retry count |%d| HTTP code |%d|\",\n\t\t\t\t\t\t  retryCount,\n\t\t\t\t\t\t  httpCode);\n#endif\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (res != CURLE_OK)\n\t\t\t{\n\t\t\t\terrorMessage = string(curl_easy_strerror(res) );\n\t\t\t\tif (httpResponseText.compare(\"\") != 0 )\n\t\t\t\t\terrorMessage += \" - \" + httpResponseText;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// the situation is : CURLE_OK but the httpCode reports an error\n\t\t\t\terrorMessage = httpResponseText;\n\t\t\t}\n\n#if VERBOSE_LOG\n\t\t\tif (exceptionRaised)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\n\t\t\t\t\t\"HTTPS sendRequest : retry count |%d| error |%s| message |%s|\",\n\t\t\t\t\tretryCount,\n\t\t\t\t\terrorMessage.c_str(),\n\t\t\t\t\tpayload.c_str());\n\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\n\t\t\t\t\t\"HTTPS sendRequest : retry count |%d| HTTP code |%d| error message |%s| HTTP message |%s|\",\n\t\t\t\t\tretryCount,\n\t\t\t\t\thttpCode,\n\t\t\t\t\terrorMessage.c_str(),\n\t\t\t\t\tpayload.c_str());\n\t\t\t}\n#endif\n\n\t\t\tif (m_log && !errorMessage.empty())\n\t\t\t{\n\t\t\t\tstd::chrono::high_resolution_clock::time_point tEnd = std::chrono::high_resolution_clock::now();\n\t\t\t\tm_ofs << \"   Time: \" << ((double)std::chrono::duration_cast<std::chrono::microseconds>(tEnd - tStart).count()) / 1.0E6 << \" sec     \" << CurrentTimeString() << endl;\n\t\t\t\tm_ofs << \"   Exception: \" << errorMessage << endl;\n\t\t\t}\n\n\t\t\tif (retryCount < m_max_retry)\n\t\t\t{\n\t\t\t\tthis_thread::sleep_for(chrono::seconds(sleepTime));\n\n\t\t\t\tretry = true;\n\t\t\t\tsleepTime *= 2;\n\t\t\t\tretryCount++;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tretry = false;\n\t\t\t}\n\n\t\t}\n\t} while (retry);\n\n\t// Cleanup\n\tcurl_easy_cleanup(m_sender);\n\tcurl_slist_free_all(m_chunk);\n\tm_sender = NULL;\n\tm_chunk = NULL;\n\n\t// Check if an error should be raised\n\tif (exceptionRaised == none)\n\t{\n\t\t// 0 
= HTTP failed without an HTTP code\n\t\tif (httpCode == 0)\n\t\t{\n\t\t\tthrow runtime_error(errorMessage);\n\t\t}\n\t\telse if (httpCode == 400)\n\t\t{\n\t\t\tthrow BadRequest(errorMessage);\n\t\t}\n\t\telse if (httpCode == 401)\n\t\t{\n\t\t\tthrow Unauthorized(errorMessage);\n\t\t}\n\t\telse if (httpCode == 409)\n\t\t{\n\t\t\tthrow Conflict(errorMessage);\n\t\t}\n\t\telse if (httpCode >= 401)\n\t\t{\n\t\t\tstring errorMessageHTTP;\n\t\t\terrorMessageHTTP = \"HTTP code |\" + to_string(httpCode) +  \"| - HTTP error |\" + errorMessage + \"|\";\n\n\t\t\tthrow runtime_error(errorMessageHTTP);\n\t\t}\n\t}\n\telse\n\t{\n\t\tthrow runtime_error(errorMessage);\n\t}\n\n\treturn httpCode;\n}\n"
  },
  {
    "path": "C/plugins/common/piwebapi.cpp",
"content": "/*\n * Fledge OSIsoft PI Web API integration.\n * Implements the integration for the specific functionalities exposed by PI Web API\n *\n * Copyright (c) 2020-2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli\n */\n\n#include <string>\n#include <vector>\n#include <utility>\n\n#include <piwebapi.h>\n#include <string_utils.h>\n#include <logger.h>\n#include <simple_https.h>\n\n#include <rapidjson/document.h>\n#include \"rapidjson/error/en.h\"\n\n#include <stdlib.h>\n#include <string.h>\n#include <status_code.hpp>\n\nusing namespace std;\nusing namespace rapidjson;\n\nPIWebAPI::PIWebAPI()\n{\n}\n\n// Destructor\nPIWebAPI::~PIWebAPI()\n{\n}\n\n/**\n * Extracts the PI Web API version from the JSON returned by the PI Web API\n *\n * @param response  JSON message generated by the PI Web API containing the version\n * @return          The version of the PI Web API server\n *\n */\nstd::string PIWebAPI::ExtractVersion(const string& response)\n{\n\tDocument JSon;\n\tstring version;\n\tstring responseFixed;\n\tParseResult ok;\n\n\t// TODO: at the current stage a non-JSON response is returned, so we fix the format\n\tok = JSon.Parse(response.c_str());\n\tif (!ok)\n\t{\n\t\tresponseFixed = \"{\\\"\" + response;\n\t\tStringStripCRLF(responseFixed);\n\t}\n\telse\n\t{\n\t\tresponseFixed = response;\n\t}\n\n\tok = JSon.Parse(responseFixed.c_str());\n\tif (!ok)\n\t{\n\t\tLogger::getLogger()->error(\"PIWebAPI version extract, invalid json - HTTP response :%s:\", response.c_str());\n\t}\n\telse\n\t{\n\t\tif (JSon.HasMember(\"ProductTitle\"))\n\t\t{\n\t\t\tversion = JSon[\"ProductTitle\"].GetString();\n\t\t}\n\t\tif (JSon.HasMember(\"ProductVersion\"))\n\t\t{\n\t\t\tversion = version + \"-\" + JSon[\"ProductVersion\"].GetString();\n\t\t}\n\n\t}\n\n\treturn(version);\n}\n\n\n/**\n * Calls the PI Web API to retrieve the version\n *\n * @param host       The server running PI Web API in the 
format: hostName + \":\" + port\n * @param version    The returned version string of the PI Web API server\n * @param logMessage If true, log an error message if there is a failure (default: true)\n * @return           HTTP response status code\n *\n */\nint PIWebAPI::GetVersion(const string& host, string &version, bool logMessage)\n{\n\tstring response;\n\tstring payload;\n\tstring errorMsg;\n\n\tHttpSender *endPoint;\n\tvector<pair<string, string>> header;\n\tint httpCode;\n\n\tendPoint = new SimpleHttps(host,\n\t\t\t\t\t\t\t   TIMEOUT_CONNECT,\n\t\t\t\t\t\t\t   TIMEOUT_REQUEST,\n\t\t\t\t\t\t\t   RETRY_SLEEP_TIME,\n\t\t\t\t\t\t\t   MAX_RETRY);\n\n\t// HTTP header\n\theader.push_back( std::make_pair(\"Content-Type\", \"application/json\"));\n\theader.push_back( std::make_pair(\"Accept\", \"application/json\"));\n\n\t// HTTP payload\n\tpayload =  \"\";\n\n\t// Set requested authentication\n\tendPoint->setAuthMethod          (m_authMethod);\n\tendPoint->setAuthBasicCredentials(m_authBasicCredentials);\n\n\ttry\n\t{\n\t\thttpCode = endPoint->sendRequest(\"GET\",\n\t\t\t\t\t\t\t\t\t\t URL_GET_VERSION,\n\t\t\t\t\t\t\t\t\t\t header,\n\t\t\t\t\t\t\t\t\t\t payload);\n\n\t\tresponse = endPoint->getHTTPResponse();\n\n\t\tif (httpCode >= 200 && httpCode <= 399)\n\t\t{\n\t\t\tversion = ExtractVersion(response);\n\t\t}\n\t\telse if (logMessage)\n\t\t{\n\t\t\terrorMsg = errorMessageHandler(response);\n\t\t\tLogger::getLogger()->error(\"Error in retrieving the PI Web API version from %s: [%d] %s \", host.c_str(), httpCode, errorMsg.c_str());\n\t\t}\n\t}\n\tcatch (const BadRequest& ex)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\terrorMsg = errorMessageHandler(ex.what());\n\t\t\tLogger::getLogger()->error(\"BadRequest error retrieving the PI Web API version from %s: %s\", host.c_str(), errorMsg.c_str());\n\t\t}\n\t\thttpCode = (int) SimpleWeb::StatusCode::client_error_bad_request;\n\t}\n\tcatch (const Unauthorized& ex)\n\t{\n\t\tif 
(logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"The PI Web API server at %s has rejected our request due to an authentication issue. Please check the authentication method and credentials are correctly configured.\", host.c_str());\n\t\t}\n\t\thttpCode = (int) SimpleWeb::StatusCode::client_error_unauthorized;\n\t}\n\tcatch (exception &ex)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\terrorMsg = errorMessageHandler(ex.what());\n\t\t\tLogger::getLogger()->error(\"Error in retrieving the PI Web API version from %s: %s\", host.c_str(), errorMsg.c_str());\n\t\t}\n\t\thttpCode = (int) SimpleWeb::StatusCode::server_error_service_unavailable;\n\t}\n\n\tdelete endPoint;\n\n\treturn httpCode;\n}\n\n/**\n * Extracts a section from a string, between the string and the '|' character\n *\n * @param msg       Msg from which the string must be extracted\n * @param toSearch  The string to be searched as the starting position for the extraction\n * @return          The extracted string\n *\n */\nstring PIWebAPI::extractSection(const string& msg, const string& toSearch) {\n\n\tstring::size_type pos, pos1, pos2;\n\tstring section;\n\n\tpos = msg.find (toSearch);\n\tif (pos != string::npos )\n\t{\n\t\tpos1 = msg.find (\"|\",pos);\n\t\tpos2 = msg.find (\"|\",pos1+1);\n\t\tif (pos2 != string::npos ) {\n\n\t\t\tsection = msg.substr(pos1 +1, pos2 - pos1 -1);\n\t\t}\n\t}\n\treturn (section);\n}\n\n/**\n * Handles PIWebAPI json error message extracting significant parts to produce a meaningful and concise message\n *\n * OSIsoft documentation about the error structure generated by PIWebAPI:\n * https://docs.osisoft.com/bundle/pi-web-api-reference/page/help/topics/error-handling.html\n *\n * @param json JSON message generated by PIWebAPI containing the error\n * @return     The concise and meaningful error message*\n *\n */\nstring PIWebAPI::extractMessageFromJSon(const string& json)\n{\n\tDocument JSon;\n\tParseResult ok;\n\tstring::size_type pos;\n\n\tstring  msgFinal, msgFixed;\n\tstring 
msgMessage, msgReason,msgName, msgValue;\n\n\tmsgFixed = extractSection(json, \"HTTP error |\");\n\tif (msgFixed.empty())\n\t\tmsgFixed = json;\n\n\tok = JSon.Parse(msgFixed.c_str());\n\tif (!ok)\n\t{\n\t\t// removes bad characters if present in the error message\n\t\tchar badChars[4];\n\t\tbadChars[0]='\\357';\n\t\tbadChars[1]='\\273';\n\t\tbadChars[2]='\\277';\n\t\tbadChars[3]=0;\n\n\t\tpos = msgFixed.find (badChars);\n\t\tif (pos != string::npos )\n\t\t{\n\t\t\tmsgFixed.erase ( pos, strlen(badChars) );\n\t\t}\n\t}\n\n\tok = JSon.Parse(msgFixed.c_str());\n\tif (ok)\n\t{\n\t\tif (JSon.HasMember(\"Messages\"))\n\t\t{\n\t\t\tValue &messages = JSon[\"Messages\"];\n\t\t\tif (messages.IsArray())\n\t\t\t{\n\t\t\t\tlong messageId;\n\t\t\t\tfor (Value::ConstValueIterator itr = messages.Begin(); itr != messages.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif ((*itr)[\"MessageIndex\"].IsInt64())\n\t\t\t\t\t\tmessageId = (*itr)[\"MessageIndex\"].GetInt64();\n\n\t\t\t\t\tif ((*itr).HasMember(\"Events\"))\n\t\t\t\t\t{\n\t\t\t\t\t\tconst Value &messageEvents = (*itr)[\"Events\"];\n\t\t\t\t\t\tif (messageEvents.IsArray())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tValue::ConstValueIterator itrEvents = messageEvents.Begin();\n\n\t\t\t\t\t\t\tconst Value &messageInfo = (*itrEvents)[\"EventInfo\"];\n\t\t\t\t\t\t\tmsgMessage = messageInfo[\"Message\"].GetString();\n\n\t\t\t\t\t\t\tif (messageInfo.HasMember(\"Reason\") && messageInfo[\"Reason\"].IsString())\n\t\t\t\t\t\t\t\tmsgReason = messageInfo[\"Reason\"].GetString();\n\n\t\t\t\t\t\t\tconst Value &parameters = messageInfo[\"Parameters\"];\n\t\t\t\t\t\t\tif (parameters.IsArray())\n\t\t\t\t\t\t\t{\n\n\t\t\t\t\t\t\t\tfor (Value::ConstValueIterator itrParameters = parameters.Begin(); itrParameters != parameters.End(); ++itrParameters)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tif (! 
msgValue.empty())\n\t\t\t\t\t\t\t\t\t\tmsgValue += \" \";\n\n\t\t\t\t\t\t\t\t\tmsgName = (*itrParameters)[\"Name\"].GetString();\n\t\t\t\t\t\t\t\t\tmsgValue += (*itrParameters)[\"Value\"].GetString();\n\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\tmsgFinal = msgMessage;\n\t\t\t\t\t\t\t\tif (!msgReason.empty())\n\t\t\t\t\t\t\t\t\tmsgFinal += \" \" + msgReason;\n\n\t\t\t\t\t\t\t\tif (!msgValue.empty())\n\t\t\t\t\t\t\t\t\tmsgFinal += \" \" +  msgValue;\n\t\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tLogger::getLogger()->warn(\"PI Web API error handling expects to receive Parameters as a JSON array\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tLogger::getLogger()->warn(\"PI Web API error handling expects to receive Events as a JSON array\");\n\t\t\t\t\t}\n\n\t\t\t\t}\n\n\t\t\t} else {\n\t\t\t\tLogger::getLogger()->warn(\"PI Web API error handling expects to receive Messages as a JSON array\");\n\t\t\t}\n\t\t}\n\t}\n\n\treturn (msgFinal);\n}\n\n/**\n * Handles a PI Web API error message considering the possible cases:\n *\n * - removes all the LF and CR characters and the extra spaces\n * - substitutes a message with a different one using a hardcoded vector\n * - if an HTTP code is present, adds the corresponding text using the Simple-Web-Server functionalities\n * - handles the PI Web API JSON error message, extracting the significant parts to produce a meaningful and concise message\n *\n * @param msg  Error message generated by PI Web API\n * @return     The concise and meaningful error message\n *\n */\nstring PIWebAPI::errorMessageHandler(const string& msg)\n{\n\tDocument JSon;\n\tParseResult ok;\n\n\tstring msgTrimmed, msgSub, msgHttp, msgJson, finalMsg, msgFixed, messages, tmpMsg;\n\tstring httpCode;\n\tint  httpCodeN;\n\n\n\tmsgTrimmed = StringStripWhiteSpacesExtra(msg);\n\n\t// Handles error message substitution\n\tfor(auto &errorMsg : PIWEB_ERRORS) {\n\n\t\tif (msgTrimmed.find(errorMsg.first) != 
std::string::npos)\n\t\t{\n\t\t\tmsgSub = errorMsg.second;\n\t\t}\n\t}\n\n\t// Handles HTTP error code recognition\n\thttpCode = extractSection(msgTrimmed, \"HTTP code |\");\n\tif (! httpCode.empty()) {\n\n\t\t\tSimpleWeb::StatusCode code;\n\n\t\t\thttpCodeN = atoi(httpCode.c_str());\n\t\t\tcode = (SimpleWeb::StatusCode) httpCodeN;\n\n\t\t\tmsgHttp = SimpleWeb::status_code(code);\n\n\t}\n\n\t// Handles error in JSON format returned by the PI Web API\n\tmsgJson = extractMessageFromJSon (msgTrimmed);\n\n\t// Define the final message\n\tfinalMsg = msg;\n\tif (!msgSub.empty())\n\t\tfinalMsg = msgSub;\n\n\tif (!msgHttp.empty())\n\t\tfinalMsg = msgHttp;\n\n\tif (!msgJson.empty())\n\t\tfinalMsg = msgJson;\n\n\n\treturn(finalMsg);\n}\n"
  },
  {
    "path": "C/plugins/common/simple_http.cpp",
"content": "/*\n * Fledge HTTP Sender implementation using the\n * Simple Web Server HTTP library.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto, Mark Riddoch\n */\n#include <simple_http.h>\n#include <thread>\n#include <logger.h>\n#include <unistd.h>\n\n#define VERBOSE_LOG\t0\n\nusing namespace std;\n\n/**\n * Creates a UTC time string for the current time\n *\n * @return\t\t      Current UTC time\n */\nstatic std::string CurrentTimeString()\n{\n\ttime_t now = time(NULL);\n\tstruct tm timeinfo;\n\tgmtime_r(&now, &timeinfo);\n\tchar timeString[20];\n\tstrftime(timeString, sizeof(timeString), \"%F %T\", &timeinfo);\n\treturn std::string(timeString);\n}\n\n// Using https://github.com/eidheim/Simple-Web-Server\nusing HttpClient = SimpleWeb::Client<SimpleWeb::HTTP>;\n\n/**\n * Constructor: host:port, connect_timeout, request_timeout\n */\nSimpleHttp::SimpleHttp(const string& host_port,\n\t\t       unsigned int connect_timeout,\n\t\t       unsigned int request_timeout,\n\t\t       unsigned int retry_sleep_Time,\n\t\t       unsigned int max_retry) :\n\t\t       HttpSender(), m_host_port(host_port),\n\t\t       m_retry_sleep_time(retry_sleep_Time),\n\t\t       m_max_retry (max_retry)\n{\n\tm_sender = new HttpClient(host_port);\n\tm_sender->config.timeout = (time_t)request_timeout;\n\tm_sender->config.timeout_connect = (time_t)connect_timeout;\n\tsetTrace();\n}\n\n/**\n * @brief Configures logging for the HTTP sender.\n */\nvoid SimpleHttp::setTrace() \n{\n    // Check if the log file is writable\n    std::string tracePath = HttpSender::getOMFTracePath();\n    if (access(tracePath.c_str(), W_OK) == 0)\n\t{\n\t\tm_log = true;\n\t\tm_ofs.open(tracePath.c_str(), ofstream::app);\n\t}\n\telse\n\t{\n\t\tm_log = false;\n\t}\n}\n\n/**\n * @brief Resets the logging state for the HTTP sender.\n */\nvoid SimpleHttp::resetTrace() \n{\n    if (m_log) \n    {\n        m_ofs.close(); // Close the log file 
if it is open\n    }\n}\n\n/**\n * Set a proxy server\n *\n * @param proxy\tThe name and port of the proxy server\n */\nvoid SimpleHttp::setProxy(const string& proxy)\n{\n\tm_sender->config.proxy_server = proxy;\n}\n\n/**\n * Destructor\n */\nSimpleHttp::~SimpleHttp()\n{\n\tresetTrace();\n\tdelete m_sender;\n}\n\n/**\n * Send data\n *\n * @param method    The HTTP method (GET, POST, ...)\n * @param path      The URL path\n * @param headers   The optional headers to send\n * @param payload   The optional data payload (for POST, PUT)\n * @return          The HTTP code on success or 0 on exceptions\n */\nint SimpleHttp::sendRequest(\n\t\tconst string& method,\n\t\tconst string& path,\n\t\tconst vector<pair<string, string>>& headers,\n\t\tconst string& payload\n)\n{\n\tSimpleWeb::CaseInsensitiveMultimap header;\n\n\t// Add Fledge UserAgent\n\theader.emplace(\"User-Agent\", HTTP_SENDER_USER_AGENT);\n\theader.emplace(\"Content-Type\", \"application/json\");\n\n\t// Required because PI Web API has Cross-Site Request Forgery (CSRF) protection enabled in its default configuration\n\theader.emplace(\"X-Requested-With\", \"XMLHttpRequest\");\n\n\t// Add custom headers\n\tfor (auto it = headers.begin(); it != headers.end(); ++it)\n\t{\n\t\theader.emplace((*it).first, (*it).second);\n\t}\n\n\t// Handle basic authentication\n\tif (m_authMethod == \"b\")\n\t\theader.emplace(\"Authorization\", \"Basic \" + m_authBasicCredentials);\n\n\tstring retCode;\n\tstring response;\n\tint http_code;\n\n\tbool retry = false;\n\tint  retry_count = 1;\n\tint  sleep_time = m_retry_sleep_time;\n\n\tenum exceptionType\n\t{\n\t    none, typeBadRequest, typeException\n\t};\n\n\texceptionType exception_raised;\n\tstring exception_message;\n\n\tdo\n\t{\n\t\tstd::chrono::high_resolution_clock::time_point tStart;\n\t\ttry\n\t\t{\n\t\t\texception_raised = none;\n\t\t\thttp_code = 0;\n\n\t\t\tif (m_log)\n\t\t\t{\n\t\t\t\tm_ofs << endl << method << \" \" << path << endl;\n\t\t\t\tm_ofs << \"Headers:\" << endl;\n\t\t\t\tfor 
(auto it = header.begin(); it != header.end(); it++)\n\t\t\t\t{\n\t\t\t\t\tm_ofs << \"    \" << it->first << \": \" << it->second << endl;\n\t\t\t\t}\n\t\t\t\tm_ofs << \"Payload:\" << endl;\n\t\t\t\tm_ofs << payload << endl;\n\t\t\t\ttStart = std::chrono::high_resolution_clock::now();\n\t\t\t}\n\n\t\t\t// Call the HTTP method\n\t\t\tauto res = m_sender->request(method, path, payload, header);\n\n\t\t\tretCode = res->status_code;\n\t\t\tresponse = res->content.string();\n\n\t\t\tif (m_log)\n\t\t\t{\n\t\t\t\tstd::chrono::high_resolution_clock::time_point tEnd = std::chrono::high_resolution_clock::now();\n\t\t\t\tm_ofs << \"Response:\" << endl;\n\t\t\t\tm_ofs << \"   Code: \" << res->status_code << endl;\n\t\t\t\tm_ofs << \"   Time: \" << ((double)std::chrono::duration_cast<std::chrono::microseconds>(tEnd - tStart).count()) / 1.0E6 << \" sec     \" << CurrentTimeString() << endl;\n\t\t\t\tm_ofs << \"   Content: \" << res->content.string() << endl << endl;\n\t\t\t}\n\n\t\t\tm_HTTPResponse = response;\n\n\t\t\t// In some cases the response is an empty string\n\t\t\t// and retCode contains the code and the description\n\t\t\tif (response.compare(\"\") == 0)\n\t\t\t\tresponse = res->status_code;\n\n\t\t\thttp_code = atoi(retCode.c_str());\n\n\t\t}\n\t\tcatch (BadRequest &ex)\n\t\t{\n\t\t\texception_raised = typeBadRequest;\n\t\t\texception_message = ex.what();\n\n\t\t}\n\t\tcatch (exception &ex)\n\t\t{\n\t\t\texception_raised = typeException;\n\t\t\texception_message = \"Failed to send data: \";\n\t\t\texception_message.append(ex.what());\n\t\t}\n\n\t\tif (exception_raised == none &&\n\t\t    ((http_code >= 200) && (http_code <= 399)))\n\t\t{\n\t\t\tretry = false;\n#if VERBOSE_LOG\n\t\t\tLogger::getLogger()->info(\"HTTP sendRequest succeeded : retry count |%d| HTTP code |%d| message |%s|\",\n\t\t\t\t\t\t  retry_count,\n\t\t\t\t\t\t  http_code,\n\t\t\t\t\t\t  payload.c_str());\n#endif\n\t\t}\n\t\telse\n\t\t{\n#if VERBOSE_LOG\n\t\t\tif 
(exception_raised)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\n\t\t\t\t\t\"HTTP sendRequest : retry count |%d| error |%s| message |%s|\",\n\t\t\t\t\tretry_count,\n\t\t\t\t\texception_message.c_str(),\n\t\t\t\t\tpayload.c_str());\n\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\n\t\t\t\t\t\"HTTP sendRequest : retry count |%d| HTTP code |%d| HTTP error |%s| message |%s|\",\n\t\t\t\t\tretry_count,\n\t\t\t\t\thttp_code,\n\t\t\t\t\tresponse.c_str(),\n\t\t\t\t\tpayload.c_str());\n\t\t\t}\n#endif\n\n\t\t\tif (m_log && !exception_message.empty())\n\t\t\t{\n\t\t\t\tstd::chrono::high_resolution_clock::time_point tEnd = std::chrono::high_resolution_clock::now();\n\t\t\t\tm_ofs << \"   Time: \" << ((double)std::chrono::duration_cast<std::chrono::microseconds>(tEnd - tStart).count()) / 1.0E6 << \" sec     \" << CurrentTimeString() << endl;\n\t\t\t\tm_ofs << \"   Exception: \" << exception_message << endl;\n\t\t\t}\n\n\t\t\tif (retry_count < m_max_retry)\n\t\t\t{\n\t\t\t\tthis_thread::sleep_for(chrono::seconds(sleep_time));\n\n\t\t\t\tretry = true;\n\t\t\t\tsleep_time *= 2;\n\t\t\t\tretry_count++;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tretry = false;\n\t\t\t}\n\t\t}\n\n\t} while (retry);\n\n\t// Check if an error should be raised\n\tif (exception_raised == none)\n\t{\n\t\t// If 400 Bad Request, throw BadRequest exception\n\t\tif (http_code == 400)\n\t\t{\n\t\t\tthrow BadRequest(response);\n\t\t}\n\t\telse if (http_code == 401)\n\t\t{\n\t\t\tthrow Unauthorized(response);\n\t\t}\n\t\telse if (http_code == 409)\n\t\t{\n\t\t\tthrow Conflict(response);\n\t\t}\n\t\telse if (http_code > 401)\n\t\t{\n\t\t\tstd::stringstream error_message;\n\t\t\terror_message << \"HTTP code |\" << to_string(http_code) << \"| HTTP error |\" << response << \"|\";\n\n\t\t\tthrow runtime_error(error_message.str());\n\t\t}\n\t}\n\telse\n\t{\n\t\tif (exception_raised == typeBadRequest)\n\t\t{\n\t\t\tthrow BadRequest(exception_message);\n\t\t}\n\t\telse if (exception_raised == 
typeException)\n\t\t{\n\t\t\tthrow runtime_error(exception_message);\n\t\t}\n\t}\n\n\treturn http_code;\n}\n"
  },
  {
    "path": "C/plugins/common/simple_https.cpp",
    "content": "/*\n * Fledge HTTP Sender implementation using the\n * HTTPS Simple Web Server library\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto, Mark Riddoch\n */\n\n#include <simple_https.h>\n#include <thread>\n#include <logger.h>\n#include <string_utils.h>\n\n#define VERBOSE_LOG\t0\n\nusing namespace std;\n\n/**\n * Creates a UTC time string for the current time\n *\n * @return\t\t      Current UTC time\n */\nstatic std::string CurrentTimeString()\n{\n\ttime_t now = time(NULL);\n\tstruct tm timeinfo;\n\tgmtime_r(&now, &timeinfo);\n\tchar timeString[20];\n\tstrftime(timeString, sizeof(timeString), \"%F %T\", &timeinfo);\n\treturn std::string(timeString);\n}\n\n// Using https://github.com/eidheim/Simple-Web-Server\nusing HttpsClient = SimpleWeb::Client<SimpleWeb::HTTPS>;\n\n/**\n * Constructor: host:port, connect_timeout, request_timeout\n */\nSimpleHttps::SimpleHttps(const string& host_port,\n                         unsigned int connect_timeout,\n                         unsigned int request_timeout,\n\t\t\t unsigned int retry_sleep_Time,\n\t\t\t unsigned int max_retry) :\n\t\t\t HttpSender(), m_host_port(host_port),\n\t\t\t m_retry_sleep_time(retry_sleep_Time),\n\t\t\t m_max_retry (max_retry)\n{\n\t// Passing false to second parameter avoids certificate verification\n\tm_sender = new HttpsClient(host_port, false);\n\tm_sender->config.timeout = (time_t)request_timeout;\n\tm_sender->config.timeout_connect = (time_t)connect_timeout;\n    setTrace();\n}\n\n/**\n * Destructor\n */\nSimpleHttps::~SimpleHttps()\n{\n    resetTrace();\n\tdelete m_sender;\n}\n\n/**\n * @brief Configures logging for the HTTP sender.\n */\nvoid SimpleHttps::setTrace() \n{\n    // Check if the log file is writable\n    std::string tracePath = HttpSender::getOMFTracePath();\n    if (access(tracePath.c_str(), W_OK) == 0)\n\t{\n\t\tm_log = true;\n\t\tm_ofs.open(tracePath.c_str(), 
ofstream::app);\n\t}\n\telse\n\t{\n\t\tm_log = false;\n\t}\n}\n\n/**\n * @brief Resets the logging state for the HTTP sender.\n */\nvoid SimpleHttps::resetTrace() \n{\n    if (m_log) \n    {\n        m_ofs.close(); // Close the log file if it is open\n    }\n}\n\n/**\n * Set a proxy server\n *\n * @param proxy\tThe name and port of the proxy server\n */\nvoid SimpleHttps::setProxy(const string& proxy)\n{\n\tm_sender->config.proxy_server = proxy;\n}\n\n/**\n * Send data, retrying the operation up to m_max_retry times and\n * doubling the sleep time, starting from m_retry_sleep_time, after each failed attempt\n *\n * @param method    The HTTP method (GET, POST, ...)\n * @param path      The URL path\n * @param headers   The optional headers to send\n * @param payload   The optional data payload (for POST, PUT)\n * @return          The HTTP code for the cases : 1xx Informational / 2xx Success / 3xx Redirection\n * @throw\t    BadRequest for HTTP 400 error\n *\t\t    std::exception as generic exception for all the cases >= 401 Client errors / 5xx Server errors\n */\nint SimpleHttps::sendRequest(\n\t\tconst string& method,\n\t\tconst string& path,\n\t\tconst vector<pair<string, string>>& headers,\n\t\tconst string& payload\n)\n{\n\tSimpleWeb::CaseInsensitiveMultimap header;\n\n\t// Add Fledge UserAgent\n\theader.emplace(\"User-Agent\", HTTP_SENDER_USER_AGENT);\n\n\t// Required because PI Web API has Cross-Site Request Forgery (CSRF) protection enabled in its default configuration\n\theader.emplace(\"X-Requested-With\", \"XMLHttpRequest\");\n\n\t// Add custom headers\n\tfor (auto it = headers.begin(); it != headers.end(); ++it)\n\t{\n\t\theader.emplace((*it).first, (*it).second);\n\t}\n\n\t// Handle basic authentication\n\tif (m_authMethod == \"b\")\n\t{\n\t\theader.emplace(\"Authorization\", \"Basic \" + m_authBasicCredentials);\n\t}\n\telse if (m_OCSToken.compare(\"\") != 0)\n\t{\n\t\theader.emplace(\"Authorization\", \"Bearer \" + m_OCSToken);\n\t}\n\n\tstring retCode;\n\tstring response;\n\tint http_code;\n\n\tbool retry = 
false;\n\tint  retry_count = 1;\n\tint  sleep_time = m_retry_sleep_time;\n\n\tenum exceptionType\n\t{\n\t    none, typeBadRequest, typeException\n\t};\n\n\texceptionType exception_raised;\n\tstring exception_message;\n\n\tdo\n\t{\n\t\tstd::chrono::high_resolution_clock::time_point tStart;\n\t\ttry\n\t\t{\n\t\t\texception_raised = none;\n\t\t\thttp_code = 0;\n\n\t\t\tif (m_log)\n\t\t\t{\n\t\t\t\tm_ofs << endl << method << \" \" << path << endl;\n\t\t\t\tm_ofs << \"Headers:\" << endl;\n\t\t\t\tfor (auto it = header.begin(); it != header.end(); it++)\n\t\t\t\t{\n\t\t\t\t\tm_ofs << \"    \" << it->first << \": \" << it->second << endl;\n\t\t\t\t}\n\t\t\t\tm_ofs << \"Payload:\" << endl;\n\t\t\t\tm_ofs << payload << endl;\n\t\t\t\ttStart = std::chrono::high_resolution_clock::now();\n\t\t\t}\n\n\t\t\t// Call HTTPS method\n\t\t\tauto res = m_sender->request(method, path, payload, header);\n\n\t\t\tretCode = res->status_code;\n\t\t\tresponse = res->content.string();\n\t\t\tm_HTTPResponse = response;\n\n\t\t\tif (m_log)\n\t\t\t{\n\t\t\t\tstd::chrono::high_resolution_clock::time_point tEnd = std::chrono::high_resolution_clock::now();\n\t\t\t\tm_ofs << \"Response:\" << endl;\n\t\t\t\tm_ofs << \"   Code: \" << res->status_code << endl;\n\t\t\t\tm_ofs << \"   Time: \" << ((double)std::chrono::duration_cast<std::chrono::microseconds>(tEnd - tStart).count()) / 1.0E6 << \" sec     \" << CurrentTimeString() << endl;\n\t\t\t\tm_ofs << \"   Content: \" << res->content.string() << endl << endl;\n\t\t\t}\n\n\t\t\t// In some cases the response is an empty string\n\t\t\t// and retCode contains the code and the description\n\t\t\tif (response.compare(\"\") == 0)\n\t\t\t\tresponse = res->status_code;\n\n\t\t\thttp_code = atoi(retCode.c_str());\n\t\t}\n\t\tcatch (BadRequest &ex)\n\t\t{\n\t\t\texception_raised = typeBadRequest;\n\t\t\texception_message = ex.what();\n\t\t}\n\t\tcatch (exception &ex)\n\t\t{\n\t\t\texception_raised = typeException;\n\t\t\texception_message = \"Failed to send data: 
\";\n\t\t\texception_message.append(ex.what());\n\t\t}\n\n\t\tif (exception_raised == none &&\n\t\t    ((http_code >= 200) && (http_code <= 399)))\n\t\t{\n\t\t\tretry = false;\n#if VERBOSE_LOG\n\t\t\tLogger::getLogger()->info(\"HTTPS sendRequest succeeded : retry count |%d| HTTP code |%d| message |%s|\",\n\t\t\t\t\t\t  retry_count,\n\t\t\t\t\t\t  http_code,\n\t\t\t\t\t\t  payload.c_str());\n#endif\n\t\t}\n\t\telse\n\t\t{\n#if VERBOSE_LOG\n\t\t\tif (exception_raised)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\n\t\t\t\t\t\"HTTPS sendRequest : retry count |%d| error |%s| message |%s|\",\n\t\t\t\t\tretry_count,\n\t\t\t\t\texception_message.c_str(),\n\t\t\t\t\tpayload.c_str());\n\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\n\t\t\t\t\t\"HTTPS sendRequest : retry count |%d| HTTP code |%d| HTTP error |%s| message |%s|\",\n\t\t\t\t\tretry_count,\n\t\t\t\t\thttp_code,\n\t\t\t\t\tresponse.c_str(),\n\t\t\t\t\tpayload.c_str());\n\t\t\t}\n#endif\n\n\t\t\tif (m_log && !exception_message.empty())\n\t\t\t{\n\t\t\t\tstd::chrono::high_resolution_clock::time_point tEnd = std::chrono::high_resolution_clock::now();\n\t\t\t\tm_ofs << \"   Time: \" << ((double)std::chrono::duration_cast<std::chrono::microseconds>(tEnd - tStart).count()) / 1.0E6 << \" sec     \" << CurrentTimeString() << endl;\n\t\t\t\tm_ofs << \"   Exception: \" << exception_message << endl;\n\t\t\t}\n\t\t\t\n\t\t\tif (retry_count < m_max_retry)\n\t\t\t{\n\t\t\t\tthis_thread::sleep_for(chrono::seconds(sleep_time));\n\n\t\t\t\tretry = true;\n\t\t\t\tsleep_time *= 2;\n\t\t\t\tretry_count++;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tretry = false;\n\t\t\t}\n\t\t}\n\n\t} while (retry);\n\n\t// Check if an error should be raised\n\tif (exception_raised == none)\n\t{\n\t\t// If 400 Bad Request, throw BadRequest exception\n\t\tif (http_code == 400)\n\t\t{\n\t\t\tthrow BadRequest(response);\n\t\t}\n\t\telse if (http_code == 401)\n\t\t{\n\t\t\tthrow Unauthorized(response);\n\t\t}\n\t\telse if (http_code == 
409)\n\t\t{\n\t\t\tthrow Conflict(response);\n\t\t}\n\t\telse if (http_code > 401)\n\t\t{\n\t\t\tstd::stringstream error_message;\n\t\t\tStringReplace(response, \"\\r\\n\", \"\");\n\t\t\terror_message << \"HTTP code |\" << to_string(http_code) << \"| HTTP error |\" << response << \"|\";\n\n\t\t\tthrow runtime_error(error_message.str());\n\t\t}\n\t}\n\telse\n\t{\n\t\tif (exception_raised == typeBadRequest)\n\t\t{\n\t\t\tthrow BadRequest(exception_message);\n\t\t}\n\t\telse if (exception_raised == typeException)\n\t\t{\n\t\t\tthrow runtime_error(exception_message);\n\t\t}\n\t}\n\n\treturn http_code;\n}\n"
  },
  {
    "path": "C/plugins/filter/common/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.4.0)\n\nproject(filters-common-lib)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\n\nset(BOOST_COMPONENTS system thread)\n# Late 2017 TODO: remove the following checks and always use std::regex\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        set(BOOST_COMPONENTS ${BOOST_COMPONENTS} regex)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DUSE_BOOST_REGEX\")\n    endif()\nendif()\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\n# Find source files\nfile(GLOB SOURCES *.cpp)\n\n# Include header files\ninclude_directories(include ../../../common/include ../../../services/common/include)\ninclude_directories(../../../thirdparty/Simple-Web-Server)\ninclude_directories(../../../thirdparty/rapidjson/include)\n\nset(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/../../../lib)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES} ${EXTRA_SOURCES})\ntarget_link_libraries(${PROJECT_NAME} ${Boost_LIBRARIES})\n\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/lib)\n"
  },
  {
    "path": "C/plugins/filter/common/filter.cpp",
    "content": "/*\n * Fledge base FledgeFilter class\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <filter.h>\n\nusing namespace std;\n\n/**\n * FledgeFilter constructor\n *\n * This class or a derived one has to be used\n * as the return object from the Fledge filters C interface \"plugin_init\"\n *\n * @param filterName\tThe filter plugin name\n * @param filterConfig\tThe filter plugin configuration\n * @param outHandle\tA handle passed to the filter output stream function\n * @param output\tThe output stream function pointer\n */\nFledgeFilter::FledgeFilter(const string& filterName,\n\t\t\t     ConfigCategory& filterConfig,\n\t\t\t     OUTPUT_HANDLE *outHandle,\n\t\t\t     OUTPUT_STREAM output) : m_name(filterName),\n\t\t\t\t\t\t     m_config(filterConfig),\n\t\t\t\t\t\t     m_enabled(false)\n{\n\tm_data = outHandle;\n\tm_func = output;\n\n\t// Set the enable flag\n\tif (m_config.itemExists(\"enable\"))\n\t{\n\t\tm_enabled = m_config.getValue(\"enable\").compare(\"true\") == 0 ||\n\t\t\t    m_config.getValue(\"enable\").compare(\"True\") == 0;\n\t}\n}\n\n/**\n * Set a new configuration for this plugin\n *\n * @param newConfig\tThe new configuration\n */\nvoid FledgeFilter::setConfig(const string& newConfig)\n{\n\tm_config = ConfigCategory(m_name, newConfig);\n\tm_enabled = m_config.getValue(\"enable\").compare(\"true\") == 0 ||\n\t\t\tm_config.getValue(\"enable\").compare(\"True\") == 0;\n}\n"
  },
  {
    "path": "C/plugins/filter/common/include/filter.h",
    "content": "#ifndef _FLEDGE_FILTER_H\n#define _FLEDGE_FILTER_H\n/*\n * Fledge base FledgeFilter class\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <string>\n#include <config_category.h>\n#include <filter_plugin.h>\n\nclass FledgeFilter{\n\tpublic:\n\t\tFledgeFilter(const std::string& filterName,\n\t\t\t      ConfigCategory& filterConfig,\n\t\t\t      OUTPUT_HANDLE *outHandle,\n\t\t\t      OUTPUT_STREAM output);\n\t\t~FledgeFilter() {};\n\t\tconst std::string&\n\t\t\t\tgetName() const { return m_name; };\n\t\tbool\t\tisEnabled() const { return m_enabled; };\n\t\tConfigCategory& getConfig() { return m_config; };\n\t\tvoid\t\tdisableFilter() { m_enabled = false; };\n\t\tvoid\t\tsetConfig(const std::string& newConfig);\n\tpublic:\n\t\tOUTPUT_HANDLE*\tm_data;\n\t\tOUTPUT_STREAM\tm_func;\n\tprotected:\n\t\tstd::string\tm_name;\n\t\tConfigCategory\tm_config;\n\t\tbool\t\tm_enabled;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/north/OMF/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6.0)\n\nproject(OMF)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O3\")\n\nset_source_files_properties(version.h PROPERTIES GENERATED TRUE)\nadd_custom_command(\n  OUTPUT version.h\n  DEPENDS ${CMAKE_SOURCE_DIR}/VERSION\n  COMMAND ${CMAKE_SOURCE_DIR}/mkversion ${CMAKE_SOURCE_DIR}\n  COMMENT \"Generating version header\"\n  VERBATIM\n)\ninclude_directories(${CMAKE_BINARY_DIR})\n\n# Add here all needed Fledge libraries as list\nset(NEEDED_FLEDGE_LIBS common-lib plugins-common-lib)\n\nset(COMMON_LIBS -lssl -lcrypto)\n\n# Find source files\nfile(GLOB SOURCES *.cpp)\n\n# Include header files\ninclude_directories(include)\ninclude_directories(../../../services/common/include)\ninclude_directories(../../../common/include)\ninclude_directories(../../../plugins/common/include)\ninclude_directories(../../../thirdparty/Simple-Web-Server)\ninclude_directories(../../../thirdparty/rapidjson/include)\nlink_directories(${PROJECT_BINARY_DIR}/../../../lib)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES} version.h)\ntarget_link_libraries(${PROJECT_NAME} ${NEEDED_FLEDGE_LIBS})\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIBS})\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/plugins/north/${PROJECT_NAME})\n"
  },
  {
    "path": "C/plugins/north/OMF/OMFError.cpp",
    "content": "/*\n * Fledge OSIsoft OMF interface to PI Server.\n *\n * Copyright (c) 2023 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <utility>\n#include <iostream>\n#include <string>\n#include <cstring>\n#include <logger.h>\n#include \"string_utils.h\"\n\n#include <iterator>\n#include <typeinfo>\n#include <algorithm>\n\n#include <omferror.h>\n#include <rapidjson/error/en.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\n/**\n * Constructors\n */\nOMFError::OMFError() : m_hasErrors(false)\n{\n}\n\nOMFError::OMFError(const string& json)\n{\n\tsetFromHttpResponse(json);\n}\n\n/**\n * Destructor for the error class\n */\nOMFError::~OMFError()\n{\n}\n\n/**\n * Parse error information from an OMF POST response JSON document\n *\n * @param json\tJSON response document from an OMF POST\n */\nvoid OMFError::setFromHttpResponse(const string& json)\n{\n\tm_messages.clear();\n\tm_hasErrors = false;\n\n\t// ParseInsitu modifies the buffer it parses, so work on a local\n\t// copy rather than casting away the const of json.c_str().\n\t// The copy must outlive 'doc', which holds pointers into it.\n\tstring buffer = json;\n\tchar *p = &buffer[0];\n\n\twhile (*p && *p != '{')\n\t\tp++;\n\tDocument doc;\n\tif (doc.ParseInsitu(p).HasParseError())\n\t{\n\t\tLogger::getLogger()->error(\"Unable to parse response from OMF endpoint: %s\",\n\t\t\t\t\t\t\t\t   GetParseError_En(doc.GetParseError()));\n\t\tLogger::getLogger()->error(\"Error response was: %s\", json.c_str());\n\t}\n\telse if (doc.HasMember(\"Messages\") && doc[\"Messages\"].IsArray())\n\t{\n\t\tconst Value& messages = doc[\"Messages\"].GetArray();\n\n\t\tfor (Value::ConstValueIterator a = messages.Begin(); a != messages.End(); a++)\n\t\t{\n\t\t\tconst Value& msg = *a;\n\t\t\tint httpCode = 200;\n\t\t\tif (msg.HasMember(\"Status\") && msg[\"Status\"].IsObject())\n\t\t\t{\n\t\t\t\tconst Value& status = msg[\"Status\"];\n\t\t\t\tif (status.HasMember(\"Code\") && status[\"Code\"].IsInt())\n\t\t\t\t{\n\t\t\t\t\thttpCode = 
status[\"Code\"].GetInt();\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (msg.HasMember(\"Events\") && msg[\"Events\"].IsArray())\n\t\t\t{\n\t\t\t\tconst Value& events = msg[\"Events\"];\n\t\t\t\tfor (Value::ConstValueIterator b = events.Begin(); b != events.End(); b++)\n\t\t\t\t{\n\t\t\t\t\tconst Value &event = *b;\n\t\t\t\t\tstring message, reason, severity, exceptionType, exceptionMessage;\n\n\t\t\t\t\t// ExceptionInfo can appear inside an Event object\n\t\t\t\t\t// or inside an Event->InnerEvents object \n\t\t\t\t\tif (event.HasMember(\"ExceptionInfo\") && event[\"ExceptionInfo\"].IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tconst Value &exceptionInfo = event[\"ExceptionInfo\"];\n\t\t\t\t\t\tif (exceptionInfo.HasMember(\"Type\") && exceptionInfo[\"Type\"].IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\texceptionType = exceptionInfo[\"Type\"].GetString();\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (exceptionInfo.HasMember(\"Message\") && exceptionInfo[\"Message\"].IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\texceptionMessage = exceptionInfo[\"Message\"].GetString();\n\t\t\t\t\t\t\tstd::string crlf(2, '\\r');\n\t\t\t\t\t\t\tcrlf[1] = '\\n';\n\t\t\t\t\t\t\tStringReplaceAll(exceptionMessage, crlf, \" \");\n\t\t\t\t\t\t\texceptionMessage = StringStripWhiteSpacesExtra(exceptionMessage);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (event.HasMember(\"InnerEvents\") && event[\"InnerEvents\"].IsArray())\n\t\t\t\t\t{\n\t\t\t\t\t\trapidjson::GenericArray<true, rapidjson::Value> innerEvents = event[\"InnerEvents\"].GetArray();\n\t\t\t\t\t\tif (innerEvents.Size() > 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tconst Value &innerEvent = innerEvents[0];\n\t\t\t\t\t\t\tif (innerEvent.HasMember(\"ExceptionInfo\") && innerEvent[\"ExceptionInfo\"].IsObject())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tconst Value &exceptionInfo = innerEvent[\"ExceptionInfo\"];\n\t\t\t\t\t\t\t\tif (exceptionInfo.HasMember(\"Type\") && exceptionInfo[\"Type\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\texceptionType = 
exceptionInfo[\"Type\"].GetString();\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tif (exceptionInfo.HasMember(\"Message\") && exceptionInfo[\"Message\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\texceptionMessage = exceptionInfo[\"Message\"].GetString();\n\t\t\t\t\t\t\t\t\tstd::string crlf(2, '\\r');\n\t\t\t\t\t\t\t\t\tcrlf[1] = '\\n';\n\t\t\t\t\t\t\t\t\tStringReplaceAll(exceptionMessage, crlf, \" \");\n\t\t\t\t\t\t\t\t\texceptionMessage = StringStripWhiteSpacesExtra(exceptionMessage);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (event.HasMember(\"Severity\") && event[\"Severity\"].IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tseverity = event[\"Severity\"].GetString();\n\t\t\t\t\t\tif (severity.compare(\"Error\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tm_hasErrors = true;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (event.HasMember(\"EventInfo\") && event[\"EventInfo\"].IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tconst Value& eventInfo = event[\"EventInfo\"];\n\t\t\t\t\t\tif (eventInfo.HasMember(\"Message\") && eventInfo[\"Message\"].IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tmessage = eventInfo[\"Message\"].GetString();\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (eventInfo.HasMember(\"Reason\") && eventInfo[\"Reason\"].IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treason = eventInfo[\"Reason\"].GetString();\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tm_messages.push_back(Message(severity, message, reason, exceptionType, exceptionMessage, httpCode));\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * Return the most severe HTTP Code from all messages.\n * PI Web API HTTP Codes are usually the same within one HTTP response.\n * \n * @return HTTP Code\n */\nint OMFError::getHttpCode()\n{\n\tint httpCode = 200;\n\n\tfor (Message &msg : m_messages)\n\t{\n\t\tif (msg.getHttpCode() > httpCode)\n\t\t{\n\t\t\thttpCode = msg.getHttpCode();\n\t\t}\n\t}\n\n\treturn httpCode;\n}\n\n/**\n * Return the error message for the given message\n *\n * @param offset\tThe error within the report to 
return\n * @return string\tThe event message\n */\nstring OMFError::getMessage(unsigned int offset)\n{\nstring rval;\n\n\tif (offset < messageCount())\n\t{\n\t\trval = m_messages[offset].getMessage();\n\t}\n\treturn rval;\n}\n\n/**\n * Return the error reason for the given message\n *\n * @param offset\tThe error within the report to return\n * @return string\tThe event reason\n */\nstring OMFError::getEventReason(unsigned int offset)\n{\nstring rval;\n\n\tif (offset < messageCount())\n\t{\n\t\trval = m_messages[offset].getReason();\n\t}\n\treturn rval;\n}\n\n/**\n * Get the event severity for a given message\n *\n * @param offset\tThe message to examine\n * @return string\tThe event severity\n */\nstring OMFError::getEventSeverity(unsigned int offset)\n{\nstring rval;\n\n\tif (offset < messageCount())\n\t{\n\t\trval = m_messages[offset].getSeverity();\n\t}\n\treturn rval;\n}\n\n/**\n * Get the event exception type for a given message\n *\n * @param offset\tThe message to examine\n * @return string\tThe event exception type\n */\nstring OMFError::getEventExceptionType(unsigned int offset)\n{\nstring rval;\n\n\tif (offset < messageCount())\n\t{\n\t\trval = m_messages[offset].getExceptionType();\n\t}\n\treturn rval;\n}\n\n/**\n * Get the event exception message for a given message\n *\n * @param offset\tThe message to examine\n * @return string\tThe event exception message\n */\nstring OMFError::getEventExceptionMessage(unsigned int offset)\n{\nstring rval;\n\n\tif (offset < messageCount())\n\t{\n\t\trval = m_messages[offset].getExceptionMessage();\n\t}\n\treturn rval;\n}\n\n/**\n * Log all available messages\n *\n * @param mainMessage       Top-level message when reporting an error\n * @param filterDuplicates  If true, do not log duplicate messages\n * @return                  True if OMFError object holds at least one message\n */\nbool OMFError::Log(const std::string &mainMessage, bool filterDuplicates)\n{\n\tif (hasMessages())\n\t{\n\t\tif 
(hasErrors())\n\t\t{\n\t\t\tLogger::getLogger()->error(\"HTTP %d: %s: %u %s\",\n\t\t\t\t\t\t\t\t\t   getHttpCode(),\n\t\t\t\t\t\t\t\t\t   mainMessage.c_str(),\n\t\t\t\t\t\t\t\t\t   messageCount(),\n\t\t\t\t\t\t\t\t\t   (messageCount() == 1) ? \"message\" : \"messages\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"HTTP %d: %s: %u %s\",\n\t\t\t\t\t\t\t\t\t  getHttpCode(),\n\t\t\t\t\t\t\t\t\t  mainMessage.c_str(),\n\t\t\t\t\t\t\t\t\t  messageCount(),\n\t\t\t\t\t\t\t\t\t  (messageCount() == 1) ? \"message\" : \"messages\");\n\t\t}\n\n\t\tstd::string lastMessage;\n\t\tstd::string lastExceptionMessage;\n\t\tunsigned int numDuplicates = 0;\n\n\t\tfor (unsigned int i = 0; i < messageCount(); i++)\n\t\t{\n\t\t\tMessage &msg = m_messages[i];\n\t\t\tstd::string errorMessage = msg.getMessage();\n\t\t\tstd::string exceptionMessage = msg.getExceptionMessage();\n\n\t\t\tif (filterDuplicates && (0 == errorMessage.compare(lastMessage)) && (0 == exceptionMessage.compare(lastExceptionMessage)))\n\t\t\t{\n\t\t\t\tnumDuplicates++;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tif (msg.getSeverity().compare(\"Error\") == 0)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->error(\"Message %u HTTP %d: %s, %s, %s\",\n\t\t\t\t\t\t\t\t\t\t\t   i,\n\t\t\t\t\t\t\t\t\t\t\t   msg.getHttpCode(),\n\t\t\t\t\t\t\t\t\t\t\t   msg.getSeverity().c_str(),\n\t\t\t\t\t\t\t\t\t\t\t   errorMessage.c_str(),\n\t\t\t\t\t\t\t\t\t\t\t   msg.getReason().c_str());\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"Message %u HTTP %d: %s, %s, %s\",\n\t\t\t\t\t\t\t\t\t\t\t  i,\n\t\t\t\t\t\t\t\t\t\t\t  msg.getHttpCode(),\n\t\t\t\t\t\t\t\t\t\t\t  msg.getSeverity().c_str(),\n\t\t\t\t\t\t\t\t\t\t\t  errorMessage.c_str(),\n\t\t\t\t\t\t\t\t\t\t\t  msg.getReason().c_str());\n\t\t\t\t}\n\n\t\t\t\tif (!exceptionMessage.empty() && (0 != errorMessage.compare(exceptionMessage)))\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->error(\"Message %u Exception: %s (%s)\",\n\t\t\t\t\t\t\t\t\t\t\t   
i,\n\t\t\t\t\t\t\t\t\t\t\t   exceptionMessage.c_str(),\n\t\t\t\t\t\t\t\t\t\t\t   msg.getExceptionType().c_str());\n\t\t\t\t}\n\n\t\t\t\tlastMessage = errorMessage;\n\t\t\t\tlastExceptionMessage = exceptionMessage;\n\t\t\t}\n\t\t}\n\n\t\tif (numDuplicates > 0)\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"%u duplicate messages skipped\", numDuplicates);\n\t\t}\n\t}\n\n\treturn hasMessages();\n}\n"
  },
  {
    "path": "C/plugins/north/OMF/include/OMFHint.h",
    "content": "#ifndef _OMF_HINT_H\n#define _OMF_HINT_H\n\n#include <string>\n#include <vector>\n#include <map>\n#include <rapidjson/document.h>\n\n/**\n * Virtual base class for an OMF Hint\n */\nclass OMFHint\n{\n\tpublic:\n\t\tvirtual ~OMFHint() = default;\n\t\tconst std::string&\tgetHint() const { return m_hint; };\n\tprotected:\n\t\tstd::string\tm_hint;\n};\n\n/**\n * A number hint, defines how a number type should be defined, float64 or float32\n */\nclass OMFNumberHint : public OMFHint\n{\n\tpublic:\n\t\tOMFNumberHint(const std::string& type) { m_hint = type; };\n\t\t~OMFNumberHint() {};\n};\n\n/**\n * An integer hint, defines how an integer type should be defined, int64, int32 or int16\n */\nclass OMFIntegerHint : public OMFHint\n{\npublic:\n\tOMFIntegerHint(const std::string& type) { m_hint = type; };\n\t~OMFIntegerHint() {};\n};\n\n\n/**\n * A tag hint, used to define an existing OMF container or tag to use\n */\nclass OMFTagHint : public OMFHint\n{\n\tpublic:\n\t\tOMFTagHint(const std::string& tag) { m_hint = tag; };\n\t\t~OMFTagHint() {};\n};\n\n/**\n * A Type name hint, tells us how to name the types we use\n */\nclass OMFTypeNameHint : public OMFHint\n{\n\tpublic:\n\t\tOMFTypeNameHint(const std::string& name) { m_hint = name; };\n\t\t~OMFTypeNameHint() {};\n};\n\n/**\n * A tag name hint, tells us which Container name to use in PI\n */\nclass OMFTagNameHint : public OMFHint\n{\n\tpublic:\n\t\tOMFTagNameHint(const std::string& name) { m_hint = name; };\n\t\t~OMFTagNameHint() {};\n};\n\n/**\n * A tag name hint, defines which PI Tag to use for a Datapoint\n */\nclass OMFTagNameDatapointHint : public OMFHint\n{\n\tpublic:\n\t\tOMFTagNameDatapointHint(const std::string &name) { m_hint = name; };\n\t\t~OMFTagNameDatapointHint() {};\n};\n\n/**\n * An AFLocation hint, tells us in which Asset Framework hierarchy the asset should be created\n */\nclass OMFAFLocationHint : public OMFHint\n{\npublic:\n\tOMFAFLocationHint(const std::string& name) { m_hint = name; };\n\t~OMFAFLocationHint() {};\n};\n\n/**\n * A Legacy 
type hint, tells the OMF plugin to send complex types for this asset\n */\nclass OMFLegacyTypeHint : public OMFHint\n{\npublic:\n\tOMFLegacyTypeHint(const std::string& name) { m_hint = name; };\n\t~OMFLegacyTypeHint() {};\n};\n\n/**\n * A Source hint, defines the data source for the asset or datapoint\n */\nclass OMFSourceHint : public OMFHint\n{\npublic:\n\tOMFSourceHint(const std::string& name) { m_hint = name; };\n\t~OMFSourceHint() {};\n};\n\n/**\n * A unit of measurement hint, defines the unit of measurement for a datapoint\n */\nclass OMFUOMHint : public OMFHint\n{\npublic:\n\tOMFUOMHint(const std::string& name) { m_hint = name; };\n\t~OMFUOMHint() {};\n};\n\n\n/**\n * A minimum hint, defines the minimum value for a property\n */\nclass OMFMinimumHint : public OMFHint\n{\npublic:\n\tOMFMinimumHint(const std::string& name) { m_hint = name; };\n\t~OMFMinimumHint() {};\n};\n\n\n/**\n * A maximum hint, defines the maximum value for a property\n */\nclass OMFMaximumHint : public OMFHint\n{\npublic:\n\tOMFMaximumHint(const std::string& name) { m_hint = name; };\n\t~OMFMaximumHint() {};\n};\n\n/**\n * An interpolation hint, defines the interpolation value for a property\n */\nclass OMFInterpolationHint : public OMFHint\n{\npublic:\n\tOMFInterpolationHint(const std::string& name) { m_hint = name; };\n\t~OMFInterpolationHint() {};\n};\n\n/**\n * A set of hints for a reading\n *\n * documentation available at:\n * https://fledge-iot.readthedocs.io/en/latest/plugins/fledge-filter-omfhint/index.html\n * https://fledge-iot.readthedocs.io/en/latest/OMF.html#omf-hints\n *\n */\nclass OMFHints\n{\n\tpublic:\n\t\tOMFHints(const std::string& hints);\n\t\t~OMFHints();\n\t\tconst std::vector<OMFHint *>&\n\t\t\t\t\tgetHints() const { return m_hints; };\n\t\tconst std::vector<OMFHint *>&\n\t\t\t\t\tgetHints(const std::string&) const;\n\t\tunsigned short\t\tgetChecksum() const { return m_chksum; };\n\t\tstatic std::string\tgetHintForChecksum(const std::string 
&hint);\n\tprivate:\n\t\trapidjson::Document\tm_doc;\n\t\tunsigned short\t\tm_chksum;\n\t\tstd::vector<OMFHint *>\tm_hints;\n\t\tstd::map<std::string, std::vector<OMFHint *> > m_datapointHints;\n};\n#endif\n"
  },
  {
    "path": "C/plugins/north/OMF/include/basetypes.h",
    "content": "#ifndef _BASETYPES_H\n#define _BASETYPES_H\n/*\n * Fledge OSIsoft OMF interface to PI Server.\n *\n * Copyright (c) 2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <plugin_api.h>\n\nstatic const char *baseOMFTypes = QUOTE(\n    [\n\t   {\n\t      \"id\":\"Double64\",\n\t      \"type\":\"object\",\n\t      \"classification\":\"dynamic\",\n\t      \"properties\":{\n\t\t \"Double64\":{\n\t\t    \"type\":[\"number\", \"null\"],\n\t\t    \"format\":\"float64\"\n\t\t },\n\t\t \"Time\":{\n\t\t    \"type\":\"string\",\n\t\t    \"format\":\"date-time\",\n\t\t    \"isindex\":true\n\t\t }\n\t      }\n\t   },\n\t   {\n\t      \"id\":\"Double32\",\n\t      \"type\":\"object\",\n\t      \"classification\":\"dynamic\",\n\t      \"properties\":{\n\t\t \"Double32\":{\n\t\t    \"type\":[\"number\", \"null\"],\n\t\t    \"format\":\"float32\"\n\t\t },\n\t\t \"Time\":{\n\t\t    \"type\":\"string\",\n\t\t    \"format\":\"date-time\",\n\t\t    \"isindex\":true\n\t\t }\n\t      }\n\t   },\n\t   {\n\t      \"id\":\"Integer16\",\n\t      \"type\":\"object\",\n\t      \"classification\":\"dynamic\",\n\t      \"properties\":{\n\t\t \"Integer16\":{\n\t\t    \"type\":[\"integer\",\"null\"],\n\t\t    \"format\":\"int16\"\n\t\t },\n\t\t \"Time\":{\n\t\t    \"type\":\"string\",\n\t\t    \"format\":\"date-time\",\n\t\t    \"isindex\":true\n\t\t }\n\t      }\n\t   },\n\t   {\n\t      \"id\":\"Integer32\",\n\t      \"type\":\"object\",\n\t      \"classification\":\"dynamic\",\n\t      \"properties\":{\n\t\t \"Integer32\":{\n\t\t    \"type\":[\"integer\",\"null\"],\n\t\t    \"format\":\"int32\"\n\t\t },\n\t\t \"Time\":{\n\t\t    \"type\":\"string\",\n\t\t    \"format\":\"date-time\",\n\t\t    \"isindex\":true\n\t\t }\n\t      }\n\t   },\n\t   {\n\t      \"id\":\"Integer64\",\n\t      \"type\":\"object\",\n\t      \"classification\":\"dynamic\",\n\t      \"properties\":{\n\t\t \"Integer64\":{\n\t\t    
\"type\":[\"integer\",\"null\"],\n\t\t    \"format\":\"int64\"\n\t\t },\n\t\t \"Time\":{\n\t\t    \"type\":\"string\",\n\t\t    \"format\":\"date-time\",\n\t\t    \"isindex\":true\n\t\t }\n\t      }\n\t   },\n\t   {\n\t      \"id\":\"UInteger16\",\n\t      \"type\":\"object\",\n\t      \"classification\":\"dynamic\",\n\t      \"properties\":{\n\t\t \"UInteger16\":{\n\t\t    \"type\":[\"integer\",\"null\"],\n\t\t    \"format\":\"uint16\"\n\t\t },\n\t\t \"Time\":{\n\t\t    \"type\":\"string\",\n\t\t    \"format\":\"date-time\",\n\t\t    \"isindex\":true\n\t\t }\n\t      }\n\t   },\n\t   {\n\t      \"id\":\"UInteger32\",\n\t      \"type\":\"object\",\n\t      \"classification\":\"dynamic\",\n\t      \"properties\":{\n\t\t \"UInteger32\":{\n\t\t    \"type\":[\"integer\",\"null\"],\n\t\t    \"format\":\"uint32\"\n\t\t },\n\t\t \"Time\":{\n\t\t    \"type\":\"string\",\n\t\t    \"format\":\"date-time\",\n\t\t    \"isindex\":true\n\t\t }\n\t      }\n\t   },\n\t   {\n\t      \"id\":\"UInteger64\",\n\t      \"type\":\"object\",\n\t      \"classification\":\"dynamic\",\n\t      \"properties\":{\n\t\t \"UInteger64\":{\n\t\t    \"type\":[\"integer\",\"null\"],\n\t\t    \"format\":\"uint64\"\n\t\t },\n\t\t \"Time\":{\n\t\t    \"type\":\"string\",\n\t\t    \"format\":\"date-time\",\n\t\t    \"isindex\":true\n\t\t }\n\t      }\n\t   },\n\t   {\n\t      \"id\":\"String\",\n\t      \"type\":\"object\",\n\t      \"classification\":\"dynamic\",\n\t      \"properties\":{\n\t\t \"String\":{\n\t\t    \"type\":[\"string\",\"null\"]\n\t\t },\n\t\t \"Time\":{\n\t\t    \"type\":\"string\",\n\t\t    \"format\":\"date-time\",\n\t\t    \"isindex\":true\n\t\t }\n\t      }\n\t   }\n    ]);\n\n#endif\n"
  },
  {
    "path": "C/plugins/north/OMF/include/linkedlookup.h",
    "content": "#ifndef _LINKEDLOOKUP_H\n#define _LINKEDLOOKUP_H\n\n#include <string>\n#include <cstdint>\n\ntypedef enum {\n\tOMFBT_UNKNOWN, OMFBT_DOUBLE64, OMFBT_DOUBLE32, OMFBT_INTEGER16,\n\tOMFBT_INTEGER32, OMFBT_INTEGER64, OMFBT_UINTEGER16, OMFBT_UINTEGER32,\n\tOMFBT_UINTEGER64, OMFBT_STRING, OMFBT_FLEDGEASSET\n} OMFBaseType;\n\n/**\n * Lookup status bits\n */\n#define LAL_ASSET_SENT\t\t0x01\t// We have sent the asset\n#define LAL_LINK_SENT\t\t0x02\t// We have sent the link to the base type\n#define LAL_CONTAINER_SENT\t0x04\t// We have sent the container\n#define LAL_AFLINK_SENT\t\t0x08\t// We have sent the link to the AF location\n\n/**\n * Linked Asset Information class\n *\n * This is the data stored for each asset and asset/datapoint pair that\n * is being sent to PI using the linked container mechanism. We use the class\n * so we can combine all the information we need in a single lookup table;\n * this not only saves space but allows us to build and retain the table\n * before we start building the payloads. This should help prevent\n * too much memory fragmentation, which was an issue with the old, separate\n * lookup mechanism we had.\n */\nclass LALookup {\n\tpublic:\n\t\tLALookup()\t{ m_sentState = 0; m_baseType = OMFBT_UNKNOWN; };\n\t\tbool\t\tassetState(const std::string& tagName)\n\t\t\t\t{\n\t\t\t\t\treturn ((m_sentState & LAL_ASSET_SENT) != 0)\n\t\t\t\t\t\t&& (m_tagName.compare(tagName) == 0);\n\t\t\t\t};\n\t\tbool\t\tlinkState(const std::string& tagName)\n\t\t\t\t{\n\t\t\t\t\treturn ((m_sentState & LAL_LINK_SENT) != 0)\n\t\t\t\t\t\t&& (m_tagName.compare(tagName) == 0);\n\t\t\t\t};\n\t\tbool\t\tcontainerState(const std::string& tagName)\n\t\t\t\t{\n\t\t\t\t\treturn ((m_sentState & LAL_CONTAINER_SENT) != 0)\n\t\t\t\t\t\t&& (m_tagName.compare(tagName) == 0);\n\t\t\t\t};\n\t\tbool\t\tafLinkState() { return (m_sentState & LAL_AFLINK_SENT) != 0; };\n\t\tvoid\t\tsetBaseType(const std::string& baseType);\n\t\tOMFBaseType\tgetBaseType() { return m_baseType; 
};\n\t\tstd::string\tgetBaseTypeString();\n\t\tvoid\t\tassetSent(const std::string& tagName)\n\t\t\t\t{\n\t\t\t\t\tif (m_tagName.compare(tagName))\n\t\t\t\t\t{\n\t\t\t\t\t\tm_sentState = LAL_ASSET_SENT;\n\t\t\t\t\t\tm_tagName = tagName;\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tm_sentState |= LAL_ASSET_SENT;\n\t\t\t\t\t}\n\t\t\t\t};\n\t\tvoid\t\tlinkSent(const std::string& tagName)\n\t\t\t\t{\n\t\t\t\t\tif (m_tagName.compare(tagName))\n\t\t\t\t\t{\n\t\t\t\t\t\t// Force the container to resend if the tagName changes\n\t\t\t\t\t\tm_tagName = tagName;\n\t\t\t\t\t\tm_sentState &= ~LAL_CONTAINER_SENT;\n\t\t\t\t\t}\n\t\t\t\t\tm_sentState |= LAL_LINK_SENT;\n\t\t\t\t};\n\t\tvoid\t\tafLinkSent() {  m_sentState |= LAL_AFLINK_SENT; };\n\t\tvoid\t\tcontainerSent(const std::string& tagName, const std::string& baseType);\n\t\tvoid\t\tcontainerSent(const std::string& tagName, OMFBaseType baseType);\n\tprivate:\n\t\tuint8_t\t\tm_sentState;\n\t\tOMFBaseType\tm_baseType;\n\t\tstd::string\tm_tagName;\n};\n#endif\n"
  },
  {
    "path": "C/plugins/north/OMF/include/ocs.h",
    "content": "#ifndef _OCS_H\n#define _OCS_H\n/*\n * Fledge OSIsoft ADH and OCS integration.\n *\n * Copyright (c) 2020-2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli\n */\n\n#include <string>\n#include <chrono>\n\nusing namespace std;\n\n#define TIMEOUT_CONNECT   10\n#define TIMEOUT_REQUEST   10\n#define RETRY_SLEEP_TIME  1\n\n#define URL_RETRIEVE_TOKEN \"/identity/connect/token\"\n\n#define PAYLOAD_RETRIEVE_TOKEN \"grant_type=client_credentials&client_id=CLIENT_ID_PLACEHOLDER&client_secret=CLIENT_SECRET_ID_PLACEHOLDER\"\n\n/**\n * The OCS class.\n */\nclass OCS\n{\n\tpublic:\n\t\tOCS(const std::string &authorizationUrl);\n\n\t\t// Destructor\n\t\t~OCS();\n\n\t\tstd::string\tOCSRetrieveAuthToken(const string& clientId, const string& clientSecret, bool logMessage = true);\n\t\tint  retrieveToken(const string& clientId, const string& clientSecret, bool logMessage = true);\n\t\tvoid  extractToken(const string& response);\n\tprivate:\n\t\tstd::string m_token;\n\t\tstd::string m_authUrl;\n\t\tunsigned int m_expiresIn;\n\t\tstd::chrono::steady_clock::time_point m_nextAuthentication;\n};\n#endif\n"
  },
  {
    "path": "C/plugins/north/OMF/include/omf.h",
    "content": "#ifndef _OMF_H\n#define _OMF_H\n/*\n * Fledge OSIsoft OMF interface to PI Server.\n *\n * Copyright (c) 2018-2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n#include <string>\n#include <vector>\n#include <map>\n#include <unordered_map>\n#include <reading.h>\n#include <http_sender.h>\n#include <zlib.h>\n#include <rapidjson/document.h>\n#include <omfbuffer.h>\n#include <omferror.h>\n#include <linkedlookup.h>\n\n#define\tOMF_HINT\t\"OMFHint\"\n\n#define PIWEBAPI_PIPOINTS_NOT_CREATED\t\t\"One or more PI Points could not be created.\"\n#define PIWEBAPI_CONTAINER_NOT_FOUND\t\t\"Container not found.\"\n#define PIWEBAPI_UPDATE_EXCEPTION\t\t\t\"An exception occurred while updating data.\"\n#define MESSAGE_PI_UNSTABLE\t\t\t\t\t\"HTTP Code %d: Processing cannot continue until data archive errors are corrected\"\n#define MESSAGE_UNAUTHORIZED\t\t\t\t\"OMF endpoint reported we are not authorized, please check configuration of the authentication method and credentials\"\n\nconst char *const noConnectionErrorMessages[] =\n\t{\"Failed to send data: Operation canceled\",\t\t// PI Web API\n\t \"Failed to send data: Connection refused\",\t\t// Edge Data Store\n\t \"Failed to send data: Host not found\",\t\t\t// usually followed by \"(authoritative)\" or \"(non-authoritative), try again later\"\n\t \"Failed to send data: Network is unreachable\",\n\t \"\"};\t\t// empty string marks the end of the array\n\n// The following will force the OMF version for EDS endpoints\n// Remove or comment out the line below to prevent the forcing\n// of the version\n#define EDS_OMF_VERSION\t\"1.0\"\n#define CR_OMF_VERSION\t\"1.0\"\n\n\n#define TYPE_ID_DEFAULT     1\n#define FAKE_ASSET_KEY      \"_default_start_id_\"\n#define OMF_TYPE_STRING\t\t\"string\"\n#define OMF_TYPE_INTEGER\t\"integer\"\n#define OMF_TYPE_FLOAT\t\t\"number\"\n#define OMF_TYPE_UNSUPPORTED\t\"unsupported\"\n\nenum OMF_ENDPOINT 
{\n\tENDPOINT_PIWEB_API,\n\tENDPOINT_CR,\n\tENDPOINT_OCS,\n\tENDPOINT_EDS,\n\tENDPOINT_ADH\n};\n\n// Documentation about the Naming Scheme available at: https://fledge-iot.readthedocs.io/en/latest/OMF.html#naming-scheme\nenum NAMINGSCHEME_ENDPOINT {\n\tNAMINGSCHEME_CONCISE,\n\tNAMINGSCHEME_SUFFIX,\n\tNAMINGSCHEME_HASH,\n\tNAMINGSCHEME_COMPATIBILITY\n};\n\n\nusing namespace std;\nusing namespace rapidjson;\n\nstd::string ApplyPIServerNamingRules(const std::string &objName, bool *changed);\nstd::string DataPointNamesAsString(const Reading& reading);\n\n/**\n * Per asset dataTypes - This class is used in a std::map where assetName is a key\n *\n * - typeId          = id of the type, it is incremented if the type is redefined\n * - types           = is a JSON string with datapoint names and types\n * - typesShort      = a numeric representation of the type used to quickly identify if a type has changed\n * - namingScheme    = Naming schema of the asset, valid options are Concise, Backward compatibility ..\n * - afhHash         = Asset hash based on the AF hierarchy\n * - afHierarchy     = Current position of the asset in the AF hierarchy\n * - afHierarchyOrig = Original position of the asset in the AF hierarchy, in case the asset was moved\n * - hintChkSum      = Checksum of the OMF hints\n\n */\nclass OMFDataTypes\n{ \n        public:\n                long           typeId;\n                std::string    types;\n                unsigned long  typesShort;\n\t\t\t\tlong           namingScheme;\n\t\t\t\tstring         afhHash;\n\t\t\t\tstring         afHierarchy;\n\t\t\t\tstring         afHierarchyOrig;\n\n\t\tunsigned short hintChkSum;\n};\n\nclass OMFHints;\n\n/**\n * The OMF class.\n * Implements the OMF protocol\n */\nclass OMF\n{\n\tpublic:\n\t\t/**\n\t\t * Constructor:\n\t\t * pass server URL path, OMF_type_id and producerToken.\n\t\t */\n\t\tOMF(const std::string& name,\n\t\t    HttpSender& sender,\n                    const std::string& path,\n\t\t    const long 
typeId,\n\t\t    const std::string& producerToken);\n\n\t\tOMF(const std::string& name,\n\t\t    HttpSender& sender,\n\t\t    const std::string& path,\n\t\t    std::map<std::string, OMFDataTypes>& types,\n\t\t    const std::string& producerToken);\n\n\t\t// Destructor\n\t\t~OMF();\n\n\t\tvoid\t\tsetOMFVersion(std::string& omfversion)\n\t\t\t\t{\n\t\t\t\t       \tm_OMFVersion = omfversion;\n\t\t\t\t\tif (omfversion.compare(\"1.0\") == 0\n\t\t\t\t\t\t\t|| omfversion.compare(\"1.1\") == 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_linkedProperties = false;\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tm_linkedProperties = true;\n\t\t\t\t\t}\n\t\t\t\t};\n\n\t\tvoid\t\tsetSender(HttpSender& sender)\n\t\t\t\t{\n\t\t\t\t\tm_sender = sender;\n\t\t\t\t};\n\n\t\t/**\n\t\t * Send data to PI Server passing a vector of readings.\n\t\t *\n\t\t * Data sending is composed of a few phases\n\t\t * handled by private methods.\n\t\t *\n\t\t * Note: DataTypes are sent only once by using\n\t\t * an in-memory key map, the key being assetName + typeId.\n\t\t * Passing false to skipSentDataTypes changes the logic.\n\t\t *\n\t\t * Returns the number of processed readings.\n\t\t */\n\n\t\t// Method with vector (by reference) of readings\n\t\tuint32_t sendToServer(const std::vector<Reading>& readings,\n\t\t\t\t      bool skipSentDataTypes = true); // never called\n\n\t\t// Method with vector (by reference) of reading pointers\n\t\tuint32_t sendToServer(const std::vector<Reading *>& readings,\n\t\t\t\t      bool compression, bool skipSentDataTypes = true);\n\n\t\t// Send a single reading (by reference)\n\t\tuint32_t sendToServer(const Reading& reading,\n\t\t\t\t      bool skipSentDataTypes = true); // never called\n\n\t\t// Send a single reading pointer\n\t\tuint32_t sendToServer(const Reading* reading,\n\t\t\t\t      bool skipSentDataTypes = true); // never called\n\n\t\t// Set saved OMF formats\n\t\tvoid setFormatType(const std::string &key, std::string &value);\n\n\t\t// Set which PIServer 
component should be used for the communication\n\t\tvoid setPIServerEndpoint(const OMF_ENDPOINT PIServerEndpoint);\n\n\t\t// Set the naming scheme of the objects in the endpoint\n\t\tvoid setNamingScheme(const NAMINGSCHEME_ENDPOINT namingScheme) {m_NamingScheme = namingScheme;};\n\n\t\t// Generate the container id for the given asset\n\t\tstd::string generateMeasurementId(const string& assetName);\n\n\t\t// Generate a suffix for the given asset in relation to the selected naming scheme and the value of the type id\n\t\tstd::string generateSuffixType(string &assetName, long typeId);\n\n\t\t// Retrieve the naming scheme stored for the given asset\n\t\tlong getNamingScheme(const string& assetName);\n\n\t\tstring getHashStored(const string& assetName);\n\t\tstring getPathStored(const string& assetName);\n\t\tstring getPathOrigStored(const string& assetName);\n\t\tbool setPathStored(const string& assetName, string &afHierarchy);\n\t\tbool deleteAssetAFH(const string& assetName, string& path);\n\t\tbool createAssetAFH(const string& assetName, string& path);\n\n\t\t// Set the first level of hierarchy in Asset Framework in which the assets will be created, PI Web API only.\n\t\tvoid setDefaultAFLocation(const std::string &DefaultAFLocation);\n\n\t\tbool setAFMap(const std::string &AFMap);\n\n\t\tvoid setSendFullStructure(const bool sendFullStructure) {m_sendFullStructure = sendFullStructure;};\n\n\t\tvoid setPrefixAFAsset(const std::string &prefixAFAsset);\n\n\t\tvoid setDelimiter(const std::string &delimiter) {m_delimiter = delimiter;};\n\n\t\tvoid setDataActionCode(const std::string &actionCode) {m_dataActionCode = actionCode;};\n\n\t\t// Get saved OMF formats\n\t\tstd::string getFormatType(const std::string &key) const;\n\n\t\t// Set the list of errors considered not blocking\n\t\t// in the communication with the PI Server\n                void setNotBlockingErrors(std::vector<std::string>& );\n\n\t\t// Compress 
string using gzip\n\t\tstd::string compress_string(const std::string& str,\n                            \t\t\t\tint compressionlevel = Z_DEFAULT_COMPRESSION);\n\n\t\t// Return current value of global type-id\n\t\tconst long getTypeId() const { return m_typeId; };\n\n\t\t// Check DataTypeError\n\t\tbool isDataTypeError(const char* message);\n\n\t\t// Check if plugin configuration is working and PI is stable\n\t\tbool isPIstable() { return m_PIstable; };\n\n\t\t// Check if plugin is connected to PI\n\t\tbool isPIconnected() { return m_connected; };\n\n\t\t// Set PI connection status\n\t\tvoid setPIconnected(bool connectionStatus) { m_connected = connectionStatus; };\n\n\t\t// Get and Set number of blocks of Readings\n\t\tstd::size_t getNumBlocks() { return m_numBlocks; };\n\t\tvoid setNumBlocks(std::size_t numBlocks) { m_numBlocks = numBlocks; };\n\n\t\t// Map object types found in input data\n\t\tvoid setMapObjectTypes(const std::vector<Reading *>& data,\n\t\t\t\t\tstd::map<std::string, Reading*>& dataSuperSet);\n\t\t// Remove mapped object types found in input data\n\t\tvoid unsetMapObjectTypes(std::map<std::string, Reading*>& dataSuperSet) const;\n\n\t\tvoid setStaticData(std::vector<std::pair<std::string, std::string>> *staticData)\n\t\t{\n\t\t\tm_staticData = staticData;\n\t\t};\n\n\t\tvoid generateAFHierarchyPrefixLevel(string& path, string& prefix, string& AFHierarchyLevel);\n\n\t\t// Retrieve private objects\n\t\tmap<std::string, std::string> getNamesRules() const { return m_NamesRules; };\n\t\tmap<std::string, std::string> getMetadataRulesExist() const { return m_MetadataRulesExist; };\n\n\t\tbool getAFMapEmptyNames() const { return m_AFMapEmptyNames; };\n\t\tbool getAFMapEmptyMetadata() const { return m_AFMapEmptyMetadata; };\n\n\t\tvoid setLegacyMode(bool legacy) { m_legacy = legacy; };\n\n\t\tstatic std::string ApplyPIServerNamingRulesObj(const std::string &objName, bool *changed);\n\t\tstatic std::string ApplyPIServerNamingRulesPath(const std::string 
&objName, bool *changed);\n\t\tstatic std::string ApplyPIServerNamingRulesInvalidChars(const std::string &objName, bool *changed);\n\n\t\tstatic std::string variableValueHandle(const Reading& reading, std::string &AFHierarchy);\n\t\tstatic bool        extractVariable(string &strToHandle, string &variable, string &value, string &defaultValue);\n\t\tstatic void   \t   reportAsset(const string& asset, const string& level, const string& msg);\n\nprivate:\n\t\t/**\n\t\t * Builds the HTTP header to send\n\t\t * messagetype header takes the passed type value:\n\t\t * 'Type', 'Container', 'Data'\n\t\t */\n\t\tconst std::vector<std::pair<std::string, std::string>>\n\t\t\tcreateMessageHeader(const std::string& type, const std::string& action=\"create\") const;\n\n\t\t// Create data for Type message for current row\n\t\tconst std::string createTypeData(const Reading& reading, OMFHints *hints);\n\n\t\t// Create data for Container message for current row\n\t\tconst std::string createContainerData(const Reading& reading, OMFHints *hints);\n\n\t\t// Create data for additional type message, with 'Data' for current row\n\t\tconst std::string createStaticData(const Reading& reading);\n\n\t\t// Create data Link message, with 'Data', for current row\n\t\tstd::string createLinkData(const Reading& reading, std::string& AFHierarchyLevel, std::string&  prefix, std::string&  objectPrefix, OMFHints *hints, bool legacy);\n\n\t\t/**\n\t\t * Create data for readings data content, with 'Data', for one row.\n\t\t * The newly formatted data must be added to a new JSON document to send;\n\t\t * we want to avoid sending a single data row at a time.\n\t\t */\n\t\tconst std::string createMessageData(Reading& reading);\n\n\t\t// Set the tagName in an assetName Type message\n\t\tvoid setAssetTypeTag(const std::string& assetName,\n\t\t\t\t     const std::string& tagName,\n\t\t\t\t     std::string& data);\n\n\t\tvoid setAssetTypeTagNew(const std::string& assetName,\n\t\t\t\t\t\t\t const std::string& 
tagName,\n\t\t\t\t\t\t\t std::string& data);\n\n\t\t// Create the OMF data types if needed\n\t\tbool handleDataTypes(const string keyComplete,\n\t\t\t                 const Reading& row,\n\t\t\t\t             bool skipSendingTypes, OMFHints *hints);\n\n\t\t// Send OMF data types\n\t\tbool sendDataTypes(const Reading& row, OMFHints *hints);\n\n\t\t// Get saved dataType\n\t\tbool getCreatedTypes(const std::string& keyComplete, const Reading& row, OMFHints *hints);\n\n\t\t// Calculate a short numeric representation of the data types\n\t\tunsigned long calcTypeShort(const Reading& row);\n\n\t\t// Clear data types cache\n\t\tvoid clearCreatedTypes();\n\n\t\t// Increment type-id value\n\t\tvoid incrementTypeId();\n\n\t\t// Handle data type errors\n\t\tbool handleTypeErrors(const string& keyComplete, const Reading& reading, OMFHints *hints);\n\n\t\tstring errorMessageHandler(const string &msg);\n\n\t\tvoid handleRESTException(const std::exception &e, const char *mainMessage);\n\n\t\tvoid CheckHttpCode(const int httpCode, const std::string &errorMessage);\n\n\t\tstd::string getExceptionMessage(const std::exception &e, OMFError *error);\n\n\t\t// Extract assetName from error message\n\t\tstd::string getAssetNameFromError(const char* message);\n\n\t\t// Get asset type-id from cached data\n\t\tlong getAssetTypeId(const std::string& assetName);\n\n\t\t// Increment per asset type-id value\n\t\tvoid incrementAssetTypeId(const std::string& keyComplete);\n\t\tvoid incrementAssetTypeIdOnly(const std::string& keyComplete);\n\n\t\t// Set global type-id as the maximum value of all per asset type-ids\n\t\tvoid setTypeId();\n\n\t\t// Set saved dataType\n\t\tbool setCreatedTypes(const Reading& row, OMFHints *hints);\n\n\t\t// Remove cached data types entry for given asset name\n\t\tvoid clearCreatedTypes(const std::string& keyComplete);\n\n\t\t// Add the 1st level of AF hierarchy if the end point is PI Web API\n\t\tvoid setAFHierarchy();\n\n\t\tbool handleAFHierarchy();\n\t\tbool handleAFHierarchySystemWide();\n\t\tbool 
handleOmfHintHierarchies();\n\n\t\tbool sendAFHierarchy(std::string AFHierarchy);\n\n\t\tbool sendAFHierarchyLevels(std::string parentPath, std::string path, std::string &lastLevel);\n\t\tbool sendAFHierarchyTypes(const std::string AFHierarchyLevel, const std::string prefix);\n\t\tbool sendAFHierarchyStatic(const std::string AFHierarchyLevel, const std::string prefix);\n\t\tbool sendAFHierarchyLink(std::string parent, std::string child, std::string prefixIdParent, std::string prefixId);\n\n\t\tbool manageAFHierarchyLink(std::string parent, std::string child, std::string prefixIdParent, std::string prefixId, std::string childFull, string action);\n\n\t\tbool AFHierarchySendMessage(const std::string& msgType, std::string& jsonData, const std::string& action=\"create\");\n\n\n\t\tstd::string generateUniquePrefixId(const std::string &path);\n\t\tbool evaluateAFHierarchyRules(const string& assetName, const Reading& reading);\n\t\tvoid retrieveAFHierarchyPrefixAssetName(const string& assetName, string& prefix, string& AFHierarchyLevel);\n\t\tvoid retrieveAFHierarchyFullPrefixAssetName(const string& assetName, string& prefix, string& AFHierarchy);\n\n\t\tbool createAFHierarchyOmfHint(const string& assetName, const  string &OmfHintHierarchy);\n\n\t\tbool HandleAFMapNames(Document& JSon);\n\t\tbool HandleAFMapMetedata(Document& JSon);\n\n\t\t// Start of support for using linked containers\n\t\tbool sendBaseTypes();\n\t\tbool sendFledgeAssetType();\n\t\tbool sendAFLinks(Reading& reading, OMFHints *hints);\n\t\t// End of support for using linked containers\n\t\t//\n\t\tstring createAFLinks(Reading &reading, OMFHints *hints);\n\n\n\tprivate:\n\t\t// Use for the evaluation of the OMFDataTypes.typesShort\n\t\tunion t_typeCount {\n\t\t\tstruct\n\t\t\t{\n\t\t\t\tunsigned char tTotal;\n\t\t\t\tunsigned char tFloat;\n\t\t\t\tunsigned char tString;\n\t\t\t\tunsigned char spare0;\n\n\t\t\t\tunsigned char spare1;\n\t\t\t\tunsigned char spare2;\n\t\t\t\tunsigned char 
spare3;\n\t\t\t\tunsigned char spare4;\n\t\t\t} cnt;\n\t\t\tunsigned long valueLong = 0;\n\t\t};\n\n\t\tstd::string\t          m_assetName;\n\t\tconst std::string\t  m_path;\n\t\tlong\t\t\t      m_typeId;\n\t\tconst std::string\t  m_producerToken;\n\t\tOMF_ENDPOINT\t\t  m_PIServerEndpoint;\n\t\tNAMINGSCHEME_ENDPOINT m_NamingScheme;\n\t\tstd::string\t\t      m_DefaultAFLocation;\n\t\tbool                  m_sendFullStructure; // If disabled the AF hierarchy is not created.\n\t\tstd::string\t\t\t  m_delimiter;\n\t\tstd::string\t\t\t  m_dataActionCode;\n\n\t\t// Asset Framework Hierarchy Rules handling - Metadata MAP\n\t\t// Documentation: https://fledge-iot.readthedocs.io/en/latest/plugins/fledge-north-OMF/index.html?highlight=hierarchy#asset-framework-hierarchy-rules\n\t\tstd::string\t\tm_AFMap;\n\t\tbool            m_AFMapEmptyNames;  // true if there are no rules to manage\n\t\tbool            m_AFMapEmptyMetadata;\n\t\tstd::string\t\tm_AFHierarchyLevel;\n\t\tstd::string\t\tm_prefixAFAsset;\n\n\t\tvector<std::string>  m_afhHierarchyAlreadyCreated={\n\n\t\t\t//  Asset Framework path\n\t\t\t// {\"\"}\n\t\t};\n\n\t\tmap<std::string, std::string>  m_NamesRules={\n\n\t\t\t// Asset_name   - Asset Framework path\n\t\t\t// {\"\",         \"\"}\n\t\t};\n\n\t\tmap<std::string, std::string>  m_MetadataRulesExist={\n\n\t\t\t// Property   - Asset Framework path\n\t\t\t// {\"\",         \"\"}\n\t\t};\n\n\t\tmap<std::string, std::string>  m_MetadataRulesNonExist={\n\n\t\t\t// Property   - Asset Framework path\n\t\t\t// {\"\",         \"\"}\n\t\t};\n\n\t\tmap<std::string, vector<pair<string, string>>>   m_MetadataRulesEqual={\n\n\t\t\t// Property    - Value  - Asset Framework path\n\t\t\t// {\"\",         {{\"\",        \"\"}} }\n\t\t};\n\n\t\tmap<std::string, vector<pair<string, string>>>   m_MetadataRulesNotEqual={\n\n\t\t\t// Property    - Value  - Asset Framework path\n\t\t\t// {\"\",         {{\"\",        \"\"}} }\n\t\t};\n\n\t\tmap<std::string, vector<pair<string, 
string>>>  m_AssetNamePrefix ={\n\n\t\t\t// Property   - Hierarchy - prefix\n\t\t\t// {\"\",         {{\"\",        \"\"}} }\n\t\t};\n\n\t\t// Define the OMF format to use for each type\n\t\t// the format will not be applied if the string is empty\n\t\tstd::map<const std::string, std::string> m_formatTypes {\n\t\t\t{OMF_TYPE_STRING, \"\"},\n\t\t\t{OMF_TYPE_INTEGER,\"int64\"},\n\t\t\t{OMF_TYPE_FLOAT,  \"float64\"},\n\t\t\t{OMF_TYPE_UNSUPPORTED,  \"unsupported\"}\n\t\t};\n\n\t\t// Vector with OMF_TYPES\n\t\tconst std::vector<std::string> omfTypes = { OMF_TYPE_STRING,\n\t\t\t\t\t\t\t    OMF_TYPE_FLOAT,  // Forces the creation of float also for integer numbers\n\t\t\t\t\t\t\t    OMF_TYPE_FLOAT,\n\t\t\t\t\t\t\t    OMF_TYPE_UNSUPPORTED};\n\t\t// HTTP Sender interface\n\t\tHttpSender&\t\tm_sender;\n\t\tbool\t\t\tm_changeTypeId;\n\n\t\t// These errors are considered not blocking in the communication\n\t\t// with the destination, the sending operation will proceed\n\t\t// with the next block of data if one of these is encountered\n\t\tstd::vector<std::string> m_notBlockingErrors;\n\n\t\t// Data types cache[key] = (key_type_id, key data types)\n\t\tstd::map<std::string, OMFDataTypes>* m_OMFDataTypes;\n\n\t\t// Stores the type for the block of data containing all the used properties\n\t\tstd::map<string, Reading*> m_SuperSetDataPoints;\n\n\t\t/**\n\t\t * Static data to send to OMF\n\t\t */\n\t\tstd::vector<std::pair<std::string, std::string>> *m_staticData;\n\n\n\t\t/**\n\t\t * The version of OMF we are talking\n\t\t */\n\t\tstd::string\t\tm_OMFVersion;\n\n\t\t/**\n\t\t * Support sending properties via links\n\t\t */\n\t\tbool\t\t\tm_linkedProperties;\n\n\t\t/**\n\t\t * The state of the linked assets, the key is\n\t\t * either an asset name with an underscore appended\n\t\t * or an asset name, followed by an underscore and a\n\t\t * data point name\n\t\t */\n\t\tstd::unordered_map<std::string, LALookup>\n\t\t\t\t\tm_linkedAssetState;\n\n\t\t/**\n\t\t * Force the data to be 
sent using the legacy, complex OMF types\n\t\t */\n\t\tbool\t\t\tm_legacy;\n\n\t\t/**\n\t\t * Assets that have been logged as having errors. This prevents us\n\t\t * from flooding the logs with reports for the same asset.\n\t\t */\n\t\tstatic std::vector<std::string>\n\t\t\t\t\tm_reportedAssets;\n\t\t/**\n\t\t * Service name\n\t\t */\n\t\tconst std::string\tm_name;\n\n\t\t/**\n\t\t * Have base types been sent to the PI Server\n\t\t */\n\t\tbool\t\t\tm_baseTypesSent;\n\n\t\t/**\n\t\t * If true, plugin configuration is correct and the PI Server shows no errors\n\t\t * If false, no Readings can be processed until PI is corrected and/or the configuration is updated.\n\t\t */\n\t\tbool\t\t\tm_PIstable;\n\n\t\t/**\n\t\t * If true, plugin is connected to the PI Server\n\t\t */\n\t\tbool\t\t\tm_connected;\n\n\t\t/**\n\t\t * Number of blocks of Readings from which to send Data at once\n\t\t */\n\t\tstd::size_t\t\tm_numBlocks;\n\t};\n\n/**\n * The OMFData class.\n * A reading is formatted with OMF specifications using the original\n * type creation scheme implemented by the OMF plugin\n *\n * There is no good reason to retain this class any more, it is here\n * mostly to reduce the scope of the change when introducing the OMFBuffer\n */\nclass OMFData\n{\n\tpublic:\n\t\tOMFData(OMFBuffer & payload, \n\t\t\tconst Reading& reading,\n\t\t\tstring measurementId,\n\t\t\tbool needDelim,\n\t\t\tconst OMF_ENDPOINT PIServerEndpoint = ENDPOINT_CR,\n\t\t\tconst std::string& DefaultAFLocation = std::string(),\n\t\t\tOMFHints *hints = NULL);\n\t\tbool\thasData() { return m_hasData; };\n\tprivate:\n\t\tbool\tm_hasData;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/north/OMF/include/omfbuffer.h",
    "content": "#ifndef _OMF_BUFFER_H\n#define _OMF_BUFFER_H\n/*\n * Fledge OMF North plugin buffer class\n *\n * Copyright (c) 2023 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <string>\n#include <list>\n\n#define BUFFER_CHUNK\t8192\n\n/**\n * Buffer class designed to hold OMF payloads that can\n * grow as required but have minimal copy semantics.\n *\n * TODO Add a coalesce and compress public entry point\n */\nclass OMFBuffer {\n\tclass Buffer {\n\t\tpublic:\n\t\t\tBuffer();\n\t\t\tBuffer(unsigned int);\n\t\t\t~Buffer();\n\t\t\tchar\t\t*detach();\n\t\t\tchar\t\t*data;\n\t\t\tunsigned int\toffset;\n\t\t\tunsigned int\tlength;\n\t\t\tbool\t\tattached;\n\t};\n\n        public:\n                OMFBuffer();\n                ~OMFBuffer();\n\n\t\tbool\t\t\tisEmpty() { return buffers.empty() || (buffers.size() == 1 && buffers.front()->offset == 0); }\n\t\tvoid\t\t\tappend(const char);\n\t\tvoid\t\t\tappend(const char *);\n\t\tvoid\t\t\tappend(const int);\n\t\tvoid\t\t\tappend(const unsigned int);\n\t\tvoid\t\t\tappend(const long);\n\t\tvoid\t\t\tappend(const unsigned long);\n\t\tvoid\t\t\tappend(const double);\n\t\tvoid\t\t\tappend(const std::string&);\n\t\tvoid\t\t\tquote(const std::string&);\n\t\tconst char\t\t*coalesce();\n\t\tvoid\t\t\tclear();\n\n\tprivate:\n\t\tstd::list<Buffer *>\tbuffers;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/north/OMF/include/omferror.h",
    "content": "#ifndef _OMFERROR_H\n#define _OMFERROR_H\n/*\n * Fledge OSIsoft OMF interface to PI Server.\n *\n * Copyright (c) 2023 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <rapidjson/document.h>\n#include <string>\n#include <vector>\n\n/**\n * An encapsulation of an error return from an OMF call.\n * The class parses the JSON response and gives access to portion of that JSON response.\n */\nclass OMFError {\n\tpublic:\n\t\tOMFError();\n\t\tOMFError(const std::string& json);\n\t\t~OMFError();\n\n\t\tunsigned int\tmessageCount() { return m_messages.size(); };\n\t\tstd::string\tgetMessage(unsigned int offset);\n\t\tstd::string\tgetEventReason(unsigned int offset);\n\t\tstd::string\tgetEventSeverity(unsigned int offset);\n\t\tstd::string\tgetEventExceptionType(unsigned int offset);\n\t\tstd::string\tgetEventExceptionMessage(unsigned int offset);\n\t\tint\t\t\tgetHttpCode();\n\t\tvoid setFromHttpResponse(const std::string& json);\n\t\t/**\n\t\t * The error report contains at least one error level event\n\t\t */\n\t\tbool\t\thasErrors() { return m_hasErrors; };\n\t\t/**\n\t\t * The error report contains at least one message\n\t\t */\n\t\tbool\t\thasMessages() { return !m_messages.empty(); };\n\t\tbool\t\tLog(const std::string &mainMessage, bool filterDuplicates = true);\n\tprivate:\n\t\tclass Message {\n\t\t\tpublic:\n\t\t\t\tMessage(const std::string& severity,\n\t\t\t\t\t\tconst std::string& message,\n\t\t\t\t\t\tconst std::string& reason,\n\t\t\t\t\t\tconst std::string& exceptionType,\n\t\t\t\t\t\tconst std::string& exceptionMessage,\n\t\t\t\t\t\tconst int httpCode) :\n\t\t\t\t\tm_severity(severity),\n\t\t\t\t\tm_message(message),\n\t\t\t\t\tm_reason(reason),\n\t\t\t\t\tm_exceptionType(exceptionType),\n\t\t\t\t\tm_exceptionMessage(exceptionMessage),\n\t\t\t\t\tm_httpCode(httpCode)\n\t\t\t\t{\n\t\t\t\t};\n\t\t\t\tstd::string\tgetSeverity() { return m_severity; };\n\t\t\t\tstd::string\tgetMessage() 
{ return m_message; };\n\t\t\t\tstd::string\tgetReason() { return m_reason; };\n\t\t\t\tstd::string\tgetExceptionType() { return m_exceptionType; };\n\t\t\t\tstd::string\tgetExceptionMessage() { return m_exceptionMessage; };\n\t\t\t\tint\tgetHttpCode() { return m_httpCode; };\n\t\t\tprivate:\n\t\t\t\tstd::string\tm_severity;\n\t\t\t\tstd::string\tm_message;\n\t\t\t\tstd::string\tm_reason;\n\t\t\t\tstd::string\tm_exceptionType;\n\t\t\t\tstd::string\tm_exceptionMessage;\n\t\t\t\tint\t\t\tm_httpCode;\n\t\t};\n\t\tstd::vector<Message>\tm_messages;\n\t\tbool\t\t\tm_hasErrors;\n};\n#endif\n"
  },
  {
    "path": "C/plugins/north/OMF/include/omfinfo.h",
    "content": "#ifndef _OMFINFO_H\n#define _OMFINFO_H\n/*\n * Fledge OSIsoft OMF interface to PI Server.\n *\n * Copyright (c) 2023-2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <plugin_api.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <strings.h>\n#include <string>\n#include <logger.h>\n#include <plugin_exception.h>\n#include <iostream>\n#include <omf.h>\n#include <piwebapi.h>\n#include <ocs.h>\n#include <simple_https.h>\n#include <simple_http.h>\n#include <config_category.h>\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"json_utils.h\"\n#include \"libcurl_https.h\"\n#include \"utils.h\"\n#include \"string_utils.h\"\n#include <version.h>\n#include <linkedlookup.h>\n\n#include \"crypto.hpp\"\n\n#define PLUGIN_NAME \"OMF\"\n#define TYPE_ID_KEY \"type-id\"\n#define SENT_TYPES_KEY \"sentDataTypes\"\n#define DATA_KEY \"dataTypes\"\n#define DATA_KEY_SHORT \"dataTypesShort\"\n#define DATA_KEY_HINT \"hintChecksum\"\n#define NAMING_SCHEME \"namingScheme\"\n#define AFH_HASH \"afhHash\"\n#define AF_HIERARCHY \"afHierarchy\"\n#define AF_HIERARCHY_ORIG \"afHierarchyOrig\"\n\n\n#define PROPERTY_TYPE   \"type\"\n#define PROPERTY_NUMBER \"number\"\n#define PROPERTY_STRING \"string\"\n\n#define ENDPOINT_URL_PI_WEB_API \"https://HOST_PLACEHOLDER:PORT_PLACEHOLDER/piwebapi/omf\"\n#define ENDPOINT_URL_CR         \"https://HOST_PLACEHOLDER:PORT_PLACEHOLDER/ingress/messages\"\n#define ENDPOINT_URL_OCS        \"https://REGION_PLACEHOLDER.osisoft.com:PORT_PLACEHOLDER/api/v1/tenants/TENANT_ID_PLACEHOLDER/Namespaces/NAMESPACE_ID_PLACEHOLDER/omf\"\n#define ENDPOINT_URL_ADH        \"https://REGION_PLACEHOLDER.datahub.connect.aveva.com:PORT_PLACEHOLDER/api/v1/Tenants/TENANT_ID_PLACEHOLDER/Namespaces/NAMESPACE_ID_PLACEHOLDER/omf\"\n\n#define ENDPOINT_URL_EDS        \"http://localhost:PORT_PLACEHOLDER/api/v1/tenants/default/namespaces/default/omf\"\n\n#define AUTHORIZATION_URL_ADH   
\"REGION_PLACEHOLDER.datahub.connect.aveva.com\"\n#define AUTHORIZATION_URL_OCS   \"REGION_PLACEHOLDER.osisoft.com:443\"\n\nenum OMF_ENDPOINT_PORT {\n\tENDPOINT_PORT_PIWEB_API=443,\n\tENDPOINT_PORT_CR=5460,\n\tENDPOINT_PORT_OCS=443,\n\tENDPOINT_PORT_EDS=5590,\n\tENDPOINT_PORT_ADH=443\n};\n\n/**\n * Plugin specific default configuration\n */\n\n#define NOT_BLOCKING_ERRORS_DEFAULT QUOTE(                              \\\n\t{                                                                   \\\n\t\t\"errors400\" : [                                                 \\\n\t\t\t\"Redefinition of the type with the same ID is not allowed\", \\\n\t\t\t\"Invalid value type for the property\",                      \\\n\t\t\t\"Property does not exist in the type definition\",           \\\n\t\t\t\"Container is not defined\",                                 \\\n\t\t\t\"Unable to find the property of the container of type\"      \\\n\t\t]\t\t\t                                            \\\n\t}                                                                   \\\n)\n\n#define NOT_BLOCKING_ERRORS_DEFAULT_PI_WEB_API QUOTE(            \\\n\t{                                                            \\\n\t\t\"EventInfo\" : [                                          \\\n\t\t\t\"The specified value is outside the allowable range\" \\\n\t\t]\t\t\t                                     \\\n\t}                                                            \\\n)\n\n#define AF_HIERARCHY_RULES QUOTE(                                          \\\n\t{                                                                     \\\n\t}                                                                     \\\n)\n\n/**\n * A class that holds the configuration information for the OMF plugin.\n *\n * Note this is the first stage of refactoring the OMF plugins and represents\n * the CONNECTOR_INFO structure of original plugin as a class\n */\nclass OMFInformation {\n\tpublic:\n\t\tOMFInformation(ConfigCategory* 
configData);\n\t\t~OMFInformation();\n\t\tvoid\t\tstart(const std::string& storedData);\n\t\tuint32_t\tsend(const vector<Reading *>& readings);\n\t\tstd::string\tsaveData();\n\tprivate:\n\t\tvoid \t\tloadSentDataTypes(rapidjson::Document& JSONData);\n\t\tlong\t\tgetMaxTypeId();\n\t\tint\t\tPIWebAPIGetVersion(bool logMessage = true);\n\t\tint\t\tEDSGetVersion(bool logMessage = true);\n\t\tint\t\tIsADHConnected(bool logMessage = true);\n\t\tvoid\t\tSetOMFVersion();\n\t\tvoid\t\tCheckDataActionCode();\n\t\tOMF_ENDPOINT\tidentifyPIServerEndpoint();\n\t\tstd::string\tsaveSentDataTypes();\n\t\tunsigned long\tcalcTypeShort(const std::string& dataTypes);\n\t\tvoid\t\tParseProductVersion(std::string &versionString, int *major, int *minor);\n\t\tstd::string\tParseEDSProductInformation(std::string json);\n\t\tstd::string\tAuthBasicCredentialsGenerate(std::string& userId, std::string& password);\n\t\tvoid\t\tAuthKerberosSetup(std::string& keytabEnv, std::string& keytabFileName);\n\t\tdouble\t\tGetElapsedTime(struct timeval *startTime);\n\t\tbool\t\tIsDataArchiveConnected();\n        void handleOMFTracing();\n\t\t\n\tprivate:\n\t\tLogger\t\t*m_logger;\n\t\tHttpSender\t*m_sender;              // HTTPS connection\n\t\tOMF \t\t*m_omf;                 // OMF data protocol\n\t\tOCS\t\t\t*m_ocs;\t\t\t\t\t// ADH and OCS authorization\n\t\tbool\t\tm_sendFullStructure;    // It sends the minimum OMF structural messages to load data into PI Data Archive if disabled\n\t\tbool\t\tm_compression;          // whether to compress readings' data\n\t\tstring\t\tm_protocol;             // http / https\n\t\tstring\t\tm_hostAndPort;          // hostname:port for SimpleHttps\n\t\tunsigned int\tm_retrySleepTime;     \t// Seconds between each retry\n\t\tunsigned int\tm_maxRetry;\t        // Max number of retries in the communication\n\t\tunsigned int\tm_timeout;\t        // connect and operation timeout\n\t\tstring\t\tm_path;\t\t        // PI Server application path\n\t\tstring\t\tm_delimiter;\t\t\t// 
delimiter between Asset and Datapoint in PI data stream names\n\t\tstring\t\tm_dataActionCode;\t\t// Action code to use for OMF Data posts: update or create\n\t\tlong\t\tm_typeId;\t\t        // OMF protocol type-id prefix\n\t\tstring\t\tm_producerToken;\t        // PI Server connector token\n\t\tstring\t\tm_formatNumber;\t        // OMF protocol Number format\n\t\tstring\t\tm_formatInteger;\t        // OMF protocol Integer format\n\t\tOMF_ENDPOINT\tm_PIServerEndpoint;     // Defines which End point should be used for the communication\n\t\tNAMINGSCHEME_ENDPOINT\n\t\t\t\tm_NamingScheme; // Define how the object names should be generated - https://fledge-iot.readthedocs.io/en/latest/OMF.html#naming-scheme\n\t\tstring\t\tm_DefaultAFLocation;    // 1st hierarchy in Asset Framework, PI Web API only.\n\t\tstring\t\tm_AFMap;                // Defines a set of rules to address where assets should be placed in the AF hierarchy.\n\t\t\t\t\t\t//    https://fledge-iot.readthedocs.io/en/latest/OMF.html#asset-framework-hierarchy-rules\n\n\t\tstring\t\tm_prefixAFAsset;        // Prefix to generate unique asset id\n\t\tstring\t\tm_PIWebAPIProductTitle;\n\t\tstring\t\tm_RestServerVersion;\n\t\tstring\t\tm_PIWebAPIAuthMethod;   // Authentication method to be used with the PI Web API.\n\t\tstring\t\tm_PIWebAPICredentials;  // Credentials is the base64 encoding of id and password joined by a single colon (:)\n\t\tstring \t\tm_KerberosKeytab;       // Kerberos authentication keytab file\n\t\t\t\t\t\t    //   stores the environment variable value about the keytab file path\n\t\t\t\t\t\t    //   to allow the environment to persist for all the execution of the plugin\n\t\t\t\t\t\t    //\n\t\t\t\t\t\t    //   Note : A keytab is a file containing pairs of Kerberos principals\n\t\t\t\t\t\t    //   and encrypted keys (which are derived from the Kerberos password).\n\t\t\t\t\t\t    //   You can use a keytab file to authenticate to various remote systems\n\t\t\t\t\t\t    //   using Kerberos 
without entering a password.\n\n\t\tstring\t\tm_OCSNamespace;           // ADH & OCS configurations\n\t\tstring\t\tm_OCSTenantId;\n\t\tstring\t\tm_OCSClientId;\n\t\tstring\t\tm_OCSClientSecret;\n\t\tstring\t\tm_OCSToken;\n\t\tstring\t\tm_authUrl;\n\n\t\tvector<pair<string, string>>\n\t\t\t\tm_staticData;\t// Static data\n\t\t// Errors considered not blocking in the communication with the PI Server\n\t\tstd::vector<std::string>\n\t\t\t\tm_notBlockingErrors;\n\t\t// Per asset DataTypes\n\t\tstd::map<std::string, OMFDataTypes>\n\t\t\t\tm_assetsDataTypes;\n\t\tstring\t\tm_omfversion;\n\t\tbool\t\tm_legacy;\n\t\tstring\t\tm_name;\n\t\tbool\t\tm_connected;\n        bool        m_tracingEnabled;\n\t\tstd::size_t\tm_numBlocks;\n};\n#endif\n"
  },
  {
    "path": "C/plugins/north/OMF/include/omflinkeddata.h",
    "content": "#ifndef OMFLINKEDDATA_H\n#define OMFLINKEDDATA_H\n/*\n * Fledge OSIsoft OMF interface to PI Server.\n *\n * Copyright (c) 2022-2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <map>\n#include <set>\n#include <reading.h>\n#include <OMFHint.h>\n#include <omfbuffer.h>\n#include <linkedlookup.h>\n#include <omferror.h>\n\n/**\n * The OMFLinkedData class.\n * A reading is formatted with OMF specifications using the linked\n * type creation scheme supported for OMF Version 1.2 onwards.\n *\n * This is based on the new mechanism discussed at AVEVA World 2022 and\n * the mechanism is detailed in the Google Doc,\n * https://docs.google.com/document/d/1w0e7VRqX7xzc0lEBLq-sYhgaHE0ABasOa6EC9dJMrMs/edit\n *\n * The principle is to use links to containers in OMF with each container being a single\n * data point in the asset. There are no specific types for the assets, they share a set\n * of base types via these links. This should allow for readings that have different sets\n * of datapoints for each asset.\n *\n * It is also a goal of this mechanism to move away from the need to persist state data\n * between invocations and make the process more robust.\n */\nclass OMFLinkedData\n{\n\tpublic:\n\t\tOMFLinkedData(  std::unordered_map<std::string, LALookup> *linkedAssetState,\n\t\t\t\tconst OMF_ENDPOINT PIServerEndpoint = ENDPOINT_CR) :\n\t\t\t\t\tm_linkedAssetState(linkedAssetState),\n\t\t\t\t\tm_endpoint(PIServerEndpoint),\n       \t\t\t\t\tm_doubleFormat(\"float64\"),\n\t\t\t\t\tm_integerFormat(\"int64\")\n\t\t\t\t\t{};\n\t\tbool\t\tprocessReading(OMFBuffer& payload, bool needDelim, const Reading& reading,\n\t\t\t\tconst std::string& DefaultAFLocation = std::string(),\n\t\t\t\tOMFHints *hints = NULL);\n\t\tvoid\t\tbuildLookup(const std::vector<Reading *>& reading);\n\t\tvoid\t\tsetSendFullStructure(const bool sendFullStructure) {m_sendFullStructure = 
sendFullStructure;};\n\t\tbool\t\tflushContainers(HttpSender& sender, const std::string& path, std::vector<std::pair<std::string, std::string> >& header, OMFError& error, bool *isConnected);\n\t\tstd::size_t\tclearLALookup(const std::vector<Reading *>& reading, std::size_t startIndex, std::size_t numReadings, std::string &delimiter);\n\t\tvoid\t\tsetDelimiter(const std::string &delimiter) {m_delimiter = delimiter;};\n\t\tvoid\t\tsetFormats(const std::string& doubleFormat, const std::string& integerFormat)\n\t\t\t\t{\n\t\t\t\t\tm_doubleFormat = doubleFormat;\n\t\t\t\t\tm_integerFormat = integerFormat;\n\t\t\t\t};\n\t\tvoid\t\tsetStaticData(std::vector<std::pair<std::string, std::string>> *staticData)\n\t\t\t\t{\n\t\t\t\t\tm_staticData = staticData;\n\t\t\t\t};\n\n\tprivate:\n\t\tstd::string\tgetBaseType(Datapoint *dp, const std::string& format);\n\t\tvoid\t\tsendContainer(std::string& link, Datapoint *dp, OMFHints * hints, const std::string& baseType);\n\t\tbool\t\tisTypeSupported(DatapointValue& dataPoint)\n\t\t\t\t{\n\t\t\t\t\tswitch (dataPoint.getType())\n\t\t\t\t\t{\n\t\t\t\t\t\tcase DatapointValue::DatapointTag::T_FLOAT:\n\t\t\t\t\t\tcase DatapointValue::DatapointTag::T_INTEGER:\n\t\t\t\t\t\tcase DatapointValue::DatapointTag::T_STRING:\n\t\t\t\t\t\t\treturn true;\n\t\t\t\t\t\tdefault:\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t};\n\n\tprivate:\n\t\tbool m_sendFullStructure;\n\t\tstd::string m_delimiter;\n\n\t\t/**\n\t\t * The container for this asset and data point has been sent in\n\t\t * this session. The key is the asset followed by the datapoint name\n\t\t * with a delimiter (default: '.') in between. 
The value is the base type used, a\n\t\t * container will be sent if the base type changes.\n\t\t */\n\t\tstd::unordered_map<std::string, LALookup>\t*m_linkedAssetState;\n\n\t\t/**\n\t\t * The endpoint to which we are sending data\n\t\t */\n\t\tOMF_ENDPOINT\t\t\t\tm_endpoint;\n\n\t\t/**\n\t\t * Static data to send to OMF\n\t\t */\n\t\tstd::vector<std::pair<std::string, std::string>> *m_staticData;\n\n\n\t\t/**\n\t\t * The set of containers to flush\n\t\t */\n\t\tstd::string\t\t\t\tm_containers;\n\t\tstd::set<std::string>   m_containerNames;\n\t\tstd::string\t\t\t\tm_doubleFormat;\n\t\tstd::string\t\t\t\tm_integerFormat;\n\n};\n#endif\n"
  },
  {
    "path": "C/plugins/north/OMF/linkdata.cpp",
    "content": "/*\n * Fledge OSIsoft OMF interface to PI Server.\n *\n * Copyright (c) 2022-2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <utility>\n#include <iostream>\n#include <string>\n#include <cstring>\n#include <omf.h>\n#include <piwebapi.h>\n#include <OMFHint.h>\n#include <logger.h>\n#include \"string_utils.h\"\n#include <datapoint.h>\n\n#include <iterator>\n#include <typeinfo>\n#include <algorithm>\n\n#include <omflinkeddata.h>\n#include <omferror.h>\n\nusing namespace std;\n\n/**\n * In order to cut down on the number of string copies made whilst building\n * the OMF message for a reading we reserve a number of bytes in a string and\n * each time we get close to filling the string we reserve more. The value below\n * defines the increment we use to grow the string reservation.\n */\n#define RESERVE_INCREMENT\t100\n\n/**\n * Create a comma-separated string from a string set\n *\n * @param stringSet\tSet of strings\n * @return\t\t\tSet members as a comma-separated string\n */\nstatic std::string StringSetToCSVString(const std::set<std::string> &stringSet)\n{\n\tstd::string stringSetMembers;\n\n\tfor (std::string item : stringSet)\n\t{\n\t\tstringSetMembers.append(item).append(\",\");\n\t}\n\n\tif (stringSetMembers.size() > 0)\n\t{\n\t\tstringSetMembers.resize(stringSetMembers.size() - 1);\t// remove trailing comma\n\t}\n\n\treturn stringSetMembers;\n}\n\n/**\n * Convert a DatapointValue to a string suitable for an OMF Data message\n *\n * @param dp\t\tDatapoint\n * @param format\tOMF data type format to be used for the string\n * @return\t\t\tValue string for the OMF Data message\n */\nstatic std::string DatapointValueToOMFString(Datapoint *dp, std::string &format)\n{\n\t// Coerce floating point numbers to integers if requested.\n\t// OMF will not accept floating point numbers sent to integer Containers so they must be coerced.\n\t// OMF will accept integers sent to floating point 
Containers so no need to explicitly coerce.\n\t// When coercing negative floating point numbers to unsigned integers, set the OMF value to 'null'.\n\tstd::string omfValueString;\n\n\tif (dp->getData().getType() == DatapointValue::T_FLOAT)\n\t{\n\t\tdouble doubleValue = dp->getData().toDouble();\n\n\t\tif (format.compare(0, 6, \"Double\") == 0)\n\t\t{\n\t\t\tomfValueString = dp->getData().toString(); // very common; check this first\n\t\t}\n\t\telse if (format.compare(0, 7, \"Integer\") == 0)\n\t\t{\n\t\t\tomfValueString = std::to_string((long)doubleValue);\n\t\t}\n\t\telse if (format.compare(0, 8, \"UInteger\") == 0)\n\t\t{\n\t\t\tif (doubleValue < 0.0)\n\t\t\t{\n\t\t\t\tomfValueString = std::string(\"null\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tomfValueString = std::to_string((long)doubleValue);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tomfValueString = dp->getData().toString();\n\t\t}\n\t}\n\telse if ((dp->getData().getType() == DatapointValue::T_INTEGER) && (format.compare(0, 8, \"UInteger\") == 0))\n\t{\n\t\tif (dp->getData().toInt() < 0)\n\t\t{\n\t\t\tomfValueString = std::string(\"null\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tomfValueString = dp->getData().toString();\n\t\t}\n\t}\n\telse\n\t{\n\t\tomfValueString = dp->getData().toString();\n\t}\n\n\treturn omfValueString;\n}\n\n/**\n * Generate the OMF data message for a single reading\n *\n * @param payload\t    The buffer into which to populate the payload\n * @param delim\t\t    Add a delimiter before outputting anything\n * @param reading           Reading for which the OMF message must be generated\n * @param AFHierarchyPrefix Unused at the current stage\n * @param hints             OMF hints for the specific reading for changing the behaviour of the operation\n * @return                  true if any data was written to the payload\n */\nbool  OMFLinkedData::processReading(OMFBuffer& payload, bool delim, const Reading& reading, const string&  AFHierarchyPrefix, OMFHints *hints)\n{\n\tbool rval = false;\n\n\tstring assetName = 
reading.getAssetName();\n\tstring originalAssetName = OMF::ApplyPIServerNamingRulesObj(assetName, NULL);\n\n\t// Apply any TagName hints to modify the containerid\n\tif (hints)\n\t{\n\t\tconst std::vector<OMFHint *> omfHints = hints->getHints();\n\t\tfor (auto it = omfHints.cbegin(); it != omfHints.cend(); it++)\n\t\t{\n\t\t\tif (typeid(**it) == typeid(OMFTagNameHint))\n\t\t\t{\n\t\t\t\tstring hintValue = (*it)->getHint();\n\t\t\t\tLogger::getLogger()->debug(\"Using OMF TagName hint: %s for asset %s\",\n\t\t\t\t\t       hintValue.c_str(), assetName.c_str());\n\t\t\t\tassetName = hintValue;\n\t\t\t}\n\t\t\tif (typeid(**it) == typeid(OMFTagHint))\n\t\t\t{\n\t\t\t\tstring hintValue = (*it)->getHint();\n\t\t\t\tLogger::getLogger()->debug(\"Using OMF Tag hint: %s for asset %s\",\n\t\t\t\t\t       hintValue.c_str(), assetName.c_str());\n\t\t\t\tassetName = hintValue;\n\t\t\t}\n\t\t}\n\t}\n\n\t// Get reading data\n\tconst vector<Datapoint*> data = reading.getReadingData();\n\tvector<string> skippedDatapoints;\n\n\tLogger::getLogger()->debug(\"Processing %s (%s) using Linked Types\", assetName.c_str(), DataPointNamesAsString(reading).c_str());\n\n\tassetName = OMF::ApplyPIServerNamingRulesObj(assetName, NULL);\n\n\tbool needDelim = delim;\n\tauto assetLookup = m_linkedAssetState->find(originalAssetName + m_delimiter);\n\tif (assetLookup == m_linkedAssetState->end())\n\t{\n\t\t// Panic: the asset lookup entry was never created\n\t\tLogger::getLogger()->error(\"Internal error: No asset lookup item for %s.\", assetName.c_str());\n\t\treturn false;\n\t}\n\tif (m_sendFullStructure && assetLookup->second.assetState(assetName) == false)\n\t{\n\t\tif (needDelim)\n\t\t\tpayload.append(',');\n\t\t// Send the data message to create the asset instance\n\t\tpayload.append(\"{ \\\"typeid\\\":\\\"FledgeAsset\\\", \\\"values\\\":[ { \\\"AssetId\\\":\\\"\");\n\t\tpayload.append(assetName + \"\\\",\\\"Name\\\":\\\"\");\n\t\tpayload.append(assetName + \"\\\"\");\n\n\t\tfor (std::pair<std::string, 
std::string> &sData : *m_staticData)\n\t\t{\n\t\t\tpayload.append(\",\\\"\");\n\t\t\tpayload.append(sData.first);\n\t\t\tpayload.append(\"\\\":\\\"\");\n\t\t\tpayload.append(sData.second);\n\t\t\tpayload.append('\\\"');\n\t\t}\n\n\t\tpayload.append(\"} ] }\");\n\t\trval = true;\n\t\tneedDelim = true;\n\t\tassetLookup->second.assetSent(assetName);\n\t}\n\n\t/**\n\t * This loop creates the data values for each of the datapoints in the\n\t * reading.\n\t */\n\tfor (vector<Datapoint*>::const_iterator it = data.begin(); it != data.end(); ++it)\n\t{\n\t\tDatapoint *dp = *it;\n\t\tstring dpName = dp->getName();\n\t\tif (dpName.compare(OMF_HINT) == 0)\n\t\t{\n\t\t\t// Don't send the OMF Hint to the PI Server\n\t\t\tcontinue;\n\t\t}\n\t\tdpName = OMF::ApplyPIServerNamingRulesObj(dpName, NULL);\n\t\tif (!isTypeSupported(dp->getData()))\n\t\t{\n\t\t\tskippedDatapoints.push_back(dpName);\n\t\t\tcontinue;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring format;\n\t\t\tstring tagNameHintRaw, tagNameHint;\n\t\t\tbool tagNameHintchanged = false;\n\t\t\tif (hints)\n\t\t\t{\n\t\t\t\tconst vector<OMFHint *> omfHints = hints->getHints(dpName);\n\t\t\t\tfor (auto hit = omfHints.cbegin(); hit != omfHints.cend(); hit++)\n\t\t\t\t{\n\t\t\t\t\tif (typeid(**hit) == typeid(OMFNumberHint))\n\t\t\t\t\t{\n\t\t\t\t\t\tformat = (*hit)->getHint();\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (typeid(**hit) == typeid(OMFIntegerHint))\n\t\t\t\t\t{\n\t\t\t\t\t\tformat = (*hit)->getHint();\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (typeid(**hit) == typeid(OMFTagNameDatapointHint))\n\t\t\t\t\t{\n\t\t\t\t\t\ttagNameHintRaw = (*hit)->getHint();\n\t\t\t\t\t\ttagNameHint = OMF::ApplyPIServerNamingRulesObj(tagNameHintRaw, &tagNameHintchanged);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// Create the link for the asset if not already created\n\t\t\tstring link = tagNameHint.empty() ? 
assetName + m_delimiter + dpName : tagNameHint;\n\t\t\tstring dpLookupName = originalAssetName + m_delimiter + dpName;\n\t\t\tauto dpLookup = m_linkedAssetState->find(dpLookupName);\n\n\t\t\tstring baseType = getBaseType(dp, format);\n\t\t\tif (baseType.empty())\n\t\t\t{\n\t\t\t\t// Skip the datapoint.\n\t\t\t\t// Data type is not supported or the OMFHint is incorrect.\n\t\t\t\t// If 'format' is non-empty, a numeric or integer OMFHint was applied. The format string must be invalid.\n\t\t\t\tif (format.empty())\n\t\t\t\t{\n\t\t\t\t\tskippedDatapoints.push_back(dpName);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tskippedDatapoints.push_back(dpName + \"[\" + format + \"]\");\n\t\t\t\t}\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif (needDelim)\n\t\t\t{\n\t\t\t\tpayload.append(',');\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tneedDelim = true;\n\t\t\t}\n\n\t\t\tif (dpLookup == m_linkedAssetState->end())\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Trying to send a link for a datapoint for which we have not created a base type\");\n\t\t\t}\n\t\t\telse if (dpLookup->second.containerState(assetName) == false)\n\t\t\t{\n\t\t\t\tsendContainer(link, dp, hints, baseType);\n\t\t\t\tdpLookup->second.containerSent(assetName, baseType);\n\n\t\t\t\tif (tagNameHintchanged)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"Datapoint %s.%s tagName Hint %s is not a valid PI name. 
Changed to %s\",\n\t\t\t\t\t\tassetName.c_str(),\n\t\t\t\t\t\tdpName.c_str(),\n\t\t\t\t\t\ttagNameHintRaw.c_str(),\n\t\t\t\t\t\ttagNameHint.c_str());\n\t\t\t\t}\n\t\t\t}\n\t\t\telse if (baseType.compare(dpLookup->second.getBaseTypeString()) != 0)\n\t\t\t{\n\t\t\t\t// Land here if the integer or number OMFHint is different from the hint in place\n\t\t\t\t// when the Container was first created or confirmed in the current run.\n\t\t\t\t// The only way to store data in this case is to reset the format to its initial value.\n\t\t\t\t// Attempting to apply the requested format will cause data to be discarded because the\n\t\t\t\t// requested format is not defined for the Container.\n\t\t\t\tstring containerFormat = dpLookup->second.getBaseTypeString();\n\n\t\t\t\tLogger::getLogger()->debug(\"%s: Requested format '%s' does not match the Container format '%s'. Resetting to '%s'.\",\n\t\t\t\t\tlink.c_str(), baseType.c_str(), containerFormat.c_str(), containerFormat.c_str());\n\n\t\t\t\tbaseType = containerFormat;\n\t\t\t}\n\n\t\t\tif (m_sendFullStructure && dpLookup->second.linkState(assetName) == false)\n\t\t\t{\n\t\t\t\tpayload.append(\"{ \\\"typeid\\\":\\\"__Link\\\",\");\n\t\t\t\tpayload.append(\"\\\"values\\\":[ { \\\"source\\\" : {\");\n\t\t\t\tpayload.append(\"\\\"typeid\\\": \\\"FledgeAsset\\\",\");\n\t\t\t\tpayload.append(\"\\\"index\\\":\\\"\" + assetName);\n\t\t\t\tpayload.append(\"\\\" }, \\\"target\\\" : {\");\n\t\t\t\tpayload.append(\"\\\"containerid\\\" : \\\"\");\n\t\t\t\tpayload.append(link);\n\t\t\t\tpayload.append(\"\\\" } } ] },\");\n\n\t\t\t\trval = true;\n\t\t\t\tdpLookup->second.linkSent(assetName);\n\t\t\t}\n\n\t\t\t// Convert reading data into the OMF JSON string\n\t\t\tpayload.append(\"{\\\"containerid\\\": \\\"\" + link);\n\t\t\tpayload.append(\"\\\", \\\"values\\\": [{\");\n\n\t\t\t// Base type we are using for this data point\n\t\t\tpayload.append(\"\\\"\" + baseType + \"\\\": \");\n\n\t\t\t// Add datapoint Value as a string to the 
payload.\n\t\t\tpayload.append(DatapointValueToOMFString(dp, baseType));\n\t\t\tpayload.append(\", \");\n\n\t\t\t// Append Z to getAssetDateTime(FMT_STANDARD)\n\t\t\tpayload.append(\"\\\"Time\\\": \\\"\" + reading.getAssetDateUserTime(Reading::FMT_STANDARD) + \"Z\" + \"\\\"\");\n\t\t\tpayload.append(\"} ] }\");\n\t\t\trval = true;\n\t\t}\n\t}\n\tif (skippedDatapoints.size() > 0)\n\t{\n\t\tstring points;\n\t\tfor (string& dp : skippedDatapoints)\n\t\t{\n\t\t\tif (!points.empty())\n\t\t\t\tpoints.append(\", \");\n\t\t\tpoints.append(dp);\n\t\t}\n\t\tauto pos = points.find_last_of(\",\");\n\t\tif (pos != string::npos)\n\t\t{\n\t\t\tpoints.replace(pos, 1, \" and\");\n\t\t}\n\t\tstring assetName = reading.getAssetName();\n\t\tstring msg = \"The asset \" + assetName + \" had a number of datapoints (\" + points + \") that are not supported by OMF and have been omitted\";\n\t\tOMF::reportAsset(assetName, \"warn\", msg);\n\t}\n\treturn rval;\n}\n\n/**\n * If the entries are needed in the lookup table for this block of readings then create them\n *\n * @param readings\tA block of readings to process\n */\nvoid OMFLinkedData::buildLookup(const vector<Reading *>& readings)\n{\n\n\tfor (const Reading *reading : readings)\n\t{\n\t\tstring assetName = reading->getAssetName();\n\t\tassetName = OMF::ApplyPIServerNamingRulesObj(assetName, NULL);\n\n\t\t// Apply any TagName hints to modify the containerid\n\t\tLALookup empty;\n\n\t\tstring assetKey = assetName + m_delimiter;\n\t\tif (m_linkedAssetState->count(assetKey) == 0)\n\t\t\tm_linkedAssetState->insert(pair<string, LALookup>(assetKey, empty));\n\n\t\t// Get reading data\n\t\tconst vector<Datapoint*> data = reading->getReadingData();\n\n\t\t/**\n\t\t * This loop creates the data values for each of the datapoints in the\n\t\t * reading.\n\t\t */\n\t\tfor (vector<Datapoint*>::const_iterator it = data.begin(); it != data.end(); ++it)\n\t\t{\n\t\t\tDatapoint *dp = *it;\n\t\t\tstring dpName = dp->getName();\n\t\t\tif 
(dpName.compare(OMF_HINT) == 0)\n\t\t\t{\n\t\t\t\t// Don't send the OMF Hint to the PI Server\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tdpName = OMF::ApplyPIServerNamingRulesObj(dpName, NULL);\n\t\t\tif (!isTypeSupported(dp->getData()))\n\t\t\t{\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tstring link = assetName + m_delimiter + dpName;\n\t\t\tif (m_linkedAssetState->count(link) == 0)\n\t\t\t\tm_linkedAssetState->insert(pair<string, LALookup>(link, empty));\n\t\t}\n\t}\n}\n\n/**\n * Calculate the base type we need to link the container\n *\n * @param dp\t\tThe datapoint to process\n * @param format\tThe format to use based on a hint, this may be empty\n * @return\tThe base type linked in the container\n */\nstring OMFLinkedData::getBaseType(Datapoint *dp, const string& format)\n{\n\tstring baseType;\n\tswitch (dp->getData().getType())\n\t{\n\t\tcase DatapointValue::T_STRING:\n\t\t\tbaseType = \"String\";\n\t\t\tbreak;\n\t\tcase DatapointValue::T_INTEGER:\n\t\t{\n\t\t\tstring intFormat;\n\t\t\tif (!format.empty())\n\t\t\t\tintFormat = format;\n\t\t\telse\n\t\t\t\tintFormat = m_integerFormat;\n\t\t\tif (intFormat.compare(\"int64\") == 0)\n\t\t\t\tbaseType = \"Integer64\";\n\t\t\telse if (intFormat.compare(\"int32\") == 0)\n\t\t\t\tbaseType = \"Integer32\";\n\t\t\telse if (intFormat.compare(\"int16\") == 0)\n\t\t\t\tbaseType = \"Integer16\";\n\t\t\telse if (intFormat.compare(\"uint64\") == 0)\n\t\t\t\tbaseType = \"UInteger64\";\n\t\t\telse if (intFormat.compare(\"uint32\") == 0)\n\t\t\t\tbaseType = \"UInteger32\";\n\t\t\telse if (intFormat.compare(\"uint16\") == 0)\n\t\t\t\tbaseType = \"UInteger16\";\n\t\t\telse if (intFormat.compare(\"float64\") == 0)\n\t\t\t\tbaseType = \"Double64\";\n\t\t\telse if (intFormat.compare(\"float32\") == 0)\n\t\t\t\tbaseType = \"Double32\";\n\t\t\tbreak;\n\t\t}\n\t\tcase DatapointValue::T_FLOAT:\n\t\t{\n\t\t\tstring doubleFormat;\n\t\t\tif (!format.empty())\n\t\t\t\tdoubleFormat = format;\n\t\t\telse\n\t\t\t\tdoubleFormat = m_doubleFormat;\n\t\t\tif 
(doubleFormat.compare(\"float64\") == 0)\n\t\t\t\tbaseType = \"Double64\";\n\t\t\telse if (doubleFormat.compare(\"float32\") == 0)\n\t\t\t\tbaseType = \"Double32\";\n\t\t\telse if (doubleFormat.compare(\"int64\") == 0)\n\t\t\t\tbaseType = \"Integer64\";\n\t\t\telse if (doubleFormat.compare(\"int32\") == 0)\n\t\t\t\tbaseType = \"Integer32\";\n\t\t\telse if (doubleFormat.compare(\"int16\") == 0)\n\t\t\t\tbaseType = \"Integer16\";\n\t\t\telse if (doubleFormat.compare(\"uint64\") == 0)\n\t\t\t\tbaseType = \"UInteger64\";\n\t\t\telse if (doubleFormat.compare(\"uint32\") == 0)\n\t\t\t\tbaseType = \"UInteger32\";\n\t\t\telse if (doubleFormat.compare(\"uint16\") == 0)\n\t\t\t\tbaseType = \"UInteger16\";\n\t\t\tbreak;\n\t\t}\n\t\tdefault:\n\t\t\tLogger::getLogger()->error(\"Unsupported type %s for the data point %s\", dp->getData().getTypeStr().c_str(),\n\t\t\t\t\tdp->getName().c_str());\n\t\t\tbreak;\n\t}\n\n\treturn baseType;\n}\n\n/**\n * Create a container message for the linked datapoint\n *\n * @param linkName\tThe name to use for the container\n * @param dp\t\tThe datapoint to process\n * @param hints\t\tHints related to this asset\n * @param baseType\tThe baseType we will use\n */\nvoid OMFLinkedData::sendContainer(string& linkName, Datapoint *dp, OMFHints * hints, const string& baseType)\n{\n\tstring dataSource = \"Fledge\";\n\tstring uom, minimum, maximum, interpolation;\n\tbool propertyOverrides = false;\n\n\n\tif (hints)\n\t{\n\t\tconst vector<OMFHint *> omfHints = hints->getHints();\n\t\tfor (auto it = omfHints.cbegin(); it != omfHints.end(); it++)\n\t\t{\n\t\t\tif (typeid(**it) == typeid(OMFSourceHint))\n\t\t\t{\n\t\t\t\tdataSource = (*it)->getHint();\n\t\t\t}\n\t\t}\n\n\t\tconst vector<OMFHint *> dpHints = hints->getHints(dp->getName());\n\t\tfor (auto it = dpHints.cbegin(); it != dpHints.end(); it++)\n\t\t{\n\t\t\tif (typeid(**it) == typeid(OMFSourceHint))\n\t\t\t{\n\t\t\t\tdataSource = (*it)->getHint();\n\t\t\t}\n\t\t\tif (typeid(**it) == 
typeid(OMFUOMHint))\n\t\t\t{\n\t\t\t\tuom = (*it)->getHint();\n\t\t\t\tpropertyOverrides = true;\n\t\t\t}\n\t\t\tif (typeid(**it) == typeid(OMFMinimumHint))\n\t\t\t{\n\t\t\t\tminimum = (*it)->getHint();\n\t\t\t\tpropertyOverrides = true;\n\t\t\t}\n\t\t\tif (typeid(**it) == typeid(OMFMaximumHint))\n\t\t\t{\n\t\t\t\tmaximum = (*it)->getHint();\n\t\t\t\tpropertyOverrides = true;\n\t\t\t}\n\t\t\tif (typeid(**it) == typeid(OMFInterpolationHint))\n\t\t\t{\n\t\t\t\tinterpolation = (*it)->getHint();\n\t\t\t\tpropertyOverrides = true;\n\t\t\t}\n\t\t}\n\t}\n\t\n\tstring container = \"{ \\\"id\\\" : \\\"\" + linkName;\n\tcontainer += \"\\\", \\\"typeid\\\" : \\\"\";\n\tcontainer += baseType;\n\tcontainer += \"\\\", \\\"name\\\" : \\\"\";\n\tstring dpName = OMF::ApplyPIServerNamingRulesObj(dp->getName(), NULL);\n\tcontainer += dpName;\n\tcontainer += \"\\\", \\\"datasource\\\" : \\\"\" + dataSource + \"\\\"\";\n\n\tif (propertyOverrides)\n\t{\n\t\tcontainer += \", \\\"propertyoverrides\\\" : { \\\"\";\n\t\tcontainer += baseType;\n\t\tcontainer += \"\\\" : {\";\n\t\tstring delim = \"\";\n\t\tif (!uom.empty())\n\t\t{\n\t\t\tdelim = \",\";\n\t\t\tcontainer += \"\\\"uom\\\" : \\\"\";\n\t\t\tcontainer += uom;\n\t\t\tcontainer += \"\\\"\";\n\t\t}\n\t\tif (!minimum.empty())\n\t\t{\n\t\t\tcontainer += delim;\n\t\t\tdelim = \",\";\n\t\t\tcontainer += \"\\\"minimum\\\" : \";\n\t\t\tcontainer += minimum;\n\t\t}\n\t\tif (!maximum.empty())\n\t\t{\n\t\t\tcontainer += delim;\n\t\t\tdelim = \",\";\n\t\t\tcontainer += \"\\\"maximum\\\" : \";\n\t\t\tcontainer += maximum;\n\t\t}\n\t\tif (!interpolation.empty())\n\t\t{\n\t\t\tcontainer += delim;\n\t\t\tdelim = \",\";\n\t\t\tcontainer += \"\\\"interpolation\\\" : \\\"\";\n\t\t\tcontainer += interpolation;\n\t\t\tcontainer += \"\\\"\";\n\t\t}\n\t\tcontainer += \"} }\";\n\t}\n\tcontainer += \"}\";\n\tm_containerNames.insert(linkName);\n\n\tLogger::getLogger()->debug(\"Built container: %s\", container.c_str());\n\n\tif (! 
m_containers.empty())\n\t\tm_containers += \",\";\n\tm_containers.append(container);\n}\n\n/**\n * Flush the container definitions that have been built up\n *\n * @param sender\tHTTP client\n * @param path\t\tREST server URL\n * @param header\tREST call headers\n * @param error\t\tOMFError object with parsed PI Web API HTTP response\n * @param isConnected Set to false if REST call shows loss of connection to PI\n * @return\t\t\ttrue if the containers were successfully flushed\n */\nbool OMFLinkedData::flushContainers(HttpSender& sender, const string& path, vector<pair<string, string> >& header, OMFError& error, bool *isConnected)\n{\n\tif (m_containers.empty())\n\t\treturn true;\t\t// Nothing to flush\n\tstring payload = \"[\" + m_containers + \"]\";\n\tm_containers = \"\";\n\n\tLogger::getLogger()->debug(\"Flush container information: %s\", payload.c_str());\n\n\t// Write to OMF endpoint\n\ttry\n\t{\n\t\tint res = sender.sendRequest(\"POST\",\n\t\t\t\t\t   path,\n\t\t\t\t\t   header,\n\t\t\t\t\t   payload);\n\t\tif  ( ! (res >= 200 && res <= 299) )\n\t\t{\n\t\t\tLogger::getLogger()->error(\"An error occurred sending the container data. 
HTTP code %d - %s %s\",\n\t\t\t\t\t\t   res,\n\t\t\t\t\t\t   sender.getHostPort().c_str(),\n\t\t\t\t\t\t   sender.getHTTPResponse().c_str());\n\t\t\tif (!m_containerNames.empty())\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Containers attempted: %s\", StringSetToCSVString(m_containerNames).c_str());\n\t\t\t}\n\t\t\treturn false;\n\t\t}\n\t\telse if (res == 201)\n\t\t{\n\t\t\tLogger::getLogger()->info(\"Containers created: %s\", StringSetToCSVString(m_containerNames).c_str());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->info(\"Containers confirmed: %s\", StringSetToCSVString(m_containerNames).c_str());\n\t\t}\n\t}\n\t// Exception raised for HTTP 400 Bad Request\n\tcatch (const BadRequest& e)\n\t{\n\t\terror.setFromHttpResponse(sender.getHTTPResponse());\n\t\tif (error.Log(\"The OMF endpoint reported a Bad Request when sending Containers\") == false)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"HTTP 400: Bad Request when sending Containers. Exception: %s\", e.what());\n\t\t}\n\n\t\tif (!m_containerNames.empty())\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"Containers attempted: %s\", StringSetToCSVString(m_containerNames).c_str());\n\t\t}\n\n\t\treturn false;\n\t}\n\tcatch (const Conflict& e)\n\t{\n\t\terror.setFromHttpResponse(sender.getHTTPResponse());\n\t\tif (error.Log(\"The OMF endpoint reported a Conflict when sending Containers\") == false)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"HTTP 409: Conflict when sending Containers. 
Exception: %s\", e.what());\n\t\t}\n\n\t\tif (!m_containerNames.empty())\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"Containers attempted: %s\", StringSetToCSVString(m_containerNames).c_str());\n\t\t}\n\t\treturn false;\n\t}\n\tcatch (const std::exception &e)\n\t{\n\t\terror.setFromHttpResponse(sender.getHTTPResponse());\n\t\tif (error.hasMessages())\n\t\t{\n\t\t\terror.Log(\"An exception occurred when sending container information to the OMF endpoint\");\n\t\t\t\n\t\t\tif (error.getHttpCode() == 503)\n\t\t\t{\n\t\t\t\t*isConnected = false;\n\t\t\t\tLogger::getLogger()->warn(\"HTTP 503: REST service unavailable\");\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tPIWebAPI piwebapi;\n\t\t\tstd::string errorMessage = piwebapi.errorMessageHandler(e.what());\n\t\t\tLogger::getLogger()->error(\"An exception occurred when sending container information to the OMF endpoint, %s - %s %s\",\n\t\t\t\t\t\t\t\t\t   errorMessage.c_str(),\n\t\t\t\t\t\t\t\t\t   sender.getHostPort().c_str(),\n\t\t\t\t\t\t\t\t\t   path.c_str());\n\t\t}\n\n\t\tif (!m_containerNames.empty())\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"Containers attempted: %s\", StringSetToCSVString(m_containerNames).c_str());\n\t\t}\n\n\t\t// Check for any error messages that indicate a loss of connection\n\t\tint i = 0;\n\t\twhile (strlen(noConnectionErrorMessages[i]))\n\t\t{\n\t\t\tif (0 == strncmp(e.what(), noConnectionErrorMessages[i], strlen(noConnectionErrorMessages[i])))\n\t\t\t{\n\t\t\t\t*isConnected = false;\n\t\t\t\tLogger::getLogger()->warn(\"Connection to the destination data archive has been lost\");\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ti++;\n\t\t}\n\n\t\treturn false;\n\t}\n\treturn true;\n}\n\n/**\n * Clear selected Reading and Datapoint information from the linked asset state map\n *\n * @param readings\t\tVector of Readings\n * @param startIndex\tStart index into Readings\n * @param numReadings\tNumber of Readings to clear\n * @return\t\t\t\tNumber of asset and datapoint entries cleared\n */\nstd::size_t 
OMFLinkedData::clearLALookup(const std::vector<Reading *> &readings, std::size_t startIndex, std::size_t numReadings, std::string &delimiter)\n{\n\tstd::size_t numCleared = 0;\n\tLALookup empty;\n\n\tfor (std::size_t i = startIndex; (i < numReadings) && (i < readings.size()); i++)\n\t{\n\t\tReading *reading = readings[i];\n\t\tstd::string assetNameDelim = OMF::ApplyPIServerNamingRulesObj(reading->getAssetName(), NULL) + delimiter;\n\n\t\t// Check if the asset key is present in the linked asset state map\n\t\tauto assetIterator = m_linkedAssetState->find(assetNameDelim);\n\t\tif (assetIterator != m_linkedAssetState->end())\n\t\t{\n\t\t\tassetIterator->second = empty;\n\t\t\tnumCleared++;\n\t\t}\n\n\t\t// Check if datapoint keys are present in the linked asset state map\n\t\tfor (Datapoint *datapoint : reading->getReadingData())\n\t\t{\n\t\t\tstd::string dpName = OMF::ApplyPIServerNamingRulesObj(datapoint->getName(), NULL);\n\t\t\tauto datapointIterator = m_linkedAssetState->find(assetNameDelim + dpName);\n\t\t\tif (datapointIterator != m_linkedAssetState->end())\n\t\t\t{\n\t\t\t\tdatapointIterator->second = empty;\n\t\t\t\tnumCleared++;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn numCleared;\n}\n\n/**\n * Set the base type by passing the string of the base type\n */\nvoid LALookup::setBaseType(const string& baseType)\n{\n\tif (baseType.compare(\"Double64\") == 0)\n\t\tm_baseType = OMFBT_DOUBLE64;\n\telse if (baseType.compare(\"Double32\") == 0)\n\t\tm_baseType = OMFBT_DOUBLE32;\n\telse if (baseType.compare(\"Integer16\") == 0)\n\t\tm_baseType = OMFBT_INTEGER16;\n\telse if (baseType.compare(\"Integer32\") == 0)\n\t\tm_baseType = OMFBT_INTEGER32;\n\telse if (baseType.compare(\"Integer64\") == 0)\n\t\tm_baseType = OMFBT_INTEGER64;\n\telse if (baseType.compare(\"UInteger16\") == 0)\n\t\tm_baseType = OMFBT_UINTEGER16;\n\telse if (baseType.compare(\"UInteger32\") == 0)\n\t\tm_baseType = OMFBT_UINTEGER32;\n\telse if (baseType.compare(\"UInteger64\") == 0)\n\t\tm_baseType = 
OMFBT_UINTEGER64;\n\telse if (baseType.compare(\"String\") == 0)\n\t\tm_baseType = OMFBT_STRING;\n\telse if (baseType.compare(\"FledgeAsset\") == 0)\n\t\tm_baseType = OMFBT_FLEDGEASSET;\n\telse\n\t\tLogger::getLogger()->fatal(\"Unable to map base type '%s'\", baseType.c_str());\n}\n\n/**\n * The container has been sent with the specific base type\n *\n * @param tagName\tThe name of the tag we are using\n * @param baseType\tThe baseType we resolve to\n */\nvoid LALookup::containerSent(const std::string& tagName, OMFBaseType baseType)\n{\n\tif (m_tagName.compare(tagName))\n\t{\n\t\t// Force a new Link and AF Link to be sent for the new tag name\n\t\tm_sentState &= ~(LAL_LINK_SENT | LAL_AFLINK_SENT);\n\t}\n\tm_baseType = baseType;\n\tm_tagName = tagName;\n\tm_sentState |= LAL_CONTAINER_SENT;\n}\n\n/**\n * The container has been sent with the specific base type\n *\n * @param tagName\tThe name of the tag we are using\n * @param baseType\tThe baseType we resolve to\n */\nvoid LALookup::containerSent(const std::string& tagName, const std::string& baseType)\n{\n\tsetBaseType(baseType);\n\tif (m_tagName.compare(tagName))\n\t{\n\t\t// Force a new Link and AF Link to be sent for the new tag name\n\t\tm_sentState &= ~(LAL_LINK_SENT | LAL_AFLINK_SENT);\n\t}\n\tm_tagName = tagName;\n\tm_sentState |= LAL_CONTAINER_SENT;\n}\n\n/**\n * Get a string representation of the base type that was sent\n */\nstring LALookup::getBaseTypeString()\n{\n\tswitch (m_baseType)\n\t{\n\t\tcase OMFBT_UNKNOWN:\n\t\t\treturn \"Unknown\";\n\t\tcase OMFBT_DOUBLE64:\n\t\t\treturn \"Double64\";\n\t\tcase OMFBT_DOUBLE32:\n\t\t\treturn \"Double32\";\n\t\tcase OMFBT_INTEGER16:\n\t\t\treturn \"Integer16\";\n\t\tcase OMFBT_INTEGER32:\n\t\t\treturn \"Integer32\";\n\t\tcase OMFBT_INTEGER64:\n\t\t\treturn \"Integer64\";\n\t\tcase OMFBT_UINTEGER16:\n\t\t\treturn \"UInteger16\";\n\t\tcase OMFBT_UINTEGER32:\n\t\t\treturn \"UInteger32\";\n\t\tcase OMFBT_UINTEGER64:\n\t\t\treturn \"UInteger64\";\n\t\tcase 
OMFBT_STRING:\n\t\t\treturn \"String\";\n\t\tdefault:\n\t\t\treturn \"Unknown\";\n\t}\n}\n"
  },
  {
    "path": "C/plugins/north/OMF/ocs.cpp",
    "content": "/*\n * Fledge OSIsoft ADH and OCS integration.\n * Implements the integration for the specific functionalities exposed by ADH and OCS\n *\n * Copyright (c) 2020-2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli\n */\n\n#include <string>\n#include <vector>\n#include <utility>\n\n#include <ocs.h>\n#include <string_utils.h>\n#include <logger.h>\n#include <simple_https.h>\n#include <rapidjson/document.h>\n#include \"rapidjson/error/en.h\"\n\nusing namespace std;\nusing namespace rapidjson;\n\nOCS::OCS(const std::string &authorizationUrl) : m_authUrl(authorizationUrl), m_nextAuthentication(std::chrono::steady_clock::time_point())\n{\n}\n\n// Destructor\nOCS::~OCS()\n{\n}\n\n/**\n * Extracts the OCS token from the JSON returned by the OCS API\n *\n * @param response  JSON message generated by the OCS API containing the OCS token\n *\n */\nvoid OCS::extractToken(const string &response)\n{\n\tDocument JSon;\n\n\tParseResult ok = JSon.Parse(response.c_str());\n\tif (!ok)\n\t{\n\t\tLogger::getLogger()->error(\"OCS token extract, invalid json - HTTP response :%s:\", response.c_str());\n\t}\n\telse\n\t{\n\t\tif (JSon.HasMember(\"access_token\"))\n\t\t{\n\t\t\tm_token = JSon[\"access_token\"].GetString();\n\t\t}\n\n\t\tif (JSon.HasMember(\"expires_in\"))\n\t\t{\n\t\t\tm_expiresIn = JSon[\"expires_in\"].GetUint();\n\t\t\tLogger::getLogger()->debug(\"ADH token expires in %u seconds\", m_expiresIn);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_expiresIn = 0;\n\t\t\tLogger::getLogger()->warn(\"ADH authentication response does not include 'expires_in'\");\n\t\t}\n\t}\n}\n\n/**\n * Calls the OCS/ADH API to retrieve the authentication token related to the the clientId and clientSecret\n *\n * @param clientId      Client Id code assigned by OCS/ADH using its GUI to the specific connection\n * @param clientSecret  Client Secret code assigned by OCS/ADH using its GUI to the specific connection\n * @param logMessage    If true, log 
error messages (default: true)\n * @return              HTTP Code\n *\n */\nint OCS::retrieveToken(const string& clientId, const string& clientSecret, bool logMessage)\n{\n\tstring response;\n\tstring payload;\n\n\tHttpSender *endPoint;\n\tvector<pair<string, string>> header;\n\tint httpCode = 400;\n\n\tendPoint = new SimpleHttps(m_authUrl,\n\t\t\t\t\t\t\t   TIMEOUT_CONNECT,\n\t\t\t\t\t\t\t   TIMEOUT_REQUEST,\n\t\t\t\t\t\t\t   RETRY_SLEEP_TIME,\n\t\t\t\t\t\t\t   0);\n\n\theader.push_back( std::make_pair(\"Content-Type\", \"application/x-www-form-urlencoded\"));\n\theader.push_back( std::make_pair(\"Accept\", \"text/plain\"));\n\n\tpayload =  PAYLOAD_RETRIEVE_TOKEN;\n\n\tStringReplace(payload, \"CLIENT_ID_PLACEHOLDER\",        urlEncode(clientId));\n\tStringReplace(payload, \"CLIENT_SECRET_ID_PLACEHOLDER\", urlEncode(clientSecret));\n\n\t// Anonymous auth\n\tstring authMethod = \"a\";\n\tendPoint->setAuthMethod (authMethod);\n\n\ttry\n\t{\n\t\thttpCode = endPoint->sendRequest(\"POST\",\n\t\t\t\t\t\t\t\t\t\t URL_RETRIEVE_TOKEN,\n\t\t\t\t\t\t\t\t\t\t header,\n\t\t\t\t\t\t\t\t\t\t payload);\n\n\t\tresponse = endPoint->getHTTPResponse();\n\n\t\tif (httpCode >= 200 && httpCode <= 399)\n\t\t{\n\t\t\textractToken(response);\n\t\t\tLogger::getLogger()->debug(\"ADH authentication token of %u characters retrieved\", m_token.size());\n\t\t}\n\t\telse if (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Error in retrieving the authentication token from ADH - http :%d: :%s: \", httpCode, response.c_str());\n\t\t}\n\t}\n\tcatch (const Unauthorized &e)\n\t{\n\t\t// Log authentication failures regardless of 'logMessage'\n\t\tLogger::getLogger()->error(\"Unable to authenticate with AVEVA Data Hub\");\n\t\thttpCode = 401;\n\t}\n\tcatch (const exception &ex)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Error in retrieving the authentication token from ADH - error :%s: \", ex.what());\n\t\t}\n\t}\n\n\tdelete endPoint;\n\n\treturn httpCode;\n}\n\n/**\n * Calls 
the ADH API to retrieve the authentication token\n *\n * @param    clientId\t\tClient Id code assigned by OCS/ADH to the specific connection\n * @param    clientSecret\tClient Secret code assigned by OCS/ADH to the specific connection\n * @param    logMessage\tIf true, log error messages (default: true)\n * @return   token      Authorization token\n */\nstring OCS::OCSRetrieveAuthToken(const string &clientId, const string &clientSecret, bool logMessage)\n{\n\tstd::chrono::steady_clock::time_point now = std::chrono::steady_clock::now();\n\n\tif (now >= m_nextAuthentication)\n\t{\n\t\tint httpCode = retrieveToken(clientId, clientSecret, logMessage);\n\t\tif (httpCode >= 200 && httpCode <= 399)\n\t\t{\n\t\t\t// Set the next authentication check time only if this attempt was successful.\n\t\t\t// Otherwise, leave the next authentication check as-is so retry will be immediate.\n\t\t\t// Authentication check time is half the expiry time to avoid being near the deadline.\n\t\t\tm_nextAuthentication = now + std::chrono::seconds(m_expiresIn / 2);\n\t\t}\n\t}\n\n\treturn m_token;\n}\n"
  },
  {
    "path": "C/plugins/north/OMF/omf.cpp",
    "content": "/*\n * Fledge OSIsoft OMF interface to PI Server.\n *\n * Copyright (c) 2018-2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <utility>\n#include <iostream>\n#include <string>\n#include <cstring>\n#include <omf.h>\n#include <OMFHint.h>\n#include <logger.h>\n#include <zlib.h>\n#include <rapidjson/document.h>\n#include \"rapidjson/error/en.h\"\n#include \"string_utils.h\"\n#include <plugin_api.h>\n#include <string_utils.h>\n#include <datapoint.h>\n#include <thread>\n\n#include <piwebapi.h>\n\n#include <algorithm>\n#include <vector>\n#include <iterator>\n\n#include <basetypes.h>\n#include <omflinkeddata.h>\n#include <audit_logger.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\nstatic bool isTypeSupported(DatapointValue& dataPoint);\nvector<string> OMF::m_reportedAssets;\n\n// 1 enable performance tracking\n#define INSTRUMENT\t0\n\n#define  AFHierarchySeparator '/'\n#define  AF_TYPES_SUFFIX       \"-type\"      // The asset name is composed by: asset name + AF_TYPES_SUFFIX + incremental id of the type\n\n// Handling escapes for AF Hierarchies\n#define AFH_SLASH            \"/\"\n#define AFH_SLASH_ESCAPE     \"@/\"\n#define AFH_SLASH_ESCAPE_TMP \"##\"\n#define AFH_ESCAPE_SEQ       \"@@\"\n#define AFH_ESCAPE_CHAR      \"@\"\n\n// Structures to generate and assign the 1st level of AF hierarchy if the end point is PI Web API\n// _placeholder_ will be replaced with the proper value\nconst char *AF_HIERARCHY_1LEVEL_TYPE = QUOTE(\n\t[\n\t\t{\n\t\t\t\"id\": \"_placeholder_typeid_\",\n\t\t\t\"version\": \"1.0.0.0\",\n\t\t\t\"type\": \"object\",\n\t\t\t\"classification\": \"static\",\n\t\t\t\"properties\": {\n\t\t\t\t\"Name\": {\n\t\t\t\t\t\"type\": \"string\",\n\t\t\t\t\t\"isname\": true\n\t\t\t\t},\n\t\t\t\t\"AssetId\": {\n\t\t\t\t\t\"type\": \"string\",\n\t\t\t\t\t\"isindex\": true\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t]\n);\n\n// Structures to generate the OMF message for handling static 
information\n// _placeholder_ will be replaced with the proper value\nconst char *AF_HIERARCHY_1LEVEL_STATIC = QUOTE(\n\t[\n\t\t{\n\n\t\t\t\"typeid\": \"_placeholder_typeid_\",\n\t\t\t\"values\": [\n\t\t\t\t{\n\t\t\t\t\"Name\": \"_placeholder_Name_\",\n\t\t\t\t\"AssetId\": \"_placeholder_AssetId_\"\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t]\n);\n\n// Structures to generate the OMF message for handling link information\n// _placeholder_ will be replaced with the proper value\nconst char *AF_HIERARCHY_LEVEL_LINK = QUOTE(\n[\n  {\n    \"typeid\": \"__Link\",\n\t\"values\": [\n\t\t{\n\t\t\t\"source\": {\n\t\t\t\t\"typeid\": \"_placeholder_src_type_\",\n\t\t\t\t\"index\":  \"_placeholder_src_idx_\"\n\t\t\t},\n\t\t\t\"target\": {\n\t\t\t\t\"typeid\": \"_placeholder_tgt_type_\",\n\t\t\t\t\"index\":  \"_placeholder_tgt_idx_\"\n\t\t\t}\n\t\t}\n\t]\n  }\n]\n);\n\n// Structures to generate the OMF message for handling the link information for the first level of the AF hierarchy\n// _placeholder_ will be replaced with the proper value\nconst char *AF_HIERARCHY_1LEVEL_LINK = QUOTE(\n\t{\n\t\t\"source\": {\n\t\t\t\"typeid\": \"_placeholder_src_type_\",\n\t\t\t\"index\": \"_placeholder_src_idx_\"\n\t\t},\n\t\t\"target\": {\n\t\t\t\"typeid\": \"_placeholder_tgt_type_\",\n\t\t\t\"index\": \"_placeholder_tgt_idx_\"\n\t\t}\n\t}\n);\n\n/**\n * Parse \"index\" and \"containerid\" values from JSON containing link \"source\" and \"target\"\n *\n * @param json    JSON text as char string\n * @param links   Vector of source-target name pairs\n */\nstatic void parseLinkData(const char *json, std::vector<std::pair<std::string, std::string>> &links)\n{\n    Document doc;\n\t\n    if (doc.Parse(json).HasParseError())\n    {\n\t\tLogger::getLogger()->error(\"parseLinkData error %d: failed to parse %s\", (int)doc.GetParseError(), json);\n    }\n    else if (doc.IsArray()) // top level of the document should be an array\n    {\n        for (auto &it : doc.GetArray()) // top-level array has one unnamed 
object\n        {\n            if (it.IsObject() && it.HasMember(\"values\"))\n            {\n                for (auto &it2 : it.GetObject()[\"values\"].GetArray()) // each object in the \"values\" array has \"source\" and \"target\"\n                {\n                    if (it2.IsObject() && it2.HasMember(\"source\") && it2.HasMember(\"target\"))\n                    {\n                        auto sourceObject = it2[\"source\"].GetObject();\n                        if (sourceObject.HasMember(\"index\"))\n                        {\n                            std::string sourceString = sourceObject[\"index\"].GetString();\n                            std::string targetString;\n\n                            auto targetObject = it2[\"target\"].GetObject();\n                            if (targetObject.HasMember(\"index\"))\n                            {\n                                targetString = targetObject[\"index\"].GetString();\n                            }\n                            else if (targetObject.HasMember(\"containerid\"))\n                            {\n                                targetString = targetObject[\"containerid\"].GetString();\n                            }\n\n                            if (!sourceString.empty() && !targetString.empty())\n                            {\n                                links.push_back(std::make_pair(sourceString, targetString));\n                            }\n                        }\n                    }\n                }\n            }\n        }\n    }\n}\n\n/**\n * Parse \"index\" and \"containerid\" values from JSON containing link \"source\" and \"target\"\n *\n * @param json    JSON text as std::string\n * @param links   Vector of source-target name pairs\n */\nstatic void parseLinkData(const std::string &json, std::vector<std::pair<std::string, std::string>> &links)\n{\n\tparseLinkData(json.c_str(), links);\n}\n\n/**\n * Parse \"Name\" from JSON containing a Data message with 
element definitions\n *\n * @param json\tJSON text\n * @return\t\tName string if present, otherwise an empty string\n */\nstatic std::string parseNameFromJson(const std::string &json)\n{\n    std::string name;\n\n    Document doc;\n    if (doc.Parse(json.c_str()).HasParseError())\n    {\n\t\tLogger::getLogger()->error(\"parseNameFromJson error %d: failed to parse %s\", (int)doc.GetParseError(), json.c_str());\n    }\n    else if (doc.IsArray()) // top level of the document should be an array\n    {\n        for (auto &it : doc.GetArray()) // top-level array has one unnamed object\n        {\n            if (it.IsObject() && it.HasMember(\"values\"))\n            {\n                for (auto &it2 : it.GetObject()[\"values\"].GetArray()) // any object in the \"values\" array has at least \"Name\" and \"AssetId\"\n                {\n                    if (it2.IsObject() && it2.HasMember(\"Name\"))\n                    {\n                        name = it2[\"Name\"].GetString();\n                    }\n                }\n            }\n        }\n    }\n\n    return name;\n}\n\n/**\n * Parse \"ids\" from JSON with Type or Container definitions.\n * This method can return a single TypeId.\n *\n * @param json\tJSON text\n * @param Ids\tArray of Type or Container ids\n * @param typeIdPtr\tTypeId associated with the (single) Id (optional)\n */\nstatic void parseIdFromJson(const std::string &json, std::vector<std::string> &Ids, std::string *typeIdPtr = NULL)\n{\n    Document doc;\n    if (doc.Parse(json.c_str()).HasParseError())\n    {\n\t\tLogger::getLogger()->error(\"parseIdFromJson error %d: failed to parse %s\", (int)doc.GetParseError(), json.c_str());\n    }\n    else if (doc.IsArray()) // top level of the document should be an array\n    {\n\t\tfor (auto &it : doc.GetArray()) // array has unnamed objects, one per type\n\t\t{\n\t\t\tif (it.IsObject())\n\t\t\t{\n\t\t\t\tif 
(it.HasMember(\"id\"))\n\t\t\t\t{\n\t\t\t\t\tIds.push_back(it[\"id\"].GetString());\n\t\t\t\t}\n\n\t\t\t\tif (typeIdPtr && typeIdPtr->empty() && it.HasMember(\"typeid\"))\n\t\t\t\t{\n\t\t\t\t\ttypeIdPtr->assign(it[\"typeid\"].GetString());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * Log an array of \"ids\" with a message after a REST call.\n * Include the HTTP code in the message if it represents an error.\n *\n * @param httpCode\tHTTP return code from the REST operation. If zero, there is no return code available.\n * @param message\tMessage used to label the logged ids\n * @param Ids\t\tArray of Id strings\n */\nstatic void LogIds(const int httpCode, const char *message, std::vector<std::string> &Ids)\n{\n\tif ((httpCode >= 400) && (httpCode < 600))\n\t{\n\t\tfor (std::string &Id : Ids)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Error %d %s %s\", httpCode, message, Id.c_str());\n\t\t}\n\t}\n\telse if (httpCode == 0)\n\t{\n\t\tfor (std::string &Id : Ids)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"%s %s\", message, Id.c_str());\n\t\t}\n\t}\n\telse\n\t{\n\t\tfor (std::string &Id : Ids)\n\t\t{\n\t\t\tLogger::getLogger()->info(\"%s %s\", message, Id.c_str());\n\t\t}\n\t}\n}\n\n/**\n * Log an array of Links which means source-target pairs.\n * Include the HTTP code in the message if it represents an error.\n *\n * @param httpCode\tHTTP return code from the REST operation\n * @param message\tMessage used to label the logged links\n * @param links\t\tArray of source-target string pairs\n */\nstatic void LogLinks(const int httpCode, const char *message, std::vector<std::pair<std::string, std::string>> &links)\n{\n\tif ((httpCode >= 400) && (httpCode < 600))\n\t{\n\t\tfor (std::pair<std::string, std::string> &it : links)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Error %d %s %s to %s\", httpCode, message, it.first.c_str(), it.second.c_str());\n\t\t}\n\t}\n\telse if (httpCode == 0)\n\t{\n\t\tfor (std::pair<std::string, std::string> &it : 
links)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Error %s %s to %s\", message, it.first.c_str(), it.second.c_str());\n\t\t}\n\t}\n\telse\n\t{\n\t\tfor (std::pair<std::string, std::string> &it : links)\n\t\t{\n\t\t\tLogger::getLogger()->info(\"%s %s to %s\", message, it.first.c_str(), it.second.c_str());\n\t\t}\n\t}\n}\n\n/**\n * Extracts an HTTP code from an error message formatted by HttpSender.\n * The message format is \"HTTP code |nnn| HTTP error ...\"\n *\n * @param msg       Msg from which the HTTP code must be extracted\n * @return          HTTP Code from the message. Returns 200 if no HTTP code was found.\n *\n */\nstatic int HTTPCodeFromErrorMessage(const string &msg)\n{\n\tstring::size_type pos, pos1, pos2;\n\tint httpCode = 200;\n\n\tpos = msg.find(\"HTTP code |\");\n\tif (pos != string::npos)\n\t{\n\t\tpos1 = msg.find(\"|\", pos);\n\t\tpos2 = msg.find(\"|\", pos1 + 1);\n\t\tif (pos2 != string::npos)\n\t\t{\n\t\t\tstd::string httpCodeString = msg.substr(pos1 + 1, pos2 - pos1 - 1);\n\t\t\thttpCode = std::stoi(httpCodeString);\n\t\t}\n\t}\n\treturn httpCode;\n}\n\n/**\n * OMFData constructor, generates the OMF message containing the data\n *\n * @param reading           Reading for which the OMF message must be generated\n * @param measurementId     Name/Reference of the object of the Data Archive at which the data must be assigned\n * @param PIServerEndpoint  End point for which the OMF message must be prepared among: PIWebAPI, ADH, OCS, EDS...\n * @param AFHierarchyPrefix Unused at the current stage\n * @param hints             OMF hints for the specific reading for changing the behaviour of the operation\n *\n */\nOMFData::OMFData(OMFBuffer& payload, const Reading& reading, string measurementId, bool delim, const OMF_ENDPOINT PIServerEndpoint,const string&  AFHierarchyPrefix, OMFHints *hints)\n{\n\tbool changed;\n\n\tLogger::getLogger()->debug(\"%s - measurementId :%s: \", __FUNCTION__, measurementId.c_str());\n\n\t// Apply any TagName hints to modify 
the containerid\n\tif (hints)\n\t{\n\t\tconst std::vector<OMFHint *> omfHints = hints->getHints();\n\t\tfor (auto it = omfHints.cbegin(); it != omfHints.cend(); it++)\n\t\t{\n\t\t\tif (typeid(**it) == typeid(OMFTagNameHint))\n\t\t\t{\n\t\t\t\tmeasurementId = (*it)->getHint();\n\t\t\t\tLogger::getLogger()->debug(\"Using OMF TagName hint: %s\", measurementId.c_str());\n\t\t\t}\n\t\t\tif (typeid(**it) == typeid(OMFTagHint))\n\t\t\t{\n\t\t\t\tmeasurementId = (*it)->getHint();\n\t\t\t\tLogger::getLogger()->debug(\"Using OMF Tag hint: %s\", measurementId.c_str());\n\t\t\t}\n\t\t}\n\t}\n\n\t// Get reading data\n\tconst vector<Datapoint*> data = reading.getReadingData();\n\n\tm_hasData = false;\n\t// Check if there are any datapoints to send\n\tfor (vector<Datapoint*>::const_iterator it = data.begin(); it != data.end(); ++it)\n\t{\n\t\tstring dpName = (*it)->getName();\n\t\tif (dpName.compare(OMF_HINT) == 0)\n\t\t{\n\t\t\t// Don't send the OMF Hint to the PI Server\n\t\t\tcontinue;\n\t\t}\n\t\tif (isTypeSupported((*it)->getData()))\n\t\t{\n\t\t\tm_hasData = true;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (m_hasData)\n\t{\n\t\tif (delim)\n\t\t{\n\t\t\tpayload.append(\", \");\n\t\t}\n\t\t// Convert reading data into the OMF JSON string\n\t\tpayload.append(\"{\\\"containerid\\\": \\\"\" + measurementId);\n\t\tpayload.append(\"\\\", \\\"values\\\": [{\");\n\n\n\n\t\t/**\n\t\t * This loop creates:\n\t\t * \"dataName\": {\"type\": \"dataType\"},\n\t\t */\n\t\tfor (vector<Datapoint*>::const_iterator it = data.begin(); it != data.end(); ++it)\n\t\t{\n\t\t\tstring dpName = (*it)->getName();\n\t\t\tif (dpName.compare(OMF_HINT) == 0)\n\t\t\t{\n\t\t\t\t// Don't send the OMF Hint to the PI Server\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tif (!isTypeSupported((*it)->getData()))\n\t\t\t{\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// Add datapoint Name\n\t\t\t\tpayload.append(\"\\\"\" + OMF::ApplyPIServerNamingRulesObj(dpName, nullptr) + \"\\\": \" + 
(*it)->getData().toString());\n\t\t\t\tpayload.append(\", \");\n\t\t\t}\n\t\t}\n\n\t\t// Append Z to getAssetDateTime(FMT_STANDARD)\n\t\tpayload.append(\"\\\"Time\\\": \\\"\" + reading.getAssetDateUserTime(Reading::FMT_STANDARD) + \"Z\" + \"\\\"\");\n\n\t\tpayload.append(\"}]}\");\n\t}\n}\n\n\n/**\n * OMF constructor\n */\nOMF::OMF(const string& name,\n\tHttpSender& sender,\n\t const string& path,\n\t const long id,\n\t const string& token) :\n\t m_path(path),\n\t m_typeId(id),\n\t m_producerToken(token),\n\t m_sender(sender),\n\t m_legacy(false),\n\t m_name(name),\n\t m_baseTypesSent(false),\n\t m_linkedProperties(true),\n\t m_connected(true),\n\t m_PIstable(true),\n\t m_numBlocks(1)\n{\n\tm_changeTypeId = false;\n\tm_OMFDataTypes = NULL;\n\tm_OMFVersion = \"1.0\";\n\tm_reportedAssets.clear();\n}\n\n/**\n * OMF constructor with per asset data types\n */\n\nOMF::OMF(const string& name,\n\t HttpSender& sender,\n\t const string& path,\n\t map<string, OMFDataTypes>& types,\n\t const string& token) :\n\t m_path(path),\n\t m_OMFDataTypes(&types),\n\t m_producerToken(token),\n\t m_sender(sender),\n\t m_name(name),\n\t m_baseTypesSent(false),\n\t m_linkedProperties(true),\n\t m_connected(true),\n\t m_PIstable(true),\n\t m_numBlocks(1)\n{\n\t// Get starting type-id sequence or set the default value\n\tauto it = (*m_OMFDataTypes).find(FAKE_ASSET_KEY);\n\tm_typeId = (it != (*m_OMFDataTypes).end()) ?\n\t\t   (*it).second.typeId :\n\t\t   TYPE_ID_DEFAULT;\n\n\tm_changeTypeId = false;\n\tm_reportedAssets.clear();\n}\n\n// Destructor\nOMF::~OMF()\n{\n}\n\n/**\n * Compress a string\n *\n * @param str\t\t\tInput STL string that is to be compressed\n * @param compressionlevel\tzlib/gzip Compression level\n * @return str\t\t\tgzip compressed binary data\n */\nstd::string OMF::compress_string(const std::string& str,\n                            int compressionlevel)\n{\n    const int windowBits = 15;\n    const int GZIP_ENCODING = 16;\n\n    z_stream zs;                        // 
z_stream is zlib's control structure\n    memset(&zs, 0, sizeof(zs));\n\n    if (deflateInit2(&zs, compressionlevel, Z_DEFLATED,\n\t\t windowBits | GZIP_ENCODING, 8,\n\t\t Z_DEFAULT_STRATEGY) != Z_OK)\n        throw(std::runtime_error(\"deflateInit failed while compressing.\"));\n\n    zs.next_in = (Bytef*)str.data();\n    zs.avail_in = str.size();           // set the z_stream's input\n\n    int ret;\n    char outbuffer[32768];\n    std::string outstring;\n\n    // retrieve the compressed bytes blockwise\n    do {\n        zs.next_out = reinterpret_cast<Bytef*>(outbuffer);\n        zs.avail_out = sizeof(outbuffer);\n\n        ret = deflate(&zs, Z_FINISH);\n\n        if (outstring.size() < zs.total_out) {\n            // append the block to the output string\n            outstring.append(outbuffer,\n                             zs.total_out - outstring.size());\n        }\n    } while (ret == Z_OK);\n\n    deflateEnd(&zs);\n\n    if (ret != Z_STREAM_END) {          // an error occurred that was not EOF\n        std::ostringstream oss;\n        oss << \"Exception during zlib compression: (\" << ret << \") \" << zs.msg;\n        throw(std::runtime_error(oss.str()));\n    }\n\n    return outstring;\n}\n\n/**\n * Sends all the data type messages for a Reading data row\n *\n * @param row    The current Reading data row\n * @return       True if all data types have been sent (HTTP 2xx OK)\n *               False when first error occurs.\n */\nbool OMF::sendDataTypes(const Reading& row, OMFHints *hints)\n{\n\tint res;\n\tm_changeTypeId = false;\n\t\n\t// Create header for Type\n\tvector<pair<string, string>> resType = OMF::createMessageHeader(\"Type\");\n\n\t// Create data for Type message\tand parse the types for logging purposes\n\tstring typeData = OMF::createTypeData(row, hints);\n\tstd::vector<std::string> Ids;\n\tparseIdFromJson(typeData, Ids);\n\n\t// If Datatype in Reading row is not supported, just return true\n\tif (typeData.empty())\n\t{\n\t\treturn 
true;\n\t}\n\telse\n\t{\n\t\t// TODO: ADD LOG\n\t}\n\n\t// Build an HTTPS POST with 'resType' headers\n\t// and 'typeData' JSON payload\n\t// Then get HTTPS POST ret code and return 0 to client on error\n\tstring assetName = row.getAssetName();\n\ttry\n\t{\n\t\tres = m_sender.sendRequest(\"POST\",\n\t\t\t\t\t   m_path,\n\t\t\t\t\t   resType,\n\t\t\t\t\t   typeData);\n\t\tif  ( ! (res >= 200 && res <= 299) )\n\t\t{\n\t\t\tLogIds(res, \"creating Type\", Ids);\n\t\t\tstring msg = \"An error occurred sending the dataType message for the asset \" + assetName;\n\t\t\tmsg.append(\". HTTP error code \" + to_string(res));\n\t\t\treportAsset(assetName, \"error\", msg);\n\t\t\treturn false;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogIds(res, (res == 201) ? \"Created Type\" : \"Confirmed Type\", Ids);\n\t\t}\n\t}\n\t// Exception raised for HTTP 400 Bad Request\n\tcatch (const BadRequest& e)\n\t{\n\t\tLogIds(400, \"creating Type\", Ids);\n\t\tstring msg = \"The OMF endpoint reported a Bad Request when sending Types for the asset \" + assetName;\n\t\thandleRESTException(e, msg.c_str());\n\n\t\tif (OMF::isDataTypeError(e.what()))\n\t\t{\n\t\t\t// Data type error: force type-id change\n\t\t\tm_changeTypeId = true;\n\t\t\tLogger::getLogger()->warn(\"A data type change will take place to try to resolve this error\");\n\t\t}\n\t\treportAsset(assetName, \"error\", \"The OMF endpoint reported a Bad Request when sending Types\");\n\n\t\treturn false;\n\t}\n\tcatch (const Unauthorized& e)\n\t{\n\t\tLogIds(401, \"creating Type\", Ids);\n\t\tLogger::getLogger()->error(MESSAGE_UNAUTHORIZED);\n\t\treturn false;\n\t}\n\tcatch (const Conflict &e)\n\t{\n\t\tLogIds(409, \"creating Type\", Ids);\n\t\tstring msg = \"Type conflict for \" + assetName + \" (\" + DataPointNamesAsString(row) + \"). 
Creating a new Type\";\n\t\thandleRESTException(e, msg.c_str());\n\t\tif (!OMF::handleTypeErrors(assetName, row, hints))\n\t\t{\n\t\t\treturn false;\n\t\t}\n\t}\n\tcatch (const std::exception &e)\n\t{\n\t\tLogIds(0, \"Error creating Type\", Ids);\n\t\tstring msg = \"An error occurred sending the Type message for the asset \" + assetName;\n\t\thandleRESTException(e, msg.c_str());\n\t\treturn false;\n\t}\n\n\t// Create header for Container\n\tvector<pair<string, string>> resContainer = OMF::createMessageHeader(\"Container\");\n\t// Create data for Container message\t\n\tstring typeContainer = OMF::createContainerData(row, hints);\n\tstring measurementId = generateMeasurementId(assetName);\n\n\t// Parse the Container Id and the Container Type Id from the JSON payload.\n\t// There will be only 1.\n\tIds.clear();\n\tstd::string containerTypeId;\n\tparseIdFromJson(typeContainer, Ids, &containerTypeId);\n\n\t// Build an HTTPS POST with 'resContainer' headers\n\t// and 'typeContainer' JSON payload\n\t// Then get HTTPS POST ret code and return 0 to client on error\n\ttry\n\t{\n\t\tres = m_sender.sendRequest(\"POST\",\n\t\t\t\t\t   m_path,\n\t\t\t\t\t   resContainer,\n\t\t\t\t\t   typeContainer);\n\t\tif  ( ! (res >= 200 && res <= 299) )\n\t\t{\n\t\t\tLogIds(res, \"creating Container\", Ids);\n\t\t\tstring msg = \"An error occurred sending the dataType container message for the asset \" + assetName + \" (Type: \" + containerTypeId + \")\";\n\t\t\tmsg.append(\". 
HTTP error code \" + to_string(res));\n\t\t\treportAsset(assetName, \"error\", msg);\n\t\t\treturn false;\n\t\t}\n\t\telse if (res == 201)\n\t\t{\n\t\t\tLogIds(res, std::string(\"Created Container (Type: \" +  containerTypeId + \")\").c_str(), Ids);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogIds(res, std::string(\"Confirmed Container (Type: \" +  containerTypeId + \")\").c_str(), Ids);\n\t\t}\n\t}\n\t// Exception raised for HTTP 400 Bad Request\n\tcatch (const BadRequest &e)\n\t{\n\t\tLogIds(400, \"creating Container\", Ids);\n\t\tstring msg = \"The OMF endpoint reported a Bad Request when sending Containers for the asset \" + assetName + \" (Type: \" + containerTypeId + \")\";\n\t\thandleRESTException(e, msg.c_str());\n\n\t\tif (OMF::isDataTypeError(e.what()))\n\t\t{\n\t\t\t// Data type error: force type-id change\n\t\t\tm_changeTypeId = true;\n\t\t\tLogger::getLogger()->warn(\"A data type change will take place to try to resolve this error\");\n\t\t}\n\n\t\treportAsset(assetName, \"error\", \"The OMF endpoint reported a Bad Request when sending Containers\");\n\t\treturn false;\n\t}\n\tcatch (const Unauthorized &e)\n\t{\n\t\tLogIds(401, \"creating Container\", Ids);\n\t\tLogger::getLogger()->error(MESSAGE_UNAUTHORIZED);\n\t\treturn false;\n\t}\n\tcatch (const Conflict &e)\n\t{\n\t\tLogIds(409, \"creating Container\", Ids);\n\t\tstring msg = \"A Conflict occurred sending the Container message for the asset \" + assetName + \" (Type: \" + containerTypeId + \")\";\n\t\thandleRESTException(e, msg.c_str());\n\t\tLogger::getLogger()->warn(MESSAGE_PI_UNSTABLE, 409);\n\t\tm_PIstable = false;\n\t\treturn false;\n\t}\n\tcatch (const std::exception &e)\n\t{\n\t\tLogIds(0, \"creating Container\", Ids);\n\t\tOMFError error;\n\t\tstring msg = \"An error occurred sending the Container message for the asset: \" + assetName + \" (Type: \" + containerTypeId + \")\";\n\t\thandleRESTException(e, msg.c_str());\n\n\t\tif (error.hasMessages())\n\t\t{\n\t\t\tfor (unsigned int i = 0; i < 
error.messageCount(); i++)\n\t\t\t{\n\t\t\t\tif ((error.getHttpCode() == 500) && (0 == error.getMessage(i).compare(PIWEBAPI_PIPOINTS_NOT_CREATED)))\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(MESSAGE_PI_UNSTABLE, error.getHttpCode());\n\t\t\t\t\tm_PIstable = false;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\treportAsset(assetName, \"error\", msg);\n\t\treturn false;\n\t}\n\n\tif (m_sendFullStructure)\n\t{\n\t\t// Create header for Static data\n\t\tvector<pair<string, string>> resStaticData = OMF::createMessageHeader(\"Data\");\n\t\t// Create data for Static Data message\n\t\tstring typeStaticData = OMF::createStaticData(row);\n\n\t\t// Build an HTTPS POST with 'resStaticData' headers\n\t\t// and 'typeStaticData' JSON payload\n\t\t// Then get HTTPS POST ret code and return 0 to client on error\n\t\ttry\n\t\t{\n\t\t\tres = m_sender.sendRequest(\"POST\",\n\t\t\t\t\t\t\t\t\t   m_path,\n\t\t\t\t\t\t\t\t\t   resStaticData,\n\t\t\t\t\t\t\t\t\t   typeStaticData);\n\t\t\tif (!(res >= 200 && res <= 299))\n\t\t\t{\n\t\t\t\tstring msg = \"An error occurred creating Element \" + parseNameFromJson(typeStaticData) +  \" for the asset \" + assetName;\n\t\t\t\tmsg.append(\". 
HTTP error code \" + to_string(res));\n\t\t\t\treportAsset(assetName, \"warn\", msg);\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\telse if (res == 201)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info(\"Created Element %s\", parseNameFromJson(typeStaticData).c_str());\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info(\"Confirmed Element %s\", parseNameFromJson(typeStaticData).c_str());\n\t\t\t}\n\t\t}\n\t\t// Exception raised for HTTP 400 Bad Request\n\t\tcatch (const BadRequest& e)\n\t\t{\n\t\t\tstring msg = \"Bad Request reported when creating Element \" + parseNameFromJson(typeStaticData) +  \" for the asset \" + assetName;\n\t\t\thandleRESTException(e, msg.c_str());\n\n\t\t\tif (OMF::isDataTypeError(e.what()))\n\t\t\t{\n\t\t\t\t// Data type error: force type-id change\n\t\t\t\tm_changeTypeId = true;\n\t\t\t\tLogger::getLogger()->warn(\"A data type change will take place to try to resolve this error\");\n\t\t\t}\n\n\t\t\treturn false;\n\t\t}\n\t\tcatch (const Unauthorized &e)\n\t\t{\n\t\t\tLogger::getLogger()->error(MESSAGE_UNAUTHORIZED);\n\t\t\treturn false;\n\t\t}\n\t\tcatch (const Conflict &e)\n\t\t{\n\t\t\tstring msg = \"Conflict found creating Element \" + parseNameFromJson(typeStaticData) +  \" for the asset \" + assetName;\n\t\t\thandleRESTException(e, msg.c_str());\n\t\t\tLogger::getLogger()->warn(MESSAGE_PI_UNSTABLE, 409);\n\t\t\tm_PIstable = false;\n\t\t\treturn false;\n\t\t}\n\t\tcatch (const std::exception &e)\n\t\t{\n\t\t\tstring msg = \"An error occurred creating Element \" + parseNameFromJson(typeStaticData) +  \" for the asset \" + assetName;\n\t\t\thandleRESTException(e, msg.c_str());\n\t\t\treturn false;\n\t\t}\n\n\t\t// Create header for Link data\n\t\tvector<pair<string, string>> resLinkData = OMF::createMessageHeader(\"Data\");\n\n\t\tstring assetName = m_assetName;\n\t\tstring AFHierarchyLevel;\n\t\tstring prefix;\n\t\tstring objectPrefix;\n\n\t\tauto rule = m_AssetNamePrefix.find(assetName);\n\t\tif (rule != 
m_AssetNamePrefix.end())\n\t\t{\n\t\t\tauto itemArray = rule->second;\n\t\t\tobjectPrefix = \"\";\n\n\t\t\tfor (auto &item : itemArray)\n\t\t\t{\n\t\t\t\tstring AFHierarchy;\n\t\t\t\tstring prefix;\n\n\t\t\t\tAFHierarchy = std::get<0>(item);\n\t\t\t\tgenerateAFHierarchyPrefixLevel(AFHierarchy, prefix, AFHierarchyLevel);\n\n\t\t\t\tprefix = std::get<1>(item);\n\n\t\t\t\tif (objectPrefix.empty())\n\t\t\t\t{\n\t\t\t\t\tobjectPrefix = prefix;\n\t\t\t\t}\n\n\t\t\t\t// Create data for Static Data message and parse the link names for logging purposes\n\t\t\t\tstring typeLinkData = OMF::createLinkData(row, AFHierarchyLevel, prefix, objectPrefix, hints, true);\n\t\t\t\tstring payload = \"[\" + typeLinkData + \"]\";\n\t\t\t\tstd::vector<std::pair<std::string, std::string>> links;\n\t\t\t\tparseLinkData(payload, links);\n\n\t\t\t\t// Build an HTTPS POST with 'resLinkData' headers\n\t\t\t\t// and 'typeLinkData' JSON payload\n\t\t\t\t// Then get HTTPS POST ret code and return 0 to client on error\n\t\t\t\ttry\n\t\t\t\t{\n\t\t\t\t\tres = m_sender.sendRequest(\"POST\",\n\t\t\t\t\t\t\t\t\t\t\t   m_path,\n\t\t\t\t\t\t\t\t\t\t\t   resLinkData,\n\t\t\t\t\t\t\t\t\t\t\t   payload);\n\t\t\t\t\tif (!(res >= 200 && res <= 299))\n\t\t\t\t\t{\n\t\t\t\t\t\tLogLinks(res, \"creating Link\", links);\n\t\t\t\t\t\tstring msg = \"An error occurred sending the link Data message for the asset \" + assetName;\n\t\t\t\t\t\tmsg.append(\". HTTP error code \" + to_string(res));\n\t\t\t\t\t\treportAsset(assetName, \"warn\", msg);\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tLogLinks(res, (res == 201) ? 
\"Created Link\" : \"Confirmed Link\", links);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t// Exception raised for HTTP 400 Bad Request\n\t\t\t\tcatch (const BadRequest &e)\n\t\t\t\t{\n\t\t\t\t\tLogLinks(400, \"creating Link\", links);\n\t\t\t\t\tstring msg = \"The OMF endpoint reported a Bad Request when sending link Data for the asset\" + assetName;\n\t\t\t\t\thandleRESTException(e, msg.c_str());\n\n\t\t\t\t\tif (OMF::isDataTypeError(e.what()))\n\t\t\t\t\t{\n\t\t\t\t\t\t// Data type error: force type-id change\n\t\t\t\t\t\tm_changeTypeId = true;\n\t\t\t\t\t\tLogger::getLogger()->warn(\"A data type change will take place to try to resolve this error\");\n\t\t\t\t\t}\n\t\t\t\t\treportAsset(assetName, \"warn\", msg);\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tcatch (const Unauthorized &e)\n\t\t\t\t{\n\t\t\t\t\tLogLinks(401, \"creating Link\", links);\n\t\t\t\t\tLogger::getLogger()->error(MESSAGE_UNAUTHORIZED);\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tcatch (const Conflict &e)\n\t\t\t\t{\n\t\t\t\t\tLogLinks(409, \"creating Link\", links);\n\t\t\t\t\tstring msg = \"Conflict found sending the link Data message for the asset \" + assetName;\n\t\t\t\t\thandleRESTException(e, msg.c_str());\n\t\t\t\t\tLogger::getLogger()->warn(MESSAGE_PI_UNSTABLE, 409);\n\t\t\t\t\tm_PIstable = false;\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tcatch (const std::exception &e)\n\t\t\t\t{\n\t\t\t\t\tLogLinks(0, \"creating Link\", links);\n\t\t\t\t\tstring msg = \"An error occurred sending the link Data message for the asset \" + assetName;\n\t\t\t\t\thandleRESTException(e, msg.c_str());\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring msg(\"AF hierarchy is not defined for the asset \" + assetName);\n\t\t\treportAsset(assetName, \"warn\", msg);\n\t\t}\n\t}\n\t// All data types sent: success\n\treturn true;\n}\n\n/**\n * AFHierarchy - send an OMF message\n *\n * @param msgType    message type : Type, Data\n * @param jsonData   OMF message to send\n * @param action  
   action to be executed, either \"create\" or \"delete\"\n * @return\t\t     true if succeeded\n */\nbool OMF::AFHierarchySendMessage(const string& msgType, string& jsonData, const std::string& action)\n{\n\tbool success = true;\n\tint res = 0;\n\tstring errorMessage;\n\n\tvector<pair<string, string>> resType = OMF::createMessageHeader(msgType, action);\n\n\ttry\n\t{\n\t\tres = m_sender.sendRequest(\"POST\", m_path, resType, jsonData);\n\t\tif  ( ! (res >= 200 && res <= 299) )\n\t\t{\n\t\t\tsuccess = false;\n\t\t}\n\t\telse if (msgType.compare(\"Data\") == 0)\n\t\t{\n\t\t\tstd::string name = parseNameFromJson(jsonData);\n\t\t\tif (name.empty())\n\t\t\t{\n\t\t\t\tstd::vector<std::pair<std::string, std::string>> links;\n\t\t\t\tparseLinkData(jsonData, links);\n\t\t\t\tLogLinks(res, (res == 201) ? \"Created Link\" : \"Confirmed Link\", links);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info((res == 201) ? \"Created Element %s\" : \"Confirmed Element %s\", name.c_str());\n\t\t\t}\n\t\t}\n\t\telse if (msgType.compare(\"Type\") == 0)\n\t\t{\n\t\t\tstd::vector<std::string> typeIds;\n\t\t\tparseIdFromJson(jsonData, typeIds);\n\t\t\tLogIds(res, (res == 201) ? \"Created Type\" : \"Confirmed Type\", typeIds);\n\t\t}\n\t}\n\tcatch (const BadRequest& ex)\n\t{\n\t\terrorMessage = \"The OMF endpoint reported a Bad Request when sending AF hierarchy\";\n\t\thandleRESTException(ex, errorMessage.c_str());\n\t\tsuccess = false;\n\t}\n\tcatch (const std::exception &ex)\n\t{\n\t\thandleRESTException(ex, \"Error sending AF hierarchy\");\n\t\terrorMessage = ex.what();\n\t\tsuccess = false;\n\t}\n\n\tif (! 
success)\n\t{\n\t\tstring errorMsg = errorMessageHandler(errorMessage);\n\n\t\tif (res != 0)\n\t\t\tLogger::getLogger()->error(\"Sending Asset Framework hierarchy, %d %s - %s %s\",\n\t\t\t\t\t\t   res,\n\t\t\t\t\t\t   errorMsg.c_str(),\n\t\t\t\t\t\t   m_sender.getHostPort().c_str(),\n\t\t\t\t\t\t   m_path.c_str());\n\t\telse\n\t\t\tLogger::getLogger()->error(\"Sending Asset Framework hierarchy, %s - %s %s\",\n\t\t\t\t\t\t   errorMsg.c_str(),\n\t\t\t\t\t\t   m_sender.getHostPort().c_str(),\n\t\t\t\t\t\t   m_path.c_str());\n\n\t}\n\n\treturn success;\n}\n\n/**\n * AFHierarchy - handles OMF types definition\n *\n */\nbool OMF::sendAFHierarchyTypes(const std::string AFHierarchyLevel, const std::string prefix)\n{\n\tbool success;\n\tstring jsonData;\n\tstring tmpStr;\n\n\tjsonData = \"\";\n\ttmpStr = AF_HIERARCHY_1LEVEL_TYPE;\n\tStringReplace(tmpStr, \"_placeholder_typeid_\", prefix + \"_\" + AFHierarchyLevel + \"_typeid\");\n\tjsonData.append(tmpStr);\n\n\tsuccess = AFHierarchySendMessage(\"Type\", jsonData);\n\n\treturn success;\n}\n\n/**\n *  AFHierarchy - handles OMF static data\n *\n */\nbool OMF::sendAFHierarchyStatic(const std::string AFHierarchyLevel, const std::string prefix)\n{\n\tbool success;\n\tstring jsonData;\n\tstring tmpStr;\n\n\tjsonData = \"\";\n\ttmpStr = AF_HIERARCHY_1LEVEL_STATIC;\n\tStringReplace(tmpStr, \"_placeholder_typeid_\"  , prefix + \"_\" + AFHierarchyLevel + \"_typeid\");\n\tStringReplace(tmpStr, \"_placeholder_Name_\"    , AFHierarchyLevel);\n\tStringReplace(tmpStr, \"_placeholder_AssetId_\" , prefix + \"_\" + AFHierarchyLevel);\n\tjsonData.append(tmpStr);\n\n\tsuccess = AFHierarchySendMessage(\"Data\", jsonData);\n\n\treturn success;\n}\n\n/**\n *  AFHierarchy - creates the link between 2 elements in the AF hierarchy\n *\n */\nbool OMF::sendAFHierarchyLink(std::string parent, std::string child, std::string prefixIdParent, std::string prefixId)\n{\n\tbool success;\n\tstring jsonData;\n\tstring tmpStr;\n\n\tjsonData = \"\";\n\ttmpStr = 
AF_HIERARCHY_LEVEL_LINK;\n\n\tStringReplace(tmpStr, \"_placeholder_src_type_\", prefixIdParent + \"_\" + parent + \"_typeid\");\n\tStringReplace(tmpStr, \"_placeholder_src_idx_\",  prefixIdParent + \"_\" + parent );\n\tStringReplace(tmpStr, \"_placeholder_tgt_type_\", prefixId       + \"_\" + child + \"_typeid\");\n\tStringReplace(tmpStr, \"_placeholder_tgt_idx_\",  prefixId + \"_\" + child);\n\tjsonData.append(tmpStr);\n\n\n\tsuccess = AFHierarchySendMessage(\"Data\", jsonData);\n\n\treturn success;\n}\n\n/**\n *  AFHierarchy - creates or deletes the link between two elements in the AF hierarchy, depending on the action parameter\n *\n */\nbool OMF::manageAFHierarchyLink(std::string parent, std::string child, std::string prefixIdParent, std::string prefixId, std::string childFull, string action)\n{\n\tbool success;\n\tstring jsonData;\n\tstring tmpStr;\n\n\tjsonData = \"\";\n\ttmpStr = AF_HIERARCHY_LEVEL_LINK;\n\n\tStringReplace(tmpStr, \"_placeholder_src_type_\", prefixIdParent + \"_\" + parent + \"_typeid\");\n\tStringReplace(tmpStr, \"_placeholder_src_idx_\",  prefixIdParent + \"_\" + parent );\n\n\tif (childFull.empty()) {\n\n\t\tStringReplace(tmpStr, \"_placeholder_tgt_type_\", prefixId       + \"_\" + child + \"_typeid\");\n\t} else {\n\t\tStringReplace(tmpStr, \"_placeholder_tgt_type_\", childFull);\n\t}\n\n\tStringReplace(tmpStr, \"_placeholder_tgt_idx_\",  \"A_\" + prefixId + \"_\" + child);\n\tjsonData.append(tmpStr);\n\n\tsuccess = AFHierarchySendMessage(\"Data\", jsonData, action);\n\n\treturn success;\n}\n\n/**\n *  AFHierarchy - deletes the link between two elements in the AF hierarchy\n *\n */\nbool OMF::deleteAssetAFH(const string& assetName, string& path) {\n\n\tstd::string pathLastLevel, pathPrefixId, assetNamePrefixId, assetNameFullId;\n\n\tassetNamePrefixId = getHashStored(assetName);\n\tgenerateAFHierarchyPrefixLevel(path, pathPrefixId, pathLastLevel);\n\n\tsetAssetTypeTagNew(assetName, \"typename_sensor\", 
assetNameFullId);\n\n\tLogger::getLogger()->debug(\"%s - assetName :%s: childPrefixId :%s: pathStored :%s: pathLastLevel :%s: pathPrefixId :%s:  childFull :%s:\"\n\t\t, __FUNCTION__\n\t\t, assetName.c_str()\n\t\t, assetNamePrefixId.c_str()\n\t\t, path.c_str()\n\t\t, pathLastLevel.c_str()\n\t\t, pathPrefixId.c_str()\n\t\t, assetNameFullId.c_str()\n\t);\n\n\treturn manageAFHierarchyLink(pathLastLevel, assetName, pathPrefixId, assetNamePrefixId, assetNameFullId, \"delete\");\n}\n\n/**\n *  AFHierarchy - creates the link between two elements in the AF hierarchy\n *\n */\nbool OMF::createAssetAFH(const string& assetName, string& path) {\n\n\tstd::string pathLastLevel, pathPrefixId, assetNamePrefixId, assetNameFullId;\n\n\tassetNamePrefixId = getHashStored(assetName);\n\tgenerateAFHierarchyPrefixLevel(path, pathPrefixId, pathLastLevel);\n\n\tsetAssetTypeTagNew(assetName, \"typename_sensor\", assetNameFullId);\n\n\tLogger::getLogger()->debug(\"%s - assetName :%s: childPrefixId :%s: pathStored :%s: pathLastLevel :%s: pathPrefixId :%s:  childFull :%s:\"\n\t\t, __FUNCTION__\n\t\t, assetName.c_str()\n\t\t, assetNamePrefixId.c_str()\n\t\t, path.c_str()\n\t\t, pathLastLevel.c_str()\n\t\t, pathPrefixId.c_str()\n\t\t, assetNameFullId.c_str()\n\t);\n\n\treturn manageAFHierarchyLink(pathLastLevel, assetName, pathPrefixId, assetNamePrefixId, assetNameFullId, \"create\");\n}\n\n/**\n * Creates the hierarchy tree in AF as defined in the configuration item DefaultAFLocation;\n * each level is separated by '/'.\n * The implementation is available for PI Web API only.\n * The hierarchy is created/recreated if an OMF Type message is sent.\n *\n */\nbool OMF::handleAFHierarchySystemWide() {\n\n\tbool success = true;\n\tstring parentPath;\n\tparentPath = evaluateParentPath(m_DefaultAFLocation, AFHierarchySeparator);\n\tsuccess = sendAFHierarchyLevels(parentPath, m_DefaultAFLocation, m_AFHierarchyLevel);\n\n\treturn success;\n}\n\n/**\n 
* Creates all the AF hierarchy levels requested by the input parameter\n * Creates the AF hierarchy if it was not already created\n *\n * @param AFHierarchy    Hierarchy levels to be created, as a relative or absolute path\n * @return\t\t     true if succeeded\n */\nbool OMF::sendAFHierarchy(string AFHierarchy)\n{\n\tbool success = true;\n\tstring path;\n\tstring dummy;\n\tstring parentPath;\n\n\n\tif(find(m_afhHierarchyAlreadyCreated.begin(), m_afhHierarchyAlreadyCreated.end(), AFHierarchy) == m_afhHierarchyAlreadyCreated.end()){\n\n\t\tif (AFHierarchy.at(0) == '/')\n\t\t{\n\t\t\t// Absolute path\n\t\t\tpath = AFHierarchy;\n\t\t\tparentPath = evaluateParentPath(path, AFHierarchySeparator);\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Relative path\n\t\t\tpath = m_DefaultAFLocation + \"/\" + AFHierarchy;\n\t\t\tparentPath = m_DefaultAFLocation;\n\t\t}\n\n\t\tm_afhHierarchyAlreadyCreated.push_back(AFHierarchy);\n\n\t\tsuccess = sendAFHierarchyLevels(parentPath, path, dummy);\n\t\tif (success)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"%s - path created :%s:\", __FUNCTION__, AFHierarchy.c_str() );\n\t\t}\n\t} else {\n\t\tLogger::getLogger()->debug(\"%s - path already created :%s:\", __FUNCTION__, AFHierarchy.c_str() );\n\t}\n\n\treturn success;\n}\n\n/**\n * Creates all the AF hierarchy levels requested by the input parameter\n *\n * @param parentPath  Parent path\n * @param path\t      Full path of hierarchies to create\n * @param lastLevel\t  last level of the created hierarchy\n * @return\t\t      true if succeeded \n */\nbool OMF::sendAFHierarchyLevels(string parentPath, string path, std::string &lastLevel) {\n\n\tbool success = true;\n\tstd::string level;\n\tstd::string previousLevel;\n\n\tStringReplaceAll(path, AFH_ESCAPE_SEQ ,AFH_ESCAPE_CHAR);\n\tStringReplaceAll(path, AFH_SLASH_ESCAPE ,AFH_SLASH_ESCAPE_TMP);\n\n\tif (path.find(AFHierarchySeparator) == string::npos)\n\t{\n\t\tstring prefixId;\n\n\t\t// only 1 single level of hierarchy\n\t\tStringReplaceAll(path, 
AFH_SLASH_ESCAPE_TMP ,AFH_SLASH);\n\t\tprefixId = generateUniquePrefixId(path);\n\n\t\tsuccess = sendAFHierarchyTypes(path, prefixId);\n\t\tif (success)\n\t\t{\n\t\t\tsuccess = sendAFHierarchyStatic(path,prefixId);\n\t\t}\n\t\tlastLevel = path;\n\t}\n\telse\n\t{\n\t\tstring pathFixed;\n\t\tstring parentPathFixed;\n\t\tstring prefixId;\n\t\tstring prefixIdParent;\n\t\tstring previousLevelPath;\n\t\tstring AFHierarchyLevel;\n\t\tstring levelPath;\n\n\t\tpathFixed = StringSlashFix(path);\n\t\tstd::stringstream pathStream(pathFixed);\n\n\t\t// multiple hierarchy levels\n\t\twhile (std::getline(pathStream, level, AFHierarchySeparator))\n\t\t{\n\t\t\tStringReplaceAll(level, AFH_SLASH_ESCAPE_TMP ,AFH_SLASH);\n\n\t\t\tlevelPath = previousLevelPath + AFHierarchySeparator + level;\n\t\t\tlevelPath = StringSlashFix(levelPath);\n\t\t\tprefixId = generateUniquePrefixId(levelPath);\n\n\t\t\tif (!sendAFHierarchyTypes(level, prefixId))\n\t\t\t{\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tif (!sendAFHierarchyStatic(level, prefixId))\n\t\t\t{\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\t// Creates the link between the AF level\n\t\t\tif (previousLevel != \"\")\n\t\t\t{\n\t\t\t\tparentPathFixed = StringSlashFix(previousLevelPath);\n\t\t\t\tprefixIdParent = generateUniquePrefixId(parentPathFixed);\n\n\t\t\t\tif (!sendAFHierarchyLink(previousLevel, level, prefixIdParent, prefixId))\n\t\t\t\t{\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t}\n\t\t\tpreviousLevelPath = levelPath;\n\t\t\tpreviousLevel = level;\n\t\t}\n\t\tlastLevel = level;\n\t}\n\n\treturn success;\n}\n\n/**\n * Handle the creation of AF hierarchies\n *\n * @return\t\ttrue if succeeded\n*/\nbool OMF::handleAFHierarchy()\n{\n\tbool success = true;\n\n\tif (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\n\t\tsuccess = handleAFHierarchySystemWide();\n\t}\n\treturn success;\n}\n\n/**\n * Sets the value of the prefix used for the objects naming\n *\n */\nvoid OMF::setAFHierarchy()\n{\n\tstd::string level;\n\tstd::string 
AFLocation;\n\n\tAFLocation = m_DefaultAFLocation;\n\tif (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\t// Implementation only for PI Web API\n\t\tStringReplaceAll(AFLocation, AFH_ESCAPE_SEQ,   AFH_ESCAPE_CHAR);\n\t\tStringReplaceAll(AFLocation, AFH_SLASH_ESCAPE ,AFH_SLASH_ESCAPE_TMP);\n\t\tstd::stringstream defaultAFLocation(AFLocation);\n\n\t\tif (AFLocation.find(AFHierarchySeparator) == string::npos)\n\t\t{\n\t\t\t// only 1 single level of hierarchy\n\t\t\tm_AFHierarchyLevel = AFLocation;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// multiple hierarchy levels\n\t\t\twhile (std::getline(defaultAFLocation, level, AFHierarchySeparator))\n\t\t\t{\n\t\t\t\t;\n\t\t\t}\n\t\t\tm_AFHierarchyLevel = level;\n\t\t}\n\t\tStringReplaceAll(m_AFHierarchyLevel, AFH_SLASH_ESCAPE_TMP ,AFH_SLASH);\n\t}\n}\n\n/**\n * Send all the readings to the PI Server\n *\n * @param readings            A vector of readings data pointers\n * @param compression         If true, compress the JSON payload before sending to PI\n * @param skipSentDataTypes   Send data types only once (default is true)\n * @return                    Number of readings sent on success, 0 otherwise\n */\nuint32_t OMF::sendToServer(const vector<Reading *>& readings,\n\t\t\t   bool compression, bool skipSentDataTypes)\n{\n\tbool AFHierarchySent = false;\n\tbool sendLinkedTypes = false;\n\tbool sendDataTypes;\n\tstring keyComplete;\n\tstring AFHierarchyPrefix;\n\tstring AFHierarchyLevel;\n\tstring measurementId;\n\n#if INSTRUMENT\n\tostringstream threadId;\n\tthreadId << std::this_thread::get_id();\n\n\tstruct timeval\tstart, t1, t2, t3, t4, t5;\n\tgettimeofday(&start, NULL);\n#endif\n\n\tif (m_linkedProperties && m_baseTypesSent == false)\n\t{\n\t\tif (!sendBaseTypes() || !sendFledgeAssetType())\n\t\t{\n\t\t\tif (!m_connected || !m_PIstable)\n\t\t\t{\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t\t\n\t\t\tLogger::getLogger()->error(\"Unable to send base types, linked assets will not be sent. 
The system will fall back to using complex types.\");\n\t\t\tm_linkedProperties = false;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_baseTypesSent = true;\n\t\t}\n\t}\n\n\t// TODO We do not need the superset stuff if we are using linked data types,\n\t// this would save us iterating over the data an extra time and reduce our\n\t// memory footprint\n\t//\n\t// Create a superset of all the datapoints for each assetName\n\t// the superset[assetName] is then passed to routines which handles\n\t// creation of OMF data types. This is used for the initial type\n\t// handling of complex data types.\n\tOMF::setMapObjectTypes(readings, m_SuperSetDataPoints);\n\n#if INSTRUMENT\n\tgettimeofday(&t1, NULL);\n#endif\n\n\t// Applies the PI-Server naming rules to the AF hierarchy\n\t{\n\t\tbool changed = false;\n\t\tstring  origDefaultAFLocation;\n\n\t\torigDefaultAFLocation = m_DefaultAFLocation;\n\t\tm_DefaultAFLocation = ApplyPIServerNamingRulesPath(m_DefaultAFLocation, &changed);\n\n\t\tif (changed) {\n\n\t\t\tLogger::getLogger()->info(\"%s - AF hierarchy changed to follow PI-Server naming rules from :%s: to :%s:\", __FUNCTION__, origDefaultAFLocation.c_str(), m_DefaultAFLocation.c_str() );\n\t\t}\n\t}\n\n\t/*\n\t * Iterate over readings:\n\t * - Send/cache Types\n\t * - transform a reading to OMF format\n\t * - add OMF data to new vector\n\t */\n\n\t// Used for logging\n\tstring json_not_compressed;\n\n\tstring OMFHintAFHierarchyTmp;\n\tstring OMFHintAFHierarchy;\n\n\t// Create the class that deals with the linked data generation\n\tOMFLinkedData linkedData(&m_linkedAssetState, m_PIServerEndpoint);\n\tlinkedData.setSendFullStructure(m_sendFullStructure);\n\tlinkedData.setDelimiter(m_delimiter);\n\tlinkedData.setFormats(getFormatType(OMF_TYPE_FLOAT), getFormatType(OMF_TYPE_INTEGER));\n\tlinkedData.setStaticData(m_staticData);\n\n\t// Create the lookup data for this block of readings\n\tlinkedData.buildLookup(readings);\n\n\tunsigned int idx = 0;\n\n\tstd::size_t blockSize = 
readings.size() / m_numBlocks;\n\tif (blockSize == 0)\n\t{\n\t\t// Guard against a zero block size (and an endless loop below) when there are fewer readings than blocks\n\t\tblockSize = 1;\n\t}\n\tfor (std::size_t i = 0; i < readings.size(); i += blockSize)\n\t{\n\t\tOMFBuffer payload;\n\t\tpayload.append('[');\n\t\tbool pendingSeparator = false;\n\n\t\tfor (std::size_t j = i; j < (i + blockSize) && (j < readings.size()); ++j)\n\t\t{\n\t\t\tReading *reading = readings[j];\n\t\t\tOMFHintAFHierarchy = \"\";\n\t\t\tLogger::getLogger()->debug(\"sendToServer[%u/%u]: %s (%s)\", idx++, readings.size(), reading->getAssetName().c_str(), DataPointNamesAsString(*reading).c_str());\n\n\t\t\t// Fetch and parse any OMFHint for this reading\n\t\t\tDatapoint *hintsdp = reading->getDatapoint(\"OMFHint\");\n\t\t\tOMFHints *hints = NULL;\n\t\t\tbool usingTagHint = false;\n\t\t\tlong typeId = 0;\n\t\t\tif (hintsdp)\n\t\t\t{\n\t\t\t\thints = new OMFHints(hintsdp->getData().toString());\n\t\t\t\tconst vector<OMFHint *> omfHints = hints->getHints();\n\t\t\t\tfor (auto it = omfHints.cbegin(); it != omfHints.cend(); it++)\n\t\t\t\t{\n\t\t\t\t\tif (typeid(**it) == typeid(OMFTagHint))\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->debug(\"Using OMF Tag hint: %s\", (*it)->getHint().c_str());\n\t\t\t\t\t\tkeyComplete.append(\"_\" + (*it)->getHint());\n\t\t\t\t\t\tusingTagHint = true;\n\t\t\t\t\t}\n\t\t\t\t\telse if (typeid(**it) == typeid(OMFAFLocationHint))\n\t\t\t\t\t{\n\t\t\t\t\t\tOMFHintAFHierarchyTmp = (*it)->getHint();\n\t\t\t\t\t\tOMFHintAFHierarchy = variableValueHandle(*reading, OMFHintAFHierarchyTmp);\n\n\t\t\t\t\t\tLogger::getLogger()->debug(\"%s - OMF AFHierarchy original value :%s: new :%s:\"\n\t\t\t\t\t\t\t,__FUNCTION__\n\t\t\t\t\t\t\t,OMFHintAFHierarchyTmp.c_str()\n\t\t\t\t\t\t\t,OMFHintAFHierarchy.c_str() );\n\t\t\t\t\t}\n\t\t\t\t\telse if (typeid(**it) == typeid(OMFLegacyTypeHint))\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->warn(\"OMFHint LegacyType has been deprecated. 
The hint value '%s' will be ignored.\", (*it)->getHint().c_str());\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// Applies the PI-Server naming rules to the AssetName\n\t\t\t{\n\n\t\t\t\tbool changed;\n\t\t\t\tstring assetNameFledge;\n\n\t\t\t\tassetNameFledge = reading->getAssetName();\n\t\t\t\tm_assetName = ApplyPIServerNamingRulesObj(assetNameFledge, &changed);\n\t\t\t\tif (changed) {\n\n\t\t\t\t\tLogger::getLogger()->info(\"%s - Asset name changed to follow PI-Server naming rules from :%s: to :%s:\", __FUNCTION__, assetNameFledge.c_str(), m_assetName.c_str() );\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// Hints are attached to the individual readings processed by the north plugin. If an AFLocation\n\t\t\t// hint is present it overrides any default AFLocation or AF Location rules defined in the north plugin configuration.\n\t\t\tif ( ! createAFHierarchyOmfHint(m_assetName, OMFHintAFHierarchy) )\n\t\t\t{\n\t\t\t\tif (!evaluateAFHierarchyRules(m_assetName, *reading))\n\t\t\t\t{\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (m_PIServerEndpoint == ENDPOINT_CR  ||\n\t\t\t\tm_PIServerEndpoint == ENDPOINT_ADH ||\n\t\t\t\tm_PIServerEndpoint == ENDPOINT_OCS ||\n\t\t\t\tm_PIServerEndpoint == ENDPOINT_EDS\n\t\t\t\t)\n\t\t\t{\n\t\t\t\tkeyComplete = m_assetName;\n\t\t\t}\n\t\t\telse if (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t\t\t{\n\t\t\t\tif (getNamingScheme(m_assetName) == NAMINGSCHEME_CONCISE) {\n\n\t\t\t\t\tkeyComplete = m_assetName;\n\t\t\t\t} else {\n\t\t\t\t\tretrieveAFHierarchyPrefixAssetName(m_assetName, AFHierarchyPrefix, AFHierarchyLevel);\n\t\t\t\t\tkeyComplete = AFHierarchyPrefix + \"_\" + m_assetName;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (! 
AFHierarchySent)\n\t\t\t{\n\t\t\t\tsetAFHierarchy();\n\t\t\t}\n\n\t\t\t// Use old style complex types if the user has forced it via configuration,\n\t\t\t// we are running against an EDS endpoint or Connector Relay or we have types defined for this\n\t\t\t// asset already\n\t\t\tif (m_legacy || m_PIServerEndpoint == ENDPOINT_EDS || \n\t\t\t\t\tm_PIServerEndpoint == ENDPOINT_CR ||\n\t\t\t\t\tm_OMFDataTypes->find(keyComplete) != m_OMFDataTypes->end())\n\t\t\t{\n\t\t\t\t// Legacy type support\n\t\t\t\tif (! usingTagHint)\n\t\t\t\t{\n\t\t\t\t\t/*\n\t\t\t\t\t* Check the OMFHints, if there are any, to see if we have a \n\t\t\t\t\t* type name that should be used for this asset.\n\t\t\t\t\t* We will still create the type, but the name will be fixed \n\t\t\t\t\t* as the value of this hint.\n\t\t\t\t\t*/\n\t\t\t\t\tbool usingTypeNameHint = false;\n\t\t\t\t\tif (hints)\n\t\t\t\t\t{\n\t\t\t\t\t\tconst vector<OMFHint *> omfHints = hints->getHints();\n\t\t\t\t\t\tfor (auto it = omfHints.cbegin(); it != omfHints.cend(); it++)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (typeid(**it) == typeid(OMFTypeNameHint))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tLogger::getLogger()->debug(\"Using OMF TypeName hint: %s\", (*it)->getHint().c_str());\n\t\t\t\t\t\t\t\tkeyComplete.append(\"_\" + (*it)->getHint());\n\t\t\t\t\t\t\t\tusingTypeNameHint = true;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\n\t\t\t\t\tauto it = m_SuperSetDataPoints.find(m_assetName);\n\t\t\t\t\tif (it == m_SuperSetDataPoints.end()) {\n\t\t\t\t\t\t// The asset has only unsupported properties, so it is ignored\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\n\t\t\t\t\tsendDataTypes = (skipSentDataTypes == true) ?\n\t\t\t\t\t\t\t// Send if not already sent\n\t\t\t\t\t\t\t!OMF::getCreatedTypes(keyComplete, *reading, hints) :\n\t\t\t\t\t\t\t// Always send types\n\t\t\t\t\t\t\ttrue;\n\n\t\t\t\t\tReading* datatypeStructure = NULL;\n\t\t\t\t\tif (sendDataTypes && !usingTypeNameHint)\n\t\t\t\t\t{\n\t\t\t\t\t\t// Increment 
type-id of assetName in memory cache\n\t\t\t\t\t\tOMF::incrementAssetTypeIdOnly(keyComplete);\n\t\t\t\t\t\t// Remove data and keep type-id\n\t\t\t\t\t\tOMF::clearCreatedTypes(keyComplete);\n\n\t\t\t\t\t\t// Get the supersetDataPoints for current assetName\n\t\t\t\t\t\tauto it = m_SuperSetDataPoints.find(m_assetName);\n\t\t\t\t\t\tif (it != m_SuperSetDataPoints.end())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tdatatypeStructure = (*it).second;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif (m_sendFullStructure)\n\t\t\t\t\t{\n\t\t\t\t\t\t// The AF hierarchy is created/recreated if an OMF type message is sent\n\t\t\t\t\t\t// it sends the hierarchy once\n\t\t\t\t\t\tif (sendDataTypes and !AFHierarchySent)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (!handleAFHierarchy())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\treturn 0;\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tAFHierarchySent = true;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif (usingTypeNameHint)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (sendDataTypes && !OMF::handleDataTypes(keyComplete,\n\t\t\t\t\t\t\t\t\t\t*reading, skipSentDataTypes, hints))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Failure\n\t\t\t\t\t\t\treturn 0;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\t// Check first we have supersetDataPoints for the current reading\n\t\t\t\t\t\tif ((sendDataTypes && datatypeStructure == NULL) ||\n\t\t\t\t\t\t\t// Handle the data types of the current reading\n\t\t\t\t\t\t\t(sendDataTypes &&\n\t\t\t\t\t\t\t// Send data type\n\t\t\t\t\t\t\t!OMF::handleDataTypes(keyComplete, *datatypeStructure, skipSentDataTypes, hints) &&\n\t\t\t\t\t\t\t// Data type not sent:\n\t\t\t\t\t\t\t(!m_changeTypeId ||\n\t\t\t\t\t\t\t// Increment type-id and re-send data types\n\t\t\t\t\t\t\t!OMF::handleTypeErrors(keyComplete, *datatypeStructure, hints))))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Remove all assets supersetDataPoints\n\t\t\t\t\t\t\tOMF::unsetMapObjectTypes(m_SuperSetDataPoints);\n\n\t\t\t\t\t\t\t// Failure\n\t\t\t\t\t\t\treturn 0;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\t// 
Create the key for dataTypes sending once\n\t\t\t\t\ttypeId = OMF::getAssetTypeId(m_assetName);\n\t\t\t\t}\n\n\t\t\t\tmeasurementId = generateMeasurementId(m_assetName);\n\n\t\t\t\tif (OMFData(payload, *reading, measurementId, pendingSeparator, m_PIServerEndpoint, AFHierarchyPrefix, hints).hasData())\n\t\t\t\t{\n\t\t\t\t\tpendingSeparator = true;\n\t\t\t\t}\n\n\t\t\t\tsendLinkedTypes = false;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// We do this before the send so we know if it was sent for the first time\n\t\t\t\t// in the processReading call\n\t\t\t\tauto lookup = m_linkedAssetState.find(m_assetName + m_delimiter);\n\t\t\t\t// Send data for this reading using the new mechanism\n\t\t\t\tif (linkedData.processReading(payload, pendingSeparator, *reading, AFHierarchyPrefix, hints))\n\t\t\t\t\tpendingSeparator = true;\n\n\t\t\t\tsendLinkedTypes = true;\n\t\t\t}\n\n\t\t\tif (hints)\n\t\t\t{\n\t\t\t\tdelete hints;\n\t\t\t}\n\t\t} // end 'for' one block of Readings\n\n\t#if INSTRUMENT\n\t\tgettimeofday(&t2, NULL);\n\t#endif\n\n\t\tpayload.append(']');\n\n\t\t// TODO Improve this with coalesceCompressed call and avoid string on the stack\n\t\t// and avoid copy into a string\n\t\tconst char *omfData = payload.coalesce();\n\n\t#if INSTRUMENT\n\t\tgettimeofday(&t3, NULL);\n\t#endif\n\n\t\tvector<pair<string, string>> containerHeader = OMF::createMessageHeader(\"Container\");\n\t\tOMFError omfError;\n\t\tif (!linkedData.flushContainers(m_sender, m_path, containerHeader, omfError, &m_connected))\n\t\t{\n\t\t\tif (omfError.hasMessages())\n\t\t\t{\n\t\t\t\t// Exit immediately if attempting to create PI Points results in HTTP 409 (Conflict)\n\t\t\t\t// or HTTP 500 (Internal Server Error) with a specific error message.\n\t\t\t\t// Both mean that processing cannot continue because the PI Server cannot store data.\n\t\t\t\tint httpCode = omfError.getHttpCode();\n\n\t\t\t\tfor (unsigned int i = 0; i < omfError.messageCount(); i++)\n\t\t\t\t{\n\t\t\t\t\tif ((httpCode == 409) || 
((httpCode == 500) && (0 == omfError.getMessage(i).compare(PIWEBAPI_PIPOINTS_NOT_CREATED))))\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->warn(MESSAGE_PI_UNSTABLE, httpCode);\n\t\t\t\t\t\tm_PIstable = false;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn 0;\n\t\t}\n\n\t\t/**\n\t\t * Types messages sent, now transform each reading to OMF format.\n\t\t *\n\t\t * After formatting the new vector of data can be sent\n\t\t * with one message only\n\t\t */\n\n\t\t// Create header for Readings data\n\t\tvector<pair<string, string>> readingData = OMF::createMessageHeader(\"Data\", m_dataActionCode);\n\t\tif (compression)\n\t\t\treadingData.push_back(pair<string, string>(\"compression\", \"gzip\"));\n\n\t\t// Build an HTTPS POST with 'readingData headers\n\t\t// and 'allReadings' JSON payload\n\t\t// Then get HTTPS POST ret code and return 0 to client on error\n\t\ttry\n\t\t{\n\t\t\tint res = m_sender.sendRequest(\"POST\",\n\t\t\t\t\t\t\tm_path,\n\t\t\t\t\t\t\treadingData,\n\t\t\t\t\t\t\tcompression ? compress_string(omfData) : omfData);\n\t\t\tif  ( ! 
(res >= 200 && res <= 299) )\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Sending JSON readings, \"\n\t\t\t\t\t\t\t\"- error: HTTP code |%d| - %s %s\",\n\t\t\t\t\t\t\tres,\n\t\t\t\t\t\t\tm_sender.getHostPort().c_str(),\n\t\t\t\t\t\t\tm_path.c_str()\n\t\t\t\t\t\t\t);\n\t\t\t\tdelete[] omfData;\n\t\t\t\treturn 0;\n\t\t\t}\n\n\t#if INSTRUMENT\n\t\t\tgettimeofday(&t4, NULL);\n\t#endif\n\n\t#if INSTRUMENT\n\t\t\tstruct timeval tm;\n\t\t\tdouble timeT1, timeT2, timeT3, timeT4;\n\n\t\t\ttimersub(&t1, &start, &tm);\n\t\t\ttimeT1 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\t\t\ttimersub(&t2, &t1, &tm);\n\t\t\ttimeT2 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\t\t\ttimersub(&t3, &t2, &tm);\n\t\t\ttimeT3 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\t\t\ttimersub(&t4, &t3, &tm);\n\t\t\ttimeT4 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\t\t\tLogger::getLogger()->warn(\"Timing seconds - thread %s - superSet %6.3f - Loop %6.3f - compress %6.3f - send data %6.3f - readings %d - msg size %d\",\n\t\t\t\t\t\t\t\t\tthreadId.str().c_str(),\n\t\t\t\t\t\t\t\t\ttimeT1,\n\t\t\t\t\t\t\t\t\ttimeT2,\n\t\t\t\t\t\t\t\t\ttimeT3,\n\t\t\t\t\t\t\t\t\ttimeT4,\n\t\t\t\t\t\t\t\t\treadings.size(),\n\t\t\t\t\t\t\t\t\tstrlen(omfData)\n\t\t\t);\n\n\t#endif\n\n\n\t\t\tdelete[] omfData;\n\t\t}\n\t\t// Exception raised for HTTP 400 Bad Request\n\t\tcatch (const BadRequest& e)\n\t\t{\n\t\t\tOMFError error(m_sender.getHTTPResponse());\n\t\t\terror.Log(\"The OMF endpoint reported a Bad Request when sending data\");\n\n\t\t\tif (OMF::isDataTypeError(e.what()))\n\t\t\t{\n\t\t\t\t// Some assets have invalid or redefined data type\n\t\t\t\t// NOTE:\n\t\t\t\t//\n\t\t\t\t// 1- We consider this a NOT blocking issue.\n\t\t\t\t// 2- Type-id is not incremented\n\t\t\t\t// 3- Data Types cache is cleared: next sendData call\n\t\t\t\t//    will send data types again.\n\n\t\t\t\tstring errorMsg = 
errorMessageHandler(e.what());\n\n\t\t\t\tLogger::getLogger()->warn(\"Sending JSON readings, \"\n\t\t\t\t\t\t\t\"not blocking issue: %s - %s %s\",\n\t\t\t\t\t\t\terrorMsg.c_str(),\n\t\t\t\t\t\t\tm_sender.getHostPort().c_str(),\n\t\t\t\t\t\t\tm_path.c_str());\n\n\t\t\t\t// Extract assetName from error message\n\t\t\t\tstring assetName;\n\t\t\t\tif (m_PIServerEndpoint == ENDPOINT_CR)\n\t\t\t\t{\n\t\t\t\t\tassetName = OMF::getAssetNameFromError(e.what());\n\t\t\t\t}\n\t\t\t\telse if (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t\t\t\t{\n\t\t\t\t\t// Currently not implemented/supported as PI WEB API does not\n\t\t\t\t\t// report in the error message the asset causing the problem\n\t\t\t\t\tassetName = \"\";\n\t\t\t\t}\n\n\t\t\t\tif (assetName.empty())\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"Sending JSON readings, \"\n\t\t\t\t\t\t\t\t\t\t\t\"not blocking issue: assetName not found in error message, \"\n\t\t\t\t\t\t\t\t\t\t\t\" no types redefinition\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\t// Remove data and keep type-id\n\t\t\t\t\tOMF::clearCreatedTypes(assetName);\n\n\t\t\t\t\tLogger::getLogger()->warn(\"Sending JSON readings, \"\n\t\t\t\t\t\t\t\t\"not blocking issue: 'type-id' of assetName '%s' \"\n\t\t\t\t\t\t\t\t\"has been set to %d \"\n\t\t\t\t\t\t\t\t\"- %s %s\",\n\t\t\t\t\t\t\t\tassetName.c_str(),\n\t\t\t\t\t\t\t\tOMF::getAssetTypeId(assetName),\n\t\t\t\t\t\t\t\tm_sender.getHostPort().c_str(),\n\t\t\t\t\t\t\t\tm_path.c_str()\n\t\t\t\t\t\t\t\t);\n\t\t\t\t}\n\n\t\t\t\tdelete[] omfData;\n\n\t\t\t\t// It returns size instead of 0 as the rows in the block should be skipped in case of an error\n\t\t\t\t// as it is considered a not blocking ones.\n\t\t\t\treturn readings.size();\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring errorMsg = errorMessageHandler(e.what());\n\n\t\t\t\tLogger::getLogger()->error(\"Sending JSON data error : %s - %s 
%s\",\n\t\t\t\t\t\t\t\t\t\terrorMsg.c_str(),\n\t\t\t\t\t\t\t\t\t\tm_sender.getHostPort().c_str(),\n\t\t\t\t\t\t\t\t\t\tm_path.c_str()\n\t\t\t\t\t\t\t\t\t\t);\n\t\t\t\tdelete[] omfData;\n\t\t\t}\n\n\t\t\t// Failure\n\t\t\treturn 0;\n\t\t}\n\t\tcatch (const Unauthorized &e)\n\t\t{\n\t\t\tLogger::getLogger()->error(MESSAGE_UNAUTHORIZED);\n\t\t\tdelete[] omfData;\n\t\t\treturn 0;\n\t\t}\n\t\tcatch (const Conflict& e)\n\t\t{\n\t\t\thandleRESTException(e, \"Conflict sending Data\");\n\n\t\t\tstd::vector<std::pair<std::string, std::string>> links;\n\t\t\tparseLinkData(omfData, links);\n\t\t\tLogLinks(409, \"creating Link\", links);\n\n\t\t\tLogger::getLogger()->warn(MESSAGE_PI_UNSTABLE, 409);\n\t\t\tm_PIstable = false;\n\t\t\tdelete[] omfData;\n\t\t\treturn 0;\n\t\t}\n\t\tcatch (const std::exception &e)\n\t\t{\n\t\t\thandleRESTException(e, \"Error sending Data\");\n\n\t\t\tstd::vector<std::pair<std::string, std::string>> links;\n\t\t\tparseLinkData(omfData, links);\n\t\t\tLogLinks(0, \"creating Link\", links);\n\n\t\t\tlinkedData.clearLALookup(readings, i, i + blockSize, m_delimiter);\n\t\t\tdelete[] omfData;\n\t\t\treturn 0;\n\t\t}\n\t} // end 'for' all blocks of Readings\n\n\t// Create the AF Links between assets if AF structure creation with linked types is requested\n\tif (sendLinkedTypes && m_sendFullStructure)\n\t{\n\t\tfor (Reading *reading : readings)\n\t\t{\n\t\t\tOMFHints *hints = NULL;\n\t\t\tDatapoint *hintsdp = reading->getDatapoint(\"OMFHint\");\n\t\t\tif (hintsdp)\n\t\t\t{\n\t\t\t\thints = new OMFHints(hintsdp->getData().toString());\n\t\t\t}\n\n\t\t\tm_assetName = ApplyPIServerNamingRulesObj(reading->getAssetName(), nullptr);\n\t\t\tauto lookup = m_linkedAssetState.find(m_assetName + m_delimiter);\n\t\t\tif (lookup->second.afLinkState() == false)\n\t\t\t{\n\t\t\t\t// If the hierarchy has not already been sent then send it\n\t\t\t\tif (!AFHierarchySent)\n\t\t\t\t{\n\t\t\t\t\tif (!handleAFHierarchy())\n\t\t\t\t\t{\n\t\t\t\t\t\tdelete 
hints;\n\t\t\t\t\t\treturn 0;\n\t\t\t\t\t}\n\t\t\t\t\tAFHierarchySent = true;\n\t\t\t\t}\n\n\t\t\t\tif (!sendAFLinks(*reading, hints))\n\t\t\t\t{\n\t\t\t\t\tdelete hints;\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t\t\tlookup->second.afLinkSent();\n\t\t\t}\n\n\t\t\tdelete hints;\n\t\t}\n\t}\n\n\t// Remove all assets supersetDataPoints\n\tOMF::unsetMapObjectTypes(m_SuperSetDataPoints);\n\n\t// Return number of sent readings to the caller\n\treturn readings.size();\n}\n\n/**\n * Adjust the error message according to the endpoint type\n *\n */\nstring OMF::errorMessageHandler(const string &msg)\n{\n\tstring errorMsg;\n\n\tif (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tPIWebAPI piWeb;\n\t\terrorMsg = piWeb.errorMessageHandler(msg);\n\n\t} else {\n\t\terrorMsg = msg;\n\t}\n\n\treturn(errorMsg);\n}\n\n\n/**\n * Send all the readings to the PI Server.\n * Note: this overload is never called.\n *\n * @param readings            A vector of readings data\n * @param skipSentDataTypes   Send data types only once (default is true)\n * @return                    Number of readings sent on success, 0 otherwise\n */\nuint32_t OMF::sendToServer(const vector<Reading>& readings,\n\t\t\t   bool skipSentDataTypes)\n{\n\t/*\n\t * Iterate over readings:\n\t * - Send/cache Types\n\t * - transform a reading to OMF format\n\t * - add OMF data to new vector\n\t */\n\tstring measurementId;\n\tOMFBuffer payload;\n\tpayload.append('[');\n\n\t// Fetch Reading data\n\tfor (vector<Reading>::const_iterator elem = readings.begin();\n\t\t\t\t\t\t    elem != readings.end();\n\t\t\t\t\t\t    ++elem)\n\t{\n\t\tbool sendDataTypes;\n\t\tOMFHints *hints = NULL;\n\n\t\tDatapoint *hintsdp = elem->getDatapoint(OMF_HINT);\n\t\tif (hintsdp)\n\t\t{\n\t\t\thints = new OMFHints(hintsdp->getData().toString());\n\t\t}\n\n\t\t// Create the key for dataTypes sending once\n\t\tm_assetName = ApplyPIServerNamingRulesObj((*elem).getAssetName(), nullptr);\n\t\tlong typeId = 
OMF::getAssetTypeId(m_assetName);\n\t\tstring key(m_assetName);\n\n\t\tmeasurementId = generateMeasurementId(m_assetName);\n\n\t\tsendDataTypes = (skipSentDataTypes == true) ?\n\t\t\t\t // Send if not already sent\n\t\t\t\t !OMF::getCreatedTypes(key, (*elem), hints) :\n\t\t\t\t // Always send types\n\t\t\t\t true;\n\n\t\t// Handle the data types of the current reading\n\t\tif (sendDataTypes && !OMF::handleDataTypes(key, *elem, skipSentDataTypes, hints))\n\t\t{\n\t\t\t// Failure: release any hints before returning\n\t\t\tdelete hints;\n\t\t\treturn 0;\n\t\t}\n\n\t\t// Add into JSON string the OMF transformed Reading data\n\t\tif (OMFData(payload, *elem, measurementId, false, m_PIServerEndpoint, m_AFHierarchyLevel, hints).hasData())\n\t\t\tif (elem < (readings.end() - 1))\n\t\t\t\tpayload.append(',');\n\n\t\tdelete hints;\n\t}\n\n\tpayload.append(']');\n\n\t// Build headers for Readings data\n\tvector<pair<string, string>> readingData = OMF::createMessageHeader(\"Data\");\n\n\tconst char *omfData = payload.coalesce();\n\t// Build an HTTPS POST with 'readingData' headers and 'allReadings' JSON payload\n\t// Then get HTTPS POST ret code and return 0 to client on error\n\ttry\n\t{\n\t\tint res = m_sender.sendRequest(\"POST\", m_path, readingData, omfData);\n\n\t\tif  ( ! 
(res >= 200 && res <= 299) ) {\n\t\t\tLogger::getLogger()->error(\"Sending JSON readings data \"\n\t\t\t\t\t\t   \"- error: HTTP code |%d| - HostPort |%s| - path |%s| - OMF message |%s|\",\n\t\t\t\tres,\n\t\t\t\tm_sender.getHostPort().c_str(),\n\t\t\t\tm_path.c_str(),\n\t\t\t\tomfData);\n\n\t\t\tdelete[] omfData;\n\t\t\treturn 0;\n\t\t}\n\t}\n\tcatch (const std::exception& e)\n\t{\n\t\tLogger::getLogger()->error(\"Sending JSON readings data \"\n\t\t\t\t\t   \"- generic error: |%s| - HostPort |%s| - path |%s| - OMF message |%s|\",\n\t\t\t\t\t   e.what(),\n\t\t\t\t\t   m_sender.getHostPort().c_str(),\n\t\t\t\t\t   m_path.c_str(),\n\t\t\t\t\t   omfData);\n\n\t\tdelete[] omfData;\n\t\treturn 0;\n\t}\n\tdelete[] omfData;\n\n\t// Return number of sent readings to the caller\n\treturn readings.size();\n}\n\n/**\n * Send a single reading to the PI Server.\n * Note: this overload is never called.\n *\n * @param reading             A reading to send\n * @return                    1 on success, 0 otherwise\n */\nuint32_t OMF::sendToServer(const Reading& reading,\n\t\t\t   bool skipSentDataTypes)\n{\n\treturn OMF::sendToServer(&reading, skipSentDataTypes);\n}\n\n/**\n * Send a single reading pointer to the PI Server.\n * Note: this overload is never called.\n *\n * @param reading             A reading pointer to send\n * @return                    1 on success, 0 otherwise\n */\nuint32_t OMF::sendToServer(const Reading* reading,\n\t\t\t   bool skipSentDataTypes)\n{\n\tstring measurementId;\n\tOMFBuffer payload;\n\tpayload.append('[');\n\n\tm_assetName = ApplyPIServerNamingRulesObj(reading->getAssetName(), nullptr);\n\n\tstring key(m_assetName);\n\tmeasurementId = generateMeasurementId(m_assetName);\n\n\tDatapoint *hintsdp = reading->getDatapoint(\"OMFHint\");\n\tOMFHints *hints = NULL;\n\tif (hintsdp)\n\t{\n\t\thints = new OMFHints(hintsdp->getData().toString());\n\t}\n\tif (!OMF::handleDataTypes(key, *reading, skipSentDataTypes, 
hints))\n\t{\n\t\t// Failure: release any hints before returning\n\t\tdelete hints;\n\t\treturn 0;\n\t}\n\n\tlong typeId = OMF::getAssetTypeId(m_assetName);\n\t// Add into JSON string the OMF transformed Reading data\n\tOMFData(payload, *reading, measurementId, false, m_PIServerEndpoint, m_AFHierarchyLevel, hints);\n\tpayload.append(']');\n\n\t// Build headers for Readings data\n\tvector<pair<string, string>> readingData = OMF::createMessageHeader(\"Data\");\n\n\tconst char *omfData = payload.coalesce();\n\t// Build an HTTPS POST with 'readingData' headers and 'allReadings' JSON payload\n\t// Then get HTTPS POST ret code and return 0 to client on error\n\ttry\n\t{\n\n\t\tint res = m_sender.sendRequest(\"POST\", m_path, readingData, omfData);\n\n\t\tif  ( ! (res >= 200 && res <= 299) )\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Sending JSON readings data \"\n\t\t\t\t\t\t   \"- error: HTTP code |%d| - HostPort |%s| - path |%s| - OMF message |%s|\",\n\t\t\t\t\t\t   res,\n\t\t\t\t\t\t   m_sender.getHostPort().c_str(),\n\t\t\t\t\t\t   m_path.c_str(),\n\t\t\t\t\t\t   omfData);\n\t\t\tdelete[] omfData;\n\t\t\tdelete hints;\n\t\t\treturn 0;\n\t\t}\n\t}\n\tcatch (const std::exception& e)\n\t{\n\t\tstring errorMsg = errorMessageHandler(e.what());\n\n\t\tLogger::getLogger()->error(\"Sending JSON readings data \"\n\t\t\t\t\t\t\t\t   \"- generic error: %s - %s %s\",\n\t\t\t\t\t\t\t\t   errorMsg.c_str(),\n\t\t\t\t\t\t\t\t   m_sender.getHostPort().c_str(),\n\t\t\t\t\t\t\t\t   m_path.c_str() );\n\n\t\tdelete[] omfData;\n\t\tdelete hints;\n\t\treturn 0;\n\t}\n\n\tdelete[] omfData;\n\tdelete hints;\n\t// Return number of sent readings to the caller\n\treturn 1;\n}\n\n/**\n * Creates a vector of HTTP headers to be sent to Server\n *\n * @param type    The message type ('Type', 'Container', 'Data')\n * @param action  Action to execute, either \"create\" or \"delete\"\n * @return        A vector of HTTP Header string pairs\n */\nconst vector<pair<string, string>> OMF::createMessageHeader(const std::string& type, const std::string& action) const\n{\n\tvector<pair<string, string>> 
res;\n\n\tres.push_back(pair<string, string>(\"messagetype\", type));\n\tres.push_back(pair<string, string>(\"producertoken\", m_producerToken));\n\tres.push_back(pair<string, string>(\"omfversion\", m_OMFVersion));\n\tres.push_back(pair<string, string>(\"messageformat\", \"JSON\"));\n\tres.push_back(pair<string, string>(\"action\", action));\n\n\treturn  res; \n}\n\n/**\n * Creates the Type message for data type definition\n *\n * @param reading    A reading data\n * @return           Type JSON message as string\n */\nconst std::string OMF::createTypeData(const Reading& reading, OMFHints *hints)\n{\n\t// Build the Type data message (JSON Array)\n\n\tstring tData=\"[\";\n\n\tif (m_sendFullStructure)\n\t{\n\t\t// Add the Static data part\n\t\ttData.append(\"{ \\\"type\\\": \\\"object\\\", \\\"properties\\\": { \");\n\t\tfor (auto it = m_staticData->cbegin(); it != m_staticData->cend(); ++it)\n\t\t{\n\t\t\ttData.append(\"\\\"\");\n\t\t\ttData.append(ApplyPIServerNamingRulesObj(it->first.c_str(), nullptr) );\n\t\t\ttData.append(\"\\\": {\\\"type\\\": \\\"string\\\"},\");\n\t\t}\n\n\t\t// Connector relay / ODS / EDS\n\t\tif (m_PIServerEndpoint == ENDPOINT_CR  ||\n\t\t\tm_PIServerEndpoint == ENDPOINT_OCS ||\n\t\t\tm_PIServerEndpoint == ENDPOINT_ADH ||\n\t\t\tm_PIServerEndpoint == ENDPOINT_EDS\n\t\t   )\n\t\t{\n\t\t\ttData.append(\"\\\"Name\\\": { \\\"type\\\": \\\"string\\\", \\\"isindex\\\": true } }, \"\n\t\t\t\t\t\t \"\\\"classification\\\": \\\"static\\\", \\\"id\\\": \\\"\");\n\t\t}\n\t\telse if (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t\t{\n\t\t\ttData.append(\"\\\"Name\\\": { \\\"type\\\": \\\"string\\\", \\\"isname\\\": true }, \");\n\t\t\ttData.append(\"\\\"AssetId\\\": { \\\"type\\\": \\\"string\\\", \\\"isindex\\\": true } \");\n\t\t\ttData.append(\" }, \\\"classification\\\": \\\"static\\\", \\\"id\\\": \\\"\");\n\t\t}\n\n\t\t// Add type_id + '_' + asset_name + '_typename_sensor'\n\t\tOMF::setAssetTypeTag(m_assetName,\n\t\t\t\t\t 
\"typename_sensor\",\n\t\t\t\t\t tData);\n\n\t\ttData.append(\"\\\" }, \");\n\t}\n\n\t// Add the Dynamic data part\n\ttData.append(\" { \\\"type\\\": \\\"object\\\", \\\"properties\\\": {\");\n\n\t/* We add for each reading\n\t * the DataPoint name & type\n\t * type is 'integer' for INT\n\t * 'number' for FLOAT\n\t * 'string' for STRING\n\t */\n\n\tbool ret = true;\n\tconst vector<Datapoint*> data = reading.getReadingData();\n\n\t/**\n\t * This loop creates:\n\t * \"dataName\": {\"type\": \"dataType\"},\n\t */\n\tfor (vector<Datapoint*>::const_iterator it = data.begin(); it != data.end(); ++it)\n\t{\n\t\tstring dpName = (*it)->getName();\n\t\tif (dpName.compare(OMF_HINT) == 0)\n\t\t{\n\t\t\t// We never include OMF hints in the data we send to PI\n\t\t\tcontinue;\n\t\t}\n\t\tstring omfType;\n\t\tif (!isTypeSupported( (*it)->getData()))\n\t\t{\n\t\t\tomfType = OMF_TYPE_UNSUPPORTED;\n\t\t}\n\t\telse\n\t\t{\n\t        omfType = omfTypes[((*it)->getData()).getType()];\n\t\t}\n\t\tstring format = OMF::getFormatType(omfType);\n\t\tif (hints && (omfType == OMF_TYPE_FLOAT || omfType == OMF_TYPE_INTEGER))\n\t\t{\n\t\t\tconst vector<OMFHint *> omfHints = hints->getHints(dpName);\n\t\t\tfor (auto it = omfHints.cbegin(); it != omfHints.cend(); it++)\n\t\t\t{\n\t\t\t\tif (typeid(**it) == typeid(OMFNumberHint))\n\t\t\t\t{\n\t\t\t\t\tformat = (*it)->getHint();\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (typeid(**it) == typeid(OMFIntegerHint))\n\t\t\t\t{\n\t\t\t\t\tomfType = OMF_TYPE_INTEGER;\n\t\t\t\t\tformat = (*it)->getHint();\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t}\n\t\t}\n\n\t\tif (format.compare(OMF_TYPE_UNSUPPORTED) == 0)\n\t\t{\n\t\t\t//TO DO: ADD LOG\n\t\t\tret = false;\n\t\t\tcontinue;\n\t\t}\n\t\t// Add datapoint Name\n\t\ttData.append(\"\\\"\" + ApplyPIServerNamingRulesObj(dpName, nullptr) + \"\\\"\");\n\t\ttData.append(\": {\\\"type\\\": \\\"\");\n\t\t// Add datapoint Type\n\t\ttData.append(omfType);\n\n\t\ttData.append(\"\\\", \\\"name\\\": 
\\\"\");\n\t\ttData.append(ApplyPIServerNamingRulesObj(dpName, nullptr) );\n\n\t\t// Applies a format if it is defined\n\t\tif (! format.empty() ) {\n\n\t\t\ttData.append(\"\\\", \\\"format\\\": \\\"\");\n\t\t\ttData.append(format);\n\t\t}\n\n\t\ttData.append(\"\\\"}, \");\n\t}\n\n\t// Add time field\n\ttData.append(\"\\\"Time\\\": {\\\"type\\\": \\\"string\\\", \\\"isindex\\\": true, \\\"format\\\": \\\"date-time\\\"}}, \"\n\"\\\"classification\\\": \\\"dynamic\\\", \\\"id\\\": \\\"\");\n\n\tbool typeNameSet = false;\n\tif (hints)\n\t{\n\t\tconst vector<OMFHint *> omfHints = hints->getHints();\n\t\tfor (auto it = omfHints.cbegin(); it != omfHints.cend(); it++)\n\t\t{\n\t\t\tif (typeid(**it) == typeid(OMFTypeNameHint))\n\t\t\t{\n\t\t\t\t\tLogger::getLogger()->debug(\"Using OMF TypeName hint: %s\", (*it)->getHint().c_str());\n\t\t\t\ttData.append((*it)->getHint());\n\t\t\t\ttypeNameSet = true;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (!typeNameSet)\n\t{\n\t\t// Add type_id + '_' + asset_name + '__typename_measurement'\n\t\tOMF::setAssetTypeTag(m_assetName,\n\t\t\t\t     \"typename_measurement\",\n\t\t\t\t     tData);\n\t}\n\n\ttData.append(\"\\\" }]\");\n\n\t// Check we have to return empty data or not\n\tif (!ret && data.size() == 1)\n\t{\n\t\t// TODO: ADD LOGGING\n\t\treturn string(\"\");\n\t}\n\telse\n\t{\n\t\t// Return JSON string\n\t\treturn tData;\n\t}\n}\n\n\n/**\n * Creates the Container message for data type definition\n *\n * @param reading    A reading data\n * @return           Type JSON message as string\n */\nconst std::string OMF::createContainerData(const Reading& reading, OMFHints *hints)\n{\n\tstring assetName = m_assetName;\n\n\tstring measurementId;\n\n\t// Build the Container data (JSON Array)\n\tstring cData = \"[{\\\"typeid\\\": \\\"\";\n\n\tstring typeName = \"\";\n\tif (hints)\n\t{\n\t\tconst std::vector<OMFHint *> omfHints = hints->getHints();\n\t\tfor (auto it = omfHints.cbegin(); it != omfHints.cend(); it++)\n\t\t{\n\t\t\tif 
(typeid(**it) == typeid(OMFTypeNameHint))\n\t\t\t{\n\t\t\t\ttypeName = (*it)->getHint();\n\t\t\t\tLogger::getLogger()->debug(\"Using OMF TypeName hint: %s\", typeName.c_str());\n\t\t\t}\n\t\t}\n\t}\n\tif (typeName.length())\n\t{\n\t\tcData.append(typeName);\n\t}\n\telse\n\t{\n\t\t// Add type_id + '_' + asset_name + '__typename_measurement'\n\t\tOMF::setAssetTypeTag(assetName,\n\t\t\t\t     \"typename_measurement\",\n\t\t\t\t     cData);\n\t}\n\n\tmeasurementId = generateMeasurementId(assetName);\n\n\t// Apply any TagName hints to modify the containerid\n\tif (hints)\n\t{\n\t\tconst std::vector<OMFHint *> omfHints = hints->getHints();\n\t\tfor (auto it = omfHints.cbegin(); it != omfHints.cend(); it++)\n\t\t{\n\t\t\tif (typeid(**it) == typeid(OMFTagNameHint))\n\t\t\t{\n\t\t\t\tmeasurementId = (*it)->getHint();\n\t\t\t\tLogger::getLogger()->debug(\"Using OMF TagName hint: %s\", measurementId.c_str());\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\tcData.append(\"\\\", \\\"id\\\": \\\"\" + measurementId);\n\tcData.append(\"\\\"}]\");\n\n\t// Return JSON string\n\treturn cData;\n}\n\n/**\n * Generate the container id for the given asset\n *\n * @param assetName  Asset for which the container id should be generated\n * @return           Container id for the requested asset\n */\nstd::string OMF::generateMeasurementId(const string& assetName)\n{\n\tstd::string measurementId;\n\n\tstring AFHierarchyPrefix;\n\tstring AFHierarchyLevel;\n\tlong namingScheme;\n\tlong typeId;\n\n\ttypeId = OMF::getAssetTypeId(assetName);\n\n\tnamingScheme = getNamingScheme(assetName);\n\n\tif (namingScheme == NAMINGSCHEME_COMPATIBILITY ||\n\t\tnamingScheme == NAMINGSCHEME_HASH)\n\t{\n\t\tmeasurementId = to_string(typeId) + \"measurement_\" + assetName;\n\n\t\t// Add the 1st level of AFHierarchy as a prefix to the name in case of PI Web API\n\t\tif (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t\t{\n\t\t\tretrieveAFHierarchyPrefixAssetName(assetName, AFHierarchyPrefix, 
AFHierarchyLevel);\n\n\t\t\tmeasurementId = AFHierarchyPrefix + \"_\" + measurementId;\n\t\t}\n\t} else {\n\t\tif (typeId > 1)\n\t\t{\n\t\t\tmeasurementId = to_string(typeId) + \"measurement_\" + assetName;\n\t\t} else {\n\n\t\t\tmeasurementId = assetName;\n\t\t}\n\n\t}\n\n\tLogger::getLogger()->debug(\"%s - namingScheme default :%ld: namingScheme applied :%ld:  assetName :%s: typeId :%ld: measurementId :%s:\",\n\t\t\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t\t\t   m_NamingScheme,\n\t\t\t\t\t\t\t   namingScheme,\n\t\t\t\t\t\t\t   assetName.c_str(), \n\t\t\t\t\t\t\t   typeId, \n\t\t\t\t\t\t\t   measurementId.c_str() );\n\n\treturn(measurementId);\n}\n\n\n/**\n * Generate a suffix for the given asset in relation to the selected naming schema and the value of the type id\n *\n * @param assetName  Asset for which the suffix should be generated\n * @param typeId     Type id of the asset\n * @return           Suffix to be used for the given asset\n */\nstd::string OMF::generateSuffixType(string &assetName, long typeId)\n{\n\tstd::string suffix;\n\tlong namingScheme;\n\n\tnamingScheme = getNamingScheme(assetName);\n\n\tif (namingScheme == NAMINGSCHEME_COMPATIBILITY ||\n\t\tnamingScheme == NAMINGSCHEME_SUFFIX)\n\t{\n\t\tsuffix = AF_TYPES_SUFFIX + to_string(typeId);\n\n\t} else {\n\t\tif (typeId > 1)\n\t\t{\n\t\t\tsuffix = AF_TYPES_SUFFIX + to_string(typeId);\n\t\t}\n\n\t}\n\n\tLogger::getLogger()->debug(\"%s - namingScheme default :%ld: namingScheme applied :%ld: typeId :%ld: suffix :%s:\",\n\t\t__FUNCTION__, \n\t\tm_NamingScheme,\n\t\tnamingScheme,\n\t\ttypeId, \n\t\tsuffix.c_str());\n\n\treturn(suffix);\n}\n\n\n/**\n * Creates the Static Data message for data type definition\n *\n * Note: type is 'Data'\n *\n * @param reading    A reading data\n * @return           Type JSON message as string\n */\nconst std::string OMF::createStaticData(const Reading& reading)\n{\n\tstring assetName;\n\t// Build the Static data (JSON Array)\n\tstring sData = 
\"[\";\n\n\tsData.append(\"{\\\"typeid\\\": \\\"\");\n\n\tassetName = m_assetName;\n\n\tlong typeId = getAssetTypeId(assetName);\n\n\t// Add type_id + '_' + asset_name + '_typename_sensor'\n\tOMF::setAssetTypeTag(assetName,\n\t\t\t     \"typename_sensor\",\n\t\t\t     sData);\n\n\tsData.append(\"\\\", \\\"values\\\": [{\");\n\tfor (auto it = m_staticData->cbegin(); it != m_staticData->cend(); ++it)\n\t{\n\t\tsData.append(\"\\\"\");\n\t\tsData.append(ApplyPIServerNamingRulesObj(it->first.c_str(), nullptr) );\n\t\tsData.append(\"\\\": \\\"\");\n\t\tsData.append(it->second.c_str());\n\t\tsData.append(\"\\\", \");\n\t}\n\tsData.append(\" \\\"Name\\\": \\\"\");\n\n\t// Add asset_name\n\t// Connector relay / ODS / EDS\n\tif (m_PIServerEndpoint == ENDPOINT_CR)\n\t{\n\t\tsData.append(assetName);\n\n\t}\n\telse if (m_PIServerEndpoint == ENDPOINT_OCS ||\n\t\t m_PIServerEndpoint == ENDPOINT_ADH ||\n\t         m_PIServerEndpoint == ENDPOINT_EDS)\n\t{\n\t\tsData.append(assetName);\n\n\t}\n\telse if (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tstring AFHierarchyPrefix;\n\t\tstring AFHierarchyLevel;\n\n\t\tretrieveAFHierarchyPrefixAssetName(assetName, AFHierarchyPrefix, AFHierarchyLevel);\n\n\t\tsData.append(assetName + generateSuffixType(assetName, typeId));\n\t\tsData.append(\"\\\", \\\"AssetId\\\": \\\"\");\n\t\tsData.append(\"A_\" + AFHierarchyPrefix + \"_\" + assetName + generateSuffixType(assetName, typeId) );\n\t}\n\n\tsData.append(\"\\\"}]}]\");\n\n\t// Return JSON string\n\treturn sData;\n}\n\n\n/**\n * Creates the Link Data message for data type definition\n *\n * Note: type is 'Data'\n *\n * @param reading    A reading data\n * @param AFHierarchyLevel\tThe AF element we are placing the reading in\n * @param AFHierarchyPrefix\tThe prefix we use for the AF Element\n * @param objectPrefix\tThe object prefix we are using for this asset\n * @param legacy     We are using legacy, complex types for this reading\n * @return           Type JSON message as string\n 
*/\nstd::string OMF::createLinkData(const Reading& reading,  std::string& AFHierarchyLevel, std::string&  AFHierarchyPrefix, std::string&  objectPrefix, OMFHints *hints, bool legacy)\n{\n\tstring targetTypeId;\n\n\tstring measurementId;\n\n\tstring assetName = m_assetName;\n\n\t// Build the Link data (JSON Array)\n\n\tlong typeId = getAssetTypeId(assetName);\n\n\n\tstring lData = \"{\\\"typeid\\\": \\\"__Link\\\", \\\"values\\\": [\";\n\n\t// Handles the structure for the Connector Relay\n\t// not supported by PI Web API\n\t// Connector relay / ADH / ODS / EDS\n\tif (m_PIServerEndpoint == ENDPOINT_CR  ||\n\t\tm_PIServerEndpoint == ENDPOINT_ADH ||\n\t\tm_PIServerEndpoint == ENDPOINT_OCS ||\n\t\tm_PIServerEndpoint == ENDPOINT_EDS\n\t\t)\n\t{\n\t\tlData.append(\"{\\\"source\\\": {\\\"typeid\\\": \\\"\");\n\n\t\t// Add type_id + '_' + asset_name + '__typename_sensor'\n\t\tOMF::setAssetTypeTag(assetName,\n\t\t\t\t     \"typename_sensor\",\n\t\t\t\t     lData);\n\n\t\tlData.append(\"\\\", \\\"index\\\": \\\"_ROOT\\\"},\");\n\t\tlData.append(\"\\\"target\\\": {\\\"typeid\\\": \\\"\");\n\n\t\t// Add type_id + '_' + asset_name + '__typename_sensor'\n\t\tOMF::setAssetTypeTag(assetName,\n\t\t\t\t     \"typename_sensor\",\n\t\t\t\t     lData);\n\n\t\tlData.append(\"\\\", \\\"index\\\": \\\"\");\n\n\t\t// Add asset_name\n\t\tlData.append(assetName);\n\n\t\tlData.append(\"\\\"}}\");\n\t}\n\telse if (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\t// Link the asset to the 1st level of AF hierarchy if the end point is PI Web API\n\n\t\tstring tmpStr = AF_HIERARCHY_1LEVEL_LINK;\n\n\t\tOMF::setAssetTypeTag(assetName, \"typename_sensor\", targetTypeId);\n\n\t\tStringReplace(tmpStr, \"_placeholder_src_type_\", AFHierarchyPrefix + \"_\" + AFHierarchyLevel + \"_typeid\");\n\t\tStringReplace(tmpStr, \"_placeholder_src_idx_\",  AFHierarchyPrefix + \"_\" + AFHierarchyLevel );\n\n\t\tif (legacy)\n\t\t{\n\t\t\tStringReplace(tmpStr, \"_placeholder_tgt_type_\", 
targetTypeId);\n\t\t\tStringReplace(tmpStr, \"_placeholder_tgt_idx_\", \"A_\" + objectPrefix + \"_\" + assetName +\n\t\t\t\tgenerateSuffixType(assetName, typeId) );\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Get the new asset name after hints are applied for the linked data messages\n\t\t\tstring newAssetName = assetName;\n\t\t\tif (hints)\n\t\t\t{\n\t\t\t\tconst std::vector<OMFHint *> omfHints = hints->getHints();\n\t\t\t\tfor (auto it = omfHints.cbegin(); it != omfHints.cend(); it++)\n\t\t\t\t{\n\t\t\t\t\tif (typeid(**it) == typeid(OMFTagNameHint))\n\t\t\t\t\t{\n\t\t\t\t\t\tstring hintValue = (*it)->getHint();\n\t\t\t\t\t\tLogger::getLogger()->debug(\"Using OMF TagName hint: %s for asset %s\",\n\t\t\t\t\t\t\t       hintValue.c_str(), assetName.c_str());\n\t\t\t\t\t\tnewAssetName = hintValue;\n\t\t\t\t\t}\n\t\t\t\t\tif (typeid(**it) == typeid(OMFTagHint))\n\t\t\t\t\t{\n\t\t\t\t\t\tstring hintValue = (*it)->getHint();\n\t\t\t\t\t\tLogger::getLogger()->debug(\"Using OMF Tag hint: %s for asset %s\",\n\t\t\t\t\t\t\t       hintValue.c_str(), assetName.c_str());\n\t\t\t\t\t\tnewAssetName = hintValue;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tStringReplace(tmpStr, \"_placeholder_tgt_type_\", \"FledgeAsset\");\n\t\t\tStringReplace(tmpStr, \"_placeholder_tgt_idx_\", newAssetName);\n\t\t}\n\n\t\tlData.append(tmpStr);\n\t}\n\n\n\tif (legacy)\n\t{\n\t\tlData.append(\",{\\\"source\\\": {\\\"typeid\\\": \\\"\");\n\n\t\t// Add type_id + '_' + asset_name + '__typename_sensor'\n\t\tOMF::setAssetTypeTag(assetName,\n\t\t\t\t     \"typename_sensor\",\n\t\t\t\t     lData);\n\n\t\tlData.append(\"\\\", \\\"index\\\": \\\"\");\n\n\t\tif (m_PIServerEndpoint == ENDPOINT_CR)\n\t\t{\n\t\t\t// Add asset_name\n\t\t\tlData.append(assetName);\n\t\t}\n\t\telse if (m_PIServerEndpoint == ENDPOINT_OCS ||\n\t\t\t\tm_PIServerEndpoint == ENDPOINT_ADH ||\n\t\t\t\t m_PIServerEndpoint == ENDPOINT_EDS)\n\t\t{\n\t\t\t// Add asset_name\n\t\t\tlData.append(assetName);\n\t\t}\n\t\telse if (m_PIServerEndpoint == 
ENDPOINT_PIWEB_API)\n\t\t{\n\t\t\tlData.append(\"A_\" + objectPrefix + \"_\" + assetName + generateSuffixType(assetName, typeId) );\n\t\t}\n\t\n\t\tmeasurementId = generateMeasurementId(assetName);\n\n\t\t// Apply any TagName hints to modify the containerid\n\t\tif (hints)\n\t\t{\n\t\t\tconst std::vector<OMFHint *> omfHints = hints->getHints();\n\t\t\tfor (auto it = omfHints.cbegin(); it != omfHints.cend(); it++)\n\t\t\t{\n\t\t\t\tif (typeid(**it) == typeid(OMFTagNameHint))\n\t\t\t\t{\n\t\t\t\t\tmeasurementId = (*it)->getHint();\n\t\t\t\t\tLogger::getLogger()->debug(\"Using OMF TagName hint: %s\", measurementId.c_str());\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tlData.append(\"\\\"}, \\\"target\\\": {\\\"containerid\\\": \\\"\" + measurementId);\n\n\t\tlData.append(\"\\\"}}\");\n\t}\n\tlData.append(\"]}\");\n\n\t// Return JSON string\n\treturn lData;\n}\n\n/**\n * Calculate the prefix to be used for AF objects and the last level of the hierarchies\n * from a given AF path\n *\n * @param path                   Path to evaluate\n * @param out/prefix\t\t     Calculated prefix\n * @param out/AFHierarchyLevel   Last level of the hierarchy evaluated from the path\n */\nvoid OMF::generateAFHierarchyPrefixLevel(string& path, string& prefix, string& AFHierarchyLevel)\n{\n\tstring pathFixed;\n\n\tAFHierarchyLevel = extractLastLevel(path, AFHierarchySeparator);\n\n\tpathFixed = StringSlashFix(path);\n\tprefix = generateUniquePrefixId(pathFixed);\n}\n\n\n/**\n * Retrieve from the map the prefix and the last level of the hierarchy from a given assetname\n *\n * @param assetName              Asset name to evaluate\n * @param out/prefix\t\t     Calculated prefix\n * @param out/AFHierarchyLevel   Last level of the hierarchy\n */\nvoid OMF::retrieveAFHierarchyPrefixAssetName(const string& assetName, string& prefix, string& AFHierarchyLevel)\n{\n\tstring AFHierarchy;\n\tstring prefixTmp;\n\n\t// Metadata Rules - Exist\n\tauto rule = 
m_AssetNamePrefix.find(assetName);\n\tif (rule != m_AssetNamePrefix.end())\n\t{\n\t\tAFHierarchy = std::get<0>(rule->second[0]);\n\t\tgenerateAFHierarchyPrefixLevel(AFHierarchy, prefixTmp, AFHierarchyLevel);\n\n\t\tprefix =std::get<1>(rule->second[0]);\n\n\t}\n\n}\n\n/**\n * Retrieve from the map the prefix and the hierarchy name from a given assetname\n *\n * @param path                   assetName to evaluate\n * @param out/prefix\t\t     Calculated prefix\n * @param out/AFHierarchyLevel   hierarchy name\n */\nvoid OMF::retrieveAFHierarchyFullPrefixAssetName(const string& assetName, string& prefix, string& AFHierarchy)\n{\n\tstring path;\n\t// Metadata Rules - Exist\n\tauto rule = m_AssetNamePrefix.find(assetName);\n\tif (rule != m_AssetNamePrefix.end())\n\t{\n\t\tAFHierarchy = std::get<0>(rule->second[0]);\n\t\tprefix =std::get<1>(rule->second[0]);\n\n\t}\n\n}\n\n/**\n * Handle the OMF hint AFLocation to define a position of the asset into the AF hierarchy\n *\n * @param assetName              AssetName to handle\n * @param OmfHintHierarchy\t\t Position of the asset into the AF hierarchy\n *\n * @return                       True if set asset will have a defined AF hierarchy position\n */\nbool OMF::createAFHierarchyOmfHint(const string& assetName, const  string &OmfHintHierarchy)\n{\n\tstring pathNew;\n\tstring prefix;\n\tstring AFHierarchyLevel;\n\n\tstring prefixStored;\n\tstring pathStored;\n\n\tbool ruleMatched = false;\n\n\tif (! 
OmfHintHierarchy.empty())\n\t{\n\n\t\tpathNew = OmfHintHierarchy;\n\n\t\tif (pathNew.at(0) != '/')\n\t\t{\n\t\t\t// relative  path\n\t\t\tpathNew = \"/\" + pathNew;\n\t\t}\n\t\tgenerateAFHierarchyPrefixLevel(pathNew, prefix, AFHierarchyLevel);\n\t\truleMatched = true;\n\n\t\tprefixStored = getHashStored (assetName);\n\t\tpathStored = getPathStored (assetName);\n\n\t\tLogger::getLogger()->debug(\"%s - OMF hint hierarchy - assetName :%s: path :%s: pathStored :%s: prefixStored :%s: \"\n\t\t\t, __FUNCTION__\n\t\t\t, assetName.c_str()\n\t\t\t, pathNew.c_str()\n\t\t\t, pathStored.c_str()\n\t\t\t, prefixStored.c_str()\n\t\t\t);\n\n\t\tif (find(m_afhHierarchyAlreadyCreated.begin(), m_afhHierarchyAlreadyCreated.end(), pathNew) == m_afhHierarchyAlreadyCreated.end()){\n\n\t\t\tLogger::getLogger()->debug(\"%s - New path requested :%s:\", __FUNCTION__, pathNew.c_str());\n\n\t\t\tif (!sendAFHierarchy(pathNew.c_str()))\n\t\t\t{\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\n\t\tif (pathStored.compare(\"\") == 0)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"%s - New path for the assetName :%s: path :%s:\", __FUNCTION__, assetName.c_str(), pathNew.c_str());\n\n\t\t\tauto item = make_pair(pathNew, prefix);\n\t\t\tm_AssetNamePrefix[assetName].push_back(item);\n\n\t\t} else {\n\t\t\tif (pathNew.compare(pathStored) != 0) {\n\n\t\t\t\tLogger::getLogger()->debug(\"%s - path changed for the assetName :%s: path :%s: previous path :%s:\"\n\t\t\t\t\t\t\t\t\t\t   , __FUNCTION__\n\t\t\t\t\t\t\t\t\t\t   , assetName.c_str()\n\t\t\t\t\t\t\t\t\t\t   , pathNew.c_str()\n\t\t\t\t\t\t\t\t\t\t   , pathStored.c_str());\n\n\t\t\t\tif (!deleteAssetAFH(assetName, pathStored))\n\t\t\t\t{\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\n\t\t\t\tauto item = make_pair(pathNew, prefix);\n\t\t\t\tm_AssetNamePrefix[assetName].clear();\n\t\t\t\tm_AssetNamePrefix[assetName].push_back(item);\n\t\t\t\tsetPathStored (assetName, pathNew);\n\n\t\t\t\tif (!createAssetAFH(assetName, pathNew))\n\t\t\t\t{\n\t\t\t\t\treturn 
false;\n\t\t\t\t}\n\n\t\t\t} else {\n\t\t\t\tLogger::getLogger()->debug(\"%s - Same path for the assetName :%s: path :%s:\", __FUNCTION__, assetName.c_str(), pathNew.c_str());\n\t\t\t}\n\t\t\t\n\t\t}\n\n\t}\n\n\t// For debug\n//\tLogger::getLogger()->debug(\"%s - Hierarchy asset start\", __FUNCTION__);\n//\tfor (auto item=m_AssetNamePrefix.begin(); item!=m_AssetNamePrefix.end(); ++item)\n//\t{\n//\t\tauto v = item->second;\n//\n//\t\tfor(auto  arrayItem : v) {\n//\n//\t\t\tLogger::getLogger()->debug(\"%s - Hierarchy asset :%s: hash :%s: path :%s:\", __FUNCTION__, item->first.c_str(), arrayItem.first.c_str(), arrayItem.second.c_str());\n//\t\t}\n//\n//\t}\n\n\treturn (ruleMatched);\n}\n\n/**\n * Extracts a variable and its elements from a string; the variable has the shape ${room:unknown}\n *\n * @param strToHandle   Source string from which the variable should be extracted\n * @param variable      Variable found in the form ${room:unknown}\n * @param value         Name of the variable, the left part ('room' in ${room:unknown})\n * @param defaultValue  Default value of the variable, the right part ('unknown' in ${room:unknown})\n * @return              True if a variable is found in the source string\n */\nbool OMF::extractVariable(string &strToHandle, string &variable, string &value, string &defaultValue)\n{\n\tbool found;\n\tsize_t pos1, pos2, pos3;\n\n\tfound = false;\n\tvariable =\"\";\n\tvalue =\"\";\n\tdefaultValue =\"\";\n\n\tpos1 = strToHandle.find(\"${\");\n\tif (pos1 !=std::string::npos)\n\t{\n\t\tpos3 = strToHandle.find(\"}\", pos1);\n\t\tpos2 = strToHandle.find(\":\", pos1);\n\t\tif ( (pos2 != std::string::npos) && (pos2 < pos3) )\n\t\t{\n\t\t\tvalue = strToHandle.substr(pos1 + 2, (pos2 - (pos1 + 2) ) );\n\n\t\t\tpos3 = strToHandle.find(\"}\", pos2);\n\t\t\tif (pos3 != std::string::npos)\n\t\t\t{\n\t\t\t\tfound = true;\n\n\t\t\t\tdefaultValue = strToHandle.substr(pos2 + 1, (pos3 - (pos2 + 1)));\n\t\t\t\tvariable = strToHandle.substr(pos1, 
pos3 - pos1 + 1);\n\t\t\t}\n\t\t} else {\n\t\t\tLogger::getLogger()->debug(\"OMF hierarchy hints doesn't have the default value in the metadata reference :%s:\", strToHandle.c_str());\n\n\t\t\t// No default value provided\n\t\t\tif (pos3 != std::string::npos)\n\t\t\t{\n\t\t\t\tfound = true;\n\n\t\t\t\tvalue = strToHandle.substr(pos1 + 2, (pos3 - (pos1 + 2) ) );\n\t\t\t\tvariable = strToHandle.substr(pos1, pos3 - pos1 + 1);\n\t\t\t}\n\t\t}\n\t}\n\n\treturn(found);\n}\n\n/**\n * Evaluate the AF hierarchy provided and expand the variables in the form ${room:unknown}\n *\n * @param reading       Asset reading that should be considered from which to extract the metadata values\n * @param AFHierarchy   AF hierarchy containing the variable to be expanded\n *\n * @return              True if variables were found and expanded\n */\nstd::string OMF::variableValueHandle(const Reading& reading, std::string &AFHierarchy) {\n\n\tstring AFHierarchyNew;\n\tstring propertyToSearch;\n\tstring propertyValue;\n\tstring propertyDefault;\n\tstring variableValue;\n\tstring propertyName;\n\tbool found;\n\tbool foundProperty;\n\n\tfound = false;\n\tAFHierarchyNew = AFHierarchy;\n\n\tif (AFHierarchyNew.find(\"${\") !=std::string::npos)\n\t{\n\n\t\twhile (extractVariable(AFHierarchyNew, variableValue , propertyToSearch,  propertyDefault))\n\t\t{\n\t\t\tauto values = reading.getReadingData();\n\t\t\tfoundProperty = false;\n\n\t\t\tfor (auto it = values.begin(); it != values.end(); it++)\n\t\t\t{\n\t\t\t\tpropertyName = (*it)->getName();\n\t\t\t\tif (propertyName.compare(propertyToSearch) == 0)\n\t\t\t\t{\n\t\t\t\t\tDatapointValue data = (*it)->getData();\n\t\t\t\t\tpropertyValue = data.toString();\n\t\t\t\t\tfound = true;\n\t\t\t\t\tfoundProperty = true;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (foundProperty) {\n\n\t\t\t\tStringReplaceAll(propertyValue, \"\\\"\", \"\");\n\t\t\t\tStringReplace(AFHierarchyNew, variableValue, propertyValue);\n\n\t\t\t} else {\n\t\t\t\tStringReplaceAll(propertyValue, 
\"\\\"\", \"\");\n\t\t\t\tStringReplace(AFHierarchyNew, variableValue, propertyDefault);\n\t\t\t}\n\t\t}\n\t}\n\n\tStringReplaceAll(AFHierarchyNew, \"//\", \"/\");\n\n\tLogger::getLogger()->debug(\"%s - Variables found :%s: AFHierarchy :%s: AFHierarchyNew :%s: variableValue :%s: propertyToSearch :%s: propertyValue :%s: propertyDefault :%s:\"\n\t\t,__FUNCTION__\n\t\t,found ? \"true\" : \"false\"\n\t\t,AFHierarchy.c_str()\n\t\t,AFHierarchyNew.c_str()\n\t\t,variableValue.c_str()\n\t\t,propertyToSearch.c_str()\n\t\t,propertyValue.c_str()\n\t\t,propertyDefault.c_str()\n\t\t);\n\n\treturn (AFHierarchyNew);\n}\n\n/**\n * Evaluate the maps containing the Names and Metadata rules to fill the map m_AssetNamePrefix,\n * which holds, for each asset name, the related prefix and hierarchy name\n *\n * @param assetName              Asset name to evaluate\n * @param reading\t\t         Reading row from which the datapoints are extracted to evaluate the rules\n */\nbool OMF::evaluateAFHierarchyRules(const string& assetName, const Reading& reading)\n{\n\tbool ruleMatched = false;\n\tbool ruleMatchedNames = false;\n\tbool success = true;\n\n\tstring pathInitial;\n\tstring path;\n\tbool changed;\n\n\t// names rules - Check if there are any rules defined or not\n\tif (!m_AFMapEmptyNames)\n\t{\n\t\tif (! 
m_NamesRules.empty())\n\t\t{\n\n\t\t\tstring pathNamingRules;\n\t\t\tstring prefix;\n\t\t\tstring AFHierarchyLevel;\n\n\t\t\tauto it = m_NamesRules.find(assetName);\n\t\t\tif (it != m_NamesRules.end())\n\t\t\t{\n\n\t\t\t\tpathInitial = it->second;\n\n\t\t\t\tif (pathInitial.at(0) != '/')\n\t\t\t\t{\n\t\t\t\t\t// relative  path\n\t\t\t\t\tpathInitial = \"/\" + m_DefaultAFLocation + \"/\" + pathInitial;\n\t\t\t\t}\n\t\t\t\tpath = variableValueHandle(reading, pathInitial);\n\t\t\t\tpath = ApplyPIServerNamingRulesPath(path, &changed);\n\n\t\t\t\tif (pathInitial.compare(path) != 0) {\n\n\t\t\t\t\tit->second = path;\n\t\t\t\t}\n\n\t\t\t\tgenerateAFHierarchyPrefixLevel(path, prefix, AFHierarchyLevel);\n\t\t\t\truleMatched = true;\n\t\t\t\truleMatchedNames = true;\n\n\t\t\t\tauto v =  m_AssetNamePrefix[assetName];\n\t\t\t\tauto item = make_pair(path, prefix);\n\n\t\t\t\tif(v.size() == 0 ||  std::find(v.begin(), v.end(), item) == v.end())\n\t\t\t\t{\n\t\t\t\t\tif (success = sendAFHierarchy(path.c_str()))\n\t\t\t\t\t{\n\t\t\t\t\t\tm_AssetNamePrefix[assetName].push_back(item);\n\n\t\t\t\t\t\tLogger::getLogger()->debug(\n\t\t\t\t\t\t\t\"%s - m_NamesRules size :%d: m_AssetNamePrefix size :%d:   vector size :%d: pathInitial :%s: path :%s: stored :%s:\"\n\t\t\t\t\t\t\t\t,__FUNCTION__\n\t\t\t\t\t\t\t\t,m_NamesRules.size()\n\t\t\t\t\t\t\t\t,m_AssetNamePrefix.size()\n\t\t\t\t\t\t\t\t,v.size()\n\t\t\t\t\t\t\t\t,pathInitial.c_str()\n\t\t\t\t\t\t\t\t,path.c_str()\n\t\t\t\t\t\t\t\t,it->second.c_str());\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tLogger::getLogger()->debug(\n\t\t\t\t\t\t\"%s - m_NamesRules skipped pathInitial :%s: path :%s: stored :%s:\"\n\t\t\t\t\t\t\t,__FUNCTION__\n\t\t\t\t\t\t\t,pathInitial.c_str()\n\t\t\t\t\t\t\t,path.c_str()\n\t\t\t\t\t\t\t,it->second.c_str());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// Meta rules - Check if there are any rules defined or not\n\tif (!m_AFMapEmptyMetadata)\n\t{\n\t\tauto 
values = reading.getReadingData();\n\n\t\t// Metadata Rules - Exist\n\t\tif (! m_MetadataRulesExist.empty() )\n\t\t{\n\t\t\tstring propertyName;\n\t\t\tstring prefix;\n\t\t\tstring AFHierarchyLevel;\n\n\t\t\tfor (auto it = values.begin(); it != values.end(); it++)\n\t\t\t{\n\t\t\t\tpropertyName = (*it)->getName();\n\t\t\t\tauto rule = m_MetadataRulesExist.find(propertyName);\n\t\t\t\tif (rule != m_MetadataRulesExist.end())\n\t\t\t\t{\n\t\t\t\t\tpathInitial = rule->second;\n\t\t\t\t\tif (pathInitial.at(0) != '/')\n\t\t\t\t\t{\n\t\t\t\t\t\tpathInitial = \"/\" + m_DefaultAFLocation + \"/\" + pathInitial;\n\t\t\t\t\t}\n\n\t\t\t\t\tpath = variableValueHandle(reading, pathInitial);\n\t\t\t\t\tpath = ApplyPIServerNamingRulesPath(path, &changed);\n\n\t\t\t\t\tgenerateAFHierarchyPrefixLevel(path, prefix, AFHierarchyLevel);\n\t\t\t\t\truleMatched = true;\n\n\t\t\t\t\tauto v =  m_AssetNamePrefix[assetName];\n\t\t\t\t\tauto item = make_pair(path, prefix);\n\n\t\t\t\t\tif(v.size() == 0 || std::find(v.begin(), v.end(), item) == v.end())\n\t\t\t\t\t{\n\t\t\t\t\t\tif (success = sendAFHierarchy(path.c_str()))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tm_AssetNamePrefix[assetName].push_back(item);\n\t\t\t\t\t\t\tLogger::getLogger()->debug(\"%s - m_MetadataRulesExist asset :%s: path added :%s: :%s:\" , __FUNCTION__, assetName.c_str(), pathInitial.c_str()  , path.c_str() );\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tLogger::getLogger()->debug(\"%s - m_MetadataRulesExist already created asset :%s: path added :%s: :%s:\" , __FUNCTION__, assetName.c_str(), pathInitial.c_str()  , path.c_str() );\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// Metadata Rules - NonExist\n\t\tif (! 
m_MetadataRulesNonExist.empty() )\n\t\t{\n\t\t\tstring propertyName;\n\t\t\tstring prefix;\n\t\t\tstring AFHierarchyLevel;\n\n\t\t\tbool found;\n\t\t\tstring rule;\n\t\t\tfor (auto it = m_MetadataRulesNonExist.begin(); it != m_MetadataRulesNonExist.end(); it++)\n\t\t\t{\n\t\t\t\tfound = false;\n\t\t\t\trule = it->first;\n\t\t\t\tpathInitial = it->second;\n\t\t\t\tfor (auto itL2 = values.begin(); found == false && itL2 != values.end(); itL2++)\n\t\t\t\t{\n\t\t\t\t\tpropertyName = (*itL2)->getName();\n\t\t\t\t\tif (propertyName.compare(rule) == 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tfound = true;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (!found)\n\t\t\t\t{\n\t\t\t\t\tif (pathInitial.at(0) != '/')\n\t\t\t\t\t{\n\t\t\t\t\t\t// relative  path\n\t\t\t\t\t\tpathInitial = \"/\" +m_DefaultAFLocation + \"/\" + pathInitial;\n\t\t\t\t\t}\n\t\t\t\t\tpath = variableValueHandle(reading, pathInitial);\n\t\t\t\t\tpath = ApplyPIServerNamingRulesPath(path, &changed);\n\n\t\t\t\t\tgenerateAFHierarchyPrefixLevel(path, prefix, AFHierarchyLevel);\n\t\t\t\t\truleMatched = true;\n\n\t\t\t\t\tauto v =  m_AssetNamePrefix[assetName];\n\t\t\t\t\tauto item = make_pair(path, prefix);\n\n\n\t\t\t\t\tif(v.size() == 0 || std::find(v.begin(), v.end(), item) == v.end())\n\t\t\t\t\t{\n\t\t\t\t\t\tif (success = sendAFHierarchy(path.c_str()))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tm_AssetNamePrefix[assetName].push_back(item);\n\t\t\t\t\t\t\tLogger::getLogger()->debug(\"%s - m_MetadataRulesNonExist - asset :%s: path added :%s: :%s:\" , __FUNCTION__, assetName.c_str(), pathInitial.c_str()  , path.c_str() );\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tLogger::getLogger()->debug(\"%s - m_MetadataRulesNonExist -  already created asset :%s: path added :%s: :%s:\" , __FUNCTION__, assetName.c_str(), pathInitial.c_str()  , path.c_str() );\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// Metadata Rules - equal\n\t\tif ( ! 
m_MetadataRulesEqual.empty() )\n\t\t{\n\t\t\tstring propertyName;\n\t\t\tstring prefix;\n\t\t\tstring AFHierarchyLevel;\n\n\t\t\tbool found;\n\t\t\tstring rule;\n\t\t\tfor (auto it = m_MetadataRulesEqual.begin(); it != m_MetadataRulesEqual.end(); it++)\n\t\t\t{\n\t\t\t\tfound = false;\n\t\t\t\trule = it->first;\n\t\t\t\tfor (auto itL2 = values.begin(); found == false && itL2 != values.end(); itL2++)\n\t\t\t\t{\n\t\t\t\t\tpropertyName = (*itL2)->getName();\n\t\t\t\t\tDatapointValue data = (*itL2)->getData();\n\t\t\t\t\tstring dataValue = data.toString();\n\t\t\t\t\tStringStripQuotes(dataValue);\n\t\t\t\t\tif (propertyName.compare(rule) == 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tfor (auto itL3 = it->second.begin(); found == false && itL3 != it->second.end(); itL3++)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstring value = itL3->first;\n\t\t\t\t\t\t\tStringStripQuotes(value);\n\t\t\t\t\t\t\tpathInitial = itL3->second;\n\t\t\t\t\t\t\tif (value.compare(dataValue) == 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tfound = true;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (found)\n\t\t\t\t{\n\t\t\t\t\tif (pathInitial.at(0) != '/')\n\t\t\t\t\t{\n\t\t\t\t\t\t// relative  path\n\t\t\t\t\t\tpathInitial = \"/\" + m_DefaultAFLocation + \"/\" + pathInitial;\n\t\t\t\t\t}\n\t\t\t\t\t\n\t\t\t\t\tpath = variableValueHandle(reading, pathInitial);\n\t\t\t\t\tpath = ApplyPIServerNamingRulesPath(path, &changed);\n\t\t\t\t\t\n\t\t\t\t\tgenerateAFHierarchyPrefixLevel(path, prefix, AFHierarchyLevel);\n\t\t\t\t\truleMatched = true;\n\n\t\t\t\t\tauto v =  m_AssetNamePrefix[assetName];\n\t\t\t\t\tauto item = make_pair(path, prefix);\n\n\n\t\t\t\t\tif(v.size() == 0 || std::find(v.begin(), v.end(), item) == v.end())\n\t\t\t\t\t{\n\t\t\t\t\t\tif (success = sendAFHierarchy(path.c_str()))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tm_AssetNamePrefix[assetName].push_back(item);\n\t\t\t\t\t\t\tLogger::getLogger()->debug(\"%s - m_MetadataRulesEqual asset :%s: path added :%s: :%s:\" , __FUNCTION__, assetName.c_str(), 
pathInitial.c_str()  , path.c_str() );\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tLogger::getLogger()->debug(\"%s - m_MetadataRulesEqual already created asset :%s: path added :%s: :%s:\" , __FUNCTION__, assetName.c_str(), pathInitial.c_str()  , path.c_str() );\n\t\t\t\t\t}\n\t\t\t\t\t\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// Metadata Rules - Not equal\n\t\tif ( ! m_MetadataRulesNotEqual.empty() )\n\t\t{\n\t\t\tstring propertyName;\n\t\t\tstring prefix;\n\t\t\tstring AFHierarchyLevel;\n\t\t\tstring rule;\n\t\t\tbool NotEqual;\n\n\t\t\tfor (auto it = m_MetadataRulesNotEqual.begin(); it != m_MetadataRulesNotEqual.end(); it++)\n\t\t\t{\n\t\t\t\tNotEqual = false;\n\t\t\t\trule = it->first;\n\t\t\t\tfor (auto itL2 = values.begin(); NotEqual == false && itL2 != values.end(); itL2++)\n\t\t\t\t{\n\t\t\t\t\tpropertyName = (*itL2)->getName();\n\n\t\t\t\t\tif (propertyName.compare(rule) == 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tDatapointValue data = (*itL2)->getData();\n\t\t\t\t\t\tstring dataValue = data.toString();\n\t\t\t\t\t\tStringStripQuotes(dataValue);\n\n\t\t\t\t\t\tfor (auto itL3 = it->second.begin(); NotEqual == false && itL3 != it->second.end(); itL3++)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstring value = itL3->first;\n\t\t\t\t\t\t\tpathInitial = itL3->second;\n\t\t\t\t\t\t\tStringStripQuotes(value);\n\n\t\t\t\t\t\t\tif (value.compare(dataValue) != 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tNotEqual = true;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (NotEqual)\n\t\t\t\t{\n\t\t\t\t\tif (pathInitial.at(0) != '/')\n\t\t\t\t\t{\n\t\t\t\t\t\t// relative  path\n\t\t\t\t\t\tpathInitial = \"/\" + m_DefaultAFLocation + \"/\" + pathInitial;\n\t\t\t\t\t}\n\t\t\t\t\tpath = variableValueHandle(reading, pathInitial);\n\t\t\t\t\tpath = ApplyPIServerNamingRulesPath(path, &changed);\n\n\t\t\t\t\tgenerateAFHierarchyPrefixLevel(path, prefix, AFHierarchyLevel);\n\t\t\t\t\truleMatched = 
true;\n\n\t\t\t\t\tauto v =  m_AssetNamePrefix[assetName];\n\t\t\t\t\tauto item = make_pair(path, prefix);\n\n\n\t\t\t\t\tif(v.size() == 0 || std::find(v.begin(), v.end(), item) == v.end())\n\t\t\t\t\t{\n\t\t\t\t\t\tif (success = sendAFHierarchy(path.c_str()))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tm_AssetNamePrefix[assetName].push_back(item);\n\t\t\t\t\t\t\tLogger::getLogger()->debug(\"%s - m_MetadataRulesNotEqual asset :%s: path added :%s: :%s:\" , __FUNCTION__, assetName.c_str(), pathInitial.c_str()  , path.c_str() );\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tLogger::getLogger()->debug(\"%s - m_MetadataRulesNotEqual already created asset :%s: path added :%s: :%s:\" , __FUNCTION__, assetName.c_str(), pathInitial.c_str()  , path.c_str() );\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// If no rules matched the AF default location\n\tif ( ! ruleMatched )\n\t{\n\t\tstring prefix;\n\t\tstring AFHierarchyLevel;\n\n\t\tgenerateAFHierarchyPrefixLevel(m_DefaultAFLocation, prefix, AFHierarchyLevel);\n\n\t\tauto item = make_pair(m_DefaultAFLocation, prefix);\n\t\tauto & curr_vec = m_AssetNamePrefix[assetName];\n\t\t\n\t\t// Insert new item into m_AssetNamePrefix[assetName] vector, if it doesn't exists already\n\t\tif (std::find(curr_vec.begin(), curr_vec.end(), item) == curr_vec.end())\n\t\t{\n\t\t\tm_AssetNamePrefix[assetName].push_back(item);\n\t\t\tLogger::getLogger()->debug(\"m_AssetNamePrefix.size()=%d; m_AssetNamePrefix[assetName].size()=%d, added m_AssetNamePrefix[%s]=(%s,%s)\", \n\t\t\t\t\t\t\t\tm_AssetNamePrefix.size(), m_AssetNamePrefix[assetName].size(), assetName.c_str(), m_DefaultAFLocation.c_str(), prefix.c_str());\n\t\t}\n\t}\n\n\treturn success;\n}\n\n/**\n * Set the tag ID_XYZ_typename_sensor|typename_measurement\n *\n * @param assetName    The assetName\n * @param tagName      The tagName to append\n * @param data         The string to append result tag\n */\nvoid 
OMF::setAssetTypeTag(const string& assetName,\n\t\t\t  const string& tagName,\n\t\t\t  string& data)\n{\n\tstring AFHierarchyPrefix;\n\tstring AFHierarchyLevel;\n\tstring keyComplete;\n\n\t// Add the 1st level of AFHierarchy as a prefix to the name in case of PI Web API\n\tif (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tretrieveAFHierarchyPrefixAssetName (assetName, AFHierarchyPrefix, AFHierarchyLevel);\n\t\tkeyComplete = AFHierarchyPrefix + \"_\" + assetName;\n\t}\n\telse\n\t{\n\t\tkeyComplete = assetName;\n\t}\n\n\tstring AssetTypeTag = to_string(this->getAssetTypeId(assetName)) +\n\t\t              \"_\" + assetName +\n\t\t              \"_\" + tagName;\n\n\t// Add the 1st level of AFHierarchy as a prefix to the name in case of PI Web API\n\tif (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tAssetTypeTag = \"A_\" + AFHierarchyPrefix + \"_\" + AFHierarchyLevel + \"_\" + AssetTypeTag;\n\t}\n\t// Add type-id + '_' + asset_name + '_' + tagName'\n\tdata.append(AssetTypeTag);\n}\n\n/**\n * Set the tag ID_XYZ_typename_sensor|typename_measurement using the path in which the asset was created\n *\n * @param assetName    The assetName\n * @param tagName      The tagName to append\n * @param data         The string to append result tag\n */\nvoid OMF::setAssetTypeTagNew(const string& assetName,\n\t\t\t\t\t\t  const string& tagName,\n\t\t\t\t\t\t  string& data)\n{\n\tstring path;\n\tstring assetPrefix;\n\tstring AFHierarchyLevel;\n\tstring keyComplete;\n\n\t// Add the 1st level of AFHierarchy as a prefix to the name in case of PI Web API\n\tif (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tpath=getPathOrigStored (assetName);\n\t\tgenerateAFHierarchyPrefixLevel(path, assetPrefix, AFHierarchyLevel);\n\t\tkeyComplete = assetPrefix + \"_\" + assetName;\n\t}\n\telse\n\t{\n\t\tkeyComplete = assetName;\n\t}\n\n\tstring AssetTypeTag = to_string(this->getAssetTypeId(assetName)) +\n\t\t\t\t\t\t  \"_\" + assetName +\n\t\t\t\t\t\t  \"_\" + tagName;\n\n\t// Add the 
1st level of AFHierarchy as a prefix to the name in case of PI Web API\n\tif (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tAssetTypeTag = \"A_\" + assetPrefix + \"_\" + AFHierarchyLevel + \"_\" + AssetTypeTag;\n\t}\n\t// Add type-id + '_' + asset_name + '_' + tagName\n\tdata.append(AssetTypeTag);\n}\n\n/**\n * Handles the OMF data types for the current Reading row\n * DataTypes are created and sent only once per assetName + typeId\n * if skipSending is true\n *\n * @param row            The current Reading row with data\n * @param skipSending    Send once or always the data types\n * @param hints          Any OMF hints to apply\n * @return               True if data types have been sent or already sent.\n *                       False if the sending has failed.\n */\nbool OMF::handleDataTypes(const string keyComplete, const Reading& row, bool skipSending, OMFHints *hints)\n{\n\t// Create the key for dataTypes sending once\n\tconst string key(skipSending ? (keyComplete) : \"\");\n\n\t// Check whether to create and send Data Types\n\tbool sendTypes = (skipSending == true) ?\n\t\t\t  // Send if not already sent\n\t\t\t  !OMF::getCreatedTypes(key, row, hints) :\n\t\t\t  // Always send types\n\t\t\t  true;\n\n\t// Handle the data types of the current reading\n\tif (sendTypes && !OMF::sendDataTypes(row, hints))\n\t{\n\t\t// Failure\n\t\treturn false;\n\t}\n\n\t// We have sent types, we might save this.\n\tif (skipSending && sendTypes)\n\t{\n\t\t// Save datatypes key\n\t\tOMF::setCreatedTypes(row, hints);\n\t}\n\n\t// Success\n\treturn true;\n}\n\n/**\n * Get from the m_formatTypes map the OMF format defined for the given OMF type\n *\n * @param key    The OMF type for which the format is requested\n * @return       The defined OMF format for the requested type\n *\n */\nstd::string OMF::getFormatType(const string &key) const\n{\n\tstring value;\n\n\ttry\n\t{\n\t\tauto pos = m_formatTypes.find(key);\n\t\tif (pos != m_formatTypes.end())\n\t\t{\n\t\t\tvalue = pos->second;\n\t\t}\n\t}\n\tcatch (const std::exception &e)\n\t{\n\t\tLogger::getLogger()->error(\"Unable to find the OMF 
format for the type :%s: - error: %s\", key.c_str(), e.what());\n\t}\n\n\treturn value;\n}\n\n/**\n * Add the key (OMF type + OMF format) into a map\n *\n * @param key    The OMF type, key of the map\n * @param value  The OMF format to set for the specific OMF type\n *\n */\nvoid OMF::setFormatType(const string &key, string &value)\n{\n\tm_formatTypes[key] = value;\n}\n\n/**\n * Set which PIServer component should be used for the communication\n */\nvoid OMF::setPIServerEndpoint(const OMF_ENDPOINT PIServerEndpoint)\n{\n\tm_PIServerEndpoint = PIServerEndpoint;\n}\n\n/**\n * Set the first level of hierarchy in Asset Framework in which the assets will be created, PI Web API only.\n */\nvoid OMF::setDefaultAFLocation(const string &DefaultAFLocation)\n{\n\tm_DefaultAFLocation = StringSlashFix(DefaultAFLocation);\n}\n\n/**\n * Set the rules to address where assets should be placed in the AF hierarchy.\n * Decodes the JSON and assigns the Names rules values to the internal structures\n *\n */\nbool OMF::HandleAFMapNames(Document& JSon)\n{\n\tbool success = true;\n\tstring name;\n\tstring value;\n\n\tm_NamesRules.clear();\n\n\tValue &JsonNames = JSon[\"names\"];\n\n\tfor (Value::ConstMemberIterator itr = JsonNames.MemberBegin(); itr != JsonNames.MemberEnd(); ++itr)\n\t{\n\t\tname = itr->name.GetString();\n\t\tvalue = itr->value.GetString();\n\n\t\tif (m_NamesRules.find(name) == m_NamesRules.end())\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"%s - m_NamesRules size :%d: Exist name :%s: value :%s:\"\n\t\t\t\t\t\t\t\t\t   \t,__FUNCTION__\n\t\t\t\t\t\t\t\t\t   \t,m_NamesRules.size()\n\t\t\t\t\t\t\t\t\t   \t,name.c_str()\n\t\t\t\t\t\t\t\t\t   \t,value.c_str());\n\n\t\t\tauto newMapValue = make_pair(name, value);\n\t\t\tm_NamesRules.insert(newMapValue);\n\t\t\tm_AFMapEmptyNames = false;\n\t\t} else {\n\t\t\tLogger::getLogger()->debug(\"%s - skipped m_NamesRules size :%d: Exist name :%s: value 
:%s:\"\n\t\t\t\t,__FUNCTION__\n\t\t\t\t,m_NamesRules.size()\n\t\t\t\t,name.c_str()\n\t\t\t\t,value.c_str());\n\t\t}\n\t}\n\n\treturn success;\n}\n\n/**\n * Set the rules to address where assets should be placed in the AF hierarchy.\n * Decodes the JSON and assign to the structures the values about the Metadata rules\n *\n */\nbool OMF::HandleAFMapMetedata(Document& JSon)\n{\n\tbool success = true;\n\tstring name;\n\tstring value;\n\n\tstring variable, variableValue, variableDefault;\n\n\tValue &JsonMetadata = JSon[\"metadata\"];\n\n\t// --- Handling Exist section\n\tif (JsonMetadata.HasMember(\"exist\"))\n\t{\n\t\tValue &JSonExist = JsonMetadata[\"exist\"];\n\n\t\tfor (Value::ConstMemberIterator itr = JSonExist.MemberBegin(); itr != JSonExist.MemberEnd(); ++itr)\n\t\t{\n\t\t\tbool changed = false;\n\t\t\tname = itr->name.GetString();\n\t\t\tvalue = itr->value.GetString();\n\n\t\t\tLogger::getLogger()->debug(\"%s - m_MetadataRulesExist :%s: value :%s:\", __FUNCTION__, name.c_str(), value.c_str());\n\t\t\tauto newMapValue = make_pair(name,value);\n\n\t\t\tm_MetadataRulesExist.insert (newMapValue);\n\n\t\t\textractVariable(value, variable, variableValue, variableDefault);\n\t\t\tif (! 
variableDefault.empty()) {\n\n\t\t\t\tm_MetadataRulesNonExist.insert (newMapValue);\n\t\t\t\tm_AFMapEmptyMetadata = false;\n\t\t\t}\n\n\t\t\tm_AFMapEmptyMetadata = false;\n\t\t}\n\t}\n\n\t// --- Handling Non Exist section\n\tif (JsonMetadata.HasMember(\"nonexist\"))\n\t{\n\t\tValue &JSonNonExist = JsonMetadata[\"nonexist\"];\n\n\t\tfor (Value::ConstMemberIterator itr = JSonNonExist.MemberBegin(); itr != JSonNonExist.MemberEnd(); ++itr)\n\t\t{\n\t\t\tname = itr->name.GetString();\n\t\t\tvalue = itr->value.GetString();\n\n\t\t\tLogger::getLogger()->debug(\"%s - m_MetadataRulesNonExist :%s: value :%s:\", __FUNCTION__, name.c_str(), value.c_str());\n\n\t\t\tauto newMapValue = make_pair(name,value);\n\n\t\t\tm_MetadataRulesNonExist.insert (newMapValue);\n\n\t\t\tm_AFMapEmptyMetadata = false;\n\t\t}\n\t}\n\n\n\t// --- Handling Equal section\n\tif (JsonMetadata.HasMember(\"equal\"))\n\t{\n\t\tstring property;\n\t\tstring value;\n\t\tstring path;\n\n\t\tValue &JSonEqual = JsonMetadata[\"equal\"];\n\n\t\tfor (Value::ConstMemberIterator itr = JSonEqual.MemberBegin(); itr != JSonEqual.MemberEnd(); ++itr)\n\t\t{\n\t\t\tproperty = itr->name.GetString();\n\n\t\t\tfor (Value::ConstMemberIterator itrL2 = itr->value.MemberBegin(); itrL2 != itr->value.MemberEnd(); ++itrL2)\n\t\t\t{\n\t\t\t\tvalue = itrL2->name.GetString();\n\t\t\t\tpath = itrL2->value.GetString();\n\n\t\t\t\tLogger::getLogger()->debug(\"%s - m_MetadataRulesEqual :%s: name :%s: value :%s:\", __FUNCTION__, property.c_str() , value.c_str(), path.c_str());\n\n\t\t\t\tauto item = make_pair(value,path);\n\t\t\t\tm_MetadataRulesEqual[property].push_back(item);\n\n\t\t\t\tm_AFMapEmptyMetadata = false;\n\t\t\t}\n\t\t}\n\t}\n\t// --- Handling Not Equal section\n\tif (JsonMetadata.HasMember(\"notequal\"))\n\t{\n\t\tstring property;\n\t\tstring value;\n\t\tstring path;\n\n\t\tValue &JSonEqual = JsonMetadata[\"notequal\"];\n\n\t\tfor (Value::ConstMemberIterator itr = JSonEqual.MemberBegin(); itr != JSonEqual.MemberEnd(); 
++itr)\n\t\t{\n\t\t\tproperty = itr->name.GetString();\n\n\t\t\tfor (Value::ConstMemberIterator itrL2 = itr->value.MemberBegin(); itrL2 != itr->value.MemberEnd(); ++itrL2)\n\t\t\t{\n\t\t\t\tvalue = itrL2->name.GetString();\n\t\t\t\tpath = itrL2->value.GetString();\n\n\t\t\t\tLogger::getLogger()->debug(\"%s - m_MetadataRulesNotEqual :%s: name :%s: value :%s:\", __FUNCTION__, property.c_str() , value.c_str(), path.c_str());\n\n\t\t\t\tauto item = make_pair(value,path);\n\t\t\t\tm_MetadataRulesNotEqual[property].push_back(item);\n\n\t\t\t\tm_AFMapEmptyMetadata = false;\n\t\t\t}\n\t\t}\n\t}\n\treturn success;\n}\n\n/**\n * Set the Names and Metadata rules to address where assets should be placed in the AF hierarchy.\n *\n */\nbool OMF::setAFMap(const string &AFMap)\n{\n\tbool success = true;\n\tDocument JSon;\n\n\tm_AFMapEmptyNames = true;\n\tm_AFMapEmptyMetadata = true;\n\tm_AFMap = AFMap;\n\n\tParseResult ok = JSon.Parse(m_AFMap.c_str());\n\tif (!ok)\n\t{\n\t\tLogger::getLogger()->error(\"setAFMap - Invalid Asset Framework Map, error :%s:\", GetParseError_En(JSon.GetParseError()));\n\t\treturn false;\n\t}\n\n\tif (JSon.HasMember(\"names\"))\n\t{\n\t\tHandleAFMapNames(JSon);\n\t}\n\tif (JSon.HasMember(\"metadata\"))\n\t{\n\t\tHandleAFMapMetedata(JSon);\n\t}\n\n\treturn success;\n}\n\n/**\n * Set the first level of hierarchy in Asset Framework in which the assets will be created, PI Web API only.\n */\nvoid OMF::setPrefixAFAsset(const string &prefixAFAsset)\n{\n\tm_prefixAFAsset = prefixAFAsset;\n}\n\n/**\n * Generate an unique prefix for AF objects\n */\nstring OMF::generateUniquePrefixId(const string &path)\n{\n\tstring prefix;\n\n\tstd::size_t hierarchyHash = std::hash<std::string>{}(path);\n\tprefix = std::to_string(hierarchyHash);\n\n\treturn prefix;\n}\n\n/**\n * Set the list of errors considered not blocking in the communication\n * with the PI Server\n */\nvoid OMF::setNotBlockingErrors(std::vector<std::string>& notBlockingErrors)\n{\n\n\tm_notBlockingErrors = 
notBlockingErrors;\n}\n\n\n/**\n * Increment type-id\n */\nvoid OMF::incrementTypeId()\n{\n\t++m_typeId;\n}\n\n/**\n * Clear OMF types cache\n */\nvoid OMF::clearCreatedTypes()\n{\n\tif (m_OMFDataTypes)\n\t{\n\t\tm_OMFDataTypes->clear();\n\t}\n}\n\n/**\n * Check for invalid/redefinition data type error\n *\n * @param message       Server reply message for data type creation\n * @return              True for data type error, false otherwise\n */\nbool OMF::isDataTypeError(const char* message)\n{\n\tif (message)\n\t{\n\t\tstring serverReply(message);\n\n\t\tfor(string &item : m_notBlockingErrors) {\n\n\t\t\tif (serverReply.find(item) != std::string::npos)\n\t\t\t{\n\t\t\t\treturn true;\n\t\t\t}\n\t\t}\n\t}\n\treturn false;\n}\n\n/**\n * Send again Data Types of current reading data\n * with a new type-id\n *\n * NOTE: the m_typeId member variable value is incremented.\n *\n * @param reading       The current reading data\n * @return              True if data types with new-id\n *                      have been sent, false otherwise.\n */\nbool OMF::handleTypeErrors(const string& keyComplete, const Reading& reading, OMFHints *hints)\n{\n\tLogger::getLogger()->debug(\"handleTypeErrors keyComplete :%s:\", keyComplete.c_str());\n\n\tbool ret = true;\n\n\tstring assetName = m_assetName;\n\n\t// Reset change type-id indicator\n\tm_changeTypeId = false;\n\n\t// Increment per asset type-id in memory cache:\n\t// Note: if key is not found the global type-id is incremented\n\tOMF::incrementAssetTypeId(keyComplete);\n\n\t// Clear per asset data (but keep the type-id) if key found\n\t// or remove all data otherwise\n\tauto it = m_OMFDataTypes->find(keyComplete);\n\tif (it != m_OMFDataTypes->end())\n\t{\n\t\t// Clear the OMF types cache per asset, keep type-id\n\t\tOMF::clearCreatedTypes(keyComplete);\n\t}\n\telse\n\t{\n\t\t// Remove all cached data, any asset\n\t\tOMF::clearCreatedTypes();\n\t}\n\n\t// Force re-send data types with a new type-id\n\tif 
(!OMF::handleDataTypes(keyComplete, reading, false, hints))\n\t{\n\t\tLogger::getLogger()->error(\"Failure re-sending JSON dataType messages \"\n\t\t\t\t\t   \"with new type-id=%d for asset %s\",\n\t\t\t\t\t\t\t\t   OMF::getAssetTypeId(assetName),\n\t\t\t\t\t\t\t\t   assetName.c_str());\n\t\t// Failure\n\t\tret = false;\n\t}\n\n\treturn ret;\n}\n\n/**\n * Create a superset data map for each reading and found datapoints\n *\n * The output map is filled with a Reading object containing\n * all the datapoints found for each asset in the input reading set.\n * The datapoints have a fake value based on the datapoint type\n *  \n * @param    readings\t\tCurrent input readings data\n * @param    dataSuperSet\tMap to store all datapoints for an assetname\n */\nvoid OMF::setMapObjectTypes(const vector<Reading*>& readings,\n\t\t\t    std::map<std::string, Reading*>& dataSuperSet)\n{\n\t// Temporary map for [asset][datapoint] = type\n\tstd::map<string, map<string, string>> readingAllDataPoints;\n\n\t// Fetch ALL Reading pointers in the input vector\n\t// and create a map of [assetName][datapoint1 .. 
datapointN] = type\n\tfor (vector<Reading *>::const_iterator elem = readings.begin();\n\t\t\t\t\t\telem != readings.end();\n\t\t\t\t\t\t++elem)\n\t{\n\t\t// Get asset name\n\t\tstring assetName = ApplyPIServerNamingRulesObj((**elem).getAssetName(), nullptr);\n\n\t\t//string assetName = (**elem).getAssetName();\n\n\t\t// Get all datapoints\n\t\tconst vector<Datapoint*> data = (**elem).getReadingData();\n\t\t// Iterate through datapoints\n\t\tfor (vector<Datapoint*>::const_iterator it = data.begin();\n\t\t\t\t\t\t\tit != data.end();\n\t\t\t\t\t\t\t++it)\n\t\t{\n\t\t\tstring omfType;\n\t\t\tstring datapointName = (*it)->getName();\n\n\t\t\tif (!isTypeSupported((*it)->getData()))\n\t\t\t{\n\t\t\t\tomfType = OMF_TYPE_UNSUPPORTED;\n\t\t\t\tLogger::getLogger()->debug(\"%s - The type of the datapoint \" + assetName + \"/\" + datapointName + \" is unsupported, it will be ignored\", __FUNCTION__);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tomfType = omfTypes[((*it)->getData()).getType()];\n\n\t\t\t\t// if an OMF hint is applied the type may change\n\t\t\t\t{\n\t\t\t\t\tReading *reading = *elem;\n\n\t\t\t\t\t// Fetch and parse any OMFHint for this reading\n\t\t\t\t\tDatapoint *hintsdp = reading->getDatapoint(\"OMFHint\");\n\n\t\t\t\t\tif (hintsdp && (omfType == OMF_TYPE_FLOAT || omfType == OMF_TYPE_INTEGER))\n\t\t\t\t\t{\n\t\t\t\t\t\tOMFHints *hints = new OMFHints(hintsdp->getData().toString());\n\t\t\t\t\t\tconst vector<OMFHint *> omfHints = hints->getHints();\n\n\t\t\t\t\t\tfor (auto it = omfHints.cbegin(); it != omfHints.cend(); it++)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (typeid(**it) == typeid(OMFIntegerHint))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tomfType = OMF_TYPE_INTEGER;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tdelete hints;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tauto itr = readingAllDataPoints.find(assetName);\n\t\t\t\t// Asset not found in the map\n\t\t\t\tif (itr == readingAllDataPoints.end())\n\t\t\t\t{\n\t\t\t\t\t// Set type of current datapoint for 
assetName\n\t\t\t\t\treadingAllDataPoints[assetName][datapointName] = omfType;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\t// Asset found\n\t\t\t\t\tauto dpItr = (*itr).second.find(datapointName);\n\t\t\t\t\t// Datapoint not found\n\t\t\t\t\tif (dpItr == (*itr).second.end())\n\t\t\t\t\t{\n\t\t\t\t\t\t// Add datapointName/type to map with key assetName\n\t\t\t\t\t\t(*itr).second.emplace(datapointName, omfType);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tif ((*dpItr).second.compare(omfType) != 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Datapoint already set has changed type\n\t\t\t\t\t\t\tLogger::getLogger()->info(\"Datapoint '\" + datapointName + \\\n                                          \"' in asset '\" + assetName + \\\n                                          \"' has changed type from '\" + (*dpItr).second + \\\n                                          \" to \" + omfType);\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t// Update datapointName/type to map with key assetName\n\t\t\t\t\t\t// 1- remove element\n\t\t\t\t\t\t(*itr).second.erase(dpItr);\n\t\t\t\t\t\t// 2- Add new value\n\t\t\t\t\t\treadingAllDataPoints[assetName][datapointName] = omfType;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// Loop now only the elements found in the per asset types map\n\tfor (auto it = readingAllDataPoints.begin();\n\t\t  it != readingAllDataPoints.end();\n\t\t  ++it)\n\t{\n\t\tstring assetName = (*it).first;\n\t\tvector<Datapoint *> values;\n\t\t// Set fake datapoints values\n\t\tfor (auto dp = (*it).second.begin();\n\t\t\t  dp != (*it).second.end();\n\t\t\t  ++dp)\n\t\t{\n\t\t\tif ((*dp).second.compare(OMF_TYPE_FLOAT) == 0)\n\t\t\t{\n\t\t\t\tDatapointValue vDouble(0.1);\n\t\t\t\tvalues.push_back(new Datapoint((*dp).first, vDouble));\n\t\t\t}\n\t\t\telse if ((*dp).second.compare(OMF_TYPE_INTEGER) == 0)\n\t\t\t{\n\t\t\t\tDatapointValue vInt((long)1);\n\t\t\t\tvalues.push_back(new Datapoint((*dp).first, vInt));\n\t\t\t}\n\t\t\telse if ((*dp).second.compare(OMF_TYPE_STRING) == 
0)\n\t\t\t{\n\t\t\t\tDatapointValue vString(\"v_str\");\n\t\t\t\tvalues.push_back(new Datapoint((*dp).first, vString));\n\t\t\t}\n\t\t\telse if ((*dp).second.compare(OMF_TYPE_UNSUPPORTED) == 0)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->debug(\"%s - The asset '\" + assetName + \" has a datapoint having an unsupported type, it will be ignored\", __FUNCTION__);\n\n\t\t\t\t// Avoids forcing PI Server to handle unsupported datapoint types\n\t\t\t\t// std::vector<double> vData = {0};\n\t\t\t\t// DatapointValue vArray(vData);\n\t\t\t\t// values.push_back(new Datapoint((*dp).first, vArray));\n\t\t\t}\n\t\t}\n\n\t\t// Add the superset Reading data with fake values\n\t\tdataSuperSet.emplace(assetName, new Reading(assetName, values));\n\t}\n}\n\n/**\n * Cleanup the mapped object types for input data\n *\n * @param    dataSuperSet\tThe  mapped object to cleanup\n */\nvoid OMF::unsetMapObjectTypes(std::map<std::string, Reading*>& dataSuperSet) const\n{\n\t// Remove all assets supersetDataPoints\n\tfor (auto m = dataSuperSet.begin();\n\t\t  m != dataSuperSet.end();\n\t\t  ++m)\n\t{\n\t\t(*m).second->removeAllDatapoints();\n\t\tdelete (*m).second;\n\t}\n\tdataSuperSet.clear();\n}\n/**\n * Extract assetName from error message\n *\n * Currently handled cases\n * (1) $datasource + \".\" + $id + \"_\" + $assetName + \"_typename_measurement\" + ...\n * (2) $id + \"measurement_\" + $assetName\n *\n * @param    message\t\tOMF error message (JSON)\n * @return   The found assetName if found, or empty string\n */\nstring OMF::getAssetNameFromError(const char* message)\n{\n\tstring assetName;\n\tDocument error;\n\n\terror.Parse(message);\n\n\tif (!error.HasParseError() &&\n\t    error.HasMember(\"source\") &&\n\t    error[\"source\"].IsString())\n\t{\n\t\tstring tmp = error[\"source\"].GetString();\n\n\t\t// (1) $datasource + \".\" + $id + \"_\" + $assetName + \"_typename_measurement\" + ...\n\t\tsize_t found = tmp.find(\"_typename_measurement\");\n\t\tif (found != 
std::string::npos)\n\t\t{\n\t\t\ttmp = tmp.substr(0, found);\n\t\t\tfound = tmp.find_first_of(m_delimiter[0]);\n\t\t\tif (found != std::string::npos &&\n\t\t\t    found < tmp.length())\n\t\t\t{\n\t\t\t\ttmp = tmp.substr(found + 1);\n\t\t\t\tfound = tmp.find_first_of('_');\n\t\t\t\tif (found != std::string::npos &&\n\t\t\t\t    found < tmp.length())\n\t\t\t\t{\n\t\t\t\t\tassetName = tmp.substr(found + 1);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// (2) $id + \"measurement_\" + $assetName\n\t\t\tfound = tmp.find_first_of('_');\n\t\t\tif (found != std::string::npos &&\n\t\t\t    found < tmp.length())\n\t\t\t{\n\t\t\t\tassetName = tmp.substr(found + 1);\n\t\t\t}\n\t\t}\n\t}\n\n\treturn assetName;\n}\n\n/**\n * Return the asset type-id\n *\n * @param assetName\tThe asset name\n * @return\t\tThe found type-id\n *\t\t\tor the generic value\n */\nlong OMF::getAssetTypeId(const string& assetName)\n{\n\tlong typeId;\n\tstring keyComplete;\n\tstring AFHierarchyPrefix;\n\tstring AFHierarchyLevel;\n\n\t// Connector Relay / OCS / ADH / EDS\n\tif (m_PIServerEndpoint == ENDPOINT_CR  ||\n\t\tm_PIServerEndpoint == ENDPOINT_OCS ||\n\t\tm_PIServerEndpoint == ENDPOINT_ADH ||\n\t\tm_PIServerEndpoint == ENDPOINT_EDS\n\t\t)\n\t{\n\t\tkeyComplete = assetName;\n\t}\n\telse if (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tif (getNamingScheme(assetName) == NAMINGSCHEME_CONCISE) {\n\n\t\t\tkeyComplete = assetName;\n\t\t} else {\n\t\t\tretrieveAFHierarchyPrefixAssetName(assetName, AFHierarchyPrefix, AFHierarchyLevel);\n\t\t\tkeyComplete = AFHierarchyPrefix + \"_\" + assetName;\n\t\t}\n\t}\n\n\n\tif (!m_OMFDataTypes)\n\t{\n\t\t// Use current value of m_typeId\n\t\ttypeId = m_typeId;\n\t}\n\telse\n\t{\n\t\tauto it = m_OMFDataTypes->find(keyComplete);\n\t\tif (it != m_OMFDataTypes->end())\n\t\t{\n\t\t\t// Set the type-id of found element\n\t\t\ttypeId = 
((*it).second).typeId;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Use current value of m_typeId\n\t\t\ttypeId = m_typeId;\n\t\t}\n\t}\n\n\treturn typeId;\n}\n\n/**\n * Retrieve the naming scheme for the given asset, based on the selected endpoint,\n * the default naming scheme and the naming scheme of the asset itself\n *\n * @param assetName  Asset for which the naming scheme should be retrieved\n * @return           Naming scheme of the given asset\n */\nlong OMF::getNamingScheme(const string& assetName)\n{\n\tlong namingScheme;\n\tstring keyComplete;\n\tstring AFHierarchyPrefix;\n\tstring AFHierarchyLevel;\n\n\t// Connector Relay / OCS / ADH / EDS\n\tif (m_PIServerEndpoint == ENDPOINT_CR  ||\n\t\tm_PIServerEndpoint == ENDPOINT_OCS ||\n\t\tm_PIServerEndpoint == ENDPOINT_ADH ||\n\t\tm_PIServerEndpoint == ENDPOINT_EDS\n\t\t)\n\t{\n\t\tkeyComplete = assetName;\n\t}\n\telse if (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tretrieveAFHierarchyPrefixAssetName(assetName, AFHierarchyPrefix, AFHierarchyLevel);\n\t\tkeyComplete = AFHierarchyPrefix + \"_\" + assetName;\n\t}\n\n\n\tif (!m_OMFDataTypes)\n\t{\n\t\t// Use the default naming scheme\n\t\tnamingScheme = m_NamingScheme;\n\t}\n\telse\n\t{\n\t\tauto it = m_OMFDataTypes->find(keyComplete);\n\t\tif (it != m_OMFDataTypes->end())\n\t\t{\n\t\t\t// Use the naming scheme of the found element\n\t\t\tnamingScheme = ((*it).second).namingScheme;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tkeyComplete = assetName;\n\n\t\t\tauto it = m_OMFDataTypes->find(keyComplete);\n\t\t\tif (it != m_OMFDataTypes->end())\n\t\t\t{\n\t\t\t\t// Use the naming scheme of the found element\n\t\t\t\tnamingScheme = ((*it).second).namingScheme;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// Use the default naming scheme\n\t\t\t\tnamingScheme = m_NamingScheme;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn namingScheme;\n}\n\n/**\n * Retrieve the hash for the given asset in relation to the end point selected\n *\n * @param assetName  Asset for which the hash should be retrieved\n * @return     
      Hash of the given asset\n */\nstring OMF::getHashStored(const string& assetName)\n{\n\tstring hash;\n\tstring keyComplete;\n\tstring AFHierarchyPrefix;\n\tstring AFHierarchyLevel;\n\n\t// Connector relay / ADH / OCS / EDS\n\tif (m_PIServerEndpoint == ENDPOINT_CR  ||\n\t\tm_PIServerEndpoint == ENDPOINT_OCS ||\n\t\tm_PIServerEndpoint == ENDPOINT_ADH ||\n\t\tm_PIServerEndpoint == ENDPOINT_EDS\n\t\t)\n\t{\n\t\tkeyComplete = assetName;\n\t}\n\telse if (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tif (getNamingScheme(assetName) == NAMINGSCHEME_CONCISE) {\n\n\t\t\tkeyComplete = assetName;\n\t\t} else {\n\t\t\tkeyComplete = AFHierarchyPrefix + \"_\" + assetName;\n\t\t}\n\t}\n\n\n\tif (!m_OMFDataTypes)\n\t{\n\n\t\thash = \"\";\n\t}\n\telse\n\t{\n\t\tauto it = m_OMFDataTypes->find(keyComplete);\n\t\tif (it != m_OMFDataTypes->end())\n\t\t{\n\t\t\t// Get the stored AF hierarchy hash of the found element\n\t\t\thash = ((*it).second).afhHash;\n\t\t}\n\t\telse\n\t\t{\n\t\t\thash = \"\";\n\t\t}\n\t}\n\n\treturn hash;\n}\n\n/**\n * Retrieve the current AF hierarchy for the given asset\n *\n * @param assetName  Asset for which the path should be retrieved\n * @return           Path of the given asset\n */\nstring OMF::getPathStored(const string& assetName)\n{\n\tstring afHierarchy;\n\tstring keyComplete;\n\tstring AFHierarchyPrefix;\n\tstring AFHierarchyLevel;\n\n\t// Connector relay / ADH / OCS / EDS\n\tif (m_PIServerEndpoint == ENDPOINT_CR  ||\n\t\tm_PIServerEndpoint == ENDPOINT_ADH ||\n\t\tm_PIServerEndpoint == ENDPOINT_OCS ||\n\t\tm_PIServerEndpoint == ENDPOINT_EDS\n\t\t)\n\t{\n\t\tkeyComplete = assetName;\n\t}\n\telse if (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tif (getNamingScheme(assetName) == NAMINGSCHEME_CONCISE) {\n\n\t\t\tkeyComplete = assetName;\n\t\t} else {\n\t\t\tkeyComplete = AFHierarchyPrefix + \"_\" + assetName;\n\t\t}\n\t}\n\n\n\tif (!m_OMFDataTypes)\n\t{\n\t\tafHierarchy = \"\";\n\t}\n\telse\n\t{\n\t\tauto it = m_OMFDataTypes->find(keyComplete);\n\t\tif (it != 
m_OMFDataTypes->end())\n\t\t{\n\t\t\t// Get the stored AF hierarchy of the found element\n\t\t\tafHierarchy = ((*it).second).afHierarchy;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tafHierarchy = \"\";\n\t\t}\n\t}\n\n\treturn afHierarchy;\n}\n\n/**\n * Retrieve the AF hierarchy in which the given asset was created\n *\n * @param assetName  Asset for which the path should be retrieved\n * @return           Path of the given asset\n */\nstring OMF::getPathOrigStored(const string& assetName)\n{\n\tstring afHierarchy;\n\tstring keyComplete;\n\tstring AFHierarchyPrefix;\n\tstring AFHierarchyLevel;\n\n\t// Connector relay / ADH / OCS / EDS\n\tif (m_PIServerEndpoint == ENDPOINT_CR  ||\n\t\tm_PIServerEndpoint == ENDPOINT_ADH ||\n\t\tm_PIServerEndpoint == ENDPOINT_OCS ||\n\t\tm_PIServerEndpoint == ENDPOINT_EDS\n\t\t)\n\t{\n\t\tkeyComplete = assetName;\n\t}\n\telse if (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tif (getNamingScheme(assetName) == NAMINGSCHEME_CONCISE) {\n\n\t\t\tkeyComplete = assetName;\n\t\t} else {\n\t\t\tkeyComplete = AFHierarchyPrefix + \"_\" + assetName;\n\t\t}\n\t}\n\n\n\tif (!m_OMFDataTypes)\n\t{\n\n\t\tafHierarchy = \"\";\n\t}\n\telse\n\t{\n\t\tauto it = m_OMFDataTypes->find(keyComplete);\n\t\tif (it != m_OMFDataTypes->end())\n\t\t{\n\t\t\t// Get the original AF hierarchy of the found element\n\t\t\tafHierarchy = ((*it).second).afHierarchyOrig;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tafHierarchy = \"\";\n\t\t}\n\t}\n\n\treturn afHierarchy;\n}\n\n/**\n * Stores the current AF hierarchy for the given asset\n *\n * @param assetName    Asset for which the path should be stored\n * @param afHierarchy  Current AF hierarchy of the asset\n *\n * @return             True if the operation succeeded\n */\nbool OMF::setPathStored(const string& assetName, string &afHierarchy)\n{\n\tbool operationExecuted;\n\tstring keyComplete;\n\tstring AFHierarchyPrefix;\n\tstring AFHierarchyLevel;\n\n\t// Connector relay / ADH / OCS / EDS\n\tif (m_PIServerEndpoint == ENDPOINT_CR  ||\n\t\tm_PIServerEndpoint == ENDPOINT_ADH 
||\n\t\tm_PIServerEndpoint == ENDPOINT_OCS ||\n\t\tm_PIServerEndpoint == ENDPOINT_EDS\n\t\t)\n\t{\n\t\tkeyComplete = assetName;\n\t}\n\telse if (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tif (getNamingScheme(assetName) == NAMINGSCHEME_CONCISE) {\n\n\t\t\tkeyComplete = assetName;\n\t\t} else {\n\t\t\tkeyComplete = AFHierarchyPrefix + \"_\" + assetName;\n\t\t}\n\t}\n\n\toperationExecuted = false;\n\tif (!m_OMFDataTypes)\n\t{\n\t\toperationExecuted = false;\n\n\t}\n\telse\n\t{\n\t\tauto it = m_OMFDataTypes->find(keyComplete);\n\t\tif (it != m_OMFDataTypes->end())\n\t\t{\n\t\t\t// Set the type-id of found element\n\t\t\t((*it).second).afHierarchy = afHierarchy;\n\t\t\toperationExecuted = true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\toperationExecuted = false;\n\t\t}\n\t}\n\n\treturn operationExecuted;\n}\n\n/**\n * Increment the type-id for the given asset name\n *\n * If cached data pointer is NULL or asset name is not set\n * the global m_typeId is incremented.\n *\n * @param    keyComplete\t\tThe asset name\n *\t\t\t\twhich type-id sequence\n *\t\t\t\thas to be incremented.\n */\nvoid OMF::incrementAssetTypeId(const std::string& keyComplete)\n{\n\tlong typeId;\n\tif (!m_OMFDataTypes)\n        {\n\t\t// Increment current value of m_typeId\n\t\tOMF::incrementTypeId();\n        }\n\telse\n\t{\n\t\tauto it = m_OMFDataTypes->find(keyComplete);\n\t\tif (it != m_OMFDataTypes->end())\n\t\t{\n\t\t\t// Increment value of found type-id\n\t\t\t++((*it).second).typeId;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Increment current value of m_typeId\n\t\t\tOMF::incrementTypeId();\n\t\t}\n\t}\n}\n\n/**\n * Increment the type-id for the given asset name\n *\n * If cached data pointer is NULL or asset name is not set\n * the global m_typeId is incremented.\n *\n * @param    keyComplete\t\tThe asset name\n *\t\t\t\t                which type-id sequence\n *\t\t\t\t                has to be incremented.\n */\nvoid OMF::incrementAssetTypeIdOnly(const std::string& keyComplete)\n{\n\tlong 
typeId;\n\tif (m_OMFDataTypes)\n\t{\n\t\tauto it = m_OMFDataTypes->find(keyComplete);\n\t\tif (it != m_OMFDataTypes->end())\n\t\t{\n\t\t\t// Increment value of found type-id\n\t\t\t++((*it).second).typeId;\n\t\t}\n\t}\n}\n\n\n/**\n * Generate a 64 bit number containing a set of counts:\n * the number of datapoints in an asset and the number of datapoints of each type we support.\n *\n */\nunsigned long OMF::calcTypeShort(const Reading& row)\n{\n\tt_typeCount typeCount;\n\n\tint type;\n\n\tconst vector<Datapoint*> data = row.getReadingData();\n\tfor (vector<Datapoint*>::const_iterator it = data.begin();\n\t\t (it != data.end() &&\n\t\t  isTypeSupported((*it)->getData()));\n\t\t ++it)\n\t{\n\t\tstring dpName = (*it)->getName();\n\n\t\tif (!isTypeSupported((*it)->getData()))\n\t\t{\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (dpName.compare(OMF_HINT) == 0)\n\t\t{\n\t\t\t// We never include OMF hints in the data we send to PI\n\t\t\tcontinue;\n\t\t}\n\n\t\ttype = ((*it)->getData()).getType();\n\n\t\t// Integer is handled as float in the OMF integration\n\t\tif (type == DatapointValue::dataTagType::T_INTEGER)\n\t\t{\n\t\t\ttypeCount.cnt.tFloat++;\n\t\t}\n\n\t\tif (type == DatapointValue::dataTagType::T_FLOAT)\n\t\t{\n\t\t\ttypeCount.cnt.tFloat++;\n\t\t}\n\n\t\tif (type == DatapointValue::dataTagType::T_STRING)\n\t\t{\n\t\t\ttypeCount.cnt.tString++;\n\n\t\t}\n\t\ttypeCount.cnt.tTotal++;\n\n\t}\n\n\treturn typeCount.valueLong;\n}\n\n/**\n * Add the reading asset name key into a map.\n * That key is checked by getCreatedTypes in order\n * to send dataTypes only once\n *\n * @param row    The reading data row\n * @return       True on success, false if the map pointer is NULL\n */\nbool OMF::setCreatedTypes(const Reading& row, OMFHints *hints)\n{\n\tstring types;\n\tstring keyComplete;\n\tstring assetName;\n\tstring AFHierarchyPrefix;\n\tstring AFHierarchy;\n\n\tif (!m_OMFDataTypes)\n\t{\n\t\treturn false;\n\t}\n\n\tassetName = m_assetName;\n\tretrieveAFHierarchyFullPrefixAssetName(assetName, 
AFHierarchyPrefix, AFHierarchy);\n\n\t// Connector relay / ADH / OCS / EDS\n\tif (m_PIServerEndpoint == ENDPOINT_CR  ||\n\t\tm_PIServerEndpoint == ENDPOINT_ADH ||\n\t\tm_PIServerEndpoint == ENDPOINT_OCS ||\n\t\tm_PIServerEndpoint == ENDPOINT_EDS\n\t\t)\n\t{\n\t\tkeyComplete = m_assetName;\n\t}\n\telse if (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tif (getNamingScheme(assetName) == NAMINGSCHEME_CONCISE) {\n\n\t\t\tkeyComplete = assetName;\n\t\t} else {\n\t\t\tkeyComplete = AFHierarchyPrefix + \"_\" + assetName;\n\t\t}\n\t}\n\n\t// We may need to add the hint to the key if we have a TypeName hint\n\tif (hints)\n\t{\n\t\tconst vector<OMFHint *> omfHints = hints->getHints();\n\t\tfor (auto it = omfHints.cbegin(); it != omfHints.cend(); it++)\n\t\t{\n\t\t\tif (typeid(**it) == typeid(OMFTypeNameHint))\n\t\t\t{\n\t\t\t\tLogger::getLogger()->debug(\"Using OMF TypeName hint: %s\", (*it)->getHint().c_str());\n\t\t\t\tkeyComplete.append(\"_\" + (*it)->getHint());\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\n\tlong typeId = OMF::getAssetTypeId(keyComplete);\n\tconst vector<Datapoint*> data = row.getReadingData();\n\ttypes.append(\"{\");\n\tbool first = true;\n\tfor (vector<Datapoint*>::const_iterator it = data.begin();\n\t\t\t\t\t\t(it != data.end() &&\n\t\t\t\t\t\t isTypeSupported((*it)->getData()));\n\t\t\t\t\t\t++it)\n\t{\n\t\tstring dpName = (*it)->getName();\n\t\tif (dpName.compare(OMF_HINT) == 0)\n\t\t{\n\t\t\t// We never include OMF hints in the data we send to PI\n\t\t\tcontinue;\n\t\t}\n\t\tif (!first)\n\t\t{\n\t\t\ttypes.append(\", \");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tfirst = false;\n\t\t}\n\n\t\tstring omfType;\n\t\tif (!isTypeSupported((*it)->getData()))\n\t\t{\n\t\t\tomfType = OMF_TYPE_UNSUPPORTED;\n\t\t\tcontinue;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tomfType = omfTypes[((*it)->getData()).getType()];\n\t\t}\n\n\t\tstring format = OMF::getFormatType(omfType);\n\t\tif (hints && (omfType == OMF_TYPE_FLOAT || omfType == OMF_TYPE_INTEGER))\n\t\t{\n\t\t\tconst 
vector<OMFHint *> omfHints = hints->getHints(dpName);\n\t\t\tfor (auto it = omfHints.cbegin(); it != omfHints.cend(); it++)\n\t\t\t{\n\t\t\t\tif (typeid(**it) == typeid(OMFNumberHint))\n\t\t\t\t{\n\t\t\t\t\tformat = (*it)->getHint();\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (typeid(**it) == typeid(OMFIntegerHint))\n\t\t\t\t{\n\t\t\t\t\tomfType = OMF_TYPE_INTEGER;\n\t\t\t\t\tformat = (*it)->getHint();\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// Add datapoint Name\n\t\ttypes.append(\"\\\"\" + dpName + \"\\\"\");\n\t\ttypes.append(\": {\\\"type\\\": \\\"\");\n\t\t// Add datapoint Type\n\t\ttypes.append(omfType);\n\n\t\t// Applies a format if it is defined\n\t\tif (!format.empty())\n\t\t{\n\t\t\ttypes.append(\"\\\", \\\"format\\\": \\\"\");\n\t\t\ttypes.append(format);\n\t\t}\n\n\t\ttypes.append(\"\\\"}\");\n\t}\n\ttypes.append(\"}\");\n\n\tif (m_OMFDataTypes->find(keyComplete) == m_OMFDataTypes->end())\n\t{\n\t\t// New entry\n\t\tOMFDataTypes newData;\n\t\t// Start from default as we don't have anything in the cache\n\t\tnewData.typeId = m_typeId;\n\n\t\tnewData.types = types;\n\t\t(*m_OMFDataTypes)[keyComplete] = newData;\n\t}\n\telse\n\t{\n\t\t// Just update dataTypes and keep the typeId\n\t\t(*m_OMFDataTypes)[keyComplete].types = types;\n\t}\n\n\t(*m_OMFDataTypes)[keyComplete].typesShort = calcTypeShort(row);\n\t(*m_OMFDataTypes)[keyComplete].hintChkSum = hints ? 
hints->getChecksum() : 0;\n\n\t(*m_OMFDataTypes)[keyComplete].namingScheme     = m_NamingScheme;\n\t(*m_OMFDataTypes)[keyComplete].afhHash          = AFHierarchyPrefix;\n\t(*m_OMFDataTypes)[keyComplete].afHierarchy      = AFHierarchy;\n\t(*m_OMFDataTypes)[keyComplete].afHierarchyOrig  = AFHierarchy;\n\n\tLogger::getLogger()->debug(\"%s - keyComplete :%s: m_NamingScheme :%ld: AFHierarchyPrefix :%s: AFHierarchy :%s: \"\n\t\t\t\t, __FUNCTION__\n\t\t\t\t, keyComplete.c_str()\n\t\t\t\t, m_NamingScheme\n\t\t\t\t, AFHierarchyPrefix.c_str()\n\t\t\t\t, AFHierarchy.c_str() );\n\n\treturn true;\n}\n\n/**\n * Set a new value for global type-id\n *\n * new value is the maximum value of\n * type-id among all asset datatypes\n * or\n * the current value of m_typeId\n */\nvoid OMF::setTypeId()\n{\n\tlong maxId = m_typeId;\n\tfor (auto it = m_OMFDataTypes->begin();\n\t\t  it != m_OMFDataTypes->end();\n\t\t  ++it)\n\t{\n\t\tif ((*it).second.typeId > maxId)\n\t\t{\n\t\t\tmaxId = (*it).second.typeId;\n\t\t}\n\t}\n\tm_typeId = maxId;\n}\n\n/**\n * Clear OMF types cache for given asset name\n * but keep the type-id\n */\nvoid OMF::clearCreatedTypes(const string& keyComplete)\n{\n\tif (m_OMFDataTypes)\n\t{\n\t\tauto it = m_OMFDataTypes->find(keyComplete);\n\t\tif (it != m_OMFDataTypes->end())\n\t\t{\n\t\t\t// Just clear data types\n\t\t\t(*it).second.types = \"\";\n\t\t}\n\t}\n}\n\n/**\n * Check the key (assetName) is set and not empty\n * in the per asset data types cache.\n *\n * @param keyComplete    The data type key (assetName) from the Reading row\n * @return       True if the key exists and data value is not empty:\n *\t\t this means the dataTypes were already sent\n *\t\t Found key with empty value means the data types\n *\t\t must be sent again with the new type-id.\n *               Return false if the key is not found or found but empty.\n */\nbool OMF::getCreatedTypes(const string& keyComplete, const Reading& row, OMFHints 
*hints)\n{\n\tLogger::getLogger()->debug(\"OMF::getCreatedTypes: Key: %s Asset: %s (%s)\", keyComplete.c_str(), row.getAssetName().c_str(), DataPointNamesAsString(row).c_str());\n\n\tunsigned long typesDefinition;\n\tbool ret = false;\n\tbool found = false;\n\n\tt_typeCount typeStored;\n\tt_typeCount typeNew;\n\n\tif (!m_OMFDataTypes)\n\t{\n\t\tret = false;\n\t}\n\telse\n\t{\n\t\tauto it = m_OMFDataTypes->find(keyComplete);\n\t\tif (it != m_OMFDataTypes->end())\n\t\t{\n\t\t\tOMFDataTypes& type = it->second;\n\t\t\tret = ! type.types.empty();\n\t\t\tif (ret)\n\t\t\t{\n\t\t\t\t// Considers empty also the case \"{}\"\n\t\t\t\tif (type.types.compare(\"{}\") == 0)\n\t\t\t\t{\n\t\t\t\t\tret = false;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\t// The Connector Relay recreates the type only when an error is received from the PI-Server\n\t\t\t\t\t// not in advance\n\t\t\t\t\tif (m_PIServerEndpoint != ENDPOINT_CR)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (hints && type.hintChkSum != hints->getChecksum())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tret = false;\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Check if the defined type has changed respect the superset type\n\t\t\t\t\t\t\tReading* datatypeStructure = NULL;\n\n\t\t\t\t\t\t\tauto itSuper = m_SuperSetDataPoints.find(m_assetName);\n\n\t\t\t\t\t\t\tif (itSuper != m_SuperSetDataPoints.end())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tdatatypeStructure = (*itSuper).second;\n\n\t\t\t\t\t\t\t\t// Check if the types are changed\n\t\t\t\t\t\t\t\ttypeStored.valueLong = type.typesShort;\n\t\t\t\t\t\t\t\ttypeNew.valueLong = calcTypeShort(*datatypeStructure);\n\n\t\t\t\t\t\t\t\tif (typeNew.cnt.tTotal  > typeStored.cnt.tTotal ||\n\t\t\t\t\t\t\t\t\ttypeNew.cnt.tFloat  > typeStored.cnt.tFloat ||\n\t\t\t\t\t\t\t\t\ttypeNew.cnt.tString > typeStored.cnt.tString\n\t\t\t\t\t\t\t\t\t)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tret = 
false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tLogger::getLogger()->debug(\"OMF::getCreatedTypes: Id: %ld Tshort: %lu Types: %s\", type.typeId, type.typesShort, type.types.c_str());\n\t\t}\n\t}\n\treturn ret;\n}\n\n/**\n * Check whether input Datapoint type is supported by OMF class\n *\n * @param    dataPoint\t\tInput data\n * @return\t\t\tTrue if supported, false otherwise\n */ \n\nstatic bool isTypeSupported(DatapointValue& dataPoint)\n{\n\tswitch (dataPoint.getType())\n\t{\n\t\tcase DatapointValue::DatapointTag::T_FLOAT:\n\t\tcase DatapointValue::DatapointTag::T_INTEGER:\n\t\tcase DatapointValue::DatapointTag::T_STRING:\n\t\t\treturn true;\n\t\tdefault:\n\t\t\treturn false;\t\n\t}\n}\n\n/**\n * Find the best available exception message.\n * This will be either the Message from the OMF REST JSON response,\n * or an std::exception message.\n *\n * @param    exception\t\tstd::exception object\n * @param    error\t\t\tOMFError object to be populated\n * @return\t\t\t\t\tBest available exception message\n */\nstd::string OMF::getExceptionMessage(const std::exception &e, OMFError *error)\n{\n\tstd::string exceptionMessage = std::string(e.what());\n\tstd::string httpResponse = m_sender.getHTTPResponse();\n\n\t// Check if either the httpResponse or the std::exception message contain an OMF JSON response\n\tif (httpResponse.empty())\n\t{\n\t\terror->setFromHttpResponse(exceptionMessage);\n\t}\n\telse\n\t{\n\t\terror->setFromHttpResponse(httpResponse);\n\t}\n\n\t// If OMFError indicates it has messages, an OMF response JSON document must have been available.\n\t// Return the first message. 
If OMFError has no messages, return the std::exception message instead.\n\tif (error->hasMessages())\n\t{\n\t\treturn error->getMessage(0);\n\t}\n\telse\n\t{\n\t\treturn exceptionMessage;\n\t}\n}\n\n/**\n * Process an std::exception generated by an OMF REST call.\n *\n * @param    exception\t\tstd::exception object\n * @param    mainMessage\tMain message text for the logged message\n */\nvoid OMF::handleRESTException(const std::exception &e, const char *mainMessage)\n{\n\tOMFError error;\n\tstd::string errorMsg = getExceptionMessage(e, &error);\n\n\tif (error.hasMessages())\n\t{\n\t\terror.Log(mainMessage);\n\t\tCheckHttpCode(error.getHttpCode(), errorMsg);\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"%s, %s - %s %s\",\n\t\t\t\t\t\t\t\t   mainMessage,\n\t\t\t\t\t\t\t\t   errorMessageHandler(errorMsg).c_str(),\n\t\t\t\t\t\t\t\t   m_sender.getHostPort().c_str(),\n\t\t\t\t\t\t\t\t   m_path.c_str());\n\t\tCheckHttpCode(HTTPCodeFromErrorMessage(errorMsg), errorMsg);\n\t}\n\n\t// Check for any error messages that indicate a loss of connection\n\tint i = 0;\n\twhile (strlen(noConnectionErrorMessages[i]))\n\t{\n\t\tif (0 == strncmp(e.what(), noConnectionErrorMessages[i], strlen(noConnectionErrorMessages[i])))\n\t\t{\n\t\t\tm_connected = false;\n\t\t\tLogger::getLogger()->warn(\"Connection to the destination data archive has been lost\");\n\t\t\tbreak;\n\t\t}\n\t\ti++;\n\t}\n}\n\n/**\n * Check the HTTP response code and the error message for conditions\n * indicating loss of connection or instability in the PI Server\n *\n * @param    httpCode\t\tHTTP response code from REST call\n * @param    errorMessage\tError message from REST call\n */\nvoid OMF::CheckHttpCode(const int httpCode, const std::string &errorMessage)\n{\n\tswitch (httpCode)\n\t{\n\tcase 404: // Not Found\n\t\tif (errorMessage.compare(PIWEBAPI_CONTAINER_NOT_FOUND) == 0)\n\t\t{\n\t\t\tLogger::getLogger()->warn(MESSAGE_PI_UNSTABLE, httpCode);\n\t\t\tm_PIstable = false;\n\t\t}\n\t\tbreak;\n\tcase 413: // 
Request Entity Too Large\n\tm_numBlocks++;\n\tLogger::getLogger()->warn(\"Next POST of Readings will take place in %lu blocks\", m_numBlocks);\n\tbreak;\n\tcase 500: // Internal Server Error\n\t\tif (errorMessage.compare(PIWEBAPI_PIPOINTS_NOT_CREATED) == 0)\n\t\t{\n\t\t\t// This can occur for a Container message if the PI license has expired,\n\t\t\t// or if the plugin's PI user account lacks permission to create PI points.\n\t\t\tLogger::getLogger()->warn(MESSAGE_PI_UNSTABLE, httpCode);\n\t\t\tm_PIstable = false;\n\t\t}\n\t\t// TODO: determine exactly what a PI Web API update exception for a Data message means.\n\t\t// It can mean the PI Web API server is running but the underlying software is not, implying a loss of connection.\n\t\t// It can also mean a transient data error: PI Web API is running, PI is stable but the data is in error.\n\t\t// In all cases, an HTTP 500 will cause plugin_send to return zero which means a failure\n\t\t// else if (errorMessage.compare(PIWEBAPI_UPDATE_EXCEPTION) == 0)\n\t\t// {\n\t\t// \tLogger::getLogger()->warn(\"Connection to the destination data archive has been lost\");\n\t\t// \tLogger::getLogger()->debug(\"%s: %d\", __FUNCTION__, httpCode);\n\t\t// \tm_connected = false;\n\t\t// }\n\t\tbreak;\n\tcase 503: // Service Unavailable\n\t\tLogger::getLogger()->warn(\"Connection to the destination data archive has been lost\");\n\t\tm_connected = false;\n\t\tbreak;\n\tdefault:\n\t\tbreak;\n\t}\n}\n\n/**\n * Check a PI Server name and return the proper name to use following the naming rules\n *\n * Invalid chars: Control characters plus: * ? 
; { } [ ] | \\ ` ' \"\n *\n * @param    objName  The object name to verify\n * @param    changed  if not null, it is set to true if a change occurred\n * @return\t\t\t  Object name following the PI Server naming rules\n */\nstd::string OMF::ApplyPIServerNamingRulesInvalidChars(const std::string &objName, bool *changed)\n{\n\tstd::string nameFixed;\n\n\tif (changed)\n\t\t*changed = false;\n\n\tnameFixed = objName;\n\n\tfor (size_t i = 0; i < nameFixed.length(); i++)\n\t{\n\t\tif (\n\t\t\tnameFixed[i] == '*'  ||\n\t\t\tnameFixed[i] == '?'  ||\n\t\t\tnameFixed[i] == ';'  ||\n\t\t\tnameFixed[i] == '{'  ||\n\t\t\tnameFixed[i] == '}'  ||\n\t\t\tnameFixed[i] == '['  ||\n\t\t\tnameFixed[i] == ']'  ||\n\t\t\tnameFixed[i] == '|'  ||\n\t\t\tnameFixed[i] == '\\\\' ||\n\t\t\tnameFixed[i] == '`'  ||\n\t\t\tnameFixed[i] == '\\'' ||\n\t\t\tnameFixed[i] == '\\\"' ||\n\t\t\tiscntrl(static_cast<unsigned char>(nameFixed[i]))\n\t\t\t)\n\t\t{\n\t\t\tnameFixed.replace(i, 1, \"_\");\n\n\t\t\tif (changed)\n\t\t\t\t*changed = true;\n\t\t}\n\n\t}\n\n\treturn (nameFixed);\n}\n\n/**\n * Check a PI Server object name and return the proper name to use following the naming rules:\n *\n * - Blank names are not permitted; they are substituted with '_'\n * - Trailing spaces are removed\n * - Maximum name length is 200 characters.\n * - Valid chars\n * - Names cannot begin with '__'; these are reserved for system use and are substituted with a single '_'\n *\n * Note: Names on PI-Server side are not case sensitive\n *\n * @param    objName  The object name to verify\n * @param    changed  if not null, it is set to true if a change occurred\n * @return\t\t\t  Object name following the PI Server naming rules\n */\nstd::string OMF::ApplyPIServerNamingRulesObj(const std::string &objName, bool *changed)\n{\n\tstd::string nameFixed;\n\n\tif (changed)\n\t\t*changed = false;\n\n\tnameFixed = StringTrim(objName);\n\n\tif (nameFixed.empty ()) {\n\n\t\tLogger::getLogger()->debug(\"%s - object name empty\", __FUNCTION__);\n\n\t\tnameFixed = \"_\";\n\t\tif 
(changed)\n\t\t\t*changed = true;\n\n\t} else {\n\t\tif (nameFixed.length() > 201) {\n\n\t\t\tnameFixed = nameFixed.substr(0, 200);\n\t\t\tif (changed)\n\t\t\t\t*changed = true;\n\n\t\t\tLogger::getLogger()->warn(\"%s - object name too long, truncated to :%s: \", __FUNCTION__, nameFixed.c_str() );\n\t\t}\n\t}\n\n\tnameFixed = ApplyPIServerNamingRulesInvalidChars(nameFixed, changed);\n\n\t/// Names cannot begin with '__'. These are reserved for system use.\n\tif (\n\t\tnameFixed[0] == '_'  &&\n\t\tnameFixed[1] == '_'\n\t\t)\n\t{\n\t\tnameFixed.erase(0, 1);\n\t\tif (changed)\n\t\t\t*changed = true;\n\t}\n\n\tif (objName.compare(nameFixed) != 0)\n\t{\n\t\tLogger::getLogger()->debug(\"%s - original :%s: trimmed :%s:\", __FUNCTION__, objName.c_str(), nameFixed.c_str());\n\t}\n\n\treturn (nameFixed);\n}\n\n/**\n * Create a comma-separated string of all Datapoint names in a Reading\n *\n * @param reading\tReading\n * @return\t\t\tDatapoint names in the Reading\n */\nstd::string DataPointNamesAsString(const Reading& reading)\n{\n\tstd::string dataPointNames;\n\n\tfor (Datapoint *datapoint : reading.getReadingData())\n\t{\n\t\tdataPointNames.append(datapoint->getName());\n\t\tdataPointNames.append(\",\");\n\t}\n\n\tif (dataPointNames.size() > 0)\n\t{\n\t\tdataPointNames.resize(dataPointNames.size() - 1);\t// remove trailing comma\n\t}\n\n\treturn dataPointNames;\n}\n\n/**\n * Check a PI Server path name and return the proper name to use following the naming rules:\n *\n * - Blank names are not permitted; they are substituted with '_'\n * - Trailing spaces are removed\n * - Maximum name length is 200 characters.\n * - Valid chars\n * - Names cannot begin with '__'; these are reserved for system use and are substituted with a single '_'\n *\n * Names on PI-Server side are not case sensitive\n *\n * @param    objName  The object name to verify\n * @param    changed  if not null, it is set to true if a change occurred\n * @return\t\t\t  Object name following the PI Server naming rules\n 
*/\nstd::string OMF::ApplyPIServerNamingRulesPath(const std::string &objName, bool *changed)\n{\n\tstd::string nameFixed;\n\n\tif (changed)\n\t\t*changed = false;\n\n\tnameFixed = StringTrim(objName);\n\n\tif (nameFixed.empty ()) {\n\n\t\tLogger::getLogger()->debug(\"%s - path empty\", __FUNCTION__);\n\t\tnameFixed = \"_\";\n\t\tif (changed)\n\t\t\t*changed = true;\n\n\t} else {\n\t\tif (nameFixed.length() > 201) {\n\n\t\t\tnameFixed = nameFixed.substr(0, 200);\n\t\t\tif (changed)\n\t\t\t\t*changed = true;\n\n\t\t\tLogger::getLogger()->warn(\"%s - path too long, truncated to :%s: \", __FUNCTION__, nameFixed.c_str() );\n\t\t}\n\t}\n\n\tnameFixed = ApplyPIServerNamingRulesInvalidChars(nameFixed, changed);\n\n\t/// Names cannot begin with '__'. These are reserved for system use.\n\tif (\n\t\tnameFixed[0] == '_' &&\n\t\tnameFixed[1] == '_'\n\t\t)\n\t{\n\t\tnameFixed.erase(0, 1);\n\t\tif (changed)\n\t\t\t*changed = true;\n\t}\n\n\tif (nameFixed.find(\"/__\") != string::npos)\n\t{\n\t\tStringReplaceAll(nameFixed,\"/__\",\"/_\");\n\t\tif (changed)\n\t\t\t*changed = true;\n\n\t}\n\n\tif (objName.compare(nameFixed) != 0)\n\t{\n\t\tLogger::getLogger()->debug(\"%s - original :%s: trimmed :%s:\", __FUNCTION__, objName.c_str(), nameFixed.c_str());\n\t}\n\n\treturn (nameFixed);\n}\n\n/**\n * Send the base types that we use to define all the data point values\n *\n * @return true If the data types were sent correctly. Otherwise false.\n */\nbool OMF::sendBaseTypes()\n{\n\tvector<pair<string, string>> resType = OMF::createMessageHeader(\"Type\");\n\n\t// Build an HTTPS POST with 'resType' headers\n\t// and 'typeData' JSON payload\n\t// Then get HTTPS POST ret code and return 0 to client on error\n\ttry\n\t{\n\t\tint res = m_sender.sendRequest(\"POST\",\n\t\t\t\t\t   m_path,\n\t\t\t\t\t   resType,\n\t\t\t\t\t   baseOMFTypes);\n\t\tif  ( ! 
(res >= 200 && res <= 299) )\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Sending base data types message 'Type', HTTP code %d - %s %s\",\n\t\t\t\t\t\t   res,\n\t\t\t\t\t\t   m_sender.getHostPort().c_str(),\n\t\t\t\t\t\t   m_path.c_str());\n\t\t\treturn false;\n\t\t}\n\t\telse if (res == 201)\n\t\t{\n\t\t\tLogger::getLogger()->info(\"Created basic data types\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->info(\"Confirmed basic data types\");\n\t\t}\n\t}\n\t// Exception raised for HTTP 400 Bad Request\n\tcatch (const BadRequest& e)\n\t{\n\t\tstring errorMsg;\n\t\tOMFError error(m_sender.getHTTPResponse());\n\t\tif (error.hasMessages())\n\t\t{\n\t\t\terror.Log(\"The OMF endpoint reported a Bad Request when sending base types\");\n\t\t\terrorMsg = error.getMessage(0);\n\t\t}\n\t\telse\n\t\t{\n\t\t\terrorMsg = errorMessageHandler(e.what());\n\t\t}\n\n\t\tif (OMF::isDataTypeError(e.what()))\n\t\t{\n\t\t\t// Data type error: force type-id change\n\t\t\tm_changeTypeId = true;\n\t\t}\n\n\t\tLogger::getLogger()->warn(\"Sending dataType message 'Type', not blocking issue: %s %s - %s %s\",\n\t\t\t(m_changeTypeId ? 
\"Data Type \" : \"\" ),\n\t\t\terrorMsg.c_str(),\n\t\t\tm_sender.getHostPort().c_str(),\n\t\t\tm_path.c_str());\n\n\t\treturn false;\n\t}\n\tcatch (const Unauthorized &e)\n\t{\n\t\tLogger::getLogger()->error(MESSAGE_UNAUTHORIZED);\n\t\treturn false;\n\t}\n\tcatch (const Conflict& e)\n\t{\n\t\thandleRESTException(e, \"The OMF endpoint reported a Conflict when sending base types\");\n\t\tLogger::getLogger()->warn(MESSAGE_PI_UNSTABLE, 409);\n\t\tm_PIstable = false;\n\t\treturn false;\n\t}\n\tcatch (const std::exception &e)\n\t{\n\t\thandleRESTException(e, \"Sending Basic Types error\");\n\t\treturn false;\n\t}\n\tLogger::getLogger()->debug(\"Base types successfully sent\");\n\treturn true;\n}\n\n/**\n * Create the FledgeAsset OMF Type which will define an AF Template.\n * The AF Template will be used to create AF Elements to represent Containers for Linked Types.\n *\n * @return true If the FledgeAsset Type was sent correctly\n */\nbool OMF::sendFledgeAssetType()\n{\n\tOMFBuffer writer;\n\n\twriter.append('[');\n\twriter.append('{');\n\n\twriter.append(\"\\\"id\\\":\\\"FledgeAsset\\\",\");\n\twriter.append(\"\\\"type\\\":\\\"object\\\",\");\n\twriter.append(\"\\\"classification\\\":\\\"static\\\",\");\n\n\twriter.append(\"\\\"properties\\\":{\");\n\n\twriter.append(\"\\\"AssetId\\\":{\");\n\twriter.append(\"\\\"type\\\":\\\"string\\\",\");\n\twriter.append(\"\\\"isindex\\\":true\");\n\twriter.append(\"},\");\n\n\tfor (std::pair<std::string, std::string> &sData : *m_staticData)\n\t{\n\t\twriter.append('\\\"');\n\t\twriter.append(sData.first);\n\t\twriter.append(\"\\\":{\");\n\t\twriter.append(\"\\\"type\\\":\\\"string\\\"\");\n\t\twriter.append(\"},\");\n\t}\n\n\twriter.append(\"\\\"Name\\\":{\");\n\twriter.append(\"\\\"type\\\":\\\"string\\\",\");\n\twriter.append(\"\\\"isname\\\":true\");\n\twriter.append('}');\n\n\twriter.append('}');\n\twriter.append('}');\n\twriter.append(']');\n\n\tconst char *payload = writer.coalesce();\n\tLogger::getLogger()->debug(\"%s: 
%s\", __FUNCTION__, payload);\n\n\tbool retCode = false;\n\ttry\n\t{\n\t\tvector<pair<string, string>> resType = OMF::createMessageHeader(\"Type\");\n\t\tint res = m_sender.sendRequest(\"POST\",\n\t\t\t\t\t   m_path,\n\t\t\t\t\t   resType,\n\t\t\t\t\t   payload);\n\t\t\t\t\t   \n\t\tLogger::getLogger()->info((res == 201) ? \"Created FledgeAsset Type\" : \"Confirmed FledgeAsset Type\");\n\t\tretCode = true;\n\t}\n\tcatch (const Unauthorized &e)\n\t{\n\t\tLogger::getLogger()->error(MESSAGE_UNAUTHORIZED);\n\t}\n\tcatch (const Conflict& e)\n\t{\n\t\tLogger::getLogger()->warn(\"FledgeAsset Type exists with a different definition\");\n\t\tretCode = true;\n\t}\n\tcatch (const std::exception &e)\n\t{\n\t\thandleRESTException(e, \"Sending FledgeAsset Type error\");\n\t}\n\n\tdelete [] payload;\n\treturn retCode;\n}\n\n/**\n * Send a message to link the asset into the right place in the AF structure\n *\n * @param reading\tThe reading being sent\n * @param hints\t\tOMF Hints for this reading\n * @return\t\t\ttrue if the message was sent correctly, otherwise false.\n */\nbool OMF::sendAFLinks(Reading &reading, OMFHints *hints)\n{\n\tbool success = true;\n\tstd::string afLinks = createAFLinks(reading, hints);\n\tif (afLinks.empty())\n\t{\n\t\treturn success;\n\t}\n\tafLinks = \"[\" + afLinks + \"]\";\n\t\n\tstd::vector<std::pair<std::string, std::string>> links;\n\tparseLinkData(afLinks, links);\n\n\ttry\n\t{\n\t\tvector<pair<string, string>> messageHeader = OMF::createMessageHeader(\"Data\", m_dataActionCode);\n\t\tint res = m_sender.sendRequest(\"POST\",\n\t\t\t\t\t\t\t\t\t   m_path,\n\t\t\t\t\t\t\t\t\t   messageHeader,\n\t\t\t\t\t\t\t\t\t   afLinks);\n\t\tif (res >= 200 && res <= 299)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"AF Link message sent successfully: %s\", afLinks.c_str());\n\t\t\tsuccess = true;\n\t\t\tLogLinks(res, (res == 201) ? 
\"Created Link\" : \"Confirmed Link\", links);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogLinks(res, \"creating Link\", links);\n\t\t\tLogger::getLogger()->error(\"Sending AF Link Data message, HTTP code %d - %s %s\",\n\t\t\t\t\t\t\t\t\t   res,\n\t\t\t\t\t\t\t\t\t   m_sender.getHostPort().c_str(),\n\t\t\t\t\t\t\t\t\t   m_path.c_str());\n\t\t\tsuccess = false;\n\t\t}\n\t}\n\tcatch (const BadRequest &e)\n\t{\n\t\tLogLinks(400, \"creating Link\", links);\n\t\thandleRESTException(e, \"The OMF endpoint reported a Bad Request when sending AF Link\");\n\t\tsuccess = false;\n\t}\n\tcatch (const Unauthorized &e)\n\t{\n\t\tLogLinks(401, \"creating Link\", links);\n\t\tLogger::getLogger()->error(MESSAGE_UNAUTHORIZED);\n\t\tsuccess = false;\n\t}\n\tcatch (const Conflict &e)\n\t{\n\t\tLogLinks(409, \"creating Link\", links);\n\t\tstring msg = \"Conflict sending AF Link Data message for the asset \" + reading.getAssetName();\n\t\thandleRESTException(e, msg.c_str());\n\t\tLogger::getLogger()->warn(MESSAGE_PI_UNSTABLE, 409);\n\t\tm_PIstable = false;\n\t\tsuccess = false;\n\t}\n\tcatch (const std::exception &e)\n\t{\n\t\tLogLinks(0, \"creating Link\", links);\n\t\thandleRESTException(e, \"AF Link send message exception\");\n\t\tsuccess = false;\n\t}\n\n\treturn success;\n}\n\n/**\n * Create the messages to link the asset holding the container to its parent asset\n *\n * @param reading\tThe reading being sent\n * @param hints\t\tOMF Hints for this reading\n * @return\t\t\tOMF JSON snippet to create the AF Link\n */\nstring OMF::createAFLinks(Reading& reading, OMFHints *hints)\n{\nstring AFDataMessage;\n\n\tif (m_sendFullStructure)\n\t{\n\t\tstring assetName = m_assetName;\n\t\tstring AFHierarchyLevel;\n\t\tstring prefix;\n\t\tstring objectPrefix;\n\n\t\tauto rule = m_AssetNamePrefix.find(assetName);\n\t\tif (rule != m_AssetNamePrefix.end())\n\t\t{\n\t\t\tauto itemArray = rule->second;\n\t\t\tobjectPrefix = \"\";\n\n\t\t\tfor (auto &item : itemArray)\n\t\t\t{\n\t\t\t\tstring 
AFHierarchy;\n\t\t\t\tstring prefix;\n\n\t\t\t\tAFHierarchy = std::get<0>(item);\n\t\t\t\tgenerateAFHierarchyPrefixLevel(AFHierarchy, prefix, AFHierarchyLevel);\n\n\t\t\t\tprefix = std::get<1>(item);\n\n\t\t\t\tif (objectPrefix.empty())\n\t\t\t\t{\n\t\t\t\t\tobjectPrefix = prefix;\n\t\t\t\t}\n\n\t\t\t\tLogger::getLogger()->debug(\"%s - assetName :%s: AFHierarchy :%s: prefix :%s: objectPrefix :%s:  AFHierarchyLevel :%s: \", __FUNCTION__\n\t\t\t\t\t\t\t\t\t\t   ,assetName.c_str()\n\t\t\t\t\t\t\t\t\t\t   , AFHierarchy.c_str()\n\t\t\t\t\t\t\t\t\t\t   , prefix.c_str()\n\t\t\t\t\t\t\t\t\t\t   , objectPrefix.c_str()\n\t\t\t\t\t\t\t\t\t\t   , AFHierarchyLevel.c_str() );\n\n\t\t\t\t// Create data for Static Data message\n\t\t\t\tAFDataMessage = OMF::createLinkData(reading, AFHierarchyLevel, prefix, objectPrefix, hints, false);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"AF hierarchy is not defined for the asset Name |%s|\", assetName.c_str());\n\t\t}\n\t}\n\treturn AFDataMessage;\n}\n\n/**\n * Report an error related to an asset if the asset has not already been reported\n *\n * @param asset\tThe asset name\n * @param level\tThe level to log the message at\n * @param msg\tThe message to log\n */\nvoid OMF::reportAsset(const string& asset, const string& level, const string& msg)\n{\n\tif (std::find(m_reportedAssets.begin(), m_reportedAssets.end(), asset) == m_reportedAssets.end())\n\t{\n\t\tm_reportedAssets.push_back(asset);\n\t\tif (level.compare(\"error\") == 0)\n\t\t\tLogger::getLogger()->error(msg);\n\t\telse if (level.compare(\"warn\") == 0)\n\t\t\tLogger::getLogger()->warn(msg);\n\t\telse if (level.compare(\"fatal\") == 0)\n\t\t\tLogger::getLogger()->fatal(msg);\n\t\telse if (level.compare(\"info\") == 0)\n\t\t\tLogger::getLogger()->info(msg);\n\t\telse\n\t\t\tLogger::getLogger()->debug(msg);\n\t}\n}\n"
  },
  {
    "path": "C/plugins/north/OMF/omfbuffer.cpp",
    "content": "/*\n * Fledge OMF north plugin buffer class\n *\n * Copyright (c) 2023 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <omfbuffer.h>\n#include <string.h>\n#include <string_utils.h>\n\nusing namespace std;\n/**\n * Buffer class designed to hold OMF payloads that can grow\n * as required but have minimal copy semantics.\n */\n\n/**\n * OMFBuffer constructor\n */\nOMFBuffer::OMFBuffer()\n{\n        buffers.push_front(new OMFBuffer::Buffer());\n}\n\n/**\n * OMFBuffer destructor\n */\nOMFBuffer::~OMFBuffer()\n{\n\tfor (list<OMFBuffer::Buffer *>::iterator it = buffers.begin(); it != buffers.end(); ++it)\n\t{\n\t\tdelete *it;\n\t}\n}\n\n/**\n * Clear all the buffers from the OMFBuffer and allow it to be reused\n */\nvoid OMFBuffer::clear()\n{\n\tfor (list<OMFBuffer::Buffer *>::iterator it = buffers.begin(); it != buffers.end(); ++it)\n\t{\n\t\tdelete *it;\n\t}\n\tbuffers.clear();\n        buffers.push_front(new OMFBuffer::Buffer());\n}\n\n/**\n * Append a character to a buffer\n *\n * @param data\tThe character to append to the buffer\n */\nvoid OMFBuffer::append(const char data)\n{\nOMFBuffer::Buffer *buffer = buffers.back();\n\n        if (buffer->offset == buffer->length)\n        {\n\t\tbuffer = new OMFBuffer::Buffer();\n\t\tbuffers.push_back(buffer);\n\t}\n\tbuffer->data[buffer->offset] = data;\n\tbuffer->data[buffer->offset + 1] = 0;\n\tbuffer->offset++;\n}\n\n/**\n * Append a character string to a buffer\n *\n * @param data\tThe string to append to the buffer\n */\nvoid OMFBuffer::append(const char *data)\n{\nunsigned int len = strlen(data);\nOMFBuffer::Buffer *buffer = buffers.back();\n\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tif (len > BUFFER_CHUNK)\n\t\t{\n\t\t\tbuffer = new OMFBuffer::Buffer(len + BUFFER_CHUNK);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tbuffer = new OMFBuffer::Buffer();\n\t\t}\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], data, 
len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Append an integer to a buffer\n *\n * @param value\tThe value to append to the buffer\n */\nvoid OMFBuffer::append(const int value)\n{\nchar\ttmpbuf[80];\nunsigned int len;\nOMFBuffer::Buffer *buffer = buffers.back();\n\n\tlen = (unsigned int)snprintf(tmpbuf, 80, \"%d\", value);\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tbuffer = new OMFBuffer::Buffer();\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], tmpbuf, len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Append a long to a buffer\n *\n * @param value\tThe long value to append to the buffer\n */\nvoid OMFBuffer::append(const long value)\n{\nchar\ttmpbuf[80];\nunsigned int len;\nOMFBuffer::Buffer *buffer = buffers.back();\n\n\tlen = (unsigned int)snprintf(tmpbuf, 80, \"%ld\", value);\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tbuffer = new OMFBuffer::Buffer();\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], tmpbuf, len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Append an unsigned integer to a buffer\n *\n * @param value\tThe unsigned long value to append to the buffer\n */\nvoid OMFBuffer::append(const unsigned int value)\n{\nchar\ttmpbuf[80];\nunsigned int len;\nOMFBuffer::Buffer *buffer = buffers.back();\n\n\tlen = (unsigned int)snprintf(tmpbuf, 80, \"%u\", value);\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tbuffer = new OMFBuffer::Buffer();\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], tmpbuf, len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Append an unsigned long to a buffer\n *\n * @param value\tThe value to append to the buffer\n */\nvoid OMFBuffer::append(const unsigned long value)\n{\nchar\ttmpbuf[80];\nunsigned int len;\nOMFBuffer::Buffer *buffer = 
buffers.back();\n\n\tlen = (unsigned int)snprintf(tmpbuf, 80, \"%lu\", value);\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tbuffer = new OMFBuffer::Buffer();\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], tmpbuf, len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Append a double to a buffer\n *\n * @param value\tThe double value to append to the buffer\n */\nvoid OMFBuffer::append(const double value)\n{\nchar\ttmpbuf[80];\nunsigned int len;\nOMFBuffer::Buffer *buffer = buffers.back();\n\n\tlen = (unsigned int)snprintf(tmpbuf, 80, \"%f\", value);\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tbuffer = new OMFBuffer::Buffer();\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], tmpbuf, len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Append a string to a buffer\n *\n * @param str\tThe string to be appended to the buffer\n */\nvoid OMFBuffer::append(const string& str)\n{\nconst char\t*cstr = str.c_str();\nunsigned int len = strlen(cstr);\nOMFBuffer::Buffer *buffer = buffers.back();\n\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tif (len > BUFFER_CHUNK)\n\t\t{\n\t\t\tbuffer = new OMFBuffer::Buffer(len + BUFFER_CHUNK);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tbuffer = new OMFBuffer::Buffer();\n\t\t}\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], cstr, len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Quote and append a string to a buffer\n *\n * @param str\tThe string to quote and append to the buffer\n */\nvoid OMFBuffer::quote(const string& str)\n{\nstring esc = str;\nStringEscapeQuotes(esc);\nconst char\t*cstr = esc.c_str();\nunsigned int len = strlen(cstr) + 2;\nOMFBuffer::Buffer *buffer = buffers.back();\n\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tif (len > BUFFER_CHUNK)\n\t\t{\n\t\t\tbuffer = new 
OMFBuffer::Buffer(len + BUFFER_CHUNK);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tbuffer = new OMFBuffer::Buffer();\n\t\t}\n\t\tbuffers.push_back(buffer);\n\t}\n\tbuffer->data[buffer->offset] = '\"';\n\tmemcpy(&buffer->data[buffer->offset + 1], cstr, len - 2);\n\tbuffer->data[buffer->offset + len - 1] = '\"';\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Create a coalesced buffer from the buffer chain\n *\n * The buffer returned has been created using the new[] operator and must be\n * deleted by the caller.\n * @return char* The OMF payload in a single buffer\n */\nconst char *OMFBuffer::coalesce()\n{\nunsigned int length = 0, offset = 0;\nchar\t     *buffer = 0;\n\n\tif (buffers.size() == 1)\n\t{\n\t\treturn buffers.back()->detach();\n\t}\n\tfor (list<OMFBuffer::Buffer *>::iterator it = buffers.begin(); it != buffers.end(); ++it)\n\t{\n\t\tlength += (*it)->offset;\n\t}\n\tbuffer = new char[length+1];\n\tfor (list<OMFBuffer::Buffer *>::iterator it = buffers.begin(); it != buffers.end(); ++it)\n\t{\n\t\tmemcpy(&buffer[offset], (*it)->data, (*it)->offset);\n\t\toffset += (*it)->offset;\n\t}\n\tbuffer[offset] = 0;\n\treturn buffer;\n}\n\n/**\n * Construct a buffer with a standard size initial buffer.\n */\nOMFBuffer::Buffer::Buffer() : offset(0), length(BUFFER_CHUNK), attached(true)\n{\n\tdata = new char[BUFFER_CHUNK+1];\n\tdata[0] = 0;\n}\n\n/**\n * Construct a large buffer, passing the size of buffer required. 
This is useful\n * if you know your buffer requirements are large and you wish to reduce the amount\n * of allocation required.\n *\n * @param size\tThe size of the initial buffer to allocate.\n */\nOMFBuffer::Buffer::Buffer(unsigned int size) : offset(0), length(size), attached(true)\n{\n\tdata = new char[size+1];\n\tdata[0] = 0;\n}\n\n/**\n * Buffer destructor, the buffer itself is also deleted by this\n * call and any reference to it must no longer be used.\n */\nOMFBuffer::Buffer::~Buffer()\n{\n\tif (attached)\n\t{\n\t\tdelete[] data;\n\t\tdata = 0;\n\t}\n}\n\n/**\n * Detach the buffer from the OMFBuffer. The reference to the buffer\n * is removed from the OMFBuffer but the buffer itself is not deleted.\n * This allows the buffer ownership to be taken by external code\n * whilst allowing the OMFBuffer to allocate a new buffer.\n */\nchar *OMFBuffer::Buffer::detach()\n{\nchar *rval = data;\n\n\tattached = false;\n\tlength = 0;\n\tdata = 0;\n\treturn rval;\n}\n"
  },
  {
    "path": "C/plugins/north/OMF/omfhints.cpp",
    "content": "/*\n * Fledge OSI Soft OMF interface to PI Server.\n *\n * Copyright (c) 2020 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <utility>\n#include <iostream>\n#include <string>\n#include <cstring>\n#include <omf.h>\n#include <OMFHint.h>\n#include <logger.h>\n#include <rapidjson/document.h>\n#include \"rapidjson/error/en.h\"\n#include \"string_utils.h\"\n#include <string_utils.h>\n#include <datapoint.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\n#define OMFHINTS_AFLOCATION \"\\\"AFLocation\\\"\"\n\n\n/**\n *  Extracts from a complete OMF hint the part on which the checksum should be generated,\n *  for example, it will remove the section related to the AFLocation hint to avoid the creation of a new type\n *  when the value changes.\n *\n * @param hint   Original/complete OMF hint\n *\n * @return       OMF hint that should be considered for the calculation of the checksum\n */\nstring OMFHints::getHintForChecksum(const string &hint) {\n\n\tsize_t pos1, pos2, pos3;\n\tstring hintFinal;\n\n\thintFinal = hint;\n\n\tpos1 = hintFinal.find(OMFHINTS_AFLOCATION);\n\tif (pos1 != std::string::npos)\n\t{\n\t\tpos2 = hintFinal.find(\",\", pos1);\n\t\tif (pos2 != std::string::npos)\n\t\t{\n\t\t\t// There is another hint\n\t\t\thintFinal.erase(pos1, pos2 - pos1 + 1);\n\t\t} else {\n\t\t\tpos3 = hintFinal.find(\",\");\n\t\t\tif (pos3 != std::string::npos)\n\t\t\t{\n\t\t\t\thintFinal.erase(pos3, hintFinal.length() - pos3 -1);\n\t\t\t}else {\n\t\t\t\thintFinal.erase(pos1, hintFinal.length() - pos1 -1);\n\t\t\t}\n\t\t}\n\t}\n\n\t// Handle special cases\n\tStringReplace(hintFinal, \"{}\", \"\");\n\n\tif (hintFinal.length() == 3) {\n\n\t\tStringReplace(hintFinal, \"{\", \"\");\n\t\tStringReplace(hintFinal, \"}\", \"\");\n\t}\n\n\treturn (hintFinal);\n}\n\n/**\n *  Decodes the OMFhint in JSON format assigning the values to the memory structures: m_chksum,  m_hints and m_datapointHints\n *\n * @param 
hint   OMF hint in JSON format\n */\nOMFHints::OMFHints(const string& hints)\n{\n\tstring hintsTmp, hintsChksum;\n\n\thintsTmp = hints;\n\tStringReplaceAll(hintsTmp,\"\\\\\",\"\");\n\n\tm_chksum = 0;\n\tif (hintsTmp[0] == '\\\"')\n\t{\n\t\t// Skip any enclosing \"'s\n\t\tm_doc.Parse(hintsTmp.substr(1, hintsTmp.length() - 2).c_str());\n\t\thintsChksum = getHintForChecksum(hintsTmp);\n\t\tfor (int i = 1; i < hintsChksum.length() - 1; i++)\n\t\t\tm_chksum += hintsChksum[i];\n\t}\n\telse\n\t{\n\t\tm_doc.Parse(hintsTmp.c_str());\n\t\thintsChksum = getHintForChecksum(hintsTmp);\n\t\tfor (int i = 0; i < hintsChksum.length(); i++)\n\t\t\tm_chksum += hintsChksum[i];\n\t}\n\tLogger::getLogger()->debug(\"%s - hints original :%s: adapted :%s: chksum :%X: \"\n\t\t, __FUNCTION__\n\t\t,hints.c_str()\n\t\t,hintsChksum.c_str()\n\t\t, m_chksum);\n\n\tif (m_doc.HasParseError())\n\t{\n\t\tLogger::getLogger()->error(\"Ignoring OMFHint '%s' parse error in JSON\", hintsTmp.c_str());\n\t}\n\telse\n\t{\n\t\tfor (Value::ConstMemberIterator itr = m_doc.MemberBegin();\n\t\t\t\t\titr != m_doc.MemberEnd(); ++itr)\n\t\t{\n\t\t\tconst char *name = itr->name.GetString();\n\t\t\tif (strcmp(name, \"number\") == 0)\n\t\t\t{\n\t\t\t\tm_hints.push_back(new OMFNumberHint(itr->value.GetString()));\n\t\t\t}\n\t\t\telse if (strcmp(name, \"integer\") == 0)\n\t\t\t{\n\t\t\t\tm_hints.push_back(new OMFIntegerHint(itr->value.GetString()));\n\t\t\t}\n\t\t\telse if (strcmp(name, \"typeName\") == 0)\n\t\t\t{\n\t\t\t\tm_hints.push_back(new OMFTypeNameHint(itr->value.GetString()));\n\t\t\t}\n\t\t\telse if (strcmp(name, \"tagName\") == 0)\n\t\t\t{\n\t\t\t\tm_hints.push_back(new OMFTagNameHint(itr->value.GetString()));\n\t\t\t}\n\t\t\telse if (strcmp(name, \"tag\") == 0)\n\t\t\t{\n\t\t\t\tm_hints.push_back(new OMFTagHint(itr->value.GetString()));\n\t\t\t}\n\t\t\telse if (strcmp(name, \"AFLocation\") == 0)\n\t\t\t{\n\t\t\t\tm_hints.push_back(new OMFAFLocationHint(itr->value.GetString()));\n\t\t\t}\n\t\t\telse if 
(strcmp(name, \"LegacyType\") == 0)\n\t\t\t{\n\t\t\t\tm_hints.push_back(new OMFLegacyTypeHint(itr->value.GetString()));\n\t\t\t}\n\t\t\telse if (strcmp(name, \"source\") == 0)\n\t\t\t{\n\t\t\t\tm_hints.push_back(new OMFSourceHint(itr->value.GetString()));\n\t\t\t}\n\t\t\telse if (strcmp(name, \"datapoint\") == 0)\n\t\t\t{\n\t\t\t\tconst Value &child = itr->value;\n\t\t\t\tif (child.IsArray())\n\t\t\t\t{\n\t\t\t\t\tfor (Value::ConstValueIterator dpitr2 = child.Begin(); dpitr2 != child.End(); ++dpitr2)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (dpitr2->HasMember(\"name\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tconst string dpname = (*dpitr2)[\"name\"].GetString();\n\t\t\t\t\t\t\tvector<OMFHint *> hints;\n\t\t\t\t\t\t\tfor (Value::ConstMemberIterator dpitr = dpitr2->MemberBegin(); dpitr != dpitr2->MemberEnd(); ++dpitr)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tconst char *name = dpitr->name.GetString();\n\t\t\t\t\t\t\t\tif (strcmp(name, \"number\") == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\thints.push_back(new OMFNumberHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse if (strcmp(name, \"integer\") == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\thints.push_back(new OMFIntegerHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse if (strcmp(name, \"typeName\") == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\thints.push_back(new OMFTypeNameHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse if (strcmp(name, \"tagName\") == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\thints.push_back(new OMFTagNameDatapointHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse if (strcmp(name, \"tag\") == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\thints.push_back(new OMFTagHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse if (strcmp(name, \"uom\") == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\thints.push_back(new OMFUOMHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse if (strcmp(name, \"source\") == 
0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\thints.push_back(new OMFSourceHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse if (strcmp(name, \"minimum\") == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\thints.push_back(new OMFMinimumHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse if (strcmp(name, \"maximum\") == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\thints.push_back(new OMFMaximumHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse if (strcmp(name, \"interpolation\") == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tstring interpolation = dpitr->value.GetString();\n\t\t\t\t\t\t\t\t\tif (interpolation.compare(\"continuous\")\n\t\t\t\t\t\t\t\t\t\t\t&& interpolation.compare(\"discrete\")\n\t\t\t\t\t\t\t\t\t\t       && interpolation.compare(\"stepwisecontinuousleading\")\n\t\t\t\t\t\t\t\t\t\t       && interpolation.compare(\"stepwisecontinuousfollowing\"))\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tLogger::getLogger()->warn(\"Invalid value for interpolation hint for %s, only continuous, discrete, stepwisecontinuousleading, and stepwisecontinuousfollowing are supported\", dpname.c_str());\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\thints.push_back(new OMFInterpolationHint(interpolation));\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse if (strcmp(name, \"name\"))\t// Ignore the name\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tLogger::getLogger()->warn(\"Invalid OMF hint '%s'\", name);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tm_datapointHints.insert(std::pair<string,vector<OMFHint *>>(dpname, hints));\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tif (child.HasMember(\"name\"))\n\t\t\t\t\t{\n\t\t\t\t\t\tconst string dpname = child[\"name\"].GetString();\n\t\t\t\t\t\tvector<OMFHint *> hints;\n\t\t\t\t\t\tfor (Value::ConstMemberIterator dpitr = child.MemberBegin(); dpitr != child.MemberEnd(); 
++dpitr)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tconst char *name = dpitr->name.GetString();\n\t\t\t\t\t\t\tif (strcmp(name, \"number\") == 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\thints.push_back(new OMFNumberHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (strcmp(name, \"integer\") == 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\thints.push_back(new OMFIntegerHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (strcmp(name, \"typeName\") == 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\thints.push_back(new OMFTypeNameHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (strcmp(name, \"tagName\") == 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\thints.push_back(new OMFTagNameDatapointHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (strcmp(name, \"tag\") == 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\thints.push_back(new OMFTagHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (strcmp(name, \"uom\") == 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\thints.push_back(new OMFUOMHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (strcmp(name, \"source\") == 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\thints.push_back(new OMFSourceHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (strcmp(name, \"minimum\") == 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\thints.push_back(new OMFMinimumHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (strcmp(name, \"maximum\") == 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\thints.push_back(new OMFMaximumHint(dpitr->value.GetString()));\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (strcmp(name, \"interpolation\") == 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tstring interpolation = dpitr->value.GetString();\n\t\t\t\t\t\t\t\tif (interpolation.compare(\"continuous\")\n\t\t\t\t\t\t\t\t\t\t&& interpolation.compare(\"discrete\")\n\t\t\t\t\t\t\t\t\t       && interpolation.compare(\"stepwisecontinuousleading\")\n\t\t\t\t\t\t\t\t\t       && 
interpolation.compare(\"stepwisecontinuousfollowing\"))\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tLogger::getLogger()->warn(\"Invalid value for interpolation hint for %s, only continuous, discrete, stepwisecontinuousleading, and stepwisecontinuousfollowing are supported\", dpname.c_str());\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\thints.push_back(new OMFInterpolationHint(interpolation));\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (strcmp(name, \"name\"))\t// Ignore the name\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tLogger::getLogger()->warn(\"Invalid OMF hint '%s'\", name);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tm_datapointHints.insert(std::pair<string,vector<OMFHint *>>(dpname, hints));\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Unrecognised hint '%s' in OMFHint\", name);\n\t\t\t}\n\t\t}\n\t}\n\n}\n\n/**\n * Destructor for Hints class\n */\nOMFHints::~OMFHints()\n{\n\tfor (OMFHint *hint : m_hints)\n\t{\n\t\tdelete hint;\n\t}\n\tfor (auto it = m_datapointHints.begin(); it != m_datapointHints.end(); it++)\n\t{\n\t\tfor (OMFHint *hint : it->second)\n\t\t{\n\t\t\tdelete hint;\n\t\t}\n\t}\n\tm_datapointHints.erase(m_datapointHints.begin(), m_datapointHints.end());\n\tm_hints.clear();\n}\n\n/**\n * Return the hints for a given data point. If the datapoint is not known then return the hints\n * for all data points.\n *\n * @param datapoint The name of the datapoint to retrieve the hints for\n */\nconst vector<OMFHint *>& OMFHints::getHints(const string& datapoint) const\n{\n\tauto it = m_datapointHints.find(datapoint);\n\tif (it != m_datapointHints.end())\n\t{\n\t\treturn it->second;\n\t}\n\treturn m_hints;\n}\n"
  },
  {
    "path": "C/plugins/north/OMF/omfinfo.cpp",
    "content": "/*\n * Fledge OSIsoft OMF interface to PI Server.\n *\n * Copyright (c) 2023-2025 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <omfinfo.h>\n#include <utils.h>\n\nusing namespace std;\nusing namespace rapidjson;\nusing namespace SimpleWeb;\n\n/**\n * Constructor for the OMFInformation class\n */\nOMFInformation::OMFInformation(ConfigCategory *config) : m_sender(NULL), m_omf(NULL), m_ocs(NULL), m_connected(false)\n{\n\n\tm_logger = Logger::getLogger();\n\tm_name = config->getName();\n\tm_numBlocks = 1;\n\n\tint endpointPort = 0;\n\n\t// PIServerEndpoint handling\n\tstring PIServerEndpoint = config->getValue(\"PIServerEndpoint\");\n\tstring ADHRegions = config->getValue(\"ADHRegions\");\n\tstring ServerHostname = config->getValue(\"ServerHostname\");\n\tif (gethostbyname(ServerHostname.c_str()) == NULL)\n\t{\n\t\tLogger::getLogger()->warn(\"Unable to resolve server hostname '%s'. This should be a valid hostname or IP Address.\", ServerHostname.c_str());\n\t}\n\tstring ServerPort = config->getValue(\"ServerPort\");\n\tstring url;\n\tstring NamingScheme = config->getValue(\"NamingScheme\");\n\n\t// Translate the PIServerEndpoint configuration\n\tif(PIServerEndpoint.compare(\"PI Web API\") == 0)\n\t{\n\t\tLogger::getLogger()->debug(\"PI-Server end point manually selected - PI Web API \");\n\t\tm_PIServerEndpoint = ENDPOINT_PIWEB_API;\n\t\turl                        = ENDPOINT_URL_PI_WEB_API;\n\t\tendpointPort               = ENDPOINT_PORT_PIWEB_API;\n\t}\n\telse if(PIServerEndpoint.compare(\"Connector Relay\") == 0)\n\t{\n\t\tLogger::getLogger()->debug(\"PI-Server end point manually selected - Connector Relay \");\n\t\tm_PIServerEndpoint = ENDPOINT_CR;\n\t\turl                = ENDPOINT_URL_CR;\n\t\tendpointPort       = ENDPOINT_PORT_CR;\n\t}\n\telse if(PIServerEndpoint.compare(\"AVEVA Data Hub\") == 0)\n\t{\n\t\tLogger::getLogger()->debug(\"End point manually selected - AVEVA Data 
Hub\");\n\t\tm_PIServerEndpoint = ENDPOINT_ADH;\n\t\turl \t\t   = ENDPOINT_URL_ADH;\n\t\tm_authUrl \t   = AUTHORIZATION_URL_ADH;\n\t\tstd::string region = \"uswe\";\n\t\tif(ADHRegions.compare(\"EU-West\") == 0)\n\t\t\tregion = \"euno\";\n\t\telse if(ADHRegions.compare(\"Australia\") == 0)\n\t\t\tregion = \"auea\";\n\t\tStringReplace(url, \"REGION_PLACEHOLDER\", region);\n\t\tStringReplace(m_authUrl, \"REGION_PLACEHOLDER\", region);\n\t\tendpointPort       = ENDPOINT_PORT_ADH;\n\t}\n\telse if(PIServerEndpoint.compare(\"OSIsoft Cloud Services\") == 0)\n\t{\n\t\tLogger::getLogger()->debug(\"End point manually selected - OSIsoft Cloud Services\");\n\t\tm_PIServerEndpoint = ENDPOINT_OCS;\n\t\turl                = ENDPOINT_URL_OCS;\n\t\tm_authUrl          = AUTHORIZATION_URL_OCS;\n\t\tstd::string region = \"dat-b\";\n\t\tif(ADHRegions.compare(\"EU-West\") == 0)\n\t\t\tregion = \"dat-d\";\n\t\telse if(ADHRegions.compare(\"Australia\") == 0)\n\t\t\tLogger::getLogger()->error(\"OSIsoft Cloud Services are not hosted in Australia\");\n\t\tStringReplace(url, \"REGION_PLACEHOLDER\", region);\n\t\tStringReplace(m_authUrl, \"REGION_PLACEHOLDER\", region);\n\t\tendpointPort       = ENDPOINT_PORT_OCS;\n\t}\n\telse if(PIServerEndpoint.compare(\"Edge Data Store\") == 0)\n\t{\n\t\tLogger::getLogger()->debug(\"End point manually selected - Edge Data Store\");\n\t\tm_PIServerEndpoint = ENDPOINT_EDS;\n\t\turl                = ENDPOINT_URL_EDS;\n\t\tendpointPort       = ENDPOINT_PORT_EDS;\n\t}\n\tServerPort = (ServerPort.compare(\"0\") == 0) ? 
to_string(endpointPort) : ServerPort;\n\n\tif (endpointPort == ENDPOINT_PORT_PIWEB_API) \n    {\n\t\t// Use SendFullStructure ?\n\t\tm_sendFullStructure = stringToBool(config->getValue(\"SendFullStructure\"));\n\n\t} else \n    {\n\t\tm_sendFullStructure = true;\n\t}\n\n    m_tracingEnabled = stringToBool(config->getValue(\"EnableTracing\"));\n\n\tunsigned int retrySleepTime = atoi(config->getValue(\"OMFRetrySleepTime\").c_str());\n\tunsigned int maxRetry = atoi(config->getValue(\"OMFMaxRetry\").c_str());\n\tunsigned int timeout = atoi(config->getValue(\"OMFHttpTimeout\").c_str());\n\n\tstring producerToken = config->getValue(\"producerToken\");\n\n\tstring formatNumber = config->getValue(\"formatNumber\");\n\tstring formatInteger = config->getValue(\"formatInteger\");\n\tstring DefaultAFLocation = config->getValue(\"DefaultAFLocation\");\n\tstring AFMap = config->getValue(\"AFMap\");\n\n\tstring PIWebAPIAuthMethod     = config->getValue(\"PIWebAPIAuthenticationMethod\");\n\tstring PIWebAPIUserId         = config->getValue(\"PIWebAPIUserId\");\n\tstring PIWebAPIPassword       = config->getValue(\"PIWebAPIPassword\");\n\tstring KerberosKeytabFileName = config->getValue(\"PIWebAPIKerberosKeytabFileName\");\n\n\t// OCS configurations\n\tstring OCSNamespace    = config->getValue(\"OCSNamespace\");\n\tstring OCSTenantId     = config->getValue(\"OCSTenantId\");\n\tstring OCSClientId     = config->getValue(\"OCSClientId\");\n\tstring OCSClientSecret = config->getValue(\"OCSClientSecret\");\n\n\tStringReplace(url, \"HOST_PLACEHOLDER\", ServerHostname);\n\tStringReplace(url, \"PORT_PLACEHOLDER\", ServerPort);\n\n\t// TENANT_ID_PLACEHOLDER and NAMESPACE_ID_PLACEHOLDER, if present, will be replaced with the values of OCSTenantId and OCSNamespace\n\tStringReplace(url, \"TENANT_ID_PLACEHOLDER\",    OCSTenantId);\n\tStringReplace(url, \"NAMESPACE_ID_PLACEHOLDER\", OCSNamespace);\n\n\t/**\n\t * Extract host, port, path from URL\n\t */\n\tsize_t findProtocol = 
url.find_first_of(\":\");\n\tstring protocol = url.substr(0, findProtocol);\n\n\tstring tmpUrl = url.substr(findProtocol + 3);\n\tsize_t findPort = tmpUrl.find_first_of(\":\");\n\tstring hostName = tmpUrl.substr(0, findPort);\n\n\tsize_t findPath = tmpUrl.find_first_of(\"/\");\n\tstring port = tmpUrl.substr(findPort + 1, findPath - findPort - 1);\n\tstring path = tmpUrl.substr(findPath);\n\n\tstring hostAndPort(hostName + \":\" + port);\n\n\t// Set configuration fields\n\tm_protocol = protocol;\n\tm_hostAndPort = hostAndPort;\n\tm_path = path;\n\tm_retrySleepTime = retrySleepTime;\n\tm_maxRetry = maxRetry;\n\tm_timeout = timeout;\n\tm_typeId = TYPE_ID_DEFAULT;\n\tm_producerToken = producerToken;\n\tm_formatNumber = formatNumber;\n\tm_formatInteger = formatInteger;\n\tm_DefaultAFLocation = DefaultAFLocation;\n\tm_AFMap = AFMap;\n\n\t// OCS configurations\n\tm_OCSNamespace    = OCSNamespace;\n\tm_OCSTenantId     = OCSTenantId;\n\tm_OCSClientId     = OCSClientId;\n\tm_OCSClientSecret = OCSClientSecret;\n\n\t// PI Web API end-point - evaluates the authentication method requested\n\tif (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tif (PIWebAPIAuthMethod.compare(\"anonymous\") == 0)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"PI Web API end-point - anonymous authentication\");\n\t\t\tm_PIWebAPIAuthMethod = \"a\";\n\t\t}\n\t\telse if (PIWebAPIAuthMethod.compare(\"basic\") == 0)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"PI Web API end-point - basic authentication\");\n\t\t\tm_PIWebAPIAuthMethod = \"b\";\n\t\t\tm_PIWebAPICredentials = AuthBasicCredentialsGenerate(PIWebAPIUserId, PIWebAPIPassword);\n\t\t}\n\t\telse if (PIWebAPIAuthMethod.compare(\"kerberos\") == 0)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"PI Web API end-point - kerberos authentication\");\n\t\t\tm_PIWebAPIAuthMethod = \"k\";\n\t\t\tAuthKerberosSetup(m_KerberosKeytab, KerberosKeytabFileName);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Invalid authentication method for PI Web API :%s: 
\", PIWebAPIAuthMethod.c_str());\n\t\t}\n\t}\n\telse\n\t{\n\t\t// For all other endpoint types, set PI Web API authentication to 'anonymous.'\n\t\t// This prevents the HttpSender from inserting PI Web API authentication headers.\n\t\tm_PIWebAPIAuthMethod = \"a\";\n\t}\n\n\t// Use compression ?\n\tstring compr = config->getValue(\"compression\");\n\tif (compr == \"True\" || compr == \"true\" || compr == \"TRUE\")\n\t\tm_compression = true;\n\telse\n\t\tm_compression = false;\n\n\t// Set the list of errors considered not blocking in the communication\n\t// with the PI Server\n\tif (m_PIServerEndpoint == ENDPOINT_PIWEB_API)\n\t{\n\t\tJSONStringToVectorString(m_notBlockingErrors,\n\t\t\t\t\t config->getValue(\"PIWebAPInotBlockingErrors\"),\n\t\t\t\t\t string(\"EventInfo\"));\n\t}\n\telse\n\t{\n\t\tJSONStringToVectorString(m_notBlockingErrors,\n\t\t\t\t\t config->getValue(\"notBlockingErrors\"),\n\t\t\t\t\t string(\"errors400\"));\n\t}\n\t/**\n\t * Add static data\n\t * Split the string up into each pair\n\t */\n\tstring staticData = config->getValue(\"StaticData\");\n\tsize_t pos = 0;\n\tsize_t start = 0;\n\tdo {\n\t\tpos = staticData.find(\",\", start);\n\t\tstring item = staticData.substr(start, pos - start);\n\t\tstart = pos + 1;\n\t\tsize_t pos2 = 0;\n\t\tif ((pos2 = item.find(\":\")) != string::npos)\n\t\t{\n\t\t\tstring name = item.substr(0, pos2);\n\t\t\twhile (name[0] == ' ')\n\t\t\t\tname = name.substr(1);\n\t\t\tstring value = item.substr(pos2 + 1);\n\t\t\twhile (value[0] == ' ')\n\t\t\t\tvalue = value.substr(1);\n\t\t\tif (!name.empty() && !value.empty())\n\t\t\t{\n\t\t\t\tpair<string, string> sData = make_pair(name, value);\n\t\t\t\tm_staticData.push_back(sData);\n\t\t\t}\n\t\t}\n\t} while (pos != string::npos);\n\n\t// Set Asset/Datapoint data stream name delimiter\n\tm_delimiter = config->getValue(\"AssetDatapointNameDelimiter\");\n\tif (m_delimiter.empty())\n\t{\n\t\t// Delimiter can't be empty. 
If the user has cleared it, set it to the default.\n\t\tm_delimiter = \".\";\n\t}\n\telse\n\t{\n\t\tStringTrim(m_delimiter);\n\t\tif (m_delimiter.empty())\n\t\t{\n\t\t\t// If trimming emptied the string, the delimiter is a blank which is legal\n\t\t\tm_delimiter = \" \";\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Delimiter must be a single character\n\t\t\tm_delimiter.resize(1);\n\t\t}\n\t}\n\n\t// Set the Action Code for OMF Data posts: update or create\n\tm_dataActionCode = config->itemExists(\"OMFDataActionCode\") ? config->getValue(\"OMFDataActionCode\") : \"update\";\n\n\t{\n\t\t// NamingScheme handling\n\t\tif(NamingScheme.compare(\"Concise\") == 0)\n\t\t{\n\t\t\tm_NamingScheme = NAMINGSCHEME_CONCISE;\n\t\t}\n\t\telse if(NamingScheme.compare(\"Use Type Suffix\") == 0)\n\t\t{\n\t\t\tm_NamingScheme = NAMINGSCHEME_SUFFIX;\n\t\t}\n\t\telse if(NamingScheme.compare(\"Use Attribute Hash\") == 0)\n\t\t{\n\t\t\tm_NamingScheme = NAMINGSCHEME_HASH;\n\t\t}\n\t\telse if(NamingScheme.compare(\"Backward compatibility\") == 0)\n\t\t{\n\t\t\tm_NamingScheme = NAMINGSCHEME_COMPATIBILITY;\n\t\t}\n\t\tLogger::getLogger()->debug(\"End point naming scheme :%s: \", NamingScheme.c_str() );\n\n\t}\n\n\t// Fetch legacy OMF type option\n\tstring legacy = config->getValue(\"Legacy\");\n\tif (legacy == \"True\" || legacy == \"true\" || legacy == \"TRUE\")\n\t\tm_legacy = true;\n\telse\n\t\tm_legacy = false;\n    \n    // Enable or disable OMF tracing based on the current configuration\n    handleOMFTracing();\n}\n\n/**\n * @brief Handles the enabling and configuration of OMF tracing.\n *\n * If OMF tracing is enabled, this function checks if the trace file exists.\n * If it does not exist, the file is created in write mode. If the file\n * exists but is read-only, the function changes the file permissions to\n * allow writing. 
If OMF tracing is disabled, it checks if the trace file\n * exists and has write permissions, and if so, sets it to read-only.\n */\nvoid OMFInformation::handleOMFTracing() \n{\n    std::string filename = HttpSender::getOMFTracePath(); // Retrieve the trace file path\n\n    if (m_tracingEnabled) \n    {\n        if(!HttpSender::createDebugTraceDirectory())\n        {\n            return;\n        }\n\n        // Check if the trace file exists\n        std::ifstream fileCheck(filename.c_str());\n        if (!fileCheck) \n        {\n            // File does not exist, create it in write mode\n            std::ofstream traceFile(filename.c_str(), std::ofstream::out);\n            if (!traceFile) \n            {\n                Logger::getLogger()->error(\"Unable to create trace file: %s\", filename.c_str());\n            } \n        } \n        else \n        {\n            // File exists, check if it is read-only\n            struct stat fileStat;\n            if (stat(filename.c_str(), &fileStat) == 0) \n            {\n                // Check if the file is read-only\n                if (!(fileStat.st_mode & S_IWUSR)) \n                {\n                    // Change the file permissions to allow writing\n                    if (chmod(filename.c_str(), fileStat.st_mode | S_IWUSR) != 0) \n                    {\n                        Logger::getLogger()->error(\"Unable to set write permissions for: %s\", filename.c_str());\n                    } \n                }\n            }\n        }\n    } \n    else \n    {\n        // Check if the trace file exists before attempting to make it read-only\n        if (access(filename.c_str(), F_OK) == 0) \n        {\n            // Check if the file has write permissions\n            struct stat fileStat;\n            if (stat(filename.c_str(), &fileStat) == 0) \n            {\n                // If the file has write permission, change it to read-only\n                if (fileStat.st_mode & S_IWUSR) \n                {\n 
                   if (chmod(filename.c_str(), fileStat.st_mode & ~S_IWUSR) != 0) \n                    {\n                        Logger::getLogger()->error(\"Unable to set read-only permissions for: %s\", filename.c_str());\n                    } \n                }\n            }\n        } \n    }\n}\n\n/**\n * Destructor for the OMFInformation class.\n */\nOMFInformation::~OMFInformation()\n{\n\tdelete m_sender;\n\tdelete m_omf;\n\tdelete m_ocs;\n\t// TODO cleanup the allocated member variables\n}\n\n/**\n * The plugin start entry point has been called\n *\n * @param storedData\tThe data that has been persisted by a previous execution\n * of the plugin\n */\nvoid OMFInformation::start(const string& storedData)\n{\n\tm_logger->info(\"Host: %s\", m_hostAndPort.c_str());\n\tif ((m_PIServerEndpoint == ENDPOINT_OCS) || (m_PIServerEndpoint == ENDPOINT_ADH))\n\t{\n\t\tm_logger->info(\"Namespace: %s\", m_OCSNamespace.c_str());\n\t}\n\n\t// Parse JSON plugin_data\n\tDocument JSONData;\n\tJSONData.Parse(storedData.c_str());\n\tif (JSONData.HasParseError())\n\t{\n\t\tm_logger->error(\"%s plugin error: failure parsing \"\n\t\t\t      \"plugin data JSON object '%s'\",\n\t\t\t      PLUGIN_NAME,\n\t\t\t      storedData.c_str());\n\t}\n\telse if (JSONData.HasMember(TYPE_ID_KEY) &&\n\t\t(JSONData[TYPE_ID_KEY].IsString() ||\n\t\t JSONData[TYPE_ID_KEY].IsNumber()))\n\t{\n\t\t// Update type-id in PLUGIN_HANDLE object\n\t\tif (JSONData[TYPE_ID_KEY].IsNumber())\n\t\t{\n\t\t\tm_typeId = JSONData[TYPE_ID_KEY].GetInt();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_typeId = atol(JSONData[TYPE_ID_KEY].GetString());\n\t\t}\n\t}\n\n\t// Check if the configured Asset/Datapoint delimiter is legal in OMF which uses PI and AF rules\n\tbool changed = false;\n\tOMF::ApplyPIServerNamingRulesInvalidChars(m_delimiter, &changed);\n\tif (changed)\n\t{\n\t\tm_logger->error(\"Asset/Datapoint name delimiter '%s' is not legal in OMF\", m_delimiter.c_str());\n\t}\n\telse\n\t{\n\t\tm_logger->info(\"Asset/Datapoint 
name delimiter set to '%s'\", m_delimiter.c_str());\n\t}\n\n\t// Load sentdataTypes\n\tloadSentDataTypes(JSONData);\n\n\t// Log default type-id\n\tif (m_assetsDataTypes.size() == 1 &&\n\t    m_assetsDataTypes.find(FAKE_ASSET_KEY) != m_assetsDataTypes.end())\n\t{\n\t\t// Only one value: we have the FAKE_ASSET_KEY and no other data\n\t\tLogger::getLogger()->info(\"%s plugin is using global OMF prefix %s=%d\",\n\t\t\t\t\t  PLUGIN_NAME,\n\t\t\t\t\t  TYPE_ID_KEY,\n\t\t\t\t\t  m_typeId);\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->info(\"%s plugin is using per asset OMF prefix %s=%d \"\n\t\t\t\t\t  \"(max value found)\",\n\t\t\t\t\t  PLUGIN_NAME,\n\t\t\t\t\t  TYPE_ID_KEY,\n\t\t\t\t\t  getMaxTypeId());\n\t}\n\n\t// Allocate an HttpSender subclass to communicate with PI Web API with selected authorization\n\tif (!m_sender)\n\t{\n\t\t/**\n\t\t * Select the transport library based on the authentication method and transport encryption\n\t\t * requirements.\n\t\t *\n\t\t * LibcurlHttps is used to integrate Kerberos because SimpleHttp does not support it;\n\t\t * the Libcurl integration currently implements only HTTPS, not HTTP. 
We use SimpleHttp or\n\t\t * SimpleHttps, as appropriate for the URL given, if not using Kerberos\n\t\t *\n\t\t *\n\t\t * The handler is allocated using \"Hostname : port\", connect_timeout and request_timeout.\n\t\t * Default is no timeout\n\t\t */\n\t\tif (m_PIWebAPIAuthMethod.compare(\"k\") == 0)\n\t\t{\n\t\t\tm_sender = new LibcurlHttps(m_hostAndPort,\n\t\t\t\t\t\t\t    m_timeout,\n\t\t\t\t\t\t\t    m_timeout,\n\t\t\t\t\t\t\t    m_retrySleepTime,\n\t\t\t\t\t\t\t    m_maxRetry);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (m_protocol.compare(\"http\") == 0)\n\t\t\t{\n\t\t\t\tm_sender = new SimpleHttp(m_hostAndPort,\n\t\t\t\t\t\t\t\t  m_timeout,\n\t\t\t\t\t\t\t\t  m_timeout,\n\t\t\t\t\t\t\t\t  m_retrySleepTime,\n\t\t\t\t\t\t\t\t  m_maxRetry);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_sender = new SimpleHttps(m_hostAndPort,\n\t\t\t\t\t\t\t\t   m_timeout,\n\t\t\t\t\t\t\t\t   m_timeout,\n\t\t\t\t\t\t\t\t   m_retrySleepTime,\n\t\t\t\t\t\t\t\t   m_maxRetry);\n\t\t\t}\n\t\t}\n\n\t\tm_sender->setAuthMethod          (m_PIWebAPIAuthMethod);\n\t\tm_sender->setAuthBasicCredentials(m_PIWebAPICredentials);\n\n\t\t// OCS configurations\n\t\tm_sender->setOCSNamespace        (m_OCSNamespace);\n\t\tm_sender->setOCSTenantId         (m_OCSTenantId);\n\t\tm_sender->setOCSClientId         (m_OCSClientId);\n\t\tm_sender->setOCSClientSecret     (m_OCSClientSecret);\n\t}\n\n\t// Retrieve the destination data archive version\n\tm_connected = true;\n\tint httpCode = 200;\n\tswitch (m_PIServerEndpoint)\n\t{\n\tcase ENDPOINT_PIWEB_API:\n\t\thttpCode = PIWebAPIGetVersion();\n\t\tif (httpCode >= 200 && httpCode < 400)\n\t\t{\n\t\t\tSetOMFVersion();\n\t\t\tCheckDataActionCode();\n\t\t\tLogger::getLogger()->info(\"%s connected to %s OMF Version: %s\",\n\t\t\t\t\t\t\t\t\t  m_RestServerVersion.c_str(), m_hostAndPort.c_str(), m_omfversion.c_str());\n\t\t\tm_connected = true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"The PI Web API service %s is not available. 
HTTP Code: %d\",\n\t\t\t\t\t\t\t\t\t   m_hostAndPort.c_str(), httpCode);\n\t\t\tm_connected = false;\n\t\t}\n\t\tbreak;\n\tcase ENDPOINT_EDS:\n\t\thttpCode = EDSGetVersion();\n\t\tif (httpCode >= 200 && httpCode < 400)\n\t\t{\n\t\t\tSetOMFVersion();\n\t\t\tCheckDataActionCode();\n\t\t\tLogger::getLogger()->info(\"Edge Data Store %s OMF Version: %s\", m_RestServerVersion.c_str(), m_omfversion.c_str());\n\t\t\tm_connected = true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Edge Data Store %s is not available. HTTP Code: %d\",\n\t\t\t\t\t\t\t\t\t   m_hostAndPort.c_str(), httpCode);\n\t\t\tm_connected = false;\n\t\t}\n\t\tbreak;\n\tcase ENDPOINT_OCS:\n\t\tSetOMFVersion();\n\t\tCheckDataActionCode();\n\t\tLogger::getLogger()->info(\"OSIsoft Cloud Services OMF Version: %s\", m_omfversion.c_str());\n\t\tbreak;\n\tcase ENDPOINT_ADH:\n\t\tSetOMFVersion();\n\t\tCheckDataActionCode();\n\t\tLogger::getLogger()->info(\"AVEVA Data Hub OMF Version: %s\", m_omfversion.c_str());\n\t\tbreak;\n\tcase ENDPOINT_CR:\n\t\tSetOMFVersion();\n\t\tCheckDataActionCode();\n\t\tLogger::getLogger()->info(\"Connector Relay OMF Version: %s\", m_omfversion.c_str());\n\t\tbreak;\n\tdefault:\n\t\tSetOMFVersion();\n\t\tCheckDataActionCode();\n\t\tLogger::getLogger()->info(\"OMF Version: %s\", m_omfversion.c_str());\n\t\tbreak;\n\t}\n\n\t// Allocate the OMF class that implements the PI Server data protocol\n\tif (!m_omf)\n\t{\n\t\tm_omf = new OMF(m_name, *m_sender, m_path, m_assetsDataTypes,\n\t\t\t\tm_producerToken);\n\n\t\tm_omf->setSendFullStructure(m_sendFullStructure);\n\t\tm_omf->setDelimiter(m_delimiter);\n\n\t\t// Set PIServerEndpoint configuration\n\t\tm_omf->setNamingScheme(m_NamingScheme);\n\t\tm_omf->setPIServerEndpoint(m_PIServerEndpoint);\n\t\tm_omf->setDefaultAFLocation(m_DefaultAFLocation);\n\t\tm_omf->setAFMap(m_AFMap);\n\n\t\t// Generates the prefix to have unique asset_id across different levels of hierarchies\n\t\tstring 
AFHierarchyLevel;\n\t\tm_omf->generateAFHierarchyPrefixLevel(m_DefaultAFLocation, m_prefixAFAsset, AFHierarchyLevel);\n\n\t\tm_omf->setPrefixAFAsset(m_prefixAFAsset);\n\n\t\t// Set OMF FormatTypes\n\t\tm_omf->setFormatType(OMF_TYPE_FLOAT,\n\t\t\t\t\t     m_formatNumber);\n\t\tm_omf->setFormatType(OMF_TYPE_INTEGER,\n\t\t\t\t\t     m_formatInteger);\n\n\t\tm_omf->setStaticData(&m_staticData);\n\t\tm_omf->setNotBlockingErrors(m_notBlockingErrors);\n\n\t\tif (m_omfversion == \"1.1\" || m_omfversion == \"1.0\")\n\t\t{\n\t\t\tLogger::getLogger()->info(\"Setting LegacyType to true for OMF Version '%s'. This will force the use of old-style complex types.\", m_omfversion.c_str());\n\t\t\tm_omf->setLegacyMode(true);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_omf->setLegacyMode(m_legacy);\n\t\t}\n\t}\n\n\t// Allocate the OCS class that implements ADH and OCS authentication\n\tif (!m_ocs)\n\t{\n\t\tif ((m_PIServerEndpoint == ENDPOINT_ADH) || (m_PIServerEndpoint == ENDPOINT_OCS))\n\t\t{\n\t\t\tm_ocs = new OCS(m_authUrl);\n\t\t}\n\t}\n}\n\n/**\n * Send data to the OMF endpoint\n *\n * @param readings\tThe block of readings to send\n * @return uint32_t\tThe number of readings sent\n */\nuint32_t OMFInformation::send(const vector<Reading *>& readings)\n{\n#if INSTRUMENT\n\tstruct timeval startTime;\n\tgettimeofday(&startTime, NULL);\n#endif\n\t// Check if the destination data archive is available\n\tif (!IsDataArchiveConnected())\n\t{\n\t\t// Error already reported by IsDataArchiveConnected\n\t\treturn 0;\n\t}\n\n\t// For OCS and ADH, retrieve the authentication token\n\tif (m_ocs)\n\t{\n\t\tstd::string token = m_ocs->OCSRetrieveAuthToken(m_OCSClientId, m_OCSClientSecret);\n\t\tif (!token.empty())\n\t\t{\n\t\t\tm_OCSToken = token;\n\t\t\tm_sender->setOCSToken(token);\n\t\t}\n\t}\n\n\t// Exit immediately if the plugin is not stable due to PI Server errors\n\tif (!m_omf->isPIstable())\n\t{\n\t\treturn 0;\n\t}\n\n\t// Send the readings data to the PI 
Server\n\tm_omf->setOMFVersion(m_omfversion);\n\tm_omf->setDataActionCode(m_dataActionCode);\n\tm_omf->setPIconnected(m_connected);\n\tm_omf->setNumBlocks(m_numBlocks);\n\n\tuint32_t ret = m_omf->sendToServer(readings, m_compression);\n\n\tm_connected = m_omf->isPIconnected();\n\tm_numBlocks = m_omf->getNumBlocks();\n\n\t// Detect typeId change in OMF class\n\tif (m_omf->getTypeId() != m_typeId)\n\t{\n\t\t// Update typeId in plugin handle\n\t\tm_typeId = m_omf->getTypeId();\n\t\t// Log change\n\t\tLogger::getLogger()->info(\"%s plugin: a new OMF global %s (%d) has been created.\",\n\t\t\t\t\t  PLUGIN_NAME,\n\t\t\t\t\t  TYPE_ID_KEY,\n\t\t\t\t\t  m_typeId);\n\t}\n\t\n#if INSTRUMENT\n\tLogger::getLogger()->debug(\"plugin_send elapsed time: %6.3f seconds, NumValues: %u\", GetElapsedTime(&startTime), ret);\n#endif\n\n\t// Return sent data ret code\n\treturn ret;\n}\n\n/**\n * Return the data to be persisted\n * @return string\tThe data to persist\n */\nstring OMFInformation::saveData()\n{\n#if INSTRUMENT\n\tstruct timeval startTime;\n\tgettimeofday(&startTime, NULL);\n#endif\n\t// Create save data\n\tstd::ostringstream saveData;\n\tsaveData << \"{\";\n\n\t// Add sent data types\n\tstring typesData = saveSentDataTypes();\n\tif (!typesData.empty())\n\t{\n\t\t// Save datatypes\n\t\tsaveData << typesData;\n\t}\n\telse\n\t{\n\t\t// Just save type-id\n\t\tsaveData << \"\\\"\" << TYPE_ID_KEY << \"\\\": \" << to_string(m_typeId);\n\t}\n\n\tsaveData << \"}\";\n\n        // Log saving the plugin configuration\n        Logger::getLogger()->debug(\"%s plugin: saving plugin_data '%s'\",\n\t\t\t\t   PLUGIN_NAME,\n\t\t\t\t   saveData.str().c_str());\n\n\n#if INSTRUMENT\n\t// For debugging: write plugin's JSON data to a file\n\tstring jsonFilePath = getDataDir() + string(\"/logs/OMFSaveData.json\");\n\tofstream f(jsonFilePath.c_str(), ios_base::trunc);\n\tf << saveData.str();\n\tf.close();\n\n\tLogger::getLogger()->debug(\"plugin_shutdown elapsed time: %6.3f seconds\", 
GetElapsedTime(&startTime));\t\n#endif\n\n\t// Return current plugin data to save\n\treturn saveData.str();\n}\n\n\n/**\n * Load stored data types (already sent to PI server)\n *\n * Each element, the assetName, has type-id and datatype for each datapoint\n *\n * If no data exists in the plugin_data table, then a map entry\n * with FAKE_ASSET_KEY is made in order to set the start type-id\n * sequence with default value set to 1:\n * all new created OMF dataTypes have type-id prefix set to the value of 1.\n *\n * If data like {\"type-id\": 14} or {\"type-id\": \"14\" } is found, a map entry\n * with FAKE_ASSET_KEY is made and the start type-id sequence value is set\n * to the found value, i.e. 14:\n * all new created OMF dataTypes have type-id prefix set to the value of 14.\n *\n * If proper per asset types data is loaded, the FAKE_ASSET_KEY is not set:\n * all new created OMF dataTypes have type-id prefix set to the value of 1\n * while existing (loaded) OMF dataTypes will keep their type-id values.\n *\n * @param   JSONData\tThe JSON document containing all saved data\n */\nvoid OMFInformation::loadSentDataTypes(Document& JSONData)\n{\n\tif (JSONData.HasMember(SENT_TYPES_KEY) &&\n\t    JSONData[SENT_TYPES_KEY].IsArray())\n\t{\n\t\tconst Value& cachedTypes = JSONData[SENT_TYPES_KEY];\n\t\tfor (Value::ConstValueIterator it = cachedTypes.Begin();\n\t\t\t\t\t\tit != cachedTypes.End();\n\t\t\t\t\t\t++it)\n\t\t{\n\t\t\tif (!it->IsObject())\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"%s plugin: current element in '%s' \" \\\n\t\t\t\t\t\t\t  \"property is not an object, ignoring it\",\n\t\t\t\t\t\t\t  PLUGIN_NAME,\n\t\t\t\t\t\t\t  SENT_TYPES_KEY);\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tfor (Value::ConstMemberIterator itr = it->MemberBegin();\n\t\t\t\t\t\t\titr != it->MemberEnd();\n\t\t\t\t\t\t\t++itr)\n\t\t\t{\n\t\t\t\tstring key = itr->name.GetString();\n\t\t\t\tconst Value& cachedValue = itr->value;\n\n\t\t\t\t// Add typeId and dataTypes to the in memory 
cache\n\t\t\t\tlong typeId;\n\t\t\t\tif (cachedValue.HasMember(TYPE_ID_KEY) &&\n\t\t\t\t    cachedValue[TYPE_ID_KEY].IsNumber())\n\t\t\t\t{\n\t\t\t\t\ttypeId = cachedValue[TYPE_ID_KEY].GetInt();\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"%s plugin: current element '%s'\" \\\n\t\t\t\t\t\t\t\t  \" doesn't have '%s' property, ignoring it\",\n\t\t\t\t\t\t\t\t  PLUGIN_NAME,\n\t\t\t\t\t\t\t\t  key.c_str(),\n\t\t\t\t\t\t\t\t  TYPE_ID_KEY);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tlong NamingScheme;\n\t\t\t\tif (cachedValue.HasMember(NAMING_SCHEME) &&\n\t\t\t\t\tcachedValue[NAMING_SCHEME].IsNumber())\n\t\t\t\t{\n\t\t\t\t\tNamingScheme = cachedValue[NAMING_SCHEME].GetInt();\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"%s plugin: current element '%s'\" \\\n\t\t\t\t\t\t\t\t  \" doesn't have '%s' property, handling naming scheme in compatibility mode\",\n\t\t\t\t\t\t\t\t\t\t\t  PLUGIN_NAME,\n\t\t\t\t\t\t\t\t\t\t\t  key.c_str(),\n\t\t\t\t\t\t\t\t\t\t\t  NAMING_SCHEME);\n\t\t\t\t\tNamingScheme = NAMINGSCHEME_COMPATIBILITY;\n\t\t\t\t}\n\n\t\t\t\tstring AFHHash;\n\t\t\t\tif (cachedValue.HasMember(AFH_HASH) &&\n\t\t\t\t\tcachedValue[AFH_HASH].IsString())\n\t\t\t\t{\n\t\t\t\t\tAFHHash = cachedValue[AFH_HASH].GetString();\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"%s plugin: current element '%s'\" \\\n\t\t\t\t\t\t\t\t  \" doesn't have '%s' property\",\n\t\t\t\t\t\t\t\t\t\t\t  PLUGIN_NAME,\n\t\t\t\t\t\t\t\t\t\t\t  key.c_str(),\n\t\t\t\t\t\t\t\t\t\t\t  AFH_HASH);\n\t\t\t\t\tAFHHash = \"\";\n\t\t\t\t}\n\n\t\t\t\tstring AFHierarchy;\n\t\t\t\tif (cachedValue.HasMember(AF_HIERARCHY) &&\n\t\t\t\t\tcachedValue[AF_HIERARCHY].IsString())\n\t\t\t\t{\n\t\t\t\t\tAFHierarchy = cachedValue[AF_HIERARCHY].GetString();\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"%s plugin: current element '%s'\" \\\n\t\t\t\t\t\t\t\t  \" doesn't have '%s' property\",\n\t\t\t\t\t\t\t\t\t\t\t  
PLUGIN_NAME,\n\t\t\t\t\t\t\t\t\t\t\t  key.c_str(),\n\t\t\t\t\t\t\t\t\t\t\t  AF_HIERARCHY);\n\t\t\t\t\tAFHierarchy = \"\";\n\t\t\t\t}\n\n\t\t\t\tstring AFHierarchyOrig;\n\t\t\t\tif (cachedValue.HasMember(AF_HIERARCHY_ORIG) &&\n\t\t\t\t\tcachedValue[AF_HIERARCHY_ORIG].IsString())\n\t\t\t\t{\n\t\t\t\t\tAFHierarchyOrig = cachedValue[AF_HIERARCHY_ORIG].GetString();\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"%s plugin: current element '%s'\" \\\n\t\t\t\t\t\t\t\t  \" doesn't have '%s' property\",\n\t\t\t\t\t\t\t\t\t\t\t  PLUGIN_NAME,\n\t\t\t\t\t\t\t\t\t\t\t  key.c_str(),\n\t\t\t\t\t\t\t\t\t\t\t  AF_HIERARCHY_ORIG);\n\t\t\t\t\tAFHierarchyOrig = \"\";\n\t\t\t\t}\n\n\t\t\t\tstring dataTypes;\n\t\t\t\tif (cachedValue.HasMember(DATA_KEY) &&\n\t\t\t\t    cachedValue[DATA_KEY].IsObject())\n\t\t\t\t{\n\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\tconst Value& types = cachedValue[DATA_KEY];\n\t\t\t\t\ttypes.Accept(writer);\n\t\t\t\t\tdataTypes = buffer.GetString();\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"%s plugin: current element '%s'\" \\\n\t\t\t\t\t\t\t\t  \" doesn't have '%s' property, ignoring it\",\n\t\t\t\t\t\t\t\t  PLUGIN_NAME,\n\t\t\t\t\t\t\t\t  key.c_str(),\n\t\t\t\t\t\t\t\t  DATA_KEY);\n\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tunsigned long dataTypesShort;\n\t\t\t\tif (cachedValue.HasMember(DATA_KEY_SHORT) &&\n\t\t\t\t\tcachedValue[DATA_KEY_SHORT].IsString())\n\t\t\t\t{\n\t\t\t\t\tstring strDataTypesShort = cachedValue[DATA_KEY_SHORT].GetString();\n\t\t\t\t\t// The information is stored as a string in hexadecimal format\n\t\t\t\t\tdataTypesShort = stoul(strDataTypesShort, nullptr, 16);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tdataTypesShort = calcTypeShort(dataTypes);\n\t\t\t\t\tif (dataTypesShort == 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->warn(\"%s plugin: current element '%s'\" \\\n                                      \" doesn't have '%s' 
property\",\n\t\t\t\t\t\t\t\t\t\t\t\t  PLUGIN_NAME,\n\t\t\t\t\t\t\t\t\t\t\t\t  key.c_str(),\n\t\t\t\t\t\t\t\t\t\t\t\t  DATA_KEY_SHORT);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->warn(\"%s plugin: current element '%s'\" \\\n                                      \" doesn't have '%s' property, calculated '0x%X'\",\n\t\t\t\t\t\t\t\t\t\t\t\t  PLUGIN_NAME,\n\t\t\t\t\t\t\t\t\t\t\t\t  key.c_str(),\n\t\t\t\t\t\t\t\t\t\t\t\t  DATA_KEY_SHORT,\n\t\t\t\t\t\t\t\t\t\t\t\t  dataTypesShort);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tunsigned short hintChecksum = 0;\n\t\t\t\tif (cachedValue.HasMember(DATA_KEY_HINT) &&\n\t\t\t\t\tcachedValue[DATA_KEY_HINT].IsString())\n\t\t\t\t{\n\t\t\t\t\tstring strHint = cachedValue[DATA_KEY_HINT].GetString();\n\t\t\t\t\t// The information are stored as string in hexadecimal format\n\t\t\t\t\thintChecksum = stoi (strHint,nullptr,16);\n\t\t\t\t}\n\t\t\t\tOMFDataTypes dataType;\n\t\t\t\tdataType.typeId = typeId;\n\t\t\t\tdataType.types = dataTypes;\n\t\t\t\tdataType.typesShort = dataTypesShort;\n\t\t\t\tdataType.hintChkSum = hintChecksum;\n\t\t\t\tdataType.namingScheme = NamingScheme;\n\t\t\t\tdataType.afhHash = AFHHash;\n\t\t\t\tdataType.afHierarchy = AFHierarchy;\n\t\t\t\tdataType.afHierarchyOrig = AFHierarchyOrig;\n\n\t\t\t\tLogger::getLogger()->debug(\"%s - AFHHash :%s: AFHierarchy :%s: AFHierarchyOrig :%s: \", __FUNCTION__, AFHHash.c_str(), AFHierarchy.c_str() , AFHierarchyOrig.c_str() );\n\n\n\t\t\t\tLogger::getLogger()->debug(\"%s - NamingScheme :%ld: \", __FUNCTION__,NamingScheme );\n\n\t\t\t\t// Add data into the map\n\t\t\t\tm_assetsDataTypes[key] = dataType;\n\t\t\t}\n\t\t}\n\t}\n\telse\n\t{\n\t\t// There is no stored data when plugin starts first time \n\t\tif (JSONData.MemberBegin() != JSONData.MemberEnd())\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"Persisted data is not of the correct format, ignoring\");\n\t\t}\n\t\t\n\t\tOMFDataTypes dataType;\n\t\tdataType.typeId = m_typeId;\n\t\tdataType.types = 
\"{}\";\n\n\t\t// Add default data into the map\n\t\tm_assetsDataTypes[FAKE_ASSET_KEY] = dataType;\n\t}\n}\n\n\n\n/**\n * Return the maximum value of type-id, among all entries in the map\n *\n * If the array is empty the m_typeId is returned.\n *\n * @return\t\tThe maximum value of type-id found\n */\nlong OMFInformation::getMaxTypeId()\n{\n\tlong maxId = m_typeId;\n\tfor (auto it = m_assetsDataTypes.begin();\n\t\t  it != m_assetsDataTypes.end();\n\t\t  ++it)\n\t{\n\t\tif ((*it).second.typeId > maxId)\n\t\t{\n\t\t\tmaxId = (*it).second.typeId;\n\t\t}\n\t}\n\treturn maxId;\n}\n\n/**\n * Calls the PI Web API system information endpoint to get the product version\n * \n * @param    logMessage\tIf true, log error messages (default: true)\n * @return   HttpCode\tREST response code\n */\nint OMFInformation::PIWebAPIGetVersion(bool logMessage)\n{\n\tint res = 400;\n\tunsigned int retries = m_sender->getMaxRetries();\n\tm_sender->setMaxRetries(0);\n\n\ttry\n\t{\n\t\tstring path = \"https://\" + m_hostAndPort + \"/piwebapi/system\";\n\n\t\tvector<pair<string, string>> headers;\n\t\theaders.push_back( std::make_pair(\"Accept\", \"application/json\"));\n\n\t\tm_RestServerVersion.clear();\n\n\t\tres = m_sender->sendRequest(\"GET\", path, headers, std::string(\"\"));\n\t\tif (res >= 200 && res <= 299)\n\t\t{\n\t\t\tPIWebAPI piwebapi;\n\t\t\tm_RestServerVersion = piwebapi.ExtractVersion(m_sender->getHTTPResponse());\n\t\t}\n\t}\n\tcatch (const BadRequest &ex)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"PI Web API system information BadRequest exception: %s\", ex.what());\n\t\t}\n\t\tres = 400;\n\t}\n\tcatch (const Unauthorized &e)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(MESSAGE_UNAUTHORIZED);\n\t\t}\n\t\tres = 401;\n\t}\n\tcatch (const std::exception &ex)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"PI Web API system information exception: %s\", ex.what());\n\t\t}\n\t\tres = 400;\n\t}\n\tcatch 
(...)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"PI Web API system information generic exception\");\n\t\t}\n\t\tres = 400;\n\t}\n\n\tm_sender->setMaxRetries(retries);\n\treturn res;\n}\n\n/**\n * Calls the Edge Data Store product information endpoint to get the EDS version\n * \n * @param    logMessage\tIf true, log error messages (default: true)\n * @return   HttpCode\tREST response code\n */\nint OMFInformation::EDSGetVersion(bool logMessage)\n{\n\tint res = 400;\n\tunsigned int retries = m_sender->getMaxRetries();\n\tm_sender->setMaxRetries(0);\n\n\ttry\n\t{\n\t\tstring path = \"http://\" + m_hostAndPort + \"/api/v1/diagnostics/productinformation\";\n\t\t\n\t\tvector<pair<string, string>> headers;\n\t\theaders.push_back( std::make_pair(\"Accept\", \"application/json\"));\n\n\t\tm_RestServerVersion.clear();\n\n\t\tres = m_sender->sendRequest(\"GET\", path, headers, std::string(\"\"));\n\t\tif (res >= 200 && res <= 299)\n\t\t{\n\t\t\tm_RestServerVersion = ParseEDSProductInformation(m_sender->getHTTPResponse());\n\t\t}\n\t}\n\tcatch (const BadRequest &ex)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Edge Data Store productinformation BadRequest exception: %s\", ex.what());\n\t\t}\n\t\tres = 400;\n\t}\n\tcatch (const Unauthorized &e)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(MESSAGE_UNAUTHORIZED);\n\t\t}\n\t\tres = 401;\n\t}\n\tcatch (const std::exception &ex)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Edge Data Store productinformation exception: %s\", ex.what());\n\t\t}\n\t\tres = 400;\n\t}\n\tcatch (...)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Edge Data Store productinformation generic exception\");\n\t\t}\n\t\tres = 400;\n\t}\n\n\tm_sender->setMaxRetries(retries);\n\treturn res;\n}\n\n/**\n * Calls the ADH Namespace identity endpoint to check the connection to ADH\n *\n * @param    logMessage\tIf true, log error messages (default: 
true)\n * @return   HttpCode\tREST response code\n */\nint OMFInformation::IsADHConnected(bool logMessage)\n{\n\tif (m_OCSToken.empty())\n\t{\n\t\tstd::string token = m_ocs->OCSRetrieveAuthToken(m_OCSClientId, m_OCSClientSecret, false);\n\t\tif (!token.empty())\n\t\t{\n\t\t\tm_OCSToken = token;\n\t\t\tm_sender->setOCSToken(token);\n\t\t}\n\t}\n\n\tint res = 400;\n\tunsigned int retries = m_sender->getMaxRetries();\n\tm_sender->setMaxRetries(0);\n\n\ttry\n\t{\n\t\tstring path = m_path;\n\t\tpath.resize(path.size() - 4); // remove trailing \"/omf\"\n\n\t\tvector<pair<string, string>> headers;\n\t\theaders.push_back( std::make_pair(\"Accept\", \"application/json\"));\n\n\t\tres = m_sender->sendRequest(\"GET\", path, headers, std::string(\"\"));\n\t}\n\tcatch (const BadRequest &ex)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"AVEVA Data Hub health check BadRequest exception: %s\", ex.what());\n\t\t}\n\t\tres = 400;\n\t}\n\tcatch (const Unauthorized &e)\n\t{\n\t\t// HTTP 401: Land here if the ADH or OCS Token has expired\n\t\tif (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(MESSAGE_UNAUTHORIZED);\n\t\t}\n\t\tres = 401;\n\t}\n\tcatch (const std::exception &ex)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"AVEVA Data Hub health check exception: %s\", ex.what());\n\t\t}\n\t\tres = 400;\n\t}\n\tcatch (...)\n\t{\n\t\tif (logMessage)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"AVEVA Data Hub health check generic exception\");\n\t\t}\n\t\tres = 400;\n\t}\n\n\tm_sender->setMaxRetries(retries);\n\treturn res;\n}\n\n/**\n * Set the supported OMF Version for the OMF endpoint \n */\nvoid OMFInformation::SetOMFVersion()\n{\n\tswitch (m_PIServerEndpoint)\n\t{\n\tcase ENDPOINT_PIWEB_API:\n\t\tif (m_RestServerVersion.find(\"2019\") != std::string::npos)\n\t\t{\n\t\t\tm_omfversion = \"1.0\";\n\t\t}\n\t\telse if (m_RestServerVersion.find(\"2020\") != std::string::npos)\n\t\t{\n\t\t\tm_omfversion = \"1.1\";\n\t\t}\n\t\telse if 
(m_RestServerVersion.find(\"2021\") != std::string::npos)\n\t\t{\n\t\t\tm_omfversion = \"1.2\";\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_omfversion = \"1.2\";\n\t\t}\n\t\tbreak;\n\tcase ENDPOINT_EDS:\n\t\t// Edge Data Store versions with supported OMF versions:\n\t\t// EDS 2020 (1.0.0.609)\t\t\t\tOMF 1.0, 1.1\n\t\t// EDS 2023 (1.1.1.46)\t\t\t\tOMF 1.0, 1.1, 1.2\n\t\t// EDS 2023 Patch 1 (1.1.3.2)\t\tOMF 1.0, 1.1, 1.2\n\t\t{\n\t\t\tint major = 0;\n\t\t\tint minor = 0;\n\t\t\tParseProductVersion(m_RestServerVersion, &major, &minor);\n\t\t\tif ((major > 1) || (major == 1 && minor > 0))\n\t\t\t{\n\t\t\t\tm_omfversion = \"1.2\";\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_omfversion = EDS_OMF_VERSION;\n\t\t\t}\n\t\t}\n\t\tbreak;\n\tcase ENDPOINT_CR:\n\t\tm_omfversion = CR_OMF_VERSION;\n\t\tbreak;\n\tcase ENDPOINT_OCS:\n\tcase ENDPOINT_ADH:\n\tdefault:\n\t\tm_omfversion = \"1.2\"; // assume cloud service OMF endpoint types support OMF 1.2\n\t\tbreak;\n\t}\n}\n\n/**\n * Check the Action code for OMF Data messages.\n * This method changes the Action code only if 'update' is specified for an OMF version too old to support it.\n */\nvoid OMFInformation::CheckDataActionCode()\n{\n\tif (!m_omfversion.empty())\n\t{\n\t\tif ((m_omfversion.compare(\"1.2\") != 0) && (m_dataActionCode.compare(\"update\") == 0))\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"OMF Version %s does not support Data Action Code %s; setting to 'create'\", m_omfversion.c_str(), m_dataActionCode.c_str());\n\t\t\tm_dataActionCode = \"create\";\n\t\t}\n\t}\n}\n\n/**\n * Evaluate if the endpoint is a PI Web API or a Connector Relay.\n *\n * @return\t               OMF_ENDPOINT values\n */\nOMF_ENDPOINT OMFInformation::identifyPIServerEndpoint()\n{\n\tOMF_ENDPOINT PIServerEndpoint;\n\n\tHttpSender *endPoint;\n\tvector<pair<string, string>> header;\n\tint httpCode;\n\n\n\tif (m_PIWebAPIAuthMethod.compare(\"k\") == 0)\n\t{\n\t\tendPoint = new LibcurlHttps(m_hostAndPort,\n\t\t\t\t\t    m_timeout,\n\t\t\t\t\t    
m_timeout,\n\t\t\t\t\t    m_retrySleepTime,\n\t\t\t\t\t    m_maxRetry);\n\t}\n\telse\n\t{\n\t\tendPoint = new SimpleHttps(m_hostAndPort,\n\t\t\t\t\t   m_timeout,\n\t\t\t\t\t   m_timeout,\n\t\t\t\t\t   m_retrySleepTime,\n\t\t\t\t\t   m_maxRetry);\n\t}\n\n\t// Set requested authentication\n\tendPoint->setAuthMethod          (m_PIWebAPIAuthMethod);\n\tendPoint->setAuthBasicCredentials(m_PIWebAPICredentials);\n\n\ttry\n\t{\n\t\thttpCode = endPoint->sendRequest(\"GET\",\n\t\t\t\t\t\t m_path,\n\t\t\t\t\t\t header,\n\t\t\t\t\t\t \"\");\n\n\t\tif (httpCode >= 200 && httpCode <= 399)\n\t\t{\n\t\t\tPIServerEndpoint = ENDPOINT_PIWEB_API;\n\t\t\tif (m_PIWebAPIAuthMethod == \"b\")\n\t\t\t\tLogger::getLogger()->debug(\"PI Web API end-point basic authorization granted\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tPIServerEndpoint = ENDPOINT_CR;\n\t\t}\n\n\t}\n\tcatch (exception &ex)\n\t{\n\t\tLogger::getLogger()->warn(\"PI-Server end-point discovery encountered the error :%s: \"\n\t\t\t                  \"trying to select the Connector Relay as an end-point\", ex.what());\n\t\tPIServerEndpoint = ENDPOINT_CR;\n\t}\n\n\tdelete endPoint;\n\n\treturn (PIServerEndpoint);\n}\n\n\n/**\n * Return a JSON string with the dataTypes to save in plugin_data\n *\n * Note: the entry with FAKE_ASSET_KEY is never saved.\n *\n * @return            The string with JSON data\n */\nstring OMFInformation::saveSentDataTypes()\n{\n\tstring ret;\n\tstd::ostringstream newData;\n\n\tauto it = m_assetsDataTypes.find(FAKE_ASSET_KEY);\n\tif (it != m_assetsDataTypes.end())\n\t{\n\t\t// Set typeId in FAKE_ASSET_KEY\n\t\tm_typeId = (*it).second.typeId;\n\t\t// Remove the entry\n\t\tm_assetsDataTypes.erase(it);\n\t}\n\n\n\tunsigned long tSize = m_assetsDataTypes.size();\n\tif (tSize)\n\t{\n\t\t\n\t\t// Prepare output data (skip empty data types)\n\t\tnewData << \"\\\"\" << SENT_TYPES_KEY << \"\\\" : [\";\n\n\t\tbool pendingSeparator = false;\n\t\tfor (auto it = m_assetsDataTypes.begin();\n\t\t\t  it != 
m_assetsDataTypes.end();\n\t\t\t  ++it)\n\t\t{\n\t\t\tif (((*it).second).types.compare(\"{}\") != 0)\n\t\t\t{\n\t\t\t\tnewData << (pendingSeparator ? \", \" : \"\");\n\t\t\t\tnewData << \"{\\\"\" << (*it).first << \"\\\" : {\\\"\" << TYPE_ID_KEY <<\n\t\t\t\t\t   \"\\\": \" << to_string(((*it).second).typeId);\n\n\t\t\t\t// The information should be stored as string in hexadecimal format\n\t\t\t\tstd::stringstream tmpStream;\n\t\t\t\ttmpStream << std::hex << ((*it).second).typesShort;\n\t\t\t\tstd::string typesShort = tmpStream.str();\n\n\t\t\t\tnewData << \", \\\"\" << DATA_KEY_SHORT << \"\\\": \\\"0x\" << typesShort << \"\\\"\";\n\t\t\t\tstd::stringstream hintStream;\n\t\t\t\thintStream << std::hex << ((*it).second).hintChkSum;\n\t\t\t\tstd::string hintChecksum = hintStream.str();\n\t\t\t\tnewData << \", \\\"\" << DATA_KEY_HINT << \"\\\": \\\"0x\" << hintChecksum << \"\\\"\";\n\n\t\t\t\tlong NamingScheme;\n\t\t\t\tNamingScheme = ((*it).second).namingScheme;\n\t\t\t\tnewData << \", \\\"\" << NAMING_SCHEME << \"\\\": \" << to_string(NamingScheme) << \"\";\n\n\t\t\t\tstring AFHHash;\n\t\t\t\tAFHHash = ((*it).second).afhHash;\n\t\t\t\tnewData << \", \\\"\" << AFH_HASH << \"\\\": \\\"\" << AFHHash << \"\\\"\";\n\n\t\t\t\tstring AFHierarchy;\n\t\t\t\tAFHierarchy = ((*it).second).afHierarchy;\n\t\t\t\tnewData << \", \\\"\" << AF_HIERARCHY << \"\\\": \\\"\" << AFHierarchy << \"\\\"\";\n\n\t\t\t\tstring AFHierarchyOrig;\n\t\t\t\tAFHierarchyOrig = ((*it).second).afHierarchyOrig;\n\t\t\t\tnewData << \", \\\"\" << AF_HIERARCHY_ORIG << \"\\\": \\\"\" << AFHierarchyOrig << \"\\\"\";\n\n\t\t\t\tLogger::getLogger()->debug(\"%s - AFHHash :%s: AFHierarchy :%s: AFHierarchyOrig :%s:\", __FUNCTION__, AFHHash.c_str(), AFHierarchy.c_str(), AFHierarchyOrig.c_str()  );\n\t\t\t\tLogger::getLogger()->debug(\"%s - NamingScheme :%ld: \", __FUNCTION__,NamingScheme );\n\n\t\t\t\tnewData << \", \\\"\" << DATA_KEY << \"\\\": \" <<\n\t\t\t\t\t   (((*it).second).types.empty() ? 
\"{}\" : ((*it).second).types) <<\n\t\t\t\t\t   \"}}\";\n\t\t\t\tpendingSeparator = true;\n\t\t\t}\n\t\t}\n\n\t\ttSize = m_assetsDataTypes.size();\n\t\tif (!tSize)\n\t\t{\n\t\t\t// DataTypes map is empty\n\t\t\treturn ret;\n\t\t}\n\n\t\tnewData << \"]\";\n\n\t\tret = newData.str();\n\t}\n\n\treturn ret;\n}\n\n\n/**\n * Calculate the TypeShort in the case it is missing when loading the type definition\n *\n * Generate a 64 bit number containing a set of counts:\n * the number of datapoints in an asset and the number of datapoints of each type we support.\n *\n */\nunsigned long OMFInformation::calcTypeShort(const string& dataTypes)\n{\n\tunion t_typeCount {\n\t\tstruct\n\t\t{\n\t\t\tunsigned char tTotal;\n\t\t\tunsigned char tFloat;\n\t\t\tunsigned char tString;\n\t\t\tunsigned char spare0;\n\n\t\t\tunsigned char spare1;\n\t\t\tunsigned char spare2;\n\t\t\tunsigned char spare3;\n\t\t\tunsigned char spare4;\n\t\t} cnt;\n\t\tunsigned long valueLong = 0;\n\n\t} typeCount;\n\n\tDocument JSONData;\n\tJSONData.Parse(dataTypes.c_str());\n\n\tif (JSONData.HasParseError())\n\t{\n\t\tLogger::getLogger()->error(\"calcTypeShort - unable to calculate TypeShort on :%s: \", dataTypes.c_str());\n\t\treturn (0);\n\t}\n\n\tfor (Value::ConstMemberIterator it = JSONData.MemberBegin(); it != JSONData.MemberEnd(); ++it)\n\t{\n\n\t\tstring key = it->name.GetString();\n\t\tconst Value& value = it->value;\n\n\t\tif (value.HasMember(PROPERTY_TYPE) && value[PROPERTY_TYPE].IsString())\n\t\t{\n\t\t\tstring type = value[PROPERTY_TYPE].GetString();\n\n\t\t\t// Integer is handled as float in the OMF integration\n\t\t\tif (type.compare(PROPERTY_NUMBER) == 0)\n\t\t\t{\n\t\t\t\ttypeCount.cnt.tFloat++;\n\t\t\t} else if (type.compare(PROPERTY_STRING) == 0)\n\t\t\t{\n\t\t\t\ttypeCount.cnt.tString++;\n\t\t\t} else {\n\n\t\t\t\tLogger::getLogger()->error(\"calcTypeShort - unrecognized type :%s: \", 
type.c_str());\n\t\t\t}\n\t\t\ttypeCount.cnt.tTotal++;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"calcTypeShort - unable to extract the type for :%s: \", key.c_str());\n\t\t\treturn (0);\n\t\t}\n\t}\n\n\treturn typeCount.valueLong;\n}\n\n/**\n * Finds major and minor product version numbers in a version string\n * \n * @param    versionString\t\tVersion string of the form x.x.x.x where x's are integers\n * @param    major\t\t\t\tMajor product version returned (first digit)\n * @param    minor\t\t\t\tMinor product version returned (second digit)\n */\nvoid OMFInformation::ParseProductVersion(std::string &versionString, int *major, int *minor)\n{\n\t*major = 0;\n\t*minor = 0;\n\tsize_t last = 0;\n\tsize_t next = versionString.find(\".\", last);\n\tif (next != string::npos)\n\t{\n\t\t*major = atoi(versionString.substr(last, next - last).c_str());\n\t\tlast = next + 1;\n\t\tnext = versionString.find(\".\", last);\n\t\tif (next != string::npos)\n\t\t{\n\t\t\t*minor = atoi(versionString.substr(last, next - last).c_str());\n\t\t}\n\t}\n}\n\n/**\n * Parses the Edge Data Store version string from the /productinformation REST response.\n * Note that the response format differs between EDS 2020 and EDS 2023.\n * \n * @param    json\t\tREST response from /api/v1/diagnostics/productinformation\n * @return   version\tEdge Data Store version string\n */\nstd::string OMFInformation::ParseEDSProductInformation(std::string json)\n{\n\tstd::string version;\n\n\tDocument doc;\n\n\tif (!doc.Parse(json.c_str()).HasParseError())\n\t{\n\t\ttry\n\t\t{\n\t\t\tif (doc.HasMember(\"Edge Data Store\"))\t// EDS 2020 response\n\t\t\t{\n\t\t\t\tconst rapidjson::Value &EDS = doc[\"Edge Data Store\"];\n\t\t\t\tversion = EDS.GetString();\n\t\t\t}\n\t\t\telse if (doc.HasMember(\"Product Version\"))\t// EDS 2023 response\n\t\t\t{\n\t\t\t\tconst rapidjson::Value &EDS = doc[\"Product Version\"];\n\t\t\t\tversion = EDS.GetString();\n\t\t\t}\n\t\t}\n\t\tcatch 
(...)\n\t\t{\n\t\t}\n\t}\n\n\tLogger::getLogger()->debug(\"Edge Data Store Version: %s JSON: %s\", version.c_str(), json.c_str());\n\treturn version;\n}\n\n/**\n * Generate the credentials for the basic authentication\n * encoding user id and password joined by a single colon (:) using base64\n *\n * @param    userId   User id to be used for the generation of the credentials\n * @param    password Password to be used for the generation of the credentials\n * @return            credentials to be used with the basic authentication\n */\nstring OMFInformation::AuthBasicCredentialsGenerate(string& userId, string& password)\n{\n\tstring Credentials;\n\n\tCredentials = Crypto::Base64::encode(userId + \":\" + password);\n\t              \t\n\treturn (Credentials);\n}\n\n/**\n * Configures for Kerberos authentication :\n *   - set the environment KRB5_CLIENT_KTNAME to the position containing the\n *     Kerberos keys, the keytab file.\n *\n * @param   out  keytabEnv       string containing the command to set the\n *                               KRB5_CLIENT_KTNAME environment variable\n * @param        keytabFileName  File name of the keytab file\n *\n */\nvoid OMFInformation::AuthKerberosSetup(string& keytabEnv, string& keytabFileName)\n{\n\tstring fledgeData = getDataDir ();\n\tstring keytabFullPath = fledgeData + \"/etc/kerberos\" + \"/\" + keytabFileName;\n\n\tkeytabEnv = \"KRB5_CLIENT_KTNAME=\" + keytabFullPath;\n\tputenv((char *) keytabEnv.c_str());\n\n\tif (access(keytabFullPath.c_str(), F_OK) != 0)\n\t{\n\t\tLogger::getLogger()->error(\"Kerberos authentication not possible, the keytab file :%s: is missing.\", keytabFullPath.c_str());\n\t}\n\n}\n\n/**\n * Calculate elapsed time in seconds\n *\n * @param startTime   Start time of the interval to be evaluated\n * @return            Elapsed time in seconds\n */\ndouble OMFInformation::GetElapsedTime(struct timeval *startTime)\n{\n\tstruct timeval endTime, diff;\n\tgettimeofday(&endTime, NULL);\n\ttimersub(&endTime, 
startTime, &diff);\n\treturn diff.tv_sec + ((double)diff.tv_usec / 1000000);\n}\n\n/**\n * Check if the destination data archive is available by making a lightweight REST GET call every 60 seconds.\n * Log a message if the connection state changes.\n * First call to this method will make a REST call.\n * This method can check connectivity with PI Web API, Edge Data Store and AVEVA Data Hub.\n *\n * @return           Connection status\n */\nbool OMFInformation::IsDataArchiveConnected()\n{\n\tstatic std::chrono::steady_clock::time_point nextCheck(std::chrono::steady_clock::time_point::duration::zero());\n\tstatic bool lastConnected = m_connected; // Previous value of m_connected\n\n\tstd::chrono::steady_clock::time_point now = std::chrono::steady_clock::now();\n\n\tif (now >= nextCheck)\n\t{\n\t\tint httpCode;\n\n\t\tswitch (m_PIServerEndpoint)\n\t\t{\n\t\tcase ENDPOINT_PIWEB_API:\n\t\t\thttpCode = PIWebAPIGetVersion(false);\n\t\t\tbreak;\n\t\tcase ENDPOINT_EDS:\n\t\t\thttpCode = EDSGetVersion(false);\n\t\t\tbreak;\n\t\tcase ENDPOINT_ADH:\n\t\tcase ENDPOINT_OCS:\n\t\t\thttpCode = IsADHConnected(false);\n\t\t\tbreak;\n\t\tdefault:\n\t\t\thttpCode = 200; // assume all other endpoint types are connected\n\t\t\tbreak;\n\t\t}\n\n\t\tm_connected = ((httpCode < 200) || (httpCode >= 400)) ? false : true;\n\n\t\tLogger::getLogger()->debug(\"%s: Check %s HTTP Code: %d Connected: %s LastConnected: %s\",\n\t\t\t\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t\t\t\t   m_hostAndPort.c_str(),\n\t\t\t\t\t\t\t\t   httpCode,\n\t\t\t\t\t\t\t\t   m_connected ? \"true\" : \"false\",\n\t\t\t\t\t\t\t\t   lastConnected ? 
\"true\" : \"false\");\n\n\t\t// See if the connection status has changed since the last check.\n\t\t// If so, write a disconnection or reconnection message.\n\t\tif (m_connected == true)\n\t\t{\n\t\t\tSetOMFVersion();\n\t\t\tCheckDataActionCode();\n\n\t\t\tif (lastConnected == false)\n\t\t\t{\n\t\t\t\tswitch (m_PIServerEndpoint)\n\t\t\t\t{\n\t\t\t\tcase ENDPOINT_PIWEB_API:\n\t\t\t\t\tLogger::getLogger()->warn(\"PI Web API %s reconnected to %s OMF Version: %s\",\n\t\t\t\t\t\t\t\t\t\t\t  m_RestServerVersion.c_str(), m_hostAndPort.c_str(), m_omfversion.c_str());\n\t\t\t\t\tbreak;\n\t\t\t\tcase ENDPOINT_EDS:\n\t\t\t\t\tLogger::getLogger()->warn(\"Edge Data Store %s reconnected to %s OMF Version: %s\",\n\t\t\t\t\t\t\t\t\t\t\t  m_RestServerVersion.c_str(), m_hostAndPort.c_str(), m_omfversion.c_str());\n\t\t\t\t\tbreak;\n\t\t\t\tcase ENDPOINT_ADH:\n\t\t\t\t\tLogger::getLogger()->warn(\"AVEVA Data Hub %s reconnected. OMF Version: %s\",\n\t\t\t\t\t\t\t\t\t\t\t  m_hostAndPort.c_str(), m_omfversion.c_str());\n\t\t\t\t\tbreak;\n\t\t\t\tcase ENDPOINT_OCS:\n\t\t\t\t\tLogger::getLogger()->warn(\"OSIsoft Cloud Services %s reconnected. OMF Version: %s\",\n\t\t\t\t\t\t\t\t\t\t\t  m_hostAndPort.c_str(), m_omfversion.c_str());\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tLogger::getLogger()->warn(\"Destination Data Archive %s reconnected. OMF Version: %s\",\n\t\t\t\t\t\t\t\t\t\t\t  m_hostAndPort.c_str(), m_omfversion.c_str());\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t\tlastConnected = true;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (lastConnected == true)\n\t\t\t{\n\t\t\t\tswitch (m_PIServerEndpoint)\n\t\t\t\t{\n\t\t\t\tcase ENDPOINT_PIWEB_API:\n\t\t\t\t\tLogger::getLogger()->error(\"The PI Web API service %s is not available. HTTP Code: %d\",\n\t\t\t\t\t\t\t\t\t\t\t   m_hostAndPort.c_str(), httpCode);\n\t\t\t\t\tbreak;\n\t\t\t\tcase ENDPOINT_EDS:\n\t\t\t\t\tLogger::getLogger()->error(\"Edge Data Store %s is not available. 
HTTP Code: %d\",\n\t\t\t\t\t\t\t\t\t\t\t   m_hostAndPort.c_str(), httpCode);\n\t\t\t\t\tbreak;\n\t\t\t\tcase ENDPOINT_ADH:\n\t\t\t\t\tLogger::getLogger()->error(\"AVEVA Data Hub %s is not available. HTTP Code: %d\",\n\t\t\t\t\t\t\t\t\t\t\t   m_hostAndPort.c_str(), httpCode);\n\t\t\t\t\tbreak;\n\t\t\t\tcase ENDPOINT_OCS:\n\t\t\t\t\tLogger::getLogger()->error(\"OSIsoft Cloud Services %s is not available. HTTP Code: %d\",\n\t\t\t\t\t\t\t\t\t\t\t   m_hostAndPort.c_str(), httpCode);\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tLogger::getLogger()->warn(\"Destination Data Archive %s is not available. HTTP Code: %d\",\n\t\t\t\t\t\t\t\t\t\t\t  m_hostAndPort.c_str(), httpCode);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t\tlastConnected = false;\n\t\t\t}\n\t\t}\n\n\t\tnextCheck = now + std::chrono::seconds(60);\n\t}\n\n\treturn m_connected;\n}\n"
  },
  {
    "path": "C/plugins/north/OMF/plugin.cpp",
    "content": "/*\n * Fledge PI Server north plugin.\n *\n * Copyright (c) 2018-2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto, Stefano Simonelli\n *\n * PI Web API OMF Endpoint documentation available at:\n * https://fledge-iot.readthedocs.io/en/latest/OMF.html?highlight=omf%20hint#\n *\n * Troubleshooting the PI-Server integration available at:\n * https://fledge-iot.readthedocs.io/en/latest/troubleshooting_pi-server_integration.html#how-to-check-the-pi-web-api-is-installed-and-running\n *\n * Information about Asset Framework Hierarchy Rules available at:\n * https://fledge-iot.readthedocs.io/en/latest/OMF.html?highlight=omf%20hint#asset-framework-hierarchy-rules\n *\n * Information about OMF Hint available at:\n * https://fledge-iot.readthedocs.io/en/latest/OMF.html?highlight=omf%20hint#omf-hints\n * https://fledge-iot.readthedocs.io/en/latest/plugins/fledge-filter-omfhint/index.html\n *\n * OSIsoft documentation about PI Web API:\n * https://docs.osisoft.com/bundle/pi-web-api/page/pi-web-api.html\n * https://docs.osisoft.com/bundle/pi-web-api-reference/page/help.html\n * https://pisquare.osisoft.com/s/topic/0TO1I000000OGBGWA4/pi-web-api\n *\n * OSIsoft documentation about OMF:\n * https://docs.osisoft.com/bundle/omf/page/index.html\n *\n * OSIsoft documentation about OMF in PI Web API:\n * https://docs.osisoft.com/bundle/omf-with-pi-web-api/page/osisoft-message-format.html\n *\n */\n\n#include <unistd.h>\n\n#include <plugin_api.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <strings.h>\n#include <string>\n#include <logger.h>\n#include <iostream>\n#include <omf.h>\n#include <piwebapi.h>\n#include <ocs.h>\n#include <simple_https.h>\n#include <simple_http.h>\n#include <config_category.h>\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"json_utils.h\"\n#include \"libcurl_https.h\"\n#include \"utils.h\"\n#include \"string_utils.h\"\n#include <version.h>\n#include 
<omfinfo.h>\n\n#include \"crypto.hpp\"\n\n\n#define VERBOSE_LOG\t0\n#define INSTRUMENT 0\n\nusing namespace std;\nusing namespace rapidjson;\nusing namespace SimpleWeb;\n/*\n * Note that the \"group\" property is used to group related items; these will appear in different tabs,\n * using the group name, in the GUI.\n *\n * This GUI functionality has yet to be implemented.\n *\n * Current groups used are:\n *\t\"Authentication\"\tItems relating to authentication with the endpoint\n *\t\"Connection\"\t\tConnection tuning items\n *\t\"Formats & Types\"\tControls for the way formats and types are defined\n *\t\"Asset Framework\"\tAsset framework configuration items\n *\t\"Cloud\"\t\t\tThings related to OCS or ADH only\n *\t\"Advanced\"\t\tAdds to the Advanced tab that already exists\n */\nconst char *PLUGIN_DEFAULT_CONFIG_INFO = QUOTE(\n\t{\n\t\t\"plugin\": {\n\t\t\t\"description\": \"PI Server North C Plugin\",\n\t\t\t\"type\": \"string\",\n\t\t\t\"default\": PLUGIN_NAME,\n\t\t\t\"readonly\": \"true\"\n\t\t},\n\t\t\"PIServerEndpoint\": {\n\t\t\t\"description\": \"Select the endpoint among PI Web API, Connector Relay, OSIsoft Cloud Services or Edge Data Store\",\n\t\t\t\"type\": \"enumeration\",\n\t\t\t\"options\":[\"PI Web API\", \"AVEVA Data Hub\", \"Connector Relay\", \"OSIsoft Cloud Services\", \"Edge Data Store\"],\n\t\t\t\"default\": \"PI Web API\",\n\t\t\t\"order\": \"1\",\n\t\t\t\"displayName\": \"Endpoint\"\n\t\t},\n\t\t\"ADHRegions\": {\n\t\t\t\"description\": \"AVEVA Data Hub or OSIsoft Cloud Services region\",\n\t\t\t\"type\": \"enumeration\",\n\t\t\t\"options\":[\"US-West\", \"EU-West\", \"Australia\"],\n\t\t\t\"default\": \"US-West\",\n\t\t\t\"order\": \"2\",\n\t\t\t\"group\" : \"Cloud\",\n\t\t\t\"displayName\": \"Cloud Service Region\",\n\t\t\t\"validity\" : \"PIServerEndpoint == \\\"AVEVA Data Hub\\\" || PIServerEndpoint == \\\"OSIsoft Cloud Services\\\"\"\n\t\t},\n\t\t\"SendFullStructure\": {\n\t\t\t\"description\": \"If true, create an AF structure 
to organize the data. If false, create PI Points only.\",\n\t\t\t\"type\": \"boolean\",\n\t\t\t\"default\": \"true\",\n\t\t\t\"order\": \"3\",\n\t\t\t\"displayName\": \"Create AF structure\",\n\t\t\t\"validity\" : \"PIServerEndpoint == \\\"PI Web API\\\"\"\n\t\t},\n\t\t\"NamingScheme\": {\n\t\t\t\"description\": \"Define the naming scheme of the objects in the endpoint\",\n\t\t\t\"type\": \"enumeration\",\n\t\t\t\"options\":[\"Concise\", \"Use Type Suffix\", \"Use Attribute Hash\", \"Backward compatibility\"],\n\t\t\t\"default\": \"Concise\",\n\t\t\t\"order\": \"4\",\n\t\t\t\"displayName\": \"Naming Scheme\"\n\t\t},\n\t\t\"ServerHostname\": {\n\t\t\t\"description\": \"Hostname of the server running the endpoint either PI Web API or Connector Relay\",\n\t\t\t\"type\": \"string\",\n\t\t\t\"default\": \"localhost\",\n\t\t\t\"order\": \"5\",\n\t\t\t\"displayName\": \"Server hostname\",\n\t\t\t\"validity\" : \"PIServerEndpoint != \\\"Edge Data Store\\\" && PIServerEndpoint != \\\"OSIsoft Cloud Services\\\" && PIServerEndpoint != \\\"AVEVA Data Hub\\\"\"\n\t\t},\n\t\t\"ServerPort\": {\n\t\t\t\"description\": \"Port on which the endpoint either PI Web API or Connector Relay or Edge Data Store is listening, 0 will use the default one\",\n\t\t\t\"type\": \"integer\",\n\t\t\t\"default\": \"0\",\n\t\t\t\"order\": \"6\",\n\t\t\t\"displayName\": \"Server port, 0=use the default\",\n\t\t\t\"validity\" : \"PIServerEndpoint != \\\"OSIsoft Cloud Services\\\" && PIServerEndpoint != \\\"AVEVA Data Hub\\\"\"\n\t\t},\n\t\t\"producerToken\": {\n\t\t\t\"description\": \"The producer token that represents this Fledge stream\",\n\t\t\t\"type\": \"string\",\n\t\t\t\"default\": \"omf_north_0001\",\n\t\t\t\"order\": \"7\",\n\t\t\t\"displayName\": \"Producer Token\",\n\t\t\t\"group\" : \"Authentication\",\n\t\t\t\"validity\" : \"PIServerEndpoint == \\\"Connector Relay\\\"\"\n\t\t},\n\t\t\"source\": {\n\t\t\t\"description\": \"Defines the source of the data to be sent on the stream, this may be 
one of either readings, statistics or audit.\",\n\t\t\t\"type\": \"enumeration\",\n\t\t\t\"options\":[\"readings\", \"statistics\"],\n\t\t\t\"default\": \"readings\",\n\t\t\t\"order\": \"8\",\n\t\t\t\"displayName\": \"Data Source\"\n\t\t},\n\t\t\"StaticData\": {\n\t\t\t\"description\": \"Static data to include in every Container created by OMF\",\n\t\t\t\"type\": \"string\",\n\t\t\t\"default\": \"Location: Palo Alto, Company: Dianomic\",\n\t\t\t\"order\": \"9\",\n\t\t\t\"displayName\": \"Static Data\"\n\t\t},\n\t\t\"AssetDatapointNameDelimiter\": {\n\t\t\t\"description\": \"Delimiter character between Asset and Datapoint in PI data stream names\",\n\t\t\t\"type\": \"string\",\n\t\t\t\"default\": \".\",\n\t\t\t\"order\": \"10\",\n\t\t\t\"displayName\": \"Data Stream Name Delimiter\"\n\t\t},\n\t\t\"OMFDataActionCode\": {\n\t\t\t\"description\": \"OMF Action Code to use when POSTing OMF Data messages\",\n\t\t\t\"type\": \"enumeration\",\n\t\t\t\"options\":[\"update\", \"create\"],\n\t\t\t\"default\": \"update\",\n\t\t\t\"order\": \"11\",\n\t\t\t\"displayName\": \"Action Code for Data Messages\"\n\t\t},\n\t\t\"OMFRetrySleepTime\": {\n\t\t\t\"description\": \"Seconds between each retry for the communication with the OMF PI Connector Relay, NOTE : the time is doubled at each attempt.\",\n\t\t\t\"type\": \"integer\",\n\t\t\t\"default\": \"1\",\n\t\t\t\"order\": \"11\",\n\t\t\t\"group\": \"Connection\",\n\t\t\t\"displayName\": \"Sleep Time Retry\"\n\t\t},\n\t\t\"OMFMaxRetry\": {\n\t\t\t\"description\": \"Max number of retries for the communication with the OMF PI Connector Relay\",\n\t\t\t\"type\": \"integer\",\n\t\t\t\"default\": \"3\",\n\t\t\t\"order\": \"12\",\n\t\t\t\"group\": \"Connection\",\n\t\t\t\"displayName\": \"Maximum Retry\"\n\t\t},\n\t\t\"OMFHttpTimeout\": {\n\t\t\t\"description\": \"Timeout in seconds for the HTTP operations with the OMF PI Connector Relay\",\n\t\t\t\"type\": \"integer\",\n\t\t\t\"default\": \"10\",\n\t\t\t\"order\": 
\"13\",\n\t\t\t\"group\": \"Connection\",\n\t\t\t\"displayName\": \"HTTP Timeout\"\n\t\t},\n\t\t\"formatInteger\": {\n\t\t\t\"description\": \"OMF format property to apply to the type Integer\",\n\t\t\t\"type\": \"enumeration\",\n\t\t\t\"default\": \"int64\",\n\t\t\t\"options\": [\"int64\", \"int32\", \"int16\", \"uint64\", \"uint32\", \"uint16\"],\n\t\t\t\"order\": \"14\",\n\t\t\t\"group\": \"Formats & Types\",\n\t\t\t\"displayName\": \"Integer Format\"\n\t\t},\n\t\t\"formatNumber\": {\n\t\t\t\"description\": \"OMF format property to apply to the type Number\",\n\t\t\t\"type\": \"enumeration\",\n\t\t\t\"default\": \"float64\",\n\t\t\t\"options\": [\"float64\", \"float32\"],\n\t\t\t\"order\": \"15\",\n\t\t\t\"group\": \"Formats & Types\",\n\t\t\t\"displayName\": \"Number Format\"\n\t\t},\n\t\t\"compression\": {\n\t\t\t\"description\": \"Compress readings data before sending to PI server\",\n\t\t\t\"type\": \"boolean\",\n\t\t\t\"default\": \"true\",\n\t\t\t\"order\": \"16\",\n\t\t\t\"group\": \"Connection\",\n\t\t\t\"displayName\": \"Compression\"\n\t\t},\n\t\t\"DefaultAFLocation\": {\n\t\t\t\"description\": \"Defines the default location in the Asset Framework hierarchy in which the assets will be created, each level is separated by /, PI Web API only.\",\n\t\t\t\"type\": \"string\",\n\t\t\t\"default\": \"/fledge/data_piwebapi/default\",\n\t\t\t\"order\": \"17\",\n\t\t\t\"displayName\": \"Default Asset Framework Location\",\n\t\t\t\"group\" : \"Asset Framework\",\n\t\t\t\"validity\" : \"PIServerEndpoint == \\\"PI Web API\\\"\"\n\t\t},\n\t\t\"AFMap\": {\n\t\t\t\"description\": \"Defines a set of rules to address where assets should be placed in the AF hierarchy.\",\n\t\t\t\"type\": \"JSON\",\n\t\t\t\"default\": AF_HIERARCHY_RULES,\n\t\t\t\"order\": \"18\",\n\t\t\t\"group\" : \"Asset Framework\",\n\t\t\t\"displayName\": \"Asset Framework hierarchy rules\",\n\t\t\t\"validity\" : \"PIServerEndpoint == \\\"PI Web API\\\"\"\n\n\n\t\t},\n\t\t\"notBlockingErrors\": 
{\n\t\t\t\"description\": \"These errors are considered not blocking in the communication with the PI Server, the sending operation will proceed with the next block of data if one of these is encountered\",\n\t\t\t\"type\": \"JSON\",\n\t\t\t\"default\": NOT_BLOCKING_ERRORS_DEFAULT,\n\t\t\t\"order\": \"19\" ,\n\t\t\t\"readonly\": \"true\"\n\t\t},\n\t\t\"PIWebAPIAuthenticationMethod\": {\n\t\t\t\"description\": \"Defines the authentication method to be used with the PI Web API.\",\n\t\t\t\"type\": \"enumeration\",\n\t\t\t\"options\":[\"anonymous\", \"basic\", \"kerberos\"],\n\t\t\t\"default\": \"anonymous\",\n\t\t\t\"order\": \"20\",\n\t\t\t\"group\": \"Authentication\",\n\t\t\t\"displayName\": \"PI Web API Authentication Method\",\n\t\t\t\"validity\" : \"PIServerEndpoint == \\\"PI Web API\\\"\"\n\t\t},\n\t\t\"PIWebAPIUserId\": {\n\t\t\t\"description\": \"User id of PI Web API to be used with the basic access authentication.\",\n\t\t\t\"type\": \"string\",\n\t\t\t\"default\": \"user_id\",\n\t\t\t\"order\": \"21\",\n\t\t\t\"group\": \"Authentication\",\n\t\t\t\"displayName\": \"PI Web API User Id\",\n\t\t\t\"validity\" : \"PIServerEndpoint == \\\"PI Web API\\\" && PIWebAPIAuthenticationMethod == \\\"basic\\\"\"\n\t\t},\n\t\t\"PIWebAPIPassword\": {\n\t\t\t\"description\": \"Password of the user of PI Web API to be used with the basic access authentication.\",\n\t\t\t\"type\": \"password\",\n\t\t\t\"default\": \"password\",\n\t\t\t\"order\": \"22\" ,\n\t\t\t\"group\": \"Authentication\",\n\t\t\t\"displayName\": \"PI Web API Password\",\n\t\t\t\"validity\" : \"PIServerEndpoint == \\\"PI Web API\\\" && PIWebAPIAuthenticationMethod == \\\"basic\\\"\"\n\t\t},\n\t\t\"PIWebAPIKerberosKeytabFileName\": {\n\t\t\t\"description\": \"Keytab file name used for Kerberos authentication in PI Web API.\",\n\t\t\t\"type\": \"string\",\n\t\t\t\"default\": \"piwebapi_kerberos_https.keytab\",\n\t\t\t\"order\": \"23\" ,\n\t\t\t\"group\": \"Authentication\",\n\t\t\t\"displayName\": \"PI Web 
API Kerberos keytab file\",\n\t\t\t\"validity\" : \"PIServerEndpoint == \\\"PI Web API\\\" && PIWebAPIAuthenticationMethod == \\\"kerberos\\\"\"\n\t\t},\n\t\t\"OCSNamespace\" : {\n\t\t\t\"description\" : \"Specifies the namespace where the information are stored and it is used for the interaction with AVEVA Data Hub or OCS\",\n\t\t\t\"type\" : \"string\",\n\t\t\t\"default\": \"name_space\",\n\t\t\t\"order\": \"24\",\n\t\t\t\"group\" : \"Cloud\",\n\t\t\t\"displayName\" : \"Namespace\",\n\t\t\t\"validity\" : \"PIServerEndpoint == \\\"OSIsoft Cloud Services\\\" || PIServerEndpoint == \\\"AVEVA Data Hub\\\"\"\n\t\t},\n\t\t\"OCSTenantId\" : {\n\t\t\t\"description\" : \"Tenant id associated to the specific AVEVA Data Hub or OCS account\",\n\t\t\t\"type\" : \"string\",\n\t\t\t\"default\": \"ocs_tenant_id\",\n\t\t\t\"order\": \"25\",\n\t\t\t\"group\" : \"Cloud\",\n\t\t\t\"displayName\" : \"Tenant ID\",\n\t\t\t\"validity\" : \"PIServerEndpoint == \\\"OSIsoft Cloud Services\\\" || PIServerEndpoint == \\\"AVEVA Data Hub\\\"\"\n\t\t},\n\t\t\"OCSClientId\" : {\n\t\t\t\"description\" : \"Client id associated to the specific account, it is used to authenticate when using the AVEVA Data Hub or OCS\",\n\t\t\t\"type\" : \"string\",\n\t\t\t\"default\": \"ocs_client_id\",\n\t\t\t\"order\": \"26\",\n\t\t\t\"group\" : \"Cloud\",\n\t\t\t\"displayName\" : \"Client ID\",\n\t\t\t\"validity\" : \"PIServerEndpoint == \\\"OSIsoft Cloud Services\\\" || PIServerEndpoint == \\\"AVEVA Data Hub\\\"\"\n\t\t},\n\t\t\"OCSClientSecret\" : {\n\t\t\t\"description\" : \"Client secret associated to the specific account, it is used to authenticate with AVEVA Data Hub or OCS\",\n\t\t\t\"type\" : \"password\",\n\t\t\t\"default\": \"ocs_client_secret\",\n\t\t\t\"order\": \"27\",\n\t\t\t\"group\" : \"Cloud\",\n\t\t\t\"displayName\" : \"Client Secret\",\n\t\t\t\"validity\" : \"PIServerEndpoint == \\\"OSIsoft Cloud Services\\\" || PIServerEndpoint == \\\"AVEVA Data 
Hub\\\"\"\n\t\t},\n\t\t\"PIWebAPInotBlockingErrors\": {\n\t\t\t\"description\": \"These errors are considered not blocking in the communication with the PI Web API, the sending operation will proceed with the next block of data if one of these is encountered\",\n\t\t\t\"type\": \"JSON\",\n\t\t\t\"default\": NOT_BLOCKING_ERRORS_DEFAULT_PI_WEB_API,\n\t\t\t\"order\": \"28\" ,\n\t\t\t\"readonly\": \"true\"\n\t\t},\n\t\t\"Legacy\": {\n\t\t\t\"description\": \"Force all data to be sent using complex OMF types\",\n\t\t\t\"type\": \"boolean\",\n\t\t\t\"default\": \"false\",\n\t\t\t\"order\": \"29\",\n\t\t\t\"group\": \"Formats & Types\",\n\t\t\t\"displayName\": \"Complex Types\"\n\t\t},\n        \"EnableTracing\" : {\n            \"description\" : \"If enabled, a detailed tracing of OMF messages will be written to logs/debug-trace/omf.log file in Fledge data directory.\",\n            \"type\" : \"boolean\",\n            \"default\" : \"false\",\n            \"order\" : \"30\",\n            \"displayName\" : \"Enable Tracing\"\n        }\n\t}\n);\n\n// \"default\": \"{\\\"pipeline\\\": [\\\"DeltaFilter\\\"]}\"\n\n\n/**\n * Return the information about this plugin\n */\n/**\n * The PI Server plugin interface\n */\nextern \"C\" {\n\n/**\n * The C API plugin information structure\n */\nstatic PLUGIN_INFORMATION info = {\n\tPLUGIN_NAME,\t\t\t   // Name\n\tVERSION,\t\t\t   // Version\n\tSP_PERSIST_DATA | SP_BUILTIN,\t   // Flags\n\tPLUGIN_TYPE_NORTH,\t\t   // Type\n\t\"1.0.0\",\t\t\t   // Interface version\n\tPLUGIN_DEFAULT_CONFIG_INFO\t   // Configuration\n};\n\n/**\n * Return the information about this plugin\n */\nPLUGIN_INFORMATION *plugin_info()\n{\n\treturn &info;\n}\n\n/**\n * Initialise the plugin with configuration.\n *\n * This function is called to get the plugin handle.\n */\nPLUGIN_HANDLE plugin_init(ConfigCategory* configData)\n{\n#if INSTRUMENT\n\tstruct timeval startTime;\n\tgettimeofday(&startTime, NULL);\n#endif\n\n\tint endpointPort = 0;\n\n\t/**\n\t * Handle 
the PI Server parameters here\n\t */\n\t// Allocate connector struct\n\tOMFInformation *info = new OMFInformation(configData);\n#if INSTRUMENT\n\tLogger::getLogger()->debug(\"plugin_init elapsed time: %6.3f seconds\", GetElapsedTime(&startTime));\n#endif\n\n\treturn (PLUGIN_HANDLE)info;\n}\n\n\n/**\n * Plugin start with stored plugin_data\n *\n * @param handle\tThe plugin handle\n * @param storedData\tThe stored plugin_data\n */\nvoid plugin_start(const PLUGIN_HANDLE handle,\n\t\t  const string& storedData)\n{\n#if INSTRUMENT\n\tstruct timeval startTime;\n\tgettimeofday(&startTime, NULL);\n\n\t// For debugging: write plugin's stored data to a file\n\tstring jsonFilePath = getDataDir() + string(\"/logs/OMFStoredData.json\");\n\tofstream f(jsonFilePath.c_str(), ios_base::trunc);\n\tf << storedData.c_str();\n\tf.close();\n#endif\n\n\tLogger* logger = Logger::getLogger();\n\tOMFInformation *info = (OMFInformation *)handle;\n\tinfo->start(storedData);\n\n\n#if INSTRUMENT\n\tLogger::getLogger()->debug(\"plugin_start elapsed time: %6.3f seconds\", GetElapsedTime(&startTime));\n#endif\n}\n\n/**\n * Send Readings data to historian server\n */\nuint32_t plugin_send(const PLUGIN_HANDLE handle,\n\t\t     const vector<Reading *>& readings)\n{\n\tOMFInformation *info = (OMFInformation *)handle;\n\treturn info->send(readings);\n}\n\n/**\n * Shutdown the plugin\n *\n * Delete allocated data\n *\n * Note: the entry with FAKE_ASSET_KEY is never saved.\n *\n * @param handle   The plugin handle\n * @return         A string with JSON plugin data\n *                 the caller will persist\n */\nstring plugin_shutdown(PLUGIN_HANDLE handle)\n{\n\t// Delete the handle\n\tOMFInformation *info = (OMFInformation *) handle;\n\n\tstring rval = info->saveData();\n\tdelete info;\n\treturn rval;\n}\n\n// End of extern \"C\"\n};\n"
  },
  {
    "path": "C/plugins/storage/CMakeLists.txt",
    "content": "cmake_minimum_required (VERSION 2.8.8)\nproject (FledgeStoragePlugins)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O3\")\n\nadd_subdirectory(common)\nadd_subdirectory(postgres)\nadd_subdirectory(sqlite)\n"
  },
  {
    "path": "C/plugins/storage/README.rst",
    "content": ".. |br| raw:: html\n\n   <br />\n\n\n***************\nStorage Plugins\n***************\n\nThis directory contains the source code for the plugins used by the Storage service.\n\nBuilding\n========\n\nTo build these plugins, run the commands:\n::\n  mkdir build\n  cd build\n  cmake ..\n  make\n\nUse the command ``make install`` to install in the default location;\nnote that you will need permission on the installation directory or\nmust use the sudo command. Pass the option *DESTDIR=* to set your own\ndestination into which to install the Storage service.\n\n"
  },
  {
    "path": "C/plugins/storage/common/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.4.0)\n\nproject(storage-common-lib)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\nset(DLLIB -ldl)\n\n# Find source files\nfile(GLOB SOURCES *.cpp)\n\n# Include header files\ninclude_directories(./include)\ninclude_directories(../../../common/include)\n\n\nset(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/../../../lib)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\ntarget_link_libraries(${PROJECT_NAME} ${DLLIB})\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/lib)\n\n"
  },
  {
    "path": "C/plugins/storage/common/disk_monitor.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <disk_monitor.h>\n#include <logger.h>\n#include <sys/vfs.h>\n#include <string.h>\n#include <errno.h>\n\nusing namespace std;\n\n/**\n * Construct a disk space monitor class\n *\n * It monitors the free space on two paths, since the storage service may use\n * different locations for readings and configuration storage. If they are both\n * the same file system then the monitoring is only done once.\n *\n * If the free space falls below 5% a fatal error is written to the error log.\n * If it falls below 10% a warning is written advising that disk space should be released.\n * It attempts to predict when storage will become exhausted. If it is less than 14 days\n * it will report this to the error log once a day.\n * If it is less than 72 hours it will report this once per hour.\n *\n * All reporting is to the system log\n *\n * NB In an ideal world we would make the thresholds and reporting interval configurable,\n * however we are running within the limited environment of a storage plugin and do not\n * have access to the management client or configuration subsystem.\n */\nDiskSpaceMonitor::DiskSpaceMonitor(const string& path1, const string& path2) :\n\tm_dbPath1(path1), m_dbPath2(path2), m_started(false), m_sameDevice(false), m_lastCheck(0),\n\tm_lastPerc1(0.0), m_lastPerc2(0.0), m_lastPrediction1(0.0), m_lastPrediction2(0.0), m_reported(0)\n{\n\tm_logger = Logger::getLogger();\n}\n\n/**\n * Called periodically to monitor the disk usage\n *\n * @param interval\tThe number of seconds between calls\n */\nvoid DiskSpaceMonitor::periodic(int interval)\n{\n\tstruct statfs stf1, stf2;\n\n\tif (!m_started)\n\t{\n\t\t// We have not yet started to monitor the disk usage.\n\t\t// Do the initial statfs calls to see if the configuration\n\t\t// and readings are on the same filesystem. 
If they are we \n\t\t// only monitor one of them\n\t\t//\n\t\t// If the statfs fails log it and do not start monitoring. The\n\t\t// rate at which logs are created is limited to prevent flooding\n\t\t// the error log.\n\t\tif (statfs(m_dbPath1.c_str(), &stf1) != 0)\n\t\t{\n\t\t\tif (m_reported == 0)\n\t\t\t{\n\t\t\t\tm_logger->error(\"Can't statfs %s, %s. Disk space monitoring is disabled\",\n\t\t\t\t\t\tm_dbPath1.c_str(), strerror(errno));\n\t\t\t\tm_reported++;\n\t\t\t}\n\t\t\telse if (++m_reported > FAILED_DISK_MONITOR_REPORT_INTERVAL)\n\t\t\t{\n\t\t\t\tm_reported = 0;\n\t\t\t}\n\t\t\treturn;\n\t\t}\n\t\tif (statfs(m_dbPath2.c_str(), &stf2) != 0)\n\t\t{\n\t\t\tif (m_reported == 0)\n\t\t\t{\n\t\t\t\tm_logger->error(\"Can't statfs %s, %s. Disk space monitoring is disabled\",\n\t\t\t\t\t\tm_dbPath2.c_str(), strerror(errno));\n\t\t\t\tm_reported++;\n\t\t\t}\n\t\t\telse if (++m_reported > FAILED_DISK_MONITOR_REPORT_INTERVAL)\n\t\t\t{\n\t\t\t\tm_reported = 0;\n\t\t\t}\n\t\t\treturn;\n\t\t}\n\t\tif (memcmp(&stf1.f_fsid, &stf2.f_fsid, sizeof(fsid_t)) == 0)\t// Same filesystem\n\t\t{\n\t\t\tm_sameDevice = true;\n\t\t}\n\t\tm_started = true;\n\t}\n\tm_lastCheck += interval;\n\n\tif (m_lastCheck < CHECK_THRESHOLD)\n\t{\n\t\t// Do not check too frequently\n\t\treturn;\n\t}\n\tm_lastCheck = 0;\n\n\t\n\tif (statfs(m_dbPath1.c_str(), &stf1) == 0)\n\t{\n\t\tunsigned long freespace = (unsigned long)stf1.f_bavail;\n\t\tunsigned long totalspace = (unsigned long)stf1.f_blocks;\n\n\t\tdouble perc = (double)(freespace  * 100.0) / totalspace;\n\n\t\tif (perc < 5.0)\n\t\t{\n\t\t\tm_logger->fatal(\"Disk space is critically low. 
Urgent action required, continuing may result in data corruption\");\n\t\t}\n\t\telse if (perc < 10.0)\n\t\t{\n\t\t\tm_logger->error(\"Available disk space is becoming low, please consider releasing more disk space\");\n\t\t}\n\t\tif (m_lastPerc1 > 0.0)\n\t\t{\n\t\t\tdouble diff = m_lastPerc1 - perc;\n\t\t\tif (diff > 0.0)\n\t\t\t{\n\t\t\t\tdouble prediction = (perc * CHECK_THRESHOLD)/ (3600.0 * diff);\n\t\t\t\tif (prediction <= 72.0 && (m_lastPrediction1 == 0.0 || m_lastPrediction1 - prediction > 1.0))\n\t\t\t\t{\n\t\t\t\t\tm_logger->error(\"At current rates disk space will be exhausted in %.0f hours\", prediction);\n\t\t\t\t\tm_lastPrediction1 = prediction;\n\t\t\t\t}\n\t\t\t\telse if (prediction / 24.0 <= 14 && (m_lastPrediction1 == 0.0 || m_lastPrediction1 - prediction > 24.0))\n\t\t\t\t{\n\t\t\t\t\tm_lastPrediction1 = prediction;\n\t\t\t\t\tm_logger->warn(\"At current rates disk space will be exhausted in %.1f days\", prediction / 24);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_lastPrediction1 = 0.0;\n\t\t\t}\n\t\t}\n\t\tm_lastPerc1 = perc;\n\n\t}\n\tif (m_sameDevice)\n\t{\n\t\treturn;\n\t}\n\tif (statfs(m_dbPath2.c_str(), &stf1) == 0)\n\t{\n\t\tunsigned long freespace = (unsigned long)stf1.f_bavail;\n\t\tunsigned long totalspace = (unsigned long)stf1.f_blocks;\n\n\t\tdouble perc = (double)(freespace  * 100.0) / totalspace;\n\n\t\tif (perc < 5.0)\n\t\t{\n\t\t\tm_logger->fatal(\"Disk space is critically low. 
Urgent action required, continuing may result in data corruption\");\n\t\t}\n\t\telse if (perc < 10.0)\n\t\t{\n\t\t\tm_logger->error(\"Available disk space is becoming low, please consider releasing more disk space\");\n\t\t}\n\t\tif (m_lastPerc2 > 0.0)\n\t\t{\n\t\t\tdouble diff = m_lastPerc2 - perc;\n\t\t\tif (diff > 0.0)\n\t\t\t{\n\t\t\t\tdouble prediction = (perc * CHECK_THRESHOLD)/ (3600.0 * diff);\n\t\t\t\tif (prediction <= 72.0 && (m_lastPrediction2 == 0.0 || m_lastPrediction2 - prediction > 1.0))\n\t\t\t\t{\n\t\t\t\t\tm_logger->error(\"At current rates disk space will be exhausted in %.0f hours\", prediction);\n\t\t\t\t\tm_lastPrediction2 = prediction;\n\t\t\t\t}\n\t\t\t\telse if (prediction / 24.0 <= 14 && (m_lastPrediction2 == 0.0 || m_lastPrediction2 - prediction > 24.0))\n\t\t\t\t{\n\t\t\t\t\tm_lastPrediction2 = prediction;\n\t\t\t\t\tm_logger->warn(\"At current rates disk space will be exhausted in %.1f days\", prediction / 24);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_lastPrediction2 = 0.0;\n\t\t\t}\n\t\t}\n\t\tm_lastPerc2 = perc;\n\t}\n}\n"
  },
  {
    "path": "C/plugins/storage/common/include/disk_monitor.h",
    "content": "#ifndef _DISK_SPACE_MONITOR_H\n#define _DISK_SPACE_MONITOR_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <logger.h>\n#include <string>\n\n#define CHECK_THRESHOLD\t\t300\t// check every 5 minutes\n\n#define FAILED_DISK_MONITOR_REPORT_INTERVAL\t600 // Interval between logging failure to stat the filesystem (10 minutes)\n\n/**\n * A class to monitor the free disk space used to store\n * the various storage databases\n */\nclass DiskSpaceMonitor {\n\tpublic:\n\t\tDiskSpaceMonitor(const std::string& db1, const std::string& db2);\n\t\tvoid\t\tperiodic(int interval);\n\tprivate:\n\t\tstd::string\tm_dbPath1;\n\t\tstd::string\tm_dbPath2;\n\t\tbool\t\tm_started;\n\t\tbool\t\tm_sameDevice;\n\t\tunsigned int\tm_lastCheck;\n\t\tLogger\t\t*m_logger;\n\t\tdouble\t\tm_lastPerc1;\n\t\tdouble\t\tm_lastPerc2;\n\t\tdouble\t\tm_lastPrediction1;\n\t\tdouble\t\tm_lastPrediction2;\n\t\tint\t\tm_reported;\n};\n#endif\n"
  },
  {
    "path": "C/plugins/storage/common/include/sql_buffer.h",
    "content": "#ifndef _SQL_BUFFER_H\n#define _SQL_BUFFER_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <string>\n#include <list>\n\n#define BUFFER_CHUNK\t1024\n\n/**\n * Buffer class designed to hold SQL statements that can grow\n * as required but have minimal copy semantics.\n */\nclass SQLBuffer {\n\tclass Buffer {\n\t\tpublic:\n\t\t\tBuffer();\n\t\t\tBuffer(unsigned int);\n\t\t\t~Buffer();\n\t\t\tchar\t\t*detach();\n\t\t\tchar\t\t*data;\n\t\t\tunsigned int\toffset;\n\t\t\tunsigned int\tlength;\n\t\t\tbool\t\tattached;\n\t};\n\n        public:\n                SQLBuffer();\n                ~SQLBuffer();\n\n\t\tbool\t\t\tisEmpty() { return buffers.empty() || (buffers.size() == 1 && buffers.front()->offset == 0); }\n\t\tvoid\t\t\tappend(const char);\n\t\tvoid\t\t\tappend(const char *);\n\t\tvoid\t\t\tappend(const int);\n\t\tvoid\t\t\tappend(const unsigned int);\n\t\tvoid\t\t\tappend(const long);\n\t\tvoid\t\t\tappend(const unsigned long);\n\t\tvoid\t\t\tappend(const double);\n\t\tvoid\t\t\tappend(const std::string&);\n\t\tvoid\t\t\tquote(const std::string&);\n\t\tconst char\t\t*coalesce();\n\t\tvoid\t\t\tclear();\n\n\tprivate:\n\t\tstd::list<Buffer *>\tbuffers;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/storage/common/sql_buffer.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <sql_buffer.h>\n#include <string.h>\n#include <string_utils.h>\n\nusing namespace std;\n/**\n * Buffer class designed to hold SQL statements that can grow\n * as required but have minimal copy semantics.\n */\n\n/**\n * SQLBuffer constructor\n */\nSQLBuffer::SQLBuffer()\n{\n        buffers.push_front(new SQLBuffer::Buffer());\n}\n\n/**\n * SQLBuffer destructor\n */\nSQLBuffer::~SQLBuffer()\n{\n\tfor (list<SQLBuffer::Buffer *>::iterator it = buffers.begin(); it != buffers.end(); ++it)\n\t{\n\t\tdelete *it;\n\t}\n}\n\n/**\n * Clear all the buffers from the SQLBuffer and allow it to be reused\n */\nvoid SQLBuffer::clear()\n{\n\tfor (list<SQLBuffer::Buffer *>::iterator it = buffers.begin(); it != buffers.end(); ++it)\n\t{\n\t\tdelete *it;\n\t}\n\tbuffers.clear();\n        buffers.push_front(new SQLBuffer::Buffer());\n}\n\n/**\n * Append a character to a buffer\n *\n * @param data\tThe character to append to the buffer\n */\nvoid SQLBuffer::append(const char data)\n{\nSQLBuffer::Buffer *buffer = buffers.back();\n\n        if (buffer->offset == buffer->length)\n        {\n\t\tbuffer = new SQLBuffer::Buffer();\n\t\tbuffers.push_back(buffer);\n\t}\n\tbuffer->data[buffer->offset] = data;\n\tbuffer->data[buffer->offset + 1] = 0;\n\tbuffer->offset++;\n}\n\n/**\n * Append a character string to a buffer\n *\n * @param data\tThe string to append to the buffer\n */\nvoid SQLBuffer::append(const char *data)\n{\nunsigned int len = strlen(data);\nSQLBuffer::Buffer *buffer = buffers.back();\n\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tif (len > BUFFER_CHUNK)\n\t\t{\n\t\t\tbuffer = new SQLBuffer::Buffer(len + BUFFER_CHUNK);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tbuffer = new SQLBuffer::Buffer();\n\t\t}\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], data, 
len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Append an integer to a buffer\n *\n * @param value\tThe value to append to the buffer\n */\nvoid SQLBuffer::append(const int value)\n{\nchar\ttmpbuf[80];\nunsigned int len;\nSQLBuffer::Buffer *buffer = buffers.back();\n\n\tlen = (unsigned int)snprintf(tmpbuf, 80, \"%d\", value);\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tbuffer = new SQLBuffer::Buffer();\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], tmpbuf, len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Append a long to a buffer\n *\n * @param value\tThe long value to append to the buffer\n */\nvoid SQLBuffer::append(const long value)\n{\nchar\ttmpbuf[80];\nunsigned int len;\nSQLBuffer::Buffer *buffer = buffers.back();\n\n\tlen = (unsigned int)snprintf(tmpbuf, 80, \"%ld\", value);\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tbuffer = new SQLBuffer::Buffer();\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], tmpbuf, len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Append an unsigned integer to a buffer\n *\n * @param value\tThe unsigned integer value to append to the buffer\n */\nvoid SQLBuffer::append(const unsigned int value)\n{\nchar\ttmpbuf[80];\nunsigned int len;\nSQLBuffer::Buffer *buffer = buffers.back();\n\n\tlen = (unsigned int)snprintf(tmpbuf, 80, \"%u\", value);\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tbuffer = new SQLBuffer::Buffer();\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], tmpbuf, len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Append an unsigned long to a buffer\n *\n * @param value\tThe unsigned long value to append to the buffer\n */\nvoid SQLBuffer::append(const unsigned long value)\n{\nchar\ttmpbuf[80];\nunsigned int len;\nSQLBuffer::Buffer *buffer = 
buffers.back();\n\n\tlen = (unsigned int)snprintf(tmpbuf, 80, \"%lu\", value);\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tbuffer = new SQLBuffer::Buffer();\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], tmpbuf, len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Append a double to a buffer\n *\n * @param value\tThe double value to append to the buffer\n */\nvoid SQLBuffer::append(const double value)\n{\nchar\ttmpbuf[80];\nunsigned int len;\nSQLBuffer::Buffer *buffer = buffers.back();\n\n\tlen = (unsigned int)snprintf(tmpbuf, 80, \"%f\", value);\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tbuffer = new SQLBuffer::Buffer();\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], tmpbuf, len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Append a string to a buffer\n *\n * @param str\tThe string to be appended to the buffer\n */\nvoid SQLBuffer::append(const string& str)\n{\nconst char\t*cstr = str.c_str();\nunsigned int len = strlen(cstr);\nSQLBuffer::Buffer *buffer = buffers.back();\n\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tif (len > BUFFER_CHUNK)\n\t\t{\n\t\t\tbuffer = new SQLBuffer::Buffer(len + BUFFER_CHUNK);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tbuffer = new SQLBuffer::Buffer();\n\t\t}\n\t\tbuffers.push_back(buffer);\n\t}\n\tmemcpy(&buffer->data[buffer->offset], cstr, len);\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Quote and append a string to a buffer\n *\n * @param str\tThe string to quote and append to the buffer\n */\nvoid SQLBuffer::quote(const string& str)\n{\nstring esc = str;\nStringEscapeQuotes(esc);\nconst char\t*cstr = esc.c_str();\nunsigned int len = strlen(cstr) + 2;\nSQLBuffer::Buffer *buffer = buffers.back();\n\n        if (buffer->offset + len >= buffer->length)\n        {\n\t\tif (len > BUFFER_CHUNK)\n\t\t{\n\t\t\tbuffer = new 
SQLBuffer::Buffer(len + BUFFER_CHUNK);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tbuffer = new SQLBuffer::Buffer();\n\t\t}\n\t\tbuffers.push_back(buffer);\n\t}\n\tbuffer->data[buffer->offset] = '\"';\n\tmemcpy(&buffer->data[buffer->offset + 1], cstr, len - 2);\n\tbuffer->data[buffer->offset + len - 1] = '\"';\n\tbuffer->offset += len;\n\tbuffer->data[buffer->offset] = 0;\n}\n\n/**\n * Create a coalesced buffer from the buffer chain\n *\n * The buffer returned has been created using the new[] operator and must be\n * deleted by the caller.\n * @return char* The SQL statement in a single buffer\n */\nconst char *SQLBuffer::coalesce()\n{\nunsigned int length = 0, offset = 0;\nchar\t     *buffer = 0;\n\n\tif (buffers.size() == 1)\n\t{\n\t\treturn buffers.back()->detach();\n\t}\n\tfor (list<SQLBuffer::Buffer *>::iterator it = buffers.begin(); it != buffers.end(); ++it)\n\t{\n\t\tlength += (*it)->offset;\n\t}\n\tbuffer = new char[length+1];\n\tfor (list<SQLBuffer::Buffer *>::iterator it = buffers.begin(); it != buffers.end(); ++it)\n\t{\n\t\tmemcpy(&buffer[offset], (*it)->data, (*it)->offset);\n\t\toffset += (*it)->offset;\n\t}\n\tbuffer[offset] = 0;\n\treturn buffer;\n}\n\n/**\n * Construct a buffer with a standard size initial buffer.\n */\nSQLBuffer::Buffer::Buffer() : offset(0), length(BUFFER_CHUNK), attached(true)\n{\n\tdata = new char[BUFFER_CHUNK+1];\n\tdata[0] = 0;\n}\n\n/**\n * Construct a large buffer, passing the size of buffer required. 
This is useful\n * if you know your buffer requirements are large and you wish to reduce the amount\n * of allocation required.\n *\n * @param size\tThe size of the initial buffer to allocate.\n */\nSQLBuffer::Buffer::Buffer(unsigned int size) : offset(0), length(size), attached(true)\n{\n\tdata = new char[size+1];\n\tdata[0] = 0;\n}\n\n/**\n * Buffer destructor, the buffer itself is also deleted by this\n * call and any reference to it must no longer be used.\n */\nSQLBuffer::Buffer::~Buffer()\n{\n\tif (attached)\n\t{\n\t\tdelete[] data;\n\t\tdata = 0;\n\t}\n}\n\n/**\n * Detach the buffer from the SQLBuffer. The reference to the buffer\n * is removed from the SQLBuffer but the buffer itself is not deleted.\n * This allows the buffer ownership to be taken by external code\n * whilst allowing the SQLBuffer to allocate a new buffer.\n */\nchar *SQLBuffer::Buffer::detach()\n{\nchar *rval = data;\n\n\tattached = false;\n\tlength = 0;\n\tdata = 0;\n\treturn rval;\n}\n"
  },
  {
    "path": "C/plugins/storage/postgres/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6.0)\n\nproject(postgres)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\nset(STORAGE_COMMON_LIB storage-common-lib)\n\n# Handle Postgres on RedHat/CentOS\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} \"${CMAKE_CURRENT_SOURCE_DIR}\")\ninclude(CheckRhPg)\n\n# Find source files\nfile(GLOB SOURCES *.cpp)\n\n# Include header files\ninclude_directories(include ../../../common/include ../../../services/common/include ../common/include)\ninclude_directories(../../../thirdparty/rapidjson/include)\nlink_directories(${PROJECT_BINARY_DIR}/../../../lib)\n\nif(${RH_POSTGRES_FOUND} EQUAL 1)\n\n    include_directories(${RH_POSTGRES_INCLUDE})\n    link_directories(${RH_POSTGRES_LIB64})\nelse()\n    include_directories(/usr/include/postgresql)\nendif()\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\ntarget_link_libraries(${PROJECT_NAME} -lpq)\ntarget_link_libraries(${PROJECT_NAME} ${STORAGE_COMMON_LIB})\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/plugins/storage/${PROJECT_NAME})\n\n# Install init.sql\ninstall(FILES ${CMAKE_SOURCE_DIR}/scripts/plugins/storage/${PROJECT_NAME}/init.sql DESTINATION fledge/plugins/storage/${PROJECT_NAME})\n"
  },
  {
    "path": "C/plugins/storage/postgres/CheckRhPg.cmake",
    "content": "# Evaluates if rh-postgresql13 is available and enabled and identifies its path\n\nexecute_process(\n        COMMAND  \"scl\" \"enable\" \"rh-postgresql13\" \"command -v pg_isready\"\n        RESULT_VARIABLE CMD_ERROR\n        OUTPUT_VARIABLE CMD_OUTPUT\n)\n\nif(${CMD_ERROR} EQUAL 0)\n    string(REGEX REPLACE \"/bin/pg_isready[\\n]\" \"\" RH_POSTGRES_PATH ${CMD_OUTPUT})\n\n    set(RH_POSTGRES_FOUND 1)\n    set(RH_POSTGRES_INCLUDE \"${RH_POSTGRES_PATH}/include\")\n    set(RH_POSTGRES_LIB64   \"${RH_POSTGRES_PATH}/lib64\")\nelse()\n    set(RH_POSTGRES_FOUND 0)\nendif()\n\nif(${RH_POSTGRES_FOUND} EQUAL 1)\n\n    MESSAGE( STATUS \"INFO: rh-postgresql13 found in the path :${RH_POSTGRES_PATH}:\")\nelse()\n    MESSAGE( STATUS \"INFO: rh-postgresql13 not found\")\nendif()\n"
  },
  {
    "path": "C/plugins/storage/postgres/README.rst",
    "content": "*************************\nPostgreSQL Storage Plugin\n*************************\n\nThis directory contains the source code for the PostgreSQL Storage plugin used\nby the Storage service.\n\nBuilding\n========\n\nTo build the postgres plugin run the commands:\n::\n  mkdir build\n  cd build\n  cmake ..\n  make\n\nUse the command ``make install`` to install in the default location;\nnote that you will need write permission on the installation directory or must use the sudo command.\nPass the option *DESTDIR=* to set your own destination into which\nto install the plugin.\n\n"
  },
  {
    "path": "C/plugins/storage/postgres/connection.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <connection.h>\n#include <connection_manager.h>\n#include <sql_buffer.h>\n#include <iostream>\n#include <libpq-fe.h>\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/error/error.h\"\n#include \"rapidjson/error/en.h\"\n#include <string>\n#include <vector>\n#include <stdarg.h>\n#include <stdlib.h>\n#include <sstream>\n#include <logger.h>\n#include <time.h>\n#include <algorithm>\n#include <math.h>\n#include <sys/time.h>\n\n#include \"json_utils.h\"\n#include \"string_utils.h\"\n\n#include <iostream>\n#include <chrono>\n#include <thread>\n\nusing namespace std;\nusing namespace rapidjson;\n\nstatic time_t connectErrorTime = 0;\n#define CONNECT_ERROR_THRESHOLD\t\t5*60\t// 5 minutes\n#define MSG_LEN 5000\n\n//\n// Used for the purge operation - start\n//\n#define PURGE_DELETE_BLOCK_SIZE\t    10000\n#define MIN_PURGE_DELETE_BLOCK_SIZE\t1000\n#define MAX_PURGE_DELETE_BLOCK_SIZE\t10000\n\n#define TARGET_PURGE_BLOCK_DEL_TIME\t(70*1000) \t// 70 msec\n#define PURGE_BLOCK_SZ_GRANULARITY\t5 \t// 5 rows\n#define RECALC_PURGE_BLOCK_SIZE_NUM_BLOCKS\t30\t// recalculate purge block size after every 30 blocks\n\n#define START_TIME std::chrono::high_resolution_clock::time_point t1 = std::chrono::high_resolution_clock::now();\n#define END_TIME std::chrono::high_resolution_clock::time_point t2 = std::chrono::high_resolution_clock::now(); \\\n\t\t\t\t auto usecs = std::chrono::duration_cast<std::chrono::microseconds>( t2 - t1 ).count();\n//\n// Used for the purge operation - end\n\n\n#define LEN_BUFFER_DATE 100\n// Format timestamp having microseconds\n#define F_DATEH24_US    \t\"YYYY-MM-DD HH24:MI:SS.US\"\n\nstatic int purgeBlockSize = PURGE_DELETE_BLOCK_SIZE;\n\nconst vector<string>  pg_column_reserved_words = 
{\n\t\"user\"\n};\n\n/**\n * Check whether to compute timebucket query with min,max,avg for all datapoints\n *\n * @param    payload\tJSON payload\n * @return\t\tTrue if aggregation is 'all'\n */\nbool aggregateAll(const Value& payload)\n{\n\tif (payload.HasMember(\"aggregate\") &&\n\t    payload[\"aggregate\"].IsObject())\n\t{       \n\t\tconst Value& agg = payload[\"aggregate\"];\n\t\tif (agg.HasMember(\"operation\") &&\n\t\t    strcmp(agg[\"operation\"].GetString(), \"all\") == 0)\n\t\t{\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\n/**\n * Build, execute and return data of a timebucket query with min,max,avg for all datapoints\n *\n * @param    payload\tJSON object for timebucket query\n * @param    resultSet\tJSON Output buffer\n * @return\t\tTrue on success, false on any error\n */\nbool Connection::aggregateQuery(const Value& payload, string& resultSet)\n{\n\tif (!payload.HasMember(\"where\") ||\n\t    !payload.HasMember(\"timebucket\"))\n\t{\n\t\traiseError(\"retrieve\", \"aggregateQuery is missing \"\n\t\t\t   \"'where' and/or 'timebucket' properties\");\n\t\treturn false;\n\t}\n\n\tSQLBuffer sql;\n\n\tsql.append(\"SELECT asset_code, \");\n\n\tdouble size = 1;\n\tstring timeColumn;\n\n\t// Check timebucket object\n\tif (payload.HasMember(\"timebucket\"))\n\t{\n\t\tconst Value& bucket = payload[\"timebucket\"];\n\t\tif (!bucket.HasMember(\"timestamp\"))\n\t\t{\n\t\t\traiseError(\"retrieve\", \"aggregateQuery is missing \"\n\t\t\t\t   \"'timestamp' property for 'timebucket'\");\n\t\t\treturn false;\n\t\t}\n\n\t\t// Time column\n\t\ttimeColumn = bucket[\"timestamp\"].GetString();\n\n\t\t// Bucket size\n\t\tif (bucket.HasMember(\"size\"))\n\t\t{\n\t\t\tsize = atof(bucket[\"size\"].GetString());\n\t\t\tif (!size)\n\t\t\t{\n\t\t\t\tsize = 1;\n\t\t\t}\n\t\t}\n\n\t\t// Time format for output\n\t\tif (bucket.HasMember(\"format\") && size >= 
1)\n\t\t{\n\t\t\tsql.append(\"to_char(\");\n\t\t\tsql.append(\"\\\"\");\n\t\t\tsql.append(\"timestamp\");\n\t\t\tsql.append(\"\\\"\");\n\t\t\tsql.append(\", '\");\n\t\t\tsql.append(bucket[\"format\"].GetString());\n\t\t\tsql.append(\"')\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (size < 1)\n\t\t\t{\n\t\t\t\t// sub-second granularity to time bucket size:\n\t\t\t\t// force output formatting with microseconds\n\t\t\t\tsql.append(\"to_char(\");\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\tsql.append(\"timestamp\");\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\tsql.append(\", '\");\n\t\t\t\tsql.append(\"YYYY-MM-DD HH24:MI:SS.US\");\n\t\t\t\tsql.append(\"')\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\"timestamp\");\n\t\t\t}\n\t\t}\n\n\t\t// Time output alias\n\t\tif (bucket.HasMember(\"alias\"))\n\t\t{\n\t\t\tsql.append(\" AS \");\n\t\t\tsql.append(bucket[\"alias\"].GetString());\n\t\t}\n\t}\n\n\t// JSON format aggregated data\n\tsql.append(\", (('{' || string_agg('\\\"' || x || '\\\" : ' || resd, ', ') || '}')::jsonb) AS reading \");\n\n\t// subquery\n\tsql.append(\"FROM ( SELECT  x, asset_code, max(timestamp) AS timestamp, \");\n\t// Add min\n\tsql.append(\"'{\\\"min\\\" : ' || min((reading->>x)::float) || ', \");\n\t// Add max\n\tsql.append(\"\\\"max\\\" : ' || max((reading->>x)::float) || ', \");\n\t// Add avg\n\tsql.append(\"\\\"average\\\" : ' || avg((reading->>x)::float) || ', \");\n\t// Add count\n\tsql.append(\"\\\"count\\\" : ' || count(reading->>x) || ', \");\n\t// Add sum\n\tsql.append(\"\\\"sum\\\" : ' || sum((reading->>x)::float) || '}' AS resd \");\n\n\t// subquery\n\tsql.append(\"FROM ( SELECT asset_code, \");\n\tsql.append(timeColumn);\n\tsql.append(\", to_timestamp(\");\n\n\t// Size formatted string\n\tstring size_format;\n\tif (fmod(size, 1.0) == 0.0)\n\t{\n\t\tsize_format = to_string(int(size));\n\t}\n\telse\n\t{\n\t\tsize_format = to_string(size);\n\t}\n\n\t// Add timebucket size\n\tif (size != 1)\n\t{\n\t\tsql.append(size_format);\n\t\tif (size > 
1)\n\t\t{\n\t\t\tsql.append(\" * round(extract(epoch from \");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(\" * round((extract(epoch from \");\n\t\t}\n\t\tsql.append(timeColumn);\n\t\tsql.append(\" ) / \");\n\t\tsql.append(size_format);\n\t\tsql.append(')');\n\t\tif (size > 1)\n\t\t{\n\t\t\tsql.append(')');\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(\"::numeric, 6))\");\n\t\t}\n\t}\n\telse\n\t{\n\t\tsql.append(\" round(extract(epoch from \");\n\t\tsql.append(timeColumn);\n\t\tsql.append(\") / 1)) \");\n\t}\n\tsql.append(\" AS \\\"timestamp\\\", reading, \");\n\n\t// Get all datapoints in 'reading' field\n\tsql.append(\"jsonb_object_keys(reading) AS x FROM fledge.readings \");\n\n\t// Add where condition\n\tsql.append(\"WHERE \");\n\n\tvector<string>  asset_codes;\n\tif (!jsonWhereClause(payload[\"where\"], sql, asset_codes))\n\t{\n\t\traiseError(\"retrieve\", \"aggregateQuery: failure while building WHERE clause\");\n\t\treturn false;\n\t}\n\n\t// sort results\n\tsql.append(\" ORDER BY \");\n\tsql.append(timeColumn);\n\tsql.append(\" DESC) tmp \");\n\n\t// Add group by\n\tsql.append(\"GROUP BY x, asset_code, \");\n\n\tsql.append(\"round(extract(epoch from \");\n\tsql.append(timeColumn);\n\tsql.append(\") / \");\n\tif (size != 1)\n\t{\n\t\tsql.append(size_format);\n\t}\n\telse\n\t{\n\t\tsql.append('1'); \n\t}\n\tsql.append(\") \");\n\n\t// sort results\n\tsql.append(\"ORDER BY timestamp DESC) tbl \");\n\n\t// Add final group and sort\n\tsql.append(\"GROUP BY timestamp, asset_code ORDER BY timestamp DESC\");\n\n\t// Add limit\n\tif (payload.HasMember(\"limit\"))\n\t{\n\t\tif (!payload[\"limit\"].IsInt())\n\t\t{\n\t\t\traiseError(\"retrieve\", \"aggregateQuery: limit must be specified as an integer\");\n\t\t\treturn false;\n\t\t}\n\t\tsql.append(\" LIMIT \");\n\t\ttry {\n\t\t\tsql.append(payload[\"limit\"].GetInt());\n\t\t} catch (exception& e) {\n\t\t\traiseError(\"retrieve\", \"aggregateQuery: bad value for limit parameter: %s\", e.what());\n\t\t\treturn 
false;\n\t\t}\n\t}\n\tsql.append(';');\n\n\t// Execute query\n\tconst char *query = sql.coalesce();\n\n\tlogSQL(\"CommonRetrieve\", query);\n\n\tPGresult *res = PQexec(dbConnection, query);\n\n\tdelete[] query;\n\n\tif (PQresultStatus(res) == PGRES_TUPLES_OK)\n\t{\n\t\tmapResultSet(res, resultSet);\n\t\tPQclear(res);\n\t\treturn true;\n\t}\n\tchar *SQLState = PQresultErrorField(res, PG_DIAG_SQLSTATE);\n\tif (!strcmp(SQLState, \"22P02\")) // Conversion error\n\t{\n\t\traiseError(\"retrieve\", \"Unable to convert data to the required type\");\n\t}\n\telse\n\t{\n\t\traiseError(\"retrieve\", PQerrorMessage(dbConnection));\n\t}\n\tPQclear(res);\n\treturn false;\n}\n\n/**\n * Create a database connection\n */\nConnection::Connection() : m_maxReadingRows(INSERT_ROW_LIMIT)\n{\n\tconst char *defaultConninfo = \"dbname = fledge\";\n\tchar *connInfo = NULL;\n\t\n\tif ((connInfo = getenv(\"DB_CONNECTION\")) == NULL)\n\t{\n\t\tconnInfo = (char *)defaultConninfo;\n\t}\n \n\t/* Make a connection to the database */\n\tdbConnection = PQconnectdb(connInfo);\n\n\t/* Check to see that the backend connection was successfully made */\n\tif (PQstatus(dbConnection) != CONNECTION_OK)\n\t{\n\t\tif (connectErrorTime == 0 || (time(0) - connectErrorTime > CONNECT_ERROR_THRESHOLD))\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Failed to connect to the database: %s\",\n\t\t\t\tPQerrorMessage(dbConnection));\n\t\t\tconnectErrorTime = time(0);\n\t\t}\n\t\tthrow runtime_error(\"Unable to connect to PostgreSQL database\");\n\t}\n\t\n\tlogSQL(\"Set\", \"session time zone 'UTC' \");\n\tPGresult *res = PQexec(dbConnection, \" set session time zone 'UTC' \");\n\tif (PQresultStatus(res) != PGRES_COMMAND_OK)\n\t{\n\t\tLogger::getLogger()->error(\"set session time zone failed: %s\", PQerrorMessage(dbConnection));\n\t}\n\tPQclear(res);\n}\n\n/**\n * Destructor for the database connection. 
Close the connection\n * to Postgres\n */\nConnection::~Connection()\n{\n\tPQfinish(dbConnection);\n}\n\n/**\n * Perform a query against a common table\n *\n */\nbool Connection::retrieve(const string& schema,\n\t\t\tconst string& table,\n\t\t\tconst string& condition,\n\t\t\tstring& resultSet)\n{\nDocument document;  // Default template parameter uses UTF8 and MemoryPoolAllocator.\nSQLBuffer\tsql;\nSQLBuffer\tjsonConstraints;\t// Extra constraints to add to where clause\nvector<string>  asset_codes;\n\n\ttry {\n\t\tif (condition.empty())\n\t\t{\n\t\t\tsql.append(\"SELECT * FROM \");\n\t\t\tsql.append(table);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (document.Parse(condition.c_str()).HasParseError())\n\t\t\t{\n\t\t\t\traiseError(\"retrieve\", \"Failed to parse JSON payload\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif (document.HasMember(\"aggregate\"))\n\t\t\t{\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tif (!jsonAggregates(document, document[\"aggregate\"], sql, jsonConstraints, false))\n\t\t\t\t{\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tsql.append(\" FROM \");\n\t\t\t}\n\t\t\telse if (document.HasMember(\"join\"))\n                        {\n                                sql.append(\"SELECT \");\n                                selectColumns(document, sql, 0);\n                        }\n\t\t\telse if (document.HasMember(\"return\"))\n\t\t\t{\n\t\t\t\tint col = 0;\n\t\t\t\tValue& columns = document[\"return\"];\n\t\t\t\tif (! 
columns.IsArray())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", \"The property return must be an array\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tfor (Value::ConstValueIterator itr = columns.Begin(); itr != columns.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col)\n\t\t\t\t\t\tsql.append(\", \");\n\t\t\t\t\tif (!itr->IsObject())\t// Simple column name\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\tsql.append(itr->GetString());\n\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tif (itr->HasMember(\"column\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (! (*itr)[\"column\"].IsString())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\traiseError(\"retrieve\", \"column must be a string\");\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (itr->HasMember(\"format\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (! (*itr)[\"format\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\", \"format must be a string\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tsql.append(\"to_char(\");\n\t\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\t\tsql.append(\", '\");\n\t\t\t\t\t\t\t\tsql.append((*itr)[\"format\"].GetString());\n\t\t\t\t\t\t\t\tsql.append(\"')\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (itr->HasMember(\"timezone\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (! 
(*itr)[\"timezone\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\", \"timezone must be a string\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\t\tsql.append(\" AT TIME ZONE '\");\n\t\t\t\t\t\t\t\tsql.append((*itr)[\"timezone\"].GetString());\n\t\t\t\t\t\t\t\tsql.append(\"' \");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tsql.append(' ');\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (itr->HasMember(\"json\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tconst Value& json = (*itr)[\"json\"];\n\t\t\t\t\t\t\tif (! returnJson(json, sql, jsonConstraints))\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\", \"return object must have either a column or json property\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (itr->HasMember(\"alias\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\t\t\t\tsql.append((*itr)[\"alias\"].GetString());\n\t\t\t\t\t\t\tsql.append('\"');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t\tsql.append(\" FROM \");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tsql.append(\" * FROM \");\n\t\t\t}\n\n\t\t\tif (document.HasMember(\"join\"))\n                        {\n                                sql.append(\" FROM \");\n                                sql.append(table);\n                                sql.append(\" t0\");\n                                appendTables(schema, document, sql, 1);\n                  
      }\n                        else\n                        {\n                                sql.append(table);\n                        }\n\t\t\tif (document.HasMember(\"where\"))\n\t\t\t{\n\t\t\t\tsql.append(\" WHERE \");\n\n\t\t\t\tif (document.HasMember(\"join\"))\n                                {\n                                        if (!jsonWhereClause(document[\"where\"], sql, asset_codes, false, \"t0.\"))\n                                        {\n                                                return false;\n                                        }\n\n                                        // Now add the join condition itself\n                                        string col0, col1;\n                                        const Value& join = document[\"join\"];\n                                        if (join.HasMember(\"on\") && join[\"on\"].IsString())\n                                        {\n                                                col0 = join[\"on\"].GetString();\n                                        }\n                                        else\n                                        {\n\n                                                raiseError(\"retrieve\", \"Missing on item\");\n                                                return false;\n                                        }\n                                        if (join.HasMember(\"table\"))\n                                        {\n                                                const Value& table = join[\"table\"];\n                                                if (table.HasMember(\"column\") && table[\"column\"].IsString())\n                                                {\n                                                        col1 = table[\"column\"].GetString();\n                                                }\n                                                else\n                                                {\n                            
                            raiseError(\"QueryTable\", \"Missing column in join table\");\n                                                        return false;\n                                                }\n                                        }\n                                        sql.append(\" AND t0.\");\n                                        sql.append(col0);\n                                        sql.append(\" = t1.\");\n                                        sql.append(col1);\n                                        sql.append(\" \");\n                                        if (join.HasMember(\"query\") && join[\"query\"].IsObject())\n                                        {\n                                                sql.append(\"AND  \");\n                                                const Value& query = join[\"query\"];\n                                                processJoinQueryWhereClause(query, sql, asset_codes, 1);\n                                        }\n                                }\n\t\t\t\telse if (document.HasMember(\"where\"))\n\t\t\t\t{\n\t\t\t\t\tif (!jsonWhereClause(document[\"where\"], sql, asset_codes))\n\t\t\t\t\t{\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", \"JSON does not contain where clause\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (! 
jsonConstraints.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tsql.append(\" AND \");\n\t\t\t\t\tconst char *jsonBuf = jsonConstraints.coalesce();\n\t\t\t\t\tsql.append(jsonBuf);\n\t\t\t\t\tdelete[] jsonBuf;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (!jsonModifiers(document, sql))\n\t\t\t{\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\tsql.append(';');\n\n\t\tconst char *query = sql.coalesce();\n\t\tlogSQL(\"CommonRetrieve\", query);\n\n\t\tPGresult *res = PQexec(dbConnection, query);\n\t\tdelete[] query;\n\t\tif (PQresultStatus(res) == PGRES_TUPLES_OK)\n\t\t{\n\t\t\tmapResultSet(res, resultSet);\n\t\t\tPQclear(res);\n\t\t\treturn true;\n\t\t}\n\t\tchar *SQLState = PQresultErrorField(res, PG_DIAG_SQLSTATE);\n\t\tif (SQLState && !strcmp(SQLState, \"22P02\"))\t// Conversion error\n\t\t{\n\t\t\traiseError(\"retrieve\", \"Unable to convert data to the required type\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\traiseError(\"retrieve\", PQerrorMessage(dbConnection));\n\t\t}\n\t\tPQclear(res);\n\t\treturn false;\n\t} catch (exception& e) {\n\t\traiseError(\"retrieve\", \"Internal error: %s\", e.what());\n\t}\n\treturn false;\n}\n\n/**\n * Perform a query against the readings table\n *\n */\nbool Connection::retrieveReadings(const string& condition, string& resultSet)\n{\n\tDocument document;  // Default template parameter uses UTF8 and MemoryPoolAllocator.\n\tSQLBuffer\tsql;\n\tSQLBuffer\tjsonConstraints;\t// Extra constraints to add to where clause\n\n\tconst string table = \"readings\";\n\n\ttry {\n\t\tif (condition.empty())\n\t\t{\n\t\t\tconst char *sql_cmd = R\"(\n\t\t\t\t\tSELECT\n\t\t\t\t\t\tid,\n\t\t\t\t\t\tasset_code,\n\t\t\t\t\t\treading,\n\t\t\t\t\t\tto_char(user_ts, ')\" F_DATEH24_US R\"(') as user_ts,\n\t\t\t\t\t\tto_char(ts, ')\" F_DATEH24_US R\"(') as ts\n\t\t\t\t\tFROM fledge.)\";\n\n\t\t\tsql.append(sql_cmd);\n\t\t\tsql.append(table);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (document.Parse(condition.c_str()).HasParseError())\n\t\t\t{\n\t\t\t\traiseError(\"retrieve\", \"Failed to parse JSON payload\");\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\t// Timebucket aggregates across all datapoints\n\t\t\tif (aggregateAll(document))\n\t\t\t{\n\t\t\t\treturn aggregateQuery(document, resultSet);\n\t\t\t}\n\n\t\t\tif (document.HasMember(\"aggregate\"))\n\t\t\t{\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tif (!jsonAggregates(document, document[\"aggregate\"], sql, jsonConstraints, true))\n\t\t\t\t{\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tsql.append(\" FROM fledge.\");\n\t\t\t}\n\t\t\telse if (document.HasMember(\"return\"))\n\t\t\t{\n\t\t\t\tint col = 0;\n\t\t\t\tValue& columns = document[\"return\"];\n\t\t\t\tif (! columns.IsArray())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", \"The property return must be an array\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tfor (Value::ConstValueIterator itr = columns.Begin(); itr != columns.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col)\n\t\t\t\t\t\tsql.append(\", \");\n\n\t\t\t\t\tif (!itr->IsObject())\t// Simple column name\n\t\t\t\t\t{\n\t\t\t\t\t\tif (strcmp(itr->GetString(), \"user_ts\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Display without the TZ expression but with microseconds\n\t\t\t\t\t\t\tsql.append(\"to_char(user_ts, '\" F_DATEH24_US \"') as user_ts\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (strcmp(itr->GetString(), \"ts\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Display without the TZ expression but with microseconds\n\t\t\t\t\t\t\tsql.append(\"to_char(ts, '\" F_DATEH24_US \"') as ts\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\tsql.append(itr->GetString());\n\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tif (itr->HasMember(\"column\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (! (*itr)[\"column\"].IsString())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\traiseError(\"retrieve\", \"column must be a string\");\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (itr->HasMember(\"format\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (! (*itr)[\"format\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\", \"format must be a string\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tsql.append(\"to_char(\");\n\t\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\t\tsql.append(\", '\");\n\t\t\t\t\t\t\t\tsql.append((*itr)[\"format\"].GetString());\n\t\t\t\t\t\t\t\tsql.append(\"')\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (itr->HasMember(\"timezone\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (!
(*itr)[\"timezone\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\", \"timezone must be a string\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\t\tsql.append(\" AT TIME ZONE '\");\n\t\t\t\t\t\t\t\tsql.append((*itr)[\"timezone\"].GetString());\n\t\t\t\t\t\t\t\tsql.append(\"' \");\n\n\t\t\t\t\t\t\t\t// Use aliasing to avoid a duplicate column name\n\t\t\t\t\t\t\t\tif (!itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (strcmp((*itr)[\"column\"].GetString(), \"user_ts\") == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t// Display without the TZ expression but with microseconds\n\t\t\t\t\t\t\t\t\tsql.append(\"to_char(user_ts, '\" F_DATEH24_US \"')\");\n\t\t\t\t\t\t\t\t\tif (! itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \\\"user_ts\\\" \");\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse if (strcmp((*itr)[\"column\"].GetString(), \"ts\") == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t// Display without the TZ expression but with microseconds\n\t\t\t\t\t\t\t\t\tsql.append(\"to_char(ts, '\" F_DATEH24_US \"')\");\n\t\t\t\t\t\t\t\t\tif (!
itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \\\"ts\\\" \");\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tsql.append(' ');\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (itr->HasMember(\"json\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tconst Value& json = (*itr)[\"json\"];\n\t\t\t\t\t\t\tif (! returnJson(json, sql, jsonConstraints))\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\", \"return object must have either a column or json property\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (itr->HasMember(\"alias\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\t\t\t\tsql.append((*itr)[\"alias\"].GetString());\n\t\t\t\t\t\t\tsql.append('\"');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t\tsql.append(\" FROM fledge.\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\n\t\t\t\tconst char *sql_cmd = R\"(\n\t\t\t\t\t\tid,\n\t\t\t\t\t\tasset_code,\n\t\t\t\t\t\treading,\n\t\t\t\t\t\tto_char(user_ts, ')\" F_DATEH24_US R\"(') as user_ts,\n\t\t\t\t\t\tto_char(ts, ')\" F_DATEH24_US R\"(') as ts\n\t\t\t\t\tFROM fledge.)\";\n\n\t\t\t\tsql.append(sql_cmd);\n\t\t\t}\n\t\t\tsql.append(table);\n\t\t\tif (document.HasMember(\"where\"))\n\t\t\t{\n\t\t\t\t// The outer check guarantees the where clause is present\n\t\t\t\tsql.append(\" WHERE \");\n\t\t\t\tvector<string>  asset_codes;\n\t\t\t\tif (!jsonWhereClause(document[\"where\"], sql, asset_codes))\n\t\t\t\t{\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (! jsonConstraints.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tsql.append(\" AND \");\n\t\t\t\t\tconst char *jsonBuf = jsonConstraints.coalesce();\n\t\t\t\t\tsql.append(jsonBuf);\n\t\t\t\t\tdelete[] jsonBuf;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (!jsonModifiers(document, sql))\n\t\t\t{\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\tsql.append(';');\n\n\t\tconst char *query = sql.coalesce();\n\t\tlogSQL(\"CommonRetrieve\", query);\n\n\t\tPGresult *res = PQexec(dbConnection, query);\n\t\tdelete[] query;\n\t\tif (PQresultStatus(res) == PGRES_TUPLES_OK)\n\t\t{\n\t\t\tmapResultSet(res, resultSet);\n\t\t\tPQclear(res);\n\t\t\treturn true;\n\t\t}\n\t\tchar *SQLState = PQresultErrorField(res, PG_DIAG_SQLSTATE);\n\t\tif (SQLState && !strcmp(SQLState, \"22P02\"))\t// Conversion error\n\t\t{\n\t\t\traiseError(\"retrieve\", \"Unable to convert data to the required type\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\traiseError(\"retrieve\", PQerrorMessage(dbConnection));\n\t\t}\n\t\tPQclear(res);\n\t\treturn false;\n\t} catch (exception& e) {\n\t\traiseError(\"retrieve\", \"Internal error: %s\", e.what());\n\t}\n\treturn false;\n}\n\n\n/**\n * Insert data into a table\n *\n * The JSON payload is either a single object of column/value pairs or a\n * document of the form { \"inserts\" : [ {...}, ... ] } holding several rows\n */\nint Connection::insert(const std::string& table, const std::string& data)\n{\nSQLBuffer\tsql;\nDocument\tdocument;\nostringstream convert;\nstd::size_t arr = data.find(\"inserts\");\n\n\t// Check first the 'inserts' property in JSON data\n\tbool stdInsert = (arr == std::string::npos || arr > 8);\n\t// If input data is not an array of inserts\n\t// create an array with one element\n\tif (stdInsert)\n\t{\n\t\tconvert << \"{ \\\"inserts\\\" : [ \";\n\t\tconvert << data;\n\t\tconvert << \" ] }\";\n\t}\n\n\tif (document.Parse(stdInsert ?
convert.str().c_str() : data.c_str()).HasParseError())\n\t{\n\t\traiseError(\"insert\", \"Failed to parse JSON payload\");\n\t\treturn -1;\n\t}\n\n\t// Get the array with row(s)\n\tValue &inserts = document[\"inserts\"];\n\tif (!inserts.IsArray())\n\t{\n\t\traiseError(\"insert\", \"Payload is missing the inserts array\");\n\t\treturn -1;\n\t}\n\n\t// Number of inserts\n\tint ins = 0;\n\n\t// Iterate through insert array\n\tfor (Value::ConstValueIterator iter = inserts.Begin();\n\t\t\t\t\titer != inserts.End();\n\t\t\t\t\t++iter)\n\t{\n\t\tif (!iter->IsObject())\n\t\t{\n\t\t\traiseError(\"insert\",\n\t\t\t\t   \"Each entry in the insert array must be an object\");\n\t\t\treturn -1;\n\t\t}\n\n\t\tint col = 0;\n\t\tSQLBuffer values;\n\n\t\tsql.append(\"INSERT INTO \");\n\t\tsql.append(table);\n\t\tsql.append(\" (\");\n\n\t\tfor (Value::ConstMemberIterator itr = (*iter).MemberBegin();\n\t\t\t\t\t\titr != (*iter).MemberEnd();\n\t\t\t\t\t\t++itr)\n\t\t{\n\t\t\tif (itr->value.IsNull())\n\t\t\t\tcontinue;\n\n\t\t\t// Append column name\n\t\t\tif (col)\n\t\t\t{\n\t\t\t\tsql.append(\", \");\n\t\t\t}\n\t\t\tstring field_name = double_quote_reserved_column_name(itr->name.GetString());\n\t\t\tsql.append(field_name);\n\n\t\t\t// Append column value\n\t\t\tif (col)\n\t\t\t{\n\t\t\t\tvalues.append(\", \");\n\t\t\t}\n\t\t\tif (itr->value.IsString())\n\t\t\t{\n\t\t\t\tconst char *str = itr->value.GetString();\n\t\t\t\t// Check if the string is a function\n\t\t\t\tif (isFunction(str))\n\t\t\t\t{\n\t\t\t\t\tvalues.append(str);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tvalues.append('\\'');\n\t\t\t\t\tvalues.append(escape(str));\n\t\t\t\t\tvalues.append('\\'');\n\t\t\t\t}\n\t\t\t}\n\t\t\telse if (itr->value.IsDouble())\n\t\t\t\tvalues.append(itr->value.GetDouble());\n\t\t\telse if (itr->value.IsUint64())\n\t\t\t\tvalues.append((unsigned long)itr->value.GetUint64());\n\t\t\telse if (itr->value.IsInt64())\n\t\t\t\tvalues.append((long)itr->value.GetInt64());\n\t\t\telse if (itr->value.IsObject())\n\t\t\t{\n\t\t\t\tStringBuffer buffer;\n\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\titr->value.Accept(writer);\n\t\t\t\tvalues.append('\\'');\n\t\t\t\tvalues.append(escape(buffer.GetString()));\n\t\t\t\tvalues.append('\\'');\n\t\t\t}\n\t\t\tcol++;\n\t\t}\n\t\tsql.append(\") VALUES (\");\n\t\tconst char *vals = values.coalesce();\n\t\tsql.append(vals);\n\t\tdelete[] vals;\n\t\tsql.append(\");\");\n\n\t\t// Increment row count\n\t\tins++;\n\t}\n\n\tconst char *query = sql.coalesce();\n\tlogSQL(\"CommonInsert\", query);\n\tPGresult *res = PQexec(dbConnection, query);\n\tdelete[] query;\n\tif (PQresultStatus(res) == PGRES_COMMAND_OK)\n\t{\n\t\t// Fetch the row count before the result set is cleared\n\t\tint rowsInserted = atoi(PQcmdTuples(res));\n\t\tPQclear(res);\n\t\treturn rowsInserted;\n\t}\n\traiseError(\"insert\", PQerrorMessage(dbConnection));\n\tPQclear(res);\n\treturn -1;\n}\n\n/**\n * Perform an update against a common table\n *\n * The JSON payload is either a single update object or a document of the\n * form { \"updates\" : [ { \"values\" : {...}, \"where\" : {...} }, ... ] }\n */\nint Connection::update(const string& table, const string& payload)\n{\n// Default template parameter uses UTF8 and MemoryPoolAllocator.\nDocument\tdocument;\nSQLBuffer\tsql;\n\n\tostringstream convert;\n\tbool \tallowZero = false;\n\n\tstd::size_t arr = payload.find(\"updates\");\n\tbool changeReqd = (arr == std::string::npos || arr > 8);\n\tif (changeReqd)\n\t{\n\t\tconvert << \"{ \\\"updates\\\" : [ \";\n\t\tconvert << payload;\n\t\tconvert << \" ] }\";\n\t}\n\n\tif (document.Parse(changeReqd ? convert.str().c_str() : payload.c_str()).HasParseError())\n\t{\n\t\traiseError(\"update\", \"Failed to parse JSON payload\");\n\t\treturn -1;\n\t}\n\telse\n\t{\n\t\tValue &updates = document[\"updates\"];\n\t\tif (!updates.IsArray())\n\t\t{\n\t\t\traiseError(\"update\", \"Payload is missing the updates array\");\n\t\t\treturn -1;\n\t\t}\n\n\t\tfor (Value::ConstValueIterator iter = updates.Begin(); iter != updates.End(); ++iter)\n\t\t{\n\t\t\tif (!iter->IsObject())\n\t\t\t{\n\t\t\t\traiseError(\"update\",\n\t\t\t\t\t   \"Each entry in the update array must be an object\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tsql.append(\"UPDATE \");\n\t\t\tsql.append(table);\n\t\t\tsql.append(\" SET \");\n\n\t\t\tint \tcol = 0;\n\t\t\tif ((*iter).HasMember(\"values\"))\n\t\t\t{\n\t\t\t\tconst Value& values = (*iter)[\"values\"];\n\t\t\t\tfor (Value::ConstMemberIterator itr = values.MemberBegin();\n\t\t\t\t\t\titr != values.MemberEnd(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col != 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\", \");\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append(itr->name.GetString());\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append(\" = \");\n\n\t\t\t\t\tif (itr->value.IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tconst char *str = itr->value.GetString();\n\t\t\t\t\t\t// Check if the string is a function\n\t\t\t\t\t\tif (isFunction(str))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(str);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t\tsql.append(escape(str));\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (itr->value.IsDouble())\n\t\t\t\t\t\tsql.append(itr->value.GetDouble());\n\t\t\t\t\telse if (itr->value.IsUint64())\n\t\t\t\t\t\tsql.append((unsigned long)itr->value.GetUint64());\n\t\t\t\t\telse if (itr->value.IsInt64())\n\t\t\t\t\t\tsql.append((long)itr->value.GetInt64());\n\t\t\t\t\telse if (itr->value.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\titr->value.Accept(writer);\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\tsql.append(escape(buffer.GetString()));\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t}\n\t\t\t\t\t// Handle JSON value null: \"item\" : null\n\t\t\t\t\telse if (itr->value.IsNull())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\"NULL\");\n\t\t\t\t\t}\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif ((*iter).HasMember(\"expressions\"))\n\t\t\t{\n\t\t\t\tconst Value& exprs = (*iter)[\"expressions\"];\n\t\t\t\tif (!exprs.IsArray())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"update\", \"The property expressions must be an array\");\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\tfor (Value::ConstValueIterator itr = exprs.Begin(); itr != exprs.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col != 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\", \");\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"expressions must be an array of objects\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember(\"column\"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"Missing column property in expressions array item\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember(\"operator\"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"Missing operator property in expressions array item\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember(\"value\"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"Missing value property in expressions array item\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append(\" = \");\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t\tsql.append((*itr)[\"operator\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t\tconst Value& value = (*itr)[\"value\"];\n\n\t\t\t\t\tif (value.IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tconst char *str = value.GetString();\n\t\t\t\t\t\t// Check if the string is a function\n\t\t\t\t\t\tif (isFunction(str))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(str);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t\tsql.append(escape(str));\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsDouble())\n\t\t\t\t\t\tsql.append(value.GetDouble());\n\t\t\t\t\telse if
(value.IsUint64())\n\t\t\t\t\t\tsql.append((unsigned long)value.GetUint64());\n\t\t\t\t\telse if (value.IsInt64())\n\t\t\t\t\t\tsql.append((long)value.GetInt64());\n\t\t\t\t\telse if (value.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\tvalue.Accept(writer);\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\tsql.append(escape(buffer.GetString()));\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t}\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif ((*iter).HasMember(\"json_properties\"))\n\t\t\t{\n\t\t\t\tconst Value& exprs = (*iter)[\"json_properties\"];\n\t\t\t\tif (!exprs.IsArray())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t   \"The property json_properties must be an array\");\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\tfor (Value::ConstValueIterator itr = exprs.Begin(); itr != exprs.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col != 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append( \", \");\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"json_properties must be an array of objects\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember(\"column\"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"Missing column property in json_properties array item\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember(\"path\"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"Missing path property in json_properties array item\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember(\"value\"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t  \"Missing value property in json_properties array item\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append(\" = 
jsonb_set(\");\n\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\tsql.append(\", '{\");\n\n\t\t\t\t\tconst Value& path = (*itr)[\"path\"];\n\t\t\t\t\tif (!path.IsArray())\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"The property path must be an array\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tint pathElement = 0;\n\t\t\t\t\tfor (Value::ConstValueIterator itr2 = path.Begin();\n\t\t\t\t\t\titr2 != path.End(); ++itr2)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (pathElement > 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(',');\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (itr2->IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(itr2->GetString());\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t\t   \"The elements of path must all be strings\");\n\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tpathElement++;\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(\"}', \");\n\t\t\t\t\tconst Value& value = (*itr)[\"value\"];\n\t\t \n\t\t\t\t\tif (value.IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tconst char *str = value.GetString();\n\t\t\t\t\t\t// Check if the string is a function\n\t\t\t\t\t\tif (isFunction(str))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(\"'\\\"\");\n\t\t\t\t\t\t\tsql.append(str);\n\t\t\t\t\t\t\tsql.append(\"\\\"'\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\t\tvalue.Accept(writer);\n\t\t\t\t\t\t\tsql.append(\"'\\\"\");\n\t\t\t\t\t\t\tsql.append(escape_double_quotes(escape(JSONunescape(buffer.GetString()))));\n\t\t\t\t\t\t\tsql.append(\"\\\"'\");\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsDouble())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(value.GetDouble());\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsUint64())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append((unsigned long)value.GetUint64());\n\t\t\t\t\t}\n\t\t\t\t\telse if 
(value.IsInt64())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append((long)value.GetInt64());\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\tvalue.Accept(writer);\n\n\t\t\t\t\t\tstd::string buffer_escaped = \"\\\"\";\n\t\t\t\t\t\tbuffer_escaped.append(escape_double_quotes(escape(buffer.GetString())));\n\t\t\t\t\t\tbuffer_escaped.append(\"\\\"\");\n\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\tsql.append(buffer_escaped);\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(\")\");\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (col == 0)\n\t\t\t{\n\t\t\t\traiseError(\"update\",\n\t\t\t\t\t   \"Missing values or expressions object in payload\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tif ((*iter).HasMember(\"condition\"))\n\t\t\t{\n\t\t\t\tsql.append(\" WHERE \");\n\t\t\t\tvector<string>  asset_codes;\n\t\t\t\tif (!jsonWhereClause((*iter)[\"condition\"], sql, asset_codes))\n\t\t\t\t{\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse if ((*iter).HasMember(\"where\"))\n\t\t\t{\n\t\t\t\tvector<string>  asset_codes;\n\t\t\t\tsql.append(\" WHERE \");\n\t\t\t\tif (!jsonWhereClause((*iter)[\"where\"], sql, asset_codes))\n\t\t\t\t{\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (iter->HasMember(\"modifier\") && (*iter)[\"modifier\"].IsArray())\n\t\t\t{\n\t\t\t\tconst Value& modifier = (*iter)[\"modifier\"];\n\t\t\t\tfor (Value::ConstValueIterator modifiers = modifier.Begin(); modifiers != modifier.End(); ++modifiers)\n\t\t\t\t{\n\t\t\t\t\tif (modifiers->IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tstring mod = modifiers->GetString();\n\t\t\t\t\t\tif (mod.compare(\"allowzero\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tallowZero = true;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tsql.append(';');\n\t\t}\n\t}\n\n\tconst char *query = sql.coalesce();\n\tlogSQL(\"CommonUpdate\", query);\n\tPGresult *res = PQexec(dbConnection, query);\n\tdelete[] query;\n\tif (PQresultStatus(res) == PGRES_COMMAND_OK)\n\t{\n\t\t// Fetch the row count before the result set is cleared\n\t\tint rowsUpdated = atoi(PQcmdTuples(res));\n\t\tif (rowsUpdated == 0 && allowZero == false)\n\t\t{\n\t\t\traiseError(\"update\", \"No rows were updated\");\n\t\t\tPQclear(res);\n\t\t\treturn -1;\n\t\t}\n\t\tPQclear(res);\n\t\treturn rowsUpdated;\n\t}\n\traiseError(\"update\", PQerrorMessage(dbConnection));\n\tPQclear(res);\n\treturn -1;\n}\n\n/**\n * Perform a delete against a common table\n *\n */\nint Connection::deleteRows(const string& table, const string& condition)\n{\nDocument document;  // Default template parameter uses UTF8 and MemoryPoolAllocator.\nSQLBuffer\tsql;\n\n\tsql.append(\"DELETE FROM \");\n\tsql.append(table);\n\tif (! condition.empty())\n\t{\n\t\tsql.append(\" WHERE \");\n\t\tif (document.Parse(condition.c_str()).HasParseError())\n\t\t{\n\t\t\traiseError(\"delete\", \"Failed to parse JSON payload\");\n\t\t\treturn -1;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (document.HasMember(\"where\"))\n\t\t\t{\n\t\t\t\tvector<string>  asset_codes;\n\t\t\t\tif (!jsonWhereClause(document[\"where\"], sql, asset_codes))\n\t\t\t\t{\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\traiseError(\"delete\", \"JSON does not contain where clause\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t}\n\tsql.append(';');\n\n\tconst char *query = sql.coalesce();\n\tlogSQL(\"CommonDelete\", query);\n\tPGresult *res = PQexec(dbConnection, query);\n\tdelete[] query;\n\tif (PQresultStatus(res) == PGRES_COMMAND_OK)\n\t{\n\t\t// Fetch the row count before the result set is cleared\n\t\tint rowsDeleted = atoi(PQcmdTuples(res));\n\t\tPQclear(res);\n\t\treturn rowsDeleted;\n\t}\n\traiseError(\"delete\", PQerrorMessage(dbConnection));\n\tPQclear(res);\n\treturn -1;\n}\n\n/**\n * Format a date to a fixed format with milliseconds, microseconds and\n * timezone expressed, examples :\n *\n *   case - formatted |2019-01-01 10:01:01.000000+00:00| date |2019-01-01 10:01:01|\n *   case - formatted |2019-02-01 10:02:01.000000+00:00| date |2019-02-01 10:02:01.0|\n *   case - formatted |2019-02-02 10:02:02.841000+00:00| 
date |2019-02-02 10:02:02.841|\n *   case - formatted |2019-02-03 10:02:03.123456+00:00| date |2019-02-03 10:02:03.123456|\n *   case - formatted |2019-03-01 10:03:01.100000+00:00| date |2019-03-01 10:03:01.1+00:00|\n *   case - formatted |2019-03-02 10:03:02.123000+00:00| date |2019-03-02 10:03:02.123+00:00|\n *   case - formatted |2019-03-03 10:03:03.123456+00:00| date |2019-03-03 10:03:03.123456+00:00|\n *   case - formatted |2019-03-04 10:03:04.123456+01:00| date |2019-03-04 10:03:04.123456+01:00|\n *   case - formatted |2019-03-05 10:03:05.123456-01:00| date |2019-03-05 10:03:05.123456-01:00|\n *   case - formatted |2019-03-04 10:03:04.123456+02:30| date |2019-03-04 10:03:04.123456+02:30|\n *   case - formatted |2019-03-05 10:03:05.123456-02:30| date |2019-03-05 10:03:05.123456-02:30|\n *\n * @return\tfalse if the date is invalid\n *\n */\nbool Connection::formatDate(char *formatted_date, size_t buffer_size, const char *date) {\n\n\tstruct tm tm = {0};\n\tchar *valid_date = nullptr;\n\n\t// Extract up to seconds\n\tvalid_date = strptime(date, \"%Y-%m-%d %H:%M:%S\", &tm);\n\n\tif (!
valid_date)\n\t{\n\t\treturn (false);\n\t}\n\n\tstrftime (formatted_date, buffer_size, \"%Y-%m-%d %H:%M:%S\", &tm);\n\n\t// Work out the microseconds from the fractional part of the seconds\n\tchar fractional[10] = {0};\n\tsscanf(date, \"%*d-%*d-%*d %*d:%*d:%*d.%[0-9]*\", fractional);\n\t// Truncate to max 6 digits\n\tfractional[6] = 0;\n\tint multiplier = 6 - (int)strlen(fractional);\n\tif (multiplier < 0)\n\t\tmultiplier = 0;\n\twhile (multiplier--)\n\t\tstrcat(fractional, \"0\");\n\n\tstrcat(formatted_date ,\".\");\n\tstrcat(formatted_date ,fractional);\n\n\t// Handles timezone\n\tchar timezone_hour[5] = {0};\n\tchar timezone_min[5] = {0};\n\tchar sign[2] = {0};\n\n\tsscanf(date, \"%*d-%*d-%*d %*d:%*d:%*d.%*d-%2[0-9]:%2[0-9]\", timezone_hour, timezone_min);\n\tif (timezone_hour[0] != 0)\n\t{\n\t\tstrcat(sign, \"-\");\n\t}\n\telse\n\t{\n\t\tmemset(timezone_hour, 0, sizeof(timezone_hour));\n\t\tmemset(timezone_min,  0, sizeof(timezone_min));\n\n\t\tsscanf(date, \"%*d-%*d-%*d %*d:%*d:%*d.%*d+%2[0-9]:%2[0-9]\", timezone_hour, timezone_min);\n\t\tif  (timezone_hour[0] != 0)\n\t\t{\n\t\t\tstrcat(sign, \"+\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// No timezone is expressed in the source date\n\t\t\t// the default UTC is added\n\t\t\tstrcat(formatted_date, \"+00:00\");\n\t\t}\n\t}\n\n\tif (sign[0] != 0)\n\t{\n\t\tif (timezone_hour[0] != 0)\n\t\t{\n\t\t\tstrcat(formatted_date, sign);\n\n\t\t\t// Pad with 0 if an hour having only 1 digit was provided\n\t\t\t// +1 -> +01\n\t\t\tif (strlen(timezone_hour) == 1)\n\t\t\t\tstrcat(formatted_date, \"0\");\n\n\t\t\tstrcat(formatted_date, timezone_hour);\n\t\t\tstrcat(formatted_date, \":\");\n\t\t}\n\n\t\tif (timezone_min[0] != 0)\n\t\t{\n\t\t\tstrcat(formatted_date, timezone_min);\n\n\t\t\t// Pad with 0 if minutes having only 1 digit were provided\n\t\t\t// 3 -> 30\n\t\t\tif (strlen(timezone_min) == 1)\n\t\t\t\tstrcat(formatted_date, \"0\");\n\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Minutes aren't expressed in the source 
date\n\t\t\tstrcat(formatted_date, \"00\");\n\t\t}\n\t}\n\n\treturn (true);\n}\n\n/**\n * Append a set of readings to the readings table\n */\nint Connection::appendReadings(const char *readings)\n{\nDocument \tdoc;\nSQLBuffer\tsql;\nint\t\trow = 0;\nbool \t\tadd_row = false;\n\n\tParseResult ok = doc.Parse(readings);\n\tif (!ok)\n\t{\n\t\traiseError(\"appendReadings\", GetParseError_En(doc.GetParseError()));\n\t\treturn -1;\n\t}\n\n\tif (!doc.HasMember(\"readings\"))\n\t{\n\t\traiseError(\"appendReadings\", \"Payload is missing a readings array\");\n\t\treturn -1;\n\t}\n\tValue &rdings = doc[\"readings\"];\n\tif (!rdings.IsArray())\n\t{\n\t\traiseError(\"appendReadings\", \"Payload is missing the readings array\");\n\t\treturn -1;\n\t}\n\n\tconst char *head = \"INSERT INTO fledge.readings ( user_ts, asset_code, reading ) VALUES \";\n\tsql.append(head);\n\n\tint count = 0;\n\tfor (Value::ConstValueIterator itr = rdings.Begin(); itr != rdings.End(); ++itr)\n\t{\n\t\tif (count == m_maxReadingRows)\n\t\t{\n\t\t\tsql.append(';');\n\n\t\t\tconst char *query = sql.coalesce();\n\t\t\tlogSQL(\"ReadingsAppend\", query);\n\t\t\tPGresult *res = PQexec(dbConnection, query);\n\t\t\tdelete[] query;\n\t\t\tif (PQresultStatus(res) != PGRES_COMMAND_OK)\n\t\t\t{\n\t\t\t\traiseError(\"appendReadings\", PQerrorMessage(dbConnection));\n\t\t\t\tPQclear(res);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tPQclear(res);\n\n\t\t\tsql.clear();\n\t\t\tsql.append(head);\n\t\t\tcount = 0;\n\t\t}\n\t\tif (!itr->IsObject())\n\t\t{\n\t\t\traiseError(\"appendReadings\",\n\t\t\t\t\t\"Each reading in the readings array must be an object\");\n\t\t\treturn -1;\n\t\t}\n\t\t// Guard the member accesses below against malformed readings\n\t\tif (!itr->HasMember(\"asset_code\") || !(*itr)[\"asset_code\"].IsString()\n\t\t    || !itr->HasMember(\"user_ts\") || !(*itr)[\"user_ts\"].IsString()\n\t\t    || !itr->HasMember(\"reading\"))\n\t\t{\n\t\t\traiseError(\"appendReadings\",\n\t\t\t\t\t\"Each reading must have asset_code, user_ts and reading elements\");\n\t\t\treturn -1;\n\t\t}\n\t\tadd_row = true;\n\t\tconst char *asset_code = (*itr)[\"asset_code\"].GetString();\n\t\tif (strlen(asset_code) == 0)\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"Postgres appendReadings - empty asset code value, row is ignored\");\n\t\t\tcontinue;\n\t\t}\n\n\t\tconst char *str = (*itr)[\"user_ts\"].GetString();\n\t\t// Check if the string 
is a function\n\t\tif (isFunction(str))\n\t\t{\n\t\t\tif (count)\n\t\t\t\tsql.append(\", (\");\n\t\t\telse\n\t\t\t\tsql.append('(');\n\n\t\t\tsql.append(str);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tchar formatted_date[LEN_BUFFER_DATE] = {0};\n\t\t\tif (! formatDate(formatted_date, sizeof(formatted_date), str) )\n\t\t\t{\n\t\t\t\traiseError(\"appendReadings\", \"Invalid date |%s|\", str);\n\t\t\t\tadd_row = false;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tif (count)\n\t\t\t\t{\n\t\t\t\t\tsql.append(\", (\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append('(');\n\t\t\t\t}\n\n\t\t\t\tsql.append('\\'');\n\t\t\t\tsql.append(formatted_date);\n\t\t\t\tsql.append('\\'');\n\t\t\t}\n\t\t}\n\n\t\tif (add_row)\n\t\t{\n\t\t\trow++;\n\t\t\tcount++;\n\n\t\t\t// Handles - asset_code\n\t\t\tsql.append(\",\\'\");\n\t\t\tstd::string escaped_asset(asset_code);\n\t\t\tstd::string target = \"'\";\n\t\t\tstd::string replacement = \"''\";\n\t\t\tStringReplaceAllEx(escaped_asset, target, replacement);\n\t\t\tsql.append(escaped_asset);\n\t\t\tsql.append(\"', '\");\n\n\t\t\t// Handles - reading\n\t\t\tStringBuffer buffer;\n\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t(*itr)[\"reading\"].Accept(writer);\n\t\t\tsql.append(escape(buffer.GetString()));\n\t\t\tsql.append(\"\\' \");\n\n\t\t\tsql.append(')');\n\t\t}\n\t}\n\n\tif (count == 0)\n\t{\n\t\t// No rows in final block\n\t\treturn 0;\n\t}\n\tsql.append(';');\n\n\tconst char *query = sql.coalesce();\n\n\tif (row > 0)\n\t{\n\t\tlogSQL(\"ReadingsAppend\", query);\n\t\tPGresult *res = PQexec(dbConnection, query);\n\t\tdelete[] query;\n\t\tif (PQresultStatus(res) == PGRES_COMMAND_OK)\n\t\t{\n\t\t\t// Capture the inserted row count before the result is freed;\n\t\t\t// calling PQcmdTuples() after PQclear() is a use-after-free\n\t\t\tint rowsInserted = atoi(PQcmdTuples(res));\n\t\t\tPQclear(res);\n\t\t\treturn rowsInserted;\n\t\t}\n\t\traiseError(\"appendReadings\", PQerrorMessage(dbConnection));\n\t\tPQclear(res);\n\t\treturn -1;\n\t}\n\telse\n\t{\n\t\tdelete[] query;\n\t\treturn 0;\n\t}\n}\n\n/**\n * Fetch a block of readings from the reading table\n */\nbool Connection::fetchReadings(unsigned long id, unsigned int 
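The asset_code handling above doubles embedded single quotes before splicing the value into the INSERT statement. A minimal standalone sketch of that escaping rule follows; the helper name `escapeSqlLiteral` is illustrative, not Fledge's `StringReplaceAllEx`:

```cpp
#include <string>

// Double every single quote so the value is safe inside a
// single-quoted SQL literal (the standard '' doubling rule).
std::string escapeSqlLiteral(const std::string& in)
{
	std::string out;
	out.reserve(in.size());
	for (char c : in)
	{
		if (c == '\'')
			out += "''";	// ' becomes ''
		else
			out += c;
	}
	return out;
}
```

Note this sketch only handles quote doubling; for production code, libpq's `PQescapeStringConn` or parameterised queries are the safer choice.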
blksize, std::string& resultSet)\n{\nchar\tsqlbuffer[200];\n\n\tsnprintf(sqlbuffer, sizeof(sqlbuffer),\n\t\t\"SELECT id, asset_code, reading, user_ts AT TIME ZONE 'UTC' as \\\"user_ts\\\", ts AT TIME ZONE 'UTC' as \\\"ts\\\" FROM fledge.readings WHERE id >= %lu ORDER BY id LIMIT %u;\", id, blksize);\n\n\tlogSQL(\"ReadingsFetch\", sqlbuffer);\n\tPGresult *res = PQexec(dbConnection, sqlbuffer);\n\tif (PQresultStatus(res) == PGRES_TUPLES_OK)\n\t{\n\t\tmapResultSet(res, resultSet);\n\t\tPQclear(res);\n\t\treturn true;\n\t}\n\traiseError(\"fetchReadings\", PQerrorMessage(dbConnection));\n\tPQclear(res);\n\treturn false;\n}\n\n\n\n/**\n * Purge readings from the reading table\n */\nunsigned int Connection::purgeReadings(unsigned long age, unsigned int flags, unsigned long sent, std::string& result)\n{\n\tunsigned long rowidLimit = 0, minrowidLimit = 0, maxrowidLimit = 0, rowidMin;\n\n\tstring sqlCommand;\n\tSQLBuffer sql;\n\tlong unsentPurged = 0;\n\tlong unsentRetained = 0;\n\tlong numReadings = 0;\n\tbool flag_retain;\n\tint blocks = 0;\n\tstruct timeval startTv{}, endTv{};\n\n\tconst char *logSection = \"ReadingsPurgeByAge\";\n\n\tLogger *logger = Logger::getLogger();\n\n\tflag_retain = false;\n\n\tif ( (flags & STORAGE_PURGE_RETAIN_ANY) || (flags & STORAGE_PURGE_RETAIN_ALL) )\n\t{\n\t\tflag_retain = true;\n\t}\n\tLogger::getLogger()->debug(\"%s - flags :%X: flag_retain :%d: sent :%lu:\", __FUNCTION__, flags, flag_retain, sent);\n\n\t// Prepare empty result\n\tresult = \"{ \\\"removed\\\" : 0, \";\n\tresult += \" \\\"unsentPurged\\\" : 0, \";\n\tresult += \" \\\"unsentRetained\\\" : 0, \";\n\tresult += \" \\\"readings\\\" : 0, \";\n\tresult += \" \\\"method\\\" : \\\"age\\\", \";\n\tresult += \" \\\"duration\\\" : 0 }\";\n\n\tlogger->info(\"Purge starting...\");\n\tgettimeofday(&startTv, NULL);\n\n\t/*\n\t * We fetch the current rowid and limit the purge process to work on just\n\t * those rows present in the database when the purge process started.\n\t * This prevents us 
looping in the purge process if new readings become\n\t * eligible for purging at a rate that is faster than we can purge them.\n\t */\n\trowidLimit = purgeOperation(\"SELECT max(id) from fledge.readings;\", logSection,\n\t\t\t\t\t\t   \"ReadingsPurgeByAge - phase 1, fetching maximum id\",\n\t\t\t\t\t\t   true);\n\tif (rowidLimit == -1) {\n\t\treturn 0;\n\t}\n\tmaxrowidLimit = rowidLimit;\n\n\tminrowidLimit = purgeOperation(\"SELECT min(id) from fledge.readings;\", logSection,\n\t\t\t\t\t\t   \"ReadingsPurgeByAge - phase 1, fetching minimum id\", true);\n\tif (minrowidLimit == -1) {\n\t\treturn 0;\n\t}\n\n\tif (age == 0)\n\t{\n\t\t/*\n\t\t * An age of 0 means remove the oldest hour's data.\n\t\t * So set age based on the data we have and continue.\n\t\t */\n\n\t\t// The epoch difference is in seconds, so divide by 3600 to get hours\n\t\tsqlCommand = \"SELECT round(extract(epoch FROM (now() - min(user_ts)))/3600) FROM fledge.readings WHERE id <= \" + to_string(rowidLimit) + \";\";\n\t\tage = purgeOperation(sqlCommand.c_str(), logSection,\n\t\t\t\t\t   \"ReadingsPurgeByAge - phase 1, calculating age\", true);\n\t\tif (age == -1) {\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\tLogger::getLogger()->debug(\"%s - rowidLimit :%lu: maxrowidLimit :%lu: minrowidLimit :%lu: age :%lu:\", __FUNCTION__, rowidLimit, maxrowidLimit, minrowidLimit, age);\n\n\t{\n\t\t/*\n\t\t * Refine rowid limit to just those rows older than age hours.\n\t\t */\n\t\tunsigned long l = minrowidLimit;\n\t\tunsigned long r;\n\t\tif (flag_retain) {\n\n\t\t\tr = min(sent, rowidLimit);\n\t\t} else {\n\t\t\tr = rowidLimit;\n\t\t}\n\n\t\tr = max(r, l);\n\t\tlogger->debug(\"%s - l=%lu, r=%lu, sent=%lu, rowidLimit=%lu, minrowidLimit=%lu, flags=%u\", __FUNCTION__, l, r, sent, rowidLimit, minrowidLimit, flags);\n\n\t\tif (l == r)\n\t\t{\n\t\t\tlogger->info(\"No data to purge: min_id == max_id == %lu\", minrowidLimit);\n\t\t\treturn 0;\n\t\t}\n\n\t\tunsigned long m = l;\n\n\t\twhile (l <= 
r)\n\t\t{\n\t\t\tunsigned long midRowId = 0;\n\t\t\tunsigned long prev_m = m;\n\t\t\tm = l + (r - l) / 2;\n\t\t\tif (prev_m == m) break;\n\n\t\t\t// e.g. select id from readings where rowid = 219867307 AND user_ts < datetime('now' , '-24 hours', 'utc');\n\t\t\tsqlCommand = \"SELECT id FROM fledge.readings WHERE id = \" + to_string (m) + \" AND user_ts < (now() - INTERVAL '\" + to_string (age) + \" hours');\";\n\t\t\tmidRowId = purgeOperation(sqlCommand.c_str() , logSection, \"ReadingsPurgeByAge - phase 2, fetching midRowId\", true);\n\t\t\tif (midRowId == -1) {\n\t\t\t\treturn 0;\n\t\t\t}\n\n\t\t\tif (midRowId == 0) // mid row doesn't satisfy given condition for user_ts, so discard right/later half and look in left/earlier half\n\t\t\t{\n\t\t\t\t// search in earlier/left half\n\t\t\t\tr = m - 1;\n\n\t\t\t\t// The m position should be skipped as midRowId is 0\n\t\t\t\tm = r;\n\t\t\t}\n\t\t\telse //if (l != m)\n\t\t\t{\n\t\t\t\t// search in later/right half\n\t\t\t\tl = m + 1;\n\t\t\t}\n\t\t}\n\n\n\t\trowidLimit = m;\n\n\t\tLogger::getLogger()->debug(\"%s - s1 rowidLimit :%lu: minrowidLimit :%lu: maxrowidLimit :%lu:\", __FUNCTION__, rowidLimit, minrowidLimit, maxrowidLimit);\n\n\t\tsqlCommand = \"SELECT max(id) FROM fledge.readings WHERE id <= \" + to_string (rowidLimit) + \" AND user_ts < (now() - INTERVAL '\" + to_string (age) + \" hours');\";\n\t\trowidLimit = purgeOperation(sqlCommand.c_str() , logSection, \"ReadingsPurgeByAge - phase 2, checking rowidLimit\", true);\n\n\t\tif (rowidLimit == -1) {\n\t\t\treturn 0;\n\t\t}\n\n\t\tLogger::getLogger()->debug(\"%s - s2 rowidLimit :%lu: minrowidLimit :%lu: maxrowidLimit :%lu:\", __FUNCTION__, rowidLimit, minrowidLimit, maxrowidLimit);\n\n\t\tif (minrowidLimit == rowidLimit)\n\t\t{\n\t\t\tlogger->info(\"No data to purge\");\n\t\t\treturn 0;\n\t\t}\n\n\t\trowidMin = minrowidLimit;\n\t\tLogger::getLogger()->debug(\"%s - m :%lu: rowidMin :%lu: \",__FUNCTION__ ,m,  rowidMin);\n\t}\n\n\t\n\tif ( ! 
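The loop above is a predicate binary search: readings are ordered by id, and ids are assumed to grow with `user_ts`, so it narrows in on the largest id whose timestamp is older than the cut-off without scanning the table. A self-contained sketch of the same search over an in-memory vector (the function name is illustrative; each probe stands in for the per-id SQL lookup in the purge loop):

```cpp
#include <cstddef>
#include <vector>

// Given timestamps sorted in ascending id order, return the index of the
// last element strictly older than cutoff, or -1 if none qualifies.
long lastIndexOlderThan(const std::vector<long>& ts, long cutoff)
{
	long l = 0, r = static_cast<long>(ts.size()) - 1, found = -1;
	while (l <= r)
	{
		long m = l + (r - l) / 2;
		if (ts[m] < cutoff)
		{
			found = m;	// m qualifies; look for a later one
			l = m + 1;
		}
		else
		{
			r = m - 1;	// m is too new; discard the right half
		}
	}
	return found;
}
```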
flag_retain )\n\t{\n\t\tunsigned long lastPurgedId;\n\n\t\tsqlCommand = \"SELECT id FROM fledge.readings WHERE id = \" + to_string(rowidLimit) + \";\";\n\t\tlastPurgedId = purgeOperation(sqlCommand.c_str(), logSection, \"ReadingsPurgeByAge - phase 2, fetching unsentPurged\", true);\n\t\tif (lastPurgedId == -1) {\n\t\t\treturn 0;\n\t\t}\n\n\t\tif (sent != 0 && lastPurgedId > sent)\t// Unsent readings will be purged\n\t\t{\n\t\t\t// Get number of unsent rows we are about to remove\n\t\t\tunsentPurged = lastPurgedId - sent;\n\t\t}\n\t\tLogger::getLogger()->debug(\"%s - lastPurgedId :%lu: unsentPurged :%ld:\", __FUNCTION__, lastPurgedId, unsentPurged);\n\t}\n\n\tunsigned int deletedRows = 0;\n\tunsigned int rowsAffected, totTime = 0, prevBlocks = 0, prevTotTime = 0;\n\n\tlogger->info(\"Purge about to delete readings # %lu to %lu\", rowidMin, rowidLimit);\n\twhile (rowidMin < rowidLimit)\n\t{\n\t\tblocks++;\n\t\trowidMin += purgeBlockSize;\n\t\tif (rowidMin > rowidLimit)\n\t\t{\n\t\t\trowidMin = rowidLimit;\n\t\t}\n\n\t\t{\n\t\t\tsqlCommand = \"DELETE FROM fledge.readings WHERE id <= \" + to_string(rowidMin) + \";\";\n\n\t\t\tSTART_TIME;\n\t\t\trowsAffected = purgeOperation(sqlCommand.c_str(), logSection, \"ReadingsPurgeByAge - phase 3, deleting readings\", false);\n\t\t\tEND_TIME;\n\n\t\t\tlogger->debug(\"%s - DELETE sql :%s: rowsAffected :%u:\", __FUNCTION__, sqlCommand.c_str(), rowsAffected);\n\n\t\t\tif (rowsAffected == (unsigned int)-1) {\t// Error sentinel from purgeOperation\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t\ttotTime += usecs;\n\n\t\t\tif (usecs > 150000)\n\t\t\t{\n\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(100 + usecs / 10000));\n\t\t\t}\n\t\t}\n\n\t\tdeletedRows += rowsAffected;\n\t\tlogger->debug(\"Purge delete block #%d with %u readings\", blocks, rowsAffected);\n\n\t\tif (blocks % RECALC_PURGE_BLOCK_SIZE_NUM_BLOCKS == 0)\n\t\t{\n\t\t\tint prevAvg = prevTotTime / (prevBlocks ? prevBlocks : 1);\n\t\t\tint currAvg = (totTime - prevTotTime) / (blocks - prevBlocks);\n\t\t\tint avg = ((prevAvg ? prevAvg : currAvg) * 5 + 
currAvg * 5) / 10;\t// 50% weight for the long-term average and 50% for the current average\n\t\t\tprevBlocks = blocks;\n\t\t\tprevTotTime = totTime;\n\t\t\tint deviation = abs(avg - TARGET_PURGE_BLOCK_DEL_TIME);\n\t\t\tlogger->debug(\"blocks=%d, totTime=%d usecs, prevAvg=%d usecs, currAvg=%d usecs, avg=%d usecs, TARGET_PURGE_BLOCK_DEL_TIME=%d usecs, deviation=%d usecs\",\n\t\t\t\t\t\t  blocks, totTime, prevAvg, currAvg, avg, TARGET_PURGE_BLOCK_DEL_TIME, deviation);\n\t\t\tif (deviation > TARGET_PURGE_BLOCK_DEL_TIME/10)\n\t\t\t{\n\t\t\t\tfloat ratio = (float)TARGET_PURGE_BLOCK_DEL_TIME / (float)avg;\n\t\t\t\tif (ratio > 2.0) ratio = 2.0;\n\t\t\t\tif (ratio < 0.5) ratio = 0.5;\n\t\t\t\tpurgeBlockSize = (float)purgeBlockSize * ratio;\n\t\t\t\tpurgeBlockSize = purgeBlockSize / PURGE_BLOCK_SZ_GRANULARITY * PURGE_BLOCK_SZ_GRANULARITY;\n\t\t\t\tif (purgeBlockSize < MIN_PURGE_DELETE_BLOCK_SIZE)\n\t\t\t\t\tpurgeBlockSize = MIN_PURGE_DELETE_BLOCK_SIZE;\n\t\t\t\tif (purgeBlockSize > MAX_PURGE_DELETE_BLOCK_SIZE)\n\t\t\t\t\tpurgeBlockSize = MAX_PURGE_DELETE_BLOCK_SIZE;\n\t\t\t\tlogger->debug(\"Changed purgeBlockSize to %d\", purgeBlockSize);\n\t\t\t}\n\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(100));\n\t\t}\n\t}\n\n\tlogger->debug(\"%s - sent=%lu, minrowidLimit=%lu, maxrowidLimit=%lu, rowidLimit=%lu deletedRows=%u\", __FUNCTION__, sent, minrowidLimit, maxrowidLimit, rowidLimit, deletedRows);\n\n\tunsentRetained = maxrowidLimit - rowidLimit;\n\n\tnumReadings = maxrowidLimit + 1 - minrowidLimit - deletedRows;\n\n\tif (sent == 0)\t// Special case when no north process is used\n\t{\n\t\tunsentPurged = deletedRows;\n\t}\n\n\tostringstream convert;\n\n\tunsigned long duration;\n\tgettimeofday(&endTv, NULL);\n\tduration = (1000000 * (endTv.tv_sec - startTv.tv_sec)) + endTv.tv_usec - startTv.tv_usec;\n\n\tconvert << \"{ \\\"removed\\\" : \"     
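The recalculation above blends the long-term and most recent per-block delete times 50/50, then rescales the block size toward a target delete time, clamping both the ratio and the resulting size. A standalone sketch of that adjustment (the function name and the constant values used in the assertions are illustrative, not Fledge's tuning):

```cpp
#include <cstdlib>

// Rescale blockSize so the average per-block delete time moves toward
// targetUsecs. The ratio is clamped to [0.5, 2.0], the result is rounded
// down to a granularity multiple and clamped to [minSize, maxSize].
int adjustBlockSize(int blockSize, int avgUsecs, int targetUsecs,
		    int granularity, int minSize, int maxSize)
{
	// Within 10% of the target: leave the block size unchanged
	if (std::abs(avgUsecs - targetUsecs) <= targetUsecs / 10)
		return blockSize;

	float ratio = (float)targetUsecs / (float)avgUsecs;
	if (ratio > 2.0f) ratio = 2.0f;
	if (ratio < 0.5f) ratio = 0.5f;

	blockSize = (int)((float)blockSize * ratio);
	blockSize = blockSize / granularity * granularity;
	if (blockSize < minSize) blockSize = minSize;
	if (blockSize > maxSize) blockSize = maxSize;
	return blockSize;
}
```

Clamping the ratio keeps a single anomalous block (e.g. one hit by a checkpoint stall) from halving or doubling the block size by more than one step.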
  << deletedRows    << \", \";\n\tconvert << \" \\\"unsentPurged\\\" : \"   << unsentPurged   << \", \";\n\tconvert << \" \\\"unsentRetained\\\" : \" << unsentRetained << \", \";\n\tconvert << \" \\\"readings\\\" : \"       << numReadings    << \", \";\n\tconvert << \" \\\"method\\\" : \\\"age\\\", \";\n\tconvert << \" \\\"duration\\\" : \"       << duration       << \" }\";\n\n\tresult = convert.str();\n\n\tduration = duration / 1000; // milliseconds\n\tlogger->info(\"Purge process complete in %d blocks in %lu milliseconds\", blocks, duration);\n\n\tLogger::getLogger()->debug(\"%s - age :%lu: flags :%X: flag_retain :%d: sent :%lu: result :%s:\", __FUNCTION__, age, flags, flag_retain, sent, result.c_str());\n\n\treturn deletedRows;\n}\n\n/**\n * Execute a SQL command for the purge task\n *\n * @param sql\t\tThe SQL command to execute\n * @param logSection\tLabel used when logging the query\n * @param phase\t\tDescription used when raising an error\n * @param retrieve\tTrue to fetch a single value, false to run a command\n * @return\t\tThe value fetched or the affected row count, (unsigned long)-1 on error\n */\nunsigned long Connection::purgeOperation(const char *sql, const char *logSection, const char *phase, bool retrieve)\n{\n\tSQLBuffer sqlBuffer;\n\tconst char *query;\n\tunsigned long value;\n\tPGresult *res;\n\tbool error;\n\tchar *PGValue {};\n\n\terror = false;\n\tvalue = 0;\n\n\tLogger::getLogger()->debug(\"%s - sql :%s: logSection :%s: phase :%s:\", __FUNCTION__, sql, logSection, phase);\n\n\tsqlBuffer.append(sql);\n\tquery = sqlBuffer.coalesce();\n\tlogSQL(logSection, query);\n\tres = PQexec(dbConnection, query);\n\tdelete[] query;\n\n\tif (retrieve) {\n\t\tif (PQresultStatus(res) == PGRES_TUPLES_OK) {\n\n\t\t\tPGValue = PQgetvalue(res, 0, 0);\n\t\t\tif (PGValue)\n\t\t\t\tvalue = (unsigned long)atol(PGValue);\n\n\t\t} else {\n\t\t\terror = true;\n\t\t}\n\t} else {\n\t\tif (PQresultStatus(res) == PGRES_COMMAND_OK) {\n\t\t\tvalue = (unsigned long)atoi(PQcmdTuples(res));\n\t\t} else {\n\t\t\terror = true;\n\t\t}\n\t}\n\n\tif (error)\n\t{\n\t\traiseError(phase, PQerrorMessage(dbConnection));\n\t\tvalue = (unsigned long)-1;\t// Error sentinel checked by callers as -1\n\t}\n\n\tPQclear(res);\n\n\treturn value;\n}\n\n/**\n * Purge readings from the reading table leaving a number of rows equal to the parameter rows\n */\nunsigned int 
Connection::purgeReadingsByRows(unsigned long rows,\n\t\t\t\t\tunsigned int flags,\n\t\t\t\t\tunsigned long sent,\n\t\t\t\t\tstd::string& result)\n{\n\tunsigned long deletedRows = 0, unsentPurged = 0, unsentRetained = 0, numReadings = 0;\n\tunsigned long limit = 0;\n\tunsigned long rowcount, minId, maxId;\n\tunsigned long rowsAffectedLastComand;\n\tunsigned long deletePoint;\n\tstruct timeval startTv, endTv;\n\n\tstring sqlCommand;\n\tbool flag_retain;\n\n\tconst char *logSection=\"ReadingsPurgeByRows\";\n\n\tLogger *logger = Logger::getLogger();\n\n\tgettimeofday(&startTv, NULL);\n\tflag_retain = false;\n\n\tif ( (flags & STORAGE_PURGE_RETAIN_ANY) || (flags & STORAGE_PURGE_RETAIN_ALL) )\n\t{\n\t\tflag_retain = true;\n\t}\n\tLogger::getLogger()->debug(\" %s - flags :%X: flag_retain :%s: sent :%ld:\", __FUNCTION__, flags, flag_retain ? \"true\" : \"false\", sent);\n\n\tlogger->info(\"Purge by Rows called\");\n\tif (flag_retain)\n\t{\n\t\tlimit = sent;\n\t\tlogger->info(\"Sent is %lu\", sent);\n\t}\n\tlogger->info(\"Purge by Rows called with flag_retain %X, rows %lu, limit %lu\", flag_retain, rows, limit);\n\n\n\trowcount = purgeOperation(\"SELECT count(*) from fledge.readings;\", logSection,\n\t\t\t\t\t\t\t  \"ReadingsPurgeByRows - phase 1, fetching row count\", true);\n\tif (rowcount == -1) {\n\t\treturn 0;\n\t}\n\n\tmaxId = purgeOperation(\"SELECT max(id) from fledge.readings;\", logSection,\n\t\t\t\t\t\t   \"ReadingsPurgeByRows - phase 1, fetching maximum id\",\n\t\t\t\t\t\t   true);\n\tif (maxId == -1) {\n\t\treturn 0;\n\t}\n\n\tnumReadings = rowcount;\n\trowsAffectedLastComand = 0;\n\tdeletedRows = 0;\n\n\tdo\n\t{\n\t\tif (rowcount <= rows)\n\t\t{\n\t\t\tlogger->info(\"Row count %d is less than required rows %d\", rowcount, rows);\n\t\t\tbreak;\n\t\t}\n\n\t\tminId = purgeOperation(\"SELECT min(id) from fledge.readings;\", logSection,\n\t\t\t\t\t\t\t   \"ReadingsPurgeByRows - phase 2, fetching minimum id\", true);\n\t\tif (minId == -1) {\n\t\t\treturn 
0;\n\t\t}\n\n\t\tdeletePoint = minId + min(100000UL, rows);\n\t\tif (maxId - deletePoint < rows || deletePoint > maxId)\n\t\t\tdeletePoint = maxId - rows;\n\n\t\t// Do not delete\n\t\tif (flag_retain) {\n\n\t\t\tif (limit < deletePoint)\n\t\t\t{\n\t\t\t\tdeletePoint = limit;\n\t\t\t}\n\t\t}\n\n\t\t{\n\t\t\tlogger->info(\"RowCount %lu, Max Id %lu, min Id %lu, delete point %lu\", rowcount, maxId, minId, deletePoint);\n\n\t\t\tsqlCommand = \"DELETE FROM fledge.readings WHERE id <= \" +  to_string(deletePoint);\n\t\t\trowsAffectedLastComand = purgeOperation(sqlCommand.c_str(), logSection, \"ReadingsPurgeByRows - phase 2, deleting readings\", false);\n\n\t\t\tif (rowsAffectedLastComand != -1)\t// No error occured\n\t\t\t{\n\t\t\t\tdeletedRows += rowsAffectedLastComand;\n\t\t\t\tnumReadings -= rowsAffectedLastComand;\n\t\t\t\trowcount    -= rowsAffectedLastComand;\n\n\t\t\t\tlogger->debug(\"Deleted %lu rows\", rowsAffectedLastComand);\n\t\t\t\tif (rowsAffectedLastComand == 0)\n\t\t\t\t{\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (limit == 0)\n\t\t\t\t{\n\t\t\t\t\t// We may purge unsent rows\n\t\t\t\t\tif (minId > sent)\n\t\t\t\t\t{\n\t\t\t\t\t\t// The entire block was unsent\n\t\t\t\t\t\tunsentPurged += rowsAffectedLastComand;\n\t\t\t\t\t}\n\t\t\t\t\telse if (minId < sent && deletePoint > sent)\n\t\t\t\t\t{\n\t\t\t\t\t\t// Only part of the block was unsent\n\t\t\t\t\t\tlong unsentBlock = rowsAffectedLastComand - (sent - minId);\n\t\t\t\t\t\tunsentPurged += unsentBlock;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} while (rowcount > rows);\n\n\tif (limit)\n\t{\n\t\tunsentRetained = numReadings - rows;\n\t}\n\n\tgettimeofday(&endTv, NULL);\n\tunsigned long duration = (1000000 * (endTv.tv_sec - startTv.tv_sec)) + endTv.tv_usec - startTv.tv_usec;\n\n\tostringstream convert;\n\n\tconvert << \"{ \\\"removed\\\" : \" << deletedRows << \", \";\n\tconvert << \" \\\"unsentPurged\\\" : \" << unsentPurged << \", \";\n\tconvert << \" \\\"unsentRetained\\\" : \" << unsentRetained << \", 
\";\n\tconvert << \" \\\"readings\\\" : \" << numReadings << \", \";\n\tconvert << \" \\\"method\\\" : \\\"rows\\\", \";\n\tconvert << \" \\\"duration\\\" : \" << duration << \" }\";\n\n\tresult = convert.str();\n\n\tLogger::getLogger()->debug(\"%s - Purge by Rows complete - rows :%lu: flag :%x: sent :%lu:  numReadings :%lu:  rowsAffected :%u:  result :%s:\", __FUNCTION__, rows, flags, sent, numReadings, rowsAffectedLastComand, result.c_str() );\n\n\treturn deletedRows;\n\n}\n\n/**\n * Map a SQL result set to a JSON document\n */\nvoid Connection::mapResultSet(PGresult *res, string& resultSet)\n{\nint nFields, i, j;\nDocument doc;\n\n\tdoc.SetObject();    // Create the JSON document\n\tDocument::AllocatorType& allocator = doc.GetAllocator();\n\tnFields = PQnfields(res); // No. of columns in resultset\n\tValue rows(kArrayType);   // Setup a rows array\n\tValue count;\n\tcount.SetInt(PQntuples(res)); // Create the count\n\tdoc.AddMember(\"count\", count, allocator);\n\n\t// Iterate over the rows\n\tfor (i = 0; i < PQntuples(res); i++)\n\t{\n\t\tValue row(kObjectType); // Create a row\n\t\tfor (j = 0; j < nFields; j++)\n\t\t{\n\t\t\t/**\n\t\t\t * TODO Improve handling of Oid's\n\t\t\t *\n\t\t\t * Current OID detection is based on\n\t\t\t *\n\t\t\t * SELECT oid, typname FROM pg_type;\n\t\t\t */\n\n\t\t\t/**\n\t\t\t * If PQgetvalue() is pointer to an empty string,\n\t\t\t * we assume that is a NULL and we return\n\t\t\t * the \"\" value no matter the OID value\n\t\t\t */\n\t\t\tif (!strlen(PQgetvalue(res, i, j)))\n\t\t\t{\n\t\t\t\tValue value(\"\", allocator);\n\t\t\t\tValue name(PQfname(res, j), allocator);\n\t\t\t\trow.AddMember(name, value, allocator);\n\n\t\t\t\t// Get the next column\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t/* PQgetvalue() has a value, check OID */\t\n\t\t\tOid oid = PQftype(res, j);\n\t\t\tswitch (oid)\n\t\t\t{\n\t\t\t\tcase 3802: // JSON type hard coded in this example: jsonb\n\t\t\t\t{\n\t\t\t\t\tDocument d;\n\t\t\t\t\tif (d.Parse(PQgetvalue(res, i, 
j)).HasParseError())\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"resultSet\", \"Failed to parse: %s\\n\", PQgetvalue(res, i, j));\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\tValue value(d, allocator);\n\t\t\t\t\tValue name(PQfname(res, j), allocator);\n\t\t\t\t\trow.AddMember(name, value, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tcase 23:    //INT 4 bytes: int4\n\t\t\t\t{\n\t\t\t\t\tint32_t intVal = atoi(PQgetvalue(res, i, j));\n\t\t\t\t\tValue name(PQfname(res, j), allocator);\n\t\t\t\t\trow.AddMember(name, intVal, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tcase 21:    //SMALLINT 2 bytes: int2\n\t\t\t\t{\n\t\t\t\t\tint16_t intVal = (short)atoi(PQgetvalue(res, i, j));\n\t\t\t\t\tValue name(PQfname(res, j), allocator);\n\t\t\t\t\trow.AddMember(name, intVal, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tcase 20:    //BIG INT 8 bytes: int8\n\t\t\t\t{\n\t\t\t\t\tint64_t intVal = atol(PQgetvalue(res, i, j));\n\t\t\t\t\tValue name(PQfname(res, j), allocator);\n\t\t\t\t\trow.AddMember(name, intVal, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tcase 700: // float4\n\t\t\t\tcase 701: // float8\n\t\t\t\tcase 710: // this OID doesn't exist\n\t\t\t\t{\n\t\t\t\t\tdouble dblVal = atof(PQgetvalue(res, i, j));\n\t\t\t\t\tValue name(PQfname(res, j), allocator);\n\t\t\t\t\trow.AddMember(name, dblVal, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tcase 1184: // Timestamp: timestamptz\n\t\t\t\t{\n\t\t\t\t\tchar *str = PQgetvalue(res, i, j);\n\t\t\t\t\tValue value(str, allocator);\n\t\t\t\t\tValue name(PQfname(res, j), allocator);\n\t\t\t\t\trow.AddMember(name, value, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tdefault:\n\t\t\t\t{\n\t\t\t\t\tchar *str = PQgetvalue(res, i, j);\n\t\t\t\t\tif (oid == 1042) // char(x) rather than varchar so trim white space\n\t\t\t\t\t{\n\t\t\t\t\t\tstr = trim(str);\n\t\t\t\t\t}\n\t\t\t\t\tValue value(str, allocator);\n\t\t\t\t\tValue name(PQfname(res, j), allocator);\n\t\t\t\t\trow.AddMember(name, value, 
allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\trows.PushBack(row, allocator);  // Add the row\n\t}\n\tdoc.AddMember(\"rows\", rows, allocator); // Add the rows to the JSON\n\t/* Write out the JSON document we created */\n\tStringBuffer buffer;\n\tWriter<StringBuffer> writer(buffer);\n\tdoc.Accept(writer);\n\tresultSet = buffer.GetString();\n}\n\n/**\n * Process the aggregate options and return the columns to be selected\n *\n * @param payload           To evaluate for the generation of the SQLcommands\n * @param aggregates        To evaluate for the generation of the SQL commands\n * @param jsonConstraint    To evaluate for the generation of the SQL commands\n * @param isTableReading    True if the handled table is the readings for which\n *                          a specific format should be applied\n * @param sql\t\t    The sql commands relates to payload, aggregates\n *                          and jsonConstraint\n *\n */\nbool Connection::jsonAggregates(const Value& payload,\n\t\t\t\tconst Value& aggregates,\n\t\t\t\tSQLBuffer& sql,\n\t\t\t\tSQLBuffer& jsonConstraint,\n\t\t\t\tbool isTableReading)\n{\n\tif (aggregates.IsObject())\n\t{\n\t\tif (! aggregates.HasMember(\"operation\"))\n\t\t{\n\t\t\traiseError(\"Select aggregation\", \"Missing property \\\"operation\\\"\");\n\t\t\treturn false;\n\t\t}\n\t\tif ((! aggregates.HasMember(\"column\")) && (! 
aggregates.HasMember(\"json\")))\n\t\t{\n\t\t\traiseError(\"Select aggregation\", \"Missing property \\\"column\\\" or \\\"json\\\"\");\n\t\t\treturn false;\n\t\t}\n\n\t\tstring column_name = aggregates[\"column\"].GetString();\n\n\t\tsql.append(aggregates[\"operation\"].GetString());\n\t\tsql.append('(');\n\t\tif (aggregates.HasMember(\"column\"))\n\t\t{\n\t\t\tif (strcmp(aggregates[\"operation\"].GetString(), \"count\") != 0)\n\t\t\t{\n\t\t\t\t// an operation different from the 'count' is requested\n\t\t\t\tif (isTableReading && (column_name.compare(\"user_ts\") == 0) )\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"to_char(user_ts, '\" F_DATEH24_US \"')\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append(column_name);\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// 'count' operation is requested\n\t\t\t\tsql.append(column_name);\n\t\t\t}\n\t\t}\n\t\telse if (aggregates.HasMember(\"json\"))\n\t\t{\n\t\t\tconst Value& json = aggregates[\"json\"];\n\t\t\tif (! json.IsObject())\n\t\t\t{\n\t\t\t\traiseError(\"Select aggregation\", \"The json property must be an object\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif (!json.HasMember(\"column\"))\n\t\t\t{\n\t\t\t\traiseError(\"retrieve\", \"The json property is missing a column property\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tsql.append('(');\n\t\t\tsql.append(\"\\\"\");\n\n\t\t\tsql.append(json[\"column\"].GetString());\n\t\t\tsql.append(\"\\\"\");\n\t\t\tsql.append(\"->\");\n\t\t\tif (!json.HasMember(\"properties\"))\n\t\t\t{\n\t\t\t\traiseError(\"retrieve\", \"The json property is missing a properties property\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tconst Value& jsonFields = json[\"properties\"];\n\t\t\tif (jsonFields.IsArray())\n\t\t\t{\n\t\t\t\tif (! 
jsonConstraint.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tjsonConstraint.append(\" AND \");\n\t\t\t\t}\n\t\t\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\t\t\tint field = 0;\n\t\t\t\tstring prev;\n\t\t\t\tfor (Value::ConstValueIterator itr = jsonFields.Begin(); itr != jsonFields.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (field)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\"->>\");\n\t\t\t\t\t}\n\t\t\t\t\tif (prev.length() > 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tjsonConstraint.append(\"->>'\");\n\t\t\t\t\t\tjsonConstraint.append(prev);\n\t\t\t\t\t\tjsonConstraint.append(\"'\");\n\t\t\t\t\t}\n\t\t\t\t\tprev = itr->GetString();\n\t\t\t\t\tfield++;\n\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\tsql.append(itr->GetString());\n\t\t\t\t\tsql.append('\\'');\n\t\t\t\t}\n\t\t\t\tjsonConstraint.append(\" ? '\");\n\t\t\t\tjsonConstraint.append(prev);\n\t\t\t\tjsonConstraint.append(\"'\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append('\\'');\n\t\t\t\tsql.append(jsonFields.GetString());\n\t\t\t\tsql.append('\\'');\n\t\t\t\tif (! jsonConstraint.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tjsonConstraint.append(\" AND \");\n\t\t\t\t}\n\t\t\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\t\t\tjsonConstraint.append(\" ? 
'\");\n\t\t\t\tjsonConstraint.append(jsonFields.GetString());\n\t\t\t\tjsonConstraint.append(\"'\");\n\t\t\t}\n\t\t\tsql.append(\")::float\");\n\t\t}\n\t\tsql.append(\") AS \\\"\");\n\t\tif (aggregates.HasMember(\"alias\"))\n\t\t{\n\t\t\tsql.append(aggregates[\"alias\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(aggregates[\"operation\"].GetString());\n\t\t\tsql.append('_');\n\t\t\tsql.append(aggregates[\"column\"].GetString());\n\t\t}\n\t\tsql.append(\"\\\"\");\n\t}\n\telse if (aggregates.IsArray())\n\t{\n\t\tint index = 0;\n\t\tfor (Value::ConstValueIterator itr = aggregates.Begin(); itr != aggregates.End(); ++itr)\n\t\t{\n\t\t\tif (!itr->IsObject())\n\t\t\t{\n\t\t\t\traiseError(\"select aggregation\",\n\t\t\t\t\t\t\"Each element in the aggregate array must be an object\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif ((! itr->HasMember(\"column\")) && (! itr->HasMember(\"json\")))\n\t\t\t{\n\t\t\t\traiseError(\"Select aggregation\", \"Missing property \\\"column\\\"\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif (! itr->HasMember(\"operation\"))\n\t\t\t{\n\t\t\t\traiseError(\"Select aggregation\", \"Missing property \\\"operation\\\"\");\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tif (index)\n\t\t\t\tsql.append(\", \");\n\t\t\tindex++;\n\t\t\tsql.append((*itr)[\"operation\"].GetString());\n\t\t\tsql.append('(');\n\t\t\tif (itr->HasMember(\"column\"))\n\t\t\t{\n\n\t\t\t\tstring column_name= (*itr)[\"column\"].GetString();\n\t\t\t\tif (isTableReading && (column_name.compare(\"user_ts\") == 0) )\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"to_char(user_ts, '\" F_DATEH24_US \"')\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append(column_name);\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t}\n\t\t\t}\n\t\t\telse if (itr->HasMember(\"json\"))\n\t\t\t{\n\t\t\t\tconst Value& json = (*itr)[\"json\"];\n\t\t\t\tif (! 
json.IsObject())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"Select aggregation\", \"The json property must be an object\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (!json.HasMember(\"column\"))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", \"The json property is missing a column property\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tsql.append(\"CASE WHEN jsonb_typeof(\");\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\tsql.append(json[\"column\"].GetString());\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\tif (!json.HasMember(\"properties\"))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", \"The json property is missing a properties property\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tconst Value& jsonFields = json[\"properties\"];\n\t\t\t\tif (! jsonConstraint.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tjsonConstraint.append(\" AND \");\n\t\t\t\t}\n\t\t\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\t\t\tif (jsonFields.IsArray())\n\t\t\t\t{\n\t\t\t\t\tstring prev;\n\t\t\t\t\tfor (Value::ConstValueIterator itr = jsonFields.Begin(); itr != jsonFields.End(); ++itr)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (prev.length() > 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tjsonConstraint.append(\"->>'\");\n\t\t\t\t\t\t\tjsonConstraint.append(prev);\n\t\t\t\t\t\t\tjsonConstraint.append(\"'\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\tprev = itr->GetString();\n\t\t\t\t\t\tsql.append(\"->>'\");\n\t\t\t\t\t\tsql.append(itr->GetString());\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t}\n\t\t\t\t\tjsonConstraint.append(\" ? 
'\");\n\t\t\t\t\tjsonConstraint.append(prev);\n\t\t\t\t\tjsonConstraint.append(\"'\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"->'\");\n\t\t\t\t\tsql.append(jsonFields.GetString());\n\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\tsql.append(\") != 'number' THEN 0 ELSE (\");\n\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append(json[\"column\"].GetString());\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append(\"->>'\");\n\t\t\t\t\tsql.append(jsonFields.GetString());\n\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\tjsonConstraint.append(\" ? '\");\n\t\t\t\t\tjsonConstraint.append(jsonFields.GetString());\n\t\t\t\t\tjsonConstraint.append(\"'\");\n\n\t\t\t\t\tsql.append(\")::float\");\n\n\t\t\t\t}\n\t\t\t}\n\t\t\tsql.append(\" END) AS \\\"\");\n\t\t\tif (itr->HasMember(\"alias\"))\n\t\t\t{\n\t\t\t\tsql.append((*itr)[\"alias\"].GetString());\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append((*itr)[\"operation\"].GetString());\n\t\t\t\tsql.append('_');\n\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t}\n\t\t\tsql.append(\"\\\"\");\n\t\t}\n\t}\n\tif (payload.HasMember(\"group\"))\n\t{\n\t\tsql.append(\", \");\n\t\tif (payload[\"group\"].IsObject())\n\t\t{\n\t\t\tconst Value& grp = payload[\"group\"];\n\t\t\tif (grp.HasMember(\"format\"))\n\t\t\t{\n\t\t\t\tsql.append(\"to_char(\");\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\tsql.append(grp[\"column\"].GetString());\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\tsql.append(\", '\");\n\t\t\t\tsql.append(grp[\"format\"].GetString());\n\t\t\t\tsql.append(\"')\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\tsql.append(grp[\"column\"].GetString());\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t}\n\t\t\tif (grp.HasMember(\"alias\"))\n\t\t\t{\n\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\tsql.append(grp[\"alias\"].GetString());\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\" AS 
\\\"\");\n\t\t\t\tsql.append(grp[\"column\"].GetString());\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Double quotes commented to allow a group by of the type : date(history_ts), key\n\t\t\t//sql.append(\"\\\"\");\n\t\t\tsql.append(payload[\"group\"].GetString());\n\t\t\t//sql.append(\"\\\"\");\n\t\t}\n\t}\n\tif (payload.HasMember(\"timebucket\"))\n\t{\n\t\tconst Value& tb = payload[\"timebucket\"];\n\t\tif (! tb.IsObject())\n\t\t{\n\t\t\traiseError(\"Select data\", \"The \\\"timebucket\\\" property must be an object\");\n\t\t\treturn false;\n\t\t}\n\t\tif (! tb.HasMember(\"timestamp\"))\n\t\t{\n\t\t\traiseError(\"Select data\", \"The \\\"timebucket\\\" object must have a timestamp property\");\n\t\t\treturn false;\n\t\t}\n\t\tif (tb.HasMember(\"format\"))\n\t\t{\n\t\t\tsql.append(\", to_char(to_timestamp(\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(\", to_timestamp(\");\n\t\t}\n\t\tif (tb.HasMember(\"size\"))\n\t\t{\n\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\tsql.append(\" * \");\n\t\t}\n\t\tsql.append(\"floor(extract(epoch from \");\n\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\tsql.append(\") / \");\n\t\tif (tb.HasMember(\"size\"))\n\t\t{\n\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(1);\n\t\t}\n\t\tsql.append(\"))\");\n\t\tif (tb.HasMember(\"format\"))\n\t\t{\n\t\t\tsql.append(\", '\");\n\t\t\tsql.append(tb[\"format\"].GetString());\n\t\t\tsql.append(\"')\");\n\t\t}\n\t\tsql.append(\" AS \\\"\");\n\t\tif (tb.HasMember(\"alias\"))\n\t\t{\n\t\t\tsql.append(tb[\"alias\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(\"timestamp\");\n\t\t}\n\t\tsql.append('\"');\n\t}\n\treturn true;\n}\n\n/**\n * Process the modifers for limit, skip, sort and group\n */\nbool Connection::jsonModifiers(const Value& payload, SQLBuffer& sql)\n{\n\tif (payload.HasMember(\"timebucket\") && payload.HasMember(\"sort\"))\n\t{\n\t\traiseError(\"query modifiers\", \"Sort and timebucket modifiers 
cannot be used in the same payload\");\n\t\treturn false;\n\t}\n\n\t// Count aggregate columns\n\tunsigned int nAggregates = 0;\n\tif (payload.HasMember(\"aggregate\") &&\n\t    payload[\"aggregate\"].IsArray())\n\t{\n\t\tnAggregates = payload[\"aggregate\"].Size();\n\t}\n\n\tstring groupColumn;\n\tif (payload.HasMember(\"group\"))\n\t{\n\t\tsql.append(\" GROUP BY \");\n\t\tif (payload[\"group\"].IsObject())\n\t\t{\n\t\t\tconst Value& grp = payload[\"group\"];\n\t\t\tif (grp.HasMember(\"format\"))\n\t\t\t{\n\t\t\t\tsql.append(\"to_char(\");\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\tsql.append(grp[\"column\"].GetString());\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\tsql.append(\", '\");\n\t\t\t\tsql.append(grp[\"format\"].GetString());\n\t\t\t\tsql.append(\"')\");\n\n\t\t\t\t// Get the column name in GROUP BY\n\t\t\t\tgroupColumn = grp[\"column\"].GetString();\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Double quotes commented to allow a group by of the type : date(history_ts), key\n\t\t\t//sql.append(\"\\\"\");\n\t\t\tsql.append(payload[\"group\"].GetString());\n\t\t\t//sql.append(\"\\\"\");\n\n\t\t\t// Get the column name in GROUP BY\n\t\t\tgroupColumn = payload[\"group\"].GetString();\n\t\t}\n\t}\n\n\tif (payload.HasMember(\"sort\"))\n\t{\n\t\tsql.append(\" ORDER BY \");\n\t\tconst Value& sortBy = payload[\"sort\"];\n\t\tif (sortBy.IsObject())\n\t\t{\n\t\t\tif (! 
sortBy.HasMember(\"column\"))\n\t\t\t{\n\t\t\t\traiseError(\"Select sort\", \"Missing property \\\"column\\\"\");\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\t// Check whether the column name in GROUP BY is the same\n\t\t\t// as the column name in ORDER BY\n\t\t\tif (!groupColumn.empty() &&\n\t\t\t    groupColumn.compare(sortBy[\"column\"].GetString()) == 0 &&\n\t\t\t    nAggregates)\n\t\t\t{\n\t\t\t\t// Note that the GROUP BY column is added as last one\n\t\t\t\t// in the column names for SELECT\n\t\t\t\t// The ORDER BY column name is now replaced by a column\n\t\t\t\t// number, without double quotes\n\t\t\t\t// The column number is nAggregates + 1\n\t\t\t\t// Example: SELECT MIN(id), MAX(id), AVG(id) ..\n\t\t\t\t// nAggregates value is 3\n\t\t\t\t// Final SQL statement is: SELECT ... ORDER BY 4\n\t\t\t\tsql.append(nAggregates + 1);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\tsql.append(sortBy[\"column\"].GetString());\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t}\n\t\t\tsql.append(' ');\n\t\t\tif (! sortBy.HasMember(\"direction\"))\n\t\t\t{\n\t\t\t\tsql.append(\"ASC\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(sortBy[\"direction\"].GetString());\n\t\t\t}\n\t\t}\n\t\telse if (sortBy.IsArray())\n\t\t{\n\t\t\tint index = 0;\n\t\t\tfor (Value::ConstValueIterator itr = sortBy.Begin(); itr != sortBy.End(); ++itr)\n\t\t\t{\n\t\t\t\tif (!itr->IsObject())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"Select sort\",\n\t\t\t\t\t\t\t\"Each element in the sort array must be an object\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (! itr->HasMember(\"column\"))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"Select sort\", \"Missing property \\\"column\\\"\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (index)\n\t\t\t\t\tsql.append(\", \");\n\t\t\t\tindex++;\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\tsql.append(' ');\n\t\t\t\tif (! 
itr->HasMember(\"direction\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"ASC\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append((*itr)[\"direction\"].GetString());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif (payload.HasMember(\"timebucket\"))\n\t{\n\t\tconst Value& tb = payload[\"timebucket\"];\n\t\tif (! tb.IsObject())\n\t\t{\n\t\t\traiseError(\"Select data\", \"The \\\"timebucket\\\" property must be an object\");\n\t\t\treturn false;\n\t\t}\n\t\tif (! tb.HasMember(\"timestamp\"))\n\t\t{\n\t\t\traiseError(\"Select data\", \"The \\\"timebucket\\\" object must have a timestamp property\");\n\t\t\treturn false;\n\t\t}\n\t\tif (payload.HasMember(\"group\"))\n\t\t{\n\t\t\tsql.append(\", \");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(\" GROUP BY \");\n\t\t}\n\t\tsql.append(\"floor(extract(epoch from \");\n\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\tsql.append(\") / \");\n\t\tif (tb.HasMember(\"size\"))\n\t\t{\n\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(1);\n\t\t}\n\t\tsql.append(\") ORDER BY \");\n\t\tsql.append(\"floor(extract(epoch from \");\n\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\tsql.append(\") / \");\n\t\tif (tb.HasMember(\"size\"))\n\t\t{\n\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(1);\n\t\t}\n\t\tsql.append(\") DESC\");\n\t}\n\n\tif (payload.HasMember(\"skip\"))\n\t{\n\t\tif (!payload[\"skip\"].IsInt())\n\t\t{\n\t\t\traiseError(\"skip\", \"Skip must be specified as an integer\");\n\t\t\treturn false;\n\t\t}\n\t\tsql.append(\" OFFSET \");\n\t\tsql.append(payload[\"skip\"].GetInt());\n\t}\n\n\tif (payload.HasMember(\"limit\"))\n\t{\n\t\tif (!payload[\"limit\"].IsInt())\n\t\t{\n\t\t\traiseError(\"limit\", \"Limit must be specified as an integer\");\n\t\t\treturn false;\n\t\t}\n\t\tsql.append(\" LIMIT \");\n\t\ttry {\n\t\t\tsql.append(payload[\"limit\"].GetInt());\n\t\t} catch (exception& e) {\n\t\t\traiseError(\"limit\", \"Bad value for limit parameter: %s\", 
e.what());\n\t\t\treturn false;\n\t\t}\n\t}\n\treturn true;\n}\n\n/**\n * Convert a JSON where clause into a PostgreSQL where clause\n *\n */\nbool Connection::jsonWhereClause(const Value& whereClause,\n\t\t\t\tSQLBuffer& sql,\n\t\t\t\tvector<string>  &asset_codes,\n\t\t\t\tbool convertLocaltime, // not in use\n\t\t\t\tconst string prefix)\n{\n\n\tif (!whereClause.IsObject())\n\t{\n\t\traiseError(\"where clause\", \"The \\\"where\\\" property must be a JSON object\");\n\t\treturn false;\n\t}\n\tif (!whereClause.HasMember(\"column\"))\n\t{\n\t\traiseError(\"where clause\", \"The \\\"where\\\" object is missing a \\\"column\\\" property\");\n\t\treturn false;\n\t}\n\tif (!whereClause.HasMember(\"condition\"))\n\t{\n\t\traiseError(\"where clause\", \"The \\\"where\\\" object is missing a \\\"condition\\\" property\");\n\t\treturn false;\n\t}\n\n\t// Handle WHERE 1 = 1, 0.55 = 0.55 etc\n\tstring whereColumnName = whereClause[\"column\"].GetString();\n\tchar* p;\n\tdouble converted = strtod(whereColumnName.c_str(), &p);\n\tif (*p)\n\t{\n\t\t// Double quote column name\n\t\tif (prefix.empty())\n\t\t{\n\t\t\tsql.append(\"\\\"\");\n\t\t}\n\n\t\t// Add prefix\n\t\tif (!prefix.empty())\n\t\t{\n\t\t\tsql.append(prefix);\n\t\t}\n\n\t\tsql.append(whereColumnName);\n\n\t\t// Double quote column name\n\t\tif (prefix.empty())\n\t\t{\n\t\t\tsql.append(\"\\\"\");\n\t\t}\n\t}\n\telse\n\t{\n\t\t// Use numeric value\n\t\tsql.append(whereColumnName);\n\t}\n\n\tsql.append(' ');\n\tstring cond = whereClause[\"condition\"].GetString();\n\n\tif (cond.compare(\"isnull\") == 0)\n\t{\n\t\tsql.append(\"isnull \");\n\t}\n\telse if (cond.compare(\"notnull\") == 0)\n\t{\n\t\tsql.append(\"notnull \");\n\t}\n\telse\n\t{\n\t\tif (!whereClause.HasMember(\"value\"))\n\t\t{\n\t\t\traiseError(\"where clause\", \"The \\\"where\\\" object is missing a \\\"value\\\" property\");\n\t\t\treturn false;\n\t\t}\n\t\tif (!cond.compare(\"older\"))\n\t\t{\n\t\t\tif 
(!whereClause[\"value\"].IsInt())\n\t\t\t{\n\t\t\t\traiseError(\"where clause\", \"The \\\"value\\\" of an \\\"older\\\" condition must be an integer\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tsql.append(\"< now() - INTERVAL '\");\n\t\t\tsql.append(whereClause[\"value\"].GetInt());\n\t\t\tsql.append(\" seconds'\");\n\t\t}\n\t\telse if (!cond.compare(\"newer\"))\n\t\t{\n\t\t\tif (!whereClause[\"value\"].IsInt())\n\t\t\t{\n\t\t\t\traiseError(\"where clause\", \"The \\\"value\\\" of a \\\"newer\\\" condition must be an integer\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tsql.append(\"> now() - INTERVAL '\");\n\t\t\tsql.append(whereClause[\"value\"].GetInt());\n\t\t\tsql.append(\" seconds'\");\n\t\t}\n\t\telse if (!cond.compare(\"in\") || !cond.compare(\"not in\"))\n\t\t{\n\t\t\t// Check we have a non empty array\n\t\t\tif (whereClause[\"value\"].IsArray() &&\n\t\t\t    whereClause[\"value\"].Size())\n\t\t\t{\n\t\t\t\tsql.append(cond);\n\t\t\t\tsql.append(\" ( \");\n\t\t\t\tint field = 0;\n\t\t\t\tfor (Value::ConstValueIterator itr = whereClause[\"value\"].Begin();\n\t\t\t\t\t\t\t\titr != whereClause[\"value\"].End();\n\t\t\t\t\t\t\t\t++itr)\n\t\t\t\t{\n\t\t\t\t\tif (field)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\", \");\n\t\t\t\t\t}\n\t\t\t\t\tfield++;\n\t\t\t\t\tif (itr->IsNumber())\n\t\t\t\t\t{\n\t\t\t\t\t\tif (itr->IsInt())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(itr->GetInt());\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (itr->IsInt64())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append((long)itr->GetInt64());\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(itr->GetDouble());\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (itr->IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\tsql.append(escape(itr->GetString()));\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tstring message(\"The \\\"value\\\" of a \\\"\" + \\\n\t\t\t\t\t\t\t\tcond + \\\n\t\t\t\t\t\t\t\t\"\\\" condition array element must be \" 
\\\n\t\t\t\t\t\t\t\t\"a string, integer or double.\");\n\t\t\t\t\t\traiseError(\"where clause\", message.c_str());\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tsql.append(\" )\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring message(\"The \\\"value\\\" of a \\\"\" + \\\n\t\t\t\t\t\tcond + \"\\\" condition must be an array \" \\\n\t\t\t\t\t\t\"and must not be empty.\");\n\t\t\t\traiseError(\"where clause\", message.c_str());\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(cond);\n\t\t\tsql.append(' ');\n\t\t\tif (whereClause[\"value\"].IsInt())\n\t\t\t{\n\t\t\t\tsql.append(whereClause[\"value\"].GetInt());\n\t\t\t} else if (whereClause[\"value\"].IsString())\n\t\t\t{\n\t\t\t\tsql.append('\\'');\n\t\t\t\tstring value = whereClause[\"value\"].GetString();\n\t\t\t\tsql.append(escape(value));\n\t\t\t\tsql.append('\\'');\n\n\t\t\t\t// Identify a specific operation to restrict the tables involved\n\t\t\t\tif (whereColumnName.compare(\"asset_code\") == 0)\n\t\t\t\t{\n\t\t\t\t\tif (cond.compare(\"=\") == 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tasset_codes.push_back(value);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif (whereClause.HasMember(\"and\"))\n\t{\n\t\tsql.append(\" AND \");\n\t\tvector<string>  asset_codes;\n\t\tif (!jsonWhereClause(whereClause[\"and\"], sql, asset_codes, false, prefix))\n\t\t{\n\t\t\treturn false;\n\t\t}\n\t}\n\tif (whereClause.HasMember(\"or\"))\n\t{\n\t\tvector<string>  asset_codes;\n\t\tsql.append(\" OR \");\n\t\tif (!jsonWhereClause(whereClause[\"or\"], sql, asset_codes, false, prefix))\n\t\t{\n\t\t\treturn false;\n\t\t}\n\t}\n\n\treturn true;\n}\n\nbool Connection::returnJson(const Value& json, SQLBuffer& sql, SQLBuffer& jsonConstraint)\n{\n\tif (! 
json.IsObject())\n\t{\n\t\traiseError(\"retrieve\", \"The json property must be an object\");\n\t\treturn false;\n\t}\n\tif (!json.HasMember(\"column\"))\n\t{\n\t\traiseError(\"retrieve\", \"The json property is missing a column property\");\n\t\treturn false;\n\t}\n\tsql.append(json[\"column\"].GetString());\n\tsql.append(\"->\");\n\tif (!json.HasMember(\"properties\"))\n\t{\n\t\traiseError(\"retrieve\", \"The json property is missing a properties property\");\n\t\treturn false;\n\t}\n\tconst Value& jsonFields = json[\"properties\"];\n\tif (jsonFields.IsArray())\n\t{\n\t\tif (! jsonConstraint.isEmpty())\n\t\t{\n\t\t\tjsonConstraint.append(\" AND \");\n\t\t}\n\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\tint field = 0;\n\t\tstring prev;\n\t\tfor (Value::ConstValueIterator itr = jsonFields.Begin(); itr != jsonFields.End(); ++itr)\n\t\t{\n\t\t\tif (field)\n\t\t\t{\n\t\t\t\tsql.append(\"->\");\n\t\t\t}\n\t\t\tif (prev.length())\n\t\t\t{\n\t\t\t\tjsonConstraint.append(\"->'\");\n\t\t\t\tjsonConstraint.append(prev);\n\t\t\t\tjsonConstraint.append('\\'');\n\t\t\t}\n\t\t\tfield++;\n\t\t\tsql.append('\\'');\n\t\t\tsql.append(itr->GetString());\n\t\t\tsql.append('\\'');\n\t\t\tprev = itr->GetString();\n\t\t}\n\t\tjsonConstraint.append(\" ? '\");\n\t\tjsonConstraint.append(prev);\n\t\tjsonConstraint.append(\"'\");\n\t}\n\telse\n\t{\n\t\tsql.append('\\'');\n\t\tsql.append(jsonFields.GetString());\n\t\tsql.append('\\'');\n\t\tif (! jsonConstraint.isEmpty())\n\t\t{\n\t\t\tjsonConstraint.append(\" AND \");\n\t\t}\n\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\tjsonConstraint.append(\" ? 
'\");\n\t\tjsonConstraint.append(jsonFields.GetString());\n\t\tjsonConstraint.append(\"'\");\n\t}\n\n\treturn true;\n}\n\n/**\n * Remove whitespace at both ends of a string\n */\nchar *Connection::trim(char *str)\n{\nchar *ptr;\n\n\twhile (*str && *str == ' ')\n\t\tstr++;\n\n\tptr = str + strlen(str) - 1;\n\twhile (ptr > str && *ptr == ' ')\n\t{\n\t\t*ptr = 0;\n\t\tptr--;\n\t}\n\treturn str;\n}\n\n/**\n * Raise an error to return from the plugin\n */\nvoid Connection::raiseError(const char *operation, const char *reason, ...)\n{\nConnectionManager *manager = ConnectionManager::getInstance();\nchar\ttmpbuf[512];\n\n\tva_list ap;\n\tva_start(ap, reason);\n\tvsnprintf(tmpbuf, sizeof(tmpbuf), reason, ap);\n\tva_end(ap);\n\tLogger::getLogger()->error(\"PostgreSQL storage plugin raising error: %s\", tmpbuf);\n\tmanager->setError(operation, tmpbuf, false);\n}\n\n/**\n * Return the size of a given table in bytes\n */\nlong Connection::tableSize(const string& table)\n{\nSQLBuffer buf;\n\n\tbuf.append(\"SELECT pg_total_relation_size(relid) FROM pg_catalog.pg_statio_user_tables WHERE relname = '\");\n\tbuf.append(table);\n\tbuf.append(\"'\");\n\tconst char *query = buf.coalesce();\n\tPGresult *res = PQexec(dbConnection, query);\n\tdelete[] query;\n\tif (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)\n\t{\n\t\tlong tSize = atol(PQgetvalue(res, 0, 0));\n\t\tPQclear(res);\n\t\treturn tSize;\n\t}\n\traiseError(\"tableSize\", PQerrorMessage(dbConnection));\n\tPQclear(res);\n\treturn -1;\n}\n\n/**\n  * Add double quotes for words that are reserved as a column name\n  * Sample : user to \"user\"\n  *\n  * @param column_name  Column name to be evaluated\n  * @return\t        Final name of the column\n  */\nconst string Connection::double_quote_reserved_column_name(const string &column_name)\n{\n\tstring final_column_name;\n\n\tif ( std::find(pg_column_reserved_words.begin(),\n\t\t       pg_column_reserved_words.end(),\n\t\t       column_name)\n\t     != 
pg_column_reserved_words.end()\n\t\t)\n\t{\n\t\tfinal_column_name = \"\\\"\" + column_name + \"\\\"\";\n\t}\n\telse\n\t{\n\t\tfinal_column_name = column_name;\n\t}\n\n\treturn(final_column_name);\n}\n\n/**\n  * Converts the input string quoting the double quotes : \"  to \\\"\n  *\n  * @param str   String to convert\n  * @return\tConverted string\n  */\nconst string Connection::escape_double_quotes(const string& str)\n{\n\tchar\t\t*buffer;\n\tconst char\t*p1;\n\tchar  \t\t*p2;\n\tstring\t\tnewString;\n\n\tif (str.find_first_of('\\\"') == string::npos)\n\t{\n\t\treturn str;\n\t}\n\n\t// Worst case every character is escaped; allow for the NUL terminator\n\tbuffer = (char *)malloc(str.length() * 2 + 1);\n\n\tp1 = str.c_str();\n\tp2 = buffer;\n\twhile (*p1)\n\t{\n\t\tif (*p1 == '\\\"')\n\t\t{\n\t\t\t*p2++ = '\\\\';\n\t\t\t*p2++ = '\\\"';\n\t\t\tp1++;\n\t\t}\n\t\telse if (*p1 == '\\\\' ) // Take care of previously escaped quotes\n\t\t{\n\t\t\t*p2++ = '\\\\';\n\t\t\t*p2++ = '\\\\';\n\t\t\tp1++;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t*p2++ = *p1++;\n\t\t}\n\t}\n\t*p2 = 0;\n\tnewString = string(buffer);\n\tfree(buffer);\n\treturn newString;\n}\n\n/**\n  * Escape single quotes in a string for use in a SQL statement\n  *\n  * @param str   String to convert\n  * @return\tConverted string\n  */\nconst string Connection::escape(const string& str)\n{\nchar\t\t*buffer;\nconst char\t*p1;\nchar\t\t*p2;\nstring\t\tnewString;\n\n\tif (str.find_first_of('\\'') == string::npos)\n\t{\n\t\treturn str;\n\t}\n\n\t// Worst case every character is doubled; allow for the NUL terminator\n\tbuffer = (char *)malloc(str.length() * 2 + 1);\n\n\tp1 = str.c_str();\n\tp2 = buffer;\n\twhile (*p1)\n\t{\n\t\tif (*p1 == '\\'')\n\t\t{\n\t\t\t*p2++ = '\\'';\n\t\t\t*p2++ = '\\'';\n\t\t\tp1++;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t*p2++ = *p1++;\n\t\t}\n\t}\n\t*p2 = 0;\n\tnewString = string(buffer);\n\tfree(buffer);\n\treturn newString;\n}\n\n/**\n * Optionally log SQL statement execution\n *\n * @param\ttag\tA string tag that says why the SQL is being executed\n * @param\tstmt\tThe SQL statement itself\n */\nvoid Connection::logSQL(const char *tag, const char *stmt)\n{\n\tif (m_logSQL)\n\t{\n\t\tLogger::getLogger()->info(\"%s: %s\", tag, 
stmt);\n\t}\n}\n\n/**\n * Create snapshot of a common table\n *\n * @param table\t\tThe table to snapshot\n * @param id\t\tThe snapshot id\n * @return\t\t-1 on error, >= 0 on success\n *\n * The newly created table name is:\n * $table_snap$id\n */\nint Connection::create_table_snapshot(const string& table, const string& id)\n{\n\tstring query = \"SELECT * INTO TABLE fledge.\";\n\tquery += table + \"_snap\" + id + \" FROM fledge.\" + table;\n\n\tlogSQL(\"CreateTableSnapshot\", query.c_str());\n\n\tPGresult *res = PQexec(dbConnection, query.c_str());\n\tif (PQresultStatus(res) == PGRES_COMMAND_OK)\n\t{\n\t\tPQclear(res);\n\t\treturn 1;\n\t}\n\n\traiseError(\"create_table_snapshot\", PQerrorMessage(dbConnection));\n\tPQclear(res);\n\treturn -1;\n}\n\n/**\n * Set the contents of a common table from a snapshot\n *\n * @param table\t\tThe table to fill\n * @param id\t\tThe snapshot id of the table\n * @return\t\t-1 on error, >= 0 on success\n *\n */\nint Connection::load_table_snapshot(const string& table, const string& id)\n{\n\tstring purgeQuery = \"DELETE FROM fledge.\" + table;\n\tstring query = \"START TRANSACTION; \" + purgeQuery;\n\tquery += \"; INSERT INTO fledge.\" + table;\n\tquery += \" SELECT * FROM fledge.\" + table + \"_snap\" + id;\n\tquery += \"; COMMIT;\";\n\n\tlogSQL(\"LoadTableSnapshot\", query.c_str());\n\n\tPGresult *res = PQexec(dbConnection, query.c_str());\n\tif (PQresultStatus(res) == PGRES_COMMAND_OK)\n\t{\n\t\tPQclear(res);\n\t\treturn 1;\n\t}\n\telse\n\t{\n\t\tPGresult *resRollback = PQexec(dbConnection, \"ROLLBACK;\");\n\t\tif (PQresultStatus(resRollback) != PGRES_COMMAND_OK)\n\t\t{\n\t\t\traiseError(\"rollback load_table_snapshot\",\n\t\t\t\t   PQerrorMessage(dbConnection));\n\t\t}\n\t\tPQclear(resRollback);\n\t}\n\n\traiseError(\"load_table_snapshot\", PQerrorMessage(dbConnection));\n\tPQclear(res);\n\treturn -1;\n}\n\n/**\n * Delete a snapshot of a common table\n *\n * @param table\t\tThe table to snapshot\n * @param id\t\tThe 
snapshot id\n * @return\t\t-1 on error, >= 0 on success\n */\nint Connection::delete_table_snapshot(const string& table, const string& id)\n{\n\tstring query = \"DROP TABLE fledge.\" + table + \"_snap\" + id;\n\n\tlogSQL(\"DeleteTableSnapshot\", query.c_str());\n\n\tPGresult *res = PQexec(dbConnection, query.c_str());\n\tif (PQresultStatus(res) == PGRES_COMMAND_OK)\n\t{\n\t\tPQclear(res);\n\t\treturn 1;\n\t}\n\n\traiseError(\"delete_table_snapshot\", PQerrorMessage(dbConnection));\n\tPQclear(res);\n\treturn -1;\n}\n\n/**\n * Get list of snapshots for a given common table\n *\n * @param table         The given table name\n * @param resultSet\tOutput data buffer\n * @return\t\tTrue on success, false on database errors\n */\nbool Connection::get_table_snapshots(const string& table,\n                                     string& resultSet)\n{\nSQLBuffer sql;\n\ttry\n\t{\n\t\tsql.append(\"SELECT REPLACE(table_name, '\");\n\t\tsql.append(table);\n\t\tsql.append(\"_snap', '') AS id FROM information_schema.tables \");\n\t\tsql.append(\"WHERE table_schema = 'fledge' AND table_name LIKE '\");\n\t\tsql.append(table);\n\t\tsql.append(\"_snap%';\");\n\n\t\tconst char *query = sql.coalesce();\n\t\tlogSQL(\"GetTableSnapshots\", query);\n\n\t\tPGresult *res = PQexec(dbConnection, query);\n\t\tdelete[] query;\n\t\tif (PQresultStatus(res) == PGRES_TUPLES_OK)\n\t\t{\n\t\t\tmapResultSet(res, resultSet);\n\t\t\tPQclear(res);\n\n\t\t\treturn true;\n\t\t}\n\t\tchar *SQLState = PQresultErrorField(res, PG_DIAG_SQLSTATE);\n\t\tif (SQLState && !strcmp(SQLState, \"22P02\")) // Conversion error\n\t\t{\n\t\t\traiseError(\"get_table_snapshots\", \"Unable to convert data to the required type\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\traiseError(\"get_table_snapshots\", PQerrorMessage(dbConnection));\n\t\t}\n\t\tPQclear(res);\n\t\treturn false;\n\t} catch (exception& e) {\n\t\traiseError(\"get_table_snapshots\", \"Internal error: %s\", e.what());\n\t}\n\treturn false;\n}\n\n/**\n * Check to see if the 
str is a function\n *\n * @param str   The string to check\n * @return true if the string contains a function call\n */\nbool Connection::isFunction(const char *str) const\n{\n\treturn strcmp(str, \"now()\") == 0;\n}\n\n/**\n * In the case of a join add the columns to select from for all the tables in\n * the join\n *\n * @param document\tThe query we are processing\n * @param sql\t\tThe SQLBuffer we are writing\n * @param level\t\tThe table number we are processing\n */\nbool Connection::selectColumns(const Value& document, SQLBuffer& sql, int level)\n{\nSQLBuffer\tjsonConstraints;\n\n\tstring tag = \"t\" + to_string(level) + \".\";\n\n\tif (document.HasMember(\"return\"))\n\t{\n\t\tint col = 0;\n\t\tconst Value& columns = document[\"return\"];\n\t\tif (! columns.IsArray())\n\t\t{\n\t\t\traiseError(\"retrieve\", \"The property return must be an array\");\n\t\t\treturn false;\n\t\t}\n\t\tif (document.HasMember(\"modifier\"))\n\t\t{\n\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\tsql.append(' ');\n\t\t}\n\t\tfor (Value::ConstValueIterator itr = columns.Begin(); itr != columns.End(); ++itr)\n\t\t{\n\t\t\tif (col)\n\t\t\t\tsql.append(\", \");\n\t\t\tif (!itr->IsObject())\t// Simple column name\n\t\t\t{\n\t\t\t\tsql.append(tag);\n\t\t\t\tsql.append(itr->GetString());\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tif (itr->HasMember(\"column\"))\n\t\t\t\t{\n\t\t\t\t\tif (! (*itr)[\"column\"].IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t   \"column must be a string\");\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t\tif (itr->HasMember(\"format\"))\n\t\t\t\t\t{\n\t\t\t\t\t\tif (! 
(*itr)[\"format\"].IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\", \"format must be a string\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tsql.append(\"to_char(\");\n\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\tsql.append(\", '\");\n\t\t\t\t\t\tsql.append((*itr)[\"format\"].GetString());\n\t\t\t\t\t\tsql.append(\"')\");\n\t\t\t\t\t}\n\t\t\t\t\telse if (itr->HasMember(\"timezone\"))\n\t\t\t\t\t{\n\t\t\t\t\t\tif (! (*itr)[\"timezone\"].IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\", \"timezone must be a string\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\t\tsql.append(\" AT TIME ZONE '\");\n\t\t\t\t\t\tsql.append((*itr)[\"timezone\"].GetString());\n\t\t\t\t\t\tsql.append(\"' \");\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(tag);\n\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\telse if (itr->HasMember(\"json\"))\n\t\t\t\t{\n\t\t\t\t\tconst Value& json = (*itr)[\"json\"];\n\t\t\t\t\tif (! 
returnJson(json, sql, jsonConstraints))\n\t\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t   \"return object must have either a column or json property\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\n\t\t\t\tif (itr->HasMember(\"alias\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\t\tsql.append((*itr)[\"alias\"].GetString());\n\t\t\t\t\tsql.append('\"');\n\t\t\t\t}\n\t\t\t}\n\t\t\tcol++;\n\t\t}\n\t}\n\telse\n\t{\n\t\tsql.append('*');\n\t\treturn true;\n\t}\n\tif (document.HasMember(\"join\"))\n\t{\n\t\tconst Value& join = document[\"join\"];\n\t\tif (join.HasMember(\"query\"))\n\t\t{\n\t\t\tconst Value& query = join[\"query\"];\n\t\t\tsql.append(\", \");\n\t\t\tif (!selectColumns(query, sql, ++level))\n\t\t\t{\n\t\t\t\traiseError(\"commonRetrieve\", \"Join failed to add select columns\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t}\n\treturn true;\n}\n\n\n/**\n * In the case of a join add the tables to select from for all the tables in\n * the join\n *\n * @param schema\tThe schema the tables belong to\n * @param document\tThe query we are processing\n * @param sql\t\tThe SQLBuffer we are writing\n * @param level\t\tThe table number we are processing\n */\nbool Connection::appendTables(const string& schema,\n\t\t\tconst Value& document,\n\t\t\tSQLBuffer& sql,\n\t\t\tint level)\n{\n\tstring tag = \"t\" + to_string(level);\n\tif (document.HasMember(\"join\"))\n\t{\n\t\tconst Value& join = document[\"join\"];\n\t\tif (join.HasMember(\"table\"))\n\t\t{\n\t\t\tconst Value& table = join[\"table\"];\n\t\t\tif (!table.HasMember(\"name\"))\n\t\t\t{\n\t\t\t\traiseError(\"commonRetrieve\", \"Joining table is missing a table name\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tconst Value& name = table[\"name\"];\n\t\t\tif (!name.IsString())\n\t\t\t{\n\t\t\t\traiseError(\"commonRetrieve\", \"Joining table name is not a string\");\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tsql.append(\", \");\n\t\t\tsql.append(schema);\n\t\t\t
sql.append('.');\n\t\t\tsql.append(name.GetString());\n\t\t\tsql.append(\" \");\n\t\t\tsql.append(tag);\n\t\t\tif (join.HasMember(\"query\"))\n\t\t\t{\n\t\t\t\tconst Value& query = join[\"query\"];\n\t\t\t\tappendTables(schema, query, sql, ++level);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\traiseError(\"commonRetrieve\", \"Join is missing a join query definition\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\traiseError(\"commonRetrieve\", \"Join is missing a table definition\");\n\t\t\treturn false;\n\t\t}\n\t}\n\treturn true;\n}\n\n/**\n * Recurse down and add the where clause and join terms for each\n * new table joined to the query\n *\n * @param query\tThe JSON query\n * @param sql\tThe SQLBuffer we are writing the data to\n * @param asset_codes   The asset codes\n * @param level\tThe nesting level of the joined table\n */\nbool Connection::processJoinQueryWhereClause(const Value& query,\n\t\t\t\t\t\tSQLBuffer& sql,\n\t\t\t\t\t\tstd::vector<std::string> &asset_codes,\n\t\t\t\t\t\tint level)\n{\n\tstring tag = \"t\" + to_string(level) + \".\";\n\tif (!jsonWhereClause(query[\"where\"], sql, asset_codes, false, tag))\n\t{\n\t\treturn false;\n\t}\n\n\tif (query.HasMember(\"join\"))\n\t{\n\t\t// Now add the join condition itself\n\t\tstring col0, col1;\n\t\tconst Value& join = query[\"join\"];\n\t\tif (join.HasMember(\"on\") && join[\"on\"].IsString())\n\t\t{\n\t\t\tcol0 = join[\"on\"].GetString();\n\t\t}\n\t\telse\n\t\t{\n\t\t\treturn false;\n\t\t}\n\t\tif (join.HasMember(\"table\"))\n\t\t{\n\t\t\tconst Value& table = join[\"table\"];\n\t\t\tif (table.HasMember(\"column\") && table[\"column\"].IsString())\n\t\t\t{\n\t\t\t\tcol1 = table[\"column\"].GetString();\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\traiseError(\"Joined query\", \"Missing join column in table\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\tsql.append(\" AND \");\n\t\tsql.append(tag);\n\t\tsql.append(col0);\n\t\tsql.append(\" = 
t\");\n\t\tsql.append(level + 1);\n\t\tsql.append(\".\");\n\t\tsql.append(col1);\n\t\tsql.append(\" \");\n\t\tif (join.HasMember(\"query\") && join[\"query\"].IsObject())\n\t\t{\n\t\t\tsql.append(\" AND \");\n\t\t\tconst Value& query = join[\"query\"];\n\t\t\tprocessJoinQueryWhereClause(query, sql, asset_codes, level + 1);\n\t\t}\n\t}\n\treturn true;\n}\n\n/**\n * Find existing payload schema from the DB fledge.service_schema table\n *\n * @param service\tThe string containing the service name\n * @param schema\tThe string containing the schema name\n * @param resultSet\tString containing the output of the SQL query executed\n * @return\t\tTrue on success, false on database errors\n */\n\nbool Connection::findSchemaFromDB(const std::string &service, const std::string &schema, std::string &resultSet)\n{\n\tSQLBuffer sql;\n\ttry\n\t{\n\t\tsql.append(\"select * from fledge.service_schema where service = '\");\n\t\tsql.append(service);\n\t\tsql.append(\"'\");\n\t\tsql.append(\" and name = '\");\n\t\tsql.append(schema);\n\t\tsql.append(\"';\");\n\t\tconst char *query = sql.coalesce();\n\t\tlogSQL(\"findSchemaFromDB\", query);\n\n\t\tPGresult *res = PQexec(dbConnection, query);\n\t\tdelete[] query;\n\t\tif (PQresultStatus(res) == PGRES_TUPLES_OK)\n\t\t{\n\t\t\tmapResultSet(res, resultSet);\n\t\t\tPQclear(res);\n\n\t\t\treturn true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tchar *SQLState = PQresultErrorField(res, PG_DIAG_SQLSTATE);\n\t\t\tif (SQLState && !strcmp(SQLState, \"22P02\")) // Conversion error\n\t\t\t{\n\t\t\t\traiseError(\"findSchemaFromDB\", \"Unable to convert data to the required type\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\traiseError(\"findSchemaFromDB\", PQerrorMessage(dbConnection));\n\t\t\t}\n\t\t\tPQclear(res);\n\t\t\treturn false;\n
  \t}\n\t}catch (exception e) {\n                \traiseError(\"findSchemaFromDB\", \"Internal error: %s\", e.what());\n        }\n         \n\treturn false;\n}\n\n/**\n * This function parses the fledge.service_schema table payload retrieved in \n * and outputs a set of data structures containg the information about the tables\n * and their columns and indexes\n *\n * @param[out] \tversion   version retrieved form payload  \n * @param[in]   res       output containing payload information\n * @param[out]  tableColumnMap map[tablename ---> set of columns]\n * @param[out]  tableIndexMap  map[tablename ---> indexes] where each index is a comma separated string of columns\n * @param[ouy]  schemaCreationRequest which is like this is first schema creation request or\n *              schema already exist in the DB\n * @return      true if parsing is successful else false \n */\n\nbool Connection::parseDatabaseStorageSchema(int &version,const std::string &res, \n\t\t std::unordered_map<std::string, std::unordered_set<columnRec, columnRecHasher, columnRecComparator> > &tableColumnMap,\n\t\t std::unordered_map<std::string, std::vector<std::string> > &tableIndexMap,\n\t\t bool &schemaCreationRequest)\n{\n\tDocument document;\n\n\tif (document.Parse(res.c_str()).HasParseError())\n        {\n       \t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d Failed to parse JSON payload (DB query response) %s at %d\",__FUNCTION__, __LINE__, GetParseError_En(document.GetParseError()), document.GetErrorOffset());\n\n\t        return false;\n        }\n\tif (!document.HasMember(\"count\"))\n\t{\n\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d count absent from database query response to fledge.service_schema\",__FUNCTION__, __LINE__);\n                return false;\n\t}\n\tint count = document[\"count\"].GetInt();\n\tif ( count == 0)\n\t{\n\t\tLogger::getLogger()->debug(\"%s:%d count = 0, returning from function parseDatabaseStorageSchema\", __FUNCTION__, 
__LINE__);\n\t\tschemaCreationRequest = true;\n\t\treturn true;\n\t}\n\tif (!document.HasMember(\"rows\"))\n\t{\n\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d rows absent from database query response to fledge.service_schema\", __FUNCTION__, __LINE__);\n\t\treturn false;\n\t}\n\telse\n\t{\n\t\tValue& rows = document[\"rows\"];\n\t\tif (!rows.IsArray())\n\t\t{\n\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d The property rows in database query response to fledge.service_schema must be an array\", __FUNCTION__, __LINE__);\n\t\t\treturn false;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (rows.Size() < 1)\n\t\t\t{\n\t\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d rows array from database query response to fledge.service_schema has size 0\", __FUNCTION__, __LINE__);\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\t// The above check ensures rows[0] can be accessed\n\t\t\tValue& firstRow = rows[0];\n\n\t\t\tif (!firstRow.HasMember(\"version\"))\n\t\t\t{\n\t\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d rows[0] in fledge.service_schema does not have version\", __FUNCTION__, __LINE__);\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tif (!firstRow[\"version\"].IsInt())\n\t\t\t{\n\t\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d extracting version in rows[0], expecting an int value here\", __FUNCTION__, __LINE__);\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tversion = firstRow[\"version\"].GetInt();\n\n\t\t\tif (!firstRow.HasMember(\"definition\"))\n\t\t\t{\n\t\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d rows[0] in fledge.service_schema does not have definition\", __FUNCTION__, __LINE__);\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif 
(!firstRow[\"definition\"].IsString())\n\t\t\t{\n\t\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d The property definition in rows[0] in fledge.service_schema must be a string\", __FUNCTION__, __LINE__);\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tstd::string defStr = firstRow[\"definition\"].GetString();\n\t\t\tif (defStr.empty())\n\t\t\t{\n\t\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d The rows[0][definition] in fledge.service_schema is empty\", __FUNCTION__, __LINE__);\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tDocument docDefStr;\n\t\t\tif (docDefStr.Parse(defStr.c_str()).HasParseError())\n\t\t\t{\n\t\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d Failed to parse JSON starting at definition in database query response to fledge.service_schema %s:%d\", __FUNCTION__, __LINE__, GetParseError_En(docDefStr.GetParseError()), docDefStr.GetErrorOffset());\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tif (!docDefStr.HasMember(\"tables\"))\n\t\t\t{\n\t\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d tables section not present in payload obtained from fledge.service_schema\", __FUNCTION__, __LINE__);\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tValue& tables = docDefStr[\"tables\"];\n\t\t\tif (!tables.IsArray())\n\t\t\t{\n\t\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d The tables section obtained from payload in fledge.service_schema must be an array\", __FUNCTION__, __LINE__);\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\t// Iterate over the tables list and prepare the data structures\n\t\t\tfor (rapidjson::SizeType i = 0; i < tables.Size(); i++)\n\t\t\t{\n\t\t\t\tif (!tables[i].HasMember(\"name\"))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d The tables[%d] section in payload in fledge.service_schema does not have name field\", __FUNCTION__, __LINE__, i);\n
      \treturn false;\n\t\t\t\t}\n\t\t\t\tif (!tables[i][\"name\"].IsString())\n                        \t{\n                                \traiseError(\"parseDatabaseStorageSchema\", \"%s:%d The property name in tables[%d] in fledge.service_schema must be a string\", __FUNCTION__, __LINE__, i);\n                                \treturn false;\n                        \t}\n                        \tstd::string name = tables[i][\"name\"].GetString();\n\n\t\t\t\tif (!tables[i].HasMember(\"columns\"))\n                                {\n                                        raiseError(\"parseDatabaseStorageSchema\", \"%s:%d The tables[%d] section in payload in fledge.service_schema does not have columns field\", __FUNCTION__, __LINE__, i);\n                                        return false;\n                                }\n\n                        \tValue& columns = tables[i][\"columns\"];\n\n\t\t\t\tstd::unordered_set<columnRec, columnRecHasher, columnRecComparator> columnSet;\n\t\t\t\tstd::vector<std::string> indexesVec;\n\n\t                        if (!columns.IsArray())\n                        \t{\n \t                       \t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d The property columns in table %s must be an array\", __FUNCTION__, __LINE__, name.c_str());\n                                \treturn false;\n                        \t}\n\n\t\t\t\tLogger::getLogger()->debug(\"%s:%d Extracting the columns of table name %s\", __FUNCTION__, __LINE__, name.c_str());\n\n                        \tfor (auto& v : columns.GetArray())\n                        \t{\n       \t                \t\tif (v.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tif (v.HasMember(\"column\"))\n                                                {\n                                                \tif (!v[\"column\"].IsString())\n                                                        {\n                                                        \tLogger::getLogger()->error(\"%s :%d, table 
%s, extracting column name, expecting a string value here\", __FUNCTION__, __LINE__, name.c_str());\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tcolumnRec c;\n\t\t\t\t\t\t\tc.column = v[\"column\"].GetString();\n\t\t\t\t\t\t\tif (c.column.empty())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d, table %s, column name empty, inconsistent DB\", __FUNCTION__, __LINE__, name.c_str());\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (v.HasMember(\"type\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (!v[\"type\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tLogger::getLogger()->error(\"%s:%d tablename %s, column = %s, extracting column type, expecting a string value here\", __FUNCTION__, __LINE__, name.c_str(), c.column.c_str());\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t// Read the type only after it has been validated as a string\n\t\t\t\t\t\t\t\t\tc.type = v[\"type\"].GetString();\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tif (v.HasMember(\"size\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (!v[\"size\"].IsInt())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tLogger::getLogger()->error(\"%s:%d, tableName = %s, column = %s, extracting column size, expecting an int value here\", __FUNCTION__, __LINE__, name.c_str(), c.column.c_str());\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t// Read the size only after it has been validated as an int\n\t\t\t\t\t\t\t\t\tc.sz = v[\"size\"].GetInt();\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tif (v.HasMember(\"key\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (!v[\"key\"].IsBool())\n
                    \t\t{\n\t\t                                                                Logger::getLogger()->error(\"%s:%d, tableName = %s, column = %s,extracting column key, expecting a bool value here\", __FUNCTION__, __LINE__, name.c_str(), c.column.c_str());\n                                                        \t\t}\n\t\t                                                        else\n                 \t\t                                        {\n                                                                \t\tif (v[\"key\"].GetBool())\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tc.key = true;\n\t\t\t\t\t\t\t\t\t\t}\n                                                        \t\t}\n\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\tcolumnSet.insert(c);\n\t\t\t\t\t\t\t}\n                                                }\n\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tLogger::getLogger()->debug(\"%s:%d Extracting the indexes of tables[%d]\", __FUNCTION__, __LINE__, i);\n\n\t\t\t\tif (!tables[i].HasMember(\"indexes\"))\n                                {\n                                        Logger::getLogger()->debug(\"%s:%d The tables[%d] section in payload in fledge.service_schema does not have indexes field\", __FUNCTION__, __LINE__, i);\n                                }\n\t\t\t\telse\n\t\t\t\t{\n\n\t\t\t\t\tValue& indexes = tables[i][\"indexes\"];\n\t\t\t\t\tif (!indexes.IsArray())\n                                \t{\n                                        \traiseError(\"parseDatabaseStorageSchema\", \"%s:%d The property indexes under tablename = %s must be an array\", __FUNCTION__, __LINE__, name.c_str());\n                                        \treturn false;\n                                \t}\n\n\n\t\t\t\t\tfor (auto& v : indexes.GetArray())\n                                \t{\n\t\t\t\t\t\tstd::vector<std::string> indexVec;\n\t\t\t\t\t\tstd::string s;\n                                        \tif (v.IsObject())\n                                        \t{\n     
\tif (v.HasMember(\"index\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (!v[\"index\"].IsArray())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d, tableName = %s, extracting index values, expecting an array here\", __FUNCTION__, __LINE__, name.c_str());\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tfor (auto& i : v[\"index\"].GetArray())\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tif (!i.IsString())\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\traiseError(\"parseDatabaseStorageSchema\", \"%s:%d, tableName = %s, extracting index, expecting a string here\", __FUNCTION__, __LINE__, name.c_str());\n\t\t\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\tindexVec.push_back(i.GetString());\n\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\tstd::sort(indexVec.begin(), indexVec.end());\n\t\t\t\t\t\t\t\t\tfor (size_t i = 0; i < indexVec.size(); ++i)\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\ts.append(indexVec[i]);\n\t\t\t\t\t\t\t\t\t\tif (i < indexVec.size() - 1) s.append(\",\");\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tindexesVec.push_back(s);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\ttableColumnMap[name] = columnSet;\n\t\t\t\ttableIndexMap[name] = 
indexesVec;\n\t\t\t}\n\t\t}\n\n\t}\n\n\treturn true;\n}\n/**\n * Create schema of tables\n *\n * @param payload   The payload containing information about the schema of\n *                  tables to create\n * @return -1 if the tables cannot be created successfully\n */\nint Connection::create_schema(const std::string &payload)\n{\n\tDocument document;\n\tstd::string schema;\n\tint version = 0;\n\tconst char *logSection = \"CreatingSchema\";\n\tunsigned long rowsAffectedLastCommand = 0;\n\tstd::unordered_map<std::string, std::unordered_set<columnRec, columnRecHasher, columnRecComparator> > columnMapFromDB;\n\tstd::unordered_map<std::string, std::vector<std::string> > indexMapFromDB;\n\tbool schemaCreationReq = false;\n\tstd::vector<sqlQuery> queries;\n\n\ttry\n\t{\n\t\tif (payload.empty())\n\t\t{\n\t\t\traiseError(\"create_schema\", \"%s:%d function's input parameter payload empty\", __FUNCTION__, __LINE__);\n\t\t\treturn -1;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (document.Parse(payload.c_str()).HasParseError())\n\t\t\t{\n\t\t\t\traiseError(\"create_schema\", \"%s:%d Failed to parse JSON payload %s:%d\", __FUNCTION__, __LINE__, GetParseError_En(document.GetParseError()), document.GetErrorOffset());\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tif (!document.HasMember(\"schema\"))\n\t\t\t{\n\t\t\t\traiseError(\"create_schema\", \"%s:%d schema absent from input parameter JSON payload\", __FUNCTION__, __LINE__);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tif (!document[\"schema\"].IsString())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"create_schema\", \"%s:%d The property schema in JSON payload must be a string\", __FUNCTION__, __LINE__);\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\tschema = 
document[\"schema\"].GetString();\n\n\t\t\t\tif (schema.empty())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"create_schema\", \"%s:%d schema obtained from payload is empty\", __FUNCTION__, __LINE__);\n                                        return -1;\n\t\t\t\t}\n\t\t\t\tLogger::getLogger()->debug(\"%s:%d schema obtained from payload = %s\", __FUNCTION__, __LINE__, schema.c_str());\n\n\t\t\t\tif (!document.HasMember(\"service\"))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"create_schema\", \"%s:%d service absent from payload for schema %s\", __FUNCTION__, __LINE__, schema.c_str());\n                                        return -1;\n\t\t\t\t}\n\t\t\t\tif (!document[\"service\"].IsString())\n                                {\n                                        raiseError(\"create_schema\", \"%s:%d The property service in JSON payload must be a string\", __FUNCTION__, __LINE__);\n                                        return -1;\n                                }\n\n\t\t\t\tstd::string service = document[\"service\"].GetString();\t\n\t\t\t\tif (service.empty())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"create_schema\", \"%s:%d empty service name for schema %s\", __FUNCTION__, __LINE__, schema.c_str());\n                                        return -1;\n\t\t\t\t}\n\t\t\t\tLogger::getLogger()->debug(\"%s:%d service obtained from payload = %s\", __FUNCTION__, __LINE__, service.c_str());\n\n\t\t\t\tif (!document.HasMember(\"version\"))\n                        \t{\n\t\t\t\t\traiseError(\"create_schema\", \"%s:%d version absent from payload for schema %s and service %s\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str());\n                                \treturn -1;\n                        \t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tif(!document[\"version\"].IsInt())\n                                        {\n\t                                        raiseError(\"create_schema\", \"%s %d version needs to be int for schema %s and service %s\", __FUNCTION__, __LINE__, schema.c_str(), 
service.c_str());\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\n\t\t\t\t\tversion = document[\"version\"].GetInt();\n\t\t\t\t\tLogger::getLogger()->debug(\"%s:%d version obtained from payload = %d\", __FUNCTION__, __LINE__, version);\n\t\t\t\t\tstd::string results;\n\t\t\t\t\tif (findSchemaFromDB(service, schema, results))\n\t\t\t\t\t{\n\t\t\t\t\t\tif (!parseDatabaseStorageSchema(version, results, columnMapFromDB, indexMapFromDB, schemaCreationReq))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s:%d error in parsing database storage schema for schema %s and service %s\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str());\n\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"create_schema\", \"%s:%d findSchemaFromDB returned false, error in database query execution for service %s, schema %s\", __FUNCTION__, __LINE__, service.c_str(), schema.c_str());\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\n\t\t\t\t\tstd::string queryToCreateSchema = \"create schema if not exists \" + schema + \";\";\n\t\t\t\t\trowsAffectedLastCommand = purgeOperation(queryToCreateSchema.c_str(), logSection, \"Create Schema if not exists \", false);\n\t\t\t\t\tif (rowsAffectedLastCommand == static_cast<unsigned long>(-1))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"create_schema\", \"%s:%d Error in creating schema %s in database, query executed = %s\", __FUNCTION__, __LINE__, schema.c_str(), queryToCreateSchema.c_str());\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (!document.HasMember(\"tables\"))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"create_schema\", \"%s:%d tables section absent from payload for schema %s and service %s\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str());\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->debug(\"%s:%d Extracting tables from payload for 
schema %s and service %s\", __FUNCTION__, __LINE__, schema.c_str() , service.c_str());\n\n\t\t\t\t\tValue& tables = document[\"tables\"];\n\t\t\t\t\tif (!tables.IsArray())\n                                \t{\n                                        \traiseError(\"create_schema\", \"%s:%d, Schema %s, Service %s, The property tables must be an array\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str());\n                                        \treturn -1;\n                                \t}\n\t\t\t\t\telse\n                          \t\t{\n\t\t\t\t\t\tstd::unordered_set<std::string> unSetTablesInSchemaRequest;\n\t\t\t\t\t\tstd::string sqlDropTables;\n\n\t\t\t\t\t\t// Iterate over all the table lists in the Schema Creation/Alter request\n                                \t\tfor (rapidjson::SizeType i = 0; i < tables.Size(); i++)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (!tables[i].HasMember(\"name\"))\n                                \t\t\t{\n\t\t\t                                        raiseError(\"create_schema\", \"%s:%d Schema %s, Service %s : The tables[%d] section in payload does not have name field\", __FUNCTION__, __LINE__,schema.c_str(), service.c_str(), i);\n                       \t\t\t\t                 return -1;\n                                \t\t\t}\n                                \t\t\tif (!tables[i][\"name\"].IsString())\n                                \t\t\t{\n                                        \t\t\traiseError(\"create_schema\", \"%s:%d , Schema %s, Service %s, The property name in tables[%d] must be a string\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), i);\n                                        \t\t\treturn -1;\n                                \t\t\t}\n                                \n\t\t\t\t\t\t\tstd::string name = tables[i][\"name\"].GetString();\n\n\t\t\t\t\t\t\tif (name.empty())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s:%d Schema %s, Service %s, The property name in tables[%d] is empty\", 
__FUNCTION__, __LINE__, schema.c_str(), service.c_str(), i);\n\t\t\t\t\t\t\t\treturn -1;\t\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tLogger::getLogger()->debug(\"%s:%d Extracting columns for schema %s, service %s, table name %s \", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str());\n\n\t\t\t\t\t\t\tunSetTablesInSchemaRequest.insert(name);\n\n\t\t\t\t\t\t\tif (!tables[i].HasMember(\"columns\"))\n                                                        {\n                                                                raiseError(\"create_schema\", \"%s:%d The tables section does not have columns field\", __FUNCTION__, __LINE__);\n                                                                return -1;\n                                                        }\n\t\t\t\t\t\t\tValue& columns = tables[i][\"columns\"];\n\t\t\t\t\t\t\tif (!columns.IsArray())\n                                                        {\n                                                                raiseError(\"create_schema\", \"%s:%d The property columns must be an array\", __FUNCTION__, __LINE__);\n                                                                return -1;\n                                                        }\n\n\t\t\t\t\t\t\tstd::vector<std::string> indexesMatrixFromReq;\n\t\t\t\t\t\t\tstd::unordered_set<columnRec, columnRecHasher, columnRecComparator> colsPerTableInReq;\n\t\t\t\t\t\t\tbool alterTable = false;\n\t\t\t\t\t\t\tstd::string sql, sqlIdx;\n\n\t\t\t\t\t\t\t// if this is schema creation request  or \n\t\t\t\t\t\t\t// this table does not exist in db, then create it \n\t\t\t\t\t\t\t// else alter the table\n\t\t\t\t\t\t\tif (schemaCreationReq || (columnMapFromDB.find(name) == columnMapFromDB.end()))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tsql = \"create table \" + schema + \".\" + name + \" (\" ;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tsql = \"alter table \" + schema + \".\" + name + \" \" ;\n\t\t\t\t\t\t\t\talterTable = 
true;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\n\t\t\t\t\t\t\t// Iterate over the columns array\n\t\t\t\t\t\t\t// For each column, find name, type, size, primary key or not\n\t\t\t\t\t\t\t// and store in colsPerTableInReq\n\t\t\t\t\t\t\tfor (auto& v : columns.GetArray())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (v.IsObject())\n\t\t\t\t                                {\n\t\t\t\t\t\t\t\t\tcolumnRec c;\n\t\t\t\t\t\t\t\t\tif (v.HasMember(\"column\"))\n                                        \t\t\t\t{\n                                                \t\t\t\tif (!v[\"column\"].IsString())\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s %d Schema: %s, Service: %s ,table name %s , extracting column name, expecting a string value here\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str());\n\t\t\t\t\t\t\t\t\t\t\treturn -1; \n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t\t{\n                                                                \t\t\tc.column = v[\"column\"].GetString(); \n\t\t\t\t\t\t\t\t\t\t\tif (c.column.empty())\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s %d Schema: %s, Service: %s ,table name %s, extracting column, found empty value for column\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str() , name.c_str());\n\t\t\t\t\t\t\t\t\t\t\t\treturn -1;\t\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\tif (v.HasMember(\"type\"))\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tif (!v[\"type\"].IsString())\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s:%d Schema:%s, Service:%s, tableName : %s , extracting type, expecting a string value here\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str());\n\t\t\t\t\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tc.type = 
v[\"type\"].GetString();\n\t\t\t\t\t\t\t\t\t\t\tif (c.type == \"double\") c.type = \"real\";\n\t\t\t\t\t\t\t\t\t\t\tif (!checkValidDataType(c.type))\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s:%d Schema:%s, Service:%s, tableName : %s , type %s extracted is not a valid data type\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str(), c.type.c_str());\n\t\t\t\t\t\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\tif (v.HasMember(\"size\"))\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tif(!v[\"size\"].IsInt())\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s %d Schema:%s, Service:%s, tableName:%s ,extracting size, expecting an int value here\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str());\n\t\t\t\t\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tc.sz = v[\"size\"].GetInt();\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\tif (v.HasMember(\"key\"))\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tif(!v[\"key\"].IsBool())\n                                                                                {\n\t\t\t\t\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s %d Schema:%s, Service:%s, tableName:%s, extracting key, expecting a bool value here\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str());\n\t\t\t\t\t\t\t\t\t\t\treturn -1;\n\n                                                                                }\n                                                                                else\n                                                                                {\n                                                                                         c.key = v[\"key\"].GetBool();\n                                                                                }\n                  
                                                      }\n\n\t\t\t\t\t\t\t\t\tcolsPerTableInReq.insert(c);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t// Iterate over all the indexes per table and store in indexesMatrixFromReq\n\n\t\t\t\t\t\t\tif (!tables[i].HasMember(\"indexes\"))\n                                                        {\n\t\t\t\t\t\t\t\t//Indexes are optional,if absent, will not trigger an exit from function\n                                                                Logger::getLogger()->debug(\"%s:%d Schema:%s, Service:%s, tableName:%s does not have indexes field\", __FUNCTION__, __LINE__ ,schema.c_str(), service.c_str(), name.c_str());\n                                                        }\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\n\t\t\t\t\t\t\t\tValue& idx = tables[i][\"indexes\"];\n\t\t\t\t\t\t\t\tif (!idx.IsArray())\n                                                        \t{\n\t\t\t\t\t\t\t\t\t// make sure if indexes are present, their type in JSON is valid\n                                                                \traiseError(\"create_schema\", \"%s:%d Schema:%s, Service:%s, tableName:%s The property indexes must be an array\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str());\n\t\t\t\t\t\t\t\t\treturn -1;\n                                                        \t}\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tLogger::getLogger()->debug(\"%s:%d Extracting indexes for Schema:%s, Service:%s, tableName: %s\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str());\n\n                        \t\t\t\t\t\tfor (auto& v : idx.GetArray())\n                        \t\t\t\t\t\t{\n                                        \t\t\t\t\tstd::vector<std::string> indexVec;\n\t\t\t\t\t\t\t\t\t\tstd::string s;\t\n\t\t\t\t                                        \tif (v.IsObject())\n                                        \t\t\t\t\t{\n\t\t\t\t                                                
\tif (v.HasMember(\"index\"))\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\tif (!v[\"index\"].IsArray())\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s:%d Schema:%s, Service:%s, tableName:%s, extracting index values, expecting an array here\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str());\n\t\t\t\t\t\t\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t// keep the cols in indexes as a comma separated list of sorted columns\n\t\t\t\t\t\t\t\t\t\t\t\t\tfor (auto& i : v[\"index\"].GetArray())\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t// Guard against non-string index entries before calling GetString()\n\t\t\t\t\t\t\t\t\t\t\t\t\t\tif (!i.IsString())\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s:%d Schema:%s, Service:%s, tableName:%s, extracting index, expecting a string here\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str());\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\tindexVec.push_back(i.GetString());\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\t\t\t\t\tstd::sort(indexVec.begin(), indexVec.end());\n\t\t\t\t\t\t\t\t\t\t\t\t\tfor (size_t i = 0; i < indexVec.size(); ++i)\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\ts.append(indexVec[i]);\n\t\t\t\t\t\t\t\t\t\t\t\t\t\tif (i < indexVec.size() - 1)\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\ts.append(\",\");\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n
\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\tindexesMatrixFromReq.push_back(s);\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t// Traverse through the columns list found in DB for this table\n\t\t\t\t\t\t\t// and create/alter/delete the columns list\n\t\t\t\t\t\t\t//\n\n\t\t\t\t\t\t\tunordered_set<columnRec, columnRecHasher, columnRecComparator> *dbCol = nullptr;\n\t\t\t\t\t\t\tif (columnMapFromDB.find(name) != columnMapFromDB.end())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tdbCol = &columnMapFromDB[name];\n\t\t\t\t\t\t\t\tLogger::getLogger()->debug(\"%s:%d Schema:%s, Service:%s, tableName: %s found in Database\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str());\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tLogger::getLogger()->debug(\"%s:%d Schema:%s, Service:%s, tableName: %s could not be found in Database\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str());\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tbool columnsToAlter = false;\n\t\t\t\t\t\t\tfor (auto& v : colsPerTableInReq)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t// table creation case\n\t\t\t\t\t\t\t\tif (!alterTable)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tsql += v.column + \" \" + v.type;\n\t\t\t\t\t\t\t\t\tif (v.type == \"varchar\")\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tsql += \"(\" + std::to_string(v.sz) + \")\";\n\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\tif (v.key == true)\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tsql += \" primary key\";\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tsql += \",\";\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t// alter table case, table already exists\n\t\t\t\t\t\t\t\t\t// check if column already exists in database\n\t\t\t\t\t\t\t\t\t// if not then add if not a key column\n\t\t\t\t\t\t\t\t\tif (dbCol != nullptr && (dbCol->find(v) == dbCol->end()))\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t// if it is not a key then add the column else log error\n\t\t\t\t\t\t\t\t\t\tif 
(!v.key)\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tsql += \"add column \";\n\t\t\t\t\t\t\t\t\t\t\tsql += v.column + \" \" + v.type;\n\t\t\t\t\t\t\t\t\t\t\tif (v.type == \"varchar\")\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\tsql += \"(\" + std::to_string(v.sz) + \")\";\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\tsql += \",\";\n\t\t\t\t\t\t\t\t\t\t\tcolumnsToAlter = true;\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t// altering a key is not allowed\n\t\t\t\t\t\t\t\t\t\t\t// column in req does not exist in DB\n\t\t\t\t\t\t\t\t\t\t\t// but is key, not allowed\n\t\t\t\t\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s:%d Schema:%s, Service:%s, tableName:%s, altering key request(%s) is not allowed for an existing table, dropping the schema request\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str(), v.column.c_str());\n\t\t\t\t\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t// altering an existing column is not allowed\n\t\t\t\t\t\t\t\t\t\t// This condition means the column in req is already present in the DB\n\t\t\t\t\t\t\t\t\t\tif (dbCol != nullptr)\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tauto itr = dbCol->find(v);\n\t\t\t\t\t\t\t\t\t\t\t// Check if the column matches exactly with that present in the DB; if not the same, reject the request\n\t\t\t\t\t\t\t\t\t\t\t// We ignore size for integer columns\n\t\t\t\t\t\t\t\t\t\t\tif (v.type.compare(\"integer\") == 0)\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\tif (itr->type != v.type || itr->key != v.key)\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s:%d Schema:%s, Service:%s, tableName:%s, altering an existing column %s is not allowed\", __FUNCTION__, 
__LINE__, schema.c_str(), service.c_str(), name.c_str(), v.column.c_str());\n\t\t\t\t\t\t\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\telse if (itr->type != v.type || itr->sz != v.sz || itr->key != v.key)\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s:%d Schema:%s, Service:%s, tableName:%s, altering an existing column %s is not allowed\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str(), v.column.c_str());\n\t\t\t\t\t\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t// If altering the table, drop all the columns which are present\n\t\t\t\t\t\t\t// in DB but are not in the request, iterate over the DB columns list\n\t\t\t\t\t\t\t// to find out columns which are present in DB, compare with the\n\t\t\t\t\t\t\t// incoming request list of columns colsPerTableInReq\n\n\t\t\t\t\t\t\tif (alterTable && dbCol)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tfor (auto col : *dbCol)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t// Make sure the column to be dropped is not a primary key\n\t\t\t\t\t\t\t\t\tif (colsPerTableInReq.find(col) == colsPerTableInReq.end())\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tif (!col.key)\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t// this column is in database but not in latest schema request\n\t\t\t\t\t\t\t\t\t\t\t// need to drop this column\n\t\t\t\t\t\t\t\t\t\t\tsql += \"drop column \" + col.column + \",\";\n\t\t\t\t\t\t\t\t\t\t\tcolumnsToAlter = true;\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s:%d Schema:%s, Service:%s, tableName:%s, dropping the key column %s is not allowed\", __FUNCTION__, __LINE__, schema.c_str(), service.c_str(), name.c_str(), col.column.c_str());\n\t\t\t\t\t\t\t\t\t\t\treturn 
-1;\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t// remove the last comma\n\t\t\t\t\t\t\tif (sql[sql.size() - 1] == ',')\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tsql.erase(sql.size() - 1);\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tif (alterTable)\n\t\t\t\t\t\t\t\tsql += \" ;\";\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\tsql += \" );\";\n\n\t\t\t\t\t\t\t// execute the sql here;\n\t\t\t\t\t\t\t// if alterTable is true and there are no columns to alter, don't fire the SQL query\n\t\t\t\t\t\t\tif (!(alterTable && !columnsToAlter))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tsqlQuery q;\n\t\t\t\t\t\t\t\tq.query = sql;\n\t\t\t\t\t\t\t\tq.purgeOpArg = \"CreatingSchema - phase 1, creating/altering tables\";\n\t\t\t\t\t\t\t\tchar msg[MSG_LEN] = {'\\0'};\n\t\t\t\t\t\t\t\tsnprintf(msg, MSG_LEN, \"Function: %s, Schema:%s, Service:%s, tableName:%s, Error in creating/altering tables, command executed = %s\", __FUNCTION__, schema.c_str(), service.c_str(), name.c_str(), sql.c_str());\n\t\t\t\t\t\t\t\tq.logMsg = msg;\n\n\t\t\t\t\t\t\t\tqueries.push_back(q);\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tstd::vector<std::string> &indexMatrixFromDB = indexMapFromDB[name];\n\t\t\t\t\t\t\tbool indexPresent = false;\n\n\t\t\t\t\t\t\t// create the indexes that are in the request but not in the DB:\n\t\t\t\t\t\t\t// iterate over the index creation request and search for each\n\t\t\t\t\t\t\t// entry in the DB; if it does not exist, create it\n\t\t\t\t\t\t\tfor (auto &req : indexesMatrixFromReq)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tindexPresent = false;\n\t\t\t\t\t\t\t\tfor (auto &row : indexMatrixFromDB)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tif (req == row)\n\t\t\t\t\t\t\t\t\t\tindexPresent = true;\n\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\tif (!indexPresent)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tsqlIdx = \"create index \" + name + \"_\" + getIndexName(req) + \" on \" + 
schema + \".\" + name + \"(\";\n                                                               \t\tsqlIdx += req; \n                                                        \t\tsqlIdx += \" );\";\n\n\t\t\t\t\t\t\t\t\tsqlQuery q;\n\t\t\t\t\t\t\t\t\tq.query = sqlIdx.c_str();\n\t\t\t\t\t\t\t\t\tq.purgeOpArg = \"CreatingSchema - phase 2, creating index on tables\";\n\t\t\t\t\t\t\t\t\tchar msg[MSG_LEN] = {'\\0'};\n\t\t\t\t\t\t\t\t\tsnprintf(msg, MSG_LEN, \"Function :%s, Schema:%s, Service:%s, tableName:%s Error in creating indexes command %s\",__FUNCTION__, schema.c_str(), service.c_str(), name.c_str(), sqlIdx.c_str());\n\t\t\t\t\t\t\t\t\tq.logMsg = msg;\n\n\t\t\t\t\t\t\t\t\tqueries.push_back(q);\n\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t// delete the indexes in DB and not in req\n\t\t\t\t\t\t\t// iterate over the indexes list present in DB and compare with the\n\t\t\t\t\t\t\t// indexes in teh schema creation request,if not found, then delete them\n\t\t\t\t\t\t\t//\n\t\t\t\t\t\t\tfor (auto &req : indexMatrixFromDB)\n                                                       \t{\n                                                       \t\tindexPresent = false;\n                                                               \tfor ( auto &row : indexesMatrixFromReq)\n                                                               \t{\n                                                               \t\tif (req == row)\n                                                                       \t\tindexPresent = true;\n                                                                }\n                                                                if(!indexPresent)\n                                                               \t{\n                                                               \t\tsqlIdx = \"drop index \" + schema + \".\" + name + \"_\" + req + \";\";\n\n\t\t\t\t\t\t\t\t\tsqlQuery q;\n\t\t\t\t\t\t\t\t\tq.query = sqlIdx;\n\t\t\t\t\t\t\t\t\tq.purgeOpArg = 
\"CreatingSchema - phase 2, dropping index on tables\";\n\t\t\t\t\t\t\t\t\tchar msg[MSG_LEN] = {'\\0'};\n\t\t\t\t\t\t\t\t\tsnprintf(msg, MSG_LEN, \"Function: %s, Schema:%s, Service:%s, tableName:%s, Error in executing drop index command %s\",__FUNCTION__, schema.c_str(), service.c_str(), name.c_str(), sqlIdx.c_str());\n\t\t\t\t\t\t\t\t\tq.logMsg = msg;\n\n\t\t\t\t\t\t\t\t\tqueries.push_back(q);\n                                                                }\n                                                        }\n\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\n\n\t\t\t\t\t\t// Iterate over all the sqlQuery command and execute them \n\n\t\t\t\t\t\tfor (sqlQuery& q : queries)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif(!q.query.empty())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\trowsAffectedLastCommand = purgeOperation(q.query.c_str(), logSection, q.purgeOpArg.c_str(), false);\n                                                                if (rowsAffectedLastCommand == -1)\n                                                                {\n                                                                \traiseError(\"create_schema\", q.logMsg.c_str());\n                                                                        return -1;\n                                                                }\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\t//\n\t\t\t\t\t\t// delete all the tables which are not in the new schema request\n\t\t\t\t\t\t// but present in db\n\n\t\t\t\t\t\tsqlDropTables += \"drop table if exists \";\n\t\t\t\t\t\tbool tableToDrop = false;\n\t\t\t\t\t\tfor (auto itr : columnMapFromDB)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (unSetTablesInSchemaRequest.find(itr.first) == unSetTablesInSchemaRequest.end())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tsqlDropTables += schema +\".\" + itr.first + \",\";\n\t\t\t\t\t\t\t\ttableToDrop = true;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (sqlDropTables[sqlDropTables.size() -1 ] == ',')\n                                                {\n                   
                             \tsqlDropTables.erase(sqlDropTables.size() -1);\n                                                }\n\t\t\t\t\t\tsqlDropTables += \";\";\n\t\t\t\t\t\tif (tableToDrop)\n\t\t\t\t\t\t{\n              \t\t\t\t\t\trowsAffectedLastCommand = purgeOperation(sqlDropTables.c_str(), logSection, \"Dropping unrequired tables\", false);\n\t\t\t\t\t\t\tif (rowsAffectedLastCommand == -1)\n                                                        {\n                                                        \traiseError(\"create_schema\", \"%s:%d Error in executing drop table command %s\",__FUNCTION__,__LINE__, sqlDropTables.c_str());\n                                                                return -1;\n                                                        }\n\n\t\t\t\t\t\t}\n\t\t\t\t\t\t// delete payload in fledge.service_schema if already present\n\t\t\t\t\t\tif(schemaCreationReq == false)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstd::string s = \"delete from fledge.service_schema where name =  '\" + schema + \"' and   service = '\" + service + \"';\";\n\t\t\t\t\t\t\trowsAffectedLastCommand = purgeOperation(s.c_str(), logSection, \"delete from fledge.service_schema  \", false);\n\t\t\t\t\t\t\tif (rowsAffectedLastCommand == -1)\n                                                        {\n\t                                                        raiseError(\"create_schema\", \"%s:%d Error in executing delete payload from service_schema command =%s\",__FUNCTION__, __LINE__, s.c_str());\n                                                                return -1;\n                                                        }\n\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t// insert payload in the fledge.service_schema\n                        \t\t\tstd::string s = \"insert into fledge.service_schema(name, service, version, definition) values ('\" + schema + \"', \" +\"'\" + service + \"', \" + to_string(version) + \", \" + \"'\" + payload + \"') ;\" ;\n\n                        \t\t    
rowsAffectedLastCommand = purgeOperation(s.c_str(), logSection, \"insert into fledge.service_schema\", false);\n\t\t\t\t\t\tif (rowsAffectedLastCommand == -1)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"create_schema\", \"%s:%d Error in executing insert payload into service_schema, command =%s\", __FUNCTION__, __LINE__, s.c_str());\n\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tcatch (std::exception &e) {\n\t\traiseError(\"create_schema\", \"%s:%d exception caught %s\", __FUNCTION__, __LINE__, e.what());\n\t\treturn -1;\n\t}\n\n\treturn 1;\n}\n\n/**\n * Replace every ',' in the input string with '_' and return the result\n *\n * @param[in] s\tThe string to process\n * @return\tThe string with each ',' replaced by '_'\n */\nstd::string Connection::getIndexName(std::string s)\n{\n\tstd::replace(s.begin(), s.end(), ',', '_');\n\treturn s;\n}\n\n/**\n * Check whether the passed string represents a valid Postgres column data type\n *\n * @param[in] s\tThe data type string to check\n * @return\ttrue if it is a valid data type, false otherwise\n */\nbool Connection::checkValidDataType(const std::string &s)\n{\n\treturn (s == \"varchar\" || s == \"integer\" || s == \"double\" || s == \"real\" || s == \"sequence\");\n}\n\n/**\n * Purge readings by asset or purge all readings\n *\n * @param asset\t\tThe asset name to purge\n * \t\t\tIf empty all assets will be removed\n * @return\t\tThe number of removed asset records\n */\nunsigned int Connection::purgeReadingsAsset(const string& asset)\n{\nSQLBuffer       sql;\nunsigned int rowsAffected;\n\n\tsql.append(\"DELETE FROM fledge.readings\");\n\n\tif (!asset.empty())\n\t{\n\t\tsql.append(\" WHERE asset_code = '\" + asset + 
\"'\");\n\t}\n\tsql.append(';');\n       \n\tconst char *query = sql.coalesce();\n        logSQL(\"PurgeReadingsAsset\", query);\n\n\tSTART_TIME;\n\n\tPGresult *res = PQexec(dbConnection, query);\n\n\tEND_TIME;\n\n\tdelete[] query;\n\tif (PQresultStatus(res) == PGRES_COMMAND_OK)\n\t{\n\t\tPQclear(res);\n\t\treturn atoi(PQcmdTuples(res));\n\t}\n\traiseError(\"PurgeReadingsAsset\", PQerrorMessage(dbConnection));\n\tPQclear(res);\n\treturn 0;\n}\n"
  },
  {
    "path": "C/plugins/storage/postgres/connection_manager.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <connection_manager.h>\n#include <connection.h>\n#include <logger.h>\n#include <stdexcept>\n\n\nConnectionManager *ConnectionManager::instance = 0;\n\n/**\n * Default constructor for the connection manager.\n */\nConnectionManager::ConnectionManager()\n{\n\tlastError.message = NULL;\n\tlastError.entryPoint = NULL;\n\tif (getenv(\"FLEDGE_TRACE_SQL\"))\n\t\tm_logSQL = true;\n\telse\n\t\tm_logSQL = false;\n}\n\n/**\n * Called at shutdown. Shrink the idle pool, this will\n * have the side effect of closing the connections to the database.\n */\nvoid ConnectionManager::shutdown()\n{\n\tshrinkPool(idle.size());\n}\n\n/**\n * Return the singleton instance of the connection manager.\n * if none was created then create it.\n */\nConnectionManager *ConnectionManager::getInstance()\n{\n\tif (instance == 0)\n\t{\n\t\tinstance = new ConnectionManager();\n\t}\n\treturn instance;\n}\n\n/**\n * Grow the connection pool by the number of connections\n * specified.\n *\n * @param delta\tThe number of connections to add to the pool\n */\nvoid ConnectionManager::growPool(unsigned int delta)\n{\n\twhile (delta-- > 0)\n\t{\n\t\ttry {\n\t\t\tConnection *conn = new Connection();\n\t\t\tconn->setTrace(m_logSQL);\n\t\t\tconn->setMaxReadingRows(m_maxReadingRows);\n\t\t\tidleLock.lock();\n\t\t\tidle.push_back(conn);\n\t\t\tidleLock.unlock();\n\t\t} catch (std::exception& e) {\n\t\t\tLogger::getLogger()->error(\"Failed to create storage connection: %s\", e.what());\n\t\t}\n\t}\n}\n\n/**\n * Attempt to shrink the number of connections in the idle pool\n *\n * @param delta\t\tNumber of connections to attempt to remove\n * @return The number of connections removed.\n */\nunsigned int ConnectionManager::shrinkPool(unsigned int delta)\n{\nunsigned int removed = 0;\nConnection   *conn;\n\n\twhile (delta-- > 
0)\n\t{\n\t\tidleLock.lock();\n\t\tif (idle.empty())\n\t\t{\n\t\t\t// No idle connections left to remove\n\t\t\tidleLock.unlock();\n\t\t\tbreak;\n\t\t}\n\t\tconn = idle.back();\n\t\tidle.pop_back();\n\t\tidleLock.unlock();\n\t\tdelete conn;\n\t\tremoved++;\n\t}\n\treturn removed;\n}\n\n/**\n * Allocate a connection from the idle pool. If\n * no connection is available add a new connection\n */\nConnection *ConnectionManager::allocate()\n{\nConnection *conn = 0;\n\n\tidleLock.lock();\n\tif (idle.empty())\n\t{\n\t\tconn = new Connection();\n\t\tconn->setTrace(m_logSQL);\n\t\tconn->setMaxReadingRows(m_maxReadingRows);\n\t}\n\telse\n\t{\n\t\tconn = idle.front();\n\t\tidle.pop_front();\n\t}\n\tidleLock.unlock();\n\tif (conn)\n\t{\n\t\tinUseLock.lock();\n\t\tinUse.push_front(conn);\n\t\tinUseLock.unlock();\n\t}\n\treturn conn;\n}\n\n/**\n * Release a connection back to the idle pool for\n * reallocation.\n *\n * @param conn\tThe connection to release.\n */\nvoid ConnectionManager::release(Connection *conn)\n{\n\tinUseLock.lock();\n\tinUse.remove(conn);\n\tinUseLock.unlock();\n\tidleLock.lock();\n\tidle.push_back(conn);\n\tidleLock.unlock();\n}\n\n/**\n * Set the last error information for a plugin.\n *\n * @param source\tThe source of the error\n * @param description\tThe error description\n * @param retryable\tFlag to determine if the error condition is transient\n */\nvoid ConnectionManager::setError(const char *source, const char *description, bool retryable)\n{\n\terrorLock.lock();\n\tif (lastError.entryPoint)\n\t\tfree(lastError.entryPoint);\n\tif (lastError.message)\n\t\tfree(lastError.message);\n\tlastError.retryable = retryable;\n\tlastError.entryPoint = strdup(source);\n\tlastError.message = strdup(description);\n\terrorLock.unlock();\n}\n"
  },
  {
    "path": "C/plugins/storage/postgres/include/connection.h",
    "content": "#ifndef _CONNECTION_H\n#define _CONNECTION_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <sql_buffer.h>\n#include <string>\n#include <rapidjson/document.h>\n#include <libpq-fe.h>\n#include <unordered_map>\n#include <unordered_set>\n#include <functional>\n#include <vector>\n\n#define\tSTORAGE_PURGE_RETAIN_ANY 0x0001U\n#define\tSTORAGE_PURGE_RETAIN_ALL 0x0002U\n#define STORAGE_PURGE_SIZE\t 0x0004U\n\n/**\n * Maximum number of readings to insert in a single \n * insert statement\n */\n#define INSERT_ROW_LIMIT\t5000\n\nclass Connection {\n\tpublic:\n\t\tConnection();\n\t\t~Connection();\n\t\tbool\t\tretrieve(const std::string& schema,\n\t\t\t\t\tconst std::string& table, const std::string& condition,\n\t\t\t\t\tstd::string& resultSet);\n    \t\tbool \t\tretrieveReadings(const std::string& condition, std::string& resultSet);\n\t\tint\t\tinsert(const std::string& table, const std::string& data);\n\t\tint\t\tupdate(const std::string& table, const std::string& data);\n\t\tint\t\tdeleteRows(const std::string& table, const std::string& condition);\n\t\tint\t\tappendReadings(const char *readings);\n\t\tbool\t\tfetchReadings(unsigned long id, unsigned int blksize, std::string& resultSet);\n\t\tunsigned int\tpurgeReadings(unsigned long age, unsigned int flags, unsigned long sent, std::string& results);\n\t\tunsigned int\tpurgeReadingsByRows(unsigned long rowcount, unsigned int flags,unsigned long sent, std::string& results);\n\t\tunsigned long   purgeOperation(const char *sql, const char *logSection, const char *phase, bool retrieve);\n\n\t\tlong\t\ttableSize(const std::string& table);\n\t\tvoid\t\tsetTrace(bool flag) { m_logSQL = flag; };\n    \t\tstatic bool \tformatDate(char *formatted_date, size_t formatted_date_size, const char *date);\n\t\tint\t\tcreate_table_snapshot(const std::string& table, const std::string& 
id);\n\t\tint\t\tload_table_snapshot(const std::string& table, const std::string& id);\n\t\tint\t\tdelete_table_snapshot(const std::string& table, const std::string& id);\n\t\tbool\t\tget_table_snapshots(const std::string& table,\n\t\t\t\t\t\t    std::string& resultSet);\n\t\tbool\t\taggregateQuery(const rapidjson::Value& payload, std::string& resultSet);\n\t\tint \t\tcreate_schema(const std::string &payload);\n\t\tbool \t\tfindSchemaFromDB(const std::string &service,\n\t\t\t\t\t\tconst std::string &name,\n\t\t\t\t\t\tstd::string &resultSet);\n\t\tunsigned int\tpurgeReadingsAsset(const std::string& asset);\n\t\tvoid\t\tsetMaxReadingRows(long rows)\n\t\t\t\t{\n\t\t\t\t\tm_maxReadingRows = rows;\n\t\t\t\t}\n\n\tprivate:\n\t\tbool\t\tm_logSQL;\n\t\tvoid\t\traiseError(const char *operation, const char *reason,...);\n\t\tPGconn\t\t*dbConnection;\n\t\tvoid\t\tmapResultSet(PGresult *res, std::string& resultSet);\n\t\tbool\t\tjsonModifiers(const rapidjson::Value&, SQLBuffer&);\n\t\tbool\t\tjsonAggregates(const rapidjson::Value&,\n\t\t\t\t\t\tconst rapidjson::Value&,\n\t\t\t\t\t\tSQLBuffer&, SQLBuffer&,\n\t\t\t\t\t\tbool isTableReading = false);\n\t\tbool\t\tjsonWhereClause(const rapidjson::Value& whereClause,\n\t\t\t\t\t\tSQLBuffer&, std::vector<std::string>  &asset_codes,\n\t\t\t\t\t\tbool convertLocaltime = false,\n\t\t\t\t\t\tstd::string prefix = \"\");\n\t\tbool\t\treturnJson(const rapidjson::Value&, SQLBuffer&, SQLBuffer&);\n\t\tchar\t\t*trim(char *str);\n    \t\tconst std::string\tescape_double_quotes(const std::string&);\n\t\tconst std::string\tescape(const std::string&);\n    \t\tconst std::string \tdouble_quote_reserved_column_name(const std::string &column_name);\n\t\tvoid\t\tlogSQL(const char *, const char *);\n\t\tbool\t\tisFunction(const char *) const;\n                bool            selectColumns(const rapidjson::Value& document, SQLBuffer& sql, int level);\n                bool            appendTables(const std::string &schema,\n\t\t\t\t\t\tconst 
rapidjson::Value& document,\n\t\t\t\t\t\tSQLBuffer& sql,\n\t\t\t\t\t\tint level);\n                bool            processJoinQueryWhereClause(const rapidjson::Value& query,\n\t\t\t\t\t\t\tSQLBuffer& sql,\n\t\t\t\t\t\t\tstd::vector<std::string>  &asset_codes,\n\t\t\t\t\t\t\tint level);\n\n\t\tstd::string getIndexName(std::string s);\n\t\tbool \t\tcheckValidDataType(const std::string &s);\n\t\tlong\t\tm_maxReadingRows;\n\n\n\t\ttypedef\tstruct{\n\t\t\tstd::string \tcolumn;\n\t\t\tstd::string\ttype;\n\t\t\tint\t\tsz;\n\t\t\tbool\t\tkey = false;\n\t\t} columnRec;\n\n\t\t// Custom Hash Functor that will compute the hash on the\n\t\t// passed objects column data member \n\t\tstruct columnRecHasher\n\t\t{\n\t  \t\tsize_t operator()(const columnRec & obj) const\n  \t\t\t{\n\t\t\t\treturn std::hash<std::string>()(obj.column);\n  \t\t\t}\n\t\t};\n\n\t\tstruct columnRecComparator\n\t\t{\n  \t\t\tbool operator()(const columnRec & obj1, const columnRec & obj2) const\n  \t\t\t{\n\t\t\t        if (obj1.column == obj2.column)\n\t\t\t\t      return true;\n\t\t\t        return false;\n  \t\t\t}\n\t\t};\n\n\t\ttypedef struct{\n                        std::string     query;\n                        std::string     purgeOpArg;\n\t\t\tstd::string\tlogMsg;\n                } sqlQuery;\n\n\tpublic:\n\n\t\tbool            parseDatabaseStorageSchema(int &version, const std::string &res,\n                                std::unordered_map<std::string, std::unordered_set<columnRec, columnRecHasher, columnRecComparator> > &tableColumnMap, std::unordered_map<std::string, std::vector<std::string> > &tableIndexMap, bool &schemaCreationRequest);\n\n\n};\n#endif\n"
  },
  {
    "path": "C/plugins/storage/postgres/include/connection_manager.h",
    "content": "#ifndef _CONNECTION_MANAGER_H\n#define _CONNECTION_MANAGER_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <plugin_api.h>\n#include <list>\n#include <mutex>\n\nclass Connection;\n\n/**\n * Singleton class to manage Postgres connection pool\n */\nclass ConnectionManager {\n\tpublic:\n\t\tstatic ConnectionManager  *getInstance();\n\t\tvoid                      growPool(unsigned int);\n\t\tunsigned int              shrinkPool(unsigned int);\n\t\tConnection                *allocate();\n\t\tvoid                      release(Connection *);\n\t\tvoid\t\t\t  shutdown();\n\t\tvoid\t\t\t  setError(const char *, const char *, bool);\n\t\tPLUGIN_ERROR\t\t  *getError()\n\t\t\t\t\t  {\n\t\t\t\t\t\treturn &lastError;\n\t\t\t\t\t  }\n\t\tvoid\t\t\t  setMaxReadingRows(long rows)\n\t\t\t\t\t  {\n\t\t\t\t\t\t  m_maxReadingRows = rows;\n\t\t\t\t\t  }\n\n\tprivate:\n\t\tConnectionManager();\n\t\tstatic ConnectionManager     *instance;\n\t\tstd::list<Connection *>      idle;\n\t\tstd::list<Connection *>      inUse;\n\t\tstd::mutex                   idleLock;\n\t\tstd::mutex                   inUseLock;\n\t\tstd::mutex                   errorLock;\n\t\tPLUGIN_ERROR\t\t     lastError;\n\t\tbool\t\t\t     m_logSQL;\n\t\tlong\t\t\t     m_maxReadingRows;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/storage/postgres/plugin.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017-2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <connection_manager.h>\n#include <connection.h>\n#include <plugin_api.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <strings.h>\n#include \"libpq-fe.h\"\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include <sstream>\n#include <iostream>\n#include <string>\n#include <logger.h>\n#include <plugin_exception.h>\n#include <config_category.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\n#define DEFAULT_SCHEMA \"fledge\"\n\n#define OR_DEFAULT_SCHEMA(x)\t((x) ? (x) : DEFAULT_SCHEMA)\n\n/**\n * The Postgres plugin interface\n */\nextern \"C\" {\n\n\nconst char *default_config = QUOTE({\n                \"poolSize\" : {\n                        \"description\" : \"Connection pool size\",\n                        \"type\" : \"integer\",\n                        \"default\" : \"5\",\n                        \"displayName\" : \"Pool Size\",\n                        \"order\" : \"1\"\n                        },\n                \"maxReadingRows\" : {\n                        \"description\" : \"The maximum number of readings to insert in a single statement\",\n                        \"type\" : \"integer\",\n                        \"default\" : \"5000\",\n                        \"displayName\" : \"Max. 
Insert Rows\",\n                        \"order\" : \"2\"\n                        }\n                });\n\n/**\n * The plugin information structure\n */\nstatic PLUGIN_INFORMATION info = {\n\t\"PostgresSQL\",            // Name\n\t\"1.2.0\",                  // Version\n\tSP_COMMON|SP_READINGS,    // Flags\n\tPLUGIN_TYPE_STORAGE,      // Type\n\t\"1.6.0\",                  // Interface version\n\tdefault_config\n};\n\n/**\n * Return the information about this plugin\n */\nPLUGIN_INFORMATION *plugin_info()\n{\n\treturn &info;\n}\n\n/**\n * Initialise the plugin, called to get the plugin handle\n * In the case of Postgres we also get a pool of connections\n * to use.\n */\nPLUGIN_HANDLE plugin_init(ConfigCategory *category)\n{\nConnectionManager *manager = ConnectionManager::getInstance();\nlong poolSize = 5, maxReadingRows = 5000;\n\n\tif (category->itemExists(\"poolSize\"))\n\t{\n\t\tpoolSize = strtol(category->getValue(\"poolSize\").c_str(), NULL, 10);\n\t}\n\tif (category->itemExists(\"maxReadingRows\"))\n\t{\n\t\tlong val = strtol(category->getValue(\"maxReadingRows\").c_str(), NULL, 10);\n\t\tif (val > 0)\n\t\t\tmaxReadingRows = val;\n\t}\n\tmanager->setMaxReadingRows(maxReadingRows);\n\tmanager->growPool(poolSize);\n\treturn manager;\n}\n\n/**\n * Insert into an arbitrary table\n */\nint plugin_common_insert(PLUGIN_HANDLE handle, char *schema, char *table, char *data)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn 0;\n\t}\n\n\n\tint result = connection->insert(std::string(OR_DEFAULT_SCHEMA(schema)) + \".\" + std::string(table), std::string(data));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Retrieve data from an arbitrary table\n */\nconst char *plugin_common_retrieve(PLUGIN_HANDLE handle, char *schema, char *table, char *query)\n{\nConnectionManager 
*manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string results;\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn NULL;\n\t}\n\n\tbool rval = connection->retrieve(OR_DEFAULT_SCHEMA(schema), std::string(OR_DEFAULT_SCHEMA(schema)) + \".\" + std::string(table), std::string(query), results);\n\tmanager->release(connection);\n\tif (rval)\n\t{\n\t\treturn strdup(results.c_str());\n\t}\n\treturn NULL;\n}\n\n/**\n * Update an arbitrary table\n */\nint plugin_common_update(PLUGIN_HANDLE handle, char *schema, char *table, char *data)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn 0;\n\t}\n\n\tint result = connection->update(std::string(OR_DEFAULT_SCHEMA(schema)) + \".\" + std::string(table), std::string(data));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Delete from an arbitrary table\n */\nint plugin_common_delete(PLUGIN_HANDLE handle, char *schema, char *table, char *condition)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn 0;\n\t}\n\n\tint result = connection->deleteRows(std::string(OR_DEFAULT_SCHEMA(schema)) + \".\" + std::string(table), std::string(condition));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Append a sequence of readings to the readings buffer\n */\nint plugin_reading_append(PLUGIN_HANDLE handle, char *readings)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn 
0;\n\t}\n\n\tint result = connection->appendReadings(readings);\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Fetch a block of readings from the readings buffer\n */\nchar *plugin_reading_fetch(PLUGIN_HANDLE handle, unsigned long id, unsigned int blksize)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string\t  resultSet;\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn NULL;\n\t}\n\n\tconnection->fetchReadings(id, blksize, resultSet);\n\tmanager->release(connection);\n\treturn strdup(resultSet.c_str());\n}\n\n/**\n * Retrieve some readings from the readings buffer\n */\nchar *plugin_reading_retrieve(PLUGIN_HANDLE handle, char *condition)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string results;\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn NULL;\n\t}\n\n\tconnection->retrieveReadings(std::string(condition), results);\n\tmanager->release(connection);\n\treturn strdup(results.c_str());\n}\n\n/**\n * Purge readings from the buffer\n */\nchar *plugin_reading_purge(PLUGIN_HANDLE handle, unsigned long param, unsigned int flags, unsigned long sent)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string \t  results;\nunsigned long\t  age, size;\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn NULL;\n\t}\n\n\tif (flags & STORAGE_PURGE_SIZE)\n\t{\n\t\t(void)connection->purgeReadingsByRows(param, flags, sent, results);\n\t}\n\telse\n\t{\n\t\tage = param;\n\t\t(void)connection->purgeReadings(age, flags, sent, results);\n\t}\n\tmanager->release(connection);\n\treturn strdup(results.c_str());\n}\n\n/**\n * Release a previously 
returned result set\n */\nvoid plugin_release(PLUGIN_HANDLE handle, char *results)\n{\n\t(void)handle;\n\tfree(results);\n}\n\n/**\n * Return details on the last error that occurred.\n */\nPLUGIN_ERROR *plugin_last_error(PLUGIN_HANDLE handle)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\n\n\treturn manager->getError();\n}\n\n/**\n * Shutdown the plugin\n */\nbool plugin_shutdown(PLUGIN_HANDLE handle)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\n\n\tmanager->shutdown();\n\treturn true;\n}\n\n/**\n * Create a snapshot of a common table\n *\n * @param handle\tThe plugin handle\n * @param table\t\tThe table to snapshot\n * @param id\t\tThe snapshot id\n * @return\t\t-1 on error, >= 0 on success\n *\n * The newly created table has the following name:\n * table_id\n */\nint plugin_create_table_snapshot(PLUGIN_HANDLE handle,\n\t\t\t\t char *table,\n\t\t\t\t char *id)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn -1;\n\t}\n\n\tint result = connection->create_table_snapshot(std::string(table),\n\t\t\t\t\t\t\tstd::string(id));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Load a snapshot of a common table\n *\n * @param handle\tThe plugin handle\n * @param table\t\tThe table to fill from a given snapshot\n * @param id\t\tThe table snapshot id\n * @return\t\t-1 on error, >= 0 on success\n */\nint plugin_load_table_snapshot(PLUGIN_HANDLE handle,\n\t\t\t\tchar *table,\n\t\t\t\tchar *id)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn -1;\n\t}\n\n\tint result = connection->load_table_snapshot(std::string(table),\n\t\t\t\t\t\t\t
std::string(id));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Delete a snapshot of a common table\n *\n * @param handle\tThe plugin handle\n * @param table\t\tThe table whose snapshot will be removed\n * @param id\t\tThe snapshot id\n * @return\t\t-1 on error, >= 0 on success\n */\nint plugin_delete_table_snapshot(PLUGIN_HANDLE handle,\n\t\t\t\t char *table,\n\t\t\t\t char *id)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn -1;\n\t}\n\n\tint result = connection->delete_table_snapshot(std::string(table),\n\t\t\t\t\t\t\tstd::string(id));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Get all snapshots of a given common table\n *\n * @param handle\tThe plugin handle\n * @param table\t\tThe table name\n * @return\t\tList of snapshots (even an empty list) or NULL on error\n */\nconst char* plugin_get_table_snapshots(PLUGIN_HANDLE handle,\n\t\t\t\t\tchar *table)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string results;\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn NULL;\n\t}\n\n\tbool rval = connection->get_table_snapshots(std::string(table), results);\n\tmanager->release(connection);\n\n\treturn rval ? 
strdup(results.c_str()) : NULL;\n}\n\n/**\n * Create schema of a common table\n *\n * @param handle        The plugin handle\n * @param payload       The schema payload\n * @return              -1 on error, >= 0 on success\n *\n */\nint plugin_createSchema(PLUGIN_HANDLE handle,\n                                 char *payload)\n{\n\tConnectionManager *manager = (ConnectionManager *)handle;\n\tConnection        *connection = manager->allocate();\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn -1;\n\t}\n\n\tint result = connection->create_schema(std::string(payload));\n        manager->release(connection);\n        return result;\n}\n\nint plugin_schema_update(PLUGIN_HANDLE handle, \n\t\t                  char *schema, char *payload)\n{\n\tConnectionManager *manager = (ConnectionManager *)handle;\n        Connection        *connection = manager->allocate();\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn 0;\n\t}\n\n\t// create_schema handles both create and update schema\n\t// schema value gets parsed from the payload\n        int result = connection->create_schema(std::string(payload));\n        manager->release(connection);\n        return result;\n\n}\n\n/**\n * Purge given readings asset or all readings from the buffer\n */\nunsigned int plugin_reading_purge_asset(PLUGIN_HANDLE handle, char *asset)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tif (connection == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"No database connections available\");\n\t\treturn 0;\n\t}\n\n\tunsigned int deleted = connection->purgeReadingsAsset(asset);\n\tmanager->release(connection);\n\treturn deleted;\n}\n\n};\n"
  },
  {
    "path": "C/plugins/storage/sqlite/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6.0)\n\nproject(sqlite)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\nset(STORAGE_COMMON_LIB storage-common-lib)\n\n# Path of compiled sqlite3 file: /usr/local/bin\nset(FLEDGE_SQLITE3_LIBS \"/usr/local/bin\" CACHE INTERNAL \"\")\n\n# Find source files\nfile(GLOB SOURCES ./common/*.cpp ./schema/*.cpp *.cpp)\n\n# Include header files\ninclude_directories(./include)\ninclude_directories(./common/include)\ninclude_directories(./schema/include)\ninclude_directories(../../../common/include)\ninclude_directories(../../../services/common/include)\ninclude_directories(../common/include)\ninclude_directories(../../../thirdparty/rapidjson/include)\nlink_directories(${PROJECT_BINARY_DIR}/../../../lib)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\ntarget_link_libraries(${PROJECT_NAME} ${STORAGE_COMMON_LIB})\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n\n# Check Sqlite3 required version\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} \"${CMAKE_CURRENT_SOURCE_DIR}\")\nfind_package(sqlite3)\n\n# Use static SQLite3 library\nif(EXISTS ${FLEDGE_SQLITE3_LIBS})\n\tinclude_directories(${FLEDGE_SQLITE3_LIBS})\n\ttarget_link_libraries(${PROJECT_NAME} -L\"${FLEDGE_SQLITE3_LIBS}/.libs\" -lsqlite3)\nelse()\n\ttarget_link_libraries(${PROJECT_NAME} -lsqlite3)\nendif()\n\n# Install SQLite3 command line with static library\nif(EXISTS ${FLEDGE_SQLITE3_LIBS})\n\tinstall(PROGRAMS ${FLEDGE_SQLITE3_LIBS}/sqlite3 DESTINATION \"fledge/plugins/storage/${PROJECT_NAME}\")\nendif()\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/plugins/storage/${PROJECT_NAME})\n\n# Install init.sql\ninstall(FILES ${CMAKE_SOURCE_DIR}/scripts/plugins/storage/${PROJECT_NAME}/init.sql DESTINATION fledge/plugins/storage/${PROJECT_NAME})\ninstall(FILES ${CMAKE_SOURCE_DIR}/scripts/plugins/storage/${PROJECT_NAME}/init_readings.sql DESTINATION 
fledge/plugins/storage/${PROJECT_NAME})\n\n"
  },
  {
    "path": "C/plugins/storage/sqlite/Findsqlite3.cmake",
    "content": "# This CMake file locates the SQLite3 development libraries\n#\n# The following variables are set:\n# SQLITE_FOUND - If the SQLite library was found\n# SQLITE_LIBRARIES - Path to the static library\n# SQLITE_INCLUDE_DIR - Path to SQLite headers\n# SQLITE_VERSION - Library version\n\nset(SQLITE_MIN_VERSION \"3.11.0\")\n# Check whether path of compiled libsqlite3.a and .h files exists\nif (EXISTS ${FLEDGE_SQLITE3_LIBS})\n    find_path(SQLITE_INCLUDE_DIR sqlite3.h PATHS ${FLEDGE_SQLITE3_LIBS})\n    find_library(SQLITE_LIBRARIES NAMES libsqlite3.a PATHS \"${FLEDGE_SQLITE3_LIBS}/.libs\")\nelse()\n    find_path(SQLITE_INCLUDE_DIR sqlite3.h)\n    find_library(SQLITE_LIBRARIES NAMES libsqlite3.so)\nendif()\n\nif (SQLITE_INCLUDE_DIR AND SQLITE_LIBRARIES)\n  execute_process(COMMAND grep \".*#define.*SQLITE_VERSION \" ${SQLITE_INCLUDE_DIR}/sqlite3.h\n    COMMAND sed \"s/.*\\\"\\\\(.*\\\\)\\\".*/\\\\1/\"\n    OUTPUT_VARIABLE SQLITE_VERSION\n    OUTPUT_STRIP_TRAILING_WHITESPACE)\n    if (\"${SQLITE_VERSION}\" VERSION_LESS \"${SQLITE_MIN_VERSION}\")\n        message(FATAL_ERROR \"SQLite3 version >= ${SQLITE_MIN_VERSION} required, found version ${SQLITE_VERSION}\")\n    else()\n        message(STATUS \"Found SQLite version ${SQLITE_VERSION}: ${SQLITE_LIBRARIES}\")\n        set(SQLITE_FOUND TRUE)\n    endif()\nelse()\n  message(FATAL_ERROR \"Could not find SQLite\")\nendif()\n"
  },
  {
    "path": "C/plugins/storage/sqlite/common/connection.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2018 OSIsoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n#include <sqlite_common.h>\n#include <connection.h>\n#include <connection_manager.h>\n#include <utils.h>\n#include <unistd.h>\n\n#include \"readings_catalogue.h\"\n\n/*\n * Control the way purge deletes readings. The block size sets a limit as to how many rows\n * get deleted in each call, whilst the sleep interval controls how long the thread sleeps\n * between deletes. The idea is to not keep the database locked too long and allow other threads\n * to have access to the database between blocks.\n */\n#define PURGE_SLEEP_MS 500\n\n#define PURGE_SLOWDOWN_AFTER_BLOCKS 5\n#define PURGE_SLOWDOWN_SLEEP_MS 500\n\n#define LOG_AFTER_NERRORS (MAX_RETRIES / 2)\n\n/**\n * SQLite3 storage plugin for Fledge\n */\n\nusing namespace std;\nusing namespace rapidjson;\n\n#define CONNECT_ERROR_THRESHOLD\t\t5*60\t// 5 minutes\n\n/*\n * The following allows for conditional inclusion of code that tracks the top queries\n * run by the storage plugin and the number of times a particular statement has to\n * be retried because of the database being busy./\n */\n#define DO_PROFILE\t\t0\n#define DO_PROFILE_RETRIES\t0\n#if DO_PROFILE\n#include <profile.h>\n\n#define\tTOP_N_STATEMENTS\t\t10\t// Number of statements to report in top n\n#define RETRY_REPORT_THRESHOLD\t\t1000\t// Report retry statistics every X calls\n\nQueryProfile profiler(TOP_N_STATEMENTS);\nunsigned long retryStats[MAX_RETRIES] = { 0,0,0,0,0,0,0,0,0,0 };\nunsigned long numStatements = 0;\nint\t      maxQueue = 0;\n#endif\n\nstatic std::atomic<int> m_waiting(0);\nstatic std::atomic<int> m_writeAccessOngoing(0);\nstatic std::mutex\tdb_mutex;\nstatic std::condition_variable\tdb_cv;\nstatic int purgeBlockSize = PURGE_DELETE_BLOCK_SIZE;\n\n#define START_TIME std::chrono::high_resolution_clock::time_point t1 = 
std::chrono::high_resolution_clock::now();\n#define END_TIME std::chrono::high_resolution_clock::time_point t2 = std::chrono::high_resolution_clock::now(); \\\n\t\t\t\t auto usecs = std::chrono::duration_cast<std::chrono::microseconds>( t2 - t1 ).count();\n\nstatic time_t connectErrorTime = 0;\n\n/**\n * This SQLite3 query callback returns a formatted date\n * by SELECT strftime('format', column, 'localtime')\n *\n * @param data         Output parameter to update with new datetime\n * @param nCols        The number of columns in the row\n * @param colValues    The column values\n * @param colNames     The column names\n * @return             0 on success, 1 otherwise\n */\nint dateCallback(void *data,\n\t\tint nCols,\n\t\tchar **colValues,\n\t\tchar **colNames)\n{\n\tif (colValues[0] != NULL)\n\t{\n\t\tmemcpy((char *)data,\n\t\t\tcolValues[0],\n\t\t\tstrlen(colValues[0]));\n\t\t// OK\n\t\treturn 0;\n\t}\n\telse\n\t{\n\t\t// Failure\n\t\treturn 1;\n\t}\n}\n\n/**\n * Retrieves the current datetime (now ()) from SQLite\n *\n * @param Now      Output parameter - now ()\n * @return         True if the operation succeeded\n *\n */\nbool Connection::getNow(string& Now)\n{\n\tbool retCode;\n\tchar* zErrMsg = NULL;\n\tchar nowDate[100] = \"\";\n\n\tstring nowSqlCMD = \"SELECT \" SQLITE3_NOW_READING;\n\n\tint rc = SQLexec(dbHandle, \"now\",\n\t                 nowSqlCMD.c_str(),\n\t                 dateCallback,\n\t                 nowDate,\n\t                 &zErrMsg);\n\n\tif (rc == SQLITE_OK )\n\t{\n\t\tNow = nowDate;\n\t\tretCode = true;\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"SELECT NOW() '%s' failed: %s\", nowSqlCMD.c_str(), zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\tNow = \"\";\n\t\tretCode = false;\n\t}\n\treturn retCode;\n}\n\n/**\n * Apply Fledge default datetime formatting\n * to a detected DATETIME datatype column\n *\n * @param pStmt    Current SQLite3 result 
set\n * @param i        Current column index\n * @param newDate  Output parameter for new date\n * @return         True if format has been applied,\n *\t\t   False otherwise\n */\nbool Connection::applyColumnDateTimeFormat(sqlite3_stmt *pStmt,\n\t\t\t\t\t   int i,\n\t\t\t\t\t   string& newDate)\n{\n\n\tbool apply_format = false;\n\tstring formatStmt = {};\n\n\tif (sqlite3_column_database_name(pStmt, i) != NULL &&\n\t    sqlite3_column_table_name(pStmt, i)    != NULL)\n\t{\n\n\t\tif ((strcmp(sqlite3_column_origin_name(pStmt, i), \"user_ts\") == 0) &&\n\t\t    (strcmp(sqlite3_column_table_name(pStmt, i), \"readings\") == 0) &&\n\t\t    (strlen((char *) sqlite3_column_text(pStmt, i)) == 32))\n\t\t{\n\n\t\t\t// Extract milliseconds and microseconds for the user_ts field of the readings table\n\t\t\tformatStmt = string(\"SELECT strftime('\");\n\t\t\tformatStmt += string(F_DATEH24_SEC);\n\t\t\tformatStmt += \"', '\" + string((char *) sqlite3_column_text(pStmt, i));\n\t\t\tformatStmt += \"')\";\n\t\t\tformatStmt += \" || substr('\" + string((char *) sqlite3_column_text(pStmt, i));\n\t\t\tformatStmt += \"', instr('\" + string((char *) sqlite3_column_text(pStmt, i));\n\t\t\tformatStmt += \"', '.'), 7)\";\n\n\t\t\tapply_format = true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t/**\n\t\t\t * Handle here possible unformatted DATETIME column type\n\t\t\t * If (column_name == column_original_name) AND\n\t\t\t * (sqlite3_column_table_name() == \"DATETIME\")\n\t\t\t * we assume the column has not been formatted\n\t\t\t * by any datetime() or strftime() SQLite function.\n\t\t\t * Thus we apply default FLEDGE formatting:\n\t\t\t * \"%Y-%m-%d %H:%M:%f\"\n\t\t\t */\n\t\t\tif (sqlite3_column_database_name(pStmt, i) != NULL &&\n\t\t\t    sqlite3_column_table_name(pStmt, i) != NULL &&\n\t\t\t    (strcmp(sqlite3_column_origin_name(pStmt, i),\n\t\t\t\t    sqlite3_column_name(pStmt, i)) == 0))\n\t\t\t{\n\t\t\t\tconst char *pzDataType;\n\t\t\t\tint retType = 
sqlite3_table_column_metadata(dbHandle,\n\t\t\t\t\t\t\t\t\t    sqlite3_column_database_name(pStmt, i),\n\t\t\t\t\t\t\t\t\t    sqlite3_column_table_name(pStmt, i),\n\t\t\t\t\t\t\t\t\t    sqlite3_column_name(pStmt, i),\n\t\t\t\t\t\t\t\t\t    &pzDataType,\n\t\t\t\t\t\t\t\t\t    NULL, NULL, NULL, NULL);\n\n\t\t\t\t// Check whether to Apply dateformat\n\t\t\t\tif (pzDataType != NULL &&\n\t\t\t\t    retType == SQLITE_OK &&\n\t\t\t\t    strcmp(pzDataType, SQLITE3_FLEDGE_DATETIME_TYPE) == 0 &&\n\t\t\t\t    strcmp(sqlite3_column_origin_name(pStmt, i),\n\t\t\t\t\t   sqlite3_column_name(pStmt, i)) == 0)\n\t\t\t\t{\n\t\t\t\t\t// Column metadata found and column datatype is \"pzDataType\"\n\t\t\t\t\tformatStmt = string(\"SELECT strftime('\");\n\t\t\t\t\tformatStmt += string(F_DATEH24_MS);\n\t\t\t\t\t\n\t\t\t\t\tstring columnText ((char *) sqlite3_column_text(pStmt, i));\n\t\t\t\t\tif (columnText.find(\"strftime\") != string::npos)\n\t\t\t\t\t{\n\t\t\t\t\t\tformatStmt += \"', \" + columnText + \")\";\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tformatStmt += \"', '\" + columnText + \"')\";\n\t\t\t\t\t}\n\n\t\t\t\t\tapply_format = true;\n\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\t// Format not done\n\t\t\t\t\t// Just log the error if present\n\t\t\t\t\tif (retType != SQLITE_OK)\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->error(\"SQLite3 failed \" \\\n                                                                \"to call sqlite3_table_column_metadata() \" \\\n                                                                \"for column '%s'\",\n\t\t\t\t\t\t\t\t\t   sqlite3_column_name(pStmt, i));\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif (apply_format)\n\t{\n\n\t\tchar* zErrMsg = NULL;\n\t\t// New formatted data\n\t\tchar formattedData[100] = \"\";\n\n\t\t// Exec the format SQL\n\t\tint rc = SQLexec(dbHandle, \"date\",\n\t\t\t\t formatStmt.c_str(),\n\t\t\t\t dateCallback,\n\t\t\t\t formattedData,\n\t\t\t\t &zErrMsg);\n\n\t\tif (rc == SQLITE_OK 
)\n\t\t{\n\t\t\t// Use new formatted datetime value\n\t\t\tnewDate.assign(formattedData);\n\n\t\t\treturn true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"SELECT dateformat '%s': error %s\",\n\t\t\t\t\t\t   formatStmt.c_str(),\n\t\t\t\t\t\t   zErrMsg);\n\n\t\t\tsqlite3_free(zErrMsg);\n\t\t}\n\n\t}\n\n\treturn false;\n}\n\n/**\n * Apply the specified date format\n * using the available formats in SQLite3\n * for a specific column\n *\n * If the requested format is not available\n * the input column is used as is.\n * Additionally milliseconds could be rounded\n * upon request.\n * The routine returns false if the date format is not\n * found and the caller might decide to raise an error\n * or use the non formatted value\n *\n * @param inFormat     Input date format from application\n * @param colName      The column name to format\n * @param outFormat    The formatted column\n * @return             True if format has been applied or\n *\t\t       false if no format is in use.\n */\nbool applyColumnDateFormat(const string& inFormat,\n\t\t\t\t  const string& colName,\n\t\t\t\t  string& outFormat,\n\t\t\t\t  bool roundMs)\n\n{\nbool retCode;\n\t// Get format, if any, from the supported formats map\n\tconst string format = sqliteDateFormat[inFormat];\n\tif (!format.empty())\n\t{\n\t\t// Apply found format via SQLite3 strftime()\n\t\toutFormat.append(\"strftime('\");\n\t\toutFormat.append(format);\n\t\toutFormat.append(\"', \");\n\n\t\t// Check whether we have to round milliseconds\n\t\tif (roundMs == true &&\n\t\t    format.back() == 'f')\n\t\t{\n\t\t\toutFormat.append(\"cast(round((julianday(\");\n\t\t\toutFormat.append(colName);\n\t\t\toutFormat.append(\") - 2440587.5)*86400 -0.00005, 3) AS FLOAT), 'unixepoch'\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\toutFormat.append(colName);\n\t\t}\n\n\t\toutFormat.append(\" )\");\n\t\tretCode = true;\n\t}\n\telse\n\t{\n\t\t// Use column as is\n\t\toutFormat.append(colName);\n\t\tretCode = false;\n\t}\n\n\treturn 
retCode;\n}\n\n/**\n * Apply the specified date format\n * using the available formats in SQLite3\n * for a specific column\n *\n * If the requested format is not available\n * the input column is used as is.\n * Additionally milliseconds could be rounded\n * upon request.\n * The routine returns false if the date format is not\n * found and the caller might decide to raise an error\n * or use the non formatted value\n *\n * @param inFormat     Input date format from application\n * @param colName      The column name to format\n * @param outFormat    The formatted column\n * @return             True if format has been applied or\n *\t\t       false if no format is in use.\n */\nbool applyColumnDateFormatLocaltime(const string& inFormat,\n\t\t\t\t  const string& colName,\n\t\t\t\t  string& outFormat,\n\t\t\t\t  bool roundMs)\n\n{\nbool retCode;\n\t// Get format, if any, from the supported formats map\n\tconst string format = sqliteDateFormat[inFormat];\n\tif (!format.empty())\n\t{\n\t\t// Apply found format via SQLite3 strftime()\n\t\toutFormat.append(\"strftime('\");\n\t\toutFormat.append(format);\n\t\toutFormat.append(\"', \");\n\n\t\t// Check whether we have to round milliseconds\n\t\tif (roundMs == true &&\n\t\t    format.back() == 'f')\n\t\t{\n\t\t\toutFormat.append(\"cast(round((julianday(\");\n\t\t\toutFormat.append(colName);\n\t\t\toutFormat.append(\") - 2440587.5)*86400 -0.00005, 3) AS FLOAT), 'unixepoch'\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\toutFormat.append(colName);\n\t\t}\n\n\t\toutFormat.append(\", 'localtime')\");\n\t\tretCode = true;\n\t}\n\telse\n\t{\n\t\t// Use column as is\n\t\toutFormat.append(colName);\n\t\tretCode = false;\n\t}\n\n\treturn retCode;\n}\n\n/**\n * Apply the specified date format\n * using the available formats in SQLite3\n *\n * @param inFormat     Input date format from application\n * @param outFormat    The formatted column\n * @return             True if format has been applied or\n *\t\t       false\n */\nbool applyDateFormat(const 
string& inFormat, string& outFormat)\n\n{\nbool retCode;\n\t// Get format, if any, from the supported formats map\n\tconst string format = sqliteDateFormat[inFormat];\n\tif (!format.empty())\n\t{\n\t\t// Apply found format via SQLite3 strftime()\n\t\toutFormat.append(\"strftime('\");\n\t\toutFormat.append(format);\n\t\toutFormat.append(\"', \");\n\n\t\treturn true;\n\t}\n\telse\n\t{\n\t\treturn false;\n\t}\n}\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Create a SQLite3 database connection\n */\nConnection::Connection(ConnectionManager *manager) : m_manager(manager)\n{\n\tstring dbPath, dbPathReadings;\n\tconst char *defaultConnection = getenv(\"DEFAULT_SQLITE_DB_FILE\");\n\tconst char *defaultReadingsConnection = getenv(\"DEFAULT_SQLITE_DB_READINGS_FILE\");\n\n\tm_logSQL = false;\n\tm_queuing = 0;\n\tm_streamOpenTransaction = true;\n\n\tif (defaultConnection == NULL)\n\t{\n\t\t// Set DB base path\n\t\tdbPath = getDataDir();\n\t\t// Add the filename\n\t\tdbPath += _DB_NAME;\n\t}\n\telse\n\t{\n\t\tdbPath = defaultConnection;\n\t}\n\n\tif (defaultReadingsConnection == NULL)\n\t{\n\t\t// Set DB base path\n\t\tdbPathReadings = getDataDir();\n\t\t// Add the filename\n\t\tdbPathReadings += READINGS_DB_FILE_NAME;\n\t}\n\telse\n\t{\n\t\tdbPathReadings = defaultReadingsConnection;\n\t}\n\n\t// Allow usage of URI for filename\n\tsqlite3_config(SQLITE_CONFIG_URI, 1);\n\n\tLogger *logger = Logger::getLogger();\n\n\t/**\n\t * Make a connection to the database\n\t * and check backend connection was successfully made\n\t * Note:\n\t *   we assume the database already exists, so the flag\n\t *   SQLITE_OPEN_CREATE is not added in sqlite3_open_v2 call\n\t */\n\tif (sqlite3_open_v2(dbPath.c_str(),\n\t\t\t    &dbHandle,\n\t\t\t    SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX,\n\t\t\t    NULL) != SQLITE_OK)\n\t{\n\t\tconst char* dbErrMsg = sqlite3_errmsg(dbHandle);\n\t\tconst char* errMsg = \"Failed to open the SQLite3 database\";\n\n\t\tlogger->error(\"%s '%s': %s\",\n\t\t\t\t\t   
errMsg,\n\t\t\t\t\t   dbPath.c_str(),\n\t\t\t\t\t   dbErrMsg);\n\t\tconnectErrorTime = time(0);\n\n\t\traiseError(\"Connection\", \"%s '%s': '%s'\",\n\t\t\t   errMsg,\n\t\t\t   dbPath.c_str(),\n\t\t\t   dbErrMsg);\n\n\t\tsqlite3_close_v2(dbHandle);\n\t\tdbHandle = NULL;\n\t}\n\telse\n\t{\n\t\tint rc;\n\t\tchar *zErrMsg = NULL;\n\n\t\t// Enable the WAL for the fledge DB\n\t\trc = sqlite3_exec(dbHandle, m_manager->getDBConfiguration().c_str(), NULL, NULL, &zErrMsg);\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\tstring errMsg = \"Failed to set WAL from the fledge DB - \" + m_manager->getDBConfiguration();\n\t\t\tlogger->error(\"%s : error %s\",\n\t\t\t                           errMsg.c_str(),\n\t\t\t\t\t\t\t\t\t   zErrMsg);\n\t\t\tconnectErrorTime = time(0);\n\n\t\t\tsqlite3_free(zErrMsg);\n\t\t}\n\n\t\t/*\n\t\t * Build the ATTACH DATABASE command in order to get\n\t\t * 'fledge.' prefix in all SQL queries\n\t\t */\n\t\tSQLBuffer attachDb;\n\t\tattachDb.append(\"ATTACH DATABASE '\");\n\t\tattachDb.append(dbPath + \"' AS fledge;\");\n\n\t\tconst char *sqlStmt = attachDb.coalesce();\n\n\t\t// Exec the statement\n\t\trc = SQLexec(dbHandle, \"database\",\n\t\t\t     sqlStmt,\n\t\t\t     NULL,\n\t\t\t     NULL,\n\t\t\t     &zErrMsg);\n\n\t\t// Check result\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\tconst char* errMsg = \"Failed to attach 'fledge' database in\";\n\t\t\tlogger->error(\"%s '%s': error %s\",\n\t\t\t\t\t\t   errMsg,\n\t\t\t\t\t\t   sqlStmt,\n\t\t\t\t\t\t   zErrMsg);\n\t\t\tconnectErrorTime = time(0);\n\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\tsqlite3_close_v2(dbHandle);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tlogger->info(\"Connected to SQLite3 database: %s\",\n\t\t\t\t\t\t  dbPath.c_str());\n\t\t}\n\t\t//Release sqlStmt buffer\n\t\tdelete[] sqlStmt;\n\n\t\t// Attach readings database - readings_1\n\t\tif (access(dbPathReadings.c_str(), R_OK) != 0)\n\t\t{\n\t\t\tlogger->info(\"No readings database, assuming separate readings plugin is 
available\");\n\t\t\tm_noReadings = true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_noReadings = false;\n\t\t\tSQLBuffer attachReadingsDb;\n\t\t\tattachReadingsDb.append(\"ATTACH DATABASE '\");\n\t\t\tattachReadingsDb.append(dbPathReadings + \"' AS readings_1;\");\n\n\t\t\tconst char *sqlReadingsStmt = attachReadingsDb.coalesce();\n\n\t\t\t// Exec the statement\n\t\t\trc = SQLexec(dbHandle, \"database\", \n\t\t\t\t\t\t sqlReadingsStmt,\n\t\t\t\t\t\t NULL,\n\t\t\t\t\t\t NULL,\n\t\t\t\t\t\t &zErrMsg);\n\n\t\t\t// Check result\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\tconst char* errMsg = \"Failed to attach 'readings' database in\";\n\t\t\t\tlogger->error(\"%s '%s': error %s\",\n\t\t\t\t\t\t\t\t\t\t   errMsg,\n\t\t\t\t\t\t\t\t\t\t   sqlReadingsStmt,\n\t\t\t\t\t\t\t\t\t\t   zErrMsg);\n\t\t\t\tconnectErrorTime = time(0);\n\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t\tsqlite3_close_v2(dbHandle);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tlogger->info(\"Connected to SQLite3 database: %s\",\n\t\t\t\t\t\t\t\t\t\t  dbPathReadings.c_str());\n\t\t\t}\n\t\t\t//Release sqlReadingsStmt buffer\n\t\t\tdelete[] sqlReadingsStmt;\n\n\t\t\t// Enable the WAL for the readings DB\n\t\t\trc = sqlite3_exec(dbHandle, m_manager->getDBConfiguration().c_str(), NULL, NULL, &zErrMsg);\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\tstring errMsg = \"Failed to set WAL from the readings DB - \" + m_manager->getDBConfiguration();\n\t\t\t\tLogger::getLogger()->error(\"%s : error %s\",\n\t\t\t\t\t\t\t\t\t\t   errMsg.c_str(),\n\t\t\t\t\t\t\t\t\t\t   zErrMsg);\n\t\t\t\tconnectErrorTime = time(0);\n\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t}\n\n\t\t\tReadingsCatalogue *catalogue = ReadingsCatalogue::getInstance();\n\t\t\tcatalogue->createReadingsOverflowTable(dbHandle, 1);\n\t\t}\n\n\t}\n\n\tif (!m_noReadings)\n\t{\n\t\t// Attach all the defined/used databases\n\t\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\t\tif ( !readCat->connectionAttachAllDbs(dbHandle) )\n\t\t{\n\t\t\tconst char* errMsg = \"Failed to attach 
all the databases to the connection database\";\n\t\t\tlogger->error(errMsg);\n\n\t\t\tconnectErrorTime = time(0);\n\t\t\tsqlite3_close_v2(dbHandle);\n\t\t\tthrow runtime_error(errMsg);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tlogger->info(\"Attached all %d readings databases to connection\", readCat->getReadingsCount());\n\t\t}\n\t}\n\telse\n\t{\n\t\tlogger->info(\"Connection will not attach to readings tables\");\n\t}\n\n\tm_schemaManager = SchemaManager::getInstance();\n}\n#endif\n\n/**\n * Destructor for the database connection.\n * Close the connection to SQLite3 db\n */\nConnection::~Connection()\n{\n\tsqlite3_close_v2(dbHandle);\n}\n\n/**\n * Enable or disable the tracing of SQL statements\n *\n * @param flag\tDesired state of the SQL trace flag\n */\nvoid Connection::setTrace(bool flag)\n{\n\tm_logSQL = flag;\n}\n\n/**\n * Map a SQLite3 result set to a string version of a JSON document\n *\n * @param res          Sqlite3 result set\n * @param resultSet    Output Json as string\n * @param rowsCount    Optional output parameter for the number of rows\n * @return             SQLite3 result code of sqlite3_step(res)\n *\n */\nint Connection::mapResultSet(void* res, string& resultSet,  unsigned long *rowsCount)\n{\n// Cast to SQLite3 result set\nsqlite3_stmt* pStmt = (sqlite3_stmt *)res;\n// JSON generic document\nDocument doc;\n// SQLite3 return code\nint rc;\n// Number of returned rows, number of columns\nunsigned long nRows = 0, nCols = 0;\n\n\t// Create the JSON document\n\tdoc.SetObject();\n\t// Get document allocator\n\tDocument::AllocatorType& allocator = doc.GetAllocator();\n\t// Create the array for returned rows\n\tValue rows(kArrayType);\n\t// Rows counter, set it to 0 now\n\tValue count;\n\tcount.SetInt(0);\n\n\t// Iterate over all the rows in the resultSet\n\twhile ((rc = SQLstep(pStmt)) == SQLITE_ROW)\n\t{\n\t\t// Get number of columns for current row\n\t\tnCols = sqlite3_column_count(pStmt);\n\t\t// Create the 'row' object\n\t\tValue row(kObjectType);\n\n\t\t// Build the row with all fields\n\t\tfor (int i = 0; i < nCols; 
i++)\n\t\t{\n\t\t\t// JSON document for the current row\n\t\t\tDocument d;\n\t\t\t// Set object name as the column name\n\t\t\tValue name(sqlite3_column_name(pStmt, i), allocator);\n\t\t\t// Get the \"TEXT\" value of the column value\n\t\t\tchar* str = (char *)sqlite3_column_text(pStmt, i);\n\n\t\t\t// Check the column value datatype\n\t\t\tswitch (sqlite3_column_type(pStmt, i))\n\t\t\t{\n\t\t\t\tcase (SQLITE_NULL):\n\t\t\t\t{\n\t\t\t\t\trow.AddMember(name, \"\", allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tcase (SQLITE3_TEXT):\n\t\t\t\t{\n\n\t\t\t\t\t/**\n\t\t\t\t\t * Handle here possible unformatted DATETIME column type\n\t\t\t\t\t */\n\t\t\t\t\tstring newDate;\n\t\t\t\t\tif (applyColumnDateTimeFormat(pStmt, i, newDate))\n\t\t\t\t\t{\n\t\t\t\t\t\t// Use new formatted datetime value\n\t\t\t\t\t\tstr = (char *)newDate.c_str();\n\t\t\t\t\t}\n\n\t\t\t\t\tValue value;\n\t\t\t\t\tif (!d.Parse(str).HasParseError())\n\t\t\t\t\t{\n\t\t\t\t\t\tif (d.IsNumber())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Set string\n\t\t\t\t\t\t\tvalue = Value(str, allocator);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// JSON parsing ok, use the document\n\t\t\t\t\t\t\t// if string value is not \"null\", \"true\", \"false\"\n\t\t\t\t\t\t\tif (strcmp(str, \"null\") != 0 && strcmp(str, \"true\") != 0 && strcmp(str, \"false\") != 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tvalue = Value(d, allocator);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t// Use (char *) value for \"null\", \"true\", \"false\"\n\t\t\t\t\t\t\t\tvalue = Value(str, allocator);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\t// Use (char *) value\n\t\t\t\t\t\tvalue = Value(str, allocator);\n\t\t\t\t\t}\n\t\t\t\t\t// Add name & value to the current row\n\t\t\t\t\trow.AddMember(name, value, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tcase (SQLITE_INTEGER):\n\t\t\t\t{\n\t\t\t\t\tint64_t intVal = atol(str);\n\t\t\t\t\t// Add name & value to the 
current row\n\t\t\t\t\trow.AddMember(name, intVal, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tcase (SQLITE_FLOAT):\n\t\t\t\t{\n\t\t\t\t\tdouble dblVal = atof(str);\n\t\t\t\t\t// Add name & value to the current row\n\t\t\t\t\trow.AddMember(name, dblVal, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tdefault:\n\t\t\t\t{\n\t\t\t\t\t// Default: use (char *) value\n\t\t\t\t\tValue value(str != NULL ? str : \"\", allocator);\n\t\t\t\t\t// Add name & value to the current row\n\t\t\t\t\trow.AddMember(name, value, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// All fields added: increase row counter\n\t\tnRows++;\n\n\t\t// Add the current row to the all rows object\n\t\trows.PushBack(row, allocator);\n\t}\n\n\tif (rowsCount != nullptr)\n\t{\n\t\t*rowsCount = nRows;\n\t}\n\n\t// All rows added: update rows count\n\tcount.SetInt(nRows);\n\n\t// Add 'rows' and 'count' to the final JSON document\n\tdoc.AddMember(\"count\", count, allocator);\n\tdoc.AddMember(\"rows\", rows, allocator);\n\n\t/* Write out the JSON document we created */\n\tStringBuffer buffer;\n\tWriter<StringBuffer> writer(buffer);\n\tdoc.Accept(writer);\n\n\t// Set the result as a CPP string \n\tresultSet = buffer.GetString();\n\n\t// Return SQLite3 ret code\n\treturn rc;\n}\n\n/**\n * This SQLite3 query callback just returns the number of rows seen\n * by a SELECT statement in the 'data' parameter\n *\n * @param data         Output parameter to update with number of rows\n * @param nCols        The number of columns in the row\n * @param colValues    The column values\n * @param colNames     The column names\n * @return             0 on success, 1 otherwise\n */\nint selectCallback(void *data,\n\t\t  int nCols,\n\t\t  char **colValues,\n\t\t  char **colNames)\n{\n\nint *nRows = (int *)data;\n\t// Increment the number of rows seen\n\t(*nRows)++;\n\n\t// Set OK\n\treturn 0;\n}\n\n/**\n * This SQLite3 query count callback just returns the number of rows\n * as per 'count(*)' column\n * 
by a SELECT statement in the 'data' parameter\n *\n * @param data         Output parameter to update with number of rows\n * @param nCols        The number of columns in the row\n * @param colValues    The column values\n * @param colNames     The column names\n * @return             0 on success, 1 otherwise\n */\nint countCallback(void *data,\n\t\t int nCols,\n\t\t char **colValues,\n\t\t char **colNames)\n{\nint *nRows = (int *)data;\n\n\t// Return the value of the first column: the count(*)\n\t*nRows = atoi(colValues[0]);\n\n\t// Set OK\n\treturn 0;\n}\n\n/**\n * This SQLite3 query rowid callback just returns the rowid\n * by a SELECT statement in the 'data' parameter\n *\n * @param data         Output parameter to update with rowid\n * @param nCols        The number of columns in the row\n * @param colValues    The column values\n * @param colNames     The column names\n * @return             0 on success, 1 otherwise\n */\nint rowidCallback(void *data,\n\t\t\t int nCols,\n\t\t\t char **colValues,\n\t\t\t char **colNames)\n{\nunsigned long *rowid = (unsigned long *)data;\n\n\t// Return the value of the first column: the rowid\n\tif (colValues[0])\n\t\t*rowid = strtoul(colValues[0], NULL, 10);\n\telse\n\t\t*rowid = 0;\n\n\t// Set OK\n\treturn 0;\n}\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Perform a query against a common table\n *\n */\nbool Connection::retrieve(const string& schema,\n\t\t\t  const string& table,\n\t\t\t  const string& condition,\n\t\t\t  string& resultSet)\n{\n// Default template parameter uses UTF8 and MemoryPoolAllocator.\nDocument\tdocument;\nSQLBuffer\tsql;\n// Extra constraints to add to where clause\nSQLBuffer\tjsonConstraints;\nbool\t\tisOptAggregate = false;\nvector<string>  asset_codes;\n\n\tif (!m_schemaManager->exists(dbHandle, schema))\n\t{\n\t\traiseError(\"retrieve\", \"Schema %s does not exist, unable to retrieve from table %s\", schema.c_str(), table.c_str());\n\t\treturn false;\n\t}\n\n\ttry {\n\t\tif (dbHandle == 
NULL)\n\t\t{\n\t\t\traiseError(\"retrieve\", \"No SQLite3 db connection available\");\n\t\t\treturn false;\n\t\t}\n\n\t\tif (condition.empty())\n\t\t{\n\t\t\tsql.append(\"SELECT * FROM \");\n\t\t\tsql.append(schema);\n\t\t\tsql.append('.');\n\t\t\tsql.append(table);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (document.Parse(condition.c_str()).HasParseError())\n\t\t\t{\n\t\t\t\traiseError(\"retrieve\", \"Failed to parse JSON payload\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif (document.HasMember(\"aggregate\"))\n\t\t\t{\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tif (!jsonAggregates(document, document[\"aggregate\"], sql, jsonConstraints, isOptAggregate, false))\n\t\t\t\t{\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tsql.append(\" FROM \");\n\t\t\t\tsql.append(schema);\n\t\t\t\tsql.append('.');\n\t\t\t}\n\t\t\telse if (document.HasMember(\"join\"))\n\t\t\t{\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tselectColumns(document, sql, 0);\n\t\t\t}\n\t\t\telse if (document.HasMember(\"return\"))\n\t\t\t{\n\t\t\t\tint col = 0;\n\t\t\t\tValue& columns = document[\"return\"];\n\t\t\t\tif (! columns.IsArray())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", \"The property return must be an array\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tfor (Value::ConstValueIterator itr = columns.Begin(); itr != columns.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col)\n\t\t\t\t\t\tsql.append(\", \");\n\t\t\t\t\tif (!itr->IsObject())\t// Simple column name\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(itr->GetString());\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tif (itr->HasMember(\"column\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (! 
(*itr)[\"column\"].IsString())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\traiseError(\"rerieve\",\n\t\t\t\t\t\t\t\t\t   \"column must be a string\");\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (itr->HasMember(\"format\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (! (*itr)[\"format\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"rerieve\",\n\t\t\t\t\t\t\t\t\t\t   \"format must be a string\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t// SQLite 3 date format.\n\t\t\t\t\t\t\t\tstring new_format;\n\t\t\t\t\t\t\t\tapplyColumnDateFormat((*itr)[\"format\"].GetString(),\n\t\t\t\t\t\t\t\t\t\t      (*itr)[\"column\"].GetString(),\n\t\t\t\t\t\t\t\t\t\t      new_format, true);\n\n\t\t\t\t\t\t\t\t// Add the formatted column or use it as is\n\t\t\t\t\t\t\t\tsql.append(new_format);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (itr->HasMember(\"timezone\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (! (*itr)[\"timezone\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"rerieve\",\n\t\t\t\t\t\t\t\t\t\t   \"timezone must be a string\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t// SQLite3 doesnt support time zone formatting\n\t\t\t\t\t\t\t\tif (strcasecmp((*itr)[\"timezone\"].GetString(), \"utc\") != 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\t\t   \"SQLite3 plugin does not support timezones in qeueries\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_MS \"', \");\n\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\tsql.append(\", 'utc')\");\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tsql.append(' ');\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (itr->HasMember(\"json\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tconst Value& 
json = (*itr)[\"json\"];\n\t\t\t\t\t\t\tif (! returnJson(json, sql, jsonConstraints))\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t   \"return object must have either a column or json property\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (itr->HasMember(\"alias\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\t\t\t\tsql.append((*itr)[\"alias\"].GetString());\n\t\t\t\t\t\t\tsql.append('\"');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t\tsql.append(\" FROM \");\n\t\t\t\tsql.append(schema);\n\t\t\t\tsql.append('.');\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tsql.append(\" * FROM \");\n\t\t\t\tsql.append(schema);\n\t\t\t\tsql.append('.');\n\t\t\t}\n\t\t\tif (document.HasMember(\"join\"))\n\t\t\t{\n\t\t\t\tsql.append(\" FROM \");\n\t\t\t\tsql.append(schema);\n\t\t\t\tsql.append('.');\n\t\t\t\tsql.append(table);\n\t\t\t\tsql.append(\" t0\");\n\t\t\t\tappendTables(schema, document, sql, 1);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(table);\n\t\t\t}\n\t\t\tif (document.HasMember(\"where\"))\n\t\t\t{\n\t\t\t\tsql.append(\" WHERE \");\n\t\t\t \n\t\t\t\tif (document.HasMember(\"join\"))\n\t\t\t\t{\n\t\t\t\t\tif (!jsonWhereClause(document[\"where\"], sql, asset_codes, false, \"t0.\"))\n\t\t\t\t\t{\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\n\t\t\t\t\t// Now and the join condition itself\n\t\t\t\t\tstring col0, col1;\n\t\t\t\t\tconst Value& join = document[\"join\"];\n\t\t\t\t\tif (join.HasMember(\"on\") && join[\"on\"].IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tcol0 = join[\"on\"].GetString();\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\n\t\t\t\t\t\traiseError(\"rerieve\", \"Missing on item\");\n\t\t\t\t\t\treturn 
false;\n\t\t\t\t\t}\n\t\t\t\t\tif (join.HasMember("table"))\n\t\t\t\t\t{\n\t\t\t\t\t\t// Avoid shadowing the 'table' parameter\n\t\t\t\t\t\tconst Value& joinTable = join["table"];\n\t\t\t\t\t\tif (joinTable.HasMember("column") && joinTable["column"].IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tcol1 = joinTable["column"].GetString();\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError("retrieve", "Missing column in join table");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(" AND t0.");\n\t\t\t\t\tsql.append(col0);\n\t\t\t\t\tsql.append(" = t1.");\n\t\t\t\t\tsql.append(col1);\n\t\t\t\t\tsql.append(" ");\n\t\t\t\t\tif (join.HasMember("query") && join["query"].IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append("AND ");\n\t\t\t\t\t\tconst Value& query = join["query"];\n\t\t\t\t\t\tprocessJoinQueryWhereClause(query, sql, asset_codes, 1);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse if (document.HasMember("where"))\n\t\t\t\t{\n\t\t\t\t\tif (!jsonWhereClause(document["where"], sql, asset_codes, false))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError("retrieve", "Failed to add where clause");\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\traiseError("retrieve",\n\t\t\t\t\t\t   "JSON does not contain where clause");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (! 
jsonConstraints.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tsql.append(" AND ");\n\t\t\t\t\tconst char *jsonBuf = jsonConstraints.coalesce();\n\t\t\t\t\tsql.append(jsonBuf);\n\t\t\t\t\tdelete[] jsonBuf;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (!jsonModifiers(document, sql, false))\n\t\t\t{\n\t\t\t\traiseError("query", "Modifiers failed");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\tsql.append(';');\n\n\t\tconst char *query = sql.coalesce();\n\t\tchar *zErrMsg = NULL;\n\t\tint rc;\n\t\tsqlite3_stmt *stmt = NULL;\n\n\t\tlogSQL("CommonRetrieve", query);\n\n\t\t// Prepare the SQL statement and get the result set\n\t\trc = sqlite3_prepare_v2(dbHandle, query, -1, &stmt, NULL);\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError("retrieve", sqlite3_errmsg(dbHandle));\n\t\t\tLogger::getLogger()->error("SQL statement: %s", query);\n\t\t\tdelete[] query;\n\t\t\tif (stmt)\n\t\t\t{\n\t\t\t\tsqlite3_finalize(stmt);\n\t\t\t}\n\t\t\treturn false;\n\t\t}\n\n\t\t// Call result set mapping\n\t\trc = mapResultSet(stmt, resultSet);\n\n\t\t// Delete result set\n\t\tsqlite3_finalize(stmt);\n\n\t\t// Check result set mapping errors\n\t\tif (rc != SQLITE_DONE)\n\t\t{\n\t\t\traiseError("retrieve", sqlite3_errmsg(dbHandle));\n\t\t\tLogger::getLogger()->error("SQL statement: %s", query);\n\t\t\tdelete[] query;\n\t\t\t// Failure\n\t\t\treturn false;\n\t\t}\n\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\t\t// Success\n\t\treturn true;\n\t} catch (exception& e) {\n\t\traiseError("retrieve", "Internal error: %s", e.what());\n\t}\n\treturn false;\n}\n#endif\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Insert data into a table\n */\nint Connection::insert(const string& schema, const string& table, const string& data)\n{\nDocument\tdocument;\nostringstream convert;\nsqlite3_stmt *stmt = NULL;\nint rc;\nstd::size_t arr = data.find("inserts");\n\n\tif (!m_schemaManager->exists(dbHandle, 
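Connection::insert accepts either a single JSON object or an already wrapped `{ "inserts" : [ ... ] }` document; a payload whose `inserts` property does not appear within the first few characters is wrapped into a one-element array. A standalone sketch of that normalisation using only the standard library (the `wrapInserts` helper name is ours):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Mirror of the payload normalisation in Connection::insert: if the
// payload does not already start with an "inserts" array (the property
// must appear within the first 8 characters), wrap the single object
// into { "inserts" : [ ... ] }. The helper name is ours.
static std::string wrapInserts(const std::string& data)
{
	std::size_t arr = data.find("inserts");
	bool stdInsert = (arr == std::string::npos || arr > 8);
	if (!stdInsert)
		return data;	// already an inserts array
	std::ostringstream convert;
	convert << "{ \"inserts\" : [ " << data << " ] }";
	return convert.str();
}
```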
schema))\n\t{\n\t\traiseError("insert", "Schema %s does not exist, unable to insert into table %s", schema.c_str(), table.c_str());\n\t\treturn -1;\n\t}\n\n\t// Check first the 'inserts' property in JSON data\n\tbool stdInsert = (arr == std::string::npos || arr > 8);\n\n\t// If input data is not an array of inserts\n\t// create an array with one element\n\tif (stdInsert)\n\t{\n\t\tconvert << "{ \\"inserts\\" : [ ";\n\t\tconvert << data;\n\t\tconvert << " ] }";\n\t}\n\n\tif (document.Parse(stdInsert ? convert.str().c_str() : data.c_str()).HasParseError())\n\t{\n\t\traiseError("insert", "Failed to parse JSON payload\\n");\n\t\treturn -1;\n\t}\n\t// Get the array with row(s)\n\tValue &inserts = document["inserts"];\n\tif (!inserts.IsArray())\n\t{\n\t\traiseError("insert", "Payload is missing the inserts array");\n\t\treturn -1;\n\t}\n\n\t// Number of inserts\n\tint ins = 0;\n\tint failedInsertCount = 0;\n\n\t// Generate sql query for prepared statement\n\tfor (Value::ConstValueIterator iter = inserts.Begin();\n\t\t\t\t\titer != inserts.End();\n\t\t\t\t\t++iter)\n\t{\n\t\tif (!iter->IsObject())\n\t\t{\n\t\t\traiseError("insert",\n\t\t\t\t   "Each entry in the insert array must be an object");\n\t\t\treturn -1;\n\t\t}\n\n\t\t{\n\t\t\tint col = 0;\n\t\t\tSQLBuffer sql;\n\t\t\tSQLBuffer values;\n\t\t\tsql.append("INSERT INTO " + schema + "." + table + " (");\n\n\t\t\tfor (Value::ConstMemberIterator itr = (*iter).MemberBegin();\n\t\t\t\t\t\t\titr != (*iter).MemberEnd();\n\t\t\t\t\t\t\t++itr)\n\t\t\t{\n\t\t\t\t// Append column name\n\t\t\t\tif (col)\n\t\t\t\t{\n\t\t\t\t\tsql.append(", ");\n\t\t\t\t}\n\t\t\t\tsql.append(itr->name.GetString());\n\t\t\t\tcol++;\n\t\t\t}\n\n\t\t\tsql.append(") VALUES (");\n\t\t\tfor (auto i = 0; i < col; i++)\n\t\t\t{\n\t\t\t\tif (i)\n\t\t\t\t{\n\t\t\t\t\tsql.append(",");\n\t\t\t\t}\n\t\t\t\tsql.append("?");\n\t\t\t}\n\t\t\tsql.append(");");\n\n\t\t\tconst char *query = 
sql.coalesce();\n\n\t\t\trc = sqlite3_prepare_v2(dbHandle, query, -1, &stmt, NULL);\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\tif (stmt)\n\t\t\t\t{\n\t\t\t\t\tsqlite3_finalize(stmt);\n\t\t\t\t}\n\t\t\t\traiseError("insert", sqlite3_errmsg(dbHandle));\n\t\t\t\tLogger::getLogger()->error("SQL statement: %s", query);\n\t\t\t\tdelete[] query;\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tdelete[] query;\n\n\t\t\t// Bind columns with prepared sql query\n\t\t\tint columnID = 1;\n\t\t\tfor (Value::ConstMemberIterator itr = (*iter).MemberBegin();\n\t\t\t\t\t\t\titr != (*iter).MemberEnd();\n\t\t\t\t\t\t\t++itr)\n\t\t\t{\n\t\t\t\tif (itr->value.IsString())\n\t\t\t\t{\n\t\t\t\t\tconst char *str = itr->value.GetString();\n\t\t\t\t\tif (strcmp(str, "now()") == 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tsqlite3_bind_text(stmt, columnID, SQLITE3_NOW, -1, SQLITE_TRANSIENT);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tsqlite3_bind_text(stmt, columnID, str, -1, SQLITE_TRANSIENT);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse if (itr->value.IsDouble())\n\t\t\t\t{\n\t\t\t\t\tsqlite3_bind_double(stmt, columnID, itr->value.GetDouble());\n\t\t\t\t}\n\t\t\t\telse if (itr->value.IsInt64())\n\t\t\t\t{\n\t\t\t\t\t// Use the 64 bit binding to avoid truncating 64 bit integers\n\t\t\t\t\tsqlite3_bind_int64(stmt, columnID, itr->value.GetInt64());\n\t\t\t\t}\n\t\t\t\telse if (itr->value.IsInt())\n\t\t\t\t{\n\t\t\t\t\tsqlite3_bind_int(stmt, columnID, itr->value.GetInt());\n\t\t\t\t}\n\t\t\t\telse if (itr->value.IsObject())\n\t\t\t\t{\n\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\titr->value.Accept(writer);\n\t\t\t\t\tsqlite3_bind_text(stmt, columnID, buffer.GetString(), -1, SQLITE_TRANSIENT);\n\t\t\t\t}\n\t\t\t\tcolumnID++;\n\t\t\t}\n\n\t\t\tif (sqlite3_exec(dbHandle, "BEGIN TRANSACTION", NULL, NULL, NULL) != SQLITE_OK)\n\t\t\t{\n\t\t\t\tif 
(stmt)\n\t\t\t\t{\n\t\t\t\t\tsqlite3_clear_bindings(stmt);\n\t\t\t\t\tsqlite3_reset(stmt);\n\t\t\t\t\tsqlite3_finalize(stmt);\n\t\t\t\t}\n\t\t\t\traiseError("insert", sqlite3_errmsg(dbHandle));\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\tm_writeAccessOngoing.fetch_add(1);\n\n\t\t\tint sqlite3_result = SQLstep(stmt);\n\n\t\t\tm_writeAccessOngoing.fetch_sub(1);\n\n\t\t\tif (sqlite3_result != SQLITE_DONE)\n\t\t\t{\n\t\t\t\tfailedInsertCount++;\n\t\t\t\traiseError("insert", sqlite3_errmsg(dbHandle));\n\t\t\t\tLogger::getLogger()->error("SQL statement: %s", sqlite3_expanded_sql(stmt));\n\n\t\t\t\t// transaction is still open, do rollback\n\t\t\t\tif (sqlite3_get_autocommit(dbHandle) == 0)\n\t\t\t\t{\n\t\t\t\t\trc = sqlite3_exec(dbHandle, "ROLLBACK TRANSACTION;", NULL, NULL, NULL);\n\t\t\t\t\tif (rc != SQLITE_OK)\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError("insert rollback", sqlite3_errmsg(dbHandle));\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tsqlite3_clear_bindings(stmt);\n\t\t\tsqlite3_reset(stmt);\n\n\t\t\tif (sqlite3_result == SQLITE_DONE && sqlite3_exec(dbHandle, "COMMIT TRANSACTION", NULL, NULL, NULL) != SQLITE_OK)\n\t\t\t{\n\t\t\t\tif (stmt)\n\t\t\t\t{\n\t\t\t\t\tsqlite3_finalize(stmt);\n\t\t\t\t}\n\t\t\t\traiseError("insert", sqlite3_errmsg(dbHandle));\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tsqlite3_finalize(stmt);\n\t\t}\n\t\t// Increment row count\n\t\tins++;\n\t}\n\n\tif (m_writeAccessOngoing == 0)\n\t\tdb_cv.notify_all();\n\n\tif (failedInsertCount)\n\t{\n\t\tchar buf[100];\n\t\tsnprintf(buf, sizeof(buf),\n\t\t\t\t"Not all inserts into table '%s.%s' within transaction succeeded",\n\t\t\t\tschema.c_str(), table.c_str());\n\t\traiseError("insert", buf);\n\t}\n\n\treturn (!failedInsertCount ? 
ins : -1);\n}\n#endif\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Perform an update against a common table\n * This routine uses SQLite 3 JSON1 extension:\n *\n *    json_set(field, '$.key.value', the_value)\n *\n */\nint Connection::update(const string& schema, const string& table, const string& payload)\n{\n// Default template parameter uses UTF8 and MemoryPoolAllocator.\nDocument\tdocument;\nSQLBuffer\tsql;\nvector<string>  asset_codes;\nbool\t\tallowZero = false;\n\n\tint \trow = 0;\n\tostringstream convert;\n\n\tostringstream threadId;\n\tthreadId << std::this_thread::get_id();\n\n\tif (!m_schemaManager->exists(dbHandle, schema))\n\t{\n\t\traiseError("update", "Schema %s does not exist, unable to update table %s", schema.c_str(), table.c_str());\n\t\treturn -1;\n\t}\n\n\tstd::size_t arr = payload.find("updates");\n\tbool changeReqd = (arr == std::string::npos || arr > 8);\n\tif (changeReqd)\n\t{\n\t\tconvert << "{ \\"updates\\" : [ ";\n\t\tconvert << payload;\n\t\tconvert << " ] }";\n\t}\n\n\tif (document.Parse(changeReqd ? convert.str().c_str() : payload.c_str()).HasParseError())\n\t{\n\t\traiseError("update", "Failed to parse JSON payload");\n\t\treturn -1;\n\t}\n\telse\n\t{\n\t\tValue &updates = document["updates"];\n\t\tif (!updates.IsArray())\n\t\t{\n\t\t\traiseError("update", "Payload is missing the updates array");\n\t\t\treturn -1;\n\t\t}\n\n\t\tsql.append("BEGIN TRANSACTION;");\n\t\tint i = 0;\n\t\tfor (Value::ConstValueIterator iter = updates.Begin(); iter != updates.End(); ++iter, ++i)\n\t\t{\n\t\t\tif (!iter->IsObject())\n\t\t\t{\n\t\t\t\traiseError("update",\n\t\t\t\t\t   "Each entry in the update array must be an object");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tsql.append("UPDATE ");\n\t\t\tsql.append(schema);\n\t\t\tsql.append('.');\n\t\t\tsql.append(table);\n\t\t\tsql.append(" SET ");\n\n\t\t\tint\t\tcol = 0;\n\t\t\tif ((*iter).HasMember("values"))\n\t\t\t{\n\t\t\t\tconst Value& values = 
(*iter)[\"values\"];\n\t\t\t\tfor (Value::ConstMemberIterator itr = values.MemberBegin();\n\t\t\t\t\t\titr != values.MemberEnd(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col != 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append( \", \");\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(itr->name.GetString());\n\t\t\t\t\tsql.append(\" = \");\n\t\t \n\t\t\t\t\tif (itr->value.IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tconst char *str = itr->value.GetString();\n\t\t\t\t\t\tif (strcmp(str, \"now()\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(SQLITE3_NOW);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t\tsql.append(escape(str));\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (itr->value.IsDouble())\n\t\t\t\t\t\tsql.append(itr->value.GetDouble());\n\t\t\t\t\telse if (itr->value.IsUint64())\n\t\t\t\t\t\tsql.append((unsigned long)itr->value.GetUint64());\n\t\t\t\t\telse if (itr->value.IsInt64())\n\t\t\t\t\t\tsql.append((long)itr->value.GetInt64());\n\t\t\t\t\telse if (itr->value.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\titr->value.Accept(writer);\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\tsql.append(escape(buffer.GetString()));\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t}\n\t\t\t\t\t// Handle JSON value null: \"item\" : null\n\t\t\t\t\telse if (itr->value.IsNull())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\"NULL\");\n\t\t\t\t\t}\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif ((*iter).HasMember(\"expressions\"))\n\t\t\t{\n\t\t\t\tconst Value& exprs = (*iter)[\"expressions\"];\n\t\t\t\tif (!exprs.IsArray())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"update\", \"The property exressions must be an array\");\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\tfor (Value::ConstValueIterator itr = exprs.Begin(); itr != exprs.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col != 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append( \", \");\n\t\t\t\t\t}\n\t\t\t\t\tif 
(!itr->IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError("update",\n\t\t\t\t\t\t\t   "expressions must be an array of objects");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember("column"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError("update",\n\t\t\t\t\t\t\t   "Missing column property in expressions array item");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember("operator"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError("update",\n\t\t\t\t\t\t\t   "Missing operator property in expressions array item");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember("value"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError("update",\n\t\t\t\t\t\t\t   "Missing value property in expressions array item");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tsql.append((*itr)["column"].GetString());\n\t\t\t\t\tsql.append(" = ");\n\t\t\t\t\tsql.append((*itr)["column"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t\tsql.append((*itr)["operator"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t\tconst Value& value = (*itr)["value"];\n\n\t\t\t\t\tif (value.IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tconst char *str = value.GetString();\n\t\t\t\t\t\tif (strcmp(str, "now()") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(SQLITE3_NOW);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t\tsql.append(escape(str));\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsDouble())\n\t\t\t\t\t\tsql.append(value.GetDouble());\n\t\t\t\t\telse if (value.IsInt64())\n\t\t\t\t\t\tsql.append((long)value.GetInt64());\n\t\t\t\t\telse if (value.IsInt())\n\t\t\t\t\t\tsql.append(value.GetInt());\n\t\t\t\t\telse if (value.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\tWriter<StringBuffer> 
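Each item of the `expressions` array contributes a `column = column <operator> <value>` fragment to the SET clause being built above. A string-only sketch of the fragment produced for one item (the `exprFragment` helper name is ours):

```cpp
#include <cassert>
#include <string>

// Sketch of the SET-clause fragment produced for one "expressions"
// item ({"column": c, "operator": op, "value": v}); name is ours.
static std::string exprFragment(const std::string& column,
				const std::string& op,
				const std::string& value)
{
	// column = column <op> <value>
	return column + " = " + column + " " + op + " " + value;
}
```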
writer(buffer);\n\t\t\t\t\t\tvalue.Accept(writer);\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\tsql.append(escape(buffer.GetString()));\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t}\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif ((*iter).HasMember("json_properties"))\n\t\t\t{\n\t\t\t\tconst Value& exprs = (*iter)["json_properties"];\n\t\t\t\tif (!exprs.IsArray())\n\t\t\t\t{\n\t\t\t\t\traiseError("update",\n\t\t\t\t\t\t   "The property json_properties must be an array");\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\tfor (Value::ConstValueIterator itr = exprs.Begin(); itr != exprs.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col != 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(", ");\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError("update",\n\t\t\t\t\t\t\t   "json_properties must be an array of objects");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember("column"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError("update",\n\t\t\t\t\t\t\t   "Missing column property in json_properties array item");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember("path"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError("update",\n\t\t\t\t\t\t\t   "Missing path property in json_properties array item");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember("value"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError("update",\n\t\t\t\t\t\t\t   "Missing value property in json_properties array item");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tsql.append((*itr)["column"].GetString());\n\n\t\t\t\t\t// SQLite 3 JSON1 extension: json_set\n\t\t\t\t\t// json_set(field, '$.key.value', the_value)\n\t\t\t\t\tsql.append(" = json_set(");\n\t\t\t\t\tsql.append((*itr)["column"].GetString());\n\t\t\t\t\tsql.append(", '$.");\n\n\t\t\t\t\tconst Value& path = (*itr)["path"];\n\t\t\t\t\tif (!path.IsArray())\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError("update",\n\t\t\t\t\t\t\t   "The property path must be an array");\n\t\t\t\t\t\treturn 
-1;\n\t\t\t\t\t}\n\t\t\t\t\tint pathElement = 0;\n\t\t\t\t\tfor (Value::ConstValueIterator itr2 = path.Begin();\n\t\t\t\t\t\titr2 != path.End(); ++itr2)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (pathElement > 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append('.');\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (itr2->IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(itr2->GetString());\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t\t   \"The elements of path must all be strings\");\n\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tpathElement++;\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(\"', \");\n\t\t\t\t\tconst Value& value = (*itr)[\"value\"];\n\t\t \n\t\t\t\t\tif (value.IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tconst char *str = value.GetString();\n\t\t\t\t\t\tif (strcmp(str, \"now()\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(SQLITE3_NOW);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t\tsql.append(escape(str));\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsDouble())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(value.GetDouble());\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsInt64())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append((long)value.GetInt64());\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsInt())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(value.GetInt());\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\tvalue.Accept(writer);\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\tsql.append(escape(buffer.GetString()));\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(\")\");\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (iter->HasMember(\"modifier\") && (*iter)[\"modifier\"].IsArray())\n\t\t\t{\n\t\t\t\tconst Value& modifier = (*iter)[\"modifier\"];\n\t\t\t\tfor (Value::ConstValueIterator modifiers = modifier.Begin(); modifiers != modifier.End(); 
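Each `json_properties` item becomes a `json_set(column, '$.key1.key2', value)` assignment, with the path elements joined by dots as in the loop above. A standalone string-only sketch of that construction (the `jsonSetFragment` helper name is ours):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of the json_set() call built for one "json_properties" item:
// json_set(column, '$.key1.key2', value). Helper name is ours.
static std::string jsonSetFragment(const std::string& column,
				   const std::vector<std::string>& path,
				   const std::string& value)
{
	std::string sql = column + " = json_set(" + column + ", '$.";
	for (std::size_t i = 0; i < path.size(); i++)
	{
		if (i > 0)
			sql += ".";	// dot-separated path elements
		sql += path[i];
	}
	sql += "', " + value + ")";
	return sql;
}
```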
++modifiers)\n\t\t\t\t{\n\t\t\t\t\tif (modifiers->IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tstring mod = modifiers->GetString();\n\t\t\t\t\t\tif (mod.compare("allowzero") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tallowZero = true;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (col == 0)\n\t\t\t{\n\t\t\t\traiseError("update",\n\t\t\t\t\t   "Missing values or expressions object in payload");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tif ((*iter).HasMember("condition"))\n\t\t\t{\n\t\t\t\tsql.append(" WHERE ");\n\t\t\t\tif (!jsonWhereClause((*iter)["condition"], sql, asset_codes))\n\t\t\t\t{\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse if ((*iter).HasMember("where"))\n\t\t\t{\n\t\t\t\tsql.append(" WHERE ");\n\t\t\t\tif (!jsonWhereClause((*iter)["where"], sql, asset_codes))\n\t\t\t\t{\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\t\t\tsql.append(';');\n\t\t\trow++;\n\t\t}\n\t}\n\tsql.append("COMMIT TRANSACTION;");\n\n\tconst char *query = sql.coalesce();\n\tlogSQL("CommonUpdate", query);\n\tchar *zErrMsg = NULL;\n\tint rc;\n\n\t// Exec the UPDATE statement: no callback, no result set\n\tm_writeAccessOngoing.fetch_add(1);\n\trc = SQLexec(dbHandle, table,\n\t\t     query,\n\t\t     NULL,\n\t\t     NULL,\n\t\t     &zErrMsg);\n\tm_writeAccessOngoing.fetch_sub(1);\n\tif (m_writeAccessOngoing == 0)\n\t\tdb_cv.notify_all();\n\n\t// Check result code\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError("update", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\tif (sqlite3_get_autocommit(dbHandle) == 0) // transaction is still open, do rollback\n\t\t{\n\t\t\trc = SQLexec(dbHandle, table,\n\t\t\t\t"ROLLBACK TRANSACTION;",\n\t\t\t\tNULL,\n\t\t\t\tNULL,\n\t\t\t\t&zErrMsg);\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\traiseError("rollback", zErrMsg);\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t}\n\t\t}\n\t\tLogger::getLogger()->error("SQL statement: %s", query);\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\t\treturn 
-1;\n\t}\n\telse\n\t{\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\n\t\tint update = sqlite3_changes(dbHandle);\n\n\t\tint return_value=0;\n\n\t\tif (update == 0 && allowZero == false)\n\t\t{\n\t\t\tchar buf[100];\n\t\t\tsnprintf(buf, sizeof(buf),\n\t\t\t\t\t\"Not all updates of table '%s.%s' within transaction succeeded\",\n\t\t\t\t\tschema.c_str(), table.c_str());\n\t\t\traiseError(\"update\", buf);\n\t\t\treturn_value = -1;\n\t\t}\n\t\telse\n\t\t{\n\t\t\treturn_value = (row == 1 ? update : row);\n\t\t}\n\n\t\t// Returns the number of rows affected, cases :\n\t\t//\n\t\t// 1) update == 0, no update,                                    returns -1\n\t\t// 2) single command SQL that could affects multiple rows,       returns 'update'\n\t\t// 3) multiple SQL commands packed and executed in one SQLExec,  returns 'row'\n\t\treturn (return_value);\n\t}\n\n\t// Return failure\n\treturn -1;\n}\n#endif\n\n/**\n * Format a date to a fixed format with milliseconds, microseconds and\n * timezone expressed, examples :\n *\n *   case - formatted |2019-01-01 10:01:01.000000+00:00| date |2019-01-01 10:01:01|\n *   case - formatted |2019-02-01 10:02:01.000000+00:00| date |2019-02-01 10:02:01.0|\n *   case - formatted |2019-02-02 10:02:02.841000+00:00| date |2019-02-02 10:02:02.841|\n *   case - formatted |2019-02-03 10:02:03.123456+00:00| date |2019-02-03 10:02:03.123456|\n *   case - formatted |2019-03-01 10:03:01.100000+00:00| date |2019-03-01 10:03:01.1+00:00|\n *   case - formatted |2019-03-02 10:03:02.123000+00:00| date |2019-03-02 10:03:02.123+00:00|\n *   case - formatted |2019-03-03 10:03:03.123456+00:00| date |2019-03-03 10:03:03.123456+00:00|\n *   case - formatted |2019-03-04 10:03:04.123456+01:00| date |2019-03-04 10:03:04.123456+01:00|\n *   case - formatted |2019-03-05 10:03:05.123456-01:00| date |2019-03-05 10:03:05.123456-01:00|\n *   case - formatted |2019-03-04 10:03:04.123456+02:30| date |2019-03-04 10:03:04.123456+02:30|\n *   case - formatted 
|2019-03-05 10:03:05.123456-02:30| date |2019-03-05 10:03:05.123456-02:30|\n *\n * @param formatted_date\tOutput buffer for the formatted date\n * @param buffer_size\t\tSize of the output buffer\n * @param date\t\t\tThe input date string\n * @return\t\t\tfalse if the date is invalid\n *\n */\nbool Connection::formatDate(char *formatted_date, size_t buffer_size, const char *date)\n{\n\n\tstruct timeval tv = {0};\n\tstruct tm tm  = {0};\n\tchar *valid_date = nullptr;\n\n\tenum codeOptimization{CO_NONE, CO_01, CO_02, CO_03};\n\tcodeOptimization opt;\n\tint len;\n\n\t// Code optimization for the cases:\n\t//\n\t// 2019-03-03 10:03:03.123456+00:00\n\t// 2019-02-02 10:02:02.841\n\t// 2019-01-01 10:01:01\n\n\tlen = strlen(date);\n\tif (len == 32)\n\t{\n\t\tif ( date[19] == '.' &&\n\t\t\t (date[26] == '-' || date[26] == '+') &&\n\t\t\t date[29] == ':' )\n\t\t{\n\t\t\t// Case - 2019-03-03 10:03:03.123456+00:00\n\t\t\tstrcpy(formatted_date, date);\n\t\t\topt = CO_01;\n\t\t}\n\t\telse\n\t\t\topt = CO_NONE;\n\t}\n\telse if (len == 23)\n\t{\n\t\tif ( date[19] == '.')\n\t\t{\n\t\t\t// Case - 2019-02-02 10:02:02.841\n\t\t\tstrcpy(formatted_date, date);\n\t\t\tstrcat(formatted_date, "000+00:00");\n\t\t\topt = CO_02;\n\t\t}\n\t\telse\n\t\t\topt = CO_NONE;\n\t}\n\telse if (len == 19)\n\t{\n\t\t// Case - 2019-01-01 10:01:01\n\t\tstrcpy(formatted_date, date);\n\t\tstrcat(formatted_date, ".000000+00:00");\n\t\topt = CO_03;\n\t}\n\telse\n\t{\n\t\topt = CO_NONE;\n\t}\n\n\tif (opt != CO_NONE)\n\t{\n\t\treturn (true);\n\t}\n\n\t// Extract up to seconds\n\tmemset(&tm, 0, sizeof(tm));\n\tvalid_date = strptime(date, F_DATEH24_SEC, &tm);\n\n\tif (! 
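The three fast-path cases above can be summarised as: a 32-character date is already fully qualified, a 23-character date carries milliseconds only, and a 19-character date carries seconds only. A simplified standalone sketch (it omits the separator-position checks the real code performs; the `fastPathDate` helper name is ours):

```cpp
#include <cassert>
#include <string>

// Sketch of the length-based fast paths above: dates of length 32, 23
// and 19 are completed to the fixed
// "YYYY-MM-DD HH:MM:SS.ssssss+HH:MM" layout. Helper name is ours.
static std::string fastPathDate(const std::string& date)
{
	if (date.size() == 32)
		return date;			// already fully qualified
	if (date.size() == 23 && date[19] == '.')
		return date + "000+00:00";	// milliseconds only
	if (date.size() == 19)
		return date + ".000000+00:00";	// seconds only
	return std::string();			// needs the slow path
}
```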
valid_date)\n\t{\n\t\treturn (false);\n\t}\n\n\tstrftime (formatted_date, buffer_size, F_DATEH24_SEC, &tm);\n\n\t// Work out the microseconds from the fractional part of the seconds\n\tchar fractional[10] = {0};\n\tsscanf(date, \"%*d-%*d-%*d %*d:%*d:%*d.%[0-9]*\", fractional);\n\t// Truncate to max 6 digits\n\tfractional[6] = 0;\n\tint multiplier = 6 - (int)strlen(fractional);\n\tif (multiplier < 0)\n\t\tmultiplier = 0;\n\twhile (multiplier--)\n\t\tstrcat(fractional, \"0\");\n\n\tstrcat(formatted_date ,\".\");\n\tstrcat(formatted_date ,fractional);\n\n\t// Handles timezone\n\tchar timezone_hour[5] = {0};\n\tchar timezone_min[5] = {0};\n\tchar sign[2] = {0};\n\n\tsscanf(date, \"%*d-%*d-%*d %*d:%*d:%*d.%*d-%2[0-9]:%2[0-9]\", timezone_hour, timezone_min);\n\tif (timezone_hour[0] != 0)\n\t{\n\t\tstrcat(sign, \"-\");\n\t}\n\telse\n\t{\n\t\tmemset(timezone_hour, 0, sizeof(timezone_hour));\n\t\tmemset(timezone_min,  0, sizeof(timezone_min));\n\n\t\tsscanf(date, \"%*d-%*d-%*d %*d:%*d:%*d.%*d+%2[0-9]:%2[0-9]\", timezone_hour, timezone_min);\n\t\tif  (timezone_hour[0] != 0)\n\t\t{\n\t\t\tstrcat(sign, \"+\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// No timezone is expressed in the source date\n\t\t\t// the default UTC is added\n\t\t\tstrcat(formatted_date, \"+00:00\");\n\t\t}\n\t}\n\n\tif (sign[0] != 0)\n\t{\n\t\tif (timezone_hour[0] != 0)\n\t\t{\n\t\t\tstrcat(formatted_date, sign);\n\n\t\t\t// Pad with 0 if an hour having only 1 digit was provided\n\t\t\t// +1 -> +01\n\t\t\tif (strlen(timezone_hour) == 1)\n\t\t\t\tstrcat(formatted_date, \"0\");\n\n\t\t\tstrcat(formatted_date, timezone_hour);\n\t\t\tstrcat(formatted_date, \":\");\n\t\t}\n\n\t\tif (timezone_min[0] != 0)\n\t\t{\n\t\t\tstrcat(formatted_date, timezone_min);\n\n\t\t\t// Pad with 0 if minutes having only 1 digit were provided\n\t\t\t// 3 -> 30\n\t\t\tif (strlen(timezone_min) == 1)\n\t\t\t\tstrcat(formatted_date, \"0\");\n\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Minutes aren't expressed in the source 
date\n\t\t\tstrcat(formatted_date, "00");\n\t\t}\n\t}\n\n\treturn (true);\n}\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Process the aggregate options and return the columns to be selected\n */\nbool Connection::jsonAggregates(const Value& payload,\n\t\t\t\tconst Value& aggregates,\n\t\t\t\tSQLBuffer& sql,\n\t\t\t\tSQLBuffer& jsonConstraint,\n\t\t\t\tbool &isOptAggregate,\n\t\t\t\tbool isTableReading,\n\t\t\t\tbool isExtQuery\n\t\t\t\t)\n{\n\tstring col;\n\tstring column_name;\n\n\tisOptAggregate = false;\n\n\tif (aggregates.IsObject())\n\t{\n\t\tif (! aggregates.HasMember("operation"))\n\t\t{\n\t\t\traiseError("Select aggregation",\n\t\t\t\t   "Missing property \\"operation\\"");\n\t\t\treturn false;\n\t\t}\n\t\tif ((! aggregates.HasMember("column")) && (! aggregates.HasMember("json")))\n\t\t{\n\t\t\traiseError("Select aggregation",\n\t\t\t\t   "Missing property \\"column\\" or \\"json\\"");\n\t\t\treturn false;\n\t\t}\n\t\tstring operation;\n\n\t\t// Handles the case of count: the virtual tables should use the\n\t\t// count operation and the external ones the sum operation\n\t\toperation = aggregates["operation"].GetString();\n\t\tif (isTableReading)\n\t\t{\n\t\t\tif (operation.compare("count") == 0)\n\t\t\t{\n\t\t\t\tisOptAggregate = true;\n\t\t\t\tif (isExtQuery)\n\t\t\t\t{\n\t\t\t\t\toperation = "sum";\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tsql.append(operation);\n\n\t\tsql.append('(');\n\t\tif (aggregates.HasMember("column"))\n\t\t{\n\t\t\tcol = aggregates["column"].GetString();\n\t\t\tif (col.compare("*") == 0)\t// Faster to count ROWID rather than *\n\t\t\t{\n\t\t\t\tcol = "ROWID";\n\t\t\t\tsql.append(col);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// an operation different from the 'count' is requested\n\t\t\t\tif (isTableReading && (col.compare("user_ts") == 0) )\n\t\t\t\t{\n\t\t\t\t\tsql.append("strftime('" F_DATEH24_SEC "', user_ts, 'localtime') ");\n\t\t\t\t\tsql.append(" || substr(user_ts, instr(user_ts, '.'), 7) 
\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append(col);\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\telse if (aggregates.HasMember(\"json\"))\n\t\t{\n\t\t\tconst Value& json = aggregates[\"json\"];\n\t\t\tif (! json.IsObject())\n\t\t\t{\n\t\t\t\traiseError(\"Select aggregation\",\n\t\t\t\t\t   \"The json property must be an object\");\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tif (!json.HasMember(\"column\"))\n\t\t\t{\n\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t   \"The json property is missing a column property\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\t// Use json_extract(field, '$.key1.key2') AS value\n\t\t\tsql.append(\"json_extract(\");\n\t\t\tsql.append(json[\"column\"].GetString());\n\t\t\tsql.append(\", '$.\");\n\n\t\t\tif (!json.HasMember(\"properties\"))\n\t\t\t{\n\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t   \"The json property is missing a properties property\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tconst Value& jsonFields = json[\"properties\"];\n\n\t\t\tif (jsonFields.IsArray())\n\t\t\t{\n\t\t\t\tif (! 
jsonConstraint.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tjsonConstraint.append(\" AND \");\n\t\t\t\t}\n\t\t\t\t// JSON1 SQLite3 extension 'json_type' object check:\n\t\t\t\t// json_type(field, '$.key1.key2') IS NOT NULL\n\t\t\t\t// Build the Json keys NULL check\n\t\t\t\tjsonConstraint.append(\"json_type(\");\n\t\t\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\t\t\tjsonConstraint.append(\", '$.\");\n\n\t\t\t\tint field = 0;\n\t\t\t\tstring prev;\n\t\t\t\tfor (Value::ConstValueIterator itr = jsonFields.Begin(); itr != jsonFields.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (field)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\".\");\n\t\t\t\t\t}\n\t\t\t\t\tif (prev.length() > 0)\n\t\t\t\t\t{\n\t\t\t\t\t\t// Append Json field for NULL check\n\t\t\t\t\t\tjsonConstraint.append(prev);\n\t\t\t\t\t\tjsonConstraint.append(\".\");\n\t\t\t\t\t}\n\t\t\t\t\tprev = itr->GetString();\n\t\t\t\t\tfield++;\n\t\t\t\t\t// Append Json field for query\n\t\t\t\t\tsql.append(itr->GetString());\n\t\t\t\t}\n\t\t\t\t// Add last Json key\n\t\t\t\tjsonConstraint.append(prev);\n\n\t\t\t\t// Add condition for all json keys not null\n\t\t\t\tjsonConstraint.append(\"') IS NOT NULL\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// Append Json field for query\n\t\t\t\tsql.append(jsonFields.GetString());\n\n\t\t\t\tif (! 
jsonConstraint.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tjsonConstraint.append(\" AND \");\n\t\t\t\t}\n\t\t\t\t// JSON1 SQLite3 extension 'json_type' object check:\n\t\t\t\t// json_type(field, '$.key1.key2') IS NOT NULL\n\t\t\t\t// Build the Json key NULL check\n\t\t\t\tjsonConstraint.append(\"json_type(\");\n\t\t\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\t\t\tjsonConstraint.append(\", '$.\");\n\t\t\t\tjsonConstraint.append(jsonFields.GetString());\n\n\t\t\t\t// Add condition for json key not null\n\t\t\t\tjsonConstraint.append(\"') IS NOT NULL\");\n\t\t\t}\n\t\t\tsql.append(\"')\");\n\t\t}\n\t\tsql.append(\") AS \\\"\");\n\t\tif (aggregates.HasMember(\"alias\"))\n\t\t{\n\t\t\t// Handles the case of the count: the external query should use the alias and the internal the name of the field\n\t\t\tif (isTableReading)\n\t\t\t{\n\t\t\t\tif (isExtQuery)\n\t\t\t\t\tsql.append(aggregates[\"alias\"].GetString());\n\t\t\t\telse\n\t\t\t\t\tsql.append(col);\n\t\t\t}\n\t\t\telse\n\t\t\t\tsql.append(aggregates[\"alias\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(aggregates[\"operation\"].GetString());\n\t\t\tsql.append('_');\n\t\t\tsql.append(aggregates[\"column\"].GetString());\n\t\t}\n\t\tsql.append(\"\\\"\");\n\t}\n\telse if (aggregates.IsArray())\n\t{\n\t\tint index = 0;\n\t\tfor (Value::ConstValueIterator itr = aggregates.Begin(); itr != aggregates.End(); ++itr)\n\t\t{\n\t\t\tif (!itr->IsObject())\n\t\t\t{\n\t\t\t\traiseError(\"select aggregation\",\n\t\t\t\t\t\t\"Each element in the aggregate array must be an object\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif ((! itr->HasMember(\"column\")) && (! itr->HasMember(\"json\")))\n\t\t\t{\n\t\t\t\traiseError(\"Select aggregation\", \"Missing property \\\"column\\\"\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif (! 
itr->HasMember(\"operation\"))\n\t\t\t{\n\t\t\t\traiseError(\"Select aggregation\", \"Missing property \\\"operation\\\"\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif (index)\n\t\t\t\tsql.append(\", \");\n\t\t\tindex++;\n\t\t\tsql.append((*itr)[\"operation\"].GetString());\n\t\t\tsql.append('(');\n\t\t\tif (itr->HasMember(\"column\"))\n\t\t\t{\n\t\t\t\tcolumn_name= (*itr)[\"column\"].GetString();\n\t\t\t\tif (isTableReading && (column_name.compare(\"user_ts\") == 0) )\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_SEC \"', user_ts, 'localtime') \");\n\t\t\t\t\tsql.append(\" || substr(user_ts, instr(user_ts, '.'), 7) \");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append(column_name);\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t}\n\n\t\t\t}\n\t\t\telse if (itr->HasMember(\"json\"))\n\t\t\t{\n\t\t\t\tconst Value& json = (*itr)[\"json\"];\n\t\t\t\tif (! json.IsObject())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"Select aggregation\", \"The json property must be an object\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (!json.HasMember(\"column\"))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", \"The json property is missing a column property\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (!json.HasMember(\"properties\"))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", \"The json property is missing a properties property\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tconst Value& jsonFields = json[\"properties\"];\n\t\t\t\tif (! 
jsonConstraint.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tjsonConstraint.append(\" AND \");\n\t\t\t\t}\n\t\t\t\t// Use json_extract(field, '$.key1.key2') AS value\n\t\t\t\tsql.append(\"json_extract(\");\n\t\t\t\tcolumn_name=json[\"column\"].GetString();\n\t\t\t\tsql.append(column_name);\n\t\t\t\tsql.append(\", '$.\");\n\n\t\t\t\t// JSON1 SQLite3 extension 'json_type' object check:\n\t\t\t\t// json_type(field, '$.key1.key2') IS NOT NULL\n\t\t\t\t// Build the Json keys NULL check\n\t\t\t\tjsonConstraint.append(\"json_type(\");\n\t\t\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\t\t\tjsonConstraint.append(\", '$.\");\n\n\t\t\t\tif (jsonFields.IsArray())\n\t\t\t\t{\n\t\t\t\t\tstring prev;\n\t\t\t\t\tfor (Value::ConstValueIterator itr = jsonFields.Begin(); itr != jsonFields.End(); ++itr)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (prev.length() > 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tjsonConstraint.append(prev);\n\t\t\t\t\t\t\tjsonConstraint.append('.');\n\t\t\t\t\t\t\tsql.append('.');\n\t\t\t\t\t\t}\n\t\t\t\t\t\t// Append Json field for query\n\t\t\t\t\t\tsql.append(itr->GetString());\n\t\t\t\t\t\tprev = itr->GetString();\n\t\t\t\t\t}\n\t\t\t\t\t// Add last Json key\n\t\t\t\t\tjsonConstraint.append(prev);\n\n\t\t\t\t\t// Add condition for json key not null\n\t\t\t\t\tjsonConstraint.append(\"') IS NOT NULL\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\t// Append Json field for query\n\t\t\t\t\tsql.append(jsonFields.GetString());\n\n\t\t\t\t\t// JSON1 SQLite3 extension 'json_type' object check:\n\t\t\t\t\t// json_type(field, '$.key1.key2') IS NOT NULL\n\t\t\t\t\t// Build the Json key NULL check\n\t\t\t\t\tjsonConstraint.append(jsonFields.GetString());\n\n\t\t\t\t\t// Add condition for json key not null\n\t\t\t\t\tjsonConstraint.append(\"') IS NOT NULL\");\n\t\t\t\t}\n\t\t\t\tsql.append(\"')\");\n\t\t\t}\n\t\t\tsql.append(\") AS \\\"\");\n\t\t\tif (itr->HasMember(\"alias\"))\n\t\t\t{\n\t\t\t\tif (isTableReading)\n\t\t\t\t{\n\t\t\t\t\tif 
(isExtQuery)\n\t\t\t\t\t\tsql.append((*itr)[\"alias\"].GetString());\n\t\t\t\t\telse\n\t\t\t\t\t\tsql.append(column_name);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t\tsql.append((*itr)[\"alias\"].GetString());\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append((*itr)[\"operation\"].GetString());\n\t\t\t\tsql.append('_');\n\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t}\n\t\t\tsql.append(\"\\\"\");\n\t\t}\n\t}\n\tif (payload.HasMember(\"group\"))\n\t{\n\t\tsql.append(\", \");\n\t\tif (payload[\"group\"].IsObject())\n\t\t{\n\t\t\tconst Value& grp = payload[\"group\"];\n\t\t\t\n\t\t\tif (grp.HasMember(\"format\"))\n\t\t\t{\n\t\t\t\t// SQLite 3 date format.\n\t\t\t\tstring new_format;\n\t\t\t\tif (isTableReading)\n\t\t\t\t{\n\t\t\t\t\tapplyColumnDateFormatLocaltime(grp[\"format\"].GetString(),\n\t\t\t\t\t\t\t               grp[\"column\"].GetString(),\n\t\t\t\t\t\t\t               new_format);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tapplyColumnDateFormat(grp[\"format\"].GetString(),\n\t\t\t\t\t\t\t      grp[\"column\"].GetString(),\n\t\t\t\t\t\t\t      new_format);\n\t\t\t\t}\n\t\t\t\t// Add the formatted column or use it as is\n\t\t\t\tsql.append(new_format);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(grp[\"column\"].GetString());\n\t\t\t}\n\n\t\t\tif (grp.HasMember(\"alias\"))\n\t\t\t{\n\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\tsql.append(grp[\"alias\"].GetString());\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\tsql.append(grp[\"column\"].GetString());\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(payload[\"group\"].GetString());\n\t\t}\n\t}\n\tif (payload.HasMember(\"timebucket\"))\n\t{\n\t\tconst Value& tb = payload[\"timebucket\"];\n\t\tif (! tb.IsObject())\n\t\t{\n\t\t\traiseError(\"Select data\",\n\t\t\t\t   \"The \\\"timebucket\\\" property must be an object\");\n\t\t\treturn false;\n\t\t}\n\t\tif (! 
tb.HasMember(\"timestamp\"))\n\t\t{\n\t\t\traiseError(\"Select data\",\n\t\t\t\t   \"The \\\"timebucket\\\" object must have a timestamp property\");\n\t\t\treturn false;\n\t\t}\n\n\t\tif (tb.HasMember(\"format\"))\n\t\t{\n\t\t\t// SQLite 3 date format is limited.\n\t\t\tstring new_format;\n\t\t\tif (applyDateFormat(tb[\"format\"].GetString(),\n\t\t\t\t\t    new_format))\n\t\t\t{\n\t\t\t\tsql.append(\", \");\n\t\t\t\t// Add the formatted column\n\t\t\t\tsql.append(new_format);\n\n\t\t\t\tif (tb.HasMember(\"size\"))\n\t\t\t\t{\n\t\t\t\t\t// Use Unix epoch, without microseconds\n\t\t\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\t\t\tsql.append(\" * round(\");\n\t\t\t\t\tsql.append(\"strftime('%s', \");\n\t\t\t\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\t\t\t\tsql.append(\") / \");\n\t\t\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\t\t\tsql.append(\", 6)\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\t\t\t}\n\t\t\t\tsql.append(\", 'unixepoch')\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t/**\n\t\t\t\t * No date format found: we should return an error.\n\t\t\t\t * Note: currently if input Json payload has no 'result' member\n\t\t\t\t * raiseError() results in no data being sent to the client\n\t\t\t\t * We use Unix epoch without microseconds\n\t\t\t\t */\n\t\t\t\tsql.append(\", datetime(\");\n\t\t\t\tif (tb.HasMember(\"size\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\t\t\tsql.append(\" * round(\");\n\t\t\t\t}\n\t\t\t\t// Use Unix epoch, without microseconds\n\t\t\t\tsql.append(\"strftime('%s', \");\n\t\t\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\t\t\tif (tb.HasMember(\"size\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(\") / \");\n\t\t\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\t\t\tsql.append(\", 6)\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append(\")\");\n\t\t\t\t}\n\t\t\t\tsql.append(\", 'unixepoch')\");\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(\", 
datetime(\");\n\t\t\tif (tb.HasMember(\"size\"))\n\t\t\t{\n\t\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\t\tsql.append(\" * round(\");\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * Default format when no format is specified:\n\t\t\t * - we use Unix time without milliseconds.\n\t\t\t */\n\t\t\tsql.append(\"strftime('%s', \");\n\t\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\t\tif (tb.HasMember(\"size\"))\n\t\t\t{\n\t\t\t\tsql.append(\") / \");\n\t\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\t\tsql.append(\", 6)\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\")\");\n\t\t\t}\n\t\t\tsql.append(\", 'unixepoch')\");\n\t\t}\n\n\t\tsql.append(\" AS \\\"\");\n\t\tif (tb.HasMember(\"alias\"))\n\t\t{\n\t\t\tsql.append(tb[\"alias\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(\"timestamp\");\n\t\t}\n\t\tsql.append('\"');\n\t}\n\treturn true;\n}\n#endif\n\n/**\n * Process the modifiers for limit, skip, sort and group\n */\nbool Connection::jsonModifiers(const Value& payload,\n                               SQLBuffer& sql,\n\t\t\t       bool isTableReading)\n{\n\n\tif (payload.HasMember(\"timebucket\") && payload.HasMember(\"sort\"))\n\t{\n\t\traiseError(\"query modifiers\",\n\t\t\t   \"Sort and timebucket modifiers can not be used in the same payload\");\n\t\treturn false;\n\t}\n\n\tif (payload.HasMember(\"group\"))\n\t{\n\t\tsql.append(\" GROUP BY \");\n\t\tif (payload[\"group\"].IsObject())\n\t\t{\n\t\t\tconst Value& grp = payload[\"group\"];\n\t\t\tif (grp.HasMember(\"format\"))\n\t\t\t{\n\t\t\t\t/**\n\t\t\t\t * SQLite 3 date format is limited.\n\t\t\t\t * Handle all available formats here.\n\t\t\t\t */\n\t\t\t\tstring new_format;\n\t\t\t\tif (isTableReading)\n\t\t\t\t{\n\t\t\t\t\tapplyColumnDateFormatLocaltime(grp[\"format\"].GetString(),\n\t\t\t\t\t\t\t\t       grp[\"column\"].GetString(),\n\t\t\t\t\t\t\t\t       new_format);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tapplyColumnDateFormat(grp[\"format\"].GetString(),\n\t\t\t\t\t\t\t      
grp[\"column\"].GetString(),\n\t\t\t\t\t\t\t      new_format);\n\t\t\t\t}\n\n\t\t\t\t// Add the formatted column or use it as is\n\t\t\t\tsql.append(new_format);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(payload[\"group\"].GetString());\n\t\t}\n\t}\n\n\tif (payload.HasMember(\"sort\"))\n\t{\n\t\tsql.append(\" ORDER BY \");\n\t\tconst Value& sortBy = payload[\"sort\"];\n\t\tif (sortBy.IsObject())\n\t\t{\n\t\t\tif (! sortBy.HasMember(\"column\"))\n\t\t\t{\n\t\t\t\traiseError(\"Select sort\", \"Missing property \\\"column\\\"\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tsql.append(sortBy[\"column\"].GetString());\n\t\t\tsql.append(' ');\n\t\t\tif (! sortBy.HasMember(\"direction\"))\n\t\t\t{\n\t\t\t\tsql.append(\"ASC\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(sortBy[\"direction\"].GetString());\n\t\t\t}\n\t\t}\n\t\telse if (sortBy.IsArray())\n\t\t{\n\t\t\tint index = 0;\n\t\t\tfor (Value::ConstValueIterator itr = sortBy.Begin(); itr != sortBy.End(); ++itr)\n\t\t\t{\n\t\t\t\tif (!itr->IsObject())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"select sort\",\n\t\t\t\t\t\t   \"Each element in the sort array must be an object\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (! itr->HasMember(\"column\"))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"Select sort\", \"Missing property \\\"column\\\"\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (index)\n\t\t\t\t{\n\t\t\t\t\tsql.append(\", \");\n\t\t\t\t}\n\t\t\t\tindex++;\n\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\tsql.append(' ');\n\t\t\t\tif (! itr->HasMember(\"direction\"))\n\t\t\t\t{\n\t\t\t\t\t sql.append(\"ASC\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append((*itr)[\"direction\"].GetString());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif (payload.HasMember(\"timebucket\"))\n\t{\n\t\tconst Value& tb = payload[\"timebucket\"];\n\t\tif (! 
tb.IsObject())\n\t\t{\n\t\t\traiseError(\"Select data\",\n\t\t\t\t   \"The \\\"timebucket\\\" property must be an object\");\n\t\t\treturn false;\n\t\t}\n\t\tif (! tb.HasMember(\"timestamp\"))\n\t\t{\n\t\t\traiseError(\"Select data\",\n\t\t\t\t   \"The \\\"timebucket\\\" object must have a timestamp property\");\n\t\t\treturn false;\n\t\t}\n\t\tif (payload.HasMember(\"group\"))\n\t\t{\n\t\t\tsql.append(\", \");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(\" GROUP BY \");\n\t\t}\n\n\t\t// Divide by \"size\"\n\t\tif (tb.HasMember(\"size\"))\n\t\t{\n\t\t\t// Use Unix epoch without milliseconds\n\t\t\tsql.append(\"round(strftime('%s', \");\n\t\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\t\tsql.append(\") / \");\n\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\tsql.append(\", 6)\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Use Unix epoch\n\t\t\tsql.append(\"strftime('%s', \");\n\t\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\t\tsql.append(\")\");\n\t\t}\n\n\t\tsql.append(\" ORDER BY \");\n\n\t\t// Use Unix epoch without milliseconds\n\t\tsql.append(\"strftime('%s', \");\n\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\tsql.append(\")\");\n\n\t\tsql.append(\" DESC\");\n\t}\n\n\tif (payload.HasMember(\"limit\"))\n\t{\n\t\tif (!payload[\"limit\"].IsInt())\n\t\t{\n\t\t\traiseError(\"limit\",\n\t\t\t\t   \"Limit must be specified as an integer\");\n\t\t\treturn false;\n\t\t}\n\t\tsql.append(\" LIMIT \");\n\t\ttry {\n\t\t\tsql.append(payload[\"limit\"].GetInt());\n\t\t} catch (exception& e) {\n\t\t\traiseError(\"limit\",\n\t\t\t\t   \"Bad value for limit parameter: %s\",\n\t\t\t\t   e.what());\n\t\t\treturn false;\n\t\t}\n\t}\n\n\t// OFFSET must go after LIMIT\n\tif (payload.HasMember(\"skip\"))\n\t{\n\t\t// Add no limits\n\t\tif (!payload.HasMember(\"limit\"))\n\t\t{\n\t\t\tsql.append(\" LIMIT -1\");\n\t\t}\n\n\t\tif (!payload[\"skip\"].IsInt())\n\t\t{\n\t\t\traiseError(\"skip\",\n\t\t\t\t   \"Skip must be specified as an 
integer\");\n\t\t\treturn false;\n\t\t}\n\t\tsql.append(\" OFFSET \");\n\t\tsql.append(payload[\"skip\"].GetInt());\n\t}\n\treturn true;\n}\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Convert a JSON where clause into a SQLite3 where clause\n *\n */\nbool Connection::jsonWhereClause(\n\tconst Value& whereClause,\n\tSQLBuffer& sql,\n\tstd::vector<std::string>  &asset_codes,\n\tbool convertLocaltime,\n\tstring prefix)\n{\n\n\tstring column;\n\tstring cond;\n\n\tif (!whereClause.IsObject())\n\t{\n\t\traiseError(\"where clause\", \"The \\\"where\\\" property must be a JSON object\");\n\t\treturn false;\n\t}\n\tif (!whereClause.HasMember(\"column\"))\n\t{\n\t\traiseError(\"where clause\", \"The \\\"where\\\" object is missing a \\\"column\\\" property\");\n\t\treturn false;\n\t}\n\tif (!whereClause.HasMember(\"condition\"))\n\t{\n\t\traiseError(\"where clause\", \"The \\\"where\\\" object is missing a \\\"condition\\\" property\");\n\t\treturn false;\n\t}\n\n\tcolumn = whereClause[\"column\"].GetString();\n\tif (!prefix.empty())\n\t\tsql.append(prefix);\n\tsql.append(column);\n\tsql.append(' ');\n\tcond = whereClause[\"condition\"].GetString();\n\n\tif (cond.compare(\"isnull\") == 0)\n\t{\n\t\tsql.append(\"isnull \");\n\t}\n\telse if (cond.compare(\"notnull\") == 0)\n\t{\n\t\tsql.append(\"notnull \");\n\t}\n\telse\n\t{\n\t\tif (!whereClause.HasMember(\"value\"))\n\t\t{\n\t\t\traiseError(\"where clause\",\n\t\t\t\t   \"The \\\"where\\\" object is missing a \\\"value\\\" property\");\n\t\t\treturn false;\n\t\t}\n\n\t\tif (!cond.compare(\"older\"))\n\t\t{\n\t\t\tif (!whereClause[\"value\"].IsInt())\n\t\t\t{\n\t\t\t\traiseError(\"where clause\",\n\t\t\t\t\t   \"The \\\"value\\\" of an \\\"older\\\" condition must be an integer\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tsql.append(\"< datetime('now', '-\");\n\t\t\tsql.append(whereClause[\"value\"].GetInt());\n\t\t\tif (convertLocaltime)\n\t\t\t\tsql.append(\" seconds', 'localtime')\"); // Get value in 
localtime\n\t\t\telse\n\t\t\t\tsql.append(\" seconds')\"); // Get value in UTC by asking for no timezone\n\t\t}\n\t\telse if (!cond.compare(\"newer\"))\n\t\t{\n\t\t\tif (!whereClause[\"value\"].IsInt())\n\t\t\t{\n\t\t\t\traiseError(\"where clause\",\n\t\t\t\t\t   \"The \\\"value\\\" of a \\\"newer\\\" condition must be an integer\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tsql.append(\"> datetime('now', '-\");\n\t\t\tsql.append(whereClause[\"value\"].GetInt());\n\t\t\tif (convertLocaltime)\n\t\t\t\tsql.append(\" seconds', 'localtime')\"); // Get value in localtime\n\t\t\telse\n\t\t\t\tsql.append(\" seconds')\"); // Get value in UTC by asking for no timezone\n\t\t}\n\t\telse if (!cond.compare(\"in\") || !cond.compare(\"not in\"))\n\t\t{\n\t\t\t// Check we have a non-empty array\n\t\t\tif (whereClause[\"value\"].IsArray() &&\n\t\t\t    whereClause[\"value\"].Size())\n\t\t\t{\n\t\t\t\tsql.append(cond);\n\t\t\t\tsql.append(\" ( \");\n\t\t\t\tint field = 0;\n\t\t\t\tfor (Value::ConstValueIterator itr = whereClause[\"value\"].Begin();\n\t\t\t\t\t\t\t\titr != whereClause[\"value\"].End();\n\t\t\t\t\t\t\t\t++itr)\n\t\t\t\t{\n\t\t\t\t\tif (field)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\", \");\n\t\t\t\t\t}\n\t\t\t\t\tfield++;\n\t\t\t\t\tif (itr->IsNumber())\n\t\t\t\t\t{\n\t\t\t\t\t\tif (itr->IsInt())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(itr->GetInt());\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (itr->IsInt64())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append((long)itr->GetInt64());\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(itr->GetDouble());\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (itr->IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\tsql.append(escape(itr->GetString()));\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tstring message(\"The \\\"value\\\" of a \\\"\" + \\\n\t\t\t\t\t\t\t\tcond + \\\n\t\t\t\t\t\t\t\t\"\\\" condition array element must be \" \\\n\t\t\t\t\t\t\t\t\"a string, 
integer or double.\");\n\t\t\t\t\t\traiseError(\"where clause\", message.c_str());\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tsql.append(\" )\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring message(\"The \\\"value\\\" of a \\\"\" + \\\n\t\t\t\t\t\tcond + \"\\\" condition must be an array \" \\\n\t\t\t\t\t\t\"and must not be empty.\");\n\t\t\t\traiseError(\"where clause\", message.c_str());\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(cond);\n\t\t\tsql.append(' ');\n\t\t\tif (whereClause[\"value\"].IsInt())\n\t\t\t{\n\t\t\t\tsql.append(whereClause[\"value\"].GetInt());\n\t\t\t} else if (whereClause[\"value\"].IsString())\n\t\t\t{\n\t\t\t\tstring value = whereClause[\"value\"].GetString();\n\t\t\t\tsql.append('\\'');\n\t\t\t\tsql.append(escape(value));\n\t\t\t\tsql.append('\\'');\n\n\t\t\t\t// Identify a specific operation to restrict the tables involved\n\t\t\t\tif (column.compare(\"asset_code\") == 0)\n\t\t\t\t\tif (cond.compare(\"=\") == 0)\n\t\t\t\t\t\tasset_codes.push_back(value);\n\t\t\t}\n\t\t}\n\t}\n\n\tif (whereClause.HasMember(\"and\"))\n\t{\n\t\tsql.append(\" AND \");\n\t\tif (!jsonWhereClause(whereClause[\"and\"], sql, asset_codes, convertLocaltime, prefix))\n\t\t{\n\t\t\treturn false;\n\t\t}\n\t}\n\tif (whereClause.HasMember(\"or\"))\n\t{\n\t\tsql.append(\" OR \");\n\t\tif (!jsonWhereClause(whereClause[\"or\"], sql, asset_codes, convertLocaltime, prefix))\n\t\t{\n\t\t\treturn false;\n\t\t}\n\t}\n\n\treturn true;\n}\n\n#endif\n\n\n\n/**\n * This routine uses SQLite3 JSON1 extension functions\n */\nbool Connection::returnJson(const Value& json,\n\t\t\t    SQLBuffer& sql,\n\t\t\t    SQLBuffer& jsonConstraint)\n{\n\tif (! 
json.IsObject())\n\t{\n\t\traiseError(\"retrieve\", \"The json property must be an object\");\n\t\treturn false;\n\t}\n\tif (!json.HasMember(\"column\"))\n\t{\n\t\traiseError(\"retrieve\", \"The json property is missing a column property\");\n\t\treturn false;\n\t}\n\t// Call JSON1 SQLite3 extension routine 'json_extract'\n\t// json_extract(field, '$.key1.key2') AS value\n\tsql.append(\"json_extract(\");\n\tsql.append(json[\"column\"].GetString());\n\tsql.append(\", '$.\");\n\tif (!json.HasMember(\"properties\"))\n\t{\n\t\traiseError(\"retrieve\", \"The json property is missing a properties property\");\n\t\treturn false;\n\t}\n\tconst Value& jsonFields = json[\"properties\"];\n\tif (jsonFields.IsArray())\n\t{\n\t\tif (! jsonConstraint.isEmpty())\n\t\t{\n\t\t\tjsonConstraint.append(\" AND \");\n\t\t}\n\t\t// JSON1 SQLite3 extension 'json_type' object check:\n\t\t// json_type(field, '$.key1.key2') IS NOT NULL\n\t\t// Build the Json keys NULL check\n\t\tjsonConstraint.append(\"json_type(\");\n\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\tjsonConstraint.append(\", '$.\");\n\t\tint field = 0;\n\t\tstring prev;\n\t\tfor (Value::ConstValueIterator itr = jsonFields.Begin(); itr != jsonFields.End(); ++itr)\n\t\t{\n\t\t\tif (field)\n\t\t\t{\n\t\t\t\tsql.append(\".\");\n\t\t\t}\n\t\t\tif (prev.length())\n\t\t\t{\n\t\t\t\tjsonConstraint.append(prev);\n\t\t\t\tjsonConstraint.append(\".\");\n\t\t\t}\n\t\t\tfield++;\n\t\t\t// Append Json field for query\n\t\t\tsql.append(itr->GetString());\n\t\t\tprev = itr->GetString();\n\t\t}\n\t\t// Add last Json key\n\t\tjsonConstraint.append(prev);\n\n\t\t// Add condition for all json keys not null\n\t\tjsonConstraint.append(\"') IS NOT NULL\");\n\t}\n\telse\n\t{\n\t\t// Append Json field for query\n\t\tsql.append(jsonFields.GetString());\n\t\tif (! 
jsonConstraint.isEmpty())\n\t\t{\n\t\t\tjsonConstraint.append(\" AND \");\n\t\t}\n\t\t// JSON1 SQLite3 extension 'json_type' object check:\n\t\t// json_type(field, '$.key1.key2') IS NOT NULL\n\t\t// Build the Json key NULL check\n\t\tjsonConstraint.append(\"json_type(\");\n\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\tjsonConstraint.append(\", '$.\");\n\t\tjsonConstraint.append(jsonFields.GetString());\n\n\t\t// Add condition for json key not null\n\t\tjsonConstraint.append(\"') IS NOT NULL\");\n\t}\n\tsql.append(\"') \");\n\n\treturn true;\n}\n\n/**\n * Remove whitespace at both ends of a string\n */\nchar *Connection::trim(char *str)\n{\nchar *ptr;\n\n\twhile (*str && *str == ' ')\n\t\tstr++;\n\n\tptr = str + strlen(str) - 1;\n\twhile (ptr > str && *ptr == ' ')\n\t{\n\t\t*ptr = 0;\n\t\tptr--;\n\t}\n\treturn str;\n}\n\n/**\n * Raise an error to return from the plugin\n */\nvoid Connection::raiseError(const char *operation, const char *reason, ...)\n{\nConnectionManager *manager = ConnectionManager::getInstance();\nchar\ttmpbuf[512];\n\n\tva_list ap;\n\tva_start(ap, reason);\n\tvsnprintf(tmpbuf, sizeof(tmpbuf), reason, ap);\n\tva_end(ap);\n\tLogger::getLogger()->error(\"%s storage plugin raising error: %s: %s\",\n\t\t\t\t   PLUGIN_LOG_NAME,\n\t\t\t\t   operation, tmpbuf);\n\tmanager->setError(operation, tmpbuf, false);\n}\n\n/**\n * Return the size of a given table in bytes\n */\nlong Connection::tableSize(const string& table)\n{\nSQLBuffer buf;\n\n\traiseError(\"tableSize\", \"Not available in SQLite3 storage plugin\");\n\treturn -1;\n}\n\n/**\n * String escape routine\n */\nconst string Connection::escape(const string& str)\n{\nchar    *buffer;\nconst char    *p1;\nchar  *p2;\nstring  newString;\n\n    if (str.find_first_of('\\'') == string::npos)\n    {\n        return str;\n    }\n\n    // Worst case every character is a quote that gets doubled; add one byte for the NUL terminator\n    buffer = (char *)malloc(str.length() * 2 + 1);\n\n    p1 = str.c_str();\n    p2 = buffer;\n    while (*p1)\n    {\n        if (*p1 == '\\'')\n        {\n            
*p2++ = '\\'';\n            *p2++ = '\\'';\n            p1++;\n        }\n        else\n        {\n            *p2++ = *p1++;\n        }\n    }\n    *p2 = 0;\n    newString = string(buffer);\n    free(buffer);\n    return newString;\n}\n\n/**\n * Optionally log SQL statement execution\n *\n * @param\ttag\tA string tag that says why the SQL is being executed\n * @param\tstmt\tThe SQL statement itself\n */\nvoid Connection::logSQL(const char *tag, const char *stmt)\n{\n\tif (m_logSQL)\n\t{\n\t\tLogger::getLogger()->info(\"%s, %s: %s\",\n\t\t\t\t\t  PLUGIN_LOG_NAME,\n\t\t\t\t\t  tag,\n\t\t\t\t\t  stmt);\n\t}\n}\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * SQLITE wrapper to retry statements when the database is locked\n *\n * @param\tdb\tThe open SQLite database\n * @param\tsql\tThe SQL to execute\n * @param\tcallback\tCallback function\n * @param\tcbArg\t\tCallback 1st argument\n * @param\terrmsg\t\tLocation to write error message\n */\nint Connection::SQLexec(sqlite3 *db, const string& table, const char *sql, int (*callback)(void*,int,char**,char**),\n  \t\t\tvoid *cbArg, char **errmsg)\n{\nint retries = 0, rc;\n\n\t*errmsg = NULL;\n\tdo {\n#if DO_PROFILE\n\t\tProfileItem *prof = new ProfileItem(sql);\n#endif\n\t\tif (*errmsg)\n\t\t{\n\t\t\tsqlite3_free(*errmsg);\n\t\t\t*errmsg = NULL;\n\t\t}\n\t\trc = sqlite3_exec(db, sql, callback, cbArg, errmsg);\n#if DO_PROFILE\n\t\tprof->complete();\n\t\tprofiler.insert(prof);\n#endif\n\t\tretries++;\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\n\n#if DO_PROFILE_RETRIES\n\t\t\tm_qMutex.lock();\n\t\t\tm_waiting.fetch_add(1);\n\t\t\tif (maxQueue < m_waiting)\n\t\t\t\tmaxQueue = m_waiting;\n\t\t\tm_qMutex.unlock();\n#endif\n\t\t\tint interval = (1 * RETRY_BACKOFF);\n\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(interval));\n#if DO_PROFILE_RETRIES\n\t\t\tm_qMutex.lock();\n\t\t\tm_waiting.fetch_sub(1);\n\t\t\tm_qMutex.unlock();\n#endif\n\t\t\tif (sqlite3_get_autocommit(db)==0) // if transaction is still open, do 
rollback\n\t\t\t{\n\t\t\t\tint rc2;\n\t\t\t\tchar *zErrMsg = NULL;\n\t\t\t\trc2 = SQLexec(db, table,\n\t\t\t\t\t\"ROLLBACK TRANSACTION;\",\n\t\t\t\t\tNULL,\n\t\t\t\t\tNULL,\n\t\t\t\t\t&zErrMsg);\n\t\t\t\tif (rc2 != SQLITE_OK)\n\t\t\t\t{\n\t\t\t\t\t// Pass the error text as an argument, not as the format string\n\t\t\t\t\traiseError(\"rollback\", \"%s\", zErrMsg);\n\t\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t\t}\n\t\t\t}\n\n\t\t}\n\t} while (retries < MAX_RETRIES && (rc != SQLITE_OK));\n\tif (retries >= MAX_RETRIES)\n\t{\n\t\tLogger::getLogger()->error(\"SQL statement %s failed after maximum retries: %s\", sql, sqlite3_errmsg(dbHandle));\n\t}\n\telse if (retries > LOG_AFTER_NERRORS)\n\t{\n\t\tLogger::getLogger()->warn(\"%d retries required of the SQL statement '%s': %s\", retries, sql, sqlite3_errmsg(dbHandle));\n\t\tLogger::getLogger()->warn(\"If the excessive retries continue for sustained periods it is a sign that the system may be reaching the limits of the load it can handle\");\n\t}\n#if DO_PROFILE_RETRIES\n\tretryStats[retries-1]++;\n\tif (++numStatements > RETRY_REPORT_THRESHOLD - 1)\n\t{\n\t\tnumStatements = 0;\n\t\tLogger *log = Logger::getLogger();\n\t\tlog->info(\"Storage layer statement retry profile\");\n\t\tfor (int i = 0; i < MAX_RETRIES-1; i++)\n\t\t{\n\t\t\tlog->info(\"%2d: %d\", i, retryStats[i]);\n\t\t\tretryStats[i] = 0;\n\t\t}\n\t\tlog->info(\"Too many retries: %d\", retryStats[MAX_RETRIES-1]);\n\t\tretryStats[MAX_RETRIES-1] = 0;\n\t\tlog->info(\"Maximum retry queue length: %d\", maxQueue);\n\t\tmaxQueue = 0;\n\t}\n#endif\n\n\tif (rc == SQLITE_LOCKED)\n\t{\n\t\tLogger::getLogger()->error(\"Database still locked after maximum retries, executing %s operation on %s\", operation(sql).c_str(), table.c_str());\n\t}\n\tif (rc == SQLITE_BUSY)\n\t{\n\t\tLogger::getLogger()->error(\"Database still busy after maximum retries, executing %s operation on %s\", operation(sql).c_str(), table.c_str());\n\t}\n\n\tif (rc != SQLITE_OK)\n\t{\n\t\tLogger::getLogger()->error(\"Database error after maximum retries, executing %s operation on %s\", 
operation(sql).c_str(), table.c_str());\n\t}\n\n\treturn rc;\n}\n#endif\n\n/**\n * Execute a step command on a prepared statement but add the ability to retry on error.\n *\n * It is assumed that binding has already taken place and that those bound\n * variables are maintained for all retries.\n *\n * @param statement\tThe prepared statement to step\n * @return int\t\tThe status of the final sqlite3_step that was issued\n */\nint Connection::SQLstep(sqlite3_stmt *statement)\n{\nint retries = 0, rc;\n\n\tdo {\n#if DO_PROFILE\n\t\tProfileItem *prof = new ProfileItem(sqlite3_sql(statement));\n#endif\n\t\tif (retries)\n\t\t{\n\t\t\tsqlite3_reset(statement);\n\t\t}\n\t\trc = sqlite3_step(statement);\n#if DO_PROFILE\n\t\tprof->complete();\n\t\tprofiler.insert(prof);\n#endif\n\t\tretries++;\n\t\tif (rc == SQLITE_LOCKED || rc == SQLITE_BUSY)\n\t\t{\n\t\t\tint interval = (retries * RETRY_BACKOFF);\n\n\t\t\tthis_thread::sleep_for(chrono::milliseconds(interval));\n\t\t}\n\t} while (retries < MAX_RETRIES && (rc == SQLITE_LOCKED || rc == SQLITE_BUSY));\n\tif (retries >= MAX_RETRIES)\n\t{\n\t\tLogger::getLogger()->error(\"SQL statement failed after maximum retries: %s\", sqlite3_errmsg(dbHandle));\n\t}\n\telse if (retries > LOG_AFTER_NERRORS)\n\t{\n\t\tLogger::getLogger()->warn(\"%d retries required of the SQL statement: %s\", retries, sqlite3_errmsg(dbHandle));\n\t\tLogger::getLogger()->warn(\"If the excessive retries continue for sustained periods it is a sign that the system may be reaching the limits of the load it can handle\");\n\t}\n#if DO_PROFILE_RETRIES\n\tretryStats[retries-1]++;\n\tif (++numStatements > 1000)\n\t{\n\t\tnumStatements = 0;\n\t\tLogger *log = Logger::getLogger();\n\t\tlog->info(\"Storage layer statement retry profile\");\n\t\tfor (int i = 0; i < MAX_RETRIES-1; i++)\n\t\t{\n\t\t\tlog->info(\"%2d: %d\", i, retryStats[i]);\n\t\t\tretryStats[i] = 0;\n\t\t}\n\t\tlog->info(\"Too many retries: %d\", retryStats[MAX_RETRIES-1]);\n\t\tretryStats[MAX_RETRIES-1] = 
0;\n\t}\n#endif\n\n\tif (rc == SQLITE_LOCKED)\n\t{\n\t\tLogger::getLogger()->error(\"Database still locked after maximum retries\");\n\t}\n\tif (rc == SQLITE_BUSY)\n\t{\n\t\tLogger::getLogger()->error(\"Database still busy after maximum retries\");\n\t}\n\n\treturn rc;\n}\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Perform a delete against a common table\n *\n */\nint Connection::deleteRows(const string& schema, const string& table, const string& condition)\n{\n// Default template parameter uses UTF8 and MemoryPoolAllocator.\nDocument document;\nSQLBuffer\tsql;\nvector<string>  asset_codes;\n\n\tif (!m_schemaManager->exists(dbHandle, schema))\n\t{\n\t\traiseError(\"delete\", \"Schema %s does not exist, unable to delete from table %s\", schema.c_str(), table.c_str());\n\t\treturn -1;\n\t}\n\n\tsql.append(\"DELETE FROM \");\n\tsql.append(schema);\n\tsql.append('.');\n\tsql.append(table);\n\tif (! condition.empty())\n\t{\n\t\tsql.append(\" WHERE \");\n\t\tif (document.Parse(condition.c_str()).HasParseError())\n\t\t{\n\t\t\traiseError(\"delete\", \"Failed to parse JSON payload\");\n\t\t\treturn -1;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (document.HasMember(\"where\"))\n\t\t\t{\n\t\t\t\tif (!jsonWhereClause(document[\"where\"], sql, asset_codes))\n\t\t\t\t{\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\traiseError(\"delete\",\n\t\t\t\t\t   \"JSON does not contain where clause\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t}\n\tsql.append(';');\n\n\tconst char *query = sql.coalesce();\n\tlogSQL(\"CommonDelete\", query);\n\tchar *zErrMsg = NULL;\n\tint delete_rows;\n\tint rc;\n\n\t// Exec the DELETE statement: no callback, no result set\n\tm_writeAccessOngoing.fetch_add(1);\n\trc = SQLexec(dbHandle, table,\n\t\t     query,\n\t\t     NULL,\n\t\t     NULL,\n\t\t     &zErrMsg);\n\tm_writeAccessOngoing.fetch_sub(1);\n\tif (m_writeAccessOngoing == 0)\n\t\tdb_cv.notify_all();\n\n\t// Check result code\n\tif (rc == SQLITE_OK)\n\t{\n\t\t// Success. 
Release memory for 'query' var\n\t\tdelete[] query;\n\t\treturn sqlite3_changes(dbHandle);\n\t}\n\telse\n\t{\n\t\traiseError(\"delete\", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\tLogger::getLogger()->error(\"SQL statement: %s\", query);\n\t\tdelete[] query;\n\n\t\t// Failure\n\t\treturn -1;\n\t}\n}\n#endif\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Create snapshot of a common table\n *\n * @param table\t\tThe table to snapshot\n * @param id\t\tThe snapshot id\n * @return\t\t-1 on error, >= 0 on success\n *\n * The newly created table has the name:\n * $table_snap$id\n */\nint Connection::create_table_snapshot(const string& table, const string& id)\n{\n\tstring query = \"CREATE TABLE fledge.\";\n\tquery += table + \"_snap\" +  id + \" AS SELECT * FROM fledge.\" + table;\n\n\tlogSQL(\"CreateTableSnapshot\", query.c_str());\n\n\tchar* zErrMsg = NULL;\n\tint rc = SQLexec(dbHandle, table,\n\t\t\t query.c_str(),\n\t\t\t NULL,\n\t\t\t NULL,\n\t\t\t &zErrMsg);\n\n\t// Check result code\n\tif (rc == SQLITE_OK)\n\t{\n\t\treturn 1;\n\t}\n\telse\n\t{\n\t\traiseError(\"create_table_snapshot\", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\treturn -1;\n\t}\n}\n\n/**\n * Set the contents of a common table from a snapshot\n *\n * @param table\t\tThe table to fill\n * @param id\t\tThe snapshot id of the table\n * @return\t\t-1 on error, >= 0 on success\n *\n */\nint Connection::load_table_snapshot(const string& table, const string& id)\n{\n\tstring purgeQuery = \"DELETE FROM fledge.\" + table;\n\tstring query = \"BEGIN TRANSACTION; \";\n\tquery += purgeQuery +\"; INSERT INTO fledge.\" + table;\n\tquery += \" SELECT * FROM fledge.\" + table + \"_snap\" + id;\n\tquery += \"; COMMIT TRANSACTION;\";\n\n\tlogSQL(\"LoadTableSnapshot\", query.c_str());\n\n\tchar* zErrMsg = NULL;\n\tint rc = SQLexec(dbHandle, table,\n\t\t\t query.c_str(),\n\t\t\t NULL,\n\t\t\t NULL,\n\t\t\t &zErrMsg);\n\n\t// Check result code\n\tif (rc == SQLITE_OK)\n\t{\n\t\treturn 
1;\n\t}\n\telse\n\t{\n\t\traiseError(\"load_table_snapshot\", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\n\t\t// transaction is still open, do rollback\n\t\tif (sqlite3_get_autocommit(dbHandle) == 0)\n\t\t{\n\t\t\trc = SQLexec(dbHandle, table,\n\t\t\t\t     \"ROLLBACK TRANSACTION;\",\n\t\t\t\t     NULL,\n\t\t\t\t     NULL,\n\t\t\t\t     &zErrMsg);\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\traiseError(\"rollback for load_table_snapshot\", zErrMsg);\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t}\n\t\t}\n\t\treturn -1;\n\t}\n}\n\n/**\n * Delete a snapshot of a common table\n *\n * @param table\t\tThe table to snapshot\n * @param id\t\tThe snapshot id\n * @return\t\t-1 on error, >= 0 on success\n *\n */\nint Connection::delete_table_snapshot(const string& table, const string& id)\n{\n\tstring query = \"DROP TABLE fledge.\" + table + \"_snap\" + id;\n\n\tlogSQL(\"DeleteTableSnapshot\", query.c_str());\n\n\tchar* zErrMsg = NULL;\n\tint rc = SQLexec(dbHandle, table,\n\t\t\t query.c_str(),\n\t\t\t NULL,\n\t\t\t NULL,\n\t\t\t &zErrMsg);\n\n\t// Check result code\n\tif (rc == SQLITE_OK)\n\t{\n\t\treturn 1;\n\t}\n\telse\n\t{\n\t\traiseError(\"delete_table_snapshot\", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\treturn -1;\n\t}\n}\n\n/**\n * Get list of snapshots for a given common table\n *\n * @param table\t\tThe given table name\n */\nbool Connection::get_table_snapshots(const string& table,\n\t\t\t\t     string& resultSet)\n{\nSQLBuffer sql;\n\ttry {\n\t\tif (dbHandle == NULL)\n\t\t{\n\t\t\traiseError(\"retrieve\", \"No SQLite 3 db connection available\");\n\t\t\treturn false;\n\t\t}\n\t\tsql.append(\"SELECT REPLACE(name, '\");\n\t\tsql.append(table);\n\t\tsql.append(\"_snap', '') AS id FROM sqlite_master WHERE type='table' AND name LIKE '\");\n\t\tsql.append(table);\n\t\tsql.append(\"_snap%';\");\n\n\t\tconst char *query = sql.coalesce();\n\t\tchar *zErrMsg = NULL;\n\t\tint rc;\n\t\tsqlite3_stmt *stmt = NULL;\n\n\t\tlogSQL(\"GetTableSnapshots\", query);\n\n\t\t// Prepare the SQL 
statement and get the result set\n\t\trc = sqlite3_prepare_v2(dbHandle, query, -1, &stmt, NULL);\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"get_table_snapshots\", sqlite3_errmsg(dbHandle));\n\t\t\tLogger::getLogger()->error(\"SQL statement: %s\", query);\n\t\t\tif (stmt)\n\t\t\t\tsqlite3_finalize(stmt);\n\t\t\tdelete[] query;\n\t\t\treturn false;\n\t\t}\n\n\t\t// Call result set mapping\n\t\trc = mapResultSet(stmt, resultSet);\n\n\t\t// Delete result set\n\t\tsqlite3_finalize(stmt);\n\n\t\t// Check result set mapping errors\n\t\tif (rc != SQLITE_DONE)\n\t\t{\n\t\t\traiseError(\"get_table_snapshots\", sqlite3_errmsg(dbHandle));\n\t\t\tLogger::getLogger()->error(\"SQL statement: %s\", query);\n\t\t\tdelete[] query;\n\t\t\t// Failure\n\t\t\treturn false;\n\t\t}\n\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\t\t// Success\n\t\treturn true;\n\t} catch (exception& e) {\n\t\traiseError(\"get_table_snapshots\", \"Internal error: %s\", e.what());\n\t\t// Failure\n\t\treturn false;\n\t}\n}\n\n/**\n * In the case of a join add the columns to select from for all the tables in\n * the join\n *\n * @param document\tThe query we are processing\n * @param sql\t\tThe SQLBuffer we are writing\n * @param level\t\tThe table number we are processing\n */\nbool Connection::selectColumns(const Value& document, SQLBuffer& sql, int level)\n{\nSQLBuffer\tjsonConstraints;\n\n\tstring tag = \"t\" + to_string(level) + \".\";\n\n\tif (document.HasMember(\"return\"))\n\t{\n\t\tint col = 0;\n\t\tconst Value& columns = document[\"return\"];\n\t\tif (! 
columns.IsArray())\n\t\t{\n\t\t\traiseError(\"retrieve\", \"The property return must be an array\");\n\t\t\treturn false;\n\t\t}\n\t\tif (document.HasMember(\"modifier\"))\n\t\t{\n\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\tsql.append(' ');\n\t\t}\n\t\tfor (Value::ConstValueIterator itr = columns.Begin(); itr != columns.End(); ++itr)\n\t\t{\n\t\t\tif (col)\n\t\t\t\tsql.append(\", \");\n\t\t\tif (!itr->IsObject())\t// Simple column name\n\t\t\t{\n\t\t\t\tsql.append(tag);\n\t\t\t\tsql.append(itr->GetString());\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tif (itr->HasMember(\"column\"))\n\t\t\t\t{\n\t\t\t\t\tif (! (*itr)[\"column\"].IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t   \"column must be a string\");\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t\tif (itr->HasMember(\"format\"))\n\t\t\t\t\t{\n\t\t\t\t\t\tif (! (*itr)[\"format\"].IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t   \"format must be a string\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t// SQLite 3 date format.\n\t\t\t\t\t\tstring new_format;\n\t\t\t\t\t\tapplyColumnDateFormat((*itr)[\"format\"].GetString(),\n\t\t\t\t\t\t\t\t      tag + (*itr)[\"column\"].GetString(),\n\t\t\t\t\t\t\t\t      new_format, true);\n\n\t\t\t\t\t\t// Add the formatted column or use it as is\n\t\t\t\t\t\tsql.append(new_format);\n\t\t\t\t\t}\n\t\t\t\t\telse if (itr->HasMember(\"timezone\"))\n\t\t\t\t\t{\n\t\t\t\t\t\tif (! 
(*itr)[\"timezone\"].IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t   \"timezone must be a string\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t\t// SQLite3 doesn't support time zone formatting\n\t\t\t\t\t\tif (strcasecmp((*itr)[\"timezone\"].GetString(), \"utc\") != 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t   \"SQLite3 plugin does not support timezones in queries\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_MS \"', \");\n\t\t\t\t\t\t\tsql.append(tag);\n\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\tsql.append(\", 'utc')\");\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(tag);\n\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\telse if (itr->HasMember(\"json\"))\n\t\t\t\t{\n\t\t\t\t\tconst Value& json = (*itr)[\"json\"];\n\t\t\t\t\tif (! 
returnJson(json, sql, jsonConstraints))\n\t\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t   \"return object must have either a column or json property\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\n\t\t\t\tif (itr->HasMember(\"alias\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\t\tsql.append((*itr)[\"alias\"].GetString());\n\t\t\t\t\tsql.append('\"');\n\t\t\t\t}\n\t\t\t}\n\t\t\tcol++;\n\t\t}\n\t}\n\telse\n\t{\n\t\tsql.append('*');\n\t\treturn true;\n\t}\n\tif (document.HasMember(\"join\"))\n\t{\n\t\tconst Value& join = document[\"join\"];\n\t\tif (join.HasMember(\"query\"))\n\t\t{\n\t\t\tconst Value& query = join[\"query\"];\n\t\t\tsql.append(\", \");\n\t\t\tif (!selectColumns(query, sql, ++level))\n\t\t\t{\n\t\t\t\traiseError(\"commonRetrieve\", \"Join failed to add select columns\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t}\n\treturn true;\n}\n\n\n/**\n * In the case of a join add the tables to select from for all the tables in\n * the join\n *\n * @param schema\tThe schema we are using\n * @param document\tThe query we are processing\n * @param sql\t\tThe SQLBuffer we are writing\n * @param level\t\tThe table number we are processing\n */\nbool Connection::appendTables(const string& schema, const Value& document, SQLBuffer& sql, int level)\n{\n\tstring tag = \"t\" + to_string(level);\n\tif (document.HasMember(\"join\"))\n\t{\n\t\tconst Value& join = document[\"join\"];\n\t\tif (join.HasMember(\"table\"))\n\t\t{\n\t\t\tconst Value& table = join[\"table\"];\n\t\t\tif (!table.HasMember(\"name\"))\n\t\t\t{\n\t\t\t\traiseError(\"commonRetrieve\", \"Joining table is missing a table name\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tconst Value& name = table[\"name\"];\n\t\t\tif (!name.IsString())\n\t\t\t{\n\t\t\t\traiseError(\"commonRetrieve\", \"Joining table name is not a string\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tsql.append(\", 
\");\n\t\t\tsql.append(schema);\n\t\t\tsql.append('.');\n\t\t\tsql.append(name.GetString());\n\t\t\tsql.append(\" \");\n\t\t\tsql.append(tag);\n\t\t\tif (join.HasMember(\"query\"))\n\t\t\t{\n\t\t\t\tconst Value& query = join[\"query\"];\n\t\t\t\tappendTables(schema, query, sql, ++level);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\traiseError(\"commonRetrieve\", \"Join is missing a join query definition\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\traiseError(\"commonRetrieve\", \"Join is missing a table definition\");\n\t\t\treturn false;\n\t\t}\n\t}\n\treturn true;\n}\n\n/**\n * Recurse down and add the where clause and join terms for each\n * new table joined to the query\n *\n * @param query\tThe JSON query\n * @param sql\tThe SQLBuffer we are writing the data to\n * @param asset_codes\tThe asset codes\n * @param level\tThe nesting level of the joined table\n */\nbool Connection::processJoinQueryWhereClause(const Value& query, SQLBuffer& sql, std::vector<std::string>  &asset_codes, int level)\n{\n\tstring tag = \"t\" + to_string(level) + \".\";\n\tif (!jsonWhereClause(query[\"where\"], sql, asset_codes, true, tag))\n\t{\n\t\treturn false;\n\t}\n\n\tif (query.HasMember(\"join\"))\n\t{\n\t\t// Now add the join condition itself\n\t\tstring col0, col1;\n\t\tconst Value& join = query[\"join\"];\n\t\tif (join.HasMember(\"on\") && join[\"on\"].IsString())\n\t\t{\n\t\t\tcol0 = join[\"on\"].GetString();\n\t\t}\n\t\telse\n\t\t{\n\t\t\treturn false;\n\t\t}\n\t\tif (join.HasMember(\"table\"))\n\t\t{\n\t\t\tconst Value& table = join[\"table\"];\n\t\t\tif (table.HasMember(\"column\") && table[\"column\"].IsString())\n\t\t\t{\n\t\t\t\tcol1 = table[\"column\"].GetString();\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\traiseError(\"Joined query\", \"Missing join column in table\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\tsql.append(\" AND \");\n\t\tsql.append(tag);\n\t\tsql.append(col0);\n\t\tsql.append(\" = t\");\n\t\tsql.append(level + 
1);\n\t\tsql.append(\".\");\n\t\tsql.append(col1);\n\t\tsql.append(\" \");\n\t\tif (join.HasMember(\"query\") && join[\"query\"].IsObject())\n\t\t{\n\t\t\tsql.append(\" AND \");\n\t\t\tconst Value& query = join[\"query\"];\n\t\t\tprocessJoinQueryWhereClause(query, sql, asset_codes, level + 1);\n\t\t}\n\t}\n\treturn true;\n}\n\n\n/**\n * Create schema and populate with tables and indexes as defined in the JSON schema\n * definition.\n *\n * @param schema   The schema definition as a JSON document containing information about schema of tables to create\n * @return true if the schema was created\n */\nbool Connection::createSchema(const std::string &schema)\n{\n\treturn m_schemaManager->create(dbHandle, schema);\n}\n\n/**\n * Execute a SQLite VACUUM command on the database\n */\nbool Connection::vacuum()\n{\n\tchar* zErrMsg = NULL;\n\t// Exec the statement\n\tint rc = SQLexec(dbHandle, \"\", \"VACUUM;\", NULL, NULL, &zErrMsg);\n\n\t// Check result\n\tif (rc != SQLITE_OK)\n\t{\n\t\tconst char* errMsg = \"Failed to vacuum database \";\n\t\tLogger::getLogger()->error(\"%s: error %s\",\n\t\t\t\t\t   errMsg, zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\treturn false;\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->info(\"Database vacuum complete\");\n\t}\n\treturn true;\n}\n#endif\n\n/**\n * Return the first word in a SQL statement, i.e. the operation that is being executed.\n *\n * @param sql\tThe complete SQL statement\n * @return string\tThe operation\n */\nstring Connection::operation(const char *sql)\n{\n\tconst char *p1 = sql;\n\tchar buf[40], *p2 = buf;\n\t// Leave room for the nul terminator to avoid writing past the end of buf\n\twhile (*p1 && !isspace(*p1) && p2 - buf < (int)sizeof(buf) - 1)\n\t\t*p2++ = *p1++;\n\t*p2 = '\\0';\n\treturn string(buf);\n\n}\n"
  },
  {
    "path": "C/plugins/storage/sqlite/common/connection_manager.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <sqlite3.h>\n#include <unistd.h>\n#include <sys/time.h>\n#include <sys/resource.h>\n#include <connection_manager.h>\n#include <connection.h>\n#include <readings_catalogue.h>\n#include <logger.h>\n#include <disk_monitor.h>\n#include <utils.h>\n#include <sqlite_common.h>\n\n#define  PRAGMA_SMALL \"PRAGMA busy_timeout = 5000; PRAGMA cache_size = -2000; PRAGMA journal_mode = WAL; PRAGMA secure_delete = off; PRAGMA journal_size_limit = 2048000;\"\n#define  PRAGMA_NORMAL \"PRAGMA busy_timeout = 5000; PRAGMA cache_size = -4000; PRAGMA journal_mode = WAL; PRAGMA secure_delete = off; PRAGMA journal_size_limit = 4096000;\"\n#define  PRAGMA_HISPEED \"PRAGMA busy_timeout = 5000; PRAGMA cache_size = -8000; PRAGMA journal_mode = WAL; PRAGMA secure_delete = off; PRAGMA journal_size_limit = 81920000; PRAGMA temp_store = MEMORY\"\n\nConnectionManager *ConnectionManager::instance = 0;\n\n/**\n * Background thread entry point\n */\nstatic void managerBackground(void *arg)\n{\n\tConnectionManager *mgr = (ConnectionManager *)arg;\n\tmgr->background();\n}\n\n/**\n * Default constructor for the connection manager.\n */\nConnectionManager::ConnectionManager() : m_shutdown(false),\n\t\t\t\t\tm_vacuumInterval(6 * 60 * 60),\n\t\t\t\t\tm_attachedDatabases(0),\n\t\t\t\t\tm_diskSpaceMonitor(0),\n\t\t\t\t\tm_config(0)\n{\n\tlastError.message = NULL;\n\tlastError.entryPoint = NULL;\n\tif (getenv(\"FLEDGE_TRACE_SQL\"))\n\t\tm_trace = true;\n\telse\n\t\tm_trace = false;\n\t\n\tstd::string dbPath, dbPathReadings;\n\tconst char *defaultConnection = getenv(\"DEFAULT_SQLITE_DB_FILE\");\n\tconst char *defaultReadingsConnection = getenv(\"DEFAULT_SQLITE_DB_READINGS_FILE\");\n\tif (defaultConnection == NULL)\n\t{\n\t\t// Set DB base path\n\t\tdbPath = getDataDir();\n\t\t// Add the filename\n\t\tdbPath += 
_DB_NAME;\n\t}\n\telse\n\t{\n\t\tdbPath = defaultConnection;\n\t}\n\n\tif (defaultReadingsConnection == NULL)\n\t{\n\t\t// Set DB base path\n\t\tdbPathReadings = getDataDir();\n\t\t// Add the filename\n\t\tdbPathReadings += READINGS_DB_FILE_NAME;\n\t}\n\telse\n\t{\n\t\tdbPathReadings = defaultReadingsConnection;\n\t}\n\n\tm_diskSpaceMonitor = new DiskSpaceMonitor(dbPath, dbPathReadings);\n\n\tm_background = new std::thread(managerBackground, this);\n\n        struct rlimit lim;\n        getrlimit(RLIMIT_NOFILE, &lim);\n\tm_descriptorLimit = lim.rlim_cur;\n}\n\n/**\n * Called at shutdown. Shrink the idle pool, this will\n * have the side effect of closing the connections to the database.\n */\nvoid ConnectionManager::shutdown()\n{\n\tm_shutdown = true;\n\tshrinkPool(idle.size());\n\tif (m_background)\n\t\tm_background->join();\n\tif (m_diskSpaceMonitor)\n\t{\n\t\tdelete m_diskSpaceMonitor;\n\t\tm_diskSpaceMonitor = NULL;\n\t}\n}\n\n/**\n * Return the singleton instance of the connection manager.\n * if none was created then create it.\n */\nConnectionManager *ConnectionManager::getInstance()\n{\n\tif (instance == 0)\n\t{\n\t\tinstance = new ConnectionManager();\n\t}\n\treturn instance;\n}\n\n/**\n * Grow the connection pool by the number of connections\n * specified.\n *\n * @param delta\tThe number of connections to add to the pool\n */\nvoid ConnectionManager::growPool(unsigned int delta)\n{\n\tint poolSize = idle.size() + inUse.size();\n\n\tif ((delta + poolSize) * m_attachedDatabases * NO_DESCRIPTORS_PER_DB\n\t\t\t> (DESCRIPTOR_THRESHOLD * m_descriptorLimit) / 100)\n\t{\n\t\tLogger::getLogger()->warn(\"Request to grow database connection pool rejected\"\n\t\t\t       \" due to excessive file descriptor usage\");\n\t\treturn;\n\t}\n\tint failures = 0;\n\twhile (delta-- > 0)\n\t{\n\t\ttry {\n\t\t\tConnection *conn = new Connection(this);\n\t\t\tif 
(m_trace)\n\t\t\t\tconn->setTrace(true);\n\t\t\tidleLock.lock();\n\t\t\tidle.push_back(conn);\n\t\t\tidleLock.unlock();\n\t\t} catch (...) {\n\t\t\tfailures++;\n\t\t}\n\t}\n\tif (failures > 0)\n\t{\n\t\tidleLock.lock();\n\t\tint idleCount = idle.size();\n\t\tidleLock.unlock();\n\t\tinUseLock.lock();\n\t\tint inUseCount = inUse.size();\n\t\tinUseLock.unlock();\n\t\tLogger::getLogger()->warn(\"Connection pool growth restricted due to failure to create %d connections, %d idle connections & %d connection in use currently\", failures, idleCount, inUseCount);\n\t\tnoConnectionsDiagnostic();\n\t}\n}\n\n/**\n * Attempt to shrink the number of connections in the idle pool\n *\n * @param delta\t\tNumber of connections to attempt to remove\n * @return The number of connections removed.\n */\nunsigned int ConnectionManager::shrinkPool(unsigned int delta)\n{\nunsigned int removed = 0;\nConnection   *conn;\n\n\twhile (delta-- > 0)\n\t{\n\t\tidleLock.lock();\n\t\tconn = idle.back();\n\t\tidle.pop_back();\n\t\tidleLock.unlock();\n\t\tif (conn)\n\t\t{\n\t\t\tdelete conn;\n\t\t\tremoved++;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tbreak;\n\t\t}\n\t}\n\treturn removed;\n}\n\n/**\n * Allocate a connection from the idle pool. If\n * no connection is available add a new connection\n */\nConnection *ConnectionManager::allocate()\n{\nConnection *conn = 0;\n\n\tidleLock.lock();\n\tif (idle.empty())\n\t{\n\t\ttry {\n\t\t\tconn = new Connection(this);\n\t\t} catch (...) 
{\n\t\t\tconn = NULL;\n\t\t\tLogger::getLogger()->error(\"Failed to create database connection to allocate\");\n\t\t\tnoConnectionsDiagnostic();\n\t\t}\n\t}\n\telse\n\t{\n\t\tconn = idle.front();\n\t\tidle.pop_front();\n\t}\n\tidleLock.unlock();\n\tif (conn)\n\t{\n\t\tinUseLock.lock();\n\t\tinUse.push_front(conn);\n\t\tinUseLock.unlock();\n\t}\n\treturn conn;\n}\n\n/**\n * Attach a database to all the connections, idle and in use\n *\n * @param path  - path of the database to attach\n * @param alias - alias to be assigned to the attached database\n */\nbool ConnectionManager::attachNewDb(std::string &path, std::string &alias)\n{\n\tint rc;\n\tstd::string sqlCmd;\n\tsqlite3 *dbHandle;\n\tbool result;\n\tchar *zErrMsg = NULL;\n\n\tint poolSize = idle.size() + inUse.size();\n\tif (poolSize * m_attachedDatabases * NO_DESCRIPTORS_PER_DB \n\t\t\t> (DESCRIPTOR_THRESHOLD * m_descriptorLimit) / 100)\n\t{\n\t\tLogger::getLogger()->warn(\"Request to attach new database rejected\"\n\t\t\t       \" due to excessive file descriptor usage\");\n\t\treturn false;\n\t}\n\tresult = true;\n\n\tsqlCmd = \"ATTACH DATABASE '\" + path + \"' AS \" + alias + \";\";\n\n\tidleLock.lock();\n\tinUseLock.lock();\n\n\t// attach the DB to all idle connections\n\tfor (auto conn : idle)\n\t{\n\t\tdbHandle = conn->getDbHandle();\n\t\trc = SQLExec (dbHandle, sqlCmd.c_str(), &zErrMsg);\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"attachNewDb - It was not possible to attach the db :%s: to an idle connection, error :%s:\", path.c_str(), zErrMsg);\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\tresult = false;\n\t\t\t// TODO We are potentially left in an inconsistent state with the new database\n\t\t\t// attached to some connections but not all. 
\n\t\t\tbreak;\n\t\t}\n\n\n\t\tLogger::getLogger()->debug(\"attachNewDb idle dbHandle :%X: sqlCmd :%s: \", dbHandle, sqlCmd.c_str());\n\n\t}\n\n\tif (result)\n\t{\n\t\t// attach the DB to all inUse connections\n\t\tfor (auto conn : inUse)\n\t\t{\n\t\t\tdbHandle = conn->getDbHandle();\n\t\t\trc = SQLExec (dbHandle, sqlCmd.c_str(), &zErrMsg);\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"attachNewDb - It was not possible to attach the db :%s: to an inUse connection, error :%s:\", path.c_str() ,zErrMsg);\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t\tresult = false;\n\t\t\t\t// TODO We are potentially left in an inconsistent state with the new\n\t\t\t\t// database attached to some connections but not all. \n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tLogger::getLogger()->debug(\"attachNewDb inUse dbHandle :%X: sqlCmd :%s: \", dbHandle, sqlCmd.c_str());\n\t\t}\n\t}\n\tm_attachedDatabases++;\n\n\tinUseLock.unlock();\n\tidleLock.unlock();\n\n\treturn (result);\n}\n\n/**\n * Detach a database from all the connections\n *\n */\nbool ConnectionManager::detachNewDb(std::string &alias)\n{\n\tint rc;\n\tstd::string sqlCmd;\n\tsqlite3 *dbHandle;\n\tbool result;\n\tchar *zErrMsg = NULL;\n\n\tresult = true;\n\n\tsqlCmd = \"DETACH DATABASE \" + alias + \";\";\n\tLogger::getLogger()->debug(\"detachDb - db alias :%s: cmd :%s:\" ,  alias.c_str() , sqlCmd.c_str() );\n\n\tidleLock.lock();\n\tinUseLock.lock();\n\n\t// detach the DB from all idle connections\n\tfor (auto conn : idle)\n\t{\n\t\tdbHandle = conn->getDbHandle();\n\t\trc = SQLExec (dbHandle, sqlCmd.c_str(), &zErrMsg);\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"detachNewDb - It was not possible to detach the db :%s: from an idle connection, error :%s:\", alias.c_str(), zErrMsg);\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\tresult = false;\n\t\t\tbreak;\n\t\t}\n\t\tLogger::getLogger()->debug(\"detachNewDb - idle dbHandle :%X: sqlCmd :%s: \", dbHandle, sqlCmd.c_str());\n\t}\n\n\tif 
(result)\n\t{\n\t\t// detach the DB from all inUse connections\n\t\tfor (auto conn : inUse)\n\t\t{\n\t\t\tdbHandle = conn->getDbHandle();\n\t\t\trc = SQLExec (dbHandle, sqlCmd.c_str(), &zErrMsg);\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"detachNewDb - It was not possible to detach the db :%s: from an inUse connection, error :%s:\", alias.c_str() ,zErrMsg);\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t\tresult = false;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tLogger::getLogger()->debug(\"detachNewDb - inUse dbHandle :%X: sqlCmd :%s: \", dbHandle, sqlCmd.c_str());\n\t\t}\n\t}\n\tm_attachedDatabases--;\n\n\tinUseLock.unlock();\n\tidleLock.unlock();\n\n\treturn (result);\n}\n\n\n/**\n * Adds to all the connections a request to attach a database\n *\n * @param newDbId  - database id to attach\n * @param dbHandle - dbhandle for which the attach request should NOT be added\n *\n */\nbool ConnectionManager::attachRequestNewDb(int newDbId, sqlite3 *dbHandle)\n{\n\tint rc;\n\tstd::string sqlCmd;\n\tbool result;\n\tchar *zErrMsg = NULL;\n\n\tint poolSize = idle.size() + inUse.size();\n\tif (poolSize * m_attachedDatabases * NO_DESCRIPTORS_PER_DB \n\t\t\t> (DESCRIPTOR_THRESHOLD * m_descriptorLimit) / 100)\n\t{\n\t\tLogger::getLogger()->warn(\"Request to attach new database rejected\"\n\t\t\t       \" due to excessive file descriptor usage\");\n\t\treturn false;\n\t}\n\tresult = true;\n\n\tidleLock.lock();\n\tinUseLock.lock();\n\n\t// flag the attach request on all idle connections\n\tfor (auto conn : idle)\n\t{\n\t\tif (dbHandle == conn->getDbHandle())\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"attachRequestNewDb - idle skipped dbHandle :%X: sqlCmd :%s: \", conn->getDbHandle(), sqlCmd.c_str());\n\n\t\t} else\n\t\t{\n\t\t\tconn->setUsedDbId(newDbId);\n\n\t\t\tLogger::getLogger()->debug(\"attachRequestNewDb - idle, dbHandle :%X: sqlCmd :%s: \", conn->getDbHandle(), sqlCmd.c_str());\n\t\t}\n\n\t}\n\n\tif (result)\n\t{\n\t\t// flag the attach request on all inUse 
connections\n\t\t{\n\n\t\t\tfor (auto conn : inUse) {\n\n\t\t\t\tif (dbHandle == conn->getDbHandle())\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->debug(\"attachRequestNewDb - inUse skipped dbHandle :%X: sqlCmd :%s: \", conn->getDbHandle(), sqlCmd.c_str());\n\t\t\t\t} else\n\t\t\t\t{\n\t\t\t\t\tconn->setUsedDbId(newDbId);\n\n\t\t\t\t\tLogger::getLogger()->debug(\"attachRequestNewDb - inUse, dbHandle :%X: sqlCmd :%s: \", conn->getDbHandle(), sqlCmd.c_str());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tm_attachedDatabases++;\n\n\tinUseLock.unlock();\n\tidleLock.unlock();\n\n\treturn (result);\n}\n\n\n/**\n * Release a connection back to the idle pool for\n * reallocation.\n *\n * @param conn\tThe connection to release.\n */\nvoid ConnectionManager::release(Connection *conn)\n{\n#if TRACK_CONNECTION_USER\n\tconn->clearUsage();\n#endif\n\tinUseLock.lock();\n\tinUse.remove(conn);\n\tinUseLock.unlock();\n\tidleLock.lock();\n\tidle.push_back(conn);\n\tidleLock.unlock();\n}\n\n/**\n * Set the last error information for a plugin.\n *\n * @param source\tThe source of the error\n * @param description\tThe error description\n * @param retryable\tFlag to determine if the error condition is transient\n */\nvoid ConnectionManager::setError(const char *source, const char *description, bool retryable)\n{\n\terrorLock.lock();\n\tif (lastError.entryPoint)\n\t\tfree(lastError.entryPoint);\n\tif (lastError.message)\n\t\tfree(lastError.message);\n\tlastError.retryable = retryable;\n\tlastError.entryPoint = strdup(source);\n\tlastError.message = strdup(description);\n\terrorLock.unlock();\n}\n\n/**\n * SQLite wrapper to retry statements when the database is locked\n *\n */\nint ConnectionManager::SQLExec(sqlite3 *dbHandle, const char *sqlCmd, char **errMsg)\n{\n\tint retries = 0, rc;\n\n\n\tdo {\n\t\tif (errMsg == NULL)\n\t\t{\n\t\t\trc = sqlite3_exec(dbHandle, sqlCmd, NULL, NULL, NULL);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (*errMsg)\n\t\t\t{\n\t\t\t\tsqlite3_free(*errMsg);\n\t\t\t\t*errMsg = 
NULL;\n\t\t\t}\n\t\t\trc = sqlite3_exec(dbHandle, sqlCmd, NULL, NULL, errMsg);\n\t\t\tLogger::getLogger()->debug(\"SQLExec: rc :%d: \", rc);\n\t\t}\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\tint interval = (retries * RETRY_BACKOFF);\n\t\t\tusleep(interval);\t// back off; usleep() sleeps for interval microseconds\n\t\t\tif (retries > 5)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"ConnectionManager::SQLExec - error :%s: dbHandle :%X: sqlCmd :%s: retry :%d: of :%d:\",\n\t\t\t\t\t\t\t\t\t\t  sqlite3_errmsg(dbHandle),\n\t\t\t\t\t\t\t\t\t\t  dbHandle,\n\t\t\t\t\t\t\t\t\t\t  sqlCmd,\n\t\t\t\t\t\t\t\t\t\t  retries,\n\t\t\t\t\t\t\t\t\t\t  MAX_RETRIES);\n\t\t\t}\n\t\t\tretries++;\n\t\t}\n\t} while (retries < MAX_RETRIES && (rc != SQLITE_OK));\n\n\tif (rc == SQLITE_LOCKED)\n\t{\n\t\tLogger::getLogger()->error(\"ConnectionManager::SQLExec - Database still locked after maximum retries\");\n\t}\n\tif (rc == SQLITE_BUSY)\n\t{\n\t\tLogger::getLogger()->error(\"ConnectionManager::SQLExec - Database still busy after maximum retries\");\n\t}\n\n\treturn rc;\n}\n\n/**\n * Background thread used to execute periodic tasks and oversee the database activity.\n *\n * We will run the SQLite vacuum command periodically to allow space to be reclaimed\n */\nvoid ConnectionManager::background()\n{\n\ttime_t nextVacuum = time(0) + m_vacuumInterval;\n\n\twhile (!m_shutdown)\n\t{\n\t\tif (m_diskSpaceMonitor)\n\t\t\tm_diskSpaceMonitor->periodic(15);\t// Called with the interval we sleep for\n\t\tsleep(15);\n\t\ttime_t tim = time(0);\n\t\tif (m_vacuumInterval && tim > nextVacuum)\n\t\t{\n\t\t\tConnection *con = allocate();\n\t\t\tcon->vacuum();\n\t\t\trelease(con);\n\t\t\tnextVacuum = time(0) + m_vacuumInterval;\n\t\t}\n\t}\n}\n\n/**\n * Determine if we can allow another database to be created and attached to all the\n * connections.\n *\n * @return True if we can create another database.\n */\nbool ConnectionManager::allowMoreDatabases()\n{\n\t// Allow for a couple of user defined schemas as well as the fledge database\n\tif 
(m_attachedDatabases + 4 > ReadingsCatalogue::getInstance()->getMaxAttached())\n\t{\n\t\treturn false;\n\t}\n\tint poolSize = idle.size() + inUse.size();\n\tif (poolSize * (m_attachedDatabases + 1) * NO_DESCRIPTORS_PER_DB \n\t\t\t> (DESCRIPTOR_THRESHOLD * m_descriptorLimit) / 100)\n\t{\n\t\treturn false;\n\t}\n\treturn true;\n}\n\nvoid ConnectionManager::noConnectionsDiagnostic()\n{\n#if TRACK_CONNECTION_USER\n\tLogger *logger = Logger::getLogger();\n\n\tinUseLock.lock();\n\tlogger->warn(\"There are %d connections in use currently\", inUse.size());\n\tfor (auto conn : inUse)\n\t{\n\t\tlogger->warn(\"  Connection in use by %s\", conn->getUsage().c_str());\n\t}\n\tinUseLock.unlock();\n#endif\n}\n\n/**\n * Return the pragma configuration for the database\n */\nstring ConnectionManager::getDBConfiguration()\n{\n\tif (m_config && m_config->itemExists(\"deployment\"))\n\t{\n\t\tstring mode = m_config->getValue(\"deployment\");\n\t\tif (mode.compare(\"Small\") == 0)\n\t\t\treturn PRAGMA_SMALL;\n\t\tif (mode.compare(\"High Bandwidth\") == 0)\n\t\t\treturn PRAGMA_HISPEED;\n\t}\n\treturn PRAGMA_NORMAL;\n}\n"
  },
  {
    "path": "C/plugins/storage/sqlite/common/include/connection.h",
    "content": "#ifndef _CONNECTION_H\n#define _CONNECTION_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <sql_buffer.h>\n#include <string>\n#include <rapidjson/document.h>\n#include <sqlite3.h>\n#include <mutex>\n#include <reading_stream.h>\n#include <schema.h>\n#include <map>\n#include <vector>\n#include <atomic>\n\nclass ConnectionManager;\n\n#define TRACK_CONNECTION_USER\t\t0 // Set to 1 to get diagnostics about connection pool use\n\n#define READINGS_DB_FILE_NAME     \"/\" READINGS_DB_NAME_BASE \"_1.db\"\n#define READINGS_DB               READINGS_DB_NAME_BASE \"_1\"\n#define READINGS_TABLE            \"readings\"\n#define READINGS_TABLE_MEM       READINGS_TABLE \"_1\"\n\n\n// Set plugin name for log messages\n#ifndef PLUGIN_LOG_NAME\n#define PLUGIN_LOG_NAME \"SQLite3\"\n#endif\n\n// Retry mechanism\n#define PREP_CMD_MAX_RETRIES\t\t50\t// Maximum no. of retries when a lock is encountered\n#define PREP_CMD_RETRY_BASE \t\t50\t// Base time to wait for\n#define PREP_CMD_RETRY_BACKOFF\t\t50\t// Variable time to wait for\n\n#define MAX_RETRIES\t\t\t40\t// Maximum no. of retries when a lock is encountered\n#define RETRY_BACKOFF\t\t\t50\t// Multiplier used to back off DB retries on lock\n\n/*\n * Control the way purge deletes readings. The block size sets a limit as to how many rows\n * get deleted in each call, whilst the sleep interval controls how long the thread sleeps\n * between deletes. 
The idea is to not keep the database locked too long and allow other threads\n * to have access to the database between blocks.\n */\n#define PURGE_SLEEP_MS 500\n\n#define PURGE_DELETE_BLOCK_SIZE\t    10000\n#define MIN_PURGE_DELETE_BLOCK_SIZE\t1000\n#define MAX_PURGE_DELETE_BLOCK_SIZE\t10000\n\n#define TARGET_PURGE_BLOCK_DEL_TIME\t(70*1000) \t// 70 msec\n#define PURGE_BLOCK_SZ_GRANULARITY\t5 \t// 5 rows\n#define RECALC_PURGE_BLOCK_SIZE_NUM_BLOCKS\t30\t// recalculate purge block size after every 30 blocks\n\n#define PURGE_SLOWDOWN_AFTER_BLOCKS 5\n#define PURGE_SLOWDOWN_SLEEP_MS 500\n\n#define SECONDS_PER_DAY \"86400.0\"\n// 2440587.5 is the julian day at 1/1/1970 0:00 UTC.\n#define JULIAN_DAY_START_UNIXTIME \"2440587.5\"\n\n\n#define START_TIME std::chrono::high_resolution_clock::time_point t1 = std::chrono::high_resolution_clock::now();\n#define END_TIME std::chrono::high_resolution_clock::time_point t2 = std::chrono::high_resolution_clock::now(); \\\n\t\t\t\t auto usecs = std::chrono::duration_cast<std::chrono::microseconds>( t2 - t1 ).count();\n\n\nint dateCallback(void *data, int nCols, char **colValues, char **colNames);\nbool applyColumnDateFormat(const std::string& inFormat,\n\t\t\t   const std::string& colName,\n\t\t\t   std::string& outFormat,\n\t\t\t   bool roundMs = false);\n\nbool applyColumnDateFormatLocaltime(const std::string& inFormat,\n\t\t\t\t    const std::string& colName,\n\t\t\t\t    std::string& outFormat,\n\t\t\t\t    bool roundMs = false);\n\nint rowidCallback(void *data,\n\t\t  int nCols,\n\t\t  char **colValues,\n\t\t  char **colNames);\n\nint selectCallback(void *data,\n\t\t   int nCols,\n\t\t   char **colValues,\n\t\t   char **colNames);\n\nint countCallback(void *data,\n\t\t  int nCols,\n\t\t  char **colValues,\n\t\t  char **colNames);\n\nbool applyDateFormat(const std::string& inFormat, std::string& outFormat);\n\nclass Connection {\n\tpublic:\n\t\tConnection(ConnectionManager *manager);\n\t\t~Connection();\n#ifndef 
SQLITE_SPLIT_READINGS\n\t\tbool\t\tcreateSchema(const std::string& schema);\n\t\tbool\t\tretrieve(const std::string& schema,\n\t\t\t\t\t const std::string& table,\n\t\t\t\t\t const std::string& condition,\n\t\t\t\t\t std::string& resultSet);\n\t\tint\t\tinsert(const std::string& schema,\n\t\t\t\t\tconst std::string& table,\n\t\t\t\t\tconst std::string& data);\n\t\tint\t\tupdate(const std::string& schema,\n\t\t\t\t\t\tconst std::string& table,\n\t\t\t\t\t\tconst std::string& data);\n\t\tint\t\tdeleteRows(const std::string& schema,\n\t\t\t\t\t\tconst std::string& table,\n\t\t\t\t\t\tconst std::string& condition);\n\t\tint\t\tcreate_table_snapshot(const std::string& table, const std::string& id);\n\t\tint\t\tload_table_snapshot(const std::string& table, const std::string& id);\n\t\tint\t\tdelete_table_snapshot(const std::string& table, const std::string& id);\n\t\tbool\t\tget_table_snapshots(const std::string& table, std::string& resultSet);\n#endif\n\t\tint\t\tappendReadings(const char *readings);\n\t\tint \t\treadingStream(ReadingStream **readings, bool commit);\n\t\tbool\t\tfetchReadings(unsigned long id, unsigned int blksize,\n\t\t\t\t\t\tstd::string& resultSet);\n\t\tbool\t\tretrieveReadings(const std::string& condition,\n\t\t\t\t\t\t std::string& resultSet);\n\t\tunsigned int\tpurgeReadings(unsigned long age, unsigned int flags,\n\t\t\t\t\t\tunsigned long sent, std::string& results);\n\t\tunsigned int\tpurgeReadingsByRows(unsigned long rowcount, unsigned int flags,\n\t\t\t\t\t\tunsigned long sent, std::string& results);\n\t\tlong\t\ttableSize(const std::string& table);\n\t\tvoid\t\tsetTrace(bool);\n\t\tbool\t\tformatDate(char *formatted_date, size_t formatted_date_size, const char *date);\n\t\tbool\t\taggregateQuery(const rapidjson::Value& payload, std::string& resultSet);\n\t\tbool\t\tgetNow(std::string& Now);\n\n\t\tsqlite3\t\t*getDbHandle() {return dbHandle;};\n\t\tvoid\t\tsetUsedDbId(int dbId);\n\n\t\tvoid\t\tshutdownAppendReadings();\n\t\tunsigned 
int\tpurgeReadingsAsset(const std::string& asset);\n\t\tbool\t\tvacuum();\n\t\tbool\t\tsupportsReadings() { return ! m_noReadings; };\n#if TRACK_CONNECTION_USER\n\t\tvoid\t\tsetUsage(std::string usage) { m_usage = usage; };\n\t\tvoid\t\tclearUsage() { m_usage = \"\"; };\n\t\tstd::string\tgetUsage() { return m_usage; };\n#endif\n\n\tprivate:\n\t\tstd::string\toperation(const char *sql);\n\t\tstd::vector<int>\n\t\t       \t\tm_NewDbIdList;            // Newly created databases that should be attached\n\n\t\tbool\t\tm_streamOpenTransaction;\n\t\tint\t\tm_queuing;\n\t\tstd::mutex\tm_qMutex;\n\t\tint\t\tSQLPrepare(sqlite3 *dbHandle, const char *sqlCmd, sqlite3_stmt **readingsStmt);\n\t\tint\t\tSQLexec(sqlite3 *db, const std::string& table, const char *sql,\n\t\t\t\tint (*callback)(void*,int,char**,char**),\n\t\t\t\t\tvoid *cbArg, char **errmsg);\n\n\t\tint\t\tSQLstep(sqlite3_stmt *statement);\n\t\tbool\t\tm_logSQL;\n\t\tvoid\t\traiseError(const char *operation, const char *reason,...);\n\t\tsqlite3\t\t*dbHandle;\n\t\tSchemaManager\t*m_schemaManager;\n\t\tint\t\tmapResultSet(void *res, std::string& resultSet, unsigned long *rowsCount = nullptr);\n#ifndef SQLITE_SPLIT_READINGS\n\t\tbool\t\tjsonWhereClause(const rapidjson::Value& whereClause, SQLBuffer&, std::vector<std::string>  &asset_codes, bool convertLocaltime = false, std::string prefix = \"\");\n#else\n\t\tbool\t\tjsonWhereClause(const rapidjson::Value& whereClause, SQLBuffer&, bool convertLocaltime = false, std::string prefix = \"\");\n#endif\n\t\tbool\t\tjsonModifiers(const rapidjson::Value&, SQLBuffer&, bool isTableReading = false);\n#ifndef SQLITE_SPLIT_READINGS\n\t\tbool\t\tjsonAggregates(const rapidjson::Value&,\n\t\t\t\t\t       const rapidjson::Value&,\n\t\t\t\t\t       SQLBuffer&,\n\t\t\t\t\t       SQLBuffer&,\n\t\t\t\t\t       bool &isOptAggregate,\n\t\t\t\t\t       bool isTableReading = false,\n\t\t\t\t\t       bool isExtQuery = false\n\t\t\t\t\t       );\n#else\n\tbool\t\t\tjsonAggregates(const 
rapidjson::Value&,\n\t\t                               const rapidjson::Value&,\n\t\t                               SQLBuffer&,\n\t\t                               SQLBuffer&,\n\t\t                               bool isTableReading = false);\n#endif\n\t\tbool\t\treturnJson(const rapidjson::Value&, SQLBuffer&, SQLBuffer&);\n\t\tchar\t\t*trim(char *str);\n\t\tconst std::string\n\t\t\t\tescape(const std::string&);\n\t\tbool\t\tapplyColumnDateTimeFormat(sqlite3_stmt *pStmt,\n\t\t\t\t\t\tint i,\n\t\t\t\t\t\tstd::string& newDate);\n\t\tvoid\t\tlogSQL(const char *, const char *);\n\t\tbool\t\tselectColumns(const rapidjson::Value& document, SQLBuffer& sql, int level);\n\t\tbool \t\tappendTables(const std::string& schema, const rapidjson::Value& document, SQLBuffer& sql, int level);\n\t\tbool\t\tprocessJoinQueryWhereClause(const rapidjson::Value& query, SQLBuffer& sql, std::vector<std::string>  &asset_codes, int level);\n\t\tbool\t\tm_noReadings;\n#if TRACK_CONNECTION_USER\n\t\tstd::string\tm_usage;\n#endif\n\t\tConnectionManager\n\t\t\t\t*m_manager;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlite/common/include/connection_manager.h",
    "content": "#ifndef _CONNECTION_MANAGER_H\n#define _CONNECTION_MANAGER_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <sqlite3.h>\n\n#include <plugin_api.h>\n#include <list>\n#include <mutex>\n#include <thread>\n#include <config_category.h>\n\n#define NO_DESCRIPTORS_PER_DB\t3\t// 3 descriptors per database when using WAL mode\n#define DESCRIPTOR_THRESHOLD\t75\t// Percentage of descriptors that can be used on database connections\n\nclass Connection;\nclass DiskSpaceMonitor;\n\n/**\n * Singleton class to manage SQLite3 connection pool\n */\nclass ConnectionManager {\n\tpublic:\n\t\tstatic ConnectionManager  *getInstance();\n\t\tvoid                      growPool(unsigned int);\n\t\tunsigned int              shrinkPool(unsigned int);\n\t\tConnection                *allocate();\n\t\tbool                      attachNewDb(std::string &path, std::string &alias);\n\t\tbool                      attachRequestNewDb(int newDbId, sqlite3 *dbHandle);\n\t\tbool \t\t\t  detachNewDb(std::string &alias);\n\t\tvoid                      release(Connection *);\n\t\tvoid\t\t\t  shutdown();\n\t\tvoid\t\t\t  setError(const char *, const char *, bool);\n\t\tPLUGIN_ERROR\t\t  *getError()\n\t\t\t\t\t  {\n\t\t\t\t\t\treturn &lastError;\n\t\t\t\t\t  }\n\t\tvoid\t\t\t  background();\n\t\tvoid\t\t\t  setVacuumInterval(long hours)\n\t\t\t\t\t  {\n\t\t\t\t\t\tm_vacuumInterval = 60 * 60 * hours;\n\t\t\t\t\t  };\n\t\tbool\t\t\t  allowMoreDatabases();\n\t\tvoid\t\t\t  setConfiguration(ConfigCategory *category)\n\t\t\t\t\t  {\n\t\t\t\t\t\t  m_config = category;\n\t\t\t\t\t  };\n\t\tstd::string\t\t  getDBConfiguration();\n\n\tprotected:\n\t\tConnectionManager();\n\n\tprivate:\n\t\tstatic ConnectionManager     *instance;\n\t\tint\t\t       \t     SQLExec(sqlite3 *dbHandle, const char *sqlCmd,\n\t\t\t\t\t\t\tchar **errMsg);\n\t\tvoid\t\t\t     
noConnectionsDiagnostic();\n\n\tprotected:\n\t\tstd::list<Connection *>      idle;\n\t\tstd::list<Connection *>      inUse;\n\t\tstd::mutex                   idleLock;\n\t\tstd::mutex                   inUseLock;\n\t\tstd::mutex                   errorLock;\n\t\tPLUGIN_ERROR\t\t     lastError;\n\t\tbool\t\t\t     m_trace;\n\t\tbool\t\t\t     m_shutdown;\n\t\tstd::thread\t\t     *m_background;\n\t\tlong                         m_vacuumInterval;\n\t\tunsigned int\t\t     m_descriptorLimit;\n\t\tunsigned int\t\t     m_attachedDatabases;\n\t\tDiskSpaceMonitor\t     *m_diskSpaceMonitor;\n\t\tConfigCategory\t\t     *m_config;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlite/common/include/purge_configuration.h",
    "content": "#ifndef _PURGE_CONFIGURATION_H\n#define _PURGE_CONFIGURATION_H\n/*\n * Fledge storage service - Purge configuration\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n#include <vector>\n#include <cstdint>\n\nclass PurgeConfiguration {\n\tpublic:\n\t\tstatic PurgeConfiguration\t*getInstance();\n\t\tvoid\t\t\t\texclude(const std::string& asset);\n\t\tbool\t\t\t\thasExclusions() { return m_exclude.size() != 0; };\n\t\tbool\t\t\t\tisExcluded(const std::string& asset);\n\t\tvoid\t\t\t\tminimumRetained(uint32_t minimum);\n\t\tuint32_t\t\t\tgetMinimumRetained() { return m_minimum; };\n\tprivate:\n\t\tPurgeConfiguration();\n\t\t~PurgeConfiguration();\n\tprivate:\n\t\tstatic PurgeConfiguration\t*m_instance;\n\t\tstd::vector<std::string>\tm_exclude;\n\t\tuint32_t\t\t\tm_minimum;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlite/common/include/readings_catalogue.h",
    "content": "#ifndef _READINGS_CATALOGUE_H\n#define _READINGS_CATALOGUE_H\n/*\n * Fledge storage service - Readings catalogue handling\n *\n * Copyright (c) 2020 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli, Massimiliano Pinto\n */\n\n#include \"connection.h\"\n#include <thread>\n\n#define\tOVERFLOW_TABLE_ID\t0\t// Table ID to use for the overflow table\n\n/**\n * This class handles per thread started transaction boundaries:\n */\nclass TransactionBoundary {\n\tpublic:\n\t\tTransactionBoundary() {};\n\t\tunsigned long\tGetMinReadingId();\n\t\tvoid\t\tSetThreadTransactionStart(std::thread::id tid,\n\t\t\t\t\t\t\tunsigned long id);\n\t\tvoid\t\tClearThreadTransaction(std::thread::id);\n\n\tprivate:\n\t\tstd::map<std::thread::id, unsigned long>\n\t\t\t\tm_boundaries;\n\t\tstd::mutex\tm_boundaryLock;\n};\n\n\n/**\n * - poolSize                  = Number of connections to allocate\n * - nReadingsPerDb            = Number of readings tables per database\n * - nDbPreallocate            = Number of databases to allocate in advance\n * - nDbLeftFreeBeforeAllocate = Number of free databases before a new allocation is executed\n * - nDbToAllocate             = Number of database to allocate each time\n *\n */\ntypedef struct\n{\n\tint poolSize = 5;\n\tint nReadingsPerDb = 14;\n\tint nDbPreallocate = 3;\n\tint nDbLeftFreeBeforeAllocate = 1;\n\tint nDbToAllocate = 2;\n\n} STORAGE_CONFIGURATION;\n\n/**\n * Class used to store table references\n */\nclass TableReference {\n\tpublic:\n\t\tTableReference(int dbId, int tableId) : m_dbId(dbId), m_tableId(tableId)\n\t\t\t\t{\n\t\t\t\t\tm_issued = time(0);\n\t\t\t\t};\n\t\ttime_t\t\tlastIssued()\n\t       \t\t\t{\n\t\t\t\t\treturn m_issued;\n\t\t\t\t};\n\t\tint\t\tgetTable()\n\t\t\t\t{\n\t\t\t\t\treturn m_tableId;\n\t\t\t\t};\n\t\tint\t\tgetDatabase()\n\t\t\t\t{\n\t\t\t\t\treturn m_dbId;\n\t\t\t\t};\n\t\tvoid\t\tissue()\n\t\t\t\t{\n\t\t\t\t\tm_issued = 
time(0);\n\t\t\t\t};\n\tprivate:\n\t\tint\t\tm_dbId;\n\t\tint\t\tm_tableId;\n\t\ttime_t\t\tm_issued;\n};\n\n/**\n * Implements the handling of multiple readings tables stored among multiple SQLite databases.\n *\n * The databases are named using the format readings_<dbid>, like for example readings_1.db\n * and each database contains multiple readings tables named readings_<dbid>_<id> like readings_1_1, readings_1_2\n *\n * The table asset_reading_catalogue is used as a catalogue in order to map a particular asset_code\n * to a table that holds readings for that asset_code.\n *\n * readings_1.asset_reading_catalogue:\n * - table_id     INTEGER               NOT NULL,\n * - db_id        INTEGER               NOT NULL,\n * - asset_code   character varying(50) NOT NULL\n *\n * The first reading table readings_1_1 is created by the script init_readings.sql executed during the storage init;\n * all the other readings tables are created by the code when Fledge starts.\n *\n * The table configuration_readings created by the script init_readings.sql keeps track of the information:\n *\n * - global_id         -- Stores the last global Id used +1, Updated at -1 when Fledge starts, Updated at the proper value when Fledge stops\n * - db_id_Last        -- Latest database available\n * - n_readings_per_db -- Number of readings tables per database\n * - n_db_preallocate  -- Number of databases to allocate in advance\n *\n * The readings tables are allocated in sequence starting from the readings_1_1 and proceeding with the other tables available in the first database.\n * The tables in the 2nd database (readings_2.db) will be used when all the tables in the first db are allocated.\n *\n * Implementation notes:\n *\n * 1) Many functions receive the database connection as an input parameter:\n *\n * - sqlite3 *dbHandle\n *\n * and they will use that connection for the sql operations instead of allocating a new one each time.\n * This approach allows the code to:\n *\n * - allocate a connection 
once and use it for all the following operations\n * - avoid receiving a connection with a different configuration (attached databases),\n *   as the connections are handled in a pool and it is not defined which one will be allocated;\n *   moreover, all the operations are executed in parallel across multiple threads\n *\n */\nclass ReadingsCatalogue {\n\npublic:\n\ttypedef struct ReadingReference {\n\t\tint dbId;\n\t\tint tableId;\n\n\t} tyReadingReference;\n\n\tstatic ReadingsCatalogue *getInstance()\n\t{\n\t\tstatic ReadingsCatalogue *instance = 0;\n\n\t\tif (!instance)\n\t\t{\n\t\t\tinstance = new ReadingsCatalogue;\n\t\t}\n\t\treturn instance;\n\t}\n\n\tvoid          multipleReadingsInit(STORAGE_CONFIGURATION &storageConfig);\n\tstd::string   generateDbAlias(int dbId);\n\tstd::string   generateDbName(int tableId);\n\tstd::string   generateDbFileName(int dbId);\n\tstd::string   generateDbNameFromTableId(int tableId);\n\tstd::string   generateReadingsName(int  dbId, int tableId);\n\tvoid          getAllDbs(std::vector<int> &dbIdList);\n\tvoid          getNewDbs(std::vector<int> &dbIdList);\n\tint           getMaxReadingsId(int dbId);\n\tint           getReadingsCount();\n\tint           getReadingPosition(int dbId, int tableId);\n\tint           getNReadingsAvailable() const      {return m_nReadingsAvailable;}\n\tlong\t      getIncGlobalId() { return m_ReadingsGlobalId.fetch_add(1); }  // returns the value before the add operation\n\tlong\t      getMinGlobalId (sqlite3 *dbHandle);\n\tlong \t      getGlobalId() {return m_ReadingsGlobalId;};\n\tbool          evaluateGlobalId();\n\tbool          storeGlobalId ();\n\n\tvoid          preallocateReadingsTables(int dbId);\n\tbool          loadAssetReadingCatalogue();\n\tbool          loadEmptyAssetReadingCatalogue(bool clean = true);\n\n\tbool          latestDbUpdate(sqlite3 *dbHandle, int newDbId);\n\tint           preallocateNewDbsRange(int dbIdStart, int dbIdEnd);\n\ttyReadingReference 
getEmptyReadingTableReference(std::string& asset);\n\ttyReadingReference getReadingReference(Connection *connection, const char *asset_code);\n\tbool          attachDbsToAllConnections();\n\tstd::string   sqlConstructMultiDb(std::string &sqlCmdBase, std::vector<std::string>  &assetCodes, bool considerExclusion=false);\n\tstd::string   sqlConstructOverflow(std::string &sqlCmdBase, std::vector<std::string>  &assetCodes, bool considerExclusion=false, bool groupBy = false);\n\tint           purgeAllReadings(sqlite3 *dbHandle, const char *sqlCmdBase, char **errMsg = NULL, unsigned long *rowsAffected = NULL);\n\n\tbool          connectionAttachAllDbs(sqlite3 *dbHandle);\n\tbool          connectionAttachDbList(sqlite3 *dbHandle, std::vector<int> &dbIdList);\n\tbool          attachDb(sqlite3 *dbHandle, std::string &path, std::string &alias, int dbId);\n\tvoid          detachDb(sqlite3 *dbHandle, std::string &alias);\n\n\tvoid          setUsedDbId(int dbId);\n\tint           extractReadingsIdFromName(std::string tableName);\n\tint           extractDbIdFromName(std::string tableName);\n\tint           SQLExec(sqlite3 *dbHandle, const char *sqlCmd,  char **errMsg = NULL);\n\tbool\t      createReadingsOverflowTable(sqlite3 *dbHandle, int dbId);\n\tint\t      getMaxAttached() { return m_attachLimit; };\n\n\nprivate:\n\tSTORAGE_CONFIGURATION m_storageConfigCurrent;                           // The current configuration of the multiple readings\n\tSTORAGE_CONFIGURATION m_storageConfigApi;                               // The parameters retrieved from the API\n\n\tenum NEW_DB_OPERATION {\n\t\tNEW_DB_ATTACH_ALL,\n\t\tNEW_DB_ATTACH_REQUEST,\n\t\tNEW_DB_DETACH\n\t};\n\n\tenum ACTION  {\n\t\tACTION_DB_ADD,\n\t\tACTION_DB_REMOVE,\n\t\tACTION_DB_NONE,\n\t\tACTION_TB_ADD,\n\t\tACTION_TB_REMOVE,\n\t\tACTION_TB_NONE,\n\t\tACTION_INVALID\n\t};\n\n\ttypedef struct ReadingAvailable {\n\t\tint lastReadings;\n\t\tint tableCount;\n\n\t} tyReadingsAvailable;\n\n\tReadingsCatalogue();\n\n\tbool    
      createNewDB(sqlite3 *dbHandle, int newDbId,  int startId, NEW_DB_OPERATION attachAllDb);\n\tint           getUsedTablesDbId(int dbId);\n\tint           getNReadingsAllocate() const {return m_storageConfigCurrent.nReadingsPerDb;}\n\tbool          createReadingsTables(sqlite3 *dbHandle, int dbId, int idStartFrom, int nTables);\n\tbool          isReadingAvailable() const;\n\tvoid          allocateReadingAvailable();\n\ttyReadingsAvailable   evaluateLastReadingAvailable(sqlite3 *dbHandle, int dbId);\n\tlong          calculateGlobalId (sqlite3 *dbHandle);\n\tstd::string   generateDbFilePath(int dbId);\n\n\tvoid\t\t  raiseError(const char *operation, const char *reason,...);\n\tint\t\t\t  SQLStep(sqlite3_stmt *statement);\n\tbool          enableWAL(std::string &dbPathReadings);\n\n\tbool          configurationRetrieve(sqlite3 *dbHandle);\n\tvoid          prepareAllDbs();\n\tbool          applyStorageConfigChanges(sqlite3 *dbHandle);\n\tvoid          dbFileDelete(std::string dbPath);\n\tvoid          dbsRemove(int startId, int endId);\n\tvoid          storeReadingsConfiguration (sqlite3 *dbHandle);\n\tACTION        changesLogicDBs(int dbIdCurrent , int dbIdLast, int nDbPreallocateCurrent, int nDbPreallocateRequest, int nDbLeftFreeBeforeAllocate);\n\tACTION        changesLogicTables(int maxUsed ,int Current, int Request);\n\tint           retrieveDbIdFromTableId(int tableId);\n\n\tvoid          configChangeAddDb(sqlite3 *dbHandle);\n\tvoid          configChangeRemoveDb(sqlite3 *dbHandle);\n\tvoid          configChangeAddTables(sqlite3 *dbHandle , int startId, int endId);\n\tvoid          configChangeRemoveTables(sqlite3 *dbHandle , int startId, int endId);\n\n\tint           calcMaxReadingUsed();\n\tvoid          dropReadingsTables(sqlite3 *dbHandle, int dbId, int idStart, int idEnd);\n\n\n\tint           m_dbIdCurrent;            // Current database in use\n\tint           m_dbIdLast;               // Last database available not already in use\n\tint           
m_dbNAvailable;           // Number of databases available\n\tstd::vector<int>\n\t\t      m_dbIdList;               // Databases already created but not in use\n\n\tstd::atomic<long>\n  \t\t      m_ReadingsGlobalId;       // Global row id shared among all the readings table\n\tint\n \t\t      m_nReadingsAvailable = 0; // Number of readings tables available\n\tstd::map <std::string, TableReference>   m_AssetReadingCatalogue={ // In memory structure to identify in which database/table an asset is stored\n\n\t\t// asset_code  - reading Table Id, Db Id\n\t\t// {\"\",         ,{1               ,1 }}\n\t};\n\tstd::map <std::string, std::pair<int, int>>   m_EmptyAssetReadingCatalogue={ // In memory structure to identify in which database/table an asset is empty \n\t\t// asset_code  - reading Table Id, Db Id\n\t\t// {\"\",         ,{1               ,1 }}\n\t};\n\tint\t       m_nextOverflow;\t// The next database to use for overflow assets\n\tint\t       m_attachLimit;\n\tint\t       m_maxOverflowUsed;\n\tint\t       m_compounds; \t// Max number of compound statements\n\tstd::mutex     m_emptyReadingTableMutex;\npublic:\n\tTransactionBoundary\t\t\t\tm_tx;\n\n};\n\n/**\n * Used to synchronize the attach database operation\n */\nclass AttachDbSync {\n\npublic:\n\tstatic AttachDbSync *getInstance()\n\t{\n\t\tstatic AttachDbSync instance;\n\t\treturn &instance;\n\t}\n\n\tvoid   lock()    {m_dbLock.lock();}\n\tvoid   unlock()  {m_dbLock.unlock();}\n\nprivate:\n\tAttachDbSync(){};\n\n\tstd::mutex m_dbLock;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlite/common/include/sqlite_common.h",
    "content": "#ifndef _COMMON_CONNECTION_H\n#define _COMMON_CONNECTION_H\n\n#include <sql_buffer.h>\n#include <iostream>\n#include <sqlite3.h>\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/error/error.h\"\n#include \"rapidjson/error/en.h\"\n#include <string>\n#include <map>\n#include <stdarg.h>\n#include <stdlib.h>\n#include <sstream>\n#include <logger.h>\n#include <time.h>\n#include <unistd.h>\n#include <chrono>\n#include <thread>\n#include <atomic>\n#include <condition_variable>\n#include <sys/time.h>\n\n#define _DB_NAME                  \"/fledge.db\"\n#define READINGS_DB_NAME_BASE     \"readings\"\n\n#define  DB_CONFIGURATION \"PRAGMA busy_timeout = 5000; PRAGMA cache_size = -4000; PRAGMA journal_mode = WAL; PRAGMA secure_delete = off; PRAGMA journal_size_limit = 4096000;\"\n\n#define LEN_BUFFER_DATE 100\n#define F_TIMEH24_S             \"%H:%M:%S\"\n#define F_DATEH24_S             \"%Y-%m-%d %H:%M:%S\"\n#define F_DATEH24_M             \"%Y-%m-%d %H:%M\"\n#define F_DATEH24_H             \"%Y-%m-%d %H\"\n// This is the default datetime format in Fledge: 2018-05-03 18:15:00.622\n#define F_DATEH24_MS            \"%Y-%m-%d %H:%M:%f\"\n// Format up to seconds\n#define F_DATEH24_SEC           \"%Y-%m-%d %H:%M:%S\"\n#define SQLITE3_NOW             \"strftime('%Y-%m-%d %H:%M:%f', 'now', 'localtime')\"\n// The default precision is milliseconds, it adds microseconds and timezone\n#define SQLITE3_NOW_READING     \"strftime('%Y-%m-%d %H:%M:%f000+00:00', 'now')\"\n#define SQLITE3_FLEDGE_DATETIME_TYPE \"DATETIME\"\n\n#define\tSTORAGE_PURGE_RETAIN_ANY 0x0001U\n#define\tSTORAGE_PURGE_RETAIN_ALL 0x0002U\n#define STORAGE_PURGE_SIZE\t 0x0004U\n\nstatic std::map<std::string, std::string> sqliteDateFormat = {\n                                                {\"HH24:MI:SS\",\n                                                        F_TIMEH24_S},\n                                                
{\"YYYY-MM-DD HH24:MI:SS.MS\",\n                                                        F_DATEH24_MS},\n                                                {\"YYYY-MM-DD HH24:MI:SS\",\n                                                        F_DATEH24_S},\n                                                {\"YYYY-MM-DD HH24:MI\",\n                                                        F_DATEH24_M},\n                                                {\"YYYY-MM-DD HH24\",\n                                                        F_DATEH24_H},\n                                                {\"\", \"\"}\n                                        };\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlite/common/purge_configuration.cpp",
    "content": "/*\n * Fledge Configuration management.\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <purge_configuration.h>\n#include <logger.h>\n\nusing namespace std;\n\nPurgeConfiguration *PurgeConfiguration::m_instance = 0;\n\n/**\n * Constructor for the purge configuration class\n */\nPurgeConfiguration::PurgeConfiguration() : m_minimum(0)\n{\n}\n\n/**\n * Destructor for the purge configuration class\n */\nPurgeConfiguration::~PurgeConfiguration()\n{\n}\n\n/**\n * Return the singleton instance of the PurgeConfiguration class\n * for this plugin\n *\n * @return PurgeConfiguration* singleton instance\n */\nPurgeConfiguration *PurgeConfiguration::getInstance()\n{\n\tif (m_instance == 0)\n\t{\n\t\tm_instance = new PurgeConfiguration();\n\t}\n\treturn m_instance;\n}\n\n/**\n * Add an asset to the exclusion list\n *\n * @param asset the asset to add to the exclusion list\n */\nvoid PurgeConfiguration::exclude(const string& asset)\n{\n\tLogger::getLogger()->debug(\"'%s' added to exclusion list\", asset.c_str());\n\tm_exclude.push_back(asset);\n}\n\n/**\n * Check if the named asset appears in the exclusion list\n *\n * @param asset\tAsset to check for exclusion\n * @return True if the asset is excluded\n */\nbool PurgeConfiguration::isExcluded(const string& asset)\n{\n\tfor (auto it = m_exclude.cbegin(); it != m_exclude.cend(); it++)\n\t{\n\t\tif (it->compare(asset) == 0)\n\t\t{\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\n/**\n * Set the minimum number of rows to retain for each asset\n *\n * @param minimum Minimum number of rows to retain\n */\nvoid PurgeConfiguration::minimumRetained(uint32_t minimum)\n{\n\tm_minimum = minimum;\n}\n"
  },
  {
    "path": "C/plugins/storage/sqlite/common/readings.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2018 OSIsoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <math.h>\n#include <sqlite_common.h>\n#include <connection.h>\n#include <connection_manager.h>\n#include <reading_stream.h>\n#include <random>\n#include <utils.h>\n\n#include <sys/stat.h>\n\n#include <string_utils.h>\n#include <algorithm>\n#include <vector>\n\n#include <readings_catalogue.h>\n\n// 1 enable performance tracking\n#define INSTRUMENT\t0\n\n#define LOG_AFTER_NERRORS 0\n\n#if INSTRUMENT\n#include <sys/time.h>\n#endif\n\n// Decode stream data\n#define\tRDS_USER_TIMESTAMP(stream, x) \tstream[x]->userTs\n#define\tRDS_ASSET_CODE(stream, x)\t\tstream[x]->assetCode\n#define\tRDS_PAYLOAD(stream, x)\t\t\t&(stream[x]->assetCode[0]) + stream[x]->assetCodeLength\n\n//#ifndef PLUGIN_LOG_NAME\n//#define PLUGIN_LOG_NAME \"SQLite 3\"\n//#endif\n\n/**\n * SQLite3 storage plugin for Fledge\n */\n\nusing namespace std;\nusing namespace rapidjson;\n\n#define CONNECT_ERROR_THRESHOLD\t\t5*60\t// 5 minutes\n\n\n/*\n * The following allows for conditional inclusion of code that tracks the top queries\n * run by the storage plugin and the number of times a particular statement has to\n * be retried because of the database being busy./\n */\n#define DO_PROFILE\t\t0\n#define DO_PROFILE_RETRIES\t0\n#if DO_PROFILE\n#include <profile.h>\n\n#define\tTOP_N_STATEMENTS\t\t10\t// Number of statements to report in top n\n#define RETRY_REPORT_THRESHOLD\t\t1000\t// Report retry statistics every X calls\n\nQueryProfile profiler(TOP_N_STATEMENTS);\nunsigned long retryStats[MAX_RETRIES] = { 0,0,0,0,0,0,0,0,0,0 };\nunsigned long numStatements = 0;\nint\t      maxQueue = 0;\n#endif\n\nstatic std::atomic<int> m_waiting(0);\nstatic std::atomic<int> m_writeAccessOngoing(0);\nstatic std::mutex\tdb_mutex;\nstatic std::condition_variable\tdb_cv;\nstatic int purgeBlockSize = PURGE_DELETE_BLOCK_SIZE;\n\nstatic time_t 
connectErrorTime = 0;\n\n// Used to synchronize the shut down of the threads executing appendReadings\nstatic std::atomic<int> m_appendCount(0);\nstatic bool\t\t\t\tm_shutdown=false;\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Check whether to compute timebucket query with min,max,avg for all datapoints\n *\n * @param    payload\tJSON payload\n * @return\t\tTrue if aggregation is 'all'\n */\nbool aggregateAll(const Value& payload)\n{\n\tif (payload.HasMember(\"aggregate\") &&\n\t    payload[\"aggregate\"].IsObject())\n\t{\n\t\tconst Value& agg = payload[\"aggregate\"];\n\t\tif (agg.HasMember(\"operation\") &&\n\t\t    (strcmp(agg[\"operation\"].GetString(), \"all\") == 0))\n\t\t{\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n#endif\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Build, execute and return data of a timebucket query with min,max,avg for all datapoints\n *\n * @param    payload\tJSON object for timebucket query\n * @param    resultSet\tJSON Output buffer\n * @return\t\tTrue on success, false on any error\n */\nbool Connection::aggregateQuery(const Value& payload, string& resultSet)\n{\n\tvector<string>  asset_codes;\n\n\n\tif (!payload.HasMember(\"where\") ||\n\t    !payload.HasMember(\"timebucket\"))\n\t{\n\t\traiseError(\"retrieve\", \"aggregateQuery is missing \"\n\t\t\t   \"'where' and/or 'timebucket' properties\");\n\t\treturn false;\n\t}\n\n\tSQLBuffer sql;\n\n\tsql.append(\"SELECT asset_code, \");\n\n\tdouble size = 1;\n\tstring timeColumn;\n\n\t// Check timebucket object\n\tif (payload.HasMember(\"timebucket\"))\n\t{\n\t\tconst Value& bucket = payload[\"timebucket\"];\n\t\tif (!bucket.HasMember(\"timestamp\"))\n\t\t{\n\t\t\traiseError(\"retrieve\", \"aggregateQuery is missing \"\n\t\t\t\t   \"'timestamp' property for 'timebucket'\");\n\t\t\treturn false;\n\t\t}\n\n\t\t// Time column\n\t\ttimeColumn = bucket[\"timestamp\"].GetString();\n\n\t\t// Bucket size\n\t\tif (bucket.HasMember(\"size\"))\n\t\t{\n\t\t\tsize = 
atof(bucket[\"size\"].GetString());\n\t\t\tif (!size)\n\t\t\t{\n\t\t\t\tsize = 1;\n\t\t\t}\n\t\t}\n\n\t\t// Time format for output\n\t\tstring newFormat;\n\t\tif (bucket.HasMember(\"format\") && size >= 1)\n\t\t{\n\t\t\tapplyColumnDateFormatLocaltime(bucket[\"format\"].GetString(),\n\t\t\t\t\t\t\t\"timestamp\",\n\t\t\t\t\t\t\tnewFormat,\n\t\t\t\t\t\t\ttrue);\n\t\t\tsql.append(newFormat);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (size < 1) \n\t\t\t{\n\t\t\t\t// sub-second granularity to time bucket size:\n\t\t\t\t// force output formatting with microseconds\t\n\t\t\t\tnewFormat = \"strftime('%Y-%m-%d %H:%M:%S', \" + timeColumn + \n\t\t\t\t\t    \", 'localtime') || substr(\" + timeColumn\t  +\n\t\t\t\t\t    \", instr(\" + timeColumn + \", '.'), 7)\";\n\t\t\t\tsql.append(newFormat);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\"timestamp\");\n\t\t\t}\n\t\t}\n\n\t\t// Time output alias\n\t\tif (bucket.HasMember(\"alias\"))\n\t\t{\n\t\t\tsql.append(\" AS \");\n\t\t\tsql.append(bucket[\"alias\"].GetString());\n\t\t}\n\t}\n\n\t// JSON format aggregated data\n\tsql.append(\", '{' || group_concat('\\\"' || x || '\\\" : ' || resd, ', ') || '}' AS reading \");\n\n\t// subquery\n\tsql.append(\"FROM ( SELECT  x, asset_code, max(timestamp) AS timestamp, \");\n\t// Add min\n\tsql.append(\"'{\\\"min\\\" : ' || min(theval) || ', \");\n\t// Add max\n\tsql.append(\"\\\"max\\\" : ' || max(theval) || ', \");\n\t// Add avg\n\tsql.append(\"\\\"average\\\" : ' || avg(theval) || ', \");\n\t// Add count\n\tsql.append(\"\\\"count\\\" : ' || count(theval) || ', \");\n\t// Add sum\n\tsql.append(\"\\\"sum\\\" : ' || sum(theval) || '}' AS resd \");\n\n\tif (size < 1)\n\t{\n\t\t// Add max(user_ts)\n\t\tsql.append(\", max(\" + timeColumn + \") AS \" + timeColumn + \" \");\n\t}\n\n\t// subquery\n\tsql.append(\"FROM ( SELECT asset_code, \");\n\tsql.append(timeColumn);\n\n\tif (size >= 1)\n\t{\n\t\tsql.append(\", datetime(\");\n\t}\n\telse\n\t{\n\t\tsql.append(\", (\");\n\t}\n\n\t// Size formatted 
string\n\tstring size_format;\n\tif (fmod(size, 1.0) == 0.0)\n\t{\n\t\tsize_format = to_string(int(size));\n\t}\n\telse\n\t{\n\t\tsize_format = to_string(size);\n\t}\n\n\t// Add timebucket size\n\t// Unix Time is (Julian Day - JulianDay(1/1/1970 0:00 UTC) * Seconds_per_day\n\tif (size != 1)\n\t{\n\t\tsql.append(size_format);\n\t\tsql.append(\" * round((julianday(\");\n\t\tsql.append(timeColumn);\n\t\tsql.append(\") - \" + string(JULIAN_DAY_START_UNIXTIME) + \") * \" + string(SECONDS_PER_DAY) + \" / \");\n\t\tsql.append(size_format);\n\t\tsql.append(\")\");\n\t}\n\telse\n\t{\n\t\tsql.append(\"round((julianday(\");\n\t\tsql.append(timeColumn);\n\t\tsql.append(\") - \" + string(JULIAN_DAY_START_UNIXTIME) + \") * \" + string(SECONDS_PER_DAY) + \" / 1)\");\n\t}\n\tif (size >= 1)\n\t{\n\t\tsql.append(\", 'unixepoch') AS \\\"timestamp\\\", reading, \");\n\t}\n\telse\n\t{\n\t\tsql.append(\") AS \\\"timestamp\\\", reading, \");\n\t}\n\n\t// Get all datapoints in 'reading' field\n\tsql.append(\"json_each.key AS x, json_each.value AS theval FROM \");\n\n\t{\n\t\tstring sql_cmd;\n\t\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\n\t\t// SQL - start\n\t\tsql_cmd = R\"(\n\t\t\t(\n\t\t\t)\";\n\n\t\t// SQL - union of all the readings tables\n\t\tstring sql_cmd_base;\n\t\tstring sql_cmd_tmp;\n\t\tsql_cmd_base = \" SELECT  ROWID, id, \\\"_assetcode_\\\" asset_code, reading, user_ts, ts  FROM _dbname_._tablename_ \";\n\t\tsql_cmd_tmp = readCat->sqlConstructMultiDb(sql_cmd_base, asset_codes);\n\t\tsql_cmd += sql_cmd_tmp;\n\n\t\t// SQL - end\n\t\tsql_cmd += R\"(\n\t\t\t\t) as reading_table\n\t\t\t)\";\n\t\tsql.append(sql_cmd.c_str());\n\n\t\tsql.append(\", json_each(reading_table.reading) \");\n\n\t}\n\n\n\t// Add where condition\n\tsql.append(\"WHERE \");\n\tif (!jsonWhereClause(payload[\"where\"], sql, asset_codes))\n\t{\n\t\traiseError(\"retrieve\", \"aggregateQuery: failure while building WHERE clause\");\n\t\treturn false;\n\t}\n\n\t// close 
subquery\n\tsql.append(\") tmp \");\n\n\t// Add group by\n\t// Unix Time is (Julian Day - JulianDay(1/1/1970 0:00 UTC)) * Seconds_per_day\n\tsql.append(\" GROUP BY x, asset_code, \");\n\tsql.append(\"round((julianday(\");\n\tsql.append(timeColumn);\n\tsql.append(\") - \" + string(JULIAN_DAY_START_UNIXTIME) + \") * \" + string(SECONDS_PER_DAY) + \" / \");\n\n\tif (size != 1)\n\t{\n\t\tsql.append(size_format);\n\t}\n\telse\n\t{\n\t\tsql.append('1');\n\t}\n\tsql.append(\") \");\n\n\t// close subquery\n\tsql.append(\") tbl \");\n\n\t// Add final group and sort\n\tsql.append(\"GROUP BY timestamp, asset_code ORDER BY timestamp DESC\");\n\n\t// Add limit\n\tif (payload.HasMember(\"limit\"))\n\t{\n\t\tif (!payload[\"limit\"].IsInt())\n\t\t{\n\t\t\traiseError(\"retrieve\", \"aggregateQuery: limit must be specified as an integer\");\n\t\t\treturn false;\n\t\t}\n\t\tsql.append(\" LIMIT \");\n\t\ttry {\n\t\t\tsql.append(payload[\"limit\"].GetInt());\n\t\t} catch (exception& e) {\n\t\t\traiseError(\"retrieve\", \"aggregateQuery: bad value for limit parameter: %s\", e.what());\n\t\t\treturn false;\n\t\t}\n\t}\n\tsql.append(';');\n\n\t// Execute query\n\tconst char *query = sql.coalesce();\n\tint rc;\n\tsqlite3_stmt *stmt;\n\n\tlogSQL(\"CommonRetrieve\", query);\n\n\t// Prepare the SQL statement and get the result set\n\trc = sqlite3_prepare_v2(dbHandle, query, -1, &stmt, NULL);\n\n\t// Release memory for 'query' var\n\tdelete[] query;\n\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\t\treturn false;\n\t}\n\n\t// Call result set mapping\n\trc = mapResultSet(stmt, resultSet);\n\n\t// Delete result set\n\tsqlite3_finalize(stmt);\n\n\t// Check result set mapping errors\n\tif (rc != SQLITE_DONE)\n\t{\n\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\t\t// Failure\n\t\treturn false;\n\t}\n\n\treturn true;\n}\n#endif\n\n/**\n * Append a stream of readings to SQLite db\n *\n * @param readings  readings to store into the SQLite db\n * @param 
commit    if true a database commit is executed and a new transaction will be opened at the next execution\n *\n * TODO: the current code should be adapted to use the multi databases/tables implementation\n *\n */\nint Connection::readingStream(ReadingStream **readings, bool commit)\n{\n\t// Row definition related\n\tint i;\n\tbool add_row = false;\n\tconst char *user_ts;\n\tstring now;\n\tchar ts[60], micro_s[10];\n\tchar formatted_date[LEN_BUFFER_DATE] = {0};\n\tstruct tm timeinfo;\n\tconst char *asset_code;\n\tconst char *payload;\n\tstring reading;\n\n\t// Retry mechanism\n\tint retries = 0;\n\tint sleep_time_ms = 0;\n\n\t// SQLite related\n\tsqlite3_stmt *stmt;\n\tint sqlite3_resut;\n\tint rowNumber = -1;\n\n\tif (m_noReadings)\n\t{\n\t\tLogger::getLogger()->error(\"Attempt to stream readings to a plugin that has no storage for readings\");\n\t\treturn 0;\n\t}\n\n\tostringstream threadId;\n\tthreadId << std::this_thread::get_id();\n\tReadingsCatalogue *readCatalogue = ReadingsCatalogue::getInstance();\n\n\t{\n\t\t// Attaches the needed databases if the queue is not empty\n\t\tAttachDbSync *attachSync = AttachDbSync::getInstance();\n\t\tattachSync->lock();\n\n\t\tif ( ! 
m_NewDbIdList.empty())\n\t\t{\n\t\t\treadCatalogue->connectionAttachDbList(this->getDbHandle(), m_NewDbIdList);\n\t\t}\n\t\tattachSync->unlock();\n\t}\n\n#if INSTRUMENT\n\tstruct timeval start, t1, t2, t3, t4, t5;\n#endif\n\n\t// * TODO: the current code should be adapted to use the multi databases/tables implementation\n\tconst char *sql_cmd = \"INSERT INTO  \" READINGS_DB \".readings_1 ( asset_code, reading, user_ts ) VALUES  (?,?,?)\";\n\n\tif (sqlite3_prepare_v2(dbHandle, sql_cmd, strlen(sql_cmd), &stmt, NULL) != SQLITE_OK)\n\t{\n\t\traiseError(\"readingStream\", sqlite3_errmsg(dbHandle));\n\t\treturn -1;\n\t}\n\n\t// The handling of the commit parameter is overridden: as a pool of connections is used, every execution receives\n\t// a different one, so a commit is executed at every run.\n\tm_streamOpenTransaction = true;\n\tcommit = true;\n\n\tif (m_streamOpenTransaction)\n\t{\n\t\tif (sqlite3_exec(dbHandle, \"BEGIN TRANSACTION\", NULL, NULL, NULL) != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"readingStream\", sqlite3_errmsg(dbHandle));\n\t\t\treturn -1;\n\t\t}\n\t\tm_streamOpenTransaction = false;\n\t}\n\n#if INSTRUMENT\n\tgettimeofday(&start, NULL);\n#endif\n\n\ttry\n\t{\n\t\tfor (i = 0; readings[i]; i++)\n\t\t{\n\t\t\tadd_row = true;\n\n\t\t\t// Handles - asset_code\n\t\t\tasset_code = RDS_ASSET_CODE(readings, i);\n\n\t\t\t// Handles - reading\n\t\t\tpayload = RDS_PAYLOAD(readings, i);\n\t\t\treading = escape(payload);\n\n\t\t\t// Handles - user_ts\n\t\t\tmemset(&timeinfo, 0, sizeof(struct tm));\n\t\t\tgmtime_r(&RDS_USER_TIMESTAMP(readings, i).tv_sec, &timeinfo);\n\t\t\tstd::strftime(ts, sizeof(ts), \"%Y-%m-%d %H:%M:%S\", &timeinfo);\n\t\t\tsnprintf(micro_s, sizeof(micro_s), \".%06lu\", RDS_USER_TIMESTAMP(readings, i).tv_usec);\n\n\t\t\tformatted_date[0] = {0};\n\t\t\tstrncat(ts, micro_s, 10);\n\t\t\tuser_ts = ts;\n\t\t\tif (strcmp(user_ts, \"now()\") == 0)\n\t\t\t{\n\t\t\t\tgetNow(now);\n\t\t\t\tuser_ts = now.c_str();\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tif 
(!formatDate(formatted_date, sizeof(formatted_date), user_ts))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"appendReadings\", \"Invalid date '%s'\", user_ts);\n\t\t\t\t\tadd_row = false;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tuser_ts = formatted_date;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (add_row)\n\t\t\t{\n\t\t\t\tif (stmt != NULL)\n\t\t\t\t{\n\t\t\t\t\tsqlite3_bind_text(stmt, 1, asset_code,      -1, SQLITE_STATIC);\n\t\t\t\t\tsqlite3_bind_text(stmt, 2, reading.c_str(), -1, SQLITE_STATIC);\n\t\t\t\t\tsqlite3_bind_text(stmt, 3, user_ts,         -1, SQLITE_STATIC);\n\n\t\t\t\t\tretries = 0;\n\t\t\t\t\tsleep_time_ms = 0;\n\n\t\t\t\t\t// Retry mechanism in case the SQLite DB is locked\n\t\t\t\t\tdo {\n\t\t\t\t\t\t// Insert the row using a lock to ensure one insert at a time\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tm_writeAccessOngoing.fetch_add(1);\n\t\t\t\t\t\t\t//unique_lock<mutex> lck(db_mutex);\n\n\t\t\t\t\t\t\tsqlite3_resut = sqlite3_step(stmt);\n\n\t\t\t\t\t\t\tm_writeAccessOngoing.fetch_sub(1);\n\t\t\t\t\t\t\t//db_cv.notify_all();\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (sqlite3_resut == SQLITE_LOCKED  )\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsleep_time_ms = PREP_CMD_RETRY_BASE + (random() %  PREP_CMD_RETRY_BACKOFF);\n\t\t\t\t\t\t\tretries++;\n\n\t\t\t\t\t\t\tLogger::getLogger()->info(\"SQLITE_LOCKED - record %d - retry number %d sleep time ms %d\",i, retries, sleep_time_ms);\n\n\t\t\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(sleep_time_ms));\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (sqlite3_resut == SQLITE_BUSY)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tostringstream threadId;\n\t\t\t\t\t\t\tthreadId << std::this_thread::get_id();\n\n\t\t\t\t\t\t\tsleep_time_ms = PREP_CMD_RETRY_BASE + (random() %  PREP_CMD_RETRY_BACKOFF);\n\t\t\t\t\t\t\tretries++;\n\n\t\t\t\t\t\t\tLogger::getLogger()->info(\"SQLITE_BUSY - thread '%s' - record %d - retry number %d sleep time ms %d\", threadId.str().c_str() ,i , retries, 
sleep_time_ms);\n\n\t\t\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(sleep_time_ms));\n\t\t\t\t\t\t}\n\t\t\t\t\t} while (retries < PREP_CMD_MAX_RETRIES && (sqlite3_resut == SQLITE_LOCKED || sqlite3_resut == SQLITE_BUSY));\n\n\t\t\t\t\tif (sqlite3_resut == SQLITE_DONE)\n\t\t\t\t\t{\n\t\t\t\t\t\trowNumber++;\n\n\t\t\t\t\t\tsqlite3_clear_bindings(stmt);\n\t\t\t\t\t\tsqlite3_reset(stmt);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"appendReadings\",\n\t\t\t\t\t\t\t\t   \"Inserting a row into SQLite using a prepared command - asset_code '%s' error '%s' reading '%s' \",\n\t\t\t\t\t\t\t\t   asset_code,\n\t\t\t\t\t\t\t\t   sqlite3_errmsg(dbHandle),\n\t\t\t\t\t\t\t\t   reading.c_str());\n\n\t\t\t\t\t\tsqlite3_exec(dbHandle, \"ROLLBACK TRANSACTION\", NULL, NULL, NULL);\n\t\t\t\t\t\tm_streamOpenTransaction = true;\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\trowNumber = i;\n\n\t} catch (exception& e) {\n\n\t\traiseError(\"appendReadings\", \"Inserting a row into SQLite using a prepared command - error '%s'\", e.what());\n\n\t\tsqlite3_exec(dbHandle, \"ROLLBACK TRANSACTION\", NULL, NULL, NULL);\n\t\tm_streamOpenTransaction = true;\n\t\treturn -1;\n\t}\n\n#if INSTRUMENT\n\tgettimeofday(&t1, NULL);\n#endif\n\n\tif (commit)\n\t{\n\t\tsqlite3_resut = sqlite3_exec(dbHandle, \"END TRANSACTION\", NULL, NULL, NULL);\n\t\tif (sqlite3_resut != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"appendReadings\", \"Executing the commit of the transaction - error '%s'\", sqlite3_errmsg(dbHandle));\n\t\t\trowNumber = -1;\n\t\t}\n\t\tm_streamOpenTransaction = true;\n\t}\n\n\tif(stmt != NULL)\n\t{\n\t\tif (sqlite3_finalize(stmt) != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"appendReadings\",\"freeing SQLite in memory structure - error '%s'\", sqlite3_errmsg(dbHandle));\n\t\t}\n\t}\n\n#if INSTRUMENT\n\tgettimeofday(&t2, NULL);\n#endif\n\n#if INSTRUMENT\n\tstruct timeval tm;\n\tdouble timeT1, timeT2, timeT3;\n\n\ttimersub(&t1, &start, 
&tm);\n\ttimeT1 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\ttimersub(&t2, &t1, &tm);\n\ttimeT2 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\tLogger::getLogger()->debug(\"readingStream row count %d\", rowNumber);\n\n\tLogger::getLogger()->debug(\"readingStream Timing - stream handling %.3f seconds - commit/finalize %.3f seconds\",\n\t\t\t\t\t\t\t   timeT1,\n\t\t\t\t\t\t\t   timeT2\n\t);\n#endif\n\n\treturn rowNumber;\n}\n\n\n/**\n * Record a newly used database id so that it can be attached to this connection\n */\nvoid Connection::setUsedDbId(int dbId) {\n\n\tm_NewDbIdList.push_back(dbId);\n}\n\n/**\n * Wait until all the threads executing appendReadings have shut down\n */\nvoid  Connection::shutdownAppendReadings()\n{\n\n\tostringstream threadId;\n\tthreadId << std::this_thread::get_id();\n\tLogger::getLogger()->debug(\"%s - thread Id '%s' appendReadings shutting down started\", __FUNCTION__, threadId.str().c_str());\n\n\tm_shutdown=true;\n\n\twhile (m_appendCount > 0) {\n\n\t\tLogger::getLogger()->debug(\"%s - thread Id '%s' waiting for threads to shut down, count %d \", __FUNCTION__, threadId.str().c_str(), int(m_appendCount));\n\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(150));\n\t}\n\tLogger::getLogger()->debug(\"%s - thread Id '%s' appendReadings shutting down ended\", __FUNCTION__, threadId.str().c_str());\n\n}\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Append a set of readings to the readings table\n */\nint Connection::appendReadings(const char *readings)\n{\n// Default template parameter uses UTF8 and MemoryPoolAllocator.\nDocument doc;\nint      row = 0, readingId;\nbool     add_row = false;\n\nint lastReadingsId;\n\n// Variables related to the SQLite insert using prepared command\nconst char   *user_ts;\nconst char   *asset_code;\nstring        reading,\n              msg;\nsqlite3_stmt *stmt;\nint rc;\nint           sqlite3_resut;\nint           readingsId;\nstring        now;\n\nstd::pair<int, sqlite3_stmt *> pairValue;\nstring lastAsset;\n\n// Retry 
mechanism\nint retries = 0;\nint sleep_time_ms = 0;\n\nint stmtArraySize;\nstd::thread::id tid = std::this_thread::get_id();\nostringstream threadId;\n\n\tif (m_noReadings)\n\t{\n\t\tLogger::getLogger()->error(\"Attempt to append readings to a plugin that has no storage for readings\");\n\t\treturn 0;\n\t}\n\n\tthreadId << tid;\n\n\t{\n\t\tif (m_shutdown)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"%s - thread Id '%s' plugin is shutting down, operation cancelled\", __FUNCTION__, threadId.str().c_str());\n\t\t\treturn -1;\n\t\t}\n\n\t\tm_appendCount++;\n\n\t\tLogger::getLogger()->debug(\"%s - thread Id '%s' operation started, threads count %d\", __FUNCTION__,  threadId.str().c_str(), int(m_appendCount) );\n\t}\n\n\tReadingsCatalogue *readCatalogue = ReadingsCatalogue::getInstance();\n\n\t// Attaches the needed databases if the queue is not empty\n\tAttachDbSync *attachSync = AttachDbSync::getInstance();\n\tattachSync->lock();\n\n\tif ( ! m_NewDbIdList.empty())\n\t{\n\t\treadCatalogue->connectionAttachDbList(this->getDbHandle(), m_NewDbIdList);\n\t}\n\tattachSync->unlock();\n\n\tstmtArraySize = readCatalogue->getReadingPosition(0, 0);\n\tvector<sqlite3_stmt *> readingsStmt(stmtArraySize + 1, nullptr);\n\n#if INSTRUMENT\n\tLogger::getLogger()->debug(\"appendReadings start thread '%s'\", threadId.str().c_str());\n\n\tstruct timeval\tstart, t1, t2, t3, t4, t5;\n#endif\n\n#if INSTRUMENT\n\tgettimeofday(&start, NULL);\n#endif\n\n\tParseResult ok = doc.Parse(readings);\n\tif (!ok)\n\t{\n\t\traiseError(\"appendReadings\", GetParseError_En(doc.GetParseError()));\n\t\tm_appendCount--;\n\t\treturn -1;\n\t}\n\n\tif (!doc.HasMember(\"readings\"))\n\t{\n\t\traiseError(\"appendReadings\", \"Payload is missing a readings array\");\n\t\tm_appendCount--;\n\t\treturn -1;\n\t}\n\tValue &readingsValue = doc[\"readings\"];\n\tif (!readingsValue.IsArray())\n\t{\n\t\traiseError(\"appendReadings\", \"The readings property must be an array\");\n\t\tm_appendCount--;\n\t\treturn 
-1;\n\t}\n\n\tint tableIdx;\n\tstring sql_cmd;\n\n\t{\n\tm_writeAccessOngoing.fetch_add(1);\n\t//unique_lock<mutex> lck(db_mutex);\n\tsqlite3_exec(dbHandle, \"BEGIN TRANSACTION\", NULL, NULL, NULL);\n\n#if INSTRUMENT\n\t\tgettimeofday(&t1, NULL);\n#endif\n\n\tlastAsset = \"\";\n\tfor (Value::ConstValueIterator itr = readingsValue.Begin(); itr != readingsValue.End(); ++itr)\n\t{\n\t\tif (!itr->IsObject())\n\t\t{\n\t\t\traiseError(\"appendReadings\",\"Each reading in the readings array must be an object\");\n\t\t\tsqlite3_exec(dbHandle, \"ROLLBACK TRANSACTION;\", NULL, NULL, NULL);\n\t\t\tm_appendCount--;\n\t\t\treturn -1;\n\t\t}\n\n\t\tadd_row = true;\n\n\t\t// Handles - user_ts\n\t\tchar formatted_date[LEN_BUFFER_DATE] = {0};\n\t\tuser_ts = (*itr)[\"user_ts\"].GetString();\n\t\tif (strcmp(user_ts, \"now()\") == 0)\n\t\t{\n\t\t\tgetNow(now);\n\t\t\tuser_ts = now.c_str();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (! formatDate(formatted_date, sizeof(formatted_date), user_ts) )\n\t\t\t{\n\t\t\t\traiseError(\"appendReadings\", \"Invalid date |%s|\", user_ts);\n\t\t\t\tadd_row = false;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tuser_ts = formatted_date;\n\t\t\t}\n\t\t}\n\n\t\tif (add_row)\n\t\t{\n\t\t\t// Handles - asset_code\n\t\t\tasset_code = (*itr)[\"asset_code\"].GetString();\n\n\t\t\tif (strlen(asset_code) == 0)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Sqlite appendReadings - empty asset code value, row ignored.\");\n\t\t\t\tstmt = NULL;\n\t\t\t}\n\n\t\t\t// A different asset from the previous one is being handled\n\t\t\tif (strlen(asset_code) && lastAsset.compare(asset_code) != 0)\n\t\t\t{\n\t\t\t\tReadingsCatalogue::tyReadingReference ref;\n\n\t\t\t\tref = readCatalogue->getReadingReference(this, asset_code);\n\t\t\t\treadingsId = ref.tableId;\n\n\t\t\t\tLogger::getLogger()->debug(\"tyReadingReference '%s' %d %d \", asset_code, ref.dbId, ref.tableId);\n\n\t\t\t\tif (readingsId == -1)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"appendReadings - It was not possible 
to insert the row for the asset_code '%s' into the readings, row ignored.\", asset_code);\n\t\t\t\t\tstmt = NULL;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tint nReadings, idxReadings;\n\n\t\t\t\t\tnReadings = readCatalogue->getReadingsCount();\n\t\t\t\t\tidxReadings = readCatalogue->getReadingPosition(ref.dbId, ref.tableId);\n\n\t\t\t\t\tLogger::getLogger()->debug(\"tyReadingReference '%s' %d %d idxReadings %d\", asset_code, ref.dbId, ref.tableId, idxReadings);\n\n\t\t\t\t\tif (idxReadings >= stmtArraySize)\n\t\t\t\t\t{\n\t\t\t\t\t\tstmtArraySize = idxReadings + 1;\n\t\t\t\t\t\treadingsStmt.resize(stmtArraySize, nullptr);\n\n\t\t\t\t\t\tLogger::getLogger()->debug(\"appendReadings: thread '%s' resize size %d idx %d \", threadId.str().c_str(), stmtArraySize, readingsId);\n\t\t\t\t\t}\n\n\t\t\t\t\tif (readingsStmt[idxReadings] == nullptr)\n\t\t\t\t\t{\n\t\t\t\t\t\tstring dbName = readCatalogue->generateDbName(ref.dbId);\n\t\t\t\t\t\tstring dbReadingsName = readCatalogue->generateReadingsName(ref.dbId, readingsId);\n\n\t\t\t\t\t\tif (readingsId == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Overflow table\n\t\t\t\t\t\t\tsql_cmd = \"INSERT INTO  \" + dbName + \".readings_\" + to_string(ref.dbId) + \"_overflow ( id, asset_code, user_ts, reading ) VALUES  (?,'\" + asset_code + \"',?,?)\";\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql_cmd = \"INSERT INTO  \" + dbName + \".\" + dbReadingsName + \" ( id, user_ts, reading ) VALUES  (?,?,?)\";\n\t\t\t\t\t\t}\n\t\t\t\t\t\trc = SQLPrepare(dbHandle, sql_cmd.c_str(), &readingsStmt[idxReadings]);\n\n\t\t\t\t\t\tLogger::getLogger()->debug(\"tyReadingReference sql_cmd  '%s' '%s' %d %d \", sql_cmd.c_str(), asset_code, ref.dbId, ref.tableId);\n\n\t\t\t\t\t\tif (rc != SQLITE_OK)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"appendReadings\", sqlite3_errmsg(dbHandle));\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tstmt = readingsStmt[idxReadings];\n\n\t\t\t\t\tlastAsset = asset_code;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// Handles - 
reading\n\t\t\tStringBuffer buffer;\n\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t(*itr)[\"reading\"].Accept(writer);\n\t\t\treading = escape(buffer.GetString());\n\n\t\t\tif(stmt != NULL) {\n\t\t\t\t// First reading, use the id as transaction start\n\t\t\t\tif (itr == readingsValue.Begin())\n\t\t\t\t{\n\t\t\t\t\t// Get current reading global id\n\t\t\t\t\tunsigned long startTransactionId = readCatalogue->getIncGlobalId();\n\n\t\t\t\t\t// Mark transaction start for this thread\n\t\t\t\t\treadCatalogue->m_tx.SetThreadTransactionStart(tid,\n\t\t\t\t\t\t\tstartTransactionId);\n\n\t\t\t\t\t// Bind first parameter with reading id\n\t\t\t\t\tsqlite3_bind_int64 (stmt, 1, startTransactionId);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\t// Bind first parameter with reading id\n\t\t\t\t\tsqlite3_bind_int64 (stmt, 1, readCatalogue->getIncGlobalId());\n\t\t\t\t}\n\n\t\t\t\t// Set parameter for user timestamp\n\t\t\t\tsqlite3_bind_text(stmt, 2, user_ts         ,-1, SQLITE_STATIC);\n\n\t\t\t\t// Set parameter for reading JSON data\n\t\t\t\tsqlite3_bind_text(stmt, 3, reading.c_str(), -1, SQLITE_STATIC);\n\n\t\t\t\tretries = 0;\n\t\t\t\tsleep_time_ms = 0;\n\n\t\t\t\tstring msgError;\n\n\t\t\t\t// Retry mechanism in case the SQLite DB is locked\n\t\t\t\tdo {\n\t\t\t\t\t// Insert the row using a lock to ensure one insert at a time\n\t\t\t\t\t{\n\t\t\t\t\t\tsqlite3_resut = sqlite3_step(stmt);\n\t\t\t\t\t}\n\n\t\t\t\t\tif(sqlite3_resut != SQLITE_DONE)\n\t\t\t\t\t{\n\t\t\t\t\t\tmsgError = \"\";\n\t\t\t\t\t\tif (sqlite3_resut == SQLITE_LOCKED  )\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tmsgError = \"SQLITE_LOCKED\";\n\n\t\t\t\t\t\t} else if (sqlite3_resut == SQLITE_BUSY)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tmsgError = \"SQLITE_BUSY\";\n\n\t\t\t\t\t\t} else if (sqlite3_resut  != SQLITE_DONE)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tmsgError = \"SQLITE_ERROR\";\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tsleep_time_ms = PREP_CMD_RETRY_BASE + (random() %  PREP_CMD_RETRY_BACKOFF);\n\t\t\t\t\t\tretries++;\n\n\t\t\t\t\t\tif (retries 
>= LOG_AFTER_NERRORS)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tLogger::getLogger()->warn(\"appendReadings - %s - \" \\\n\t\t\t\t\t\t\t\t\t\"asset_code '%s' readingsId %d \" \\\n\t\t\t\t\t\t\t\t\t\"thread '%s' dbHandle %X record \" \\\n\t\t\t\t\t\t\t\t\t\"%d retry number %d sleep time ms %d error '%s'\",\n\t\t\t\t\t\t\t\t\tmsgError.c_str(),\n\t\t\t\t\t\t\t\t\tasset_code,\n\t\t\t\t\t\t\t\t\treadingsId,\n\t\t\t\t\t\t\t\t\tthreadId.str().c_str() ,\n\t\t\t\t\t\t\t\t\tdbHandle,\n\t\t\t\t\t\t\t\t\trow,\n\t\t\t\t\t\t\t\t\tretries,\n\t\t\t\t\t\t\t\t\tsleep_time_ms,\n\t\t\t\t\t\t\t\t\tsqlite3_errmsg(dbHandle));\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t// Put thread to sleep\n\t\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(sleep_time_ms));\n\t\t\t\t\t}\n\t\t\t\t} while (retries < PREP_CMD_MAX_RETRIES && (sqlite3_resut != SQLITE_DONE));\n\n\t\t\t\tif (sqlite3_resut == SQLITE_DONE)\n\t\t\t\t{\n\t\t\t\t\trow++;\n\n\t\t\t\t\tsqlite3_clear_bindings(stmt);\n\t\t\t\t\tsqlite3_reset(stmt);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\traiseError(\"appendReadings\",\"Inserting a row into \" \\\n\t\t\t\t\t\t\"SQLite using a prepared command - asset_code \" \\\n\t\t\t\t\t\t\"'%s' error '%s' reading '%s' dbHandle %X\",\n\t\t\t\t\t\tasset_code,\n\t\t\t\t\t\tsqlite3_errmsg(dbHandle),\n\t\t\t\t\t\treading.c_str(),\n\t\t\t\t\t\tdbHandle);\n\n\t\t\t\t\tsqlite3_clear_bindings(stmt);\n\t\t\t\t\tsqlite3_reset(stmt);\n\n\t\t\t\t\tsqlite3_exec(dbHandle, \"ROLLBACK TRANSACTION\", NULL, NULL, NULL);\n\t\t\t\t\tm_appendCount--;\n\n\t\t\t\t\t// Clear transaction boundary for this thread\n\t\t\t\t\treadCatalogue->m_tx.ClearThreadTransaction(tid);\n\n\t\t\t\t\t// Finalize sqlite structures\n\t\t\t\t\tfor (auto &item : readingsStmt)\n\t\t\t\t\t{\n\t\t\t\t\t\tif(item != nullptr)\n\t\t\t\t\t\t{\n\n\t\t\t\t\t\t\tif (sqlite3_finalize(item) != SQLITE_OK)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\traiseError(\"appendReadings\",\"freeing SQLite in memory structure - error '%s'\", 
sqlite3_errmsg(dbHandle));\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\n\t\t\t\t\t}\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tsqlite3_resut = sqlite3_exec(dbHandle, \"END TRANSACTION\", NULL, NULL, NULL);\n\tif (sqlite3_resut != SQLITE_OK)\n\t{\n\t\traiseError(\"appendReadings\",\n\t\t\t\t\"Executing the commit of the transaction '%s'\",\n\t\t\t\tsqlite3_errmsg(dbHandle));\n\t\trow = -1;\n\t}\n\n\t// Clear transaction boundary for this thread\n\treadCatalogue->m_tx.ClearThreadTransaction(tid);\n\n\tm_writeAccessOngoing.fetch_sub(1);\n\t//db_cv.notify_all();\n\t}\n#if INSTRUMENT\n\t\tgettimeofday(&t2, NULL);\n#endif\n\n\t// Finalize sqlite structures\n\tfor (auto &item : readingsStmt)\n\t{\n\t\tif(item != nullptr)\n\t\t{\n\n\t\t\tif (sqlite3_finalize(item) != SQLITE_OK)\n\t\t\t{\n\t\t\t\traiseError(\"appendReadings\",\"freeing SQLite in memory structure - error '%s'\", sqlite3_errmsg(dbHandle));\n\t\t\t}\n\t\t}\n\n\t}\n\n#if INSTRUMENT\n\t\tgettimeofday(&t3, NULL);\n#endif\n\n#if INSTRUMENT\n\t\tstruct timeval tm;\n\t\tdouble timeT1, timeT2, timeT3;\n\n\t\ttimersub(&t1, &start, &tm);\n\t\ttimeT1 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\t\ttimersub(&t2, &t1, &tm);\n\t\ttimeT2 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\t\ttimersub(&t3, &t2, &tm);\n\t\ttimeT3 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\t\tLogger::getLogger()->debug(\"appendReadings end   thread '%s' buffer :%10lu: count :%5d: JSON :%6.3f: inserts :%6.3f: finalize :%6.3f:\",\n\t\t\t\t\t\t\t\t   threadId.str().c_str(),\n\t\t\t\t\t\t\t\t   strlen(readings),\n\t\t\t\t\t\t\t\t   row,\n\t\t\t\t\t\t\t\t   timeT1,\n\t\t\t\t\t\t\t\t   timeT2,\n\t\t\t\t\t\t\t\t   timeT3\n\t\t);\n\n#endif\n\n\tm_appendCount--;\n\n\treturn row;\n}\n#endif\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Fetch a block of readings from the reading table\n * It might not work with SQLite 3\n *\n * Fetch, used by the north side, returns timestamp in UTC.\n *\n * NOTE : it expects to handle a date having a fixed 
format\n * with milliseconds, microseconds and timezone expressed,\n * for example:\n *\n *    2019-01-11 15:45:01.123456+01:00\n */\nbool Connection::fetchReadings(unsigned long id,\n\t\t\t       unsigned int blksize,\n\t\t\t       std::string& resultSet)\n{\nchar sqlbuffer[5120];\nchar *zErrMsg = NULL;\nint rc;\nint retrieve;\nvector<string>  asset_codes;\nstring sql_cmd;\nunsigned long minGlobalId;\nunsigned long idWindow;\nunsigned long rowsCount;\n\n\tif (m_noReadings)\n\t{\n\t\tLogger::getLogger()->error(\"Attempt to fetch readings from a plugin that has no storage for readings\");\n\t\treturn false;\n\t}\n\n\tostringstream threadId;\n\tthreadId << std::this_thread::get_id();\n\tReadingsCatalogue *readCatalogue = ReadingsCatalogue::getInstance();\n\n\t{\n\t\t// Attaches the needed databases if the queue is not empty\n\t\tAttachDbSync *attachSync = AttachDbSync::getInstance();\n\t\tattachSync->lock();\n\n\t\tif ( ! m_NewDbIdList.empty())\n\t\t{\n\t\t\treadCatalogue->connectionAttachDbList(this->getDbHandle(), m_NewDbIdList);\n\t\t}\n\t\tattachSync->unlock();\n\t}\n\n\tif (id == 1)\n\t{\n\t\t// On the first extraction, verify whether there are data with ids above the current search window\n\t\tminGlobalId = readCatalogue->getMinGlobalId(this->getDbHandle());\n\t\tidWindow = id + blksize;\n\n\t\tif (idWindow < minGlobalId)\n\t\t{\n\t\t\tid = minGlobalId;\n\t\t\tLogger::getLogger()->debug(\"%s - first extraction, data extracted from the id :%lu:\", __FUNCTION__, id);\n\t\t}\n\t}\n\n\t// Generate a single SQL statement that, using a set of UNIONs, covers all the readings tables\n\t// SQL - start\n\tsql_cmd = R\"(\n\t\tSELECT\n\t\t\tid,\n\t\t\tasset_code,\n\t\t\treading,\n\t\t\tstrftime('%Y-%m-%d %H:%M:%S', user_ts, 'utc')  ||\n\t\t\tsubstr(user_ts, instr(user_ts, '.'), 7) AS user_ts,\n\t\t\tstrftime('%Y-%m-%d %H:%M:%f', ts, 'utc') AS ts\n\t\tFROM\n\t\t(\n\t)\";\n\n\t// SQL - union of all the readings tables\n\tstring sql_cmd_base;\n\tstring 
sql_cmd_tmp;\n\t// Would like to add a LIMIT on each sub-query in the union all, however SQLITE\n\t// does not support this. Note we cannot use id + blocksize as this fails if we\n\t// have holes in the id space\n\tsql_cmd_base = \" SELECT  id, \\\"_assetcode_\\\" asset_code, reading, user_ts, ts \" \\\n\t\t\t\"FROM _dbname_._tablename_ WHERE id >= \" +\n\t\t\tto_string(id) + \" \";\n\n\t// Check for any uncommitted transactions:\n\t// fetch the minimum reading id among all per thread transactions\n\t// and use it as a boundary limit.\n\t// If there are no pending transactions, just use the current global reading id as the limit\n\tunsigned long safe_id = readCatalogue->m_tx.GetMinReadingId();\n\tif (safe_id)\n\t{\n\t\tsql_cmd_base += \"AND id < \" + to_string(safe_id) + \" \";\n\t}\n\telse\n\t{\n\t\tsql_cmd_base += \"AND id < \" + to_string(readCatalogue->getGlobalId()) + \" \";\n\t}\n\n\tsql_cmd_tmp = readCatalogue->sqlConstructMultiDb(sql_cmd_base, asset_codes);\n\tsql_cmd += sql_cmd_tmp;\n\n\t// Now add in the overflow tables\n\tsql_cmd_base = \" SELECT  id, asset_code, reading, user_ts, ts \" \\\n\t\t\t\"FROM _dbname_._tablename_ WHERE id >= \" +\n\t\t\tto_string(id) + \" \";\n\tif (safe_id)\n\t{\n\t\tsql_cmd_base += \"AND id < \" + to_string(safe_id) + \" \";\n\t}\n\telse\n\t{\n\t\tsql_cmd_base += \"AND id < \" + to_string(readCatalogue->getGlobalId()) + \" \";\n\t}\n\n\tsql_cmd_tmp = readCatalogue->sqlConstructOverflow(sql_cmd_base, asset_codes);\n\tsql_cmd += sql_cmd_tmp;\n\n\t// SQL - end\n\tsql_cmd += R\"(\n\t\t) as tb\n\t\tORDER BY id ASC\n\t\tLIMIT\n\t)\" + to_string(blksize);\n\n\tlogSQL(\"ReadingsFetch\", sql_cmd.c_str());\n\n\tsqlite3_stmt *stmt;\n\t// Prepare the SQL statement and get the result set\n\trc = sqlite3_prepare_v2(dbHandle, sql_cmd.c_str(),-1,&stmt,NULL);\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\n\t\t// Failure\n\t\treturn false;\n\t}\n\telse\n\t{\n\t\t// Call result set mapping\n\t\trc = mapResultSet(stmt, 
resultSet, &rowsCount);\n\n\t\tif (rowsCount == 0)\n\t\t{\n\t\t\t// If no data were processed, verify whether there are data with ids above the current search window\n\t\t\tminGlobalId = readCatalogue->getMinGlobalId(this->getDbHandle());\n\t\t\tidWindow = id + blksize;\n\n\t\t\tif (idWindow < minGlobalId)\n\t\t\t{\n\t\t\t\tid = minGlobalId;\n\n\t\t\t\t// Delete result set\n\t\t\t\tsqlite3_finalize(stmt);\n\n\t\t\t\t// Generate a single SQL statement that, using a set of UNIONs, covers all the readings tables\n\t\t\t\t{\n\t\t\t\t\t// SQL - start\n\t\t\t\t\tsql_cmd = R\"(\n\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\tid,\n\t\t\t\t\t\t\tasset_code,\n\t\t\t\t\t\t\treading,\n\t\t\t\t\t\t\tstrftime('%Y-%m-%d %H:%M:%S', user_ts, 'utc')  ||\n\t\t\t\t\t\t\tsubstr(user_ts, instr(user_ts, '.'), 7) AS user_ts,\n\t\t\t\t\t\t\tstrftime('%Y-%m-%d %H:%M:%f', ts, 'utc') AS ts\n\t\t\t\t\t\tFROM\n\t\t\t\t\t\t(\n\t\t\t\t\t)\";\n\n\t\t\t\t\t// SQL - union of all the readings tables\n\t\t\t\t\tstring sql_cmd_base;\n\t\t\t\t\tstring sql_cmd_tmp;\n\t\t\t\t\tsql_cmd_base = \" SELECT  id, \\\"_assetcode_\\\" asset_code, reading, user_ts, ts  FROM _dbname_._tablename_ WHERE id >= \" + to_string(id) + \" and id <=  \" + to_string(id) + \" + \" + to_string(blksize) + \" \";\n\t\t\t\t\tsql_cmd_tmp = readCatalogue->sqlConstructMultiDb(sql_cmd_base, asset_codes);\n\t\t\t\t\tsql_cmd += sql_cmd_tmp;\n\n\t\t\t\t\t// SQL - end\n\t\t\t\t\tsql_cmd += R\"(\n\t\t\t\t\t) as tb\n\t\t\t\t\tORDER BY id ASC\n\t\t\t\t\tLIMIT\n\t\t\t\t)\" + to_string(blksize);\n\n\t\t\t\t}\n\n\t\t\t\tlogSQL(\"ReadingsFetch\", sql_cmd.c_str());\n\n\t\t\t\t// Prepare the SQL statement and get the result set\n\t\t\t\trc = sqlite3_prepare_v2(dbHandle, sql_cmd.c_str(),-1,&stmt,NULL);\n\t\t\t\tif (rc != SQLITE_OK)\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\n\t\t\t\t\t// Failure\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\t// Call result set mapping\n\t\t\t\trc = mapResultSet(stmt, resultSet, 
&rowsCount);\n\n\t\t\t\tif (rowsCount != 0)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->debug(\"%s - subsequent extraction, data extracted from the id :%lu:\", __FUNCTION__, id);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// Delete result set\n\t\tsqlite3_finalize(stmt);\n\n\t\t// Check result set errors\n\t\tif (rc != SQLITE_DONE)\n\t\t{\n\t\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\n\t\t\t// Failure\n\t\t\treturn false;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Success\n\t\t\treturn true;\n\t\t}\n\t}\n}\n#endif\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Perform a query against the readings table\n *\n * retrieveReadings, used by the API, returns timestamps in UTC unless\n * otherwise requested.\n *\n */\nbool Connection::retrieveReadings(const string& condition, string& resultSet)\n{\n// Default template parameter uses UTF8 and MemoryPoolAllocator.\nDocument\tdocument;\nSQLBuffer\tsql;\n\nSQLBuffer\tsqlExtDummy;\nSQLBuffer\tsqlExt;\nSQLBuffer\tjsonConstraintsExt;\n// Extra constraints to add to where clause\nSQLBuffer\tjsonConstraints;\nbool\t\tisAggregate = false;\nbool\t\tisOptAggregate = false;\nconst char\t*timezone = \"utc\";\n\nstring modifierExt;\nstring modifierInt;\n\nvector<string>  asset_codes;\n\n\tif (m_noReadings)\n\t{\n\t\tLogger::getLogger()->error(\"Attempt to retrieve readings from a plugin that has no storage for readings\");\n\t\treturn false;\n\t}\n\n\tostringstream threadId;\n\tthreadId << std::this_thread::get_id();\n\tReadingsCatalogue *readCatalogue = ReadingsCatalogue::getInstance();\n\n\tif (readCatalogue)\n\t{\n\t\t// Attaches the required databases if the queue is not empty\n\t\tAttachDbSync *attachSync = AttachDbSync::getInstance();\n\t\tattachSync->lock();\n\n\t\tif ( ! 
m_NewDbIdList.empty())\n\t\t{\n\t\t\treadCatalogue->connectionAttachDbList(this->getDbHandle(), m_NewDbIdList);\n\t\t}\n\t\tattachSync->unlock();\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"Readings catalogue not available\");\n\t}\n\n\ttry {\n\t\tif (dbHandle == NULL)\n\t\t{\n\t\t\traiseError(\"retrieve\", \"No SQLite 3 db connection available\");\n\t\t\treturn false;\n\t\t}\n\n\t\tif (condition.empty())\n\t\t{\n\t\t\tstring sql_cmd;\n\t\t\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\n\t\t\t// SQL - start\n\t\t\tsql_cmd = R\"(\n\t\t\t\tSELECT\n\t\t\t\t\tid,\n\t\t\t\t\tasset_code,\n\t\t\t\t\treading,\n\t\t\t\t\tstrftime(')\" F_DATEH24_SEC R\"(', user_ts, 'utc')  ||\n\t\t\t\t\tsubstr(user_ts, instr(user_ts, '.'), 7) AS user_ts,\n\t\t\t\t\tstrftime(')\" F_DATEH24_MS R\"(', ts, 'utc') AS ts\n\t\t\t\tFROM (\n\t\t\t)\";\n\n\t\t\t// SQL - union of all the readings tables\n\t\t\tstring sql_cmd_base;\n\t\t\tstring sql_cmd_tmp;\n\t\t\tsql_cmd_base = \" SELECT  id, \\\"_assetcode_\\\" asset_code, reading, user_ts, ts  FROM _dbname_._tablename_ \";\n\t\t\tsql_cmd_tmp = readCat->sqlConstructMultiDb(sql_cmd_base, asset_codes);\n\t\t\tsql_cmd += sql_cmd_tmp;\n\n\t\t\tsql_cmd_base = \" SELECT  id, asset_code, reading, user_ts, ts  FROM _dbname_._tablename_ \";\n\t\t\tsql_cmd_tmp = readCatalogue->sqlConstructOverflow(sql_cmd_base, asset_codes);\n\t\t\tsql_cmd += sql_cmd_tmp;\n\n\t\t\t// SQL - end\n\t\t\tsql_cmd += R\"(\n\t\t\t\t) as tb;\n\t\t\t)\";\n\t\t\tsql.append(sql_cmd.c_str());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (document.Parse(condition.c_str()).HasParseError())\n\t\t\t{\n\t\t\t\traiseError(\"retrieve\", \"Failed to parse JSON payload\");\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tif (document.HasMember(\"timezone\") && document[\"timezone\"].IsString())\n\t\t\t{\n\t\t\t\ttimezone = document[\"timezone\"].GetString();\n\t\t\t}\n\n\t\t\t// timebucket aggregate all datapoints\n\t\t\tif (aggregateAll(document))\n\t\t\t{\n\t\t\t\treturn 
aggregateQuery(document, resultSet);\n\t\t\t}\n\n\t\t\tif (document.HasMember(\"aggregate\"))\n\t\t\t{\n\t\t\t\tisAggregate = true;\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\t// Generates the SQL for the external query\n\t\t\t\tif (!jsonAggregates(document, document[\"aggregate\"], sql, jsonConstraints, isOptAggregate, true, true))\n\t\t\t\t{\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\t// Generates the SQL for the internal query\n\t\t\t\tif (isOptAggregate)\n\t\t\t\t{\n\t\t\t\t\tif (!jsonAggregates(document, document[\"aggregate\"], sqlExt, jsonConstraintsExt, isOptAggregate, true, false))\n\t\t\t\t\t{\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tsql.append(\" FROM  \");\n\t\t\t}\n\t\t\telse if (document.HasMember(\"return\"))\n\t\t\t{\n\t\t\t\tint col = 0;\n\t\t\t\tValue& columns = document[\"return\"];\n\t\t\t\tif (! 
columns.IsArray())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", \"The property return must be an array\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tfor (Value::ConstValueIterator itr = columns.Begin(); itr != columns.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col)\n\t\t\t\t\t\tsql.append(\", \");\n\t\t\t\t\tif (!itr->IsObject())\t// Simple column name\n\t\t\t\t\t{\n\t\t\t\t\t\tif (strcmp(itr->GetString() ,\"user_ts\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Display without TZ expression and microseconds also\n\t\t\t\t\t\t\tsql.append(\" strftime('\" F_DATEH24_SEC \"', user_ts, '\");\n\t\t\t\t\t\t\tsql.append(timezone);\n\t\t\t\t\t\t\tsql.append(\"') \");\n\t\t\t\t\t\t\tsql.append(\" || substr(user_ts, instr(user_ts, '.'), 7) \");\n\t\t\t\t\t\t\tsql.append(\" as  user_ts \");\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (strcmp(itr->GetString() ,\"ts\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Display without TZ expression and microseconds also\n\t\t\t\t\t\t\tsql.append(\" strftime('\" F_DATEH24_MS \"', ts, '\");\n\t\t\t\t\t\t\tsql.append(timezone);\n\t\t\t\t\t\t\tsql.append(\"') \");\n\t\t\t\t\t\t\tsql.append(\" as ts \");\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(itr->GetString());\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tif (itr->HasMember(\"column\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (! (*itr)[\"column\"].IsString())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\t   \"column must be a string\");\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (itr->HasMember(\"format\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (! 
(*itr)[\"format\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\t\t   \"format must be a string\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t// SQLite 3 date format.\n\t\t\t\t\t\t\t\tstring new_format;\n\t\t\t\t\t\t\t\tapplyColumnDateFormatLocaltime((*itr)[\"format\"].GetString(),\n\t\t\t\t\t\t\t\t\t\t      (*itr)[\"column\"].GetString(),\n\t\t\t\t\t\t\t\t\t\t      new_format, true);\n\t\t\t\t\t\t\t\t// Add the formatted column or use it as is\n\t\t\t\t\t\t\t\tsql.append(new_format);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (itr->HasMember(\"timezone\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (! (*itr)[\"timezone\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\t\t   \"timezone must be a string\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t// SQLite3 doesnt support time zone formatting\n\t\t\t\t\t\t\t\tconst char *tz = (*itr)[\"timezone\"].GetString();\n\n\t\t\t\t\t\t\t\tif (strncasecmp(tz, \"utc\", 3) == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tif (strcmp((*itr)[\"column\"].GetString() ,\"user_ts\") == 0)\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t// Extract milliseconds and microseconds for the user_ts fields\n\n\t\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_SEC \"', user_ts, 'utc') \");\n\t\t\t\t\t\t\t\t\t\tsql.append(\" || substr(user_ts, instr(user_ts, '.'), 7) \");\n\t\t\t\t\t\t\t\t\t\tif (! itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \");\n\t\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_MS \"', \");\n\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t\tsql.append(\", 'utc')\");\n\t\t\t\t\t\t\t\t\t\tif (! 
itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \");\n\t\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse if (strncasecmp(tz, \"localtime\", 9) == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tif (strcmp((*itr)[\"column\"].GetString() ,\"user_ts\") == 0)\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t// Extract milliseconds and microseconds for the user_ts fields\n\n\t\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_SEC \"', user_ts, 'localtime') \");\n\t\t\t\t\t\t\t\t\t\tsql.append(\" || substr(user_ts, instr(user_ts, '.'), 7) \");\n\t\t\t\t\t\t\t\t\t\tif (! itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \");\n\t\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_MS \"', \");\n\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t\tsql.append(\", 'localtime')\");\n\t\t\t\t\t\t\t\t\t\tif (! 
itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \");\n\t\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\t\t   \"SQLite3 plugin does not support timezones in queries\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\n\t\t\t\t\t\t\t\tif (strcmp((*itr)[\"column\"].GetString() ,\"user_ts\") == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t// Extract milliseconds and microseconds for the user_ts fields\n\n\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_SEC \"', user_ts, '\");\n\t\t\t\t\t\t\t\t\tsql.append(timezone);\n\t\t\t\t\t\t\t\t\tsql.append(\"') \");\n\t\t\t\t\t\t\t\t\tsql.append(\" || substr(user_ts, instr(user_ts, '.'), 7) \");\n\t\t\t\t\t\t\t\t\tif (! itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \");\n\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_MS \"', \");\n\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\tsql.append(\", '\");\n\t\t\t\t\t\t\t\t\tsql.append(timezone);\n\t\t\t\t\t\t\t\t\tsql.append(\"')\");\n\t\t\t\t\t\t\t\t\tif (! itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \");\n\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tsql.append(' ');\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (itr->HasMember(\"json\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tconst Value& json = (*itr)[\"json\"];\n\t\t\t\t\t\t\tif (! 
returnJson(json, sql, jsonConstraints))\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t   \"return object must have either a column or json property\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (itr->HasMember(\"alias\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\t\t\t\tsql.append((*itr)[\"alias\"].GetString());\n\t\t\t\t\t\t\tsql.append('\"');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t\tsql.append(\" FROM \");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\n\t\t\t\tsql.append(\"id, asset_code, reading, strftime('\" F_DATEH24_SEC \"', user_ts, '\");\n\t\t\t\tsql.append(timezone);\n\t\t\t\tsql.append(\"')  || substr(user_ts, instr(user_ts, '.'), 7) AS user_ts, strftime('\" F_DATEH24_MS \"', ts, '\");\n\t\t\t\tsql.append(timezone);\n\t\t\t\tsql.append(\"') AS ts FROM  \");\n\t\t\t}\n\t\t\t{\n\n\t\t\t\t// Identifies the asset_codes used in the query\n\t\t\t\tif (document.HasMember(\"where\"))\n\t\t\t\t{\n\t\t\t\t\tjsonWhereClause(document[\"where\"], sqlExtDummy, asset_codes);\n\t\t\t\t}\n\n\t\t\t\tstring sql_cmd;\n\t\t\t\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\n\t\t\t\t// SQL - start\n\t\t\t\tsql_cmd = R\"(\n\t\t\t\t\t(\n\t\t\t\t)\";\n\n\t\t\t\t// SQL - union of all the readings tables\n\t\t\t\tstring sql_cmd_base;\n\t\t\t\tstring sql_cmd_overflow_base;\n\t\t\t\tstring sql_cmd_tmp;\n\n\t\t\t\t// Specific optimization for the count operation\n\t\t\t\tif (isOptAggregate)\n\t\t\t\t{\n\t\t\t\t\tconst char *queryTmp = sqlExt.coalesce();\n\n\t\t\t\t\tsql_cmd_base = \" SELECT \";\n\t\t\t\t\tsql_cmd_base += queryTmp;\n\n\t\t\t\t\tif (! strstr(queryTmp, \"ROWID\"))\n\t\t\t\t\t\tsql_cmd_base += \",  ROWID\";\n\n\t\t\t\t\tif (! 
strstr(queryTmp, \"asset_code\"))\n\t\t\t\t\t\tsql_cmd_base += \",  asset_code\";\n\n\t\t\t\t\tsql_cmd_base += \", id, reading, user_ts, ts \";\n\t\t\t\t\tsql_cmd_overflow_base = sql_cmd_base;\n\t\t\t\t\tStringReplaceAll (sql_cmd_base, \"asset_code\", \" \\\"_assetcode_\\\" .assetcode. \");\n\t\t\t\t\tsql_cmd_base += \" FROM _dbname_._tablename_ \";\n\t\t\t\t\tsql_cmd_overflow_base += \" FROM _dbname_._tablename_ \";\n\n\t\t\t\t\tdelete[] queryTmp;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql_cmd_base = \" SELECT ROWID, id, \\\"_assetcode_\\\" asset_code, reading, user_ts, ts  FROM _dbname_._tablename_ \";\n\t\t\t\t\tsql_cmd_overflow_base = \" SELECT ROWID, id, asset_code, reading, user_ts, ts  FROM _dbname_._tablename_ \";\n\t\t\t\t}\n\t\t\t\tsql_cmd_tmp = readCat->sqlConstructMultiDb(sql_cmd_base, asset_codes);\n\t\t\t\tsql_cmd += sql_cmd_tmp;\n\n\t\t\t\tsql_cmd_tmp = readCatalogue->sqlConstructOverflow(sql_cmd_overflow_base, asset_codes, false, isOptAggregate);\n\t\t\t\tsql_cmd += sql_cmd_tmp;\n\n\t\t\t\t// SQL - end\n\t\t\t\tsql_cmd += R\"(\n\t\t\t\t\t) as readings_1\n\t\t\t\t)\";\n\t\t\t\tsql.append(sql_cmd.c_str());\n\n\t\t\t}\n\n\n\n\t\t\tif (document.HasMember(\"where\"))\n\t\t\t{\n\t\t\t\tsql.append(\" WHERE \");\n\n\t\t\t\t// The presence of the where clause has already been checked above\n\t\t\t\tif (!jsonWhereClause(document[\"where\"], sql, asset_codes))\n\t\t\t\t{\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (! 
jsonConstraints.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tsql.append(\" AND \");\n\t\t\t\t\tconst char *jsonBuf = jsonConstraints.coalesce();\n\t\t\t\t\tsql.append(jsonBuf);\n\t\t\t\t\tdelete[] jsonBuf;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse if (isAggregate)\n\t\t\t{\n\t\t\t\t/*\n\t\t\t\t * Performance improvement: force sqlite to use an index\n\t\t\t\t * if we are doing an aggregate and have no where clause.\n\t\t\t\t */\n\t\t\t\tsql.append(\" WHERE id = id\");\n\t\t\t}\n\t\t\tif (!jsonModifiers(document, sql, true))\n\t\t\t{\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\tsql.append(';');\n\n\t\tconst char *query = sql.coalesce();\n\t\tint rc;\n\t\tsqlite3_stmt *stmt;\n\n\t\tlogSQL(\"ReadingsRetrieve\", query);\n\n\t\t// Prepare the SQL statement and get the result set\n\t\trc = sqlite3_prepare_v2(dbHandle, query, -1, &stmt, NULL);\n\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\t\t\treturn false;\n\t\t}\n\n\t\t// Call result set mapping\n\t\trc = mapResultSet(stmt, resultSet);\n\n\t\t// Delete result set\n\t\tsqlite3_finalize(stmt);\n\n\t\t// Check result set mapping errors\n\t\tif (rc != SQLITE_DONE)\n\t\t{\n\t\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\t\t\t// Failure\n\t\t\treturn false;\n\t\t}\n\t\t// Success\n\t\treturn true;\n\t} catch (exception& e) {\n\t\traiseError(\"retrieve\", \"Internal error: %s\", e.what());\n\t\treturn false;\n\t}\n}\n#endif\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Purge readings from the readings table\n */\nunsigned int  Connection::purgeReadings(unsigned long age,\n\t\t\t\t\tunsigned int flags,\n\t\t\t\t\tunsigned long sent,\n\t\t\t\t\tstd::string& result)\n{\nlong unsentPurged = 0;\nlong unsentRetained = 0;\nlong numReadings = 0;\nunsigned long rowidLimit = 0, minrowidLimit = 0, maxrowidLimit = 0, rowidMin;\nstruct timeval startTv, 
endTv;\nint blocks = 0;\nbool flag_retain;\nchar *zErrMsg = NULL;\nvector<string>  assetCodes;\n\n\tif (m_noReadings)\n\t{\n\t\tLogger::getLogger()->error(\"Attempt to purge readings from plugin that has no storage for readings\");\n\t\treturn 0;\n\t}\n\n\tLogger *logger = Logger::getLogger();\n\n\tostringstream threadId;\n\tthreadId << std::this_thread::get_id();\n\tReadingsCatalogue *readCatalogue = ReadingsCatalogue::getInstance();\n\n\t{\n\t\t// Attaches the needed databases if the queue is not empty\n\t\tAttachDbSync *attachSync = AttachDbSync::getInstance();\n\t\tattachSync->lock();\n\n\t\tif ( ! m_NewDbIdList.empty())\n\t\t{\n\t\t\treadCatalogue->connectionAttachDbList(this->getDbHandle(), m_NewDbIdList);\n\t\t}\n\t\tattachSync->unlock();\n\t}\n\n\tflag_retain = false;\n\n\tif ( (flags & STORAGE_PURGE_RETAIN_ANY) || (flags & STORAGE_PURGE_RETAIN_ALL) )\n\t{\n\t\tflag_retain = true;\n\t}\n\n\n\tLogger::getLogger()->debug(\"%s - flags %X flag_retain %d sent :%lu:\", __FUNCTION__, flags, flag_retain, sent);\n\n\t// Prepare empty result\n\tresult = \"{ \\\"removed\\\" : 0, \";\n\tresult += \" \\\"unsentPurged\\\" : 0, \";\n\tresult += \" \\\"unsentRetained\\\" : 0, \";\n\tresult += \" \\\"readings\\\" : 0, \";\n\tresult += \" \\\"method\\\" : \\\"rows\\\", \";\n\tresult += \" \\\"duration\\\" : 0 }\";\n\n\tlogger->info(\"Purge starting...\");\n\tgettimeofday(&startTv, NULL);\n\t/*\n\t * We fetch the current rowid and limit the purge process to work on just\n\t * those rows present in the database when the purge process started.\n\t * This prevents us looping in the purge process if new readings become\n\t * eligible for purging at a rate that is faster than we can purge them.\n\t */\n\n\tstring sql_cmd;\n\tstring sql_cmd_tmp;\n\t// Generate a single SQL statement that, using a set of UNIONs, covers all the readings tables being handled\n\t// SQL - start\n\tsql_cmd = R\"(\n\t\tSELECT MAX(rowid)\n\t\tFROM\n\t\t(\n\t)\";\n\n\t// SQL - union of all the readings 
tables\n\tstring sql_cmd_base = \" SELECT  MAX(rowid) rowid FROM _dbname_._tablename_ \";\n\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\tsql_cmd_tmp = readCat->sqlConstructMultiDb(sql_cmd_base, assetCodes);\n\tsql_cmd += sql_cmd_tmp;\n\tsql_cmd_tmp = readCat->sqlConstructOverflow(sql_cmd_base, assetCodes);\n\tsql_cmd += sql_cmd_tmp;\n\n\t// SQL - end\n\tsql_cmd += R\"(\n\t\t) as readings_1\n\t)\";\n\n\tint rc = SQLexec(dbHandle, \"readings\",\n\t\t\t\t sql_cmd.c_str(),\n\t     rowidCallback,\n\t     &rowidLimit,\n\t     &zErrMsg);\n\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"purge - phase 0, fetching rowid limit \", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\treturn 0;\n\t}\n\tmaxrowidLimit = rowidLimit;\n\n\tLogger::getLogger()->debug(\"purgeReadings rowidLimit %lu\", rowidLimit);\n\n\n\t// Generate a single SQL statement that, using a set of UNIONs, covers all the readings tables being handled\n\t// SQL - start\n\tsql_cmd = R\"(\n\t\tSELECT MIN(rowid)\n\t\tFROM\n\t\t(\n\t)\";\n\n\t// SQL - union of all the readings tables\n\tsql_cmd_base = \" SELECT  MIN(rowid) rowid FROM _dbname_._tablename_ \";\n\tsql_cmd_tmp = readCat->sqlConstructMultiDb(sql_cmd_base, assetCodes, true);\n\tsql_cmd += sql_cmd_tmp;\n\tsql_cmd_tmp = readCat->sqlConstructOverflow(sql_cmd_base, assetCodes, true);\n\tsql_cmd += sql_cmd_tmp;\n\n\t// SQL - end\n\tsql_cmd += R\"(\n\t\t) as readings_1\n\t)\";\n\n\tLogger::getLogger()->debug(\"%s - SELECT MIN - '%s'\", __FUNCTION__,  sql_cmd.c_str() );\n\n\trc = SQLexec(dbHandle, \"readings\",\n\t\t\t\t sql_cmd.c_str(),\n\t     rowidCallback,\n\t     &minrowidLimit,\n\t     &zErrMsg);\n\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"purge - phase 0, fetching minrowid limit \", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\treturn 0;\n\t}\n\n\tLogger::getLogger()->debug(\"purgeReadings minrowidLimit %lu\", minrowidLimit);\n\n\tif (age == 0)\n\t{\n\t\t/*\n\t\t * An age of 0 means remove the oldest hour's data.\n\t\t * So set age based on 
the data we have and continue.\n\t\t */\n\t\tstring sql_cmd;\n\t\t// Generate a single SQL statement that, using a set of UNIONs, covers all the readings tables being handled\n\t\t// SQL - start\n\t\t// 3600 seconds per hour: convert the age of the oldest reading into hours\n\t\tsql_cmd = R\"(\n\t\t\tSELECT (strftime('%s','now', 'utc') - strftime('%s', MIN(user_ts)))/3600\n\t\t\tFROM\n\t\t\t(\n\t\t)\";\n\n\t\t// SQL - union of all the readings tables\n\t\tstring sql_cmd_base;\n\t\tstring sql_cmd_tmp;\n\t\tsql_cmd_base = \" SELECT MIN(user_ts) user_ts FROM _dbname_._tablename_  WHERE rowid <= \" + to_string(rowidLimit);\n\t\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\t\tsql_cmd_tmp = readCat->sqlConstructMultiDb(sql_cmd_base, assetCodes, true);\n\t\tsql_cmd += sql_cmd_tmp;\n\t\tsql_cmd_tmp = readCat->sqlConstructOverflow(sql_cmd_base, assetCodes, true);\n\t\tsql_cmd += sql_cmd_tmp;\n\n\t\t// SQL - end\n\t\tsql_cmd += R\"(\n\t\t\t) as readings_1\n\t\t)\";\n\n\t\tSQLBuffer oldest;\n\t\toldest.append(sql_cmd);\n\t\toldest.append(';');\n\t\tconst char *query = oldest.coalesce();\n\n\t\tchar *zErrMsg = NULL;\n\t\tint rc;\n\t\tint purge_readings = 0;\n\n\t\t// Exec query and get result in 'purge_readings' via 'selectCallback'\n\t\trc = SQLexec(dbHandle, \"readings\",\n\t\t\t     query,\n\t\t\t     selectCallback,\n\t\t\t     &purge_readings,\n\t\t\t     &zErrMsg);\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\n\t\tif (rc == SQLITE_OK)\n\t\t{\n\t\t\tage = purge_readings;\n\t\t}\n\t\telse\n\t\t{\n\t\t\traiseError(\"purge - phase 1\", zErrMsg);\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\treturn 0;\n\t\t}\n\n\t\tLogger::getLogger()->debug(\"purgeReadings purge_readings %d age %lu\", purge_readings, age);\n\t}\n\tLogger::getLogger()->debug(\"%s - rowidLimit :%lu: maxrowidLimit :%lu: minrowidLimit :%lu: age :%lu:\", __FUNCTION__, rowidLimit, maxrowidLimit, minrowidLimit, age);\n\n\n\t{\n\t\t/*\n\t\t * Refine rowid limit to just those rows older than age hours.\n\t\t */\n\t\tchar *zErrMsg = NULL;\n\t\tint rc;\n\t\tunsigned long l 
= minrowidLimit;\n\t\tunsigned long r;\n\t\tif (flag_retain) {\n\n\t\t\tr = min(sent, rowidLimit);\n\t\t} else {\n\t\t\tr = rowidLimit;\n\t\t}\n\n\t\tr = max(r, l);\n\t\tlogger->debug(\"%s line %d - l=%lu, r=%lu, sent=%lu, rowidLimit=%lu, minrowidLimit=%lu, flag_retain=%d\", __FUNCTION__, __LINE__, l, r, sent, rowidLimit, minrowidLimit, flag_retain);\n\n\t\tif (l == r)\n\t\t{\n\t\t\tlogger->info(\"No data to purge: min_id == max_id == %lu\", minrowidLimit);\n\t\t\treturn 0;\n\t\t}\n\n\t\tunsigned long m=l;\n\n\t\twhile (l <= r)\n\t\t{\n\t\t\tunsigned long midRowId = 0;\n\t\t\tunsigned long prev_m = m;\n\t\t\tm = l + (r - l) / 2;\n\t\t\tif (prev_m == m) break;\n\n\t\t\t// e.g. select id from readings where rowid = 219867307 AND user_ts < datetime('now' , '-24 hours', 'utc');\n\n\t\t\tstring sql_cmd;\n\t\t\t// Generate a single SQL statement that, using a set of UNIONs, covers all the readings tables being handled\n\t\t\t{\n\t\t\t\t// SQL - start\n\t\t\t\t// Check whether the row at the midpoint rowid is older than the age limit\n\t\t\t\tsql_cmd = R\"(\n\t\t\t\t\tselect id\n\t\t\t\t\tFROM\n\t\t\t\t\t(\n\t\t\t\t)\";\n\n\t\t\t\t// SQL - union of all the readings tables\n\t\t\t\tstring sql_cmd_base;\n\t\t\t\tstring sql_cmd_tmp;\n\t\t\t\tsql_cmd_base = \" SELECT id FROM _dbname_._tablename_  WHERE rowid = \" + to_string(m) + \" AND user_ts < datetime('now' , '-\" +to_string(age) + \" hours')\";\n\t\t\t\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\t\t\t\tsql_cmd_tmp = readCat->sqlConstructMultiDb(sql_cmd_base, assetCodes);\n\t\t\t\tsql_cmd += sql_cmd_tmp;\n\t\t\t\tsql_cmd_tmp = readCat->sqlConstructOverflow(sql_cmd_base, assetCodes);\n\t\t\t\tsql_cmd += sql_cmd_tmp;\n\n\t\t\t\t// SQL - end\n\t\t\t\tsql_cmd += R\"(\n\t\t\t\t\t) as readings_1\n\t\t\t\t)\";\n\n\t\t\t}\n\n\t\t\tSQLBuffer sqlBuffer;\n\t\t\tsqlBuffer.append(sql_cmd);\n\t\t\tsqlBuffer.append(';');\n\t\t\tconst char *query = sqlBuffer.coalesce();\n\n\t\t\trc = SQLexec(dbHandle, 
\"readings\",\n\t\t\tquery,\n\t\t\trowidCallback,\n\t\t\t&midRowId,\n\t\t\t&zErrMsg);\n\n\n\t\t\tdelete[] query;\n\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t \t\t\traiseError(\"purge - phase 1, fetching midRowId \", zErrMsg);\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t\treturn 0;\n\t\t\t}\n\n\t\t\tif (midRowId == 0) // mid row doesn't satisfy given condition for user_ts, so discard right/later half and look in left/earlier half\n\t\t\t{\n\t\t\t\t// search in earlier/left half\n\t\t\t\tr = m - 1;\n\n\t\t\t\t// The m position should be skipped as midRowId is 0\n\t\t\t\tm = r;\n\t\t\t}\n\t\t\telse //if (l != m)\n\t\t\t{\n\t\t\t\t// search in later/right half\n\t\t\t\tl = m + 1;\n\t\t\t}\n\t\t}\n\n\t\trowidLimit = m;\n\n\t\tLogger::getLogger()->debug(\"%s - rowidLimit :%lu: minrowidLimit :%lu: maxrowidLimit :%lu:\", __FUNCTION__, rowidLimit, minrowidLimit, maxrowidLimit);\n\n\t\tif (minrowidLimit == rowidLimit)\n\t\t{\n\t\t\tlogger->info(\"No data to purge\");\n\t\t\treturn 0;\n\t\t}\n\n\t\trowidMin = minrowidLimit;\n\t\tLogger::getLogger()->debug(\"%s - m :%lu: rowidMin :%lu: \",__FUNCTION__ ,m,  rowidMin);\n\t}\n\n\t//logger->info(\"Purge collecting unsent row count\");\n\n\tif ( ! 
flag_retain )\n\t{\n\t\tchar *zErrMsg = NULL;\n\t\tint rc;\n\t\tunsigned long lastPurgedId;\t// rowidCallback expects an unsigned long\n\n\t\tstring sql_cmd;\n\t\t// Generate a single SQL statement that, using a set of UNIONs, covers all the readings tables being handled\n\t\t{\n\t\t\t// SQL - start\n\t\t\t// Fetch the id corresponding to the rowid limit\n\t\t\tsql_cmd = R\"(\n\t\t\t\t\tselect id\n\t\t\t\t\tFROM\n\t\t\t\t\t(\n\t\t\t\t)\";\n\n\t\t\t// SQL - union of all the readings tables\n\t\t\tstring sql_cmd_base;\n\t\t\tstring sql_cmd_tmp;\n\t\t\tsql_cmd_base = \" SELECT id FROM _dbname_._tablename_  WHERE rowid = \" + to_string(rowidLimit) + \" \";\n\t\t\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\t\t\tsql_cmd_tmp = readCat->sqlConstructMultiDb(sql_cmd_base, assetCodes);\n\t\t\tsql_cmd += sql_cmd_tmp;\n\t\t\tsql_cmd_tmp = readCat->sqlConstructOverflow(sql_cmd_base, assetCodes);\n\t\t\tsql_cmd += sql_cmd_tmp;\n\n\t\t\t// SQL - end\n\t\t\tsql_cmd += R\"(\n\t\t\t\t\t) as readings_1\n\t\t\t\t)\";\n\n\t\t}\n\n\t\tSQLBuffer idBuffer;\n\t\tidBuffer.append(sql_cmd);\n\t\tidBuffer.append(';');\n\t\tconst char *idQuery = idBuffer.coalesce();\n\n\t\trc = SQLexec(dbHandle, \"readings\",\n\t\t     idQuery,\n\t\t     rowidCallback,\n\t\t     &lastPurgedId,\n\t\t     &zErrMsg);\n\n\t\t// Release memory for 'idQuery' var\n\t\tdelete[] idQuery;\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"purge - phase 0, fetching rowid limit \", zErrMsg);\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\treturn 0;\n\t\t}\n\n\t\tif (sent != 0 && lastPurgedId > sent)\t// Unsent readings will be purged\n\t\t{\n\t\t\t// Get number of unsent rows we are about to remove\n\t\t\tint unsent = rowidLimit - sent;\n\t\t\tunsentPurged = unsent;\n\t\t}\n\n\t\tLogger::getLogger()->debug(\"%s - lastPurgedId %lu unsentPurged :%ld:\",__FUNCTION__, lastPurgedId, unsentPurged);\n\t}\n\tif (m_writeAccessOngoing)\n\t{\n\t\twhile (m_writeAccessOngoing)\n\t\t{\n\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(100));\n\t\t}\n\t}\n\n\tunsigned 
int deletedRows = 0;\nzErrMsg = NULL;\nunsigned long rowsAffected;\nunsigned int totTime=0, prevBlocks=0, prevTotTime=0;\nlogger->info(\"Purge about to delete readings # %lu to %lu\", rowidMin, rowidLimit);\n\n\twhile (rowidMin < rowidLimit)\n\t{\n\t\tblocks++;\n\t\trowidMin += purgeBlockSize;\n\t\tif (rowidMin > rowidLimit)\n\t\t{\n\t\t\trowidMin = rowidLimit;\n\t\t}\n\t\tSQLBuffer sql;\n\t\tsql.append(\"DELETE FROM  _dbname_._tablename_ WHERE rowid <= \");\n\t\tsql.append(rowidMin);\n\t\tsql.append(\" AND user_ts < datetime('now' , '-\" +to_string(age) + \" hours')\");\n\t\tsql.append(';');\n\t\tconst char *query = sql.coalesce();\n\n\t\tlogSQL(\"ReadingsPurge\", query);\n\n\t\tint rc;\n\t\t{\n\t\t//unique_lock<mutex> lck(db_mutex);\n//\t\tif (m_writeAccessOngoing) db_cv.wait(lck);\n\n\t\tSTART_TIME;\n\t\trc = readCat->purgeAllReadings(dbHandle, query, &zErrMsg, &rowsAffected);\n\t\tEND_TIME;\n\n\t\tlogger->debug(\"%s - DELETE sql '%s' rowsAffected :%lu:\",  __FUNCTION__, query, rowsAffected);\n\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\n\t\ttotTime += usecs;\n\n\t\tif (usecs > 150000)\n\t\t{\n\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(100 + usecs/1000));\n\t\t}\n\t\t}\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"purge - phase 3\", zErrMsg);\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\treturn 0;\n\t\t}\n\n\t\t// Get db changes\n\t\tdeletedRows += rowsAffected;\n\t\tlogger->debug(\"%s - Purge delete block #%d with %lu readings\", __FUNCTION__, blocks, rowsAffected);\n\n\t\tif (blocks % RECALC_PURGE_BLOCK_SIZE_NUM_BLOCKS == 0)\n\t\t{\n\t\t\tint prevAvg = prevTotTime/(prevBlocks?prevBlocks:1);\n\t\t\tint currAvg = (totTime-prevTotTime)/(blocks-prevBlocks);\n\t\t\tint avg = ((prevAvg?prevAvg:currAvg)*5 + currAvg*5) / 10; // 50% weight for the long term average and 50% for the current average\n\t\t\tprevBlocks = blocks;\n\t\t\tprevTotTime = totTime;\n\t\t\tint deviation = abs(avg - 
TARGET_PURGE_BLOCK_DEL_TIME);\n\t\t\tlogger->debug(\"blocks=%d, totTime=%d usecs, prevAvg=%d usecs, currAvg=%d usecs, avg=%d usecs, TARGET_PURGE_BLOCK_DEL_TIME=%d usecs, deviation=%d usecs\",\n\t\t\t\t\t\t\tblocks, totTime, prevAvg, currAvg, avg, TARGET_PURGE_BLOCK_DEL_TIME, deviation);\n\t\t\tif (deviation > TARGET_PURGE_BLOCK_DEL_TIME/10)\n\t\t\t{\n\t\t\t\tfloat ratio = (float)TARGET_PURGE_BLOCK_DEL_TIME / (float)avg;\n\t\t\t\tif (ratio > 2.0) ratio = 2.0;\n\t\t\t\tif (ratio < 0.5) ratio = 0.5;\n\t\t\t\tpurgeBlockSize = (float)purgeBlockSize * ratio;\n\t\t\t\tpurgeBlockSize = purgeBlockSize / PURGE_BLOCK_SZ_GRANULARITY * PURGE_BLOCK_SZ_GRANULARITY;\n\t\t\t\tif (purgeBlockSize < MIN_PURGE_DELETE_BLOCK_SIZE)\n\t\t\t\t\tpurgeBlockSize = MIN_PURGE_DELETE_BLOCK_SIZE;\n\t\t\t\tif (purgeBlockSize > MAX_PURGE_DELETE_BLOCK_SIZE)\n\t\t\t\t\tpurgeBlockSize = MAX_PURGE_DELETE_BLOCK_SIZE;\n\t\t\t\tlogger->debug(\"Changed purgeBlockSize to %d\", purgeBlockSize);\n\t\t\t}\n\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(100));\n\t\t}\n\t\tLogger::getLogger()->debug(\"Purge delete block #%d with %lu readings\", blocks, rowsAffected);\n\t}\n\n\tunsentRetained = maxrowidLimit - rowidLimit;\n\n\tnumReadings = maxrowidLimit + 1 - minrowidLimit - deletedRows;\n\n\tif (sent == 0)\t// Special case when no north process is used\n\t{\n\t\tunsentPurged = deletedRows;\n\t}\n\n\tif (deletedRows)\n\t{\n\t\tstd::thread th(&ReadingsCatalogue::loadEmptyAssetReadingCatalogue, ReadingsCatalogue::getInstance(), false);\n\t\tth.detach();\n\t}\n\n\tgettimeofday(&endTv, NULL);\n\tunsigned long duration = (1000000 * (endTv.tv_sec - startTv.tv_sec)) + endTv.tv_usec - startTv.tv_usec;\n\n\tostringstream convert;\n\n\tconvert << \"{ \\\"removed\\\" : \" << deletedRows << \", \";\n\tconvert << \" \\\"unsentPurged\\\" : \" << unsentPurged << \", \";\n\tconvert << \" \\\"unsentRetained\\\" : \" << unsentRetained << \", \";\n\tconvert << \" \\\"readings\\\" : \" 
<< numReadings << \", \";\n\tconvert << \" \\\"method\\\" : \\\"age\\\", \";\n\tconvert << \" \\\"duration\\\" : \" << duration << \" }\";\n\n\tresult = convert.str();\n\n\tlogger->info(\"Purge process complete in %d blocks in %lduS\", blocks, duration);\n\n\tLogger::getLogger()->debug(\"%s - age :%lu: flag_retain :%x: sent :%lu: result '%s'\", __FUNCTION__, age, flags, flag_retain, result.c_str() );\n\n\treturn deletedRows;\n}\n\n#endif\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Purge readings from the reading table\n */\nunsigned int  Connection::purgeReadingsByRows(unsigned long rows,\n\t\t\t\t\tunsigned int flags,\n\t\t\t\t\tunsigned long sent,\n\t\t\t\t\tstd::string& result)\n{\nunsigned long  deletedRows = 0, unsentPurged = 0, unsentRetained = 0, numReadings = 0;\nunsigned long limit = 0;\nstring sql_cmd;\nvector<string>  assetCodes;\nbool flag_retain;\nstruct timeval startTv, endTv;\n\n\n\t// rowidCallback expects unsigned long\n\tunsigned long rowcount, minId, maxId;\n\tunsigned long rowsAffected;\n\tunsigned long  deletePoint;\n\tchar *zErrMsg = NULL;\n\tint rc;\n\n\tLogger *logger = Logger::getLogger();\n\n\tif (m_noReadings)\n\t{\n\t\tlogger->error(\"Attempt to purge readings from plugin that has no storage for readings\");\n\t\treturn 0;\n\t}\n\n\tgettimeofday(&startTv, NULL);\n\tostringstream threadId;\n\tthreadId << std::this_thread::get_id();\n\tReadingsCatalogue *readCatalogue = ReadingsCatalogue::getInstance();\n\n\t{\n\t\t// Attaches the needed databases if the queue is not empty\n\t\tAttachDbSync *attachSync = AttachDbSync::getInstance();\n\t\tattachSync->lock();\n\n\t\tif ( ! 
m_NewDbIdList.empty())\n\t\t{\n\t\t\treadCatalogue->connectionAttachDbList(this->getDbHandle(), m_NewDbIdList);\n\t\t}\n\t\tattachSync->unlock();\n\t}\n\n\n\tflag_retain = false;\n\n\tif ( (flags & STORAGE_PURGE_RETAIN_ANY) || (flags & STORAGE_PURGE_RETAIN_ALL) )\n\t{\n\t\tflag_retain = true;\n\t}\n\tLogger::getLogger()->debug(\"%s - flags %X flag_retain %d sent :%ld:\", __FUNCTION__, flags, flag_retain, sent);\n\n\n\tlogger->info(\"Purge by Rows called\");\n\tif (flag_retain)\n\t{\n\t\tlimit = sent;\n\t\tlogger->info(\"Sent is %lu\", sent);\n\t}\n\tlogger->info(\"Purge by Rows called with flag_retain %d, rows %lu, limit %lu\", flag_retain, rows, limit);\n\n\trowsAffected = 0;\n\t// Don't save unsent rows\n\n\n\t{ // Calc rowcount\n\t\t// Generate a single SQL statement that using a set of UNION considers all the readings table in handling\n\t\t{\n\t\t\t// SQL - start\n\t\t\t// SUM is used to ensure just 1 row is returned\n\t\t\tsql_cmd = R\"(\n\t\t\t\tSELECT  SUM(rowid)\n\t\t\t\t\tFROM\n\t\t\t\t\t(\n\t\t\t\t)\";\n\n\t\t\t// SQL - union of all the readings tables\n\t\t\tstring sql_cmd_base;\n\t\t\tstring sql_cmd_tmp;\n\t\t\tsql_cmd_base = \" select count(rowid) rowid FROM _dbname_._tablename_ \";\n\t\t\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\t\t\tsql_cmd_tmp = readCat->sqlConstructMultiDb(sql_cmd_base, assetCodes);\n\t\t\tsql_cmd += sql_cmd_tmp;\n\t\t\tsql_cmd_tmp = readCat->sqlConstructOverflow(sql_cmd_base, assetCodes);\n\t\t\tsql_cmd += sql_cmd_tmp;\n\n\t\t\t// SQL - end\n\t\t\tsql_cmd += R\"(\n\t\t\t\t\t) as readings_1\n\t\t\t\t)\";\n\t\t}\n\n\t\trc = SQLexec(dbHandle, \"readings\",\n\t\t\t\t\t sql_cmd.c_str(),\n\t\t\t\t\t rowidCallback,\n\t\t\t\t\t &rowcount,\n\t\t\t\t\t &zErrMsg);\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"purge - phase 0, fetching row count\", zErrMsg);\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\t{ // Calc maxId\n\t\t// Generate a single SQL statement that using a set of UNION 
considers all the readings table in handling\n\t\t{\n\t\t\t// SQL - start\n\t\t\t// MAX is used to ensure just 1 row is returned\n\t\t\tsql_cmd = R\"(\n\t\t\t\tSELECT  MAX(id)\n\t\t\t\t\tFROM\n\t\t\t\t\t(\n\t\t\t\t)\";\n\n\t\t\t// SQL - union of all the readings tables\n\t\t\tstring sql_cmd_base;\n\t\t\tstring sql_cmd_tmp;\n\t\t\tsql_cmd_base = \" SELECT MAX(id) id FROM _dbname_._tablename_ \";\n\t\t\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\t\t\tsql_cmd_tmp = readCat->sqlConstructMultiDb(sql_cmd_base, assetCodes);\n\t\t\tsql_cmd += sql_cmd_tmp;\n\t\t\tsql_cmd_tmp = readCat->sqlConstructOverflow(sql_cmd_base, assetCodes);\n\t\t\tsql_cmd += sql_cmd_tmp;\n\n\t\t\t// SQL - end\n\t\t\tsql_cmd += R\"(\n\t\t\t\t\t) as readings_1\n\t\t\t\t)\";\n\n\t\t}\n\n\t\trc = SQLexec(dbHandle, \"readings\",\n\t\t\t\t\t sql_cmd.c_str(),\n\t\t\t\t\t rowidCallback,\n\t\t\t\t\t &maxId,\n\t\t\t\t\t &zErrMsg);\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"purge - phase 0, fetching maximum id\", zErrMsg);\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\tnumReadings = rowcount;\n\trowsAffected = 0;\n\tdo\n\t{\n\t\tif (rowcount <= rows)\n\t\t{\n\t\t\tlogger->info(\"Row count %lu is less than required rows %lu\", rowcount, rows);\n\t\t\tbreak;\n\t\t}\n\n\t\t{ // Calc minId\n\t\t\t// Generate a single SQL statement that using a set of UNION considers all the readings table in handling\n\t\t\t{\n\t\t\t\t// SQL - start\n\t\t\t\t// MIN is used to ensure just 1 row is returned\n\t\t\t\tsql_cmd = R\"(\n\t\t\t\t\tSELECT  MIN(rowid)\n\t\t\t\t\t\tFROM\n\t\t\t\t\t\t(\n\t\t\t\t\t)\";\n\n\t\t\t\t// SQL - union of all the readings tables\n\t\t\t\tstring sql_cmd_base;\n\t\t\t\tstring sql_cmd_tmp;\n\t\t\t\tsql_cmd_base = \" SELECT MIN(rowid) rowid FROM _dbname_._tablename_ \";\n\t\t\t\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\t\t\t\tsql_cmd_tmp = readCat->sqlConstructMultiDb(sql_cmd_base, assetCodes, true);\n\t\t\t\tsql_cmd += 
sql_cmd_tmp;\n\t\t\t\tsql_cmd_tmp = readCat->sqlConstructOverflow(sql_cmd_base, assetCodes, true);\n\t\t\t\tsql_cmd += sql_cmd_tmp;\n\n\t\t\t\t// SQL - end\n\t\t\t\tsql_cmd += R\"(\n\t\t\t\t\t\t) as readings_1\n\t\t\t\t\t)\";\n\n\t\t\t\tlogger->debug(\"%s - SELECT MIN - sql_cmd '%s' \", __FUNCTION__, sql_cmd.c_str() );\n\t\t\t}\n\n\t\t\trc = SQLexec(dbHandle, \"readings\",\n\t\t\t\t\t\t sql_cmd.c_str(),\n\t\t\t\t\t\t rowidCallback,\n\t\t\t\t\t\t &minId,\n\t\t\t\t\t\t &zErrMsg);\n\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\traiseError(\"purge - phase 0, fetching minimum id\", zErrMsg);\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\n\t\tunsigned long deletePoint = minId + min(100000UL, rows);\n\n\t\tif (maxId - deletePoint < rows || deletePoint > maxId)\n\t\t\tdeletePoint = maxId - rows;\n\n\t\t// Do not delete\n\t\tif (flag_retain) {\n\n\t\t\tif (limit < deletePoint)\n\t\t\t{\n\t\t\t\tdeletePoint = limit;\n\t\t\t}\n\t\t}\n\t\tSQLBuffer sql;\n\n\t\tlogger->info(\"RowCount %lu, Max Id %lu, min Id %lu, delete point %lu\", rowcount, maxId, minId, deletePoint);\n\n\t\tsql.append(\"DELETE FROM  _dbname_._tablename_ WHERE id <= \");\n\t\tsql.append(deletePoint);\n\t\tconst char *query = sql.coalesce();\n\n\t\t{\n\t\t\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\n\t\t\t//unique_lock<mutex> lck(db_mutex);\n//\t\t\tif (m_writeAccessOngoing) db_cv.wait(lck);\n\n\t\t\t// Exec DELETE query: no callback, no resultset\n\t\t\trc = readCat->purgeAllReadings(dbHandle, query ,&zErrMsg, &rowsAffected);\n\n\t\t\tlogger->info(\"%s - DELETE - query '%s' rowsAffected :%ld:\", __FUNCTION__, query ,rowsAffected);\n\n\t\t\tdeletedRows += rowsAffected;\n\t\t\tnumReadings -= rowsAffected;\n\t\t\trowcount    -= rowsAffected;\n\n\t\t\tsqlite3_free(zErrMsg);\n\n\t\t\t// Release memory for 'query' var\n\t\t\tdelete[] query;\n\t\t\tlogger->debug(\" Deleted :%lu: rows\", rowsAffected);\n\t\t\tif (rowsAffected == 
0)\n\t\t\t{\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (limit != 0 && sent != 0)\n\t\t\t{\n\t\t\t\tunsentPurged = deletePoint - sent;\n\t\t\t}\n\t\t\telse if (!limit)\n\t\t\t{\n\t\t\t\tunsentPurged += rowsAffected;\n\t\t\t}\n\t\t}\n\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(1));\n\t} while (rowcount > rows);\n\n\tif (limit)\n\t{\n\t\tunsentRetained = numReadings - rows;\n\t}\n\n\tif (deletedRows)\n\t{\n\t\tstd::thread th(&ReadingsCatalogue::loadEmptyAssetReadingCatalogue,ReadingsCatalogue::getInstance(),false);\n\t\tth.detach();\n\t}\n\n\tgettimeofday(&endTv, NULL);\n\tunsigned long duration = (1000000 * (endTv.tv_sec - startTv.tv_sec)) + endTv.tv_usec - startTv.tv_usec;\n\n\tostringstream convert;\n\n\tconvert << \"{ \\\"removed\\\" : \" << deletedRows << \", \";\n\tconvert << \" \\\"unsentPurged\\\" : \" << unsentPurged << \", \";\n\tconvert << \" \\\"unsentRetained\\\" : \" << unsentRetained << \", \";\n\tconvert << \" \\\"readings\\\" : \" << numReadings << \", \";\n\tconvert << \" \\\"method\\\" : \\\"rows\\\", \";\n\tconvert << \" \\\"duration\\\" : \" << duration << \" }\";\n\n\tresult = convert.str();\n\n\tLogger::getLogger()->debug(\"%s - Purge by Rows complete - rows :%lu: flag :%x: sent :%lu:  numReadings :%lu:  rowsAffected :%lu:  result '%s'\", __FUNCTION__, rows, flags, sent, numReadings, rowsAffected, result.c_str() );\n\n\treturn deletedRows;\n}\n#endif\n\n/**\n * SQLite wrapper to retry statements when a database error occurs\n *\n */\nint Connection::SQLPrepare(sqlite3 *dbHandle, const char *sqlCmd, sqlite3_stmt **readingsStmt)\n{\n\tint retries = 0, rc;\n\n\tdo {\n\t\trc = sqlite3_prepare_v2(dbHandle, sqlCmd, -1, readingsStmt, NULL);\n\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\n\t\t\tif (retries >= LOG_AFTER_NERRORS){\n\t\t\t\tLogger::getLogger()->warn(\"SQLPrepare - error '%s' dbHandle %X sqlCmd '%s' retry %d of %d\",\n\t\t\t\t\t\t\t\t\t\t  sqlite3_errmsg(dbHandle),\n\t\t\t\t\t\t\t\t\t\t  dbHandle,\n\t\t\t\t\t\t\t\t\t\t  
sqlCmd,\n\t\t\t\t\t\t\t\t\t\t  retries,\n\t\t\t\t\t\t\t\t\t\t  MAX_RETRIES);\n\n\t\t\t}\n\n\t\t\tretries++;\n\t\t\tint interval = (retries * RETRY_BACKOFF);\n\t\t\tusleep(interval);\t// back off, the interval grows with each retry\n\t\t}\n\t} while (retries < MAX_RETRIES && (rc != SQLITE_OK));\n\n\tif (rc != SQLITE_OK)\n\t{\n\t\tLogger::getLogger()->error(\"SQLPrepare - Database error after maximum retries\");\n\t}\n\n\treturn rc;\n}\n\n/**\n * Purge readings by asset or purge all readings\n *\n * @param asset\t\tThe asset name to purge\n * \t\t\tIf empty all assets will be removed\n * @return\t\tThe number of removed asset records\n */\nunsigned int Connection::purgeReadingsAsset(const string& asset)\n{\nchar *zErrMsg = NULL;\nint rc;\nsqlite3_stmt *stmt;\n\n\tif (m_noReadings)\n\t{\n\t\treturn 0;\n\t}\n\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\tif (readCat == NULL)\n\t{\n\t\treturn 0;\n\t}\n\n\tif (asset.empty())\n\t{\n\t\tSQLBuffer sql;\n\t\tunsigned long rowsAffected;\n\n\t\tsql.append(\"DELETE FROM _dbname_._tablename_;\");\n\t\tconst char *query = sql.coalesce();\n\n\t\tlogSQL(\"ReadingsAssetPurge\", query);\n\n\t\tif (m_writeAccessOngoing)\n\t\t{\n\t\t\twhile (m_writeAccessOngoing)\n\t\t\t{\n\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(100));\n\t\t\t}\n\t\t}\n\n\t\t// Purge all readings data in all db tables\n\t\t// zErrMsg is freed if rc != SQLITE_OK\n\t\trc = readCat->purgeAllReadings(dbHandle, query ,&zErrMsg, &rowsAffected);\n\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"ReadingsAssetPurge\", sqlite3_errmsg(dbHandle));\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\treturn 0;\n\t\t}\n\n\t\treturn rowsAffected;\n\t}\n\telse\n\t{\n\t\tReadingsCatalogue::tyReadingReference ref;\n\t\tref = readCat->getReadingReference(this, asset.c_str());\n\t\tstring dbReadingsName;\n\t\tstring dbName;\n\n\t\tstring query = \"DELETE FROM \" + readCat->generateDbName(ref.dbId);\n\t\tquery += \".\" 
+ readCat->generateReadingsName(ref.dbId, ref.tableId) + \";\";\n\n\t\t// Execute SQL statement via SQLExec wrapper\n\t\trc = readCat->SQLExec(dbHandle, query.c_str(), &zErrMsg);\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"ReadingsAssetPurge\", sqlite3_errmsg(dbHandle));\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\treturn 0;\n\t\t}\n\t\treadCat->loadEmptyAssetReadingCatalogue();\n\t\t// Get number of affected rows\n\t\treturn (unsigned int)sqlite3_changes(dbHandle);\n\t}\n}\n"
  },
  {
    "path": "C/plugins/storage/sqlite/common/readings_catalogue.cpp",
"content": "/*\n * Fledge storage service - Readings catalogue handling\n *\n * Copyright (c) 2020 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli, Massimiliano Pinto\n */\n\n#include <vector>\n#include <algorithm>\n#include <utils.h>\n#include <sys/stat.h>\n#include <libgen.h>\n\n#include <string_utils.h>\n#include <connection.h>\n#include <connection_manager.h>\n#include <sqlite_common.h>\n#include \"readings_catalogue.h\"\n#include <purge_configuration.h>\n#include \"json_utils.h\"\n\nusing namespace std;\nusing namespace rapidjson;\n\n// Log set, clear and get of transaction boundaries\n#define LOG_TX_BOUNDARIES 0\n\n/**\n * Constructor\n *\n * This is never explicitly called as the ReadingsCatalogue is a\n * singleton class.\n */\nReadingsCatalogue::ReadingsCatalogue() : m_nextOverflow(1), m_maxOverflowUsed(0)\n{\n}\n\n/**\n * Logs an error. A variable argument function that\n * uses a printf format string to log an error message with the\n * associated operation.\n *\n * @param operation\tThe operation in progress\n * @param reason\tA printf format string with the error message text\n */\nvoid ReadingsCatalogue::raiseError(const char *operation, const char *reason, ...)\n{\n\tchar\ttmpbuf[512];\n\n\tva_list ap;\n\tva_start(ap, reason);\n\tvsnprintf(tmpbuf, sizeof(tmpbuf), reason, ap);\n\tva_end(ap);\n\tLogger::getLogger()->error(\"ReadingsCatalogues: %s during operation %s\", tmpbuf, operation);\n}\n\n/**\n * Retrieve the information from the persistent storage:\n *     global id\n *     last created database\n *\n * @param dbHandle Database connection to use for the operations\n *\n */\nbool ReadingsCatalogue::configurationRetrieve(sqlite3 *dbHandle)\n{\n\tstring sql_cmd;\n\tint rc;\n\tint id;\n\tint nCols;\n\tsqlite3_stmt *stmt;\n\n\t// Retrieves the global_id from the DB\n\tsql_cmd = \" SELECT global_id, db_id_Last, n_readings_per_db, n_db_preallocate FROM \" READINGS_DB \".configuration_readings 
\";\n\n\tif (sqlite3_prepare_v2(dbHandle,sql_cmd.c_str(),-1, &stmt,NULL) != SQLITE_OK)\n\t{\n\t\traiseError(\"configurationRetrieve\", sqlite3_errmsg(dbHandle));\n\t\treturn false;\n\t}\n\n\tif (SQLStep(stmt) != SQLITE_ROW)\n\t{\n\t\tm_ReadingsGlobalId = 1;\n\t\tm_dbIdLast = 0;\n\n\t\tm_storageConfigCurrent.nReadingsPerDb = m_storageConfigApi.nReadingsPerDb;\n\t\tm_storageConfigCurrent.nDbPreallocate = m_storageConfigApi.nDbPreallocate;\n\n\t\tsql_cmd = \" INSERT INTO \" READINGS_DB \".configuration_readings VALUES (\" + to_string(m_ReadingsGlobalId) + \",\"\n\t\t\t\t  + to_string(m_dbIdLast)              + \",\"\n\t\t\t\t  + to_string(m_storageConfigCurrent.nReadingsPerDb) + \",\"\n\t\t\t\t  + to_string(m_storageConfigCurrent.nDbPreallocate) + \")\";\n\t\tif (SQLExec(dbHandle, sql_cmd.c_str()) != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"configurationRetrieve\", sqlite3_errmsg(dbHandle));\n\t\t\treturn false;\n\t\t}\n\t}\n\telse\n\t{\n\t\tnCols = sqlite3_column_count(stmt);\n\t\tm_ReadingsGlobalId = sqlite3_column_int64(stmt, 0);\n\t\tm_dbIdLast = sqlite3_column_int(stmt, 1);\n\t\tm_storageConfigCurrent.nReadingsPerDb = sqlite3_column_int(stmt, 2);\n\t\tm_storageConfigCurrent.nDbPreallocate = sqlite3_column_int(stmt, 3);\n\t}\n\tLogger::getLogger()->debug(\"configurationRetrieve: ReadingsGlobalId %lu dbIdLast %d \", m_ReadingsGlobalId.load(), m_dbIdLast);\n\n\tsqlite3_finalize(stmt);\n\n\treturn true;\n}\n\n/**\n * Retrieves the global id stored in SQLite and if it is not possible\n * it calculates the value from the readings tables executing a max(id) on each table.\n *\n * Once retrieved or calculated,\n * It updates the value into SQlite to -1 to force a calculation at the next plugin init (Fledge starts)\n * in the case the proper value was not stored as the plugin shutdown (when Fledge is stopped) was not called.\n *\n */\nbool ReadingsCatalogue::evaluateGlobalId ()\n{\n\tstring sql_cmd;\n\tint rc;\n\tlong id;\n\tint nCols;\n\tsqlite3_stmt *stmt;\n\tsqlite3 
*dbHandle;\n\n\tConnectionManager *manager = ConnectionManager::getInstance();\n\tConnection *connection = manager->allocate();\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Evaluate Global ID\";\n\tconnection->setUsage(usage);\n#endif\n\tdbHandle = connection->getDbHandle();\n\n\t// Retrieves the global_id from the DB\n\tsql_cmd = \" SELECT global_id FROM \" READINGS_DB \".configuration_readings \";\n\n\tif (sqlite3_prepare_v2(dbHandle,sql_cmd.c_str(),-1, &stmt,NULL) != SQLITE_OK)\n\t{\n\t\traiseError(\"evaluateGlobalId\", sqlite3_errmsg(dbHandle));\n\t\tmanager->release(connection);\n\t\treturn false;\n\t}\n\n\tif (SQLStep(stmt) != SQLITE_ROW)\n\t{\n\t\tm_ReadingsGlobalId = 1;\n\n\t\tsql_cmd = \" INSERT INTO \" READINGS_DB \".configuration_readings VALUES (\" + to_string(m_ReadingsGlobalId) + \",\"\n\t\t\t\t  + \"0\" + \",\"\n\t\t\t\t  + to_string(m_storageConfigApi.nReadingsPerDb) + \",\"\n\t\t\t\t  + to_string(m_storageConfigApi.nDbPreallocate) + \")\";\n\n\t\tif (SQLExec(dbHandle, sql_cmd.c_str()) != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"evaluateGlobalId\", sqlite3_errmsg(dbHandle));\n\t\t\tmanager->release(connection);\n\t\t\treturn false;\n\t\t}\n\t}\n\telse\n\t{\n\t\tnCols = sqlite3_column_count(stmt);\n\t\tm_ReadingsGlobalId = sqlite3_column_int64(stmt, 0);\n\t}\n\n\tid = m_ReadingsGlobalId;\n\tLogger::getLogger()->debug(\"evaluateGlobalId - global id from the DB %lu\", id);\n\n\tif (m_ReadingsGlobalId == -1)\n\t{\n\t\tm_ReadingsGlobalId = calculateGlobalId (dbHandle);\n\t}\n\n\tid = m_ReadingsGlobalId;\n\tLogger::getLogger()->debug(\"evaluateGlobalId - global id after evaluation %lu\", id);\n\n\t// Set the global_id in the DB to -1 to force a calculation at the restart\n\t// in case the shutdown is not executed and the proper value stored\n\tsql_cmd = \" UPDATE \" READINGS_DB \".configuration_readings SET global_id=-1;\";\n\n\tif (SQLExec(dbHandle, sql_cmd.c_str()) != SQLITE_OK)\n\t{\n\t\traiseError(\"evaluateGlobalId\", 
sqlite3_errmsg(dbHandle));\n\t\tmanager->release(connection);\n\t\treturn false;\n\t}\n\n\tsqlite3_finalize(stmt);\n\tmanager->release(connection);\n\n\treturn true;\n}\n\n/**\n * Stores the global id into SQlite\n *\n */\nbool ReadingsCatalogue::storeGlobalId ()\n{\n\tstring sql_cmd;\n\tint rc;\n\tint id;\n\tint nCols;\n\tsqlite3_stmt *stmt;\n\tsqlite3 *dbHandle;\n\n\tunsigned long i;\n\ti = m_ReadingsGlobalId;\n\tLogger::getLogger()->debug(\"storeGlobalId m_globalId %lu \", i);\n\n\n\tConnectionManager *manager = ConnectionManager::getInstance();\n\tConnection *connection = manager->allocate();\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Store Global ID\";\n\tconnection->setUsage(usage);\n#endif\n\tdbHandle = connection->getDbHandle();\n\n\tsql_cmd = \" UPDATE \" READINGS_DB \".configuration_readings SET global_id=\" + to_string(m_ReadingsGlobalId);\n\n\tif (SQLExec(dbHandle, sql_cmd.c_str()) != SQLITE_OK)\n\t{\n\t\traiseError(\"storeGlobalId\", sqlite3_errmsg(dbHandle));\n\t\tmanager->release(connection);\n\t\treturn false;\n\t}\n\n\tmanager->release(connection);\n\n\treturn true;\n}\n\n/**\n * Calculates the value from the readings tables executing a max(id) on each table.\n *\n * @param dbHandle Database connection to use for the operations\n *\n */\nlong ReadingsCatalogue::calculateGlobalId (sqlite3 *dbHandle)\n{\n\tstring sql_cmd;\n\tstring dbReadingsName;\n\tstring dbName;\n\n\tint rc;\n\tunsigned long id;\n\tint nCols;\n\n\tsqlite3_stmt *stmt;\n\tid = 1;\n\n\t// Prepare the sql command to calculate the global id from the rows in the DB\n\tsql_cmd = R\"(\n\t\tSELECT\n\t\t\tmax(id) id\n\t\tFROM\n\t\t(\n\t)\";\n\n\tbool firstRow = true;\n\tif (m_AssetReadingCatalogue.empty())\n\t{\n\t\tstring dbReadingsName = generateReadingsName(1, 1);\n\n\t\tsql_cmd += \" SELECT max(id) id FROM \" READINGS_DB \".\" + dbReadingsName + \" \";\n\t}\n\telse\n\t{\n\t\tfor (auto &item : m_AssetReadingCatalogue)\n\t\t{\n\t\t\tif (item.second.getTable() != 
0)\n\t\t\t{\n\t\t\t\tif (!firstRow)\n\t\t\t\t{\n\t\t\t\t\tsql_cmd += \" UNION \";\n\t\t\t\t}\n\n\t\t\t\tdbName = generateDbName(item.second.getDatabase());\n\t\t\t\tdbReadingsName = generateReadingsName(item.second.getDatabase(), item.second.getTable());\n\n\t\t\t\tsql_cmd += \" SELECT max(id) id FROM \" + dbName + \".\" + dbReadingsName + \" \";\n\t\t\t\tfirstRow = false;\n\t\t\t}\n\t\t}\n\t\t// Now add overflow tables\n\t\tfor (int i = 1; i <= m_maxOverflowUsed; i++)\n\t\t{\n\t\t\tif (!firstRow)\n\t\t\t{\n\t\t\t\tsql_cmd += \" UNION \";\n\t\t\t}\n\t\t\tdbName = generateDbName(i);\n\t\t\tdbReadingsName = generateReadingsName(i, 0);\n\t\t\tsql_cmd += \" SELECT max(id) id FROM \" + dbName + \".\" + dbReadingsName + \" \";\n\t\t\tfirstRow = false;\n\t\t}\n\t}\n\tsql_cmd += \") AS tb\";\n\n\n\tif (sqlite3_prepare_v2(dbHandle,sql_cmd.c_str(),-1, &stmt,NULL) != SQLITE_OK)\n\t{\n\t\traiseError(\"calculateGlobalId\", sqlite3_errmsg(dbHandle));\n\t\treturn false;\n\t}\n\n\tif (SQLStep(stmt) != SQLITE_ROW)\n\t{\n\t\traiseError(\"calculateGlobalId SQLStep\", sqlite3_errmsg(dbHandle));\n\t\tid = 1;\n\t}\n\telse\n\t{\n\t\tnCols = sqlite3_column_count(stmt);\n\t\tid = sqlite3_column_int64(stmt, 0);\n\t\t// m_globalId stores the next value to be used\n\t\tid++;\n\t}\n\n\tLogger::getLogger()->debug(\"calculateGlobalId - global id evaluated %lu\", id);\n\tsqlite3_finalize(stmt);\n\n\treturn (id);\n}\n\n/**\n *  Calculates the minimum id from the readings tables executing a min(id) on each table\n *\n * @param dbHandle Database connection to use for the operations\n *\n */\nlong ReadingsCatalogue::getMinGlobalId (sqlite3 *dbHandle)\n{\n\tstring sql_cmd;\n\tstring dbReadingsName;\n\tstring dbName;\n\n\tint rc;\n\tunsigned long id;\n\tint nCols;\n\n\tsqlite3_stmt *stmt;\n\tid = 1;\n\n\t// Prepare the sql command to calculate the global id from the rows in the DB\n\t{\n\t\tsql_cmd = R\"(\n\t\t\tSELECT\n\t\t\t\tmin(id) id\n\t\t\tFROM\n\t\t\t(\n\t\t)\";\n\n\t\tbool firstRow = 
true;\n\t\tif (m_AssetReadingCatalogue.empty())\n\t\t{\n\t\t\tstring dbReadingsName = generateReadingsName(1, 1);\n\n\t\t\tsql_cmd += \" SELECT min(id) id FROM \" READINGS_DB \".\" + dbReadingsName + \" \";\n\t\t}\n\t\telse\n\t\t{\n\t\t\tfor (auto &item : m_AssetReadingCatalogue)\n\t\t\t{\n\t\t\t\tif (item.second.getTable() != 0)\n\t\t\t\t{\n\t\t\t\t\tif (!firstRow)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql_cmd += \" UNION \";\n\t\t\t\t\t}\n\n\t\t\t\t\tdbName = generateDbName(item.second.getDatabase());\n\t\t\t\t\tdbReadingsName = generateReadingsName(item.second.getDatabase(), item.second.getTable());\n\n\t\t\t\t\tsql_cmd += \" SELECT min(id) id FROM \" + dbName + \".\" + dbReadingsName + \" \";\n\t\t\t\t\tfirstRow = false;\n\t\t\t\t}\n\t\t\t}\n\t\t\t// Now add overflow tables\n\t\t\tfor (int i = 1; i <= m_maxOverflowUsed; i++)\n\t\t\t{\n\t\t\t\tif (!firstRow)\n\t\t\t\t{\n\t\t\t\t\tsql_cmd += \" UNION \";\n\t\t\t\t}\n\t\t\t\tdbName = generateDbName(i);\n\t\t\t\tdbReadingsName = generateReadingsName(i, 0);\n\t\t\t\tsql_cmd += \" SELECT min(id) id FROM \" + dbName + \".\" + dbReadingsName + \" \";\n\t\t\t\tfirstRow = false;\n\t\t\t}\n\t\t}\n\t\tsql_cmd += \") AS tb\";\n\t}\n\n\tif (sqlite3_prepare_v2(dbHandle,sql_cmd.c_str(),-1, &stmt,NULL) != SQLITE_OK)\n\t{\n\t\traiseError(__FUNCTION__, sqlite3_errmsg(dbHandle));\n\t\treturn false;\n\t}\n\n\tif (SQLStep(stmt) != SQLITE_ROW)\n\t{\n\t\tid = 0;\n\t}\n\telse\n\t{\n\t\tnCols = sqlite3_column_count(stmt);\n\t\tid = sqlite3_column_int64(stmt, 0);\n\t}\n\n\tLogger::getLogger()->debug(\"%s - global id evaluated %lu\", __FUNCTION__, id);\n\n\tsqlite3_finalize(stmt);\n\n\treturn (id);\n}\n\n/**\n * Loads the reading catalogue stored in SQLite into an in memory structure\n *\n */\nbool  ReadingsCatalogue::loadAssetReadingCatalogue()\n{\n\tint nCols;\n\tint tableId, dbId, maxDbID;\n\tchar *asset_name;\n\tsqlite3_stmt *stmt;\n\tint rc;\n\tsqlite3\t\t*dbHandle;\n\n\tostringstream threadId;\n\tthreadId << 
std::this_thread::get_id();\n\n\tConnectionManager *manager = ConnectionManager::getInstance();\n\tConnection        *connection = manager->allocate();\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Load Asset Reading Catalogue\";\n\tconnection->setUsage(usage);\n#endif\n\tdbHandle = connection->getDbHandle();\n\n\t// loads readings catalog from the db\n\tconst char *sql_cmd = R\"(\n\t\tSELECT\n\t\t\ttable_id,\n\t\t\tdb_id,\n\t\t\tasset_code\n\t\tFROM  )\" READINGS_DB R\"(.asset_reading_catalogue\n\t\tORDER BY table_id;\n\t)\";\n\n\n\tmaxDbID = 1;\n\tif (sqlite3_prepare_v2(dbHandle,sql_cmd,-1, &stmt,NULL) != SQLITE_OK)\n\t{\n\t\traiseError(\"retrieve asset_reading_catalogue\", sqlite3_errmsg(dbHandle));\n\t\tmanager->release(connection);\n\t\treturn false;\n\t}\n\telse\n\t{\n\t\t// Iterate over all the rows in the resultSet\n\t\twhile ((rc = SQLStep(stmt)) == SQLITE_ROW)\n\t\t{\n\t\t\tnCols = sqlite3_column_count(stmt);\n\n\t\t\ttableId = sqlite3_column_int(stmt, 0);\n\t\t\tdbId = sqlite3_column_int(stmt, 1);\n\t\t\tasset_name = (char *)sqlite3_column_text(stmt, 2);\n\n\t\t\tif (dbId > maxDbID)\n\t\t\t\tmaxDbID = dbId;\n\n\t\t\tLogger::getLogger()->debug(\"loadAssetReadingCatalogue - thread '%s' reading Id %d dbId %d asset name '%s' max db Id %d\", threadId.str().c_str(), tableId, dbId,  asset_name, maxDbID);\n\n\t\t\tauto newMapValue = make_pair(asset_name,TableReference(dbId, tableId));\n\t\t\tm_AssetReadingCatalogue.insert(newMapValue);\n\t\t\tif (tableId == 0 && dbId > m_maxOverflowUsed)\t// Overflow\n\t\t\t{\n\t\t\t\tm_maxOverflowUsed = dbId;\n\t\t\t}\n\n\t\t}\n\n\t\tsqlite3_finalize(stmt);\n\t}\n\tmanager->release(connection);\n\tm_dbIdCurrent = maxDbID;\n\n\tLogger::getLogger()->debug(\"loadAssetReadingCatalogue maxdb %d\", m_dbIdCurrent);\n\n\treturn true;\n}\n\n/**\n * Add the newly created db to the list\n *\n */\nvoid ReadingsCatalogue::setUsedDbId(int dbId)\n{\n\tm_dbIdList.push_back(dbId);\n}\n\n/**\n * Preallocate all the required databases:\n *\n *  - 
Initial stage  - creates the databases requested by the preallocation\n *  - Following runs - attaches all the databases already created\n *\n */\nvoid ReadingsCatalogue::prepareAllDbs()\n{\n\n\tint dbId, dbIdStart, dbIdEnd;\n\n\tLogger::getLogger()->debug(\"prepareAllDbs - dbIdCurrent %d dbIdLast %d nDbPreallocate %d\", m_dbIdCurrent, m_dbIdLast, m_storageConfigCurrent.nDbPreallocate);\n\n\tif (m_dbIdLast == 0)\n\t{\n\t\tLogger::getLogger()->debug(\"prepareAllDbs - initial stage \");\n\n\t\t// Initial stage - creates the databases requested by the preallocation\n\t\tdbIdStart = 2;\n\t\tdbIdEnd = dbIdStart + m_storageConfigCurrent.nDbPreallocate - 2;\n\n\t\tint created = preallocateNewDbsRange(dbIdStart, dbIdEnd);\n\t\tif (created)\n\t\t{\n\t\t\tm_dbIdLast = dbIdStart + created - 1;\n\t\t}\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->debug(\"prepareAllDbs - following runs\");\n\n\t\t// Following runs - attaches all the databases\n\t\tfor (dbId = 2; dbId <= m_dbIdLast ; dbId++ )\n\t\t{\n\t\t\tm_dbIdList.push_back(dbId);\n\t\t}\n\t\tattachDbsToAllConnections();\n\t}\n\n\tm_dbNAvailable = (m_dbIdLast - m_dbIdCurrent) - m_storageConfigCurrent.nDbLeftFreeBeforeAllocate;\n\n\tLogger::getLogger()->debug(\"prepareAllDbs - dbNAvailable %d\", m_dbNAvailable);\n}\n\n/**\n * Create a set of databases\n *\n * @param    dbIdStart\tFirst database id in the range to create\n * @param    dbIdEnd    Last database id in the range to create\n * @return   int\tThe number of databases created\n *\n */\nint ReadingsCatalogue::preallocateNewDbsRange(int dbIdStart, int dbIdEnd) {\n\n\tint dbId;\n\tint startReadingsId;\n\ttyReadingsAvailable readingsAvailable;\n\tint created = 0;\n\n\tLogger::getLogger()->debug(\"preallocateNewDbsRange - Id start %d Id end %d \", dbIdStart, dbIdEnd);\n\n\tfor (dbId = dbIdStart; dbId <= dbIdEnd; dbId++)\n\t{\n\t\treadingsAvailable = evaluateLastReadingAvailable(NULL, dbId - 1);\n\t\tstartReadingsId = 1;\n\t\tif (!createNewDB(NULL,  dbId, startReadingsId, 
NEW_DB_ATTACH_ALL))\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Failed to preallocate all databases, terminated after creating %d databases\", created);\n\t\t\tbreak;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tcreated++;\n\t\t}\n\n\t\tLogger::getLogger()->debug(\"preallocateNewDbsRange - db created %d startReadingsIdOnDB %d\", dbId, startReadingsId);\n\t}\n\treturn created;\n}\n\n/**\n * Generates a list of all the used databases. Note this list does not include\n * the first database, readings_1, only the others that have been added.\n *\n * @param    dbIdList\treturned by reference, the list of databases in use\n *\n */\nvoid ReadingsCatalogue::getAllDbs(vector<int> &dbIdList)\n{\n\n\tint dbId;\n\n\tLogger::getLogger()->debug(\"getAllDbs - used db\");\n\n\tfor (auto &item : m_AssetReadingCatalogue) {\n\n\t\tdbId = item.second.getDatabase();\n\t\tif (dbId > 1)\n\t\t{\n\t\t\tif (std::find(dbIdList.begin(), dbIdList.end(), dbId) ==  dbIdList.end() )\n\t\t\t{\n\t\t\t\tdbIdList.push_back(dbId);\n\t\t\t\tLogger::getLogger()->debug(\"getAllDbs  DB %d\", dbId);\n\t\t\t}\n\n\t\t}\n\t}\n\n\tLogger::getLogger()->debug(\"getAllDbs - created db\");\n\n\tfor (auto &dbId : m_dbIdList) {\n\n\t\tif (std::find(dbIdList.begin(), dbIdList.end(), dbId) ==  dbIdList.end() )\n\t\t{\n\t\t\tdbIdList.push_back(dbId);\n\t\t\tLogger::getLogger()->debug(\"getAllDbs DB created %d\", dbId);\n\t\t}\n\t}\n\n\tsort(dbIdList.begin(), dbIdList.end());\n}\n\n/**\n * Retrieve the list of newly created databases\n *\n * @param    dbIdList\treturned by reference, the list of new databases\n *\n */\nvoid ReadingsCatalogue::getNewDbs(vector<int> &dbIdList) {\n\n\tint dbId;\n\n\tfor (auto &dbId : m_dbIdList) {\n\n\t\tif (std::find(dbIdList.begin(), dbIdList.end(), dbId) ==  dbIdList.end() )\n\t\t{\n\t\t\tdbIdList.push_back(dbId);\n\t\t\tLogger::getLogger()->debug(\"getNewDbs - dbId %d\", dbId);\n\t\t}\n\t}\n\n\tsort(dbIdList.begin(), dbIdList.end());\n}\n\n/**\n * Enable WAL mode on the given database file. 
This method will open and then\n * close the database and does not use any existing connection.\n *\n * @param    dbPathReadings\tDatabase path for which the WAL must be enabled\n *\n */\nbool ReadingsCatalogue::enableWAL(string &dbPathReadings) {\n\n\tint rc;\n\tsqlite3 *dbHandle;\n\n\tLogger::getLogger()->debug(\"enableWAL on '%s'\", dbPathReadings.c_str());\n\n\trc = sqlite3_open(dbPathReadings.c_str(), &dbHandle);\n\tif(rc != SQLITE_OK)\n\t{\n\t\traiseError(\"enableWAL\", sqlite3_errmsg(dbHandle));\n\t\treturn false;\n\t}\n\telse\n\t{\n\t\t// Enables the WAL feature\n\t\tConnectionManager *manager = ConnectionManager::getInstance();\n\t\trc = sqlite3_exec(dbHandle, manager->getDBConfiguration().c_str(), NULL, NULL, NULL);\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"enableWAL\", sqlite3_errmsg(dbHandle));\n\t\t\treturn false;\n\t\t}\n\t}\n\tsqlite3_close(dbHandle);\n\treturn true;\n}\n\n/**\n * Attach a database to the database connection passed to the call\n *\n * @param dbHandle Database connection to use for the operations\n * @param path     path of the database to attach\n * @param alias    alias to be assigned to the attached database\n * @param id\t   the database ID\n */\nbool ReadingsCatalogue::attachDb(sqlite3 *dbHandle, std::string &path, std::string &alias, int id)\n{\nint \trc;\nstring\tsqlCmd;\nbool\tresult = true;\nchar\t*zErrMsg = NULL;\n\n\tsqlCmd = \"ATTACH DATABASE '\" + path + \"' AS \" + alias + \";\";\n\n\tLogger::getLogger()->debug(\"attachDb  - path '%s' alias '%s' cmd '%s'\" , path.c_str(), alias.c_str() , sqlCmd.c_str() );\n\trc = SQLExec (dbHandle, sqlCmd.c_str(), &zErrMsg);\n\tif (rc != SQLITE_OK)\n\t{\n\t\tLogger::getLogger()->error(\"Failed to attach the db '%s' to the connection %X, error '%s'\", path.c_str(), dbHandle, zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\tresult = false;\n\t}\n\n\t// See if the overflow table exists and if not create it\n\t// This is a workaround as the schema update mechanism can't cope\n\t// with 
multiple readings tables\n\tcreateReadingsOverflowTable(dbHandle, id);\n\n\treturn result;\n}\n\n/**\n * Detach a database from a connection\n *\n * @param dbHandle Database connection to use for the operations\n * @param alias    alias of the database to detach\n */\nvoid ReadingsCatalogue::detachDb(sqlite3 *dbHandle, std::string &alias)\n{\n\tint rc;\n\tstd::string sqlCmd;\n\tchar *zErrMsg = nullptr;\n\n\tsqlCmd = \"DETACH DATABASE \" + alias + \";\";\n\n\tLogger::getLogger()->debug(\"%s - db '%s' cmd '%s'\" ,__FUNCTION__,  alias.c_str() , sqlCmd.c_str() );\n\trc = SQLExec (dbHandle, sqlCmd.c_str(), &zErrMsg);\n\tif (rc != SQLITE_OK)\n\t{\n\t\tLogger::getLogger()->error(\"%s - It was not possible to detach the db '%s' from the connection %X, error '%s'\", __FUNCTION__, alias.c_str(), dbHandle, zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t}\n}\n\n/**\n * Attaches the listed SQLite databases to the given connection and enables the WAL\n *\n * @param dbHandle Database connection to use for the operations\n * @param dbIdList List of databases to attach\n * @return         True on success, false on any error\n *\n */\nbool ReadingsCatalogue::connectionAttachDbList(sqlite3 *dbHandle, vector<int> &dbIdList)\n{\n\tint dbId;\n\tstring dbPathReadings;\n\tstring dbAlias;\n\tint item;\n\n\tbool result;\n\n\tresult = true;\n\n\tLogger::getLogger()->debug(\"connectionAttachDbList - start dbHandle %X\" ,dbHandle);\n\n\twhile (result && !dbIdList.empty())\n\t{\n\t\titem = dbIdList.back();\n\n\t\tdbPathReadings = generateDbFilePath(item);\n\t\tdbAlias = generateDbAlias(item);\n\n\t\tLogger::getLogger()->debug(\n\t\t\t\t\"connectionAttachDbList - dbHandle %X dbId %d path %s alias %s\",\n\t\t\t\tdbHandle, item, dbPathReadings.c_str(), dbAlias.c_str());\n\n\t\tresult = attachDb(dbHandle, dbPathReadings, dbAlias, item);\n\t\tdbIdList.pop_back();\n\n\t}\n\n\tLogger::getLogger()->debug(\"connectionAttachDbList - end dbHandle %X\" ,dbHandle);\n\n\treturn result;\n}\n\n\n/**\n * Attaches 
all the defined SQLite databases to the given connection\n *\n * @param dbHandle Database connection to use for the operations\n * @return         True on success, false on any error\n *\n */\nbool ReadingsCatalogue::connectionAttachAllDbs(sqlite3 *dbHandle)\n{\n\tstring dbPathReadings;\n\tstring dbAlias;\n\tvector<int> dbIdList;\n\tbool result;\n\n\tresult = true;\n\n\tgetAllDbs(dbIdList);\n\n\tfor (int item : dbIdList)\n\t{\n\t\tdbPathReadings = generateDbFilePath(item);\n\t\tdbAlias = generateDbAlias(item);\n\n\t\tresult = attachDb(dbHandle, dbPathReadings, dbAlias, item);\n\t\tif (! result)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Unable to attach all databases to the connection\");\n\t\t\tbreak;\n\t\t}\n\n\t\tLogger::getLogger()->debug(\"connectionAttachAllDbs - dbId %d path %s alias %s\", item, dbPathReadings.c_str(), dbAlias.c_str());\n\t}\n\treturn result;\n}\n\n\n/**\n * Attaches all the defined SQLite databases to all the connections and enables WAL\n *\n * @return         True on success, false on any error\n *\n */\nbool ReadingsCatalogue::attachDbsToAllConnections()\n{\n\tstring dbPathReadings;\n\tstring dbAlias;\n\tvector<int> dbIdList;\n\tbool result;\n\n\tresult = true;\n\n\tConnectionManager *manager = ConnectionManager::getInstance();\n\tConnection        *connection = manager->allocate();\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Attach DBs to all connections\";\n\tconnection->setUsage(usage);\n#endif\n\n\tgetAllDbs(dbIdList);\n\n\tfor (int item : dbIdList)\n\t{\n\t\tdbPathReadings = generateDbFilePath(item);\n\t\tdbAlias = generateDbAlias(item);\n\n\t\tenableWAL(dbPathReadings);\n\t\t// Attach the new db to all the connections\n\t\tresult = manager->attachNewDb(dbPathReadings, dbAlias);\n\t\tif (! 
result)\n\t\t\tbreak;\n\n\t\tLogger::getLogger()->debug(\"attachDbsToAllConnections - dbId %d path '%s' alias '%s'\", item, dbPathReadings.c_str(), dbAlias.c_str());\n\t}\n\n\tmanager->release(connection);\n\n\treturn (result);\n}\n\n/**\n * Setup the multiple readings databases/tables feature\n *\n * @param storageConfig Configuration to apply\n *\n */\nvoid ReadingsCatalogue::multipleReadingsInit(STORAGE_CONFIGURATION &storageConfig)\n{\n\tsqlite3 *dbHandle;\n\n\tConnectionManager *manager = ConnectionManager::getInstance();\n\tConnection *connection = manager->allocate();\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Multiple readings init\";\n\tconnection->setUsage(usage);\n#endif\n\tif (! connection->supportsReadings())\n\t{\n\t\tmanager->release(connection);\n\t\treturn;\n\t}\n\tdbHandle = connection->getDbHandle();\n\n\t// Enquire for the attached database limit\n\tm_attachLimit = sqlite3_limit(dbHandle, SQLITE_LIMIT_ATTACHED, -1);\n\tLogger::getLogger()->info(\"The version of SQLite can support %d attached databases\",\n\t\t\tm_attachLimit);\n\tm_compounds = sqlite3_limit(dbHandle, SQLITE_LIMIT_COMPOUND_SELECT, -1);\n\n\tif (storageConfig.nDbLeftFreeBeforeAllocate < 1)\n\t{\n\t\tLogger::getLogger()->warn(\"%s - parameter nDbLeftFreeBeforeAllocate not valid, use a value >= 1, 1 used \", __FUNCTION__);\n\t\tstorageConfig.nDbLeftFreeBeforeAllocate = 1;\n\t}\n\tif (storageConfig.nDbToAllocate < 1)\n\t{\n\t\tLogger::getLogger()->warn(\"%s - parameter nDbToAllocate not valid, use a value >= 1, 1 used \", __FUNCTION__);\n\t\tstorageConfig.nDbToAllocate = 1;\n\t}\n\n\tm_storageConfigApi.nReadingsPerDb = storageConfig.nReadingsPerDb;\n\tm_storageConfigApi.nDbPreallocate = storageConfig.nDbPreallocate;\n\tm_storageConfigApi.nDbLeftFreeBeforeAllocate = storageConfig.nDbLeftFreeBeforeAllocate;\n\tm_storageConfigApi.nDbToAllocate = storageConfig.nDbToAllocate;\n\n\tm_storageConfigCurrent.nDbLeftFreeBeforeAllocate = 
storageConfig.nDbLeftFreeBeforeAllocate;\n\tm_storageConfigCurrent.nDbToAllocate = storageConfig.nDbToAllocate;\n\n\ttry\n\t{\n\t\tconfigurationRetrieve(dbHandle);\n\n\t\tloadAssetReadingCatalogue();\n\t\tpreallocateReadingsTables(1);   // on the first database\n\n\t\tLogger::getLogger()->debug(\"nReadingsPerDb %d\", m_storageConfigCurrent.nReadingsPerDb);\n\t\tLogger::getLogger()->debug(\"nDbPreallocate %d\", m_storageConfigCurrent.nDbPreallocate);\n\n\t\tprepareAllDbs();\n\n\t\tapplyStorageConfigChanges(dbHandle);\n\n\t\tLogger::getLogger()->debug(\"multipleReadingsInit - dbIdCurrent %d dbIdLast %d nDbPreallocate current %d requested %d\",\n\t\t\t\t\t\t\t\t   m_dbIdCurrent,\n\t\t\t\t\t\t\t\t   m_dbIdLast,\n\t\t\t\t\t\t\t\t   m_storageConfigCurrent.nDbPreallocate,\n\t\t\t\t\t\t\t\t   m_storageConfigApi.nDbPreallocate);\n\n\t\tstoreReadingsConfiguration(dbHandle);\n\n\n\t\tpreallocateReadingsTables(0);   // on the last database\n\n\t\tevaluateGlobalId();\n\t}\n\tcatch (exception& e)\n\t{\n\t\tLogger::getLogger()->error(\"It is not possible to initialize the multiple readings handling, error '%s' \", e.what());\n\t}\n\n\tmanager->release(connection);\n}\n\n\n/**\n * Store on the database the configuration of the storage plugin\n *\n * @param dbHandle Database connection to use for the operations\n *\n */\nvoid ReadingsCatalogue::storeReadingsConfiguration (sqlite3 *dbHandle)\n{\n\tstring errMsg;\n\tstring sql_cmd;\n\n\tLogger::getLogger()->debug(\"storeReadingsConfiguration - nReadingsPerDb %d nDbPreallocate %d\", m_storageConfigCurrent.nReadingsPerDb , m_storageConfigCurrent.nDbPreallocate);\n\n\tsql_cmd = \" UPDATE \" READINGS_DB \".configuration_readings SET n_readings_per_db=\" + to_string(m_storageConfigCurrent.nReadingsPerDb) + \",\" +\n\t\t\t  \"n_db_preallocate=\"  + to_string(m_storageConfigCurrent.nDbPreallocate)  + \",\" +\n\t\t\t  \"db_id_Last=\"        + to_string(m_dbIdLast)  + \";\";\n\n\tLogger::getLogger()->debug(\"sql_cmd '%s'\", 
sql_cmd.c_str());\n\n\tif (SQLExec(dbHandle, sql_cmd.c_str()) != SQLITE_OK)\n\t{\n\t\terrMsg = \"It was not possible to store the configuration of the multiple readings handling, error: \";\n\t\terrMsg += sqlite3_errmsg(dbHandle);\n\t\traiseError(\"storeReadingsConfiguration\", errMsg.c_str());\n\t\tthrow runtime_error(errMsg.c_str());\n\t}\n}\n\n/**\n * Adds all the required DBs in relation to the storage plugin configuration\n *\n * @param dbHandle Database connection to use for the operations\n *\n */\nvoid ReadingsCatalogue::configChangeAddDb(sqlite3 *dbHandle)\n{\n\tstring errMsg;\n\tint dbId;\n\tint startReadingsId;\n\tint startId, endId;\n\ttyReadingsAvailable readingsAvailable;\n\n\tstartId = m_dbIdLast + 1;\n\tendId = m_storageConfigApi.nDbPreallocate;\n\n\tLogger::getLogger()->debug(\"configChangeAddDb - dbIdCurrent %d dbIdLast %d nDbPreallocate current %d requested %d\",\n\t\t\t\t\t\t\t   m_dbIdCurrent,\n\t\t\t\t\t\t\t   m_dbIdLast,\n\t\t\t\t\t\t\t   m_storageConfigCurrent.nDbPreallocate,\n\t\t\t\t\t\t\t   m_storageConfigApi.nDbPreallocate);\n\n\tLogger::getLogger()->debug(\"configChangeAddDb - Id start %d Id end %d\", startId, endId);\n\tint created = 0;\n\n\ttry\n\t{\n\t\tfor (dbId = startId; dbId <= endId; dbId++)\n\t\t{\n\t\t\treadingsAvailable = evaluateLastReadingAvailable(dbHandle, dbId - 1);\n\t\t\tif (readingsAvailable.lastReadings == 0)\n\t\t\t{\n\t\t\t\terrMsg = \"Unable to determine the last used readings table while adding a database\";\n\t\t\t\tthrow runtime_error(errMsg.c_str());\n\t\t\t}\n\n\t\t\tstartReadingsId = readingsAvailable.lastReadings + 1;\n\t\t\tif (! 
createNewDB(dbHandle, dbId, startReadingsId, NEW_DB_ATTACH_ALL))\n\t\t\t{\n\t\t\t\terrMsg = \"Unable to add a new database\";\n\t\t\t\tthrow runtime_error(errMsg.c_str());\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tcreated++;\n\t\t\t}\n\t\t\tLogger::getLogger()->debug(\"configChangeAddDb - db created %d startReadingsIdOnDB %d\", dbId, startReadingsId);\n\t\t}\n\t}\n\tcatch (exception& e)\n\t{\n\t\tLogger::getLogger()->error(\"It is not possible to add the requested databases, error '%s' - removing created databases\", e.what());\n\t\tdbsRemove(startId, endId);\n\t}\n\n\tm_dbIdLast = startId + created - 1;\n\tm_storageConfigCurrent.nDbPreallocate = m_storageConfigApi.nDbPreallocate;\n\tm_dbNAvailable = (m_dbIdLast - m_dbIdCurrent) - m_storageConfigCurrent.nDbLeftFreeBeforeAllocate;\n}\n\n/**\n * Removes all the required DBs in relation to the storage plugin configuration\n *\n * @param dbHandle Database connection to use for the operations\n *\n */\nvoid ReadingsCatalogue::configChangeRemoveDb(sqlite3 *dbHandle)\n{\n\tLogger::getLogger()->debug(\"configChangeRemoveDb - dbIdCurrent %d dbIdLast %d nDbPreallocate current %d requested %d\",\n\t\t\t\t\t\t\t   m_dbIdCurrent,\n\t\t\t\t\t\t\t   m_dbIdLast,\n\t\t\t\t\t\t\t   m_storageConfigCurrent.nDbPreallocate,\n\t\t\t\t\t\t\t   m_storageConfigApi.nDbPreallocate);\n\n\tLogger::getLogger()->debug(\"configChangeRemoveDb - Id start %d Id end %d\", m_dbIdCurrent, m_storageConfigApi.nDbPreallocate);\n\n\tdbsRemove(m_storageConfigApi.nDbPreallocate + 1, m_dbIdLast);\n\n\tm_dbIdLast = m_storageConfigApi.nDbPreallocate;\n\tm_storageConfigCurrent.nDbPreallocate = m_storageConfigApi.nDbPreallocate;\n\tm_dbNAvailable = (m_dbIdLast - m_dbIdCurrent) - m_storageConfigCurrent.nDbLeftFreeBeforeAllocate;\n}\n\n\n/**\n * Adds all the 
required readings tables in relation to the storage plugin configuration\n *\n * @param dbHandle - handle of the connection to use for the database operation\n * @param startId  - start of the range of readings tables to create\n * @param endId    - end of the range of readings tables to create\n *\n */\nvoid ReadingsCatalogue::configChangeAddTables(sqlite3 *dbHandle, int startId, int endId)\n{\n\tint dbId;\n\tint maxReadingUsed;\n\tint nTables;\n\n\tnTables = endId - startId + 1;\n\n\tLogger::getLogger()->debug(\"%s - startId %d endId %d nTables %d\",\n\t\t\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t\t\t   startId,\n\t\t\t\t\t\t\t   endId,\n\t\t\t\t\t\t\t   nTables);\n\n\tfor (dbId = 1; dbId <= m_dbIdLast; dbId++)\n\t{\n\t\tLogger::getLogger()->debug(\"%s - configChangeAddTables - dbId %d startId %d nTables %d\",\n\t\t\t\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t\t\t\t   dbId,\n\t\t\t\t\t\t\t\t   startId,\n\t\t\t\t\t\t\t\t   nTables);\n\t\tcreateReadingsTables(dbHandle, dbId, startId, nTables);\n\t}\n\n\tm_storageConfigCurrent.nReadingsPerDb = m_storageConfigApi.nReadingsPerDb;\n\tmaxReadingUsed = calcMaxReadingUsed();\n\tm_nReadingsAvailable = m_storageConfigCurrent.nReadingsPerDb - maxReadingUsed;\n\n\tLogger::getLogger()->debug(\"%s - maxReadingUsed %d nReadingsPerDb %d m_nReadingsAvailable %d\",\n\t\t\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t\t\t   maxReadingUsed,\n\t\t\t\t\t\t\t   m_storageConfigCurrent.nReadingsPerDb,\n\t\t\t\t\t\t\t   m_nReadingsAvailable);\n}\n\n/**\n * Deletes all the required readings tables in relation to the storage plugin configuration\n *\n * @param dbHandle - handle of the connection to use for the database operation\n * @param startId  - start of the range of readings tables to delete\n * @param endId    - end of the range of readings tables to delete\n *\n */\nvoid ReadingsCatalogue::configChangeRemoveTables(sqlite3 *dbHandle, int startId, int endId)\n{\n\tint dbId;\n\tint maxReadingUsed;\n\n\tLogger::getLogger()->debug(\"%s - startId %d endId %d\",\n\t\t\t\t\t\t\t   
__FUNCTION__,\n\t\t\t\t\t\t\t   startId,\n\t\t\t\t\t\t\t   endId);\n\n\tfor (dbId = 1; dbId <= m_dbIdLast; dbId++)\n\t{\n\t\tLogger::getLogger()->debug(\"%s - configChangeRemoveTables - dbId %d startId %d endId %d\",\n\t\t\t\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t\t\t\t   dbId,\n\t\t\t\t\t\t\t\t   startId,\n\t\t\t\t\t\t\t\t   endId);\n\t\tdropReadingsTables(dbHandle, dbId, startId, endId);\n\t}\n\n\tm_storageConfigCurrent.nReadingsPerDb = m_storageConfigApi.nReadingsPerDb;\n\tmaxReadingUsed = calcMaxReadingUsed();\n\tm_nReadingsAvailable = m_storageConfigCurrent.nReadingsPerDb - maxReadingUsed;\n\n\tLogger::getLogger()->debug(\"%s - maxReadingUsed %d nReadingsPerDb %d m_nReadingsAvailable %d\",\n\t\t\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t\t\t   maxReadingUsed,\n\t\t\t\t\t\t\t   m_storageConfigCurrent.nReadingsPerDb,\n\t\t\t\t\t\t\t   m_nReadingsAvailable);\n}\n\n/**\n * Drops a set of readings tables\n *\n * @param dbHandle - handle of the connection to use for the database operation\n * @param dbId     - database id on which the tables should be dropped\n * @param idStart  - start of the range of readings tables to drop\n * @param idEnd    - end of the range of readings tables to drop\n *\n */\nvoid ReadingsCatalogue::dropReadingsTables(sqlite3 *dbHandle, int dbId, int idStart, int idEnd)\n{\n\tstring errMsg;\n\tstring dropReadings, dropIdx;\n\tstring dbName;\n\tstring tableName;\n\tint rc;\n\tint idx;\n\n\tLogger::getLogger()->debug(\"%s - dropping tables on database id %d from id %d to %d\", __FUNCTION__, dbId, idStart, idEnd);\n\n\tdbName = generateDbName(dbId);\n\n\tfor (idx = idStart; idx <= idEnd; ++idx)\n\t{\n\t\ttableName = generateReadingsName(dbId, idx);\n\n\t\tdropReadings = \"DROP TABLE \" + dbName + \".\" + tableName + \";\";\n\t\tdropIdx      = \"DROP INDEX \" + tableName + \"_ix3;\";\n\n\t\trc = SQLExec(dbHandle, dropIdx.c_str());\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\terrMsg = 
sqlite3_errmsg(dbHandle);\n\t\t\traiseError(__FUNCTION__, errMsg.c_str());\n\t\t\tthrow runtime_error(errMsg.c_str());\n\t\t}\n\n\t\trc = SQLExec(dbHandle, dropReadings.c_str());\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\terrMsg = sqlite3_errmsg(dbHandle);\n\t\t\traiseError(__FUNCTION__, errMsg.c_str());\n\t\t\tthrow runtime_error(errMsg.c_str());\n\t\t}\n\t}\n}\n\n/**\n * Deletes a range of databases, detaching each one and deleting its file\n *\n * @param startId  - start of the range of databases to delete\n * @param endId    - end of the range of databases to delete\n *\n */\nvoid ReadingsCatalogue::dbsRemove(int startId, int endId)\n{\n\tint dbId;\n\tstring dbAlias;\n\tstring dbPath;\n\n\tConnectionManager *manager = ConnectionManager::getInstance();\n\n\tLogger::getLogger()->debug(\"dbsRemove - startId %d endId %d\", startId, endId);\n\n\tfor (dbId = startId; dbId <= endId; dbId++)\n\t{\n\t\tdbAlias = generateDbAlias(dbId);\n\t\tdbPath  = generateDbFilePath(dbId);\n\n\t\tLogger::getLogger()->debug(\"dbsRemove - db alias '%s' db path '%s'\", dbAlias.c_str(), dbPath.c_str());\n\n\t\tmanager->detachNewDb(dbAlias);\n\t\tdbFileDelete(dbPath);\n\t}\n}\n\n/**\n * Delete a file\n *\n * @param dbPath  - Full path of the file to delete\n *\n */\nvoid ReadingsCatalogue::dbFileDelete(string dbPath)\n{\n\tstring errMsg;\n\n\tLogger::getLogger()->debug(\"dbFileDelete - db path '%s'\", dbPath.c_str());\n\n\tif (remove(dbPath.c_str()) != 0)\n\t{\n\t\terrMsg = \"Unable to remove database :\" + dbPath + \":\";\n\t\tthrow runtime_error(errMsg.c_str());\n\t}\n}\n\n/**\n * Evaluates and applies the storage plugin configuration\n *\n * @param dbHandle handle of the connection to use for the database operations\n * @return         True if any configuration change was applied\n *\n */\nbool ReadingsCatalogue::applyStorageConfigChanges(sqlite3 *dbHandle)\n{\n\tbool configChanged;\n\tACTION operation;\n\tint maxReadingUsed;\n\n\tconfigChanged = 
false;\n\n\tLogger::getLogger()->debug(\"applyStorageConfigChanges - dbIdCurrent %d dbIdLast %d nDbPreallocate current %d requested %d nDbLeftFreeBeforeAllocate %d\",\n\t\t\t\t\t\t\t   m_dbIdCurrent,\n\t\t\t\t\t\t\t   m_dbIdLast,\n\t\t\t\t\t\t\t   m_storageConfigCurrent.nDbPreallocate,\n\t\t\t\t\t\t\t   m_storageConfigApi.nDbPreallocate,\n\t\t\t\t\t\t\t   m_storageConfigCurrent.nDbLeftFreeBeforeAllocate);\n\ttry{\n\n\t\tif (m_storageConfigApi.nDbPreallocate <= 2)\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"applyStorageConfigChanges: parameter nDbPreallocate changed, but it is not possible to apply the change, use a larger value >= 3\");\n\t\t} else {\n\n\t\t\toperation = changesLogicDBs(m_dbIdCurrent,\n\t\t\t\t\t\t\t\t\t\tm_dbIdLast,\n\t\t\t\t\t\t\t\t\t\tm_storageConfigCurrent.nDbPreallocate,\n\t\t\t\t\t\t\t\t\t\tm_storageConfigApi.nDbPreallocate,\n\t\t\t\t\t\t\t\t\t\tm_storageConfigCurrent.nDbLeftFreeBeforeAllocate);\n\n\t\t\t// Database operation\n\t\t\t{\n\t\t\t\tif (operation == ACTION_DB_ADD)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->debug(\"applyStorageConfigChanges - parameters nDbPreallocate changed, adding more databases from %d to %d\", m_dbIdLast, m_storageConfigApi.nDbPreallocate);\n\t\t\t\t\tconfigChanged = true;\n\t\t\t\t\tconfigChangeAddDb(dbHandle);\n\n\t\t\t\t} else if (operation == ACTION_INVALID)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"applyStorageConfigChanges: parameter nDbPreallocate changed, but it is not possible to apply the change as there are already data stored in the database id %d, use a larger value\", m_dbIdCurrent);\n\n\t\t\t\t} else if (operation == ACTION_DB_REMOVE)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->debug(\"applyStorageConfigChanges - parameters nDbPreallocate changed, removing databases from %d to %d\", m_storageConfigApi.nDbPreallocate, m_dbIdLast);\n\t\t\t\t\tconfigChanged = true;\n\t\t\t\t\tconfigChangeRemoveDb(dbHandle);\n\t\t\t\t} 
else\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->debug(\"applyStorageConfigChanges - no changes\");\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (m_storageConfigApi.nReadingsPerDb <= 2)\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"applyStorageConfigChanges: parameter nReadingsPerDb changed, but it is not possible to apply the change, use a larger value >= 3\");\n\t\t} else {\n\n\t\t\tmaxReadingUsed = calcMaxReadingUsed();\n\t\t\toperation = changesLogicTables(maxReadingUsed,\n\t\t\t\t\t\t\t\t\t\t   m_storageConfigCurrent.nReadingsPerDb,\n\t\t\t\t\t\t\t\t\t\t   m_storageConfigApi.nReadingsPerDb);\n\n\t\t\tLogger::getLogger()->debug(\"%s - maxReadingUsed %d Current %d Requested %d\",\n\t\t\t\t\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t\t\t\t\t   maxReadingUsed,\n\t\t\t\t\t\t\t\t\t   m_storageConfigCurrent.nReadingsPerDb,\n\t\t\t\t\t\t\t\t\t   m_storageConfigApi.nReadingsPerDb);\n\n\t\t\t// Table operation\n\t\t\t{\n\t\t\t\tif (operation == ACTION_TB_ADD)\n\t\t\t\t{\n\t\t\t\t\tint startId, endId;\n\n\t\t\t\t\tstartId = m_storageConfigCurrent.nReadingsPerDb + 1;\n\t\t\t\t\tendId = m_storageConfigApi.nReadingsPerDb;\n\n\t\t\t\t\tLogger::getLogger()->debug(\"applyStorageConfigChanges - parameter nReadingsPerDb changed, adding more tables from %d to %d\", startId, endId);\n\t\t\t\t\tconfigChanged = true;\n\t\t\t\t\tconfigChangeAddTables(dbHandle, startId, endId);\n\n\t\t\t\t} else if (operation == ACTION_INVALID)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"applyStorageConfigChanges: parameter nReadingsPerDb changed, but it is not possible to apply the change as there are already data stored in the table id %d, use a larger value\", maxReadingUsed);\n\n\t\t\t\t} else if (operation == ACTION_TB_REMOVE)\n\t\t\t\t{\n\t\t\t\t\tint startId, endId;\n\n\t\t\t\t\tstartId = m_storageConfigApi.nReadingsPerDb + 1;\n\t\t\t\t\tendId = m_storageConfigCurrent.nReadingsPerDb;\n\n\t\t\t\t\tLogger::getLogger()->debug(\"applyStorageConfigChanges - parameter nReadingsPerDb changed, removing tables 
from %d to %d\", startId, endId);\n\t\t\t\t\tconfigChanged = true;\n\t\t\t\t\tconfigChangeRemoveTables(dbHandle, startId, endId);\n\t\t\t\t} else\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->debug(\"applyStorageConfigChanges - no changes\");\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (!configChanged)\n\t\t\tLogger::getLogger()->debug(\"applyStorageConfigChanges - storage parameters not changed\");\n\n\t}\n\tcatch (exception& e)\n\t{\n\t\tLogger::getLogger()->error(\"It is not possible to apply the changes to the multiple readings handling, error '%s' \", e.what());\n\t}\n\n\treturn configChanged;\n}\n\n/**\n * Calculates the maximum readings id used\n *\n * @return maximum readings id used\n *\n */\nint ReadingsCatalogue::calcMaxReadingUsed()\n{\n\tint maxReading = 0;\n\n\tfor (auto &item : m_AssetReadingCatalogue)\n\t{\n\t\tif (item.second.getTable() > maxReading)\n\t\t\tmaxReading = item.second.getTable();\n\t}\n\n\treturn (maxReading);\n}\n\n/**\n * Evaluates the operations to be executed in relation to the input parameters on the readings tables\n *\n * @param maxUsed Maximum table id used\n * @param Current Currently configured number of readings tables per database\n * @param Request Requested number of readings tables per database\n *\n * @return        Operation to execute : ACTION_TB_NONE / ACTION_TB_ADD / ACTION_TB_REMOVE / ACTION_INVALID\n *\n */\nReadingsCatalogue::ACTION ReadingsCatalogue::changesLogicTables(int maxUsed, int Current, int Request)\n{\n\tACTION operation;\n\n\tLogger::getLogger()->debug(\"%s - maxUsed %d Current %d Request %d\",\n\t\t\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t\t\t   maxUsed,\n\t\t\t\t\t\t\t   Current,\n\t\t\t\t\t\t\t   Request);\n\n\toperation = ACTION_TB_NONE;\n\n\tif (Current != Request)\n\t{\n\t\tif (Request > Current)\n\t\t{\n\t\t\toperation = ACTION_TB_ADD;\n\t\t}\n\t\telse if ((Request < Current) && (maxUsed >= Request))\n\t\t{\n\t\t\toperation = ACTION_INVALID;\n\t\t} else if ((Request < Current) && (maxUsed 
< Request))\n\t\t{\n\t\t\toperation = ACTION_TB_REMOVE;\n\t\t}\n\t}\n\treturn operation;\n}\n\n/**\n * Evaluates the operations to be executed in relation to the input parameters on the databases\n *\n * @param dbIdCurrent               - Current database id in use\n * @param dbIdLast                  - Latest database id created, but not necessarily in use\n * @param nDbPreallocateCurrent     - Currently configured number of preallocated databases\n * @param nDbPreallocateRequest     - Requested number of preallocated databases\n * @param nDbLeftFreeBeforeAllocate - Number of databases to keep free\n *\n * @return - Operation to execute : ACTION_DB_NONE / ACTION_DB_ADD / ACTION_DB_REMOVE / ACTION_INVALID\n *\n */\nReadingsCatalogue::ACTION ReadingsCatalogue::changesLogicDBs(int dbIdCurrent, int dbIdLast, int nDbPreallocateCurrent, int nDbPreallocateRequest, int nDbLeftFreeBeforeAllocate)\n{\n\tACTION operation;\n\n\toperation = ACTION_DB_NONE;\n\n\tif (nDbPreallocateCurrent != nDbPreallocateRequest)\n\t{\n\t\tif (nDbPreallocateRequest > dbIdLast)\n\t\t{\n\t\t\toperation = ACTION_DB_ADD;\n\t\t} else if (nDbPreallocateRequest < (dbIdCurrent + nDbLeftFreeBeforeAllocate))\n\t\t{\n\t\t\toperation = ACTION_INVALID;\n\t\t} else if ((nDbPreallocateRequest >= (dbIdCurrent + nDbLeftFreeBeforeAllocate)) && (nDbPreallocateRequest < dbIdLast))\n\t\t{\n\t\t\toperation = ACTION_DB_REMOVE;\n\t\t}\n\t}\n\treturn operation;\n}\n\n\n/**\n * Creates all the required readings tables considering the tables already defined in the database\n * and the number of tables to have on each database.\n *\n * @param dbId Database Id in which the tables must be created\n *\n */\nvoid ReadingsCatalogue::preallocateReadingsTables(int dbId)\n{\n\tint readingsToAllocate;\n\tint readingsToCreate;\n\tint startId;\n\n\tif (dbId == 0)\n\t\tdbId = m_dbIdCurrent;\n\n\ttyReadingsAvailable readingsAvailable;\n\n\treadingsAvailable.lastReadings = 0;\n\treadingsAvailable.tableCount = 0;\n\n\t// Identifies last readings 
available\n\treadingsAvailable = evaluateLastReadingAvailable(NULL, dbId);\n\treadingsToAllocate = getNReadingsAllocate();\n\n\tif (readingsAvailable.tableCount < readingsToAllocate)\n\t{\n\t\treadingsToCreate = readingsToAllocate - readingsAvailable.tableCount;\n\t\tif (dbId == 1)\n\t\t\tstartId = 2;\n\t\telse\n\t\t\tstartId = 1;\n\t\tcreateReadingsTables(NULL, dbId, startId, readingsToCreate);\n\t}\n\n\tm_nReadingsAvailable = readingsToAllocate - getUsedTablesDbId(dbId);\n\n\tLogger::getLogger()->debug(\"preallocateReadingsTables - dbId %d nReadingsAvailable %d lastReadingsCreated %d tableCount %d\", m_dbIdCurrent, m_nReadingsAvailable, readingsAvailable.lastReadings, readingsAvailable.tableCount);\n}\n\n/**\n * Generates the full path of the SQLite database from the given id\n *\n * @param dbId Database Id for which the full path must be generated\n * @return     The generated full path\n *\n */\nstring ReadingsCatalogue::generateDbFilePath(int dbId)\n{\n\tstring dbPathReadings;\n\n\tchar *defaultReadingsConnection;\n\tchar defaultReadingsConnectionTmp[1000];\n\n\tdefaultReadingsConnection = getenv(\"DEFAULT_SQLITE_DB_READINGS_FILE\");\n\n\tif (defaultReadingsConnection == NULL)\n\t{\n\t\tdbPathReadings = getDataDir();\n\t}\n\telse\n\t{\n\t\t// dirname modifies the content of its parameter, so copy it first;\n\t\t// strncpy does not guarantee null termination, add it explicitly\n\t\tstrncpy(defaultReadingsConnectionTmp, defaultReadingsConnection, sizeof(defaultReadingsConnectionTmp) - 1);\n\t\tdefaultReadingsConnectionTmp[sizeof(defaultReadingsConnectionTmp) - 1] = '\0';\n\t\tdbPathReadings = dirname(defaultReadingsConnectionTmp);\n\t}\n\n\tif (dbPathReadings.back() != '/')\n\t\tdbPathReadings += \"/\";\n\n\tdbPathReadings += generateDbFileName(dbId);\n\n\treturn (dbPathReadings);\n}\n\n/**\n * Stores on the persistent storage the id of the last created database\n *\n * @param dbHandle Database connection to use for the operations\n * @param newDbId  Id of the created database\n * @return         True on success, false on any error\n *\n */\nbool ReadingsCatalogue::latestDbUpdate(sqlite3 *dbHandle, int newDbId)\n{\n\tstring 
sql_cmd;\n\n\tLogger::getLogger()->debug(\"latestDbUpdate - dbHandle %X newDbId %d\", dbHandle, newDbId);\n\n\t{\n\t\tsql_cmd = \" UPDATE \" READINGS_DB \".configuration_readings SET db_id_Last=\" + to_string(newDbId) + \";\";\n\n\t\tif (SQLExec(dbHandle, sql_cmd.c_str()) != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"latestDbUpdate\", sqlite3_errmsg(dbHandle));\n\t\t\treturn false;\n\t\t}\n\t}\n\treturn true;\n}\n\n\n/**\n * Creates a new database\n *\n * @param dbHandle     Database connection to use for the operations\n * @param newDbId      Id of the database to create\n * @param startId      Starting id for the creation of the readings tables\n * @param attachAllDb  Type of attach operation to apply to the newly created database\n * @return             True on success, false on any error\n *\n */\nbool ReadingsCatalogue::createNewDB(sqlite3 *dbHandle, int newDbId, int startId, NEW_DB_OPERATION attachAllDb)\n{\n\tint readingsToAllocate;\n\tint readingsToCreate;\n\n\tstring dbPathReadings;\n\tstring dbAlias;\n\n\tstruct stat st;\n\tbool dbAlreadyPresent = false;\n\tbool result;\n\tbool connAllocated;\n\tConnection *connection;\n\n\tconnAllocated = false;\n\tresult = true;\n\n\tConnectionManager *manager = ConnectionManager::getInstance();\n\n\t// Are there enough descriptors available to create another database\n\tif (!manager->allowMoreDatabases())\n\t{\n\t\treturn false;\n\t}\n\n\tif (dbHandle == NULL)\n\t{\n\t\tconnection = manager->allocate();\n#if TRACK_CONNECTION_USER\n\t\tstring usage = \"Create New database\";\n\t\tconnection->setUsage(usage);\n#endif\n\t\tdbHandle = connection->getDbHandle();\n\t\tconnAllocated = true;\n\t}\n\n\t// Creates the DB data file\n\tdbPathReadings = generateDbFilePath(newDbId);\n\n\tdbAlreadyPresent = false;\n\tif (stat(dbPathReadings.c_str(), &st) == 0)\n\t{\n\t\tLogger::getLogger()->info(\"createNewDB - database file '%s' already present, creation skipped\", dbPathReadings.c_str() 
);\n\t\tdbAlreadyPresent = true;\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->debug(\"createNewDB - new database created '%s'\", dbPathReadings.c_str());\n\t}\n\tenableWAL(dbPathReadings);\n\n\tlatestDbUpdate(dbHandle, newDbId);\n\n\treadingsToAllocate = getNReadingsAllocate();\n\treadingsToCreate = readingsToAllocate;\n\n\t// Attach the new db to the connections\n\tdbAlias = generateDbAlias(newDbId);\n\n\tif (attachAllDb == NEW_DB_ATTACH_ALL)\n\t{\n\t\tLogger::getLogger()->debug(\"createNewDB - attach all the databases\");\n\t\tresult = manager->attachNewDb(dbPathReadings, dbAlias);\n\n\t} else if (attachAllDb == NEW_DB_ATTACH_REQUEST)\n\t{\n\t\tLogger::getLogger()->debug(\"createNewDB - attach single\");\n\n\t\tresult = attachDb(dbHandle, dbPathReadings, dbAlias, newDbId);\n\t\tresult = manager->attachRequestNewDb(newDbId, dbHandle);\n\n\t} else if (attachAllDb == NEW_DB_DETACH)\n\t{\n\t\tLogger::getLogger()->debug(\"createNewDB - attach\");\n\t\tresult = attachDb(dbHandle, dbPathReadings, dbAlias, newDbId);\n\t}\n\n\tif (result)\n\t{\n\t\tsetUsedDbId(newDbId);\n\n\t\tif (dbAlreadyPresent)\n\t\t{\n\t\t\ttyReadingsAvailable readingsAvailable = evaluateLastReadingAvailable(dbHandle, newDbId);\n\n\t\t\tif (readingsAvailable.lastReadings == -1)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"createNewDB - database file '%s' is already present but it is not possible to evaluate the readings tables already present\", dbPathReadings.c_str());\n\t\t\t\tresult = false;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\treadingsToCreate = readingsToAllocate - readingsAvailable.tableCount;\n\t\t\t\tstartId = readingsAvailable.lastReadings + 1;\n\t\t\t\tLogger::getLogger()->info(\"createNewDB - database file '%s' is already present, creating readings tables - from id %d n %d\", dbPathReadings.c_str(), startId, readingsToCreate);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// A fresh database: tables are numbered from 1, do not overwrite\n\t\t\t// the startId computed above for an already existing database\n\t\t\tstartId = 1;\n\t\t}\n\n\t\tif (readingsToCreate > 0)\n\t\t{\n\t\t\tcreateReadingsTables(dbHandle, newDbId, startId, 
readingsToCreate);\n\n\t\t\tLogger::getLogger()->info(\"createNewDB - database file '%s' created readings tables - from id %d n %d\", dbPathReadings.c_str(), startId, readingsToCreate);\n\t\t}\n\t\tm_nReadingsAvailable = readingsToAllocate;\n\t}\n\n\t// Create the overflow table in the new database if it was not previously created\n\tcreateReadingsOverflowTable(dbHandle, newDbId);\n\n\tif (attachAllDb == NEW_DB_DETACH)\n\t{\n\t\tLogger::getLogger()->debug(\"createNewDB - detach\");\n\t\tdetachDb(dbHandle, dbAlias);\n\t}\n\n\tif (connAllocated)\n\t{\n\t\tmanager->release(connection);\n\t}\n\n\treturn (result);\n}\n\n/**\n * Creates a set of readings tables in the given database id\n *\n * @param dbHandle    Database connection to use for the operations\n * @param dbId        Database id on which the tables should be created\n * @param idStartFrom Id from which to start creating the tables\n * @param nTables     Number of tables to create\n *\n */\nbool ReadingsCatalogue::createReadingsTables(sqlite3 *dbHandle, int dbId, int idStartFrom, int nTables)\n{\n\tstring createReadings, createReadingsIdx;\n\tstring dbName;\n\tstring dbReadingsName;\n\tint tableId;\n\tint rc;\n\tint readingsIdx;\n\tbool newConnection;\n\tConnection *connection;\n\n\tLogger *logger = Logger::getLogger();\n\tnewConnection = false;\n\n\tConnectionManager *manager = ConnectionManager::getInstance();\n\n\tif (dbHandle == NULL)\n\t{\n\t\tconnection = manager->allocate();\n#if TRACK_CONNECTION_USER\n\t\tstring usage = \"Create Readings Tables\";\n\t\tconnection->setUsage(usage);\n#endif\n\t\tdbHandle = connection->getDbHandle();\n\t\tnewConnection = true;\n\t}\n\n\tlogger->info(\"Creating %d readings tables in advance starting at id %d\", nTables, idStartFrom);\n\n\tdbName = generateDbName(dbId);\n\n\tfor (readingsIdx = 0; readingsIdx < nTables; ++readingsIdx)\n\t{\n\t\ttableId = idStartFrom + readingsIdx;\n\t\tdbReadingsName = generateReadingsName(dbId, tableId);\n\n\t\tcreateReadings = 
R\"(\n\t\t\tCREATE TABLE IF NOT EXISTS )\" + dbName + \".\" + dbReadingsName + R\"( (\n\t\t\t\tid         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n\t\t\t\treading    JSON                        NOT NULL DEFAULT '{}',\n\t\t\t\tuser_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),\n\t\t\t\tts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))\n\t\t\t);\n\t\t)\";\n\n\t\tcreateReadingsIdx = R\"(\n\t\t\tCREATE INDEX IF NOT EXISTS )\" + dbName + \".\" + dbReadingsName + R\"(_ix3 ON )\" + dbReadingsName + R\"( (user_ts);\n\t\t)\";\n\n\t\tlogger->info(\" Creating table '%s' sql cmd '%s'\", dbReadingsName.c_str(), createReadings.c_str());\n\n\t\trc = SQLExec(dbHandle, createReadings.c_str());\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"createReadingsTables\", sqlite3_errmsg(dbHandle));\n\t\t\tif (newConnection)\n\t\t\t{\n\t\t\t\tmanager->release(connection);\n\t\t\t}\n\t\t\treturn false;\n\t\t}\n\n\t\trc = SQLExec(dbHandle, createReadingsIdx.c_str());\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"createReadingsTables\", sqlite3_errmsg(dbHandle));\n\t\t\tif (newConnection)\n\t\t\t{\n\t\t\t\tmanager->release(connection);\n\t\t\t}\n\t\t\treturn false;\n\t\t}\n\t}\n\tif (newConnection)\n\t{\n\t\tmanager->release(connection);\n\t}\n\n\treturn true;\n}\n\n/**\n * Create the overflow reading tables in the given database id\n *\n * We should only do this once when we upgrade to the version with an\n * overflow table. 
Although this should ideally be done in the schema\n * update script, we cannot do it there because we cannot loop over all the\n * databases in that script.\n *\n * @param dbHandle    Database connection to use for the operation\n * @param dbId        Database id in which the overflow table should be created\n *\n */\nbool  ReadingsCatalogue::createReadingsOverflowTable(sqlite3 *dbHandle, int dbId)\n{\n\tstring dbReadingsName;\n\n\tLogger *logger = Logger::getLogger();\n\n\tConnectionManager *manager = ConnectionManager::getInstance();\n\n\tstring dbName = generateDbName(dbId);\n\tlogger->info(\"Creating reading overflow table for database '%s'\", dbName.c_str());\n\n\tdbReadingsName = string(READINGS_TABLE) + \"_\" + to_string(dbId);\n\tdbReadingsName.append(\"_overflow\");\n\n\tstring sqlCmd = \"select count(*) from \" + dbName + \".\" + dbReadingsName + \";\";\n\tchar *errMsg;\n\tint rc = SQLExec(dbHandle, sqlCmd.c_str(), &errMsg);\n\tif (rc == SQLITE_OK)\n\t{\n\t\tlogger->debug(\"Overflow table %s already exists, not attempting creation\", dbReadingsName.c_str());\n\t\treturn true;\n\t}\n\t// The existence check failed: free any error message it produced\n\tif (errMsg)\n\t{\n\t\tsqlite3_free(errMsg);\n\t}\n\n\tstring createReadings = R\"(\n\t\tCREATE TABLE IF NOT EXISTS )\" + dbName + \".\" + dbReadingsName + R\"( (\n\t\t\tid         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n\t\t\tasset_code CHARACTER varying(50)       NOT NULL,\n\t\t\treading    JSON                        NOT NULL DEFAULT '{}',\n\t\t\tuser_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),\n\t\t\tts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))\n\t\t);\n\t)\";\n\n\tstring createReadingsIdx1 = R\"(\n\t\tCREATE INDEX IF NOT EXISTS )\" + dbName + \".\" + dbReadingsName + R\"(_ix1 ON )\" + dbReadingsName + R\"( (asset_code, user_ts desc);\n\t)\";\n\tstring createReadingsIdx2 = R\"(\n\t\tCREATE INDEX IF NOT EXISTS )\" + dbName + \".\" + dbReadingsName + R\"(_ix2 ON )\" + dbReadingsName + R\"( (asset_code);\n\t)\";\n\tstring createReadingsIdx3 = R\"(\n\t\tCREATE INDEX IF NOT EXISTS )\" + dbName + \".\" + dbReadingsName + R\"(_ix3 ON )\" + 
dbReadingsName + R\"( (user_ts);\n\t)\";\n\n\tlogger->info(\" Creating table '%s' sql cmd '%s'\", dbReadingsName.c_str(), createReadings.c_str());\n\n\trc = SQLExec(dbHandle, createReadings.c_str());\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"creating overflow table\", sqlite3_errmsg(dbHandle));\n\t\treturn false;\n\t}\n\n\trc = SQLExec(dbHandle, createReadingsIdx1.c_str());\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"creating overflow table index 1\", sqlite3_errmsg(dbHandle));\n\t\treturn false;\n\t}\n\trc = SQLExec(dbHandle, createReadingsIdx2.c_str());\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"creating overflow table index 2\", sqlite3_errmsg(dbHandle));\n\t\treturn false;\n\t}\n\trc = SQLExec(dbHandle, createReadingsIdx3.c_str());\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"creating overflow table index 3\", sqlite3_errmsg(dbHandle));\n\t\treturn false;\n\t}\n\n\treturn true;\n}\n\n/**\n * Evaluates the latest reading table defined in the provided database id by looking at sqlite_master, the SQLite catalogue\n *\n * @param dbHandle Database connection to use for the operations\n * @param dbId     Database id for which the operation must be executed\n *\n * @return - a struct containing\n *             lastReadings = the id of the latest reading table defined in the given database id\n *             tableCount   = number of reading tables in the given database id\n */\nReadingsCatalogue::tyReadingsAvailable  ReadingsCatalogue::evaluateLastReadingAvailable(sqlite3 *dbHandle, int dbId)\n{\n\tstring dbName;\n\tint nCols;\n\tint id;\n\tchar *asset_name;\n\tsqlite3_stmt *stmt;\n\tint rc;\n\tstring tableName;\n\ttyReadingsAvailable readingsAvailable;\n\n\tConnection        *connection;\n\tbool connAllocated = false;\n\n\tvector<int> readingsId(getNReadingsAvailable(), 0);\n\n\tConnectionManager *manager = ConnectionManager::getInstance();\n\n\tif (dbHandle == NULL)\n\t{\n\t\tconnection = manager->allocate();\n#if 
TRACK_CONNECTION_USER\n\t\tstring usage = \"Evaluate last reading available\";\n\t\tconnection->setUsage(usage);\n#endif\n\t\tdbHandle = connection->getDbHandle();\n\t\tconnAllocated = true;\n\t}\n\n\tdbName = generateDbName(dbId);\n\n\tstring sql_cmd = R\"(\n\t\tSELECT name\n\t\tFROM  )\" + dbName +  R\"(.sqlite_master\n\t\tWHERE type='table' and name like 'readings_%';\n\t)\";\n\n\tif (sqlite3_prepare_v2(dbHandle,sql_cmd.c_str(),-1, &stmt,NULL) != SQLITE_OK)\n\t{\n\t\traiseError(\"evaluateLastReadingAvailable\", sqlite3_errmsg(dbHandle));\n\t\treadingsAvailable.lastReadings = -1;\n\t\treadingsAvailable.tableCount = 0;\n\t}\n\telse\n\t{\n\t\t// Iterate over all the rows in the resultSet\n\t\treadingsAvailable.lastReadings = 0;\n\t\treadingsAvailable.tableCount = 0;\n\t\twhile ((rc = SQLStep(stmt)) == SQLITE_ROW)\n\t\t{\n\t\t\tnCols = sqlite3_column_count(stmt);\n\n\t\t\ttableName = (char *)sqlite3_column_text(stmt, 0);\n\t\t\t// Skip the overflow tables, only consider readings_<dbId>_<tableId> tables\n\t\t\tif (tableName.find(\"overflow\") == string::npos)\n\t\t\t{\n\t\t\t\tid = extractReadingsIdFromName(tableName);\n\n\t\t\t\tif (id > readingsAvailable.lastReadings)\n\t\t\t\t\treadingsAvailable.lastReadings = id;\n\n\t\t\t\treadingsAvailable.tableCount++;\n\t\t\t}\n\t\t}\n\t\tLogger::getLogger()->debug(\"evaluateLastReadingAvailable - tableName '%s' lastReadings %d\", tableName.c_str(), readingsAvailable.lastReadings);\n\n\t\tsqlite3_finalize(stmt);\n\t}\n\n\tif (connAllocated)\n\t{\n\t\tmanager->release(connection);\n\t}\n\n\treturn (readingsAvailable);\n}\n\n/**\n * Checks if there is a reading table still available to be used\n */\nbool  ReadingsCatalogue::isReadingAvailable() const\n{\n\treturn (m_nReadingsAvailable > 0);\n}\n\n/**\n * Tracks the allocation of a reading table\n *\n */\nvoid  ReadingsCatalogue::allocateReadingAvailable()\n{\n\tm_nReadingsAvailable--;\n}\n\n/**\n * Allocates a reading table to the given asset_code\n *\n * @param    connection\tDb connection to be used for the 
operations\n * @param    asset_code\tAsset code for which the referenced readings table should be identified\n * @return              the reading id associated to the provided asset_code\n */\nReadingsCatalogue::tyReadingReference  ReadingsCatalogue::getReadingReference(Connection *connection, const char *asset_code)\n{\n\ttyReadingReference ref;\n\n\tsqlite3_stmt *stmt;\n\tstring sql_cmd;\n\tint rc;\n\tsqlite3\t\t*dbHandle;\n\n\tstring msg;\n\tbool success;\n\tstd::string escaped_asset = std::string(asset_code);\n\tstd::string target =\"\\\"\";\n\tstd::string replacement =\"\\\"\\\"\";\n\tStringReplaceAllEx(escaped_asset, target, replacement);\n\tint startReadingsId;\n\ttyReadingsAvailable readingsAvailable;\n\n\tostringstream threadId;\n\tthreadId << std::this_thread::get_id();\n\n\tsuccess = true;\n\n\tdbHandle = connection->getDbHandle();\n\n\tLogger *logger = Logger::getLogger();\n\n\tauto item = m_AssetReadingCatalogue.find(asset_code);\n\tif (item != m_AssetReadingCatalogue.end())\n\t{\n\t\t//# The asset is already allocated to a table\n\t\tref.tableId = item->second.getTable();\n\t\tref.dbId = item->second.getDatabase();\n\t\titem->second.issue();\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->debug(\"getReadingReference - before lock dbHandle %X threadId '%s'\", dbHandle, threadId.str().c_str() );\n\n\t\tAttachDbSync *attachSync = AttachDbSync::getInstance();\n\t\tattachSync->lock();\n\n\t\tReadingsCatalogue::tyReadingReference emptyTableReference = {-1, -1};\n\t\tstd::string emptyAsset = {};\n\t\t// Check again under the lock, another thread may have allocated the asset\n\t\tauto item = m_AssetReadingCatalogue.find(asset_code);\n\t\tif (item != m_AssetReadingCatalogue.end())\n\t\t{\n\t\t\tref.tableId = item->second.getTable();\n\t\t\tref.dbId = item->second.getDatabase();\n\t\t\titem->second.issue();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (! isReadingAvailable ())\n\t\t\t{\n\t\t\t\t// No Reading table available... 
Get empty reading table \n\t\t\t\temptyTableReference = getEmptyReadingTableReference(emptyAsset);\n\t\t\t\tif ( !emptyAsset.empty() )\n\t\t\t\t{\n\t\t\t\t\tref = emptyTableReference;\n\t\t\t\t}\n\t\t\t\telse \n\t\t\t\t{\n\t\t\t\t\t//# Allocate a new block of readings tables\n\t\t\t\t\tLogger::getLogger()->debug(\"Allocating a new db from the preallocated tables.  %d preallocated tables available.\", m_dbNAvailable);\n\n\t\t\t\t\tif (m_dbNAvailable > 0)\n\t\t\t\t\t{\n\t\t\t\t\t\t// DBs already pre-allocated are available\n\t\t\t\t\t\tm_dbIdCurrent++;\n\t\t\t\t\t\tm_dbNAvailable--;\n\t\t\t\t\t\tm_nReadingsAvailable = getNReadingsAllocate();\n\n\t\t\t\t\t\tLogger::getLogger()->debug(\"Allocate dbIdCurrent %d dbIdLast %d dbNAvailable  %d nReadingsAvailable %d  \", m_dbIdCurrent, m_dbIdLast, m_dbNAvailable, m_nReadingsAvailable);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\t// There are no pre-allocated databases available\n\t\t\t\t\t\t// Allocates new DBs\n\t\t\t\t\t\tint dbId, dbIdStart, dbIdEnd, allocated = 0;\n\n\t\t\t\t\t\tdbIdStart = m_dbIdLast +1;\n\t\t\t\t\t\tdbIdEnd = m_dbIdLast + m_storageConfigCurrent.nDbToAllocate;\n\n\t\t\t\t\t\tLogger::getLogger()->debug(\"getReadingReference - allocate a new db - create new db - dbIdCurrent %d dbIdStart %d dbIdEnd %d\", m_dbIdCurrent, dbIdStart, dbIdEnd);\n\n\t\t\t\t\t\tfor (dbId = dbIdStart; dbId <= dbIdEnd; dbId++)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treadingsAvailable = evaluateLastReadingAvailable(dbHandle, dbId - 1);\n\n\t\t\t\t\t\t\tstartReadingsId = 1;\n\n\t\t\t\t\t\t\tsuccess = createNewDB(dbHandle,  dbId, startReadingsId, NEW_DB_ATTACH_REQUEST);\n\t\t\t\t\t\t\tif (success)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tLogger::getLogger()->debug(\"getReadingReference - allocate a new db - create new dbs - dbId %d startReadingsIdOnDB %d\", dbId, startReadingsId);\n\t\t\t\t\t\t\t\tallocated++;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif 
(allocated)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tm_dbIdLast += allocated;\n\t\t\t\t\t\t\tm_dbIdCurrent++;\n\t\t\t\t\t\t\tm_dbNAvailable += (allocated - 1);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tref.tableId = -1;\n\t\t\t\t\tref.dbId = -1;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (success)\n\t\t\t{\n\t\t\t\t// Associate the asset to a reading table\n\t\t\t\tif (emptyAsset.empty())\n\t\t\t\t{\n\t\t\t\t\tref.tableId = getMaxReadingsId(m_dbIdCurrent) + 1;\n\t\t\t\t\tref.dbId = m_dbIdCurrent;\n\t\t\t\t}\n\n\t\t\t\t{\n\t\t\t\t\tm_EmptyAssetReadingCatalogue.erase(emptyAsset);\n\t\t\t\t\tm_AssetReadingCatalogue.erase(emptyAsset);\n\t\t\t\t\tauto newMapValue = make_pair(asset_code, TableReference(ref.dbId, ref.tableId));\n\t\t\t\t\tm_AssetReadingCatalogue.insert(newMapValue);\n\t\t\t\t}\n\n\t\t\t\t// Allocate the table in the reading catalogue\n\t\t\t\tif (emptyAsset.empty())\n\t\t\t\t{\n\t\t\t\t\tsql_cmd =\n\t\t\t\t\t\t\"INSERT INTO  \" READINGS_DB \".asset_reading_catalogue (table_id, db_id, asset_code) VALUES  (\"\n\t\t\t\t\t\t+ to_string(ref.tableId) + \",\"\n\t\t\t\t\t\t+ to_string(ref.dbId) + \",\"\n\t\t\t\t\t\t+ \"\\\"\" + escaped_asset + \"\\\")\";\n\n\t\t\t\t\tLogger::getLogger()->debug(\"getReadingReference - allocate a new reading table for the asset '%s' db Id %d readings Id %d \", asset_code, ref.dbId, ref.tableId);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql_cmd = \t\" UPDATE \" READINGS_DB \".asset_reading_catalogue SET asset_code ='\" + string(escaped_asset) + \"'\" +\n\t\t\t\t\t\t\t\t\t\" WHERE db_id = \" + to_string(ref.dbId) + \" AND table_id = \" + to_string(ref.tableId) + \";\";\n\n\t\t\t\t\tLogger::getLogger()->debug(\"getReadingReference - Use empty table readings_%d_%d\", ref.dbId, ref.tableId);\n\t\t\t\t}\n\t\t\t\t\n\t\t\t\t{\n\t\t\t\t\trc = SQLExec(dbHandle, sql_cmd.c_str());\n\t\t\t\t\tif (rc != SQLITE_OK)\n\t\t\t\t\t{\n\t\t\t\t\t\tmsg = string(sqlite3_errmsg(dbHandle)) + \" asset 
:\" + asset_code + \":\";\n\t\t\t\t\t\traiseError(\"asset_reading_catalogue update\", msg.c_str());\n\t\t\t\t\t}\n\n\t\t\t\t\tif (emptyAsset.empty())\n\t\t\t\t\t{\n\t\t\t\t\t\tallocateReadingAvailable();\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// Assign to overflow\n\t\t\t\tLogger::getLogger()->info(\"Assign asset %s to the overflow table\", asset_code);\n\t\t\t\tauto newMapValue = make_pair(asset_code, TableReference(m_nextOverflow, 0));\n\t\t\t\tm_AssetReadingCatalogue.insert(newMapValue);\n\t\t\t\tsql_cmd =\n\t\t\t\t\t\"INSERT INTO  \" READINGS_DB \".asset_reading_catalogue (table_id, db_id, asset_code) VALUES  ( 0,\"\n\t\t\t\t\t+ to_string(m_nextOverflow) + \",\"\n\t\t\t\t\t+ \"\\\"\" + escaped_asset + \"\\\")\";\n\t\t\t\trc = SQLExec(dbHandle, sql_cmd.c_str());\n\t\t\t\tif (rc != SQLITE_OK)\n\t\t\t\t{\n\t\t\t\t\tmsg = string(sqlite3_errmsg(dbHandle)) + \" asset :\" + asset_code + \":\";\n\t\t\t\t\traiseError(\"asset_reading_catalogue update\", msg.c_str());\n\t\t\t\t}\n\t\t\t\tref.tableId = 0;\n\t\t\t\tref.dbId = m_nextOverflow;\n\t\t\t\tif (m_nextOverflow > m_maxOverflowUsed)\n\t\t\t\t{\n\t\t\t\t\tm_maxOverflowUsed = m_nextOverflow;\n\t\t\t\t}\n\t\t\t\tm_nextOverflow++;\n\t\t\t\tif (m_nextOverflow > m_dbIdLast)\n\t\t\t\t\tm_nextOverflow = 1;\n\n\t\t\t}\n\t\t\tLogger::getLogger()->debug(\"Assign: '%s' to %d, %d\", asset_code, ref.dbId, ref.tableId);\n\t\t}\n\t\tattachSync->unlock();\n\t}\n\n\treturn (ref);\n\n}\n\n/**\n * Loads the empty reading table catalogue\n *\n */\nbool ReadingsCatalogue::loadEmptyAssetReadingCatalogue(bool clean)\n{\n\tstd::lock_guard<std::mutex> guard(m_emptyReadingTableMutex);\n\tsqlite3 *dbHandle;\n\tstring sql_cmd;\n\tsqlite3_stmt *stmt;\n\tConnectionManager *manager = ConnectionManager::getInstance();\n\t\n\tif (clean)\n\t{\n\t\tm_EmptyAssetReadingCatalogue.clear();\n\t}\n\n\t// Do not populate m_EmptyAssetReadingCatalogue if data is already there\n\tif 
(!m_EmptyAssetReadingCatalogue.empty())\n\t{\n\t\treturn true;\n\t}\n\n\tConnection *connection = manager->allocate();\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Load empty asset reading catalogue\";\n\tconnection->setUsage(usage);\n#endif\n\tdbHandle = connection->getDbHandle();\n\ttime_t issueThreshold = time(0) - 600;\t\t// More than 10 minutes since it was last used\n\tfor (auto &item : m_AssetReadingCatalogue)\n\t{\n\t\tstring asset_name = item.first;\n\t\tint tableId = item.second.getTable();\n\t\tint dbId = item.second.getDatabase();\n\n\t\tif (tableId > 0 && item.second.lastIssued() < issueThreshold)\n\t\t{\n\t\t\tsql_cmd = \"SELECT COUNT(*) FROM readings_\" + to_string(dbId) + \".readings_\" + to_string(dbId) + \"_\" + to_string(tableId) + \" ;\";\n\t\t\tif (sqlite3_prepare_v2(dbHandle, sql_cmd.c_str(), -1, &stmt, NULL) != SQLITE_OK)\n\t\t\t{\n\t\t\t\tsqlite3_finalize(stmt);\t\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif (SQLStep(stmt) == SQLITE_ROW)\n\t\t\t{\n\t\t\t\tif (sqlite3_column_int(stmt, 0) == 0)\n\t\t\t\t{\n\t\t\t\t\tauto newItem = make_pair(tableId,dbId);\n\t\t\t\t\tauto newMapValue = make_pair(asset_name,newItem);\n\t\t\t\t\tm_EmptyAssetReadingCatalogue.insert(newMapValue);\n\t\t\t\t}\n\t\t\t}\n\t\t\tsqlite3_finalize(stmt);\n\t\t}\n\t}\n\tmanager->release(connection);\n\treturn true;\n}\n\n/**\n *  Get Empty Reading Table\n *\n * @param    asset emptyAsset, copies value of asset for which empty table is found\n * @return         the reading id associated to the provided empty table\n */\nReadingsCatalogue::tyReadingReference ReadingsCatalogue::getEmptyReadingTableReference(std::string& asset)\n{\n\tReadingsCatalogue::tyReadingReference emptyTableReference = {-1, -1};\n\tif (m_EmptyAssetReadingCatalogue.size() == 0)\n\t{\n\t\tloadEmptyAssetReadingCatalogue();\n\t}\n\n\tauto it = m_EmptyAssetReadingCatalogue.begin();\n\tif (it != m_EmptyAssetReadingCatalogue.end())\n\t{\n\t\tasset = 
it->first;\n\t\temptyTableReference.tableId = it->second.first;\n\t\temptyTableReference.dbId = it->second.second;\n\t}\n\n\treturn emptyTableReference;\n}\n\n/**\n * Retrieve the maximum table id for the provided database id\n *\n * @param dbId Database id for which the maximum reading id must be retrieved\n * @return     Maximum readings table id for the requested database id\n *\n */\nint ReadingsCatalogue::getMaxReadingsId(int dbId)\n{\n\tint maxId = 0;\n\n\tfor (auto &item : m_AssetReadingCatalogue)\n\t{\n\t\tif (item.second.getDatabase() == dbId && item.second.getTable() > maxId)\n\t\t\tmaxId = item.second.getTable();\n\t}\n\n\treturn (maxId);\n}\n\n\n/**\n * Returns the number of readings tables in use\n *\n * @return number of readings tables in use\n *\n */\nint ReadingsCatalogue::getReadingsCount()\n{\n\treturn (m_AssetReadingCatalogue.size());\n}\n\n/**\n * Returns the position in the array of the specific readings Id considering the database id and the table id\n *\n * @param dbId    Database id for which calculation must be evaluated\n * @param tableId table  id for which calculation must be evaluated\n * @return        Position in the array of the specific readings Id\n *\n */\nint ReadingsCatalogue::getReadingPosition(int dbId, int tableId)\n{\n\tint position;\n\n\tif ((dbId == 0) && (tableId == 0))\n\t{\n\t\tdbId = m_dbIdCurrent;\n\t\ttableId = getMaxReadingsId(dbId);\n\t}\n\n\tposition = ((dbId - 1) * m_storageConfigCurrent.nReadingsPerDb) + tableId;\n\n\treturn (position);\n}\n\n\n/**\n * Calculate the number of reading tables associated to the given database id\n *\n * @param dbId    Database id for which calculation must be evaluated\n * @return        Number of reading tables associated to the given database id\n *\n */\nint ReadingsCatalogue::getUsedTablesDbId(int dbId)\n{\n\tint count = 0;\n\n\tfor (auto &item : m_AssetReadingCatalogue)\n\t{\n\t\tif (item.second.getTable() != 0 && item.second.getDatabase() == dbId)\n\t\t\tcount++;\n\t}\n\n\treturn (count);\n}\n\n/**\n * 
Delete the content of all the active readings tables using the provided sql command sqlCmdBase\n *\n * @param dbHandle     Database connection to use for the operations\n * @param sqlCmdBase   Sql command to execute\n * @param zErrMsg      value returned by reference, Error message\n * @param rowsAffected value returned by reference if != 0, Number of affected rows\n * @return             returns SQLITE_OK if all the sql commands are properly executed\n *\n */\nint  ReadingsCatalogue::purgeAllReadings(sqlite3 *dbHandle, const char *sqlCmdBase, char **zErrMsg, unsigned long *rowsAffected)\n{\n\tstring dbReadingsName;\n\tstring dbName;\n\tstring sqlCmdTmp;\n\tstring sqlCmd;\n\tbool firstRow;\n\tint rc;\n\n\tif (m_AssetReadingCatalogue.empty())\n\t{\n\t\tLogger::getLogger()->debug(\"purgeAllReadings: no tables defined\");\n\t\trc = SQLITE_OK;\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->debug(\"purgeAllReadings tables defined\");\n\n\t\tPurgeConfiguration *purgeConfig = PurgeConfiguration::getInstance();\n\t\tbool exclusions = purgeConfig->hasExclusions();\n\n\t\tfirstRow = true;\n\t\tif  (rowsAffected != nullptr)\n\t\t\t*rowsAffected = 0;\n\n\t\tfor (auto &item : m_AssetReadingCatalogue)\n\t\t{\n\t\t\tif (exclusions && purgeConfig->isExcluded(item.first))\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info(\"Asset %s excluded from purge\", item.first.c_str());\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tsqlCmdTmp = sqlCmdBase;\n\n\t\t\tdbName = generateDbName(item.second.getDatabase());\n\t\t\tdbReadingsName = generateReadingsName(item.second.getDatabase(), item.second.getTable());\n\n\t\t\tStringReplaceAll (sqlCmdTmp, \"_assetcode_\", item.first);\n\t\t\tStringReplaceAll (sqlCmdTmp, \"_dbname_\", dbName);\n\t\t\tStringReplaceAll (sqlCmdTmp, \"_tablename_\", dbReadingsName);\n\t\t\tsqlCmd += sqlCmdTmp;\n\t\t\tfirstRow = false;\n\n\t\t\trc = SQLExec(dbHandle, sqlCmdTmp.c_str(), zErrMsg);\n\n\t\t\tLogger::getLogger()->debug(\"purgeAllReadings:  rc:%d, errorMsg:'%s', cmd:'%s'\", rc , 
(*zErrMsg) ? (*zErrMsg) : \"\", sqlCmdTmp.c_str() );\n\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\t// sqlite3_free(*zErrMsg); // needed by calling method\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif  (rowsAffected != nullptr) {\n\n\t\t\t\t*rowsAffected += (unsigned long ) sqlite3_changes(dbHandle);\n\t\t\t}\n\n\t\t}\n\t}\n\n\treturn(rc);\n\n}\n\n/**\n * Constructs a sql command from the given one consisting of a set of UNION ALL commands\n * considering all the readings tables in use\n *\n * @param sqlCmdBase        Base Sql command\n * @param assetCodes        Asset codes to evaluate for the operation\n * @param considerExclusion If True the asset code in the excluded list must not be considered\n * @return                  Full sql command\n *\n */\nstring  ReadingsCatalogue::sqlConstructMultiDb(string &sqlCmdBase, vector<string>  &assetCodes, bool considerExclusion)\n{\n\tstring dbReadingsName;\n\tstring dbName;\n\tstring sqlCmdTmp;\n\tstring sqlCmd;\n\n\tstring assetCode;\n\tbool addTable;\n\tbool addedOne;\n\n\tif (m_AssetReadingCatalogue.empty())\n\t{\n\t\tLogger::getLogger()->debug(\"sqlConstructMultiDb: no tables defined\");\n\t\tsqlCmd = sqlCmdBase;\n\n\t\tdbReadingsName = generateReadingsName(1, 1);\n\n\t\tStringReplaceAll (sqlCmd, \"_assetcode_\", \"dummy_asset_code\");\n\t\tStringReplaceAll (sqlCmd, \".assetcode.\", \"asset_code\");\n\t\tStringReplaceAll (sqlCmd, \"_dbname_\", READINGS_DB);\n\t\tStringReplaceAll (sqlCmd, \"_tablename_\", dbReadingsName);\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->debug(\"sqlConstructMultiDb: tables defined\");\n\n\t\tbool firstRow = true;\n\t\taddedOne = false;\n\n\t\tPurgeConfiguration *purgeConfig = PurgeConfiguration::getInstance();\n\t\tbool exclusions = purgeConfig->hasExclusions();\n\n\t\tfor (auto &item : m_AssetReadingCatalogue)\n\t\t{\n\t\t\tassetCode=item.first;\n\t\t\taddTable = false;\n\n\t\t\tif (considerExclusion && exclusions && 
purgeConfig->isExcluded(item.first))\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info(\"Asset %s excluded from the query on the multiple readings\", item.first.c_str());\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t// Exclude the overflow table\n\t\t\tif (item.second.getTable() == 0)\n\t\t\t{\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t// Evaluates which tables should be referenced\n\t\t\tif (assetCodes.empty())\n\t\t\t\taddTable = true;\n\t\t\telse\n\t\t\t{\n\t\t\t\tif (std::find(assetCodes.begin(), assetCodes.end(), assetCode) != assetCodes.end())\n\t\t\t\t\taddTable = true;\n\t\t\t}\n\n\t\t\tif (addTable)\n\t\t\t{\n\t\t\t\taddedOne = true;\n\n\t\t\t\tsqlCmdTmp = sqlCmdBase;\n\n\t\t\t\tif (!firstRow)\n\t\t\t\t{\n\t\t\t\t\tsqlCmd += \" UNION ALL \";\n\t\t\t\t}\n\n\t\t\t\tdbName = generateDbName(item.second.getDatabase());\n\t\t\t\tdbReadingsName = generateReadingsName(item.second.getDatabase(), item.second.getTable());\n\t\t\t\tstd::string target =\"\\\"\";\n\t\t\t\tstd::string replacement =\"\\\"\\\"\";\n\t\t\t\tStringReplaceAllEx(assetCode, target, replacement);\n\n\t\t\t\tStringReplaceAll(sqlCmdTmp, \"_assetcode_\", assetCode);\n\t\t\t\tStringReplaceAll (sqlCmdTmp, \".assetcode.\", \"asset_code\");\n\t\t\t\tStringReplaceAll(sqlCmdTmp, \"_dbname_\", dbName);\n\t\t\t\tStringReplaceAll(sqlCmdTmp, \"_tablename_\", dbReadingsName);\n\t\t\t\tsqlCmd += sqlCmdTmp;\n\t\t\t\tfirstRow = false;\n\t\t\t}\n\t\t}\n\t\t// Ensure at least one table is referenced, adding a dummy one if necessary\n\t\tif (! 
addedOne)\n\t\t{\n\t\t\tdbReadingsName = generateReadingsName(1, 1);\n\n\t\t\tsqlCmd = sqlCmdBase;\n\t\t\tStringReplaceAll (sqlCmd, \"_assetcode_\", \"dummy_asset_code\");\n\t\t\tStringReplaceAll (sqlCmd, \"_dbname_\", READINGS_DB);\n\t\t\tStringReplaceAll (sqlCmd, \"_tablename_\", dbReadingsName);\n\t\t}\n\t}\n\n\treturn sqlCmd;\n}\n\n/**\n * Add union all clauses for all the overflow tables based on templated SQL that is passed\n * in and a set of asset codes\n *\n * @param sqlCmdBase        Base Sql command\n * @param assetCodes        Asset codes to evaluate for the operation\n * @param considerExclusion If True the asset code in the excluded list must not be considered\n * @param groupBy\t    Include a group by asset_code in each sub query\n * @return                  Full sql command\n *\n */\nstring  ReadingsCatalogue::sqlConstructOverflow(string &sqlCmdBase, vector<string>  &assetCodes, bool considerExclusion, bool groupBy)\n{\n\tstring dbReadingsName;\n\tstring dbName;\n\tstring sqlCmdTmp;\n\tstring sqlCmd;\n\n\tstring assetCode;\n\tbool addTable;\n\tbool addedOne;\n\n\tfor (int dbId = 1; dbId <= m_maxOverflowUsed; dbId++)\n\t{\n\t\tdbReadingsName = generateReadingsName(dbId, 0);\n\t\tsqlCmdTmp = sqlCmdBase;\n\n\t\tsqlCmd += \" UNION ALL \";\n\n\t\tdbName = generateDbName(dbId);\n\n\t\tStringReplaceAll (sqlCmdTmp, \".assetcode.\", \"asset_code\");\n\t\tStringReplaceAll(sqlCmdTmp, \"_dbname_\", dbName);\n\t\tStringReplaceAll(sqlCmdTmp, \"_tablename_\", dbReadingsName);\n\t\tsqlCmd += sqlCmdTmp;\n\t\tif (! 
assetCodes.empty())\n\t\t{\n\t\t\tsqlCmd += \" WHERE \";\n\t\t\tbool first = true;\n\t\t\tfor (auto& code : assetCodes)\n\t\t\t{\n\t\t\t\tif (!first)\n\t\t\t\t{\n\t\t\t\t\tsqlCmd += \" or \";\n\t\t\t\t}\n\t\t\t\tfirst = false;\n\t\t\t\tsqlCmd += \"asset_code = \\'\";\n\t\t\t\tsqlCmd += code;\n\t\t\t\tsqlCmd += \"\\'\";\n\t\t\t}\n\t\t}\n\n\t\tif (groupBy)\n\t\t{\n\t\t\tsqlCmd += \" GROUP BY asset_code\";\n\t\t}\n\t}\n\n\treturn sqlCmd;\n}\n\n\n/**\n * Generates a SQLite db alias from the database id\n *\n * @param dbId Database id for which the alias must be generated\n * @return     Generated alias\n *\n */\nstring ReadingsCatalogue::generateDbAlias(int dbId)\n{\n\n\treturn (READINGS_DB_NAME_BASE \"_\" + to_string(dbId));\n}\n\n/**\n * Generates a SQLite database name from the database id\n *\n * @param dbId Database id for which the database name must be generated\n * @return     Generated database name\n *\n */\nstring ReadingsCatalogue::generateDbName(int dbId)\n{\n\treturn (READINGS_DB_NAME_BASE \"_\" + to_string(dbId));\n}\n\n/**\n * Generates a SQLite database file name from the database id\n *\n * @param dbId Database id for which the database file name must be generated\n * @return     Generated database file name\n *\n */\nstring ReadingsCatalogue::generateDbFileName(int dbId)\n{\n\treturn (READINGS_DB_NAME_BASE \"_\" + to_string (dbId) + \".db\");\n}\n\n/**\n * Extracts the readings id from the table name\n *\n * @param tableName Table name from which the id must be extracted\n * @return          Extracted reading id or -1 on error\n *\n */\nint ReadingsCatalogue::extractReadingsIdFromName(string tableName)\n{\n\tint dbId;\n\tint tableId = -1;\n\tstring dbIdTableId;\n\n\ttry {\n\t\tdbIdTableId = tableName.substr (tableName.find('_') + 1);\n\n\t\ttableId = stoi(dbIdTableId.substr (dbIdTableId.find('_') + 1));\n\n\t\tdbId = stoi(dbIdTableId.substr (0, dbIdTableId.find('_') ));\n\t} catch (exception &e) 
{\n\t\tLogger::getLogger()->fatal(\"extractReadingsIdFromName: exception on table %s, %s\",\n\t\t\t\ttableName.c_str(), e.what());\n\t}\n\n\treturn tableId;\n}\n\n/**\n * Extract the database id from the table name\n *\n * @param tableName Table name from which the database id must be extracted\n * @return          Extracted database id or -1 on error\n *\n */\nint ReadingsCatalogue::extractDbIdFromName(string tableName)\n{\n\tint dbId = -1;\n\tint tableId;\n\tstring dbIdTableId;\n\n\ttry {\n\t\tdbIdTableId = tableName.substr (tableName.find('_') + 1);\n\n\t\ttableId = stoi(dbIdTableId.substr (dbIdTableId.find('_') + 1));\n\n\t\tdbId = stoi(dbIdTableId.substr (0, dbIdTableId.find('_') ));\n\t} catch (exception &e) {\n\t\tLogger::getLogger()->fatal(\"extractDbIdFromName: exception on table %s, %s\",\n\t\t\t\ttableName.c_str(), e.what());\n\t}\n\n\treturn dbId;\n}\n\n/**\n * Generates the name of the reading table from the given table id as:\n * Prefix + db Id + reading Id. If the tableId is 0 then this is a \n * reference to the overflow table\n *\n * @param dbId    Database id to use for the generation of the table name\n * @param tableId Table id to use for the generation of the table name\n * @return        Generated reading table name\n *\n */\nstring ReadingsCatalogue::generateReadingsName(int  dbId, int tableId)\n{\n\tstring tableName;\n\n\tif (dbId == -1)\n\t\tdbId = retrieveDbIdFromTableId(tableId);\n\n\tif (tableId == 0)\t// Overflow table\n\t{\n\t\ttableName = READINGS_TABLE \"_\" + to_string(dbId) + \"_overflow\";\n\t}\n\telse\n\t{\n\t\ttableName = READINGS_TABLE \"_\" + to_string(dbId) + \"_\" + to_string(tableId);\n\t}\n\tLogger::getLogger()->debug(\"%s -  dbId %d tableId %d table name '%s' \", __FUNCTION__, dbId, tableId, tableName.c_str());\n\n\treturn tableName;\n}\n\n/**\n * Retrieves the database id from the table id\n *\n * @param tableId Table id for which the database id must be retrieved\n * @return        Retrieved database id for the 
requested reading id\n *\n */\nint ReadingsCatalogue::retrieveDbIdFromTableId(int tableId)\n{\n\tint dbId;\n\n\tdbId = -1;\n\tfor (auto &item : m_AssetReadingCatalogue)\n\t{\n\n\t\tif (item.second.getTable() == tableId)\n\t\t{\n\t\t\tdbId = item.second.getDatabase();\n\t\t\tbreak;\n\t\t}\n\t}\n\treturn (dbId);\n}\n\n/**\n * Identifies the SQLite database name from the given table id\n *\n * @param tableId Table id for which the database name must be retrieved\n * @return        Retrieved database name for the requested reading id\n *\n */\nstring ReadingsCatalogue::generateDbNameFromTableId(int tableId)\n{\n\tstring dbName;\n\n\tfor (auto &item : m_AssetReadingCatalogue)\n\t{\n\n\t\tif (item.second.getTable() == tableId)\n\t\t{\n\t\t\tdbName = READINGS_DB_NAME_BASE \"_\" + to_string(item.second.getDatabase());\n\t\t\tbreak;\n\t\t}\n\t}\n\tif (dbName == \"\")\n\t\tdbName = READINGS_DB_NAME_BASE \"_1\";\n\n\treturn (dbName);\n}\n\n/**\n * SQLite wrapper to retry statements when the database is locked\n *\n * @param dbHandle  Database connection to use for the operations\n * @param sqlCmd\tThe SQL to execute\n * @param errMsg\tReturned by reference, error message\n * @return          SQLite constant indicating the outcome of the requested operation, like for example SQLITE_LOCKED, SQLITE_BUSY...\n *\n */\nint ReadingsCatalogue::SQLExec(sqlite3 *dbHandle, const char *sqlCmd, char **errMsg)\n{\n\tint retries = 0, rc;\n\n\tif (errMsg)\n\t{\n\t\t*errMsg = NULL;\n\t}\n\tLogger::getLogger()->debug(\"SQLExec: cmd '%s' \", sqlCmd);\n\n\tdo {\n\t\tif (errMsg && *errMsg)\n\t\t{\n\t\t\tsqlite3_free(*errMsg);\n\t\t\t*errMsg = NULL;\n\t\t}\n\t\trc = sqlite3_exec(dbHandle, sqlCmd, NULL, NULL, errMsg);\n\t\tLogger::getLogger()->debug(\"SQLExec: rc %d \", rc);\n\n\t\tretries++;\n\t\tif (rc == SQLITE_LOCKED || rc == SQLITE_BUSY)\n\t\t{\n\t\t\tint interval = (retries * RETRY_BACKOFF);\n\t\t\tusleep(interval);\t// back off before retrying\n\t\t\tif (retries > 5) 
Logger::getLogger()->info(\"SQLExec - error '%s' retry %d of %d, rc=%s, DB connection @ %p, slept for %d msecs\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t   sqlite3_errmsg(dbHandle), retries, MAX_RETRIES, (rc==SQLITE_LOCKED)?\"SQLITE_LOCKED\":\"SQLITE_BUSY\", this, interval);\n\t\t}\n\t} while (retries < MAX_RETRIES && (rc == SQLITE_LOCKED || rc == SQLITE_BUSY));\n\n\tif (rc == SQLITE_LOCKED)\n\t{\n\t\tLogger::getLogger()->error(\"SQLExec - Database still locked after maximum retries\");\n\t}\n\tif (rc == SQLITE_BUSY)\n\t{\n\t\tLogger::getLogger()->error(\"SQLExec - Database still busy after maximum retries\");\n\t}\n\n\treturn rc;\n}\n\n/**\n * SQLite wrapper to retry a statement step when the database is locked or busy\n *\n * @param statement\tSQLite statement to execute\n * @return          SQLite constant indicating the outcome of the requested operation, like for example SQLITE_LOCKED, SQLITE_BUSY...\n *\n */\nint ReadingsCatalogue::SQLStep(sqlite3_stmt *statement)\n{\n\tint retries = 0, rc;\n\n\tdo {\n\t\trc = sqlite3_step(statement);\n\t\tretries++;\n\t\tif (rc == SQLITE_LOCKED || rc == SQLITE_BUSY)\n\t\t{\n\t\t\tint interval = (retries * RETRY_BACKOFF);\n\t\t\tusleep(interval);\t// back off before retrying\n\t\t\tif (retries > 5) Logger::getLogger()->info(\"SQLStep: retry %d of %d, rc=%s, DB connection @ %p, slept for %d msecs\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t   retries, MAX_RETRIES, (rc==SQLITE_LOCKED)?\"SQLITE_LOCKED\":\"SQLITE_BUSY\", this, interval);\n\t\t}\n\t} while (retries < MAX_RETRIES && (rc == SQLITE_LOCKED || rc == SQLITE_BUSY));\n\n\tif (rc == SQLITE_LOCKED)\n\t{\n\t\tLogger::getLogger()->error(\"Database still locked after maximum retries\");\n\t}\n\tif (rc == SQLITE_BUSY)\n\t{\n\t\tLogger::getLogger()->error(\"Database still busy after maximum retries\");\n\t}\n\n\treturn rc;\n}\n\n/**\n * Remove committed TRANSACTION for the given thread\n *\n * @param    tid\tThe thread id\n * \t\t\tthat just committed a transaction\n */\nvoid 
TransactionBoundary::ClearThreadTransaction(std::thread::id tid)\n{\n\t// Lock m_boundaries map\n\tstd::lock_guard<std::mutex> lck(m_boundaryLock);\n\n\t// Find thread id\n\tauto itr = m_boundaries.find(tid);\n\tif (itr != m_boundaries.end())\n\t{\n#if LOG_TX_BOUNDARIES\n\t\t// Log before erasing: erase() invalidates the iterator\n\t\tLogger::getLogger()->debug(\"ClearThreadTransaction: thread [%ld] cleared TX start %ld\",\n\t\t\t\t\ttid, itr->second);\n#endif\n\t\t// Remove element\n\t\tm_boundaries.erase(itr);\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"ClearThreadTransaction: thread [%ld] not found\", tid);\n\t}\n}\n\n/**\n * Set BEGIN of a transaction for a given thread, reading id\n *\n * @param    tid\tThe thread id\n * \t\t\tthat just started a transaction\n * @param    id\t\tThe global reading id that starts the transaction\n */\nvoid TransactionBoundary::SetThreadTransactionStart(std::thread::id tid, unsigned long id)\n{\n\t// Lock m_boundaries map\n\tstd::lock_guard<std::mutex> lck(m_boundaryLock);\n\n\t// Set id per thread\n\tm_boundaries[tid] = id;\n\n#if LOG_TX_BOUNDARIES\n\tLogger::getLogger()->debug(\"SetThreadTransactionStart: thread [%ld] set TX start at %ld\",\n\t\t\t\ttid,\n\t\t\t\tid);\n#endif\n}\n\n/**\n * Fetch the minimum safe global reading id\n * among all UNCOMMITTED per thread transactions\n *\n * @return\t\tThe safe global reading id to use in\n *\t\t\tUNION ALL queries as boundary limit\n */\nunsigned long TransactionBoundary::GetMinReadingId()\n{\n\t// Lock m_boundaries map\n\tstd::lock_guard<std::mutex> lck(m_boundaryLock);\n\n\tunsigned long id = 0;\n\n\t// Get minimum reading id\n\tauto it = std::min_element(std::begin(m_boundaries),\n\t\t\t\tstd::end(m_boundaries),\n\t\t\t\t// Lambda compare function\n\t\t\t\t[](std::pair<std::thread::id ,unsigned long> i,\n\t\t\t\t\tstd::pair<std::thread::id ,unsigned long> j)\n\t\t\t\t\t{\n\t\t\t\t\t\treturn i.second < j.second;\n\t\t\t\t\t});\n\n\t// Found, set id\n\tif (it != m_boundaries.end())\n\t{\n\t\tid = it->second;\n\t}\n\n#if 
LOG_TX_BOUNDARIES\n\tstd::thread::id tid = std::this_thread::get_id();\n\tLogger::getLogger()->debug(\"GetMinReadingId: thread [%ld] TX min id is %ld\",\n\t\t\t\t   tid,\n\t\t\t\t   id);\n#endif\n\n\treturn id;\n}\n"
  },
  {
    "path": "C/plugins/storage/sqlite/include/common.h",
    "content": "#ifndef _COMMON_CONNECTION_H\n#define _COMMON_CONNECTION_H\n\n#include <sql_buffer.h>\n#include <iostream>\n#include <sqlite3.h>\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/error/error.h\"\n#include \"rapidjson/error/en.h\"\n#include <string>\n#include <map>\n#include <stdarg.h>\n#include <stdlib.h>\n#include <sstream>\n#include <logger.h>\n#include <time.h>\n#include <unistd.h>\n#include <chrono>\n#include <thread>\n#include <atomic>\n#include <condition_variable>\n#include <sys/time.h>\n#include <connection.h>\n\n#define\tSTORAGE_PURGE_RETAIN_ANY 0x0001U\n#define\tSTORAGE_PURGE_RETAIN_ALL 0x0002U\n#define STORAGE_PURGE_SIZE\t     0x0004U\n\nstatic std::map<std::string, std::string> sqliteDateFormat = {\n                                                {\"HH24:MI:SS\",\n                                                        F_TIMEH24_S},\n                                                {\"YYYY-MM-DD HH24:MI:SS.MS\",\n                                                        F_DATEH24_MS},\n                                                {\"YYYY-MM-DD HH24:MI:SS\",\n                                                        F_DATEH24_S},\n                                                {\"YYYY-MM-DD HH24:MI\",\n                                                        F_DATEH24_M},\n                                                {\"YYYY-MM-DD HH24\",\n                                                        F_DATEH24_H},\n                                                {\"\", \"\"}\n                                        };\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlite/include/profile.h",
    "content": "#ifndef _PROFILE_H\n#define _PROFILE_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n#include <vector>\n#include <sys/time.h>\n#include <time.h>\n#include <chrono>\n#include <logger.h>\n\n#define\tTIME_BUCKETS\t20\n#define BUCKET_SIZE\t5\nclass ProfileItem\n{\n\tpublic:\n\t\tProfileItem(const std::string& reference) : m_reference(reference)\n\t\t\t{ gettimeofday(&m_tvStart, NULL);\n\t\t\t\tauto timenow = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now());\n\t\t\t\tm_ts = std::string(ctime(&timenow));\n\t\t\t\t// Drop the trailing newline added by ctime()\n\t\t\t\tm_ts.pop_back(); };\n\t\t~ProfileItem() {};\n\t\tvoid \tcomplete()\n\t\t\t{\n\t\t\t\tstruct timeval tv;\n\n\t\t\t\tgettimeofday(&tv, NULL);\n\t\t\t\tm_duration = (tv.tv_sec - m_tvStart.tv_sec) * 1000 +\n\t\t\t\t(tv.tv_usec - m_tvStart.tv_usec) / 1000;\n\t\t\t};\n\t\tunsigned long getDuration() { return m_duration; };\n\t\tconst std::string& getReference() const { return m_reference; };\n\t\tconst std::string& getTs() const { return m_ts; };\n\tprivate:\n\t\tstd::string\t\tm_reference;\n\t\tstruct timeval\t\tm_tvStart;\n\t\tunsigned long\t\tm_duration;\n\t\tstd::string\t\tm_ts;\n};\n\nclass QueryProfile\n{\n\tpublic:\n\t\tQueryProfile(int samples) : m_samples(samples) { time(&m_lastReport); };\n\t\tvoid\tinsert(ProfileItem *item)\n\t\t\t{\n\t\t\t\tint b = item->getDuration() / BUCKET_SIZE;\n\t\t\t\tif (b >= TIME_BUCKETS)\n\t\t\t\t\tb = TIME_BUCKETS - 1;\n\t\t\t\tm_buckets[b]++;\n\t\t\t\tif (m_items.size() == (size_t)m_samples)\n\t\t\t\t{\n\t\t\t\t\tsize_t minIndex = 0;\n\t\t\t\t\tunsigned long minDuration = m_items[0]->getDuration();\n\t\t\t\t\tfor (size_t i = 1; i < m_items.size(); i++)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (m_items[i]->getDuration() < minDuration)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tminDuration = m_items[i]->getDuration();\n\t\t\t\t\t\t\tminIndex = i;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (item->getDuration() > 
minDuration)\n\t\t\t\t\t{\n\t\t\t\t\t\tdelete m_items[minIndex];\n\t\t\t\t\t\tm_items[minIndex] = item;\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tdelete item;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tm_items.push_back(item);\n\t\t\t\t}\n\t\t\t\tif (time(0) - m_lastReport > 600)\n\t\t\t\t{\n\t\t\t\t\treport();\n\t\t\t\t}\n\t\t\t};\n\tprivate:\n\t\tint\t\t\t\tm_samples;\n\t\tstd::vector<ProfileItem *>\tm_items;\n\t\ttime_t\t\t\t\tm_lastReport;\n\t\tunsigned int\t\t\tm_buckets[TIME_BUCKETS] = {0};\n\t\tvoid\treport()\n\t\t\t{\n\t\t\t\tLogger *logger = Logger::getLogger();\n\t\t\t\tlogger->info(\"Storage profile report\");\n\t\t\t\tlogger->info(\" < %3d mS %d\", BUCKET_SIZE, m_buckets[0]);\n\t\t\t\tfor (int j = 1; j < TIME_BUCKETS - 1; j++)\n\t\t\t\t{\n\t\t\t\t\tlogger->info(\"%3d-%3d mS %d\",\n\t\t\t\t\t\tj * BUCKET_SIZE, (j + 1) * BUCKET_SIZE,\n\t\t\t\t\t\tm_buckets[j]);\n\t\t\t\t}\n\t\t\t\tlogger->info(\" > %3d mS %d\", BUCKET_SIZE * (TIME_BUCKETS - 1), m_buckets[TIME_BUCKETS-1]);\n\t\t\t\tfor (size_t i = 0; i < m_items.size(); i++)\n\t\t\t\t{\n\t\t\t\t\tlogger->info(\"%ld mS, %s, %s\\n\",\n\t\t\t\t\t\tm_items[i]->getDuration(),\n\t\t\t\t\t\tm_items[i]->getTs().c_str(),\n\t\t\t\t\t\tm_items[i]->getReference().c_str());\n\t\t\t\t}\n\t\t\t\ttime(&m_lastReport);\n\t\t\t};\n};\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlite/plugin.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n#include <sqlite_common.h>\n#include <connection_manager.h>\n#include <connection.h>\n#include <plugin_api.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <strings.h>\n#include \"sqlite3.h\"\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include <sstream>\n#include <iostream>\n#include <string>\n#include <logger.h>\n#include <plugin_exception.h>\n#include <reading_stream.h>\n#include <config_category.h>\n#include <readings_catalogue.h>\n#include <purge_configuration.h>\n#include <string_utils.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\n/**\n * The SQLite3 plugin interface\n */\nextern \"C\" {\n\nconst char *default_config = QUOTE({\n\t\t\"poolSize\" : {\n\t\t\t\"description\" : \"The number of connections to create in the initial pool of connections\",\n\t\t\t\"type\" : \"integer\",\n\t\t\t\"minimum\": \"1\",\n\t\t\t\"maximum\": \"10\",\n\t\t\t\"default\" : \"5\",\n\t\t\t\"displayName\" : \"Pool Size\",\n\t\t\t\"order\" : \"1\"\n\t\t},\n\t\t\"nReadingsPerDb\" : {\n\t\t\t\"description\" : \"The number of unique asset tables to maintain in each database that is created\",\n\t\t\t\"type\" : \"integer\",\n\t\t\t\"minimum\": \"1\",\n\t\t\t\"default\" : \"15\",\n\t\t\t\"maximum\": \"00\",\n\t\t\t\"displayName\" : \"No. Readings per database\",\n\t\t\t\"order\" : \"2\"\n\t\t},\n\t\t\"nDbPreallocate\" : {\n\t\t\t\"description\" : \"Number of databases to allocate in advance. NOTE: SQLite has a default maximum of 10 attachable databases\",\n\t\t\t\"type\" : \"integer\",\n\t\t\t\"default\" : \"3\",\n\t\t\t\"minimum\": \"1\",\n\t\t\t\"maximum\" : \"10\",\n\t\t\t\"displayName\" : \"No. 
databases to allocate in advance\",\n\t\t\t\"order\" : \"3\"\n\t\t},\n\t\t\"nDbLeftFreeBeforeAllocate\" : {\n\t\t\t\"description\" : \"Allocate new databases when the number of free databases drops below this value\",\n\t\t\t\"type\" : \"integer\",\n\t\t\t\"default\" : \"1\",\n\t\t\t\"minimum\": \"1\",\n\t\t\t\"maximum\": \"10\",\n\t\t\t\"displayName\" : \"Database allocation threshold\",\n\t\t\t\"order\" : \"4\"\n\t\t},\n\t\t\"nDbToAllocate\" : {\n\t\t\t\"description\" : \"The number of databases to create whenever the number of available databases drops below the allocation threshold\",\n\t\t\t\"type\" : \"integer\",\n\t\t\t\"default\" : \"2\",\n\t\t\t\"minimum\" : \"1\",\n\t\t\t\"maximum\" : \"10\",\n\t\t\t\"displayName\" : \"Database allocation size\",\n\t\t\t\"order\" : \"5\"\n\t\t},\n\t\t\"purgeExclude\" : {\n\t\t\t\"description\" : \"A comma-separated list of assets to exclude from the purge process\",\n\t\t\t\"type\" : \"string\",\n\t\t\t\"default\" : \"\",\n\t\t\t\"displayName\" : \"Purge Exclusions\",\n\t\t\t\"order\" : \"6\"\n\t\t},\n\t\t\"vacuumInterval\" : {\n\t\t\t\"description\" : \"The interval between execution of a SQLite vacuum command\",\n\t\t\t\"type\" : \"integer\",\n\t\t\t\"minimum\" : \"1\",\n\t\t\t\"default\" : \"6\",\n\t\t\t\"displayName\" : \"Vacuum Interval\",\n\t\t\t\"order\" : \"7\"\n\t\t},\n\t\t\"deployment\" : {\n\t\t\t\"description\" : \"The type of Fledge deployment\",\n\t\t\t\"type\" : \"enumeration\",\n\t\t\t\"options\" : [ \"Small\", \"Normal\", \"High Bandwidth\" ],\n\t\t\t\"default\" : \"Normal\",\n\t\t\t\"displayName\" : \"Deployment\",\n\t\t\t\"order\" : \"8\"\n\t\t}\n\n});\n\n/**\n * The plugin information structure\n */\nstatic PLUGIN_INFORMATION info = {\n\t\"SQLite3\",                // Name\n\t\"1.2.0\",                  // Version\n\tSP_COMMON|SP_READINGS,    // Flags\n\tPLUGIN_TYPE_STORAGE,      // Type\n\t\"1.6.0\",                  // Interface version\n\tdefault_config\n};\n\n/**\n * Return the information about 
this plugin\n */\nPLUGIN_INFORMATION *plugin_info()\n{\n\treturn &info;\n}\n\n/**\n * Initialise the plugin, called to get the plugin handle\n * In the case of SQLite we also get a pool of connections\n * to use.\n */\nPLUGIN_HANDLE plugin_init(ConfigCategory *category)\n{\n\tConnectionManager *manager = ConnectionManager::getInstance();\n\t\n\t// Create a copy as the category we are called with has been constructed on the stack\n\tConfigCategory *newCategory = new ConfigCategory(category);\n\tmanager->setConfiguration(newCategory);\n\n\tSTORAGE_CONFIGURATION storageConfig;\n\n\tif (category->itemExists(\"poolSize\"))\n\t{\n\t\tstorageConfig.poolSize = strtol(category->getValue(\"poolSize\").c_str(), NULL, 10);\n\t}\n\tmanager->growPool(storageConfig.poolSize);\n\n\n\tif (category->itemExists(\"nReadingsPerDb\"))\n\t{\n\t\tstorageConfig.nReadingsPerDb = strtol(category->getValue(\"nReadingsPerDb\").c_str(), NULL, 10);\n\t}\n\n\n\tif (category->itemExists(\"nDbPreallocate\"))\n\t{\n\t\tstorageConfig.nDbPreallocate = strtol(category->getValue(\"nDbPreallocate\").c_str(), NULL, 10);\n\t}\n\n\tif (category->itemExists(\"nDbLeftFreeBeforeAllocate\"))\n\t{\n\t\tstorageConfig.nDbLeftFreeBeforeAllocate = strtol(category->getValue(\"nDbLeftFreeBeforeAllocate\").c_str(), NULL, 10);\n\t}\n\n\tif (category->itemExists(\"nDbToAllocate\"))\n\t{\n\t\tstorageConfig.nDbToAllocate = strtol(category->getValue(\"nDbToAllocate\").c_str(), NULL, 10);\n\t}\n\n\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\treadCat->multipleReadingsInit(storageConfig);\n\n\tif (category->itemExists(\"purgeExclude\"))\n\t{\n\t\tstring exclusions = category->getValue(\"purgeExclude\");\n\t\tPurgeConfiguration *purge = PurgeConfiguration::getInstance();\n\t\tsize_t s = 0, pos;\n\t\twhile ((pos = exclusions.find_first_of(\",\", s)) != string::npos)\n\t\t{\n\t\t\tpurge->exclude(StringTrim(exclusions.substr(s, pos - s)));\n\t\t\ts = pos + 
1;\n\t\t}\n\t\tpurge->exclude(StringTrim(exclusions.substr(s, pos)));\n\t}\n\n\tif (category->itemExists(\"vacuumInterval\"))\n\t{\n\t\tmanager->setVacuumInterval(strtol(category->getValue(\"vacuumInterval\").c_str(), NULL, 10));\n\t}\n\n\treturn manager;\n}\n\n/**\n * Insert into an arbitrary table\n */\nint plugin_common_insert(PLUGIN_HANDLE handle, char *schema, char *table, char *data)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Insert into \" + string(table);\n\tconnection->setUsage(usage);\n#endif\n\tint result = connection->insert(std::string(schema), std::string(table), std::string(data));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Retrieve data from an arbitrary table\n */\nconst char *plugin_common_retrieve(PLUGIN_HANDLE handle, char *schema, char *table, char *query)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string results;\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Retrieve from \" + string(table);\n\tconnection->setUsage(usage);\n#endif\n\tbool rval = connection->retrieve(std::string(schema), std::string(table), std::string(query), results);\n\tmanager->release(connection);\n\tif (rval)\n\t{\n\t\treturn strdup(results.c_str());\n\t}\n\treturn NULL;\n}\n\n/**\n * Update an arbitrary table\n */\nint plugin_common_update(PLUGIN_HANDLE handle, char *schema, char *table, char *data)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Update \" + string(table);\n\tconnection->setUsage(usage);\n#endif\n\n\tint result = connection->update(std::string(schema), std::string(table), std::string(data));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Delete from an arbitrary table\n */\nint 
plugin_common_delete(PLUGIN_HANDLE handle, char *schema, char *table, char *condition)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Delete from \" + string(table);\n\tconnection->setUsage(usage);\n#endif\n\tint result = connection->deleteRows(std::string(schema), std::string(table), std::string(condition));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Append a sequence of readings to the readings buffer\n */\nint plugin_reading_append(PLUGIN_HANDLE handle, char *readings)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Reading append\";\n\tconnection->setUsage(usage);\n#endif\n\tint result = connection->appendReadings(readings);\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Append a stream of readings to the readings buffer\n */\nint plugin_readingStream(PLUGIN_HANDLE handle, ReadingStream **readings, bool commit)\n{\n\tint result = 0;\n\tConnectionManager *manager = (ConnectionManager *)handle;\n\tConnection        *connection = manager->allocate();\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Reading Stream\";\n\tconnection->setUsage(usage);\n#endif\n\n\tresult = connection->readingStream(readings, commit);\n\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Fetch a block of readings from the readings buffer\n */\nchar *plugin_reading_fetch(PLUGIN_HANDLE handle, unsigned long id, unsigned int blksize)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string\t  resultSet;\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Fetch readings\";\n\tconnection->setUsage(usage);\n#endif\n\n\tconnection->fetchReadings(id, blksize, resultSet);\n\tmanager->release(connection);\n\treturn 
strdup(resultSet.c_str());\n}\n\n/**\n * Retrieve some readings from the readings buffer\n */\nchar *plugin_reading_retrieve(PLUGIN_HANDLE handle, char *condition)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string results;\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Reading retrieve\";\n\tconnection->setUsage(usage);\n#endif\n\n\tconnection->retrieveReadings(std::string(condition), results);\n\tmanager->release(connection);\n\treturn strdup(results.c_str());\n}\n\n/**\n * Purge readings from the buffer\n */\nchar *plugin_reading_purge(PLUGIN_HANDLE handle, unsigned long param, unsigned int flags, unsigned long sent)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string \t  results;\nunsigned long\t  age;\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Purge\";\n\tconnection->setUsage(usage);\n#endif\n\tif (flags & STORAGE_PURGE_SIZE)\n\t{\n\t\t(void)connection->purgeReadingsByRows(param, flags, sent, results);\n\t}\n\telse\n\t{\n\t\tage = param;\n\t\t(void)connection->purgeReadings(age, flags, sent, results);\n\t}\n\tmanager->release(connection);\n\treturn strdup(results.c_str());\n}\n\n/**\n * Release a previously returned result set\n */\nvoid plugin_release(PLUGIN_HANDLE handle, char *results)\n{\n\t(void)handle;\n\tfree(results);\n}\n\n/**\n * Return details on the last error that occurred.\n */\nPLUGIN_ERROR *plugin_last_error(PLUGIN_HANDLE handle)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\n\n\treturn manager->getError();\n}\n\n/**\n * Shutdown the plugin\n */\nbool plugin_shutdown(PLUGIN_HANDLE handle)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\n\n\tConnection        *connection = manager->allocate();\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Shutdown\";\n\tconnection->setUsage(usage);\n#endif\n\tif 
(connection->supportsReadings())\n\t{\n\t\tconnection->shutdownAppendReadings();\n\n\t\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\t\treadCat->storeGlobalId();\n\t}\n\tmanager->release(connection);\n\n\tmanager->shutdown();\n\treturn true;\n}\n\n/**\n * Create snapshot of a common table\n *\n * @param handle\tThe plugin handle\n * @param table\t\tThe table to snapshot\n * @param id\t\tThe snapshot id\n * @return\t\t-1 on error, >= 0 on success\n *\n * The newly created table has the following name:\n * table_id\n */\nint plugin_create_table_snapshot(PLUGIN_HANDLE handle,\n\t\t\t\t char *table,\n\t\t\t\t char *id)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Snapshot \" + string(table);\n\tconnection->setUsage(usage);\n#endif\n        int result = connection->create_table_snapshot(std::string(table),\n\t\t\t\t\t\t\tstd::string(id));\n        manager->release(connection);\n        return result;\n}\n\n/**\n * Load a snapshot of a common table\n *\n * @param handle\tThe plugin handle\n * @param table\t\tThe table to fill from a given snapshot\n * @param id\t\tThe table snapshot id\n * @return\t\t-1 on error, >= 0 on success\n */\nint plugin_load_table_snapshot(PLUGIN_HANDLE handle,\n\t\t\t\tchar *table,\n\t\t\t\tchar *id)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Load snapshot \" + string(table);\n\tconnection->setUsage(usage);\n#endif\n        int result = connection->load_table_snapshot(std::string(table),\n\t\t\t\t\t\t     std::string(id));\n        manager->release(connection);\n        return result;\n}\n\n/**\n * Delete a snapshot of a common table\n *\n * @param handle\tThe plugin handle\n * @param table\t\tThe table whose snapshot will be removed\n * @param id\t\tThe snapshot id\n * @return\t\t-1 on 
error, >= 0 on success\n *\n */\nint plugin_delete_table_snapshot(PLUGIN_HANDLE handle,\n\t\t\t\t char *table,\n\t\t\t\t char *id)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Delete snapshot \" + string(table);\n\tconnection->setUsage(usage);\n#endif\n        int result = connection->delete_table_snapshot(std::string(table),\n\t\t\t\t\t\t\tstd::string(id));\n        manager->release(connection);\n        return result;\n}\n\n/**\n * Get all snapshots of a given common table\n *\n * @param handle\tThe plugin handle\n * @param table\t\tThe table name \n * @return\t\tList of snapshots (even empty list) or NULL for errors\n *\n */\nconst char* plugin_get_table_snapshots(PLUGIN_HANDLE handle,\n\t\t\t\tchar *table)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string results;\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Get table snapshots \" + string(table);\n\tconnection->setUsage(usage);\n#endif\n\tbool rval = connection->get_table_snapshots(std::string(table), results);\n\tmanager->release(connection);\n\n\treturn rval ? 
strdup(results.c_str()) : NULL;\n}\n\n\n/**\n * Update or create a schema\n *\n * @param handle        The plugin handle\n * @param definition    The schema definition\n * @return              -1 on error, >= 0 on success\n *\n */\nint plugin_createSchema(PLUGIN_HANDLE handle, char *definition)\n{\n\tConnectionManager *manager = (ConnectionManager *)handle;\n\tConnection        *connection = manager->allocate();\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Create schema\";\n\tconnection->setUsage(usage);\n#endif\n\tint result = connection->createSchema(std::string(definition));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Purge given readings asset or all readings from the buffer\n */\nunsigned int plugin_reading_purge_asset(PLUGIN_HANDLE handle, char *asset)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n#if TRACK_CONNECTION_USER\n\tstring usage = \"Purge asset \";\n\tconnection->setUsage(usage);\n#endif\n\tunsigned int deleted = connection->purgeReadingsAsset(asset);\n\tmanager->release(connection);\n\treturn deleted;\n}\n\n};\n\n"
  },
  {
    "path": "C/plugins/storage/sqlite/schema/include/schema.h",
    "content": "#ifndef _SCHEMAS_H\n#define _SCHEMAS_H\n/*\n * Fledge utility functions for handling strings\n *\n * Copyright (c) 2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <sql_buffer.h>\n#include <string>\n#include <rapidjson/document.h>\n#include <sqlite3.h>\n#include <logger.h>\n#include <map>\n\n#define DDL_BACKOFF\t50\t// Microseconds to backoff between DDL retries\n\n/**\n * Representation of an extension schema\n *\n * Each active schema has an instance of the class that is managed by the\n * SchemaManager class. This is either created when Fledge restarts and reads the schema\n * definition from the database or when a service requests an extension schema to be\n * created.\n *\n * The class is responsible for the creation and update of the extension schemas. The \n * name, service, version and definition of each schema is written to a control table\n * in the Fledge schema to allow for the versions to be tracked and also to allow the\n * extension schemas to be attached.\n */\nclass Schema {\n\tpublic:\n\t\tSchema(const std::string& name, const std::string& service, int version,\n\t\t\t\tconst std::string& definition);\n\t\tSchema(sqlite3 *db, const rapidjson::Document& doc);\n\t\t~Schema();\n\t\tint\t\tgetVersion() { return m_version; };\n\t\tstd::string\tgetService() { return m_service; };\n\t\tbool\t\tupgrade(sqlite3 *db, const rapidjson::Document& doc, const std::string& definition);\n\t\tbool\t\tattach(sqlite3 *db);\n\tprivate:\n\t\tstd::string\tm_name;\n\t\tstd::string\tm_service;\n\t\tint\t\tm_version;\n\t\tstd::string\tm_definition;\n\t\tint\t\tm_indexNo;\n\t\tstd::string\tm_schemaPath;\n\t\tstd::map<sqlite3 *, bool>\n\t\t\t\tm_attached;\n\tprivate:\n\t\tbool\t\tcreateTable(sqlite3 *db, const rapidjson::Value& table);\n\t\tbool\t\tcreateIndex(sqlite3 *db, const std::string& table,\n\t\t\t\t\t\tconst rapidjson::Value& index);\n\t\tbool\t\thasTable(const rapidjson::Document& 
doc, const std::string& table);\n\t\tbool\t\thasColumn(const rapidjson::Document& doc, const std::string& table,\n\t\t\t\t\t\tconst std::string& column);\n\t\tbool\t\taddTableColumn(sqlite3 *db, const std::string& table,\n\t\t\t\t\t\tconst rapidjson::Value& column);\n\t\tbool\t\texecuteDDL(sqlite3 *db, SQLBuffer& sql);\n\n\t\tbool\t\thasString(const rapidjson::Value& value, const char *key)\n\t\t\t\t{\n\t\t\t\t\treturn (value.HasMember(key) && value[key].IsString());\n\t\t\t\t};\n\t\tbool\t\thasInt(const rapidjson::Value& value, const char *key)\n\t\t\t\t{\n\t\t\t\t\treturn (value.HasMember(key) && value[key].IsInt());\n\t\t\t\t};\n\t\tbool\t\thasArray(const rapidjson::Value& value, const char *key)\n\t\t\t\t{\n\t\t\t\t\treturn (value.HasMember(key) && value[key].IsArray());\n\t\t\t\t};\n\t\tbool\t\tcreateDatabase();\n\t\tvoid\t\tsetDatabasePath();\n};\n\n/**\n * The singleton SchemaManager class used to interact with\n * the extension schemas created by various extension services.\n */\nclass SchemaManager {\n\tpublic:\n\t\tstatic SchemaManager\t\t*getInstance();\n\t\tvoid\t\t\t\tload(sqlite3 *db);\n\t\tbool\t\t\t\tcreate(sqlite3 *db, const std::string& definition);\n\t\tbool\t\t\t\texists(sqlite3 *db, const std::string& schema);\n\tpublic:\n\t\tstatic SchemaManager\t\t*instance;\n\tprivate:\n\t\tSchemaManager();\n\tprivate:\n\t\tLogger\t\t\t\t*m_logger;\n\t\tstd::map<std::string, Schema *>\tm_schema;\n\t\tbool\t\t\t\tm_loaded;\n};\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlite/schema/schema.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <schema.h>\n#include \"rapidjson/error/error.h\"\n#include \"rapidjson/error/en.h\"\n#include <unistd.h>\n#include <connection.h>\n#ifndef DB_CONFIGURATION\n#define DB_CONFIGURATION \"PRAGMA busy_timeout = 5000; PRAGMA cache_size = -4000; PRAGMA journal_mode = WAL; PRAGMA secure_delete = off; PRAGMA journal_size_limit = 4096000;\"\n#endif\n\nusing namespace std;\nusing namespace rapidjson;\n\nSchemaManager *SchemaManager::instance = 0;\n\n/**\n * Fetch the singleton instance of the SchemaManager\n *\n * @return SchemaManager*\tThe singleton SchemaManager instance\n */\nSchemaManager *SchemaManager::getInstance()\n{\n\tif (!instance)\n\t\tinstance = new SchemaManager();\n\treturn instance;\n}\n\n/**\n * Constructor for the singleton SchemaManager\n */\nSchemaManager::SchemaManager() : m_loaded(false)\n{\n\tm_logger = Logger::getLogger();\n}\n\n/**\n * Load the existing Schema from the table of supported schemas\n *\n * @param db\tThe database connection to use to load the schema information\n */\nvoid SchemaManager::load(sqlite3 *db)\n{\n\tconst char *sql = \"SELECT name, service, version, definition FROM fledge.service_schema;\";\n\tsqlite3_stmt *stmt;\n\tint rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);\n\n\tif (rc != SQLITE_OK)\n\t{\n\t\tm_logger->error(\"Failed to retrieve list of schemas\");\n\t\treturn;\n\t}\n\n\tif (stmt)\n\t{\n\t\twhile ((rc = sqlite3_step(stmt)) == SQLITE_ROW)\n\t\t{\n\t\t\tstring name = (const char *)sqlite3_column_text(stmt, 0);\n\t\t\tstring service = (const char *)sqlite3_column_text(stmt, 1);\n\t\t\tint version = sqlite3_column_int(stmt, 2);\n\t\t\tstring definition = (const char *)sqlite3_column_text(stmt, 3);\n\n\t\t\tm_schema.insert(pair<string, Schema *>(name,\n\t\t\t\t\t\tnew Schema(name, service, version, 
definition)));\n\t\t}\n\t\tsqlite3_finalize(stmt);\n\t}\n\tm_loaded = true;\n}\n\n/**\n * Create a schema. This may create a completely new schema if it does not\n * already exist, update an existing schema if the version is different from\n * the one that already exists or do nothing if the schema exists and the version\n * of the schema is the same.\n *\n * @param db\tA database connection to use for sqlite3 interactions\n * @param definition\tThe schema definition\n * @return bool\tReturns true if the schema was created, updated or no action is required\n */\nbool SchemaManager::create(sqlite3 *db, const std::string& definition)\n{\nDocument doc;\n\n\tdoc.Parse(definition.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tm_logger->error(\"Failed to parse extension schema definition '%s' at offset %d: %s\",\n\t\t\t\tdefinition.c_str(), (int)doc.GetErrorOffset(),\n\t\t\t\tGetParseError_En(doc.GetParseError()));\n\t\treturn false;\n\t}\n\n\tstring name;\n\tif (doc.HasMember(\"schema\") && doc[\"schema\"].IsString())\n\t{\n\t\tname = doc[\"schema\"].GetString();\n\t}\n\telse\n\t{\n\t\tm_logger->error(\"Extension schema is missing the schema name in the definition\");\n\t\treturn false;\n\t}\n\tif (!m_loaded)\n\t{\n\t\tload(db);\n\t}\n\tauto it = m_schema.find(name);\n\tif (it == m_schema.end())\n\t{\n\t\tSchema *schema = new Schema(db, doc);\n\t\tm_schema.insert(pair<string, Schema *>(name, schema));\n\t}\n\telse\n\t{\n\t\tint version;\n\t\tif (doc.HasMember(\"version\") && doc[\"version\"].IsInt())\n\t\t{\n\t\t\tversion = doc[\"version\"].GetInt();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->error(\"Extension schema %s is missing a version number\", name.c_str());\n\t\t\treturn false;\n\t\t}\n\t\tif (it->second->getVersion() != version)\n\t\t{\n\t\t\treturn it->second->upgrade(db, doc, definition);\n\t\t}\n\t}\n\t// Schema created or already at the current version\n\treturn true;\n}\n\n/**\n * Check if a named schema exists, loading the schemas if need be. 
As\n * a side effect the schema will be attached to the database connection.\n *\n * @param db\t\tThe database connection to use\n * @param schema\tThe schema to check the existence of\n * @return bool\t\tTrue if the schema exists\n */\nbool SchemaManager::exists(sqlite3 *db, const string& schema)\n{\n\tif (schema.compare(\"fledge\") == 0)\t// The fledge schema always exists\n\t\treturn true;\n\n\tif (!m_loaded)\n\t{\n\t\tload(db);\n\t}\n\tauto it = m_schema.find(schema);\n\tif (it == m_schema.end())\n\t{\n\t\treturn false;\n\t}\n\treturn it->second->attach(db);\n}\n\n\n/**\n * Constructor for a schema. This is the case in which a schema is being loaded from\n * the database of schemas rather than because a service has requested a schema to be\n * created.\n *\n * Schemas will be loaded from the database before any schema creation request is\n * made by the services and will be used to attach the schemas and also to load the\n * baseline schema such that when the service requests a schema we can see if it already\n * exists and has the same version number.\n *\n * @param name\tThe name of the schema to create\n * @param service\tThe service requesting the schema\n * @param version\tThe version of the schema\n * @param definition\tThe JSON definition of the schema\n */\nSchema::Schema(const string& name, const string& service, int version, const string& definition) :\n\tm_name(name), m_service(service), m_version(version), m_definition(definition), m_indexNo(0)\n{\n\tsetDatabasePath();\n}\n\n/**\n * Constructor for a schema. This is the case when a service has requested a schema\n * that does not already exist. 
We must create a new schema from scratch.\n *\n * @param db\tSQLite3 database handle\n * @param doc\tJSON definition of the schema to create\n */\nSchema::Schema(sqlite3 *db, const rapidjson::Document& doc) : m_indexNo(0)\n{\n\tLogger *logger = Logger::getLogger();\n\n\tif (hasString(doc, \"schema\"))\n\t{\n\t\tm_name = doc[\"schema\"].GetString();\n\t}\n\telse\n\t{\n\t\tlogger->error(\"Schema definition is missing a schema name property\");\n\t\tthrow runtime_error(\"Schema missing a name property\");\n\t}\n\tif (hasString(doc, \"service\"))\n\t{\n\t\tm_service = doc[\"service\"].GetString();\n\t}\n\telse\n\t{\n\t\tlogger->error(\"Schema definition %s is missing a service property\", m_name.c_str());\n\t\tthrow runtime_error(\"Schema missing a service property\");\n\t}\n\tif (hasInt(doc, \"version\"))\n\t{\n\t\tm_version = doc[\"version\"].GetInt();\n\t}\n\telse\n\t{\n\t\tlogger->error(\"Schema definition %s for service %s is missing a version property\",\n\t\t\t\tm_name.c_str(), m_service.c_str());\n\t\tthrow runtime_error(\"Schema missing a version property\");\n\t}\n\n\t// Re-serialise the parsed document so that the schema definition can be\n\t// stored in the fledge.service_schema dictionary below\n\trapidjson::StringBuffer buffer;\n\trapidjson::Writer<rapidjson::StringBuffer> writer(buffer);\n\tdoc.Accept(writer);\n\tm_definition = buffer.GetString();\n\n\tsetDatabasePath();\n\n\t// Create the Schema\n\tcreateDatabase();\n\n\t// Attach the database file to this database connection\n\tattach(db);\n\t\n\t// Create the tables in the schema\n\tif (hasArray(doc, \"tables\"))\n\t{\n\t\tconst Value& tables = doc[\"tables\"];\n\t\tfor (auto& table : tables.GetArray())\n\t\t{\n\t\t\tcreateTable(db, table);\n\t\t}\n\t}\n\tSQLBuffer sql;\n\tsql.append(\"INSERT INTO fledge.service_schema ( name, service, version, definition ) VALUES (\");\n\tsql.quote(m_name);\n\tsql.append(',');\n\tsql.quote(m_service);\n\tsql.append(',');\n\tsql.append(m_version);\n\tsql.append(',');\n\tsql.quote(m_definition);\n\tsql.append(\");\");\n\tif (!executeDDL(db, sql))\n\t{\n\t\tlogger->error(\"Failed to add schema to dictionary\");\n\t}\n}\n\n/**\n * Create a table within the extension schema\n *\n * @param db\tSQLite3 database handle\n * @param table\tJSON representation of the 
table definition\n * @return bool\tTrue if the table was created\n */\nbool Schema::createTable(sqlite3 *db, const rapidjson::Value& table)\n{\n\tLogger *logger = Logger::getLogger();\n\tif (!hasString(table, \"name\"))\n\t{\n\t\tlogger->error(\"Table in schema %s is missing a name definition\", m_name.c_str());\n\t\treturn false;\n\t}\n\tstring name = table[\"name\"].GetString();\n\tif (!hasArray(table, \"columns\"))\n\t{\n\t\tlogger->error(\"Table %s in schema %s has no columns defined\", name.c_str(), m_name.c_str());\n\t\treturn false;\n\t}\n\tconst Value& columns = table[\"columns\"];\n\n\tSQLBuffer\tsql;\n\tsql.append(\"CREATE TABLE \");\n\tsql.append(m_name);\n\tsql.append('.');\n\tsql.append(name);\n\tsql.append(\" (\");\n\tbool first = true;\n\tfor (auto& column : columns.GetArray())\n\t{\n\t\tif (first)\n\t\t\tfirst = false;\n\t\telse\n\t\t\tsql.append(',');\n\t\tif (!hasString(column, \"column\"))\n\t\t{\n\t\t\tlogger->error(\"Table %s in schema %s is missing a column name definition\",\n\t\t\t\t\tname.c_str(),m_name.c_str());\n\t\t\treturn false;\n\t\t}\n\t\tstring col = column[\"column\"].GetString();\n\t\tif (!hasString(column, \"type\"))\n\t\t{\n\t\t\tlogger->error(\"Column %s in table %s in schema %s is missing a column type definition\",\n\t\t\t\t\tcol.c_str(), name.c_str(), m_name.c_str());\n\t\t\treturn false;\n\t\t}\n\t\tstring type = column[\"type\"].GetString();\n\t\t\n\t\tsql.append(col);\n\t\tif (type.compare(\"integer\") == 0)\n\t\t{\n\t\t\tsql.append(\" INTEGER\");\n\t\t}\n\t\telse if (type.compare(\"varchar\") == 0)\n\t\t{\n\t\t\tif (!hasInt(column, \"size\"))\n\t\t\t{\n\t\t\t\tlogger->error(\"Varchar column %s in table %s in schema %s is missing a size definition\",\n\t\t\t\t\t\tcol.c_str(), name.c_str(), m_name.c_str());\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tint size = column[\"size\"].GetInt();\n\t\t\tsql.append(\" CHARACTER VARYING(\");\n\t\t\tsql.append(size);\n\t\t\tsql.append(')');\n\t\t}\n\t\telse if (type.compare(\"double\") == 0)\n\t\t{\n\t\t\tsql.append(\" REAL\");\n\t\t}\n\t\telse if (type.compare(\"sequence\") == 0)\n\t\t{\n\t\t\t// SQLite only allows AUTOINCREMENT on an INTEGER PRIMARY KEY column\n\t\t\tsql.append(\" INTEGER PRIMARY KEY AUTOINCREMENT\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tlogger->error(\"Type %s is not supported in 
column %s of table %s in schema %s\",\n\t\t\t\t\ttype.c_str(), col.c_str(), name.c_str(), m_name.c_str());\n\t\t\treturn false;\n\t\t}\n\t\tif (column.HasMember(\"key\") && type.compare(\"sequence\") != 0)\n\t\t{\n\t\t\tsql.append(\" PRIMARY KEY\");\n\t\t}\n\t}\n\tsql.append(\");\");\n\n\t// Execute the SQL statement\n\tif (!executeDDL(db, sql))\n\t{\n\t\treturn false;\n\t}\n\t\n\t// Now create any indexes on the table\n\tif (hasArray(table, \"indexes\"))\n\t{\n\t\tconst Value& indexes = table[\"indexes\"];\n\t\tfor (auto& index : indexes.GetArray())\n\t\t{\n\t\t\tif (!createIndex(db, name, index))\n\t\t\t{\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t}\n\treturn true;\n}\n\n/**\n * Create an index on a table\n *\n * @param db\tSQLite database connection\n * @param table\tThe name of the table the index is created on\n * @param index\tJSON definition of the index\n * @return bool\tTrue if the index was created\n */\nbool Schema::createIndex(sqlite3 *db, const std::string& table, const rapidjson::Value& index)\n{\n\tif (!index.IsArray())\n\t{\n\t\tLogger::getLogger()->error(\"Malformed index for table %s in schema %s\",\n\t\t\t\ttable.c_str(), m_name.c_str());\n\t\treturn false;\n\t}\n\tSQLBuffer sql;\n\tsql.append(\"CREATE INDEX \");\n\tsql.append(m_name);\n\tsql.append(\".\");\n\tsql.append(table);\n\tsql.append(\"_idx\");\n\tsql.append(m_indexNo++);\n\tsql.append(\" ON \");\n\tsql.append(table);\n\tsql.append('(');\n\tbool first = true;\n\tfor (auto& col : index.GetArray())\n\t{\n\t\tif (col.IsString())\n\t\t{\n\t\t\tif (first)\n\t\t\t\tfirst = false;\n\t\t\telse\n\t\t\t\tsql.append(',');\n\t\t\tsql.append(col.GetString());\n\t\t}\n\t}\n\tsql.append(\");\");\n\n\t// Execute the SQL statement\n\treturn executeDDL(db, sql);\n}\n\n/**\n * Upgrade an existing schema. 
The upgrade process is limited and will only do the\n * following operations: add a new table, drop a table, add a new column to a table,\n * drop a column from a table, add a new index or drop an index.\n *\n * @param db\tThe SQLite3 database connection\n * @param doc\tThe pre-parsed version of the schema definition\n * @param definition\tThe schema definition for the new version of the schema as JSON\n * @return bool\tTrue if the upgrade succeeded.\n */\nbool Schema::upgrade(sqlite3 *db, const Document& doc, const string& definition)\n{\n\tLogger *logger = Logger::getLogger();\n\n\tDocument onDisk;\n\tonDisk.Parse(m_definition.c_str());\n\n\tlogger->debug(\"Schema update: %s: Phase 1 - adding any new tables\", m_name.c_str());\n\t// Iterate over the new schema tables and find any not in the existing schema\n\tif (hasArray(doc, \"tables\"))\n\t{\n\t\tconst Value& newTables = doc[\"tables\"];\n\t\tfor (auto& newTable : newTables.GetArray())\n\t\t{\n\t\t\tif (hasString(newTable, \"name\"))\n\t\t\t{\n\t\t\t\tstring name = newTable[\"name\"].GetString();\n\t\t\t\tif (!hasTable(onDisk, name))\n\t\t\t\t{\n\t\t\t\t\tlogger->debug(\"Schema Upgrade of %s create table %s\", m_name.c_str(), name.c_str());\n\t\t\t\t\tif (!createTable(db, newTable))\n\t\t\t\t\t{\n\t\t\t\t\t\tlogger->error(\"Unable to create new table during schema upgrade for schema %s\", m_name.c_str());\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// Now look for tables that need to be dropped\n\tlogger->debug(\"Schema update: %s: Phase 2 - deleting any obsolete tables\", m_name.c_str());\n\tif (hasArray(onDisk, \"tables\"))\n\t{\n\t\tconst Value& oldTables = onDisk[\"tables\"];\n\t\tfor (auto& oldTable : oldTables.GetArray())\n\t\t{\n\t\t\tif (hasString(oldTable, \"name\"))\n\t\t\t{\n\t\t\t\tstring name = oldTable[\"name\"].GetString();\n\t\t\t\tif (!hasTable(doc, name))\n\t\t\t\t{\n\t\t\t\t\tlogger->debug(\"Schema Upgrade of %s drop table %s\", m_name.c_str(), 
name.c_str());\n\t\t\t\t\tSQLBuffer sql;\n\t\t\t\t\tsql.append(\"DROP TABLE IF EXISTS \");\n\t\t\t\t\tsql.append(m_name);\n\t\t\t\t\tsql.append('.');\n\t\t\t\t\tsql.append(name);\n\t\t\t\t\tsql.append(';');\n\t\t\t\t\tif (!executeDDL(db, sql))\n\t\t\t\t\t{\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tlogger->debug(\"Schema update: %s: Phase 3 - add any new columns to tables\", m_name.c_str());\n\t// Iterate over the new schema tables in both and then check for new columns\n\tif (hasArray(doc, \"tables\"))\n\t{\n\t\tconst Value& newTables = doc[\"tables\"];\n\t\tfor (auto& newTable : newTables.GetArray())\n\t\t{\n\t\t\tif (hasString(newTable, \"name\"))\n\t\t\t{\n\t\t\t\tstring name = newTable[\"name\"].GetString();\n\t\t\t\tif (hasTable(onDisk, name))\n\t\t\t\t{\n\t\t\t\t\tif (hasArray(newTable, \"columns\"))\n\t\t\t\t\t{\n\t\t\t\t\t\tconst Value& columns = newTable[\"columns\"];\n\t\t\t\t\t\tfor (auto& column : columns.GetArray())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (hasString(column, \"column\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tstring col = column[\"column\"].GetString();\n\t\t\t\t\t\t\t\tif (!hasColumn(onDisk, name, col))\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tif (!addTableColumn(db, name, column))\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tlogger->debug(\"Schema update: %s: Phase 4 - remove any obsolete columns from tables\", m_name.c_str());\n\t// Iterate over the on disk tables looking for tables that exist in the new schema\n\t// and then look for columns that are on disk but not in the new schema\n\tif (hasArray(onDisk, \"tables\"))\n\t{\n\t\tconst Value& oldTables = onDisk[\"tables\"];\n\t\tfor (auto& oldTable : oldTables.GetArray())\n\t\t{\n\t\t\tif (hasString(oldTable, \"name\"))\n\t\t\t{\n\t\t\t\tstring name = oldTable[\"name\"].GetString();\n\t\t\t\tif (hasTable(doc, 
name))\n\t\t\t\t{\n\t\t\t\t\tif (hasArray(oldTable, \"columns\"))\n\t\t\t\t\t{\n\t\t\t\t\t\tconst Value& columns = oldTable[\"columns\"];\n\t\t\t\t\t\tfor (auto& column : columns.GetArray())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (hasString(column, \"column\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tstring col = column[\"column\"].GetString();\n\t\t\t\t\t\t\t\tif (!hasColumn(doc, name, col))\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tlogger->debug(\"Schema Upgrade of %s drop column %s from table %s\", m_name.c_str(), col.c_str(), name.c_str());\n\t\t\t\t\t\t\t\t\tSQLBuffer sql;\n\t\t\t\t\t\t\t\t\tsql.append(\"ALTER TABLE \");\n\t\t\t\t\t\t\t\t\tsql.append(m_name);\n\t\t\t\t\t\t\t\t\tsql.append('.');\n\t\t\t\t\t\t\t\t\tsql.append(name);\n\t\t\t\t\t\t\t\t\tsql.append(\" DROP COLUMN \");\n\t\t\t\t\t\t\t\t\tsql.append(col);\n\t\t\t\t\t\t\t\t\tsql.append(';');\n\t\t\t\t\t\t\t\t\tif (!executeDDL(db, sql))\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tlogger->debug(\"Schema update: %s: Phase 5 - add any new indexes\", m_name.c_str());\n\t// Iterate over the tables present in both schemas and check for new indexes\n\tif (hasArray(doc, \"tables\"))\n\t{\n\t\tconst Value& newTables = doc[\"tables\"];\n\t\tfor (auto& newTable : newTables.GetArray())\n\t\t{\n\t\t\tif (hasString(newTable, \"name\"))\n\t\t\t{\n\t\t\t\tstring name = newTable[\"name\"].GetString();\n\t\t\t\tif (hasTable(onDisk, name))\n\t\t\t\t{\n\t\t\t\t\tif (hasArray(newTable, \"indexes\"))\n\t\t\t\t\t{\n\t\t\t\t\t\tconst Value& indexes = newTable[\"indexes\"];\n\t\t\t\t\t\tfor (auto& index : indexes.GetArray())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (hasArray(index, \"index\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t// TODO Compare indexes\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tlogger->debug(\"Schema update: %s: Phase 6 - remove any obsolete indexes\", 
m_name.c_str());\n\n\n\tm_version = doc[\"version\"].GetInt();\t// Safe as we would not get here if version was missing\n\tm_definition = definition;\n\n\tlogger->debug(\"Schema update: %s: Phase 7 - update schema table\", m_name.c_str());\n\tSQLBuffer sql;\n\tsql.append(\"UPDATE fledge.service_schema SET version = \");\n\tsql.append(m_version);\n\tsql.append(\", definition = \");\n\tsql.quote(m_definition);\n\tsql.append(\" WHERE name = \");\n\tsql.quote(m_name);\n\tsql.append(\" AND service = \");\n\tsql.quote(m_service);\n\tsql.append(';');\n\tif (!executeDDL(db, sql))\n\t\treturn false;\n\n\treturn true;\n}\n\n/**\n * Add a new column to an existing table within the schema\n *\n * @param db\tThe SQLite database handle\n * @param table\tThe name of the table we are adding the column to\n * @param column\tThe JSON definition of the column\n * @return bool\tTrue if the column was added to the table\n */\nbool Schema::addTableColumn(sqlite3 *db, const string& table, const Value& column)\n{\n\tLogger *logger = Logger::getLogger();\n\tSQLBuffer sql;\n\tsql.append(\"ALTER TABLE \");\n\tsql.append(m_name);\n\tsql.append('.');\n\tsql.append(table);\n\tsql.append(\" ADD COLUMN \");\n\tif (!hasString(column, \"column\"))\n\t{\n\t\tlogger->error(\"Schema update %s, missing name for column in table %s\",\n\t\t\t\tm_name.c_str(), table.c_str());\n\t\treturn false;\n\t}\n\tstring colName = column[\"column\"].GetString();\n\tsql.append(colName);\n\tif (!hasString(column, \"type\"))\n\t{\n\t\tlogger->error(\"Schema update %s, missing type for column %s in table %s\",\n\t\t\t\tm_name.c_str(), colName.c_str(), table.c_str());\n\t\treturn false;\n\t}\n\tstring type = column[\"type\"].GetString();\n\tif (type.compare(\"integer\") == 0)\n\t{\n\t\tsql.append(\" INTEGER\");\n\t}\n\telse if (type.compare(\"varchar\") == 0)\n\t{\n\t\tif (!hasInt(column, \"size\"))\n\t\t{\n\t\t\tlogger->error(\"Schema update %s, missing size for varchar column %s in table %s\",\n\t\t\t\t\tm_name.c_str(), colName.c_str(), table.c_str());\n\t\t\treturn false;\n\t\t}\n\t\tsql.append(\" CHARACTER VARYING(\");\n\t\tsql.append(column[\"size\"].GetInt());\n\t\tsql.append(')');\n\t}\n\telse if (type.compare(\"double\") == 0)\n\t{\n\t\tsql.append(\" REAL\");\n\t}\n\telse 
if (type.compare(\"sequence\") == 0)\n\t{\n\t\t// SQLite only allows AUTOINCREMENT on an INTEGER PRIMARY KEY column\n\t\tsql.append(\" INTEGER PRIMARY KEY AUTOINCREMENT\");\n\t}\n\telse\n\t{\n\t\tlogger->error(\"Update schema type %s is not supported for column %s of table %s in schema %s\",\n\t\t\t\ttype.c_str(), colName.c_str(), table.c_str(), m_name.c_str());\n\t\treturn false;\n\t}\n\tsql.append(';');\n\treturn executeDDL(db, sql);\n}\n\n/**\n * Execute a DDL statement against the SQLite database\n *\n * @param db\tThe SQLite database handle\n * @param sql\tThe SQLBuffer to execute\n * @return bool\tTrue if the statement succeeded\n */\nbool Schema::executeDDL(sqlite3 *db, SQLBuffer& sql)\n{\n\tconst char *ddl = sql.coalesce();\n\tLogger *logger = Logger::getLogger();\n\tlogger->debug(\"Schema %s: Execute DDL %s\", m_name.c_str(), ddl);\n\n\tchar *errMsg = NULL;\n\tint rc, retries = 0;\n\t// Retry the DDL with an increasing backoff while the database is busy or locked\n\twhile (((rc = sqlite3_exec(db, ddl, NULL, NULL, &errMsg)) == SQLITE_BUSY || rc == SQLITE_LOCKED)\n\t\t\t&& ++retries < 10)\n\t{\n\t\tif (errMsg)\n\t\t{\n\t\t\tsqlite3_free(errMsg);\n\t\t\terrMsg = NULL;\n\t\t}\n\t\tint interval = retries * DDL_BACKOFF;\n\t\tusleep(interval);\n\t}\n\tif (rc != SQLITE_OK)\n\t{\n\t\tlogger->error(\"Schema %s, failed to execute DDL %s, %s\", m_name.c_str(), ddl,\n\t\t\t\terrMsg ? 
errMsg : \"no reason available\");\n\t\tif (errMsg)\n\t\t{\n\t\t\tsqlite3_free(errMsg);\n\t\t}\n\t\tdelete[] ddl;\n\t\treturn false;\n\t}\n\tdelete[] ddl;\n\n\treturn true;\n}\n\n/**\n * Look in the JSON definition of a schema and check for the existence of a table\n *\n * @param doc\t\tThe JSON document that defines the schema\n * @param tableName\tThe name of the table to look for\n * @return bool\t\tTrue if the table exists in the schema\n */\nbool Schema::hasTable(const Document& doc, const string& tableName)\n{\n\tif (!hasArray(doc, \"tables\"))\n\t{\n\t\treturn false;\n\t}\n\tconst Value& tables = doc[\"tables\"];\n\tfor (auto& table : tables.GetArray())\n\t{\n\t\tif (hasString(table, \"name\"))\n\t\t{\n\t\t\tstring name = table[\"name\"].GetString();\n\t\t\tif (name.compare(tableName) == 0)\n\t\t\t{\n\t\t\t\treturn true;\n\t\t\t}\n\t\t}\n\t}\n\treturn false;\n}\n\n/**\n * Look in the JSON definition of a schema and check for the existence of a column within a table\n *\n * @param doc\t\tThe JSON document that defines the schema\n * @param tableName\tThe name of the table to look for\n * @param columnName\tThe name of the column to look for\n * @return bool\t\tTrue if the column exists in the table\n */\nbool Schema::hasColumn(const Document& doc, const string& tableName, const string& columnName)\n{\n\tif (!hasArray(doc, \"tables\"))\n\t{\n\t\treturn false;\n\t}\n\tconst Value& tables = doc[\"tables\"];\n\tfor (auto& table : tables.GetArray())\n\t{\n\t\tif (hasString(table, \"name\"))\n\t\t{\n\t\t\tstring name = table[\"name\"].GetString();\n\t\t\tif (name.compare(tableName) == 0)\n\t\t\t{\n\t\t\t\tif (hasArray(table, \"columns\"))\n\t\t\t\t{\n\t\t\t\t\tconst Value& columns = table[\"columns\"];\n\t\t\t\t\tfor (auto& column : columns.GetArray())\n\t\t\t\t\t{\n\t\t\t\t\t\tif (hasString(column, \"column\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstring col = column[\"column\"].GetString();\n\t\t\t\t\t\t\tif (col.compare(columnName) == 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\treturn 
true;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn false;\n}\n\n/**\n * Setup the path of the schema database file\n */\nvoid Schema::setDatabasePath()\n{\n\tchar *data = getenv(\"FLEDGE_DATA\");\n\tif (!data)\n\t{\n\t\tconst char *root = getenv(\"FLEDGE_ROOT\");\n\t\t// Guard against FLEDGE_ROOT also being unset; default to the current directory\n\t\tm_schemaPath = root ? root : \".\";\n\t\tm_schemaPath += \"/data\";\n\t}\n\telse\n\t{\n\t\tm_schemaPath = data;\n\t}\n\tm_schemaPath += \"/\";\n\tm_schemaPath += m_name;\n\tm_schemaPath += \".db\";\n}\n\n/**\n * Create the SQLite database and enable the WAL mode for the database\n *\n * @return bool\tReturns true on success\n */\nbool Schema::createDatabase()\n{\n\n\tsqlite3\t*dbHandle;\n\n\tint rc = sqlite3_open(m_schemaPath.c_str(), &dbHandle);\n\tif (rc != SQLITE_OK)\n\t{\n\t\tLogger::getLogger()->error(\"Failed to create database for schema %s\", m_name.c_str());\n\t\tsqlite3_close(dbHandle);\n\t\treturn false;\n\t}\n\tif ((rc = sqlite3_exec(dbHandle, DB_CONFIGURATION, NULL, NULL, NULL)) != SQLITE_OK)\n\t{\n\t\tLogger::getLogger()->error(\"Unable to set database configuration for schema %s\", m_name.c_str());\n\t\tsqlite3_close(dbHandle);\n\t\treturn false;\n\t}\n\tsqlite3_close(dbHandle);\n\treturn true;\n}\n\n/**\n * Attach the schema to the database handle if not already attached\n *\n * @param db\tThe database handle to attach the schema to\n * @return bool\tTrue if the schema was attached\n */\nbool Schema::attach(sqlite3 *db)\n{\n\tif (m_attached.find(db) != m_attached.end())\n\t{\n\t\t// Already attached\n\t\treturn true;\n\t}\n\tSQLBuffer sql;\n\tsql.append(\"ATTACH DATABASE '\");\n\tsql.append(m_schemaPath);\n\tsql.append(\"' AS \");\n\tsql.append(m_name);\n\tsql.append(';');\n\n\tif (!executeDDL(db, sql))\n\t{\n\t\treturn false;\n\t}\n\tm_attached[db] = true;\n\treturn true;\n}\n"
  },
  {
    "path": "C/plugins/storage/sqlitelb/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6.0)\n\nproject(sqlitelb)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\nset(STORAGE_COMMON_LIB storage-common-lib)\n\n# Path of compiled sqlite3 file: /usr/local/bin\nset(FLEDGE_SQLITE3_LIBS \"/usr/local/bin\" CACHE INTERNAL \"\")\n\n# Find source files\nfile(GLOB SOURCES ./common/*.cpp ../sqlite/schema/*.cpp *.cpp)\n\n# Include header files\ninclude_directories(./include)\ninclude_directories(./common/include)\ninclude_directories(../sqlite/schema/include)\ninclude_directories(../sqlite/common/include)\ninclude_directories(../../../common/include)\ninclude_directories(../../../services/common/include)\ninclude_directories(../common/include)\ninclude_directories(../../../thirdparty/rapidjson/include)\nlink_directories(${PROJECT_BINARY_DIR}/../../../lib)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\ntarget_link_libraries(${PROJECT_NAME} ${STORAGE_COMMON_LIB})\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n\n# Check Sqlite3 required version\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} \"${CMAKE_CURRENT_SOURCE_DIR}\")\nfind_package(sqlite3)\n\n# Commented out as it is the persistent storage\n#add_definitions(-DSQLITE_SPLIT_READINGS=1)\nadd_definitions(-DPLUGIN_LOG_NAME=\"SQLite 3 lb\")\n\n# Use static SQLite3 library\nif(EXISTS ${FLEDGE_SQLITE3_LIBS})\n\tinclude_directories(${FLEDGE_SQLITE3_LIBS})\n\ttarget_link_libraries(${PROJECT_NAME} -L\"${FLEDGE_SQLITE3_LIBS}/.libs\" -lsqlite3)\nelse()\n\ttarget_link_libraries(${PROJECT_NAME} -lsqlite3)\nendif()\n\n# Install SQLite3 command line with static library\nif(EXISTS ${FLEDGE_SQLITE3_LIBS})\n\tinstall(PROGRAMS ${FLEDGE_SQLITE3_LIBS}/sqlite3 DESTINATION \"fledge/plugins/storage/${PROJECT_NAME}\")\nendif()\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/plugins/storage/${PROJECT_NAME})\n\n# Install init.sql\ninstall(FILES 
${CMAKE_SOURCE_DIR}/scripts/plugins/storage/${PROJECT_NAME}/init.sql DESTINATION fledge/plugins/storage/${PROJECT_NAME})\ninstall(FILES ${CMAKE_SOURCE_DIR}/scripts/plugins/storage/${PROJECT_NAME}/init_readings.sql DESTINATION fledge/plugins/storage/${PROJECT_NAME})\n"
  },
  {
    "path": "C/plugins/storage/sqlitelb/Findsqlite3.cmake",
    "content": "# This CMake file locates the SQLite3 development libraries\n#\n# The following variables are set:\n# SQLITE_FOUND - If the SQLite library was found\n# SQLITE_LIBRARIES - Path to the static library\n# SQLITE_INCLUDE_DIR - Path to SQLite headers\n# SQLITE_VERSION - Library version\n\nset(SQLITE_MIN_VERSION \"3.11.0\")\n# Check whether the path of the compiled libsqlite3.a and .h files exists\nif (EXISTS ${FLEDGE_SQLITE3_LIBS})\n    find_path(SQLITE_INCLUDE_DIR sqlite3.h PATHS ${FLEDGE_SQLITE3_LIBS})\n    find_library(SQLITE_LIBRARIES NAMES libsqlite3.a PATHS \"${FLEDGE_SQLITE3_LIBS}/.libs\")\nelse()\n    find_path(SQLITE_INCLUDE_DIR sqlite3.h)\n    find_library(SQLITE_LIBRARIES NAMES libsqlite3.so)\nendif()\n\nif (SQLITE_INCLUDE_DIR AND SQLITE_LIBRARIES)\n  execute_process(COMMAND grep \".*#define.*SQLITE_VERSION \" ${SQLITE_INCLUDE_DIR}/sqlite3.h\n    COMMAND sed \"s/.*\\\"\\\\(.*\\\\)\\\".*/\\\\1/\"\n    OUTPUT_VARIABLE SQLITE_VERSION\n    OUTPUT_STRIP_TRAILING_WHITESPACE)\n    if (\"${SQLITE_VERSION}\" VERSION_LESS \"${SQLITE_MIN_VERSION}\")\n        message(FATAL_ERROR \"SQLite3 version >= ${SQLITE_MIN_VERSION} required, found version ${SQLITE_VERSION}\")\n    else()\n        message(STATUS \"Found SQLite version ${SQLITE_VERSION}: ${SQLITE_LIBRARIES}\")\n        set(SQLITE_FOUND TRUE)\n    endif()\nelse()\n  message(FATAL_ERROR \"Could not find SQLite\")\nendif()\n"
  },
  {
    "path": "C/plugins/storage/sqlitelb/common/connection.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2018 OSIsoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n#include <connection.h>\n#include <connection_manager.h>\n#include <sqlite_common.h>\n#include <utils.h>\n#ifndef MEMORY_READING_PLUGIN\n#include <schema.h>\n#endif\n\n/*\n * Control the way purge deletes readings. The block size sets a limit as to how many rows\n * get deleted in each call, whilst the sleep interval controls how long the thread sleeps\n * between deletes. The idea is to not keep the database locked too long and allow other threads\n * to have access to the database between blocks.\n */\n#define PURGE_SLEEP_MS 500\n#define PURGE_DELETE_BLOCK_SIZE\t20\n#define TARGET_PURGE_BLOCK_DEL_TIME\t(70*1000) \t// 70 msec\n#define PURGE_BLOCK_SZ_GRANULARITY\t5 \t// 5 rows\n#define MIN_PURGE_DELETE_BLOCK_SIZE\t20\n#define MAX_PURGE_DELETE_BLOCK_SIZE\t1500\n#define RECALC_PURGE_BLOCK_SIZE_NUM_BLOCKS\t30\t// recalculate purge block size after every 30 blocks\n\n#define PURGE_SLOWDOWN_AFTER_BLOCKS 5\n#define PURGE_SLOWDOWN_SLEEP_MS 500\n\n/**\n * SQLite3 storage plugin for Fledge\n */\n\nusing namespace std;\nusing namespace rapidjson;\n\n#define CONNECT_ERROR_THRESHOLD\t\t5*60\t// 5 minutes\n\n\n/*\n * The following allows for conditional inclusion of code that tracks the top queries\n * run by the storage plugin and the number of times a particular statement has to\n * be retried because of the database being busy.\n */\n#define DO_PROFILE\t\t0\n#define DO_PROFILE_RETRIES\t0\n#if DO_PROFILE\n#include <profile.h>\n\n#define\tTOP_N_STATEMENTS\t\t10\t// Number of statements to report in top n\n#define RETRY_REPORT_THRESHOLD\t\t1000\t// Report retry statistics every X calls\n\nQueryProfile profiler(TOP_N_STATEMENTS);\nunsigned long retryStats[MAX_RETRIES] = { 0,0,0,0,0,0,0,0,0,0 };\nunsigned long numStatements = 0;\nint\t      maxQueue = 0;\n#endif\n\nstatic std::atomic<int> 
m_waiting(0);\nstatic std::atomic<int> m_writeAccessOngoing(0);\nstatic std::mutex\tdb_mutex;\nstatic std::condition_variable\tdb_cv;\nstatic int purgeBlockSize = PURGE_DELETE_BLOCK_SIZE;\n\n#define START_TIME std::chrono::high_resolution_clock::time_point t1 = std::chrono::high_resolution_clock::now();\n#define END_TIME std::chrono::high_resolution_clock::time_point t2 = std::chrono::high_resolution_clock::now(); \\\n\t\t\t\t auto usecs = std::chrono::duration_cast<std::chrono::microseconds>( t2 - t1 ).count();\n\n\nstatic time_t connectErrorTime = 0;\n\n/**\n * This SQLite3 query callback returns a formatted date\n * by SELECT strftime('format', column, 'localtime')\n *\n * @param data         Output parameter to update with new datetime\n * @param nCols        The number of columns in the row\n * @param colValues    The column values\n * @param colNames     The column names\n * @return             0 on success, 1 otherwise\n */\nint dateCallback(void *data,\n\t\tint nCols,\n\t\tchar **colValues,\n\t\tchar **colNames)\n{\n\tif (colValues[0] != NULL)\n\t{\n\t\tmemcpy((char *)data,\n\t\t\tcolValues[0],\n\t\t\tstrlen(colValues[0]));\n\t\t// OK\n\t\treturn 0;\n\t}\n\telse\n\t{\n\t\t// Failure\n\t\treturn 1;\n\t}\n}\n\n/**\n * Retrieves the current datetime (now ()) from SQLite\n *\n * @param Now      Output parameter - now ()\n * @return         True if the operation succeeded\n *\n */\nbool Connection::getNow(string& Now)\n{\n\tbool retCode;\n\tchar* zErrMsg = NULL;\n\tchar nowDate[100] = \"\";\n\n\tstring nowSqlCMD = \"SELECT \" SQLITE3_NOW_READING;\n\n\tint rc = SQLexec(dbHandle, \"now\",\n\t                 nowSqlCMD.c_str(),\n\t                 dateCallback,\n\t                 nowDate,\n\t                 &zErrMsg);\n\n\tif (rc == SQLITE_OK )\n\t{\n\t\tNow = nowDate;\n\t\tretCode = true;\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"SELECT NOW() '%s' failed: %s\", nowSqlCMD.c_str(), zErrMsg ? zErrMsg : \"unknown error\");\n\t\tsqlite3_free(zErrMsg);\n\t\tNow = \"\";\n\t\tretCode = 
false;\n\t}\n\treturn retCode;\n}\n\n/**\n * Apply Fledge default datetime formatting\n * to a detected DATETIME datatype column\n *\n * @param pStmt    Current SQLite3 result set\n * @param i        Current column index\n * @param newDate  Output parameter for the new date\n * @return         True if format has been applied,\n *\t\t   False otherwise\n */\nbool Connection::applyColumnDateTimeFormat(sqlite3_stmt *pStmt,\n\t\t\t\t\t   int i,\n\t\t\t\t\t   string& newDate)\n{\n\n\tbool apply_format = false;\n\tstring formatStmt = {};\n\n\tif (sqlite3_column_database_name(pStmt, i) != NULL &&\n\t    sqlite3_column_table_name(pStmt, i)    != NULL)\n\t{\n\n\t\tif ((strcmp(sqlite3_column_origin_name(pStmt, i), \"user_ts\") == 0) &&\n\t\t    (strcmp(sqlite3_column_table_name(pStmt, i), \"readings\") == 0) &&\n\t\t    (strlen((char *) sqlite3_column_text(pStmt, i)) == 32))\n\t\t{\n\n\t\t\t// Extract milliseconds and microseconds for the user_ts field of the readings table\n\t\t\tformatStmt = string(\"SELECT strftime('\");\n\t\t\tformatStmt += string(F_DATEH24_SEC);\n\t\t\tformatStmt += \"', '\" + string((char *) sqlite3_column_text(pStmt, i));\n\t\t\tformatStmt += \"')\";\n\t\t\tformatStmt += \" || substr('\" + string((char *) sqlite3_column_text(pStmt, i));\n\t\t\tformatStmt += \"', instr('\" + string((char *) sqlite3_column_text(pStmt, i));\n\t\t\tformatStmt += \"', '.'), 7)\";\n\n\t\t\tapply_format = true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t/**\n\t\t\t * Handle here possible unformatted DATETIME column type\n\t\t\t * If (column_name == column_original_name) AND\n\t\t\t * (sqlite3_column_table_name() == \"DATETIME\")\n\t\t\t * we assume the column has not been formatted\n\t\t\t * by any datetime() or strftime() SQLite function.\n\t\t\t * Thus we apply default FLEDGE formatting:\n\t\t\t * \"%Y-%m-%d %H:%M:%f\"\n\t\t\t */\n\t\t\tif (sqlite3_column_database_name(pStmt, i) != NULL 
&&\n\t\t\t    sqlite3_column_table_name(pStmt, i) != NULL &&\n\t\t\t    (strcmp(sqlite3_column_origin_name(pStmt, i),\n\t\t\t\t    sqlite3_column_name(pStmt, i)) == 0))\n\t\t\t{\n\t\t\t\tconst char *pzDataType;\n\t\t\t\tint retType = sqlite3_table_column_metadata(dbHandle,\n\t\t\t\t\t\t\t\t\t    sqlite3_column_database_name(pStmt, i),\n\t\t\t\t\t\t\t\t\t    sqlite3_column_table_name(pStmt, i),\n\t\t\t\t\t\t\t\t\t    sqlite3_column_name(pStmt, i),\n\t\t\t\t\t\t\t\t\t    &pzDataType,\n\t\t\t\t\t\t\t\t\t    NULL, NULL, NULL, NULL);\n\n\t\t\t\t// Check whether to Apply dateformat\n\t\t\t\tif (pzDataType != NULL &&\n\t\t\t\t    retType == SQLITE_OK &&\n\t\t\t\t    strcmp(pzDataType, SQLITE3_FLEDGE_DATETIME_TYPE) == 0 &&\n\t\t\t\t    strcmp(sqlite3_column_origin_name(pStmt, i),\n\t\t\t\t\t   sqlite3_column_name(pStmt, i)) == 0)\n\t\t\t\t{\n\t\t\t\t\t// Column metadata found and column datatype is \"pzDataType\"\n\t\t\t\t\tformatStmt = string(\"SELECT strftime('\");\n\t\t\t\t\tformatStmt += string(F_DATEH24_MS);\n\t\t\t\t\tformatStmt += \"', '\" + string((char *) sqlite3_column_text(pStmt, i));\n\t\t\t\t\tformatStmt += \"')\";\n\n\t\t\t\t\tapply_format = true;\n\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\t// Format not done\n\t\t\t\t\t// Just log the error if present\n\t\t\t\t\tif (retType != SQLITE_OK)\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->error(\"SQLite3 failed \" \\\n                                                                \"to call sqlite3_table_column_metadata() \" \\\n                                                                \"for column '%s'\",\n\t\t\t\t\t\t\t\t\t   sqlite3_column_name(pStmt, i));\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif (apply_format)\n\t{\n\n\t\tchar* zErrMsg = NULL;\n\t\t// New formatted data\n\t\tchar formattedData[100] = \"\";\n\n\t\t// Exec the format SQL\n\t\tint rc = SQLexec(dbHandle, \"date\", \n\t\t\t\t formatStmt.c_str(),\n\t\t\t\t dateCallback,\n\t\t\t\t formattedData,\n\t\t\t\t &zErrMsg);\n\n\t\tif 
(rc == SQLITE_OK )\n\t\t{\n\t\t\t// Use new formatted datetime value\n\t\t\tnewDate.assign(formattedData);\n\n\t\t\treturn true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"SELECT dateformat '%s': error %s\",\n\t\t\t\t\t\t   formatStmt.c_str(),\n\t\t\t\t\t\t   zErrMsg);\n\n\t\t\tsqlite3_free(zErrMsg);\n\t\t}\n\n\t}\n\n\treturn false;\n}\n\n/**\n * Apply the specified date format\n * using the available formats in SQLite3\n * for a specific column\n *\n * If the requested format is not available\n * the input column is used as is.\n * Additionally milliseconds could be rounded\n * upon request.\n * The routine returns false if the date format is not\n * found and the caller might decide to raise an error\n * or use the non formatted value\n *\n * @param inFormat     Input date format from application\n * @param colName      The column name to format\n * @param outFormat    The formatted column\n * @param roundMs      Round milliseconds when true\n * @return             True if format has been applied or\n *\t\t       false if no format is in use.\n */\nbool applyColumnDateFormat(const string& inFormat,\n\t\t\t\t  const string& colName,\n\t\t\t\t  string& outFormat,\n\t\t\t\t  bool roundMs)\n\n{\nbool retCode;\n\t// Get format, if any, from the supported formats map\n\tconst string format = sqliteDateFormat[inFormat];\n\tif (!format.empty())\n\t{\n\t\t// Apply found format via SQLite3 strftime()\n\t\toutFormat.append(\"strftime('\");\n\t\toutFormat.append(format);\n\t\toutFormat.append(\"', \");\n\n\t\t// Check whether we have to round milliseconds\n\t\tif (roundMs == true &&\n\t\t    format.back() == 'f')\n\t\t{\n\t\t\toutFormat.append(\"cast(round((julianday(\");\n\t\t\toutFormat.append(colName);\n\t\t\toutFormat.append(\") - 2440587.5)*86400 -0.00005, 3) AS FLOAT), 'unixepoch'\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\toutFormat.append(colName);\n\t\t}\n\n\t\toutFormat.append(\" )\");\n\t\tretCode = true;\n\t}\n\telse\n\t{\n\t\t// Use column as is\n\t\toutFormat.append(colName);\n\t\tretCode = 
false;\n\t}\n\n\treturn retCode;\n}\n\n/**\n * Apply the specified date format\n * using the available formats in SQLite3\n * for a specific column, converting to localtime\n *\n * If the requested format is not available\n * the input column is used as is.\n * Additionally milliseconds could be rounded\n * upon request.\n * The routine returns false if the date format is not\n * found and the caller might decide to raise an error\n * or use the non formatted value\n *\n * @param inFormat     Input date format from application\n * @param colName      The column name to format\n * @param outFormat    The formatted column\n * @return             True if format has been applied or\n *\t\t       false if no format is in use.\n */\nbool applyColumnDateFormatLocaltime(const string& inFormat,\n\t\t\t\t  const string& colName,\n\t\t\t\t  string& outFormat,\n\t\t\t\t  bool roundMs)\n\n{\n\tbool retCode;\n\t// Get format, if any, from the supported formats map\n\tconst string format = sqliteDateFormat[inFormat];\n\tif (!format.empty())\n\t{\n\t\t// Apply found format via SQLite3 strftime()\n\t\toutFormat.append(\"strftime('\");\n\t\toutFormat.append(format);\n\t\toutFormat.append(\"', \");\n\n\t\t// Check whether we have to round milliseconds\n\t\tif (roundMs == true &&\n\t\t    format.back() == 'f')\n\t\t{\n\t\t\toutFormat.append(\"cast(round((julianday(\");\n\t\t\toutFormat.append(colName);\n\t\t\toutFormat.append(\") - 2440587.5)*86400 -0.00005, 3) AS FLOAT), 'unixepoch'\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\toutFormat.append(colName);\n\t\t}\n\n\t\toutFormat.append(\", 'localtime')\");\n\t\tretCode = true;\n\t}\n\telse\n\t{\n\t\t// Use column as is\n\t\toutFormat.append(colName);\n\t\tretCode = false;\n\t}\n\n\treturn retCode;\n}\n\n/**\n * Apply the specified date format\n * using the available formats in SQLite3\n *\n * @param inFormat     Input date format from application\n * @param outFormat    The formatted column\n * @return             True if format has been applied or\n *\t\t       false\n 
*/\nbool applyDateFormat(const string& inFormat, string& outFormat)\n\n{\nbool retCode;\n\t// Get format, if any, from the supported formats map\n\tconst string format = sqliteDateFormat[inFormat];\n\tif (!format.empty())\n\t{\n\t\t// Apply found format via SQLite3 strftime()\n\t\toutFormat.append(\"strftime('\");\n\t\toutFormat.append(format);\n\t\toutFormat.append(\"', \");\n\n\t\treturn true;\n\t}\n\telse\n\t{\n\t\treturn false;\n\t}\n}\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Create a SQLite3 database connection\n */\nConnection::Connection() : m_purgeBlockSize(10000)\n{\n\tstring dbPath, dbPathReadings;\n\tconst char *defaultConnection = getenv(\"DEFAULT_SQLITE_DB_FILE\");\n\tconst char *defaultReadingsConnection = getenv(\"DEFAULT_SQLITE_DB_READINGS_FILE\");\n\n\tm_logSQL = false;\n\tm_queuing = 0;\n\tm_streamOpenTransaction = true;\n\n\tif (defaultConnection == NULL)\n\t{\n\t\t// Set DB base path\n\t\tdbPath = getDataDir();\n\t\t// Add the filename\n\t\tdbPath += _DB_NAME;\n\t}\n\telse\n\t{\n\t\tdbPath = defaultConnection;\n\t}\n\n\tif (defaultReadingsConnection == NULL)\n\t{\n\t\t// Set DB base path\n\t\tdbPathReadings = getDataDir();\n\t\t// Add the filename\n\t\tdbPathReadings += READINGS_DB_FILE_NAME;\n\t}\n\telse\n\t{\n\t\tdbPathReadings = defaultReadingsConnection;\n\t}\n\n\t// Allow usage of URI for filename\n\tsqlite3_config(SQLITE_CONFIG_URI, 1);\n\n\tLogger *logger = Logger::getLogger();\n\n\t/**\n\t * Make a connection to the database\n\t * and check backend connection was successfully made\n\t * Note:\n\t *   we assume the database already exists, so the flag\n\t *   SQLITE_OPEN_CREATE is not added in sqlite3_open_v2 call\n\t */\n\tif (sqlite3_open_v2(dbPath.c_str(),\n\t\t\t    &dbHandle,\n\t\t\t    SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX,\n\t\t\t    NULL) != SQLITE_OK)\n\t{\n\t\tconst char* dbErrMsg = sqlite3_errmsg(dbHandle);\n\t\tconst char* errMsg = \"Failed to open the SQLite3 database\";\n\n\t\tLogger::getLogger()->error(\"%s '%s': 
%s\",\n\t\t\t\t\t   errMsg,\n\t\t\t\t\t   dbPath.c_str(),\n\t\t\t\t\t   dbErrMsg);\n\t\tconnectErrorTime = time(0);\n\n\t\traiseError(\"Connection\", \"%s '%s': '%s'\",\n\t\t\t   errMsg,\n\t\t\t   dbPath.c_str(),\n\t\t\t   dbErrMsg);\n\n\t\tsqlite3_close_v2(dbHandle);\n\t\tdbHandle = NULL;\n\t}\n\telse\n\t{\n\t\tint rc;\n\t\tchar *zErrMsg = NULL;\n\n\t\tstring dbConfiguration = \"PRAGMA busy_timeout = 5000; PRAGMA cache_size = -4000; PRAGMA journal_mode = WAL; PRAGMA secure_delete = off; PRAGMA journal_size_limit = 4096000;\";\n\n\t\t// Enable the WAL for the fledge DB\n\t\trc = sqlite3_exec(dbHandle, dbConfiguration.c_str(), NULL, NULL, &zErrMsg);\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\tstring errMsg = \"Failed to set WAL mode on the fledge DB - \" + dbConfiguration;\n\t\t\tLogger::getLogger()->error(\"%s : error %s\",\n\t\t\t\t\t\t\t\t\t   errMsg.c_str(),\n\t\t\t\t\t\t\t\t\t   zErrMsg);\n\t\t\tconnectErrorTime = time(0);\n\n\t\t\tsqlite3_free(zErrMsg);\n\t\t}\n\n\t\t/*\n\t\t * Build the ATTACH DATABASE command in order to get\n\t\t * 'fledge.' 
prefix in all SQL queries\n\t\t */\n\t\tSQLBuffer attachDb;\n\t\tattachDb.append(\"ATTACH DATABASE '\");\n\t\tattachDb.append(dbPath + \"' AS fledge;\");\n\n\t\tconst char *sqlStmt = attachDb.coalesce();\n\n\t\tzErrMsg = NULL;\n\t\t// Exec the statement\n\t\trc = SQLexec(dbHandle, \"database\",\n\t\t\t     sqlStmt,\n\t\t\t     NULL,\n\t\t\t     NULL,\n\t\t\t     &zErrMsg);\n\n\t\t// Check result\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\tconst char* errMsg = \"Failed to attach 'fledge' database in\";\n\t\t\tLogger::getLogger()->error(\"%s '%s': error %s\",\n\t\t\t\t\t\t   errMsg,\n\t\t\t\t\t\t   sqlStmt,\n\t\t\t\t\t\t   zErrMsg);\n\t\t\tconnectErrorTime = time(0);\n\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\tsqlite3_close_v2(dbHandle);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->info(\"Connected to SQLite3 database: %s\",\n\t\t\t\t\t\t  dbPath.c_str());\n\t\t}\n\t\t// Release sqlStmt buffer\n\t\tdelete[] sqlStmt;\n\n\t\tbool initialiseReadings = false;\n\t\tif (access(dbPathReadings.c_str(), R_OK) == -1)\n\t\t{\n\t\t\tsqlite3 *readingsHandle;\n\t\t\t// Readings do not exist so set flag to initialise\n\t\t\trc = sqlite3_open(dbPathReadings.c_str(), &readingsHandle);\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Failed to create the readings database '%s': %s\",\n\t\t\t\t\t\t\t   dbPathReadings.c_str(),\n\t\t\t\t\t\t\t   sqlite3_errmsg(readingsHandle));\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// Enables the WAL feature\n\t\t\t\trc = sqlite3_exec(readingsHandle, DB_CONFIGURATION, NULL, NULL, NULL);\n\t\t\t}\n\t\t\tsqlite3_close(readingsHandle);\n\t\t\tinitialiseReadings = true;\n\t\t}\n\n\t\t// Attach readings database\n\t\tSQLBuffer attachReadingsDb;\n\t\tattachReadingsDb.append(\"ATTACH DATABASE '\");\n\t\tattachReadingsDb.append(dbPathReadings + \"' AS readings;\");\n\n\t\tconst char *sqlReadingsStmt = attachReadingsDb.coalesce();\n\n\t\t// Exec the statement\n\t\trc = SQLexec(dbHandle, \"database\",\n\t\t\t     sqlReadingsStmt,\n\t\t\t     NULL,\n\t\t\t     NULL,\n\t\t\t     &zErrMsg);\n\n\t\t// Check result\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\tconst char* errMsg = \"Failed to attach 'readings' database 
in\";\n\t\t\tLogger::getLogger()->error(\"%s '%s': error %s\",\n\t\t\t\t\t\t   errMsg,\n\t\t\t\t\t\t   sqlReadingsStmt,\n\t\t\t\t\t\t   zErrMsg);\n\t\t\tconnectErrorTime = time(0);\n\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\tsqlite3_close_v2(dbHandle);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->info(\"Connected to SQLite3 database: %s\",\n\t\t\t\t\t\t  dbPathReadings.c_str());\n\t\t}\n\t\t// Release sqlReadingsStmt buffer\n\t\tdelete[] sqlReadingsStmt;\n\n\t\tif (initialiseReadings)\n\t\t{\n\t\t\t// Would really like to run an external script here, but until we have that\n\t\t\t// worked out we have the SQL needed to create the table and indexes\n\n\t\t\t// Need to initialise the readings\n\t\t\tSQLBuffer initReadings;\n\t\t\tinitReadings.append(\"CREATE TABLE readings.readings (\");\n\t\t\tinitReadings.append(\"id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\");\n\t\t\tinitReadings.append(\"asset_code character varying(50)       NOT NULL,\");\n\t\t\tinitReadings.append(\"reading    JSON                        NOT NULL DEFAULT '{}',\");\n\t\t\tinitReadings.append(\"user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),\");\n\t\t\tinitReadings.append(\"ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))\");\n\t\t\tinitReadings.append(\");\");\n\n\t\t\tconst char *sqlReadingsStmt = initReadings.coalesce();\n\n\t\t\t// Exec the statement\n\t\t\tzErrMsg = NULL;\n\t\t\trc = SQLexec(dbHandle, \"readings creation\",\n\t\t\t\t     sqlReadingsStmt,\n\t\t\t\t     NULL,\n\t\t\t\t     NULL,\n\t\t\t\t     &zErrMsg);\n\n\t\t\t// Check result\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\tconst char* errMsg = \"Failed to create 'readings' table, \";\n\t\t\t\tLogger::getLogger()->error(\"%s '%s': error %s\",\n\t\t\t\t\t\t\t   errMsg,\n\t\t\t\t\t\t\t   sqlReadingsStmt,\n\t\t\t\t\t\t\t   zErrMsg);\n\t\t\t\tconnectErrorTime = 
time(0);\n\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t\tsqlite3_close_v2(dbHandle);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info(\"Initialised readings database\");\n\t\t\t}\n\t\t\t// Release sqlReadingsStmt buffer\n\t\t\tdelete[] sqlReadingsStmt;\n\n\t\t\tSQLBuffer index1;\n\t\t\tindex1.append(\"CREATE INDEX readings.fki_readings_fk1 ON readings (asset_code, user_ts desc);\");\n\n\t\t\tconst char *sqlIndex1Stmt = index1.coalesce();\n\n\t\t\t// Exec the statement\n\t\t\tzErrMsg = NULL;\n\t\t\trc = SQLexec(dbHandle, \"readings creation\",\n\t\t\t\t     sqlIndex1Stmt,\n\t\t\t\t     NULL,\n\t\t\t\t     NULL,\n\t\t\t\t     &zErrMsg);\n\n\t\t\t// Check result\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\tconst char* errMsg = \"Failed to create 'readings' index, \";\n\t\t\t\tLogger::getLogger()->error(\"%s '%s': error %s\",\n\t\t\t\t\t\t\t   errMsg,\n\t\t\t\t\t\t\t   sqlIndex1Stmt,\n\t\t\t\t\t\t\t   zErrMsg);\n\t\t\t\tconnectErrorTime = time(0);\n\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t\tsqlite3_close_v2(dbHandle);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info(\"Created readings index fki_readings_fk1\");\n\t\t\t}\n\t\t\t// Release sqlIndex1Stmt buffer\n\t\t\tdelete[] sqlIndex1Stmt;\n\n\t\t\tSQLBuffer index2;\n\t\t\tindex2.append(\"CREATE INDEX readings.readings_ix2 ON readings (asset_code);\");\n\n\t\t\tconst char *sqlIndex2Stmt = index2.coalesce();\n\n\t\t\t// Exec the statement\n\t\t\tzErrMsg = NULL;\n\t\t\trc = SQLexec(dbHandle, \"readings creation\",\n\t\t\t\t     sqlIndex2Stmt,\n\t\t\t\t     NULL,\n\t\t\t\t     NULL,\n\t\t\t\t     &zErrMsg);\n\n\t\t\t// Check result\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\tconst char* errMsg = \"Failed to create 'readings' index, \";\n\t\t\t\tLogger::getLogger()->error(\"%s '%s': error %s\",\n\t\t\t\t\t\t\t   errMsg,\n\t\t\t\t\t\t\t   sqlIndex2Stmt,\n\t\t\t\t\t\t\t   zErrMsg);\n\t\t\t\tconnectErrorTime = 
time(0);\n\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t\tsqlite3_close_v2(dbHandle);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info(\"Created readings index readings_ix2\");\n\t\t\t}\n\t\t\t// Release sqlIndex2Stmt buffer\n\t\t\tdelete[] sqlIndex2Stmt;\n\n\t\t\tSQLBuffer index3;\n\t\t\tindex3.append(\"CREATE INDEX readings.readings_ix3 ON readings (user_ts);\");\n\n\t\t\tconst char *sqlIndex3Stmt = index3.coalesce();\n\n\t\t\t// Exec the statement\n\t\t\tzErrMsg = NULL;\n\t\t\trc = SQLexec(dbHandle, \"readings creation\",\n\t\t\t\t     sqlIndex3Stmt,\n\t\t\t\t     NULL,\n\t\t\t\t     NULL,\n\t\t\t\t     &zErrMsg);\n\n\t\t\t// Check result\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\tconst char* errMsg = \"Failed to create 'readings' index, \";\n\t\t\t\tLogger::getLogger()->error(\"%s '%s': error %s\",\n\t\t\t\t\t\t\t   errMsg,\n\t\t\t\t\t\t\t   sqlIndex3Stmt,\n\t\t\t\t\t\t\t   zErrMsg);\n\t\t\t\tconnectErrorTime = time(0);\n\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t\tsqlite3_close_v2(dbHandle);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info(\"Created readings index readings_ix3\");\n\t\t\t}\n\t\t\t// Release sqlIndex3Stmt buffer\n\t\t\tdelete[] sqlIndex3Stmt;\n\t\t}\n\n\t\t// Enable the WAL for the readings DB\n\t\trc = sqlite3_exec(dbHandle, dbConfiguration.c_str(), NULL, NULL, &zErrMsg);\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\tstring errMsg = \"Failed to set WAL mode on the readings DB - \" + dbConfiguration;\n\t\t\tLogger::getLogger()->error(\"%s : error %s\",\n\t\t\t\t\t\t\t\t\t   errMsg.c_str(),\n\t\t\t\t\t\t\t\t\t   zErrMsg);\n\t\t\tconnectErrorTime = time(0);\n\n\t\t\tsqlite3_free(zErrMsg);\n\t\t}\n\n\t}\n\n\tm_schemaManager = SchemaManager::getInstance();\n}\n#endif\n\n/**\n * Destructor for the database connection.\n * Close the connection to SQLite3 db\n */\nConnection::~Connection()\n{\n\tsqlite3_close_v2(dbHandle);\n}\n\n/**\n * Enable or disable the tracing of SQL statements\n *\n * @param flag\tDesired state of the SQL trace flag\n */\nvoid 
Connection::setTrace(bool flag)\n{\n\tm_logSQL = flag;\n}\n\n/**\n * Map a SQLite3 result set to a string version of a JSON document\n *\n * @param res          Sqlite3 result set\n * @param resultSet    Output Json as string\n * @return             SQLite3 result code of sqlite3_step(res)\n *\n */\nint Connection::mapResultSet(void* res, string& resultSet)\n{\n// Cast to SQLite3 result set\nsqlite3_stmt* pStmt = (sqlite3_stmt *)res;\n// JSON generic document\nDocument doc;\n// SQLite3 return code\nint rc;\n// Number of returned rows, number of columns\nunsigned long nRows = 0, nCols = 0;\n\n\t// Create the JSON document\n\tdoc.SetObject();\n\t// Get document allocator\n\tDocument::AllocatorType& allocator = doc.GetAllocator();\n\t// Create the array for returned rows\n\tValue rows(kArrayType);\n\t// Rows counter, set it to 0 now\n\tValue count;\n\tcount.SetInt(0);\n\n\t// Iterate over all the rows in the resultSet\n\twhile ((rc = SQLstep(pStmt)) == SQLITE_ROW)\n\t{\n\t\t// Get number of columns for current row\n\t\tnCols = sqlite3_column_count(pStmt);\n\t\t// Create the 'row' object\n\t\tValue row(kObjectType);\n\n\t\t// Build the row with all fields\n\t\tfor (int i = 0; i < nCols; i++)\n\t\t{\n\t\t\t// JSON document for the current row\n\t\t\tDocument d;\n\t\t\t// Set object name as the column name\n\t\t\tValue name(sqlite3_column_name(pStmt, i), allocator);\n\t\t\t// Get the \"TEXT\" value of the column value\n\t\t\tchar* str = (char *)sqlite3_column_text(pStmt, i);\n\n\t\t\t// Check the column value datatype\n\t\t\tswitch (sqlite3_column_type(pStmt, i))\n\t\t\t{\n\t\t\t\tcase (SQLITE_NULL):\n\t\t\t\t{\n\t\t\t\t\trow.AddMember(name, \"\", allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tcase (SQLITE3_TEXT):\n\t\t\t\t{\n\n\t\t\t\t\t/**\n\t\t\t\t\t * Handle here possible unformatted DATETIME column type\n\t\t\t\t\t */\n\t\t\t\t\tstring newDate;\n\t\t\t\t\tif (applyColumnDateTimeFormat(pStmt, i, newDate))\n\t\t\t\t\t{\n\t\t\t\t\t\t// Use new formatted datetime 
value\n\t\t\t\t\t\tstr = (char *)newDate.c_str();\n\t\t\t\t\t}\n\n\t\t\t\t\tValue value;\n\t\t\t\t\tif (!d.Parse(str).HasParseError())\n\t\t\t\t\t{\n\t\t\t\t\t\tif (d.IsNumber())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Set string\n\t\t\t\t\t\t\tvalue = Value(str, allocator);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// JSON parsing ok, use the document\n\t\t\t\t\t\t\t// if string value is not \"null\", \"true\", \"false\"\n\t\t\t\t\t\t\tif (strcmp(str, \"null\") != 0 && strcmp(str, \"true\") != 0 && strcmp(str, \"false\") != 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tvalue = Value(d, allocator);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t// Use (char *) value for \"null\", \"true\", \"false\"\n\t\t\t\t\t\t\t\tvalue = Value(str, allocator);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\t// Use (char *) value\n\t\t\t\t\t\tvalue = Value(str, allocator);\n\t\t\t\t\t}\n\t\t\t\t\t// Add name & value to the current row\n\t\t\t\t\trow.AddMember(name, value, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tcase (SQLITE_INTEGER):\n\t\t\t\t{\n\t\t\t\t\t// Use strtoll() so 64 bit values are not truncated\n\t\t\t\t\tint64_t intVal = strtoll(str, NULL, 10);\n\t\t\t\t\t// Add name & value to the current row\n\t\t\t\t\trow.AddMember(name, intVal, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tcase (SQLITE_FLOAT):\n\t\t\t\t{\n\t\t\t\t\tdouble dblVal = atof(str);\n\t\t\t\t\t// Add name & value to the current row\n\t\t\t\t\trow.AddMember(name, dblVal, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tdefault:\n\t\t\t\t{\n\t\t\t\t\t// Default: use (char *) value\n\t\t\t\t\tValue value(str != NULL ? 
str : \"\", allocator);\n\t\t\t\t\t// Add name & value to the current row\n\t\t\t\t\trow.AddMember(name, value, allocator);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// All fields added: increase row counter\n\t\tnRows++;\n\n\t\t// Add the current row to the all rows object\n\t\trows.PushBack(row, allocator);\n\t}\n\n\t// All rows added: update rows count\n\tcount.SetInt(nRows);\n\n\t// Add 'rows' and 'count' to the final JSON document\n\tdoc.AddMember(\"count\", count, allocator);\n\tdoc.AddMember(\"rows\", rows, allocator);\n\n\t/* Write out the JSON document we created */\n\tStringBuffer buffer;\n\tWriter<StringBuffer> writer(buffer);\n\tdoc.Accept(writer);\n\n\t// Set the result as a CPP string\n\tresultSet = buffer.GetString();\n\n\t// Return SQLite3 ret code\n\treturn rc;\n}\n\n/**\n * This SQLite3 query callback just returns the number of rows seen\n * by a SELECT statement in the 'data' parameter\n *\n * @param data         Output parameter to update with number of rows\n * @param nCols        The number of columns in the row\n * @param colValues    The column values\n * @param colNames     The column names\n * @return             0 on success, 1 otherwise\n */\nint selectCallback(void *data,\n\t\t  int nCols,\n\t\t  char **colValues,\n\t\t  char **colNames)\n{\nint *nRows = (int *)data;\n\t// Increment the number of rows seen\n\t(*nRows)++;\n\n\t// Set OK\n\treturn 0;\n}\n\n/**\n * This SQLite3 query count callback just returns the number of rows\n * as per 'count(*)' column\n * by a SELECT statement in the 'data' parameter\n *\n * @param data         Output parameter to update with number of rows\n * @param nCols        The number of columns in the row\n * @param colValues    The column values\n * @param colNames     The column names\n * @return             0 on success, 1 otherwise\n */\nint countCallback(void *data,\n\t\t int nCols,\n\t\t char **colValues,\n\t\t char **colNames)\n{\nint *nRows = (int *)data;\n\n\t// Return the value of the 
first column: the count(*)\n\t*nRows = atoi(colValues[0]);\n\n\t// Set OK\n\treturn 0;\n}\n\n/**\n * This SQLite3 query rowid callback just returns the rowid\n * by a SELECT statement in the 'data' parameter\n *\n * @param data         Output parameter to update with rowid\n * @param nCols        The number of columns in the row\n * @param colValues    The column values\n * @param colNames     The column names\n * @return             0 on success, 1 otherwise\n */\nint rowidCallback(void *data,\n\t\t\t int nCols,\n\t\t\t char **colValues,\n\t\t\t char **colNames)\n{\nunsigned long *rowid = (unsigned long *)data;\n\n\t// Return the value of the first column: the rowid\n\tif (colValues[0])\n\t\t*rowid = strtoul(colValues[0], NULL, 10);\n\telse\n\t\t*rowid = 0;\n\n\t// Set OK\n\treturn 0;\n}\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Perform a query against a common table\n *\n */\nbool Connection::retrieve(const string& schema,\n\t\t\t  const string& table,\n\t\t\t  const string& condition,\n\t\t\t  string& resultSet)\n{\n// Default template parameter uses UTF8 and MemoryPoolAllocator.\nDocument\tdocument;\nSQLBuffer\tsql;\n// Extra constraints to add to where clause\nSQLBuffer\tjsonConstraints;\nvector<string>  asset_codes;\n\n\tif (!m_schemaManager->exists(dbHandle, schema))\n\t{\n\t\traiseError(\"retrieve\",\n\t\t\t\"Schema %s does not exist, unable to retrieve from table %s\",\n\t\t\tschema.c_str(),\n\t\t\ttable.c_str());\n\t\treturn false;\n\t}\n\n\ttry {\n\t\tif (dbHandle == NULL)\n\t\t{\n\t\t\traiseError(\"retrieve\", \"No SQLite 3 db connection available\");\n\t\t\treturn false;\n\t\t}\n\n\t\tif (condition.empty())\n\t\t{\n\t\t\tsql.append(\"SELECT * FROM \");\n\t\t\tsql.append(schema);\n\t\t\tsql.append('.');\n\t\t\tsql.append(table);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (document.Parse(condition.c_str()).HasParseError())\n\t\t\t{\n\t\t\t\traiseError(\"retrieve\", \"Failed to parse JSON payload\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif 
(document.HasMember(\"aggregate\"))\n\t\t\t{\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tif (!jsonAggregates(document, document[\"aggregate\"], sql, jsonConstraints, false))\n\t\t\t\t{\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tsql.append(\" FROM \");\n\t\t\t\tsql.append(schema);\n\t\t\t\tsql.append('.');\n\t\t\t}\n\t\t\telse if (document.HasMember(\"join\"))\n\t\t\t{\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tselectColumns(document, sql, 0);\n\t\t\t}\n\t\t\telse if (document.HasMember(\"return\"))\n\t\t\t{\n\t\t\t\tint col = 0;\n\t\t\t\tValue& columns = document[\"return\"];\n\t\t\t\tif (!columns.IsArray())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", \"The property return must be an array\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tfor (Value::ConstValueIterator itr = columns.Begin(); itr != columns.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col)\n\t\t\t\t\t\tsql.append(\", \");\n\t\t\t\t\tif (!itr->IsObject())\t// Simple column name\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(itr->GetString());\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tif (itr->HasMember(\"column\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (!(*itr)[\"column\"].IsString())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\t   \"column must be a string\");\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (itr->HasMember(\"format\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (!
(*itr)[\"format\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\t\t   \"format must be a string\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t// SQLite 3 date format.\n\t\t\t\t\t\t\t\tstring new_format;\n\t\t\t\t\t\t\t\tapplyColumnDateFormat((*itr)[\"format\"].GetString(),\n\t\t\t\t\t\t\t\t\t\t      (*itr)[\"column\"].GetString(),\n\t\t\t\t\t\t\t\t\t\t      new_format, true);\n\n\t\t\t\t\t\t\t\t// Add the formatted column or use it as is\n\t\t\t\t\t\t\t\tsql.append(new_format);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (itr->HasMember(\"timezone\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (!(*itr)[\"timezone\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\t\t   \"timezone must be a string\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t// SQLite3 doesn't support time zone formatting\n\t\t\t\t\t\t\t\tif (strcasecmp((*itr)[\"timezone\"].GetString(), \"utc\") != 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\t\t   \"SQLite3 plugin does not support timezones in queries\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_MS \"', \");\n\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\tsql.append(\", 'utc')\");\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tsql.append(' ');\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (itr->HasMember(\"json\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tconst Value& json = (*itr)[\"json\"];\n\t\t\t\t\t\t\tif (!
returnJson(json, sql, jsonConstraints))\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t   \"return object must have either a column or json property\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (itr->HasMember(\"alias\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\t\t\t\tsql.append((*itr)[\"alias\"].GetString());\n\t\t\t\t\t\t\tsql.append('\"');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t\tsql.append(\" FROM \");\n\t\t\t\tsql.append(schema);\n\t\t\t\tsql.append('.');\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tsql.append(\" * FROM \");\n\t\t\t\tsql.append(schema);\n\t\t\t\tsql.append('.');\n\t\t\t}\n\t\t\tif (document.HasMember(\"join\"))\n\t\t\t{\n\t\t\t\tsql.append(\" FROM \");\n\t\t\t\tsql.append(schema);\n\t\t\t\tsql.append('.');\n\t\t\t\tsql.append(table);\n\t\t\t\tsql.append(\" t0\");\n\t\t\t\tappendTables(schema, document, sql, 1);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(table);\n\t\t\t}\n\t\t\tif (document.HasMember(\"where\"))\n\t\t\t{\n\t\t\t\tsql.append(\" WHERE \");\n\n\t\t\t\tif (document.HasMember(\"join\"))\n\t\t\t\t{\n\t\t\t\t\tif (!jsonWhereClause(document[\"where\"], sql, asset_codes, false, \"t0.\"))\n\t\t\t\t\t{\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\n\t\t\t\t\t// Now add the join condition itself\n\t\t\t\t\tstring col0, col1;\n\t\t\t\t\tconst Value& join = document[\"join\"];\n\t\t\t\t\tif (join.HasMember(\"on\") && join[\"on\"].IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tcol0 = join[\"on\"].GetString();\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"retrieve\", \"Missing on item\");\n\t\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (join.HasMember(\"table\"))\n\t\t\t\t{\n\t\t\t\t\tconst 
Value& table = join[\"table\"];\n\t\t\t\t\tif (table.HasMember(\"column\") && table[\"column\"].IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tcol1 = table[\"column\"].GetString();\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\", \"Missing column in join table\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(\" AND t0.\");\n\t\t\t\t\tsql.append(col0);\n\t\t\t\t\tsql.append(\" = t1.\");\n\t\t\t\t\tsql.append(col1);\n\t\t\t\t\tsql.append(\" \");\n\t\t\t\t\tif (join.HasMember(\"query\") && join[\"query\"].IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\"AND \");\n\t\t\t\t\t\tconst Value& query = join[\"query\"];\n\t\t\t\t\t\tprocessJoinQueryWhereClause(query, sql, asset_codes, 1);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse if (document.HasMember(\"where\"))\n\t\t\t\t{\n\t\t\t\t\tif (!jsonWhereClause(document[\"where\"], sql, asset_codes, false))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"retrieve\", \"Failed to add where clause\");\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t   \"JSON does not contain where clause\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (!
jsonConstraints.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tsql.append(\" AND \");\n\t\t\t\t\tconst char *jsonBuf = jsonConstraints.coalesce();\n\t\t\t\t\tsql.append(jsonBuf);\n\t\t\t\t\tdelete[] jsonBuf;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (!jsonModifiers(document, sql, false))\n\t\t\t{\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\tsql.append(';');\n\n\t\tconst char *query = sql.coalesce();\n\t\tchar *zErrMsg = NULL;\n\t\tint rc;\n\t\tsqlite3_stmt *stmt;\n\n\t\tlogSQL(\"CommonRetrieve\", query);\n\n\t\t// Prepare the SQL statement and get the result set\n\t\trc = sqlite3_prepare_v2(dbHandle, query, -1, &stmt, NULL);\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\t\t\tLogger::getLogger()->error(\"SQL statement: %s\", query);\n\t\t\tdelete[] query;\n\t\t\treturn false;\n\t\t}\n\n\t\t// Call result set mapping\n\t\trc = mapResultSet(stmt, resultSet);\n\n\t\t// Delete result set\n\t\tsqlite3_finalize(stmt);\n\n\t\t// Check result set mapping errors\n\t\tif (rc != SQLITE_DONE)\n\t\t{\n\t\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\t\t\tLogger::getLogger()->error(\"SQL statement: %s\", query);\n\t\t\tdelete[] query;\n\t\t\t// Failure\n\t\t\treturn false;\n\t\t}\n\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\t\t// Success\n\t\treturn true;\n\t} catch (exception& e) {\n\t\traiseError(\"retrieve\", \"Internal error: %s\", e.what());\n\t}\n\treturn false;\n}\n#endif\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Insert data into a table\n */\nint Connection::insert(const std::string& schema,\n\t\t\tconst std::string& table,\n\t\t\tconst std::string& data)\n{\nDocument\tdocument;\nostringstream convert;\nsqlite3_stmt *stmt;\nint rc;\nstd::size_t arr = data.find(\"inserts\");\n\n\tif (!m_schemaManager->exists(dbHandle, schema))\n\t{\n\t\traiseError(\"insert\", \"Schema %s does not exist, unable to insert into table %s\", 
schema.c_str(), table.c_str());\n\t\treturn -1;\n\t}\n\n\t// Check first the 'inserts' property in JSON data\n\tbool stdInsert = (arr == std::string::npos || arr > 8);\n\n\t// If input data is not an array of inserts\n\t// create an array with one element\n\tif (stdInsert)\n\t{\n\t\tconvert << \"{ \\\"inserts\\\" : [ \";\n\t\tconvert << data;\n\t\tconvert << \" ] }\";\n\t}\n\n\tif (document.Parse(stdInsert ? convert.str().c_str() : data.c_str()).HasParseError())\n\t{\n\t\traiseError(\"insert\", \"Failed to parse JSON payload\\n\");\n\t\treturn -1;\n\t}\n\t// Get the array with row(s)\n\tValue &inserts = document[\"inserts\"];\n\tif (!inserts.IsArray())\n\t{\n\t\traiseError(\"insert\", \"Payload is missing the inserts array\");\n\t\treturn -1;\n\t}\n\n\t// Number of inserts\n\tint ins = 0;\n\tint failedInsertCount = 0;\n\n\t// Generate sql query for prepared statement\n\tfor (Value::ConstValueIterator iter = inserts.Begin();\n\t\t\t\t\titer != inserts.End();\n\t\t\t\t\t++iter)\n\t{\n\t\tif (!iter->IsObject())\n\t\t{\n\t\t\traiseError(\"insert\",\n\t\t\t\t   \"Each entry in the insert array must be an object\");\n\t\t\treturn -1;\n\t\t}\n\n\t\t{\n\t\t\tint col = 0;\n\t\t\tSQLBuffer sql;\n\t\t\tSQLBuffer values;\n\t\t\tsql.append(\"INSERT INTO \" + schema + \".\" + table + \" (\");\n\n\t\t\tfor (Value::ConstMemberIterator itr = (*iter).MemberBegin();\n\t\t\t\t\t\t\titr != (*iter).MemberEnd();\n\t\t\t\t\t\t\t++itr)\n\t\t\t{\n\t\t\t\t// Append column name\n\t\t\t\tif (col)\n\t\t\t\t{\n\t\t\t\t\tsql.append(\", \");\n\t\t\t\t}\n\t\t\t\tsql.append(itr->name.GetString());\n\t\t\t\tcol++;\n\t\t\t}\n\n\t\t\tsql.append(\") VALUES (\");\n\t\t\tfor (auto i = 0; i < col; i++)\n\t\t\t{\n\t\t\t\tif (i)\n\t\t\t\t{\n\t\t\t\t\tsql.append(\",\");\n\t\t\t\t}\n\t\t\t\tsql.append(\"?\");\n\t\t\t}\n\t\t\tsql.append(\");\");\n\n\t\t\tconst char *query = sql.coalesce();\n\n\t\t\trc = sqlite3_prepare_v2(dbHandle, query, -1, &stmt, NULL);\n\t\t\tif (rc != 
SQLITE_OK)\n\t\t\t{\n\t\t\t\traiseError(\"insert\", sqlite3_errmsg(dbHandle));\n\t\t\t\tLogger::getLogger()->error(\"SQL statement: %s\", query);\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\t// Bind columns with prepared sql query\n\t\t\tint columnID = 1;\n\t\t\tfor (Value::ConstMemberIterator itr = (*iter).MemberBegin();\n\t\t\t\t\t\t\titr != (*iter).MemberEnd();\n\t\t\t\t\t\t\t++itr)\n\t\t\t{\n\t\t\t\tif (itr->value.IsString())\n\t\t\t\t{\n\t\t\t\t\tconst char *str = itr->value.GetString();\n\t\t\t\t\tif (strcmp(str, \"now()\") == 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tsqlite3_bind_text(stmt, columnID, SQLITE3_NOW, -1, SQLITE_TRANSIENT);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tsqlite3_bind_text(stmt, columnID, str, -1, SQLITE_TRANSIENT);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse if (itr->value.IsDouble())\n\t\t\t\t{\n\t\t\t\t\tsqlite3_bind_double(stmt, columnID, itr->value.GetDouble());\n\t\t\t\t}\n\t\t\t\telse if (itr->value.IsInt64())\n\t\t\t\t{\n\t\t\t\t\t// Bind as a 64 bit integer so large values are not truncated\n\t\t\t\t\tsqlite3_bind_int64(stmt, columnID, itr->value.GetInt64());\n\t\t\t\t}\n\t\t\t\telse if (itr->value.IsInt())\n\t\t\t\t{\n\t\t\t\t\tsqlite3_bind_int(stmt, columnID, itr->value.GetInt());\n\t\t\t\t}\n\t\t\t\telse if (itr->value.IsObject())\n\t\t\t\t{\n\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\titr->value.Accept(writer);\n\t\t\t\t\tsqlite3_bind_text(stmt, columnID, buffer.GetString(), -1, SQLITE_TRANSIENT);\n\t\t\t\t}\n\t\t\t\tcolumnID++;\n\t\t\t}\n\n\t\t\tif (sqlite3_exec(dbHandle, \"BEGIN TRANSACTION\", NULL, NULL, NULL) != SQLITE_OK)\n\t\t\t{\n\t\t\t\traiseError(\"insert\", sqlite3_errmsg(dbHandle));\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\tm_writeAccessOngoing.fetch_add(1);\n\n\t\t\tint sqlite3_result = SQLstep(stmt);\n\n\t\t\tm_writeAccessOngoing.fetch_sub(1);\n\n\t\t\tif (sqlite3_result == 
SQLITE_DONE)\n\t\t\t{\n\t\t\t\tsqlite3_clear_bindings(stmt);\n\t\t\t\tsqlite3_reset(stmt);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tfailedInsertCount++;\n\t\t\t\traiseError(\"insert\", sqlite3_errmsg(dbHandle));\n\t\t\t\tLogger::getLogger()->error(\"SQL statement: %s\", sqlite3_expanded_sql(stmt));\n\t\t\t\t\n\t\t\t\t// transaction is still open, do rollback\n\t\t\t\tif (sqlite3_get_autocommit(dbHandle) == 0)\n\t\t\t\t{\n\t\t\t\t\trc = sqlite3_exec(dbHandle,\"ROLLBACK TRANSACTION;\",NULL,NULL,NULL);\n\t\t\t\t\tif (rc != SQLITE_OK)\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"insert rollback\", sqlite3_errmsg(dbHandle));\n\t\t\t\t\t}\n\t\t\t\t\n\t\t\t\t}\n\t\t\t}\n\t\t\t\n\n\t\t\tif (sqlite3_exec(dbHandle, \"COMMIT TRANSACTION\", NULL, NULL, NULL) != SQLITE_OK)\n\t\t\t{\n\t\t\t\traiseError(\"insert\", sqlite3_errmsg(dbHandle));\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\n\t\t\tdelete[] query;\n\t\t}\n\t\t// Increment row count\n\t\tins++;\n\t\t\n\t}\n\n\tsqlite3_finalize(stmt);\n\n\tif (m_writeAccessOngoing == 0)\n\t\tdb_cv.notify_all();\n\n\tif (failedInsertCount)\n\t{\n\t\tchar buf[100];\n\t\tsnprintf(buf, sizeof(buf),\n\t\t\t\t\"Not all inserts into table '%s.%s' within transaction succeeded\",\n\t\t\t\tschema.c_str(), table.c_str());\n\t\traiseError(\"insert\", buf);\n\t}\n\n\treturn (!failedInsertCount ? 
ins : -1);\n}\n#endif\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Perform an update against a common table\n * This routine uses SQLite 3 JSON1 extension:\n *\n *    json_set(field, '$.key.value', the_value)\n *\n */\nint Connection::update(const string& schema,\n\t\t\tconst string& table,\n\t\t\tconst string& payload)\n{\n// Default template parameter uses UTF8 and MemoryPoolAllocator.\nDocument\tdocument;\nSQLBuffer\tsql;\nbool\t\tallowZero = false;\nvector<string>  asset_codes;\n\n\tint \trow = 0;\n\tostringstream convert;\n\n\tif (!m_schemaManager->exists(dbHandle, schema))\n\t{\n\t\traiseError(\"update\",\n\t\t\t\"Schema %s does not exist, unable to update table %s\",\n\t\t\tschema.c_str(),\n\t\t\ttable.c_str());\n\t\treturn -1;\n\t}\n\n\tstd::size_t arr = payload.find(\"updates\");\n\tbool changeReqd = (arr == std::string::npos || arr > 8);\n\tif (changeReqd)\n\t{\n\t\tconvert << \"{ \\\"updates\\\" : [ \";\n\t\tconvert << payload;\n\t\tconvert << \" ] }\";\n\t}\n\n\tif (document.Parse(changeReqd ? convert.str().c_str() : payload.c_str()).HasParseError())\n\t{\n\t\traiseError(\"update\", \"Failed to parse JSON payload\");\n\t\treturn -1;\n\t}\n\telse\n\t{\n\t\tValue &updates = document[\"updates\"];\n\t\tif (!updates.IsArray())\n\t\t{\n\t\t\traiseError(\"update\", \"Payload is missing the updates array\");\n\t\t\treturn -1;\n\t\t}\n\n\t\tsql.append(\"BEGIN TRANSACTION;\");\n\t\tint i = 0;\n\t\tfor (Value::ConstValueIterator iter = updates.Begin(); iter != updates.End(); ++iter, ++i)\n\t\t{\n\t\t\tif (!iter->IsObject())\n\t\t\t{\n\t\t\t\traiseError(\"update\",\n\t\t\t\t\t   \"Each entry in the update array must be an object\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tsql.append(\"UPDATE \");\n\t\t\tsql.append(schema);\n\t\t\tsql.append('.');\n\t\t\tsql.append(table);\n\t\t\tsql.append(\" SET \");\n\n\t\t\tint\t\tcol = 0;\n\t\t\tif ((*iter).HasMember(\"values\"))\n\t\t\t{\n\t\t\t\tconst Value& values = (*iter)[\"values\"];\n\t\t\t\tfor (Value::ConstMemberIterator itr = 
values.MemberBegin();\n\t\t\t\t\t\titr != values.MemberEnd(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col != 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\", \");\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(itr->name.GetString());\n\t\t\t\t\tsql.append(\" = \");\n\n\t\t\t\t\tif (itr->value.IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tconst char *str = itr->value.GetString();\n\t\t\t\t\t\tif (strcmp(str, \"now()\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(SQLITE3_NOW);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t\tsql.append(escape(str));\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (itr->value.IsDouble())\n\t\t\t\t\t\tsql.append(itr->value.GetDouble());\n\t\t\t\t\telse if (itr->value.IsUint64())\n\t\t\t\t\t\tsql.append((unsigned long)itr->value.GetUint64());\n\t\t\t\t\telse if (itr->value.IsInt64())\n\t\t\t\t\t\tsql.append((long)itr->value.GetInt64());\n\t\t\t\t\telse if (itr->value.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\titr->value.Accept(writer);\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\tsql.append(escape(buffer.GetString()));\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t}\n\t\t\t\t\t// Handle JSON value null: \"item\" : null\n\t\t\t\t\telse if (itr->value.IsNull())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\"NULL\");\n\t\t\t\t\t}\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif ((*iter).HasMember(\"expressions\"))\n\t\t\t{\n\t\t\t\tconst Value& exprs = (*iter)[\"expressions\"];\n\t\t\t\tif (!exprs.IsArray())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"update\", \"The property expressions must be an array\");\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\tfor (Value::ConstValueIterator itr = exprs.Begin(); itr != exprs.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col != 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\", \");\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   
\"expressions must be an array of objects\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember(\"column\"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"Missing column property in expressions array item\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember(\"operator\"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"Missing operator property in expressions array item\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember(\"value\"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"Missing value property in expressions array item\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\tsql.append(\" = \");\n\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t\tsql.append((*itr)[\"operator\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t\tconst Value& value = (*itr)[\"value\"];\n\t\t \n\t\t\t\t\tif (value.IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tconst char *str = value.GetString();\n\t\t\t\t\t\tif (strcmp(str, \"now()\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(SQLITE3_NOW);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t\tsql.append(str);\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsDouble())\n\t\t\t\t\t\tsql.append(value.GetDouble());\n\t\t\t\t\telse if (value.IsInt64())\n\t\t\t\t\t\tsql.append((long)value.GetInt64());\n\t\t\t\t\telse if (value.IsInt())\n\t\t\t\t\t\tsql.append(value.GetInt());\n\t\t\t\t\telse if (value.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\tvalue.Accept(writer);\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\tsql.append(buffer.GetString());\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t}\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif 
((*iter).HasMember(\"json_properties\"))\n\t\t\t{\n\t\t\t\tconst Value& exprs = (*iter)[\"json_properties\"];\n\t\t\t\tif (!exprs.IsArray())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t   \"The property json_properties must be an array\");\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\tfor (Value::ConstValueIterator itr = exprs.Begin(); itr != exprs.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col != 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append( \", \");\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"json_properties must be an array of objects\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember(\"column\"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"Missing column property in json_properties array item\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember(\"path\"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"Missing path property in json_properties array item\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!itr->HasMember(\"value\"))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t  \"Missing value property in json_properties array item\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\n\t\t\t\t\t// SQLite 3 JSON1 extension: json_set\n\t\t\t\t\t// json_set(field, '$.key.value', the_value)\n\t\t\t\t\tsql.append(\" = json_set(\");\n\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\tsql.append(\", '$.\");\n\n\t\t\t\t\tconst Value& path = (*itr)[\"path\"];\n\t\t\t\t\tif (!path.IsArray())\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t   \"The property path must be an array\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tint pathElement = 0;\n\t\t\t\t\tfor (Value::ConstValueIterator itr2 = path.Begin();\n\t\t\t\t\t\titr2 != path.End(); ++itr2)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (pathElement > 
0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append('.');\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (itr2->IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(itr2->GetString());\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"update\",\n\t\t\t\t\t\t\t\t   \"The elements of path must all be strings\");\n\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tpathElement++;\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(\"', \");\n\t\t\t\t\tconst Value& value = (*itr)[\"value\"];\n\t\t \n\t\t\t\t\tif (value.IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tconst char *str = value.GetString();\n\t\t\t\t\t\tif (strcmp(str, \"now()\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(SQLITE3_NOW);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t\tsql.append(escape(str));\n\t\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsDouble())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(value.GetDouble());\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsInt64())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append((long)value.GetInt64());\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsInt())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(value.GetInt());\n\t\t\t\t\t}\n\t\t\t\t\telse if (value.IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\tvalue.Accept(writer);\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\tsql.append(escape(buffer.GetString()));\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(\")\");\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (iter->HasMember(\"modifier\") && (*iter)[\"modifier\"].IsArray())\n\t\t\t{\n\t\t\t\tconst Value& modifier = (*iter)[\"modifier\"];\n\t\t\t\tfor (Value::ConstValueIterator modifiers = modifier.Begin(); modifiers != modifier.End(); ++modifiers)\n                \t\t{\n\t\t\t\t\tif (modifiers->IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tstring mod = modifiers->GetString();\n\t\t\t\t\t\tif (mod.compare(\"allowzero\") == 
0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tallowZero = true;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (col == 0)\n\t\t\t{\n\t\t\t\traiseError(\"update\",\n\t\t\t\t\t   \"Missing values or expressions object in payload\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tif ((*iter).HasMember(\"condition\"))\n\t\t\t{\n\t\t\t\tsql.append(\" WHERE \");\n\t\t\t\tif (!jsonWhereClause((*iter)[\"condition\"], sql, asset_codes))\n\t\t\t\t{\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse if ((*iter).HasMember(\"where\"))\n\t\t\t{\n\t\t\t\tsql.append(\" WHERE \");\n\t\t\t\tif (!jsonWhereClause((*iter)[\"where\"], sql, asset_codes))\n\t\t\t\t{\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\t\t\tsql.append(';');\n\t\t\trow++;\n\t\t}\n\t}\n\tsql.append(\"COMMIT TRANSACTION;\");\n\n\tconst char *query = sql.coalesce();\n\tlogSQL(\"CommonUpdate\", query);\n\tchar *zErrMsg = NULL;\n\tint rc;\n\n\t// Exec the UPDATE statement: no callback, no result set\n\tm_writeAccessOngoing.fetch_add(1);\n\trc = SQLexec(dbHandle, table,\n\t\t     query,\n\t\t     NULL,\n\t\t     NULL,\n\t\t     &zErrMsg);\n\tm_writeAccessOngoing.fetch_sub(1);\n\tif (m_writeAccessOngoing == 0)\n\t\tdb_cv.notify_all();\n\n\t// Check result code\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"update\", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\tif (sqlite3_get_autocommit(dbHandle) == 0) // transaction is still open, do rollback\n\t\t{\n\t\t\trc = SQLexec(dbHandle, table,\n\t\t\t\t\"ROLLBACK TRANSACTION;\",\n\t\t\t\tNULL,\n\t\t\t\tNULL,\n\t\t\t\t&zErrMsg);\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\traiseError(\"rollback\", zErrMsg);\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t}\n\t\t}\n\t\tLogger::getLogger()->error(\"SQL statement: %s\", query);\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\t\treturn -1;\n\t}\n\telse\n\t{\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\n\t\tint update = sqlite3_changes(dbHandle);\n\n\t\tint return_value = 0;\n\n\t\tif (update == 0 && allowZero == 
false)\n\t\t{\n\t\t\tchar buf[100];\n\t\t\tsnprintf(buf, sizeof(buf),\n\t\t\t\t\t\"Not all updates within transaction '%s.%s' succeeded\",\n\t\t\t\t\tschema.c_str(), table.c_str());\n\t\t\traiseError(\"update\", buf);\n\t\t\treturn_value = -1;\n\t\t}\n\t\telse\n\t\t{\n\t\t\treturn_value = (row == 1 ? update : row);\n\t\t}\n\n\t\t// Returns the number of rows affected, cases :\n\t\t//\n\t\t// 1) update == 0, no update,                                    returns -1\n\t\t// 2) single command SQL that could affects multiple rows,       returns 'update'\n\t\t// 3) multiple SQL commands packed and executed in one SQLexec,  returns 'row'\n\t\treturn (return_value);\n\t}\n\n\t// Return failure\n\treturn -1;\n}\n#endif\n\n/**\n * Format a date to a fixed format with milliseconds, microseconds and\n * timezone expressed, examples :\n *\n *   case - formatted |2019-01-01 10:01:01.000000+00:00| date |2019-01-01 10:01:01|\n *   case - formatted |2019-02-01 10:02:01.000000+00:00| date |2019-02-01 10:02:01.0|\n *   case - formatted |2019-02-02 10:02:02.841000+00:00| date |2019-02-02 10:02:02.841|\n *   case - formatted |2019-02-03 10:02:03.123456+00:00| date |2019-02-03 10:02:03.123456|\n *   case - formatted |2019-03-01 10:03:01.100000+00:00| date |2019-03-01 10:03:01.1+00:00|\n *   case - formatted |2019-03-02 10:03:02.123000+00:00| date |2019-03-02 10:03:02.123+00:00|\n *   case - formatted |2019-03-03 10:03:03.123456+00:00| date |2019-03-03 10:03:03.123456+00:00|\n *   case - formatted |2019-03-04 10:03:04.123456+01:00| date |2019-03-04 10:03:04.123456+01:00|\n *   case - formatted |2019-03-05 10:03:05.123456-01:00| date |2019-03-05 10:03:05.123456-01:00|\n *   case - formatted |2019-03-04 10:03:04.123456+02:30| date |2019-03-04 10:03:04.123456+02:30|\n *   case - formatted |2019-03-05 10:03:05.123456-02:30| date |2019-03-05 10:03:05.123456-02:30|\n *\n * @param out\tfalse if the date is invalid\n *\n */\nbool Connection::formatDate(char *formatted_date, size_t buffer_size, 
const char *date)\n{\n\n\tstruct timeval tv = {0};\n\tstruct tm tm  = {0};\n\tchar *valid_date = nullptr;\n\n\tenum codeOptimization{CO_NONE, CO_01, CO_02, CO_03};\n\tcodeOptimization opt;\n\tint len;\n\n\t// Code optimization for the cases:\n\t//\n\t// 2019-03-03 10:03:03.123456+00:00\n\t// 2019-02-02 10:02:02.841\n\t// 2019-01-01 10:01:01\n\n\tlen = strlen(date);\n\tif (len == 32)\n\t{\n\t\tif ( date[19] == '.' &&\n\t\t\t (date[26] == '-' || date[26] == '+')&&\n\t\t\t date[29] == ':' )\n\n\t\t{\n\t\t\t// Case - 2019-03-03 10:03:03.123456+00:00\n\t\t\tstrcpy(formatted_date, date);\n\t\t\topt = CO_01;\n\t\t}\n\t\telse\n\t\t\topt = CO_NONE;\n\n\t}\n\telse if (len == 23)\n\t{\n\t\tif ( date[19] == '.')\n\t\t{\n\t\t\t// Case - 2019-02-02 10:02:02.841\n\t\t\tstrcpy(formatted_date, date);\n\t\t\tstrcat(formatted_date, \"000+00:00\");\n\t\t\topt = CO_02;\n\t\t}\n\t\telse\n\t\t\topt = CO_NONE;\n\t}\n\telse if (len == 19)\n\t{\n\t\t// Case - 2019-01-01 10:01:01\n\t\tstrcpy(formatted_date, date);\n\t\tstrcat(formatted_date, \".000000+00:00\");\n\t\topt = CO_03;\n\t}\n\telse\n\t{\n\t\topt = CO_NONE;\n\t}\n\n\tif (opt != CO_NONE)\n\t{\n\t\treturn (true);\n\t}\n\n\t// Extract up to seconds\n\tmemset(&tm, 0, sizeof(tm));\n\tvalid_date = strptime(date, F_DATEH24_SEC, &tm);\n\n\tif (! 
valid_date)\n\t{\n\t\treturn (false);\n\t}\n\n\tstrftime (formatted_date, buffer_size, F_DATEH24_SEC, &tm);\n\n\t// Work out the microseconds from the fractional part of the seconds\n\tchar fractional[10] = {0};\n\tsscanf(date, \"%*d-%*d-%*d %*d:%*d:%*d.%[0-9]*\", fractional);\n\t// Truncate to max 6 digits\n\tfractional[6] = 0;\n\tint multiplier = 6 - (int)strlen(fractional);\n\tif (multiplier < 0)\n\t\tmultiplier = 0;\n\twhile (multiplier--)\n\t\tstrcat(fractional, \"0\");\n\n\tstrcat(formatted_date ,\".\");\n\tstrcat(formatted_date ,fractional);\n\n\t// Handles timezone\n\tchar timezone_hour[5] = {0};\n\tchar timezone_min[5] = {0};\n\tchar sign[2] = {0};\n\n\tsscanf(date, \"%*d-%*d-%*d %*d:%*d:%*d.%*d-%2[0-9]:%2[0-9]\", timezone_hour, timezone_min);\n\tif (timezone_hour[0] != 0)\n\t{\n\t\tstrcat(sign, \"-\");\n\t}\n\telse\n\t{\n\t\tmemset(timezone_hour, 0, sizeof(timezone_hour));\n\t\tmemset(timezone_min,  0, sizeof(timezone_min));\n\n\t\tsscanf(date, \"%*d-%*d-%*d %*d:%*d:%*d.%*d+%2[0-9]:%2[0-9]\", timezone_hour, timezone_min);\n\t\tif  (timezone_hour[0] != 0)\n\t\t{\n\t\t\tstrcat(sign, \"+\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// No timezone is expressed in the source date\n\t\t\t// the default UTC is added\n\t\t\tstrcat(formatted_date, \"+00:00\");\n\t\t}\n\t}\n\n\tif (sign[0] != 0)\n\t{\n\t\tif (timezone_hour[0] != 0)\n\t\t{\n\t\t\tstrcat(formatted_date, sign);\n\n\t\t\t// Pad with 0 if an hour having only 1 digit was provided\n\t\t\t// +1 -> +01\n\t\t\tif (strlen(timezone_hour) == 1)\n\t\t\t\tstrcat(formatted_date, \"0\");\n\n\t\t\tstrcat(formatted_date, timezone_hour);\n\t\t\tstrcat(formatted_date, \":\");\n\t\t}\n\n\t\tif (timezone_min[0] != 0)\n\t\t{\n\t\t\tstrcat(formatted_date, timezone_min);\n\n\t\t\t// Pad with 0 if minutes having only 1 digit were provided\n\t\t\t// 3 -> 30\n\t\t\tif (strlen(timezone_min) == 1)\n\t\t\t\tstrcat(formatted_date, \"0\");\n\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Minutes aren't expressed in the source 
date\n\t\t\tstrcat(formatted_date, \"00\");\n\t\t}\n\t}\n\n\n\treturn (true);\n\n\n}\n\n/**\n * Process the aggregate options and return the columns to be selected\n */\nbool Connection::jsonAggregates(const Value& payload,\n\t\t\t\tconst Value& aggregates,\n\t\t\t\tSQLBuffer& sql,\n\t\t\t\tSQLBuffer& jsonConstraint,\n\t\t\t\tbool isTableReading)\n{\n\tif (aggregates.IsObject())\n\t{\n\t\tif (! aggregates.HasMember(\"operation\"))\n\t\t{\n\t\t\traiseError(\"Select aggregation\",\n\t\t\t\t   \"Missing property \\\"operation\\\"\");\n\t\t\treturn false;\n\t\t}\n\t\tif ((! aggregates.HasMember(\"column\")) && (! aggregates.HasMember(\"json\")))\n\t\t{\n\t\t\traiseError(\"Select aggregation\",\n\t\t\t\t   \"Missing property \\\"column\\\" or \\\"json\\\"\");\n\t\t\treturn false;\n\t\t}\n\t\tsql.append(aggregates[\"operation\"].GetString());\n\t\tsql.append('(');\n\t\tif (aggregates.HasMember(\"column\"))\n\t\t{\n\t\t\tstring col = aggregates[\"column\"].GetString();\n\t\t\tif (col.compare(\"*\") == 0)\t// Faster to count ROWID rather than *\n\t\t\t{\n\t\t\t\tsql.append(\"ROWID\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// an operation different from the 'count' is requested\n\t\t\t\tif (isTableReading && (col.compare(\"user_ts\") == 0) )\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_SEC \"', user_ts, 'localtime') \");\n\t\t\t\t\tsql.append(\" || substr(user_ts, instr(user_ts, '.'), 7) \");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append(col);\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\telse if (aggregates.HasMember(\"json\"))\n\t\t{\n\t\t\tconst Value& json = aggregates[\"json\"];\n\t\t\tif (! 
json.IsObject())\n\t\t\t{\n\t\t\t\traiseError(\"Select aggregation\",\n\t\t\t\t\t   \"The json property must be an object\");\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tif (!json.HasMember(\"column\"))\n\t\t\t{\n\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t   \"The json property is missing a column property\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\t// Use json_extract(field, '$.key1.key2') AS value\n\t\t\tsql.append(\"json_extract(\");\n\t\t\tsql.append(json[\"column\"].GetString());\n\t\t\tsql.append(\", '$.\");\n\n\t\t\tif (!json.HasMember(\"properties\"))\n\t\t\t{\n\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t   \"The json property is missing a properties property\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tconst Value& jsonFields = json[\"properties\"];\n\n\t\t\tif (jsonFields.IsArray())\n\t\t\t{\n\t\t\t\tif (! jsonConstraint.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tjsonConstraint.append(\" AND \");\n\t\t\t\t}\n\t\t\t\t// JSON1 SQLite3 extension 'json_type' object check:\n\t\t\t\t// json_type(field, '$.key1.key2') IS NOT NULL\n\t\t\t\t// Build the Json keys NULL check\n\t\t\t\tjsonConstraint.append(\"json_type(\");\n\t\t\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\t\t\tjsonConstraint.append(\", '$.\");\n\n\t\t\t\tint field = 0;\n\t\t\t\tstring prev;\n\t\t\t\tfor (Value::ConstValueIterator itr = jsonFields.Begin(); itr != jsonFields.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (field)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\".\");\n\t\t\t\t\t}\n\t\t\t\t\tif (prev.length() > 0)\n\t\t\t\t\t{\n\t\t\t\t\t\t// Append Json field for NULL check\n\t\t\t\t\t\tjsonConstraint.append(prev);\n\t\t\t\t\t\tjsonConstraint.append(\".\");\n\t\t\t\t\t}\n\t\t\t\t\tprev = itr->GetString();\n\t\t\t\t\tfield++;\n\t\t\t\t\t// Append Json field for query\n\t\t\t\t\tsql.append(itr->GetString());\n\t\t\t\t}\n\t\t\t\t// Add last Json key\n\t\t\t\tjsonConstraint.append(prev);\n\n\t\t\t\t// Add condition for all json keys not null\n\t\t\t\tjsonConstraint.append(\"') IS NOT 
NULL\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// Append Json field for query\n\t\t\t\tsql.append(jsonFields.GetString());\n\n\t\t\t\tif (! jsonConstraint.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tjsonConstraint.append(\" AND \");\n\t\t\t\t}\n\t\t\t\t// JSON1 SQLite3 extension 'json_type' object check:\n\t\t\t\t// json_type(field, '$.key1.key2') IS NOT NULL\n\t\t\t\t// Build the Json key NULL check\n\t\t\t\tjsonConstraint.append(\"json_type(\");\n\t\t\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\t\t\tjsonConstraint.append(\", '$.\");\n\t\t\t\tjsonConstraint.append(jsonFields.GetString());\n\n\t\t\t\t// Add condition for json key not null\n\t\t\t\tjsonConstraint.append(\"') IS NOT NULL\");\n\t\t\t}\n\t\t\tsql.append(\"')\");\n\t\t}\n\t\tsql.append(\") AS \\\"\");\n\t\tif (aggregates.HasMember(\"alias\"))\n\t\t{\n\t\t\tsql.append(aggregates[\"alias\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(aggregates[\"operation\"].GetString());\n\t\t\tsql.append('_');\n\t\t\tsql.append(aggregates[\"column\"].GetString());\n\t\t}\n\t\tsql.append(\"\\\"\");\n\t}\n\telse if (aggregates.IsArray())\n\t{\n\t\tint index = 0;\n\t\tfor (Value::ConstValueIterator itr = aggregates.Begin(); itr != aggregates.End(); ++itr)\n\t\t{\n\t\t\tif (!itr->IsObject())\n\t\t\t{\n\t\t\t\traiseError(\"select aggregation\",\n\t\t\t\t\t\t\"Each element in the aggregate array must be an object\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif ((! itr->HasMember(\"column\")) && (! itr->HasMember(\"json\")))\n\t\t\t{\n\t\t\t\traiseError(\"Select aggregation\", \"Missing property \\\"column\\\"\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif (! 
itr->HasMember(\"operation\"))\n\t\t\t{\n\t\t\t\traiseError(\"Select aggregation\", \"Missing property \\\"operation\\\"\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tif (index)\n\t\t\t\tsql.append(\", \");\n\t\t\tindex++;\n\t\t\tsql.append((*itr)[\"operation\"].GetString());\n\t\t\tsql.append('(');\n\t\t\tif (itr->HasMember(\"column\"))\n\t\t\t{\n\t\t\t\tstring column_name= (*itr)[\"column\"].GetString();\n\t\t\t\tif (isTableReading && (column_name.compare(\"user_ts\") == 0) )\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_SEC \"', user_ts, 'localtime') \");\n\t\t\t\t\tsql.append(\" || substr(user_ts, instr(user_ts, '.'), 7) \");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t\tsql.append(column_name);\n\t\t\t\t\tsql.append(\"\\\"\");\n\t\t\t\t}\n\n\t\t\t}\n\t\t\telse if (itr->HasMember(\"json\"))\n\t\t\t{\n\t\t\t\tconst Value& json = (*itr)[\"json\"];\n\t\t\t\tif (! json.IsObject())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"Select aggregation\", \"The json property must be an object\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (!json.HasMember(\"column\"))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", \"The json property is missing a column property\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (!json.HasMember(\"properties\"))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", \"The json property is missing a properties property\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tconst Value& jsonFields = json[\"properties\"];\n\t\t\t\tif (! 
jsonConstraint.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tjsonConstraint.append(\" AND \");\n\t\t\t\t}\n\t\t\t\t// Use json_extract(field, '$.key1.key2') AS value\n\t\t\t\tsql.append(\"json_extract(\");\n\t\t\t\tsql.append(json[\"column\"].GetString());\n\t\t\t\tsql.append(\", '$.\");\n\n\t\t\t\t// JSON1 SQLite3 extension 'json_type' object check:\n\t\t\t\t// json_type(field, '$.key1.key2') IS NOT NULL\n\t\t\t\t// Build the Json keys NULL check\n\t\t\t\tjsonConstraint.append(\"json_type(\");\n\t\t\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\t\t\tjsonConstraint.append(\", '$.\");\n\n\t\t\t\tif (jsonFields.IsArray())\n\t\t\t\t{\n\t\t\t\t\tstring prev;\n\t\t\t\t\tfor (Value::ConstValueIterator itr = jsonFields.Begin(); itr != jsonFields.End(); ++itr)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (prev.length() > 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tjsonConstraint.append(prev);\n\t\t\t\t\t\t\tjsonConstraint.append('.');\n\t\t\t\t\t\t\tsql.append('.');\n\t\t\t\t\t\t}\n\t\t\t\t\t\t// Append Json field for query\n\t\t\t\t\t\tsql.append(itr->GetString());\n\t\t\t\t\t\tprev = itr->GetString();\n\t\t\t\t\t}\n\t\t\t\t\t// Add last Json key\n\t\t\t\t\tjsonConstraint.append(prev);\n\n\t\t\t\t\t// Add condition for json key not null\n\t\t\t\t\tjsonConstraint.append(\"') IS NOT NULL\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\t// Append Json field for query\n\t\t\t\t\tsql.append(jsonFields.GetString());\n\n\t\t\t\t\t// JSON1 SQLite3 extension 'json_type' object check:\n\t\t\t\t\t// json_type(field, '$.key1.key2') IS NOT NULL\n\t\t\t\t\t// Build the Json key NULL check\n\t\t\t\t\tjsonConstraint.append(jsonFields.GetString());\n\n\t\t\t\t\t// Add condition for json key not null\n\t\t\t\t\tjsonConstraint.append(\"') IS NOT NULL\");\n\t\t\t\t}\n\t\t\t\tsql.append(\"')\");\n\t\t\t}\n\t\t\tsql.append(\") AS \\\"\");\n\t\t\tif 
(itr->HasMember(\"alias\"))\n\t\t\t{\n\t\t\t\tsql.append((*itr)[\"alias\"].GetString());\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append((*itr)[\"operation\"].GetString());\n\t\t\t\tsql.append('_');\n\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t}\n\t\t\tsql.append(\"\\\"\");\n\t\t}\n\t}\n\tif (payload.HasMember(\"group\"))\n\t{\n\t\tsql.append(\", \");\n\t\tif (payload[\"group\"].IsObject())\n\t\t{\n\t\t\tconst Value& grp = payload[\"group\"];\n\t\t\t\n\t\t\tif (grp.HasMember(\"format\"))\n\t\t\t{\n\t\t\t\t// SQLite 3 date format.\n\t\t\t\tstring new_format;\n\t\t\t\tif (isTableReading)\n\t\t\t\t{\n\t\t\t\t\tapplyColumnDateFormatLocaltime(grp[\"format\"].GetString(),\n\t\t\t\t\t\t\t               grp[\"column\"].GetString(),\n\t\t\t\t\t\t\t               new_format);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tapplyColumnDateFormat(grp[\"format\"].GetString(),\n\t\t\t\t\t\t\t      grp[\"column\"].GetString(),\n\t\t\t\t\t\t\t      new_format);\n\t\t\t\t}\n\t\t\t\t// Add the formatted column or use it as is\n\t\t\t\tsql.append(new_format);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(grp[\"column\"].GetString());\n\t\t\t}\n\n\t\t\tif (grp.HasMember(\"alias\"))\n\t\t\t{\n\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\tsql.append(grp[\"alias\"].GetString());\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\tsql.append(grp[\"column\"].GetString());\n\t\t\t\tsql.append(\"\\\"\");\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(payload[\"group\"].GetString());\n\t\t}\n\t}\n\tif (payload.HasMember(\"timebucket\"))\n\t{\n\t\tconst Value& tb = payload[\"timebucket\"];\n\t\tif (! tb.IsObject())\n\t\t{\n\t\t\traiseError(\"Select data\",\n\t\t\t\t   \"The \\\"timebucket\\\" property must be an object\");\n\t\t\treturn false;\n\t\t}\n\t\tif (! 
tb.HasMember(\"timestamp\"))\n\t\t{\n\t\t\traiseError(\"Select data\",\n\t\t\t\t   \"The \\\"timebucket\\\" object must have a timestamp property\");\n\t\t\treturn false;\n\t\t}\n\n\t\tif (tb.HasMember(\"format\"))\n\t\t{\n\t\t\t// SQLite 3 date format is limited.\n\t\t\tstring new_format;\n\t\t\tif (applyDateFormat(tb[\"format\"].GetString(),\n\t\t\t\t\t    new_format))\n\t\t\t{\n\t\t\t\tsql.append(\", \");\n\t\t\t\t// Add the formatted column\n\t\t\t\tsql.append(new_format);\n\n\t\t\t\tif (tb.HasMember(\"size\"))\n\t\t\t\t{\n\t\t\t\t\t// Use Unix epoch, without microseconds\n\t\t\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\t\t\tsql.append(\" * round(\");\n\t\t\t\t\tsql.append(\"strftime('%s', \");\n\t\t\t\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\t\t\t\tsql.append(\") / \");\n\t\t\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\t\t\tsql.append(\", 6)\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\t\t\t}\n\t\t\t\tsql.append(\", 'unixepoch')\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t/**\n\t\t\t\t * No date format found: we should return an error.\n\t\t\t\t * Note: currently if input Json payload has no 'result' member\n\t\t\t\t * raiseError() results in no data being sent to the client\n\t\t\t\t * We use Unix epoch without microseconds\n\t\t\t\t */\n\t\t\t\tsql.append(\", datetime(\");\n\t\t\t\tif (tb.HasMember(\"size\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\t\t\tsql.append(\" * round(\");\n\t\t\t\t}\n\t\t\t\t// Use Unix epoch, without microseconds\n\t\t\t\tsql.append(\"strftime('%s', \");\n\t\t\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\t\t\tif (tb.HasMember(\"size\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(\") / \");\n\t\t\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\t\t\tsql.append(\", 6)\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append(\")\");\n\t\t\t\t}\n\t\t\t\tsql.append(\", 'unixepoch')\");\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(\", 
datetime(\");\n\t\t\tif (tb.HasMember(\"size\"))\n\t\t\t{\n\t\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\t\tsql.append(\" * round(\");\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * Default format when no format is specified:\n\t\t\t * - we use Unix time without milliseconds.\n\t\t\t */\n\t\t\tsql.append(\"strftime('%s', \");\n\t\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\t\tif (tb.HasMember(\"size\"))\n\t\t\t{\n\t\t\t\tsql.append(\") / \");\n\t\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\t\tsql.append(\", 6)\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\")\");\n\t\t\t}\n\t\t\tsql.append(\", 'unixepoch')\");\n\t\t}\n\n\t\tsql.append(\" AS \\\"\");\n\t\tif (tb.HasMember(\"alias\"))\n\t\t{\n\t\t\tsql.append(tb[\"alias\"].GetString());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(\"timestamp\");\n\t\t}\n\t\tsql.append('\"');\n\t}\n\treturn true;\n}\n\n/**\n * Process the modifiers for limit, skip, sort and group\n */\nbool Connection::jsonModifiers(const Value& payload,\n                               SQLBuffer& sql,\n\t\t\t       bool isTableReading)\n{\n\n\tif (payload.HasMember(\"timebucket\") && payload.HasMember(\"sort\"))\n\t{\n\t\traiseError(\"query modifiers\",\n\t\t\t   \"Sort and timebucket modifiers can not be used in the same payload\");\n\t\treturn false;\n\t}\n\n\tif (payload.HasMember(\"group\"))\n\t{\n\t\tsql.append(\" GROUP BY \");\n\t\tif (payload[\"group\"].IsObject())\n\t\t{\n\t\t\tconst Value& grp = payload[\"group\"];\n\t\t\tif (grp.HasMember(\"format\"))\n\t\t\t{\n\t\t\t\t/**\n\t\t\t\t * SQLite 3 date format is limited.\n\t\t\t\t * Handle all available formats here.\n\t\t\t\t */\n\t\t\t\tstring new_format;\n\t\t\t\tif (isTableReading)\n\t\t\t\t{\n\t\t\t\t\tapplyColumnDateFormatLocaltime(grp[\"format\"].GetString(),\n\t\t\t\t\t\t\t\t       grp[\"column\"].GetString(),\n\t\t\t\t\t\t\t\t       new_format);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tapplyColumnDateFormat(grp[\"format\"].GetString(),\n\t\t\t\t\t\t\t      
grp[\"column\"].GetString(),\n\t\t\t\t\t\t\t      new_format);\n\t\t\t\t}\n\n\t\t\t\t// Add the formatted column or use it as is\n\t\t\t\tsql.append(new_format);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(payload[\"group\"].GetString());\n\t\t}\n\t}\n\n\tif (payload.HasMember(\"sort\"))\n\t{\n\t\tsql.append(\" ORDER BY \");\n\t\tconst Value& sortBy = payload[\"sort\"];\n\t\tif (sortBy.IsObject())\n\t\t{\n\t\t\tif (! sortBy.HasMember(\"column\"))\n\t\t\t{\n\t\t\t\traiseError(\"Select sort\", \"Missing property \\\"column\\\"\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tsql.append(sortBy[\"column\"].GetString());\n\t\t\tsql.append(' ');\n\t\t\tif (! sortBy.HasMember(\"direction\"))\n\t\t\t{\n\t\t\t\tsql.append(\"ASC\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(sortBy[\"direction\"].GetString());\n\t\t\t}\n\t\t}\n\t\telse if (sortBy.IsArray())\n\t\t{\n\t\t\tint index = 0;\n\t\t\tfor (Value::ConstValueIterator itr = sortBy.Begin(); itr != sortBy.End(); ++itr)\n\t\t\t{\n\t\t\t\tif (!itr->IsObject())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"Select sort\",\n\t\t\t\t\t\t   \"Each element in the sort array must be an object\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (! itr->HasMember(\"column\"))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"Select sort\", \"Missing property \\\"column\\\"\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (index)\n\t\t\t\t{\n\t\t\t\t\tsql.append(\", \");\n\t\t\t\t}\n\t\t\t\tindex++;\n\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\tsql.append(' ');\n\t\t\t\tif (! itr->HasMember(\"direction\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(\"ASC\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tsql.append((*itr)[\"direction\"].GetString());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif (payload.HasMember(\"timebucket\"))\n\t{\n\t\tconst Value& tb = payload[\"timebucket\"];\n\t\tif (! 
tb.IsObject())\n\t\t{\n\t\t\traiseError(\"Select data\",\n\t\t\t\t   \"The \\\"timebucket\\\" property must be an object\");\n\t\t\treturn false;\n\t\t}\n\t\tif (! tb.HasMember(\"timestamp\"))\n\t\t{\n\t\t\traiseError(\"Select data\",\n\t\t\t\t   \"The \\\"timebucket\\\" object must have a timestamp property\");\n\t\t\treturn false;\n\t\t}\n\t\tif (payload.HasMember(\"group\"))\n\t\t{\n\t\t\tsql.append(\", \");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(\" GROUP BY \");\n\t\t}\n\n\t\t// Divide by \"size\"\n\t\tif (tb.HasMember(\"size\"))\n\t\t{\n\t\t\t// Use Unix epoch without milliseconds\n\t\t\tsql.append(\"round(strftime('%s', \");\n\t\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\t\tsql.append(\") / \");\n\t\t\tsql.append(tb[\"size\"].GetString());\n\t\t\tsql.append(\", 6)\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Use Unix epoch\n\t\t\tsql.append(\"strftime('%s', \");\n\t\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\t\tsql.append(\")\");\n\t\t}\n\n\t\tsql.append(\" ORDER BY \");\n\n\t\t// Use Unix epoch without milliseconds\n\t\tsql.append(\"strftime('%s', \");\n\t\tsql.append(tb[\"timestamp\"].GetString());\n\t\tsql.append(\")\");\n\n\t\tsql.append(\" DESC\");\n\t}\n\n\tif (payload.HasMember(\"limit\"))\n\t{\n\t\tif (!payload[\"limit\"].IsInt())\n\t\t{\n\t\t\traiseError(\"limit\",\n\t\t\t\t   \"Limit must be specified as an integer\");\n\t\t\treturn false;\n\t\t}\n\t\tsql.append(\" LIMIT \");\n\t\ttry {\n\t\t\tsql.append(payload[\"limit\"].GetInt());\n\t\t} catch (exception& e) {\n\t\t\traiseError(\"limit\",\n\t\t\t\t   \"Bad value for limit parameter: %s\",\n\t\t\t\t   e.what());\n\t\t\treturn false;\n\t\t}\n\t}\n\n\t// OFFSET must go after LIMIT\n\tif (payload.HasMember(\"skip\"))\n\t{\n\t\t// No LIMIT given: use LIMIT -1 (unlimited) so that OFFSET is valid\n\t\tif (!payload.HasMember(\"limit\"))\n\t\t{\n\t\t\tsql.append(\" LIMIT -1\");\n\t\t}\n\n\t\tif (!payload[\"skip\"].IsInt())\n\t\t{\n\t\t\traiseError(\"skip\",\n\t\t\t\t   \"Skip must be specified as an 
integer\");\n\t\t\treturn false;\n\t\t}\n\t\tsql.append(\" OFFSET \");\n\t\tsql.append(payload[\"skip\"].GetInt());\n\t}\n\treturn true;\n}\n\n/**\n * Convert a JSON where clause into a SQLite3 where clause\n */\nbool Connection::jsonWhereClause(const Value& whereClause,\n\t\t\t\t SQLBuffer& sql,\n\t\t\t\t std::vector<std::string>& asset_codes,\n\t\t\t\t bool convertLocaltime,\n\t\t\t\t string prefix)\n{\n\tif (!whereClause.IsObject())\n\t{\n\t\traiseError(\"where clause\", \"The \\\"where\\\" property must be a JSON object\");\n\t\treturn false;\n\t}\n\tif (!whereClause.HasMember(\"column\"))\n\t{\n\t\traiseError(\"where clause\", \"The \\\"where\\\" object is missing a \\\"column\\\" property\");\n\t\treturn false;\n\t}\n\tif (!whereClause.HasMember(\"condition\"))\n\t{\n\t\traiseError(\"where clause\", \"The \\\"where\\\" object is missing a \\\"condition\\\" property\");\n\t\treturn false;\n\t}\n\n\tstring column = whereClause[\"column\"].GetString();\n\tif (!prefix.empty())\n\t{\n\t\tsql.append(prefix);\n\t}\n\tsql.append(column);\n\tsql.append(' ');\n\tstring cond = whereClause[\"condition\"].GetString();\n\n\tif (cond.compare(\"isnull\") == 0)\n\t{\n\t\tsql.append(\"isnull \");\n\t}\n\telse if (cond.compare(\"notnull\") == 0)\n\t{\n\t\tsql.append(\"notnull \");\n\t}\n\telse\n\t{\n\t\tif (!whereClause.HasMember(\"value\"))\n\t\t{\n\t\t\traiseError(\"where clause\",\n\t\t\t\t\"The \\\"where\\\" object is missing a \\\"value\\\" property\");\n\t\t\treturn false;\n\t\t}\n\n\t\tif (!cond.compare(\"older\"))\n\t\t{\n\t\t\tif (!whereClause[\"value\"].IsInt())\n\t\t\t{\n\t\t\t\traiseError(\"where clause\",\n\t\t\t\t\t   \"The \\\"value\\\" of an \\\"older\\\" condition must be an integer\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tsql.append(\"< datetime('now', '-\");\n\t\t\tsql.append(whereClause[\"value\"].GetInt());\n\t\t\tif (convertLocaltime)\n\t\t\t\tsql.append(\" seconds', 'localtime')\"); // Get value in 
localtime\n\t\t\telse\n\t\t\t\tsql.append(\" seconds')\"); // Get value in UTC by asking for no timezone\n\t\t}\n\t\telse if (!cond.compare(\"newer\"))\n\t\t{\n\t\t\tif (!whereClause[\"value\"].IsInt())\n\t\t\t{\n\t\t\t\traiseError(\"where clause\",\n\t\t\t\t\t   \"The \\\"value\\\" of a \\\"newer\\\" condition must be an integer\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tsql.append(\"> datetime('now', '-\");\n\t\t\tsql.append(whereClause[\"value\"].GetInt());\n\t\t\tif (convertLocaltime)\n\t\t\t\tsql.append(\" seconds', 'localtime')\"); // Get value in localtime\n\t\t\telse\n\t\t\t\tsql.append(\" seconds')\"); // Get value in UTC by asking for no timezone\n\t\t}\n\t\telse if (!cond.compare(\"in\") || !cond.compare(\"not in\"))\n\t\t{\n\t\t\t// Check we have a non-empty array\n\t\t\tif (whereClause[\"value\"].IsArray() &&\n\t\t\t    whereClause[\"value\"].Size())\n\t\t\t{\n\t\t\t\tsql.append(cond);\n\t\t\t\tsql.append(\" ( \");\n\t\t\t\tint field = 0;\n\t\t\t\tfor (Value::ConstValueIterator itr = whereClause[\"value\"].Begin();\n\t\t\t\t\t\t\t\titr != whereClause[\"value\"].End();\n\t\t\t\t\t\t\t\t++itr)\n\t\t\t\t{\n\t\t\t\t\tif (field)\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(\", \");\n\t\t\t\t\t}\n\t\t\t\t\tfield++;\n\t\t\t\t\tif (itr->IsNumber())\n\t\t\t\t\t{\n\t\t\t\t\t\tif (itr->IsInt())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(itr->GetInt());\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (itr->IsInt64())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append((long)itr->GetInt64());\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(itr->GetDouble());\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (itr->IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t\tsql.append(escape(itr->GetString()));\n\t\t\t\t\t\tsql.append('\\'');\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tstring message(\"The \\\"value\\\" of a \\\"\" + \\\n\t\t\t\t\t\t\t\tcond + \\\n\t\t\t\t\t\t\t\t\"\\\" condition array element must be \" \\\n\t\t\t\t\t\t\t\t\"a string, 
integer or double.\");\n\t\t\t\t\t\traiseError(\"where clause\", message.c_str());\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tsql.append(\" )\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring message(\"The \\\"value\\\" of a \\\"\" + \\\n\t\t\t\t\t\tcond + \"\\\" condition must be an array \" \\\n\t\t\t\t\t\t\"and must not be empty.\");\n\t\t\t\traiseError(\"where clause\", message.c_str());\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tsql.append(cond);\n\t\t\tsql.append(' ');\n\t\t\tif (whereClause[\"value\"].IsInt())\n\t\t\t{\n\t\t\t\tsql.append(whereClause[\"value\"].GetInt());\n\t\t\t} else if (whereClause[\"value\"].IsString())\n\t\t\t{\n\t\t\t\tstring value = whereClause[\"value\"].GetString();\n\t\t\t\tsql.append('\\'');\n\t\t\t\tsql.append(escape(value));\n\t\t\t\tsql.append('\\'');\n\n\t\t\t\t// Identify a specific operation to restrict the tables involved\n\t\t\t\tif (column.compare(\"asset_code\") == 0 && cond.compare(\"=\") == 0)\n\t\t\t\t\tasset_codes.push_back(value);\n\t\t\t}\n\t\t}\n\t}\n\n\tif (whereClause.HasMember(\"and\"))\n\t{\n\t\tsql.append(\" AND \");\n\t\tif (!jsonWhereClause(whereClause[\"and\"], sql, asset_codes, convertLocaltime, prefix))\n\t\t{\n\t\t\treturn false;\n\t\t}\n\t}\n\tif (whereClause.HasMember(\"or\"))\n\t{\n\t\tsql.append(\" OR \");\n\t\tif (!jsonWhereClause(whereClause[\"or\"], sql, asset_codes, convertLocaltime, prefix))\n\t\t{\n\t\t\treturn false;\n\t\t}\n\t}\n\n\treturn true;\n}\n\n/**\n * This routine uses SQLite3 JSON1 extension functions\n */\nbool Connection::returnJson(const Value& json,\n\t\t\t    SQLBuffer& sql,\n\t\t\t    SQLBuffer& jsonConstraint)\n{\n\tif (! 
json.IsObject())\n\t{\n\t\traiseError(\"retrieve\", \"The json property must be an object\");\n\t\treturn false;\n\t}\n\tif (!json.HasMember(\"column\"))\n\t{\n\t\traiseError(\"retrieve\", \"The json property is missing a column property\");\n\t\treturn false;\n\t}\n\t// Call JSON1 SQLite3 extension routine 'json_extract'\n\t// json_extract(field, '$.key1.key2') AS value\n\tsql.append(\"json_extract(\");\n\tsql.append(json[\"column\"].GetString());\n\tsql.append(\", '$.\");\n\tif (!json.HasMember(\"properties\"))\n\t{\n\t\traiseError(\"retrieve\", \"The json property is missing a properties property\");\n\t\treturn false;\n\t}\n\tconst Value& jsonFields = json[\"properties\"];\n\tif (jsonFields.IsArray())\n\t{\n\t\tif (! jsonConstraint.isEmpty())\n\t\t{\n\t\t\tjsonConstraint.append(\" AND \");\n\t\t}\n\t\t// JSON1 SQLite3 extension 'json_type' object check:\n\t\t// json_type(field, '$.key1.key2') IS NOT NULL\n\t\t// Build the Json keys NULL check\n\t\tjsonConstraint.append(\"json_type(\");\n\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\tjsonConstraint.append(\", '$.\");\n\t\tint field = 0;\n\t\tstring prev;\n\t\tfor (Value::ConstValueIterator itr = jsonFields.Begin(); itr != jsonFields.End(); ++itr)\n\t\t{\n\t\t\tif (field)\n\t\t\t{\n\t\t\t\tsql.append(\".\");\n\t\t\t}\n\t\t\tif (prev.length())\n\t\t\t{\n\t\t\t\tjsonConstraint.append(prev);\n\t\t\t\tjsonConstraint.append(\".\");\n\t\t\t}\n\t\t\tfield++;\n\t\t\t// Append Json field for query\n\t\t\tsql.append(itr->GetString());\n\t\t\tprev = itr->GetString();\n\t\t}\n\t\t// Add last Json key\n\t\tjsonConstraint.append(prev);\n\n\t\t// Add condition for all json keys not null\n\t\tjsonConstraint.append(\"') IS NOT NULL\");\n\t}\n\telse\n\t{\n\t\t// Append Json field for query\n\t\tsql.append(jsonFields.GetString());\n\t\tif (! 
jsonConstraint.isEmpty())\n\t\t{\n\t\t\tjsonConstraint.append(\" AND \");\n\t\t}\n\t\t// JSON1 SQLite3 extension 'json_type' object check:\n\t\t// json_type(field, '$.key1.key2') IS NOT NULL\n\t\t// Build the Json key NULL check\n\t\tjsonConstraint.append(\"json_type(\");\n\t\tjsonConstraint.append(json[\"column\"].GetString());\n\t\tjsonConstraint.append(\", '$.\");\n\t\tjsonConstraint.append(jsonFields.GetString());\n\n\t\t// Add condition for json key not null\n\t\tjsonConstraint.append(\"') IS NOT NULL\");\n\t}\n\tsql.append(\"') \");\n\n\treturn true;\n}\n\n/**\n * Remove whitespace at both ends of a string\n */\nchar *Connection::trim(char *str)\n{\nchar *ptr;\n\n\twhile (*str && *str == ' ')\n\t\tstr++;\n\n\tptr = str + strlen(str) - 1;\n\twhile (ptr > str && *ptr == ' ')\n\t{\n\t\t*ptr = 0;\n\t\tptr--;\n\t}\n\treturn str;\n}\n\n/**\n * Raise an error to return from the plugin\n */\nvoid Connection::raiseError(const char *operation, const char *reason, ...)\n{\nConnectionManager *manager = ConnectionManager::getInstance();\nchar\ttmpbuf[512];\n\n\tva_list ap;\n\tva_start(ap, reason);\n\tvsnprintf(tmpbuf, sizeof(tmpbuf), reason, ap);\n\tva_end(ap);\n\tLogger::getLogger()->error(\"%s storage plugin raising error: %s\",\n\t\t\t\t   PLUGIN_LOG_NAME,\n\t\t\t\t   tmpbuf);\n\tmanager->setError(operation, tmpbuf, false);\n}\n\n/**\n * Return the size of a given table in bytes\n */\nlong Connection::tableSize(const string& table)\n{\nSQLBuffer buf;\n\n\traiseError(\"tableSize\", \"Not available in SQLite3 storage plugin\");\n\treturn -1;\n}\n\n/**\n * String escape routine\n */\nconst string Connection::escape(const string& str)\n{\nchar    *buffer;\nconst char    *p1;\nchar  *p2;\nstring  newString;\n\n    if (str.find_first_of('\\'') == string::npos)\n    {\n        return str;\n    }\n\n    // Worst case every character is a quote that must be doubled,\n    // plus one byte for the terminating NUL\n    buffer = (char *)malloc(str.length() * 2 + 1);\n\n    p1 = str.c_str();\n    p2 = buffer;\n    while (*p1)\n    {\n        if (*p1 == '\\'')\n        {\n            *p2++ = 
'\\'';\n            *p2++ = '\\'';\n            p1++;\n        }\n        else\n        {\n            *p2++ = *p1++;\n        }\n    }\n    *p2 = 0;\n    newString = string(buffer);\n    free(buffer);\n    return newString;\n}\n\n/**\n * Optionally log SQL statement execution\n *\n * @param\ttag\tA string tag that says why the SQL is being executed\n * @param\tstmt\tThe SQL statement itself\n */\nvoid Connection::logSQL(const char *tag, const char *stmt)\n{\n\tif (m_logSQL)\n\t{\n\t\tLogger::getLogger()->info(\"%s, %s: %s\",\n\t\t\t\t\t  PLUGIN_LOG_NAME,\n\t\t\t\t\t  tag,\n\t\t\t\t\t  stmt);\n\t}\n}\n\n/**\n * SQLite wrapper to retry statements when the database is locked\n *\n * @param\tdb\tThe open SQLite database\n * @param\ttable\tThe table the statement operates on, used in error reports\n * @param\tsql\tThe SQL to execute\n * @param\tcallback\tCallback function\n * @param\tcbArg\t\tCallback 1st argument\n * @param\terrmsg\t\tLocation to write error message\n */\nint Connection::SQLexec(sqlite3 *db, const string& table, const char *sql, int (*callback)(void*,int,char**,char**),\n\t\t\tvoid *cbArg, char **errmsg)\n{\nint retries = 0, rc;\nint interval;\n\n\tdo {\n#if DO_PROFILE\n\t\tProfileItem *prof = new ProfileItem(sql);\n#endif\n\t\trc = sqlite3_exec(db, sql, callback, cbArg, errmsg);\n#if DO_PROFILE\n\t\tprof->complete();\n\t\tprofiler.insert(prof);\n#endif\n\t\tretries++;\n\t\tif (rc != SQLITE_OK)\n\t\t{\n#if DO_PROFILE_RETRIES\n\t\t\tm_qMutex.lock();\n\t\t\tm_waiting.fetch_add(1);\n\t\t\tif (maxQueue < m_waiting)\n\t\t\t\tmaxQueue = m_waiting;\n\t\t\tm_qMutex.unlock();\n#endif\n\t\t\tinterval = (1 * RETRY_BACKOFF);\n\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(interval));\n\t\t\tif (retries > 5)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info(\n\t\t\t\t\t\"SQLexec: retry %d of %d, rc=%s, errmsg=%s, DB connection @ %p, slept for %d msecs\",\n\t\t\t\t\tretries, MAX_RETRIES, (rc == SQLITE_LOCKED) ? 
\"SQLITE_LOCKED\" : \"SQLITE_BUSY\", sqlite3_errmsg(db),\n\t\t\t\t\tthis, interval);\n\n\t\t\t}\n#if DO_PROFILE_RETRIES\n\t\t\tm_qMutex.lock();\n\t\t\tm_waiting.fetch_sub(1);\n\t\t\tm_qMutex.unlock();\n#endif\n\t\t\tif (sqlite3_get_autocommit(db)==0) // if transaction is still open, do rollback\n\t\t\t{\n\t\t\t\tint rc2;\n\t\t\t\tchar *zErrMsg = NULL;\n\t\t\t\trc2=SQLexec(db, table,\n\t\t\t\t\t\"ROLLBACK TRANSACTION;\",\n\t\t\t\t\tNULL,\n\t\t\t\t\tNULL,\n\t\t\t\t\t&zErrMsg);\n\t\t\t\tif (rc2 != SQLITE_OK)\n\t\t\t\t{\n\t\t\t\t\traiseError(\"rollback\", zErrMsg);\n\t\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} while (retries < MAX_RETRIES && (rc != SQLITE_OK));\n\n\tif (retries >1) {\n\t\tostringstream threadId;\n\t\tthreadId << std::this_thread::get_id();\n\n\t\tLogger::getLogger()->debug(\"%s - Completed retries :%d: :%s:\", __FUNCTION__, retries, threadId.str().c_str() );\n\t}\n\n#if DO_PROFILE_RETRIES\n\tretryStats[retries-1]++;\n\tif (++numStatements > RETRY_REPORT_THRESHOLD - 1)\n\t{\n\t\tnumStatements = 0;\n\t\tLogger *log = Logger::getLogger();\n\t\tlog->info(\"Storage layer statement retry profile\");\n\t\tfor (int i = 0; i < MAX_RETRIES-1; i++)\n\t\t{\n\t\t\tlog->info(\"%2d: %d\", i, retryStats[i]);\n\t\t\tretryStats[i] = 0;\n\t\t}\n\t\tlog->info(\"Too many retries: %d\", retryStats[MAX_RETRIES-1]);\n\t\tretryStats[MAX_RETRIES-1] = 0;\n\t\tlog->info(\"Maximum retry queue length: %d\", maxQueue);\n\t\tmaxQueue = 0;\n\t}\n#endif\n\n\tif (rc == SQLITE_LOCKED)\n\t{\n\t\tLogger::getLogger()->error(\"Database still locked after maximum retries, executing %s operation on %s\", operation(sql).c_str(), table.c_str());\n\t}\n\tif (rc == SQLITE_BUSY)\n\t{\n\t\tLogger::getLogger()->error(\"Database still busy after maximum retries, executing %s operation on %s\", operation(sql).c_str(), table.c_str());\n\t}\n\n\treturn rc;\n}\n\n\nint Connection::SQLstep(sqlite3_stmt *statement)\n{\n\tint retries = 0, rc;\n\tint interval;\n\n\tdo {\n#if 
DO_PROFILE\n\t\tProfileItem *prof = new ProfileItem(sqlite3_sql(statement));\n#endif\n\t\trc = sqlite3_step(statement);\n#if DO_PROFILE\n\t\tprof->complete();\n\t\tprofiler.insert(prof);\n#endif\n\t\tretries++;\n\t\tif (rc == SQLITE_LOCKED || rc == SQLITE_BUSY)\n\t\t{\n\t\t\tinterval = (retries * RETRY_BACKOFF);\n\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(interval));\n\n\t\t\tif (retries > 5) {\n\t\t\t\tLogger::getLogger()->info(\"SQLstep: retry %d of %d, rc=%s, DB connection @ %p, slept for %d msecs\",\n\t\t\t\tretries, MAX_RETRIES, (rc==SQLITE_LOCKED)?\"SQLITE_LOCKED\":\"SQLITE_BUSY\", this, interval);\n\t\t\t}\n\n\t\t}\n\t} while (retries < MAX_RETRIES && (rc == SQLITE_LOCKED || rc == SQLITE_BUSY));\n#if DO_PROFILE_RETRIES\n\tretryStats[retries-1]++;\n\tif (++numStatements > 1000)\n\t{\n\t\tnumStatements = 0;\n\t\tLogger *log = Logger::getLogger();\n\t\tlog->info(\"Storage layer statement retry profile\");\n\t\tfor (int i = 0; i < MAX_RETRIES-1; i++)\n\t\t{\n\t\t\tlog->info(\"%2d: %d\", i, retryStats[i]);\n\t\t\tretryStats[i] = 0;\n\t\t}\n\t\tlog->info(\"Too many retries: %d\", retryStats[MAX_RETRIES-1]);\n\t\tretryStats[MAX_RETRIES-1] = 0;\n\t}\n#endif\n\n\tif (retries >1) {\n\t\tostringstream threadId;\n\t\tthreadId << std::this_thread::get_id();\n\n\t\tLogger::getLogger()->debug(\"%s - Completed retries :%d: :%s:\", __FUNCTION__, retries, threadId.str().c_str() );\n\t}\n\n\tif (rc == SQLITE_LOCKED)\n\t{\n\t\tLogger::getLogger()->error(\"Database still locked after maximum retries\");\n\t}\n\tif (rc == SQLITE_BUSY)\n\t{\n\t\tLogger::getLogger()->error(\"Database still busy after maximum retries\");\n\t}\n\n\treturn rc;\n}\n\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Perform a delete against a common table\n *\n */\nint Connection::deleteRows(const string& schema,\n\t\t\tconst string& table,\n\t\t\tconst string& condition)\n{\n// Default template parameter uses UTF8 and MemoryPoolAllocator.\nDocument document;\nSQLBuffer\tsql;\nvector<string>  
asset_codes;\n\n\tif (!m_schemaManager->exists(dbHandle, schema))\n\t{\n\t\traiseError(\"delete\",\n\t\t\t\"Schema %s does not exist, unable to delete from table %s\",\n\t\t\tschema.c_str(),\n\t\t\ttable.c_str());\n\t\treturn -1;\n\t}\n\n\tsql.append(\"DELETE FROM \");\n\tsql.append(schema);\n\tsql.append('.');\n\tsql.append(table);\n\n\tif (! condition.empty())\n\t{\n\t\tsql.append(\" WHERE \");\n\t\tif (document.Parse(condition.c_str()).HasParseError())\n\t\t{\n\t\t\traiseError(\"delete\", \"Failed to parse JSON payload\");\n\t\t\treturn -1;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (document.HasMember(\"where\"))\n\t\t\t{\n\t\t\t\tif (!jsonWhereClause(document[\"where\"], sql, asset_codes))\n\t\t\t\t{\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\traiseError(\"delete\",\n\t\t\t\t\t   \"JSON does not contain where clause\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t}\n\tsql.append(';');\n\n\tconst char *query = sql.coalesce();\n\tlogSQL(\"CommonDelete\", query);\n\tchar *zErrMsg = NULL;\n\tint rc;\n\n\t// Exec the DELETE statement: no callback, no result set\n\tm_writeAccessOngoing.fetch_add(1);\n\trc = SQLexec(dbHandle, table,\n\t\t     query,\n\t\t     NULL,\n\t\t     NULL,\n\t\t     &zErrMsg);\n\tm_writeAccessOngoing.fetch_sub(1);\n\tif (m_writeAccessOngoing == 0)\n\t\tdb_cv.notify_all();\n\n\t// Check result code\n\tif (rc == SQLITE_OK)\n\t{\n\t\t// Success. 
Release memory for 'query' var\n\t\tdelete[] query;\n\t\treturn sqlite3_changes(dbHandle);\n\t}\n\telse\n\t{\n\t\traiseError(\"delete\", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\tLogger::getLogger()->error(\"SQL statement: %s\", query);\n\t\tdelete[] query;\n\n\t\t// Failure\n\t\treturn -1;\n\t}\n}\n#endif\n\n#ifndef SQLITE_SPLIT_READINGS\n/**\n * Create snapshot of a common table\n *\n * @param table\t\tThe table to snapshot\n * @param id\t\tThe snapshot id\n * @return\t\t-1 on error, >= 0 on success\n *\n * The newly created table has the name:\n * $table_snap$id\n */\nint Connection::create_table_snapshot(const string& table, const string& id)\n{\n\tstring query = \"CREATE TABLE fledge.\";\n\tquery += table + \"_snap\" + id + \" AS SELECT * FROM fledge.\" + table;\n\n\tlogSQL(\"CreateTableSnapshot\", query.c_str());\n\n\tchar* zErrMsg = NULL;\n\tint rc = SQLexec(dbHandle, table,\n\t\t\t query.c_str(),\n\t\t\t NULL,\n\t\t\t NULL,\n\t\t\t &zErrMsg);\n\n\t// Check result code\n\tif (rc == SQLITE_OK)\n\t{\n\t\treturn 1;\n\t}\n\telse\n\t{\n\t\traiseError(\"create_table_snapshot\", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\treturn -1;\n\t}\n}\n\n/**\n * Set the contents of a common table from a snapshot\n *\n * @param table\t\tThe table to fill\n * @param id\t\tThe snapshot id of the table\n * @return\t\t-1 on error, >= 0 on success\n *\n */\nint Connection::load_table_snapshot(const string& table, const string& id)\n{\n\tstring purgeQuery = \"DELETE FROM fledge.\" + table;\n\tstring query = \"BEGIN TRANSACTION; \";\n\tquery += purgeQuery + \"; INSERT INTO fledge.\" + table;\n\tquery += \" SELECT * FROM fledge.\" + table + \"_snap\" + id;\n\tquery += \"; COMMIT TRANSACTION;\";\n\n\tlogSQL(\"LoadTableSnapshot\", query.c_str());\n\n\tchar* zErrMsg = NULL;\n\tint rc = SQLexec(dbHandle, table,\n\t\t\t query.c_str(),\n\t\t\t NULL,\n\t\t\t NULL,\n\t\t\t &zErrMsg);\n\n\t// Check result code\n\tif (rc == SQLITE_OK)\n\t{\n\t\treturn 
1;\n\t}\n\telse\n\t{\n\t\traiseError(\"load_table_snapshot\", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\n\t\t// transaction is still open, do rollback\n\t\tif (sqlite3_get_autocommit(dbHandle) == 0)\n\t\t{\n\t\t\trc = SQLexec(dbHandle, table,\n\t\t\t\t     \"ROLLBACK TRANSACTION;\",\n\t\t\t\t     NULL,\n\t\t\t\t     NULL,\n\t\t\t\t     &zErrMsg);\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\traiseError(\"rollback for load_table_snapshot\", zErrMsg);\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t}\n\t\t}\n\t\treturn -1;\n\t}\n}\n\n/**\n * Delete a snapshot of a common table\n *\n * @param table\t\tThe table to snapshot\n * @param id\t\tThe snapshot id\n * @return\t\t-1 on error, >= 0 on success\n *\n */\nint Connection::delete_table_snapshot(const string& table, const string& id)\n{\n\tstring query = \"DROP TABLE fledge.\" + table + \"_snap\" + id;\n\n\tlogSQL(\"DeleteTableSnapshot\", query.c_str());\n\n\tchar* zErrMsg = NULL;\n\tint rc = SQLexec(dbHandle, table,\n\t\t\t query.c_str(),\n\t\t\t NULL,\n\t\t\t NULL,\n\t\t\t &zErrMsg);\n\n\t// Check result code\n\tif (rc == SQLITE_OK)\n\t{\n\t\treturn 1;\n\t}\n\telse\n\t{\n\t\traiseError(\"delete_table_snapshot\", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\treturn -1;\n\t}\n}\n\n/**\n * Get list of snapshots for a given common table\n *\n * @param table\t\tThe given table name\n */\nbool Connection::get_table_snapshots(const string& table,\n\t\t\t\t     string& resultSet)\n{\nSQLBuffer sql;\n\ttry {\n\t\tif (dbHandle == NULL)\n\t\t{\n\t\t\traiseError(\"retrieve\", \"No SQLite 3 db connection available\");\n\t\t\treturn false;\n\t\t}\n\t\tsql.append(\"SELECT REPLACE(name, '\");\n\t\tsql.append(table);\n\t\tsql.append(\"_snap', '') AS id FROM sqlite_master WHERE type='table' AND name LIKE '\");\n\t\tsql.append(table);\n\t\tsql.append(\"_snap%';\");\n\n\t\tconst char *query = sql.coalesce();\n\t\tchar *zErrMsg = NULL;\n\t\tint rc;\n\t\tsqlite3_stmt *stmt;\n\n\t\tlogSQL(\"GetTableSnapshots\", query);\n\n\t\t// Prepare the SQL 
statement and get the result set\n\t\trc = sqlite3_prepare_v2(dbHandle, query, -1, &stmt, NULL);\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"get_table_snapshots\", sqlite3_errmsg(dbHandle));\n\t\t\tLogger::getLogger()->error(\"SQL statement: %s\", query);\n\t\t\tdelete[] query;\n\t\t\treturn false;\n\t\t}\n\n\t\t// Call result set mapping\n\t\trc = mapResultSet(stmt, resultSet);\n\n\t\t// Delete result set\n\t\tsqlite3_finalize(stmt);\n\n\t\t// Check result set mapping errors\n\t\tif (rc != SQLITE_DONE)\n\t\t{\n\t\t\traiseError(\"get_table_snapshots\", sqlite3_errmsg(dbHandle));\n\t\t\tLogger::getLogger()->error(\"SQL statement: %s\", query);\n\t\t\tdelete[] query;\n\t\t\t// Failure\n\t\t\treturn false;\n\t\t}\n\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\t\t// Success\n\t\treturn true;\n\t} catch (exception& e) {\n\t\traiseError(\"get_table_snapshots\", \"Internal error: %s\", e.what());\n\t\t// Failure\n\t\treturn false;\n\t}\n}\n\n/**\n * Create schema and populate with tables and indexes as defined in the JSON schema\n * definition.\n *\n * @param schema   The schema definition as a JSON document describing the tables and indexes to create\n * @return true if the schema was created\n */\nbool Connection::createSchema(const std::string &schema)\n{\n\treturn m_schemaManager->create(dbHandle, schema);\n}\n\n/**\n * Execute a SQLite VACUUM command on the database\n */\nbool Connection::vacuum()\n{\n\tchar* zErrMsg = NULL;\n\t// Exec the statement\n\tint rc = SQLexec(dbHandle, \"database\", \"VACUUM;\", NULL, NULL, &zErrMsg);\n\n\t// Check result\n\tif (rc != SQLITE_OK)\n\t{\n\t\tconst char* errMsg = \"Failed to vacuum database \";\n\t\tLogger::getLogger()->error(\"%s: error %s\",\n\t\t\t\t\t   errMsg, zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\treturn false;\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->info(\"Database vacuum complete\");\n\t}\n\treturn true;\n}\n#endif\n\n/*\n * Return the first word in a SQL statement, 
i.e. the operation that is being executed.\n *\n * @param sql\tThe complete SQL statement\n * @return string\tThe operation\n */\nstring Connection::operation(const char *sql)\n{\n\tconst char *p1 = sql;\n\tchar buf[40], *p2 = buf;\n\t// Leave room for the terminating NUL\n\twhile (*p1 && !isspace(*p1) && p2 - buf < (int)sizeof(buf) - 1)\n\t\t*p2++ = *p1++;\n\t*p2 = '\\0';\n\treturn string(buf);\n}\n\n/**\n * In the case of a join add the tables to select from for all the tables in\n * the join\n *\n * @param schema        The schema we are using\n * @param document      The query we are processing\n * @param sql           The SQLBuffer we are writing\n * @param level         The table number we are processing\n */\nbool Connection::appendTables(const string& schema, const Value& document, SQLBuffer& sql, int level)\n{\n\tstring tag = \"t\" + to_string(level);\n\tif (document.HasMember(\"join\"))\n\t{\n\t\tconst Value& join = document[\"join\"];\n\t\tif (join.HasMember(\"table\"))\n\t\t{\n\t\t\tconst Value& table = join[\"table\"];\n\t\t\tif (!table.HasMember(\"name\"))\n\t\t\t{\n\t\t\t\traiseError(\"commonRetrieve\", \"Joining table is missing a table name\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tconst Value& name = table[\"name\"];\n\t\t\tif (!name.IsString())\n\t\t\t{\n\t\t\t\traiseError(\"commonRetrieve\", \"Joining table name is not a string\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\tsql.append(\", \");\n\t\t\tsql.append(schema);\n\t\t\tsql.append('.');\n\t\t\tsql.append(name.GetString());\n\t\t\tsql.append(\" \");\n\t\t\tsql.append(tag);\n\t\t\tif (join.HasMember(\"query\"))\n\t\t\t{\n\t\t\t\tconst Value& query = join[\"query\"];\n\t\t\t\tif (!appendTables(schema, query, sql, ++level))\n\t\t\t\t{\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\traiseError(\"commonRetrieve\", \"Join is missing a join query definition\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\traiseError(\"commonRetrieve\", \"Join is missing a table definition\");\n\t\t\treturn false;\n\t\t}\n\t}\n\treturn true;\n}\n\n/**\n * Recurse down and add the 
where clause and join terms for each\n * new table joined to the query\n *\n * @param query The JSON query\n * @param sql   The SQLBuffer we are writing the data to\n * @param asset_codes   The asset codes\n * @param level The nesting level of the joined table\n */\nbool Connection::processJoinQueryWhereClause(const Value& query,\n\t\t\t\t\tSQLBuffer& sql,\n\t\t\t\t\tstd::vector<std::string> &asset_codes,\n\t\t\t\t\tint level)\n{\n\tstring tag = \"t\" + to_string(level) + \".\";\n\tif (!jsonWhereClause(query[\"where\"], sql, asset_codes, true, tag))\n\t{\n\t\treturn false;\n\t}\n\n\tif (query.HasMember(\"join\"))\n\t{\n\t\t// Now add the join condition itself\n\t\tstring col0, col1;\n\t\tconst Value& join = query[\"join\"];\n\t\tif (join.HasMember(\"on\") && join[\"on\"].IsString())\n\t\t{\n\t\t\tcol0 = join[\"on\"].GetString();\n\t\t}\n\t\telse\n\t\t{\n\t\t\traiseError(\"Joined query\", \"Missing or invalid \\\"on\\\" property in join\");\n\t\t\treturn false;\n\t\t}\n\t\tif (join.HasMember(\"table\"))\n\t\t{\n\t\t\tconst Value& table = join[\"table\"];\n\t\t\tif (table.HasMember(\"column\") && table[\"column\"].IsString())\n\t\t\t{\n\t\t\t\tcol1 = table[\"column\"].GetString();\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\traiseError(\"Joined query\", \"Missing join column in table\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\tsql.append(\" AND \");\n\t\tsql.append(tag);\n\t\tsql.append(col0);\n\t\tsql.append(\" = t\");\n\t\tsql.append(level + 1);\n\t\tsql.append(\".\");\n\t\tsql.append(col1);\n\t\tsql.append(\" \");\n\t\tif (join.HasMember(\"query\") && join[\"query\"].IsObject())\n\t\t{\n\t\t\tsql.append(\" AND \");\n\t\t\tconst Value& query = join[\"query\"];\n\t\t\tprocessJoinQueryWhereClause(query, sql, asset_codes, level + 1);\n\t\t}\n\t}\n\treturn true;\n}\n\n/**\n * In the case of a join add the columns to select from for all the tables in\n * the join\n *\n * @param document      The query we are processing\n * @param sql           The SQLBuffer we are writing\n * @param level         The table number we are processing\n */\nbool 
Connection::selectColumns(const Value& document, SQLBuffer& sql, int level)\n{\nSQLBuffer       jsonConstraints;\n\nstring tag = \"t\" + to_string(level) + \".\";\n\n\tif (document.HasMember(\"return\"))\n\t{\n\t\tint col = 0;\n\t\tconst Value& columns = document[\"return\"];\n\t\tif (! columns.IsArray())\n\t\t{\n\t\t\traiseError(\"retrieve\", \"The property return must be an array\");\n\t\t\treturn false;\n\t\t}\n\t\tif (document.HasMember(\"modifier\"))\n\t\t{\n\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\tsql.append(' ');\n\t\t}\n\t\tfor (Value::ConstValueIterator itr = columns.Begin(); itr != columns.End(); ++itr)\n\t\t{\n\t\t\tif (col)\n\t\t\t{\n\t\t\t\tsql.append(\", \");\n\t\t\t}\n\t\t\tif (!itr->IsObject())   // Simple column name\n\t\t\t{\n\t\t\t\tsql.append(tag);\n\t\t\t\tsql.append(itr->GetString());\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tif (itr->HasMember(\"column\"))\n\t\t\t\t{\n\t\t\t\t\tif (! (*itr)[\"column\"].IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\"column must be a string\");\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t\tif (itr->HasMember(\"format\"))\n\t\t\t\t\t{\n\t\t\t\t\t\tif (! (*itr)[\"format\"].IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\"format must be a string\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t// SQLite 3 date format.\n\t\t\t\t\t\tstring new_format;\n\t\t\t\t\t\tapplyColumnDateFormat((*itr)[\"format\"].GetString(),\n\t\t\t\t\t\t\t\ttag + (*itr)[\"column\"].GetString(),\n\t\t\t\t\t\t\t\tnew_format,\n\t\t\t\t\t\t\t\ttrue);\n\n\t\t\t\t\t\t// Add the formatted column or use it as is\n\t\t\t\t\t\tsql.append(new_format);\n\t\t\t\t\t}\n\t\t\t\t\telse if (itr->HasMember(\"timezone\"))\n\t\t\t\t\t{\n\t\t\t\t\t\tif (! 
(*itr)[\"timezone\"].IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\"timezone must be a string\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t\t// SQLite3 doesn't support time zone formatting\n\t\t\t\t\t\tif (strcasecmp((*itr)[\"timezone\"].GetString(), \"utc\") != 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\"SQLite3 plugin does not support timezones in queries\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_MS \"', \");\n\t\t\t\t\t\t\tsql.append(tag);\n\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\n\t\t\t\t\t\t\tsql.append(\", 'utc')\");\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tsql.append(tag);\n\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t}\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\telse if (itr->HasMember(\"json\"))\n\t\t\t\t{\n\t\t\t\t\tconst Value& json = (*itr)[\"json\"];\n\t\t\t\t\tif (! returnJson(json, sql, jsonConstraints))\n\t\t\t\t\t{\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\"return object must have either a column or json property\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\n\t\t\t\tif (itr->HasMember(\"alias\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\t\tsql.append((*itr)[\"alias\"].GetString());\n\t\t\t\t\tsql.append('\"');\n\t\t\t\t}\n\t\t\t}\n\t\t\tcol++;\n\t\t}\n\t}\n\telse\n\t{\n\t\tsql.append('*');\n\t\treturn true;\n\t}\n\tif (document.HasMember(\"join\"))\n\t{\n\t\tconst Value& join = document[\"join\"];\n\t\tif (join.HasMember(\"query\"))\n\t\t{\n\t\t\tconst Value& query = join[\"query\"];\n\t\t\tsql.append(\", \");\n\t\t\tif (!selectColumns(query, sql, ++level))\n\t\t\t{\n\t\t\t\traiseError(\"commonRetrieve\", \"Join failed to add select columns\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t}\n\treturn true;\n}\n"
  },
  {
    "path": "C/plugins/storage/sqlitelb/common/connection_manager.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <connection_manager.h>\n#include <connection.h>\n#include <unistd.h>\n#include <disk_monitor.h>\n#include <utils.h>\n#include <sqlite_common.h>\n\nConnectionManager *ConnectionManager::instance = 0;\n\n/**\n * Background thread entry point\n */\nstatic void managerBackground(void *arg)\n{\n\tConnectionManager *mgr = (ConnectionManager *)arg;\n\tmgr->background();\n}\n\n/**\n * Default constructor for the connection manager.\n */\nConnectionManager::ConnectionManager() : m_shutdown(false), m_vacuumInterval(6 * 60 * 60), m_purgeBlockSize(10000),\n\t\tm_diskSpaceMonitor(NULL)\n{\n\tlastError.message = NULL;\n\tlastError.entryPoint = NULL;\n\tif (getenv(\"FLEDGE_TRACE_SQL\"))\n\t\tm_trace = true;\n\telse\n\t\tm_trace = false;\n\n\tstd::string dbPath, dbPathReadings;\n\tconst char *defaultConnection = getenv(\"DEFAULT_SQLITE_DB_FILE\");\n\tconst char *defaultReadingsConnection = getenv(\"DEFAULT_SQLITE_DB_READINGS_FILE\");\n\tif (defaultConnection == NULL)\n\t{\n\t\t// Set DB base path\n\t\tdbPath = getDataDir();\n\t\t// Add the filename\n\t\tdbPath += _DB_NAME;\n\t}\n\telse\n\t{\n\t\tdbPath = defaultConnection;\n\t}\n\n\tif (defaultReadingsConnection == NULL)\n\t{\n\t\t// Set DB base path\n\t\tdbPathReadings = getDataDir();\n\t\t// Add the filename\n\t\tdbPathReadings += READINGS_DB_FILE_NAME;\n\t}\n\telse\n\t{\n\t\tdbPathReadings = defaultReadingsConnection;\n\t}\n\n\tm_diskSpaceMonitor = new DiskSpaceMonitor(dbPath, dbPathReadings);\n\tm_background = new std::thread(managerBackground, this);\n}\n\n/**\n * Called at shutdown. 
Shrink the idle pool; this will\n * have the side effect of closing the connections to the database.\n */\nvoid ConnectionManager::shutdown()\n{\n\tm_shutdown = true;\n\tshrinkPool(idle.size());\n\tif (m_background)\n\t\tm_background->join();\n}\n\n/**\n * Return the singleton instance of the connection manager.\n * If none was created then create it.\n */\nConnectionManager *ConnectionManager::getInstance()\n{\n\tif (instance == 0)\n\t{\n\t\tinstance = new ConnectionManager();\n\t}\n\treturn instance;\n}\n\n/**\n * Set the purge block size in each of the connections\n *\n * @param purgeBlockSize\tThe requested purgeBlockSize\n */\nvoid ConnectionManager::setPurgeBlockSize(unsigned long purgeBlockSize)\n{\n\tm_purgeBlockSize = purgeBlockSize;\n\tidleLock.lock();\n\tfor (auto& c : idle)\n\t\tc->setPurgeBlockSize(purgeBlockSize);\n\tidleLock.unlock();\n}\n\n/**\n * Grow the connection pool by the number of connections\n * specified.\n *\n * @param delta\tThe number of connections to add to the pool\n */\nvoid ConnectionManager::growPool(unsigned int delta)\n{\n\twhile (delta-- > 0)\n\t{\n\t\tConnection *conn = new Connection();\n\t\tconn->setPurgeBlockSize(m_purgeBlockSize);\n\t\tif (m_trace)\n\t\t\tconn->setTrace(true);\n\t\tidleLock.lock();\n\t\tidle.push_back(conn);\n\t\tidleLock.unlock();\n\t}\n}\n\n/**\n * Attempt to shrink the number of connections in the idle pool\n *\n * @param delta\t\tNumber of connections to attempt to remove\n * @return The number of connections removed.\n */\nunsigned int ConnectionManager::shrinkPool(unsigned int delta)\n{\nunsigned int removed = 0;\nConnection   *conn;\n\n\twhile (delta-- > 0)\n\t{\n\t\tidleLock.lock();\n\t\t// Guard against draining an already empty pool; calling back() on\n\t\t// an empty list is undefined behaviour\n\t\tif (idle.empty())\n\t\t{\n\t\t\tidleLock.unlock();\n\t\t\tbreak;\n\t\t}\n\t\tconn = idle.back();\n\t\tidle.pop_back();\n\t\tidleLock.unlock();\n\t\tdelete conn;\n\t\tremoved++;\n\t}\n\treturn removed;\n}\n\n/**\n * Allocate a connection from the idle pool. 
If\n * no connection is available add a new connection\n */\nConnection *ConnectionManager::allocate()\n{\nConnection *conn = 0;\n\n\tidleLock.lock();\n\tif (idle.empty())\n\t{\n\t\tconn = new Connection();\n\t}\n\telse\n\t{\n\t\tconn = idle.front();\n\t\tidle.pop_front();\n\t}\n\tidleLock.unlock();\n\tif (conn)\n\t{\n\t\tinUseLock.lock();\n\t\tinUse.push_front(conn);\n\t\tinUseLock.unlock();\n\t}\n\treturn conn;\n}\n\n/**\n * Release a connection back to the idle pool for\n * reallocation.\n *\n * @param conn\tThe connection to release.\n */\nvoid ConnectionManager::release(Connection *conn)\n{\n\tinUseLock.lock();\n\tinUse.remove(conn);\n\tinUseLock.unlock();\n\tidleLock.lock();\n\tidle.push_back(conn);\n\tidleLock.unlock();\n}\n\n/**\n * Set the last error information for a plugin.\n *\n * @param source\tThe source of the error\n * @param description\tThe error description\n * @param retryable\tFlag to determine if the error condition is transient\n */\nvoid ConnectionManager::setError(const char *source, const char *description, bool retryable)\n{\n\terrorLock.lock();\n\tif (lastError.entryPoint)\n\t\tfree(lastError.entryPoint);\n\tif (lastError.message)\n\t\tfree(lastError.message);\n\tlastError.retryable = retryable;\n\tlastError.entryPoint = strdup(source);\n\tlastError.message = strdup(description);\n\terrorLock.unlock();\n}\n\n/**\n * Background thread used to execute periodic tasks and oversee the database activity.\n *\n * We will run the SQLite vacuum command periodically to allow space to be reclaimed\n */\nvoid ConnectionManager::background()\n{\n\ttime_t nextVacuum = time(0) + m_vacuumInterval;\n\n\twhile (!m_shutdown)\n\t{\n\t\tif (m_diskSpaceMonitor)\n\t\t\tm_diskSpaceMonitor->periodic(15);       // Called with the interval we sleep for\n\t\tsleep(15);\n\t\ttime_t tim = time(0);\n\t\tif (m_vacuumInterval && tim > nextVacuum)\n\t\t{\n\t\t\tConnection *con = allocate();\n\t\t\tcon->vacuum();\n\t\t\trelease(con);\n\t\t\tnextVacuum = time(0) + 
m_vacuumInterval;\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "C/plugins/storage/sqlitelb/common/include/connection.h",
    "content": "#ifndef _CONNECTION_H\n#define _CONNECTION_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <sql_buffer.h>\n#include <string>\n#include <rapidjson/document.h>\n#include <sqlite3.h>\n#include <mutex>\n#include <vector>\n#include <reading_stream.h>\n#ifndef MEMORY_READING_PLUGIN\n#include <schema.h>\n#endif\n\n#define _DB_NAME                  \"/fledge.db\"\n#define READINGS_DB_NAME_BASE     \"readings\"\n#define READINGS_DB_FILE_NAME     \"/\" READINGS_DB_NAME_BASE \".db\"\n#define READINGS_DB               READINGS_DB_NAME_BASE\n#define READINGS_TABLE            \"readings\"\n#define READINGS_TABLE_MEM        READINGS_TABLE\n\n#define MAX_RETRIES\t\t80\t// Maximum no. of retries when a lock is encountered\n#define RETRY_BACKOFF\t\t100\t// Multiplier to backoff DB retry on lock\n#define RETRY_BACKOFF_EXEC\t1000\t// Multiplier to backoff DB retry on lock\n\n#define LEN_BUFFER_DATE 100\n#define F_TIMEH24_S             \"%H:%M:%S\"\n#define F_DATEH24_S             \"%Y-%m-%d %H:%M:%S\"\n#define F_DATEH24_M             \"%Y-%m-%d %H:%M\"\n#define F_DATEH24_H             \"%Y-%m-%d %H\"\n// This is the default datetime format in Fledge: 2018-05-03 18:15:00.622\n#define F_DATEH24_MS            \"%Y-%m-%d %H:%M:%f\"\n// Format up to seconds\n#define F_DATEH24_SEC           \"%Y-%m-%d %H:%M:%S\"\n#define SQLITE3_NOW             \"strftime('%Y-%m-%d %H:%M:%f', 'now', 'localtime')\"\n// The default precision is milliseconds; this adds microseconds and timezone\n#define SQLITE3_NOW_READING     \"strftime('%Y-%m-%d %H:%M:%f000+00:00', 'now')\"\n#define SQLITE3_FLEDGE_DATETIME_TYPE \"DATETIME\"\n\n// Set plugin name for log messages\n#ifndef PLUGIN_LOG_NAME\n#define PLUGIN_LOG_NAME \"SQLite3\"\n#endif\n\nint dateCallback(void *data, int nCols, char **colValues, char **colNames);\nbool applyColumnDateFormat(const std::string& inFormat,\n\t\t\t  
 const std::string& colName,\n\t\t\t   std::string& outFormat,\n\t\t\t   bool roundMs = false);\n\nbool applyColumnDateFormatLocaltime(const std::string& inFormat,\n\t\t\t\t    const std::string& colName,\n\t\t\t\t    std::string& outFormat,\n\t\t\t\t    bool roundMs = false);\n\nint rowidCallback(void *data,\n\t\t  int nCols,\n\t\t  char **colValues,\n\t\t  char **colNames);\n\nint selectCallback(void *data,\n\t\t   int nCols,\n\t\t   char **colValues,\n\t\t   char **colNames);\n\nint countCallback(void *data,\n\t\t  int nCols,\n\t\t  char **colValues,\n\t\t  char **colNames);\n\nbool applyDateFormat(const std::string& inFormat, std::string& outFormat);\n\nclass Connection {\n\tpublic:\n\t\tConnection();\n\t\t~Connection();\n#ifndef SQLITE_SPLIT_READINGS\n\t\tbool\t\tcreateSchema(const std::string& schema);\n\t\tbool\t\tretrieve(const std::string& schema,\n\t\t\t\t\t const std::string& table,\n\t\t\t\t\t const std::string& condition,\n\t\t\t\t\t std::string& resultSet);\n\t\tint\t\tinsert(const std::string& schema,\n\t\t\t\t\tconst std::string& table,\n\t\t\t\t\tconst std::string& data);\n\t\tint\t\tupdate(const std::string& schema,\n\t\t\t\t\tconst std::string& table,\n\t\t\t\t\tconst std::string& data);\n\t\tint\t\tdeleteRows(const std::string& schema,\n\t\t\t\t\tconst std::string& table,\n\t\t\t\t\tconst std::string& condition);\n\t\tint\t\tcreate_table_snapshot(const std::string& table, const std::string& id);\n\t\tint\t\tload_table_snapshot(const std::string& table, const std::string& id);\n\t\tint\t\tdelete_table_snapshot(const std::string& table, const std::string& id);\n\t\tbool\t\tget_table_snapshots(const std::string& table, std::string& resultSet);\n#endif\n\t\tint\t\tappendReadings(const char *readings);\n\t\tint \t\treadingStream(ReadingStream **readings, bool commit);\n\t\tbool\t\tfetchReadings(unsigned long id, unsigned int blksize,\n\t\t\t\t\t\tstd::string& resultSet);\n\t\tbool\t\tretrieveReadings(const std::string& condition,\n\t\t\t\t\t\t 
std::string& resultSet);\n\t\tunsigned int\tpurgeReadings(unsigned long age, unsigned int flags,\n\t\t\t\t\t\tunsigned long sent, std::string& results);\n\t\tunsigned int\tpurgeReadingsByRows(unsigned long rowcount, unsigned int flags,\n\t\t\t\t\t\tunsigned long sent, std::string& results);\n\t\tlong\t\ttableSize(const std::string& table);\n\t\tvoid\t\tsetTrace(bool);\n\t\tbool\t\tformatDate(char *formatted_date, size_t formatted_date_size, const char *date);\n\t\tbool\t\taggregateQuery(const rapidjson::Value& payload, std::string& resultSet);\n\t\tbool        \tgetNow(std::string& Now);\n\t\tunsigned int\tpurgeReadingsAsset(const std::string& asset);\n\t\tbool\t\tvacuum();\n#ifdef MEMORY_READING_PLUGIN\n\t\tbool\t\tloadDatabase(const std::string& filname);\n\t\tbool\t\tsaveDatabase(const std::string& filname);\n#endif\n\t\tvoid\t\tsetPurgeBlockSize(unsigned long purgeBlockSize)\n\t\t\t\t{\n\t\t\t\t\tm_purgeBlockSize = purgeBlockSize;\n\t\t\t\t};\n\n\tprivate:\n#ifndef MEMORY_READING_PLUGIN\n\t\tSchemaManager   *m_schemaManager;\n#endif\n\t\tbool \t\tm_streamOpenTransaction;\n\t\tint\t\tm_queuing;\n\t\tstd::mutex\tm_qMutex;\n\t\tunsigned long\tm_purgeBlockSize;\n\t\tstd::string\toperation(const char *sql);\n\t\tint \t\tSQLexec(sqlite3 *db, const std::string& table, const char *sql,\n\t\t\t\t\tint (*callback)(void*,int,char**,char**),\n\t\t\t\t\tvoid *cbArg, char **errmsg);\n\t\tint\t\tSQLstep(sqlite3_stmt *statement);\n\t\tbool\t\tm_logSQL;\n\t\tvoid\t\traiseError(const char *operation, const char *reason,...);\n\t\tsqlite3\t\t*dbHandle;\n\t\tint\t\tmapResultSet(void *res, std::string& resultSet);\n\t\tbool\t\tjsonWhereClause(const rapidjson::Value& whereClause,\n\t\t\t\t\t\tSQLBuffer&, std::vector<std::string> &asset_codes,\n\t\t\t\t\t\tbool convertLocaltime = false,\n\t\t\t\t\t\tstd::string prefix = \"\");\n\t\tbool\t\tjsonModifiers(const rapidjson::Value&, SQLBuffer&, bool isTableReading = false);\n\t\tbool\t\tjsonAggregates(const rapidjson::Value&,\n\t\t\t\t\t  
     const rapidjson::Value&,\n\t\t\t\t\t       SQLBuffer&,\n\t\t\t\t\t       SQLBuffer&,\n\t\t\t\t\t       bool isTableReading = false);\n\t\tbool\t\treturnJson(const rapidjson::Value&, SQLBuffer&, SQLBuffer&);\n\t\tchar\t\t*trim(char *str);\n\t\tconst std::string\n\t\t\t\tescape(const std::string&);\n\t\tbool\t\tapplyColumnDateTimeFormat(sqlite3_stmt *pStmt,\n\t\t\t\t\t\tint i,\n\t\t\t\t\t\tstd::string& newDate);\n\t\tvoid\t\tlogSQL(const char *, const char *);\n\t\tbool\t\tappendTables(const std::string& schema, const rapidjson::Value& document, SQLBuffer& sql, int level);\n\t\tbool\t\tprocessJoinQueryWhereClause(const rapidjson::Value& query,\n\t\t\t\t\t\t\tSQLBuffer& sql,\n\t\t\t\t\t\t\tstd::vector<std::string> &asset_codes,\n\t\t\t\t\t\t\tint level);\n\t\tbool\t\tselectColumns(const rapidjson::Value& document, SQLBuffer& sql, int level);\n};\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlitelb/common/include/connection_manager.h",
    "content": "#ifndef _CONNECTION_MANAGER_H\n#define _CONNECTION_MANAGER_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <plugin_api.h>\n#include <list>\n#include <mutex>\n#include <thread>\n\nclass Connection;\nclass DiskSpaceMonitor;\n\n/**\n * Singleton class to manage SQLite3 connection pool\n */\nclass ConnectionManager {\n\tpublic:\n\t\tstatic ConnectionManager  *getInstance();\n\t\tvoid                      growPool(unsigned int);\n\t\tunsigned int              shrinkPool(unsigned int);\n\t\tConnection                *allocate();\n\t\tvoid                      release(Connection *);\n\t\tvoid\t\t\t  shutdown();\n\t\tvoid\t\t\t  setError(const char *, const char *, bool);\n\t\tPLUGIN_ERROR\t\t  *getError()\n\t\t\t\t\t  {\n\t\t\t\t\t\treturn &lastError;\n\t\t\t\t\t  }\n\t\tvoid\t\t\t  background();\n\t\tvoid\t\t\t  setVacuumInterval(long hours) {\n\t\t\t\t\t\tm_vacuumInterval = 60 * 60 * hours;\n\t\t\t\t\t  };\n\t\tvoid\t\t\t  setPersist(bool persist, const std::string& filename = \"\")\n\t\t\t\t\t  {\n\t\t\t\t\t\tm_persist = persist;\n\t\t\t\t\t\tm_filename = filename;\n\t\t\t\t\t  }\n\t\tbool\t\t\t  persist() { return m_persist; };\n\t\tstd::string\t\t  filename() { return m_filename; };\n\t\tvoid\t\t\t  setPurgeBlockSize(unsigned long purgeBlockSize);\n\tprotected:\n\t\tConnectionManager();\n\n\tprivate:\n\t\tstatic ConnectionManager     *instance;\n\tprotected:\n\t\tstd::list<Connection *>      idle;\n\t\tstd::list<Connection *>      inUse;\n\t\tstd::mutex                   idleLock;\n\t\tstd::mutex                   inUseLock;\n\t\tstd::mutex                   errorLock;\n\t\tPLUGIN_ERROR\t\t     lastError;\n\t\tbool\t\t\t     m_trace;\n\t\tbool\t\t\t     m_shutdown;\n\t\tstd::thread\t\t     *m_background;\n\t\tlong\t\t\t     m_vacuumInterval;\n\t\tbool\t\t\t     m_persist;\n\t\tstd::string\t             m_filename;\n\t\tunsigned 
long\t\t     m_purgeBlockSize;\n\t\tDiskSpaceMonitor\t     *m_diskSpaceMonitor;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlitelb/common/readings.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2018 OSIsoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <math.h>\n#include <connection.h>\n#include <connection_manager.h>\n#include <sqlite_common.h>\n#include <reading_stream.h>\n#include <random>\n\n// 1 enable performance tracking\n#define INSTRUMENT\t0\n\n#if INSTRUMENT\n#include <sys/time.h>\n#endif\n\n/*\n * The number of readings to insert in a single prepared statement\n */\n#define APPEND_BATCH_SIZE\t100\n\n/*\n * JSON parsing requires a lot of memory allocation, which is slow and causes\n * bottlenecks with thread synchronisation. RapidJSON supports in-situ parsing\n * whereby it will reuse the storage of the string it is parsing to store the\n * keys and string values of the parsed JSON. This is destructive on the buffer.\n * However it can be quicker to make a copy of the raw string and then do in-situ\n * parsing on that copy of the string.\n * See http://rapidjson.org/md_doc_dom.html#InSituParsing\n *\n * Define a threshold length for the append readings to switch to using in-situ\n * parsing of the JSON to save on memory allocation overheads. Define as 0 to\n * disable the in-situ parsing.\n */\n#define INSITU_THRESHOLD\t10240\n\n// Decode stream data\n#define\tRDS_USER_TIMESTAMP(stream, x) \t\tstream[x]->userTs\n#define\tRDS_ASSET_CODE(stream, x)\t\tstream[x]->assetCode\n#define\tRDS_PAYLOAD(stream, x)\t\t\t&(stream[x]->assetCode[0]) + stream[x]->assetCodeLength\n\n// Retry mechanism\n#define PREP_CMD_MAX_RETRIES\t\t100\t// Maximum no. of retries when a lock is encountered\n#define PREP_CMD_RETRY_BASE \t\t20\t// Base time to wait for\n#define PREP_CMD_RETRY_BACKOFF\t\t20 \t// Variable time to wait for\n\n/*\n * Control the way purge deletes readings. The block size sets a limit as to how many rows\n * get deleted in each call, whilst the sleep interval controls how long the thread sleeps\n * between deletes. 
The idea is to not keep the database locked too long and allow other threads\n * to have access to the database between blocks.\n */\n#define PURGE_SLEEP_MS 500\n#define PURGE_DELETE_BLOCK_SIZE\t20\n#define TARGET_PURGE_BLOCK_DEL_TIME\t(70*1000) \t// 70 msec\n#define PURGE_BLOCK_SZ_GRANULARITY\t5 \t// 5 rows\n#define MIN_PURGE_DELETE_BLOCK_SIZE\t20\n#define MAX_PURGE_DELETE_BLOCK_SIZE\t1500\n#define RECALC_PURGE_BLOCK_SIZE_NUM_BLOCKS\t30\t// recalculate purge block size after every 30 blocks\n\n#define PURGE_SLOWDOWN_AFTER_BLOCKS 5\n#define PURGE_SLOWDOWN_SLEEP_MS 500\n\n#define SECONDS_PER_DAY \"86400.0\"\n// 2440587.5 is the Julian day at 1/1/1970 0:00 UTC.\n#define JULIAN_DAY_START_UNIXTIME \"2440587.5\"\n\n//#ifndef PLUGIN_LOG_NAME\n//#define PLUGIN_LOG_NAME \"SQLite 3\"\n//#endif\n\n/**\n * SQLite3 storage plugin for Fledge\n */\n\nusing namespace std;\nusing namespace rapidjson;\n\n#define CONNECT_ERROR_THRESHOLD\t\t5*60\t// 5 minutes\n\n\n/*\n * The following allows for conditional inclusion of code that tracks the top queries\n * run by the storage plugin and the number of times a particular statement has to\n * be retried because of the database being busy.\n */\n#define DO_PROFILE\t\t0\n#define DO_PROFILE_RETRIES\t0\n#if DO_PROFILE\n#include <profile.h>\n\n#define\tTOP_N_STATEMENTS\t\t10\t// Number of statements to report in top n\n#define RETRY_REPORT_THRESHOLD\t\t1000\t// Report retry statistics every X calls\n\nQueryProfile profiler(TOP_N_STATEMENTS);\nunsigned long retryStats[MAX_RETRIES] = { 0,0,0,0,0,0,0,0,0,0 };\nunsigned long numStatements = 0;\nint\t      maxQueue = 0;\n#endif\n\nstatic std::atomic<int> m_waiting(0);\nstatic std::atomic<int> m_writeAccessOngoing(0);\nstatic std::mutex\tdb_mutex;\nstatic std::condition_variable\tdb_cv;\nstatic int purgeBlockSize = PURGE_DELETE_BLOCK_SIZE;\n\n#define START_TIME std::chrono::high_resolution_clock::time_point t1 = std::chrono::high_resolution_clock::now();\n#define END_TIME 
std::chrono::high_resolution_clock::time_point t2 = std::chrono::high_resolution_clock::now(); \\\n\t\t\t\t auto usecs = std::chrono::duration_cast<std::chrono::microseconds>( t2 - t1 ).count();\n\nstatic time_t connectErrorTime = 0;\n\n\n/**\n * Check whether to compute timebucket query with min,max,avg for all datapoints\n *\n * @param    payload\tJSON payload\n * @return\t\tTrue if aggregation is 'all'\n */\nbool aggregateAll(const Value& payload)\n{\n\tif (payload.HasMember(\"aggregate\") &&\n\t    payload[\"aggregate\"].IsObject())\n\t{\n\t\tconst Value& agg = payload[\"aggregate\"];\n\t\tif (agg.HasMember(\"operation\") &&\n\t\t    (strcmp(agg[\"operation\"].GetString(), \"all\") == 0))\n\t\t{\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\n/**\n * Build, execute and return data of a timebucket query with min,max,avg for all datapoints\n *\n * @param    payload\tJSON object for timebucket query\n * @param    resultSet\tJSON Output buffer\n * @return\t\tTrue on success, false on any error\n */\nbool Connection::aggregateQuery(const Value& payload, string& resultSet)\n{\n\tvector<string>  asset_codes;\n\n\tif (!payload.HasMember(\"where\") ||\n\t    !payload.HasMember(\"timebucket\"))\n\t{\n\t\traiseError(\"retrieve\", \"aggregateQuery is missing \"\n\t\t\t   \"'where' and/or 'timebucket' properties\");\n\t\treturn false;\n\t}\n\n\tSQLBuffer sql;\n\n\tsql.append(\"SELECT asset_code, \");\n\n\tdouble size = 1;\n\tstring timeColumn;\n\n\t// Check timebucket object\n\tif (payload.HasMember(\"timebucket\"))\n\t{\n\t\tconst Value& bucket = payload[\"timebucket\"];\n\t\tif (!bucket.HasMember(\"timestamp\"))\n\t\t{\n\t\t\traiseError(\"retrieve\", \"aggregateQuery is missing \"\n\t\t\t\t   \"'timestamp' property for 'timebucket'\");\n\t\t\treturn false;\n\t\t}\n\n\t\t// Time column\n\t\ttimeColumn = bucket[\"timestamp\"].GetString();\n\n\t\t// Bucket size\n\t\tif (bucket.HasMember(\"size\"))\n\t\t{\n\t\t\tsize = atof(bucket[\"size\"].GetString());\n\t\t\tif 
(!size)\n\t\t\t{\n\t\t\t\tsize = 1;\n\t\t\t}\n\t\t}\n\n\t\t// Time format for output\n\t\tstring newFormat;\n\t\tif (bucket.HasMember(\"format\") && size >= 1)\n\t\t{\n\t\t\tapplyColumnDateFormatLocaltime(bucket[\"format\"].GetString(),\n\t\t\t\t\t\t\t\"timestamp\",\n\t\t\t\t\t\t\tnewFormat,\n\t\t\t\t\t\t\ttrue);\n\t\t\tsql.append(newFormat);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (size < 1) \n\t\t\t{\n\t\t\t\t// sub-second granularity to time bucket size:\n\t\t\t\t// force output formatting with microseconds\t\n\t\t\t\tnewFormat = \"strftime('%Y-%m-%d %H:%M:%S', \" + timeColumn + \n\t\t\t\t\t    \", 'localtime') || substr(\" + timeColumn\t  +\n\t\t\t\t\t    \", instr(\" + timeColumn + \", '.'), 7)\";\n\t\t\t\tsql.append(newFormat);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\"timestamp\");\n\t\t\t}\n\t\t}\n\n\t\t// Time output alias\n\t\tif (bucket.HasMember(\"alias\"))\n\t\t{\n\t\t\tsql.append(\" AS \");\n\t\t\tsql.append(bucket[\"alias\"].GetString());\n\t\t}\n\t}\n\n\t// JSON format aggregated data\n\tsql.append(\", '{' || group_concat('\\\"' || x || '\\\" : ' || resd, ', ') || '}' AS reading \");\n\n\t// subquery\n\tsql.append(\"FROM ( SELECT  x, asset_code, max(timestamp) AS timestamp, \");\n\t// Add min\n\tsql.append(\"'{\\\"min\\\" : ' || min(theval) || ', \");\n\t// Add max\n\tsql.append(\"\\\"max\\\" : ' || max(theval) || ', \");\n\t// Add avg\n\tsql.append(\"\\\"average\\\" : ' || avg(theval) || ', \");\n\t// Add count\n\tsql.append(\"\\\"count\\\" : ' || count(theval) || ', \");\n\t// Add sum\n\tsql.append(\"\\\"sum\\\" : ' || sum(theval) || '}' AS resd \");\n\n\tif (size < 1)\n\t{\n\t\t// Add max(user_ts)\n\t\tsql.append(\", max(\" + timeColumn + \") AS \" + timeColumn + \" \");\n\t}\n\n\t// subquery\n\tsql.append(\"FROM ( SELECT asset_code, \");\n\tsql.append(timeColumn);\n\n\tif (size >= 1)\n\t{\n\t\tsql.append(\", datetime(\");\n\t}\n\telse\n\t{\n\t\tsql.append(\", (\");\n\t}\n\n\t// Size formatted string\n\tstring size_format;\n\tif (fmod(size, 
1.0) == 0.0)\n\t{\n\t\tsize_format = to_string(int(size));\n\t}\n\telse\n\t{\n\t\tsize_format = to_string(size);\n\t}\n\n\t// Add timebucket size\n\t// Unix Time is (Julian Day - JulianDay(1/1/1970 0:00 UTC) * Seconds_per_day\n\tif (size != 1)\n\t{\n\t\tsql.append(size_format);\n\t\tsql.append(\" * round((julianday(\");\n\t\tsql.append(timeColumn);\n\t\tsql.append(\") - \" + string(JULIAN_DAY_START_UNIXTIME) + \") * \" + string(SECONDS_PER_DAY) + \" / \");\n\t\tsql.append(size_format);\n\t\tsql.append(\")\");\n\t}\n\telse\n\t{\n\t\tsql.append(\"round((julianday(\");\n\t\tsql.append(timeColumn);\n\t\tsql.append(\") - \" + string(JULIAN_DAY_START_UNIXTIME) + \") * \" + string(SECONDS_PER_DAY) + \" / 1)\");\n\t}\n\tif (size >= 1)\n\t{\n\t\tsql.append(\", 'unixepoch') AS \\\"timestamp\\\", reading, \");\n\t}\n\telse\n\t{\n\t\tsql.append(\") AS \\\"timestamp\\\", reading, \");\n\t}\n\n\t// Get all datapoints in 'reading' field\n\tsql.append(\"json_each.key AS x, json_each.value AS theval FROM \" READINGS_DB_NAME_BASE \".readings, json_each(readings.reading) \");\n\n\t// Add where condition\n\tsql.append(\"WHERE \");\n\tif (!jsonWhereClause(payload[\"where\"], sql, asset_codes))\n\t{\n\t\traiseError(\"retrieve\", \"aggregateQuery: failure while building WHERE clause\");\n\t\treturn false;\n\t}\n\n\t// close subquery\n\tsql.append(\") tmp \");\n\n\t// Add group by\n\t// Unix Time is (Julian Day - JulianDay(1/1/1970 0:00 UTC) * Seconds_per_day\n\tsql.append(\" GROUP BY x, asset_code, \");\n\tsql.append(\"round((julianday(\");\n\tsql.append(timeColumn);\n\tsql.append(\") - \" + string(JULIAN_DAY_START_UNIXTIME) + \") * \" + string(SECONDS_PER_DAY) + \" / \");\n\n\tif (size != 1)\n\t{\n\t\tsql.append(size_format);\n\t}\n\telse\n\t{\n\t\tsql.append('1');\n\t}\n\tsql.append(\") \");\n\n\t// close subquery\n\tsql.append(\") tbl \");\n\n\t// Add final group and sort\n\tsql.append(\"GROUP BY timestamp, asset_code ORDER BY timestamp DESC\");\n\n\t// Add limit\n\tif 
(payload.HasMember(\"limit\"))\n\t{\n\t\tif (!payload[\"limit\"].IsInt())\n\t\t{\n\t\t\traiseError(\"retrieve\", \"aggregateQuery: limit must be specified as an integer\");\n\t\t\treturn false;\n\t\t}\n\t\tsql.append(\" LIMIT \");\n\t\ttry {\n\t\t\tsql.append(payload[\"limit\"].GetInt());\n\t\t} catch (exception& e) {\n\t\t\traiseError(\"retrieve\", \"aggregateQuery: bad value for limit parameter: %s\", e.what());\n\t\t\treturn false;\n\t\t}\n\t}\n\tsql.append(';');\n\n\t// Execute query\n\tconst char *query = sql.coalesce();\n\tint rc;\n\tsqlite3_stmt *stmt;\n\n\tlogSQL(\"CommonRetrieve\", query);\n\n\t// Prepare the SQL statement and get the result set\n\trc = sqlite3_prepare_v2(dbHandle, query, -1, &stmt, NULL);\n\n\t// Release memory for 'query' var\n\tdelete[] query;\n\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\t\treturn false;\n\t}\n\n\t// Call result set mapping\n\trc = mapResultSet(stmt, resultSet);\n\n\t// Delete result set\n\tsqlite3_finalize(stmt);\n\n\t// Check result set mapping errors\n\tif (rc != SQLITE_DONE)\n\t{\n\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\t\t// Failure\n\t\treturn false;\n\t}\n\n\treturn true;\n}\n\n/**\n * Append a stream of readings to SQLite db\n *\n * @param readings  readings to store into the SQLite db\n * @param commit    if true a database commit is executed and a new transaction will be opened at the next execution\n *\n */\nint Connection::readingStream(ReadingStream **readings, bool commit)\n{\n\t// Row definition related\n\tint i;\n\tbool add_row = false;\n\tconst char *user_ts;\n\tstring now;\n\tchar ts[60], micro_s[10];\n\tchar formatted_date[LEN_BUFFER_DATE] = {0};\n\tstruct tm timeinfo;\n\tconst char *asset_code;\n\tconst char *payload;\n\tstring reading;\n\n\t// Retry mechanism\n\tint retries = 0;\n\tint sleep_time_ms = 0;\n\n\t// SQLite related\n\tsqlite3_stmt *stmt, *batch_stmt;\n\tint sqlite3_resut;\n\tint rowNumber = -1;\n\t\n#if INSTRUMENT\n\tstruct 
timeval start, t1, t2, t3, t4, t5;\n#endif\n\n\tconst char *sql_cmd = \"INSERT INTO  \" READINGS_DB_NAME_BASE \".readings ( user_ts, asset_code, reading ) VALUES  (?,?,?)\";\n\tstring cmd = sql_cmd;\n\tfor (int i = 0; i < APPEND_BATCH_SIZE - 1; i++)\n\t{\n\t\tcmd.append(\", (?,?,?)\");\n\t}\n\n\t// Prepare both the single-row and the batch insert statements, checking each for errors\n\tif (sqlite3_prepare_v2(dbHandle, sql_cmd, strlen(sql_cmd), &stmt, NULL) != SQLITE_OK)\n\t{\n\t\traiseError(\"readingStream\", sqlite3_errmsg(dbHandle));\n\t\treturn -1;\n\t}\n\tif (sqlite3_prepare_v2(dbHandle, cmd.c_str(), cmd.length(), &batch_stmt, NULL) != SQLITE_OK)\n\t{\n\t\traiseError(\"readingStream\", sqlite3_errmsg(dbHandle));\n\t\tsqlite3_finalize(stmt);\n\t\treturn -1;\n\t}\n\n\t// The handling of the commit parameter is overridden as, using a pool of connections, every execution receives\n\t// a different one, so a commit at every run is executed.\n\tm_streamOpenTransaction = true;\n\tcommit = true;\n\n\tif (m_streamOpenTransaction)\n\t{\n\t\tif (sqlite3_exec(dbHandle, \"BEGIN TRANSACTION\", NULL, NULL, NULL) != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"readingStream\", sqlite3_errmsg(dbHandle));\n\t\t\treturn -1;\n\t\t}\n\t\tm_streamOpenTransaction = false;\n\t}\n\n#if INSTRUMENT\n\tgettimeofday(&start, NULL);\n#endif\n\n\tint nReadings;\n\tfor (nReadings = 0; readings[nReadings]; nReadings++);\n\n\ttry\n\t{\n\t\tunsigned int nBatches = nReadings / APPEND_BATCH_SIZE;\n\t\tint curReading = 0;\n\t\tfor (int batch = 0; batch < nBatches; batch++)\n\t\t{\n\t\t\tint varNo = 1;\n\t\t\tfor (int readingNo = 0; readingNo < APPEND_BATCH_SIZE; readingNo++)\n\t\t\t{\n\t\t\t\tadd_row = true;\n\n\t\t\t\t// Handles - asset_code\n\t\t\t\tasset_code = RDS_ASSET_CODE(readings, curReading);\n\n\t\t\t\t// Handles - reading\n\t\t\t\tpayload = RDS_PAYLOAD(readings, curReading);\n\t\t\t\treading = escape(payload);\n\n\t\t\t\t// Handles - user_ts\n\t\t\t\tmemset(&timeinfo, 0, sizeof(struct tm));\n\t\t\t\tgmtime_r(&RDS_USER_TIMESTAMP(readings, curReading).tv_sec, &timeinfo);\n\t\t\t\tstd::strftime(ts, sizeof(ts), \"%Y-%m-%d %H:%M:%S\", 
&timeinfo);\n\t\t\t\tsnprintf(micro_s, sizeof(micro_s), \".%06lu\", RDS_USER_TIMESTAMP(readings, curReading).tv_usec);\n\n\t\t\t\tformatted_date[0] = {0};\n\t\t\t\tstrncat(ts, micro_s, 10);\n\t\t\t\tuser_ts = ts;\n\t\t\t\tif (strcmp(user_ts, \"now()\") == 0)\n\t\t\t\t{\n\t\t\t\t\tgetNow(now);\n\t\t\t\t\tuser_ts = now.c_str();\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tif (!formatDate(formatted_date, sizeof(formatted_date), user_ts))\n\t\t\t\t\t{\n\t\t\t\t\t\traiseError(\"streamReadings\", \"Invalid date |%s|\", user_ts);\n\t\t\t\t\t\tadd_row = false;\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tuser_ts = formatted_date;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (add_row)\n\t\t\t\t{\n\t\t\t\t\tif (batch_stmt != NULL)\n\t\t\t\t\t{\n\t\t\t\t\t\tsqlite3_bind_text(batch_stmt, varNo++, user_ts,         -1, SQLITE_STATIC);\n\t\t\t\t\t\tsqlite3_bind_text(batch_stmt, varNo++, asset_code,      -1, SQLITE_STATIC);\n\t\t\t\t\t\tsqlite3_bind_text(batch_stmt, varNo++, reading.c_str(), -1, SQLITE_STATIC);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tcurReading++;\n\t\t\t}\n\t\t\t// Write the batch\n\n\t\t\tretries = 0;\n\t\t\tsleep_time_ms = 0;\n\n\t\t\t// Retry mechanism in case SQLlite DB is locked\n\t\t\tdo {\n\t\t\t\t// Insert the row using a lock to ensure one insert at time\n\t\t\t\t{\n\t\t\t\t\tm_writeAccessOngoing.fetch_add(1);\n\t\t\t\t\t//unique_lock<mutex> lck(db_mutex);\n\n\t\t\t\t\tsqlite3_resut = sqlite3_step(batch_stmt);\n\n\t\t\t\t\tm_writeAccessOngoing.fetch_sub(1);\n\t\t\t\t\t//db_cv.notify_all();\n\t\t\t\t}\n\n\t\t\t\tif (sqlite3_resut == SQLITE_LOCKED  )\n\t\t\t\t{\n\t\t\t\t\tsleep_time_ms = PREP_CMD_RETRY_BASE + ((retries / 2 ) * (random() %  PREP_CMD_RETRY_BACKOFF));\n\t\t\t\t\tretries++;\n\n\t\t\t\t\tLogger::getLogger()->info(\"SQLITE_LOCKED - record :%d: - retry number :%d: sleep time ms :%d:\",i, retries, sleep_time_ms);\n\n\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(sleep_time_ms));\n\t\t\t\t}\n\t\t\t\tif (sqlite3_resut == 
SQLITE_BUSY)\n\t\t\t\t{\n\t\t\t\t\tostringstream threadId;\n\t\t\t\t\tthreadId << std::this_thread::get_id();\n\n\t\t\t\t\tsleep_time_ms = PREP_CMD_RETRY_BASE + ((retries / 2 ) * (random() %  PREP_CMD_RETRY_BACKOFF));\n\t\t\t\t\tretries++;\n\n\t\t\t\t\tLogger::getLogger()->info(\"SQLITE_BUSY - thread :%s: - record :%d: - retry number :%d: sleep time ms :%d:\", threadId.str().c_str() ,i , retries, sleep_time_ms);\n\n\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(sleep_time_ms));\n\t\t\t\t}\n\t\t\t} while (retries < PREP_CMD_MAX_RETRIES && (sqlite3_resut == SQLITE_LOCKED || sqlite3_resut == SQLITE_BUSY));\n\n\t\t\tif (sqlite3_resut == SQLITE_DONE)\n\t\t\t{\n\t\t\t\trowNumber++;\n\n\t\t\t\tsqlite3_clear_bindings(batch_stmt);\n\t\t\t\tsqlite3_reset(batch_stmt);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\traiseError(\"streamReadings\",\n\t\t\t\t\t\t   \"Inserting a row into SQLite using a prepared command - asset_code :%s: error :%s: reading :%s: \",\n\t\t\t\t\t\t   asset_code,\n\t\t\t\t\t\t   sqlite3_errmsg(dbHandle),\n\t\t\t\t\t\t   reading.c_str());\n\n\t\t\t\tsqlite3_exec(dbHandle, \"ROLLBACK TRANSACTION\", NULL, NULL, NULL);\n\t\t\t\tm_streamOpenTransaction = true;\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\n\t\twhile (readings[curReading])\n\t\t{\n\t\t\tadd_row = true;\n\n\t\t\t// Handles - asset_code\n\t\t\tasset_code = RDS_ASSET_CODE(readings, curReading);\n\n\t\t\t// Handles - reading\n\t\t\tpayload = RDS_PAYLOAD(readings, curReading);\n\t\t\treading = escape(payload);\n\n\t\t\t// Handles - user_ts\n\t\t\tmemset(&timeinfo, 0, sizeof(struct tm));\n\t\t\tgmtime_r(&RDS_USER_TIMESTAMP(readings, curReading).tv_sec, &timeinfo);\n\t\t\tstd::strftime(ts, sizeof(ts), \"%Y-%m-%d %H:%M:%S\", &timeinfo);\n\t\t\tsnprintf(micro_s, sizeof(micro_s), \".%06lu\", RDS_USER_TIMESTAMP(readings, curReading).tv_usec);\n\n\t\t\tformatted_date[0] = {0};\n\t\t\tstrncat(ts, micro_s, 10);\n\t\t\tuser_ts = ts;\n\t\t\tif (strcmp(user_ts, \"now()\") == 
0)\n\t\t\t{\n\t\t\t\tgetNow(now);\n\t\t\t\tuser_ts = now.c_str();\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tif (!formatDate(formatted_date, sizeof(formatted_date), user_ts))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"streamReadings\", \"Invalid date |%s|\", user_ts);\n\t\t\t\t\tadd_row = false;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tuser_ts = formatted_date;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (add_row)\n\t\t\t{\n\t\t\t\tif (stmt != NULL)\n\t\t\t\t{\n\t\t\t\t\tsqlite3_bind_text(stmt, 1, user_ts,         -1, SQLITE_STATIC);\n\t\t\t\t\tsqlite3_bind_text(stmt, 2, asset_code,      -1, SQLITE_STATIC);\n\t\t\t\t\tsqlite3_bind_text(stmt, 3, reading.c_str(), -1, SQLITE_STATIC);\n\t\t\t\t}\n\t\t\t}\n\t\t\tcurReading++;\n\n\t\t\tretries = 0;\n\t\t\tsleep_time_ms = 0;\n\n\t\t\t// Retry mechanism in case the SQLite DB is locked\n\t\t\tdo {\n\t\t\t\t// Insert the row using a lock to ensure one insert at a time\n\t\t\t\t{\n\t\t\t\t\tm_writeAccessOngoing.fetch_add(1);\n\t\t\t\t\t//unique_lock<mutex> lck(db_mutex);\n\n\t\t\t\t\tsqlite3_resut = sqlite3_step(stmt);\n\n\t\t\t\t\tm_writeAccessOngoing.fetch_sub(1);\n\t\t\t\t\t//db_cv.notify_all();\n\t\t\t\t}\n\n\t\t\t\tif (sqlite3_resut == SQLITE_LOCKED)\n\t\t\t\t{\n\t\t\t\t\tsleep_time_ms = PREP_CMD_RETRY_BASE + ((retries / 2) * (random() % PREP_CMD_RETRY_BACKOFF));\n\t\t\t\t\tretries++;\n\n\t\t\t\t\tLogger::getLogger()->info(\"SQLITE_LOCKED - record :%d: - retry number :%d: sleep time ms :%d:\", i, retries, sleep_time_ms);\n\n\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(sleep_time_ms));\n\t\t\t\t}\n\t\t\t\tif (sqlite3_resut == SQLITE_BUSY)\n\t\t\t\t{\n\t\t\t\t\tostringstream threadId;\n\t\t\t\t\tthreadId << std::this_thread::get_id();\n\n\t\t\t\t\tsleep_time_ms = PREP_CMD_RETRY_BASE + ((retries / 2) * (random() % PREP_CMD_RETRY_BACKOFF));\n\t\t\t\t\tretries++;\n\n\t\t\t\t\tLogger::getLogger()->info(\"SQLITE_BUSY - thread :%s: - record :%d: - retry number :%d: sleep time ms :%d:\", threadId.str().c_str(), i, 
retries, sleep_time_ms);\n\n\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(sleep_time_ms));\n\t\t\t\t}\n\t\t\t} while (retries < PREP_CMD_MAX_RETRIES && (sqlite3_resut == SQLITE_LOCKED || sqlite3_resut == SQLITE_BUSY));\n\n\t\t\tif (sqlite3_resut == SQLITE_DONE)\n\t\t\t{\n\t\t\t\trowNumber++;\n\n\t\t\t\tsqlite3_clear_bindings(stmt);\n\t\t\t\tsqlite3_reset(stmt);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\traiseError(\"streamReadings\",\n\t\t\t\t\t\t   \"Inserting a row into SQLite using a prepared command - asset_code :%s: error :%s: reading :%s: \",\n\t\t\t\t\t\t   asset_code,\n\t\t\t\t\t\t   sqlite3_errmsg(dbHandle),\n\t\t\t\t\t\t   reading.c_str());\n\n\t\t\t\tsqlite3_exec(dbHandle, \"ROLLBACK TRANSACTION\", NULL, NULL, NULL);\n\t\t\t\tm_streamOpenTransaction = true;\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t\trowNumber = curReading;\n\t} catch (exception& e) {\n\n\t\traiseError(\"streamReadings\", \"Inserting a row into SQLite using a prepared statement - error :%s:\", e.what());\n\n\t\tsqlite3_exec(dbHandle, \"ROLLBACK TRANSACTION\", NULL, NULL, NULL);\n\t\tm_streamOpenTransaction = true;\n\t\treturn -1;\n\t}\n\n#if INSTRUMENT\n\tgettimeofday(&t1, NULL);\n#endif\n\n\tif (commit)\n\t{\n\t\tsqlite3_resut = sqlite3_exec(dbHandle, \"END TRANSACTION\", NULL, NULL, NULL);\n\t\tif (sqlite3_resut != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"streamReadings\", \"Executing the commit of the transaction - error :%s:\", sqlite3_errmsg(dbHandle));\n\t\t\trowNumber = -1;\n\t\t}\n\t\tm_streamOpenTransaction = true;\n\t}\n\n\tif (stmt != NULL)\n\t{\n\t\tif (sqlite3_finalize(stmt) != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"streamReadings\", \"freeing SQLite in memory structure - error :%s:\", sqlite3_errmsg(dbHandle));\n\t\t}\n\t}\n\tif (batch_stmt != NULL)\n\t{\n\t\tif (sqlite3_finalize(batch_stmt) != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"streamReadings\", \"freeing SQLite in memory batch structure - error :%s:\", sqlite3_errmsg(dbHandle));\n\t\t}\n\t}\n\n#if 
INSTRUMENT\n\tgettimeofday(&t2, NULL);\n#endif\n\n#if INSTRUMENT\n\tstruct timeval tm;\n\tdouble timeT1, timeT2;\n\n\ttimersub(&t1, &start, &tm);\n\ttimeT1 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\ttimersub(&t2, &t1, &tm);\n\ttimeT2 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\tLogger::getLogger()->warn(\"readingStream Timing with %d rows - stream handling %.3f seconds - commit/finalize %.3f seconds\",\n\t\t\t\trowNumber, timeT1, timeT2);\n#endif\n\n\treturn rowNumber;\n}\n\n/**\n * Append a set of readings to the readings table\n */\nint Connection::appendReadings(const char *readings)\n{\n// Default template parameter uses UTF8 and MemoryPoolAllocator.\nDocument doc;\nint      row = 0;\nbool     add_row = false;\n\n// Variables related to the SQLite insert using prepared command\nconst char   *user_ts;\nconst char   *asset_code;\nstring        reading;\nsqlite3_stmt *stmt, *batch_stmt;\nint           sqlite3_resut;\nstring        now;\n\n// Retry mechanism\nint retries = 0;\nint sleep_time_ms = 0;\n\n\tostringstream threadId;\n\tthreadId << std::this_thread::get_id();\n\n#if INSTRUMENT\n\tLogger::getLogger()->warn(\"appendReadings start thread :%s:\", threadId.str().c_str());\n\n\tstruct timeval\tstart, t1, t2, t3, t4, t5;\n#endif\n\n#if INSTRUMENT\n\tgettimeofday(&start, NULL);\n#endif\n\n\tint len = strlen(readings) + 1;\n\tchar *readingsCopy = NULL;\n\tParseResult ok;\n#if INSITU_THRESHOLD\n\tif (len > INSITU_THRESHOLD)\n\t{\n\t\treadingsCopy = (char *)malloc(len);\n\t\tif (readingsCopy == NULL)\n\t\t{\n\t\t\traiseError(\"appendReadings\", \"Unable to allocate a copy of the readings payload\");\n\t\t\treturn -1;\n\t\t}\n\t\tmemcpy(readingsCopy, readings, len);\n\t\tok = doc.ParseInsitu(readingsCopy);\n\t}\n\telse\n#endif\n\t{\n\t\tok = doc.Parse(readings);\n\t}\n\tif (!ok)\n\t{\n\t\traiseError(\"appendReadings\", GetParseError_En(doc.GetParseError()));\n\t\tif (readingsCopy)\n\t\t{\n\t\t\tfree(readingsCopy);\n\t\t}\n\t\treturn -1;\n\t}\n\n\tif (!doc.HasMember(\"readings\"))\n\t{\n\t\traiseError(\"appendReadings\", \"Payload is missing a readings array\");\n\t\tif 
(readingsCopy)\n\t\t{\n\t\t\tfree(readingsCopy);\n\t\t}\n\t\treturn -1;\n\t}\n\tValue &readingsValue = doc[\"readings\"];\n\tif (!readingsValue.IsArray())\n\t{\n\t\traiseError(\"appendReadings\", \"The readings property must be an array\");\n\t\tif (readingsCopy)\n\t\t{\n\t\t\tfree(readingsCopy);\n\t\t}\n\t\treturn -1;\n\t}\n\n\tconst char *sql_cmd = \"INSERT INTO  \" READINGS_DB_NAME_BASE \".readings ( user_ts, asset_code, reading ) VALUES  (?,?,?)\";\n\tstring cmd = sql_cmd;\n\tfor (int i = 0; i < APPEND_BATCH_SIZE - 1; i++)\n\t{\n\t\tcmd.append(\", (?,?,?)\");\n\t}\n\n\tif (sqlite3_prepare_v2(dbHandle, sql_cmd, strlen(sql_cmd), &stmt, NULL) != SQLITE_OK)\n\t{\n\t\traiseError(\"appendReadings\", sqlite3_errmsg(dbHandle));\n\t\tif (readingsCopy)\n\t\t{\n\t\t\tfree(readingsCopy);\n\t\t}\n\t\treturn -1;\n\t}\n\tif (sqlite3_prepare_v2(dbHandle, cmd.c_str(), cmd.length(), &batch_stmt, NULL) != SQLITE_OK)\n\t{\n\t\traiseError(\"appendReadings\", sqlite3_errmsg(dbHandle));\n\t\tsqlite3_finalize(stmt);\n\t\tif (readingsCopy)\n\t\t{\n\t\t\tfree(readingsCopy);\n\t\t}\n\t\treturn -1;\n\t}\n\t{\n\tm_writeAccessOngoing.fetch_add(1);\n\t//unique_lock<mutex> lck(db_mutex);\n\tsqlite3_exec(dbHandle, \"BEGIN TRANSACTION\", NULL, NULL, NULL);\n\n#if INSTRUMENT\n\tgettimeofday(&t1, NULL);\n#endif\n\n\tValue::ConstValueIterator itr = readingsValue.Begin();\n\tSizeType nReadings = readingsValue.Size();\n\tunsigned int nBatches = nReadings / APPEND_BATCH_SIZE;\n\tLogger::getLogger()->debug(\"Write %d readings in %d batches of %d\", nReadings, nBatches, APPEND_BATCH_SIZE);\n\tfor (int batch = 0; batch < nBatches; batch++)\n\t{\n\t\tint varNo = 1;\n\t\tfor (int readingNo = 0; readingNo < APPEND_BATCH_SIZE; readingNo++)\n\t\t{\n\t\t\tif (!itr->IsObject())\n\t\t\t{\n\t\t\t\tchar err[132];\n\t\t\t\tsnprintf(err, sizeof(err),\n\t\t\t\t\t\t\"Each reading in the readings array must be an object. 
Reading %d of batch %d\", readingNo, batch);\n\t\t\t\traiseError(\"appendReadings\", err);\n\t\t\t\tsqlite3_exec(dbHandle, \"ROLLBACK TRANSACTION;\", NULL, NULL, NULL);\n\t\t\t\tif (readingsCopy)\n\t\t\t\t{\n\t\t\t\t\tfree(readingsCopy);\n\t\t\t\t}\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\tadd_row = true;\n\n\t\t\t// Handles - user_ts\n\t\t\tchar formatted_date[LEN_BUFFER_DATE] = {0};\n\t\t\tuser_ts = (*itr)[\"user_ts\"].GetString();\n\t\t\tif (strcmp(user_ts, \"now()\") == 0)\n\t\t\t{\n\t\t\t\tgetNow(now);\n\t\t\t\tuser_ts = now.c_str();\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tif (!formatDate(formatted_date, sizeof(formatted_date), user_ts))\n\t\t\t\t{\n\t\t\t\t\traiseError(\"appendReadings\", \"Invalid date |%s|\", user_ts);\n\t\t\t\t\tadd_row = false;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tuser_ts = formatted_date;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (add_row)\n\t\t\t{\n\t\t\t\t// Handles - asset_code\n\t\t\t\tasset_code = (*itr)[\"asset_code\"].GetString();\n\n\t\t\t\tif (strlen(asset_code) == 0)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"Sqlitelb appendReadings - empty asset code value, row is ignored\");\n\t\t\t\t\titr++;\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\t// Handles - reading\n\t\t\t\tStringBuffer buffer;\n\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t(*itr)[\"reading\"].Accept(writer);\n\t\t\t\treading = escape(buffer.GetString());\n\n\t\t\t\tif (batch_stmt != NULL)\n\t\t\t\t{\n\t\t\t\t\tsqlite3_bind_text(batch_stmt, varNo++, user_ts, -1, SQLITE_TRANSIENT);\n\t\t\t\t\tsqlite3_bind_text(batch_stmt, varNo++, asset_code, -1, SQLITE_TRANSIENT);\n\t\t\t\t\tsqlite3_bind_text(batch_stmt, varNo++, reading.c_str(), -1, SQLITE_TRANSIENT);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\titr++;\n\t\t\tif (itr == readingsValue.End())\n\t\t\t\tbreak;\n\t\t}\n\n\t\tretries = 0;\n\t\tsleep_time_ms = 0;\n\n\t\t// Retry mechanism in case the SQLite DB is locked\n\t\tdo {\n\t\t\t// Insert the row using a lock to ensure one insert at a time\n\t\t\t{\n\n\t\t\t\tsqlite3_resut = 
sqlite3_step(batch_stmt);\n\n\t\t\t}\n\t\t\tif (sqlite3_resut == SQLITE_LOCKED  )\n\t\t\t{\n\t\t\t\tsleep_time_ms = PREP_CMD_RETRY_BASE + ((retries / 2 ) * (random() %  PREP_CMD_RETRY_BACKOFF));\n\t\t\t\tretries++;\n\n\t\t\t\tLogger::getLogger()->info(\"SQLITE_LOCKED - record :%d: - retry number :%d: sleep time ms :%d:\" ,row ,retries ,sleep_time_ms);\n\n\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(sleep_time_ms));\n\t\t\t}\n\t\t\tif (sqlite3_resut == SQLITE_BUSY)\n\t\t\t{\n\t\t\t\tostringstream threadId;\n\t\t\t\tthreadId << std::this_thread::get_id();\n\n\t\t\t\tsleep_time_ms = PREP_CMD_RETRY_BASE + ((retries / 2 ) * (random() %  PREP_CMD_RETRY_BACKOFF));\n\t\t\t\tretries++;\n\n\t\t\t\tLogger::getLogger()->info(\"SQLITE_BUSY - thread :%s: - record :%d: - retry number :%d: sleep time ms :%d:\", threadId.str().c_str() ,row, retries, sleep_time_ms);\n\n\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(sleep_time_ms));\n\t\t\t}\n\t\t} while (retries < PREP_CMD_MAX_RETRIES && (sqlite3_resut == SQLITE_LOCKED || sqlite3_resut == SQLITE_BUSY));\n\n\t\tif (sqlite3_resut == SQLITE_DONE)\n\t\t{\n\t\t\trow += APPEND_BATCH_SIZE;\n\n\t\t\tsqlite3_clear_bindings(batch_stmt);\n\t\t\tsqlite3_reset(batch_stmt);\n\t\t}\n\t\telse\n\t\t{\n\t\t\traiseError(\"appendReadings\",\"Inserting a row into SQLite using a prepared command - asset_code :%s: error :%s: reading :%s: \",\n\t\t\t\tasset_code,\n\t\t\t\tsqlite3_errmsg(dbHandle),\n\t\t\t\treading.c_str());\n\n\t\t\tsqlite3_exec(dbHandle, \"ROLLBACK TRANSACTION\", NULL, NULL, NULL);\n\t\t\tif (readingsCopy)\n\t\t\t{\n\t\t\t\tfree(readingsCopy);\n\t\t\t}\n\t\t\treturn -1;\n\t\t}\n\n\n\t}\n\n\tLogger::getLogger()->debug(\"Now do the remaining readings\");\n\t// Do individual inserts for the remainder of the readings\n\twhile (itr != readingsValue.End())\n\t{\n\t\tif (!itr->IsObject())\n\t\t{\n\t\t\traiseError(\"appendReadings\",\"Each reading in the readings array must be an 
object\");\n\t\t\tsqlite3_exec(dbHandle, \"ROLLBACK TRANSACTION;\", NULL, NULL, NULL);\n\t\t\tif (readingsCopy)\n\t\t\t{\n\t\t\t\tfree(readingsCopy);\n\t\t\t}\n\t\t\treturn -1;\n\t\t}\n\n\t\tadd_row = true;\n\n\t\t// Handles - user_ts\n\t\tchar formatted_date[LEN_BUFFER_DATE] = {0};\n\t\tuser_ts = (*itr)[\"user_ts\"].GetString();\n\t\tif (strcmp(user_ts, \"now()\") == 0)\n\t\t{\n\t\t\tgetNow(now);\n\t\t\tuser_ts = now.c_str();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (! formatDate(formatted_date, sizeof(formatted_date), user_ts) )\n\t\t\t{\n\t\t\t\traiseError(\"appendReadings\", \"Invalid date |%s|\", user_ts);\n\t\t\t\tadd_row = false;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tuser_ts = formatted_date;\n\t\t\t}\n\t\t}\n\n\t\tif (add_row)\n\t\t{\n\t\t\t// Handles - asset_code\n\t\t\tasset_code = (*itr)[\"asset_code\"].GetString();\n\n\t\t\tif (strlen(asset_code) == 0)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Sqlitelb appendReadings - empty asset code value, row is ignored\");\n\t\t\t\titr++;\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t// Handles - reading\n\t\t\tStringBuffer buffer;\n\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t(*itr)[\"reading\"].Accept(writer);\n\t\t\treading = escape(buffer.GetString());\n\n\t\t\tif(stmt != NULL) {\n\n\t\t\t\tsqlite3_bind_text(stmt, 1, user_ts         ,-1, SQLITE_TRANSIENT);\n\t\t\t\tsqlite3_bind_text(stmt, 2, asset_code      ,-1, SQLITE_TRANSIENT);\n\t\t\t\tsqlite3_bind_text(stmt, 3, reading.c_str(), -1, SQLITE_TRANSIENT);\n\n\t\t\t\tretries =0;\n\t\t\t\tsleep_time_ms = 0;\n\n\t\t\t\t// Retry mechanism in case SQLlite DB is locked\n\t\t\t\tdo {\n\t\t\t\t\t// Insert the row using a lock to ensure one insert at time\n\t\t\t\t\t{\n\n\t\t\t\t\t\tsqlite3_resut = sqlite3_step(stmt);\n\n\t\t\t\t\t}\n\t\t\t\t\tif (sqlite3_resut == SQLITE_LOCKED  )\n\t\t\t\t\t{\n\t\t\t\t\t\tsleep_time_ms = PREP_CMD_RETRY_BASE + ((retries / 2 ) * (random() %  
PREP_CMD_RETRY_BACKOFF));\n\t\t\t\t\t\tretries++;\n\n\t\t\t\t\t\tLogger::getLogger()->info(\"SQLITE_LOCKED - record :%d: - retry number :%d: sleep time ms :%d:\" ,row ,retries ,sleep_time_ms);\n\n\t\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(sleep_time_ms));\n\t\t\t\t\t}\n\t\t\t\t\tif (sqlite3_resut == SQLITE_BUSY)\n\t\t\t\t\t{\n\t\t\t\t\t\tostringstream threadId;\n\t\t\t\t\t\tthreadId << std::this_thread::get_id();\n\n\t\t\t\t\t\tsleep_time_ms = PREP_CMD_RETRY_BASE + ((retries / 2 ) * (random() %  PREP_CMD_RETRY_BACKOFF));\n\t\t\t\t\t\tretries++;\n\n\t\t\t\t\t\tLogger::getLogger()->info(\"SQLITE_BUSY - thread :%s: - record :%d: - retry number :%d: sleep time ms :%d:\", threadId.str().c_str() ,row, retries, sleep_time_ms);\n\n\t\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(sleep_time_ms));\n\t\t\t\t\t}\n\t\t\t\t} while (retries < PREP_CMD_MAX_RETRIES && (sqlite3_resut == SQLITE_LOCKED || sqlite3_resut == SQLITE_BUSY));\n\n\t\t\t\tif (sqlite3_resut == SQLITE_DONE)\n\t\t\t\t{\n\t\t\t\t\trow++;\n\n\t\t\t\t\tsqlite3_clear_bindings(stmt);\n\t\t\t\t\tsqlite3_reset(stmt);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\traiseError(\"appendReadings\",\"Inserting a row into SQLite using a prepared command - asset_code :%s: error :%s: reading :%s: \",\n\t\t\t\t\t\tasset_code,\n\t\t\t\t\t\tsqlite3_errmsg(dbHandle),\n\t\t\t\t\t\treading.c_str());\n\n\t\t\t\t\tsqlite3_exec(dbHandle, \"ROLLBACK TRANSACTION\", NULL, NULL, NULL);\n\t\t\t\t\tif (readingsCopy)\n\t\t\t\t\t{\n\t\t\t\t\t\tfree(readingsCopy);\n\t\t\t\t\t}\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\titr++;\n\t}\n\n\tsqlite3_resut = sqlite3_exec(dbHandle, \"END TRANSACTION\", NULL, NULL, NULL);\n\tif (sqlite3_resut != SQLITE_OK)\n\t{\n\t\traiseError(\"appendReadings\", \"Executing the commit of the transaction :%s:\", sqlite3_errmsg(dbHandle));\n\t\trow = -1;\n\t}\n\tm_writeAccessOngoing.fetch_sub(1);\n\t//db_cv.notify_all();\n\t}\n\n#if INSTRUMENT\n\t\tgettimeofday(&t2, 
NULL);\n#endif\n\n\tif(stmt != NULL)\n\t{\n\t\tif (sqlite3_finalize(stmt) != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"appendReadings\",\"freeing SQLite in memory structure - error :%s:\", sqlite3_errmsg(dbHandle));\n\t\t}\n\t}\n\tif(batch_stmt != NULL)\n\t{\n\t\tif (sqlite3_finalize(batch_stmt) != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"appendReadings\",\"freeing SQLite in memory batch structure - error :%s:\", sqlite3_errmsg(dbHandle));\n\t\t}\n\t}\n\n\tif (readingsCopy)\n\t{\n\t\tfree(readingsCopy);\n\t}\n#if INSTRUMENT\n\t\tgettimeofday(&t3, NULL);\n#endif\n\n#if INSTRUMENT\n\t\tstruct timeval tm;\n\t\tdouble timeT1, timeT2, timeT3;\n\n\t\ttimersub(&t1, &start, &tm);\n\t\ttimeT1 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\t\ttimersub(&t2, &t1, &tm);\n\t\ttimeT2 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\t\ttimersub(&t3, &t2, &tm);\n\t\ttimeT3 = tm.tv_sec + ((double)tm.tv_usec / 1000000);\n\n\t\tLogger::getLogger()->warn(\"appendReadings end   thread :%s: buffer :%10lu: count :%5d: JSON :%6.3f: inserts :%6.3f: finalize :%6.3f:\",\n\t\t\t\t\t\t\t\t   threadId.str().c_str(),\n\t\t\t\t\t\t\t\t   strlen(readings),\n\t\t\t\t\t\t\t\t   row,\n\t\t\t\t\t\t\t\t   timeT1,\n\t\t\t\t\t\t\t\t   timeT2,\n\t\t\t\t\t\t\t\t   timeT3\n\t\t);\n\n#endif\n\n\treturn row;\n}\n\n/**\n * Fetch a block of readings from the reading table\n * It might not work with SQLite 3\n *\n * Fetch, used by the north side, returns timestamp in UTC.\n *\n * NOTE : it expects to handle a date having a fixed format\n * with milliseconds, microseconds and timezone expressed,\n * like for example :\n *\n *    2019-01-11 15:45:01.123456+01:00\n */\nbool Connection::fetchReadings(unsigned long id,\n\t\t\t       unsigned int blksize,\n\t\t\t       std::string& resultSet)\n{\nchar sqlbuffer[512];\nchar *zErrMsg = NULL;\nint rc;\nint retrieve;\n\n\t// SQL command to extract the data from the readings.readings\n\tconst char *sql_cmd = 
R\"(\n\tSELECT\n\t\tid,\n\t\tasset_code,\n\t\treading,\n\t\tstrftime('%%Y-%%m-%%d %%H:%%M:%%S', user_ts, 'utc')  ||\n\t\tsubstr(user_ts, instr(user_ts, '.'), 7) AS user_ts,\n\t\tstrftime('%%Y-%%m-%%d %%H:%%M:%%f', ts, 'utc') AS ts\n\tFROM  )\" READINGS_DB_NAME_BASE R\"(.readings\n\tWHERE id >= %lu\n\tORDER BY id ASC\n\tLIMIT %u;\n\t)\";\n\n\t/*\n\t * This query assumes datetime values are in 'localtime'\n\t */\n\tsnprintf(sqlbuffer,\n\t\t sizeof(sqlbuffer),\n\t\t sql_cmd,\n\t\t id,\n\t\t blksize);\n\tlogSQL(\"ReadingsFetch\", sqlbuffer);\n\tsqlite3_stmt *stmt;\n\t// Prepare the SQL statement and get the result set\n\tif (sqlite3_prepare_v2(dbHandle,\n\t\t\t       sqlbuffer,\n\t\t\t       -1,\n\t\t\t       &stmt,\n\t\t\t       NULL) != SQLITE_OK)\n\t{\n\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\n\t\t// Failure\n\t\treturn false;\n\t}\n\telse\n\t{\n\t\t// Call result set mapping\n\t\trc = mapResultSet(stmt, resultSet);\n\n\t\t// Delete result set\n\t\tsqlite3_finalize(stmt);\n\n\t\t// Check result set errors\n\t\tif (rc != SQLITE_DONE)\n\t\t{\n\t\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\n\t\t\t// Failure\n\t\t\treturn false;\n               \t}\n\t\telse\n\t\t{\n\t\t\t// Success\n\t\t\treturn true;\n\t\t}\n\t}\n}\n\n\n/**\n * Perform a query against the readings table\n *\n * retrieveReadings, used by the API, returns timestamp in utc unless\n * otherwise requested.\n *\n */\nbool Connection::retrieveReadings(const string& condition, string& resultSet)\n{\n// Default template parameter uses UTF8 and MemoryPoolAllocator.\nDocument\tdocument;\nSQLBuffer\tsql;\n// Extra constraints to add to where clause\nSQLBuffer\tjsonConstraints;\nbool\t\tisAggregate = false;\nconst char\t*timezone = \"utc\";\nvector<string>  asset_codes;\n\n\ttry {\n\t\tif (dbHandle == NULL)\n\t\t{\n\t\t\traiseError(\"retrieve\", \"No SQLite 3 db connection available\");\n\t\t\treturn false;\n\t\t}\n\n\t\tif (condition.empty())\n\t\t{\n\t\t\tconst char *sql_cmd = 
R\"(\n\t\t\t\t\tSELECT\n\t\t\t\t\t\tid,\n\t\t\t\t\t\tasset_code,\n\t\t\t\t\t\treading,\n\t\t\t\t\t\tstrftime(')\" F_DATEH24_SEC R\"(', user_ts, 'utc')  ||\n\t\t\t\t\t\tsubstr(user_ts, instr(user_ts, '.'), 7) AS user_ts,\n\t\t\t\t\t\tstrftime(')\" F_DATEH24_MS R\"(', ts, 'localtime') AS ts\n\t\t\t\t\tFROM )\" READINGS_DB_NAME_BASE R\"(.readings)\";\n\n\t\t\tsql.append(sql_cmd);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (document.Parse(condition.c_str()).HasParseError())\n\t\t\t{\n\t\t\t\traiseError(\"retrieve\", \"Failed to parse JSON payload\");\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tif (document.HasMember(\"timezone\") && document[\"timezone\"].IsString())\n\t\t\t{\n\t\t\t\ttimezone = document[\"timezone\"].GetString();\n\t\t\t}\n\n\t\t\t// timebucket aggregate all datapoints\n\t\t\tif (aggregateAll(document))\n\t\t\t{       \n\t\t\t\treturn aggregateQuery(document, resultSet);\n\t\t\t}\n\n\t\t\tif (document.HasMember(\"aggregate\"))\n\t\t\t{\n\t\t\t\tisAggregate = true;\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tif (!jsonAggregates(document, document[\"aggregate\"], sql, jsonConstraints, true))\n\t\t\t\t{\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tsql.append(\" FROM  \" READINGS_DB_NAME_BASE \".\");\n\t\t\t}\n\t\t\telse if (document.HasMember(\"return\"))\n\t\t\t{\n\t\t\t\tint col = 0;\n\t\t\t\tValue& columns = document[\"return\"];\n\t\t\t\tif (! 
columns.IsArray())\n\t\t\t\t{\n\t\t\t\t\traiseError(\"retrieve\", \"The property return must be an array\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\t\t\t\tfor (Value::ConstValueIterator itr = columns.Begin(); itr != columns.End(); ++itr)\n\t\t\t\t{\n\t\t\t\t\tif (col)\n\t\t\t\t\t\tsql.append(\", \");\n\t\t\t\t\tif (!itr->IsObject())\t// Simple column name\n\t\t\t\t\t{\n\t\t\t\t\t\tif (strcmp(itr->GetString() ,\"user_ts\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Display without TZ expression and microseconds also\n\t\t\t\t\t\t\tsql.append(\" strftime('\" F_DATEH24_SEC \"', user_ts, '\");\n\t\t\t\t\t\t\tsql.append(timezone);\n\t\t\t\t\t\t\tsql.append(\"') \");\n\t\t\t\t\t\t\tsql.append(\" || substr(user_ts, instr(user_ts, '.'), 7) \");\n\t\t\t\t\t\t\tsql.append(\" as  user_ts \");\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (strcmp(itr->GetString() ,\"ts\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Display without TZ expression and microseconds also\n\t\t\t\t\t\t\tsql.append(\" strftime('\" F_DATEH24_MS \"', ts, '\");\n\t\t\t\t\t\t\tsql.append(timezone);\n\t\t\t\t\t\t\tsql.append(\"') \");\n\t\t\t\t\t\t\tsql.append(\" as ts \");\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(itr->GetString());\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tif (itr->HasMember(\"column\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (! (*itr)[\"column\"].IsString())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\t   \"column must be a string\");\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (itr->HasMember(\"format\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (! 
(*itr)[\"format\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\t\t   \"format must be a string\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t// SQLite 3 date format.\n\t\t\t\t\t\t\t\tstring new_format;\n\t\t\t\t\t\t\t\tapplyColumnDateFormatLocaltime((*itr)[\"format\"].GetString(),\n\t\t\t\t\t\t\t\t\t\t      (*itr)[\"column\"].GetString(),\n\t\t\t\t\t\t\t\t\t\t      new_format, true);\n\t\t\t\t\t\t\t\t// Add the formatted column or use it as is\n\t\t\t\t\t\t\t\tsql.append(new_format);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (itr->HasMember(\"timezone\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (! (*itr)[\"timezone\"].IsString())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\t\t   \"timezone must be a string\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t// SQLite3 doesnt support time zone formatting\n\t\t\t\t\t\t\t\tconst char *tz = (*itr)[\"timezone\"].GetString();\n\n\t\t\t\t\t\t\t\tif (strncasecmp(tz, \"utc\", 3) == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tif (strcmp((*itr)[\"column\"].GetString() ,\"user_ts\") == 0)\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t// Extract milliseconds and microseconds for the user_ts fields\n\n\t\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_SEC \"', user_ts, 'utc') \");\n\t\t\t\t\t\t\t\t\t\tsql.append(\" || substr(user_ts, instr(user_ts, '.'), 7) \");\n\t\t\t\t\t\t\t\t\t\tif (! itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \");\n\t\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_MS \"', \");\n\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t\tsql.append(\", 'utc')\");\n\t\t\t\t\t\t\t\t\t\tif (! 
itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \");\n\t\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse if (strncasecmp(tz, \"localtime\", 9) == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tif (strcmp((*itr)[\"column\"].GetString() ,\"user_ts\") == 0)\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t// Extract milliseconds and microseconds for the user_ts fields\n\n\t\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_SEC \"', user_ts, 'localtime') \");\n\t\t\t\t\t\t\t\t\t\tsql.append(\" || substr(user_ts, instr(user_ts, '.'), 7) \");\n\t\t\t\t\t\t\t\t\t\tif (! itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \");\n\t\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_MS \"', \");\n\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t\tsql.append(\", 'localtime')\");\n\t\t\t\t\t\t\t\t\t\tif (! 
itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \");\n\t\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t\t\t   \"SQLite3 plugin does not support timezones in queries\");\n\t\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\n\t\t\t\t\t\t\t\tif (strcmp((*itr)[\"column\"].GetString() ,\"user_ts\") == 0)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t// Extract milliseconds and microseconds for the user_ts fields\n\n\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_SEC \"', user_ts, '\");\n\t\t\t\t\t\t\t\t\tsql.append(timezone);\n\t\t\t\t\t\t\t\t\tsql.append(\"') \");\n\t\t\t\t\t\t\t\t\tsql.append(\" || substr(user_ts, instr(user_ts, '.'), 7) \");\n\t\t\t\t\t\t\t\t\tif (! itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \");\n\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tsql.append(\"strftime('\" F_DATEH24_MS \"', \");\n\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\tsql.append(\", '\");\n\t\t\t\t\t\t\t\t\tsql.append(timezone);\n\t\t\t\t\t\t\t\t\tsql.append(\"')\");\n\t\t\t\t\t\t\t\t\tif (! itr->HasMember(\"alias\"))\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tsql.append(\" AS \");\n\t\t\t\t\t\t\t\t\t\tsql.append((*itr)[\"column\"].GetString());\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tsql.append(' ');\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (itr->HasMember(\"json\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tconst Value& json = (*itr)[\"json\"];\n\t\t\t\t\t\t\tif (! 
returnJson(json, sql, jsonConstraints))\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\traiseError(\"retrieve\",\n\t\t\t\t\t\t\t\t   \"return object must have either a column or json property\");\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (itr->HasMember(\"alias\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsql.append(\" AS \\\"\");\n\t\t\t\t\t\t\tsql.append((*itr)[\"alias\"].GetString());\n\t\t\t\t\t\t\tsql.append('\"');\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tcol++;\n\t\t\t\t}\n\t\t\t\tsql.append(\" FROM  \" READINGS_DB_NAME_BASE \".\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsql.append(\"SELECT \");\n\t\t\t\tif (document.HasMember(\"modifier\"))\n\t\t\t\t{\n\t\t\t\t\tsql.append(document[\"modifier\"].GetString());\n\t\t\t\t\tsql.append(' ');\n\t\t\t\t}\n\n\t\t\t\tsql.append(\"id, asset_code, reading, strftime('\" F_DATEH24_SEC \"', user_ts, '\");\n\t\t\t\tsql.append(timezone);\n\t\t\t\tsql.append(\"')  || substr(user_ts, instr(user_ts, '.'), 7) AS user_ts,\");\n\t\t\t\tsql.append(\"strftime('\" F_DATEH24_MS \"', ts, '\");\n\t\t\t\tsql.append(timezone);\n\t\t\t\tsql.append(\"') AS ts FROM \" READINGS_DB_NAME_BASE \".\");\n\n\t\t\t}\n\t\t\tsql.append(\"readings\");\n\t\t\tif (document.HasMember(\"where\"))\n\t\t\t{\n\t\t\t\tsql.append(\" WHERE \");\n\t\t\t\tif (!jsonWhereClause(document[\"where\"], sql, asset_codes))\n\t\t\t\t{\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t\tif (! 
jsonConstraints.isEmpty())\n\t\t\t\t{\n\t\t\t\t\tsql.append(\" AND \");\n\t\t\t\t\tconst char *jsonBuf = jsonConstraints.coalesce();\n\t\t\t\t\tsql.append(jsonBuf);\n\t\t\t\t\tdelete[] jsonBuf;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse if (isAggregate)\n\t\t\t{\n\t\t\t\t/*\n\t\t\t\t * Performance improvement: force sqlite to use an index\n\t\t\t\t * if we are doing an aggregate and have no where clause.\n\t\t\t\t */\n\t\t\t\tsql.append(\" WHERE asset_code = asset_code\");\n\t\t\t}\n\t\t\tif (!jsonModifiers(document, sql, true))\n\t\t\t{\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\tsql.append(';');\n\n\t\tconst char *query = sql.coalesce();\n\t\tint rc;\n\t\tsqlite3_stmt *stmt;\n\n\t\tlogSQL(\"ReadingsRetrieve\", query);\n\n\t\t// Prepare the SQL statement and get the result set\n\t\trc = sqlite3_prepare_v2(dbHandle, query, -1, &stmt, NULL);\n\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\t\t\treturn false;\n\t\t}\n\n\t\t// Call result set mapping\n\t\trc = mapResultSet(stmt, resultSet);\n\n\t\t// Delete result set\n\t\tsqlite3_finalize(stmt);\n\n\t\t// Check result set mapping errors\n\t\tif (rc != SQLITE_DONE)\n\t\t{\n\t\t\traiseError(\"retrieve\", sqlite3_errmsg(dbHandle));\n\t\t\t// Failure\n\t\t\treturn false;\n\t\t}\n\t\t// Success\n\t\treturn true;\n\t} catch (exception& e) {\n\t\traiseError(\"retrieve\", \"Internal error: %s\", e.what());\n\t\treturn false;\n\t}\n}\n\n/**\n * Purge readings from the reading table\n */\nunsigned int Connection::purgeReadings(unsigned long age,\n\t\t\t\t\tunsigned int flags,\n\t\t\t\t\tunsigned long sent,\n\t\t\t\t\tstd::string& result)\n{\n\tlong unsentPurged = 0;\n\tlong unsentRetained = 0;\n\tlong numReadings = 0;\n\tunsigned long rowidLimit = 0, minrowidLimit = 0, maxrowidLimit = 0, rowidMin;\n\tstruct timeval 
startTv, endTv;\n\tint blocks = 0;\n\tbool flag_retain;\n\n\tLogger *logger = Logger::getLogger();\n\n\tflag_retain = false;\n\n\tif ( (flags & STORAGE_PURGE_RETAIN_ANY) || (flags & STORAGE_PURGE_RETAIN_ALL) )\n\t{\n\t\tflag_retain = true;\n\t}\n\tLogger::getLogger()->debug(\"%s - flags :%X: flag_retain :%d: sent :%ld:\", __FUNCTION__, flags, flag_retain, sent);\n\n\t// Prepare empty result\n\tresult = \"{ \\\"removed\\\" : 0, \";\n\tresult += \" \\\"unsentPurged\\\" : 0, \";\n\tresult += \" \\\"unsentRetained\\\" : 0, \";\n\tresult += \" \\\"readings\\\" : 0, \";\n\tresult += \" \\\"method\\\" : \\\"time\\\", \";\n\tresult += \" \\\"duration\\\" : 0 }\";\n\n\tlogger->info(\"Purge starting...\");\n\tgettimeofday(&startTv, NULL);\n\t/*\n\t * We fetch the current rowid and limit the purge process to work on just\n\t * those rows present in the database when the purge process started.\n\t * This prevents us looping in the purge process if new readings become\n\t * eligible for purging at a rate that is faster than we can purge them.\n\t */\n\t{\n\t\tchar *zErrMsg = NULL;\n\t\tint rc;\n\t\trc = SQLexec(dbHandle, \"readings\",\n\t\t\t\t\t \"select max(rowid) from \" READINGS_DB_NAME_BASE \".\"  READINGS_TABLE \";\",\n\t\t\trowidCallback,\n\t\t\t&rowidLimit,\n\t\t\t&zErrMsg);\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"purge - phase 0, fetching rowid limit \", zErrMsg);\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\treturn 0;\n\t\t}\n\t\tmaxrowidLimit = rowidLimit;\n\t}\n\n\t{\n\t\tchar *zErrMsg = NULL;\n\t\tint rc;\n\t\trc = SQLexec(dbHandle, \"readings\",\n\t\t\t\t\t \"select min(rowid) from \" READINGS_DB_NAME_BASE \".\" READINGS_TABLE \";\",\n\t\t\trowidCallback,\n\t\t\t&minrowidLimit,\n\t\t\t&zErrMsg);\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"purge - phase 0, fetching minrowid limit \", zErrMsg);\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\tif (age == 0)\n\t{\n\t\t/*\n\t\t * An age of 0 means remove the oldest hours data.\n\t\t * 
So set age based on the data we have and continue.\n\t\t */\n\t\tSQLBuffer oldest;\n\t\toldest.append(\"SELECT (strftime('%s','now', 'utc') - strftime('%s', MIN(user_ts)))/360 FROM \" READINGS_DB_NAME_BASE \".\" READINGS_TABLE \" where rowid <= \");\n\t\toldest.append(rowidLimit);\n\t\toldest.append(';');\n\t\tconst char *query = oldest.coalesce();\n\t\tchar *zErrMsg = NULL;\n\t\tint rc;\n\t\tint purge_readings = 0;\n\n\t\t// Exec query and get result in 'purge_readings' via 'selectCallback'\n\t\trc = SQLexec(dbHandle, \"readings\",\n\t\t\t\t\t query,\n\t\t\t\t\t selectCallback,\n\t\t\t\t\t &purge_readings,\n\t\t\t\t\t &zErrMsg);\n\t\t// Release memory for 'query' var\n\t\tdelete[] query;\n\n\t\tif (rc == SQLITE_OK)\n\t\t{\n\t\t\tage = purge_readings;\n\t\t}\n\t\telse\n\t\t{\n\t\t\traiseError(\"purge - phase 1\", zErrMsg);\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\t{\n\t\t/*\n\t\t * Refine rowid limit to just those rows older than age hours.\n\t\t */\n\t\tchar *zErrMsg = NULL;\n\t\tint rc;\n\t\tunsigned long l = minrowidLimit;\n\t\tunsigned long r;\n\t\tif (flag_retain) {\n\n\t\t\tr = min(sent, rowidLimit);\n\t\t} else {\n\t\t\tr = rowidLimit;\n\t\t}\n\n\t\tr = max(r, l);\n\t\tlogger->debug   (\"%s:%d: l=%u, r=%u, sent=%u, rowidLimit=%u, minrowidLimit=%u, flags=%u\", __FUNCTION__, __LINE__, l, r, sent, rowidLimit, minrowidLimit, flags);\n\n\t\tif (l == r)\n\t\t{\n\t\t\tlogger->info(\"No data to purge: min_id == max_id == %u\", minrowidLimit);\n\t\t\treturn 0;\n\t\t}\n\n\t\tunsigned long m=l;\n\t\tsqlite3_stmt *idStmt;\n\t\tbool isMinId = false;\n\t\twhile (l <= r)\n\t\t{\n\t\t\tunsigned long midRowId = 0;\n\t\t\tunsigned long prev_m = m;\n\t\t\tm = l + (r - l) / 2;\n\t\t\tif (prev_m == m) break;\n\n\t\t\t// e.g. 
select id from readings where rowid = 219867307 AND user_ts < datetime('now' , '-24 hours', 'utc');\n\t\t\tSQLBuffer sqlBuffer;\n\t\t\tsqlBuffer.append(\"select id from \" READINGS_DB_NAME_BASE \".\" READINGS_TABLE \" where rowid = ?\");\n\t\t\tsqlBuffer.append(\" AND user_ts < datetime('now' , '-\");\n\t\t\tsqlBuffer.append(age);\n\t\t\tsqlBuffer.append(\" hours');\");\n\t\t\t\n\t\t\tconst char *query = sqlBuffer.coalesce();\n\n\t\t\trc = sqlite3_prepare_v2(dbHandle, query, -1, &idStmt, NULL);\n\t\t\n\t\t\t// Only the rowid is a bound parameter; a '?' inside a quoted SQL\n\t\t\t// string literal is never recognised as a parameter, so the age\n\t\t\t// is appended directly into the SQL text\n\t\t\tsqlite3_bind_int(idStmt, 1,(unsigned long) m);\n\n\t\t\tif (SQLstep(idStmt) == SQLITE_ROW)\n\t\t\t{\n\t\t\t\tmidRowId = sqlite3_column_int(idStmt, 0);\n\t\t\t\tisMinId = true;\n\t\t\t}\n\t\t\tdelete[] query;\n\t\t\tsqlite3_clear_bindings(idStmt);\n\t\t\tsqlite3_reset(idStmt);\n\n\t\t\tif (rc == SQLITE_ERROR)\n\t\t\t{\n\t\t\t\traiseError(\"purge - phase 1, fetching midRowId \", sqlite3_errmsg(dbHandle));\n\t\t\t\treturn 0;\n\t\t\t}\n\n\t\t\tif (midRowId == 0) // mid row doesn't satisfy given condition for user_ts, so discard right/later half and look in left/earlier half\n\t\t\t{\n\t\t\t\t// search in earlier/left half\n\t\t\t\tr = m - 1;\n\n\t\t\t\t// The m position should be skipped as midRowId is 0\n\t\t\t\tm = r;\n\t\t\t}\n\t\t\telse //if (l != m)\n\t\t\t{\n\t\t\t\t// search in later/right half\n\t\t\t\tl = m + 1;\n\t\t\t}\n\t\t}\n\n\t\tif(isMinId)\n\t\t{\n\t\t\tsqlite3_finalize(idStmt);\n\t\t}\n\n\t\trowidLimit = m;\n\n\t\t{ // Fix the value of rowidLimit\n\n\t\t\tLogger::getLogger()->debug(\"%s - s1 rowidLimit :%lu: minrowidLimit :%lu: maxrowidLimit :%lu:\", __FUNCTION__, rowidLimit, minrowidLimit, maxrowidLimit);\n\n\t\t\tSQLBuffer sqlBuffer;\n\t\t\tsqlBuffer.append(\"select max(id) from \" READINGS_DB_NAME_BASE \".\" READINGS_TABLE \" where rowid <= \");\n\t\t\tsqlBuffer.append(rowidLimit);\n\t\t\tsqlBuffer.append(\" AND user_ts < datetime('now' , '-\");\n\t\t\tsqlBuffer.append(age);\n\t\t\tsqlBuffer.append(\" 
hours');\");\n\t\t\tconst char *query = sqlBuffer.coalesce();\n\n\t\t\trc = SQLexec(dbHandle, \"readings\",\n\t\t\t\t\t\t query,\n\t\t\t\t\t\t rowidCallback,\n\t\t\t\t\t\t &rowidLimit,\n\t\t\t\t\t\t &zErrMsg);\n\n\t\t\tdelete[] query;\n\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\traiseError(\"purge - phase 1, fetching rowidLimit \", zErrMsg);\n\t\t\t\tsqlite3_free(zErrMsg);\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t\tLogger::getLogger()->debug(\"%s - s2 rowidLimit :%lu: minrowidLimit :%lu: maxrowidLimit :%lu:\", __FUNCTION__, rowidLimit, minrowidLimit, maxrowidLimit);\n\t\t}\n\n\t\tif (minrowidLimit == rowidLimit)\n\t\t{\n\t\t\tlogger->info(\"No data to purge\");\n\t\t\treturn 0;\n\t\t}\n\n\t\trowidMin = minrowidLimit;\n\t}\n\t//logger->info(\"Purge collecting unsent row count\");\n\tif ( ! flag_retain )\n\t{\n\t\tchar *zErrMsg = NULL;\n\t\tint rc;\n\t\tint lastPurgedId;\n\t\tSQLBuffer idBuffer;\n\t\tidBuffer.append(\"select id from \" READINGS_DB_NAME_BASE \".\" READINGS_TABLE \" where rowid = \");\n\t\tidBuffer.append(rowidLimit);\n\t\tidBuffer.append(';');\n\t\tconst char *idQuery = idBuffer.coalesce();\n\t\trc = SQLexec(dbHandle, \"readings\",\n\t\t\t\t\t idQuery,\n\t\t\t\t\t rowidCallback,\n\t\t\t\t\t &lastPurgedId,\n\t\t\t\t\t &zErrMsg);\n\n\t\t// Release memory for 'idQuery' var\n\t\tdelete[] idQuery;\n\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\traiseError(\"purge - phase 0, fetching rowid limit \", zErrMsg);\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\treturn 0;\n\t\t}\n\n\t\tif (sent != 0 && lastPurgedId > sent)\t// Unsent readings will be purged\n\t\t{\n\t\t\t// Get number of unsent rows we are about to remove\n\t\t\tint unsent = rowidLimit - sent;\n\t\t\tunsentPurged = unsent;\n\t\t}\n\t}\n#if 0\n\tif (m_writeAccessOngoing)\n\t{\n\t\twhile (m_writeAccessOngoing)\n\t\t{\n\t\t\tlogger->warn(\"Yielding for another write access\");\n\t\t\tstd::this_thread::yield();\n\t\t}\n\t}\n#endif\n\n\tunsigned int deletedRows = 0;\n\tunsigned int rowsAffected, totTime=0, prevBlocks=0, 
prevTotTime=0;\n\tlogger->info(\"Purge about to delete readings # %ld to %ld\", rowidMin, rowidLimit);\n\tsqlite3_stmt *stmt;\n\tbool rowsAvailableToPurge = false;\n\twhile (rowidMin < rowidLimit)\n\t{\n\t\tblocks++;\n\t\trowidMin += purgeBlockSize;\n\t\tif (rowidMin > rowidLimit)\n\t\t{\n\t\t\trowidMin = rowidLimit;\n\t\t}\n\t\t\n\t\tint rc;\n\t\t{\n\t\t\tSQLBuffer sql;\n\t\t\tsql.append(\"DELETE FROM \" READINGS_DB_NAME_BASE \".\" READINGS_TABLE \" WHERE rowid <= ? ;\");\n\t\t\tconst char *query = sql.coalesce();\n\t\t\t\n\t\t\trc = sqlite3_prepare_v2(dbHandle, query, strlen(query), &stmt, NULL);\n\t\t\tif (rc != SQLITE_OK)\n\t\t\t{\n\t\t\t\traiseError(\"purgeReadings\", sqlite3_errmsg(dbHandle));\n\t\t\t\tLogger::getLogger()->error(\"SQL statement: %s\", query);\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t\tdelete[] query;\n\t\t}\n\t\tsqlite3_bind_int(stmt, 1,(unsigned long) rowidMin);\n\t\trowsAvailableToPurge = true;\n\t\t{\n\t\t\t//unique_lock<mutex> lck(db_mutex);\n//\t\tif (m_writeAccessOngoing) db_cv.wait(lck);\n\n\t\t\tSTART_TIME;\n\t\t\t// Exec DELETE query: no callback, no resultset\n\t\t\trc = SQLstep(stmt);\n\n\t\t\tEND_TIME;\n\t\t\t\n\t\t\tlogSQL(\"ReadingsPurge\", sqlite3_expanded_sql(stmt));\n\n\t\t\tlogger->debug(\"%s - DELETE - query :%s: rowsAffected :%ld:\",  __FUNCTION__, sqlite3_expanded_sql(stmt) ,rowsAffected);\n\n\t\t\ttotTime += usecs;\n\n\t\t\tif(usecs>150000)\n\t\t\t{\n\t\t\t\tstd::this_thread::yield();\t// Give other threads a chance to run\n\t\t\t}\n\t\t}\n\t\tif (rc == SQLITE_DONE)\n\t\t{\n\t\t\tsqlite3_clear_bindings(stmt);\n\t\t\tsqlite3_reset(stmt);\n\t\t}\n\t\telse\n\t\t{\n\t\t\traiseError(\"purge - phase 3\", sqlite3_errmsg(dbHandle));\n\t\t\treturn 0;\n\t\t}\n\t\t\n\t\t// Get db changes\n\t\trowsAffected = sqlite3_changes(dbHandle);\n\t\tdeletedRows += rowsAffected;\n\t\t\n\t\tlogger->debug(\"Purge delete block #%d with %d readings\", blocks, rowsAffected);\n\n\t\tif(blocks % RECALC_PURGE_BLOCK_SIZE_NUM_BLOCKS == 0)\n\t\t{\n\t\t\tint 
prevAvg = prevTotTime/(prevBlocks?prevBlocks:1);\n\t\t\tint currAvg = (totTime-prevTotTime)/(blocks-prevBlocks);\n\t\t\tint avg = ((prevAvg?prevAvg:currAvg)*5 + currAvg*5) / 10; // 50% weightage for long term avg and 50% weightage for current avg\n\t\t\tprevBlocks = blocks;\n\t\t\tprevTotTime = totTime;\n\t\t\tint deviation = abs(avg - TARGET_PURGE_BLOCK_DEL_TIME);\n\t\t\tlogger->debug(\"blocks=%d, totTime=%d usecs, prevAvg=%d usecs, currAvg=%d usecs, avg=%d usecs, TARGET_PURGE_BLOCK_DEL_TIME=%d usecs, deviation=%d usecs\",\n\t\t\t\t\t\t  blocks, totTime, prevAvg, currAvg, avg, TARGET_PURGE_BLOCK_DEL_TIME, deviation);\n\t\t\tif (deviation > TARGET_PURGE_BLOCK_DEL_TIME/10)\n\t\t\t{\n\t\t\t\tfloat ratio = (float)TARGET_PURGE_BLOCK_DEL_TIME / (float)avg;\n\t\t\t\tif (ratio > 2.0) ratio = 2.0;\n\t\t\t\tif (ratio < 0.5) ratio = 0.5;\n\t\t\t\tpurgeBlockSize = (float)purgeBlockSize * ratio;\n\t\t\t\tpurgeBlockSize = purgeBlockSize / PURGE_BLOCK_SZ_GRANULARITY * PURGE_BLOCK_SZ_GRANULARITY;\n\t\t\t\tif (purgeBlockSize < MIN_PURGE_DELETE_BLOCK_SIZE)\n\t\t\t\t\tpurgeBlockSize = MIN_PURGE_DELETE_BLOCK_SIZE;\n\t\t\t\tif (purgeBlockSize > MAX_PURGE_DELETE_BLOCK_SIZE)\n\t\t\t\t\tpurgeBlockSize = MAX_PURGE_DELETE_BLOCK_SIZE;\n\t\t\t\tlogger->debug(\"Changed purgeBlockSize to %d\", purgeBlockSize);\n\t\t\t}\n\t\t\tstd::this_thread::yield();\t// Give other threads a chance to run\n\t\t}\n\t\t//Logger::getLogger()->debug(\"Purge delete block #%d with %d readings\", blocks, rowsAffected);\n\t}\n\t\n\tif (rowsAvailableToPurge)\n\t{\n\t\tsqlite3_finalize(stmt);\n\t}\n\t\n\tunsentRetained = maxrowidLimit - rowidLimit;\n\n\tnumReadings = maxrowidLimit +1 - minrowidLimit - deletedRows;\n\n\tif (sent == 0)\t// Special case when no north process is used\n\t{\n\t\tunsentPurged = deletedRows;\n\t}\n\n\tgettimeofday(&endTv, NULL);\n\tunsigned long duration = (1000000 * (endTv.tv_sec - startTv.tv_sec)) + endTv.tv_usec - startTv.tv_usec;\n\n\tostringstream 
convert;\n\n\tconvert << \"{ \\\"removed\\\" : \" << deletedRows << \", \";\n\tconvert << \" \\\"unsentPurged\\\" : \" << unsentPurged << \", \";\n\tconvert << \" \\\"unsentRetained\\\" : \" << unsentRetained << \", \";\n\tconvert << \" \\\"readings\\\" : \" << numReadings << \", \";\n\tconvert << \" \\\"method\\\" : \\\"time\\\", \";\n\tconvert << \" \\\"duration\\\" : \" << duration << \" }\";\n\n\tresult = convert.str();\n\n\t//logger->debug(\"Purge result=%s\", result.c_str());\n\n\tlogger->info(\"Purge process complete in %d blocks in %lduS\", blocks, duration);\n\n\tLogger::getLogger()->debug(\"%s - age :%lu: flag_retain :%x: sent :%lu: result :%s:\", __FUNCTION__, age, flags, flag_retain, result.c_str() );\n\n\treturn deletedRows;\n}\n\n\n/**\n * Purge readings from the reading table\n */\nunsigned int  Connection::purgeReadingsByRows(unsigned long rows,\n\t\t\t\t\t  unsigned int flags,\n\t\t\t\t\t  unsigned long sent,\n\t\t\t\t\t  std::string& result)\n{\n\tunsigned long deletedRows = 0, unsentPurged = 0, unsentRetained = 0, numReadings = 0;\n\tunsigned long limit = 0;\n\n\tunsigned long rowcount, minId, maxId;\n\tunsigned long rowsAffected;\n\tunsigned long deletePoint;\n\tbool flag_retain;\n\tstruct timeval startTv, endTv;\n\n\n\tLogger *logger = Logger::getLogger();\n\n\tgettimeofday(&startTv, NULL);\n\tflag_retain = false;\n\n\tif ( (flags & STORAGE_PURGE_RETAIN_ANY) || (flags & STORAGE_PURGE_RETAIN_ALL) )\n\t{\n\t\tflag_retain = true;\n\t}\n\tLogger::getLogger()->debug(\"%s - flags :%X: flag_retain :%d: sent :%ld:\", __FUNCTION__, flags, flag_retain, sent);\n\n\tlogger->info(\"Purge by Rows called\");\n\tif (flag_retain)\n\t{\n\t\tlimit = sent;\n\t\tlogger->info(\"Sent is %lu\", sent);\n\t}\n\tlogger->info(\"Purge by Rows called with flag_retain %d, rows %lu, limit %lu\", flag_retain, rows, limit);\n\t// Don't save unsent rows\n\n\tchar *zErrMsg = NULL;\n\tint rc;\n\tsqlite3_stmt *stmt;\n\tsqlite3_stmt *idStmt;\n\trc = SQLexec(dbHandle, 
\"readings\",\n\t\t\t\t \"select count(rowid) from \" READINGS_DB_NAME_BASE \".\" READINGS_TABLE \";\",\n\t\trowidCallback,\n\t\t&rowcount,\n\t\t&zErrMsg);\n\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"purge - phaase 0, fetching row count\", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\treturn 0;\n\t}\n\n\trc = SQLexec(dbHandle, \"readings\",\n\t\t\t\t \"select max(id) from \" READINGS_DB_NAME_BASE \".\" READINGS_TABLE \";\",\n\t\trowidCallback,\n\t\t&maxId,\n\t\t&zErrMsg);\n\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"purge - phaase 0, fetching maximum id\", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\treturn 0;\n\t}\n\n\tnumReadings = rowcount;\n\trowsAffected = 0;\n\tdeletedRows = 0;\n\tbool rowsAvailableToPurge = true;\n\n\t// Create the prepared statements\n\tSQLBuffer sqlBuffer;\n\tsqlBuffer.append(\"select min(id) from \" READINGS_DB_NAME_BASE \".\" READINGS_TABLE \";\");\n\tconst char *idquery = sqlBuffer.coalesce();\n\n\trc = sqlite3_prepare_v2(dbHandle, idquery, -1, &idStmt, NULL);\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"purgeReadingsByRows\", sqlite3_errmsg(dbHandle));\n\t\tLogger::getLogger()->error(\"SQL statement: %s\", idquery);\n\t\tdelete[] idquery;\n\t\treturn 0;\n\t}\n\tdelete[] idquery;\n\n\tSQLBuffer sql;\n\tsql.append(\"delete from \" READINGS_DB_NAME_BASE \".\" READINGS_TABLE \"  where id <= ? 
;\");\n\tconst char *delquery = sql.coalesce();\n\n\trc = sqlite3_prepare_v2(dbHandle, delquery, strlen(delquery), &stmt, NULL);\n\t\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"purgeReadingsByRows\", sqlite3_errmsg(dbHandle));\n\t\tLogger::getLogger()->error(\"SQL statement: %s\", delquery);\n\t\tdelete[] delquery;\n\t\treturn 0;\n\t}\n\tdelete[] delquery;\n\n\tdo\n\t{\n\t\tif (rowcount <= rows)\n\t\t{\n\t\t\tlogger->info(\"Row count %d is less than required rows %d\", rowcount, rows);\n\t\t\trowsAvailableToPurge = false;\n\t\t\tbreak;\n\t\t}\n\n\t\tif (SQLstep(idStmt) == SQLITE_ROW)\n\t\t{\n\t\t\tminId = sqlite3_column_int(idStmt, 0);\n\t\t}\n\n\n\t\tsqlite3_clear_bindings(idStmt);\n\t\tsqlite3_reset(idStmt);\n\t\t\n\t\tif (rc == SQLITE_ERROR)\n\t\t{\n\t\t\traiseError(\"purge - phaase 0, fetching minimum id\", sqlite3_errmsg(dbHandle));\n\t\t\tsqlite3_free(zErrMsg);\n\t\t\treturn 0;\n\t\t}\n\n\t\tdeletePoint = minId + m_purgeBlockSize;\n\t\tif (maxId - deletePoint < rows || deletePoint > maxId)\n\t\t\tdeletePoint = maxId - rows;\n\n\t\t// Do not delete\n\t\tif (flag_retain) {\n\n\t\t\tif (limit < deletePoint)\n\t\t\t{\n\t\t\t\tdeletePoint = limit;\n\t\t\t}\n\t\t}\n\t\t\n\t\t{\n\t\t\tlogger->info(\"RowCount %lu, Max Id %lu, min Id %lu, delete point %lu\", rowcount, maxId, minId, deletePoint);\n\t\t\t\n\t\t}\n\t\tsqlite3_bind_int(stmt, 1,(unsigned long) deletePoint);\n\n\t\t{\n\t\t\t// Exec DELETE query: no callback, no resultset\n\t\t\trc = SQLstep(stmt);\n\t\t\tif (rc == SQLITE_DONE)\n\t\t\t{\n\t\t\t\tsqlite3_clear_bindings(stmt);\n\t\t\t\tsqlite3_reset(stmt);\n\t\t\t}\n\t\t\trowsAffected = sqlite3_changes(dbHandle);\n\n\t\t\tdeletedRows += rowsAffected;\n\t\t\tnumReadings -= rowsAffected;\n\t\t\trowcount    -= rowsAffected;\n\n\t\t\tlogger->debug(\"Deleted %lu rows\", rowsAffected);\n\t\t\tif (rowsAffected == 0)\n\t\t\t{\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (limit != 0 && sent != 0)\n\t\t\t{\n\t\t\t\tunsentPurged = deletePoint - sent;\n\t\t\t}\n\t\t\telse if 
(!limit)\n\t\t\t{\n\t\t\t\tunsentPurged += rowsAffected;\n\t\t\t}\n\t\t}\n\t\tstd::this_thread::yield();\t// Give other threads a chance to run\n\t} while (rowcount > rows);\n\n\tif (rowsAvailableToPurge)\n\t{\n\t\tsqlite3_finalize(idStmt);\n\t\tsqlite3_finalize(stmt);\n\t}\n\t\n\n\tif (limit)\n\t{\n\t\tunsentRetained = numReadings - rows;\n\t}\n\n\tgettimeofday(&endTv, NULL);\n\tunsigned long duration = (1000000 * (endTv.tv_sec - startTv.tv_sec)) + endTv.tv_usec - startTv.tv_usec;\n\n\tostringstream convert;\n\n\tconvert << \"{ \\\"removed\\\" : \" << deletedRows << \", \";\n\tconvert << \" \\\"unsentPurged\\\" : \" << unsentPurged << \", \";\n\tconvert << \" \\\"unsentRetained\\\" : \" << unsentRetained << \", \";\n\tconvert << \" \\\"readings\\\" : \" << numReadings << \", \";\n\tconvert << \" \\\"method\\\" : \\\"rows\\\", \";\n\tconvert << \" \\\"duration\\\" : \" << duration << \" }\";\n\n\tresult = convert.str();\n\n\tLogger::getLogger()->debug(\"%s - rows :%lu: flag :%x: sent :%lu: numReadings :%lu:  rowsAffected :%u:  result :%s:\", __FUNCTION__, rows, flags, sent, numReadings, rowsAffected, result.c_str() );\n\n\tlogger->info(\"Purge by Rows complete: %s\", result.c_str());\n\treturn deletedRows;\n}\n\n/**\n * Purge readings by asset or purge all readings\n *\n * @param asset\t\tThe asset name to purge\n * \t\t\tIf empty all assets will be removed\n * @return\t\tThe number of removed asset records\n */\nunsigned int Connection::purgeReadingsAsset(const string& asset)\n{\nSQLBuffer sql;\nunsigned int rowsAffected = 0;\n\tsql.append(\"DELETE FROM \" READINGS_DB_NAME_BASE \".\" READINGS_TABLE);\n\t\t       \n\tif (!asset.empty())\n\t{\n\t\tsql.append(\"  WHERE asset_code = '\");\n\t\tsql.append(asset);\n\t\tsql.append('\\'');\n\t}\n\tsql.append(';');\n\n\tconst char *query = sql.coalesce();\n\tchar *zErrMsg = NULL;\n\tint rc;\n\n\tlogSQL(\"ReadingsAssetPurge\", query);\n\n#if 0\n\tif (m_writeAccessOngoing)\n\t{\n\t\twhile 
(m_writeAccessOngoing)\n\t\t{\n\t\t\tstd::this_thread::yield();\n\t\t}\n\t}\n#endif\n\n\tSTART_TIME;\n\t// Exec DELETE query: no callback, no resultset\n\trc = SQLexec(dbHandle, \"readings\",\n\t\t\tquery,\n\t\t\tNULL,\n\t\t\tNULL,\n\t\t\t&zErrMsg);\n\tEND_TIME;\n\n\t// Release memory for 'query' var\n\tdelete[] query;\n\n\tif (rc != SQLITE_OK)\n\t{\n\t\traiseError(\"ReadingsAssetPurge\", zErrMsg);\n\t\tsqlite3_free(zErrMsg);\n\t\treturn rowsAffected;\n\t}\n\n\t// Get db changes\n\trowsAffected = sqlite3_changes(dbHandle);\n\n\treturn rowsAffected;\n}\n"
  },
  {
    "path": "C/plugins/storage/sqlitelb/include/common.h",
    "content": "#ifndef _COMMON_CONNECTION_H\n#define _COMMON_CONNECTION_H\n\n#include <sql_buffer.h>\n#include <iostream>\n#include <sqlite3.h>\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/error/error.h\"\n#include \"rapidjson/error/en.h\"\n#include <string>\n#include <map>\n#include <stdarg.h>\n#include <stdlib.h>\n#include <sstream>\n#include <logger.h>\n#include <time.h>\n#include <unistd.h>\n#include <chrono>\n#include <thread>\n#include <atomic>\n#include <condition_variable>\n#include <sys/time.h>\n#include <connection.h>\n\n#define\tSTORAGE_PURGE_RETAIN_ANY 0x0001U\n#define\tSTORAGE_PURGE_RETAIN_ALL 0x0002U\n#define STORAGE_PURGE_SIZE\t     0x0004U\n\n#define  DB_CONFIGURATION \"PRAGMA busy_timeout = 5000; PRAGMA cache_size = -4000; PRAGMA journal_mode = WAL; PRAGMA secure_delete = off; PRAGMA journal_size_limit = 4096000;\"\n\nstatic std::map<std::string, std::string> sqliteDateFormat = {\n                                                {\"HH24:MI:SS\",\n                                                        F_TIMEH24_S},\n                                                {\"YYYY-MM-DD HH24:MI:SS.MS\",\n                                                        F_DATEH24_MS},\n                                                {\"YYYY-MM-DD HH24:MI:SS\",\n                                                        F_DATEH24_S},\n                                                {\"YYYY-MM-DD HH24:MI\",\n                                                        F_DATEH24_M},\n                                                {\"YYYY-MM-DD HH24\",\n                                                        F_DATEH24_H},\n                                                {\"\", \"\"}\n                                        };\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlitelb/include/profile.h",
    "content": "#ifndef _PROFILE_H\n#define _PROFILE_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <string>\n#include <vector>\n#include <sys/time.h>\n#include <logger.h>\n\n#define\tTIME_BUCKETS\t20\n#define BUCKET_SIZE\t5\nclass ProfileItem\n{\n\tpublic:\n\t\tProfileItem(const std::string& reference) : m_reference(reference)\n\t\t\t{ gettimeofday(&m_tvStart, NULL);\n\t\t\t\tauto timenow = chrono::system_clock::to_time_t(chrono::system_clock::now());\n\t\t\t\tm_ts = std::string(ctime(&timenow));\n\t\t\t\tm_ts.back() = '\\0'; };\n\t\t~ProfileItem() {};\n\t\tvoid \tcomplete()\n\t\t\t{\n\t\t\t\tstruct timeval tv;\n\n\t\t\t\tgettimeofday(&tv, NULL);\n\t\t\t\tm_duration = (tv.tv_sec - m_tvStart.tv_sec) * 1000 +\n\t\t\t\t(tv.tv_usec - m_tvStart.tv_usec) / 1000;\n\t\t\t};\n\t\tunsigned long getDuration() { return m_duration; };\n\t\tconst std::string& getReference() const { return m_reference; };\n\t\tconst std::string& getTs() const { return m_ts; };\n\tprivate:\n\t\tstd::string\t\tm_reference;\n\t\tstruct timeval\t\tm_tvStart;\n\t\tunsigned long\t\tm_duration;\n\t\tstd::string\t\tm_ts;\n};\n\nclass QueryProfile\n{\n\tpublic:\n\t\tQueryProfile(int samples) : m_samples(samples) { time(&m_lastReport); };\n\t\tvoid\tinsert(ProfileItem *item)\n\t\t\t{\n\t\t\t\tint b = item->getDuration() / BUCKET_SIZE;\n\t\t\t\tif (b >= TIME_BUCKETS)\n\t\t\t\t\tb = TIME_BUCKETS - 1;\n\t\t\t\tm_buckets[b]++;\n\t\t\t\tif (m_items.size() == m_samples)\n\t\t\t\t{\n\t\t\t\t\tint minIndex = 0;\n\t\t\t\t\tunsigned long minDuration = m_items[0]->getDuration();\n\t\t\t\t\tfor (int i = 1; i < m_items.size(); i++)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (m_items[i]->getDuration() < minDuration)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tminDuration = m_items[i]->getDuration();\n\t\t\t\t\t\t\tminIndex = i;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (item->getDuration() > 
minDuration)\n\t\t\t\t\t{\n\t\t\t\t\t\tdelete m_items[minIndex];\n\t\t\t\t\t\tm_items[minIndex] = item;\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tdelete item;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tm_items.push_back(item);\n\t\t\t\t}\n\t\t\t\tif (time(0) - m_lastReport > 600)\n\t\t\t\t{\n\t\t\t\t\treport();\n\t\t\t\t}\n\t\t\t};\n\tprivate:\n\t\tint\t\t\t\tm_samples;\n\t\tstd::vector<ProfileItem *>\tm_items;\n\t\ttime_t\t\t\t\tm_lastReport;\n\t\tunsigned int\t\t\tm_buckets[TIME_BUCKETS];\n\t\tvoid\treport()\n\t\t\t{\n\t\t\t\tLogger *logger = Logger::getLogger();\n\t\t\t\tlogger->info(\"Storage profile report\");\n\t\t\t\tlogger->info(\" < %3d mS %d\", BUCKET_SIZE, m_buckets[0]);\n\t\t\t\tfor (int j = 1; j < TIME_BUCKETS - 1; j++)\n\t\t\t\t{\n\t\t\t\t\tlogger->info(\"%3d-%3d mS %d\",\n\t\t\t\t\t\tj * BUCKET_SIZE, (j + 1) * BUCKET_SIZE,\n\t\t\t\t\t\tm_buckets[j]);\n\t\t\t\t}\n\t\t\t\tlogger->info(\" > %3d mS %d\", BUCKET_SIZE * TIME_BUCKETS, m_buckets[TIME_BUCKETS-1]);\n\t\t\t\tfor (int i = 0; i < m_items.size(); i++)\n\t\t\t\t{\n\t\t\t\t\tlogger->info(\"%ld mS, %s, %s\\n\",\n\t\t\t\t\t\tm_items[i]->getDuration(),\n\t\t\t\t\t\tm_items[i]->getTs().c_str(),\n\t\t\t\t\t\tm_items[i]->getReference().c_str());\n\t\t\t\t}\n\t\t\t\ttime(&m_lastReport);\n\t\t\t};\n};\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlitelb/plugin.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n#include <sqlite_common.h>\n#include <connection_manager.h>\n#include <connection.h>\n#include <plugin_api.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <strings.h>\n#include \"sqlite3.h\"\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include <sstream>\n#include <iostream>\n#include <string>\n#include <logger.h>\n#include <plugin_exception.h>\n#include <reading_stream.h>\n#include <config_category.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\n/**\n * The SQLite3 plugin interface\n */\nextern \"C\" {\n\nconst char *default_config = QUOTE({\n\t\t\"poolSize\" : {\n\t\t\t\"description\" : \"Connection pool size\",\n\t\t\t\"type\" : \"integer\",\n\t\t\t\"default\" : \"5\",\n\t\t\t\"displayName\" : \"Pool Size\",\n\t\t\t\"order\" : \"1\"\n\t\t\t},\n\t\t\"vacuumInterval\" : {\n\t\t\t\"description\" : \"The interval between execution of a SQLite vacuum command\",\n\t\t\t\"type\" : \"integer\",\n\t\t\t\"minimum\" : \"1\",\n\t\t\t\"default\" : \"6\",\n\t\t\t\"displayName\" : \"Vacuum Interval\",\n\t\t\t\"order\" : \"2\"\n\t\t},\n\t\t\"purgeBlockSize\" : {\n                        \"description\" : \"The number of rows to purge in each delete statement\",\n                        \"type\" : \"integer\",\n                        \"default\" : \"10000\",\n                        \"displayName\" : \"Purge Block Size\",\n                        \"order\" : \"3\",\n                        \"minimum\" : \"1000\",\n                        \"maximum\" : \"100000\"\n                }\n\t\t});\n\n/**\n * The plugin information structure\n */\nstatic PLUGIN_INFORMATION info = {\n\t\"SQLiteLb\",               // Name\n\t\"1.2.0\",                  // Version\n\tSP_COMMON|SP_READINGS,    // Flags\n\tPLUGIN_TYPE_STORAGE,      // 
Type\n\t\"1.6.0\",                  // Interface version\n\tdefault_config\n};\n\n/**\n * Return the information about this plugin\n */\nPLUGIN_INFORMATION *plugin_info()\n{\n\treturn &info;\n}\n\n/**\n * Initialise the plugin, called to get the plugin handle\n * In the case of SQLLite we also get a pool of connections\n * to use.\n */\nPLUGIN_HANDLE plugin_init(ConfigCategory *category)\n{\nConnectionManager *manager = ConnectionManager::getInstance();\nint poolSize = 5;\n\n\tif (category->itemExists(\"poolSize\"))\n\t{\n\t\tpoolSize = strtol(category->getValue(\"poolSize\").c_str(), NULL, 10);\n\t}\n\tmanager->growPool(poolSize);\n\tif (category->itemExists(\"vacuumInterval\"))\n\t{\n\t\tmanager->setVacuumInterval(strtol(category->getValue(\"vacuumInterval\").c_str(), NULL, 10));\n\t}\n\tif (category->itemExists(\"purgeBlockSize\"))\n\t{\n\t\tunsigned long purgeBlockSize = strtoul(category->getValue(\"purgeBlockSize\").c_str(), NULL, 10);\n\t\tmanager->setPurgeBlockSize(purgeBlockSize);\n\t}\n\treturn manager;\n}\n\n/**\n * Insert into an arbitrary table\n */\nint plugin_common_insert(PLUGIN_HANDLE handle,\n\t\t\tchar *schema,\n\t\t\tchar *table,\n\t\t\tchar *data)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tint result = connection->insert(std::string(schema),\n\t\t\t\t\tstd::string(table),\n\t\t\t\t\tstd::string(data));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Retrieve data from an arbitrary table\n */\nconst char *plugin_common_retrieve(PLUGIN_HANDLE handle,\n\t\t\t\tchar *schema,\n\t\t\t\tchar *table,\n\t\t\t\tchar *query)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string results;\n\n\tbool rval = connection->retrieve(std::string(schema),\n\t\t\t\t\tstd::string(table),\n\t\t\t\t\tstd::string(query),\n\t\t\t\t\tresults);\n\tmanager->release(connection);\n\tif (rval)\n\t{\n\t\treturn 
strdup(results.c_str());\n\t}\n\treturn NULL;\n}\n\n/**\n * Update an arbitrary table\n */\nint plugin_common_update(PLUGIN_HANDLE handle,\n\t\t\tchar *schema,\n\t\t\tchar *table,\n\t\t\tchar *data)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tint result = connection->update(std::string(schema),\n\t\t\t\t\tstd::string(table),\n\t\t\t\t\tstd::string(data));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Delete from an arbitrary table\n */\nint plugin_common_delete(PLUGIN_HANDLE handle,\n\t\t\tchar *schema,\n\t\t\tchar *table,\n\t\t\tchar *condition)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tint result = connection->deleteRows(std::string(schema),\n\t\t\t\t\tstd::string(table),\n\t\t\t\t\tstd::string(condition));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Append a sequence of readings to the readings buffer\n */\nint plugin_reading_append(PLUGIN_HANDLE handle, char *readings)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tint result = connection->appendReadings(readings);\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Append a stream of readings to the readings buffer\n */\nint plugin_readingStream(PLUGIN_HANDLE handle, ReadingStream **readings, bool commit)\n{\n\tint result = 0;\n\tConnectionManager *manager = (ConnectionManager *)handle;\n\tConnection        *connection = manager->allocate();\n\n\tresult = connection->readingStream(readings, commit);\n\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Fetch a block of readings from the readings buffer\n */\nchar *plugin_reading_fetch(PLUGIN_HANDLE handle, unsigned long id, unsigned int blksize)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = 
manager->allocate();\nstd::string\t  resultSet;\n\n\tconnection->fetchReadings(id, blksize, resultSet);\n\tmanager->release(connection);\n\treturn strdup(resultSet.c_str());\n}\n\n/**\n * Retrieve some readings from the readings buffer\n */\nchar *plugin_reading_retrieve(PLUGIN_HANDLE handle, char *condition)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string results;\n\n\tconnection->retrieveReadings(std::string(condition), results);\n\tmanager->release(connection);\n\treturn strdup(results.c_str());\n}\n\n/**\n * Purge readings from the buffer\n */\nchar *plugin_reading_purge(PLUGIN_HANDLE handle, unsigned long param, unsigned int flags, unsigned long sent)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string \t  results;\nunsigned long\t  age, size;\n\n\tif (flags & STORAGE_PURGE_SIZE)\n\t{\n\t\t(void)connection->purgeReadingsByRows(param, flags, sent, results);\n\t}\n\telse\n\t{\n\t\tage = param;\n\t\t(void)connection->purgeReadings(age, flags, sent, results);\n\t}\n\tmanager->release(connection);\n\treturn strdup(results.c_str());\n}\n\n/**\n * Release a previously returned result set\n */\nvoid plugin_release(PLUGIN_HANDLE handle, char *results)\n{\n\t(void)handle;\n\tfree(results);\n}\n\n/**\n * Return details on the last error that occurred.\n */\nPLUGIN_ERROR *plugin_last_error(PLUGIN_HANDLE handle)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\n  \n\treturn manager->getError();\n}\n\n/**\n * Shutdown the plugin\n */\nbool plugin_shutdown(PLUGIN_HANDLE handle)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\n  \n\tmanager->shutdown();\n\treturn true;\n}\n\n/**\n * Create snapshot of a common table\n *\n * @param handle\tThe plugin handle\n * @param table\t\tThe table to snapshot\n * @param id\t\tThe snapshot id\n * @return\t\t-1 on error, >= 0 on success\n *\n * The new 
created table has the following name:\n * table_id\n */\nint plugin_create_table_snapshot(PLUGIN_HANDLE handle,\n\t\t\t\t char *table,\n\t\t\t\t char *id)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n        int result = connection->create_table_snapshot(std::string(table),\n\t\t\t\t\t\t\tstd::string(id));\n        manager->release(connection);\n        return result;\n}\n\n/**\n * Load a snapshot of a common table\n *\n * @param handle\tThe plugin handle\n * @param table\t\tThe table to fill from a given snapshot\n * @param id\t\tThe table snapshot id\n * @return\t\t-1 on error, >= 0 on success\n */\nint plugin_load_table_snapshot(PLUGIN_HANDLE handle,\n\t\t\t\tchar *table,\n\t\t\t\tchar *id)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n        int result = connection->load_table_snapshot(std::string(table),\n\t\t\t\t\t\t     std::string(id));\n        manager->release(connection);\n        return result;\n}\n\n/**\n * Delete a snapshot of a common table\n *\n * @param handle\tThe plugin handle\n * @param table\t\tThe table whose snapshot will be removed\n * @param id\t\tThe snapshot id\n * @return\t\t-1 on error, >= 0 on success\n *\n */\nint plugin_delete_table_snapshot(PLUGIN_HANDLE handle,\n\t\t\t\t char *table,\n\t\t\t\t char *id)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n        int result = connection->delete_table_snapshot(std::string(table),\n\t\t\t\t\t\t\tstd::string(id));\n        manager->release(connection);\n        return result;\n}\n\n/**\n * Get all snapshots of a given common table\n *\n * @param handle\tThe plugin handle\n * @param table\t\tThe table name \n * @return\t\tList of snapshots (even empty list) or NULL for errors\n *\n */\nconst char* plugin_get_table_snapshots(PLUGIN_HANDLE handle,\n\t\t\t\tchar 
*table)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string results;\n\n\tbool rval = connection->get_table_snapshots(std::string(table), results);\n\tmanager->release(connection);\n\n\treturn rval ? strdup(results.c_str()) : NULL;\n}\n\n/**\n * Update or create a schema\n *\n * @param handle        The plugin handle\n * @param definition    The schema definition\n * @return              -1 on error, >= 0 on success\n *\n */\nint plugin_createSchema(PLUGIN_HANDLE handle, char *definition)\n{\n\tConnectionManager *manager = (ConnectionManager *)handle;\n\tConnection        *connection = manager->allocate();\n\n\tint result = connection->createSchema(std::string(definition));\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Purge given readings asset or all readings from the buffer\n */\nunsigned int plugin_reading_purge_asset(PLUGIN_HANDLE handle, char *asset)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tunsigned int deleted = connection->purgeReadingsAsset(asset);\n\tmanager->release(connection);\n\treturn deleted;\n}\n\n};\n\n"
  },
  {
    "path": "C/plugins/storage/sqlitememory/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6.0)\n\nproject(sqlitememory)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\nset(STORAGE_COMMON_LIB storage-common-lib)\n\n# Path of compiled sqlite3 file: /usr/local/bin\nset(FLEDGE_SQLITE3_LIBS \"/usr/local/bin\" CACHE INTERNAL \"\")\n\n# Find source files\n# Add sqlitelb plugin common files\nfile(GLOB COMMON_SOURCES ../sqlitelb/common/*.cpp)\n# Add sqlitememory files\nfile(GLOB SOURCES *.cpp)\n\n# Include header files\ninclude_directories(../../../common/include)\ninclude_directories(../../../services/common/include)\ninclude_directories(../common/include)\ninclude_directories(../../../thirdparty/rapidjson/include)\n# Add sqlitelb plugin header files\ninclude_directories(../sqlitelb/include)\ninclude_directories(../sqlitelb/common/include)\ninclude_directories(../sqlite/common/include)\n\nlink_directories(${PROJECT_BINARY_DIR}/../../../lib)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES} ${COMMON_SOURCES})\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\ntarget_link_libraries(${PROJECT_NAME} ${STORAGE_COMMON_LIB})\n\n# Check Sqlite3 required version\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} \"${CMAKE_CURRENT_SOURCE_DIR}\")\nfind_package(sqlite3)\n\nadd_definitions(-DSQLITE_SPLIT_READINGS=1)\nadd_definitions(-DPLUGIN_LOG_NAME=\"SQLite 3 in_memory\")\nadd_definitions(-DMEMORY_READING_PLUGIN=1)\n\n# Use static SQLite3 library\nif(EXISTS ${FLEDGE_SQLITE3_LIBS})\n        include_directories(${FLEDGE_SQLITE3_LIBS})\n        target_link_libraries(${PROJECT_NAME} -L\"${FLEDGE_SQLITE3_LIBS}/.libs\" -lsqlite3)\nelse()\n\t# Link with SQLite3 library\n\ttarget_link_libraries(${PROJECT_NAME} -lsqlite3)\nendif()\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/plugins/storage/${PROJECT_NAME})\n"
  },
  {
    "path": "C/plugins/storage/sqlitememory/Findsqlite3.cmake",
    "content": "# This CMake file locates the SQLite3 development libraries\n#\n# The following variables are set:\n# SQLITE_FOUND - If the SQLite library was found\n# SQLITE_LIBRARIES - Path to the static library\n# SQLITE_INCLUDE_DIR - Path to SQLite headers\n# SQLITE_VERSION - Library version\n\nset(SQLITE_MIN_VERSION \"3.11.0\")\nfind_path(SQLITE_INCLUDE_DIR sqlite3.h)\nfind_library(SQLITE_LIBRARIES NAMES libsqlite3.so)\n\n# Check whether path of compiled libsqlite3.a and .h files exists\nif (EXISTS ${FLEDGE_SQLITE3_LIBS})\n    find_path(SQLITE_INCLUDE_DIR sqlite3.h PATHS ${FLEDGE_SQLITE3_LIBS})\n    find_library(SQLITE_LIBRARIES NAMES libsqlite3.a PATHS \"${FLEDGE_SQLITE3_LIBS}/.libs\")\nelse()\n    # Use system defaults\n    find_path(SQLITE_INCLUDE_DIR sqlite3.h)\n    find_library(SQLITE_LIBRARIES NAMES libsqlite3.so)\nendif()\n\nif (SQLITE_INCLUDE_DIR AND SQLITE_LIBRARIES)\n  execute_process(COMMAND grep \".*#define.*SQLITE_VERSION \" ${SQLITE_INCLUDE_DIR}/sqlite3.h\n    COMMAND sed \"s/.*\\\"\\\\(.*\\\\)\\\".*/\\\\1/\"\n    OUTPUT_VARIABLE SQLITE_VERSION\n    OUTPUT_STRIP_TRAILING_WHITESPACE)\n    if (\"${SQLITE_VERSION}\" VERSION_LESS \"${SQLITE_MIN_VERSION}\")\n        message(FATAL_ERROR \"SQLite3 version >= ${SQLITE_MIN_VERSION} required, found version ${SQLITE_VERSION}\")\n    else()\n        message(STATUS \"Found SQLite version ${SQLITE_VERSION}: ${SQLITE_LIBRARIES}\")\n        set(SQLITE_FOUND TRUE)\n    endif()\nelse()\n  message(FATAL_ERROR \"Could not find SQLite\")\nendif()\n"
  },
  {
    "path": "C/plugins/storage/sqlitememory/connection.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2019 OSIsoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <connection.h>\n#include <connection_manager.h>\n#include <sqlite_common.h>\n#include <utils.h>\n#include <unistd.h>\n\n/**\n * SQLite3 storage plugin for Fledge\n */\n\nusing namespace std;\nusing namespace rapidjson;\n\nstatic time_t connectErrorTime = 0;\n\n/**\n * Create a SQLite3 database connection\n */\nConnection::Connection()\n{\n\tif (getenv(\"FLEDGE_TRACE_SQL\"))\n\t{\n\t\tm_logSQL = true;\n\t}\n\telse\n\t{\n\t\tm_logSQL = false;\n\t}\n\n\t/**\n\t * Create IN MEMORY database for \"readings\" table: set empty file\n\t */\n\tconst char *dbHandleConn = \"file:?cache=shared\";\n\n\t// UTC time as default\n\tconst char * createReadings = \"CREATE TABLE \" READINGS_DB_NAME_BASE \" .\" READINGS_TABLE_MEM \" (\" \\\n\t\t\t\t\t\"id\t\tINTEGER\t\t\tPRIMARY KEY AUTOINCREMENT,\" \\\n\t\t\t\t\t\"asset_code\tcharacter varying(50)\tNOT NULL,\" \\\n\t\t\t\t\t\"reading\tJSON\t\t\tNOT NULL DEFAULT '{}',\" \\\n\t\t\t\t\t\"user_ts\tDATETIME \t\tDEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW' )),\" \\\n\t\t\t\t\t\"ts\t\tDATETIME \t\tDEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW' ))\" \\\n\t\t\t\t\t\");\";\n\n\tconst char * createReadingsFk = \"CREATE INDEX fki_\" READINGS_TABLE_MEM \"_fk1 ON \" READINGS_TABLE_MEM \" (asset_code);\";\n\tconst char * createReadingsIdx1 = \"CREATE INDEX ix1_\" READINGS_TABLE_MEM \" ON \" READINGS_TABLE_MEM \" (asset_code, user_ts desc);\";\n\tconst char * createReadingsIdx2 = \"CREATE INDEX ix2_\" READINGS_TABLE_MEM \" ON \" READINGS_TABLE_MEM \" (user_ts);\";\n\n\t// Allow usage of URI for filename\n        sqlite3_config(SQLITE_CONFIG_URI, 1);\n\n\tif (sqlite3_open(dbHandleConn, &dbHandle) != SQLITE_OK)\n        {\n\t\tconst char* dbErrMsg = sqlite3_errmsg(dbHandle);\n\t\tconst char* errMsg = \"Failed to open the IN_MEMORY SQLite3 
database\";\n\n\t\tLogger::getLogger()->error(\"%s '%s'\",\n\t\t\t\t\t   dbErrMsg,\n\t\t\t\t\t   dbHandleConn);\n\t\tconnectErrorTime = time(0);\n\n\t\traiseError(\"InMemory Connection\", \"%s '%s'\",\n\t\t\t   dbErrMsg,\n\t\t\t   dbHandleConn);\n\n\t\tsqlite3_close_v2(dbHandle);\n\t}\n        else\n\t{\n\t\tLogger::getLogger()->info(\"Connected to IN_MEMORY SQLite3 database: %s\",\n\t\t\t\t\t  dbHandleConn);\n\n\t\tint rc;\n                // Exec the statements without getting error messages, for now\n\n\t\t// ATTACH 'fledge' as in memory shared DB\n\t\trc = sqlite3_exec(dbHandle,\n\t\t\t\t  \"ATTACH DATABASE 'file::memory:?cache=shared' AS '\" READINGS_TABLE_MEM \"'\",\n\t\t\t\t  NULL,\n\t\t\t\t  NULL,\n\t\t\t\t  NULL);\n\n\t\t// CREATE TABLE readings\n\t\trc = sqlite3_exec(dbHandle,\n\t\t\t\t  createReadings,\n\t\t\t\t  NULL,\n\t\t\t\t  NULL,\n\t\t\t\t  NULL);\n\n                // FK\n\t\trc = sqlite3_exec(dbHandle,\n\t\t\t\t  createReadingsFk,\n\t\t\t\t  NULL,\n\t\t\t\t  NULL,\n\t\t\t\t  NULL);\n\n                // Idx1\n\t\trc = sqlite3_exec(dbHandle,\n\t\t\t\t  createReadingsIdx1,\n\t\t\t\t  NULL,\n\t\t\t\t  NULL,\n\t\t\t\t  NULL);\n                // Idx2\n\t\trc = sqlite3_exec(dbHandle,\n\t\t\t\t  createReadingsIdx2,\n\t\t\t\t  NULL,\n\t\t\t\t  NULL,\n\t\t\t\t  NULL);\n\t}\n\n}\n\n/** \n * Add a vacuum function; this is not needed for SQLite In Memory, but is here \n * to satisfy the interface requirement.\n */\nbool Connection::vacuum()\n{\n\treturn true;\n}\n\n/**\n * Load the in memory database from a file backup\n *\n * @param filename\tThe name of the file to restore from\n * @return bool\t\tSuccess or failure of the backup\n */\nbool Connection::loadDatabase(const string& filename)\n{\nint rc;\nsqlite3 *file;\nsqlite3_backup *backup;\n\n\tstring pathname = getDataDir() + \"/\";\n\tpathname.append(filename);\n\tpathname.append(\".db\");\n\tif (access(pathname.c_str(), R_OK) != 0)\n\t{\n\t\tLogger::getLogger()->warn(\"Persisted database %s does not 
exist\",\n\t\t\t\tpathname.c_str());\n\t\treturn false;\n\t}\n\tif ((rc = sqlite3_open(pathname.c_str(), &file)) == SQLITE_OK)\n\t{\n\t\tif (backup = sqlite3_backup_init(dbHandle, READINGS_TABLE_MEM, file, \"main\"))\n\t\t{\n\t\t\t(void)sqlite3_backup_step(backup, -1);\n\t\t\t(void)sqlite3_backup_finish(backup);\n\t\t\tLogger::getLogger()->info(\"Reloaded persisted data to in-memory database\");\n\t\t}\n\t\trc = sqlite3_errcode(dbHandle);\n\n\t\t(void)sqlite3_close(file);\n\t}\n\treturn rc == SQLITE_OK;\n}\n\n/**\n * Backup the in memory database to a file\n *\n * @param filename\tThe name of the file to backup to\n * @return bool\t\tSuccess or failure of the backup\n */\nbool Connection::saveDatabase(const string& filename)\n{\nint rc;\nsqlite3 *file;\nsqlite3_backup *backup;\n\n\tstring pathname = getDataDir() + \"/\";\n\tpathname.append(filename);\n\tpathname.append(\".db\");\n\tunlink(pathname.c_str());\n\tif ((rc = sqlite3_open(pathname.c_str(), &file)) == SQLITE_OK)\n\t{\n\t\tif (backup = sqlite3_backup_init(file, \"main\", dbHandle, READINGS_TABLE_MEM))\n\t\t{\n\t\t\trc = sqlite3_backup_step(backup, -1);\n\t\t\t(void)sqlite3_backup_finish(backup);\n\t\t\tLogger::getLogger()->info(\"Persisted data from in-memory database to %s\", pathname.c_str());\n\t\t}\n\t\trc = sqlite3_errcode(file);\n\t\tif (rc != SQLITE_OK)\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"Persisting in-memory database failed: %s\", sqlite3_errmsg(file));\n\t\t}\n\n\t\t(void)sqlite3_close(file);\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->warn(\"Failed to open database %s to persist in-memory data\", pathname.c_str());\n\t}\n\treturn rc == SQLITE_OK;\n}\n"
  },
  {
    "path": "C/plugins/storage/sqlitememory/include/connection.h",
    "content": "#ifndef _CONNECTION_H\n#define _CONNECTION_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <sql_buffer.h>\n#include <string>\n#include <rapidjson/document.h>\n#include <sqlite3.h>\n\n// WARNING: THIS FILE IS NOT USED\n\n#define\tREADINGS_TABLE\t\t\"readings\"\n#define\tREADINGS_TABLE_MEM\tREADINGS_TABLE \"_1\"\n\nclass Connection {\n\tpublic:\n\t\tConnection();\n\t\t~Connection();\n\t\tbool\t\tretrieveReadings(const std::string& condition,\n\t\t\t\t\tstd::string& resultSet);\n\t\tint\t\tappendReadings(const char *readings);\n\t\tbool\t\tfetchReadings(unsigned long id, unsigned int blksize,\n\t\t\t\t\t\tstd::string& resultSet);\n\t\tunsigned int\tpurgeReadings(unsigned long age, unsigned int flags,\n\t\t\t\t\t\tunsigned long sent, std::string& results);\n\t\tlong\t\ttableSize(const std::string& table);\n\t\tvoid\t\tsetTrace(bool flag) { m_logSQL = flag; };\n\t\tstatic bool \tformatDate(char *formatted_date, size_t formatted_date_size, const char *date);\n\t\tunsigned int\tpurgeReadingsAsset(const std::string& asset);\n\t\tbool\t\tvacuum();\n\t\tbool\t\tloadDatabase(const std::string& filename);\n\t\tbool\t\tsaveDatabase(const std::string& filename);\n\t\tvoid\t\tsetPurgeBlockSize(unsigned long purgeBlockSize)\n\t\t\t\t{\n\t\t\t\t\tm_purgeBlockSize = purgeBlockSize;\n\t\t\t\t}\n\tprivate:\n\t\tint \t\tSQLexec(sqlite3 *db, const char *sql,\n\t\t\t\t\tint (*callback)(void*,int,char**,char**),\n\t\t  \t\t\tvoid *cbArg, char **errmsg);\n\t\tbool\t\tm_logSQL;\n\t\tvoid\t\traiseError(const char *operation, const char *reason,...);\n\t\tsqlite3\t\t*inMemory; // Handle for :memory: database\n\t\tint\t\tmapResultSet(void *res, std::string& resultSet);\n\t\tbool\t\tjsonWhereClause(const rapidjson::Value& whereClause, SQLBuffer&, bool convertLocaltime = false);\n\t\tbool\t\tjsonModifiers(const rapidjson::Value&, SQLBuffer&);\n    
\t\tbool\t\tjsonAggregates(const rapidjson::Value&,\n\t\t                               const rapidjson::Value&,\n\t\t                               SQLBuffer&,\n\t\t                               SQLBuffer&,\n\t\t                               bool isTableReading = false);\n\t\tbool\t\treturnJson(const rapidjson::Value&, SQLBuffer&, SQLBuffer&);\n\t\tchar\t\t*trim(char *str);\n\t\tconst std::string\tescape(const std::string&);\n\t\tbool applyColumnDateTimeFormat(sqlite3_stmt *pStmt,\n\t\t\t\t\t\tint i,\n\t\t\t\t\t\tstd::string& newDate);\n\t\tvoid\t\tlogSQL(const char *, const char *);\n\t\tunsigned long\tm_purgeBlockSize;\n};\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlitememory/include/connection_manager.h",
    "content": "#ifndef _CONNECTION_MANAGER_H\n#define _CONNECTION_MANAGER_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <plugin_api.h>\n#include <list>\n#include <mutex>\n\nclass Connection;\n\n/**\n * Singleton class to manage SQLite3 Memory connection pool\n */\nclass MemConnectionManager {\n\tpublic:\n\t\tstatic MemConnectionManager  *getInstance();\n\t\tvoid                         growPool(unsigned int);\n\t\tunsigned int                 shrinkPool(unsigned int);\n\t\tConnection                   *allocate();\n\t\tvoid                         release(Connection *);\n\t\tvoid\t\t   \t     shutdown();\n\t\tvoid\t\t\t     setError(const char *, const char *, bool);\n\t\tPLUGIN_ERROR\t\t     *getError()\n\t\t\t\t\t     {\n\t\t\t\t\t\treturn &lastError;\n\t\t\t\t\t     }\n\t\tvoid\t\t\t     setPersist(bool persist, const std::string& filename = \"\")\n\t\t\t\t\t     {\n\t\t\t\t\t\t     m_persist = persist;\n\t\t\t\t\t\t     m_filename = filename;\n\t\t\t\t\t     }\n\t\tbool\t\t\t     persist() { return m_persist; };\n\t\tstd::string\t\t     filename() { return m_filename; };\n\t\tvoid\t\t\t     setPurgeBlockSize(unsigned long purgeBlockSize);\n\n\tprivate:\n\t\tMemConnectionManager();\n\t\tstatic MemConnectionManager  *instance;\n\t\tstd::list<Connection *>      idle;\n\t\tstd::list<Connection *>      inUse;\n\t\tstd::mutex                   idleLock;\n\t\tstd::mutex                   inUseLock;\n\t\tstd::mutex                   errorLock;\n\t\tPLUGIN_ERROR\t\t     lastError;\n\t\tbool\t\t\t     m_trace;\n\t\tbool\t\t\t     m_persist;\n\t\tstd::string\t\t     m_filename;\n\t\tunsigned long\t\t     m_purgeBlockSize;\n};\n\n#endif\n"
  },
  {
    "path": "C/plugins/storage/sqlitememory/plugin.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n#include <sqlite_common.h>\n#include <connection_manager.h>\n#include <connection.h>\n#include <plugin_api.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <strings.h>\n#include \"sqlite3.h\"\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include <config_category.h>\n#include <sstream>\n#include <iostream>\n#include <string>\n#include <logger.h>\n#include <plugin_exception.h>\n#include <reading_stream.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\n/**\n * The SQLite3 plugin interface\n */\nextern \"C\" {\n\nconst char *default_config = QUOTE({\n\t\t\"poolSize\" : {\n\t\t\t\"description\" : \"The number of connections to create in the initial pool of connections\",\n\t\t\t\"type\" : \"integer\",\n\t\t\t\"default\" : \"5\",\n\t\t\t\"displayName\" : \"Pool Size\",\n\t\t\t\"order\" : \"1\"\n\t\t},\n\t\t\"filename\" : {\n\t\t\t\"description\" : \"The name of the file to which the in-memory database should be persisted\",\n\t\t\t\"type\" : \"string\",\n\t\t\t\"default\" : \"inmemory\",\n\t\t\t\"displayName\" : \"Persist File\",\n\t\t\t\"order\" : \"3\",\n\t\t\t\"validity\": \"persist == \\\"true\\\"\"\n\t\t},\n\t\t\"persist\" : {\n\t\t\t\"description\" : \"Enable the persistence of the in-memory database between executions\",\n\t\t\t\"type\" : \"boolean\",\n\t\t\t\"default\" : \"false\",\n\t\t\t\"displayName\" : \"Persist Data\",\n\t\t\t\"order\" : \"2\"\n\t\t},\n\t\t\"purgeBlockSize\" : {\n\t\t\t\"description\" : \"The number of rows to purge in each delete statement\",\n\t\t\t\"type\" : \"integer\",\n\t\t\t\"default\" : \"10000\",\n\t\t\t\"displayName\" : \"Purge Block Size\",\n\t\t\t\"order\" : \"4\",\n\t\t\t\"minimum\" : \"1000\",\n\t\t\t\"maximum\" : \"100000\"\n\t\t}\n});\n\n/**\n * The plugin information structure\n 
*/\nstatic PLUGIN_INFORMATION info = {\n\t\"SQLite3\",\t\t// Name\n\t\"1.1.0\",\t\t// Version\n\tSP_READINGS,\t\t// Flags\n\tPLUGIN_TYPE_STORAGE,\t// Type\n\t\"1.6.0\",\t\t// Interface version\n\tdefault_config\n};\n\n/**\n * Return the information about this plugin\n */\nPLUGIN_INFORMATION *plugin_info()\n{\n\treturn &info;\n}\n\n/**\n * Initialise the plugin, called to get the plugin handle\n * In the case of SQLite we also get a pool of connections\n * to use.\n *\n * @param category\tThe plugin configuration category\n */\nPLUGIN_HANDLE plugin_init(ConfigCategory *category)\n{\nConnectionManager *manager = ConnectionManager::getInstance();\n\nint poolSize = 5;\n\n\tif (category->itemExists(\"poolSize\"))\n\t{\n\t\tpoolSize = strtol(category->getValue(\"poolSize\").c_str(), NULL, 10);\n\t}\n\tmanager->growPool(poolSize);\n\tif (category->itemExists(\"persist\"))\n\t{\n\t\tstring p = category->getValue(\"persist\");\n\t\tif (p.compare(\"true\") == 0 && category->itemExists(\"filename\"))\n\t\t{\n\t\t\tmanager->setPersist(true, category->getValue(\"filename\"));\n\t\t}\n\t\telse\n\t\t{\n\t\t\tmanager->setPersist(false);\n\t\t}\n\t}\n\telse\n\t{\n\t\tmanager->setPersist(false);\n\t}\n\tif (manager->persist())\n\t{\n\t\tConnection        *connection = manager->allocate();\n\t\tconnection->loadDatabase(manager->filename());\n\t\tmanager->release(connection);\t// Return the connection to the pool\n\t}\n\tif (category->itemExists(\"purgeBlockSize\"))\n\t{\n\t\tunsigned long purgeBlockSize = strtoul(category->getValue(\"purgeBlockSize\").c_str(), NULL, 10);\n\t\tmanager->setPurgeBlockSize(purgeBlockSize);\n\t}\n\treturn manager;\n}\n/**\n * Append a sequence of readings to the readings buffer\n */\nint plugin_reading_append(PLUGIN_HANDLE handle, char *readings)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tint result = connection->appendReadings(readings);\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Append a stream of readings to the readings 
buffer\n */\nint plugin_readingStream(PLUGIN_HANDLE handle, ReadingStream **readings, bool commit)\n{\n\tint result = 0;\n\tConnectionManager *manager = (ConnectionManager *)handle;\n\tConnection        *connection = manager->allocate();\n\n\tresult = connection->readingStream(readings, commit);\n\n\tmanager->release(connection);\n\treturn result;\n}\n\n/**\n * Fetch a block of readings from the readings buffer\n */\nchar *plugin_reading_fetch(PLUGIN_HANDLE handle, unsigned long id, unsigned int blksize)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string\t  resultSet;\n\n\tconnection->fetchReadings(id, blksize, resultSet);\n\tmanager->release(connection);\n\treturn strdup(resultSet.c_str());\n}\n\n/**\n * Retrieve some readings from the readings buffer\n */\nchar *plugin_reading_retrieve(PLUGIN_HANDLE handle, char *condition)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string results;\n\n\tconnection->retrieveReadings(std::string(condition), results);\n\tmanager->release(connection);\n\treturn strdup(results.c_str());\n}\n\n/**\n * Purge readings from the buffer\n */\nchar *plugin_reading_purge(PLUGIN_HANDLE handle, unsigned long param, unsigned int flags, unsigned long sent)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\nstd::string \t  results;\nunsigned long\t  age, size;\n\n\tif (flags & STORAGE_PURGE_SIZE)\t// Purge by size\n\t{\n\t\t(void)connection->purgeReadingsByRows(param, flags, sent, results);\n\t}\n\telse\n\t{\n\t\tage = param;\n\t\t(void)connection->purgeReadings(age, flags, sent, results);\n\t}\n\tmanager->release(connection);\n\treturn strdup(results.c_str());\n}\n\n/**\n * Release a previously returned result set\n */\nvoid plugin_release(PLUGIN_HANDLE handle, char *results)\n{\n\t(void)handle;\n\tfree(results);\n}\n\n/**\n * 
Return details on the last error that occurred.\n */\nPLUGIN_ERROR *plugin_last_error(PLUGIN_HANDLE handle)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\n  \n\treturn manager->getError();\n}\n\n/**\n * Shutdown the plugin\n */\nbool plugin_shutdown(PLUGIN_HANDLE handle)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\n  \n\tif (manager->persist())\n\t{\n\t\tConnection        *connection = manager->allocate();\n\t\tconnection->saveDatabase(manager->filename());\n\t}\n\tmanager->shutdown();\n\treturn true;\n}\n\n/**\n * Purge given readings asset or all readings from the buffer\n */\nunsigned int plugin_reading_purge_asset(PLUGIN_HANDLE handle, char *asset)\n{\nConnectionManager *manager = (ConnectionManager *)handle;\nConnection        *connection = manager->allocate();\n\n\tunsigned int deleted = connection->purgeReadingsAsset(asset);\n\tmanager->release(connection);\n\treturn deleted;\n}\n};\n\n"
  },
  {
    "path": "C/plugins/utils/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.4.0)\n\nproject(get_plugin_info)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\n\n# Include header files\ninclude_directories(include ../../services/common/include)\n\n# Create get_plugin_info utility\nadd_executable(${PROJECT_NAME} get_plugin_info.cpp)\ntarget_link_libraries(${PROJECT_NAME} -ldl)\n\nadd_executable(cmdutil cmdutil.cpp)\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/extras/C)\ninstall(TARGETS cmdutil DESTINATION fledge/extras/C)\n"
  },
  {
    "path": "C/plugins/utils/cmdutil.cpp",
    "content": "/*\n * Utility to run some commands for fledge as root using setuid\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora\n */\n\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n#include <sys/stat.h>\n#include <sys/types.h>\n#include <unistd.h>\n#include <errno.h>\n\nextern int errno;\n\n/**\n * Check whether file/dir exists within FLEDGE_ROOT\n *\n * @param rootdir\tFLEDGE_ROOT path\n * @param file\t\trelative path of file or dir inside FLEDGE_ROOT\n */\nbool checkFile(char *rootdir, char *file)\n{\n\tchar path[256];\n\tsnprintf(path, sizeof(path), \"%s/%s\", rootdir, file);\n\treturn (access(path, F_OK) == 0);\n}\n\nconst char *cmds[] = {\"tar-extract\", \"cp\", \"rm\", \"pip3-pkg\", \"pip3-req\", \"mkdir\"};\n\ntypedef enum {\n\tTAR_EXTRACT,\n\tCP,\n\tRM,\n\tPIP3_PKG,\n\tPIP3_REQ,\n\tMKDIR\n} cmdtype_t;\n\nchar *argsArray[][6] = {\n\t{(char *) \"/bin/tar\", (char *) \"-C\", \t(char *) \"PLACEHOLDER\", (char *) \"-xf\", \t\t(char *) \"PLACEHOLDER\", NULL},\n\t{(char *) \"/bin/cp\",  (char *) \"-r\", \t(char *) \"PLACEHOLDER\", (char *) \"PLACEHOLDER\", NULL, \t\t\t\t\tNULL},\n\t{(char *) \"/bin/rm\",  (char *) \"-rf\", \t(char *) \"PLACEHOLDER\", NULL, \t\t\t\t\tNULL, \t\t\t\t\tNULL},\n\t{(char *) \"pip3\", \t(char *) \"install\", (char *) \"PLACEHOLDER\", (char *) \"--no-cache-dir\", NULL, \t\t\t\tNULL},\n\t{(char *) \"pip3\", \t(char *) \"install\", (char *) \"-Ir\", \t\t(char *) \"PLACEHOLDER\",\t(char *) \"--no-cache-dir\", NULL},\n\t{(char *) \"mkdir\",  (char *) \"-p\", \t\t(char *) \"PLACEHOLDER\", NULL, \t\t\t\t\tNULL, \t\t\t\t\tNULL}\n};\n\nint getCmdType(const char *cmd)\n{\n\tfor (int i=0; i<sizeof(cmds)/sizeof(const char *); i++)\n\t\tif (strcmp(cmd, cmds[i])==0)\n\t\t\treturn i;\n\t\n\treturn -1;\n}\n\n/**\n * Run some shell commands, if setuid bit is set, these cmds are run as root user\n *\n *    Usage: cmdutil <cmd> <params>\n *\n *\t\tExample 
command to execute\t\t\t\t\t\tWay to invoke cmdutil to do so\n *\t\t--------------------------\t\t\t\t\t\t-------------------------------\n * \t\tsudo tar -C $FLEDGE_ROOT -xf abc.tar.gz\t\tcmdutil tar-extract abc.tar.gz\n *\t\tsudo cp -r abc $FLEDGE_ROOT/xyz\t\t\t\tcmdutil cp abc xyz\n *\t\tsudo rm -rf $FLEDGE_ROOT/abc\t\t\t\t\tcmdutil rm abc\n *\n *\t\tsudo pip3 install aiocoap==0.3 --no-cache-dir\tcmdutil pip3-pkg aiocoap==0.3\n *\t\tsudo pip3 install -Ir requirements.txt --no-cache-dir\tcmdutil pip3-req requirements.txt\n *\n *\t\tsudo mkdir -p $FLEDGE_ROOT/abc\t\t\t\t\tcmdutil mkdir abc\n */\nint main(int argc, char *argv[])\n{\n\tif(argc < 2)\n\t{\n\t\tprintf(\"Incorrect usage\\n\");\n\t\treturn 1;\n\t}\n\n\tchar *rootdir = getenv(\"FLEDGE_ROOT\");\n\tif (!rootdir || rootdir[0]==0)\n\t{\n\t\tprintf(\"Unable to find path where archive is to be extracted\\n\");\n\t\treturn 2;\n\t}\n\tstruct stat sb;\n\tstat(rootdir, &sb);\n\tif ((sb.st_mode & S_IFMT) != S_IFDIR)\n\t{\n\t\tprintf(\"Unable to find path where archive is to be extracted\\n\");\n\t\treturn 2;\n\t}\n\t\n\tif (!checkFile(rootdir, (char *) \"bin/fledge\") || \n\t\t!checkFile(rootdir, (char *) \"services/fledge.services.storage\") || \n\t\t!checkFile(rootdir, (char *) \"python/fledge/services/core/routes.py\") || \n\t\t!checkFile(rootdir, (char *) \"lib/libcommon-lib.so\") || \n\t\t!checkFile(rootdir, (char *) \"tasks/sending_process\"))\n\t{\n\t\tprintf(\"Unable to find fledge installation\\n\");\n\t\treturn 2;\n\t}\n\n\tint cmdtype = getCmdType(argv[1]);\n\t//printf(\"cmdtype=%d\\n\", cmdtype);\n\tif(cmdtype == -1)\n\t{\n\t\tprintf(\"Unidentified command\\n\");\n\t\treturn 3;\n\t}\n\n\tchar *args[6];\n\tfor(int i=0; i<6; i++)\n\t\targs[i] = argsArray[cmdtype][i];\n\tchar buf[128];\n\tswitch (cmdtype)\n\t{\n\t\tcase TAR_EXTRACT:\n\t\t\t\targs[2] = rootdir;\n\t\t\t\targs[4] = argv[2];\n\t\t\tbreak;\n\t\tcase CP:\n\t\t\t\targs[2] = argv[2];\n\t\t\t\tsnprintf(buf, sizeof(buf), \"%s/%s\", rootdir, 
argv[3]);\n\t\t\t\tbuf[sizeof(buf)-1] = '\\0'; // force null terminate\n\t\t\t\targs[3] = buf;\n\t\t\tbreak;\n\t\tcase RM:\n\t\t\t\tsnprintf(buf, sizeof(buf), \"%s/%s\", rootdir, argv[2]);\n\t\t\t\tbuf[sizeof(buf)-1] = '\\0'; // force null terminate\n\t\t\t\targs[2] = buf;\n\t\t\tbreak;\n\t\tcase PIP3_PKG:\n\t\t\t\targs[2] = argv[2];\n\t\t\tbreak;\n\t\tcase PIP3_REQ:\n\t\t\t\targs[3] = argv[2];\n\t\t\tbreak;\n\t\tcase MKDIR:\n\t\t\t\tsnprintf(buf, sizeof(buf), \"%s/%s\", rootdir, argv[2]);\n\t\t\t\tbuf[sizeof(buf)-1] = '\\0'; // force null terminate\n\t\t\t\targs[2] = buf;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tprintf(\"Unidentified command\\n\");\n\t\t\treturn 3;\n\t}\n\n\t//printf(\"cmd=%s %s %s %s %s %s\\n\", args[0], args[1], args[2], args[3]?args[3]:\"\", args[4]?args[4]:\"\", args[5]?args[5]:\"\");\n\n\terrno = 0;\n\tint rc = execvp(args[0], args);\n\tif (rc != 0)\n\t{\n\t\tprintf(\"execvp failed: rc=%d, errno %d=%s\\n\", rc, errno, strerror(errno));\n\t\treturn rc;\n\t}\n\t\n\treturn 0;\n}\n\n"
  },
  {
    "path": "C/plugins/utils/get_plugin_info.cpp",
    "content": "/*\n * Utility to extract plugin_info from north/south C plugin library\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <dlfcn.h>\n#include <syslog.h>\n#include \"plugin_api.h\"\n\ntypedef PLUGIN_INFORMATION *(*func_t)();\n\n/**\n * Extract value of a given symbol from given plugin library\n *\n *    Usage: get_plugin_info <plugin library> <function symbol to fetch plugin info from>\n *\n * @param argv[1]  relative/absolute path to north/south C plugin shared library\n *\n * @param argv[2]  symbol to extract value from (defaults 'plugin_info')\n */\nint main(int argc, char *argv[])\n{\n  void *hndl;\n  char *routine = (char *)\"plugin_info\";\n\n  if (argc == 3)\n  {\n\t  routine = argv[2];\n  }\n  else if (argc < 2)\n  {\n    fprintf(stderr, \"Insufficient number of args...\\n\\nUsage: %s <plugin library> [ <function to fetch plugin info> ]\\n\", argv[0]);\n    exit(1);\n  }\n\n  openlog(\"Fledge PluginInfo\", LOG_PID|LOG_CONS, LOG_USER);\n  setlogmask(LOG_UPTO(LOG_WARNING));\n\n  if (access(argv[1], F_OK|R_OK) != 0)\n  {\n    syslog(LOG_ERR, \"Unable to access library file '%s', exiting...\\n\", argv[1]);\n    exit(2);\n  }\n\n\n  if ((hndl = dlopen(argv[1], RTLD_GLOBAL|RTLD_LAZY)) != NULL)\n  {\n    func_t infoEntry = (func_t)dlsym(hndl, routine);\n    if (infoEntry == NULL)\n    {\n      // Unable to find plugin_info entry point\n      syslog(LOG_ERR, \"Plugin library %s does not support %s function : %s\\n\", argv[1], routine, dlerror());\n      dlclose(hndl);\n      closelog();\n      exit(3);\n    }\n    PLUGIN_INFORMATION *info = (PLUGIN_INFORMATION *)(*infoEntry)();\n    printf(\"{\\\"name\\\": \\\"%s\\\", \\\"version\\\": \\\"%s\\\", \\\"type\\\": \\\"%s\\\", \\\"interface\\\": \\\"%s\\\", \\\"flag\\\": %d, \\\"config\\\": %s}\\n\", info->name, info->version, info->type, 
info->interface, info->options, info->config);\n  }\n  else\n  {\n    syslog(LOG_ERR, \"dlopen failed: %s\\n\", dlerror());\n  }\n  closelog();\n  \n  return 0;\n}\n\n"
  },
  {
    "path": "C/services/common/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.4.0)\n\nif( POLICY CMP0007 )\n    cmake_policy( SET CMP0007 NEW )\nendif()\n\nproject(services-common-lib)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11 -O0\")\nset(DLLIB -ldl)\n\n# Find source files\nfile(GLOB SOURCES *.cpp)\n\n# Find python3.x dev/lib package\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 REQUIRED COMPONENTS Interpreter Development)\nendif()\n\n# Include header files\ninclude_directories(include ../../common/include ../../thirdparty/Simple-Web-Server  ../../thirdparty/rapidjson/include)\n\n# Add Python 3.x header files\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS})\nendif()\n\nset(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/../../lib)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\ntarget_link_libraries(${PROJECT_NAME} ${DLLIB})\n\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/lib)\n"
  },
  {
    "path": "C/services/common/README.rst",
    "content": "Common C Code\n=============\n\nThis directory contains the C/C++ code that is common to one or more\nmicroservice or executable written in C or C++ withion Fledge.\n"
  },
  {
    "path": "C/services/common/config_handler.cpp",
    "content": "/*\n * Fledge config manager.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <config_handler.h>\n\nusing namespace std;\n\nConfigHandler *ConfigHandler::instance = 0;\n\n\n/**\n * ConfigHandler Singleton implementation\n*/\nConfigHandler *\nConfigHandler::getInstance(ManagementClient *mgtClient)\n{\n\tif (!instance)\n\t\tinstance = new ConfigHandler(mgtClient);\n\treturn instance;\n}\n\n/**\n * Config Handler Constructor\n */\nConfigHandler::ConfigHandler(ManagementClient *mgtClient)\n\t\t\t\t\t : m_mgtClient(mgtClient)\n{\t\n\tm_logger = Logger::getLogger();\n}\n\n/**\n * Handle a callback from the core to propagate a configuration category\n * change and propagate that to all the local ServiceHandlers that have\n * registered for it.\n *\n * @param category\tThe name of the category that has changed\n * @param config\tThe configuration category itself\n */\nvoid\nConfigHandler::configChange(const string& category, const string& config)\n{\n\n\tm_logger->info(\"Configuration change notification for %s\", category.c_str());\n\tstd::unique_lock<std::mutex> lck(m_mutex);\n\tpair<CONFIG_MAP::iterator, CONFIG_MAP::iterator> res = m_registrations.equal_range(category);\n\tfor (CONFIG_MAP::iterator it = res.first; it != res.second; it++)\n\t{\n\t\t// The config change call could effect the registered handlers\n\t\t// we therefore need to guard against the map changing\n\t\tm_change = false;\n\t\tlck.unlock();\n\t\tit->second->configChange(category, config);\n\t\tlck.lock();\n\t\tif (m_change) // Something changed\n\t\t{\n\t\t\treturn;\t// Call any other subscribers to this category. 
In reality there are no others\n\t\t}\n\t}\n}\n\n/**\n * Handle a callback from the core to handle the creation of a child category.\n *\n * @param parent_category The parent category of the child\n * @param child_category  The name of the category that has been created\n * @param config          Configuration of the child category\n */\nvoid ConfigHandler::configChildCreate(const std::string& parent_category, const string& child_category, const string& config)\n{\n\tstd::unique_lock<std::mutex> lck(m_mutex);\n\n\tm_logger->info(\"Configuration change notification, child category created %s\", child_category.c_str());\n\n\tpair<CONFIG_MAP::iterator, CONFIG_MAP::iterator> res = m_registrationsChild.equal_range(parent_category);\n\tfor (CONFIG_MAP::iterator it = res.first; it != res.second; it++)\n\t{\n\t\t// The config change call could affect the registered handlers;\n\t\t// we therefore need to guard against the map changing\n\t\tm_change = false;\n\t\tlck.unlock();\n\t\tit->second->configChildCreate(parent_category, child_category, config);\n\t\tlck.lock();\n\t\tif (m_change) // Something changed\n\t\t{\n\t\t\treturn;\t// Call any other subscribers to this category. 
In reality there are no others\n\t\t}\n\t}\n}\n\n/**\n * Handle a callback from the core to handle the deletion of a child category.\n *\n * @param parent_category The parent category of the child\n * @param child_category  The name of the category that has been deleted\n */\nvoid ConfigHandler::configChildDelete(const std::string& parent_category, const string& child_category)\n{\n\tstd::unique_lock<std::mutex> lck(m_mutex);\n\n\tm_logger->info(\"Configuration change notification, child category deleted %s\", child_category.c_str());\n\n\tpair<CONFIG_MAP::iterator, CONFIG_MAP::iterator> res = m_registrationsChild.equal_range(parent_category);\n\tfor (CONFIG_MAP::iterator it = res.first; it != res.second; it++)\n\t{\n\t\t// The config change call could affect the registered handlers;\n\t\t// we therefore need to guard against the map changing\n\t\tm_change = false;\n\t\tlck.unlock();\n\t\tit->second->configChildDelete(parent_category, child_category);\n\t\tlck.lock();\n\t\tif (m_change) // Something changed\n\t\t{\n\t\t\treturn;\t// Call any other subscribers to this category. 
In reality there are no others\n\t\t}\n\t}\n}\n\n/**\n * Register a service handler for a given configuration category\n *\n * @param handler\tThe service handler to call\n * @param category\tThe configuration category to register\n */\nvoid\nConfigHandler::registerCategory(ServiceHandler *handler, const string& category)\n{\n\tif (m_registrations.count(category) == 0)\n\t{\n\t\tint retryCount = 0;\n\t\twhile (m_mgtClient->registerCategory(category) == false &&\n\t\t\t\tretryCount++ < 10)\n\t\t{\n\t\t\tsleep(2 * retryCount);\n\t\t}\n\t\tif (retryCount >= 10)\n\t\t{\n\t\t\tm_logger->error(\"Failed to register configuration category %s\", category.c_str());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->debug(\"Interest in %s registered\", category.c_str());\n\t\t}\n\t}\n\telse\n\t{\n\t\tm_logger->info(\"Interest in %s already registered\", category.c_str());\n\t}\n\tstd::unique_lock<std::mutex> lck(m_mutex);\n\tm_registrations.insert(pair<string, ServiceHandler *>(category, handler));\n\tm_change = true;\n}\n\n/**\n * Register a service handler for a given configuration category when a child category is changed\n *\n * @param handler\tThe service handler to call\n * @param category\tThe configuration category to register\n */\nvoid ConfigHandler::registerCategoryChild(ServiceHandler *handler, const string& category)\n{\n\tif (m_registrationsChild.count(category) == 0)\n\t{\n\t\tint retryCount = 0;\n\t\twhile (m_mgtClient->registerCategoryChild(category) == false &&\n\t\t\t\tretryCount++ < 10)\n\t\t{\n\t\t\tsleep(2 * retryCount);\n\t\t}\n\t\tif (retryCount >= 10)\n\t\t{\n\t\t\tm_logger->error(\"Failed to register configuration category %s\", category.c_str());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->debug(\"Interest in child categories of %s registered\", category.c_str());\n\t\t}\n\t}\n\telse\n\t{\n\t\tm_logger->info(\"Interest in child categories of %s already registered\", category.c_str());\n\t}\n\tstd::unique_lock<std::mutex> 
lck(m_mutex);\n\tm_registrationsChild.insert(pair<string, ServiceHandler *>(category, handler));\n\tm_change = true;\n}\n\n\n/**\n * Unregister a configuration category from the ConfigHandler for\n * a particular registered ServiceHandler class\n *\n * @param handler The configuration handler we would call\n * @param category\tThe category to remove.\n */\nvoid\nConfigHandler::unregisterCategory(ServiceHandler *handler, const string& category)\n{\n\tstd::unique_lock<std::mutex> lck(m_mutex);\n\tpair<CONFIG_MAP::iterator, CONFIG_MAP::iterator> res =\n\t\t\t m_registrations.equal_range(category);\n\tfor (CONFIG_MAP::iterator it = res.first; it != res.second; it++)\n\t{\n\t\tif (it->second == handler)\n\t\t{\n\t\t\tm_registrations.erase(it);\n\t\t\tbreak;\n\t\t}\n\t}\n\t// No remaining registration for this category\n\tif (m_registrations.count(category) == 0)\n\t{\n\t\tm_mgtClient->unregisterCategory(category);\n\t}\n\tm_change = true;\n}\n"
  },
  {
    "path": "C/services/common/filter_python_plugin_handle.cpp",
    "content": "/*\n * Fledge Filter Python plugin handle related\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <config_category.h>\n#include <reading.h>\n#include <logger.h>\n#include <filter_python_plugin_handle.h>\n\n#define PYTHON_PLUGIN_INTF_LIB \"libfilter-plugin-python-interface.so\"\n\n#define PRINT_FUNC\tLogger::getLogger()->info(\"%s:%d\", __FUNCTION__, __LINE__);\n\ntypedef PLUGIN_INFORMATION *(*pluginInitFn)(const char *pluginName, const char *path);\n\nusing namespace std;\n\n/**\n * Constructor for FilterPythonPluginHandle\n *    - Load python interface library and initialize the interface\n */\nFilterPythonPluginHandle::FilterPythonPluginHandle(const char *pluginName,\n\t\t\t\t\t\t   const char *pluginPathName) :\n\t\t\t\t\t\t   PythonPluginHandle(pluginName, pluginPathName)\n{\n\t// expecting this lib to be present in LD_LIBRARY_PATH:\n\t//same dir as where lib-services-common.so is present\n\tm_interfaceObjName = PYTHON_PLUGIN_INTF_LIB;\n\n\t// Open interface library object\n\tm_hndl = dlopen(m_interfaceObjName.c_str(), RTLD_NOW | RTLD_GLOBAL);\n\tif (!m_hndl)\n\t{\n\t\tLogger::getLogger()->error(\"FilterPythonPluginHandle c'tor: dlopen failed for library '%s' : %s\",\n\t\t\t\t\t   m_interfaceObjName.c_str(),\n\t\t\t\t\t   dlerror());\n\t\treturn;\n\t}\n\n\tpluginInitFn initFn =\n\t\t(pluginInitFn) dlsym(m_hndl, \"PluginInterfaceInit\");\n\tif (initFn == NULL)\n\t{\n\t\t// Unable to find PluginInterfaceInit entry point\n\t\tLogger::getLogger()->error(\"Plugin library %s does not support %s function : %s\",\n\t\t\t\t\t   m_interfaceObjName.c_str(),\n\t\t\t\t\t   \"PluginInterfaceInit\",\n\t\t\t\t\t   dlerror());\n\t\tdlclose(m_hndl);\n\t\tm_hndl = NULL;\n\t\treturn;\n\t}\n\n\t// Initialise Python plugin object\n\tvoid *ref = initFn(pluginName, pluginPathName);\n\tif (ref == NULL)\n\t{\n\t\tfprintf(stderr,\n\t\t\t\"Plugin library %s : PluginInterfaceInit 
returned failure\\n\",\n\t\t\tm_interfaceObjName.c_str());\n\t\tdlclose(m_hndl);\n\t\tm_hndl = NULL;\n\t\treturn;\n\t}\n\n\t// Set type\n\tm_type = PLUGIN_TYPE_FILTER;\n}\n"
  },
  {
    "path": "C/services/common/include/binary_plugin_handle.h",
    "content": "\n#ifndef _BINARY_PLUGIN_HANDLE_H\n#define _BINARY_PLUGIN_HANDLE_H\n/*\n * Fledge plugin handle related\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora\n */\n#include <logger.h>\n#include <dlfcn.h>\n#include <plugin_handle.h>\n#include <plugin_manager.h>\n\n/**\n * The BinaryPluginHandle class is used to represent an interface to \n * a plugin that is available in a binary format\n */\nclass BinaryPluginHandle : public PluginHandle\n{\n\tpublic:\n\t\t// for the Storage plugin\n\t\tBinaryPluginHandle(const char *name, const char *path, tPluginType type) {\n\t\t\tdlerror();\t// Clear the existing error\n\t\t\thandle = dlopen(path, RTLD_LAZY);\n\t\t\tif (!handle)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Unable to load storage plugin %s, %s\",\n\t\t\t\t\t\tname, dlerror());\n\t\t\t}\n\n\t\t\tLogger::getLogger()->debug(\"%s - storage plugin / RTLD_LAZY - name :%s: path :%s:\", __FUNCTION__, name, path);\n\t\t}\n\n\t\t// for all the others plugins\n\t\tBinaryPluginHandle(const char *name, const char *path)                   {\n\t\t\tdlerror();\t// Clear the existing error\n\t\t\thandle = dlopen(path, RTLD_LAZY|RTLD_GLOBAL);\n\t\t\tif (!handle)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Unable to load plugin %s, %s\",\n\t\t\t\t\t\tname, dlerror());\n\t\t\t}\n\n\t\t\tLogger::getLogger()->debug(\"%s - other plugin / RTLD_LAZY|RTLD_GLOBAL - name :%s: path :%s:\", __FUNCTION__, name, path);\n\t\t}\n\n\t\t~BinaryPluginHandle() { if (handle) dlclose(handle); }\n\t\tvoid *GetInfo() { return dlsym(handle, \"plugin_info\"); }\n\t\tvoid *ResolveSymbol(const char* sym) { return dlsym(handle, sym); }\n\t\tvoid *getHandle() { return handle; }\n\tprivate:\n\t\tPLUGIN_HANDLE handle; // pointer returned by dlopen on plugin shared lib\n\n};\n\n#endif\n\n"
  },
  {
    "path": "C/services/common/include/config_handler.h",
    "content": "#ifndef _CONFIG_HANDLER_H\n#define _CONFIG_HANDLER_H\n/*\n * Fledge \n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <service_handler.h>\n#include <management_client.h>\n#include <config_category.h>\n#include <logger.h>\n#include <string>\n#include <map>\n#include <mutex>\n\ntypedef std::multimap<std::string, ServiceHandler *> CONFIG_MAP;\n\n/**\n * Handler class within a service to manage configuration changes\n */\nclass ConfigHandler {\n\tpublic:\n\t\tstatic ConfigHandler\t*getInstance(ManagementClient *);\n\t\tvoid\t\t\tconfigChange(const std::string& category, const std::string& config);\n\t\tvoid            configChildCreate(const std::string& parent_category, const std::string& child_category, const std::string& config);\n\t\tvoid            configChildDelete(const std::string& parent_category, const std::string& child_category);\n\t\tvoid\t\t\tregisterCategory(ServiceHandler *handler,\n\t\t\t\t\t\t\t const std::string& category);\n\t\tvoid \t\t\tregisterCategoryChild(ServiceHandler *handler, const std::string& category);\n\n\t\tvoid\t\t\tunregisterCategory(ServiceHandler *handler, const std::string& category);\n\t\tstatic ConfigHandler\t*instance;\n\tprivate:\n\t\tConfigHandler(ManagementClient *);\n\t\t~ConfigHandler();\n\t\tManagementClient\t*m_mgtClient;\n\t\tCONFIG_MAP\t\tm_registrations;\n\t\tCONFIG_MAP\t\tm_registrationsChild;\n\t\tLogger\t\t\t*m_logger;\n\t\tstd::mutex\t\tm_mutex;\n\t\tbool\t\t\tm_change;\n};\n#endif\n"
  },
  {
    "path": "C/services/common/include/filter_python_plugin_handle.h",
    "content": "\n#ifndef _FILTER_PYTHON_PLUGIN_HANDLE_H\n#define _FILTER_PYTHON_PLUGIN_HANDLE_H\n/*\n * Fledge Notification Python plugin handle related\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n#include <logger.h>\n#include <vector>\n#include <sstream>\n#include <dlfcn.h>\n#include <python_plugin_handle.h>\n\n/**\n * The PythonPluginHandle class is used to represent an interface to \n * a plugin that is available as a python script\n */\nclass FilterPythonPluginHandle : public PythonPluginHandle\n{\n\tpublic:\n\t\tFilterPythonPluginHandle(const char *name, const char *path);\n};\n\n#endif\n\n"
  },
  {
    "path": "C/services/common/include/management_api.h",
    "content": "#ifndef _MANAGEMENT_API_H\n#define _MANAGEMENT_API_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <json_provider.h>\n#include <service_handler.h>\n#include <server_http.hpp>\n#include <logger.h>\n#include <string>\n#include <time.h>\n#include <thread>\n\n#define PING\t\t\t\"/fledge/service/ping\"\n#define SERVICE_SHUTDOWN\t\"/fledge/service/shutdown\"\n#define CONFIG_CHANGE\t\t\"/fledge/change\"\n#define CONFIG_CHILD_CREATE     \"/fledge/child_create\"\n#define CONFIG_CHILD_DELETE     \"/fledge/child_delete\"\n#define SECURITY_CHANGE         \"^/fledge/security$\"\n\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\n\n/**\n * Management API server for a C++ microservice\n */\nclass ManagementApi {\n\tpublic:\n\t\tManagementApi(const std::string& name, const unsigned short port);\n\t\t~ManagementApi();\n\t\tstatic ManagementApi *getInstance();\n\t\tvoid start();\n\t\tvoid startServer();\n\t\tvoid stop();\n\t\tvoid stopServer();\n\t\tvoid registerStats(JSONProvider *statsProvider);\n\t\tvoid registerProvider(JSONProvider *provider);\n\t\tvoid registerService(ServiceHandler *serviceHandler) {\n\t\t\tm_serviceHandler = serviceHandler;\n\t\t}\n\t\tunsigned short getListenerPort() {\n\t\t\treturn m_server->getLocalPort();\n\t\t}\n\t\tvoid ping(std::shared_ptr<HttpServer::Response> response, std::shared_ptr<HttpServer::Request> request);\n\t\tvoid shutdown(std::shared_ptr<HttpServer::Response> response, std::shared_ptr<HttpServer::Request> request);\n\t\tvoid configChange(std::shared_ptr<HttpServer::Response> response, std::shared_ptr<HttpServer::Request> request);\n\t\tvoid configChildCreate(std::shared_ptr<HttpServer::Response> response, std::shared_ptr<HttpServer::Request> request);\n\t\tvoid configChildDelete(std::shared_ptr<HttpServer::Response> response, std::shared_ptr<HttpServer::Request> request);\n\t\tvoid 
securityChange(std::shared_ptr<HttpServer::Response> response, std::shared_ptr<HttpServer::Request> request);\n\n\tprotected:\n\t\tstatic ManagementApi *m_instance;\n\t\tstd::string\tm_name;\n\t\tLogger\t\t*m_logger;\n\t\ttime_t\t\tm_startTime;\n\t\tHttpServer\t*m_server;\n\t\tJSONProvider\t*m_statsProvider;\n\t\tServiceHandler\t*m_serviceHandler;\n\t\tstd::thread\t*m_thread;\n\tprivate:\n\t\tvoid            respond(std::shared_ptr<HttpServer::Response>, const std::string&);\n\t\tstd::vector<JSONProvider *>\n\t\t\t\tm_providers;\n};\n#endif\n"
  },
  {
    "path": "C/services/common/include/north_python_plugin_handle.h",
    "content": "\n#ifndef _NORTH_PYTHON_PLUGIN_HANDLE_H\n#define _NORTH_PYTHON_PLUGIN_HANDLE_H\n/*\n * Fledge plugin handle related\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n#include <logger.h>\n#include <vector>\n#include <sstream>\n#include <dlfcn.h>\n#include <python_plugin_handle.h>\n\n/**\n * The NorthPythonPluginHandle class is used to represent an interface to \n * a South plugin that is available as a python script\n */\nclass NorthPythonPluginHandle : public PythonPluginHandle\n{\n\tpublic:\n\t\tNorthPythonPluginHandle(const char *name, const char *path);\n};\n\n#endif\n\n"
  },
  {
    "path": "C/services/common/include/notification_python_plugin_handle.h",
    "content": "\n#ifndef _NOTIFICATION_PYTHON_PLUGIN_HANDLE_H\n#define _NOTIFICATION_PYTHON_PLUGIN_HANDLE_H\n/*\n * Fledge Notification Python plugin handle related\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n#include <logger.h>\n#include <vector>\n#include <sstream>\n#include <dlfcn.h>\n#include <python_plugin_handle.h>\n\n/**\n * The PythonPluginHandle class is used to represent an interface to \n * a plugin that is available as a python script\n */\nclass NotificationPythonPluginHandle : public PythonPluginHandle\n{\n\tpublic:\n\t\tNotificationPythonPluginHandle(const char *name, const char *path);\n};\n\n#endif\n\n"
  },
  {
    "path": "C/services/common/include/perfmonitors.h",
    "content": "#ifndef _PERFMONITOR_H\n#define _PERFMONITOR_H\n/*\n * Fledge performance monitor\n *\n * Copyright (c) 2023 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <thread>\n#include <storage_client.h>\n#include <unordered_map>\n#include <insert.h>\n#include <mutex>\n#include <condition_variable>\n\nclass PerfMon {\n\tpublic:\n\t\tPerfMon(const std::string& name);\n\t\tvoid\t\taddValue(long value);\n\t\tint\t\tgetValues(InsertValues& values);\n\tprivate:\n\t\tstd::string\tm_name;\n\t\tlong\t\tm_average;\n\t\tlong\t\tm_min;\n\t\tlong\t\tm_max;\n\t\tint\t\tm_samples;\n\t\tstd::mutex\tm_mutex;\n};\n/**\n * Class to handle the performance monitors\n */\nclass PerformanceMonitor {\n\tpublic:\n\t\tPerformanceMonitor(const std::string& service, StorageClient *storage);\n\t\t// Write data to storage\n\t\tvirtual void writeData(const std::string& table, const InsertValues& values)\n\t\t{\n\t\t\t// Write data via storage client\n\t\t\tif (m_storage != NULL)\n\t\t\t{\n\t\t\t\tm_storage->insertTable(table, values);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Failed to save performace monitor data: \"\\\n\t\t\t\t\t\t\"storage client is null for servide '%s'\",\n\t\t\t\t\t\tm_service.c_str());\n\t\t\t}\n\t\t};\n\t\tvirtual ~PerformanceMonitor();\n\t\t\t\t\t/**\n\t\t\t\t\t * Collect a performance monitor\n\t\t\t\t\t *\n\t\t\t\t\t * @param name\tName of the monitor\n\t\t\t\t\t * @param calue\tValue of the monitor\n\t\t\t\t\t */\n\t\tinline void\t\tcollect(const std::string& name, long value)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (m_collecting)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tdoCollection(name, value);\n\t\t\t\t\t\t}\n\t\t\t\t\t};\n\t\tvoid\t\t\tsetCollecting(bool state);\n\t\tvoid\t\t\twriteThread();\n\t\tbool\t\t\tisCollecting() { return m_collecting; };\n\tprivate:\n\t\tvoid\t\t\tdoCollection(const std::string& name, long 
value);\n\tprivate:\n\t\tstd::string\t\tm_service;\n\t\tStorageClient\t\t*m_storage;\n\t\tstd::thread\t\t*m_thread;\n\t\tbool\t\t\tm_collecting;\n\t\tstd::unordered_map<std::string, PerfMon *>\n\t\t\t\t\tm_monitors;\n\t\tstd::condition_variable m_cv;\n\t\tstd::mutex\t\tm_mutex;\n};\n#endif\n"
  },
  {
    "path": "C/services/common/include/plugin.h",
    "content": "#ifndef _PLUGIN_H\n#define _PLUGIN_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <plugin_api.h>\n\nclass PluginManager;\n\n/**\n * A generic representation of a plugin\n */\nclass Plugin {\n\n  public:\n    Plugin(PLUGIN_HANDLE handle);\n    ~Plugin();\n\n    const PLUGIN_INFORMATION *getInfo();\n    PLUGIN_HANDLE getHandle() { return handle; }\n\n  protected:\n    PLUGIN_HANDLE handle;\n    PluginManager *manager;\n    PLUGIN_INFORMATION *info;\n};\n\n#endif\n"
  },
  {
    "path": "C/services/common/include/plugin_api.h",
    "content": "#ifndef _PLUGIN_API\n#define _PLUGIN_API\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017,2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <string>\n\n#define TO_STRING(...) DEFER(TO_STRING_)(__VA_ARGS__)\n#define DEFER(x) x\n#define TO_STRING_(...) #__VA_ARGS__\n#define QUOTE(...) TO_STRING(__VA_ARGS__)\n\n/**\n * The plugin infiornation structure, used to return information\n * from a plugin during the laod and configuration stage.\n */\ntypedef struct {\n\t/** The name of the plugin */\n        const char\t*name;\n\t/** The release version of the plugin */\n        const char\t*version;\n\t/** The set of option flags that apply to this plugin */\n        unsigned int\toptions;\n\t/**\n\t * The plugin type, this is one of storage, south, \n\t * filter, north, notificationRule or notificationDelivery\n\t */\n        const char\t*type;\n\t/** The interface version of this plugin */\n        const char\t*interface;\n\t/** The default JSON configuration category for this plugin */\n\tconst char\t*config;\n} PLUGIN_INFORMATION;\n\n/**\n * Structure used by plugins to return error information\n */\ntypedef struct {\n        char         *message;\n        char         *entryPoint;\n        bool         retryable;\n} PLUGIN_ERROR;\n\n/**\n * Pass a name/value pair to a plugin\n */\ntypedef struct plugin_parameter {\n\tstd::string\tname;\n\tstd::string\tvalue;\n} PLUGIN_PARAMETER;\n\n/**\n * The handle used to reference a plugin. 
This is an opaque data\n * pointer and is used by the plugins as a way to pass information\n * between each invocation of the plugin entry points.\n */\ntypedef void * PLUGIN_HANDLE;\n\n/**\n * The destinations to which control messages may be sent\n */\ntypedef enum controlDestination {\n\t/** The control message is destined for the source of a particular asset */\n\tDestinationAsset,\n\t/** The control message is destined for the named service */\n\tDestinationService,\n\t/** The control message is destined for all south services that support control */\n\tDestinationBroadcast,\n\t/** The control message is destined to execute the named script */\n\tDestinationScript\n} ControlDestination;\n\n/**\n * Plugin options bitmask values\n */\n#define SP_COMMON\t\t0x0001\n#define SP_READINGS\t\t0x0002\n/** The plugin ingests data asynchronously */\n#define SP_ASYNC\t\t0x0004\n/** The plugin wishes to persist data between executions */\n#define SP_PERSIST_DATA\t\t0x0008\n/** The notification delivery plugin wishes to add (ingest) new data into the system */\n#define SP_INGEST\t\t0x0010\n/** The plugin requires access to the Microservice Management API */\n#define SP_GET_MANAGEMENT\t0x0020\n/** The plugin requires direct access to the storage service */\n#define SP_GET_STORAGE\t\t0x0040\n/** The plugin has been deprecated and will be removed in a future release */\n#define SP_DEPRECATED\t\t0x0080\n/** The plugin is built in and not installed by a separate package */\n#define SP_BUILTIN\t\t0x0100\n/** The plugin supports control data */\n#define SP_CONTROL\t\t0x1000\n\n/**\n * Plugin types\n */\n#define PLUGIN_TYPE_STORAGE\t\t\t\"storage\"\n#define PLUGIN_TYPE_SOUTH\t\t\t\"south\"\n#define PLUGIN_TYPE_NORTH\t\t\t\"north\"\n#define PLUGIN_TYPE_FILTER\t\t\t\"filter\"\n#define PLUGIN_TYPE_NOTIFICATION_RULE\t\t\"notificationRule\"\n#define PLUGIN_TYPE_NOTIFICATION_DELIVERY\t\"notificationDelivery\"\n\n#endif\n"
  },
  {
    "path": "C/services/common/include/plugin_exception.h",
    "content": "/*\n * Fledge services common.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n/**\n * Implementation of PluginNotImplementedException.\n * This exception should be thrown when a feature is not implemented yet.\n */\nclass PluginNotImplementedException : public std::exception {\npublic:\n\t// Construct with a default error message:\n\tPluginNotImplementedException(const char * error = \"Functionality not implemented yet!\")\n\t{\n\t\terrorMessage = error;\n\t}\n\n\t// Compatibility with std::exception.\n\tconst char * what() const noexcept\n\t{\n\t\treturn errorMessage.c_str();\n\t}\n\nprivate:\n        std::string errorMessage;\n};\n"
  },
  {
    "path": "C/services/common/include/plugin_handle.h",
    "content": "\n#ifndef _PLUGIN_HANDLE_H\n#define _PLUGIN_HANDLE_H\n/*\n * Fledge plugin handle related\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora\n */\n#include <logger.h>\n#include <vector>\n#include <sstream>\n#include <unordered_map>\n#include <dlfcn.h>\n#include <plugin_api.h>\n\n/**\n * The PluginHandle class is used to represent an opaque handle to a \n * plugin instance\n */\nclass PluginHandle\n{\n\tpublic:\n\t\tPluginHandle() {}\n\t\tvirtual ~PluginHandle() {}\n\t\tvirtual void *GetInfo() = 0;\n\t\tvirtual void *ResolveSymbol(const char* sym) = 0;\n\t\tvirtual void *getHandle() = 0;\n};\n\n#endif\n\n"
  },
  {
    "path": "C/services/common/include/plugin_manager.h",
    "content": "#ifndef PLUGIN_MANAGER_H\n#define PLUGIN_MANAGER_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017, 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n\n#include <plugin_api.h>\n#include <plugin_handle.h>\n#include <logger.h>\n#include <string>\n#include <list>\n#include <map>\n#include <vector>\n\ntypedef enum PluginType\n{\n\tPLUGIN_TYPE_ID_STORAGE,\n\tPLUGIN_TYPE_ID_OTHER\n} tPluginType;\n\nenum PLUGIN_TYPE {\n\tBINARY_PLUGIN,\n\tPYTHON_PLUGIN,\n\tJSON_PLUGIN\n};\n\n\n/**\n * The manager for plugins.\n *\n * This manager is a singleton and is responsible for loading, tracking and unloading\n * the plugins within the system.\n */\nclass PluginManager {\n\tpublic:\n\t\tstatic PluginManager *getInstance();\n\t\tPLUGIN_HANDLE\tloadPlugin(const std::string& name,\n\t\t\t\t\t   const std::string& type);\n\t\tvoid\t\tunloadPlugin(PLUGIN_HANDLE handle);\n\t\tvoid*\t\tresolveSymbol(PLUGIN_HANDLE handle,\n\t\t\t\t\t      const std::string& symbol);\n\t\tPLUGIN_HANDLE\tfindPluginByName(const std::string& name);\n\t\tPLUGIN_HANDLE\tfindPluginByType(const std::string& type);\n\t\tPLUGIN_INFORMATION\n\t\t\t\t*getInfo(const PLUGIN_HANDLE);\n\t\tvoid\t\tgetInstalledPlugins(const std::string& type,\n\t\t\t\t\t\t    std::list<std::string>& plugins);\n\t\tvoid setPluginType(tPluginType type);\n\t\tPLUGIN_TYPE getPluginImplType(const PLUGIN_HANDLE hndl) { return pluginImplTypes[hndl]; }\n\t\tstd::vector<std::string> getPluginsByFlags(const std::string& type, unsigned int flags);\n\tpublic:\n                static PluginManager* instance;\n\n\tprivate:\n                PluginManager();\n\t\tstd::string\tfindPlugin(std::string name, std::string _type, std::string _plugin_path, PLUGIN_TYPE type);\n\n\tprivate:\n                std::list<PLUGIN_HANDLE>\t\tplugins;\n                std::map<std::string, PLUGIN_HANDLE>\tpluginNames;\n                std::map<std::string, 
std::string>\tpluginTypes;\n\t\tstd::map<PLUGIN_HANDLE, PLUGIN_TYPE>\tpluginImplTypes;\n                std::map<PLUGIN_HANDLE, PLUGIN_INFORMATION *>\n\t\t\t\t\t\t\tpluginInfo;\n                std::map<PLUGIN_HANDLE, PluginHandle*>\n\t\t\t\t\t\t\tpluginHandleMap;\n                Logger*\t\t\t\t\tlogger;\n\t\ttPluginType\t\t\t\tm_pluginType;\n};\n\n#endif\n"
  },
  {
    "path": "C/services/common/include/python_plugin_handle.h",
    "content": "\n#ifndef _PYTHON_PLUGIN_HANDLE_H\n#define _PYTHON_PLUGIN_HANDLE_H\n/*\n * Fledge Base plugin handle class\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora, Massimiliano Pinto\n */\n#include <logger.h>\n#include <vector>\n#include <sstream>\n#include <dlfcn.h>\n#include <plugin_handle.h>\n#include <Python.h>\n\ntypedef void* (*pluginResolveSymbolFn)(const char *, const std::string&);\ntypedef void (*pluginCleanupFn)(const std::string&);\n\n/**\n * The PythonPluginHandle class is the base class used to represent an interface to\n * a plugin that is available as a python script\n */\nclass PythonPluginHandle : public PluginHandle\n{\n\tpublic:\n\t\t// Base constructor\n\t\tPythonPluginHandle(const char *name, const char *path) : m_name(name) {};\n\n\t\t/**\n\t\t * Base destructor\n\t\t *    - Call cleanup on python plugin interface\n\t\t *    - Close python plugin interface library handle\n\t\t */\n\t\t~PythonPluginHandle()\n\t\t{\n\t\t\tif (!m_hndl)\n\t\t\t{\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpluginCleanupFn cleanupFn =\n\t\t\t\t(pluginCleanupFn) dlsym(m_hndl, \"PluginInterfaceCleanup\");\n\t\t\tif (cleanupFn == NULL)\n\t\t\t{\n\t\t\t\t// Unable to find PluginInterfaceCleanup entry point\n\t\t\t\tLogger::getLogger()->error(\"Plugin library %s does not support %s function : %s\",\n\t\t\t\t\t\t\t   m_interfaceObjName.c_str(),\n\t\t\t\t\t\t\t   \"PluginInterfaceCleanup\",\n\t\t\t\t\t\t\t   dlerror());\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tcleanupFn(m_name);\n\t\t\t}\n\t\t\tdlclose(m_hndl);\n\t\t\tm_hndl = NULL;\n\t\t};\n\n\t\t/**\n\t\t * Gets function pointer from loaded python interface library that can\n\t\t * be invoked to call 'sym' function in python plugin\n\t\t *\n\t\t * @param    sym\tThe symbol to fetch\n\t\t */\n\t\tvoid *ResolveSymbol(const char* sym)\n\t\t{\n\t\t\tif (!m_hndl)\n\t\t\t{\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tpluginResolveSymbolFn resolvSymFn 
=\n\t\t\t\t(pluginResolveSymbolFn) dlsym(m_hndl, \"PluginInterfaceResolveSymbol\");\n\t\t\tif (resolvSymFn == NULL)\n\t\t\t{\n\t\t\t\t// Unable to find PluginInterfaceResolveSymbol entry point\n\t\t\t\tLogger::getLogger()->error(\"Plugin library %s does not support \"\n\t\t\t\t\t\t\t   \"%s function : %s\",\n\t\t\t\t\t\t\t   m_interfaceObjName.c_str(),\n\t\t\t\t\t\t\t   \"PluginInterfaceResolveSymbol\",\n\t\t\t\t\t\t\t   dlerror());\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tvoid *rv = resolvSymFn(sym, m_name);\n\t\t\tif (!rv)\n\t\t\t{\n\t\t\t\t// Python filter plugins do not support plugin_start\n\t\t\t\t// just log a debug message\n\t\t\t\tif (m_type.compare(PLUGIN_TYPE_FILTER) == 0)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->debug(\"PythonPluginHandle::ResolveSymbol \"\n\t\t\t\t\t\t\t\t   \"returning NULL for sym=%s, plugin %s, type %s\",\n\t\t\t\t\t\t\t\t   sym,\n\t\t\t\t\t\t\t\t   m_name.c_str(),\n\t\t\t\t\t\t\t\t   m_type.c_str());\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->error(\"PythonPluginHandle::ResolveSymbol \"\n\t\t\t\t\t\t\t\t   \"returning NULL for sym=%s, plugin %s, type %s\",\n\t\t\t\t\t\t\t\t   sym,\n\t\t\t\t\t\t\t\t   m_name.c_str(),\n\t\t\t\t\t\t\t\t   m_type.c_str());\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn rv;\n\t\t};\n\n\t\t/**\n\t\t * Returns function pointer that can be invoked to call 'plugin_info'\n\t\t * function in python plugin\n\t\t */\n\t\tvoid *GetInfo()\n\t\t{\n\t\t        return (void *) ResolveSymbol(\"plugin_info\");\n\t\t};\n\n\t\t// Return pointer to this class\n\t\tvoid *getHandle() { return this; }\n\tpublic:\n\t\t// The python plugin interface library shared object\n\t\tvoid*\t\tm_hndl;\n\t\t// The interface library name to load\n\t\tstd::string\tm_interfaceObjName;\n\n\t\t// Set plugin name\n\t\tstd::string\tm_name;\n\n\t\t// plugin type\n\t\tstd::string\tm_type;\n};\n\n#endif\n"
  },
  {
    "path": "C/services/common/include/service_handler.h",
    "content": "#ifndef _SERVICE_HANDLER_H\n#define _SERVICE_HANDLER_H\n/*\n * Fledge service class\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <config_category.h>\n#include <string>\n#include <management_client.h>\n\n/**\n * ServiceHandler abstract class - the interface that services using the\n * management API must provide.\n */\nclass ServiceHandler\n{\n\tpublic:\n\t\tvirtual void\tshutdown() = 0;\n\t\tvirtual void\trestart() = 0;\n\t\tvirtual void\tconfigChange(const std::string& category, const std::string& config) = 0;\n\t\tvirtual void\tconfigChildCreate(const std::string& parent_category, const std::string& category, const std::string& config) = 0;\n\t\tvirtual void\tconfigChildDelete(const std::string& parent_category, const std::string& category) = 0;\n\t\tvirtual bool\tisRunning() = 0;\n\t\tvirtual bool\tsecurityChange(const std::string &payload) { return payload.empty(); };\n};\n\n/**\n * ServiceAuthHandler adds security to the base class ServiceHandler\n */\nclass ServiceAuthHandler : public ServiceHandler\n{\n\tpublic:\n\t\tServiceAuthHandler() : m_refreshThread(NULL), m_refreshRunning(true) {};\n\t\tvirtual ~ServiceAuthHandler() { if (m_refreshThread) { m_refreshRunning = false; m_refreshThread->join(); delete m_refreshThread; } };\n\t\tstd::string&\tgetName() { return m_name; };\n\t\tstd::string&\tgetType() { return m_type; };\n\t\tbool\t\tcreateSecurityCategories(ManagementClient* mgtClient, bool dryRun);\n\t\tbool\t\tupdateSecurityCategory(const std::string& newCategory);\n\t\tvoid\t\tsetInitialAuthenticatedCaller();\n\t\tvoid\t\tsetAuthenticatedCaller(bool enabled);\n\t\tbool\t\tgetAuthenticatedCaller();\n\t\t// ACL verification (for Dispatcher)\n\t\tbool\t\tAuthenticationMiddlewareACL(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\tstd::shared_ptr<HttpServer::Request> request,\n\t\t\t\t\t\t\tconst std::string& 
serviceName,\n\t\t\t\t\t\t\tconst std::string& serviceType);\n\t\t// Handler for Dispatcher\n\t\tbool\t\tAuthenticationMiddlewareCommon(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\tstd::shared_ptr<HttpServer::Request> request,\n\t\t\t\t\t\t\tstd::string& callerName,\n\t\t\t\t\t\t\tstd::string& callerType);\n\t\t// Handler for South services: token verification and service ACL check\n\t\tvoid\t\tAuthenticationMiddlewarePUT(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\tstd::shared_ptr<HttpServer::Request> request,\n\t\t\t\t\t\t\tstd::function<void(\n\t\t\t\t\t\t\t\tstd::shared_ptr<HttpServer::Response>,\n\t\t\t\t\t\t\t\tstd::shared_ptr<HttpServer::Request>)> funcPUT);\n\t\tvoid\t\trefreshBearerToken();\n \t\t// Send a good HTTP response to the caller\n\t\tvoid\t\trespond(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\tconst std::string& payload)\n\t\t\t\t{\n\t\t\t\t\t*response << \"HTTP/1.1 200 OK\\r\\n\"\n\t\t\t\t\t\t<< \"Content-Length: \" << payload.length() << \"\\r\\n\"\n\t\t\t\t\t\t<<  \"Content-type: application/json\\r\\n\\r\\n\"\n\t\t\t\t\t\t<< payload;\n\t\t\t\t};\n \t\t// Send an error message HTTP response to the caller with the given HTTP code\n\t\tvoid\t\trespond(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\tSimpleWeb::StatusCode code,\n\t\t\t\t\t\t\tconst std::string& payload)\n\t\t\t\t{\n\t\t\t\t\t*response << \"HTTP/1.1 \" << status_code(code) << \"\\r\\n\"\n\t\t\t\t\t\t<< \"Content-Length: \" << payload.length() << \"\\r\\n\"\n\t\t\t\t\t\t<<  \"Content-type: application/json\\r\\n\\r\\n\"\n\t\t\t\t\t\t<< payload;\n\t\t\t\t};\n\t\tstatic ManagementClient *\n\t\t\t\tgetMgmtClient() { return m_mgtClient; };\n\t\tbool\t\tsecurityChange(const std::string &payload);\n\n\tprivate:\n\t\tbool\t\tverifyURL(const std::string& path,\n\t\t\t\t\tconst std::string& sName,\n\t\t\t\t\tconst std::string& sType);\n\t\tbool\t\tverifyService(const std::string& sName,\n\t\t\t\t\tconst std::string 
&sType);\n\n\tprotected:\n\t\tstd::string\tm_name;\n\t\tstd::string\tm_type;\n\t\t// Management client pointer\n\t\tstatic ManagementClient\n\t\t\t\t*m_mgtClient;\n\n\tprivate:\n\t\t// Security configuration change mutex\n\t\tstd::mutex\tm_mtx_config;\n\t\t// Authentication is enabled for API endpoints\n\t\tbool\t\tm_authentication_enabled;\n\t\t// Security configuration\n\t\tConfigCategory\tm_security;\n\t\t// Service ACL\n\t\tACL\t\tm_service_acl;\n\t\tstd::thread\t*m_refreshThread;\n\t\tbool\t\tm_refreshRunning;\n};\n\n#endif\n"
  },
  {
    "path": "C/services/common/include/south_python_plugin_handle.h",
    "content": "\n#ifndef _SOUTH_PYTHON_PLUGIN_HANDLE_H\n#define _SOUTH_PYTHON_PLUGIN_HANDLE_H\n/*\n * Fledge plugin handle related\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n#include <logger.h>\n#include <vector>\n#include <sstream>\n#include <dlfcn.h>\n#include <python_plugin_handle.h>\n\n/**\n * The SouthPythonPluginHandle class is used to represent an interface to \n * a South plugin that is available as a python script\n */\nclass SouthPythonPluginHandle : public PythonPluginHandle\n{\n\tpublic:\n\t\tSouthPythonPluginHandle(const char *name, const char *path);\n};\n\n#endif\n\n"
  },
  {
    "path": "C/services/common/management_api.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <management_api.h>\n#include <config_handler.h>\n#include <rapidjson/document.h>\n#include <logger.h>\n#include <time.h>\n#include <sstream>\n\nusing namespace std;\nusing namespace rapidjson;\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\n\nManagementApi *ManagementApi::m_instance = 0;\n\n/**\n * Wrapper for ping method\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid pingWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n        ManagementApi *api = ManagementApi::getInstance();\n        api->ping(response, request);\n}\n\n/**\n * Wrapper for shutdown method\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid shutdownWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n        ManagementApi *api = ManagementApi::getInstance();\n        api->shutdown(response, request);\n}\n\n/**\n * Wrapper for config change method\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid configChangeWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n        ManagementApi *api = ManagementApi::getInstance();\n        api->configChange(response, request);\n}\n\n/**\n * Wrapper for config child  create method\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid configChildCreateWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n        ManagementApi *api = ManagementApi::getInstance();\n        api->configChildCreate(response, request);\n}\n\n/**\n * Wrapper for config child delete method\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n 
 */\nvoid configChildDeleteWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n        ManagementApi *api = ManagementApi::getInstance();\n        api->configChildDelete(response, request);\n}\n\n/**\n * Wrapper for security change method\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid securityChangeWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n        ManagementApi *api = ManagementApi::getInstance();\n        api->securityChange(response, request);\n}\n\n/**\n * Construct a microservices management API manager class\n *\n * @param name\tThe service name\n * @param port\tThe management API port\n */\nManagementApi::ManagementApi(const string& name, const unsigned short port) : m_name(name)\n{\n\tm_server = new HttpServer();\n\tm_logger = Logger::getLogger();\n\tm_server->config.port = port;\n\tm_startTime = time(0);\n\tm_statsProvider = 0;\n\tm_server->resource[PING][\"GET\"] = pingWrapper;\n\tm_server->resource[SERVICE_SHUTDOWN][\"POST\"] = shutdownWrapper;\n\tm_server->resource[CONFIG_CHANGE][\"POST\"] = configChangeWrapper;\n\tm_server->resource[CONFIG_CHILD_CREATE][\"POST\"] = configChildCreateWrapper;\n\tm_server->resource[CONFIG_CHILD_DELETE][\"DELETE\"] = configChildDeleteWrapper;\n\tm_server->resource[SECURITY_CHANGE][\"PUT\"] = securityChangeWrapper;\n\n\tm_instance = this;\n\n\tm_logger->info(\"Starting management api on port %d.\", port);\n}\n\n/**\n * Start HTTP server for management API\n */\nstatic void startService()\n{\n        ManagementApi::getInstance()->startServer();\n}\n\nvoid ManagementApi::start() {\n        m_thread = new thread(startService);\n}\n\nvoid ManagementApi::startServer() {\n\tm_server->start();\n}\n\nvoid ManagementApi::stop()\n{\n\tthis->stopServer();\n}\n\nvoid ManagementApi::stopServer()\n{\n\tm_server->stop();\n\tm_thread->join();\n}\n\n/**\n * Return the singleton instance of the 
management interface\n *\n * Note if one has not been explicitly created then this will\n * return 0.\n */\nManagementApi *ManagementApi::getInstance()\n{\n\treturn m_instance;\n}\n\n/**\n * Management API destructor\n */\nManagementApi::~ManagementApi()\n{\n\tdelete m_server;\n\tdelete m_thread;\n}\n\n/**\n * Register a statistics provider\n *\n * @param statsProvider\tThe statistics provider for the service\n */\nvoid ManagementApi::registerStats(JSONProvider *statsProvider)\n{\n\tm_statsProvider = statsProvider;\n}\n\n/**\n * Register a generic provider. There can be multiple providers for\n * a single service\n *\n * @param provider\tThe JSON status provider to add\n */\nvoid ManagementApi::registerProvider(JSONProvider *provider)\n{\n\tm_providers.emplace_back(provider);\n}\n\n/**\n * Received a ping request, construct a reply and return to caller\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid ManagementApi::ping(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nostringstream convert;\nstring responsePayload;\n\n\t(void)request;\t// Unused argument\n\tconvert << \"{ \\\"uptime\\\" : \" << time(0) - m_startTime << \",\";\n\tconvert << \"\\\"name\\\" : \\\"\" << m_name << \"\\\"\";\n\tfor (auto& p : m_providers)\n\t{\n\t\tstring data;\n\t\tp->asJSON(data);\n\t\tconvert << \", \" << data;\n\t}\n\tif (m_statsProvider)\n\t{\n\t\tstring stats;\n\t\tm_statsProvider->asJSON(stats);\n\t\tconvert << \", \\\"statistics\\\" : \" << stats;\n\t}\n\tconvert << \" }\";\n\tresponsePayload = convert.str();\n\trespond(response, responsePayload);\n}\n\n/**\n * Received a shutdown request, construct a reply and return to caller\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid ManagementApi::shutdown(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nostringstream convert;\nstring responsePayload;\n\n\t(void)request;\t// 
Unused argument\n\tm_serviceHandler->shutdown();\n\tconvert << \"{ \\\"message\\\" : \\\"Shutdown in progress\\\" }\";\n\tresponsePayload = convert.str();\n\trespond(response, responsePayload);\n}\n\n/**\n * Received a config change request, construct a reply and return to caller\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid ManagementApi::configChange(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nostringstream convert;\nstring responsePayload;\nstring payload;\n\n\ttry\n\t{\t\n\t\tpayload = request->content.string();\n\t\tConfigCategoryChange conf(payload);\n\t\tConfigHandler\t*handler = ConfigHandler::getInstance(NULL);\n\t\thandler->configChange(conf.getName(), conf.itemsToJSON(true));\n\t\tconvert << \"{ \\\"message\\\" : \\\"Config change accepted\\\" }\";\n\t}\n\tcatch(const std::exception& e)\n\t{\n\t\tconvert << \"{ \\\"exception\\\" : \\\"\" << e.what() << \"\\\" }\";\n\t}\n\tcatch(...)\n\t{\n\t\tconvert << \"{ \\\"exception\\\" : \\\"generic\\\" }\";\n\t}\n\t\n\tresponsePayload = convert.str();\n\trespond(response, responsePayload);\n}\n\n/**\n * Received a children deletion request, construct a reply and return to caller\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid ManagementApi::configChildDelete(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nostringstream convert;\nstring responsePayload;\nstring\tcategory, items, payload, parent_category;\n\n\tpayload = request->content.string();\n\n\tConfigCategoryChange\tconf(payload);\n\tConfigHandler\t*handler = ConfigHandler::getInstance(NULL);\n\n\tparent_category = conf.getmParentName();\n\tcategory = conf.getName();\n\titems = conf.itemsToJSON(true);\n\n\tLogger::getLogger()->debug(\"%s - parent_category:%s: child_category:%s: items:%s: \", __FUNCTION__\n\t\t\t\t\t\t\t   , parent_category.c_str()\n\t\t\t\t\t\t\t   , 
category.c_str()\n\t\t\t\t\t\t\t   , items.c_str()\n\t\t\t\t\t\t\t   );\n\n\thandler->configChildDelete(parent_category, category);\n\tconvert << \"{ \\\"message\\\" : \\\"Config child category change accepted\\\" }\";\n\tresponsePayload = convert.str();\n\trespond(response, responsePayload);\n}\n\n/**\n * Received a children creation request, construct a reply and return to caller\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid ManagementApi::configChildCreate(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nostringstream convert;\nstring responsePayload;\nstring\tcategory, items, payload, parent_category;\n\n\tpayload = request->content.string();\n\n\tConfigCategoryChange\tconf(payload);\n\tConfigHandler\t*handler = ConfigHandler::getInstance(NULL);\n\n\tparent_category = conf.getmParentName();\n\tcategory = conf.getName();\n\titems = conf.itemsToJSON(true);\n\n\tLogger::getLogger()->debug(\"%s - parent_category:%s: child_category:%s: items:%s: \", __FUNCTION__\n\t\t\t\t\t\t\t   , parent_category.c_str()\n\t\t\t\t\t\t\t   , category.c_str()\n\t\t\t\t\t\t\t   , items.c_str()\n\t\t\t\t\t\t\t   );\n\n\thandler->configChildCreate(parent_category, category, items);\n\tconvert << \"{ \\\"message\\\" : \\\"Config child category change accepted\\\" }\";\n\tresponsePayload = convert.str();\n\trespond(response, responsePayload);\n}\n\n\n/**\n * HTTP response method\n */\nvoid ManagementApi::respond(shared_ptr<HttpServer::Response> response, const string& payload)\n{\n        *response << \"HTTP/1.1 200 OK\\r\\nContent-Length: \" << payload.length() << \"\\r\\n\"\n                 <<  \"Content-type: application/json\\r\\n\\r\\n\" << payload;\n}\n\n/**\n * Received a security change request, construct a reply and return to caller\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid ManagementApi::securityChange(shared_ptr<HttpServer::Response> 
response,\n\t\t\t\tshared_ptr<HttpServer::Request> request)\n{\n\tstring payload = request->content.string();\n\n\tLogger::getLogger()->debug(\"Received securityChange: %s\", payload.c_str());\n\n\tostringstream convert;\n\tstring responsePayload;\n\n\t// Call server securityChange method\n\tm_serviceHandler->securityChange(payload);\n\n\tconvert << \"{ \\\"message\\\" : \\\"Security change accepted\\\" }\";\n\n\tresponsePayload = convert.str();\n\trespond(response, responsePayload);\n}\n"
  },
  {
    "path": "C/services/common/north_python_plugin_handle.cpp",
    "content": "/*\n * Fledge plugin handle related\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <config_category.h>\n#include <reading.h>\n#include <logger.h>\n#include <north_python_plugin_handle.h>\n\n#define PYTHON_PLUGIN_INTF_LIB \"libnorth-plugin-python-interface.so\"\n\n#define PRINT_FUNC\tLogger::getLogger()->info(\"%s:%d\", __FUNCTION__, __LINE__);\n\ntypedef PLUGIN_INFORMATION *(*pluginInitFn)(const char *pluginName, const char *path);\n\nusing namespace std;\n\n/**\n * Constructor for NorthPythonPluginHandle\n *    - Load python interface library and initialize the interface\n *\n * @param    pluginName\t\tThe Python plugin name to load\n * @param    pluginPathName\tThe Python plugin path\n */\nNorthPythonPluginHandle::NorthPythonPluginHandle(const char *pluginName,\n\t\t\t\t\t\t const char *pluginPathName) :\n\t\t\t\t\t\t PythonPluginHandle(pluginName, pluginPathName)\n{\n\t// expecting this lib to be present in LD_LIBRARY_PATH:\n\t//same dir as where lib-services-common.so is present\n\tstring libPath = PYTHON_PLUGIN_INTF_LIB;\n\t\n\tm_hndl = dlopen(libPath.c_str(), RTLD_NOW | RTLD_GLOBAL);\n\tif (!m_hndl)\n\t{\n\t\tLogger::getLogger()->error(\"PythonPluginHandle c'tor: dlopen failed for library '%s' : %s\",\n\t\t\t\t\t   libPath.c_str(),\n\t\t\t\t\t   dlerror());\n\t\treturn;\n\t}\n\n\tpluginInitFn initFn = (pluginInitFn) dlsym(m_hndl, \"PluginInterfaceInit\");\n\tif (initFn == NULL)\n\t{\n\t\t// Unable to find PluginInterfaceInit entry point\n\t\tLogger::getLogger()->error(\"Plugin library %s does not support %s function : %s\",\n\t\t\t\t\t   libPath.c_str(),\n\t\t\t\t\t   \"PluginInterfaceInit\",\n\t\t\t\t\t   dlerror());\n\t\tdlclose(m_hndl);\n\t\tm_hndl = NULL;\n\t\treturn;\n\t}\n\n\t// Initialise embedded Python and the interface\n\tvoid *ref = initFn(pluginName, pluginPathName);\n\tif (ref == NULL)\n\t{\n\t\tfprintf(stderr,\n\t\t\t\"Plugin library %s 
: PluginInterfaceInit returned failure\",\n\t\t\tlibPath.c_str());\n\t\tdlclose(m_hndl);\n\t\tm_hndl = NULL;\n\t\treturn;\n\t}\n\n\t// Set type\n\tm_type = PLUGIN_TYPE_NORTH;\n}\n"
  },
  {
    "path": "C/services/common/notification_python_plugin_handle.cpp",
    "content": "/*\n * Fledge Notification Python plugin handle related\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <config_category.h>\n#include <reading.h>\n#include <logger.h>\n#include <notification_python_plugin_handle.h>\n\n#define PYTHON_PLUGIN_INTF_LIB \"libnotification-plugin-python-interface.so\"\n\n#define PRINT_FUNC\tLogger::getLogger()->info(\"%s:%d\", __FUNCTION__, __LINE__);\n\ntypedef PLUGIN_INFORMATION *(*pluginInitFn)(const char *pluginName, const char *path);\n\nusing namespace std;\n\n/**\n * Constructor for NotificationPythonPluginHandle\n *    - Load python interface library and initialize the interface\n */\nNotificationPythonPluginHandle::NotificationPythonPluginHandle(const char *pluginName,\n\t\t\t\t\t\t\t\tconst char *pluginPathName) :\n\t\t\t\t\t\t\t\tPythonPluginHandle(pluginName, pluginPathName)\n{\n\t// expecting this lib to be present in LD_LIBRARY_PATH:\n\t//same dir as where lib-services-common.so is present\n\tm_interfaceObjName = PYTHON_PLUGIN_INTF_LIB;\n\n\t// Open interface library object\n\tm_hndl = dlopen(m_interfaceObjName.c_str(), RTLD_NOW | RTLD_GLOBAL);\n\tif (!m_hndl)\n\t{\n\t\tLogger::getLogger()->error(\"NotificationPythonPluginHandle c'tor: dlopen failed for library '%s' : %s\",\n\t\t\t\t\t   m_interfaceObjName.c_str(),\n\t\t\t\t\t   dlerror());\n\t\treturn;\n\t}\n\n\tpluginInitFn initFn =\n\t\t(pluginInitFn) dlsym(m_hndl, \"PluginInterfaceInit\");\n\tif (initFn == NULL)\n\t{\n\t\t// Unable to find PluginInterfaceInit entry point\n\t\tLogger::getLogger()->error(\"Plugin library %s does not support %s function : %s\",\n\t\t\t\t\t   m_interfaceObjName.c_str(),\n\t\t\t\t\t   \"PluginInterfaceInit\",\n\t\t\t\t\t   dlerror());\n\t\tdlclose(m_hndl);\n\t\tm_hndl = NULL;\n\t\treturn;\n\t}\n\n\t// Initialise Python plugin object\n\tvoid *ref = initFn(pluginName, pluginPathName);\n\tif (ref == 
NULL)\n\t{\n\t\tfprintf(stderr,\n\t\t\t\"Plugin library %s : PluginInterfaceInit returned failure\",\n\t\t\tm_interfaceObjName.c_str());\n\t\tdlclose(m_hndl);\n\t\tm_hndl = NULL;\n\t\treturn;\n\t}\n\n\t// Set type\n\tm_type = strstr(pluginPathName, PLUGIN_TYPE_NOTIFICATION_RULE) != NULL ?\n\t\tPLUGIN_TYPE_NOTIFICATION_RULE :\n\t\tPLUGIN_TYPE_NOTIFICATION_DELIVERY;\n}\n"
  },
  {
    "path": "C/services/common/perfmonitor.cpp",
    "content": "/*\n * Fledge storage service client\n *\n * Copyright (c) 2023 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <perfmonitors.h>\n#include <chrono>\n\nusing namespace std;\n\n/**\n * Constructor for an individual performance monitor\n *\n * @param name\tThe name of the performance monitor\n */\nPerfMon::PerfMon(const string& name) : m_name(name), m_samples(0)\n{\n}\n\n/**\n * Collect a new value for the performance monitor\n *\n * @param value\tThe new value\n */\nvoid PerfMon::addValue(long value)\n{\n\tlock_guard<mutex> guard(m_mutex);\n\tif (m_samples)\n\t{\n\t\tif (value < m_min)\n\t\t\tm_min = value;\n\t\telse if (value > m_max)\n\t\t\tm_max = value;\n\t\tm_average = ((m_samples * m_average) + value) / (m_samples + 1);\n\t\tm_samples++;\n\t}\n\telse\n\t{\n\t\tm_min = value;\n\t\tm_max = value;\n\t\tm_average = value;\n\t\tm_samples = 1;\n\t}\n}\n\n/**\n * Return the performance values to insert\n *\n */\nint PerfMon::getValues(InsertValues& values)\n{\n\tlock_guard<mutex> guard(m_mutex);\n\tif (m_samples == 0)\n\t\treturn 0;\n\tvalues.push_back(InsertValue(\"minimum\", m_min));\n\tvalues.push_back(InsertValue(\"maximum\", m_max));\n\tvalues.push_back(InsertValue(\"average\", m_average));\n\tvalues.push_back(InsertValue(\"samples\", m_samples));\n\tm_min = 0;\n\tm_max = 0;\n\tm_average = 0;\n\tint samples = m_samples;\n\tm_samples = 0;\n\treturn samples;\n}\n\n/**\n * Constructor for the performance monitors\n *\n * @param service\tThe name of the service\n * @param storage\tPoint to the storage client class for the service\n */\nPerformanceMonitor::PerformanceMonitor(const string& service, StorageClient *storage) :\n\tm_service(service), m_storage(storage), m_collecting(false), m_thread(NULL)\n{\n}\n\n/**\n * Destructor for the performance monitor\n */\nPerformanceMonitor::~PerformanceMonitor()\n{\n\tif (m_collecting)\n\t{\n\t\tsetCollecting(false);\n\t}\n\t// Write 
 thread has now been stopped or\n\t// was never running\n\tfor (const auto& it : m_monitors)\n\t{\n\t\tstring name = it.first;\n\t\tPerfMon *mon = it.second;\n\t\tdelete mon;\n\t}\n}\n\n/**\n * Monitor thread entry point\n *\n * @param perfMon\tThe performance monitor class\n */\nstatic void monitorThread(PerformanceMonitor *perfMon)\n{\n\tperfMon->writeThread();\n}\n\n/**\n * Set the collection state of the performance monitors\n *\n * @param state\tThe required collection state\n */\nvoid PerformanceMonitor::setCollecting(bool state)\n{\n\tm_collecting = state;\n\tif (m_collecting && m_thread == NULL)\n\t{\n\t\t// Start the thread to write the monitors to the database\n\t\tm_thread = new thread(monitorThread, this);\n\t}\n\telse if (m_collecting == false && m_thread)\n\t{\n\t\t// Stop the thread to write the monitors to the database\n\t\tm_cv.notify_all();\n\t\tm_thread->join();\n\t\tdelete m_thread;\n\t\tm_thread = NULL;\n\t}\n}\n\n/**\n * Add a new value to the named performance monitor\n *\n * @param name\tThe name of the performance monitor\n * @param value\tThe value to add\n */\nvoid PerformanceMonitor::doCollection(const string& name, long value)\n{\n\tPerfMon *mon;\n\tauto it = m_monitors.find(name);\n\tif (it == m_monitors.end())\n\t{\n\t\t// Create a new monitor\n\t\tmon = new PerfMon(name);\n\t\tm_monitors[name] = mon;\n\t}\n\telse\n\t{\n\t\tmon = it->second;\n\t}\n\tmon->addValue(value);\n}\n\n/**\n * The thread that runs to write database values\n */\nvoid PerformanceMonitor::writeThread()\n{\n\twhile (m_collecting)\n\t{\n\t\tunique_lock<mutex> lk(m_mutex);\n\t\tm_cv.wait_for(lk, chrono::seconds(60));\n\t\tif (m_collecting)\n\t\t{\n\t\t\t// Write to the database\n\t\t\tfor (const auto& it : m_monitors)\n\t\t\t{\n\t\t\t\tstring name = it.first;\n\t\t\t\tPerfMon *mon = it.second;\n\t\t\t\tInsertValues values;\n\t\t\t       \tif (mon->getValues(values) > 0)\n\t\t\t\t{\n\t\t\t\t\tvalues.push_back(InsertValue(\"service\", 
m_service));\n\t\t\t\t\tvalues.push_back(InsertValue(\"monitor\", name));\n\n\t\t\t\t\t// Insert data\n\t\t\t\t\twriteData(\"monitors\", values);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "C/services/common/plugin.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <plugin.h>\n#include <plugin_manager.h>\n\nPlugin::Plugin(PLUGIN_HANDLE handle)\n{\n  this->handle = handle;\n  this->manager = PluginManager::getInstance();\n  this->info = this->manager->getInfo(handle);\n}\n\nPlugin::~Plugin()\n{\n}\n\nconst PLUGIN_INFORMATION *Plugin::getInfo()\n{\n  return this->info;\n}\n"
  },
  {
    "path": "C/services/common/plugin_manager.cpp",
    "content": "/*\n * Fledge plugin manager.\n *\n * Copyright (c) 2017, 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <cstdio>\n#include <dlfcn.h>\n#include <string.h>\n#include <iostream>\n#include <fstream>\n#include <unistd.h>\n#include <plugin_manager.h>\n#include <binary_plugin_handle.h>\n#include <south_python_plugin_handle.h>\n#include <north_python_plugin_handle.h>\n#include <notification_python_plugin_handle.h>\n#include <filter_python_plugin_handle.h>\n#include <dirent.h>\n#include <sys/param.h>\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/error/error.h\"\n#include \"rapidjson/error/en.h\"\n#include <algorithm>\n#include <config_category.h>\n#include <string_utils.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\nPluginManager *PluginManager::instance = 0;\n\ntypedef PLUGIN_INFORMATION *(*func_t)();\n\n/**\n * PluginManager Singleton implementation\n*/\nPluginManager *PluginManager::getInstance()\n{\n  if (!instance)\n    instance = new PluginManager();\n  return instance;\n}\n\n/**\n * Plugin Manager Constructor\n */\nPluginManager::PluginManager()\n{\n  logger = Logger::getLogger();\n\n  m_pluginType = PLUGIN_TYPE_ID_OTHER;\n}\n\n/**\n * Update plugin info by merging the JSON plugin config over base plugin config\n *\n * @param    info\t\t\tThe plugin info structure\n * @param    json_plugin_name\t\tJSON plugin name\n * @param    json_plugin_defaults\tJSON plugin defaults dict\n * @param    json_plugin_description\tJSON plugin description\n */\nvoid updateJsonPluginConfig(PLUGIN_INFORMATION *info, string json_plugin_name, string json_plugin_defaults, string json_plugin_description)\n{\n\tLogger *logger = Logger::getLogger();\n\tlogger->debug(\"Loading base plugin for JSON plugin, so updating plugin_info structure loaded from base plugin\");\n\tchar *nameStr = new char 
[json_plugin_name.length()+1];\n\tstd::strcpy (nameStr, json_plugin_name.c_str());\n\tinfo->name = nameStr;\n\t\n\t// Update json_plugin_description in plugin->description\n\tDocument doc;\n\tdoc.Parse(json_plugin_defaults.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tlogger->error(\"Parse error in plugin '%s' defaults: %s at %d '%s'\", json_plugin_name.c_str(),\n\t\t\t\t\tGetParseError_En(doc.GetParseError()), (unsigned)doc.GetErrorOffset(),\n                        StringAround(json_plugin_defaults, (unsigned)doc.GetErrorOffset()));\n\t\treturn;\n\t}\n\n\tDocument docBase;\n\tdocBase.Parse(info->config);\n\tif (docBase.HasParseError())\n\t{\n\t\tlogger->error(\"Parse error in plugin '%s' information defaults: %s at %d '%s'\", json_plugin_name.c_str(),\n\t\t\t\t\tGetParseError_En(docBase.GetParseError()), (unsigned)docBase.GetErrorOffset(),\n                        \t\t\tStringAround(info->config, (unsigned)docBase.GetErrorOffset()));\n\t\treturn;\n\t}\n\n\tstatic const char* kTypeNames[] = { \"Null\", \"False\", \"True\", \"Object\", \"Array\", \"String\", \"Number\" };\n\n\tDefaultConfigCategory basePluginCc(\"base\", string(info->config));\n\tlogger->debug(\"Original basePluginCc=%s\", basePluginCc.toJSON().c_str());\n\t\t\n\t// Iterate over overlay config and find same item in base config and update their default from overlay to base config\n\tfor (auto& m : doc.GetObject())\n\t{\n\t\trapidjson::StringBuffer sb;\n\t\trapidjson::Writer<rapidjson::StringBuffer> writer( sb );\n\t\tm.value.Accept( writer );\n\t\tstring s = sb.GetString();\n\t\t//logger->debug(\"m.value.type()=%s, m.value.GetString()=%s\", kTypeNames[m.value.GetType()], s.c_str());\n\n\t\t// find item with name 'm.name.GetString()' in base config\n\t\tif (!docBase.HasMember(m.name.GetString()))\n\t\t{\n\t\t\tlogger->warn(\"Item with name '%s' missing from base config, ignoring it\", m.name.GetString());\n\t\t\tcontinue;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring baseItemValue = 
basePluginCc.getDefault(m.name.GetString());\n\t\t\t//logger->debug(\"Original baseItemValue=%s\", baseItemValue.c_str());\n\t\t\t\n\t\t\tValue::MemberIterator baseItemDefault = docBase[m.name.GetString()].FindMember(\"default\");\n\t\t\tValue::MemberIterator overlayItemDefault = m.value.FindMember(\"default\");\n\t\t\tif(baseItemDefault == docBase.MemberEnd() || overlayItemDefault == m.value.MemberEnd())\n\t\t\t{\n\t\t\t\tlogger->warn(\"Default value for item with name %s missing from base config, ignoring it\", m.name.GetString());\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t//logger->debug(\"baseItemDefault: name=%s, type=%s, value=%s\", \n\t\t\t\t//\t\tbaseItemDefault->name.GetString(), kTypeNames[baseItemDefault->value.GetType()], baseItemDefault->value.GetString());\n\t\t\t\tstring s;\n\t\t\t\tif (overlayItemDefault->value.IsObject())\n\t\t\t\t{\n\t\t\t\t\trapidjson::StringBuffer sb;\n\t\t\t\t\trapidjson::Writer<rapidjson::StringBuffer> writer( sb );\n\t\t\t\t\toverlayItemDefault->value.Accept( writer );\n\t\t\t\t\ts = sb.GetString();\n\t\t\t\t}\n\t\t\t\telse if (overlayItemDefault->value.IsString())\n\t\t\t\t{\n\t\t\t\t\ts = overlayItemDefault->value.GetString();\n\t\t\t\t}\n\t\t\t\telse if (overlayItemDefault->value.IsDouble())\n\t\t\t\t{\n\t\t\t\t\ts = to_string(overlayItemDefault->value.GetDouble());\n\t\t\t\t}\n\t\t\t\telse if (overlayItemDefault->value.IsNumber())\n\t\t\t\t{\n\t\t\t\t\ts = to_string(overlayItemDefault->value.GetInt());\n\t\t\t\t}\n\t\t\t\telse if (overlayItemDefault->value.IsBool())\n\t\t\t\t{\n\t\t\t\t\ts = overlayItemDefault->value.GetBool() ? 
\"true\" : \"false\";\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tlogger->error(\"Unable to handle overlayItemDefault: name=%s, type=%d\",\n\t\t\t\t\toverlayItemDefault->name.GetString(), overlayItemDefault->value.GetType());\n\t\t\t\t}\n\t\t\t\t//logger->debug(\"overlayItemDefault: name=%s, type=%s, value=%s\",\n\t\t\t\t//\toverlayItemDefault->name.GetString(), kTypeNames[overlayItemDefault->value.GetType()], s.c_str());\n\t\t\t\t\n\t\t\t\tbasePluginCc.setDefault(m.name.GetString(), s);\n\t\t\t\t//logger->debug(\"Updated basePluginCc=%s\", basePluginCc.toJSON().c_str());\n\t\t\t\t//logger->printLongString(basePluginCc.itemsToJSON());\n\t\t\t}\n\t\t}\n\t}\n\t\n\t// Update info->config\n\tchar *confStr = new char [basePluginCc.itemsToJSON().length()+1];\n\tstd::strcpy (confStr, basePluginCc.itemsToJSON().c_str());\n\tinfo->config = confStr;\n\t//logger->debug(\"\\\"defaults\\\" updated:\");\n\t//logger->printLongString(info->config);\n\n\t// Update plugin name and description\n\tDocument doc2;\n\tdoc2.Parse(info->config);\n\tif (doc2.HasParseError())\n\t{\n\t\tlogger->error(\"Parse error in information returned from plugin: %s at %d '%s'\", \n\t\t\t\t\tGetParseError_En(doc2.GetParseError()), (unsigned)doc2.GetErrorOffset(),\n                        \t\t\tStringAround(info->config, (unsigned)doc2.GetErrorOffset()));\n\t}\n\tif (doc2.HasMember(\"plugin\"))\n\t{\n\t\tValue::MemberIterator itemValueIter = doc2[\"plugin\"].FindMember(\"default\");\n\t\t//logger->debug(\"plugin->default=%s\", itemValueIter->value.GetString());\n\t\titemValueIter->value.SetString(json_plugin_name.c_str(), doc2.GetAllocator());\n\n\t\tValue::MemberIterator itemValueIter2 = doc2[\"plugin\"].FindMember(\"description\");\n\t\t//logger->debug(\"plugin->description=%s\", itemValueIter2->value.GetString());\n\t\titemValueIter2->value.SetString(json_plugin_description.c_str(), doc2.GetAllocator());\n\t}\n\tStringBuffer buf;\n\tWriter<StringBuffer> writer (buf);\n\tdoc2.Accept (writer);\n\tchar 
*confStr2 = new char [string(buf.GetString()).length()+1];\n\tstd::strcpy (confStr2, buf.GetString());\n\tinfo->config = confStr2;\n\tdelete[] confStr;\n\tlogger->debug(\"Fields updated based on JSON config overlay:\");\n\tlogger->printLongString(info->config);\n}\n\n/**\n * Find a specific plugin in the directories listed in FLEDGE_PLUGIN_PATH\n *\n * @param    name\t\tThe plugin name\n * @param    _type\t\tThe plugin type string\n * @param    _plugin_path\tValue of FLEDGE_PLUGIN_PATH environment variable\n * @param    type\t\tThe plugin type\n * @return   string\t\tThe absolute path of plugin\n */\nstring PluginManager::findPlugin(string name, string _type, string _plugin_path, PLUGIN_TYPE type)\n{\n\tif (type != BINARY_PLUGIN && type != PYTHON_PLUGIN && type != JSON_PLUGIN)\n\t{\n\t\treturn \"\";\n\t}\n\t\n\tstringstream plugin_path(_plugin_path);\n\tstring temp;\n\t\n\t// Tokenizing w.r.t. semicolon ';' \n\twhile(getline(plugin_path, temp, ';')) \n\t{\n\t\tstring path = temp+\"/\"+_type+\"/\"+name+\"/\";\n\t\tswitch(type)\n\t\t{\n\t\t\tcase BINARY_PLUGIN:\n\t\t\t\tpath += \"lib\"+name+\".so\";\n\t\t\t\tbreak;\n\t\t\tcase PYTHON_PLUGIN:\n\t\t\t\tpath += name+\".py\";\n\t\t\t\tbreak;\n\t\t\tcase JSON_PLUGIN:\n\t\t\t\tpath += name+\".json\";\n\t\t\t\tbreak;\n\t\t}\n\t\tif (access(path.c_str(), F_OK) == 0)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"Found plugin @ %s\", path.c_str());\n\t\t\treturn path;\n\t\t}\n\t}\n\tLogger::getLogger()->debug(\"Didn't find plugin : name=%s, _type=%s, _plugin_path=%s\", name.c_str(), _type.c_str(), _plugin_path.c_str());\n\treturn \"\";\n}\n\n/**\n * Set Plugin Type\n */\nvoid PluginManager::setPluginType(tPluginType type)\n{\n\tm_pluginType = type;\n}\n\n/**\n * Load a given plugin\n */\nPLUGIN_HANDLE PluginManager::loadPlugin(const string& _name, const string& type)\n{\nPluginHandle *pluginHandle = NULL;\nPLUGIN_HANDLE hndl;\nchar\t\tbuf[MAXPATHLEN];\n\n\tstring json_plugin_name, json_base_plugin_name, json_plugin_defaults, 
json_plugin_description;\n\tbool json_plugin = false;\n\tstring name(_name);\n\n\tif (pluginNames.find(name) != pluginNames.end())\n\t{\n\t\tif (type.compare(pluginTypes.find(name)->second))\n\t\t{\n\t\t\tlogger->error(\"Plugin %s is already loaded but not the expected type %s\\n\",\n\t\t\t\t\tname.c_str(), type.c_str());\n\t\t\treturn NULL;\n\t\t}\n\t\treturn pluginNames[name];\n\t}\n\n\tconst char *home = getenv(\"FLEDGE_ROOT\");\n\tconst char *plugin_path = getenv(\"FLEDGE_PLUGIN_PATH\");\n\tstring paths(\"\");\n\tif (home)\n\t{\n\t\tpaths += string(home)+\"/plugins\";\n\t\tpaths += \";\"+string(home)+\"/python/fledge/plugins\";\n\t}\n\tif (plugin_path)\n\t\tpaths += (home ? \";\" : \"\")+string(plugin_path);\n  \n\t/*\n\t * Find and try to load the plugin that is described via a JSON file\n\t */\n\tstring path = findPlugin(name, type, paths, JSON_PLUGIN);\n\tstrncpy(buf, path.c_str(), sizeof(buf));\n\tif (buf[0] && access(buf, F_OK|R_OK) == 0)\n\t{\n\t\t// read config from JSON file\n\t\tifstream ifs(buf, ios::in);\n\t\n\t\tstd::stringstream sstr;\n\t\tsstr << ifs.rdbuf();\n\t\tstring json=sstr.str();\n\t\tjson.erase(remove(json.begin(), json.end(), '\\t'), json.end());\n\t\tjson.erase(remove(json.begin(), json.end(), '\\n'), json.end());\n\t\n\t\t// parse JSON document\n\t\tDocument doc;\n\t\tdoc.Parse(json.c_str());\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Parse error for JSON plugin config in '%s': %s at %d\", name.c_str(),\n\t\t\t\t\tGetParseError_En(doc.GetParseError()), (unsigned)doc.GetErrorOffset());\n\t\t\treturn NULL;\n\t\t}\n\t\t\n\t\tif (!(doc.HasMember(\"name\") && doc[\"name\"].IsString() &&\n\t\t\tdoc.HasMember(\"defaults\") && doc[\"defaults\"].IsObject() &&\n\t\t\tdoc.HasMember(\"connection\") && doc[\"connection\"].IsString()))\n\t\t{\n\t\t\tLogger::getLogger()->error(\"JSON config for plugin @ '%s' is missing/misconfigured, exiting...\", buf);\n\t\t\treturn NULL;\n\t\t}\n\t\n\t\tjson_plugin_name = 
doc[\"name\"].GetString();\n\t\tjson_base_plugin_name = doc[\"connection\"].GetString();\n\t\n\t\tif (doc.HasMember(\"description\") && doc[\"description\"].IsString())\n\t\t{\n\t\t\tjson_plugin_description = doc[\"description\"].GetString();\n\t\t}\n\t\tif (doc[\"defaults\"].IsObject())\n\t\t{\n\t\t\trapidjson::StringBuffer sb;\n\t\t\trapidjson::Writer<rapidjson::StringBuffer> writer( sb );\n\t\t\tdoc[\"defaults\"].Accept( writer );\n\t\t\tjson_plugin_defaults = sb.GetString();\n\t\t}\n\t\n\t\t// set plugin name so that base plugin can be loaded next\n\t\tjson_plugin = true;\n\t\tname = json_base_plugin_name;\n\t\tlogger->debug(\"json_plugin=%s, json_plugin_name=%s, json_base_plugin_name=%s, json_plugin_description=%s, json_plugin_defaults=%s\", \n\t\t\tjson_plugin?\"true\":\"false\", json_plugin_name.c_str(), json_base_plugin_name.c_str(), json_plugin_description.c_str(), json_plugin_defaults.c_str());\n\t}\n  \n\t/*\n\t * Find and try to load the dynamic library that is the plugin\n\t */\n\tpath = findPlugin(name, type, paths, BINARY_PLUGIN);\n\tstrncpy(buf, path.c_str(), sizeof(buf));\n\tif (buf[0] && access(buf, F_OK|R_OK) == 0)\n\t{\n\t\tif (m_pluginType == PLUGIN_TYPE_ID_STORAGE)\n\t\t{\n\t\t\tpluginHandle = new BinaryPluginHandle(name.c_str(), buf, PLUGIN_TYPE_ID_STORAGE);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tpluginHandle = new BinaryPluginHandle(name.c_str(), buf);\n\t\t}\n\n\t\thndl = pluginHandle->getHandle();\n\t\tif (hndl != NULL)\n\t\t{\n\t\t\tfunc_t infoEntry = (func_t)pluginHandle->GetInfo();\n\t\t\tif (infoEntry == NULL)\n\t\t\t{\n\t\t\t\t// Unable to find plugin_info entry point\n\t\t\t\tlogger->error(\"C plugin %s does not support plugin_info entry point.\\n\", name.c_str());\n\t\t\t\tdelete pluginHandle;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tPLUGIN_INFORMATION *info = (PLUGIN_INFORMATION *)(*infoEntry)();\n\n\t\t\tlogger->debug(\"%s:%d: name=%s, type=%s, default config=%s\", __FUNCTION__, __LINE__, info->name, info->type, info->config);\n\t  
\n\t\t\tif (strcmp(info->type, type.c_str()) != 0)\n\t\t\t{\n\t\t\t\t// Log error, incorrect plugin type\n\t\t\t\tlogger->error(\"C plugin %s is not of the expected type %s, it is of type %s.\\n\",\n\t\t\t\t\t\t\tname.c_str(), type.c_str(), info->type);\n\t\t\t\tdelete pluginHandle;\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\tif (json_plugin)\n\t\t\t{\n\t\t\t\tupdateJsonPluginConfig(info, json_plugin_name, json_plugin_defaults, json_plugin_description);\n\t\t\t}\n\t  \n\t\t\tplugins.push_back(pluginHandle);\n\t\t\tpluginNames[name] = hndl;\n\t\t\tpluginTypes[name] = type;\n\t\t\tpluginImplTypes[hndl] = BINARY_PLUGIN;\n\t\t\tpluginInfo[hndl] = info;\n\n\t\t\tpluginHandleMap[hndl] = pluginHandle;\n\t\t\tlogger->debug(\"%s:%d: Added entry in pluginHandleMap={%p, %p}\", __FUNCTION__, __LINE__, hndl, pluginHandle);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tlogger->error(\"PluginManager: Failed to load C plugin %s in %s: %s.\",\n\t\t\t\tname.c_str(), buf, dlerror());\n\t\t}\n\t\treturn hndl;\n\t}\n\n\t// look for and load python plugin with given name\n\tpath = findPlugin(name, type, paths, PYTHON_PLUGIN);\n\tstrncpy(buf, path.c_str(), sizeof(buf));\n\tif (buf[0] && access(buf, F_OK|R_OK) == 0)\n\t{\n\t\t// is it Notification Rule Python plugin ?\n\t\tif (type.compare(PLUGIN_TYPE_NOTIFICATION_RULE) == 0 ||\n\t\t\ttype.compare(PLUGIN_TYPE_NOTIFICATION_DELIVERY) == 0)\n\t\t{\n\t\t\tpluginHandle = new NotificationPythonPluginHandle(name.c_str(), buf);\n\t\t}\n\t\telse if (type.compare(PLUGIN_TYPE_FILTER) == 0)\n\t\t{\n\t\t\tpluginHandle = new FilterPythonPluginHandle(name.c_str(), buf);\n\t\t}\n\t\telse if (type.compare(PLUGIN_TYPE_NORTH) == 0)\n\t\t{\n\t\t\tpluginHandle = new NorthPythonPluginHandle(name.c_str(), buf);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tpluginHandle = new SouthPythonPluginHandle(name.c_str(), buf);\n\t\t}\n\n\t\thndl = pluginHandle->getHandle();\n\n\t\tif (hndl != NULL)\n\t\t{\n\t\t\tfunc_t infoEntry = (func_t)pluginHandle->GetInfo();\n\t\t\tif (infoEntry == 
NULL)\n\t\t\t{\n\t\t\t\t// Unable to find plugin_info entry point\n\t\t\t\tlogger->error(\"Python plugin %s does not support plugin_info entry point.\\n\", name.c_str());\n\t\t\t\tdelete pluginHandle;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tPLUGIN_INFORMATION *info = (PLUGIN_INFORMATION *)(*infoEntry)();\n\t\t\tif (!info)\n\t\t\t{\n\t\t\t\t// Unable to get data from plugin_info entry point\n\t\t\t\tlogger->error(\"Python plugin %s cannot get data from plugin_info entry point.\\n\", name.c_str());\n\t\t\t\tdelete pluginHandle;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t \n\t\t\tif (strcmp(info->type, type.c_str()) != 0)\n\t\t\t{\n\t\t\t\t// Log error, incorrect plugin type\n\t\t\t\tlogger->error(\"Python plugin %s is not of the expected type %s, it is of type %s.\\n\",\n\t\t\t\t\t\tname.c_str(),\n\t\t\t\t\t\ttype.c_str(),\n\t\t\t\t\t\tinfo->type);\n\t\t\t\tdelete pluginHandle;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tif (json_plugin)\n\t\t\t{\n\t\t\t\tupdateJsonPluginConfig(info,\n\t\t\t\t\t\t\tjson_plugin_name,\n\t\t\t\t\t\t\tjson_plugin_defaults,\n\t\t\t\t\t\t\tjson_plugin_description);\n\t\t\t}\n\t \n\t\t\tplugins.push_back(pluginHandle);\n\t\t\tpluginNames[name] = hndl;\n\t\t\tpluginTypes[name] = type;\n\t\t\tpluginImplTypes[hndl] = PYTHON_PLUGIN;\n\t\t\tpluginInfo[hndl] = info;\n\t\t\tpluginHandleMap[hndl] = pluginHandle;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tlogger->error(\"PluginManager: Failed to load python plugin %s in %s\",\n\t\t\t\t\tname.c_str(),\n\t\t\t\t\tbuf);\n\t\t}\n\t\treturn hndl;\n  \t}\n  \n\tif (json_plugin) // if base plugin had been found, this function would have returned already\n\t{\n\t  \tlogger->error(\"PluginManager: Could not load base plugin '%s' for JSON plugin '%s'\", json_base_plugin_name.c_str(), json_plugin_name.c_str());\n\t\treturn NULL;\n\t}\n  \n\tlogger->error(\"PluginManager: Failed to load plugin '%s' as any of the recognised types. 
Check that the plugin exists and the plugin name and installation directory match\", name.c_str());\n\treturn NULL;\n}\n\n/**\n * Find a loaded plugin by name.\n */\nPLUGIN_HANDLE PluginManager::findPluginByName(const string& name)\n{\n  if (pluginNames.find(name) == pluginNames.end())\n  {\n    return NULL;\n  }\n  return pluginNames.find(name)->second;\n}\n\n/**\n * Find a loaded plugin by type\n */\nPLUGIN_HANDLE PluginManager::findPluginByType(const string& type)\n{\n  if (pluginNames.find(type) == pluginNames.end())\n  {\n    return NULL;\n  }\n  return pluginNames.find(type)->second;\n}\n\n/**\n * Return the information for a named plugin\n */\nPLUGIN_INFORMATION *PluginManager::getInfo(const PLUGIN_HANDLE handle)\n{\n  if (pluginInfo.find(handle) == pluginInfo.end())\n  {\n    return NULL;\n  }\n  return pluginInfo.find(handle)->second;\n}\n\n/**\n * Resolve a symbol within the plugin\n */\nPLUGIN_HANDLE PluginManager::resolveSymbol(PLUGIN_HANDLE handle, const string& symbol)\n{\n  if (pluginHandleMap.find(handle) == pluginHandleMap.end())\n  {\n  \tlogger->warn(\"%s:%d: Cannot find PLUGIN_HANDLE in pluginHandleMap: returning NULL\", __FUNCTION__, __LINE__);\n    return NULL;\n  }\n  return pluginHandleMap.find(handle)->second->ResolveSymbol(symbol.c_str());\n}\n\n/**\n * Get the installed plugins in the given plugin type\n * subdirectory of \"plugins\" under FLEDGE_ROOT\n * Plugin type is one of:\n * south, north, filter, notificationRule, notificationDelivery\n *\n * @param    type\t\tThe plugin type\n * @param    plugins\t\tThe output plugin list name to fill\t\n */\nvoid PluginManager::getInstalledPlugins(const string& type,\n\t\t\t\t\tlist<string>& plugins)\n{\n\tchar *home = getenv(\"FLEDGE_ROOT\");\n\tchar *plugin_path = getenv(\"FLEDGE_PLUGIN_PATH\");\n\tstring paths(\"\");\n\tif (home)\n\t{\n\t\t// Binary C plugins\n\t\tpaths += string(home)+\"/plugins\";\n\n\t\t// Python Plugins\n\t\tpaths += 
\";\"+string(home)+\"/python/fledge/plugins\";\n\t}\n\tif (plugin_path)\n\t{\n\t\tpaths += (home?\";\":\"\")+string(plugin_path);\n\t}\n\n\tstringstream _paths(paths);\n\t\n\tstring temp;\n\t// Tokenize w.r.t. semicolon ';'\n\twhile(getline(_paths, temp, ';'))\n\t{\n\t\tstruct dirent *entry;\n\t\tDIR *dp;\n\t\tstring path = temp + \"/\" + type + \"/\";\n\n\t\t// Open the plugins dir/type\n\t\tdp = opendir(path.c_str());\n\n\t\tif (!dp)\n\t\t{\n\t\t\t// Can not open specified dir path\n\t\t\tchar msg[128];\n\t\t\tchar* ret = strerror_r(errno, msg, 128);\n\t\t\tlogger->warn(\"Can not access plugin directory %s: %s\",\n\t\t\t\t      path.c_str(),\n\t\t\t\t      ret);\n\t\t\tcontinue;\n\t\t}\n\n\t\t/**\n\t\t * Get all sub directory names in path:\n\t\t * path = plugins/filter/\n\t\t *     delta\n\t\t *     scale\n\n\t\t * Plugin filename is libdelta.so, libscale.so\n\t\t * Plugin name is the subdirecory name in path\n\t\t * Skip directory starting with '_' or \n\t\t * with name 'common'\n\t\t */\n\t\twhile ((entry = readdir(dp)))\n\t\t{\n\t\t\tif (strcmp (entry->d_name, \"..\") != 0 &&\n\t\t\t    strcmp (entry->d_name, \".\") != 0 && \n\t\t\t\tstrcmp (entry->d_name, \"common\") != 0 &&\n\t\t\t\tentry->d_name[0] != '_')\n\t\t\t{\n\t\t\t\tstruct stat stbuf;\n\t\t\t\tbool is_dir(false);\n\t\t\t\tif (stat((path + entry->d_name).c_str(), &stbuf) != 0) {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tis_dir = S_ISDIR(stbuf.st_mode);\n\t\t\t\tif (!is_dir) {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\t/* check for duplicate names to avoid\n\t\t\t\t\tmultiple loadPlugin calls\n\t\t\t\t*/ \n\t\t\t\tbool is_duplicate = false;\n\t\t\t\tfor (const auto& loadedPlugin : plugins)\n\t\t\t\t{\n\t\t\t\t\tif (loadedPlugin == entry->d_name)\n\t\t\t\t\t{\n\t\t\t\t\t\tis_duplicate = true;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (!is_duplicate) \n\t\t\t\t{\n\t\t\t\t\t// Load plugin, given its name: the directory name\n\t\t\t\t\tloadPlugin(entry->d_name, type);\n\t\t\t\t\t// Add name 
to output list\n\t\t\t\t\tplugins.push_back(entry->d_name);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tclosedir(dp);\n\t}\n}\n\n/**\n * Return a list of plugins matching the criteria\n * of plugin type and plugin flags\n *\n * @param type          The plugin type to match\n * @param flags         A bitmask of flags to match\n * @return vector<string>       A list of matching plugin names\n */\nstd::vector<string> PluginManager::getPluginsByFlags(const std::string& type, \n\t\t\t\t\t\t\t\t\tunsigned int flags) \n{\n\t// Plugins matching type and flag bits\n\tstd::vector<std::string> matchingPlugins;\n\t\n\t// Get list of installed plugins of given type\n\tstd::list<std::string> plugins;\n\tgetInstalledPlugins(type, plugins);\n\t\n\t/* Iterate list of installed plugins and\n\t\tmatch plugin 'options' with passed \n\t\tplugin flags\n\t*/\n\tfor (auto &pName: plugins) \n\t{\n\t\t// Fetch loaded plugin handle\n\t\tauto pluginHandle = pluginNames.find(pName);\n\t\tunsigned int pluginOptions = 0;\n\t\tif (pluginHandle != pluginNames.end()) {\n\t\t\tpluginOptions = getInfo(pluginHandle->second)->options;\n\t\t}\n\t\t// Match bit fields corresponding to loaded plugins\n\t\tif ((flags & pluginOptions) == flags) {\n\t\t\tmatchingPlugins.push_back(pName);\n\t\t}\n\t}\n\t\n\treturn matchingPlugins;\n}\n"
  },
  {
    "path": "C/services/common/service_security.cpp",
    "content": "#include <config_category.h>\n#include <string>\n#include <management_client.h>\n#include <service_handler.h>\n#include <config_handler.h>\n#include <server_http.hpp>\n#include <rapidjson/error/en.h>\n#include <acl.h>\n\n#define TO_STRING(...) DEFER(TO_STRING_)(__VA_ARGS__)\n#define DEFER(x) x\n#define TO_STRING_(...) #__VA_ARGS__\n#define QUOTE(...) TO_STRING(__VA_ARGS__)\n\n#define DELTA_SECONDS_BEFORE_TOKEN_EXPIRATION 120\n\nusing namespace std;\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\n\nstatic void bearer_token_refresh_thread(void *data);\n\n/**\n * Initialise m_mgtClient object to NULL\n */\nManagementClient *ServiceAuthHandler::m_mgtClient = NULL;\n\n/**\n * Create \"${service}Security\" category with empty content\n *\n * @param mgtClient\tThe management client object\n * @param dryRun\tDryrun so do not register interest in the category\n * @return\t\tTrue on success, False otherwise\n */\nbool ServiceAuthHandler::createSecurityCategories(ManagementClient* mgtClient, bool dryRun)\n{\n\tstring securityCatName = m_name + string(\"Security\");\n\tDefaultConfigCategory defConfigSecurity(securityCatName, string(\"{}\"));\n\n\t// All services add 'AuthenticatedCaller' item\n\t// Add AuthenticatedCaller item, set to \"false\"\n\tdefConfigSecurity.addItem(\"AuthenticatedCaller\",\n\t\t\t\t\"Security enable parameter\",\n\t\t\t\t\"boolean\",\n\t\t\t\t// For dispatcher set default = true\n\t\t\t\tthis->getType() == \"Dispatcher\" ? 
\"true\" : \"false\", // Default\n\t\t\t\t\"false\"); // Value\n\tdefConfigSecurity.setItemDisplayName(\"AuthenticatedCaller\",\n\t\t\t\t\"Enable caller authorisation\");\n\n\tdefConfigSecurity.addItem(\"ACL\",\n\t\t\t\t\"Service ACL for \" + m_name,\n\t\t\t\t\"ACL\",\n\t\t\t\t\"\",  // Default\n\t\t\t\t\"\"); // Value\n\tdefConfigSecurity.setItemDisplayName(\"ACL\",\n\t\t\t\t\"Service ACL\");\n\n\tdefConfigSecurity.setDescription(m_name + \" Security\");\n\n\t// Create/Update category name (we pass keep_original_items=true)\n\tmgtClient->addCategory(defConfigSecurity, true);\n\n\t// Add this service under 'm_name' parent category\n\tvector<string> children1;\n\tchildren1.push_back(securityCatName);\n\tmgtClient->addChildCategories(m_name, children1);\n\n\t// Get new or merged category content\n\tm_security = mgtClient->getCategory(m_name + \"Security\");\n\n\tthis->setInitialAuthenticatedCaller();\n\n\t// Register for security category content changes\n\tConfigHandler *configHandler = ConfigHandler::getInstance(mgtClient);\n\tif (configHandler == NULL)\n\t{\n\t\tLogger::getLogger()->error(\"Failed to get access to ConfigHandler for %s\",\n\t\t\t\t\tm_name.c_str());\n\t\treturn false;\n\t}\n\n\tif (!dryRun)\n\t{\n\t\t// Register for content change notification\n\t\tconfigHandler->registerCategory(this, m_name + \"Security\");\n\t}\n\n\t// Load ACL given the value of 'acl' type item: i.e.\n\tstring acl_name = m_security.getValue(\"ACL\");\n\tif (!acl_name.empty())\n\t{\n\t\tm_service_acl = m_mgtClient->getACL(acl_name);\n\t}\n\n\t// Start thread for automatic bearer token refresh, before expiration\n\tif (this->getType() != \"Southbound\" && dryRun == false)\n\t{\n\t\tm_refreshThread = new thread(bearer_token_refresh_thread, this);\n\t}\n\n\treturn true;\n}\n\n/**\n * Update the class objects from security category content update\n *\n * @param category\tThe service category name\n * @return\t\tTrue on success, False otherwise\n */\n \nbool 
ServiceAuthHandler::updateSecurityCategory(const string& category)\n{\n\t// Lock config\n\tlock_guard<mutex> cfgLock(m_mtx_config);\n\n\tm_security = ConfigCategory(m_name + \"Security\", category);\n\tbool acl_set = false;\n\n\t// Note: as per FOGL-6612\n\t// Only AuthenticatedCaller will be handled in Security category change notification\n\t// ACL update is made via security change service handler \n\t// Check for AuthenticatedCaller main switch\n\tif (m_security.itemExists(\"AuthenticatedCaller\"))\n\t{\n\t\tstring val = m_security.getValue(\"AuthenticatedCaller\");\n\t\tif (val[0] == 't' || val[0] == 'T')\n\t\t{\n\t\t\tacl_set = true;\n\t\t}\n\t}\n\n\tm_authentication_enabled = acl_set;\n\n\tLogger::getLogger()->debug(\"updateSecurityCategory called, switch val %d\", acl_set);\n\n\treturn acl_set;\n}\n\n/**\n * Set initial value of enabled authentication\n */\nvoid ServiceAuthHandler::setInitialAuthenticatedCaller()\n{\n\tbool acl_set = false;\n\tif (m_security.itemExists(\"AuthenticatedCaller\"))\n\t{\n\t\tstring val = m_security.getValue(\"AuthenticatedCaller\");\n\t\tLogger::getLogger()->debug(\"This service '%s' has AuthenticatedCaller item %s\",\n\t\t\tm_name.c_str(),\n\t\t\tval.c_str());\n\t\tif (val[0] == 't' || val[0] == 'T')\n\t\t{\n\t\t\tacl_set = true;\n\t\t}\n\t\tthis->setAuthenticatedCaller(acl_set);\n\t}\n}\n\n/**\n * Set enabled authentication value\n *\n * @param enabled\tThe enable/disable flag to set\n */\nvoid ServiceAuthHandler::setAuthenticatedCaller(bool enabled)\n{\n\tlock_guard<mutex> guard(m_mtx_config);\n\tm_authentication_enabled = enabled;\n}\n\n/**\n * Return enabled authentication value\n *\n * @return\tTrue on success, False otherwise\n */\nbool ServiceAuthHandler::getAuthenticatedCaller()\n{\n\tlock_guard<mutex> guard(m_mtx_config);\n\treturn m_authentication_enabled;\n}\n\n/**\n * Verify URL path against URL array in security configuration\n * If array item value has ACL property a service name/type check is added\n *\n * 
@param   path\tThe requested service HTTP resource\n * @param   serviceName\tThe serviceName to check\n * @param   serviceType\tThe serviceType to check\n * @return\t\tTrue if the resource access has been granted, false otherwise\n */\nbool ServiceAuthHandler::verifyURL(const string& path,\n\t\t\t\tconst string& serviceName,\n\t\t\t\tconst string& serviceType)\n{\n\t// Check config with lock\n\tunique_lock<mutex> cfgLock(m_mtx_config);\n\n\t// Check m_security category item ACL is set\n\tstring acl;\n\tif (this->m_security.itemExists(\"ACL\"))\n\t{\n\t\tacl = this->m_security.getValue(\"ACL\");\n\t}\n\tcfgLock.unlock();\n\n\tif (acl.empty())\n\t{\n\t\tLogger::getLogger()->debug(\"verifyURL '%s', type '%s', \"\n\t\t\t\t\t\"the ACL is not set: allow any URL from any service type\",\n\t\t\t\t\tserviceName.c_str(),\n\t\t\t\t\tserviceType.c_str());\n\t\treturn true;\n\t}\n\n\tconst vector<ACL::UrlItem>& arrayURL = this->m_service_acl.getURL();\n\tif (arrayURL.size() == 0)\n\t{\n\t\tLogger::getLogger()->debug(\"verifyURL '%s', type '%s', \"\n\t\t\t\t\t\"the URL array is empty: allow any URL from any service type\",\n\t\t\t\t\tserviceName.c_str(),\n\t\t\t\t\tserviceType.c_str());\n\t\treturn true;\n\t}\n\n\tif (arrayURL.size() > 0)\n\t{\n\t\tbool typeMatched = false;\n\t\tbool URLMatched = false;\n\n\t\t// Check URL value\n\t\tfor (auto it = arrayURL.begin(); it != arrayURL.end(); ++it)\n\t\t{\n\t\t\tstring configURL = (*it).url;\n\t\t\t// Request path matches configured URLs\n\t\t\tif (configURL != \"\" && configURL == path)\n\t\t\t{\n\t\t\t\tURLMatched = true;\n\t\t\t}\n\n\t\t\tvector<ACL::KeyValueItem> aclServices = (*it).acl;\n\t\t\tif (URLMatched && aclServices.size() == 0)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->debug(\"verifyURL '%s', type '%s', \"\n\t\t\t\t\t\"the URL '%s' has no ACL : allow any service type\",\n\t\t\t\t\tserviceName.c_str(),\n\t\t\t\t\tserviceType.c_str(),\n\t\t\t\t\tconfigURL.c_str());\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tfor (auto iS = aclServices.begin();\n\t\t\t    \t  iS 
!= aclServices.end();\n\t\t\t\t  ++iS)\n\t\t\t{\n\t\t\t\tif ((*iS).key == \"type\" && (*iS).value == serviceType)\n\t\t\t\t{\n\t\t\t\t\ttypeMatched = true;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tLogger::getLogger()->debug(\"verify URL path '%s', type '%s': \"\n\t\t\t\t\t\"result URL %d, result type %d\",\n\t\t\t\t\tpath.c_str(),\n\t\t\t\t\tserviceType.c_str(),\n\t\t\t\t\tURLMatched,\n\t\t\t\t\ttypeMatched);\n\n\t\treturn URLMatched || typeMatched;\n\t}\n\n\treturn false;\n}\n\n/**\n * Verify service caller name and type against ACL array in security configuration\n *\n * @param   sName\tThe caller service name\n * @param   sType\tThe caller service type (Northbound, Southbound, Notification, etc)\n * @return\t\tTrue if the resource access has been granted, false otherwise\n */\nbool ServiceAuthHandler::verifyService(const string& sName, const string &sType)\n{\n\t// Check config with lock\n\tunique_lock<mutex> cfgLock(m_mtx_config);\n\n\t// Check m_security category item ACL is set\n\tstring acl;\n\tif (this->m_security.itemExists(\"ACL\"))\n\t{\n\t\tacl = this->m_security.getValue(\"ACL\");\n\t}\n\tcfgLock.unlock();\n\n\tif (acl.empty())\n\t{\n\t\tLogger::getLogger()->debug(\"verifyService '%s', type '%s', \"\n\t\t\t\t\t\"the ACL is not set: allow any service\",\n\t\t\t\t\tsName.c_str(),\n\t\t\t\t\tsType.c_str());\n\t\treturn true;\n\t}\n\n\tvector<ACL::KeyValueItem> aclServices = this->m_service_acl.getService();\n\tif (aclServices.size() == 0)\n\t{\n\t\tLogger::getLogger()->debug(\"verifyService '%s', type '%s', \" \\\n\t\t\t\t\t\"has an empty ACL service array: allow any service\",\n\t\t\t\t\tsName.c_str(),\n\t\t\t\t\tsType.c_str());\n\t\treturn true;\n\t}\n\n\tif (aclServices.size() > 0)\n\t{\n\t\tbool serviceMatched = false;\n\t\tbool typeMatched = false;\n\n\t\tfor (auto it = aclServices.begin(); it != aclServices.end(); ++it)\n\t\t{\n\t\t\tif ((*it).key == \"name\" && (*it).value == sName)\n\t\t\t{\n\t\t\t\tserviceMatched = 
true;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif ((*it).key == \"type\" && (*it).value == sType)\n\t\t\t{\n\t\t\t\ttypeMatched = true;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\tLogger::getLogger()->debug(\"verify service '%s', type '%s': \"\n\t\t\t\t\t\"result service %d, result type %d\",\n\t\t\t\t\tsName.c_str(),\n\t\t\t\t\tsType.c_str(),\n\t\t\t\t\tserviceMatched,\n\t\t\t\t\ttypeMatched);\n\n\t\treturn serviceMatched || typeMatched;\n\t}\n\n\treturn false;\n}\n\n/**\n * Authentication Middleware for PUT methods\n *\n * The routine first checks whether the service is configured with authentication\n *\n * The access bearer token is then verified against the FogLAMP core API endpoint\n * JWT token claims are passed to the verifyURL and verifyService routines\n *\n * If access is granted the input funcPUT function is called,\n * otherwise an error response is sent to the client\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n * @param funcPUT\tThe function to call if access is granted\n */\nvoid ServiceAuthHandler::AuthenticationMiddlewarePUT(shared_ptr<HttpServer::Response> response,\n\t\t\t\tshared_ptr<HttpServer::Request> request,\n        \t\t\tstd::function<void(\n        \t\t        shared_ptr<HttpServer::Response>,\n        \t\t        shared_ptr<HttpServer::Request>)> funcPUT)\n{\n\tstring callerName;\n\tstring callerType;\n\n\tfor(auto &field : request->header)\n\t{\n\t\tif (field.first == \"Service-Orig-From\")\n\t\t{\n\t\t\tcallerName = field.second;\n\t\t}\n\t\tif (field.first == \"Service-Orig-Type\")\n\t\t{\n\t\t\tcallerType = field.second;\n\t\t}\n\t}\n\n\t// Get authentication enabled value\n\tbool acl_set = this->getAuthenticatedCaller();\n\tLogger::getLogger()->debug(\"This service '%s' has AuthenticatedCaller flag set %d \"\n\t\t\t\"caller service is %s, type %s\",\n\t\t\tthis->getName().c_str(),\n\t\t\tacl_set,\n\t\t\tcallerName.c_str(),\n\t\t\tcallerType.c_str());\n\n\t// Check authentication\n\tif 
(acl_set)\n\t{\n\t\t// Verify token via Fledge management core POST API call\n\t\t// we do not need token claims here\n\t\tbool ret = m_mgtClient->verifyAccessBearerToken(request);\n\t\tif (!ret)\n\t\t{\n\t\t\tstring msg = \"invalid service bearer token\";\n\t\t\tstring responsePayload = \"{ \\\"error\\\" : \\\"\" + msg + \"\\\" }\";\n\t\t\tLogger::getLogger()->error(msg.c_str());\n\t\t\tthis->respond(response,\n\t\t\t\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t\tresponsePayload);\n\n\t\t\treturn;\n\t\t}\n\n\t\t// Check whether caller name and type are passed\n\t\tif (callerName.empty() && callerType.empty())\n\t\t{\n\t\t\tstring msg = \"authorisation not granted \" \\\n\t\t\t\t\"to this service: missing caller name and type\";\n\t\t\tstring responsePayload = \"{ \\\"error\\\" : \\\"\" + msg + \"\\\" }\";\n\t\t\tLogger::getLogger()->error(msg.c_str());\n\n\t\t\tthis->respond(response,\n\t\t\t\t\tSimpleWeb::StatusCode::client_error_unauthorized,\n\t\t\t\t\tresponsePayload);\n\t\t\treturn;\n\t\t}\n\n\t\t// Dispatcher service is always allowed to send control requests\n\t\t// to south service\n\t\t//\n\t\t// Checking for valid origin service caller (name/type) i.e\n\t\t// N1_HTTP/Northbound\n\t\t// NOTS/Notification\n\t\tbool valid_service = this->verifyService(callerName, callerType);\n\t\tif (!valid_service)\n\t\t{\n\t\t\tstring msg = \"authorisation not granted to this service\";\n\t\t\tstring responsePayload = \"{ \\\"error\\\" : \\\"\" + msg + \"\\\" }\";\n\t\t\tLogger::getLogger()->error(msg.c_str());\n\t\t\tthis->respond(response,\n\t\t\t\t\tSimpleWeb::StatusCode::client_error_unauthorized,\n\t\t\t\t\tresponsePayload);\n\t\t\treturn;\n\t\t}\n\n\t\t// Check URLS\n\t\tbool access_granted = this->verifyURL(request->path,\n\t\t\t\t\t\tcallerName,\n\t\t\t\t\t\tcallerType);\n\t\tif (!access_granted)\n\t\t{\n\t\t\tstring msg = \"authorisation not granted to this resource\";\n\t\t\tstring responsePayload = \"{ \\\"error\\\" : \\\"\" + msg + \"\\\" 
}\";\n\t\t\tLogger::getLogger()->error(msg.c_str());\n\t\t\tthis->respond(response,\n\t\t\t\t\tSimpleWeb::StatusCode::client_error_unauthorized,\n\t\t\t\t\tresponsePayload);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t// Call PUT endpoint routine\n\tfuncPUT(response, request);\n}\n\n/**\n * Authentication Middleware ACL check\n *\n * serviceName, serviceType and url (request->path)\n * are cheked with verifyURL and verifyService routines\n *\n * If access is granted return true\n * otherwise error response is sent to the client and return is false\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n * @param callerName\tThe caller service name to check\n * @param callerType\tThe caller service type to check\n * @return\t\tTrue on success\n * \t\t\tFalse otherwise with server reply error\n */\nbool ServiceAuthHandler::AuthenticationMiddlewareACL(shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\tshared_ptr<HttpServer::Request> request,\n\t\t\t\t\t\tconst string& callerName,\n\t\t\t\t\t\tconst string& callerType)\n{\n\t// Check for valid service caller (name, type)\n\tbool valid_service = this->verifyService(callerName, callerType);\n\tif (!valid_service)\n\t{\n\t\tstring msg = \"authorisation not granted to this service\";\n\t\tstring responsePayload = \"{ \\\"error\\\" : \\\"\" + msg + \"\\\" }\";\n\t\tLogger::getLogger()->error(msg.c_str());\n\n\t\t// Error reply to client\n\t\tthis->respond(response,\n\t\t\t\tSimpleWeb::StatusCode::client_error_unauthorized,\n\t\t\t\tresponsePayload);\n\t\treturn false;\n\t}\n\n\t// Check URLS\n\tbool access_granted = this->verifyURL(request->path, callerName, callerType);\n\tif (!access_granted)\n\t{\n\t\tstring msg = \"authorisation not granted to this resource\";\n\t\tstring responsePayload = \"{ \\\"error\\\" : \\\"\" + msg + \"\\\" }\";\n\t\tLogger::getLogger()->error(msg.c_str());\n\n\t\t// Error reply to 
client\n\t\tthis->respond(response,\n\t\t\t\tSimpleWeb::StatusCode::client_error_unauthorized,\n\t\t\t\tresponsePayload);\n\t\treturn false;\n\t}\n\n\treturn true;\n}\n\n/**\n * Authentication Middleware for Dispatcher service\n *\n * Routine first check whether the service is configured with authentication\n *\n * Access bearer token is then verified against FogLAMP core API endpoint\n * token claims 'sub' and 'aud' along with request are passed to\n * verifyURL and verifyService routines\n *\n * If access is granted then return map with token claims \n * otherwise error response is sent to the client\n * and empty map is returned.\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n * @return\t\tTrue on success\n *\t\t\tFalse on errors\n */\nbool ServiceAuthHandler::AuthenticationMiddlewareCommon(shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\tshared_ptr<HttpServer::Request> request,\n\t\t\t\t\t\t\tstring& callerName,\n\t\t\t\t\t\t\tstring& callerType)\n{\n\t// Get token from HTTP request\n\tBearerToken bToken(request);\n\n\t// Verify token via Fledge management core POST API call and fill tokenClaims map\n\tbool ret = m_mgtClient->verifyAccessBearerToken(bToken);\n\tif (!ret)\n\t{\n\t\tstring msg = \"invalid service bearer token\";\n\t\tstring responsePayload = \"{ \\\"error\\\" : \\\"\" + msg + \"\\\" }\";\n\t\tLogger::getLogger()->error(msg.c_str());\n\n\t\t// Error reply to client\n\t\tthis->respond(response,\n\t\t\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\tresponsePayload);\n\n\t\t// Failure\n\t\treturn false;\n\t}\n\n\t// Check for valid service caller (name, type) and URLs\n\tbool check = this->AuthenticationMiddlewareACL(response,\n\t\t\t\t\t\trequest,\n\t\t\t\t\t\tbToken.getSubject(),\n\t\t\t\t\t\tbToken.getAudience());\n\t// Check ACL result\n\tif (!check)\n\t{\n\t\t// Failure\n\t\treturn false;\n\t}\n\n\t// Set caller name & type\n\tcallerName = bToken.getSubject();\n\tcallerType = 
bToken.getAudience();\n\n\t// Success\n\treturn true;\n}\n\n/**\n * Refresh the bearer token of the running service\n * This routine is run by a thread started in\n * createSecurityCategories.\n *\n * After sleeping until the 'exp' time of the current token\n * a new one is requested from the core via the\n * token_refresh API endpoint\n */\nvoid ServiceAuthHandler::refreshBearerToken()\n{\n\tLogger::getLogger()->debug(\"Bearer token refresh thread starts for service '%s'\",\n\t\t\t\tthis->getName().c_str());\n\n\tint max_retries = 10;\n\ttime_t expires_in = 0;\n\tint k = 0;\n\tbool tokenVerified = false;\n\tstring current_token;\n\n\t// While server is running get bearer token\n\t// and sleep for a few seconds.\n\t// When expires_in - DELTA_SECONDS_BEFORE_TOKEN_EXPIRATION seconds have elapsed\n\t// then get new token and sleep again\n\twhile (m_refreshRunning)\n\t{\n\t\tif (k >= max_retries)\n\t\t{\n\t\t\tstring msg = \"Bearer token not found for service '\" + this->getName() +\n\t\t\t\t\t\"', refresh thread exits after \" + std::to_string(max_retries) + \" retries\";\n\t\t\tLogger::getLogger()->error(msg.c_str());\n\n\t\t\t// Restart service\n\t\t\tif (m_refreshRunning)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Service is being restarted \" \\\n\t\t\t\t\t\t\"due to bearer token refresh error\");\n\t\t\t\tthis->restart();\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\tif (!tokenVerified)\n\t\t{\n\t\t\t// Fetch current bearer token\n\t\t\tBearerToken bToken(m_mgtClient->getRegistrationBearerToken());\n\t\t\tif (bToken.exists())\n\t\t\t{\n\t\t\t\t// Ask verification to core service and get token claims\n\t\t\t\ttokenVerified = m_mgtClient->verifyBearerToken(bToken);\n\t\t\t}\n\n\t\t\t// Give it a try in case of any error from core service\n\t\t\tif (!tokenVerified)\n\t\t\t{\n\t\t\t\tk++;\n\t\t\t\tLogger::getLogger()->error(\"Refreshing bearer token thread for service '%s' \"\n\t\t\t\t\t\t\t\"got empty or invalid bearer token '%s', retry n. 
%d\",\n\t\t\t\t\t\t\tthis->getName().c_str(),\n\t\t\t\t\t\t\tbToken.token().c_str(),\n\t\t\t\t\t\t\tk);\n\n\t\t\t\t// Sleep for 1 second\n\t\t\t\tstd::this_thread::sleep_for(std::chrono::seconds(1));\n\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t// Save verified token\n\t\t\tcurrent_token = bToken.token();\n\n\t\t\t// Token exists and it is valid, get expiration time\n\t\t\texpires_in = bToken.getExpiration() - time(NULL) - DELTA_SECONDS_BEFORE_TOKEN_EXPIRATION;\n\n\t\t\tLogger::getLogger()->debug(\"Bearer token refresh will be called in \"\n\t\t\t\t\t\t\"%ld seconds, service '%s'\",\n\t\t\t\t\t\texpires_in,\n\t\t\t\t\t\tthis->getName().c_str());\n\t\t}\n\n\t\t// Check whether the expiration time has elapsed\n\t\tif (expires_in > 0)\n\t\t{\n\t\t\t// Thread sleeps for a few seconds, so it can get shutdown indicator\n\t\t\tstd::this_thread::sleep_for(std::chrono::seconds(10));\n\t\t\texpires_in -= 10;\n\t\t\tcontinue;\n\t\t}\n\n\t\t// A shutdown may have been requested since the last check: check it now\n\t\tif (!m_refreshRunning)\n\t\t{\n\t\t\tLogger::getLogger()->info(\"Service is being shut down: \" \\\n\t\t\t\t\t\t\"refresh thread does not call \" \\\n\t\t\t\t\t\t\"refresh endpoint and exits now\");\n\t\t\tbreak;\n\t\t}\n\n\t\tLogger::getLogger()->debug(\"Bearer token refresh thread calls \"\n\t\t\t\t\t\"token refresh endpoint for service '%s'\",\n\t\t\t\t\tthis->getName().c_str());\n\n\t\t// Get a new bearer token for this service via\n\t\t// refresh_token core API endpoint\n\t\tstring newToken;\n\t\tbool ret = m_mgtClient->refreshBearerToken(current_token, newToken);\n\t\tif (ret)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"Bearer token refresh thread has got \"\n\t\t\t\t\t\"a new bearer token for service '%s': %s\",\n\t\t\t\t\tthis->getName().c_str(),\n\t\t\t\t\tnewToken.c_str());\n\n\t\t\t// Store new bearer token\n\t\t\tm_mgtClient->setNewBearerToken(newToken);\n\n\t\t\t// Next loop will verify the token\n\t\t\ttokenVerified = 
false;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tk++;\n\t\t\tstring msg = \"Failed to get a new token \"\n\t\t\t\t\"via refresh API call for service '\" + this->getName() + \"'\";\n\t\t\tLogger::getLogger()->fatal(\"%s, current token is '%s', retry n. %d\",\n\t\t\t\t\tmsg.c_str(),\n\t\t\t\t\tcurrent_token.c_str(),\n\t\t\t\t\tk);\n\t\t\t// Sleep for some time\n\t\t\tstd::this_thread::sleep_for(std::chrono::seconds(1));\n\n\t\t\tcontinue;\n\t\t}\n\t}\n\n\tLogger::getLogger()->info(\"Refreshing bearer token thread for service '%s' stopped\",\n\t\t\t\tthis->getName().c_str());\n}\n\n/**\n * Thread entry point to refresh the bearer token of a service\n *\n * @param data\tPointer to ServiceAuthHandler object\n */\nstatic void bearer_token_refresh_thread(void *data)\n{\n\tServiceAuthHandler *service = (ServiceAuthHandler *)data;\n\tservice->refreshBearerToken();\n}\n\n/**\n * Request security change action:\n *\n * Given a reason code, \"attachACL\", \"detachACL\", \"reloadACL\", \"updateACL\"\n * in the 'reason' attribute, the ACL named in 'argument' is\n * attached, detached or reloaded\n *\n * @param payload\tThe JSON document with 'reason' and 'argument'\n * @return\t\tTrue on success\n */\nbool ServiceAuthHandler::securityChange(const string& payload)\n{\n\t// Parse JSON data\n\tACL::ACLReason reason(payload);\n\n\tLogger::getLogger()->debug(\"Reason is %s, argument %s\",\n\t\t\t\treason.getReason().c_str(),\n\t\t\t\treason.getArgument().c_str());\n\n\tstring r = reason.getReason();\n\n\t// Lock config\n\tlock_guard<mutex> cfgLock(m_mtx_config);\n\n\tif (r == \"attachACL\" || r == \"reloadACL\" || r == \"updateACL\")\n\t{\n\t\t// Fetch and load the attached, new or updated ACL\n\t\tm_service_acl = m_mgtClient->getACL(reason.getArgument());\n\t}\n\telse if (r == \"detachACL\")\n\t{\n\t\tm_service_acl = ACL();\n\t}\n\telse\n\t{\n\t\t// Error\n\t\tLogger::getLogger()->error(\"Reason '%s' is not 
supported\",\n\t\t\t\t\treason.getReason().c_str());\n\t\treturn false;\n\t}\n\n\treturn true;\n}\n"
  },
  {
    "path": "C/services/common/south_python_plugin_handle.cpp",
    "content": "/*\n * Fledge plugin handle related\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora, Massimiliano Pinto\n */\n\n#include <config_category.h>\n#include <reading.h>\n#include <logger.h>\n#include <south_python_plugin_handle.h>\n#include <dlfcn.h>\n\n#define PYTHON_PLUGIN_INTF_LIB \"libsouth-plugin-python-interface.so\"\n\n#define PRINT_FUNC\tLogger::getLogger()->info(\"%s:%d\", __FUNCTION__, __LINE__);\n\ntypedef PLUGIN_INFORMATION *(*pluginInitFn)(const char *pluginName, const char *path);\n\nusing namespace std;\n\n/**\n * Constructor for SouthPythonPluginHandle\n *    - Load python interface library and initialize the interface\n *\n * @param    pluginName\t\tThe Python plugin name to load\n * @param    pluginPathName\tThe Python plugin path\n */\nSouthPythonPluginHandle::SouthPythonPluginHandle(const char *pluginName,\n\t\t\t\t\t\t const char *pluginPathName) :\n\t\t\t\t\t\t PythonPluginHandle(pluginName, pluginPathName)\n{\n\t// Expecting this lib to be present in LD_LIBRARY_PATH:\n\t// same dir as where lib-services-common.so is present\n\tstring libPath = PYTHON_PLUGIN_INTF_LIB;\n\n\tm_hndl = dlopen(libPath.c_str(), RTLD_NOW | RTLD_GLOBAL);\n\tif (!m_hndl)\n\t{\n\t\tLogger::getLogger()->error(\"SouthPythonPluginHandle c'tor: dlopen failed for library '%s' : %s\",\n\t\t\t\t\t   libPath.c_str(),\n\t\t\t\t\t   dlerror());\n\t\treturn;\n\t}\n\n\tpluginInitFn initFn = (pluginInitFn) dlsym(m_hndl, \"PluginInterfaceInit\");\n\tif (initFn == NULL)\n\t{\n\t\t// Unable to find PluginInterfaceInit entry point\n\t\tLogger::getLogger()->error(\"Plugin library %s does not support %s function : %s\",\n\t\t\t\t\t   libPath.c_str(),\n\t\t\t\t\t   \"PluginInterfaceInit\",\n\t\t\t\t\t   dlerror());\n\t\tdlclose(m_hndl);\n\t\tm_hndl = NULL;\n\t\treturn;\n\t}\n\n\t// Initialise embedded Python and the interface\n\tvoid *ref = initFn(pluginName, pluginPathName);\n\tif (ref == 
NULL)\n\t{\n\t\t// Use the logger, as the rest of this file does\n\t\tLogger::getLogger()->error(\"Plugin library %s : PluginInterfaceInit returned failure\",\n\t\t\t\t\t   libPath.c_str());\n\t\tdlclose(m_hndl);\n\t\tm_hndl = NULL;\n\t\treturn;\n\t}\n\n\t// Set type\n\tm_type = PLUGIN_TYPE_SOUTH;\n}\n"
  },
  {
    "path": "C/services/common-plugin-interfaces/python/include/python_plugin_common_interface.h",
    "content": "#ifndef _PYTHON_PLUGIN_BASE_INTERFACE_H\n#define _PYTHON_PLUGIN_BASE_INTERFACE_H\n/*\n * Fledge common plugin interface\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto, Amandeep Singh Arora\n */\n\n#include <cctype>\n#include <plugin_manager.h>\n\n#define SHIM_SCRIPT_REL_PATH  \"/python/fledge/plugins/common/shim/\"\n#define SHIM_SCRIPT_POSTFIX \"_shim\"\n\nusing namespace std;\n\n/**\n * This class represents the loaded Python module\n * with interpreter initialisation flag.\n * That flag is checked in PluginInterfaceCleanup\n * before removing Python interpreter.\n */\nclass PythonModule\n{\n\tpublic:\n\t\tPythonModule(PyObject* module,\n\t\t\t    bool init,\n\t\t\t    string name,\n\t\t\t    string type,\n\t\t\t    PyThreadState* state) :\n\t\t\tm_module(module),\n\t\t\tm_init(init),\n\t\t\tm_name(name),\n\t\t\tm_type(type),\n\t\t\tm_tState(state)\n\t\t{\n\t\t};\n\n\t\t~PythonModule()\n\t\t{\n\t\t\t// Destroy loaded Python module\n\t\t\tPy_CLEAR(m_module);\n\t\t\tm_module = NULL;\n\t\t};\n\n\t\tvoid\tsetCategoryName(string category)\n\t\t{\n\t\t\tm_categoryName = category;\n\t\t};\n\n\t\tstring&\tgetCategoryName()\n\t\t{\n\t\t\treturn m_categoryName;\n\t\t};\n\n\tpublic:\n\t\tPyObject* m_module;\n\t\tbool      m_init;\n\t\tstring    m_name;\n\t\tstring    m_type;\n\t\tPyThreadState*\tm_tState;\n\t\tstring    m_categoryName;\n};\n\nextern \"C\" {\n// This is the map of Python object initialised in each \n// South, Notification, Filter  plugin interfaces\nstatic map<string, PythonModule*> *pythonModules = new map<string, PythonModule*>();\n// Map of PLUGIN_HANDLE objects, updated by plugin_init calls\nstatic map<PLUGIN_HANDLE, PythonModule*> *pythonHandles = new map<PLUGIN_HANDLE, PythonModule*>();\n\n// Global variable gPluginName set by PluginInterfaceInit:\n// it has a different memory address when set/read by\n// PluginInterfaceInit in South, Filter or Notification\n// 
Only used in plugin_info_fn calls\nstatic string gPluginName;\n\n// Common methods to all plugin interfaces\nstatic PLUGIN_INFORMATION *plugin_info_fn();\nstatic PLUGIN_HANDLE plugin_init_fn(ConfigCategory *);\nstatic void plugin_reconfigure_fn(PLUGIN_HANDLE*, const std::string&);\nstatic void plugin_shutdown_fn(PLUGIN_HANDLE);\n\nstatic void logErrorMessage();\nstatic bool numpyImportError = false;\n\n/**\n * Destructor for PythonPluginHandle\n *    - Free up owned references\n *    - Unload python 3.5 interpreter\n *\n * @param pluginName\tThe Python plugin to cleanup\n */\nvoid PluginInterfaceCleanup(const string& pluginName)\n{\n\tbool removePython = false;\n\n\tif (!pythonModules)\n\t{\n\t\tLogger::getLogger()->error(\"pythonModules map is NULL \"\n\t\t\t\t\t   \"in PluginInterfaceCleanup, plugin '%s'\",\n\t\t\t\t\t   pluginName.c_str());\n\n\t\treturn;\n\t}\n\n\t// Acquire GIL\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\t// Look for Python module, pluginName is the key\n\tauto it = pythonModules->find(pluginName);\n\tif (it != pythonModules->end())\n\t{\n\t\t// Remove Python 3.x environment?\n\t\tremovePython = it->second->m_init;\n\n\t\t// Remove the Python module object before erasing the map entry:\n\t\t// the iterator must not be dereferenced after erase()\n\t\tif (it->second &&\n\t\t    it->second->m_module)\n\t\t{\n\t\t\tPy_CLEAR(it->second->m_module);\n\t\t\tit->second->m_module = NULL;\n\t\t}\n\n\t\t// Remove this element\n\t\tpythonModules->erase(it);\n\t}\n\n\t// Look for Python module handle\n\tfor (auto h = pythonHandles->begin();\n\t\t  h != pythonHandles->end(); )\n\t{\n\t\t// Compare pluginName with m_name\n\t\tif (h->second->m_name.compare(pluginName) == 0)\n\t\t{\n\t\t\t// Remove PythonModule object\n\t\t\tif (h->second->m_module)\n\t\t\t{\n\t\t\t\tPy_CLEAR(h->second->m_module);\n\t\t\t\th->second->m_module = NULL;\n\t\t\t}\n\n\t\t\t// Remove PythonModule\n\t\t\tdelete h->second;\n\t\t\th->second = NULL;\n\n\t\t\t// Remove this element\n\t\t\th = pythonHandles->erase(h);\n\t\t}\n\t\telse\n\t\t{\n\t\t\t++h;\n\t\t}\n\t}\n\n\t// Remove all maps 
if empty\n\tif (pythonModules->size() == 0)\n\t{\n\t\t// Remove map object and clear the dangling pointer\n\t\tdelete pythonModules;\n\t\tpythonModules = NULL;\n\t}\n\tif (pythonHandles->size() == 0)\n\t{\n\t\t// Remove map object and clear the dangling pointer\n\t\tdelete pythonHandles;\n\t\tpythonHandles = NULL;\n\t}\n\n\tif (removePython)\n\t{\n\t\tLogger::getLogger()->debug(\"Removing Python interpreter \"\n\t\t\t\t\t   \"started by plugin '%s'\",\n\t\t\t\t\t   pluginName.c_str());\n\n\t\t// Cleanup Python 3.5\n\t\tPy_Finalize();\n\t}\n\telse\n\t{\n\t\tPyGILState_Release(state);\n\t}\n\n\tLogger::getLogger()->debug(\"PluginInterfaceCleanup successfully \"\n\t\t\t\t   \"called for plugin '%s'\",\n\t\t\t\t   pluginName.c_str());\n}\n\n/**\n * Returns function pointer that can be invoked to call 'plugin_info'\n * function in python plugin\n */\nstatic void* PluginInterfaceGetInfo()\n{\n\treturn (void *) plugin_info_fn;\n}\n\n/**\n * Function to set current loglevel in given python plugin/filter module\n *\n * @param\tpython_module\tThe python plugin/filter module to which to propagate the loglevel\n * @param\ts\tDebug string indicating the module name and plugin API that caused this loglevel change\n */\nvoid set_loglevel_in_python_module(PyObject *python_module, string s)\n{\n\t// Take a copy of the level string before upper-casing it,\n\t// so the logger's own string is not modified in place\n\tstring _loglevel = Logger::getLogger()->getMinLevel();\n\tfor (auto & c: _loglevel) c = toupper(c);\n\tconst char *loglevel = _loglevel.c_str();\n\n\tPyObject* mod = python_module;\n\tif (mod != NULL)\n\t{\n\t\tPyObject* loggerObj = PyObject_GetAttrString(mod, \"_LOGGER\");\n\t\tif (loggerObj != NULL)\n\t\t{\n\t\t\tPyObject* method = PyObject_GetAttrString(loggerObj, \"setLevel\");\n\t\t\tif (method != NULL)\n\t\t\t{\n\t\t\t\tPyObject *args = PyTuple_New(1);\n\t\t\t\tPyObject *pValue = Py_BuildValue(\"s\", loglevel);\n\t\t\t\tPyTuple_SetItem(args, 0, pValue);\n\t\t\t\tPyObject* retVal = PyObject_Call(method, args, NULL);\n\n\t\t\t\tPy_CLEAR(args);\n\t\t\t\tPy_CLEAR(method);\n\t\t\t\tPy_CLEAR(loggerObj);\n\t\t\t\tif (retVal != NULL)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->debug(\"%s: %s: 
_LOGGER.setLevel(%s) done successfully\", __FUNCTION__, s.c_str(), loglevel);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"%s: _LOGGER.setLevel(%s) failed\", __FUNCTION__, loglevel);\n\t\t\t\t\tif (PyErr_Occurred())\n\t\t\t\t\t{\n\t\t\t\t\t\tlogErrorMessage();\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t// Drop the reference returned by setLevel()\n\t\t\t\tPy_CLEAR(retVal);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"%s: Method 'setLevel' not found\", __FUNCTION__);\n\t\t\t\tPy_CLEAR(loggerObj);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"%s: Object '_LOGGER' not found in python module\", __FUNCTION__);\n\t\t}\n\t}\n\telse\n\t\tLogger::getLogger()->warn(\"%s: module is NULL\", __FUNCTION__);\n\n\tPyErr_Clear();\n}\n\n/**\n * Invokes json.dumps inside python interpreter\n */\nconst char *json_dumps(PyObject *json_dict)\n{\n\tPyObject *rval = NULL;\n\tPyObject *mod, *method;\n\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tif ((mod = PyImport_ImportModule(\"json\")) != NULL)\n\t{\n\t\tif ((method = PyObject_GetAttrString(mod, \"dumps\")) != NULL)\n\t\t{\n\t\t\tPyObject *args = PyTuple_New(1);\n\t\t\tPyObject *pValue = Py_BuildValue(\"O\", json_dict);\n\t\t\tPyTuple_SetItem(args, 0, pValue);\n\n\t\t\trval = PyObject_Call(method, args, NULL);\n\t\t\tPy_CLEAR(args);\n\t\t\tPy_CLEAR(method);\n\t\t\tPy_CLEAR(mod);\n\n\t\t\tif (rval == NULL)\n\t\t\t{\n\t\t\t\tif (PyErr_Occurred())\n\t\t\t\t{\n\t\t\t\t\t// Log the error and fall through, so the GIL\n\t\t\t\t\t// is released below before returning\n\t\t\t\t\tlogErrorMessage();\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t\tLogger::getLogger()->info(\"%s:%d, rval type=%s\", __FUNCTION__, __LINE__, (Py_TYPE(rval))->tp_name);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->fatal(\"Method 'dumps' not found\");\n\t\t\tPy_CLEAR(mod);\n\t\t}\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->fatal(\"Failed to import module\");\n\t}\n\n\t// Reset error\n\tPyErr_Clear();\n\n\tPyGILState_Release(state);\n\n\t// rval may still be NULL on any of the failure paths above\n\tconst char *retVal = rval ? PyUnicode_AsUTF8(rval) : NULL;\n\tLogger::getLogger()->debug(\"%s: retVal=%s\", __FUNCTION__, retVal);\n\n\treturn retVal;\n}\n\n\n/**\n * Invokes json.loads inside python interpreter\n */\nPyObject *json_loads(const char *json_str)\n{\n\tPyObject *rval = NULL;\n\tPyObject *mod, *method;\n\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tif ((mod = PyImport_ImportModule(\"json\")) != NULL)\n\t{\n\t\tif ((method = PyObject_GetAttrString(mod, \"loads\")) != NULL)\n\t\t{\n\t\t\tPyObject *args = PyTuple_New(1);\n\t\t\tPyObject *pValue = Py_BuildValue(\"s\", json_str);\n\t\t\tPyTuple_SetItem(args, 0, pValue);\n\n\t\t\tLogger::getLogger()->debug(\"%s:%d: method=%p, args=%p, pValue=%p\", __FUNCTION__, __LINE__, method, args, pValue);\n\t\t\trval = PyObject_Call(method, args, NULL);\n\t\t\tPy_CLEAR(args);\n\t\t\tPy_CLEAR(method);\n\t\t\tPy_CLEAR(mod);\n\n\t\t\tif (rval == NULL)\n\t\t\t{\n\t\t\t\tif (PyErr_Occurred())\n\t\t\t\t{\n\t\t\t\t\t// Log the error and fall through, so the GIL\n\t\t\t\t\t// is released below before returning\n\t\t\t\t\tlogErrorMessage();\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t\tLogger::getLogger()->debug(\"%s:%d, rval type=%s\", __FUNCTION__, __LINE__, (Py_TYPE(rval))->tp_name);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->fatal(\"Method 'loads' not found\");\n\t\t\tPy_CLEAR(mod);\n\t\t}\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->fatal(\"Failed to import module\");\n\t}\n\n\t// Reset error\n\tPyErr_Clear();\n\n\tPyGILState_Release(state);\n\n\t// NULL on failure\n\treturn rval;\n}\n\n\n/**\n * Fill PLUGIN_INFORMATION structure from Python object\n *\n * @param pyRetVal      Python 3.5 Object (dict)\n * @return              Pointer to a new PLUGIN_INFORMATION structure\n *                              or NULL in case of errors\n */\nstatic PLUGIN_INFORMATION *Py2C_PluginInfo(PyObject* pyRetVal)\n{\n\t// Create returnable PLUGIN_INFORMATION structure\n\tPLUGIN_INFORMATION *info = new PLUGIN_INFORMATION;\n\tinfo->options = 0;\n\n\t// these are borrowed references returned by PyDict_Next\n\tPyObject *dKey, *dValue;\n\tPy_ssize_t dPos 
= 0;\n    \n\tPyObject* objectsRepresentation = PyObject_Repr(pyRetVal);\n\tconst char* s = PyUnicode_AsUTF8(objectsRepresentation);\n\tLogger::getLogger()->debug(\"Py2C_PluginInfo(): plugin_info returned: %s\", s);\n\tPy_CLEAR(objectsRepresentation);\n\n\t// dKey and dValue are borrowed references\n\twhile (PyDict_Next(pyRetVal, &dPos, &dKey, &dValue))\n\t{\n\t\tconst char* ckey = PyUnicode_AsUTF8(dKey);\n\t\tconst char* cval = PyUnicode_AsUTF8(dValue);\n\t\tLogger::getLogger()->debug(\"%s:%d, key=%s, value=%s, dValue type=%s\", __FUNCTION__, __LINE__, ckey, cval, (Py_TYPE(dValue))->tp_name);\n\n\t\tchar *valStr = NULL;\n\t\tif (!PyDict_Check(dValue))\n\t\t{\n\t\t\tvalStr = new char [string(cval).length()+1];\n\t\t\tstd::strcpy (valStr, cval);\n\t\t\tLogger::getLogger()->debug(\"%s:%d, key=%s, value=%s, valStr=%s\", __FUNCTION__, __LINE__, ckey, cval, valStr);\n\t\t}\n\n\t\tif(!strcmp(ckey, \"name\"))\n\t\t{\n\t\t\tinfo->name = valStr;\n\t\t}\n\t\telse if(!strcmp(ckey, \"version\"))\n\t\t{\n\t\t\tinfo->version = valStr;\n\t\t}\n\t\telse if(!strcmp(ckey, \"mode\"))\n\t\t{\n\t\t\t// Need to also handle mode values of the form \"poll|control\"\n\t\t\tstringstream ss(valStr); \n\t\t\tstring s;\n\n\t\t\tinfo->options = 0;\n\t\t\t\n\t\t\t// Tokenizing w.r.t. 
pipe '|'\n\t\t\twhile(getline(ss, s, '|'))\n\t\t\t{\n\t\t\t\tLogger::getLogger()->debug(\"%s: mode: Found token %s\", __FUNCTION__, s.c_str());\n\t\t\t\tif (s.compare(\"async\")==0)\n\t\t\t\t{\n\t\t\t\t\tinfo->options |= SP_ASYNC;\n\t\t\t\t}\n\t\t\t\telse if (s.compare(\"control\")==0)\n\t\t\t\t{\n\t\t\t\t\tinfo->options |= SP_CONTROL;\n\t\t\t\t}\n\t\t\t\telse if (s.compare(\"poll\")==0)\n\t\t\t\t{\n\t\t\t\t\t// Nothing to set\n\t\t\t\t}\n\t\t\t\telse if (s.compare(\"none\")==0)\n\t\t\t\t{\n\t\t\t\t\t// Ignore\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t\tLogger::getLogger()->warn(\"%s: mode: Unknown token/value %s\", __FUNCTION__, s.c_str());\n\t\t\t}\n\n\t\t\tdelete[] valStr;\n\t\t}\n\t\telse if(!strcmp(ckey, \"type\"))\n\t\t{\n\t\t\tinfo->type = valStr;\n\t\t}\n\t\telse if(!strcmp(ckey, \"interface\"))\n\t\t{\n\t\t\tinfo->interface = valStr;\n\t\t}\n\t\telse if(!strcmp(ckey, \"config\"))\n\t\t{            \n\t\t\t// if 'config' value is of dict type, convert it to string\n\t\t\tif (strcmp((Py_TYPE(dValue))->tp_name, \"dict\")==0)\n\t\t\t{\n\t\t\t\tPyObject* objectsRepresentation = PyObject_Repr(dValue);\n\t\t\t\tconst char* s = PyUnicode_AsUTF8(objectsRepresentation);\n\t\t\t\tLogger::getLogger()->debug(\"Py2C_PluginInfo(): INPUT: config value=%s\", s);\n\t\t\t\tPy_CLEAR(objectsRepresentation);\n\n\t\t\t\tinfo->config = json_dumps(dValue);\n\t\t\t\tLogger::getLogger()->info(\"Py2C_PluginInfo(): OUTPUT: config value=%s\", info->config);\n\t\t\t}\n\t\t\telse\n\t\t\t\tinfo->config = valStr;\n\t\t}\n\t\telse\n\t\t\tLogger::getLogger()->info(\"%s:%d: Unexpected key %s\", __FUNCTION__, __LINE__, ckey);\n\t}\n\n\treturn info;\n}\n\n/**\n * Function to invoke 'plugin_info' function in python plugin\n */\nstatic PLUGIN_INFORMATION *plugin_info_fn()\n{\n\tif (!pythonModules)\n\t{\n\t\tLogger::getLogger()->error(\"pythonModules map is NULL \"\n\t\t\t\t\t   \"in plugin_info_fn, plugin '%s'\",\n\t\t\t\t\t   gPluginName.c_str());\n\t\treturn NULL;\n\t}\n\n\t// Look for Python module 
for gPluginName key\n\tauto it = pythonModules->find(gPluginName);\n\tif (it == pythonModules->end() ||\n\t    !it->second ||\n\t    !it->second->m_module)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_info(): \"\n\t\t\t\t\t   \"pModule is NULL for plugin '%s'\",\n\t\t\t\t\t   gPluginName.c_str());\n\t\treturn NULL;\n\t}\n\tPyObject* pFunc;\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_info\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find method 'plugin_info' \"\n\t\t\t\t\t   \"in loaded python module '%s', m_module=%p\",\n\t\t\t\t\t   gPluginName.c_str(), it->second->m_module);\n\t}\n\n\tif (!pFunc || !PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot call method 'plugin_info' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   gPluginName.c_str());\n\t\tPy_CLEAR(pFunc);\n\n\t\tPyGILState_Release(state);\n\t\treturn NULL;\n\t}\n\n\t// Call Python method passing an object\n\tPyObject* pReturn = PyObject_CallFunction(pFunc, NULL);\n\n\tPy_CLEAR(pFunc);\n\n\tPLUGIN_INFORMATION *info = NULL;\n\n\t// Handle returned data\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method 'plugin_info' \"\n\t\t\t\t\t    \": error while getting result object, plugin '%s'\",\n\t\t\t\t\t   gPluginName.c_str());\n\t\tlogErrorMessage();\n\t\tinfo = NULL;\n\t}\n\telse\n\t{\n\t\t// Parse plugin information\n\t\tinfo = Py2C_PluginInfo(pReturn);\n\n\t\t// Remove pReturn object\n\t\tPy_CLEAR(pReturn);\n\t}\n\n\tif (info)\n\t{\n\t\t// bump interface version to at least 2.x so that we are able to handle\n\t\t// list of readings from python plugins in plugin_poll\n\t\tif (info->interface[0] == '1' &&\n\t\t    info->interface[1] == '.')\n\t\t{\n\t\t\tLogger::getLogger()->info(\"plugin_handle: 
plugin_info(): \"\n\t\t\t\t\t\t   \"Updating interface version \"\n\t\t\t\t\t\t   \"from '%s' to '2.0.0', plugin '%s'\",\n\t\t\t\t\t\t   info->interface,\n\t\t\t\t\t\t   gPluginName.c_str());\n\t\t\tdelete[] info->interface;\n\t\t\tchar *valStr = new char[6];\n\t\t\tstd::strcpy(valStr, \"2.0.0\");\n\t\t\tinfo->interface = valStr;\n\t\t}\n\n\t\tLogger::getLogger()->info(\"plugin_handle: plugin_info(): info={name=%s, \"\n\t\t\t\t\t   \"version=%s, options=%d, type=%s, interface=%s, config=%s}\",\n\t\t\t\t\t   info->name,\n\t\t\t\t\t   info->version,\n\t\t\t\t\t   info->options,\n\t\t\t\t\t   info->type,\n\t\t\t\t\t   info->interface,\n\t\t\t\t\t   info->config);\n\t}\n\n\tPyGILState_Release(state);\n\n\treturn info;\n}\n\n/**\n * Function to invoke 'plugin_init' function in python plugin\n *\n * @param    config\tConfigCategory configuration object\n * @return\t\tPLUGIN_HANDLE object\n */\nstatic PLUGIN_HANDLE plugin_init_fn(ConfigCategory *config)\n{\n\t// Get plugin name\n\tstring pName = config->getValue(\"plugin\");\n\n\tif (!pythonModules)\n\t{\n\t\tLogger::getLogger()->error(\"pythonModules map is NULL \"\n\t\t\t\t\t   \"in plugin_init_fn, plugin '%s'\",\n\t\t\t\t\t   pName.c_str());\n\t\treturn NULL;\n\t}\n\n\tLogger::getLogger()->debug(\"plugin_handle: plugin_init(): \"\n\t\t\t\t   \"config->itemsToJSON()='%s'\",\n\t\t\t\t   config->itemsToJSON().c_str());\n\n\tbool loadModule = false;\n\tbool reloadModule = false;\n\tbool pythonInitState = false;\n\tstring loadPluginType;\n\n\tPythonModule* module = NULL;\n\tPyThreadState* newInterp = NULL;\n\n\t// Check whether plugin pName has already been loaded\n\tfor (auto h = pythonHandles->begin();\n\t\t  h != pythonHandles->end();\n\t\t  ++h)\n\t{\n\t\tif (h->second->m_name.compare(pName) == 0)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"%s_plugin_init_fn: already loaded \"\n\t\t\t\t\t\t   \"a plugin with name '%s'. 
Loading a new instance\",\n\t\t\t\t\t\t   h->second->m_type.c_str(),\n\t\t\t\t\t\t   pName.c_str());\n\n\t\t\t// Set Python library loaded state\n\t\t\tpythonInitState = h->second->m_init;\n\n\t\t\t// Set plugin type\n\t\t\tloadPluginType = h->second->m_type;\n\n\t\t\t// Set load indicator\n\t\t\tloadModule = true;\n\t\t}\n\t}\n\n\tif (!loadModule)\n\t{\n\t\t// Plugin name not previously loaded: check current Python module\n\t\t// pName is the key\n\t\tauto it = pythonModules->find(pName);\n\t\tif (it == pythonModules->end())\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"plugin_handle: plugin_init(): \"\n\t\t\t\t\t\t   \"pModule not found for plugin '%s'\",\n\t\t\t\t\t\t   pName.c_str());\n\n\t\t\t// Set plugin type\n\t\t\tPluginManager* pMgr = PluginManager::getInstance();\n\t\t\tPLUGIN_HANDLE tmp = pMgr->findPluginByName(pName);\n\t\t\tif (tmp)\n\t\t\t{\n\t\t\t\tPLUGIN_INFORMATION* pInfo = pMgr->getInfo(tmp);\n\t\t\t\tif (pInfo)\n\t\t\t\t{\n\t\t\t\t\tloadPluginType = string(pInfo->type);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// Set reload indicator\n\t\t\treloadModule = true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (it->second && it->second->m_module)\n\t\t\t{\n\t\t\t\t// Just use current loaded module: no load or re-load action\n\t\t\t\tmodule = it->second;\n\n\t\t\t\t// Set Python library loaded state\n\t\t\t\tpythonInitState = it->second->m_init;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_init(): \"\n\t\t\t\t\t\t\t   \"found pModule is NULL for plugin '%s'\",\n\t\t\t\t\t\t\t   pName.c_str());\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t}\n\t}\n\n\tLogger::getLogger()->info(\"%s:%d: loadModule=%s, reloadModule=%s\",\n                                __FUNCTION__, __LINE__, loadModule?\"TRUE\":\"FALSE\", reloadModule?\"TRUE\":\"FALSE\");\n\n\t// Acquire GIL\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\t// Import Python module using a new interpreter\n\tif (loadModule || reloadModule)\n\t{\n\t\tstring fledgePythonDir;\n\n\t\tstring 
fledgeRootDir;\n\t\t// Guard against an unset FLEDGE_ROOT environment variable:\n\t\t// getenv() may return NULL\n\t\tconst char *rootDirEnv = getenv(\"FLEDGE_ROOT\");\n\t\tif (rootDirEnv)\n\t\t{\n\t\t\tfledgeRootDir = rootDirEnv;\n\t\t}\n\t\tfledgePythonDir = fledgeRootDir + \"/python\";\n\n\t\tint argc = 2;\n\n\t\t// Set Python path for embedded Python 3.x\n\t\t// Get current sys.path - borrowed reference\n\t\tPyObject* sysPath = PySys_GetObject((char *)\"path\");\n\t\tPyList_Append(sysPath, PyUnicode_FromString((char *) fledgePythonDir.c_str()));\n\n\t\t// Set sys.argv for embedded Python 3.x\n\t\twchar_t* argv[argc];\n\t\targv[0] = Py_DecodeLocale(\"\", NULL);\n\t\targv[1] = Py_DecodeLocale(pName.c_str(), NULL);\n\n\t\t// Set script parameters\n\t\tPySys_SetArgv(argc, argv);\n\n\t\tLogger::getLogger()->debug(\"%s_plugin_init_fn, %sloading plugin '%s', \",\n\t\t\t\t\t   loadPluginType.c_str(),\n\t\t\t\t\t   reloadModule ? \"re-\" : \"\",\n\t\t\t\t\t   pName.c_str());\n\n\t\t// Import Python script\n\t\tPyObject *newObj = PyImport_ImportModule(pName.c_str());\n\n\t\t// Check result\n\t\tif (newObj)\n\t\t{\n\t\t\t// Create a new PythonModule; note plain 'new' throws on\n\t\t\t// allocation failure rather than returning NULL\n\t\t\tPythonModule* newModule = new PythonModule(newObj,\n\t\t\t\t\t\t\t\t   pythonInitState,\n\t\t\t\t\t\t\t\t   pName,\n\t\t\t\t\t\t\t\t   loadPluginType,\n\t\t\t\t\t\t\t\t   NULL);\n\n\t\t\t// Set module\n\t\t\tmodule = newModule;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tlogErrorMessage();\n\n\t\t\t// Release lock\n\t\t\tPyGILState_Release(state);\n\n\t\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_init(): \"\n\t\t\t\t\t\t   \"failed to import plugin '%s'\",\n\t\t\t\t\t\t   pName.c_str());\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tLogger::getLogger()->debug(\"%s_plugin_init_fn for '%s', pModule '%p', 
\",\n\t\t\t\t   loadPluginType.c_str(),\n\t\t\t\t   module->m_name.c_str(),\n\t\t\t\t   module->m_module);\n\n\tLogger::getLogger()->debug(\"%s:%d: calling set_loglevel_in_python_module(), loglevel=%s\", __FUNCTION__, __LINE__, Logger::getLogger()->getMinLevel().c_str());\n\tset_loglevel_in_python_module(module->m_module, module->m_name + \" plugin_init\");\n    \n\tPyObject *config_dict = json_loads(config->itemsToJSON().c_str());\n    \n\t// Call Python method passing an object\n\tPyObject* pReturn = PyObject_CallMethod(module->m_module,\n\t\t\t\t\t\t\"plugin_init\",\n\t\t\t\t\t\t\"O\",\n\t\t\t\t\t\tconfig_dict);\n\n\tPy_CLEAR(config_dict);\n\n\t// Handle returned data\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method plugin_init : \"\n\t\t\t\t\t   \"error while getting result object, plugin '%s'\",\n\t\t\t\t\t   pName.c_str());\n\t\tlogErrorMessage();\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->debug(\"plugin_handle: plugin_init(): \"\n\t\t\t\t\t   \"got handle from python plugin='%p', *handle %p, plugin '%s'\",\n\t\t\t\t\t   pReturn,\n\t\t\t\t\t   &pReturn,\n\t\t\t\t\t   pName.c_str());\n\t}\n\n\t// Add the handle to handles map as key, PythonModule object as value\n\tstd::pair<std::map<PLUGIN_HANDLE, PythonModule*>::iterator, bool> ret;\n\tif (pythonHandles)\n\t{\n\t\t// Add to handles map the PythonModule object\n\t\tret = pythonHandles->insert(pair<PLUGIN_HANDLE, PythonModule*>\n\t\t\t((PLUGIN_HANDLE)pReturn, module));\n\n\t\tif (ret.second)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"plugin_handle: plugin_init(): \"\n\t\t\t\t\t\t   \"handle %p of python plugin '%s' \"\n\t\t\t\t\t\t   \"added to pythonHandles map\",\n\t\t\t\t\t\t   pReturn,\n\t\t\t\t\t\t   pName.c_str());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"plugin_handle: plugin_init(): \"\n\t\t\t\t\t\t   \"failed to insert handle %p of \"\n\t\t\t\t\t\t   \"python plugin '%s' to pythonHandles map\",\n\t\t\t\t\t\t   pReturn,\n\t\t\t\t\t\t   
pName.c_str());\n\n\t\t\tPy_CLEAR(module->m_module);\n\t\t\tmodule->m_module = NULL;\n\t\t\tdelete module;\n\t\t\tmodule = NULL;\n\n\t\t\tPy_CLEAR(pReturn);\n\t\t\tpReturn = NULL;\n\t\t}\n\t}\n\n\t// Release locks\n\tif (newInterp)\n\t{\n\t\tPyEval_ReleaseThread(newInterp);\n\t}\n\telse\n\t{\n\t\tPyGILState_Release(state);\n\t}\n\n\treturn pReturn ? (PLUGIN_HANDLE) pReturn : NULL;\n}\n\n/**\n * Function to invoke 'plugin_reconfigure' function in python plugin\n *\n * @param    handle\tThe plugin handle from plugin_init_fn\n * @param    config\tThe new configuration, as string\n */\nstatic void plugin_reconfigure_fn(PLUGIN_HANDLE* handle,\n\t\t\t\t  const std::string& config)\n{\n\tLogger::getLogger()->debug(\"%s:%d: config=%s\", __FUNCTION__, __LINE__, config.c_str());\n\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_reconfigure(): \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in plugin_reconfigure, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn;\n\t}\n\n\t// Look for Python module for handle key\n\tauto it = pythonHandles->find(*handle);\n\tif (it == pythonHandles->end() ||\n\t    !it->second ||\n\t    !it->second->m_module)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_reconfigure(): \"\n\t\t\t\t\t   \"pModule is NULL, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn;\n\t}\n\n\t// A static mutex, shared by all callers, so that concurrent\n\t// reconfigure calls are actually serialised\n\tstatic std::mutex mtx;\n\tPyObject* pFunc;\n\tlock_guard<mutex> guard(mtx);\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\tLogger::getLogger()->debug(\"plugin_handle: plugin_reconfigure(): \"\n\t\t\t\t   \"pModule=%p, *handle=%p, plugin '%s'\",\n\t\t\t\t   it->second->m_module,\n\t\t\t\t   *handle,\n\t\t\t\t   it->second->m_name.c_str());\n\n\tif (config.compare(\"logLevel\") == 0)\n\t{\n\t\tLogger::getLogger()->debug(\"calling set_loglevel_in_python_module() for updating 
loglevel\");\n\t\tset_loglevel_in_python_module(it->second->m_module, it->second->m_name+\" plugin_reconf\");\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\t\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_reconfigure\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find method 'plugin_reconfigure' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t}\n\n\tif (!pFunc || !PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot call method plugin_reconfigure \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPy_CLEAR(pFunc);\n\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\n\tLogger::getLogger()->debug(\"plugin_reconfigure with %s\", config.c_str());\n\n\tPyObject *new_config_dict = json_loads(config.c_str());\n\n\t// Call Python method passing an object and a C string\n\tPyObject* pReturn = PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"OO\",\n\t\t\t\t\t\t  *handle,\n\t\t\t\t\t\t  new_config_dict);\n\n\tPy_CLEAR(pFunc);\n\tPy_CLEAR(new_config_dict);\n\n\t// Handle returned data\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method plugin_reconfigure \"\n\t\t\t\t\t   \": error while getting result object, plugin '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tlogErrorMessage();\n\t\t//*handle = NULL; // not sure if this should be treated as unrecoverable failure on python plugin side\n\t}\n\telse\n\t{\n\t\t// Save PythonModule\n\t\tPythonModule* currentModule = it->second;\n\n\t\tPy_CLEAR(*handle);\n\t\t*handle = pReturn;\n\n\t\tif (pythonHandles)\n\t\t{\n\t\t\t// Remove current handle from the pythonHandles map\n\t\t\tpythonHandles->erase(it);\n\n\t\t\t// Add the handle to handles map as key, PythonModule object as 
value\n\t\t\tstd::pair<std::map<PLUGIN_HANDLE, PythonModule*>::iterator, bool> ret;\n\t\t\tret = pythonHandles->insert(pair<PLUGIN_HANDLE, PythonModule*>\n\t\t\t\t((PLUGIN_HANDLE)*handle, currentModule));\n\n\t\t\tLogger::getLogger()->debug(\"plugin_handle: plugin_reconfigure(): \"\n\t\t\t\t\t\t   \"updated handle %p of python plugin '%s'\"\n\t\t\t\t\t\t   \" in pythonHandles map\",\n\t\t\t\t\t\t   *handle,\n\t\t\t\t\t\t   currentModule->m_name.c_str());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"plugin_handle: plugin_reconfigure(): \"\n\t\t\t\t\t\t   \"failed to update handle %p of python plugin '%s'\"\n\t\t\t\t\t\t   \" in pythonHandles map\",\n\t\t\t\t\t\t   *handle,\n\t\t\t\t\t\t   currentModule->m_name.c_str());\n\t\t}\n\t}\n\n\tPyGILState_Release(state);\n}\n\n/**\n * Function to log error message encountered while interfacing with\n * Python runtime\n */\nstatic void logErrorMessage()\n{\n\tPyObject* type;\n\tPyObject* value;\n\tPyObject* traceback;\n\n\tnumpyImportError = false;\n\n\tPyErr_Fetch(&type, &value, &traceback);\n\tPyErr_NormalizeException(&type, &value, &traceback);\n\n\tPyObject* str_exc_value = PyObject_Repr(value);\n\tPyObject* pyExcValueStr = PyUnicode_AsEncodedString(str_exc_value, \"utf-8\", \"Error ~\");\n\tconst char* pErrorMessage = value ?\n\t\t\t\t    PyBytes_AsString(pyExcValueStr) :\n\t\t\t\t    \"no error description.\";\n\tLogger::getLogger()->warn(\"logErrorMessage: Error '%s', plugin '%s'\",\n\t\t\t\t   pErrorMessage,\n\t\t\t\t   gPluginName.c_str());\n\t\n\t// Check for numpy/pandas import errors\n\tconst char *err1 = \"implement_array_function method already has a docstring\";\n\tconst char *err2 = \"cannot import name 'check_array_indexer' from 'pandas.core.indexers'\";\n\n\tnumpyImportError = strstr(pErrorMessage, err1) || strstr(pErrorMessage, err2);\n\t\n\tstd::string fcn = \"\";\n\tfcn += \"def get_pretty_traceback(exc_type, exc_value, exc_tb):\\n\";\n\tfcn += \"    import sys, traceback\\n\";\n\tfcn += 
\"    lines = traceback.format_exception(exc_type, exc_value, exc_tb)\\n\";\n\tfcn += \"    output = '\\\\n'.join(lines)\\n\";\n\tfcn += \"    return output\\n\";\n\n\tPyRun_SimpleString(fcn.c_str());\n\tPyObject* mod = PyImport_ImportModule(\"__main__\");\n\tif (mod != NULL) {\n\t\tPyObject* method = PyObject_GetAttrString(mod, \"get_pretty_traceback\");\n\t\tif (method != NULL) {\n\t\t\tPyObject* outStr = PyObject_CallObject(method, Py_BuildValue(\"OOO\", type, value, traceback));\n\t\t\tif (outStr != NULL) {\n\t\t\t\tPyObject* tmp = PyUnicode_AsASCIIString(outStr);\n\t\t\t\tif (tmp != NULL) {\n\t\t\t\t\tstd::string pretty = PyBytes_AsString(tmp);\n\t\t\t\t\tLogger::getLogger()->warn(\"%s\", pretty.c_str());\n\t\t\t\t\tLogger::getLogger()->printLongString(pretty.c_str());\n\t\t\t\t}\n\t\t\t\tPy_CLEAR(tmp);\n\t\t\t}\n\t\t\tPy_CLEAR(outStr);\n\t\t}\n\t\tPy_CLEAR(method);\n\t}\n\n\t// Reset error\n\tPyErr_Clear();\n\n\t// Remove references\n\tPy_CLEAR(type);\n\tPy_CLEAR(value);\n\tPy_CLEAR(traceback);\n\tPy_CLEAR(str_exc_value);\n\tPy_CLEAR(pyExcValueStr);\n\tPy_CLEAR(mod);\n}\n\n/**\n * Function to invoke 'plugin_shutdown' function in python plugin\n *\n * @param    handle\tThe plugin handle from plugin_init_fn\n */\nstatic void plugin_shutdown_fn(PLUGIN_HANDLE handle)\n{\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_shutdown_fn: \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in plugin_shutdown_fn, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn;\n\t}\n\n\t// Look for Python module for handle key\n\tauto it = pythonHandles->find(handle);\n\tif (it == pythonHandles->end() ||\n\t    !it->second ||\n\t    !it->second->m_module)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_shutdown_fn: \"\n\t\t\t\t\t   \"pModule is NULL, plugin handle '%p'\",\n\t\t\t\t\t   
handle);\n\t\treturn;\n\t}\n\n\tif (! Py_IsInitialized()) {\n\n\t\tLogger::getLogger()->debug(\"%s - Python environment not initialized, exiting from the function \", __FUNCTION__);\n\t\treturn;\n\t}\n\n\tPyObject* pFunc; \n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_shutdown\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find method 'plugin_shutdown' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t}\n\n\tif (!pFunc || !PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot call method 'plugin_shutdown' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPy_CLEAR(pFunc);\n\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\n\t// Call Python method passing an object\n\tPyObject* pReturn = PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"O\",\n\t\t\t\t\t\t  handle);\n\n\tPy_CLEAR(pFunc);\n\n\n\tif (false) // no separate python interpreter is used anymore for python plugins\n\t{\n\t\t// Switch to Interpreter thread\n\t\tPyThreadState* swapState = PyThreadState_Swap(it->second->m_tState);\n\n\t\t// Remove Python module\n\t\tPy_CLEAR(it->second->m_module);\n\t\tit->second->m_module = NULL;\n\n\t\t// Stop Interpreter thread\n\t\tPy_EndInterpreter(it->second->m_tState);\n\n\t\tLogger::getLogger()->debug(\"plugin_shutdown_fn: Py_EndInterpreter of '%p' \"\n\t\t\t\t\t   \"for plugin '%s'\",\n\t\t\t\t\t   it->second->m_tState,\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\t// Return to main thread\n\t\tPyThreadState_Swap(swapState);\n\n\t\t// Set pointer to null\n\t\tit->second->m_tState = NULL;\n\t}\n\telse\n\t{\n\t\t// Remove Python module\n\t\tPy_CLEAR(it->second->m_module);\n\t\tit->second->m_module = NULL;\n\t}\n\n\tPythonModule* module = 
it->second;\n\tstring pName = it->second->m_name;\n\n\t// Remove item\n\tpythonHandles->erase(it);\n\n\t// Look for Python module, pName is the key\n\tauto m = pythonModules->find(pName);\n\tif (m != pythonModules->end())\n\t{\n\t\t// Remove this element\n\t\tpythonModules->erase(m);\n\t}\n\n\t// Release module object\n\tdelete module;\n\tmodule = NULL;\n\n\t// Release GIL\n\tPyGILState_Release(state);\n\n\tLogger::getLogger()->debug(\"plugin_shutdown_fn successfully \"\n\t\t\t\t   \"called for plugin '%s'\",\n\t\t\t\t   pName.c_str());\n}\n\n};\n#endif\n\n"
  },
  {
    "path": "C/services/core/CMakeLists.txt",
    "content": "cmake_minimum_required (VERSION 2.8.8)\nproject (Core)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Wextra -Wsign-conversion\")\nset(DLLIB -ldl)\nset(UUIDLIB -luuid)\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\n\ninclude_directories(. include ../../thirdparty/Simple-Web-Server ../../thirdparty/rapidjson/include  ../common/include ../../common/include)\n\nfind_package(Threads REQUIRED)\n\nset(BOOST_COMPONENTS system thread)\n# Late 2017 TODO: remove the following checks and always use std::regex\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        set(BOOST_COMPONENTS ${BOOST_COMPONENTS} regex)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DUSE_BOOST_REGEX\")\n    endif()\nendif()\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\nif(APPLE)\n    set(OPENSSL_ROOT_DIR \"/usr/local/opt/openssl\")\nendif()\n\nfile(GLOB core_src \"*.cpp\")\n\nlink_directories(${PROJECT_BINARY_DIR}/../../lib)\n\n# Create static library\nadd_library(core ${core_src})\ntarget_link_libraries(core ${Boost_LIBRARIES})\ntarget_link_libraries(core ${CMAKE_THREAD_LIBS_INIT})\ntarget_link_libraries(core ${DLLIB})\ntarget_link_libraries(core ${UUIDLIB})\ntarget_link_libraries(core -lssl -lcrypto)\ntarget_link_libraries(core ${COMMON_LIB})\ntarget_link_libraries(core ${SERVICE_COMMON_LIB})\n\nif(MSYS) #TODO: Is MSYS true when MSVC is true?\n    # Link the 'core' target (this project defines no 'storage' target)\n    target_link_libraries(core ws2_32 wsock32)\n    if(OPENSSL_FOUND)\n        target_link_libraries(core ws2_32 wsock32)\n    endif()\nendif()\n"
  },
  {
    "path": "C/services/core/configuration_manager.cpp",
    "content": "/*\n * Fledge Configuration management.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <sstream>\n#include <configuration_manager.h>\n#include <rapidjson/writer.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\nConfigurationManager *ConfigurationManager::m_instance = 0;\n\n/**\n * Constructor\n *\n * @param host    Storage layer TCP address\n * @param port\t  Storage layer TCP port\n */\nConfigurationManager::ConfigurationManager(const string& host,\n\t\t\t\t\t   unsigned short port)\n{\n\tm_storage = new StorageClient(host, port);\n}\n\n// Destructor\nConfigurationManager::~ConfigurationManager()\n{\n\tdelete m_storage;\n}\n\n/**\n * Return the singleton instance of the configuration manager\n *\n * @param host    Storage layer TCP address\n * @param port\t  Storage layer TCP port\n * @return        The configuration manager class instance\n */\nConfigurationManager* ConfigurationManager::getInstance(const string& host,\n\t\t\t\t\t\t\tunsigned short port)\n{\n\tif (m_instance == 0)\n\t{\n\t\tm_instance = new ConfigurationManager(host, port);\n\t}\n\treturn m_instance;\n}\n\n/**\n * Return all Fledge categories from storage layer\n *\n * @return\tConfigCategories class object with\n *\t\tkey and description for all found categories.\n * @throw\tCategoryDetailsEx exception\n */\nConfigCategories ConfigurationManager::getAllCategoryNames() const\n{\n\t// Return object\n\tConfigCategories categories;\n\n\tvector<Returns *> columns;\n\tcolumns.push_back(new Returns(\"key\"));\n\tcolumns.push_back(new Returns(\"description\"));\n\tQuery qAllCategories(columns);\n\n\tResultSet* allCategories = 0;\n\ttry\n\t{\n\t\t// Query via Storage client\n\t\tallCategories = m_storage->queryTable(\"configuration\", qAllCategories);\n\t\tif (!allCategories || !allCategories->rowCount())\n\t\t{\n\t\t\t// Data layer error or no data to handle\n\t\t\tthrow 
CategoryDetailsEx();\n\t\t}\n\n\t\t// Fetch all categories\n\t\tResultSet::RowIterator it = allCategories->firstRow();\n\t\tdo\n\t\t{\n\t\t\tResultSet::Row* row = *it;\n\t\t\tif (!row)\n\t\t\t{\n\t\t\t\tthrow CategoryDetailsEx();\n\t\t\t}\t\n\t\t\tResultSet::ColumnValue* key = row->getColumn(\"key\");\n\t\t\tResultSet::ColumnValue* description = row->getColumn(\"description\");\n\n\t\t\tConfigCategoryDescription *value = new ConfigCategoryDescription(key->getString(),\n\t\t\t\t\t\t\t\t\t\t\t description->getString());\n\t\t\t// Add current row data to categories;\n\t\t\tcategories.addCategoryDescription(value);\n\n\t\t} while (!allCategories->isLastRow(it++));\n\n\t\t// Free result set\n\t\tdelete allCategories;\n\n\t\t// Return object\n\t\treturn categories;\n\n\t}\n\tcatch (std::exception* e)\n\t{\n\t\tdelete e;\n\t\tif (allCategories)\n\t\t{\n\t\t\t// Free result set\n\t\t\tdelete allCategories;\n\t\t}\n\t\tthrow CategoryDetailsEx();\n\t}\n\tcatch (...)\n\t{\n\t\tif (allCategories)\n\t\t{\n\t\t\t// Free result set\n\t\t\tdelete allCategories;\n\t\t}\n\t\tthrow CategoryDetailsEx();\n\t}\n}\n\n/**\n * Return all the items of a specific category\n * from the storage layer.\n *\n * @param categoryName\tThe specified category name\n * @return\t\tConfigCategory class object\n *\t\t\twith all category items\n * @throw \t\tNoSuchCategory exception\n * @throw\t\tConfigCategoryEx exception\n * @throw\t\tCategoryDetailsEx exception\n */\n\nConfigCategory ConfigurationManager::getCategoryAllItems(const string& categoryName) const\n{\n\t// SELECT * FROM fledge.configuration WHERE key = categoryName\n\tconst Condition conditionKey(Equals);\n\tWhere *wKey = new Where(\"key\", conditionKey, categoryName);\n\tQuery qKey(wKey);\n\n\tResultSet* categoryItems = 0;\n\ttry\n\t{\n\t\t// Query via storage client\n\t\tcategoryItems = m_storage->queryTable(\"configuration\", qKey);\n\t\tif (!categoryItems)\n\t\t{\n\t\t\tthrow ConfigCategoryEx();\n\t\t}\n\n\t\t// Category not 
found\n\t\tif (!categoryItems->rowCount())\n\t\t{\n\t\t\tthrow NoSuchCategory();\n\t\t}\n\n\t\t// Get first row\n\t\tResultSet::RowIterator it = categoryItems->firstRow();\n\t\tResultSet::Row* row = *it;\n\t\tif (!row)\n\t\t{\n\t\t\tthrow CategoryDetailsEx();\n\t\t}\t\n\n\t\t// If we have an exception catch it and free the result set\n\t\tResultSet::ColumnValue* key = row->getColumn(\"key\");\n\t\tResultSet::ColumnValue* description = row->getColumn(\"description\");\n\t\tResultSet::ColumnValue* items = row->getColumn(\"value\");\n\n\t\t// Create string representation of JSON object\n\t\trapidjson::StringBuffer buffer;\n\t\trapidjson::Writer<rapidjson::StringBuffer> writer(buffer);\n\t\tconst rapidjson::Value *v = items->getJSON();\n\t\tv->Accept(writer);\n\n\t\tconst string sItems(buffer.GetString(), buffer.GetSize());\n\n\t\t// Create category object\n\t\tConfigCategory theVal(key->getString(), sItems);\n\t\t// Set description\n\t\ttheVal.setDescription(description->getString());\n\n\t\t// Free result set\n\t\tdelete categoryItems;\n\n\t\treturn theVal;\n\t}\n\tcatch (std::exception* e)\n\t{\n\t\tdelete e;\n\t\tif (categoryItems)\n\t\t{\n\t\t\t// Free result set\n\t\t\tdelete categoryItems;\n\t\t}\n\t\tthrow ConfigCategoryEx();\n\t}\n\tcatch (NoSuchCategory& e)\n\t{\n\t\tif (categoryItems)\n\t\t{\n\t\t\t// Free result set\n\t\t\tdelete categoryItems;\n\t\t}\n\t\tthrow;\n\t}\n\tcatch (...)\n\t{\n\t\tif (categoryItems)\n\t\t{\n\t\t\t// Free result set\n\t\t\tdelete categoryItems;\n\t\t}\n\t\tthrow ConfigCategoryEx();\n\t}\n}\n\n/**\n * Create or update a new category\n *\n * @param categoryName\t\tThe category name\n * @param categoryDescription\tThe category description\n * @param categoryItems\t\tThe category items\n * @param keepOriginalItems\tKeep stored items or replace them\n * @return\t\t\tThe ConfigCategory object\n *\t\t\t\twith \"value\" and \"default\"\n *\t\t\t\tof the new category added\n *\t\t\t\tor the merged configuration\n *\t\t\t\tof the updated 
configuration.\n * @throw\t\t\tCategoryDetailsEx exception\n * @throw\t\t\tConfigCategoryEx exception\n * @throw\t\t\tConfigCategoryDefaultWithValue exception\n */\n\nConfigCategory ConfigurationManager::createCategory(const std::string& categoryName,\n\t\t\t\t\t\t    const std::string& categoryDescription,\n\t\t\t\t\t\t    const std::string& categoryItems,\n\t\t\t\t\t\t    bool keepOriginalItems) const\n{\n\t// Fill the ready to insert category object with input data\n\tConfigCategory preparedValue(categoryName, categoryItems);\n\tpreparedValue.setDescription(categoryDescription);\n\n\ttry\n\t{\n\t\t// Abort if items contain both value and default\n\t\tpreparedValue.checkDefaultValuesOnly();\n\n\t\t// Add 'value' from 'default' for each item\n\t\tpreparedValue.setItemsValueFromDefault();\n\t}\n\tcatch (ConfigMalformed* e)\n\t{\n\t\tdelete e;\n\t\tthrow ConfigCategoryEx();\n\t}\n\tcatch (ConfigValueFoundWithDefault* e)\n\t{\n\t\t// The category items have both default and value properties:\n\t\t// raise the ConfigCategoryDefaultWithValue exception\n\t\tdelete e;\n\n\t\t// Raise specific exception\n\t\tthrow ConfigCategoryDefaultWithValue();\n\t}\n\tcatch (std::exception* e)\n\t{\n\t\tdelete e;\n\t\tthrow ConfigCategoryEx();\n\t}\n\tcatch (...)\n\t{\n\t\tthrow ConfigCategoryEx();\n\t}\n\n\t// Parse JSON input\n\tDocument doc;\n\t// Parse the prepared input category with \"value\" and \"default\"\n\tdoc.Parse(preparedValue.itemsToJSON().c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tthrow ConfigCategoryEx();\n\t}\n\n\t// Set the JSON string for merged category values\n\tstring updatedItems;\n\n\t// SELECT * FROM fledge.configuration WHERE key = categoryName\n\tconst Condition conditionKey(Equals);\n\tWhere *wKey = new Where(\"key\", conditionKey, categoryName);\n\tQuery qKey(wKey);\n\n\tResultSet* result = 0;\n\ttry\n\t{\n\t\t// Query via storage client\n\t\tresult = m_storage->queryTable(\"configuration\", qKey);\n\t\tif (!result)\n\t\t{\n\t\t\tthrow 
ConfigCategoryEx();\n\t\t}\n\n\t\tif (!result->rowCount())\n\t\t{\n\t\t\t// Prepare insert values for insertTable\n\t\t\tInsertValues newCategory;\n\t\t\tnewCategory.push_back(InsertValue(\"key\", categoryName));\n\t\t\tnewCategory.push_back(InsertValue(\"description\", categoryDescription));\n\t\t\t// Set \"value\" field for insert using the JSON document object\n\t\t\tnewCategory.push_back(InsertValue(\"value\", doc));\n\n\t\t\t// Do the insert\n\t\t\tif (!m_storage->insertTable(\"configuration\", newCategory))\n\t\t\t{\n\t\t\t\tthrow ConfigCategoryEx();\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// The category already exists: fetch data\n\t\t\tResultSet::RowIterator it = result->firstRow();\n\t\t\tResultSet::Row* row = *it;\n\t\t\tif (!row)\n\t\t\t{\n\t\t\t\tthrow CategoryDetailsEx();\n\t\t\t}\n\n\t\t\t// Get current category items\n\t\t\tResultSet::ColumnValue* theItems = row->getColumn(\"value\");\n\t\t\tconst Value* storedData = theItems->getJSON();\n\n\t\t\t// Prepare for merge\n\t\t\tDocument::AllocatorType& allocator = doc.GetAllocator();\n\t\t\tValue inputValues = doc.GetObject();\n\n\t\t\t/**\n\t\t\t * Merge input data with stored data:\n\t\t\t * stored configuration items are merged or replaced\n\t\t\t * according to the keepOriginalItems parameter value.\n\t\t\t *\n\t\t\t * Items \"value\" are preserved for items being updated, only \"default\" values\n\t\t\t * are overwritten.\n\t\t\t */\n\t\t\tmergeCategoryValues(inputValues,\n\t\t\t\t\t    storedData,\n\t\t\t\t\t    allocator,\n\t\t\t\t\t    keepOriginalItems);\n\n\t\t\t// Create the new JSON string representation of merged category items\n\t\t\trapidjson::StringBuffer buffer;\n\t\t\trapidjson::Writer<rapidjson::StringBuffer> writer(buffer);\n\n\t\t\t// inputValues is the merged configuration\n\t\t\tinputValues.Accept(writer);\n\n\t\t\t// Set the JSON string with updated items\n\t\t\tupdatedItems = string(buffer.GetString(), buffer.GetSize());\n\n\t\t\t// Prepare WHERE id = val\n\t\t\tconst Condition 
conditionKey(Equals);\n\t\t\tWhere wKey(\"key\", conditionKey, categoryName);\n\n\t\t\t// Prepare insert values for updateTable\n\t\t\tInsertValues updateCategoryValues;\n\t\t\tupdateCategoryValues.push_back(InsertValue(\"key\", categoryName));\n\t\t\tupdateCategoryValues.push_back(InsertValue(\"description\", categoryDescription));\n\n\t\t\t// Add the \"value\" DB field for UPDATE (inputValues with merged data)\n\t\t\tupdateCategoryValues.push_back(InsertValue(\"value\", inputValues));\n\n\t\t\t// Perform UPDATE fledge.configuration SET value = x WHERE key = y\n\t\t\tif (!m_storage->updateTable(\"configuration\", updateCategoryValues, wKey))\n\t\t\t{\n\t\t\t\tthrow ConfigCategoryEx();\n\t\t\t}\n\n\t\t}\n\t\tbool returnNew = result->rowCount() == 0;\n\n\t\t// Free result set data\n\t\tdelete result;\n\n\t\tif (returnNew)\n\t\t{\n\t\t\t// Return the newly created category\n\t\t\treturn preparedValue;\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Return the updated/merged category\n\t\t\tConfigCategory returnValue(categoryName, updatedItems);\n\t\t\treturnValue.setDescription(categoryDescription);\n\n\t\t\treturn returnValue;\n\t\t}\n\t}\n\tcatch (std::exception* e)\n\t{\n\t\tdelete e;\n\t\tif (result)\n\t\t{\n\t\t\t// Free result set\n\t\t\tdelete result;\n\t\t}\n\t\tthrow ConfigCategoryEx();\n\t}\n\tcatch (...)\n\t{\n\t\tif (result)\n\t\t{\n\t\t\t// Free result set\n\t\t\tdelete result;\n\t\t}\n\t\tthrow ConfigCategoryEx();\n\t}\n}\n\n/**\n * Merge the input data with stored data:\n *\n * The stored configuration items are merged with new ones if\n * parameter keepOriginalItems is true otherwise they are replaced.\n *\n * The configuration items \"value\" objects are preserved\n * for the item names being updated, only the \"default\" values\n * are overwritten.\n *\n * Examples:\n * \"value\" : {\"item_1\" : { \"description\" : \"B\", \"type\" : \"string\", \"default\" : \"TWO\" }\n\t      \"item_7\": { \"description\" : \"Z\", \"type\" : \"string\", \"default\" : \"SEVEN\" 
}}\n *\n * If \"item_1\" exists with \"value\" ONE and \"default\" ONE, the result is:\n * \"value\" : ONE, \"default\" : \"TWO\"\n * other fields in \"item_1\" are overwritten and any other item removed.\n *\n * If \"item_1\" doesn't exist and current data is\n * \"value\" : {\"item_0\" : { \"description\" : \"A\", \"type\" : \"string\", \"default\" : \"NONE\" },\n\t      \"item_7\": { \"description\" : \"Z\", \"type\" : \"string\", \"default\" : \"SEVEN\" }}\n * that entry is completely replaced by the new one \"value\" : {\"item_1\" : { ...}}\n *\n *\n * @param inputValues\t\tNew input configuration items\n * @param storedValues\t\tCurrent stored items in storage layer\n * @param allocator\t\tAllocator of the input JSON document\n * @param keepOriginalItems\tKeep stored items or replace them\n * @throw\t\t\tNotSupportedDataType exception\n */\n\nvoid ConfigurationManager::mergeCategoryValues(Value& inputValues,\n\t\t\t\t\t\tconst Value* storedValues,\n\t\t\t\t\t\tDocument::AllocatorType& allocator,\n\t\t\t\t\t\tbool keepOriginalItems) const\n{\n\t// Loop through input data\n\t// For each item fetch the value of stored one, if existent\n\tfor (Value::MemberIterator itr = inputValues.MemberBegin(); itr != inputValues.MemberEnd(); ++itr)\n\t{\n\t\t// Get current item name\n\t\tstring itemName = itr->name.GetString();\n\n\t\t// Find the itemName \"value\" in the stored data\n\t\tValue::ConstMemberIterator storedItr = storedValues->FindMember(itemName.c_str());\n\n\t\tif (storedItr != storedValues->MemberEnd() && storedItr->value.IsObject())\n\t\t{\n\t\t\t// Item name is present in stored data\n\n\t\t\t// 1. Remove current \"value\"\n\t\t\titr->value.EraseMember(\"value\");\n\t\t\t// 2. Get itemName \"value\" in stored data\n\t\t\tauto& v = storedItr->value.GetObject()[\"value\"];\n\t\t\tValue object;\n\n\t\t\t// 3. 
Set new value\n\t\t\tswitch (v.GetType())\n\t\t\t{\n\t\t\t\t// String\n\t\t\t\tcase (kStringType):\n\t\t\t\t{\n\t\t\t\t\tobject.SetString(v.GetString(), allocator);\n\t\t\t\t\titr->value.AddMember(\"value\", object, allocator);\n\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\t// Object\n\t\t\t\tcase (kObjectType):\n\t\t\t\t{\n\t\t\t\t\trapidjson::StringBuffer strbuf;\n\t\t\t\t\trapidjson::Writer<rapidjson::StringBuffer> writer(strbuf);\n\t\t\t\t\tv.Accept(writer);\n\t\t\t\t\tobject.SetString(strbuf.GetString(), allocator);\n\t\t\t\t\titr->value.AddMember(\"value\", object, allocator);\n\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\t// Array and numbers not supported yet\n\t\t\t\tdefault:\n\t\t\t\t{\n\t\t\t\t\tthrow NotSupportedDataType();\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// Add stored items not found in input items only if we want to keep them.\n\tif (keepOriginalItems == true)\n\t{\n\t\tValue::ConstMemberIterator itr;\n\n\t\t// Loop through stored data\n\t\tfor (itr = storedValues->MemberBegin(); itr != storedValues->MemberEnd(); ++itr )\n\t\t{\n\t\t\tstring itemName = itr->name.GetString();\n\n\t\t\t// Find the itemName in the input data\n\t\t\tValue::MemberIterator inputItr = inputValues.FindMember(itemName.c_str());\n\n\t\t\tif (inputItr == inputValues.MemberEnd())\n\t\t\t{\n\t\t\t\t// Set item name\n\t\t\t\tValue name(itemName.c_str(), allocator);\n\t\t\t\t\n\t\t\t\tValue object;\n\t\t\t\tobject.SetObject();\n\t\t\t\t// Object copy\n\t\t\t\tobject.CopyFrom(itr->value, allocator);\n\t\t\t\t\n\t\t\t\t// Add the new object\n\t\t\t\tinputValues.AddMember(name, object, allocator);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * Get a given item within a given category\n * @param categoryName\tThe given category\n * @param itemName\tThe given item\n * @return\t\tJSON string with item details\n */\nstring ConfigurationManager::getCategoryItem(const string& categoryName,\n\t\t\t\t\t     const string& itemName) const\n{\n\tConfigCategory allItems = 
this->getCategoryAllItems(categoryName);\n\treturn allItems.itemToJSON(itemName);\n}\n\n/**\n * Get the value of a given item within a given category\n * @param categoryName\tThe given category\n * @param itemName\tThe given item\n * @return\t\tstring with item value\n * @throw\t\tNoSuchCategoryItemValue exception\n */\nstring ConfigurationManager::getCategoryItemValue(const string& categoryName,\n\t\t\t\t\t\t  const string& itemName) const\n{\n\ttry\n\t{\n\t\tConfigCategory allItems = this->getCategoryAllItems(categoryName);\n\t\treturn allItems.getValue(itemName);\n\t}\n\tcatch (std::exception* e)\n\t{\n\t\t// Catch pointer exceptions\n\t\tdelete e;\n\t\tthrow NoSuchCategoryItemValue();\n\t}\n\tcatch (...)\n\t{\n\t\t// General catch\n\t\tthrow NoSuchCategoryItemValue();\n\t}\n}\n\n/**\n * Set the \"value\" entry of a given item within a given category.\n *\n * @param categoryName\tThe given category\n * @param itemName\tThe given item\n * @param newValue\tThe \"value\" entry to set\n * @return\t\tTrue on success.\n *\t\t\tFalse on DB update error or storage layer exception\n *\n * @throw\t\tNoSuchCategoryItem exception\n *\t\t\tif categoryName/itemName doesn't exist\n */\nbool ConfigurationManager::setCategoryItemValue(const std::string& categoryName,\n\t\t\t\t\t\tconst std::string& itemName,\n\t\t\t\t\t\tconst std::string& newValue) const\n{\n\t// Fetch itemName from categoryName\n\tstring currentItemValue;\n\ttry\n\t{\n\t\tcurrentItemValue = this->getCategoryItemValue(categoryName, itemName);\n\t}\n\tcatch (...)\n\t{\n\t\tstring errMsg(\"No details found for the category_name: \" + categoryName);\n\t\terrMsg += \" and config_item: \" + itemName;\n\n\t\tthrow NoSuchCategoryItem(errMsg);\n\t}\n\n\t/**\n\t * Check whether newValue is the same as currentValue\n\t * NOTE:\n\t * Does it work if newValue represents a JSON object\n\t * instead of a simple value?\n\t */\n\tif (currentItemValue.compare(newValue) == 0)\n\t{\n\t\t// Same value: return 
success\n\t\treturn true;\n\t}\n\n\t// Prepare WHERE id = val\n\tconst Condition conditionKey(Equals);\n\tWhere wKey(\"key\", conditionKey, categoryName);\n\n\t// Prepare jsonProperties with one string vector: itemName, value\n\tvector<string> jsonPaths;\n\tjsonPaths.push_back(itemName);\n\tjsonPaths.push_back(\"value\");\n\tJSONProperties jsonValues;\n\tjsonValues.push_back(JSONProperty(\"value\", jsonPaths, newValue));\n\n\ttry\n\t{\n\t\t// UPDATE fledge.configuration SET value = JSON(jsonValues)\n\t\t// WHERE key = 'categoryName';\n\t\treturn (!m_storage->updateTable(\"configuration\", jsonValues, wKey)) ? false : true;\n\t}\n\tcatch (std::exception* e)\n\t{\n\t\tdelete e;\n\t\t// Return failure\n\t\treturn false;\n\t}\n\tcatch (...)\n\t{\n\t\t// Return failure\n\t\treturn false;\n\t}\n}\n\n/**\n * Add child categories under a given (parent) category\n *\n * @param parentCategoryName\tThe parent category name\n * @param childCategories\tThe child categories list (JSON array)\n * @return\t\t\tThe JSON string with all (old and new) child\n *\t\t\t\tcategories of the parent category name\n * @throw\t\t\tChildCategoriesEx exception\n * @throw\t\t\tExistingChildCategories exception\n * @throw\t\t\tNoSuchCategory exception\n */\nstring ConfigurationManager::addChildCategory(const string& parentCategoryName,\n\t\t\t\t\t      const string& childCategories) const\n{\n\t// Check first that the parent category exists\n\ttry\n\t{\n\t\tthis->getCategoryAllItems(parentCategoryName);\n\t}\n\tcatch (...)\n\t{\n\t\tthrow NoSuchCategory();\n\t}\n\n\t// Parse JSON input\n\tDocument doc;\n\t// Parse the prepared input category with \"value\" and \"default\"\n\tdoc.Parse(childCategories.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tthrow ChildCategoriesEx();\n\t}\n\n\tValue& children = doc[\"children\"];\n\tif (!children.IsArray())\n\t{\n\t\tthrow ChildCategoriesEx();\n\t}\n\n\tunsigned int rowsAdded = 0;\n\n\tResultSet* categoryItems = 0;\n\n\tfor (Value::ConstValueIterator itr = 
children.Begin(); itr != children.End(); ++itr)\n\t{\n\t\tif (!(*itr).IsString())\n\t\t{\n\t\t\tthrow ChildCategoriesEx();\n\t\t}\n\n\t\tstring childCategory = (*itr).GetString();\n\n\t\t// Note: all \"children\" categories must exist\n\t\t// SELECT * FROM fledge.configuration WHERE key = categoryName\n\t\tconst Condition conditionKey(Equals);\n\t\tWhere *wKey = new Where(\"key\", conditionKey, childCategory);\n\t\tQuery qKey(wKey);\n\n\t\ttry\n\t\t{\n\t\t\t// Query via storage client\n\t\t\tcategoryItems = m_storage->queryTable(\"configuration\", qKey);\n\t\t\tif (!categoryItems)\n\t\t\t{\n\t\t\t\tthrow ChildCategoriesEx();\n\t\t\t}\n\n\t\t\t// Child category not found, throw exception\n\t\t\tif (!categoryItems->rowCount())\n\t\t\t{\n\t\t\t\tthrow NoSuchCategory();\n\t\t\t}\n\n\t\t\t// Free result set\n\t\t\tdelete categoryItems;\n\n\t\t\t// Check whether parent/child row already exists\n\t\t\tconst Condition conditionParent(Equals);\n\t\t\t// Build the parent AND child WHERE\n\t\t\tWhere *wChild = new Where(\"child\", conditionParent, childCategory);\n\t\t\tWhere *wParent = new Where(\"parent\", conditionParent, parentCategoryName, wChild);\n\t\t\tQuery qParentChild(wParent);\n\n\t\t\t// Query via storage client\n\t\t\tcategoryItems = m_storage->queryTable(\"category_children\", qParentChild);\n\t\t\tif (!categoryItems)\n\t\t\t{\n\t\t\t\tthrow ChildCategoriesEx();\n\t\t\t}\n\n\t\t\t// Parent/child has been found: skip the insert\n\t\t\tif (categoryItems->rowCount())\n\t\t\t{\n\t\t\t\t// Free result set\n\t\t\t\tdelete categoryItems;\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t// Free result set\n\t\t\tdelete categoryItems;\n\n\t\t\t// Prepare insert values for insertTable\n\t\t\tInsertValues newCategory;\n\t\t\tnewCategory.push_back(InsertValue(\"parent\", parentCategoryName));\n\t\t\tnewCategory.push_back(InsertValue(\"child\", (*itr).GetString()));\n\n\t\t\t/**\n\t\t\t * Do the insert:\n\t\t\t * we don't check for failed result as we checked\n\t\t\t * 
parent/child presence above\n\t\t\t */\n\t\t\tm_storage->insertTable(\"category_children\", newCategory);\n\n\t\t\t// Increment counter\n\t\t\trowsAdded++;\n\t\t}\n\t\tcatch (std::exception* e)\n\t\t{\n\t\t\tdelete e;\n\t\t\tif (categoryItems)\n\t\t\t{\n\t\t\t\t// Free result set\n\t\t\t\tdelete categoryItems;\n\t\t\t}\n\t\t\tthrow ChildCategoriesEx();\n\t\t}\n\t\tcatch (NoSuchCategory& e)\n\t\t{\n\t\t\tif (categoryItems)\n\t\t\t{\n\t\t\t\t// Free result set\n\t\t\t\tdelete categoryItems;\n\t\t\t}\n\t\t\tthrow;\n\t\t}\n\t\tcatch (...)\n\t\t{\n\t\t\tif (categoryItems)\n\t\t\t{\n\t\t\t\t// Free result set\n\t\t\t\tdelete categoryItems;\n\t\t\t}\n\t\t\tthrow ChildCategoriesEx();\n\t\t}\n\t}\n\n\t// If no rows have been inserted, then abort\n\tif (!rowsAdded)\n\t{\n\t\tthrow ExistingChildCategories();\n\t}\n\n\t// Fetch current children of parentCategoryName;\n\treturn this->fetchChildCategories(parentCategoryName);\n}\n\n/**\n * Fetch all child categories of a given parent one\n * @param parentCategoryName\tThe given category name\n * @return\t\t\tJSON array string with child categories\n * @throw\t\t\tChildCategoriesEx exception\n */\nstring ConfigurationManager::fetchChildCategories(const string& parentCategoryName) const\n{\n\tostringstream currentChildCategories;\n\n\t// Fetch current children of parentCategoryName;\n\t// SELECT * FROM fledge.category_children WHERE parent = 'parentCategoryName'\n\tconst Condition conditionCurrent(Equals);\n\tWhere *wCurrent = new Where(\"parent\", conditionCurrent, parentCategoryName);\n\tQuery qCurrent(wCurrent);\n\n\tResultSet* newCategories = 0;\n\ttry\n\t{\n\t\t// Fetch all child categories\n\t\tnewCategories = m_storage->queryTable(\"category_children\", qCurrent);\n\t\tif (!newCategories)\n\t\t{\n\t\t\tthrow ChildCategoriesEx();\n\t\t}\n\t\t// Build the JSON output\n\t\tcurrentChildCategories << \"{ \\\"children\\\" : [ \";\n\n\t\t// If no child categories return empty array\n\t\tif 
(!newCategories->rowCount())\n\t\t{\n\t\t\tdelete newCategories;\n\t\t\tcurrentChildCategories << \" ] }\";\n\n\t\t\treturn currentChildCategories.str();\n\t\t}\n\n\t\t// We have some data\n\t\tResultSet::RowIterator it = newCategories->firstRow();\n\t\tdo\n\t\t{\n\t\t\tResultSet::Row* row = *it;\n\t\t\tif (!row)\n\t\t\t{\n\t\t\t\tthrow ChildCategoriesEx();\n\t\t\t}\n\n\t\t\t// Add the child category to output result\n\t\t\tResultSet::ColumnValue* child = row->getColumn(\"child\");\n\t\t\tcurrentChildCategories << \"\\\"\";\n\t\t\tcurrentChildCategories << child->getString();\n\t\t\tcurrentChildCategories << \"\\\"\";\n\t\t\tif (!newCategories->isLastRow(it))\n\t\t\t{\n\t\t\t\tcurrentChildCategories << \", \";\n\t\t\t}\n\t\t} while (!newCategories->isLastRow(it++));\n\n\t\tcurrentChildCategories << \" ] }\";\n\n\t\t// Free result set\n\t\tdelete newCategories;\n\n\t\t// Return child categories\n\t\treturn currentChildCategories.str();\n\t}\n\tcatch (std::exception* e)\n\t{\n\t\tdelete e;\n\t\tif (newCategories)\n\t\t{\n\t\t\tdelete newCategories;\n\t\t}\n\t\tthrow ChildCategoriesEx();\n\t}\n\tcatch (...)\n\t{\n\t\tif (newCategories)\n\t\t{\n\t\t\tdelete newCategories;\n\t\t}\n\t\tthrow ChildCategoriesEx();\n\t}\n}\n\n/**\n * Get all the child categories of a given category name\n *\n * @param parentCategoryName\tThe given category name\n * @return\t\t\tA ConfigCategories object\n *\t\t\t\twith child categories (name and description)\n * @throw\t\t\tChildCategoriesEx exception\n */\nConfigCategories ConfigurationManager::getChildCategories(const string& parentCategoryName) const\n{\n\tConfigCategories categories;\n\n\ttry\n\t{\n\t\t// Fetch all child categories\n\t\tstring childCategories = this->fetchChildCategories(parentCategoryName);\n\n\t\t// Parse JSON input\n\t\tDocument doc;\n\t\t// Parse the prepared input category with \"value\" and 
\"default\"\n\t\tdoc.Parse(childCategories.c_str());\n\n\t\tif (doc.HasParseError() || !doc.HasMember(\"children\"))\n\t\t{\n\t\t\tthrow ChildCategoriesEx();\n\t\t}\n\n\t\t// Get child categories\n\t\tValue& children = doc[\"children\"];\n\t\tif (!children.IsArray())\n\t\t{\n\t\t\tthrow ChildCategoriesEx();\n\t\t}\n\n\t\t/**\n\t\t * For each element fetch the category description\n\t\t * and add the entry to the ConfigCategories result\n\t\t */\n\t\tfor (Value::ConstValueIterator itr = children.Begin(); itr != children.End(); ++itr)\n\t\t{\n\t\t\tstring categoryDesc;\n\t\t\t// Each child category name must be a string\n\t\t\tif (!(*itr).IsString())\n\t\t\t{\n\t\t\t\tthrow ChildCategoriesEx();\n\t\t\t}\n\t\t\tstring categoryName = (*itr).GetString();\n\n\t\t\t// Fetch description\n\t\t\tcategoryDesc = this->getCategoryDescription(categoryName);\n\t\t\tConfigCategoryDescription *value = new ConfigCategoryDescription(categoryName,\n\t\t\t\t\t\t\t\t\t\t\t categoryDesc);\n\t\t\t// Add current row data to categories\n\t\t\tcategories.addCategoryDescription(value);\n\t\t}\n\n\t\t// Return ConfigCategories object\n\t\treturn categories;\n\t}\n\tcatch (std::exception* e)\n\t{\n\t\tdelete e;\n\t\tthrow ChildCategoriesEx();\n\t}\n\tcatch (...)\n\t{\n\t\tthrow ChildCategoriesEx();\n\t}\n}\n\n/**\n * Get the category description of a given category\n *\n * @param categoryName\tThe given category\n * @return\t\tThe category description\n */\nstring ConfigurationManager::getCategoryDescription(const string& categoryName) const\n{\n\t// Note:\n\t// any thrown exception must be caught by the caller\n\tConfigCategory currentCategory = this->getCategoryAllItems(categoryName);\n\treturn currentCategory.getDescription();\n}\n\n/**\n * Remove the link between a child category and its parent.\n * The child becomes a root category when the link is broken.\n * Note the child category still exists after this call is made.\n *\n * @param parentCategoryName\tThe parent category\n * @param 
childCategory\t\tThe child category to remove\n * @return\t\t\tJSON array string with remaining\n *\t\t\t\tchild categories\n * @throw\t\t\tChildCategoriesEx exception\n */\nstring ConfigurationManager::deleteChildCategory(const string& parentCategoryName,\n\t\t\t\t\t\t const string& childCategory) const\n{\n\tconst Condition conditionParent(Equals);\n\t// Build the parent AND child WHERE\n\tWhere* wChild = new Where(\"child\", conditionParent, childCategory);\n\tWhere* wParent = new Where(\"parent\", conditionParent, parentCategoryName, wChild);\n\tQuery qParentChild(wParent);\n\n\ttry\n\t{\n\t\t// Do the delete\n\t\tint deletedRows = m_storage->deleteTable(\"category_children\", qParentChild);\n\t\tif (deletedRows == -1)\n\t\t{\n\t\t\tthrow ChildCategoriesEx();\n\t\t}\n\t\treturn this->fetchChildCategories(parentCategoryName);\n\t}\n\tcatch (std::exception* e)\n\t{\n\t\tdelete e;\n\t\tthrow ChildCategoriesEx();\n\t}\n\tcatch (...)\n\t{\n\t\tthrow ChildCategoriesEx();\n\t}\n}\n\n/**\n * Unset the category item value.\n *\n * @param categoryName\t\tThe category name\n * @param itemName\t\tThe item name\n * @return\t\t\tJSON string of category item\n * @throw\t\t\tConfigCategoryEx exception\n * @throw\t\t\tNoSuchCategoryItem exception\n */\nstring ConfigurationManager::deleteCategoryItemValue(const string& categoryName,\n\t\t\t\t\t\t     const string& itemName) const\n{\n\ttry\n\t{\n\t\t// Set the empty value\n\t\tif (!this->setCategoryItemValue(categoryName, itemName, \"\"))\n\t\t{\n\t\t\tthrow ConfigCategoryEx();\n\t\t}\n\t\t// Return category item\n\t\treturn this->getCategoryItem(categoryName, itemName);\n\t}\n\tcatch (NoSuchCategoryItem& e)\n\t{\n\t\tthrow;\n\t}\n\tcatch (...)\n\t{\n\t\tthrow ConfigCategoryEx();\n\t}\n}\n\n/**\n * Delete a category from the database.\n * Also remove the link between a child category and its parent.\n *\n * @param categoryName\tThe category being deleted\n * @return\t\tThe remaining config categories as object\n * 
@throw\t\tNoSuchCategory exception\n * @throw\t\tConfigCategoryEx exception\n */\nConfigCategories ConfigurationManager::deleteCategory(const string& categoryName) const\n{\n\tconst Condition conditionDelete(Equals);\n\t// Build WHERE key = 'categoryName'\n\tWhere* wDelete = new Where(\"key\", conditionDelete, categoryName);\n\n\t// Build the WHERE parent = 'categoryName'\n\tWhere* wParent = new Where(\"parent\", conditionDelete, categoryName);\n\n\t// DELETE from configuration\n\tQuery qDelete(wDelete);\n\t// DELETE from category_children\n\tQuery qParent(wParent);\n\n\ttry\n\t{\n\t\t// Do the category delete\n\t\tint deletedRows = m_storage->deleteTable(\"configuration\", qDelete);\n\t\tif (deletedRows == 0)\n\t\t{\n\t\t\tthrow NoSuchCategory();\n\t\t}\n\t\telse if (deletedRows == -1)\n\t\t{\n\t\t\tthrow ConfigCategoryEx();\n\t\t}\n\n\t\t// Do the child categories delete\n\t\tdeletedRows = m_storage->deleteTable(\"category_children\", qParent);\n\t\tif (deletedRows < 0)\n\t\t{\n\t\t\tthrow ConfigCategoryEx();\n\t\t}\n\n\t\treturn getAllCategoryNames();\n\t}\n\tcatch (NoSuchCategory& ex)\n\t{\n\t\tthrow;\n\t}\n\tcatch (...)\n\t{\n\t\tthrow ConfigCategoryEx();\n\t}\n}\n"
  },
  {
    "path": "C/services/core/core_management_api.cpp",
    "content": "/*\n * Fledge core microservice management API.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <core_management_api.h>\n#include <service_registry.h>\n#include <rapidjson/document.h>\n#include <rapidjson/writer.h>\n\nusing namespace std;\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\nusing namespace rapidjson;\n\nCoreManagementApi *CoreManagementApi::m_instance = 0;\n\n/**\n * Wrapper for the \"fake\" register category interest call\n *\n * TODO implement the missing functionality\n * This method is just a fake returning a fixed id to the caller\n */\nvoid registerInterestWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t     shared_ptr<HttpServer::Request> request)\n{\n\tstring payload(\"{\\\"id\\\" : \\\"1232abcd-8889-a568-0001-aabbccdd\\\"}\");\n\t*response << \"HTTP/1.1 200 OK\\r\\nContent-Length: \" << payload.length() << \"\\r\\n\"\n\t\t  <<  \"Content-type: application/json\\r\\n\\r\\n\" << payload;\n}\n\nvoid replaceSubstr(std::string& str, const std::string& from, const std::string& to) {\n\tsize_t start_pos = 0;\n\twhile ((start_pos = str.find(from, start_pos)) != std::string::npos) {\n\t\tstr.replace(start_pos, from.length(), to);\n\t\tstart_pos += to.length();\n\t}\n}\n\n/**\n * Easy wrapper for getting a specific service.\n * It is called to get storage service details:\n * example: GET /fledge/service?name=Fledge%20Storage\n *\n * Immediate utility is to get the management_port of\n * the storage service when running tests.\n * TODO fully implement the getService API call\n */\nvoid getServiceWrapper(shared_ptr<HttpServer::Response> response,\n\t\t       shared_ptr<HttpServer::Request> request)\n{\n\n\t// Get QUERY STRING from request\n\tstring queryString = request->query_string;\n\n\tsize_t pos = queryString.find(\"name=\");\n\tif (pos != std::string::npos)\n\t{\n\t\tstring serviceName = queryString.substr(pos + 
strlen(\"name=\"));\n\t\t// replace %20 with SPACE\n\t\t/*serviceName = std::regex_replace(serviceName,\n\t\t\t\t\t\t std::regex(\"%20\"),\n\t\t\t\t\t\t \" \"); */\n\t\t// RHEL 7.6 gcc pkg \"gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)\"\n\t\t// doesn't support std::regex and std::regex_replace\n\t\treplaceSubstr(serviceName, \"%20\", \" \");\n\t\tServiceRegistry* registry = ServiceRegistry::getInstance();\n\t\tServiceRecord* foundService = registry->findService(serviceName);\n\t\tstring payload;\n\n\t\tif (foundService)\n\t\t{\n\t\t\t// Set JSON string with service details\n\t\t\t// Note: the service UUID is missing at the time being\n\t\t\t// TODO add all API required fields\n\t\t\tfoundService->asJSON(payload);\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Return not found message\n\t\t\tpayload = \"{ \\\"message\\\": \\\"error: service name not found\\\" }\";\n\t\t}\n\n\t\t*response << \"HTTP/1.1 200 OK\\r\\nContent-Length: \" << payload.length() << \"\\r\\n\"\n\t\t\t  <<  \"Content-type: application/json\\r\\n\\r\\n\" << payload;\n\t}\n\telse\n\t{\n\t\tstring errorMsg(\"{ \\\"message\\\": \\\"error: only find service by name is supported right now\\\" }\");\n\t\t*response << \"HTTP/1.1 200 OK\\r\\nContent-Length: \" << errorMsg.length() << \"\\r\\n\"\n\t\t\t  <<  \"Content-type: application/json\\r\\n\\r\\n\" << errorMsg;\n\t}\n}\n\n/**\n * Wrapper for service registration method\n */\nvoid registerMicroServiceWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t\t shared_ptr<HttpServer::Request> request)\n{\n\tCoreManagementApi *api = CoreManagementApi::getInstance();\n\tapi->registerMicroService(response, request);\n}\n\n/**\n * Wrapper for service unregistration method\n */\nvoid unRegisterMicroServiceWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t\t   shared_ptr<HttpServer::Request> request)\n{\n\tCoreManagementApi *api = CoreManagementApi::getInstance();\n\tapi->unRegisterMicroService(response, request);\n}\n\n/**\n * Wrapper for get 
all categories\n */\nvoid getAllCategoriesWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t     shared_ptr<HttpServer::Request> request)\n{\n\tCoreManagementApi *api = CoreManagementApi::getInstance();\n\tapi->getAllCategories(response, request);\n}\n\n/**\n * Wrapper for get category name\n */\nvoid getCategoryWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\tshared_ptr<HttpServer::Request> request)\n{\n\tCoreManagementApi *api = CoreManagementApi::getInstance();\n\tapi->getCategory(response, request);\n}\n\n/**\n * Wrapper for get category item.\n * Also handles the special item name 'children',\n * returning child categories instead of the given item.\n *\n * GET /fledge/service/category/{categoryName}/{itemName}\n * returns JSON string with item properties\n * GET /fledge/service/category/{categoryName}/children\n * returns JSON string with child categories\n */\nvoid getCategoryItemWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t    shared_ptr<HttpServer::Request> request)\n{\n\tCoreManagementApi *api = CoreManagementApi::getInstance();\n\tapi->getCategoryItem(response, request);\n}\n\n/**\n * Wrapper for delete a category item value\n */\nvoid deleteCategoryItemValueWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t\t    shared_ptr<HttpServer::Request> request)\n{\n\tCoreManagementApi *api = CoreManagementApi::getInstance();\n\tapi->deleteCategoryItemValue(response, request);\n}\n\n/**\n * Wrapper for set category item value\n */\nvoid setCategoryItemValueWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t\t shared_ptr<HttpServer::Request> request)\n{\n\tCoreManagementApi *api = CoreManagementApi::getInstance();\n\tapi->setCategoryItemValue(response, request);\n}\n\n/**\n * Wrapper for delete category\n */\nvoid deleteCategoryWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t   shared_ptr<HttpServer::Request> request)\n{\n\tCoreManagementApi *api = 
CoreManagementApi::getInstance();\n\tapi->deleteCategory(response, request);\n}\n\n/**\n * Wrapper for delete child category\n */\nvoid deleteChildCategoryWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t\tshared_ptr<HttpServer::Request> request)\n{\n\tCoreManagementApi *api = CoreManagementApi::getInstance();\n\tapi->deleteChildCategory(response, request);\n}\n\n/**\n * Wrapper for create category\n */\nvoid createCategoryWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t   shared_ptr<HttpServer::Request> request)\n{\n\tCoreManagementApi *api = CoreManagementApi::getInstance();\n\tapi->createCategory(response, request);\n}\n\n/**\n * Wrapper for create child categories\n */\nvoid addChildCategoryWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t     shared_ptr<HttpServer::Request> request)\n{\n\tCoreManagementApi *api = CoreManagementApi::getInstance();\n\tapi->addChildCategory(response, request);\n}\n\n/**\n * Received a GET /fledge/service/category/{categoryName}\n */\nvoid CoreManagementApi::getCategory(shared_ptr<HttpServer::Response> response,\n\t\t\t\t    shared_ptr<HttpServer::Request> request)\n{\n\ttry\n\t{\n\t\tstring categoryName = request->path_match[CATEGORY_NAME_COMPONENT];\n\t\t// Fetch category items\n\t\tConfigCategory category = m_config->getCategoryAllItems(categoryName);\n\n\t\t// Build JSON output\n\t\tostringstream convert;\n\t\tconvert << category.itemsToJSON();\n\n\t\t// Send JSON data to client\n\t\trespond(response, convert.str());\n\t}\n\tcatch (NoSuchCategory& ex)\n\t{\n\t\t// Return proper error message\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"get category\",\n\t\t\t\t    ex.what());\n\t}\n\t// TODO: also catch the exceptions from ConfigurationManager\n\t// and return a proper message\n\tcatch (exception& ex)\n\t{\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Received a GET 
/fledge/service/category/{categoryName}/{itemName}\n */\nvoid CoreManagementApi::getCategoryItem(shared_ptr<HttpServer::Response> response,\n\t\t\t\t\tshared_ptr<HttpServer::Request> request)\n{\n\ttry\n\t{\n\t\tstring categoryName = request->path_match[CATEGORY_NAME_COMPONENT];\n\t\tstring itemName = request->path_match[CATEGORY_ITEM_COMPONENT];\n\n\t\tif (itemName.compare(\"children\") == 0)\n\t\t{\n\t\t\t// Fetch child categories\n\t\t\tConfigCategories childCategories = m_config->getChildCategories(categoryName);\n\t\t\t// Send JSON data to client\n\t\t\trespond(response, \"{ \\\"categories\\\" : \" + childCategories.toJSON() + \" }\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Fetch category item\n\t\t\tstring categoryItem = m_config->getCategoryItem(categoryName, itemName);\n\t\t\t// Send JSON data to client\n\t\t\trespond(response, categoryItem);\n\t\t}\n\t}\n\t// Catch the exceptions from ConfigurationManager\n\t// and return a proper message\n\tcatch (ChildCategoriesEx& ex)\n\t{\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"get child categories\",\n\t\t\t\t    ex.what());\n\t}\n\tcatch (NoSuchCategory& ex)\n\t{\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"get category item\",\n\t\t\t\t    ex.what());\n\t}\n\tcatch (ConfigCategoryEx& ex)\n\t{\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"get category item\",\n\t\t\t\t    ex.what());\n\t}\n\tcatch (CategoryDetailsEx& ex)\n\t{\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"get category item\",\n\t\t\t\t    ex.what());\n\t}\n\tcatch (exception& ex)\n\t{\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Received a GET /fledge/service/category\n */\nvoid CoreManagementApi::getAllCategories(shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t 
shared_ptr<HttpServer::Request> request)\n{\n\ttry\n\t{\n\t\t// Fetch all categories\n\t\tConfigCategories allCategories = m_config->getAllCategoryNames();\n\n\t\t// Build JSON output\n\t\tostringstream convert;\n\t\tconvert << \"{ \\\"categories\\\" : [ \";\n\t\tconvert << allCategories.toJSON();\n\t\tconvert << \" ] }\";\n\n\t\t// Send JSON data to client\n\t\trespond(response, convert.str());\n\t}\n\t// TODO: also catch the exceptions from ConfigurationManager\n\t// and return a proper message\n\tcatch (exception& ex)\n\t{\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Wrapper function for the default resource call.\n * This is called whenever an unrecognised entry point call is received.\n */\nvoid defaultWrapper(shared_ptr<HttpServer::Response> response,\n\t\t    shared_ptr<HttpServer::Request> request)\n{\n\tCoreManagementApi *api = CoreManagementApi::getInstance();\n\tapi->defaultResource(response, request);\n}\n\n\n/**\n * Handle a bad URL endpoint call\n */\nvoid CoreManagementApi::defaultResource(shared_ptr<HttpServer::Response> response,\n\t\t\t\t\tshared_ptr<HttpServer::Request> request)\n{\n\tstring payload(\"{ \\\"error\\\" : \\\"Unsupported URL: \" + request->path + \"\\\" }\");\n\trespond(response,\n\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\tpayload);\n}\n\n/**\n * Construct a microservices management API manager class\n */\nCoreManagementApi::CoreManagementApi(const string& name,\n\t\t\t\t     const unsigned short port) : ManagementApi(name, port)\n{\n\n\t// Setup supported URL and HTTP methods\n\t// Services\n\tm_server->resource[REGISTER_SERVICE][\"POST\"] = registerMicroServiceWrapper;\n\tm_server->resource[UNREGISTER_SERVICE][\"DELETE\"] = unRegisterMicroServiceWrapper;\n\n\tm_server->resource[GET_SERVICE][\"GET\"] = getServiceWrapper;\n\n\t// Register category interest\n\t// TODO implement this, right now it's just a fake\n\tm_server->resource[REGISTER_CATEGORY_INTEREST][\"POST\"] = registerInterestWrapper;\n\n\t// Default 
wrapper\n\tm_server->default_resource[\"GET\"] = defaultWrapper;\n\tm_server->default_resource[\"PUT\"] = defaultWrapper;\n\tm_server->default_resource[\"POST\"] = defaultWrapper;\n\tm_server->default_resource[\"DELETE\"] = defaultWrapper;\n\tm_server->default_resource[\"HEAD\"] = defaultWrapper;\n\tm_server->default_resource[\"CONNECT\"] = defaultWrapper;\n\n\t// Set the instance\n\tm_instance = this;\n}\n\n/**\n * Return the singleton instance of the core management interface\n *\n * Note if one has not been explicitly created then this will\n * return 0.\n */\nCoreManagementApi *CoreManagementApi::getInstance()\n{\n\treturn m_instance;\n}\n\n/**\n * Received a service registration request\n */\nvoid CoreManagementApi::registerMicroService(shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t     shared_ptr<HttpServer::Request> request)\n{\nostringstream convert;\nstring uuid, payload, responsePayload;\n\n\ttry {\n\t\tServiceRegistry *registry = ServiceRegistry::getInstance();\n\t\tpayload = request->content.string();\n\n\t\tDocument doc;\n\t\tif (doc.Parse(payload.c_str()).HasParseError())\n\t\t{\n\t\t\t// Return a proper error message\n\t\t\terrorResponse(response,\n\t\t\t\t      SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t      \"register service\",\n\t\t\t\t      \"failure while parsing JSON data\");\n\t\t\treturn;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring name, type, protocol, address;\n\t\t\tunsigned short port = 0, managementPort = 0;\n\t\t\tif (doc.HasMember(\"name\"))\n\t\t\t{\n\t\t\t\tname = string(doc[\"name\"].GetString());\n\t\t\t}\n\t\t\tif (doc.HasMember(\"type\"))\n\t\t\t{\n\t\t\t\ttype = string(doc[\"type\"].GetString());\n\t\t\t}\n\t\t\tif (doc.HasMember(\"address\"))\n\t\t\t{\n\t\t\t\taddress = string(doc[\"address\"].GetString());\n\t\t\t}\n\t\t\tif (doc.HasMember(\"protocol\"))\n\t\t\t{\n\t\t\t\tprotocol = string(doc[\"protocol\"].GetString());\n\t\t\t}\n\t\t\tif (doc.HasMember(\"service_port\"))\n\t\t\t{\n\t\t\t\tport = doc[\"service_port\"].GetUint();\n\t\t\t}\n\t\t\tif (doc.HasMember(\"management_port\"))\n\t\t\t{\n\t\t\t\tmanagementPort = doc[\"management_port\"].GetUint();\n\t\t\t}\n\t\t\tServiceRecord *srv = new 
ServiceRecord(name,\n\t\t\t\t\t\t\t\ttype,\n\t\t\t\t\t\t\t\tprotocol,\n\t\t\t\t\t\t\t\taddress,\n\t\t\t\t\t\t\t\tport,\n\t\t\t\t\t\t\t\tmanagementPort);\n\t\t\tif (!registry->registerService(srv))\n\t\t\t{\n\t\t\t\terrorResponse(response,\n\t\t\t\t\t      SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t\t      \"register service\",\n\t\t\t\t\t      \"Failed to register service\");\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t// Setup configuration API entry points\n\t\t\tif (type.compare(\"Storage\") == 0)\n\t\t\t{\n\t\t\t\t/**\n\t\t\t\t * Storage layer is registered\n\t\t\t\t * Setup ConfigurationManager instance and URL entry points\n\t\t\t\t */\n\t\t\t\tif (!getConfigurationManager(address, port))\n\t\t\t\t{\n\t\t\t\t\terrorResponse(response,\n\t\t\t\t\t\t      SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t\t\t      \"ConfigurationManager\",\n\t\t\t\t\t\t      \"Failed to connect to storage service\");\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\t// Add Configuration Manager URL entry points\n\t\t\t\tsetConfigurationEntryPoints();\n\t\t\t}\n\n\t\t\t// Set service uuid\n\t\t\tuuid = registry->getUUID(srv);\n\t\t}\n\n\t\tconvert << \"{ \\\"id\\\" : \\\"\" << uuid << \"\\\", \";\n\t\tconvert << \"\\\"message\\\" : \\\"Service registered successfully\\\"\";\n\t\tconvert << \" }\";\n\t\tresponsePayload = convert.str();\n\t\trespond(response, responsePayload);\n\t} catch (exception& ex) {\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Received a service unregister request\n */\nvoid CoreManagementApi::unRegisterMicroService(shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t       shared_ptr<HttpServer::Request> request)\n{\nostringstream convert;\n\n\ttry {\n\t\tServiceRegistry *registry = ServiceRegistry::getInstance();\n\t\tstring uuid = request->path_match[UUID_COMPONENT];\n\n\t\tif (registry->unRegisterService(uuid))\n\t\t{\n\t\t\tconvert << \"{ \\\"id\\\" : \\\"\" << uuid << \"\\\",\";\n\t\t\tconvert << \"\\\"message\\\" : \\\"Service 
unregistered successfully\\\"\";\n\t\t\tconvert << \" }\";\n\t\t\tstring payload = convert.str();\n\t\t\trespond(response, payload);\n\t\t}\n\t\telse\n\t\t{\n\t\t\terrorResponse(response,\n\t\t\t\t      SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t      \"unregister service\",\n\t\t\t\t      \"Failed to unregister service\");\n\t\t}\n\t} catch (exception& ex) {\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Send back an error response\n *\n * @param response\tThe HTTP Response\n * @param statusCode\tThe HTTP status code\n * @param entryPoint\tThe entry point in the API\n * @param msg\t\tThe actual error message\n */\nvoid CoreManagementApi::errorResponse(shared_ptr<HttpServer::Response> response,\n\t\t\t\t      SimpleWeb::StatusCode statusCode,\n\t\t\t\t      const string& entryPoint,\n\t\t\t\t      const string& msg)\n{\nostringstream convert;\n\n\tconvert << \"{ \\\"message\\\" : \\\"\" << msg << \"\\\", \";\n\tconvert << \"\\\"entryPoint\\\" : \\\"\" << entryPoint << \"\\\" }\";\n\trespond(response, statusCode, convert.str());\n}\n\n/**\n * Handle an exception by sending back an internal error\n *\n * @param response\tThe HTTP response\n * @param ex\t\tThe exception that caused the error\n */\nvoid CoreManagementApi::internalError(shared_ptr<HttpServer::Response> response,\n\t\t\t\t      const exception& ex)\n{\nstring payload = \"{ \\\"Exception\\\" : \\\"\";\n\n\tpayload = payload + string(ex.what());\n\tpayload = payload + \"\\\" }\";\n\n\tLogger *logger = Logger::getLogger();\n\tlogger->error(\"CoreManagementApi Internal Error: %s\\n\", ex.what());\n\trespond(response,\n\t\tSimpleWeb::StatusCode::server_error_internal_server_error,\n\t\tpayload);\n}\n\n\n/**\n * HTTP response method\n */\nvoid CoreManagementApi::respond(shared_ptr<HttpServer::Response> response,\n\t\t\t\tconst string& payload)\n{\n\t*response << \"HTTP/1.1 200 OK\\r\\nContent-Length: \" << payload.length() << \"\\r\\n\"\n\t\t  <<  
\"Content-type: application/json\\r\\n\\r\\n\" << payload;\n}\n\n/**\n * HTTP response method\n */\nvoid CoreManagementApi::respond(shared_ptr<HttpServer::Response> response,\n\t\t\t\tSimpleWeb::StatusCode statusCode,\n\t\t\t\tconst string& payload)\n{\n\t*response << \"HTTP/1.1 \" << status_code(statusCode)\n\t\t  << \"\\r\\nContent-Length: \" << payload.length() << \"\\r\\n\"\n\t\t  <<  \"Content-type: application/json\\r\\n\\r\\n\" << payload;\n}\n\n/**\n * Instantiate the ConfigurationManager class\n * having the storage service already registered\n *\n * @return\tTrue if ConfigurationManager is set,\n *\t\tFalse otherwise.\n */\nbool CoreManagementApi::getConfigurationManager(const string& address,\n\t\t\t\t\t\tconst unsigned short port)\n{\n\t// Instantiate the ConfigurationManager\n\tif (!(m_config = ConfigurationManager::getInstance(address, port)))\n\t{\n\t\treturn false;\n\t}\n\n\tLogger *logger = Logger::getLogger();\n\tlogger->info(\"Storage service is connected: %s:%d\\n\",\n\t\t     address.c_str(),\n\t\t     port);\n\n\treturn true;\n}\n\n/**\n * Add configuration manager entry points\n */\nvoid CoreManagementApi::setConfigurationEntryPoints()\n{\n\t// Add Configuration Manager entry points\n\tm_server->resource[GET_ALL_CATEGORIES][\"GET\"] = getAllCategoriesWrapper;\n\tm_server->resource[GET_CATEGORY][\"GET\"] = getCategoryWrapper;\n\t// This also handles the 'children' param for child categories\n\tm_server->resource[GET_CATEGORY_ITEM][\"GET\"] = getCategoryItemWrapper;\n\tm_server->resource[DELETE_CATEGORY_ITEM_VALUE][\"DELETE\"] = deleteCategoryItemValueWrapper;\n\tm_server->resource[SET_CATEGORY_ITEM_VALUE][\"PUT\"] = setCategoryItemValueWrapper;\n\tm_server->resource[DELETE_CATEGORY][\"DELETE\"] = deleteCategoryWrapper;\n\tm_server->resource[DELETE_CHILD_CATEGORY][\"DELETE\"] = deleteChildCategoryWrapper;\n\tm_server->resource[CREATE_CATEGORY][\"POST\"] = createCategoryWrapper;\n\tm_server->resource[ADD_CHILD_CATEGORIES][\"POST\"] = 
addChildCategoryWrapper;\n\n\tLogger *logger = Logger::getLogger();\n\tlogger->info(\"ConfigurationManager setup is done.\");\n}\n\n/**\n * Received a DELETE /fledge/service/category/{categoryName}/{configItem}/value\n */\nvoid CoreManagementApi::deleteCategoryItemValue(shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\tshared_ptr<HttpServer::Request> request)\n{\n\tstring categoryName = request->path_match[CATEGORY_NAME_COMPONENT];\n\tstring itemName = request->path_match[CATEGORY_ITEM_COMPONENT];\n\tstring value = request->path_match[ITEM_VALUE_NAME];\n\n\ttry\n\t{\n\t\t// Unset the item value and return the current updated item\n\t\tstring updatedItem = m_config->deleteCategoryItemValue(categoryName,\n\t\t\t\t\t\t\t\t       itemName);\n\t\trespond(response, updatedItem);\n\t}\n\tcatch (NoSuchCategoryItem& ex)\n\t{\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"delete category item value\",\n\t\t\t\t    ex.what());\n\t}\n\tcatch (exception& ex)\n\t{\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Received PUT /fledge/service/category/{categoryName}/{configItem}\n * Payload is {\"value\" : \"some_data\"}\n * Send to client the JSON string of category item properties\n */\nvoid CoreManagementApi::setCategoryItemValue(shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t     shared_ptr<HttpServer::Request> request)\n{\n\ttry\n\t{\n\t\tstring categoryName = request->path_match[CATEGORY_NAME_COMPONENT];\n\t\tstring itemName = request->path_match[CATEGORY_ITEM_COMPONENT];\n\t\tstring value = request->path_match[ITEM_VALUE_NAME];\n\n\t\t// Get PUT data\n\t\tstring payload = request->content.string();\n\n\t\tDocument doc;\n\t\tif (doc.Parse(payload.c_str()).HasParseError() || !doc.HasMember(\"value\"))\n\t\t{\n\t\t\t// Return proper error message\n\t\t\tthis->errorResponse(response,\n\t\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t\t    \"set category item 
value\",\n\t\t\t\t\t    \"failure while parsing JSON data\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// TODO: it can be a JSON object, transform it to a string\n\t\t\tstring theValue = doc[\"value\"].GetString();\n\n\t\t\t// Set the new value\n\t\t\tif (!m_config->setCategoryItemValue(categoryName,\n\t\t\t\t\t\t\t     itemName,\n\t\t\t\t\t\t\t     theValue))\n\t\t\t{\n\t\t\t\t// Return proper error message\n\t\t\t\tthis->errorResponse(response,\n\t\t\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t\t\t    \"set category item value\",\n\t\t\t\t\t\t    \"failure while writing to storage layer\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// Send JSON data\n\t\t\t\tthis->respond(response,\n\t\t\t\t\t      m_config->getCategoryItem(categoryName,\n\t\t\t\t\t\t\t\t\titemName));\n\t\t\t}\n\t\t}\n\t}\n\tcatch (NoSuchCategoryItem& ex)\n\t{\n\t\t// Return proper error message\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"set category item value\",\n\t\t\t\t    ex.what());\n\t}\n\tcatch (exception& ex)\n\t{\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Delete a config category\n * Received DELETE /fledge/service/category/{categoryName}\n * Send to client the JSON string of all remaining categories\n */\nvoid CoreManagementApi::deleteCategory(shared_ptr<HttpServer::Response> response,\n\t\t\t\t       shared_ptr<HttpServer::Request> request)\n{\n\ttry\n\t{\n\t\tstring categoryName = request->path_match[CATEGORY_NAME_COMPONENT];\n\t\tConfigCategories updatedCategories = m_config->deleteCategory(categoryName);\n\n\t\tthis->respond(response,\n\t\t\t      \"{ \\\"categories\\\" : \" + updatedCategories.toJSON() + \" }\");\n\t\treturn;\n\t}\n\tcatch (NoSuchCategory& ex)\n\t{\n\t\t// Return proper error message\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"delete category\",\n\t\t\t\t    ex.what());\n\t}\n\tcatch (exception& 
ex)\n\t{\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Delete child categories of a config category\n * Received DELETE /fledge/service/category/{categoryName}/children/{childCategory}\n * Send to client the JSON string of all remaining categories\n */\nvoid CoreManagementApi::deleteChildCategory(shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t    shared_ptr<HttpServer::Request> request)\n{\n\ttry\n\t{\n\t\tstring categoryName = request->path_match[CATEGORY_NAME_COMPONENT];\n\t\tstring childCategoryName = request->path_match[CHILD_CATEGORY_COMPONENT];\n\n\t\t// Remove the selected child category from the parent category\n\t\tstring updatedChildren = m_config->deleteChildCategory(categoryName,\n\t\t\t\t\t\t\t\t       childCategoryName);\n\t\tthis->respond(response, updatedChildren);\n\t}\n\tcatch (ChildCategoriesEx& ex)\n\t{\n\t\t// Return proper error message\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"delete child category\",\n\t\t\t\t    ex.what());\n\t}\n\tcatch (exception& ex)\n\t{\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Create a new configuration category\n * Received POST /fledge/service/category\n *\n * Send to client the JSON string of the new category's items\n */\nvoid CoreManagementApi::createCategory(shared_ptr<HttpServer::Response> response,\n\t\t\t\t       shared_ptr<HttpServer::Request> request)\n{\n\ttry\n\t{\n\t\tbool keepOriginalItems = false;\n\n\t\t// Get query_string\n\t\tstring queryString = request->query_string;\n\n\t\tsize_t pos = queryString.find(\"keep_original_items\");\n\t\tif (pos != std::string::npos)\n\t\t{\n\t\t\tstring paramValue = queryString.substr(pos + strlen(\"keep_original_items=\"));\n\n\t\t\tfor (auto &c: paramValue) c = tolower(c);\n\n\t\t\tif (paramValue.compare(\"true\") == 0)\n\t\t\t{\n\t\t\t\tkeepOriginalItems = true;\n\t\t\t}\n\t\t}\n\n\t\t// Get POST data\n\t\tstring payload = request->content.string();\n\n\t\tDocument doc;\n\t\tif (doc.Parse(payload.c_str()).HasParseError() ||\n\t\t    !doc.HasMember(\"key\") ||\n\t\t    !doc.HasMember(\"description\") ||\n\t\t    !doc.HasMember(\"value\") ||\n\t\t    // It must be an object\n\t\t    !doc[\"value\"].IsObject() ||\n\t\t    // It must be a string\n\t\t    !doc[\"key\"].IsString() ||\n\t\t    // It must be a string\n\t\t    !doc[\"description\"].IsString())\n\t\t{\n\t\t\t// Return proper error message\n\t\t\tthis->errorResponse(response,\n\t\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t\t    \"create category\",\n\t\t\t\t\t    \"failure while parsing JSON data\");\n\t\t\treturn;\n\t\t}\n\n\t\t// Get the JSON input properties\n\t\tstring categoryName = doc[\"key\"].GetString();\n\t\tstring categoryDescription = doc[\"description\"].GetString();\n\t\tconst Value& categoryItems = doc[\"value\"];\n\n\t\t// Create string representation of JSON object\n\t\trapidjson::StringBuffer buffer;\n\t\trapidjson::Writer<rapidjson::StringBuffer> writer(buffer);\n\t\tcategoryItems.Accept(writer);\n\t\tconst string sItems(buffer.GetString(), buffer.GetSize());\n\n\t\t// Create the new config category\n\t\tConfigCategory items = m_config->createCategory(categoryName,\n\t\t\t\t\t\t\t\tcategoryDescription,\n\t\t\t\t\t\t\t\tsItems,\n\t\t\t\t\t\t\t\tkeepOriginalItems);\n\n\t\t// Return the JSON string of the newly created category\n\t\tthis->respond(response, items.toJSON());\n\t}\n\tcatch (ConfigCategoryDefaultWithValue& ex)\n\t{\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"create category\",\n\t\t\t\t    ex.what());\n\t}\n\tcatch (ConfigCategoryEx& ex)\n\t{\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"create category\",\n\t\t\t\t    ex.what());\n\t}\n\tcatch (CategoryDetailsEx& ex)\n\t{\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"create category\",\n\t\t\t\t    ex.what());\n\t}\n\tcatch (exception& ex)\n\t{\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Add child categories to a given category name\n * Received POST /fledge/service/category/{categoryName}/children\n *\n * Send to client the JSON string with child categories\n */\nvoid CoreManagementApi::addChildCategory(shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t shared_ptr<HttpServer::Request> request)\n{\n\ttry\n\t{\n\t\t// Get categoryName\n\t\tstring categoryName = request->path_match[CATEGORY_NAME_COMPONENT];\n\t\t// Get POST data\n\t\tstring childCategories = request->content.string();\n\n\t\tDocument doc;\n\t\tif (doc.Parse(childCategories.c_str()).HasParseError() ||\n\t\t    // It must be an object\n\t\t    !doc.IsObject())\n\t\t{\n\t\t\t// Return proper error message\n\t\t\tthis->errorResponse(response,\n\t\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t\t    \"add child category\",\n\t\t\t\t\t    \"failure while parsing JSON data\");\n\t\t\treturn;\n\t\t}\n\n\t\t// Add new child categories and return all child items JSON list\n\t\tthis->respond(response,\n\t\t\t      m_config->addChildCategory(categoryName,\n\t\t\t\t\t\t\t childCategories));\n\t}\n\tcatch (ExistingChildCategories& ex)\n\t{\n\t\t// Return proper error message\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"add child category\",\n\t\t\t\t    ex.what());\n\t}\n\tcatch (NoSuchCategory& ex)\n\t{\n\t\t// Return proper error message\n\t\tthis->errorResponse(response,\n\t\t\t\t    SimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\t    \"add child category\",\n\t\t\t\t    ex.what());\n\t}\n\tcatch (exception& ex)\n\t{\n\t\tinternalError(response, ex);\n\t}\n}\n"
  },
  {
    "path": "C/services/core/include/configuration_manager.h",
    "content": "#ifndef _CONFIGURATION_MANAGER_H\n#define _CONFIGURATION_MANAGER_H\n\n/*\n * Fledge Configuration management.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <storage_client.h>\n#include <config_category.h>\n#include <string>\n\nclass ConfigurationManager {\n        public:\n\t\tstatic ConfigurationManager*\tgetInstance(const std::string&, short unsigned int);\n\t\t// Called by microservice management API or the admin API:\n\t\t// GET /fledge/service/category\n\t\t// GET /fledge//category\n\t\tConfigCategories\t\tgetAllCategoryNames() const;\n\t\t// Called by microservice management API or the admin API:\n\t\t// GET /fledge/service/category/{category_name}\n\t\t// GET /fledge/category/{category_name}\n\t\tConfigCategory\t\t\tgetCategoryAllItems(const std::string& categoryName) const;\n\t\t// Called by microservice management API or the admin API:\n\t\t// POST /fledge/service/category\n\t\t// POST /fledge/category\n\t\tConfigCategory\t\t\tcreateCategory(const std::string& categoryName,\n\t\t\t\t\t\t\t       const std::string& categoryDescription,\n\t\t\t\t\t\t\t       const std::string& categoryItems,\n\t\t\t\t\t\t\t       bool keepOriginalIterms = false) const;\n\t\t// Called by microservice management API or the admin API:\n\t\t// GET /fledge/service/category/{categoryName}/{configItem}\n\t\t// GET /fledge/category/{categoryName}/{configItem}\n\t\tstd::string\t\t\tgetCategoryItem(const std::string& categoryName,\n\t\t\t\t\t\t\t\tconst std::string& itemName) const;\n\t\t// Called by microservice management API or the admin API:\n\t\t// PUT /fledge/service/category/{categoryName}/{configItem}\n\t\t// PUT /fledge/service/{categoryName}/{configItem}\n\t\tbool\t\t\t\tsetCategoryItemValue(const std::string& categoryName,\n\t\t\t\t\t\t\t\t     const std::string& itemName,\n\t\t\t\t\t\t\t\t     const std::string& newValue) const;\n\t\t// Called by microservice 
management API or the admin API:\n\t\t// POST /fledge/service/category/{categoryName}/children\n\t\t// POST /fledge/category/{categoryName}/children\n\t\tstd::string\t\t\taddChildCategory(const std::string& parentCategoryName,\n\t\t\t\t\t\t\t\t const std::string& childCategories) const;\n\t\t// Called by microservice management API or the admin API:\n\t\t// GET /fledge/service/category/{categoryName}/children\n\t\t// GET /fledge/category/{categoryName}/children\n\t\tConfigCategories\t\tgetChildCategories(const std::string& parentCategoryName) const;\n\t\t// Called by microservice management API or the admin API:\n\t\t// DELETE /fledge/service/category/{CategoryName}/children/{ChildCategory}\n\t\t// DELETE /fledge/category/{CategoryName}/children/{ChildCategory}\n\t\tstd::string\t\t\tdeleteChildCategory(const std::string& parentCategoryName,\n\t\t\t\t\t\t\t\t    const std::string& childCategory) const;\n\t\t// Called by microservice management API or the admin API:\n\t\t// DELETE /fledge/service/category/{categoryName}/{configItem}/value\n\t\t// DELETE /fledge/category/{categoryName}/{configItem}/value\n\t\tstd::string \t\t\tdeleteCategoryItemValue(const std::string& categoryName,\n\t\t\t\t\t\t\t\t\tconst std::string& itemName) const;\n\t\t// Called by microservice management API or the admin API:\n\t\t// DELETE /fledge/service/category/{categoryName}\n\t\t// DELETE /fledge/category/{categoryName}\n\t\tConfigCategories\t\tdeleteCategory(const std::string& categoryName) const;\n\t\t// Internal usage\n\t\tstd::string\t\t\tgetCategoryItemValue(const std::string& categoryName,\n\t\t\t\t\t\t\t\t     const std::string& itemName) const;\n\n\tprivate:\n\t\tConfigurationManager(const std::string& host,\n\t\t\t\t     unsigned short port);\n\t\t~ConfigurationManager();\n\t\tvoid\t\tmergeCategoryValues(rapidjson::Value& inputValues,\n\t\t\t\t\t\t    const rapidjson::Value* storedValues,\n\t\t\t\t\t\t    rapidjson::Document::AllocatorType& allocator,\n\t\t\t\t\t\t    bool 
keepOriginalItems) const;\n\t\t// Internal usage\n\t\tstd::string\tfetchChildCategories(const std::string& parentCategoryName) const;\n\t\tstd::string\tgetCategoryDescription(const std::string& categoryName) const;\n\n\tprivate:\n\t\tstatic  ConfigurationManager*\tm_instance;\n\t\tStorageClient*\t\t\tm_storage;\n};\n\n/**\n * NoSuchCategory\n */\nclass NoSuchCategory : public std::exception {\n\tpublic:\n\t\tvirtual const char* what() const throw()\n\t\t{\n\t\t\treturn \"Config category does not exist\";\n\t\t}\n};\n\n/**\n * NoSuchCategoryItemValue\n */\nclass NoSuchCategoryItemValue : public std::exception {\n\tpublic:\n\t\tvirtual const char* what() const throw()\n\t\t{\n\t\t\treturn \"Failure while fetching config category item value\";\n\t\t}\n};\n\n/**\n * NoSuchCategoryItem\n */\nclass NoSuchCategoryItem : public std::exception {\n\tpublic:\n\t\tNoSuchCategoryItem(const std::string& message)\n\t\t{\n\t\t\tm_error = message;\n\t\t}\n\n\t\tvirtual const char* what() const throw()\n\t\t{\n\t\t\treturn m_error.c_str();\n\t\t}\n\n\tprivate:\n\t\tstd::string m_error;\n};\n\n/**\n * CategoryDetailsEx\n */\nclass CategoryDetailsEx : public std::exception {\n\tpublic:\n\t\tvirtual const char* what() const throw()\n\t\t{\n\t\t\treturn \"Cannot access category information\";\n\t\t}\n};\n\n/**\n * StorageOperation\n */\nclass StorageOperation : public std::exception {\n\tpublic:\n\t\tvirtual const char* what() const throw()\n\t\t{\n\t\t\treturn \"Failure while performing insert or update operation\";\n\t\t}\n};\n\n/**\n * NotSupportedDataType\n */\nclass NotSupportedDataType : public std::exception {\n\tpublic:\n\t\tvirtual const char* what() const throw()\n\t\t{\n\t\t\treturn \"Data type not supported\";\n\t\t}\n};\n\n/**\n * AllCategoriesEx\n */\nclass AllCategoriesEx : public std::exception {\n\tpublic:\n\t\tvirtual const char* what() const throw()\n\t\t{\n\t\t\treturn \"Failure while fetching all config categories\";\n\t\t}\n};\n\n/**\n * 
ConfigCategoryDefaultWithValue\n */\nclass ConfigCategoryDefaultWithValue : public std::exception {\n\tpublic:\n\t\tvirtual const char* what() const throw()\n\t\t{\n\t\t\treturn \"The config category being inserted/updated has both default and value properties for items\";\n\t\t}\n};\n\n/**\n * ConfigCategoryEx\n */\nclass ConfigCategoryEx : public std::exception {\n\tpublic:\n\t\tvirtual const char* what() const throw()\n\t\t{\n\t\t\treturn \"Failure while setting/fetching a config category\";\n\t\t}\n};\n\n/**\n * ChildCategoriesEx\n */\nclass ChildCategoriesEx : public std::exception {\n\tpublic:\n\t\tvirtual const char* what() const throw()\n\t\t{\n\t\t\treturn \"Failure while setting/fetching child categories\";\n\t\t}\n};\n\n/**\n * ExistingChildCategories\n */\nclass ExistingChildCategories : public std::exception {\n\tpublic:\n\t\tvirtual const char* what() const throw()\n\t\t{\n\t\t\treturn \"Requested child categories are already set for the given parent category\";\n\t\t}\n};\n#endif\n"
  },
  {
    "path": "C/services/core/include/core_management_api.h",
    "content": "#ifndef _CORE_MANAGEMENT_API_H\n#define _CORE_MANAGEMENT_API_H\n/*\n * Fledge core microservice management API.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <management_api.h>\n#include <configuration_manager.h>\n\n\n#define REGISTER_SERVICE\t\t\"/fledge/service\"\n#define UNREGISTER_SERVICE\t\t\"/fledge/service/([0-9A-F][0-9A-F\\\\-]*)\"\n#define GET_ALL_CATEGORIES\t\t\"/fledge/service/category\"\n#define CREATE_CATEGORY\t\t\tGET_ALL_CATEGORIES\n#define GET_CATEGORY\t\t\t\"/fledge/service/category/([A-Za-z][a-zA-Z_0-9]*)\"\n#define GET_CATEGORY_ITEM\t\t\"/fledge/service/category/([A-Za-z][a-zA-Z_0-9]*)/([A-Za-z][a-zA-Z_0-9]*)\"\n#define DELETE_CATEGORY_ITEM_VALUE\t\"/fledge/service/category/([A-Za-z][a-zA-Z_0-9]*)/([A-Za-z][a-zA-Z_0-9]*)/(value)\"\n#define SET_CATEGORY_ITEM_VALUE\t\tGET_CATEGORY_ITEM\n#define DELETE_CATEGORY\t\t\tGET_CATEGORY\n#define DELETE_CHILD_CATEGORY\t\t\"/fledge/service/category/([A-Za-z][a-zA-Z_0-9]*)/(children)/([A-Za-z][a-zA-Z_0-9]*)\"\n#define ADD_CHILD_CATEGORIES\t\t\"/fledge/service/category/([A-Za-z][a-zA-Z_0-9]*)/(children)\"\n#define REGISTER_CATEGORY_INTEREST\t\"/fledge/interest\"\t// TODO implment this, right now it's a fake.\n#define GET_SERVICE\t\t\tREGISTER_SERVICE\n\n#define UUID_COMPONENT\t\t\t1\n#define CATEGORY_NAME_COMPONENT\t\t1\n#define CATEGORY_ITEM_COMPONENT\t\t2\n#define ITEM_VALUE_NAME\t\t\t3\n#define CHILD_CATEGORY_COMPONENT\t3\n\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\n\n/**\n * Management API server for a C++ microservice\n */\nclass CoreManagementApi : public ManagementApi {\n\tpublic:\n\t\tCoreManagementApi(const std::string& name, const unsigned short port);\n\t\t~CoreManagementApi() {};\n\t\tstatic CoreManagementApi *getInstance();\n\t\tvoid\t\t\tregisterMicroService(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\t     std::shared_ptr<HttpServer::Request> 
request);\n\t\tvoid\t\t\tunRegisterMicroService(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\t       std::shared_ptr<HttpServer::Request> request);\n\t\t// GET /fledge/service/category\n\t\tvoid\t\t\tgetAllCategories(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\t std::shared_ptr<HttpServer::Request> request);\n\t\t// GET /fledge/service/category/{categoryName}\n\t\tvoid\t\t\tgetCategory(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t    std::shared_ptr<HttpServer::Request> request);\n\t\t// GET /fledge/service/category/{categoryName}/{configItem}\n\t\t// GET /fledge/service/category/{categoryName}/children\n\t\tvoid\t\t\tgetCategoryItem(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\tstd::shared_ptr<HttpServer::Request> request);\n\t\t// DELETE /fledge/service/category/{categoryName}/{configItem}/value\n\t\tvoid\t\t\tdeleteCategoryItemValue(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\t\tstd::shared_ptr<HttpServer::Request> request);\n\t\t// PUT /fledge/service/category/{categoryName}/{configItem}\n\t\tvoid\t\t\tsetCategoryItemValue(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\t     std::shared_ptr<HttpServer::Request> request);\n\t\t// Called by DELETE /fledge/service/category/{categoryName}\n\t\tvoid\t\t\tdeleteCategory(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t       std::shared_ptr<HttpServer::Request> request);\n\t\t// Called by DELETE /fledge/service/category/{CategoryName}/children/{ChildCategory}\n\t\tvoid\t\t\tdeleteChildCategory(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\t    std::shared_ptr<HttpServer::Request> request);\n\t\t// Called by POST /fledge/service/category\n\t\tvoid\t\t\tcreateCategory(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t       std::shared_ptr<HttpServer::Request> request);\n\t\t// Called by POST 
/fledge/service/category/{categoryName}/children\n\t\tvoid\t\t\taddChildCategory(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\t std::shared_ptr<HttpServer::Request> request);\n\t\t// Default handler for unsupported URLs\n\t\tvoid\t\t\tdefaultResource(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t\tstd::shared_ptr<HttpServer::Request> request);\n\n\tprivate:\n\t\tvoid\t\t\terrorResponse(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\t      SimpleWeb::StatusCode statusCode,\n\t\t\t\t\t\t      const std::string& entryPoint,\n\t\t\t\t\t\t      const std::string& msg);\n\t\tvoid\t\t\tinternalError(std::shared_ptr<HttpServer::Response>,\n\t\t\t\t\t\t      const std::exception&);\n\t\tvoid\t\t\trespond(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\tSimpleWeb::StatusCode statusCode,\n\t\t\t\t\t\tconst std::string& payload);\n\t\tvoid\t\t\trespond(std::shared_ptr<HttpServer::Response> response,\n\t\t\t\t\t\tconst std::string& payload);\n\t\tbool\t\t\tgetConfigurationManager(const std::string& address,\n\t\t\t\t\t\t\t\tconst unsigned short port);\n\t\tvoid\t\t\tsetConfigurationEntryPoints();\n\n\tprivate:\n\t\tstatic CoreManagementApi*\tm_instance;\n\t\tConfigurationManager*\t\tm_config;\n};\n#endif\n"
  },
  {
    "path": "C/services/core/include/service_registry.h",
    "content": "#ifndef _SERVICE_REGISTRY_H\n#define _SERVICE_REGISTRY_H\n/*\n * Fledge service registry.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <service_record.h>\n#include <vector>\n#include <map>\n#include <string>\n\n/**\n * ServiceRegistry Singleton class\n */\nclass ServiceRegistry {\n\tpublic:\n\t\tstatic ServiceRegistry\t\t*getInstance();\n\t\tbool\t\t\t\tregisterService(ServiceRecord *service);\n\t\tbool\t\t\t\tunRegisterService(ServiceRecord *service);\n\t\tbool\t\t\t\tunRegisterService(const std::string& uuid);\n\t\tServiceRecord\t\t\t*findService(const std::string& name);\n\t\tstd::string\t\t\tgetUUID(ServiceRecord *service);\n\tprivate:\n\t\tServiceRegistry();\n\t\t~ServiceRegistry();\n\t\tstatic\tServiceRegistry\t\t*m_instance;\n\t\tstd::vector<ServiceRecord *>\tm_services;\n\t\tstd::map<std::string, ServiceRecord *>\tm_uuids;\n};\n\n#endif\n"
  },
  {
    "path": "C/services/core/service_registry.cpp",
    "content": "/*\n * Fledge service registry.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <service_registry.h>\n#include <uuid/uuid.h>\n\nusing namespace std;\n\nServiceRegistry *ServiceRegistry::m_instance = 0;\n\n/**\n * Create the service registry singleton class\n */\nServiceRegistry::ServiceRegistry()\n{\n}\n\n/**\n * Destroy the service registry singleton class\n */\nServiceRegistry::~ServiceRegistry()\n{\n\tfor (vector<ServiceRecord *>::iterator it = m_services.begin();\n\t\tit != m_services.end(); ++it)\n\t{\n\t\tdelete *it;\n\t}\n}\n\n/**\n * Return the singleton instance of the service registry\n */\nServiceRegistry *ServiceRegistry::getInstance()\n{\n\tif (m_instance == 0)\n\t\tm_instance = new ServiceRegistry();\n\treturn m_instance;\n}\n\n/**\n * Register a service with the service registry\n *\n * @param service\tThe service to register\n * @return bool\t\tTrue if the service was registered\n */\nbool ServiceRegistry::registerService(ServiceRecord *service)\n{\nuuid_t\t\tuuid;\nchar\t\tuuid_str[37];\nServiceRecord\t*existing;\n\n\tif ((existing = findService(service->getName())) != 0)\n\t{\n\t\tif (existing->getAddress().compare(service->getAddress()) ||\n\t\t\texisting->getType().compare(service->getType()) ||\n\t\t\texisting->getPort() != service->getPort())\n\t\t{\n\t\t\t/* Service already registered with the same name on\n\t\t\t * a different address, port or type\n\t\t\t */\n\t\t\treturn false;\n\t\t}\n\t\t// Overwrite existing service\n\t\tunRegisterService(existing);\n\t}\n\tm_services.push_back(service);\n\tuuid_generate_time_safe(uuid);\n\tuuid_unparse_lower(uuid, uuid_str);\n\tm_uuids[string(uuid_str)] = service;\n\treturn true;\n}\n\n/**\n * Unregister a service with the service registry\n *\n * @param service\tThe service to unregister\n * @return bool\t\tTrue if the service was unregistered\n */\nbool ServiceRegistry::unRegisterService(ServiceRecord 
*service)\n{\n\tfor (vector<ServiceRecord *>::iterator it = m_services.begin();\n\t\tit != m_services.end(); ++it)\n\t{\n\t\tif (*service == **it)\n\t\t{\n\t\t\tm_services.erase(it);\n\t\t\tfor (map<string, ServiceRecord *>::iterator uit = m_uuids.begin(); uit != m_uuids.end(); ++uit)\n\t\t\t{\n\t\t\t\tif (uit->second == service)\n\t\t\t\t{\n\t\t\t\t\tm_uuids.erase(uit);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\n/**\n * Unregister a service with the service registry\n *\n * @param uuid\t\tThe uuid of the service to unregister\n * @return bool\t\tTrue if the service was unregistered\n */\nbool ServiceRegistry::unRegisterService(const string& uuid)\n{\nServiceRecord\t*service;\nmap<string, ServiceRecord *>::iterator\tuuidIt;\n\n\tif ((uuidIt = m_uuids.find(uuid)) == m_uuids.end())\n\t{\n\t\treturn false;\n\t}\n\tservice = m_uuids[uuid];\n\tfor (vector<ServiceRecord *>::iterator it = m_services.begin();\n\t\tit != m_services.end(); ++it)\n\t{\n\t\tif (*service == **it)\n\t\t{\n\t\t\tm_services.erase(it);\n\t\t\tm_uuids.erase(uuidIt);\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\n/**\n * Find a service that is registered with the service registry\n *\n * @param name\t\tThe name of the service to find\n * @return ServiceRecord*\tThe service record or null if not found\n */\nServiceRecord *ServiceRegistry::findService(const string& name)\n{\n\tfor (vector<ServiceRecord *>::iterator it = m_services.begin();\n\t\tit != m_services.end(); ++it)\n\t{\n\t\tif ((*it)->getName().compare(name) == 0)\n\t\t\treturn *it;\n\t}\n\treturn 0;\n}\n\n/**\n * Return the uuid of the registration record for a given service\n *\n * @param\tservice\tThe service to return the uuid for\n * @return string\tThe uuid of the service registration\n * @throws exception\tIf the service could not be found\n */\nstring ServiceRegistry::getUUID(ServiceRecord *service)\n{\nmap<string, ServiceRecord *>::const_iterator  it;\n\n\tfor (it = 
m_uuids.cbegin(); it != m_uuids.cend(); ++it)\n\t{\n\t\tif (it->second == service)\n\t\t{\n\t\t\treturn it->first;\n\t\t}\n\t}\n\t// Throw by value (not a pointer) so callers can catch exception&\n\tthrow exception();\n}\n"
  },
  {
    "path": "C/services/filter-plugin-interfaces/python/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6.0)\n\nproject(filter-plugin-python-interface)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\nset(DLLIB -ldl)\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\n\n# Find source files\nfile(GLOB SOURCES python_plugin_interface.cpp)\n\n# Find Python.h 3.x dev/lib package\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 REQUIRED COMPONENTS Interpreter Development)\nendif()\n\n# Include header files\ninclude_directories(include ../../../common/include ../../../services/common/include ../../../services/south/include ../../../thirdparty/rapidjson/include)\ninclude_directories(../../../services/common-plugin-interfaces/python/include)\ninclude_directories(../../../thirdparty/Simple-Web-Server)\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\nset(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/../../../lib)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\ntarget_link_libraries(${PROJECT_NAME} ${DLLIB})\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${SERVICE_COMMON_LIB})\n\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    target_link_libraries(${PROJECT_NAME} ${PYTHON_LIBRARIES})\nelse()\n    target_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES})\nendif()\n\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/lib)\n"
  },
  {
    "path": "C/services/filter-plugin-interfaces/python/filter_ingest_pymodule/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6.0)\n\nproject(filter_ingest)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\nset(DLLIB -ldl)\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\n\n# Find source files\nfile(GLOB SOURCES ingest_callback_pymodule.cpp)\n\n# Find Python 3.5 or higher dev/lib/interp package\n#find_package(PythonInterp 3.5 REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 REQUIRED COMPONENTS Interpreter Development)\nendif()\n\n# Include header files\ninclude_directories(include)\ninclude_directories(../../../../common/include)\ninclude_directories(../../../../services/common/include)\ninclude_directories(../../../../services/south/include)\ninclude_directories(../../../../thirdparty/rapidjson/include)\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\nlink_directories(${PROJECT_BINARY_DIR}/../../../../lib)\n\nset(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/../../../../../../python)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\n\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    target_link_libraries(${PROJECT_NAME} ${PYTHON_LIBRARIES})\nelse()\n    target_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES})\nendif()\n\ntarget_link_libraries(${PROJECT_NAME} ${DLLIB})\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${SERVICE_COMMON_LIB})\n\nset_target_properties(${PROJECT_NAME} PROPERTIES LINKER_LANGUAGE C)\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 
1)\nset_target_properties(${PROJECT_NAME} PROPERTIES VERSION 1)\nset_target_properties(${PROJECT_NAME} PROPERTIES PREFIX \"\")\n\n# Install libraries\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/python)\n"
  },
  {
    "path": "C/services/filter-plugin-interfaces/python/filter_ingest_pymodule/ingest_callback_pymodule.cpp",
    "content": "/*\n * Fledge python module for filter plugin ingest callback\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <reading.h>\n#include <reading_set.h>\n#include <logger.h>\n#include <Python.h>\n#include <vector>\n#include <pythonreadingset.h>\n\nextern \"C\" {\n\ntypedef void (*INGEST_CB_DATA)(void *, PythonReadingSet *);\n\nstatic void filter_plugin_async_ingest_fn(PyObject *ingest_callback,\n\t\t\t\t    PyObject *ingest_obj_ref_data,\n\t\t\t\t    PyObject *readingsObj);\n\nstatic PyObject *IngestError;\n\n/**\n * Implementation of data ingest into filters chain\n *\n * @param    self       The python module object\n * @param    args       Input arguments\n * @return              PyObject of None type\n */\nstatic PyObject *filter_ingest_callback(PyObject *self, PyObject *args)\n{\n\tPyObject *readingList;\n\tPyObject *callback;\n\tPyObject *ingestData;\n\n\tif (!PyArg_ParseTuple(args,\n\t\t\t      \"OOO\",\n\t\t\t      &callback,\n\t\t\t      &ingestData,\n\t\t\t      &readingList))\n\t{\n\t\tLogger::getLogger()->error(\"Cannot parse input arguments \"\n\t\t\t\t\t   \"of filter_ingest_callback C API module\");\n\t\treturn NULL;\n\t}\n\n\t// Invoke callback routine\n\tfilter_plugin_async_ingest_fn(callback,\n\t\t\t\tingestData,\n\t\t\t\treadingList);\n\n\tPy_INCREF(Py_None);\n\treturn Py_None;\n}\n\nstatic PyMethodDef FilterIngestMethods[] = {\n\t{\n\t\t\"filter_ingest_callback\",\n\t\tfilter_ingest_callback,\n\t\tMETH_VARARGS,\n\t\t\"Invoke filter ingest callback\"\n\t},\n\t{NULL, NULL, 0, NULL}    /* Sentinel */\n};\n\nstatic struct PyModuleDef filterIngestmodule = {\n\tPyModuleDef_HEAD_INIT,\n\t\"filter_ingest\",   /* name of module */\n\tNULL, \t\t/* module documentation, may be NULL */\n\t-1,       \t/* size of per-interpreter state of the module,\n\t             or -1 if the module keeps state in global variables. 
*/\n\tFilterIngestMethods\n};\n\n/**\n * Init the C API Python module\n */\nPyMODINIT_FUNC\nPyInit_filter_ingest(void)\n{\n\tPyObject *m;\n\n\tm = PyModule_Create(&filterIngestmodule);\n\tif (m == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot initialise filter_ingest C API module\");\n\t\treturn NULL;\n\t}\n\n\tIngestError = PyErr_NewException(\"ingest.error\", NULL, NULL);\n\tPy_INCREF(IngestError);\n\tPyModule_AddObject(m, \"error\", IngestError);\n\n\treturn m;\n}\n\n/**\n * Ingest data into filters chain\n *\n * @param    ingest_callback            The callback routine\n * @param    ingest_obj_ref_data        Object parameter for callback routine\n * @param    readingsObj                Readings data as PyObject\n */\nvoid filter_plugin_async_ingest_fn(PyObject *ingest_callback,\n\t\t\t     PyObject *ingest_obj_ref_data,\n\t\t\t     PyObject *readingsObj)\n{\n\tif (ingest_callback == NULL ||\n\t    ingest_obj_ref_data == NULL ||\n\t    readingsObj == NULL)\n\t{\n\t\tLogger::getLogger()->error(\"PyC interface error: \"\n\t\t\t\t\t   \"%s: \"\n\t\t\t\t\t   \"filter_ingest_callback=%p, \"\n\t\t\t\t\t   \"ingest_obj_ref_data=%p, \"\n\t\t\t\t\t   \"readingsObj=%p\",\n\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t   ingest_callback,\n\t\t\t\t\t   ingest_obj_ref_data,\n\t\t\t\t\t   readingsObj);\n\t\treturn;\n\t}\n\n\tPythonReadingSet *pyReadingSet = NULL;\n\n\t// Check we have a list of readings\n\tif (PyList_Check(readingsObj))\n\t{\n\t\ttry\n\t\t{\n\t\t\t// Get vector of Readings from Python object\n\t\t\tpyReadingSet = new PythonReadingSet(readingsObj);\n\t\t}\n\t\tcatch (std::exception& e)\n\t\t{\n\t\t\tLogger::getLogger()->warn(\"Unable to create a PythonReadingSet, error: %s\", e.what());\n\t\t\tpyReadingSet = NULL;\n\t\t}\n\n\t\tLogger::getLogger()->debug(\"%s:%d, pyReadingSet=%p, pyReadingSet readings count=%d\",\n                                    __FUNCTION__, __LINE__, pyReadingSet, 
pyReadingSet ? pyReadingSet->getCount() : 0);\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"Filter did not return a Python List \"\n\t\t\t\t\t   \"but object type %s\",\n\t\t\t\t\t   Py_TYPE(readingsObj)->tp_name);\n\t}\n\n\t// From: https://docs.python.org/3/c-api/arg.html\n\t// Note that any Python object references which are provided to the caller are borrowed references;\n\t// do not decrement their reference count!\n\n\t/*if(readingsObj)\n\t\tPy_CLEAR(readingsObj);*/\n\n\tif (pyReadingSet)\n\t{\n\t\t// Get callback pointer\n\t\tINGEST_CB_DATA cb = (INGEST_CB_DATA) PyCapsule_GetPointer(ingest_callback, NULL);\n\n\t\t// Get ingest object parameter\n\t\tvoid *data = PyCapsule_GetPointer(ingest_obj_ref_data, NULL);\n\n\t\tLogger::getLogger()->debug(\"%s:%d: cb function at address %p\", __FUNCTION__, __LINE__, (void *)cb);\n\t\t// Invoke callback method for ReadingSet filter ingestion\n\t\t(*cb)(data, pyReadingSet);\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"PyC interface: filter_plugin_async_ingest_fn: \"\n\t\t\t\t\t   \"Got invalid ReadingSet while converting from PyObject\");\n\t}\n}\n}; // end of extern \"C\" block\n"
  },
  {
    "path": "C/services/filter-plugin-interfaces/python/python_plugin_interface.cpp",
    "content": "/*\n * Fledge filter plugin interface related\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <logger.h>\n#include <config_category.h>\n#include <reading.h>\n#include <reading_set.h>\n#include <mutex>\n#include <plugin_handle.h>\n#include <pyruntime.h>\n#include <Python.h>\n\n#include <python_plugin_common_interface.h>\n#include <reading_set.h>\n#include <filter_plugin.h>\n#include <pythonreadingset.h>\n\nusing namespace std;\n\nextern \"C\" {\n\n// This is a C++ ReadingSet class instance passed through\ntypedef ReadingSet READINGSET;\n// Data handle passed to function pointer\ntypedef void OUTPUT_HANDLE;\n// Function pointer called by \"plugin_ingest\" plugin method\ntypedef void (*OUTPUT_STREAM)(OUTPUT_HANDLE *, READINGSET *);\n\nextern PLUGIN_INFORMATION *Py2C_PluginInfo(PyObject *);\nextern void logErrorMessage();\nextern PLUGIN_INFORMATION *plugin_info_fn();\nextern void plugin_shutdown_fn(PLUGIN_HANDLE);\n\n\n/**\n * Function to invoke 'plugin_reconfigure' function in python plugin\n *\n * @param    handle     The plugin handle from plugin_init_fn\n * @param    config     The new configuration, as string\n */\nstatic void filter_plugin_reconfigure_fn(PLUGIN_HANDLE handle,\n\t\t\t\t\t const std::string& config)\n{\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: filter_plugin_reconfigure_fn(): \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\t// Plugin name can not be logged here\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in filter_plugin_reconfigure_fn\");\n\t\treturn;\n\t}\n\n\t// Look for Python module, handle is the key\n\tauto it = pythonHandles->find(handle);\n\tif (it == pythonHandles->end() ||\n\t    !it->second)\n\t{\n\t\t// Plugin name can not be logged here\n\t\tLogger::getLogger()->fatal(\"filter_plugin_reconfigure_fn(): \"\n\t\t\t\t\t   
\"pModule is NULL, handle %p\",\n\t\t\t\t\t   handle);\n\t\treturn;\n\t}\n\n        // We have plugin name\n        string pName = it->second->m_name;\n\n\tstd::mutex mtx;\n\tPyObject* pFunc;\n\tlock_guard<mutex> guard(mtx);\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\tLogger::getLogger()->debug(\"plugin_handle: plugin_reconfigure(): \"\n\t\t\t\t   \"pModule=%p, *handle=%p, plugin '%s'\",\n\t\t\t\t   it->second->m_module,\n\t\t\t\t   handle,\n\t\t\t\t   pName.c_str());\n\n\tLogger::getLogger()->debug(\"%s:%d: calling set_loglevel_in_python_module(), loglevel=%s\", __FUNCTION__, __LINE__, Logger::getLogger()->getMinLevel().c_str());\n\tif(config.compare(\"logLevel\") == 0)\n\t{\n\t\tset_loglevel_in_python_module(it->second->m_module, it->second->m_name+\" filter_plugin_reconf\");\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\t\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_reconfigure\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find method 'plugin_reconfigure' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   pName.c_str());\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\n\tif (!pFunc || !PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot call method plugin_reconfigure \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   pName.c_str());\n\t\tPy_CLEAR(pFunc);\n\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\n\tLogger::getLogger()->debug(\"plugin_reconfigure with %s\", config.c_str());\n\n\tPyObject *config_dict = json_loads(config.c_str());\n\n\t// Call Python method passing an object and JSON config dict\n\tPyObject* pReturn = PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"OO\",\n\t\t\t\t\t\t  handle,\n\t\t\t\t\t\t  config_dict);\n\n\tPy_CLEAR(pFunc);\n\n\t// Handle returned data\n\tif 
(!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method plugin_reconfigure \"\n\t\t\t\t\t   \": error while getting result object, plugin '%s'\",\n\t\t\t\t\t   pName.c_str());\n\t\tlogErrorMessage();\n\t}\n\telse\n\t{\n        Logger::getLogger()->info(\"%s:%d: Py_TYPE(pReturn)->tp_name=%s\", __FUNCTION__, __LINE__, Py_TYPE(pReturn)->tp_name);\n\t\tPyObject* tmp = (PyObject *)handle;\n\t\t// Check current handle is Dict and pReturn is a Dict too\n\t\tif (PyDict_Check(tmp) && PyDict_Check(pReturn))\n\t\t{\n\t\t\t// Clear Dict content\n\t\t\tPyDict_Clear(tmp);\n\t\t\t// Populate handle Dict with new data in pReturn\n\t\t\tPyDict_Update(tmp, pReturn);\n\t\t\t// Remove pReturn object\n\t\t\tPy_CLEAR(pReturn);\n\n\t\t\tLogger::getLogger()->debug(\"plugin_handle: plugin_reconfigure(): \"\n\t\t\t\t\t\t   \"got updated handle from python plugin=%p, plugin '%s'\",\n\t\t\t\t\t\t   handle,\n\t\t\t\t\t\t   pName.c_str());\n\t\t}\n\t\telse\n\t\t{\n\t\t\t Logger::getLogger()->error(\"plugin_handle: plugin_reconfigure(): \"\n\t\t\t\t\t\t    \"got object type '%s' instead of Python Dict, \"\n\t\t\t\t\t\t    \"python plugin=%p, plugin '%s'\",\n\t\t\t\t\t\t    Py_TYPE(pReturn)->tp_name,\n\t\t\t\t\t   \t    handle,\n\t\t\t\t\t  \t    pName.c_str());\n\t\t}\n\t}\n\n\tPyGILState_Release(state);\n}\n\n/**\n * Ingest data into filters chain\n *\n * @param    handle     The plugin handle returned from plugin_init\n * @param    data       The ReadingSet data to filter\n */\nvoid filter_plugin_ingest_fn(PLUGIN_HANDLE handle, READINGSET *data)\n{\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: filter_plugin_ingest_fn(): \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\t// Plugin name can not be logged here\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in filter_plugin_ingest_fn\");\n\t\treturn;\n\t}\n\n\tauto it = pythonHandles->find(handle);\n\tif (it == pythonHandles->end() 
||\n\t    !it->second)\n\t{\n\t\t// Plugin name can not be logged here\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_ingest(): \"\n\t\t\t\t\t   \"pModule is NULL\");\n\t\treturn;\n\t}\n\n\t// We have plugin name\n\tstring pName = it->second->m_name;\n\n\tPyObject* pFunc;\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_ingest\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find 'plugin_ingest' \"\n\t\t\t\t\t   \"method in loaded python module '%s'\",\n\t\t\t\t\t   pName.c_str());\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\tif (!pFunc || !PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot call method plugin_ingest \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   pName.c_str());\n\t\tPy_CLEAR(pFunc);\n\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\n\t// Call asset tracker\n\t// int i=0;\n\tvector<Reading *>* readings = ((ReadingSet *)data)->getAllReadingsPtr();\n\tfor (vector<Reading *>::const_iterator elem = readings->begin();\n\t\t\t\t\t\t      elem != readings->end();\n\t\t\t\t\t\t      ++elem)\n\t{\n\t\t// Logger::getLogger()->debug(\"Reading %d: %s\", i++, (*elem)->toJSON().c_str());\n\t\tAssetTracker* atr = AssetTracker::getAssetTracker();\n\t\tif (atr)\n\t\t{\n\t\t\tatr->addAssetTrackingTuple(it->second->getCategoryName(),\n\t\t\t\t\t\t\t\t\t\t(*elem)->getAssetName(),\n\t\t\t\t\t\t\t\t\t\tstring(\"Filter\"));\n\t\t}\n\t}\n\n\tLogger::getLogger()->debug(\"C2Py: filter_plugin_ingest_fn():L%d: data->getCount()=%d\", __LINE__, data->getCount());\n\t\n\t// Create a readingList of readings to be filtered\n\tPythonReadingSet *pyReadingSet = (PythonReadingSet *) data;\n\tPyObject* readingsList = pyReadingSet->toPython();\n\n\tPyObject* pReturn = 
PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"OO\",\n\t\t\t\t\t\t  handle,\n\t\t\t\t\t\t  readingsList);\n\tPy_CLEAR(pFunc);\n\n\t// Handle returned data\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method plugin_ingest \"\n\t\t\t\t\t   \": error while getting result object, plugin '%s'\",\n\t\t\t\t\t   pName.c_str());\n\t\tlogErrorMessage();\n\t}\n\n\tdata->removeAll();\n\tdelete data;\n\n#if 0\n\tPythonReadingSet *filteredReadingSet = NULL;\n\tif (pReturn)\n\t{\n\t\t// Check we have a list of readings\n\t\tif (PyList_Check(readingsList))\n\t\t{\n\t\t\ttry\n\t\t\t{\n\t\t\t\t// Create ReadingSet from Python reading list\n\t\t\t\tfilteredReadingSet = new PythonReadingSet(readingsList);\n\n\t\t\t\t// Remove input data\n\t\t\t\tdata->removeAll();\n\n\t\t\t\t// Append filtered readings;  append will empty the passed reading set as well\n\t\t\t\tdata->append(filteredReadingSet);\n\n\t\t\t\tdelete filteredReadingSet;\n\t\t\t\tfilteredReadingSet = NULL;\n\t\t\t}\n\t\t\tcatch (std::exception e)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Unable to create a PythonReadingSet, error: %s\", e.what());\n\t\t\t\tfilteredReadingSet = NULL;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Filter did not return a Python List \"\n\t\t\t\t\t\t   \"but object type %s\",\n\t\t\t\t\t\t   Py_TYPE(readingsList)->tp_name);\n\t\t}\n\t}\n#endif\n\t\n\t// Remove readings to dict\n\tPy_CLEAR(readingsList);\n\t// Remove CallFunction result\n\tPy_CLEAR(pReturn);\n\n\t// Release GIL\n\tPyGILState_Release(state);\n}\n\n/**\n * Initialise the plugin, called to get the plugin handle and setup the\n * output handle that will be passed to the output stream. 
The output stream\n * is merely a function pointer that is called with the output handle and\n * the new set of readings generated by the plugin.\n *     (*output)(outHandle, readings);\n * Note that the plugin may not call the output stream if the result of\n * the filtering is that no readings are to be sent onwards in the chain.\n * This allows the plugin to discard data or to buffer it for aggregation\n * with data that follows in subsequent calls\n *\n * @param config\tThe configuration category for the filter\n * @param outHandle\tA handle that will be passed to the output stream\n * @param output\tThe output stream (function pointer) to which data is passed\n * @return\t\tAn opaque handle that is used in all subsequent calls to the plugin\n */\nPLUGIN_HANDLE filter_plugin_init_fn(ConfigCategory* config,\n\t\t\t  OUTPUT_HANDLE *outHandle,\n\t\t\t  OUTPUT_STREAM output)\n{\n\t// Get pluginName\n\tstring pName = config->getValue(\"plugin\");\n\n\tLogger::getLogger()->info(\"filter_plugin_init_fn(): pName=%s\", pName.c_str());\n\n\tif (!pythonModules)\n\t{\n\t\tLogger::getLogger()->error(\"pythonModules map is NULL \"\n\t\t\t\t\t   \"in filter_plugin_init_fn, plugin '%s'\",\n\t\t\t\t\t   pName.c_str());\n\t\treturn NULL;\n\t}\n\n\tbool loadModule = false;   // whether module is already loaded\n\tbool reloadModule = false;  // whether module is to be loaded again\n\tbool pythonInitState = false;\n\tPythonModule *module = NULL;\n\n\t// Check whether plugin pName has been already loaded\n\tfor (auto h = pythonHandles->begin();\n                  h != pythonHandles->end(); ++h)\n\t{\n\t\tif (h->second->m_name.compare(pName) == 0)\n\t\t{\n\t\t\tLogger::getLogger()->info(\"filter_plugin_init_fn: already loaded \"\n\t\t\t\t\t\t   \"a plugin with name '%s'. 
A new Python obj is needed\",\n\t\t\t\t\t\t   pName.c_str());\n\n\t\t\t// Set Python library loaded state\n\t\t\tpythonInitState = h->second->m_init;\n\n\t\t\t// Set load indicator\n\t\t\tloadModule = true;\n\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (!loadModule)\n\t{\n       \t\tLogger::getLogger()->info(\"filter_plugin_init_fn: NOT already loaded \"\n\t\t\t\t\t\t   \"a plugin with name '%s'. A new Python obj is needed\",\n\t\t\t\t\t\t   pName.c_str());\n\t\t// Plugin name not previously loaded: check current Python module\n\t\t// pName is the key\n\t\tauto it = pythonModules->find(pName);\n\t\tif (it == pythonModules->end())\n\t\t{\n\t\t\tLogger::getLogger()->info(\"plugin_handle: filter_plugin_init(): \"\n\t\t\t\t\t\t   \"pModule not found for plugin '%s': \",\n\t\t\t\t\t\t   pName.c_str());\n\n\t\t\t// Set reload indicator\n\t\t\treloadModule = true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->info(\"plugin_handle: filter_plugin_init(): \"\n\t\t\t\t\t\t   \"pModule FOUND for plugin '%s': \",\n\t\t\t\t\t\t   pName.c_str());\n            \n\t\t\tif (it->second && it->second->m_module)\n\t\t\t{\n\t\t\t\t// Just use current loaded module: no load or re-load action\n\t\t\t\tmodule = it->second;\n\t\t\t\tLogger::getLogger()->info(\"plugin_handle: filter_plugin_init(): \"\n\t\t\t\t\t\t   \"module set to PythonModule object @ address %p\",\n\t\t\t\t\t\t   module);\n\n\t\t\t\t// Set Python library loaded state\n\t\t\t\tpythonInitState = it->second->m_init;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->fatal(\"plugin_handle: filter_plugin_init(): \"\n\t\t\t\t\t\t\t   \"found pModule is NULL for plugin '%s': \",\n\t\t\t\t\t\t\t   pName.c_str());\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t}\n\t}\n\n\tLogger::getLogger()->info(\"filter_plugin_init_fn: loadModule=%s, reloadModule=%s\", \n                                loadModule?\"TRUE\":\"FALSE\", reloadModule?\"TRUE\":\"FALSE\");\n    \n\t// Acquire GIL\n\tPyGILState_STATE state = PyGILState_Ensure();\n    \n\t// 
Import Python module\n\tif (loadModule || reloadModule)\n\t{        \n\t\tstring fledgePythonDir;\n\n\t\tstring fledgeRootDir(getenv(\"FLEDGE_ROOT\"));\n\t\tfledgePythonDir = fledgeRootDir + \"/python\";\n\n\t\t// Set Python path for embedded Python 3.x\n\t\t// Get current sys.path - borrowed reference\n\t\tPyObject* sysPath = PySys_GetObject((char *)\"path\");\n\t\tPyList_Append(sysPath, PyUnicode_FromString((char *) fledgePythonDir.c_str()));\n        \n\t\t// Set sys.argv for embedded Python 3.x\n\t\tint argc = 2;\n\t\twchar_t* argv[2];\n\t\targv[0] = Py_DecodeLocale(\"\", NULL);\n\t\targv[1] = Py_DecodeLocale(pName.c_str(), NULL);\n\n\t\t// Set script parameters\n\t\tPySys_SetArgv(argc, argv);\n\n\t\tLogger::getLogger()->debug(\"%s_plugin_init_fn, %sloading plugin '%s', \",\n\t\t\t\t\t   PLUGIN_TYPE_FILTER,\n\t\t\t\t\t   reloadModule ? \"re-\" : \"\",\n\t\t\t\t\t   pName.c_str());\n\n\t\t// Import Python script\n\t\tPyObject *newObj = PyImport_ImportModule(pName.c_str());\n\n\t\t// Check for NULL\n\t\tif (newObj)\n\t\t{\n\t\t\tPythonModule* newModule;\n\t\t\tif ((newModule = new PythonModule(newObj,\n\t\t\t\t\t\t\t  pythonInitState,\n\t\t\t\t\t\t\t  pName,\n\t\t\t\t\t\t\t  PLUGIN_TYPE_FILTER,\n\t\t\t\t\t\t\t  NULL)) == NULL)\n\t\t\t{\n\t\t\t\t// Release lock\n\t\t\t\tPyGILState_Release(state);\n\n\t\t\t\tLogger::getLogger()->fatal(\"plugin_handle: filter_plugin_init(): \"\n\t\t\t\t\t\t\t   \"failed to create Python module \"\n\t\t\t\t\t\t\t   \"object, plugin '%s'\",\n\t\t\t\t\t\t\t   pName.c_str());\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\t// Set category name\n\t\t\tnewModule->setCategoryName(config->getName());\n\n\t\t\t// Set module\n\t\t\tmodule = newModule;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tlogErrorMessage();\n\n\t\t\t// Release lock\n\t\t\tPyGILState_Release(state);\n\n\t\t\tLogger::getLogger()->fatal(\"plugin_handle: filter_plugin_init(): \"\n\t\t\t\t\t\t   \"failed to import plugin '%s'\",\n\t\t\t\t\t\t   pName.c_str());\n\t\t\treturn 
NULL;\n\t\t}\n\t}\n\telse\n\t{\n\t\t// Set category name\n\t\tmodule->setCategoryName(config->getName());\n\t}\n\n\tLogger::getLogger()->info(\"filter_plugin_init_fn for '%s', pModule '%p', \"\n\t\t\t\t   \"Python interpreter '%p', config=%s\",\n\t\t\t\t   module->m_name.c_str(),\n\t\t\t\t   module->m_module,\n\t\t\t\t   module->m_tState,\n\t\t\t\t   config->itemsToJSON().c_str());\n\n\tLogger::getLogger()->debug(\"%s:%d: calling set_loglevel_in_python_module(), loglevel=%s\", __FUNCTION__, __LINE__, Logger::getLogger()->getMinLevel().c_str());\n\tset_loglevel_in_python_module(module->m_module, module->m_name + \" plugin_init\");\n\n\tPyObject *config_dict = json_loads(config->itemsToJSON().c_str());\n        \n\t// Call Python method passing an object\n\tPyObject* ingest_fn = PyCapsule_New((void *)output, NULL, NULL);\n\tPyObject* ingest_ref = PyCapsule_New((void *)outHandle, NULL, NULL);\n\tPyObject* pReturn = PyObject_CallMethod(module->m_module,\n\t\t\t\t\t\"plugin_init\",\n\t\t\t\t\t\"OOO\",\n\t\t\t\t\tconfig_dict,\n\t\t\t\t\tingest_ref,\n\t\t\t\t\tingest_fn);\n\n\tPy_CLEAR(ingest_ref);\n\tPy_CLEAR(ingest_fn);\n\n\t// Handle returned data\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method plugin_init \"\n                                       \": error while getting result object, plugin '%s'\",\n                                       pName.c_str());\n\t\tlogErrorMessage();\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->info(\"plugin_handle: filter_plugin_init(): \"\n                                       \"got result object '%p', plugin '%s'\",\n                                       pReturn,\n                                       pName.c_str());\n\t}\n\n\t// Add the handle to handles map as key, PythonModule object as value\n\tstd::pair<std::map<PLUGIN_HANDLE, PythonModule*>::iterator, bool> ret;\n\tif (pythonHandles)\n\t{\n\t\t// Add to handles map the PythonHandles object\n\t\tret = pythonHandles->insert(pair<PLUGIN_HANDLE, 
PythonModule*>\n\t\t\t((PLUGIN_HANDLE)pReturn, module));\n\n\t\tif (ret.second)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"plugin_handle: filter_plugin_init_fn(): \"\n\t\t\t\t\t\t   \"handle %p of python plugin '%s' \"\n\t\t\t\t\t\t   \"added to pythonHandles map\",\n\t\t\t\t\t\t   pReturn,\n\t\t\t\t\t\t   pName.c_str());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"plugin_handle: filter_plugin_init_fn(): \"\n\t\t\t\t\t\t   \"failed to insert handle %p of \"\n\t\t\t\t\t\t   \"python plugin '%s' to pythonHandles map\",\n\t\t\t\t\t\t   pReturn,\n\t\t\t\t\t\t   pName.c_str());\n\n\t\t\tPy_CLEAR(module->m_module);\n\t\t\tmodule->m_module = NULL;\n\t\t\tdelete module;\n\t\t\tmodule = NULL;\n\n\t\t\tPy_CLEAR(pReturn);\n\t\t\tpReturn = NULL;\n\t\t}\n\t}\n\n\t// Release locks\n\tPyGILState_Release(state);\n\n\treturn pReturn ? (PLUGIN_HANDLE) pReturn : NULL;\n}\n\n/**\n * Constructor for PythonPluginHandle\n *    - Load python interpreter\n *    - Set sys.path and sys.argv\n *\n * @param    pluginName         The plugin name to load\n * @param    pluginPathName     The plugin pathname\n * @return                      PyObject of loaded module\n */\nvoid *PluginInterfaceInit(const char *pluginName, const char * pluginPathName)\n{\n\tbool initPython = true;\n\n\t// Set plugin name, also for methods in common-plugin-interfaces/python\n\tgPluginName = pluginName;\n\n\tstring fledgePythonDir;\n    \n\tstring fledgeRootDir(getenv(\"FLEDGE_ROOT\"));\n\tfledgePythonDir = fledgeRootDir + \"/python\";\n    \n\tstring filtersRootPath = fledgePythonDir + string(R\"(/fledge/plugins/filter/)\") + string(pluginName);\n\tLogger::getLogger()->info(\"%s:%d:, filtersRootPath=%s\", __FUNCTION__, __LINE__, filtersRootPath.c_str());\n    \n\tPythonRuntime::getPythonRuntime();\n    \n\t// Acquire GIL\n\tPyGILState_STATE state = PyGILState_Ensure();\n        \n\tLogger::getLogger()->info(\"FilterPlugin PluginInterfaceInit %s:%d: \"\n\t\t\t\t   \"fledgePythonDir=%s, plugin 
'%s'\",\n\t\t\t\t   __FUNCTION__,\n\t\t\t\t   __LINE__,\n\t\t\t\t   fledgePythonDir.c_str(),\n\t\t\t\t   pluginName);\n\n\t// Set Python path for embedded Python 3.x\n\t// Get current sys.path - borrowed reference\n\tPyObject* sysPath = PySys_GetObject((char *)\"path\");\n\tPyList_Append(sysPath, PyUnicode_FromString((char *) filtersRootPath.c_str()));\n\tPyList_Append(sysPath, PyUnicode_FromString((char *) fledgePythonDir.c_str()));\n\n\t// Set sys.argv for embedded Python 3.5\n\tint argc = 2;\n\twchar_t* argv[2];\n\targv[0] = Py_DecodeLocale(\"\", NULL);\n\targv[1] = Py_DecodeLocale(pluginName, NULL);\n\tPySys_SetArgv(argc, argv);\n\n\t// 2) Import Python script\n\tPyObject *pModule = PyImport_ImportModule(pluginName);\n\tLogger::getLogger()->info(\"%s:%d: pluginName=%s, pModule=%p\", __FUNCTION__, __LINE__, pluginName, pModule);\n\n\t// Check whether the Python module has been imported\n\tif (!pModule)\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\t\tLogger::getLogger()->fatal(\"FilterPlugin PluginInterfaceInit: \"\n\t\t\t\t\t   \"cannot import Python module file \"\n\t\t\t\t\t   \"from '%s', plugin '%s'\",\n\t\t\t\t\t   pluginPathName,\n\t\t\t\t\t   pluginName);\n\t}\n\telse\n\t{\n\t\tstd::pair<std::map<string, PythonModule*>::iterator, bool> ret;\n\t\tPythonModule* newModule = NULL;\n\t\tif (pythonModules)\n\t\t{\n\t\t\t// Add module into pythonModules, pluginName is the key\n\t\t\tif ((newModule = new PythonModule(pModule,\n\t\t\t\t\t\t\t  initPython,\n\t\t\t\t\t\t\t  string(pluginName),\n\t\t\t\t\t\t\t  PLUGIN_TYPE_FILTER,\n\t\t\t\t\t\t\t  NULL)) == NULL)\n\t\t\t{\n\t\t\t\t// Release lock\n\t\t\t\tPyGILState_Release(state);\n\n\t\t\t\tLogger::getLogger()->fatal(\"plugin_handle: filter_plugin_init(): \"\n\t\t\t\t\t\t\t   \"failed to create Python module \"\n\t\t\t\t\t\t\t   \"object, plugin '%s'\",\n\t\t\t\t\t\t\t   pluginName);\n\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\tret = pythonModules->insert(pair<string, 
PythonModule*>\n\t\t\t\t(string(pluginName), newModule));\n\t\t\tLogger::getLogger()->info(\"%s:%d: Added pair to pythonModules: <%s, %p>\", \n                                        __FUNCTION__, __LINE__, pluginName, newModule);\n\t\t}\n\n\t\t// Check result\n\t\tif (!pythonModules ||\n\t\t    ret.second == false)\n\t\t{\n\t\t\tLogger::getLogger()->fatal(\"%s:%d: python module \"\n\t\t\t\t\t\t   \"not added to the map \"\n\t\t\t\t\t\t   \"of loaded plugins, \"\n\t\t\t\t\t\t   \"pModule=%p, plugin '%s', aborting.\",\n\t\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t\t   __LINE__,\n\t\t\t\t\t\t   pModule,\n\t\t\t\t\t\t   pluginName);\n\t\t\t// Cleanup\n\t\t\tPy_CLEAR(pModule);\n\t\t\tpModule = NULL;\n\n\t\t\tdelete newModule;\n\t\t\tnewModule = NULL;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"%s:%d: python module \"\n\t\t\t\t\t\t   \"successfully loaded, \"\n\t\t\t\t\t\t   \"pModule=%p, plugin '%s'\",\n\t\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t\t   __LINE__,\n\t\t\t\t\t\t   pModule,\n\t\t\t\t\t\t   pluginName);\n\t\t}\n\t}\n\n\t// Release locks\n\tPyGILState_Release(state);\n\n\t// Return new Python module or NULL\n\treturn pModule;\n}\n\n/**\n * Returns function pointer that can be invoked to call '_sym' function\n * in python plugin\n *\n * @param    _sym\tSymbol name\n * @param    pName\tPlugin name\n * @return\t\tfunction pointer to be invoked\n */\nvoid *PluginInterfaceResolveSymbol(const char *_sym, const string& pName)\n{\n\tstring sym(_sym);\n\tif (!sym.compare(\"plugin_info\"))\n\t\treturn (void *) plugin_info_fn;\n\telse if (!sym.compare(\"plugin_init\"))\n\t\treturn (void *) filter_plugin_init_fn;\n\telse if (!sym.compare(\"plugin_shutdown\"))\n\t\treturn (void *) plugin_shutdown_fn;\n\telse if (!sym.compare(\"plugin_reconfigure\"))\n\t\treturn (void *) filter_plugin_reconfigure_fn;\n\telse if (!sym.compare(\"plugin_ingest\"))\n\t\treturn (void *) filter_plugin_ingest_fn;\n\telse if 
(!sym.compare(\"plugin_start\"))\n\t{\n\t\tLogger::getLogger()->debug(\"FilterPluginInterface currently \"\n\t\t\t\t\t   \"does not support 'plugin_start', plugin '%s'\",\n\t\t\t\t\t   pName.c_str());\n\t\treturn NULL;\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->fatal(\"FilterPluginInterfaceResolveSymbol can not find symbol '%s' \"\n\t\t\t\t\t   \"in the Filter Python plugin interface library, \"\n\t\t\t\t\t   \"loaded plugin '%s'\",\n\t\t\t\t\t   _sym,\n\t\t\t\t\t   pName.c_str());\n\t\treturn NULL;\n\t}\n}\n}; // End of extern C\n"
  },
  {
    "path": "C/services/north/CMakeLists.txt",
    "content": "cmake_minimum_required (VERSION 2.8.8)\nproject (North)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb -DPy_DEBUG\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Wextra -Wsign-conversion\")\nset(CMAKE_CXX_FLAGS_PROFILING \"-O2 -pg\")\nset(DLLIB -ldl)\nset(UUIDLIB -luuid)\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\nset(EXEC fledge.services.north)\n\ninclude_directories(. include ../../thirdparty/Simple-Web-Server ../../thirdparty/rapidjson/include  ../common/include ../../common/include)\n\nfind_package(Threads REQUIRED)\n\nset(BOOST_COMPONENTS system thread)\n# Late 2017 TODO: remove the following checks and always use std::regex\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        set(BOOST_COMPONENTS ${BOOST_COMPONENTS} regex)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DUSE_BOOST_REGEX\")\n    endif()\nendif()\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\nif(APPLE)\n    set(OPENSSL_ROOT_DIR \"/usr/local/opt/openssl\")\nendif()\n\nfile(GLOB north_src \"*.cpp\")\n\nlink_directories(${PROJECT_BINARY_DIR}/../../lib)\n\nadd_executable(${EXEC} ${north_src} ${common_src} ${services_src})\ntarget_link_libraries(${EXEC} ${Boost_LIBRARIES})\ntarget_link_libraries(${EXEC} ${CMAKE_THREAD_LIBS_INIT})\ntarget_link_libraries(${EXEC} ${DLLIB})\ntarget_link_libraries(${EXEC} ${UUIDLIB})\ntarget_link_libraries(${EXEC} ${COMMON_LIB})\ntarget_link_libraries(${EXEC} ${SERVICE_COMMON_LIB})\n\ninstall(TARGETS ${EXEC} RUNTIME DESTINATION fledge/services)\n\nif(MSYS) #TODO: Is MSYS true when MSVC is true?\n    target_link_libraries(${EXEC} ws2_32 wsock32)\n    if(OPENSSL_FOUND)\n        target_link_libraries(${EXEC} ws2_32 wsock32)\n    endif()\nendif()\n\n# Set profiling flags if 'Profiling' build\nif(CMAKE_BUILD_TYPE STREQUAL \"Profiling\")\n    message(\"Building in Profiling mode\")\n    
set_target_properties(${EXEC} PROPERTIES COMPILE_FLAGS \"${CMAKE_CXX_FLAGS_PROFILING}\")\n    # define 'PROFILING' flag used by service to change directory\n    target_compile_definitions(${EXEC} PRIVATE PROFILING=1)\n    set(CMAKE_SHARED_LINKER_FLAGS \"${CMAKE_SHARED_LINKER_FLAGS} -O2 -pg\")\n    target_link_libraries(${EXEC} -O2 -pg)\nendif()\n\n    "
  },
  {
    "path": "C/services/north/README.rst",
    "content": ".. |br| raw:: html\n\n   <br />\n\n\n*********************\nFledge North Service\n*********************\n\nThis is the north service of the Fledge platform written in C.\nThis service is responsible for sending the readings data onwards to the\nupstream systems. The service registers with the storage service to be\ngiven any new data as it arrives.\n|br| |br|\n\n\nBuilding\n========\n\nThe North service is built using cmake; to build the North service:\n::\n  mkdir build\n  cd build\n  cmake ..\n  make\n\nThis will create the North service executable.\n\nUse the command ``make install`` to install in the default location,\nnote you will need permission on the installation directory or use\nthe sudo command. Pass the option *DESTDIR=* to set your own destination\ninto which to install the North service.\n\nBuild the plugins by going to the directory *C/plugins/north* and follow\nthe instructions in each of the plugin directories.\n|br| |br|\n  \n\nPrerequisites\n=============\n\nTo build the North service the machine must have installed the\n*cmake* system, *make* and *g++*, plus the libraries for the North plugin,\ne.g. the boost libraries.\n\n\nOn Ubuntu based Linux distributions these can be installed with *apt-get*:\n::\n  apt-get install libboost-dev libboost-system-dev libboost-thread-dev\n  apt-get install cmake g++ make\n\n|br| |br|\n\n\nRunning\n=======\n\nThe North service may be run in daemon mode or interactively by use\nof the *-d* command line argument.\n\nThe North service will register with the core to allow the core to\nmonitor the North service and to allow the North service to find the\nStorage service.  It assumes the core is located on the same machine. 
This\ncan however be overridden by the use of the command line arguments\n*--port=* and *--address=* to set the port and address of the core\nmicroservice.\n\nThe North service will look for North plugins in the current directory\nor in the directory *$FLEDGE_ROOT/plugins/north*.\n|br| |br|\n"
  },
  {
    "path": "C/services/north/data_load.cpp",
    "content": "/*\n * Fledge North Service Data Loading.\n *\n * Copyright (c) 2020 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n\n#include <data_load.h>\n#include <north_service.h>\n\n#define INITIAL_BLOCK_WAIT\t10\n#define MAX_WAIT_PERIOD\t\t200\n\nusing namespace std;\n\nstatic void threadMain(void *arg)\n{\n\tDataLoad *dl = (DataLoad *)arg;\n\tdl->loadThread();\n}\n\n/**\n * DataLoad Constructor\n *\n * Create and start the loading thread\n */\nDataLoad::DataLoad(const string& name, long streamId, StorageClient *storage) : \n\tm_name(name), m_streamId(streamId), m_storage(storage), m_shutdown(false),\n\tm_readRequest(0), m_dataSource(SourceReadings), m_pipeline(NULL), m_perfMonitor(NULL),\n\tm_prefetchLimit(2), m_isolate(false), m_debuggerAttached(false),\n\tm_debuggerBufferSize(1), m_suspendIngest(false), m_steps(0)\n{\n\tm_blockSize = DEFAULT_BLOCK_SIZE;\n\n\tif (m_streamId == 0)\n\t{\n\t\tm_streamId = createNewStream();\n\t}\n\tm_nextStreamUpdate = 1;\n\tm_streamUpdate = 1;\n\tm_lastFetched = getLastSentId();\n\tm_streamSent = getLastSentId();\n\tm_flushRequired = false;\n\tm_thread = new thread(threadMain, this);\n\tloadFilters(name);\n}\n\n/**\n * DataLoad destructor\n *\n * Shutdown and wait for the loading thread\n */\nDataLoad::~DataLoad()\n{\n\t// Request the loading thread to shutdown and wait for it\n\tLogger::getLogger()->info(\"Data load shutdown in progress\");\n\tm_shutdown = true;\n\tm_cv.notify_all();\n\tm_fetchCV.notify_all();\n\tm_thread->join();\n\tdelete m_thread;\n\tif (m_pipeline)\n\t{\n\t\tm_pipeline->cleanupFilters(m_name);\n\t\tdelete m_pipeline;\n\t}\n\tif (m_flushRequired)\n\t{\n\t\tflushLastSentId();\n\t}\n\t// Clear out the queue of readings\n\tunique_lock<mutex> lck(m_qMutex);\t// Should not need to do this\n\twhile (! 
m_queue.empty())\n\t{\n\t\tReadingSet *readings = m_queue.front();\n\t\tdelete readings;\n\t\tm_queue.pop_front();\n\t}\n\tLogger::getLogger()->info(\"Data load shutdown complete\");\n}\n\n/**\n * External call to shutdown the north service\n */\nvoid DataLoad::shutdown()\n{\n\tm_shutdown = true;\n\tm_cv.notify_all();\n\tm_fetchCV.notify_all();\n}\n\n/**\n * External call to restart the north service\n */\nvoid DataLoad::restart()\n{\n\tshutdown();\n}\n\n/**\n * Set the source of data for the service\n *\n * @param source\tThe data source\n */\nbool DataLoad::setDataSource(const string& source)\n{\n\tif (source.compare(\"statistics\") == 0)\n\t\tm_dataSource = SourceStatistics;\n\telse if (source.compare(\"readings\") == 0)\n\t\tm_dataSource = SourceReadings;\n\telse if (source.compare(\"audit\") == 0)\n\t\tm_dataSource = SourceAudit;\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"Unsupported source '%s' for north service '%s'\",\n\t\t\t\tsource.c_str(), m_name.c_str());\n\t\treturn false;\n\t}\n\treturn true;\n}\n\n/**\n * The background thread that loads data from the database\n */\nvoid DataLoad::loadThread()\n{\n\twhile (!m_shutdown)\n\t{\n\t\tunsigned int block = waitForReadRequest();\n\t\twhile (m_shutdown == false && m_queue.size() < m_prefetchLimit)\n\t\t{\n\t\t\t// Read another block if we have less than \n\t\t\t// the prefetch limit already queued\n\t\t\treadBlock(block);\n\t\t}\n\t}\n}\n\n/**\n * Wait for a read request to be made. 
Read requests come from consumer\n * threads calling triggerRead, which causes a block of reading\n * data (or whatever the source of data is) to be added to the reading\n * buffer.\n *\n * @return unsigned int\tThe size of the block to read\n */\nunsigned int DataLoad::waitForReadRequest()\n{\n\tunique_lock<mutex> lck(m_mutex);\n\twhile (m_shutdown == false && m_readRequest == 0)\n\t{\n\t\tm_cv.wait(lck);\n\t}\n\tunsigned int rval =  m_readRequest;\n\tm_readRequest = 0;\n\tLogger::getLogger()->debug(\"DataLoad received read request for %d readings\", rval);\n\treturn rval;\n}\n\n/**\n * Trigger the loading thread to read a block of data. This is called by\n * any thread to request that data be added to the buffer ready for collection.\n */\nvoid DataLoad::triggerRead(unsigned int blockSize)\n{\n\tunique_lock<mutex> lck(m_mutex);\n\tm_readRequest = blockSize;\n\tm_cv.notify_all();\n}\n\n/**\n * Read a block of readings, statistics or audit data from the storage service\n *\n * @param blockSize\tThe number of readings to fetch\n */\nvoid DataLoad::readBlock(unsigned int blockSize)\n{\n\tint n_waits = 0;\n\tint n_update_streamId = 0;\n\tint max_wait_count = 5;\t// Maximum wait counter to update streams table\n\tunsigned int waitPeriod = INITIAL_BLOCK_WAIT;\n\tif (m_suspendIngest && willStep())\n\t{\n\t\tlock_guard<mutex> guard(m_suspendMutex);\n\t\tblockSize = m_steps;\n\t\tm_steps = 0;\n\t}\n\telse if (m_suspendIngest)\n\t{\n\t\treturn;\n\t}\n\tdo\n\t{\n\t\tReadingSet* readings = nullptr;\n\t\ttry\n\t\t{\n\t\t\tswitch (m_dataSource)\n\t\t\t{\n\t\t\t\tcase SourceReadings:\n\t\t\t\t\t// Logger::getLogger()->debug(\"Fetch %d readings from %d\", blockSize, m_lastFetched + 1);\n\t\t\t\t\treadings = m_storage->readingFetch(m_lastFetched + 1, blockSize);\n\t\t\t\t\tbreak;\n\t\t\t\tcase SourceStatistics:\n\t\t\t\t\treadings = fetchStatistics(blockSize);\n\t\t\t\t\tbreak;\n\t\t\t\tcase SourceAudit:\n\t\t\t\t\treadings = 
fetchAudit(blockSize);\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tLogger::getLogger()->fatal(\"Bad source for data to send\");\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tcatch (ReadingSetException* e)\n\t\t{\n\t\t\t// Ignore, the exception has been reported in the layer below\n\t\t\t// readings may contain erroneous data, clear it\n\t\t\treadings = nullptr;\n\t\t}\n\t\tcatch (exception& e)\n\t\t{\n\t\t\t// Ignore, the exception has been reported in the layer below\n\t\t\t// readings may contain erroneous data, clear it\n\t\t\treadings = nullptr;\n\t\t}\n\t\tif (readings && readings->getCount())\n\t\t{\n\t\t\tn_update_streamId = 0;\n\t\t\tm_lastFetched = readings->getLastId();\n\t\t\tLogger::getLogger()->debug(\"DataLoad::readBlock(): Got %lu readings from storage client, updated m_lastFetched=%lu\", \n\t\t\t\t\t\t\treadings->getCount(), m_lastFetched);\n\t\t\tbufferReadings(readings);\n\t\t\tif (m_perfMonitor)\n\t\t\t{\n\t\t\t\tm_perfMonitor->collect(\"No of waits for data\", n_waits);\n\t\t\t\tm_perfMonitor->collect(\"Block utilisation %\", (long)((readings->getCount() * 100) / blockSize));\n\t\t\t}\n\t\t\treturn;\n\t\t}\n\t\telse if (readings)\n\t\t{\n\t\t\t// Delete the empty readings set\n\t\t\tdelete readings;\n\t\t\tn_update_streamId++;\n\t\t\tif (n_update_streamId > max_wait_count) {\n\t\t\t\t// Update 'last_object_id' in 'streams' table when no readings to send\n\t\t\t\tn_update_streamId = 0;\n\t\t\t\tflushLastSentId();\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Logger::getLogger()->debug(\"DataLoad::readBlock(): No readings available\");\n\t\t}\n\t\tif (!m_shutdown)\n\t\t{\n\t\t\t// TODO improve this\n\t\t\tthis_thread::sleep_for(chrono::milliseconds(waitPeriod));\n\t\t\twaitPeriod *= 2;\n\t\t\tif (waitPeriod > MAX_WAIT_PERIOD)\n\t\t\t\twaitPeriod = MAX_WAIT_PERIOD;\n\t\t\tn_waits++;\n\t\t}\n\t} while (m_shutdown == false);\n}\n\n/**\n * Fetch data from the statistics history table\n *\n * @param blockSize\tNumber of records to fetch\n * @return 
ReadingSet*\tA set of readings\n */\nReadingSet *DataLoad::fetchStatistics(unsigned int blockSize)\n{\n\tconst Condition conditionId(GreaterThan);\n\t// WHERE id > lastId\n\tWhere* wId = new Where(\"id\", conditionId, to_string(m_lastFetched));\n\tvector<Returns *> columns;\n\t// Add columns and needed aliases\n\tcolumns.push_back(new Returns(\"id\"));\n\tcolumns.push_back(new Returns(\"key\", \"asset_code\"));\n\tcolumns.push_back(new Returns(\"ts\"));\n\n\tReturns *tmpReturn = new Returns(\"history_ts\", \"user_ts\");\n\ttmpReturn->timezone(\"utc\");\n\tcolumns.push_back(tmpReturn);\n\n\tcolumns.push_back(new Returns(\"value\"));\n\t// Build the query with fields, aliases and where\n\tQuery qStatistics(columns, wId);\n\t// Set limit\n\tqStatistics.limit(blockSize);\n\t// Set sort\n\tSort* sort = new Sort(\"id\");\n\tqStatistics.sort(sort);\n\n\t// Query the statistics_history table and get a ReadingSet result\n\treturn m_storage->queryTableToReadings(\"statistics_history\", qStatistics);\n}\n\n/**\n * Fetch data from the audit log table\n *\n * @param blockSize\tNumber of records to fetch\n * @return ReadingSet*\tA set of readings\n */\nReadingSet *DataLoad::fetchAudit(unsigned int blockSize)\n{\n\tconst Condition conditionId(GreaterThan);\n\t// WHERE id > lastId\n\tWhere* wId = new Where(\"id\", conditionId, to_string(m_lastFetched));\n\tvector<Returns *> columns;\n\t// Add columns and needed aliases\n\tcolumns.push_back(new Returns(\"id\"));\n\tcolumns.push_back(new Returns(\"code\", \"asset_code\"));\n\tcolumns.push_back(new Returns(\"ts\"));\n\n\tReturns *tmpReturn = new Returns(\"ts\", \"user_ts\");\n\ttmpReturn->timezone(\"utc\");\n\tcolumns.push_back(tmpReturn);\n\n\tcolumns.push_back(new Returns(\"log\", \"reading\"));\n\t// Build the query with fields, aliases and where\n\tQuery qStatistics(columns, wId);\n\t// Set limit\n\tqStatistics.limit(blockSize);\n\t// Set sort\n\tSort* sort = new Sort(\"id\");\n\tqStatistics.sort(sort);\n\n\t// Query the audit  
table and get a ReadingSet result\n\treturn m_storage->queryTableToReadings(\"log\", qStatistics);\n}\n\n/**\n * Get the ID of the last reading that was sent with this service\n */\nunsigned long DataLoad::getLastSentId()\n{\n\tconst Condition conditionId(Equals);\n\tstring streamId = to_string(m_streamId);\n\tWhere* wStreamId = new Where(\"id\",\n\t\t\t\t     conditionId,\n\t\t\t\t     streamId);\n\n\t// SELECT * FROM fledge.streams WHERE id = x\n\tQuery qLastId(wStreamId);\n\n\tResultSet* lastObjectId = m_storage->queryTable(\"streams\", qLastId);\n\n\tif (lastObjectId != NULL && lastObjectId->rowCount())\n\t{\n\t\t// Get the first row only\n\t\tResultSet::RowIterator it = lastObjectId->firstRow();\n\t\t// Access the element\n\t\tResultSet::Row* row = *it;\n\t\tif (row)\n\t\t{\n\t\t\t// Get column value\n\t\t\tResultSet::ColumnValue* theVal = row->getColumn(\"last_object\");\n\t\t\t// Set found id\n\t\t\tunsigned long rval = (unsigned long)theVal->getInteger();\n\t\t\tdelete lastObjectId;\n\t\t\treturn rval;\n\t\t}\n\t}\n\t// Free result set\n\tdelete lastObjectId;\n\n\treturn 0;\n}\n\n/**\n * Buffer a block of readings. 
Called after a block of data has been\n * read to add that block to the queue ready for collection by the\n * consuming thread.\n *\n * @param readings\tThe readings to buffer\n */\nvoid DataLoad::bufferReadings(ReadingSet *readings)\n{\n\tif (m_pipeline)\n\t{\n\t\tPipelineElement *firstElement = m_pipeline->getFirstFilterPlugin();\n\t\tif (firstElement)\n\t\t{\n\n\t\t\t// Check whether filters are set before calling ingest\n\t\t\twhile (!m_pipeline->isReady())\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Ingest called before \"\n\t\t\t\t\t\t\t\t\t  \"filter pipeline is ready\");\n\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(150));\n\t\t\t}\n\n\t\t\tm_pipeline->execute();\n\t\t\t// Pass readingSet to filter chain\n\t\t\tfirstElement->ingest(readings);\n\t\t\tm_pipeline->completeBranch();\t// Main branch has completed\n\t\t\tm_pipeline->awaitCompletion();\n\t\t\treturn;\n\t\t}\n\t}\n\tunique_lock<mutex> lck(m_qMutex);\n\tm_queue.push_back(readings);\n\tif (m_perfMonitor && m_perfMonitor->isCollecting())\n\t{\n\t\tm_perfMonitor->collect(\"Readings added to buffer\", (long)(readings->getCount()));\n\t\tm_perfMonitor->collect(\"Reading sets buffered\", (long)(m_queue.size()));\n\t\tunsigned long i = 0;\n\t\tfor (auto& set : m_queue)\n\t\t\ti += set->getCount();\n\t\tm_perfMonitor->collect(\"Total readings buffered\", (long)i);\n\t}\n\tLogger::getLogger()->debug(\"Buffered %d readings for north processing\", readings->getCount());\n\tm_fetchCV.notify_all();\n}\n\n/**\n * Fetch Readings\n *\n * @param wait\t\tBoolean to determine if the call should block the calling thread\n * @return ReadingSet*\tReturn a block of readings from the buffer\n */\nReadingSet *DataLoad::fetchReadings(bool wait)\n{\n\tunique_lock<mutex> lck(m_qMutex);\n\twhile (m_shutdown == false && m_queue.empty())\n\t{\n\t\tif (m_perfMonitor && m_perfMonitor->isCollecting())\n\t\t{\n\t\t\tm_perfMonitor->collect(\"No data available to fetch\", 
1);\n\t\t}\n\t\ttriggerRead(m_blockSize);\n\t\tif (wait && !m_shutdown)\n\t\t{\n\t\t\tm_fetchCV.wait(lck);\n\t\t}\n\t\telse\n\t\t{\n\t\t\treturn NULL;\n\t\t}\n\t}\n\tReadingSet *rval = NULL;\n\tif (!m_queue.empty())\n\t{\n\t\trval = m_queue.front();\n\t\tm_queue.pop_front();\n\t}\n\tif (m_queue.size() < m_prefetchLimit && m_shutdown == false)\t// Read another block if we have less than 5 already queued\n\t{\n\t\ttriggerRead(m_blockSize);\n\t}\n\treturn rval;\n}\n\n/**\n * Creates a new stream, it adds a new row into the streams table allocating a new stream id\n *\n * @return newly created stream, 0 otherwise\n */\nint DataLoad::createNewStream()\n{\nint streamId = 0;\nInsertValues streamValues;\n\n\tstreamValues.push_back(InsertValue(\"description\",    m_name));\n\tstreamValues.push_back(InsertValue(\"last_object\",    0));\n\n\tif (m_storage->insertTable(\"streams\", streamValues) != 1)\n\t{\n\t\tLogger::getLogger()->error(\"Failed to insert a row into the streams table\");\n    }\n\telse\n\t{\n\t\t// Select the row just created, having description='process name'\n\t\tconst Condition conditionId(Equals);\n\t\tWhere* wName = new Where(\"description\", conditionId, m_name);\n\t\tQuery qName(wName);\n\n\t\tResultSet *rows = m_storage->queryTable(\"streams\", qName);\n\n\t\tif (rows != NULL && rows->rowCount())\n\t\t{\n\t\t\t// Get the first row only\n\t\t\tResultSet::RowIterator it = rows->firstRow();\n\t\t\t// Access the element\n\t\t\tResultSet::Row* row = *it;\n\t\t\tif (row)\n\t\t\t{\n\t\t\t\t// Get column value\n\t\t\t\tResultSet::ColumnValue* theVal = row->getColumn(\"id\");\n\t\t\t\tstreamId = (int)theVal->getInteger();\n\t\t\t}\n\t\t}\n\t\tdelete rows;\n\t}\n\n\tNorthService::getMgmtClient()->setCategoryItemValue(m_name, \"streamId\", to_string(streamId));\n\treturn streamId;\n}\n\n/**\n * Update the last sent ID for our stream\n */\nvoid DataLoad::updateLastSentId(unsigned long id)\n{\n\tm_streamSent = id;\n\tm_flushRequired = true;\n\tif 
(m_nextStreamUpdate-- <= 0)\n\t{\n\t\tflushLastSentId();\n\t\tm_nextStreamUpdate = m_streamUpdate;\n\t}\n}\n\n/**\n * Flush the last sent Id to the storage layer\n */\nvoid DataLoad::flushLastSentId()\n{\n\tconst Condition condition(Equals);\n\tWhere where(\"id\", condition, to_string(m_streamId));\n\tInsertValues lastId;\n\n\tlastId.push_back(InsertValue(\"last_object\", (long)m_streamSent));\n\tm_storage->updateTable(\"streams\", lastId, where);\n}\n\n\n/**\n * Load filter plugins\n *\n * Filters found in configuration are loaded\n * and added to the data load class instance\n *\n * @param categoryName\tConfiguration category name\n * @return\t\tTrue if filters were loaded and initialised\n *\t\t\tor there are no filters\n *\t\t\tFalse with load/init errors\n */\nbool DataLoad::loadFilters(const string& categoryName)\n{\n\tLogger::getLogger()->info(\"loadFilters: categoryName=%s\", categoryName.c_str());\n\t/*\n\t * We do everything to set up the pipeline using a local FilterPipeline and then assign it\n\t * to the service m_filterPipeline once it is set up to guard against access to the pipeline\n\t * during setup.\n\t * This should not be an issue if the mutex is held, however this approach lessens the risk\n\t * in the case of this routine being called when the mutex is not held and ensures m_filterPipeline\n\t * only ever points to a fully configured filter pipeline.\n\t */\n\tManagementClient *management = NorthService::getMgmtClient();\n\tlock_guard<mutex> guard(m_pipelineMutex);\n\tFilterPipeline *filterPipeline = new FilterPipeline(management, *m_storage, m_name);\n\n\t// Try to load filters:\n\tif (!filterPipeline->loadFilters(categoryName))\n\t{\n\t\t// Return false on any error, freeing the partially built pipeline\n\t\tdelete filterPipeline;\n\t\treturn false;\n\t}\n\n\t// Set up the filter pipeline\n\tbool rval = filterPipeline->setupFiltersPipeline((void *)passToOnwardFilter, (void *)pipelineEnd, this);\n\tif (rval)\n\t{\n\t\tm_pipeline = filterPipeline;\n\t\t// If we previously had a debugger attached then attach 
to the new pipeline\n\t\tif (m_debuggerAttached)\n\t\t{\n\t\t\tattachDebugger();\n\t\t\tsetDebuggerBuffer(m_debuggerBufferSize);\n\t\t}\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"Failed to setup the filter pipeline, the filters are not attached to the service\");\n\t\tfilterPipeline->cleanupFilters(categoryName);\n\t}\n\treturn rval;\n}\n\n/**\n * Pass the current readings set to the next filter in the pipeline\n *\n * Note:\n * This routine must be passed to all filters \"plugin_init\" except the last one\n *\n * Static method\n *\n * @param outHandle     Pointer to next filter\n * @param readingSet    Current readings set\n */\nvoid DataLoad::passToOnwardFilter(OUTPUT_HANDLE *outHandle,\n\t\t\t\tREADINGSET *readingSet)\n{\n\t// Get next element in the pipeline\n\tPipelineElement *next = (PipelineElement *)outHandle;\n\t// Pass readings to next filter\n\tnext->ingest(readingSet);\n}\n\n/**\n * Use the current readings (they have been filtered\n * by all filters)\n *\n * The assumption is that one of two things has happened.\n *\n *\t1. The filtering has all been done in place. In which case\n *\tthe m_data vector is in the ReadingSet passed in here.\n *\n *\t2. 
The filtering has created new ReadingSet in which case\n *\tthe reading vector must be copied into m_data from the\n *\tReadingSet.\n *\n * Note:\n * This routine must be passed to last filter \"plugin_init\" only\n *\n * Static method\n *\n * @param outHandle     Pointer to DataLoad class\n * @param readingSet    Filtered reading set being added to Ingest::m_data\n */\nvoid DataLoad::pipelineEnd(OUTPUT_HANDLE *outHandle,\n\t\t\t     READINGSET *readingSet)\n{\n\n\tDataLoad *load = (DataLoad *)outHandle;\n\n\tif (load->isolated())\n\t{\n\t\tdelete readingSet;\n\t\treturn;\n\t}\n\tstd::vector<Reading *>* vecPtr = readingSet->getAllReadingsPtr();\n    unsigned long lastReadingId = 0;\n\n    for(auto rdngPtrItr = vecPtr->crbegin(); rdngPtrItr != vecPtr->crend(); rdngPtrItr++)\n    {\n        if((*rdngPtrItr)->hasId()) // only consider valid reading IDs\n        {\n            lastReadingId = (*rdngPtrItr)->getId();\n            break;\n        }\n    }\n    \n    Logger::getLogger()->debug(\"DataLoad::pipelineEnd(): readingSet->getCount()=%d, lastReadingId=%lu, \" \n                                \"load->m_lastFetched=%lu\",\n                                  readingSet->getCount(), lastReadingId, load->m_lastFetched);\n    \n\t// Special case when all readings are filtered out \n\t// or new readings are appended by filter with id 0\n\tif ((readingSet->getCount() == 0) || (lastReadingId == 0))\n\t{\n\t    Logger::getLogger()->debug(\"DataLoad::pipelineEnd(): updating with load->updateLastSentId(%d)\", \n\t                                load->m_lastFetched);\n\t\tload->updateLastSentId(load->m_lastFetched);\n\t}\n\n\tunique_lock<mutex> lck(load->m_qMutex);\n\tload->m_queue.push_back(readingSet);\n\tload->m_fetchCV.notify_all();\n}\n\n/**\n * Configuration change for one of the filters or to the pipeline.\n *\n * @param category\tThe name of the configuration category\n * @param newConfig\tThe new category contents\n */\nvoid DataLoad::configChange(const string& 
category, const string& newConfig)\n{\n\tLogger::getLogger()->debug(\"DataLoad::configChange(): category=%s, newConfig=%s\", category.c_str(), newConfig.c_str());\n\tif (category == m_name)\n\t{\n\t\t/**\n\t\t * The category that has changed is the one for the north service itself.\n\t\t * The only items that concern us here are the filter item that defines\n\t\t * the filter pipeline and the data source. If the item is the filter pipeline\n\t\t * we extract that item and check to see if it defines a pipeline that is\n\t\t * different to the one we currently have.\n\t\t *\n\t\t * If it is the filter pipeline we destroy the current pipeline and create a new one.\n\t\t */\n\t\tConfigCategory config(\"tmp\", newConfig);\n\t\tif (config.itemExists(\"source\"))\n\t\t{\n\t\t\tsetDataSource(config.getValue(\"source\"));\n\t\t}\n\t\tstring newPipeline = \"\";\n\t\tif (config.itemExists(\"filter\"))\n\t\t{\n\t\t\tnewPipeline = config.getValue(\"filter\");\n\t\t}\n\n\t\t{\n\t\t\tlock_guard<mutex> guard(m_pipelineMutex);\n\t\t\tif (m_pipeline)\n\t\t\t{\n\t\t\t\tif (newPipeline == \"\" ||\n\t\t\t\t    m_pipeline->hasChanged(newPipeline) == false)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->info(\"DataLoad::configChange(): \"\n\t\t\t\t\t\t\t\t  \"filter pipeline is not set or \"\n\t\t\t\t\t\t\t\t  \"it hasn't changed\");\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\t/* The new filter pipeline is different to the one we already have running,\n\t\t\t\t * so remove the current pipeline and recreate it.\n\t\t\t\t */\n\t\t\t\tLogger::getLogger()->info(\"DataLoad::configChange(): \"\n\t\t\t\t\t\t\t  \"filter pipeline has changed, \"\n\t\t\t\t\t\t\t  \"recreating filter pipeline\");\n\t\t\t\tm_pipeline->cleanupFilters(m_name);\n\t\t\t\tdelete m_pipeline;\n\t\t\t\tm_pipeline = NULL;\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * We have to set up a new pipeline to match the changed configuration.\n\t\t * Release the lock before reloading the filters as this will acquire\n\t\t * the lock again\n\t\t 
*/\n\t\tloadFilters(category);\n\n\t\tlock_guard<mutex> guard(m_pipelineMutex);\n\t}\n\telse\n\t{\n\t\t/*\n\t\t * The category is for one fo the filters. We simply call the Filter Pipeline\n\t\t * instance and get it to deal with sending the configuration to the right filter.\n\t\t * This is done holding the pipeline mutex to prevent the pipeline being changed\n\t\t * during this call and also to hold the ingest thread from running the filters\n\t\t * during reconfiguration.\n\t\t */\n\t\tLogger::getLogger()->info(\"DataLoad::configChange(): change to config of some filter(s)\");\n\t\tlock_guard<mutex> guard(m_pipelineMutex);\n\t\tif (m_pipeline)\n\t\t{\n\t\t\tm_pipeline->configChange(category, newConfig);\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "C/services/north/data_send.cpp",
    "content": "/*\n * Fledge North Service Data Loading.\n *\n * Copyright (c) 2020, 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <data_sender.h>\n#include <data_load.h>\n#include <north_service.h>\n#include <reading.h>\n\nusing namespace std;\n\n/**\n * Start the sending thread within the DataSender class\n *\n * @param data\tThe instance of the class DataSender\n */\nstatic void startSenderThread(void *data)\n{\n\tDataSender *sender = (DataSender *)data;\n\tsender->sendThread();\n}\n\n/**\n * Thread to update statistics table in DB\n */\nstatic void statsThread(DataSender *sender)\n{\n\twhile (sender->isRunning())\n\t{\n\t\tsender->flushStatistics();\n\t}\n}\n\n/**\n * Constructor for the data sending class\n */\nDataSender::DataSender(NorthPlugin *plugin, DataLoad *loader, NorthService *service) :\n\tm_plugin(plugin), m_loader(loader), m_service(service), m_shutdown(false),\n\tm_paused(false), m_sending(false), m_perfMonitor(NULL), m_repeatedFailure(0)\n{\n\tm_statsUpdateFails = 0;\n\n\tm_logger = Logger::getLogger();\n\n\t// Create statistics rows if not existant\n\tif (createStats(\"Readings Sent\", 0))\n\t{\n\t\tm_statsDbEntriesCache.insert(\"Readings Sent\");\n\t}\n\tif (createStats(m_loader->getName(), 0))\n\t{\n\t\tm_statsDbEntriesCache.insert(m_loader->getName());\n\t}\n\n\t/*\n\t * Start the thread. 
Everything must be initialised\n\t * before the thread is started\n\t */\n\tm_thread = new thread(startSenderThread, this);\n\n\tm_statsThread = new thread(statsThread, this);\n}\n\n/**\n * Destructor for the data sender class\n */\nDataSender::~DataSender()\n{\n\tm_logger->info(\"DataSender shutdown in progress\");\n\t{\n\t\tlock_guard<std::mutex> lck(m_backoffMutex);\n\t\tm_shutdown = true;\n\t}\n\t// Wake up any sleeping sender thread\n\tm_backoffCV.notify_all();\n\tm_thread->join();\n\tdelete m_thread;\n\n\tm_statsCv.notify_one();\n\tm_logger->debug(\"DataSender stats thread notified\");\n\tm_statsThread->join();\n\tm_logger->debug(\"DataSender stats thread joined\");\n\tdelete m_statsThread;\n\n\tm_logger->info(\"DataSender shutdown complete\");\n}\n\n/**\n * The sending thread entry point\n */\nvoid DataSender::sendThread()\n{\n\tif (isDryRun())\n\t\treturn;\n\n\tReadingSet *readings = nullptr;\n\n\twhile (!m_shutdown)\n\t{\n\t\tif (readings == NULL) {\n\n\t\t\treadings = m_loader->fetchReadings(true);\n\t\t}\n\t\tif (!readings)\n\t\t{\n\t\t\tm_logger->warn(\n\t\t\t\t\"Sending thread closing down after failing to fetch readings\");\n\t\t\treturn;\n\t\t}\n\t\tbool removeReadings = false;\n\t\tif (m_shutdown == false && readings->getCount() > 0)\n\t\t{\n\t\t\tunsigned long lastSent = send(readings);\n\t\t\tif (lastSent)\n\t\t\t{\n\t\t\t\tm_loader->updateLastSentId(lastSent);\n\n\t\t\t\t// Check all readings sent\n\t\t\t\tvector<Reading *> *vec = readings->getAllReadingsPtr();\n\n\t\t\t\t// Set readings removal\n\t\t\t\tremoveReadings = vec->size() == 0;\n\t\t\t}\n\t\t}\n\t\telse if (m_shutdown == false) \n\t\t{\n\t\t\t// All readings filtered out\n\t\t\tLogger::getLogger()->debug(\"All readings filtered out\");\n\n\t\t\t// Get last read item from the readings database\n\t\t\tunsigned long lastRead = m_loader->getLastFetched();\n\n\t\t\t// Update LastSentId in streams table\n\t\t\tm_loader->updateLastSentId(lastRead);\n\n\t\t\t// Set readings 
removal\n\t\t\tremoveReadings = true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\treadings = NULL;\n\t\t}\n\n\t\t// Remove readings object if needed\n\t\tif (removeReadings)\n\t\t{\n\t\t\tdelete readings;\n\t\t\treadings = NULL;\n\t\t}\n\t}\n\tif (readings)\n\t{\n\t\t// Remove any readings we had failed to send before shutting down\n\t\tdelete readings;\n\t}\n\tm_logger->info(\"Sending thread shutdown\");\n}\n\n/**\n * Send a block of readings\n *\n * @param readings\tThe readings to send\n * @return long\t\tThe ID of the last reading sent\n */\nunsigned long DataSender::send(ReadingSet *readings)\n{\n\tblockPause();\n\tuint32_t to_send = readings->getCount();\n\tuint32_t sent = m_plugin->send(readings->getAllReadings());\n\treleasePause();\n\n\tif (to_send > 0 && sent == 0)\n\t{\n\t\tm_repeatedFailure++;\n\t\t// We had readings to send but sent none. This could be as a result\n\t\t// of a failed connection north or a bad configuration; we have no way\n\t\t// to tell. If we take no action we will continue to use lots of CPU\n\t\t// and load the system. 
We instigate a backoff strategy here to try\n\t\t// to keep some CPU available for other tasks\n\t\tif (m_repeatedFailure == FAILURE_BACKOFF_THRESHOLD)\n\t\t{\n\t\t\tm_service->alertFailures();\n\t\t}\n\t\tif (m_repeatedFailure > FAILURE_BACKOFF_THRESHOLD)\n\t\t{\n\t\t\tm_sendBackoffTime = m_repeatedFailure * MIN_SEND_BACKOFF / FAILURE_BACKOFF_THRESHOLD;\n\t\t\tif (m_sendBackoffTime > MAX_SEND_BACKOFF)\n\t\t\t{\n\t\t\t\tm_sendBackoffTime = MAX_SEND_BACKOFF;\n\t\t\t}\n\t\t\t{\n\t\t\t\tunique_lock<mutex> lk(m_backoffMutex);\n\t\t\t\tm_backoffCV.wait_for(lk, chrono::milliseconds(m_sendBackoffTime));\n\t\t\t}\n\t\t\tif (m_shutdown)\n\t\t\t{\n\t\t\t\treturn 0;\n\t\t\t}\n\n\t\t}\n\t}\n\telse if (sent > 0)\n\t{\n\t\tif (m_repeatedFailure >= FAILURE_BACKOFF_THRESHOLD)\n\t\t{\n\t\t\tm_service->clearFailures();\n\t\t}\n\t\t// Reset the backoff and continue at full rate\n\t\tm_repeatedFailure = 0;\n\t}\n\n\t// last few readings in the reading set may have 0 reading ID, \n\t// if they have been generated by filters on north service itself\n\tconst std::vector<Reading *>& readingsVec = readings->getAllReadings();\n\tunsigned long lastSent = 0;\n\tfor (auto rdngPtrItr = readingsVec.crbegin(); rdngPtrItr != readingsVec.crend(); rdngPtrItr++)\n\t{\n\t\tif((*rdngPtrItr)->hasId()) // only consider readings with valid reading IDs\n\t\t{\n\t\t\tlastSent = (*rdngPtrItr)->getId();\n\t\t\tbreak;\n\t\t}\n\t}\n    \n\t// unsigned long lastSent = readings->getReadingId(sent);\n\tif (m_perfMonitor)\n\t{\n\t\tm_perfMonitor->collect(\"Readings sent\", sent);\n\t\tm_perfMonitor->collect(\"Percentage readings sent\", (100 * sent) / to_send);\n\t}\n\n\tLogger::getLogger()->debug(\"DataSender::send(): to_send=%d, sent=%d, lastSent=%lu\", to_send, sent, lastSent);\n\n\tif (sent > 0)\n\t{\n\t\t// lastSent = readings->getLastId();\n\n\t\t// Update asset tracker table/cache, if required\n\t\tvector<Reading *> *vec = readings->getAllReadingsPtr();\n\n\t\tfor (vector<Reading *>::iterator it = 
vec->begin(); it != vec->end(); )\n\t\t{\n\t\t\tReading *reading = *it;\n\n\t\t\tif (!reading->hasId() || reading->getId() <= lastSent)\n\t\t\t{\n\t\t\t\tAssetTrackingTuple tuple(m_service->getName(), m_service->getPluginName(), reading->getAssetName(), \"Egress\");\n\t\t\t\tif (!AssetTracker::getAssetTracker()->checkAssetTrackingCache(tuple))\n\t\t\t\t{\n\t\t\t\t\tAssetTracker::getAssetTracker()->addAssetTrackingTuple(tuple);\n\t\t\t\t\tm_logger->info(\"sendDataThread:  Adding new asset tracking tuple - egress: %s\", tuple.assetToString().c_str());\n\t\t\t\t}\n\n\t\t\t\t// Remove current reading\n\t\t\t\tdelete reading;\n\t\t\t\treading = NULL;\n\n\t\t\t\t// Remove item and set iterator to next element\n\t\t\t\tit = vec->erase(it);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tupdateStatistics(sent);\n\t\treturn lastSent;\n\t}\n\treturn 0;\n}\n\n/**\n * Cause the data sender process to pause sending data until a corresponding release call is made.\n *\n * This call does not block until release is called, but does block until the current\n * send completes.\n *\n * Called by external classes that want to prevent interaction\n * with the north plugin.\n */\nvoid DataSender::pause()\n{\n\tunique_lock<mutex> lck(m_pauseMutex);\n\tm_pauseCV.wait(lck, [this]{ return m_sending == false; });\n\n\tm_paused = true;\n}\n\n/**\n * Release the paused data sender thread\n *\n * Called by external classes that want to release interaction\n * with the north plugin.\n */\nvoid DataSender::release()\n{\n\t{\n\t\tstd::lock_guard<std::mutex> lck(m_pauseMutex);\n\t\tm_paused = false;\n\t}\n\n\tm_pauseCV.notify_all();\n}\n\n/**\n * Check if we have paused the sending of data\n *\n * Called before we interact with the north plugin by the\n * DataSender class\n */\nvoid DataSender::blockPause()\n{\n\tunique_lock<mutex> lck(m_pauseMutex);\n\tm_pauseCV.wait(lck, [this]{ return m_paused == false; });\n\n\tm_sending = true;\n}\n\n/**\n * Release the block on pausing the 
sender\n *\n * Called after we interact with the north plugin by the\n * DataSender class\n */\nvoid DataSender::releasePause()\n{\n\t{\n\t\tstd::lock_guard<std::mutex> lck(m_pauseMutex);\n\t\tm_sending = false;\n\t}\n\tm_pauseCV.notify_all();\n}\n\n/**\n * Update the sent statistics\n *\n * @param increment     Increment of the number of readings sent\n */\nvoid DataSender::updateStatistics(uint32_t increment)\n{\n\tlock_guard<mutex> guard(m_statsMtx);\n\n\t// Add statistics counter to the map\n\tm_statsPendingEntries[m_loader->getName()] += increment;\n\tm_statsPendingEntries[\"Readings Sent\"] += increment;\n}\n\n/**\n * Flush statistics to storage service\n */\nvoid DataSender::flushStatistics()\n{\n\t// Wait for FLUSH_STATS_INTERVAL seconds or receive notification\n\t// when shutdown is called\n\tunique_lock<mutex> flush(m_flushStatsMtx);\n\tm_statsCv.wait_for(flush, std::chrono::seconds(FLUSH_STATS_INTERVAL));\n\tflush.unlock();\n\n\tstd::map<std::string, unsigned int> statsData;\n\n\t// Acquire the lock for the pending statistics map\n\tunique_lock<mutex> lck(m_statsMtx);\n\n\t// Copy statistics map\n\tstatsData = m_statsPendingEntries;\n\n\t// Reset statistics\n\tm_statsPendingEntries.clear();\n\n\t// Release lock\n\tlck.unlock();\n\n\tif (statsData.empty())\n\t{\n\t\treturn;\n\t}\n\n\tvector<pair<ExpressionValues *, Where *>> statsUpdates;\n\tconst Condition conditionStat(Equals);\n\n\t// Send statistics to storage service\n\tmap<string, unsigned int>::iterator it;\n\tfor (it = statsData.begin(); it != statsData.end(); it++)\n\t{\n\t\t// Prepare WHERE key = name\n\t\tWhere *nStat = new Where(\"key\", conditionStat, it->first);\n\t\t// Prepare value = value + inc\n\t\tExpressionValues *updateValue = new ExpressionValues;\n\t\tupdateValue->push_back(Expression(\"value\", \"+\", (int) it->second));\n\n\t\tstatsUpdates.emplace_back(updateValue, nStat);\n\n\t\t// Check whether to create stats entry into the storage\n\t\tif (m_statsDbEntriesCache.find(it->first) == 
m_statsDbEntriesCache.end())\n\t\t{\n\t\t\tif (createStats(it->first, it->second))\n\t\t\t{\n\t\t\t\tm_statsDbEntriesCache.insert(it->first);\n\t\t\t}\n\t\t}\n\n\t\tLogger::getLogger()->debug(\"Flushing statistics '%s': %d\",\n\t\t\t\tit->first.c_str(),\n\t\t\t\tit->second);\n\t}\n\n\t// Bulk update\n\tif (m_loader->getStorage())\n\t{\n\t\t// Do the update\n\t\tint rv = m_loader->getStorage()->updateTable(\"statistics\", statsUpdates);\n\n\t\t// Check for errors\n\t\tif (rv < 1)\n\t\t{\n\t\t\tif (++m_statsUpdateFails > STATS_UPDATE_FAIL_THRESHOLD)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Update of statistics failure has persisted, attempting recovery\");\n\n\t\t\t\tm_statsDbEntriesCache.clear();\n\t\t\t\t// Create statistics rows if not existent\n\t\t\t\tif (createStats(\"Readings Sent\", 0))\n\t\t\t\t{\n\t\t\t\t\tm_statsDbEntriesCache.insert(\"Readings Sent\");\n\t\t\t\t}\n\t\t\t\tif (createStats(m_loader->getName(), 0))\n\t\t\t\t{\n\t\t\t\t\tm_statsDbEntriesCache.insert(m_loader->getName());\n\t\t\t\t}\n\n\t\t\t\tm_statsUpdateFails = 0;\n\t\t\t}\n\t\t\telse if (m_statsUpdateFails == 1)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Update of statistics failed\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Update of statistics still failing\");\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * Create a row in the statistics table for each statistic\n *\n * @param key\t\tThe statistics key to create\n * @param value\t\tThe statistics value\n * @return\t\tTrue for created data, False for no operation or error\n */\nbool DataSender::createStats(const std::string &key,\n\t\tunsigned int value)\n{\n\tif (!m_loader->getStorage())\n\t{\n\t\treturn false;\n\t}\n\n\t// SELECT * FROM fledge.statistics WHERE key = statistics_key\n\tconst Condition conditionKey(Equals);\n\tWhere *wKey = new Where(\"key\", conditionKey, key);\n\tQuery qKey(wKey);\n\n\tResultSet* result = 0;\n\n\t// Query via storage client\n\tresult = 
m_loader->getStorage()->queryTable(\"statistics\", qKey);\n\tif (result == NULL)\n\t{\n\t\t// The query failed, do not attempt an insert\n\t\treturn false;\n\t}\n\n\tbool doInsert = !result->rowCount();\n\tdelete result;\n\n\tif (!doInsert)\n\t{\n\t\t// Row already exists\n\t\treturn true;\n\t}\n\n\tstring description;\n\tif (key == m_loader->getName())\n\t{\n\t\tdescription = key + \" Readings Sent\";\n\t}\n\telse\n\t{\n\t\tdescription = key + \" North\";\n\t}\n\tInsertValues values;\n\tvalues.push_back(InsertValue(\"key\",         key));\n\tvalues.push_back(InsertValue(\"description\", description));\n\tvalues.push_back(InsertValue(\"value\",       (long)value));\n\tstring table = \"statistics\";\n\n\tif (m_loader->getStorage()->insertTable(table, values) != 1)\n\t{\n\t\tLogger::getLogger()->error(\"Failed to insert a new \"\\\n\t\t\t\t\"row into the 'statistics' table, key '%s'\",\n\t\t\t\tkey.c_str());\n\t\treturn false;\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->info(\"New row added into 'statistics' table, key '%s'\",\n\t\t\tkey.c_str());\n\t\treturn true;\n\t}\n}\n\n/**\n * Check status of dryrun flag\n *\n * @return True if the dryrun flag is true\n */\nbool DataSender::isDryRun()\n{\n\treturn m_service->getDryRun();\n}\n\n/**\n * Notify the data sender that there has been a configuration change.\n * This is used to wake up early from the wait that is performed when\n * we have failures to send. Changing the configuration may resolve the\n * issue causing the send failure.\n */\nvoid DataSender::configChange()\n{\n\tlock_guard<std::mutex> lck(m_backoffMutex);\n\t// Wake up any sleeping sender thread\n\tm_backoffCV.notify_all();\n}\n"
  },
  {
    "path": "C/services/north/include/data_load.h",
    "content": "#ifndef _DATA_LOAD_H\n#define _DATA_LOAD_H\n\n#include <string>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\n#include <deque>\n#include <storage_client.h>\n#include <reading.h>\n#include <filter_pipeline.h>\n#include <service_handler.h>\n#include <perfmonitors.h>\n\n#define DEFAULT_BLOCK_SIZE 100\n\n/**\n * A class used in the North service to load data from the buffer\n *\n * This class is responsible for loading the reading from the \n * storage service and buffering them ready for the egress thread\n * to process them.\n */\nclass DataLoad : public ServiceHandler {\n\tpublic:\n\t\tDataLoad(const std::string& name, long streamId,\n\t\t\t       \tStorageClient *storage);\n\t\tvirtual ~DataLoad();\n\n\t\tvoid\t\t\tloadThread();\n\t\tbool\t\t\tsetDataSource(const std::string& source);\n\t\tvoid\t\t\ttriggerRead(unsigned int blockSize);\n\t\tvoid\t\t\tupdateLastSentId(unsigned long id);\n\t\tvoid\t\t\tflushLastSentId();\n\t\tReadingSet\t\t*fetchReadings(bool wait);\n\t\tstatic void\t\tpassToOnwardFilter(OUTPUT_HANDLE *outHandle,\n\t\t\t\t\t\tREADINGSET* readings);\n\t\tstatic void\t\tpipelineEnd(OUTPUT_HANDLE *outHandle,\n\t\t\t\t\t\tREADINGSET* readings);\n\t\tvoid\t\t\tshutdown();\n\t\tvoid\t\t\trestart();\n\t\tbool\t\t\tisRunning() { return !m_shutdown; };\n\t\tvoid\t\t\tconfigChange(const std::string& category, const std::string& newConfig);\n\t\tvoid\t\t\tconfigChildCreate(const std::string& , const std::string&, const std::string&){};\n\t\tvoid\t\t\tconfigChildDelete(const std::string& , const std::string&){};\n\t\tunsigned long\t\tgetLastFetched() { return m_lastFetched; };\n\t\tvoid\t\t\tsetBlockSize(unsigned long blockSize)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_blockSize = blockSize;\n\t\t\t\t\t};\n\t\tvoid\t\t\tsetStreamUpdate(unsigned long streamUpdate)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_streamUpdate = streamUpdate;\n\t\t\t\t\t\tm_nextStreamUpdate = streamUpdate;\n\t\t\t\t\t};\n\t\tvoid\t\t\tsetPerfMonitor(PerformanceMonitor 
*perfMonitor) { m_perfMonitor = perfMonitor; };\n\t\tconst std::string\t&getName() { return m_name; };\n\t\tStorageClient\t\t*getStorage() { return m_storage; }; \n\t\tvoid\t\t\tsetPrefetchLimit(unsigned int limit)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_prefetchLimit = limit;\n\t\t\t\t\t};\n\n\t\t// Debugger entry points\n\t\tbool\t\t\tattachDebugger()\n\t\t\t\t\t{\n\t\t\t\t\t\tif (m_pipeline)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tm_debuggerAttached = true;\n\t\t\t\t\t\t\treturn m_pipeline->attachDebugger();\n\t\t\t\t\t\t}\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t};\n\t\tvoid\t\t\tdetachDebugger()\n\t\t\t\t\t{\n\t\t\t\t\t\tif (m_pipeline)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tm_debuggerAttached = false;\n\t\t\t\t\t\t\tm_debuggerBufferSize = 1;\n\t\t\t\t\t\t\tm_pipeline->detachDebugger();\n\t\t\t\t\t\t}\n\t\t\t\t\t};\n\t\tvoid\t\t\tsetDebuggerBuffer(unsigned int size)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (m_pipeline)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tm_debuggerBufferSize = size;\n\t\t\t\t\t\t\tm_pipeline->setDebuggerBuffer(size);\n\t\t\t\t\t\t}\n\t\t\t\t\t};\n\t\tstd::string\t\tgetDebuggerBuffer()\n\t\t\t\t\t{\n\t\t\t\t\t\tstd::string rval;\n\t\t\t\t\t\tif (m_pipeline)\n\t\t\t\t\t\t\trval = m_pipeline->getDebuggerBuffer();\n\t\t\t\t\t\treturn rval;\n\t\t\t\t\t};\n\t\tvoid\t\t\tisolate(bool isolate)\n\t\t\t\t\t{\n\t\t\t\t\t\tstd::lock_guard<std::mutex> guard(m_isolateMutex);\n\t\t\t\t\t\tm_isolate = isolate;\n\t\t\t\t\t};\n\t\tbool\t\t\tisolated()\n\t\t\t\t\t{\n\t\t\t\t\t\tstd::lock_guard<std::mutex> guard(m_isolateMutex);\n\t\t\t\t\t\treturn m_isolate;\n\t\t\t\t\t};\n\t\tbool\t\t\treplayDebugger()\n\t\t\t\t\t{\n\t\t\t\t\t\tif (m_pipeline)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treturn m_pipeline->replayDebugger();\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t}\n\t\t\t\t\t};\n\t\tvoid\t\t\tsuspendIngest(bool suspend)\n\t\t\t\t\t{\n\t\t\t\t\t\tstd::lock_guard<std::mutex> guard(m_suspendMutex);\n\t\t\t\t\t\tm_suspendIngest = suspend;\n\t\t\t\t\t\tm_steps = 
0;\n\t\t\t\t\t};\n\t\tbool\t\t\tisSuspended()\n\t\t\t\t\t{\n\t\t\t\t\t\tstd::lock_guard<std::mutex> guard(m_suspendMutex);\n\t\t\t\t\t\treturn m_suspendIngest;\n\t\t\t\t\t};\n\t\tvoid\t\t\tstepDebugger(unsigned int steps)\n\t\t\t\t\t{\n\t\t\t\t\t\tstd::lock_guard<std::mutex> guard(m_suspendMutex);\n\t\t\t\t\t\tm_steps = steps;\n\t\t\t\t\t};\n\t\tbool\t\t\twillStep()\n\t\t\t\t\t{\n\t\t\t\t\t\tstd::lock_guard<std::mutex> guard(m_suspendMutex);\n\t\t\t\t\t\tif (m_suspendIngest && m_steps > 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treturn true;\n\t\t\t\t\t\t}\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t};\n\tprivate:\n\t\tvoid\t\t\treadBlock(unsigned int blockSize);\n\t\tunsigned int\t\twaitForReadRequest();\n\t\tunsigned long\t\tgetLastSentId();\n\t\tint\t\t\tcreateNewStream();\n\t\tReadingSet\t\t*fetchStatistics(unsigned int blockSize);\n\t\tReadingSet\t\t*fetchAudit(unsigned int blockSize);\n\t\tvoid\t\t\tbufferReadings(ReadingSet *readings);\n\t\tbool\t\t\tloadFilters(const std::string& category);\n\n\tprivate:\n\t\tconst std::string&\tm_name;\n\t\tlong\t\t\tm_streamId;\n\t\tStorageClient\t\t*m_storage;\n\t\tvolatile bool\t\tm_shutdown;\n\t\tstd::thread\t\t*m_thread;\n\t\tstd::mutex\t\tm_mutex;\n\t\tstd::condition_variable m_cv;\n\t\tstd::condition_variable m_fetchCV;\n\t\tunsigned int\t\tm_readRequest;\n\t\tenum { SourceReadings, SourceStatistics, SourceAudit }\n\t\t\t\t\tm_dataSource;\n\t\tunsigned long\t\tm_lastFetched;\n\t\tstd::deque<ReadingSet *>\n\t\t\t\t\tm_queue;\n\t\tstd::mutex\t\tm_qMutex;\n\t\tFilterPipeline\t\t*m_pipeline;\n\t\tstd::mutex\t\tm_pipelineMutex;\n\t\tunsigned long\t\tm_blockSize;\n\t\tPerformanceMonitor\t*m_perfMonitor;\n\t\tint\t\t\tm_streamUpdate;\n\t\tunsigned long\t\tm_streamSent;\n\t\tint\t\t\tm_nextStreamUpdate;\n\t\tunsigned int\t\tm_prefetchLimit;\n\t\tbool\t\t\tm_flushRequired;\n\t\tstd::mutex\t\tm_isolateMutex;\n\t\tbool\t\t\tm_isolate;\n\t\tbool\t\t\tm_debuggerAttached;\n\t\tunsigned int 
\t\tm_debuggerBufferSize;\n\t\tbool\t\t\tm_suspendIngest;\n\t\tunsigned int\t\tm_steps;\n\t\tstd::mutex\t\tm_suspendMutex;\n};\n#endif\n"
  },
  {
    "path": "C/services/north/include/data_sender.h",
    "content": "#ifndef _DATA_SENDER_H\n#define _DATA_SENDER_H\n\n#include <north_plugin.h>\n#include <reading_set.h>\n#include <logger.h>\n#include <thread>\n#include <mutex>\n#include <condition_variable>\n#include <perfmonitors.h>\n\n// Send statistics to storage in seconds\n#define FLUSH_STATS_INTERVAL 5\n// Failure counter before re-recreating statics rows\n#define STATS_UPDATE_FAIL_THRESHOLD 3\n\n// BAckoff sending when we see repeated failures\n#define FAILURE_BACKOFF_THRESHOLD\t10\t// Number of consequetive failures to trigger backoff\n#define MIN_SEND_BACKOFF\t\t50\t// Min backoff in milliseconds\n#define MAX_SEND_BACKOFF\t\t60000\t// Max backoff in milliseconds\n\nclass DataLoad;\nclass NorthService;\n\nclass DataSender {\n\tpublic:\n\t\tDataSender(NorthPlugin *plugin, DataLoad *loader, NorthService *north);\n\t\t~DataSender();\n\t\tvoid\t\t\tsendThread();\n\t\tvoid\t\t\tupdatePlugin(NorthPlugin *plugin) { m_plugin = plugin; };\n\t\tvoid\t\t\tpause();\n\t\tvoid\t\t\trelease();\n\t\tvoid\t\t\tsetPerfMonitor(PerformanceMonitor *perfMonitor) { m_perfMonitor = perfMonitor; };\n\t\tbool\t\t\tisRunning() { return !m_shutdown; };\n\t\tvoid\t\t\tflushStatistics();\n\t\tbool\t\t\tisDryRun();\n\t\tvoid\t\t\tconfigChange();\n\tprivate:\n\t\tvoid\t\t\tupdateStatistics(uint32_t increment);\n\t\tbool \t\t\tcreateStats(const std::string &key, unsigned int value);\n\t\tunsigned long\t\tsend(ReadingSet *readings);\n\t\tvoid\t\t\tblockPause();\n\t\tvoid\t\t\treleasePause();\n\tprivate:\n\t\tNorthPlugin\t\t*m_plugin;\n\t\tDataLoad\t\t*m_loader;\n\t\tNorthService\t\t*m_service;\n\t\tvolatile bool\t\tm_shutdown;\n\t\tstd::thread\t\t*m_thread;\n\t\tLogger\t\t\t*m_logger;\n\t\tbool\t\t\tm_paused;\n\t\tbool\t\t\tm_sending;\n\t\tstd::mutex\t\tm_pauseMutex;\n\t\tstd::condition_variable m_pauseCV;\n\t\tPerformanceMonitor\t*m_perfMonitor;\n\t\t// Statistics send via thread\n\t\tstd::thread\t\t*m_statsThread;\n\t\tstd::mutex\t\tm_flushStatsMtx;\n\t\t// Statistics save 
map\n\t\tstd::condition_variable m_statsCv;\n\t\tstd::mutex\t\tm_statsMtx;\n\t\tstd::map<std::string, unsigned int>\n\t\t\t\t\tm_statsPendingEntries;\n\t\tint\t\t\tm_statsUpdateFails;\n\t\t// confirmed stats table entries\n\t\tstd::unordered_set<std::string>\n\t\t\t\t\tm_statsDbEntriesCache;\n\t\tunsigned int\t\tm_repeatedFailure;\n\t\tunsigned int\t\tm_sendBackoffTime;\n\t\tstd::mutex\t\tm_backoffMutex;\n\t\tstd::condition_variable m_backoffCV;\n};\n#endif\n"
  },
  {
    "path": "C/services/north/include/defaults.h",
    "content": "#ifndef _DEFAULTS_H\n#define _DEFAULTS_H\n/*\n * Fledge north service configuration defaults for the advanced category.\n *\n * Copyright (c) 2020 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\nstatic struct {\n\tconst char\t*name;\n\tconst char\t*displayName;\n\tconst char\t*description;\n\tconst char\t*type;\n\tconst char\t*value;\n} defaults[] = {\n\t{ NULL, NULL, NULL, NULL, NULL }\n};\n#endif\n"
  },
  {
    "path": "C/services/north/include/north_api.h",
    "content": "#ifndef _NORTH_API_H\n#define _NORTH_API_H\n/*\n * Fledge north service API.\n *\n * Copyright (c) 2025  Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <logger.h>\n#include <server_http.hpp>\n\n// Debugger URLs\n#define DEBUG_ATTACH\t\t\"^/fledge/north/debug/attach$\"\n#define DEBUG_DETACH\t\t\"^/fledge/north/debug/detach$\"\n#define DEBUG_BUFFER\t\t\"^/fledge/north/debug/buffer$\"\n#define DEBUG_ISOLATE\t\t\"^/fledge/north/debug/isolate$\"\n#define DEBUG_SUSPEND\t\t\"^/fledge/north/debug/suspend$\"\n#define DEBUG_STEP\t\t\"^/fledge/north/debug/step$\"\n#define DEBUG_REPLAY\t\t\"^/fledge/north/debug/replay$\"\n#define DEBUG_STATE\t\t\"^/fledge/north/debug/state$\"\n\nclass NorthService;\n\ntypedef std::shared_ptr<SimpleWeb::Server<SimpleWeb::HTTP>::Response> Response;\ntypedef std::shared_ptr<SimpleWeb::Server<SimpleWeb::HTTP>::Request> Request;\n\nclass NorthApi {\n\tpublic:\n\t\tNorthApi(NorthService *);\n\t\t~NorthApi();\n\t\tunsigned short\t\tgetListenerPort();\n\t\tvoid\t\t\tstartServer();\n\n\t\t// Debugger entry points\n\t\tvoid\t\t\tattachDebugger(Response response, Request request);\n\t\tvoid\t\t\tdetachDebugger(Response response, Request request);\n\t\tvoid\t\t\tsetDebuggerBuffer(Response response, Request request);\n\t\tvoid\t\t\tgetDebuggerBuffer(Response response, Request request);\n\t\tvoid\t\t\tisolateDebugger(Response response, Request request);\n\t\tvoid\t\t\tsuspendDebugger(Response response, Request request);\n\t\tvoid\t\t\tstepDebugger(Response response, Request request);\n\t\tvoid\t\t\treplayDebugger(Response response, Request request);\n\t\tvoid\t\t\tstateDebugger(Response response, Request request);\n\n\tprivate:\n\t\tSimpleWeb::Server<SimpleWeb::HTTP>\n\t\t\t\t\t*m_server;\n\t\tNorthService\t\t*m_service;\n\t\tstd::thread\t\t*m_thread;\n\t\tLogger\t\t\t*m_logger;\n};\n\n#endif\n"
  },
  {
    "path": "C/services/north/include/north_plugin.h",
    "content": "#ifndef _NORTH_PLUGIN\n#define _NORTH_PLUGIN\n/*\n * Fledge north service.\n *\n * Copyright (c) 2020 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <plugin.h>\n#include <plugin_manager.h>\n#include <config_category.h>\n#include <string>\n#include <reading.h>\n\ntypedef void (*INGEST_CB)(void *, Reading);\ntypedef void (*INGEST_CB2)(void *, std::vector<Reading *>*);\n\n/**\n * Class that represents a north plugin.\n *\n * The purpose of this class is to hide the use of the pointers into the\n * dynamically loaded plugin and wrap the interface into a class that\n * can be used directly in the north subsystem.\n *\n * This is achieved by having a set of private member variables which are\n * the pointers to the functions in the plugin, and a set of public methods\n * that will call these functions via the function pointers.\n */\nclass NorthPlugin : public Plugin {\n\npublic:\n\tNorthPlugin(PLUGIN_HANDLE handle, const ConfigCategory& category);\n\t~NorthPlugin();\n\n\tuint32_t\tsend(const std::vector<Reading *>& readings);\n\tvoid\t\treconfigure(const std::string&);\n\tvoid\t\tshutdown();\n\tbool\t\tpersistData() { return info->options & SP_PERSIST_DATA; };\n\tvoid\t\tstart();\n\tvoid\t\tstartData(const std::string& pluginData);\n\tstd::string\tshutdownSaveData();\n\tbool\t\thasControl() { return info->options & SP_CONTROL; };\n\tvoid\t\tpluginRegister(bool ( *write)(char *name, char *value, ControlDestination destination, ...),\n\t\t\t\tint (* operation)(char *operation, int paramCount, char *names[], char *parameters[], ControlDestination destination, ...));\n\nprivate:\n\tPLUGIN_HANDLE\tm_instance;\n\tuint32_t\t(*pluginSendPtr)(PLUGIN_HANDLE, const std::vector<Reading *>& readings);\n\tvoid\t\t(*pluginReconfigurePtr)(PLUGIN_HANDLE*,\n\t\t\t\t\t        const std::string& newConfig);\n\tvoid\t\t(*pluginShutdownPtr)(PLUGIN_HANDLE);\n\tstd::string\t(*pluginShutdownDataPtr)(const 
PLUGIN_HANDLE);\n\tvoid\t\t(*pluginStartPtr)(PLUGIN_HANDLE);\n\tvoid\t\t(*pluginStartDataPtr)(PLUGIN_HANDLE,\n\t\t\t\t\t      const std::string& pluginData);\n\tvoid\t\t(*pluginRegisterPtr)(PLUGIN_HANDLE handle,\n\t\t\t\tbool ( *write)(char *name, char *value, ControlDestination destination, ...),\n\t\t\t\tint (* operation)(char *operation, int paramCount, char *names[], char *parameters[], ControlDestination destination, ...));\n\t\n};\n\n#endif\n"
  },
  {
    "path": "C/services/north/include/north_service.h",
    "content": "#ifndef _NORTH_SERVICE_H\n#define _NORTH_SERVICE_H\n/*\n * Fledge north service.\n *\n * Copyright (c) 2020 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <logger.h>\n#include <north_plugin.h>\n#include <service_handler.h>\n#include <storage_client.h>\n#include <config_category.h>\n#include <filter_plugin.h>\n#include <mutex>\n#include <condition_variable>\n#include <audit_logger.h>\n#include <perfmonitors.h>\n#include <data_load.h>\n#include <data_sender.h>\n\n#define SERVICE_NAME  \"Fledge North\"\n\n/**\n * State bits for the south pipeline debugger\n */\n#define DEBUG_ATTACHED\t\t0x01\n#define DEBUG_SUSPENDED\t\t0x02\n#define DEBUG_ISOLATED\t\t0x04\n\nclass NorthServiceProvider;\n\n/**\n * The NorthService class. This class is the core\n * of the service that provides north side services\n * to Fledge.\n */\nclass NorthService : public ServiceAuthHandler {\n\tpublic:\n\t\tNorthService(const std::string& name,\n\t\t\t\tconst std::string& token = \"\");\n\t\tvirtual ~NorthService();\n\t\tvoid \t\t\t\tstart(std::string& coreAddress,\n\t\t\t\t\t\t      unsigned short corePort);\n\t\tvoid \t\t\t\tstop();\n\t\tvoid\t\t\t\tshutdown();\n\t\tvoid\t\t\t\trestart();\n\t\tvoid\t\t\t\tconfigChange(const std::string&, const std::string&);\n\t\tvoid\t\t\t\tconfigChildCreate(const std::string& , const std::string&, const std::string&){};\n\t\tvoid\t\t\t\tconfigChildDelete(const std::string& , const std::string&){};\n\t\tbool\t\t\t\tisRunning() { return !m_shutdown; };\n\t\tconst std::string&\t\tgetName() { return m_name; };\n\t\tconst std::string&\t\tgetPluginName() { return m_pluginName; };\n\t\tvoid\t\t\t\tpause();\n\t\tvoid\t\t\t\trelease();\n\t\tbool\t\t\t\twrite(const std::string& name, const std::string& value, const ControlDestination);\n\t\tbool\t\t\t\twrite(const std::string& name, const std::string& value, const ControlDestination, const std::string& arg);\n\t\tint\t\t\t\toperation(const 
std::string& name, int paramCount, char *names[], char *parameters[], const ControlDestination);\n\t\tint\t\t\t\toperation(const std::string& name, int paramCount, char *names[], char *parameters[], const ControlDestination, const std::string& arg);\n\t\tvoid\t\t\t\tsetDryRun() { m_dryRun = true; };\n\t\tbool\t\t\t\tgetDryRun() { return m_dryRun; };\n\t\tvoid\t\t\t\talertFailures();\n\t\tvoid\t\t\t\tclearFailures();\n\t\t// Debugger Entry point\n\t\tbool\t\t\t\tattachDebugger()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_dataLoad)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tm_debugState = DEBUG_ATTACHED;\n\t\t\t\t\t\t\t\treturn m_dataLoad->attachDebugger();\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t};\n\t\tvoid\t\t\t\tdetachDebugger()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_dataLoad)\n\t\t\t\t\t\t\t\tm_dataLoad->detachDebugger();\n\t\t\t\t\t\t\tsuspendDebugger(false);\n\t\t\t\t\t\t\tisolateDebugger(false);\n\t\t\t\t\t\t\tm_debugState = 0;\n\t\t\t\t\t\t};\n\t\tvoid\t\t\t\tsetDebuggerBuffer(unsigned int size)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_dataLoad)\n\t\t\t\t\t\t\t\tm_dataLoad->setDebuggerBuffer(size);\n\t\t\t\t\t\t};\n\t\tstd::string\t\t\tgetDebuggerBuffer()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_dataLoad)\n\t\t\t\t\t\t\t\treturn m_dataLoad->getDebuggerBuffer();\n\t\t\t\t\t\t\treturn \"\";\n\t\t\t\t\t\t};\n\t\tvoid\t\t\t\tsuspendDebugger(bool suspend)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_dataLoad)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tm_dataLoad->suspendIngest(suspend);\n\t\t\t\t\t\t\t\tif (suspend)\n\t\t\t\t\t\t\t\t\tm_debugState |= DEBUG_SUSPENDED;\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\tm_debugState &= ~(unsigned int)DEBUG_SUSPENDED;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t};\n\t\tvoid\t\t\t\tisolateDebugger(bool isolate)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_dataLoad)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tm_dataLoad->isolate(isolate);\n\t\t\t\t\t\t\t\tif (isolate)\n\t\t\t\t\t\t\t\t\tm_debugState |= DEBUG_ISOLATED;\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\tm_debugState &= ~(unsigned 
int)DEBUG_ISOLATED;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t};\n\t\tvoid\t\t\t\tstepDebugger(unsigned int steps)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_dataLoad)\n\t\t\t\t\t\t\t\tm_dataLoad->stepDebugger(steps);\n\t\t\t\t\t\t}\n\t\tbool\t\t\t\treplayDebugger()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_dataLoad)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\treturn m_dataLoad->replayDebugger();\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t};\n\t\tstd::string\t\t\tdebugState();\n\t\tbool\t\t\t\tdebuggerAttached()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treturn m_debugState & DEBUG_ATTACHED;\n\t\t\t\t\t\t}\n\t\tbool\t\t\t\tallowDebugger()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treturn m_allowDebugger;\n\t\t\t\t\t\t}\n\t\t\n\tprivate:\n\t\tvoid\t\t\t\taddConfigDefaults(DefaultConfigCategory& defaults);\n\t\tbool \t\t\t\tloadPlugin();\n\t\tvoid \t\t\t\tcreateConfigCategories(DefaultConfigCategory configCategory, std::string parent_name,std::string current_name);\n\t\tvoid\t\t\t\trestartPlugin();\n\t\tvoid\t\t\t\tupdateFeatures(const ConfigCategory& category);\n\tprivate:\n\t\tstd::string\t\t\tcontrolSource();\n\t\tbool\t\t\t\tsendToService(const std::string& southService, const std::string& name, const std::string& value);\n\t\tbool\t\t\t\tsendToDispatcher(const std::string& path, const std::string& payload);\n\t\tDataLoad\t\t\t*m_dataLoad;\n\t\tDataSender\t\t\t*m_dataSender;\n\t\tNorthPlugin\t\t\t*northPlugin;\n\t\tstd::string\t\t\tm_pluginName;\n\t\tLogger        \t\t\t*logger;\n\t\tAssetTracker\t\t\t*m_assetTracker;\n\t\tvolatile bool\t\t\tm_shutdown;\n\t\tConfigCategory\t\t\tm_config;\n\t\tConfigCategory\t\t\tm_configAdvanced;\n\t\tStorageClient\t\t\t*m_storage;\n\t\tstd::mutex\t\t\tm_mutex;\n                std::condition_variable\t\tm_cv;\n\t\tPluginData\t\t\t*m_pluginData;\n\t\tbool\t\t\t\tm_restartPlugin;\n\t\tconst 
std::string\t\tm_token;\n\t\tbool\t\t\t\tm_allowControl;\n\t\tbool\t\t\t\tm_dryRun;\n\t\tbool\t\t\t\tm_requestRestart;\n\t\tAuditLogger\t\t\t*m_auditLogger;\n\t\tPerformanceMonitor\t\t*m_perfMonitor;\n\t\tunsigned int\t\t\tm_debugState;\n\t\tNorthServiceProvider\t\t*m_provider;\n\t\tbool\t\t\t\tm_allowDebugger;\n};\n\n\n/**\n *\n * A data provider class to return data in the north service ping response\n */\nclass NorthServiceProvider : public JSONProvider {\n\tpublic:\n\t\tNorthServiceProvider(NorthService *north) : m_north(north) {};\n\t\tvirtual ~NorthServiceProvider() {};\n\t\tvoid \tasJSON(std::string &json) const\n\t\t\t{\n\t\t\t\tif (m_north)\n\t\t\t\t{\n\t\t\t\t\tjson = \"\\\"debug\\\" : \" + m_north->debugState();\n\t\t\t\t}\n\t\t\t};\n\tprivate:\n\t\tNorthService\t*m_north;\n};\n#endif\n"
  },
  {
    "path": "C/services/north/north.cpp",
    "content": "/*\n * Fledge north service.\n *\n * Copyright (c) 2020 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n\n#include <sys/timerfd.h>\n#include <time.h>\n#include <stdint.h>\n#include <stdlib.h>\n#include <signal.h>\n#include <execinfo.h>\n#include <dlfcn.h>    // for dladdr\n#include <cxxabi.h>   // for __cxa_demangle\n#include <unistd.h>\n#include <north_service.h>\n#include <north_api.h>\n#include <management_api.h>\n#include <storage_client.h>\n#include <service_record.h>\n#include <plugin_manager.h>\n#include <plugin_api.h>\n#include <plugin.h>\n#include <logger.h>\n#include <reading.h>\n#include <data_load.h>\n#include <data_sender.h>\n#include <iostream>\n#include <defaults.h>\n#include <filter_plugin.h>\n#include <config_handler.h>\n#include <syslog.h>\n#include <stdarg.h>\n#include <string_utils.h>\n#include <audit_logger.h>\n\n#define SERVICE_TYPE \"Northbound\"\n\nextern int makeDaemon(void);\nextern void handler(int sig);\n\nstatic const char *defaultServiceConfig = QUOTE({\n\t\"enable\": {\n\t\t\"description\": \"A switch that can be used to enable or disable execution of the sending process.\",\n\t\t\"type\": \"boolean\",\n\t\t\"default\": \"true\" ,\n\t\t\"readonly\": \"true\"\n\t\t},\n\t\"streamId\": {\n\t\t\"description\": \"Identifies the specific stream to handle and the related information, among them the ID of the last object streamed.\",\n\t\t\"type\": \"integer\",\n\t\t\"default\": \"0\",\n\t\t\"readonly\": \"true\"\n\t\t }\n\t\t});\n\nusing namespace std;\n\nstatic NorthService *service;\n\n/**\n * Callback function when a plugin wishes to perform a write operation\n *\n * @param name\tThe name of the value to write\n * @param value\tThe value to write\n * @param destination\tWhere to write the value\n */\nstatic bool controlWrite(char *name, char *value, ControlDestination destination, ...)\n{\n\tva_list ap;\n\tbool rval = false;\n\n\tswitch 
(destination)\n\t{\n\t\tcase DestinationAsset:\n\t\tcase DestinationService:\n\t\tcase DestinationScript:\n\t\t{\n\t\t\tva_start(ap, destination);\n\t\t\tchar *arg1 = va_arg(ap, char *);\n\t\t\tva_end(ap);\n\t\t\trval = service->write(name, value, destination, arg1);\n\t\t\tbreak;\n\t\t}\n\t\tcase DestinationBroadcast:\n\t\t\trval = service->write(name, value, destination);\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tLogger::getLogger()->error(\"Unknown control write destination %d\", destination);\n\t}\n\treturn rval;\n}\n\n/**\n * Callback function when a plugin wishes to perform a control operation\n *\n * @param operation\tThe name of the operation to perform\n * @param paramCount\tThe count of the number of parameters\n * @param names\t\tThe names of the parameters\n * @param parameters\tThe values of the parameters\n * @param destination\tThe destination for the operation\n */\nstatic int controlOperation(char *operation, int paramCount, char *names[], char *parameters[], ControlDestination destination, ...)\n{\n\tva_list ap;\n\tint\trval = -1;\n\n\tswitch (destination)\n\t{\n\t\tcase DestinationAsset:\n\t\tcase DestinationService:\n\t\t\tva_start(ap, destination);\n\t\t\trval = service->operation(operation, paramCount, names, parameters, destination, va_arg(ap, char *));\n\t\t\tva_end(ap);\n\t\t\tbreak;\n\t\tcase DestinationBroadcast:\n\t\t\trval = service->operation(operation, paramCount, names, parameters, destination);\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tLogger::getLogger()->error(\"Unknown control operation destination %d for operation %s\", destination, operation);\n\t}\n\treturn rval;\n}\n\n// Displays service information in JSON format\nstatic void printServiceInfoAsJSON()\n{\n\tstatic std::string serviceInfoJSON = R\"({\"name\":\"North Service\",\"description\":\"Service To Egress Data\",\"type\":\")\" + std::string(SERVICE_TYPE) + R\"(\",\"process\":\"north_C\",\"process_script\":\"[\\\"services/north_C\\\"]\",\"startup_priority\":200})\";\n\tstd::cout << 
serviceInfoJSON << std::endl;\n}\n\n/**\n * North service main entry point\n */\nint main(int argc, char *argv[])\n{\nunsigned short\tcorePort = 8082;\nstring\t\tcoreAddress = \"localhost\";\nbool\t\tdaemonMode = true;\nstring\t\tmyName = SERVICE_NAME;\nstring\t\tlogLevel = \"warning\";\nstring\t\ttoken = \"\";\nbool\t\tdryRun = false;\n\n\tsignal(SIGSEGV, handler);\n\tsignal(SIGILL, handler);\n\tsignal(SIGBUS, handler);\n\tsignal(SIGFPE, handler);\n\tsignal(SIGABRT, handler);\n\n\tfor (int i = 1; i < argc; i++)\n\t{\n\t\tif (!strcmp(argv[i], \"--info\"))\n\t\t{\n\t\t\tprintServiceInfoAsJSON();\n\t\t\treturn 0;\n\t\t}\n\t\tif (!strcmp(argv[i], \"-d\"))\n\t\t{\n\t\t\tdaemonMode = false;\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--port=\", 7))\n\t\t{\n\t\t\tcorePort = (unsigned short)strtol(&argv[i][7], NULL, 10);\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--name=\", 7))\n\t\t{\n\t\t\tmyName = &argv[i][7];\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--address=\", 10))\n\t\t{\n\t\t\tcoreAddress = &argv[i][10];\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--logLevel=\", 11))\n\t\t{\n\t\t\tlogLevel = &argv[i][11];\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--token=\", 8))\n\t\t{\n\t\t\ttoken = &argv[i][8];\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--dryrun\", 8))\n\t\t{\n\t\t\tdryRun = true;\n\t\t}\n\t}\n\n#ifdef PROFILING\n\tchar profilePath[200]{0};\n\tif (getenv(\"FLEDGE_DATA\")) \n\t{\n\t\tsnprintf(profilePath, sizeof(profilePath), \"%s/%s_Profile\", getenv(\"FLEDGE_DATA\"), myName.c_str());\n\t} else if (getenv(\"FLEDGE_ROOT\"))\n\t{\n\t\tsnprintf(profilePath, sizeof(profilePath), \"%s/data/%s_Profile\", getenv(\"FLEDGE_ROOT\"), myName.c_str());\n\t} else \n\t{\n\t\tsnprintf(profilePath, sizeof(profilePath), \"/usr/local/fledge/data/%s_Profile\", myName.c_str());\n\t}\n\tmkdir(profilePath, 0777);\n\tchdir(profilePath);\n#endif\n\n\tif (daemonMode && makeDaemon() == -1)\n\t{\n\t\t// Failed to run in daemon mode\n\t\tcout << \"Failed to run as daemon - proceeding in interactive 
mode.\" << endl;\n\t}\n\n\tservice = new NorthService(myName, token);\n\tif (dryRun)\n\t{\n\t\tservice->setDryRun();\n\t}\n\tLogger::getLogger()->setMinLevel(logLevel);\n\tservice->start(coreAddress, corePort);\n\n\tdelete service;\n\treturn 0;\n}\n\n/**\n * Detach the process from the terminal and run in the background.\n */\nint makeDaemon()\n{\npid_t pid;\n\n\t/* Make the child process inherit the log level */\n\tint logmask = setlogmask(0);\n\t/* create new process */\n\tif ((pid = fork()  ) == -1)\n\t{\n\t\treturn -1;  \n\t}\n\telse if (pid != 0)  \n\t{\n\t\texit (EXIT_SUCCESS);  \n\t}\n\tsetlogmask(logmask);\n\n\t// If we got here we are a child process\n\n\t// create new session and process group \n\tif (setsid() == -1)  \n\t{\n\t\treturn -1;  \n\t}\n\n\t// Close stdin, stdout and stderr\n\tclose(0);\n\tclose(1);\n\tclose(2);\n\t// redirect fd's 0,1,2 to /dev/null\n\topen(\"/dev/null\", O_RDWR);  \t// stdin\n\tif (dup(0) == -1) {}  \t\t// stdout\tGCC bug 66425 produces warning\n\tif (dup(0) == -1) {}  \t\t// stderr\tGCC bug 66425 produces warning\n \treturn 0;\n}\n\nvoid handler(int sig)\n{\nLogger\t*logger = Logger::getLogger();\nvoid\t*array[20];\nchar\tbuf[1024];\nint\tsize;\n\n\t// get void*'s for all entries on the stack\n\tsize = backtrace(array, 20);\n\n\t// print out all the frames to stderr\n\tlogger->fatal(\"Signal %d (%s) trapped:\\n\", sig, strsignal(sig));\n\tchar **messages = backtrace_symbols(array, size);\n\tfor (int i = 0; i < size; i++)\n\t{\n\t\tDl_info info;\n\t\tif (dladdr(array[i], &info) && info.dli_sname)\n\t\t{\n\t\t    char *demangled = NULL;\n\t\t    int status = -1;\n\t\t    if (info.dli_sname[0] == '_')\n\t\t        demangled = abi::__cxa_demangle(info.dli_sname, NULL, 0, &status);\n\t\t    snprintf(buf, sizeof(buf), \"%-3d %*p %s + %zd---------\",\n\t\t             i, int(2 + sizeof(void*) * 2), array[i],\n\t\t             status == 0 ? demangled :\n\t\t             info.dli_sname == 0 ? 
messages[i] : info.dli_sname,\n\t\t             (char *)array[i] - (char *)info.dli_saddr);\n\t\t    free(demangled);\n\t\t} \n\t\telse\n\t\t{\n\t\t    snprintf(buf, sizeof(buf), \"%-3d %*p %s---------\",\n\t\t             i, int(2 + sizeof(void*) * 2), array[i], messages[i]);\n\t\t}\n\t\tlogger->fatal(\"(%d) %s\", i, buf);\n\t}\n\tfree(messages);\n\texit(1);\n}\n\t\t\n\n/**\n * Constructor for the north service\n */\nNorthService::NorthService(const string& myName, const string& token) :\n\tm_dataLoad(NULL),\n\tm_dataSender(NULL),\n\tnorthPlugin(NULL),\n\tm_assetTracker(NULL),\n\tm_shutdown(false),\n\tm_storage(NULL),\n\tm_pluginData(NULL),\n\tm_restartPlugin(false),\n\tm_token(token),\n\tm_allowControl(true),\n\tm_dryRun(false),\n\tm_requestRestart(),\n\tm_auditLogger(NULL),\n\tm_perfMonitor(NULL),\n\tm_debugState(0),\n\tm_provider(NULL),\n\tm_allowDebugger(true)\n{\n\tm_name = myName;\n\tlogger = new Logger(myName);\n\tlogger->setMinLevel(\"warning\");\n}\n\n/**\n * Destructor for the north service\n */\nNorthService::~NorthService()\n{\n\tif (m_perfMonitor)\n\t\tdelete m_perfMonitor;\n\tif (northPlugin)\n\t\tdelete northPlugin;\n\tif (m_storage)\n\t\tdelete m_storage;\n\tif (m_dataLoad)\n\t\tdelete m_dataLoad;\n\tif (m_dataSender)\n\t\tdelete m_dataSender;\n\tif (m_pluginData)\n\t\tdelete m_pluginData;\n\tif (m_assetTracker)\n\t\tdelete m_assetTracker;\n\tif (m_auditLogger)\n\t\tdelete m_auditLogger;\n\tif (m_mgtClient)\n\t\tdelete m_mgtClient;\n\tif (m_provider)\n\t\tdelete m_provider;\n\tdelete logger;\n}\n\n/**\n * Start the north service\n */\nvoid NorthService::start(string& coreAddress, unsigned short corePort)\n{\n\tunsigned short managementPort = (unsigned short)0;\n\tManagementApi management(SERVICE_NAME, managementPort);\t// Start management API\n\tlogger->info(\"Starting north service...\");\n\tNorthServiceProvider *provider = new NorthServiceProvider(this);\n\tmanagement.registerProvider(provider);\n\tmanagement.registerService(this);\n\n\t// 
Listen for incoming management requests\n\tmanagement.start();\n\n\n\t// Create the north API\n\tNorthApi *api = new NorthApi(this);\n\tif (!api)\n\t{\n\t\tlogger->fatal(\"Unable to create API object\");\n\t\treturn;\n\t}\n\t// Allow time for the listeners to start before we register\n\tsleep(1);\n\tif (! m_shutdown)\n\t{\n\t\tunsigned short sport = api->getListenerPort();\n\t\t// Now register our service\n\t\t// TODO proper hostname lookup\n\t\tunsigned short managementListener = management.getListenerPort();\n\t\tServiceRecord record(m_name,\t\t// Service name\n\t\t\t\tSERVICE_TYPE,\t\t// Service type\n\t\t\t\t\"http\",\t\t\t// Protocol\n\t\t\t\t\"localhost\",\t\t// Listening address\n\t\t\t\tsport,\t\t\t// Service port\n\t\t\t\tmanagementListener,\t// Management port\n\t\t\t\tm_token);\t\t// Token\n\t\tm_mgtClient = new ManagementClient(coreAddress, corePort);\n\n\t\tm_auditLogger = new AuditLogger(m_mgtClient);\n\n\t\t// Create an empty North category if one doesn't exist\n\t\tDefaultConfigCategory northConfig(string(\"North\"), string(\"{}\"));\n\t\tnorthConfig.setDescription(\"North\");\n\t\tm_mgtClient->addCategory(northConfig, true);\n\n\t\t// Fetch Configuration\n\t\tm_config = m_mgtClient->getCategory(m_name);\n\t\tif (!loadPlugin())\n\t\t{\n\t\t\tlogger->fatal(\"Failed to load north plugin, exiting...\");\n\t\t\tmanagement.stop();\n\t\t\treturn;\n\t\t}\n\t\tif (!m_dryRun)\n\t\t{\n\t\t\tif (!m_mgtClient->registerService(record))\n\t\t\t{\n\t\t\t\tlogger->error(\"Failed to register service %s\", m_name.c_str());\n\t\t\t\tmanagement.stop();\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tConfigCategory features = m_mgtClient->getCategory(\"FEATURES\");\n\t\t\tupdateFeatures(features);\n\n\t\t\tConfigHandler *configHandler = ConfigHandler::getInstance(m_mgtClient);\n\t\t\tconfigHandler->registerCategory(this, m_name);\n\t\t\tconfigHandler->registerCategory(this, m_name+\"Advanced\");\n\t\t\tconfigHandler->registerCategory(this, \"FEATURES\");\n\t\t}\n\n\t\t// Get a 
handle on the storage layer\n\t\tServiceRecord storageRecord(\"Fledge Storage\");\n\t\tif (!m_mgtClient->getService(storageRecord))\n\t\t{\n\t\t\tlogger->fatal(\"Unable to find storage service\");\n\t\t\tif (!m_dryRun)\n\t\t\t{\n\n\t\t\t\tif (m_requestRestart)\n\t\t\t\t\tm_mgtClient->restartService();\n\t\t\t\telse\n\t\t\t\t\tm_mgtClient->unregisterService();\n\t\t\t}\n\t\t\treturn;\n\t\t}\n\t\tlogger->info(\"Connect to storage on %s:%d\",\n\t\t\t\tstorageRecord.getAddress().c_str(),\n\t\t\t\tstorageRecord.getPort());\n\n\t\t\n\t\tm_storage = new StorageClient(storageRecord.getAddress(),\n\t\t\t\t\t\tstorageRecord.getPort());\n\n\t\tm_storage->registerManagement(m_mgtClient);\n\n\t\t// Setup the performance monitor\n\t\tm_perfMonitor = new PerformanceMonitor(m_name, m_storage);\n\n\t\tif (m_configAdvanced.itemExists(\"perfmon\"))\n\t\t{\n\t\t\tstring perf = m_configAdvanced.getValue(\"perfmon\");\n\t\t\tif (perf.compare(\"true\") == 0)\n\t\t\t\tm_perfMonitor->setCollecting(true);\n\t\t\telse\n\t\t\t\tm_perfMonitor->setCollecting(false);\n\t\t}\n\n\t\tlogger->debug(\"Initialise the asset tracker\");\n\t\tm_assetTracker = new AssetTracker(m_mgtClient, m_name);\n\t\tAssetTracker::getAssetTracker()->populateAssetTrackingCache(m_name, \"Egress\");\n\n\t\t// If the plugin supports control register the callback functions\n\t\tif (northPlugin->hasControl())\n\t\t{\n\t\t\tnorthPlugin->pluginRegister(controlWrite, controlOperation);\n\t\t}\n\n\t\t// Deal with persisted data and start the plugin\n\t\tif (!m_dryRun)\n\t\t{\n\t\t\tif (northPlugin->persistData())\n\t\t\t{\n\t\t\t\tlogger->debug(\"Plugin %s requires persisted data\", m_pluginName.c_str());\n\t\t\t\tm_pluginData = new PluginData(m_storage);\n\t\t\t\tstring key = m_name + m_pluginName;\n\t\t\t\tstring storedData = m_pluginData->loadStoredData(key);\n\t\t\t\tlogger->debug(\"Starting plugin with storedData: %s\", 
storedData.c_str());\n\t\t\t\tnorthPlugin->startData(storedData);\n\t\t\t\t\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tlogger->debug(\"Start %s plugin\", m_pluginName.c_str());\n\t\t\t\tnorthPlugin->start();\n\t\t\t}\n\t\t}\n\n\t\t// Create default security category\n\t\tthis->createSecurityCategories(m_mgtClient, m_dryRun);\n\n\t\t// Setup the data loading\n\t\tlong streamId = 0;\n\t\tif (m_config.itemExists(\"streamId\"))\n\t\t{\n\t\t\tstreamId = strtol(m_config.getValue(\"streamId\").c_str(), NULL, 10);\n\t\t}\n\t\tlogger->debug(\"Create threads for stream %ld\", streamId);\n\t\tm_dataLoad = new DataLoad(m_name, streamId, m_storage);\n\t\tm_dataLoad->setPerfMonitor(m_perfMonitor);\n\t\tif (m_config.itemExists(\"source\"))\n\t\t{\n\t\t\tm_dataLoad->setDataSource(m_config.getValue(\"source\"));\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"blockSize\"))\n\t\t{\n\t\t\tunsigned long newBlock = strtoul(\n\t\t\t\t\t\tm_configAdvanced.getValue(\"blockSize\").c_str(),\n\t\t\t\t\t\tNULL,\n\t\t\t\t\t\t10);\n\t\t\tif (newBlock > 0)\n\t\t\t{\n\t\t\t\tm_dataLoad->setBlockSize(newBlock);\n\t\t\t}\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"streamUpdate\"))\n\t\t{\n\t\t\tunsigned long newStreamUpdate = strtoul(\n\t\t\t\t\t\tm_configAdvanced.getValue(\"streamUpdate\").c_str(),\n\t\t\t\t\t\tNULL,\n\t\t\t\t\t\t10);\n\t\t\tif (newStreamUpdate > 0)\n\t\t\t{\n\t\t\t\tm_dataLoad->setStreamUpdate(newStreamUpdate);\n\t\t\t}\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"prefetchLimit\"))\n\t\t{\n\t\t\tunsigned long limit = strtoul(\n\t\t\t\t\t\tm_configAdvanced.getValue(\"prefetchLimit\").c_str(),\n\t\t\t\t\t\tNULL,\n\t\t\t\t\t\t10);\n\t\t\tif (limit > 0)\n\t\t\t{\n\t\t\t\tm_dataLoad->setPrefetchLimit(limit);\n\t\t\t}\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"assetTrackerInterval\"))\n\t\t{\n\t\t\tunsigned long interval  = strtoul(\n\t\t\t\t\t\tm_configAdvanced.getValue(\"assetTrackerInterval\").c_str(),\n\t\t\t\t\t\tNULL,\n\t\t\t\t\t\t10);\n\t\t\tif 
(m_assetTracker)\n\t\t\t\tm_assetTracker->tune(interval);\n\t\t}\n\t\tm_dataSender = new DataSender(northPlugin, m_dataLoad, this);\n\t\tm_dataSender->setPerfMonitor(m_perfMonitor);\n\n\t\tif (!m_dryRun)\n\t\t{\n\t\t\tlogger->debug(\"North service is running\");\n\n\t\t\t\n\t\t\t// wait for shutdown\n\t\t\tunique_lock<mutex> lck(m_mutex);\n\t\t\twhile (!m_shutdown)\n\t\t\t{\n\t\t\t\tm_cv.wait(lck);\n\t\t\t\tlogger->debug(\"North main thread woken up, shutdown %s\", m_shutdown ? \"true\" : \"false\");\n\t\t\t\tif (m_shutdown == false && m_restartPlugin)\n\t\t\t\t{\n\t\t\t\t\trestartPlugin();\n\t\t\t\t}\n\t\t\t}\n\t\t\tlogger->debug(\"North service is shutting down\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tlogger->info(\"Dryrun of service, shutting down\");\n\t\t}\n\n\t\tm_dataLoad->shutdown();\t\t// Forces the data load to return from any blocking fetch call\n\t\tdelete m_dataSender;\n\t\tm_dataSender = NULL;\n\t\tlogger->debug(\"North service data sender has shut down\");\n\t\tdelete m_dataLoad;\n\t\tm_dataLoad = NULL;\n\t\tlogger->debug(\"North service shutting down plugin\");\n\n\n\t\t// Shutdown the north plugin\n\t\tif (northPlugin && !m_dryRun)\n\t\t{\n\t\t\tif (m_pluginData)\n\t\t\t{\n\t\t\t\tlogger->debug(\"North service persist plugin data\");\n\t\t\t\tstring saveData = northPlugin->shutdownSaveData();\n\t\t\t\tstring key = m_name + m_pluginName;\n\t\t\t\tlogger->debug(\"Persist plugin data, key: '%s' data: '%s' service name: '%s'\", key.c_str(), saveData.c_str(), m_name.c_str());\n\t\t\t\tif (!m_pluginData->persistPluginData(key, saveData, m_name))\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->error(\"Plugin %s has failed to save data [%s] for key %s\",\n\t\t\t\t\t\tm_pluginName.c_str(), saveData.c_str(), key.c_str());\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tnorthPlugin->shutdown();\n\t\t\t}\n\t\t}\n\t\t\n\t\tif (!m_dryRun)\n\t\t{\n\t\t\tif (m_requestRestart)\n\t\t\t{\n\t\t\t\t// Request core to restart this 
service\n\t\t\t\tm_mgtClient->restartService();\n\t\t\t} \n\t\t\telse\n\t\t\t{\n\t\t\t\t// Clean shutdown, unregister the service\n\t\t\t\tlogger->info(\"Unregistering service\");\n\t\t\t\tm_mgtClient->unregisterService();\n\t\t\t}\n\t\t}\n\t}\n\tmanagement.stop();\n\tlogger->info(\"North service %s shutdown completed\", m_dryRun ? \"dry run execution \" : \"\");\n}\n\n/**\n * Stop the north service\n */\nvoid NorthService::stop()\n{\n\tlogger->info(\"Stopping north service...\\n\");\n}\n\n/**\n * Creates config categories and sub categories recursively, along with their parent-child relations\n */\nvoid NorthService::createConfigCategories(DefaultConfigCategory configCategory, std::string parent_name, std::string current_name)\n{\n\n\t// Deal with registering and fetching the configuration\n\tDefaultConfigCategory defConfig(configCategory);\n\n\tDefaultConfigCategory defConfigCategoryOnly(defConfig);\n\tdefConfigCategoryOnly.keepItemsType(ConfigCategory::ItemType::CategoryType);\n\tdefConfig.removeItemsType(ConfigCategory::ItemType::CategoryType);\n\n\tDefaultConfigCategory serviceConfig(current_name,\n                                               defaultServiceConfig);\n\tdefConfig += serviceConfig;\n\n\tdefConfig.setDescription(current_name);\t// TODO We do not have access to the description\n\t// Create/Update category name (we pass keep_original_items=true)\n\tm_mgtClient->addCategory(defConfig, true);\n\n\t// Add this service under 'North' parent category\n\tvector<string> children;\n\tchildren.push_back(current_name);\n\tm_mgtClient->addChildCategories(parent_name, children);\n\n\t// Adds sub categories to the configuration\n\tbool extracted = true;\n\tConfigCategory subCategory;\n\twhile (extracted) {\n\n\t\textracted = subCategory.extractSubcategory(defConfigCategoryOnly);\n\n\t\tif (extracted) {\n\t\t\tDefaultConfigCategory defSubCategory(subCategory);\n\n\t\t\tcreateConfigCategories(defSubCategory, current_name, 
subCategory.getName());\n\n\t\t\t// Cleans the category\n\t\t\tsubCategory.removeItems();\n\t\t\tsubCategory = ConfigCategory() ;\n\t\t}\n\t}\n\n}\n\n/**\n * Load the configured north plugin\n */\nbool NorthService::loadPlugin()\n{\n\ttry {\n\t\tPluginManager *manager = PluginManager::getInstance();\n\n\t\tif (! m_config.itemExists(\"plugin\"))\n\t\t{\n\t\t\tlogger->error(\"Unable to fetch plugin name from configuration.\\n\");\n\t\t\treturn false;\n\t\t}\n\t\tm_pluginName = m_config.getValue(\"plugin\");\n\t\tlogger->info(\"Loading north plugin %s.\", m_pluginName.c_str());\n\t\tPLUGIN_HANDLE handle;\n\t\tif ((handle = manager->loadPlugin(m_pluginName, PLUGIN_TYPE_NORTH)) != NULL)\n\t\t{\n\t\t\t// Adds categories and sub categories to the configuration\n\t\t\tDefaultConfigCategory defConfig(m_name, manager->getInfo(handle)->config);\n\t\t\tcreateConfigCategories(defConfig, string(\"North\"), m_name);\n\n\t\t\t// Must now reload the configuration to obtain any items added from\n\t\t\t// the plugin\n\t\t\t// Removes all the m_items already present in the category\n\t\t\tm_config.removeItems();\n\t\t\tm_config = m_mgtClient->getCategory(m_name);\n\n\t\t\ttry {\n\t\t\t\tnorthPlugin = new NorthPlugin(handle, m_config);\n\t\t\t} catch (...) 
{\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\t// Deal with registering and fetching the advanced configuration\n\t\t\tstring advancedCatName = m_name+string(\"Advanced\");\n\t\t\tDefaultConfigCategory defConfigAdvanced(advancedCatName, string(\"{}\"));\n\t\t\taddConfigDefaults(defConfigAdvanced);\n\t\t\tdefConfigAdvanced.setDescription(m_name+string(\" advanced config params\"));\n\n\t\t\t// Create/Update category name (we pass keep_original_items=true)\n\t\t\tm_mgtClient->addCategory(defConfigAdvanced, true);\n\n\t\t\t// Add this service under 'm_name' parent category\n\t\t\tvector<string> children1;\n\t\t\tchildren1.push_back(advancedCatName);\n\t\t\tm_mgtClient->addChildCategories(m_name, children1);\n\n\t\t\t// Must now reload the merged configuration\n\t\t\tm_configAdvanced = m_mgtClient->getCategory(advancedCatName);\n\t\t\tif (m_configAdvanced.itemExists(\"logLevel\"))\n\t\t\t{\n\t\t\t\tstring prevLogLevel = logger->getMinLevel();\n\t\t\t\tlogger->setMinLevel(m_configAdvanced.getValue(\"logLevel\"));\n\n\t\t\t\tPluginManager *manager = PluginManager::getInstance();\n\t\t\t\tPLUGIN_TYPE type = manager->getPluginImplType(northPlugin->getHandle());\n\t\t\t\tlogger->debug(\"%s:%d: North plugin type = %s\", __FUNCTION__, __LINE__, (type==PYTHON_PLUGIN)?\"PYTHON_PLUGIN\":\"BINARY_PLUGIN\");\n\n\t\t\t\tif (m_dataLoad)\n\t\t\t\t{\n\t\t\t\t\tlogger->debug(\"%s:%d: calling m_dataLoad->configChange() for updating loglevel\", __FUNCTION__, __LINE__);\n\t\t\t\t\tm_dataLoad->configChange(\"north filters\", \"logLevel\");\n\t\t\t\t}\n\t\t\t\t\n\t\t\t\tif (type == PYTHON_PLUGIN)\n\t\t\t\t{\n\t\t\t\t\t// propagate loglevel changes to python filters/plugins, if present\n\t\t\t\t\tlogger->debug(\"prevLogLevel=%s, m_configAdvanced.getValue(\\\"logLevel\\\")=%s\", prevLogLevel.c_str(), m_configAdvanced.getValue(\"logLevel\").c_str());\n\t\t\t\t\tif (prevLogLevel.compare(m_configAdvanced.getValue(\"logLevel\")) != 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tlogger->debug(\"calling 
northPlugin->reconfigure() for updating loglevel\");\n\t\t\t\t\t\tnorthPlugin->reconfigure(\"logLevel\");\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (m_configAdvanced.itemExists(\"control\"))\n\t\t\t{\n\t\t\t\tstring c = m_configAdvanced.getValue(\"control\");\n\t\t\t\tif (c.compare(\"true\") == 0)\n\t\t\t\t{\n\t\t\t\t\tm_allowControl = true;\n\t\t\t\t\tlogger->warn(\"Control operations have been enabled\");\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tm_allowControl = false;\n\t\t\t\t\tlogger->warn(\"Control operations have been disabled\");\n\t\t\t\t}\n\t\t\t}\n\n\n\t\t\treturn true;\n\t\t}\n\t} catch (exception &e) {\n\t\tlogger->fatal(\"Failed to load north plugin: %s\\n\", e.what());\n\t}\n\treturn false;\n}\n\n/**\n * Shutdown request\n */\nvoid NorthService::shutdown()\n{\n\t/* Stop receiving new requests and allow existing\n\t * requests to drain.\n\t */\n\tm_shutdown = true;\n\tlogger->info(\"North service shutdown in progress.\");\n\n\t// Signal main thread to shutdown\n\tm_cv.notify_all();\n}\n\n/**\n * Restart request\n */\nvoid NorthService::restart()\n{\n\tlogger->info(\"North service restart in progress.\");\n\n\t// Set restart action\n\tm_requestRestart = true;\n\n\t// Set shutdown action\n\tm_shutdown = true;\n\n\t// Signal main thread to shutdown\n\tm_cv.notify_all();\n}\n\n/**\n * Configuration change notification\n */\nvoid NorthService::configChange(const string& categoryName, const string& category)\n{\n\tlogger->info(\"Configuration change in category %s: %s\", categoryName.c_str(),\n\t\t\tcategory.c_str());\n\tif (categoryName.compare(m_name) == 0)\n\t{\n\n\t\tm_config = ConfigCategory(m_name, category);\n\n\t\tm_restartPlugin = true;\n\t\tm_cv.notify_all();\n\n\t\tif (m_dataLoad)\n\t\t{\n\t\t\tm_dataLoad->configChange(categoryName, category);\n\t\t}\n\t\tif (m_dataSender)\n\t\t{\n\t\t\tm_dataSender->configChange();\n\t\t}\n\t}\n\tif (categoryName.compare(m_name+\"Advanced\") == 0)\n\t{\n\t\tm_configAdvanced = 
ConfigCategory(m_name+\"Advanced\", category);\n\t\tif (m_configAdvanced.itemExists(\"logLevel\"))\n\t\t{\n\t\t\tstring prevLogLevel = logger->getMinLevel();\n\t\t\tlogger->setMinLevel(m_configAdvanced.getValue(\"logLevel\"));\n\n\t\t\tPluginManager *manager = PluginManager::getInstance();\n\t\t\tPLUGIN_TYPE type = manager->getPluginImplType(northPlugin->getHandle());\n\t\t\tlogger->debug(\"%s:%d: North plugin type = %s\", __FUNCTION__, __LINE__, (type==PYTHON_PLUGIN)?\"PYTHON_PLUGIN\":\"BINARY_PLUGIN\");\n\t\t\t\n\t\t\tif (m_dataLoad)\n\t\t\t{\n\t\t\t\tlogger->debug(\"%s:%d: calling m_dataLoad->configChange() for updating loglevel\", __FUNCTION__, __LINE__);\n\t\t\t\tm_dataLoad->configChange(\"north filters\", \"logLevel\");\n\t\t\t}\n\t\t\t\n\t\t\tif (type == PYTHON_PLUGIN)\n\t\t\t{\n\t\t\t\t// propagate loglevel changes to python filters/plugins, if present\n\t\t\t\tlogger->debug(\"prevLogLevel=%s, m_configAdvanced.getValue(\\\"logLevel\\\")=%s\", prevLogLevel.c_str(), m_configAdvanced.getValue(\"logLevel\").c_str());\n\t\t\t\tif (prevLogLevel.compare(m_configAdvanced.getValue(\"logLevel\")) != 0)\n\t\t\t\t{\n\t\t\t\t\tlogger->debug(\"%s:%d: calling northPlugin->reconfigure() for updating loglevel\", __FUNCTION__, __LINE__);\n\t\t\t\t\tnorthPlugin->reconfigure(\"logLevel\");\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"control\"))\n\t\t{\n\t\t\tstring c = m_configAdvanced.getValue(\"control\");\n\t\t\tif (c.compare(\"true\") == 0)\n\t\t\t{\n\t\t\t\tm_allowControl = true;\n\t\t\t\tlogger->warn(\"Control operations have been enabled\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_allowControl = false;\n\t\t\t\tlogger->warn(\"Control operations have been disabled\");\n\t\t\t}\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"blockSize\"))\n\t\t{\n\t\t\tunsigned long newBlock = strtoul(\n\t\t\t\t\tm_configAdvanced.getValue(\"blockSize\").c_str(),\n\t\t\t\t\tNULL,\n\t\t\t\t\t10);\n\t\t\tif (newBlock > 
0)\n\t\t\t{\n\t\t\t\tm_dataLoad->setBlockSize(newBlock);\n\t\t\t}\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"streamUpdate\"))\n\t\t{\n\t\t\tunsigned long newStreamUpdate = strtoul(\n\t\t\t\t\t\tm_configAdvanced.getValue(\"streamUpdate\").c_str(),\n\t\t\t\t\t\tNULL,\n\t\t\t\t\t\t10);\n\t\t\tif (newStreamUpdate > 0)\n\t\t\t{\n\t\t\t\tm_dataLoad->setStreamUpdate(newStreamUpdate);\n\t\t\t}\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"assetTrackerInterval\"))\n\t\t{\n\t\t\tunsigned long interval  = strtoul(\n\t\t\t\t\t\tm_configAdvanced.getValue(\"assetTrackerInterval\").c_str(),\n\t\t\t\t\t\tNULL,\n\t\t\t\t\t\t10);\n\t\t\tif (m_assetTracker)\n\t\t\t\tm_assetTracker->tune(interval);\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"perfmon\"))\n\t\t{\n\t\t\tstring perf = m_configAdvanced.getValue(\"perfmon\");\n\t\t\tif (perf.compare(\"true\") == 0)\n\t\t\t\tm_perfMonitor->setCollecting(true);\n\t\t\telse\n\t\t\t\tm_perfMonitor->setCollecting(false);\n\t\t}\n\t}\n\n\t// Update the  Security category\n\tif (categoryName.compare(m_name+\"Security\") == 0)\n\t{\n\t\tthis->updateSecurityCategory(category);\n\t}\n\tif (categoryName.compare(\"FEATURES\") == 0)\n\t{\n\t\tConfigCategory conf(\"FEATURES\", category);\n\t\tthis->updateFeatures(conf);\n\t}\n}\n\n/**\n * Restart the plugin with an updated configuration.\n * We need to do this as north plugins do not have a reconfigure method\n *\n * We need to make sure we are not sending data and the send data thread does not startup\n * whilst we are doing the restart.\n *\n * We also need to make sure the send data thread gets the new plugin.\n */\nvoid NorthService::restartPlugin()\n{\n\tm_restartPlugin = false;\n\n\t// Stop the send data thread\n\tm_dataSender->pause();\n\n\tif (m_pluginData)\n\t{\n\t\tstring saveData = northPlugin->shutdownSaveData();\n\t\tstring key = m_name + m_pluginName;\n\t\tlogger->debug(\"Persist plugin data, key: '%s' data: '%s' service name: '%s'\", key.c_str(), saveData.c_str(), 
m_name.c_str());\n\t\tif (!m_pluginData->persistPluginData(key, saveData, m_name))\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Plugin %s has failed to save data [%s] for key %s\",\n\t\t\t\tm_pluginName.c_str(), saveData.c_str(), key.c_str());\n\t\t}\n\t}\n\telse\n\t{\n\t\tnorthPlugin->shutdown();\n\t}\n\n\tdelete northPlugin;\n\tnorthPlugin = NULL;\n\tif (!loadPlugin())\n\t{\n\t\t// Guard against dereferencing a NULL northPlugin below\n\t\tlogger->fatal(\"Failed to reload the north plugin during restart\");\n\t\treturn;\n\t}\n\t// Deal with persisted data and start the plugin\n\tif (northPlugin->persistData())\n\t{\n\t\tlogger->debug(\"Plugin %s requires persisted data\", m_pluginName.c_str());\n\t\tm_pluginData = new PluginData(m_storage);\n\t\tstring key = m_name + m_pluginName;\n\t\tstring storedData = m_pluginData->loadStoredData(key);\n\t\tlogger->debug(\"Starting plugin with storedData: %s\", storedData.c_str());\n\t\tnorthPlugin->startData(storedData);\n\t}\n\telse\n\t{\n\t\tlogger->debug(\"Start %s plugin\", m_pluginName.c_str());\n\t\tnorthPlugin->start();\n\t}\n\tm_dataSender->updatePlugin(northPlugin);\n\tm_dataSender->release();\n\n\t// If the plugin supports control register the callback functions\n\tif (northPlugin->hasControl() && m_allowControl)\n\t{\n\t\tnorthPlugin->pluginRegister(controlWrite, controlOperation);\n\t}\n}\n\n/**\n * Add the generic north service configuration options to the advanced\n * category\n *\n * @param defaultConfig\tThe default configuration from the plugin\n */\nvoid NorthService::addConfigDefaults(DefaultConfigCategory& defaultConfig)\n{\n\tfor (int i = 0; defaults[i].name; i++)\n\t{\n\t\tdefaultConfig.addItem(defaults[i].name, defaults[i].description,\n\t\t\tdefaults[i].type, defaults[i].value, defaults[i].value);\n\t\tdefaultConfig.setItemDisplayName(defaults[i].name, defaults[i].displayName);\n\t}\n\tif (northPlugin->hasControl())\n\t{\n\t\tdefaultConfig.addItem(\"control\", \"Allow write and control operations from the upstream system\",\n\t\t\t\"boolean\", \"true\", \"true\");\n\t\tdefaultConfig.setItemDisplayName(\"control\", \"Allow Control\");\n\t}\n\n\t/* Add the 
set of logging levels to the service */\n\tvector<string>\tlogLevels = { \"error\", \"warning\", \"info\", \"debug\" };\n\tdefaultConfig.addItem(\"logLevel\", \"Minimum logging level reported\",\n\t\t\t\"warning\", \"warning\", logLevels);\n\tdefaultConfig.setItemDisplayName(\"logLevel\", \"Minimum Log Level\");\n\n\t// Add blockSize configuration item\n\tdefaultConfig.addItem(\"blockSize\",\n\t\t\"The size of a block of data to send in each transmission.\",\n\t\t\"integer\",\n\t\tstd::to_string(DEFAULT_BLOCK_SIZE),\n\t\tstd::to_string(DEFAULT_BLOCK_SIZE));\n\tdefaultConfig.setItemDisplayName(\"blockSize\", \"Data block size\");\n\t// Add streams update configuration item\n\tdefaultConfig.addItem(\"streamUpdate\",\n\t\t\"Set the number of blocks to be sent before updating the stream location in the storage layer.\",\n\t\t\"integer\",\n\t\tstd::to_string(1),\n\t\tstd::to_string(1));\n\tdefaultConfig.setItemDisplayName(\"streamUpdate\", \"Stream update frequency\");\n\tdefaultConfig.setItemAttribute(\"streamUpdate\", ConfigCategory::MINIMUM_ATTR, \"1\");\n\t// Add prefetch limit item\n\tdefaultConfig.addItem(\"prefetchLimit\",\n\t\t\"The maximum number of blocks to be prefetched and queued ready for transmission.\",\n\t\t\"integer\",\n\t\tstd::to_string(2),\n\t\tstd::to_string(2));\n\tdefaultConfig.setItemDisplayName(\"prefetchLimit\", \"Data block prefetch\");\n\tdefaultConfig.setItemAttribute(\"prefetchLimit\", ConfigCategory::MINIMUM_ATTR, \"2\");\n\tdefaultConfig.setItemAttribute(\"prefetchLimit\", ConfigCategory::MAXIMUM_ATTR, \"10\");\n\tdefaultConfig.addItem(\"assetTrackerInterval\",\n\t\t\t\"Number of milliseconds between updates of the asset tracker information\",\n\t\t\t\"integer\", std::to_string(MIN_ASSET_TRACKER_UPDATE),\n\t\t\tstd::to_string(MIN_ASSET_TRACKER_UPDATE));\n\tdefaultConfig.setItemDisplayName(\"assetTrackerInterval\",\n\t\t\t\"Asset Tracker Update\");\n\tdefaultConfig.addItem(\"perfmon\", \"Track and store performance 
counters\",\n\t\t\t\"boolean\", \"false\", \"false\");\n\tdefaultConfig.setItemDisplayName(\"perfmon\", \"Performance Counters\");\n}\n\n/**\n * Control write operation\n *\n * @param name\t\tName of the variable to write\n * @param value\t\tValue to write to the variable\n * @param destination\tWhere to write the value\n * @return true if write was succesfully sent to dispatcher, else false\n */\nbool NorthService::write(const string& name, const string& value, const ControlDestination destination)\n{\n\tLogger::getLogger()->info(\"Control write %s with %s\", name.c_str(), value.c_str());\n\tif (destination != DestinationBroadcast)\n\t{\n\t\tLogger::getLogger()->error(\"Write destination requires an argument that is not given\");\n\t\treturn -1;\n\t}\n\t// Build payload for dispatcher service\n\tstring payload = \"{ \\\"destination\\\" : \\\"broadcast\\\",\";\n\tpayload += controlSource();\n\tpayload += \", \\\"write\\\" : { \\\"\";\n\tpayload += name;\n\tpayload += \"\\\" : \\\"\";\n\tstring escaped = value;\n\tStringEscapeQuotes(escaped);\n\tpayload += escaped;\n\tpayload += \"\\\" } }\";\n\treturn sendToDispatcher(\"/dispatch/write\", payload);\n}\n\n/**\n * Control write operation\n *\n * @param name\t\tName of the variable to write\n * @param value\t\tValue to write to the variable\n * @param destination\tWhere to write the value\n * @param arg\t\tArgument used to determine destination\n * @return true if write was succesfully sent to dispatcher, else false\n */\nbool NorthService::write(const string& name, const string& value, const ControlDestination destination, const string& arg)\n{\n\tLogger::getLogger()->info(\"Control write %s with %s\", name.c_str(), value.c_str());\n\n\t// Build payload for dispatcher service\n\tstring payload = \"{ \\\"destination\\\" : \\\"\";\n\tswitch (destination)\n\t{\n\t\tcase DestinationService:\n\t\t\tpayload += \"service\\\", \\\"name\\\" : \\\"\";\n\t\t\tpayload += arg;\n\t\t\tpayload += \"\\\"\";\n\t\t\tbreak;\n\t\tcase 
DestinationAsset:\n\t\t\tpayload += \"asset\\\", \\\"name\\\" : \\\"\";\n\t\t\tpayload += arg;\n\t\t\tpayload += \"\\\"\";\n\t\t\tbreak;\n\t\tcase DestinationScript:\n\t\t\tpayload += \"script\\\", \\\"name\\\" : \\\"\";\n\t\t\tpayload += arg;\n\t\t\tpayload += \"\\\"\";\n\t\t\tbreak;\n\t\tcase DestinationBroadcast:\n\t\t\tpayload += \"broadcast\\\"\";\n\t\t\tbreak;\n\t}\n\tpayload += \", \";\n\tpayload += controlSource();\n\tpayload += \", \\\"write\\\" : { \\\"\";\n\tpayload += name;\n\tpayload += \"\\\" : \\\"\";\n\tstring escaped = value;\n\tStringEscapeQuotes(escaped);\n\tpayload += escaped;\n\tpayload += \"\\\" } }\";\n\treturn sendToDispatcher(\"/dispatch/write\", payload);\n}\n\n/**\n * Control operation\n *\n * @param name\t\tName of the operation to perform\n * @param paramCount\tThe number of parameters\n * @param names\t\tThe names of the parameters\n * @param parameters\tThe parameters to the operation\n * @param destination\tWhere to write the value\n * @return -1 in case of error on operation destination, 1 if operation was successfully sent to dispatcher, else 0\n */\nint  NorthService::operation(const string& name, int paramCount, char *names[], char *parameters[], const ControlDestination destination)\n{\n\tLogger::getLogger()->info(\"Control operation %s with %d parameters\", name.c_str(),\n\t\t\tparamCount);\n\tfor (int i = 0; i < paramCount; i++)\n\t\tLogger::getLogger()->info(\"Parameter %d: %s\", i, parameters[i]);\n\tif (destination != DestinationBroadcast)\n\t{\n\t\tLogger::getLogger()->error(\"Operation destination requires an argument that is not given\");\n\t\treturn -1;\n\t}\n\t// Build payload for dispatcher service\n\tstring payload = \"{ \\\"destination\\\" : \\\"broadcast\\\",\";\n\tpayload += controlSource();\n\tpayload += \", \\\"operation\\\" : { \\\"\";\n\tpayload += name;\n\tpayload += \"\\\" : { \";\n\tfor (int i = 0; i < paramCount; i++)\n\t{\n\t\tpayload += \"\\\"\";\n\t\tpayload += names[i];\n\t\tpayload += \"\\\": \\\"\";\n\t\tstring escaped = 
parameters[i];\n\t\tStringEscapeQuotes(escaped);\n\t\tpayload += escaped;\n\t\tpayload += \"\\\"\";\n\t\tif (i < paramCount -1)\n\t\t\tpayload += \",\";\n\t}\n\tpayload += \" } } }\";\n\treturn static_cast<int>(sendToDispatcher(\"/dispatch/operation\", payload));\n}\n\n/**\n * Control operation\n *\n * @param name\t\tName of the operation to perform\n * @param paramCount\tThe number of parameters\n * @param names\t\tThe names of the parameters\n * @param parameters\tThe parameters to the operation\n * @param destination\tWhere to write the value\n * @param arg\t\tArgument used to determine destination\n * @return 1 if operation was successfully sent to dispatcher, else 0\n */\nint NorthService::operation(const string& name, int paramCount, char *names[], char *parameters[], const ControlDestination destination, const string& arg)\n{\n\tLogger::getLogger()->info(\"Control operation %s with %d parameters\", name.c_str(),\n\t\t\tparamCount);\n\tfor (int i = 0; i < paramCount; i++)\n\t\tLogger::getLogger()->info(\"Parameter %d: %s\", i, parameters[i]);\n\t// Build payload for dispatcher service\n\tstring payload = \"{ \\\"destination\\\" : \\\"\";\n\tswitch (destination)\n\t{\n\t\tcase DestinationService:\n\t\t\tpayload += \"service\\\", \\\"name\\\" : \\\"\";\n\t\t\tpayload += arg;\n\t\t\tpayload += \"\\\"\";\n\t\t\tbreak;\n\t\tcase DestinationAsset:\n\t\t\tpayload += \"asset\\\", \\\"name\\\" : \\\"\";\n\t\t\tpayload += arg;\n\t\t\tpayload += \"\\\"\";\n\t\t\tbreak;\n\t\tcase DestinationScript:\n\t\t\tpayload += \"script\\\", \\\"name\\\" : \\\"\";\n\t\t\tpayload += arg;\n\t\t\tpayload += \"\\\"\";\n\t\t\tbreak;\n\t\tcase DestinationBroadcast:\n\t\t\tpayload += \"broadcast\\\"\";\n\t\t\tbreak;\n\t}\n\tpayload += \", \";\n\tpayload += controlSource();\n\tpayload += \", \\\"operation\\\" : { \\\"\";\n\tpayload += name;\n\tpayload += \"\\\" : { \";\n\tfor (int i = 0; i < paramCount; i++)\n\t{\n\t\tpayload += \"\\\"\";\n\t\tpayload += names[i];\n\t\tpayload += \"\\\": \\\"\";\n\t\tstring escaped = 
parameters[i];\n\t\tStringEscapeQuotes(escaped);\n\t\tpayload += escaped;\n\t\tpayload += \"\\\"\";\n\t\tif (i < paramCount -1)\n\t\t\tpayload += \",\";\n\t}\n\tpayload += \"} } }\";\n\treturn static_cast<int>(sendToDispatcher(\"/dispatch/operation\", payload));\n}\n\n/**\n * Send to a south service directly. This is temporary until we have the \n * service dispatcher in place.\n */\nbool NorthService::sendToService(const string& southService, const string& name, const string& value)\n{\n\tstd::string payload = \"{ \\\"values\\\" : { \\\"\";\n\tpayload += name;\n\tpayload += \"\\\": \\\"\";\n\tpayload += value;\n\tpayload += \"\\\"} }\";\n\n\t// Send the control message to the south service\n\ttry {\n\t\tServiceRecord service(southService);\n\t\tif (!m_mgtClient->getService(service))\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Unable to find service '%s'\", southService.c_str());\n\t\t\treturn false;\n\t\t}\n\t\tstring address = service.getAddress();\n\t\tunsigned short port = service.getPort();\n\t\tchar addressAndPort[80];\n\t\tsnprintf(addressAndPort, sizeof(addressAndPort), \"%s:%d\", address.c_str(), port);\n\t\tSimpleWeb::Client<SimpleWeb::HTTP> http(addressAndPort);\n\n\t\tstring url = \"/fledge/south/setpoint\";\n\t\ttry {\n\t\t\tSimpleWeb::CaseInsensitiveMultimap headers = {{\"Content-Type\", \"application/json\"}};\n\t\t\tauto res = http.request(\"PUT\", url, payload, headers);\n\t\t\tif (res->status_code.compare(\"200 OK\"))\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Failed to send set point operation to service %s, %s\",\n\t\t\t\t\t\tsouthService.c_str(), res->status_code.c_str());\n\t\t\t\tLogger::getLogger()->error(\"Failed Payload: %s\", payload.c_str());\n\t\t\t\treturn false;\n\t\t\t}\n\t\t} catch (exception& e) {\n\t\t\tLogger::getLogger()->error(\"Failed to send set point operation to service %s, %s\",\n\t\t\t\t\t\tsouthService.c_str(), e.what());\n\t\t\treturn false;\n\t\t}\n\n\t\treturn true;\n\t}\n\tcatch (exception &e) 
{\n\t\tLogger::getLogger()->error(\"Failed to send set point operation to service %s, %s\",\n\t\t\t\tsouthService.c_str(), e.what());\n\t\treturn false;\n\t}\n\n}\n\n/**\n * Send to the control dispatcher service\n */\nbool NorthService::sendToDispatcher(const string& path, const string& payload)\n{\n\tLogger::getLogger()->debug(\"Dispatch %s with %s\", path.c_str(), payload.c_str());\n\t// Send the control message to the dispatcher service\n\ttry {\n\t\tServiceRecord service(\"dispatcher\");\n\t\tif (!m_mgtClient->getService(service))\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Unable to find dispatcher service 'dispatcher'\");\n\t\t\treturn false;\n\t\t}\n\t\tstring address = service.getAddress();\n\t\tunsigned short port = service.getPort();\n\t\tchar addressAndPort[80];\n\t\tsnprintf(addressAndPort, sizeof(addressAndPort), \"%s:%d\", address.c_str(), port);\n\t\tSimpleWeb::Client<SimpleWeb::HTTP> http(addressAndPort);\n\n\t\ttry {\n\t\t\tSimpleWeb::CaseInsensitiveMultimap headers = {{\"Content-Type\", \"application/json\"}};\n\t\t\t// Pass North service bearer token to dispatcher\n\t\t\tstring regToken = m_mgtClient->getRegistrationBearerToken();\n\t\t\tif (regToken != \"\")\n\t\t\t{\n\t\t\t\theaders.emplace(\"Authorization\", \"Bearer \" + regToken);\n\t\t\t}\n\n\t\t\tauto res = http.request(\"POST\", path, payload, headers);\n\t\t\tif (res->status_code.compare(\"202 Accepted\"))\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\n\t\t\t\t\t\t\"Failed to send control operation '%s' to dispatcher service, %s %s\",\n\t\t\t\t\t\t\tpath.c_str(), res->status_code.c_str(),\n\t\t\t\t\t\t\tres->content.string().c_str());\n\t\t\t\tLogger::getLogger()->error(\"Failed Payload: %s\", payload.c_str());\n\t\t\t\treturn false;\n\t\t\t}\n\t\t} catch (exception& e) {\n\t\t\tLogger::getLogger()->error(\"Failed to send control operation to dispatcher service, %s\",\n\t\t\t\t\t\te.what());\n\t\t\treturn false;\n\t\t}\n\n\t\treturn true;\n\t}\n\tcatch (exception &e) 
{\n\t\tLogger::getLogger()->error(\"Failed to send control operation to dispatcher service, %s\", e.what());\n\t\treturn false;\n\t}\n\n}\n\n/**\n * Return the control source for control operations. This is used\n * for pipeline matching.\n *\n * @return string\tThe control source\n */\nstring NorthService::controlSource()\n{\n\tstring source = \"\\\"source\\\" : \\\"service\\\", \\\"source_name\\\" : \\\"\";\n\tsource += m_name;\n\tsource += \"\\\"\";\n\n\treturn source;\n}\n\n/**\n * Raise an alert that we are having issues sending data\n *\n * We also write a warning to the system log to aid with debugging\n */\nvoid NorthService::alertFailures()\n{\n\tstring key = \"North \" + m_name;\n\tstring message = \"Repeated failures to send data via the \" + m_name + \" north service \";\n\tm_mgtClient->raiseAlert(key, message, \"normal\");\n\tlogger->warn(\"Repeated failures to send data to destination\");\n}\n\n/**\n * Clear the failure alert for sending data\n *\n * We clear the alert from the status bar and write a message to the system\n * log\n */\nvoid NorthService::clearFailures()\n{\n\tstring key = \"North \" + m_name;\n\tm_mgtClient->clearAlert(key);\n\tlogger->info(\"The sending of data has resumed\");\n}\n\n/**\n * Return the state of the pipeline debugger\n *\n * @return string\tJSON document reporting the state of the pipeline debugger\n */\nstring NorthService::debugState()\n{\n\tstring rval;\n\trval = \"{ \";\n\trval += \"\\\"debugger\\\" : \";\n\tif (m_debugState & DEBUG_ATTACHED)\n\t{\n\t\trval += \"\\\"Attached\\\",\";\n\t\trval += \"\\\"ingress\\\" : \";\n\t\tif (m_debugState & DEBUG_SUSPENDED)\n\t\t\trval += \"\\\"Suspended\\\", \";\n\t\telse\n\t\t\trval += \"\\\"Running\\\", \";\n\t\trval += \"\\\"egress\\\" : \";\n\t\tif (m_debugState & DEBUG_ISOLATED)\n\t\t\trval += \"\\\"Isolated\\\"\";\n\t\telse\n\t\t\trval += \"\\\"Storage\\\"\";\n\t}\n\telse if (m_allowDebugger)\n\t{\n\t\trval += \"\\\"Detached\\\"\";\n\t}\n\telse\n\t{\n\t\trval += 
\"\\\"Disabled\\\"\";\n\t}\n\trval += \"}\";\n\treturn rval;\n}\n\n/**\n * Process the setting of allowed features\n *\n * @param category\tThe configuration category\n */\nvoid NorthService::updateFeatures(const ConfigCategory& category)\n{\n\tif (category.itemExists(\"debugging\"))\n\t{\n\t\tstring s = category.getValue(\"debugging\");\n\t\tm_allowDebugger = (s.compare(\"true\") == 0);\n\t\tif ((m_debugState & DEBUG_ATTACHED) != 0 && m_allowDebugger == false)\n\t\t{\n\t\t\t// Detach the debugger in case there is an active session\n\t\t\tdetachDebugger();\n\t\t}\n\t}\n}\n\n"
  },
  {
    "path": "C/services/north/north_api.cpp",
    "content": "/**\n * Fledge north service API\n *\n * Copyright (c) 2025 Dianomic Systems\n *\n * Author: Mark Riddoch\n */\n\n#include <north_api.h>\n#include <north_service.h>\n#include <rapidjson/document.h>\n\nusing namespace std;\nusing namespace rapidjson;\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\n\nstatic NorthApi *api = NULL;\n\n/**\n * Wrapper for the PUT attach debugger API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void attachDebuggerWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->attachDebugger(response, request);\n}\n\n/**\n * Wrapper for the PUT detach debugger API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void detachDebuggerWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->detachDebugger(response, request);\n}\n\n/**\n * Wrapper for the PUT set debugger buffer size API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void setDebuggerBufferWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->setDebuggerBuffer(response, request);\n}\n\n/**\n * Wrapper for the GET debugger buffer API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void getDebuggerBufferWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->getDebuggerBuffer(response, request);\n}\n\n/**\n * Wrapper for the PUT debugger isolate API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void isolateDebuggerWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->isolateDebugger(response, request);\n}\n\n/**\n * Wrapper for the PUT debugger suspend API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void suspendDebuggerWrapper(Response response, 
Request request)\n{\n\tif (api)\n\t\tapi->suspendDebugger(response, request);\n}\n\n/**\n * Wrapper for the PUT step debugger API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void stepDebuggerWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->stepDebugger(response, request);\n}\n\n/**\n * Wrapper for the PUT replay debugger API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void replayDebuggerWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->replayDebugger(response, request);\n}\n\n/**\n * Wrapper for the GET state debugger API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void stateDebuggerWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->stateDebugger(response, request);\n}\n\n/**\n * Wrapper for thread creation that is used to start the API\n */\nstatic void startService()\n{\n\tapi->startServer();\n}\n\n/**\n * North API class constructor\n *\n * @param service\tThe NorthService class this is the API for\n */\nNorthApi::NorthApi(NorthService *service) : m_service(service), m_thread(NULL)\n{\n\tm_logger = Logger::getLogger();\n\tm_server = new HttpServer();\n\tm_server->config.port = 0;\n\tm_server->config.thread_pool_size = 1;\n\n\t// Add the debugger entry points\n\tm_server->resource[DEBUG_ATTACH][\"PUT\"] = attachDebuggerWrapper;\n\tm_server->resource[DEBUG_DETACH][\"PUT\"] = detachDebuggerWrapper;\n\tm_server->resource[DEBUG_BUFFER][\"POST\"] = setDebuggerBufferWrapper;\n\tm_server->resource[DEBUG_BUFFER][\"GET\"] = getDebuggerBufferWrapper;\n\tm_server->resource[DEBUG_ISOLATE][\"PUT\"] = isolateDebuggerWrapper;\n\tm_server->resource[DEBUG_SUSPEND][\"PUT\"] = suspendDebuggerWrapper;\n\tm_server->resource[DEBUG_STEP][\"PUT\"] = stepDebuggerWrapper;\n\tm_server->resource[DEBUG_REPLAY][\"PUT\"] = 
replayDebuggerWrapper;\n\tm_server->resource[DEBUG_STATE][\"GET\"] = stateDebuggerWrapper;\n\n\tapi = this;\n\tm_thread = new thread(startService);\n}\n\n/**\n * Destroy the API.\n *\n * Stop the service and wait for the thread to terminate.\n */\nNorthApi::~NorthApi()\n{\n\tif (m_thread)\n\t{\n\t\tm_server->stop();\n\t\tm_thread->join();\n\t\tdelete m_thread;\n\t}\n\tif (m_server)\n\t\tdelete m_server;\n}\n\n/**\n * Called on the API service thread. Start the listener for HTTP requests\n */\nvoid NorthApi::startServer()\n{\n\tm_server->start();\n}\n\n/**\n * Return the port the service is listening on\n */\nunsigned short NorthApi::getListenerPort()\n{\n\tint max_wait = 10;\n\t// Need to make sure the server thread has started\n\twhile (m_server->getLocalPort() == 0 && max_wait-- > 0)\n\t\tusleep(100);\n\treturn m_server->getLocalPort();\n}\n\n/**\n * Invoke debugger attach on the north plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request - unused\n */\nvoid NorthApi::attachDebugger(Response response, Request /*request*/)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tbool status = m_service->attachDebugger();\n\n\t\tif (status)\n\t\t{\n\t\t\tstring responsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t\tm_service->respond(response, responsePayload);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring responsePayload = QUOTE({ \"status\" : \"Failed to attach the debugger to the pipeline\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t}\n\t}\n\telse\n\t{\tstring responsePayload = QUOTE({ \"status\" : \"Pipeline debugger features have been disabled\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t}\n}\n\n/**\n * Invoke debugger detach on the north plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request - unused\n */\nvoid NorthApi::detachDebugger(Response response, Request 
/*request*/)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tstring responsePayload;\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\tm_service->detachDebugger();\n\t\t\tresponsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t}\n\t\telse\n\t\t{\n\t\t\tresponsePayload = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t}\n\t\tm_service->respond(response, responsePayload);\n\t}\n\telse\n\t{\tstring responsePayload = QUOTE({ \"status\" : \"Pipeline debugger features have been disabled\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t}\n}\n\n/**\n * Invoke set debugger buffer size on the north plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid NorthApi::setDebuggerBuffer(Response response, Request request)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\tstring payload = request->content.string();\n\t\t\tDocument doc;\n\t\t\tParseResult result = doc.Parse(payload.c_str());\n\t\t\tif (result)\n\t\t\t{\n\t\t\t\tif (doc.HasMember(\"size\"))\n\t\t\t\t{\n\t\t\t\t\tif (doc[\"size\"].IsUint())\n\t\t\t\t\t{\n\t\t\t\t\t\tunsigned int size = doc[\"size\"].GetUint();\n\t\t\t\t\t\tm_service->setDebuggerBuffer(size);\n\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t\t\t\t\tm_service->respond(response, responsePayload);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"The value of 'size' should be an unsigned integer\" });\n\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Missing 'size' item in payload\" });\n\t\t\t\t\tm_service->respond(response, 
SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Failed to parse request payload\" });\n\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring responsePayload = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t}\n\t}\n\telse\n\t{\tstring responsePayload = QUOTE({ \"status\" : \"Pipeline debugger features have been disabled\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t}\n}\n\n/**\n * Invoke get debugger buffer size on the north plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request - unused\n */\nvoid NorthApi::getDebuggerBuffer(Response response, Request /*request*/)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tstring result;\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\tresult = m_service->getDebuggerBuffer();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tresult = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t}\n\t\tm_service->respond(response, result);\n\t}\n\telse\n\t{\tstring responsePayload = QUOTE({ \"status\" : \"Pipeline debugger features have been disabled\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t}\n}\n\n/**\n * Invoke isolate debugger handler on the north plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid NorthApi::isolateDebugger(Response response, Request request)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\tstring payload = request->content.string();\n\t\t\tDocument doc;\n\t\t\tParseResult result = doc.Parse(payload.c_str());\n\t\t\tif 
(result)\n\t\t\t{\n\t\t\t\tif (doc.HasMember(\"state\"))\n\t\t\t\t{\n\t\t\t\t\tif (doc[\"state\"].IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tstring state = doc[\"state\"].GetString();\n\t\t\t\t\t\tif (state.compare(\"discard\") == 0)\n\t\t\t\t\t\t\tm_service->isolateDebugger(true);\n\t\t\t\t\t\telse if (state.compare(\"store\") == 0)\n\t\t\t\t\t\t\tm_service->isolateDebugger(false);\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"The value of 'state' should be one of 'discard' or 'store'\" });\n\t\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t\t\t\t\tm_service->respond(response, responsePayload);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"The value of 'state' should be a string with either 'discard' or 'store'.\" });\n\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Missing 'state' item in payload\" });\n\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Failed to parse request payload\" });\n\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring responsePayload = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t}\n\t}\n\telse\n\t{\tstring responsePayload = QUOTE({ \"status\" : \"Pipeline debugger features have been disabled\" 
});\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t}\n}\n\n/**\n * Invoke suspend debugger handler on the north plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid NorthApi::suspendDebugger(Response response, Request request)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\tstring payload = request->content.string();\n\t\t\tDocument doc;\n\t\t\tParseResult result = doc.Parse(payload.c_str());\n\t\t\tif (result)\n\t\t\t{\n\t\t\t\tif (doc.HasMember(\"state\"))\n\t\t\t\t{\n\t\t\t\t\tif (doc[\"state\"].IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tstring state = doc[\"state\"].GetString();\n\t\t\t\t\t\tif (state.compare(\"suspend\") == 0)\n\t\t\t\t\t\t\tm_service->suspendDebugger(true);\n\t\t\t\t\t\telse if (state.compare(\"resume\") == 0)\n\t\t\t\t\t\t\tm_service->suspendDebugger(false);\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"The value of 'state' should be one of 'suspend' or 'resume'\" });\n\t\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t\t\t\t\tm_service->respond(response, responsePayload);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"The value of 'state' should be a string with either 'suspend' or 'resume'.\" });\n\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Missing 'state' item in payload\" });\n\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring 
responsePayload = QUOTE({ \"message\" : \"Failed to parse request payload\" });\n\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring responsePayload = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t}\n\t}\n\telse\n\t{\tstring responsePayload = QUOTE({ \"status\" : \"Pipeline debugger features have been disabled\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t}\n}\n\n/**\n * Invoke set debugger step command on the north plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid NorthApi::stepDebugger(Response response, Request request)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\tstring payload = request->content.string();\n\t\t\tDocument doc;\n\t\t\tParseResult result = doc.Parse(payload.c_str());\n\t\t\tif (result)\n\t\t\t{\n\t\t\t\tif (doc.HasMember(\"steps\"))\n\t\t\t\t{\n\t\t\t\t\tif (doc[\"steps\"].IsUint())\n\t\t\t\t\t{\n\t\t\t\t\t\tunsigned int steps = doc[\"steps\"].GetUint();\n\t\t\t\t\t\tm_service->stepDebugger(steps);\n\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t\t\t\t\tm_service->respond(response, responsePayload);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"The value of 'steps' should be an unsigned integer\" });\n\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Missing 'steps' item in payload\" });\n\t\t\t\t\tm_service->respond(response, 
SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Failed to parse request payload\" });\n\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring responsePayload = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t}\n\t}\n\telse\n\t{\tstring responsePayload = QUOTE({ \"status\" : \"Pipeline debugger features have been disabled\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t}\n}\n\n/**\n * Invoke debugger replay on the north plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request - unused\n */\nvoid NorthApi::replayDebugger(Response response, Request /*request*/)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tstring responsePayload;\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\t// TODO Handle pre-requisites\n\t\t\tif (m_service->replayDebugger())\n\t\t\t{\n\t\t\t\tresponsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tresponsePayload = QUOTE({ \"status\" : \"No data to replay\" });\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tresponsePayload = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t}\n\t\tm_service->respond(response, responsePayload);\n\t}\n\telse\n\t{\tstring responsePayload = QUOTE({ \"status\" : \"Pipeline debugger features have been disabled\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t}\n}\n\n/**\n * Invoke debugger state on the north plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request - unused\n */\nvoid NorthApi::stateDebugger(Response response, Request 
/*request*/)\n{\n\tstring payload = m_service->debugState();\n\tm_service->respond(response, payload);\n}\n"
  },
  {
    "path": "C/services/north/north_plugin.cpp",
    "content": "/*\n * Fledge north service.\n *\n * Copyright (c) 2020 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <north_plugin.h>\n#include <config_category.h>\n#include <logger.h>\n#include <exception>\n#include <typeinfo>\n#include <stdexcept>\n#include <mutex>\n\nusing namespace std;\n\n// mutex between various plugin methods, since reconfigure changes the handle \n// object itself and marks previous handle as garbage collectible by Python runtime\nstd::mutex mtx2;\n\n/**\n * Constructor for the class that wraps the north plugin\n *\n * Create a set of function pointers that resolve to the loaded plugin and\n * enclose in the class.\n *\n */\nNorthPlugin::NorthPlugin(PLUGIN_HANDLE handle, const ConfigCategory& category) : Plugin(handle)\n{\n\t// Call the init method of the plugin\n\tPLUGIN_HANDLE (*pluginInit)(const void *) = (PLUGIN_HANDLE (*)(const void *))\n\t\t\t\t\tmanager->resolveSymbol(handle, \"plugin_init\");\n\tm_instance = (*pluginInit)(&category);\n\n\tif (!m_instance)\n\t{\n\t\tLogger::getLogger()->error(\"plugin_init returned NULL, cannot proceed\");\n\t\tthrow runtime_error(\"plugin_init returned NULL\");\n\t}\n\n\t// Setup the function pointers to the plugin\n  \tpluginSendPtr = (uint32_t (*)(PLUGIN_HANDLE, const std::vector<Reading *>& readings))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_send\");\n\t\n  \tpluginReconfigurePtr = (void (*)(PLUGIN_HANDLE*, const std::string&))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_reconfigure\");\n  \tpluginShutdownPtr = (void (*)(PLUGIN_HANDLE))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_shutdown\");\n\tpluginShutdownDataPtr = (string (*)(const PLUGIN_HANDLE))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_shutdown\");\n\n\tpluginStartPtr = (void (*)(const PLUGIN_HANDLE))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_start\");\n\tpluginStartDataPtr = (void (*)(const PLUGIN_HANDLE, const string& storedData))\n\t\t\t\tmanager->resolveSymbol(handle, 
\"plugin_start\");\n\tif (hasControl())\n\t{\n\t\tpluginRegisterPtr = (void (*)(const PLUGIN_HANDLE handle, bool ( *write)(char *name, char *value, ControlDestination destination, ...),\n                                     int (* operation)(char *operation, int paramCount, char *names[], char *parameters[], ControlDestination destination, ...)))manager->resolveSymbol(handle, \"plugin_register\");\n\t}\n\telse\n\t{\n\t\tpluginRegisterPtr = NULL;\n\t}\n}\n\nNorthPlugin::~NorthPlugin()\n{\n}\n\n/**\n * Call the start method in the plugin\n * with no persisted data\n */\nvoid NorthPlugin::start()\n{\n\t// Check pluginStart function pointer exists\n\tif (this->pluginStartPtr)\n\t{\n\t\tthis->pluginStartPtr(m_instance);\n\t}\n}\n\n/**\n * Call the start method in the plugin\n * passing persisted data\n */\nvoid NorthPlugin::startData(const string& storedData)\n{\n\t// Check pluginStartData function pointer exists\n\tif (this->pluginStartDataPtr)\n\t{\n\t\tthis->pluginStartDataPtr(m_instance, storedData);\n\t}\n}\n\n/**\n * Call the send method in the plugin\n */\nuint32_t NorthPlugin::send(const vector<Reading *>& readings)\n{\n\tlock_guard<mutex> guard(mtx2);\n\ttry {\n\t\treturn this->pluginSendPtr(m_instance, readings);\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in north plugin send(), %s\",\n\t\t\te.what());\n\t\tthrow;\n\t} catch (...) {\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in north plugin send(), %s\",\n\t\t\tp ? 
p.__cxa_exception_type()->name() : \"unknown exception\");\n\t\tthrow;\n\t}\n}\n\n/**\n * Call the reconfigure method in the plugin\n */\nvoid NorthPlugin::reconfigure(const string& newConfig)\n{\n\tif (!pluginReconfigurePtr)\n\t{\n\t\t/*\n\t\t * The plugin does not support reconfiguration, shutdown\n\t\t * and restart the plugin.\n\t\t */\n\t\tlock_guard<mutex> guard(mtx2);\n\t\tif (persistData())\n\t\t{\n\t\t}\n\t\telse\n\t\t{\n\t\t\t(*pluginShutdownPtr)(m_instance);\n\t\t\tPLUGIN_HANDLE (*pluginInit)(const void *) = (PLUGIN_HANDLE (*)(const void *))\n\t\t\t\t\tmanager->resolveSymbol(handle, \"plugin_init\");\n\t\t\tConfigCategory category(\"new\", newConfig);\n\t\t\tm_instance = (*pluginInit)(&category);\n\t\t}\n\t\treturn;\n\t}\n\tlock_guard<mutex> guard(mtx2);\n\ttry {\n\t\tthis->pluginReconfigurePtr(&m_instance, newConfig);\n\t\tif (!m_instance)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"plugin_reconfigure returned NULL, cannot proceed\");\n\t\t\tthrow runtime_error(\"plugin_reconfigure returned NULL\");\n\t\t}\n\t\treturn;\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in north plugin reconfigure(), %s\",\n\t\t\te.what());\n\t\tthrow;\n\t} catch (...) {\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in north plugin reconfigure(), %s\",\n\t\t\tp ? p.__cxa_exception_type()->name() : \"unknown exception\");\n\t\tthrow;\n\t}\n}\n\n/**\n * Call the shutdown method in the plugin\n */\nvoid NorthPlugin::shutdown()\n{\n\tif (this->pluginShutdownPtr)\n\t{\n\t\ttry {\n\t\t\treturn this->pluginShutdownPtr(m_instance);\n\t\t} catch (exception& e) {\n\t\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in north plugin shutdown(), %s\",\n\t\t\t\te.what());\n\t\t\tthrow;\n\t\t} catch (...) {\n\t\t\tstd::exception_ptr p = std::current_exception();\n\t\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in north plugin shutdown(), %s\",\n\t\t\t\tp ? 
p.__cxa_exception_type()->name() : \"unknown exception\");\n\t\t\tthrow;\n\t\t}\n\t}\n}\n\n/**\n * Call the shutdown method in the plugin\n * and return plugin data to persist as JSON string\n */\nstring NorthPlugin::shutdownSaveData()\n{\n\tstring ret(\"\");\n\t// Check pluginShutdownData function pointer exists\n\tif (this->pluginShutdownDataPtr)\n\t{\n\t\tret = this->pluginShutdownDataPtr(m_instance);\n\t}\n\treturn ret;\n}\n\n/**\n * Call the plugin_register entry point of the plugin if one has been defined\n */\nvoid NorthPlugin::pluginRegister(bool ( *write)(char *name, char *value, ControlDestination destination, ...), \n                                     int (* operation)(char *operation, int paramCount, char *names[], char *parameters[], ControlDestination destination, ...))\n{\n\tif (hasControl() && pluginRegisterPtr)\n\t{\n\t\t(*pluginRegisterPtr)(m_instance, write, operation);\n\t}\n}\n"
  },
  {
    "path": "C/services/north-plugin-interfaces/python/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6.0)\n\nproject(north-plugin-python-interface)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\nset(DLLIB -ldl)\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\n\n# Find source files\nfile(GLOB SOURCES python_plugin_interface.cpp)\n\n# Find Python.h 3.x dev/lib package\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 REQUIRED COMPONENTS Interpreter Development)\nendif()\n\n# Include header files\ninclude_directories(include ../../../common/include ../../../services/common/include ../../../services/north/include ../../../thirdparty/rapidjson/include)\ninclude_directories(../../../services/common-plugin-interfaces/python/include)\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\nset(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/../../../lib)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\ntarget_link_libraries(${PROJECT_NAME} ${DLLIB})\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${SERVICE_COMMON_LIB})\n\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    target_link_libraries(${PROJECT_NAME} ${PYTHON_LIBRARIES})\nelse()\n    target_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES})\nendif()\n\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/lib)\n"
  },
  {
    "path": "C/services/north-plugin-interfaces/python/python_plugin_interface.cpp",
    "content": "/*\n * Fledge North plugin interface related\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <logger.h>\n#include <config_category.h>\n#include <reading.h>\n#include <pythonreadingset.h>\n#include <mutex>\n#include <north_plugin.h>\n#include <pyruntime.h>\n#include <Python.h>\n#include <python_plugin_common_interface.h>\n\n#define SHIM_SCRIPT_NAME \"north_shim\"\n\nusing namespace std;\n\nextern \"C\" {\n\nextern PLUGIN_INFORMATION *plugin_info_fn();\nextern PLUGIN_HANDLE plugin_init_fn(ConfigCategory *);\nextern void plugin_reconfigure_fn(PLUGIN_HANDLE*, const std::string&);\nextern void plugin_shutdown_fn(PLUGIN_HANDLE);\nextern void logErrorMessage();\nextern PLUGIN_INFORMATION *Py2C_PluginInfo(PyObject *);\n\n// North plugin entry points\nvoid plugin_start_fn(PLUGIN_HANDLE handle);\nuint32_t plugin_send_fn(PLUGIN_HANDLE handle, const std::vector<Reading *>& readings);\n\n\n/**\n * Function to invoke async 'plugin_send' function in python plugin\n *\n * @param   plugin_send_module_func Reference to plugin's plugin_send async method\n * @param   handle     Plugin handle from plugin_init_fn\n * @param   readingsList    Reading list to send\n */\nunsigned int call_plugin_send_coroutine(PyObject *plugin_send_module_func, PLUGIN_HANDLE handle, PyObject *readingsList)\n{\n\tunsigned int numSent=0;\n\n\tstd::string fcn = \"\";\n\tfcn += \"def plugin_send_wrapper(handle, readings, plugin_send_module_func):\\n\";\n\tfcn += \"    import asyncio\\n\";\n\tfcn += \"    loop = asyncio.new_event_loop()\\n\"; \n\tfcn += \"    asyncio.set_event_loop(loop)\\n\";\n\tfcn += \"    coroObj = plugin_send_module_func(handle, readings, \\\"000001\\\")\\n\";\n\tfcn += \"    task = loop.create_task(coroObj)\\n\";\n\tfcn += \"    done, _ = loop.run_until_complete(asyncio.wait([task]))\\n\";\n\tfcn += \"    numSent = 0\\n\";\n\tfcn += \"    for t in done:\\n\";\n\tfcn += \"        
retCode, lastId, numSent = t.result()\\n\";\n\tfcn += \"    return numSent\\n\";\n\n\n\tPyRun_SimpleString(fcn.c_str());\n\tPyObject* mod = PyImport_ImportModule(\"__main__\");\n\tif (mod != NULL) \n\t{\n\t\tPyObject* method = PyObject_GetAttrString(mod, \"plugin_send_wrapper\");\n\t\tif (method != NULL)\n\t\t{\n\t\t\tPyObject* arg = Py_BuildValue(\"OOO\", handle, readingsList, plugin_send_module_func);\n\t\t\tPyObject* pReturn = PyObject_CallObject(method, arg);\n\t\t\tLogger::getLogger()->debug(\"%s:%d, pReturn=%p\", __FUNCTION__, __LINE__, pReturn);\n\t\t\tPy_CLEAR(arg);\n\t\t    \n\t\t\tif (pReturn != NULL)\n\t\t\t{\n\t\t\t\tif(PyLong_Check(pReturn))\n\t\t\t\t{\n\t\t\t\t\tnumSent = (long)PyLong_AsUnsignedLongMask(pReturn);\n\t\t\t\t\tLogger::getLogger()->debug(\"numSent=%d\", numSent);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->warn(\"plugin_send_wrapper() didn't return a number, returned value is of type %s\", (Py_TYPE(pReturn))->tp_name);\n\t\t\t\t}\t\n\t\t\t\tPy_CLEAR(pReturn);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->debug(\"%s:%d: pReturn is NULL\", __FUNCTION__, __LINE__);\n\t\t\t\tif (PyErr_Occurred())\n\t\t\t\t{\n\t\t\t\t\tlogErrorMessage();\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tPy_CLEAR(method);\n\t}\n        \n\t// Reset error\n\tPyErr_Clear();\n\n\t// Remove references\n\tPy_CLEAR(mod);\n\n\treturn numSent;\n}\n\n\n/**\n * Constructor for PythonPluginHandle\n */\nvoid *PluginInterfaceInit(const char *pluginName, const char * pluginPathName)\n{\n\tbool initialisePython = false;\n\n\t// Set plugin name, also for methods in common-plugin-interfaces/python\n\tgPluginName = pluginName;\n\n\tstring fledgePythonDir;\n    \n\tstring fledgeRootDir(getenv(\"FLEDGE_ROOT\"));\n\tfledgePythonDir = fledgeRootDir + \"/python\";\n    \n\tstring northRootPath = fledgePythonDir + string(R\"(/fledge/plugins/north/)\") + string(pluginName);\n\tLogger::getLogger()->debug(\"%s:%d:, northRootPath=%s\", __FUNCTION__, __LINE__, 
northRootPath.c_str());\n    \n\t// Embedded Python 3.5 program name\n\twchar_t *programName = Py_DecodeLocale(pluginName, NULL);\n\tPy_SetProgramName(programName);\n\tPyMem_RawFree(programName);\n\n\tPythonRuntime::getPythonRuntime();\n\n\tLogger::getLogger()->debug(\"%s:%d\", __FUNCTION__, __LINE__);\n    \n\t// Acquire GIL\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\tLogger::getLogger()->debug(\"NorthPlugin %s:%d: \"\n\t\t\t\t   \"northRootPath=%s, fledgePythonDir=%s, plugin '%s'\",\n\t\t\t\t   __FUNCTION__,\n\t\t\t\t   __LINE__,\n\t\t\t\t   northRootPath.c_str(),\n\t\t\t\t   fledgePythonDir.c_str(),\n\t\t\t\t   pluginName);\n\n\t// Set Python path for embedded Python 3.x\n\t// Get current sys.path - borrowed reference\n\tPyObject* sysPath = PySys_GetObject((char *)\"path\");\n\tPyList_Append(sysPath, PyUnicode_FromString((char *) northRootPath.c_str()));\n\tPyList_Append(sysPath, PyUnicode_FromString((char *) fledgePythonDir.c_str()));\n\n\t// Set sys.argv for embedded Python 3.5\n\tint argc = 2;\n\twchar_t* argv[2];\n\targv[0] = Py_DecodeLocale(\"\", NULL);\n\targv[1] = Py_DecodeLocale(pluginName, NULL);\n\tPySys_SetArgv(argc, argv);\n\n\t// 2) Import Python script\n\tPyObject *pModule = PyImport_ImportModule(pluginName);\n\n\t// Check whether the Python module has been imported\n\tif (!pModule)\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\t\tLogger::getLogger()->fatal(\"PluginInterfaceInit: cannot import Python 3.5 script \"\n\t\t\t\t\t   \"'%s' from '%s' : plugin '%s'\",\n\t\t\t\t\t   pluginName, northRootPath.c_str(),\n\t\t\t\t\t   pluginName);\n\t}\n\telse\n\t{\n\t\tstd::pair<std::map<string, PythonModule*>::iterator, bool> ret;\n\t\tif (pythonModules)\n\t\t{\n\t\t\t// Add element\n\t\t\tret = pythonModules->insert(pair<string, PythonModule*>\n\t\t\t\t(string(pluginName), new PythonModule(pModule,\n\t\t\t\t\t\t\t\t      initialisePython,\n\t\t\t\t\t\t\t\t      string(pluginName),\n\t\t\t\t\t\t\t\t      
PLUGIN_TYPE_NORTH,\n\t\t\t\t\t\t\t\t      // New Python interpreter not set\n\t\t\t\t\t\t\t\t      NULL)));\n\t\t}\n\t\t// Check result\n\t\tif (!pythonModules ||\n\t\t    ret.second == false)\n\t\t{\n\t\t\tLogger::getLogger()->fatal(\"%s:%d: python module not added to the map \"\n\t\t\t\t\t\t   \"of loaded plugins, pModule=%p, plugin '%s', aborting.\",\n\t\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t\t   __LINE__,\n\t\t\t\t\t\t   pModule,\n\t\t\t\t\t\t   pluginName);\n\t\t\tPy_CLEAR(pModule);\n\t\t\treturn NULL;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"%s:%d: python module loaded successfully, pModule=%p, plugin '%s'\",\n\t\t\t\t\t\t   __FUNCTION__,\n\t\t\t\t\t\t   __LINE__,\n\t\t\t\t\t\t   pModule,\n\t\t\t\t\t\t   pluginName);\n\t\t}\n\t}\n\n\t// Release GIL\n\tPyGILState_Release(state);\n\n\treturn pModule;\n}\n\n/**\n * Returns function pointer that can be invoked to call '_sym' function\n * in python plugin\n */\nvoid* PluginInterfaceResolveSymbol(const char *_sym, const string& name)\n{\n\tstring sym(_sym);\n\tif (!sym.compare(\"plugin_info\"))\n\t\treturn (void *) plugin_info_fn;\n\telse if (!sym.compare(\"plugin_init\"))\n\t\treturn (void *) plugin_init_fn;\n\telse if (!sym.compare(\"plugin_shutdown\"))\n\t\treturn (void *) plugin_shutdown_fn;\n\telse if (!sym.compare(\"plugin_reconfigure\"))\n\t\treturn (void *) plugin_reconfigure_fn;\n\telse if (!sym.compare(\"plugin_start\"))\n\t\treturn (void *) plugin_start_fn;\n\telse if (!sym.compare(\"plugin_send\"))\n\t\treturn (void *) plugin_send_fn;\n\telse\n\t{\n\t\tLogger::getLogger()->fatal(\"PluginInterfaceResolveSymbol can not find symbol '%s' \"\n\t\t\t\t\t   \"in the North Python plugin interface library, loaded plugin '%s'\",\n\t\t\t\t\t   _sym,\n\t\t\t\t\t   name.c_str());\n\t\treturn NULL;\n\t}\n}\n\n/**\n * Function to invoke 'plugin_start' function in python plugin\n *\n * @param    handle     Plugin handle from plugin_init_fn\n */\nvoid plugin_start_fn(PLUGIN_HANDLE handle)\n{\n\tif 
(!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_start_fn: \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in plugin_start_fn, handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn;\n\t}\n\n\t// Look for Python module for handle key\n\tauto it = pythonHandles->find(handle);\n\tif (it == pythonHandles->end() ||\n\t    !it->second ||\n\t    !it->second->m_module)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_start(): \"\n\t\t\t\t\t   \"pModule is NULL, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn;\n\t}\n\n\tPyObject* pFunc;\n\n\t// Take GIL\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_start\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->info(\"Cannot find 'plugin_start' method \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\n\tif (!pFunc || !PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->info(\"Cannot call method 'plugin_start' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPy_CLEAR(pFunc);\n\n\t\t// Release GIL\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\n\t// Call Python method passing an object\n\tPyObject* pReturn = PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"O\",\n\t\t\t\t\t\t  handle);\n\n\tPy_CLEAR(pFunc);\n\n\t// Handle return\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->warn(\"Called python script method 'plugin_start': \"\n\t\t\t\t\t   \"error while getting result object, plugin '%s'\",\n\t\t\t\t\t   
it->second->m_name.c_str());\n\t\tlogErrorMessage();\n\t}\n\n\t// Remove result object\n\tPy_CLEAR(pReturn);\n\n\t// Release GIL\n\tPyGILState_Release(state);\n}\n\n/**\n * Function to invoke 'plugin_send' function in python plugin\n *\n * @param    handle     Plugin handle from plugin_init_fn\n * @param    readings\tVector of readings data to send\n *\n * NOTE: currently doesn't work with async plugin_send\n */\nuint32_t plugin_send_fn(PLUGIN_HANDLE handle, const std::vector<Reading *>& readings)\n{\n\tuint32_t numReadingsSent = 0UL;\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_send_fn: \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn numReadingsSent;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\t// Plugin name cannot be logged here\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in plugin_send_fn, handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn numReadingsSent;\n\t}\n\n\t// Look for Python module for handle key\n\tauto it = pythonHandles->find(handle);\n\tif (it == pythonHandles->end() ||\n\t    !it->second ||\n\t    !it->second->m_module)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_send(): \"\n\t\t\t\t\t   \"pModule is NULL, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn numReadingsSent;\n\t}\n\n\t// We have plugin name\n\tstring pName = it->second->m_name;\n\n\tPyObject* pFunc;\n\n\t// Take GIL\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_send\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find 'plugin_send' \"\n\t\t\t\t\t   \"method in loaded python module '%s'\",\n\t\t\t\t\t   pName.c_str());\n\t\tPyGILState_Release(state);\n\t\treturn numReadingsSent;\n\t}\n\n\tif (!pFunc || !PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot 
call method 'plugin_send' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   pName.c_str());\n\t\tPy_CLEAR(pFunc);\n\n\t\t// Release GIL\n\t\tPyGILState_Release(state);\n\t\treturn numReadingsSent;\n\t}\n\n\t// 1. create a ReadingSet\n\tReadingSet set(&readings);\n\n\t// 2. create a PythonReadingSet object\n\tPythonReadingSet *pyReadingSet = (PythonReadingSet *) &set;\n\n\t// 3. create PyObject\n\tPyObject* readingsList = pyReadingSet->toPython(true);\n\n\tnumReadingsSent = call_plugin_send_coroutine(pFunc, handle, readingsList);\n\tLogger::getLogger()->debug(\"C2Py: plugin_send_fn():L%d: filtered readings sent %d\",\n\t\t\t\t__LINE__,\n\t\t\t\tnumReadingsSent);\n\n\tset.clear(); // to avoid deletion of contained Reading objects; they are subsequently accessed in calling function DataSender::send()\n\n\t// Remove python objects\n\tPy_CLEAR(readingsList);\n\tPy_CLEAR(pFunc);\n\n\t// Release GIL\n\tPyGILState_Release(state);\n\n\t// Return the number of readings sent\n\treturn numReadingsSent;\n}\n\n};\n\n"
  },
  {
    "path": "C/services/notification-plugin-interfaces/python/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6.0)\n\nproject(notification-plugin-python-interface)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\nset(DLLIB -ldl)\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\n\n# Find source files\nfile(GLOB SOURCES python_plugin_interface.cpp)\n\n# Find Python.h 3.x dev/lib package\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 REQUIRED COMPONENTS Interpreter Development)\nendif()\n\n# Include header files\ninclude_directories(include ../../../common/include ../../../services/common/include ../../../services/south/include ../../../thirdparty/rapidjson/include)\ninclude_directories(../../../services/common-plugin-interfaces/python/include)\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\nset(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/../../../lib)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\ntarget_link_libraries(${PROJECT_NAME} ${DLLIB})\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${SERVICE_COMMON_LIB})\n\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    target_link_libraries(${PROJECT_NAME} ${PYTHON_LIBRARIES})\nelse()\n    target_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES})\nendif()\n\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/lib)\n"
  },
  {
    "path": "C/services/notification-plugin-interfaces/python/python_plugin_interface.cpp",
"content": "/*\n * Fledge notification plugin interface related\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <logger.h>\n#include <config_category.h>\n#include <reading.h>\n#include <mutex>\n#include <plugin_handle.h>\n#include <pyruntime.h>\n#include <Python.h>\n\n#include <python_plugin_common_interface.h>\n#include <pythonreadingset.h>\n#include <base64dpimage.h>\n#include <base64databuffer.h>\n\n#define PY_ARRAY_UNIQUE_SYMBOL  PyArray_API_FLEDGE\n#include <numpy/npy_common.h>\n#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION\n#include <numpy/ndarraytypes.h>\n#include <numpy/ndarrayobject.h>\n\n#undef NUMPY_IMPORT_ARRAY_RETVAL\n#define NUMPY_IMPORT_ARRAY_RETVAL       0\n\nusing namespace std;\n\nextern \"C\" {\n\nextern PLUGIN_INFORMATION *Py2C_PluginInfo(PyObject *);\nextern void logErrorMessage();\nextern bool numpyImportError;\nextern PLUGIN_INFORMATION *plugin_info_fn();\nextern PLUGIN_HANDLE plugin_init_fn(ConfigCategory *);\nextern void plugin_shutdown_fn(PLUGIN_HANDLE);\n\n// Reconfigure entry point for rule and delivery plugins\nvoid notification_plugin_reconfigure_fn(PLUGIN_HANDLE,\n\t\t\t\t\tconst std::string&);\n// Notification rule plugin entry points\nstd::string plugin_triggers_fn(PLUGIN_HANDLE handle);\nbool plugin_eval_fn(PLUGIN_HANDLE handle,\n\t\t    const std::string& assetValues);\nstd::string plugin_reason_fn(PLUGIN_HANDLE handle);\n// Notification delivery plugin entry point\nbool plugin_deliver_fn(PLUGIN_HANDLE handle,\n\t\t\tconst std::string& deliveryName,\n\t\t\tconst std::string& notificationName,\n\t\t\tconst std::string& triggerReason,\n\t\t\tconst std::string& customMessage);\n\n// Substitute string values with known data types\nbool substituteObjects(PyObject *data, vector<PyObject*> &removeObjects);\n\n/**\n * Constructor for PythonPluginHandle\n *    - Set sys.path and sys.argv\n *    - Set plugin_type (notificationRule or 
notificationDelivery)\n *    - Load Python module for the plugin\n *\n * @param    pluginName\t\tThe plugin name to load\n * @param    pluginPathName\tThe plugin pathname\n * @return\t\t\tPython object pointer\n *\t\t\t\tof loaded Python shim file\n *\t\t\t\tor NULL for errors\n */\nvoid *PluginInterfaceInit(const char *pluginName, const char *pluginPathName)\n{\n\tbool initPython = false;\n\n\t// Set plugin name for common-plugin-interfaces/python\n\tgPluginName = pluginName;\n\n\t// Extract plugin type from path\n\tstring pluginType = strstr(pluginPathName, PLUGIN_TYPE_NOTIFICATION_RULE) != NULL ?\n\t\t\t\t\tPLUGIN_TYPE_NOTIFICATION_RULE :\n\t\t\t\t\tPLUGIN_TYPE_NOTIFICATION_DELIVERY;\n\n\tstring appPythonDir;\n\n\tstring appRootDir(getenv(\"FLEDGE_ROOT\"));\n\tappPythonDir = appRootDir + \"/python\";\n\n\tstring notificationsRootPath = appPythonDir +\n\t\t\tstring(\"/fledge/plugins/\") +\n\t\t\tpluginType + \"/\" +\n\t\t\tstring(pluginName);\n\n\tLogger::getLogger()->debug(\"%s:%d: notificationsRootPath=%s\",\n\t\t\t\t__FUNCTION__,\n\t\t\t\t__LINE__,\n\t\t\t\tnotificationsRootPath.c_str());\n\n\t// Get Python runtime\n\tPythonRuntime::getPythonRuntime();\n\n\t// Acquire GIL\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\tLogger::getLogger()->debug(\"NotificationPlugin PluginInterfaceInit %s:%d: \"\n\t\t\t\t\"appPythonDir=%s, plugin '%s', type '%s'\",\n\t\t\t\t__FUNCTION__,\n\t\t\t\t__LINE__,\n\t\t\t\tappPythonDir.c_str(),\n\t\t\t\tpluginName,\n\t\t\t\tpluginType.c_str());\n\n\t// Set Python path for embedded Python 3.x\n\t// Get current sys.path - borrowed reference\n\tPyObject* sysPath = PySys_GetObject((char *)\"path\");\n\tPyList_Append(sysPath,\n\t\t\tPyUnicode_FromString((char *)notificationsRootPath.c_str()));\n\n\t// Set sys.argv for embedded Python 3.5\n\tint argc = 2;\n\twchar_t* argv[2];\n\targv[0] = Py_DecodeLocale(\"\", NULL);\n\targv[1] = Py_DecodeLocale(pluginName, NULL);\n\tPySys_SetArgv(argc, argv);\n\n\t// Import plugin module\n\tPyObject 
*pModule = PyImport_ImportModule(pluginName);\n\n\tLogger::getLogger()->debug(\"%s:%d: pluginName=%s, type '%s', pModule=%p\",\n\t\t\t\t__FUNCTION__,\n\t\t\t\t__LINE__,\n\t\t\t\tpluginName,\n\t\t\t\tpluginType.c_str(),\n\t\t\t\tpModule);\n\n\t// Check whether the Python module has been imported\n\tif (!pModule)\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\t\tLogger::getLogger()->fatal(\"NotificationPlugin PluginInterfaceInit: \"\n\t\t\t\t\t\"cannot import Python module file \"\n\t\t\t\t\t\"from '%s', plugin '%s', type '%s'\",\n\t\t\t\t\tpluginPathName,\n\t\t\t\t\tpluginName,\n\t\t\t\t\tpluginType.c_str());\n\t}\n\telse\n\t{\n\t\tstd::pair<std::map<string, PythonModule*>::iterator, bool> ret;\n\t\tPythonModule* newModule = NULL;\n\t\tif (pythonModules)\n\t\t{\n\t\t\t// Add module into pythonModules, pluginName is the key\n\t\t\tif ((newModule = new PythonModule(pModule,\n\t\t\t\t\tinitPython,\n\t\t\t\t\tstring(pluginName),\n\t\t\t\t\tpluginType,\n\t\t\t\t\tNULL)) == NULL)\n\t\t\t{\n\t\t\t\t// Release lock\n\t\t\t\tPyGILState_Release(state);\n\n\t\t\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_init(): \"\n\t\t\t\t\t\t\t\"failed to create Python module \"\n\t\t\t\t\t\t\t\"object, plugin '%s', type '%s'\",\n\t\t\t\t\t\t\tpluginName,\n\t\t\t\t\t\t\tpluginType.c_str());\n\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\n\t\t\t// Add module to the list of loaded modules\n\t\t\tret = pythonModules->insert(pair<string, PythonModule*>\n\t\t\t\t\t\t(string(pluginName), newModule));\n\t\t}\n\n\t\t// Check result\n\t\tif (!pythonModules || ret.second == false)\n\t\t{\n\t\t\tLogger::getLogger()->fatal(\"%s:%d: python module \"\n\t\t\t\t\t\t\"not added to the map \"\n\t\t\t\t\t\t\"of loaded plugins, \"\n\t\t\t\t\t\t\"pModule=%p, plugin '%s', type '%s', aborting.\",\n\t\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t\t__LINE__,\n\t\t\t\t\t\tpModule,\n\t\t\t\t\t\tpluginName,\n\t\t\t\t\t\tpluginType.c_str());\n\n\t\t\t// 
Cleanup\n\t\t\tPy_CLEAR(pModule);\n\t\t\tpModule = NULL;\n\n\t\t\tdelete newModule;\n\t\t\tnewModule = NULL;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"%s:%d: python module \"\n\t\t\t\t\t\t\"successfully loaded, \"\n\t\t\t\t\t\t\"pModule=%p, plugin '%s', type '%s'\",\n\t\t\t\t\t\t__FUNCTION__,\n\t\t\t\t\t\t__LINE__,\n\t\t\t\t\t\tpModule,\n\t\t\t\t\t\tpluginName,\n\t\t\t\t\t\tpluginType.c_str());\n\t\t}\n\t}\n\n\t// Release locks\n\tPyGILState_Release(state);\n\n\t// Return new Python module or NULL\n\treturn pModule;\n}\n\n/**\n * Returns function pointer that can be invoked to call '_sym' function\n * in python plugin\n *\n * @param    _sym       Symbol name\n * @param    name       Plugin name\n * @return              function pointer to be invoked\n */\nvoid* PluginInterfaceResolveSymbol(const char *_sym, const string& name)\n{\n\tstring sym(_sym);\n\tif (!sym.compare(\"plugin_info\"))\n\t\treturn (void *) plugin_info_fn;\n\telse if (!sym.compare(\"plugin_init\"))\n\t\treturn (void *) plugin_init_fn;\n\telse if (!sym.compare(\"plugin_shutdown\"))\n\t\treturn (void *) plugin_shutdown_fn;\n\telse if (!sym.compare(\"plugin_reconfigure\"))\n\t\treturn (void *) notification_plugin_reconfigure_fn;\n\telse if (!sym.compare(\"plugin_triggers\"))\n\t\treturn (void *) plugin_triggers_fn;\n\telse if (!sym.compare(\"plugin_eval\"))\n\t\treturn (void *) plugin_eval_fn;\n\telse if (!sym.compare(\"plugin_reason\"))\n\t\treturn (void *) plugin_reason_fn;\n\telse if (!sym.compare(\"plugin_deliver\"))\n\t\treturn (void *) plugin_deliver_fn;\n\telse\n\t{\n\t\tLogger::getLogger()->fatal(\"PluginInterfaceResolveSymbol can not find symbol '%s' \"\n\t\t\t\t\t   \"in the Notification Python plugin interface library, \"\n\t\t\t\t\t   \"loaded plugin '%s'\",\n\t\t\t\t\t   _sym,\n\t\t\t\t\t   name.c_str());\n\t\treturn NULL;\n\t}\n}\n\n/**\n * Invoke 'plugin_triggers' function in notification rule python plugin\n *\n * Returned JSON data will be used for notification data 
subscription\n * to Fledge storage service\n *\n * @param    handle\tThe plugin handle from plugin_init_fn\n * @return\t\tJSON string with array of\n *\t\t\tasset name and window data evaluation\n */\nstring plugin_triggers_fn(PLUGIN_HANDLE handle)\n{\n\tstring ret = \"{\\\"triggers\\\" : []}\";\n\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_triggers(): \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn ret;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in plugin_triggers_fn, handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn ret;\n\t}\n\n\t// Look for Python module for handle key\n\tauto it = pythonHandles->find(handle);\n\tif (it == pythonHandles->end() ||\n\t    !it->second ||\n\t    !it->second->m_module)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_triggers(): \"\n\t\t\t\t\t   \"pModule is NULL, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn ret;\n\t}\n\n\tstd::mutex mtx;\n\tlock_guard<mutex> guard(mtx);\n\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\tPyObject* pFunc;\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_triggers\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find 'plugin_triggers' method \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPyGILState_Release(state);\n\t\treturn ret;\n\t}\n\n\tif (!pFunc || !PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot call method 'plugin_triggers' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPy_CLEAR(pFunc);\n\n\t\tPyGILState_Release(state);\n\t\treturn ret;\n\t}\n\n\t// Call Python method passing an object\n\tPyObject* pReturn = PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"O\",\n\t\t\t\t\t\t  
handle);\n\n\tPy_CLEAR(pFunc);\n\n\t// Handle return\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method 'plugin_triggers': \"\n\t\t\t\t\t   \"error while getting result object, plugin '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tlogErrorMessage();\n\t}\n\telse\n\t{\n\t\t// Return C++ string\n\t\tret = string(json_dumps(pReturn));\n\t}\n\n\t// Remove objects\n\tPy_CLEAR(pReturn);\n\n\tPyGILState_Release(state);\n\n\treturn ret;\n}\n\n/**\n * Function to invoke 'plugin_reason' function in notification rule python plugin\n *\n * @param    handle\tThe plugin handle from plugin_init_fn\n * @return\t\tJSON string with trigger reason\n */\nstd::string plugin_reason_fn(PLUGIN_HANDLE handle)\n{\n\tstring ret = \"{\\\"reason\\\" : \\\"errored\\\"}\";\n\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_reason(): \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn ret;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in plugin_reason_fn, handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn ret;\n\t}\n\n\t// Look for Python module for handle key\n\tauto it = pythonHandles->find(handle);\n\tif (it == pythonHandles->end() ||\n\t    !it->second ||\n\t    !it->second->m_module)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_reason(): \"\n\t\t\t\t\t   \"pModule is NULL, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn ret;\n\t}\n\n\tstd::mutex mtx;\n\tlock_guard<mutex> guard(mtx);\n\n\tPyObject* pFunc;\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_reason\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find 'plugin_reason' method \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPyGILState_Release(state);\n\t\treturn ret;\n\t}\n\n\tif (!pFunc || 
!PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot call method 'plugin_reason' \"\n\t\t\t\t\t    \"in loaded python module '%s'\",\n\t\t\t\t\t    it->second->m_name.c_str());\n\t\tPy_CLEAR(pFunc);\n\t\tPyGILState_Release(state);\n\t\treturn ret;\n\t}\n\n\t// Call Python method passing an object\n\tPyObject* pReturn = PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"O\",\n\t\t\t\t\t\t  handle);\n\n\tPy_CLEAR(pFunc);\n\n\t// Handle return\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method 'plugin_reason': \"\n\t\t\t\t\t   \"error while getting result object, plugin '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tlogErrorMessage();\n\t}\n\telse\n\t{\n\t\t// Get Python object as C++ string\n\t\tret = string(json_dumps(pReturn));\n\t}\n\n\t// Remove objects\n\tPy_CLEAR(pReturn);\n\n\tPyGILState_Release(state);\n\n\treturn ret;\n}\n\n/**\n * Function to invoke 'plugin_eval' function in notification rule python plugin\n *\n * @param    handle\t\tThe plugin handle from plugin_init_fn\n * @param    assetValues\tJSON string with asset data to evaluate\n * @return\t\t\tTrue if rule evaluation triggers, false otherwise\n */\nbool plugin_eval_fn(PLUGIN_HANDLE handle,\n\t\t    const std::string& assetValues)\n{\n\tbool ret = false;\n\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_eval(): \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn ret;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in plugin_eval_fn, handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn ret;\n\t}\n\n\t// Look for Python module for handle key\n\tauto it = pythonHandles->find(handle);\n\tif (it == pythonHandles->end() ||\n\t    !it->second ||\n\t    !it->second->m_module)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_eval(): \"\n\t\t\t\t\t   \"pModule is NULL, plugin handle 
'%p'\",\n\t\t\t\t\t   handle);\n\t\treturn ret;\n\t}\n\n\tstd::mutex mtx;\n\tlock_guard<mutex> guard(mtx);\n\n\tPyObject* pFunc;\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_eval\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find 'plugin_eval' method \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPyGILState_Release(state);\n\t\treturn ret;\n\t}\n\n\tif (!pFunc || !PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot call method 'plugin_eval' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPy_CLEAR(pFunc);\n\t\tPyGILState_Release(state);\n\t\treturn ret;\n\t}\n\n\t// Call Python method passing an object and the data as C string\n\tPyObject *evalData = json_loads(assetValues.c_str());\n\n\tvector<PyObject*> removeObjects;\n\t// Replace content of some known string data:\n\t// DPImage\n\tsubstituteObjects(evalData, removeObjects);\n\n\t// Call plugin_eval\n\tPyObject* pReturn = PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"OO\",\n\t\t\t\t\t\t  handle,\n\t\t\t\t\t\t  evalData);\n\n\tPy_CLEAR(pFunc);\n\n\t// Handle return\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method 'plugin_eval': \"\n\t\t\t\t\t   \"error while getting result object, plugin '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tlogErrorMessage();\n\t}\n\telse\n\t{\n\t\t// check bool and return true or false\n\t\tif (PyBool_Check(pReturn))\n\t\t{\n\t\t\tret = PyObject_IsTrue(pReturn);\n\t\t}\n\t}\n\n\t// Remove objects\n\tPy_CLEAR(evalData);\n\tPy_CLEAR(pReturn);\n\n\t// Remove any allocated object in substituteObjects()\n\tfor (auto it = removeObjects.begin();\n\t\t  it != removeObjects.end();\n\t\t  
++it)\n\t{\n\t\tPy_CLEAR(*it);\n\t}\n\tremoveObjects.clear();\n\n\tPyGILState_Release(state);\n\n\treturn ret;\n}\n\n/**\n * Function to invoke 'plugin_reconfigure' function in python plugin\n *\n * @param    handle     The plugin handle from plugin_init_fn\n * @param    config     The new configuration, as string\n */\nvoid notification_plugin_reconfigure_fn(PLUGIN_HANDLE handle,\n\t\t\t\t\tconst std::string& config)\n{\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_reconfigure(): \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in plugin_reconfigure_fn, handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn;\n\t}\n\n\t// Look for Python module for handle key\n\tauto it = pythonHandles->find(handle);\n\tif (it == pythonHandles->end() ||\n\t    !it->second ||\n\t    !it->second->m_module)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_reconfigure(): \"\n\t\t\t\t\t   \"pModule is NULL, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn;\n\t}\n\n\tstd::mutex mtx;\n\tlock_guard<mutex> guard(mtx);\n\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\tLogger::getLogger()->debug(\"plugin_handle: plugin_reconfigure(): \"\n\t\t\t\t   \"pModule=%p, handle=%p, plugin '%s'\",\n\t\t\t\t   it->second->m_module,\n\t\t\t\t   handle,\n\t\t\t\t   it->second->m_name.c_str());\n\n\tPyObject* pFunc;\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_reconfigure\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find method 'plugin_reconfigure' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\n\tif (!pFunc || !PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif 
(PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot call method 'plugin_reconfigure' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPy_CLEAR(pFunc);\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\n\tLogger::getLogger()->debug(\"plugin_reconfigure with %s\", config.c_str());\n\n\t// Create Python object from string\n\tPyObject *config_dict = json_loads(config.c_str());\n\n\t// Call Python method passing the Python object\n\tPyObject* pReturn = PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"OO\",\n\t\t\t\t\t\t  handle,\n\t\t\t\t\t\t  config_dict);\n\n\tPy_CLEAR(pFunc);\n\n\t// Handle returned data\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method 'plugin_reconfigure': \"\n\t\t\t\t\t   \"error while getting result object, plugin '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tlogErrorMessage();\n\t}\n\telse\n\t{\n\t\tPyObject* tmp = (PyObject *)handle;\n\t\t// Check current handle is Dict and pReturn is a Dict too\n\t\tif (PyDict_Check(tmp) && PyDict_Check(pReturn))\n\t\t{\n\t\t\t// Clear Dict content\n\t\t\tPyDict_Clear(tmp);\n\t\t\t// Populate handle Dict with new data in pReturn\n\t\t\tPyDict_Update(tmp, pReturn);\n\t\t\t// Remove pReturn object\n\t\t\tPy_CLEAR(pReturn);\n\n\t\t\tLogger::getLogger()->debug(\"plugin_handle: plugin_reconfigure(): \"\n\t\t\t\t\t\t    \"got updated handle from python plugin=%p, plugin '%s'\",\n\t\t\t\t\t\t    handle,\n\t\t\t\t\t\t    it->second->m_name.c_str());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"plugin_handle: plugin_reconfigure(): \"\n\t\t\t\t\t\t   \"got object type '%s' instead of Python Dict, \"\n\t\t\t\t\t\t   \"python plugin=%p, plugin '%s'\",\n\t\t\t\t\t\t   Py_TYPE(pReturn)->tp_name,\n\t\t\t\t\t\t   handle,\n\t\t\t\t\t\t   it->second->m_name.c_str());\n\n\t\t\t// Remove result object\n\t\t\tPy_CLEAR(pReturn);\n\t\t}\n\t}\n\n\t// Remove config object\n\tPy_CLEAR(config_dict);\n\n\tPyGILState_Release(state);\n}\n\n/**\n * Function to invoke 'plugin_deliver' 
function in\n * notification delivery python plugin\n *\n * @param    handle\t\tThe plugin handle from plugin_init_fn\n * @param    deliveryName\tThe delivery category name\n * @param    notificationName\tThe notification name\n * @param    triggerReason\tThe trigger reason for notification\n * @param    customMessage\tThe message from notification\n * @return\t\t\tTrue if the notification has been delivered,\n *\t\t\t\tfalse otherwise\n */\nbool plugin_deliver_fn(PLUGIN_HANDLE handle,\n\t\t\tconst std::string& deliveryName,\n\t\t\tconst std::string& notificationName,\n\t\t\tconst std::string& triggerReason,\n\t\t\tconst std::string& customMessage)\n{\n\tbool ret = false;\n\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_deliver(): \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn ret;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in plugin_deliver_fn, handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn ret;\n\t}\n\n\t// Look for Python module for handle key\n\tauto it = pythonHandles->find(handle);\n\tif (it == pythonHandles->end() ||\n\t    !it->second ||\n\t    !it->second->m_module)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_deliver(): \"\n\t\t\t\t\t   \"pModule is NULL, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn ret;\n\t}\n\n\tstd::mutex mtx;\n\tlock_guard<mutex> guard(mtx);\n\n\tPyObject* pFunc;\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_deliver\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find 'plugin_deliver' method \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPyGILState_Release(state);\n\t\treturn ret;\n\t}\n\n\tif (!pFunc || !PyCallable_Check(pFunc))\n\t{\n\t\t// 
Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot call method 'plugin_deliver' \"\n\t\t\t\t\t    \"in loaded python module '%s'\",\n\t\t\t\t\t    it->second->m_name.c_str());\n\t\tPy_CLEAR(pFunc);\n\t\tPyGILState_Release(state);\n\t\treturn ret;\n\t}\n\n\t// Transform triggerReason into a Python object\n\tPyObject *reason = json_loads(triggerReason.c_str());\n\n\t// Call Python method passing the handle object, C string arguments and\n\t// triggerReason as a Python object\n\tPyObject* pReturn = PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"OssOs\",\n\t\t\t\t\t\t  handle,\n\t\t\t\t\t\t  deliveryName.c_str(),\n\t\t\t\t\t\t  notificationName.c_str(),\n\t\t\t\t\t\t  reason,\n\t\t\t\t\t\t  customMessage.c_str());\n\n\tPy_CLEAR(pFunc);\n\n\t// Handle return\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method 'plugin_deliver': \"\n\t\t\t\t\t   \"error while getting result object, plugin '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tlogErrorMessage();\n\t}\n\telse\n\t{\n\t\t// check bool and return true or false\n\t\tif (PyBool_Check(pReturn))\n\t\t{\n\t\t\tret = PyObject_IsTrue(pReturn);\n\t\t}\n\t}\n\n\t// Remove objects\n\tPy_CLEAR(reason);\n\tPy_CLEAR(pReturn);\n\n\tPyGILState_Release(state);\n\n\treturn ret;\n}\n\n/**\n * Substitute value for a second level dict in the Python object\n * if DPImage string is found\n *\n *  {\n *  \t\"TC1\" : {\n *  \t\t\"width\" : 256,\n *  \t\t\"height\" : 256,\n *  \t\t\"depth\" : 24,\n *  \t\t\"img\" : \"__DPIMAGE:2,2,24_AAAAAAAACAAACAAA\"\n *  \t},\n *  \t\"timestamp_TC1\" : 1643293555.389629\n *  \t}\n *\n *  \t\"img\" string value will be substituted by\n *  \tPyArray_SimpleNewFromData(...) 
data\n */\nbool substituteObjects(PyObject *data, vector<PyObject*> &removeObjects)\n{\n\tPyObject *dKey, *dValue;\n\tPy_ssize_t dPos = 0;\n\n\t// Fetch all Datapoints in 'reading' dict\n\t// dKey and dValue are borrowed references\n\twhile (PyDict_Next(data, &dPos, &dKey, &dValue))\n\t{\n\t\tif (PyDict_Check(dValue))\n\t\t{\n\t\t\tPyObject *iKey, *iValue;\n\t\t\tPy_ssize_t iPos = 0;\n\t\t\twhile (PyDict_Next(dValue, &iPos, &iKey, &iValue))\n\t\t\t{\n\t\t\t\tif (PyUnicode_Check(iValue))\n\t\t\t\t{\n\t\t\t\t\tstring str = PyUnicode_AsUTF8(iValue);\n\t\t\t\t\tstring key = PyUnicode_AsUTF8(iKey);\n\t\t\t\t\tif (str[0] == '_' && str[1] == '_')\n\t\t\t\t\t{\n\t\t\t\t\t\tsize_t pos = str.find_first_of(':');\n\t\t\t\t\t\tif (str.compare(2, 7, \"DPIMAGE\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tPyObject *newImage = NULL;\n\t\t\t\t\t\t\tDPImage *image = new Base64DPImage(str.substr(pos + 1));\n\n\t\t\t\t\t\t\tLogger::getLogger()->debug(\"Inner key '%s' will be \"\n\t\t\t\t\t\t\t\t\t\t\"substituted with a DPImage of %dx%d@%d\",\n\t\t\t\t\t\t\t\t\t\tkey.c_str(),\n\t\t\t\t\t\t\t\t\t\timage->getHeight(),\n\t\t\t\t\t\t\t\t\t\timage->getWidth(),\n\t\t\t\t\t\t\t\t\t\timage->getDepth());\n\n\t\t\t\t\t\t\t// Initialise Numpy array\n\t\t\t\t\t\t\timport_array();\n\n\t\t\t\t\t\t\tif (image->getDepth() == 24)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tnpy_intp dim[3];\n\t\t\t\t\t\t\t\tdim[0] = image->getHeight();\n\t\t\t\t\t\t\t\tdim[1] = image->getWidth();\n\t\t\t\t\t\t\t\tdim[2] = 3;\n\t\t\t\t\t\t\t\tenum NPY_TYPES type = NPY_UBYTE;\n\n\t\t\t\t\t\t\t\t// Create Python array wrapper around image data\n\t\t\t\t\t\t\t\tnewImage = PyArray_SimpleNewFromData(3,\n\t\t\t\t\t\t\t\t\t\t\t\tdim,\n\t\t\t\t\t\t\t\t\t\t\t\ttype,\n\t\t\t\t\t\t\t\t\t\t\t\timage->getData());\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tnpy_intp dim[2];\n\t\t\t\t\t\t\t\tdim[0] = image->getHeight();\n\t\t\t\t\t\t\t\tdim[1] = image->getWidth();\n\t\t\t\t\t\t\t\tenum NPY_TYPES type;\n\t\t\t\t\t\t\t\tbool 
createImage = true;\n\t\t\t\t\t\t\t\tswitch (image->getDepth())\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tcase 8:\n\t\t\t\t\t\t\t\t\t\ttype = NPY_UBYTE;\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t\tcase 16:\n\t\t\t\t\t\t\t\t\t\ttype = NPY_UINT16;\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t\tcase 32:\n\t\t\t\t\t\t\t\t\t\ttype = NPY_UINT32;\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t\tcase 64:\n\t\t\t\t\t\t\t\t\t\ttype = NPY_UINT64;\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t\tdefault:\n\t\t\t\t\t\t\t\t\t\tcreateImage = false;\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tif (createImage)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t// Create Python array wrapper around image data\n\t\t\t\t\t\t\t\t\tnewImage = PyArray_SimpleNewFromData(2,\n\t\t\t\t\t\t\t\t\t\t\t\t\tdim,\n\t\t\t\t\t\t\t\t\t\t\t\t\ttype,\n\t\t\t\t\t\t\t\t\t\t\t\t\timage->getData());\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (newImage)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t// Replace value\n\t\t\t\t\t\t\t\tPyDict_SetItem(dValue, iKey, newImage);\n\n\t\t\t\t\t\t\t\t// Add object to remove vector\n\t\t\t\t\t\t\t\tremoveObjects.push_back(newImage);\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t// Delete DPImage object\n\t\t\t\t\t\t\tdelete(image);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (str.compare(2, 10, \"DATABUFFER\") == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tPyObject *newImage = NULL;\n\t\t\t\t\t\t\tDataBuffer *dbuf = new Base64DataBuffer(str.substr(pos + 1));\n\t\t\t\t\t\t\tnpy_intp dim = dbuf->getItemCount();\n\t\t\t\t\t\t\tenum NPY_TYPES type;\n\t\t\t\t\t\t\tbool createImage = true;\n\t\t\t\t\t\t\tswitch (dbuf->getItemSize())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tcase 1:\n\t\t\t\t\t\t\t\t\ttype = NPY_UBYTE;\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\tcase 2:\n\t\t\t\t\t\t\t\t\ttype = NPY_UINT16;\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\tcase 4:\n\t\t\t\t\t\t\t\t\ttype = NPY_UINT32;\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\tcase 8:\n\t\t\t\t\t\t\t\t\ttype = 
NPY_UINT64;\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\tdefault:\n\t\t\t\t\t\t\t\t\tcreateImage = false;\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t// Initialise Numpy array\n\t\t\t\t\t\t\timport_array();\n\n\t\t\t\t\t\t\tif (createImage)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t// Create Python array wrapper around image data\n\t\t\t\t\t\t\t\tnewImage = PyArray_SimpleNewFromData(1, &dim, type, dbuf->getData());\n\t\t\t\t\t\t\t\tif (newImage)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t// Replace value\n\t\t\t\t\t\t\t\t\tPyDict_SetItem(dValue, iKey, newImage);\n\n\t\t\t\t\t\t\t\t\t// Add object to remove vector\n\t\t\t\t\t\t\t\t\tremoveObjects.push_back(newImage);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t// Delete DataBuffer object\n\t\t\t\t\t\t\tdelete(dbuf);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn true;\n}\n}; // End of extern C\n"
  },
  {
    "path": "C/services/south/CMakeLists.txt",
    "content": "cmake_minimum_required (VERSION 2.8.8)\nproject (South)\n\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb -DPy_DEBUG\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Wextra -Wsign-conversion\")\nset(CMAKE_CXX_FLAGS_PROFILING \"-O2 -pg\")\nset(DLLIB -ldl)\nset(UUIDLIB -luuid)\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\nset(EXEC fledge.services.south)\n\ninclude_directories(. include ../../thirdparty/Simple-Web-Server ../../thirdparty/rapidjson/include  ../common/include ../../common/include)\n\nfind_package(Threads REQUIRED)\n\nset(BOOST_COMPONENTS system thread)\n# Late 2017 TODO: remove the following checks and always use std::regex\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        set(BOOST_COMPONENTS ${BOOST_COMPONENTS} regex)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DUSE_BOOST_REGEX\")\n    endif()\nendif()\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\nif(APPLE)\n    set(OPENSSL_ROOT_DIR \"/usr/local/opt/openssl\")\nendif()\n\n# Find python3.x dev/lib package\nfind_package(PkgConfig REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 REQUIRED COMPONENTS Interpreter Development NumPy)\nendif()\n\nfile(GLOB south_src \"*.cpp\")\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS} ${Python3_NUMPY_INCLUDE_DIRS})\nendif()\n\nlink_directories(${PROJECT_BINARY_DIR}/../../lib)\n\nadd_executable(${EXEC} ${south_src} ${common_src} ${services_src})\ntarget_link_libraries(${EXEC} ${Boost_LIBRARIES})\ntarget_link_libraries(${EXEC} ${CMAKE_THREAD_LIBS_INIT})\ntarget_link_libraries(${EXEC} ${DLLIB})\ntarget_link_libraries(${EXEC} 
${UUIDLIB})\ntarget_link_libraries(${EXEC} ${COMMON_LIB})\ntarget_link_libraries(${EXEC} ${SERVICE_COMMON_LIB})\n\ninstall(TARGETS ${EXEC} RUNTIME DESTINATION fledge/services)\n\nif(MSYS) #TODO: Is MSYS true when MSVC is true?\n    target_link_libraries(${EXEC} ws2_32 wsock32)\n    if(OPENSSL_FOUND)\n        target_link_libraries(${EXEC} ws2_32 wsock32)\n    endif()\nendif()\n\n# Set profiling flags if 'Profiling' build\nif(CMAKE_BUILD_TYPE STREQUAL \"Profiling\")\n    message(\"Building in Profiling mode\")\n    set_target_properties(${EXEC} PROPERTIES COMPILE_FLAGS \"${CMAKE_CXX_FLAGS_PROFILING}\")\n    # define 'PROFILING' flag used by service to change directory\n    target_compile_definitions(${EXEC} PRIVATE PROFILING=1)\n    set(CMAKE_SHARED_LINKER_FLAGS \"${CMAKE_SHARED_LINKER_FLAGS} -O2 -pg\")\n    target_link_libraries(${EXEC} -O2 -pg)\nendif()\n\n"
  },
  {
    "path": "C/services/south/README.rst",
    "content": ".. |br| raw:: html\n\n   <br />\n\n\n*********************\nFledge South Service\n*********************\n\nThis is the south service of the Fledge platform written in C.\nThis service is responsible for gathering readings and sending\nthem to the Fledge buffer for storage.\n|br| |br|\n\n\nBuilding\n========\n\nThe South service is built using cmake. To build the South service:\n::\n  mkdir build\n  cd build\n  cmake ..\n  make\n\nThis will create the ``south`` service executable.\n\nUse the command ``make install`` to install in the default location;\nnote you will need permission on the installation directory or use\nthe sudo command. Pass the option *DESTDIR=* to set your own destination\ninto which to install the South service.\n\nBuild the plugins by going to the directory *C/plugins/south* and following\nthe instructions in each of the plugin directories.\n|br| |br|\n  \n\nPrerequisites\n=============\n\nTo build the South service the machine must have the\n*cmake* system, *make* and *g++* installed, plus the libraries for the South plugin,\ne.g. the boost libraries.\n\n\nOn Ubuntu based Linux distributions these can be installed with *apt-get*:\n::\n  apt-get install libboost-dev libboost-system-dev libboost-thread-dev\n  apt-get install cmake g++ make\n\n|br| |br|\n\n\nRunning\n=======\n\nThe South service may be run in daemon mode or interactively by use\nof the *-d* command line argument.\n\nThe South service will register with the core to allow the core to\nmonitor the South service and to allow the South service to find the\nStorage service.  It assumes the core is located on the same machine. This\ncan however be overridden by the use of the command line arguments\n*--port=* and *--address=* to set the port and address of the core\nmicroservice.\n\nThe South service will look for South plugins in the current directory\nor in the directory *$FLEDGE_ROOT/plugins/south*.\n|br| |br|\n"
  },
  {
    "path": "C/services/south/include/defaults.h",
    "content": "#ifndef _DEFAULTS_H\n#define _DEFAULTS_H\n/*\n * Fledge reading ingest.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n// The maximum value a user is allowed to set for the maxSendLatency config item, expressed in ms\n#define MAXSENDLATENCY\t600000\t// 10 minutes\n\n// The default advanced configuration items to add to the category\nstatic struct {\n\tconst char\t*name;\n\tconst char\t*displayName;\n\tconst char\t*description;\n\tconst char\t*type;\n\tconst char\t*value;\n} defaults[] = {\n\t{ \"maxSendLatency\",\t\"Maximum Reading Latency (ms)\",\n\t\t\t\"Maximum time to spend filling buffer before sending\", \"integer\", \"5000\" },\n\t{ \"bufferThreshold\",\t\"Maximum buffered Readings\",\n\t\t\t\"Number of readings to buffer before sending\", \"integer\", \"100\" },\n\t{ \"throttle\",\t\"Throttle\",\n\t\t\t\"Enable flow control by reducing the poll rate\", \"boolean\", \"false\" },\n\t{ \"readingsPerSec\",\t\"Reading Rate\",\n\t\t\t\"Number of readings to generate per interval\", \"integer\", \"1\" },\n\t{ \"assetTrackerInterval\",\t\"Asset Tracker Update\",\n\t\t\t\"Number of milliseconds between updates of the asset tracker information\",\n\t\t\t\"integer\", \"500\" },\n\t{ NULL, NULL, NULL, NULL, NULL }\n};\n#endif\n"
  },
  {
    "path": "C/services/south/include/ingest.h",
    "content": "#ifndef _INGEST_H\n#define _INGEST_H\n/*\n * Fledge reading ingest.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto, Amandeep Singh Arora\n */\n#include <storage_client.h>\n#include <reading.h>\n#include <logger.h>\n#include <vector>\n#include <queue>\n#include <thread>\n#include <chrono>\n#include <mutex>\n#include <sstream>\n#include <unordered_set>\n#include <condition_variable>\n#include <filter_plugin.h>\n#include <filter_pipeline.h>\n#include <asset_tracking.h>\n#include <service_handler.h>\n#include <set>\n#include <perfmonitors.h>\n\n#define SERVICE_NAME  \"Fledge South\"\n\n#define INGEST_SUFFIX\t\"-Ingest\"\t// Suffix for per service ingest statistic\n\n#define FLUSH_STATS_INTERVAL 5\t\t// Period between flushing of stats to storage (seconds)\n\n#define STATS_UPDATE_FAIL_THRESHOLD 10\t// After this many update fails try creating new stats\n\n#define DEPRECATED_CACHE_AGE\t600\t// Maximum allowed age of the deprecated asset cache\n\n/*\n * Constants related to flow control for async south services.\n *\n */\n#define\tAFC_SLEEP_INCREMENT\t20\t// Number of milliseconds to wait for readings to drain\n#define AFC_SLEEP_MAX\t\t200\t// Maximum sleep time in ms between tests\n#define AFC_MAX_WAIT\t\t5000\t// Maximum amount of time we wait for the queue to drain\n\nclass IngestRate;\n\n// Enum for service buffering type\nenum class ServiceBufferingType {\n\tUNLIMITED,\n\tLIMITED\n};\n\n// Enum for discard policy\nenum class DiscardPolicy {\n\tDISCARD_OLDEST,\n\tREDUCE_FIDELITY,\n\tDISCARD_NEWEST\n};\n\n#define SERVICE_BUFFER_BUFFER_TYPE_DEFAULT \tServiceBufferingType::UNLIMITED\n#define SERVICE_BUFFER_DISCARD_POLICY_DEFAULT \tDiscardPolicy::DISCARD_OLDEST\n#define SERVICE_BUFFER_SIZE_DEFAULT\t\t1000\n#define SERVICE_BUFFER_SIZE_MIN\t\t1000\n\n/**\n * The ingest class is used to ingest asset readings.\n * It maintains a queue of readings to be sent to storage,\n * 
these are sent using a background thread that regularly\n * wakes up and sends the queued readings.\n */\nclass Ingest : public ServiceHandler {\n\npublic:\n\tIngest(StorageClient& storage,\n\t\tconst std::string& serviceName,\n\t\tconst std::string& pluginName,\n\t\tManagementClient *mgmtClient);\n\t~Ingest();\n\n\tvoid\t\tingest(const Reading& reading);\n\tvoid\t\tingest(const std::vector<Reading *> *vec);\n\tvoid\t\tstart(long timeout, unsigned int threshold);\n\tbool\t\trunning();\n    \tbool\t\tisStopping();\n\tbool\t\tisRunning() { return !m_shutdown; };\n\tvoid\t\tprocessQueue();\n\tvoid\t\twaitForQueue();\n\tsize_t\t\tqueueLength();\n\tvoid\t\tupdateStats(void);\n\tint \t\tcreateStatsDbEntry(const std::string& assetName);\n\n\tbool\t\tloadFilters(const std::string& categoryName);\n\tstatic void\tpassToOnwardFilter(OUTPUT_HANDLE *outHandle,\n\t\t\t\t\t   READINGSET* readings);\n\tstatic void\tuseFilteredData(OUTPUT_HANDLE *outHandle,\n\t\t\t\t\tREADINGSET* readings);\n\n\tvoid\t\tsetTimeout(const long timeout) { m_timeout = timeout; };\n\tvoid\t\tsetThreshold(const unsigned int threshold) { m_queueSizeThreshold = threshold; };\n\tvoid\t\tconfigChange(const std::string&, const std::string&);\n\tvoid\t\tconfigChildCreate(const std::string& , const std::string&, const std::string&){};\n\tvoid\t\tconfigChildDelete(const std::string& , const std::string&){};\n\tvoid\t\tshutdown() {};\t// Satisfy ServiceHandler\n\tvoid\t\trestart() {};\t// Satisfy ServiceHandler\n\tvoid\t\tunDeprecateAssetTrackingRecord(AssetTrackingTuple* currentTuple,\n\t\t\t\t\t\t\tconst std::string& assetName,\n\t\t\t\t\t\t\tconst std::string& event);\n\tvoid\t\tunDeprecateStorageAssetTrackingRecord(StorageAssetTrackingTuple* currentTuple,\n\t\t\t\t\t\t\tconst std::string& assetName,\n\t\t\t\t\t\t\tconst std::string&,\n\t\t\t\t\t\t\tconst unsigned int&);\n\tvoid\t\tsetStatistics(const std::string& option);\n\n\tstd::string  \tgetStringFromSet(const std::set<std::string> 
&dpSet);\n\tvoid\t\tsetFlowControl(unsigned int lowWater, unsigned int highWater) { m_lowWater = lowWater; m_highWater = highWater; };\n\tvoid\t\tsetResourceLimit(ServiceBufferingType serviceBufferingType, unsigned long serviceBufferSize, DiscardPolicy discardPolicy);\n\tvoid\t\tflowControl();\n\tvoid\t\tsetPerfMon(PerformanceMonitor *mon)\n\t\t\t{\n\t\t\t\tm_performance = mon;\n\t\t\t};\n\tvoid\t\tconfigureRateMonitor(long interval, long factor);\n\n\t// Debugger entry points\n\tbool\t\tattachDebugger()\n\t\t\t{\n\t\t\t\tif (m_filterPipeline)\n\t\t\t\t{\n\t\t\t\t\tm_debuggerAttached = true;\n\t\t\t\t\treturn m_filterPipeline->attachDebugger();\n\t\t\t\t}\n\t\t\t\treturn false;\n\t\t\t};\n\tvoid\t\tdetachDebugger()\n\t\t\t{\n\t\t\t\tif (m_filterPipeline)\n\t\t\t\t{\n\t\t\t\t\tm_debuggerAttached = false;\n\t\t\t\t\tm_debuggerBufferSize = 1;\n\t\t\t\t\tm_filterPipeline->detachDebugger();\n\t\t\t\t}\n\t\t\t};\n\tvoid\t\tsetDebuggerBuffer(unsigned int size)\n\t\t\t{\n\t\t\t\tif (m_filterPipeline)\n\t\t\t\t{\n\t\t\t\t\tm_debuggerBufferSize = size;\n\t\t\t\t\tm_filterPipeline->setDebuggerBuffer(size);\n\t\t\t\t}\n\t\t\t};\n\tstd::string\tgetDebuggerBuffer()\n\t\t\t{\n\t\t\t\tstd::string rval;\n\t\t\t\tif (m_filterPipeline)\n\t\t\t\t\trval = m_filterPipeline->getDebuggerBuffer();\n\t\t\t\treturn rval;\n\t\t\t};\n\tvoid\t\tisolate(bool isolate)\n\t\t\t{\n\t\t\t\tstd::lock_guard<std::mutex> guard(m_isolateMutex);\n\t\t\t\tm_isolate = isolate;\n\t\t\t};\n\tbool\t\tisolated()\n\t\t\t{\n\t\t\t\tstd::lock_guard<std::mutex> guard(m_isolateMutex);\n\t\t\t\treturn m_isolate;\n\t\t\t};\n\tbool\t\treplayDebugger()\n\t\t\t{\n\t\t\t\tif (m_filterPipeline)\n\t\t\t\t{\n\t\t\t\t\treturn m_filterPipeline->replayDebugger();\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t};\n\nprivate:\n\tvoid\t\t\t\tsignalStatsUpdate() {\n\t\t\t\t\t\t// Signal stats thread to update stats\n\t\t\t\t\t\tstd::lock_guard<std::mutex> 
guard(m_statsMutex);\n\t\t\t\t\t\tm_statsCv.notify_all();\n\t\t\t\t\t};\n\tvoid\t\t\t\tlogDiscardedStat() {\n\t\t\t\t\t\tstd::lock_guard<std::mutex> guard(m_statsMutex);\n\t\t\t\t\t\tm_discardedReadings++;\n\t\t\t\t\t};\n\tlong\t\t\t\tcalculateWaitTime();\n\tint \t\t\t\tcreateServiceStatsDbEntry();\n\tvoid\t\t\t\tdiscardOldest();\n\tvoid\t\t\t\tdiscardNewest();\n\tvoid\t\t\t\treduceFidelity();\n\tvoid\t\t\t\tenforceResourceLimits();\n\n\tStorageClient&\t\t\tm_storage;\n\tlong\t\t\t\tm_timeout;\n\tbool\t\t\t\tm_shutdown;\n\tunsigned int\t\t\tm_queueSizeThreshold;\n\tbool\t\t\t\tm_running;\n\tstd::string \t\t\tm_serviceName;\n\tstd::string \t\t\tm_pluginName;\n\tManagementClient\t\t*m_mgtClient;\n\t// New data: queued\n\tstd::vector<Reading *>*\t\tm_queue;\n\tstd::mutex\t\t\tm_qMutex;\n\tstd::mutex\t\t\tm_statsMutex;\n\tstd::mutex\t\t\tm_pipelineMutex;\n\tstd::thread*\t\t\tm_thread;\n\tstd::thread*\t\t\tm_statsThread;\n\tLogger*\t\t\t\tm_logger;\n\tstd::condition_variable\t\tm_cv;\n\tstd::condition_variable\t\tm_statsCv;\n\t// Data ready to be filtered/sent\n\tstd::vector<Reading *>*\t\tm_data;\n\tstd::vector<std::vector<Reading *>*>\n\t\t\t\t\tm_resendQueues;\n\tstd::queue<std::vector<Reading *>*>\n\t\t\t\t\tm_fullQueues;\n\tstd::mutex\t\t\tm_fqMutex;\n\tunsigned int\t\t\tm_discardedReadings; // discarded readings since last update to statistics table\n\tFilterPipeline*\t\t\tm_filterPipeline;\n\t\n\tstd::unordered_set<std::string> statsDbEntriesCache;  // confirmed stats table entries\n\tstd::map<std::string, int>\tstatsPendingEntries;  // pending stats table entries\n\tbool\t\t\t\tm_highLatency;\t      // Flag to indicate we are exceeding latency request\n\tbool\t\t\t\tm_10Latency;\t      // Latency within 10%\n\ttime_t\t\t\t\tm_reportedLatencyTime;// Last time we reported high latency\n\tint\t\t\t\tm_failCnt;\n\tbool\t\t\t\tm_storageFailed;\n\tint\t\t\t\tm_storesFailed;\n\tint\t\t\t\tm_statsUpdateFails;\n\tenum { STATS_BOTH, STATS_ASSET, STATS_SERVICE 
}\n\t\t\t\t\tm_statisticsOption;\n\tunsigned int\t\t\tm_highWater;\n\tunsigned int\t\t\tm_lowWater;\n\tAssetTrackingTable\t\t*m_deprecated;\n\ttime_t\t\t\t\tm_deprecatedAgeOut;\n\ttime_t\t\t\t\tm_deprecatedAgeOutStorage;\n\tPerformanceMonitor\t\t*m_performance;\n\tstd::mutex\t\t\tm_useDataMutex;\n\tIngestRate\t\t\t*m_ingestRate;\n\tstd::mutex\t\t\tm_isolateMutex;\n\tbool\t\t\t\tm_isolate;\n\tbool\t\t\t\tm_debuggerAttached;\n\tunsigned int \t\t\tm_debuggerBufferSize;\n\tstd::atomic<ServiceBufferingType>\t\t\tm_serviceBufferingType;\n\tstd::atomic<unsigned int>\t\t\tm_serviceBufferSize;\n\tstd::atomic<DiscardPolicy>\t\t\tm_discardPolicy;\n\tbool m_resourceGovernorActive{false}; // Tracks if the resource governor is active\n\ttime_t m_lastFidelityReductionTimestamp{0}; // Used for \"Reduce Fidelity\"\n};\n\n#endif\n"
  },
  {
    "path": "C/services/south/include/ingest_rate.h",
    "content": "#ifndef _INGEST_RATE_H\n#define _INGEST_RATE_H\n/*\n * Fledge reading ingest rate.\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <logger.h>\n#include <vector>\n#include <queue>\n#include <chrono>\n#include <mutex>\n#include <condition_variable>\n#include <filter_plugin.h>\n#include <filter_pipeline.h>\n#include <asset_tracking.h>\n#include <service_handler.h>\n#include <set>\n#include <ingest.h>\n\n#define IGRSAMPLES\t10\t// The number of samples used to calculate initial average\n\n/**\n * A class used to track and report on the ingest rates of a data stream\n *\n * It collects the number of readings ingested in a configurable period, the\n * period being defined in minutes. It then calculates the mean of the collection\n * rates and the standard deviation. If the current collection rate differs from\n * the averaged collection rate by more than a configured number of standard \n * deviations an alert is raised. 
A \"priming\" mechanism is used to require two\n * consecutive deviations from the norm before an alert is triggered.\n * This reduces the occurrence of false positives caused by transient spikes in collection.\n * These spikes may occur when heavy operations are performed on the Fledge instance.\n *\n * When the rate returns to within the number of defined standard deviations of\n * the average then the alert is cleared.\n *\n * Before alerting is enabled a number of the defined time periods, IGRSAMPLES,\n * must have passed; during this time an average is calculated and used for the\n * initial comparison.\n *\n * If the user reconfigures the collection rate of the plugin then the accumulated\n * average is discarded and the process starts again by collecting a new average.\n */\nclass IngestRate {\n\tpublic:\n\t\tIngestRate(ManagementClient *mgmtClient, const std::string& service);\n\t\t~IngestRate();\n\t\tvoid\t\tingest(unsigned int numberReadings);\n\t\tvoid\t\tperiodic();\n\t\tvoid\t\tupdateConfig(int interval, int factor);\n\t\tvoid\t\trelearn();\n\tprivate:\n\t\tvoid\t\tupdateCounters();\n\tprivate:\n\t\tManagementClient\t*m_mgmtClient;\n\t\tstd::string\t\tm_service;\n\t\tint\t\t\tm_currentInterval;\n\t\tint\t\t\tm_numIntervals;\n\t\tunsigned long\t\tm_thisInterval;\n\t\tdouble\t\t\tm_mean;\n\t\tdouble\t\t\tm_dsq;\n\t\tunsigned long\t\tm_count;\n\t\tdouble\t\t\tm_factor;\n\t\tstd::mutex\t\tm_mutex;\n\t\tbool\t\t\tm_alerted;\n\t\tbool\t\t\tm_primed;\n};\n#endif\n"
  },
  {
    "path": "C/services/south/include/south_api.h",
    "content": "#ifndef _SOUTH_API_H\n#define _SOUTH_API_H\n/*\n * Fledge south service.\n *\n * Copyright (c) 2021  Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <logger.h>\n#include <server_http.hpp>\n\n#define SETPOINT\t\"^/fledge/south/setpoint$\"\n#define OPERATION\t\"^/fledge/south/operation$\"\n\n// Debugger URLs\n#define DEBUG_ATTACH\t\t\"^/fledge/south/debug/attach$\"\n#define DEBUG_DETACH\t\t\"^/fledge/south/debug/detach$\"\n#define DEBUG_BUFFER\t\t\"^/fledge/south/debug/buffer$\"\n#define DEBUG_ISOLATE\t\t\"^/fledge/south/debug/isolate$\"\n#define DEBUG_SUSPEND\t\t\"^/fledge/south/debug/suspend$\"\n#define DEBUG_STEP\t\t\"^/fledge/south/debug/step$\"\n#define DEBUG_REPLAY\t\t\"^/fledge/south/debug/replay$\"\n#define DEBUG_STATE\t\t\"^/fledge/south/debug/state$\"\n\nclass SouthService;\n\ntypedef std::shared_ptr<SimpleWeb::Server<SimpleWeb::HTTP>::Response> Response;\ntypedef std::shared_ptr<SimpleWeb::Server<SimpleWeb::HTTP>::Request> Request;\n\nclass SouthApi {\n\tpublic:\n\t\tSouthApi(SouthService *);\n\t\t~SouthApi();\n\t\tunsigned short\t\tgetListenerPort();\n\t\tvoid\t\t\tsetPoint(std::shared_ptr<SimpleWeb::Server<SimpleWeb::HTTP>::Response> response,\n\t\t\t\t\t\t\tstd::shared_ptr<SimpleWeb::Server<SimpleWeb::HTTP>::Request> request);\n\t\tvoid\t\t\toperation(std::shared_ptr<SimpleWeb::Server<SimpleWeb::HTTP>::Response> response,\n\t\t\t\t\t\t\tstd::shared_ptr<SimpleWeb::Server<SimpleWeb::HTTP>::Request> request);\n\t\tvoid\t\t\tstartServer();\n\n\t\t// Debugger entry points\n\t\tvoid\t\t\tattachDebugger(Response response, Request request);\n\t\tvoid\t\t\tdetachDebugger(Response response, Request request);\n\t\tvoid\t\t\tsetDebuggerBuffer(Response response, Request request);\n\t\tvoid\t\t\tgetDebuggerBuffer(Response response, Request request);\n\t\tvoid\t\t\tisolateDebugger(Response response, Request request);\n\t\tvoid\t\t\tsuspendDebugger(Response response, 
Request request);\n\t\tvoid\t\t\tstepDebugger(Response response, Request request);\n\t\tvoid\t\t\treplayDebugger(Response response, Request request);\n\t\tvoid\t\t\tstateDebugger(Response response, Request request);\n\n\tprivate:\n\t\tSimpleWeb::Server<SimpleWeb::HTTP>\n\t\t\t\t\t*m_server;\n\t\tSouthService\t\t*m_service;\n\t\tstd::thread\t\t*m_thread;\n\t\tLogger\t\t\t*m_logger;\n};\n\n#endif\n"
  },
  {
    "path": "C/services/south/include/south_plugin.h",
    "content": "#ifndef _SOUTH_PLUGIN\n#define _SOUTH_PLUGIN\n/*\n * Fledge south service.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <plugin.h>\n#include <plugin_manager.h>\n#include <config_category.h>\n#include <string>\n#include <reading_set.h>\n\ntypedef void (*INGEST_CB)(void *, Reading);\ntypedef void (*INGEST_CB2)(void *, std::vector<Reading *>*);\n\n/**\n * Class that represents a south plugin.\n *\n * The purpose of this class is to hide the use of the pointers into the\n * dynamically loaded plugin and wrap the interface into a class that\n * can be used directly in the south subsystem.\n *\n * This is achieved by having a set of private member variables which are\n * the pointers to the functions in the plugin, and a set of public methods\n * that will call these functions via the function pointers.\n */\nclass SouthPlugin : public Plugin {\n\npublic:\n\tSouthPlugin(PLUGIN_HANDLE handle, const ConfigCategory& category);\n\t~SouthPlugin();\n\n\tReading\t\tpoll();\n\tReadingSet*\tpollV2();\n\tvoid\t\tstart();\n\tvoid\t\treconfigure(const std::string&);\n\tvoid\t\tshutdown();\n\tvoid\t\tregisterIngest(INGEST_CB, void *);\n\tvoid\t\tregisterIngestV2(INGEST_CB2, void *);\n\tbool\t\tisAsync() { return info->options & SP_ASYNC; };\n\tbool\t\thasControl() { return info->options & SP_CONTROL; };\n\tbool\t\tpersistData() { return info->options & SP_PERSIST_DATA; };\n\tvoid\t\tstartData(const std::string& pluginData);\n\tstd::string\tshutdownSaveData();\n\tbool\t\twrite(const std::string& name, const std::string& value);\n\tbool\t\toperation(const std::string& name, std::vector<PLUGIN_PARAMETER *>& );\nprivate:\n\tPLUGIN_HANDLE\tinstance;\n\tbool\t\tm_started; // Plugin started indicator, for async plugins\n\tvoid\t\t(*pluginStartPtr)(PLUGIN_HANDLE);\n\tReading\t\t(*pluginPollPtr)(PLUGIN_HANDLE);\n\tstd::vector<Reading*>* 
(*pluginPollPtrV2)(PLUGIN_HANDLE);\n\tvoid\t\t(*pluginReconfigurePtr)(PLUGIN_HANDLE*,\n\t\t\t\t\t        const std::string& newConfig);\n\tvoid\t\t(*pluginShutdownPtr)(PLUGIN_HANDLE);\n\tvoid\t\t(*pluginRegisterPtr)(PLUGIN_HANDLE, INGEST_CB, void *);\n\tvoid\t\t(*pluginRegisterPtrV2)(PLUGIN_HANDLE, INGEST_CB2, void *);\n\tstd::string\t(*pluginShutdownDataPtr)(const PLUGIN_HANDLE);\n\tvoid\t\t(*pluginStartDataPtr)(PLUGIN_HANDLE,\n\t\t\t\t\t      const std::string& pluginData);\n\tbool\t\t(*pluginWritePtr)(PLUGIN_HANDLE, const std::string& name, const std::string& value);\n\tbool\t\t(*pluginOperationPtr)(const PLUGIN_HANDLE, const std::string& name, int count,\n\t\t\t\t\t\tPLUGIN_PARAMETER  **parameters);\n};\n\n#endif\n"
  },
  {
    "path": "C/services/south/include/south_service.h",
    "content": "#ifndef _SOUTH_SERVICE_H\n#define _SOUTH_SERVICE_H\n/*\n * Fledge south service.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n\n#include <logger.h>\n#include <south_plugin.h>\n#include <service_handler.h>\n#include <config_category.h>\n#include <ingest.h>\n#include <filter_plugin.h>\n#include <plugin_data.h>\n#include <audit_logger.h>\n#include <perfmonitors.h>\n\n#define MAX_SLEEP\t5\t\t// Maximum number of seconds the service will sleep during a poll cycle\n\n#define SERVICE_NAME  \"Fledge South\"\n\n/*\n * Control the throttling of poll based plugins\n *\n * If the ingest queue grows then we reduce the poll rate, i.e. increase\n * the interval between poll calls. If the ingest queue then drops below\n * the threshold set in the advanced configuration we then bring the poll\n * rate back up.\n */\n#define SOUTH_THROTTLE_HIGH_PERCENT\t50\t// Percentage above buffer threshold where we throttle down\n#define SOUTH_THROTTLE_LOW_PERCENT\t10\t// Percentage above buffer threshold where we throttle up\n#define SOUTH_THROTTLE_PERCENT\t\t10\t// The percentage we throttle poll by\n#define SOUTH_THROTTLE_DOWN_INTERVAL\t10\t// Interval between throttle down attempts\n#define SOUTH_THROTTLE_UP_INTERVAL\t15\t// Interval between throttle up attempts\n\n\n/**\n * State bits for the south pipeline debugger\n */\n#define DEBUG_ATTACHED\t\t0x01\n#define DEBUG_SUSPENDED\t\t0x02\n#define DEBUG_ISOLATED\t\t0x04\n\n\nclass SouthServiceProvider;\n\n/**\n * The SouthService class. 
This class is the core\n * of the service that provides south side services\n * to Fledge.\n */\nclass SouthService : public ServiceAuthHandler {\n\tpublic:\n\t\tSouthService(const std::string& name,\n\t\t\tconst std::string& token = \"\");\n\t\tvirtual\t\t\t\t~SouthService();\n\t\tvoid \t\t\t\tstart(std::string& coreAddress,\n\t\t\t\t\t\t      unsigned short corePort);\n\t\tvoid \t\t\t\tstop();\n\t\tvoid\t\t\t\tshutdown();\n\t\tvoid\t\t\t\trestart();\n\t\tvoid\t\t\t\tconfigChange(const std::string&, const std::string&);\n\t\tvoid\t\t\t\tprocessConfigChange(const std::string&, const std::string&);\n\t\tvoid\t\t\t\tconfigChildCreate(const std::string&,\n\t\t\t\t\t\t\t\tconst std::string&,\n\t\t\t\t\t\t\t\tconst std::string&){};\n\t\tvoid\t\t\t\tconfigChildDelete(const std::string&,\n\t\t\t\t\t\t\t\tconst std::string&){};\n\t\tbool\t\t\t\tisRunning() { return !m_shutdown; };\n\t\tbool\t\t\t\tsetPoint(const std::string& name, const std::string& value);\n\t\tbool\t\t\t\toperation(const std::string& name, std::vector<PLUGIN_PARAMETER *>& );\n\t\tvoid\t\t\t\tsetDryRun() { m_dryRun = true; };\n\t\tvoid\t\t\t\thandlePendingReconf();\n\t\t// Debugger Entry point\n\t\tbool\t\t\t\tattachDebugger()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_ingest)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tm_debugState = DEBUG_ATTACHED;\n\t\t\t\t\t\t\t\treturn m_ingest->attachDebugger();\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t};\n\t\tvoid\t\t\t\tdetachDebugger()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_ingest)\n\t\t\t\t\t\t\t\tm_ingest->detachDebugger();\n\t\t\t\t\t\t\tsuspendDebugger(false);\n\t\t\t\t\t\t\tisolateDebugger(false);\n\t\t\t\t\t\t\tm_debugState = 0;\n\t\t\t\t\t\t};\n\t\tvoid\t\t\t\tsetDebuggerBuffer(unsigned int size)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_ingest)\n\t\t\t\t\t\t\t\tm_ingest->setDebuggerBuffer(size);\n\t\t\t\t\t\t};\n\t\tstd::string\t\t\tgetDebuggerBuffer()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_ingest)\n\t\t\t\t\t\t\t\treturn 
m_ingest->getDebuggerBuffer();\n\t\t\t\t\t\t\treturn \"\";\n\t\t\t\t\t\t};\n\t\tvoid\t\t\t\tsuspendDebugger(bool suspend)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsuspendIngest(suspend);\n\t\t\t\t\t\t\tif (suspend)\n\t\t\t\t\t\t\t\tm_debugState |= DEBUG_SUSPENDED;\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\tm_debugState &= ~(unsigned int)DEBUG_SUSPENDED;\n\t\t\t\t\t\t};\n\t\tvoid\t\t\t\tisolateDebugger(bool isolate)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_ingest)\n\t\t\t\t\t\t\t\tm_ingest->isolate(isolate);\n\t\t\t\t\t\t\tif (isolate)\n\t\t\t\t\t\t\t\tm_debugState |= DEBUG_ISOLATED;\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\tm_debugState &= ~(unsigned int)DEBUG_ISOLATED;\n\t\t\t\t\t\t};\n\t\tvoid\t\t\t\tstepDebugger(unsigned int steps)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstd::lock_guard<std::mutex> guard(m_suspendMutex);\n\t\t\t\t\t\t\tm_steps = steps;\n\t\t\t\t\t\t}\n\t\tbool\t\t\t\treplayDebugger()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (m_ingest)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\treturn m_ingest->replayDebugger();\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t};\n\t\tstd::string\t\t\tdebugState();\n\t\tbool\t\t\t\tdebuggerAttached()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treturn m_debugState & DEBUG_ATTACHED;\n\t\t\t\t\t\t};\n\t\t// Global controls\n\t\tbool\t\t\t\tallowControl()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treturn m_controlEnabled;\n\t\t\t\t\t\t};\n\t\tbool\t\t\t\tallowDebugger()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\treturn m_debuggerEnabled;\n\t\t\t\t\t\t};\n\t\t\n\tprivate:\n\t\tvoid\t\t\t\taddConfigDefaults(DefaultConfigCategory& defaults);\n\t\tbool \t\t\t\tloadPlugin();\n\t\tint \t\t\t\tcreateTimerFd(struct timeval rate);\n\t\tvoid \t\t\t\tcreateConfigCategories(DefaultConfigCategory configCategory,\n\t\t\t\t\t\t\t\t\tstd::string parent_name,\n\t\t\t\t\t\t\t\t\tstd::string current_name);\n\t\tvoid\t\t\t\tthrottlePoll();\n\t\tvoid\t\t\t\tprocessNumberList(const ConfigCategory& category, const std::string& item, std::vector<unsigned long>& 
list);\n\t\tvoid\t\t\t\tcalculateTimerRate();\n\t\tbool\t\t\t\tsyncToNextPoll();\n\t\tbool\t\t\t\tonDemandPoll();\n\t\tvoid\t\t\t\tcheckPendingReconfigure();\n\t\tvoid\t\t\t\tupdateFeatures(const ConfigCategory& category);\n\t\tvoid\t\t\t\tsuspendIngest(bool suspend)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstd::lock_guard<std::mutex> guard(m_suspendMutex);\n\t\t\t\t\t\t\tm_suspendIngest = suspend;\n\t\t\t\t\t\t\tm_steps = 0;\n\t\t\t\t\t\t};\n\t\tbool\t\t\t\tisSuspended()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstd::lock_guard<std::mutex> guard(m_suspendMutex);\n\t\t\t\t\t\t\treturn m_suspendIngest;\n\t\t\t\t\t\t};\n\t\tbool\t\t\t\twillStep()\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstd::lock_guard<std::mutex> guard(m_suspendMutex);\n\t\t\t\t\t\t\tif (m_suspendIngest && m_steps > 0)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tm_steps--;\n\t\t\t\t\t\t\t\treturn true;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\treturn false;\n\t\t\t\t\t\t};\n\t\tvoid \t\t\t\tgetResourceLimit();\n\tprivate:\n\t\tstd::thread\t\t\t*m_reconfThread;\n\t\tstd::deque<std::pair<std::string,std::string>>\tm_pendingNewConfig;\n\t\tstd::mutex\t\t\tm_pendingNewConfigMutex;\n\t\tstd::condition_variable\t\tm_cvNewReconf;\n\t\n\t\tSouthPlugin\t\t\t*southPlugin;\n\t\tLogger        \t\t\t*logger;\n\t\tAssetTracker\t\t\t*m_assetTracker;\n\t\tbool\t\t\t\tm_shutdown;\n\t\tConfigCategory\t\t\tm_config;\n\t\tConfigCategory\t\t\tm_configAdvanced;\n\t\tConfigCategory\t\t\tm_configResourceLimit;\n\t\tunsigned long\t\t\tm_readingsPerSec;\t// May not be per second, new rate defines time units\n\t\tunsigned int\t\t\tm_threshold;\n\t\tunsigned long\t\t\tm_timeout;\n\t\tIngest\t\t\t\t*m_ingest;\n\t\tbool\t\t\t\tm_throttle;\n\t\tbool\t\t\t\tm_throttled;\n\t\tunsigned int\t\t\tm_highWater;\n\t\tunsigned int\t\t\tm_lowWater;\n\t\tstruct timeval\t\t\tm_lastThrottle;\n\t\tstruct timeval\t\t\tm_desiredRate;\n\t\tstruct timeval\t\t\tm_currentRate;\n\t\tint\t\t\t\tm_timerfd;\n\t\tconst std::string\t\tm_token;\n\t\tunsigned 
int\t\t\tm_repeatCnt;\n\t\tPluginData\t\t\t*m_pluginData;\n\t\tstd::string\t\t\tm_dataKey;\n\t\tbool\t\t\t\tm_dryRun;\n\t\tbool\t\t\t\tm_requestRestart;\n\t\tstd::string\t\t\tm_rateUnits;\n\t\tenum { POLL_INTERVAL, POLL_FIXED, POLL_ON_DEMAND }\n\t\t\t\t\t\tm_pollType;\n\t\tstd::vector<unsigned long>\tm_hours;\n\t\tstd::vector<unsigned long>\tm_minutes;\n\t\tstd::vector<unsigned long>\tm_seconds;\n\t\tstd::string\t\t\tm_hoursStr;\n\t\tstd::string\t\t\tm_minutesStr;\n\t\tstd::string\t\t\tm_secondsStr;\n\t\tstd::condition_variable\t\tm_pollCV;\n\t\tstd::mutex\t\t\tm_pollMutex;\n\t\tbool\t\t\t\tm_doPoll;\n\t\tAuditLogger\t\t\t*m_auditLogger;\n\t\tPerformanceMonitor\t\t*m_perfMonitor;\n\t\tbool\t\t\t\tm_suspendIngest;\n\t\tunsigned int\t\t\tm_steps;\n\t\tstd::mutex\t\t\tm_suspendMutex;\n\t\tunsigned int\t\t\tm_debugState;\n\t\tSouthServiceProvider\t\t*m_provider;\n\t\tbool\t\t\t\tm_controlEnabled;\n\t\tbool\t\t\t\tm_debuggerEnabled;\n\t\tServiceBufferingType\t\tm_serviceBufferingType;\n\t\tunsigned int\t\t\tm_serviceBufferSize;\n\t\tDiscardPolicy\t\t\tm_discardPolicy;\n};\n\n/**\n *\n * A data provider class to return data in the south service ping response\n */\nclass SouthServiceProvider : public JSONProvider {\n\tpublic:\n\t\tSouthServiceProvider(SouthService *south) : m_south(south) {};\n\t\tvirtual ~SouthServiceProvider() {};\n\t\tvoid \tasJSON(std::string &json) const\n\t\t\t{\n\t\t\t\tif (m_south)\n\t\t\t\t{\n\t\t\t\t\tjson = \"\\\"debug\\\" : \" + m_south->debugState();\n\t\t\t\t}\n\t\t\t};\n\tprivate:\n\t\tSouthService\t*m_south;\n};\n#endif\n"
  },
  {
    "path": "C/services/south/ingest.cpp",
"content": "/*\n * Fledge readings ingest.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto, Amandeep Singh Arora\n */\n#include <ingest.h>\n#include <reading.h>\n#include <config_handler.h>\n#include <thread>\n#include <logger.h>\n#include <set>\n#include \"string_utils.h\"\n#include <ingest_rate.h>\n\nusing namespace std;\n\n/**\n * Thread to process the ingest queue and send the data\n * to the storage layer.\n */\nstatic void ingestThread(Ingest *ingest)\n{\n\twhile (! ingest->isStopping())\n\t{\n\t\tif (ingest->running())\n\t\t{\n\t\t\tingest->waitForQueue();\n\t\t\tingest->processQueue();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(100));\n\t\t}\n\t}\n}\n\n/**\n * Thread to update statistics table in DB\n */\nstatic void statsThread(Ingest *ingest)\n{\n\twhile (ingest->running())\n\t{\n\t\tingest->updateStats();\n\t}\n}\n\n/**\n * Create a row for the given assetName in the statistics DB table, if not present already\n * The key checked/created in the table is \"<assetName>\"\n * \n * @param assetName     Asset name for the plugin that is sending readings\n * @return int\t\tReturn -1 on error, 0 if not required or 1 if the entry exists\n */\nint Ingest::createStatsDbEntry(const string& assetName)\n{\n\tif (m_statisticsOption == STATS_SERVICE)\n\t{\n\t\t// No asset stats required\n\t\treturn 0;\n\t}\n\t// Prepare fledge.statistics update\n\tstring statistics_key = assetName;\n\tfor (auto & c: statistics_key) c = toupper(c);\n\t\n\t// SELECT * FROM fledge.statistics WHERE key = statistics_key\n\tconst Condition conditionKey(Equals);\n\tWhere *wKey = new Where(\"key\", conditionKey, statistics_key);\n\tQuery qKey(wKey);\n\n\tResultSet* result = 0;\n\ttry\n\t{\n\t\t// Query via storage client\n\t\tresult = m_storage.queryTable(\"statistics\", qKey);\n\n\t\tif (!result->rowCount())\n\t\t{\n\t\t\t// Prepare insert values for 
insertTable\n\t\t\tInsertValues newStatsEntry;\n\t\t\tnewStatsEntry.push_back(InsertValue(\"key\", statistics_key));\n\t\t\tnewStatsEntry.push_back(InsertValue(\"description\", string(\"Readings received from asset \")+assetName));\n\t\t\t// Set \"value\" field for insert using the JSON document object\n\t\t\tnewStatsEntry.push_back(InsertValue(\"value\", 0));\n\t\t\tnewStatsEntry.push_back(InsertValue(\"previous_value\", 0));\n\n\t\t\t// Do the insert\n\t\t\tif (!m_storage.insertTable(\"statistics\", newStatsEntry))\n\t\t\t{\n\t\t\t\tm_logger->error(\"%s:%d : Insert new row into statistics table failed, newStatsEntry='%s'\", __FUNCTION__, __LINE__, newStatsEntry.toJSON().c_str());\n\t\t\t\tdelete result;\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t\tdelete result;\n\t}\n\tcatch (...)\n\t{\n\t\tm_logger->error(\"%s:%d : Unable to create new row in statistics table with key='%s'\", __FUNCTION__, __LINE__, statistics_key.c_str());\n\t\treturn -1;\n\t}\n\treturn 1;\n}\n\n/**\n * Create a row for service in the statistics DB table, if not present already\n * \n */\nint Ingest::createServiceStatsDbEntry()\n{\n\tif (m_statisticsOption == STATS_ASSET)\n\t{\n\t\t// No service stats required\n\t\treturn 0;\n\t}\n\t// SELECT * FROM fledge.configuration WHERE key = categoryName\n\tconst Condition conditionKey(Equals);\n\tWhere *wKey = new Where(\"key\", conditionKey, m_serviceName + INGEST_SUFFIX);\n\tQuery qKey(wKey);\n\n\tResultSet* result = 0;\n\ttry\n\t{\n\t\t// Query via storage client\n\t\tresult = m_storage.queryTable(\"statistics\", qKey);\n\n\t\tif (!result->rowCount())\n\t\t{\n\t\t\t// Prepare insert values for insertTable\n\t\t\tInsertValues newStatsEntry;\n\t\t\tnewStatsEntry.push_back(InsertValue(\"key\", m_serviceName + INGEST_SUFFIX));\n\t\t\tnewStatsEntry.push_back(InsertValue(\"description\", string(\"Readings received from service \")+ m_serviceName));\n\t\t\t// Set \"value\" field for insert using the JSON document 
object\n\t\t\tnewStatsEntry.push_back(InsertValue(\"value\", 0));\n\t\t\tnewStatsEntry.push_back(InsertValue(\"previous_value\", 0));\n\n\t\t\t// Do the insert\n\t\t\tif (!m_storage.insertTable(\"statistics\", newStatsEntry))\n\t\t\t{\n\t\t\t\tm_logger->error(\"%s:%d : Insert new row into statistics table failed, newStatsEntry='%s'\", __FUNCTION__, __LINE__, newStatsEntry.toJSON().c_str());\n\t\t\t\tdelete result;\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t\tdelete result;\n\t}\n\tcatch (...)\n\t{\n\t\tm_logger->error(\"%s:%d : Unable to create new row in statistics table with key='%s'\", __FUNCTION__, __LINE__, m_serviceName.c_str());\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n/**\n * Update statistics for this south service. Successfully processed \n * readings are reflected against plugin asset name and READINGS keys.\n * Discarded readings stats are updated against DISCARDED key.\n */\nvoid Ingest::updateStats()\n{\n\tunique_lock<mutex> lck(m_statsMutex);\n\tif (m_running) // don't wait on condition variable if plugin/ingest is being shutdown\n\t\tm_statsCv.wait_for(lck, std::chrono::seconds(FLUSH_STATS_INTERVAL));\n\n\tif (m_ingestRate)\n\t\tm_ingestRate->periodic();\n\n\tif (statsPendingEntries.empty())\n\t{\n\t\treturn;\n\t}\n\n\tint readings=0;\n\tvector<pair<ExpressionValues *, Where *>> statsUpdates;\n\tstring key;\n\tconst Condition conditionStat(Equals);\n\t\n\tfor (auto it = statsPendingEntries.begin(); it != statsPendingEntries.end(); ++it)\n\t{\n\t\tif (statsDbEntriesCache.find(it->first) == statsDbEntriesCache.end())\n\t\t{\n\t\t\tif (createStatsDbEntry(it->first) > 0)\n\t\t\t{\n\t\t\t\tstatsDbEntriesCache.insert(it->first);\n\t\t\t}\n\t\t}\n\t\t\n\t\tif (it->second)\n\t\t{\n\t\t\tif (m_statisticsOption == STATS_BOTH || m_statisticsOption == STATS_ASSET)\n\t\t\t{\n\t\t\t\t// Prepare fledge.statistics update\n\t\t\t\tkey = it->first;\n\t\t\t\tfor (auto & c: key) c = toupper(c);\n\n\t\t\t\t// Prepare \"WHERE key = name\n\t\t\t\tWhere *wPluginStat = new 
Where(\"key\", conditionStat, key);\n\n\t\t\t\t// Prepare value = value + inc\n\t\t\t\tExpressionValues *updateValue = new ExpressionValues;\n\t\t\t\tupdateValue->push_back(Expression(\"value\", \"+\", (int) it->second));\n\n\t\t\t\tstatsUpdates.emplace_back(updateValue, wPluginStat);\n\t\t\t}\n\t\t\treadings += it->second;\n\t\t}\n\t}\n\n\tif (readings)\n\t{\n\t\tWhere *wPluginStat = new Where(\"key\", conditionStat, \"READINGS\");\n\t\tExpressionValues *updateValue = new ExpressionValues;\n\t\tupdateValue->push_back(Expression(\"value\", \"+\", (int) readings));\n\t\tstatsUpdates.emplace_back(updateValue, wPluginStat);\n\n\t\tif (m_statisticsOption == STATS_BOTH || m_statisticsOption == STATS_SERVICE)\n\t\t{\n\t\t\tWhere *wPluginStat = new Where(\"key\", conditionStat, m_serviceName + INGEST_SUFFIX);\n\t\t\tExpressionValues *updateValue = new ExpressionValues;\n\t\t\tupdateValue->push_back(Expression(\"value\", \"+\", (int) readings));\n\t\t\tstatsUpdates.emplace_back(updateValue, wPluginStat);\n\t\t}\n\t}\n\tif (m_discardedReadings)\n\t{\n\t\tWhere *wPluginStat = new Where(\"key\", conditionStat, \"DISCARDED\");\n\t\tExpressionValues *updateValue = new ExpressionValues;\n\t\tupdateValue->push_back(Expression(\"value\", \"+\", (int) m_discardedReadings));\n\t\tstatsUpdates.emplace_back(updateValue, wPluginStat);\n \t}\n\t\n\ttry {\n\t\tint rv = m_storage.updateTable(\"statistics\", statsUpdates);\n\t\t\n\t\tif (rv < 0)\n\t\t{\n\t\t\tif (++m_statsUpdateFails > STATS_UPDATE_FAIL_THRESHOLD)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Update of statistics failure has persisted, attempting recovery\");\n\t\t\t\tcreateServiceStatsDbEntry();\n\t\t\t\tstatsDbEntriesCache.clear();\n\t\t\t\tm_statsUpdateFails = 0;\n\t\t\t}\n\t\t\telse if (m_statsUpdateFails == 1)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Update of statistics failed\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Update of statistics still 
failing\");\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_discardedReadings = 0;\n\t\t\tstatsPendingEntries.clear();\n\t\t}\n\t}\n\tcatch (...) {\n\t\tLogger::getLogger()->info(\"%s:%d : Statistics table update failed, will retry on next iteration\", __FUNCTION__, __LINE__);\n\t}\n\tfor (auto it = statsUpdates.begin(); it != statsUpdates.end(); ++it)\n\t{\n\t\tdelete it->first;\n\t\tdelete it->second;\n\t}\n}\n\n/**\n * Construct an Ingest class to handle the readings queue.\n * A separate thread is used to send the readings to the\n * storage layer based on time. This thread is created in\n * the constructor and will terminate when the destructor\n * is called.\n *\n * @param storage\tThe storage client to use\n */\nIngest::Ingest(StorageClient& storage,\n\t\tconst std::string& serviceName,\n\t\tconst std::string& pluginName,\n\t\tManagementClient *mgmtClient) :\n\t\t\tm_storage(storage),\n\t\t\tm_serviceName(serviceName),\n\t\t\tm_pluginName(pluginName),\n\t\t\tm_mgtClient(mgmtClient),\n\t\t\tm_failCnt(0),\n\t\t\tm_storageFailed(false),\n\t\t\tm_storesFailed(0),\n\t\t\tm_statisticsOption(STATS_BOTH),\n\t\t\tm_highWater(0),\n\t\t\tm_isolate(false),\n\t\t\tm_debuggerAttached(false),\n\t\t\tm_debuggerBufferSize(1)\n{\n\tm_shutdown = false;\n\tm_running = true;\n\tm_queue = new vector<Reading *>();\n\tm_logger = Logger::getLogger();\n\tm_data = NULL;\n\tm_discardedReadings = 0;\n\tm_highLatency = false;\n\n\t// populate asset and storage asset tracking cache\n\tAssetTracker *as = AssetTracker::getAssetTracker();\n\tas->populateAssetTrackingCache(m_pluginName, \"Ingest\");\n\tas->populateStorageAssetTrackingCache();\n\n\t// Create the stats entry for the service\n\tcreateServiceStatsDbEntry();\n\n\tm_filterPipeline = NULL;\n\n\tm_deprecated = NULL;\n\n\tm_deprecatedAgeOut = 0;\n\tm_deprecatedAgeOutStorage = 0;\n\n\tm_ingestRate = new IngestRate(mgmtClient, serviceName);\n}\n\n/**\n * Start the ingest threads\n *\n * @param timeout\tMaximum time before sending a queue of 
readings in milliseconds\n * @param threshold\tLength of queue before sending readings\n */\nvoid Ingest::start(long timeout, unsigned int threshold)\n{\n\tm_timeout = timeout;\n\tm_queueSizeThreshold = threshold;\n\tm_thread = new thread(ingestThread, this);\n\tm_statsThread = new thread(statsThread, this);\n}\n\n/**\n * Destructor for the Ingest class\n *\n * Sets the running flag to false. This will\n * cause the processing thread to drain the queue\n * and then exit.\n * Once this thread has exited the destructor will\n * return.\n */\nIngest::~Ingest()\n{\n\tm_shutdown = true;\n\tm_running = false;\n\t\n\t// Cleanup filters\n\t{\n\t\tlock_guard<mutex> guard(m_pipelineMutex);\n\t\tif (m_filterPipeline)\n\t\t{\n\t\t\tm_filterPipeline->setShuttingDown();\n\t\t\tm_filterPipeline->cleanupFilters(m_serviceName);  // filter's shutdown API could potentially try to feed some new readings using the async ingest mechanism\n\t\t}\n\t}\n\t\n\tm_cv.notify_one();\n\tm_thread->join();\n\tprocessQueue();\n\tm_statsCv.notify_one();\n\tm_statsThread->join();\n\tupdateStats();\n\t// Cleanup any readings left in the various queues\n\tfor (auto& reading : *m_queue)\n\t{\n\t\tdelete reading;\n\t}\n\tdelete m_queue;\n\tfor (auto& q : m_resendQueues)\n\t{\n\t\tfor (auto& rq : *q)\n\t\t{\n\t\t\tdelete rq;\n\t\t}\n\t\tdelete q;\n\t}\n\twhile (m_fullQueues.size() > 0)\n\t{\n\t\tvector<Reading *> *q = m_fullQueues.front();\n\t\tfor (auto& rq : *q)\n\t\t{\n\t\t\tdelete rq;\n\t\t}\n\t\tdelete q;\n\t\tm_fullQueues.pop();\n\t}\n\tdelete m_thread;\n\tdelete m_statsThread;\n\n\t// Delete filter pipeline\n\t{\n\t\tlock_guard<mutex> guard(m_pipelineMutex);\n\t\tif (m_filterPipeline)\n\t\t\tdelete m_filterPipeline;\n\t}\n\n\tif (m_deprecated)\n\t\tdelete m_deprecated;\n\tif (m_ingestRate)\n\t\tdelete m_ingestRate;\n}\n\n/**\n * Check if the ingest process is still running.\n * This becomes false when the service is shut down\n * and is used to allow the queue to drain and then\n * the processing 
routine to terminate.\n */\nbool Ingest::running()\n{\n\tlock_guard<mutex> guard(m_pipelineMutex);\n\treturn m_running;\n}\n\n/**\n * Check if a shutdown is requested\n */\nbool Ingest::isStopping()\n{\n\treturn m_shutdown;\n}\n\n/**\n * Set the resource limit for the service\n *\n * @param serviceBufferingType\tBuffering type\n * @param serviceBufferSize\tBuffer size\n * @param discardPolicy\tDiscard policy\n */\nvoid Ingest::setResourceLimit(ServiceBufferingType serviceBufferingType, unsigned long serviceBufferSize, DiscardPolicy discardPolicy)\n{\n\tm_serviceBufferingType = serviceBufferingType;\n\tm_serviceBufferSize = serviceBufferSize;\n\tm_discardPolicy = discardPolicy;\n\n\tif(m_resourceGovernorActive && \n\t\t(m_serviceBufferingType == ServiceBufferingType::UNLIMITED))\n\t{\n\t\tLogger::getLogger()->info(\"Resource governor deactivated: normal flow resumed.\");\n\t\tm_resourceGovernorActive = false;\n\t}\n\n\tm_logger->info(\"Set resource limit in Ingest: \"\n\t\t\"Service Buffering Type: '%d', \"\n\t\t\"Service Buffer Size: '%lu', \"\n\t\t\"Discard Policy: '%d'.\",\n\t\tserviceBufferingType,\n\t\tserviceBufferSize,\n\t\tdiscardPolicy);\n}\t\n\n/**\n * @brief Discard the oldest (first) reading from the queue.\n * \n * This method removes the oldest reading from the queue when the resource limit \n * is exceeded. 
It ensures that the queue length stays within the configured limit.\n * Warning: Caller should ensure Thread safety.\n */\nvoid Ingest::discardOldest() \n{\n\t// Check if the queue is not empty\n\tif (!m_queue->empty()) \n\t{\n\t\t// Delete the oldest (front) reading from the queue\n\t\tdelete m_queue->front();\n\n\t\t// Remove the oldest reading from the queue\n\t\tm_queue->erase(m_queue->begin());\n\n\t\t// Log that a reading was discarded for statistics purposes\n\t\tlogDiscardedStat();\n\t}\n}\n\n/**\n * @brief Discard the newest (last) reading from the queue.\n * \n * This method removes the newest reading from the queue when the resource limit \n * is exceeded. It ensures that the queue length stays within the configured limit.\n * Warning: Caller should ensure Thread safety.\n * \n */\nvoid Ingest::discardNewest()\n{\n\t// Check if the queue is not empty\n\tif (!m_queue->empty()) \n\t{\n\t\t// Delete the newest (back) reading from the queue\n\t\tdelete m_queue->back();\n\n\t\t// Remove the newest reading from the queue\n\t\tm_queue->pop_back();\n\n\t\t// Log that a reading was discarded for statistics purposes\n\t\tlogDiscardedStat();\n\t}\n}\n\n/**\n * @brief Reduce the fidelity of the queue by discarding every second reading.\n * \n * This method reduces the fidelity of the data in the queue by keeping only \n * readings at even indices and discarding readings at odd indices. 
It is applied \n * when the resource limit is exceeded and ensures the queue length stays within \n * the configured limit.\n * Warning: Caller should ensure Thread safety.\n */\nvoid Ingest::reduceFidelity() \n{\n\t// Check if the queue is not empty\n\tif (!m_queue->empty()) \n\t{\n\t\t// Create a new temporary queue to store reduced-fidelity data\n\t\tstd::vector<Reading*> newQueue;\n\n\t\t// Iterate through the queue, keeping only every second reading\n\t\tfor (size_t i = 0; i < m_queue->size(); i++) \n\t\t{\n\t\t\tif (i % 2 == 0) \n\t\t\t{\n\t\t\t\t// Keep readings at even indices\n\t\t\t\tnewQueue.push_back(m_queue->at(i));\n\t\t\t} \n\t\t\telse \n\t\t\t{\n\t\t\t\t// Delete readings at odd indices and log the discarded reading\n\t\t\t\tdelete m_queue->at(i);\n\t\t\t\tlogDiscardedStat();\n\t\t\t}\n\t\t}\n\n\t\t// Replace the original queue with the reduced-fidelity queue\n\t\tm_queue->swap(newQueue);\n\t}\n}\n\n/**\n * @brief Enforce resource limits on the queue.\n * \n * This method ensures that the queue length stays within the configured limits \n * when the buffering policy is set to \"Limited.\" It applies the configured discard \n * policy (e.g., Discard Oldest, Discard Newest, Reduce Fidelity) when the queue \n * length exceeds the specified limit. 
Logging is performed when the resource \n * governor activates or deactivates.\n */\nvoid Ingest::enforceResourceLimits() \n{\n\t// Enforce limits while the queue length exceeds the configured limit\n\tif(queueLength() > m_serviceBufferSize)\n\t{\n\t\tif(!m_resourceGovernorActive) \n\t\t{\n\t\t\t// Log that the resource governor is activated\n\t\t\tLogger::getLogger()->warn(\"Resource governor activated: enforcing resource limits.\");\n\t\t\tm_resourceGovernorActive = true;\n\t\t}\n\n\t\tunsigned int orignalQueueLength = queueLength();\n\t\twhile (queueLength() > m_serviceBufferSize) \n\t\t{\n\t\t\t// Apply the configured discard policy\n\t\t\tswitch (m_discardPolicy) \n\t\t\t{\n\t\t\t\tcase DiscardPolicy::DISCARD_OLDEST:\n\t\t\t\t\tdiscardOldest();\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase DiscardPolicy::DISCARD_NEWEST:\n\t\t\t\t\tdiscardNewest();\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase DiscardPolicy::REDUCE_FIDELITY:\n\t\t\t\t\treduceFidelity();\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\tLogger::getLogger()->debug(\"Resource governor applied: original queue length = %u, reduced queue length = %u.\", orignalQueueLength, queueLength());\n\t}\n\n\t// Deactivate the resource governor if the queue length drops below half the limit\n\tif (queueLength() <= m_serviceBufferSize / 2 && m_resourceGovernorActive) \n\t{\n\t\tLogger::getLogger()->info(\"Resource governor deactivated: normal flow resumed.\");\n\t\tm_resourceGovernorActive = false;\n\t}\n}\n\n/**\n * Add a reading to the reading queue\n *\n * @param reading\tThe single reading to ingest\n */\nvoid Ingest::ingest(const Reading& reading)\n{\n\tvector<Reading *> *fullQueue = 0;\n\n\tif (m_ingestRate)\n\t\tm_ingestRate->ingest(1);\n\n\t{\n\t\tlock_guard<mutex> guard(m_qMutex);\n\t\tm_queue->emplace_back(new Reading(reading));\n\n\t\t// Enforce resource limits\n\t\tif (m_serviceBufferingType == ServiceBufferingType::LIMITED) \n\t\t{\n\t\t\tenforceResourceLimits();\n\t\t}\n\n\t\tif (m_queue->size() >= m_queueSizeThreshold || 
m_running == false)\n\t\t{\n\t\t\tfullQueue = m_queue;\n\t\t\tm_queue = new vector<Reading *>;\n\t\t}\n\t}\n\tif (fullQueue)\n\t{\n\t\tlock_guard<mutex> guard(m_fqMutex);\n\t\tm_fullQueues.push(fullQueue);\n\t}\n\tif (m_fullQueues.size())\n\t\tm_cv.notify_all();\n\tm_performance->collect(\"queueLength\", (long)queueLength());\n}\n\n/**\n * Add a set of readings to the reading queue\n *\n * @param vec\tA vector of readings to ingest\n */\nvoid Ingest::ingest(const vector<Reading *> *vec)\n{\n\tvector<Reading *> *fullQueue = 0;\n\tsize_t qSize;\n\tunsigned int nFullQueues = 0;\n\n\tif (m_ingestRate)\n\t\tm_ingestRate->ingest(vec->size());\n\n\t{\n\t\tlock_guard<mutex> guard(m_qMutex);\n\t\t\n\t\t// Get the readings in the set\n\t\tfor (auto & rdng : *vec)\n\t\t{\n\t\t\tm_queue->emplace_back(rdng);\n\t\t}\n\n\t\t// Enforce resource limits\n\t\tif (m_serviceBufferingType == ServiceBufferingType::LIMITED) \n\t\t{\n\t\t\tenforceResourceLimits();\n\t\t}\n\n\t\tif (m_queue->size() >= m_queueSizeThreshold || m_running == false)\n\t\t{\n\t\t\tfullQueue = m_queue;\n\t\t\tm_queue = new vector<Reading *>;\n\t\t}\n\t\tqSize = m_queue->size();\n\t}\n\tif (fullQueue)\n\t{\n\t\tlock_guard<mutex> guard(m_fqMutex);\n\t\tm_fullQueues.push(fullQueue);\n\t\tnFullQueues = m_fullQueues.size();\n\t}\n\telse\n\t{\n\t\tlock_guard<mutex> guard(m_fqMutex);\n\t\tnFullQueues = m_fullQueues.size();\n\t}\n\tif (nFullQueues != 0 || qSize > m_queueSizeThreshold * 3 / 4)\n\t{\n\t\tm_cv.notify_all();\n\t}\n\tm_performance->collect(\"queueLength\", (long)queueLength());\n\tm_performance->collect(\"ingestCount\", (long)vec->size());\n}\n\n/**\n * Work out how long to wait based on age of oldest queued reading\n * We do this in a separate function so that we can lock the qMutex\n * to access the oldest element in the queue\n *\n * @return the time to wait\n */\nlong Ingest::calculateWaitTime()\n{\n\tlong timeout = m_timeout;\n\tlock_guard<mutex> guard(m_qMutex);\n\tif 
(!m_queue->empty())\n\t{\n\t\tReading *reading = (*m_queue)[0];\n\t\tstruct timeval tm, now;\n\t\treading->getUserTimestamp(&tm);\n\t\tgettimeofday(&now, NULL);\n\t\tlong ageMS = (now.tv_sec - tm.tv_sec) * 1000 +\n\t\t\t(now.tv_usec - tm.tv_usec) / 1000;\n\t\ttimeout = m_timeout - ageMS;\n\t}\n\treturn timeout;\n}\n\n/**\n * Wait for a period of time to allow the queue to build\n */\nvoid Ingest::waitForQueue()\n{\n\tif (m_fullQueues.size() > 0 || m_resendQueues.size() > 0)\n\t\treturn;\n\tif (m_running && m_queue->size() < m_queueSizeThreshold)\n\t{\n\t\tlong timeout = calculateWaitTime();\n\t\tif (timeout > 0)\n\t\t{\n\t\t\tmutex mtx;\n\t\t\tunique_lock<mutex> lck(mtx);\n\t\t\tm_cv.wait_for(lck,chrono::milliseconds((3 * timeout) / 4));\n\t\t}\n\t}\n}\n\n/**\n * Process the queue of readings.\n *\n * Send them to the storage layer as a block. If the append call\n * fails, requeue the readings for the next transmission.\n *\n * In order not to lock the queue for an excessive time a new queue\n * is created and the old one moved to a local variable. This minimises\n * the time we hold the queue mutex to the time it takes to swap two\n * variables.\n */\nvoid Ingest::processQueue()\n{\n\tdo {\n\t\t/*\n\t\t * If we have some data that has been previously filtered but failed to send,\n\t\t * then first try to send that data.\n\t\t */\n\t\twhile (m_resendQueues.size() > 0)\n\t\t{\n\t\t\tvector<Reading *> *q = *m_resendQueues.begin();\n\t\t\tif (m_storage.readingAppend(*q) == false)\n\t\t\t{\n\t\t\t\tif (!m_storageFailed)\n\t\t\t\t\tm_logger->info(\"Still unable to resend buffered data, leaving on resend queue.\");\n\t\t\t\tm_storageFailed = true;\n\t\t\t\tm_storesFailed++;\n\t\t\t\tm_failCnt++;\n\t\t\t\tif (m_failCnt > 5)\n\t\t\t\t{\n\t\t\t\t\tm_logger->info(\"Too many failures with block of readings. 
Removing readings from block\");\n\t\t\t\t\tfor (int cnt = 5; cnt > 0 && q->size() > 0; cnt--)\n\t\t\t\t\t{\n\t\t\t\t\t\tReading *reading = q->front();\n\t\t\t\t\t\tm_logger->info(\"Remove reading: %s\",\n\t\t\t\t\t\t\t\treading->toJSON().c_str());\n\t\t\t\t\t\tdelete reading;\n\t\t\t\t\t\tq->erase(q->begin());\n\t\t\t\t\t\tlogDiscardedStat();\n\t\t\t\t\t}\n\t\t\t\t\tm_performance->collect(\"removedFromQueue\", 5);\n\t\t\t\t\tif (q->size() == 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tdelete q;\n\t\t\t\t\t\tm_resendQueues.erase(m_resendQueues.begin());\n\t\t\t\t\t}\n\t\t\t\t\tm_failCnt = 0;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\n\t\t\t\tm_performance->collect(\"storedReadings\", (long int)(q->size()));\n\t\t\t\tif (m_storageFailed)\n\t\t\t\t{\n\t\t\t\t\tm_logger->warn(\"Storage operational after %d failures\", m_storesFailed);\n\t\t\t\t\tm_storageFailed = false;\n\t\t\t\t\tm_storesFailed = 0;\n\t\t\t\t}\n\t\t\t\tm_failCnt = 0;\n\t\t\t\tstd::map<std::string, int>\t\tstatsEntriesCurrQueue;\n\t\t\t\tAssetTracker *tracker = AssetTracker::getAssetTracker();\n\t\t\t\tif (tracker == nullptr)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->error(\"Failed to initialize asset tracker.\");\n\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t\tstring lastAsset = \"\";\n\t\t\t\tint *lastStat = NULL;\n\t\t\t\tstd::map <std::string , std::set<std::string> > assetDatapointMap;\n\n\t\t\t\tfor (vector<Reading *>::iterator it = q->begin();\n\t\t\t\t\t\t\t it != q->end(); ++it)\n\t\t\t\t{\n\t\t\t\t\tReading *reading = *it;\n\t\t\t\t\tstring assetName = reading->getAssetName();\n\t\t\t\t\tassetName = escape(assetName);\n                                        const std::vector<Datapoint *> dpVec = reading->getReadingData();\n\t\t\t\t\tstd::string temp;\n\t\t\t\t\tstd::set<std::string> tempSet;\n\t\t\t\t\t// first sort the individual datapoints \n\t\t\t\t\t// e.g. 
dp2, dp3, dp1 push them in a set,to make them \n\t\t\t\t\t// dp1,dp2,dp3\n\t\t\t\t\tfor ( auto dp : dpVec)\n\t\t\t\t\t{\n\t\t\t\t\t\ttemp.clear();\n\t\t\t\t\t\ttemp.append(dp->getName());\n\t\t\t\t\t\ttempSet.insert(temp);\n\t\t\t\t\t}\n\n\t\t\t\t\ttemp.clear();\n\n\t\t\t\t\t// Push them in a set so as to avoid duplication of datapoints\n\t\t\t\t\t// a reading of d1, d2, d3 and another d2,d3,d1 , second will be discarded\n\t\t\t\t\t//\n\t\t\t\t\tfor (auto dp: tempSet)\n\t\t\t\t\t{\n\t\t\t\t\t\tset<string> &s= assetDatapointMap[assetName];\n\t\t\t\t\t\tif (s.find(dp) == s.end())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\ts.insert(dp);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif (lastAsset.compare(assetName))\n\t\t\t\t\t{\n\t\t\t\t\t\tAssetTrackingTuple tuple(m_serviceName,\n\t\t\t\t\t\t\t\t\tm_pluginName,\n\t\t\t\t\t\t\t\t\tassetName,\n\t\t\t\t\t\t\t\t\t\"Ingest\");\n\n\t\t\t\t\t\t// Check Asset record exists\n\t\t\t\t\t\tAssetTrackingTuple* res = tracker->findAssetTrackingCache(tuple);\n\t\t\t\t\t\tif (res == NULL)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Record non in cache, add it\n\t\t\t\t\t\t\ttracker->addAssetTrackingTuple(tuple);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Possibly Un-deprecate asset tracking record\n\t\t\t\t\t\t\tunDeprecateAssetTrackingRecord(res,\n\t\t\t\t\t\t\t\t\t\t\tassetName,\n\t\t\t\t\t\t\t\t\t\t\t\"Ingest\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\tlastAsset = assetName;\n\t\t\t\t\t\tlastStat = &(statsEntriesCurrQueue[assetName]);\n\t\t\t\t\t\t(*lastStat)++;\n\t\t\t\t\t}\n\t\t\t\t\telse if (lastStat)\n\t\t\t\t\t{\n\t\t\t\t\t\t(*lastStat)++;\n\t\t\t\t\t}\n\t\t\t\t\tdelete reading;\n\t\t\t\t}\n\n\t\t\t\tfor (auto itr : assetDatapointMap)\n\t\t\t\t{\n\t\t\t\t\tstd::set<string> &s = itr.second;\n\t\t\t\t\tunsigned int count = s.size();\n\t\t\t\t\tStorageAssetTrackingTuple 
storageTuple(m_serviceName,\n\t\t\t\t\t\t\t\t\t\tm_pluginName,\n\t\t\t\t\t\t\t\t\t\titr.first,\n\t\t\t\t\t\t\t\t\t\t\"store\",\n\t\t\t\t\t\t\t\t\t\tfalse,\n\t\t\t\t\t\t\t\t\t\t\"\",\n\t\t\t\t\t\t\t\t\t\tcount);\n\n\t\t\t\t\tStorageAssetTrackingTuple *ptr = &storageTuple;\n\n\t\t\t\t\t// Update SAsset Tracker database and cache\n\t\t\t\t\ttracker->updateCache(s, ptr);\n\t\t\t\t}\n\n\t\t\t\tdelete q;\n\t\t\t\tm_resendQueues.erase(m_resendQueues.begin());\n\t\t\t\tunique_lock<mutex> lck(m_statsMutex);\n\t\t\t\tfor (auto &it : statsEntriesCurrQueue)\n\t\t\t\t{\n\t\t\t\t\tstatsPendingEntries[it.first] += it.second;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t{\n\t\t\tlock_guard<mutex> fqguard(m_fqMutex);\n\t\t\tif (m_fullQueues.empty())\n\t\t\t{\n\t\t\t\tif (!m_shutdown)\n\t\t\t\t{\n\t\t\t\t\t// Block of code to execute holding the mutex\n\t\t\t\t\tlock_guard<mutex> guard(m_qMutex);\n\t\t\t\t\tstd::vector<Reading *> *newQ = new vector<Reading *>;\n\t\t\t\t\tm_data = m_queue;\n\t\t\t\t\tm_queue = newQ;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_data = m_fullQueues.front();\n\t\t\t\tm_fullQueues.pop();\n\t\t\t}\n\t\t}\n\t\t\n\t\t/*\n\t\t * Create a ReadingSet from m_data readings if we have filters.\n\t\t *\n\t\t * At this point the m_data vector is cleared so that the only reference to\n\t\t * the readings is in the ReadingSet that is passed along the filter pipeline\n\t\t *\n\t\t * The final filter in the pipeline will pass the ReadingSet back into the\n\t\t * ingest class where it will repopulate the m_data member.\n\t\t *\n\t\t * We lock the filter pipeline here to prevent it being reconfigured whilst we\n\t\t * process the data. We do this because the qMutex is not good enough here as we\n\t\t * do not hold it, by deliberate policy. 
As we copy the queue holding the qMutex\n\t\t * and then release it to enable more data to be queued while we process the previous\n\t\t * queue via the filter pipeline and up to the storage layer.\n\t\t */\n\t\t{\n\t\t\tlock_guard<mutex> guard(m_pipelineMutex);\n\t\t\tif (m_filterPipeline && !m_filterPipeline->isShuttingDown())\n\t\t\t{\n\t\t\t\tPipelineElement *firstFilter = m_filterPipeline->getFirstFilterPlugin();\n\t\t\t\tif (firstFilter)\n\t\t\t\t{\n\t\t\t\t\t// Check whether filters are set before calling ingest\n\t\t\t\t\twhile (!m_filterPipeline->isReady())\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->warn(\"Ingest called before \"\n\t\t\t\t\t\t\t\t\t  \"filter pipeline is ready\");\n\t\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(150));\n\t\t\t\t\t}\n\t\t\t\t\tReadingSet *readingSet = new ReadingSet(m_data);\n\t\t\t\t\tm_data->clear();\n\t\t\t\t\tm_filterPipeline->execute();\t// Set the pipeline executing\n\t\t\t\t\t// Pass readingSet to filter chain\n\t\t\t\t\tfirstFilter->ingest(readingSet);\n\n\t\t\t\t\tm_filterPipeline->completeBranch();\t// Main branch has completed\n\t\t\t\t\tm_filterPipeline->awaitCompletion();\n\t\t\t\t\t/*\n\t\t\t\t\t * If filtering removed all the readings then simply clean up m_data and\n\t\t\t\t\t * return.\n\t\t\t\t\t */\n\t\t\t\t\tif (m_data->size() == 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tdelete m_data;\n\t\t\t\t\t\tm_data = NULL;\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * Check the first reading in the list to see if we are meeting the\n\t\t * latency configuration we have been set\n\t\t */\n\t\tif (m_data)\n\t\t{\n\t\t\tvector<Reading *>::iterator itr = m_data->begin();\n\t\t\tif (itr != m_data->cend())\n\t\t\t{\n\t\t\t\tReading *firstReading = *itr;\n\t\t\t\tstruct timeval tmFirst, tmNow, dur;\n\t\t\t\tgettimeofday(&tmNow, NULL);\n\t\t\t\tfirstReading->getUserTimestamp(&tmFirst);\n\t\t\t\ttimersub(&tmNow, &tmFirst, &dur);\n\t\t\t\tlong latency = dur.tv_sec * 1000 + 
(dur.tv_usec / 1000);\n\t\t\t\tm_performance->collect(\"readLatency\", latency);\n\t\t\t\tif (latency > m_timeout && m_highLatency == false)\n\t\t\t\t{\n\t\t\t\t\tm_logger->warn(\"Current send latency of %ldms exceeds requested maximum latency of %dmS\", latency, m_timeout);\n\t\t\t\t\tm_highLatency = true;\n\t\t\t\t\tm_10Latency = false;\n\t\t\t\t\tm_reportedLatencyTime = time(0);\n\t\t\t\t}\n\t\t\t\telse if (latency <= m_timeout / 1000 && m_highLatency)\n\t\t\t\t{\n\t\t\t\t\tm_logger->warn(\"Send latency now within requested limits\");\n\t\t\t\t\tm_highLatency = false;\n\t\t\t\t}\n\t\t\t\telse if (m_highLatency && latency > m_timeout + (m_timeout / 10) && time(0) - m_reportedLatencyTime > 60)\n\t\t\t\t{\n\t\t\t\t\t// Report again every minute if we are outside the latency\n\t\t\t\t\t// target by more than 10%\n\t\t\t\t\tm_logger->warn(\"Current send latency of %ldms still significantly exceeds requested maximum latency of %dmS\", latency, m_timeout);\n\t\t\t\t\tm_reportedLatencyTime = time(0);\n\t\t\t\t}\n\t\t\t\telse if (m_highLatency && latency < m_timeout + (m_timeout / 10) && m_10Latency == false)\n\t\t\t\t{\n\t\t\t\t\tm_logger->warn(\"Send latency of %ldms is now less than 10%% from target\", latency);\n\t\t\t\t\tm_10Latency = true;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\t\t\n\t\t/**\n\t\t * 'm_data' vector is ready to be sent to storage service.\n\t\t *\n\t\t * Note: m_data might contain:\n\t\t * - Readings set by the configured service \"plugin\" \n\t\t * OR\n\t\t * - filtered readings by filter plugins in 'readingSet' object:\n\t\t *\t1- values only\n\t\t *\t2- some readings removed\n\t\t *\t3- New set of readings\n\t\t */\n\t\tif (m_data && m_data->size())\n\t\t{\n\t\t\tif (m_storage.readingAppend(*m_data) == false)\n\t\t\t{\n\t\t\t\tif (!m_storageFailed)\n\t\t\t\t\tm_logger->warn(\"Failed to write readings to storage layer, queue for resend\");\n\t\t\t\tm_storageFailed = true;\n\t\t\t\tm_storesFailed++;\n\t\t\t\tm_performance->collect(\"resendQueued\", (long 
int)(m_data->size()));\n\t\t\t\tm_resendQueues.push_back(m_data);\n\t\t\t\tm_data = NULL;\n\t\t\t\tm_failCnt = 1;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_performance->collect(\"storedReadings\", (long int)(m_data->size()));\n\t\t\t\tif (m_storageFailed)\n\t\t\t\t{\n\t\t\t\t\tm_logger->warn(\"Storage operational after %d failures\", m_storesFailed);\n\t\t\t\t\tm_storageFailed = false;\n\t\t\t\t\tm_storesFailed = 0;\n\t\t\t\t}\n\t\t\t\tm_failCnt = 0;\n\t\t\t\tstd::map<std::string, int>\t\tstatsEntriesCurrQueue;\n\t\t\t\t// Check whether each reading requires the addition of a new\n\t\t\t\t// asset tracker tuple, then remove the Readings in the vector\n\t\t\t\tAssetTracker *tracker = AssetTracker::getAssetTracker();\n\n\t\t\t\tstring lastAsset;\n\t\t\t\tint *lastStat = NULL;\n\t\t\t\tstd::map<std::string, std::set<std::string> > assetDatapointMap;\n\n\t\t\t\tfor (vector<Reading *>::iterator it = m_data->begin(); it != m_data->end(); ++it)\n\t\t\t\t{\n\t\t\t\t\tReading *reading = *it;\n\t\t\t\t\tstring\tassetName = reading->getAssetName();\n\t\t\t\t\tassetName = escape(assetName);\n\t\t\t\t\tconst std::vector<Datapoint *> dpVec = reading->getReadingData();\n\t\t\t\t\tstd::string temp;\n\t\t\t\t\tstd::set<std::string> tempSet;\n\t\t\t\t\t// First sort the individual datapoints, e.g. dp2, dp3, dp1:\n\t\t\t\t\t// push them into a set to order them as dp1, dp2, dp3\n\t\t\t\t\tfor (auto dp : dpVec)\n\t\t\t\t\t{\n\t\t\t\t\t\ttemp.clear();\n\t\t\t\t\t\ttemp.append(dp->getName());\n\t\t\t\t\t\ttempSet.insert(temp);\n\t\t\t\t\t}\n\n\t\t\t\t\ttemp.clear();\n\n\t\t\t\t\t// Merge them into the per-asset set to avoid duplication of\n\t\t\t\t\t// datapoints: given a reading of d1, d2, d3 and another of\n\t\t\t\t\t// d2, d3, d1 the second adds nothing new\n\t\t\t\t\tfor (auto dp : tempSet)\n\t\t\t\t\t{\n\t\t\t\t\t\tset<string> &s = assetDatapointMap[assetName];\n\t\t\t\t\t\tif (s.find(dp) == s.end())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\ts.insert(dp);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif (lastAsset.compare(assetName))\n\t\t\t\t\t{\n\t\t\t\t\t\tAssetTrackingTuple tuple(m_serviceName,\n\t\t\t\t\t\t\t\t\tm_pluginName,\n\t\t\t\t\t\t\t\t\tassetName,\n\t\t\t\t\t\t\t\t\t\"Ingest\");\n\n\t\t\t\t\t\t// Check Asset record exists\n\t\t\t\t\t\tAssetTrackingTuple* res = tracker->findAssetTrackingCache(tuple);\n\t\t\t\t\t\tif (res == NULL)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Record not in cache, add it\n\t\t\t\t\t\t\ttracker->addAssetTrackingTuple(tuple);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t// Un-deprecate asset tracking record\n\t\t\t\t\t\t\tunDeprecateAssetTrackingRecord(res,\n\t\t\t\t\t\t\t\t\t\t\tassetName,\n\t\t\t\t\t\t\t\t\t\t\t\"Ingest\");\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tlastAsset = assetName;\n\t\t\t\t\t\tlastStat = &statsEntriesCurrQueue[assetName];\n\t\t\t\t\t\t(*lastStat)++;\n\t\t\t\t\t}\n\t\t\t\t\telse if (lastStat)\n\t\t\t\t\t{\n\t\t\t\t\t\t(*lastStat)++;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfor (auto &rdng : *m_data)\n\t\t\t\t{\n\t\t\t\t\tdelete rdng;\n\t\t\t\t}\n\t\t\t\tm_data->clear();\n\n\t\t\t\tfor (auto &itr : assetDatapointMap)\n\t\t\t\t{\n\t\t\t\t\tstd::set<string> &s = itr.second;\n\t\t\t\t\tunsigned int count = s.size();\n\t\t\t\t\tStorageAssetTrackingTuple storageTuple(m_serviceName,\n\t\t\t\t\t\t\t\t\t\tm_pluginName,\n\t\t\t\t\t\t\t\t\t\titr.first,\n\t\t\t\t\t\t\t\t\t\t\"store\",\n\t\t\t\t\t\t\t\t\t\tfalse,\n\t\t\t\t\t\t\t\t\t\t\"\",\n\t\t\t\t\t\t\t\t\t\tcount);\n\n\t\t\t\t\tStorageAssetTrackingTuple *ptr = &storageTuple;\n\n\t\t\t\t\t// Update Storage Asset Tracker database and cache\n\t\t\t\t\ttracker->updateCache(s, ptr);\n\t\t\t\t}\n\n\t\t\t\t{\n\t\t\t\t\tunique_lock<mutex> lck(m_statsMutex);\n\t\t\t\t\tfor (auto &it : statsEntriesCurrQueue)\n\t\t\t\t\t{\n\t\t\t\t\t\tstatsPendingEntries[it.first] += it.second;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (m_data)\n\t\t{\n\t\t\tdelete m_data;\n\t\t\tm_data = NULL;\n\t\t}\n\t} while (! 
m_fullQueues.empty());\n}\n\n/**\n * Load filter plugins\n *\n * Filters found in the configuration are loaded and added to the\n * filter pipeline of this Ingest instance.\n *\n * @param categoryName\tConfiguration category name\n * @return\t\tTrue if filters were loaded and initialised\n *\t\t\tor there are no filters\n *\t\t\tFalse with load/init errors\n */\nbool Ingest::loadFilters(const string& categoryName)\n{\n\tLogger::getLogger()->info(\"Ingest::loadFilters(): categoryName=%s\", categoryName.c_str());\n\t/*\n\t * We do everything to set up the pipeline using a local FilterPipeline and then assign it\n\t * to the service m_filterPipeline once it is set up, to guard against access to the pipeline\n\t * during setup.\n\t * This should not be an issue if the mutex is held, however this approach lessens the risk\n\t * in the case of this routine being called when the mutex is not held and ensures m_filterPipeline\n\t * only ever points to a fully configured filter pipeline.\n\t */\n\tlock_guard<mutex> guard(m_pipelineMutex);\n\tFilterPipeline *filterPipeline = new FilterPipeline(m_mgtClient, m_storage, m_serviceName);\n\n\t// Try to load filters:\n\tif (!filterPipeline->loadFilters(categoryName))\n\t{\n\t\t// Return false on any error\n\t\treturn false;\n\t}\n\n\t// Set up the filter pipeline\n\tbool rval = filterPipeline->setupFiltersPipeline((void *)passToOnwardFilter, (void *)useFilteredData, this);\n\tif (rval)\n\t{\n\t\tm_filterPipeline = filterPipeline;\n\t\t// If we previously had a debugger attached then attach to the new pipeline\n\t\tif (m_debuggerAttached)\n\t\t{\n\t\t\tattachDebugger();\n\t\t\tsetDebuggerBuffer(m_debuggerBufferSize);\n\t\t}\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"Failed to setup the filter pipeline, the filters are not attached to the service\");\n\t\tfilterPipeline->cleanupFilters(categoryName);\n\t}\n\treturn rval;\n}\n\n/**\n 
* Pass the current readings set to the next filter in the pipeline\n *\n * Note:\n * This routine must be passed to all filters \"plugin_init\" except the last one\n *\n * Static method\n *\n * @param outHandle     Pointer to next filter\n * @param readings      Current readings set\n */\nvoid Ingest::passToOnwardFilter(OUTPUT_HANDLE *outHandle,\n\t\t\t\tREADINGSET *readingSet)\n{\n\t// Get next filter in the pipeline\n\tPipelineElement *next = (PipelineElement *)outHandle;\n\n\t// Pass readings to the next stage in the pipeline\n\tnext->ingest(readingSet);\n}\n\n/**\n * Use the current input readings (they have been filtered\n * by all filters)\n *\n * The assumption is that one of two things has happened.\n *\n *\t1. The filtering has all been done in place. In which case\n *\tthe m_data vector is in the ReadingSet passed in here.\n *\n *\t2. The filtering has created new ReadingSet in which case\n *\tthe reading vector must be copied into m_data from the\n *\tReadingSet.\n *\n * Note:\n * This routine must be passed to last filter \"plugin_init\" only\n *\n * Static method\n *\n * @param outHandle     Pointer to Ingest class instance\n * @param readingSet    Filtered reading set being added to Ingest::m_data\n */\nvoid Ingest::useFilteredData(OUTPUT_HANDLE *outHandle,\n\t\t\t     READINGSET *readingSet)\n{\n\n\tIngest* ingest = (Ingest *)outHandle;\n\n\tif (ingest->isolated())\n\t{\n\t\tdelete readingSet;\n\t\treturn;\n\t}\n\tlock_guard<mutex> guard(ingest->m_useDataMutex);\n\t\n\tvector<Reading *> *newData = readingSet->getAllReadingsPtr();\n\tif (!ingest->m_data)\n\t{\n\t\t// If we are called during shutdown there will be no m_data in place\n\t\t// and we create a new one to handle this special case. In this case\n\t\t// the m_data will not be explicitly deleted. 
However as we are shutting\n\t\t// down this will not cause a problem as all memory is recovered at process\n\t\t// exit time.\n\t\tingest->m_data = new vector<Reading *>;\n\t}\n\tingest->m_data->insert(ingest->m_data->end(), newData->cbegin(), newData->cend());\n\n\treadingSet->clear();\n\tdelete readingSet;\n}\n\n/**\n * Configuration change for one of the filters or to the pipeline.\n *\n * @param category\tThe name of the configuration category\n * @param newConfig\tThe new category contents\n */\nvoid Ingest::configChange(const string& category, const string& newConfig)\n{\n\tLogger::getLogger()->debug(\"Ingest::configChange(): category=%s, newConfig=%s\", category.c_str(), newConfig.c_str());\n\tstring advanced = m_serviceName + \"Advanced\";\n\tif (category == m_serviceName)\n\t{\n\t\t/**\n\t\t * The category that has changed is the one for the south service itself.\n\t\t * The only item that concerns us here is the filter item that defines\n\t\t * the filter pipeline. We extract that item and check to see if it defines\n\t\t * a pipeline that is different to the one we currently have.\n\t\t *\n\t\t * If it is we destroy the current pipeline and create a new one.\n\t\t */\n\t\tConfigCategory config(\"tmp\", newConfig);\n\t\tstring newPipeline = \"\";\n\t\tif (config.itemExists(\"filter\"))\n\t\t{\n\t\t\tnewPipeline = config.getValue(\"filter\");\n\t\t}\n\n\t\t{\n\t\t\tlock_guard<mutex> guard(m_pipelineMutex);\n\t\t\tif (m_filterPipeline)\n\t\t\t{\n\t\t\t\tif (newPipeline == \"\" ||\n\t\t\t\t    m_filterPipeline->hasChanged(newPipeline) == false)\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->info(\"Ingest::configChange(): \"\n\t\t\t\t\t\t\t\t  \"filter pipeline is not set or \"\n\t\t\t\t\t\t\t\t  \"it hasn't changed\");\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\t/* The new filter pipeline is different to the one we already have\n\t\t\t\t * running, so remove the current pipeline and recreate it.\n\t\t\t\t */\n\t\t\t\tm_running = 
false;\n\t\t\t\tLogger::getLogger()->info(\"Ingest::configChange(): \"\n\t\t\t\t\t\t\t  \"filter pipeline has changed, \"\n\t\t\t\t\t\t\t  \"recreating filter pipeline\");\n\t\t\t\tm_filterPipeline->cleanupFilters(m_serviceName);\n\t\t\t\tdelete m_filterPipeline;\n\t\t\t\tm_filterPipeline = NULL;\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * We have to setup a new pipeline to match the changed configuration.\n\t\t * Release the lock before reloading the filters as this will acquire\n\t\t * the lock again\n\t\t */\n\t\tloadFilters(category);\n\n\t\t// Set m_running holding the lock\n\t\tlock_guard<mutex> guard(m_pipelineMutex);\n\t\tm_running = true;\n\t}\n\telse if (category == advanced) \n\t{\n\t\tConfigCategory config(\"tmp\", newConfig);\n\t\tstring s = config.getValue(\"rateMonitoringInterval\");\n\t\tlong interval = strtol(s.c_str(), NULL, 10);\n\t\ts = config.getValue(\"rateSigmaFactor\");\n\t\tlong factor = strtol(s.c_str(), NULL, 10);\n\t\tm_ingestRate->updateConfig(interval, factor);\n\n\t\t// TODO If the rate has changed we need to restart the monitoring for\n\t\t// now we trigger this if the category changes\n\t\tm_ingestRate->relearn();\n\t}\n\telse\n\t{\n\t\t/*\n\t\t * The category is for one of the filters. 
We simply call the Filter Pipeline\n\t\t * instance and get it to deal with sending the configuration to the right filter.\n\t\t * This is done holding the pipeline mutex to prevent the pipeline being changed\n\t\t * during this call and also to keep the ingest thread from running the filters\n\t\t * during reconfiguration.\n\t\t */\n\t\tLogger::getLogger()->info(\"Ingest::configChange(): change to config of some filter(s)\");\n\t\tlock_guard<mutex> guard(m_pipelineMutex);\n\t\tif (m_filterPipeline)\n\t\t{\n\t\t\tm_filterPipeline->configChange(category, newConfig);\n\t\t}\n\t}\n}\n\n/**\n * Return the number of queued readings in the south service\n */\nsize_t Ingest::queueLength()\n{\n\tsize_t\tlen = m_queue->size();\n\n\t// Approximate the amount of data in the full queues\n\tlen += m_fullQueues.size() * m_queueSizeThreshold;\n\tlen += m_resendQueues.size() * m_queueSizeThreshold;\n\n\treturn len;\n}\n\n/**\n * Load an up-to-date AssetTracking record for the given parameters\n * and un-deprecate the AssetTracking record if it has been found deprecated.\n * The existing cache element is updated.\n *\n * @param currentTuple\t\tCurrent AssetTracking record for given assetName\n * @param assetName\t\tAssetName to fetch from AssetTracking\n * @param event\t\t\tThe event type to fetch\n */\nvoid Ingest::unDeprecateAssetTrackingRecord(AssetTrackingTuple* currentTuple,\n\t\t\t\t\tconst string& assetName,\n\t\t\t\t\tconst string& event)\n{\n\ttime_t now = time(0);\n\tif (m_deprecatedAgeOut < now)\n\t{\n\t\tdelete m_deprecated;\n\t\tm_deprecated = m_mgtClient->getDeprecatedAssetTrackingTuples();\n\t\tm_deprecatedAgeOut = now + DEPRECATED_CACHE_AGE;\n\t}\n\tif (m_deprecated && m_deprecated->find(assetName))\n\t{\n\t\t// The asset is possibly deprecated\n\t\tm_deprecated->remove(assetName);\n\t}\n\telse\n\t{\n\t\t// The asset is not believed to be deprecated so return. 
If\n\t\t// it has been deprecated since we last loaded the cache this\n\t\t// will leave the asset incorrectly deprecated. This will be\n\t\t// resolved next time the cache is reloaded\n\t\treturn;\n\t}\n\t// Get up-to-date Asset Tracking record \n\tAssetTrackingTuple* updatedTuple =\n\t\t\tm_mgtClient->getAssetTrackingTuple(\n\t\t\tm_serviceName,\n\t\t\tassetName,\n\t\t\tevent);\n\n\tbool unDeprecateDataPoints = false;\n\tif (updatedTuple)\n\t{\n\t\tif (updatedTuple->isDeprecated())\n\t\t{\n\t\t\t// Update un-deprecated state in cached object\n\t\t\tcurrentTuple->unDeprecate();\n\n\t\t\tm_logger->debug(\"Asset '%s' is being un-deprecated\",\n\t\t\t\t\tassetName.c_str());\n\n\t\t\t// Prepare UPDATE query\n\t\t\tconst Condition conditionParams(Equals);\n\t\t\tWhere * wAsset = new Where(\"asset\",\n\t\t\t\t\t\tconditionParams,\n\t\t\t\t\t\tassetName);\n\t\t\tWhere *wService = new Where(\"service\",\n\t\t\t\t\t\tconditionParams,\n\t\t\t\t\t\tm_serviceName,\n\t\t\t\t\t\twAsset);\n\t\t\tWhere *wEvent = new Where(\"event\",\n\t\t\t\t\t\tconditionParams,\n\t\t\t\t\t\tevent,\n\t\t\t\t\t\twService);\n\n\t\t\tInsertValues unDeprecated;\n\n\t\t\t// Set NULL value\n\t\t\tunDeprecated.push_back(InsertValue(\"deprecated_ts\"));\n\t\t\t\t\n\t\t\t// Update storage with NULL value\n\t\t\tint rv = m_storage.updateTable(\"asset_tracker\",\n\t\t\t\t\t\t\tunDeprecated,\n\t\t\t\t\t\t\t*wEvent);\n\n\t\t\t// Check update operation\n\t\t\tif (rv < 0)\n\t\t\t{\n\t\t\t\tm_logger->error(\"Failure while un-deprecating asset '%s'\",\n\t\t\t\t\t\tassetName.c_str());\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring audit_details = \"{\\\"asset\\\" : \\\"\" + assetName +\n\t\t\t\t\t\t\t\"\\\", \\\"service\\\" : \\\"\" + m_serviceName +\n\t\t\t\t\t\t\t\"\\\", \\\"event\\\" : \\\"\" + event + \"\\\"}\";\n\t\t\t\t// Add AuditLog entry\n\t\t\t\tif (!m_mgtClient->addAuditEntry(\"ASTUN\", \"INFORMATION\", audit_details))\n\t\t\t\t{\n\t\t\t\t\tm_logger->warn(\"Failure while adding AuditLog entry \" 
\\\n\t\t\t\t\t\t\t\" for un-deprecated asset '%s'\",\n\t\t\t\t\t\t\tassetName.c_str());\n\t\t\t\t}\n\t\t\t\tm_logger->info(\"Asset '%s' has been un-deprecated, event '%s'\",\n\t\t\t\t\t\tassetName.c_str(),\n\t\t\t\t\t\tevent.c_str());\n\n\t\t\t\tunDeprecateDataPoints = true;\n\t\t\t}\n\t\t}\n\t}\n\telse\n\t{\n\t\tm_logger->error(\"Failure to get AssetTracking record \"\n\t\t\t\t\"for service '%s', asset '%s'\",\n\t\t\t\tm_serviceName.c_str(),\n\t\t\t\tassetName.c_str());\n\t}\n\n\tdelete updatedTuple;\n\n\t// Undeprecate all \"store\" events related to the serviceName and assetName\n\tif (unDeprecateDataPoints)\n\t{\n\t\t// Prepare UPDATE query\n\t\tconst Condition conditionParams(Equals);\n\t\tWhere * wAsset = new Where(\"asset\",\n\t\t\t\t\tconditionParams,\n\t\t\t\t\tassetName);\n\t\tWhere *wService = new Where(\"service\",\n\t\t\t\t\tconditionParams,\n\t\t\t\t\tm_serviceName,\n\t\t\t\t\twAsset);\n\t\tWhere *wEvent = new Where(\"event\",\n\t\t\t\t\tconditionParams,\n\t\t\t\t\t\"store\",\n\t\t\t\t\twService);\n\n\t\tInsertValues unDeprecated;\n\n\t\t// Set NULL value\n\t\tunDeprecated.push_back(InsertValue(\"deprecated_ts\"));\n        \n\t\t// Update storage with NULL value\n\t\tint rv = m_storage.updateTable(\"asset_tracker\",\n\t\t\t\t\t\tunDeprecated,\n\t\t\t\t\t\t*wEvent);\n\n\t\t// Check update operation\n\t\tif (rv < 0)\n\t\t{\n\t\t\tm_logger->error(\"Failure while un-deprecating asset '%s'\",\n\t\t\t\t\tassetName.c_str());\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_logger->info(\"Asset '%s' has been un-deprecated, event '%s'\",\n\t\t\t\t\tassetName.c_str(),\n\t\t\t\t\t\"store\");\n\t\t}\n\t}\n}\n\n/**\n * Load an up-to-date StorageAssetTracking record for the given parameters\n * and un-deprecate StorageAssetTracking record it has been found as deprecated\n * Existing cache element is updated\n *\n * @param currentTuple          Current StorageAssetTracking record for given assetName\n * @param assetName             AssetName to fetch from AssetTracking\n * @param  
datapoints\t\tThe datapoints comma separated list\n * @param count\t\t\tThe number of datapoints per asset\n */\nvoid Ingest::unDeprecateStorageAssetTrackingRecord(StorageAssetTrackingTuple* currentTuple,\n\t\t\t\t\tconst string& assetName,\n\t\t\t\t\tconst string& datapoints,\n\t\t\t\t\tconst unsigned int& count)\n{\n\ttime_t now = time(0);\n\tif (m_deprecatedAgeOutStorage < now)\n\t{\n\t\tm_deprecatedAgeOutStorage = now + DEPRECATED_CACHE_AGE;\n\t}\n\telse\n\t{\n\t\t// Nothing to do right now\n\t\treturn;\n\t}\n\n\t// Get up-to-date Asset Tracking record\n\tStorageAssetTrackingTuple* updatedTuple =\n\t\t\tm_mgtClient->getStorageAssetTrackingTuple(\n\t\t\tm_serviceName,\n\t\t\tassetName,\n\t\t\t\"store\",\n\t\t\tdatapoints,\n\t\t\tcount);\n\n\tvector<string> tokens;\n\tstringstream dpStringStream(datapoints);\n\tstring temp;\n\twhile (getline(dpStringStream, temp, ','))\n\t{\n\t\ttokens.push_back(temp);\n\t}\n\n\tostringstream convert;\n\tconvert << \"{\";\n\tconvert << \"\\\"datapoints\\\":[\";\n\tfor (unsigned int i = 0; i < tokens.size(); ++i)\n\t{\n\t\tconvert << \"\\\"\" << tokens[i].c_str() << \"\\\"\";\n\t\tif (i < tokens.size() - 1)\n\t\t{\n\t\t\tconvert << \",\";\n\t\t}\n\t}\n\tconvert << \"],\";\n\tconvert << \"\\\"count\\\":\" << to_string(count).c_str();\n\tconvert << \"}\";\n\n\tif (updatedTuple)\n\t{\n\t\tif (updatedTuple->isDeprecated())\n\t\t{\n\t\t\t// Update un-deprecated state in cached object\n\t\t\tcurrentTuple->unDeprecate();\n\n\t\t\tm_logger->info(\"%s:%d, Asset '%s' is being un-deprecated\", __FILE__, __LINE__, assetName.c_str());\n\n\t\t\t// Prepare UPDATE query\n\t\t\tconst Condition conditionParams(Equals);\n\t\t\tWhere *wAsset = new Where(\"asset\",\n\t\t\t\t\t\tconditionParams,\n\t\t\t\t\t\tassetName);\n\t\t\tWhere *wService = new Where(\"service\",\n\t\t\t\t\t\tconditionParams,\n\t\t\t\t\t\tm_serviceName,\n\t\t\t\t\t\twAsset);\n\t\t\tWhere *wEvent = new Where(\"event\",\n\t\t\t\t\t\tconditionParams,\n\t\t\t\t\t\t\"store\",\n\t\t\t\t\t\twService);\n\n\t\t\tWhere *wData = new Where(\"data\",\n\t\t\t\t\t\tconditionParams,\n\t\t\t\t\t\tJSONescape(convert.str()),\n\t\t\t\t\t\twEvent);\n\n\t\t\tInsertValues unDeprecated;\n\n\t\t\t// Set NULL value\n\t\t\tunDeprecated.push_back(InsertValue(\"deprecated_ts\"));\n\n\t\t\t// Update storage with NULL value\n\t\t\tint rv = m_storage.updateTable(\"asset_tracker\",\n\t\t\t\t\t\t\tunDeprecated,\n\t\t\t\t\t\t\t*wData);\n\n\t\t\t// Check update operation\n\t\t\tif (rv < 0)\n\t\t\t{\n\t\t\t\tm_logger->error(\"%s:%d, Failure while un-deprecating asset '%s'\", __FILE__, __LINE__, assetName.c_str());\n\t\t\t}\n\t\t}\n\t}\n\n\tif (updatedTuple != nullptr)\n\t\tdelete updatedTuple;\n}\n\n/**\n * Set the statistics option. 
The statistics collection regime may be one of\n * \"per asset\", \"per service\" or \"per asset & service\".\n *\n * @param option\tThe desired statistics collection regime\n */\nvoid Ingest::setStatistics(const string& option)\n{\n\tunique_lock<mutex> lck(m_statsMutex);\n\tif (option.compare(\"per asset\") == 0)\n\t\tm_statisticsOption = STATS_ASSET;\n\telse if (option.compare(\"per service\") == 0)\n\t\tm_statisticsOption = STATS_SERVICE;\n\telse\n\t\tm_statisticsOption = STATS_BOTH;\n}\n\n/*\n * Returns a comma-separated string built from a set of datapoint names\n */\nstd::string Ingest::getStringFromSet(const std::set<std::string> &dpSet)\n{\n\tstd::string s;\n\tfor (const auto& itr : dpSet)\n\t{\n\t\ts.append(itr);\n\t\ts.append(\",\");\n\t}\n\t// Remove the trailing comma, guarding against an empty set\n\tif (!s.empty() && s.back() == ',')\n\t\ts.pop_back();\n\treturn s;\n}\n\n/**\n * Implement flow control backoff for the async ingest mechanism.\n *\n * The flow control is \"soft\" in that it will only wait for a maximum\n * amount of time before continuing regardless of the queue length.\n *\n * The mechanism is to have a high water and low water mark. When the queue\n * gets longer than the high water mark we wait until the queue drains below\n * the low water mark before proceeding.\n *\n * The wait is done with a backoff algorithm that starts at AFC_SLEEP_INCREMENT\n * and doubles each time we have not dropped below the low water mark. 
It will\n * sleep for a maximum of AFC_SLEEP_MAX before testing again.\n */\nvoid Ingest::flowControl()\n{\n\tif (m_highWater == 0)\t// No flow control\n\t{\n\t\treturn;\n\t}\n\tif (m_highWater < queueLength())\n\t{\n\t\tm_logger->debug(\"Waiting for ingest queue to drain\");\n\t\tint total = 0, delay = AFC_SLEEP_INCREMENT;\n\t\twhile (total < AFC_MAX_WAIT && queueLength() > m_lowWater)\n\t\t{\n\t\t\tthis_thread::sleep_for(chrono::milliseconds(delay));\n\t\t\ttotal += delay;\n\t\t\tdelay *= 2;\n\t\t\tif (delay > AFC_SLEEP_MAX)\n\t\t\t{\n\t\t\t\tdelay = AFC_SLEEP_MAX;\n\t\t\t}\n\t\t}\n\t\tm_logger->debug(\"Ingest queue has %s\", queueLength() > m_lowWater\n\t\t\t\t? \"failed to drain in sufficient time\" : \"drained\");\n\t\tm_performance->collect(\"flow controlled\", total);\n\t}\n}\n\n/**\n * Configure the ingest rate class with the collection interval and\n * the sigma factor allowed before reporting\n *\n * @param interval\tNumber of minutes to average ingest stats over\n * @param factor\tNumber of standard deviations to allow before reporting\n */\nvoid Ingest::configureRateMonitor(long interval, long factor)\n{\n\tm_ingestRate->updateConfig(interval, factor);\n}\n"
  },
  {
    "path": "C/services/south/ingestRate.cpp",
"content": "/*\n * Fledge readings ingest rate.\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <ingest_rate.h>\n#include <thread>\n#include <cmath>\n#include <logger.h>\n\nusing namespace std;\n\n/**\n * Constructor for ingest rate class\n *\n * @param mgmtClient\tThe management client interface\n * @param service\tThe name of the south service\n */\nIngestRate::IngestRate(ManagementClient *mgmtClient, const std::string& service) : m_mgmtClient(mgmtClient), m_service(service),\n\tm_currentInterval(0), m_thisInterval(0), m_mean(0), m_dsq(0), m_count(0), m_factor(3), m_alerted(false), m_primed(false)\n{\n\tm_numIntervals = 60 / FLUSH_STATS_INTERVAL;\n}\n\n/**\n * Destructor for the ingest rate class\n */\nIngestRate::~IngestRate()\n{\n}\n\n/**\n * Update the configuration of the ingest rate mechanism\n *\n * @param interval\tNumber of minutes to average over\n * @param factor\tNumber of standard deviations to tolerate\n */\nvoid IngestRate::updateConfig(int interval, int factor)\n{\n\tbool restart = false;\n\tif (interval * 60 != m_numIntervals * FLUSH_STATS_INTERVAL)\n\t{\n\t\tm_numIntervals = (interval * 60) / FLUSH_STATS_INTERVAL;\n\t\trestart = true;\n\t}\n\tif (m_factor != factor)\n\t{\n\t\tm_factor = factor;\n\t}\n\tif (restart)\n\t{\n\t\trelearn();\n\t}\n}\n\n/**\n * The configuration has changed so we need to reset our state\n * and go back into the mode of determining what a good mean and\n * standard deviation is for the selected interval.\n */\nvoid IngestRate::relearn()\n{\n\tlock_guard<mutex> guard(m_mutex);\n\tm_count = 0;\n\tm_thisInterval = 0;\n\tm_currentInterval = 0;\n\tm_dsq = 0;\n\tm_mean = 0;\n}\n\n/**\n * Called each time we ingest any readings.\n *\n * @param numberReadings\tThe number of readings ingested\n */\nvoid IngestRate::ingest(unsigned int numberReadings)\n{\n\tif (m_numIntervals == 0)\n\t\treturn;\n\tlock_guard<mutex> guard(m_mutex);\n\tm_thisInterval += numberReadings;\n}\n\n/**\n * Called periodically by the 
stats update thread. Called every FLUSH_STATS_INTERVAL seconds\n */\nvoid IngestRate::periodic()\n{\n\tif (m_numIntervals == 0)\n\t\treturn;\n\tupdateCounters();\n}\n\n/**\n * The periodic work that needs to be done holding the mutex\n */\nvoid IngestRate::updateCounters()\n{\n\tlock_guard<mutex> guard(m_mutex);\n\tm_currentInterval++;\n\tif (m_currentInterval == m_numIntervals)\n\t{\n\t\tif (m_count > IGRSAMPLES)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"Ingest rate checking for service %s is enabled\", m_service.c_str());\n\t\t\tdouble sigma = sqrt(m_dsq / m_count);\n\t\t\tif (m_thisInterval < (m_mean - (m_factor * sigma)) || m_thisInterval > (m_mean + (m_factor * sigma)))\n\t\t\t{\n\t\t\t\tif (m_primed)\n\t\t\t\t{\n\t\t\t\t\t// Outlier detected\n\t\t\t\t\tstring key = \"SouthIngestRate\" + m_service;\n\t\t\t\t\tstring message = \"Ingest rate of the south service \" +\n\t\t\t\t\t       m_service + \" falls outside of normal boundaries\";\n\t\t\t\t\tm_mgmtClient->raiseAlert(key, message);\n\t\t\t\t\tLogger::getLogger()->warn(\"Current ingest rate falls outside normal boundaries, rate is %ld with average rate of %f\", m_thisInterval, m_mean);\n\t\t\t\t\tm_alerted = true;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\t// We have seen one outlier; prime so that a second\n\t\t\t\t\t// consecutive outlier raises the alert\n\t\t\t\t\tm_primed = true;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse if (m_alerted)\n\t\t\t{\n\t\t\t\tstring key = \"SouthIngestRate\" + m_service;\n\t\t\t\tm_mgmtClient->clearAlert(key);\n\t\t\t\tm_primed = false;\n\t\t\t\tm_alerted = false;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_primed = false;\n\t\t\t}\n\t\t}\n\t\tm_count++;\n\t\t// Welford-style online update of the running mean and the sum of\n\t\t// squared differences used to derive the standard deviation\n\t\tdouble meanDiff = (m_thisInterval - m_mean) / m_count;\n\t\tdouble newMean = m_mean + meanDiff;\n\t\tdouble dsqInc = (m_thisInterval - newMean) * (m_thisInterval - m_mean);\n\t\tm_dsq += dsqInc;\n\t\tm_mean = newMean;\n\t\tm_thisInterval = 0;\n\t\tm_currentInterval = 0;\n\t}\n}\n"
  },
  {
    "path": "C/services/south/south.cpp",
    "content": "/*\n * Fledge south service.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n\n#include <sys/timerfd.h>\n#include <time.h>\n#include <stdint.h>\n#include <stdlib.h>\n#include <signal.h>\n#include <execinfo.h>\n#include <dlfcn.h>    // for dladdr\n#include <cxxabi.h>   // for __cxa_demangle\n#include <unistd.h>\n#include <south_service.h>\n#include <south_api.h>\n#include <management_api.h>\n#include <storage_client.h>\n#include <service_record.h>\n#include <plugin_manager.h>\n#include <plugin_api.h>\n#include <plugin.h>\n#include <logger.h>\n#include <reading.h>\n#include <ingest.h>\n#include <iostream>\n#include <defaults.h>\n#include <filter_plugin.h>\n#include <config_handler.h>\n#include <syslog.h>\n#include <pyruntime.h>\n\n#define SERVICE_TYPE \"Southbound\"\n#define RESOURCE_LIMIT_CATEGORY \"RESOURCE_LIMIT\"\nextern int makeDaemon(void);\nextern void handler(int sig);\n\nstatic void reconfThreadMain(void *arg);\n\nusing namespace std;\n\n// Displays service information in JSON format\nstatic void printServiceInfoAsJSON()\n{\n\tstatic std::string serviceInfoJSON = R\"({\"name\":\"South Service\",\"description\":\"Service To Ingress Data\",\"type\":\")\" + std::string(SERVICE_TYPE) + R\"(\",\"process\":\"south_c\",\"process_script\":\"[\\\"services/south_c\\\"]\",\"startup_priority\":100})\";\n\tstd::cout << serviceInfoJSON << std::endl;\n}\n\n/**\n * South service main entry point\n */\nint main(int argc, char *argv[])\n{\nunsigned short corePort = 8082;\nstring\t       coreAddress = \"localhost\";\nbool\t       daemonMode = true;\nstring\t       myName = SERVICE_NAME;\nstring\t       logLevel = \"warning\";\nstring         token = \"\";\nbool\t       dryrun = false;\n\n\tsignal(SIGSEGV, handler);\n\tsignal(SIGILL, handler);\n\tsignal(SIGBUS, handler);\n\tsignal(SIGFPE, handler);\n\tsignal(SIGABRT, handler);\n\n\tfor (int i = 1; i < argc; 
i++)\n\t{\n\t\tif (!strcmp(argv[i], \"--info\"))\n\t\t{\n\t\t\tprintServiceInfoAsJSON();\n\t\t\treturn 0;\n\t\t}\n\t\tif (!strcmp(argv[i], \"-d\"))\n\t\t{\n\t\t\tdaemonMode = false;\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--port=\", 7))\n\t\t{\n\t\t\tcorePort = (unsigned short)strtol(&argv[i][7], NULL, 10);\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--name=\", 7))\n\t\t{\n\t\t\tmyName = &argv[i][7];\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--address=\", 10))\n\t\t{\n\t\t\tcoreAddress = &argv[i][10];\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--logLevel=\", 11))\n\t\t{\n\t\t\tlogLevel = &argv[i][11];\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--token=\", 8))\n\t\t{\n\t\t\ttoken = &argv[i][8];\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--dryrun\", 8))\n\t\t{\n\t\t\tdryrun = true;\n\t\t}\n\t}\n\n#ifdef PROFILING\n\tchar profilePath[200]{0};\n\tif (getenv(\"FLEDGE_DATA\"))\n\t{\n\t\tsnprintf(profilePath, sizeof(profilePath), \"%s/%s_Profile\", getenv(\"FLEDGE_DATA\"), myName.c_str());\n\t} else if (getenv(\"FLEDGE_ROOT\"))\n\t{\n\t\tsnprintf(profilePath, sizeof(profilePath), \"%s/data/%s_Profile\", getenv(\"FLEDGE_ROOT\"), myName.c_str());\n\t} else\n\t{\n\t\tsnprintf(profilePath, sizeof(profilePath), \"/usr/local/fledge/data/%s_Profile\", myName.c_str());\n\t}\n\tmkdir(profilePath, 0777);\n\tchdir(profilePath);\n#endif\n\n\tif (daemonMode && makeDaemon() == -1)\n\t{\n\t\t// Failed to run in daemon mode\n\t\tcout << \"Failed to run as daemon - proceeding in interactive mode.\" << endl;\n\t}\n\n\tSouthService *service = new SouthService(myName, token);\n\tif (dryrun)\n\t{\n\t\tservice->setDryRun();\n\t}\n\tLogger *logger = Logger::getLogger();\n\tlogger->setMinLevel(logLevel);\n\t// Start the service. 
This will only return when the service is shut down\n\tservice->start(coreAddress, corePort);\n\tdelete service;\n\tdelete logger;\n\treturn 0;\n}\n\n/**\n * Detach the process from the terminal and run in the background.\n */\nint makeDaemon()\n{\npid_t pid;\n\n\t/* Make the child process inherit the log level */\n\tint logmask = setlogmask(0);\n\t/* create new process */\n\tif ((pid = fork()  ) == -1)\n\t{\n\t\treturn -1;  \n\t}\n\telse if (pid != 0)  \n\t{\n\t\texit (EXIT_SUCCESS);  \n\t}\n\tsetlogmask(logmask);\n\n\t// If we got here we are a child process\n\n\t// create new session and process group \n\tif (setsid() == -1)  \n\t{\n\t\treturn -1;  \n\t}\n\n\t// Close stdin, stdout and stderr\n\tclose(0);\n\tclose(1);\n\tclose(2);\n\t// redirect fd's 0,1,2 to /dev/null\n\t(void)open(\"/dev/null\", O_RDWR);  \t// stdin\n\tif (dup(0) == -1) {}\t\t\t// stdout\tWorkaround for GCC bug 66425 produces warning\n\tif (dup(0) == -1) {} \t\t\t// stderr\tWorkaround for GCC bug 66425 produces warning\n \treturn 0;\n}\n\nvoid handler(int sig)\n{\nLogger\t*logger = Logger::getLogger();\nvoid\t*array[20];\nchar\tbuf[1024];\nint\tsize;\n\n\t// get void*'s for all entries on the stack\n\tsize = backtrace(array, 20);\n\n\t// print out all the frames to stderr\n\tlogger->fatal(\"Signal %d (%s) trapped:\\n\", sig, strsignal(sig));\n\tchar **messages = backtrace_symbols(array, size);\n\tfor (int i = 0; i < size; i++)\n\t{\n\t\tDl_info info;\n\t\tif (dladdr(array[i], &info) && info.dli_sname)\n\t\t{\n\t\t    char *demangled = NULL;\n\t\t    int status = -1;\n\t\t    if (info.dli_sname[0] == '_')\n\t\t        demangled = abi::__cxa_demangle(info.dli_sname, NULL, 0, &status);\n\t\t    snprintf(buf, sizeof(buf), \"%-3d %*p %s + %zd---------\",\n\t\t             i, int(2 + sizeof(void*) * 2), array[i],\n\t\t             status == 0 ? demangled :\n\t\t             info.dli_sname == 0 ? 
messages[i] : info.dli_sname,\n\t\t             (char *)array[i] - (char *)info.dli_saddr);\n\t\t    free(demangled);\n\t\t} \n\t\telse\n\t\t{\n\t\t    snprintf(buf, sizeof(buf), \"%-3d %*p %s---------\",\n\t\t             i, int(2 + sizeof(void*) * 2), array[i], messages[i]);\n\t\t}\n\t\tlogger->fatal(\"(%d) %s\", i, buf);\n\t}\n\tfree(messages);\n\texit(1);\n}\n\t\t\n/**\n * Callback called by south plugin to ingest readings into Fledge\n *\n * @param ingest\tThe ingest class to use\n * @param reading\tThe Reading to ingest\n */\nvoid doIngest(Ingest *ingest, Reading reading)\n{\n\tingest->ingest(reading);\n}\n\nvoid doIngestV2(Ingest *ingest, ReadingSet *set)\n{\n    std::vector<Reading *> *vec = set->getAllReadingsPtr();\n    if (!vec)\n    {\n        Logger::getLogger()->info(\"%s:%d: V2 async ingest method: vec is NULL\", __FUNCTION__, __LINE__);\n        return;\n    }\n\t// move reading vector from set to new vector vec2\n    std::vector<Reading *> *vec2 = set->moveAllReadings();\n    \n    Logger::getLogger()->debug(\"%s:%d: V2 async ingest method returned: vec->size()=%d\", __FUNCTION__, __LINE__, vec->size());\n\n\tingest->ingest(vec2);\n\tdelete vec2; \t// each reading object inside vector has been allocated on heap and moved to Ingest class's internal queue\n\tdelete set;\n\n\tingest->flowControl();\n}\n\n/**\n * Constructor for the south service\n */\nSouthService::SouthService(const string& myName, const string& token) :\n\t\t\t\tsouthPlugin(NULL),\n\t\t\t\tm_assetTracker(NULL),\n\t\t\t\tm_shutdown(false),\n\t\t\t\tm_readingsPerSec(1),\n\t\t\t\tm_throttle(false),\n\t\t\t\tm_throttled(false),\n\t\t\t\tm_token(token),\n\t\t\t\tm_repeatCnt(1),\n\t\t\t\tm_pluginData(NULL),\n\t\t\t\tm_dryRun(false),\n\t\t\t\tm_requestRestart(false),\n\t\t\t\tm_auditLogger(NULL),\n\t\t\t\tm_perfMonitor(NULL),\n\t\t\t\tm_suspendIngest(false),\n\t\t\t\tm_steps(0),\n\t\t\t\tm_provider(NULL),\n\t\t\t\tm_controlEnabled(true),\n\t\t\t\tm_debuggerEnabled(true)\n{\n\tm_name = 
myName;\n\tm_type = SERVICE_TYPE;\n\tm_pollType = POLL_INTERVAL;\n\n\tlogger = new Logger(myName);\n\tlogger->setMinLevel(\"warning\");\n\n\tm_reconfThread = new std::thread(reconfThreadMain, this);\n}\n\n/**\n * Destructor for south service\n */\nSouthService::~SouthService()\n{\n\tm_cvNewReconf.notify_all();\t// Wakeup the reconfigure thread to terminate it\n\tm_reconfThread->join();\n\tdelete m_reconfThread;\n\tif (m_pluginData)\n\t\tdelete m_pluginData;\n\tif (m_perfMonitor)\n\t\tdelete m_perfMonitor;\n\tdelete m_assetTracker;\n\tdelete m_auditLogger;\n\tdelete m_mgtClient;\n\tdelete m_provider;\n\n\t// We would like to shutdown the Python environment if it\n\t// was running. However this causes a segmentation fault within Python\n\t// so we currently can not do this\n#if PYTHON_SHUTDOWN\n\tPythonRuntime::shutdown();\t// Shutdown and release Python resources\n#endif\n}\n\n/**\n * Start the south service\n */\nvoid SouthService::start(string& coreAddress, unsigned short corePort)\n{\n\tunsigned short managementPort = (unsigned short)0;\n\tManagementApi management(SERVICE_NAME, managementPort);\t// Start management API\n\tlogger->info(\"Starting south service...\");\n\tm_provider = new SouthServiceProvider(this);\n\tmanagement.registerProvider(m_provider);\n\tmanagement.registerService(this);\n\n\t// Listen for incoming management requests\n\tmanagement.start();\n\n\t// Create the south API\n\tSouthApi *api = new SouthApi(this);\n\tif (!api)\n\t{\n\t\tlogger->fatal(\"Unable to create API object\");\n\t\treturn;\n\t}\n\t// Allow time for the listeners to start before we register\n\tsleep(1);\n\tif (! 
m_shutdown)\n\t{\n\t\tunsigned short sport = api->getListenerPort();\n\n\t\t// Now register our service\n\t\t// TODO proper hostname lookup\n\t\tunsigned short managementListener = management.getListenerPort();\n\t\tServiceRecord record(m_name,\t\t\t// Service name\n\t\t\t\t\tSERVICE_TYPE,\t\t// Service type\n\t\t\t\t\t\"http\",\t\t\t// Protocol\n\t\t\t\t\t\"localhost\",\t\t// Listening address\n\t\t\t\t\tsport,\t\t\t// Service port\n\t\t\t\t\tmanagementListener,\t// Management port\n\t\t\t\t\tm_token);\t\t// Token\n\n\t\t// Allocate and save ManagementClient object\n\t\tm_mgtClient = new ManagementClient(coreAddress, corePort);\n\n\t\t// Create the audit logger instance\n\t\tm_auditLogger = new AuditLogger(m_mgtClient);\n\n\t\t// Create an empty South category if one doesn't exist\n\t\tDefaultConfigCategory southConfig(string(\"South\"), string(\"{}\"));\n\t\tsouthConfig.setDescription(\"South\");\n\t\tm_mgtClient->addCategory(southConfig, true);\n\n\t\t// Get configuration for service name\n\t\tm_config = m_mgtClient->getCategory(m_name);\n\t\tm_configResourceLimit = m_mgtClient->getCategory(RESOURCE_LIMIT_CATEGORY);\n\t\tif (!loadPlugin())\n\t\t{\n\t\t\tlogger->fatal(\"Failed to load south plugin %s, exiting...\", m_name.c_str());\n\t\t\tstring key = m_name + \"LoadPlugin\";\n\t\t\tm_mgtClient->raiseAlert(key, \"South service \" + m_name + \" is shutting down due to a failure loading the south plugin\");\n\t\t\tmanagement.stop();\n\t\t\treturn;\n\t\t}\n\n\t\tif (southPlugin->hasControl())\n\t\t{\n\t\t\tlogger->info(\"South plugin has a control facility, adding south service API\");\n\t\t}\n\n\t\tif (!m_dryRun)\n\t\t{\n\t\t\tif (!m_mgtClient->registerService(record))\n\t\t\t{\n\t\t\t\tlogger->error(\"Failed to register service %s\", m_name.c_str());\n\t\t\t\tmanagement.stop();\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tConfigCategory features = m_mgtClient->getCategory(\"FEATURES\");\n\t\t\tupdateFeatures(features);\n\n\t\t\t// Register for category content 
changes\n\t\t\tConfigHandler *configHandler = ConfigHandler::getInstance(m_mgtClient);\n\t\t\tconfigHandler->registerCategory(this, m_name);\n\t\t\tconfigHandler->registerCategory(this, m_name+\"Advanced\");\n\t\t\tconfigHandler->registerCategory(this, \"FEATURES\");\n\t\t\tconfigHandler->registerCategory(this, RESOURCE_LIMIT_CATEGORY);\n\t\t}\n\n\t\t// Get a handle on the storage layer\n\t\tServiceRecord storageRecord(\"Fledge Storage\");\n\t\tif (!m_mgtClient->getService(storageRecord))\n\t\t{\n\t\t\tlogger->fatal(\"Unable to find storage service\");\n\t\t\treturn;\n\t\t}\n\t\tlogger->info(\"Connect to storage on %s:%d\",\n\t\t\t\tstorageRecord.getAddress().c_str(),\n\t\t\t\tstorageRecord.getPort());\n\n\t\t\n\t\tStorageClient storage(storageRecord.getAddress(),\n\t\t\t\t\t\tstorageRecord.getPort());\n\t\tstorage.registerManagement(m_mgtClient);\n\n\t\tm_perfMonitor = new PerformanceMonitor(m_name, &storage);\n\t\tunsigned int threshold = 100;\n\t\tlong timeout = 5000;\n\t\tstd::string pluginName;\n\t\ttry {\n\t\t\tif (m_configAdvanced.itemExists(\"bufferThreshold\"))\n\t\t\t\tthreshold = (unsigned int)strtol(m_configAdvanced.getValue(\"bufferThreshold\").c_str(), NULL, 10);\n\t\t\tif (m_configAdvanced.itemExists(\"maxSendLatency\"))\n\t\t\t\ttimeout = strtol(m_configAdvanced.getValue(\"maxSendLatency\").c_str(), NULL, 10);\n\t\t\tif (m_config.itemExists(\"plugin\"))\n\t\t\t\tpluginName = m_config.getValue(\"plugin\");\n\t\t\tif (m_configAdvanced.itemExists(\"logLevel\"))\n\t\t\t{\n\t\t\t\tstring prevLogLevel = logger->getMinLevel();\n\t\t\t\tlogger->setMinLevel(m_configAdvanced.getValue(\"logLevel\"));\n\n\t\t\t\tPluginManager *manager = PluginManager::getInstance();\n\t\t\t\tPLUGIN_TYPE type = manager->getPluginImplType(southPlugin->getHandle());\n\t\t\t\tlogger->debug(\"%s:%d: plugin type = %s\", __FUNCTION__, __LINE__, (type==PYTHON_PLUGIN)?\"PYTHON_PLUGIN\":\"BINARY_PLUGIN\");\n\t\t\t\t\n\t\t\t\tif (type == PYTHON_PLUGIN)\n\t\t\t\t{\n\t\t\t\t\t// propagate 
loglevel changes to python filters/plugins, if present\n\t\t\t\t\tlogger->debug(\"prevLogLevel=%s, m_configAdvanced.getValue(\\\"logLevel\\\")=%s\", prevLogLevel.c_str(), m_configAdvanced.getValue(\"logLevel\").c_str());\n\t\t\t\t\tif (prevLogLevel.compare(m_configAdvanced.getValue(\"logLevel\")) != 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tlogger->debug(\"calling southPlugin->reconfigure() for updating loglevel\");\n\t\t\t\t\t\tsouthPlugin->reconfigure(\"logLevel\");\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (m_configAdvanced.itemExists(\"throttle\"))\n\t\t\t{\n\t\t\t\tstring throt = m_configAdvanced.getValue(\"throttle\");\n\t\t\t\tif (throt[0] == 't' || throt[0] == 'T')\n\t\t\t\t{\n\t\t\t\t\tm_throttle = true;\n\t\t\t\t\tm_highWater = threshold\n\t\t\t\t\t       \t+ (((float)threshold * SOUTH_THROTTLE_HIGH_PERCENT) / 100.0);\n\t\t\t\t\tm_lowWater = threshold\n\t\t\t\t\t       \t+ (((float)threshold * SOUTH_THROTTLE_LOW_PERCENT) / 100.0);\n\t\t\t\t\tlogger->info(\"Throttling is enabled, high water mark is set to %ld\", m_highWater);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tm_throttle = false;\n\t\t\t\t}\n\t\t\t}\n\t\t} catch (ConfigItemNotFound& e) {\n\t\t\tlogger->info(\"Defaulting to inline defaults for south configuration\");\n\t\t}\n\n\t\tm_assetTracker = new AssetTracker(m_mgtClient, m_name);\n\t\tif (m_configAdvanced.itemExists(\"assetTrackerInterval\"))\n\t\t{\n\t\t\tstring interval = m_configAdvanced.getValue(\"assetTrackerInterval\");\n\t\t\tunsigned long i = strtoul(interval.c_str(), NULL, 10);\n\t\t\tif (m_assetTracker)\n\t\t\t\tm_assetTracker->tune(i);\n\t\t}\n\n\t\t{\n\t\t// Instantiate the Ingest class\n\t\tIngest ingest(storage, m_name, pluginName, m_mgtClient);\n\t\tingest.setPerfMon(m_perfMonitor);\n\t\tm_ingest = &ingest;\n\t\tif (m_throttle)\n\t\t{\n\t\t\tm_ingest->setFlowControl(m_lowWater, m_highWater);\n\t\t}\n\n\t\tif 
(m_configAdvanced.itemExists(\"statistics\"))\n\t\t{\n\t\t\tm_ingest->setStatistics(m_configAdvanced.getValue(\"statistics\"));\n\t\t}\n\n\t\tif (m_configAdvanced.itemExists(\"perfmon\"))\n\t\t{\n\t\t\tstring perf = m_configAdvanced.getValue(\"perfmon\");\n\t\t\tif (perf.compare(\"true\") == 0)\n\t\t\t\tm_perfMonitor->setCollecting(true);\n\t\t\telse\n\t\t\t\tm_perfMonitor->setCollecting(false);\n\t\t}\n\n\t\tif (m_configAdvanced.itemExists(\"rateMonitoringInterval\") && m_configAdvanced.itemExists(\"rateSigmaFactor\"))\n\t\t{\n\t\t\tstring s = m_configAdvanced.getValue(\"rateMonitoringInterval\");\n\t\t\tlong interval = strtol(s.c_str(), NULL, 10);\n\t\t\ts = m_configAdvanced.getValue(\"rateSigmaFactor\");\n\t\t\tlong factor = strtol(s.c_str(), NULL, 10);\n\t\t\tingest.configureRateMonitor(interval, factor);\n\n\t\t}\n\n\t\tgetResourceLimit();\n\t\tm_ingest->setResourceLimit(m_serviceBufferingType, m_serviceBufferSize, m_discardPolicy);\n\n\t\tm_ingest->start(timeout, threshold);\t// Start the ingest threads running\n\n\t\ttry {\n\t\t\tm_readingsPerSec = 1;\n\t\t\tif (m_configAdvanced.itemExists(\"readingsPerSec\"))\n\t\t\t\tm_readingsPerSec = (unsigned long)strtol(m_configAdvanced.getValue(\"readingsPerSec\").c_str(), NULL, 10);\n\t\t\tif (m_readingsPerSec < 1)\n\t\t\t{\n\t\t\t\tlogger->warn(\"Invalid setting of reading rate, defaulting to 1\");\n\t\t\t\tm_readingsPerSec = 1;\n\t\t\t}\n\t\t} catch (ConfigItemNotFound& e) {\n\t\t\tlogger->info(\"Defaulting to inline default for poll interval\");\n\t\t}\n\n\t\t// Load filter plugins and set them in the Ingest class\n\t\tif (!ingest.loadFilters(m_name))\n\t\t{\n\t\t\tstring errMsg(\"'\" + m_name + \"' plugin: failed loading filter plugins.\");\n\t\t\tLogger::getLogger()->fatal((errMsg + \" Shutting down south service.\").c_str());\n\t\t\tstring key = m_name + \"LoadPipeline\";\n\t\t\tm_mgtClient->raiseAlert(key, \"South service \" + m_name + \" is shutting down due to a failure to create the data 
pipeline\");\n\t\t\treturn;\n\t\t}\n\n\t\tif (southPlugin->persistData())\n\t\t{\n\t\t\tm_pluginData = new PluginData(new StorageClient(storageRecord.getAddress(),\n                                                storageRecord.getPort()));\n\t\t\tm_dataKey = m_name + m_config.getValue(\"plugin\");\n\t\t}\n\n\t\t// Create default security category\n\t\tthis->createSecurityCategories(m_mgtClient, m_dryRun);\n\n\t\tif (!m_dryRun)\t// If not a dry run then handle readings\n\t\t{\n\t\t\t// Get and ingest data\n\t\t\tif (! southPlugin->isAsync())\n\t\t\t{\n\t\t\t\tcalculateTimerRate();\n\t\t\t\tm_timerfd = createTimerFd(m_desiredRate); // interval to be passed is in usecs\n\t\t\t\tm_currentRate = m_desiredRate;\n\t\t\t\tif (m_timerfd < 0)\n\t\t\t\t{\n\t\t\t\t\tlogger->fatal(\"Could not create timer FD\");\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\t\n\t\t\t\tint pollCount = 0;\n\t\t\t\tstruct timespec start, end;\n\t\t\t\tif (clock_gettime(CLOCK_MONOTONIC, &start) == -1)\n\t\t\t\t   Logger::getLogger()->error(\"polling loop start: clock_gettime\");\n\n\t\t\t\tconst char *pluginInterfaceVer = southPlugin->getInfo()->interface;\n\t\t\t\tbool pollInterfaceV2 = (pluginInterfaceVer[0]=='2' && pluginInterfaceVer[1]=='.');\n\t\t\t\tlogger->info(\"pollInterfaceV2=%s\", pollInterfaceV2?\"true\":\"false\");\n\n\t\t\t\t/*\n\t\t\t\t * Start the plugin. 
If it fails with an exception, retry the start with a delay\n\t\t\t\t * That delay starts at 500ms and will back off to 1 minute\n\t\t\t\t *\n\t\t\t\t * We will continue to retry the start until the service is shut down\n\t\t\t\t */\n\t\t\t\tbool started = false;\n\t\t\t\tint delay = 500;\n\t\t\t\twhile (started == false && m_shutdown == false)\n\t\t\t\t{\n\t\t\t\t\tif (southPlugin->persistData())\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->debug(\"Plugin persists data\");\n\t\t\t\t\t\tstring pluginData = m_pluginData->loadStoredData(m_dataKey);\n\t\t\t\t\t\ttry {\n\t\t\t\t\t\t\tsouthPlugin->startData(pluginData);\n\t\t\t\t\t\t\tstarted = true;\n\t\t\t\t\t\t} catch (...) {\n\t\t\t\t\t\t\tLogger::getLogger()->debug(\"Plugin start raised an exception\");\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->debug(\"Plugin does not persist data\");\n\t\t\t\t\t\tstarted = true;\n\t\t\t\t\t}\n\t\t\t\t\tif (!started)\n\t\t\t\t\t{\n\t\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(delay));\n\t\t\t\t\t\tif (delay < 60 * 1000)\t// Backoff the delay to 1 minute\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tdelay *= 2;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\twhile (!m_shutdown)\n\t\t\t\t{\n\t\t\t\t\tuint64_t exp = 0;\n\t\t\t\t\tssize_t s;\n\t\t\t\t\t\n\t\t\t\t\tif (m_pollType == POLL_FIXED)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (syncToNextPoll())\n\t\t\t\t\t\t\texp = 1;\t// Perform one poll\n\t\t\t\t\t}\n\t\t\t\t\telse if (m_pollType == POLL_INTERVAL)\n\t\t\t\t\t{\n\t\t\t\t\t\tlong rep = m_repeatCnt;\n\t\t\t\t\t\twhile (rep > 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\ts = read(m_timerfd, &exp, sizeof(uint64_t));\n\t\t\t\t\t\t\tif ((unsigned int)s != sizeof(uint64_t))\n\t\t\t\t\t\t\t\tlogger->error(\"timerfd read()\");\n\t\t\t\t\t\t\tif (exp > 100 && exp > m_readingsPerSec/2)\n\t\t\t\t\t\t\t\tlogger->error(\"%lu expiry notifications accumulated\", exp);\n\t\t\t\t\t\t\trep--;\n\t\t\t\t\t\t\tif 
(m_shutdown)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tcheckPendingReconfigure();\n\t\t\t\t\t\t\tif (rep > m_repeatCnt)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t// Reconfigure has resulted in more frequent\n\t\t\t\t\t\t\t\t// polling\n\t\t\t\t\t\t\t\trep = m_repeatCnt;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (m_pollType == POLL_ON_DEMAND)\n\t\t\t\t\t{\n\t\t\t\t\t\tif (onDemandPoll())\n\t\t\t\t\t\t\texp = 1;\n\t\t\t\t\t}\n\t\t\t\t\tif (m_shutdown)\n\t\t\t\t\t{\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n#if DO_CATCHUP\n\t\t\t\t\tfor (uint64_t i=0; i<exp; i++)\n#endif\n\t\t\t\t\t{\n\t\t\t\t\t\tbool doPoll = true;\n\t\t\t\t\t\tif (isSuspended())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tdoPoll = false;\n\t\t\t\t\t\t\tif (willStep())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tdoPoll = true;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (doPoll && (!pollInterfaceV2)) // v1 poll method\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\n\t\t\t\t\t\t\tReading reading = southPlugin->poll();\n\t\t\t\t\t\t\tif (reading.getDatapointCount())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tingest.ingest(reading);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t++pollCount;\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (doPoll)// V2 poll method\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tcheckPendingReconfigure();\n\t\t\t\t\t\t\tReadingSet *set = southPlugin->pollV2();\n\t\t\t\t\t\t\tif (set)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t    std::vector<Reading *> *vec = set->getAllReadingsPtr();\n\t\t\t\t\t\t\t    if (!vec)\n\t\t\t\t\t\t\t    {\n\t\t\t\t\t\t\t\tLogger::getLogger()->info(\"%s:%d: V2 poll method: vec is NULL\", __FUNCTION__, __LINE__);\n\t\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t\t    }\n\t\t\t\t\t\t\t    // move reading vector from set to vec2\n\t\t\t\t\t\t\t\tstd::vector<Reading *> *vec2 = set->moveAllReadings();\n\t\t\t\t\t\t\t\tingest.ingest(vec2);\n\t\t\t\t\t\t\t\tpollCount += (int) vec2->size();\n\t\t\t\t\t\t\t\tdelete vec2; \t// each reading object inside vector has been allocated on heap and moved to Ingest class's internal 
queue\n\t\t\t\t\t\t\t\tdelete set;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tcheckPendingReconfigure();\n\t\t\t\t\t\t}\n\t\t\t\t\t\tthrottlePoll();\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (clock_gettime(CLOCK_MONOTONIC, &end) == -1)\n\t\t\t\t   Logger::getLogger()->error(\"polling loop end: clock_gettime\");\n\t\t\t\t\n\t\t\t\tint secs = end.tv_sec - start.tv_sec;\n\t\t\t\tint nsecs = end.tv_nsec - start.tv_nsec;\n\t\t\t\tif (nsecs < 0)\n\t\t\t\t{\n\t\t\t\t\tsecs--;\n\t\t\t\t\tnsecs += 1000000000;\n\t\t\t\t}\n\t\t\t\tLogger::getLogger()->info(\"%d readings generated in %d.%d secs\", pollCount, secs, nsecs);\n\t\t\t\tclose(m_timerfd);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tconst char *pluginInterfaceVer = southPlugin->getInfo()->interface;\n\t\t\t\tbool pollInterfaceV2 = (pluginInterfaceVer[0]=='2' && pluginInterfaceVer[1]=='.');\n\t\t\t\tLogger::getLogger()->info(\"pluginInterfaceVer=%s, pollInterfaceV2=%s\", pluginInterfaceVer, pollInterfaceV2?\"true\":\"false\");\n\t\t\t\tif (!pollInterfaceV2)\n\t\t\t\t\tsouthPlugin->registerIngest((INGEST_CB)doIngest, &ingest);\n\t\t\t\telse\n\t\t\t\t\tsouthPlugin->registerIngestV2((INGEST_CB2)doIngestV2, &ingest);\n\t\t\t\tbool started = false;\n\t\t\t\tint backoff = 1000;\n\t\t\t\twhile (started == false && m_shutdown == false)\n\t\t\t\t{\n\t\t\t\t\ttry {\n\t\t\t\t\t\tif (southPlugin->persistData())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstring pluginData = m_pluginData->loadStoredData(m_dataKey);\n\t\t\t\t\t\t\tLogger::getLogger()->debug(\"Plugin persists data, %s\", pluginData.c_str());\n\t\t\t\t\t\t\tsouthPlugin->startData(pluginData);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tLogger::getLogger()->debug(\"Plugin does not persist data\");\n\t\t\t\t\t\t\tsouthPlugin->start();\n\t\t\t\t\t\t}\n\t\t\t\t\t\tstarted = true;\n\t\t\t\t\t} catch (...) 
{\n\t\t\t\t\t\tLogger::getLogger()->debug(\"Plugin start raised an exception\");\n\t\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(backoff));\n\t\t\t\t\t\tif (backoff < 60000)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tbackoff *= 2;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\twhile (!m_shutdown)\n\t\t\t\t{\n\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(1000));\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_shutdown = true;\n\t\t\tLogger::getLogger()->info(\"Dryrun of service, shutting down\");\n\t\t}\n\n\t\t// Shutdown the API\n\t\tdelete api;\n\n\t\t// do plugin shutdown before destroying Ingest object on stack\n\t\tif (southPlugin)\n\t\t{\n\t\t\tif (southPlugin->persistData())\n\t\t\t{\n\t\t\t\tstring data = southPlugin->shutdownSaveData();\n\t\t\t\tLogger::getLogger()->debug(\"Persist plugin data, key: '%s' data: '%s' service name: '%s'\", m_dataKey.c_str(), data.c_str(), m_name.c_str());\n\t\t\t\tm_pluginData->persistPluginData(m_dataKey, data, m_name);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tsouthPlugin->shutdown();\n\t\t\t}\n\t\t\tdelete southPlugin;\n\t\t\tsouthPlugin = NULL;\n\t\t}\n\t\t}\n\t\t\n\t\t// Clean shutdown, unregister the storage service\n\t\tif (!m_dryRun)\n\t\t{\n\t\t\tif (m_requestRestart)\n\t\t\t{\n\t\t\t\tm_mgtClient->restartService();\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_mgtClient->unregisterService();\n\t\t\t}\n\t\t}\n\t}\n\tmanagement.stop();\n\tlogger->info(\"South service shutdown %s completed\", m_dryRun ? \"from dry run \" : \"\");\n}\n\n/**\n * @brief Retrieves and processes resource limit configuration for the South Service\n *\n * This function reads the resource limit configuration values from the service configuration,\n * validates them, and sets the corresponding member variables. The function handles three main\n * configuration parameters:\n *\n * 1. Service Buffering Type (Unlimited or Limited)\n * 2. Service Buffer Size (minimum value enforced)\n * 3. 
Discard Policy (Discard Oldest, Discard Newest, or Reduce Fidelity)\n *\n * If any configuration value is invalid or cannot be parsed, the function logs an error and\n * applies default values to ensure the service can continue running.\n *\n * @throws std::exception Catches any exceptions during configuration parsing and applies defaults\n */\nvoid SouthService::getResourceLimit()\n{\n\tauto discardPolicyToString = [](DiscardPolicy policy) -> std::string \n\t{\n\t\tswitch (policy) \n\t\t{\n\t\t\tcase DiscardPolicy::DISCARD_OLDEST:   return \"Discard Oldest\";\n\t\t\tcase DiscardPolicy::REDUCE_FIDELITY: return \"Reduce Fidelity\";\n\t\t\tcase DiscardPolicy::DISCARD_NEWEST:  return \"Discard Newest\";\n\t\t\tdefault: throw std::invalid_argument(\"Invalid DiscardPolicy enum value\");\n\t\t}\n\t};\n\n\tauto serviceBufferingTypeToString = [](ServiceBufferingType type) -> std::string \n\t{\n\t\tswitch (type) \n\t\t{\n\t\t\tcase ServiceBufferingType::UNLIMITED: return \"Unlimited\";\n\t\t\tcase ServiceBufferingType::LIMITED:   return \"Limited\";\n\t\t\tdefault: throw std::invalid_argument(\"Invalid ServiceBufferingType enum value\");\n\t\t}\n\t};\n\n\t// Update the resource limit configuration\n\ttry \n\t{\n\t\t// Parse the service buffering type\n\t\tstd::string serviceBuffering = m_configResourceLimit.getValue(\"serviceBuffering\");\n\t\tif (serviceBuffering == \"Unlimited\") \n\t\t{\n\t\t\tm_serviceBufferingType = ServiceBufferingType::UNLIMITED;\n\t\t} \n\t\telse if (serviceBuffering == \"Limited\") \n\t\t{\n\t\t\tm_serviceBufferingType = ServiceBufferingType::LIMITED;\n\t\t} \n\t\telse\n\t\t{\n\t\t\tm_serviceBufferingType = SERVICE_BUFFER_BUFFER_TYPE_DEFAULT; // Default value\n\t\t\tLogger::getLogger()->error(\"Invalid 'Service Buffering Type' configuration value: '%s'. 
Default value '%s' has been applied.\", serviceBuffering.c_str(), serviceBufferingTypeToString(m_serviceBufferingType).c_str());\n\t\t}\n\n\t\t// Parse and validate the service buffer size\n\t\ttry \n\t\t{\n\t\t\tstd::string bufferSizeStr = m_configResourceLimit.getValue(\"serviceBufferSize\");\n\t\t\tm_serviceBufferSize = (unsigned int)std::stoi(bufferSizeStr); // Convert to integer\n\n\t\t\tif (m_serviceBufferSize < SERVICE_BUFFER_SIZE_MIN)\n\t\t\t{\n\t\t\t\tm_serviceBufferSize = SERVICE_BUFFER_SIZE_DEFAULT; // Default value\n\t\t\t\tLogger::getLogger()->error(\"Invalid 'Service Buffer Size' value: '%s'. The value must be at least %d. Default value of %d has been applied.\", bufferSizeStr.c_str(), SERVICE_BUFFER_SIZE_MIN, m_serviceBufferSize);\n\t\t\t}\n\t\t} \n\t\tcatch (const std::exception& e)\n\t\t{\n\t\t\t// Handle conversion errors and out-of-range values\n\t\t\tm_serviceBufferSize = SERVICE_BUFFER_SIZE_DEFAULT; // Default value\n\t\t\tLogger::getLogger()->error(\"Failed to parse 'serviceBufferSize': %s. Default value '%d' has been applied.\", e.what(), m_serviceBufferSize);\n\t\t}\n\n\t\t// Parse the discard policy\n\t\tstd::string discardPolicy = m_configResourceLimit.getValue(\"discardPolicy\");\n\t\tif (discardPolicy == \"Discard Oldest\") \n\t\t{\n\t\t\tm_discardPolicy = DiscardPolicy::DISCARD_OLDEST;\n\t\t} \n\t\telse if (discardPolicy == \"Discard Newest\") \n\t\t{\n\t\t\tm_discardPolicy = DiscardPolicy::DISCARD_NEWEST;\n\t\t} \n\t\telse if (discardPolicy == \"Reduce Fidelity\") \n\t\t{\n\t\t\tm_discardPolicy = DiscardPolicy::REDUCE_FIDELITY;\n\t\t} \n\t\telse \n\t\t{\n\t\t\tm_discardPolicy = SERVICE_BUFFER_DISCARD_POLICY_DEFAULT; // Default value\n\t\t\tLogger::getLogger()->error(\"Invalid 'Discard Policy' configuration value: '%s'. 
Default value '%s' has been applied.\", discardPolicy.c_str(), discardPolicyToString(m_discardPolicy).c_str());\n\t\t}\n\n\t\tLogger::getLogger()->info(\"Resource Limit configuration applied successfully: \"\n\t\t\t\"Service Buffering Type: '%s', \"\n\t\t\t\"Service Buffer Size: '%d', \"\n\t\t\t\"Discard Policy: '%s'.\",\n\t\t\tserviceBufferingTypeToString(m_serviceBufferingType).c_str(),\n\t\t\tm_serviceBufferSize,\n\t\t\tdiscardPolicyToString(m_discardPolicy).c_str());\n\n\t} \n\tcatch (const std::exception& e) \n\t{\n\t\t// Catch any other exceptions and log the error\n\t\tLogger::getLogger()->error(\"Failed to update resource limit configuration due to an exception: %s. Default values will be applied to ensure system stability.\", e.what());\n\n\t\t// Set default values to ensure the system can continue running\n\t\tm_serviceBufferingType = SERVICE_BUFFER_BUFFER_TYPE_DEFAULT;\n\t\tm_serviceBufferSize = SERVICE_BUFFER_SIZE_DEFAULT;\n\t\tm_discardPolicy = SERVICE_BUFFER_DISCARD_POLICY_DEFAULT;\n\n\t\tLogger::getLogger()->info(\"Default configuration applied: \"\n\t\t\t\t\t\t\t\t\"Service Buffering Type: '%s', \"\n\t\t\t\t\t\t\t\t\"Service Buffer Size: '%d', \"\n\t\t\t\t\t\t\t\t\"Discard Policy: '%s'.\",\n\t\t\t\t\t\t\t\tserviceBufferingTypeToString(m_serviceBufferingType).c_str(),\n\t\t\t\t\t\t\t\tm_serviceBufferSize,\n\t\t\t\t\t\t\t\tdiscardPolicyToString(m_discardPolicy).c_str());\n\t}\n}\n\n/**\n * Stop the storage service/\n */\nvoid SouthService::stop()\n{\n\n\tlogger->info(\"Stopping south service...\\n\");\n}\n\n/**\n * Creates config categories and sub categories recursively, along with their parent-child relations\n */\nvoid SouthService::createConfigCategories(DefaultConfigCategory configCategory, std::string parent_name, std::string current_name)\n{\n\n\t// Deal with registering and fetching the configuration\n\tDefaultConfigCategory defConfig(configCategory);\n\tdefConfig.setDescription(current_name);\t// TODO We do not have access to the 
description\n\n\tDefaultConfigCategory defConfigCategoryOnly(defConfig);\n\tdefConfigCategoryOnly.keepItemsType(ConfigCategory::ItemType::CategoryType);\n\tdefConfig.removeItemsType(ConfigCategory::ItemType::CategoryType);\n\n\t// Create/Update category name (we pass keep_original_items=true)\n\tm_mgtClient->addCategory(defConfig, true);\n\n\t// Add this service under 'South' parent category\n\tvector<string> children;\n\tchildren.push_back(current_name);\n\tm_mgtClient->addChildCategories(parent_name, children);\n\n\t// Adds sub categories to the configuration\n\tbool extracted = true;\n\tConfigCategory subCategory;\n\twhile (extracted) {\n\n\t\textracted = subCategory.extractSubcategory(defConfigCategoryOnly);\n\n\t\tif (extracted) {\n\t\t\tDefaultConfigCategory defSubCategory(subCategory);\n\n\t\t\tcreateConfigCategories(defSubCategory, current_name, subCategory.getName());\n\n\t\t\t// Cleans the category\n\t\t\tsubCategory.removeItems();\n\t\t\tsubCategory = ConfigCategory() ;\n\t\t}\n\t}\n\n}\n\n/**\n * Load the configured south plugin\n *\n * TODO Should search for the plugin in specified locations\n */\nbool SouthService::loadPlugin()\n{\n\ttry {\n\t\tPluginManager *manager = PluginManager::getInstance();\n\n\t\tif (! 
m_config.itemExists(\"plugin\"))\n\t\t{\n\t\t\tlogger->error(\"Unable to fetch plugin name from configuration.\\n\");\n\t\t\treturn false;\n\t\t}\n\t\tstring plugin = m_config.getValue(\"plugin\");\n\t\tlogger->info(\"Loading south plugin %s.\", plugin.c_str());\n\t\tPLUGIN_HANDLE handle;\n\t\tif ((handle = manager->loadPlugin(plugin, PLUGIN_TYPE_SOUTH)) != NULL)\n\t\t{\n\t\t\t// Adds categories and sub categories to the configuration\n\t\t\tDefaultConfigCategory defConfig(m_name, manager->getInfo(handle)->config);\n\t\t\tcreateConfigCategories(defConfig, string(\"South\"), m_name);\n\n\t\t\t// Must now reload the configuration to obtain any items added from\n\t\t\t// the plugin\n\t\t\t// Removes all the m_items already present in the category\n\t\t\tm_config.removeItems();\n\t\t\tm_config = m_mgtClient->getCategory(m_name);\n\t\t\tm_config.addItem(\"mgmt_client_url_base\", \"Management client host and port\",\n                             \"string\", \"127.0.0.1:0\",\n                             m_mgtClient->getUrlbase());\n\t\t\ttry {\n\t\t\t\tsouthPlugin = new SouthPlugin(handle, m_config);\n\t\t\t} catch (...) 
{\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\t// Deal with registering and fetching the advanced configuration\n\t\t\tstring advancedCatName = m_name+string(\"Advanced\");\n\t\t\tDefaultConfigCategory defConfigAdvanced(advancedCatName, string(\"{}\"));\n\t\t\taddConfigDefaults(defConfigAdvanced);\n\t\t\tdefConfigAdvanced.setDescription(m_name+string(\" advanced config params\"));\n\n\t\t\t// Create/Update category name (we pass keep_original_items=true)\n\t\t\tm_mgtClient->addCategory(defConfigAdvanced, true);\n\n\t\t\t// Add this service under 'm_name' parent category\n\t\t\tvector<string> children1;\n\t\t\tchildren1.push_back(advancedCatName);\n\t\t\tm_mgtClient->addChildCategories(m_name, children1);\n\n\t\t\t// Must now reload the merged configuration\n\t\t\tm_configAdvanced = m_mgtClient->getCategory(advancedCatName);\n\n\t\t\treturn true;\n\t\t}\n\t} catch (exception& e) {\n\t\tlogger->fatal(\"Failed to load south plugin: %s\\n\", e.what());\n\t}\n\treturn false;\n}\n\n/**\n * Shutdown request\n */\nvoid SouthService::shutdown()\n{\n\t/* Stop receiving new requests and allow existing\n\t * requests to drain.\n\t */\n\tif (m_pollType == POLL_ON_DEMAND)\n\t{\n\t\tlock_guard<mutex> lk(m_pollMutex);\n\t\tm_shutdown = true;\n\t\tm_pollCV.notify_all();\n\t}\n\telse\n\t{\n\t\tm_shutdown = true;\n\t}\n\tlogger->info(\"South service shutdown in progress.\");\n}\n\n/**\n * Restart request\n */\nvoid SouthService::restart()\n{\n\t/* Stop receiving new requests and allow existing\n\t * requests to drain.\n\t */\n\tm_requestRestart = true;\n\tm_shutdown = true;\n\tlogger->info(\"South service shutdown for restart in progress.\");\n}\n\n/**\n * Configuration change notification\n *\n * @param categoryName\tCategory name\n * @param category\tCategory value\n */\nvoid SouthService::processConfigChange(const string& categoryName, const string& category)\n{\n\tlogger->info(\"Configuration change in category %s: %s\", categoryName.c_str(),\n\t\t\tcategory.c_str());\n\tif 
(categoryName.compare(m_name) == 0)\n\t{\n\t\tm_config = ConfigCategory(m_name, category);\n\t\ttry {\n\t\t\tsouthPlugin->reconfigure(category);\n\t\t}\n\t\tcatch (...) {\n\t\t\tlogger->fatal(\"Unrecoverable failure during South plugin reconfigure, south service exiting...\");\n\t\t\tshutdown();\n\t\t}\n\t\t// Let ingest class check for changes to filter pipeline\n\t\tm_ingest->configChange(categoryName, category);\n\t}\n\tif (categoryName.compare(m_name+\"Advanced\") == 0)\n\t{\n\t\t// Propagate advanced configuration changes to the ingest class always\n\t\tm_ingest->configChange(categoryName, category);\n\t\tm_configAdvanced = ConfigCategory(m_name+\"Advanced\", category);\n\t\tif (m_configAdvanced.itemExists(\"statistics\"))\n\t\t{\n\t\t\tm_ingest->setStatistics(m_configAdvanced.getValue(\"statistics\"));\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"perfmon\"))\n\t\t{\n\t\t\tstring perf = m_configAdvanced.getValue(\"perfmon\");\n\t\t\tif (perf.compare(\"true\") == 0)\n\t\t\t\tm_perfMonitor->setCollecting(true);\n\t\t\telse\n\t\t\t\tm_perfMonitor->setCollecting(false);\n\t\t}\n\t\tif (! 
southPlugin->isAsync())\n\t\t{\n\t\t\ttry {\n\t\t\t\tunsigned long newval = (unsigned long)strtol(m_configAdvanced.getValue(\"readingsPerSec\").c_str(), NULL, 10);\n\t\t\t\tif (newval < 1)\n\t\t\t\t{\n\t\t\t\t\tlogger->warn(\"Invalid setting of reading rate, defaulting to 1\");\n\t\t\t\t\tnewval = 1;\t// Use the default rate below as well as recording it\n\t\t\t\t\tm_readingsPerSec = 1;\n\t\t\t\t}\n\t\t\t\tstring units = m_configAdvanced.getValue(\"units\");\n\t\t\t\tstring pollType = m_configAdvanced.getValue(\"pollType\");\n\t\t\t\tbool wakeup = false;\n\t\t\t\tif (m_pollType == POLL_ON_DEMAND)\n\t\t\t\t{\n\t\t\t\t\twakeup = true;\n\t\t\t\t}\n\t\t\t\tif (pollType.compare(\"Fixed Times\") == 0)\n\t\t\t\t{\n\t\t\t\t\tm_pollType = POLL_FIXED;\n\t\t\t\t\tprocessNumberList(m_configAdvanced, \"pollHours\", m_hours);\n\t\t\t\t\tprocessNumberList(m_configAdvanced, \"pollMinutes\", m_minutes);\n\t\t\t\t\tprocessNumberList(m_configAdvanced, \"pollSeconds\", m_seconds);\n\n\t\t\t\t\tif (m_minutes.size() == 0 && m_hours.size() != 0)\n\t\t\t\t\t\tm_minutes.push_back(0);\n\t\t\t\t\tif (m_seconds.size() == 0 && m_minutes.size() != 0)\n\t\t\t\t\t\tm_seconds.push_back(0);\n\n\t\t\t\t\tm_desiredRate.tv_sec  = 1;\n\t\t\t\t\tm_desiredRate.tv_usec = 0;\n\t\t\t\t\tif (wakeup)\n\t\t\t\t\t{\n\t\t\t\t\t\t// Wakeup from on demand polling\n\t\t\t\t\t\tm_pollCV.notify_all();\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse if (pollType.compare(\"Interval\") == 0\n\t\t\t\t\t\t&& (newval != m_readingsPerSec || m_rateUnits.compare(units) != 0))\n\t\t\t\t{\n\t\t\t\t\tm_pollType = POLL_INTERVAL;\n\t\t\t\t\tm_readingsPerSec = newval;\n\t\t\t\t\tm_rateUnits = units;\n\t\t\t\t\tclose(m_timerfd);\n\t\t\t\t\tcalculateTimerRate();\n\t\t\t\t\tm_currentRate = m_desiredRate;\n\t\t\t\t\tm_timerfd = createTimerFd(m_desiredRate); // interval to be passed is in usecs\n\t\t\t\t\tif (wakeup)\n\t\t\t\t\t{\n\t\t\t\t\t\t// Wakeup from on demand polling\n\t\t\t\t\t\tm_pollCV.notify_all();\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse if (pollType.compare(\"Interval\") == 0 && m_pollType != 
POLL_INTERVAL)\n\t\t\t\t{\n\t\t\t\t\t// Change to interval mode without the rate changing\n\t\t\t\t\tm_pollType = POLL_INTERVAL;\n\t\t\t\t\tif (wakeup)\n\t\t\t\t\t{\n\t\t\t\t\t\t// Wakeup from on demand polling\n\t\t\t\t\t\tm_pollCV.notify_all();\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse if (pollType.compare(\"On Demand\") == 0)\n\t\t\t\t{\n\t\t\t\t\tm_pollType = POLL_ON_DEMAND;\n\t\t\t\t}\n\t\t\t} catch (ConfigItemNotFound& e) {\n\t\t\t\tlogger->error(\"Failed to update poll interval following configuration change\");\n\t\t\t}\n\t\t}\n\t\tunsigned long threshold = 5000;\t// This should never be used\n\t\tif (m_configAdvanced.itemExists(\"bufferThreshold\"))\n\t\t{\n\t\t\tthreshold = (unsigned long)strtol(m_configAdvanced.getValue(\"bufferThreshold\").c_str(), NULL, 10);\n\t\t\tm_ingest->setThreshold(threshold);\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"maxSendLatency\"))\n\t\t{\n\t\t\tm_ingest->setTimeout(strtol(m_configAdvanced.getValue(\"maxSendLatency\").c_str(), NULL, 10));\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"logLevel\"))\n\t\t{\n\t\t\tstring prevLogLevel = logger->getMinLevel();\n\t\t\tlogger->setMinLevel(m_configAdvanced.getValue(\"logLevel\"));\n\n\t\t\tPluginManager *manager = PluginManager::getInstance();\n\t\t\tPLUGIN_TYPE type = manager->getPluginImplType(southPlugin->getHandle());\n\t\t\tlogger->debug(\"%s:%d: South plugin type = %s\", __FUNCTION__, __LINE__, (type==PYTHON_PLUGIN)?\"PYTHON_PLUGIN\":\"BINARY_PLUGIN\");\n\n\t\t\tif (type == PYTHON_PLUGIN)\n\t\t\t{\n\t\t\t\t// propagate loglevel changes to python filters/plugins, if present\n\t\t\t\tlogger->debug(\"prevLogLevel=%s, m_configAdvanced.getValue(\\\"logLevel\\\")=%s\", prevLogLevel.c_str(), m_configAdvanced.getValue(\"logLevel\").c_str());\n\t\t\t\tif (prevLogLevel.compare(m_configAdvanced.getValue(\"logLevel\")) != 0)\n\t\t\t\t{\n\t\t\t\t\tlogger->debug(\"%s:%d: calling southPlugin->reconfigure() for updating loglevel\", __FUNCTION__, 
__LINE__);\n\t\t\t\t\tsouthPlugin->reconfigure(\"logLevel\");\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"throttle\"))\n\t\t{\n\t\t\tstring throt = m_configAdvanced.getValue(\"throttle\");\n\t\t\tif (throt[0] == 't' || throt[0] == 'T')\n\t\t\t{\n\t\t\t\tm_throttle = true;\n\t\t\t\tm_highWater = threshold\n\t\t\t\t       \t+ (((float)threshold * SOUTH_THROTTLE_HIGH_PERCENT) / 100.0);\n\t\t\t\tm_lowWater = threshold\n\t\t\t\t       \t+ (((float)threshold * SOUTH_THROTTLE_LOW_PERCENT) / 100.0);\n\t\t\t\tlogger->info(\"Throttling is enabled, high water mark is set to %ld\", m_highWater);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_throttle = false;\n\t\t\t}\n\t\t}\n\t\tif (m_configAdvanced.itemExists(\"assetTrackerInterval\"))\n\t\t{\n\t\t\tstring interval = m_configAdvanced.getValue(\"assetTrackerInterval\");\n\t\t\tunsigned long i = strtoul(interval.c_str(), NULL, 10);\n\t\t\tif (m_assetTracker)\n\t\t\t\tm_assetTracker->tune(i);\n\t\t}\n\t}\n\n\t// Update the  Security category\n\tif (categoryName.compare(m_name+\"Security\") == 0)\n\t{\n\t\tthis->updateSecurityCategory(category);\n\t}\n\n\t// Deal with changes to the features settings\n\tif (categoryName.compare(\"FEATURES\") == 0)\n\t{\n\t\tthis->updateFeatures(ConfigCategory(\"FEATURES\", category));\n\t}\n\n\tif(categoryName.compare(RESOURCE_LIMIT_CATEGORY) == 0)\n\t{\n\t\tm_configResourceLimit = ConfigCategory(RESOURCE_LIMIT_CATEGORY, category);\n\t\tgetResourceLimit();\n\t\tm_ingest->setResourceLimit(m_serviceBufferingType, m_serviceBufferSize, m_discardPolicy);\n\t}\n}\n\n/**\n * Separate thread to run plugin_reconf, to avoid blocking \n * service's management interface due to long plugin_poll calls\n */\nstatic void reconfThreadMain(void *arg)\n{\n\tSouthService *ss = (SouthService *)arg;\n\tLogger::getLogger()->info(\"reconfThreadMain(): Spawned new thread for plugin reconf\");\n\tss->handlePendingReconf();\n\tLogger::getLogger()->info(\"reconfThreadMain(): plugin reconf thread 
exiting\");\n}\n\n/**\n * Handle configuration change notification; called by reconf thread\n * Waits for some reconf operation(s) to get queued up, then works thru' them\n */\nvoid SouthService::handlePendingReconf()\n{\n\twhile (isRunning())\n\t{\n\t\tLogger::getLogger()->debug(\"SouthService::handlePendingReconf: Going into cv wait\");\n\t\tmutex mtx;\n\t\tunique_lock<mutex> lck(mtx);\n\t\tm_cvNewReconf.wait(lck);\n\t\tLogger::getLogger()->debug(\"SouthService::handlePendingReconf: cv wait has completed; some reconf request(s) has/have been queued up\");\n\t\tunsigned int numPendingReconfs = 0;\n\t\t{\n\t\t\tlock_guard<mutex> guard(m_pendingNewConfigMutex);\n\t\t\tnumPendingReconfs = m_pendingNewConfig.size();\n\t\t}\n\t\twhile (isRunning() && numPendingReconfs)\n\t\t{\n\t\t\tstd::pair<std::string,std::string> reconfValue;\n\t\t\t{\n\t\t\t\treconfValue = m_pendingNewConfig.front();\n\t\t\t\tm_pendingNewConfig.pop_front();\n\t\t\t}\n\t\t\t{\n\t\t\t\tstring categoryName = reconfValue.first;\n\t\t\t\tstring category = reconfValue.second;\n\t\t\t\tlogger->info(\"Handle config change %s, %s\",\n\t\t\t\t\t\tcategoryName.c_str(), category.c_str());\n\t\t\t\tprocessConfigChange(categoryName, category);\n\t\t\t}\n\t\t\t{\n\t\t\t\tlock_guard<mutex> guard(m_pendingNewConfigMutex);\n\t\t\t\tnumPendingReconfs = m_pendingNewConfig.size();\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * Configuration change notification using a separate thread\n *\n * @param categoryName\tCategory name\n * @param category\tCategory value\n */\nvoid SouthService::configChange(const string& categoryName, const string& category)\n{\n\t{\n\t\tlock_guard<mutex> guard(m_pendingNewConfigMutex);\n\t\tm_pendingNewConfig.emplace_back(std::make_pair(categoryName, category));\n\t\tLogger::getLogger()->debug(\"SouthService::reconfigure(): After adding new entry, m_pendingNewConfig.size()=%d\", m_pendingNewConfig.size());\n\n\t\tm_cvNewReconf.notify_all();\n\t}\n}\n\n/**\n * Add the generic south service configuration 
options to the advanced\n * category\n *\n * @param defaultConfig\tThe default configuration from the plugin\n */\nvoid SouthService::addConfigDefaults(DefaultConfigCategory& defaultConfig)\n{\n\tbool isAsync = southPlugin->isAsync();\n\tfor (int i = 0; defaults[i].name; i++)\n\t{\n\t\tif (strcmp(defaults[i].name, \"readingsPerSec\") == 0 && isAsync)\n\t\t{\n\t\t\tcontinue;\n\t\t}\n\t\tdefaultConfig.addItem(defaults[i].name, defaults[i].description,\n\t\t\tdefaults[i].type, defaults[i].value, defaults[i].value);\n\t\tdefaultConfig.setItemDisplayName(defaults[i].name, defaults[i].displayName);\n\t\tif (!strcmp(defaults[i].name, \"readingsPerSec\"))\n\t\t{\n\t\t\tdefaultConfig.setItemAttribute(defaults[i].name, ConfigCategory::MINIMUM_ATTR, \"1\");\n\t\t}\n\t}\n\n\tdefaultConfig.setItemAttribute(\"maxSendLatency\", ConfigCategory::MAXIMUM_ATTR, to_string(MAXSENDLATENCY));\n\tdefaultConfig.setItemAttribute(\"maxSendLatency\", ConfigCategory::MINIMUM_ATTR, \"0\");\n\n\tif (!isAsync)\n\t{\n\t\t/* Add the reading rate units */\n\t\tvector<string>\trateUnits = { \"second\", \"minute\", \"hour\" };\n\t\tdefaultConfig.addItem(\"units\", \"Reading Rate Per\",\n\t\t\t\t\"second\", \"second\", rateUnits);\n\t\tdefaultConfig.setItemDisplayName(\"units\", \"Reading Rate Per\");\n\n\t\t/* Now add the fixed time polling option */\n\t\tvector<string> pollOptions = { \"Interval\", \"Fixed Times\", \"On Demand\" };\n\t\tdefaultConfig.addItem(\"pollType\", \"Either poll at fixed intervals, at fixed times or when triggered by a poll control operation.\",\n\t\t\t\t\"Interval\", \"Interval\", pollOptions);\n\t\tdefaultConfig.setItemDisplayName(\"pollType\", \"Poll Type\");\n\n\t\t/* Add the validity for interval polling items */\n\t\tdefaultConfig.setItemAttribute(\"readingsPerSec\",\n\t\t\t\tConfigCategory::VALIDITY_ATTR, \"pollType == \\\"Interval\\\"\");\n\t\tdefaultConfig.setItemAttribute(\"units\",\n\t\t\t\tConfigCategory::VALIDITY_ATTR, \"pollType == 
\\\"Interval\\\"\");\n\t\tdefaultConfig.setItemAttribute(\"throttle\",\n\t\t\t\tConfigCategory::VALIDITY_ATTR, \"pollType == \\\"Interval\\\"\");\n\n\t\t/* Add the three time specifiers */\n\t\tdefaultConfig.addItem(\"pollHours\",\n\t\t\t\t\"List of hours on which to poll or leave empty for all hours\",\n\t\t\t\t\"string\", \"\", \"\");\n\t\tdefaultConfig.setItemDisplayName(\"pollHours\", \"Hours\");\n\t\tdefaultConfig.setItemAttribute(\"pollHours\",\n\t\t\t\tConfigCategory::VALIDITY_ATTR, \"pollType == \\\"Fixed Times\\\"\");\n\t\tdefaultConfig.addItem(\"pollMinutes\",\n\t\t\t\t\"List of minutes on which to poll or leave empty for all minutes\",\n\t\t\t\t\"string\", \"\", \"\");\n\t\tdefaultConfig.setItemDisplayName(\"pollMinutes\", \"Minutes\");\n\t\tdefaultConfig.setItemAttribute(\"pollMinutes\",\n\t\t\t\tConfigCategory::VALIDITY_ATTR, \"pollType == \\\"Fixed Times\\\"\");\n\t\tdefaultConfig.addItem(\"pollSeconds\",\n\t\t\t\t\"Seconds on which to poll expressed as a comma seperated list\",\n\t\t\t\t\"string\", \"0,15,30,45\", \"0,15,30,40\");\n\t\tdefaultConfig.setItemDisplayName(\"pollSeconds\", \"Seconds\");\n\t\tdefaultConfig.setItemAttribute(\"pollSeconds\",\n\t\t\t\tConfigCategory::VALIDITY_ATTR, \"pollType == \\\"Fixed Times\\\"\");\n\t}\n\n\tif (southPlugin->hasControl())\n\t{\n\t\tdefaultConfig.addItem(\"control\", \"Allow write and control operations on the device\",\n\t\t\t       \"boolean\", \"true\", \"true\");\n\t\tdefaultConfig.setItemDisplayName(\"control\", \"Allow Control\");\n\t}\n\n\t/* Add the set of logging levels to the service */\n\tvector<string>\tlogLevels = { \"error\", \"warning\", \"info\", \"debug\" };\n\tdefaultConfig.addItem(\"logLevel\", \"Minimum logging level reported\",\n\t\t\t\"warning\", \"warning\", logLevels);\n\tdefaultConfig.setItemDisplayName(\"logLevel\", \"Minimum Log Level\");\n\n\t/* Add the set of logging levels to the service */\n\tvector<string>\tstatistics = { \"per asset\", \"per service\", \"per asset & 
service\" };\n\tdefaultConfig.addItem(\"statistics\", \"Collect statistics either for every asset ingested, for the service in total or both\",\n\t\t\t\"per asset & service\", \"per asset & service\", statistics);\n\tdefaultConfig.setItemDisplayName(\"statistics\", \"Statistics Collection\");\n\tdefaultConfig.addItem(\"perfmon\", \"Track and store performance counters\",\n\t\t\t       \"boolean\", \"false\", \"false\");\n\tdefaultConfig.setItemDisplayName(\"perfmon\", \"Performance Counters\");\n\n\t// Rate Monitoring options\n\tdefaultConfig.addItem(\"rateMonitoringInterval\",\n\t\t\t\t\"The interval in minutes to use when calculating average ingestion rates for monitoring the service ingestion\",\n\t\t\t\t\"integer\", \"1\", \"1\");\n\tdefaultConfig.setItemDisplayName(\"rateMonitoringInterval\", \"Monitoring Period\");\n\tdefaultConfig.setItemAttribute(\"rateMonitoringInterval\", ConfigCategory::MINIMUM_ATTR, \"0\");\n\tdefaultConfig.addItem(\"rateSigmaFactor\",\n\t\t\t\t\"The sensitivity of the ingest rate monitor, expressed as a number of standard deviations of the average ingest rate.\",\n\t\t\t\t\"integer\", \"3\", \"3\");\n\tdefaultConfig.setItemDisplayName(\"rateSigmaFactor\", \"Monitoring Sensitivity\");\n\tdefaultConfig.setItemAttribute(\"rateSigmaFactor\", ConfigCategory::MINIMUM_ATTR, \"1\");\n}\n\n/**\n * Create a timer FD on which a read would return data every time the given \n * interval elapses\n *\n * @param usecs\t Time in micro-secs after which data would be available on the timer FD\n */\nint SouthService::createTimerFd(struct timeval rate)\n{\n\tint fd = -1;\n\tstruct itimerspec new_value;\n\tstruct timespec now;\n\n\tif (clock_gettime(CLOCK_REALTIME, &now) == -1)\n\t   Logger::getLogger()->error(\"clock_gettime\");\n\n\tnew_value.it_value.tv_sec = now.tv_sec + rate.tv_sec;\n\tnew_value.it_value.tv_nsec = now.tv_nsec + rate.tv_usec*1000;\n\tif (new_value.it_value.tv_nsec >= 1000000000)\n\t{\n\t\tnew_value.it_value.tv_sec += 
new_value.it_value.tv_nsec/1000000000;\n\t\tnew_value.it_value.tv_nsec %= 1000000000;\n\t}\n\n\tnew_value.it_interval.tv_sec = rate.tv_sec;\n\tnew_value.it_interval.tv_nsec = rate.tv_usec*1000;\n\tif (new_value.it_interval.tv_nsec >= 1000000000)\n\t{\n\t\tnew_value.it_interval.tv_sec += new_value.it_interval.tv_nsec/1000000000;\n\t\tnew_value.it_interval.tv_nsec %= 1000000000;\n\t}\n\n\terrno=0;\n\tfd = timerfd_create(CLOCK_REALTIME, 0);\n\tif (fd == -1)\n\t{\n\t\tLogger::getLogger()->error(\"timerfd_create failed, errno=%d (%s)\", errno, strerror(errno));\n\t\treturn fd;\n\t}\n\n\tif (timerfd_settime(fd, TFD_TIMER_ABSTIME, &new_value, NULL) == -1)\n\t{\n\t\tLogger::getLogger()->error(\"timerfd_settime failed, errno=%d (%s)\", errno, strerror(errno));\n\t\tclose(fd);\n\t\treturn -1;\n\t}\n\n\treturn fd;\n}\n\n/**\n * If enabled, control the throttling of the poll rate in order to keep\n * the buffer usage of the service in check.\n *\n * Although this is written as if the rate is being controlled, which it\n * logically is, the actual values are poll intervals. 
Hence reducing\n * the poll rate increases the value of m_currentRate.\n */\nvoid SouthService::throttlePoll()\n{\nstruct timeval now, res;\n\n\tif (!m_throttle)\n\t{\n\t\treturn;\n\t}\n\tdouble desired = m_desiredRate.tv_sec + ((double)m_desiredRate.tv_usec / 1000000);\n\tdesired *= m_repeatCnt;\n\tgettimeofday(&now, NULL);\n\ttimersub(&now, &m_lastThrottle, &res);\n\tif (m_ingest->queueLength() > m_highWater && res.tv_sec > SOUTH_THROTTLE_DOWN_INTERVAL)\n\t{\n\t\tdouble rate = m_currentRate.tv_sec + ((double)m_currentRate.tv_usec / 1000000);\n\t\trate *= (1.0 + ((double)SOUTH_THROTTLE_PERCENT / 100.0));\n\t\tif (rate > MAX_SLEEP * 1000000)\n\t\t{\n\t\t\tdouble x = rate / (MAX_SLEEP * 1000000);\n\t\t\tm_repeatCnt = ceil(x);\n\t\t\trate /= m_repeatCnt;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_repeatCnt = 1;\n\t\t}\n\t\tm_currentRate.tv_sec = (long)rate;\n\t\tm_currentRate.tv_usec = (rate - m_currentRate.tv_sec) * 1000000;\n\t\tclose(m_timerfd);\n\t\tm_timerfd = createTimerFd(m_currentRate); // interval to be passed is in usecs\n\t\tm_lastThrottle = now;\n\t\tm_throttled = true;\n\t\tlogger->warn(\"%s Throttled down poll, rate is now %.1f%% of desired rate\", m_name.c_str(), (desired * 100) / rate);\n\t\tm_perfMonitor->collect(\"throttled rate\", (long)(rate * 1000));\n\t}\n\telse if (m_throttled && m_ingest->queueLength() < m_lowWater && res.tv_sec > SOUTH_THROTTLE_UP_INTERVAL)\n\t{\n\t\t// We are currently throttled back but the queue is below the low water mark\n\t\ttimersub(&m_desiredRate, &m_currentRate, &res);\n\t\tif (res.tv_sec != 0 || res.tv_usec != 0)\n\t\t{\n\t\t\tdouble rate = m_currentRate.tv_sec + ((double)m_currentRate.tv_usec / 1000000);\n\t\t\trate *= (1.0 - ((double)SOUTH_THROTTLE_PERCENT / 100.0));\n\t\t\tif (rate > MAX_SLEEP * 1000000)\n\t\t\t{\n\t\t\t\tdouble x = rate / (MAX_SLEEP * 1000000);\n\t\t\t\tm_repeatCnt = ceil(x);\n\t\t\t\trate /= m_repeatCnt;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tm_repeatCnt = 1;\n\t\t\t}\n\t\t\tm_currentRate.tv_sec = 
(long)rate;\n\t\t\tm_currentRate.tv_usec = (rate - m_currentRate.tv_sec) * 1000000;\n\t\t\tif (m_currentRate.tv_sec <= m_desiredRate.tv_sec\n\t\t\t\t\t&& m_currentRate.tv_usec < m_desiredRate.tv_usec)\n\t\t\t{\n\t\t\t\tm_currentRate = m_desiredRate;\n\t\t\t\tm_throttled = false;\n\t\t\t\tlogger->warn(\"%s Poll rate returned to configured value\", m_name.c_str());\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tlogger->warn(\"%s Throttled up poll, rate is now %.1f%% of desired rate\", m_name.c_str(), (desired * 100) / rate);\n\t\t\t}\n\t\t\tm_perfMonitor->collect(\"throttled rate\", (long)(rate * 1000));\n\t\t\tclose(m_timerfd);\n\t\t\tm_timerfd = createTimerFd(m_currentRate); // interval to be passed is in usecs\n\t\t\tm_lastThrottle = now;\n\t\t}\n\t}\n}\n\n/**\n * Perform a setPoint operation on the south plugin\n *\n * @param name\tName of the point to set\n * @param value\tThe value to set\n * @return\tSuccess or failure of the SetPoint operation\n */\nbool SouthService::setPoint(const string& name, const string& value)\n{\n\tif (southPlugin->hasControl())\n\t{\n\t\treturn southPlugin->write(name, value);\n\t}\n\telse\n\t{\n\t\tlogger->warn(\"SetPoint operation %s = %s attempted on plugin that does not support control\", name.c_str(), value.c_str());\n\t\treturn false;\n\t}\n}\n\n/**\n * Perform an operation on the south plugin\n *\n * @param operation\tName of the operation\n * @param params The parameters for the operation, if any\n * @return\tSuccess or failure of the operation\n */\nbool SouthService::operation(const string& operation, vector<PLUGIN_PARAMETER *>& params)\n{\n\tif (operation.compare(\"poll\") == 0)\n\t{\n\t\tif (m_pollType == POLL_ON_DEMAND)\n\t\t{\n\t\t\tm_doPoll = true;\n\t\t\tm_pollCV.notify_all();\n\t\t\treturn true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tlogger->warn(\"Received a poll request for a service that is not enabled for on demand polling\");\n\t\t\treturn false;\n\t\t}\n\t}\n\telse if (southPlugin->hasControl())\n\t{\n\t\treturn 
southPlugin->operation(operation, params);\n\t}\n\telse\n\t{\n\t\tlogger->warn(\"Operation %s attempted on plugin that does not support control\", operation.c_str());\n\t\treturn false;\n\t}\n}\n\n/**\n * Process a list of numbers into a vector of integers.\n * The list of numbers is obtained from a configuration\n * item.\n *\n * @param category\tThe configuration category\n * @param item\t\tName of the configuration item\n * @param list\t\tThe vector to populate\n */\nvoid SouthService::processNumberList(const ConfigCategory& category,\n\t\t\t\tconst string& item, vector<unsigned long>& list)\n{\n\tlist.clear();\n\tif (!category.itemExists(item))\n\t{\n\t\tLogger::getLogger()->warn(\"Item %s does not exist\", item.c_str());\n\t\treturn;\n\t}\n\tstring value = category.getValue(item);\n\tif (value.length() == 0)\n\t{\n\t\tLogger::getLogger()->info(\"Item %s is empty\", item.c_str());\n\t\treturn;\n\t}\n\n\tconst char *ptr = value.c_str();\n\tchar *eptr;\n\twhile (*ptr)\n\t{\n\t\tlist.push_back(strtoul(ptr, &eptr, 10));\n\t\tptr = eptr;\n\t\tif (*ptr == ',')\n\t\t\tptr++;\n\t}\n}\n\n/**\n * Calculate the rate at which the timer should trigger and the repeat\n * requirement needed to match the requested poll rate\n */\nvoid SouthService::calculateTimerRate()\n{\n\tstring pollType = m_configAdvanced.getValue(\"pollType\");\n\tif (pollType.compare(\"Fixed Times\") == 0)\n\t{\n\t\tif (m_pollType == POLL_ON_DEMAND)\n\t\t{\n\t\t\tlock_guard<mutex> lk(m_pollMutex);\n\t\t\tm_pollType = POLL_FIXED;\n\t\t\tm_pollCV.notify_all();\n\t\t}\n\t\tm_pollType = POLL_FIXED;\n\t\tprocessNumberList(m_configAdvanced, \"pollHours\", m_hours);\n\t\tprocessNumberList(m_configAdvanced, \"pollMinutes\", m_minutes);\n\t\tprocessNumberList(m_configAdvanced, \"pollSeconds\", m_seconds);\n\n\t\tif (m_minutes.size() == 0 && m_hours.size() != 0)\n\t\t\tm_minutes.push_back(0);\n\t\tif (m_seconds.size() == 0 && m_minutes.size() != 0)\n\t\t\tm_seconds.push_back(0);\n\n\t\tm_desiredRate.tv_sec  = 
1;\n\t\tm_desiredRate.tv_usec = 0;\n\t}\n\telse if (pollType.compare(\"On Demand\") == 0)\n\t{\n\t\tm_pollType = POLL_ON_DEMAND;\n\t}\n\telse\n\t{\n\t\tif (m_pollType == POLL_ON_DEMAND)\n\t\t{\n\t\t\tlock_guard<mutex> lk(m_pollMutex);\n\t\t\tm_pollType = POLL_INTERVAL;\n\t\t\tm_pollCV.notify_all();\n\t\t}\n\t\tm_pollType = POLL_INTERVAL;\n\t\tstring units = m_configAdvanced.getValue(\"units\");\n\t\tunsigned long dividend = 1000000;\n\t\tif (units.compare(\"second\") == 0)\n\t\t\tdividend = 1000000;\n\t\telse if (units.compare(\"minute\") == 0)\n\t\t\tdividend = 60000000;\n\t\telse if (units.compare(\"hour\") == 0)\n\t\t\tdividend = 3600000000;\n\t\tm_rateUnits = units;\n\t\tunsigned long usecs = dividend / m_readingsPerSec;\n\n\t\tif (usecs > MAX_SLEEP * 1000000)\n\t\t{\n\t\t\tdouble x = usecs / (MAX_SLEEP * 1000000);\n\t\t\tm_repeatCnt = ceil(x);\n\t\t\tusecs /= m_repeatCnt;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tm_repeatCnt = 1;\n\t\t}\n\t\tm_desiredRate.tv_sec  = (int)(usecs / 1000000);\n\t\tm_desiredRate.tv_usec = (int)(usecs % 1000000);\n\t}\n}\n\n/**\n * Find the next fixed time poll time and wait for that time before returning.\n * This method will also return if m_shutdown is set.\n *\n * @return bool\tTrue if the return is due to a poll being required.\n */\nbool SouthService::syncToNextPoll()\n{\n\ttime_t tim = time(0);\n\tstruct tm tm;\n\tlocaltime_r(&tim, &tm);\n\tunsigned long waitFor = 1;\n\n\tif (m_hours.size() == 0 && m_minutes.size() == 0 && m_seconds.size() == 0)\n\t{\n\t\tLogger::getLogger()->error(\"Poll time misconfigured.\");\n\t}\n\telse if (m_hours.size() == 0 && m_minutes.size() == 0)\n\t{\n\t\t// Only looking at seconds\n\t\tunsigned int i;\n\t\tfor (i = 0; i < m_seconds.size() && m_seconds[i] <= (unsigned)tm.tm_sec; i++)\n\t\t{\n\t\t}\n\t\tif (i == m_seconds.size())\n\t\t{\n\t\t\twaitFor = (60 - (unsigned)tm.tm_sec) + m_seconds[0];\n\t\t}\n\t\telse\n\t\t{\n\t\t\twaitFor = m_seconds[i] - (unsigned)tm.tm_sec;\n\t\t}\n\t}\n\telse if (m_hours.size() 
== 0)\n\t{\n\t\tunsigned int target_min = (unsigned)tm.tm_min;\n\t\tunsigned int min, sec;\n\t\tfor (min = 0; min < m_minutes.size() && m_minutes[min] < target_min; min++)\n\t\t{\n\t\t}\n\t\tif (min == m_minutes.size()) // Reset to start of minute list\n\t\t{\n\t\t\tmin = 0;\n\t\t}\n\n\t\tif (m_minutes[min] != target_min)\t// Not this minute\n\t\t{\n\t\t\tsec = 0;\t// Always use first setting of seconds\n\t\t}\n\t\telse\n\t\t{\n\t\t\tfor (sec = 0; sec < m_seconds.size() && m_seconds[sec] <= (unsigned)tm.tm_sec; sec++)\n\t\t\t{\n\t\t\t}\n\t\t\tif (sec == m_seconds.size())\n\t\t\t{\n\t\t\t\t// Too late in this minute use next minute setting\n\t\t\t\tsec = 0;\n\t\t\t\tmin++;\n\t\t\t\tif (min >= m_minutes.size())\n\t\t\t\t{\n\t\t\t\t\tmin = 0;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\twaitFor = 0;\n\t\tif (m_minutes[min] > (unsigned)tm.tm_min)\n\t\t{\n\t\t\twaitFor = 60 * (m_minutes[min] - (unsigned)tm.tm_min);\n\t\t}\n\t\telse if (m_minutes[min] < (unsigned)tm.tm_min)\n\t\t{\n\t\t\twaitFor = 60 * ((60 - (unsigned)tm.tm_min) + m_minutes[min]);\n\t\t}\n\t\tif (m_seconds[sec] > (unsigned)tm.tm_sec)\n\t\t{\n\t\t\twaitFor += (m_seconds[sec] - (unsigned)tm.tm_sec);\n\t\t}\n\t\telse\n\t\t{\n\t\t\twaitFor += ((60 - (unsigned)tm.tm_sec) + m_seconds[sec]);\n\t\t}\n\t}\n\telse\t// Hours, minutes and seconds\n\t{\n\t\tunsigned int hour, min, sec;\n\t\tfor (hour = 0; hour < m_hours.size() && m_hours[hour] < (unsigned)tm.tm_hour; hour++)\n\t\t{\n\t\t}\n\t\tif (hour == m_hours.size()) // Reset to start of hour list\n\t\t{\n\t\t\tmin = 0;\n\t\t\tsec = 0;\n\t\t\thour = 0;\n\t\t}\n\t\telse if (m_hours[hour] == (unsigned)tm.tm_hour)\t// Check for this hour\n\t\t{\n\t\t\tfor (min = 0; min < m_minutes.size() && m_minutes[min] < (unsigned)tm.tm_min; min++)\n\t\t\t{\n\t\t\t}\n\t\t\tif (min < m_minutes.size()) // may still be a trigger in this hour\n\t\t\t{\n\t\t\t\tfor (sec = 0; sec < m_seconds.size() && m_seconds[sec] <= (unsigned)tm.tm_sec; sec++)\n\t\t\t\t{\n\t\t\t\t}\n\t\t\t\tif (sec == 
m_seconds.size())\n\t\t\t\t{\n\t\t\t\t\t// Too late in this minute use next minute setting\n\t\t\t\t\tsec = 0;\n\t\t\t\t\tmin++;\n\t\t\t\t\tif (min == m_minutes.size())\n\t\t\t\t\t{\n\t\t\t\t\t\tmin = 0;\n\t\t\t\t\t\tsec = 0;\n\t\t\t\t\t\thour++;\n\t\t\t\t\t\tif (m_hours.size() == hour)\n\t\t\t\t\t\t\thour = 0;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\thour++;\n\t\t\t\tmin = 0;\n\t\t\t\tsec = 0;\n\t\t\t\tif (m_hours.size() == hour)\n\t\t\t\t\thour = 0;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\thour++;\n\t\t\tmin = 0;\n\t\t\tsec = 0;\n\t\t\tif (m_hours.size() == hour)\n\t\t\t\thour = 0;\n\t\t}\n\t\twaitFor = 0;\n\t\tif (m_hours[hour] > (unsigned)tm.tm_hour)\n\t\t{\n\t\t\twaitFor += 60 * 60 * (m_hours[hour] - (unsigned)tm.tm_hour);\n\t\t}\n\t\telse if (m_hours[hour] < (unsigned)tm.tm_hour)\n\t\t{\n\t\t\twaitFor += 60 * 60 * ((24 - (unsigned)tm.tm_hour) + m_hours[hour]);\n\t\t}\n\t\tif (m_minutes[min] > (unsigned)tm.tm_min)\n\t\t{\n\t\t\twaitFor += 60 * (m_minutes[min] - (unsigned)tm.tm_min);\n\t\t}\n\t\telse if (m_minutes[min] < (unsigned)tm.tm_min)\n\t\t{\n\t\t\twaitFor += 60 * ((60 - (unsigned)tm.tm_min) + m_minutes[min]);\n\t\t}\n\t\tif (m_seconds[sec] > (unsigned)tm.tm_sec)\n\t\t{\n\t\t\twaitFor += (m_seconds[sec] - (unsigned)tm.tm_sec);\n\t\t}\n\t\telse\n\t\t{\n\t\t\twaitFor += ((60 - (unsigned)tm.tm_sec) + m_seconds[sec]);\n\t\t}\n\t}\n\n\tuint64_t exp;\n\twhile (waitFor)\n\t{\n\t\tif (read(m_timerfd, &exp, sizeof(uint64_t)) == -1)\n\t\t\treturn false;\n\t\twaitFor--;\n\t\tif (m_shutdown)\n\t\t\treturn false;\n\t\tif (m_pollType != POLL_FIXED)\t// Configuration has changed the poll type\n\t\t{\n\t\t\treturn false;\n\t\t}\n\t}\n\treturn true;\n}\n\n/**\n * Wait until either a shutdown request is received or a poll operation\n *\n * @return bool\t\tTrue if the return is due to a new poll request\n */\nbool SouthService::onDemandPoll()\n{\n\tunique_lock<mutex> lk(m_pollMutex);\n\tif (! 
m_shutdown)\n\t{\n\t\tm_doPoll = false;\n\t\tm_pollCV.wait(lk);\n\t}\n\treturn m_doPoll;\n}\n\n/**\n * Check to see if there is a reconfiguration operation blocking in another\n * thread and yield until that reconfiguration has occurred.\n */\nvoid SouthService::checkPendingReconfigure()\n{\n\twhile (1)\n\t{\n\t\tunsigned int numPendingReconfs;\n\t\t{\n\t\t\tlock_guard<mutex> guard(m_pendingNewConfigMutex);\n\t\t\tnumPendingReconfs = m_pendingNewConfig.size();\n\t\t}\n\t\t// if a reconf is pending, make this poll thread yield CPU, sleep_for is needed to sleep this thread for sufficiently long time\n\t\tif (numPendingReconfs)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"SouthService::start(): %d entries in m_pendingNewConfig, poll thread yielding CPU\", numPendingReconfs);\n\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(200));\n\t\t}\n\t\telse\n\t\t\treturn;\n\t}\n}\n\n/**\n * Process the setting of allowed features\n *\n * @param category\tThe configuration category\n */\nvoid SouthService::updateFeatures(const ConfigCategory& category)\n{\n\tif (category.itemExists(\"control\"))\n\t{\n\t\tstring s = category.getValue(\"control\");\n\t\tm_controlEnabled = s.compare(\"true\") == 0 ? true : false;\n\t}\n\tif (category.itemExists(\"debugging\"))\n\t{\n\t\tstring s = category.getValue(\"debugging\");\n\t\tm_debuggerEnabled = s.compare(\"true\") == 0 ? 
true : false;\n\t\tif ((m_debugState & DEBUG_ATTACHED) != 0 && m_debuggerEnabled == false)\n\t\t{\n\t\t\t// Detach the debugger\n\t\t\tdetachDebugger();\n\t\t}\n\t}\n}\n\n/**\n * Return the state of the pipeline debugger\n *\n * @return string\tJSON document reporting the state of the pipeline debugger\n */\nstring SouthService::debugState()\n{\n\tstring rval;\n\trval = \"{ \";\n\trval += \"\\\"debugger\\\" : \";\n\tif (m_debugState & DEBUG_ATTACHED)\n\t{\n\t\trval += \"\\\"Attached\\\",\";\n\t\trval += \"\\\"ingress\\\" : \";\n\t\tif (m_debugState & DEBUG_SUSPENDED)\n\t\t\trval += \"\\\"Suspended\\\", \";\n\t\telse\n\t\t\trval += \"\\\"Running\\\", \";\n\t\trval += \"\\\"egress\\\" : \";\n\t\tif (m_debugState & DEBUG_ISOLATED)\n\t\t\trval += \"\\\"Isolated\\\"\";\n\t\telse\n\t\t\trval += \"\\\"Storage\\\"\";\n\t}\n\telse if (allowDebugger())\n\t{\n\t\trval += \"\\\"Detached\\\"\";\n\t}\n\telse\n\t{\n\t\trval += \"\\\"Disabled\\\"\";\n\t}\n\trval += \"}\";\n\treturn rval;\n}\n"
  },
  {
    "path": "C/services/south/south_api.cpp",
    "content": "/**\n * Fledge south service API\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n\n#include <south_api.h>\n#include <south_service.h>\n#include <rapidjson/document.h>\n\nusing namespace std;\nusing namespace rapidjson;\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\n\nstatic SouthApi *api = NULL;\n\n/**\n * Wrapper for the PUT setPoint API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void setPointWrapper(shared_ptr<HttpServer::Response> response,\n\t\tshared_ptr<HttpServer::Request> request)\n{\n\tif (api)\n\t\tapi->setPoint(response, request);\n}\n\n/**\n * Wrapper for the PUT operation API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void operationWrapper(shared_ptr<HttpServer::Response> response,\n\t\tshared_ptr<HttpServer::Request> request)\n{\n\tif (api)\n\t\tapi->operation(response, request);\n}\n\n/**\n * Wrapper for the PUT attach debugger API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void attachDebuggerWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->attachDebugger(response, request);\n}\n\n/**\n * Wrapper for the PUT detach debugger API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void detachDebuggerWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->detachDebugger(response, request);\n}\n\n/**\n * Wrapper for the PUT set debugger buffer size API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void setDebuggerBufferWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->setDebuggerBuffer(response, request);\n}\n\n/**\n * Wrapper for the GET debugger buffer API call\n *\n * @param response\tThe HTTP Response to send\n * @param 
request\tThe HTTP Request\n */\nstatic void getDebuggerBufferWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->getDebuggerBuffer(response, request);\n}\n\n/**\n * Wrapper for the PUT debugger isolate API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void isolateDebuggerWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->isolateDebugger(response, request);\n}\n\n/**\n * Wrapper for the PUT debugger suspend API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void suspendDebuggerWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->suspendDebugger(response, request);\n}\n\n/**\n * Wrapper for the PUT step debugger API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void stepDebuggerWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->stepDebugger(response, request);\n}\n\n/**\n * Wrapper for the PUT replay debugger API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void replayDebuggerWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->replayDebugger(response, request);\n}\n\n/**\n * Wrapper for the GET state debugger API call\n *\n * @param response\tThe HTTP Response to send\n * @param request\tThe HTTP Request\n */\nstatic void stateDebuggerWrapper(Response response, Request request)\n{\n\tif (api)\n\t\tapi->stateDebugger(response, request);\n}\n\n/**\n * Wrapper for thread creation that is used to start the API\n */\nstatic void startService()\n{\n\tapi->startServer();\n}\n\n/**\n * South API class constructor\n *\n * @param service\tThe SouthService class this is the API for\n */\nSouthApi::SouthApi(SouthService *service) : m_service(service), m_thread(NULL)\n{\n\tm_logger = Logger::getLogger();\n\tm_server = new HttpServer();\n\tm_server->config.port 
= 0;\n\tm_server->config.thread_pool_size = 1;\n\n\t// AuthenticationMiddleware for PUT regexp paths: use lambda function, passing the class object\n\tm_server->resource[SETPOINT][\"PUT\"] = [this](shared_ptr<HttpServer::Response> response,\n                                                        shared_ptr<HttpServer::Request> request) {\n\t\t\t\tm_service->AuthenticationMiddlewarePUT(response, request, setPointWrapper);\n\t};\n\tm_server->resource[OPERATION][\"PUT\"] = [this](shared_ptr<HttpServer::Response> response,\n                                                        shared_ptr<HttpServer::Request> request) {\n\t\t\t\tm_service->AuthenticationMiddlewarePUT(response, request, operationWrapper);\n\t};\n\n\t// Add the debugger entry points\n\tm_server->resource[DEBUG_ATTACH][\"PUT\"] = attachDebuggerWrapper;\n\tm_server->resource[DEBUG_DETACH][\"PUT\"] = detachDebuggerWrapper;\n\tm_server->resource[DEBUG_BUFFER][\"POST\"] = setDebuggerBufferWrapper;\n\tm_server->resource[DEBUG_BUFFER][\"GET\"] = getDebuggerBufferWrapper;\n\tm_server->resource[DEBUG_ISOLATE][\"PUT\"] = isolateDebuggerWrapper;\n\tm_server->resource[DEBUG_SUSPEND][\"PUT\"] = suspendDebuggerWrapper;\n\tm_server->resource[DEBUG_STEP][\"PUT\"] = stepDebuggerWrapper;\n\tm_server->resource[DEBUG_REPLAY][\"PUT\"] = replayDebuggerWrapper;\n\tm_server->resource[DEBUG_STATE][\"GET\"] = stateDebuggerWrapper;\n\n\tapi = this;\n\tm_thread = new thread(startService);\n}\n\n/**\n * Destroy the API.\n *\n * Stop the service and wait for the thread to terminate.\n */\nSouthApi::~SouthApi()\n{\n\tif (m_thread)\n\t{\n\t\tm_server->stop();\n\t\tm_thread->join();\n\t\tdelete m_thread;\n\t}\n\tif (m_server)\n\t\tdelete m_server;\n}\n\n/**\n * Called on the API service thread. 
Start the listener for HTTP requests\n */\nvoid SouthApi::startServer()\n{\n\tm_server->start();\n}\n\n/**\n * Return the port the service is listening on\n */\nunsigned short SouthApi::getListenerPort()\n{\n\tint max_wait = 10;\n\t// Need to make sure the server thread has started\n\twhile (m_server->getLocalPort() == 0 && max_wait-- > 0)\n\t\tusleep(100);\n\treturn m_server->getLocalPort();\n}\n\n/**\n * Implement the setPoint PUT request. Causes the write operation on\n * the south plugin to be called with each of the set point parameters\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid SouthApi::setPoint(shared_ptr<HttpServer::Response> response,\n\t\t\tshared_ptr<HttpServer::Request> request)\n{\n\tif (m_service->allowControl())\n\t{\n\t\tstring payload = request->content.string();\n\t\ttry {\n\t\t\tDocument doc;\n\t\t\tParseResult result = doc.Parse(payload.c_str());\n\t\t\tif (result)\n\t\t\t{\n\t\t\t\tif (doc.HasMember(\"values\") && doc[\"values\"].IsObject())\n\t\t\t\t{\n\t\t\t\t\tbool status = true;\n\t\t\t\t\tValue& values = doc[\"values\"];\n\t\t\t\t\tfor (Value::ConstMemberIterator itr = values.MemberBegin();\n\t\t\t\t\t\t\titr != values.MemberEnd(); ++itr)\n\t\t\t\t\t{\n\t\t\t\t\t\tstring name = itr->name.GetString();\n\t\t\t\t\t\tif (itr->value.IsString())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstring value = itr->value.GetString();\n\t\t\t\t\t\t\tif (!m_service->setPoint(name, value))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tstatus = false;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (status)\n\t\t\t\t\t{\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t\t\t\t\tm_service->respond(response, responsePayload);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"status\" : \"failed\" });\n\t\t\t\t\t\tm_service->respond(response, 
SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t}\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Missing 'values' object in payload\" });\n\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Failed to parse request payload\" });\n\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t}\n\t\t} catch (exception &e) {\n\t\t\tchar buffer[80];\n\t\t\tsnprintf(buffer, sizeof(buffer), \"\\\"Exception: %s\\\"\", e.what());\n\t\t\t// QUOTE() would emit the literal token 'buffer', so build the JSON reply explicitly\n\t\t\tstring responsePayload = string(\"{ \\\"message\\\" : \") + buffer + \" }\";\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t}\n\t}\n\telse\n\t{\n\t\tstring responsePayload = QUOTE({ \"status\" : \"Failed, control features are not allowed\" });\n\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_forbidden,responsePayload);\n\t}\n}\n\n/**\n * Invoke an operation on the south plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid SouthApi::operation(shared_ptr<HttpServer::Response> response,\n\t\t\tshared_ptr<HttpServer::Request> request)\n{\n\tif (m_service->allowControl())\n\t{\n\t\tstring payload = request->content.string();\n\t\ttry {\n\t\t\tDocument doc;\n\t\t\tParseResult result = doc.Parse(payload.c_str());\n\t\t\tif (result)\n\t\t\t{\n\t\t\t\tstring operation;\n\t\t\t\tif (doc.HasMember(\"operation\") && doc[\"operation\"].IsString())\n\t\t\t\t{\n\t\t\t\t\toperation = doc[\"operation\"].GetString();\n\t\t\t\t\tvector<PLUGIN_PARAMETER *> parameters;\n\n\t\t\t\t\tif (doc.HasMember(\"parameters\") && doc[\"parameters\"].IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tValue& values = doc[\"parameters\"];\n\t\t\t\t\t\tfor 
(Value::ConstMemberIterator itr = values.MemberBegin();\n\t\t\t\t\t\t\t\titr != values.MemberEnd(); ++itr)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstring name = itr->name.GetString();\n\t\t\t\t\t\t\tif (itr->value.IsString())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tstring value = itr->value.GetString();\n\t\t\t\t\t\t\t\tPLUGIN_PARAMETER *param = new PLUGIN_PARAMETER;\n\t\t\t\t\t\t\t\tparam->name = name;\n\t\t\t\t\t\t\t\tparam->value = value;\n\t\t\t\t\t\t\t\tparameters.push_back(param);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\telse if (doc.HasMember(\"parameters\"))\n\t\t\t\t\t{\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"If present, parameters of an operation must be a JSON object\" });\n\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\n\t\t\t\t\tbool status = m_service->operation(operation, parameters);\n\n\t\t\t\t\tfor (auto param : parameters)\n\t\t\t\t\t\tdelete param;\n\t\t\t\t\tif (status)\n\t\t\t\t\t{\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t\t\t\t\tm_service->respond(response, responsePayload);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"status\" : \"plugin returned failed status for operation\" });\n\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t}\n\t\t\t\t\treturn;\n\t\t\t\t\t\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Missing 'operation' in payload\" });\n\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring responsePayload = QUOTE({ \"status\" : \"failed to parse operation payload\" });\n\t\t\t\tm_service->respond(response, 
SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} catch (exception &e) {\n\t\t\tm_logger->error(\"Exception processing operation request: %s\", e.what());\n\t\t}\n\t\tstring responsePayload = QUOTE({ \"status\" : \"failed\" });\n\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t}\n\telse\n\t{\n\t\tstring responsePayload = QUOTE({ \"status\" : \"Failed, control features are not allowed\" });\n\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_forbidden,responsePayload);\n\t}\n}\n\n/**\n * Invoke debugger attach on the south plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request - unused\n */\nvoid SouthApi::attachDebugger(Response response, Request /*request*/)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tbool status = m_service->attachDebugger();\n\n\t\tif (status)\n\t\t{\n\t\t\tstring responsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t\tm_service->respond(response, responsePayload);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring responsePayload = QUOTE({ \"status\" : \"Failed to attach the debugger to the pipeline. 
A pipeline must contain at least one filter in order to attach the debugger to the pipeline.\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t}\n\t}\n\telse\n\t{\n\t\tstring responsePayload = QUOTE({ \"status\" : \"Failed, debugger features are not allowed\" });\n\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_forbidden,responsePayload);\n\t}\n}\n\n/**\n * Invoke debugger detach on the south plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request - unused\n */\nvoid SouthApi::detachDebugger(Response response, Request /*request*/)\n{\n\tstring responsePayload;\n\tif (m_service->allowDebugger())\n\t{\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\tm_service->detachDebugger();\n\n\t\t\tresponsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t}\n\t\telse\n\t\t{\n\t\t\tresponsePayload = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t}\n\t\tm_service->respond(response, responsePayload);\n\t}\n\telse\n\t{\n\t\tresponsePayload = QUOTE({ \"status\" : \"Failed, debugger features are not allowed\" });\n\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_forbidden,responsePayload);\n\t}\n}\n\n/**\n * Invoke set debugger buffer size on the south plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid SouthApi::setDebuggerBuffer(Response response, Request request)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\tstring payload = request->content.string();\n\t\t\tDocument doc;\n\t\t\tParseResult result = doc.Parse(payload.c_str());\n\t\t\tif (result)\n\t\t\t{\n\t\t\t\tif (doc.HasMember(\"size\"))\n\t\t\t\t{\n\t\t\t\t\tif (doc[\"size\"].IsUint())\n\t\t\t\t\t{\n\t\t\t\t\t\tunsigned int size = doc[\"size\"].GetUint();\n\t\t\t\t\t\tm_service->setDebuggerBuffer(size);\n\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"status\" : \"ok\" 
});\n\t\t\t\t\t\tm_service->respond(response, responsePayload);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"The value of 'size' should be an unsigned integer\" });\n\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Missing 'size' item in payload\" });\n\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Failed to parse request payload\" });\n\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring responsePayload = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t}\n\t}\n\telse\n\t{\n\t\tstring responsePayload = QUOTE({ \"status\" : \"Failed, debugger features are not allowed\" });\n\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_forbidden,responsePayload);\n\t}\n}\n\n/**\n * Invoke get debugger buffer size on the south plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request - unused\n */\nvoid SouthApi::getDebuggerBuffer(Response response, Request /*request*/)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tstring result;\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\tresult = m_service->getDebuggerBuffer();\n\t\t}\n\t\telse\n\t\t{\n\t\t\tresult = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t}\n\t\tm_service->respond(response, result);\n\t}\n\telse\n\t{\n\t\tstring responsePayload = QUOTE({ \"status\" : \"Failed, debugger features are not allowed\" });\n\t\tm_service->respond(response, 
SimpleWeb::StatusCode::client_error_forbidden,responsePayload);\n\t}\n}\n\n/**\n * Invoke isolate debugger handler on the south plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid SouthApi::isolateDebugger(Response response, Request request)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\tstring payload = request->content.string();\n\t\t\tDocument doc;\n\t\t\tParseResult result = doc.Parse(payload.c_str());\n\t\t\tif (result)\n\t\t\t{\n\t\t\t\tif (doc.HasMember(\"state\"))\n\t\t\t\t{\n\t\t\t\t\tif (doc[\"state\"].IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tstring state = doc[\"state\"].GetString();\n\t\t\t\t\t\tif (state.compare(\"discard\") == 0)\n\t\t\t\t\t\t\tm_service->isolateDebugger(true);\n\t\t\t\t\t\telse if (state.compare(\"store\") == 0)\n\t\t\t\t\t\t\tm_service->isolateDebugger(false);\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"The value of 'state' should be one of 'discard' or 'store'\" });\n\t\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t\t\t\t\tm_service->respond(response, responsePayload);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"The value of 'state' should be a string with either 'discard' or 'store'\" });\n\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Missing 'state' item in payload\" });\n\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Failed to parse 
request payload\" });\n\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring responsePayload = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t}\n\t}\n\telse\n\t{\n\t\tstring responsePayload = QUOTE({ \"status\" : \"Failed, debugger features are not allowed\" });\n\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_forbidden,responsePayload);\n\t}\n}\n\n/**\n * Invoke suspend debugger handler on the south plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid SouthApi::suspendDebugger(Response response, Request request)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\tstring payload = request->content.string();\n\t\t\tDocument doc;\n\t\t\tParseResult result = doc.Parse(payload.c_str());\n\t\t\tif (result)\n\t\t\t{\n\t\t\t\tif (doc.HasMember(\"state\"))\n\t\t\t\t{\n\t\t\t\t\tif (doc[\"state\"].IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tstring state = doc[\"state\"].GetString();\n\t\t\t\t\t\tif (state.compare(\"suspend\") == 0)\n\t\t\t\t\t\t\tm_service->suspendDebugger(true);\n\t\t\t\t\t\telse if (state.compare(\"resume\") == 0)\n\t\t\t\t\t\t\tm_service->suspendDebugger(false);\n\t\t\t\t\t\telse\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"The value of 'state' should be one of 'suspend' or 'resume'\" });\n\t\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t\t\t\t\tm_service->respond(response, responsePayload);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"The value of 'size' should be a 
string with either 'suspend' or 'resume'\" });\n\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Missing 'state' item in payload\" });\n\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Failed to parse request payload\" });\n\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring responsePayload = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t}\n\t}\n\telse\n\t{\n\t\tstring responsePayload = QUOTE({ \"status\" : \"Failed, debugger features are not allowed\" });\n\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_forbidden,responsePayload);\n\t}\n}\n\n/**\n * Invoke set debugger step command on the south plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request\n */\nvoid SouthApi::stepDebugger(Response response, Request request)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\tstring payload = request->content.string();\n\t\t\tDocument doc;\n\t\t\tParseResult result = doc.Parse(payload.c_str());\n\t\t\tif (result)\n\t\t\t{\n\t\t\t\tif (doc.HasMember(\"steps\"))\n\t\t\t\t{\n\t\t\t\t\tif (doc[\"steps\"].IsUint())\n\t\t\t\t\t{\n\t\t\t\t\t\tunsigned int steps = doc[\"steps\"].GetUint();\n\t\t\t\t\t\tm_service->stepDebugger(steps);\n\n\t\t\t\t\t\tstring responsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t\t\t\t\tm_service->respond(response, responsePayload);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tstring responsePayload = 
QUOTE({ \"message\" : \"The value of 'steps' should be an unsigned integer\" });\n\t\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Missing 'steps' item in payload\" });\n\t\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tstring responsePayload = QUOTE({ \"message\" : \"Failed to parse request payload\" });\n\t\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring responsePayload = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_bad_request,responsePayload);\n\t\t}\n\t}\n\telse\n\t{\n\t\tstring responsePayload = QUOTE({ \"status\" : \"Failed, debugger features are not allowed\" });\n\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_forbidden,responsePayload);\n\t}\n}\n\n/**\n * Invoke debugger replay on the south plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request - unused\n */\nvoid SouthApi::replayDebugger(Response response, Request /*request*/)\n{\n\tif (m_service->allowDebugger())\n\t{\n\t\t// TODO Handle pre-requisites\n\t\tstring responsePayload;\n\t\tif (m_service->debuggerAttached())\n\t\t{\n\t\t\tif (m_service->replayDebugger())\n\t\t\t{\n\t\t\t\tresponsePayload = QUOTE({ \"status\" : \"ok\" });\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tresponsePayload = QUOTE({ \"status\" : \"No data to replay\" });\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tresponsePayload = QUOTE({\"status\" : \"Debugger is not attached to the service\" });\n\t\t}\n\t\tm_service->respond(response, responsePayload);\n\t}\n\telse\n\t{\n\t\tstring responsePayload = QUOTE({ \"status\" : \"Failed, 
debugger features are not allowed\" });\n\t\tm_service->respond(response, SimpleWeb::StatusCode::client_error_forbidden,responsePayload);\n\t}\n}\n\n/**\n * Invoke debugger state on the south plugin\n *\n * @param response\tThe HTTP response\n * @param request\tThe HTTP request - unused\n */\nvoid SouthApi::stateDebugger(Response response, Request /*request*/)\n{\n\tstring payload = m_service->debugState();\n\tm_service->respond(response, payload);\n}\n"
  },
  {
    "path": "C/services/south/south_plugin.cpp",
"content": "/*\n * Fledge south service.\n *\n * Copyright (c) 2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <south_plugin.h>\n#include <south_service.h>\n#include <config_category.h>\n#include <logger.h>\n#include <exception>\n#include <typeinfo>\n#include <stdexcept>\n#include <mutex>\n\nusing namespace std;\n\n// mutex between various plugin methods, since reconfigure changes the handle \n// object itself and marks previous handle as garbage collectible by Python runtime\nstd::mutex mtx2;\n\n/**\n * Constructor for the class that wraps the south plugin\n *\n * Create a set of function pointers that resolve to the loaded plugin and\n * enclose in the class.\n *\n */\nSouthPlugin::SouthPlugin(PLUGIN_HANDLE handle, const ConfigCategory& category) : Plugin(handle)\n{\n\tm_started = false; // Set started indicator, overridden by async plugins only\n\n\t// Call the init method of the plugin\n\tPLUGIN_HANDLE (*pluginInit)(const void *) = (PLUGIN_HANDLE (*)(const void *))\n\t\t\t\t\tmanager->resolveSymbol(handle, \"plugin_init\");\n\tinstance = (*pluginInit)(&category);\n\n\tif (!instance)\n\t{\n\t\tLogger::getLogger()->error(\"plugin_init returned NULL, cannot proceed\");\n\t\tthrow runtime_error(\"plugin_init returned NULL\");\n\t}\n\n\t// Setup the function pointers to the plugin\n  \tpluginStartPtr = (void (*)(PLUGIN_HANDLE))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_start\");\n\t\n\tconst char *pluginInterfaceVer = manager->getInfo(handle)->interface;\n\tif (pluginInterfaceVer[0]=='1' && pluginInterfaceVer[1]=='.')\n\t{\n  \t\tpluginPollPtr = (Reading (*)(PLUGIN_HANDLE))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_poll\");\n\t}\n\telse if (pluginInterfaceVer[0]=='2' && pluginInterfaceVer[1]=='.')\n\t{\n\t\tpluginPollPtrV2 = (std::vector<Reading*>* (*)(PLUGIN_HANDLE))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_poll\");\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->error(\"Invalid plugin interface version '%s', 
assuming version 1.x\", pluginInterfaceVer);\n\t\tpluginPollPtr = (Reading (*)(PLUGIN_HANDLE))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_poll\");\n\t}\n\t\n  \tpluginReconfigurePtr = (void (*)(PLUGIN_HANDLE*, const std::string&))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_reconfigure\");\n  \tpluginShutdownPtr = (void (*)(PLUGIN_HANDLE))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_shutdown\");\n\tif (isAsync())\n\t{\n\t\tif (pluginInterfaceVer[0]=='1' && pluginInterfaceVer[1]=='.')\n\t\t{\n\t  \t\tpluginRegisterPtr = (void (*)(PLUGIN_HANDLE, INGEST_CB cb, void *data))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_register_ingest\");\n\t\t}\n\t\telse if (pluginInterfaceVer[0]=='2' && pluginInterfaceVer[1]=='.')\n\t\t{\n\t\t\tpluginRegisterPtrV2 = (void (*)(PLUGIN_HANDLE, INGEST_CB2 cb, void *data))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_register_ingest\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Invalid plugin interface version '%s', assuming version 1.x\", pluginInterfaceVer);\n\t\t\tpluginRegisterPtr = (void (*)(PLUGIN_HANDLE, INGEST_CB cb, void *data))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_register_ingest\");\n\t\t}\n\t}\n\n\tpluginShutdownDataPtr = (string (*)(const PLUGIN_HANDLE))\n\t\t\t\t manager->resolveSymbol(handle, \"plugin_shutdown\");\n\tpluginStartDataPtr = (void (*)(const PLUGIN_HANDLE, const string& storedData))\n\t\t\t      manager->resolveSymbol(handle, \"plugin_start\");\n\n\tpluginWritePtr = NULL;\n\tpluginOperationPtr = NULL;\n\n\tif (hasControl())\n\t{\n\t\tpluginWritePtr = (bool (*)(const PLUGIN_HANDLE,\n\t\t\t\t\tconst std::string&,\n\t\t\t\t\tconst std::string&))\n\t\t\tmanager->resolveSymbol(handle, \"plugin_write\");\n\t\tpluginOperationPtr = (bool (*)(const PLUGIN_HANDLE,\n\t\t\t\t\tconst std::string&,\n\t\t\t\t\tint,\n\t\t\t\t\tPLUGIN_PARAMETER **))\n\t\t\tmanager->resolveSymbol(handle, \"plugin_operation\");\n\t}\n}\n\n/**\n * South plugin destructor\n 
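*\n * Typical use of this wrapper (an illustrative sketch, not part of the\n * original source; assumes a resolved PLUGIN_HANDLE and ConfigCategory):\n *\n *   SouthPlugin plugin(handle, category);  // plugin_init is called here\n *   plugin.start();                        // or startData(saved) to restore state\n *   Reading reading = plugin.poll();       // 1.x interface; use pollV2() for 2.x\n *   plugin.shutdown();                     // or shutdownSaveData() to persist state\n 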
*/\nSouthPlugin::~SouthPlugin()\n{\n}\n\n/**\n * Call the start method in the plugin\n */\nvoid SouthPlugin::start()\n{\n\tlock_guard<mutex> guard(mtx2);\n\ttry {\n\t\tthis->pluginStartPtr(instance);\n\t\tm_started = true; // Set start indicator\n\t\treturn;\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin start(), %s\",\n\t\t\te.what());\n\t\tthrow;\n\t} catch (...) {\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin start(), %s\",\n\t\t\tp ? p.__cxa_exception_type()->name() : \"unknown exception\");\n\t\tthrow;\n\t}\n}\n\n/**\n * Call the start method in the plugin, passing in the data persisted\n * by a previous shutdownSaveData() call\n */\nvoid SouthPlugin::startData(const string& data)\n{\n\tlock_guard<mutex> guard(mtx2);\n\ttry {\n\t\tthis->pluginStartDataPtr(instance, data);\n\t\tm_started = true; // Set start indicator\n\t\treturn;\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin start(), %s\",\n\t\t\te.what());\n\t\tthrow;\n\t} catch (...) {\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin start(), %s\",\n\t\t\tp ? p.__cxa_exception_type()->name() : \"unknown exception\");\n\t\tthrow;\n\t}\n}\n\n/**\n * Call the poll method in the plugin\n */\nReading SouthPlugin::poll()\n{\n\tlock_guard<mutex> guard(mtx2);\n\ttry {\n\t\treturn this->pluginPollPtr(instance);\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin poll(), %s\",\n\t\t\te.what());\n\t\tthrow;\n\t} catch (...) {\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin poll(), %s\",\n\t\t\tp ? 
p.__cxa_exception_type()->name() : \"unknown exception\");\n\t\tthrow;\n\t}\n}\n\n/**\n * Call the poll method in the plugin supporting interface ver 2.x\n */\nReadingSet* SouthPlugin::pollV2()\n{\n\tlock_guard<mutex> guard(mtx2);\n\ttry {\n\t\tstd::vector<Reading *> *vec = this->pluginPollPtrV2(instance);\n\t\tif (vec)\n\t\t{\n\t\t\tReadingSet *set = new ReadingSet(vec);\n\t\t\tvec->clear();\n\t\t\tdelete vec;\n\t\t\treturn set;\n\t\t}\n\t\telse\n\t\t\treturn NULL;\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in v2 south plugin poll(), %s\",\n\t\t\te.what());\n\t\tthrow;\n\t} catch (...) {\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in v2 south plugin poll(), %s\",\n\t\t\tp ? p.__cxa_exception_type()->name() : \"unknown exception\");\n\t\tthrow;\n\t}\n}\n\n/**\n * Call the reconfigure method in the plugin\n */\nvoid SouthPlugin::reconfigure(const string& newConfig)\n{\n\tlock_guard<mutex> guard(mtx2);\n\ttry {\n\t\tthis->pluginReconfigurePtr(&instance, newConfig);\n\t\tif (!instance)\n\t\t{\n\t\t\tLogger::getLogger()->error(\"plugin_reconfigure returned NULL, cannot proceed\");\n\t\t\tthrow runtime_error(\"plugin_reconfigure returned NULL\");\n\t\t}\n\t\treturn;\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin reconfigure(), %s\",\n\t\t\te.what());\n\t\tthrow;\n\t} catch (...) {\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin reconfigure(), %s\",\n\t\t\tp ? 
p.__cxa_exception_type()->name() : \"unknown exception\");\n\t\tthrow;\n\t}\n}\n\n/**\n * Call the shutdown method in the plugin\n */\nvoid SouthPlugin::shutdown()\n{\n\tlock_guard<mutex> guard(mtx2);\n\ttry {\n\t\treturn this->pluginShutdownPtr(instance);\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin shutdown(), %s\",\n\t\t\te.what());\n\t\tthrow;\n\t} catch (...) {\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin shutdown(), %s\",\n\t\t\tp ? p.__cxa_exception_type()->name() : \"unknown exception\");\n\t\tthrow;\n\t}\n}\n\n/**\n * Call the shutdown method in the plugin and return the data the plugin\n * wishes to persist, to be handed back on the next startData() call\n */\nstring SouthPlugin::shutdownSaveData()\n{\n\tlock_guard<mutex> guard(mtx2);\n\ttry {\n\t\treturn this->pluginShutdownDataPtr(instance);\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin shutdown(), %s\",\n\t\t\te.what());\n\t\tthrow;\n\t} catch (...) {\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin shutdown(), %s\",\n\t\t\tp ? p.__cxa_exception_type()->name() : \"unknown exception\");\n\t\tthrow;\n\t}\n}\n\n/**\n * Register the ingest callback with a 1.x interface plugin\n */\nvoid SouthPlugin::registerIngest(INGEST_CB cb, void *data)\n{\n\tlock_guard<mutex> guard(mtx2);\n\ttry {\n\t\treturn this->pluginRegisterPtr(instance, cb, data);\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin registerIngest(), %s\",\n\t\t\te.what());\n\t\tthrow;\n\t} catch (...) {\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin registerIngest(), %s\",\n\t\t\tp ? 
p.__cxa_exception_type()->name() : \"unknown exception\");\n\t\tthrow;\n\t}\n}\n\nvoid SouthPlugin::registerIngestV2(INGEST_CB2 cb, void *data)\n{\n\tlock_guard<mutex> guard(mtx2);\n\ttry {\n\t\treturn this->pluginRegisterPtrV2(instance, cb, data);\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin registerIngestV2(), %s\",\n\t\t\te.what());\n\t\tthrow;\n\t} catch (...) {\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tLogger::getLogger()->fatal(\"Unhandled exception raised in south plugin registerIngestV2(), %s\",\n\t\t\tp ? p.__cxa_exception_type()->name() : \"unknown exception\");\n\t\tthrow;\n\t}\n}\n\n/**\n * Call the write entry point of the plugin\n *\n * @param name\tThe name of the parameter to change\n * @param value\tThe value to set the parameter\n */\nbool SouthPlugin::write(const string& name, const string& value)\n{\n\ttry {\n\t\tif (pluginWritePtr)\n\t\t{\n\t\t\tbool run = true;\n\t\t\t// Check plugin_start is done for async plugin before calling pluginWritePtr\n\t\t\tif (isAsync()) {\n\t\t\t\tint tries = 0;\n\t\t\t\twhile (!m_started) {\n\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(100));\n\t\t\t\t\tLogger::getLogger()->debug(\"South plugin write call is on hold, try %d\", tries);\n\t\t\t\t\tif (tries > 20) {\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\ttries++;\n\t\t\t\t}\n\t\t\t\trun = m_started;\n\t\t\t}\n\n\t\t\tif (run) {\n\t\t\t\treturn this->pluginWritePtr(instance, name, value);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"South plugin write canceled after waiting for 2 seconds\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->fatal(\"Unhandled exception in plugin write operation: %s\", e.what());\n\t}\n\treturn false;\n}\n\n/**\n * Call the plugin operation entry point with the operation to execute\n *\n * @param name\tThe name of the operation\n * @param parameters\tThe 
parameters for the operation.\n * @return bool\tStatus of the operation\n */\nbool SouthPlugin::operation(const string& name, vector<PLUGIN_PARAMETER *>& parameters)\n{\n\tbool status = false;\n\n\tif (! this->pluginOperationPtr)\n\t{\n\t\tLogger::getLogger()->error(\n\t\t\t\t\"Attempt to invoke an operation '%s' on a plugin that does not provide operation entry point\",\n\t\t\t\tname.c_str());\n\t\treturn status;\n\t}\n\tunsigned int count = parameters.size();\n\tPLUGIN_PARAMETER **params = (PLUGIN_PARAMETER **)malloc(sizeof(PLUGIN_PARAMETER *) * (count + 1));\n\tif (params == NULL)\n\t{\n\t\tLogger::getLogger()->fatal(\"Unable to allocate parameters, out of memory\");\n\t\treturn status;\n\t}\n\n\tfor (unsigned int i = 0; i < parameters.size(); i++)\n\t{\n\t\tparams[i] = parameters[i];\n\t}\n\tparams[count] = NULL;\n\ttry {\n\t\tbool run = true;\n\t\t// Check plugin_start is done for async plugin before calling pluginOperationPtr\n\t\tif (isAsync()) {\n\t\t\tint tries = 0;\n\t\t\twhile (!m_started) {\n\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(100));\n\t\t\t\tLogger::getLogger()->debug(\"South plugin operation is on hold, try %d\", tries);\n\t\t\t\tif (tries > 20) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\ttries++;\n\t\t\t}\n\t\t\trun = m_started;\n\t\t}\n\n\t\tif (run) {\n\t\t\tstatus = this->pluginOperationPtr(instance, name, (int)count, params);\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Do not return here: fall through so that params is freed below\n\t\t\tLogger::getLogger()->error(\"South plugin operation canceled after waiting for 2 seconds\");\n\t\t}\n\t} catch (exception& e) {\n\t\tLogger::getLogger()->fatal(\"Unhandled exception in plugin operation: %s\", e.what());\n\t}\n\tfree(params);\n\treturn status;\n}\n"
  },
  {
    "path": "C/services/south-plugin-interfaces/python/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6.0)\n\nproject(south-plugin-python-interface)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\nset(DLLIB -ldl)\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\n\n# Find source files\nfile(GLOB SOURCES python_plugin_interface.cpp)\n\n# Find Python.h 3.x dev/lib package\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 REQUIRED COMPONENTS Interpreter Development)\nendif()\n\n# Include header files\ninclude_directories(include ../../../common/include ../../../services/common/include ../../../services/south/include ../../../thirdparty/rapidjson/include)\ninclude_directories(../../../services/common-plugin-interfaces/python/include)\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\nset(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/../../../lib)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\ntarget_link_libraries(${PROJECT_NAME} ${DLLIB})\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${SERVICE_COMMON_LIB})\n\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    target_link_libraries(${PROJECT_NAME} ${PYTHON_LIBRARIES})\nelse()\n    target_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES})\nendif()\n\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n\n# Install library\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/lib)\n"
  },
  {
    "path": "C/services/south-plugin-interfaces/python/async_ingest_pymodule/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6.0)\n\nproject(async_ingest)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\nset(DLLIB -ldl)\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\n\n# Find source files\nfile(GLOB SOURCES ingest_callback_pymodule.cpp)\n\n# Find Python 3.5 or higher dev/lib/interp package\n#find_package(PythonInterp 3.5 REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 REQUIRED COMPONENTS Interpreter Development)\nendif()\n\n# Include header files\ninclude_directories(include ../../../../common/include ../../../../services/common/include ../../../../services/south/include ../../../../thirdparty/rapidjson/include)\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\nlink_directories(${PROJECT_BINARY_DIR}/../../../../lib)\n\nset(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/../../../../../../python)\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\n\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    target_link_libraries(${PROJECT_NAME} ${PYTHON_LIBRARIES})\nelse()\n    target_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES})\nendif()\n\ntarget_link_libraries(${PROJECT_NAME} ${DLLIB})\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${SERVICE_COMMON_LIB})\n\nset_target_properties(${PROJECT_NAME} PROPERTIES LINKER_LANGUAGE C)\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\nset_target_properties(${PROJECT_NAME} PROPERTIES VERSION 
1)\nset_target_properties(${PROJECT_NAME} PROPERTIES PREFIX \"\")\n\n# Install libraries\ninstall(TARGETS ${PROJECT_NAME} DESTINATION fledge/python)\n"
  },
  {
    "path": "C/services/south-plugin-interfaces/python/async_ingest_pymodule/ingest_callback_pymodule.cpp",
    "content": "/*\n * Fledge python module for async plugin ingest callback\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora\n */\n\n#include <reading.h>\n#include <logger.h>\n#include <Python.h>\n#include <vector>\n#include <pythonreadingset.h>\n\nextern \"C\" {\n\ntypedef void (*INGEST_CB2)(void *, PythonReadingSet *);\n\nvoid plugin_ingest_fn(PyObject *ingest_callback, PyObject *ingest_obj_ref_data, PyObject *readingsObj);\n\nstatic PyObject *IngestError;\n\nstatic PyObject *\ningest_callback(PyObject *self, PyObject *args)\n{\n\tPyObject *readingList;\n\tPyObject *callback;\n\tPyObject *ingestData;\n\n\tif (!PyArg_ParseTuple(args, \"OOO\", &callback, &ingestData, &readingList))\n\t\treturn NULL;\n\n\tplugin_ingest_fn(callback, ingestData, readingList);\n\n\tPy_INCREF(Py_None);\n\treturn Py_None;\n}\n\nstatic PyMethodDef IngestMethods[] = {\n\t{\"ingest_callback\",  ingest_callback, METH_VARARGS, \"Invoke ingest callback\"},\n\t{NULL, NULL, 0, NULL}        /* Sentinel */\n};\n\nstatic struct PyModuleDef ingestmodule = {\n\tPyModuleDef_HEAD_INIT,\n\t\"async_ingest\",   /* name of module */\n\tNULL, \t\t/* module documentation, may be NULL */\n\t-1,       \t/* size of per-interpreter state of the module,\n\t             or -1 if the module keeps state in global variables. 
*/\n\tIngestMethods\n};\n\nPyMODINIT_FUNC\nPyInit_async_ingest(void)\n{\n\tPyObject *m;\n\n\tm = PyModule_Create(&ingestmodule);\n\tif (m == NULL)\n\t\treturn NULL;\n\n\tLogger::getLogger()->debug(\"PyModule_Create() succeeded\");\n\n\tIngestError = PyErr_NewException(\"ingest.error\", NULL, NULL);\n\tPy_INCREF(IngestError);\n\tPyModule_AddObject(m, \"error\", IngestError);\n\n\tLogger::getLogger()->debug(\"PyInit_ingest() returning\");\n\treturn m;\n}\n\nvoid plugin_ingest_fn(PyObject *ingest_callback, PyObject *ingest_obj_ref_data, PyObject *readingsObj)\n{\n\tif (ingest_callback == NULL || ingest_obj_ref_data == NULL || readingsObj == NULL)\n\t{\n\t\tLogger::getLogger()->error(\"Py2C interface: plugin_ingest_fn: ingest_callback=%p, ingest_obj_ref_data=%p, readingsObj=%p\",\n\t\t\t\t\t\tingest_callback, ingest_obj_ref_data, readingsObj);\n\t\treturn;\n\t}\n\n\tPythonReadingSet *pyReadingSet = NULL;\n\n\ttry\n\t{\n\t\tpyReadingSet = new PythonReadingSet(readingsObj);\n\t}\n\tcatch (const std::exception& e)\n\t{\n\t\t// Catch by reference to avoid slicing the exception object\n\t\tLogger::getLogger()->warn(\"PythonReadingSet c'tor failed, error: %s\", e.what());\n\t\tpyReadingSet = NULL;\n\t}\n\n\tif (pyReadingSet)\n\t{\n\t\tINGEST_CB2 cb = (INGEST_CB2) PyCapsule_GetPointer(ingest_callback, NULL);\n\t\tvoid *data = PyCapsule_GetPointer(ingest_obj_ref_data, NULL);\n\t\t(*cb)(data, pyReadingSet);\n\t}\n\telse\n\t\tLogger::getLogger()->error(\"Py2C interface: plugin_ingest_fn: PythonReadingSet c'tor failed\");\n}\n}; // end of extern \"C\" block\n"
  },
  {
    "path": "C/services/south-plugin-interfaces/python/python_plugin_interface.cpp",
    "content": "/*\n * Fledge south plugin interface related\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora\n */\n\n#include <logger.h>\n#include <config_category.h>\n#include <reading_set.h>\n#include <mutex>\n#include <south_plugin.h>\n#include <pyruntime.h>\n#include <Python.h>\n#include <python_plugin_common_interface.h>\n#include <pythonreadingset.h>\n\n#define SHIM_SCRIPT_NAME \"south_shim\"\n\nusing namespace std;\n\nextern \"C\" {\n\nextern PLUGIN_INFORMATION *plugin_info_fn();\nextern PLUGIN_HANDLE plugin_init_fn(ConfigCategory *);\nextern void plugin_reconfigure_fn(PLUGIN_HANDLE*, const std::string&);\nextern void plugin_shutdown_fn(PLUGIN_HANDLE);\nextern void logErrorMessage();\nextern PLUGIN_INFORMATION *Py2C_PluginInfo(PyObject *);\n\n// South plugin entry points\nstd::vector<Reading *>* plugin_poll_fn(PLUGIN_HANDLE);\nvoid plugin_start_fn(PLUGIN_HANDLE handle);\nvoid plugin_register_ingest_fn(PLUGIN_HANDLE handle,INGEST_CB2 cb,void * data);\nbool plugin_write_fn(PLUGIN_HANDLE handle, const std::string& name, const std::string& value);\nbool plugin_operation_fn(PLUGIN_HANDLE handle, string operation, int parameterCount, PLUGIN_PARAMETER *parameters[]);\n\n\n/**\n * Constructor for PythonPluginHandle\n */\nvoid *PluginInterfaceInit(const char *pluginName, const char * pluginPathName)\n{\n    bool initialisePython = false;\n\n    // Set plugin name, also for methods in common-plugin-interfaces/python\n    gPluginName = pluginName;\n\n    string fledgePythonDir;\n\n    // getenv() may return NULL when FLEDGE_ROOT is unset; constructing a\n    // std::string from NULL is undefined behaviour, so guard against it\n    const char *rootDir = getenv(\"FLEDGE_ROOT\");\n    string fledgeRootDir(rootDir ? rootDir : \"\");\n    fledgePythonDir = fledgeRootDir + \"/python\";\n\n    string southRootPath = fledgePythonDir + string(R\"(/fledge/plugins/south/)\") + string(pluginName);\n    Logger::getLogger()->info(\"%s:%d:, southRootPath=%s\", __FUNCTION__, __LINE__, southRootPath.c_str());\n\n    // Embedded Python 3.5 program name\n    wchar_t *programName = 
Py_DecodeLocale(pluginName, NULL);\n    Py_SetProgramName(programName);\n    PyMem_RawFree(programName);\n\n    PythonRuntime::getPythonRuntime();\n    \n    // Acquire GIL\n    PyGILState_STATE state = PyGILState_Ensure();\n\n    Logger::getLogger()->info(\"SouthPlugin %s:%d: \"\n                   \"southRootPath=%s, fledgePythonDir=%s, plugin '%s'\",\n                   __FUNCTION__,\n                   __LINE__,\n                   southRootPath.c_str(),\n                   fledgePythonDir.c_str(),\n                   pluginName);\n    \n    // Set Python path for embedded Python 3.x\n    // Get current sys.path - borrowed reference\n    PyObject* sysPath = PySys_GetObject((char *)\"path\");\n    PyList_Append(sysPath, PyUnicode_FromString((char *) southRootPath.c_str()));\n    PyList_Append(sysPath, PyUnicode_FromString((char *) fledgePythonDir.c_str()));\n\n    // Set sys.argv for embedded Python 3.5\n\tint argc = 2;\n\twchar_t* argv[2];\n\targv[0] = Py_DecodeLocale(\"\", NULL);\n\targv[1] = Py_DecodeLocale(pluginName, NULL);\n\tPySys_SetArgv(argc, argv);\n\n    // 2) Import Python script\n    PyObject *pModule = PyImport_ImportModule(pluginName);\n\n    // Check whether the Python module has been imported\n    if (!pModule)\n    {\n        // Failure\n        if (PyErr_Occurred())\n        {\n            logErrorMessage();\n        }\n        Logger::getLogger()->fatal(\"PluginInterfaceInit: cannot import Python 3.5 script \"\n                       \"'%s' from '%s' : plugin '%s'\",\n                       pluginName, southRootPath.c_str(),\n                       pluginName);\n    }\n    else\n    {\n        std::pair<std::map<string, PythonModule*>::iterator, bool> ret;\n        if (pythonModules)\n        {\n            // Add element\n            ret = pythonModules->insert(pair<string, PythonModule*>\n                (string(pluginName), new PythonModule(pModule,\n                                      initialisePython,\n                                  
    string(pluginName),\n                                      PLUGIN_TYPE_SOUTH,\n                                      // New Python interpreter not set\n                                      NULL)));\n        }\n        // Check result\n        if (!pythonModules ||\n            ret.second == false)\n        {\n            Logger::getLogger()->fatal(\"%s:%d: python module not added to the map \"\n                           \"of loaded plugins, pModule=%p, plugin '%s', aborting.\",\n                           __FUNCTION__,\n                           __LINE__,\n                           pModule,\n                           pluginName);\n            Py_CLEAR(pModule);\n            return NULL;\n        }\n        else\n        {\n            Logger::getLogger()->debug(\"%s:%d: python module loaded successfully, pModule=%p, plugin '%s'\",\n                           __FUNCTION__,\n                           __LINE__,\n                           pModule,\n                           pluginName);\n        }\n    }\n\n    // Release GIL\n    PyGILState_Release(state);\n\n    return pModule;\n}\n\n/**\n * Function to invoke 'plugin_write' function in python plugin\n *\n * @param    handle\t\tPlugin handle from plugin_init_fn\n * @param    name\t\tName of parameter to write\n * @param    value\t\tValue to be written to that parameter\n */\nbool plugin_write_fn(PLUGIN_HANDLE handle, const std::string& name, const std::string& value)\n{\n\tbool rv = false;\n\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_write(): \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn rv;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in plugin_write, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn rv;\n\t}\n\n\t// Look for Python module for handle key\n\tauto it = pythonHandles->find(handle);\n\tif (it == pythonHandles->end() ||\n\t\t!it->second 
||\n\t\t!it->second->m_module)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_write(): \"\n\t\t\t\t\t   \"pModule is NULL, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn rv;\n\t}\n\n\tstd::mutex mtx;\n\tPyObject* pFunc;\n\tlock_guard<mutex> guard(mtx);\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\tLogger::getLogger()->debug(\"plugin_handle: plugin_write(): \"\n\t\t\t\t   \"pModule=%p, handle=%p, plugin '%s'\",\n\t\t\t\t   it->second->m_module,\n\t\t\t\t   handle,\n\t\t\t\t   it->second->m_name.c_str());\n\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_write\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find method 'plugin_write' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\n\t\tPyGILState_Release(state);\n\t\treturn rv;\n\t}\n\n\tif (!PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot call method plugin_write \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPy_CLEAR(pFunc);\n\n\t\tPyGILState_Release(state);\n\t\treturn rv;\n\t}\n\n\tLogger::getLogger()->debug(\"plugin_write with name=%s, value=%s\", name.c_str(), value.c_str());\n\n\t// Call Python method passing an object and 2 C-style strings\n\tPyObject* pReturn = PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"Oss\",\n\t\t\t\t\t\t  handle, name.c_str(), value.c_str());\n\n\tPy_CLEAR(pFunc);\n\n\t// Handle return\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method plugin_write : \"\n\t\t\t\t\t   \"error while getting result object, plugin '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tlogErrorMessage();\n\t}\n\telse\n\t{\n\t\tif (PyBool_Check(pReturn))\n\t\t{\n\t\t\trv = PyObject_IsTrue(pReturn);\n\t\t\tLogger::getLogger()->info(\"plugin_write() returned %s\", 
rv?\"TRUE\":\"FALSE\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"plugin_handle: plugin_write(): \"\n\t\t\t\t\t\t\t\t\t\"got result object '%p' of unexpected type %s, plugin '%s'\",\n\t\t\t\t\t\t\t\t\tpReturn, pReturn->ob_type->tp_name,\n\t\t\t\t\t\t\t\t\tit->second->m_name.c_str());\n\t\t}\n\t\tPy_CLEAR(pReturn);\n\t}\n\tPyGILState_Release(state);\n\n\treturn rv;\n}\n\n/**\n * Function to invoke 'plugin_operation' function in python plugin\n *\n * @param    handle\t\t\tPlugin handle from plugin_init_fn\n * @param    operation\t\tName of operation\n * @param    parameterCount\tNumber of parameters in Parameter list\n * @param    parameters\t\tParameter list\n */\nbool plugin_operation_fn(PLUGIN_HANDLE handle, string operation, int parameterCount, PLUGIN_PARAMETER *parameters[])\n{\n\tbool rv = false;\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_operation(): \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn rv;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in plugin_operation, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn rv;\n\t}\n\n\t// Look for Python module for handle key\n\tauto it = pythonHandles->find(handle);\n\tif (it == pythonHandles->end() ||\n\t\t!it->second ||\n\t\t!it->second->m_module)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_operation(): \"\n\t\t\t\t\t   \"pModule is NULL, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn rv;\n\t}\n\n\tstd::mutex mtx;\n\tPyObject* pFunc;\n\tlock_guard<mutex> guard(mtx);\n\tPyGILState_STATE state = PyGILState_Ensure();\n\n\tLogger::getLogger()->debug(\"plugin_handle: plugin_operation(): \"\n\t\t\t\t   \"pModule=%p, *handle=%p, plugin '%s'\",\n\t\t\t\t   it->second->m_module,\n\t\t\t\t   handle,\n\t\t\t\t   it->second->m_name.c_str());\n\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_operation\");\n\tif 
(!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find method 'plugin_operation' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\n\t\tPyGILState_Release(state);\n\t\treturn rv;\n\t}\n\n\tif (!PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot call method plugin_operation \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPy_CLEAR(pFunc);\n\n\t\tPyGILState_Release(state);\n\t\treturn rv;\n\t}\n\n\tLogger::getLogger()->debug(\"plugin_operation with operation=%s, parameterCount=%d\", operation.c_str(), parameterCount);\n\n\tPyObject *paramsList = PyList_New(parameterCount);\n\tfor (int i=0; i<parameterCount; i++)\n\t{\n\t\tPyList_SetItem(paramsList, i, Py_BuildValue(\"(ss)\", parameters[i]->name.c_str(), parameters[i]->value.c_str()) );\n\t}\n\t\n\t// Call Python method passing an object and 2 C-style strings\n\tPyObject* pReturn = PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"OsO\",\n\t\t\t\t\t\t  handle, operation.c_str(), paramsList);\n\n\tPy_CLEAR(pFunc);\n\tPy_CLEAR(paramsList);\n\n\t// Handle return\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method plugin_operation : \"\n\t\t\t\t\t   \"error while getting result object, plugin '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tlogErrorMessage();\n\t}\n\telse\n\t{\n\t\tif (PyBool_Check(pReturn))\n\t\t{\n\t\t\trv = PyObject_IsTrue(pReturn);\n\t\t\tLogger::getLogger()->info(\"plugin_operation() returned %s\", rv?\"TRUE\":\"FALSE\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"plugin_handle: plugin_operation(): \"\n\t\t\t\t\t\t\t\t\t\"got result object '%p' of unexpected type %s, plugin '%s'\",\n\t\t\t\t\t\t\t\t\tpReturn, 
pReturn->ob_type->tp_name,\n\t\t\t\t\t\t\t\t\tit->second->m_name.c_str());\n\t\t}\n\t\tPy_CLEAR(pReturn);\n\t}\n\tPyGILState_Release(state);\n\n\treturn rv;\n}\n\n/**\n * Returns function pointer that can be invoked to call '_sym' function\n * in python plugin\n */\nvoid* PluginInterfaceResolveSymbol(const char *_sym, const string& name)\n{\n\tstring sym(_sym);\n\tif (!sym.compare(\"plugin_info\"))\n\t\treturn (void *) plugin_info_fn;\n\telse if (!sym.compare(\"plugin_init\"))\n\t\treturn (void *) plugin_init_fn;\n\telse if (!sym.compare(\"plugin_poll\"))\n\t\treturn (void *) plugin_poll_fn;\n\telse if (!sym.compare(\"plugin_shutdown\"))\n\t\treturn (void *) plugin_shutdown_fn;\n\telse if (!sym.compare(\"plugin_reconfigure\"))\n\t\treturn (void *) plugin_reconfigure_fn;\n\telse if (!sym.compare(\"plugin_start\"))\n\t\treturn (void *) plugin_start_fn;\n\telse if (!sym.compare(\"plugin_register_ingest\"))\n\t\treturn (void *) plugin_register_ingest_fn;\n\telse if (!sym.compare(\"plugin_write\"))\n\t\treturn (void *) plugin_write_fn;\n\telse if (!sym.compare(\"plugin_operation\"))\n\t\treturn (void *) plugin_operation_fn;\n\telse\n\t{\n\t\tLogger::getLogger()->fatal(\"PluginInterfaceResolveSymbol can not find symbol '%s' \"\n\t\t\t\t\t   \"in the South Python plugin interface library, loaded plugin '%s'\",\n\t\t\t\t\t   _sym,\n\t\t\t\t\t   name.c_str());\n\t\treturn NULL;\n\t}\n}\n\n/**\n * Function to invoke 'plugin_poll' function in python plugin\n *\n * @param    handle\tPlugin handle from plugin_init_fn\n * @return\t\tVector of Reading data\n */\nstd::vector<Reading *>* plugin_poll_fn(PLUGIN_HANDLE handle)\n{\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_poll_fn: \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn NULL;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\tLogger::getLogger()->error(\"pythonModules map is NULL \"\n\t\t\t\t\t   \"in plugin_poll_fn, handle '%p'\",\n\t\t\t\t\t   handle);\n\t\t return NULL;\n\t}\n\n        // Look for 
Python module for handle key\n        auto it = pythonHandles->find(handle);\n        if (it == pythonHandles->end() ||\n            !it->second ||\n            !it->second->m_module)\n        {\n                Logger::getLogger()->fatal(\"plugin_handle: plugin_poll(): \"\n                                           \"pModule is NULL, plugin handle '%p'\",\n                                           handle);\n                return NULL;\n        }\n\n\tstd::mutex mtx;\n\tPyObject* pFunc;\n\tlock_guard<mutex> guard(mtx);\n\tPyGILState_STATE state = PyGILState_Ensure();\n\t\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_poll\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->fatal(\"Cannot find 'plugin_poll' method \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPyGILState_Release(state);\n\t\treturn NULL;\n\t}\n\n\tif (!pFunc || !PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->fatal(\"Cannot call method 'plugin_poll' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t    it->second->m_name.c_str());\n\t\tPy_CLEAR(pFunc);\n\n\t\tPyGILState_Release(state);\n\t\treturn NULL;\n\t}\n\n\t// Call Python method passing an object\n\tPyObject* pReturn = PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"O\",\n\t\t\t\t\t\t  handle);\n\n\tPy_CLEAR(pFunc);\n\n\t// Handle returned data\n\tif (!pReturn)\n\t{\n\t\t// Errors while getting result object\n\t\tLogger::getLogger()->error(\"Called python script method 'plugin_poll' : \"\n\t\t\t\t\t   \"error while getting result object, plugin '%s'\",\n\t\t\t\t\t    it->second->m_name.c_str());\n\t\tlogErrorMessage();\n\n\t\tPyGILState_Release(state);\n\t\treturn NULL;\n\t}\n\telse\n\t{\n\t\t\t// Get reading data\n\t\tPythonReadingSet *pyReadingSet = NULL;\n\n\t\t// Valid ReadingSet would be in the form of python dict or 
list\n\t\tif (PyList_Check(pReturn) || PyDict_Check(pReturn))\n\t\t{\n\t\t\ttry {\n\t\t\t\tpyReadingSet = new PythonReadingSet(pReturn);\n\t\t\t} catch (const std::exception& e) {\n\t\t\t\t// Catch by reference to avoid slicing the exception object\n\t\t\t\tLogger::getLogger()->warn(\"Failed to create a Python ReadingSet from the data returned by the south plugin poll routine, %s\", e.what());\n\t\t\t\tpyReadingSet = NULL;\n\t\t\t}\n\t\t}\n\n\t\t// Remove pReturn object\n\t\tPy_CLEAR(pReturn);\n\n\t\tPyGILState_Release(state);\n\n\t\tif (pyReadingSet)\n\t\t{\n\t\t\tstd::vector<Reading *> *vec2 = pyReadingSet->moveAllReadings();\n\t\t\tdelete pyReadingSet;\n\t\t\treturn vec2;\n\t\t}\n\t\telse\n\t\t{\n\t\t\treturn NULL;\n\t\t}\n\t}\n}\n\n/**\n * Function to invoke 'plugin_start' function in python plugin\n *\n * @param    handle     Plugin handle from plugin_init_fn\n */\nvoid plugin_start_fn(PLUGIN_HANDLE handle)\n{\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_start_fn: \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\tLogger::getLogger()->error(\"pythonHandles map is NULL \"\n\t\t\t\t\t   \"in plugin_start_fn, handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn;\n\t}\n\n\t// Look for Python module for handle key\n\tauto it = pythonHandles->find(handle);\n\tif (it == pythonHandles->end() ||\n\t\t!it->second ||\n\t\t!it->second->m_module)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_start(): \"\n\t\t\t\t\t   \"pModule is NULL, plugin handle '%p'\",\n\t\t\t\t\t   handle);\n\t\treturn;\n\t}\n\n\tPyObject* pFunc;\n\tPyGILState_STATE state = PyGILState_Ensure();\n\t\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_start\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->warn(\"Cannot find 'plugin_start' method \"\n\t\t\t\t\t   \"in loaded python module 
'%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\n\tif (!pFunc || !PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->warn(\"Cannot call method 'plugin_start' \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPy_CLEAR(pFunc);\n\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\n\t// Call Python method passing an object\n\tPyObject* pReturn = PyObject_CallFunction(pFunc,\n\t\t\t\t\t\t  \"O\",\n\t\t\t\t\t\t  handle);\n\n\tPy_CLEAR(pFunc);\n\n\t// Handle return\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method plugin_start : \"\n\t\t\t\t\t   \"error while getting result object, plugin '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tlogErrorMessage();\n\t}\n\tPyGILState_Release(state);\n}\n\n\n/**\n * Function to invoke 'plugin_register_ingest' function in python plugin\n *\n * @param    handle     Plugin handle from plugin_init_fn\n * @param    cb\t\tIngest routine to call\n * @param    data\tData to pass to Ingest routine\n */\nvoid plugin_register_ingest_fn(PLUGIN_HANDLE handle,\n\t\t\t\tINGEST_CB2 cb,\n\t\t\t\tvoid *data)\n{\n\tif (!handle)\n\t{\n\t\tLogger::getLogger()->fatal(\"plugin_handle: plugin_register_ingest_fn: \"\n\t\t\t\t\t   \"handle is NULL\");\n\t\treturn;\n\t}\n\n\tif (!pythonHandles)\n\t{\n\t\tLogger::getLogger()->error(\"pythonModules map is NULL \"\n\t\t\t\t\t   \"in plugin_register_ingest_fn, handle '%p'\",\n\t\t\t\t\t   handle);\n\t\t return;\n\t}\n\n        // Look for Python module for handle key\n        auto it = pythonHandles->find(handle);\n        if (it == pythonHandles->end() ||\n            !it->second ||\n            !it->second->m_module)\n        {\n                Logger::getLogger()->fatal(\"plugin_handle: plugin_register_ingest(): \"\n                                           \"pModule is NULL, 
plugin handle '%p'\",\n                                           handle);\n                return;\n        }\n\n\tPyObject* pFunc;\n\tPyGILState_STATE state = PyGILState_Ensure();\n\t\n\t// Fetch required method in loaded object\n\tpFunc = PyObject_GetAttrString(it->second->m_module, \"plugin_register_ingest\");\n\tif (!pFunc)\n\t{\n\t\tLogger::getLogger()->warn(\"Cannot find 'plugin_register_ingest' \"\n\t\t\t\t\t   \"method in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\n\tif (!PyCallable_Check(pFunc))\n\t{\n\t\t// Failure\n\t\tif (PyErr_Occurred())\n\t\t{\n\t\t\tlogErrorMessage();\n\t\t}\n\n\t\tLogger::getLogger()->warn(\"Cannot call method plugin_register_ingest \"\n\t\t\t\t\t   \"in loaded python module '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tPy_CLEAR(pFunc);\n\n\t\tPyGILState_Release(state);\n\t\treturn;\n\t}\n\t\n\t// Call Python method passing an object\n\tPyObject* ingest_fn = PyCapsule_New((void *)cb, NULL, NULL);\n\tPyObject* ingest_ref = PyCapsule_New((void *)data, NULL, NULL);\n\tPyObject* pReturn = PyObject_CallFunction(pFunc, \"OOO\", handle, ingest_fn, ingest_ref);\n\n\tPy_CLEAR(pFunc);\n\tPy_CLEAR(ingest_fn);\n\t// Also release our reference to the data capsule; the Python side holds\n\t// its own reference if it stored the object, so this avoids a leak\n\tPy_CLEAR(ingest_ref);\n\n\t// Handle returned data\n\tif (!pReturn)\n\t{\n\t\tLogger::getLogger()->error(\"Called python script method plugin_register_ingest \"\n\t\t\t\t\t   \": error while getting result object, plugin '%s'\",\n\t\t\t\t\t   it->second->m_name.c_str());\n\t\tlogErrorMessage();\n\t}\n\telse\n\t{\n\t\tLogger::getLogger()->info(\"plugin_handle: plugin_register_ingest(): \"\n\t\t\t\t\t  \"got result object '%p', plugin '%s'\",\n\t\t\t\t\t  pReturn,\n\t\t\t\t\t  it->second->m_name.c_str());\n\t}\n\tPyGILState_Release(state);\n}\n\n};\n\n"
  },
  {
    "path": "C/services/storage/CMakeLists.txt",
    "content": "cmake_minimum_required (VERSION 2.8.8)\nproject (Storage)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Wextra -Wsign-conversion\")\nset(CMAKE_CXX_FLAGS_PROFILING \"-O2 -pg\")\nset(DLLIB -ldl)\nset(UUIDLIB -luuid)\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\nset(EXEC fledge.services.storage)\n\ninclude_directories(. include ../../thirdparty/Simple-Web-Server ../../thirdparty/rapidjson/include  ../common/include ../../common/include)\n\nfind_package(Threads REQUIRED)\n\nset(BOOST_COMPONENTS system thread)\n# Late 2017 TODO: remove the following checks and always use std::regex\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        set(BOOST_COMPONENTS ${BOOST_COMPONENTS} regex)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DUSE_BOOST_REGEX\")\n    endif()\nendif()\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\nif(APPLE)\n    set(OPENSSL_ROOT_DIR \"/usr/local/opt/openssl\")\nendif()\n\nfile(GLOB storage_src \"*.cpp\")\n\nlink_directories(${PROJECT_BINARY_DIR}/../../lib)\n\nadd_executable(${EXEC} ${storage_src} ${service_common_src} ${common_src})\ntarget_link_libraries(${EXEC} ${Boost_LIBRARIES})\ntarget_link_libraries(${EXEC} ${CMAKE_THREAD_LIBS_INIT})\ntarget_link_libraries(${EXEC} ${DLLIB})\ntarget_link_libraries(${EXEC} ${UUIDLIB})\ntarget_link_libraries(${EXEC} ${COMMON_LIB})\ntarget_link_libraries(${EXEC} ${SERVICE_COMMON_LIB})\n\ninstall(TARGETS ${EXEC} RUNTIME DESTINATION fledge/services)\n\nif(MSYS) #TODO: Is MSYS true when MSVC is true?\n    target_link_libraries(${EXEC} ws2_32 wsock32)\n    if(OPENSSL_FOUND)\n        target_link_libraries(${EXEC} ws2_32 wsock32)\n    endif()\nendif()\n\n# Set profiling flags if 'Profiling' build\nif(CMAKE_BUILD_TYPE STREQUAL \"Profiling\")\n    message(\"Building in Profiling mode\")\n  
  set_target_properties(${EXEC} PROPERTIES COMPILE_FLAGS \"${CMAKE_CXX_FLAGS_PROFILING}\")\n    # define 'PROFILING' flag used by service to change directory\n    target_compile_definitions(${EXEC} PRIVATE PROFILING=1)\n    set(CMAKE_SHARED_LINKER_FLAGS \"${CMAKE_SHARED_LINKER_FLAGS} -O2 -pg\")\n    target_link_libraries(${EXEC} -O2 -pg)\nendif()\n"
  },
  {
    "path": "C/services/storage/README.rst",
    "content": ".. |br| raw:: html\n\n   <br />\n\n\n***********************\nFledge Storage Service\n***********************\n\nThis is the Storage service of the Fledge platform. It provides a\nstorage layer with a REST interface and a pluggable mechanism to attach\nto data storage systems, e.g. databases or document stores.\n|br| |br|\n\n\nBuilding\n========\n\nThe Storage service is built using cmake. To build the Storage service:\n::\n  mkdir build\n  cd build\n  cmake ..\n  make\n\nThis will create the executable file for the ``storage`` service.\n\nUse the command ``make install`` to install in the default location;\nnote you will need permission on the installation directory or use\nthe sudo command. Pass the option *DESTDIR=* to set your own destination\ninto which to install the Storage service.\n\nBuild the plugins by going to the directory *C/plugins/storage* and following\nthe instructions in each of the plugin directories.\n|br| |br|\n\n\nPrerequisites\n=============\n\nTo build the Storage service the machine must have the\n*cmake* system, *make* and *g++* installed, plus the libraries for the Storage plugin,\ne.g. the Postgres and boost libraries.\n\nTo run the Storage service the system requires a number of libraries to be\ninstalled: the boost system and the Postgres libpq libraries.\n\nOn Ubuntu based Linux distributions these can be installed with *apt-get*:\n::\n  apt-get install libboost-dev libboost-system-dev libboost-thread-dev libpq-dev\n  apt-get install cmake g++ make\n\n|br| |br|\n\n\nRunning\n=======\n\nThe Storage service may be run in daemon mode or interactively by use\nof the *-d* command line argument.\n\nThe Storage service will register with the core to allow other services\nand the core to find the API of the Storage service. It assumes the core\nis located on the same machine. 
This can however be overridden by the use of\nthe command line arguments *--port=* and *--address=* to set the port and\naddress of the core microservice.\n\nThe Storage layer will look for Storage plugins in the current directory\nor in the directory *$FLEDGE_ROOT/plugins/storage*.\n|br| |br|\n\n\nPorts\n=====\n\nThe Storage system listens for REST requests on two separate ports: the\nservice port for storage based requests and the management port for\nmanagement requests. These may either be set to specific ports in the\nconfiguration file or dynamic ports can be allocated at runtime. In this\nlatter mode of operation the clients of the Storage layer must determine\nthese ports by connecting to the core and requesting the Storage\nlayer registration information.\n\nTo run the Storage service with fixed ports, modify the configuration\ncache file *storage.json* in *$FLEDGE_DATA/etc* to pass explicit ports\nrather than 0. Note that if not set, *$FLEDGE_DATA* has the same value as\n*$FLEDGE_ROOT*.\n\nconfig.json file\n----------------\n\nThis is an example of a *config.json* file:\n::\n  { \"plugin\"        : { \"value\":\"postgres\" },\n    \"threads\"       : { \"value\":\"1\" },\n    \"port\"          : { \"value\":\"8082\" },\n    \"managementPort\": { \"value\":\"1082\" }\n  }\n\n|br| |br|\n\n\nTesting\n=======\n\nA test suite is available in the development directory *tests/unit_tests/services/storage*.\n\n"
  },
  {
    "path": "C/services/storage/configuration.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017-2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include <configuration.h>\n#include <rapidjson/document.h>\n#include <rapidjson/istreamwrapper.h>\n#include <rapidjson/ostreamwrapper.h>\n#include <rapidjson/error/en.h>\n#include <rapidjson/writer.h>\n#include <fstream>\n#include <iostream>\n#include <unordered_set>\n#include <unistd.h>\n#include <plugin_api.h>\n#include <plugin_manager.h>\n\nstatic std::string defaultConfiguration(QUOTE({\n\t\"plugin\" : {\n\t\t\"value\" : \"sqlite\",\n\t\t\"default\" : \"sqlite\",\n\t\t\"description\" : \"The main storage plugin to load\",\n\t\t\"type\" : \"enumeration\",\n\t\t\"options\" : [ \"sqlite\", \"sqlitelb\", \"postgres\" ],\n\t\t\"displayName\" : \"Storage Plugin\",\n\t\t\"order\" : \"1\"\n\t\t},\n\t\"readingPlugin\" : {\n\t\t\"value\" : \"Use main plugin\",\n\t\t\"default\" : \"Use main plugin\",\n\t\t\"description\" : \"The storage plugin to load for readings data.\",\n\t\t\"type\" : \"enumeration\",\n\t\t\"options\" : [ \"Use main plugin\", \"sqlite\", \"sqlitelb\", \"sqlitememory\", \"postgres\" ],\n\t\t\"displayName\" : \"Readings Plugin\",\n\t\t\"order\" : \"2\"\n\t\t},\n\t\"threads\" : {\n\t       \t\"value\" : \"1\", \n\t\t\"default\" : \"1\",\n\t\t\"description\" : \"The number of threads to use for the storage API\",\n\t\t\"type\" : \"integer\",\n\t\t\"displayName\" : \"Storage API threads\",\n\t\t\"minimum\" : \"1\",\n\t\t\"maximum\" : \"10\",\n\t\t\"order\" : \"3\"\n\t       \t},\n\t\"workerPool\" : {\n\t       \t\"value\" : \"5\", \n\t\t\"default\" : \"5\",\n\t\t\"description\" : \"The number of threads to create in the thread pool used to execute operations against reading data\",\n\t\t\"type\" : \"integer\",\n\t\t\"displayName\" : \"Worker thread pool\",\n\t\t\"minimum\" : \"1\",\n\t\t\"maximum\" : \"10\",\n\t\t\"order\" : \"4\"\n\t       \t},\n\t\"managedStatus\" 
: {\n\t\t\"value\" : \"false\",\n\t\t\"default\" : \"false\",\n\t\t\"description\" : \"Control if Fledge should manage the storage provider\",\n\t\t\"type\" : \"boolean\",\n\t\t\"displayName\" : \"Manage Storage\",\n\t\t\"order\" : \"5\"\n\t\t},\n\t\"port\" : { \n\t\t\"value\" : \"0\",\n\t\t\"default\" : \"0\",\n\t\t\"description\" : \"The port to listen on\",\n\t\t\"type\" : \"integer\",\n\t\t\"displayName\" : \"Service Port\",\n\t\t\"order\" : \"6\"\n\t},\n\t\"managementPort\" : {\n\t\t\"value\" : \"0\", \n\t\t\"default\" : \"0\",\n\t\t\"description\" : \"The management port to listen on.\",\n\t\t\"type\" : \"integer\",\n\t\t\"displayName\" : \"Management Port\",\n\t\t\"order\" : \"7\"\n       \t},\n\t\"logLevel\" : {\n\t\t\"value\" : \"warning\",\n\t\t\"default\" : \"warning\",\n\t\t\"description\" : \"Minimum level of messages to log\",\n\t\t\"type\" : \"enumeration\",\n\t\t\"displayName\" : \"Log Level\",\n\t\t\"options\" : [ \"error\", \"warning\", \"info\", \"debug\" ],\n\t\t\"order\" : \"8\"\n\t},\n\t\"timeout\" : {\n\t\t\"value\" : \"60\",\n\t\t\"default\" : \"60\",\n\t\t\"description\" : \"Server request timeout, expressed in seconds\",\n\t\t\"type\" : \"integer\",\n\t\t\"displayName\" : \"Timeout\",\n\t\t\"order\" : \"9\",\n\t\t\"minimum\" : \"5\",\n\t\t\"maximum\" : \"3600\"\n\t},\n\t\"perfmon\": {\n\t\t\"description\": \"Track and store performance counters\",\n\t\t\"type\": \"boolean\",\n\t\t\"displayName\": \"Performance Counters\",\n\t\t\"default\": \"false\",\n\t\t\"value\": \"false\",\n\t\t\"order\" : \"10\"\n\t}\n}));\n\nusing namespace std;\nusing namespace rapidjson;\n\n/**\n * Constructor for storage service configuration class.\n */\nStorageConfiguration::StorageConfiguration()\n{\n\tlogger = Logger::getLogger();\n\tdocument = new Document();\n\t/**\n\t * Update options in default configuration for items 'plugin' and \n\t * 'readingPlugin' with installed plugins\n\t */\n\tupdateStoragePluginConfig();\n\n\treadCache();\n\tcheckCache();\n\tif 
(hasValue(\"logLevel\"))\n\t{\n\t\tlogger->setMinLevel(getValue(\"logLevel\"));\n\t}\n}\n\n/**\n * Storage configuration destructor\n */\nStorageConfiguration::~StorageConfiguration()\n{\n\tdelete document;\n}\n\n/**\n * Return whether a value exists for the cached configuration category\n */\nbool StorageConfiguration::hasValue(const string& key)\n{\n\tif (document->HasParseError())\n\t{\n\t\tlogger->error(\"Default configuration failed to parse. %s at %d\",\n\t\t\t\tGetParseError_En(document->GetParseError()),\n\t\t\t\tdocument->GetErrorOffset());\n\t\treturn false;\n\t}\n\tif (!document->HasMember(key.c_str()))\n\t\treturn false;\n\treturn true;\n}\n\n/**\n * Return a value from the cached configuration category\n */\nconst char *StorageConfiguration::getValue(const string& key)\n{\n\tif (document->HasParseError())\n\t{\n\t\tlogger->error(\"Default configuration failed to parse. %s at %d\",\n\t\t\t\tGetParseError_En(document->GetParseError()),\n\t\t\t\tdocument->GetErrorOffset());\n\t\treturn 0;\n\t}\n\tif (!document->HasMember(key.c_str()))\n\t\treturn 0;\n\tValue& item = (*document)[key.c_str()];\n\treturn item[\"value\"].GetString();\n}\n\n/**\n * Set the value of a configuration item\n */\nbool StorageConfiguration::setValue(const string& key, const string& value)\n{\n\ttry {\n\t\tValue& item = (*document)[key.c_str()];\n\t\tconst char *cstr = value.c_str();\n\t\titem[\"value\"].SetString(cstr, strlen(cstr), document->GetAllocator());\n\t\treturn true;\n\t} catch (...) {\n\t\treturn false;\n\t}\n}\n\n/**\n * Called when the configuration category is updated.\n */\nvoid StorageConfiguration::updateCategory(const string& json)\n{\n\tlogger->info(\"New storage configuration %s\", json.c_str());\n\tDocument *newdoc = new Document();\n\tnewdoc->Parse(json.c_str());\n\tif (newdoc->HasParseError())\n\t{\n\t\tlogger->error(\"New configuration failed to parse. 
%s at %d\",\n\t\t\t\tGetParseError_En(newdoc->GetParseError()),\n\t\t\t\tnewdoc->GetErrorOffset());\n\t\tdelete newdoc;\n\t}\n\telse\n\t{\n\t\tdelete document;\n\t\tdocument = newdoc;\n\t\twriteCache();\n\t}\n}\n\n/**\n * Read the cache JSON for the configuration category from the cache file\n * into memory.\n */\nvoid StorageConfiguration::readCache()\n{\n\tstring\tcachefile;\n\n\tgetConfigCache(cachefile);\n\tif (access(cachefile.c_str(), F_OK ) != 0)\n\t{\n\t\tlogger->info(\"Storage cache %s unreadable, using default configuration: %s.\",\n\t\t\t\tcachefile.c_str(), defaultConfiguration.c_str());\n\n\t\tdocument->Parse(defaultConfiguration.c_str());\n\t\tif (document->HasParseError())\n\t\t{\n\t\t\tlogger->error(\"Default configuration failed to parse. %s at %d\",\n\t\t\t\t\tGetParseError_En(document->GetParseError()),\n\t\t\t\t\tdocument->GetErrorOffset());\n\t\t}\n\t\twriteCache();\n\t\treturn;\n\t}\n\ttry {\n\t\tifstream ifs(cachefile);\n\t\tIStreamWrapper isw(ifs);\n\t\tdocument->ParseStream(isw);\n\t\tif (document->HasParseError())\n\t\t{\n\t\t\tlogger->error(\"Configuration cache failed to parse. 
%s at %d\",\n\t\t\t\t\tGetParseError_En(document->GetParseError()),\n\t\t\t\t\tdocument->GetErrorOffset());\n\t\t}\n\t} catch (exception& ex) {\n\t\tlogger->error(\"Configuration cache failed to read %s.\", ex.what());\n\t}\n}\n\n/**\n * Write the configuration cache to disk\n */\nvoid StorageConfiguration::writeCache()\n{\nstring\tcachefile;\n\n\tgetConfigCache(cachefile);\n\tofstream ofs(cachefile);\n\tOStreamWrapper osw(ofs);\n\tWriter<OStreamWrapper> writer(osw);\n\tdocument->Accept(writer);\n}\n\n/**\n * Retrieve the location of the configuration cache to use\n *\n * If a configuration cache exists in the current directory then it is used\n *\n * If not and the environment variable FLEDGE_DATA exists then the\n * configuration file under etc in that directory will be used.\n *\n * If that does not exist and the environment variable FLEDGE_ROOT\n * exists then a configuration file under data/etc in that directory is used\n */\nvoid StorageConfiguration::getConfigCache(string& cache)\n{\nchar buf[512], *basedir;\n\n\tif (access(CONFIGURATION_CACHE_FILE, F_OK) == 0)\n\t{\n\t\tcache = CONFIGURATION_CACHE_FILE;\n\t\treturn;\n\t}\n\tif ((basedir = getenv(\"FLEDGE_DATA\")) != NULL)\n\t{\n\t\tsnprintf(buf, sizeof(buf), \"%s/etc/%s\", basedir, CONFIGURATION_CACHE_FILE);\n\t\tif (access(buf, F_OK) == 0)\n\t\t{\n\t\t\tcache = buf;\n\t\t\treturn;\n\t\t}\n\t}\n\telse if ((basedir = getenv(\"FLEDGE_ROOT\")) != NULL)\n\t{\n\t\tsnprintf(buf, sizeof(buf), \"%s/data/etc/%s\", basedir, CONFIGURATION_CACHE_FILE);\n\t\tif (access(buf, F_OK) == 0)\n\t\t{\n\t\t\tcache = buf;\n\t\t\treturn;\n\t\t}\n\t}\n\telse\n\t{\n\t\tsnprintf(buf, sizeof(buf), \"%s\", CONFIGURATION_CACHE_FILE);\n\t}\n\n\t// No configuration cache has been found - return the default location\n\tcache = buf;\n}\n\n/**\n * Return the default category to register with the core. 
This allows\n * the storage configuration to appear in the UI\n *\n * @return DefaultConfigCategory* The default configuration category\n */\nDefaultConfigCategory *StorageConfiguration::getDefaultCategory()\n{\n\tStringBuffer buffer;\n\tWriter<StringBuffer> writer(buffer);\n\tdocument->Accept(writer);\n\n\tconst char *config = buffer.GetString();\n\treturn new DefaultConfigCategory(STORAGE_CATEGORY, config);\n}\n\n/**\n * One-off check for upgrade to a cache that has full UI information\n *\n * This is only really triggered when we first do an upgrade from the\n * older cache files to the current JSON defaults that contain the\n * full information needed for the GUI.\n *\n * FOGL-4151 After changing to a new plugin, say from sqlite to postgres, the first\n * time we run in the new database there is no configuration category. In this case we will\n * get the default category, which will have a default of sqlite and no value. This will\n * end up reporting the wrong information in the UI when we look at the category, therefore\n * we special case the plugin name and set the default to whatever the current value is\n * for just this property.\n *\n * FOGL-7074 Make the plugin selection an enumeration\n */\nvoid StorageConfiguration::checkCache()\n{\nbool forceUpdate = false;\nbool writeCacheRequired = false;\n\n\t/*\n\t * If the cached version of the configuration that has been read in\n\t * does not contain an item in the default configuration, then copy\n\t * that item from the default configuration.\n\t *\n\t * This allows new items to be added to the configuration and populated\n\t * in the cache on first restart.\n\t */\n\tDocument *newdoc = new Document();\n\tnewdoc->Parse(defaultConfiguration.c_str());\n\tif (newdoc->HasParseError())\n\t{\n\t\tlogger->error(\"Default configuration failed to parse. 
%s at %d\",\n\t\t\t\tGetParseError_En(newdoc->GetParseError()),\n\t\t\t\tnewdoc->GetErrorOffset());\n\t}\n\telse\n\t{\n\t\tfor (Value::ConstMemberIterator itr = newdoc->MemberBegin();\n\t\t\t\titr != newdoc->MemberEnd(); ++itr)\n\t\t{\n\t\t\tconst char *name = itr->name.GetString();\n\t\t\tValue &newval = (*newdoc)[name];\n\t\t\tif (!hasValue(name))\n\t\t\t{\n\t\t\t\tlogger->warn(\"Adding storage configuration item %s from defaults\", name);\n\t\t\t\tDocument::AllocatorType& a = document->GetAllocator();\n\t\t\t\tValue copy(name, a);\n\t\t\t\tcopy.CopyFrom(newval, a);\n\t\t\t\tValue n(name, a);\n\t\t\t\tdocument->AddMember(n, copy, a);\n\t\t\t\twriteCacheRequired = true;\n\t\t\t}\n\t\t}\n\n\t\t// if storage plugins are updated after cache is created, update existing cache\n\t\t// with new/removed plugins\n\t\tif (document->HasMember(\"plugin\") && newdoc->HasMember(\"plugin\"))\n\t\t{\n\t\t\tValue& currentItem = (*newdoc)[\"plugin\"];\n\t\t\tValue& cacheItem = (*document)[\"plugin\"];\n\t\t\t// check for difference between cached plugin options and \n\t\t\t// currently installed storage plugins\n\t\t\tunordered_set<std::string> cacheOptions;\n\t\t\tunordered_set<std::string> currentOptions;\n\t\t\t\n\t\t\t// build list of plugins\n\t\t\tfor (auto& options : currentItem[\"options\"].GetArray())\n\t\t\t{\n\t\t\t\tcurrentOptions.insert(options.GetString());\n\t\t\t}\n\t\t\tif (cacheItem.HasMember(\"options\") && cacheItem[\"options\"].IsArray())\n\t\t\t{\n\t\t\t\tfor (auto& options : cacheItem[\"options\"].GetArray())\n\t\t\t\t{\n\t\t\t\t\tif (options.IsString()) \n\t\t\t\t\t{\n\t\t\t\t\t\tcacheOptions.insert(options.GetString());\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t// check for difference between cached and current plugins\n\t\t\tbool updateOptions = false;\n\t\t\tif (cacheOptions.size() != currentOptions.size()) \n\t\t\t{\n\t\t\t\tupdateOptions = true;\n\t\t\t} \n\t\t\telse \n\t\t\t{\n\t\t\t\tfor (const std::string& element : currentOptions) {\n\t\t\t\t\tif 
(cacheOptions.find(element) == cacheOptions.end()) {\n\t\t\t\t\t\tupdateOptions = true;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t}\n\t\t\tif (updateOptions) \n\t\t\t{\n\t\t\t\t// Update cached plugins option\n\t\t\t\tDocument::AllocatorType& a = document->GetAllocator();\n\t\t\t\tcacheItem[\"options\"].SetArray();\n\t\t\t\tfor (auto& option : currentOptions)\n\t\t\t\t{\n\t\t\t\t\tcacheItem[\"options\"].PushBack(Value().SetString(option.c_str(),a), a);\n\t\t\t\t}\n\t\t\t\twriteCacheRequired = true;\n\t\t\t}\n\t\t}\n\n\t\tif (document->HasMember(\"readingPlugin\") && newdoc->HasMember(\"readingPlugin\"))\n\t\t{\n\t\t\tValue& currentItem = (*newdoc)[\"readingPlugin\"];\n\t\t\tValue& cacheItem = (*document)[\"readingPlugin\"];\n\t\t\t// check for difference between cached plugin options and \n\t\t\t// currently installed storage plugins\n\t\t\tunordered_set<std::string>cacheOptions;\n\t\t\tunordered_set<std::string>currentOptions;\n\t\t\t\n\t\t\t// build list of plugins\n\t\t\tfor (auto& options : currentItem[\"options\"].GetArray())\n\t\t\t{\n\t\t\t\tcurrentOptions.insert(options.GetString());\n\t\t\t}\n\t\t\tif (cacheItem.HasMember(\"options\") && cacheItem[\"options\"].IsArray())\n\t\t\t{\n\t\t\t\tfor (auto& options : cacheItem[\"options\"].GetArray())\n\t\t\t\t{\n\t\t\t\t\tif (options.IsString()) \n\t\t\t\t\t{\n\t\t\t\t\t\tcacheOptions.insert(options.GetString());\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t// check for difference between cached and current plugins\n\t\t\tbool updateOptions = false;\n\t\t\tif (cacheOptions.size() != currentOptions.size()) \n\t\t\t{\n\t\t\t\tupdateOptions = true;\n\t\t\t} \n\t\t\telse \n\t\t\t{\n\t\t\t\tfor (const std::string& element : currentOptions) {\n\t\t\t\t\tif (cacheOptions.find(element) == cacheOptions.end()) {\n\t\t\t\t\t\tupdateOptions = true;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t}\n\t\t\tif (updateOptions) \n\t\t\t{\n\t\t\t\t// Update cached plugins option\n\t\t\t\tDocument::AllocatorType& a = 
document->GetAllocator();\n\t\t\t\tcacheItem[\"options\"].SetArray();\n\t\t\t\tfor (auto& option : currentOptions)\n\t\t\t\t{\n\t\t\t\t\tcacheItem[\"options\"].PushBack(Value().SetString(option.c_str(),a), a);\n\t\t\t\t}\n\t\t\t\twriteCacheRequired = true;\n\t\t\t}\n\t\t}\n\t}\n\n\tdelete newdoc;\n\n\tif (writeCacheRequired)\n\t{\n\t\t// We added a new member\n\t\twriteCache();\n\t}\n\n\t// Upgrade step to add enumeration for plugin\n\tif (document->HasMember(\"plugin\"))\n\t{\n\t\tValue& item = (*document)[\"plugin\"];\n\t\tif (item.HasMember(\"type\") && item[\"type\"].IsString())\n\t\t{\n\t\t\tconst char *type = item[\"type\"].GetString();\n\t\t\tif (strcmp(type, \"enumeration\"))\n\t\t\t{\n\t\t\t\t// It's not an enumeration currently\n\t\t\t\tforceUpdate = true;\n\t\t\t}\n\t\t}\n\t}\n\n\t// Cache is from before we used an enumeration for the plugin, force upgrade\n\t// steps\n\tif (forceUpdate == false && document->HasMember(\"plugin\"))\n\t{\n\t\tlogger->info(\"Adding database plugin enumerations\");\n\t\tValue& item = (*document)[\"plugin\"];\n\t\tif (item.HasMember(\"type\"))\n\t\t{\n\t\t\tconst char *val = getValue(\"plugin\");\n\t\t\titem[\"default\"].SetString(val, strlen(val));\n\t\t\tValue& rp = (*document)[\"readingPlugin\"];\n\t\t\tconst char *rval = getValue(\"readingPlugin\");\n\t\t\tif (strlen(rval) == 0)\n\t\t\t{\n\t\t\t\trval = \"Use main plugin\";\n\t\t\t}\n\t\t\tchar *ncrval = strdup(rval);\n\t\t\trp[\"default\"].SetString(ncrval, strlen(rval));\n\t\t\trp[\"value\"].SetString(ncrval, strlen(rval));\n\t\t\tlogger->info(\"Storage configuration cache is up to date\");\n\t\t\treturn;\n\t\t}\n\t}\n\n\tlogger->info(\"Storage configuration cache is not up to date\");\n\tnewdoc = new Document();\n\tnewdoc->Parse(defaultConfiguration.c_str());\n\tif (newdoc->HasParseError())\n\t{\n\t\tlogger->error(\"Default configuration failed to parse. 
%s at %d\",\n\t\t\t\tGetParseError_En(newdoc->GetParseError()),\n\t\t\t\tnewdoc->GetErrorOffset());\n\t}\n\telse\n\t{\n\t\tfor (Value::ConstMemberIterator itr = newdoc->MemberBegin();\n\t\t\t\titr != newdoc->MemberEnd(); ++itr)\n\t\t{\n\t\t\tconst char *name = itr->name.GetString();\n\t\t\tValue &newval = (*newdoc)[name];\n\t\t\tif (hasValue(name))\n\t\t\t{\n\t\t\t\tconst char *val = getValue(name);\n\t\t\t\tnewval[\"value\"].SetString(strdup(val), strlen(val));\n\t\t\t\tif (strcmp(name, \"plugin\") == 0)\n\t\t\t\t{\n\t\t\t\t\tnewval[\"default\"].SetString(strdup(val), strlen(val));\n\t\t\t\t\tlogger->warn(\"Set default of %s to %s\", name, val);\n\t\t\t\t}\n\t\t\t\tif (strcmp(name, \"readingPlugin\") == 0)\n\t\t\t\t{\n\t\t\t\t\tif (strlen(val) == 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tval = \"Use main plugin\";\n\t\t\t\t\t}\n\t\t\t\t\tnewval[\"default\"].SetString(strdup(val), strlen(val));\n\t\t\t\t\tlogger->warn(\"Set default of %s to %s\", name, val);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tdelete document;\n\tdocument = newdoc;\n\twriteCache();\n}\n\n/**\n * Check for installed storage and readings plugins and update the default configuration.\n * \n * Update options for category items 'plugin' and 'readingPlugin' \n * with installed plugins.\n * \n * If no plugins are found the default config is not updated.\n * \n * For plugins installed after the cache is created, the options are updated via checkCache on restart\n */\nvoid StorageConfiguration::updateStoragePluginConfig()\n{\n\tPluginManager *manager = PluginManager::getInstance();\n\tmanager->setPluginType(PLUGIN_TYPE_ID_STORAGE);\n\n\t// Fetch installed storage and readings plugins.\n\tauto storagePlugins = manager->getPluginsByFlags(PLUGIN_TYPE_STORAGE, SP_COMMON);\n\tauto readingsPlugins = manager->getPluginsByFlags(PLUGIN_TYPE_STORAGE, SP_READINGS);\n\t\n\tDocument newDocument;\n\tnewDocument.Parse(defaultConfiguration.c_str());\n\n\tif (storagePlugins.size() > 0) \n\t{\n\t\t// Modify the \"options\" array for storage with installed 
plugins\n\t\tif (newDocument.HasMember(\"plugin\") && newDocument[\"plugin\"].IsObject()) {\n\t\t\tValue& plugin = newDocument[\"plugin\"];\n\t\t\tif (plugin.HasMember(\"options\") && plugin[\"options\"].IsArray()) {\n\t\t\t\tValue& options = plugin[\"options\"];\n\t\t\t\toptions.Clear();\n\t\t\t\tfor (const auto& option : storagePlugins) \n\t\t\t\t{\n\t\t\t\t\toptions.PushBack(Value().SetString(option.c_str(), newDocument.GetAllocator()), newDocument.GetAllocator());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else {\n\t\tlogger->debug(\"unable to find installed storage plugins\");\n\t}\n\n\tif (readingsPlugins.size() > 0) \n\t{\n\t\t// Modify the \"options\" array for readingsPlugin with installed plugins\n\t\tif (newDocument.HasMember(\"readingPlugin\") && newDocument[\"readingPlugin\"].IsObject()) \n\t\t{\n\t\t\tValue& plugin = newDocument[\"readingPlugin\"];\n\t\t\tif (plugin.HasMember(\"options\") && plugin[\"options\"].IsArray()) \n\t\t\t{\n\t\t\t\tValue& options = plugin[\"options\"];\n\t\t\t\toptions.Clear();\n\t\t\t\t// Add default option \"Use main plugin\"\n\t\t\t\toptions.PushBack(Value().SetString(\"Use main plugin\", newDocument.GetAllocator()), newDocument.GetAllocator());\n\t\t\t\tfor (const auto& option : readingsPlugins) \n\t\t\t\t{\n\t\t\t\t\toptions.PushBack(Value().SetString(option.c_str(), newDocument.GetAllocator()), newDocument.GetAllocator());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else {\n\t\tlogger->debug(\"unable to find installed readings plugins\");\n\t}\n\n\t// Update default configuration if options are modified\n\tif (storagePlugins.size() > 0 || readingsPlugins.size() > 0) \n\t{\n\t\tStringBuffer buffer;\n\t\tWriter<StringBuffer> writer(buffer);\n\t\tnewDocument.Accept(writer);\n\t\tdefaultConfiguration = buffer.GetString();\n\t}\n}\n"
  },
  {
    "path": "C/services/storage/include/configuration.h",
    "content": "#ifndef _CONFIGURATION_H\n#define _CONFIGURATION_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <logger.h>\n#include <string>\n#include <rapidjson/document.h>\n#include <config_category.h>\n\n#define STORAGE_CATEGORY\t  \"Storage\"\n#define CATEGORY_DESCRIPTION\t  \"Storage configuration\"\n#define ADVANCED\t\t  \"Advanced\"\n#define CONFIGURATION_CACHE_FILE  \"storage.json\"\n\n/**\n * The storage service must handle its own configuration differently\n * to other services as it is unable to read the configuration from\n * the database. The configuration is required in order to connect\n * to the database. Therefore it keeps a shadow copy in a local file\n * and it keeps this local, cached copy up to date by registering\n * interest in the category and, whenever a change is made, writing\n * the category to the local cache file.\n */\nclass StorageConfiguration {\n  public:\n    StorageConfiguration();\n    ~StorageConfiguration();\n    const char            *getValue(const std::string& key);\n    bool\t\t  hasValue(const std::string& key);\n    bool                  setValue(const std::string& key, const std::string& value);\n    void                  updateCategory(const std::string& json);\n    DefaultConfigCategory *getDefaultCategory();\n  private:\n    void\t\t  getConfigCache(std::string& cache);\n    rapidjson::Document   *document;\n    void                  readCache();\n    void                  writeCache();\n    void                  checkCache();\n    void                  updateStoragePluginConfig();\n    Logger *logger;\n};\n\n#endif\n"
  },
  {
    "path": "C/services/storage/include/plugin_configuration.h",
    "content": "#ifndef _PLUGIN_CONFIGURATION_H\n#define _PLUGIN_CONFIGURATION_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2020 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <logger.h>\n#include <string>\n#include <rapidjson/document.h>\n#include <config_category.h>\n#include <management_client.h>\n\nclass StoragePlugin;\n\n/**\n * The storage service must handle its own configuration differently\n * to other services as it is unable to read the configuration from\n * the database.\n * This class deals with the configuration from the storage plugins, \n * maintaining a cache for the plugin\n */\nclass StoragePluginConfiguration {\n  public:\n    StoragePluginConfiguration(const std::string& name, StoragePlugin *plugin);\n    const char            *getValue(const std::string& key);\n    bool\t\t  hasValue(const std::string& key);\n    bool                  setValue(const std::string& key, const std::string& value);\n    void                  updateCategory(const std::string& json);\n    void\t\t  registerCategory(ManagementClient *client);\n    DefaultConfigCategory *getDefaultCategory();\n    ConfigCategory\t  *getConfiguration();\n  private:\n    void\t\t  getConfigCache(std::string& cache);\n    void                  readCache();\n    void                  writeCache();\n    void\t\t  updateCache();\n    const std::string\t  m_name;\n    const StoragePlugin\t  *m_plugin;\n    std::string\t\t  m_category;\n    std::string\t    \t  m_defaultConfiguration;\n    rapidjson::Document   *m_document;\n    Logger                *m_logger;\n};\n\n#endif\n"
  },
  {
    "path": "C/services/storage/include/storage_api.h",
    "content": "#ifndef _STORAGE_API_H\n#define _STORAGE_API_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n\n#include <server_http.hpp>\n#include <storage_plugin.h>\n#include <storage_stats.h>\n#include <storage_registry.h>\n#include <stream_handler.h>\n#include <perfmonitors.h>\n\nusing namespace std;\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\n\n/*\n * The URL for each entry point\n */\n#define COMMON_ACCESS\t\t\"^/storage/table/([A-Za-z][a-zA-Z0-9_]*)$\"\n#define COMMON_QUERY\t\t\"^/storage/table/([A-Za-z][a-zA-Z_0-9]*)/query$\"\n#define READING_ACCESS  \t\"^/storage/reading$\"\n#define READING_QUERY   \t\"^/storage/reading/query\"\n#define READING_PURGE   \t\"^/storage/reading/purge\"\n#define READING_INTEREST\t\"^/storage/reading/interest/([A-Za-z0-9\\\\*][a-zA-Z0-9_%\\\\.\\\\-]*)$\"\n#define TABLE_INTEREST\t\t\"^/storage/table/interest/([A-Za-z\\\\*][a-zA-Z0-9_%\\\\.\\\\-]*)$\"\n\n#define GET_TABLE_SNAPSHOTS\t\"^/storage/table/([A-Za-z][a-zA-Z_0-9_]*)/snapshot$\"\n#define CREATE_TABLE_SNAPSHOT\tGET_TABLE_SNAPSHOTS\n#define LOAD_TABLE_SNAPSHOT\t\"^/storage/table/([A-Za-z][a-zA-Z_0-9_]*)/snapshot/([a-zA-Z_0-9_]*)$\"\n#define DELETE_TABLE_SNAPSHOT\tLOAD_TABLE_SNAPSHOT\n#define CREATE_STORAGE_STREAM\t\"^/storage/reading/stream$\"\n#define STORAGE_SCHEMA\t\t\"^/storage/schema\"\n#define STORAGE_TABLE_ACCESS    \"^/storage/schema/([A-Za-z][a-zA-Z0-9_]*)/table/([A-Za-z][a-zA-Z0-9_]*)$\"\n#define STORAGE_TABLE_QUERY\t \"^/storage/schema/([A-Za-z][a-zA-Z0-9_]*)/table/([A-Za-z][a-zA-Z_0-9]*)/query$\"           \n\n#define PURGE_FLAG_RETAIN      \"retain\"\n#define PURGE_FLAG_RETAIN_ANY  \"retainany\"\n#define PURGE_FLAG_RETAIN_ALL  \"retainall\"\n#define PURGE_FLAG_PURGE       \"purge\"\n\n#define TABLE_NAME_COMPONENT\t1\n#define STORAGE_SCHEMA_NAME_COMPONENT\t1\n#define STORAGE_TABLE_NAME_COMPONENT\t2\n#define 
ASSET_NAME_COMPONENT\t1\n#define SNAPSHOT_ID_COMPONENT\t2\n\n/**\n * Class used to queue the operations to be executed by\n * the worker thread pool\n */\nclass StorageOperation {\n\tpublic:\n\t\tenum Operations\t{ ReadingAppend, ReadingPurge, ReadingFetch, ReadingQuery };\n\tpublic:\n\t\tStorageOperation(StorageOperation::Operations operation, shared_ptr<HttpServer::Request> request,\n\t\t\t\tshared_ptr<HttpServer::Response> response) :\n\t\t\t\t\tm_operation(operation),\n\t\t\t\t\tm_request(request),\n\t\t\t\t\tm_response(response)\n\t\t{\n\t\t};\n\t\t~StorageOperation()\n\t\t{\n\t\t};\n\tpublic:\n\t\tStorageOperation::Operations\tm_operation;\n\t\tshared_ptr<HttpServer::Request> m_request;\n\t\tshared_ptr<HttpServer::Response> m_response;\n};\n\nclass StoragePerformanceMonitor;\n/**\n * The Storage API class - this class is responsible for the registration of all API\n * entry points in the storage API and the dispatch of those API calls to the internals\n * of the storage service and the storage plugin itself.\n */\nclass StorageApi {\n\npublic:\n\tStorageApi(const unsigned short port, const unsigned int threads, const unsigned int workerPoolSize);\n\t~StorageApi();\n        static StorageApi *getInstance();\n\tvoid\tinitResources();\n\tvoid\tsetPlugin(StoragePlugin *);\n\tvoid\tsetReadingPlugin(StoragePlugin *);\n\tvoid\tstart();\n\tvoid\tstartServer();\n\tvoid\twait();\n\tvoid\tstopServer();\n\tunsigned short getListenerPort();\n\tvoid\tcommonInsert(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\tcommonSimpleQuery(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\tcommonQuery(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\tcommonUpdate(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\tcommonDelete(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> 
request);\n\tvoid\tdefaultResource(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\treadingAppend(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\treadingFetch(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\treadingQuery(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\treadingPurge(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\treadingRegister(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\treadingUnregister(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\ttableRegister(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\ttableUnregister(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\tcreateTableSnapshot(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\tloadTableSnapshot(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\tdeleteTableSnapshot(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\tgetTableSnapshots(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid\tcreateStorageStream(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tbool\treadingStream(ReadingStream **readings, bool commit);\n\tvoid    createStorageSchema(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid \tstorageTableInsert(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid    storageTableUpdate(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid    
storageTableDelete(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid    storageTableSimpleQuery(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\tvoid    storageTableQuery(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request);\n\n\n\tvoid\tprintList();\n\tbool\tcreateSchema(const std::string& schema);\n\tvoid\tsetTimeout(long timeout)\n\t\t{\n\t\t\tif (m_server)\n\t\t\t{\n\t\t\t\tm_server->config.timeout_request = timeout;\n\t\t\t}\n\t\t};\n\n\tStoragePlugin\t*getStoragePlugin() { return plugin; };\n\tStoragePerformanceMonitor\n\t\t\t*getPerformanceMonitor() { return m_perfMonitor; };\n\tvoid\t\tworker();\n\tvoid\t\tqueue(StorageOperation::Operations op, shared_ptr<HttpServer::Request> request, shared_ptr<HttpServer::Response> response);\npublic:\n\tstd::atomic<int>        m_workers_count;\n\nprivate:\n        static StorageApi       *m_instance;\n        HttpServer              *m_server;\n\tunsigned short          m_port;\n\tunsigned int\t\tm_threads;\n        thread                  *m_thread;\n\tStoragePlugin\t\t*plugin;\n\tStoragePlugin\t\t*readingPlugin;\n\tStorageStats\t\tstats;\n\tstd::map<string, pair<int,std::list<std::string>::iterator>> m_seqnum_map;\n\tconst unsigned int\tmax_entries_in_seqnum_map = 16;\n\tstd::list<std::string>\tseqnum_map_lru_list; // has the most recently accessed elements of m_seqnum_map at front of the dequeue\n\tstd::mutex \t\tmtx_seqnum_map;\n\tStorageRegistry\t\tregistry;\n\tvoid\t\t\trespond(shared_ptr<HttpServer::Response>, const string&);\n\tvoid\t\t\trespond(shared_ptr<HttpServer::Response>, SimpleWeb::StatusCode, const string&);\n\tvoid\t\t\tinternalError(shared_ptr<HttpServer::Response>, const exception&);\n\tvoid\t\t\tmapError(string&, PLUGIN_ERROR 
*);\n\tStreamHandler\t\t*streamHandler;\n\tStoragePerformanceMonitor\n\t\t\t\t*m_perfMonitor;\n\tstd::mutex\t\tm_queueMutex;\n\tstd::condition_variable\tm_queueCV;\n\tstd::queue<StorageOperation *>\n\t\t\t\tm_queue;\n\tstd::vector<std::thread\t*>\n\t\t\t\tm_workers;\n\tunsigned int\t\tm_workerPoolSize;\n\tbool\t\t\tm_shutdown;\n};\n\n/**\n * StoragePerformanceMonitor is a derived class from PerformanceMonitor\n * It allows direct writing of monitoring data to database\n */\nclass StoragePerformanceMonitor : public PerformanceMonitor {\n\tpublic:\n\t\t// Constructor with StorageApi pointer passed (also calling parent PerformanceMonitor constructor)\n\t\tStoragePerformanceMonitor(const std::string& name, StorageApi *api) :\n\t\t\t\t\tPerformanceMonitor(name, NULL), m_name(name), m_instance(api) {\n\t\t};\n\t\t// Direct write to storage of monitor data\n\t\tvoid writeData(const std::string& table, const InsertValues& values) {\n\t\t\tm_instance->getStoragePlugin()->commonInsert(table,\n\t\t\t\t\t\t\t\tvalues.toJSON());\n\t\t}\n\tprivate:\n\t\tstd::string\tm_name;\n\t\tStorageApi *m_instance;\n};\n\n#endif\n"
  },
  {
    "path": "C/services/storage/include/storage_plugin.h",
    "content": "#ifndef _STORAGE_PLUGIN\n#define _STORAGE_PLUGIN\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n\n#include <plugin.h>\n#include <plugin_manager.h>\n#include <string>\n#include <reading_stream.h>\n#include <plugin_configuration.h>\n\n#define\tSTORAGE_PURGE_RETAIN_ANY 0x0001U\n#define\tSTORAGE_PURGE_RETAIN_ALL 0x0002U\n#define STORAGE_PURGE_SIZE\t     0x0004U\n\n/**\n * Class that represents a storage plugin.\n *\n * The purpose of this class is to hide the use of the pointers into the\n * dynamically loaded plugin and wrap the interface into a class that\n * can be used directly in the storage subsystem.\n *\n * This is achieved by having a set of private member variables which are\n * the pointers to the functions in the plugin, and a set of public methods\n * that will call these functions via the function pointers.\n */\nclass StoragePlugin : public Plugin {\n\npublic:\n\tStoragePlugin(const std::string& name, PLUGIN_HANDLE handle);\n\t~StoragePlugin();\n\n\tint\t\tcommonInsert(const std::string& table, const std::string& payload, const char *schema = nullptr);\n\tchar\t\t*commonRetrieve(const std::string& table, const std::string& payload, const char *schema = nullptr);\n\tint\t\tcommonUpdate(const std::string& table, const std::string& payload, const char *schema = nullptr);\n\tint\t\tcommonDelete(const std::string& table, const std::string& payload, const char *schema = nullptr);\n\tint\t\treadingsAppend(const std::string& payload);\n\tchar\t\t*readingsFetch(unsigned long id, unsigned int blksize);\n\tchar\t\t*readingsRetrieve(const std::string& payload);\n\tchar\t\t*readingsPurge(unsigned long age, unsigned int flags, unsigned long sent);\n\tlong\t\t*readingsPurge();\n\tchar\t\t*readingsPurgeAsset(const std::string& asset);\n\tvoid\t\trelease(const char *response);\n\tint\t\tcreateTableSnapshot(const std::string& table, 
const std::string& id);\n\tint\t\tloadTableSnapshot(const std::string& table, const std::string& id);\n\tint\t\tdeleteTableSnapshot(const std::string& table, const std::string& id);\n\tchar\t\t*getTableSnapshots(const std::string& table);\n\tPLUGIN_ERROR\t*lastError();\n\tbool\t\thasStreamSupport() { return readingStreamPtr != NULL; };\n\tint\t\treadingStream(ReadingStream **stream, bool commit);\n\tbool\t\tpluginShutdown();\n\tint \t\tcreateSchema(const std::string& payload);\n\tStoragePluginConfiguration\n\t\t\t*getConfig() { return m_config; };\n\tconst std::string\n\t\t\t&getName() { return m_name; };\n\nprivate:\n\tPLUGIN_HANDLE\tinstance;\n\tint\t\t(*commonInsertPtr)(PLUGIN_HANDLE, const char *, const char *) = nullptr;\n\tchar\t\t*(*commonRetrievePtr)(PLUGIN_HANDLE, const char *, const char *) = nullptr;\n\tint\t\t(*commonUpdatePtr)(PLUGIN_HANDLE, const char *, const char *) = nullptr;\n\tint\t\t(*commonDeletePtr)(PLUGIN_HANDLE, const char *, const char *) = nullptr;\n\tint             (*storageSchemaInsertPtr)(PLUGIN_HANDLE, const char *, const char *, const char*) = nullptr;\n\tchar            *(*storageSchemaRetrievePtr)(PLUGIN_HANDLE, const char *, const char *, const char*) = nullptr;\n        int             (*storageSchemaUpdatePtr)(PLUGIN_HANDLE, const char *, const char *, const char*) = nullptr;\n        int             (*storageSchemaDeletePtr)(PLUGIN_HANDLE, const char *, const char *, const char*) = nullptr;\n\tint\t\t(*readingsAppendPtr)(PLUGIN_HANDLE, const char *);\n\tchar\t\t*(*readingsFetchPtr)(PLUGIN_HANDLE, unsigned long id, unsigned int blksize);\n\tchar\t\t*(*readingsRetrievePtr)(PLUGIN_HANDLE, const char *payload);\n\tchar\t\t*(*readingsPurgePtr)(PLUGIN_HANDLE, unsigned long age, unsigned int flags, unsigned long sent);\n\tunsigned int\t(*readingsPurgeAssetPtr)(PLUGIN_HANDLE, const char *asset);\n\tvoid\t\t(*releasePtr)(PLUGIN_HANDLE, const char *payload);\n\tint\t\t(*createTableSnapshotPtr)(PLUGIN_HANDLE, const char *, const char 
*);\n\tint\t\t(*loadTableSnapshotPtr)(PLUGIN_HANDLE, const char *, const char *);\n\tint\t\t(*deleteTableSnapshotPtr)(PLUGIN_HANDLE, const char *, const char *);\n\tchar\t\t*(*getTableSnapshotsPtr)(PLUGIN_HANDLE, const char *);\n\tint\t\t(*readingStreamPtr)(PLUGIN_HANDLE, ReadingStream **, bool);\n\tPLUGIN_ERROR\t*(*lastErrorPtr)(PLUGIN_HANDLE);\n\tbool\t\t(*pluginShutdownPtr)(PLUGIN_HANDLE);\n        int \t\t(*createSchemaPtr)(PLUGIN_HANDLE, const char*);\n\tstd::string\tm_name;\n\tStoragePluginConfiguration\n\t\t\t*m_config;\n\tbool \t\tm_bStorageSchemaFlag = false;\n};\n\n#endif\n"
  },
  {
    "path": "C/services/storage/include/storage_registry.h",
    "content": "#ifndef _STORAGE_REGISTRY_H\n#define _STORAGE_REGISTRY_H\n\n#include <vector>\n#include <queue>\n#include <string>\n#include <mutex>\n#include <condition_variable>\n#include <thread>\n#include <map>\n\n/**\n * The number of refused connections required before a registration\n * is removed. Connection refusal is a result of the service that had\n * registered failing.\n */\n#define MAX_REFUSALS\t3\n\ntypedef std::vector<std::pair<std::string *, std::string *> > REGISTRY;\n\ntypedef struct {\n\tstd::string url;\n\tstd::string key;\n\tstd::vector<std::string> keyValues;\n\tstd::string operation;\n} TableRegistration;\n\ntypedef std::vector<std::pair<std::string *, TableRegistration *> > REGISTRY_TABLE;\n\n\n/**\n * StorageRegistry - a class that manages requests from other microservices\n * to register interest in new readings being inserted into the storage layer\n * that match a given asset code, or any asset code \"*\".\n */\nclass StorageRegistry {\n\tpublic:\n\t\tStorageRegistry();\n\t\t~StorageRegistry();\n\t\tvoid\t\tregisterAsset(const std::string& asset, const std::string& url);\n\t\tvoid\t\tunregisterAsset(const std::string& asset, const std::string& url);\n\t\tvoid\t\tprocess(const std::string& payload);\n\t\tvoid\t\tprocessTableInsert(const std::string& tableName, const std::string& payload);\n\t\tvoid\t\tprocessTableUpdate(const std::string& tableName, const std::string& payload);\n\t\tvoid\t\tprocessTableDelete(const std::string& tableName, const std::string& payload);\n\t\tvoid\t\tregisterTable(const std::string& table, const std::string& url);\n\t\tvoid\t\tunregisterTable(const std::string& table, const std::string& url);\n\t\tvoid\t\trun();\n\tprivate:\n\t\tvoid\t\tprocessPayload(char *payload);\n\t\tvoid\t\tsendPayload(const std::string& url, const char *payload);\n\t\tvoid\t\tfilterPayload(const std::string& url, char *payload, const std::string& asset);\n\t\tvoid\t\tprocessInsert(char *tableName, char 
*payload);\n\t\tvoid\t\tprocessUpdate(char *tableName, char *payload);\n\t\tvoid\t\tprocessDelete(char *tableName, char *payload);\n\t\tTableRegistration*\n\t\t\t\tparseTableSubscriptionPayload(const std::string& payload);\n\t\tvoid\t\tprocessAssetRefusals();\n\t\tvoid\t\tprocessTableRefusals();\n\t\tvoid \t\tinsertTestTableReg();\n\t\tvoid\t\tremoveTestTableReg(int n);\n        \n\t\ttypedef \tstd::pair<time_t, char *> Item;\n\t\ttypedef \tstd::tuple<time_t, char *, char *> TableItem;\n\t\tREGISTRY\t\t\tm_registrations;\n\t\tREGISTRY_TABLE\t\t\tm_tableRegistrations;\n        \n\t\tstd::queue<StorageRegistry::Item>\n\t\t\t\t\t\tm_queue;\n\t\tstd::queue<StorageRegistry::TableItem>\n\t\t\t\t\t\tm_tableInsertQueue;\n\t\tstd::queue<StorageRegistry::TableItem>\n\t\t\t\t\t\tm_tableUpdateQueue;\n\t\tstd::queue<StorageRegistry::TableItem>\n\t\t\t\t\t\tm_tableDeleteQueue;\n\t\tstd::mutex\t\t\tm_qMutex;\n\t\tstd::mutex\t\t\tm_registrationsMutex;\n\t\tstd::mutex\t\t\tm_tableRegistrationsMutex;\n\t\tstd::thread\t\t\t*m_thread;\n\t\tstd::condition_variable\t\tm_cv;\n\t\tstd::mutex\t\t\tm_cvMutex;\n\t\tbool\t\t\t\tm_running;\n\t\tstd::map<const std::string, int>\tm_refusals;\n};\n\n#endif\n"
  },
  {
    "path": "C/services/storage/include/storage_service.h",
    "content": "#ifndef _STORAGE_SERVICE_H\n#define _STORAGE_SERVICE_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <storage_api.h>\n#include <logger.h>\n#include <configuration.h>\n#include <storage_plugin.h>\n#include <plugin_configuration.h>\n#include <service_handler.h>\n\n#define SERVICE_NAME  \"Fledge Storage\"\n\n/**\n * The StorageService class. This class is the core\n * of the service that offers access to the Fledge\n * storage layer. It maintains the API and provides\n * the hooks for incoming management API requests.\n */\nclass StorageService : public ServiceHandler {\n\tpublic:\n\t\tStorageService(const string& name);\n\t\t~StorageService();\n\t\tvoid \t\t\tstart(std::string& coreAddress, unsigned short corePort);\n\t\tvoid \t\t\tstop();\n\t\tvoid\t\t\tshutdown();\n\t\tvoid\t\t\trestart();\n\t\tbool\t\t\tisRunning() { return !m_shutdown; };\n\t\tvoid\t\t\tconfigChange(const std::string&, const std::string&);\n\t\tvoid\t\t\tconfigChildCreate(const std::string&, const std::string&, const std::string&){};\n\t\tvoid\t\t\tconfigChildDelete(const std::string& , const std::string&){};\n\t\tstring\t\t\tgetPluginName();\n\t\tstring\t\t\tgetPluginManagedStatus();\n\t\tstring\t\t\tgetReadingPluginName();\n\t\tvoid\t\t\tsetLogLevel(std::string level)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_logLevel = level;\n\t\t\t\t\t};\n\tprivate:\n\t\tconst string&\t\tm_name;\n\t\tbool \t\t\tloadPlugin();\n\t\tStorageApi    \t\t*api;\n\t\tStorageConfiguration\t*config;\n\t\tLogger        \t\t*logger;\n\t\tStoragePlugin \t\t*storagePlugin;\n\t\tStoragePlugin \t\t*readingPlugin;\n\t\tbool\t\t\tm_shutdown;\n\t\tbool\t\t\tm_requestRestart;\n\t\tstd::string\t\tm_logLevel;\n\t\tlong\t\t\tm_timeout;\n};\n#endif\n"
  },
  {
    "path": "C/services/storage/include/storage_stats.h",
    "content": "#ifndef _STORAGE_STATS_H\n#define _STORAGE_STATS_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <json_provider.h>\n#include <string>\n\nclass StorageStats : public JSONProvider {\n\tpublic:\n\t\tStorageStats();\n\t\tvoid\t\tasJSON(std::string &) const;\n\t\tunsigned int commonInsert;\n\t\tunsigned int commonSimpleQuery;\n\t\tunsigned int commonQuery;\n\t\tunsigned int commonUpdate;\n\t\tunsigned int commonDelete;\n\t\tunsigned int readingAppend;\n\t\tunsigned int readingFetch;\n\t\tunsigned int readingQuery;\n\t\tunsigned int readingPurge;\n};\n#endif\n"
  },
  {
    "path": "C/services/storage/include/stream_handler.h",
    "content": "#ifndef _STREAM_HANDLER_H\n#define _STREAM_HANDLER_H\n/*\n * Fledge storage service.\n *\n * Copyright (c) 2019 Dianomic Systems Inc.\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <thread>\n#include <mutex>\n#include <condition_variable>\n#include <vector>\n#include <map>\n#include <sys/epoll.h>\n#include <reading_stream.h>\n#include <string>\n\n#define MAX_EVENTS\t  40\t// Number of epoll events in one epoll_wait call\n#define RDS_BLOCK\t 10000\t// Number of readings to insert in each call to the storage plugin\n#define BLOCK_POOL_SIZES 512\t// Increments of block sizes in a block pool\n\nclass StorageApi;\n\nclass StreamHandler {\n\tpublic:\n\t\tStreamHandler(StorageApi *);\n\t\t~StreamHandler();\n\t\tvoid\t\t\thandler();\n\t\tuint32_t\t\tcreateStream(uint32_t *token);\n\tprivate:\n\t\tclass Stream {\n\t\t\tpublic:\n\t\t\t\tStream();\n\t\t\t\t~Stream();\n\t\t\t\tuint32_t\tcreate(int epollfd, uint32_t *token);\n\t\t\t\tvoid\t\thandleEvent(int epollfd, StorageApi *api, uint32_t events);\n\t\t\tprivate:\n\t\t\t\t/**\n\t\t\t\t * A simple memory pool we use to store the messages we receive.\n\t\t\t\t * We use this rather than malloc because it lets us avoid the overhead of\n\t\t\t\t * the more complex heap management and also because it means we avoid\n\t\t\t\t * taking out a process wide mutex.\n\t\t\t\t */\n\t\t\t\tclass MemoryPool {\n\t\t\t\t\t\tpublic:\n\t\t\t\t\t\t\tMemoryPool(size_t blkIncr) : m_blkIncr(blkIncr) {};\n\t\t\t\t\t\t\t~MemoryPool();\n\t\t\t\t\t\t\tvoid\t\t*allocate(size_t size);\n\t\t\t\t\t\t\tvoid\t\trelease(void *handle);\n\t\t\t\t\t\tprivate:\n\t\t\t\t\t\t\tsize_t\t\trndSize(size_t size)\n\t\t\t\t\t\t\t\t\t{ \n\t\t\t\t\t\t\t\t\t\treturn m_blkIncr * ((size + m_blkIncr - 1)\n\t\t\t\t\t\t\t\t\t\t\t       \t/ m_blkIncr);\n\t\t\t\t\t\t\t\t\t};\n\t\t\t\t\t\t\tvoid\t\tcreatePool(size_t size);\n\t\t\t\t\t\t\tvoid\t\tgrowPool(std::vector<void *>*, 
size_t);\n\t\t\t\t\t\t\tsize_t\t\tm_blkIncr;\n\t\t\t\t\t\t\tstd::map<size_t, std::vector<void *>* >\n\t\t\t\t\t\t\t\t\tm_pool;\n\t\t\t\t\t};\n\t\t\t\t\tvoid\t\tsetNonBlocking(int fd);\n\t\t\t\t\tunsigned int\tavailable(int fd);\n\t\t\t\t\tvoid\t\tqueueInsert(StorageApi *api, unsigned int nReadings, bool commit);\n\t\t\t\t\tvoid\t\tdump(int n);\n\t\t\t\t\tenum { Closed, Listen, AwaitingToken, Connected }\n\t\t\t\t       \t\t\tm_status;\n\t\t\t\t\tint\t\tm_socket;\n\t\t\t\t\tuint16_t\tm_port;\n\t\t\t\t\tuint32_t\tm_token;\n\t\t\t\t\tuint32_t\tm_blockNo;\n\t\t\t\t\tenum { BlkHdr, RdHdr, RdBody }\n\t\t\t\t       \t\t\tm_protocolState;\n\t\t\t\t\tuint32_t\tm_readingNo;\n\t\t\t\t\tuint32_t\tm_blockSize;\n\t\t\t\t\tsize_t\t\tm_readingSize;\n\t\t\t\t\tstruct epoll_event\n\t\t\t\t\t\t\tm_event;\n\t\t\t\t\tReadingStream\t*m_readings[RDS_BLOCK+1];\n\t\t\t\t\tReadingStream\t*m_currentReading;\n\t\t\t\t\tMemoryPool\t*m_blockPool;\n\t\t\t\t\tstd::string\tm_lastAsset;\n\t\t\t\t\tbool\t\tm_sameAsset;\n\t\t};\n\t\tStorageApi\t\t*m_api;\n\t\tstd::thread\t\tm_handlerThread;\n\t\tint\t\t\tm_tokens;\n\t\tstd::condition_variable\tm_streamsCV;\n\t\tstd::mutex\t\tm_streamsMutex;\n\t\tstd::vector<Stream *>\tm_streams;\n\t\tbool\t\t\tm_running;\n\t\tint\t\t\tm_pollfd;\n};\n#endif\n"
  },
  {
    "path": "C/services/storage/pluginconfiguration.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2020 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <plugin_configuration.h>\n#include <rapidjson/document.h>\n#include <rapidjson/istreamwrapper.h>\n#include <rapidjson/ostreamwrapper.h>\n#include <rapidjson/error/en.h>\n#include <rapidjson/writer.h>\n#include <fstream>\n#include <iostream>\n#include <unistd.h>\n#include <plugin_api.h>\n#include <storage_plugin.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\n/**\n * Constructor for storage service configuration class.\n */\nStoragePluginConfiguration::StoragePluginConfiguration(const string& name, StoragePlugin *plugin)\n\t: m_name(name), m_plugin(plugin)\n{\n\tm_defaultConfiguration = plugin->getInfo()->config;\n\tm_logger = Logger::getLogger();\n\tm_document = new Document();\n\tm_category = m_name;\n\treadCache();\n\tupdateCache();\n}\n\n/**\n * Return whether a value exists for the cached configuration category\n */\nbool StoragePluginConfiguration::hasValue(const string& key)\n{\n\tif (m_document->HasParseError())\n\t{\n\t\tm_logger->error(\"Default configuration failed to parse. %s at %d\",\n\t\t\t\tGetParseError_En(m_document->GetParseError()),\n\t\t\t\tm_document->GetErrorOffset());\n\t\treturn false;\n\t}\n\tif (!m_document->HasMember(key.c_str()))\n\t\treturn false;\n\treturn true;\n}\n\n/**\n * Return a value from the cached configuration category\n */\nconst char *StoragePluginConfiguration::getValue(const string& key)\n{\n\tif (m_document->HasParseError())\n\t{\n\t\tm_logger->error(\"Default configuration failed to parse. 
%s at %d\",\n\t\t\t\tGetParseError_En(m_document->GetParseError()),\n\t\t\t\tm_document->GetErrorOffset());\n\t\treturn 0;\n\t}\n\tif (!m_document->HasMember(key.c_str()))\n\t\treturn 0;\n\tValue& item = (*m_document)[key.c_str()];\n\treturn item[\"value\"].GetString();\n}\n\n/**\n * Set the value of a configuration item\n */\nbool StoragePluginConfiguration::setValue(const string& key, const string& value)\n{\n\ttry {\n\t\tValue& item = (*m_document)[key.c_str()];\n\t\tconst char *cstr = value.c_str();\n\t\titem[\"value\"].SetString(cstr, strlen(cstr), m_document->GetAllocator());\n\t\treturn true;\n\t} catch (...) {\n\t\treturn false;\n\t}\n}\n\n/**\n * Called when the configuration category is updated.\n */\nvoid StoragePluginConfiguration::updateCategory(const string& json)\n{\n\tm_logger->info(\"New storage configuration %s\", json.c_str());\n\tDocument *newdoc = new Document();\n\tnewdoc->Parse(json.c_str());\n\tif (newdoc->HasParseError())\n\t{\n\t\tm_logger->error(\"New configuration failed to parse. %s at %d\",\n\t\t\t\tGetParseError_En(newdoc->GetParseError()),\n\t\t\t\tnewdoc->GetErrorOffset());\n\t\tdelete newdoc;\n\t}\n\telse\n\t{\n\t\tdelete m_document;\n\t\tm_document = newdoc;\n\t\twriteCache();\n\t}\n}\n\n/**\n * Read the cache JSON for the configuration category from the cache file \n * into memory.\n */\nvoid StoragePluginConfiguration::readCache()\n{\nstring\tcachefile;\n\n\tgetConfigCache(cachefile);\n\tif (access(cachefile.c_str(), F_OK ) != 0)\n\t{\n\t\tm_logger->info(\"Storage cache %s unreadable, using default configuration: %s.\",\n\t\t\t\tcachefile.c_str(), m_defaultConfiguration.c_str());\n\t\tConfigCategory confCategory(\"tmp\", m_defaultConfiguration.c_str());\n\t\tconfCategory.setItemsValueFromDefault();\n\t\tm_document->Parse(confCategory.itemsToJSON().c_str());\n\t\tif (m_document->HasParseError())\n\t\t{\n\t\t\tm_logger->error(\"Default configuration failed to parse. 
%s at %d\",\n\t\t\t\t\tGetParseError_En(m_document->GetParseError()),\n\t\t\t\t\tm_document->GetErrorOffset());\n\t\t}\n\t\twriteCache();\n\t\treturn;\n\t}\n\ttry {\n\t\tifstream ifs(cachefile);\n\t\tIStreamWrapper isw(ifs);\n\t\tm_document->ParseStream(isw);\n\t\tif (m_document->HasParseError())\n\t\t{\n\t\t\tm_logger->error(\"Default configuration failed to parse. %s at %d\",\n\t\t\t\t\tGetParseError_En(m_document->GetParseError()),\n\t\t\t\t\tm_document->GetErrorOffset());\n\t\t}\n\t} catch (exception& ex) {\n\t\tm_logger->error(\"Configuration cache failed to read %s.\", ex.what());\n\t}\n}\n\n/**\n * Write the configuration cache to disk\n */\nvoid StoragePluginConfiguration::writeCache()\n{\nstring\tcachefile;\n\n\tgetConfigCache(cachefile);\n\tofstream ofs(cachefile);\n\tOStreamWrapper osw(ofs);\n\tWriter<OStreamWrapper> writer(osw);\n\tm_document->Accept(writer);\n}\n\n/**\n * Retrieve the location of the configuration cache to use\n *\n * If a configuration cache exists in the current directory then it is used\n *\n * If not and the environment variable FLEDGE_DATA exists then the\n * configuration file under etc in that directory will be used.\n *\n * If that does not exist and the environment variable FLEDGE_ROOT\n * exists then a configuration file under etc in that directory is used\n */\nvoid StoragePluginConfiguration::getConfigCache(string& cache)\n{\nchar buf[512], *basedir;\n\n\tsnprintf(buf, sizeof(buf), \"%s.json\", m_name.c_str());\n\tif (access(buf, F_OK) == 0)\n\t{\n\t\tcache = buf;\n\t\treturn;\n\t}\n\tif ((basedir = getenv(\"FLEDGE_DATA\")) != NULL)\n\t{\n\t\tsnprintf(buf, sizeof(buf), \"%s/etc/%s.json\", basedir, m_name.c_str());\n\t\tif (access(buf, F_OK) == 0)\n\t\t{\n\t\t\tcache = buf;\n\t\t\treturn;\n\t\t}\n\t}\n\telse if ((basedir = getenv(\"FLEDGE_ROOT\")) != NULL)\n\t{\n\t\tsnprintf(buf, sizeof(buf), \"%s/data/etc/%s.json\", basedir, m_name.c_str());\n\t\tif (access(buf, F_OK) == 0)\n\t\t{\n\t\t\tcache = 
buf;\n\t\t\treturn;\n\t\t}\n\t}\n\telse\n\t{\n\t\tsnprintf(buf, sizeof(buf), \"%s.json\", m_name.c_str());\n\t}\n\n\t// No configuration cache has been found - return the default location\n\tcache = buf;\n}\n\n/**\n * Return the default category to register with the core. This allows\n * the storage configuration to appear in the UI\n *\n * @return DefaultConfigCategory* The default configuration category\n */\nDefaultConfigCategory *StoragePluginConfiguration::getDefaultCategory()\n{\n\tStringBuffer buffer;\n\tWriter<StringBuffer> writer(buffer);\n\tm_document->Accept(writer);\n\n\tconst char *config = buffer.GetString();\n\treturn new DefaultConfigCategory(m_category, config);\n}\n/**\n * Return the category to register with the core. This allows\n * the storage configuration to appear in the UI\n *\n * @return ConfigCategory* The default configuration category\n */\nConfigCategory *StoragePluginConfiguration::getConfiguration()\n{\n\tStringBuffer buffer;\n\tWriter<StringBuffer> writer(buffer);\n\tm_document->Accept(writer);\n\n\tconst char *config = buffer.GetString();\n\treturn new ConfigCategory(m_category, config);\n}\n\n/**\n * Update the cache with any new items found in the configuration returned\n * by the plugin\n */\nvoid StoragePluginConfiguration::updateCache()\n{\n\tDocument d;\n\td.Parse(m_defaultConfiguration.c_str());\n\tif (d.HasParseError())\n\t{\n\t\tm_logger->error(\"Configuration returned by plugin_init has parse errors\");\n\t}\n\tfor (auto &item : d.GetObject())\n\t{\n\t\tstring itemName = item.name.GetString();\n\t\tif (m_document->HasMember(itemName.c_str()))\n\t\t{\n\t\t}\n\t\telse\n\t\t{\n\t\t\tValue v;\n\t\t\tv.CopyFrom(d[itemName.c_str()], m_document->GetAllocator());\n\t\t\tValue name;\n\t\t\tname.SetString(itemName.c_str(), itemName.length(), m_document->GetAllocator());\n\t\t\tm_document->AddMember(name, v, m_document->GetAllocator());\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "C/services/storage/storage",
    "content": ""
  },
  {
    "path": "C/services/storage/storage.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <storage_service.h>\n#include <configuration.h>\n#include <management_api.h>\n#include <management_client.h>\n#include <service_record.h>\n#include <plugin_manager.h>\n#include <plugin_api.h>\n#include <plugin.h>\n#include <logger.h>\n#include <iostream>\n#include <string>\n#include <signal.h>\n#include <execinfo.h>\n#include <dlfcn.h>\n#include <cxxabi.h>\n#include <syslog.h>\n#include <config_handler.h>\n#include <plugin_configuration.h>\n\n#define NO_EXIT_STACKTRACE\t\t0\t// Set to 1 to make storage loop after stacktrace\n\t\t\t\t\t\t// This is useful to be able to attach a debugger\n\n#define SERVICE_TYPE \"Storage\"\nextern int makeDaemon(void);\n\nusing namespace std;\n\n/**\n * Signal handler to log stack traces on fatal signals\n */\nstatic void handler(int sig)\n{\nLogger\t*logger = Logger::getLogger();\nvoid\t*array[20];\nchar\tbuf[1024];\nint\tsize;\n\n\t// get void*'s for all entries on the stack\n\tsize = backtrace(array, 20);\n\n\t// log all the frames via the logger\n\tlogger->fatal(\"Signal %d (%s) trapped:\\n\", sig, strsignal(sig));\n\tchar **messages = backtrace_symbols(array, size);\n\tfor (int i = 0; i < size; i++)\n\t{\n\t\tDl_info info;\n\t\tif (dladdr(array[i], &info) && info.dli_sname)\n\t\t{\n\t\t    char *demangled = NULL;\n\t\t    int status = -1;\n\t\t    if (info.dli_sname[0] == '_')\n\t\t        demangled = abi::__cxa_demangle(info.dli_sname, NULL, 0, &status);\n\t\t    snprintf(buf, sizeof(buf), \"%-3d %*p %s + %zd---------\",\n\t\t             i, int(2 + sizeof(void*) * 2), array[i],\n\t\t             status == 0 ? demangled :\n\t\t             info.dli_sname == 0 ? 
messages[i] : info.dli_sname,\n\t\t             (char *)array[i] - (char *)info.dli_saddr);\n\t\t    free(demangled);\n\t\t} \n\t\telse\n\t\t{\n\t\t    snprintf(buf, sizeof(buf), \"%-3d %*p %s---------\",\n\t\t             i, int(2 + sizeof(void*) * 2), array[i], messages[i]);\n\t\t}\n\t\tlogger->fatal(\"(%d) %s\", i, buf);\n\t}\n\tfree(messages);\n#if NO_EXIT_STACKTRACE\n\twhile (1)\n\t{\n\t\tsleep(100);\n\t}\n#endif\n\texit(1);\n}\n\n// Displays service information in JSON format\nstatic void printServiceInfoAsJSON()\n{\n\t\tstatic std::string serviceInfoJSON = R\"({\"name\":\"Storage Service\",\"description\":\"Service buffers data within a single instance\",\"type\":\")\" + std::string(SERVICE_TYPE) + R\"(\",\"process\":\"storage\",\"process_script\":\"[\\\"services/storage\\\"]\"})\";\n        std::cout << serviceInfoJSON << std::endl;\n}\n\n/**\n * Storage service main entry point\n */\nint main(int argc, char *argv[])\n{\nunsigned short corePort = 8082;\nstring\t       coreAddress = \"localhost\";\nbool\t       daemonMode = true;\nstring\t       myName = SERVICE_NAME;\nbool           returnPlugin = false;\nbool           returnReadingsPlugin = false;\nstring\t       logLevel = \"warning\";\n\n\tfor (int i = 1; i < argc; i++)\n\t{\n\t\tif (!strcmp(argv[i], \"--info\"))\n\t\t{\n\t\t\tprintServiceInfoAsJSON();\n\t\t\treturn 0;\n\t\t}\n\t\tif (!strcmp(argv[i], \"-d\"))\n\t\t{\n\t\t\tdaemonMode = false;\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--port=\", 7))\n\t\t{\n\t\t\tcorePort = (unsigned short)atoi(&argv[i][7]);\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--name=\", 7))\n\t\t{\n\t\t\tmyName = &argv[i][7];\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--address=\", 10))\n\t\t{\n\t\t\tcoreAddress = &argv[i][10];\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--plugin\", 8))\n\t\t{\n\t\t\treturnPlugin = true;\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--readingsplugin\", 16))\n\t\t{\n\t\t\treturnReadingsPlugin = true;\n\t\t}\n\t\telse if (!strncmp(argv[i], \"--logLevel=\", 
11))\n\t\t{\n\t\t\tlogLevel = &argv[i][11];\n\t\t}\n\t}\n\n#ifdef PROFILING\n\tchar profilePath[200]{0};\n\tif (getenv(\"FLEDGE_DATA\")) \n\t{\n\t\tsnprintf(profilePath, sizeof(profilePath), \"%s/%s_Profile\", getenv(\"FLEDGE_DATA\"), myName.c_str());\n\t} else if (getenv(\"FLEDGE_ROOT\"))\n\t{\n\t\tsnprintf(profilePath, sizeof(profilePath), \"%s/data/%s_Profile\", getenv(\"FLEDGE_ROOT\"), myName.c_str());\n\t} else \n\t{\n\t\tsnprintf(profilePath, sizeof(profilePath), \"/usr/local/fledge/data/%s_Profile\", myName.c_str());\n\t}\n\tmkdir(profilePath, 0777);\n\tchdir(profilePath);\n#endif\n\n\tif (returnPlugin == false && returnReadingsPlugin == false && daemonMode && makeDaemon() == -1)\n\t{\n\t\t// Failed to run in daemon mode\n\t\tcout << \"Failed to run as daemon - proceeding in interactive mode.\" << endl;\n\t}\n\n\tif (returnPlugin && returnReadingsPlugin)\n\t{\n\t\tcout << \"You cannot specify --plugin and --readingsplugin together\";\n\t\texit(1);\n\t}\n\n\tStorageService service(myName);\n\tservice.setLogLevel(logLevel);\n\tLogger::getLogger()->setMinLevel(logLevel);\n\tif (returnPlugin)\n\t{\n\t\tcout << service.getPluginName() << \" \" << service.getPluginManagedStatus() << endl;\n\t}\n\telse if (returnReadingsPlugin)\n\t{\n\t\tcout << service.getReadingPluginName() << \" \" << service.getPluginManagedStatus() << endl;\n\t}\n\telse\n\t{\n\t\tservice.start(coreAddress, corePort);\n\t}\n\treturn 0;\n}\n\n/**\n * Detach the process from the terminal and run in the background.\n */\nint makeDaemon()\n{\npid_t pid;\n\n\tint logmask = setlogmask(0);\n\t/* create new process */\n\tif ((pid = fork()  ) == -1)\n\t{\n\t\treturn -1;  \n\t}\n\telse if (pid != 0)  \n\t{\n\t\texit (EXIT_SUCCESS);  \n\t}\n\n\t// If we got here we are a child process\n\n\t// create new session and process group \n\tif (setsid() == -1)  \n\t{\n\t\treturn -1;  \n\t}\n\tsetlogmask(logmask);\n\n\t// Close stdin, stdout and stderr\n\tclose(0);\n\tclose(1);\n\tclose(2);\n\t// redirect fd's 
0,1,2 to /dev/null\n\t(void)open(\"/dev/null\", O_RDWR);  \t// stdin\n\tif (dup(0) == -1) {}  \t\t\t// stdout\tWorkaround GCC bug 66425 produces warning\n\tif (dup(0) == -1) {}  \t\t\t// stderr\tWorkaround GCC bug 66425 produces warning\n \treturn 0;\n}\n\n/**\n * Constructor for the storage service\n */\nStorageService::StorageService(const string& myName) : m_name(myName),\n\t\t\t\t\t\treadingPlugin(NULL), m_shutdown(false),\n\t\t\t\t\t\tm_requestRestart(false)\n{\nunsigned short servicePort;\n\n\tlogger = new Logger(myName);\t// Do this first to make sure we have the right logger\n\tconfig = new StorageConfiguration();\n\n\tsignal(SIGSEGV, handler);\n\tsignal(SIGILL, handler);\n\tsignal(SIGBUS, handler);\n\tsignal(SIGFPE, handler);\n\tsignal(SIGABRT, handler);\n\n\tif (config->getValue(\"port\") == NULL)\n\t{\n\t\tservicePort = 0;\t// default to a dynamic port\n\t}\n\telse\n\t{\n\t\tservicePort = (unsigned short)atoi(config->getValue(\"port\"));\n\t}\n\tunsigned int threads = 1;\n\tif (config->hasValue(\"threads\"))\n\t{\n\t\tthreads = (unsigned int)atoi(config->getValue(\"threads\"));\n\t}\n\tunsigned int workerPoolSize = 5;\n\tif (config->hasValue(\"workerPool\"))\n\t{\n\t\tworkerPoolSize = (unsigned int)atoi(config->getValue(\"workerPool\"));\n\t}\n\tif (config->hasValue(\"logLevel\"))\n\t{\n\t\tm_logLevel = config->getValue(\"logLevel\");\n\t}\n\telse\n\t{\n\t\tm_logLevel = \"warning\";\n\t}\n\tlogger->setMinLevel(m_logLevel);\n\n\tif (config->hasValue(\"timeout\"))\n\t{\n\t\tm_timeout = strtol(config->getValue(\"timeout\"), NULL, 10);\n\t}\n\telse\n\t{\n\t\tm_timeout = 5;\n\t}\n\n\tapi = new StorageApi(servicePort, threads, workerPoolSize);\n\tapi->setTimeout(m_timeout);\n}\n\n/**\n * Storage Service destructor\n */\nStorageService::~StorageService()\n{\n\tdelete api;\n\tdelete config;\n\tdelete logger;\n}\n\n/**\n * Start the storage service\n */\nvoid StorageService::start(string& coreAddress, unsigned short corePort)\n{\n\tif 
(!loadPlugin())\n\t{\n\t\tlogger->fatal(\"Failed to load storage plugin.\");\n\t\treturn;\n\t}\n\tunsigned short managementPort = (unsigned short)0;\n\tif (config->getValue(\"managementPort\"))\n\t{\n\t\tmanagementPort = (unsigned short)atoi(config->getValue(\"managementPort\"));\n\t}\n\tManagementApi management(SERVICE_NAME, managementPort);\t// Start management API\n\tapi->initResources();\n\tlogger->info(\"Starting service...\");\n\tapi->start();\n\tmanagement.registerService(this);\n\n\tmanagement.start();\n\n\t// Allow time for the listeners to start before we register\n\tsleep(1);\n\tif (! m_shutdown)\n\t{\n\t\t// Now register our service\n\t\t// TODO proper hostname lookup\n\t\tunsigned short listenerPort = api->getListenerPort();\n\t\tunsigned short managementListener = management.getListenerPort();\n\t\tServiceRecord record(m_name, SERVICE_TYPE, \"http\", \"localhost\", listenerPort, managementListener);\n\t\tManagementClient *client = new ManagementClient(coreAddress, corePort);\n\t\tclient->registerService(record);\n\n\t\t// FOGL-7074 upgrade step\n\t\ttry {\n\t\t\tConfigCategory cat = client->getCategory(\"Storage\");\n\t\t\tstring rp = cat.getValue(\"readingPlugin\");\n\t\t\tif (rp.empty())\n\t\t\t{\n\t\t\t\tclient->setCategoryItemValue(\"Storage\", \"readingPlugin\",\n\t\t\t\t\t\t\"Use main plugin\");\n\t\t\t}\n\t\t} catch (...) 
{\n\t\t\t// ignore\n\t\t}\n\n\t\t// Add the default configuration under the Advanced category\n\t\tunsigned int retryCount = 0;\n\t\tDefaultConfigCategory *conf = config->getDefaultCategory();\n\t\tconf->setDescription(CATEGORY_DESCRIPTION);\n\t\twhile (client->addCategory(*conf, false) == false && ++retryCount < 10)\n\t\t{\n\t\t\tsleep(2 * retryCount);\n\t\t}\n\n\t\tdelete conf;\n\n\t\tvector<string> children1;\n\t\tchildren1.push_back(STORAGE_CATEGORY);\n\t\tConfigCategories categories = client->getCategories();\n\t\ttry {\n\t\t\tbool found = false;\n\t\t\tfor (unsigned int idx = 0; idx < categories.length(); idx++)\n\t\t\t{\n\t\t\t\tif (categories[idx]->getName().compare(ADVANCED) == 0)\n\t\t\t\t{\n\t\t\t\t\tclient->addChildCategories(ADVANCED, children1);\n\t\t\t\t\tfound = true;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (!found)\n\t\t\t{\n\t\t\t\tDefaultConfigCategory advanced(ADVANCED, \"{}\");\n\t\t\t\tadvanced.setDescription(ADVANCED);\n\t\t\t\tif (client->addCategory(advanced, true))\n\t\t\t\t{\n\t\t\t\t\tclient->addChildCategories(ADVANCED, children1);\n\t\t\t\t}\n\t\t\t}\n\t\t} catch (...) 
{\n\t\t}\n\n\t\t// Register for configuration changes to our category\n\t\tConfigHandler *configHandler = ConfigHandler::getInstance(client);\n\t\tconfigHandler->registerCategory(this, STORAGE_CATEGORY);\n\n\t\tStoragePluginConfiguration *storagePluginConfig = storagePlugin->getConfig();\n\t\tif (storagePluginConfig != NULL)\n\t\t{\n\t\t\tDefaultConfigCategory *conf = storagePluginConfig->getDefaultCategory();\n\t\t\tconf->setDescription(\"Storage Plugin\");\n\t\t\twhile (client->addCategory(*conf, true) == false && ++retryCount < 10)\n\t\t\t{\n\t\t\t\tsleep(2 * retryCount);\n\t\t\t}\n\t\t\tvector<string> children1;\n\t\t\tchildren1.push_back(conf->getName());\n\t\t\tclient->addChildCategories(STORAGE_CATEGORY, children1);\n\n\t\t\t// Register for configuration changes to our storage plugin category\n\t\t\tConfigHandler *configHandler = ConfigHandler::getInstance(client);\n\t\t\tconfigHandler->registerCategory(this, conf->getName());\n\n\t\t\tdelete conf;\n\t\t}\n\t\tif (readingPlugin)\n\t\t{\n\t\t\tStoragePluginConfiguration *storagePluginConfig = readingPlugin->getConfig();\n\t\t\tif (storagePluginConfig != NULL)\n\t\t\t{\n\t\t\t\tDefaultConfigCategory *conf = storagePluginConfig->getDefaultCategory();\n\t\t\t\tconf->setDescription(\"Reading Plugin\");\n\t\t\t\twhile (client->addCategory(*conf, true) == false && ++retryCount < 10)\n\t\t\t\t{\n\t\t\t\t\tsleep(2 * retryCount);\n\t\t\t\t}\n\t\t\t\tvector<string> children1;\n\t\t\t\tchildren1.push_back(conf->getName());\n\t\t\t\tclient->addChildCategories(STORAGE_CATEGORY, children1);\n\n\t\t\t\t// Register for configuration changes to our reading plugin category\n\t\t\t\tConfigHandler *configHandler = ConfigHandler::getInstance(client);\n\t\t\t\tconfigHandler->registerCategory(this, conf->getName());\n\n\t\t\t\tdelete conf;\n\t\t\t}\n\t\t}\n\n\t\t// Now we are 
running force the plugin names back to the configuration manager to\n\t\t// make sure they match what we are running. This can be out of sync if the storage\n\t\t// configuration cache has been manually reset or altered while Fledge was down\n\t\tclient->setCategoryItemValue(STORAGE_CATEGORY, \"plugin\", config->getValue(\"plugin\"));\n\t\tclient->setCategoryItemValue(STORAGE_CATEGORY, \"readingPlugin\", config->getValue(\"readingPlugin\"));\n\n\t\t// Check whether to enable storage performance monitor\n\t\tif (config->hasValue(\"perfmon\"))\n\t\t{\n\t\t\tstring perf = config->getValue(\"perfmon\");\n\t\t\tif (perf.compare(\"true\") == 0)\n\t\t\t{\n\t\t\t\tapi->getPerformanceMonitor()->setCollecting(true);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tapi->getPerformanceMonitor()->setCollecting(false);\n\t\t\t}\n\t\t}\n\n\t\t// Wait for all the API threads to complete\n\t\tapi->wait();\n\n\t\tif (readingPlugin)\n\t\t\treadingPlugin->pluginShutdown();\n\t\treadingPlugin = NULL;\n\n\t\tif (storagePlugin)\n\t\t\tstoragePlugin->pluginShutdown();\n\t\tstoragePlugin = NULL;\n\n\t\t// Clean shutdown, unregister the storage service\n\t\tif (m_requestRestart)\n\t\t\tclient->restartService();\n\t\telse\n\t\t\tclient->unregisterService();\n\t}\n\telse\n\t{\n\t\tapi->wait();\n\t}\n\tmanagement.stop();\n\tlogger->info(\"Storage service shut down.\");\n}\n\n/**\n * Stop the storage service\n */\nvoid StorageService::stop()\n{\n\tlogger->info(\"Stopping service...\\n\");\n}\n\n/**\n * Load the configured storage plugin or plugins\n *\n * @return bool\tTrue if the plugins have been loaded and support the correct operations\n */\nbool StorageService::loadPlugin()\n{\n\tPluginManager *manager = PluginManager::getInstance();\n\tmanager->setPluginType(PLUGIN_TYPE_ID_STORAGE);\n\n\tconst char *plugin = config->getValue(\"plugin\");\n\tif (plugin == NULL)\n\t{\n\t\tlogger->error(\"Unable to fetch plugin name from configuration.\\n\");\n\t\treturn false;\n\t}\n\tlogger->info(\"Load storage 
plugin %s.\", plugin);\n\tPLUGIN_HANDLE handle;\n\tstring\tpname = plugin;\n\tif ((handle = manager->loadPlugin(pname, PLUGIN_TYPE_STORAGE)) != NULL)\n\t{\n\t\tstoragePlugin = new StoragePlugin(pname, handle);\n\t\tif ((storagePlugin->getInfo()->options & SP_COMMON) == 0)\n\t\t{\n\t\t\tlogger->error(\"Defined storage plugin %s does not support common table operations.\\n\",\n\t\t\t\t\tplugin);\n\t\t\treturn false;\n\t\t}\n\t\tif (config->hasValue(\"raedingPlugin\") == false && (storagePlugin->getInfo()->options & SP_READINGS) == 0)\n\t\t{\n\t\t\tlogger->error(\"Defined storage plugin %s does not support readings operations.\\n\",\n\t\t\t\t\tplugin);\n\t\t\treturn false;\n\t\t}\n\t\tapi->setPlugin(storagePlugin);\n\t\tlogger->info(\"Loaded storage plugin %s.\", plugin);\n\t}\n\telse\n\t{\n\t\treturn false;\n\t}\n\tif (! config->hasValue(\"readingPlugin\"))\n\t{\n\t\t// Single plugin does everything\n\t\treturn true;\n\t}\n\tconst char *readingPluginName = config->getValue(\"readingPlugin\");\n\tif (! 
*readingPluginName)\n\t{\n\t\t// Single plugin does everything\n\t\treturn true;\n\t}\n\tif (strcmp(readingPluginName, plugin) == 0 \n\t\t\t|| strcmp(readingPluginName, \"Use main plugin\") == 0)\n\t{\n\t\t// Storage plugin and reading plugin are the same, or we have been \n\t\t// explicitly told to use the storage plugin for reading so no need\n\t\t// to add a reading plugin\n\t\treturn true;\n\t}\n\tlogger->info(\"Load reading plugin %s.\", readingPluginName);\n\tstring rpname = readingPluginName;\n\tif ((handle = manager->loadPlugin(rpname, PLUGIN_TYPE_STORAGE)) != NULL)\n\t{\n\t\treadingPlugin = new StoragePlugin(rpname, handle);\n\t\tif ((readingPlugin->getInfo()->options & SP_READINGS) == 0)\n\t\t{\n\t\t\tlogger->error(\"Defined readings storage plugin %s does not support readings operations.\\n\",\n\t\t\t\t\treadingPluginName);\n\t\t\treturn false;\n\t\t}\n\t\tapi->setReadingPlugin(readingPlugin);\n\t\tlogger->info(\"Loaded reading plugin %s.\", readingPluginName);\n\t}\n\telse\n\t{\n\t\treturn false;\n\t}\n\treturn true;\n}\n\n/**\n * Shutdown request\n */\nvoid StorageService::shutdown()\n{\n\t/* Stop receiving new requests and allow existing\n\t * requests to drain.\n\t */\n\tm_shutdown = true;\n\tlogger->info(\"Storage service shutdown in progress.\");\n\tapi->stopServer();\n\n}\n\n/**\n * Restart request\n */\nvoid StorageService::restart()\n{\n\t/* Stop receiving new requests and allow existing\n\t * requests to drain.\n\t */\n\tm_shutdown = true;\n\tm_requestRestart = true;\n\tlogger->info(\"Storage service restart in progress.\");\n\tapi->stopServer();\n\n}\n\n/**\n * Configuration change notification\n */\nvoid StorageService::configChange(const string& categoryName, const string& category)\n{\n\tlogger->info(\"Configuration category change '%s'\", categoryName.c_str());\n\tif 
(!categoryName.compare(STORAGE_CATEGORY))\n\t{\n\t\tconfig->updateCategory(category);\n\n\t\tif (m_logLevel.compare(config->getValue(\"logLevel\")))\n\t\t{\n\t\t\tm_logLevel = config->getValue(\"logLevel\");\n\t\t\tlogger->setMinLevel(m_logLevel);\n\t\t}\n\t\tif (config->hasValue(\"timeout\"))\n\t\t{\n\t\t\tlong timeout = strtol(config->getValue(\"timeout\"), NULL, 10);\n\t\t\tif (timeout != m_timeout)\n\t\t\t{\n\t\t\t\tapi->setTimeout(timeout);\n\t\t\t\tm_timeout = timeout;\n\t\t\t}\n\t\t}\n\t\tif (config->hasValue(\"perfmon\"))\n\t\t{\n\t\t\tstring perf = config->getValue(\"perfmon\");\n\t\t\tif (perf.compare(\"true\") == 0)\n\t\t\t{\n\t\t\t\tapi->getPerformanceMonitor()->setCollecting(true);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tapi->getPerformanceMonitor()->setCollecting(false);\n\t\t\t}\n\t\t}\n\t\treturn;\n\t}\n\tif (!categoryName.compare(getPluginName()))\n\t{\n\t\tstoragePlugin->getConfig()->updateCategory(category);\n\t\treturn;\n\t}\n\tif (config->hasValue(\"readingPlugin\"))\n\t{\n\t\tconst char *readingPluginName = config->getValue(\"readingPlugin\");\n\t\tif (!categoryName.compare(readingPluginName))\n\t\t{\n\t\t\treadingPlugin->getConfig()->updateCategory(category);\n\t\t}\n\t}\n}\n\n/**\n * Return the name of the configured storage plugin\n */\nstring StorageService::getPluginName()\n{\n\treturn string(config->getValue(\"plugin\"));\n}\n\n/**\n * Return the managed status of the storage plugin\n */\nstring StorageService::getPluginManagedStatus()\n{\n\treturn string(config->getValue(\"managedStatus\"));\n}\n\n/**\n * Return the name of the configured reading plugin\n */\nstring StorageService::getReadingPluginName()\n{\n\tstring rval = config->getValue(\"readingPlugin\");\n\tif (rval.empty())\n\t{\n\t\trval = config->getValue(\"plugin\");\n\t}\n\treturn rval;\n}\n"
  },
  {
    "path": "C/services/storage/storage_api.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017-2018 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch, Massimiliano Pinto\n */\n#include \"client_http.hpp\"\n#include \"server_http.hpp\"\n#include \"storage_api.h\"\n#include \"storage_stats.h\"\n#include \"management_api.h\"\n#include \"logger.h\"\n#include \"plugin_exception.h\"\n#include <rapidjson/document.h>\n#include <atomic>\n\n// Added for the default_resource example\n#include <algorithm>\n#include <fstream>\n#include <vector>\n#ifdef HAVE_OPENSSL\n#include \"crypto.hpp\"\n#endif\n\n#include <string_utils.h>\n\n#define WORKER_THREAD_POOL\t1\n// Enable worker threads for readings append and fetch\n#define WORKER_THREADS\t\t1\n\n// Threshold for logging number of threads in use for some \"readings\" wrappers\n#define MAX_WORKER_THREADS\t5\n\n/**\n * Definition of the Storage Service REST API\n */\n\nStorageApi *StorageApi::m_instance = 0;\n\nusing namespace std;\nusing namespace rapidjson;\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\nusing HttpClient = SimpleWeb::Client<SimpleWeb::HTTP>;\n\n/**\n * The following are a set of wrapper C functions that are registered with the HTTP Server\n * for each of the API entry points. 
These must be outside of a class as the library has no\n * mechanism to have a class instance and hence can not provide a \"this\" pointer for the callback.\n *\n * These functions do the minimum work needed to find the singleton instance of the StorageAPI\n * class and call the appropriate method of that class to do the actual work.\n */\n\n/**\n * Wrapper function for the common insert API call.\n */\nvoid commonInsertWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->commonInsert(response, request);\n}\n\n/**\n * Wrapper function for the common update API call.\n */\nvoid commonUpdateWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->commonUpdate(response, request);\n}\n\n/**\n * Wrapper function for the common delete API call.\n */\nvoid commonDeleteWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->commonDelete(response, request);\n}\n\n/**\n * Wrapper function for the common simple query API call.\n */\nvoid commonSimpleQueryWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->commonSimpleQuery(response, request);\n}\n\n/**\n * Wrapper function for the common query API call.\n */\nvoid commonQueryWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->commonQuery(response, request);\n}\n\n/**\n * Wrapper function for the default resource API call. 
This is called whenever\n * an unrecognised API call is received.\n */\nvoid defaultWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->defaultResource(response, request);\n}\n\n/**\n * Called when an error occurs\n */\nvoid on_error(__attribute__((unused)) shared_ptr<HttpServer::Request> request, __attribute__((unused)) const SimpleWeb::error_code &ec) {\n}\n\n/**\n * Wrapper function for the reading append API call.\n */\nvoid readingAppendWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t  shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n#if WORKER_THREAD_POOL\n        api->queue(StorageOperation::ReadingAppend, request, response);\n#elif WORKER_THREADS\n\tstd::atomic<int>* cnt = &(api->m_workers_count);\n\t// Check current number of workers and log if threshold value is hit\n\tint tVal = std::atomic_load(cnt);\n\tif (tVal >= MAX_WORKER_THREADS)\n\t{\n\t\tLogger::getLogger()->warn(\"Storage API: readingAppend() is being run by a new thread. 
\"\n\t\t\t\t\t  \"Current worker threads count %d exceeds the warning limit of %d threads.\",\n\t\t\t\t\t  tVal,\n\t\t\t\t\t  MAX_WORKER_THREADS);\n\t}\n\n\t// Start a new thread\n\tthread work_thread([api, cnt, response, request]\n\t{\n\t\t// Increase count\n\t\tstd::atomic_fetch_add(cnt, 1);\n\n\t\tapi->readingAppend(response, request);\n\n\t\t// Decrease counter \n\t\tstd::atomic_fetch_sub(cnt, 1);\n\t});\n\t// Detach the new thread\n\twork_thread.detach();\n#else\n\tapi->readingAppend(response, request);\n#endif\n}\n\n/**\n * Wrapper function for the reading fetch API call.\n */\nvoid readingFetchWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n#if WORKER_THREAD_POOL\n        api->queue(StorageOperation::ReadingFetch, request, response);\n#elif WORKER_THREADS\n\tstd::atomic<int>* cnt = &(api->m_workers_count);\n\t// Check current number of workers and log if threshold value is hit\n\tint tVal = std::atomic_load(cnt);\n\tif (tVal >= MAX_WORKER_THREADS)\n\t{\n\t\tLogger::getLogger()->warn(\"Storage API: readingFetch() is being run by a new thread. 
\"\n\t\t\t\t\t  \"Current worker threads count %d exceeds the warning limit of %d threads.\",\n\t\t\t\t\t  tVal,\n\t\t\t\t\t  MAX_WORKER_THREADS);\n\t}\n\n\t// Start a new thread\n\tthread work_thread([api, cnt, response, request]\n\t{\n\t\t// Increase count\n\t\tstd::atomic_fetch_add(cnt, 1);\n\n\t\tapi->readingFetch(response, request);\n\n\t\t// Decrease counter \n\t\tstd::atomic_fetch_sub(cnt, 1);\n\t});\n\t// Detach the new thread\n\twork_thread.detach();\n#else\n\tapi->readingFetch(response, request);\n#endif\n}\n\n/**\n * Wrapper function for the reading query API call.\n */\nvoid readingQueryWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n#if WORKER_THREAD_POOL\n        api->queue(StorageOperation::ReadingQuery, request, response);\n#else\n\tapi->readingQuery(response, request);\n#endif\n}\n\n/**\n * Wrapper function for the reading purge API call.\n */\nvoid readingPurgeWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n#if WORKER_THREAD_POOL\n        api->queue(StorageOperation::ReadingPurge, request, response);\n#elif WORKER_THREADS\n\tstd::atomic<int>* cnt = &(api->m_workers_count);\n\t// Check current number of workers and log if threshold value is hit\n\tint tVal = std::atomic_load(cnt);\n\tif (tVal >= MAX_WORKER_THREADS)\n\t{\n\t\tLogger::getLogger()->warn(\"Storage API: readingPurge() is being run by a new thread. 
\"\n\t\t\t\t\t  \"Current worker threads count %d exceeds the warning limit of %d threads.\",\n\t\t\t\t\t  tVal,\n\t\t\t\t\t  MAX_WORKER_THREADS);\n\t}\n\n\t// Start a new thread\n\tthread work_thread([api, cnt, response, request]\n\t{\n\t\t// Increase count\n\t\tstd::atomic_fetch_add(cnt, 1);\n\n\t\tapi->readingPurge(response, request);\n\t\t// Decrease counter \n\t\tstd::atomic_fetch_sub(cnt, 1);\n\t});\n\t// Detach the new thread\n\twork_thread.detach();\n#else\n\tapi->readingPurge(response, request);\n#endif\n}\n\n/**\n * Wrapper function for the reading interest register API call.\n */\nvoid readingRegisterWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->readingRegister(response, request);\n}\n\n/**\n * Wrapper function for the reading interest unregister API call.\n */\nvoid readingUnregisterWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->readingUnregister(response, request);\n}\n\n/**\n * Wrapper function for the table interest register API call.\n */\nvoid tableRegisterWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->tableRegister(response, request);\n}\n\n/**\n * Wrapper function for the table interest unregister API call.\n */\nvoid tableUnregisterWrapper(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->tableUnregister(response, request);\n}\n\n/**\n * Wrapper function for the create snapshot API call.\n */\nvoid createTableSnapshotWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t\tshared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->createTableSnapshot(response, request);\n}\n\n/**\n * Wrapper function for the load 
snapshot API call.\n */\nvoid loadTableSnapshotWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t      shared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->loadTableSnapshot(response, request);\n}\n\n/**\n * Wrapper function for the delete snapshot API call.\n */\nvoid deleteTableSnapshotWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t\tshared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->deleteTableSnapshot(response, request);\n}\n\n/**\n * Wrapper function for the get table snapshots API call.\n */\nvoid getTableSnapshotsWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t\tshared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->getTableSnapshots(response, request);\n}\n\n/**\n * Wrapper function for the create storage stream API call.\n */\nvoid createStorageStreamWrapper(shared_ptr<HttpServer::Response> response,\n\t\t\t\tshared_ptr<HttpServer::Request> request)\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->createStorageStream(response, request);\n}\n\n/**\n * Wrapper function for the create storage schema API call.\n */\nvoid createStorageSchemaWrapper(shared_ptr<HttpServer::Response> response,\n                                shared_ptr<HttpServer::Request> request)\n{\n        StorageApi *api = StorageApi::getInstance();\n        api->createStorageSchema(response, request);\n}\n\n\n/**\n * Wrapper function for the insert into storage table API call.\n */\nvoid storageTableInsertWrapper(shared_ptr<HttpServer::Response> response,\n                                shared_ptr<HttpServer::Request> request)\n{\n        StorageApi *api = StorageApi::getInstance();\n        api->storageTableInsert(response, request);\n}\n\n\n/**\n * Wrapper function for the simple query in storage table API call.\n */\nvoid storageTableSimpleQueryWrapper(shared_ptr<HttpServer::Response> response,\n                   
             shared_ptr<HttpServer::Request> request)\n{\n        StorageApi *api = StorageApi::getInstance();\n        api->storageTableSimpleQuery(response, request);\n}\n\n/**\n * Wrapper function for the update into storage table API call.\n */\nvoid storageTableUpdateWrapper(shared_ptr<HttpServer::Response> response,\n                                shared_ptr<HttpServer::Request> request)\n{\n        StorageApi *api = StorageApi::getInstance();\n        api->storageTableUpdate(response, request);\n}\n\n/**\n * Wrapper function for the delete from storage table API call.\n */\nvoid storageTableDeleteWrapper(shared_ptr<HttpServer::Response> response,\n                                shared_ptr<HttpServer::Request> request)\n{\n        StorageApi *api = StorageApi::getInstance();\n        api->storageTableDelete(response, request);\n}\n\n/**\n * Wrapper function for the query of storage table API call.\n */\nvoid storageTableQueryWrapper(shared_ptr<HttpServer::Response> response,\n                                shared_ptr<HttpServer::Request> request)\n{\n        StorageApi *api = StorageApi::getInstance();\n        api->storageTableQuery(response, request);\n}\n\n/**\n * Construct the singleton Storage API \n */\nStorageApi::StorageApi(const unsigned short port, const unsigned int threads, const unsigned int poolSize) : m_thread(NULL), readingPlugin(0), streamHandler(0)\n{\n\tm_port = port;\n\tm_threads = threads;\n\tm_server = new HttpServer();\n\tm_server->config.port = port;\n\tm_server->config.thread_pool_size = threads;\n\tm_server->config.timeout_request = 60;\n\tm_perfMonitor = NULL;\n\tm_workerPoolSize = poolSize;\n\tm_workers.resize(poolSize, NULL);\n\tStorageApi::m_instance = this;\n}\n\n/**\n * Destructor for the storage API class. 
There is only ever one StorageApi class\n * in existence and it lives for the entire duration of the storage service, so this\n * is really for completeness rather than any practical use.\n */\nStorageApi::~StorageApi()\n{\n\tif (m_server)\n\t{\n\t\tdelete m_server;\n\t}\n\tm_instance = NULL;\n\n\tif (m_thread)\n\t{\n\t\tdelete m_thread;\n\t}\n\tif (m_perfMonitor)\n\t{\n\t\tdelete m_perfMonitor;\n\t}\n\tfor (unsigned int i = 0; i < m_workerPoolSize; i++)\n\t{\n\t\tif (m_workers[i])\n\t\t\tdelete m_workers[i];\n\t}\n}\n\n/**\n * Return the singleton instance of the StorageAPI class\n */\nStorageApi *StorageApi::getInstance()\n{\n\tif (m_instance == NULL)\n\t{\n\t\tLogger::getLogger()->warn(\"Creating a default storage API instance, tuning parameters will be ignored\");\n\t\tm_instance = new StorageApi(0, 1, 5);\n\t}\n\treturn m_instance;\n}\n\n/**\n * Return the current listener port\n */\nunsigned short StorageApi::getListenerPort()\n{\n\treturn m_server->getLocalPort();\n}\n\n/**\n * Initialise the API entry points for the common data resource and\n * the readings resource.\n */\nvoid StorageApi::initResources()\n{\n\t// Initialise worker threads counter\n\tm_workers_count = ATOMIC_VAR_INIT(0);\n\n\t// Initialise the API entry points\n\tm_server->resource[COMMON_ACCESS][\"POST\"] = commonInsertWrapper;\n\tm_server->resource[COMMON_ACCESS][\"GET\"] = commonSimpleQueryWrapper;\n\tm_server->resource[COMMON_QUERY][\"PUT\"] = commonQueryWrapper;\n\tm_server->resource[COMMON_ACCESS][\"PUT\"] = commonUpdateWrapper;\n\tm_server->resource[COMMON_ACCESS][\"DELETE\"] = commonDeleteWrapper;\n\tm_server->default_resource[\"POST\"] = defaultWrapper;\n\tm_server->default_resource[\"PUT\"] = defaultWrapper;\n\tm_server->default_resource[\"GET\"] = defaultWrapper;\n\tm_server->default_resource[\"DELETE\"] = defaultWrapper;\n\n\tm_server->resource[READING_INTEREST][\"POST\"] = readingRegisterWrapper;\n\tm_server->resource[READING_INTEREST][\"DELETE\"] = 
readingUnregisterWrapper;\n\n\tm_server->resource[TABLE_INTEREST][\"POST\"] = tableRegisterWrapper;\n\tm_server->resource[TABLE_INTEREST][\"DELETE\"] = tableUnregisterWrapper;\n\n\tm_server->resource[CREATE_TABLE_SNAPSHOT][\"POST\"] = createTableSnapshotWrapper;\n\tm_server->resource[LOAD_TABLE_SNAPSHOT][\"PUT\"] = loadTableSnapshotWrapper;\n\tm_server->resource[DELETE_TABLE_SNAPSHOT][\"DELETE\"] = deleteTableSnapshotWrapper;\n\tm_server->resource[GET_TABLE_SNAPSHOTS][\"GET\"] = getTableSnapshotsWrapper;\n\n\tm_server->resource[READING_ACCESS][\"POST\"] = readingAppendWrapper;\n\tm_server->resource[READING_ACCESS][\"GET\"] = readingFetchWrapper;\n\tm_server->resource[READING_QUERY][\"PUT\"] = readingQueryWrapper;\n\tm_server->resource[READING_PURGE][\"PUT\"] = readingPurgeWrapper;\n\n\tm_server->resource[CREATE_STORAGE_STREAM][\"POST\"] = createStorageStreamWrapper;\n\tm_server->resource[STORAGE_SCHEMA][\"POST\"] = createStorageSchemaWrapper;\n\n\tm_server->resource[STORAGE_TABLE_ACCESS][\"POST\"] = storageTableInsertWrapper;\n\tm_server->resource[STORAGE_TABLE_ACCESS][\"GET\"] = storageTableSimpleQueryWrapper;\n\tm_server->resource[STORAGE_TABLE_ACCESS][\"PUT\"] = storageTableUpdateWrapper;\n\tm_server->resource[STORAGE_TABLE_ACCESS][\"DELETE\"] = storageTableDeleteWrapper;\n\tm_server->resource[STORAGE_TABLE_QUERY][\"PUT\"] = storageTableQueryWrapper;\n\n\tm_server->on_error = on_error;\n\n\tManagementApi *management = ManagementApi::getInstance();\n\tmanagement->registerStats(&stats);\n\n\t// Create StoragePerformanceMonitor object for direct monitoring data saving\n\tm_perfMonitor = new StoragePerformanceMonitor(\"Storage\", this);\n}\n\nvoid startService()\n{\n\tStorageApi::getInstance()->startServer();\n}\n\n\n/**\n * Static method used to start the thread\n */\nstatic void workerStart()\n{\n\tStorageApi *api = StorageApi::getInstance();\n\tapi->worker();\n}\n\n/**\n * Start the HTTP server\n */\nvoid StorageApi::start() {\n\tm_thread = new 
thread(startService);\n\tm_shutdown = false;\n\tfor (unsigned int i = 0; i < m_workerPoolSize; i++)\n\t{\n\t\tm_workers[i] = new thread(workerStart);\n\t}\n}\n\nvoid StorageApi::startServer() {\n\tm_server->start();\n}\n\nvoid StorageApi::stopServer() {\n\tm_server->stop();\n}\n/**\n * Wait for the HTTP server to shutdown\n */\nvoid StorageApi::wait() {\n\tm_thread->join();\n\tm_shutdown = true;\n\tm_queueCV.notify_all();\n\tfor (unsigned int i = 0; i < m_workerPoolSize; i++)\n\t{\n\t\tif (m_workers[i])\n\t\t{\n\t\t\tm_workers[i]->join();\n\t\t\tdelete m_workers[i];\n\t\t\tm_workers[i] = NULL;\n\t\t}\n\t}\n}\n\n/**\n * Connect with the storage plugin\n */\nvoid StorageApi::setPlugin(StoragePlugin *plugin)\n{\n\tthis->plugin = plugin;\n}\n\n/**\n * Connect with the storage plugin\n */\nvoid StorageApi::setReadingPlugin(StoragePlugin *plugin)\n{\n\tthis->readingPlugin = plugin;\n}\n\n/**\n * Construct an HTTP response with the 200 OK return code using the payload\n * provided.\n *\n * @param response The response stream to send the response on\n * @param payload  The payload to send\n */\nvoid StorageApi::respond(shared_ptr<HttpServer::Response> response, const string& payload)\n{\n\t*response << \"HTTP/1.1 200 OK\\r\\nContent-Length: \" << payload.length() << \"\\r\\n\"\n\t\t <<  \"Content-type: application/json\\r\\n\\r\\n\" << payload;\n}\n/**\n * The worker thread\n */\nvoid StorageApi::worker()\n{\n\tunique_lock<mutex> lck(m_queueMutex);\n\twhile (!m_shutdown)\n\t{\n\t\twhile (!m_queue.empty())\n\t\t{\n\t\t\tStorageOperation *op = m_queue.front();\n\t\t\tm_queue.pop();\n\t\t\tlck.unlock();\n\t\t\tswitch (op->m_operation)\n\t\t\t{\n\t\t\tcase StorageOperation::ReadingAppend:\n\t\t\t\treadingAppend(op->m_response, op->m_request);\n\t\t\t\tbreak;\n\t\t\tcase StorageOperation::ReadingFetch:\n\t\t\t\treadingFetch(op->m_response, op->m_request);\n\t\t\t\tbreak;\n\t\t\tcase StorageOperation::ReadingPurge:\n\t\t\t\treadingPurge(op->m_response, 
op->m_request);\n\t\t\t\tbreak;\n\t\t\tcase StorageOperation::ReadingQuery:\n\t\t\t\treadingQuery(op->m_response, op->m_request);\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tLogger::getLogger()->error(\"Internal error, unknown operation %d requested of storage worker thread\", op->m_operation);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tdelete op;\n\t\t\tlck.lock();\n\t\t}\n\t\tm_queueCV.wait(lck);\n\t}\n}\n\n/**\n * Append a request to the readings request queue\n *\n * If the queue is starting to get long, delay the return as\n * a primitive way to throttle incoming requests\n *\n * @param op\tThe operation to perform\n * @param request\tThe HTTP request\n * @param response\tThe HTTP response\n */\nvoid StorageApi::queue(StorageOperation::Operations op, shared_ptr<HttpServer::Request> request, shared_ptr<HttpServer::Response> response)\n{\n\tunique_lock<mutex> lck(m_queueMutex);\n\tm_queue.push(new StorageOperation(op, request, response));\n\tm_queueCV.notify_all();\n\tunsigned int length = m_queue.size();\n\tm_perfMonitor->collect(\"Worker Queue length\", length);\n\tif (length > 10)\n\t{\n\t\tlck.unlock();\n\t\tusleep(1000 * length);\n\t\tif (length % 10 == 0)\n\t\t\tLogger::getLogger()->warn(\"Reading request queue now at %d\", length);\n\t}\n}\n\n/**\n * Construct an HTTP response with the specified return code using the payload\n * provided.\n *\n * @param response \tThe response stream to send the response on\n * @param code\t\tThe HTTP response code to send\n * @param payload  \tThe payload to send\n */\nvoid StorageApi::respond(shared_ptr<HttpServer::Response> response, SimpleWeb::StatusCode code, const string& payload)\n{\n\t*response << \"HTTP/1.1 \" << status_code(code) << \"\\r\\nContent-Length: \" << payload.length() << \"\\r\\n\"\n\t\t <<  \"Content-type: application/json\\r\\n\\r\\n\" << payload;\n}\n\n/**\n * Perform an insert into a table of the data provided in the payload.\n *\n * @param response\tThe response stream to send the response on\n * @param 
request\tThe HTTP request\n */\nvoid StorageApi::commonInsert(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring  tableName;\nstring\tpayload;\nstring  responsePayload;\n\n\tstats.commonInsert++;\n\ttry {\n\t\ttableName = request->path_match[TABLE_NAME_COMPONENT];\n\t\tpayload = request->content.string();\n\n\t\tint rval = plugin->commonInsert(tableName, payload);\n\t\tif (rval != -1)\n\t\t{\n\t\t\tregistry.processTableInsert(tableName, payload);\n\t\t\tresponsePayload = \"{ \\\"response\\\" : \\\"inserted\\\", \\\"rows_affected\\\" : \";\n\t\t\tresponsePayload += to_string(rval);\n\t\t\tresponsePayload += \" }\";\n\t\t\trespond(response, responsePayload);\n\n\t\t\tif (m_perfMonitor->isCollecting())\n\t\t\t{\n\t\t\t\tm_perfMonitor->collect(\"insert rows \" + tableName, rval);\n\t\t\t\tm_perfMonitor->collect(\"insert Payload Size \" + tableName, (long)payload.length());\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tmapError(responsePayload, plugin->lastError());\n\t\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, responsePayload);\n\t\t}\n\t} catch (exception& ex) {\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Perform an update on a table of the data provided in the payload.\n *\n * @param response\tThe response stream to send the response on\n * @param request\tThe HTTP request\n */\nvoid StorageApi::commonUpdate(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring  tableName;\nstring\tpayload;\nstring\tresponsePayload;\n\n\tauto header_seq = request->header.find(\"SeqNum\");\n\tif(header_seq != request->header.end())\n\t{\n\t\tstring threadId = header_seq->second.substr(0, header_seq->second.find(\"_\"));\n\t\tint seqNum = stoi(header_seq->second.substr(header_seq->second.find(\"_\")+1));\n\t\t{\n\t\t\tstd::unique_lock<std::mutex> lock(mtx_seqnum_map);\n\t\t\tauto it = m_seqnum_map.find(threadId);\n\t\t\tif (it != m_seqnum_map.end())\n\t\t\t{\n\t\t\t\tif 
(seqNum <= it->second.first)\n\t\t\t\t{\n\t\t\t\t\tresponsePayload = \"{ \\\"response\\\" : \\\"updated\\\", \\\"rows_affected\\\"  : \";\n\t\t\t\t\tresponsePayload += to_string(0);\n\t\t\t\t\tresponsePayload += \" }\";\n\t\t\t\t\tLogger::getLogger()->info(\"%s:%d: Repeat/old request: responding with zero response - threadId=%s, last seen seqNum for this threadId=%d, HTTP request header seqNum=%d\",\n\t\t\t\t\t\t\t\t\t__FUNCTION__, __LINE__, threadId.c_str(), it->second.first, seqNum);\n\t\t\t\t\trespond(response, responsePayload);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\t\n\t\t\t\t// remove this threadId from LRU list; will add this to front of LRU list below\n\t\t\t\tseqnum_map_lru_list.erase(m_seqnum_map[threadId].second);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tif (seqnum_map_lru_list.size() == max_entries_in_seqnum_map) // LRU list is full\n\t\t\t\t{\n\t\t\t\t\t//delete least recently used element\n\t\t\t\t\tstring last = seqnum_map_lru_list.back();\n\t\t\t\t\tseqnum_map_lru_list.pop_back();\n\t\t\t\t\tm_seqnum_map.erase(last);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// insert an entry for threadId at front of LRU queue\n\t\t\tseqnum_map_lru_list.push_front(threadId);\n\t\t\tm_seqnum_map[threadId] = make_pair(seqNum, seqnum_map_lru_list.begin());\n\t\t}\n\t}\n\t\n\tstats.commonUpdate++;\n\ttry {\n\t\ttableName = request->path_match[TABLE_NAME_COMPONENT];\n\t\tpayload = request->content.string();\n\n\t\tint rval = plugin->commonUpdate(tableName, payload);\n\t\tif (rval != -1)\n\t\t{\n\t\t\tregistry.processTableUpdate(tableName, payload);\n\t\t\tresponsePayload = \"{ \\\"response\\\" : \\\"updated\\\", \\\"rows_affected\\\"  : \";\n\t\t\tresponsePayload += to_string(rval);\n\t\t\tresponsePayload += \" }\";\n\t\t\trespond(response, responsePayload);\n\n\t\t\tif (m_perfMonitor->isCollecting())\n\t\t\t{\n\t\t\t\tm_perfMonitor->collect(\"update rows \" + tableName, rval);\n\t\t\t\tm_perfMonitor->collect(\"update Payload Size \" + tableName, 
(long)payload.length());\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tmapError(responsePayload, plugin->lastError());\n\t\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, responsePayload);\n\t\t}\n\n\t} catch (exception& ex) {\n\t\tinternalError(response, ex);\n\t\t}\n}\n\n/**\n * Perform a simple query on the table using the query parameters as conditions\n * TODO make this work for multiple column queries\n *\n * @param response\tThe response stream to send the response on\n * @param request\tThe HTTP request\n */\nvoid StorageApi::commonSimpleQuery(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring  tableName;\nSimpleWeb::CaseInsensitiveMultimap\tquery;\nstring payload;\n\n\tstats.commonSimpleQuery++;\n\ttry {\n\t\ttableName = request->path_match[TABLE_NAME_COMPONENT];\n\t\tquery = request->parse_query_string();\n\n\t\tif (query.size() > 0)\n\t\t{\n\t\t\tpayload = \"{ \\\"where\\\" : { \";\n\t\t\tfor(auto &param : query)\n\t\t\t{\n\t\t\t\tpayload = payload + \"\\\"column\\\" :  \\\"\";\n\t\t\t\tpayload = payload + param.first;\n\t\t\t\tpayload = payload + \"\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"\";\n\t\t\t\tpayload = payload + param.second;\n\t\t\t\tpayload = payload + \"\\\"\";\n\t\t\t}\n\t\t\tpayload = payload + \"} }\";\n\t\t}\n\n\t\tchar *pluginResult = plugin->commonRetrieve(tableName, payload);\n\t\tif (pluginResult)\n\t\t{\n\t\t\tstring res = pluginResult;\n\n\t\t\trespond(response, res);\n\t\t\tfree(pluginResult);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring responsePayload;\n\t\t\tmapError(responsePayload, plugin->lastError());\n\t\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, responsePayload);\n\t\t}\n\t} catch (exception& ex) {\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Perform query on a table using the JSON encoded query in the payload\n *\n * @param response\tThe response stream to send the response on\n * @param request\tThe HTTP request\n 
*/\nvoid StorageApi::commonQuery(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring  tableName;\nstring\tpayload;\n\n\tstats.commonQuery++;\n\ttry {\n\t\ttableName = request->path_match[TABLE_NAME_COMPONENT];\n\t\tpayload = request->content.string();\n\n\t\tchar *pluginResult = plugin->commonRetrieve(tableName, payload);\n\t\tif (pluginResult)\n\t\t{\n\t\t\tstring res = pluginResult;\n\n\t\t\trespond(response, res);\n\t\t\tfree(pluginResult);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring responsePayload;\n\t\t\tmapError(responsePayload, plugin->lastError());\n\t\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, responsePayload);\n\t\t}\n\n\t} catch (exception& ex) {\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Perform a delete on a table using the condition encoded in the JSON payload\n *\n * @param response\tThe response stream to send the response on\n * @param request\tThe HTTP request\n */\nvoid StorageApi::commonDelete(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring  tableName;\nstring\tpayload;\nstring  responsePayload;\n\n\tstats.commonDelete++;\n\ttry {\n\t\ttableName = request->path_match[TABLE_NAME_COMPONENT];\n\t\tpayload = request->content.string();\n\n\t\tint rval = plugin->commonDelete(tableName, payload);\n\t\tif (rval != -1)\n\t\t{\n\t\t\tregistry.processTableDelete(tableName, payload);\n\t\t\tresponsePayload = \"{ \\\"response\\\" : \\\"deleted\\\", \\\"rows_affected\\\"  : \";\n\t\t\tresponsePayload += to_string(rval);\n\t\t\tresponsePayload += \" }\";\n\t\t\trespond(response, responsePayload);\n\n\t\t\tif (m_perfMonitor->isCollecting())\n\t\t\t{\n\t\t\t\tm_perfMonitor->collect(\"delete rows \" + tableName, rval);\n\t\t\t\tm_perfMonitor->collect(\"delete Payload Size \" + tableName, (long)payload.length());\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tmapError(responsePayload, plugin->lastError());\n\t\t\trespond(response, 
SimpleWeb::StatusCode::client_error_bad_request, responsePayload);\n\t\t}\n\n\t} catch (exception& ex) {\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Perform an append operation on the readings.\n *\n * @param response\tThe response stream to send the response on\n * @param request\tThe HTTP request\n */\nvoid StorageApi::readingAppend(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring payload;\nstring  responsePayload;\nstruct timeval\ttStart, tEnd;\n\n\tif (m_perfMonitor->isCollecting())\n\t{\n\t\tgettimeofday(&tStart, NULL);\n\t}\n\t\n\tauto header_seq = request->header.find(\"SeqNum\");\n\tif(header_seq != request->header.end())\n\t{\n\t\tstring threadId = header_seq->second.substr(0, header_seq->second.find(\"_\"));\n\t\tint seqNum = stoi(header_seq->second.substr(header_seq->second.find(\"_\")+1));\n\n\t\t{\n\t\t\tstd::unique_lock<std::mutex> lock(mtx_seqnum_map);\n\t\t\tauto it = m_seqnum_map.find(threadId);\n\t\t\tif (it != m_seqnum_map.end())\n\t\t\t{\n\t\t\t\tif (seqNum <= it->second.first)\n\t\t\t\t{\n\t\t\t\t\tresponsePayload = \"{ \\\"response\\\" : \\\"appended\\\", \\\"readings_added\\\" : \";\n\t\t\t\t\tresponsePayload += to_string(0);\n\t\t\t\t\tresponsePayload += \" }\";\n\t\t\t\t\tLogger::getLogger()->info(\"%s:%d: Repeat/old request: responding with zero response - threadId=%s, last seen seqNum for this threadId=%d, HTTP request header seqNum=%d\",\n\t\t\t\t\t\t\t\t\t__FUNCTION__, __LINE__, threadId.c_str(), it->second.first, seqNum);\n\t\t\t\t\trespond(response, responsePayload);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\t// remove this threadId from LRU list; will add this to front of LRU list below\n\t\t\t\tseqnum_map_lru_list.erase(m_seqnum_map[threadId].second);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tif (seqnum_map_lru_list.size() == max_entries_in_seqnum_map) // LRU list is full\n\t\t\t\t{\n\t\t\t\t\t//delete least recently used element\n\t\t\t\t\tstring last = 
seqnum_map_lru_list.back();\n\t\t\t\t\tseqnum_map_lru_list.pop_back();\n\t\t\t\t\tm_seqnum_map.erase(last);\n\t\t\t\t}\n\t\t\t}\n\t\t\t// insert an entry for threadId at front of LRU queue\n\t\t\tseqnum_map_lru_list.push_front(threadId);\n\t\t\tm_seqnum_map[threadId] = make_pair(seqNum, seqnum_map_lru_list.begin());\n\t\t}\n\t}\n\n\tstats.readingAppend++;\n\ttry {\n\t\tpayload = request->content.string();\n\t\tint rval = (readingPlugin ? readingPlugin : plugin)->readingsAppend(payload);\n\t\tif (rval != -1)\n\t\t{\n\t\t\tregistry.process(payload);\n\t\t\tresponsePayload = \"{ \\\"response\\\" : \\\"appended\\\", \\\"readings_added\\\" : \";\n\t\t\tresponsePayload += to_string(rval);\n\t\t\tresponsePayload += \" }\";\n\t\t\trespond(response, responsePayload);\n\n\t\t\tif (m_perfMonitor->isCollecting())\n\t\t\t{\n\t\t\t\tgettimeofday(&tEnd, NULL);\n\t\t\t\tm_perfMonitor->collect(\"Reading Append Rows \" +\n\t\t\t\t\t\t(readingPlugin ? readingPlugin : plugin)->getName(),\n\t\t\t\t\t\trval);\n\t\t\t\tm_perfMonitor->collect(\"Reading Append PayloadSize \" +\n\t\t\t\t\t\t(readingPlugin ? readingPlugin : plugin)->getName(),\n\t\t\t\t\t\t(long)payload.length());\n\t\t\t\tstruct timeval diff;\n\t\t\t\ttimersub(&tEnd, &tStart, &diff);\n\t\t\t\tm_perfMonitor->collect(\"Reading Append Time (ms)\", diff.tv_sec * 1000 + diff.tv_usec / 1000);\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tmapError(responsePayload, (readingPlugin ? 
readingPlugin : plugin)->lastError());\n\t\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, responsePayload);\n\t\t}\n\n\t\t//respond(response, responsePayload);\n\t} catch (exception& ex) {\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Fetch a block of readings.\n *\n * @param response\tThe response stream to send the response on\n * @param request\tThe HTTP request\n */\nvoid StorageApi::readingFetch(shared_ptr<HttpServer::Response> response,\n\t\t\t      shared_ptr<HttpServer::Request> request)\n{\nSimpleWeb::CaseInsensitiveMultimap query;\nunsigned long\t\t\t   id = 0;\nunsigned long\t\t\t   count = 0;\n\tstats.readingFetch++;\n\ttry {\n\t\tquery = request->parse_query_string();\n\n\t\tauto search = query.find(\"id\");\n\t\tif (search == query.end())\n\t\t{\n\t\t\tstring payload = \"{ \\\"error\\\" : \\\"Missing query parameter id\\\" }\";\n\t\t\trespond(response,\n\t\t\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\tpayload);\n\t\t\treturn;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tid = (unsigned long)atol(search->second.c_str());\n\t\t}\n\t\tsearch = query.find(\"count\");\n\t\tif (search == query.end())\n\t\t{\n\t\t\tstring payload = \"{ \\\"error\\\" : \\\"Missing query parameter count\\\" }\";\n\t\t\trespond(response,\n\t\t\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\tpayload);\n\t\t\treturn;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tcount = (unsigned)atol(search->second.c_str());\n\t\t}\n\n\t\t// Get plugin data\n\t\tchar *responsePayload = (readingPlugin ? 
readingPlugin : plugin)->readingsFetch(id, count);\n\t\tstring res = responsePayload;\n\n\t\t// Reply to client\n\t\trespond(response, res);\n\t\t// Free plugin data\n\t\tfree(responsePayload);\n\t} catch (exception& ex) {\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Perform a query on a set of readings\n *\n * @param response\tThe response stream to send the response on\n * @param request\tThe HTTP request\n */\nvoid StorageApi::readingQuery(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring\tpayload;\n\n\tstats.readingQuery++;\n\ttry {\n\t\tpayload = request->content.string();\n\n\t\tchar *resultSet = (readingPlugin ? readingPlugin : plugin)->readingsRetrieve(payload);\n\t\tstring res = resultSet;\n\n\t\trespond(response, res);\n\t\tfree(resultSet);\n\t} catch (exception& ex) {\n\t\tinternalError(response, ex);\n\t}\n}\n\n\n/**\n * Purge the readings\n *\n * @param response\tThe response stream to send the response on\n * @param request\tThe HTTP request\n */\nvoid StorageApi::readingPurge(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nSimpleWeb::CaseInsensitiveMultimap query;\nunsigned long age = 0;\nunsigned long size = 0;\nunsigned long lastSent = 0;\nunsigned int  flagsMask = 0;\nstring        flags;\nstring\t      asset;\nbool\t      byAsset = false;\nstatic std::atomic<bool> already_running(false);\n\n\tif (already_running)\n\t{\n\t\tstring payload = \"{ \\\"error\\\" : \\\"Previous instance of purge is still running, not starting another one.\\\" }\";\n\t\trespond(response, SimpleWeb::StatusCode::client_error_too_many_requests, payload);\n\t\treturn;\n\t}\n\talready_running.store(true);\n\t\t\n\tstats.readingPurge++;\n\ttry {\n\t\tquery = request->parse_query_string();\n\n\t\tauto search = query.find(\"age\");\n\t\tif (search != query.end())\n\t\t{\n\t\t\tage = (unsigned)atol(search->second.c_str());\n\t\t}\n\t\tsearch = query.find(\"size\");\n\t\tif (search != 
query.end())\n\t\t{\n\t\t\tsize = (unsigned)atol(search->second.c_str());\n\t\t}\n\t\tsearch = query.find(\"asset\");\n\t\tif (search != query.end())\n\t\t{\n\t\t\tasset = search->second;\n\t\t\tbyAsset = true;\n\t\t}\n\t\tsearch = query.find(\"sent\");\n\t\tif (search == query.end())\n\t\t{\n\t\t\tif (!byAsset)\n\t\t\t{\n\t\t\t\tstring payload = \"{ \\\"error\\\" : \\\"Missing query parameter sent\\\" }\";\n\t\t\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, payload);\n\t\t\t\talready_running.store(false);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tlastSent = (unsigned)atol(search->second.c_str());\n\t\t}\n\n\t\tsearch = query.find(\"flags\");\n\n\t\tif (search != query.end())\n\t\t{\n\t\t\tflags = search->second;\n\n\t\t\tLogger::getLogger()->debug(\"%s - flags :%s:\", __FUNCTION__, flags.c_str());\n\n\t\t\t// TODO Turn flags into a bitmap\n\n\t\t\tif (flags.compare(PURGE_FLAG_RETAIN_ANY) == 0)\n\t\t\t{\n\n\t\t\t\tflagsMask |= STORAGE_PURGE_RETAIN_ANY;\n\t\t\t}\n\t\t\telse if ( (flags.compare(PURGE_FLAG_RETAIN)     == 0) ||  // Backward compatibility\n\t\t\t         (flags.compare(PURGE_FLAG_RETAIN_ALL) == 0) )\n\t\t\t{\n\t\t\t\tflagsMask |= STORAGE_PURGE_RETAIN_ALL;\n\t\t\t}\n\t\t\telse if (flags.compare(PURGE_FLAG_PURGE) == 0)\n\t\t\t{\n\t\t\t\tflagsMask &= (~STORAGE_PURGE_RETAIN_ANY);\n\t\t\t\tflagsMask &= (~STORAGE_PURGE_RETAIN_ALL);\n\t\t\t}\n\n\t\t\tLogger::getLogger()->debug(\"%s - flagsMask :%d:\", __FUNCTION__, flagsMask);\n\n\t\t}\n\n\n\t\tchar *purged = NULL;\n\t\tif (age)\n\t\t{\n\t\t\tpurged = (readingPlugin ? readingPlugin : plugin)->readingsPurge(age, flagsMask, lastSent);\n\t\t}\n\t\telse if (size)\n\t\t{\n\t\t\tpurged = (readingPlugin ? readingPlugin : plugin)->readingsPurge(size, flagsMask|STORAGE_PURGE_SIZE, lastSent);\n\t\t}\n\t\telse if (byAsset)\n\t\t{\n\t\t\tpurged = (readingPlugin ? 
readingPlugin : plugin)->readingsPurgeAsset(asset);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring payload = \"{ \\\"error\\\" : \\\"Must either specify age or size parameter\\\" }\";\n\t\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, payload);\n\t\t\talready_running.store(false);\n\t\t\treturn;\n\t\t}\n\t\trespond(response, purged);\n\t\tfree(purged);\n\t}\n\t/** Handle PluginNotImplementedException exception here */\n\tcatch (PluginNotImplementedException& ex) {\n\t\tstring payload = \"{ \\\"error\\\" : \\\"\";\n\t\tpayload += ex.what();\n\t\tpayload += \"\\\" }\";\n\t\t/** Return HTTP code 400 with message from storage plugin */\n\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, payload);\n\t\talready_running.store(false);\n\t\treturn;\n\t}\n\t/** Handle general exception */\n\tcatch (exception& ex) {\n\t\tinternalError(response, ex);\n\t\talready_running.store(false);\n\t\treturn;\n\t}\n\talready_running.store(false);\n}\n\n/**\n * Register interest in readings for an asset\n */\nvoid StorageApi::readingRegister(shared_ptr<HttpServer::Response> response,\n\t\t\t\t shared_ptr<HttpServer::Request> request)\n{\nstring\t\tasset;\nstring\t\tpayload;\nDocument\tdoc;\n\n\tpayload = request->content.string();\n\t// URL decode asset name\n\tasset = urlDecode(request->path_match[ASSET_NAME_COMPONENT]);\n\tdoc.Parse(payload.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\t\tstring resp = \"{ \\\"error\\\" : \\\"Badly formed payload\\\" }\";\n\t\t\trespond(response,\n\t\t\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\tresp);\n\t}\n\telse\n\t{\n\t\tif (doc.HasMember(\"url\"))\n\t\t{\n\t\t\tregistry.registerAsset(asset, doc[\"url\"].GetString());\n\t\t\tstring resp = \" { \\\"\" + asset + \"\\\" : \\\"registered\\\" }\";\n\t\t\trespond(response, resp);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring resp = \"{ \\\"error\\\" : \\\"Missing url element in payload\\\" }\";\n\t\t\trespond(response, 
SimpleWeb::StatusCode::client_error_bad_request, resp);\n\t\t}\n\t}\n}\n\n/**\n * Unregister interest in readings for an asset\n */\nvoid StorageApi::readingUnregister(shared_ptr<HttpServer::Response> response,\n\t\t\t\t   shared_ptr<HttpServer::Request> request)\n{\nstring\t\tasset;\nstring\t\tpayload;\nDocument\tdoc;\n\n\tpayload = request->content.string();\n\t// URL decode asset name\n\tasset = urlDecode(request->path_match[ASSET_NAME_COMPONENT]);\n\tdoc.Parse(payload.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\t\tstring resp = \"{ \\\"error\\\" : \\\"Badly formed payload\\\" }\";\n\t\t\trespond(response,\n\t\t\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\tresp);\n\t}\n\telse\n\t{\n\t\tif (doc.HasMember(\"url\"))\n\t\t{\n\t\t\tregistry.unregisterAsset(asset, doc[\"url\"].GetString());\n\t\t\tstring resp = \" { \\\"\" + asset + \"\\\" : \\\"unregistered\\\" }\";\n\t\t\trespond(response, resp);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring resp = \"{ \\\"error\\\" : \\\"Missing url element in payload\\\" }\";\n\t\t\trespond(response,\n\t\t\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\tresp);\n\t\t}\n\t}\n}\n\n/**\n * Register interest in changes to a table\n */\nvoid StorageApi::tableRegister(shared_ptr<HttpServer::Response> response,\n\t\t\t\t shared_ptr<HttpServer::Request> request)\n{\nstring\t\ttable;\nstring\t\tpayload;\nDocument\tdoc;\n\n\tpayload = request->content.string();\n\t// URL decode table name\n\ttable = urlDecode(request->path_match[TABLE_NAME_COMPONENT]);\n\t\n\tdoc.Parse(payload.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tstring resp = \"{ \\\"error\\\" : \\\"Badly formed payload\\\" }\";\n\t\trespond(response,\n\t\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\t\tresp);\n\t}\n\telse\n\t{\n\t\tif (doc.HasMember(\"url\"))\n\t\t{\n\t\t\tregistry.registerTable(table, payload);\n\t\t\tstring resp = \" { \\\"\" + table + \"\\\" : \\\"registered\\\" }\";\n\t\t\trespond(response, 
resp);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring resp = \"{ \\\"error\\\" : \\\"Missing url element in payload\\\" }\";\n\t\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, resp);\n\t\t}\n\t}\n}\n\n/**\n * Unregister interest in changes to a table\n */\nvoid StorageApi::tableUnregister(shared_ptr<HttpServer::Response> response,\n\t\t\t\t   shared_ptr<HttpServer::Request> request)\n{\nstring\t\ttable;\nstring\t\tpayload;\nDocument\tdoc;\n\n\tpayload = request->content.string();\n\t// URL decode table name\n\ttable = urlDecode(request->path_match[TABLE_NAME_COMPONENT]);\n\t\n\tdoc.Parse(payload.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\t\tstring resp = \"{ \\\"error\\\" : \\\"Badly formed payload\\\" }\";\n\t\t\trespond(response,\n\t\t\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\tresp);\n\t}\n\telse\n\t{\n\t\tif (doc.HasMember(\"url\"))\n\t\t{\n\t\t\tregistry.unregisterTable(table, payload);\n\t\t\tstring resp = \" { \\\"\" + table + \"\\\" : \\\"unregistered\\\" }\";\n\t\t\trespond(response, resp);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring resp = \"{ \\\"error\\\" : \\\"Missing url element in payload\\\" }\";\n\t\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, resp);\n\t\t}\n\t}\n}\n\n/**\n * Create a stream for high speed storage ingestion\n *\n * @param response\tThe response stream to send the response on\n * @param request\tThe HTTP request\n */\nvoid StorageApi::createStorageStream(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring\tresponsePayload;\n\n\t(void)(request); \t// Suppress unused argument warning\n\ttry {\n\t\tif (!streamHandler)\n\t\t{\n\t\t\tstreamHandler = new StreamHandler(this);\n\t\t}\n\t\tuint32_t token;\n\t\tuint32_t port = streamHandler->createStream(&token);\n\t\tif (port != 0)\n\t\t{\n\t\t\tresponsePayload = \"{ \\\"port\\\":\"; \n\t\t\tresponsePayload += to_string(port);\n\t\t\tresponsePayload += \", \\\"token\\\":\"; 
\n\t\t\tresponsePayload += to_string(token);\n\t\t\tresponsePayload += \" }\";\n\t\t\trespond(response, responsePayload);\n\t\t}\n\t\telse\n\t\t{\n\t\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, responsePayload);\n\t\t}\n\n\t} catch (exception& ex) {\n\t\tinternalError(response, ex);\n\t}\n}\n\n/**\n * Append the readings that have arrived via a stream to the storage plugin\n *\n * @param readings\tA Null terminated array of pointers to ReadingStream structures\n * @param commit\tA flag to commit the readings block\n */\nbool StorageApi::readingStream(ReadingStream **readings, bool commit)\n{\n\tif ((readingPlugin ? readingPlugin : plugin)->hasStreamSupport())\n\t{\n\t\treturn (readingPlugin ? readingPlugin : plugin)->readingStream(readings, commit);\n\t}\n\telse\n\t{\n\t\t// Plugin does not support streaming input\n\t\tostringstream convert;\n\t\tchar\tts[60], micro_s[10];\n\t\t\n\n\t\tconvert << \"{\\\"readings\\\":[\";\n\t\tfor (int i = 0; readings[i]; i++)\n\t\t{\n\t\t\tif (i > 0)\n\t\t\t\tconvert << \",\";\n\t\t\tconvert << \"{\\\"asset_code\\\":\\\"\";\n\t\t\tconvert << readings[i]->assetCode;\n\t\t\tconvert << \"\\\",\\\"user_ts\\\":\\\"\";\n\t\t\tstruct tm timeinfo;\n\t\t\tgmtime_r(&readings[i]->userTs.tv_sec, &timeinfo);\n\t\t\tstd::strftime(ts, sizeof(ts), \"%Y-%m-%d %H:%M:%S\", &timeinfo);\n\t\t\tsnprintf(micro_s, sizeof(micro_s), \".%06lu\", readings[i]->userTs.tv_usec);\n\t\t\tconvert << ts << micro_s;\n\t\t\tconvert << \"\\\",\\\"reading\\\":\";\n\t\t\tconvert << &(readings[i]->assetCode[readings[i]->assetCodeLength]);\n\t\t\tconvert << \"}\";\n\t\t}\n\t\tconvert << \"]}\";\n\t\tLogger::getLogger()->debug(\"Fallback created payload: %s\", convert.str().c_str());\n\t\t(readingPlugin ? 
readingPlugin : plugin)->readingsAppend(convert.str());\n\t}\t\n\treturn false;\n}\n\n/**\n * Handle a bad URL endpoint call\n */\nvoid StorageApi::defaultResource(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring\tpayload;\n\n\tpayload = \"{ \\\"error\\\" : \\\"Unsupported URL: \" + request->path + \"\\\" }\";\n\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, payload);\n}\n\n/**\n * Handle an exception by sending back an internal error\n */\nvoid StorageApi::internalError(shared_ptr<HttpServer::Response> response, const exception& ex)\n{\nstring payload = \"{ \\\"Exception\\\" : \\\"\";\n\n\tpayload = payload + string(ex.what());\n\tpayload = payload + \"\\\" }\";\n\n\tLogger *logger = Logger::getLogger();\n\tlogger->error(\"StorageApi Internal Error: %s\\n\", ex.what());\n\trespond(response, SimpleWeb::StatusCode::server_error_internal_server_error, payload);\n}\n\nvoid StorageApi::mapError(string& payload, PLUGIN_ERROR *lastError)\n{\nchar *ptr, *ptr1, *buf = new char[strlen(lastError->message) * 2 + 1];\n\n\tptr = buf;\n\tptr1 = lastError->message;\n\twhile (*ptr1)\n\t{\n\t\tif (*ptr1 == '\"')\n\t\t\t*ptr++ = '\\\\';\n\t\t*ptr++ = *ptr1++;\n\t\tif (*ptr1 == '\\n')\n\t\t\tptr1++;\n\t}\n\t*ptr = 0;\n\tpayload = \"{ \\\"entryPoint\\\" : \\\"\";\n\tpayload = payload + lastError->entryPoint;\n\tpayload = payload + \"\\\", \\\"message\\\" : \\\"\";\n\tpayload = payload + buf;\n\tpayload = payload + \"\\\", \\\"retryable\\\" : \";\n\tpayload = payload + (lastError->retryable ? 
\"true\" : \"false\");\n\tpayload = payload + \"}\";\n\tdelete[] buf;\n}\n\n/**\n * Create a table snapshot\n */\nvoid StorageApi::createTableSnapshot(shared_ptr<HttpServer::Response> response,\n\t\t\t\t     shared_ptr<HttpServer::Request> request)\n{\nstring   sTable;\nstring   payload;\nDocument doc;\n\n\tpayload = request->content.string();\n\tsTable = request->path_match[TABLE_NAME_COMPONENT];\n\tdoc.Parse(payload.c_str());\n\tif (!doc.HasMember(\"id\"))\n\t{\n\t\tstring resp = \"{ \\\"error\\\" : \\\"Missing id element in payload for create snapshot\\\" }\";\n\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, resp);\n\t\treturn;\n\t}\n\n\tstring responsePayload;\n\tstring sId = doc[\"id\"].GetString();\n\t// call plugin method\n\tif (plugin->createTableSnapshot(sTable, sId) < 0)\n\t{\n\t\tmapError(responsePayload, plugin->lastError());\n\t\trespond(response,\n\t\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\t\tresponsePayload);\n\t}\n\telse\n\t{\n\t\tresponsePayload = \"{\\\"created\\\": {\\\"id\\\": \\\"\" + sId;\n\t\tresponsePayload += \"\\\", \\\"table\\\": \\\"\" + sTable + \"\\\"} }\";\n\t\trespond(response, responsePayload);\n\t}\n}\n\n/**\n * Load a table snapshot\n */\nvoid StorageApi::loadTableSnapshot(shared_ptr<HttpServer::Response> response,\n\t\t\t\t     shared_ptr<HttpServer::Request> request)\n{\nstring   sId;\nstring   sTable;\nstring   payload;\n\n\tpayload = request->content.string();\n\tsTable = request->path_match[TABLE_NAME_COMPONENT];\n\tsId = request->path_match[SNAPSHOT_ID_COMPONENT];\n\tif (sId.empty())\n\t{\n\t\tstring resp = \"{ \\\"error\\\" : \\\"Missing id element in payload for load snapshot\\\" }\";\n\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, resp);\n\t\treturn;\n\t}\n\tstring responsePayload;\n\tif (plugin->loadTableSnapshot(sTable, sId) < 0)\n\t{\n\t\tmapError(responsePayload, 
plugin->lastError());\n\t\trespond(response,\n\t\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\t\tresponsePayload);\n\t}\n\telse\n\t{\n\t\tresponsePayload = \"{\\\"loaded\\\": {\\\"id\\\": \\\"\" + sId;\n\t\tresponsePayload += \"\\\", \\\"table\\\": \\\"\" + sTable + \"\\\"} }\";\n\t\trespond(response, responsePayload);\n\t}\n}\n\n/**\n * Delete a table snapshot\n */\nvoid StorageApi::deleteTableSnapshot(shared_ptr<HttpServer::Response> response,\n\t\t\t\t     shared_ptr<HttpServer::Request> request)\n{\nstring   sId;\nstring   sTable;\nstring   payload;\n\n\tpayload = request->content.string();\n\tsTable = request->path_match[TABLE_NAME_COMPONENT];\n\tsId = request->path_match[SNAPSHOT_ID_COMPONENT];\n\tif (sId.empty())\n\t{\n\t\tstring resp = \"{ \\\"error\\\" : \\\"Missing id element in payload for delete snapshot\\\" }\";\n\t\trespond(response, SimpleWeb::StatusCode::client_error_bad_request, resp);\n\t\treturn;\n\t}\n\tstring responsePayload;\n\tif (plugin->deleteTableSnapshot(sTable, sId) < 0)\n\t{\n\t\tmapError(responsePayload, plugin->lastError());\n\t\trespond(response,\n\t\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\t\tresponsePayload);\n\t}\n\telse\n\t{\n\t\tresponsePayload = \"{\\\"deleted\\\": {\\\"id\\\": \\\"\" + sId;\n\t\tresponsePayload += \"\\\", \\\"table\\\": \\\"\" + sTable + \"\\\"} }\";\n\t\trespond(response, responsePayload);\n\t}\n}\n\n/**\n * Get the list of snapshots for a table\n */\nvoid StorageApi::getTableSnapshots(shared_ptr<HttpServer::Response> response,\n\t\t\t\t   shared_ptr<HttpServer::Request> request)\n{\nstring   sTable;\nstring   payload;\n\n        try\n\t{\n\t\tpayload = request->content.string();\n\t\tsTable = request->path_match[TABLE_NAME_COMPONENT];\n\n\t\t// Get plugin data\n                char* pluginResult = plugin->getTableSnapshots(sTable);\n\t\tif (pluginResult)\n\t\t{\n                        string res = pluginResult;\n                        respond(response, res);\n                        
free(pluginResult);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tstring responsePayload;\n\t\t\tmapError(responsePayload, plugin->lastError());\n\t\t\trespond(response,\n\t\t\t\tSimpleWeb::StatusCode::client_error_bad_request,\n\t\t\t\tresponsePayload);\n\t\t}\n        } catch (exception& ex) {\n                internalError(response, ex);\n        }\n}\n\n\n/**\n * Perform a create table and create index operation for the schema provided in the payload.\n *\n * @param response      The response stream to send the response on\n * @param request       The HTTP request\n */\nvoid StorageApi::createStorageSchema(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring  payload;\nstring  responsePayload;\n\n        try {\n                payload = request->content.string();\n\n                int rval = plugin->createSchema(payload);\n                if (rval != -1)\n                {\n                        responsePayload = \"{ \\\"response\\\" : \\\"Successfully created schema\\\" }\";\n                        respond(response, responsePayload);\n                }\n                else\n                {\n                        mapError(responsePayload, plugin->lastError());\n                        respond(response, SimpleWeb::StatusCode::client_error_bad_request, responsePayload);\n                }\n\n        } catch (exception& ex) {\n                internalError(response, ex);\n        }\n}\n\n/**\n * Perform an insert into a table.\n *\n * @param response      The response stream to send the response on\n * @param request       The HTTP request\n */\nvoid StorageApi::storageTableInsert(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring  schemaName;\nstring  tableName;\nstring  payload;\nstring  responsePayload;\n\n        stats.commonInsert++;\n        try {\n\t\tschemaName = request->path_match[STORAGE_SCHEMA_NAME_COMPONENT];\n                tableName = request->path_match[STORAGE_TABLE_NAME_COMPONENT];\n            
    payload = request->content.string();\n\n                int rval = plugin->commonInsert(tableName, payload, const_cast<char*>(schemaName.c_str()));\n                if (rval != -1)\n                {\n\t\t\tif (m_perfMonitor->isCollecting())\n\t\t\t{\n\t\t\t\tm_perfMonitor->collect(\"insert rows \" + tableName, rval);\n\t\t\t\tm_perfMonitor->collect(\"insert Payload Size \" + tableName, (long)payload.length());\n\t\t\t}\n\t\t\tregistry.processTableInsert(tableName, payload);\n                        responsePayload = \"{ \\\"response\\\" : \\\"inserted\\\", \\\"rows_affected\\\" : \";\n                        responsePayload += to_string(rval);\n                        responsePayload += \" }\";\n                        respond(response, responsePayload);\n                }\n                else\n                {\n                        mapError(responsePayload, plugin->lastError());\n                        respond(response, SimpleWeb::StatusCode::client_error_bad_request, responsePayload);\n                }\n        } catch (exception& ex) {\n                internalError(response, ex);\n        }\n}\n\n/**\n * Perform an update on a table of the data provided in the payload.\n *\n * @param response      The response stream to send the response on\n * @param request       The HTTP request\n */\nvoid StorageApi::storageTableUpdate(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring  schemaName;\nstring  tableName;\nstring  payload;\nstring  responsePayload;\n\n        auto header_seq = request->header.find(\"SeqNum\");\n        if(header_seq != request->header.end())\n        {\n                string threadId = header_seq->second.substr(0, header_seq->second.find(\"_\"));\n                int seqNum = stoi(header_seq->second.substr(header_seq->second.find(\"_\")+1));\n                {\n                        std::unique_lock<std::mutex> lock(mtx_seqnum_map);\n                        auto it = 
m_seqnum_map.find(threadId);\n                        if (it != m_seqnum_map.end())\n                        {\n                                if (seqNum <= it->second.first)\n                                {\n                                        responsePayload = \"{ \\\"response\\\" : \\\"updated\\\", \\\"rows_affected\\\"  : \";\n                                        responsePayload += to_string(0);\n                                        responsePayload += \" }\";\n                                        Logger::getLogger()->info(\"%s:%d: Repeat/old request: responding with zero response - threadId=%s, last seen seqNum for this threadId=%d, HTTP request header seqNum=%d\",\n                                                                        __FUNCTION__, __LINE__, threadId.c_str(), it->second.first, seqNum);\n                                        respond(response, responsePayload);\n                                        return;\n\t\t\t\t}\n\n                                // remove this threadId from LRU list; will add this to front of LRU list below\n                                seqnum_map_lru_list.erase(m_seqnum_map[threadId].second);\n                        }\n                        else\n                        {\n                                if (seqnum_map_lru_list.size() == max_entries_in_seqnum_map) // LRU list is full\n                                {\n                                        //delete least recently used element\n                                        string last = seqnum_map_lru_list.back();\n                                        seqnum_map_lru_list.pop_back();\n                                        m_seqnum_map.erase(last);\n                                }\n                        }\n\n                        // insert an entry for threadId at front of LRU queue\n                        seqnum_map_lru_list.push_front(threadId);\n                        m_seqnum_map[threadId] = make_pair(seqNum, 
seqnum_map_lru_list.begin());\n                }\n        }\n\n        stats.commonUpdate++;\n        try {\n\t\tschemaName = request->path_match[STORAGE_SCHEMA_NAME_COMPONENT];\n                tableName = request->path_match[STORAGE_TABLE_NAME_COMPONENT];\n                payload = request->content.string();\n\n                int rval = plugin->commonUpdate(tableName, payload, const_cast<char*>(schemaName.c_str()));\n                if (rval != -1)\n                {\n\t\t\tif (m_perfMonitor->isCollecting())\n\t\t\t{\n\t\t\t\tm_perfMonitor->collect(\"update rows \" + tableName, rval);\n\t\t\t\tm_perfMonitor->collect(\"update Payload Size \" + tableName, (long)payload.length());\n\t\t\t}\n\t\t\tregistry.processTableUpdate(tableName, payload);\n                        responsePayload = \"{ \\\"response\\\" : \\\"updated\\\", \\\"rows_affected\\\"  : \";\n                        responsePayload += to_string(rval);\n                        responsePayload += \" }\";\n                        respond(response, responsePayload);\n                }\n                else\n                {\n                        mapError(responsePayload, plugin->lastError());\n                        respond(response, SimpleWeb::StatusCode::client_error_bad_request, responsePayload);\n                }\n\n        } catch (exception& ex) {\n                internalError(response, ex);\n                }\n}\n\n/**\n * Perform a delete on a table using the condition encoded in the JSON payload\n *\n * @param response      The response stream to send the response on\n * @param request       The HTTP request\n */\nvoid StorageApi::storageTableDelete(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring \tschemaName;\nstring  tableName;\nstring  payload;\nstring  responsePayload;\n\n        stats.commonDelete++;\n        try {\n\t\tschemaName = request->path_match[STORAGE_SCHEMA_NAME_COMPONENT];\n                tableName = 
request->path_match[STORAGE_TABLE_NAME_COMPONENT];\n                payload = request->content.string();\n\n                int rval = plugin->commonDelete(tableName, payload, const_cast<char*>(schemaName.c_str()));\n                if (rval != -1)\n                {\n\t\t\tif (m_perfMonitor->isCollecting())\n\t\t\t{\n\t\t\t\tm_perfMonitor->collect(\"delete rows \" + tableName, rval);\n\t\t\t\tm_perfMonitor->collect(\"delete Payload Size \" + tableName, (long)payload.length());\n\t\t\t}\n\t\t\tregistry.processTableDelete(tableName, payload);\n                        responsePayload = \"{ \\\"response\\\" : \\\"deleted\\\", \\\"rows_affected\\\"  : \";\n                        responsePayload += to_string(rval);\n                        responsePayload += \" }\";\n                        respond(response, responsePayload);\n                }\n                else\n                {\n                        mapError(responsePayload, plugin->lastError());\n                        respond(response, SimpleWeb::StatusCode::client_error_bad_request, responsePayload);\n                }\n\n\t}catch (exception& ex) {\n               \tinternalError(response, ex);\n        }\n}\n\n/**\n * Perform a simple query on the table using the query parameters as conditions\n * TODO make this work for multiple column queries\n *\n * @param response      The response stream to send the response on\n * @param request       The HTTP request\n */\nvoid StorageApi::storageTableSimpleQuery(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring  schemaName;\nstring  tableName;\nSimpleWeb::CaseInsensitiveMultimap      query;\nstring payload;\n\n        stats.commonSimpleQuery++;\n        try {\n\t\tschemaName = request->path_match[STORAGE_SCHEMA_NAME_COMPONENT];\n                tableName = request->path_match[STORAGE_TABLE_NAME_COMPONENT];\n                query = request->parse_query_string();\n\n                if (query.size() > 0)\n                
{\n                        payload = \"{ \\\"where\\\" : { \";\n                        for(auto &param : query)\n                        {\n                                payload = payload + \"\\\"column\\\" :  \\\"\";\n                                payload = payload + param.first;\n                                payload = payload + \"\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"\";\n                                payload = payload + param.second;\n                                payload = payload + \"\\\"\";\n                        }\n                        payload = payload + \"} }\";\n                }\n                char *pluginResult = plugin->commonRetrieve(tableName, payload, const_cast<char*>(schemaName.c_str()));\n                if (pluginResult)\n                {\n                        string res = pluginResult;\n\n                        respond(response, res);\n                        free(pluginResult);\n                }\n                else\n                {\n                        string responsePayload;\n                        mapError(responsePayload, plugin->lastError());\n                        respond(response, SimpleWeb::StatusCode::client_error_bad_request, responsePayload);\n                }\n        } catch (exception& ex) {\n                internalError(response, ex);\n        }\n}\n\n/**\n * Perform query on a table using the JSON encoded query in the payload\n *\n * @param response      The response stream to send the response on\n * @param request       The HTTP request\n */\nvoid StorageApi::storageTableQuery(shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request)\n{\nstring  schemaName;\nstring  tableName;\nstring  payload;\n\n        stats.commonQuery++;\n        try {\n\t\tschemaName = request->path_match[STORAGE_SCHEMA_NAME_COMPONENT];\n                tableName = request->path_match[STORAGE_TABLE_NAME_COMPONENT];\n                payload = request->content.string();\n\n  
              char *pluginResult = plugin->commonRetrieve(tableName, payload, const_cast<char*>(schemaName.c_str()));\n                if (pluginResult)\n                {\n                        string res = pluginResult;\n\n                        respond(response, res);\n                        free(pluginResult);\n                }\n                else\n                {\n                        string responsePayload;\n                        mapError(responsePayload, plugin->lastError());\n                        respond(response, SimpleWeb::StatusCode::client_error_bad_request, responsePayload);\n                }\n\n        } catch (exception& ex) {\n                internalError(response, ex); \n        }\n}\n\n"
  },
  {
    "path": "C/services/storage/storage_plugin.cpp",
"content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <config_category.h>\n#include <storage_plugin.h>\n#include <plugin_exception.h>\n\nusing namespace std;\n\n#define DEFAULT_SCHEMA \"fledge\"\n\n/**\n * Constructor for the class that wraps the storage plugin\n *\n * Create a set of function pointers that resolve to the loaded plugin and\n * enclose in the class.\n */\nStoragePlugin::StoragePlugin(const string& name, PLUGIN_HANDLE handle) : Plugin(handle), m_name(name), m_config(NULL)\n{\n\tm_bStorageSchemaFlag = false;\n\n\t// Call the init method of the plugin\n\tstring version = this->getInfo()->interface;\n\tint major = strtol(version.c_str(), NULL, 10);\n\tsize_t offset = version.find(\".\");\n\tint minor = 0;\n\tif (offset != string::npos)\n\t{\n\t\tminor = strtol(version.substr(offset + 1).c_str(), NULL, 10);\n\t}\n\tif (major > 1 || minor > 3)\t// Configuration starts at 1.4.0 of the interface\n\t{\n\t\tm_config = new StoragePluginConfiguration(name, this);\n\t\tPLUGIN_HANDLE (*pluginInit)(ConfigCategory *) = (PLUGIN_HANDLE (*)(ConfigCategory *))\n\t\t\t\t\tmanager->resolveSymbol(handle, \"plugin_init\");\n\t\tConfigCategory *config = m_config->getConfiguration();\n\t\tinstance = (*pluginInit)(config);\n\t\tdelete config;\n\t}\n\telse\n\t{\n\t\tPLUGIN_HANDLE (*pluginInit)() = (PLUGIN_HANDLE (*)())\n\t\t\t\t\tmanager->resolveSymbol(handle, \"plugin_init\");\n\t\tinstance = (*pluginInit)();\n\t}\n\n\tif (major > 1 || (major == 1 && minor >= 5))\t// Storage schema support starts at 1.5.0 of the interface\n\t{\n\t\tm_bStorageSchemaFlag = true;\n\t}\n\n\n\t// Setup the function pointers to the plugin\n\t\n\tif (!m_bStorageSchemaFlag)\n\t{\n  \t\tcommonInsertPtr = (int (*)(PLUGIN_HANDLE, const char*, const char*))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_common_insert\");\n\t}\n\telse\n\t{\n\t\tstorageSchemaInsertPtr = (int (*)(PLUGIN_HANDLE, const char*, const char*, const char*))\n                                
manager->resolveSymbol(handle, \"plugin_common_insert\");\n\t}\n\n\tif (!m_bStorageSchemaFlag)\n\t{\n\t\tcommonRetrievePtr = (char * (*)(PLUGIN_HANDLE, const char*, const char*))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_common_retrieve\");\n\t}\n\telse\n\t{\n\t\tstorageSchemaRetrievePtr = (char * (*)(PLUGIN_HANDLE, const char*, const char*, const char*))\n                                manager->resolveSymbol(handle, \"plugin_common_retrieve\");\n\t}\n\n\tif (!m_bStorageSchemaFlag)\n\t{\n\t\tcommonUpdatePtr = (int (*)(PLUGIN_HANDLE, const char*, const char*))\n                                manager->resolveSymbol(handle, \"plugin_common_update\");\n\t}\n\telse\n\t{\n\t\tstorageSchemaUpdatePtr = (int (*)(PLUGIN_HANDLE, const char*, const char*, const char*))\n                                manager->resolveSymbol(handle, \"plugin_common_update\");\n\t}\n\n\tif (!m_bStorageSchemaFlag)\n\t{\n\t\tcommonDeletePtr = (int (*)(PLUGIN_HANDLE, const char*, const char*))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_common_delete\");\n\t}\n\telse\n\t{\n\t\tstorageSchemaDeletePtr = (int (*)(PLUGIN_HANDLE, const char*, const char*, const char*))\n                                manager->resolveSymbol(handle, \"plugin_common_delete\");\n\t}\n\n\treadingsAppendPtr = (int (*)(PLUGIN_HANDLE, const char *))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_reading_append\");\n\treadingsFetchPtr = (char * (*)(PLUGIN_HANDLE, unsigned long id, unsigned int blksize))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_reading_fetch\");\n\treadingsRetrievePtr = (char * (*)(PLUGIN_HANDLE, const char *))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_reading_retrieve\");\n\treadingsPurgePtr = (char * (*)(PLUGIN_HANDLE, unsigned long age, unsigned int flags, unsigned long sent))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_reading_purge\");\n\treadingsPurgeAssetPtr = (unsigned int (*)(PLUGIN_HANDLE, const char *))\n\t\t\t\tmanager->resolveSymbol(handle, 
\"plugin_reading_purge_asset\");\n\treleasePtr = (void (*)(PLUGIN_HANDLE, const char *))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_release\");\n\tlastErrorPtr = (PLUGIN_ERROR * (*)(PLUGIN_HANDLE))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_last_error\");\n\tcreateTableSnapshotPtr =\n\t\t\t(int (*)(PLUGIN_HANDLE, const char*, const char*))\n\t\t\t      manager->resolveSymbol(handle, \"plugin_create_table_snapshot\");\n\tloadTableSnapshotPtr =\n\t\t\t(int (*)(PLUGIN_HANDLE, const char*, const char*))\n\t\t\t      manager->resolveSymbol(handle, \"plugin_load_table_snapshot\");\n\tdeleteTableSnapshotPtr =\n\t\t\t(int (*)(PLUGIN_HANDLE, const char*, const char*))\n\t\t\t      manager->resolveSymbol(handle, \"plugin_delete_table_snapshot\");\n\tgetTableSnapshotsPtr =\n\t\t\t(char * (*)(PLUGIN_HANDLE, const char*))\n\t\t\t      manager->resolveSymbol(handle, \"plugin_get_table_snapshots\");\n\treadingStreamPtr =\n\t\t\t(int (*)(PLUGIN_HANDLE, ReadingStream **, bool))\n\t\t\t      manager->resolveSymbol(handle, \"plugin_readingStream\");\n\tpluginShutdownPtr = (bool (*)(PLUGIN_HANDLE))manager->resolveSymbol(handle, \"plugin_shutdown\");\n\n\tcreateSchemaPtr = \n              \t\t(int (*)(PLUGIN_HANDLE, const char*))\n                              manager->resolveSymbol(handle, \"plugin_createSchema\");\n}\n\n/**\n * Call the insert method in the plugin\n */\nint StoragePlugin::commonInsert(const string& table, const string& payload, const char *schema)\n{\n\tif(!m_bStorageSchemaFlag && this->commonInsertPtr)\n\t{\n\t\treturn this->commonInsertPtr(instance, table.c_str(), payload.c_str());\n\t}\n\telse\n\t{\n\t\tif (this->storageSchemaInsertPtr)\n\t\t\treturn this->storageSchemaInsertPtr(instance, schema ? 
schema : DEFAULT_SCHEMA, table.c_str(), payload.c_str());\n\t}\n\treturn 0;\n}\n\n/**\n * Call the retrieve method in the plugin\n */\nchar *StoragePlugin::commonRetrieve(const string& table, const string& payload, const char *schema)\n{\n\tif (!m_bStorageSchemaFlag && this->commonRetrievePtr)\n\t{\n\t\treturn this->commonRetrievePtr(instance, table.c_str(), payload.c_str());\n\t}\n\telse\n\t{\n\t\tif (this->storageSchemaRetrievePtr)\n\t                return this->storageSchemaRetrievePtr(instance, schema ? schema : DEFAULT_SCHEMA, table.c_str(), payload.c_str());\n        }\n\treturn NULL;\n}\n\n/**\n * Call the update method in the plugin\n */\nint StoragePlugin::commonUpdate(const string& table, const string& payload, const char *schema)\n{\n\tif (!m_bStorageSchemaFlag && this->commonUpdatePtr)\n        {\n\t\treturn this->commonUpdatePtr(instance, table.c_str(), payload.c_str());\n\t}\n\telse\n\t{\n\t\tif (this->storageSchemaUpdatePtr)\n                \treturn this->storageSchemaUpdatePtr(instance, schema ? schema : DEFAULT_SCHEMA, table.c_str(), payload.c_str());\n        }\n\treturn 0;\n}\n\n/**\n * Call the delete method in the plugin\n */\nint StoragePlugin::commonDelete(const string& table, const string& payload, const char *schema)\n{\n\tif (!m_bStorageSchemaFlag && this->commonDeletePtr)\n        {\n\t\treturn this->commonDeletePtr(instance, table.c_str(), payload.c_str());\n\t}\n\telse\n\t{\n\t\tif (this->storageSchemaDeletePtr)\n\t\t\treturn this->storageSchemaDeletePtr(instance, schema ? 
schema : DEFAULT_SCHEMA, table.c_str(), payload.c_str());\n\t}\n\treturn 0;\n}\n\n/**\n * Call the readings append method in the plugin\n */\nint StoragePlugin::readingsAppend(const string& payload)\n{\n\treturn this->readingsAppendPtr(instance, payload.c_str());\n}\n\n/**\n * Call the readings fetch method in the plugin\n */\nchar * StoragePlugin::readingsFetch(unsigned long id, unsigned int blksize)\n{\n\treturn this->readingsFetchPtr(instance, id, blksize);\n}\n\n/**\n * Call the readings retrieve method in the plugin\n */\nchar *StoragePlugin::readingsRetrieve(const string& payload)\n{\n\treturn this->readingsRetrievePtr(instance, payload.c_str());\n}\n\n/**\n * Call the readings purge method in the plugin\n */\nchar *StoragePlugin::readingsPurge(unsigned long age, unsigned int flags, unsigned long sent)\n{\n\treturn this->readingsPurgePtr(instance, age, flags, sent);\n}\n\n/**\n * Call the readings purge asset method in the plugin\n */\nchar *StoragePlugin::readingsPurgeAsset(const string& asset)\n{\n\tif (this->readingsPurgeAssetPtr)\n\t{\n\t\tunsigned int purged = this->readingsPurgeAssetPtr(instance, asset.c_str());\n\t\tchar *json = (char *)malloc(80);\n\t\tif (json)\n\t\t{\n\t\t\tsnprintf(json, 80, \"{ \\\"purged\\\" : %u }\", purged);\n\t\t\treturn json;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tthrow runtime_error(\"Out of memory\");\n\t\t}\n\t}\n\tthrow PluginNotImplementedException(\"Purge by asset name not implemented in the storage plugin\");\n}\n\n/**\n * Release a result from a retrieve\n */\nvoid StoragePlugin::release(const char *results)\n{\n\tthis->releasePtr(instance, results);\n}\n\n/**\n * Get the last error from the plugin\n */\nPLUGIN_ERROR *StoragePlugin::lastError()\n{\n\treturn this->lastErrorPtr(instance);\n}\n\n/**\n * Call the create table snapshot method in the plugin\n */\nint StoragePlugin::createTableSnapshot(const string& table, const string& id)\n{\n        return this->createTableSnapshotPtr(instance, table.c_str(), 
id.c_str());\n}\n\n/**\n * Call the load table snapshot method in the plugin\n */\nint StoragePlugin::loadTableSnapshot(const string& table, const string& id)\n{\n        return this->loadTableSnapshotPtr(instance, table.c_str(), id.c_str());\n}\n\n/**\n * Call the delete table snapshot method in the plugin\n */\nint StoragePlugin::deleteTableSnapshot(const string& table, const string& id)\n{\n        return this->deleteTableSnapshotPtr(instance, table.c_str(), id.c_str());\n}\n\n/**\n * Call the get table snapshots method in the plugin\n */\nchar *StoragePlugin::getTableSnapshots(const string& table)\n{\n        return this->getTableSnapshotsPtr(instance, table.c_str());\n}\n\n/**\n * Call the reading stream method in the plugin\n */\nint StoragePlugin::readingStream(ReadingStream **stream, bool commit)\n{\n        return this->readingStreamPtr(instance, stream, commit);\n}\n\n/**\n * Call the shutdown entry point of the plugin\n */\nbool StoragePlugin::pluginShutdown()\n{\n\tif (this->pluginShutdownPtr)\n\t\treturn this->pluginShutdownPtr(instance);\n\treturn true;\n}\n\n/**\n * Call the schema create method in the plugin\n */\nint StoragePlugin::createSchema(const string& payload)\n{\n\tif (this->createSchemaPtr)\n        \treturn this->createSchemaPtr(instance, payload.c_str());\n\treturn 0;\n}\n"
  },
  {
    "path": "C/services/storage/storage_registry.cpp",
"content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <rapidjson/document.h>\n#include \"rapidjson/stringbuffer.h\"\n#include <rapidjson/writer.h>\n#include \"storage_registry.h\"\n#include \"client_http.hpp\"\n#include \"server_http.hpp\"\n#include \"management_api.h\"\n#include \"reading_set.h\"\n#include \"reading.h\"\n#include \"logger.h\"\n#include \"strings.h\"\n#include <chrono>\n\n#define CHECK_QTIMES\t0\t// Turn on to check length of time data is queued\n#define QTIME_THRESHOLD 3\t// Threshold to report long queue times\n\n#define REGISTRY_SLEEP_TIME\t5\t// Time to sleep in the register process thread\n\t\t\t\t\t// between checks for shutdown\n\nusing namespace std;\nusing namespace rapidjson;\nusing HttpClient = SimpleWeb::Client<SimpleWeb::HTTP>;\n\n/**\n * Worker thread entry point\n */\nstatic void worker(StorageRegistry *registry)\n{\n\tregistry->run();\n}\n\n/**\n * StorageRegistry constructor\n *\n * The storage registry holds registrations for other microservices\n * that wish to receive notifications when new data is available for\n * a given asset. 
The interested service registers a URL and an asset\n * code, or * for all assets; that URL will then be called when new\n * data arrives for the particular asset.\n *\n * The service registry maintains a worker thread that is responsible\n * for sending these notifications such that the main flow of data into\n * the storage layer is minimally impacted by the registration and\n * delivery of these messages to interested microservices.\n */\nStorageRegistry::StorageRegistry() : m_thread(NULL)\n{\n\tm_running = true;\n\tm_thread = new thread(worker, this);\n}\n\n/**\n * StorageRegistry destructor\n */\nStorageRegistry::~StorageRegistry()\n{\n\tm_running = false;\n\tm_cv.notify_all();\n\tif (m_thread)\n\t{\n\t\tif (m_thread->joinable())\n\t\t\tm_thread->join();\n\t\tdelete m_thread;\n\t\tm_thread = NULL;\n\t}\n\t// Drain the queues, freeing any payloads that were never delivered\n\twhile (!m_queue.empty())\n\t{\n\t\tfree(m_queue.front().second);\n\t\tm_queue.pop();\n\t}\n\twhile (!m_tableInsertQueue.empty())\n\t{\n\t\tfree(get<1>(m_tableInsertQueue.front()));\n\t\tfree(get<2>(m_tableInsertQueue.front()));\n\t\tm_tableInsertQueue.pop();\n\t}\n\twhile (!m_tableUpdateQueue.empty())\n\t{\n\t\tfree(get<1>(m_tableUpdateQueue.front()));\n\t\tfree(get<2>(m_tableUpdateQueue.front()));\n\t\tm_tableUpdateQueue.pop();\n\t}\n\twhile (!m_tableDeleteQueue.empty())\n\t{\n\t\tfree(get<1>(m_tableDeleteQueue.front()));\n\t\tfree(get<2>(m_tableDeleteQueue.front()));\n\t\tm_tableDeleteQueue.pop();\n\t}\n}\n\n/**\n * Process a reading append payload and determine\n * if any microservice has registered an interest\n * in this asset.\n *\n * @param payload\tThe reading append payload\n */\nvoid\nStorageRegistry::process(const string& payload)\n{\n\tif (m_registrations.size() != 0)\n\t{\n\t\t/*\n\t\t * We have some registrations so queue a copy of the payload\n\t\t * to be examined in the thread that sends reading notifications\n\t\t * to interested parties.\n\t\t */\n\t\tchar *data = NULL;\n\t\tif ((data = strdup(payload.c_str())) != NULL)\n\t\t{\n\t\t\ttime_t now = time(0);\n\t\t\tItem item = make_pair(now, data);\n\t\t\tlock_guard<mutex> guard(m_qMutex);\n\t\t\tm_queue.push(item);\n\t\t\tm_cv.notify_all();\n\t\t}\n\t}\n}\n\n/**\n * Process a table insert payload and determine\n * if any microservice has registered an interest\n * in this table. 
Called from StorageApi::commonInsert()\n *\n * @param payload\tThe table insert payload\n */\nvoid\nStorageRegistry::processTableInsert(const string& tableName, const string& payload)\n{\n\tLogger::getLogger()->debug(\"StorageRegistry::processTableInsert(): tableName=%s, payload=%s\", tableName.c_str(), payload.c_str());\n\t\n\tif (m_tableRegistrations.size() > 0)\n\t{\n\t\t/*\n\t\t * We have some registrations so queue a copy of the payload\n\t\t * to be examined in the thread that sends table notifications\n\t\t * to interested parties.\n\t\t */\n\t\tchar *table = strdup(tableName.c_str());\n\t\tchar *data = strdup(payload.c_str());\n\t\t\n\t\tif (data != NULL && table != NULL)\n\t\t{\n\t\t\ttime_t now = time(0);\n\t\t\tTableItem item = make_tuple(now, table, data);\n\t\t\tlock_guard<mutex> guard(m_qMutex);\n\t\t\tm_tableInsertQueue.push(item);\n\t\t\tm_cv.notify_all();\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// One strdup failed; free whichever allocation succeeded (free(NULL) is safe)\n\t\t\tfree(table);\n\t\t\tfree(data);\n\t\t}\n\t}\n}\n\n/**\n * Process a table update payload and determine\n * if any microservice has registered an interest\n * in this table. 
Called from StorageApi::commonUpdate()\n *\n * @param payload\tThe table update payload\n */\nvoid\nStorageRegistry::processTableUpdate(const string& tableName, const string& payload)\n{\n\tLogger::getLogger()->info(\"Checking for registered interest in table %s with update %s\", tableName.c_str(), payload.c_str());\n\t\n\tif (m_tableRegistrations.size() > 0)\n\t{\n\t\t/*\n\t\t * We have some registrations so queue a copy of the payload\n\t\t * to be examined in the thread that sends table notifications\n\t\t * to interested parties.\n\t\t */\n\t\tchar *table = strdup(tableName.c_str());\n\t\tchar *data = strdup(payload.c_str());\n\t\t\n\t\tif (data != NULL && table != NULL)\n\t\t{\n\t\t\ttime_t now = time(0);\n\t\t\tTableItem item = make_tuple(now, table, data);\n\t\t\tlock_guard<mutex> guard(m_qMutex);\n\t\t\tm_tableUpdateQueue.push(item);\n\t\t\tm_cv.notify_all();\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// One strdup failed; free whichever allocation succeeded (free(NULL) is safe)\n\t\t\tfree(table);\n\t\t\tfree(data);\n\t\t}\n\t}\n}\n\n/**\n * Process a table delete payload and determine\n * if any microservice has registered an interest\n * in this table. 
Called from StorageApi::commonDelete()\n *\n * @param payload\tThe table delete payload\n */\nvoid\nStorageRegistry::processTableDelete(const string& tableName, const string& payload)\n{\n\tLogger::getLogger()->info(\"Checking for registered interest in table %s with delete %s\", tableName.c_str(), payload.c_str());\n\t\n\tif (m_tableRegistrations.size() > 0)\n\t{\n\t\t/*\n\t\t * We have some registrations so queue a copy of the payload\n\t\t * to be examined in the thread that sends table notifications\n\t\t * to interested parties.\n\t\t */\n\t\tchar *table = strdup(tableName.c_str());\n\t\tchar *data = strdup(payload.c_str());\n\t\t\n\t\tif (data != NULL && table != NULL)\n\t\t{\n\t\t\ttime_t now = time(0);\n\t\t\tTableItem item = make_tuple(now, table, data);\n\t\t\tlock_guard<mutex> guard(m_qMutex);\n\t\t\tm_tableDeleteQueue.push(item);\n\t\t\tm_cv.notify_all();\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// One strdup failed; free whichever allocation succeeded (free(NULL) is safe)\n\t\t\tfree(table);\n\t\t\tfree(data);\n\t\t}\n\t}\n}\n\n/**\n * Handle a registration request from a client of the storage layer\n *\n * @param asset\t\tThe asset of interest\n * @param url\t\tThe URL to call\n */\nvoid\nStorageRegistry::registerAsset(const string& asset, const string& url)\n{\n\tlock_guard<mutex> guard(m_registrationsMutex);\n\tm_registrations.push_back(pair<string *, string *>(new string(asset), new string(url)));\n}\n\n/**\n * Handle a request to remove a registration of interest\n *\n * @param asset\t\tThe asset of interest\n * @param url\t\tThe URL to call\n */\nvoid\nStorageRegistry::unregisterAsset(const string& asset, const string& url)\n{\n\tlock_guard<mutex> guard(m_registrationsMutex);\n\tfor (auto it = m_registrations.begin(); it != m_registrations.end(); )\n\t{\n\t\tif (asset.compare(*(it->first)) == 0 && url.compare(*(it->second)) == 0)\n\t\t{\n\t\t\tdelete it->first;\n\t\t\tdelete it->second;\n\t\t\tit = m_registrations.erase(it);\n\t\t}\n\t\telse\n\t\t{\n\t\t\t++it;\n\t\t}\n\t}\n}\n\n/**\n * Parse a table subscription (un)register JSON payload\n *\n * @param payload\tJSON payload describing the 
interest\n */\nTableRegistration* StorageRegistry::parseTableSubscriptionPayload(const string& payload)\n{\n\tDocument\tdoc;\n\t\n\tdoc.Parse(payload.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tLogger::getLogger()->error(\"StorageRegistry::parseTableSubscriptionPayload(): Parse error in subscription request payload\");\n\t\treturn NULL;\n\t}\n\tif (!doc.HasMember(\"url\"))\n\t{\n\t\tLogger::getLogger()->error(\"StorageRegistry::parseTableSubscriptionPayload(): subscription request doesn't have url field\");\n\t\treturn NULL;\n\t}\n\tif (!doc.HasMember(\"key\"))\n\t{\n\t\tLogger::getLogger()->error(\"StorageRegistry::parseTableSubscriptionPayload(): subscription request doesn't have key field\");\n\t\treturn NULL;\n\t}\n\tif (!doc.HasMember(\"operation\"))\n\t{\n\t\tLogger::getLogger()->error(\"StorageRegistry::parseTableSubscriptionPayload(): subscription request doesn't have operation field\");\n\t\treturn NULL;\n\t}\n\n\tTableRegistration *reg = new TableRegistration;\n\t\n\treg->url = doc[\"url\"].GetString();\n\treg->key = doc[\"key\"].GetString();\n\treg->operation = doc[\"operation\"].GetString();\n\t\n\tif (reg->key.size())\n\t{\n\t\tif (!doc.HasMember(\"values\") || !doc[\"values\"].IsArray())\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Subscription request\" \\\n\t\t\t\t\t\" doesn't have a proper values field, payload=%s\", payload.c_str());\n\t\t\tdelete reg;\n\t\t\treturn NULL;\n\t\t}\n\t\tfor (auto & v : doc[\"values\"].GetArray())\n\t\t\treg->keyValues.emplace_back(v.GetString());\n\t}\n\n\treturn reg;\n}\n\n/**\n * Handle a registration request for a table from a client of the storage layer\n *\n * @param table\t\tThe table of interest\n * @param payload\tJSON payload describing the interest\n */\nvoid\nStorageRegistry::registerTable(const string& table, const string& payload)\n{\n\tTableRegistration *reg = parseTableSubscriptionPayload(payload);\n\n\tif (!reg)\n\t{\n\t\tLogger::getLogger()->error(\"Unable to register invalid Registration entry for 
table %s, payload %s\",\n\t\t\t\ttable.c_str(), payload.c_str());\n\t\treturn;\n\t}\n\n\tlock_guard<mutex> guard(m_tableRegistrationsMutex);\n\tLogger::getLogger()->info(\"Adding registration entry for table %s\", table.c_str());\n\tm_tableRegistrations.push_back(pair<string *, TableRegistration *>(new string(table), reg));\n}\n\n/**\n * Handle a request to remove a registration of interest in a table\n *\n * @param table\t\tThe table of interest\n * @param payload\tJSON payload describing the interest\n */\nvoid\nStorageRegistry::unregisterTable(const string& table, const string& payload)\n{\n\tTableRegistration *reg = parseTableSubscriptionPayload(payload);\n\n\tif (!reg)\n\t{\n\t\tLogger::getLogger()->info(\"Invalid Registration entry for table %s, payload %s\",\n\t\t\t\ttable.c_str(), payload.c_str());\n\t\treturn;\n\t}\n\n\tlock_guard<mutex> guard(m_tableRegistrationsMutex);\n\t\n\tLogger::getLogger()->info(\"%lu entries registered interest in table operations\", (unsigned long)m_tableRegistrations.size());\n\tbool found = false;\n\tfor (auto it = m_tableRegistrations.begin(); found == false && it != m_tableRegistrations.end(); )\n\t{\n\t\tTableRegistration *reg_it = it->second;\n\t\tif (table.compare(*(it->first)) == 0 && \n\t\t\treg->url.compare(reg_it->url)==0 &&\n\t\t\treg->key.compare(reg_it->key)==0 &&\n\t\t\treg->operation.compare(reg_it->operation)==0)\n\t\t{\n\t\t\t// Either no key is to be matched or a key is to be matched against a possible set of values\n\t\t\tif (reg->key.size()==0 || (reg->key.size()>0 && reg->keyValues == reg_it->keyValues))\n\t\t\t{\n\t\t\t\tdelete it->first;\n\t\t\t\tdelete it->second;\n\t\t\t\tit = m_tableRegistrations.erase(it);\n\t\t\t\tLogger::getLogger()->info(\"Removed registration for table %s and url %s\", table.c_str(), reg->url.c_str());\n\t\t\t\tfound = true;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t++it;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\t++it;\n\t\t}\n\t}\n\tif (!found)\n\t{\n\t\tLogger::getLogger()->warn(\n\t\t\t\t\"Failed to 
remove subscription for table '%s' using key '%s' with operation '%s' and url '%s'\",\n\t\t\t\ttable.c_str(), reg->key.c_str(), reg->operation.c_str(), reg->url.c_str());\n\t}\n\tdelete reg;\n}\n\n\n/**\n * The worker function that processes the queue of payloads\n * that may need to be sent to subscribers.\n */\nvoid\nStorageRegistry::run()\n{\t\n\twhile (m_running)\n\t{\n\t\tchar *data = NULL;\n#if CHECK_QTIMES\n\t\ttime_t qTime;\n#endif\n\t\t{\n\t\t\tunique_lock<mutex> mlock(m_cvMutex);\n\t\t\twhile (m_queue.size() == 0 && m_tableInsertQueue.size() == 0 && m_tableUpdateQueue.size() == 0 && m_tableDeleteQueue.size() == 0)\n\t\t\t{\n\t\t\t\tm_cv.wait_for(mlock, std::chrono::seconds(REGISTRY_SLEEP_TIME));\n\t\t\t\tif (!m_running)\n\t\t\t\t{\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\t\t\t\n\t\t\twhile (!m_queue.empty())\n\t\t\t{\n\t\t\t\tItem item = m_queue.front();\n\t\t\t\tm_queue.pop();\n\t\t\t\tdata = item.second;\n#if CHECK_QTIMES\n\t\t\t\tqTime = item.first;\n#endif\n\t\t\t\tif (data)\n\t\t\t\t{\n#if CHECK_QTIMES\n\t\t\t\t\tif (time(0) - qTime > QTIME_THRESHOLD)\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->error(\"Readings data has been queued for %d seconds to be sent to registered party\", (int)(time(0) - qTime));\n\t\t\t\t\t}\n#endif\n\t\t\t\t\tprocessPayload(data);\n\t\t\t\t\tfree(data);\n\t\t\t\t}\n\t\t\t}\n\t\t\t\n\t\t\twhile (!m_tableInsertQueue.empty())\n\t\t\t{\n\t\t\t\tchar *tableName = NULL;\n\t\t\t\t\n\t\t\t\tTableItem item = m_tableInsertQueue.front();\n\t\t\t\tm_tableInsertQueue.pop();\n\t\t\t\ttableName = get<1>(item);\n\t\t\t\tdata = get<2>(item);\n#if CHECK_QTIMES\n\t\t\t\tqTime = get<0>(item);\t// TableItem is a tuple, not a pair\n#endif\n\t\t\t\tif (tableName && data)\n\t\t\t\t{\n#if CHECK_QTIMES\n\t\t\t\t\tif (time(0) - qTime > QTIME_THRESHOLD)\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->error(\"Table insert data has been queued for %d seconds to be sent to registered party\", (int)(time(0) - qTime));\n\t\t\t\t\t}\n#endif\n\t\t\t\t\tprocessInsert(tableName, 
data);\n\t\t\t\t\tfree(tableName);\n\t\t\t\t\tfree(data);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\twhile (!m_tableUpdateQueue.empty())\n\t\t\t{\n\t\t\t\tchar *tableName = NULL;\n\t\t\t\t\n\t\t\t\tTableItem item = m_tableUpdateQueue.front();\n\t\t\t\tm_tableUpdateQueue.pop();\n\t\t\t\ttableName = get<1>(item);\n\t\t\t\tdata = get<2>(item);\n#if CHECK_QTIMES\n\t\t\t\tqTime = item.first;\n#endif\n\t\t\t\tif (tableName && data)\n\t\t\t\t{\n#if CHECK_QTIMES\n\t\t\t\t\tif (time(0) - qTime > QTIME_THRESHOLD)\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->error(\"Table update data has been queued for %d seconds to be sent to registered party\", (time(0) - qTime));\n\t\t\t\t\t}\n#endif\n\t\t\t\t\tprocessUpdate(tableName, data);\n\t\t\t\t\tfree(tableName);\n\t\t\t\t\tfree(data);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\twhile (!m_tableDeleteQueue.empty())\n\t\t\t{\n\t\t\t\tchar *tableName = NULL;\n\t\t\t\t\n\t\t\t\tTableItem item = m_tableDeleteQueue.front();\n\t\t\t\tm_tableDeleteQueue.pop();\n\t\t\t\ttableName = get<1>(item);\n\t\t\t\tdata = get<2>(item);\n#if CHECK_QTIMES\n\t\t\t\tqTime = item.first;\n#endif\n\t\t\t\tif (tableName && data)\n\t\t\t\t{\n#if CHECK_QTIMES\n\t\t\t\t\tif (time(0) - qTime > QTIME_THRESHOLD)\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->error(\"Table delete data has been queued for %d seconds to be sent to registered party\", (time(0) - qTime));\n\t\t\t\t\t}\n#endif\n\t\t\t\t\tprocessDelete(tableName, data);\n\t\t\t\t\tfree(tableName);\n\t\t\t\t\tfree(data);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * Process an incoming payload and distribute as required to registered\n * services\n *\n * @param payload\tThe payload to potentially distribute\n */\nvoid\nStorageRegistry::processPayload(char *payload)\n{\nbool allDone = true;\n\n\tlock_guard<mutex> guard(m_registrationsMutex);\n\n\t// First of all deal with those that registered for all assets\n\tfor (REGISTRY::const_iterator it = m_registrations.cbegin(); it != m_registrations.cend(); it++)\n\t{\n\t\tif 
(it->first->compare(\"*\") == 0)\n\t\t{\n\t\t\tsendPayload(*(it->second), payload);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tallDone = false;\n\t\t}\n\t}\n\tif (allDone)\n\t{\n\t\t// No registrations for individual assets, no need to parse payload\n\t\tprocessAssetRefusals();\n\t\treturn;\n\t}\n\tfor (REGISTRY::const_iterator it = m_registrations.cbegin(); it != m_registrations.cend(); it++)\n\t{\n\t\tif (it->first->compare(\"*\") != 0)\n\t\t{\n\t\t\ttry {\n\t\t\t\tfilterPayload(*(it->second), payload, *(it->first));\n\t\t\t} catch (const exception& e) {\n\t\t\t\tLogger::getLogger()->error(\"filterPayload: exception %s\", e.what());\n\t\t\t}\n\t\t}\n\t}\n\t// Remove any registrations that are no longer listening\n\tprocessAssetRefusals();\n}\n\n\n/**\n * Send a copy of the payload to the given URL\n *\n * @param url\t\tThe URL to send the payload to\n * @param payload\tThe payload to send\n */\nvoid\nStorageRegistry::sendPayload(const string& url, const char *payload)\n{\n\tsize_t found = url.find(\"://\");\n\tsize_t found1 = url.find_first_of(\"/\", found + 3);\n\tstring hostport = url.substr(found+3, found1 - found - 3);\n\tstring resource = url.substr(found1);\n\tHttpClient client(hostport);\n\ttry {\n\t\tclient.request(\"POST\", resource, payload);\n\t} catch (const exception& e) {\n\t\tstring why = e.what();\n\t\tif (why.compare(\"Connection refused\") == 0)\n\t\t{\n\t\t\t// The registered service is no longer listening\n\t\t\t// Log this for potential removal if the issue persists\n\t\t\tauto it = m_refusals.find(url);\n\t\t\tif (it != m_refusals.end())\n\t\t\t\tm_refusals[url]++;\n\t\t\telse\n\t\t\t\tm_refusals[url] = 1;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"sendPayload: exception %s sending reading data to interested party %s\", why.c_str(), url.c_str());\n\t\t}\n\t}\n}\n\n/**\n * Send a filtered copy of the payload to the given URL\n *\n * @param url\t\tThe URL to send the payload to\n * @param payload\tThe payload to send\n * @param 
asset\t\tThe asset code to filter\n */\nvoid\nStorageRegistry::filterPayload(const string& url, char *payload, const string& asset)\n{\nostringstream convert;\n\n\tsize_t found = url.find(\"://\");\n\tsize_t found1 = url.find_first_of(\"/\", found + 3);\n\tstring hostport = url.substr(found+3, found1 - found - 3);\n\tstring resource = url.substr(found1);\n\n\t// Filter the payload to include just the one asset\n\tDocument doc;\n\tdoc.Parse(payload);\n\tif (doc.HasParseError())\n\t{\n\t\tLogger::getLogger()->error(\"filterPayload: Parse error in payload\");\n\t\treturn;\n\t}\n\tif (!doc.HasMember(\"readings\"))\n\t{\n\t\tLogger::getLogger()->error(\"filterPayload: payload has no readings object\");\n\t\treturn;\n\t}\n\tconst Value& readings = doc[\"readings\"];\n\tif (!readings.IsArray())\n\t{\n\t\tLogger::getLogger()->error(\"filterPayload: payload readings object is not an array\");\n\t\treturn;\n\t}\n\tconvert << \"{ \\\"readings\\\" : [ \";\n\tint count = 0;\n\t/*\n\t * Loop over the readings and create a reading object for\n\t * each, check if it matches the asset name and include it in the\n\t * new payload if it does. 
In either case free that object\n\t * immediately to reduce the memory requirement.\n\t */\n\tfor (auto& reading : readings.GetArray())\n\t{\n\t\tif (reading.IsObject())\n\t\t{\n\t\t\tJSONReading *value = new JSONReading(reading);\n\t\t\tif (value->getAssetName().compare(asset) == 0)\n\t\t\t{\n\t\t\t\tif (count)\n\t\t\t\t\tconvert << \",\";\n\t\t\t\tcount++;\n\t\t\t\tconvert << value->toJSON();\n\t\t\t}\n\t\t\tdelete value;\n\t\t}\n\t}\n\t\n\tconvert << \"] }\";\n\n\t/*\n\t * Check if any assets are in the filtered payload\n\t */\n\tif (count == 0)\n\t{\n\t\t// Nothing to send\n\t\treturn;\n\t}\n\n\tHttpClient client(hostport);\n\ttry {\n\t\tclient.request(\"POST\", resource, convert.str());\n\t} catch (const exception& e) {\n\t\tstring why = e.what();\n\t\tif (why.compare(\"Connection refused\") == 0)\n\t\t{\n\t\t\t// The registered service is no longer listening\n\t\t\t// Log this for potential removal if the issue persists\n\t\t\tauto it = m_refusals.find(url);\n\t\t\tif (it != m_refusals.end())\n\t\t\t\tm_refusals[url]++;\n\t\t\telse\n\t\t\t\tm_refusals[url] = 1;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tLogger::getLogger()->error(\"filterPayload: exception %s sending reading data to interested party %s\", why.c_str(), url.c_str());\n\t\t}\n\t}\n}\n\n/**\n * Process an incoming table insert payload and distribute as required\n * to registered services\n *\n * @param tableName\tThe table into which the data was inserted\n * @param payload\tThe payload to potentially distribute\n */\nvoid\nStorageRegistry::processInsert(char *tableName, char *payload)\n{\n\tLogger::getLogger()->debug(\"StorageRegistry::processInsert(): Handling for table:%s, payload=%s\", tableName, payload);\n\tLogger::getLogger()->debug(\"StorageRegistry::processInsert(): m_tableRegistrations.size()=%d\", m_tableRegistrations.size());\n\t\n\tDocument\tpayloadDoc;\n\t\n\tpayloadDoc.Parse(payload);\n\tif (payloadDoc.HasParseError())\n\t{\n\t\tLogger::getLogger()->error(\"Internal error unable to parse payload for insert into table %s, payload is %s\", tableName, 
payload);\n\t\treturn;\n\t}\n\n\tlock_guard<mutex> guard(m_tableRegistrationsMutex);\n\tfor (auto & reg : m_tableRegistrations)\n\t{\n\t\tif (reg.first->compare(tableName) != 0)\n\t\t\tcontinue;\n\n\t\tTableRegistration *tblreg = reg.second;\n\n\t\t// If key is empty string, no need to match key/value pair in payload\n\t\t// Also operation must be \"insert\" for initial implementation\n\t\tif (tblreg->operation.compare(\"insert\") != 0)\n\t\t{\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (tblreg->key.size() == 0)\n\t\t{\n\t\t\tsendPayload(tblreg->url, payload);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (payloadDoc.HasMember(\"inserts\") && payloadDoc[\"inserts\"].IsArray())\n\t\t\t{\n\t\t\t\t// We have multiple inserts in the payload, parse each one and send\n\t\t\t\t// only the insert for which the key has been registered\n\t\t\t\tValue &inserts = payloadDoc[\"inserts\"];\n\t\t\t\tfor (Value::ConstValueIterator iter = inserts.Begin();\n\t\t\t\t\t\titer != inserts.End(); ++iter)\n\t\t\t\t{\n\t\t\t\t\tif (iter->HasMember(tblreg->key.c_str()))\n\t\t\t\t\t{\n\t\t\t\t\t\tstring payloadKeyValue = (*iter)[tblreg->key.c_str()].GetString();\n\t\t\t\t\t\tif (std::find(tblreg->keyValues.begin(), tblreg->keyValues.end(), payloadKeyValue) != tblreg->keyValues.end())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\t\titer->Accept(writer);\n\n\t\t\t\t\t\t\tconst char *output = buffer.GetString();\n\t\t\t\t\t\t\tsendPayload(tblreg->url, output);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tif (payloadDoc.HasMember(tblreg->key.c_str()) && payloadDoc[tblreg->key.c_str()].IsString())\n\t\t\t\t{\n\t\t\t\t\tstring payloadKeyValue = payloadDoc[tblreg->key.c_str()].GetString();\n\t\t\t\t\tif (std::find(tblreg->keyValues.begin(), tblreg->keyValues.end(), payloadKeyValue) != tblreg->keyValues.end())\n\t\t\t\t\t{\n\t\t\t\t\t\tsendPayload(tblreg->url, 
payload);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tprocessTableRefusals();\n}\n\n/**\n * Process an incoming table update payload and distribute as required\n * to registered services\n *\n * @param tableName\tThe table that was updated\n * @param payload\tThe payload to potentially distribute\n */\nvoid\nStorageRegistry::processUpdate(char *tableName, char *payload)\n{\n\tDocument\tdoc;\n\n\tdoc.Parse(payload);\n\tif (doc.HasParseError())\n\t{\n\t\tLogger::getLogger()->error(\"Unable to parse table update payload for table %s, request is %s\", tableName, payload);\n\t\treturn;\n\t}\n\n\tlock_guard<mutex> guard(m_tableRegistrationsMutex);\n\tfor (auto & reg : m_tableRegistrations)\n\t{\n\t\tif (reg.first->compare(tableName) != 0)\n\t\t\tcontinue;\n\n\t\tTableRegistration *tblreg = reg.second;\n\n\t\t// If key is empty string, no need to match key/value pair in payload\n\t\tif (tblreg->operation.compare(\"update\") != 0)\n\t\t{\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (tblreg->key.empty())\n\t\t{\n\t\t\t// No key to match, send all updates to table\n\t\t\tsendPayload(tblreg->url, payload);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (doc.HasMember(\"updates\") && doc[\"updates\"].IsArray())\n\t\t\t{\n\t\t\t\t// Multiple updates in a single call\n\t\t\t\tValue &updates = doc[\"updates\"];\n\t\t\t\tfor (Value::ConstValueIterator iter = updates.Begin();\n\t\t\t\t\t\titer != updates.End(); ++iter)\n\t\t\t\t{\n\t\t\t\t\tif (!iter->HasMember(\"where\") || !(*iter)[\"where\"].IsObject())\n\t\t\t\t\t{\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\tconst Value& where = (*iter)[\"where\"];\n\t\t\t\t\tif (where.HasMember(\"column\") && where[\"column\"].IsString() &&\n\t\t\t\t\t\t\twhere.HasMember(\"value\") && where[\"value\"].IsString())\n\t\t\t\t\t{\n\t\t\t\t\t\tstring updateKey = where[\"column\"].GetString();\n\t\t\t\t\t\tstring keyValue = where[\"value\"].GetString();\n\t\t\t\t\t\tif (updateKey.compare(tblreg->key) == 0 &&\n\t\t\t\t\t\t\t\tstd::find(tblreg->keyValues.begin(), tblreg->keyValues.end(), keyValue)\n\t\t\t\t\t\t\t\t!= tblreg->keyValues.end())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (iter->HasMember(\"values\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tconst Value& 
values = (*iter)[\"values\"];\n\t\t\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\t\t\tvalues.Accept(writer);\n\n\t\t\t\t\t\t\t\tconst char *output = buffer.GetString();\n\t\t\t\t\t\t\t\tsendPayload(tblreg->url, output);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if (iter->HasMember(\"expressions\"))\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tconst Value& expressions = (*iter)[\"expressions\"];\n\t\t\t\t\t\t\t\tfor (Value::ConstValueIterator expr = expressions.Begin();\n\t\t\t\t\t\t\t\t\t\texpr != expressions.End(); ++expr)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\t\t\t\texpr->Accept(writer);\n\t\n\t\t\t\t\t\t\t\t\tconst char *output = buffer.GetString();\n\t\t\t\t\t\t\t\t\tsendPayload(tblreg->url, output);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\telse if (doc.HasMember(\"where\") && doc[\"where\"].IsObject())\n\t\t\t{\n\t\t\t\tconst Value& where = doc[\"where\"];\n\t\t\t\tif (where.HasMember(\"column\") && where[\"column\"].IsString() &&\n\t\t\t\t\t\twhere.HasMember(\"value\") && where[\"value\"].IsString())\n\t\t\t\t{\n\t\t\t\t\tstring updateKey = where[\"column\"].GetString();\n\t\t\t\t\tstring keyValue = where[\"value\"].GetString();\n\t\t\t\t\tif (updateKey.compare(tblreg->key) == 0 &&\n\t\t\t\t\t\t\tstd::find(tblreg->keyValues.begin(), tblreg->keyValues.end(), keyValue)\n\t\t\t\t\t\t\t!= tblreg->keyValues.end())\n\t\t\t\t\t{\n\t\t\t\t\t\tif (doc.HasMember(\"values\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tconst Value& values = doc[\"values\"];\n\t\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\t\tvalues.Accept(writer);\n\n\t\t\t\t\t\t\tconst char *output = buffer.GetString();\n\t\t\t\t\t\t\tsendPayload(tblreg->url, output);\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if (doc.HasMember(\"expressions\"))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tconst Value& expressions = 
doc[\"expressions\"];\n\t\t\t\t\t\t\tfor (Value::ConstValueIterator expr = expressions.Begin();\n\t\t\t\t\t\t\t\t\texpr != expressions.End(); ++expr)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\t\t\texpr->Accept(writer);\n\n\t\t\t\t\t\t\t\tconst char *output = buffer.GetString();\n\t\t\t\t\t\t\t\tsendPayload(tblreg->url, output);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * Process an incoming table delete payload and distribute as required\n * to registered services\n *\n * @param tableName\tThe table from which the data was deleted\n * @param payload\tThe payload to potentially distribute\n */\nvoid\nStorageRegistry::processDelete(char *tableName, char *payload)\n{\n\tDocument\tdoc;\n\tbool allRows = false;\n\n\tif (! *payload) // Empty\n\t{\n\t\tallRows = true;\n\t}\n\telse\n\t{\n\t\tdoc.Parse(payload);\n\t\tif (doc.HasParseError())\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Unable to parse table delete payload for table %s, request is %s\", tableName, payload);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tlock_guard<mutex> guard(m_tableRegistrationsMutex);\n\tfor (auto & reg : m_tableRegistrations)\n\t{\n\t\tif (reg.first->compare(tableName) != 0)\n\t\t\tcontinue;\n\n\t\tTableRegistration *tblreg = reg.second;\n\n\t\t// If key is empty string, no need to match key/value pair in payload\n\t\tif (tblreg->operation.compare(\"delete\") != 0)\n\t\t{\n\t\t\tcontinue;\n\t\t}\n\t\tif (allRows)\n\t\t{\n\t\t\tsendPayload(tblreg->url, payload);\n\t\t}\n\t\telse if (tblreg->key.empty())\n\t\t{\n\t\t\t// No key to match, send all deletes to table\n\t\t\tsendPayload(tblreg->url, payload);\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (doc.HasMember(\"where\") && doc[\"where\"].IsObject())\n\t\t\t{\n\t\t\t\tconst Value& where = doc[\"where\"];\n\t\t\t\tif (where.HasMember(\"column\") && where[\"column\"].IsString() &&\n\t\t\t\t\t\twhere.HasMember(\"value\") && where[\"value\"].IsString())\n\t\t\t\t{\n\t\t\t\t\tstring updateKey = 
where[\"column\"].GetString();\n\t\t\t\t\tstring keyValue = where[\"value\"].GetString();\n\t\t\t\t\tif (updateKey.compare(tblreg->key) == 0 &&\n\t\t\t\t\t\t\tstd::find(tblreg->keyValues.begin(), tblreg->keyValues.end(), keyValue)\n\t\t\t\t\t\t\t!= tblreg->keyValues.end())\n\t\t\t\t\t{\n\t\t\t\t\t\tStringBuffer buffer;\n\t\t\t\t\t\tWriter<StringBuffer> writer(buffer);\n\t\t\t\t\t\twhere.Accept(writer);\n\n\t\t\t\t\t\tconst char *output = buffer.GetString();\n\t\t\t\t\t\tsendPayload(tblreg->url, output);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * Unregister a registration of interest in assets after a number of refusals.\n * Called holding the m_registrationsMutex\n *\n * A certain number of refused connections is tolerated to allow for\n * transient faults.\n */\nvoid StorageRegistry::processAssetRefusals()\n{\nREGISTRY newRegistry;\n\n\tif (m_refusals.empty())\n\t{\n\t\treturn;\n\t}\n\tfor (auto& item : m_registrations)\n\t{\n\t\tstring url = *item.second;\n\t\tauto refusal = m_refusals.find(url);\n\t\tif (refusal != m_refusals.end())\n\t\t{\n\t\t\tint cnt = m_refusals[url];\n\t\t\tif (cnt < MAX_REFUSALS)\n\t\t\t{\n\t\t\t\tnewRegistry.push_back(item);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info(\"Removing registration for %s with URL %s. 
Service has probably failed\", item.first->c_str(), url.c_str());\n\t\t\t\tm_refusals.erase(refusal);\n\t\t\t\tdelete item.first;\n\t\t\t\tdelete item.second;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tnewRegistry.push_back(item);\n\t\t}\n\t}\n\tm_registrations.clear();\n\tm_registrations = newRegistry;\n}\n\n/**\n * Unregister a registration of interest in tables after a number of refusals.\n * Called holding the m_tableRegistrationsMutex\n *\n * A certain number of refused connections is tolerated to allow for\n * transient faults.\n */\nvoid StorageRegistry::processTableRefusals()\n{\nREGISTRY_TABLE newRegistry;\n\n\tif (m_refusals.empty())\n\t{\n\t\treturn;\n\t}\n\tfor (auto& item : m_tableRegistrations)\n\t{\n\t\tstring url = item.second->url;\n\t\tauto refusal = m_refusals.find(url);\n\t\tif (refusal != m_refusals.end())\n\t\t{\n\t\t\tint cnt = m_refusals[url];\n\t\t\tif (cnt < MAX_REFUSALS)\n\t\t\t{\n\t\t\t\tnewRegistry.push_back(item);\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info(\"Removing registration for table %s with URL %s. 
Service has probably failed\", item.first->c_str(), url.c_str());\n\t\t\t\tm_refusals.erase(refusal);\n\t\t\t\tdelete item.first;\n\t\t\t\tdelete item.second;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tnewRegistry.push_back(item);\n\t\t}\n\t}\n\tm_tableRegistrations.clear();\n\tm_tableRegistrations = newRegistry;\n}\n\n/**\n * Test function to add some dummy/test table subscriptions\n */\nvoid StorageRegistry::insertTestTableReg()\n{\n\tstring table1(\"log\");\n\tstring payload1 = R\"***( {\"url\": \"http://localhost:8081/dummyTableNotifyUrl\", \"key\": \"code\", \"values\":[\"CONAD\", \"PURGE\", \"CONCH\", \"FSTOP\", \"SRVRG\"], \"operation\": \"insert\"} )***\";\n\n\tstring table2(\"asset_tracker\");\n\tstring payload2 = R\"***( {\"url\": \"http://localhost:8081/dummyTableNotifyUrl2\", \"key\": \"\", \"operation\": \"insert\"} )***\";\n\n\tstring table3(\"asset_tracker\");\n\tstring payload3 = R\"***( {\"url\": \"http://localhost:8081/dummyTableNotifyUrl3\", \"key\": \"event\", \"values\":[\"Ingest\", \"Filter\"], \"operation\": \"insert\"} )***\";\n\t\n\tLogger::getLogger()->error(\"StorageRegistry::insertTestTableReg(): table=%s, payload=%s\", table1.c_str(), payload1.c_str());\n\tregisterTable(table1, payload1);\n\n\tLogger::getLogger()->error(\"StorageRegistry::insertTestTableReg(): table=%s, payload=%s\", table2.c_str(), payload2.c_str());\n\tregisterTable(table2, payload2);\n\n\tLogger::getLogger()->error(\"StorageRegistry::insertTestTableReg(): table=%s, payload=%s\", table3.c_str(), payload3.c_str());\n\tregisterTable(table3, payload3);\n}\n\n/**\n * Test function to remove a dummy/test table subscription\n *\n * @param n\t\tThe subscription number to remove\n */\nvoid StorageRegistry::removeTestTableReg(int n)\n{\n\tstring table1(\"log\");\n\tstring payload1 = R\"***( {\"url\": \"http://localhost:8081/dummyTableNotifyUrl\", \"key\": \"code\", \"values\":[\"CONAD\", \"PURGE\", \"CONCH\", \"FSTOP\", \"SRVRG\"], \"operation\": \"insert\"} )***\";\n\n\tstring 
table2(\"asset_tracker\");\n\tstring payload2 = R\"***( {\"url\": \"http://localhost:8081/dummyTableNotifyUrl2\", \"key\": \"\", \"operation\": \"insert\"} )***\";\n\n\tstring table3(\"asset_tracker\");\n\tstring payload3 = R\"***( {\"url\": \"http://localhost:8081/dummyTableNotifyUrl3\", \"key\": \"event\", \"values\":[\"Ingest\", \"Filter\"], \"operation\": \"insert\"} )***\";\n\t\n\tswitch(n)\n\t{\n\t\tcase 1:\n\t\t\tunregisterTable(table1, payload1);\n\t\t\tLogger::getLogger()->error(\"StorageRegistry::removeTestTableReg(): table=%s, payload=%s\", table1.c_str(), payload1.c_str());\n\t\t\tbreak;\n\n\t\tcase 2:\n\t\t\tunregisterTable(table2, payload2);\n\t\t\tLogger::getLogger()->error(\"StorageRegistry::removeTestTableReg(): table=%s, payload=%s\", table2.c_str(), payload2.c_str());\n\t\t\tbreak;\n\n\t\tcase 3:\n\t\t\tunregisterTable(table3, payload3);\n\t\t\tLogger::getLogger()->error(\"StorageRegistry::removeTestTableReg(): table=%s, payload=%s\", table3.c_str(), payload3.c_str());\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tLogger::getLogger()->error(\"StorageRegistry::removeTestTableReg(): unhandled value n=%d\", n);\n\t\t\tbreak;\n\t}\n}\n\n\n"
  },
  {
    "path": "C/services/storage/storage_stats.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2017 OSisoft, LLC\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <storage_stats.h>\n#include <string>\n#include <sstream>\n\nusing namespace std;\n\n/**\n * Construct the statistics class for the storage service.\n */\nStorageStats::StorageStats() : commonInsert(0), commonSimpleQuery(0),\n\t\t\t\tcommonQuery(0), commonUpdate(0), commonDelete(0),\n\t\t\t\treadingAppend(0), readingFetch(0),\n\t\t\t\treadingQuery(0), readingPurge(0)\n{\n}\n\n/**\n * Serialise the statistics as JSON\n */\nvoid StorageStats::asJSON(string& json) const\n{\nostringstream convert;   // stream used for the conversion\n\n\tconvert << \"{ \\\"commonInsert\\\" : \" << commonInsert << \",\";\n\tconvert << \" \\\"commonSimpleQuery\\\" : \" << commonSimpleQuery << \",\";\n\tconvert << \" \\\"commonQuery\\\" : \" << commonQuery << \",\";\n\tconvert << \" \\\"commonUpdate\\\" : \" << commonUpdate << \",\";\n\tconvert << \" \\\"commonDelete\\\" : \" << commonDelete << \",\";\n\tconvert << \" \\\"readingAppend\\\" : \" << readingAppend << \",\";\n\tconvert << \" \\\"readingFetch\\\" : \" << readingFetch << \",\";\n\tconvert << \" \\\"readingQuery\\\" : \" << readingQuery << \",\";\n\tconvert << \" \\\"readingPurge\\\" : \" << readingPurge << \" }\";\n\n\tjson = convert.str();\n}\n"
  },
  {
    "path": "C/services/storage/stream_handler.cpp",
    "content": "/*\n * Fledge storage service.\n *\n * Copyright (c) 2019 Dianomic Systems Inc\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <stream_handler.h>\n#include <storage_api.h>\n#include <reading_stream.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n#include <fcntl.h>\n#include <sys/epoll.h>\n#include <sys/ioctl.h>\n#include <chrono>\n#include <unistd.h>\n#include <errno.h>\n\n\nusing namespace std;\n\n/**\n * C wrapper for the handler thread we use to handle the polling of\n * the stream ingestion protocol.\n *\n * @param handler\tThe StreamHandler instance that started this thread\n */\nstatic void threadWrapper(void *handler)\n{\n\t((StreamHandler *)handler)->handler();\n}\n\n/**\n * Constructor for the StreamHandler class\n */\nStreamHandler::StreamHandler(StorageApi *api) : m_api(api), m_running(true)\n{\n\tm_pollfd = epoll_create(1);\n\tm_handlerThread = thread(threadWrapper, this);\n}\n\n\n/**\n * Destructor for the StreamHandler. Close down the epoll\n * system and wait for the handler thread to terminate.\n */\nStreamHandler::~StreamHandler()\n{\n\tm_running = false;\n\tclose(m_pollfd);\n\tm_handlerThread.join();\n}\n\n/**\n * The handler method for the stream handler. This is run in its own thread\n * and is responsible for using epoll to gather events on the descriptors and\n * to dispatch them to the individual streams\n */\nvoid StreamHandler::handler()\n{\n\tstruct epoll_event events[MAX_EVENTS];\n\twhile (m_running)\n\t{\n\t\tstd::unique_lock<std::mutex> lock(m_streamsMutex);\n\t\tif (m_streams.size() == 0)\n\t\t{\n\t\t\tLogger::getLogger()->debug(\"Waiting for first stream to be created\");\n\t\t\tm_streamsCV.wait_for(lock, chrono::milliseconds(500));\n\t\t}\n\t\telse\n\t\t{\n\t\t\t/*\n\t\t\t * Call epoll_wait with a zero timeout to see if any data is available.\n\t\t\t * If not then call with a timeout. 
This prevents Linux from scheduling\n\t\t\t * us out if there is data on the socket.\n\t\t\t */\n\t\t\tint nfds = epoll_wait(m_pollfd, events, MAX_EVENTS, 0);\n\t\t\tif (nfds == 0)\n\t\t\t{\n\t\t\t\tnfds = epoll_wait(m_pollfd, events, MAX_EVENTS, 100);\n\t\t\t}\n\t\t\tif (nfds == -1)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"Stream epoll error: %s\", strerror(errno));\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tfor (int i = 0; i < nfds; i++)\n\t\t\t\t{\n\t\t\t\t\tStream *stream = (Stream *)events[i].data.ptr;\n\t\t\t\t\tstream->handleEvent(m_pollfd, m_api, events[i].events);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * Create a new stream and add it to the epoll mechanism for the stream handler\n *\n * @param token\t\tThe single use connection token the client should send\n * @return\t\tThe port on which this stream is listening\n */\nuint32_t StreamHandler::createStream(uint32_t *token)\n{\n\tStream *stream = new Stream();\n\tuint32_t port = stream->create(m_pollfd, token);\n\t{\n\t\tstd::unique_lock<std::mutex> lock(m_streamsMutex);\n\t\tm_streams.push_back(stream);\n\t}\n\n\tm_streamsCV.notify_all();\n\n\treturn port;\n}\n\n/**\n * Create a stream object to deal with the stream protocol\n */\nStreamHandler::Stream::Stream() : m_status(Closed), m_blockPool(NULL)\n{\n}\n\n/**\n * Destroy a stream\n */\nStreamHandler::Stream::~Stream() \n{\n\tdelete m_blockPool;\n}\n\n/**\n * Create a new stream object. Add that stream to the epoll structure.\n * A listener socket is created and the port sent back to the caller. 
The client\n * will connect to this port and then send the token to verify they are the\n * service that requested the stream to be connected.\n *\n * The client calls a REST API endpoint in the storage layer to request a streaming\n * connection which results in this method being called.\n *\n * @param epollfd\tThe epoll descriptor\n * @param token\t\tThe single use token the client will send in the connect request\n */\nuint32_t StreamHandler::Stream::create(int epollfd, uint32_t *token)\n{\nstruct sockaddr_in\taddress;\n\n\t// Create the memory pool from which readings will be allocated\n\tif ((m_blockPool = new MemoryPool(BLOCK_POOL_SIZES)) == NULL)\n\t{\n\t\tLogger::getLogger()->error(\"Failed to create memory block pool\");\n\t\treturn 0;\n\t}\n\n\t// Open the socket used to listen for the incoming stream connection\n\tif ((m_socket = socket(AF_INET, SOCK_STREAM, 0)) < 0)\n\t{\n\t\tLogger::getLogger()->error(\"Failed to create socket: %s\", strerror(errno));\n\t\treturn 0;\n\t}\n\taddress.sin_family = AF_INET;\n\taddress.sin_addr.s_addr = INADDR_ANY;\n\taddress.sin_port = 0;\n\n\tif (bind(m_socket, (struct sockaddr *)&address, sizeof(address)) < 0)\n\t{\n\t\tLogger::getLogger()->error(\"Failed to bind socket: %s\", strerror(errno));\n\t\treturn 0;\n\t}\n\tsocklen_t len = sizeof(address);\n\tif (getsockname(m_socket, (struct sockaddr *)&address, &len) == -1)\n\t\tLogger::getLogger()->error(\"Failed to get socket name, %s\", strerror(errno));\n\tm_port = ntohs(address.sin_port);\n\tLogger::getLogger()->info(\"Stream port bound to %d\", m_port);\n\tsetNonBlocking(m_socket);\n\n\tif (listen(m_socket, 3) < 0)\n\t{\n\t\tLogger::getLogger()->error(\"Failed to listen: %s\", strerror(errno));\n\t\treturn 0;\n\t}\n\tm_status = Listen;\n\n\t// Create the random token that is used to verify the connection comes from the\n\t// source that requested the streaming connection\n\tsrand(m_port + (unsigned int)time(0));\n\tm_token = (uint32_t)random() & 
0xffffffff;\n\t*token = m_token;\n\n\t// Add to epoll set\n\tm_event.data.ptr = this;\n\tm_event.events = EPOLLIN | EPOLLRDHUP | EPOLLHUP | EPOLLPRI | EPOLLERR;\n\tif (epoll_ctl(epollfd, EPOLL_CTL_ADD, m_socket, &m_event) < 0)\n\t{\n\t\tLogger::getLogger()->error(\"Failed to add listening port %d to epoll fileset, %s\", m_port, strerror(errno));\n\t}\n\n\treturn m_port;\n}\n\n/**\n * Set the file descriptor to be non-blocking\n *\n * @param fd\tThe file descriptor to set non-blocking\n */\nvoid StreamHandler::Stream::setNonBlocking(int fd)\n{\n\tint flags;\n\tflags = fcntl(fd, F_GETFL, 0);\n\tflags |= O_NONBLOCK;\n\tfcntl(fd, F_SETFL, flags);\n}\n\n/**\n * Handle an epoll event. The precise handling will depend\n * on the state of the stream.\n *\n * One of the things done here is to handle the streaming protocol,\n * reading the block header, the individual reading headers and the\n * readings themselves.\n *\n * TODO Improve memory handling, use separate threads for inserts, send acknowledgements\n *\n * @param epollfd\tThe epoll file descriptor\n * @param api\t\tThe storage API used to process the readings\n * @param events\tThe epoll events that were triggered\n */\nvoid StreamHandler::Stream::handleEvent(int epollfd, StorageApi *api, uint32_t events)\n{\nssize_t n;\n\n\tif (events & EPOLLRDHUP)\n\t{\n\t\t// TODO mark this stream for destruction\n\t\tepoll_ctl(epollfd, EPOLL_CTL_DEL, m_socket, &m_event);\n\t\tclose(m_socket);\n\t\tLogger::getLogger()->error(\"Closing stream...\");\n\t\tm_status = Closed;\n\t}\n\tif (events & EPOLLHUP)\n\t{\n\t\t// TODO mark this stream for destruction\n\t\tepoll_ctl(epollfd, EPOLL_CTL_DEL, m_socket, &m_event);\n\t\tclose(m_socket);\n\t\tLogger::getLogger()->error(\"Hangup on socket, closing stream...\");\n\t\tm_status = Closed;\n\t}\n\tif (events & EPOLLPRI)\n\t{\n\t\t// TODO mark this stream for destruction\n\t\tepoll_ctl(epollfd, EPOLL_CTL_DEL, m_socket, &m_event);\n\t\tclose(m_socket);\n\t\tLogger::getLogger()->error(\"Exceptional condition on socket, closing stream...\");\n\t\tm_status = Closed;\n\t}\n\tif (events & EPOLLERR)\n\t{\n\t\t// 
TODO mark this stream for destruction\n\t\tepoll_ctl(epollfd, EPOLL_CTL_DEL, m_socket, &m_event);\n\t\tclose(m_socket);\n\t\tm_status = Closed;\n\t\tLogger::getLogger()->error(\"Error condition on socket, closing stream...\");\n\t}\n\tif (events & EPOLLIN)\n\t{\n\t\tif (m_status == Listen)\n\t\t{\n\t\t\t// Accept the connection for the streaming data\n\t\t\tint conn_sock;\n\t\t\tstruct sockaddr\taddr;\n\t\t\tsocklen_t\taddrlen = sizeof(addr);\n\t\t\tif ((conn_sock = accept(m_socket,\n\t\t\t\t\t\t  (struct sockaddr *)&addr, &addrlen)) == -1)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->info(\"Accept failed for streaming socket: %s\", strerror(errno));\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t// Remove and close the listening socket now we have a connection\n\t\t\tepoll_ctl(epollfd, EPOLL_CTL_DEL, m_socket, &m_event);\n\t\t\tclose(m_socket);\n\t\t\tLogger::getLogger()->info(\"Stream connection established\");\n\t\t\tm_socket = conn_sock;\n\t\t\tm_status = AwaitingToken;\n\t\t\tm_event.events = EPOLLIN | EPOLLRDHUP | EPOLLHUP | EPOLLERR | EPOLLPRI | EPOLLET;\n\t\t\tm_event.data.ptr = this;\n\t\t\tif (epoll_ctl(epollfd, EPOLL_CTL_ADD, m_socket, &m_event) == -1)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->fatal(\"Failed to add data socket to epoll set: %s\", strerror(errno));\n\t\t\t}\n\t\t}\n\t\telse if (m_status == AwaitingToken)\n\t\t{\n\t\t\tRDSConnectHeader\thdr;\n\t\t\tif (available(m_socket) < sizeof(hdr))\n\t\t\t{\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif ((n = read(m_socket, &hdr, sizeof(hdr))) != (int)sizeof(hdr))\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Token exchange failed: Short read of %d bytes: %s\", n, strerror(errno));\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (hdr.magic == RDS_CONNECTION_MAGIC && hdr.token == m_token)\n\t\t\t{\n\t\t\t\tm_status = Connected;\n\t\t\t\tm_blockNo = 0;\n\t\t\t\tm_readingNo = 0;\n\t\t\t\tm_protocolState = BlkHdr;\n\t\t\t\tLogger::getLogger()->info(\"Token for streaming socket 
exchanged\");\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->warn(\"Incorrect token for streaming socket\");\n\t\t\t\tclose(m_socket);\n\t\t\t}\n\t\t}\n\t\telse if (m_status == Connected)\n\t\t{\n\t\t\t/*\n\t\t\t * We are connected so loop on the available data, reading block headers,\n\t\t\t * reading headers and the readings themselves.\n\t\t\t *\n\t\t\t * We use the available method to see if there is enough data before we\n\t\t\t * read in order to avoid blocking in a read call. This also means we do\n\t\t\t * not have to set the socket to non-blocking mode, so our\n\t\t\t * epoll interaction does not need to be edge triggered.\n\t\t\t *\n\t\t\t * Once we exhaust the data that is available we return and allow the\n\t\t\t * epoll to inform us when more data becomes available.\n\t\t\t */\n\t\t\twhile (1)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->debug(\"Connected in protocol state %d, readingNo %d\", m_protocolState, m_readingNo);\n\t\t\t\tif (m_protocolState == BlkHdr)\n\t\t\t\t{\n\t\t\t\t\tRDSBlockHeader blkHdr;\n\t\t\t\t\tif (available(m_socket) < sizeof(blkHdr))\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->debug(\"Not enough bytes for block header\");\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\tif ((n = read(m_socket, &blkHdr, sizeof(blkHdr))) != (int)sizeof(blkHdr))\n\t\t\t\t\t{\n\t\t\t\t\t\t// This should never happen as available said we had enough data\n\t\t\t\t\t\tLogger::getLogger()->warn(\"Block Header: Short read of %d bytes: %s\", n, strerror(errno));\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\tif (blkHdr.magic != RDS_BLOCK_MAGIC)\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->error(\"Expected block header %d, but incorrect header found 0x%x\", m_blockNo, blkHdr.magic);\n\t\t\t\t\t\tLogger::getLogger()->error(\"Previous block size was %d\", m_blockSize);\n\t\t\t\t\t\tdump(10);\n\t\t\t\t\t\tclose(m_socket);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\tif (blkHdr.blockNumber != m_blockNo)\n\t\t\t\t\t{\n\t\t\t\t\t\t// Somehow we lost 
a block\n\t\t\t\t\t}\n\t\t\t\t\tm_blockNo++;\n\t\t\t\t\tm_blockSize = blkHdr.count;\n\t\t\t\t\tm_protocolState = RdHdr;\n\t\t\t\t\tm_readingNo = 0;\n\t\t\t\t\tLogger::getLogger()->info(\"New block %d of %d readings\", blkHdr.blockNumber, blkHdr.count);\n\t\t\t\t}\n\t\t\t\telse if (m_protocolState == RdHdr)\n\t\t\t\t{\n\t\t\t\t\t// We are expecting a reading header\n\t\t\t\t\tRDSReadingHeader rdhdr;\n\t\t\t\t\tif (available(m_socket) < sizeof(rdhdr))\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->warn(\"Not enough bytes %d for reading header %d in block %d (socket %d)\", available(m_socket), m_readingNo, m_blockNo - 1, m_socket);\n\t\t\t\t\t\tstatic bool reported = false;\n\t\t\t\t\t\tif (!reported)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tchar buf[40];\n\t\t\t\t\t\t\tint i;\n\t\t\t\t\t\t\ti = recv(m_socket, buf, sizeof(buf), MSG_PEEK);\n\t\t\t\t\t\t\tfor (int j = 0; j < i; j++)\n\t\t\t\t\t\t\t\tLogger::getLogger()->warn(\"Byte at %d is %x\", j, buf[j]);\n\t\t\t\t\t\t\treported = true;\n\t\t\t\t\t\t}\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\tint n;\n\t\t\t\t\tif ((n = read(m_socket, &rdhdr, sizeof(rdhdr))) < (int)sizeof(rdhdr))\n\t\t\t\t\t{\n\t\t\t\t\t\t// Should never happen\n\t\t\t\t\t\tLogger::getLogger()->warn(\"Not enough bytes read %d for reading header\", n);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\tif (rdhdr.magic != RDS_READING_MAGIC)\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->error(\"Expected reading header %d of %d in block %d, but incorrect header found 0x%x\", m_readingNo, m_blockSize, m_blockNo, rdhdr.magic);\n\t\t\t\t\t\tdump(10);\n\t\t\t\t\t\tclose(m_socket);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\tLogger::getLogger()->debug(\"Reading Header: assetCodeLength %d, payloadLength %d\", rdhdr.assetLength, rdhdr.payloadLength);\n\t\t\t\t\tm_readingSize = sizeof(struct timeval) + rdhdr.assetLength + rdhdr.payloadLength;\n\t\t\t\t\tuint32_t extra = 0;\n\t\t\t\t\tif (rdhdr.assetLength)\n\t\t\t\t\t{\n\t\t\t\t\t\tm_sameAsset = false;\n\t\t\t\t\t\textra = 
0;\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tm_sameAsset = true;\n\t\t\t\t\t\textra = m_lastAsset.length() + 1;\n\t\t\t\t\t\trdhdr.assetLength = extra;\n\t\t\t\t\t}\n\t\t\t\t\textra  += 2 * sizeof(uint32_t);\n\t\t\t\t\tm_currentReading = (ReadingStream *)m_blockPool->allocate(m_readingSize + extra);\n\t\t\t\t\tm_readings[m_readingNo % RDS_BLOCK] = m_currentReading;\n\t\t\t\t\tm_currentReading->assetCodeLength = rdhdr.assetLength;\n\t\t\t\t\tm_currentReading->payloadLength = rdhdr.payloadLength;\n\t\t\t\t\tm_protocolState = RdBody;\n\t\t\t\t}\n\t\t\t\telse if (m_protocolState == RdBody)\n\t\t\t\t{\n\t\t\t\t\t// We are expecting a reading body\n\t\t\t\t\tif (available(m_socket) < m_readingSize)\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->warn(\"Not enough bytes %d for reading %d in block %d\", m_readingSize, m_readingNo, m_blockNo - 1);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\tstruct iovec iov[3];\n\n\t\t\t\t\tiov[0].iov_base = &m_currentReading->userTs;\n\t\t\t\t\tiov[0].iov_len = sizeof(struct timeval);\n\n\t\t\t\t\tif (!m_sameAsset)\n\t\t\t\t\t{\n\t\t\t\t\t\tiov[1].iov_base = &m_currentReading->assetCode;\n\t\t\t\t\t\tiov[1].iov_len = m_currentReading->assetCodeLength;\n\t\t\t\t\t\tiov[2].iov_base = &m_currentReading->assetCode[m_currentReading->assetCodeLength];\n\t\t\t\t\t\tiov[2].iov_len = m_currentReading->payloadLength;\n\t\t\t\t\t\tlong n = readv(m_socket, iov, 3);\n\t\t\t\t\t\tif ((unsigned long)n != m_currentReading->assetCodeLength +\n\t\t\t\t\t\t\t\tm_currentReading->payloadLength + sizeof(struct timeval))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tLogger::getLogger()->error(\"Short read for reading\");\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tm_lastAsset = m_currentReading->assetCode;\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tiov[1].iov_base = &m_currentReading->assetCode[m_currentReading->assetCodeLength];\n\t\t\t\t\t\tiov[1].iov_len = m_currentReading->payloadLength;\n\t\t\t\t\t\tlong n = readv(m_socket, iov, 2);\n\t\t\t\t\t\tif ((unsigned 
long)n != m_currentReading->payloadLength + sizeof(struct timeval))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tLogger::getLogger()->error(\"Short read for reading\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\tmemcpy(&m_currentReading->assetCode[0], m_lastAsset.c_str(), m_currentReading->assetCodeLength);\n\t\t\t\t\t}\n\t\t\t\t\tm_readingNo++;\n\t\t\t\t\tm_protocolState = RdHdr;\n\t\t\t\t\tif ((m_readingNo % RDS_BLOCK) == 0)\n\t\t\t\t\t{\n\t\t\t\t\t\tqueueInsert(api, RDS_BLOCK, false);\n\t\t\t\t\t\tfor (int i = 0; i < RDS_BLOCK; i++)\n\t\t\t\t\t\t\tm_blockPool->release(m_readings[i]);\n\t\t\t\t\t}\n\t\t\t\t\telse if (m_readingNo == m_blockSize)\n\t\t\t\t\t{\n\t\t\t\t\t\t// We have completed the block, insert readings and wait\n\t\t\t\t\t\t// for a block header\n\t\t\t\t\t\tqueueInsert(api, m_readingNo % RDS_BLOCK, true);\n\t\t\t\t\t\tfor (uint32_t i = 0; i < m_readingNo % RDS_BLOCK; i++)\n\t\t\t\t\t\t\tm_blockPool->release(m_readings[i]);\n\t\t\t\t\t\tm_protocolState = BlkHdr;\n\t\t\t\t\t\tLogger::getLogger()->warn(\"Waiting for the next block header\");\n\t\t\t\t\t}\n\t\t\t\t\telse if (m_readingNo > m_blockSize)\n\t\t\t\t\t{\n\t\t\t\t\t\tLogger::getLogger()->error(\"Too many readings in block\");\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * Queue a block of readings to be inserted into the database. 
The readings\n * are available via the m_readings array.\n *\n * @param nReadings\tThe number of readings to insert\n * @param commit\tPerform commit at end of this block\n */\nvoid StreamHandler::Stream::queueInsert(StorageApi *api, unsigned int nReadings, bool commit)\n{\n\tm_readings[nReadings] = NULL;\n\tapi->readingStream(m_readings, commit);\n}\n\n/**\n * Return the number of bytes available to read on the\n * given file descriptor\n *\n * @param fd\tThe file descriptor to check\n */\nunsigned int  StreamHandler::Stream::available(int fd)\n{\nunsigned int\tavail;\n\n\tif (ioctl(fd, FIONREAD, &avail) < 0)\n\t{\n\t\tLogger::getLogger()->warn(\"FIONREAD failed: %s\", strerror(errno));\n\t\treturn 0;\n\t}\n\treturn avail;\n}\n\n/**\n * Block memory pool destructor. Return any memory from the memory pools\n * to the system.\n */\nStreamHandler::Stream::MemoryPool::~MemoryPool()\n{\n\tfor (auto it = m_pool.begin(); it != m_pool.end(); it++)\n        {\n                while (! it->second->empty())\n                {\n\t\t\tvoid *mem = it->second->back();\n\t\t\tit->second->pop_back();\n                        free(&((size_t *)mem)[-1]);\n                }\n\t\tdelete it->second;\n        }\n}\n\n/**\n * Allocate a buffer from the block pool\n *\n * @param size\tMinimum size of block to allocate\n */\nvoid *StreamHandler::Stream::MemoryPool::allocate(size_t size)\n{\n\tsize = rndSize(size);\n\tauto blkpool = m_pool.find(size);\n\tif (blkpool == m_pool.end())\n\t{\n\t\tLogger::getLogger()->info(\"No block pool for %d bytes, creating\", size);\n\t\t// Create a new memory pool\n\t\tcreatePool(size);\n\t\tblkpool = m_pool.find(size);\n\t}\n\tif (blkpool->second->empty())\n\t{\n\t\tLogger::getLogger()->warn(\"Extending block pool for %d bytes\", size);\n\t\tgrowPool(blkpool->second, size);\n\t}\n\tvoid *memory = blkpool->second->back();\n\tblkpool->second->pop_back();\n\n\treturn memory;\n}\n\n/**\n * Release memory back to the memory pool\n *\n * @param memory\tThe 
memory to release\n */\nvoid StreamHandler::Stream::MemoryPool::release(void *memory)\n{\n\tsize_t poolSize = ((size_t *)memory)[-1];\n\tauto blkpool = m_pool.find(poolSize);\n\tif (blkpool == m_pool.end())\n\t{\n\t\tLogger::getLogger()->fatal(\"Returning memory to a block pool (%d) that does not exist\", poolSize);\n\t\tthrow runtime_error(\"Invalid block pool\");\n\t}\n\tblkpool->second->push_back(memory);\n}\n\n/**\n * Allocate a new memory block pool\n *\n * @param size\tSize of the memory blocks in the pool\n */\nvoid StreamHandler::Stream::MemoryPool::createPool(size_t size)\n{\n\tsize_t realSize = size + sizeof(size_t);\n\tvector<void *> *blocks = new vector<void *>;\n\tfor (int i = 0; i < RDS_BLOCK; i++)\n\t{\n\t\tsize_t *mem = (size_t *)malloc(realSize);\n\t\tblocks->push_back(&mem[1]);\n\t\tmem[0] = size;\n\t}\n\tm_pool.insert(pair<int, vector<void *>* >(size, blocks));\n}\n\n/**\n * Grow the memory pool for this size block\n *\n * @param pool\t\tThe memory pool\n * @param size\t\tThe size of the blocks in the memory pool\n */\nvoid StreamHandler::Stream::MemoryPool::growPool(vector<void *> *pool, size_t size)\n{\n\tsize_t realSize = size + sizeof(size_t);\n\tfor (int i = 0; i < RDS_BLOCK; i++)\n\t{\n\t\tsize_t *mem = (size_t *)malloc(realSize);\n\t\tpool->push_back(&mem[1]);\n\t\tmem[0] = size;\n\t}\n}\n\n/**\n * Diagnostic routine to display stream content.\n *\n * @param n Number of lines to display\n */\nvoid StreamHandler::Stream::dump(int n)\n{\n\tchar buf[132];\n\tchar data[10];\n\twhile (n--)\n\t{\n\t\tbuf[0] = 0;\n\t\tint r = read(m_socket, data, 10);\n\t\tfor (int i = 0; i < r; i++)\n\t\t{\n\t\t\tchar one[8];\n\t\t\tsnprintf(one, sizeof(one), \"0x%02x \", data[i]);\n\t\t\tstrcat(buf, one);\n\t\t}\n\t\tLogger::getLogger()->error(buf);\n\t}\n}\n"
  },
  {
    "path": "C/tasks/check_updates/CMakeLists.txt",
    "content": "cmake_minimum_required (VERSION 2.8.8)\nproject (check_updates)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Wextra -Wsign-conversion\")\nset(COMMON_LIB common-lib)\nset(PLUGINS_COMMON_LIB plugins-common-lib)\n\nfind_package(Threads REQUIRED)\n\nset(BOOST_COMPONENTS system thread)\n\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\ninclude_directories(.)\ninclude_directories(include)\ninclude_directories(../../thirdparty/Simple-Web-Server)\ninclude_directories(../../thirdparty/rapidjson/include)\ninclude_directories(../../common/include)\n\nfile(GLOB check_updates_src \"*.cpp\")\n\nlink_directories(${PROJECT_BINARY_DIR}/../../lib)\n\nadd_executable(${PROJECT_NAME} ${check_updates_src} ${common_src})\ntarget_link_libraries(${PROJECT_NAME} ${Boost_LIBRARIES})\ntarget_link_libraries(${PROJECT_NAME} ${CMAKE_THREAD_LIBS_INIT})\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${PLUGINS_COMMON_LIB})\n\n\ninstall(TARGETS check_updates RUNTIME DESTINATION fledge/tasks)\n\nif(MSYS) #TODO: Is MSYS true when MSVC is true?\n    target_link_libraries(check_updates ws2_32 wsock32)\n    if(OPENSSL_FOUND)\n        target_link_libraries(check_updates ws2_32 wsock32)\n    endif()\nendif()\n"
  },
  {
    "path": "C/tasks/check_updates/check_updates.cpp",
    "content": "/*\n * Fledge Check Updates\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Devki Nandan Ghildiyal\n */\n\n#include <check_updates.h>\n#include <logger.h>\n\n#include <cstdlib>\n#include <thread>\n#include <csignal>\n#include <fstream>\n#include <errno.h>\n#include <cstring>\n#include <sstream>\n\nusing namespace std;\n\nvolatile std::sig_atomic_t signalReceived = 0;\n\nstatic void signalHandler(int signal)\n{\n\tsignalReceived = signal;\n}\n\n\n/**\n * Constructor for CheckUpdates\n */\nCheckUpdates::CheckUpdates(int argc, char** argv) : FledgeProcess(argc, argv)\n{\n\tstd::string paramName;\n\tparamName = getName();\n\tm_logger = Logger::getLogger();\t// Logger is created by FledgeProcess\n\tm_logger->info(\"CheckUpdates starting - parameters name :%s:\", paramName.c_str() );\n\tm_mgtClient = this->getManagementClient();\n\n}\n\n/**\n * Destructor for CheckUpdates\n */\nCheckUpdates::~CheckUpdates()\n{\n}\n\n/**\n * CheckUpdates run method, called by the base class to start the process and do the actual work.\n */\nvoid CheckUpdates::run()\n{\n\t// We handle these signals, add more if needed\n\tstd::signal(SIGINT,  signalHandler);\n\tstd::signal(SIGSTOP, signalHandler);\n\tstd::signal(SIGTERM, signalHandler);\n\n\n\tif (!m_dryRun)\n\t{\n\t\traiseAlerts();\n\t}\n\tprocessEnd();\n}\n\n/**\n * Execute the raiseAlerts, create an alert for all the packages for which update is available\n */\nvoid CheckUpdates::raiseAlerts()\n{\n\tm_logger->debug(\"raiseAlerts running\");\n\ttry\n\t{\n\t\tint availableUpdates = getUpgradablePackageList().size();\n\n\t\tif (availableUpdates > 0)\n\t\t{\n\t\t\tstd::string key = \"package_updates\";\n\t\t\tstd::string message = \"\";\n\t\t\tif (availableUpdates == 1)\n\t\t\t\tmessage = \"There is \" + std::to_string(availableUpdates) + \" update available to be installed\";\n\t\t\telse\n\t\t\t\tmessage = \"There are \" + std::to_string(availableUpdates) + \" updates 
available to be installed\";\n\n\t\t\tstd::string urgency = \"normal\";\n\t\t\tif (!m_mgtClient->raiseAlert(key,message,urgency))\n\t\t\t{\n\t\t\t\tm_logger->error(\"Failed to raise an alert for key=%s,message=%s,urgency=%s\", key.c_str(), message.c_str(), urgency.c_str());\n\t\t\t}\n\t\t}\n\n\t}\n\tcatch (...)\n\t{\n\t\ttry\n\t\t{\n\t\t\tstd::exception_ptr p = std::current_exception();\n\t\t\tstd::rethrow_exception(p);\n\t\t}\n\t\tcatch(const std::exception& e)\n\t\t{\n\t\t\tm_logger->error(\"Failed to raise alert : %s\", e.what());\n\t\t}\n\n\t}\n}\n\n/**\n * Logs process end message\n */\n\nvoid CheckUpdates::processEnd()\n{\n\tm_logger->debug(\"raiseAlerts completed\");\n}\n\n/**\n * Fetch the package manager name\n */\n\nstd::string CheckUpdates::getPackageManager() \n{\n\tstd::string command = \"command -v yum || command -v apt-get\";\n\tstd::string result = \"\";\n\tchar buffer[128];\n\t\n\t// Open pipe to file\n\tFILE* pipe = popen(command.c_str(), \"r\");\n\tif (!pipe)\n\t{\n\t\tm_logger->error(\"getPackageManager: popen call failed : %s\",strerror(errno));\n\t\treturn \"\";\n\t}\n\t// Read till end of process\n\twhile (!feof(pipe))\n\t{\n\t\tif (fgets(buffer, 128, pipe) != NULL)\n\t\t\tresult += buffer;\n\t}\n\n\tpclose(pipe);\n\n\tif (result.find(\"apt\") != std::string::npos)\n\t\treturn \"apt\";\n\tif (result.find(\"yum\") != std::string::npos)\n\t\treturn \"yum\";\n\n\tm_logger->warn(\"Unsupported environment %s\", result.c_str() );\n\treturn \"\";\n}\n\n/**\n * Fetch a list of all the package names for which an upgrade is available\n */\nstd::vector<std::string> CheckUpdates::getUpgradablePackageList() \n{\n\tstd::string packageManager = getPackageManager();\n\tstd::vector<std::string> packageList;\n\tif(!packageManager.empty())\n\t{\n\t\tstd::string command = \"(sudo apt update && sudo apt list --upgradeable) 2>/dev/null | grep -v '^fledge-manage' | grep '^fledge' |  tr -s ' ' | cut -d' ' -f-1,2 \";\n\t\tif (packageManager.find(\"yum\") != 
std::string::npos)\n\t\t{\n\t\t\tcommand = \"(sudo yum check-update && sudo yum list updates) 2>/dev/null | grep -v '^fledge-manage' | grep '^fledge' |  tr -s ' ' | cut -d' ' -f-1,2 \";\n\t\t}\t\n\n\t\tFILE* pipe = popen(command.c_str(), \"r\");\n\t\tif (!pipe)\n\t\t{\n\t\t\tm_logger->error(\"getUpgradablePackageList: popen call failed : %s\",strerror(errno));\n\t\t\treturn packageList;\n\t\t}\n\n\t\tchar buffer[1024];\n\t\twhile (!feof(pipe))\n\t\t{\n\t\t\tif (fgets(buffer, sizeof(buffer), pipe) != NULL)\n\t\t\t{\n\t\t\t\t//strip out newline character\n\t\t\t\tint len = strlen(buffer) - 1;\n\t\t\t\tif (*buffer && buffer[len] == '\\n')\n\t\t\t\t\tbuffer[len] = '\\0';\n\n\t\t\t\tpackageList.emplace_back(buffer);\n\n\t\t\t}\n\t\t}\n\t\t\n\t\tpclose(pipe);\n\t}\n\n\treturn packageList;\n}\n"
  },
  {
    "path": "C/tasks/check_updates/include/check_updates.h",
    "content": "#ifndef _CHECK_UPDATES_H\n#define _CHECK_UPDATES_H\n\n/*\n * Fledge Check Updates\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Devki Nandan Ghildiyal\n */\n\n#include <process.h>\n\n#define LOG_NAME \"check_updates\"\n\n/**\n * CheckUpdates class\n */\n\nclass CheckUpdates : public FledgeProcess\n{\n\tpublic:\n\t\tCheckUpdates(int argc, char** argv);\n\t\t~CheckUpdates();\n\t\tvoid run();\n\n\tprivate:\n\t\tLogger *m_logger;\n\t\tManagementClient *m_mgtClient;\n\n\t\tvoid raiseAlerts();\n\t\tstd::string getPackageManager();\n\t\tstd::vector<std::string> getUpgradablePackageList();\n\t\tvoid processEnd();\n};\n#endif\n"
  },
  {
    "path": "C/tasks/check_updates/main.cpp",
    "content": "/*\n * Fledge Check Updates\n *\n * Copyright (c) 2024 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Devki Nandan Ghildiyal\n */\n\n#include <check_updates.h>\n#include <logger.h>\n\nusing namespace std;\n\nint main(int argc, char** argv)\n{\n\ttry\n\t{\n\t\tCheckUpdates check(argc, argv);\n\n\t\tcheck.run();\n\t}\n\tcatch (...)\n\t{\n\t\ttry\n                {\n                        std::exception_ptr p = std::current_exception();\n                        std::rethrow_exception(p);\n                }\n                catch(const std::exception& e)\n                {\n\t\t\tLogger::getLogger()->error(\"An error occurred during the execution : %s\", e.what());\n                }\n\n\t\texit(1);\n\t}\n\n\t// Return success\n\texit(0);\n}\n"
  },
  {
    "path": "C/tasks/north/CMakeLists.txt",
    "content": "cmake_minimum_required (VERSION 2.8.8)\nproject (Fledge_tasks_north)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O3\")\n\nadd_subdirectory(sending_process)\n"
  },
  {
    "path": "C/tasks/north/sending_process/CMakeLists.txt",
    "content": "cmake_minimum_required (VERSION 2.8.8)\nproject (sending_process)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Wextra -Wsign-conversion\")\nset(DLLIB -ldl)\nset(UUIDLIB -luuid)\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\nset(PLUGINS_COMMON_LIB plugins-common-lib)\n\ninclude_directories(. include ../../../thirdparty/Simple-Web-Server ../../../thirdparty/rapidjson/include  ../../../common/include ../../../services/common/include ../../../plugins/common/include)\n\nfind_package(Threads REQUIRED)\n\nset(BOOST_COMPONENTS system thread)\n# Late 2017 TODO: remove the following checks and always use std::regex\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        set(BOOST_COMPONENTS ${BOOST_COMPONENTS} regex)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DUSE_BOOST_REGEX\")\n    endif()\nendif()\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\nif(APPLE)\n    set(OPENSSL_ROOT_DIR \"/usr/local/opt/openssl\")\nendif()\n\nfile(GLOB sending_process_src \"*.cpp\")\n\nlink_directories(${PROJECT_BINARY_DIR}/../../../lib)\n\nadd_executable(sending_process ${sending_process_src})\ntarget_link_libraries(sending_process ${Boost_LIBRARIES})\ntarget_link_libraries(sending_process ${CMAKE_THREAD_LIBS_INIT})\ntarget_link_libraries(sending_process ${DLLIB})\ntarget_link_libraries(sending_process ${UUIDLIB})\ntarget_link_libraries(sending_process -lssl -lcrypto)\n\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${SERVICE_COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${PLUGINS_COMMON_LIB})\n\ninstall(TARGETS sending_process RUNTIME DESTINATION fledge/tasks)\n\nif(MSYS) #TODO: Is MSYS true when MSVC is true?\n    target_link_libraries(sending_process ws2_32 wsock32)\n    if(OPENSSL_FOUND)\n        
target_link_libraries(sending_process ws2_32 wsock32)\n    endif()\nendif()\n"
  },
  {
    "path": "C/tasks/north/sending_process/include/north_filter_pipeline.h",
    "content": "#ifndef _NORTH_FILTER_PIPELINE_H\n#define _NORTH_FILTER_PIPELINE_H\n/*\n * Fledge filter pipeline class for sending process\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora\n */\n\n#include <filter_pipeline.h>\n\n/**\n * The NorthFilterPipeline class is derived from FilterPipeline class and \n * is used to represent a pipeline of filter applicable to sending process. \n * Methods are provided to load filters, setup filtering pipeline and for \n * pipeline/filters cleanup.\n */\nclass NorthFilterPipeline : public FilterPipeline \n{\n\npublic:\n\tNorthFilterPipeline(ManagementClient* mgtClient, StorageClient& storage, std::string serviceName);\n\t~NorthFilterPipeline() {}\n\t\n\t// Setup the filter pipeline\n\tbool\t\tsetupFiltersPipeline(void *passToOnwardFilter, void *useFilteredData, void *sendingProcess);\n};\n\n#endif\n"
  },
  {
    "path": "C/tasks/north/sending_process/include/north_plugin.h",
    "content": "#ifndef _NORTH_PLUGIN\n#define _NORTH_PLUGIN\n/*\n * Fledge north plugin.\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <plugin.h>\n#include <plugin_manager.h>\n#include <reading.h>\n#include <config_category.h>\n#include <plugin_data.h>\n\n/**\n * Class that represents a north plugin.\n *\n * The purpose of this class is to hide the use of the pointers into the\n * dynamically loaded plugin and wrap the interface into a class that\n * can be used directly in the north subsystem.\n *\n * This is achieved by having a set of private member variables which are\n * the pointers to the functions in the plugin, and a set of public methods\n * that will call these functions via the function pointers.\n */\nclass NorthPlugin : public Plugin {\n\tpublic:\n\t\t// Methods\n\t\tNorthPlugin(const PLUGIN_HANDLE handle);\n\t\t~NorthPlugin();\n\n\t\tvoid\t\t\tshutdown();\n\t\tstd::string\t\tshutdownSaveData();\n\t\tuint32_t\t\tsend(const std::vector<Reading* >& readings) const;\n\t\tPLUGIN_HANDLE\t\tinit(const ConfigCategory& config);\n\t\tbool\t\t\tpersistData() { return info->options & SP_PERSIST_DATA; };\n\t\tvoid\t\t\tstart();\n\t\tvoid\t\t\tstartData(const std::string& pluginData);\n\n\tprivate:\n\t\t// Function pointers\n\t\tvoid\t\t\t(*pluginShutdown)(const PLUGIN_HANDLE);\n\t\tstd::string\t\t(*pluginShutdownData)(const PLUGIN_HANDLE);\n\t\tuint32_t\t\t(*pluginSend)(const PLUGIN_HANDLE,\n\t\t\t\t\t\t      const std::vector<Reading* >& readings);\n\t\tPLUGIN_HANDLE\t\t(*pluginInit)(const ConfigCategory* config);\n\t\tvoid\t\t\t(*pluginStart)(PLUGIN_HANDLE);\n\t\tvoid\t\t\t(*pluginStartData)(PLUGIN_HANDLE,\n\t\t\t\t\t\t\t   const std::string& pluginData);\n\n\tpublic:\n\t\t// Persist plugin data\n\t\tPluginData*\t\tm_plugin_data;\n\n\tprivate:\n\t\t// Attributes\n\t\tPLUGIN_HANDLE\t\tm_instance;\n};\n\n#endif\n"
  },
  {
    "path": "C/tasks/north/sending_process/include/sending.h",
    "content": "#ifndef _SENDING_PROCESS_H\n#define _SENDING_PROCESS_H\n\n/*\n * Fledge process class\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <process.h>\n#include <thread>\n#include <north_plugin.h>\n#include <reading.h>\n#include <filter_plugin.h>\n#include <north_filter_pipeline.h>\n#include <asset_tracking.h>\n\n// SendingProcess class\nclass SendingProcess : public FledgeProcess\n{\n\tpublic:\n\t\t// Constructor:\n\t\tSendingProcess(int argc, char** argv);\n\n\t\t// Destructor\n\t\t~SendingProcess();\n\n\t\tvoid\t\t\trun() const;\n\t\tvoid\t\t\tstop();\n\t\tint\t\t\tgetStreamId() const { return m_stream_id; };\n\t\tbool\t\t\tisRunning() const { if (m_dryRun) return false; return m_running; };\n\t\tvoid\t\t\tstopRunning() { m_running = false; };\n\t\tvoid\t\t\tsetLastFetchId(unsigned long id) { m_last_fetch_id = id; };\n\t\tunsigned long\t\tgetLastFetchId() const { return m_last_fetch_id; };\n\t\tvoid\t\t\tsetLastSentId(unsigned long id) { m_last_sent_id = id; };\n\t\tunsigned long\t\tgetLastSentId() const { return m_last_sent_id; };\n\n\t\tunsigned long\t\tgetSentReadings() const { return m_tot_sent; };\n\t\tbool\t\t\tupdateSentReadings(unsigned long num) {\n\t\t\t\t\t\tm_tot_sent += num;\n\t\t\t\t\t\treturn m_tot_sent;\n\t\t};\n\t\tvoid\t\t\tresetSentReadings() { m_tot_sent = 0; };\n\t\tvoid\t\t\tupdateDatabaseCounters();\n\t\tbool\t\t\tgetLastSentReadingId();\n\t\tbool\t\t\tcreateStream(int);\n\t\tint\t\t\tcreateNewStream();\n\t\tunsigned int\t\tgetDuration() const { return m_duration; };\n\t\tunsigned int\t\tgetSleepTime() const { return m_sleep; };\n\t\tbool\t\t\tgetUpdateDb() const { return m_update_db; };\n\t\tbool\t\t\tsetUpdateDb(bool val) {\n\t\t\t\t\t\t    m_update_db = val;\n\t\t\t\t\t\t    return m_update_db;\n\t\t};\n\t\tunsigned long\t\tgetReadBlockSize() const { return m_block_size; };\n\t\tconst std::string& \tgetDataSourceType() const { return 
m_data_source_t; };\n\t\tconst std::string& \tgetPluginName() const { return m_plugin_name; };\n\t\tvoid\t\t\tsetLoadBufferIndex(unsigned long loadBufferIdx);\n\t\tunsigned long\t\tgetLoadBufferIndex() const;\n\t\tconst unsigned long*\tgetLoadBufferIndexPtr() const;\n\n    \t\tunsigned long\t\tgetMemoryBufferSize() const { return m_memory_buffer_size; };\n    \t\tvoid \t\t\tcreateConfigCategories(DefaultConfigCategory configCategory,\n    \t\t\t\t\t\t\t       std::string parent_name,\n    \t\t\t\t\t\t\t       std::string current_name,\n    \t\t\t\t\t\t\t       std::string current_description);\n\n    // Public static methods\n\tpublic:\n\t\tstatic void\t\tsetLoadBufferData(unsigned long index,\n\t\t\t\t\t\t\t  ReadingSet* readings);\n\t\tstatic std::vector<ReadingSet *>*\n\t\t\t\t\tgetDataBuffers() { return m_buffer_ptr; };\n\t\tstatic void\t\tuseFilteredData(OUTPUT_HANDLE *outHandle,\n\t\t\t\t\t\t\tREADINGSET* readings);\n\t\tstatic void\t\tpassToOnwardFilter(OUTPUT_HANDLE *outHandle,\n\t\t\t\t\t\t\t   READINGSET* readings);\n\n\tprivate:\n\t\tstd::string             retrieveTableInformationName(const char* dataSource);\n\t\tvoid                    updateStreamLastSentId(long lastSentId);\n\t\tvoid\t\t\tsetDuration(unsigned int val) { m_duration = val; };\n\t\tvoid\t\t\tsetSleepTime(unsigned long val) { m_sleep = val; };\n\t\tvoid\t\t\tsetReadBlockSize(unsigned long size) { m_block_size = size; };\n\t\tbool\t\t\tloadPlugin(const std::string& pluginName);\n\t\tConfigCategory\t\tfetchConfiguration(const std::string& defCfg,\n\t\t\t\t\t\t\t   const std::string& pluginName);\n\t\tbool\t\t\tloadFilters(const std::string& pluginName);\n\t\tvoid \t\t\tupdateStatistics(std::string& stat_key,\n\t\t\t\t\t\t\t const std::string& stat_description);\n\n\t\t// Make private the copy constructor and operator=\n\t\tSendingProcess(const SendingProcess &);\n                SendingProcess&\t\toperator=(SendingProcess const &);\n\n\tpublic:\n\t\tstd::vector<ReadingSet 
*>\tm_buffer;\n\t\tstd::thread*\t\t\tm_thread_load;\n\t\tstd::thread*\t\t\tm_thread_send;\n\t\tNorthPlugin*\t\t\tm_plugin;\n\t\tstd::vector<unsigned long>\tm_last_read_id;\n\t\tNorthFilterPipeline*\t\tfilterPipeline;\n\n\tprivate:\n\t\tbool\t\t\t\tm_running;\n\t\tint \t\t\t\tm_stream_id;\n\t\tunsigned long\t\t\tm_last_sent_id;\n    \t\tunsigned long\t\t\tm_last_fetch_id;\n\t\tunsigned long\t\t\tm_tot_sent;\n\t\tunsigned int\t\t\tm_duration;\n\t\tunsigned long\t\t\tm_sleep;\n\t\tunsigned long\t\t\tm_block_size;\n\t\tbool\t\t\t\tm_update_db;\n    \t\tstd::string\t\t\tm_plugin_name;\n                Logger*\t\t\t        m_logger;\n\t\tstd::string\t\t\tm_data_source_t;\n\t\tunsigned long\t\t\tm_load_buffer_index;\n    \t\tunsigned long\t\t\tm_memory_buffer_size = 1;\n\t\t\n\t\t// static pointer for data buffer access\n\t\tstatic std::vector<ReadingSet *>*\n\t\t\t\t\t\tm_buffer_ptr;\n\t\tAssetTracker\t\t\t*m_assetTracker;\n};\n\n#endif\n"
  },
  {
    "path": "C/tasks/north/sending_process/north_filter_pipeline.cpp",
    "content": "/*\n * Fledge filter pipeline class for sending process\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Amandeep Singh Arora\n */\n\n#include <north_filter_pipeline.h>\n#include <sending.h>\n\n#define JSON_CONFIG_FILTER_ELEM \"filter\"\n#define JSON_CONFIG_PIPELINE_ELEM \"pipeline\"\n\nusing namespace std;\n\n/**\n * NorthFilterPipeline class constructor\n *\n * This class abstracts the filter pipeline interface for the sending process\n *\n * @param mgtClient\tManagement client handle\n * @param storage\tStorage client handle\n * @param serviceName\tName of the service to which this pipeline applies\n */\nNorthFilterPipeline::NorthFilterPipeline(ManagementClient* mgtClient, StorageClient& storage, string serviceName) : \n\t\t\tFilterPipeline(mgtClient, storage, serviceName)\n{\n}\n\n/**\n * Set the filter pipeline for the sending process\n * \n * This method calls the method \"plugin_init\" for all loaded filters.\n * Up to date filter configurations and Ingest filtering methods\n * are passed to \"plugin_init\"\n *\n * @param passToOnwardFilter\tPtr to function that passes data to next filter\n * @param useFilteredData\tPtr to function that gets final filtered data\n * @param _sendingProcess\tThe SendingProcess class handle\n * @return \t\tTrue on success,\n *\t\t\tFalse otherwise.\n * @throw\t\tAny caught exception\n */\nbool NorthFilterPipeline::setupFiltersPipeline(void *passToOnwardFilter, void *useFilteredData, void *_sendingProcess)\n{\n\tbool initErrors = false;\n\tstring errMsg = \"'plugin_init' failed for filter '\";\n\tfor (auto it = m_filters.begin(); it != m_filters.end(); ++it)\n\t{\n\n\t\tif ((*it)->isBranch())\n\t\t{\n\t\t\tPipelineBranch *branch = (PipelineBranch *)(*it);\n\t\t\tbranch->setFunctions(passToOnwardFilter, useFilteredData, _sendingProcess);\n\t\t}\n\t\t(*it)->setup(mgtClient, _sendingProcess, m_filterCategories);\n\t\t// Iterate the load filters set in the Ingest 
class m_filters member \n\t\tif ((it + 1) != m_filters.end())\n\t\t{\n\t\t\t// Set next filter pointer as OUTPUT_HANDLE\n\t\t\tif (!(*it)->init((OUTPUT_HANDLE *)(*(it + 1)),\n\t\t\t\t\tfilterReadingSetFn(passToOnwardFilter)))\n\t\t\t{\n\t\t\t\terrMsg += (*it)->getName() + \"'\";\n\t\t\t\tinitErrors = true;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\t// Set load buffer index pointer as OUTPUT_HANDLE\n\t\t\tSendingProcess *sendingProcess = (SendingProcess *) _sendingProcess;\n\t\t\tconst unsigned long* bufferIndex = sendingProcess->getLoadBufferIndexPtr();\n\t\t\t\n\t\t\t// Set the Ingest class pointer as OUTPUT_HANDLE\n\t\t\tif (!(*it)->init((OUTPUT_HANDLE *)(bufferIndex),\n\t\t\t\t\t filterReadingSetFn(useFilteredData)))\n\t\t\t{\n\t\t\t\terrMsg += (*it)->getName() + \"'\";\n\t\t\t\tinitErrors = true;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t}\n\n\tif (initErrors)\n\t{\n\t\t// Failure\n\t\tLogger::getLogger()->fatal(\"%s error: %s\", __FUNCTION__, errMsg.c_str());\n\t\treturn false;\n\t}\n\n\t// Set filter pipeline is ready for data ingest\n\tm_ready = true;\n\n\t//Success\n\treturn true;\n}\n\n"
  },
  {
    "path": "C/tasks/north/sending_process/north_plugin.cpp",
    "content": "/*\n * Fledge north plugin\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <north_plugin.h>\n#include <iostream>\n\nusing namespace std;\n\n/**\n * Constructor for the class that wraps the OMF north plugin\n *\n * Create a set of function pointers.\n * @param handle    The loaded plugin handle\n */\nNorthPlugin::NorthPlugin(const PLUGIN_HANDLE handle) : Plugin(handle)\n{\n        // Setup the function pointers to the plugin\n        pluginInit = (PLUGIN_HANDLE (*)(const ConfigCategory* config))\n\t\t\t\t\tmanager->resolveSymbol(handle, \"plugin_init\");\n\n\tpluginShutdown = (void (*)(const PLUGIN_HANDLE))\n\t\t\t\t   manager->resolveSymbol(handle, \"plugin_shutdown\");\n\tpluginShutdownData = (string (*)(const PLUGIN_HANDLE))\n\t\t\t\t\t manager->resolveSymbol(handle, \"plugin_shutdown\");\n\n\tpluginSend = (uint32_t (*)(const PLUGIN_HANDLE, const vector<Reading* >& readings))\n\t\t\t\t   manager->resolveSymbol(handle, \"plugin_send\");\n\n\tpluginStart = (void (*)(const PLUGIN_HANDLE))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_start\");\n\tpluginStartData = (void (*)(const PLUGIN_HANDLE, const string& storedData))\n\t\t\t\tmanager->resolveSymbol(handle, \"plugin_start\");\n\n\t// Persist data initialised\n\tm_plugin_data = NULL;\n}\n\n// Destructor\nNorthPlugin::~NorthPlugin()\n{\n\tdelete m_plugin_data;\n}\n\n/**\n * Initialise the plugin with configuration data\n *\n * @param config    The configuration data\n * @return          The plugin handle\n */\nPLUGIN_HANDLE NorthPlugin::init(const ConfigCategory& config)\n{\n\t// Pass input data pointer\n\tm_instance = this->pluginInit(&config);\n\treturn &m_instance;\n}\n\n/**\n * Call the start method in the plugin\n * with no persisted data\n */\nvoid NorthPlugin::start()\n{\n\t// Check pluginStart function pointer exists\n\tif 
(this->pluginStart)\n\t{\n\t\tthis->pluginStart(m_instance);\n\t}\n}\n\n/**\n * Call the start method in the plugin\n * passing persisted data\n */\nvoid NorthPlugin::startData(const string& storedData)\n{\n\t// Check pluginStartData function pointer exists\n\tif (this->pluginStartData)\n\t{\n\t\tthis->pluginStartData(m_instance, storedData);\n\t}\n}\n\n/**\n * Send a vector (by reference) of reading pointers to the historian server\n *\n * @param  readings    The readings data\n * @return             The readings sent or 0 in case of any error\n */\nuint32_t NorthPlugin::send(const vector<Reading* >& readings) const\n{\n\treturn this->pluginSend(m_instance, readings);\n}\n\n/**\n * Call the shutdown method in the plugin\n */\nvoid NorthPlugin::shutdown()\n{\n\t// Check pluginShutdown function pointer exists\n\tif (this->pluginShutdown)\n\t{\n\t\treturn this->pluginShutdown(m_instance);\n\t}\n}\n\n/**\n * Call the shutdown method in the plugin\n * and return plugin data to persist as JSON string\n */\nstring NorthPlugin::shutdownSaveData()\n{\n\tstring ret(\"\");\n\t// Check pluginShutdownData function pointer exists\n\tif (this->pluginShutdownData)\n\t{\n\t\tret = this->pluginShutdownData(m_instance);\n\t}\n\treturn ret;\n}\n"
  },
  {
    "path": "C/tasks/north/sending_process/sending.cpp",
    "content": "/*\n * Fledge process class\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <sending.h>\n#include <csignal>\n#include <sys/prctl.h>\n#include <filter_plugin.h>\n#include <map>\n\n#define VERBOSE_LOG\t0\n\n#define PLUGIN_UNDEFINED \"\"\n\n// The type of the plugin managed by the Sending Process\n#define PLUGIN_TYPE \"north\"\n\n#define GLOBAL_CONFIG_KEY \"GLOBAL_CONFIGURATION\"\n#define PLUGIN_CONFIG_KEY \"PLUGIN\"\n#define PLUGIN_TYPES_KEY \"OMF_TYPES\"\n\n// Configuration retrieved from the Configuration Manager\n#define CONFIG_CATEGORY_DESCRIPTION \"Configuration of the Sending Process\"\n#define CATEGORY_OMF_TYPES_DESCRIPTION \"Configuration of OMF types\"\n\n// Used for the handling of the hierarchical configuration structure\n#define PARENT_CONFIGURATION_KEY \"North\"\n\nusing namespace std;\n\n// Default values for the creation of a new stream,\n// the description is derived from the parameter --name\n#define NEW_STREAM_LAST_OBJECT 0\n\n// Data sources handled by the sending process\n#define DATA_SOURCE_READINGS    \"readings\"\n#define DATA_SOURCE_STATISTICS  \"statistics\"\n#define DATA_SOURCE_AUDIT       \"audit\"\n\n#define DATA_SOURCE_INFORMATION_TABLE_NAME 0\n#define DATA_SOURCE_INFORMATION_STAT_KEY   1\n#define DATA_SOURCE_INFORMATION_STAT_DESCR 2\n\n// Translation from the data source type to data source information\nconst map<string, std::tuple<string, string, string>>  data_source_to_information = {\n\n\t// Data source                         - TableName   - Statistics key   - Statistics description\n\t{DATA_SOURCE_READINGS,   std::make_tuple(\"readings\",   \"Readings Sent\",   \"Readings Sent North\")},\n\t{DATA_SOURCE_STATISTICS, std::make_tuple(\"statistics\", \"Statistics Sent\", \"Statistics Sent North\")},\n\t{DATA_SOURCE_AUDIT,      std::make_tuple(\"audit\",      \"Audit Sent\",      \"Audit Sent North\")}\n};\n\n// static pointer 
to data buffers for filter plugins\nstd::vector<ReadingSet*>* SendingProcess::m_buffer_ptr = 0;\n\n// Used to identify logs\nconst string LOG_SERVICE_NAME = \"SendingProcess/sending\";\n\nstatic map<string, string> globalConfiguration = {};\n\n// Sending process default configuration\nstatic const string sendingDefaultConfig = QUOTE({\n\t\"enable\": {\n\t\t\"description\": \"A switch that can be used to enable or disable execution of the sending process.\",\n\t\t\"type\": \"boolean\",\n\t\t\"default\": \"true\" ,\n\t\t\"readonly\": \"true\"\n\t\t},\n\t\"streamId\": {\n\t\t\"description\": \"Identifies the specific stream to handle and the related information, among them the ID of the last object streamed.\",\n\t\t\"type\": \"integer\",\n\t\t\"default\": \"0\",\n\t\t\"readonly\": \"true\"\n\t\t }\n\t});\n\n// Sending process advanced configuration\nstatic const string sendingAdvancedConfig = QUOTE({\n\t\"duration\": {\n\t\t\"description\": \"How long the sending process should run (in seconds) before stopping.\",\n\t\t\"type\": \"integer\",\n\t\t\"default\": \"60\",\n\t\t\"order\": \"30\",\n\t\t\"displayName\" : \"Duration\"\n\t\t},\n        \"blockSize\":  {\n\t\t\"description\": \"The size of a block of readings to send in each transmission.\",\n\t\t\"type\": \"integer\",\n\t\t\"default\": \"500\",\n\t\t\"order\": \"31\",\n\t\t\"displayName\" : \"Readings Block Size\"\n\t\t},\n        \"sleepInterval\": {\n\t\t\"description\": \"A period of time, expressed in seconds, to wait between attempts to send readings when there are no readings to be sent.\",\n\t\t\"type\": \"integer\",\n\t\t \"default\": \"1\",\n\t\t\"order\": \"32\",\n\t\t\"displayName\" : \"Sleep Interval\"\n\t\t},\n\t\"memoryBufferSize\": {\n\t\t\"description\": \"Number of elements of blockSize size to be buffered in memory\",\n\t\t\"type\": \"integer\",\n\t\t\"default\": \"10\",\n\t\t\"order\": \"33\",\n\t\t\"displayName\" : \"Memory Buffer Size\" ,\n\t\t\"readonly\": \"false\"\n\t\t 
},\n\t\"logLevel\" : {\n\t\t\"description\" : \"Minimum level of message logged\",\n\t\t\"type\" : \"enumeration\",\n\t\t\"options\" : [ \"error\", \"warning\", \"info\", \"debug\" ],\n\t\t\"displayName\" : \"Log Level\",\n\t\t\"default\" : \"warning\",\n\t\t\"order\" : \"40\"\n\t\t}\n\t});\n\nvolatile std::sig_atomic_t signalReceived = 0;\n\n// Handle Signals\nstatic void signalHandler(int signal)\n{\n        signalReceived = signal;\n}\n\n/**\n * SendingProcess class methods\n */\n\n// Destructor\nSendingProcess::~SendingProcess()\n{\n\tdelete m_thread_load;\n\tdelete m_thread_send;\n\tdelete m_plugin;\n}\n\n// SendingProcess Class Constructor\nSendingProcess::SendingProcess(int argc, char** argv) : FledgeProcess(argc, argv)\n{\n        m_logger = Logger::getLogger();\n\n\t// the stream_id to use is retrieved from the configuration\n        m_stream_id = -1;\n\tm_plugin_name = PLUGIN_UNDEFINED;\n\n#if VERBOSE_LOG\n        int i;\n        for (i = 0; i < argc; i++)\n        {\n                m_logger->debug(\"%s - param :%d: :%s:\",\n\t\t\t\tLOG_SERVICE_NAME.c_str(),\n\t\t\t\ti,\n\t\t\t\targv[i]);\n        }\n#endif\n\n\t// Mark running state\n\tm_running = true;\n\n\t// NorthPlugin\n\tm_plugin = NULL;\n\n\t// Set vars & counters to 0, false\n\tm_last_sent_id  = 0;\n\tm_tot_sent = 0;\n\tm_update_db = false;\n\n\tLogger::getLogger()->info(\"SendingProcess is starting\");\n\n\t/**\n\t * Get Configuration from sending process and loaded plugin\n\t * Create or update configuration via Fledge API\n\t */\n\n\t// Reads the sending process configuration\n\tConfigCategory processDefault = this->fetchConfiguration(sendingDefaultConfig,\n\t\t\t\t\t\t\t\t PLUGIN_UNDEFINED);\n\n\n\t// The allocation should be done after fetchConfiguration\n\t// as the value for m_memory_buffer_size is retrieved from the configuration\n\t//\n\t// Set buffer of ReadingSet with NULLs\n\tm_buffer.resize(m_memory_buffer_size, NULL);\n\t// Initialise buffer last read 
id\n\tm_last_read_id.resize(m_memory_buffer_size, 0);\n\t// Set the static pointer\n\tm_buffer_ptr = &m_buffer;\n\n\tif (m_plugin_name == PLUGIN_UNDEFINED) {\n\n                // Ends the execution if the plug-in is not defined\n\n                string errMsg(LOG_SERVICE_NAME + \\\n\t\t\t      \" - the plug-in is not defined \"\n\t\t\t      \"for the sending process :\" +  this->getName() + \" :.\");\n\n                m_logger->fatal(errMsg);\n                throw runtime_error(errMsg);\n        }\n\n        // Loads the plug-in\n        if (!loadPlugin(string(m_plugin_name)))\n        {\n                string errMsg(\"SendingProcess: failed to load north plugin '\");\n                errMsg.append(m_plugin_name);\n                errMsg += \"'.\";\n\n                Logger::getLogger()->fatal(errMsg);\n\n                throw runtime_error(errMsg);\n        }\n\n\t// Read now the sending process configuration merged with the one\n        // related to the loaded plugin\n\n        ConfigCategory config = this->fetchConfiguration(sendingDefaultConfig,\n\t\t\t\t\t\t\t m_plugin_name);\n\n#if VERBOSE_LOG\n        m_logger->debug(\"%s - stream-id :%d:\",\n\t\t\tLOG_SERVICE_NAME.c_str(),\n\t\t\tm_stream_id);\n#endif\n\n        // Checks if stream-id is undefined and allocates a new one in that case\n        if (m_stream_id == 0) {\n\n                m_logger->info(\"%s - stream-id is undefined, allocating a new one.\",\n\t\t\t       LOG_SERVICE_NAME.c_str());\n\n                m_stream_id = this->createNewStream();\n\n                if (m_stream_id == 0) {\n\n\t\t\tstring errMsg(LOG_SERVICE_NAME + \" - it is not possible to create a new stream.\");\n\n\t\t\tm_logger->fatal(errMsg);\n\t\t\tthrow runtime_error(errMsg);\n\t\t} else {\n\t\t\tm_logger->info(\"%s - new stream-id allocated :%d:\",\n\t\t\t\t       LOG_SERVICE_NAME.c_str(),\n\t\t\t\t       m_stream_id);\n\n                        const string categoryName = this->getName();\n                        const 
string itemName = \"streamId\";\n                        const string itemValue = to_string(m_stream_id);\n\n                        // Prepares the error message in case of an error\n                        string errMsg(LOG_SERVICE_NAME + \\\n\t\t\t\t      \" - it is not possible to update the item :\" + \\\n\t\t\t\t      itemName + \" : of the category :\" + categoryName + \":\");\n\n                        try {\n                                this->getManagementClient()->setCategoryItemValue(categoryName,\n                                                                                  itemName,\n                                                                                  itemValue);\n\n                                m_logger->info(\"%s - configuration updated, using stream-id :%d:\",\n\t\t\t\t\t       LOG_SERVICE_NAME.c_str(),\n\t\t\t\t\t       m_stream_id);\n\n                        } catch (std::exception* e) {\n\n                                delete e;\n\n                                m_logger->error(errMsg);\n                                throw runtime_error(errMsg);\n\n                        } catch (...) 
{\n                                m_logger->fatal(errMsg);\n                                throw runtime_error(errMsg);\n                        }\n                }\n        }\n\n        // Init plugin with merged configuration from Fledge API\n\tthis->m_plugin->init(config);\n\t\n\tif(m_dryRun)\n\t{\n\t\treturn;\n\t}\n\n\tif (this->m_plugin->m_plugin_data)\n\t{\n\t\t// If plugin has SP_PERSIST_DATA:\n\t\t// 1 - load plugin stored data from storage: key is taskName + pluginName\n\t\tstring storedData = this->m_plugin->m_plugin_data->loadStoredData(this->getName() + m_plugin_name);\n\n\t\t// 2 - call 'plugin_start' with plugin data: startData()\n\t\tm_plugin->startData(storedData);\n\t}\n\telse\n\t{\n\t\t// Call 'plugin_start' without parameters: start()\n\t\tm_plugin->start();\n\t}\n\n\t// Fetch last_object sent from fledge.streams\n\tif (!this->getLastSentReadingId())\n\t{\n                m_logger->warn(LOG_SERVICE_NAME + \" - Last object id for stream '\" + to_string(m_stream_id) + \"' NOT found, creating a new stream.\");\n\n\t\tif (!this->createStream(m_stream_id)) {\n\n\t\t\tstring errMsg(LOG_SERVICE_NAME + \" - It is not possible to create a new stream for streamId :\" + to_string(m_stream_id) + \":.\");\n\n                        m_logger->fatal(errMsg);\n\t\t\tthrow runtime_error(errMsg);\n\t\t} else {\n                        m_logger->info(LOG_SERVICE_NAME + \" - streamId :\" + to_string(m_stream_id) + \": created.\");\n\t\t}\n\t}\n\n#if VERBOSE_LOG\n\tLogger::getLogger()->info(\"SendingProcess initialised with %d data buffers.\",\n\t\t\t\t  m_memory_buffer_size);\n\n\tLogger::getLogger()->info(\"SendingProcess data source type is '%s'\",\n\t\t\t\t  this->getDataSourceType().c_str());\n\n\tLogger::getLogger()->info(\"SendingProcess reads data from last id %lu\",\n\t\t\t\t  this->getLastSentId());\n#endif\n\n\tfilterPipeline = NULL;\n\n\tm_assetTracker = new AssetTracker(getManagementClient(), 
getName());\n\tAssetTracker::getAssetTracker()->populateAssetTrackingCache(getName(), \"Egress\");\n\t\n\t// Load filter plugins\n\tif (!this->loadFilters(this->getName()))\n\t{\n\t\tLogger::getLogger()->fatal(\"SendingProcess failed loading filter plugins. Exiting\");\n\t\tthrow runtime_error(LOG_SERVICE_NAME + \" failure while loading filter plugins.\");\n\t}\n}\n\n// While running check signals and execution time\nvoid SendingProcess::run() const\n{\n\n        // Requests the kernel to deliver SIGHUP when parent dies\n        prctl(PR_SET_PDEATHSIG, SIGHUP);\n\n\t// We handle these signals, add more if needed\n        std::signal(SIGHUP,  signalHandler);\n\tstd::signal(SIGINT,  signalHandler);\n\tstd::signal(SIGSTOP, signalHandler);\n\tstd::signal(SIGTERM, signalHandler);\n        std::signal(SIGABRT, signalHandler);   // Catches the Fledge kill command\n\n        // Check running time\n\ttime_t elapsedSeconds = 0;\n\twhile (elapsedSeconds < (time_t)m_duration)\n\t{\n\t\t// Check whether a signal has been received\n\t\tif (signalReceived != 0)\n\t\t{\n\t\t\tLogger::getLogger()->info(\"SendingProcess is stopping due to caught signal %d (%s), after %ld seconds\",\n\t\t\t\t\t\t  signalReceived,\n\t\t\t\t\t\t  strsignal(signalReceived),\n\t\t\t\t\t\t  (long)elapsedSeconds);\n\t\t\tbreak;\n\t\t}\n\n                // Just sleep\n\t\tsleep(m_sleep);\n\n\t\tif (m_dryRun)\t// We do this here to allow the threads time to setup\n\t\t{\n\t\t\tbreak;\n\t\t}\n\n\t\telapsedSeconds = time(NULL) - this->getStartTime();\n\t}\n\tLogger::getLogger()->info(\"SendingProcess is stopping, after %ld seconds.\",\n\t\t\t\t  (long)elapsedSeconds);\n}\n\n/**\n * Load the Historian specific 'transform & send data' plugin\n *\n * @param    pluginName    The plugin to load\n * @return   true if loaded, false otherwise \n */\nbool SendingProcess::loadPlugin(const string& pluginName)\n{\n\tPluginManager *manager = PluginManager::getInstance();\n\n\tif (pluginName.empty())\n\t{\n\t\tLogger::getLogger()->error(\"Unable to fetch 
north plugin \"\n\t\t\t\t\t   \"'%s' from configuration.\",\n\t\t\t\t\t   pluginName.c_str());\n                return false;\n        }\n\tLogger::getLogger()->info(\"Load north plugin '%s'.\",\n\t\t\t\t  pluginName.c_str());\n\n        PLUGIN_HANDLE handle;\n\tif ((handle = manager->loadPlugin(pluginName,\n\t\t\t\t\t  PLUGIN_TYPE_NORTH)) != NULL)\n        {\n#if VERBOSE_LOG\n\t\tLogger::getLogger()->info(\"Loaded north plugin '%s'.\",\n\t\t\t\t\t  pluginName.c_str());\n#endif\n\t\tm_plugin = new NorthPlugin(handle);\n\t\t// Check persist data option for plugin.\n\t\tif (m_plugin->persistData())\n\t\t{\n\t\t\t// Instantiate PluginData class for persistence of data\n\t\t\tm_plugin->m_plugin_data = new PluginData(this->getStorageClient());\n\t\t}\n\t\treturn true;\n\t}\n\treturn false;\n}\n\n// Stop running threads & cleanup used resources\nvoid SendingProcess::stop()\n{\n\t// End of processing loop for threads\n\tthis->stopRunning();\n\n\t// Threads execution has completed.\n\tthis->m_thread_load->join();\n        this->m_thread_send->join();\n\n\t// Remove the data buffers\n\tfor (unsigned int i = 0; i < m_memory_buffer_size; i++)\n\t{\n\t\tReadingSet* data = this->m_buffer[i];\n\t\tif (data != NULL)\n\t\t{\n\t\t\tdelete data;\n\t\t}\n\t}\n\n\t// Cleanup the plugin resources\n\tif (this->m_plugin->m_plugin_data)\n\t{\n\t\t// If plugin has SP_PERSIST_DATA option:\n\t\t// 1- call shutdownSaveData and get up-to-date plugin data.\n\t\tstring saveData = this->m_plugin->shutdownSaveData();\n\t\t// 2- store returned data: key is taskName + pluginName\n\t\tstring key(this->getName() + m_plugin_name);\n\t\tif (!this->m_plugin->m_plugin_data->persistPluginData(key, saveData, this->getName()))\n\t\t{\n\t\t\tLogger::getLogger()->error(\"Plugin %s has failed to save data [%s] for key %s and task name %s\",\n\t\t\t\t\t\t   m_plugin_name.c_str(),\n\t\t\t\t\t\t   saveData.c_str(),\n\t\t\t\t\t\t   key.c_str(),\n\t\t\t\t\t\t   
this->getName().c_str());\n\t\t}\n\t}\n\telse\n\t{\n\t\t// No data to save\n\t\tthis->m_plugin->shutdown();\n\t}\n\n\t// Cleanup filters\n\tif (filterPipeline)\n\t{\n\t\tfilterPipeline->cleanupFilters(getName());\n\t\tdelete filterPipeline;\n\t}\n\n\tLogger::getLogger()->info(\"SendingProcess successfully terminated\");\n}\n\n/**\n * Sets the position of the readings table the sending process\n * has already sent\n *\n * @param lastSentId\tId of the readings table already sent\n */\nvoid SendingProcess::updateStreamLastSentId(long lastSentId)\n{\n\n\tstring streamId = to_string(this->getStreamId());\n\n\t// Prepare WHERE id = val\n\tconst Condition conditionStream(Equals);\n\tWhere wStreamId(\"id\",\n\t                conditionStream,\n\t                streamId);\n\n\t// Prepare last_object = value\n\tInsertValues lastId;\n\tlastId.push_back(InsertValue(\"last_object\",lastSentId));\n\n\t// Perform UPDATE fledge.streams SET last_object = x WHERE id = y\n\tthis->getStorageClient()->updateTable(\"streams\",\n\t                                      lastId,\n\t                                      wStreamId);\n}\n/**\n * Update database tables statistics and streams\n * setting last_object id in streams\n */\nvoid SendingProcess::updateDatabaseCounters()\n{\n\tupdateStreamLastSentId((long)this->getLastSentId());\n\n\t// Updates 'Master' statistic\n\tstring stat_key;\n\tstring stat_description;\n\n\t// Identifies the statistics that should be updated in relation to the data source\n\tauto item = data_source_to_information.find(m_data_source_t);\n\tif (item != data_source_to_information.end())\n\t{\n\n\t\tstat_key = std::get<DATA_SOURCE_INFORMATION_STAT_KEY>(item->second);\n\t\tstat_description = std::get<DATA_SOURCE_INFORMATION_STAT_DESCR>(item->second);\n\t}\n        this->updateStatistics(stat_key, stat_description);\n\n\t// Updates 'stream' specific statistic\n\tstat_key = this->getName();\n\tstat_description = stat_key;\n\n\tthis->updateStatistics(stat_key, 
stat_description);\n}\n\n/**\n * Update database tables statistics\n * numReadings sent in statistics\n * it either updates the specific row if it is already available\n * or adds a new row\n */\nvoid SendingProcess::updateStatistics(string& stat_key, const string& stat_description)\n{\n\n\n\tif (stat_key.empty())\n\t{\n\t\tLogger::getLogger()->error(\"It is not possible to update the statistics as the data source is unknown, data source -%s-\", m_data_source_t.c_str());\n\t}\n\telse\n\t{\n\t\t// Prepare WHERE key = name\n\t\tconst Condition conditionStat(Equals);\n\t\tWhere wLastStat(\"key\",\n\t\t\t\tconditionStat,\n\t\t\t\tstat_key);\n\n\t\t// Prepare value = value + inc\n\t\tExpressionValues updateValue;\n\t\tupdateValue.push_back(Expression(\"value\",\n\t\t\t\t      \"+\",\n\t\t\t\t      (int)this->getSentReadings()));\n\n\t\t// Perform UPDATE fledge.statistics SET value = value + x WHERE key = 'name'\n\t\tint row_affected = this->getStorageClient()->updateTable(\"statistics\",\n\t\t\t\t\t\t\t\t\t updateValue,\n\t\t\t\t\t\t\t\t\t wLastStat);\n\n\t\tif (row_affected == -1){\n\t\t\t// The required row is not in the statistics table yet\n\t\t\t// this situation happens only at the initial setup\n\t\t\t// adding the required row.\n\n\t\t\tLogger::getLogger()->info(\"Adding a new row into the statistics as it is not present yet, key -%s- description -%s-\"\n\t\t\t\t,stat_key.c_str()\n\t\t\t\t,stat_description.c_str());\n\n\t\t\tInsertValues values;\n\t\t\tvalues.push_back(InsertValue(\"key\",         stat_key));\n\t\t\tvalues.push_back(InsertValue(\"description\", stat_description));\n\t\t\tvalues.push_back(InsertValue(\"value\",       (int)this->getSentReadings()));\n\t\t\tstring table = \"statistics\";\n\n\t\t\tif (getStorageClient()->insertTable(table, values) != 1) {\n\n\t\t\t\tgetLogger()->error(\"Failed to insert a new row into the %s\", table.c_str());\n\t\t\t} else {\n\t\t\t\tLogger::getLogger()->info(\"New row added into the %s, key -%s- description 
-%s-\"\n\t\t\t\t\t,table.c_str()\n\t\t\t\t\t,stat_key.c_str()\n\t\t\t\t\t,stat_description.c_str());\n\n\t                }\n\n\t\t}\n\n\t}\n}\n\n/**\n * Retrieves the name table of the data source\n *\n * @dataSource\tdatasource for which the table name should be identified\n * @return\ttable name\n */\nstring SendingProcess::retrieveTableInformationName(const char* dataSource)\n{\n\tstring tableInfo;\n\n\t// Identifies table name\n\tauto item = data_source_to_information.find(dataSource);\n\tif (item != data_source_to_information.end())\n\t{\n\n\t\ttableInfo = std::get<DATA_SOURCE_INFORMATION_TABLE_NAME>(item->second);\n\t}\n\n\treturn(tableInfo);\n}\n\n/**\n * Get last_object id sent for current stream_id\n * Access foglam.streams table.\n *\n * @return true if last_object is found, false otherwise\n */\nbool SendingProcess::getLastSentReadingId()\n{\n\t// Fetch last_object sent from fledge.streams\n\n\tbool foundId = false;\n\tconst Condition conditionId(Equals);\n\tstring streamId = to_string(this->getStreamId());\n\tWhere* wStreamId = new Where(\"id\",\n\t\t\t\t     conditionId,\n\t\t\t\t     streamId);\n\n\t// SELECT * FROM fledge.streams WHERE id = x\n\tQuery qLastId(wStreamId);\n\n\tResultSet* lastObjectId = this->getStorageClient()->queryTable(\"streams\", qLastId);\n\n\tif (lastObjectId != NULL && lastObjectId->rowCount())\n\t{\n\t\t// Get the first row only\n\t\tResultSet::RowIterator it = lastObjectId->firstRow();\n\t\t// Access the element\n\t\tResultSet::Row* row = *it;\n\t\tif (row)\n\t\t{\n\t\t\t// Get column value\n\t\t\tResultSet::ColumnValue* theVal = row->getColumn(\"last_object\");\n\t\t\t// Set found id\n\t\t\tthis->setLastSentId((unsigned long)theVal->getInteger());\n\n\t\t\tfoundId = true;\n\t\t}\n\t}\n\t// Free result set\n\tdelete lastObjectId;\n\n\treturn foundId;\n}\n\n/**\n * Creates a new stream, it adds a new row into the streams table allocating a new stream id\n *\n * @return newly created stream, 0 otherwise\n */\nint 
SendingProcess::createNewStream()\n{\n        int streamId = 0;\n\n        InsertValues streamValues;\n        streamValues.push_back(InsertValue(\"description\",    this->getName()));\n        streamValues.push_back(InsertValue(\"last_object\",    NEW_STREAM_LAST_OBJECT));\n\n        if (getStorageClient()->insertTable(\"streams\", streamValues) != 1) {\n\n                getLogger()->error(\"Failed to insert a row into the streams table\");\n\n        } else {\n\n                // Select the row just created, having description='process name'\n                const Condition conditionId(Equals);\n                string name  = getName();\n                Where* wName = new Where(\"description\", conditionId, name);\n                Query qName(wName);\n\n                ResultSet* rows = this->getStorageClient()->queryTable(\"streams\", qName);\n\n                if (rows != NULL && rows->rowCount())\n                {\n                        // Get the first row only\n                        ResultSet::RowIterator it = rows->firstRow();\n                        // Access the element\n                        ResultSet::Row* row = *it;\n                        if (row)\n                        {\n                                // Get column value\n                                ResultSet::ColumnValue* theVal = row->getColumn(\"id\");\n                                streamId = (int)theVal->getInteger();\n                        }\n                }\n\t\tdelete rows;\n        }\n\n        return streamId;\n}\n\n/**\n * Creates a new stream; it adds a new row into the streams table, allocating a specific stream id\n *\n * @return true if successfully created, false otherwise\n */\nbool SendingProcess::createStream(int streamId)\n{\n\tbool created = false;\n\n\tInsertValues streamValues;\n\tstreamValues.push_back(InsertValue(\"id\",             streamId));\n\tstreamValues.push_back(InsertValue(\"description\",    
this->getName()));\n\tstreamValues.push_back(InsertValue(\"last_object\",    NEW_STREAM_LAST_OBJECT));\n\n        if (getStorageClient()->insertTable(\"streams\", streamValues) != 1) {\n\n\t\tgetLogger()->error(\"Failed to insert a row into the streams table for the streamId :%d:\" ,streamId);\n\n\t} else {\n\t\tcreated = true;\n\n\t\t// Set initial last_object\n\t\tthis->setLastSentId((unsigned long) NEW_STREAM_LAST_OBJECT);\n\t}\n\n\treturn created;\n}\n\n/**\n * Creates config categories and sub categories recursively, along with their parent-child relations\n */\nvoid SendingProcess::createConfigCategories(DefaultConfigCategory configCategory, std::string parent_name, std::string current_name, std::string current_description)\n{\n\t// Deal with registering and fetching the configuration\n\tDefaultConfigCategory defConfig(configCategory);\n\tdefConfig.setDescription(current_description);\n\n\tDefaultConfigCategory defConfigCategoryOnly(defConfig);\n\tdefConfigCategoryOnly.keepItemsType(ConfigCategory::ItemType::CategoryType);\n\tdefConfig.removeItemsType(ConfigCategory::ItemType::CategoryType);\n\n\t// Create/Update category name (we pass keep_original_items=true)\n\tif (! 
this->getManagementClient()->addCategory(defConfig, true))\n\t{\n\t\tstring errMsg = string(\"Failure creating/updating configuration key '\").append(current_name).append(\"'\");\n\n\t\tLogger::getLogger()->fatal(errMsg);\n\t\tthrow runtime_error(errMsg);\n\t}\n\n\t// Add parent-child relationship\n\tvector<string> children;\n\tchildren.push_back(current_name);\n\tthis->getManagementClient()->addChildCategories(parent_name, children);\n\n\t// Adds sub categories to the configuration\n\tbool extracted = true;\n\tConfigCategory subCategory;\n\twhile (extracted) {\n\n\t\textracted = subCategory.extractSubcategory(defConfigCategoryOnly);\n\n\t\tif (extracted) {\n\t\t\tDefaultConfigCategory defSubCategory(subCategory);\n\n\t\t\tcreateConfigCategories(defSubCategory, current_name, subCategory.getName(), subCategory.getDescription());\n\n\t\t\t// Cleans the category\n\t\t\tsubCategory.removeItems();\n\t\t\tsubCategory = ConfigCategory() ;\n\t\t}\n\t}\n\n}\n\n/**\n * Create or Update the sending process configuration\n * by accessing Fledge rest API service\n *\n * SendingProcess + plugin DEFAULT configuration is passed to\n * configuration manager and a merged one with \"value\" and \"default\"\n * is returned.\n *\n * Return to caller the configuration items as a ConfigCategory object\n *\n * @param    defaultConfig\tSending Process default configuration\n * @param    plugin_name\tThe plugin name: if not set yet\n *\t\t\t\tpassed value is PLUGIN_UNDEFINED\n * @return   The configuration category with Sending Process defaults\n *\t     and plugin defaults\n * @throw    runtime_error\n */\nConfigCategory SendingProcess::fetchConfiguration(const std::string& defaultConfig,\n\t\t\t\t\t\t  const std::string&  plugin_name)\n{\n\t// retrieves the configuration using the value of the --name parameter\n\t// (received in the command line) as the key\n\tstring categoryName(this->getName());\n#if VERBOSE_LOG\n\tLogger::getLogger()->debug(\"%s - catName :%s:\",\n\t\t\t\t   
LOG_SERVICE_NAME.c_str(),\n\t\t\t\t   categoryName.c_str());\n#endif\n\n\tConfigCategory configuration;\n\tConfigCategory advancedConfiguration;\n\ttry {\n\t\t// Create category, with \"default\" values only \n\t\tDefaultConfigCategory category(categoryName,\n\t\t\t\t\t       defaultConfig);\n\t\tcategory.setDescription(CONFIG_CATEGORY_DESCRIPTION);\n\n\t\t// Build JSON merged configuration (sendingProcess + pluginConfig)\n\t\tif (plugin_name != PLUGIN_UNDEFINED) {\n\t\t\t// Get plugin default config via API method \"plugin_info\"\n\t\t\tconst PLUGIN_INFORMATION *info = this->m_plugin->getInfo();\n\t\t\tDefaultConfigCategory pluginInfo(categoryName,\n\t\t\t\t\t\t\t info->config);\n\n\t\t\t// Copy all pluginInfo items into current sendingProcess config\n\t\t\tcategory += pluginInfo;\n\t\t}\n\n\t\t// Create/Update hierarchical configuration categories\n\t\tcreateConfigCategories(category,\n\t\t\t\t\tPARENT_CONFIGURATION_KEY,\n\t\t\t\t\tcategoryName,\n\t\t\t\t\tCONFIG_CATEGORY_DESCRIPTION);\n\n\t\t// Create advanced configuration category\n\t\tstring advancedCatName = categoryName + string(\"Advanced\");\n\t\tDefaultConfigCategory defConfigAdvanced(advancedCatName,\n\t\t\t\t\t\t\tsendingAdvancedConfig);\n\t\t// Set/Update advanced configuration category\n\t\tthis->getManagementClient()->addCategory(defConfigAdvanced, true);\n\t\t// Set advanced configuration category as child of parent categoryName\n\t\tvector<string> children1;\n\t\tchildren1.push_back(advancedCatName);\n\t\tthis->getManagementClient()->addChildCategories(categoryName, children1);\n\n\t\t// Get the category with values and defaults\n\t\tconfiguration = this->getManagementClient()->getCategory(categoryName);\n\n\t\t// Get the advanced category with values and defaults\n\t\tadvancedConfiguration = this->getManagementClient()->getCategory(advancedCatName);\n\n\t\t/**\n\t\t * Handle the sending process parameters here:\n\t\t * fetch the Advanced configuration\n\t\t */\n\t\tstring blockSize = 
advancedConfiguration.getValue(\"blockSize\");\n\t\tstring duration = advancedConfiguration.getValue(\"duration\");\n\t\tstring sleepInterval = advancedConfiguration.getValue(\"sleepInterval\");\n\t\tstring memoryBufferSize = advancedConfiguration.getValue(\"memoryBufferSize\");\n\t\tstring minLevel = advancedConfiguration.getValue(\"logLevel\");\n\n\t\tLogger::getLogger()->setMinLevel(minLevel);\n\n                // Handles the case in which the stream_id is not defined\n\t\t// in the configuration and sets it to not defined (0)\n                string streamId = \"\";\n                try {\n                        streamId = configuration.getValue(\"streamId\");\n                } catch (std::exception* e) {\n\n                        delete e;\n                        streamId = \"0\";\n                } catch (...) {\n                        streamId = \"0\";\n                }\n\n                // sets to undefined if not defined in the configuration\n                try {\n                        m_plugin_name = configuration.getValue(\"plugin\");\n                } catch (std::exception* e) {\n\n                        delete e;\n                        m_plugin_name = PLUGIN_UNDEFINED;\n                } catch (...) 
{\n                        m_plugin_name = PLUGIN_UNDEFINED;\n                }\n\n\t\t/**\n\t\t * Set member variables\n\t\t */\n\t\tm_block_size = strtoul(blockSize.c_str(), NULL, 10);\n\t\tm_sleep = strtoul(sleepInterval.c_str(), NULL, 10);\n\t\tm_duration = strtoul(duration.c_str(), NULL, 10);\n                m_stream_id = atoi(streamId.c_str());\n\t\t// Set the data source type: readings (default) or statistics\n\t\ttry\n\t\t{\n\t\t\tm_data_source_t = configuration.getValue(\"source\");\n\t\t} catch (...)\n\t\t{\n\t\t\tm_data_source_t = \"readings\";\n\t\t}\n\n\t\t// Sets the m_memory_buffer_size = 1 in case of an invalid value\n\t\t// from the configuration like for example \"A432\"\n\t\tm_memory_buffer_size = strtoul(memoryBufferSize.c_str(), NULL, 10);\n\t\tif (m_memory_buffer_size < 1)\n\t\t{\n\t\t\tm_memory_buffer_size = 1;\n\t\t}\n\n#if VERBOSE_LOG\n\t\tLogger::getLogger()->info(\"SendingProcess configuration parameters: \"\n\t\t\t\t\t  \"pluginName=%s, source=%s, blockSize=%d, \"\n\t\t\t\t\t  \"duration=%d, sleepInterval=%d, streamId=%d\",\n\t\t\t\t\t  m_plugin_name.c_str(),\n\t\t\t\t\t  m_data_source_t.c_str(),\n\t\t\t\t\t  m_block_size,\n\t\t\t\t\t  m_duration,\n\t\t\t\t\t  m_sleep,\n                                          m_stream_id);\n#endif\n\t\t// Return configuration\n\t\treturn ConfigCategory(configuration);\n\t}\n\tcatch (std::exception* e)\n\t{\n\t\treturn ConfigCategory(configuration);\n\t}\n\tcatch (...)\n\t{\n\t\treturn ConfigCategory(configuration);\n\t}\n}\n\n/**\n * Load filter plugins for the given configuration\n *\n * @param categoryName\tThe sending process category name\n * @return \t\tTrue if filters were loaded and initialised\n *\t\t\tor there are no filters\n *\t\t\tFalse with load/init errors\n */\nbool SendingProcess::loadFilters(const string& categoryName)\n{\n\tfilterPipeline = new NorthFilterPipeline(this->getManagementClient(), *(this->getStorageClient()), getName());\n\n\t// Try to load filters:\n\tif 
(!filterPipeline->loadFilters(categoryName))\n\t{\n\t\t// return false on any error\n\t\treturn false;\n\t}\n\n\t// return true if no filters\n\tif (filterPipeline->getFilterCount() == 0)\n\t{\n\t\treturn true;\n\t}\n\t\n\t// We have some filters: set up the filter pipeline\n\treturn filterPipeline->setupFiltersPipeline((void *)passToOnwardFilter, (void *)useFilteredData, this);\n}\n\n/**\n * Use the current input readings (they have been filtered\n * by all filters)\n *\n * Note:\n * This routine must be passed to the last filter \"plugin_init\" only\n *\n * Static method\n *\n * @param outHandle\tPointer to current buffer index\n *\t\t\twhere to add the readings\n * @param readings\tFiltered readings to add to buffer[index]\n */ \t\nvoid SendingProcess::useFilteredData(OUTPUT_HANDLE *outHandle,\n\t\t\t\t     READINGSET *readings)\n{\n\t// Handle the readings set by adding it to data buffer[index]\n\tunsigned long* loadBufferIndex = (unsigned long *)outHandle;\n\tSendingProcess::getDataBuffers()->at(*loadBufferIndex) = (ReadingSet *)readings;\n}\n\n/**\n * Pass the current readings set to the next filter in the pipeline\n *\n * Note:\n * This routine must be passed to all filters \"plugin_init\" except the last one\n *\n * Static method\n *\n * @param outHandle\tPointer to next filter\n * @param readings\tCurrent readings set\n */ \t\nvoid SendingProcess::passToOnwardFilter(OUTPUT_HANDLE *outHandle,\n\t\t\t\t\tREADINGSET *readings)\n{\n\t// Get next filter in the pipeline\n\tPipelineElement *next = (PipelineElement *)outHandle;\n\t// Pass readings to next filter\n\tnext->ingest(readings);\n}\n\n/**\n * Set the current buffer load index\n *\n * @param loadBufferIndex    The buffer load index the load thread is using\n */\nvoid SendingProcess::setLoadBufferIndex(unsigned long loadBufferIndex)\n{\n\tm_load_buffer_index = loadBufferIndex;\n}\n\n/**\n * Get the current buffer load index\n *\n * @return\tThe buffer load index the load thread is using\n */\nunsigned 
long SendingProcess::getLoadBufferIndex() const\n{\n        return m_load_buffer_index;\n}\n\n/**\n * Get the current buffer load index pointer\n *\n * NOTE:\n * this routine must be called only to pass the index pointer\n * to the last filter in the pipeline for the readings set.\n *\n * @return    The pointer to the buffer load index being used by the load thread\n */\nconst unsigned long* SendingProcess::getLoadBufferIndexPtr() const\n{\n        return &m_load_buffer_index;\n}\n\n"
  },
  {
    "path": "C/tasks/north/sending_process/sending_process.cpp",
    "content": "/*\n * Fledge process class\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n#include <sending.h>\n#include <condition_variable>\n#include <reading_set.h>\n#include <plugin_manager.h>\n#include <plugin_api.h>\n#include <plugin.h>\n\n#define VERBOSE_LOG\t0\n\n/**\n * The sending process is run according to a schedule in order to send reading data\n * to the historian, e.g. the PI system.\n * It’s role is to implement the rules as to what needs to be sent and when,\n * extract the data from the storage subsystem and stream it to the north\n * for sending to the external system.\n * The sending process does not implement the protocol used to send the data,\n * that is devolved to the translation plugin in order to allow for flexibility\n * in the translation process.\n */\n\n#define TASK_FETCH_SLEEP 500\n#define TASK_SEND_SLEEP 500\n#define TASK_SLEEP_MAX_INCREMENTS 7 // from 0,5 secs to up to 32 secs\n\nusing namespace std;\nusing namespace std::chrono;\n\n// Mutex for m_buffer access\nmutex      readMutex;\n// Mutex for thread idle time\nmutex\twaitMutex;\n// Block the calling thread until notified to resume.\ncondition_variable cond_var;\n\n// Buffer max elements\nunsigned long memoryBufferSize;\n\n// Exit code:\n// 0 = success (some data sent)\n// 1 = 100% failure sending data to north server\n// 2 =internal errors\nint exitCode = 1;\n\n// Used to identifies logs\nconst string LOG_SERVICE_NAME = \"SendingProcess/sending_process\";\n\n// Load data from storage\nstatic void loadDataThread(SendingProcess *loadData);\n// Send data from historian\nstatic void sendDataThread(SendingProcess *sendData);\n\nint main(int argc, char** argv)\n{\n\ttry\n\t{\n\t\t// Instantiate SendingProcess class\n\t\tSendingProcess sendingProcess(argc, argv);\n\n\t\tif (!sendingProcess.isRunning())\n\t\t{\n\t\t\t// Dryrun execution\n\t\t\texit(0);\n\t\t}\n\n\t\tmemoryBufferSize = 
sendingProcess.getMemoryBufferSize();\n\n\t\t// Launch the load thread\n\t\tsendingProcess.m_thread_load = new thread(loadDataThread, &sendingProcess);\n\t\t// Launch the send thread\n\t\tsendingProcess.m_thread_send = new thread(sendDataThread, &sendingProcess);\n\n\t\t// Run: max execution time or caught signals can stop it\n\t\tsendingProcess.run();\n\n\t\t// Unlock load & send threads\n\t\tcond_var.notify_all();\n\n\t\t// End processing\n\t\tsendingProcess.stop();\n\t}\n\tcatch (const std::exception& e)\n\t{\n\t\tcerr << \"Exception in \" << argv[0] << \" : \" << e.what() << endl;\n\t\t// Return failure for class instance/configuration etc\n\t\texit(2);\n\t}\n\t// Catch all exceptions\n\tcatch (...)\n\t{\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tstring name = (p ? p.__cxa_exception_type()->name() : \"null\");\n\t\tcerr << \"Generic Exception in \" << argv[0] << \" : \" << name << endl;\n\t\texit(2);\n\t}\n\n\t// Return success\n\texit(exitCode);\n}\n\n/**\n * Apply load filter\n *\n * Just call the \"ingest\" method of the first one\n *\n * @param loadData    pointer to SendingProcess instance\n * @param readingSet  The current reading set loaded from storage\n */\nvoid applyFilters(SendingProcess* loadData,\n\t\t  ReadingSet* readingSet)\n{\n\t// Get first filter\n\tPipelineElement *firstFilter = loadData->filterPipeline->getFirstFilterPlugin();\n\t\n\t// Call first filter \"ingest\"\n\t// Note:\n\t// next filters will be automatically called\n\tif (firstFilter)\n\t{\n\t\tfirstFilter->ingest(readingSet);\n\t}\n}\n\n/**\n * Thread to load data from the storage layer.\n *\n * @param loadData    pointer to SendingProcess instance\n */\nstatic void loadDataThread(SendingProcess *loadData)\n{\n\tint sleep_num_increments, sleep_time;\n\tunsigned int    readIdx = 0;\n\n\tsleep_num_increments = 0;\n\tsleep_time = TASK_FETCH_SLEEP;\n\n\t// Read from the storage last Id already sent\n\tloadData->setLastFetchId(loadData->getLastSentId());\n\n\twhile 
(loadData->isRunning())\n        {\n                if (readIdx >= memoryBufferSize)\n                {\n                        readIdx = 0;\n                }\n\n\t\t/**\n\t\t * Check whether m_buffer[readIdx] is NULL or contains a ReadingSet\n\t\t *\n\t\t * Access is protected by a mutex.\n\t\t */\n                readMutex.lock();\n                ReadingSet *canLoad = loadData->m_buffer.at(readIdx);\n                readMutex.unlock();\n\n                if (canLoad)\n                {\n#if VERBOSE_LOG\n\t\t\tLogger::getLogger()->info(\"SendingProcess loadDataThread: \"\n\t\t\t\t\t\t  \"('%s' stream id %d), readIdx %u, buffer is NOT empty, waiting ...\",\n\t\t\t\t\t\t  loadData->getDataSourceType().c_str(),\n\t\t\t\t\t\t  loadData->getStreamId(),\n\t\t\t\t\t\t  readIdx);\n#endif\n\n\t                Logger::getLogger()->info(\"SendingProcess is loading data faster than the destination can process it,\"\n\t                                          \" so all %lu in-memory buffers are full; the load thread will wait until at least one buffer is freed.\",\n\t                                          loadData->getMemoryBufferSize());\n\n\t                if (loadData->isRunning()) {\n\n\t\t\t\t// Load thread is put on hold, only if the execution should proceed\n\t\t\t\tunique_lock<mutex> lock(waitMutex);\n\t\t\t\tcond_var.wait(lock);\n\t\t\t}\n                }\n                else\n                {\n                        // Load data from storage client (id >= lastId and getReadBlockSize() rows)\n\t\t\tReadingSet* readings = NULL;\n\t\t\ttry\n\t\t\t{\n\t\t\t\tstring source = loadData->getDataSourceType();\n\t\t\t\t//high_resolution_clock::time_point t1 = high_resolution_clock::now();\n\t\t\t\tif (source.compare(\"readings\") == 0)\n\t\t\t\t{\n\t\t\t\t\t// Read from storage all readings with id > last sent id\n\t\t\t\t\tunsigned long lastReadId = loadData->getLastFetchId() + 1;\n\t\t\t\t\treadings = 
loadData->getStorageClient()->readingFetch(lastReadId,\n\t\t\t\t\t\t\t      loadData->getReadBlockSize());\n\t\t\t\t}\n\t\t\t\telse if (source.compare(\"statistics\") == 0)\n\t\t\t\t{\n\t\t\t\t\t// SELECT id,\n\t\t\t\t\t//\t  key AS asset_code,\n\t\t\t\t\t//\t  ts,\n\t\t\t\t\t//\t  history_ts AS user_ts,\n\t\t\t\t\t//\t  value\n\t\t\t\t\t// FROM statistics_history\n\t\t\t\t\t// WHERE id > lastId\n\t\t\t\t\t// ORDER BY ID ASC\n\t\t\t\t\t// LIMIT blockSize\n\t\t\t\t\tconst Condition conditionId(GreaterThan);\n\t\t\t\t\t// WHERE id > lastId\n\t\t\t\t\tWhere* wId = new Where(\"id\",\n\t\t\t\t\t\t\t\tconditionId,\n\t\t\t\t\t\t\t\tto_string(loadData->getLastFetchId()));\n\t\t\t\t\tvector<Returns *> columns;\n\t\t\t\t\t// Add columns and needed aliases\n\t\t\t\t\tcolumns.push_back(new Returns(\"id\"));\n\t\t\t\t\tcolumns.push_back(new Returns(\"key\", \"asset_code\"));\n\t\t\t\t\tcolumns.push_back(new Returns(\"ts\"));\n\n\t\t\t\t\tReturns *tmpReturn = new Returns(\"history_ts\", \"user_ts\");\n\t\t\t\t\ttmpReturn->timezone(\"utc\");\n\t\t\t\t\tcolumns.push_back(tmpReturn);\n\n\t\t\t\t\tcolumns.push_back(new Returns(\"value\"));\n\t\t\t\t\t// Build the query with fields, aliases and where\n\t\t\t\t\tQuery qStatistics(columns, wId);\n\t\t\t\t\t// Set limit\n\t\t\t\t\tqStatistics.limit(loadData->getReadBlockSize());\n\t\t\t\t\t// Set sort\n\t\t\t\t\tSort* sort = new Sort(\"id\");\n\t\t\t\t\tqStatistics.sort(sort);\n\n\t\t\t\t\t// Query the statistics_history table and get a ReadingSet result\n\t\t\t\t\treadings = loadData->getStorageClient()->queryTableToReadings(\"statistics_history\",\n\t\t\t\t\t\t\t\t\t\t\t\t      qStatistics);\n\t\t\t\t}\n\t\t\t\telse if (source.compare(\"audit\") == 0)\n\t\t\t\t{\n\t\t\t\t\tconst Condition conditionId(GreaterThan);\n\t\t\t\t\t// WHERE id > lastId\n\t\t\t\t\tWhere* wId = new Where(\"id\",\n\t\t\t\t\t\t\t\tconditionId,\n\t\t\t\t\t\t\t\tto_string(loadData->getLastFetchId()));\n\t\t\t\t\tvector<Returns *> columns;\n\t\t\t\t\t// Add 
columns and needed aliases\n\t\t\t\t\tcolumns.push_back(new Returns(\"id\"));\n\t\t\t\t\tcolumns.push_back(new Returns(\"code\", \"asset_code\"));\n\t\t\t\t\tcolumns.push_back(new Returns(\"ts\"));\n\n\t\t\t\t\tReturns *tmpReturn = new Returns(\"ts\", \"user_ts\");\n\t\t\t\t\ttmpReturn->timezone(\"utc\");\n\t\t\t\t\tcolumns.push_back(tmpReturn);\n\n\t\t\t\t\tcolumns.push_back(new Returns(\"log\", \"reading\"));\n\t\t\t\t\t// Build the query with fields, aliases and where\n\t\t\t\t\tQuery qLog(columns, wId);\n\t\t\t\t\t// Set limit\n\t\t\t\t\tqLog.limit(loadData->getReadBlockSize());\n\t\t\t\t\t// Set sort\n\t\t\t\t\tSort* sort = new Sort(\"id\");\n\t\t\t\t\tqLog.sort(sort);\n\n\t\t\t\t\t// Query the log table and get a ReadingSet result\n\t\t\t\t\treadings = loadData->getStorageClient()->queryTableToReadings(\"log\",\n\t\t\t\t\t\t\t\t\t\t\t\t      qLog);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->error(\"Unsupported source '%s' for north task.\",\n\t\t                                source.c_str());\n\t\t\t\t}\n\t\t\t\t//high_resolution_clock::time_point t2 = high_resolution_clock::now();\n\t\t\t\t//auto duration = duration_cast<microseconds>( t2 - t1 ).count();\n\t\t\t}\n\t\t\tcatch (ReadingSetException* e)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"SendingProcess loadData(): ReadingSet Exception '%s'\", e->what());\n\t\t\t}\n\t\t\tcatch (std::exception& e)\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"SendingProcess loadData(): Generic Exception: '%s'\", e.what());\n\t\t\t}\n\n\t\t\t// Data fetched from storage layer\n\t\t\tif (readings != NULL && readings->getCount())\n\t\t\t{\n\t\t\t\tsleep_time = TASK_FETCH_SLEEP;\n\t\t\t\tsleep_num_increments = 0;\n\n\t\t\t\t//Update last fetched reading Id\n\t\t\t\tloadData->setLastFetchId(readings->getLastId());\n\n\t\t\t\t/**\n\t\t\t\t * Set last fetched reading Id for buffer index\n\t\t\t\t * This is used by the send thread while updating the next\n\t\t\t\t * position to read from db.\n\t\t\t\t * 
NOTE:\n\t\t\t\t * The saved position is not affected by the filters\n\t\t\t\t * called below which can skip some or all input readings.\n\t\t\t\t */\n\t\t\t\tloadData->m_last_read_id.at(readIdx) = readings->getLastId();\n\n\t\t\t\t/**\n\t\t\t\t * The buffer access is protected by a mutex\n\t\t\t\t */\n                \t        readMutex.lock();\n\n\t\t\t\t/**\n\t\t\t\t * Now set the buffer at index to the ReadingSet pointer\n\t\t\t\t * Note: the ReadingSet pointer will be deleted by\n\t\t\t\t * - the sending thread when processing it\n\t\t\t\t * OR\n\t\t\t\t * at program exit by a cleanup routine\n\t\t\t\t *\t\t\t\t\t\n\t\t\t\t * Note: the readings set can be optionally filtered\n\t\t\t\t * if plugin filters are set.\n\t\t\t\t */\n\n\t\t\t\t// Apply filters to the reading set\n\t\t\t\tif (loadData->filterPipeline)\n\t\t\t\t{\n\t\t\t\t\tPipelineElement *firstFilter = loadData->filterPipeline->getFirstFilterPlugin();\n\t\t\t\t\tif (firstFilter)\n\t\t\t\t\t{\n\n\t\t\t\t\t\t// Check whether filters are set before calling ingest\n\t\t\t\t\t\twhile (!loadData->filterPipeline->isReady())\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tLogger::getLogger()->warn(\"Load data thread called before \"\n\t\t\t\t\t\t\t\t\t\t  \"filter pipeline is ready\");\n\t\t\t\t\t\t\tstd::this_thread::sleep_for(std::chrono::milliseconds(150));\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t// Make the load readIdx available to filters\n\t\t\t\t\t\tloadData->setLoadBufferIndex(readIdx);\n\t\t\t\t\t\t// Apply filters\n\t\t\t\t\t\tapplyFilters(loadData, readings);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\t// No filters: just set buffer with current data\n\t\t\t\t\t\tloadData->m_buffer.at(readIdx) = readings;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\t// No filters: just set buffer with current data\n\t\t\t\t\tloadData->m_buffer.at(readIdx) = readings;\n\t\t\t\t}\n\n\t\t\t\treadMutex.unlock();\n\n\t\t\t\treadIdx++;\n\n\t\t\t\t// Unlock the sendData thread\n\t\t\t\tunique_lock<mutex> 
lock(waitMutex);\n\t\t\t\tcond_var.notify_one();\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\t// Free empty result set\n\t\t\t\tif (readings)\n\t\t\t\t{\n\t\t\t\t\tdelete readings;\n\t\t\t\t}\n\t\t\t\t// Error or no data read: just wait\n\t\t\t\t// TODO: add increments from 1 to TASK_SLEEP_MAX_INCREMENTS\n\n\t\t\t\tsleep_num_increments += 1;\n\t\t\t\tsleep_time *= 2;\n\t\t\t\tif (sleep_num_increments >= TASK_SLEEP_MAX_INCREMENTS)\n\t\t\t\t{\n\t\t\t\t\tsleep_time = TASK_FETCH_SLEEP;\n\t\t\t\t\tsleep_num_increments = 0;\n\t\t\t\t}\n\n\t\t\t\tthis_thread::sleep_for(chrono::milliseconds(sleep_time));\n\t\t\t}\n\t\t\t}\n        }\n\n#if VERBOSE_LOG\n\tLogger::getLogger()->info(\"SendingProcess loadData thread: Last ID '%s' read is %lu\",\n\t\t\t\t  loadData->getDataSourceType().c_str(),\n\t\t\t\t  loadData->getLastFetchId());\n#endif\n\n\t/**\n\t * The loop is over: unlock the sendData thread\n\t */\n\tunique_lock<mutex> lock(waitMutex);\n\tcond_var.notify_one();\n}\n\n/**\n * Thread to send data to historian service\n *\n * @param sendData    pointer to SendingProcess instance\n */\nstatic void sendDataThread(SendingProcess *sendData)\n{\n\tunsigned long totSent = 0;\n\tunsigned int  sendIdx = 0;\n\n\tbool slept;\n\tlong sleep_time = TASK_SEND_SLEEP;\n\tint sleep_num_increments = 0;\n\n        while (sendData->isRunning())\n        {\n\t\tslept = false;\n\n                if (sendIdx >= memoryBufferSize)\n\t\t{\n\n\t\t\tif (sendData->getUpdateDb())\n\t\t\t{\n\t\t\t\t// Update counters to Database\n\t\t\t\tsendData->updateDatabaseCounters();\n\n\t\t\t\t// Reset current sent readings\n\t\t\t\tsendData->resetSentReadings();\t\n\n\t\t\t\t// DB update done\n\t\t\t\tsendData->setUpdateDb(false);\n                        }\n\n\t\t\t// Reset send index\n\t\t\tsendIdx = 0;\n\t\t}\n\n\t\t/*\n\t\t * Check whether m_buffer[sendIdx] is NULL or contains ReadingSet data.\n\t\t * Access is protected by a mutex.\n\t\t */\n                readMutex.lock();\n                ReadingSet *canSend 
= sendData->m_buffer.at(sendIdx);\n                readMutex.unlock();\n\n                if (canSend == NULL)\n                {\n#if VERBOSE_LOG\n                        Logger::getLogger()->info(\"SendingProcess sendDataThread: \" \\\n                                                  \"('%s' stream id %d), sendIdx %u, buffer is empty, waiting ...\",\n\t\t\t\t\t\t  sendData->getDataSourceType().c_str(),\n                                                  sendData->getStreamId(),\n                                                  sendIdx);\n#endif\n\n\t\t\tif (sendData->getUpdateDb())\n\t\t\t{\n                                // Update counters to Database\n\t\t\t\tsendData->updateDatabaseCounters();\n\n\t\t\t\t// Reset current sent readings\n\t\t\t\tsendData->resetSentReadings();\t\n\n\t\t\t\t// DB update done\n\t\t\t\tsendData->setUpdateDb(false);\n\t\t\t}\n\n\t\t\tif (sendData->isRunning())\n\t\t\t{\n\t\t\t\t// Send thread is put on hold, only if the execution should proceed\n\t\t\t\tunique_lock<mutex> lock(waitMutex);\n\t\t\t\tcond_var.wait(lock);\n\t\t\t}\n                }\n                else\n                {\n\t\t\t/**\n\t\t\t * Send the buffer content ( const vector<Readings *>& )\n\t\t\t * to historian server via m_plugin->send(data).\n\t\t\t * Readings data returned by getAllReadings() will be\n\t\t\t * transformed using historian protocol and then sent to destination.\n\t\t\t */\n\n\t\t\tbool emptyReadings = sendData->m_buffer[sendIdx]->getCount() == 0;\n\t\t\tuint32_t sentReadings = 0;\n\t\t\tbool processUpdate = false;\n\n\t\t\tif (!emptyReadings)\n\t\t\t{\n\t\t\t\t// We have some readings to send\n\t\t\t\tconst vector<Reading *> &readingData = sendData->m_buffer.at(sendIdx)->getAllReadings();\n\t\t\t\tif (readingData.size() <= sendData->getReadBlockSize())\n\t\t\t\t{\n\t\t\t\t\tsentReadings = sendData->m_plugin->send(readingData);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\tLogger::getLogger()->debug(\"Breaking up incoming readings 
block\");\n\t\t\t\t\t// Filtering has made the readings too long, split into smaller\n\t\t\t\t\t// vectors for sending\n\t\t\t\t\tunsigned int bs = (unsigned int)sendData->getReadBlockSize();\n\t\t\t\t\tvector<Reading *>v;\n\t\t\t\t\tfor (unsigned int i = 0; i < readingData.size(); i++)\n\t\t\t\t\t{\n\t\t\t\t\t\tv.push_back(readingData[i]);\n\t\t\t\t\t\tif (i > 0 && (i % bs) == 0)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tsentReadings += sendData->m_plugin->send(v);\n\t\t\t\t\t\t\tv.clear();\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (v.size() > 0)\t// Flush final partial block\n\t\t\t\t\t{\n\t\t\t\t\t\tsentReadings += sendData->m_plugin->send(v);\n\t\t\t\t\t\tv.clear();\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t// Check sent readings result\n\t\t\t\tif (sentReadings)\n\t\t\t\t{\n\t\t\t\t\tprocessUpdate = true;\n\t\t\t\t\texitCode = 0;\n\n\t\t\t\t\t// Update asset tracker table/cache, if required\n\t\t\t\t\tvector<Reading *> *vec = sendData->m_buffer.at(sendIdx)->getAllReadingsPtr();\n\n\t\t\t\t\tfor (vector<Reading *>::iterator it = vec->begin(); it != vec->end(); ++it)\n\t\t\t\t\t{\n\t\t\t\t\t\tReading *reading = *it;\n\n\t\t\t\t\t\tAssetTrackingTuple tuple(sendData->getName(), sendData->getPluginName(), reading->getAssetName(), \"Egress\");\n\t\t\t\t\t\tif (!AssetTracker::getAssetTracker()->checkAssetTrackingCache(tuple))\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tAssetTracker::getAssetTracker()->addAssetTrackingTuple(tuple);\n\t\t\t\t\t\t\tLogger::getLogger()->info(\"sendDataThread:  Adding new asset tracking tuple - egress: %s\", tuple.assetToString().c_str());\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\texitCode = 0;\n\t\t\t\t// We have an empty readings set: check last id\n\t\t\t\tif (sendData->m_last_read_id.at(sendIdx) > 0)\n\t\t\t\t{\n\t\t\t\t\tprocessUpdate = true;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (processUpdate)\n\t\t\t{\n\t\t\t\texitCode = 0;\n\n\t\t\t\t/** Sending done */\n\t\t\t\tsendData->setUpdateDb(true);\n\n\t\t\t\t/**\n\t\t\t\t * 1- emptying data 
in m_buffer[sendIdx].\n\t\t\t\t * The buffer access is protected by a mutex.\n\t\t\t\t */\n\t\t\t\treadMutex.lock();\n\n\t\t\t\t// Update last sent reading Id using the last id of the unfiltered readings buffer\n\t\t\t\tsendData->setLastSentId(sendData->m_last_read_id.at(sendIdx));\n\n\t\t\t\t// Free buffer\n\t\t\t\tdelete sendData->m_buffer.at(sendIdx);\n\t\t\t\tsendData->m_buffer.at(sendIdx) = NULL;\n\t\t\t\t// Reset buffer last id\n\t\t\t\tsendData->m_last_read_id.at(sendIdx) = 0;\n\n\t\t\t\t/** 2- Update sent counter (memory only) */\n\t\t\t\tsendData->updateSentReadings(sentReadings);\n\n\t\t\t\t// numReadings sent so far\n\t\t\t\ttotSent += sentReadings;\n\n\t\t\t\treadMutex.unlock();\n\n\t\t\t\tsendIdx++;\n\n\t\t\t\t// Unlock the loadData thread\n\t\t\t\tunique_lock<mutex> lock(waitMutex);\n\t\t\t\tcond_var.notify_one();\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tLogger::getLogger()->error(\"SendingProcess sendDataThread: Error while sending \" \\\n\t\t\t\t\t\t\t   \"('%s' stream id %d), sendIdx %u, N. 
(%d readings), \" \\\n\t\t\t\t\t\t\t   \", last reading id in buffer %ld\",\n\t\t\t\t\t\t\t   sendData->getDataSourceType().c_str(),\n\t\t\t\t\t\t\t   sendData->getStreamId(),\n\t\t\t\t\t\t\t   sendIdx,\n\t\t\t\t\t\t\t   sendData->m_buffer[sendIdx]->getCount(),\n\t\t\t\t\t\t\t   sendData->m_last_read_id.at(sendIdx));\n\n\t\t\t\tif (sendData->getUpdateDb())\n\t\t\t\t{\n\t\t\t\t\t// Update counters to Database\n\t\t\t\t\tsendData->updateDatabaseCounters();\n\n\t\t\t\t\t// Reset current sent readings\n\t\t\t\t\tsendData->resetSentReadings();\t\n\n\t\t\t\t\t// DB update done\n\t\t\t\t\tsendData->setUpdateDb(false);\n\t\t\t\t}\n\n\t\t\t\t// Error: just wait & continue\n\t\t\t\tthis_thread::sleep_for(chrono::milliseconds(sleep_time));\n\t\t\t\tslept = true;\n\t\t\t}\n                }\n\n\t\t// Handle the sleep time: it is doubled each time up to a limit\n\t\tif (slept)\n\t\t{\n\t\t\tsleep_num_increments += 1;\n\t\t\tsleep_time *= 2;\n\t\t\tif (sleep_num_increments >= TASK_SLEEP_MAX_INCREMENTS)\n\t\t\t{\n\t\t\t\tsleep_time = TASK_SEND_SLEEP;\n\t\t\t\tsleep_num_increments = 0;\n\t\t\t}\n\t\t}\n\n        }\n#if VERBOSE_LOG\n\tLogger::getLogger()->info(\"SendingProcess sendData thread: sent %lu total '%s'\",\n\t\t\t\t  totSent,\n\t\t\t\t  sendData->getDataSourceType().c_str());\n#endif\n\n\tif (sendData->getUpdateDb())\n\t{\n                // Update counters to Database\n\t\tsendData->updateDatabaseCounters();\n\n                // Reset current sent readings\n\t\tsendData->resetSentReadings();\n\n                sendData->setUpdateDb(false);\n        }\n\n\t/**\n\t * The loop is over: unlock the loadData thread\n\t */\n\tunique_lock<mutex> lock(waitMutex);\n\tcond_var.notify_one();\n}\n"
  },
  {
    "path": "C/tasks/purge_system/CMakeLists.txt",
    "content": "cmake_minimum_required (VERSION 2.8.8)\nproject (purge_system)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Wextra -Wsign-conversion\")\nset(COMMON_LIB common-lib)\nset(PLUGINS_COMMON_LIB plugins-common-lib)\n\nfind_package(Threads REQUIRED)\n\nset(BOOST_COMPONENTS system thread)\n\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\ninclude_directories(.)\ninclude_directories(include)\ninclude_directories(../../thirdparty/Simple-Web-Server)\ninclude_directories(../../thirdparty/rapidjson/include)\ninclude_directories(../../common/include)\n\nfile(GLOB purge_system_src \"*.cpp\")\n\nlink_directories(${PROJECT_BINARY_DIR}/../../lib)\n\nadd_executable(${PROJECT_NAME} ${purge_system_src} ${common_src})\ntarget_link_libraries(${PROJECT_NAME} ${Boost_LIBRARIES})\ntarget_link_libraries(${PROJECT_NAME} ${CMAKE_THREAD_LIBS_INIT})\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${PLUGINS_COMMON_LIB})\n\n\ninstall(TARGETS purge_system RUNTIME DESTINATION fledge/tasks)\n\nif(MSYS) #TODO: Is MSYS true when MSVC is true?\n    target_link_libraries(purge_system ws2_32 wsock32)\n    if(OPENSSL_FOUND)\n        target_link_libraries(purge_system ws2_32 wsock32)\n    endif()\nendif()\n"
  },
  {
    "path": "C/tasks/purge_system/include/purge_system.h",
    "content": "#ifndef _PURGE_SYSTEM_H\n#define _PURGE_SYSTEM_H\n\n/*\n * Fledge Statistics History\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli\n */\n\n#include <process.h>\n\n#define TO_STRING(...) DEFER(TO_STRING_)(__VA_ARGS__)\n#define DEFER(x) x\n#define TO_STRING_(...) #__VA_ARGS__\n#define QUOTE(...) TO_STRING(__VA_ARGS__)\n\n#define LOG_NAME                     \"purge_system\"\n#define CONFIG_CATEGORY_DESCRIPTION  \"Configuration of the Purge System\"\n#define CONFIG_CATEGORY_DISPLAY_NAME \"Purge System\"\n\n\n#define UTILITIES_CATEGORY\t  \"Utilities\"\n\n\nclass PurgeSystem : public FledgeProcess\n{\n\tpublic:\n\t\tPurgeSystem(int argc, char** argv);\n\t\t~PurgeSystem();\n\n\t\tvoid     run();\n\n\tprivate:\n\t\tLogger        *m_logger;\n\t\tStorageClient *m_storage;\n\t\tunsigned long  m_retainStatsHistory;\n\t\tunsigned long  m_retainAuditLog;\n\t\tunsigned long  m_retainTaskHistory;\n\n\tprivate:\n\t\tvoid           raiseError(const char *reason, ...);\n\t\tvoid           purgeExecution();\n\t\tvoid           purgeTable(const std::string& tableName, const std::string& fieldName, unsigned long retentionDays);\n\t\tvoid           historicizeData(unsigned long retentionDays);\n\t\tResultSet     *extractData(const std::string& tableName, const std::string& fieldName, unsigned long retentionDays);\n\t\tvoid           storeData(const std::string& tableDest, ResultSet *data);\n\t\tvoid           processEnd() const;\n\t\tConfigCategory configurationHandling(const std::string& config);\n};\n\n#endif\n"
  },
  {
    "path": "C/tasks/purge_system/main.cpp",
    "content": "/*\n * Fledge statistics history task\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli\n */\n\n#include <purge_system.h>\n#include <logger.h>\n\nusing namespace std;\n\nint main(int argc, char** argv)\n{\n\n\ttry\n\t{\n\t\tPurgeSystem PurgeSystem(argc, argv);\n\n\t\tPurgeSystem.run();\n\t}\n\tcatch (const std::exception& e)\n\t{\n\t\tLogger::getLogger()->error(\"An error occurred during the execution, :%s: \", e.what());\n\t\texit(1);\n\t}\n\tcatch (...)\n\t{\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tstring name = (p ? p.__cxa_exception_type()->name() : \"null\");\n\n\t\tLogger::getLogger()->error(\"An error occurred during the execution, :%s: \", name.c_str() );\n\t\texit(1);\n\t}\n\n\t// Return success\n\texit(0);\n}\n"
  },
  {
    "path": "C/tasks/purge_system/purge_system.cpp",
    "content": "/*\n * Fledge Purge System - purge tables in the fledge database\n *\n * Copyright (c) 2021 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli\n */\n\n#include <purge_system.h>\n#include <logger.h>\n\n#include <cstdarg>     /* va_list, va_start, va_arg, va_end */\n#include <cstdlib>\n#include <thread>\n#include <csignal>\n\nusing namespace std;\n\nvolatile std::sig_atomic_t signalReceived = 0;\n\nstatic const string DEFAULT_CONFIG = QUOTE({\n\t\"retainStatsHistory\": {\n\t\t\"description\": \"The number of days for which full granularity statistics history is maintained.\",\n\t\t\"type\": \"integer\",\n\t\t\"default\": \"7\",\n\t\t\"displayName\": \"Statistics Retention\",\n\t\t\"order\": \"1\",\n\t\t\"minimum\": \"1\"\n\t},\n\t\"retainAuditLog\" : {\n\t\t\"description\": \"The number of days for which audit trail data is retained\",\n\t\t\"type\": \"integer\",\n\t\t\"default\": \"30\",\n\t\t\"displayName\": \"Audit Retention\",\n\t\t\"order\": \"2\",\n\t\t\"minimum\": \"1\"\n\t},\n\t\"retainTaskHistory\" : {\n\t\t\"description\": \"The number of days for which task history is retained\",\n\t\t\"type\": \"integer\",\n\t\t\"default\": \"30\",\n\t\t\"displayName\": \"Task Retention\",\n\t\t\"order\": \"3\",\n\t\t\"minimum\": \"1\"\n\t}\n});\n\nstatic void signalHandler(int signal)\n{\n\tsignalReceived = signal;\n}\n\n/**\n * Error handler - logs the error and raises the exception\n */\nvoid PurgeSystem::raiseError(const char *reason, ...)\n{\n\t//By default Syslog is limited to a message size of 1024 bytes\n\tchar buffer[1024];\n\n\tva_list ap;\n\tva_start(ap, reason);\n\tvsnprintf(buffer, sizeof(buffer), reason, ap);\n\tva_end(ap);\n\n\tm_logger->error(\"PurgeSystem raising error: %s\", buffer);\n\tthrow runtime_error(buffer);\n}\n\n/**\n * Constructor for Purge system\n */\nPurgeSystem::PurgeSystem(int argc, char** argv) : FledgeProcess(argc, argv)\n{\n\tstring paramName;\n\n\tparamName = 
getName();\n\n\tm_logger = Logger::getLogger();\t\t// Logger is created by FledgeProcess\n\tm_logger->info(\"PurgeSystem starting - parameters name :%s:\", paramName.c_str() );\n\n\tm_retainStatsHistory = 0;\n\tm_retainAuditLog = 0;\n\tm_retainTaskHistory = 0;\n\n\tm_storage = this->getStorageClient();\n}\n\n/**\n * Destructor\n */\nPurgeSystem::~PurgeSystem()\n{\n}\n\n/**\n * PurgeSystem run method, called by the base class\n * to start the process and do the actual work.\n */\nvoid PurgeSystem::run()\n{\n\t// We handle these signals, add more if needed\n\tstd::signal(SIGINT,  signalHandler);\n\tstd::signal(SIGSTOP, signalHandler);\n\tstd::signal(SIGTERM, signalHandler);\n\n\tConfigCategory configuration = configurationHandling(DEFAULT_CONFIG);\n\n\ttry {\n\t\tm_retainStatsHistory = strtoul(configuration.getValue(\"retainStatsHistory\").c_str(), nullptr, 10);\n\t\tm_retainAuditLog     = strtoul(configuration.getValue(\"retainAuditLog\").c_str(), nullptr, 10);\n\t\tm_retainTaskHistory  = strtoul(configuration.getValue(\"retainTaskHistory\").c_str(), nullptr, 10);\n\n\t} catch (const std::exception &e) {\n\t\traiseError (\"unable to retrieve the configuration :%s:\", e.what() );\n\t}\n\n\tm_logger->info(\"configuration retainStatsHistory :%d: retainAuditLog :%d: retainTaskHistory :%d:\"\n\t\t\t\t   ,m_retainStatsHistory\n\t\t\t\t   ,m_retainAuditLog\n\t\t\t\t   ,m_retainTaskHistory);\n\n\tif (!m_dryRun)\n\t{\n\t\tpurgeExecution();\n\t}\n\tprocessEnd();\n}\n\n/**\n * Retrieves and stores the configuration\n *\n * @param   config  Default configuration\n */\nConfigCategory PurgeSystem::configurationHandling(const std::string& config)\n{\n\t// retrieves the configuration using the value of the --name parameter\n\t// (received in the command line) as the key\n\tstring categoryName(this->getName());\n\n\tConfigCategory configuration;\n\n\tManagementClient *client = this->getManagementClient();\n\n\tm_logger->debug(\"%s - categoryName :%s:\", __FUNCTION__, 
categoryName.c_str());\n\n\t// Create category, with \"default\" values only\n\tDefaultConfigCategory defaultConfig(categoryName, config);\n\tdefaultConfig.setDescription(CONFIG_CATEGORY_DESCRIPTION);\n\tdefaultConfig.setDisplayName(CONFIG_CATEGORY_DISPLAY_NAME);\n\n\t// Create/Update category name (we pass keep_original_items=true)\n\tif (! client->addCategory(defaultConfig, true))\n\t{\n\t\traiseError (\"Failure creating/updating configuration key :%s: \", categoryName.c_str() );\n\t}\n\n\t// Purge system category as child of Utilities\n\t{\n\t\tvector<string> children;\n\t\tchildren.push_back(categoryName);\n\t\tConfigCategories categories = client->getCategories();\n\t\ttry {\n\t\t\tbool found = false;\n\t\t\tfor (unsigned int idx = 0; idx < categories.length(); idx++)\n\t\t\t{\n\t\t\t\tif (categories[idx]->getName().compare(UTILITIES_CATEGORY) == 0)\n\t\t\t\t{\n\t\t\t\t\tclient->addChildCategories(UTILITIES_CATEGORY, children);\n\t\t\t\t\tfound = true;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (!found)\n\t\t\t{\n\t\t\t\traiseError(\"adding %s as a child of %s\", categoryName.c_str(), UTILITIES_CATEGORY);\n\t\t\t}\n\t\t} catch (...) {\n\t\t\tstd::exception_ptr p = std::current_exception();\n\t\t\tstring errorInfo = (p ? 
p.__cxa_exception_type()->name() : \"null\");\n\n\t\t\traiseError(\"adding %s as a child of %s - %s\", categoryName.c_str(), UTILITIES_CATEGORY, errorInfo.c_str());\n\t\t}\n\t}\n\n\t// Get the category with values and defaults\n\tconfiguration = client->getCategory(categoryName);\n\n\treturn ConfigCategory(configuration);\n}\n\n/**\n * Execute the purge: store the information in a historicization table and delete the original rows.\n * The tables currently handled are:\n *\n *   - fledge.statistics_history\n *   - fledge.tasks\n *   - fledge.log\n */\nvoid PurgeSystem::purgeExecution()\n{\n\tstring tableName;\n\n\tm_logger->info(\"PurgeSystem running\");\n\n\ttableName = \"statistics_history\";\n\ttry {\n\t\thistoricizeData(m_retainStatsHistory);\n\t\tpurgeTable(tableName, \"history_ts\", m_retainStatsHistory);\n\n\t} catch (const std::exception &e) {\n\n\t\traiseError (\"Failure historicizing and purging table :%s: :%s:\", tableName.c_str(), e.what() );\n\t}\n\n\tpurgeTable(\"tasks\", \"start_time\", m_retainTaskHistory);\n\tpurgeTable(\"log\", \"ts\", m_retainAuditLog);\n}\n\n/**\n * Store the statistics_history information in a historicization table\n *\n * @param   retentionDays  Number of days to retain\n */\nvoid PurgeSystem::historicizeData(unsigned long retentionDays)\n{\n\tstring tableSource;\n\tstring fieldName;\n\tstring tableDest;\n\tResultSet *data;\n\n\ttableSource=\"statistics_history\";\n\tfieldName=\"history_ts\";\n\ttableDest=\"statistics_history_daily\";\n\n\tm_logger->debug(\"%s - historicizing :%s: retention days :%lu: \", __FUNCTION__, tableSource.c_str(), retentionDays);\n\n\tdata = extractData(tableSource, fieldName, retentionDays);\n\n\tif (data->rowCount()) {\n\n\t\ttry {\n\t\t\tstoreData(tableDest, data);\n\n\t\t} catch (const std::exception &e) {\n\n\t\t\t// Log and continue so that the ResultSet is always freed\n\t\t\tm_logger->error(\"Failure historicizing data into :%s: :%s:\", tableDest.c_str(), e.what());\n\t\t}\n\t}\n\n\tdelete data;\n}\n\n/**\n * Retrieve grouped information to historicize\n *\n * @param   tableName      Name of the table from which the records should be extracted\n * 
@param   fieldName      Timestamp on which the where condition should be based on\n * @param   retentionDays  Number of days to retain\n *\n * @return  Retrieved recordset\n */\nResultSet *PurgeSystem::extractData(const std::string& tableName, const std::string& fieldName, unsigned long retentionDays)\n{\n\tResultSet *data;\n\tstring conditionValue;\n\n\tdata = nullptr;\n\n\tconst Condition conditionExpr(Older);\n\tconditionValue = to_string (retentionDays * 60 * 60 * 24); // the days should be expressed in seconds\n\t//conditionValue = to_string (retentionDays);\n\n\tWhere *_where     = new Where(fieldName, conditionExpr, conditionValue);\n\tQuery _query(_where);\n\n\t// Alias handling is ignored because of the presence of the group by\n\t//\tvector<Returns *> _returns {};\n\t//\t_returns.push_back(new Returns(\"date(history_ts)\", \"date\") );\n\t//\t_returns.push_back(new Returns(\"key\") );\n\t//\t_query.returns(_returns);\n\n\tAggregate *_aggregate = new Aggregate(\"sum\", \"value\");\n\t_query.aggregate(_aggregate);\n\n\t_query.group(\"date(history_ts), key\");\n\n\ttry\n\t{\n\t\tdata = m_storage->queryTable(tableName, _query);\n\t\tif (data == nullptr)\n\t\t{\n\t\t\traiseError (\"Failure extracting data from the table :%s: \", tableName.c_str() );\n\t\t}\n\n\t} catch (const std::exception &e) {\n\n\t\traiseError (\"Failure extracting data :%s:\", e.what() );\n\t}\n\n\tm_logger->debug(\"%s - %s rows extracted :%d:\", __FUNCTION__, tableName.c_str(), data->rowCount() );\n\n\treturn (data);\n}\n\n/**\n * Store the content of the provided recordset in the given table\n *\n * @param   tableDest  Name of the table in which the recordset should be stored\n * @param   data       recordset to store on the table tableDest\n */\nvoid PurgeSystem::storeData(const std::string& tableDest, ResultSet *data)\n{\n\tlong   fieldYear;\n\tstring fieldDate;\n\tstring fieldKey;\n\tlong   fieldValue = 0;\n\n\tint affected = 0;\n\n\tbool 
retrieved;\n\n\ttry\n\t{\n\t\tm_logger->debug(\"%s - storing in :%s: rows :%d:\", __FUNCTION__, tableDest.c_str(), data->rowCount() );\n\n\t\tResultSet::RowIterator item = data->firstRow();\n\t\tdo\n\t\t{\n\t\t\tResultSet::Row* row = *item;\n\n\t\t\tif (row)\n\t\t\t{\n\t\t\t\t// SQLite and PostgreSQL plugins behave differently, it initially tries the code for SQLite and in case\n\t\t\t\t// of an error it executes the PostgreSQL one\n\t\t\t\ttry {\n\t\t\t\t\tfieldDate = row->getColumn(\"date(history_ts)\")->getString();\n\t\t\t\t\tretrieved = true;\n\n\t\t\t\t} catch (...) {\n\t\t\t\t\tretrieved = false;\n\t\t\t\t}\n\t\t\t\tif (! retrieved)\n\t\t\t\t{\n\t\t\t\t\tfieldDate = row->getColumn(\"date\")->getString();\n\t\t\t\t}\n\n\t\t\t\tfieldYear = strtol(fieldDate.substr(0, 4).c_str(), nullptr, 10);\n\t\t\t\tfieldKey = row->getColumn(\"key\")->getString();\n\n\t\t\t\t// SQLite and PostgreSQL plugins behave differently, it initially tries the code for SQLite and in case\n\t\t\t\t// of an error it executes the PostgreSQL one\n\t\t\t\ttry {\n\t\t\t\t\tfieldValue = row->getColumn(\"sum_value\")->getInteger();\n\t\t\t\t\tretrieved = true;\n\t\t\t\t} catch (...) {\n\t\t\t\t\tretrieved = false;\n\t\t\t\t}\n\t\t\t\tif (! 
retrieved)\n\t\t\t\t{\n\t\t\t\t\tfieldValue = strtol(row->getColumn(\"sum_value\")->getString(), nullptr, 10);\n\t\t\t\t}\n\n\t\t\t\tInsertValues values;\n\t\t\t\tvalues.push_back(InsertValue(\"year\", fieldYear) );\n\t\t\t\tvalues.push_back(InsertValue(\"day\", fieldDate) );\n\t\t\t\tvalues.push_back(InsertValue(\"key\", fieldKey) );\n\t\t\t\tvalues.push_back(InsertValue(\"value\", fieldValue) );\n\n\t\t\t\tm_logger->debug(\"%s - :%s: inserting :%ld: :%s: :%s: :%ld:  \", __FUNCTION__, tableDest.c_str()\n\t\t\t\t\t, fieldYear\n\t\t\t\t\t, fieldDate.c_str()\n\t\t\t\t\t, fieldKey.c_str()\n\t\t\t\t\t, fieldValue);\n\n\t\t\t\taffected = m_storage->insertTable(tableDest, values);\n\t\t\t\tif (affected == -1)\n\t\t\t\t{\n\t\t\t\t\traiseError (\"Failure inserting rows into :%s: \", tableDest.c_str() );\n\t\t\t\t}\n\t\t\t}\n\n\t\t} while (!data->isLastRow(item++));\n\n\t} catch (const std::exception &e) {\n\n\t\traiseError (\"Failure inserting rows into :%s: error :%s: \", tableDest.c_str(), e.what() );\n\t}\n}\n\n/**\n * Purge the content of the given table from the information older than a provided number of days\n *\n * @param   tableName      Name of the table to purge\n * @param   fieldName      Timestamp on which the where condition should be based on\n * @param   retentionDays  Number of days to retain\n */\nvoid PurgeSystem::purgeTable(const std::string& tableName, const std::string& fieldName, unsigned long retentionDays)\n{\n\tint affected;\n\tstring conditionValue;\n\taffected = 0;\n\n\tconst Condition conditionExpr(Older);\n\n\tconditionValue = to_string (retentionDays * 60 * 60 * 24); // the days should be expressed in seconds\n\t//conditionValue = to_string (retentionDays);\n\n\tm_logger->debug(\"%s - purging :%s: retention days :%d: conditionValue :%s:\", __FUNCTION__, tableName.c_str(), retentionDays, conditionValue.c_str() );\n\n\tWhere *_where = new Where(fieldName, conditionExpr, conditionValue);\n\tQuery _query(_where);\n\ttry\n\t{\n\t\taffected = 
m_storage->deleteTable(tableName, _query);\n\t\tif (affected == -1)\n\t\t{\n\t\t\traiseError (\"Failure purging the table :%s: \", tableName.c_str() );\n\t\t}\n\n\t} catch (const std::exception &e) {\n\n\t\traiseError (\"Failure purging the table :%s: \", tableName.c_str() );\n\t}\n\n\tm_logger->debug(\"%s - %s rows purged :%d:\", __FUNCTION__, tableName.c_str(), affected);\n}\n\n/**\n * Terminate the operation\n *\n */\nvoid PurgeSystem::processEnd() const\n{\n\tm_logger->info(\"PurgeSystem completed\");\n}\n"
  },
  {
    "path": "C/tasks/statistics_history/CMakeLists.txt",
    "content": "cmake_minimum_required (VERSION 2.8.8)\nproject (statistics_history)\n\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Wextra -Wsign-conversion\")\nset(UUIDLIB -luuid)\nset(COMMON_LIB common-lib services-common-lib)\n\ninclude_directories(. include ../../thirdparty/Simple-Web-Server ../../thirdparty/rapidjson/include  ../../common/include)\n\nfind_package(Threads REQUIRED)\n\nset(BOOST_COMPONENTS system thread)\n\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\nfile(GLOB statistics_history_src \"*.cpp\")\n\nlink_directories(${PROJECT_BINARY_DIR}/../../lib)\n\nadd_executable(statistics_history ${statistics_history_src} ${common_src})\ntarget_link_libraries(statistics_history ${Boost_LIBRARIES})\ntarget_link_libraries(statistics_history ${CMAKE_THREAD_LIBS_INIT})\ntarget_link_libraries(statistics_history ${UUIDLIB})\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\n\n\ninstall(TARGETS statistics_history RUNTIME DESTINATION fledge/tasks)\n\nif(MSYS) #TODO: Is MSYS true when MSVC is true?\n    target_link_libraries(statistics_history ws2_32 wsock32)\n    if(OPENSSL_FOUND)\n        target_link_libraries(statistics_history ws2_32 wsock32)\n    endif()\nendif()\n"
  },
  {
    "path": "C/tasks/statistics_history/include/stats_history.h",
    "content": "#ifndef _STATISTICS_HISTORY_H\n#define _STATISTICS_HISTORY_H\n\n/*\n * Fledge Statistics History\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <process.h>\n#include <vector>\n#include <string>\n#include <utility>\n\n\n/**\n * StatsHistory class\n */\nclass StatsHistory : public FledgeProcess\n{\n\tpublic:\n\t\t// Constructor:\n\t\tStatsHistory(int argc, char** argv);\n\t\t// Destructor\n\t\t~StatsHistory();\n\n\t\tvoid\t\t\trun() const;\n\n\tprivate:\n\t\tvoid processKey(const std::string& key, std::vector<InsertValues> &historyValues, \n\t\t\tstd::vector<std::pair<InsertValue *, Where *> > &updateValues, std::string dateTimeStr, int val, int prev) const;\n\t\tstd::string getTime(void) const;\n\n\n};\n\n#endif\n"
  },
  {
    "path": "C/tasks/statistics_history/main.cpp",
    "content": "/*\n * Fledge statistics history task\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <stats_history.h>\n\nusing namespace std;\n\nint main(int argc, char** argv)\n{\n\ttry\n\t{\n\t\t// Instantiate StatsHistory class\n\t\tStatsHistory statisticsHistory(argc, argv);\n\n\t\tstatisticsHistory.run();\n\n\t}\n\tcatch (const std::exception& e)\n\t{\n\t\tLogger::getLogger()->fatal(\"Exception %s starting Stats History task\", e.what());\n\t\t// Return failure for class instance/configuration etc\n\t\texit(1);\n\t}\n\t// Catch all exceptions\n\tcatch (...)\n\t{\n\t\tstd::exception_ptr p = std::current_exception();\n\t\tstring name = (p ? p.__cxa_exception_type()->name() : \"null\");\n\t\t// Pass a C string: forwarding a std::string through varargs %s is undefined behaviour\n\t\tLogger::getLogger()->fatal(\"Exception %s starting Stats History task\", name.c_str());\n\t\texit(1);\n\t}\n\n\t// Return success\n\texit(0);\n}\n"
  },
  {
    "path": "C/tasks/statistics_history/stats_history.cpp",
    "content": "/*\n * Fledge Statistics History\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n\n#include <stats_history.h>\n#include <csignal>\n#include <time.h>\n#include <sys/time.h>\n#include \"string_utils.h\"\n\n#define DATETIME_MAX_LEN 52\n#define MICROSECONDS_FORMAT_LEN\t10\n#define DATETIME_FORMAT_DEFAULT\t\"%Y-%m-%d %H:%M:%S\"\n\nusing namespace std;\n\nvolatile std::sig_atomic_t signalReceived = 0;\n\n/**\n * Handle Signals\n */\nstatic void signalHandler(int signal)\n{\n\tsignalReceived = signal;\n}\n\n/**\n * Constructor for Statistics history task\n */\nStatsHistory::StatsHistory(int argc, char** argv) : FledgeProcess(argc, argv)\n{\n\tLogger::getLogger()->info(\"StatsHistory starting\");\n}\n\n/**\n * Destructor\n */\nStatsHistory::~StatsHistory()\n{\n}\n\n/**\n * Statistics History run method, called by the base class\n * to start the process and do the actual work.\n */\nvoid StatsHistory::run() const\n{\n\n\t// We handle these signals, add more if needed\n\tstd::signal(SIGINT,  signalHandler);\n\tstd::signal(SIGSTOP, signalHandler);\n\tstd::signal(SIGTERM, signalHandler);\n\n\tif (m_dryRun)\n\t\treturn;\n\n\t// Get the set of distinct statistics keys\n\tQuery query(new Returns(\"key\"));\n\tquery.distinct();\n\tquery.returns(new Returns(\"value\"));\n\tquery.returns(new Returns(\"previous_value\"));\n\tResultSet *keySet = getStorageClient()->queryTable(\"statistics\", query);\n\n\tResultSet::RowIterator rowIter = keySet->firstRow();\n\tstd::vector<InsertValues> historyValues;\n\tvector<pair<InsertValue *, Where *>> updateValues;\n\n\tstd::string dateTimeStr = getTime();\n\n\twhile (keySet->hasNextRow(rowIter) || keySet->isLastRow(rowIter))\n\t{\n\t\tstring key = (*rowIter)->getColumn(\"key\")->getString();\n\t\tint val = (*rowIter)->getColumn(\"value\")->getInteger();\n\t\tint prev = 
(*rowIter)->getColumn(\"previous_value\")->getInteger();\n\n\t\ttry {\n\t\t\tprocessKey(key, historyValues, updateValues, dateTimeStr, val, prev);\n\t\t} catch (exception& e) {\n\t\t\t// Pass a C string: forwarding a std::string through varargs %s is undefined behaviour\n\t\t\tgetLogger()->error(\"Failed to process statistics key %s, %s\", key.c_str(), e.what());\n\t\t}\n\t\tif (!keySet->isLastRow(rowIter))\n\t\t\trowIter = keySet->nextRow(rowIter);\n\t\telse\n\t\t\tbreak;\n\t}\n\n\tint n_rows;\n\tif ((n_rows = getStorageClient()->insertTable(\"statistics_history\", historyValues)) < 1)\n\t{\n\t\tgetLogger()->error(\"Failed to insert rows into statistics history table\");\n\t}\n\n\tif (getStorageClient()->updateTable(\"statistics\", updateValues) < 1)\n\t{\n\t\tgetLogger()->error(\"Failed to update rows in statistics table\");\n\t}\n\n\tfor (auto it = updateValues.begin(); it != updateValues.end(); ++it)\n\t{\n\t\tInsertValue *updateValue = it->first;\n\t\tif (updateValue)\n\t\t{\n\t\t\tdelete updateValue;\n\t\t\tupdateValue = nullptr;\n\t\t}\n\t\tWhere *wKey = it->second;\n\t\tif (wKey)\n\t\t{\n\t\t\tdelete wKey;\n\t\t\twKey = nullptr;\n\t\t}\n\t}\n\n\tdelete keySet;\n}\n\n/**\n * Process statistics keys\n *\n * @param key\t         The statistics key to process\n * @param historyValues  Values to be inserted in statistics_history\n * @param updateValues   Values to be updated in statistics\n * @param dateTimeStr    Timestamp (UTC) with microseconds precision\n * @param val            Current statistics value\n * @param prev           Previous statistics value\n * @return void\n */\nvoid StatsHistory::processKey(const std::string& key, std::vector<InsertValues> &historyValues, std::vector<std::pair<InsertValue*, Where *> > &updateValues, std::string dateTimeStr, int val, int prev) const\n{\n\tInsertValues iValue;\n\n\t// Insert the row into the statistics history\n\t// create an object of InsertValues and push in historyValues vector\n\t// for batch insertion\n\tstring escaped_key = escape(key);\n\tiValue.push_back(InsertValue(\"key\", 
escaped_key));\n\tiValue.push_back(InsertValue(\"value\", val - prev));\n\tiValue.push_back(InsertValue(\"history_ts\", dateTimeStr));\n\n\thistoryValues.push_back(iValue);\n\n\t// Update the previous value in the statistics row\n\t// create an object of InsertValue and push in updateValues vector\n\t// for batch updates\n\tInsertValue *updateValue = new InsertValue(\"previous_value\", val);\n\tWhere *wKey = new Where(\"key\", Equals, escaped_key);\n\tupdateValues.emplace_back(updateValue, wKey);\n}\n\n/**\n * getTime() returns the current UTC time (gmtime is used) with microseconds precision\n *\n * @param  void\n * @return std::string timestamp\n */\n\nstd::string StatsHistory::getTime(void) const\n{\n\tstruct timeval tv;\n\tstruct tm* timeinfo;\n\tgettimeofday(&tv, NULL);\n\ttimeinfo = gmtime(&tv.tv_sec);\n\tchar date_time[DATETIME_MAX_LEN];\n\t// Create datetime with seconds\n\tstrftime(date_time,\n\t\t    sizeof(date_time),\n\t\t    DATETIME_FORMAT_DEFAULT,\n\t\t    timeinfo);\n\n\tstd::string dateTimeLocal = date_time;\n\tchar micro_s[MICROSECONDS_FORMAT_LEN];\n\t// Add microseconds; tv_usec is suseconds_t, cast to long to match %ld\n\tsnprintf(micro_s,\n\t\t    sizeof(micro_s),\n\t\t    \".%06ld\",\n\t\t    (long)tv.tv_usec);\n\n\tdateTimeLocal.append(micro_s);\n\treturn dateTimeLocal;\n}\n\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 3.0)\n\nproject(Simple-Web-Server)\n\noption(USE_STANDALONE_ASIO \"set ON to use standalone Asio instead of Boost.Asio\" OFF)\nif(CMAKE_SOURCE_DIR STREQUAL \"${CMAKE_CURRENT_SOURCE_DIR}\")\n    option(BUILD_TESTING \"set ON to build library tests\" ON)\nelse()\n    option(BUILD_TESTING \"set ON to build library tests\" OFF)\nendif()\noption(BUILD_FUZZING \"set ON to build library fuzzers\" OFF)\noption(USE_OPENSSL \"set OFF to build without OpenSSL\" ON)\n\nadd_library(simple-web-server INTERFACE)\n\ntarget_include_directories(simple-web-server INTERFACE ${CMAKE_CURRENT_SOURCE_DIR})\n\nfind_package(Threads REQUIRED)\ntarget_link_libraries(simple-web-server INTERFACE ${CMAKE_THREAD_LIBS_INIT})\n\n# TODO 2020 when Debian Jessie LTS ends:\n# Remove Boost system, thread, regex components; use Boost::<component> aliases; remove Boost target_include_directories\nif(USE_STANDALONE_ASIO)\n    target_compile_definitions(simple-web-server INTERFACE USE_STANDALONE_ASIO)\n    find_path(ASIO_PATH asio.hpp)\n    if(NOT ASIO_PATH)\n        message(FATAL_ERROR \"Standalone Asio not found\")\n    else()\n        target_include_directories(simple-web-server INTERFACE ${ASIO_PATH})\n    endif()\nelse()\n    find_package(Boost 1.53.0 COMPONENTS system thread REQUIRED)\n    target_link_libraries(simple-web-server INTERFACE ${Boost_LIBRARIES})\n    target_include_directories(simple-web-server INTERFACE ${Boost_INCLUDE_DIR})\n    if(CMAKE_CXX_COMPILER_ID STREQUAL \"GNU\" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        target_compile_definitions(simple-web-server INTERFACE USE_BOOST_REGEX)\n        find_package(Boost 1.53.0 COMPONENTS regex REQUIRED)\n        target_link_libraries(simple-web-server INTERFACE ${Boost_LIBRARIES})\n        target_include_directories(simple-web-server INTERFACE ${Boost_INCLUDE_DIR})\n    endif()\nendif()\nif(WIN32)\n    target_link_libraries(simple-web-server INTERFACE ws2_32 
wsock32)\nendif()\n\nif(APPLE)\n    set(OPENSSL_ROOT_DIR \"/usr/local/opt/openssl\")\nendif()\nif(USE_OPENSSL)\n    find_package(OpenSSL)\nendif()\nif(OPENSSL_FOUND)\n    target_compile_definitions(simple-web-server INTERFACE HAVE_OPENSSL)\n    target_link_libraries(simple-web-server INTERFACE ${OPENSSL_LIBRARIES})\n    target_include_directories(simple-web-server INTERFACE ${OPENSSL_INCLUDE_DIR})\nendif()\n\n# If Simple-Web-Server is not a sub-project:\nif(CMAKE_SOURCE_DIR STREQUAL \"${CMAKE_CURRENT_SOURCE_DIR}\")\n    if(NOT MSVC)\n        add_compile_options(-std=c++11 -Wall -Wextra)\n        if (CMAKE_CXX_COMPILER_ID MATCHES \"Clang\")\n            add_compile_options(-Wthread-safety)\n        endif()\n    else()\n        add_compile_options(/W1)\n    endif()\n\n    find_package(Boost 1.53.0 COMPONENTS system thread filesystem)\n    if(Boost_FOUND)\n        add_executable(http_examples http_examples.cpp)\n        target_link_libraries(http_examples simple-web-server)\n        target_link_libraries(http_examples ${Boost_LIBRARIES})\n        target_include_directories(http_examples PRIVATE ${Boost_INCLUDE_DIR})\n        if(OPENSSL_FOUND)\n            add_executable(https_examples https_examples.cpp)\n            target_link_libraries(https_examples simple-web-server)\n            target_link_libraries(https_examples ${Boost_LIBRARIES})\n            target_include_directories(https_examples PRIVATE ${Boost_INCLUDE_DIR})\n        endif()\n     endif()\n\n    install(FILES asio_compatibility.hpp server_http.hpp client_http.hpp server_https.hpp client_https.hpp crypto.hpp utility.hpp status_code.hpp mutex.hpp DESTINATION include/simple-web-server)\nendif()\n\nif(BUILD_TESTING OR BUILD_FUZZING)\n    if(BUILD_TESTING)\n        enable_testing()\n    endif()\n    add_subdirectory(tests)\nendif()\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2014-2020 Ole Christian Eidheim\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/README.md",
    "content": "# Simple-Web-Server\n\nA very simple, fast, multithreaded, platform independent HTTP and HTTPS server and client library implemented using C++11 and Asio (both Boost.Asio and standalone Asio can be used). Created to be an easy way to make REST resources available from C++ applications. \n\nSee https://gitlab.com/eidheim/Simple-WebSocket-Server for an easy way to make WebSocket/WebSocket Secure endpoints in C++. Also, feel free to check out the new C++ IDE supporting C++11/14/17: https://gitlab.com/cppit/jucipp. \n\n## Features\n\n* Asynchronous request handling\n* Thread pool if needed\n* Platform independent\n* HTTP/1.1 supported, including persistent connections\n* HTTPS supported\n* Chunked transfer encoding and server-sent events\n* Can set timeouts for request/response and content\n* Can set max request/response size\n* Sending outgoing messages is thread safe\n* Client creates necessary connections and perform reconnects when needed\n\nSee also [benchmarks](https://gitlab.com/eidheim/Simple-Web-Server/blob/master/docs/benchmarks.md) for a performance comparisons to a few other HTTP libraries.\n\n## Usage\n\nSee [http_examples.cpp](https://gitlab.com/eidheim/Simple-Web-Server/blob/master/http_examples.cpp) or\n[https_examples.cpp](https://gitlab.com/eidheim/Simple-Web-Server/blob/master/https_examples.cpp) for example usage.\nThe following server resources are setup using regular expressions to match request paths:\n* `POST /string` - responds with the posted string.\n* `POST /json` - parses the request content as JSON, and responds with some of the parsed values.\n* `GET /info` - responds with information extracted from the request.\n* `GET /match/([0-9]+)` - matches for instance `/match/123` and responds with the matched number `123`.\n* `GET /work` - starts a thread, simulating heavy work, and responds when the work is done.\n* `GET` - a special default_resource handler is called when a request path does not match any of the above 
resources.\nThis resource responds with the content of files in the `web/`-folder if the request path identifies one of these files.\n\n[Documentation](https://eidheim.gitlab.io/Simple-Web-Server/annotated.html) is also available, generated from the master branch.\n\n## Dependencies\n\n* Boost.Asio or standalone Asio\n* Boost is required to compile the examples\n* For HTTPS: OpenSSL libraries\n\nInstallation instructions for the dependencies needed to compile the examples on a selection of platforms can be seen below.\nDefault build with Boost.Asio is assumed. Turn on CMake option `USE_STANDALONE_ASIO` to instead use standalone Asio.\n\n### Debian based distributions\n\n```sh\nsudo apt-get install libssl-dev libboost-filesystem-dev libboost-thread-dev\n```\n\n### Arch Linux based distributions\n\n```sh\nsudo pacman -S boost\n```\n\n### MacOS\n\n```sh\nbrew install openssl boost\n```\n\n## Compile and run\n\nCompile with a C++11 compliant compiler:\n```sh\ncmake -H. -Bbuild\ncmake --build build\n```\n\n### HTTP\n\nRun the server and client examples: `./build/http_examples`\n\nDirect your favorite browser to for instance http://localhost:8080/\n\n### HTTPS\n\nBefore running the server, an RSA private key (server.key) and an SSL certificate (server.crt) must be created.\n\nRun the server and client examples: `./build/https_examples`\n\nDirect your favorite browser to for instance https://localhost:8080/\n\n## Contributing\n\nContributions are welcome, either by creating an issue or a merge request.\nHowever, before you create a new issue or merge request, please search for previous similar issues or requests.\nA response will normally be given within a few days.\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/asio_compatibility.hpp",
    "content": "#ifndef SIMPLE_WEB_ASIO_COMPATIBILITY_HPP\n#define SIMPLE_WEB_ASIO_COMPATIBILITY_HPP\n\n#include <memory>\n\n#ifdef USE_STANDALONE_ASIO\n#include <asio.hpp>\n#include <asio/steady_timer.hpp>\nnamespace SimpleWeb {\n  namespace error = asio::error;\n  using error_code = std::error_code;\n  using errc = std::errc;\n  using system_error = std::system_error;\n  namespace make_error_code = std;\n} // namespace SimpleWeb\n#else\n#include <boost/asio.hpp>\n#include <boost/asio/steady_timer.hpp>\nnamespace SimpleWeb {\n  namespace asio = boost::asio;\n  namespace error = asio::error;\n  using error_code = boost::system::error_code;\n  namespace errc = boost::system::errc;\n  using system_error = boost::system::system_error;\n  namespace make_error_code = boost::system::errc;\n} // namespace SimpleWeb\n#endif\n\nnamespace SimpleWeb {\n#if(USE_STANDALONE_ASIO && ASIO_VERSION >= 101300) || BOOST_ASIO_VERSION >= 101300\n  using io_context = asio::io_context;\n  using resolver_results = asio::ip::tcp::resolver::results_type;\n  using async_connect_endpoint = asio::ip::tcp::endpoint;\n\n  template <typename handler_type>\n  inline void post(io_context &context, handler_type &&handler) {\n    asio::post(context, std::forward<handler_type>(handler));\n  }\n  inline void restart(io_context &context) noexcept {\n    context.restart();\n  }\n  inline asio::ip::address make_address(const std::string &str) noexcept {\n    return asio::ip::make_address(str);\n  }\n  template <typename socket_type, typename duration_type>\n  std::unique_ptr<asio::steady_timer> make_steady_timer(socket_type &socket, std::chrono::duration<duration_type> duration) {\n    return std::unique_ptr<asio::steady_timer>(new asio::steady_timer(socket.get_executor(), duration));\n  }\n  template <typename handler_type>\n  void async_resolve(asio::ip::tcp::resolver &resolver, const std::pair<std::string, std::string> &host_port, handler_type &&handler) {\n    resolver.async_resolve(host_port.first, 
host_port.second, std::forward<handler_type>(handler));\n  }\n  inline asio::executor_work_guard<io_context::executor_type> make_work_guard(io_context &context) {\n    return asio::make_work_guard(context);\n  }\n#else\n  using io_context = asio::io_service;\n  using resolver_results = asio::ip::tcp::resolver::iterator;\n  using async_connect_endpoint = asio::ip::tcp::resolver::iterator;\n\n  template <typename handler_type>\n  inline void post(io_context &context, handler_type &&handler) {\n    context.post(std::forward<handler_type>(handler));\n  }\n  inline void restart(io_context &context) noexcept {\n    context.reset();\n  }\n  inline asio::ip::address make_address(const std::string &str) noexcept {\n    return asio::ip::address::from_string(str);\n  }\n  template <typename socket_type, typename duration_type>\n  std::unique_ptr<asio::steady_timer> make_steady_timer(socket_type &socket, std::chrono::duration<duration_type> duration) {\n    return std::unique_ptr<asio::steady_timer>(new asio::steady_timer(socket.get_io_service(), duration));\n  }\n  template <typename handler_type>\n  void async_resolve(asio::ip::tcp::resolver &resolver, const std::pair<std::string, std::string> &host_port, handler_type &&handler) {\n    resolver.async_resolve(asio::ip::tcp::resolver::query(host_port.first, host_port.second), std::forward<handler_type>(handler));\n  }\n  inline io_context::work make_work_guard(io_context &context) {\n    return io_context::work(context);\n  }\n#endif\n} // namespace SimpleWeb\n\n#endif /* SIMPLE_WEB_ASIO_COMPATIBILITY_HPP */\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/client_http.hpp",
    "content": "#ifndef SIMPLE_WEB_CLIENT_HTTP_HPP\n#define SIMPLE_WEB_CLIENT_HTTP_HPP\n\n#include \"asio_compatibility.hpp\"\n#include \"mutex.hpp\"\n#include \"utility.hpp\"\n#include <future>\n#include <limits>\n#include <random>\n#include <unordered_set>\n#include <vector>\n\nnamespace SimpleWeb {\n  class HeaderEndMatch {\n    int crlfcrlf = 0;\n    int lflf = 0;\n\n  public:\n    /// Match condition for asio::read_until to match both standard and non-standard HTTP header endings.\n    std::pair<asio::buffers_iterator<asio::const_buffers_1>, bool> operator()(asio::buffers_iterator<asio::const_buffers_1> begin, asio::buffers_iterator<asio::const_buffers_1> end) {\n      auto it = begin;\n      for(; it != end; ++it) {\n        if(*it == '\\n') {\n          if(crlfcrlf == 1)\n            ++crlfcrlf;\n          else if(crlfcrlf == 2)\n            crlfcrlf = 0;\n          else if(crlfcrlf == 3)\n            return {++it, true};\n          if(lflf == 0)\n            ++lflf;\n          else if(lflf == 1)\n            return {++it, true};\n        }\n        else if(*it == '\\r') {\n          if(crlfcrlf == 0)\n            ++crlfcrlf;\n          else if(crlfcrlf == 2)\n            ++crlfcrlf;\n          else\n            crlfcrlf = 0;\n          lflf = 0;\n        }\n        else {\n          crlfcrlf = 0;\n          lflf = 0;\n        }\n      }\n      return {it, false};\n    }\n  };\n} // namespace SimpleWeb\n#ifndef USE_STANDALONE_ASIO\nnamespace boost {\n#endif\n  namespace asio {\n    template <>\n    struct is_match_condition<SimpleWeb::HeaderEndMatch> : public std::true_type {};\n  } // namespace asio\n#ifndef USE_STANDALONE_ASIO\n} // namespace boost\n#endif\n\nnamespace SimpleWeb {\n  template <class socket_type>\n  class Client;\n\n  template <class socket_type>\n  class ClientBase {\n  public:\n    class Content : public std::istream {\n      friend class ClientBase<socket_type>;\n\n    public:\n      std::size_t size() noexcept {\n        return 
streambuf.size();\n      }\n      /// Convenience function to return content as a string.\n      std::string string() noexcept {\n        return std::string(asio::buffers_begin(streambuf.data()), asio::buffers_end(streambuf.data()));\n      }\n\n      /// When true, this is the last response content part from server for the current request.\n      bool end = true;\n\n    private:\n      asio::streambuf &streambuf;\n      Content(asio::streambuf &streambuf) noexcept : std::istream(&streambuf), streambuf(streambuf) {}\n    };\n\n  protected:\n    class Connection;\n\n  public:\n    class Response {\n      friend class ClientBase<socket_type>;\n      friend class Client<socket_type>;\n\n      class Shared {\n      public:\n        std::string http_version, status_code;\n\n        CaseInsensitiveMultimap header;\n      };\n\n      asio::streambuf streambuf;\n\n      std::shared_ptr<Shared> shared;\n\n      std::weak_ptr<Connection> connection_weak;\n\n      Response(std::size_t max_response_streambuf_size, const std::shared_ptr<Connection> &connection_) noexcept\n          : streambuf(max_response_streambuf_size), shared(new Shared()), connection_weak(connection_), http_version(shared->http_version), status_code(shared->status_code), header(shared->header), content(streambuf) {}\n\n      /// Constructs a response object that has empty content, but otherwise is equal to the response parameter\n      Response(const Response &response) noexcept\n          : streambuf(response.streambuf.max_size()), shared(response.shared), connection_weak(response.connection_weak), http_version(shared->http_version), status_code(shared->status_code), header(shared->header), content(streambuf) {}\n\n    public:\n      std::string &http_version, &status_code;\n\n      CaseInsensitiveMultimap &header;\n\n      Content content;\n\n      /// Closes the connection to the server, preventing further response content parts from server.\n      void close() noexcept {\n        if(auto connection = 
this->connection_weak.lock())\n          connection->close();\n      }\n    };\n\n    class Config {\n      friend class ClientBase<socket_type>;\n\n    private:\n      Config() noexcept {}\n\n    public:\n      /// Set timeout on requests in seconds. Default value: 0 (no timeout).\n      long timeout = 0;\n      /// Set connect timeout in seconds. Default value: 0 (Config::timeout is then used instead).\n      long timeout_connect = 0;\n      /// Maximum size of response stream buffer. Defaults to architecture maximum.\n      /// Reaching this limit will result in a message_size error code.\n      std::size_t max_response_streambuf_size = (std::numeric_limits<std::size_t>::max)();\n      /// Set proxy server (server:port)\n      std::string proxy_server;\n    };\n\n  protected:\n    class Connection : public std::enable_shared_from_this<Connection> {\n    public:\n      template <typename... Args>\n      Connection(std::shared_ptr<ScopeRunner> handler_runner_, Args &&... args) noexcept\n          : handler_runner(std::move(handler_runner_)), socket(new socket_type(std::forward<Args>(args)...)) {}\n\n      std::shared_ptr<ScopeRunner> handler_runner;\n\n      std::unique_ptr<socket_type> socket; // Socket must be unique_ptr since asio::ssl::stream<asio::ip::tcp::socket> is not movable\n      bool in_use = false;\n      bool attempt_reconnect = true;\n\n      std::unique_ptr<asio::steady_timer> timer;\n\n      void close() noexcept {\n        error_code ec;\n        socket->lowest_layer().shutdown(asio::ip::tcp::socket::shutdown_both, ec);\n        socket->lowest_layer().cancel(ec);\n      }\n\n      void set_timeout(long seconds) noexcept {\n        if(seconds == 0) {\n          timer = nullptr;\n          return;\n        }\n        timer = make_steady_timer(*socket, std::chrono::seconds(seconds));\n        std::weak_ptr<Connection> self_weak(this->shared_from_this()); // To avoid keeping Connection instance alive longer than needed\n        
timer->async_wait([self_weak](const error_code &ec) {\n          if(!ec) {\n            if(auto self = self_weak.lock())\n              self->close();\n          }\n        });\n      }\n\n      void cancel_timeout() noexcept {\n        if(timer) {\n          try {\n            timer->cancel();\n          }\n          catch(...) {\n          }\n        }\n      }\n    };\n\n    class Session {\n    public:\n      Session(std::size_t max_response_streambuf_size, std::shared_ptr<Connection> connection_, std::unique_ptr<asio::streambuf> request_streambuf_) noexcept\n          : connection(std::move(connection_)), request_streambuf(std::move(request_streambuf_)), response(new Response(max_response_streambuf_size, connection)) {}\n\n      std::shared_ptr<Connection> connection;\n      std::unique_ptr<asio::streambuf> request_streambuf;\n      std::shared_ptr<Response> response;\n      std::function<void(const error_code &)> callback;\n    };\n\n  public:\n    /// Set before calling a request function.\n    Config config;\n\n    /// If you want to reuse an already created asio::io_service, store its pointer here before calling a request function.\n    /// Do not set when using synchronous request functions.\n    std::shared_ptr<io_context> io_service;\n\n    /// Convenience function to perform synchronous request. The io_service is started in this function.\n    /// Should not be combined with asynchronous request functions.\n    /// If you reuse the io_service for other tasks, use the asynchronous request functions instead.\n    /// When requesting Server-Sent Events: will throw on error::eof, please use asynchronous request functions instead.\n    std::shared_ptr<Response> request(const std::string &method, const std::string &path = {\"/\"}, string_view content = {}, const CaseInsensitiveMultimap &header = CaseInsensitiveMultimap()) {\n      return sync_request(method, path, content, header);\n    }\n\n    /// Convenience function to perform synchronous request. 
The io_service is started in this function.\n    /// Should not be combined with asynchronous request functions.\n    /// If you reuse the io_service for other tasks, use the asynchronous request functions instead.\n    /// When requesting Server-Sent Events: will throw on error::eof, please use asynchronous request functions instead.\n    std::shared_ptr<Response> request(const std::string &method, const std::string &path, std::istream &content, const CaseInsensitiveMultimap &header = CaseInsensitiveMultimap()) {\n      return sync_request(method, path, content, header);\n    }\n\n    /// Asynchronous request where running Client's io_service is required.\n    /// Do not use concurrently with the synchronous request functions.\n    /// When requesting Server-Sent Events: request_callback might be called more than twice, first call with empty contents on open, and with ec = error::eof on last call\n    void request(const std::string &method, const std::string &path, string_view content, const CaseInsensitiveMultimap &header,\n                 std::function<void(std::shared_ptr<Response>, const error_code &)> &&request_callback_) {\n      auto session = std::make_shared<Session>(config.max_response_streambuf_size, get_connection(), create_request_header(method, path, header));\n      std::weak_ptr<Session> session_weak(session); // To avoid keeping session alive longer than needed\n      auto request_callback = std::make_shared<std::function<void(std::shared_ptr<Response>, const error_code &)>>(std::move(request_callback_));\n      session->callback = [this, session_weak, request_callback](const error_code &ec) {\n        if(auto session = session_weak.lock()) {\n          if(session->response->content.end) {\n            session->connection->cancel_timeout();\n            session->connection->in_use = false;\n          }\n          {\n            LockGuard lock(this->connections_mutex);\n\n            // Remove unused connections, but keep one open for HTTP 
persistent connection:\n            std::size_t unused_connections = 0;\n            for(auto it = this->connections.begin(); it != this->connections.end();) {\n              if(ec && session->connection == *it)\n                it = this->connections.erase(it);\n              else if((*it)->in_use)\n                ++it;\n              else {\n                ++unused_connections;\n                if(unused_connections > 1)\n                  it = this->connections.erase(it);\n                else\n                  ++it;\n              }\n            }\n          }\n\n          if(*request_callback)\n            (*request_callback)(session->response, ec);\n        }\n      };\n\n      std::ostream write_stream(session->request_streambuf.get());\n      if(content.size() > 0) {\n        auto header_it = header.find(\"Content-Length\");\n        if(header_it == header.end()) {\n          header_it = header.find(\"Transfer-Encoding\");\n          if(header_it == header.end() || header_it->second != \"chunked\")\n            write_stream << \"Content-Length: \" << content.size() << \"\\r\\n\";\n        }\n      }\n      write_stream << \"\\r\\n\";\n      write_stream.write(content.data(), static_cast<std::streamsize>(content.size()));\n\n      connect(session);\n    }\n\n    /// Asynchronous request where running Client's io_service is required.\n    /// Do not use concurrently with the synchronous request functions.\n    /// When requesting Server-Sent Events: request_callback might be called more than twice, first call with empty contents on open, and with ec = error::eof on last call\n    void request(const std::string &method, const std::string &path, string_view content,\n                 std::function<void(std::shared_ptr<Response>, const error_code &)> &&request_callback_) {\n      request(method, path, content, CaseInsensitiveMultimap(), std::move(request_callback_));\n    }\n\n    /// Asynchronous request where running Client's io_service is required.\n    
/// Do not use concurrently with the synchronous request functions.\n    /// When requesting Server-Sent Events: request_callback might be called more than twice, first call with empty contents on open, and with ec = error::eof on last call\n    void request(const std::string &method, const std::string &path,\n                 std::function<void(std::shared_ptr<Response>, const error_code &)> &&request_callback_) {\n      request(method, path, std::string(), CaseInsensitiveMultimap(), std::move(request_callback_));\n    }\n\n    /// Asynchronous request where running Client's io_service is required.\n    /// Do not use concurrently with the synchronous request functions.\n    /// When requesting Server-Sent Events: request_callback might be called more than twice, first call with empty contents on open, and with ec = error::eof on last call\n    void request(const std::string &method, std::function<void(std::shared_ptr<Response>, const error_code &)> &&request_callback_) {\n      request(method, std::string(\"/\"), std::string(), CaseInsensitiveMultimap(), std::move(request_callback_));\n    }\n\n    /// Asynchronous request where running Client's io_service is required.\n    /// Do not use concurrently with the synchronous request functions.\n    /// When requesting Server-Sent Events: request_callback might be called more than twice, first call with empty contents on open, and with ec = error::eof on last call\n    void request(const std::string &method, const std::string &path, std::istream &content, const CaseInsensitiveMultimap &header,\n                 std::function<void(std::shared_ptr<Response>, const error_code &)> &&request_callback_) {\n      auto session = std::make_shared<Session>(config.max_response_streambuf_size, get_connection(), create_request_header(method, path, header));\n      std::weak_ptr<Session> session_weak(session); // To avoid keeping session alive longer than needed\n      auto request_callback = 
std::make_shared<std::function<void(std::shared_ptr<Response>, const error_code &)>>(std::move(request_callback_));\n      session->callback = [this, session_weak, request_callback](const error_code &ec) {\n        if(auto session = session_weak.lock()) {\n          if(session->response->content.end) {\n            session->connection->cancel_timeout();\n            session->connection->in_use = false;\n          }\n          {\n            LockGuard lock(this->connections_mutex);\n\n            // Remove unused connections, but keep one open for HTTP persistent connection:\n            std::size_t unused_connections = 0;\n            for(auto it = this->connections.begin(); it != this->connections.end();) {\n              if(ec && session->connection == *it)\n                it = this->connections.erase(it);\n              else if((*it)->in_use)\n                ++it;\n              else {\n                ++unused_connections;\n                if(unused_connections > 1)\n                  it = this->connections.erase(it);\n                else\n                  ++it;\n              }\n            }\n          }\n\n          if(*request_callback)\n            (*request_callback)(session->response, ec);\n        }\n      };\n\n      content.seekg(0, std::ios::end);\n      auto content_length = content.tellg();\n      content.seekg(0, std::ios::beg);\n      std::ostream write_stream(session->request_streambuf.get());\n      if(content_length > 0) {\n        auto header_it = header.find(\"Content-Length\");\n        if(header_it == header.end()) {\n          header_it = header.find(\"Transfer-Encoding\");\n          if(header_it == header.end() || header_it->second != \"chunked\")\n            write_stream << \"Content-Length: \" << content_length << \"\\r\\n\";\n        }\n      }\n      write_stream << \"\\r\\n\";\n      if(content_length > 0)\n        write_stream << content.rdbuf();\n\n      connect(session);\n    }\n\n    /// Asynchronous request where running 
Client's io_service is required.\n    /// Do not use concurrently with the synchronous request functions.\n    /// When requesting Server-Sent Events: request_callback might be called more than twice, first call with empty contents on open, and with ec = error::eof on last call\n    void request(const std::string &method, const std::string &path, std::istream &content,\n                 std::function<void(std::shared_ptr<Response>, const error_code &)> &&request_callback_) {\n      request(method, path, content, CaseInsensitiveMultimap(), std::move(request_callback_));\n    }\n\n    /// Close connections.\n    void stop() noexcept {\n      LockGuard lock(connections_mutex);\n      for(auto it = connections.begin(); it != connections.end();) {\n        (*it)->close();\n        it = connections.erase(it);\n      }\n    }\n\n    virtual ~ClientBase() noexcept {\n      handler_runner->stop();\n      stop();\n      if(internal_io_service)\n        io_service->stop();\n    }\n\n  protected:\n    bool internal_io_service = false;\n\n    std::string host;\n    unsigned short port;\n    unsigned short default_port;\n\n    std::unique_ptr<std::pair<std::string, std::string>> host_port;\n\n    Mutex connections_mutex;\n    std::unordered_set<std::shared_ptr<Connection>> connections GUARDED_BY(connections_mutex);\n\n    std::shared_ptr<ScopeRunner> handler_runner;\n\n    Mutex synchronous_request_mutex;\n    bool synchronous_request_called GUARDED_BY(synchronous_request_mutex) = false;\n\n    ClientBase(const std::string &host_port, unsigned short default_port) noexcept : default_port(default_port), handler_runner(new ScopeRunner()) {\n      auto parsed_host_port = parse_host_port(host_port, default_port);\n      host = parsed_host_port.first;\n      port = parsed_host_port.second;\n    }\n\n    template <typename ContentType>\n    std::shared_ptr<Response> sync_request(const std::string &method, const std::string &path, ContentType &content, const CaseInsensitiveMultimap 
&header) {\n      {\n        LockGuard lock(synchronous_request_mutex);\n        if(!synchronous_request_called) {\n          if(io_service) // Throw if io_service already set\n            throw make_error_code::make_error_code(errc::operation_not_permitted);\n          io_service = std::make_shared<io_context>();\n          internal_io_service = true;\n          auto io_service_ = io_service;\n          std::thread thread([io_service_] {\n            auto work = make_work_guard(*io_service_);\n            io_service_->run();\n          });\n          thread.detach();\n          synchronous_request_called = true;\n        }\n      }\n\n      std::shared_ptr<Response> response;\n      std::promise<std::shared_ptr<Response>> response_promise;\n      auto stop_future_handlers = std::make_shared<bool>(false);\n      request(method, path, content, header, [&response, &response_promise, stop_future_handlers](std::shared_ptr<Response> response_, error_code ec) {\n        if(*stop_future_handlers)\n          return;\n\n        if(!response)\n          response = response_;\n        else if(!ec) {\n          if(response_->streambuf.size() + response->streambuf.size() > response->streambuf.max_size()) {\n            ec = make_error_code::make_error_code(errc::message_size);\n            response->close();\n          }\n          else {\n            // Move partial response_ content to response:\n            auto &source = response_->streambuf;\n            auto &target = response->streambuf;\n            target.commit(asio::buffer_copy(target.prepare(source.size()), source.data()));\n            source.consume(source.size());\n          }\n        }\n\n        if(ec) {\n          response_promise.set_exception(std::make_exception_ptr(system_error(ec)));\n          *stop_future_handlers = true;\n        }\n        else if(response_->content.end)\n          response_promise.set_value(response);\n      });\n\n      return response_promise.get_future().get();\n    }\n\n    
std::shared_ptr<Connection> get_connection() noexcept {\n      std::shared_ptr<Connection> connection;\n      LockGuard lock(connections_mutex);\n\n      if(!io_service) {\n        io_service = std::make_shared<io_context>();\n        internal_io_service = true;\n      }\n\n      for(auto it = connections.begin(); it != connections.end(); ++it) {\n        if(!(*it)->in_use) {\n          connection = *it;\n          break;\n        }\n      }\n      if(!connection) {\n        connection = create_connection();\n        connections.emplace(connection);\n      }\n      connection->attempt_reconnect = true;\n      connection->in_use = true;\n\n      if(!host_port) {\n        if(config.proxy_server.empty())\n          host_port = std::unique_ptr<std::pair<std::string, std::string>>(new std::pair<std::string, std::string>(host, std::to_string(port)));\n        else {\n          auto proxy_host_port = parse_host_port(config.proxy_server, 8080);\n          host_port = std::unique_ptr<std::pair<std::string, std::string>>(new std::pair<std::string, std::string>(proxy_host_port.first, std::to_string(proxy_host_port.second)));\n        }\n      }\n\n      return connection;\n    }\n\n    std::pair<std::string, unsigned short> parse_host_port(const std::string &host_port, unsigned short default_port) const noexcept {\n      std::pair<std::string, unsigned short> parsed_host_port;\n      std::size_t host_end = host_port.find(':');\n      if(host_end == std::string::npos) {\n        parsed_host_port.first = host_port;\n        parsed_host_port.second = default_port;\n      }\n      else {\n        parsed_host_port.first = host_port.substr(0, host_end);\n        try {\n          parsed_host_port.second = static_cast<unsigned short>(std::stoul(host_port.substr(host_end + 1)));\n        }\n        catch(...) 
{\n          parsed_host_port.second = default_port;\n        }\n      }\n      return parsed_host_port;\n    }\n\n    virtual std::shared_ptr<Connection> create_connection() noexcept = 0;\n    virtual void connect(const std::shared_ptr<Session> &) = 0;\n\n    std::unique_ptr<asio::streambuf> create_request_header(const std::string &method, const std::string &path, const CaseInsensitiveMultimap &header) const {\n      auto corrected_path = path;\n      if(corrected_path == \"\")\n        corrected_path = \"/\";\n      if(!config.proxy_server.empty() && std::is_same<socket_type, asio::ip::tcp::socket>::value)\n        corrected_path = \"http://\" + host + ':' + std::to_string(port) + corrected_path;\n\n      std::unique_ptr<asio::streambuf> streambuf(new asio::streambuf());\n      std::ostream write_stream(streambuf.get());\n      write_stream << method << \" \" << corrected_path << \" HTTP/1.1\\r\\n\";\n      write_stream << \"Host: \" << host;\n      if(port != default_port)\n        write_stream << ':' << std::to_string(port);\n      write_stream << \"\\r\\n\";\n      for(auto &h : header)\n        write_stream << h.first << \": \" << h.second << \"\\r\\n\";\n      return streambuf;\n    }\n\n    void write(const std::shared_ptr<Session> &session) {\n      session->connection->set_timeout(config.timeout);\n      asio::async_write(*session->connection->socket, session->request_streambuf->data(), [this, session](const error_code &ec, std::size_t /*bytes_transferred*/) {\n        auto lock = session->connection->handler_runner->continue_lock();\n        if(!lock)\n          return;\n        if(!ec)\n          this->read(session);\n        else {\n          if(session->connection->attempt_reconnect && ec != error::operation_aborted)\n            reconnect(session, ec);\n          else\n            session->callback(ec);\n        }\n      });\n    }\n\n    void read(const std::shared_ptr<Session> &session) {\n      asio::async_read_until(*session->connection->socket, 
session->response->streambuf, HeaderEndMatch(), [this, session](const error_code &ec, std::size_t bytes_transferred) {\n        auto lock = session->connection->handler_runner->continue_lock();\n        if(!lock)\n          return;\n\n        if(!ec) {\n          session->connection->attempt_reconnect = true;\n          std::size_t num_additional_bytes = session->response->streambuf.size() - bytes_transferred;\n\n          if(!ResponseMessage::parse(session->response->content, session->response->http_version, session->response->status_code, session->response->header)) {\n            session->callback(make_error_code::make_error_code(errc::protocol_error));\n            return;\n          }\n\n          auto header_it = session->response->header.find(\"Content-Length\");\n          if(header_it != session->response->header.end()) {\n            auto content_length = std::stoull(header_it->second);\n            if(content_length > num_additional_bytes)\n              this->read_content(session, content_length - num_additional_bytes);\n            else\n              session->callback(ec);\n          }\n          else if((header_it = session->response->header.find(\"Transfer-Encoding\")) != session->response->header.end() && header_it->second == \"chunked\") {\n            // Expect hex number to not exceed 16 bytes (64-bit number), but take into account previous additional read bytes\n            auto chunk_size_streambuf = std::make_shared<asio::streambuf>(std::max<std::size_t>(16 + 2, session->response->streambuf.size()));\n\n            // Move leftover bytes\n            auto &source = session->response->streambuf;\n            auto &target = *chunk_size_streambuf;\n            target.commit(asio::buffer_copy(target.prepare(source.size()), source.data()));\n            source.consume(source.size());\n\n            this->read_chunked_transfer_encoded(session, chunk_size_streambuf);\n          }\n          else if(session->response->http_version < \"1.1\" || 
((header_it = session->response->header.find(\"Connection\")) != session->response->header.end() && header_it->second == \"close\"))\n            read_content(session);\n          else if(((header_it = session->response->header.find(\"Content-Type\")) != session->response->header.end() && header_it->second == \"text/event-stream\")) {\n            auto events_streambuf = std::make_shared<asio::streambuf>(this->config.max_response_streambuf_size);\n\n            // Move leftover bytes\n            auto &source = session->response->streambuf;\n            auto &target = *events_streambuf;\n            target.commit(asio::buffer_copy(target.prepare(source.size()), source.data()));\n            source.consume(source.size());\n\n            session->callback(ec); // Connection to a Server-Sent Events resource is opened\n\n            this->read_server_sent_event(session, events_streambuf);\n          }\n          else\n            session->callback(ec);\n        }\n        else {\n          if(session->connection->attempt_reconnect && ec != error::operation_aborted)\n            reconnect(session, ec);\n          else\n            session->callback(ec);\n        }\n      });\n    }\n\n    void reconnect(const std::shared_ptr<Session> &session, const error_code &ec) {\n      LockGuard lock(connections_mutex);\n      auto it = connections.find(session->connection);\n      if(it != connections.end()) {\n        connections.erase(it);\n        session->connection = create_connection();\n        session->connection->attempt_reconnect = false;\n        session->connection->in_use = true;\n        session->response = std::shared_ptr<Response>(new Response(this->config.max_response_streambuf_size, session->connection));\n        connections.emplace(session->connection);\n        lock.unlock();\n        this->connect(session);\n      }\n      else {\n        lock.unlock();\n        session->callback(ec);\n      }\n    }\n\n    void read_content(const std::shared_ptr<Session> 
&session, std::size_t remaining_length) {\n      asio::async_read(*session->connection->socket, session->response->streambuf, asio::transfer_exactly(remaining_length), [this, session, remaining_length](const error_code &ec, std::size_t bytes_transferred) {\n        auto lock = session->connection->handler_runner->continue_lock();\n        if(!lock)\n          return;\n\n        if(!ec) {\n          if(session->response->streambuf.size() == session->response->streambuf.max_size() && remaining_length > bytes_transferred) {\n            session->response->content.end = false;\n            session->callback(ec);\n            session->response = std::shared_ptr<Response>(new Response(*session->response));\n            this->read_content(session, remaining_length - bytes_transferred);\n          }\n          else\n            session->callback(ec);\n        }\n        else\n          session->callback(ec);\n      });\n    }\n\n    void read_content(const std::shared_ptr<Session> &session) {\n      asio::async_read(*session->connection->socket, session->response->streambuf, [this, session](const error_code &ec_, std::size_t /*bytes_transferred*/) {\n        auto lock = session->connection->handler_runner->continue_lock();\n        if(!lock)\n          return;\n\n        auto ec = ec_ == error::eof ? 
error_code() : ec_;\n\n        if(!ec) {\n          {\n            LockGuard lock(this->connections_mutex);\n            this->connections.erase(session->connection);\n          }\n          if(session->response->streambuf.size() == session->response->streambuf.max_size()) {\n            session->response->content.end = false;\n            session->callback(ec);\n            session->response = std::shared_ptr<Response>(new Response(*session->response));\n            this->read_content(session);\n          }\n          else\n            session->callback(ec);\n        }\n        else\n          session->callback(ec);\n      });\n    }\n\n    void read_chunked_transfer_encoded(const std::shared_ptr<Session> &session, const std::shared_ptr<asio::streambuf> &chunk_size_streambuf) {\n      asio::async_read_until(*session->connection->socket, *chunk_size_streambuf, \"\\r\\n\", [this, session, chunk_size_streambuf](const error_code &ec, size_t bytes_transferred) {\n        auto lock = session->connection->handler_runner->continue_lock();\n        if(!lock)\n          return;\n\n        if(!ec) {\n          std::istream istream(chunk_size_streambuf.get());\n          std::string line;\n          std::getline(istream, line);\n          bytes_transferred -= line.size() + 1;\n          unsigned long chunk_size = 0;\n          try {\n            chunk_size = std::stoul(line, 0, 16);\n          }\n          catch(...) 
{\n            session->callback(make_error_code::make_error_code(errc::protocol_error));\n            return;\n          }\n\n          if(chunk_size == 0) {\n            session->callback(error_code());\n            return;\n          }\n\n          if(chunk_size + session->response->streambuf.size() > session->response->streambuf.max_size()) {\n            session->response->content.end = false;\n            session->callback(ec);\n            session->response = std::shared_ptr<Response>(new Response(*session->response));\n          }\n\n          auto num_additional_bytes = chunk_size_streambuf->size() - bytes_transferred;\n\n          auto bytes_to_move = std::min<std::size_t>(chunk_size, num_additional_bytes);\n          if(bytes_to_move > 0) {\n            auto &source = *chunk_size_streambuf;\n            auto &target = session->response->streambuf;\n            target.commit(asio::buffer_copy(target.prepare(bytes_to_move), source.data(), bytes_to_move));\n            source.consume(bytes_to_move);\n          }\n\n          if(chunk_size > num_additional_bytes) {\n            asio::async_read(*session->connection->socket, session->response->streambuf, asio::transfer_exactly(chunk_size - num_additional_bytes), [this, session, chunk_size_streambuf](const error_code &ec, size_t /*bytes_transferred*/) {\n              auto lock = session->connection->handler_runner->continue_lock();\n              if(!lock)\n                return;\n\n              if(!ec) {\n                // Remove \"\\r\\n\"\n                auto null_buffer = std::make_shared<asio::streambuf>(2);\n                asio::async_read(*session->connection->socket, *null_buffer, asio::transfer_exactly(2), [this, session, chunk_size_streambuf, null_buffer](const error_code &ec, size_t /*bytes_transferred*/) {\n                  auto lock = session->connection->handler_runner->continue_lock();\n                  if(!lock)\n                    return;\n                  if(!ec)\n                   
 read_chunked_transfer_encoded(session, chunk_size_streambuf);\n                  else\n                    session->callback(ec);\n                });\n              }\n              else\n                session->callback(ec);\n            });\n          }\n          else if(2 + chunk_size > num_additional_bytes) { // If only end of chunk remains unread (\\n or \\r\\n)\n            // Remove \"\\r\\n\"\n            if(2 + chunk_size - num_additional_bytes == 1)\n              istream.get();\n            auto null_buffer = std::make_shared<asio::streambuf>(2);\n            asio::async_read(*session->connection->socket, *null_buffer, asio::transfer_exactly(2 + chunk_size - num_additional_bytes), [this, session, chunk_size_streambuf, null_buffer](const error_code &ec, size_t /*bytes_transferred*/) {\n              auto lock = session->connection->handler_runner->continue_lock();\n              if(!lock)\n                return;\n              if(!ec)\n                read_chunked_transfer_encoded(session, chunk_size_streambuf);\n              else\n                session->callback(ec);\n            });\n          }\n          else {\n            // Remove \"\\r\\n\"\n            istream.get();\n            istream.get();\n\n            read_chunked_transfer_encoded(session, chunk_size_streambuf);\n          }\n        }\n        else\n          session->callback(ec);\n      });\n    }\n\n    void read_server_sent_event(const std::shared_ptr<Session> &session, const std::shared_ptr<asio::streambuf> &events_streambuf) {\n      asio::async_read_until(*session->connection->socket, *events_streambuf, HeaderEndMatch(), [this, session, events_streambuf](const error_code &ec, std::size_t /*bytes_transferred*/) {\n        auto lock = session->connection->handler_runner->continue_lock();\n        if(!lock)\n          return;\n\n        if(!ec) {\n          session->response->content.end = false;\n          std::istream istream(events_streambuf.get());\n          std::ostream 
ostream(&session->response->streambuf);\n          std::string line;\n          while(std::getline(istream, line) && !line.empty() && !(line.back() == '\\r' && line.size() == 1)) {\n            ostream.write(line.data(), static_cast<std::streamsize>(line.size() - (line.back() == '\\r' ? 1 : 0)));\n            ostream.put('\\n');\n          }\n\n          session->callback(ec);\n          session->response = std::shared_ptr<Response>(new Response(*session->response));\n          read_server_sent_event(session, events_streambuf);\n        }\n        else\n          session->callback(ec);\n      });\n    }\n  };\n\n  template <class socket_type>\n  class Client : public ClientBase<socket_type> {};\n\n  using HTTP = asio::ip::tcp::socket;\n\n  template <>\n  class Client<HTTP> : public ClientBase<HTTP> {\n  public:\n    /**\n     * Constructs a client object.\n     *\n     * @param server_port_path Server resource given by host[:port][/path]\n     */\n    Client(const std::string &server_port_path) noexcept : ClientBase<HTTP>::ClientBase(server_port_path, 80) {}\n\n  protected:\n    std::shared_ptr<Connection> create_connection() noexcept override {\n      return std::make_shared<Connection>(handler_runner, *io_service);\n    }\n\n    void connect(const std::shared_ptr<Session> &session) override {\n      if(!session->connection->socket->lowest_layer().is_open()) {\n        auto resolver = std::make_shared<asio::ip::tcp::resolver>(*io_service);\n        session->connection->set_timeout(config.timeout_connect);\n        async_resolve(*resolver, *host_port, [this, session, resolver](const error_code &ec, resolver_results results) {\n          session->connection->cancel_timeout();\n          auto lock = session->connection->handler_runner->continue_lock();\n          if(!lock)\n            return;\n          if(!ec) {\n            session->connection->set_timeout(config.timeout_connect);\n            asio::async_connect(*session->connection->socket, results, [this, 
session, resolver](const error_code &ec, async_connect_endpoint /*endpoint*/) {\n              session->connection->cancel_timeout();\n              auto lock = session->connection->handler_runner->continue_lock();\n              if(!lock)\n                return;\n              if(!ec) {\n                asio::ip::tcp::no_delay option(true);\n                error_code ec;\n                session->connection->socket->set_option(option, ec);\n                this->write(session);\n              }\n              else\n                session->callback(ec);\n            });\n          }\n          else\n            session->callback(ec);\n        });\n      }\n      else\n        write(session);\n    }\n  };\n} // namespace SimpleWeb\n\n#endif /* SIMPLE_WEB_CLIENT_HTTP_HPP */\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/client_https.hpp",
    "content": "#ifndef SIMPLE_WEB_CLIENT_HTTPS_HPP\n#define SIMPLE_WEB_CLIENT_HTTPS_HPP\n\n#include \"client_http.hpp\"\n\n#ifdef USE_STANDALONE_ASIO\n#include <asio/ssl.hpp>\n#else\n#include <boost/asio/ssl.hpp>\n#endif\n\nnamespace SimpleWeb {\n  using HTTPS = asio::ssl::stream<asio::ip::tcp::socket>;\n\n  template <>\n  class Client<HTTPS> : public ClientBase<HTTPS> {\n  public:\n    /**\n     * Constructs a client object.\n     *\n     * @param server_port_path   Server resource given by host[:port][/path]\n     * @param verify_certificate Set to true (default) to verify the server's certificate and hostname according to RFC 2818.\n     * @param certification_file If non-empty, sends the given certification file to server. Requires private_key_file.\n     * @param private_key_file   If non-empty, specifies the file containing the private key for certification_file. Requires certification_file.\n     * @param verify_file        If non-empty, use this certificate authority file to perform verification.\n     */\n    Client(const std::string &server_port_path, bool verify_certificate = true, const std::string &certification_file = std::string(),\n           const std::string &private_key_file = std::string(), const std::string &verify_file = std::string())\n        : ClientBase<HTTPS>::ClientBase(server_port_path, 443),\n#ifdef RHEL_CENTOS_7\n          context(asio::ssl::context::tlsv1)\n#else\n          context(asio::ssl::context::tlsv12)\n#endif\n    {    \n      if(certification_file.size() > 0 && private_key_file.size() > 0) {\n        context.use_certificate_chain_file(certification_file);\n        context.use_private_key_file(private_key_file, asio::ssl::context::pem);\n      }\n\n      if(verify_certificate)\n        context.set_verify_callback(asio::ssl::rfc2818_verification(host));\n\n      if(verify_file.size() > 0)\n        context.load_verify_file(verify_file);\n      else\n        context.set_default_verify_paths();\n\n      if(verify_certificate)\n  
      context.set_verify_mode(asio::ssl::verify_peer);\n      else\n        context.set_verify_mode(asio::ssl::verify_none);\n    }\n\n  protected:\n    asio::ssl::context context;\n\n    std::shared_ptr<Connection> create_connection() noexcept override {\n      return std::make_shared<Connection>(handler_runner, *io_service, context);\n    }\n\n    void connect(const std::shared_ptr<Session> &session) override {\n      if(!session->connection->socket->lowest_layer().is_open()) {\n        auto resolver = std::make_shared<asio::ip::tcp::resolver>(*io_service);\n        async_resolve(*resolver, *host_port, [this, session, resolver](const error_code &ec, resolver_results results) {\n          auto lock = session->connection->handler_runner->continue_lock();\n          if(!lock)\n            return;\n          if(!ec) {\n            session->connection->set_timeout(this->config.timeout_connect);\n            asio::async_connect(session->connection->socket->lowest_layer(), results, [this, session, resolver](const error_code &ec, async_connect_endpoint /*endpoint*/) {\n              session->connection->cancel_timeout();\n              auto lock = session->connection->handler_runner->continue_lock();\n              if(!lock)\n                return;\n              if(!ec) {\n                asio::ip::tcp::no_delay option(true);\n                error_code ec;\n                session->connection->socket->lowest_layer().set_option(option, ec);\n\n                if(!this->config.proxy_server.empty()) {\n                  auto write_buffer = std::make_shared<asio::streambuf>();\n                  std::ostream write_stream(write_buffer.get());\n                  auto host_port = this->host + ':' + std::to_string(this->port);\n                  write_stream << \"CONNECT \" + host_port + \" HTTP/1.1\\r\\n\"\n                               << \"Host: \" << host_port << \"\\r\\n\\r\\n\";\n                  session->connection->set_timeout(this->config.timeout_connect);\n        
          asio::async_write(session->connection->socket->next_layer(), *write_buffer, [this, session, write_buffer](const error_code &ec, std::size_t /*bytes_transferred*/) {\n                    session->connection->cancel_timeout();\n                    auto lock = session->connection->handler_runner->continue_lock();\n                    if(!lock)\n                      return;\n                    if(!ec) {\n                      std::shared_ptr<Response> response(new Response(this->config.max_response_streambuf_size, session->connection));\n                      session->connection->set_timeout(this->config.timeout_connect);\n                      asio::async_read_until(session->connection->socket->next_layer(), response->streambuf, \"\\r\\n\\r\\n\", [this, session, response](const error_code &ec, std::size_t /*bytes_transferred*/) {\n                        session->connection->cancel_timeout();\n                        auto lock = session->connection->handler_runner->continue_lock();\n                        if(!lock)\n                          return;\n                        if(response->streambuf.size() == response->streambuf.max_size()) {\n                          session->callback(make_error_code::make_error_code(errc::message_size));\n                          return;\n                        }\n\n                        if(!ec) {\n                          if(!ResponseMessage::parse(response->content, response->http_version, response->status_code, response->header))\n                            session->callback(make_error_code::make_error_code(errc::protocol_error));\n                          else {\n                            if(response->status_code.compare(0, 3, \"200\") != 0)\n                              session->callback(make_error_code::make_error_code(errc::permission_denied));\n                            else\n                              this->handshake(session);\n                          }\n                        }\n                
        else\n                          session->callback(ec);\n                      });\n                    }\n                    else\n                      session->callback(ec);\n                  });\n                }\n                else\n                  this->handshake(session);\n              }\n              else\n                session->callback(ec);\n            });\n          }\n          else\n            session->callback(ec);\n        });\n      }\n      else\n        write(session);\n    }\n\n    void handshake(const std::shared_ptr<Session> &session) {\n      SSL_set_tlsext_host_name(session->connection->socket->native_handle(), this->host.c_str());\n\n      session->connection->set_timeout(this->config.timeout_connect);\n      session->connection->socket->async_handshake(asio::ssl::stream_base::client, [this, session](const error_code &ec) {\n        session->connection->cancel_timeout();\n        auto lock = session->connection->handler_runner->continue_lock();\n        if(!lock)\n          return;\n        if(!ec)\n          this->write(session);\n        else\n          session->callback(ec);\n      });\n    }\n  };\n} // namespace SimpleWeb\n\n#endif /* SIMPLE_WEB_CLIENT_HTTPS_HPP */\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/crypto.hpp",
    "content": "#ifndef SIMPLE_WEB_CRYPTO_HPP\n#define SIMPLE_WEB_CRYPTO_HPP\n\n#include <cmath>\n#include <iomanip>\n#include <istream>\n#include <sstream>\n#include <string>\n#include <vector>\n\n#include <openssl/buffer.h>\n#include <openssl/evp.h>\n#include <openssl/md5.h>\n#include <openssl/sha.h>\n\nnamespace SimpleWeb {\n// TODO 2017: remove workaround for MSVS 2012\n#if _MSC_VER == 1700                       // MSVS 2012 has no definition for round()\n  inline double round(double x) noexcept { // Custom definition of round() for positive numbers\n    return floor(x + 0.5);\n  }\n#endif\n\n  class Crypto {\n    const static std::size_t buffer_size = 131072;\n\n  public:\n    class Base64 {\n    public:\n      /// Returns Base64 encoded string from input string.\n      static std::string encode(const std::string &input) noexcept {\n        std::string base64;\n\n        BIO *bio, *b64;\n        BUF_MEM *bptr = BUF_MEM_new();\n\n        b64 = BIO_new(BIO_f_base64());\n        BIO_set_flags(b64, BIO_FLAGS_BASE64_NO_NL);\n        bio = BIO_new(BIO_s_mem());\n        BIO_push(b64, bio);\n        BIO_set_mem_buf(b64, bptr, BIO_CLOSE);\n\n        // Write directly to base64-buffer to avoid copy\n        auto base64_length = static_cast<std::size_t>(round(4 * ceil(static_cast<double>(input.size()) / 3.0)));\n        base64.resize(base64_length);\n        bptr->length = 0;\n        bptr->max = base64_length + 1;\n        bptr->data = &base64[0];\n\n        if(BIO_write(b64, &input[0], static_cast<int>(input.size())) <= 0 || BIO_flush(b64) <= 0)\n          base64.clear();\n\n        // To keep &base64[0] through BIO_free_all(b64)\n        bptr->length = 0;\n        bptr->max = 0;\n        bptr->data = nullptr;\n\n        BIO_free_all(b64);\n\n        return base64;\n      }\n\n      /// Returns Base64 decoded string from base64 input.\n      static std::string decode(const std::string &base64) noexcept {\n        std::string ascii;\n\n        // Resize ascii, however, 
the size is up to two bytes too large.\n        ascii.resize((6 * base64.size()) / 8);\n        BIO *b64, *bio;\n\n        b64 = BIO_new(BIO_f_base64());\n        BIO_set_flags(b64, BIO_FLAGS_BASE64_NO_NL);\n// TODO: Remove in 2022 or later\n#if(defined(OPENSSL_VERSION_NUMBER) && OPENSSL_VERSION_NUMBER < 0x1000214fL) || (defined(LIBRESSL_VERSION_NUMBER) && LIBRESSL_VERSION_NUMBER < 0x2080000fL)\n        bio = BIO_new_mem_buf(const_cast<char *>(&base64[0]), static_cast<int>(base64.size()));\n#else\n        bio = BIO_new_mem_buf(&base64[0], static_cast<int>(base64.size()));\n#endif\n        bio = BIO_push(b64, bio);\n\n        auto decoded_length = BIO_read(bio, &ascii[0], static_cast<int>(ascii.size()));\n        if(decoded_length > 0)\n          ascii.resize(static_cast<std::size_t>(decoded_length));\n        else\n          ascii.clear();\n\n        BIO_free_all(b64);\n\n        return ascii;\n      }\n    };\n\n    /// Returns hex string from bytes in input string.\n    static std::string to_hex_string(const std::string &input) noexcept {\n      std::stringstream hex_stream;\n      hex_stream << std::hex << std::internal << std::setfill('0');\n      for(auto &byte : input)\n        hex_stream << std::setw(2) << static_cast<int>(static_cast<unsigned char>(byte));\n      return hex_stream.str();\n    }\n\n    /// Returns md5 hash value from input string.\n    static std::string md5(const std::string &input, std::size_t iterations = 1) noexcept {\n      std::string hash;\n\n      hash.resize(128 / 8);\n      MD5(reinterpret_cast<const unsigned char *>(&input[0]), input.size(), reinterpret_cast<unsigned char *>(&hash[0]));\n\n      for(std::size_t c = 1; c < iterations; ++c)\n        MD5(reinterpret_cast<const unsigned char *>(&hash[0]), hash.size(), reinterpret_cast<unsigned char *>(&hash[0]));\n\n      return hash;\n    }\n\n    /// Returns md5 hash value from input stream.\n    static std::string md5(std::istream &stream, std::size_t iterations = 1) noexcept {\n 
     MD5_CTX context;\n      MD5_Init(&context);\n      std::streamsize read_length;\n      std::vector<char> buffer(buffer_size);\n      while((read_length = stream.read(&buffer[0], buffer_size).gcount()) > 0)\n        MD5_Update(&context, buffer.data(), static_cast<std::size_t>(read_length));\n      std::string hash;\n      hash.resize(128 / 8);\n      MD5_Final(reinterpret_cast<unsigned char *>(&hash[0]), &context);\n\n      for(std::size_t c = 1; c < iterations; ++c)\n        MD5(reinterpret_cast<const unsigned char *>(&hash[0]), hash.size(), reinterpret_cast<unsigned char *>(&hash[0]));\n\n      return hash;\n    }\n\n    /// Returns sha1 hash value from input string.\n    static std::string sha1(const std::string &input, std::size_t iterations = 1) noexcept {\n      std::string hash;\n\n      hash.resize(160 / 8);\n      SHA1(reinterpret_cast<const unsigned char *>(&input[0]), input.size(), reinterpret_cast<unsigned char *>(&hash[0]));\n\n      for(std::size_t c = 1; c < iterations; ++c)\n        SHA1(reinterpret_cast<const unsigned char *>(&hash[0]), hash.size(), reinterpret_cast<unsigned char *>(&hash[0]));\n\n      return hash;\n    }\n\n    /// Returns sha1 hash value from input stream.\n    static std::string sha1(std::istream &stream, std::size_t iterations = 1) noexcept {\n      SHA_CTX context;\n      SHA1_Init(&context);\n      std::streamsize read_length;\n      std::vector<char> buffer(buffer_size);\n      while((read_length = stream.read(&buffer[0], buffer_size).gcount()) > 0)\n        SHA1_Update(&context, buffer.data(), static_cast<std::size_t>(read_length));\n      std::string hash;\n      hash.resize(160 / 8);\n      SHA1_Final(reinterpret_cast<unsigned char *>(&hash[0]), &context);\n\n      for(std::size_t c = 1; c < iterations; ++c)\n        SHA1(reinterpret_cast<const unsigned char *>(&hash[0]), hash.size(), reinterpret_cast<unsigned char *>(&hash[0]));\n\n      return hash;\n    }\n\n    /// Returns sha256 hash value from input string.\n   
 static std::string sha256(const std::string &input, std::size_t iterations = 1) noexcept {\n      std::string hash;\n\n      hash.resize(256 / 8);\n      SHA256(reinterpret_cast<const unsigned char *>(&input[0]), input.size(), reinterpret_cast<unsigned char *>(&hash[0]));\n\n      for(std::size_t c = 1; c < iterations; ++c)\n        SHA256(reinterpret_cast<const unsigned char *>(&hash[0]), hash.size(), reinterpret_cast<unsigned char *>(&hash[0]));\n\n      return hash;\n    }\n\n    /// Returns sha256 hash value from input stream.\n    static std::string sha256(std::istream &stream, std::size_t iterations = 1) noexcept {\n      SHA256_CTX context;\n      SHA256_Init(&context);\n      std::streamsize read_length;\n      std::vector<char> buffer(buffer_size);\n      while((read_length = stream.read(&buffer[0], buffer_size).gcount()) > 0)\n        SHA256_Update(&context, buffer.data(), static_cast<std::size_t>(read_length));\n      std::string hash;\n      hash.resize(256 / 8);\n      SHA256_Final(reinterpret_cast<unsigned char *>(&hash[0]), &context);\n\n      for(std::size_t c = 1; c < iterations; ++c)\n        SHA256(reinterpret_cast<const unsigned char *>(&hash[0]), hash.size(), reinterpret_cast<unsigned char *>(&hash[0]));\n\n      return hash;\n    }\n\n    /// Returns sha512 hash value from input string.\n    static std::string sha512(const std::string &input, std::size_t iterations = 1) noexcept {\n      std::string hash;\n\n      hash.resize(512 / 8);\n      SHA512(reinterpret_cast<const unsigned char *>(&input[0]), input.size(), reinterpret_cast<unsigned char *>(&hash[0]));\n\n      for(std::size_t c = 1; c < iterations; ++c)\n        SHA512(reinterpret_cast<const unsigned char *>(&hash[0]), hash.size(), reinterpret_cast<unsigned char *>(&hash[0]));\n\n      return hash;\n    }\n\n    /// Returns sha512 hash value from input stream.\n    static std::string sha512(std::istream &stream, std::size_t iterations = 1) noexcept {\n      SHA512_CTX context;\n      
SHA512_Init(&context);\n      std::streamsize read_length;\n      std::vector<char> buffer(buffer_size);\n      while((read_length = stream.read(&buffer[0], buffer_size).gcount()) > 0)\n        SHA512_Update(&context, buffer.data(), static_cast<std::size_t>(read_length));\n      std::string hash;\n      hash.resize(512 / 8);\n      SHA512_Final(reinterpret_cast<unsigned char *>(&hash[0]), &context);\n\n      for(std::size_t c = 1; c < iterations; ++c)\n        SHA512(reinterpret_cast<const unsigned char *>(&hash[0]), hash.size(), reinterpret_cast<unsigned char *>(&hash[0]));\n\n      return hash;\n    }\n\n    /**\n     * Returns PBKDF2 derived key from the given password.\n     *\n     * @param password   The password to derive key from.\n     * @param salt       The salt to be used in the algorithm.\n     * @param iterations Number of iterations to be used in the algorithm.\n     * @param key_size   Number of bytes of the returned key.\n     *\n     * @return The PBKDF2 derived key.\n     */\n    static std::string pbkdf2(const std::string &password, const std::string &salt, int iterations, int key_size) noexcept {\n      std::string key;\n      key.resize(static_cast<std::size_t>(key_size));\n      PKCS5_PBKDF2_HMAC_SHA1(password.c_str(), password.size(),\n                             reinterpret_cast<const unsigned char *>(salt.c_str()), salt.size(), iterations,\n                             key_size, reinterpret_cast<unsigned char *>(&key[0]));\n      return key;\n    }\n  };\n} // namespace SimpleWeb\n#endif /* SIMPLE_WEB_CRYPTO_HPP */\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/docs/Doxyfile",
    "content": "# Doxyfile 1.8.15\n\n# This file describes the settings to be used by the documentation system\n# doxygen (www.doxygen.org) for a project.\n#\n# All text after a double hash (##) is considered a comment and is placed in\n# front of the TAG it is preceding.\n#\n# All text after a single hash (#) is considered a comment and will be ignored.\n# The format is:\n# TAG = value [value, ...]\n# For lists, items can also be appended using:\n# TAG += value [value, ...]\n# Values that contain spaces should be placed between quotes (\\\" \\\").\n\n#---------------------------------------------------------------------------\n# Project related configuration options\n#---------------------------------------------------------------------------\n\n# This tag specifies the encoding used for all characters in the configuration\n# file that follow. The default is UTF-8 which is also the encoding used for all\n# text before the first occurrence of this tag. Doxygen uses libiconv (or the\n# iconv built into libc) for the transcoding. See\n# https://www.gnu.org/software/libiconv/ for the list of possible encodings.\n# The default value is: UTF-8.\n\nDOXYFILE_ENCODING      = UTF-8\n\n# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by\n# double-quotes, unless you are using Doxywizard) that should identify the\n# project for which the documentation is generated. This name is used in the\n# title of most generated pages and in a few other places.\n# The default value is: My Project.\n\nPROJECT_NAME           = \"Simple-Web-Server\"\n\n# The PROJECT_NUMBER tag can be used to enter a project or revision number. 
This\n# could be handy for archiving the generated documentation or if some version\n# control system is used.\n\nPROJECT_NUMBER         =\n\n# Using the PROJECT_BRIEF tag one can provide an optional one line description\n# for a project that appears at the top of each page and should give viewer a\n# quick idea about the purpose of the project. Keep the description short.\n\nPROJECT_BRIEF          =\n\n# With the PROJECT_LOGO tag one can specify a logo or an icon that is included\n# in the documentation. The maximum height of the logo should not exceed 55\n# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy\n# the logo to the output directory.\n\nPROJECT_LOGO           =\n\n# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path\n# into which the generated documentation will be written. If a relative path is\n# entered, it will be relative to the location where doxygen was started. If\n# left blank the current directory will be used.\n\nOUTPUT_DIRECTORY       = docs/doxygen_output\n\n# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-\n# directories (in 2 levels) under the output directory of each output format and\n# will distribute the generated files over these directories. Enabling this\n# option can be useful when feeding doxygen a huge amount of source files, where\n# putting all generated files in the same directory would otherwise causes\n# performance problems for the file system.\n# The default value is: NO.\n\nCREATE_SUBDIRS         = NO\n\n# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII\n# characters to appear in the names of generated files. If set to NO, non-ASCII\n# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode\n# U+3044.\n# The default value is: NO.\n\nALLOW_UNICODE_NAMES    = NO\n\n# The OUTPUT_LANGUAGE tag is used to specify the language in which all\n# documentation generated by doxygen is written. 
Doxygen will use this\n# information to generate all constant output in the proper language.\n# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,\n# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),\n# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,\n# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),\n# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,\n# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,\n# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,\n# Ukrainian and Vietnamese.\n# The default value is: English.\n\nOUTPUT_LANGUAGE        = English\n\n# The OUTPUT_TEXT_DIRECTION tag is used to specify the direction in which all\n# documentation generated by doxygen is written. Doxygen will use this\n# information to generate all generated output in the proper direction.\n# Possible values are: None, LTR, RTL and Context.\n# The default value is: None.\n\nOUTPUT_TEXT_DIRECTION  = None\n\n# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member\n# descriptions after the members that are listed in the file and class\n# documentation (similar to Javadoc). Set to NO to disable this.\n# The default value is: YES.\n\nBRIEF_MEMBER_DESC      = YES\n\n# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief\n# description of a member or function before the detailed description\n#\n# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the\n# brief descriptions will be completely suppressed.\n# The default value is: YES.\n\nREPEAT_BRIEF           = YES\n\n# This tag implements a quasi-intelligent brief description abbreviator that is\n# used to form the text in various listings. 
Each string in this list, if found\n# as the leading text of the brief description, will be stripped from the text\n# and the result, after processing the whole list, is used as the annotated\n# text. Otherwise, the brief description is used as-is. If left blank, the\n# following values are used ($name is automatically replaced with the name of\n# the entity):The $name class, The $name widget, The $name file, is, provides,\n# specifies, contains, represents, a, an and the.\n\nABBREVIATE_BRIEF       = \"The $name class\" \\\n                         \"The $name widget\" \\\n                         \"The $name file\" \\\n                         is \\\n                         provides \\\n                         specifies \\\n                         contains \\\n                         represents \\\n                         a \\\n                         an \\\n                         the\n\n# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then\n# doxygen will generate a detailed section even if there is only a brief\n# description.\n# The default value is: NO.\n\nALWAYS_DETAILED_SEC    = NO\n\n# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all\n# inherited members of a class in the documentation of that class as if those\n# members were ordinary class members. Constructors, destructors and assignment\n# operators of the base classes will not be shown.\n# The default value is: NO.\n\nINLINE_INHERITED_MEMB  = NO\n\n# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path\n# before files name in the file list and in the header files. If set to NO the\n# shortest path that makes the file name unique will be used\n# The default value is: YES.\n\nFULL_PATH_NAMES        = YES\n\n# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.\n# Stripping is only done if one of the specified strings matches the left-hand\n# part of the path. 
The tag can be used to show relative paths in the file list.\n# If left blank the directory from which doxygen is run is used as the path to\n# strip.\n#\n# Note that you can specify absolute paths here, but also relative paths, which\n# will be relative from the directory where doxygen is started.\n# This tag requires that the tag FULL_PATH_NAMES is set to YES.\n\nSTRIP_FROM_PATH        =\n\n# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the\n# path mentioned in the documentation of a class, which tells the reader which\n# header file to include in order to use a class. If left blank only the name of\n# the header file containing the class definition is used. Otherwise one should\n# specify the list of include paths that are normally passed to the compiler\n# using the -I flag.\n\nSTRIP_FROM_INC_PATH    =\n\n# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but\n# less readable) file names. This can be useful if your file system doesn't\n# support long names like on DOS, Mac, or CD-ROM.\n# The default value is: NO.\n\nSHORT_NAMES            = NO\n\n# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the\n# first line (until the first dot) of a Javadoc-style comment as the brief\n# description. If set to NO, the Javadoc-style will behave just like regular Qt-\n# style comments (thus requiring an explicit @brief command for a brief\n# description.)\n# The default value is: NO.\n\nJAVADOC_AUTOBRIEF      = NO\n\n# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first\n# line (until the first dot) of a Qt-style comment as the brief description. If\n# set to NO, the Qt-style will behave just like regular Qt-style comments (thus\n# requiring an explicit \\brief command for a brief description.)\n# The default value is: NO.\n\nQT_AUTOBRIEF           = NO\n\n# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a\n# multi-line C++ special comment block (i.e. 
a block of //! or /// comments) as\n# a brief description. This used to be the default behavior. The new default is\n# to treat a multi-line C++ comment block as a detailed description. Set this\n# tag to YES if you prefer the old behavior instead.\n#\n# Note that setting this tag to YES also means that rational rose comments are\n# not recognized any more.\n# The default value is: NO.\n\nMULTILINE_CPP_IS_BRIEF = YES\n\n# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the\n# documentation from any documented member that it re-implements.\n# The default value is: YES.\n\nINHERIT_DOCS           = YES\n\n# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new\n# page for each member. If set to NO, the documentation of a member will be part\n# of the file/class/namespace that contains it.\n# The default value is: NO.\n\nSEPARATE_MEMBER_PAGES  = NO\n\n# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen\n# uses this value to replace tabs by spaces in code fragments.\n# Minimum value: 1, maximum value: 16, default value: 4.\n\nTAB_SIZE               = 4\n\n# This tag can be used to specify a number of aliases that act as commands in\n# the documentation. An alias has the form:\n# name=value\n# For example adding\n# \"sideeffect=@par Side Effects:\\n\"\n# will allow you to put the command \\sideeffect (or @sideeffect) in the\n# documentation, which will result in a user-defined paragraph with heading\n# \"Side Effects:\". You can put \\n's in the value part of an alias to insert\n# newlines (in the resulting output). 
You can put ^^ in the value part of an\n# alias to insert a newline as if a physical newline was in the original file.\n# When you need a literal { or } or , in the value part of an alias you have to\n# escape them by means of a backslash (\\), this can lead to conflicts with the\n# commands \\{ and \\} for these it is advised to use the version @{ and @} or use\n# a double escape (\\\\{ and \\\\})\n\nALIASES                =\n\n# This tag can be used to specify a number of word-keyword mappings (TCL only).\n# A mapping has the form \"name=value\". For example adding \"class=itcl::class\"\n# will allow you to use the command class in the itcl::class meaning.\n\nTCL_SUBST              =\n\n# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources\n# only. Doxygen will then generate output that is more tailored for C. For\n# instance, some of the names that are used will be different. The list of all\n# members will be omitted, etc.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_FOR_C  = NO\n\n# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or\n# Python sources only. Doxygen will then generate output that is more tailored\n# for that language. For instance, namespaces will be presented as packages,\n# qualified scopes will look different, etc.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_JAVA   = NO\n\n# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran\n# sources. Doxygen will then generate output that is tailored for Fortran.\n# The default value is: NO.\n\nOPTIMIZE_FOR_FORTRAN   = NO\n\n# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL\n# sources. Doxygen will then generate output that is tailored for VHDL.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_VHDL   = NO\n\n# Set the OPTIMIZE_OUTPUT_SLICE tag to YES if your project consists of Slice\n# sources only. Doxygen will then generate output that is more tailored for that\n# language. 
For instance, namespaces will be presented as modules, types will be\n# separated into more groups, etc.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_SLICE  = NO\n\n# Doxygen selects the parser to use depending on the extension of the files it\n# parses. With this tag you can assign which parser to use for a given\n# extension. Doxygen has a built-in mapping, but you can override or extend it\n# using this tag. The format is ext=language, where ext is a file extension, and\n# language is one of the parsers supported by doxygen: IDL, Java, Javascript,\n# Csharp (C#), C, C++, D, PHP, md (Markdown), Objective-C, Python, Slice,\n# Fortran (fixed format Fortran: FortranFixed, free formatted Fortran:\n# FortranFree, unknown formatted Fortran: Fortran. In the later case the parser\n# tries to guess whether the code is fixed or free formatted code, this is the\n# default for Fortran type files), VHDL, tcl. For instance to make doxygen treat\n# .inc files as Fortran files (default is PHP), and .f files as C (default is\n# Fortran), use: inc=Fortran f=C.\n#\n# Note: For files without extension you can use no_extension as a placeholder.\n#\n# Note that for custom extensions you also need to set FILE_PATTERNS otherwise\n# the files are not read by doxygen.\n\nEXTENSION_MAPPING      =\n\n# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments\n# according to the Markdown format, which allows for more readable\n# documentation. See https://daringfireball.net/projects/markdown/ for details.\n# The output of markdown processing is further processed by doxygen, so you can\n# mix doxygen, HTML, and XML commands with Markdown formatting. 
Disable only in\n# case of backward compatibility issues.\n# The default value is: YES.\n\nMARKDOWN_SUPPORT       = YES\n\n# When the TOC_INCLUDE_HEADINGS tag is set to a non-zero value, all headings up\n# to that level are automatically included in the table of contents, even if\n# they do not have an id attribute.\n# Note: This feature currently applies only to Markdown headings.\n# Minimum value: 0, maximum value: 99, default value: 0.\n# This tag requires that the tag MARKDOWN_SUPPORT is set to YES.\n\nTOC_INCLUDE_HEADINGS   = 0\n\n# When enabled doxygen tries to link words that correspond to documented\n# classes, or namespaces to their corresponding documentation. Such a link can\n# be prevented in individual cases by putting a % sign in front of the word or\n# globally by setting AUTOLINK_SUPPORT to NO.\n# The default value is: YES.\n\nAUTOLINK_SUPPORT       = YES\n\n# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want\n# to include (a tag file for) the STL sources as input, then you should set this\n# tag to YES in order to let doxygen match function declarations and\n# definitions whose arguments contain STL classes (e.g. func(std::string);\n# versus func(std::string) {}). This also makes the inheritance and\n# collaboration diagrams that involve STL classes more complete and accurate.\n# The default value is: NO.\n\nBUILTIN_STL_SUPPORT    = NO\n\n# If you use Microsoft's C++/CLI language, you should set this option to YES to\n# enable parsing support.\n# The default value is: NO.\n\nCPP_CLI_SUPPORT        = NO\n\n# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:\n# https://www.riverbankcomputing.com/software/sip/intro) sources only. 
Doxygen\n# will parse them like normal C++ but will assume all classes use public instead\n# of private inheritance when no explicit protection keyword is present.\n# The default value is: NO.\n\nSIP_SUPPORT            = NO\n\n# For Microsoft's IDL there are propget and propput attributes to indicate\n# getter and setter methods for a property. Setting this option to YES will make\n# doxygen replace the get and set methods by a property in the documentation.\n# This will only work if the methods are indeed getting or setting a simple\n# type. If this is not the case, or you want to show the methods anyway, you\n# should set this option to NO.\n# The default value is: YES.\n\nIDL_PROPERTY_SUPPORT   = YES\n\n# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC\n# tag is set to YES then doxygen will reuse the documentation of the first\n# member in the group (if any) for the other members of the group. By default\n# all members of a group must be documented explicitly.\n# The default value is: NO.\n\nDISTRIBUTE_GROUP_DOC   = NO\n\n# If one adds a struct or class to a group and this option is enabled, then also\n# any nested class or struct is added to the same group. By default this option\n# is disabled and one has to add nested compounds explicitly via \\ingroup.\n# The default value is: NO.\n\nGROUP_NESTED_COMPOUNDS = NO\n\n# Set the SUBGROUPING tag to YES to allow class member groups of the same type\n# (for instance a group of public functions) to be put as a subgroup of that\n# type (e.g. under the Public Functions section). Set it to NO to prevent\n# subgrouping. Alternatively, this can be done per class using the\n# \\nosubgrouping command.\n# The default value is: YES.\n\nSUBGROUPING            = YES\n\n# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions\n# are shown inside the group in which they are included (e.g. 
using \\ingroup)\n# instead of on a separate page (for HTML and Man pages) or section (for LaTeX\n# and RTF).\n#\n# Note that this feature does not work in combination with\n# SEPARATE_MEMBER_PAGES.\n# The default value is: NO.\n\nINLINE_GROUPED_CLASSES = NO\n\n# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions\n# with only public data fields or simple typedef fields will be shown inline in\n# the documentation of the scope in which they are defined (i.e. file,\n# namespace, or group documentation), provided this scope is documented. If set\n# to NO, structs, classes, and unions are shown on a separate page (for HTML and\n# Man pages) or section (for LaTeX and RTF).\n# The default value is: NO.\n\nINLINE_SIMPLE_STRUCTS  = NO\n\n# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or\n# enum is documented as struct, union, or enum with the name of the typedef. So\n# typedef struct TypeS {} TypeT, will appear in the documentation as a struct\n# with name TypeT. When disabled the typedef will appear as a member of a file,\n# namespace, or class. And the struct will be named TypeS. This can typically be\n# useful for C code in case the coding convention dictates that all compound\n# types are typedef'ed and only the typedef is referenced, never the tag name.\n# The default value is: NO.\n\nTYPEDEF_HIDES_STRUCT   = NO\n\n# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This\n# cache is used to resolve symbols given their name and scope. Since this can be\n# an expensive process and often the same symbol appears multiple times in the\n# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small\n# doxygen will become slower. If the cache is too large, memory is wasted. The\n# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range\n# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536\n# symbols. 
At the end of a run doxygen will report the cache usage and suggest\n# the optimal cache size from a speed point of view.\n# Minimum value: 0, maximum value: 9, default value: 0.\n\nLOOKUP_CACHE_SIZE      = 0\n\n#---------------------------------------------------------------------------\n# Build related configuration options\n#---------------------------------------------------------------------------\n\n# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in\n# documentation are documented, even if no documentation was available. Private\n# class members and static file members will be hidden unless the\n# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.\n# Note: This will also disable the warnings about undocumented members that are\n# normally produced when WARNINGS is set to YES.\n# The default value is: NO.\n\nEXTRACT_ALL            = YES\n\n# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will\n# be included in the documentation.\n# The default value is: NO.\n\nEXTRACT_PRIVATE        = NO\n\n# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal\n# scope will be included in the documentation.\n# The default value is: NO.\n\nEXTRACT_PACKAGE        = NO\n\n# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be\n# included in the documentation.\n# The default value is: NO.\n\nEXTRACT_STATIC         = NO\n\n# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined\n# locally in source files will be included in the documentation. If set to NO,\n# only classes defined in header files are included. Does not have any effect\n# for Java sources.\n# The default value is: YES.\n\nEXTRACT_LOCAL_CLASSES  = YES\n\n# This flag is only useful for Objective-C code. If set to YES, local methods,\n# which are defined in the implementation section but not in the interface are\n# included in the documentation. 
If set to NO, only methods in the interface are\n# included.\n# The default value is: NO.\n\nEXTRACT_LOCAL_METHODS  = NO\n\n# If this flag is set to YES, the members of anonymous namespaces will be\n# extracted and appear in the documentation as a namespace called\n# 'anonymous_namespace{file}', where file will be replaced with the base name of\n# the file that contains the anonymous namespace. By default anonymous namespace\n# are hidden.\n# The default value is: NO.\n\nEXTRACT_ANON_NSPACES   = NO\n\n# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all\n# undocumented members inside documented classes or files. If set to NO these\n# members will be included in the various overviews, but no documentation\n# section is generated. This option has no effect if EXTRACT_ALL is enabled.\n# The default value is: NO.\n\nHIDE_UNDOC_MEMBERS     = NO\n\n# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all\n# undocumented classes that are normally visible in the class hierarchy. If set\n# to NO, these classes will be included in the various overviews. This option\n# has no effect if EXTRACT_ALL is enabled.\n# The default value is: NO.\n\nHIDE_UNDOC_CLASSES     = NO\n\n# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend\n# (class|struct|union) declarations. If set to NO, these declarations will be\n# included in the documentation.\n# The default value is: NO.\n\nHIDE_FRIEND_COMPOUNDS  = NO\n\n# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any\n# documentation blocks found inside the body of a function. If set to NO, these\n# blocks will be appended to the function's detailed documentation block.\n# The default value is: NO.\n\nHIDE_IN_BODY_DOCS      = NO\n\n# The INTERNAL_DOCS tag determines if documentation that is typed after a\n# \\internal command is included. If the tag is set to NO then the documentation\n# will be excluded. 
Set it to YES to include the internal documentation.\n# The default value is: NO.\n\nINTERNAL_DOCS          = NO\n\n# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file\n# names in lower-case letters. If set to YES, upper-case letters are also\n# allowed. This is useful if you have classes or files whose names only differ\n# in case and if your file system supports case sensitive file names. Windows\n# and Mac users are advised to set this option to NO.\n# The default value is: system dependent.\n\nCASE_SENSE_NAMES       = NO\n\n# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with\n# their full class and namespace scopes in the documentation. If set to YES, the\n# scope will be hidden.\n# The default value is: NO.\n\nHIDE_SCOPE_NAMES       = NO\n\n# If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will\n# append additional text to a page's title, such as Class Reference. If set to\n# YES the compound reference will be hidden.\n# The default value is: NO.\n\nHIDE_COMPOUND_REFERENCE= NO\n\n# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of\n# the files that are included by a file in the documentation of that file.\n# The default value is: YES.\n\nSHOW_INCLUDE_FILES     = YES\n\n# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each\n# grouped member an include statement to the documentation, telling the reader\n# which file to include in order to use the member.\n# The default value is: NO.\n\nSHOW_GROUPED_MEMB_INC  = NO\n\n# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include\n# files with double quotes in the documentation rather than with sharp brackets.\n# The default value is: NO.\n\nFORCE_LOCAL_INCLUDES   = NO\n\n# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the\n# documentation for inline members.\n# The default value is: YES.\n\nINLINE_INFO            = YES\n\n# If the 
SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the\n# (detailed) documentation of file and class members alphabetically by member\n# name. If set to NO, the members will appear in declaration order.\n# The default value is: YES.\n\nSORT_MEMBER_DOCS       = YES\n\n# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief\n# descriptions of file, namespace and class members alphabetically by member\n# name. If set to NO, the members will appear in declaration order. Note that\n# this will also influence the order of the classes in the class list.\n# The default value is: NO.\n\nSORT_BRIEF_DOCS        = NO\n\n# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the\n# (brief and detailed) documentation of class members so that constructors and\n# destructors are listed first. If set to NO the constructors will appear in the\n# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.\n# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief\n# member documentation.\n# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting\n# detailed member documentation.\n# The default value is: NO.\n\nSORT_MEMBERS_CTORS_1ST = NO\n\n# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy\n# of group names into alphabetical order. If set to NO the group names will\n# appear in their defined order.\n# The default value is: NO.\n\nSORT_GROUP_NAMES       = NO\n\n# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by\n# fully-qualified names, including namespaces. 
If set to NO, the class list will\n# be sorted only by class name, not including the namespace part.\n# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.\n# Note: This option applies only to the class list, not to the alphabetical\n# list.\n# The default value is: NO.\n\nSORT_BY_SCOPE_NAME     = NO\n\n# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper\n# type resolution of all parameters of a function it will reject a match between\n# the prototype and the implementation of a member function even if there is\n# only one candidate or it is obvious which candidate to choose by doing a\n# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still\n# accept a match between prototype and implementation in such cases.\n# The default value is: NO.\n\nSTRICT_PROTO_MATCHING  = NO\n\n# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo\n# list. This list is created by putting \\todo commands in the documentation.\n# The default value is: YES.\n\nGENERATE_TODOLIST      = YES\n\n# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test\n# list. This list is created by putting \\test commands in the documentation.\n# The default value is: YES.\n\nGENERATE_TESTLIST      = YES\n\n# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug\n# list. This list is created by putting \\bug commands in the documentation.\n# The default value is: YES.\n\nGENERATE_BUGLIST       = YES\n\n# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)\n# the deprecated list. This list is created by putting \\deprecated commands in\n# the documentation.\n# The default value is: YES.\n\nGENERATE_DEPRECATEDLIST= YES\n\n# The ENABLED_SECTIONS tag can be used to enable conditional documentation\n# sections, marked by \\if <section_label> ... \\endif and \\cond <section_label>\n# ... 
\\endcond blocks.\n\nENABLED_SECTIONS       =\n\n# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the\n# initial value of a variable or macro / define can have for it to appear in the\n# documentation. If the initializer consists of more lines than specified here\n# it will be hidden. Use a value of 0 to hide initializers completely. The\n# appearance of the value of individual variables and macros / defines can be\n# controlled using \\showinitializer or \\hideinitializer command in the\n# documentation regardless of this setting.\n# Minimum value: 0, maximum value: 10000, default value: 30.\n\nMAX_INITIALIZER_LINES  = 30\n\n# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at\n# the bottom of the documentation of classes and structs. If set to YES, the\n# list will mention the files that were used to generate the documentation.\n# The default value is: YES.\n\nSHOW_USED_FILES        = YES\n\n# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This\n# will remove the Files entry from the Quick Index and from the Folder Tree View\n# (if specified).\n# The default value is: YES.\n\nSHOW_FILES             = YES\n\n# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces\n# page. This will remove the Namespaces entry from the Quick Index and from the\n# Folder Tree View (if specified).\n# The default value is: YES.\n\nSHOW_NAMESPACES        = NO\n\n# The FILE_VERSION_FILTER tag can be used to specify a program or script that\n# doxygen should invoke to get the current version for each file (typically from\n# the version control system). Doxygen will invoke the program by executing (via\n# popen()) the command command input-file, where command is the value of the\n# FILE_VERSION_FILTER tag, and input-file is the name of an input file provided\n# by doxygen. Whatever the program writes to standard output is used as the file\n# version. 
For an example see the documentation.\n\nFILE_VERSION_FILTER    =\n\n# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed\n# by doxygen. The layout file controls the global structure of the generated\n# output files in an output format independent way. To create the layout file\n# that represents doxygen's defaults, run doxygen with the -l option. You can\n# optionally specify a file name after the option, if omitted DoxygenLayout.xml\n# will be used as the name of the layout file.\n#\n# Note that if you run doxygen from a directory containing a file called\n# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE\n# tag is left empty.\n\nLAYOUT_FILE            =\n\n# The CITE_BIB_FILES tag can be used to specify one or more bib files containing\n# the reference definitions. This must be a list of .bib files. The .bib\n# extension is automatically appended if omitted. This requires the bibtex tool\n# to be installed. See also https://en.wikipedia.org/wiki/BibTeX for more info.\n# For LaTeX the style of the bibliography can be controlled using\n# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the\n# search path. See also \\cite for info how to create references.\n\nCITE_BIB_FILES         =\n\n#---------------------------------------------------------------------------\n# Configuration options related to warning and progress messages\n#---------------------------------------------------------------------------\n\n# The QUIET tag can be used to turn on/off the messages that are generated to\n# standard output by doxygen. If QUIET is set to YES this implies that the\n# messages are off.\n# The default value is: NO.\n\nQUIET                  = NO\n\n# The WARNINGS tag can be used to turn on/off the warning messages that are\n# generated to standard error (stderr) by doxygen. 
If WARNINGS is set to YES\n# this implies that the warnings are on.\n#\n# Tip: Turn warnings on while writing the documentation.\n# The default value is: YES.\n\nWARNINGS               = YES\n\n# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate\n# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag\n# will automatically be disabled.\n# The default value is: YES.\n\nWARN_IF_UNDOCUMENTED   = YES\n\n# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for\n# potential errors in the documentation, such as not documenting some parameters\n# in a documented function, or documenting parameters that don't exist or using\n# markup commands wrongly.\n# The default value is: YES.\n\nWARN_IF_DOC_ERROR      = YES\n\n# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that\n# are documented, but have no documentation for their parameters or return\n# value. If set to NO, doxygen will only warn about wrong or incomplete\n# parameter documentation, but not about the absence of documentation. If\n# EXTRACT_ALL is set to YES then this flag will automatically be disabled.\n# The default value is: NO.\n\nWARN_NO_PARAMDOC       = NO\n\n# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when\n# a warning is encountered.\n# The default value is: NO.\n\nWARN_AS_ERROR          = NO\n\n# The WARN_FORMAT tag determines the format of the warning messages that doxygen\n# can produce. The string should contain the $file, $line, and $text tags, which\n# will be replaced by the file and line number from which the warning originated\n# and the warning text. 
Optionally the format may contain $version, which will\n# be replaced by the version of the file (if it could be obtained via\n# FILE_VERSION_FILTER)\n# The default value is: $file:$line: $text.\n\nWARN_FORMAT            = \"$file:$line: $text\"\n\n# The WARN_LOGFILE tag can be used to specify a file to which warning and error\n# messages should be written. If left blank the output is written to standard\n# error (stderr).\n\nWARN_LOGFILE           =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the input files\n#---------------------------------------------------------------------------\n\n# The INPUT tag is used to specify the files and/or directories that contain\n# documented source files. You may enter file names like myfile.cpp or\n# directories like /usr/src/myproject. Separate the files or directories with\n# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING\n# Note: If this tag is empty the current directory is searched.\n\nINPUT                  =\n\n# This tag can be used to specify the character encoding of the source files\n# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses\n# libiconv (or the iconv built into libc) for the transcoding. 
See the libiconv\n# documentation (see: https://www.gnu.org/software/libiconv/) for the list of\n# possible encodings.\n# The default value is: UTF-8.\n\nINPUT_ENCODING         = UTF-8\n\n# If the value of the INPUT tag contains directories, you can use the\n# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and\n# *.h) to filter out the source-files in the directories.\n#\n# Note that for custom extensions or not directly supported extensions you also\n# need to set EXTENSION_MAPPING for the extension otherwise the files are not\n# read by doxygen.\n#\n# If left blank the following patterns are tested:*.c, *.cc, *.cxx, *.cpp,\n# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,\n# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,\n# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f95, *.f03, *.f08,\n# *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf, *.qsf and *.ice.\n\nFILE_PATTERNS          = *.c \\\n                         *.cc \\\n                         *.cxx \\\n                         *.cpp \\\n                         *.c++ \\\n                         *.java \\\n                         *.ii \\\n                         *.ixx \\\n                         *.ipp \\\n                         *.i++ \\\n                         *.inl \\\n                         *.idl \\\n                         *.ddl \\\n                         *.odl \\\n                         *.h \\\n                         *.hh \\\n                         *.hxx \\\n                         *.hpp \\\n                         *.h++ \\\n                         *.cs \\\n                         *.d \\\n                         *.php \\\n                         *.php4 \\\n                         *.php5 \\\n                         *.phtml \\\n                         *.inc \\\n                         *.m \\\n                         *.markdown \\\n                         *.md \\\n                         
*.mm \\\n                         *.dox \\\n                         *.py \\\n                         *.pyw \\\n                         *.f90 \\\n                         *.f95 \\\n                         *.f03 \\\n                         *.f08 \\\n                         *.f \\\n                         *.for \\\n                         *.tcl \\\n                         *.vhd \\\n                         *.vhdl \\\n                         *.ucf \\\n                         *.qsf \\\n                         *.ice\n\n# The RECURSIVE tag can be used to specify whether or not subdirectories should\n# be searched for input files as well.\n# The default value is: NO.\n\nRECURSIVE              = NO\n\n# The EXCLUDE tag can be used to specify files and/or directories that should be\n# excluded from the INPUT source files. This way you can easily exclude a\n# subdirectory from a directory tree whose root is specified with the INPUT tag.\n#\n# Note that relative paths are relative to the directory from which doxygen is\n# run.\n\nEXCLUDE                =\n\n# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or\n# directories that are symbolic links (a Unix file system feature) are excluded\n# from the input.\n# The default value is: NO.\n\nEXCLUDE_SYMLINKS       = NO\n\n# If the value of the INPUT tag contains directories, you can use the\n# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude\n# certain files from those directories.\n#\n# Note that the wildcards are matched against the file with absolute path, so to\n# exclude all test directories for example use the pattern */test/*\n\nEXCLUDE_PATTERNS       =\n\n# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names\n# (namespaces, classes, functions, etc.) that should be excluded from the\n# output. The symbol name can be a fully qualified name, a word, or if the\n# wildcard * is used, a substring. 
Examples: ANamespace, AClass,\n# AClass::ANamespace, ANamespace::*Test\n#\n# Note that the wildcards are matched against the file with absolute path, so to\n# exclude all test directories use the pattern */test/*\n\nEXCLUDE_SYMBOLS        =\n\n# The EXAMPLE_PATH tag can be used to specify one or more files or directories\n# that contain example code fragments that are included (see the \\include\n# command).\n\nEXAMPLE_PATH           =\n\n# If the value of the EXAMPLE_PATH tag contains directories, you can use the\n# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and\n# *.h) to filter out the source-files in the directories. If left blank all\n# files are included.\n\nEXAMPLE_PATTERNS       = *\n\n# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be\n# searched for input files to be used with the \\include or \\dontinclude commands\n# irrespective of the value of the RECURSIVE tag.\n# The default value is: NO.\n\nEXAMPLE_RECURSIVE      = NO\n\n# The IMAGE_PATH tag can be used to specify one or more files or directories\n# that contain images that are to be included in the documentation (see the\n# \\image command).\n\nIMAGE_PATH             =\n\n# The INPUT_FILTER tag can be used to specify a program that doxygen should\n# invoke to filter for each input file. Doxygen will invoke the filter program\n# by executing (via popen()) the command:\n#\n# <filter> <input-file>\n#\n# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the\n# name of an input file. Doxygen will then use the output that the filter\n# program writes to standard output. If FILTER_PATTERNS is specified, this tag\n# will be ignored.\n#\n# Note that the filter must not add or remove lines; it is applied before the\n# code is scanned, but not when the output code is generated. 
If lines are added\n# or removed, the anchors will not be placed correctly.\n#\n# Note that for custom extensions or not directly supported extensions you also\n# need to set EXTENSION_MAPPING for the extension otherwise the files are not\n# properly processed by doxygen.\n\nINPUT_FILTER           =\n\n# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern\n# basis. Doxygen will compare the file name with each pattern and apply the\n# filter if there is a match. The filters are a list of the form: pattern=filter\n# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how\n# filters are used. If the FILTER_PATTERNS tag is empty or if none of the\n# patterns match the file name, INPUT_FILTER is applied.\n#\n# Note that for custom extensions or not directly supported extensions you also\n# need to set EXTENSION_MAPPING for the extension otherwise the files are not\n# properly processed by doxygen.\n\nFILTER_PATTERNS        =\n\n# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using\n# INPUT_FILTER) will also be used to filter the input files that are used for\n# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).\n# The default value is: NO.\n\nFILTER_SOURCE_FILES    = NO\n\n# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file\n# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and\n# it is also possible to disable source filtering for a specific pattern using\n# *.ext= (so without naming a filter).\n# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.\n\nFILTER_SOURCE_PATTERNS =\n\n# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that\n# is part of the input, its contents will be placed on the main page\n# (index.html). 
This can be useful if you have a project on for instance GitHub\n# and want to reuse the introduction page also for the doxygen output.\n\nUSE_MDFILE_AS_MAINPAGE = README.md\n\n#---------------------------------------------------------------------------\n# Configuration options related to source browsing\n#---------------------------------------------------------------------------\n\n# If the SOURCE_BROWSER tag is set to YES then a list of source files will be\n# generated. Documented entities will be cross-referenced with these sources.\n#\n# Note: To get rid of all source code in the generated output, make sure that\n# also VERBATIM_HEADERS is set to NO.\n# The default value is: NO.\n\nSOURCE_BROWSER         = NO\n\n# Setting the INLINE_SOURCES tag to YES will include the body of functions,\n# classes and enums directly into the documentation.\n# The default value is: NO.\n\nINLINE_SOURCES         = NO\n\n# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any\n# special comment blocks from generated source code fragments. Normal C, C++ and\n# Fortran comments will always remain visible.\n# The default value is: YES.\n\nSTRIP_CODE_COMMENTS    = YES\n\n# If the REFERENCED_BY_RELATION tag is set to YES then for each documented\n# entity all documented functions referencing it will be listed.\n# The default value is: NO.\n\nREFERENCED_BY_RELATION = NO\n\n# If the REFERENCES_RELATION tag is set to YES then for each documented function\n# all documented entities called/used by that function will be listed.\n# The default value is: NO.\n\nREFERENCES_RELATION    = NO\n\n# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set\n# to YES then the hyperlinks from functions in REFERENCES_RELATION and\n# REFERENCED_BY_RELATION lists will link to the source code. 
Otherwise they will\n# link to the documentation.\n# The default value is: YES.\n\nREFERENCES_LINK_SOURCE = YES\n\n# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the\n# source code will show a tooltip with additional information such as prototype,\n# brief description and links to the definition and documentation. Since this\n# will make the HTML file larger and loading of large files a bit slower, you\n# can opt to disable this feature.\n# The default value is: YES.\n# This tag requires that the tag SOURCE_BROWSER is set to YES.\n\nSOURCE_TOOLTIPS        = YES\n\n# If the USE_HTAGS tag is set to YES then the references to source code will\n# point to the HTML generated by the htags(1) tool instead of doxygen built-in\n# source browser. The htags tool is part of GNU's global source tagging system\n# (see https://www.gnu.org/software/global/global.html). You will need version\n# 4.8.6 or higher.\n#\n# To use it do the following:\n# - Install the latest version of global\n# - Enable SOURCE_BROWSER and USE_HTAGS in the configuration file\n# - Make sure the INPUT points to the root of the source tree\n# - Run doxygen as normal\n#\n# Doxygen will invoke htags (and that will in turn invoke gtags), so these\n# tools must be available from the command line (i.e. in the search path).\n#\n# The result: instead of the source browser generated by doxygen, the links to\n# source code will now point to the output of htags.\n# The default value is: NO.\n# This tag requires that the tag SOURCE_BROWSER is set to YES.\n\nUSE_HTAGS              = NO\n\n# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a\n# verbatim copy of the header file for each class for which an include is\n# specified. 
Set to NO to disable this.\n# See also: Section \\class.\n# The default value is: YES.\n\nVERBATIM_HEADERS       = YES\n\n#---------------------------------------------------------------------------\n# Configuration options related to the alphabetical class index\n#---------------------------------------------------------------------------\n\n# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all\n# compounds will be generated. Enable this if the project contains a lot of\n# classes, structs, unions or interfaces.\n# The default value is: YES.\n\nALPHABETICAL_INDEX     = YES\n\n# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in\n# which the alphabetical index list will be split.\n# Minimum value: 1, maximum value: 20, default value: 5.\n# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.\n\nCOLS_IN_ALPHA_INDEX    = 5\n\n# In case all classes in a project start with a common prefix, all classes will\n# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag\n# can be used to specify a prefix (or a list of prefixes) that should be ignored\n# while generating the index headers.\n# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.\n\nIGNORE_PREFIX          =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the HTML output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output\n# The default value is: YES.\n\nGENERATE_HTML          = YES\n\n# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. 
If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: html.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_OUTPUT            = html\n\n# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each\n# generated HTML page (for example: .htm, .php, .asp).\n# The default value is: .html.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_FILE_EXTENSION    = .html\n\n# The HTML_HEADER tag can be used to specify a user-defined HTML header file for\n# each generated HTML page. If the tag is left blank doxygen will generate a\n# standard header.\n#\n# To get valid HTML, the header file must include any scripts and style sheets\n# that doxygen needs, which depend on the configuration options used (e.g.\n# the setting GENERATE_TREEVIEW). It is highly recommended to start with a\n# default header using\n# doxygen -w html new_header.html new_footer.html new_stylesheet.css\n# YourConfigFile\n# and then modify the file new_header.html. See also section \"Doxygen usage\"\n# for information on how to generate the default header that doxygen normally\n# uses.\n# Note: The header is subject to change so you typically have to regenerate the\n# default header when upgrading to a newer version of doxygen. For a description\n# of the possible markers and block names see the documentation.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_HEADER            =\n\n# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each\n# generated HTML page. If the tag is left blank doxygen will generate a standard\n# footer. See HTML_HEADER for more information on how to generate a default\n# footer and what special commands can be used inside the footer. 
See also\n# section \"Doxygen usage\" for information on how to generate the default footer\n# that doxygen normally uses.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_FOOTER            =\n\n# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style\n# sheet that is used by each HTML page. It can be used to fine-tune the look of\n# the HTML output. If left blank doxygen will generate a default style sheet.\n# See also section \"Doxygen usage\" for information on how to generate the style\n# sheet that doxygen normally uses.\n# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as\n# it is more robust and this tag (HTML_STYLESHEET) will in the future become\n# obsolete.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_STYLESHEET        =\n\n# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined\n# cascading style sheets that are included after the standard style sheets\n# created by doxygen. Using this option one can overrule certain style aspects.\n# This is preferred over using HTML_STYLESHEET since it does not replace the\n# standard style sheet and is therefore more robust against future updates.\n# Doxygen will copy the style sheet files to the output directory.\n# Note: The order of the extra style sheet files is of importance (e.g. the last\n# style sheet in the list overrules the setting of the previous ones in the\n# list). For an example see the documentation.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_EXTRA_STYLESHEET  =\n\n# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or\n# other source files which should be copied to the HTML output directory. Note\n# that these files will be copied to the base HTML output directory. Use the\n# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these\n# files. In the HTML_STYLESHEET file, use the file name only. 
Also note that the\n# files will be copied as-is; there are no commands or markers available.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_EXTRA_FILES       =\n\n# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen\n# will adjust the colors in the style sheet and background images according to\n# this color. Hue is specified as an angle on a colorwheel, see\n# https://en.wikipedia.org/wiki/Hue for more information. For instance the value\n# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300 is\n# purple, and 360 is red again.\n# Minimum value: 0, maximum value: 359, default value: 220.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_HUE    = 220\n\n# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors\n# in the HTML output. For a value of 0 the output will use grayscales only. A\n# value of 255 will produce the most vivid colors.\n# Minimum value: 0, maximum value: 255, default value: 100.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_SAT    = 100\n\n# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the\n# luminance component of the colors in the HTML output. Values below 100\n# gradually make the output lighter, whereas values above 100 make the output\n# darker. The value divided by 100 is the actual gamma applied, so 80 represents\n# a gamma of 0.8. The value 220 represents a gamma of 2.2, and 100 does not\n# change the gamma.\n# Minimum value: 40, maximum value: 240, default value: 80.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_GAMMA  = 80\n\n# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML\n# page will contain the date and time when the page was generated. 
Setting this\n# to YES can help to show when doxygen was last run and thus if the\n# documentation is up to date.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_TIMESTAMP         = NO\n\n# If the HTML_DYNAMIC_MENUS tag is set to YES then the generated HTML\n# documentation will contain a main index with vertical navigation menus that\n# are dynamically created via Javascript. If disabled, the navigation index will\n# consist of multiple levels of tabs that are statically embedded in every HTML\n# page. Disable this option to support browsers that do not have Javascript,\n# like the Qt help browser.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_DYNAMIC_MENUS     = YES\n\n# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML\n# documentation will contain sections that can be hidden and shown after the\n# page has loaded.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_DYNAMIC_SECTIONS  = NO\n\n# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries\n# shown in the various tree structured indices initially; the user can expand\n# and collapse entries dynamically later on. Doxygen will expand the tree to\n# such a level that at most the specified number of entries are visible (unless\n# a fully collapsed tree already exceeds this amount). So setting the number of\n# entries to 1 will produce a fully collapsed tree by default. 
0 is a special value\n# representing an infinite number of entries and will result in a fully expanded\n# tree by default.\n# Minimum value: 0, maximum value: 9999, default value: 100.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_INDEX_NUM_ENTRIES = 100\n\n# If the GENERATE_DOCSET tag is set to YES, additional index files will be\n# generated that can be used as input for Apple's Xcode 3 integrated development\n# environment (see: https://developer.apple.com/xcode/), introduced with OSX\n# 10.5 (Leopard). To create a documentation set, doxygen will generate a\n# Makefile in the HTML output directory. Running make will produce the docset in\n# that directory and running make install will install the docset in\n# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at\n# startup. See https://developer.apple.com/library/archive/featuredarticles/Doxy\n# genXcode/_index.html for more information.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_DOCSET        = NO\n\n# This tag determines the name of the docset feed. A documentation feed provides\n# an umbrella under which multiple documentation sets from a single provider\n# (such as a company or product suite) can be grouped.\n# The default value is: Doxygen generated docs.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_FEEDNAME        = \"Doxygen generated docs\"\n\n# This tag specifies a string that should uniquely identify the documentation\n# set bundle. This should be a reverse domain-name style string, e.g.\n# com.mycompany.MyDocSet. Doxygen will append .docset to the name.\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_BUNDLE_ID       = org.doxygen.Project\n\n# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify\n# the documentation publisher. 
This should be a reverse domain-name style\n# string, e.g. com.mycompany.MyDocSet.documentation.\n# The default value is: org.doxygen.Publisher.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_PUBLISHER_ID    = org.doxygen.Publisher\n\n# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.\n# The default value is: Publisher.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_PUBLISHER_NAME  = Publisher\n\n# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three\n# additional HTML index files: index.hhp, index.hhc, and index.hhk. The\n# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop\n# (see: https://www.microsoft.com/en-us/download/details.aspx?id=21138) on\n# Windows.\n#\n# The HTML Help Workshop contains a compiler that can convert all HTML output\n# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML\n# files are now used as the Windows 98 help format, and will replace the old\n# Windows help format (.hlp) on all Windows platforms in the future. Compressed\n# HTML files also contain an index, a table of contents, and you can search for\n# words in the documentation. The HTML workshop also contains a viewer for\n# compressed HTML files.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_HTMLHELP      = NO\n\n# The CHM_FILE tag can be used to specify the file name of the resulting .chm\n# file. You can add a path in front of the file if the result should not be\n# written to the html output directory.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nCHM_FILE               =\n\n# The HHC_LOCATION tag can be used to specify the location (absolute path\n# including file name) of the HTML help compiler (hhc.exe). 
If non-empty,\n# doxygen will try to run the HTML help compiler on the generated index.hhp.\n# The file has to be specified with full path.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nHHC_LOCATION           =\n\n# The GENERATE_CHI flag controls if a separate .chi index file is generated\n# (YES) or that it should be included in the master .chm file (NO).\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nGENERATE_CHI           = NO\n\n# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)\n# and project file content.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nCHM_INDEX_ENCODING     =\n\n# The BINARY_TOC flag controls whether a binary table of contents is generated\n# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it\n# enables the Previous and Next buttons.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nBINARY_TOC             = NO\n\n# The TOC_EXPAND flag can be set to YES to add extra items for group members to\n# the table of contents of the HTML help documentation and to the tree view.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nTOC_EXPAND             = NO\n\n# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and\n# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that\n# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help\n# (.qch) of the generated HTML documentation.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_QHP           = NO\n\n# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify\n# the file name of the resulting .qch file. 
The path specified is relative to\n# the HTML output folder.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQCH_FILE               =\n\n# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help\n# Project output. For more information please see Qt Help Project / Namespace\n# (see: http://doc.qt.io/archives/qt-4.8/qthelpproject.html#namespace).\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_NAMESPACE          = org.doxygen.Project\n\n# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt\n# Help Project output. For more information please see Qt Help Project / Virtual\n# Folders (see: http://doc.qt.io/archives/qt-4.8/qthelpproject.html#virtual-\n# folders).\n# The default value is: doc.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_VIRTUAL_FOLDER     = doc\n\n# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom\n# filter to add. For more information please see Qt Help Project / Custom\n# Filters (see: http://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-\n# filters).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_CUST_FILTER_NAME   =\n\n# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the\n# custom filter to add. For more information please see Qt Help Project / Custom\n# Filters (see: http://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-\n# filters).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_CUST_FILTER_ATTRS  =\n\n# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this\n# project's filter section matches. 
Qt Help Project / Filter Attributes (see:\n# http://doc.qt.io/archives/qt-4.8/qthelpproject.html#filter-attributes).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_SECT_FILTER_ATTRS  =\n\n# The QHG_LOCATION tag can be used to specify the location of Qt's\n# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the\n# generated .qhp file.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHG_LOCATION           =\n\n# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be\n# generated, together with the HTML files, they form an Eclipse help plugin. To\n# install this plugin and make it available under the help contents menu in\n# Eclipse, the contents of the directory containing the HTML and XML files needs\n# to be copied into the plugins directory of eclipse. The name of the directory\n# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.\n# After copying Eclipse needs to be restarted before the help appears.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_ECLIPSEHELP   = NO\n\n# A unique identifier for the Eclipse help plugin. When installing the plugin\n# the directory name containing the HTML and XML files should also have this\n# name. Each documentation set should have its own identifier.\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.\n\nECLIPSE_DOC_ID         = org.doxygen.Project\n\n# If you want full control over the layout of the generated HTML pages it might\n# be necessary to disable the index and replace it with your own. The\n# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top\n# of each HTML page. A value of NO enables the index and the value YES disables\n# it. 
Since the tabs in the index contain the same information as the navigation\n# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nDISABLE_INDEX          = NO\n\n# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index\n# structure should be generated to display hierarchical information. If the tag\n# value is set to YES, a side panel will be generated containing a tree-like\n# index structure (just like the one that is generated for HTML Help). For this\n# to work a browser that supports JavaScript, DHTML, CSS and frames is required\n# (i.e. any modern browser). Windows users are probably better off using the\n# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can\n# further fine-tune the look of the index. As an example, the default style\n# sheet generated by doxygen has an example that shows how to put an image at\n# the root of the tree instead of the PROJECT_NAME. 
Since the tree basically has\n# the same information as the tab index, you could consider setting\n# DISABLE_INDEX to YES when enabling this option.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_TREEVIEW      = NO\n\n# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that\n# doxygen will group on one line in the generated HTML documentation.\n#\n# Note that a value of 0 will completely suppress the enum values from appearing\n# in the overview section.\n# Minimum value: 0, maximum value: 20, default value: 4.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nENUM_VALUES_PER_LINE   = 4\n\n# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used\n# to set the initial width (in pixels) of the frame in which the tree is shown.\n# Minimum value: 0, maximum value: 1500, default value: 250.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nTREEVIEW_WIDTH         = 250\n\n# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to\n# external symbols imported via tag files in a separate window.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nEXT_LINKS_IN_WINDOW    = NO\n\n# Use this tag to change the font size of LaTeX formulas included as images in\n# the HTML documentation. When you change the font size after a successful\n# doxygen run you need to manually remove any form_*.png images from the HTML\n# output directory to force them to be regenerated.\n# Minimum value: 8, maximum value: 50, default value: 10.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nFORMULA_FONTSIZE       = 10\n\n# Use the FORMULA_TRANSPARENT tag to determine whether or not the images\n# generated for formulas are transparent PNGs. 
Transparent PNGs are not\n# supported properly for IE 6.0, but are supported on all modern browsers.\n#\n# Note that when changing this option you need to delete any form_*.png files in\n# the HTML output directory before the changes take effect.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nFORMULA_TRANSPARENT    = YES\n\n# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see\n# https://www.mathjax.org) which uses client side Javascript for the rendering\n# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX\n# installed or if you want the formulas to look prettier in the HTML output.\n# When enabled you may also need to install MathJax separately and configure the\n# path to it using the MATHJAX_RELPATH option.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nUSE_MATHJAX            = NO\n\n# When MathJax is enabled you can set the default output format to be used for\n# the MathJax output. See the MathJax site (see:\n# http://docs.mathjax.org/en/latest/output.html) for more details.\n# Possible values are: HTML-CSS (which is slower, but has the best\n# compatibility), NativeMML (i.e. MathML) and SVG.\n# The default value is: HTML-CSS.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_FORMAT         = HTML-CSS\n\n# When MathJax is enabled you need to specify the location relative to the HTML\n# output directory using the MATHJAX_RELPATH option. The destination directory\n# should contain the MathJax.js script. For instance, if the mathjax directory\n# is located at the same level as the HTML output directory, then\n# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax\n# Content Delivery Network so you can quickly see the result without installing\n# MathJax. 
However, it is strongly recommended to install a local copy of\n# MathJax from https://www.mathjax.org before deployment.\n# The default value is: https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_RELPATH        = https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/\n\n# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax\n# extension names that should be enabled during MathJax rendering. For example\n# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_EXTENSIONS     =\n\n# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces\n# of code that will be used on startup of the MathJax code. See the MathJax site\n# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an\n# example see the documentation.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_CODEFILE       =\n\n# When the SEARCHENGINE tag is enabled doxygen will generate a search box for\n# the HTML output. The underlying search engine uses javascript and DHTML and\n# should work on any modern browser. Note that when using HTML help\n# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)\n# there is already a search function so this one should typically be disabled.\n# For large projects the javascript based search engine can be slow, then\n# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to\n# search using the keyboard; to jump to the search box use <access key> + S\n# (what the <access key> is depends on the OS and browser, but it is typically\n# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down\n# key> to jump into the search results window, the results can be navigated\n# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel\n# the search. 
The filter options can be selected when the cursor is inside the\n# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>\n# to select a filter and <Enter> or <escape> to activate or cancel the filter\n# option.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nSEARCHENGINE           = YES\n\n# When the SERVER_BASED_SEARCH tag is enabled the search engine will be\n# implemented using a web server instead of a web client using Javascript. There\n# are two flavors of web server based searching depending on the EXTERNAL_SEARCH\n# setting. When disabled, doxygen will generate a PHP script for searching and\n# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing\n# and searching needs to be provided by external tools. See the section\n# \"External Indexing and Searching\" for details.\n# The default value is: NO.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSERVER_BASED_SEARCH    = NO\n\n# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP\n# script for searching. Instead the search results are written to an XML file\n# which needs to be processed by an external indexer. 
Doxygen will invoke an\n# external search engine pointed to by the SEARCHENGINE_URL option to obtain the\n# search results.\n#\n# Doxygen ships with an example indexer (doxyindexer) and search engine\n# (doxysearch.cgi) which are based on the open source search engine library\n# Xapian (see: https://xapian.org/).\n#\n# See the section \"External Indexing and Searching\" for details.\n# The default value is: NO.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTERNAL_SEARCH        = NO\n\n# The SEARCHENGINE_URL should point to a search engine hosted by a web server\n# which will return the search results when EXTERNAL_SEARCH is enabled.\n#\n# Doxygen ships with an example indexer (doxyindexer) and search engine\n# (doxysearch.cgi) which are based on the open source search engine library\n# Xapian (see: https://xapian.org/). See the section \"External Indexing and\n# Searching\" for details.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSEARCHENGINE_URL       =\n\n# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed\n# search data is written to a file for indexing by an external tool. With the\n# SEARCHDATA_FILE tag the name of this file can be specified.\n# The default file is: searchdata.xml.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSEARCHDATA_FILE        = searchdata.xml\n\n# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the\n# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is\n# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple\n# projects and redirect the results back to the right project.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTERNAL_SEARCH_ID     =\n\n# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen\n# projects other than the one defined by this configuration file, but that are\n# all added to the same external search index. 
Each project needs to have a\n# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id\n# to a relative location where the documentation can be found. The format is:\n# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTRA_SEARCH_MAPPINGS  =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the LaTeX output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.\n# The default value is: YES.\n\nGENERATE_LATEX         = NO\n\n# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: latex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_OUTPUT           = latex\n\n# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be\n# invoked.\n#\n# Note that when USE_PDFLATEX is not enabled the default is latex; when\n# USE_PDFLATEX is enabled the default is pdflatex, and if latex is\n# chosen in the latter case it is overridden by pdflatex. 
For specific output languages the\n# default can have been set differently, this depends on the implementation of\n# the output language.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_CMD_NAME         =\n\n# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate\n# index for LaTeX.\n# Note: This tag is used in the Makefile / make.bat.\n# See also: LATEX_MAKEINDEX_CMD for the part in the generated output file\n# (.tex).\n# The default file is: makeindex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nMAKEINDEX_CMD_NAME     = makeindex\n\n# The LATEX_MAKEINDEX_CMD tag can be used to specify the command name to\n# generate index for LaTeX. In case there is no backslash (\\) as first character\n# it will be automatically added in the LaTeX code.\n# Note: This tag is used in the generated output file (.tex).\n# See also: MAKEINDEX_CMD_NAME for the part in the Makefile / make.bat.\n# The default value is: makeindex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_MAKEINDEX_CMD    = makeindex\n\n# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX\n# documents. This may be useful for small projects and may help to save some\n# trees in general.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nCOMPACT_LATEX          = NO\n\n# The PAPER_TYPE tag can be used to set the paper type that is used by the\n# printer.\n# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x\n# 14 inches) and executive (7.25 x 10.5 inches).\n# The default value is: a4.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nPAPER_TYPE             = a4\n\n# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names\n# that should be included in the LaTeX output. 
The package can be specified just\n# by its name or with the correct syntax as to be used with the LaTeX\n# \\usepackage command. To get the times font for instance you can specify :\n# EXTRA_PACKAGES=times or EXTRA_PACKAGES={times}\n# To use the option intlimits with the amsmath package you can specify:\n# EXTRA_PACKAGES=[intlimits]{amsmath}\n# If left blank no extra packages will be included.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nEXTRA_PACKAGES         =\n\n# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the\n# generated LaTeX document. The header should contain everything until the first\n# chapter. If it is left blank doxygen will generate a standard header. See\n# section \"Doxygen usage\" for information on how to let doxygen write the\n# default header to a separate file.\n#\n# Note: Only use a user-defined header if you know what you are doing! The\n# following commands have a special meaning inside the header: $title,\n# $datetime, $date, $doxygenversion, $projectname, $projectnumber,\n# $projectbrief, $projectlogo. Doxygen will replace $title with the empty\n# string, for the replacement values of the other commands the user is referred\n# to HTML_HEADER.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_HEADER           =\n\n# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the\n# generated LaTeX document. The footer should contain everything after the last\n# chapter. If it is left blank doxygen will generate a standard footer. 
See\n# LATEX_HEADER for more information on how to generate a default footer and what\n# special commands can be used inside the footer.\n#\n# Note: Only use a user-defined footer if you know what you are doing!\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_FOOTER           =\n\n# The LATEX_EXTRA_STYLESHEET tag can be used to specify additional user-defined\n# LaTeX style sheets that are included after the standard style sheets created\n# by doxygen. Using this option one can overrule certain style aspects. Doxygen\n# will copy the style sheet files to the output directory.\n# Note: The order of the extra style sheet files is of importance (e.g. the last\n# style sheet in the list overrules the setting of the previous ones in the\n# list).\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_EXTRA_STYLESHEET =\n\n# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or\n# other source files which should be copied to the LATEX_OUTPUT output\n# directory. Note that the files will be copied as-is; there are no commands or\n# markers available.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_EXTRA_FILES      =\n\n# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is\n# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will\n# contain links (just like the HTML output) instead of page references. This\n# makes the output suitable for online browsing using a PDF viewer.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nPDF_HYPERLINKS         = YES\n\n# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate\n# the PDF file directly from the LaTeX files. 
Set this option to YES, to get a\n# higher quality PDF documentation.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nUSE_PDFLATEX           = YES\n\n# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode\n# command to the generated LaTeX files. This will instruct LaTeX to keep running\n# if errors occur, instead of asking the user for help. This option is also used\n# when generating formulas in HTML.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_BATCHMODE        = NO\n\n# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the\n# index chapters (such as File Index, Compound Index, etc.) in the output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_HIDE_INDICES     = NO\n\n# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source\n# code with syntax highlighting in the LaTeX output.\n#\n# Note that which sources are shown also depends on other settings such as\n# SOURCE_BROWSER.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_SOURCE_CODE      = NO\n\n# The LATEX_BIB_STYLE tag can be used to specify the style to use for the\n# bibliography, e.g. plainnat, or ieeetr. See\n# https://en.wikipedia.org/wiki/BibTeX and \\cite for more info.\n# The default value is: plain.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_BIB_STYLE        = plain\n\n# If the LATEX_TIMESTAMP tag is set to YES then the footer of each generated\n# page will contain the date and time when the page was generated. 
Setting this\n# to NO can help when comparing the output of multiple runs.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_TIMESTAMP        = NO\n\n# The LATEX_EMOJI_DIRECTORY tag is used to specify the (relative or absolute)\n# path from which the emoji images will be read. If a relative path is entered,\n# it will be relative to the LATEX_OUTPUT directory. If left blank the\n# LATEX_OUTPUT directory will be used.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_EMOJI_DIRECTORY  =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the RTF output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The\n# RTF output is optimized for Word 97 and may not look too pretty with other RTF\n# readers/editors.\n# The default value is: NO.\n\nGENERATE_RTF           = NO\n\n# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: rtf.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_OUTPUT             = rtf\n\n# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF\n# documents. This may be useful for small projects and may help to save some\n# trees in general.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nCOMPACT_RTF            = NO\n\n# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will\n# contain hyperlink fields. The RTF file will contain links (just like the HTML\n# output) instead of page references. 
This makes the output suitable for online\n# browsing using Word or some other Word compatible readers that support those\n# fields.\n#\n# Note: WordPad (write) and others do not support links.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_HYPERLINKS         = NO\n\n# Load stylesheet definitions from file. Syntax is similar to doxygen's\n# configuration file, i.e. a series of assignments. You only have to provide\n# replacements, missing definitions are set to their default value.\n#\n# See also section \"Doxygen usage\" for information on how to generate the\n# default style sheet that doxygen normally uses.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_STYLESHEET_FILE    =\n\n# Set optional variables used in the generation of an RTF document. Syntax is\n# similar to doxygen's configuration file. A template extensions file can be\n# generated using doxygen -e rtf extensionFile.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_EXTENSIONS_FILE    =\n\n# If the RTF_SOURCE_CODE tag is set to YES then doxygen will include source code\n# with syntax highlighting in the RTF output.\n#\n# Note that which sources are shown also depends on other settings such as\n# SOURCE_BROWSER.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_SOURCE_CODE        = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the man page output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for\n# classes and files.\n# The default value is: NO.\n\nGENERATE_MAN           = NO\n\n# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it. 
A directory man3 will be created inside the directory specified by\n# MAN_OUTPUT.\n# The default directory is: man.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_OUTPUT             = man\n\n# The MAN_EXTENSION tag determines the extension that is added to the generated\n# man pages. In case the manual section does not start with a number, the number\n# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is\n# optional.\n# The default value is: .3.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_EXTENSION          = .3\n\n# The MAN_SUBDIR tag determines the name of the directory created within\n# MAN_OUTPUT in which the man pages are placed. It defaults to man followed by\n# MAN_EXTENSION with the initial . removed.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_SUBDIR             =\n\n# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it\n# will generate one additional man file for each entity documented in the real\n# man page(s). These additional files only source the real man page, but without\n# them the man command would be unable to find the correct page.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_LINKS              = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the XML output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that\n# captures the structure of the code including all documentation.\n# The default value is: NO.\n\nGENERATE_XML           = NO\n\n# The XML_OUTPUT tag is used to specify where the XML pages will be put. 
If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: xml.\n# This tag requires that the tag GENERATE_XML is set to YES.\n\nXML_OUTPUT             = xml\n\n# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program\n# listings (including syntax highlighting and cross-referencing information) to\n# the XML output. Note that enabling this will significantly increase the size\n# of the XML output.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_XML is set to YES.\n\nXML_PROGRAMLISTING     = YES\n\n# If the XML_NS_MEMB_FILE_SCOPE tag is set to YES, doxygen will include\n# namespace members in file scope as well, matching the HTML output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_XML is set to YES.\n\nXML_NS_MEMB_FILE_SCOPE = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the DOCBOOK output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files\n# that can be used to generate PDF.\n# The default value is: NO.\n\nGENERATE_DOCBOOK       = NO\n\n# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.\n# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in\n# front of it.\n# The default directory is: docbook.\n# This tag requires that the tag GENERATE_DOCBOOK is set to YES.\n\nDOCBOOK_OUTPUT         = docbook\n\n# If the DOCBOOK_PROGRAMLISTING tag is set to YES, doxygen will include the\n# program listings (including syntax highlighting and cross-referencing\n# information) to the DOCBOOK output. 
Note that enabling this will significantly\n# increase the size of the DOCBOOK output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_DOCBOOK is set to YES.\n\nDOCBOOK_PROGRAMLISTING = NO\n\n#---------------------------------------------------------------------------\n# Configuration options for the AutoGen Definitions output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an\n# AutoGen Definitions (see http://autogen.sourceforge.net/) file that captures\n# the structure of the code including all documentation. Note that this feature\n# is still experimental and incomplete at the moment.\n# The default value is: NO.\n\nGENERATE_AUTOGEN_DEF   = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the Perl module output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module\n# file that captures the structure of the code including all documentation.\n#\n# Note that this feature is still experimental and incomplete at the moment.\n# The default value is: NO.\n\nGENERATE_PERLMOD       = NO\n\n# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary\n# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI\n# output from the Perl module output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_LATEX          = NO\n\n# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely\n# formatted so it can be parsed by a human reader. This is useful if you want to\n# understand what is going on. 
On the other hand, if this tag is set to NO, the\n# size of the Perl module output will be much smaller and Perl will parse it\n# just the same.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_PRETTY         = YES\n\n# The names of the make variables in the generated doxyrules.make file are\n# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful\n# so different doxyrules.make files included by the same Makefile don't\n# overwrite each other's variables.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_MAKEVAR_PREFIX =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the preprocessor\n#---------------------------------------------------------------------------\n\n# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all\n# C-preprocessor directives found in the sources and include files.\n# The default value is: YES.\n\nENABLE_PREPROCESSING   = YES\n\n# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names\n# in the source code. If set to NO, only conditional compilation will be\n# performed. 
Macro expansion can be done in a controlled way by setting\n# EXPAND_ONLY_PREDEF to YES.\n# The default value is: NO.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nMACRO_EXPANSION        = NO\n\n# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then\n# the macro expansion is limited to the macros specified with the PREDEFINED and\n# EXPAND_AS_DEFINED tags.\n# The default value is: NO.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nEXPAND_ONLY_PREDEF     = NO\n\n# If the SEARCH_INCLUDES tag is set to YES, the include files in the\n# INCLUDE_PATH will be searched if a #include is found.\n# The default value is: YES.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nSEARCH_INCLUDES        = YES\n\n# The INCLUDE_PATH tag can be used to specify one or more directories that\n# contain include files that are not input files but should be processed by the\n# preprocessor.\n# This tag requires that the tag SEARCH_INCLUDES is set to YES.\n\nINCLUDE_PATH           =\n\n# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard\n# patterns (like *.h and *.hpp) to filter out the header-files in the\n# directories. If left blank, the patterns specified with FILE_PATTERNS will be\n# used.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nINCLUDE_FILE_PATTERNS  =\n\n# The PREDEFINED tag can be used to specify one or more macro names that are\n# defined before the preprocessor is started (similar to the -D option of e.g.\n# gcc). The argument of the tag is a list of macros of the form: name or\n# name=definition (no spaces). If the definition and the \"=\" are omitted, \"=1\"\n# is assumed. 
To prevent a macro definition from being undefined via #undef or\n# recursively expanded use the := operator instead of the = operator.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nPREDEFINED             =\n\n# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this\n# tag can be used to specify a list of macro names that should be expanded. The\n# macro definition that is found in the sources will be used. Use the PREDEFINED\n# tag if you want to use a different macro definition that overrules the\n# definition found in the source code.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nEXPAND_AS_DEFINED      =\n\n# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will\n# remove all references to function-like macros that are alone on a line, have\n# an all uppercase name, and do not end with a semicolon. Such function macros\n# are typically used for boiler-plate code, and will confuse the parser if not\n# removed.\n# The default value is: YES.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nSKIP_FUNCTION_MACROS   = YES\n\n#---------------------------------------------------------------------------\n# Configuration options related to external references\n#---------------------------------------------------------------------------\n\n# The TAGFILES tag can be used to specify one or more tag files. For each tag\n# file the location of the external documentation should be added. The format of\n# a tag file without this location is as follows:\n# TAGFILES = file1 file2 ...\n# Adding location for the tag files is done as follows:\n# TAGFILES = file1=loc1 \"file2 = loc2\" ...\n# where loc1 and loc2 can be relative or absolute paths or URLs. See the\n# section \"Linking to external documentation\" for more information about the use\n# of tag files.\n# Note: Each tag file must have a unique name (where the name does NOT include\n# the path). 
If a tag file is not located in the directory in which doxygen is\n# run, you must also specify the path to the tagfile here.\n\nTAGFILES               =\n\n# When a file name is specified after GENERATE_TAGFILE, doxygen will create a\n# tag file that is based on the input files it reads. See section \"Linking to\n# external documentation\" for more information about the usage of tag files.\n\nGENERATE_TAGFILE       =\n\n# If the ALLEXTERNALS tag is set to YES, all external class will be listed in\n# the class index. If set to NO, only the inherited external classes will be\n# listed.\n# The default value is: NO.\n\nALLEXTERNALS           = NO\n\n# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed\n# in the modules index. If set to NO, only the current project's groups will be\n# listed.\n# The default value is: YES.\n\nEXTERNAL_GROUPS        = YES\n\n# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in\n# the related pages index. If set to NO, only the current project's pages will\n# be listed.\n# The default value is: YES.\n\nEXTERNAL_PAGES         = YES\n\n# The PERL_PATH should be the absolute path and name of the perl script\n# interpreter (i.e. the result of 'which perl').\n# The default file (with absolute path) is: /usr/bin/perl.\n\nPERL_PATH              = /usr/bin/perl\n\n#---------------------------------------------------------------------------\n# Configuration options related to the dot tool\n#---------------------------------------------------------------------------\n\n# If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a class diagram\n# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to\n# NO turns the diagrams off. 
Note that this option also works with HAVE_DOT\n# disabled, but it is recommended to install and use dot, since it yields more\n# powerful graphs.\n# The default value is: YES.\n\nCLASS_DIAGRAMS         = YES\n\n# You can define message sequence charts within doxygen comments using the \\msc\n# command. Doxygen will then run the mscgen tool (see:\n# http://www.mcternan.me.uk/mscgen/)) to produce the chart and insert it in the\n# documentation. The MSCGEN_PATH tag allows you to specify the directory where\n# the mscgen tool resides. If left empty the tool is assumed to be found in the\n# default search path.\n\nMSCGEN_PATH            =\n\n# You can include diagrams made with dia in doxygen documentation. Doxygen will\n# then run dia to produce the diagram and insert it in the documentation. The\n# DIA_PATH tag allows you to specify the directory where the dia binary resides.\n# If left empty dia is assumed to be found in the default search path.\n\nDIA_PATH               =\n\n# If set to YES the inheritance and collaboration graphs will hide inheritance\n# and usage relations if the target is undocumented or is not a class.\n# The default value is: YES.\n\nHIDE_UNDOC_RELATIONS   = YES\n\n# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is\n# available from the path. This tool is part of Graphviz (see:\n# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent\n# Bell Labs. The other options in this section have no effect if this option is\n# set to NO\n# The default value is: NO.\n\nHAVE_DOT               = NO\n\n# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed\n# to run in parallel. When set to 0 doxygen will base this on the number of\n# processors available in the system. 
You can set it explicitly to a value\n# larger than 0 to get control over the balance between CPU load and processing\n# speed.\n# Minimum value: 0, maximum value: 32, default value: 0.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_NUM_THREADS        = 0\n\n# When you want a differently looking font in the dot files that doxygen\n# generates you can specify the font name using DOT_FONTNAME. You need to make\n# sure dot is able to find the font, which can be done by putting it in a\n# standard location or by setting the DOTFONTPATH environment variable or by\n# setting DOT_FONTPATH to the directory containing the font.\n# The default value is: Helvetica.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTNAME           = Helvetica\n\n# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of\n# dot graphs.\n# Minimum value: 4, maximum value: 24, default value: 10.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTSIZE           = 10\n\n# By default doxygen will tell dot to use the default font as specified with\n# DOT_FONTNAME. 
If you specify a different font using DOT_FONTNAME you can set\n# the path where dot can find it using this tag.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTPATH           =\n\n# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for\n# each documented class showing the direct and indirect inheritance relations.\n# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCLASS_GRAPH            = YES\n\n# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a\n# graph for each documented class showing the direct and indirect implementation\n# dependencies (inheritance, containment, and class references variables) of the\n# class with other documented classes.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCOLLABORATION_GRAPH    = YES\n\n# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for\n# groups, showing the direct groups dependencies.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGROUP_GRAPHS           = YES\n\n# If the UML_LOOK tag is set to YES, doxygen will generate inheritance and\n# collaboration diagrams in a style similar to the OMG's Unified Modeling\n# Language.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nUML_LOOK               = NO\n\n# If the UML_LOOK tag is enabled, the fields and methods are shown inside the\n# class node. If there are many fields or methods and many nodes the graph may\n# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the\n# number of items for each type to make the size more manageable. Set this to 0\n# for no limit. Note that the threshold may be exceeded by 50% before the limit\n# is enforced. 
So when you set the threshold to 10, up to 15 fields may appear,\n# but if the number exceeds 15, the total amount of fields shown is limited to\n# 10.\n# Minimum value: 0, maximum value: 100, default value: 10.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nUML_LIMIT_NUM_FIELDS   = 10\n\n# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and\n# collaboration graphs will show the relations between templates and their\n# instances.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nTEMPLATE_RELATIONS     = NO\n\n# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to\n# YES then doxygen will generate a graph for each documented file showing the\n# direct and indirect include dependencies of the file with other documented\n# files.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINCLUDE_GRAPH          = YES\n\n# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are\n# set to YES then doxygen will generate a graph for each documented file showing\n# the direct and indirect include dependencies of the file with other documented\n# files.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINCLUDED_BY_GRAPH      = YES\n\n# If the CALL_GRAPH tag is set to YES then doxygen will generate a call\n# dependency graph for every global function or class method.\n#\n# Note that enabling this option will significantly increase the time of a run.\n# So in most cases it will be better to enable call graphs for selected\n# functions only using the \\callgraph command. 
Disabling a call graph can be\n# accomplished by means of the command \\hidecallgraph.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCALL_GRAPH             = NO\n\n# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller\n# dependency graph for every global function or class method.\n#\n# Note that enabling this option will significantly increase the time of a run.\n# So in most cases it will be better to enable caller graphs for selected\n# functions only using the \\callergraph command. Disabling a caller graph can be\n# accomplished by means of the command \\hidecallergraph.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCALLER_GRAPH           = NO\n\n# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a\n# graphical hierarchy of all classes instead of a textual one.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGRAPHICAL_HIERARCHY    = YES\n\n# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the\n# dependencies a directory has on other directories in a graphical way. The\n# dependency relations are determined by the #include relations between the\n# files in the directories.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDIRECTORY_GRAPH        = YES\n\n# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images\n# generated by dot. 
For an explanation of the image formats see the section\n# output formats in the documentation of the dot tool (Graphviz (see:\n# http://www.graphviz.org/)).\n# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order\n# to make the SVG files visible in IE 9+ (other browsers do not have this\n# requirement).\n# Possible values are: png, jpg, gif, svg, png:gd, png:gd:gd, png:cairo,\n# png:cairo:gd, png:cairo:cairo, png:cairo:gdiplus, png:gdiplus and\n# png:gdiplus:gdiplus.\n# The default value is: png.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_IMAGE_FORMAT       = png\n\n# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to\n# enable generation of interactive SVG images that allow zooming and panning.\n#\n# Note that this requires a modern browser other than Internet Explorer. Tested\n# and working are Firefox, Chrome, Safari, and Opera.\n# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make\n# the SVG files visible. Older versions of IE do not have SVG support.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINTERACTIVE_SVG        = NO\n\n# The DOT_PATH tag can be used to specify the path where the dot tool can be\n# found. 
If left blank, it is assumed the dot tool can be found in the path.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_PATH               =\n\n# The DOTFILE_DIRS tag can be used to specify one or more directories that\n# contain dot files that are included in the documentation (see the \\dotfile\n# command).\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOTFILE_DIRS           =\n\n# The MSCFILE_DIRS tag can be used to specify one or more directories that\n# contain msc files that are included in the documentation (see the \\mscfile\n# command).\n\nMSCFILE_DIRS           =\n\n# The DIAFILE_DIRS tag can be used to specify one or more directories that\n# contain dia files that are included in the documentation (see the \\diafile\n# command).\n\nDIAFILE_DIRS           =\n\n# When using plantuml, the PLANTUML_JAR_PATH tag should be used to specify the\n# path where java can find the plantuml.jar file. If left blank, it is assumed\n# PlantUML is not used or called during a preprocessing step. Doxygen will\n# generate a warning when it encounters a \\startuml command in this case and\n# will not generate output for the diagram.\n\nPLANTUML_JAR_PATH      =\n\n# When using plantuml, the PLANTUML_CFG_FILE tag can be used to specify a\n# configuration file for plantuml.\n\nPLANTUML_CFG_FILE      =\n\n# When using plantuml, the specified paths are searched for files specified by\n# the !include statement in a plantuml block.\n\nPLANTUML_INCLUDE_PATH  =\n\n# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes\n# that will be shown in the graph. If the number of nodes in a graph becomes\n# larger than this value, doxygen will truncate the graph, which is visualized\n# by representing a node as a red box. Note that doxygen if the number of direct\n# children of the root node in a graph is already larger than\n# DOT_GRAPH_MAX_NODES then the graph will not be shown at all. 
Also note that\n# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.\n# Minimum value: 0, maximum value: 10000, default value: 50.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_GRAPH_MAX_NODES    = 50\n\n# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs\n# generated by dot. A depth value of 3 means that only nodes reachable from the\n# root by following a path via at most 3 edges will be shown. Nodes that lay\n# further from the root node will be omitted. Note that setting this option to 1\n# or 2 may greatly reduce the computation time needed for large code bases. Also\n# note that the size of a graph can be further restricted by\n# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.\n# Minimum value: 0, maximum value: 1000, default value: 0.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nMAX_DOT_GRAPH_DEPTH    = 0\n\n# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent\n# background. This is disabled by default, because dot on Windows does not seem\n# to support this out of the box.\n#\n# Warning: Depending on the platform used, enabling this option may lead to\n# badly anti-aliased labels on the edges of a graph (i.e. they become hard to\n# read).\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_TRANSPARENT        = NO\n\n# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output\n# files in one run (i.e. multiple -o and -T options on the command line). 
This\n# makes dot run faster, but since only newer versions of dot (>1.8.10) support\n# this, this feature is disabled by default.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_MULTI_TARGETS      = NO\n\n# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page\n# explaining the meaning of the various boxes and arrows in the dot generated\n# graphs.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGENERATE_LEGEND        = YES\n\n# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot\n# files that are used to generate the various graphs.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_CLEANUP            = YES\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/docs/benchmarks.md",
    "content": "# Benchmarks\n\nA simple benchmark of Simple-Web-Server and a few similar web libraries.\n\nDetails:\n* Linux distribution: Debian Testing (2019-07-29)\n* Linux kernel: 4.19.0-1-amd64\n* CPU: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz\n* CPU cores: 4\n* The HTTP load generator [httperf](https://github.com/httperf/httperf) is used\nto create the benchmark results, with the following arguments:\n```sh\nhttperf --server=localhost --port=3000 --uri=/ --num-conns=20000 --num-calls=200\n```\n\nThe response messages were made identical.\n\n## Express\n\n[Express](https://expressjs.com/) is a popular Node.js web framework.\n\nVersions:\n* Node: v10.15.2\n* Express: 4.17.1\n\nCode:\n```js\nconst express = require('express');\nconst app = express();\n\napp.get('/', (req, res) => {\n  res.removeHeader('X-Powered-By');\n  res.removeHeader('Connection');\n  res.end('Hello World!')\n});\n\nconst port = 3000;\napp.listen(port, () => console.log(`Example app listening on port ${port}!`));\n```\n\nExecution:\n```sh\nNODE_ENV=production node index.js\n```\n\nExample results (13659.7 req/s):\n```sh\nhttperf --client=0/1 --server=localhost --port=3000 --uri=/ --send-buffer=4096 --recv-buffer=16384 --num-conns=20000 --num-calls=200\nhttperf: warning: open file limit > FD_SETSIZE; limiting max. 
# of open files to FD_SETSIZE\nMaximum connect burst length: 1\n\nTotal: connections 20000 requests 40000 replies 20000 test-duration 2.928 s\n\nConnection rate: 6829.9 conn/s (0.1 ms/conn, <=1 concurrent connections)\nConnection time [ms]: min 0.1 avg 0.1 max 14.8 median 0.5 stddev 0.1\nConnection time [ms]: connect 0.0\nConnection length [replies/conn]: 1.000\n\nRequest rate: 13659.7 req/s (0.1 ms/req)\nRequest size [B]: 62.0\n\nReply rate [replies/s]: min 0.0 avg 0.0 max 0.0 stddev 0.0 (0 samples)\nReply time [ms]: response 0.1 transfer 0.0\nReply size [B]: header 76.0 content 12.0 footer 0.0 (total 88.0)\nReply status: 1xx=0 2xx=20000 3xx=0 4xx=0 5xx=0\n\nCPU time [s]: user 0.66 system 2.27 (user 22.4% system 77.5% total 99.9%)\nNet I/O: 1414.0 KB/s (11.6*10^6 bps)\n\nErrors: total 20000 client-timo 0 socket-timo 0 connrefused 0 connreset 20000\nErrors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0\n```\n\n## Hyper\n\n[Hyper](https://hyper.rs/) is a Rust HTTP library that topped the\n[TechEmpower Web Framework Benchmarks results](https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=plaintext) in 2019-07-09.\n\nVersions:\n* rustc: 1.38.0-nightly\n* hyper: 0.12\n\nCode (copied from\nhttps://github.com/hyperium/hyper/blob/0.12.x/examples/hello.rs, but removed `pretty_env_logger`\ncalls due to compilation issues):\n```rust\n#![deny(warnings)]\nextern crate hyper;\n// extern crate pretty_env_logger;\n\nuse hyper::{Body, Request, Response, Server};\nuse hyper::service::service_fn_ok;\nuse hyper::rt::{self, Future};\n\nfn main() {\n    // pretty_env_logger::init();\n    let addr = ([127, 0, 0, 1], 3000).into();\n\n    let server = Server::bind(&addr)\n        .serve(|| {\n            // This is the `Service` that will handle the connection.\n            // `service_fn_ok` is a helper to convert a function that\n            // returns a Response into a `Service`.\n            service_fn_ok(move |_: Request<Body>| {\n                
Response::new(Body::from(\"Hello World!\"))\n            })\n        })\n        .map_err(|e| eprintln!(\"server error: {}\", e));\n\n    println!(\"Listening on http://{}\", addr);\n\n    rt::run(server);\n}\n```\n\nCompilation and run:\n```sh\ncargo run --release\n```\n\nExample results (60712.3 req/s):\n```sh\nhttperf --client=0/1 --server=localhost --port=3000 --uri=/ --send-buffer=4096 --recv-buffer=16384 --num-conns=20000 --num-calls=200\nhttperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE\nMaximum connect burst length: 1\n\nTotal: connections 20000 requests 4000000 replies 4000000 test-duration 65.884 s\n\nConnection rate: 303.6 conn/s (3.3 ms/conn, <=1 concurrent connections)\nConnection time [ms]: min 3.0 avg 3.3 max 11.3 median 3.5 stddev 0.3\nConnection time [ms]: connect 0.0\nConnection length [replies/conn]: 200.000\n\nRequest rate: 60712.3 req/s (0.0 ms/req)\nRequest size [B]: 62.0\n\nReply rate [replies/s]: min 58704.0 avg 60732.7 max 62587.7 stddev 1021.7 (13 samples)\nReply time [ms]: response 0.0 transfer 0.0\nReply size [B]: header 76.0 content 12.0 footer 0.0 (total 88.0)\nReply status: 1xx=0 2xx=4000000 3xx=0 4xx=0 5xx=0\n\nCPU time [s]: user 15.91 system 49.97 (user 24.1% system 75.8% total 100.0%)\nNet I/O: 8893.4 KB/s (72.9*10^6 bps)\n\nErrors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0\nErrors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0\n```\n\n## Simple-Web-Server\n\nIn these simplistic tests, the performance of Simple-Web-Server is similar to\nthe Hyper Rust HTTP library, although Hyper seems to be slightly faster more\noften than not.\n\nVersions:\n* g++: 9.1.0\n\nCode (modified `http_examples.cpp`):\n```c++\n#include \"server_http.hpp\"\n\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\n\nint main() {\n  HttpServer server;\n  server.config.port = 3000;\n\n  server.default_resource[\"GET\"] = [](std::shared_ptr<HttpServer::Response> response, 
std::shared_ptr<HttpServer::Request> /*request*/) {\n    response->write(\"Hello World!\", {{\"Date\", SimpleWeb::Date::to_string(std::chrono::system_clock::now())}});\n  };\n\n  server.start();\n}\n```\n\nBuild, compilation and run:\n```sh\nmkdir build && cd build\nCXX=g++-9 CXXFLAGS=\"-O2 -DNDEBUG -flto\" cmake ..\nmake\n./http_examples\n```\n\nExample results (60596.3 req/s):\n```sh\nhttperf --client=0/1 --server=localhost --port=3000 --uri=/ --send-buffer=4096 --recv-buffer=16384 --num-conns=20000 --num-calls=200\nhttperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE\nMaximum connect burst length: 1\n\nTotal: connections 20000 requests 4000000 replies 4000000 test-duration 66.011 s\n\nConnection rate: 303.0 conn/s (3.3 ms/conn, <=1 concurrent connections)\nConnection time [ms]: min 3.2 avg 3.3 max 8.0 median 3.5 stddev 0.0\nConnection time [ms]: connect 0.0\nConnection length [replies/conn]: 200.000\n\nRequest rate: 60596.3 req/s (0.0 ms/req)\nRequest size [B]: 62.0\n\nReply rate [replies/s]: min 60399.6 avg 60596.9 max 60803.8 stddev 130.9 (13 samples)\nReply time [ms]: response 0.0 transfer 0.0\nReply size [B]: header 76.0 content 12.0 footer 0.0 (total 88.0)\nReply status: 1xx=0 2xx=4000000 3xx=0 4xx=0 5xx=0\n\nCPU time [s]: user 16.07 system 49.93 (user 24.3% system 75.6% total 100.0%)\nNet I/O: 8876.4 KB/s (72.7*10^6 bps)\n\nErrors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0\nErrors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0\n```\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/http_examples.cpp",
    "content": "#include \"client_http.hpp\"\n#include \"server_http.hpp\"\n#include <future>\n\n// Added for the json-example\n#define BOOST_SPIRIT_THREADSAFE\n#include <boost/property_tree/json_parser.hpp>\n#include <boost/property_tree/ptree.hpp>\n\n// Added for the default_resource example\n#include <algorithm>\n#include <boost/filesystem.hpp>\n#include <fstream>\n#include <vector>\n#ifdef HAVE_OPENSSL\n#include \"crypto.hpp\"\n#endif\n\nusing namespace std;\n// Added for the json-example:\nusing namespace boost::property_tree;\n\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\nusing HttpClient = SimpleWeb::Client<SimpleWeb::HTTP>;\n\nint main() {\n  // HTTP-server at port 8080 using 1 thread\n  // Unless you do more heavy non-threaded processing in the resources,\n  // 1 thread is usually faster than several threads\n  HttpServer server;\n  server.config.port = 8080;\n\n  // Add resources using path-regex and method-string, and an anonymous function\n  // POST-example for the path /string, responds the posted string\n  server.resource[\"^/string$\"][\"POST\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    // Retrieve string:\n    auto content = request->content.string();\n    // request->content.string() is a convenience function for:\n    // stringstream ss;\n    // ss << request->content.rdbuf();\n    // auto content=ss.str();\n\n    *response << \"HTTP/1.1 200 OK\\r\\nContent-Length: \" << content.length() << \"\\r\\n\\r\\n\"\n              << content;\n\n\n    // Alternatively, use one of the convenience functions, for instance:\n    // response->write(content);\n  };\n\n  // POST-example for the path /json, responds firstName+\" \"+lastName from the posted json\n  // Responds with an appropriate error message if the posted json is not valid, or if firstName or lastName is missing\n  // Example posted json:\n  // {\n  //   \"firstName\": \"John\",\n  //   \"lastName\": \"Smith\",\n  //   \"age\": 
25\n  // }\n  server.resource[\"^/json$\"][\"POST\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    try {\n      ptree pt;\n      read_json(request->content, pt);\n\n      auto name = pt.get<string>(\"firstName\") + \" \" + pt.get<string>(\"lastName\");\n\n      *response << \"HTTP/1.1 200 OK\\r\\n\"\n                << \"Content-Length: \" << name.length() << \"\\r\\n\\r\\n\"\n                << name;\n    }\n    catch(const exception &e) {\n      *response << \"HTTP/1.1 400 Bad Request\\r\\nContent-Length: \" << strlen(e.what()) << \"\\r\\n\\r\\n\"\n                << e.what();\n    }\n\n\n    // Alternatively, using a convenience function:\n    // try {\n    //     ptree pt;\n    //     read_json(request->content, pt);\n\n    //     auto name=pt.get<string>(\"firstName\")+\" \"+pt.get<string>(\"lastName\");\n    //     response->write(name);\n    // }\n    // catch(const exception &e) {\n    //     response->write(SimpleWeb::StatusCode::client_error_bad_request, e.what());\n    // }\n  };\n\n  // GET-example for the path /info\n  // Responds with request-information\n  server.resource[\"^/info$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    stringstream stream;\n    stream << \"<h1>Request from \" << request->remote_endpoint().address().to_string() << \":\" << request->remote_endpoint().port() << \"</h1>\";\n\n    stream << request->method << \" \" << request->path << \" HTTP/\" << request->http_version;\n\n    stream << \"<h2>Query Fields</h2>\";\n    auto query_fields = request->parse_query_string();\n    for(auto &field : query_fields)\n      stream << field.first << \": \" << field.second << \"<br>\";\n\n    stream << \"<h2>Header Fields</h2>\";\n    for(auto &field : request->header)\n      stream << field.first << \": \" << field.second << \"<br>\";\n\n    response->write(stream);\n  };\n\n  // GET-example for the path /match/[number], responds 
with the matched string in path (number)\n  // For instance a request GET /match/123 will receive: 123\n  server.resource[\"^/match/([0-9]+)$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    response->write(request->path_match[1].str());\n  };\n\n  // GET-example simulating heavy work in a separate thread\n  server.resource[\"^/work$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> /*request*/) {\n    thread work_thread([response] {\n      this_thread::sleep_for(chrono::seconds(5));\n      response->write(\"Work done\");\n    });\n    work_thread.detach();\n  };\n\n  // Default GET-example. If no other matches, this anonymous function will be called.\n  // Will respond with content in the web/-directory, and its subdirectories.\n  // Default file: index.html\n  // Can for instance be used to retrieve an HTML 5 client that uses REST-resources on this server\n  server.default_resource[\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    try {\n      auto web_root_path = boost::filesystem::canonical(\"web\");\n      auto path = boost::filesystem::canonical(web_root_path / request->path);\n      // Check if path is within web_root_path\n      if(distance(web_root_path.begin(), web_root_path.end()) > distance(path.begin(), path.end()) ||\n         !equal(web_root_path.begin(), web_root_path.end(), path.begin()))\n        throw invalid_argument(\"path must be within root path\");\n      if(boost::filesystem::is_directory(path))\n        path /= \"index.html\";\n\n      SimpleWeb::CaseInsensitiveMultimap header;\n\n      // Uncomment the following line to enable Cache-Control\n      // header.emplace(\"Cache-Control\", \"max-age=86400\");\n\n#ifdef HAVE_OPENSSL\n//    Uncomment the following lines to enable ETag\n//    {\n//      ifstream ifs(path.string(), ifstream::in | ios::binary);\n//      if(ifs) {\n//        auto 
hash = SimpleWeb::Crypto::to_hex_string(SimpleWeb::Crypto::md5(ifs));\n//        header.emplace(\"ETag\", \"\\\"\" + hash + \"\\\"\");\n//        auto it = request->header.find(\"If-None-Match\");\n//        if(it != request->header.end()) {\n//          if(!it->second.empty() && it->second.compare(1, hash.size(), hash) == 0) {\n//            response->write(SimpleWeb::StatusCode::redirection_not_modified, header);\n//            return;\n//          }\n//        }\n//      }\n//      else\n//        throw invalid_argument(\"could not read file\");\n//    }\n#endif\n\n      auto ifs = make_shared<ifstream>();\n      ifs->open(path.string(), ifstream::in | ios::binary | ios::ate);\n\n      if(*ifs) {\n        auto length = ifs->tellg();\n        ifs->seekg(0, ios::beg);\n\n        header.emplace(\"Content-Length\", to_string(length));\n        response->write(header);\n\n        // Trick to define a recursive function within this scope (for example purposes)\n        class FileServer {\n        public:\n          static void read_and_send(const shared_ptr<HttpServer::Response> &response, const shared_ptr<ifstream> &ifs) {\n            // Read and send 128 KB at a time\n            static vector<char> buffer(131072); // Safe when server is running on one thread\n            streamsize read_length;\n            if((read_length = ifs->read(&buffer[0], static_cast<streamsize>(buffer.size())).gcount()) > 0) {\n              response->write(&buffer[0], read_length);\n              if(read_length == static_cast<streamsize>(buffer.size())) {\n                response->send([response, ifs](const SimpleWeb::error_code &ec) {\n                  if(!ec)\n                    read_and_send(response, ifs);\n                  else\n                    cerr << \"Connection interrupted\" << endl;\n                });\n              }\n            }\n          }\n        };\n        FileServer::read_and_send(response, ifs);\n      }\n      else\n        throw invalid_argument(\"could 
not read file\");\n    }\n    catch(const exception &e) {\n      response->write(SimpleWeb::StatusCode::client_error_bad_request, \"Could not open path \" + request->path + \": \" + e.what());\n    }\n  };\n\n  server.on_error = [](shared_ptr<HttpServer::Request> /*request*/, const SimpleWeb::error_code & /*ec*/) {\n    // Handle errors here\n    // Note that connection timeouts will also call this handler with ec set to SimpleWeb::errc::operation_canceled\n  };\n\n  // Start server and receive assigned port when server is listening for requests\n  promise<unsigned short> server_port;\n  thread server_thread([&server, &server_port]() {\n    // Start server\n    server.start([&server_port](unsigned short port) {\n      server_port.set_value(port);\n    });\n  });\n  cout << \"Server listening on port \" << server_port.get_future().get() << endl\n       << endl;\n\n  // Client examples\n  string json_string = \"{\\\"firstName\\\": \\\"John\\\",\\\"lastName\\\": \\\"Smith\\\",\\\"age\\\": 25}\";\n\n  // Synchronous request examples\n  {\n    HttpClient client(\"localhost:8080\");\n    try {\n      cout << \"Example GET request to http://localhost:8080/match/123\" << endl;\n      auto r1 = client.request(\"GET\", \"/match/123\");\n      cout << \"Response content: \" << r1->content.rdbuf() << endl // Alternatively, use the convenience function r1->content.string()\n           << endl;\n\n      cout << \"Example POST request to http://localhost:8080/string\" << endl;\n      auto r2 = client.request(\"POST\", \"/string\", json_string);\n      cout << \"Response content: \" << r2->content.rdbuf() << endl\n           << endl;\n    }\n    catch(const SimpleWeb::system_error &e) {\n      cerr << \"Client request error: \" << e.what() << endl;\n    }\n  }\n\n  // Asynchronous request example\n  {\n    HttpClient client(\"localhost:8080\");\n    cout << \"Example POST request to http://localhost:8080/json\" << endl;\n    client.request(\"POST\", \"/json\", json_string, 
[](shared_ptr<HttpClient::Response> response, const SimpleWeb::error_code &ec) {\n      if(!ec)\n        cout << \"Response content: \" << response->content.rdbuf() << endl;\n    });\n    client.io_service->run();\n  }\n\n  server_thread.join();\n}\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/https_examples.cpp",
    "content": "#include \"client_https.hpp\"\n#include \"server_https.hpp\"\n#include <future>\n\n// Added for the json-example\n#define BOOST_SPIRIT_THREADSAFE\n#include <boost/property_tree/json_parser.hpp>\n#include <boost/property_tree/ptree.hpp>\n\n// Added for the default_resource example\n#include \"crypto.hpp\"\n#include <algorithm>\n#include <boost/filesystem.hpp>\n#include <fstream>\n#include <vector>\n\nusing namespace std;\n// Added for the json-example:\nusing namespace boost::property_tree;\n\nusing HttpsServer = SimpleWeb::Server<SimpleWeb::HTTPS>;\nusing HttpsClient = SimpleWeb::Client<SimpleWeb::HTTPS>;\n\nint main() {\n  // HTTPS-server at port 8080 using 1 thread\n  // Unless you do more heavy non-threaded processing in the resources,\n  // 1 thread is usually faster than several threads\n  HttpsServer server(\"server.crt\", \"server.key\");\n  server.config.port = 8080;\n\n  // Add resources using path-regex and method-string, and an anonymous function\n  // POST-example for the path /string, responds the posted string\n  server.resource[\"^/string$\"][\"POST\"] = [](shared_ptr<HttpsServer::Response> response, shared_ptr<HttpsServer::Request> request) {\n    // Retrieve string:\n    auto content = request->content.string();\n    // request->content.string() is a convenience function for:\n    // stringstream ss;\n    // ss << request->content.rdbuf();\n    // auto content=ss.str();\n\n    *response << \"HTTP/1.1 200 OK\\r\\nContent-Length: \" << content.length() << \"\\r\\n\\r\\n\"\n              << content;\n\n\n    // Alternatively, use one of the convenience functions, for instance:\n    // response->write(content);\n  };\n\n  // POST-example for the path /json, responds firstName+\" \"+lastName from the posted json\n  // Responds with an appropriate error message if the posted json is not valid, or if firstName or lastName is missing\n  // Example posted json:\n  // {\n  //   \"firstName\": \"John\",\n  //   \"lastName\": \"Smith\",\n  //   
\"age\": 25\n  // }\n  server.resource[\"^/json$\"][\"POST\"] = [](shared_ptr<HttpsServer::Response> response, shared_ptr<HttpsServer::Request> request) {\n    try {\n      ptree pt;\n      read_json(request->content, pt);\n\n      auto name = pt.get<string>(\"firstName\") + \" \" + pt.get<string>(\"lastName\");\n\n      *response << \"HTTP/1.1 200 OK\\r\\n\"\n                << \"Content-Length: \" << name.length() << \"\\r\\n\\r\\n\"\n                << name;\n    }\n    catch(const exception &e) {\n      *response << \"HTTP/1.1 400 Bad Request\\r\\nContent-Length: \" << strlen(e.what()) << \"\\r\\n\\r\\n\"\n                << e.what();\n    }\n\n\n    // Alternatively, using a convenience function:\n    // try {\n    //     ptree pt;\n    //     read_json(request->content, pt);\n\n    //     auto name=pt.get<string>(\"firstName\")+\" \"+pt.get<string>(\"lastName\");\n    //     response->write(name);\n    // }\n    // catch(const exception &e) {\n    //     response->write(SimpleWeb::StatusCode::client_error_bad_request, e.what());\n    // }\n  };\n\n  // GET-example for the path /info\n  // Responds with request-information\n  server.resource[\"^/info$\"][\"GET\"] = [](shared_ptr<HttpsServer::Response> response, shared_ptr<HttpsServer::Request> request) {\n    stringstream stream;\n    stream << \"<h1>Request from \" << request->remote_endpoint().address().to_string() << \":\" << request->remote_endpoint().port() << \"</h1>\";\n\n    stream << request->method << \" \" << request->path << \" HTTP/\" << request->http_version;\n\n    stream << \"<h2>Query Fields</h2>\";\n    auto query_fields = request->parse_query_string();\n    for(auto &field : query_fields)\n      stream << field.first << \": \" << field.second << \"<br>\";\n\n    stream << \"<h2>Header Fields</h2>\";\n    for(auto &field : request->header)\n      stream << field.first << \": \" << field.second << \"<br>\";\n\n    response->write(stream);\n  };\n\n  // GET-example for the path /match/[number], 
responds with the matched string in path (number)\n  // For instance a request GET /match/123 will receive: 123\n  server.resource[\"^/match/([0-9]+)$\"][\"GET\"] = [](shared_ptr<HttpsServer::Response> response, shared_ptr<HttpsServer::Request> request) {\n    response->write(request->path_match[1].str());\n  };\n\n  // GET-example simulating heavy work in a separate thread\n  server.resource[\"^/work$\"][\"GET\"] = [](shared_ptr<HttpsServer::Response> response, shared_ptr<HttpsServer::Request> /*request*/) {\n    thread work_thread([response] {\n      this_thread::sleep_for(chrono::seconds(5));\n      response->write(\"Work done\");\n    });\n    work_thread.detach();\n  };\n\n  // Default GET-example. If no other matches, this anonymous function will be called.\n  // Will respond with content in the web/-directory, and its subdirectories.\n  // Default file: index.html\n  // Can for instance be used to retrieve an HTML 5 client that uses REST-resources on this server\n  server.default_resource[\"GET\"] = [](shared_ptr<HttpsServer::Response> response, shared_ptr<HttpsServer::Request> request) {\n    try {\n      auto web_root_path = boost::filesystem::canonical(\"web\");\n      auto path = boost::filesystem::canonical(web_root_path / request->path);\n      // Check if path is within web_root_path\n      if(distance(web_root_path.begin(), web_root_path.end()) > distance(path.begin(), path.end()) ||\n         !equal(web_root_path.begin(), web_root_path.end(), path.begin()))\n        throw invalid_argument(\"path must be within root path\");\n      if(boost::filesystem::is_directory(path))\n        path /= \"index.html\";\n\n      SimpleWeb::CaseInsensitiveMultimap header;\n\n      // Uncomment the following line to enable Cache-Control\n      // header.emplace(\"Cache-Control\", \"max-age=86400\");\n\n#ifdef HAVE_OPENSSL\n//    Uncomment the following lines to enable ETag\n//    {\n//      ifstream ifs(path.string(), ifstream::in | ios::binary);\n//      if(ifs) 
{\n//        auto hash = SimpleWeb::Crypto::to_hex_string(SimpleWeb::Crypto::md5(ifs));\n//        header.emplace(\"ETag\", \"\\\"\" + hash + \"\\\"\");\n//        auto it = request->header.find(\"If-None-Match\");\n//        if(it != request->header.end()) {\n//          if(!it->second.empty() && it->second.compare(1, hash.size(), hash) == 0) {\n//            response->write(SimpleWeb::StatusCode::redirection_not_modified, header);\n//            return;\n//          }\n//        }\n//      }\n//      else\n//        throw invalid_argument(\"could not read file\");\n//    }\n#endif\n\n      auto ifs = make_shared<ifstream>();\n      ifs->open(path.string(), ifstream::in | ios::binary | ios::ate);\n\n      if(*ifs) {\n        auto length = ifs->tellg();\n        ifs->seekg(0, ios::beg);\n\n        header.emplace(\"Content-Length\", to_string(length));\n        response->write(header);\n\n        // Trick to define a recursive function within this scope (for example purposes)\n        class FileServer {\n        public:\n          static void read_and_send(const shared_ptr<HttpsServer::Response> &response, const shared_ptr<ifstream> &ifs) {\n            // Read and send 128 KB at a time\n            static vector<char> buffer(131072); // Safe when server is running on one thread\n            streamsize read_length;\n            if((read_length = ifs->read(&buffer[0], static_cast<streamsize>(buffer.size())).gcount()) > 0) {\n              response->write(&buffer[0], read_length);\n              if(read_length == static_cast<streamsize>(buffer.size())) {\n                response->send([response, ifs](const SimpleWeb::error_code &ec) {\n                  if(!ec)\n                    read_and_send(response, ifs);\n                  else\n                    cerr << \"Connection interrupted\" << endl;\n                });\n              }\n            }\n          }\n        };\n        FileServer::read_and_send(response, ifs);\n      }\n      else\n        throw 
invalid_argument(\"could not read file\");\n    }\n    catch(const exception &e) {\n      response->write(SimpleWeb::StatusCode::client_error_bad_request, \"Could not open path \" + request->path + \": \" + e.what());\n    }\n  };\n\n  server.on_error = [](shared_ptr<HttpsServer::Request> /*request*/, const SimpleWeb::error_code & /*ec*/) {\n    // Handle errors here\n    // Note that connection timeouts will also call this handler with ec set to SimpleWeb::errc::operation_canceled\n  };\n\n  // Start server and receive assigned port when server is listening for requests\n  promise<unsigned short> server_port;\n  thread server_thread([&server, &server_port]() {\n    // Start server\n    server.start([&server_port](unsigned short port) {\n      server_port.set_value(port);\n    });\n  });\n  cout << \"Server listening on port \" << server_port.get_future().get() << endl\n       << endl;\n\n  // Client examples\n  string json_string = \"{\\\"firstName\\\": \\\"John\\\",\\\"lastName\\\": \\\"Smith\\\",\\\"age\\\": 25}\";\n\n  // Synchronous request examples\n  {\n    HttpsClient client(\"localhost:8080\", false);\n    try {\n      cout << \"Example GET request to http://localhost:8080/match/123\" << endl;\n      auto r1 = client.request(\"GET\", \"/match/123\");\n      cout << \"Response content: \" << r1->content.rdbuf() << endl // Alternatively, use the convenience function r1->content.string()\n           << endl;\n\n      cout << \"Example POST request to http://localhost:8080/string\" << endl;\n      auto r2 = client.request(\"POST\", \"/string\", json_string);\n      cout << \"Response content: \" << r2->content.rdbuf() << endl\n           << endl;\n    }\n    catch(const SimpleWeb::system_error &e) {\n      cerr << \"Client request error: \" << e.what() << endl;\n    }\n  }\n\n  // Asynchronous request example\n  {\n    HttpsClient client(\"localhost:8080\", false);\n    cout << \"Example POST request to http://localhost:8080/json\" << endl;\n    
client.request(\"POST\", \"/json\", json_string, [](shared_ptr<HttpsClient::Response> response, const SimpleWeb::error_code &ec) {\n      if(!ec)\n        cout << \"Response content: \" << response->content.rdbuf() << endl;\n    });\n    client.io_service->run();\n  }\n\n  server_thread.join();\n}\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/mutex.hpp",
    "content": "// Based on https://clang.llvm.org/docs/ThreadSafetyAnalysis.html\n#ifndef SIMPLE_WEB_MUTEX_HPP\n#define SIMPLE_WEB_MUTEX_HPP\n\n#include <mutex>\n\n// Enable thread safety attributes only with clang.\n#if defined(__clang__) && (!defined(SWIG))\n#define THREAD_ANNOTATION_ATTRIBUTE__(x) __attribute__((x))\n#else\n#define THREAD_ANNOTATION_ATTRIBUTE__(x) // no-op\n#endif\n\n#define CAPABILITY(x) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(capability(x))\n\n#define SCOPED_CAPABILITY \\\n  THREAD_ANNOTATION_ATTRIBUTE__(scoped_lockable)\n\n#define GUARDED_BY(x) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(guarded_by(x))\n\n#define PT_GUARDED_BY(x) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(pt_guarded_by(x))\n\n#define ACQUIRED_BEFORE(...) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(acquired_before(__VA_ARGS__))\n\n#define ACQUIRED_AFTER(...) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(acquired_after(__VA_ARGS__))\n\n#define REQUIRES(...) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(requires_capability(__VA_ARGS__))\n\n#define REQUIRES_SHARED(...) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(requires_shared_capability(__VA_ARGS__))\n\n#define ACQUIRE(...) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(acquire_capability(__VA_ARGS__))\n\n#define ACQUIRE_SHARED(...) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(acquire_shared_capability(__VA_ARGS__))\n\n#define RELEASE(...) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(release_capability(__VA_ARGS__))\n\n#define RELEASE_SHARED(...) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(release_shared_capability(__VA_ARGS__))\n\n#define TRY_ACQUIRE(...) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(try_acquire_capability(__VA_ARGS__))\n\n#define TRY_ACQUIRE_SHARED(...) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(try_acquire_shared_capability(__VA_ARGS__))\n\n#define EXCLUDES(...) 
\\\n  THREAD_ANNOTATION_ATTRIBUTE__(locks_excluded(__VA_ARGS__))\n\n#define ASSERT_CAPABILITY(x) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(assert_capability(x))\n\n#define ASSERT_SHARED_CAPABILITY(x) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(assert_shared_capability(x))\n\n#define RETURN_CAPABILITY(x) \\\n  THREAD_ANNOTATION_ATTRIBUTE__(lock_returned(x))\n\n#define NO_THREAD_SAFETY_ANALYSIS \\\n  THREAD_ANNOTATION_ATTRIBUTE__(no_thread_safety_analysis)\n\nnamespace SimpleWeb {\n  /// Mutex class that is annotated for Clang Thread Safety Analysis.\n  class CAPABILITY(\"mutex\") Mutex {\n    std::mutex mutex;\n\n  public:\n    void lock() ACQUIRE() {\n      mutex.lock();\n    }\n\n    void unlock() RELEASE() {\n      mutex.unlock();\n    }\n  };\n\n  /// Scoped mutex guard class that is annotated for Clang Thread Safety Analysis.\n  class SCOPED_CAPABILITY LockGuard {\n    Mutex &mutex;\n    bool locked = true;\n\n  public:\n    LockGuard(Mutex &mutex_) ACQUIRE(mutex_) : mutex(mutex_) {\n      mutex.lock();\n    }\n    void unlock() RELEASE() {\n      mutex.unlock();\n      locked = false;\n    }\n    ~LockGuard() RELEASE() {\n      if(locked)\n        mutex.unlock();\n    }\n  };\n\n} // namespace SimpleWeb\n\n#endif // SIMPLE_WEB_MUTEX_HPP\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/server_http.hpp",
    "content": "#ifndef SIMPLE_WEB_SERVER_HTTP_HPP\n#define SIMPLE_WEB_SERVER_HTTP_HPP\n\n#include \"asio_compatibility.hpp\"\n#include \"mutex.hpp\"\n#include \"utility.hpp\"\n#include <functional>\n#include <iostream>\n#include <limits>\n#include <list>\n#include <map>\n#include <sstream>\n#include <thread>\n#include <unordered_set>\n\n// Late 2017 TODO: remove the following checks and always use std::regex\n#ifdef USE_BOOST_REGEX\n#include <boost/regex.hpp>\nnamespace SimpleWeb {\n  namespace regex = boost;\n}\n#else\n#include <regex>\nnamespace SimpleWeb {\n  namespace regex = std;\n}\n#endif\n\nnamespace SimpleWeb {\n  template <class socket_type>\n  class Server;\n\n  template <class socket_type>\n  class ServerBase {\n  protected:\n    class Connection;\n    class Session;\n\n  public:\n    /// Response class where the content of the response is sent to client when the object is about to be destroyed.\n    class Response : public std::enable_shared_from_this<Response>, public std::ostream {\n      friend class ServerBase<socket_type>;\n      friend class Server<socket_type>;\n\n      std::unique_ptr<asio::streambuf> streambuf = std::unique_ptr<asio::streambuf>(new asio::streambuf());\n\n      std::shared_ptr<Session> session;\n      long timeout_content;\n\n      Mutex send_queue_mutex;\n      std::list<std::pair<std::shared_ptr<asio::streambuf>, std::function<void(const error_code &)>>> send_queue GUARDED_BY(send_queue_mutex);\n\n      Response(std::shared_ptr<Session> session_, long timeout_content) noexcept : std::ostream(nullptr), session(std::move(session_)), timeout_content(timeout_content) {\n        rdbuf(streambuf.get());\n      }\n\n      template <typename size_type>\n      void write_header(const CaseInsensitiveMultimap &header, size_type size) {\n        bool content_length_written = false;\n        bool chunked_transfer_encoding = false;\n        for(auto &field : header) {\n          if(!content_length_written && 
case_insensitive_equal(field.first, \"content-length\"))\n            content_length_written = true;\n          else if(!chunked_transfer_encoding && case_insensitive_equal(field.first, \"transfer-encoding\") && case_insensitive_equal(field.second, \"chunked\"))\n            chunked_transfer_encoding = true;\n\n          *this << field.first << \": \" << field.second << \"\\r\\n\";\n        }\n        if(!content_length_written && !chunked_transfer_encoding && !close_connection_after_response)\n          *this << \"Content-Length: \" << size << \"\\r\\n\\r\\n\";\n        else\n          *this << \"\\r\\n\";\n      }\n\n      void send_from_queue() REQUIRES(send_queue_mutex) {\n        auto self = this->shared_from_this();\n        asio::async_write(*self->session->connection->socket, *send_queue.begin()->first, [self](const error_code &ec, std::size_t /*bytes_transferred*/) {\n          auto lock = self->session->connection->handler_runner->continue_lock();\n          if(!lock)\n            return;\n          {\n            LockGuard lock(self->send_queue_mutex);\n            if(!ec) {\n              auto it = self->send_queue.begin();\n              auto callback = std::move(it->second);\n              self->send_queue.erase(it);\n              if(self->send_queue.size() > 0)\n                self->send_from_queue();\n\n              lock.unlock();\n              if(callback)\n                callback(ec);\n            }\n            else {\n              // All handlers in the queue is called with ec:\n              std::vector<std::function<void(const error_code &)>> callbacks;\n              for(auto &pair : self->send_queue) {\n                if(pair.second)\n                  callbacks.emplace_back(std::move(pair.second));\n              }\n              self->send_queue.clear();\n\n              lock.unlock();\n              for(auto &callback : callbacks)\n                callback(ec);\n            }\n          }\n        });\n      }\n\n      void 
send_on_delete(const std::function<void(const error_code &)> &callback = nullptr) noexcept {\n        auto self = this->shared_from_this(); // Keep Response instance alive through the following async_write\n        asio::async_write(*session->connection->socket, *streambuf, [self, callback](const error_code &ec, std::size_t /*bytes_transferred*/) {\n          auto lock = self->session->connection->handler_runner->continue_lock();\n          if(!lock)\n            return;\n          if(callback)\n            callback(ec);\n        });\n      }\n\n    public:\n      std::size_t size() noexcept {\n        return streambuf->size();\n      }\n\n      /// Send the content of the response stream to client. The callback is called when the send has completed.\n      ///\n      /// Use this function if you need to recursively send parts of a longer message, or when using server-sent events.\n      void send(std::function<void(const error_code &)> callback = nullptr) noexcept {\n        std::shared_ptr<asio::streambuf> streambuf = std::move(this->streambuf);\n        this->streambuf = std::unique_ptr<asio::streambuf>(new asio::streambuf());\n        rdbuf(this->streambuf.get());\n\n        LockGuard lock(send_queue_mutex);\n        send_queue.emplace_back(std::move(streambuf), std::move(callback));\n        if(send_queue.size() == 1)\n          send_from_queue();\n      }\n\n      /// Write directly to stream buffer using std::ostream::write.\n      void write(const char_type *ptr, std::streamsize n) {\n        std::ostream::write(ptr, n);\n      }\n\n      /// Convenience function for writing status line, potential header fields, and empty content.\n      void write(StatusCode status_code = StatusCode::success_ok, const CaseInsensitiveMultimap &header = CaseInsensitiveMultimap()) {\n        *this << \"HTTP/1.1 \" << SimpleWeb::status_code(status_code) << \"\\r\\n\";\n        write_header(header, 0);\n      }\n\n      /// Convenience function for writing status line, header 
fields, and content.\n      void write(StatusCode status_code, string_view content, const CaseInsensitiveMultimap &header = CaseInsensitiveMultimap()) {\n        *this << \"HTTP/1.1 \" << SimpleWeb::status_code(status_code) << \"\\r\\n\";\n        write_header(header, content.size());\n        if(!content.empty())\n          *this << content;\n      }\n\n      /// Convenience function for writing status line, header fields, and content.\n      void write(StatusCode status_code, std::istream &content, const CaseInsensitiveMultimap &header = CaseInsensitiveMultimap()) {\n        *this << \"HTTP/1.1 \" << SimpleWeb::status_code(status_code) << \"\\r\\n\";\n        content.seekg(0, std::ios::end);\n        auto size = content.tellg();\n        content.seekg(0, std::ios::beg);\n        write_header(header, size);\n        if(size)\n          *this << content.rdbuf();\n      }\n\n      /// Convenience function for writing success status line, header fields, and content.\n      void write(string_view content, const CaseInsensitiveMultimap &header = CaseInsensitiveMultimap()) {\n        write(StatusCode::success_ok, content, header);\n      }\n\n      /// Convenience function for writing success status line, header fields, and content.\n      void write(std::istream &content, const CaseInsensitiveMultimap &header = CaseInsensitiveMultimap()) {\n        write(StatusCode::success_ok, content, header);\n      }\n\n      /// Convenience function for writing success status line, and header fields.\n      void write(const CaseInsensitiveMultimap &header) {\n        write(StatusCode::success_ok, std::string(), header);\n      }\n\n      /// If set to true, force server to close the connection after the response have been sent.\n      ///\n      /// This is useful when implementing a HTTP/1.0-server sending content\n      /// without specifying the content length.\n      bool close_connection_after_response = false;\n    };\n\n    class Content : public std::istream {\n      
friend class ServerBase<socket_type>;\n\n    public:\n      std::size_t size() noexcept {\n        return streambuf.size();\n      }\n      /// Convenience function to return content data without copy\n      const std::uint8_t* data() noexcept {\n        return asio::buffer_cast<const uint8_t *>(streambuf.data());\n      }\n      /// Convenience function to return content as std::string.\n      std::string string() noexcept {\n        return std::string(asio::buffers_begin(streambuf.data()), asio::buffers_end(streambuf.data()));\n      }\n\n    private:\n      asio::streambuf &streambuf;\n      Content(asio::streambuf &streambuf) noexcept : std::istream(&streambuf), streambuf(streambuf) {}\n    };\n\n    class Request {\n      friend class ServerBase<socket_type>;\n      friend class Server<socket_type>;\n      friend class Session;\n\n      asio::streambuf streambuf;\n      std::weak_ptr<Connection> connection;\n      std::string optimization = std::to_string(0); // TODO: figure out what goes wrong in gcc optimization without this line\n\n      Request(std::size_t max_request_streambuf_size, const std::shared_ptr<Connection> &connection_) noexcept : streambuf(max_request_streambuf_size), connection(connection_), content(streambuf) {}\n\n    public:\n      std::string method, path, query_string, http_version;\n\n      Content content;\n\n      CaseInsensitiveMultimap header;\n\n      /// The result of the resource regular expression match of the request path.\n      regex::smatch path_match;\n\n      /// The time point when the request header was fully read.\n      std::chrono::system_clock::time_point header_read_time;\n\n      asio::ip::tcp::endpoint remote_endpoint() const noexcept {\n        try {\n          if(auto connection = this->connection.lock())\n            return connection->socket->lowest_layer().remote_endpoint();\n        }\n        catch(...) 
{\n        }\n        return asio::ip::tcp::endpoint();\n      }\n\n      asio::ip::tcp::endpoint local_endpoint() const noexcept {\n        try {\n          if(auto connection = this->connection.lock())\n            return connection->socket->lowest_layer().local_endpoint();\n        }\n        catch(...) {\n        }\n        return asio::ip::tcp::endpoint();\n      }\n\n      /// Deprecated, please use remote_endpoint().address().to_string() instead.\n      DEPRECATED std::string remote_endpoint_address() const noexcept {\n        try {\n          if(auto connection = this->connection.lock())\n            return connection->socket->lowest_layer().remote_endpoint().address().to_string();\n        }\n        catch(...) {\n        }\n        return std::string();\n      }\n\n      /// Deprecated, please use remote_endpoint().port() instead.\n      DEPRECATED unsigned short remote_endpoint_port() const noexcept {\n        try {\n          if(auto connection = this->connection.lock())\n            return connection->socket->lowest_layer().remote_endpoint().port();\n        }\n        catch(...) {\n        }\n        return 0;\n      }\n\n      /// Returns query keys with percent-decoded values.\n      CaseInsensitiveMultimap parse_query_string() const noexcept {\n        return SimpleWeb::QueryString::parse(query_string);\n      }\n    };\n\n  protected:\n    class Connection : public std::enable_shared_from_this<Connection> {\n    public:\n      template <typename... Args>\n      Connection(std::shared_ptr<ScopeRunner> handler_runner_, Args &&... 
args) noexcept : handler_runner(std::move(handler_runner_)), socket(new socket_type(std::forward<Args>(args)...)) {}\n\n      std::shared_ptr<ScopeRunner> handler_runner;\n\n      std::unique_ptr<socket_type> socket; // Socket must be unique_ptr since asio::ssl::stream<asio::ip::tcp::socket> is not movable\n\n      std::unique_ptr<asio::steady_timer> timer;\n\n      void close() noexcept {\n        error_code ec;\n        socket->lowest_layer().shutdown(asio::ip::tcp::socket::shutdown_both, ec);\n        socket->lowest_layer().cancel(ec);\n      }\n\n      void set_timeout(long seconds) noexcept {\n        if(seconds == 0) {\n          timer = nullptr;\n          return;\n        }\n\n        timer = make_steady_timer(*socket, std::chrono::seconds(seconds));\n        std::weak_ptr<Connection> self_weak(this->shared_from_this()); // To avoid keeping Connection instance alive longer than needed\n        timer->async_wait([self_weak](const error_code &ec) {\n          if(!ec) {\n            if(auto self = self_weak.lock())\n              self->close();\n          }\n        });\n      }\n\n      void cancel_timeout() noexcept {\n        if(timer) {\n          try {\n            timer->cancel();\n          }\n          catch(...) {\n          }\n        }\n      }\n    };\n\n    class Session {\n    public:\n      Session(std::size_t max_request_streambuf_size, std::shared_ptr<Connection> connection_) noexcept : connection(std::move(connection_)), request(new Request(max_request_streambuf_size, connection)) {}\n\n      std::shared_ptr<Connection> connection;\n      std::shared_ptr<Request> request;\n    };\n\n  public:\n    class Config {\n      friend class ServerBase<socket_type>;\n\n      Config(unsigned short port) noexcept : port(port) {}\n\n    public:\n      /// Port number to use. Defaults to 80 for HTTP and 443 for HTTPS. 
Set to 0 get an assigned port.\n      unsigned short port;\n      /// If io_service is not set, number of threads that the server will use when start() is called.\n      /// Defaults to 1 thread.\n      std::size_t thread_pool_size = 1;\n      /// Timeout on request completion. Defaults to 5 seconds.\n      long timeout_request = 5;\n      /// Timeout on request/response content completion. Defaults to 300 seconds.\n      long timeout_content = 300;\n      /// Maximum size of request stream buffer. Defaults to architecture maximum.\n      /// Reaching this limit will result in a message_size error code.\n      std::size_t max_request_streambuf_size = (std::numeric_limits<std::size_t>::max)();\n      /// IPv4 address in dotted decimal form or IPv6 address in hexadecimal notation.\n      /// If empty, the address will be any address.\n      std::string address;\n      /// Set to false to avoid binding the socket to an address that is already in use. Defaults to true.\n      bool reuse_address = true;\n      /// Make use of RFC 7413 or TCP Fast Open (TFO)\n      bool fast_open = false;\n    };\n    /// Set before calling start().\n    Config config;\n\n  private:\n    class regex_orderable : public regex::regex {\n    public:\n      std::string str;\n\n      regex_orderable(const char *regex_cstr) : regex::regex(regex_cstr), str(regex_cstr) {}\n      regex_orderable(std::string regex_str_) : regex::regex(regex_str_), str(std::move(regex_str_)) {}\n      bool operator<(const regex_orderable &rhs) const noexcept {\n        return str < rhs.str;\n      }\n    };\n\n  public:\n    /// Use this container to add resources for specific request paths depending on the given regex and method.\n    /// Warning: do not add or remove resources after start() is called\n    std::map<regex_orderable, std::map<std::string, std::function<void(std::shared_ptr<typename ServerBase<socket_type>::Response>, std::shared_ptr<typename ServerBase<socket_type>::Request>)>>> resource;\n\n    /// 
If the request path does not match a resource regex, this function is called.\n    std::map<std::string, std::function<void(std::shared_ptr<typename ServerBase<socket_type>::Response>, std::shared_ptr<typename ServerBase<socket_type>::Request>)>> default_resource;\n\n    /// Called when an error occurs.\n    std::function<void(std::shared_ptr<typename ServerBase<socket_type>::Request>, const error_code &)> on_error;\n\n    /// Called on upgrade requests.\n    std::function<void(std::unique_ptr<socket_type> &, std::shared_ptr<typename ServerBase<socket_type>::Request>)> on_upgrade;\n\n    /// If you want to reuse an already created asio::io_service, store its pointer here before calling start().\n    std::shared_ptr<io_context> io_service;\n\n    /// Start the server.\n    /// If io_service is not set, an internal io_service is created instead.\n    /// The callback argument is called after the server is accepting connections,\n    /// where its parameter contains the assigned port.\n    void start(const std::function<void(unsigned short /*port*/)> &callback = nullptr) {\n      std::unique_lock<std::mutex> lock(start_stop_mutex);\n\n      asio::ip::tcp::endpoint endpoint;\n      if(!config.address.empty())\n        endpoint = asio::ip::tcp::endpoint(make_address(config.address), config.port);\n      else\n        endpoint = asio::ip::tcp::endpoint(asio::ip::tcp::v6(), config.port);\n\n      if(!io_service) {\n        io_service = std::make_shared<io_context>();\n        internal_io_service = true;\n      }\n\n      if(!acceptor)\n        acceptor = std::unique_ptr<asio::ip::tcp::acceptor>(new asio::ip::tcp::acceptor(*io_service));\n      try {\n        acceptor->open(endpoint.protocol());\n      }\n      catch(const system_error &error) {\n        if(error.code() == asio::error::address_family_not_supported && config.address.empty()) {\n          endpoint = asio::ip::tcp::endpoint(asio::ip::tcp::v4(), config.port);\n          acceptor->open(endpoint.protocol());\n   
     }\n        else\n          throw;\n      }\n      acceptor->set_option(asio::socket_base::reuse_address(config.reuse_address));\n      if(config.fast_open) {\n#if defined(__linux__) && defined(TCP_FASTOPEN)\n        const int qlen = 5; // This seems to be the value that is used in other examples.\n        error_code ec;\n        acceptor->set_option(asio::detail::socket_option::integer<IPPROTO_TCP, TCP_FASTOPEN>(qlen), ec);\n#endif // End Linux\n      }\n      acceptor->bind(endpoint);\n\n      after_bind();\n\n      auto port = acceptor->local_endpoint().port();\n\n      acceptor->listen();\n      accept();\n\n      if(internal_io_service && io_service->stopped())\n        restart(*io_service);\n\n      if(callback)\n        post(*io_service, [callback, port] {\n          callback(port);\n        });\n\n      if(internal_io_service) {\n        // If thread_pool_size>1, start m_io_service.run() in (thread_pool_size-1) threads for thread-pooling\n        threads.clear();\n        for(std::size_t c = 1; c < config.thread_pool_size; c++) {\n          threads.emplace_back([this]() {\n            this->io_service->run();\n          });\n        }\n\n        lock.unlock();\n\n        // Main thread\n        if(config.thread_pool_size > 0)\n          io_service->run();\n\n        lock.lock();\n\n        // Wait for the rest of the threads, if any, to finish as well\n        for(auto &t : threads)\n          t.join();\n      }\n    }\n\n    // MR - added method to return the port we are listening on\n    unsigned short getLocalPort() {\n\tif (acceptor)\n\t{\n\t\tboost::asio::ip::tcp::endpoint endpoint = acceptor->local_endpoint();\n\t\treturn endpoint.port();\n\t}\n\telse\n\t{\n\t\treturn 0;\n\t}\n    }\n\n    /// Stop accepting new requests, and close current connections.\n    void stop() noexcept {\n      std::lock_guard<std::mutex> lock(start_stop_mutex);\n\n      if(acceptor) {\n        error_code ec;\n        acceptor->close(ec);\n\n        {\n          LockGuard 
lock(connections->mutex);\n          for(auto &connection : connections->set)\n            connection->close();\n          connections->set.clear();\n        }\n\n        if(internal_io_service)\n          io_service->stop();\n      }\n    }\n\n    virtual ~ServerBase() noexcept {\n      handler_runner->stop();\n      stop();\n    }\n\n  protected:\n    std::mutex start_stop_mutex;\n\n    bool internal_io_service = false;\n\n    std::unique_ptr<asio::ip::tcp::acceptor> acceptor;\n    std::vector<std::thread> threads;\n\n    struct Connections {\n      Mutex mutex;\n      std::unordered_set<Connection *> set GUARDED_BY(mutex);\n    };\n    std::shared_ptr<Connections> connections;\n\n    std::shared_ptr<ScopeRunner> handler_runner;\n\n    ServerBase(unsigned short port) noexcept : config(port), connections(new Connections()), handler_runner(new ScopeRunner()) {}\n\n    virtual void after_bind() {}\n    virtual void accept() = 0;\n\n    template <typename... Args>\n    std::shared_ptr<Connection> create_connection(Args &&... 
args) noexcept {\n      auto connections = this->connections;\n      auto connection = std::shared_ptr<Connection>(new Connection(handler_runner, std::forward<Args>(args)...), [connections](Connection *connection) {\n        {\n          LockGuard lock(connections->mutex);\n          auto it = connections->set.find(connection);\n          if(it != connections->set.end())\n            connections->set.erase(it);\n        }\n        delete connection;\n      });\n      {\n        LockGuard lock(connections->mutex);\n        connections->set.emplace(connection.get());\n      }\n      return connection;\n    }\n\n    void read(const std::shared_ptr<Session> &session) {\n      session->connection->set_timeout(config.timeout_request);\n      asio::async_read_until(*session->connection->socket, session->request->streambuf, \"\\r\\n\\r\\n\", [this, session](const error_code &ec, std::size_t bytes_transferred) {\n        session->connection->set_timeout(config.timeout_content);\n        auto lock = session->connection->handler_runner->continue_lock();\n        if(!lock)\n          return;\n        session->request->header_read_time = std::chrono::system_clock::now();\n\n        if(!ec) {\n          // request->streambuf.size() is not necessarily the same as bytes_transferred, from Boost-docs:\n          // \"After a successful async_read_until operation, the streambuf may contain additional data beyond the delimiter\"\n          // The chosen solution is to extract lines from the stream directly when parsing the header. 
What is left of the\n          // streambuf (maybe some bytes of the content) is appended to in the async_read-function below (for retrieving content).\n          std::size_t num_additional_bytes = session->request->streambuf.size() - bytes_transferred;\n\n          if(!RequestMessage::parse(session->request->content, session->request->method, session->request->path,\n                                    session->request->query_string, session->request->http_version, session->request->header)) {\n            if(this->on_error)\n              this->on_error(session->request, make_error_code::make_error_code(errc::protocol_error));\n            return;\n          }\n\n          // If content, read that as well\n          auto header_it = session->request->header.find(\"Content-Length\");\n          if(header_it != session->request->header.end()) {\n            unsigned long long content_length = 0;\n            try {\n              content_length = std::stoull(header_it->second);\n            }\n            catch(const std::exception &) {\n              if(this->on_error)\n                this->on_error(session->request, make_error_code::make_error_code(errc::protocol_error));\n              return;\n            }\n            if(content_length > session->request->streambuf.max_size()) {\n              auto response = std::shared_ptr<Response>(new Response(session, this->config.timeout_content));\n              response->write(StatusCode::client_error_payload_too_large);\n              if(this->on_error)\n                this->on_error(session->request, make_error_code::make_error_code(errc::message_size));\n              return;\n            }\n            if(content_length > num_additional_bytes) {\n              asio::async_read(*session->connection->socket, session->request->streambuf, asio::transfer_exactly(content_length - num_additional_bytes), [this, session](const error_code &ec, std::size_t /*bytes_transferred*/) {\n                auto lock = 
session->connection->handler_runner->continue_lock();\n                if(!lock)\n                  return;\n\n                if(!ec)\n                  this->find_resource(session);\n                else if(this->on_error)\n                  this->on_error(session->request, ec);\n              });\n            }\n            else\n              this->find_resource(session);\n          }\n          else if((header_it = session->request->header.find(\"Transfer-Encoding\")) != session->request->header.end() && header_it->second == \"chunked\") {\n            // Expect hex number to not exceed 16 bytes (64-bit number), but take into account previous additional read bytes\n            auto chunk_size_streambuf = std::make_shared<asio::streambuf>(std::max<std::size_t>(16 + 2, session->request->streambuf.size()));\n\n            // Move leftover bytes\n            auto &source = session->request->streambuf;\n            auto &target = *chunk_size_streambuf;\n            target.commit(asio::buffer_copy(target.prepare(source.size()), source.data()));\n            source.consume(source.size());\n\n            this->read_chunked_transfer_encoded(session, chunk_size_streambuf);\n          }\n          else\n            this->find_resource(session);\n        }\n        else if(this->on_error)\n          this->on_error(session->request, ec);\n      });\n    }\n\n    void read_chunked_transfer_encoded(const std::shared_ptr<Session> &session, const std::shared_ptr<asio::streambuf> &chunk_size_streambuf) {\n      asio::async_read_until(*session->connection->socket, *chunk_size_streambuf, \"\\r\\n\", [this, session, chunk_size_streambuf](const error_code &ec, size_t bytes_transferred) {\n        auto lock = session->connection->handler_runner->continue_lock();\n        if(!lock)\n          return;\n\n        if(!ec) {\n          std::istream istream(chunk_size_streambuf.get());\n          std::string line;\n          std::getline(istream, line);\n          bytes_transferred -= 
line.size() + 1;\n          unsigned long chunk_size = 0;\n          try {\n            chunk_size = std::stoul(line, 0, 16);\n          }\n          catch(...) {\n            if(this->on_error)\n              this->on_error(session->request, make_error_code::make_error_code(errc::protocol_error));\n            return;\n          }\n\n          if(chunk_size == 0) {\n            this->find_resource(session);\n            return;\n          }\n\n          if(chunk_size + session->request->streambuf.size() > session->request->streambuf.max_size()) {\n            auto response = std::shared_ptr<Response>(new Response(session, this->config.timeout_content));\n            response->write(StatusCode::client_error_payload_too_large);\n            if(this->on_error)\n              this->on_error(session->request, make_error_code::make_error_code(errc::message_size));\n            return;\n          }\n\n          auto num_additional_bytes = chunk_size_streambuf->size() - bytes_transferred;\n\n          auto bytes_to_move = std::min<std::size_t>(chunk_size, num_additional_bytes);\n          if(bytes_to_move > 0) {\n            // Move leftover bytes\n            auto &source = *chunk_size_streambuf;\n            auto &target = session->request->streambuf;\n            target.commit(asio::buffer_copy(target.prepare(bytes_to_move), source.data(), bytes_to_move));\n            source.consume(bytes_to_move);\n          }\n\n          if(chunk_size > num_additional_bytes) {\n            asio::async_read(*session->connection->socket, session->request->streambuf, asio::transfer_exactly(chunk_size - num_additional_bytes), [this, session, chunk_size_streambuf](const error_code &ec, size_t /*bytes_transferred*/) {\n              auto lock = session->connection->handler_runner->continue_lock();\n              if(!lock)\n                return;\n\n              if(!ec) {\n                // Remove \"\\r\\n\"\n                auto null_buffer = std::make_shared<asio::streambuf>(2);\n    
            asio::async_read(*session->connection->socket, *null_buffer, asio::transfer_exactly(2), [this, session, chunk_size_streambuf, null_buffer](const error_code &ec, size_t /*bytes_transferred*/) {\n                  auto lock = session->connection->handler_runner->continue_lock();\n                  if(!lock)\n                    return;\n                  if(!ec)\n                    read_chunked_transfer_encoded(session, chunk_size_streambuf);\n                  else\n                    this->on_error(session->request, ec);\n                });\n              }\n              else if(this->on_error)\n                this->on_error(session->request, ec);\n            });\n          }\n          else if(2 + chunk_size > num_additional_bytes) { // If only end of chunk remains unread (\\n or \\r\\n)\n            // Remove \"\\r\\n\"\n            if(2 + chunk_size - num_additional_bytes == 1)\n              istream.get();\n            auto null_buffer = std::make_shared<asio::streambuf>(2);\n            asio::async_read(*session->connection->socket, *null_buffer, asio::transfer_exactly(2 + chunk_size - num_additional_bytes), [this, session, chunk_size_streambuf, null_buffer](const error_code &ec, size_t /*bytes_transferred*/) {\n              auto lock = session->connection->handler_runner->continue_lock();\n              if(!lock)\n                return;\n              if(!ec)\n                read_chunked_transfer_encoded(session, chunk_size_streambuf);\n              else\n                this->on_error(session->request, ec);\n            });\n          }\n          else {\n            // Remove \"\\r\\n\"\n            istream.get();\n            istream.get();\n\n            read_chunked_transfer_encoded(session, chunk_size_streambuf);\n          }\n        }\n        else if(this->on_error)\n          this->on_error(session->request, ec);\n      });\n    }\n\n    void find_resource(const std::shared_ptr<Session> &session) {\n      // Upgrade 
connection\n      if(on_upgrade) {\n        auto it = session->request->header.find(\"Upgrade\");\n        if(it != session->request->header.end()) {\n          // remove connection from connections\n          {\n            LockGuard lock(connections->mutex);\n            auto it = connections->set.find(session->connection.get());\n            if(it != connections->set.end())\n              connections->set.erase(it);\n          }\n\n          on_upgrade(session->connection->socket, session->request);\n          return;\n        }\n      }\n      // Find path- and method-match, and call write\n      for(auto &regex_method : resource) {\n        auto it = regex_method.second.find(session->request->method);\n        if(it != regex_method.second.end()) {\n          regex::smatch sm_res;\n          if(regex::regex_match(session->request->path, sm_res, regex_method.first)) {\n            session->request->path_match = std::move(sm_res);\n            write(session, it->second);\n            return;\n          }\n        }\n      }\n      auto it = default_resource.find(session->request->method);\n      if(it != default_resource.end())\n        write(session, it->second);\n    }\n\n    void write(const std::shared_ptr<Session> &session,\n               std::function<void(std::shared_ptr<typename ServerBase<socket_type>::Response>, std::shared_ptr<typename ServerBase<socket_type>::Request>)> &resource_function) {\n      auto response = std::shared_ptr<Response>(new Response(session, config.timeout_content), [this](Response *response_ptr) {\n        auto response = std::shared_ptr<Response>(response_ptr);\n        response->send_on_delete([this, response](const error_code &ec) {\n          response->session->connection->cancel_timeout();\n          if(!ec) {\n            if(response->close_connection_after_response)\n              return;\n\n            auto range = response->session->request->header.equal_range(\"Connection\");\n            for(auto it = range.first; it 
!= range.second; it++) {\n              if(case_insensitive_equal(it->second, \"close\"))\n                return;\n              else if(case_insensitive_equal(it->second, \"keep-alive\")) {\n                auto new_session = std::make_shared<Session>(this->config.max_request_streambuf_size, response->session->connection);\n                this->read(new_session);\n                return;\n              }\n            }\n            if(response->session->request->http_version >= \"1.1\") {\n              auto new_session = std::make_shared<Session>(this->config.max_request_streambuf_size, response->session->connection);\n              this->read(new_session);\n              return;\n            }\n          }\n          else if(this->on_error)\n            this->on_error(response->session->request, ec);\n        });\n      });\n\n      try {\n        resource_function(response, session->request);\n      }\n      catch(const std::exception &) {\n        if(on_error)\n          on_error(session->request, make_error_code::make_error_code(errc::operation_canceled));\n        return;\n      }\n    }\n  };\n\n  template <class socket_type>\n  class Server : public ServerBase<socket_type> {};\n\n  using HTTP = asio::ip::tcp::socket;\n\n  template <>\n  class Server<HTTP> : public ServerBase<HTTP> {\n  public:\n    /// Constructs a server object.\n    Server() noexcept : ServerBase<HTTP>::ServerBase(80) {}\n\n  protected:\n    void accept() override {\n      auto connection = create_connection(*io_service);\n\n      acceptor->async_accept(*connection->socket, [this, connection](const error_code &ec) {\n        auto lock = connection->handler_runner->continue_lock();\n        if(!lock)\n          return;\n\n        // Immediately start accepting a new connection (unless io_service has been stopped)\n        if(ec != error::operation_aborted)\n          this->accept();\n\n        auto session = std::make_shared<Session>(config.max_request_streambuf_size, connection);\n\n   
     if(!ec) {\n          asio::ip::tcp::no_delay option(true);\n          error_code ec;\n          session->connection->socket->set_option(option, ec);\n\n          this->read(session);\n        }\n        else if(this->on_error)\n          this->on_error(session->request, ec);\n      });\n    }\n  };\n} // namespace SimpleWeb\n\n#endif /* SIMPLE_WEB_SERVER_HTTP_HPP */\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/server_https.hpp",
    "content": "#ifndef SIMPLE_WEB_SERVER_HTTPS_HPP\n#define SIMPLE_WEB_SERVER_HTTPS_HPP\n\n#include \"server_http.hpp\"\n\n#ifdef USE_STANDALONE_ASIO\n#include <asio/ssl.hpp>\n#else\n#include <boost/asio/ssl.hpp>\n#endif\n\n#include <algorithm>\n#include <openssl/ssl.h>\n\nnamespace SimpleWeb {\n  using HTTPS = asio::ssl::stream<asio::ip::tcp::socket>;\n\n  template <>\n  class Server<HTTPS> : public ServerBase<HTTPS> {\n    bool set_session_id_context = false;\n\n  public:\n    /**\n     * Constructs a server object.\n     *\n     * @param certification_file If non-empty, sends the given certification file to client.\n     * @param private_key_file   Specifies the file containing the private key for certification_file.\n     * @param verify_file        If non-empty, use this certificate authority file to perform verification of client's certificate and hostname according to RFC 2818.\n     */\n    Server(const std::string &certification_file, const std::string &private_key_file, const std::string &verify_file = std::string())\n        : ServerBase<HTTPS>::ServerBase(443), context(asio::ssl::context::tlsv12) {\n      context.use_certificate_chain_file(certification_file);\n      context.use_private_key_file(private_key_file, asio::ssl::context::pem);\n\n      if(verify_file.size() > 0) {\n        context.load_verify_file(verify_file);\n        context.set_verify_mode(asio::ssl::verify_peer | asio::ssl::verify_fail_if_no_peer_cert | asio::ssl::verify_client_once);\n        set_session_id_context = true;\n      }\n    }\n\n  protected:\n    asio::ssl::context context;\n\n    void after_bind() override {\n      if(set_session_id_context) {\n        // Creating session_id_context from address:port but reversed due to small SSL_MAX_SSL_SESSION_ID_LENGTH\n        auto session_id_context = std::to_string(acceptor->local_endpoint().port()) + ':';\n        session_id_context.append(config.address.rbegin(), config.address.rend());\n        
SSL_CTX_set_session_id_context(context.native_handle(),\n                                       reinterpret_cast<const unsigned char *>(session_id_context.data()),\n                                       static_cast<unsigned int>(std::min<std::size_t>(session_id_context.size(), SSL_MAX_SSL_SESSION_ID_LENGTH)));\n      }\n    }\n\n    void accept() override {\n      auto connection = create_connection(*io_service, context);\n\n      acceptor->async_accept(connection->socket->lowest_layer(), [this, connection](const error_code &ec) {\n        auto lock = connection->handler_runner->continue_lock();\n        if(!lock)\n          return;\n\n        if(ec != error::operation_aborted)\n          this->accept();\n\n        auto session = std::make_shared<Session>(config.max_request_streambuf_size, connection);\n\n        if(!ec) {\n          asio::ip::tcp::no_delay option(true);\n          error_code ec;\n          session->connection->socket->lowest_layer().set_option(option, ec);\n\n          session->connection->set_timeout(config.timeout_request);\n          session->connection->socket->async_handshake(asio::ssl::stream_base::server, [this, session](const error_code &ec) {\n            session->connection->cancel_timeout();\n            auto lock = session->connection->handler_runner->continue_lock();\n            if(!lock)\n              return;\n            if(!ec)\n              this->read(session);\n            else if(this->on_error)\n              this->on_error(session->request, ec);\n          });\n        }\n        else if(this->on_error)\n          this->on_error(session->request, ec);\n      });\n    }\n  };\n} // namespace SimpleWeb\n\n#endif /* SIMPLE_WEB_SERVER_HTTPS_HPP */\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/status_code.hpp",
    "content": "#ifndef SIMPLE_WEB_STATUS_CODE_HPP\n#define SIMPLE_WEB_STATUS_CODE_HPP\n\n#include <cstdlib>\n#include <map>\n#include <string>\n#include <unordered_map>\n#include <vector>\n\nnamespace SimpleWeb {\n  enum class StatusCode {\n    unknown = 0,\n    information_continue = 100,\n    information_switching_protocols,\n    information_processing,\n    success_ok = 200,\n    success_created,\n    success_accepted,\n    success_non_authoritative_information,\n    success_no_content,\n    success_reset_content,\n    success_partial_content,\n    success_multi_status,\n    success_already_reported,\n    success_im_used = 226,\n    redirection_multiple_choices = 300,\n    redirection_moved_permanently,\n    redirection_found,\n    redirection_see_other,\n    redirection_not_modified,\n    redirection_use_proxy,\n    redirection_switch_proxy,\n    redirection_temporary_redirect,\n    redirection_permanent_redirect,\n    client_error_bad_request = 400,\n    client_error_unauthorized,\n    client_error_payment_required,\n    client_error_forbidden,\n    client_error_not_found,\n    client_error_method_not_allowed,\n    client_error_not_acceptable,\n    client_error_proxy_authentication_required,\n    client_error_request_timeout,\n    client_error_conflict,\n    client_error_gone,\n    client_error_length_required,\n    client_error_precondition_failed,\n    client_error_payload_too_large,\n    client_error_uri_too_long,\n    client_error_unsupported_media_type,\n    client_error_range_not_satisfiable,\n    client_error_expectation_failed,\n    client_error_im_a_teapot,\n    client_error_misdirection_required = 421,\n    client_error_unprocessable_entity,\n    client_error_locked,\n    client_error_failed_dependency,\n    client_error_upgrade_required = 426,\n    client_error_precondition_required = 428,\n    client_error_too_many_requests,\n    client_error_request_header_fields_too_large = 431,\n    client_error_unavailable_for_legal_reasons = 451,\n    
server_error_internal_server_error = 500,\n    server_error_not_implemented,\n    server_error_bad_gateway,\n    server_error_service_unavailable,\n    server_error_gateway_timeout,\n    server_error_http_version_not_supported,\n    server_error_variant_also_negotiates,\n    server_error_insufficient_storage,\n    server_error_loop_detected,\n    server_error_not_extended = 510,\n    server_error_network_authentication_required\n  };\n\n  inline const std::map<StatusCode, std::string> &status_code_strings() {\n    static const std::map<StatusCode, std::string> status_code_strings = {\n        {StatusCode::unknown, \"\"},\n        {StatusCode::information_continue, \"100 Continue\"},\n        {StatusCode::information_switching_protocols, \"101 Switching Protocols\"},\n        {StatusCode::information_processing, \"102 Processing\"},\n        {StatusCode::success_ok, \"200 OK\"},\n        {StatusCode::success_created, \"201 Created\"},\n        {StatusCode::success_accepted, \"202 Accepted\"},\n        {StatusCode::success_non_authoritative_information, \"203 Non-Authoritative Information\"},\n        {StatusCode::success_no_content, \"204 No Content\"},\n        {StatusCode::success_reset_content, \"205 Reset Content\"},\n        {StatusCode::success_partial_content, \"206 Partial Content\"},\n        {StatusCode::success_multi_status, \"207 Multi-Status\"},\n        {StatusCode::success_already_reported, \"208 Already Reported\"},\n        {StatusCode::success_im_used, \"226 IM Used\"},\n        {StatusCode::redirection_multiple_choices, \"300 Multiple Choices\"},\n        {StatusCode::redirection_moved_permanently, \"301 Moved Permanently\"},\n        {StatusCode::redirection_found, \"302 Found\"},\n        {StatusCode::redirection_see_other, \"303 See Other\"},\n        {StatusCode::redirection_not_modified, \"304 Not Modified\"},\n        {StatusCode::redirection_use_proxy, \"305 Use Proxy\"},\n        {StatusCode::redirection_switch_proxy, \"306 Switch 
Proxy\"},\n        {StatusCode::redirection_temporary_redirect, \"307 Temporary Redirect\"},\n        {StatusCode::redirection_permanent_redirect, \"308 Permanent Redirect\"},\n        {StatusCode::client_error_bad_request, \"400 Bad Request\"},\n        {StatusCode::client_error_unauthorized, \"401 Unauthorized\"},\n        {StatusCode::client_error_payment_required, \"402 Payment Required\"},\n        {StatusCode::client_error_forbidden, \"403 Forbidden\"},\n        {StatusCode::client_error_not_found, \"404 Not Found\"},\n        {StatusCode::client_error_method_not_allowed, \"405 Method Not Allowed\"},\n        {StatusCode::client_error_not_acceptable, \"406 Not Acceptable\"},\n        {StatusCode::client_error_proxy_authentication_required, \"407 Proxy Authentication Required\"},\n        {StatusCode::client_error_request_timeout, \"408 Request Timeout\"},\n        {StatusCode::client_error_conflict, \"409 Conflict\"},\n        {StatusCode::client_error_gone, \"410 Gone\"},\n        {StatusCode::client_error_length_required, \"411 Length Required\"},\n        {StatusCode::client_error_precondition_failed, \"412 Precondition Failed\"},\n        {StatusCode::client_error_payload_too_large, \"413 Payload Too Large\"},\n        {StatusCode::client_error_uri_too_long, \"414 URI Too Long\"},\n        {StatusCode::client_error_unsupported_media_type, \"415 Unsupported Media Type\"},\n        {StatusCode::client_error_range_not_satisfiable, \"416 Range Not Satisfiable\"},\n        {StatusCode::client_error_expectation_failed, \"417 Expectation Failed\"},\n        {StatusCode::client_error_im_a_teapot, \"418 I'm a teapot\"},\n        {StatusCode::client_error_misdirection_required, \"421 Misdirected Request\"},\n        {StatusCode::client_error_unprocessable_entity, \"422 Unprocessable Entity\"},\n        {StatusCode::client_error_locked, \"423 Locked\"},\n        {StatusCode::client_error_failed_dependency, \"424 Failed Dependency\"},\n        
{StatusCode::client_error_upgrade_required, \"426 Upgrade Required\"},\n        {StatusCode::client_error_precondition_required, \"428 Precondition Required\"},\n        {StatusCode::client_error_too_many_requests, \"429 Too Many Requests\"},\n        {StatusCode::client_error_request_header_fields_too_large, \"431 Request Header Fields Too Large\"},\n        {StatusCode::client_error_unavailable_for_legal_reasons, \"451 Unavailable For Legal Reasons\"},\n        {StatusCode::server_error_internal_server_error, \"500 Internal Server Error\"},\n        {StatusCode::server_error_not_implemented, \"501 Not Implemented\"},\n        {StatusCode::server_error_bad_gateway, \"502 Bad Gateway\"},\n        {StatusCode::server_error_service_unavailable, \"503 Service Unavailable\"},\n        {StatusCode::server_error_gateway_timeout, \"504 Gateway Timeout\"},\n        {StatusCode::server_error_http_version_not_supported, \"505 HTTP Version Not Supported\"},\n        {StatusCode::server_error_variant_also_negotiates, \"506 Variant Also Negotiates\"},\n        {StatusCode::server_error_insufficient_storage, \"507 Insufficient Storage\"},\n        {StatusCode::server_error_loop_detected, \"508 Loop Detected\"},\n        {StatusCode::server_error_not_extended, \"510 Not Extended\"},\n        {StatusCode::server_error_network_authentication_required, \"511 Network Authentication Required\"}};\n    return status_code_strings;\n  }\n\n  inline StatusCode status_code(const std::string &status_code_string) noexcept {\n    if(status_code_string.size() < 3)\n      return StatusCode::unknown;\n\n    auto number = status_code_string.substr(0, 3);\n    if(number[0] < '0' || number[0] > '9' || number[1] < '0' || number[1] > '9' || number[2] < '0' || number[2] > '9')\n      return StatusCode::unknown;\n\n    class StringToStatusCode : public std::unordered_map<std::string, SimpleWeb::StatusCode> {\n    public:\n      StringToStatusCode() {\n        for(auto &status_code : 
status_code_strings())\n          emplace(status_code.second.substr(0, 3), status_code.first);\n      }\n    };\n    static StringToStatusCode string_to_status_code;\n\n    auto pos = string_to_status_code.find(number);\n    if(pos == string_to_status_code.end())\n      return static_cast<StatusCode>(atoi(number.c_str()));\n    return pos->second;\n  }\n\n  inline const std::string &status_code(StatusCode status_code_enum) noexcept {\n    auto pos = status_code_strings().find(status_code_enum);\n    if(pos == status_code_strings().end()) {\n      static std::string empty_string;\n      return empty_string;\n    }\n    return pos->second;\n  }\n} // namespace SimpleWeb\n\n#endif // SIMPLE_WEB_STATUS_CODE_HPP\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/tests/CMakeLists.txt",
    "content": "if(NOT MSVC)\n    add_compile_options(-fno-access-control)\n    if (CMAKE_CXX_COMPILER_ID MATCHES \"Clang\")\n        add_compile_options(-Wno-thread-safety)\n    endif()\n    \n    if(BUILD_TESTING)\n        add_executable(sws_io_test io_test.cpp)\n        target_link_libraries(sws_io_test simple-web-server)\n        add_test(NAME sws_io_test COMMAND sws_io_test)\n    \n        add_executable(sws_parse_test parse_test.cpp)\n        target_link_libraries(sws_parse_test simple-web-server)\n        add_test(NAME sws_parse_test COMMAND sws_parse_test)\n    endif()\nendif()\n\nif(OPENSSL_FOUND AND BUILD_TESTING)\n    add_executable(sws_crypto_test crypto_test.cpp)\n    target_link_libraries(sws_crypto_test simple-web-server)\n    add_test(NAME sws_crypto_test COMMAND sws_crypto_test)\nendif()\n\nif(BUILD_TESTING)\n    add_executable(status_code_test status_code_test.cpp)\n    target_link_libraries(status_code_test simple-web-server)\n    add_test(NAME status_code_test COMMAND status_code_test)\nendif()\n\nif(BUILD_FUZZING)\n    add_executable(percent_decode fuzzers/percent_decode.cpp)\n    target_compile_options(percent_decode PRIVATE -fsanitize=address,fuzzer)\n    target_link_options(percent_decode PRIVATE -fsanitize=address,fuzzer)\n    target_link_libraries(percent_decode simple-web-server)\n    \n    add_executable(query_string_parse fuzzers/query_string_parse.cpp)\n    target_compile_options(query_string_parse PRIVATE -fsanitize=address,fuzzer)\n    target_link_options(query_string_parse PRIVATE -fsanitize=address,fuzzer)\n    target_link_libraries(query_string_parse simple-web-server)\n    \n    add_executable(http_header_parse fuzzers/http_header_parse.cpp)\n    target_compile_options(http_header_parse PRIVATE -fsanitize=address,fuzzer)\n    target_link_options(http_header_parse PRIVATE -fsanitize=address,fuzzer)\n    target_link_libraries(http_header_parse simple-web-server)\n    \n    
add_executable(http_header_field_value_semicolon_separated_attributes_parse fuzzers/http_header_field_value_semicolon_separated_attributes_parse.cpp)\n    target_compile_options(http_header_field_value_semicolon_separated_attributes_parse PRIVATE -fsanitize=address,fuzzer)\n    target_link_options(http_header_field_value_semicolon_separated_attributes_parse PRIVATE -fsanitize=address,fuzzer)\n    target_link_libraries(http_header_field_value_semicolon_separated_attributes_parse simple-web-server)\n    \n    add_executable(request_message_parse fuzzers/request_message_parse.cpp)\n    target_compile_options(request_message_parse PRIVATE -fsanitize=address,fuzzer)\n    target_link_options(request_message_parse PRIVATE -fsanitize=address,fuzzer)\n    target_link_libraries(request_message_parse simple-web-server)\n    \n    add_executable(response_message_parse fuzzers/response_message_parse.cpp)\n    target_compile_options(response_message_parse PRIVATE -fsanitize=address,fuzzer)\n    target_link_options(response_message_parse PRIVATE -fsanitize=address,fuzzer)\n    target_link_libraries(response_message_parse simple-web-server)\nendif()"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/tests/assert.hpp",
    "content": "#ifndef SIMPLE_WEB_ASSERT_HPP\n#define SIMPLE_WEB_ASSERT_HPP\n\n#include <cstdlib>\n#include <iostream>\n\n#define ASSERT(e) ((void)((e) ? ((void)0) : ((void)(std::cerr << \"Assertion failed: (\" << #e << \"), function \" << __func__ << \", file \" << __FILE__ << \", line \" << __LINE__ << \".\\n\"), std::abort())))\n\n#endif /* SIMPLE_WEB_ASSERT_HPP */\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/tests/crypto_test.cpp",
    "content": "#include \"assert.hpp\"\n#include \"crypto.hpp\"\n#include <vector>\n\nusing namespace std;\nusing namespace SimpleWeb;\n\nconst vector<pair<string, string>> base64_string_tests = {\n    {\"\", \"\"},\n    {\"f\", \"Zg==\"},\n    {\"fo\", \"Zm8=\"},\n    {\"foo\", \"Zm9v\"},\n    {\"foob\", \"Zm9vYg==\"},\n    {\"fooba\", \"Zm9vYmE=\"},\n    {\"foobar\", \"Zm9vYmFy\"},\n    {\"The itsy bitsy spider climbed up the waterspout.\\r\\nDown came the rain\\r\\nand washed the spider out.\\r\\nOut came the sun\\r\\nand dried up all the rain\\r\\nand the itsy bitsy spider climbed up the spout again.\",\n     \"VGhlIGl0c3kgYml0c3kgc3BpZGVyIGNsaW1iZWQgdXAgdGhlIHdhdGVyc3BvdXQuDQpEb3duIGNhbWUgdGhlIHJhaW4NCmFuZCB3YXNoZWQgdGhlIHNwaWRlciBvdXQuDQpPdXQgY2FtZSB0aGUgc3VuDQphbmQgZHJpZWQgdXAgYWxsIHRoZSByYWluDQphbmQgdGhlIGl0c3kgYml0c3kgc3BpZGVyIGNsaW1iZWQgdXAgdGhlIHNwb3V0IGFnYWluLg==\"}};\n\nconst vector<pair<string, string>> md5_string_tests = {\n    {\"\", \"d41d8cd98f00b204e9800998ecf8427e\"},\n    {\"The quick brown fox jumps over the lazy dog\", \"9e107d9d372bb6826bd81d3542a419d6\"}};\n\nconst vector<pair<string, string>> sha1_string_tests = {\n    {\"\", \"da39a3ee5e6b4b0d3255bfef95601890afd80709\"},\n    {\"The quick brown fox jumps over the lazy dog\", \"2fd4e1c67a2d28fced849ee1bb76e7391b93eb12\"}};\n\nconst vector<pair<string, string>> sha256_string_tests = {\n    {\"\", \"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\"},\n    {\"The quick brown fox jumps over the lazy dog\", \"d7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592\"}};\n\nconst vector<pair<string, string>> sha512_string_tests = {\n    {\"\", \"cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e\"},\n    {\"The quick brown fox jumps over the lazy dog\", \"07e547d9586f6a73f73fbac0435ed76951218fb7d0c8d788a309d785436bbb642e93a252a954f23912547d1e8a3b5ed6e1bfd7097821233fa0538f3db854fee6\"}};\n\nint 
main() {\n  for(auto &string_test : base64_string_tests) {\n    ASSERT(Crypto::Base64::encode(string_test.first) == string_test.second);\n    ASSERT(Crypto::Base64::decode(string_test.second) == string_test.first);\n  }\n\n  for(auto &string_test : md5_string_tests) {\n    ASSERT(Crypto::to_hex_string(Crypto::md5(string_test.first)) == string_test.second);\n    stringstream ss(string_test.first);\n    ASSERT(Crypto::to_hex_string(Crypto::md5(ss)) == string_test.second);\n  }\n\n  for(auto &string_test : sha1_string_tests) {\n    ASSERT(Crypto::to_hex_string(Crypto::sha1(string_test.first)) == string_test.second);\n    stringstream ss(string_test.first);\n    ASSERT(Crypto::to_hex_string(Crypto::sha1(ss)) == string_test.second);\n  }\n\n  for(auto &string_test : sha256_string_tests) {\n    ASSERT(Crypto::to_hex_string(Crypto::sha256(string_test.first)) == string_test.second);\n    stringstream ss(string_test.first);\n    ASSERT(Crypto::to_hex_string(Crypto::sha256(ss)) == string_test.second);\n  }\n\n  for(auto &string_test : sha512_string_tests) {\n    ASSERT(Crypto::to_hex_string(Crypto::sha512(string_test.first)) == string_test.second);\n    stringstream ss(string_test.first);\n    ASSERT(Crypto::to_hex_string(Crypto::sha512(ss)) == string_test.second);\n  }\n\n  // Testing iterations\n  ASSERT(Crypto::to_hex_string(Crypto::sha1(\"Test\", 1)) == \"640ab2bae07bedc4c163f679a746f7ab7fb5d1fa\");\n  ASSERT(Crypto::to_hex_string(Crypto::sha1(\"Test\", 2)) == \"af31c6cbdecd88726d0a9b3798c71ef41f1624d5\");\n  stringstream ss(\"Test\");\n  ASSERT(Crypto::to_hex_string(Crypto::sha1(ss, 2)) == \"af31c6cbdecd88726d0a9b3798c71ef41f1624d5\");\n\n  ASSERT(Crypto::to_hex_string(Crypto::pbkdf2(\"Password\", \"Salt\", 4096, 128 / 8)) == \"f66df50f8aaa11e4d9721e1312ff2e66\");\n  ASSERT(Crypto::to_hex_string(Crypto::pbkdf2(\"Password\", \"Salt\", 8192, 512 / 8)) == 
\"a941ccbc34d1ee8ebbd1d34824a419c3dc4eac9cbc7c36ae6c7ca8725e2b618a6ad22241e787af937b0960cf85aa8ea3a258f243e05d3cc9b08af5dd93be046c\");\n}\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/tests/io_test.cpp",
    "content": "#include \"assert.hpp\"\n#include \"client_http.hpp\"\n#include \"server_http.hpp\"\n#include <future>\n\nusing namespace std;\n\nusing HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;\nusing HttpClient = SimpleWeb::Client<SimpleWeb::HTTP>;\n\nint main() {\n  // Test ScopeRunner\n  {\n    SimpleWeb::ScopeRunner scope_runner;\n    std::thread cancel_thread;\n    {\n      ASSERT(scope_runner.count == 0);\n      auto lock = scope_runner.continue_lock();\n      ASSERT(lock);\n      ASSERT(scope_runner.count == 1);\n      {\n        auto lock = scope_runner.continue_lock();\n        ASSERT(lock);\n        ASSERT(scope_runner.count == 2);\n      }\n      ASSERT(scope_runner.count == 1);\n      cancel_thread = thread([&scope_runner] {\n        scope_runner.stop();\n        ASSERT(scope_runner.count == -1);\n      });\n      this_thread::sleep_for(chrono::milliseconds(500));\n      ASSERT(scope_runner.count == 1);\n    }\n    cancel_thread.join();\n    ASSERT(scope_runner.count == -1);\n    auto lock = scope_runner.continue_lock();\n    ASSERT(!lock);\n    scope_runner.stop();\n    ASSERT(scope_runner.count == -1);\n\n    scope_runner.count = 0;\n\n    vector<thread> threads;\n    for(size_t c = 0; c < 100; ++c) {\n      threads.emplace_back([&scope_runner] {\n        auto lock = scope_runner.continue_lock();\n        ASSERT(scope_runner.count > 0);\n      });\n    }\n    for(auto &thread : threads)\n      thread.join();\n    ASSERT(scope_runner.count == 0);\n  }\n\n  HttpServer server;\n  server.config.port = 8080;\n\n  server.resource[\"^/string$\"][\"POST\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    auto content = request->content.string();\n    ASSERT(content == request->content.string());\n\n    *response << \"HTTP/1.1 200 OK\\r\\nContent-Length: \" << content.length() << \"\\r\\n\\r\\n\"\n              << content;\n\n    ASSERT(!request->remote_endpoint().address().to_string().empty());\n    
ASSERT(request->remote_endpoint().port() != 0);\n  };\n\n  server.resource[\"^/string/dup$\"][\"POST\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    auto content = request->content.string();\n\n    // Send content twice, before it has a chance to be written to the socket.\n    *response << \"HTTP/1.1 200 OK\\r\\nContent-Length: \" << (content.length() * 2) << \"\\r\\n\\r\\n\"\n              << content;\n    response->send();\n    *response << content;\n    response->send();\n\n    ASSERT(!request->remote_endpoint().address().to_string().empty());\n    ASSERT(request->remote_endpoint().port() != 0);\n  };\n\n  server.resource[\"^/string2$\"][\"POST\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    response->write(request->content.string());\n  };\n\n  server.resource[\"^/string3$\"][\"POST\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    stringstream stream;\n    stream << request->content.rdbuf();\n    response->write(stream);\n  };\n\n  server.resource[\"^/string4$\"][\"POST\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> /*request*/) {\n    response->write(SimpleWeb::StatusCode::client_error_forbidden, {{\"Test1\", \"test2\"}, {\"tesT3\", \"test4\"}});\n  };\n\n  server.resource[\"^/info$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    stringstream content_stream;\n    content_stream << request->method << \" \" << request->path << \" \" << request->http_version << \" \";\n    content_stream << request->header.find(\"test parameter\")->second;\n\n    content_stream.seekp(0, ios::end);\n\n    *response << \"HTTP/1.1 200 OK\\r\\nContent-Length: \" << content_stream.tellp() << \"\\r\\n\\r\\n\"\n              << content_stream.rdbuf();\n  };\n\n  server.resource[\"^/work$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> 
response, shared_ptr<HttpServer::Request> /*request*/) {\n    thread work_thread([response] {\n      this_thread::sleep_for(chrono::seconds(5));\n      response->write(\"Work done\");\n    });\n    work_thread.detach();\n  };\n\n  server.resource[\"^/match/([0-9]+)$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    string number = request->path_match[1];\n    *response << \"HTTP/1.1 200 OK\\r\\nContent-Length: \" << number.length() << \"\\r\\n\\r\\n\"\n              << number;\n  };\n\n  server.resource[\"^/header$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    auto content = request->header.find(\"test1\")->second + request->header.find(\"test2\")->second;\n\n    *response << \"HTTP/1.1 200 OK\\r\\nContent-Length: \" << content.length() << \"\\r\\n\\r\\n\"\n              << content;\n  };\n\n  server.resource[\"^/query_string$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    ASSERT(request->path == \"/query_string\");\n    ASSERT(request->query_string == \"testing\");\n    auto queries = request->parse_query_string();\n    auto it = queries.find(\"Testing\");\n    ASSERT(it != queries.end() && it->first == \"testing\" && it->second == \"\");\n    response->write(request->query_string);\n  };\n\n  server.resource[\"^/chunked$\"][\"POST\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    ASSERT(request->path == \"/chunked\");\n\n    ASSERT(request->content.string() == \"SimpleWeb in\\r\\n\\r\\nchunks.\");\n\n    response->write(\"6\\r\\nSimple\\r\\n3\\r\\nWeb\\r\\nE\\r\\n in\\r\\n\\r\\nchunks.\\r\\n0\\r\\n\\r\\n\", {{\"Transfer-Encoding\", \"chunked\"}});\n  };\n\n  server.resource[\"^/chunked2$\"][\"POST\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> request) {\n    ASSERT(request->path == \"/chunked2\");\n\n 
   ASSERT(request->content.string() == \"HelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorld\");\n\n    response->write(\"258\\r\\nHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorld\\r\\n0\\r\\n\\r\\n\", {{\"Transfer-Encoding\", \"chunked\"}});\n  };\n\n  server.resource[\"^/event-stream1$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> /*request*/) {\n    thread work_thread([response] {\n      response->close_connection_after_response = true; // Unspecified content length\n\n      // Send header\n      promise<bool> header_error;\n      response->write({{\"Content-Type\", \"text/event-stream\"}});\n      response->send([&header_error](const SimpleWeb::error_code &ec) {\n        header_error.set_value(static_cast<bool>(ec));\n      });\n      ASSERT(!header_error.get_future().get());\n\n      *response << \"data: 1\\n\\n\";\n      promise<bool> 
error;\n      response->send([&error](const SimpleWeb::error_code &ec) {\n        error.set_value(static_cast<bool>(ec));\n      });\n      ASSERT(!error.get_future().get());\n\n      // Write result\n      *response << \"data: 2\\n\\n\";\n    });\n    work_thread.detach();\n  };\n\n  server.resource[\"^/event-stream2$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> /*request*/) {\n    thread work_thread([response] {\n      response->close_connection_after_response = true; // Unspecified content length\n\n      // Send header\n      promise<bool> header_error;\n      response->write({{\"Content-Type\", \"text/event-stream\"}});\n      response->send([&header_error](const SimpleWeb::error_code &ec) {\n        header_error.set_value(static_cast<bool>(ec));\n      });\n      ASSERT(!header_error.get_future().get());\n\n      *response << \"data: 1\\r\\n\\r\\n\";\n      promise<bool> error;\n      response->send([&error](const SimpleWeb::error_code &ec) {\n        error.set_value(static_cast<bool>(ec));\n      });\n      ASSERT(!error.get_future().get());\n\n      // Write result\n      *response << \"data: 2\\r\\n\\r\\n\";\n    });\n    work_thread.detach();\n  };\n\n  server.resource[\"^/session-close$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> /*request*/) {\n    response->close_connection_after_response = true; // Unspecified content length\n    response->write(\"test\", {{\"Session\", \"close\"}});\n  };\n  server.resource[\"^/session-close-without-correct-header$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> /*request*/) {\n    response->close_connection_after_response = true; // Unspecified content length\n    response->write(\"test\");\n  };\n\n  server.resource[\"^/non-standard-line-endings1$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> /*request*/) {\n    *response << 
\"HTTP/1.1 200 OK\\r\\nname: value\\n\\n\";\n  };\n\n  server.resource[\"^/non-standard-line-endings2$\"][\"GET\"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> /*request*/) {\n    *response << \"HTTP/1.1 200 OK\\nname: value\\n\\n\";\n  };\n\n  std::string long_response;\n  for(int c = 0; c < 1000; ++c)\n    long_response += to_string(c);\n  server.resource[\"^/long-response$\"][\"GET\"] = [&long_response](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> /*request*/) {\n    response->write(long_response, {{\"name\", \"value\"}});\n  };\n\n  thread server_thread([&server]() {\n    // Start server\n    server.start();\n  });\n\n  this_thread::sleep_for(chrono::seconds(1));\n\n  server.stop();\n  server_thread.join();\n\n  server_thread = thread([&server]() {\n    // Start server\n    server.start();\n  });\n\n  this_thread::sleep_for(chrono::seconds(1));\n\n  // Test various request types\n  {\n    HttpClient client(\"localhost:8080\");\n    {\n      stringstream output;\n      auto r = client.request(\"POST\", \"/string\", \"A string\");\n      ASSERT(SimpleWeb::status_code(r->status_code) == SimpleWeb::StatusCode::success_ok);\n      output << r->content.rdbuf();\n      ASSERT(output.str() == \"A string\");\n    }\n\n    {\n      auto r = client.request(\"POST\", \"/string\", \"A string\");\n      ASSERT(SimpleWeb::status_code(r->status_code) == SimpleWeb::StatusCode::success_ok);\n      ASSERT(r->content.string() == \"A string\");\n      ASSERT(r->content.string() == \"A string\");\n    }\n\n    {\n      stringstream output;\n      auto r = client.request(\"POST\", \"/string2\", \"A string\");\n      ASSERT(SimpleWeb::status_code(r->status_code) == SimpleWeb::StatusCode::success_ok);\n      output << r->content.rdbuf();\n      ASSERT(output.str() == \"A string\");\n    }\n\n    {\n      stringstream output;\n      auto r = client.request(\"POST\", \"/string3\", \"A string\");\n      
ASSERT(SimpleWeb::status_code(r->status_code) == SimpleWeb::StatusCode::success_ok);\n      output << r->content.rdbuf();\n      ASSERT(output.str() == \"A string\");\n    }\n\n    {\n      stringstream output;\n      auto r = client.request(\"POST\", \"/string4\", \"A string\");\n      ASSERT(SimpleWeb::status_code(r->status_code) == SimpleWeb::StatusCode::client_error_forbidden);\n      ASSERT(r->header.size() == 3);\n      ASSERT(r->header.find(\"test1\")->second == \"test2\");\n      ASSERT(r->header.find(\"tEst3\")->second == \"test4\");\n      ASSERT(r->header.find(\"content-length\")->second == \"0\");\n      output << r->content.rdbuf();\n      ASSERT(output.str() == \"\");\n    }\n\n    {\n      stringstream output;\n      stringstream content(\"A string\");\n      auto r = client.request(\"POST\", \"/string\", content);\n      output << r->content.rdbuf();\n      ASSERT(output.str() == \"A string\");\n    }\n\n    {\n      // Test rapid calls to Response::send\n      stringstream output;\n      stringstream content(\"A string\\n\");\n      auto r = client.request(\"POST\", \"/string/dup\", content);\n      output << r->content.rdbuf();\n      ASSERT(output.str() == \"A string\\nA string\\n\");\n    }\n\n    {\n      stringstream output;\n      auto r = client.request(\"GET\", \"/info\", \"\", {{\"Test Parameter\", \"test value\"}});\n      output << r->content.rdbuf();\n      ASSERT(output.str() == \"GET /info 1.1 test value\");\n    }\n\n    {\n      stringstream output;\n      auto r = client.request(\"GET\", \"/match/123\");\n      output << r->content.rdbuf();\n      ASSERT(output.str() == \"123\");\n    }\n    {\n      auto r = client.request(\"POST\", \"/chunked\", \"6\\r\\nSimple\\r\\n3\\r\\nWeb\\r\\nE\\r\\n in\\r\\n\\r\\nchunks.\\r\\n0\\r\\n\\r\\n\", {{\"Transfer-Encoding\", \"chunked\"}});\n      ASSERT(r->content.string() == \"SimpleWeb in\\r\\n\\r\\nchunks.\");\n    }\n    {\n      auto r = client.request(\"POST\", \"/chunked2\", 
\"258\\r\\nHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorld\\r\\n0\\r\\n\\r\\n\", {{\"Transfer-Encoding\", \"chunked\"}});\n      ASSERT(r->content.string() == \"HelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorldHelloWorld\");\n    }\n\n    // Test reconnecting\n    for(int c = 0; c < 20; ++c) {\n      auto r = client.request(\"GET\", \"/session-close\");\n      ASSERT(r->content.string() == \"test\");\n    }\n    for(int c = 0; c < 20; ++c) {\n      auto r = client.request(\"GET\", \"/session-close-without-correct-header\");\n      ASSERT(r->content.string() == \"test\");\n    }\n\n    // Test non-standard line endings\n    {\n      auto r = client.request(\"GET\", \"/non-standard-line-endings1\");\n      ASSERT(r->http_version == \"1.1\");\n      ASSERT(r->status_code == \"200 OK\");\n      ASSERT(r->header.size() == 1);\n      ASSERT(r->header.begin()->first == \"name\");\n      
ASSERT(r->header.begin()->second == \"value\");\n      ASSERT(r->content.string().empty());\n    }\n    {\n      auto r = client.request(\"GET\", \"/non-standard-line-endings2\");\n      ASSERT(r->http_version == \"1.1\");\n      ASSERT(r->status_code == \"200 OK\");\n      ASSERT(r->header.size() == 1);\n      ASSERT(r->header.begin()->first == \"name\");\n      ASSERT(r->header.begin()->second == \"value\");\n      ASSERT(r->content.string().empty());\n    }\n  }\n  {\n    HttpClient client(\"localhost:8080\");\n\n    HttpClient::Connection *connection;\n    {\n      // test performing the stream version of the request methods first\n      stringstream output;\n      stringstream content(\"A string\");\n      auto r = client.request(\"POST\", \"/string\", content);\n      output << r->content.rdbuf();\n      ASSERT(output.str() == \"A string\");\n      ASSERT(client.connections.size() == 1);\n      connection = client.connections.begin()->get();\n    }\n\n    {\n      stringstream output;\n      auto r = client.request(\"POST\", \"/string\", \"A string\");\n      output << r->content.rdbuf();\n      ASSERT(output.str() == \"A string\");\n      ASSERT(client.connections.size() == 1);\n      ASSERT(connection == client.connections.begin()->get());\n    }\n\n    {\n      stringstream output;\n      auto r = client.request(\"GET\", \"/header\", \"\", {{\"test1\", \"test\"}, {\"test2\", \"ing\"}});\n      output << r->content.rdbuf();\n      ASSERT(output.str() == \"testing\");\n      ASSERT(client.connections.size() == 1);\n      ASSERT(connection == client.connections.begin()->get());\n    }\n\n    {\n      stringstream output;\n      auto r = client.request(\"GET\", \"/query_string?testing\");\n      ASSERT(r->content.string() == \"testing\");\n      ASSERT(client.connections.size() == 1);\n      ASSERT(connection == client.connections.begin()->get());\n    }\n  }\n\n  // Test large responses\n  {\n    {\n      HttpClient client(\"localhost:8080\");\n      
client.config.max_response_streambuf_size = 400;\n      bool thrown = false;\n      try {\n        auto r = client.request(\"GET\", \"/long-response\");\n      }\n      catch(...) {\n        thrown = true;\n      }\n      ASSERT(thrown);\n    }\n    HttpClient client(\"localhost:8080\");\n    client.config.max_response_streambuf_size = 400;\n    {\n      size_t calls = 0;\n      bool end = false;\n      std::string content;\n      client.request(\"GET\", \"/long-response\", [&calls, &content, &end](shared_ptr<HttpClient::Response> response, const SimpleWeb::error_code &ec) {\n        ASSERT(!ec);\n        content += response->content.string();\n        calls++;\n        if(calls == 1)\n          ASSERT(response->content.end == false);\n        end = response->content.end;\n      });\n      client.io_service->run();\n      ASSERT(content == long_response);\n      ASSERT(calls > 2);\n      ASSERT(end == true);\n    }\n    {\n      size_t calls = 0;\n      std::string content;\n      client.request(\"GET\", \"/long-response\", [&calls, &content](shared_ptr<HttpClient::Response> response, const SimpleWeb::error_code &ec) {\n        if(calls == 0)\n          ASSERT(!ec);\n        content += response->content.string();\n        calls++;\n        response->close();\n      });\n      SimpleWeb::restart(*client.io_service);\n      client.io_service->run();\n      ASSERT(!content.empty());\n      ASSERT(calls >= 2);\n    }\n  }\n\n  // Test client timeout\n  {\n    HttpClient client(\"localhost:8080\");\n    client.config.timeout = 2;\n    bool thrown = false;\n    try {\n      auto r = client.request(\"GET\", \"/work\");\n    }\n    catch(...) 
{\n      thrown = true;\n    }\n    ASSERT(thrown);\n  }\n  {\n    HttpClient client(\"localhost:8080\");\n    client.config.timeout = 2;\n    bool call = false;\n    client.request(\"GET\", \"/work\", [&call](shared_ptr<HttpClient::Response> /*response*/, const SimpleWeb::error_code &ec) {\n      ASSERT(ec);\n      call = true;\n    });\n    SimpleWeb::restart(*client.io_service);\n    client.io_service->run();\n    ASSERT(call);\n  }\n\n  // Test asynchronous requests\n  {\n    HttpClient client(\"localhost:8080\");\n    bool call = false;\n    client.request(\"GET\", \"/match/123\", [&call](shared_ptr<HttpClient::Response> response, const SimpleWeb::error_code &ec) {\n      ASSERT(!ec);\n      stringstream output;\n      output << response->content.rdbuf();\n      ASSERT(output.str() == \"123\");\n      call = true;\n    });\n    client.io_service->run();\n    ASSERT(call);\n\n    // Test event-stream\n    {\n      vector<int> calls(4, 0);\n      std::size_t call_num = 0;\n      client.request(\"GET\", \"/event-stream1\", [&calls, &call_num](shared_ptr<HttpClient::Response> response, const SimpleWeb::error_code &ec) {\n        calls.at(call_num) = 1;\n        if(call_num == 0) {\n          ASSERT(response->content.string().empty());\n          ASSERT(!ec);\n        }\n        else if(call_num == 1) {\n          ASSERT(response->content.string() == \"data: 1\\n\");\n          ASSERT(!ec);\n        }\n        else if(call_num == 2) {\n          ASSERT(response->content.string() == \"data: 2\\n\");\n          ASSERT(!ec);\n        }\n        else if(call_num == 3) {\n          ASSERT(response->content.string().empty());\n          ASSERT(ec == SimpleWeb::error::eof);\n        }\n        ++call_num;\n      });\n      SimpleWeb::restart(*client.io_service);\n      client.io_service->run();\n      for(auto call : calls)\n        ASSERT(call);\n    }\n    {\n      vector<int> calls(4, 0);\n      std::size_t call_num = 0;\n      client.request(\"GET\", 
\"/event-stream2\", [&calls, &call_num](shared_ptr<HttpClient::Response> response, const SimpleWeb::error_code &ec) {\n        calls.at(call_num) = 1;\n        if(call_num == 0) {\n          ASSERT(response->content.string().empty());\n          ASSERT(!ec);\n        }\n        else if(call_num == 1) {\n          ASSERT(response->content.string() == \"data: 1\\n\");\n          ASSERT(!ec);\n        }\n        else if(call_num == 2) {\n          ASSERT(response->content.string() == \"data: 2\\n\");\n          ASSERT(!ec);\n        }\n        else if(call_num == 3) {\n          ASSERT(response->content.string().empty());\n          ASSERT(ec == SimpleWeb::error::eof);\n        }\n        ++call_num;\n      });\n      SimpleWeb::restart(*client.io_service);\n      client.io_service->run();\n      for(auto call : calls)\n        ASSERT(call);\n    }\n\n    // Test concurrent requests from same client\n    {\n      vector<int> calls(100, 0);\n      vector<thread> threads;\n      for(size_t c = 0; c < 100; ++c) {\n        threads.emplace_back([c, &client, &calls] {\n          client.request(\"GET\", \"/match/123\", [c, &calls](shared_ptr<HttpClient::Response> response, const SimpleWeb::error_code &ec) {\n            ASSERT(!ec);\n            stringstream output;\n            output << response->content.rdbuf();\n            ASSERT(output.str() == \"123\");\n            calls[c] = 1;\n          });\n        });\n      }\n      for(auto &thread : threads)\n        thread.join();\n      ASSERT(client.connections.size() == 100);\n      SimpleWeb::restart(*client.io_service);\n      client.io_service->run();\n      ASSERT(client.connections.size() == 1);\n      for(auto call : calls)\n        ASSERT(call);\n    }\n\n    // Test concurrent synchronous request calls from same client\n    {\n      HttpClient client(\"localhost:8080\");\n      {\n        vector<int> calls(5, 0);\n        vector<thread> threads;\n        for(size_t c = 0; c < 5; ++c) {\n          
threads.emplace_back([c, &client, &calls] {\n            try {\n              auto r = client.request(\"GET\", \"/match/123\");\n              ASSERT(SimpleWeb::status_code(r->status_code) == SimpleWeb::StatusCode::success_ok);\n              ASSERT(r->content.string() == \"123\");\n              calls[c] = 1;\n            }\n            catch(...) {\n              ASSERT(false);\n            }\n          });\n        }\n        for(auto &thread : threads)\n          thread.join();\n        ASSERT(client.connections.size() == 1);\n        for(auto call : calls)\n          ASSERT(call);\n      }\n    }\n\n    // Test concurrent requests from different clients\n    {\n      vector<int> calls(10, 0);\n      vector<thread> threads;\n      for(size_t c = 0; c < 10; ++c) {\n        threads.emplace_back([c, &calls] {\n          HttpClient client(\"localhost:8080\");\n          client.request(\"POST\", \"/string\", \"A string\", [c, &calls](shared_ptr<HttpClient::Response> response, const SimpleWeb::error_code &ec) {\n            ASSERT(!ec);\n            ASSERT(response->content.string() == \"A string\");\n            calls[c] = 1;\n          });\n          client.io_service->run();\n        });\n      }\n      for(auto &thread : threads)\n        thread.join();\n      for(auto call : calls)\n        ASSERT(call);\n    }\n  }\n\n  // Test multiple requests through a persistent connection\n  {\n    HttpClient client(\"localhost:8080\");\n    ASSERT(client.connections.size() == 0);\n    for(size_t c = 0; c < 5000; ++c) {\n      auto r1 = client.request(\"POST\", \"/string\", \"A string\");\n      ASSERT(SimpleWeb::status_code(r1->status_code) == SimpleWeb::StatusCode::success_ok);\n      ASSERT(r1->content.string() == \"A string\");\n      ASSERT(client.connections.size() == 1);\n\n      stringstream content(\"A string\");\n      auto r2 = client.request(\"POST\", \"/string\", content);\n      ASSERT(SimpleWeb::status_code(r2->status_code) == 
SimpleWeb::StatusCode::success_ok);\n      ASSERT(r2->content.string() == \"A string\");\n      ASSERT(client.connections.size() == 1);\n    }\n  }\n\n  // Test multiple requests through several new client objects\n  for(size_t c = 0; c < 100; ++c) {\n    {\n      HttpClient client(\"localhost:8080\");\n      auto r = client.request(\"POST\", \"/string\", \"A string\");\n      ASSERT(SimpleWeb::status_code(r->status_code) == SimpleWeb::StatusCode::success_ok);\n      ASSERT(r->content.string() == \"A string\");\n      ASSERT(client.connections.size() == 1);\n    }\n\n    {\n      HttpClient client(\"localhost:8080\");\n      stringstream content(\"A string\");\n      auto r = client.request(\"POST\", \"/string\", content);\n      ASSERT(SimpleWeb::status_code(r->status_code) == SimpleWeb::StatusCode::success_ok);\n      ASSERT(r->content.string() == \"A string\");\n      ASSERT(client.connections.size() == 1);\n    }\n  }\n\n  // Test the client's stop()\n  for(size_t c = 0; c < 40; ++c) {\n    auto io_service = make_shared<SimpleWeb::io_context>();\n    bool call = false;\n    HttpClient client(\"localhost:8080\");\n    client.io_service = io_service;\n    client.request(\"GET\", \"/work\", [&call](shared_ptr<HttpClient::Response> /*response*/, const SimpleWeb::error_code &ec) {\n      call = true;\n      ASSERT(ec);\n    });\n    thread thread([io_service] {\n      io_service->run();\n    });\n    this_thread::sleep_for(chrono::milliseconds(100));\n    client.stop();\n    this_thread::sleep_for(chrono::milliseconds(100));\n    thread.join();\n    ASSERT(call);\n  }\n\n  // Test that the Client destructor cancels the client's pending request\n  for(size_t c = 0; c < 40; ++c) {\n    auto io_service = make_shared<SimpleWeb::io_context>();\n    {\n      HttpClient client(\"localhost:8080\");\n      client.io_service = io_service;\n      client.request(\"GET\", \"/work\", [](shared_ptr<HttpClient::Response> /*response*/, const SimpleWeb::error_code & /*ec*/) {\n        
ASSERT(false);\n      });\n      thread thread([io_service] {\n        io_service->run();\n      });\n      thread.detach();\n      this_thread::sleep_for(chrono::milliseconds(100));\n    }\n    this_thread::sleep_for(chrono::milliseconds(100));\n  }\n\n  server.stop();\n  server_thread.join();\n\n  // Test server destructor\n  {\n    auto io_service = make_shared<SimpleWeb::io_context>();\n    bool call = false;\n    bool client_catch = false;\n    {\n      HttpServer server;\n      server.config.port = 8081;\n      server.io_service = io_service;\n      server.resource[\"^/test$\"][\"GET\"] = [&call](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> /*request*/) {\n        call = true;\n        thread sleep_thread([response] {\n          this_thread::sleep_for(chrono::seconds(5));\n          response->write(SimpleWeb::StatusCode::success_ok, \"test\");\n          response->send([](const SimpleWeb::error_code & /*ec*/) {\n            ASSERT(false);\n          });\n        });\n        sleep_thread.detach();\n      };\n      server.start();\n      thread server_thread([io_service] {\n        io_service->run();\n      });\n      server_thread.detach();\n      this_thread::sleep_for(chrono::seconds(1));\n      thread client_thread([&client_catch] {\n        HttpClient client(\"localhost:8081\");\n        try {\n          auto r = client.request(\"GET\", \"/test\");\n          ASSERT(false);\n        }\n        catch(...) {\n          client_catch = true;\n        }\n      });\n      client_thread.detach();\n      this_thread::sleep_for(chrono::seconds(1));\n    }\n    this_thread::sleep_for(chrono::seconds(5));\n    ASSERT(call);\n    ASSERT(client_catch);\n    io_service->stop();\n  }\n}\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/tests/parse_test.cpp",
    "content": "#include \"assert.hpp\"\n#include \"client_http.hpp\"\n#include \"server_http.hpp\"\n#include <iostream>\n\nusing namespace std;\nusing namespace SimpleWeb;\n\nclass ServerTest : public ServerBase<HTTP> {\npublic:\n  ServerTest() : ServerBase<HTTP>::ServerBase(8080) {}\n\n  void accept() noexcept override {}\n\n  void parse_request_test() {\n    auto session = std::make_shared<Session>(static_cast<size_t>(-1), create_connection(*io_service));\n\n    std::ostream stream(&session->request->content.streambuf);\n    stream << \"GET /test/ HTTP/1.1\\r\\n\";\n    stream << \"TestHeader: test\\r\\n\";\n    stream << \"TestHeader2:test2\\r\\n\";\n    stream << \"TestHeader3:test3a\\r\\n\";\n    stream << \"TestHeader3:test3b\\r\\n\";\n    stream << \"\\r\\n\";\n\n    ASSERT(RequestMessage::parse(session->request->content, session->request->method, session->request->path,\n                                 session->request->query_string, session->request->http_version, session->request->header));\n\n    ASSERT(session->request->method == \"GET\");\n    ASSERT(session->request->path == \"/test/\");\n    ASSERT(session->request->http_version == \"1.1\");\n\n    ASSERT(session->request->header.size() == 4);\n    auto header_it = session->request->header.find(\"TestHeader\");\n    ASSERT(header_it != session->request->header.end() && header_it->second == \"test\");\n    header_it = session->request->header.find(\"TestHeader2\");\n    ASSERT(header_it != session->request->header.end() && header_it->second == \"test2\");\n\n    header_it = session->request->header.find(\"testheader\");\n    ASSERT(header_it != session->request->header.end() && header_it->second == \"test\");\n    header_it = session->request->header.find(\"testheader2\");\n    ASSERT(header_it != session->request->header.end() && header_it->second == \"test2\");\n\n    auto range = session->request->header.equal_range(\"testheader3\");\n    auto first = range.first;\n    auto second = first;\n    
++second;\n    ASSERT(range.first != session->request->header.end() && range.second != session->request->header.end() &&\n           ((first->second == \"test3a\" && second->second == \"test3b\") ||\n            (first->second == \"test3b\" && second->second == \"test3a\")));\n  }\n};\n\nclass ClientTest : public ClientBase<HTTP> {\npublic:\n  ClientTest(const std::string &server_port_path) : ClientBase<HTTP>::ClientBase(server_port_path, 80) {}\n\n  std::shared_ptr<Connection> create_connection() noexcept override {\n    return nullptr;\n  }\n\n  void connect(const std::shared_ptr<Session> &) noexcept override {}\n\n  void constructor_parse_test1() {\n    ASSERT(host == \"test.org\");\n    ASSERT(port == 8080);\n  }\n\n  void constructor_parse_test2() {\n    ASSERT(host == \"test.org\");\n    ASSERT(port == 80);\n  }\n\n  void parse_response_header_test() {\n    std::shared_ptr<Response> response(new Response(static_cast<size_t>(-1), nullptr));\n\n    ostream stream(&response->streambuf);\n    stream << \"HTTP/1.1 200 OK\\r\\n\";\n    stream << \"TestHeader: test\\r\\n\";\n    stream << \"TestHeader2:  test2\\r\\n\";\n    stream << \"TestHeader3:test3a\\r\\n\";\n    stream << \"TestHeader3:test3b\\r\\n\";\n    stream << \"TestHeader4:\\r\\n\";\n    stream << \"TestHeader5: \\r\\n\";\n    stream << \"TestHeader6:  \\r\\n\";\n    stream << \"\\r\\n\";\n\n    ASSERT(ResponseMessage::parse(response->content, response->http_version, response->status_code, response->header));\n\n    ASSERT(response->http_version == \"1.1\");\n    ASSERT(response->status_code == \"200 OK\");\n\n    ASSERT(response->header.size() == 7);\n    auto header_it = response->header.find(\"TestHeader\");\n    ASSERT(header_it != response->header.end() && header_it->second == \"test\");\n    header_it = response->header.find(\"TestHeader2\");\n    ASSERT(header_it != response->header.end() && header_it->second == \"test2\");\n\n    header_it = response->header.find(\"testheader\");\n    
ASSERT(header_it != response->header.end() && header_it->second == \"test\");\n    header_it = response->header.find(\"testheader2\");\n    ASSERT(header_it != response->header.end() && header_it->second == \"test2\");\n\n    auto range = response->header.equal_range(\"testheader3\");\n    auto first = range.first;\n    auto second = first;\n    ++second;\n    ASSERT(range.first != response->header.end() && range.second != response->header.end() &&\n           ((first->second == \"test3a\" && second->second == \"test3b\") ||\n            (first->second == \"test3b\" && second->second == \"test3a\")));\n\n    header_it = response->header.find(\"TestHeader4\");\n    ASSERT(header_it != response->header.end() && header_it->second == \"\");\n    header_it = response->header.find(\"TestHeader5\");\n    ASSERT(header_it != response->header.end() && header_it->second == \"\");\n    header_it = response->header.find(\"TestHeader6\");\n    ASSERT(header_it != response->header.end() && header_it->second == \"\");\n  }\n};\n\nint main() {\n  ASSERT(case_insensitive_equal(\"Test\", \"tesT\"));\n  ASSERT(case_insensitive_equal(\"tesT\", \"test\"));\n  ASSERT(!case_insensitive_equal(\"test\", \"tseT\"));\n  CaseInsensitiveEqual equal;\n  ASSERT(equal(\"Test\", \"tesT\"));\n  ASSERT(equal(\"tesT\", \"test\"));\n  ASSERT(!equal(\"test\", \"tset\"));\n  CaseInsensitiveHash hash;\n  ASSERT(hash(\"Test\") == hash(\"tesT\"));\n  ASSERT(hash(\"tesT\") == hash(\"test\"));\n  ASSERT(hash(\"test\") != hash(\"tset\"));\n\n  auto percent_decoded = \"testing æøå !#$&'()*+,/:;=?@[]123-._~\\r\\n\";\n  auto percent_encoded = \"testing%20%C3%A6%C3%B8%C3%A5%20%21%23%24%26%27%28%29%2A%2B%2C%2F%3A%3B%3D%3F%40%5B%5D123-._~%0D%0A\";\n  ASSERT(Percent::encode(percent_decoded) == percent_encoded);\n  ASSERT(Percent::decode(percent_encoded) == percent_decoded);\n  ASSERT(Percent::decode(Percent::encode(percent_decoded)) == percent_decoded);\n\n  SimpleWeb::CaseInsensitiveMultimap fields = {{\"test1\", 
\"æøå\"}, {\"test2\", \"!#$&'()*+,/:;=?@[]\"}};\n  auto query_string1 = \"test1=%C3%A6%C3%B8%C3%A5&test2=%21%23%24%26%27%28%29%2A%2B%2C%2F%3A%3B%3D%3F%40%5B%5D\";\n  auto query_string2 = \"test2=%21%23%24%26%27%28%29%2A%2B%2C%2F%3A%3B%3D%3F%40%5B%5D&test1=%C3%A6%C3%B8%C3%A5\";\n  auto query_string_result = QueryString::create(fields);\n  ASSERT(query_string_result == query_string1 || query_string_result == query_string2);\n  auto fields_result1 = QueryString::parse(query_string1);\n  auto fields_result2 = QueryString::parse(query_string2);\n  ASSERT(fields_result1 == fields_result2 && fields_result1 == fields);\n\n  auto serverTest = make_shared<ServerTest>();\n  serverTest->io_service = std::make_shared<io_context>();\n\n  serverTest->parse_request_test();\n\n  auto clientTest = make_shared<ClientTest>(\"test.org:8080\");\n  clientTest->constructor_parse_test1();\n\n  auto clientTest2 = make_shared<ClientTest>(\"test.org\");\n  clientTest2->constructor_parse_test2();\n\n  clientTest2->parse_response_header_test();\n\n\n  io_context io_service;\n  asio::ip::tcp::socket socket(io_service);\n  SimpleWeb::Server<HTTP>::Request request(static_cast<size_t>(-1), nullptr);\n  {\n    request.query_string = \"\";\n    auto queries = request.parse_query_string();\n    ASSERT(queries.empty());\n  }\n  {\n    request.query_string = \"=\";\n    auto queries = request.parse_query_string();\n    ASSERT(queries.empty());\n  }\n  {\n    request.query_string = \"=test\";\n    auto queries = request.parse_query_string();\n    ASSERT(queries.empty());\n  }\n  {\n    request.query_string = \"a=1%202%20%203&b=3+4&c&d=æ%25ø%26å%3F\";\n    auto queries = request.parse_query_string();\n    {\n      auto range = queries.equal_range(\"a\");\n      ASSERT(range.first != range.second);\n      ASSERT(range.first->second == \"1 2  3\");\n    }\n    {\n      auto range = queries.equal_range(\"b\");\n      ASSERT(range.first != range.second);\n      ASSERT(range.first->second == \"3 4\");\n    }\n 
   {\n      auto range = queries.equal_range(\"c\");\n      ASSERT(range.first != range.second);\n      ASSERT(range.first->second == \"\");\n    }\n    {\n      auto range = queries.equal_range(\"d\");\n      ASSERT(range.first != range.second);\n      ASSERT(range.first->second == \"æ%ø&å?\");\n    }\n  }\n\n  {\n    SimpleWeb::CaseInsensitiveMultimap solution;\n    std::stringstream header;\n    auto parsed = SimpleWeb::HttpHeader::parse(header);\n    ASSERT(parsed == solution);\n  }\n  {\n    SimpleWeb::CaseInsensitiveMultimap solution = {{\"Content-Type\", \"application/json\"}};\n    std::stringstream header(\"Content-Type: application/json\");\n    auto parsed = SimpleWeb::HttpHeader::parse(header);\n    ASSERT(parsed == solution);\n  }\n  {\n    SimpleWeb::CaseInsensitiveMultimap solution = {{\"Content-Type\", \"application/json\"}};\n    std::stringstream header(\"Content-Type: application/json\\r\");\n    auto parsed = SimpleWeb::HttpHeader::parse(header);\n    ASSERT(parsed == solution);\n  }\n  {\n    SimpleWeb::CaseInsensitiveMultimap solution = {{\"Content-Type\", \"application/json\"}};\n    std::stringstream header(\"Content-Type: application/json\\r\\n\");\n    auto parsed = SimpleWeb::HttpHeader::parse(header);\n    ASSERT(parsed == solution);\n  }\n\n  {\n    {\n      SimpleWeb::CaseInsensitiveMultimap solution;\n      auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"\");\n      ASSERT(parsed == solution);\n    }\n    {\n      SimpleWeb::CaseInsensitiveMultimap solution = {{\"a\", \"\"}};\n      auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"a\");\n      ASSERT(parsed == solution);\n    }\n    {\n      SimpleWeb::CaseInsensitiveMultimap solution = {{\"a\", \"\"}, {\"b\", \"\"}};\n      {\n        auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"a; b\");\n        ASSERT(parsed == solution);\n      }\n      {\n        auto parsed = 
SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"a;b\");\n        ASSERT(parsed == solution);\n      }\n    }\n    {\n      SimpleWeb::CaseInsensitiveMultimap solution = {{\"a\", \"\"}, {\"b\", \"c\"}};\n      {\n        auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"a; b=c\");\n        ASSERT(parsed == solution);\n      }\n      {\n        auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"a;b=c\");\n        ASSERT(parsed == solution);\n      }\n    }\n    {\n      SimpleWeb::CaseInsensitiveMultimap solution = {{\"form-data\", \"\"}};\n      auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"form-data\");\n      ASSERT(parsed == solution);\n    }\n    {\n      SimpleWeb::CaseInsensitiveMultimap solution = {{\"form-data\", \"\"}, {\"test\", \"\"}};\n      {\n        auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"form-data; test\");\n        ASSERT(parsed == solution);\n      }\n    }\n    {\n      SimpleWeb::CaseInsensitiveMultimap solution = {{\"form-data\", \"\"}, {\"name\", \"file\"}};\n      {\n        auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"form-data; name=\\\"file\\\"\");\n        ASSERT(parsed == solution);\n      }\n      {\n        auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"form-data; name=file\");\n        ASSERT(parsed == solution);\n      }\n    }\n    {\n      SimpleWeb::CaseInsensitiveMultimap solution = {{\"form-data\", \"\"}, {\"name\", \"file\"}, {\"filename\", \"filename.png\"}};\n      {\n        auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"form-data; name=\\\"file\\\"; filename=\\\"filename.png\\\"\");\n        ASSERT(parsed == solution);\n      }\n      {\n        auto parsed = 
SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"form-data;name=\\\"file\\\";filename=\\\"filename.png\\\"\");\n        ASSERT(parsed == solution);\n      }\n      {\n        auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"form-data; name=file; filename=filename.png\");\n        ASSERT(parsed == solution);\n      }\n      {\n        auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"form-data;name=file;filename=filename.png\");\n        ASSERT(parsed == solution);\n      }\n    }\n    {\n      SimpleWeb::CaseInsensitiveMultimap solution = {{\"form-data\", \"\"}, {\"name\", \"fi le\"}, {\"filename\", \"file name.png\"}};\n      {\n        auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"form-data; name=\\\"fi le\\\"; filename=\\\"file name.png\\\"\");\n        ASSERT(parsed == solution);\n      }\n      {\n        auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"form-data; name=\\\"fi%20le\\\"; filename=\\\"file%20name.png\\\"\");\n        ASSERT(parsed == solution);\n      }\n      {\n        auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"form-data; name=fi le; filename=file name.png\");\n        ASSERT(parsed == solution);\n      }\n      {\n        auto parsed = SimpleWeb::HttpHeader::FieldValue::SemicolonSeparatedAttributes::parse(\"form-data; name=fi%20le; filename=file%20name.png\");\n        ASSERT(parsed == solution);\n      }\n    }\n  }\n\n  ASSERT(SimpleWeb::Date::to_string(std::chrono::system_clock::now()).size() == 29);\n}\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/tests/status_code_test.cpp",
    "content": "#include \"assert.hpp\"\n#include \"status_code.hpp\"\n\nusing namespace SimpleWeb;\n\n\nint main() {\n  ASSERT(status_code(\"\") == StatusCode::unknown);\n  ASSERT(status_code(\"Error\") == StatusCode::unknown);\n  ASSERT(status_code(\"000 Error\") == StatusCode::unknown);\n  ASSERT(status_code(StatusCode::unknown) == \"\");\n  ASSERT(static_cast<int>(status_code(\"050 Custom\")) == 50);\n  ASSERT(static_cast<int>(status_code(\"950 Custom\")) == 950);\n  ASSERT(status_code(\"100 Continue\") == StatusCode::information_continue);\n  ASSERT(status_code(\"100 C\") == StatusCode::information_continue);\n  ASSERT(status_code(\"100\") == StatusCode::information_continue);\n  ASSERT(status_code(StatusCode::information_continue) == \"100 Continue\");\n  ASSERT(status_code(\"200 OK\") == StatusCode::success_ok);\n  ASSERT(status_code(StatusCode::success_ok) == \"200 OK\");\n  ASSERT(status_code(\"208 Already Reported\") == StatusCode::success_already_reported);\n  ASSERT(status_code(StatusCode::success_already_reported) == \"208 Already Reported\");\n  ASSERT(status_code(\"308 Permanent Redirect\") == StatusCode::redirection_permanent_redirect);\n  ASSERT(status_code(StatusCode::redirection_permanent_redirect) == \"308 Permanent Redirect\");\n  ASSERT(status_code(\"404 Not Found\") == StatusCode::client_error_not_found);\n  ASSERT(status_code(StatusCode::client_error_not_found) == \"404 Not Found\");\n  ASSERT(status_code(\"502 Bad Gateway\") == StatusCode::server_error_bad_gateway);\n  ASSERT(status_code(StatusCode::server_error_bad_gateway) == \"502 Bad Gateway\");\n  ASSERT(status_code(\"504 Gateway Timeout\") == StatusCode::server_error_gateway_timeout);\n  ASSERT(status_code(StatusCode::server_error_gateway_timeout) == \"504 Gateway Timeout\");\n  ASSERT(status_code(\"511 Network Authentication Required\") == StatusCode::server_error_network_authentication_required);\n  ASSERT(status_code(StatusCode::server_error_network_authentication_required) == 
\"511 Network Authentication Required\");\n}\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/utility.hpp",
    "content": "#ifndef SIMPLE_WEB_UTILITY_HPP\n#define SIMPLE_WEB_UTILITY_HPP\n\n#include \"status_code.hpp\"\n#include <atomic>\n#include <chrono>\n#include <cstdlib>\n#include <ctime>\n#include <iostream>\n#include <memory>\n#include <mutex>\n#include <string>\n#include <unordered_map>\n\n#ifndef DEPRECATED\n#if defined(__GNUC__) || defined(__clang__)\n#define DEPRECATED __attribute__((deprecated))\n#elif defined(_MSC_VER)\n#define DEPRECATED __declspec(deprecated)\n#else\n#define DEPRECATED\n#endif\n#endif\n\n#if __cplusplus > 201402L || _MSVC_LANG > 201402L\n#include <string_view>\nnamespace SimpleWeb {\n  using string_view = std::string_view;\n}\n#elif !defined(USE_STANDALONE_ASIO)\n#include <boost/utility/string_ref.hpp>\nnamespace SimpleWeb {\n  using string_view = boost::string_ref;\n}\n#else\nnamespace SimpleWeb {\n  using string_view = const std::string &;\n}\n#endif\n\nnamespace SimpleWeb {\n  inline bool case_insensitive_equal(const std::string &str1, const std::string &str2) noexcept {\n    return str1.size() == str2.size() &&\n           std::equal(str1.begin(), str1.end(), str2.begin(), [](char a, char b) {\n             return tolower(a) == tolower(b);\n           });\n  }\n  class CaseInsensitiveEqual {\n  public:\n    bool operator()(const std::string &str1, const std::string &str2) const noexcept {\n      return case_insensitive_equal(str1, str2);\n    }\n  };\n  // Based on https://stackoverflow.com/questions/2590677/how-do-i-combine-hash-values-in-c0x/2595226#2595226\n  class CaseInsensitiveHash {\n  public:\n    std::size_t operator()(const std::string &str) const noexcept {\n      std::size_t h = 0;\n      std::hash<int> hash;\n      for(auto c : str)\n        h ^= hash(tolower(c)) + 0x9e3779b9 + (h << 6) + (h >> 2);\n      return h;\n    }\n  };\n\n  using CaseInsensitiveMultimap = std::unordered_multimap<std::string, std::string, CaseInsensitiveHash, CaseInsensitiveEqual>;\n\n  /// Percent encoding and decoding\n  class Percent {\n  
public:\n    /// Returns percent-encoded string\n    static std::string encode(const std::string &value) noexcept {\n      static auto hex_chars = \"0123456789ABCDEF\";\n\n      std::string result;\n      result.reserve(value.size()); // Minimum size of result\n\n      for(auto &chr : value) {\n        if(!((chr >= '0' && chr <= '9') || (chr >= 'A' && chr <= 'Z') || (chr >= 'a' && chr <= 'z') || chr == '-' || chr == '.' || chr == '_' || chr == '~'))\n          result += std::string(\"%\") + hex_chars[static_cast<unsigned char>(chr) >> 4] + hex_chars[static_cast<unsigned char>(chr) & 15];\n        else\n          result += chr;\n      }\n\n      return result;\n    }\n\n    /// Returns percent-decoded string\n    static std::string decode(const std::string &value) noexcept {\n      std::string result;\n      result.reserve(value.size() / 3 + (value.size() % 3)); // Minimum size of result\n\n      for(std::size_t i = 0; i < value.size(); ++i) {\n        auto &chr = value[i];\n        if(chr == '%' && i + 2 < value.size()) {\n          auto hex = value.substr(i + 1, 2);\n          auto decoded_chr = static_cast<char>(std::strtol(hex.c_str(), nullptr, 16));\n          result += decoded_chr;\n          i += 2;\n        }\n        else if(chr == '+')\n          result += ' ';\n        else\n          result += chr;\n      }\n\n      return result;\n    }\n  };\n\n  /// Query string creation and parsing\n  class QueryString {\n  public:\n    /// Returns query string created from given field names and values\n    static std::string create(const CaseInsensitiveMultimap &fields) noexcept {\n      std::string result;\n\n      bool first = true;\n      for(auto &field : fields) {\n        result += (!first ? 
\"&\" : \"\") + field.first + '=' + Percent::encode(field.second);\n        first = false;\n      }\n\n      return result;\n    }\n\n    /// Returns query keys with percent-decoded values.\n    static CaseInsensitiveMultimap parse(const std::string &query_string) noexcept {\n      CaseInsensitiveMultimap result;\n\n      if(query_string.empty())\n        return result;\n\n      std::size_t name_pos = 0;\n      auto name_end_pos = std::string::npos;\n      auto value_pos = std::string::npos;\n      for(std::size_t c = 0; c < query_string.size(); ++c) {\n        if(query_string[c] == '&') {\n          auto name = query_string.substr(name_pos, (name_end_pos == std::string::npos ? c : name_end_pos) - name_pos);\n          if(!name.empty()) {\n            auto value = value_pos == std::string::npos ? std::string() : query_string.substr(value_pos, c - value_pos);\n            result.emplace(std::move(name), Percent::decode(value));\n          }\n          name_pos = c + 1;\n          name_end_pos = std::string::npos;\n          value_pos = std::string::npos;\n        }\n        else if(query_string[c] == '=' && name_end_pos == std::string::npos) {\n          name_end_pos = c;\n          value_pos = c + 1;\n        }\n      }\n      if(name_pos < query_string.size()) {\n        auto name = query_string.substr(name_pos, (name_end_pos == std::string::npos ? std::string::npos : name_end_pos - name_pos));\n        if(!name.empty()) {\n          auto value = value_pos >= query_string.size() ? 
std::string() : query_string.substr(value_pos);\n          result.emplace(std::move(name), Percent::decode(value));\n        }\n      }\n\n      return result;\n    }\n  };\n\n  class HttpHeader {\n  public:\n    /// Parse header fields from stream\n    static CaseInsensitiveMultimap parse(std::istream &stream) noexcept {\n      CaseInsensitiveMultimap result;\n      std::string line;\n      std::size_t param_end;\n      while(getline(stream, line) && (param_end = line.find(':')) != std::string::npos) {\n        std::size_t value_start = param_end + 1;\n        while(value_start + 1 < line.size() && line[value_start] == ' ')\n          ++value_start;\n        if(value_start < line.size())\n          result.emplace(line.substr(0, param_end), line.substr(value_start, line.size() - value_start - (line.back() == '\\r' ? 1 : 0)));\n      }\n      return result;\n    }\n\n    class FieldValue {\n    public:\n      class SemicolonSeparatedAttributes {\n      public:\n        /// Parse Set-Cookie or Content-Disposition from given header field value.\n        /// Attribute values are percent-decoded.\n        static CaseInsensitiveMultimap parse(const std::string &value) {\n          CaseInsensitiveMultimap result;\n\n          std::size_t name_start_pos = std::string::npos;\n          std::size_t name_end_pos = std::string::npos;\n          std::size_t value_start_pos = std::string::npos;\n          for(std::size_t c = 0; c < value.size(); ++c) {\n            if(name_start_pos == std::string::npos) {\n              if(value[c] != ' ' && value[c] != ';')\n                name_start_pos = c;\n            }\n            else {\n              if(name_end_pos == std::string::npos) {\n                if(value[c] == ';') {\n                  result.emplace(value.substr(name_start_pos, c - name_start_pos), std::string());\n                  name_start_pos = std::string::npos;\n                }\n                else if(value[c] == '=')\n                  name_end_pos = c;\n        
      }\n              else {\n                if(value_start_pos == std::string::npos) {\n                  if(value[c] == '\"' && c + 1 < value.size())\n                    value_start_pos = c + 1;\n                  else\n                    value_start_pos = c;\n                }\n                else if(value[c] == '\"' || value[c] == ';') {\n                  result.emplace(value.substr(name_start_pos, name_end_pos - name_start_pos), Percent::decode(value.substr(value_start_pos, c - value_start_pos)));\n                  name_start_pos = std::string::npos;\n                  name_end_pos = std::string::npos;\n                  value_start_pos = std::string::npos;\n                }\n              }\n            }\n          }\n          if(name_start_pos != std::string::npos) {\n            if(name_end_pos == std::string::npos)\n              result.emplace(value.substr(name_start_pos), std::string());\n            else if(value_start_pos != std::string::npos) {\n              if(value.back() == '\"')\n                result.emplace(value.substr(name_start_pos, name_end_pos - name_start_pos), Percent::decode(value.substr(value_start_pos, value.size() - 1)));\n              else\n                result.emplace(value.substr(name_start_pos, name_end_pos - name_start_pos), Percent::decode(value.substr(value_start_pos)));\n            }\n          }\n\n          return result;\n        }\n      };\n    };\n  };\n\n  class RequestMessage {\n  public:\n    /** Parse request line and header fields from a request stream.\n     *\n     * @param[in]  stream       Stream to parse.\n     * @param[out] method       HTTP method.\n     * @param[out] path         Path from request URI.\n     * @param[out] query_string Query string from request URI.\n     * @param[out] version      HTTP version.\n     * @param[out] header       Header fields.\n     *\n     * @return True if stream is parsed successfully, false if not.\n     */\n    static bool parse(std::istream &stream, 
std::string &method, std::string &path, std::string &query_string, std::string &version, CaseInsensitiveMultimap &header) noexcept {\n      std::string line;\n      std::size_t method_end;\n      if(getline(stream, line) && (method_end = line.find(' ')) != std::string::npos) {\n        method = line.substr(0, method_end);\n\n        std::size_t query_start = std::string::npos;\n        std::size_t path_and_query_string_end = std::string::npos;\n        for(std::size_t i = method_end + 1; i < line.size(); ++i) {\n          if(line[i] == '?' && (i + 1) < line.size() && query_start == std::string::npos)\n            query_start = i + 1;\n          else if(line[i] == ' ') {\n            path_and_query_string_end = i;\n            break;\n          }\n        }\n        if(path_and_query_string_end != std::string::npos) {\n          if(query_start != std::string::npos) {\n            path = line.substr(method_end + 1, query_start - method_end - 2);\n            query_string = line.substr(query_start, path_and_query_string_end - query_start);\n          }\n          else\n            path = line.substr(method_end + 1, path_and_query_string_end - method_end - 1);\n\n          std::size_t protocol_end;\n          if((protocol_end = line.find('/', path_and_query_string_end + 1)) != std::string::npos) {\n            if(line.compare(path_and_query_string_end + 1, protocol_end - path_and_query_string_end - 1, \"HTTP\") != 0)\n              return false;\n            version = line.substr(protocol_end + 1, line.size() - protocol_end - 2);\n          }\n          else\n            return false;\n\n          header = HttpHeader::parse(stream);\n        }\n        else\n          return false;\n      }\n      else\n        return false;\n      return true;\n    }\n  };\n\n  class ResponseMessage {\n  public:\n    /** Parse status line and header fields from a response stream.\n     *\n     * @param[in]  stream      Stream to parse.\n     * @param[out] version     HTTP version.\n   
  * @param[out] status_code HTTP status code.\n     * @param[out] header      Header fields.\n     *\n     * @return True if stream is parsed successfully, false if not.\n     */\n    static bool parse(std::istream &stream, std::string &version, std::string &status_code, CaseInsensitiveMultimap &header) noexcept {\n      std::string line;\n      std::size_t version_end;\n      if(getline(stream, line) && (version_end = line.find(' ')) != std::string::npos) {\n        if(5 < line.size())\n          version = line.substr(5, version_end - 5);\n        else\n          return false;\n        if((version_end + 1) < line.size())\n          status_code = line.substr(version_end + 1, line.size() - (version_end + 1) - (line.back() == '\\r' ? 1 : 0));\n        else\n          return false;\n\n        header = HttpHeader::parse(stream);\n      }\n      else\n        return false;\n      return true;\n    }\n  };\n\n  /// Date class working with formats specified in RFC 7231 Date/Time Formats\n  class Date {\n  public:\n    /// Returns the given std::chrono::system_clock::time_point as a string with the following format: Wed, 31 Jul 2019 11:34:23 GMT.\n    static std::string to_string(const std::chrono::system_clock::time_point time_point) noexcept {\n      static std::string result_cache;\n      static std::chrono::system_clock::time_point last_time_point;\n\n      static std::mutex mutex;\n      std::lock_guard<std::mutex> lock(mutex);\n\n      if(std::chrono::duration_cast<std::chrono::seconds>(time_point - last_time_point).count() == 0 && !result_cache.empty())\n        return result_cache;\n\n      last_time_point = time_point;\n\n      std::string result;\n      result.reserve(29);\n\n      auto time = std::chrono::system_clock::to_time_t(time_point);\n      tm tm;\n#if defined(_MSC_VER) || defined(__MINGW32__)\n      if(gmtime_s(&tm, &time) != 0)\n        return {};\n      auto gmtime = &tm;\n#else\n      auto gmtime = gmtime_r(&time, &tm);\n      if(!gmtime)\n        
return {};\n#endif\n\n      switch(gmtime->tm_wday) {\n      case 0: result += \"Sun, \"; break;\n      case 1: result += \"Mon, \"; break;\n      case 2: result += \"Tue, \"; break;\n      case 3: result += \"Wed, \"; break;\n      case 4: result += \"Thu, \"; break;\n      case 5: result += \"Fri, \"; break;\n      case 6: result += \"Sat, \"; break;\n      }\n\n      result += gmtime->tm_mday < 10 ? '0' : static_cast<char>(gmtime->tm_mday / 10 + 48);\n      result += static_cast<char>(gmtime->tm_mday % 10 + 48);\n\n      switch(gmtime->tm_mon) {\n      case 0: result += \" Jan \"; break;\n      case 1: result += \" Feb \"; break;\n      case 2: result += \" Mar \"; break;\n      case 3: result += \" Apr \"; break;\n      case 4: result += \" May \"; break;\n      case 5: result += \" Jun \"; break;\n      case 6: result += \" Jul \"; break;\n      case 7: result += \" Aug \"; break;\n      case 8: result += \" Sep \"; break;\n      case 9: result += \" Oct \"; break;\n      case 10: result += \" Nov \"; break;\n      case 11: result += \" Dec \"; break;\n      }\n\n      auto year = gmtime->tm_year + 1900;\n      result += static_cast<char>(year / 1000 + 48);\n      result += static_cast<char>((year / 100) % 10 + 48);\n      result += static_cast<char>((year / 10) % 10 + 48);\n      result += static_cast<char>(year % 10 + 48);\n      result += ' ';\n\n      result += gmtime->tm_hour < 10 ? '0' : static_cast<char>(gmtime->tm_hour / 10 + 48);\n      result += static_cast<char>(gmtime->tm_hour % 10 + 48);\n      result += ':';\n\n      result += gmtime->tm_min < 10 ? '0' : static_cast<char>(gmtime->tm_min / 10 + 48);\n      result += static_cast<char>(gmtime->tm_min % 10 + 48);\n      result += ':';\n\n      result += gmtime->tm_sec < 10 ? 
'0' : static_cast<char>(gmtime->tm_sec / 10 + 48);\n      result += static_cast<char>(gmtime->tm_sec % 10 + 48);\n\n      result += \" GMT\";\n\n      result_cache = result;\n      return result;\n    }\n  };\n} // namespace SimpleWeb\n\n#ifdef __SSE2__\n#include <emmintrin.h>\nnamespace SimpleWeb {\n  inline void spin_loop_pause() noexcept { _mm_pause(); }\n} // namespace SimpleWeb\n// TODO: need verification that the following checks are correct:\n#elif defined(_MSC_VER) && _MSC_VER >= 1800 && (defined(_M_X64) || defined(_M_IX86))\n#include <intrin.h>\nnamespace SimpleWeb {\n  inline void spin_loop_pause() noexcept { _mm_pause(); }\n} // namespace SimpleWeb\n#else\nnamespace SimpleWeb {\n  inline void spin_loop_pause() noexcept {}\n} // namespace SimpleWeb\n#endif\n\nnamespace SimpleWeb {\n  /// Makes it possible to for instance cancel Asio handlers without stopping asio::io_service.\n  class ScopeRunner {\n    /// Scope count that is set to -1 if scopes are to be canceled.\n    std::atomic<long> count;\n\n  public:\n    class SharedLock {\n      friend class ScopeRunner;\n      std::atomic<long> &count;\n      SharedLock(std::atomic<long> &count) noexcept : count(count) {}\n      SharedLock &operator=(const SharedLock &) = delete;\n      SharedLock(const SharedLock &) = delete;\n\n    public:\n      ~SharedLock() noexcept {\n        count.fetch_sub(1);\n      }\n    };\n\n    ScopeRunner() noexcept : count(0) {}\n\n    /// Returns nullptr if scope should be exited, or a shared lock otherwise.\n    /// The shared lock ensures that a potential destructor call is delayed until all locks are released.\n    std::unique_ptr<SharedLock> continue_lock() noexcept {\n      long expected = count;\n      while(expected >= 0 && !count.compare_exchange_weak(expected, expected + 1))\n        spin_loop_pause();\n\n      if(expected < 0)\n        return nullptr;\n      else\n        return std::unique_ptr<SharedLock>(new SharedLock(count));\n    }\n\n    /// Blocks until all 
shared locks are released, then prevents future shared locks.\n    void stop() noexcept {\n      long expected = 0;\n      while(!count.compare_exchange_weak(expected, -1)) {\n        if(expected < 0)\n          return;\n        expected = 0;\n        spin_loop_pause();\n      }\n    }\n  };\n} // namespace SimpleWeb\n\n#endif // SIMPLE_WEB_UTILITY_HPP\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/web/index.html",
    "content": "<html>\n    <head>\n        <title>Simple-Web-Server html-file</title>\n    </head>\n    <body>\n        This is the content of index.html\n    </body>\n</html>\n"
  },
  {
    "path": "C/thirdparty/Simple-Web-Server/web/test.html",
    "content": "<html>\n    <head>\n        <title>Simple-Web-Server html-file</title>\n    </head>\n    <body>\n        This is the content of test.html\n    </body>\n</html>\n"
  },
  {
    "path": "C/thirdparty/rapidjson/.gitattributes",
    "content": "# Set the default behavior, in case people don't have core.autocrlf set.\n* text=auto\n\n# Explicitly declare text files you want to always be normalized and converted\n# to native line endings on checkout.\n*.cpp text\n*.h text\n*.txt text\n*.md text\n*.cmake text\n*.svg text\n*.dot text\n*.yml text\n*.in text\n*.sh text\n*.autopkg text\nDockerfile text\n\n# Denote all files that are truly binary and should not be modified.\n*.png binary\n*.jpg binary\n*.json binary"
  },
  {
    "path": "C/thirdparty/rapidjson/.gitignore",
    "content": "/bin/*\n!/bin/data\n!/bin/encodings\n!/bin/jsonchecker\n!/bin/types\n!/bin/unittestschema\n/build\n/doc/html\n/doc/doxygen_*.db\n*.a\n\n# Temporary files created during CMake build\nCMakeCache.txt\nCMakeFiles\ncmake_install.cmake\nCTestTestfile.cmake\nMakefile\nRapidJSON*.cmake\nRapidJSON.pc\nTesting\n/googletest\ninstall_manifest.txt\nDoxyfile\nDoxyfile.zh-cn\nDartConfiguration.tcl\n*.nupkg\n\n# Files created by OS\n*.DS_Store\n"
  },
  {
    "path": "C/thirdparty/rapidjson/.gitmodules",
    "content": "[submodule \"thirdparty/gtest\"]\n\tpath = thirdparty/gtest\n\turl = https://github.com/google/googletest.git\n"
  },
  {
    "path": "C/thirdparty/rapidjson/.travis.yml",
    "content": "sudo: required\ndist: xenial\n\nlanguage: cpp\ncache:\n  - ccache\n\naddons:\n  apt:\n    sources:\n      - ubuntu-toolchain-r-test\n    packages:\n      - cmake\n      - valgrind\n      - clang-8\nenv:\n  global:\n    - USE_CCACHE=1\n    - CCACHE_SLOPPINESS=pch_defines,time_macros\n    - CCACHE_COMPRESS=1\n    - CCACHE_MAXSIZE=100M\n    - ARCH_FLAGS_x86='-m32'        # #266: don't use SSE on 32-bit\n    - ARCH_FLAGS_x86_64='-msse4.2' #       use SSE4.2 on 64-bit\n    - ARCH_FLAGS_aarch64='-march=armv8-a'\n    - GITHUB_REPO='Tencent/rapidjson'\n    - secure: \"HrsaCb+N66EG1HR+LWH1u51SjaJyRwJEDzqJGYMB7LJ/bfqb9mWKF1fLvZGk46W5t7TVaXRDD5KHFx9DPWvKn4gRUVkwTHEy262ah5ORh8M6n/6VVVajeV/AYt2C0sswdkDBDO4Xq+xy5gdw3G8s1A4Inbm73pUh+6vx+7ltBbk=\"\n\nmatrix:\n  include:\n    # gcc\n    - env: CONF=release ARCH=x86     CXX11=ON  CXX17=OFF CXX20=OFF MEMBERSMAP=OFF\n      compiler: gcc\n      arch: amd64\n    - env: CONF=release ARCH=x86_64  CXX11=ON  CXX17=OFF CXX20=OFF MEMBERSMAP=OFF\n      compiler: gcc\n      arch: amd64\n    - env: CONF=release ARCH=x86_64  CXX11=ON  CXX17=OFF CXX20=OFF MEMBERSMAP=ON\n      compiler: gcc\n      arch: amd64\n    - env: CONF=debug   ARCH=x86     CXX11=OFF CXX17=OFF CXX20=OFF MEMBERSMAP=OFF\n      compiler: gcc\n      arch: amd64\n    - env: CONF=debug   ARCH=x86_64  CXX11=OFF CXX17=OFF CXX20=OFF MEMBERSMAP=OFF\n      compiler: gcc\n      arch: amd64\n    - env: CONF=debug   ARCH=x86     CXX11=OFF CXX17=ON  CXX20=OFF MEMBERSMAP=ON CXX_FLAGS='-D_GLIBCXX_DEBUG'\n      compiler: gcc\n      arch: amd64\n    - env: CONF=debug   ARCH=x86_64  CXX11=OFF CXX17=ON  CXX20=OFF MEMBERSMAP=ON CXX_FLAGS='-D_GLIBCXX_DEBUG'\n      compiler: gcc\n      arch: amd64\n    - env: CONF=release ARCH=aarch64 CXX11=ON  CXX17=OFF CXX20=OFF MEMBERSMAP=OFF\n      compiler: gcc\n      arch: arm64\n    - env: CONF=release ARCH=aarch64 CXX11=OFF CXX17=OFF CXX20=OFF MEMBERSMAP=OFF\n      compiler: gcc\n      arch: arm64\n    - env: CONF=release ARCH=aarch64 
CXX11=OFF CXX17=ON  CXX20=OFF MEMBERSMAP=ON\n      compiler: gcc\n      arch: arm64\n    # clang\n    - env: CONF=release ARCH=x86     CXX11=ON  CXX17=OFF CXX20=OFF MEMBERSMAP=ON  CCACHE_CPP2=yes\n      compiler: clang\n      arch: amd64\n    - env: CONF=release ARCH=x86_64  CXX11=ON  CXX17=OFF CXX20=OFF MEMBERSMAP=ON  CCACHE_CPP2=yes\n      compiler: clang\n      arch: amd64\n    - env: CONF=release ARCH=x86_64  CXX11=ON  CXX17=OFF CXX20=OFF MEMBERSMAP=OFF CCACHE_CPP2=yes\n      compiler: clang\n      arch: amd64\n    - env: CONF=debug   ARCH=x86     CXX11=OFF CXX17=OFF CXX20=OFF MEMBERSMAP=ON  CCACHE_CPP2=yes\n      compiler: clang\n      arch: amd64\n    - env: CONF=debug   ARCH=x86_64  CXX11=OFF CXX17=OFF CXX20=OFF MEMBERSMAP=ON  CCACHE_CPP2=yes\n      compiler: clang\n      arch: amd64\n    - env: CONF=debug   ARCH=x86     CXX11=OFF CXX17=ON  CXX20=OFF MEMBERSMAP=OFF CCACHE_CPP2=yes\n      compiler: clang\n      arch: amd64\n    - env: CONF=debug   ARCH=x86_64  CXX11=OFF CXX17=ON  CXX20=OFF MEMBERSMAP=OFF CCACHE_CPP2=yes\n      compiler: clang\n      arch: amd64\n    - env: CONF=debug   ARCH=aarch64 CXX11=ON  CXX17=OFF CXX20=OFF MEMBERSMAP=ON  CCACHE_CPP2=yes\n      compiler: clang\n      arch: arm64\n    - env: CONF=debug   ARCH=aarch64 CXX11=OFF CXX17=OFF CXX20=OFF MEMBERSMAP=ON  CCACHE_CPP2=yes\n      compiler: clang\n      arch: arm64\n    - env: CONF=debug   ARCH=aarch64 CXX11=OFF CXX17=ON  CXX20=OFF MEMBERSMAP=OFF CCACHE_CPP2=yes\n      compiler: clang\n      arch: arm64\n    # coverage report\n    - env: CONF=debug   ARCH=x86     GCOV_FLAGS='--coverage' CXX_FLAGS='-O0' CXX11=OFF CXX17=OFF CXX20=OFF\n      compiler: gcc\n      arch: amd64\n      cache:\n        - ccache\n        - pip\n      after_success:\n        - pip install --user cpp-coveralls\n        - coveralls -r .. 
--gcov-options '\\-lp' -e thirdparty -e example -e test -e build/CMakeFiles -e include/rapidjson/msinttypes -e include/rapidjson/internal/meta.h -e include/rapidjson/error/en.h\n    - env: CONF=debug   ARCH=x86_64  GCOV_FLAGS='--coverage' CXX_FLAGS='-O0' CXX11=ON  CXX17=OFF CXX20=OFF MEMBERSMAP=ON\n      compiler: gcc\n      arch: amd64\n      cache:\n        - ccache\n        - pip\n      after_success:\n        - pip install --user cpp-coveralls\n        - coveralls -r .. --gcov-options '\\-lp' -e thirdparty -e example -e test -e build/CMakeFiles -e include/rapidjson/msinttypes -e include/rapidjson/internal/meta.h -e include/rapidjson/error/en.h\n    - env: CONF=debug   ARCH=aarch64 GCOV_FLAGS='--coverage' CXX_FLAGS='-O0' CXX11=OFF CXX17=ON  CXX20=OFF\n      compiler: gcc\n      arch: arm64\n      cache:\n        - ccache\n        - pip\n      after_success:\n        - pip install --user cpp-coveralls\n        - coveralls -r .. --gcov-options '\\-lp' -e thirdparty -e example -e test -e build/CMakeFiles -e include/rapidjson/msinttypes -e include/rapidjson/internal/meta.h -e include/rapidjson/error/en.h\n    - script: # Documentation task\n      - cd build\n      - cmake .. 
-DRAPIDJSON_HAS_STDSTRING=ON -DCMAKE_VERBOSE_MAKEFILE=ON\n      - make travis_doc\n      cache: false\n      addons:\n        apt:\n          packages:\n            - doxygen\n\nbefore_install:\n  - if [ \"x86_64\" = \"$(arch)\" ]; then sudo apt-get install -y g++-multilib libc6-dbg:i386 --allow-unauthenticated; fi\n\nbefore_script:\n    # travis provides clang-7 for amd64 and clang-3.8 for arm64\n    # here use clang-8 to all architectures as clang-7 is not available for arm64\n  - if [ -f /usr/bin/clang++-8 ]; then\n      sudo update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-8 1000;\n      sudo update-alternatives --config clang++;\n      export PATH=/usr/bin:$PATH;\n    fi\n  - if [ \"$CXX\" = \"clang++\" ]; then export CCACHE_CPP2=yes; fi\n  - ccache -s\n    #   hack to avoid Valgrind bug (https://bugs.kde.org/show_bug.cgi?id=326469),\n    #   exposed by merging PR#163 (using -march=native)\n    #   TODO: Since this bug is already fixed. Remove this when valgrind can be upgraded.\n  - sed -i \"s/-march=native//\" CMakeLists.txt\n  - mkdir build\n\nscript:\n  - if [ \"$CXX\" = \"clang++\" ]; then export CXXFLAGS=\"-stdlib=libc++ ${CXXFLAGS}\"; fi\n  - >\n      eval \"ARCH_FLAGS=\\${ARCH_FLAGS_${ARCH}}\" ;\n      (cd build && cmake\n      -DRAPIDJSON_HAS_STDSTRING=ON\n      -DRAPIDJSON_USE_MEMBERSMAP=$MEMBERSMAP\n      -DRAPIDJSON_BUILD_CXX11=$CXX11\n      -DRAPIDJSON_BUILD_CXX17=$CXX17\n      -DRAPIDJSON_BUILD_CXX20=$CXX20\n      -DCMAKE_VERBOSE_MAKEFILE=ON\n      -DCMAKE_BUILD_TYPE=$CONF\n      -DCMAKE_CXX_FLAGS=\"$ARCH_FLAGS $GCOV_FLAGS $CXX_FLAGS\"\n      -DCMAKE_EXE_LINKER_FLAGS=$GCOV_FLAGS\n      ..)\n  - cd build\n  - make tests -j 2\n  - make examples -j 2\n  - ctest -j 2 -V `[ \"$CONF\" = \"release\" ] || echo \"-E perftest\"`\n"
  },
  {
    "path": "C/thirdparty/rapidjson/CHANGELOG.md",
    "content": "# Change Log\nAll notable changes to this project will be documented in this file.\nThis project adheres to [Semantic Versioning](http://semver.org/).\n\n## [Unreleased]\n\n## 1.1.0 - 2016-08-25\n\n### Added\n* Add GenericDocument ctor overload to specify JSON type (#369)\n* Add FAQ (#372, #373, #374, #376)\n* Add forward declaration header `fwd.h`\n* Add @PlatformIO Library Registry manifest file (#400)\n* Implement assignment operator for BigInteger (#404)\n* Add comments support (#443)\n* Adding coapp definition (#460)\n* documenttest.cpp: EXPECT_THROW when checking empty allocator (470)\n* GenericDocument: add implicit conversion to ParseResult (#480)\n* Use <wchar.h> with C++ linkage on Windows ARM (#485)\n* Detect little endian for Microsoft ARM targets \n* Check Nan/Inf when writing a double (#510)\n* Add JSON Schema Implementation (#522)\n* Add iostream wrapper (#530)\n* Add Jsonx example for converting JSON into JSONx (a XML format) (#531)\n* Add optional unresolvedTokenIndex parameter to Pointer::Get() (#532)\n* Add encoding validation option for Writer/PrettyWriter (#534)\n* Add Writer::SetMaxDecimalPlaces() (#536)\n* Support {0, } and {0, m} in Regex (#539)\n* Add Value::Get/SetFloat(), Value::IsLossLessFloat/Double() (#540)\n* Add stream position check to reader unit tests (#541)\n* Add Templated accessors and range-based for (#542)\n* Add (Pretty)Writer::RawValue() (#543)\n* Add Document::Parse(std::string), Document::Parse(const char*, size_t length) and related APIs. 
(#553)\n* Add move constructor for GenericSchemaDocument (#554)\n* Add VS2010 and VS2015 to AppVeyor CI (#555)\n* Add parse-by-parts example (#556, #562)\n* Support parse number as string (#564, #589)\n* Add kFormatSingleLineArray for PrettyWriter (#577)\n* Added optional support for trailing commas (#584)\n* Added filterkey and filterkeydom examples (#615)\n* Added npm docs (#639)\n* Allow options for writing and parsing NaN/Infinity (#641)\n* Add std::string overload to PrettyWriter::Key() when RAPIDJSON_HAS_STDSTRING is defined (#698)\n\n### Fixed\n* Fix gcc/clang/vc warnings (#350, #394, #397, #444, #447, #473, #515, #582, #589, #595, #667)\n* Fix documentation (#482, #511, #550, #557, #614, #635, #660)\n* Fix emscripten alignment issue (#535)\n* Fix missing allocator to uses of AddMember in document (#365)\n* CMake will no longer complain that the minimum CMake version is not specified (#501)\n* Make it usable with old VC8 (VS2005) (#383)\n* Prohibit C++11 move from Document to Value (#391)\n* Try to fix incorrect 64-bit alignment (#419)\n* Check return of fwrite to avoid warn_unused_result build failures (#421)\n* Fix UB in GenericDocument::ParseStream (#426)\n* Keep Document value unchanged on parse error (#439)\n* Add missing return statement (#450)\n* Fix Document::Parse(const Ch*) for transcoding (#478)\n* encodings.h: fix typo in preprocessor condition (#495)\n* Custom Microsoft headers are necessary only for Visual Studio 2012 and lower (#559)\n* Fix memory leak for invalid regex (26e69ffde95ba4773ab06db6457b78f308716f4b)\n* Fix a bug in schema minimum/maximum keywords for 64-bit integer (e7149d665941068ccf8c565e77495521331cf390)\n* Fix a crash bug in regex (#605)\n* Fix schema \"required\" keyword cannot handle duplicated keys (#609)\n* Fix cmake CMP0054 warning (#612)\n* Added missing include guards in istreamwrapper.h and ostreamwrapper.h (#634)\n* Fix undefined behaviour (#646)\n* Fix buffer overrun using PutN (#673)\n* Fix 
rapidjson::value::Get<std::string>() may returns wrong data (#681)\n* Add Flush() for all value types (#689)\n* Handle malloc() fail in PoolAllocator (#691)\n* Fix builds on x32 platform. #703\n\n### Changed\n* Clarify problematic JSON license (#392)\n* Move Travis to container based infrastructure (#504, #558)\n* Make whitespace array more compact (#513)\n* Optimize Writer::WriteString() with SIMD (#544)\n* x86-64 48-bit pointer optimization for GenericValue (#546)\n* Define RAPIDJSON_HAS_CXX11_RVALUE_REFS directly in clang (#617)\n* Make GenericSchemaDocument constructor explicit (#674)\n* Optimize FindMember when use std::string (#690)\n\n## [1.0.2] - 2015-05-14\n\n### Added\n* Add Value::XXXMember(...) overloads for std::string (#335)\n\n### Fixed\n* Include rapidjson.h for all internal/error headers.\n* Parsing some numbers incorrectly in full-precision mode (`kFullPrecisionParseFlag`) (#342)\n* Fix some numbers parsed incorrectly (#336)\n* Fix alignment of 64bit platforms (#328)\n* Fix MemoryPoolAllocator::Clear() to clear user-buffer (0691502573f1afd3341073dd24b12c3db20fbde4)\n\n### Changed\n* CMakeLists for include as a thirdparty in projects (#334, #337)\n* Change Document::ParseStream() to use stack allocator for Reader (ffbe38614732af8e0b3abdc8b50071f386a4a685) \n\n## [1.0.1] - 2015-04-25\n\n### Added\n* Changelog following [Keep a CHANGELOG](https://github.com/olivierlacan/keep-a-changelog) suggestions.\n\n### Fixed\n* Parsing of some numbers (e.g. 
\"1e-00011111111111\") causing assertion (#314).\n* Visual C++ 32-bit compilation error in `diyfp.h` (#317).\n\n## [1.0.0] - 2015-04-22\n\n### Added\n* 100% [Coverall](https://coveralls.io/r/Tencent/rapidjson?branch=master) coverage.\n* Version macros (#311)\n\n### Fixed\n* A bug in trimming long number sequence (4824f12efbf01af72b8cb6fc96fae7b097b73015).\n* Double quote in unicode escape (#288).\n* Negative zero roundtrip (double only) (#289).\n* Standardize behavior of `memcpy()` and `malloc()` (0c5c1538dcfc7f160e5a4aa208ddf092c787be5a, #305, 0e8bbe5e3ef375e7f052f556878be0bd79e9062d).\n\n### Removed\n* Remove an invalid `Document::ParseInsitu()` API (e7f1c6dd08b522cfcf9aed58a333bd9a0c0ccbeb).\n\n## 1.0-beta - 2015-04-8\n\n### Added\n* RFC 7159 (#101)\n* Optional Iterative Parser (#76)\n* Deep-copy values (#20)\n* Error code and message (#27)\n* ASCII Encoding (#70)\n* `kParseStopWhenDoneFlag` (#83)\n* `kParseFullPrecisionFlag` (881c91d696f06b7f302af6d04ec14dd08db66ceb)\n* Add `Key()` to handler concept (#134)\n* C++11 compatibility and support (#128)\n* Optimized number-to-string and vice versa conversions (#137, #80)\n* Short-String Optimization (#131)\n* Local stream optimization by traits (#32)\n* Travis & Appveyor Continuous Integration, with Valgrind verification (#24, #242)\n* Redo all documentation (English, Simplified Chinese)\n\n### Changed\n* Copyright ownership transferred to THL A29 Limited (a Tencent company).\n* Migrating from Premake to CMAKE (#192)\n* Resolve all warning reports\n\n### Removed\n* Remove other JSON libraries for performance comparison (#180)\n\n## 0.11 - 2012-11-16\n\n## 0.1 - 2011-11-18\n\n[Unreleased]: https://github.com/Tencent/rapidjson/compare/v1.1.0...HEAD\n[1.1.0]: https://github.com/Tencent/rapidjson/compare/v1.0.2...v1.1.0\n[1.0.2]: https://github.com/Tencent/rapidjson/compare/v1.0.1...v1.0.2\n[1.0.1]: https://github.com/Tencent/rapidjson/compare/v1.0.0...v1.0.1\n[1.0.0]: 
https://github.com/Tencent/rapidjson/compare/v1.0-beta...v1.0.0\n"
  },
  {
    "path": "C/thirdparty/rapidjson/CMakeLists.txt",
    "content": "CMAKE_MINIMUM_REQUIRED(VERSION 2.8.12)\nif(POLICY CMP0025)\n  # detect Apple's Clang\n  cmake_policy(SET CMP0025 NEW)\nendif()\nif(POLICY CMP0054)\n  cmake_policy(SET CMP0054 NEW)\nendif()\n\nSET(CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/CMakeModules)\n\nset(LIB_MAJOR_VERSION \"1\")\nset(LIB_MINOR_VERSION \"1\")\nset(LIB_PATCH_VERSION \"0\")\nset(LIB_VERSION_STRING \"${LIB_MAJOR_VERSION}.${LIB_MINOR_VERSION}.${LIB_PATCH_VERSION}\")\n\nif (CMAKE_VERSION VERSION_LESS 3.0)\n    PROJECT(RapidJSON CXX)\nelse()\n    cmake_policy(SET CMP0048 NEW)\n    PROJECT(RapidJSON VERSION \"${LIB_VERSION_STRING}\" LANGUAGES CXX)\nendif()\n\n# compile in release with debug info mode by default\nif(NOT CMAKE_BUILD_TYPE)\n    set(CMAKE_BUILD_TYPE \"RelWithDebInfo\" CACHE STRING \"Choose the type of build, options are: Debug Release RelWithDebInfo MinSizeRel.\" FORCE)\nendif()\n\n# Build all binaries in a separate directory\nSET(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)\n\noption(RAPIDJSON_BUILD_DOC \"Build rapidjson documentation.\" ON)\noption(RAPIDJSON_BUILD_EXAMPLES \"Build rapidjson examples.\" ON)\noption(RAPIDJSON_BUILD_TESTS \"Build rapidjson perftests and unittests.\" ON)\noption(RAPIDJSON_BUILD_THIRDPARTY_GTEST\n    \"Use gtest installation in `thirdparty/gtest` by default if available\" OFF)\n\noption(RAPIDJSON_BUILD_CXX11 \"Build rapidjson with C++11\" ON)\noption(RAPIDJSON_BUILD_CXX17 \"Build rapidjson with C++17\" OFF)\noption(RAPIDJSON_BUILD_CXX20 \"Build rapidjson with C++20\" OFF)\nif(RAPIDJSON_BUILD_CXX11)\n    set(CMAKE_CXX_STANDARD 11)\n    set(CMAKE_CXX_STANDARD_REQUIRED TRUE)\nendif()\n\noption(RAPIDJSON_BUILD_ASAN \"Build rapidjson with address sanitizer (gcc/clang)\" OFF)\noption(RAPIDJSON_BUILD_UBSAN \"Build rapidjson with undefined behavior sanitizer (gcc/clang)\" OFF)\n\noption(RAPIDJSON_ENABLE_INSTRUMENTATION_OPT \"Build rapidjson with -march or -mcpu options\" ON)\n\noption(RAPIDJSON_HAS_STDSTRING \"\" 
OFF)\nif(RAPIDJSON_HAS_STDSTRING)\n    add_definitions(-DRAPIDJSON_HAS_STDSTRING)\nendif()\n\noption(RAPIDJSON_USE_MEMBERSMAP \"\" OFF)\nif(RAPIDJSON_USE_MEMBERSMAP)\n    add_definitions(-DRAPIDJSON_USE_MEMBERSMAP=1)\nendif()\n\nfind_program(CCACHE_FOUND ccache)\nif(CCACHE_FOUND)\n    set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE ccache)\n    set_property(GLOBAL PROPERTY RULE_LAUNCH_LINK ccache)\n    if (CMAKE_CXX_COMPILER_ID MATCHES \"Clang\")\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -Qunused-arguments -fcolor-diagnostics\")\n    endif()\nendif(CCACHE_FOUND)\n\nfind_program(VALGRIND_FOUND valgrind)\n\nif (CMAKE_CXX_COMPILER_ID STREQUAL \"GNU\")\n    if(RAPIDJSON_ENABLE_INSTRUMENTATION_OPT AND NOT CMAKE_CROSSCOMPILING)\n        if(CMAKE_SYSTEM_PROCESSOR STREQUAL \"powerpc\" OR CMAKE_SYSTEM_PROCESSOR STREQUAL \"ppc\" OR CMAKE_SYSTEM_PROCESSOR STREQUAL \"ppc64\" OR CMAKE_SYSTEM_PROCESSOR STREQUAL \"ppc64le\")\n          set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -mcpu=native\")\n        else()\n          #FIXME: x86 is -march=native, but doesn't mean every arch is this option. 
To keep original project's compatibility, I leave this except POWER.\n          set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -march=native\")\n        endif()\n    endif()\n    set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -Wall -Wextra -Werror\")\n    set(EXTRA_CXX_FLAGS -Weffc++ -Wswitch-default -Wfloat-equal -Wconversion -Wsign-conversion)\n    if (RAPIDJSON_BUILD_CXX11 AND CMAKE_VERSION VERSION_LESS 3.1)\n        if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS \"4.7.0\")\n            set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++0x\")\n        else()\n            set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\n        endif()\n    elseif (RAPIDJSON_BUILD_CXX17 AND NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS \"5.0\")\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++17\")\n    elseif (RAPIDJSON_BUILD_CXX20 AND NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS \"8.0\")\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++20\")\n    endif()\n    if (RAPIDJSON_BUILD_ASAN)\n        if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS \"4.8.0\")\n            message(FATAL_ERROR \"GCC < 4.8 doesn't support the address sanitizer\")\n        else()\n            set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -fsanitize=address\")\n        endif()\n    endif()\n    if (RAPIDJSON_BUILD_UBSAN)\n        if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS \"4.9.0\")\n            message(FATAL_ERROR \"GCC < 4.9 doesn't support the undefined behavior sanitizer\")\n        else()\n            set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -fsanitize=undefined\")\n        endif()\n    endif()\nelseif (CMAKE_CXX_COMPILER_ID MATCHES \"Clang\")\n    if(NOT CMAKE_CROSSCOMPILING)\n      if(CMAKE_SYSTEM_PROCESSOR STREQUAL \"powerpc\" OR CMAKE_SYSTEM_PROCESSOR STREQUAL \"ppc\" OR CMAKE_SYSTEM_PROCESSOR STREQUAL \"ppc64\" OR CMAKE_SYSTEM_PROCESSOR STREQUAL \"ppc64le\")\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -mcpu=native\")\n      else()\n        #FIXME: x86 is -march=native, but doesn't mean every 
arch is this option. To keep original project's compatibility, I leave this except POWER.\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -march=native\")\n      endif()\n    endif()\n    set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -Wall -Wextra -Werror -Wno-missing-field-initializers\")\n    set(EXTRA_CXX_FLAGS -Weffc++ -Wswitch-default -Wfloat-equal -Wconversion -Wimplicit-fallthrough)\n    if (RAPIDJSON_BUILD_CXX11 AND CMAKE_VERSION VERSION_LESS 3.1)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11\")\n    elseif (RAPIDJSON_BUILD_CXX17 AND NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS \"4.0\")\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++17\")\n    elseif (RAPIDJSON_BUILD_CXX20 AND NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS \"10.0\")\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++20\")\n    endif()\n    if (RAPIDJSON_BUILD_ASAN)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -fsanitize=address\")\n    endif()\n    if (RAPIDJSON_BUILD_UBSAN)\n        if (CMAKE_CXX_COMPILER_ID STREQUAL \"AppleClang\")\n            set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -fsanitize=undefined-trap -fsanitize-undefined-trap-on-error\")\n        else()\n            set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -fsanitize=undefined\")\n        endif()\n    endif()\nelseif (CMAKE_CXX_COMPILER_ID STREQUAL \"MSVC\")\n    add_definitions(-D_CRT_SECURE_NO_WARNINGS=1)\n    add_definitions(-DNOMINMAX)\n    set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} /EHsc\")\n    # CMake >= 3.10 should handle the above CMAKE_CXX_STANDARD fine, otherwise use /std:c++XX with MSVC >= 19.10\n    if (RAPIDJSON_BUILD_CXX11 AND NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS \"19.10\")\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} /std:c++11\")\n    elseif (RAPIDJSON_BUILD_CXX17 AND NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS \"19.14\")\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} /std:c++17\")\n    elseif (RAPIDJSON_BUILD_CXX20 AND NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS 
\"19.29\")\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} /std:c++20\")\n    endif()\n    # Always compile with /WX\n    if(CMAKE_CXX_FLAGS MATCHES \"/WX-\")\n        string(REGEX REPLACE \"/WX-\" \"/WX\" CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS}\")\n    else()\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} /WX\")\n    endif()\nelseif (CMAKE_CXX_COMPILER_ID MATCHES \"XL\")\n    set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -qarch=auto\")\nendif()\n\n#add extra search paths for libraries and includes\nSET(INCLUDE_INSTALL_DIR \"${CMAKE_INSTALL_PREFIX}/include\" CACHE PATH \"The directory the headers are installed in\")\nSET(LIB_INSTALL_DIR \"${CMAKE_INSTALL_PREFIX}/lib\" CACHE STRING \"Directory where lib will install\")\nSET(DOC_INSTALL_DIR \"${CMAKE_INSTALL_PREFIX}/share/doc/${PROJECT_NAME}\" CACHE PATH \"Path to the documentation\")\n\nIF(UNIX OR CYGWIN)\n    SET(_CMAKE_INSTALL_DIR \"${LIB_INSTALL_DIR}/cmake/${PROJECT_NAME}\")\nELSEIF(WIN32)\n    SET(_CMAKE_INSTALL_DIR \"${CMAKE_INSTALL_PREFIX}/cmake\")\nENDIF()\nSET(CMAKE_INSTALL_DIR \"${_CMAKE_INSTALL_DIR}\" CACHE PATH \"The directory cmake files are installed in\")\n\ninclude_directories(${CMAKE_CURRENT_SOURCE_DIR}/include)\n\nif(RAPIDJSON_BUILD_DOC)\n    add_subdirectory(doc)\nendif()\n\nadd_custom_target(travis_doc)\nadd_custom_command(TARGET travis_doc\n    COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/travis-doxygen.sh)\n\nif(RAPIDJSON_BUILD_EXAMPLES)\n    add_subdirectory(example)\nendif()\n\nif(RAPIDJSON_BUILD_TESTS)\n    if(MSVC11)\n        # required for VS2012 due to missing support for variadic templates\n        add_definitions(-D_VARIADIC_MAX=10)\n    endif(MSVC11)\n    add_subdirectory(test)\n    include(CTest)\nendif()\n\n# pkg-config\nIF (UNIX OR CYGWIN)\n  CONFIGURE_FILE (${CMAKE_CURRENT_SOURCE_DIR}/${PROJECT_NAME}.pc.in\n                  ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}.pc\n                  @ONLY)\n  INSTALL (FILES ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}.pc\n      DESTINATION 
\"${LIB_INSTALL_DIR}/pkgconfig\"\n      COMPONENT pkgconfig)\nENDIF()\n\ninstall(FILES readme.md\n        DESTINATION \"${DOC_INSTALL_DIR}\"\n        COMPONENT doc)\n\n# Add an interface target to export it\nadd_library(RapidJSON INTERFACE)\n\ntarget_include_directories(RapidJSON INTERFACE $<INSTALL_INTERFACE:include/rapidjson>)\n\ninstall(DIRECTORY include/rapidjson\n    DESTINATION \"${INCLUDE_INSTALL_DIR}\"\n    COMPONENT dev)\n\ninstall(DIRECTORY example/\n    DESTINATION \"${DOC_INSTALL_DIR}/examples\"\n    COMPONENT examples\n    # Following patterns are for excluding the intermediate/object files\n    # from an install of in-source CMake build.\n    PATTERN \"CMakeFiles\" EXCLUDE\n    PATTERN \"Makefile\" EXCLUDE\n    PATTERN \"cmake_install.cmake\" EXCLUDE)\n\n# Provide config and version files to be used by other applications\n# ===============================\n\n################################################################################\n# Export package for use from the build tree\nEXPORT( PACKAGE ${PROJECT_NAME} )\n\n# Create the RapidJSONConfig.cmake file for other cmake projects.\n# ... for the build tree\nSET( CONFIG_SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR})\nSET( CONFIG_DIR ${CMAKE_CURRENT_BINARY_DIR})\nSET( ${PROJECT_NAME}_INCLUDE_DIR \"\\${${PROJECT_NAME}_SOURCE_DIR}/include\" )\n\nINCLUDE(CMakePackageConfigHelpers)\nCONFIGURE_FILE( ${CMAKE_CURRENT_SOURCE_DIR}/${PROJECT_NAME}Config.cmake.in\n    ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}Config.cmake @ONLY )\nCONFIGURE_FILE(${CMAKE_CURRENT_SOURCE_DIR}/${PROJECT_NAME}ConfigVersion.cmake.in\n    ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}ConfigVersion.cmake @ONLY)\n\n# ... 
for the install tree\nSET( CMAKECONFIG_INSTALL_DIR ${LIB_INSTALL_DIR}/cmake/${PROJECT_NAME} )\nFILE( RELATIVE_PATH REL_INCLUDE_DIR\n    \"${CMAKECONFIG_INSTALL_DIR}\"\n    \"${CMAKE_INSTALL_PREFIX}/include\" )\n\nSET( ${PROJECT_NAME}_INCLUDE_DIR \"\\${${PROJECT_NAME}_CMAKE_DIR}/${REL_INCLUDE_DIR}\" )\nSET( CONFIG_SOURCE_DIR )\nSET( CONFIG_DIR )\nCONFIGURE_FILE( ${CMAKE_CURRENT_SOURCE_DIR}/${PROJECT_NAME}Config.cmake.in\n    ${CMAKE_CURRENT_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/${PROJECT_NAME}Config.cmake @ONLY )\n\nINSTALL(FILES \"${CMAKE_CURRENT_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/${PROJECT_NAME}Config.cmake\"\n        DESTINATION ${CMAKECONFIG_INSTALL_DIR} )\n\n# Install files\nIF(CMAKE_INSTALL_DIR)\n    INSTALL(FILES\n        ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}Config.cmake\n        ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}ConfigVersion.cmake\n        DESTINATION \"${CMAKE_INSTALL_DIR}\"\n        COMPONENT dev)\n\n    INSTALL(TARGETS RapidJSON EXPORT RapidJSON-targets)\n    INSTALL(EXPORT RapidJSON-targets DESTINATION ${CMAKE_INSTALL_DIR})\nENDIF()\n"
  },
  {
    "path": "C/thirdparty/rapidjson/CMakeModules/FindGTestSrc.cmake",
    "content": "\nSET(GTEST_SEARCH_PATH\n    \"${GTEST_SOURCE_DIR}\"\n    \"${CMAKE_CURRENT_LIST_DIR}/../thirdparty/gtest/googletest\")\n\nIF(UNIX)\n    IF(RAPIDJSON_BUILD_THIRDPARTY_GTEST)\n        LIST(APPEND GTEST_SEARCH_PATH \"/usr/src/gtest\")\n    ELSE()\n        LIST(INSERT GTEST_SEARCH_PATH 1 \"/usr/src/gtest\")\n    ENDIF()\nENDIF()\n\nFIND_PATH(GTEST_SOURCE_DIR\n    NAMES CMakeLists.txt src/gtest_main.cc\n    PATHS ${GTEST_SEARCH_PATH})\n\n\n# Debian installs gtest include directory in /usr/include, thus need to look\n# for include directory separately from source directory.\nFIND_PATH(GTEST_INCLUDE_DIR\n    NAMES gtest/gtest.h\n    PATH_SUFFIXES include\n    HINTS ${GTEST_SOURCE_DIR}\n    PATHS ${GTEST_SEARCH_PATH})\n\nINCLUDE(FindPackageHandleStandardArgs)\nfind_package_handle_standard_args(GTestSrc DEFAULT_MSG\n    GTEST_SOURCE_DIR\n    GTEST_INCLUDE_DIR)\n"
  },
  {
    "path": "C/thirdparty/rapidjson/RapidJSON.pc.in",
    "content": "includedir=@INCLUDE_INSTALL_DIR@\n\nName: @PROJECT_NAME@\nDescription: A fast JSON parser/generator for C++ with both SAX/DOM style API\nVersion: @LIB_VERSION_STRING@\nURL: https://github.com/Tencent/rapidjson\nCflags: -I${includedir}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/RapidJSONConfig.cmake.in",
    "content": "@PACKAGE_INIT@\n\ninclude (\"${CMAKE_CURRENT_LIST_DIR}/RapidJSON-targets.cmake\")\n\n################################################################################\n# RapidJSON source dir\nset( RapidJSON_SOURCE_DIR \"@CONFIG_SOURCE_DIR@\")\n\n################################################################################\n# RapidJSON build dir\nset( RapidJSON_DIR \"@CONFIG_DIR@\")\n\n################################################################################\n# Compute paths\nget_filename_component(RapidJSON_CMAKE_DIR \"${CMAKE_CURRENT_LIST_FILE}\" PATH)\n\nget_target_property(RapidJSON_INCLUDE_DIR RapidJSON INTERFACE_INCLUDE_DIRECTORIES)\n\nset( RapidJSON_INCLUDE_DIRS ${RapidJSON_INCLUDE_DIR} )\n"
  },
  {
    "path": "C/thirdparty/rapidjson/RapidJSONConfigVersion.cmake.in",
    "content": "SET(PACKAGE_VERSION \"@LIB_VERSION_STRING@\")\n\nIF (PACKAGE_FIND_VERSION VERSION_EQUAL PACKAGE_VERSION)\n  SET(PACKAGE_VERSION_EXACT \"true\")\nENDIF (PACKAGE_FIND_VERSION VERSION_EQUAL PACKAGE_VERSION)\nIF (NOT PACKAGE_FIND_VERSION VERSION_GREATER PACKAGE_VERSION)\n  SET(PACKAGE_VERSION_COMPATIBLE \"true\")\nELSE (NOT PACKAGE_FIND_VERSION VERSION_GREATER PACKAGE_VERSION)\n  SET(PACKAGE_VERSION_UNSUITABLE \"true\")\nENDIF (NOT PACKAGE_FIND_VERSION VERSION_GREATER PACKAGE_VERSION)\n"
  },
  {
    "path": "C/thirdparty/rapidjson/appveyor.yml",
    "content": "version: 1.1.0.{build}\n\nconfiguration:\n- Debug\n- Release\n\nenvironment:\n  matrix:\n  # - VS_VERSION: 9 2008\n  #   VS_PLATFORM: win32\n  # - VS_VERSION: 9 2008\n  #   VS_PLATFORM: x64\n  - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2013\n    VS_VERSION: 10 2010\n    VS_PLATFORM: win32\n    CXX11: OFF\n    CXX17: OFF\n    CXX20: OFF\n    MEMBERSMAP: OFF\n  - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2013\n    VS_VERSION: 10 2010\n    VS_PLATFORM: x64\n    CXX11: OFF\n    CXX17: OFF\n    CXX20: OFF\n    MEMBERSMAP: ON\n  - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2013\n    VS_VERSION: 11 2012\n    VS_PLATFORM: win32\n    CXX11: OFF\n    CXX17: OFF\n    CXX20: OFF\n    MEMBERSMAP: ON\n  - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2013\n    VS_VERSION: 11 2012\n    VS_PLATFORM: x64\n    CXX11: OFF\n    CXX17: OFF\n    CXX20: OFF\n    MEMBERSMAP: OFF\n  - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2013\n    VS_VERSION: 12 2013\n    VS_PLATFORM: win32\n    CXX11: OFF\n    CXX17: OFF\n    CXX20: OFF\n    MEMBERSMAP: OFF\n  - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2013\n    VS_VERSION: 12 2013\n    VS_PLATFORM: x64\n    CXX11: OFF\n    CXX17: OFF\n    CXX20: OFF\n    MEMBERSMAP: ON\n  - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2015\n    VS_VERSION: 14 2015\n    VS_PLATFORM: win32\n    CXX11: OFF\n    CXX17: OFF\n    CXX20: OFF\n    MEMBERSMAP: ON\n  - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2015\n    VS_VERSION: 14 2015\n    VS_PLATFORM: x64\n    CXX11: OFF\n    CXX17: OFF\n    CXX20: OFF\n    MEMBERSMAP: OFF\n  - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2017\n    VS_VERSION: 15 2017\n    VS_PLATFORM: win32\n    CXX11: OFF\n    CXX17: OFF\n    CXX20: OFF\n    MEMBERSMAP: OFF\n  - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2017\n    VS_VERSION: 15 2017\n    VS_PLATFORM: x64\n    CXX11: OFF\n    CXX17: OFF\n    CXX20: OFF\n    MEMBERSMAP: ON\n  - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2017\n    VS_VERSION: 15 2017\n    VS_PLATFORM: 
x64\n    CXX11: ON\n    CXX17: OFF\n    CXX20: OFF\n    MEMBERSMAP: OFF\n  - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2017\n    VS_VERSION: 15 2017\n    VS_PLATFORM: x64\n    CXX11: OFF\n    CXX17: ON\n    CXX20: OFF\n    MEMBERSMAP: OFF\n  - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2019\n    VS_VERSION: 16 2019\n    VS_PLATFORM: x64\n    CXX11: OFF\n    CXX17: ON\n    CXX20: OFF\n    MEMBERSMAP: ON\n\nbefore_build:\n- git submodule update --init --recursive\n- cmake -H. -BBuild/VS -G \"Visual Studio %VS_VERSION%\" -DCMAKE_GENERATOR_PLATFORM=%VS_PLATFORM% -DCMAKE_VERBOSE_MAKEFILE=ON -DBUILD_SHARED_LIBS=true -DRAPIDJSON_BUILD_CXX11=%CXX11% -DRAPIDJSON_BUILD_CXX17=%CXX17% -DRAPIDJSON_BUILD_CXX20=%CXX20% -DRAPIDJSON_USE_MEMBERSMAP=%MEMBERSMAP% -Wno-dev\n\nbuild:\n  project: Build\\VS\\RapidJSON.sln\n  parallel: true\n  verbosity: minimal\n\ntest_script:\n- cd Build\\VS && if %CONFIGURATION%==Debug (ctest --verbose -E perftest --build-config %CONFIGURATION%) else (ctest --verbose --build-config %CONFIGURATION%)\n"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/data/glossary.json",
    "content": "{\r\n    \"glossary\": {\r\n        \"title\": \"example glossary\",\r\n\t\t\"GlossDiv\": {\r\n            \"title\": \"S\",\r\n\t\t\t\"GlossList\": {\r\n                \"GlossEntry\": {\r\n                    \"ID\": \"SGML\",\r\n\t\t\t\t\t\"SortAs\": \"SGML\",\r\n\t\t\t\t\t\"GlossTerm\": \"Standard Generalized Markup Language\",\r\n\t\t\t\t\t\"Acronym\": \"SGML\",\r\n\t\t\t\t\t\"Abbrev\": \"ISO 8879:1986\",\r\n\t\t\t\t\t\"GlossDef\": {\r\n                        \"para\": \"A meta-markup language, used to create markup languages such as DocBook.\",\r\n\t\t\t\t\t\t\"GlossSeeAlso\": [\"GML\", \"XML\"]\r\n                    },\r\n\t\t\t\t\t\"GlossSee\": \"markup\"\r\n                }\r\n            }\r\n        }\r\n    }\r\n}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/data/menu.json",
    "content": "{\"menu\": {\r\n    \"header\": \"SVG Viewer\",\r\n    \"items\": [\r\n        {\"id\": \"Open\"},\r\n        {\"id\": \"OpenNew\", \"label\": \"Open New\"},\r\n        null,\r\n        {\"id\": \"ZoomIn\", \"label\": \"Zoom In\"},\r\n        {\"id\": \"ZoomOut\", \"label\": \"Zoom Out\"},\r\n        {\"id\": \"OriginalView\", \"label\": \"Original View\"},\r\n        null,\r\n        {\"id\": \"Quality\"},\r\n        {\"id\": \"Pause\"},\r\n        {\"id\": \"Mute\"},\r\n        null,\r\n        {\"id\": \"Find\", \"label\": \"Find...\"},\r\n        {\"id\": \"FindAgain\", \"label\": \"Find Again\"},\r\n        {\"id\": \"Copy\"},\r\n        {\"id\": \"CopyAgain\", \"label\": \"Copy Again\"},\r\n        {\"id\": \"CopySVG\", \"label\": \"Copy SVG\"},\r\n        {\"id\": \"ViewSVG\", \"label\": \"View SVG\"},\r\n        {\"id\": \"ViewSource\", \"label\": \"View Source\"},\r\n        {\"id\": \"SaveAs\", \"label\": \"Save As\"},\r\n        null,\r\n        {\"id\": \"Help\"},\r\n        {\"id\": \"About\", \"label\": \"About Adobe CVG Viewer...\"}\r\n    ]\r\n}}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/data/readme.txt",
    "content": "sample.json is obtained from http://code.google.com/p/json-test-suite/downloads/detail?name=sample.zip\n"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/data/sample.json",
    "content": "{\n \"a\": {\n  \"6U閆崬밺뀫颒myj츥휘:$薈mY햚#rz飏+玭V㭢뾿愴YꖚX亥ᮉ푊\\u0006垡㐭룝\\\"厓ᔧḅ^Sqpv媫\\\"⤽걒\\\"˽Ἆ?ꇆ䬔未tv{DV鯀Tἆl凸g\\\\㈭ĭ즿UH㽤\": null,\n  \"b茤z\\\\.N\": [[\n   \"ZL:ￄዎ*Y|猫劁櫕荾Oj为1糕쪥泏S룂w࡛Ᏺ⸥蚙)\",\n   {\n    \"\\\"䬰ỐwD捾V`邀⠕VD㺝sH6[칑.:醥葹*뻵倻aD\\\"\": true,\n    \"e浱up蔽Cr෠JK軵xCʨ<뜡癙Y獩ｹ齈X/螗唻?<蘡+뷄㩤쳖3偑犾&\\\\첊xz坍崦ݻ鍴\\\"嵥B3㰃詤豺嚼aqJ⑆∥韼@\\u000b㢊\\u0015L臯.샥\": false,\n    \"l?Ǩ喳e6㔡$M꼄I,(3᝝縢,䊀疅뉲B㴔傳䂴\\u0088㮰钘ꜵ!ᅛ韽>\": -5514085325291784739,\n    \"o㮚?\\\"춛㵉<\\/﬊ࠃ䃪䝣wp6ἀ䱄[s*S嬈貒pᛥ㰉'돀\": [{\n     \"(QP윤懊FI<ꃣ『䕷[\\\"珒嶮?%Ḭ壍಻䇟0荤!藲끹bd浶tl\\u2049#쯀@僞\": {\"i妾8홫\": {\n      \",M맃䞛K5nAㆴVN㒊햬$n꩑&ꎝ椞阫?/ṏ세뉪1x쥼㻤㪙`\\\"$쟒薟B煌܀쨝ଢ଼2掳7㙟鴙X婢\\u0002\": \"Vዉ菈᧷⦌kﮞఈnz*<?੃'ahhCFX(\\u0007⮊E㭍䱾Gxꥩr❣.洎\",\n      \"뻴5bDD큯O傆盓왻U?ꞅꐊN鐭᧢τ\\\"迳豲8\\u001b䃥ꂻ䴺ྸH筴,\": {\n       \"\\\"L鸔SE㬡XV&~͎'놅蔞눶l匛?'.K氁\\\\ƢẨ疇mΊ'꽳&!鹠m'|{P痊 秄쒿u\\u00111䋧gϩx7t丗D䊨䠻z0.A0\": -1.50139930144708198E18,\n       \"8鋂뛷?첒B☚>﷜FM\\\"荭7ꍀ-VR<\\/';䁙E9$䩉\\f @s?퍪o3^衴cඎ䧪aK鼟ｑ䆨c{䳠5mᒲՙ蘹ᮩ\": {\n        \"F㲷JGo⯍P덵x뒳p䘧☔\\\"+ꨲ吿JfR㔹)4n紬G练Q፞!C|\": true,\n        \"p^㫮솎oc.೚A㤠??r\\u000f)⾽⌲們M2.䴘䩳:⫭胃\\\\፾@Fᭌ\\\\K\": false,\n        \"蟌Tk愙潦伩\": {\n         \"a<\\/@ᾛ慂侇瘎\": -7271305752851720826,\n         \"艓藬/>၄ṯ,XW~㲆w\": {\"E痧郶)㜓ha朗!N赻瞉駠uC\\u20ad辠<Ve?폱!Im䁎搄:*s 9諚Prᵾ뒰髶B̌qWA8梸vS⫊⢳{t㺲q㺈랊뮣RqK밢쳪\": [\n          false,\n          {\n           \"\\u000b=>x퓮⣫P1ࠫLMMX'M刼唳됤\": null,\n           \"P쓫晥%k覛ዩIUᇸ滨:噐혲lMR5䋈V梗>%幽u頖\\\\)쟟\": null,\n           \"eg+昉~矠䧞难\\b?gQ쭷筝\\\\eꮠNl{ಢ哭|]Mn銌╥zꖘzⱷ⭤ᮜ^\": [\n            -1.30142114406914976E17,\n            -1.7555215491128452E-19,\n            null,\n            \"渾㨝ߏ牄귛r?돌?w[⚞ӻ~廩輫㼧/\",\n            -4.5737191805302129E18,\n            null,\n            \"xy࿑M[oc셒竓Ⓔx?뜓y䊦>-D켍(&&?XKkc꩖ﺸᏋ뵞K伕6ী)딀P朁yW揙?훻魢傎EG碸9類៌g踲C⟌aEX舲:z꒸许\",\n            3808159498143417627,\n            null,\n            {\"m試\\u20df1{G8&뚈h홯J<\\/\": {\n             \"3ஸ厠zs#1K7:rᥞoꅔꯧ&띇鵼鞫6跜#赿5l'8{7㕳(b/j\\\"厢aq籀ꏚ\\u0015厼稥\": [\n              -2226135764510113982,\n              true,\n              null,\n              {\n             
  \"h%'맞S싅Hs&dl슾W0j鿏MםD놯L~S-㇡R쭬%\": null,\n               \"⟓咔謡칲\\u0000孺ꛭx旑檉㶆?\": null,\n               \"恇I転;￸B2Y`z\\\\獓w,놏濐撐埵䂄)!䶢D=ഭ㴟jyY\": {\n                \"$ࡘt厛毣ൢI芁<겿骫⫦6tr惺a\": [\n                 6.385779736989334E-20,\n                 false,\n                 true,\n                 true,\n                 [\n                  -6.891946211462334E-19,\n                  null,\n                  {\n                   \"]-\\\\Ꟑ1/薓❧Ὂ\\\\l牑\\u0007A郃)阜ᇒᓌ-塯`W峬G}SDb㬨Q臉⮻빌O鞟톴첂B㺱<ƈmu챑J㴹㷳픷Oㆩs\": {\n                    \"\\\"◉B\\\"pᶉt骔J꩸ᄇᛐi╰栛K쉷㉯鐩!㈐n칍䟅難>盥y铿e୔蒏M貹ヅ8嘋퀯䉶ጥ㏢殊뻳\\\"絧╿ꉑ䠥?∃蓊{}㣣Gk긔H1哵峱\": false,\n                    \"6.瀫cN䇮F㧺?\\\\椯=ڈT䘆4␘8qv\": -3.5687501019676885E-19,\n                    \"Q?yऴr혴{஀䳘p惭f1ﹸ䅷䕋贲<ྃᄊ繲hq\\\\b|#QSTs1c-7(䵢\\u2069匏絘ꯉ:l毴汞t戀oෟᵶ뮱፣-醇Jx䙬䐁햢0࣫ᡁgrㄛ\": \"\\u0011_xM/蘇Chv;dhA5.嗀绱V爤ﰦi뵲M\",\n                    \"⏑[\\\"ugoy^儣횎~U\\\\섯겜論l2jw஌yD腅̂\\u0019\": true,\n                    \"ⵯɇ䐲᫿࢚!㯢l샅笶戮1꣖0Xe\": null,\n                    \"劅f넀識b宁焊E찓橵G!ʱ獓뭔雩괛\": [{\"p⹣켙[q>燣䍃㞽ᩲx:쓤삘7玑퇼0<\\/q璂ᑁ[Z\\\\3䅵䧳\\u0011㤧|妱緒C['췓Yꞟ3Z鳱雼P錻BU씧U`ᢶg蓱>.1ӧ譫'L_5V䏵Ц\": [\n                     false,\n                     false,\n                     {\"22䂍盥N霂얢<F8꼵7Gసyh뀍g᦭ꄢx硴嬢\\u001a?E괆T|;7犟\\\"Wt%䐩O⨵t&#ᬋK'蜍Ძ揔⾠鲂T멷靃\\u0018䓞cE\": {\"f=䏏츜瞾zw?孡鏣\\\\铀᫞yẆg(\\u0011M6(s2]`ਫ\": [[[{\n                      \"'y몱纣4S@\\\\,i㷯럹Ua充Tᣢ9躘Zଞ쥿䐊s<\\/刎\\\\\\\"뉦-8/\": \"蜑.X0꭛낢륹i젨ꚁ<8?s볕蝡|Q✬᯦@\\\\G㑢屿Mn졾J굤⥟JW뤵苑r쁕툄嵵?⾥O\",\n                      \"^1挲~[n귆誈央碠멪gI洷\": -8214236471236116548,\n                      \"sሣ%娌暡clr蟜㑓2\\u000bS❟_X㨔⚴5~蔷ꀇ|Xu㬖,꤭卹r(g믇쩍%췸앙|栣U\\\\2]䤉+啠菡ꯎT鉹m\\n/`SzDᅼ鞶\": 1.1217523390167132E-19,\n                      \"u톇=黚\\\\ ꂮ췵L>躰e9⑩_뵜斌n@B}$괻Yᐱ@䧋V\\\"☒-諯cV돯ʠ\": true,\n                      \"Ű螧ᔼ檍鍎땒딜qꄃH뜣<獧ूCY吓⸏>XQ㵡趌o끬k픀빯a(ܵ甏끆୯/6Nᪧ}搚ᆚ짌P牰泱鈷^d꣟#L삀\\\"㕹襻;k㸊\\\\f+\": true,\n                      \"쎣\\\",|⫝̸阊x庿k잣v庅$鈏괎炔k쬪O_\": [\n                       \"잩AzZGz3v愠ꉈⵎ?㊱}S尳௏p\\r2>췝IP䘈M)w|\\u000eE\",\n                       -9222726055990423201,\n                     
  null,\n                       [\n                        false,\n                        {\"´킮'뮤쯽Wx讐V,6ᩪ1紲aႈ\\u205czD\": [\n                         -930994432421097536,\n                         3157232031581030121,\n                         \"l貚PY䃛5@䭄<nW\\u001e\",\n                         [\n                          3.801747732605161E18,\n                          [\n                           null,\n                           false,\n                           {\n                            \"\": 4.0442013775147072E16,\n                            \"2J[sᡪ㞿|n'#廲꯬乞\": true,\n                            \"B[繰`\\\\㏏a̼㨀偛㽓<\\/꥖ᵈO让\\r43⡩徑ﬓ๨ﮕx:㣜o玐ꉟぢC珵὆ᓞ쇓Qs氯였9駵q혃Ljꂔ<\\/昺+t䐋༻猙c沪~櫆bpJ9UᏐ:칣妙!皗F4㑄탎䕀櫳振讓\": 7.3924182188256287E18,\n                            \"H磵ai委曷n柋T<\\/勿F&:ꣴfU@㿗榻Lb+?퍄sp\\\"᪟~>귻m㎮琸f\": 1.0318894506812084E-19,\n                            \"࢜⩢Ш䧔1肽씮+༎ᣰ闺馺窃䕨8Mƶq腽xc(៯夐J5굄䕁Qj_훨/~価.䢵慯틠퇱豠㼇Qﵘ$DuSp(8Uญ<\\/ಟ룴𥳐ݩ$\": 8350772684161555590,\n                            \"ㆎQ䄾\\u001bpᩭ${[諟^^骴᤮b^ㅥI┧T㉇⾞\\\"绦<AYJ⒃-oF<\\/蛎mm;obh婃ᦢ\": false,\n                            \"䔤䣈?汝.p襟&d㱅\\\\Jᚠ@?O첁ࢽ휔VR蔩|㒢柺\": [[\n                             \"-ꕨ岓棻r@鿆^3~䪤Ѐ狼︌ﹲ\\\\᝸MlE쵠Q+\",\n                             null,\n                             false,\n                             3346674396990536343,\n                             null,\n                             {\n                              \"\": null,\n                              \"/䏨S쨑,&繷㉥8C엮赸3馢|뇲{鄎ꗇqFﶉ雕UD躢?Ꟛအ꽡[hᕱᗅ㦋쭞Mユ茍?L槽암V#성唐%㣕嘵\\\\ڹ(嘏躿&q\": [\n                               -1364715155337673920,\n                               false,\n                               -8197733031775379251,\n                               \"E팗鮲JwH\\\\觡܈᜝\\\"+뉞娂N휗v噙၂깼\\u001dD帒l%-斔N\",\n                               -3.844267973858711E-20,\n                               [{\"쬯(褈Q 蟚뿢 /ⱖ㻥\\u0017/?v邘䃡0U.Z1x?鯔V尠8Em<\": [[[\n                                null,\n                                [\n                                 
null,\n                                 -5841406347577698873,\n                                 \"킷\\\"S⋄籞繗솸ᵣ浵w쑿ퟗ7nᎏx3앙z㘌쿸I葥覯㬏0ᆝb汆狺뷘ႀnꋋ\",\n                                 -1227911573141158702,\n                                 {\n                                  \"u㉮PᾺV鵸A\\\\g*ࡗ9슟晭+ͧↀ쿅H\\u001c꾣犓}癇恛ᗬ黩䟘X梑鐆e>r䰂f矩'-7䡭桥Dz兔V9谶居㺍ᔊ䩯덲.\\u001eL0ὅㅷ釣\": [{\n                                   \"<쯬J卷^숞u࠯䌗艞R9닪g㐾볎a䂈歖意:%鐔|ﵤ|y}>;2,覂⶚啵tb*仛8乒㓶B࿠㯉戩oX 貘5V嗆렽낁߼4h䧛ꍺM空\\\\b꿋貼\": 8478577078537189402,\n                                   \"VD*|吝z~h譺aᯒ\": {\n                                    \"YI췢K<\\/濳xNne玗rJo쾘3핰鴊\\\"↱AR:ࢷ\\\"9?\\\"臁說)?誚ꊏe)_D翾W?&F6J@뺾ꍰNZ醊Z쾈വH嶿?炫㷱鬰M겈<bS}㎥l|刖k\": {\"H7鷮퇢_k\": [\n                                     true,\n                                     \"s㟑瀭좾쮀⑁Y찺k맢戲쀸俻ກ6儮끗扖puߖꜻ馶rꈞ痘?3ྚ畊惘䎗\\\"vv)*臔웅鿈䧲^v,껛㰙J <ᚶ5\",\n                                     7950276470944656796,\n                                     4.9392301536234746E17,\n                                     -4796050478201554639,\n                                     \"yꬴc<3㻚\",\n                                     \"o塁\\u20a4蒵鮬裢CᴧnB㭱f.\",\n                                     false,\n                                     [\n                                      false,\n                                      \"㡐弑V?瀆䰺q!출㇞yᘪ꼼(IS~Ka 烿ꟿ샕桤\\u0005HQҹ㯪罂q萾⚇懋⦕둡v\",\n                                      1862560050083946970,\n                                      \"\\u20b6[|(뭹gꍒ펉O轄Dl묽]ﯨ髯QEbA㒾m@롴礠㕓2땫n6ْ엘঵篳R잷꙲m색摪|@㿫5aK设f胭r8/NI4춫栵\\\\꯬2]\",\n                                      false,\n                                      {\n                                       \"\\u000b7*㙛燏.~?䔊p搕e_拺艿뷍f{ꔻ1s驙`$Ė戧?q⋬沭?塷᭚蹀unoa5\": {\n                                        \"S귯o紞㾕ᅶ侏銇12|ʟ畴iNAo?|Sw$M拲գ㭄紧螆+,梔\": null,\n                                        \"㭚0?xB疱敻ேBPwv뾃熉(ӠpJ]갢\\\"Bj'\\u0016GE椱<\\/zgៅx黢礇h},M9ﴦ?LḨ\": \"Si B%~㬒E\",\n                                        
\"핇㉊살㍢숨~ȪRo䦅D桺0z]﬽蠆c9ᣨyPP㿷U~㞐?쯟퍸宒뉆U|}㉓郾ࣻ*櫎꼪䁗s?~7\\u001e㘔h9{aឋ}:㶒P8\": [{\"\\\\R囡쐬nN柋琍؛7칾 :㶃衇徜V 深f1淍♠i?3S角폞^ᆞ\\u20e8ṰD\\u0007秡+躒臔&-6\": {\n                                         \"䨑g.fh㔗=8!\\\"狿ൻLU^뻱g䲚㻐'W}k欤?๒鲇S꧗䫾$ĥ피\": -794055816303360636,\n                                         \"外頮詋~텡竆繃䏩苨뾺朁꼃瘹f*㉀枙NH/\\u2027ꢁ}j묎vペq︉식뜡Od5 N顯ି烅仟Qfㆤ嚢(i䬅c;맧?嶰㩼츱獡?-\": {\n                                          \"e݆㍡⬬'2㻒?U篲鿄\\\"隻Ҭ5NꭰꤺBꀈ拾᩺[刯5곑Na램ﴦ዆]㝓qw钄\\u001b\\\"Y洊䗿祏塥迵[⼞⠳P$꠱5먃0轢`\": [{\"獰E賝﫚b먭N긆Ⰹ史2逶ꜛ?H짉~?P}jj}侷珿_T>᭨b,⻁鈵P䕡䀠८ⱄ홎鄣\": {\n                                           \"@?k2鶖㋮\\\"Oರ K㨇廪儲\\u0017䍾J?);\\b*묀㗠섳햭1MC V\": null,\n                                           \"UIICP!BUA`ᢈ㋸~袩㗪⾒=fB﮴l1ꡛ죘R辂여ҳ7쮡<䩲`熕8頁\": 4481809488267626463,\n                                           \"Y?+8먙ᚔ鋳蜩럶1㥔y璜౩`\": [\n                                            null,\n                                            1.2850335807501874E-19,\n                                            \"~V2\",\n                                            2035406654801997866,\n                                            {\n                                             \"<숻1>\\\"\": -8062468865199390827,\n                                             \"M㿣E]}qwG莎Gn᝶(ꔙ\\\\D⬲iꇲs寢t駇S뀡ꢜ\": false,\n                                             \"pꝤ㎏9W%>M;-U璏f(^j1?&RB隧 忓b똊E\": \"#G?C8.躬ꥯ'?냪#< 渟&헿란zpo왓Kj}鷧XﻘMツb䕖;㪻\",\n                                             \"vE풤幉xz뱕쫥Ug㦲aH} ᣟp:鬼Yᰟ<Fɋ잣緂頒⺏䉲瑑䅂,C~ޅG!f熢-B7~9Pqࡢ[츑#3ꕎ,Öඳ聁⩅㵧춀뿍xy䌏͂tdj!箧᳆|9蚡돬\": -2.54467378964089632E17,\n                                             \"䵈䅦5빖,궆-:໿댾仫0ᙚyᦝhqᚄ\": null,\n                                             \"侯Y\\\"湛졯劇U셎YX灍ⅸ2伴|筧\\\\䁒㶶᷏쁑Waᦵᗱ㜏늾膠<Jc63<G\\u20fe䇹66僣k0O\\\"_@U\": null,\n                                             \"姪y$#s漴JH璌Ӊ脛J㝾펔ﹴoꈶ㚸PD:薠쏖%說ថ蹂1］⾕5튄\": {\n                                              \"᝾Huw3䮅如쿺䍟嫝]<鰨ݷ?꯯䫓傩|ᐶස媽\\\\澒≡闢\": 
                                   \"駫WmGጶ\": {\n                                                                                                                                   \"\\\\~m6狩K\": -2586304199791962143,\n                                                                                                                                   \"ႜࠀ%͑l⿅D.瑢Dk%0紪dḨTI픸%뗜☓s榗኉\\\"?V籄7w髄♲쟗翛歂E䤓皹t ?)ᄟ鬲鐜6C\": {\n                                                                                                                                    \"_췤a圷1\\u000eB-XOy缿請∎$`쳌eZ~杁튻/蜞`塣৙\\\"⪰\\\"沒l}蕌\\\\롃荫氌.望wZ|o!)Hn獝qg}\": null,\n                                                                                                                                    \"kOSܧ䖨钨:಼鉝ꭝO醧S`십`ꓭ쭁ﯢN&Et㺪馻㍢ⅳ㢺崡ຊ蜚锫\\\\%ahx켨|ż劻ꎄ㢄쐟A躊᰹p譞綨Ir쿯\\u0016ﵚOd럂*僨郀N*b㕷63z\": {\n                                                                                                                                     \":L5r+T㡲\": [{\n                                                                                                                                      \"VK泓돲ᮙRy㓤➙Ⱗ38oi}LJቨ7Ó㹡৘*q)1豢⛃e᫛뙪壥镇枝7G藯g㨛oI䄽 孂L缊ꋕ'EN`\": -2148138481412096818,\n                                                                                                                                      \"`⛝ᘑ$(खꊲ⤖ᄁꤒ䦦3=)]Y㢌跨NĴ驳줟秠++d孳>8ᎊ떩EꡣSv룃 쯫أ?#E|᭙㎐?zv:5祉^⋑V\": [\n                                                                                                                                       -1.4691944435285607E-19,\n                                                                                                                                       3.4128661569395795E17,\n                                                                                                                                       \"㐃촗^G9佭龶n募8R厞eEw⺡_ㆱ%⼨D뉄퉠2ꩵᛅⳍ搿L팹Lවn=\\\"慉념ᛮy>!`g!풲晴[/;?[v겁軇}⤳⤁핏∌T㽲R홓遉㓥\",\n                                                       
                                                                                \"愰_⮹T䓒妒閤둥?0aB@㈧g焻-#~跬x<\\/舁P݄ꐡ=\\\\׳P\\u0015jᳪᢁq;㯏l%᭗;砢觨▝,謁ꍰGy?躤O黩퍋Y㒝a擯\\n7覌똟_䔡]fJ晋IAS\",\n                                                                                                                                       4367930106786121250,\n                                                                                                                                       -4.9421193149720582E17,\n                                                                                                                                       null,\n                                                                                                                                       {\n                                                                                                                                        \";ﾸ똾柉곟ⰺKpፇ䱻ฺ䖝{o~h!ｅꁿ઻욄ښ\\u0002y?xUd\\u207c悜ꌭ\": [\n                                                                                                                                         1.6010824122815255E-19,\n                                                                                                                                         [\n                                                                                                                                          \"宨︩9앉檥pr쇷?WxLb\",\n                                                                                                                                          \"氇9】J玚\\u000f옛呲~ 輠1D嬛,*mW3?n휂糊γ虻*ᴫ꾠?q凐趗Ko↦GT铮\",\n                                                                                                                                          \"㶢ថmO㍔k'诔栀Z蛟}GZ钹D\",\n                                                                                                                                          false,\n                                                                              
                                                            -6.366995517736813E-20,\n                                                                                                                                          -4894479530745302899,\n                                                                                                                                          null,\n                                                                                                                                          \"V%᫡II璅䅛䓎풹ﱢ/pU9se되뛞x梔~C)䨧䩻蜺(g㘚R?/Ự[忓C뾠ࢤc왈邠买?嫥挤풜隊枕\",\n                                                                                                                                          \",v碍喔㌲쟚蔚톬៓ꭶ\",\n                                                                                                                                          3.9625444752577524E-19,\n                                                                                                                                          null,\n                                                                                                                                          [\n                                                                                                                                           \"kO8란뿒䱕馔b臻⍟隨\\\"㜮鲣Yq5m퐔<u뷆c譆\\u001bN?<\",\n                                                                                                                                           [{\n                                                                                                                                            \";涉c蒀ᴧN䘱䤳 ÿꭷ,핉dSTDB>K#ꢘug㼈ᝦ=P^6탲@䧔%$CqSw铜랊0&m⟭<\\/a逎ym\\u0013vᯗ\": true,\n                                                                                                                                            \"洫`|XN뤮\\u0018詞=紩鴘_sX)㯅鿻Ố싹\": 7.168252736947373E-20,\n                                                                    
                                                                        \"ꛊ饤ﴏ袁(逊+~⽫얢鈮艬O힉7D筗S곯w操I斞᠈븘蓷x\": [[[[\n                                                                                                                                             -7.3136069426336952E18,\n                                                                                                                                             -2.13572396712722688E18,\n                                                                                                                                             {\n                                                                                                                                              \"硢3㇩R:o칢行E<=\\u0018ၬYuH!\\u00044U%卝炼2>\\u001eSi$⓷ꒈ'렢gᙫ番ꯒ㛹럥嶀澈v;葷鄕x蓎\\\\惩+稘UEᖸﳊ㊈壋N嫿⏾挎,袯苷ኢ\\\\x|3c\": 7540762493381776411,\n                                                                                                                                              \"?!*^ᢏ窯?\\u0001ڔꙃw虜돳FgJ?&⨫*uo籤:?}ꃹ=ٴ惨瓜Z媊@ત戹㔏똩Ԛ耦Wt轁\\\\枒^\\\\ꩵ}}}ꀣD\\\\]6M_⌫)H豣:36섘㑜\": {\n                                                                                                                                               \";홗ᰰU஋㙛`D왔ཿЃS회爁\\u001b-㢈`봆?盂㛣듿ᦾ蒽_AD~EEຆ㊋(eNwk=Rɠ峭q\\\"5Ἠ婾^>'ls\\n8QAK<l_⭨穟\": [\n                                                                                                                                                true,\n                                                                                                                                                true,\n                                                                                                                                                {\"ﳷm箅6qⷈ?ﲈ憟b۷⫉἞V뚴少U呡瓴ꉆs~嘵得㌶4XR漊\": [\n                                                                                                                                                 \"폆介fM暪$9K[ㄇ샍큳撦g撟恸jҐF㹹aj bHᘀ踉ꎐＣ粄 
a?\\u000fK즉郝 幨9D舢槷Xh뵎u훩Ꜿ턾ƅ埂P埆k멀{䢹~?D<\\/꼢XR\\u001b〱䝽꼨i㘀ḟ㚺A-挸\",\n                                                                                                                                                 false,\n                                                                                                                                                 null,\n                                                                                                                                                 -1.1710758021294953E-20,\n                                                                                                                                                 3996737830256461142,\n                                                                                                                                                 true,\n                                                                                                                                                 null,\n                                                                                                                                                 -8271596984134071193,\n                                                                                                                                                 \"_1G퉁텑m䮔鰼6멲Nmꇩﬅ쓟튍N许FDj+3^ﶜ⎸\\u0019⤕橥!\\\"s-뾞lz北׸ꍚ랬)?l⻮고i䑰\\u001f䪬\",\n                                                                                                                                                 4.459124464204517E-19,\n                                                                                                                                                 -4.0967172848578447E18,\n                                                                                                                                                 5643211135841796287,\n                                                                                          
                                                       -9.482336221192844E-19,\n                                                                                                                                                 \"౪冏釶9D?s螭X榈枸j2秀v]泌鰚岒聵轀쌶i텽qMbL]R,\",\n                                                                                                                                                 null,\n                                                                                                                                                 [\n                                                                                                                                                  null,\n                                                                                                                                                  {\"M쪊ꯪ@;\\u0011罙ꕅ<e᝺|爑Yⵝ<\\/&ᩎ<腊ሑᮔ੃F豭\": [\n                                                                                                                                                   \"^0࡟1볏P폋ፏ杈F⨥Iꂴ\\\"z磣VⅡ=8퀝2]䢹h1\\u0017{jT<I煛5%D셍S⑙⅏J*샐 巙ດ;᧡䙞\",\n                                                                                                                                                   [{\n                                                                                                                                                    \"'㶡큾鄧`跊\\\"gV[?u᭒Ʊ髷%葉굵a띦N켧Qﯳy%y䩟髒L䯜S䵳r絅肾킂ၐ'ꔦg긓a'@혔যW谁ᝬF栩ŷ+7w鞚\": 6.3544416675584832E17,\n                                                                                                                                                    \"苎脷v改hm쏵|㋊g_ᔐ 뒨蹨峟썎㷸|Ο刢?Gͨ옛-?GꦱIEYUX4?%ꘋᆊ㱺\": -2.8418378709165287E-19,\n                                                                                                                                                    \"誰?(H]N맘]k洳\\\"q蒧蘞!R퐫\\\\(Q$T5N堍⫣윿6|럦속︅ﭗ(\": [\n                                                       
                                                                                              \"峩_\\u0003A瘘?✓[硫䎯ၽuጭ\\\"@Y綅첞m榾=贮9R벿῜Z\",\n                                                                                                                                                     null,\n                                                                                                                                                     \"䰉㗹㷾Iaᝃqcp쓘὾൫Q|ﵓ<\\/ḙ>)- Q䲌mo펹L_칍樖庫9꩝쪹ᘹ䑖瀍aK ?*趤f뭓廝p=磕\",\n                                                                                                                                                     \"哑z懅ᤏ-ꍹux쀭\",\n                                                                                                                                                     [\n                                                                                                                                                      true,\n                                                                                                                                                      3998739591332339511,\n                                                                                                                                                      \"ጻ㙙?᳸aK<\\/囩U`B3袗ﱱ?\\\"/k鏔䍧2l@쿎VZ쨎/6ꃭ脥|B?31+on颼-ꮧ,O嫚m ࡭`KH葦:粘i]aSU쓙$쐂f+詛頖b\",\n                                                                                                                                                      [{\"^<9<箝&絡;%i﫡2攑紴\\\\켉h쓙-柂䚝ven\\u20f7浯-Ꮏ\\r^훁䓚헬\\u000e?\\\\ㅡֺJ떷VOt\": [{\n                                                                                                                                                       \"-௄卶k㘆혐஽y⎱㢬sS઄+^瞥h;ᾷj;抭\\u0003밫f<\\/5Ⱗ裏_朻%*[-撵䷮彈-芈\": {\n                                                                                                                                                        
\"㩩p3篊G|宮hz䑊o곥j^Co0\": [\n                                                                                                                                                         653239109285256503,\n                                                                                                                                                         {\"궲?|\\\":N1ۿ氃NZ#깩:쇡o8킗ࡊ[\\\"됸Po핇1(6鰏$膓}⽐*)渽J'DN<썙긘毦끲Ys칖\": {\n                                                                                                                                                          \"2Pr?Xjㆠ?搮/?㓦柖馃5뚣Nᦼ|铢r衴㩖\\\"甝湗ܝ憍\": \"\\\"뾯i띇筝牻$珲/4ka $匝휴译zbAᩁꇸ瑅&뵲衯ꎀᆿ7@ꈋ'ᶨH@ᠴl+\",\n                                                                                                                                                          \"7뢽뚐v?4^ꊥ_⪛.>pởr渲<\\/⢕疻c\\\"g䇘vU剺dஔ鮥꒚(dv祴X⼹\\\\a8y5坆\": true,\n                                                                                                                                                          \"o뼄B욞羁hr﷔폘뒚⿛U5pꪴfg!6\\\\\\\"爑쏍䢱W<ﶕ\\\\텣珇oI/BK뺡'谑♟[Ut븷亮g(\\\"t⡎有?ꬊ躺翁艩nl F⤿蠜\": 1695826030502619742,\n                                                                                                                                                          \"ۊ깖>ࡹ햹^ⵕ쌾BnN〳2C䌕tʬ]찠?ݾ2饺蹳ぶꌭ訍\\\"◹ᬁD鯎4e滨T輀ﵣ੃3\\u20f3킙D瘮g\\\\擦+泙ၧ 鬹ﯨַ肋7놷郟lP冝{ߒhড়r5,꓋\": null,\n                                                                                                                                                          \"ΉN$y{}2\\\\N﹯ⱙK'8ɜͣwt,．钟廣䎘ꆚk媄_\": null,\n                                                                                                                                                          \"䎥eᾆᝦ읉,Jުn岪㥐s搖謽䚔5t㯏㰳㱊ZhD䃭f絕s鋡篟a`Q鬃┦鸳n_靂(E4迠_觅뷝_宪D(NL疶hL追V熑%]v肫=惂!㇫5⬒\\u001f喺4랪옑\": {\n                                                                                                                                                           
\"2a輍85먙R㮧㚪Sm}E2yꆣꫨrRym㐱膶ᔨ\\\\t綾A☰.焄뙗9<쫷챻䒵셴᭛䮜.<\\/慌꽒9叻Ok䰊Z㥪幸k\": [\n                                                                                                                                                            null,\n                                                                                                                                                            true,\n                                                                                                                                                            {\"쌞쐍\": {\n                                                                                                                                                             \"▟GL K2i뛱iＱ\\\"̠.옛1X$}涺]靎懠ڦ늷?tf灟ݞゟ{\": 1.227740268699265E-19,\n                                                                                                                                                             \"꒶]퓚%ฬK❅\": [{\n                                                                                                                                                              \"(ෛ@Ǯっ䧼䵤[aﾃൖvEnAdU렖뗈@볓yꈪ,mԴ|꟢캁(而첸죕CX4Y믅\": \"2⯩㳿ꢚ훀~迯?᪑\\\\啚;4X\\u20c2襏B箹)俣eỻw䇄\",\n                                                                                                                                                              \"75༂f詳䅫ꐧ鏿 }3\\u20b5'∓䝱虀f菼Iq鈆﨤g퍩)BFa왢d0뮪痮M鋡nw∵謊;ꝧf美箈ḋ*\\u001c`퇚퐋䳫$!V#N㹲抗ⱉ珎(V嵟鬒_b㳅\\u0019\": null,\n                                                                                                                                                              \"e_m@(i㜀3ꦗ䕯䭰Oc+-련0뭦⢹苿蟰ꂏSV䰭勢덥.ྈ爑Vd,ᕥ=퀍)vz뱊ꈊB_6듯\\\"?{㒲&㵞뵫疝돡믈%Qw限,?\\r枮\\\"? 
N~癃ruࡗdn&\": null,\n                                                                                                                                                              \"㉹&'Pfs䑜공j<\\/?|8oc᧨L7\\\\pXᭁ 9᪘\": -2.423073789014103E18,\n                                                                                                                                                              \"䝄瑄䢸穊f盈᥸,B뾧푗횵B1쟢f\\u001f凄\": \"魖⚝2儉j꼂긾껢嗎0ࢇ纬xI4](੓`蕞;픬\\fC\\\"斒\\\")2櫷I﹥迧\",\n                                                                                                                                                              \"ퟯ詔x悝령+T?Bg⥄섅kOeQ큼㻴*{E靼6氿L缋\\u001c둌๶-㥂2==-츫I즃㠐Lg踞ꙂEG貨鞠\\\"\\u0014d'.缗gI-lIb䋱ᎂDy缦?\": null,\n                                                                                                                                                              \"紝M㦁犿w浴詟棓쵫G:䜁?V2ힽ7N*n&㖊Nd-'ຊ?-樹DIv⊜)g䑜9뉂ㄹ푍阉~ꅐ쵃#R^\\u000bB䌎䦾]p.䀳\": [{\"ϒ爛\\\"ꄱ︗竒G䃓-ま帳あ.j)qgu扐徣ਁZ鼗A9A鸦甈!k蔁喙:3T%&㠘+,䷞|챽v䚞문H<\\/醯r셓㶾\\\\a볜卺zE䝷_죤ဵ뿰᎟CB\": [\n                                                                                                                                                               6233512720017661219,\n                                                                                                                                                               null,\n                                                                                                                                                               -1638543730522713294,\n                                                                                                                                                               false,\n                                                                                                                                                               -8901187771615024724,\n                                                                       
                                                                                        [\n                                                                                                                                                                3891351109509829590,\n                                                                                                                                                                true,\n                                                                                                                                                                false,\n                                                                                                                                                                -1.03836679125188032E18,\n                                                                                                                                                                {\n                                                                                                                                                                 \"<?起HCᷭ죎劐莇逰/{gs\\u0014⽛㰾愫tￖ<솞ڢ됌煲膺਻9x닳x࡭Q訽,ᶭඦtt掾\\\"秧㺌d˪䙻꫗:ᭈh4緞痐䤴c뫚떩త<?ᕢ謚6]폛O鰐鋛镠贩赟\\\"<G♷1'\": true,\n                                                                                                                                                                 \"቙ht4ߝBqꦤ+\\u0006멲趫灔)椾\": -1100102890585798710,\n                                                                                                                                                                 \"総兎곇뇸粟F醇;朠?厱楛㶆ⶏ7r⾛o꯬᳡F\\\\머幖 㜦\\f[搦᥽㮣0䕊?J㊳뀄e㔔+?<n↴复\": [\n                                                                                                                                                                  \"4~ꉍ羁\\\\偮(泤叕빜\\u0014>j랎:g曞ѕᘼ}链N\",\n                                                                                                                                            
                      -1.1103819473845426E-19,\n                                                                                                                                                                  true,\n                                                                                                                                                                  [\n                                                                                                                                                                   true,\n                                                                                                                                                                   null,\n                                                                                                                                                                   -7.9091791735309888E17,\n                                                                                                                                                                   true,\n                                                                                                                                                                   {\"}蔰鋈+ꐨ啵0?g*사%`J?*\": [{\n                                                                                                                                                                    \"\\\"2wG?yn,癷BK\\\\龞䑞x?蠢\": -3.7220345009853505E-19,\n                                                                                                                                                                    \";饹়❀)皋`噿焒j(3⿏w>偍5X<np?<줯<Y]捘!J೸UⳂNे7v௸㛃ᄧ톿䨷鯻v焇=烻TQ!F⦰䣣눿K鷚눁'⭲m捠(䚻\": [\n                                                                                                                                                                     \"蹕 淜੃b\\\"+몾ⴕ\",\n                                                                                         
                                                                            null,\n                                                                                                                                                                     35892237756161615,\n                                                                                                                                                                     {\n                                                                                                                                                                      \" 듹㏝)5慁箱&$~:遰쮐<\\/堋?% \\\\勽唅z손帋䘺H髀麡M퇖uz\\u0012m諦d᳤콌樝\\rX싹̡Ო\": -433791617729505482,\n                                                                                                                                                                      \"-j溗ࢵcz!:}✽5ഇ,욨ݏs#덫=南浺^}E\\\\Y\\\\T*뼈cd꺐cۘ뎁䨸됱K䠴㉿恿逳@wf쏢<\\/[L[\": -9.3228549642908109E17,\n                                                                                                                                                                      \"Ms킭u஗%\\\\u⍎/家欲ἅ答㓽/꯳齳|㭘Pr\\\"v<\\/禇䔆$GA䊻˔-:틊[h?倬荤ᾞ৳.Gw\\u000b\": [\n                                                                                                                                                                       \"0宜塙I@䏴蝉\\\\Uy뒅=2<h暒K._贡璐Yi檻_⮵uᐝ㘗聠[f\\u0015힢Hꔮ}጑;誏yf0\\\"\\u20cc?(=q斠➽5ꎾ鞘kⲃ\",\n                                                                                                                                                                       -2.9234211354411E-19,\n                                                                                                                                                                       false,\n                                                                                                                                                                       true,\n                
                                                                                                                                                       {\n                                                                                                                                                                        \"\\u0011⟴GH_;#怵:\\u001c\\u0002n1U\\\\p/왔(┫]hꐚ7\\r0䵷첗岷O௷?㝎[殇|J=?韷pᶟ儜犆?5კ1kꍖiH竧뛈ପdmk游y(콪팱꾍k慧 y辣\": [\n                                                                                                                                                                         false,\n                                                                                                                                                                         \"O\\\"끍p覈ykv磂㢠㝵~뀬튍lC&4솎䇃:Mj\",\n                                                                                                                                                                         -7.009964654003924E-20,\n                                                                                                                                                                         false,\n                                                                                                                                                                         -49306078522414046,\n                                                                                                                                                                         null,\n                                                                                                                                                                         null,\n                                                                                                                                                                         2160432477732354319,\n                                                                                   
                                                                                      true,\n                                                                                                                                                                         \"4횡h+!踹ꐬP鮄{0&뱥M?샍鞅n㮞ᨹ?쒆毪l'箅^ꚥ頛`e㻨52柳⮙嫪࡟딯a.~䵮1f吘N&zȭL榓ۃ鳠5d㟆M@㣥ӋA΍q0縶$\",\n                                                                                                                                                                         -3.848996532974368E16,\n                                                                                                                                                                         true,\n                                                                                                                                                                         null,\n                                                                                                                                                                         -3.5240055580952525E18,\n                                                                                                                                                                         {\n                                                                                                                                                                          \" vﭷၵ#ce乃5僞?Z D`묨粇ᐔ绠vWL譢u뽀\\\\J|tⓙt№\\\"ꨋnT凮ᒩ蝂篝b騩:䢭Hbv읻峨z㹚T趗햆귣학津XiＹ@ᖥK\": true,\n                                                                                                                                                                          \"!F 醌y䉸W2ꇬ\\u0006/䒏7~%9擛햀徉9⛰+?㌘;ꠓX䇻Dfi뼧쒒\\u0012F謞՝絺+臕kऍLSQ쌁X쎬幦HZ98蒊枳\": \"澤令#\\u001d抍⛳@N搕퀂[5,✄ꘇ~䘷?\\u0011Xꈺ[硸⠘⛯X醪聡x\\u0007쌇MiX/|ﾐ뚁K8䁡W)銀q僞綂蔕E\",\n                                                                                                                                                                          
\"6␲䣖R৞@ငg?<\\/೴x陙Xꈺ崸⠅ᇾ\\\\0X,H쟴셭A稂ힿゝF\\\\쑞\\u0012懦(Aᯕ灭~\\u0001껮X?逊\": 5.7566819207732864E17,\n                                                                                                                                                                          \"[c?椓\": false,\n                                                                                                                                                                          \"k䒇\": 2583824107104166717,\n                                                                                                                                                                          \"꙯N훙㏅ﮒ燣㨊瞯咽jMxby뻭뵫װ[\\\"1畈?ৱL\": \"띣ᔂ魠羓犴ꚃ+|rY\",\n                                                                                                                                                                          \"녺Z?䬝鉉:?ⳙ瘏Cኯ.Vs[釿䨉쐧\\\\\\\\*쵢猒$\\\\y溔^,㑳\": {\"藶꺟\": [{\n                                                                                                                                                                           \"\\\"d훣N2zq]?'檿죸忷篇ﮟ擤m'9!죶≓p뭻\\\\ᇷ\\f퇶_䰸h๐Q嵃訾㘑従ꯦ䞶jL틊r澵Omᾫ!H䱤팼/;|᭺I7슎YhuXi⚼\": -1.352716906472438E-19,\n                                                                                                                                                                           \"M⽇倻5J䂫औ᝔楸#J[Fﹱ쫮W誻bWz?}1\\\"9硪뻶fe\": \"盬:Ѹ砿획땣T凊(m灦呜ﻝR㿎艴䂵h\",\n                                                                                                                                                                           \"R띾k힪CH钙_i苮ⰵoᾨ紑퉎7h؉\\\"柀蝽z0့\\\"<?嘭$蜝?礲7岇槀묡?V钿T⣜v+솒灚ԛ2米mH?>薙婏聿3aFÆÝ\": \"2,ꓴg?_섦_>Y쪥션钺;=趘F~?D㨫\\bX?㹤+>/믟kᠪ멅쬂Uzỵ]$珧`m雁瑊ඖ鯬cꙉ梢f묛bB\",\n                                                                                                                                                                           
\"♽n$YjKiXX*GO贩鏃豮祴遞K醞眡}ꗨv嵎꼷0୸+M菋eH徸J꣆:⼐悥B켽迚㯃b諂\\u000bjꠜ碱逮m8\": [\n                                                                                                                                                                            \"푷᣺ﻯd8ﱖ嬇ភH鹎⡱᱅0g:果6$GQ췎{vᷧYy-脕x偹砡館⮸C蓼ꏚ=軄H犠G谖ES詤Z蠂3l봟hￒ7䦹1GPQG癸숟~[#駥8zQ뛣J소obg,\",\n                                                                                                                                                                            null,\n                                                                                                                                                                            1513751096373485652,\n                                                                                                                                                                            null,\n                                                                                                                                                                            -6.851466660824754E-19,\n                                                                                                                                                                            {\"䩂-⴮2ٰK솖풄꾚ႻP앳1H鷛wmR䗂皎칄?醜<\\/&ࠧ㬍X濬䵈K`vJ륒Q/IC묛!;$vϑ\": {\n                                                                                                                                                                             \"@-ꚗxྐྵ@m瘬\\u0010U絨ﮌ驐\\\\켑寛넆T=tQ㭤L연@脸삯e-:⩼u㎳VQ㋱襗ຓ<Ⅶ䌸cML3+\\u001e_C)r\\\\9+Jn\\\\Pﺔ8蠱檾萅Pq鐳话T䄐I\": -1.80683891195530061E18,\n                                                                                                                                                                             \"ᷭዻU~ཷsgSJ`᪅'%㖔n5픆桪砳峣3獮枾䌷⊰呀\": {\n                                                                                                                                                                              
\"Ş੉䓰邟自~X耤pl7间懑徛s첦5ਕXexh⬖鎥᐀nNr(J컗｜ૃF\\\"Q겮葲놔엞^겄+㈆话〾희紐G'E?飕1f❼텬悚泬먐U睬훶Qs\": false,\n                                                                                                                                                                              \"(\\u20dag8큽튣>^Y{뤋.袊䂓;_g]S\\u202a꽬L;^'#땏bႌ?C緡<䝲䲝断ꏏ6\\u001asD7IK5Wxo8\\u0006p弊⼂ꯍ扵\\u0003`뵂픋%ꄰ⫙됶l囏尛+䗅E쟇\\\\\": [\n                                                                                                                                                                               true,\n                                                                                                                                                                               {\n                                                                                                                                                                                \"\\n鱿aK㝡␒㼙2촹f;`쾏qIࡔG}㝷䐍瓰w늮*粅9뒪ㄊCj倡翑閳R渚MiUO~仨䜶RꙀA僈㉋⦋n{㖥0딿벑逦⥻0h薓쯴Ꝼ\": [\n                                                                                                                                                                                 5188716534221998369,\n                                                                                                                                                                                 2579413015347802508,\n                                                                                                                                                                                 9.010794400256652E-21,\n                                                                                                                                                                                 -6.5327297761238093E17,\n                                                                                                                                                                                 1.11635352494065523E18,\n         
                                                                                                                                                                        -6656281618760253655,\n                                                                                                                                                                                 {\n                                                                                                                                                                                  \"\": \")?\",\n                                                                                                                                                                                  \"TWKLꑙ裑꺔UE俸塑炌Ũ᜕-o\\\"徚#\": {\"M/癟6!oI51ni퐚=댡>xꍨ\\u0004 ?\": {\n                                                                                                                                                                                   \"皭\": {\"⢫䋖>u%w잼<䕏꘍P䋵$魋拝U䮎緧皇Y훂&|羋ꋕ잿cJ䨈跓齳5\\u001a삱籷I꿾뤔S8㌷繖_Yឯ䲱B턼O歵F\\\\l醴o_欬6籏=D\": [\n                                                                                                                                                                                    false,\n                                                                                                                                                                                    true,\n                                                                                                                                                                                    {\"Mt|ꏞD|F궣MQ뵕T,띺k+?㍵i\": [\n                                                                                                                                                                                     7828094884540988137,\n                                                                                                                                       
                                              false,\n                                                                                                                                                                                     {\n                                                                                                                                                                                      \"!༦鯠,&aﳑ>[euJꏽ綷搐B.h\": -7648546591767075632,\n                                                                                                                                                                                      \"-n켧嘰{7挐毄Y,>❏螵煫乌pv醑Q嶚!|⌝責0왾덢ꏅ蛨S\\\\)竰'舓Q}A釡5#v\": 3344849660672723988,\n                                                                                                                                                                                      \"8閪麁V=鈢1녈幬6棉⪮둌\\u207d᚛驉ꛃ'r䆉惏ै|bἧﺢᒙ<=穊强s혧eꮿ慩⌡ \\\\槳W븧J檀C,ᘉ의0俯퀉M;筷ࣴ瓿{늊埂鄧_4揸Nn阼Jੵ˥(社\": true,\n                                                                                                                                                                                      \"o뼀vw)4A뢵(a䵢)p姃뛸\\u000fK#KiQp\\u0005ꅍ芅쏅\": null,\n                                                                                                                                                                                      \"砥$ꥸ┇耽u斮Gc{z빔깎밇\\\\숰\\u001e괷各㶇쵿_ᴄ+h穢p촀Ნ䃬z䝁酳ӂ31xꔄ1_砚W렘G#2葊P \": [\n                                                                                                                                                                                       -3709692921720865059,\n                                                                                                                                                                                       null,\n                                                                                                                   
                                                                    [\n                                                                                                                                                                                        6669892810652602379,\n                                                                                                                                                                                        -135535375466621127,\n                                                                                                                                                                                        \"뎴iO}Z? 馢녱稹ᄾ䐩rSt帤넆&7i騏멗畖9誧鄜'w{Ͻ^2窭외b㑎粖i矪ꦨ탪跣)KEㆹ\\u0015V8[W?⽉>'kc$䨘ᮛ뉻٬M5\",\n                                                                                                                                                                                        1.10439588726055846E18,\n                                                                                                                                                                                        false,\n                                                                                                                                                                                        -4349729830749729097,\n                                                                                                                                                                                        null,\n                                                                                                                                                                                        [\n                                                                                                                                                                                         false,\n                                                                      
                                                                                                                   \"_蠢㠝^䟪/D녒㡋ỎC䒈판\\u0006એq@O펢%;鹐쏌o戥~A[ꡉ濽ỳ&虃᩾荣唙藍茨Ig楡꒻M窓冉?\",\n                                                                                                                                                                                         true,\n                                                                                                                                                                                         2.17220752996421728E17,\n                                                                                                                                                                                         -5079714907315156164,\n                                                                                                                                                                                         -9.960375974658589E-20,\n                                                                                                                                                                                         \"ᾎ戞༒\",\n                                                                                                                                                                                         true,\n                                                                                                                                                                                         false,\n                                                                                                                                                                                         [[\n                                                                                                                                                                                          \"ⶉᖌX⧕홇)g엃⹪x뚐癟\\u0002\",\n                      
                                                                                                                                                                    -5185853871623955469,\n                                                                                                                                                                                          {\n                                                                                                                                                                                           \"L㜤9ợㇶK鐰⋓V뽋˖!斫as|9＂፬䆪?7胜&n薑~\": -2.11545634977136992E17,\n                                                                                                                                                                                           \"O8뀩D}캖q萂6༣㏗䈓煮吽ਆᎼDᣘ폛;\": false,\n                                                                                                                                                                                           \"YTᡅ^L㗎cbY$pᣞ縿#fh!ꘂb삵玊颟샞ဢ$䁗鼒몁~rkH^:닮먖츸륈⪺쒉砉?㙓扫㆕꣒`R䢱B酂?C뇞<5Iޚ讳騕S瞦z\": null,\n                                                                                                                                                                                           \"\\\\RB?`mG댵鉡幐物䵎有5*e骄T㌓ᛪ琾駒Ku\\u001a[柆jUq8⋈5鿋츿myﻗ?雍ux঴?\": 5828963951918205428,\n                                                                                                                                                                                           \"n0晅:黯 xu씪^퓞cB㎊ᬍ⺘٤փ~B岚3㥕擄vᲂ~F?C䶖@$m~忔S왖㲚?챴⊟W#벌{'㰝I䝠縁s樘\\\\X뢻9핡I6菍ㄛ8쯶]wॽ0L\\\"q\": null,\n                                                                                                                                                                                           \"x增줖j⦦t䏢᎙㛿Yf鼘~꫓恄4惊\\u209c\": \"oOhbᤃ᛽z&Bi犑\\\\3B㩬劇䄑oŁ쨅孥멁ຖacA㖫借㞝vg싰샂㐜#譞⢤@k]鋰嘘䜾L熶塥_<\\/⍾屈ﮊ_mY菹t뙺}Ox=w鮮4S1ꐩמּ'巑\",\n                                  
                                                                                                                                                         \"㗓蟵ꂾe蠅匳(JP䗏෸\\u0089耀왲\": [{\n                                                                                                                                                                                            \"ᤃ㵥韎뤽\\r?挥O쯡⇔㞚3伖\\u0005P⋪\\\"D궣QLn(⚘罩䩢Ŏv䤘尗뼤됛O淽鋋闚r崩a{4箙{煷m6〈\": {\n                                                                                                                                                                                             \"l곺1L\": {\n                                                                                                                                                                                              \"T'ਤ?砅|੬Km]䄩\\\"(࿶<\\/6U爢䫈倔郴l2㴱^줣k'L浖L鰄Rp今鎗⒗C얨M훁㡧ΘX粜뫈N꤇輊㌻켑#㮮샶-䍗룲蠝癜㱐V>=\\\\I尬癤t=\": 7648082845323511446,\n                                                                                                                                                                                              \"鋞EP:<\\/_`ၧe混ㇹBd⯢㮂驋\\\\q碽饩跓྿ᴜ+j箿렏㗑yK毢宸p謹h䦹乕U媣\\\\炤\": [[\n                                                                                                                                                                                               \"3\",\n                                                                                                                                                                                               [\n                                                                                                                                                                                                true,\n                                                                                                                                                                                                
3.4058271399411134E-20,\n                                                                                                                                                                                                true,\n                                                                                                                                                                                                \"揀+憱f逮@먻BpW曉\\u001a㣐⎊$n劈D枤㡞좾\\u001aᛁ苔౩闝1B䷒Ṋ݋➐ꀞꐃ磍$t੤_:蘺⮼(#N\",\n                                                                                                                                                                                                697483894874368636,\n                                                                                                                                                                                                [\n                                                                                                                                                                                                 \"vᘯ锴)0訶}䳅⩚0O壱韈ߜ\\u0018*U鍾䏖=䧉뽑单휻ID쿇嘗?ꌸῬ07\",\n                                                                                                                                                                                                 -5.4858784319382006E18,\n                                                                                                                                                                                                 7.5467775182251151E18,\n                                                                                                                                                                                                 -8911128589670029195,\n                                                                                                                                                                                                 -7531052386005780140,\n  
                                                                                                                                                                                               null,\n                                                                                                                                                                                                 [\n                                                                                                                                                                                                  null,\n                                                                                                                                                                                                  true,\n                                                                                                                                                                                                  [[{\n                                                                                                                                                                                                   \"1欯twG<u䝮␽ꇣ_ჟﱴଶ-쪋\\\"?홺k:莝Ꜫ*⺵꽹댅釔좵}P?=9렿46b\\u001c\\\\S?(筈僦⇶爷谰1ྷa\": true,\n                                                                                                                                                                                                   \"TҫJYxڪ\\\\鰔℮혡)m_WVi眪1[71><\\/Q:0怯押殃탷聫사<ỗꕧ蚨䡁nDꌕ\\u001c녬~蓩<N蹑\\\"{䫥lKc혁뫖앺:vⵑ\": \"g槵?\",\n                                                                                                                                                                                                   \"aꨩ뻃싥렌1`롗}Yg>鲃g儊>ꏡl㻿/⑷*챳6㻜W毤緛ﹺᨪ4\\u0013뺚J髬e3쳸䘦伧?恪&{L掾p+꬜M䏊d娘6\": {\n                                                                                                                                                                                 
                   \"2p첼양棜h䜢﮶aQ*c扦v︥뮓kC寵횂S銩&ǝ{O*य़iH`U큅ࡓr䩕5ꄸ?`\\\\᧫?ᮼ?t〟崾훈k薐ì/ｉy꤃뵰z1<\\/AQ#뿩8jJ1z@u䕥\": 1.82135747285215155E18,\n                                                                                                                                                                                                    \"ZdN &=d년ᅆ'쑏ⅉ:烋5&៏ᄂ汎来L㯄固{钧u\\\\㊏튚e摑&t嗄ꖄUb❌?m䴘熚9EW\": [{\n                                                                                                                                                                                                     \"ଛ{i*a(\": -8.0314147546006822E17,\n                                                                                                                                                                                                     \"⫾ꃆY\\u000e+W`௸ \\\"M뒶+\\\\뷐lKE}(NT킶Yj選篒쁶'jNQ硾(똡\\\\\\\"逌ⴍy? IRꜘ὞鄬﨧:M\\\\f⠋Cꚜ쫊ᚴNV^D䕗ㅖἔIao꿬C⍏8\": [\n                                                                                                                                                                                                      287156137829026547,\n                                                                                                                                                                                                      {\n                                                                                                                                                                                                       \"H丞N逕<rO䎗:텕<\\/䶩샌Sd%^ᵯ눐엑者g䖩똭蕮1U驣?Pⰰ\\u001fp(W]67\\u0015﫣6굺OR羸#촐F蒈;嘙i✵@_撶y㤏⤍(:᧗뼢༌朆@⏰㤨ꭲ?-n>⯲\": {\"\": {\n                                                                                                                                                                                                        \"7-;枮阕梒9ᑄZ\": [[[[\n                                                                                                                                  
                                                                       null,\n                                                                                                                                                                                                         {\n                                                                                                                                                                                                          \"\": [[[[\n                                                                                                                                                                                                           -7.365909561486078E-19,\n                                                                                                                                                                                                           2948694324944243408,\n                                                                                                                                                                                                           null,\n                                                                                                                                                                                                           [\n                                                                                                                                                                                                            true,\n                                                                                                                                                                                                            \"荒\\\"并孷䂡쵼9o䀘F\\u0002龬7⮹Wz%厖/*? 
a*R枈㌦됾g뒠䤈q딄㺿$쮸tᶎ릑弣^鏎<\\/Y鷇驜L鿽<\\/춋9Mᲆឨ^<\\/庲3'l낢\",\n                                                                                                                                                                                                            \"c鮦\\u001b두\\\\~?眾ಢu݆綑෪蘛轋◜gȃ<\\/ⴃcpkDt誩܅\\\"Y\",\n                                                                                                                                                                                                            [[\n                                                                                                                                                                                                             null,\n                                                                                                                                                                                                             null,\n                                                                                                                                                                                                             [\n                                                                                                                                                                                                              3113744396744005402,\n                                                                                                                                                                                                              true,\n                                                                                                                                                                                                              \"v(y\",\n                                                                                                                                                                                                  
            {\n                                                                                                                                                                                                               \"AQ幆h쾜O+꺷铀ꛉ練A蚗⼺螔j㌍3꽂楎䥯뎸먩?\": null,\n                                                                                                                                                                                                               \"蠗渗iz鱖w]擪E\": 1.2927828494783804E-17,\n                                                                                                                                                                                                               \"튷|䀭n*曎b✿~杤U]Gz鄭kW|㴚#㟗ഠ8u擨\": [[\n                                                                                                                                                                                                                true,\n                                                                                                                                                                                                                null,\n                                                                                                                                                                                                                null,\n                                                                                                                                                                                                                {\"⾪壯톽g7?㥜ώQꑐ㦀恃㧽伓\\\\*᧰閖樧뢇赸N휶䎈pI氇镊maᬠ탷#X?A+kНM ༑᩟؝?5꧎鰜ṚY즫궔 =ঈ;ﳈ?*s|켦蜌wM笙莔\": [\n                                                                                                                                                                                                                 null,\n                                                                                          
                                                                                                                       -3808207793125626469,\n                                                                                                                                                                                                                 [\n                                                                                                                                                                                                                  -469910450345251234,\n                                                                                                                                                                                                                  7852761921290328872,\n                                                                                                                                                                                                                  -2.7979740127017492E18,\n                                                                                                                                                                                                                  1.4458504352519893E-20,\n                                                                                                                                                                                                                  true,\n                                                                                                                                                                                                                  \"㽙깹?먏䆢:䴎ۻg殠JBTU⇞}ꄹꗣi#I뵣鉍r혯~脀쏃#釯:场:䔁>䰮o'㼽HZ擓௧nd\",\n                                                                                                                                                                                                                  [\n              
                                                                                                                                             -1.969987519306507E-19,\n                                                                                                                                                                                                                                                            null,\n                                                                                                                                                                                                                                                            [\n                                                                                                                                                                                                                                                             3.42437673373841E-20,\n                                                                                                                                                                                                                                                             true,\n                                                                                                                                                                                                                                                             \"e걷M墁\\\"割P␛퍧厀R䱜3ﻴO퓫r﹉⹊\",\n                                                                                                                                                                                                                                                             [\n                                                                                                                                                                                                                                                     
         -8164221302779285367,\n                                                                                                                                                                                                                                                              [\n                                                                                                                                                                                                                                                               true,\n                                                                                                                                                                                                                                                               null,\n                                                                                                                                                                                                                                                               \"爘y^-?蘞Ⲽꪓa␅ꍨ}I\",\n                                                                                                                                                                                                                                                               1.4645984996724427E-19,\n                                                                                                                                                                                                                                                               [{\n                                                                                                                                                                                                                                                                \"tY좗⧑mrzﺝ㿥ⴖ᥷j諅\\u0000q賋譁Ꞅ⮱S\\nࡣB/큃굪3Zɑ复o<\\/;롋\": null,\n                                                         
                                                                                                                                                                                                       \"彟h浠_|V4䦭Dᙣ♞u쿻=삮㍦\\u001e哀鬌\": [{\"6횣楠,qʎꗇ鎆빙]㱭R굋鈌%栲j分僅ペ䇰w폦p蛃N溈ꡐꏀ?@(GI뉬$ﮄ9誁ꓚ2e甸ڋ[䁺,\\u0011\\u001cࢃ=\\\\+衪䷨ᯕ鬸K\": [[\n                                                                                                                                                                                                                                                                 \"ㅩ拏鈩勥\\u000etgWVXs陂規p狵w퓼{뮵_i\\u0002ퟑႢ⬐d6鋫F~챿搟\\u0096䚼1ۼ칥0꣯儏=鋷牋ⅈꍞ龐\",\n                                                                                                                                                                                                                                                                 -7283717290969427831,\n                                                                                                                                                                                                                                                                 true,\n                                                                                                                                                                                                                                                                 [\n                                                                                                                                                                                                                                                                  4911644391234541055,\n                                                                                                                                                                                                                                                                  {\n     
                                                                                                                                                                                                                                                              \"I鈒첽P릜朸W徨觘-Hᎄ퐟⓺>8kr1{겵䍃〛ᬡ̨O귑o䝕'쿡鉕p5\": \"fv粖RN瞖蛐a?q꤄\\u001d⸥}'ꣴ犿ꦼ?뤋?鵆쥴덋䡫s矷̄?ඣ/;괱絢oWfV<\\/\\u202cC,㖦0䑾%n賹g&T;|ǉ_欂N4w\",\n                                                                                                                                                                                                                                                                   \"짨䠗;䌕u i+r๏0\": [{\"9䥁\\\\఩8\\\"馇z䇔<\\/ႡY3e狚쐡\\\"ุ6ﰆZ遖c\\\"Ll:ꮾ疣<\\/᭙Ｏ◌납୕湞9⡳Und㫜\\u0018^4pj1;䧐儂䗷ୗ>@e톬\": {\n                                                                                                                                                                                                                                                                    \"a⑂F鋻Q螰'<퇽Q贝瀧{ᘪ,cP&~䮃Z?gＩ彃\": [\n                                                                                                                                                                                                                                                                     -1.69158726118025933E18,\n                                                                                                                                                                                                                                                                     [\n                                                                                                                                                                                                                                                                      \"궂z簽㔛㮨瘥⤜䛖Gℤ逆Y⪾j08Sn昞ꘔ캻禀鴚P謦b{ꓮmN靐Mᥙ5\\\"睏2냑I\\u0011.L&=?6ᄠ뻷X鸌t刑\\\"#z)o꫚n쳟줋\",\n                                                     
                                                                                                                                                                                                                 null,\n                                                                                                                                                                                                                                                                      7517598198523963704,\n                                                                                                                                                                                                                                                                      \"ኑQp襟`uᩄr方]*F48ꔵn俺ሙ9뇒\",\n                                                                                                                                                                                                                                                                      null,\n                                                                                                                                                                                                                                                                      null,\n                                                                                                                                                                                                                                                                      6645782462773449868,\n                                                                                                                                                                                                                                                                      1219168146640438184,\n                                                                                                       
                                                                                                                                                               null,\n                                                                                                                                                                                                                                                                      {\n                                                                                                                                                                                                                                                                       \")ယ넌竀Sd䰾zq⫣⏌ʥ\\u0010ΐ' |磪&p牢蔑mV蘸૰짬꺵;K\": [\n                                                                                                                                                                                                                                                                        -7.539062290108008E-20,\n                                                                                                                                                                                                                                                                        [\n                                                                                                                                                                                                                                                                         true,\n                                                                                                                                                                                                                                                                         false,\n                                                                                                                                                              
                                                                                                           null,\n                                                                                                                                                                                                                                                                         true,\n                                                                                                                                                                                                                                                                         6574577753576444630,\n                                                                                                                                                                                                                                                                         [[\n                                                                                                                                                                                                                                                                          1.2760162530699766E-19,\n                                                                                                                                                                                                                                                                          [\n                                                                                                                                                                                                                                                                           null,\n                                                                                                                                                                                                                                
                                           [\n                                                                                                                                                                                                                                                                            \"顊\\\\憎zXB,\",\n                                                                                                                                                                                                                                                                            [{\n                                                                                                                                                                                                                                                                             \"㇆{CVC9－MN㜋ઘR눽#{h@ퟨ!鼚׼XOvXS\\u0017ᝣ=cS+梽៲綆16s덽휐y屬?ᇳG2ᴭ\\u00054쫖y룇nKcW̭炦s/鰘ᬽ?J|퓀髣n勌\\u0010홠P>j\": false,\n                                                                                                                                                                                                                                                                             \"箴\": [\n                                                                                                                                                                                                                                                                              false,\n                                                                                                                                                                                                                                                                              \"鍞j\\\"ꮾ*엇칬瘫xṬ⭽쩁䃳\\\"-⋵?ᦽ<cਔ↎⩧%鱩涎삧u9K⦈\\\"῝ᬑV绩킯愌ṱv@GꝾ跶Ꚇ(?䖃vI᧊xV\\r哦j㠒?*=S굤紴ꊀ鹭쬈s<DrIu솹꧑?\",\n                                                                                                  
                                                                                                                                                                            {\n                                                                                                                                                                                                                                                                               \".}S㸼L?t\\u000fK⑤s~hU鱜꘦}쪍C滈4ꓗ蛌):ྦ\\\"顥이⢷ῳYLn\\\"?fꘌ>댎Ĝ\": true,\n                                                                                                                                                                                                                                                                               \"Pg帯佃籛n㔠⭹࠳뷏≻࿟3㞱!-쒾!}쭪䃕!籿n涻J5ਲ਼yvy;Rኂ%ᔡጀ裃;M⣼)쵂쑈\": 1.80447711803435366E18,\n                                                                                                                                                                                                                                                                               \"ꈑC⡂ᑆ㤉壂뎃Xub<\\/쀆༈憓ق쨐ק\\\\\": [\n                                                                                                                                                                                                                                                                                7706977185172797197,\n                                                                                                                                                                                                                                                                                {\"\": {\"K╥踮砆NWࡆFy韣7ä밥{|紒︧䃀榫rᩛꦡTSy잺iH8}ퟴ,M?Ʂ勺ᴹ@T@~꾂=I㙕뾰_涀쑜嫴曣8IY?ҿo줫fऒ}\\\\S\\\"ᦨ뵼#nDX\": {\n                                                                                                                                                                      
                                                                                                           \"♘k6?଱癫d68?㽚乳䬳-V顷\\u0005蝕?\\u0018䞊V{邾zじl]雏k臤~ൖH뒐iꢥ]g?.G碄懺䔛p<q꜉S岗_.%\": 7688630934772863849,\n                                                                                                                                                                                                                                                                                 \"溗摽嗙O㧀,⡢⼰呠ꅧ㓲/葇䢛icc@-r\\b渂ꌳ뻨饑觝ᖜ\\\\鮭\\u0014엙㥀᧺@浹W2꛵{W률G溮킀轡䬆g㨑'Q聨៪网Hd\\\"Q늴ᱢﶨ邮昕纚枑?▰hr羌驀[痹<\\/\": [\n                                                                                                                                                                                                                                                                                  -1.0189902027934687E-19,\n                                                                                                                                                                                                                                                                                  {\"窶椸릎뚻shE\\\"ꪗႥꎳU矖佟{SJ\": [{\"-慜x櫹XY-澐ܨ⣷ઢ鯙%Fu\\u0000迋▒}᥷L嗭臖oញc넨\\u0016/迎1b꯸g뢱㐧蓤䒏8C散삭|\\\"컪輩鹩\\\"\\\\g$zG䥽긷?狸꿭扵㲐:URON&oU8\": [\n                                                                                                                                                                                                                                                                                   null,\n                                                                                                                                                                                                                                                                                   true,\n                                                                                                                                                     
                                                                                                                              null,\n                                                                                                                                                                                                                                                                                   -2.8907335031148883E17,\n                                                                                                                                                                                                                                                                                   -3864019407187144121,\n                                                                                                                                                                                                                                                                                   {\n                                                                                                                                                                                                                                                                                    \"`빬d⵺4H뜳⧈쓑ohஸ*㶐ﻇ⸕䠵!i䝬﹑h夘▥ꗐ푹갇㵳TA鳠嚵\\\\B<X}3訒c⋝{*﫢w]璨-g捭\\\\j໵侠Ei层\\u0011\": 3.758356090089446E-19,\n                                                                                                                                                                                                                                                                                    \"䄘ﮐ)Y놞씃㾱陰큁:{\\u2059/S⓴\": [[\n                                                                                                                                                                                                                                                                                     null,\n                    
                                                                                                                                                                                                                                                                 [[\n                                                                                                                                                                                                                                                                                      -3.8256602120220546E-20,\n                                                                                                                                                                                                                                                                                      null,\n                                                                                                                                                                                                                                                                                      7202317607724472882,\n                                                                                                                                                                                                                                                                                      \"CWQ뚿\",\n                                                                                                                                                                                                                                                                                      null,\n                                                                                                                                                                                                                                                                                    
  false,\n                                                                                                                                                                                                                                                                                      true,\n                                                                                                                                                                                                                                                                                      null,\n                                                                                                                                                                                                                                                                                      2857038485417498625,\n                                                                                                                                                                                                                                                                                      6.191302233218633E-20,\n                                                                                                                                                                                                                                                                                      null,\n                                                                                                                                                                                                                                                                                      -6795250594296208046,\n                                                                                                                                                                                                                                        
                                              [\n                                                                                                                                                                                                                                                                                       true,\n                                                                                                                                                                                                                                                                                       {\n                                                                                                                                                                                                                                                                                        \"%ዧ遰Yᚯ⚀x莰愒Vᔈ턗BN洝ꤟA1⍌l콹풪H;OX๫륞쪐ᰚц@͎黾a邬<L厒Xb龃7f웨窂二;\": [[\n                                                                                                                                                                                                                                                                                         null,\n                                                                                                                                                                                                                                                                                         \"耲?䙧㘓F6Xs틭멢.v뚌?鄟恠▽'묺競?WvᆾCtxo?dZ;䨸疎\",\n                                                                                                                                                                                                                                                                                         {\n                                                                                                                                                      
                                                                                                                                    \"@hWꉁ&\\\"빜4礚UO~C;う殩_ꀥ蘁奢^챟k→ᡱKMⵉ<\\/Jㅲ붉L͟Q\": false,\n                                                                                                                                                                                                                                                                                          \"tU뢂8龰I먽7,.Y搽Z툼=&⨥覽K乫햶㠸%#@Z끖愓^⍊⾂몒3E_噆J(廊ឭyd䞜鈬Ћ档'⣘I\": {\n                                                                                                                                                                                                                                                                                           \"tK*ꔵ銂u艗ԃ쿏∳ꄂ霫X3♢9y?=ⲭdЊb&xy}\": [\n                                                                                                                                                                                                                                                                                            -4.097346784534325E-20,\n                                                                                                                                                                                                                                                                                            null,\n                                                                                                                                                                                                                                                                                            6016848468610144624,\n                                                                                                                                                                                                                                                 
                                           -8194387253692332861,\n                                                                                                                                                                                                                                                                                            null,\n                                                                                                                                                                                                                                                                                            {\n                                                                                                                                                                                                                                                                                             \"(祬诀譕쯠娣c봝r?畄kT뼾⌘⎨?noV䏘쥝硎n?\": [\n                                                                                                                                                                                                                                                                                              1.82679422844617293E18,\n                                                                                                                                                                                                                                                                                              [\n                                                                                                                                                                                                                                                                                               false,\n                                                                                                                                              
                                                                                                                                                 2.6849944122427694E18,\n                                                                                                                                                                                                                                                                                               true,\n                                                                                                                                                                                                                                                                                               [\n                                                                                                                                                                                                                                                                                                false,\n                                                                                                                                                                                                                                                                                                {\n                                                                                                                                                                                                                                                                                                 \";0z⭆;화$bਔ瀓\\\"衱^?잢ᢛ⣿~`ꕉ薸⌳໿湘腌'&:ryБꋥၼ꒥筙꬜긨?X\": -3536753685245791530,\n                                                                                                                                                                                                                                                                                                 
\"c;Y7釚Uꃣ割J༨Y戣w}c峰뢨㽑㫈0N>R$䅒X觨l봜A刊8R梒',}u邩퉕?;91Ea䈈믁G⊶芔h袪&廣㺄j;㡏綽\\u001bN頸쳘橆\": -2272208444812560733,\n                                                                                                                                                                                                                                                                                                 \"拑Wﵚj鵼駳Oࣿ)#㾅顂N傓纝y僱栜'Bꐍ-!KF*ꭇK￤?䈴^:啤wG逭w᧯\": \"xᣱmYe1ۏ@霄F$ě꧘푫O䤕퀐Pq52憬ꀜ兴㑗ᡚ?L鷝ퟐ뭐zJꑙ}╆ᅨJB]\\\"袌㺲u8䯆f\",\n                                                                                                                                                                                                                                                                                                 \"꿽၅㔂긱Ǧ?SI\": -1669030251960539193,\n                                                                                                                                                                                                                                                                                                 \"쇝ɨ`!葎>瞺瘡驷錶❤ﻮ酜=\": -6961311505642101651,\n                                                                                                                                                                                                                                                                                                 \"?f7♄꫄Jᡔ훮e읇퍾፣䭴KhखT;Qty}O\\\\|뫁IῒNe(5惁ꥶㆷY9ﮡ\\\\ oy⭖-䆩婁m#x봉>Y鈕E疣s驇↙ᙰm<\": {\"퉻:dꂁ&efￎ쫢[\\\"돈늖꺙|Ô剐1͖-K:ʚ᭕/;쏖㷛]I痐职4g<Oꗢ뫺N쯂륬J╆.`ᇵP轆&fd$?苅o궓vO侃沲⍩嚅沗 E%⿰얦wi\\\\*趫\": [\n                                                                                                                                                                                                                                                                                                  3504362220185634767,\n                                                                   
                                                                                                                                                                                                                               false,\n                                                                                                                                                                                                                                                                                                  \"qzX朝qT3軞T垈ꮲQ览ᚻ⻑쎎b驌䵆ꬠ5Fୗ䲁缿ꝁ蒇潇Ltᆄ钯蜀W欥ሺ\",\n                                                                                                                                                                                                                                                                                                  \"볰ɐ霬)젝鶼kwoc엷荁r \\u001d쒷⎹8{%澡K늒?iﺩd=&皼倚J9s@３偛twὡgj䁠흪5⭉⨺役&놎cﺉ㺡N5\",\n                                                                                                                                                                                                                                                                                                  false,\n                                                                                                                                                                                                                                                                                                  null,\n                                                                                                                                                                                                                                                                                                  \"D0ﬆ[ni锹r*0k6ꀎ덇UX2⽼৞䃚粭#)Z桷36P]<\\/`\",\n                                                                                                                                                     
                                                                                                                                             4281410120849816730,\n                                                                                                                                                                                                                                                                                                  null,\n                                                                                                                                                                                                                                                                                                  -3256922126984394461,\n                                                                                                                                                                                                                                                                                                  1.16174580369801549E18,\n                                                                                                                                                                                                                                                                                                  {\n                                                                                                                                                                                                                                                                                                   \" ᆼꤗ~*TN긂<㡴턱℃酰^蘒涯잰淭傛2rൡet쾣䐇m*㸏y\\\"\\\\糮᧺qv쌜镜T@yg1譬ﭧﳭ\\f\": null,\n                                                                                                                                                                                                                                                                  
                                 \"圾ᨿ0xᮛ禵ਗ਼D-㟻ẵ錚e\\\"赜.˶m)鴑B(I$<\\/轴퉯揷⋏⏺*)宓쓌?*橯Lx\\\\f쩂㞼⇸\\\"ﺧ軂遳V\\\\땒\\\"캘c:G\": null,\n                                                                                                                                                                                                                                                                                                   \"?﵁_곢翸폈8㿠h열Q2㭛}RY㯕ＹT놂⽻e^B<\\/맫ﻇ繱\\u0017Gц⟊ᢑﵩS:jt櫣嗒⟰W㴚搦ᅉe[w䋺?藂翙Ⲱ芮䍘╢囥lpdu7r볺I 近qFyᗊ\": [\n                                                                                                                                                                                                                                                                                                    \"$b脬aﾠ襬育Bگ嵺Pw+'M<\\/כֿn䚚v螁bN⒂}褺%lቦ阤\\\"ꓺᏗM牏,۞Ҷ!矬?ke9銊X괦)䈽틁脽ṫ䈞ᴆ^=Yᗿ遛4I귺⋥%\",\n                                                                                                                                                                                                                                                                                                    false,\n                                                                                                                                                                                                                                                                                                    2.9444482723232051E18,\n                                                                                                                                                                                                                                                                                                    2072621064799640026,\n                                                                                                                                                                                                 
                                                                                                   \"/_뇴뫢j㍒=Nꡦ↍Ժ赒❬톥䨞珯su*媸瀳鷔抡o흺-៳辏勷f绔:䵢搢2\",\n                                                                                                                                                                                                                                                                                                    false,\n                                                                                                                                                                                                                                                                                                    \"쒜 E䌐/큁\\u0018懺_<\\\\隺&{wF⤊谼(<죽遠8?@*rᶊGd뻻갇&Ⳇq᣿e࢔t_ꩄ梸O詬C᧧Kꩠ풤9눙醅됞}竸rw?滨ӽK⥿ཊG魲']`๖5㄰\",\n                                                                                                                                                                                                                                                                                                    -2375253967958699084,\n                                                                                                                                                                                                                                                                                                    {\"嗱⿲\\\"f億ᝬ\": {\"v?䚑킡`◤k3,骥曘뒤Oᒱ㲹^圮᠀YT껛&촮P:G/T⣝#튣k3炩蠏k@橈䏷S䧕,熜晬k1鮥玸먚7䤡f绝嗚샴ᥒ~0q拮垑a뻱LⰖ_\": [{\n                                                                                                                                                                                                                                                                                                     \":p尢\": -6.688985172863383E17,\n                                                                                                                                               
                                                                                                                                                      \"A0\\u0001疠ﻵ爻鼀湶I~W^岀mZx#㍈7r拣$Ꜷ疕≛⦒痋盩Vꬷ᭝ΩQꍪ療鈑A(劽詗ꭅo-獶鑺\\\"Ⓠ@$j탥;\": [\n                                                                                                                                                                                                                                                                                                      8565614620787930994,\n                                                                                                                                                                                                                                                                                                      [\n                                                                                                                                                                                                                                                                                                       \"嶗PC?උQ㪣$&j幾㾷h慑 즊慧⪉霄M窊ꁷ'鮕)䊏铨m趦䗲(g罣ЮKVﯦ鏮5囗ﰼ鿦\",\n                                                                                                                                                                                                                                                                                                       -7168038789747526632,\n                                                                                                                                                                                                                                                                                                       null,\n                                                                                                                                                                                                        
                                                                                               -7.8069738975270288E16,\n                                                                                                                                                                                                                                                                                                       2.25819579241348352E17,\n                                                                                                                                                                                                                                                                                                       -6.5597416611655936E18,\n                                                                                                                                                                                                                                                                                                       {\n                                                                                                                                                                                                                                                                                                        \"瘕멦핓+?ﾌZ귢z鍛V\": {\n                                                                                                                                                                                                                                                                                                         \"ᕾ\": 1.7363275204701887E-19,\n                                                                                                                                                                                                                                                                                                         
\"㭌s뎹㳉\": {\"\\u00187FI6Yf靺+UC쬸麁␲䂿긕R\\\\ᆮC?Φ耭\\rOத际핅홦*베W㸫㯼᡹cㅜ|G㮗\\u0013[o`?jHV앝?蒪꩚!퍫ᜦ㌇䚇鿘:@\": [\n                                                                                                                                                                                                                                                                                                          \"}푛Г콲<䟏C藐呈#2㓋#ྕ፟尿9q竓gI%랙mꍬoa睕贿J咿D_熏Zz皳験I豼B扳ḢQ≖㻹㱣D䝦練2'ᗍ㗣▌砲8罿%హF姦;0悇<\\/\\\"p嚧\",\n                                                                                                                                                                                                                                                                                                          -710184373154164247,\n                                                                                                                                                                                                                                                                                                          \"Vo쫬⬾ꝫⴷŻ\\u0004靎HBꅸ_aVBＨbN>Z4⍜kเꛘZ⥺\\\\Bʫᇩ鄨魢弞&幟ᓮ2̊盜\",\n                                                                                                                                                                                                                                                                                                          -9006004849098116748,\n                                                                                                                                                                                                                                                                                                          -3118404930403695681,\n                                                                                                                                                                                                 
                                                                                                         {\n                                                                                                                                                                                                                                                                                                           \"_彃Y艘-\\\"Xx㤩㳷瑃?%2䐡鵛o<A?\\\"顜ᘌΈ;ⷅC洺L蚴蚀voq:,Oo4쪂)\": 5719065258177391842,\n                                                                                                                                                                                                                                                                                                           \"l륪맽耞塻論倐E㗑/㲕QM辬I\\\"qi酨玑㖪5q]尾魨鲡ƞY}⮯蠇%衟Fsf윔䐚찤i腳\": {\"ꢪ'a䣊糈\": {\"밑/♋S8s㼴5瓹O{댞\\\"9XﰇlJ近8}q{긧ⓈI᱑꿋腸D瀬H\\\"ﺬ'3?}\\u0014#?丙㑯ᥨ圦',g鑠(樴턇?\": [\n                                                                                                                                                                                                                                                                                                            2.5879275511391145E18,\n                                                                                                                                                                                                                                                                                                            null,\n                                                                                                                                                                                                                                                                                                            [\n                                                                                                                                         
                                                                                                                                                                    \"3㼮ꔌ1Gẃ2W龙j͊{1囐㦭9x宠㑝oR䐕犽\",\n                                                                                                                                                                                                                                                                                                             1268729930083267852,\n                                                                                                                                                                                                                                                                                                             \"땕軚⿦7C\",\n                                                                                                                                                                                                                                                                                                             [\n                                                                                                                                                                                                                                                                                                              -3.757935946502082E18,\n                                                                                                                                                                                                                                                                                                              \"\\\"赌'糬_2뭾᝝b\",\n                                                                                                                                                                                                                          
                                                                                    {\n                                                                                                                                                                                                                                                                                                               \"(a䕎ጽjҰD4.ᴡ66ԃ畮<\\/l`k癸\\\\㇋ࣆ욯R㫜픉녬挛;ڴ맺`.;焓q淞뮕ٹ趴r蔞ꯔ䟩v粏u5<\\/pZ埖Skrvj帛=\\u0005aa\": null,\n                                                                                                                                                                                                                                                                                                               \"璄≩ v몛ᘮ%?:1頌챀H㷪뉮k滘e\": [\n                                                                                                                                                                                                                                                                                                                \"ꤾ{`c샬왌펡[俊络vmz㪀悫⸹ᷥ5o'㾵 L蹦qjYIYណԠW냁剫<\\/W嗂0,}\",\n                                                                                                                                                                                                                                                                                                                2.4817616702666762E18,\n                                                                                                                                                                                                                                                                                                                false,\n                                                                                                                                                                                             
                                                                                                                   null,\n                                                                                                                                                                                                                                                                                                                null,\n                                                                                                                                                                                                                                                                                                                -8.6036958071260979E17,\n                                                                                                                                                                                                                                                                                                                null,\n                                                                                                                                                                                                                                                                                                                -1.2744078022652468E-19,\n                                                                                                                                                                                                                                                                                                                -4.4752020268429594E17,\n                                                                                                                                                                                                                                                                            
                                    1.13672865156637872E17,\n                                                                                                                                                                                                                                                                                                                [\n                                                                                                                                                                                                                                                                                                                 false,\n                                                                                                                                                                                                                                                                                                                 false,\n                                                                                                                                                                                                                                                                                                                 null,\n                                                                                                                                                                                                                                                                                                                 -4.178004168554046E-20,\n                                                                                                                                                                                                                                                                                                                 true,\n                                                    
                                                                                                                                                                                                                                                             2927542512798605527,\n                                                                                                                                                                                                                                                                                                                 {\n                                                                                                                                                                                                                                                                                                                  \".ꔓ뉤1䵬cHy汼䊆賓ᐇƩ|樷❇醎㬅4\\u0003赵}#yD5膏晹뱓9ꖁ虛J㺕 t䊛膎ؤ\": {\n                                                                                                                                                                                                                                                                                                                   \"rVtᓸ5^`েN⹻Yv᥋lꌫt拘?<鮰넿ZC?㒽^\": {\"␪k_:>귵옔夘v*탋职&㳈챗|O钧\": [\n                                                                                                                                                                                                                                                                                                                    false,\n                                                                                                                                                                                                                                                                                                                    
\"daꧺdᗹ羞쯧H㍤鄳頳<型孒ン냆㹀f4㹰\\u000f|C*ሟ鰠(O<ꨭ峹ipຠ*y೧4VQ蔔hV淬{?ᵌEfrI_\",\n                                                                                                                                                                                                                                                                                                                    \"j;ꗣ밷邍副]ᗓ\",\n                                                                                                                                                                                                                                                                                                                    -4299029053086432759,\n                                                                                                                                                                                                                                                                                                                    -5610837526958786727,\n                                                                                                                                                                                                                                                                                                                    [\n                                                                                                                                                                                                                                                                                                                     null,\n                                                                                                                                                                                                                                                                                                                     [\n          
                                                                                                                                                                                                                                                                                                            -1.3958390678662759E-19,\n                                                                                                                                                                                                                                                                                                                      {\n                                                                                                                                                                                                                                                                                                                       \"lh좈T_믝Y\\\"伨\\u001cꔌG爔겕ꫳ晚踍⿻읐T䯎]~e#฽燇\\\"5hٔ嶰`泯r;ᗜ쮪Q):/t筑,榄&5懶뎫狝(\": [{\n                                                                                                                                                                                                                                                                                                                        \"2ፁⓛ]r3C攟וּ9賵s⛔6'ஂ|\\\"ⵈ鶆䐹禝3\\\"痰ࢤ霏䵩옆䌀?栕r7O簂Isd?K᫜`^讶}z8?z얰T:X倫⨎ꑹ\": -6731128077618251511,\n                                                                                                                                                                                                                                                                                                                        \"|︦僰~m漿햭\\\\Y1'Vvخ굇ቍ챢c趖\": [null]\n                                                                                                                                                                                                                         
                                                                                              }],\n                                                                                                                                                                                                                                                                                                                       \"虌魿閆5⛔煊뎰㞤ᗴꥰF䮥蘦䂪樳-K᝷-(^\\u20dd_\": 2.11318679791770592E17\n                                                                                                                                                                                                                                                                                                                      }\n                                                                                                                                                                                                                                                                                                                     ]\n                                                                                                                                                                                                                                                                                                                    ]\n                                                                                                                                                                                                                                                                                                                   ]},\n                                                                                                                                                                                                                                                                                        
                           \"묗E䀳㧯᳀逞GMc\\b墹㓄끖Ơ&U??펌鑍 媋k))ᄊ\": null,\n                                                                                                                                                                                                                                                                                                                   \"묥7콽벼諌J_DɯﮪM殴䣏,煚ྼ`Y:씧<\\/⩫%yf䦀!1Ჶk춎Q米W∠WC跉鬽*ᛱi<?,l<崣炂骵*?8푐៣ⰵ憉⎑.,Nw罣q+ο컆弎\": false\n                                                                                                                                                                                                                                                                                                                  },\n                                                                                                                                                                                                                                                                                                                  \"e[|଀+lꑸ㝈TT?뿿|ꫛ9`㱯䊸楋-곳賨?쳁k棽擋wQ餈⟐Nq[q霩䵀뷮锅ꚢ\": 5753148631596678144,\n                                                                                                                                                                                                                                                                                                                  \"sᓝ鴻߸d렶ὕ蜗ဟ툑!诉౿\": false,\n                                                                                                                                                                                                                                                                                                                  \"|4䕳鵻?䈔(]틍/Ui#湻{듲ーMዀt7潔泄Ch⸨}쏣`螧銚㋼壯kⰥQ戵峉갑x辙'첛\": \"jd䘯$䕌茷!auw眶ㅥ䁣ꆢ民i\",\n                                                                                                                                    
                                                                                                                                                                              \"剖駰ꞫsM2]ᾴ2ࡷ祅拌Av狔꩛'ꓗ킧ꣁ0酜✘O'\": false,\n                                                                                                                                                                                                                                                                                                                  \"澩뢣ꀁeU~D\\\\ꮡ킠\": \"v^YC嚈ί\\u0007죋h>㴕L꘻ꀏ쓪\\\"_g鿄'#t⽙?,Wg㥖|D鑆e⥏쪸僬h鯔咼ඡ;4TK聎졠嫞\"\n                                                                                                                                                                                                                                                                                                                 }\n                                                                                                                                                                                                                                                                                                                ]\n                                                                                                                                                                                                                                                                                                               ]\n                                                                                                                                                                                                                                                                                                              }\n                                                                                                                                                                          
                                                                                                                                   ]\n                                                                                                                                                                                                                                                                                                            ]\n                                                                                                                                                                                                                                                                                                           ]}}\n                                                                                                                                                                                                                                                                                                          }\n                                                                                                                                                                                                                                                                                                         ]}\n                                                                                                                                                                                                                                                                                                        },\n                                                                                                                                                                                                                                                                                                        \"뿋뀾淣截䔲踀&XJ펖꙯^Xb訅ꫥgᬐ>棟S\\\"혧騾밫겁7-\": 
\"擹8C憎W\\\"쵮yR뢩浗絆䠣簿9䏈引Wcy䤶孖ꯥ;퐌]輩䍐3@{叝 뽸0ᡈ쵡Ⲇ\\u001dL匁꧐2F~ݕ㪂@W^靽L襒ᦘ~沦zZ棸!꒲栬R\"\n                                                                                                                                                                                                                                                                                                       }\n                                                                                                                                                                                                                                                                                                      ]\n                                                                                                                                                                                                                                                                                                     ],\n                                                                                                                                                                                                                                                                                                     \"Z:덃൛5Iz찇䅄駠㭧蓡K1\": \"e8᧤좱U%?ⵇ䯿鿝\\u0013縮R∱骒EO\\u000fg?幤@֗퉙vU`\",\n                                                                                                                                                                                                                                                                                                     \"䐃쪈埽້=Ij,쭗쓇చ\": false\n                                                                                                                                                                                                                                                                                                    }]}}\n                                         
                                                                                                                                                                                                                                                          ]\n                                                                                                                                                                                                                                                                                                  }\n                                                                                                                                                                                                                                                                                                 ]}\n                                                                                                                                                                                                                                                                                                }\n                                                                                                                                                                                                                                                                                               ]\n                                                                                                                                                                                                                                                                                              ]\n                                                                                                                                                                                                                                                                                             ],\n  
                                                                                                                                                                                                                                                                                           \"咰긖VM]᝼6䓑쇎琺etDҌ?㞏ꩄ퇫밉gj8蠃\\\"⩐5䛹1ࣚ㵪\": \"ക蹊?⎲⧘⾚̀I#\\\"䈈⦞돷`wo窭戕෱휾䃼)앷嵃꾞稧,Ⴆ윧9S?೗EMk3Მ3+e{⹔Te驨7䵒?타Ulg悳o43\"\n                                                                                                                                                                                                                                                                                            }\n                                                                                                                                                                                                                                                                                           ],\n                                                                                                                                                                                                                                                                                           \"zQᤚ纂땺6#ٽ﹧v￿#ࠫ휊冟蹧텈ꃊʆ?&a䥯De潝|쿓pt瓞㭻啹^盚2Ꝋf醪,얏T窧\\\\Di䕎谄nn父ꋊE\": -2914269627845628872,\n                                                                                                                                                                                                                                                                                           \"䉩跐|㨻ᷢ㝉B{蓧瞸`I!℄욃힕#ೲᙾ竛ᔺCjk췒늕貭词\\u0017署?W딚%(pꍁ⤼띳^=on뺲l䆼bzrﳨ[&j狸䠠=ᜑꦦ\\u2061յnj=牲攑)M\\\\龏\": false,\n                                                                                                                                                                                                                                                                               
            \"뎕y絬᫡⥮Ϙᯑ㌔/NF*˓.,QEzvK!Iwz?|쥾\\\"ꩻL꼗Bꔧ賴緜s뉣隤茛>ロ?(?^`>冺飒=噸泥⺭Ᲊ婓鎔븜z^坷裮êⓅ໗jM7ﶕ找\\\\O\": 1.376745434746303E-19\n                                                                                                                                                                                                                                                                                          },\n                                                                                                                                                                                                                                                                                          \"䐛r滖w㏤<k;l8ꡔጵ⮂ny辶⋃퍼僮z\\\"﮲X@t5෼暧퓞猋♅䦖QC鹮|픨( ,>,|Nዜ\": false\n                                                                                                                                                                                                                                                                                         }\n                                                                                                                                                                                                                                                                                        ]],\n                                                                                                                                                                                                                                                                                        \"@꿙?薕尬 gd晆(띄5躕ﻫS蔺4)떒錸瓍?~\": 1665108992286702624,\n                                                                                                                                                                                                                                                                                        
\"w믍nᏠ=`঺ￆC>'從됐槷䤝眷螄㎻揰扰XￊC贽uჍ낟jKD03T!lDV쀉Ӊy뢖,袛!终캨G?鉮Q)⑗1쾅庅O4ꁉH7?d\\u0010蠈줘월ސ粯Q!낇껉6텝|{\": null,\n                                                                                                                                                                                                                                                                                        \"~˷jg쿤촖쉯y\": -5.5527605669177098E18,\n                                                                                                                                                                                                                                                                                        \"펅Wᶺzꐆと푭e?4j仪열[D<鈑皶婆䵽ehS?袪;HꍨM뗎ば[(嗏M3q퍟g4y╸鰧茀[Bi盤~﫝唎鋆彺⦊q?B4쉓癚O洙킋툈䶯_?ퟲ\": null\n                                                                                                                                                                                                                                                                                       }\n                                                                                                                                                                                                                                                                                      ]\n                                                                                                                                                                                                                                                                                     ]]\n                                                                                                                                                                                                                                                                                    ]],\n                                                                                             
                                                                                                                                                                                       \"꟱Ԕ㍤7曁聯ಃ錐V䷰?v㪃૦~K\\\"$%请|ꇹn\\\"k䫛㏨鲨\\u2023䄢\\u0004[<S8ᐬ뭩脥7U.m࿹:D葍┆2蘸^U'w1젅;䠆ꋪB껮>︊VJ?䶟ាꮈ䗱=깘U빩\": -4863152493797013264\n                                                                                                                                                                                                                                                                                   }\n                                                                                                                                                                                                                                                                                  ]}]}\n                                                                                                                                                                                                                                                                                 ]\n                                                                                                                                                                                                                                                                                }}}\n                                                                                                                                                                                                                                                                               ],\n                                                                                                                                                                                                                                                                               
\"쏷쐲۹퉃~aE唙a챑,9㮹gLHd'䔏|킗㍞䎥&KZYT맵7䥺N<Hp4ꕭ⹠꽐c~皽z\": \"课|ᖾ䡁廋萄䐪W\\u0016&Jn괝b~摓M>ⱳ同莞鿧w\\\\༌疣n/+ꎥU\\\"封랾○ퟙAJᭌ?9䛝$?驔9讐짘魡T֯c藳`虉C읇쐦T\"\n                                                                                                                                                                                                                                                                              }\n                                                                                                                                                                                                                                                                             ],\n                                                                                                                                                                                                                                                                             \"谶개gTR￐>ၵ͚dt晑䉇陏滺}9㉸P漄\": -3350307268584339381\n                                                                                                                                                                                                                                                                            }]\n                                                                                                                                                                                                                                                                           ]\n                                                                                                                                                                                                                                                                          ]\n                                                                                                                                                                                                    
                                                                     ]]\n                                                                                                                                                                                                                                                                        ]\n                                                                                                                                                                                                                                                                       ],\n                                                                                                                                                                                                                                                                       \"0y꟭馋X뱔瑇:䌚￐廿jg-懲鸭䷭垤㒬茭u賚찶ಽ+\\\\mT땱\\u20821殑㐄J쩩䭛ꬿNS潔*d\\\\X,壠뒦e殟%LxG9:摸\": 3737064585881894882,\n                                                                                                                                                                                                                                                                       \"풵O^-⧧ⅶvѪ8廸鉵㈉ר↝Q㿴뺟EႳvNM:磇>w/៻唎뷭୥!냹D䯙i뵱貁C#⼉NH6`柴ʗ#\\\\!2䂗Ⱨf?諳.P덈-返I꘶6?8ꐘ\": -8934657287877777844,\n                                                                                                                                                                                                                                                                       \"溎-蘍寃i诖ര\\\"汵\\\"\\ftl,?d⼡쾪⺋h匱[,෩I8MҧF{k瓿PA'橸ꩯ綷퉲翓\": null\n                                                                                                                                                                                                                                                                      }\n                                                                      
                                                                                                                                                                                               ]\n                                                                                                                                                                                                                                                                    ],\n                                                                                                                                                                                                                                                                    \"ោ係؁<元\": 1.7926963090826924E-18\n                                                                                                                                                                                                                                                                   }}]\n                                                                                                                                                                                                                                                                  }\n                                                                                                                                                                                                                                                                 ]\n                                                                                                                                                                                                                                                                ]]}]\n                                                                                                                                                                                                       
                                                        }]\n                                                                                                                                                                                                                                                              ]\n                                                                                                                                                                                                                                                             ]\n                                                                                                                                                                                                                                                            ]\n                                                                                                                                                                                                                                                           ],\n                                                                                                                                                                                                                                                           \"ጩV<\\\"ڸsOᤘ\": 2.0527167903723048E-19\n                                                                                                                                                                                                                                                          }]\n                                                                                                                                                                                                                                                         ]}\n                                                                                                                      
                                                                                                                                  ]\n                                                                                                                                                                                                                                                       ]],\n                                                                                                                                                                                                                                                       \"∳㙰3젴p᧗䱙?`<U὇<\\/意E[ᮚAj诂ᒽ阚uv徢ဎ떗尔Ᵹ훀쩑J䐴?⪏=륪ᆩ푰ஓ㐕?럽VK\\\"X?檨လ齿I/耉A(AWA~⏯稐蹫\": false,\n                                                                                                                                                                                                                                                       \"偒妝뾇}䀼链i⇃%⋜&璪Ix渥5涧qq棩ᥝ-⠫AA낇yY颕A*裦O|n?䭬혗F\": null,\n                                                                                                                                                                                                                                                       \"琭CL얭B혆Kॎ`鎃nrsZiժW砏)?p~K~A眱䲏QO妣\\u001b\\u001b]ᵆᆯ&㐋ᏹ豉뺘$ꭧ#j=C)祤⫢歑1o㒙諩\": 7028426989382601021,\n                                                                                                                                                                                                                                                       \"쳱冲&ဤ䌏앧h胺-齱H忱8왪RDKᅒ䬋ᔶS*J}ስ漵'㼹뮠9걢9p봋경ጕtởꚳT䶽瘙%춴`@nಆ4<d??#僜ᙤ钴=薔ꭂbLXNam蹈\": \"樭る蹿= Uurwkn뙧⌲%\\\"쑃牪\\\"cq윕o@\",\n                                                                                                                                                                                                                                                       \"溌[H]焎SLㅁ?뀼䫨災W\": 
                                                                        \"곃㲧<\\/dఓꂟs其ࡧ&N葶=?c㠤Ჴ'횠숄臼#\\u001a~\": false\n                                                                                                                                                                }\n                                                                                                                                                               ]\n                                                                                                                                                              ]}]\n                                                                                                                                                             }]\n                                                                                                                                                            }}\n                                                                                                                                                           ],\n                                                                                                                                                           \"2f`⽰E쵟>J笂裭!〛觬囀ۺ쟰#桊l鹛ⲋ|RA_Vx፭gE됓h﵀mfỐ|?juTU档[d⢼⺻p濚7E峿\": 5613688852456817133\n                                                                                                                                                          },\n                                                                                                                                                          \"濘끶g忮7㏵殬W팕Q曁 뫰)惃廊5%-蹚zYZ樭ﴷQ锘쯤崫gg\": true,\n                                                                                                                                                          \"絥ᇑ⦏쒓븣爚H.㗊߄o蘵貆ꂚ(쎔O᥉ﮓ]姨Wꁓ!RMA|o퉢THx轮7M껁U즨'i뾘舯o\": \"跥f꜃?\"\n                                                                                                  
                                                       }}\n                                                                                                                                                        ],\n                                                                                                                                                        \"鷰鹮K-9k;ﰰ?_ݦѷ-ꅣ䩨Zꥱ\\\"mꠟ屎/콑Y╘2&鸞脇㏢ꀇ࠺ⰼ拾喭틮L꽩bt俸墶 [l/웄\\\"꾦\\u20d3iও-&+\\u000fQ+໱뵞\": -1.296494662286671E-19\n                                                                                                                                                       },\n                                                                                                                                                       \"HX੹/⨇୕붷Uﮘ旧\\\\쾜͔3l鄈磣糂̖䟎Eᐳw橖b῀_딕hu葰窳闹вU颵|染H죶.fP䗮:j䫢\\\\b뎖i燕ꜚG⮠W-≚뉗l趕\": \"ଊ칭Oa᡺$IV㷧L\\u0019脴셀붿餲햪$迳向쐯켂PqfT\\\" ?I屉鴼쿕@硙z^鏕㊵M}㚛T젣쓌-W⩐-g%⺵<뮱~빅╴瑿浂脬\\u0005왦燲4Ⴭb|D堧 <\\/oEQh\",\n                                                                                                                                                       \"䘶#㥘੐캔f巋ἡAJ䢚쭈ࣨ뫒*mᇊK，ࣺAꑱ\\u000bR<\\/A\\\"1a6鵌㯀bh곿w(\\\"$ꘁ*rಐ趣.d࿩k/抶면䒎9W⊃9\": \"漩b挋Sw藎\\u0000\",\n                                                                                                                                                       \"畀e㨼mK꙼HglKb,\\\"'䤜\": null\n                                                                                                                                                      }]}]\n                                                                                                                                                     ]\n                                                                                                                                                    ]\n                                                                                                                                              
     }]\n                                                                                                                                                  ]}\n                                                                                                                                                 ]\n                                                                                                                                                ]}\n                                                                                                                                               ],\n                                                                                                                                               \"歙>駿ꣂ숰Q`J΋方樛(d鱾뼣(뫖턭\\u20f9lচ9歌8o]8윶l얶?镖G摄탗6폋폵+g:䱫홊<멀뀿/س|ꭺs걐跶稚W々c㫣⎖\": \"㣮蔊깚Cꓔ舊|XRf遻㆚︆'쾉췝\\\\&言\",\n                                                                                                                                               \"殭\\\"cށɨꝙ䞘:嬮e潽Y펪㳅/\\\"O@ࠗ겴]췖YǞ(t>R\\\"N?梳LD恭=n氯T豰2R諸#N}*灧4}㶊G䍣b얚\": null,\n                                                                                                                                               \"襞<\\/啧 B|싞W瓇)6簭鼡艆lN쩝`|펭佡\\\\間邝[z릶&쭟愱ꅅ\\\\T᰽1鯯偐栈4̸s윜R7⒝/똽?치X\": \"⏊躖Cﱰ2Qẫ脐&இ?%냝悊\",\n                                                                                                                                               \",鰧偵셣싹xᎹ힨᯳EṬH㹖9\": -4604276727380542356\n                                                                                                                                              }\n                                                                                                                                             }\n                                                                                                                                            ]]]],\n                                                                        
                                                                    \"웺㚑xs}q䭵䪠馯8?LB犯zK'os䚛HZ\\\"L?셎s^㿧㴘Cv2\": null\n                                                                                                                                           }]\n                                                                                                                                          ]\n                                                                                                                                         ]\n                                                                                                                                        ],\n                                                                                                                                        \"Kd2Kv+|z\": 7367845130646124107,\n                                                                                                                                        \"ᦂⶨ?ᝢ 祂些ഷ牢㋇操\\\"腭䙾㖪\\\\(y4cE뽺ㆷ쫺ᔖ%zfۻ$ў1柦,㶢9r漢\": -3.133230960444846E-20,\n                                                                                                                                        \"琘M焀q%㢟f鸯O⣏蓑맕鯊$O噷|)z褫^㢦⠮ꚯ꫞`毕1qꢚ{ĭ䎀বώT\\\"뱘3G൴?^^of\": null\n                                                                                                                                       }\n                                                                                                                                      ],\n                                                                                                                                      \"a8V᯺?:ﺃ/8ꉿBq|9啓댚;*i2\": null,\n                                                                                                                                      \"cpT瀇H珰Ừpೃi鎪Rr␣숬-鹸ҩ䠚z脚цGoN8入y%趌I┽2ឪЀiJNcN)槣/▟6S숆牟\\\"箑X僛G殱娇葱T%杻:J諹昰qV쨰\": 8331037591040855245\n                                                         
                                                                            }],\n                                                                                                                                     \"G5ᩜ䄗巢껳\": true\n                                                                                                                                    }\n                                                                                                                                   },\n                                                                                                                                   \"Ồ巢ゕ@_譙A`碫鄐㡥砄㠓(^K\": \"?܃B혢▦@犑ὺD~T⧁|醁;o=J牌9냚⢽㨘{4觍蚔9#$∺\\u0016p囅\\\\3Xk阖⪚\\\"UzA穕롬✎➁㭒춺C㣌ဉ\\\"2瓑员ᅽꝶ뫍}꽚ꞇ鶂舟彺]ꍽJC蝧銉\",\n                                                                                                                                   \"␆Ě膝\\\"b-퉐ACR言J謈53~V튥x䜢?ꃽɄY뮩ꚜ\": \"K/↾e萃}]Bs⾿q룅鷦-膋?m+死^魊镲6\",\n                                                                                                                                   \"粡霦c枋AHퟁo礼Ke?qWcA趸㡔ꂏ?\\u000e춂8iতᦜ婪\\u0015㢼nﵿꍻ!ᐴ関\\u001d5j㨻gfῩUK5Ju丝tかTI'?㓏t>⼟o a>i}ᰗ;뤕ܝ\": false,\n                                                                                                                                   \"ꄮ匴껢ꂰ涽+䜨B蛹H䛓-k蕞fu7kL谖,'涃V~챳逋穞cT\\\"vQ쓕ObaCRQ㓡Ⲯ?轭⫦輢墳?vA餽=h䮇킵n폲퉅喙?\\\"'1疬V嬗Qd灗'Lự\": \"6v!s믁㭟㣯獃!磸餠ቂh0C뿯봗F鷭gꖶ~ｺkK<ᦈTt\\\\跓w㭣횋钘ᆹ듡䑚W䟾X'ꅔ4FL勉Vܴ邨y)2'〚쭉⽵-鞣E,Q.?块\",\n                                                                                                                                   \"?(˧쩯@崟吋歄K\": null\n                                                                                                                                  },\n                                                                                                                                  \"Gc럃녧>?2DYI鴿\\\\륨)澔0ᔬlx'觔7젘⤡縷螩%Sv׫묈/]↱&S h\\u0006歋ᑛxi̘}ひY蔯_醨鯘煑橾8?䵎쨋z儬ꁏ*@츾:\": null\n         
                                                                                                                        }\n                                                                                                                                }\n                                                                                                                               }\n                                                                                                                              ]\n                                                                                                                             ]\n                                                                                                                            ]}\n                                                                                                                           },\n                                                                                                                           \"HO츧G\": 3.694949578823609E17,\n                                                                                                                           \"QC\\u0012(翻曇Tf㷟bGBJ옉53\\\\嚇ᛎD/\\u001b夾၉4\\\"핀@祎)쫆yD\\\"i먎Vn㿿V1W᨝䶀\": -6150931500380982286,\n                                                                                                                           \"Z㓮P翸鍱鉼K䋞꘺튿⭁Y\": -7704503411315138850,\n                                                                                                                           \"]모开ꬖP븣c霤<[3aΠ\\\"黁䖖䰑뮋ꤦ秽∼㑷冹T+YUt\\\"싳F↭䖏&鋌\": -2.7231911483181824E18,\n                                                                                                                           \"tꎖ\": -4.9517948741799555E-19,\n                                                                                                                           \"䋘즊.⬅IꬃۣQ챢ꄑ黐|f?C⾺|兕읯sC鬸섾整腨솷V\": \"旆柩l<K髝M戶鯮t:wR2ꉱ`9'l픪*폍芦㊢Pjjo堡^  
읇얛嶅있ষ0?F\",\n                                                                                                                           \"下9T挞\\\\$yᮇk쌋⼇,ਉ\": true,\n                                                                                                                           \"櫨:ㆣ,邍lr崕祜㐮烜Z,XXD蕼㉴ kM꯽?P0﹉릗\": null,\n                                                                                                                           \"gv솠歽閘4镳䗄2澾>쪦sᖸMy㦅울썉瘗㎜檵9ꍂ駓ૉᚿ/u3씅徐拉[Z䞸ࡗ1ꆱ&Ｑ풘?ǂ8\\u0011BCDY2볨;鸏\": null,\n                                                                                                                           \"幫 n煥s쁇펇 왊-$C\\\"衝:\\u0014㣯舼.3뙗Yl⋇\\\"K迎멎[꽵s}9鉳UK8쐥\\\"掄㹖h㙈!얄સ?Ꜳ봺R伕UTD媚Ｉ䜘W鏨蔮\": -4.150842714188901E-17,\n                                                                                                                           \"ﺯ^㄄\\b죵@fྉkf颡팋Ꞧ{/Pm0V둳⻿/落韒ꊔᚬ@5螺G\\\\咸a谆⊪ቧ慷绖?财(鷇u錝F=r၍橢ឳn:^iᴵtD볠覅N赴\": null\n                                                                                                                          }]\n                                                                                                                         }]\n                                                                                                                        }\n                                                                                                                       ]\n                                                                                                                      ]}\n                                                                                                                     ]},\n                                                                                                                     \"謯?w厓奰T李헗聝ឍ貖o⪇弒L!캶$ᆅ\": -4299324168507841322,\n                                                                                                                     
\"뺊奉_垐浸延몏孄Z舰2i$q붿좾껇d▵餏\\\"v暜Ҭ섁m￴g>\": -1.60911932510533427E18\n                                                                                                                    }\n                                                                                                                   ]\n                                                                                                                  }\n                                                                                                                 ]\n                                                                                                                ]],\n                                                                                                                \"퉝꺔㠦楶Pꅱ\": 7517896876489142899,\n                                                                                                                \"\": false\n                                                                                                               }\n                                                                                                              ]},\n                                                                                                              \"是u&I狻餼|谖j\\\"7c됮sסּ-踳鉷`䣷쉄_A艣鳞凃*m⯾☦椿q㎭N溔铉tlㆈ^\": 1.93547720203604352E18,\n                                                                                                              \"kⲨ\\\\%vr#\\u000bⒺY\\\\t<\\/3﬌R訤='﹠8蝤Ꞵ렴曔r\": false\n                                                                                                             }\n                                                                                                            ]},\n                                                                                                            \"阨{c?C\\u001d~K?鎌Ԭ8烫#뙣P초遗t㭱E­돒䆺}甗[R*1!\\\\~h㕅᰺@<9JꏏષI䳖栭6綘걹ￌM\\\"▯是∔v鬽顭⋊譬\": \"운ﶁK敂(欖C취پ℄爦賾\"\n                                                                  
                                         }\n                                                                                                          }}\n                                                                                                         }],\n                                                                                                         \"鷨赼鸙+\\\\䭣t圙ڹx᜾ČN<\\/踘\\\"S_맶a鷺漇T彚⎲i㈥LT-xA캔$\\u001cUH=a0츺l릦\": \"溣㣂0濕=鉵氬駘>Pꌢpb솇쬤h힊줎獪㪬CrQ矠a&脍꼬爼M茴/΅\\u0017弝轼y#Ꞡc6둴=?R崏뷠麖w?\"\n                                                                                                        },\n                                                                                                        \"閕ᘜ]CT)䵞l9z'xZF{:ؐI/躅匽졁:䟇AGF૸\\u001cퟗ9)駬慟ꡒꆒRS״툋A<>\\u0010\\\"ꂔ炃7g덚E৏bꅰ輤]o㱏_뷕ܘ暂\\\"u\": \"芢+U^+㢩^鱆8*1鈶鮀\\u0002뺰9⬳ꪮlL䃣괟,G8\\u20a8DF㉪錖0ㄤ瓶8Nଷd?眡GLc陓\\\\_죌V쁰ल二?c띦捱 \\u0019JC\\u0011b⤉zẒT볕\\\"绣蘨뚋cꡉkI\\u001e鳴\",\n                                                                                                        \"ꃣI'{6u^㡃#཰Kq4逹y൒䧠䵮!㱙/n??{L풓ZET㙠퍿X2᩟綳跠葿㚙w཮x캽扳B唕S|尾}촕%N?o䪨\": null,\n                                                                                                        \"ⰴFjෟ셈[\\u0018辷px?椯\\\\1<ﲻ栘ᣁ봢憠뉴p\": -5263694954586507640\n                                                                                                       }\n                                                                                                      ]\n                                                                                                     ]]\n                                                                                                    ]}\n                                                                                                   ]}]\n                                                                                                  ]\n                                                                                                 ],\n       
                                                                                          \"?#癘82禩鋆ꊝty?&\": -1.9419029518535086E-19\n                                                                                                }\n                                                                                               ]\n                                                                                              ]\n                                                                                             ]}\n                                                                                            ]\n                                                                                           ]\n                                                                                          ],\n                                                                                          \"훊榲.|῕戄&.㚏Zꛦ2\\\"䢥ሆ⤢fV_摕婔?≍Fji冀탆꜕i㏬_ẑKᅢ꫄蔻XWc|饡Siẘ^㲦?羡2ぴ1縁ᙅ?쐉Ou\": false\n                                                                                         }]]\n                                                                                        ]}}},\n                                                                                        \"慂뗄卓蓔ᐓ匐嚖/颹蘯/翻ㆼL?뇊,텵<\\\\獷ごCボ\": null\n                                                                                       },\n                                                                                       \"p溉ᑟi짣z:䒤棇r^٫%G9缑r砌롧.물农g?0׼ሩ4ƸO㣥㯄쩞ጩ\": null,\n                                                                                       \"껎繥YxK\\\"F젷쨹뤤1wq轫o?鱑뜀瘊?뎃h灑\\\\ꛣ}K峐^ኖ⤐林ꉓhy\": null\n                                                                                      }\n                                                                                     ],\n                                                                                     \"᱀n肓ㄛ\\\"堻2>m殮'1橌%Ꞵ군=Ӳ鯨9耛<\\/n據0u彘8㬇៩f᏿诙]嚊\": 
\"䋯쪦S럶匏ㅛ#)O`ሀX_鐪渲⛀㨻宅闩➈ꢙஶDR⪍\"\n                                                                                    },\n                                                                                    \"tA썓龇 ⋥bj왎录r땽✒롰;羋^\\\\?툳*┎?썀ma䵳넅U䳆૘〹䆀LQ0\\b疀U~u$M}(鵸g⳾i抦뛹?䤈땚검.鹆?ꩡtⶥGĒ;!ቹHS峻B츪켏f5≺\": 2366175040075384032,\n                                                                                    \"전pJjleb]ួ\": -7.5418493141528422E18,\n                                                                                    \"n.鎖ጲ\\n?,$䪘\": true\n                                                                                   },\n                                                                                   \"欈Ar㉣螵᪚茩?O)\": null\n                                                                                  },\n                                                                                  \"쫸M#x}D秱欐K=侫们丐.KꕾxẠ\\u001e㿯䣛F܍캗qq8꟞ṢFD훎⵳簕꭛^鳜\\u205c٫~⑟~冫ऊ2쫰<\\/戲윱o<\\\"\": true\n                                                                                 },\n                                                                                 \"㷝聥/T뱂\\u0010锕|内䞇x侁≦㭖:M?iM᣿IJe煜dG࣯尃⚩gPt*辂.{磼럾䝪@a\\\\袛?}ᓺB珼\": true\n                                                                                }\n                                                                               }\n                                                                              ]]}]}},\n                                                                              \"tn\\\"6ꫤ샾䄄;銞^%VBPwu묪`Y僑N.↺Ws?3C⤻9唩S䠮ᐴm;sᇷ냞඘B/;툥B?lB∤)G+O9m裢0kC햪䪤\": -4.5941249382502277E18,\n                                                                              \"ᚔt'\\\\愫?鵀@\\\\びꂕP큠<<]煹G-b!S?\\nꖽ鼫,ݛ&頺y踦?E揆릱H}햧캡b@手.p탻>췽㣬ꒅ`qe佭P>ᓂ&?u}毚ᜉ蟶頳졪ᎏzl2wO\": -2.53561440423275936E17\n                                                                             }]}\n                                                           
                 }\n                                                                           ]\n                                                                          ]],\n                                                                          \"潈촒⿂叡\": 5495738871964062986\n                                                                         }\n                                                                        ]]\n                                                                       }\n                                                                      ]\n                                                                     ]}\n                                                                    ]]\n                                                                   ]]\n                                                                  ]}\n                                                                 ]\n                                                                ]},\n                                                                \"ႁq킍蓅R`謈蟐ᦏ儂槐僻ﹶ9婌櫞釈~\\\"%匹躾ɢ뤥>࢟瀴愅?殕节/냔O✬H鲽엢?ᮈੁ⋧d␽㫐zCe*\": 2.15062231586689536E17,\n                                                                \"㶵Ui曚珰鋪ᾼ臧P{䍏䷪쨑̟A뼿T渠誈䏚D1!잶<\\/㡍7?)2l≣穷᛾稝{:;㡹nemיּ訊`G\": null,\n                                                                \"䀕\\\"飕辭p圁f#뫆䶷뛮;⛴ᩍ3灚덏ᰝ쎓⦷詵%᜖Մfs⇫(\\u001e~P|ﭗCⲾផv湟W첋(텪બT<บSꏉ੗⋲X婵i ӵ⇮?L䬇|ꈏ?졸\": 1.548341247351782E-19\n                                                               }\n                                                              ]\n                                                             },\n                                                             \"t;:N\\u0015q鐦Rt缆{ꮐC?஛㷱敪\\\\+鲊㉫㓪몗릙竏(氵kYS\": \"XᰂT?൮ô\",\n                                                             \"碕飦幑|+ 㚦鏶`镥ꁩ B<\\/加륙\": -4314053432419755959,\n                                                             \"秌孳(p!G?V傫%8ሽ8w;5鲗㦙LI檸\\u2098\": \"zG 
N볞䆭鎍흘\\\\ONK3횙<\\/樚立圌Q튅k쩎Ff쁋aׂJK銆ઘ즐狩6༥✙䩜篥CzP(聻駇HHퟲ讃%,ά{렍p而刲vy䦅ክ^톺M楒鍢㹳]Mdg2>䤉洞\",\n                                                             \"踛M젧>忔芿㌜Zk\": 2215369545966507819,\n                                                             \"씐A`$槭頰퍻^U覒\\bG毲aᣴU;8!팲f꜇E⸃_卵{嫏羃X쀳C7뗮m(嚼u N܁谟D劯9]#\": true,\n                                                             \"ﻩ!뵸-筚P᭛}ἰ履lPh?౮ⶹꆛ穉뎃g萑㑓溢CX뾇G㖬A錟]RKaꄘ]Yo+@䘁's섎襠$^홰}F\": null\n                                                            },\n                                                            \"粘ꪒ4HXᕘ蹵.$區\\r\\u001d묁77pPc^y笲Q<\\/ꖶ 訍䃍ᨕG?*\": 1.73773035935040224E17\n                                                           },\n                                                           \"婅拳?bkU;#D矠❴vVN쩆t㜷A풃갮娪a%鮏絪3dAv룒#tm쑬⌛qYwc4|L8KZ;xU⓭㳔밆拓EZ7襨eD|隰ऌ䧼u9Ԣ+]贴P荿\": 2.9628516456987075E18\n                                                          }]}}]\n                                                         ]}\n                                                        }}\n                                                       ]}]\n                                                      ],\n                                                      \"|g翉F*湹̶\\u0005⏐1脉̀eI쩓ᖂ㫱0碞l䴨ꑅ㵽7AtἈ턧yq䳥塑:z:遀ﾼX눔擉)`N3昛oQ셖y-ڨ⾶恢ꈵq^<\\/\": null,\n                                                      \"菹\\\\랓G^璬x৴뭸ゆUS겧﮷Bꮤ ┉銜᯻0%N7}~f洋坄Xꔼ<\\/4妟Vꄟ9:౟곡t킅冩䧉笭裟炂4봋ⱳ叺怊t+怯涗\\\"0㖈Hq\": false,\n                                                      \"졬믟'ﺇফ圪쓬멤m邸QLব䗁愍4jvs翙 ྍ꧀艳H-|\": null,\n                                                      \"컮襱⣱뗠 R毪/鹙꾀%헳8&\": -5770986448525107020\n                                                     }\n                                                    ],\n                                                    \"B䔚bꐻ뙏姓展槰T-똌鷺tc灿᫽^㓟䏀o3o$꘭趙萬I顩)뇭Ἑ䓝\\f@{ᣨ`x3蔛\": null\n                                                   }\n                                                  ]\n                                              
   ]\n                                                }],\n                                                \"⦖扚vWꃱ꥙㾠壢輓{-⎳鹷贏璿䜑bG倛⋐磎c皇皩7a~ﳫU╣Q࠭ꎉS摅姽OW.홌ೞ.\": null,\n                                                \"蚪eVlH献r}ᮏ믠ﰩꔄ@瑄ⲱ\": null,\n                                                \"퀭$JWoꩢg역쁍䖔㑺h&ୢtXX愰㱇?㾫I_6 OaB瑈q裿\": null,\n                                                \"꽦ﲼLyr纛Zdu珍B絟쬴糔?㕂짹䏵e\": \"ḱ\\u2009cX9멀i䶛簆㳀k\"\n                                               }\n                                              ]]]],\n                                              \"(_ꏮg່澮?ᩑyM<艷\\u001aꪽ\\\\庼뙭Z맷㰩Vm\\\\lY筺]3㋲2㌩㄀Eਟ䝵⨄쐨ᔟgङHn鐖⤇놋瓇Q탚單oY\\\"♆臾jHᶈ征ቄ??uㇰA?#1侓\": null\n                                             },\n                                             \"觓^~ሢ&iI띆g륎ḱ캀.ᓡꀮ胙鈉\": 1.0664523593012836E-19,\n                                             \"y詭Gbᔶऽs댁U:杜⤎ϲ쁗⮼D醄诿q뙰I#즧v蔎xHᵿt᡽[**?崮耖p缫쿃L菝,봬ꤦC쯵#=X1瞻@OZc鱗CQTx\": null\n                                            }\n                                           ]\n                                          }}],\n                                          \"剘紁\\u0004\\\\Xn⊠6,တױ;嵣崇}讃iႽ)d1\\\\䔓\": null\n                                         },\n                                         \"脨z\\\"{X,1u찜<'k&@?1}Yn$\\u0015Rd輲ｰa쮂굄+B$l\": true,\n                                         \"諳>*쭮괐䵟Ґ+<箁}빀䅱⡔檏臒hIH脟ꩪC핝ଗP좕\\\"0i<\\/C褻D۞恗+^5?'ꂱ䚫^7}㡠cq6\\\\쨪ꔞꥢ?纖䫀氮蒫侲빦敶q{A煲G\": -6880961710038544266\n                                        }}]\n                                       },\n                                       \"5s⨲JvಽῶꭂᄢI.a৊\": null,\n                                       \"?1q꽏쿻ꛋDR%U娝>DgN乭G\": -1.2105047302732358E-19\n                                      }\n                                     ]\n                                    ]},\n                                    \"qZz`撋뙹둣j碇쁏\\\\ꆥ\\u0018@藴疰Wz)O{F䶛l᷂绘訥$]뮍夻䢋䩇萿獰樧猵⣭j萶q)$꬚⵷0馢W:Ⱍ!Qoe\": -1666634370862219540,\n                                  
  \"t\": \"=wp|~碎Q鬳Ӎ\\\\l-<\\/^ﳊhn퐖}䍔t碵ḛ혷?靻䊗\",\n                                    \"邙쇡㯇%#=,E4勃驆V繚q[Y댻XV㡸[逹ᰏ葢B@u=JS5?bLRn얮㍉⏅ﰳ?a6[&큟!藈\": 1.2722786745736667E-19\n                                   },\n                                   \"X블땨4{ph鵋ꉯ웸 5p簂䦭s_E徔濧d稝~No穔噕뽲)뉈c5M윅>⚋[岦䲟懷恁?鎐꓆ฬ爋獠䜔s{\\u001bm鐚儸煛%bﯿXT>ꗘ@8G\": 1157841540507770724,\n                                   \"媤娪Q杸\\u0011SAyᡈ쿯\": true,\n                                   \"灚^ಸ%걁<\\/蛯<O\\\"-刷㏠R(kO=䢊䅎l䰓팪A絫픧\": \"譔\\\\㚄 ?R7㔪G㋉⣰渆?\\\\#|gN⤴;W칷A׫癮଼ೣ㏳뒜7d恓꾲0扬S0ᆵi/贎ྡn䆋武\",\n                                   \"萇砇Gこ朦켋Wq`㞲攊*冁~霓L剢zI腧튴T繙Cঅ뫬╈뮜ㄾ䦧촄椘B⊬츩r2f㶱厊8eϬ{挚␯OM焄覤\\\\(Kӡ>?\\\"祴坓\\\\\\\\'흍\": -3.4614808555942579E18,\n                                   \"釴U:O湛㴑䀣렑縓\\ta)<D8ﭳ槁髭D.L|xs斋敠\\\"띋早7wᎍ\": true,\n                                   \"쵈+쬎簨up䓬?q+~\\u0019仇뵈᫯3ᵣ恘枰劫㪢u珘-퀭:컙:u`⌿A(9鄦!<珚nj3:Hࣨ巋䀁旸뎈맻v\\\"\\\\(곘vO㤰aZe<\\/W鹙鄜;l厮둝\": null,\n                                   \"\": -1.2019926774977002E-18,\n                                   \"%者O7.Nꪍs梇接z蕜綛<\\/䜭\\\"죊y<曋漵@Ś⹝sD⟓jݗᢜ?z/9ၲMa쨮긗贎8ᔮ㦛;6p뾥໭䭊0B찛+)(Y㿠鸁䕒^옥\": \"鬃뫤&痽舎J콮藐󽸰ᨨMꈫ髿v<N\\\\.삒껅я1ꭼ5䴷5쳬臨wj덥\"\n                                  }],\n                                  \"鷎'㳗@帚妇OAj' 谬f94ǯ(횡ﾋ%io쪖삐좛>(j:숾却䗌gCiB뽬Oyuq輥厁/7)?今hY︺Q\": null\n                                 }\n                                ]\n                               ]]]}]\n                              ],\n                              \"I笔趠Ph!<ཛྷ㸞诘X$畉F\\u0005笷菟.Esr릙!W☆䲖뗷莾뒭U\\\"䀸犜Uo3Gꯌx4r蔇᡹㧪쨢準<䂀%ࡡꟼ瑍8炝Xs0䀝销?fi쥱ꆝલBB\": -8571484181158525797,\n                              \"L⦁o#J|\\\"⽩-㱢d㌛8d\\\\㶤傩儻E[Y熯)r噤὘勇 }\": \"e(濨쓌K䧚僒㘍蠤Vᛸ\\\"络QJL2,嬓왍伢㋒䴿考澰@(㏾`kX$끑эE斡,蜍&~y\",\n                              \"vj.|统圪ᵮPL?2oŶ`밧\\\"勃+0ue%⿥绬췈체$6:qa렐Q;~晘3㙘鹑\": true,\n                              \"ශؙ4獄⶿c︋i⚅:ん閝Ⳙ苆籦kw{䙞셕pC췃ꍬ␜꟯ꚓ酄b힝hwk꭭M鬋8B耳쑘WQ\\\\偙ac'唀x᪌\\u2048*h짎#ፇ鮠뾏ឿ뀌\": false,\n                              \"⎀jꄒ牺3Ⓝ컴~?親ꕽぼܓ喏瘘!@<튋㐌꿱⩦{a?Yv%⪧笯Uܱ栅E搚i뚬:ꄃx7䙳ꦋ&䓹vq☶I䁘ᾘ涜\\\\썉뺌Lr%Bc㍜3?ꝭ砿裞]\": null,\n     
                         \"⭤뙓z(㡂%亳K䌽꫿AԾ岺㦦㼴輞낚Vꦴw냟鬓㹈뽈+o3譻K1잞\": 2091209026076965894,\n                              \"ㇲ\\t⋇轑ꠤ룫X긒\\\"zoY읇희wj梐쐑l侸`e%s\": -9.9240075473576563E17,\n                              \"啸ꮑ㉰!ᚓ}銏\": -4.0694813896301194E18,\n                              \">]囋੽EK뇜>_ꀣ緳碖{쐐裔[<ನ\\\"䇅\\\"5L?#xTwv#罐\\u0005래t应\\\\N?빗;\": \"v쮽瞭p뭃\"\n                             }\n                            ]],\n                            \"斴槾?Z翁\\\"~慍弞ﻆ=꜡o5鐋dw\\\"?K蠡i샾ogDﲰ_C*⬟iㇷ4nય蟏[㟉U꽌娛苸 ঢ়操贻洞펻)쿗૊許X⨪VY츚Z䍾㶭~튃ᵦ<\\/E臭tve猑x嚢\": null,\n                            \"锡⛩<\\/칥ꈙᬙ蝀&Ꚑ籬■865?_>L詏쿨䈌浿弥爫̫lj&zx<\\/C쉾?覯n?\": null,\n                            \"꾳鑤/꼩d=ᘈn挫ᑩ䰬ZC\": \"3錢爋6Ƹ䴗v⪿Wr益G韠[\\u0010屗9쁡钁u?殢c䳀蓃樄욂NAq赟c튒瘁렶Aૡɚ捍\"\n                           }\n                          ]\n                         ]\n                        ]}\n                       ]\n                      ]\n                     }]]]}}\n                    ]}],\n                    \"Ej䗳U<\\/Q=灒샎䞦,堰頠@褙g_\\u0003ꤾfⶽ?퇋!łB〙ד3CC䌴鈌U:뭔咎(Qો臃䡬荋BO7㢝䟸\\\"Yb\": 2.36010731779814E-20,\n                    \"逸'0岔j\\u000e눘먷翌C츊秦=ꭣ棭ှ;鳸=麱$XP⩉駚橄A\\\\좱⛌jqv䰞3Ь踌v㳆¹gT┌gvLB賖烡m?@E঳i\": null\n                   },\n                   \"曺v찘ׁ?&绫O័\": 9107241066550187880\n                  }\n                 ]\n                ],\n                \"(e屄\\u0019昜훕琖b蓘ᬄ0/۲묇Z蘮ဏ⨏蛘胯뢃@㘉8ሪWᨮ⦬ᅳ䅴HI၇쨳z囕陻엣1赳o\": true,\n                \",b刈Z,ၠ晐T솝ŕB⩆ou'퐼≃绗雗d譊\": null,\n                \"a唥KB\\\"ﳝ肕$u\\n^⅄P䟼냉䞸⩪u윗瀱ꔨ#yşs꒬=1|ﲤ爢`t౐튼쳫_Az(Ṋ擬㦷좕耈6\": 2099309172767331582,\n                \"?㴸U<\\/䢔ꯡ阽扆㐤q鐋?f㔫wM嬙-;UV죫嚔픞G&\\\"Cᗍ䪏풊Q\": \"VM7疹+陕枡툩窲}翡䖶8欞čsT뮐}璤:jﺋ鎴}HfA൝⧻Zd#Qu茅J髒皣Y-︴[?-~쉜v딏璮㹚䅊﩯<-#\\u000e걀h\\u0004u抱﵊㼃U<㱷⊱IC進\"\n               },\n               \"숌dee節鏽邺p넱蹓+e罕U\": true\n              }\n             ],\n             \"b⧴룏??ᔠ3ぱ>%郿劃翐ꏬꠛW瞳᫏누躨狀ໄy੽\\\"ីuS=㨞馸k乆E\": \"トz݈^9R䬑<ﮛG<s~<\\/?ⵆᏥ老熷u듷\"\n            }}\n           ]\n          }\n         ]}\n        }\n       }\n      }\n     }},\n     
\"宩j鬅쳜QꝖјy獔Z᭵1v擖}䨿F%cֲ᫺贴m塼딚NP亪\\\"ￋsa뺯ꘓ2:9뛓༂쌅䊈#>Rꨳ\\u000fTT泠纷꽀MR<CBxP񱒫X쇤\": -2.22390568492330598E18,\n     \"?䯣ᄽ@Z鸅->ᴱ纊:㠭볮?%N56%鈕1䗍䜁a䲗j陇=뿻偂衋࿘ᓸ?ᕵZ+<\\/}H耢b䀁z^f$&㝒LkꢳI脚뙛u\": 5.694374481577558E-20\n    }]\n   }\n  ]],\n  \"obj\": {\"key\": \"wrong value\"},\n  \"퓲꽪m{㶩/뇿#⼢&᭙硞㪔E嚉c樱㬇1a綑᝖DḾ䝩\": null\n },\n \"key\": \"6.908319653520691E8\",\n \"z\": {\n  \"6U閆崬밺뀫颒myj츥휘:$薈mY햚#rz飏+玭V㭢뾿愴YꖚX亥ᮉ푊\\u0006垡㐭룝\\\"厓ᔧḅ^Sqpv媫\\\"⤽걒\\\"˽Ἆ?ꇆ䬔未tv{DV鯀Tἆl凸g\\\\㈭ĭ즿UH㽤\": null,\n  \"b茤z\\\\.N\": [[\n   \"ZL:ￄዎ*Y|猫劁櫕荾Oj为1糕쪥泏S룂w࡛Ᏺ⸥蚙)\",\n   {\n    \"\\\"䬰ỐwD捾V`邀⠕VD㺝sH6[칑.:醥葹*뻵倻aD\\\"\": true,\n    \"e浱up蔽Cr෠JK軵xCʨ<뜡癙Y獩ｹ齈X/螗唻?<蘡+뷄㩤쳖3偑犾&\\\\첊xz坍崦ݻ鍴\\\"嵥B3㰃詤豺嚼aqJ⑆∥韼@\\u000b㢊\\u0015L臯.샥\": false,\n    \"l?Ǩ喳e6㔡$M꼄I,(3᝝縢,䊀疅뉲B㴔傳䂴\\u0088㮰钘ꜵ!ᅛ韽>\": -5514085325291784739,\n    \"o㮚?\\\"춛㵉<\\/﬊ࠃ䃪䝣wp6ἀ䱄[s*S嬈貒pᛥ㰉'돀\": [{\n     \"(QP윤懊FI<ꃣ『䕷[\\\"珒嶮?%Ḭ壍಻䇟0荤!藲끹bd浶tl\\u2049#쯀@僞\": {\"i妾8홫\": {\n      \",M맃䞛K5nAㆴVN㒊햬$n꩑&ꎝ椞阫?/ṏ세뉪1x쥼㻤㪙`\\\"$쟒薟B煌܀쨝ଢ଼2掳7㙟鴙X婢\\u0002\": \"Vዉ菈᧷⦌kﮞఈnz*<?੃'ahhCFX(\\u0007⮊E㭍䱾Gxꥩr❣.洎\",\n      \"뻴5bDD큯O傆盓왻U?ꞅꐊN鐭᧢τ\\\"迳豲8\\u001b䃥ꂻ䴺ྸH筴,\": {\n       \"\\\"L鸔SE㬡XV&~͎'놅蔞눶l匛?'.K氁\\\\ƢẨ疇mΊ'꽳&!鹠m'|{P痊 秄쒿u\\u00111䋧gϩx7t丗D䊨䠻z0.A0\": -1.50139930144708198E18,\n       \"8鋂뛷?첒B☚>﷜FM\\\"荭7ꍀ-VR<\\/';䁙E9$䩉\\f @s?퍪o3^衴cඎ䧪aK鼟ｑ䆨c{䳠5mᒲՙ蘹ᮩ\": {\n        \"F㲷JGo⯍P덵x뒳p䘧☔\\\"+ꨲ吿JfR㔹)4n紬G练Q፞!C|\": true,\n        \"p^㫮솎oc.೚A㤠??r\\u000f)⾽⌲們M2.䴘䩳:⫭胃\\\\፾@Fᭌ\\\\K\": false,\n        \"蟌Tk愙潦伩\": {\n         \"a<\\/@ᾛ慂侇瘎\": -7271305752851720826,\n         \"艓藬/>၄ṯ,XW~㲆w\": {\"E痧郶)㜓ha朗!N赻瞉駠uC\\u20ad辠<Ve?폱!Im䁎搄:*s 9諚Prᵾ뒰髶B̌qWA8梸vS⫊⢳{t㺲q㺈랊뮣RqK밢쳪\": [\n          false,\n          {\n           \"\\u000b=>x퓮⣫P1ࠫLMMX'M刼唳됤\": null,\n           \"P쓫晥%k覛ዩIUᇸ滨:噐혲lMR5䋈V梗>%幽u頖\\\\)쟟\": null,\n           \"eg+昉~矠䧞难\\b?gQ쭷筝\\\\eꮠNl{ಢ哭|]Mn銌╥zꖘzⱷ⭤ᮜ^\": [\n            -1.30142114406914976E17,\n            -1.7555215491128452E-19,\n            null,\n            \"渾㨝ߏ牄귛r?돌?w[⚞ӻ~廩輫㼧/\",\n            -4.5737191805302129E18,\n            null,\n            
\"xy࿑M[oc셒竓Ⓔx?뜓y䊦>-D켍(&&?XKkc꩖ﺸᏋ뵞K伕6ী)딀P朁yW揙?훻魢傎EG碸9類៌g踲C⟌aEX舲:z꒸许\",\n            3808159498143417627,\n            null,\n            {\"m試\\u20df1{G8&뚈h홯J<\\/\": {\n             \"3ஸ厠zs#1K7:rᥞoꅔꯧ&띇鵼鞫6跜#赿5l'8{7㕳(b/j\\\"厢aq籀ꏚ\\u0015厼稥\": [\n              -2226135764510113982,\n              true,\n              null,\n              {\n               \"h%'맞S싅Hs&dl슾W0j鿏MםD놯L~S-㇡R쭬%\": null,\n               \"⟓咔謡칲\\u0000孺ꛭx旑檉㶆?\": null,\n               \"恇I転;￸B2Y`z\\\\獓w,놏濐撐埵䂄)!䶢D=ഭ㴟jyY\": {\n                \"$ࡘt厛毣ൢI芁<겿骫⫦6tr惺a\": [\n                 6.385779736989334E-20,\n                 false,\n                 true,\n                 true,\n                 [\n                  -6.891946211462334E-19,\n                  null,\n                  {\n                   \"]-\\\\Ꟑ1/薓❧Ὂ\\\\l牑\\u0007A郃)阜ᇒᓌ-塯`W峬G}SDb㬨Q臉⮻빌O鞟톴첂B㺱<ƈmu챑J㴹㷳픷Oㆩs\": {\n                    \"\\\"◉B\\\"pᶉt骔J꩸ᄇᛐi╰栛K쉷㉯鐩!㈐n칍䟅難>盥y铿e୔蒏M貹ヅ8嘋퀯䉶ጥ㏢殊뻳\\\"絧╿ꉑ䠥?∃蓊{}㣣Gk긔H1哵峱\": false,\n                    \"6.瀫cN䇮F㧺?\\\\椯=ڈT䘆4␘8qv\": -3.5687501019676885E-19,\n                    \"Q?yऴr혴{஀䳘p惭f1ﹸ䅷䕋贲<ྃᄊ繲hq\\\\b|#QSTs1c-7(䵢\\u2069匏絘ꯉ:l毴汞t戀oෟᵶ뮱፣-醇Jx䙬䐁햢0࣫ᡁgrㄛ\": \"\\u0011_xM/蘇Chv;dhA5.嗀绱V爤ﰦi뵲M\",\n                    \"⏑[\\\"ugoy^儣횎~U\\\\섯겜論l2jw஌yD腅̂\\u0019\": true,\n                    \"ⵯɇ䐲᫿࢚!㯢l샅笶戮1꣖0Xe\": null,\n                    \"劅f넀識b宁焊E찓橵G!ʱ獓뭔雩괛\": [{\"p⹣켙[q>燣䍃㞽ᩲx:쓤삘7玑퇼0<\\/q璂ᑁ[Z\\\\3䅵䧳\\u0011㤧|妱緒C['췓Yꞟ3Z鳱雼P錻BU씧U`ᢶg蓱>.1ӧ譫'L_5V䏵Ц\": [\n                     false,\n                     false,\n                     {\"22䂍盥N霂얢<F8꼵7Gసyh뀍g᦭ꄢx硴嬢\\u001a?E괆T|;7犟\\\"Wt%䐩O⨵t&#ᬋK'蜍Ძ揔⾠鲂T멷靃\\u0018䓞cE\": {\"f=䏏츜瞾zw?孡鏣\\\\铀᫞yẆg(\\u0011M6(s2]`ਫ\": [[[{\n                      \"'y몱纣4S@\\\\,i㷯럹Ua充Tᣢ9躘Zଞ쥿䐊s<\\/刎\\\\\\\"뉦-8/\": \"蜑.X0꭛낢륹i젨ꚁ<8?s볕蝡|Q✬᯦@\\\\G㑢屿Mn졾J굤⥟JW뤵苑r쁕툄嵵?⾥O\",\n                      \"^1挲~[n귆誈央碠멪gI洷\": -8214236471236116548,\n                      \"sሣ%娌暡clr蟜㑓2\\u000bS❟_X㨔⚴5~蔷ꀇ|Xu㬖,꤭卹r(g믇쩍%췸앙|栣U\\\\2]䤉+啠菡ꯎT鉹m\\n/`SzDᅼ鞶\": 1.1217523390167132E-19,\n             
         \"u톇=黚\\\\ ꂮ췵L>躰e9⑩_뵜斌n@B}$괻Yᐱ@䧋V\\\"☒-諯cV돯ʠ\": true,\n                      \"Ű螧ᔼ檍鍎땒딜qꄃH뜣<獧ूCY吓⸏>XQ㵡趌o끬k픀빯a(ܵ甏끆୯/6Nᪧ}搚ᆚ짌P牰泱鈷^d꣟#L삀\\\"㕹襻;k㸊\\\\f+\": true,\n                      \"쎣\\\",|⫝̸阊x庿k잣v庅$鈏괎炔k쬪O_\": [\n                       \"잩AzZGz3v愠ꉈⵎ?㊱}S尳௏p\\r2>췝IP䘈M)w|\\u000eE\",\n                       -9222726055990423201,\n                       null,\n                       [\n                        false,\n                        {\"´킮'뮤쯽Wx讐V,6ᩪ1紲aႈ\\u205czD\": [\n                         -930994432421097536,\n                         3157232031581030121,\n                         \"l貚PY䃛5@䭄<nW\\u001e\",\n                         [\n                          3.801747732605161E18,\n                          [\n                           null,\n                           false,\n                           {\n                            \"\": 4.0442013775147072E16,\n                            \"2J[sᡪ㞿|n'#廲꯬乞\": true,\n                            \"B[繰`\\\\㏏a̼㨀偛㽓<\\/꥖ᵈO让\\r43⡩徑ﬓ๨ﮕx:㣜o玐ꉟぢC珵὆ᓞ쇓Qs氯였9駵q혃Ljꂔ<\\/昺+t䐋༻猙c沪~櫆bpJ9UᏐ:칣妙!皗F4㑄탎䕀櫳振讓\": 7.3924182188256287E18,\n                            \"H磵ai委曷n柋T<\\/勿F&:ꣴfU@㿗榻Lb+?퍄sp\\\"᪟~>귻m㎮琸f\": 1.0318894506812084E-19,\n                            \"࢜⩢Ш䧔1肽씮+༎ᣰ闺馺窃䕨8Mƶq腽xc(៯夐J5굄䕁Qj_훨/~価.䢵慯틠퇱豠㼇Qﵘ$DuSp(8Uญ<\\/ಟ룴𥳐ݩ$\": 8350772684161555590,\n                            \"ㆎQ䄾\\u001bpᩭ${[諟^^骴᤮b^ㅥI┧T㉇⾞\\\"绦<AYJ⒃-oF<\\/蛎mm;obh婃ᦢ\": false,\n                            \"䔤䣈?汝.p襟&d㱅\\\\Jᚠ@?O첁ࢽ휔VR蔩|㒢柺\": [[\n                             \"-ꕨ岓棻r@鿆^3~䪤Ѐ狼︌ﹲ\\\\᝸MlE쵠Q+\",\n                             null,\n                             false,\n                             3346674396990536343,\n                             null,\n                             {\n                              \"\": null,\n                              \"/䏨S쨑,&繷㉥8C엮赸3馢|뇲{鄎ꗇqFﶉ雕UD躢?Ꟛအ꽡[hᕱᗅ㦋쭞Mユ茍?L槽암V#성唐%㣕嘵\\\\ڹ(嘏躿&q\": [\n                               -1364715155337673920,\n                               false,\n                       
        -8197733031775379251,\n                               \"E팗鮲JwH\\\\觡܈᜝\\\"+뉞娂N휗v噙၂깼\\u001dD帒l%-斔N\",\n                               -3.844267973858711E-20,\n                               [{\"쬯(褈Q 蟚뿢 /ⱖ㻥\\u0017/?v邘䃡0U.Z1x?鯔V尠8Em<\": [[[\n                                null,\n                                [\n                                 null,\n                                 -5841406347577698873,\n                                 \"킷\\\"S⋄籞繗솸ᵣ浵w쑿ퟗ7nᎏx3앙z㘌쿸I葥覯㬏0ᆝb汆狺뷘ႀnꋋ\",\n                                 -1227911573141158702,\n                                 {\n                                  \"u㉮PᾺV鵸A\\\\g*ࡗ9슟晭+ͧↀ쿅H\\u001c꾣犓}癇恛ᗬ黩䟘X梑鐆e>r䰂f矩'-7䡭桥Dz兔V9谶居㺍ᔊ䩯덲.\\u001eL0ὅㅷ釣\": [{\n                                   \"<쯬J卷^숞u࠯䌗艞R9닪g㐾볎a䂈歖意:%鐔|ﵤ|y}>;2,覂⶚啵tb*仛8乒㓶B࿠㯉戩oX 貘5V嗆렽낁߼4h䧛ꍺM空\\\\b꿋貼\": 8478577078537189402,\n                                   \"VD*|吝z~h譺aᯒ\": {\n                                    \"YI췢K<\\/濳xNne玗rJo쾘3핰鴊\\\"↱AR:ࢷ\\\"9?\\\"臁說)?誚ꊏe)_D翾W?&F6J@뺾ꍰNZ醊Z쾈വH嶿?炫㷱鬰M겈<bS}㎥l|刖k\": {\"H7鷮퇢_k\": [\n                                     true,\n                                     \"s㟑瀭좾쮀⑁Y찺k맢戲쀸俻ກ6儮끗扖puߖꜻ馶rꈞ痘?3ྚ畊惘䎗\\\"vv)*臔웅鿈䧲^v,껛㰙J <ᚶ5\",\n                                     7950276470944656796,\n                                     4.9392301536234746E17,\n                                     -4796050478201554639,\n                                     \"yꬴc<3㻚\",\n                                     \"o塁\\u20a4蒵鮬裢CᴧnB㭱f.\",\n                                     false,\n                                     [\n                                      false,\n                                      \"㡐弑V?瀆䰺q!출㇞yᘪ꼼(IS~Ka 烿ꟿ샕桤\\u0005HQҹ㯪罂q萾⚇懋⦕둡v\",\n                                      1862560050083946970,\n                                      \"\\u20b6[|(뭹gꍒ펉O轄Dl묽]ﯨ髯QEbA㒾m@롴礠㕓2땫n6ْ엘঵篳R잷꙲m색摪|@㿫5aK设f胭r8/NI4춫栵\\\\꯬2]\",\n                                      false,\n                                      {\n                      
                 \"\\u000b7*㙛燏.~?䔊p搕e_拺艿뷍f{ꔻ1s驙`$Ė戧?q⋬沭?塷᭚蹀unoa5\": {\n                                        \"S귯o紞㾕ᅶ侏銇12|ʟ畴iNAo?|Sw$M拲գ㭄紧螆+,梔\": null,\n                                        \"㭚0?xB疱敻ேBPwv뾃熉(ӠpJ]갢\\\"Bj'\\u0016GE椱<\\/zgៅx黢礇h},M9ﴦ?LḨ\": \"Si B%~㬒E\",\n                                        \"핇㉊살㍢숨~ȪRo䦅D桺0z]﬽蠆c9ᣨyPP㿷U~㞐?쯟퍸宒뉆U|}㉓郾ࣻ*櫎꼪䁗s?~7\\u001e㘔h9{aឋ}:㶒P8\": [{\"\\\\R囡쐬nN柋琍؛7칾 :㶃衇徜V 深f1淍♠i?3S角폞^ᆞ\\u20e8ṰD\\u0007秡+躒臔&-6\": {\n                                         \"䨑g.fh㔗=8!\\\"狿ൻLU^뻱g䲚㻐'W}k欤?๒鲇S꧗䫾$ĥ피\": -794055816303360636,\n                                         \"外頮詋~텡竆繃䏩苨뾺朁꼃瘹f*㉀枙NH/\\u2027ꢁ}j묎vペq︉식뜡Od5 N顯ି烅仟Qfㆤ嚢(i䬅c;맧?嶰㩼츱獡?-\": {\n                                          \"e݆㍡⬬'2㻒?U篲鿄\\\"隻Ҭ5NꭰꤺBꀈ拾᩺[刯5곑Na램ﴦ዆]㝓qw钄\\u001b\\\"Y洊䗿祏塥迵[⼞⠳P$꠱5먃0轢`\": [{\"獰E賝﫚b먭N긆Ⰹ史2逶ꜛ?H짉~?P}jj}侷珿_T>᭨b,⻁鈵P䕡䀠८ⱄ홎鄣\": {\n                                           \"@?k2鶖㋮\\\"Oರ K㨇廪儲\\u0017䍾J?);\\b*묀㗠섳햭1MC V\": null,\n                                           \"UIICP!BUA`ᢈ㋸~袩㗪⾒=fB﮴l1ꡛ죘R辂여ҳ7쮡<䩲`熕8頁\": 4481809488267626463,\n                                           \"Y?+8먙ᚔ鋳蜩럶1㥔y璜౩`\": [\n                                            null,\n                                            1.2850335807501874E-19,\n                                            \"~V2\",\n                                            2035406654801997866,\n                                            {\n                                             \"<숻1>\\\"\": -8062468865199390827,\n                                             \"M㿣E]}qwG莎Gn᝶(ꔙ\\\\D⬲iꇲs寢t駇S뀡ꢜ\": false,\n                                             \"pꝤ㎏9W%>M;-U璏f(^j1?&RB隧 忓b똊E\": \"#G?C8.躬ꥯ'?냪#< 渟&헿란zpo왓Kj}鷧XﻘMツb䕖;㪻\",\n                                             \"vE풤幉xz뱕쫥Ug㦲aH} ᣟp:鬼Yᰟ<Fɋ잣緂頒⺏䉲瑑䅂,C~ޅG!f熢-B7~9Pqࡢ[츑#3ꕎ,Öඳ聁⩅㵧춀뿍xy䌏͂tdj!箧᳆|9蚡돬\": -2.54467378964089632E17,\n                                             \"䵈䅦5빖,궆-:໿댾仫0ᙚyᦝhqᚄ\": null,\n                                             
\"侯Y\\\"湛졯劇U셎YX灍ⅸ2伴|筧\\\\䁒㶶᷏쁑Waᦵᗱ㜏늾膠<Jc63<G\\u20fe䇹66僣k0O\\\"_@U\": null,\n                                             \"姪y$#s漴JH璌Ӊ脛J㝾펔ﹴoꈶ㚸PD:薠쏖%說ថ蹂1］⾕5튄\": {\n                                              \"᝾Huw3䮅如쿺䍟嫝]<鰨ݷ?꯯䫓傩|ᐶස媽\\\\澒≡闢\": \"Mm\\\"쏇ᯄ졽\\\"楇<\\/ꥆ흭局n隴@鿣w⠊4P贈徎W㊋;䤞'.팇蒁䡴egpx嗎wஅ獗堮ᛐnˁ︖䀤4噙?໚郝᱋ޘॎt恑姫籕殥陃\\\"4[ꝬqL4Wꠎx\",\n                                              \"ℇj遌5B뒚\\\" U\": \"硄ꏘ{憠굏:&t䌨m Cઌ쿣鞛XFꠟs䝭ﶃ\\\"格a0x闊昵吲L\\\\杚聈aꁸj싹獅\\\"灟ﱡ馆*굖糠<ꔏ躎\",\n                                              \"톌賠弳ꟍb\\\"螖X50sĶ晠3f秂坯Iⓟ:萘\": 5.573183333596288E18,\n                                              \"%䴺\": [[[[\n                                               -6957233336860166165,\n                                               false,\n                                               null,\n                                               {\n                                                \"\\\"\\\\௮茒袀ᕥ23ୃ괶?䕎.嚲◉㏞L+ᵡ艱hL콇붆@\": null,\n                                                \"%螥9ꭌ<\\/-t\": true,\n                                                \",9|耢椸䁓Xk죱\\u0015$Ώ鲞[?엢ᝲ혪즈ⴂ▂ℴ㗯\\\"g뺘\\\\ꍜ#\\u0002ヮ}ሎ芲P[鹮轧@냲䃦=#(\": 2.78562909315899616E17,\n                                                \"R?H䧰ⵇ<,憰쮼Q總iR>H3镔ᴚ斦\\\\鏑r*2橱G⼔F/.j\": true,\n                                                \"RK좬뎂a홠f*f㱉ᮍ⦋潙㨋Gu곌SGI3I뿐\\\\F',)t`荁蘯囯ﮉ裲뇟쥼_ገ驪▵撏ᕤV\": 1.52738225997956557E18,\n                                                \"^k굲䪿꠹B逤%F㱢漥O披M㽯镞竇霒i꼂焅륓\\u00059=皫之눃\\u2047娤閍銤唫ၕb<\\/w踲䔼u솆맚,䝒ᝳ'/it\": \"B餹饴is権ꖪ怯ꦂẉဎt\\\"!凢谵⧿0\\\\<=(uL䷍刨쑪>俆揓Cy襸Q힆䆭涷<\\/ᐱ0ɧ䗾䚹\\\\ኜ?ꄢᇘ`䴢{囇}᠈䴥X4퓪檄]ꥷ/3謒ሴn+g騍X\",\n                                                \"GgG꽬[(嫓몍6\\u0004궍宩㙻/>\\u0011^辍dT腪hxǑ%ꊇk,8(W⧂結P鬜O\": [{\n                                                 \"M㴾c>\\\\ᓲ\\u0019V{>ꤩ혙넪㭪躂TS-痴໸闓⍵/徯O.M㏥ʷD囎⧔쁳휤T??鉬뇙=#ꢫ숣BX䭼<\\/d똬졬g榿)eꨋﯪ좇첻<?2K)\": null,\n                                                 \"Z17縬z]愀䖌 ᾋBCg5딒국憍꾓aⲷ턷u:U촳驿?雺楶\\u0001\\u001c{q*ᰗ苑B@k揰z.*蓗7ረIm\\\"Oᱍ@7?_\": true,\n        
                                         \"㺃Z<\": -4349275766673120695,\n                                                 \"휃䠂fa塆ﬃxKe'덬鏗੄뺾w࠾鑎k땢m*႑햞鐮6攊&虜h黚,Y䱳Sﭼ둺pN6\": [\n                                                  false,\n                                                  \"IΎ䣲,\\\"ᬮ˪癘P~Qlnx喁Sᮔ༬˨I珌m䜛酛\\u0003iꐸ㦧cQ帲晼D' \\\\(粋wQcN\\\\뵰跈\",\n                                                  [\n                                                   \"D0\\\\L?M1쥍Kaꏌsd+盌귤憊tz䌣댐בO坂wϢ%ὒgp,Ai⎧ᶆI餾ꦍ棩嘅᳉怴%m]ၶis纖D凜镧o심b U\",\n                                                   {\n                                                    \"?଼\\u0011Rv&^[+匚I趈T媫\\u0010.䥤ᆯ1q僤HydⲰl㒽K'ᅾiౕ豲초딨@\\u0013J'쪪VD౼P4Ezg#8*㋤W馓]c쿯8\": false,\n                                                    \"c/擯X5~JmK䵶^쐎ച|B|u[솝(X뚤6v}W㤘⠛aR弌臌쾭諦eⒷ僡-;㩩⭖ⷴ徆龄갬{䱓ᥩ!﯏⊚ᇨ<v燡露`:볉癮꨽り★Ax7Ꮀ譥~舑\\\\Vꍋ\\\"$)v\": \"e&sFF쬘OBd슊寮f蠛জ봞mn~锆竒G脁\\\"趵G刕䕳&L唽붵<\\/I,X팚B⍥X,kԇҗ眄_慡:U附ᓚA蕧>\\u001a\\u0011\\\";~쓆BH4坋攊7힪\",\n                                                    \"iT:L闞椕윚*滛gI≀Wਟඊ'ꢆ縺뱹鮚Nꩁ᧬蕼21줧\\\\䋯``⍐\\\\㏱鳨\": 1927052677739832894,\n                                                    \"쮁缦腃g]礿Y㬙 fヺSɪ꾾N㞈\": [\n                                                     null,\n                                                     null,\n                                                     {\n                                                      \"!t,灝Y 1䗉罵?c饃호䉂Cᐭ쒘z(즽sZG㬣sഖE4뢜㓕䏞丮Qp簍6EZឪ겛fx'ꩱQ0罣i{k锩*㤴㯞r迎jTⲤ渔m炅肳\": [\n                                                       -3.3325685522591933E18,\n                                                       [{\"㓁5]A䢕1룥BC?Ꙍ`r룔Ⳛ䙡u伲+\\u0001്o\": [\n                                                        null,\n                                                        4975309147809803991,\n                                                        null,\n                                                        null,\n                                                        
{\"T팘8Dﯲ稟MM☻㧚䥧/8ﻥ⥯aXLaH\\\"顾S☟耲ît7fS෉놁뮔/ꕼ䓈쁺4\\\\霶䠴ᩢ<\\/t4?죵>uD5➶༆쉌럮⢀秙䘥\\u20972ETR3濡恆vB? ~鸆\\u0005\": {\n                                                         \"`閖m璝㥉b뜴?Wf;?DV콜\\u2020퍉౓擝宏ZMj3mJ먡-傷뱙yח㸷꥿ ໘u=M읝!5吭L4v\\\\?ǎ7C홫\": null,\n                                                         \"|\": false,\n                                                         \"~Ztᛋ䚘\\\\擭㗝傪W陖+㗶qᵿ蘥ᙄp%䫎)}=⠔6ᮢS湟-螾-mXH?cp\": 448751162044282216,\n                                                         \"\\u209fad놹j檋䇌ᶾ梕㉝bוּ<d䗱:줰M酄\\u0000X#_r獢A饓ꍗُKo_跔?ᪧ嵜鼲<\": null,\n                                                         \"ꆘ)ubI@h@洭Ai㜎䏱k\\u0003?T䉐3间%j6j棍j=❁\\\\U毮ᬹ*8䀔v6cpj⭬~Q꿾뺶펵悡!쩭厝l六㽫6퇓ޭ2>\": {\"?苴ꩠD䋓帘5騱qﱖPF?☸珗顒yU ᡫcb䫎 S@㥚gꮒ쎘泴멖\\\\:I鮱TZ듒ᶨQ3+f7캙\\\"?\\f풾\\\\o杞紟﻽M.⏎靑OP\": [\n                                                          -2.6990368911551596E18,\n                                                          [{\"䒖@<᰿<\\/⽬tTr腞&G%᳊秩蜰擻f㎳?S㵧\\r*k뎾-乢겹隷j軛겷0룁鮁\": {\")DO0腦:춍逿:1㥨่!蛍樋2\": [{\n                                                           \",ꌣf侴笾m๫ꆽ?1?U?\\u0011ꌈꂇ\": {\n                                                            \"x捗甠nVq䅦w`CD⦂惺嘴0I#vỵ} \\\\귂S끴D얾?Ԓj溯\\\"v餄a\": {\n                                                             \"@翙c⢃趚痋i\\u0015OQ⍝lq돆Y0pࢥ3쉨䜩^<8g懥0w)]䊑n洺o5쭝QL댊랖L镈Qnt⪟㒅십q헎鳒⮤眉ᔹ梠@O縠u泌ㄘb榚癸XޔFtj;iC\": false,\n                                                             \"I&뱋゘|蓔䔕측瓯%6ᗻHW\\\\N1貇#?僐ᗜgh᭪o'䗈꽹Rc욏/蔳迄༝!0邔䨷푪8疩)[쭶緄㇈୧ፐ\": {\n                                                              \"B+:ꉰ`s쾭)빼C羍A䫊pMgjdx䐝Hf9᥸W0!C樃'蘿f䫤סи\\u0017Jve? 
覝f둀⬣퓉Whk\\\"஼=չﳐ皆笁BIW虨쫓F廰饞\": -642906201042308791,\n                                                              \"sb,XcZ<\\/m㉹ ;䑷@c䵀s奤⬷7`ꘖ蕘戚?Feb#輜}p4nH⬮eKL트}\": [\n                                                               \"RK鳗z=袤Pf|[,u욺\",\n                                                               \"Ẏᏻ罯뉋⺖锅젯㷻{H䰞쬙-쩓D]~\\u0013O㳢gb@揶蔉|kᦂ❗!\\u001ebM褐sca쨜襒y⺉룓\",\n                                                               null,\n                                                               null,\n                                                               true,\n                                                               -1.650777344339075E-19,\n                                                               false,\n                                                               \"☑lꄆs힨꤇]'uTന⌳농].1⋔괁沰\\\"IWഩ\\u0019氜8쟇䔻;3衲恋,窌z펏喁횗?4?C넁问?ᥙ橭{稻Ⴗ_썔\",\n                                                               \"n?]讇빽嗁}1孅9#ꭨ靶v\\u0014喈)vw祔}룼쮿I\",\n                                                               -2.7033457331882025E18,\n                                                               {\n                                                                \";⚃^㱋x:饬ኡj'꧵T☽O㔬RO婎?향ᒭ搩$渣y4i;(Q>꿘e8q\": \"j~錘}0g;L萺*;ᕭꄮ0l潛烢5H▄쳂ꏒוֹꙶT犘≫x閦웧v\",\n                                                                \"~揯\\u2018c4職렁E~ᑅቚꈂ?nq뎤.:慹`F햘+%鉎O瀜쟏敛菮⍌浢<\\/㮺紿P鳆ࠉ8I-o?#jﮨ7v3Dt赻J9\": null,\n                                                                \"ࣝW䌈0ꍎqC逖,횅c၃swj;jJS櫍5槗OaB>D踾Y\": {\"㒰䵝F%?59.㍈cᕨ흕틎ḏ㋩B=9IېⓌ{:9.yw｝呰ㆮ肒᎒tI㾴62\\\"ዃ抡C﹬B<\\/<EO꽓ᇕu&鋫\\\\禞퐹u꒍.7훯ಶ2䩦͉ᶱf깵ᷣ늎\": [\n                                                                 5.5099570884646902E18,\n                                                                 \"uQN濿m臇<%?谣鮢s]]x0躩慌闋<;( 鋤.0ᠵd1#벘a:Gs?햷'.)ㅴ䞟琯崈FS@O㌛ᓬ抢큌ើ냷쿟툥IZn[惵ꐧ3뙍[&v憙J>촋jo朣\",\n                                                                 [\n                                                                  
-7675533242647793366,\n                                                                  {\"ᙧ呃:[㒺쳀쌡쏂H稈㢤\\u001dᶗGG-{GHྻຊꡃ哸䵬;$?&d\\\\⥬こN圴됤挨-'ꕮ$PU%?冕눖i魁q騎Q\": [\n                                                                   false,\n                                                                   [[\n                                                                    7929823049157504248,\n                                                                    [[\n                                                                     true,\n                                                                     \"Z菙\\u0017'eꕤ᱕l,0\\\\X\\u001c[=雿8蠬L<\\/낲긯W99g톉4ퟋb㝺\\u0007劁'!麕Q궈oW:@X၎z蘻m絙璩귓죉+3柚怫tS捇蒣䝠-擶D[0=퉿8)q0ٟ\",\n                                                                     \"唉\\nFA椭穒巯\\\\䥴䅺鿤S#b迅獘 ﶗ꬘\\\\?q1qN犠pX꜅^䤊⛤㢌[⬛휖岺q唻ⳡ틍\\\"㙙Eh@oA賑㗠y必Nꊑᗘ\",\n                                                                     -2154220236962890773,\n                                                                     -3.2442003245397908E18,\n                                                                     \"Wᄿ筠:瘫퀩?o貸q⊻(᎞KWf宛尨h^残3[U(='橄\",\n                                                                     -7857990034281549164,\n                                                                     1.44283696979059942E18,\n                                                                     null,\n                                                                     {\"ꫯAw跭喀 ?_9\\\"Aty背F=9缉ྦྷ@;?^鞀w:uN㘢Rỏ\": [\n                                                                      7.393662029337442E15,\n                                                                      3564680942654233068,\n                                                                      [\n                                                                       false,\n                                                                       -5253931502642112194,\n                                    
                                   \"煉\\\\辎ೆ罍5⒭1䪁䃑s䎢:[e5}峳ﴱn騎3?腳Hyꏃ膼N潭錖,Yᝋ˜YAၓ㬠bG렣䰣:\",\n                                                                       true,\n                                                                       null,\n                                                                       {\n                                                                        \"⒛'P&%죮|:⫶춞\": -3818336746965687085,\n                                                                        \"钖m<\\/0ݎMtF2Pk=瓰୮洽겎.\": [[\n                                                                         -8757574841556350607,\n                                                                         -3045234949333270161,\n                                                                         null,\n                                                                         {\n                                                                          \"Ꮬr輳>⫇9hU##w@귪A\\\\C 鋺㘓ꖐ梒뒬묹㹻+郸嬏윤'+g<\\/碴,}ꙫ>손;情d齆J䬁ຩ撛챝탹/R澡7剌tꤼ?ặ!`⏲睤\\u00002똥଴⟏\": null,\n                                                                          \"\\u20f2ܹe\\\\tAꥍư\\\\x当뿖렉禛;G檳ﯪS૰3~㘠#[J<}{奲 5箉⨔{놁<\\/釿抋,嚠/曳m&WaOvT赋皺璑텁\": [[\n                                                                           false,\n                                                                           null,\n                                                                           true,\n                                                                           -5.7131445659795661E18,\n                                                                           \"萭m䓪D5|3婁ఞ>蠇晼6nﴺPp禽羱DS<睓닫屚삏姿\",\n                                                                           true,\n                                                                           [\n                                                                            -8759747687917306831,\n                                                                        
    {\n                                                                             \">ⓛ\\t,odKr{䘠?b퓸C嶈=DyEᙬ@ᴔ쨺芛髿UT퓻春<\\/yꏸ>豚W釺N뜨^?꽴﨟5殺ᗃ翐%>퍂ဿ䄸沂Ea;A_\\u0005閹殀W+窊?Ꭼd\\u0013P汴G5썓揘\": 4.342729067882445E-18,\n                                                                             \"Q^즾眆@AN\\u0011Kb榰냎Y#䝀ꀒᳺ'q暇睵s\\\"!3#I⊆畼寤@HxJ9\": false,\n                                                                             \"⿾D[)袨㇩i]웪䀤ᛰMvR<蟏㣨\": {\"v퇓L㪱ꖣ豛톤\\\\곱#kDTN\": [{\n                                                                              \"(쾴䡣,寴ph(C\\\"㳶w\\\"憳2s馆E!n!&柄<\\/0Pꈗſ?㿳Qd鵔\": {\"娇堰孹L錮h嵅⛤躏顒?CglN束+쨣ﺜ\\\\MrH\": {\"獞䎇둃ቲ弭팭^ꄞ踦涟XK錆쳞ឌ`;੶S炥騞ଋ褂B៎{ڒ䭷ᶼ靜pI荗虶K$\": [{\"◖S~躘蒉꫿輜譝Q㽙闐@ᢗ¥E榁iء5┄^B[絮跉ᰥ遙PWi3wㄾⵀDJ9!w㞣ᄎ{듒ꓓb6\\\\篴??c⼰鶹⟧\\\\鮇ꮇ\": [[\n                                                                               654120831325413520,\n                                                                               -1.9562073916357608E-19,\n                                                                               {\n                                                                                \"DC(昐衵ἡ긙갵姭|֛[t\": 7.6979110359897907E18,\n                                                                                \"J␅))嫼❳9Xfd飉j7猬ᩉ+⤻眗벎E鰉Zﾶ63zၝ69}ZᶐL崭ᦥ⡦靚⋛ꎨ~i㨃咊ꧭo䰠阀3C(\": -3.5844809362512589E17,\n                                                                                \"p꣑팱쒬ꎑ뛡Ꙩ挴恍胔&7ᔈ묒4Hd硶훐㎖zꢼ豍㿢aሃ=<\\/湉鵲EӅ%$F!퍶棌孼{O駍਺geu+\": \")\\u001b잓kŀX쩫A밁®ڣ癦狢)扔弒p}k縕ꩋ,䃉tࣼi\",\n                                                                                \"ァF肿輸<솄G-䢹䛸ꊏl`Tqꕗ蒞a氷⸅ᴉ蠰]S/{J왲m5{9.uέ~㕚㣹u>x8U讁B덺襪盎QhVS맅킃i识{벂磄Iහ䙅xZy/抍૭Z鲁-霳V据挦ℒ\": null,\n                                                                                \"㯛|Nꐸb7ⵐb?拠O\\u0014ކ?-(EꞨ4ꕷᄤYᯕOW瞺~螸\\\"욿ќ<u鵵઎⸊倾쑷෻rT⪄牤銱;W殆͢芄ਰ嚝훚샢⊿+㲽\": null,\n                                                                                \"単逆ົ%_맛d)zJ%3칧_릟#95䌨怡\\u001ci턠ॣi冘4赖'ਐ䧐_栔!\": {\n                 
                                                                \"*?2~4㲌᭳쯁ftႷ1#oJ\\b䊇镇됔 \\u2079x䛁㊝ᮂN;穽跖s휇ᣄ홄傷z⸷(霸!3y뺏M쒿햏۽v㳉tở心3黎v쭻 Rp཮Vr~T?&˴k糒븥쩩r*D\": null,\n                                                                                 \"8@~홟ꔘk1[\": -5570970366240640754,\n                                                                                 \"BZt鏦ꡬc餖  s(mᛴ\\u0000◄d腑t84C⟐坯VṊ뉙'噱Ꝕ珽GC顀?허0ꞹ&돇䛭C䷫](\": 2.4303828213012387E-20,\n                                                                                 \"y撔Z외放+}ḑ骈ᙝ&\\u0016`G便2|-e]঳?QF㜹YF\\\"㿒緄햷㈟塚䷦ୀጤlM蘸N㾆▛럪㞂tᕬ镈쇝喠l amcxPnm\\u001a᱋<\\/]_]ﻹ瞧?H\": false,\n                                                                                 \"ፏ氏묢뜚I[♺뽛x?0H봬Wpn꨹Ra䝿쌑{㴂ni祻윸A'y|⺴ᚘ庌9{$恲{톽=m#@6ᨧfgs44陎J#<Ễ쨓瀵❩a୛㷉㙉ܸ◠냔嬯~呄籁羥镳\": false,\n                                                                                 \"㘱{<頬22?IF@곊I겂嶻L᝛D{@r쒂?IAᣧ洪惒誸b徂z췺꾍㠭\\\\刊%禨쌐ⶣ仵\\\\P[:47;<ᇅ<\\/\": {\n                                                                                  \"^U釳-v㢈ꗝ◄菘rᜨi;起kR犺䵫\\u0000锍쁙m-ԙ!lḃ꛸뻾F(W귛y\": \"#ᠺH㸢5v8_洑C\",\n                                                                                  \"䔵໳$ᙠ6菞\\u206e摎q圩P|慍sV4:㜾(I溞I?\": -6569206717947549676,\n                                                                                  \"透Ꞃ緵퇝8 >e㺰\\\"'㌢ƐW\\u0004瞕>0?V鷵엳\": true,\n                                                                                  \"뤥G\\\\迋䠿[庩'꼡\\u001aiᩮV쯁ᳪ䦪Ô;倱ନ뛁誈\": null,\n                                                                                  \"쥹䄆䚟Q榁䎐᢭<\\/2㕣p}HW蟔|䃏꿈ꚉ锳2Pb7㙑Tⅹᵅ\": {\n                                                                                   \"Y?֭$>#cVBꩨ:>eL蒁務\": {\n                                                                                    \"86柡0po 䏚&-捑Ћ祌<\\/휃-G*㶢הּ쩍s㶟餇c걺yu꽎還5*턧簕Og婥SꝐ\": null,\n                                                                                    
\"a+葞h٥ࠆ裈嗫ﵢ5輙퀟ᛜ,QDﹼ⟶Y騠锪E_|x죗j侵;m蜫轘趥?븅w5+mi콛L\": {\n                                                                                     \";⯭ﱢ!买F⽍柤鶂n䵣V㫚墱2렾ELEl⣆\": [\n                                                                                      true,\n                                                                                      -3.6479311868339015E-18,\n                                                                                      -7270785619461995400,\n                                                                                      3.334081886177621E18,\n                                                                                      2.581457786298155E18,\n                                                                                      -6.605252412954115E-20,\n                                                                                      -3.9232347037744167E-20,\n                                                                                      {\n                                                                                       \"B6㊕.k1\": null,\n                                                                                       \"ZAꄮJ鮷ᳱo갘硥鈠䠒츼\": {\n                                                                                        \"ᕅ}럡}.@y陪鶁r業'援퀉x䉴ﵴl퍘):씭脴ᥞhiꃰblﲂ䡲엕8߇M㶭0燋標挝-?PCwe⾕J碻Ᾱ䬈䈥뷰憵賣뵓痬+\": {\"a췩v礗X⋈耓ፊf罅靮!㔽YYᣓw澍33⎔芲F|\\\"䜏T↮輦挑6ᓘL侘?ￆ]덆1R௯✎餘6ꏽ<\\/௨\\\\?q喷ꁫj~@ulq\": {\"嗫欆뾔Xꆹ4H㌋F嵧]ࠎ]㠖1ꞤT<$m뫏O i댳0䲝i\": {\"?෩?\\u20cd슮|ꯆjs{?d7?eNs⢚嫥氂䡮쎱:鑵롟2hJꎒﯭ鱢3춲亄:뼣v䊭諱Yj択cVmR䩃㘬T\\\"N홝*ै%x^F\\\\_s9보zz4淗?q\": [\n                                                                                         null,\n                                                                                         \"?\",\n                                                                                         2941869570821073737,\n                                                                                         
\"{5{殇0䝾g6밖퍋臩綹R$䖭j紋釰7sXI繳漪행y\",\n                                                                                         false,\n                                                                                         \"aH磂?뛡#惇d婅?Fe,쐘+늵䍘\\\"3r瘆唊勐j⳧࠴ꇓ<\\/唕윈x⬌讣䋵%拗ᛆⰿ妴᝔M2㳗必꧂淲?ゥ젯檢<8끒MidX䏒3᳻Q▮佐UT|⤪봦靏⊏\",\n                                                                                         [[{\n                                                                                          \"颉(&뜸귙{y^\\\"P퟉춝Ჟ䮭D顡9=?}Y誱<$b뱣RvO8cH煉＠tk~4ǂ⤧⩝屋SS;J{vV#剤餓ᯅc?#a6D,s\": [\n                                                                                           -7.8781018564821536E16,\n                                                                                           true,\n                                                                                           [\n                                                                                            -2.28770899315832371E18,\n                                                                                            false,\n                                                                                            -1.0863912140143876E-20,\n                                                                                            -6282721572097446995,\n                                                                                            6767121921199223078,\n                                                                                            -2545487755405567831,\n                                                                                            false,\n                                                                                            null,\n                                                                                            -9065970397975641765,\n                                                                                            [\n                         
                                                                    -5.928721243413937E-20,\n                                                                                             {\"6촊\\u001a홯kB0w撨燠룉{绎6⳹!턍贑y▾鱧ժ[;7ᨷ∀*땒䪮1x霆Hᩭ☔\\\"r䝐7毟ᝰr惃3ꉭE+>僒澐\": [\n                                                                                              \"Ta쎩aƝt쵯ⰪVb\",\n                                                                                              [\n                                                                                               -5222472249213580702,\n                                                                                               null,\n                                                                                               -2851641861541559595,\n                                                                                               null,\n                                                                                               4808804630502809099,\n                                                                                               5657671602244269874,\n                                                                                               \"5犲﨣4mᥣ?yf젫꾯|䋬잁$`Iⳉﴷ扳兝,'c\",\n                                                                                               false,\n                                                                                               [\n                                                                                                null,\n                                                                                                {\n                                                                                                 \"DyUIN쎾M仼惀⮥裎岶泭lh扠\\u001e礼.tEC癯튻@_Qd4c5S熯A<\\/＼6U윲蹴Q=%푫汹\\\\\\u20614b[௒C⒥Xe⊇囙b,服3ss땊뢍i~逇PA쇸1\": -2.63273619193485312E17,\n                                                                                                 
\"Mq꺋貘k휕=nK硍뫞輩>㾆~἞ࡹ긐榵l⋙Hw뮢帋M엳뢯v⅃^\": 1877913476688465125,\n                                                                                                 \"ᶴ뻗`~筗免⚽টW˃⽝b犳䓺Iz篤p;乨A\\u20ef쩏?疊m㝀컩뫡b탔鄃ᾈV(遢珳=뎲ିeF仢䆡谨8t0醄7㭧瘵⻰컆r厡궥d)a阄፷Ed&c﯄伮1p\": null,\n                                                                                                 \"⯁w4曢\\\"(欷輡\": \"\\\"M᭫]䣒頳B\\\\燧ࠃN㡇j姈g⊸⺌忉ꡥF矉স%^\",\n                                                                                                 \"㣡Oᄦ昵⫮Y祎S쐐級㭻撥>{I$\": -378474210562741663,\n                                                                                                 \"䛒掷留Q%쓗1*1J*끓헩ᦢ﫫哉쩧EↅIcꅡ\\\\?ⴊl귛顮4\": false,\n                                                                                                 \"寔愆샠5]䗄IH贈=d﯊/偶?ॊn%晥D視N򗘈'᫂⚦|X쵩넽z질tskxDQ莮Aoﱻ뛓\": true,\n                                                                                                 \"钣xp?&\\u001e侉/y䴼~?U篔蘚缣/I畚?Q绊\": -3034854258736382234,\n                                                                                                 \"꺲໣眀)⿷J暘pИfAV삕쳭Nꯗ4々'唄ⶑ伻㷯騑倭D*Ok꧁3b␽_<\\/챣Xm톰ၕ䆄`*fl㭀暮滠毡?\": [\n                                                                                                  \"D男p`V뙸擨忝븪9c麺`淂⢦Yw⡢+kzܖ\\fY1䬡H歁)벾Z♤溊-혰셢?1<-\\u0005;搢Tᐁle\\\\ᛵߓﭩ榩<QF;t=?Qꀞ\",\n                                                                                                  [\n                                                                                                   null,\n                                                                                                   [{\"-췫揲ᬨ墊臸<ࠒH跥 㔭쥃㫯W=z[wধ╌<~yW楄S!⑻h즓lĖN￧篌W듷튗乵᪪템먵Pf悥ᘀk䷭焼\\\\讄r擁鐬y6VF<\\/6랿p)麡ꁠ㪁\\\"pழe\": [\n                                                                                                    \"#幎杴颒嶈)ㄛJ.嶤26_⋌东챯ꠉ⤋ؚ/⏚%秼Q룠QGztﾺ㎷អI翰Xp睔鍜ꨍ\",\n                                                                                                    
{\",T?\": [\n                                                                                                     false,\n                                                                                                     [[\n                                                                                                      true,\n                                                                                                      7974824014498027996,\n                                                                                                      false,\n                                                                                                      [\n                                                                                                       4.3305464880956252E18,\n                                                                                                       {\n                                                                                                        \"᱿W^A]'rᮢ)鏥z餝;Hu\\\\Fk?ﴺ?IG浅-䙧>訝-xJ;巡8깊蠝ﻓU$K\": {\n                                                                                                         \"Vꕡ諅搓W=斸s︪vﲜ츧$)iꡟ싉e寳?ጭムVથ嵬i楝Fg<\\/Z|៪ꩆ-5'@ꃱ80!燱R쇤t糳]罛逇dṌ֣XHiͦ{\": true,\n                                                                                                         \"Ya矲C멗Q9膲墅携휻c\\\\딶G甔<\\/.齵휴\": -1.1456247877031811E-19,\n                                                                                                         \"z#.OO￝J\": -8263224695871959017,\n                                                                                                         \"崍_3夼ᮟ1F븍뽯ᦓ鴭V豈Ь\": [{\n                                                                                                          \"N蒬74\": null,\n                                                                                                          \"yuB?厅vK笗!ᔸcXQ旦컶P-녫mﾵ麟_\": \"1R@ 
톘xa_|﩯遘s槞d!d껀筤⬫薐焵먑D{\\\\6k共倌☀G~AS_D\\\"딟쬚뮥馲렓쓠攥WTMܭ8nX㩴䕅檹E\\u0007ﭨN 2 ℆涐ꥏ꠵3▙玽|됨_\\u2048\",\n                                                                                                          \"恐A C䧩G\": {\":M큣5e들\\\\ꍀ恼ᔄ靸|I﨏$)n\": {\n                                                                                                           \"|U䬫㟯SKV6ꛤ㗮\\bn봻䲄fＸT:㾯쳤'笓0b/ೢC쳖?2浓uO.䰴\": \"ཐ꼋e?``,ᚇ慐^8ꜙNM䂱\\u0001IᖙꝧM'vKdꌊH牮r\\\\O@䊷ᓵ쀆(fy聻i툺\\\"?<\\/峧ࣞ⓺ᤤ쵒߯ꎺ騬?)刦\\u2072l慪y꺜ﲖTj+u\",\n                                                                                                           \"뽫<G;稳UL⸙q2n쵿C396炿J蓡z⣁zဩSOU?<\\/뙍oE큸O鿅෴ꍈEm#\\\"[瑦⤫ᝆgl⡗q8\\\"큘덥係@ᆤ=\\u0001爖羝췀㸩b9\\\\jeqt㟿㮸龾m㳳긄\": {\n                                                                                                            \"9\\\"V霟釜{/o0嫲C咀-饷䈍[녩)\\r䤴tMW\\\\龟ϣ^ي㪙忩䞞N湆Y笕)萨ꖤ誥煽:14⫻57U$擒䲐薡Qvↇ櫲현誧?nஷ6\": {\"l웾䌵.䅋䦝ic碳g[糲Ƿ-ឈᚱ4쑧\\u0004C࿼\\u0018&쬑?멲<\\/fD_檼픃pd쪼n㕊渪V䛉m揈W儅톳뗳䓆7㭽諤T煠Ney?0᪵鈑&\": [\n                                                                                                             false,\n                                                                                                             null,\n                                                                                                             {\n                                                                                                              \"\\r;鼶j᠂꼍RLz~♔9gf?ӡ浐\": -1.4843072575250897E-19,\n                                                                                                              \"&ꊒ\\\"ꋟ䝭E诮ﯚO?SW뒁훪mb旙⎕ᗕ⶙|ᷤ5y4甥\": \"j5|庠t铱?v 횋0\\\"'rxz䃢杺Ɜ!\\u0002\",\n                                                                                                              \"Q ၩ㟧\": {\"Hﬔ\\u2058䪠틙izZㅛ탟H^ﶲA??R6呠Z솋R.࿶g8\": [\n                                                                                                               -8762672252886298799,\n                              
                                                                                 -1.9486830507000208E17,\n                                                                                                               null,\n                                                                                                               -7157359405410123024,\n                                                                                                               null,\n                                                                                                               null,\n                                                                                                               -995856734219489233,\n                                                                                                               \"呧㫹A4!\",\n                                                                                                               null,\n                                                                                                               -1.9105609358624648E-19,\n                                                                                                               5888184370445333848,\n                                                                                                               2.25460605078245E-19,\n                                                                                                               2.5302739297121987E18,\n                                                                                                               \"뢹sbEf捵2丯?뗾耸(Wd띙SବꭖrtU?筤P똙QpbbKqaE$来V웰3i/lK퉜,8︸e= g螓t竦컼?.寋8鵗\",\n                                                                                                               7377742975895263424,\n                                                                                                               2.4218442017790503E-19,\n                                          
                                                                     {\n                                                                                                                \"y꒚ཫ쨘醬킃糟}yTSt䡀⇂뿽4ൢ戰U\": [[\n                                                                                                                 3600537227234741875,\n                                                                                                                 4435474101760273035,\n                                                                                                                 -1.42274517007951795E18,\n                                                                                                                 -5567915915496026866,\n                                                                                                                 null,\n                                                                                                                 null,\n                                                                                                                 [\n                                                                                                                  -3204084299154861161,\n                                                                                                                  {\n                                                                                                                   \"7梧慸憏.a瘎\\u00041U鵮Ck֨d惥耍ⳡY,⭏써E垁FFI鱑ⳬ줢7⧵Bﴠ耘줕햸q컴~*瑍W.떛ࡆ@'᐀+轳\": -961121410259132975,\n                                                                                                                   \"⥅]l黭㣓绶;!!⎃=朼㐿e&ἂ繤C﯀l䝣㌀6TM쑮w懃ꡡ#ᤆ䰓,墼湼゙뽸㲿䧽쫨xᵖ듨<\\/ T0峸iQ:溫脐\\\\\\\"쎪ὴ砇宖^M泼큥➅鈫@ᄟ༩\\u2008⥼\": true,\n                                                                                                                   \"⩐\\\"籽汎P싯鲘蟼sRᐯ䅩\\u0019R(kRᖁ&ಌ 0\\\"鳶!馼YH\": null,\n                                               
                                                                    \"鮼ꚇ싋։刟\\rRLd步Nⴗ5Eࡆ訛갚[I醵NC(郴ṉy5D뤺౳QY壯5苴y훨(W\\\\Cଇ姚C艄깹\\u001c歷㋵ZC᥂\": [\n                                                                                                                    -6806235313106257498,\n                                                                                                                    null,\n                                                                                                                    \"}N⸿讽sꚪ;\\\\p繇j苄䫨\\u20e7%5x?t#\",\n                                                                                                                    {\n                                                                                                                     \"O〗k<墻yV$ఁrs-c1ఌ唪.C7_Yobᦜ褷'b帰mㄑl⌅\": {\"qB뗗擄3隂5뺍櫂䱟e촸P/鏩,3掁ꗩ=冉棓㑉|˞F襴뿴,:㞦<퓂⧙礞♗g뚎ᛩ<\\/뉽ⶳ⸻A?_x2I㽝勒*I홱鍧粿~曟㤙2绥Ly6+썃uu鿜בf큘|歍ࣖÉ\": [\n                                                                                                                      \">hh䈵w>1ⲏ쐭V[ⅎ\\\\헑벑F_㖝⠗㫇h恽;῝汰ᱼ瀖J옆9RR셏vsZ柺鶶툤r뢱橾/ꉇ囦FGm\\\"謗ꉦ⨶쒿⥡%]鵩#ᖣ_蹎 u5|祥?O\",\n                                                                                                                      null,\n                                                                                                                      2.0150326776036215E-19,\n                                                                                                                      null,\n                                                                                                                      true,\n                                                                                                                      false,\n                                                                                                                      true,\n                                                                                                 
                     {\"\\fa᭶P捤WWc᠟f뚉ᬏ퓗ⳀW睹5:HXH=q7x찙X$)모r뚥ᆟ!Jﳸf\": [\n                                                                                                                       -2995806398034583407,\n                                                                                                                       [\n                                                                                                                        6441377066589744683,\n                                                                                                                        \"Mﶒ醹i)Gἦ廃s6몞 KJ౹礎VZ螺费힀\\u0000冺업{谥'꡾뱻:.ꘘ굄奉攼Di᷑K鶲y繈욊阓v㻘}枭캗e矮1c?휐\\\"4\\u0005厑莔뀾墓낝⽴洗ṹ䇃糞@b1\\u0016즽Y轹\",\n                                                                                                                        {\n                                                                                                                         \"1⽕⌰鉟픏M㤭n⧴ỼD#%鐘⊯쿼稁븣몐紧ᅇ㓕ᛖcw嬀~ഌ㖓(0r⧦Q䑕髍ര铂㓻R儮\\\"@ꇱm❈௿᦯頌8}㿹犴?xn잆꥽R\": 2.07321075750427366E18,\n                                                                                                                         \"˳b18㗈䃟柵Z曆VTAu7+㛂cb0﯑Wp執<\\/臋뭡뚋刼틮荋벲TLP预庰܈G\\\\O@VD'鱃#乖끺*鑪ꬳ?Mޞdﭹ{␇圯쇜㼞顄︖Y홡g\": [{\n                                                                                                                          \"0a,FZ\": true,\n                                                                                                                          \"2z̬蝣ꧦ驸\\u0006L↛Ḣ4๚뿀'?lcwᄧ㐮!蓚䃦-|7.飑挴.樵*+1ﮊ\\u0010ꛌ%貨啺/JdM:똍!FBe?鰴㨗0O财I藻ʔWA᫓G쳛u`<\\/I\": [{\n                                                                                                                           \"$τ5V鴐a뾆両環iZp頻යn븃v\": -4869131188151215571,\n                                                                                                                           \"*즢[⦃b礞R◚nΰꕢH=귰燙[yc誘g䆌?ଜ臛\": {\n                                                            
                                                                \"洤湌鲒)⟻\\\\䥳va}PeAMnＮ[\": \"㐳ɪ/(軆lZR,Cp殍ȮN啷\\\"3B婴?i=r$펽ᤐ쀸\",\n                                                                                                                            \"阄R4㒿㯔ڀ69ZᲦ2癁핌噗P崜#\\\\-쭍袛&鐑/$4童V꩑_ZHA澢fZ3\": {\"x;P{긳:G閉:9?活H\": [\n                                                                                                                             \"繺漮6?z犞焃슳\\\">ỏ[Ⳛ䌜녏䂹>聵⼶煜Y桥[泥뚩MvK$4jtﾛ\",\n                                                                                                                             \"E#갶霠좭㦻ୗ먵F+䪀o蝒ba쮎4X㣵 h\",\n                                                                                                                             -335836610224228782,\n                                                                                                                             null,\n                                                                                                                             null,\n                                                                                                                             [\n                                                                                                                              \"r1᫩0>danjY짿bs{\",\n                                                                                                                              [\n                                                                                                                               -9.594464059325631E-23,\n                                                                                                                               1.0456894622831624E-20,\n                                                                                                                               null,\n                                                                                                 
                              5.803973284253454E-20,\n                                                                                                                               -8141787905188892123,\n                                                                                                                               true,\n                                                                                                                               -4735305442504973382,\n                                                                                                                               9.513150514479281E-20,\n                                                                                                                               \"7넳$螔忷㶪}䪪l짴\\u0007鹁P鰚HF銏ZJﳴ/⍎1ᷓ忉睇ᜋ쓈x뵠m䷐窥Ꮤ^\\u0019ᶌ偭#ヂt☆၃pᎍ臶䟱5$䰵&๵分숝]䝈뉍♂坎\\u0011<>\",\n                                                                                                                               \"C蒑貑藁lﰰ}X喇몛;t밿O7/᯹f\\u0015kI嘦<ዴ㟮ᗎZ`GWퟩ瑹࡮ᅴB꿊칈??R校s脚\",\n                                                                                                                               {\n                                                                                                                                \"9珵戬+AU^洘拻ቒy柭床'粙XG鞕᠜繀伪%]hC,$輙?Ut乖Qm떚W8઼}~q⠪rU䤶CQ痗ig@#≲t샌f㈥酧l;y闥ZH斦e⸬]j⸗?ঢ拻퀆滌\": null,\n                                                                                                                                \"畯}㧢J罚帐VX㨑>1ꢶkT⿄蘥㝑o|<嗸層沈挄GEOM@-䞚䧰$만峬輏䠱V✩5宸-揂D'㗪yP掶7b⠟J㕻SfP?d}v㼂Ꮕ'猘\": {\n                                                                                                                                 \"陓y잀v>╪\": null,\n                                                                                                                                 \"鬿L+7:됑Y=焠U;킻䯌잫!韎ஔ\\f\": {\n                                                                                               
                                   \"駫WmGጶ\": {\n                                                                                                                                   \"\\\\~m6狩K\": -2586304199791962143,\n                                                                                                                                   \"ႜࠀ%͑l⿅D.瑢Dk%0紪dḨTI픸%뗜☓s榗኉\\\"?V籄7w髄♲쟗翛歂E䤓皹t ?)ᄟ鬲鐜6C\": {\n                                                                                                                                    \"_췤a圷1\\u000eB-XOy缿請∎$`쳌eZ~杁튻/蜞`塣৙\\\"⪰\\\"沒l}蕌\\\\롃荫氌.望wZ|o!)Hn獝qg}\": null,\n                                                                                                                                    \"kOSܧ䖨钨:಼鉝ꭝO醧S`십`ꓭ쭁ﯢN&Et㺪馻㍢ⅳ㢺崡ຊ蜚锫\\\\%ahx켨|ż劻ꎄ㢄쐟A躊᰹p譞綨Ir쿯\\u0016ﵚOd럂*僨郀N*b㕷63z\": {\n                                                                                                                                     \":L5r+T㡲\": [{\n                                                                                                                                      \"VK泓돲ᮙRy㓤➙Ⱗ38oi}LJቨ7Ó㹡৘*q)1豢⛃e᫛뙪壥镇枝7G藯g㨛oI䄽 孂L缊ꋕ'EN`\": -2148138481412096818,\n                                                                                                                                      \"`⛝ᘑ$(खꊲ⤖ᄁꤒ䦦3=)]Y㢌跨NĴ驳줟秠++d孳>8ᎊ떩EꡣSv룃 쯫أ?#E|᭙㎐?zv:5祉^⋑V\": [\n                                                                                                                                       -1.4691944435285607E-19,\n                                                                                                                                       3.4128661569395795E17,\n                                                                                                                                       \"㐃촗^G9佭龶n募8R厞eEw⺡_ㆱ%⼨D뉄퉠2ꩵᛅⳍ搿L팹Lවn=\\\"慉념ᛮy>!`g!풲晴[/;?[v겁軇}⤳⤁핏∌T㽲R홓遉㓥\",\n                                                       
                                                                                \"愰_⮹T䓒妒閤둥?0aB@㈧g焻-#~跬x<\\/舁P݄ꐡ=\\\\׳P\\u0015jᳪᢁq;㯏l%᭗;砢觨▝,謁ꍰGy?躤O黩퍋Y㒝a擯\\n7覌똟_䔡]fJ晋IAS\",\n                                                                                                                                       4367930106786121250,\n                                                                                                                                       -4.9421193149720582E17,\n                                                                                                                                       null,\n                                                                                                                                       {\n                                                                                                                                        \";ﾸ똾柉곟ⰺKpፇ䱻ฺ䖝{o~h!ｅꁿ઻욄ښ\\u0002y?xUd\\u207c悜ꌭ\": [\n                                                                                                                                         1.6010824122815255E-19,\n                                                                                                                                         [\n                                                                                                                                          \"宨︩9앉檥pr쇷?WxLb\",\n                                                                                                                                          \"氇9】J玚\\u000f옛呲~ 輠1D嬛,*mW3?n휂糊γ虻*ᴫ꾠?q凐趗Ko↦GT铮\",\n                                                                                                                                          \"㶢ថmO㍔k'诔栀Z蛟}GZ钹D\",\n                                                                                                                                          false,\n                                                                              
                                                            -6.366995517736813E-20,\n                                                                                                                                          -4894479530745302899,\n                                                                                                                                          null,\n                                                                                                                                          \"V%᫡II璅䅛䓎풹ﱢ/pU9se되뛞x梔~C)䨧䩻蜺(g㘚R?/Ự[忓C뾠ࢤc왈邠买?嫥挤풜隊枕\",\n                                                                                                                                          \",v碍喔㌲쟚蔚톬៓ꭶ\",\n                                                                                                                                          3.9625444752577524E-19,\n                                                                                                                                          null,\n                                                                                                                                          [\n                                                                                                                                           \"kO8란뿒䱕馔b臻⍟隨\\\"㜮鲣Yq5m퐔<u뷆c譆\\u001bN?<\",\n                                                                                                                                           [{\n                                                                                                                                            \";涉c蒀ᴧN䘱䤳 ÿꭷ,핉dSTDB>K#ꢘug㼈ᝦ=P^6탲@䧔%$CqSw铜랊0&m⟭<\\/a逎ym\\u0013vᯗ\": true,\n                                                                                                                                            \"洫`|XN뤮\\u0018詞=紩鴘_sX)㯅鿻Ố싹\": 7.168252736947373E-20,\n                                                                    
                                                                        \"ꛊ饤ﴏ袁(逊+~⽫얢鈮艬O힉7D筗S곯w操I斞᠈븘蓷x\": [[[[\n                                                                                                                                             -7.3136069426336952E18,\n                                                                                                                                             -2.13572396712722688E18,\n                                                                                                                                             {\n                                                                                                                                              \"硢3㇩R:o칢行E<=\\u0018ၬYuH!\\u00044U%卝炼2>\\u001eSi$⓷ꒈ'렢gᙫ番ꯒ㛹럥嶀澈v;葷鄕x蓎\\\\惩+稘UEᖸﳊ㊈壋N嫿⏾挎,袯苷ኢ\\\\x|3c\": 7540762493381776411,\n                                                                                                                                              \"?!*^ᢏ窯?\\u0001ڔꙃw虜돳FgJ?&⨫*uo籤:?}ꃹ=ٴ惨瓜Z媊@ત戹㔏똩Ԛ耦Wt轁\\\\枒^\\\\ꩵ}}}ꀣD\\\\]6M_⌫)H豣:36섘㑜\": {\n                                                                                                                                               \";홗ᰰU஋㙛`D왔ཿЃS회爁\\u001b-㢈`봆?盂㛣듿ᦾ蒽_AD~EEຆ㊋(eNwk=Rɠ峭q\\\"5Ἠ婾^>'ls\\n8QAK<l_⭨穟\": [\n                                                                                                                                                true,\n                                                                                                                                                true,\n                                                                                                                                                {\"ﳷm箅6qⷈ?ﲈ憟b۷⫉἞V뚴少U呡瓴ꉆs~嘵得㌶4XR漊\": [\n                                                                                                                                                 \"폆介fM暪$9K[ㄇ샍큳撦g撟恸jҐF㹹aj bHᘀ踉ꎐＣ粄 
a?\\u000fK즉郝 幨9D舢槷Xh뵎u훩Ꜿ턾ƅ埂P埆k멀{䢹~?D<\\/꼢XR\\u001b〱䝽꼨i㘀ḟ㚺A-挸\",\n                                                                                                                                                 false,\n                                                                                                                                                 null,\n                                                                                                                                                 -1.1710758021294953E-20,\n                                                                                                                                                 3996737830256461142,\n                                                                                                                                                 true,\n                                                                                                                                                 null,\n                                                                                                                                                 -8271596984134071193,\n                                                                                                                                                 \"_1G퉁텑m䮔鰼6멲Nmꇩﬅ쓟튍N许FDj+3^ﶜ⎸\\u0019⤕橥!\\\"s-뾞lz北׸ꍚ랬)?l⻮고i䑰\\u001f䪬\",\n                                                                                                                                                 4.459124464204517E-19,\n                                                                                                                                                 -4.0967172848578447E18,\n                                                                                                                                                 5643211135841796287,\n                                                                                          
                                                       -9.482336221192844E-19,\n                                                                                                                                                 \"౪冏釶9D?s螭X榈枸j2秀v]泌鰚岒聵轀쌶i텽qMbL]R,\",\n                                                                                                                                                 null,\n                                                                                                                                                 [\n                                                                                                                                                  null,\n                                                                                                                                                  {\"M쪊ꯪ@;\\u0011罙ꕅ<e᝺|爑Yⵝ<\\/&ᩎ<腊ሑᮔ੃F豭\": [\n                                                                                                                                                   \"^0࡟1볏P폋ፏ杈F⨥Iꂴ\\\"z磣VⅡ=8퀝2]䢹h1\\u0017{jT<I煛5%D셍S⑙⅏J*샐 巙ດ;᧡䙞\",\n                                                                                                                                                   [{\n                                                                                                                                                    \"'㶡큾鄧`跊\\\"gV[?u᭒Ʊ髷%葉굵a띦N켧Qﯳy%y䩟髒L䯜S䵳r絅肾킂ၐ'ꔦg긓a'@혔যW谁ᝬF栩ŷ+7w鞚\": 6.3544416675584832E17,\n                                                                                                                                                    \"苎脷v改hm쏵|㋊g_ᔐ 뒨蹨峟썎㷸|Ο刢?Gͨ옛-?GꦱIEYUX4?%ꘋᆊ㱺\": -2.8418378709165287E-19,\n                                                                                                                                                    \"誰?(H]N맘]k洳\\\"q蒧蘞!R퐫\\\\(Q$T5N堍⫣윿6|럦속︅ﭗ(\": [\n                                                       
                                                                                              \"峩_\\u0003A瘘?✓[硫䎯ၽuጭ\\\"@Y綅첞m榾=贮9R벿῜Z\",\n                                                                                                                                                     null,\n                                                                                                                                                     \"䰉㗹㷾Iaᝃqcp쓘὾൫Q|ﵓ<\\/ḙ>)- Q䲌mo펹L_칍樖庫9꩝쪹ᘹ䑖瀍aK ?*趤f뭓廝p=磕\",\n                                                                                                                                                     \"哑z懅ᤏ-ꍹux쀭\",\n                                                                                                                                                     [\n                                                                                                                                                      true,\n                                                                                                                                                      3998739591332339511,\n                                                                                                                                                      \"ጻ㙙?᳸aK<\\/囩U`B3袗ﱱ?\\\"/k鏔䍧2l@쿎VZ쨎/6ꃭ脥|B?31+on颼-ꮧ,O嫚m ࡭`KH葦:粘i]aSU쓙$쐂f+詛頖b\",\n                                                                                                                                                      [{\"^<9<箝&絡;%i﫡2攑紴\\\\켉h쓙-柂䚝ven\\u20f7浯-Ꮏ\\r^훁䓚헬\\u000e?\\\\ㅡֺJ떷VOt\": [{\n                                                                                                                                                       \"-௄卶k㘆혐஽y⎱㢬sS઄+^瞥h;ᾷj;抭\\u0003밫f<\\/5Ⱗ裏_朻%*[-撵䷮彈-芈\": {\n                                                                                                                                                        
\"㩩p3篊G|宮hz䑊o곥j^Co0\": [\n                                                                                                                                                         653239109285256503,\n                                                                                                                                                         {\"궲?|\\\":N1ۿ氃NZ#깩:쇡o8킗ࡊ[\\\"됸Po핇1(6鰏$膓}⽐*)渽J'DN<썙긘毦끲Ys칖\": {\n                                                                                                                                                          \"2Pr?Xjㆠ?搮/?㓦柖馃5뚣Nᦼ|铢r衴㩖\\\"甝湗ܝ憍\": \"\\\"뾯i띇筝牻$珲/4ka $匝휴译zbAᩁꇸ瑅&뵲衯ꎀᆿ7@ꈋ'ᶨH@ᠴl+\",\n                                                                                                                                                          \"7뢽뚐v?4^ꊥ_⪛.>pởr渲<\\/⢕疻c\\\"g䇘vU剺dஔ鮥꒚(dv祴X⼹\\\\a8y5坆\": true,\n                                                                                                                                                          \"o뼄B욞羁hr﷔폘뒚⿛U5pꪴfg!6\\\\\\\"爑쏍䢱W<ﶕ\\\\텣珇oI/BK뺡'谑♟[Ut븷亮g(\\\"t⡎有?ꬊ躺翁艩nl F⤿蠜\": 1695826030502619742,\n                                                                                                                                                          \"ۊ깖>ࡹ햹^ⵕ쌾BnN〳2C䌕tʬ]찠?ݾ2饺蹳ぶꌭ訍\\\"◹ᬁD鯎4e滨T輀ﵣ੃3\\u20f3킙D瘮g\\\\擦+泙ၧ 鬹ﯨַ肋7놷郟lP冝{ߒhড়r5,꓋\": null,\n                                                                                                                                                          \"ΉN$y{}2\\\\N﹯ⱙK'8ɜͣwt,．钟廣䎘ꆚk媄_\": null,\n                                                                                                                                                          \"䎥eᾆᝦ읉,Jުn岪㥐s搖謽䚔5t㯏㰳㱊ZhD䃭f絕s鋡篟a`Q鬃┦鸳n_靂(E4迠_觅뷝_宪D(NL疶hL追V熑%]v肫=惂!㇫5⬒\\u001f喺4랪옑\": {\n                                                                                                                                                           
\"2a輍85먙R㮧㚪Sm}E2yꆣꫨrRym㐱膶ᔨ\\\\t綾A☰.焄뙗9<쫷챻䒵셴᭛䮜.<\\/慌꽒9叻Ok䰊Z㥪幸k\": [\n                                                                                                                                                            null,\n                                                                                                                                                            true,\n                                                                                                                                                            {\"쌞쐍\": {\n                                                                                                                                                             \"▟GL K2i뛱iＱ\\\"̠.옛1X$}涺]靎懠ڦ늷?tf灟ݞゟ{\": 1.227740268699265E-19,\n                                                                                                                                                             \"꒶]퓚%ฬK❅\": [{\n                                                                                                                                                              \"(ෛ@Ǯっ䧼䵤[aﾃൖvEnAdU렖뗈@볓yꈪ,mԴ|꟢캁(而첸죕CX4Y믅\": \"2⯩㳿ꢚ훀~迯?᪑\\\\啚;4X\\u20c2襏B箹)俣eỻw䇄\",\n                                                                                                                                                              \"75༂f詳䅫ꐧ鏿 }3\\u20b5'∓䝱虀f菼Iq鈆﨤g퍩)BFa왢d0뮪痮M鋡nw∵謊;ꝧf美箈ḋ*\\u001c`퇚퐋䳫$!V#N㹲抗ⱉ珎(V嵟鬒_b㳅\\u0019\": null,\n                                                                                                                                                              \"e_m@(i㜀3ꦗ䕯䭰Oc+-련0뭦⢹苿蟰ꂏSV䰭勢덥.ྈ爑Vd,ᕥ=퀍)vz뱊ꈊB_6듯\\\"?{㒲&㵞뵫疝돡믈%Qw限,?\\r枮\\\"? 
N~癃ruࡗdn&\": null,\n                                                                                                                                                              \"㉹&'Pfs䑜공j<\\/?|8oc᧨L7\\\\pXᭁ 9᪘\": -2.423073789014103E18,\n                                                                                                                                                              \"䝄瑄䢸穊f盈᥸,B뾧푗횵B1쟢f\\u001f凄\": \"魖⚝2儉j꼂긾껢嗎0ࢇ纬xI4](੓`蕞;픬\\fC\\\"斒\\\")2櫷I﹥迧\",\n                                                                                                                                                              \"ퟯ詔x悝령+T?Bg⥄섅kOeQ큼㻴*{E靼6氿L缋\\u001c둌๶-㥂2==-츫I즃㠐Lg踞ꙂEG貨鞠\\\"\\u0014d'.缗gI-lIb䋱ᎂDy缦?\": null,\n                                                                                                                                                              \"紝M㦁犿w浴詟棓쵫G:䜁?V2ힽ7N*n&㖊Nd-'ຊ?-樹DIv⊜)g䑜9뉂ㄹ푍阉~ꅐ쵃#R^\\u000bB䌎䦾]p.䀳\": [{\"ϒ爛\\\"ꄱ︗竒G䃓-ま帳あ.j)qgu扐徣ਁZ鼗A9A鸦甈!k蔁喙:3T%&㠘+,䷞|챽v䚞문H<\\/醯r셓㶾\\\\a볜卺zE䝷_죤ဵ뿰᎟CB\": [\n                                                                                                                                                               6233512720017661219,\n                                                                                                                                                               null,\n                                                                                                                                                               -1638543730522713294,\n                                                                                                                                                               false,\n                                                                                                                                                               -8901187771615024724,\n                                                                       
                                                                                        [\n                                                                                                                                                                3891351109509829590,\n                                                                                                                                                                true,\n                                                                                                                                                                false,\n                                                                                                                                                                -1.03836679125188032E18,\n                                                                                                                                                                {\n                                                                                                                                                                 \"<?起HCᷭ죎劐莇逰/{gs\\u0014⽛㰾愫tￖ<솞ڢ됌煲膺਻9x닳x࡭Q訽,ᶭඦtt掾\\\"秧㺌d˪䙻꫗:ᭈh4緞痐䤴c뫚떩త<?ᕢ謚6]폛O鰐鋛镠贩赟\\\"<G♷1'\": true,\n                                                                                                                                                                 \"቙ht4ߝBqꦤ+\\u0006멲趫灔)椾\": -1100102890585798710,\n                                                                                                                                                                 \"総兎곇뇸粟F醇;朠?厱楛㶆ⶏ7r⾛o꯬᳡F\\\\머幖 㜦\\f[搦᥽㮣0䕊?J㊳뀄e㔔+?<n↴复\": [\n                                                                                                                                                                  \"4~ꉍ羁\\\\偮(泤叕빜\\u0014>j랎:g曞ѕᘼ}链N\",\n                                                                                                                                            
                      -1.1103819473845426E-19,\n                                                                                                                                                                  true,\n                                                                                                                                                                  [\n                                                                                                                                                                   true,\n                                                                                                                                                                   null,\n                                                                                                                                                                   -7.9091791735309888E17,\n                                                                                                                                                                   true,\n                                                                                                                                                                   {\"}蔰鋈+ꐨ啵0?g*사%`J?*\": [{\n                                                                                                                                                                    \"\\\"2wG?yn,癷BK\\\\龞䑞x?蠢\": -3.7220345009853505E-19,\n                                                                                                                                                                    \";饹়❀)皋`噿焒j(3⿏w>偍5X<np?<줯<Y]捘!J೸UⳂNे7v௸㛃ᄧ톿䨷鯻v焇=烻TQ!F⦰䣣눿K鷚눁'⭲m捠(䚻\": [\n                                                                                                                                                                     \"蹕 淜੃b\\\"+몾ⴕ\",\n                                                                                         
                                                                            null,\n                                                                                                                                                                     35892237756161615,\n                                                                                                                                                                     {\n                                                                                                                                                                      \" 듹㏝)5慁箱&$~:遰쮐<\\/堋?% \\\\勽唅z손帋䘺H髀麡M퇖uz\\u0012m諦d᳤콌樝\\rX싹̡Ო\": -433791617729505482,\n                                                                                                                                                                      \"-j溗ࢵcz!:}✽5ഇ,욨ݏs#덫=南浺^}E\\\\Y\\\\T*뼈cd꺐cۘ뎁䨸됱K䠴㉿恿逳@wf쏢<\\/[L[\": -9.3228549642908109E17,\n                                                                                                                                                                      \"Ms킭u஗%\\\\u⍎/家欲ἅ答㓽/꯳齳|㭘Pr\\\"v<\\/禇䔆$GA䊻˔-:틊[h?倬荤ᾞ৳.Gw\\u000b\": [\n                                                                                                                                                                       \"0宜塙I@䏴蝉\\\\Uy뒅=2<h暒K._贡璐Yi檻_⮵uᐝ㘗聠[f\\u0015힢Hꔮ}጑;誏yf0\\\"\\u20cc?(=q斠➽5ꎾ鞘kⲃ\",\n                                                                                                                                                                       -2.9234211354411E-19,\n                                                                                                                                                                       false,\n                                                                                                                                                                       true,\n                
                                                                                                                                                       {\n                                                                                                                                                                        \"\\u0011⟴GH_;#怵:\\u001c\\u0002n1U\\\\p/왔(┫]hꐚ7\\r0䵷첗岷O௷?㝎[殇|J=?韷pᶟ儜犆?5კ1kꍖiH竧뛈ପdmk游y(콪팱꾍k慧 y辣\": [\n                                                                                                                                                                         false,\n                                                                                                                                                                         \"O\\\"끍p覈ykv磂㢠㝵~뀬튍lC&4솎䇃:Mj\",\n                                                                                                                                                                         -7.009964654003924E-20,\n                                                                                                                                                                         false,\n                                                                                                                                                                         -49306078522414046,\n                                                                                                                                                                         null,\n                                                                                                                                                                         null,\n                                                                                                                                                                         2160432477732354319,\n                                                                                   
                                                                                      true,\n                                                                                                                                                                         \"4횡h+!踹ꐬP鮄{0&뱥M?샍鞅n㮞ᨹ?쒆毪l'箅^ꚥ頛`e㻨52柳⮙嫪࡟딯a.~䵮1f吘N&zȭL榓ۃ鳠5d㟆M@㣥ӋA΍q0縶$\",\n                                                                                                                                                                         -3.848996532974368E16,\n                                                                                                                                                                         true,\n                                                                                                                                                                         null,\n                                                                                                                                                                         -3.5240055580952525E18,\n                                                                                                                                                                         {\n                                                                                                                                                                          \" vﭷၵ#ce乃5僞?Z D`묨粇ᐔ绠vWL譢u뽀\\\\J|tⓙt№\\\"ꨋnT凮ᒩ蝂篝b騩:䢭Hbv읻峨z㹚T趗햆귣학津XiＹ@ᖥK\": true,\n                                                                                                                                                                          \"!F 醌y䉸W2ꇬ\\u0006/䒏7~%9擛햀徉9⛰+?㌘;ꠓX䇻Dfi뼧쒒\\u0012F謞՝絺+臕kऍLSQ쌁X쎬幦HZ98蒊枳\": \"澤令#\\u001d抍⛳@N搕퀂[5,✄ꘇ~䘷?\\u0011Xꈺ[硸⠘⛯X醪聡x\\u0007쌇MiX/|ﾐ뚁K8䁡W)銀q僞綂蔕E\",\n                                                                                                                                                                          
\"6␲䣖R৞@ငg?<\\/೴x陙Xꈺ崸⠅ᇾ\\\\0X,H쟴셭A稂ힿゝF\\\\쑞\\u0012懦(Aᯕ灭~\\u0001껮X?逊\": 5.7566819207732864E17,\n                                                                                                                                                                          \"[c?椓\": false,\n                                                                                                                                                                          \"k䒇\": 2583824107104166717,\n                                                                                                                                                                          \"꙯N훙㏅ﮒ燣㨊瞯咽jMxby뻭뵫װ[\\\"1畈?ৱL\": \"띣ᔂ魠羓犴ꚃ+|rY\",\n                                                                                                                                                                          \"녺Z?䬝鉉:?ⳙ瘏Cኯ.Vs[釿䨉쐧\\\\\\\\*쵢猒$\\\\y溔^,㑳\": {\"藶꺟\": [{\n                                                                                                                                                                           \"\\\"d훣N2zq]?'檿죸忷篇ﮟ擤m'9!죶≓p뭻\\\\ᇷ\\f퇶_䰸h๐Q嵃訾㘑従ꯦ䞶jL틊r澵Omᾫ!H䱤팼/;|᭺I7슎YhuXi⚼\": -1.352716906472438E-19,\n                                                                                                                                                                           \"M⽇倻5J䂫औ᝔楸#J[Fﹱ쫮W誻bWz?}1\\\"9硪뻶fe\": \"盬:Ѹ砿획땣T凊(m灦呜ﻝR㿎艴䂵h\",\n                                                                                                                                                                           \"R띾k힪CH钙_i苮ⰵoᾨ紑퉎7h؉\\\"柀蝽z0့\\\"<?嘭$蜝?礲7岇槀묡?V钿T⣜v+솒灚ԛ2米mH?>薙婏聿3aFÆÝ\": \"2,ꓴg?_섦_>Y쪥션钺;=趘F~?D㨫\\bX?㹤+>/믟kᠪ멅쬂Uzỵ]$珧`m雁瑊ඖ鯬cꙉ梢f묛bB\",\n                                                                                                                                                                           
\"♽n$YjKiXX*GO贩鏃豮祴遞K醞眡}ꗨv嵎꼷0୸+M菋eH徸J꣆:⼐悥B켽迚㯃b諂\\u000bjꠜ碱逮m8\": [\n                                                                                                                                                                            \"푷᣺ﻯd8ﱖ嬇ភH鹎⡱᱅0g:果6$GQ췎{vᷧYy-脕x偹砡館⮸C蓼ꏚ=軄H犠G谖ES詤Z蠂3l봟hￒ7䦹1GPQG癸숟~[#駥8zQ뛣J소obg,\",\n                                                                                                                                                                            null,\n                                                                                                                                                                            1513751096373485652,\n                                                                                                                                                                            null,\n                                                                                                                                                                            -6.851466660824754E-19,\n                                                                                                                                                                            {\"䩂-⴮2ٰK솖풄꾚ႻP앳1H鷛wmR䗂皎칄?醜<\\/&ࠧ㬍X濬䵈K`vJ륒Q/IC묛!;$vϑ\": {\n                                                                                                                                                                             \"@-ꚗxྐྵ@m瘬\\u0010U絨ﮌ驐\\\\켑寛넆T=tQ㭤L연@脸삯e-:⩼u㎳VQ㋱襗ຓ<Ⅶ䌸cML3+\\u001e_C)r\\\\9+Jn\\\\Pﺔ8蠱檾萅Pq鐳话T䄐I\": -1.80683891195530061E18,\n                                                                                                                                                                             \"ᷭዻU~ཷsgSJ`᪅'%㖔n5픆桪砳峣3獮枾䌷⊰呀\": {\n                                                                                                                                                                              
\"Ş੉䓰邟自~X耤pl7间懑徛s첦5ਕXexh⬖鎥᐀nNr(J컗｜ૃF\\\"Q겮葲놔엞^겄+㈆话〾희紐G'E?飕1f❼텬悚泬먐U睬훶Qs\": false,\n                                                                                                                                                                              \"(\\u20dag8큽튣>^Y{뤋.袊䂓;_g]S\\u202a꽬L;^'#땏bႌ?C緡<䝲䲝断ꏏ6\\u001asD7IK5Wxo8\\u0006p弊⼂ꯍ扵\\u0003`뵂픋%ꄰ⫙됶l囏尛+䗅E쟇\\\\\": [\n                                                                                                                                                                               true,\n                                                                                                                                                                               {\n                                                                                                                                                                                \"\\n鱿aK㝡␒㼙2촹f;`쾏qIࡔG}㝷䐍瓰w늮*粅9뒪ㄊCj倡翑閳R渚MiUO~仨䜶RꙀA僈㉋⦋n{㖥0딿벑逦⥻0h薓쯴Ꝼ\": [\n                                                                                                                                                                                 5188716534221998369,\n                                                                                                                                                                                 2579413015347802508,\n                                                                                                                                                                                 9.010794400256652E-21,\n                                                                                                                                                                                 -6.5327297761238093E17,\n                                                                                                                                                                                 1.11635352494065523E18,\n         
                                                                                                                                                                        -6656281618760253655,\n                                                                                                                                                                                 {\n                                                                                                                                                                                  \"\": \")?\",\n                                                                                                                                                                                  \"TWKLꑙ裑꺔UE俸塑炌Ũ᜕-o\\\"徚#\": {\"M/癟6!oI51ni퐚=댡>xꍨ\\u0004 ?\": {\n                                                                                                                                                                                   \"皭\": {\"⢫䋖>u%w잼<䕏꘍P䋵$魋拝U䮎緧皇Y훂&|羋ꋕ잿cJ䨈跓齳5\\u001a삱籷I꿾뤔S8㌷繖_Yឯ䲱B턼O歵F\\\\l醴o_欬6籏=D\": [\n                                                                                                                                                                                    false,\n                                                                                                                                                                                    true,\n                                                                                                                                                                                    {\"Mt|ꏞD|F궣MQ뵕T,띺k+?㍵i\": [\n                                                                                                                                                                                     7828094884540988137,\n                                                                                                                                       
                                              false,\n                                                                                                                                                                                     {\n                                                                                                                                                                                      \"!༦鯠,&aﳑ>[euJꏽ綷搐B.h\": -7648546591767075632,\n                                                                                                                                                                                      \"-n켧嘰{7挐毄Y,>❏螵煫乌pv醑Q嶚!|⌝責0왾덢ꏅ蛨S\\\\)竰'舓Q}A釡5#v\": 3344849660672723988,\n                                                                                                                                                                                      \"8閪麁V=鈢1녈幬6棉⪮둌\\u207d᚛驉ꛃ'r䆉惏ै|bἧﺢᒙ<=穊强s혧eꮿ慩⌡ \\\\槳W븧J檀C,ᘉ의0俯퀉M;筷ࣴ瓿{늊埂鄧_4揸Nn阼Jੵ˥(社\": true,\n                                                                                                                                                                                      \"o뼀vw)4A뢵(a䵢)p姃뛸\\u000fK#KiQp\\u0005ꅍ芅쏅\": null,\n                                                                                                                                                                                      \"砥$ꥸ┇耽u斮Gc{z빔깎밇\\\\숰\\u001e괷各㶇쵿_ᴄ+h穢p촀Ნ䃬z䝁酳ӂ31xꔄ1_砚W렘G#2葊P \": [\n                                                                                                                                                                                       -3709692921720865059,\n                                                                                                                                                                                       null,\n                                                                                                                   
                                                                    [\n                                                                                                                                                                                        6669892810652602379,\n                                                                                                                                                                                        -135535375466621127,\n                                                                                                                                                                                        \"뎴iO}Z? 馢녱稹ᄾ䐩rSt帤넆&7i騏멗畖9誧鄜'w{Ͻ^2窭외b㑎粖i矪ꦨ탪跣)KEㆹ\\u0015V8[W?⽉>'kc$䨘ᮛ뉻٬M5\",\n                                                                                                                                                                                        1.10439588726055846E18,\n                                                                                                                                                                                        false,\n                                                                                                                                                                                        -4349729830749729097,\n                                                                                                                                                                                        null,\n                                                                                                                                                                                        [\n                                                                                                                                                                                         false,\n                                                                      
                                                                                                                   \"_蠢㠝^䟪/D녒㡋ỎC䒈판\\u0006એq@O펢%;鹐쏌o戥~A[ꡉ濽ỳ&虃᩾荣唙藍茨Ig楡꒻M窓冉?\",\n                                                                                                                                                                                         true,\n                                                                                                                                                                                         2.17220752996421728E17,\n                                                                                                                                                                                         -5079714907315156164,\n                                                                                                                                                                                         -9.960375974658589E-20,\n                                                                                                                                                                                         \"ᾎ戞༒\",\n                                                                                                                                                                                         true,\n                                                                                                                                                                                         false,\n                                                                                                                                                                                         [[\n                                                                                                                                                                                          \"ⶉᖌX⧕홇)g엃⹪x뚐癟\\u0002\",\n                      
                                                                                                                                                                    -5185853871623955469,\n                                                                                                                                                                                          {\n                                                                                                                                                                                           \"L㜤9ợㇶK鐰⋓V뽋˖!斫as|9＂፬䆪?7胜&n薑~\": -2.11545634977136992E17,\n                                                                                                                                                                                           \"O8뀩D}캖q萂6༣㏗䈓煮吽ਆᎼDᣘ폛;\": false,\n                                                                                                                                                                                           \"YTᡅ^L㗎cbY$pᣞ縿#fh!ꘂb삵玊颟샞ဢ$䁗鼒몁~rkH^:닮먖츸륈⪺쒉砉?㙓扫㆕꣒`R䢱B酂?C뇞<5Iޚ讳騕S瞦z\": null,\n                                                                                                                                                                                           \"\\\\RB?`mG댵鉡幐物䵎有5*e骄T㌓ᛪ琾駒Ku\\u001a[柆jUq8⋈5鿋츿myﻗ?雍ux঴?\": 5828963951918205428,\n                                                                                                                                                                                           \"n0晅:黯 xu씪^퓞cB㎊ᬍ⺘٤փ~B岚3㥕擄vᲂ~F?C䶖@$m~忔S왖㲚?챴⊟W#벌{'㰝I䝠縁s樘\\\\X뢻9핡I6菍ㄛ8쯶]wॽ0L\\\"q\": null,\n                                                                                                                                                                                           \"x增줖j⦦t䏢᎙㛿Yf鼘~꫓恄4惊\\u209c\": \"oOhbᤃ᛽z&Bi犑\\\\3B㩬劇䄑oŁ쨅孥멁ຖacA㖫借㞝vg싰샂㐜#譞⢤@k]鋰嘘䜾L熶塥_<\\/⍾屈ﮊ_mY菹t뙺}Ox=w鮮4S1ꐩמּ'巑\",\n                                  
                                                                                                                                                         \"㗓蟵ꂾe蠅匳(JP䗏෸\\u0089耀왲\": [{\n                                                                                                                                                                                            \"ᤃ㵥韎뤽\\r?挥O쯡⇔㞚3伖\\u0005P⋪\\\"D궣QLn(⚘罩䩢Ŏv䤘尗뼤됛O淽鋋闚r崩a{4箙{煷m6〈\": {\n                                                                                                                                                                                             \"l곺1L\": {\n                                                                                                                                                                                              \"T'ਤ?砅|੬Km]䄩\\\"(࿶<\\/6U爢䫈倔郴l2㴱^줣k'L浖L鰄Rp今鎗⒗C얨M훁㡧ΘX粜뫈N꤇輊㌻켑#㮮샶-䍗룲蠝癜㱐V>=\\\\I尬癤t=\": 7648082845323511446,\n                                                                                                                                                                                              \"鋞EP:<\\/_`ၧe混ㇹBd⯢㮂驋\\\\q碽饩跓྿ᴜ+j箿렏㗑yK毢宸p謹h䦹乕U媣\\\\炤\": [[\n                                                                                                                                                                                               \"3\",\n                                                                                                                                                                                               [\n                                                                                                                                                                                                true,\n                                                                                                                                                                                                
3.4058271399411134E-20,\n                                                                                                                                                                                                true,\n                                                                                                                                                                                                \"揀+憱f逮@먻BpW曉\\u001a㣐⎊$n劈D枤㡞좾\\u001aᛁ苔౩闝1B䷒Ṋ݋➐ꀞꐃ磍$t੤_:蘺⮼(#N\",\n                                                                                                                                                                                                697483894874368636,\n                                                                                                                                                                                                [\n                                                                                                                                                                                                 \"vᘯ锴)0訶}䳅⩚0O壱韈ߜ\\u0018*U鍾䏖=䧉뽑单휻ID쿇嘗?ꌸῬ07\",\n                                                                                                                                                                                                 -5.4858784319382006E18,\n                                                                                                                                                                                                 7.5467775182251151E18,\n                                                                                                                                                                                                 -8911128589670029195,\n                                                                                                                                                                                                 -7531052386005780140,\n  
                                                                                                                                                                                               null,\n                                                                                                                                                                                                 [\n                                                                                                                                                                                                  null,\n                                                                                                                                                                                                  true,\n                                                                                                                                                                                                  [[{\n                                                                                                                                                                                                   \"1欯twG<u䝮␽ꇣ_ჟﱴଶ-쪋\\\"?홺k:莝Ꜫ*⺵꽹댅釔좵}P?=9렿46b\\u001c\\\\S?(筈僦⇶爷谰1ྷa\": true,\n                                                                                                                                                                                                   \"TҫJYxڪ\\\\鰔℮혡)m_WVi眪1[71><\\/Q:0怯押殃탷聫사<ỗꕧ蚨䡁nDꌕ\\u001c녬~蓩<N蹑\\\"{䫥lKc혁뫖앺:vⵑ\": \"g槵?\",\n                                                                                                                                                                                                   \"aꨩ뻃싥렌1`롗}Yg>鲃g儊>ꏡl㻿/⑷*챳6㻜W毤緛ﹺᨪ4\\u0013뺚J髬e3쳸䘦伧?恪&{L掾p+꬜M䏊d娘6\": {\n                                                                                                                                                                                 
                   \"2p첼양棜h䜢﮶aQ*c扦v︥뮓kC寵횂S銩&ǝ{O*य़iH`U큅ࡓr䩕5ꄸ?`\\\\᧫?ᮼ?t〟崾훈k薐ì/ｉy꤃뵰z1<\\/AQ#뿩8jJ1z@u䕥\": 1.82135747285215155E18,\n                                                                                                                                                                                                    \"ZdN &=d년ᅆ'쑏ⅉ:烋5&៏ᄂ汎来L㯄固{钧u\\\\㊏튚e摑&t嗄ꖄUb❌?m䴘熚9EW\": [{\n                                                                                                                                                                                                     \"ଛ{i*a(\": -8.0314147546006822E17,\n                                                                                                                                                                                                     \"⫾ꃆY\\u000e+W`௸ \\\"M뒶+\\\\뷐lKE}(NT킶Yj選篒쁶'jNQ硾(똡\\\\\\\"逌ⴍy? IRꜘ὞鄬﨧:M\\\\f⠋Cꚜ쫊ᚴNV^D䕗ㅖἔIao꿬C⍏8\": [\n                                                                                                                                                                                                      287156137829026547,\n                                                                                                                                                                                                      {\n                                                                                                                                                                                                       \"H丞N逕<rO䎗:텕<\\/䶩샌Sd%^ᵯ눐엑者g䖩똭蕮1U驣?Pⰰ\\u001fp(W]67\\u0015﫣6굺OR羸#촐F蒈;嘙i✵@_撶y㤏⤍(:᧗뼢༌朆@⏰㤨ꭲ?-n>⯲\": {\"\": {\n                                                                                                                                                                                                        \"7-;枮阕梒9ᑄZ\": [[[[\n                                                                                                                                  
                                                                       null,\n                                                                                                                                                                                                         {\n                                                                                                                                                                                                          \"\": [[[[\n                                                                                                                                                                                                           -7.365909561486078E-19,\n                                                                                                                                                                                                           2948694324944243408,\n                                                                                                                                                                                                           null,\n                                                                                                                                                                                                           [\n                                                                                                                                                                                                            true,\n                                                                                                                                                                                                            \"荒\\\"并孷䂡쵼9o䀘F\\u0002龬7⮹Wz%厖/*? 
a*R枈㌦됾g뒠䤈q딄㺿$쮸tᶎ릑弣^鏎<\\/Y鷇驜L鿽<\\/춋9Mᲆឨ^<\\/庲3'l낢\",\n                                                                                                                                                                                                            \"c鮦\\u001b두\\\\~?眾ಢu݆綑෪蘛轋◜gȃ<\\/ⴃcpkDt誩܅\\\"Y\",\n                                                                                                                                                                                                            [[\n                                                                                                                                                                                                             null,\n                                                                                                                                                                                                             null,\n                                                                                                                                                                                                             [\n                                                                                                                                                                                                              3113744396744005402,\n                                                                                                                                                                                                              true,\n                                                                                                                                                                                                              \"v(y\",\n                                                                                                                                                                                                  
            {\n                                                                                                                                                                                                               \"AQ幆h쾜O+꺷铀ꛉ練A蚗⼺螔j㌍3꽂楎䥯뎸먩?\": null,\n                                                                                                                                                                                                               \"蠗渗iz鱖w]擪E\": 1.2927828494783804E-17,\n                                                                                                                                                                                                               \"튷|䀭n*曎b✿~杤U]Gz鄭kW|㴚#㟗ഠ8u擨\": [[\n                                                                                                                                                                                                                true,\n                                                                                                                                                                                                                null,\n                                                                                                                                                                                                                null,\n                                                                                                                                                                                                                {\"⾪壯톽g7?㥜ώQꑐ㦀恃㧽伓\\\\*᧰閖樧뢇赸N휶䎈pI氇镊maᬠ탷#X?A+kНM ༑᩟؝?5꧎鰜ṚY즫궔 =ঈ;ﳈ?*s|켦蜌wM笙莔\": [\n                                                                                                                                                                                                                 null,\n                                                                                          
                                                                                                                       -3808207793125626469,\n                                                                                                                                                                                                                 [\n                                                                                                                                                                                                                  -469910450345251234,\n                                                                                                                                                                                                                  7852761921290328872,\n                                                                                                                                                                                                                  -2.7979740127017492E18,\n                                                                                                                                                                                                                  1.4458504352519893E-20,\n                                                                                                                                                                                                                  true,\n                                                                                                                                                                                                                  \"㽙깹?먏䆢:䴎ۻg殠JBTU⇞}ꄹꗣi#I뵣鉍r혯~脀쏃#釯:场:䔁>䰮o'㼽HZ擓௧nd\",\n                                                                                                                                                                                                                  [\n              
                                                                                                                                                                                                     974441101787238751,\n                                                                                                                                                                                                                   null,\n                                                                                                                                                                                                                   -2.1647718292441327E-19,\n                                                                                                                                                                                                                   1.03602824249831488E18,\n                                                                                                                                                                                                                   [\n                                                                                                                                                                                                                    null,\n                                                                                                                                                                                                                    1.0311977941822604E-17,\n                                                                                                                                                                                                                    false,\n                                                                                                                                                                                                         
           true,\n                                                                                                                                                                                                                    {\n                                                                                                                                                                                                                     \"\": -3.7019778830816707E18,\n                                                                                                                                                                                                                     \"E峾恆茍6xLIm縂0n2视֯J-ᤜz+ᨣ跐mYD豍繹⹺䊓몓ﴀE(@詮(!Y膽#᎙2䟓섣A䈀㟎,囪QbK插wcG湎ꤧtG엝x⥏俎j'A一ᯥ뛙6ㅑ鬀\": 8999803005418087004,\n                                                                                                                                                                                                                     \"よ殳\\\\zD⧅%Y泥簳Uꈩ*wRL{3#3FYHା[d岀䉯T稉駅䞘礄P:闈W怏ElB㤍喬赔bG䠼U଄Nw鰯闀楈ePsDꥷ꭬⊊\": [\n                                                                                                                                                                                                                      6.77723657904486E-20,\n                                                                                                                                                                                                                      null,\n                                                                                                                                                                                                                      [\n                                                                                                                                                                                                                       
\"ཚ_뷎꾑蹝q'㾱ꂓ钚蘞慵렜떆`ⴹ⎼櫯]J?[t9Ⓢ !컶躔I᮸uz>3a㠕i,錃L$氰텰@7녫W㸮?羧W뇧ꃞ,N鋮숪2ɼ콏┍䁲6\",\n                                                                                                                                                                                                                       \"&y?뢶=킕올Za惻HZk>c\\u20b58i?ꦶcfBv잉ET9j䡡\",\n                                                                                                                                                                                                                       \"im珊Ճb칧<D-諂*u2ꡜ췛~䬢(텸ﵦ>校\\\\뼾쯀\",\n                                                                                                                                                                                                                       9.555715121193197E-20,\n                                                                                                                                                                                                                       true,\n                                                                                                                                                                                                                       {\n                                                                                                                                                                                                                        \"<㫚v6腓㨭e1㕔&&V∌ᗈT奄5Lጥ>탤?튣瑦㳆ꉰ!(ᙪ㿬擇_n쌯IMΉ㕨␰櫈ᱷ5풔蟹&L.첽e鰷쯃劼﫭b#ﭶ퓀7뷄Wr㢈๧Tʴશ㶑澕鍍%\": -1810142373373748101,\n                                                                                                                                                                                                                        \"fg晌o?߲ꗄ;>C>?=鑰監侯Kt굅\": true,\n                                                                                                                                                                         
                                               \"䫡蓺ꑷ]C蒹㦘\\\"1ః@呫\\u0014NL䏾eg呮፳,r$裢k>/\\\\<z\": [[\n                                                                                                                                                                                                                         null,\n                                                                                                                                                                                                                         \"C䡏>?ㄤᇰﻛ쉕1஥'Ċ\\\" \\\\_?쨔\\\"ʾr: 9S䘏禺ᪧꄂ㲄\",\n                                                                                                                                                                                                                         [[{\n                                                                                                                                                                                                                          \"*硙^+E쌺I1䀖ju?:⦈Ꞓl๴竣迃xKC/饉:\\fl\\\"XTFﾨ蟭,芢<\\/骡軺띜hꏘ\\u001f銿<棔햳▨(궆*=乥b8\\\\媦䷀뫝}닶ꇭ(Kej䤑M\": [{\n                                                                                                                                                                                                                           \"1Ꮼ?>옿I╅C<ގ?ꊌ冉SV5A㢊㶆z-๎玶绢2F뵨@㉌뀌o嶔f9-庒茪珓뷳4\": null,\n                                                                                                                                                                                                                           \";lᰳ\": \"CbB+肻a䄷苝*/볳+/4fq=㰁h6瘉샴4铢Y骐.⌖@哼猎㦞+'gꋸ㒕ߤ㞑(䶒跲ti⑴a硂#No볔\",\n                                                                                                                                                                                                                           \"t?/jE幸YHT셵⩎K!Eq糦ꗣv刴w\\\"l$ο:=6:移\": {\n                                             
                                                                                                                                                                               \"z]鑪醊嫗J-Xm銌翁絨c里됏炙Ep㣋鏣똼嚌䀓GP﹖cmf4鹭T䅿꣭姧␸wy6ꦶ;S&(}ᎧKxᾂQ|t뻳k\\\"d6\\\"|Ml췆hwLt꼼4$&8Պ褵婶鯀9\": {\"嵃닢ᒯ'd᧫䳳#NXe3-붋鸿ଢ떓%dK\\u0013䲎ꖍYV.裸R⍉rR3蟛\\\\:젯:南ĺLʆ넕>|텩鴷矔ꋅⒹ{t孶㓑4_\": [\n                                                                                                                                                                                                                             true,\n                                                                                                                                                                                                                             null,\n                                                                                                                                                                                                                             [\n                                                                                                                                                                                                                              false,\n                                                                                                                                                                                                                              \"l怨콈lᏒ\",\n                                                                                                                                                                                                                              {\n                                                                                                                                                                                                                               \"0w䲏嬧-:`䉅쉇漧\\\\܂yㄨb%㽄j7ᦶ涶<\": 3.7899452730383747E-19,\n      
                                                                                                                                                                                                                         \"ꯛTẀq纤q嶏V⿣?\\\"g}ი艹(쥯B T騠I=仵및X\": {\"KX6颠+&ᅃ^f畒y[\": {\n                                                                                                                                                                                                                                \"H?뱜^?꤂-⦲1a㋞&ꍃ精Ii᤾챪咽쬘唂쫷<땡劈훫놡o㥂\\\\ KⴙD秼F氮[{'좴:례晰Iq+I쭥_T綺砸GO煝䟪ᚪ`↹l羉q쐼D꽁ᜅ훦: vUV\": true,\n                                                                                                                                                                                                                                \"u^yﳍ0㱓#[y뜌앸ꊬL㷩?蕶蘾⻍KӼ\": -7931695755102841701,\n                                                                                                                                                                                                                                \"䤬轉車>\\u001c鴵惋\\\"$쯃྆⇻n뽀G氠S坪]ಲꨍ捇Qxኻ椕駔\\\\9ࣼ﫻읜磡煮뺪ᶚ볝l㕆t+sζ\": [[[\n                                                                                                                                                                                                                                 true,\n                                                                                                                                                                                                                                 false,\n                                                                                                                                                                                                                                 [\n                                                                                                                                                   
                                                                               null,\n                                                                                                                                                                                                                                  3363739578828074923,\n                                                                                                                                                                                                                                  true,\n                                                                                                                                                                                                                                  {\n                                                                                                                                                                                                                                   \"\\\"鸣詩 볰㑵gL㯦῅춝旫}ED辗ﮈI쀤-ꧤ|㠦Z\\\"娑ᕸ4爏騍㣐\\\"]쳝Af]茛⬻싦o蚁k䢯䩐菽3廇喑ޅ\": 4.5017999150704666E17,\n                                                                                                                                                                                                                                   \"TYႇ7ʠ值4챳唤~Zo&ݛ\": false,\n                                                                                                                                                                                                                                   \"`塄J袛㭆끺㳀N㺣`꽐嶥KﯝSVᶔ∲퀠獾N딂X\\\"ᤏhNﬨvI\": {\"\\u20bb㭘I䖵䰼?sw䂷쇪](泒f\\\"~;꼪Fԝsᝦ\": {\"p,'ꉂ軿=A蚶?bƉ㏵䅰諬'LYKL6B깯⋩겦뎙(ᜭ\\u0006噣d꾆㗼Z;䄝䚔cd<情@䞂3苼㸲U{)<6&ꩻ钛\\u001au〷N숨囖愙j=BXW욕^x芜堏Ῑ爂뛷꒻t✘Q\\b\": [[\n                                                                                                                                                                                                                           
         \"籛&ଃ䩹.ꃩ㦔\\\\C颫#暪&!勹ꇶ놽攺J堬镙~軌C'꾖䣹㮅岃ᙴ鵣\",\n                                                                                                                                                                                                                                    4.317829988264744E15,\n                                                                                                                                                                                                                                    6.013585322002147E-20,\n                                                                                                                                                                                                                                    false,\n                                                                                                                                                                                                                                    true,\n                                                                                                                                                                                                                                    null,\n                                                                                                                                                                                                                                    null,\n                                                                                                                                                                                                                                    -3.084633632357326E-20,\n                                                                                                                                                                                                                                    false,\n                
                                                                                                                                                                                                                    null,\n                                                                                                                                                                                                                                    {\n                                                                                                                                                                                                                                     \"\\\"짫愔昻  X\\\"藣j\\\"\\\"먁ཅѻ㘤㬯0晲DU꟒㸃d벀윒l䦾c੻*3\": null,\n                                                                                                                                                                                                                                     \"谈Wm陧阦咟ฯ歖擓N喴㋐銭rCCnVࢥ^♼Ⅾ젲씗刊S༝+_t赔\\\\b䚍뉨ꬫ6펛cL䊘᜼<\\/澤pF懽&H\": [\n                                                                                                                                                                                                                                      null,\n                                                                                                                                                                                                                                      {\n                                                                                                                                                                                                                                       \"W\\\"HDUuΌ퀟M'P4࿰H똆ⰱﮯ<\\/凐蘲\\\"C鴫ﭒж}ꭩ쥾t5yd诪ﮡ퍉ⴰ@?氐醳rj4I6Qt\": 6.9090159359219891E17,\n                                                                                                                                                                                
                                                       \"絛ﳛ⺂\": {\"諰P㗮聦`ZQ?ꫦh*റcb⧱}埌茥h{棩렛툽o3钛5鮁l7Q榛6_g)ὄ\\u0013kj뤬^爖eO4Ⱈ槞鉨ͺ订%qX0T썗嫷$?\\\\\\\"봅늆'%\": [\n                                                                                                                                                                                                                                        -2.348150870600346E-19,\n                                                                                                                                                                                                                                        [[\n                                                                                                                                                                                                                                         true,\n                                                                                                                                                                                                                                         -6619392047819511778,\n                                                                                                                                                                                                                                         false,\n                                                                                                                                                                                                                                         [[\n                                                                                                                                                                                                                                          -1.2929189982356161E-20,\n                                                                                                                        
                                                                                                           \"♘k6?଱癫d68?㽚乳䬳-V顷\\u0005蝕?\\u0018䞊V{邾zじl]雏k臤~ൖH뒐iꢥ]g?.G碄懺䔛p<q꜉S岗_.%\": 7688630934772863849,\n                                                                                                                                                                                                                                                                                 \"溗摽嗙O㧀,⡢⼰呠ꅧ㓲/葇䢛icc@-r\\b渂ꌳ뻨饑觝ᖜ\\\\鮭\\u0014엙㥀᧺@浹W2꛵{W률G溮킀轡䬆g㨑'Q聨៪网Hd\\\"Q늴ᱢﶨ邮昕纚枑?▰hr羌驀[痹<\\/\": [\n                                                                                                                                                                                                                                                                                  -1.0189902027934687E-19,\n                                                                                                                                                                                                                                                                                  {\"窶椸릎뚻shE\\\"ꪗႥꎳU矖佟{SJ\": [{\"-慜x櫹XY-澐ܨ⣷ઢ鯙%Fu\\u0000迋▒}᥷L嗭臖oញc넨\\u0016/迎1b꯸g뢱㐧蓤䒏8C散삭|\\\"컪輩鹩\\\"\\\\g$zG䥽긷?狸꿭扵㲐:URON&oU8\": [\n                                                                                                                                                                                                                                                                                   null,\n                                                                                                                                                                                                                                                                                   true,\n                                                                                                                                                     
                                                                                                                              null,\n                                                                                                                                                                                                                                                                                   -2.8907335031148883E17,\n                                                                                                                                                                                                                                                                                   -3864019407187144121,\n                                                                                                                                                                                                                                                                                   {\n                                                                                                                                                                                                                                                                                    \"`빬d⵺4H뜳⧈쓑ohஸ*㶐ﻇ⸕䠵!i䝬﹑h夘▥ꗐ푹갇㵳TA鳠嚵\\\\B<X}3訒c⋝{*﫢w]璨-g捭\\\\j໵侠Ei层\\u0011\": 3.758356090089446E-19,\n                                                                                                                                                                                                                                                                                    \"䄘ﮐ)Y놞씃㾱陰큁:{\\u2059/S⓴\": [[\n                                                                                                                                                                                                                                                                                     null,\n                    
                                                                                                                                                                                                                                                                 [[\n                                                                                                                                                                                                                                                                                      -3.8256602120220546E-20,\n                                                                                                                                                                                                                                                                                      null,\n                                                                                                                                                                                                                                                                                      7202317607724472882,\n                                                                                                                                                                                                                                                                                      \"CWQ뚿\",\n                                                                                                                                                                                                                                                                                      null,\n                                                                                                                                                                                                                                                                                    
  false,\n                                                                                                                                                                                                                                                                                      true,\n                                                                                                                                                                                                                                                                                      null,\n                                                                                                                                                                                                                                                                                      2857038485417498625,\n                                                                                                                                                                                                                                                                                      6.191302233218633E-20,\n                                                                                                                                                                                                                                                                                      null,\n                                                                                                                                                                                                                                                                                      -6795250594296208046,\n                                                                                                                                                                                                                                        
                                              [\n                                                                                                                                                                                                                                                                                       true,\n                                                                                                                                                                                                                                                                                       {\n                                                                                                                                                                                                                                                                                        \"%ዧ遰Yᚯ⚀x莰愒Vᔈ턗BN洝ꤟA1⍌l콹풪H;OX๫륞쪐ᰚц@͎黾a邬<L厒Xb龃7f웨窂二;\": [[\n                                                                                                                                                                                                                                                                                         null,\n                                                                                                                                                                                                                                                                                         \"耲?䙧㘓F6Xs틭멢.v뚌?鄟恠▽'묺競?WvᆾCtxo?dZ;䨸疎\",\n                                                                                                                                                                                                                                                                                         {\n                                                                                                                                                      
                                                                                                                                    \"@hWꉁ&\\\"빜4礚UO~C;う殩_ꀥ蘁奢^챟k→ᡱKMⵉ<\\/Jㅲ붉L͟Q\": false,\n                                                                                                                                                                                                                                                                                          \"tU뢂8龰I먽7,.Y搽Z툼=&⨥覽K乫햶㠸%#@Z끖愓^⍊⾂몒3E_噆J(廊ឭyd䞜鈬Ћ档'⣘I\": {\n                                                                                                                                                                                                                                                                                           \"tK*ꔵ銂u艗ԃ쿏∳ꄂ霫X3♢9y?=ⲭdЊb&xy}\": [\n                                                                                                                                                                                                                                                                                            -4.097346784534325E-20,\n                                                                                                                                                                                                                                                                                            null,\n                                                                                                                                                                                                                                                                                            6016848468610144624,\n                                                                                                                                                                                                                                                 
                                           -8194387253692332861,\n                                                                                                                                                                                                                                                                                            null,\n                                                                                                                                                                                                                                                                                            {\n                                                                                                                                                                                                                                                                                             \"(祬诀譕쯠娣c봝r?畄kT뼾⌘⎨?noV䏘쥝硎n?\": [\n                                                                                                                                                                                                                                                                                              1.82679422844617293E18,\n                                                                                                                                                                                                                                                                                              [\n                                                                                                                                                                                                                                                                                               false,\n                                                                                                                                              
                                                                                                                                                 2.6849944122427694E18,\n                                                                                                                                                                                                                                                                                               true,\n                                                                                                                                                                                                                                                                                               [\n                                                                                                                                                                                                                                                                                                false,\n                                                                                                                                                                                                                                                                                                {\n                                                                                                                                                                                                                                                                                                 \";0z⭆;화$bਔ瀓\\\"衱^?잢ᢛ⣿~`ꕉ薸⌳໿湘腌'&:ryБꋥၼ꒥筙꬜긨?X\": -3536753685245791530,\n                                                                                                                                                                                                                                                                                                 
\"c;Y7釚Uꃣ割J༨Y戣w}c峰뢨㽑㫈0N>R$䅒X觨l봜A刊8R梒',}u邩퉕?;91Ea䈈믁G⊶芔h袪&廣㺄j;㡏綽\\u001bN頸쳘橆\": -2272208444812560733,\n                                                                                                                                                                                                                                                                                                 \"拑Wﵚj鵼駳Oࣿ)#㾅顂N傓纝y僱栜'Bꐍ-!KF*ꭇK￤?䈴^:啤wG逭w᧯\": \"xᣱmYe1ۏ@霄F$ě꧘푫O䤕퀐Pq52憬ꀜ兴㑗ᡚ?L鷝ퟐ뭐zJꑙ}╆ᅨJB]\\\"袌㺲u8䯆f\",\n                                                                                                                                                                                                                                                                                                 \"꿽၅㔂긱Ǧ?SI\": -1669030251960539193,\n                                                                                                                                                                                                                                                                                                 \"쇝ɨ`!葎>瞺瘡驷錶❤ﻮ酜=\": -6961311505642101651,\n                                                                                                                                                                                                                                                                                                 \"?f7♄꫄Jᡔ훮e읇퍾፣䭴KhखT;Qty}O\\\\|뫁IῒNe(5惁ꥶㆷY9ﮡ\\\\ oy⭖-䆩婁m#x봉>Y鈕E疣s驇↙ᙰm<\": {\"퉻:dꂁ&efￎ쫢[\\\"돈늖꺙|Ô剐1͖-K:ʚ᭕/;쏖㷛]I痐职4g<Oꗢ뫺N쯂륬J╆.`ᇵP轆&fd$?苅o궓vO侃沲⍩嚅沗 E%⿰얦wi\\\\*趫\": [\n                                                                                                                                                                                                                                                                                                  3504362220185634767,\n                                                                   
                                                                                                                                                                                                                               false,\n                                                                                                                                                                                                                                                                                                  \"qzX朝qT3軞T垈ꮲQ览ᚻ⻑쎎b驌䵆ꬠ5Fୗ䲁缿ꝁ蒇潇Ltᆄ钯蜀W欥ሺ\",\n                                                                                                                                                                                                                                                                                                  \"볰ɐ霬)젝鶼kwoc엷荁r \\u001d쒷⎹8{%澡K늒?iﺩd=&皼倚J9s@３偛twὡgj䁠흪5⭉⨺役&놎cﺉ㺡N5\",\n                                                                                                                                                                                                                                                                                                  false,\n                                                                                                                                                                                                                                                                                                  null,\n                                                                                                                                                                                                                                                                                                  \"D0ﬆ[ni锹r*0k6ꀎ덇UX2⽼৞䃚粭#)Z桷36P]<\\/`\",\n                                                                                                                                                     
                                                                                                                                             4281410120849816730,\n                                                                                                                                                                                                                                                                                                  null,\n                                                                                                                                                                                                                                                                                                  -3256922126984394461,\n                                                                                                                                                                                                                                                                                                  1.16174580369801549E18,\n                                                                                                                                                                                                                                                                                                  {\n                                                                                                                                                                                                                                                                                                   \" ᆼꤗ~*TN긂<㡴턱℃酰^蘒涯잰淭傛2rൡet쾣䐇m*㸏y\\\"\\\\糮᧺qv쌜镜T@yg1譬ﭧﳭ\\f\": null,\n                                                                                                                                                                                                                                                                  
                                 \"圾ᨿ0xᮛ禵ਗ਼D-㟻ẵ錚e\\\"赜.˶m)鴑B(I$<\\/轴퉯揷⋏⏺*)宓쓌?*橯Lx\\\\f쩂㞼⇸\\\"ﺧ軂遳V\\\\땒\\\"캘c:G\": null,\n                                                                                                                                                                                                                                                                                                   \"?﵁_곢翸폈8㿠h열Q2㭛}RY㯕ＹT놂⽻e^B<\\/맫ﻇ繱\\u0017Gц⟊ᢑﵩS:jt櫣嗒⟰W㴚搦ᅉe[w䋺?藂翙Ⲱ芮䍘╢囥lpdu7r볺I 近qFyᗊ\": [\n                                                                                                                                                                                                                                                                                                    \"$b脬aﾠ襬育Bگ嵺Pw+'M<\\/כֿn䚚v螁bN⒂}褺%lቦ阤\\\"ꓺᏗM牏,۞Ҷ!矬?ke9銊X괦)䈽틁脽ṫ䈞ᴆ^=Yᗿ遛4I귺⋥%\",\n                                                                                                                                                                                                                                                                                                    false,\n                                                                                                                                                                                                                                                                                                    2.9444482723232051E18,\n                                                                                                                                                                                                                                                                                                    2072621064799640026,\n                                                                                                                                                                                                 
                                                                                                   \"/_뇴뫢j㍒=Nꡦ↍Ժ赒❬톥䨞珯su*媸瀳鷔抡o흺-៳辏勷f绔:䵢搢2\",\n                                                                                                                                                                                                                                                                                                    false,\n                                                                                                                                                                                                                                                                                                    \"쒜 E䌐/큁\\u0018懺_<\\\\隺&{wF⤊谼(<죽遠8?@*rᶊGd뻻갇&Ⳇq᣿e࢔t_ꩄ梸O詬C᧧Kꩠ풤9눙醅됞}竸rw?滨ӽK⥿ཊG魲']`๖5㄰\",\n                                                                                                                                                                                                                                                                                                    -2375253967958699084,\n                                                                                                                                                                                                                                                                                                    {\"嗱⿲\\\"f億ᝬ\": {\"v?䚑킡`◤k3,骥曘뒤Oᒱ㲹^圮᠀YT껛&촮P:G/T⣝#튣k3炩蠏k@橈䏷S䧕,熜晬k1鮥玸먚7䤡f绝嗚샴ᥒ~0q拮垑a뻱LⰖ_\": [{\n                                                                                                                                                                                                                                                                                                     \":p尢\": -6.688985172863383E17,\n                                                                                                                                               
                                                                                                                                                      \"A0\\u0001疠ﻵ爻鼀湶I~W^岀mZx#㍈7r拣$Ꜷ疕≛⦒痋盩Vꬷ᭝ΩQꍪ療鈑A(劽詗ꭅo-獶鑺\\\"Ⓠ@$j탥;\": [\n                                                                                                                                                                                                                                                                                                      8565614620787930994,\n                                                                                                                                                                                                                                                                                                      [\n                                                                                                                                                                                                                                                                                                       \"嶗PC?උQ㪣$&j幾㾷h慑 즊慧⪉霄M窊ꁷ'鮕)䊏铨m趦䗲(g罣ЮKVﯦ鏮5囗ﰼ鿦\",\n                                                                                                                                                                                                                                                                                                       -7168038789747526632,\n                                                                                                                                                                                                                                                                                                       null,\n                                                                                                                                                                                                        
                                                                                                                                                                                       \"꟱Ԕ㍤7曁聯ಃ錐V䷰?v㪃૦~K\\\"$%请|ꇹn\\\"k䫛㏨鲨\\u2023䄢\\u0004[<S8ᐬ뭩脥7U.m࿹:D葍┆2蘸^U'w1젅;䠆ꋪB껮>︊VJ?䶟ាꮈ䗱=깘U빩\": -4863152493797013264\n                                                                                                                                                                                                                                                                                   }\n                                                                                                                                                                                                                                                                                  ]}]}\n                                                                                                                                                                                                                                                                                 ]\n                                                                                                                                                                                                                                                                                }}}\n                                                                                                                                                                                                                                                                               ],\n                                                                                                                                                                                                                                                                               
\"쏷쐲۹퉃~aE唙a챑,9㮹gLHd'䔏|킗㍞䎥&KZYT맵7䥺N<Hp4ꕭ⹠꽐c~皽z\": \"课|ᖾ䡁廋萄䐪W\\u0016&Jn괝b~摓M>ⱳ同莞鿧w\\\\༌疣n/+ꎥU\\\"封랾○ퟙAJᭌ?9䛝$?驔9讐짘魡T֯c藳`虉C읇쐦T\"\n                                                                                                                                                                                                                                                                              }\n                                                                                                                                                                                                                                                                             ],\n                                                                                                                                                                                                                                                                             \"谶개gTR￐>ၵ͚dt晑䉇陏滺}9㉸P漄\": -3350307268584339381\n                                                                                                                                                                                                                                                                            }]\n                                                                                                                                                                                                                                                                           ]\n                                                                                                                                                                                                                                                                          ]\n                                                                                                                                                                                                    
                                                                     ]]\n                                                                                                                                                                                                                                                                        ]\n                                                                                                                                                                                                                                                                       ],\n                                                                                                                                                                                                                                                                       \"0y꟭馋X뱔瑇:䌚￐廿jg-懲鸭䷭垤㒬茭u賚찶ಽ+\\\\mT땱\\u20821殑㐄J쩩䭛ꬿNS潔*d\\\\X,壠뒦e殟%LxG9:摸\": 3737064585881894882,\n                                                                                                                                                                                                                                                                       \"풵O^-⧧ⅶvѪ8廸鉵㈉ר↝Q㿴뺟EႳvNM:磇>w/៻唎뷭୥!냹D䯙i뵱貁C#⼉NH6`柴ʗ#\\\\!2䂗Ⱨf?諳.P덈-返I꘶6?8ꐘ\": -8934657287877777844,\n                                                                                                                                                                                                                                                                       \"溎-蘍寃i诖ര\\\"汵\\\"\\ftl,?d⼡쾪⺋h匱[,෩I8MҧF{k瓿PA'橸ꩯ綷퉲翓\": null\n                                                                                                                                                                                                                                                                      }\n                                                                      
                                                                                                                                                                                               ]\n                                                                                                                                                                                                                                                                    ],\n                                                                                                                                                                                                                                                                    \"ោ係؁<元\": 1.7926963090826924E-18\n                                                                                                                                                                                                                                                                   }}]\n                                                                                                                                                                                                                                                                  }\n                                                                                                                                                                                                                                                                 ]\n                                                                                                                                                                                                                                                                ]]}]\n                                                                                                                                                                                                       
                                                        }]\n                                                                                                                                                                                                                                                              ]\n                                                                                                                                                                                                                                                             ]\n                                                                                                                                                                                                                                                            ]\n                                                                                                                                                                                                                                                           ],\n                                                                                                                                                                                                                                                           \"ጩV<\\\"ڸsOᤘ\": 2.0527167903723048E-19\n                                                                                                                                                                                                                                                          }]\n                                                                                                                                                                                                                                                         ]}\n                                                                                                                      
                                                                                                                                  ]\n                                                                                                                                                                                                                                                       ]],\n                                                                                                                                                                                                                                                       \"∳㙰3젴p᧗䱙?`<U὇<\\/意E[ᮚAj诂ᒽ阚uv徢ဎ떗尔Ᵹ훀쩑J䐴?⪏=륪ᆩ푰ஓ㐕?럽VK\\\"X?檨လ齿I/耉A(AWA~⏯稐蹫\": false,\n                                                                                                                                                                                                                                                       \"偒妝뾇}䀼链i⇃%⋜&璪Ix渥5涧qq棩ᥝ-⠫AA낇yY颕A*裦O|n?䭬혗F\": null,\n                                                                                                                                                                                                                                                       \"琭CL얭B혆Kॎ`鎃nrsZiժW砏)?p~K~A眱䲏QO妣\\u001b\\u001b]ᵆᆯ&㐋ᏹ豉뺘$ꭧ#j=C)祤⫢歑1o㒙諩\": 7028426989382601021,\n                                                                                                                                                                                                                                                       \"쳱冲&ဤ䌏앧h胺-齱H忱8왪RDKᅒ䬋ᔶS*J}ስ漵'㼹뮠9걢9p봋경ጕtởꚳT䶽瘙%춴`@nಆ4<d??#僜ᙤ钴=薔ꭂbLXNam蹈\": \"樭る蹿= Uurwkn뙧⌲%\\\"쑃牪\\\"cq윕o@\",\n                                                                                                                                                                                                                                                       \"溌[H]焎SLㅁ?뀼䫨災W\": 
1.1714289118497062E-19,\n                                                                                                                                                                                                                                                       \"ﬢp븇剌燇kĔ尘㶿㴞睠꾘Ia;s❺^)$穮?sHᢥ폪l\": null\n                                                                                                                                                                                                                                                      }\n                                                                                                                                                                                                                                                     ]\n                                                                                                                                                                                                                                                    }\n                                                                                                                                                                                                                                                   ]\n                                                                                                                                                                                                                                                  },\n                                                                                                                                                                                                                                                  \"TKnzj5o<\\/K㊗ꗣ藠⦪駇>yZA8Ez0,^ᙛ4_0븢\\u001ft:~䎼s.bb룦明yNP8弆C偯;⪾짍'蕴뮛\": -6976654157771105701,\n                                                                                                                     
                                                                                                                             \"큵ꦀ\\\\㇑:nv+뒤燻䀪ﴣ﷍9ᚈ኷K㚊誦撪䚛,ꮪxሲ쳊\\u0005HSf?asg昱dqꬌVꙇ㼺'k*'㈈\": -5.937042203633044E-20\n                                                                                                                                                                                                                                                 }\n                                                                                                                                                                                                                                                ]\n                                                                                                                                                                                                                                               }],\n                                                                                                                                                                                                                                               \"?}\\u20e0],s嶳菋@#2u쒴sQS䩗=ꥮ;烌,|ꘔ䘆\": \"ᅩ영N璠kZ먕眻?2ቲ芋眑D륟渂⸑ﴃIRE]啗`K'\"\n                                                                                                                                                                                                                                              }},\n                                                                                                                                                                                                                                              \"쨀jmV賂ﰊ姐䂦玞㬙ᏪM᪟Վ씜~`uOn*ॠ8\\u000ef6??\\\\@/?9見d筜ﳋB|S䝬葫㽁o\": true\n                                                                                                                                                                                                        
                                     },\n                                                                                                                                                                                                                                             \"즛ꄤ酳艚␂㺘봿㎨iG৕ࡿ?1\\\"䘓您\\u001fSኝ⺿溏zៀ뻤B\\u0019?윐a䳵᭱䉺膷d:<\\/\": 3935553551038864272\n                                                                                                                                                                                                                                            }\n                                                                                                                                                                                                                                           ]\n                                                                                                                                                                                                                                          ]}\n                                                                                                                                                                                                                                         ]]\n                                                                                                                                                                                                                                        ]]\n                                                                                                                                                                                                                                       ]}\n                                                                                                                                                                                                                        
              }\n                                                                                                                                                                                                                                     ]\n                                                                                                                                                                                                                                    }\n                                                                                                                                                                                                                                   ]]}},\n                                                                                                                                                                                                                                   \"᥺3h↛!ꋰy\\\"攜(ெl䪕oUkc1A㘞ᡲ촾ᣫ<\\/䒌E㛝潨i{v?W౾H\\\\RჅpz蝬R脾;v:碽✘↯삞鷱o㸧瑠jcmK7㶧뾥찲n\": true,\n                                                                                                                                                                                                                                   \"ⶸ?x䊺⬝-䰅≁!e쩆2ꎿ准G踌XXᩯ1߁}0?.헀Z馟;稄\\baDꟹ{-寪⚈ꉷ鮸_L7ƽᾚ<\\u001bጨA䧆송뇵⨔\\\\礍뗔d设룱㶉cq{HyぱR㥽吢ﬅp\": -7985372423148569301,\n                                                                                                                                                                                                                                   \"緫#콮IB6<\\/=5Eh礹\\t8럭@饹韠r㰛斣$甝LV췐a갵'请o0g:^\": \"䔨(.\",\n                                                                                                                                                                                                                                   \"띳℡圤pﾝ௄ĝ倧訜B쁟G䙔\\\"Sb⓮;$$▏S1J뢙SF|赡g*\\\"Vu䲌y\": 
\"䪈&틐),\\\\kT鬜1풥;뷴'Zေ䩹@J鞽NぼM?坥eWb6榀ƩZڮ淽⺞삳煳xჿ絯8eⶍ羷V}ჿ쎱䄫R뱃9Z>'\\u20f1ⓕ䏜齮\"\n                                                                                                                                                                                                                                  }\n                                                                                                                                                                                                                                 ]\n                                                                                                                                                                                                                                ]]]\n                                                                                                                                                                                                                               }}\n                                                                                                                                                                                                                              }\n                                                                                                                                                                                                                             ]\n                                                                                                                                                                                                                            ]},\n                                                                                                                                                                                                                            \"펮b.h粔폯2npX詫g錰鷇㇒<쐙S値bBi@?镬矉`剔}c2壧ଭfhY깨R()痩⺃a\\\\⍔?M&ﯟ<劜꺄멊ᄟA\\\"_=\": null\n                                       
                                                                                                                                                                                    },\n                                                                                                                                                                                                                           \"~潹Rqn榢㆓aR鬨侅?䜑亡V_翅㭔(䓷w劸ၳDp䀅<\\/ﰎ鶊m䵱팱긽ꆘ<tD쇋>긓准D3掱;o:_ќ)껚콥8곤d矦8nP倥ꃸI\": null,\n                                                                                                                                                                                                                           \"뾎/Q㣩㫸벯➡㠦◕挮a鶧⋓偼\\u00001뱓fm覞n?㛅\\\"\": 2.8515592202045408E17\n                                                                                                                                                                                                                          }],\n                                                                                                                                                                                                                          \",\": -5426918750465854828,\n                                                                                                                                                                                                                          \"2櫫@0柡g䢻/gꆑ6演&D稒肩Y?艘/놘p{f투`飷ᒉ챻돎<늛䘍ﴡ줰쫄\": false,\n                                                                                                                                                                                                                          \"8(鸑嵀⵹ퟡ<9㣎Tߗ┘d슒ل蘯&㠦뮮eࠍk砝g 엻\": false,\n                                                                                                                                                                                                                          
\"d-\\u208b?0ﳮ嵙'(J`蔿d^踅⤔榥\\\\J⵲v7\": 6.8002426206715341E17,\n                                                                                                                                                                                                                          \"ཎ耰큓ꐕ㱷\\u0013y=詽I\\\"盈xm{0쾽倻䉚ષso#鰑/8㸴짯%ꀄ떸b츟*\\\\鲷礬ZQ兩?np㋄椂榨kc᡹醅3\": false,\n                                                                                                                                                                                                                          \"싊j20\": false\n                                                                                                                                                                                                                         }]]\n                                                                                                                                                                                                                        ]],\n                                                                                                                                                                                                                        \"俛\\u0017n緽Tu뫉蜍鼟烬.ꭠIⰓ\\\"Ἀ᜾uC쎆J@古%ꛍm뻨ᾀ画蛐휃T:錖㑸ዚ9죡$\": true\n                                                                                                                                                                                                                       }\n                                                                                                                                                                                                                      ]\n                                                                                                                                                                                                                     ],\n                                  
                                                                                                                                                                                   \"㍵⇘ꦖ辈s}㱮慀밒s`\\\"㞟j:`i픻Z<C1衽$\\\"-饧?℃\\u0010⼒{p飗%R\\\"䲔\\\")칀\\\\%\": true,\n                                                                                                                                                                                                                     \"苧.8\\u00120ݬ仓\": 6912164821255417986,\n                                                                                                                                                                                                                     \"떎顣俁X;.#Q틝.笂'p쟨唒퐏랩냆¦aⱍ{谐.b我$蜑SH\\u000f琾=䟼⣼奔ᜏ攕B&挰繗㝔ꅂ-Qv\\\\0䶝䚥ぺio［㑮-ᇼ䬰컪ṼiY){데\\u0010q螰掻~\\n輚x\\u0014罺)軴\": 3.024364150712629E-20\n                                                                                                                                                                                                                    }\n                                                                                                                                                                                                                   ]\n                                                                                                                                                                                                                  ]\n                                                                                                                                                                                                                 ]\n                                                                                                                                                                                                                ]}\n                                                                             
                                                                                                                                  ]]\n                                                                                                                                                                                                              }\n                                                                                                                                                                                                             ]\n                                                                                                                                                                                                            ]]\n                                                                                                                                                                                                           ]\n                                                                                                                                                                                                          ]]]],\n                                                                                                                                                                                                          \"\\\"凲o肉Iz絾豉J8?i~傠᫽䇂!WD溊J?ᡒvs菆嵹➒淴>섫^諎0Ok{켿歁෣胰a2﨤[탳뚬쎼嫭뉮m\": 409440660915023105,\n                                                                                                                                                                                                          \"w墄#*ᢄ峠밮jLa`ㆪ꺊漓Lで끎!Agk'ꁛ뢃㯐岬D#㒦\": false,\n                                                                                                                                                                                                          \"ଦPGI䕺L몥罭ꃑ궩﮶#⮈ᢓӢ䚬p7웼臧%~S菠␌힀6&t䳙y㪘냏\\\\*;鉏ￊ鿵'嗕pa\\\"oL쇿꬈Cg\": \"㶽1灸D⟸䴅ᆤ뉎﷛渤csx 
                                                                                          \"?#癘82禩鋆ꊝty?&\": -1.9419029518535086E-19\n                                                                                                }\n                                                                                               ]\n                                                                                              ]\n                                                                                             ]}\n                                                                                            ]\n                                                                                           ]\n                                                                                          ],\n                                                                                          \"훊榲.|῕戄&.㚏Zꛦ2\\\"䢥ሆ⤢fV_摕婔?≍Fji冀탆꜕i㏬_ẑKᅢ꫄蔻XWc|饡Siẘ^㲦?羡2ぴ1縁ᙅ?쐉Ou\": false\n                                                                                         }]]\n                                                                                        ]}}},\n                                                                                        \"慂뗄卓蓔ᐓ匐嚖/颹蘯/翻ㆼL?뇊,텵<\\\\獷ごCボ\": null\n                                                                                       },\n                                                                                       \"p溉ᑟi짣z:䒤棇r^٫%G9缑r砌롧.물农g?0׼ሩ4ƸO㣥㯄쩞ጩ\": null,\n                                                                                       \"껎繥YxK\\\"F젷쨹뤤1wq轫o?鱑뜀瘊?뎃h灑\\\\ꛣ}K峐^ኖ⤐林ꉓhy\": null\n                                                                                      }\n                                                                                     ],\n                                                                                     \"᱀n肓ㄛ\\\"堻2>m殮'1橌%Ꞵ군=Ӳ鯨9耛<\\/n據0u彘8㬇៩f᏿诙]嚊\": 
\"䋯쪦S럶匏ㅛ#)O`ሀX_鐪渲⛀㨻宅闩➈ꢙஶDR⪍\"\n                                                                                    },\n                                                                                    \"tA썓龇 ⋥bj왎录r땽✒롰;羋^\\\\?툳*┎?썀ma䵳넅U䳆૘〹䆀LQ0\\b疀U~u$M}(鵸g⳾i抦뛹?䤈땚검.鹆?ꩡtⶥGĒ;!ቹHS峻B츪켏f5≺\": 2366175040075384032,\n                                                                                    \"전pJjleb]ួ\": -7.5418493141528422E18,\n                                                                                    \"n.鎖ጲ\\n?,$䪘\": true\n                                                                                   },\n                                                                                   \"欈Ar㉣螵᪚茩?O)\": null\n                                                                                  },\n                                                                                  \"쫸M#x}D秱欐K=侫们丐.KꕾxẠ\\u001e㿯䣛F܍캗qq8꟞ṢFD훎⵳簕꭛^鳜\\u205c٫~⑟~冫ऊ2쫰<\\/戲윱o<\\\"\": true\n                                                                                 },\n                                                                                 \"㷝聥/T뱂\\u0010锕|内䞇x侁≦㭖:M?iM᣿IJe煜dG࣯尃⚩gPt*辂.{磼럾䝪@a\\\\袛?}ᓺB珼\": true\n                                                                                }\n                                                                               }\n                                                                              ]]}]}},\n                                                                              \"tn\\\"6ꫤ샾䄄;銞^%VBPwu묪`Y僑N.↺Ws?3C⤻9唩S䠮ᐴm;sᇷ냞඘B/;툥B?lB∤)G+O9m裢0kC햪䪤\": -4.5941249382502277E18,\n                                                                              \"ᚔt'\\\\愫?鵀@\\\\びꂕP큠<<]煹G-b!S?\\nꖽ鼫,ݛ&頺y踦?E揆릱H}햧캡b@手.p탻>췽㣬ꒅ`qe佭P>ᓂ&?u}毚ᜉ蟶頳졪ᎏzl2wO\": -2.53561440423275936E17\n                                                                             }]}\n                                                           
                 }\n                                                                           ]\n                                                                          ]],\n                                                                          \"潈촒⿂叡\": 5495738871964062986\n                                                                         }\n                                                                        ]]\n                                                                       }\n                                                                      ]\n                                                                     ]}\n                                                                    ]]\n                                                                   ]]\n                                                                  ]}\n                                                                 ]\n                                                                ]},\n                                                                \"ႁq킍蓅R`謈蟐ᦏ儂槐僻ﹶ9婌櫞釈~\\\"%匹躾ɢ뤥>࢟瀴愅?殕节/냔O✬H鲽엢?ᮈੁ⋧d␽㫐zCe*\": 2.15062231586689536E17,\n                                                                \"㶵Ui曚珰鋪ᾼ臧P{䍏䷪쨑̟A뼿T渠誈䏚D1!잶<\\/㡍7?)2l≣穷᛾稝{:;㡹nemיּ訊`G\": null,\n                                                                \"䀕\\\"飕辭p圁f#뫆䶷뛮;⛴ᩍ3灚덏ᰝ쎓⦷詵%᜖Մfs⇫(\\u001e~P|ﭗCⲾផv湟W첋(텪બT<บSꏉ੗⋲X婵i ӵ⇮?L䬇|ꈏ?졸\": 1.548341247351782E-19\n                                                               }\n                                                              ]\n                                                             },\n                                                             \"t;:N\\u0015q鐦Rt缆{ꮐC?஛㷱敪\\\\+鲊㉫㓪몗릙竏(氵kYS\": \"XᰂT?൮ô\",\n                                                             \"碕飦幑|+ 㚦鏶`镥ꁩ B<\\/加륙\": -4314053432419755959,\n                                                             \"秌孳(p!G?V傫%8ሽ8w;5鲗㦙LI檸\\u2098\": \"zG 
N볞䆭鎍흘\\\\ONK3횙<\\/樚立圌Q튅k쩎Ff쁋aׂJK銆ઘ즐狩6༥✙䩜篥CzP(聻駇HHퟲ讃%,ά{렍p而刲vy䦅ክ^톺M楒鍢㹳]Mdg2>䤉洞\",\n                                                             \"踛M젧>忔芿㌜Zk\": 2215369545966507819,\n                                                             \"씐A`$槭頰퍻^U覒\\bG毲aᣴU;8!팲f꜇E⸃_卵{嫏羃X쀳C7뗮m(嚼u N܁谟D劯9]#\": true,\n                                                             \"ﻩ!뵸-筚P᭛}ἰ履lPh?౮ⶹꆛ穉뎃g萑㑓溢CX뾇G㖬A錟]RKaꄘ]Yo+@䘁's섎襠$^홰}F\": null\n                                                            },\n                                                            \"粘ꪒ4HXᕘ蹵.$區\\r\\u001d묁77pPc^y笲Q<\\/ꖶ 訍䃍ᨕG?*\": 1.73773035935040224E17\n                                                           },\n                                                           \"婅拳?bkU;#D矠❴vVN쩆t㜷A풃갮娪a%鮏絪3dAv룒#tm쑬⌛qYwc4|L8KZ;xU⓭㳔밆拓EZ7襨eD|隰ऌ䧼u9Ԣ+]贴P荿\": 2.9628516456987075E18\n                                                          }]}}]\n                                                         ]}\n                                                        }}\n                                                       ]}]\n                                                      ],\n                                                      \"|g翉F*湹̶\\u0005⏐1脉̀eI쩓ᖂ㫱0碞l䴨ꑅ㵽7AtἈ턧yq䳥塑:z:遀ﾼX눔擉)`N3昛oQ셖y-ڨ⾶恢ꈵq^<\\/\": null,\n                                                      \"菹\\\\랓G^璬x৴뭸ゆUS겧﮷Bꮤ ┉銜᯻0%N7}~f洋坄Xꔼ<\\/4妟Vꄟ9:౟곡t킅冩䧉笭裟炂4봋ⱳ叺怊t+怯涗\\\"0㖈Hq\": false,\n                                                      \"졬믟'ﺇফ圪쓬멤m邸QLব䗁愍4jvs翙 ྍ꧀艳H-|\": null,\n                                                      \"컮襱⣱뗠 R毪/鹙꾀%헳8&\": -5770986448525107020\n                                                     }\n                                                    ],\n                                                    \"B䔚bꐻ뙏姓展槰T-똌鷺tc灿᫽^㓟䏀o3o$꘭趙萬I顩)뇭Ἑ䓝\\f@{ᣨ`x3蔛\": null\n                                                   }\n                                                  ]\n                                              
   ]\n                                                }],\n                                                \"⦖扚vWꃱ꥙㾠壢輓{-⎳鹷贏璿䜑bG倛⋐磎c皇皩7a~ﳫU╣Q࠭ꎉS摅姽OW.홌ೞ.\": null,\n                                                \"蚪eVlH献r}ᮏ믠ﰩꔄ@瑄ⲱ\": null,\n                                                \"퀭$JWoꩢg역쁍䖔㑺h&ୢtXX愰㱇?㾫I_6 OaB瑈q裿\": null,\n                                                \"꽦ﲼLyr纛Zdu珍B絟쬴糔?㕂짹䏵e\": \"ḱ\\u2009cX9멀i䶛簆㳀k\"\n                                               }\n                                              ]]]],\n                                              \"(_ꏮg່澮?ᩑyM<艷\\u001aꪽ\\\\庼뙭Z맷㰩Vm\\\\lY筺]3㋲2㌩㄀Eਟ䝵⨄쐨ᔟgङHn鐖⤇놋瓇Q탚單oY\\\"♆臾jHᶈ征ቄ??uㇰA?#1侓\": null\n                                             },\n                                             \"觓^~ሢ&iI띆g륎ḱ캀.ᓡꀮ胙鈉\": 1.0664523593012836E-19,\n                                             \"y詭Gbᔶऽs댁U:杜⤎ϲ쁗⮼D醄诿q뙰I#즧v蔎xHᵿt᡽[**?崮耖p缫쿃L菝,봬ꤦC쯵#=X1瞻@OZc鱗CQTx\": null\n                                            }\n                                           ]\n                                          }}],\n                                          \"剘紁\\u0004\\\\Xn⊠6,တױ;嵣崇}讃iႽ)d1\\\\䔓\": null\n                                         },\n                                         \"脨z\\\"{X,1u찜<'k&@?1}Yn$\\u0015Rd輲ｰa쮂굄+B$l\": true,\n                                         \"諳>*쭮괐䵟Ґ+<箁}빀䅱⡔檏臒hIH脟ꩪC핝ଗP좕\\\"0i<\\/C褻D۞恗+^5?'ꂱ䚫^7}㡠cq6\\\\쨪ꔞꥢ?纖䫀氮蒫侲빦敶q{A煲G\": -6880961710038544266\n                                        }}]\n                                       },\n                                       \"5s⨲JvಽῶꭂᄢI.a৊\": null,\n                                       \"?1q꽏쿻ꛋDR%U娝>DgN乭G\": -1.2105047302732358E-19\n                                      }\n                                     ]\n                                    ]},\n                                    \"qZz`撋뙹둣j碇쁏\\\\ꆥ\\u0018@藴疰Wz)O{F䶛l᷂绘訥$]뮍夻䢋䩇萿獰樧猵⣭j萶q)$꬚⵷0馢W:Ⱍ!Qoe\": -1666634370862219540,\n                                  
  \"t\": \"=wp|~碎Q鬳Ӎ\\\\l-<\\/^ﳊhn퐖}䍔t碵ḛ혷?靻䊗\",\n                                    \"邙쇡㯇%#=,E4勃驆V繚q[Y댻XV㡸[逹ᰏ葢B@u=JS5?bLRn얮㍉⏅ﰳ?a6[&큟!藈\": 1.2722786745736667E-19\n                                   },\n                                   \"X블땨4{ph鵋ꉯ웸 5p簂䦭s_E徔濧d稝~No穔噕뽲)뉈c5M윅>⚋[岦䲟懷恁?鎐꓆ฬ爋獠䜔s{\\u001bm鐚儸煛%bﯿXT>ꗘ@8G\": 1157841540507770724,\n                                   \"媤娪Q杸\\u0011SAyᡈ쿯\": true,\n                                   \"灚^ಸ%걁<\\/蛯<O\\\"-刷㏠R(kO=䢊䅎l䰓팪A絫픧\": \"譔\\\\㚄 ?R7㔪G㋉⣰渆?\\\\#|gN⤴;W칷A׫癮଼ೣ㏳뒜7d恓꾲0扬S0ᆵi/贎ྡn䆋武\",\n                                   \"萇砇Gこ朦켋Wq`㞲攊*冁~霓L剢zI腧튴T繙Cঅ뫬╈뮜ㄾ䦧촄椘B⊬츩r2f㶱厊8eϬ{挚␯OM焄覤\\\\(Kӡ>?\\\"祴坓\\\\\\\\'흍\": -3.4614808555942579E18,\n                                   \"釴U:O湛㴑䀣렑縓\\ta)<D8ﭳ槁髭D.L|xs斋敠\\\"띋早7wᎍ\": true,\n                                   \"쵈+쬎簨up䓬?q+~\\u0019仇뵈᫯3ᵣ恘枰劫㪢u珘-퀭:컙:u`⌿A(9鄦!<珚nj3:Hࣨ巋䀁旸뎈맻v\\\"\\\\(곘vO㤰aZe<\\/W鹙鄜;l厮둝\": null,\n                                   \"\": -1.2019926774977002E-18,\n                                   \"%者O7.Nꪍs梇接z蕜綛<\\/䜭\\\"죊y<曋漵@Ś⹝sD⟓jݗᢜ?z/9ၲMa쨮긗贎8ᔮ㦛;6p뾥໭䭊0B찛+)(Y㿠鸁䕒^옥\": \"鬃뫤&痽舎J콮藐󽸰ᨨMꈫ髿v<N\\\\.삒껅я1ꭼ5䴷5쳬臨wj덥\"\n                                  }],\n                                  \"鷎'㳗@帚妇OAj' 谬f94ǯ(횡ﾋ%io쪖삐좛>(j:숾却䗌gCiB뽬Oyuq輥厁/7)?今hY︺Q\": null\n                                 }\n                                ]\n                               ]]]}]\n                              ],\n                              \"I笔趠Ph!<ཛྷ㸞诘X$畉F\\u0005笷菟.Esr릙!W☆䲖뗷莾뒭U\\\"䀸犜Uo3Gꯌx4r蔇᡹㧪쨢準<䂀%ࡡꟼ瑍8炝Xs0䀝销?fi쥱ꆝલBB\": -8571484181158525797,\n                              \"L⦁o#J|\\\"⽩-㱢d㌛8d\\\\㶤傩儻E[Y熯)r噤὘勇 }\": \"e(濨쓌K䧚僒㘍蠤Vᛸ\\\"络QJL2,嬓왍伢㋒䴿考澰@(㏾`kX$끑эE斡,蜍&~y\",\n                              \"vj.|统圪ᵮPL?2oŶ`밧\\\"勃+0ue%⿥绬췈체$6:qa렐Q;~晘3㙘鹑\": true,\n                              \"ශؙ4獄⶿c︋i⚅:ん閝Ⳙ苆籦kw{䙞셕pC췃ꍬ␜꟯ꚓ酄b힝hwk꭭M鬋8B耳쑘WQ\\\\偙ac'唀x᪌\\u2048*h짎#ፇ鮠뾏ឿ뀌\": false,\n                              \"⎀jꄒ牺3Ⓝ컴~?親ꕽぼܓ喏瘘!@<튋㐌꿱⩦{a?Yv%⪧笯Uܱ栅E搚i뚬:ꄃx7䙳ꦋ&䓹vq☶I䁘ᾘ涜\\\\썉뺌Lr%Bc㍜3?ꝭ砿裞]\": null,\n     
                         \"⭤뙓z(㡂%亳K䌽꫿AԾ岺㦦㼴輞낚Vꦴw냟鬓㹈뽈+o3譻K1잞\": 2091209026076965894,\n                              \"ㇲ\\t⋇轑ꠤ룫X긒\\\"zoY읇희wj梐쐑l侸`e%s\": -9.9240075473576563E17,\n                              \"啸ꮑ㉰!ᚓ}銏\": -4.0694813896301194E18,\n                              \">]囋੽EK뇜>_ꀣ緳碖{쐐裔[<ನ\\\"䇅\\\"5L?#xTwv#罐\\u0005래t应\\\\N?빗;\": \"v쮽瞭p뭃\"\n                             }\n                            ]],\n                            \"斴槾?Z翁\\\"~慍弞ﻆ=꜡o5鐋dw\\\"?K蠡i샾ogDﲰ_C*⬟iㇷ4nય蟏[㟉U꽌娛苸 ঢ়操贻洞펻)쿗૊許X⨪VY츚Z䍾㶭~튃ᵦ<\\/E臭tve猑x嚢\": null,\n                            \"锡⛩<\\/칥ꈙᬙ蝀&Ꚑ籬■865?_>L詏쿨䈌浿弥爫̫lj&zx<\\/C쉾?覯n?\": null,\n                            \"꾳鑤/꼩d=ᘈn挫ᑩ䰬ZC\": \"3錢爋6Ƹ䴗v⪿Wr益G韠[\\u0010屗9쁡钁u?殢c䳀蓃樄욂NAq赟c튒瘁렶Aૡɚ捍\"\n                           }\n                          ]\n                         ]\n                        ]}\n                       ]\n                      ]\n                     }]]]}}\n                    ]}],\n                    \"Ej䗳U<\\/Q=灒샎䞦,堰頠@褙g_\\u0003ꤾfⶽ?퇋!łB〙ד3CC䌴鈌U:뭔咎(Qો臃䡬荋BO7㢝䟸\\\"Yb\": 2.36010731779814E-20,\n                    \"逸'0岔j\\u000e눘먷翌C츊秦=ꭣ棭ှ;鳸=麱$XP⩉駚橄A\\\\좱⛌jqv䰞3Ь踌v㳆¹gT┌gvLB賖烡m?@E঳i\": null\n                   },\n                   \"曺v찘ׁ?&绫O័\": 9107241066550187880\n                  }\n                 ]\n                ],\n                \"(e屄\\u0019昜훕琖b蓘ᬄ0/۲묇Z蘮ဏ⨏蛘胯뢃@㘉8ሪWᨮ⦬ᅳ䅴HI၇쨳z囕陻엣1赳o\": true,\n                \",b刈Z,ၠ晐T솝ŕB⩆ou'퐼≃绗雗d譊\": null,\n                \"a唥KB\\\"ﳝ肕$u\\n^⅄P䟼냉䞸⩪u윗瀱ꔨ#yşs꒬=1|ﲤ爢`t౐튼쳫_Az(Ṋ擬㦷좕耈6\": 2099309172767331582,\n                \"?㴸U<\\/䢔ꯡ阽扆㐤q鐋?f㔫wM嬙-;UV죫嚔픞G&\\\"Cᗍ䪏풊Q\": \"VM7疹+陕枡툩窲}翡䖶8欞čsT뮐}璤:jﺋ鎴}HfA൝⧻Zd#Qu茅J髒皣Y-︴[?-~쉜v딏璮㹚䅊﩯<-#\\u000e걀h\\u0004u抱﵊㼃U<㱷⊱IC進\"\n               },\n               \"숌dee節鏽邺p넱蹓+e罕U\": true\n              }\n             ],\n             \"b⧴룏??ᔠ3ぱ>%郿劃翐ꏬꠛW瞳᫏누躨狀ໄy੽\\\"ីuS=㨞馸k乆E\": \"トz݈^9R䬑<ﮛG<s~<\\/?ⵆᏥ老熷u듷\"\n            }}\n           ]\n          }\n         ]}\n        }\n       }\n      }\n     }},\n     
\"宩j鬅쳜QꝖјy獔Z᭵1v擖}䨿F%cֲ᫺贴m塼딚NP亪\\\"ￋsa뺯ꘓ2:9뛓༂쌅䊈#>Rꨳ\\u000fTT泠纷꽀MR<CBxP񱒫X쇤\": -2.22390568492330598E18,\n     \"?䯣ᄽ@Z鸅->ᴱ纊:㠭볮?%N56%鈕1䗍䜁a䲗j陇=뿻偂衋࿘ᓸ?ᕵZ+<\\/}H耢b䀁z^f$&㝒LkꢳI脚뙛u\": 5.694374481577558E-20\n    }]\n   }\n  ]],\n  \"obj\": {\"key\": \"wrong value\"},\n  \"퓲꽪m{㶩/뇿#⼢&᭙硞㪔E嚉c樱㬇1a綑᝖DḾ䝩\": null\n }\n}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/data/webapp.json",
    "content": "{\"web-app\": {\r\n  \"servlet\": [   \r\n    {\r\n      \"servlet-name\": \"cofaxCDS\",\r\n      \"servlet-class\": \"org.cofax.cds.CDSServlet\",\r\n      \"init-param\": {\r\n        \"configGlossary:installationAt\": \"Philadelphia, PA\",\r\n        \"configGlossary:adminEmail\": \"ksm@pobox.com\",\r\n        \"configGlossary:poweredBy\": \"Cofax\",\r\n        \"configGlossary:poweredByIcon\": \"/images/cofax.gif\",\r\n        \"configGlossary:staticPath\": \"/content/static\",\r\n        \"templateProcessorClass\": \"org.cofax.WysiwygTemplate\",\r\n        \"templateLoaderClass\": \"org.cofax.FilesTemplateLoader\",\r\n        \"templatePath\": \"templates\",\r\n        \"templateOverridePath\": \"\",\r\n        \"defaultListTemplate\": \"listTemplate.htm\",\r\n        \"defaultFileTemplate\": \"articleTemplate.htm\",\r\n        \"useJSP\": false,\r\n        \"jspListTemplate\": \"listTemplate.jsp\",\r\n        \"jspFileTemplate\": \"articleTemplate.jsp\",\r\n        \"cachePackageTagsTrack\": 200,\r\n        \"cachePackageTagsStore\": 200,\r\n        \"cachePackageTagsRefresh\": 60,\r\n        \"cacheTemplatesTrack\": 100,\r\n        \"cacheTemplatesStore\": 50,\r\n        \"cacheTemplatesRefresh\": 15,\r\n        \"cachePagesTrack\": 200,\r\n        \"cachePagesStore\": 100,\r\n        \"cachePagesRefresh\": 10,\r\n        \"cachePagesDirtyRead\": 10,\r\n        \"searchEngineListTemplate\": \"forSearchEnginesList.htm\",\r\n        \"searchEngineFileTemplate\": \"forSearchEngines.htm\",\r\n        \"searchEngineRobotsDb\": \"WEB-INF/robots.db\",\r\n        \"useDataStore\": true,\r\n        \"dataStoreClass\": \"org.cofax.SqlDataStore\",\r\n        \"redirectionClass\": \"org.cofax.SqlRedirection\",\r\n        \"dataStoreName\": \"cofax\",\r\n        \"dataStoreDriver\": \"com.microsoft.jdbc.sqlserver.SQLServerDriver\",\r\n        \"dataStoreUrl\": \"jdbc:microsoft:sqlserver://LOCALHOST:1433;DatabaseName=goon\",\r\n        \"dataStoreUser\": 
\"sa\",\r\n        \"dataStorePassword\": \"dataStoreTestQuery\",\r\n        \"dataStoreTestQuery\": \"SET NOCOUNT ON;select test='test';\",\r\n        \"dataStoreLogFile\": \"/usr/local/tomcat/logs/datastore.log\",\r\n        \"dataStoreInitConns\": 10,\r\n        \"dataStoreMaxConns\": 100,\r\n        \"dataStoreConnUsageLimit\": 100,\r\n        \"dataStoreLogLevel\": \"debug\",\r\n        \"maxUrlLength\": 500}},\r\n    {\r\n      \"servlet-name\": \"cofaxEmail\",\r\n      \"servlet-class\": \"org.cofax.cds.EmailServlet\",\r\n      \"init-param\": {\r\n      \"mailHost\": \"mail1\",\r\n      \"mailHostOverride\": \"mail2\"}},\r\n    {\r\n      \"servlet-name\": \"cofaxAdmin\",\r\n      \"servlet-class\": \"org.cofax.cds.AdminServlet\"},\r\n \r\n    {\r\n      \"servlet-name\": \"fileServlet\",\r\n      \"servlet-class\": \"org.cofax.cds.FileServlet\"},\r\n    {\r\n      \"servlet-name\": \"cofaxTools\",\r\n      \"servlet-class\": \"org.cofax.cms.CofaxToolsServlet\",\r\n      \"init-param\": {\r\n        \"templatePath\": \"toolstemplates/\",\r\n        \"log\": 1,\r\n        \"logLocation\": \"/usr/local/tomcat/logs/CofaxTools.log\",\r\n        \"logMaxSize\": \"\",\r\n        \"dataLog\": 1,\r\n        \"dataLogLocation\": \"/usr/local/tomcat/logs/dataLog.log\",\r\n        \"dataLogMaxSize\": \"\",\r\n        \"removePageCache\": \"/content/admin/remove?cache=pages&id=\",\r\n        \"removeTemplateCache\": \"/content/admin/remove?cache=templates&id=\",\r\n        \"fileTransferFolder\": \"/usr/local/tomcat/webapps/content/fileTransferFolder\",\r\n        \"lookInContext\": 1,\r\n        \"adminGroupID\": 4,\r\n        \"betaServer\": true}}],\r\n  \"servlet-mapping\": {\r\n    \"cofaxCDS\": \"/\",\r\n    \"cofaxEmail\": \"/cofaxutil/aemail/*\",\r\n    \"cofaxAdmin\": \"/admin/*\",\r\n    \"fileServlet\": \"/static/*\",\r\n    \"cofaxTools\": \"/tools/*\"},\r\n \r\n  \"taglib\": {\r\n    \"taglib-uri\": \"cofax.tld\",\r\n    \"taglib-location\": 
\"/WEB-INF/tlds/cofax.tld\"}}}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/data/widget.json",
    "content": "{\"widget\": {\r\n    \"debug\": \"on\",\r\n    \"window\": {\r\n        \"title\": \"Sample Konfabulator Widget\",\r\n        \"name\": \"main_window\",\r\n        \"width\": 500,\r\n        \"height\": 500\r\n    },\r\n    \"image\": { \r\n        \"src\": \"Images/Sun.png\",\r\n        \"name\": \"sun1\",\r\n        \"hOffset\": 250,\r\n        \"vOffset\": 250,\r\n        \"alignment\": \"center\"\r\n    },\r\n    \"text\": {\r\n        \"data\": \"Click Here\",\r\n        \"size\": 36,\r\n        \"style\": \"bold\",\r\n        \"name\": \"text1\",\r\n        \"hOffset\": 250,\r\n        \"vOffset\": 100,\r\n        \"alignment\": \"center\",\r\n        \"onMouseUp\": \"sun1.opacity = (sun1.opacity / 100) * 90;\"\r\n    }\r\n}}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/encodings/utf8.json",
    "content": "{\r\n\t\"en\":\"I can eat glass and it doesn't hurt me.\",\r\n\t\"zh-Hant\":\"我能吞下玻璃而不傷身體。\",\r\n\t\"zh-Hans\":\"我能吞下玻璃而不伤身体。\",\r\n\t\"ja\":\"私はガラスを食べられます。それは私を傷つけません。\",\r\n\t\"ko\":\"나는 유리를 먹을 수 있어요. 그래도 아프지 않아요\"\r\n}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/encodings/utf8bom.json",
    "content": "﻿{\r\n\t\"en\":\"I can eat glass and it doesn't hurt me.\",\r\n\t\"zh-Hant\":\"我能吞下玻璃而不傷身體。\",\r\n\t\"zh-Hans\":\"我能吞下玻璃而不伤身体。\",\r\n\t\"ja\":\"私はガラスを食べられます。それは私を傷つけません。\",\r\n\t\"ko\":\"나는 유리를 먹을 수 있어요. 그래도 아프지 않아요\"\r\n}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail1.json",
    "content": "\"A JSON payload should be an object or array, not a string.\""
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail10.json",
    "content": "{\"Extra value after close\": true} \"misplaced quoted value\""
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail11.json",
    "content": "{\"Illegal expression\": 1 + 2}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail12.json",
    "content": "{\"Illegal invocation\": alert()}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail13.json",
    "content": "{\"Numbers cannot have leading zeroes\": 013}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail14.json",
    "content": "{\"Numbers cannot be hex\": 0x14}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail15.json",
    "content": "[\"Illegal backslash escape: \\x15\"]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail16.json",
    "content": "[\\naked]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail17.json",
    "content": "[\"Illegal backslash escape: \\017\"]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail18.json",
    "content": "[[[[[[[[[[[[[[[[[[[[\"Too deep\"]]]]]]]]]]]]]]]]]]]]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail19.json",
    "content": "{\"Missing colon\" null}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail2.json",
    "content": "[\"Unclosed array\""
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail20.json",
    "content": "{\"Double colon\":: null}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail21.json",
    "content": "{\"Comma instead of colon\", null}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail22.json",
    "content": "[\"Colon instead of comma\": false]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail23.json",
    "content": "[\"Bad value\", truth]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail24.json",
    "content": "['single quote']"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail25.json",
    "content": "[\"\ttab\tcharacter\tin\tstring\t\"]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail26.json",
    "content": "[\"tab\\   character\\   in\\  string\\  \"]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail27.json",
    "content": "[\"line\nbreak\"]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail28.json",
    "content": "[\"line\\\nbreak\"]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail29.json",
    "content": "[0e]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail3.json",
    "content": "{unquoted_key: \"keys must be quoted\"}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail30.json",
    "content": "[0e+]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail31.json",
    "content": "[0e+-1]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail32.json",
    "content": "{\"Comma instead if closing brace\": true,"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail33.json",
    "content": "[\"mismatch\"}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail4.json",
    "content": "[\"extra comma\",]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail5.json",
    "content": "[\"double extra comma\",,]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail6.json",
    "content": "[   , \"<-- missing value\"]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail7.json",
    "content": "[\"Comma after the close\"],"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail8.json",
    "content": "[\"Extra close\"]]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/fail9.json",
    "content": "{\"Extra comma\": true,}"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/pass1.json",
    "content": "[\n    \"JSON Test Pattern pass1\",\n    {\"object with 1 member\":[\"array with 1 element\"]},\n    {},\n    [],\n    -42,\n    true,\n    false,\n    null,\n    {\n        \"integer\": 1234567890,\n        \"real\": -9876.543210,\n        \"e\": 0.123456789e-12,\n        \"E\": 1.234567890E+34,\n        \"\":  23456789012E66,\n        \"zero\": 0,\n        \"one\": 1,\n        \"space\": \" \",\n        \"quote\": \"\\\"\",\n        \"backslash\": \"\\\\\",\n        \"controls\": \"\\b\\f\\n\\r\\t\",\n        \"slash\": \"/ & \\/\",\n        \"alpha\": \"abcdefghijklmnopqrstuvwyz\",\n        \"ALPHA\": \"ABCDEFGHIJKLMNOPQRSTUVWYZ\",\n        \"digit\": \"0123456789\",\n        \"0123456789\": \"digit\",\n        \"special\": \"`1~!@#$%^&*()_+-={':[,]}|;.</>?\",\n        \"hex\": \"\\u0123\\u4567\\u89AB\\uCDEF\\uabcd\\uef4A\",\n        \"true\": true,\n        \"false\": false,\n        \"null\": null,\n        \"array\":[  ],\n        \"object\":{  },\n        \"address\": \"50 St. James Street\",\n        \"url\": \"http://www.JSON.org/\",\n        \"comment\": \"// /* <!-- --\",\n        \"# -- --> */\": \" \",\n        \" s p a c e d \" :[1,2 , 3\n\n,\n\n4 , 5        ,          6           ,7        ],\"compact\":[1,2,3,4,5,6,7],\n        \"jsontext\": \"{\\\"object with 1 member\\\":[\\\"array with 1 element\\\"]}\",\n        \"quotes\": \"&#34; \\u0022 %22 0x22 034 &#x22;\",\n        \"\\/\\\\\\\"\\uCAFE\\uBABE\\uAB98\\uFCDE\\ubcda\\uef4A\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:',./<>?\"\n: \"A key can be any string\"\n    },\n    0.5 ,98.6\n,\n99.44\n,\n\n1066,\n1e1,\n0.1e1,\n1e-1,\n1e00,2e+00,2e-00\n,\"rosebud\"]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/pass2.json",
    "content": "[[[[[[[[[[[[[[[[[[[\"Not too deep\"]]]]]]]]]]]]]]]]]]]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/pass3.json",
    "content": "{\n    \"JSON Test Pattern pass3\": {\n        \"The outermost value\": \"must be an object or array.\",\n        \"In this test\": \"It is an object.\"\n    }\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/jsonchecker/readme.txt",
    "content": "Test suite from http://json.org/JSON_checker/.\n\nIf the JSON_checker is working correctly, it must accept all of the pass*.json files and reject all of the fail*.json files.\n"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/types/booleans.json",
    "content": "[\n  true,\n  true,\n  false,\n  false,\n  true,\n  true,\n  true,\n  false,\n  false,\n  true,\n  false,\n  false,\n  true,\n  false,\n  false,\n  false,\n  true,\n  false,\n  false,\n  true,\n  true,\n  false,\n  true,\n  true,\n  true,\n  false,\n  false,\n  false,\n  true,\n  false,\n  true,\n  false,\n  false,\n  true,\n  true,\n  true,\n  true,\n  true,\n  true,\n  false,\n  false,\n  true,\n  false,\n  false,\n  false,\n  true,\n  true,\n  false,\n  true,\n  true,\n  false,\n  true,\n  false,\n  true,\n  true,\n  true,\n  false,\n  false,\n  false,\n  true,\n  false,\n  false,\n  false,\n  true,\n  true,\n  false,\n  true,\n  true,\n  true,\n  true,\n  true,\n  true,\n  true,\n  true,\n  false,\n  false,\n  false,\n  false,\n  false,\n  true,\n  true,\n  true,\n  true,\n  true,\n  true,\n  true,\n  false,\n  false,\n  false,\n  true,\n  false,\n  false,\n  false,\n  true,\n  true,\n  true,\n  false,\n  false,\n  true,\n  false\n]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/types/floats.json",
    "content": "[\n  135.747111636,\n  123.377054008,\n  140.527504552,\n  -72.299143906,\n  -23.851678949,\n  73.586193519,\n  -158.299382442,\n  177.477876032,\n  32.268518982,\n  -139.560009969,\n  115.203105183,\n  -106.025823607,\n  167.224138231,\n  103.378383732,\n  -97.498486285,\n  18.184723416,\n  69.137075711,\n  33.849002681,\n  -120.185228215,\n  -20.841408615,\n  -172.659492727,\n  -2.691464061,\n  22.426164066,\n  -98.416909437,\n  -31.603082708,\n  -85.072296561,\n  108.620987395,\n  -43.127078238,\n  -126.473562057,\n  -158.595489097,\n  -57.890678254,\n  -13.254016573,\n  -85.024504709,\n  171.663552644,\n  -146.495558248,\n  -10.606748276,\n  -118.786969354,\n  153.352057804,\n  -45.215545083,\n  37.038725288,\n  106.344071897,\n  -64.607402031,\n  85.148030911,\n  28.897784566,\n  39.51082061,\n  20.450382102,\n  -113.174943618,\n  71.60785784,\n  -168.202648062,\n  -157.338200017,\n  10.879588527,\n  -114.261694831,\n  -5.622927072,\n  -173.330830616,\n  -29.47002003,\n  -39.829034201,\n  50.031545162,\n  82.815735508,\n  -119.188760828,\n  -48.455928081,\n  163.964263034,\n  46.30378861,\n  -26.248889762,\n  -47.354615322,\n  155.388677633,\n  -166.710356904,\n  42.987233558,\n  144.275297374,\n  37.394383186,\n  -122.550388725,\n  177.469945914,\n  101.104677413,\n  109.429869885,\n  -104.919625624,\n  147.522756541,\n  -81.294703727,\n  122.744731363,\n  81.803603684,\n  26.321556167,\n  147.045441354,\n  147.256895816,\n  -174.211095908,\n  52.518769316,\n  -78.58250334,\n  -173.356685435,\n  -107.728209264,\n  -69.982325771,\n  -113.776095893,\n  -35.785267074,\n  -105.748545976,\n  -30.206523864,\n  -76.185311723,\n  -126.400112781,\n  -26.864958639,\n  56.840053629,\n  93.781553535,\n  -116.002949803,\n  -46.617140948,\n  176.846840093,\n  -144.24821335\n]\n"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/types/guids.json",
    "content": "[\n  \"d35bf0d4-8d8f-4e17-a5c3-ad9bfd675266\",\n  \"db402774-eeb6-463b-9986-c458c44d8b5a\",\n  \"2a2e4101-b5f2-40b8-8750-e03f01661e60\",\n  \"76787cfa-f4eb-4d62-aaad-e1d588d00ad5\",\n  \"fd73894b-b500-4a7c-888c-06b5bd9cec65\",\n  \"cce1862a-cf31-4ef2-9e23-f1d23b4e6163\",\n  \"00a98bb0-2b6e-4368-8512-71c21aa87db7\",\n  \"ab9a8d69-cec7-4550-bd35-3ed678e22782\",\n  \"f18b48e1-5114-4fbe-9652-579e8d66950e\",\n  \"4efe3baa-7ac5-4d6a-a839-6b9cfe825764\",\n  \"b4aec119-5b0a-434c-b388-109816c482a5\",\n  \"e0ef0cbb-127a-4a28-9831-5741b4295275\",\n  \"d50286a5-cb7b-4c9e-be99-f214439bae8c\",\n  \"a981094c-f1ac-42ed-a9fa-86404c7210ff\",\n  \"2a34ee57-5815-4829-b77b-eeebaa8fe340\",\n  \"a0530d44-48f8-4eff-b9ea-8810c4308351\",\n  \"c6f91509-83e1-4ea1-9680-e667fbfd56ee\",\n  \"cab11402-dcdd-4454-b190-6da124947395\",\n  \"283d159c-2b18-4856-b4c7-5059252eaa15\",\n  \"146157c6-72a8-4051-9991-cb6ea6743d81\",\n  \"aef6f269-7306-4bd2-83f7-6d5605b5dc9a\",\n  \"37fe6027-d638-4017-80a9-e7b0567b278e\",\n  \"5003d731-33fb-4159-af61-d76348a44079\",\n  \"e0e06979-5f80-4713-9fe0-8a4d60dc89f8\",\n  \"7e85bdc3-0345-4cb6-9398-ccab06e79976\",\n  \"f2ebf5af-6568-4ffe-a46d-403863fd4b66\",\n  \"e0b5bb1c-b4dd-4535-9a9e-3c73f1167d46\",\n  \"c852d20b-6bcb-4b12-bd57-308296c64c5a\",\n  \"7ac3ae82-1818-49cd-a8a4-5ac77dfafd46\",\n  \"138004a9-76e2-4ad7-bd42-e74dabdbb803\",\n  \"ab25b5be-96be-45b0-b765-947b40ec36a6\",\n  \"08404734-fd57-499e-a4cf-71e9ec782ede\",\n  \"8dfdeb16-248b-4a21-bf89-2e22b11a4101\",\n  \"a0e44ef0-3b09-41e8-ad5d-ed8e6a1a2a67\",\n  \"a7981e49-188d-414a-9779-b1ad91e599d1\",\n  \"329186c0-bf27-4208-baf7-c0a0a5a2d5b7\",\n  \"cb5f3381-d33e-4b30-b1a9-f482623cad33\",\n  \"15031262-ca73-4e3c-bd0a-fcf89bdf0caf\",\n  \"6d7333d1-2e8c-4d78-bfde-5be47e70eb13\",\n  \"acaa160c-670a-4e8f-ac45-49416e77d5f9\",\n  \"228f87eb-cde4-4106-808b-2dbf3c7b6d2e\",\n  \"2ff830a3-5445-4d8e-b161-bddd30666697\",\n  \"f488bedd-ff6e-4108-b9a7-07f6da62f476\",\n  \"2e12b846-0a34-478e-adf7-a438493803e6\",\n 
 \"6686b8ef-7446-4d86-bd8c-df24119e3bfe\",\n  \"e474a5c5-5793-4d41-b4ab-5423acc56ef1\",\n  \"ac046573-e718-44dc-a0dc-9037eeaba6a9\",\n  \"6b0e9099-cf53-4d5a-8a71-977528628fcf\",\n  \"d51a3f22-0ff9-4087-ba9b-fcee2a2d8ade\",\n  \"bdc01286-3511-4d22-bfb8-76d01203d366\",\n  \"ca44eb84-17ff-4f27-8f1e-1bd25f4e8725\",\n  \"4e9a8c2f-be0b-4913-92d2-c801b9a50d04\",\n  \"7685d231-dadd-4041-9165-898397438ab7\",\n  \"86f0bf26-d66a-44d8-99f5-d6768addae3b\",\n  \"2ca1167c-72ba-45a0-aa42-faf033db0d0b\",\n  \"199a1182-ea55-49ff-ba51-71c29cdd0aac\",\n  \"be6a4dd2-c821-4aa0-8b83-d64d6644b5b2\",\n  \"4c5f4781-7f80-4daa-9c20-76b183000514\",\n  \"513b31bd-54fb-4d12-a427-42a7c13ff8e1\",\n  \"8e211bcb-d76c-4012-83ad-74dd7d23b687\",\n  \"44d5807e-0501-4f66-8779-e244d4fdca0a\",\n  \"db8cd555-0563-4b7b-b00c-eada300a7065\",\n  \"cb14d0c9-46cc-4797-bd3a-752b05629f07\",\n  \"4f68b3ef-ac9b-47a0-b6d7-57f398a5c6a5\",\n  \"77221aae-1bcf-471c-be45-7f31f733f9d6\",\n  \"42a7cac8-9e80-4c45-8c71-511d863c98ea\",\n  \"f9018d22-b82c-468c-bdb5-8864d5964801\",\n  \"75f4e9b8-62a2-4f21-ad8a-e19eff0419bc\",\n  \"9b7385c8-8653-4184-951c-b0ac1b36b42e\",\n  \"571018aa-ffbf-4b42-a16d-07b57a7f5f0e\",\n  \"35de4a2f-6bf1-45aa-b820-2a27ea833e44\",\n  \"0b8edb20-3bb4-4cb4-b089-31957466dbab\",\n  \"97da4778-9a7b-4140-a545-968148c81fb7\",\n  \"969f326c-8f2a-47c5-b41c-d9c2f06c9b9d\",\n  \"ae211037-8b53-4b17-bfc8-c06fc7774409\",\n  \"12c5c3c4-0bd5-45d3-bc1d-d04a3c65d3e6\",\n  \"ec02024f-ce43-4dd3-8169-a59f7baee043\",\n  \"5b6afe77-ce48-47ca-90a0-25cd10ca5ffd\",\n  \"2e3a61d4-6b8f-4d2f-ba86-878b4012efd8\",\n  \"19a88a67-a5d3-4647-898f-1cde07bce040\",\n  \"6db6f420-b5c8-48b9-bbb2-8864fe6fed65\",\n  \"5a45dbde-7b53-4f6b-b864-e3b63be3708a\",\n  \"c878321b-8a02-4239-9981-15760c2e7d15\",\n  \"4e36687f-8bf6-4b12-b496-3a8e382d067e\",\n  \"a59a63cd-43c0-4c6e-b208-6dbca86f8176\",\n  \"303308c4-2e4a-45b5-8bf3-3e66e9ad05a1\",\n  \"8b58fdf1-43a6-4c98-9547-6361b50791af\",\n  \"a3563591-72ed-42b5-8e41-bac1d76d70cf\",\n  
\"38db8c78-3739-4f6e-8313-de4138082114\",\n  \"86615bea-7e73-4daf-95da-ae6b9eee1bbb\",\n  \"35d38e3e-076e-40dd-9aa8-05be2603bd59\",\n  \"9f84c62d-b454-4ba3-8c19-a01878985cdc\",\n  \"6721bbae-d765-4a06-8289-6fe46a1bf943\",\n  \"0837796f-d0dd-4e50-9b7c-1983e6cc7c48\",\n  \"021eb7d7-e869-49b9-80c3-9dd16ce2d981\",\n  \"819c56f8-e040-475d-aad5-c6d5e98b20aa\",\n  \"3a61ef02-735e-4229-937d-b3777a3f4e1f\",\n  \"79dfab84-12e6-4ec8-bfc8-460ae71e4eca\",\n  \"a106fabf-e149-476c-8053-b62388b6eb57\",\n  \"9a3900a5-bfb4-4de0-baa5-253a8bd0b634\"\n]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/types/integers.json",
    "content": "[\n  8125686,\n  8958709,\n  5976222,\n  1889524,\n  7968493,\n  1357486,\n  118415,\n  7081097,\n  4635968,\n  7555332,\n  2270233,\n  3428352,\n  8699968,\n  2087333,\n  7861337,\n  7554440,\n  2017031,\n  7981692,\n  6060687,\n  1877715,\n  3297474,\n  8373177,\n  6158629,\n  7853641,\n  3004441,\n  9650406,\n  2695251,\n  1180761,\n  4988426,\n  6043805,\n  8063373,\n  6103218,\n  2848339,\n  8188690,\n  9235573,\n  5949816,\n  6116081,\n  6471138,\n  3354531,\n  4787414,\n  9660600,\n  942529,\n  7278535,\n  7967399,\n  554292,\n  1436493,\n  267319,\n  2606657,\n  7900601,\n  4276634,\n  7996757,\n  8544466,\n  7266469,\n  3301373,\n  4005350,\n  6437652,\n  7717672,\n  7126292,\n  8588394,\n  2127902,\n  7410190,\n  1517806,\n  4583602,\n  3123440,\n  7747613,\n  5029464,\n  9834390,\n  3087227,\n  4913822,\n  7550487,\n  4518144,\n  5862588,\n  1778599,\n  9493290,\n  5588455,\n  3638706,\n  7394293,\n  4294719,\n  3837830,\n  6381878,\n  7175866,\n  8575492,\n  1415229,\n  1453733,\n  6972404,\n  9782571,\n  4234063,\n  7117418,\n  7293130,\n  8057071,\n  9345285,\n  7626648,\n  3358911,\n  4574537,\n  9371826,\n  7627107,\n  6154093,\n  5392367,\n  5398105,\n  6956377\n]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/types/mixed.json",
    "content": "[\n  {\n    \"favoriteFruit\": \"banana\",\n    \"greeting\": \"Hello, Kim! You have 10 unread messages.\",\n    \"friends\": [\n      {\n        \"name\": \"Higgins Rodriquez\",\n        \"id\": 0\n      },\n      {\n        \"name\": \"James Floyd\",\n        \"id\": 1\n      },\n      {\n        \"name\": \"Gay Stewart\",\n        \"id\": 2\n      }\n    ],\n    \"range\": [\n      0,\n      1,\n      2,\n      3,\n      4,\n      5,\n      6,\n      7,\n      8,\n      9\n    ],\n    \"tags\": [\n      \"pariatur\",\n      \"ad\",\n      \"eiusmod\",\n      \"sit\",\n      \"et\",\n      \"velit\",\n      \"culpa\"\n    ],\n    \"longitude\": -57.919246,\n    \"latitude\": -36.022812,\n    \"registered\": \"Friday, March 21, 2014 9:13 PM\",\n    \"about\": \"Laborum nulla aliquip ullamco proident excepteur est officia ipsum. Eiusmod exercitation minim ex do labore reprehenderit aliqua minim qui excepteur reprehenderit cupidatat. Sint enim exercitation duis id consequat nisi enim magna. Commodo aliqua id ipsum sit magna enim. Veniam officia in labore fugiat veniam ea laboris ex veniam duis.\\r\\n\",\n    \"address\": \"323 Pulaski Street, Ronco, North Carolina, 7701\",\n    \"phone\": \"+1 (919) 438-2678\",\n    \"email\": \"kim.griffith@cipromox.biz\",\n    \"company\": \"CIPROMOX\",\n    \"name\": {\n      \"last\": \"Griffith\",\n      \"first\": \"Kim\"\n    },\n    \"eyeColor\": \"green\",\n    \"age\": 26,\n    \"picture\": \"http://placehold.it/32x32\",\n    \"balance\": \"$1,283.55\",\n    \"isActive\": false,\n    \"guid\": \"10ab0392-c5e2-48a3-9473-aa725bad892d\",\n    \"index\": 0,\n    \"_id\": \"551b91198238a0bcf9a41133\"\n  },\n  {\n    \"favoriteFruit\": \"banana\",\n    \"greeting\": \"Hello, Skinner! 
You have 9 unread messages.\",\n    \"friends\": [\n      {\n        \"name\": \"Rhonda Justice\",\n        \"id\": 0\n      },\n      {\n        \"name\": \"Audra Castaneda\",\n        \"id\": 1\n      },\n      {\n        \"name\": \"Vicky Chavez\",\n        \"id\": 2\n      }\n    ],\n    \"range\": [\n      0,\n      1,\n      2,\n      3,\n      4,\n      5,\n      6,\n      7,\n      8,\n      9\n    ],\n    \"tags\": [\n      \"dolore\",\n      \"enim\",\n      \"sit\",\n      \"non\",\n      \"exercitation\",\n      \"fugiat\",\n      \"adipisicing\"\n    ],\n    \"longitude\": -60.291407,\n    \"latitude\": -84.619318,\n    \"registered\": \"Friday, February 7, 2014 3:17 AM\",\n    \"about\": \"Consectetur eiusmod laboris dolore est ullamco nulla in velit quis esse Lorem. Amet aliqua sunt aute occaecat veniam officia in duis proident aliqua cupidatat mollit. Sint eu qui anim duis ut anim duis eu cillum. Cillum nostrud adipisicing tempor Lorem commodo sit in ad qui non et irure qui. Labore eu aliquip id duis eiusmod veniam.\\r\\n\",\n    \"address\": \"347 Autumn Avenue, Fidelis, Puerto Rico, 543\",\n    \"phone\": \"+1 (889) 457-2319\",\n    \"email\": \"skinner.maddox@moltonic.co.uk\",\n    \"company\": \"MOLTONIC\",\n    \"name\": {\n      \"last\": \"Maddox\",\n      \"first\": \"Skinner\"\n    },\n    \"eyeColor\": \"green\",\n    \"age\": 22,\n    \"picture\": \"http://placehold.it/32x32\",\n    \"balance\": \"$3,553.10\",\n    \"isActive\": false,\n    \"guid\": \"cfbc2fb6-2641-4388-b06d-ec0212cfac1e\",\n    \"index\": 1,\n    \"_id\": \"551b91197e0abe92d6642700\"\n  },\n  {\n    \"favoriteFruit\": \"strawberry\",\n    \"greeting\": \"Hello, Reynolds! 
You have 5 unread messages.\",\n    \"friends\": [\n      {\n        \"name\": \"Brady Valdez\",\n        \"id\": 0\n      },\n      {\n        \"name\": \"Boyer Golden\",\n        \"id\": 1\n      },\n      {\n        \"name\": \"Gladys Knapp\",\n        \"id\": 2\n      }\n    ],\n    \"range\": [\n      0,\n      1,\n      2,\n      3,\n      4,\n      5,\n      6,\n      7,\n      8,\n      9\n    ],\n    \"tags\": [\n      \"commodo\",\n      \"eiusmod\",\n      \"cupidatat\",\n      \"et\",\n      \"occaecat\",\n      \"proident\",\n      \"Lorem\"\n    ],\n    \"longitude\": 140.866287,\n    \"latitude\": 1.401032,\n    \"registered\": \"Monday, October 20, 2014 8:01 AM\",\n    \"about\": \"Deserunt elit consequat ea dolor pariatur aute consectetur et nulla ipsum ad. Laboris occaecat ipsum ad duis et esse ea ut voluptate. Ex magna consequat pariatur amet. Quis excepteur non mollit dolore cillum dolor ex esse veniam esse deserunt non occaecat veniam. Sit amet proident proident amet. Nisi est id ut ut adipisicing esse fugiat non dolor aute.\\r\\n\",\n    \"address\": \"872 Montague Terrace, Haena, Montana, 3106\",\n    \"phone\": \"+1 (974) 410-2655\",\n    \"email\": \"reynolds.sanford@combot.biz\",\n    \"company\": \"COMBOT\",\n    \"name\": {\n      \"last\": \"Sanford\",\n      \"first\": \"Reynolds\"\n    },\n    \"eyeColor\": \"green\",\n    \"age\": 21,\n    \"picture\": \"http://placehold.it/32x32\",\n    \"balance\": \"$3,664.47\",\n    \"isActive\": true,\n    \"guid\": \"f9933a9c-c41a-412f-a18d-e727c569870b\",\n    \"index\": 2,\n    \"_id\": \"551b91197f170b65413a06e3\"\n  },\n  {\n    \"favoriteFruit\": \"banana\",\n    \"greeting\": \"Hello, Neva! 
You have 7 unread messages.\",\n    \"friends\": [\n      {\n        \"name\": \"Clara Cotton\",\n        \"id\": 0\n      },\n      {\n        \"name\": \"Ray Gates\",\n        \"id\": 1\n      },\n      {\n        \"name\": \"Jacobs Reese\",\n        \"id\": 2\n      }\n    ],\n    \"range\": [\n      0,\n      1,\n      2,\n      3,\n      4,\n      5,\n      6,\n      7,\n      8,\n      9\n    ],\n    \"tags\": [\n      \"magna\",\n      \"labore\",\n      \"incididunt\",\n      \"velit\",\n      \"ea\",\n      \"et\",\n      \"eiusmod\"\n    ],\n    \"longitude\": -133.058479,\n    \"latitude\": 87.803677,\n    \"registered\": \"Friday, May 9, 2014 5:41 PM\",\n    \"about\": \"Do duis occaecat ut officia occaecat officia nostrud reprehenderit ex excepteur aute anim in reprehenderit. Cupidatat nulla eiusmod nulla non minim veniam aute nulla deserunt adipisicing consectetur veniam. Sit consequat ex laboris aliqua labore consectetur tempor proident consequat est. Fugiat quis esse culpa aliquip. Excepteur laborum aliquip sunt eu cupidatat magna eiusmod amet nisi labore aliquip. Ut consectetur esse aliquip exercitation nulla ex occaecat elit do ex eiusmod deserunt. Ex eu voluptate minim deserunt fugiat minim est occaecat ad Lorem nisi.\\r\\n\",\n    \"address\": \"480 Eagle Street, Fostoria, Oklahoma, 2614\",\n    \"phone\": \"+1 (983) 439-3000\",\n    \"email\": \"neva.barker@pushcart.us\",\n    \"company\": \"PUSHCART\",\n    \"name\": {\n      \"last\": \"Barker\",\n      \"first\": \"Neva\"\n    },\n    \"eyeColor\": \"brown\",\n    \"age\": 36,\n    \"picture\": \"http://placehold.it/32x32\",\n    \"balance\": \"$3,182.24\",\n    \"isActive\": true,\n    \"guid\": \"52489849-78e1-4b27-8b86-e3e5ab2b7dc8\",\n    \"index\": 3,\n    \"_id\": \"551b9119a13061c083c878d5\"\n  },\n  {\n    \"favoriteFruit\": \"banana\",\n    \"greeting\": \"Hello, Rodgers! 
You have 6 unread messages.\",\n    \"friends\": [\n      {\n        \"name\": \"Marguerite Conway\",\n        \"id\": 0\n      },\n      {\n        \"name\": \"Margarita Cunningham\",\n        \"id\": 1\n      },\n      {\n        \"name\": \"Carmela Gallagher\",\n        \"id\": 2\n      }\n    ],\n    \"range\": [\n      0,\n      1,\n      2,\n      3,\n      4,\n      5,\n      6,\n      7,\n      8,\n      9\n    ],\n    \"tags\": [\n      \"ipsum\",\n      \"magna\",\n      \"amet\",\n      \"elit\",\n      \"sit\",\n      \"occaecat\",\n      \"elit\"\n    ],\n    \"longitude\": -125.436981,\n    \"latitude\": 19.868524,\n    \"registered\": \"Tuesday, July 8, 2014 8:09 PM\",\n    \"about\": \"In cillum esse tempor do magna id ad excepteur ex nostrud mollit deserunt aliqua. Minim aliqua commodo commodo consectetur exercitation nulla nisi dolore aliqua in. Incididunt deserunt mollit nostrud excepteur. Ipsum fugiat anim deserunt Lorem aliquip nisi consequat eu minim in ex duis.\\r\\n\",\n    \"address\": \"989 Varanda Place, Duryea, Palau, 3972\",\n    \"phone\": \"+1 (968) 578-2974\",\n    \"email\": \"rodgers.conner@frenex.net\",\n    \"company\": \"FRENEX\",\n    \"name\": {\n      \"last\": \"Conner\",\n      \"first\": \"Rodgers\"\n    },\n    \"eyeColor\": \"blue\",\n    \"age\": 23,\n    \"picture\": \"http://placehold.it/32x32\",\n    \"balance\": \"$1,665.17\",\n    \"isActive\": true,\n    \"guid\": \"ed3b2374-5afe-4fca-9325-8a7bbc9f81a0\",\n    \"index\": 4,\n    \"_id\": \"551b91197bcedb1b56a241ce\"\n  },\n  {\n    \"favoriteFruit\": \"strawberry\",\n    \"greeting\": \"Hello, Mari! 
You have 10 unread messages.\",\n    \"friends\": [\n      {\n        \"name\": \"Irwin Boyd\",\n        \"id\": 0\n      },\n      {\n        \"name\": \"Dejesus Flores\",\n        \"id\": 1\n      },\n      {\n        \"name\": \"Lane Mcmahon\",\n        \"id\": 2\n      }\n    ],\n    \"range\": [\n      0,\n      1,\n      2,\n      3,\n      4,\n      5,\n      6,\n      7,\n      8,\n      9\n    ],\n    \"tags\": [\n      \"esse\",\n      \"aliquip\",\n      \"excepteur\",\n      \"dolor\",\n      \"ex\",\n      \"commodo\",\n      \"anim\"\n    ],\n    \"longitude\": -17.038176,\n    \"latitude\": 17.154663,\n    \"registered\": \"Sunday, April 6, 2014 4:46 AM\",\n    \"about\": \"Excepteur veniam occaecat sint nulla magna in in officia elit. Eiusmod qui dolor fugiat tempor in minim esse officia minim consequat. Lorem ullamco labore proident ipsum id pariatur fugiat consectetur anim cupidatat qui proident non ipsum.\\r\\n\",\n    \"address\": \"563 Hendrickson Street, Westwood, South Dakota, 4959\",\n    \"phone\": \"+1 (980) 434-3976\",\n    \"email\": \"mari.fleming@beadzza.org\",\n    \"company\": \"BEADZZA\",\n    \"name\": {\n      \"last\": \"Fleming\",\n      \"first\": \"Mari\"\n    },\n    \"eyeColor\": \"blue\",\n    \"age\": 21,\n    \"picture\": \"http://placehold.it/32x32\",\n    \"balance\": \"$1,948.04\",\n    \"isActive\": true,\n    \"guid\": \"6bd02166-3b1f-4ed8-84c9-ed96cbf12abc\",\n    \"index\": 5,\n    \"_id\": \"551b9119b359ff6d24846f77\"\n  },\n  {\n    \"favoriteFruit\": \"strawberry\",\n    \"greeting\": \"Hello, Maxine! 
You have 7 unread messages.\",\n    \"friends\": [\n      {\n        \"name\": \"Sullivan Stark\",\n        \"id\": 0\n      },\n      {\n        \"name\": \"Underwood Mclaughlin\",\n        \"id\": 1\n      },\n      {\n        \"name\": \"Kristy Carlson\",\n        \"id\": 2\n      }\n    ],\n    \"range\": [\n      0,\n      1,\n      2,\n      3,\n      4,\n      5,\n      6,\n      7,\n      8,\n      9\n    ],\n    \"tags\": [\n      \"commodo\",\n      \"ipsum\",\n      \"quis\",\n      \"non\",\n      \"est\",\n      \"mollit\",\n      \"exercitation\"\n    ],\n    \"longitude\": -105.40635,\n    \"latitude\": 37.197993,\n    \"registered\": \"Tuesday, January 20, 2015 12:30 AM\",\n    \"about\": \"Proident ullamco Lorem est consequat consectetur non eiusmod esse nostrud pariatur eiusmod enim exercitation eiusmod. Consequat duis elit elit minim ullamco et dolor eu minim do tempor esse consequat excepteur. Mollit dolor do voluptate nostrud quis anim cillum velit tempor eiusmod adipisicing tempor do culpa. Eu magna dolor sit amet nisi do laborum dolore nisi. Deserunt ipsum et deserunt non nisi.\\r\\n\",\n    \"address\": \"252 Boulevard Court, Brenton, Tennessee, 9444\",\n    \"phone\": \"+1 (950) 466-3377\",\n    \"email\": \"maxine.moreno@zentia.tv\",\n    \"company\": \"ZENTIA\",\n    \"name\": {\n      \"last\": \"Moreno\",\n      \"first\": \"Maxine\"\n    },\n    \"eyeColor\": \"brown\",\n    \"age\": 24,\n    \"picture\": \"http://placehold.it/32x32\",\n    \"balance\": \"$1,200.24\",\n    \"isActive\": false,\n    \"guid\": \"ce307a37-ca1f-43f5-b637-dca2605712be\",\n    \"index\": 6,\n    \"_id\": \"551b91195a6164b2e35f6dc8\"\n  },\n  {\n    \"favoriteFruit\": \"strawberry\",\n    \"greeting\": \"Hello, Helga! 
You have 5 unread messages.\",\n    \"friends\": [\n      {\n        \"name\": \"Alicia Vance\",\n        \"id\": 0\n      },\n      {\n        \"name\": \"Vinson Phelps\",\n        \"id\": 1\n      },\n      {\n        \"name\": \"Francisca Kelley\",\n        \"id\": 2\n      }\n    ],\n    \"range\": [\n      0,\n      1,\n      2,\n      3,\n      4,\n      5,\n      6,\n      7,\n      8,\n      9\n    ],\n    \"tags\": [\n      \"nostrud\",\n      \"eiusmod\",\n      \"dolore\",\n      \"officia\",\n      \"sint\",\n      \"non\",\n      \"qui\"\n    ],\n    \"longitude\": -7.275151,\n    \"latitude\": 75.54202,\n    \"registered\": \"Wednesday, October 1, 2014 6:35 PM\",\n    \"about\": \"Quis duis ullamco velit qui. Consectetur non adipisicing id magna anim. Deserunt est officia qui esse. Et do pariatur incididunt anim ad mollit non. Et eiusmod sunt fugiat elit mollit ad excepteur anim nisi laboris eiusmod aliquip aliquip.\\r\\n\",\n    \"address\": \"981 Bush Street, Beaulieu, Vermont, 3775\",\n    \"phone\": \"+1 (956) 506-3807\",\n    \"email\": \"helga.burch@synkgen.name\",\n    \"company\": \"SYNKGEN\",\n    \"name\": {\n      \"last\": \"Burch\",\n      \"first\": \"Helga\"\n    },\n    \"eyeColor\": \"blue\",\n    \"age\": 22,\n    \"picture\": \"http://placehold.it/32x32\",\n    \"balance\": \"$3,827.89\",\n    \"isActive\": false,\n    \"guid\": \"ff5dfea0-1052-4ef2-8b66-4dc1aad0a4fb\",\n    \"index\": 7,\n    \"_id\": \"551b911946be8358ae40e90e\"\n  },\n  {\n    \"favoriteFruit\": \"banana\",\n    \"greeting\": \"Hello, Shaw! 
You have 5 unread messages.\",\n    \"friends\": [\n      {\n        \"name\": \"Christian Cardenas\",\n        \"id\": 0\n      },\n      {\n        \"name\": \"Cohen Pennington\",\n        \"id\": 1\n      },\n      {\n        \"name\": \"Mary Lindsay\",\n        \"id\": 2\n      }\n    ],\n    \"range\": [\n      0,\n      1,\n      2,\n      3,\n      4,\n      5,\n      6,\n      7,\n      8,\n      9\n    ],\n    \"tags\": [\n      \"occaecat\",\n      \"ut\",\n      \"occaecat\",\n      \"magna\",\n      \"exercitation\",\n      \"incididunt\",\n      \"irure\"\n    ],\n    \"longitude\": -89.102972,\n    \"latitude\": 89.489596,\n    \"registered\": \"Thursday, August 21, 2014 5:00 PM\",\n    \"about\": \"Amet cupidatat quis velit aute Lorem consequat pariatur mollit deserunt et sint culpa excepteur duis. Enim proident duis qui ex tempor sunt nostrud occaecat. Officia sit veniam mollit eiusmod minim do aute eiusmod fugiat qui anim adipisicing in laboris. Do tempor reprehenderit sunt laborum esse irure dolor ad consectetur aute sit id ipsum. Commodo et voluptate anim consequat do. Minim laborum ad veniam ad minim incididunt excepteur excepteur aliqua.\\r\\n\",\n    \"address\": \"237 Pierrepont Street, Herbster, New York, 3490\",\n    \"phone\": \"+1 (976) 455-2880\",\n    \"email\": \"shaw.zamora@shadease.me\",\n    \"company\": \"SHADEASE\",\n    \"name\": {\n      \"last\": \"Zamora\",\n      \"first\": \"Shaw\"\n    },\n    \"eyeColor\": \"blue\",\n    \"age\": 38,\n    \"picture\": \"http://placehold.it/32x32\",\n    \"balance\": \"$3,440.82\",\n    \"isActive\": false,\n    \"guid\": \"ac5fdb0e-e1fb-427e-881d-da461be0d1ca\",\n    \"index\": 8,\n    \"_id\": \"551b9119af0077bc28a2de25\"\n  },\n  {\n    \"favoriteFruit\": \"apple\",\n    \"greeting\": \"Hello, Melissa! 
You have 5 unread messages.\",\n    \"friends\": [\n      {\n        \"name\": \"Marion Villarreal\",\n        \"id\": 0\n      },\n      {\n        \"name\": \"Kate Rose\",\n        \"id\": 1\n      },\n      {\n        \"name\": \"Hines Simon\",\n        \"id\": 2\n      }\n    ],\n    \"range\": [\n      0,\n      1,\n      2,\n      3,\n      4,\n      5,\n      6,\n      7,\n      8,\n      9\n    ],\n    \"tags\": [\n      \"amet\",\n      \"veniam\",\n      \"mollit\",\n      \"ad\",\n      \"cupidatat\",\n      \"deserunt\",\n      \"Lorem\"\n    ],\n    \"longitude\": -52.735052,\n    \"latitude\": 16.258838,\n    \"registered\": \"Wednesday, April 16, 2014 7:56 PM\",\n    \"about\": \"Aute ut culpa eiusmod tempor duis dolor tempor incididunt. Nisi non proident excepteur eiusmod incididunt nisi minim irure sit. In veniam commodo deserunt proident reprehenderit et consectetur ullamco quis nulla cupidatat.\\r\\n\",\n    \"address\": \"642 Halsey Street, Blandburg, Kansas, 6761\",\n    \"phone\": \"+1 (941) 539-3851\",\n    \"email\": \"melissa.vaughn@memora.io\",\n    \"company\": \"MEMORA\",\n    \"name\": {\n      \"last\": \"Vaughn\",\n      \"first\": \"Melissa\"\n    },\n    \"eyeColor\": \"brown\",\n    \"age\": 24,\n    \"picture\": \"http://placehold.it/32x32\",\n    \"balance\": \"$2,399.44\",\n    \"isActive\": true,\n    \"guid\": \"1769f022-a7f1-4a69-bf4c-f5a5ebeab2d1\",\n    \"index\": 9,\n    \"_id\": \"551b9119b607c09c7ffc3b8a\"\n  }\n]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/types/nulls.json",
    "content": "[\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null,\n  null\n]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/types/paragraphs.json",
    "content": "[\n  \"Commodo ullamco cupidatat nisi sit proident ex. Cillum pariatur occaecat in officia do commodo nisi cillum tempor minim. Ad dolor ut et aliquip fugiat eu officia cupidatat occaecat consectetur eiusmod veniam enim officia.\\r\\n\",\n  \"Adipisicing cillum laborum nisi irure. Cillum dolor proident duis nulla qui mollit dolore reprehenderit mollit. Irure nulla dolor ipsum irure nulla quis laboris do.\\r\\n\",\n  \"Est adipisicing consectetur incididunt in. Occaecat ea magna ex consequat irure sit laborum cillum officia magna sunt do exercitation aliquip. Laboris id aute in dolore reprehenderit voluptate non deserunt laborum.\\r\\n\",\n  \"Consectetur eu aute est est occaecat adipisicing sint enim dolor eu. Tempor amet id non mollit eu consectetur cillum duis. Eu labore velit nulla ipsum commodo consequat aliquip. Cupidatat commodo dolore mollit enim sit excepteur nisi duis laboris deserunt esse.\\r\\n\",\n  \"Incididunt ullamco est fugiat enim fugiat. Do sit mollit anim ad excepteur eu laboris exercitation officia labore nulla ut. Voluptate non voluptate cillum sit et voluptate anim duis velit consequat aliquip dolor. Elit et et esse laboris consectetur officia eiusmod aliquip nisi est. Qui labore dolore ad dolor.\\r\\n\",\n  \"Anim adipisicing est irure proident sit officia ullamco voluptate sunt consectetur duis mollit excepteur veniam. Nostrud ut duis aute exercitation officia et quis elit commodo elit tempor aute aliquip enim. Est officia non cillum consequat voluptate ipsum sit voluptate nulla id.\\r\\n\",\n  \"Ipsum enim consectetur aliquip nulla commodo ut ex aliqua elit duis do. Officia et sunt aliqua dolor minim voluptate veniam esse elit enim. Adipisicing reprehenderit duis ex magna non in fugiat sunt ipsum nostrud fugiat aliquip. Labore voluptate id officia voluptate eu. Magna do nostrud excepteur sunt aliqua adipisicing qui.\\r\\n\",\n  \"Est occaecat non non cupidatat laborum qui. 
Veniam sit est voluptate labore sit irure consectetur fugiat. Anim enim enim fugiat exercitation anim ad proident esse in aliqua. Laboris ut aute culpa ullamco.\\r\\n\",\n  \"Sit et aliquip cupidatat deserunt eiusmod sint aliquip occaecat nostrud aliqua elit commodo ut magna. Amet sit est deserunt id duis in officia pariatur cupidatat ex. Mollit duis est consequat nulla aute velit ipsum sit consectetur pariatur ut non ex ipsum. Tempor esse velit pariatur reprehenderit et nostrud commodo laborum mollit labore.\\r\\n\",\n  \"Aliquip irure quis esse aliquip. Ex non deserunt culpa aliqua ad anim occaecat ad. Lorem consectetur mollit eu consectetur est non nisi non ipsum. Qui veniam ullamco officia est ut excepteur. Nulla elit dolore cupidatat aliqua enim Lorem elit consequat eiusmod non aliqua eu in. Pariatur in culpa labore sint ipsum consectetur occaecat ad ex ipsum laboris aliquip officia. Non officia eiusmod nisi officia id id laboris deserunt sunt enim magna mollit sit.\\r\\n\",\n  \"Mollit velit laboris laborum nulla aliquip consequat Lorem non incididunt irure. Eu voluptate sint do consectetur tempor sit Lorem in. Laborum eiusmod nisi Lorem ipsum dolore do aute laborum occaecat aute sunt. Sit laborum in ea do ipsum officia irure cillum irure nisi laboris. Ad anim deserunt excepteur ea veniam eiusmod culpa velit veniam. Commodo incididunt ea Lorem eu enim esse nisi incididunt mollit.\\r\\n\",\n  \"Velit proident sunt aute dolore reprehenderit culpa. Pariatur reprehenderit commodo ad ea voluptate anim nulla ipsum eu irure fugiat aliqua et. Adipisicing incididunt anim excepteur voluptate minim qui culpa. Sunt veniam enim reprehenderit magna magna. Sit ad amet deserunt ut aute dolore ad minim.\\r\\n\",\n  \"Esse ullamco sunt mollit mollit. Eu enim dolore laboris cupidatat. Cupidatat adipisicing non aute exercitation fugiat. Non ut cillum labore fugiat aliquip ex duis quis consectetur ut nisi Lorem amet qui. Proident veniam amet qui reprehenderit duis qui. 
Nisi culpa sit occaecat ullamco occaecat laborum fugiat ut. Non duis deserunt culpa duis.\\r\\n\",\n  \"Id ipsum eiusmod laboris non est ipsum deserunt labore duis reprehenderit deserunt. Sint tempor fugiat eiusmod nostrud in ut laborum esse in nostrud sit deserunt nostrud reprehenderit. Cupidatat aliqua qui anim consequat eu quis consequat consequat elit ipsum pariatur. Cupidatat in dolore velit quis. Exercitation cillum ullamco ex consectetur commodo tempor incididunt exercitation labore ad dolore. Minim incididunt consequat adipisicing esse eu eu voluptate.\\r\\n\",\n  \"Anim sint eiusmod nisi anim do deserunt voluptate ut cillum eiusmod esse ex reprehenderit laborum. Dolore nulla excepteur duis excepteur. Magna nisi nostrud duis non commodo velit esse ipsum Lorem incididunt. Nulla enim consequat ad aliqua. Incididunt irure culpa nostrud ea aute ex sit non ad esse.\\r\\n\",\n  \"Ullamco nostrud cupidatat adipisicing anim fugiat mollit eu. Et ut eu in nulla consequat. Sunt do pariatur culpa non est.\\r\\n\",\n  \"Pariatur incididunt reprehenderit non qui excepteur cillum exercitation nisi occaecat ad. Lorem aliquip laborum commodo reprehenderit sint. Laboris qui ut veniam magna quis et et ullamco voluptate. Tempor reprehenderit deserunt consequat nisi. Esse duis sint in tempor. Amet aute cupidatat in sint et.\\r\\n\",\n  \"Est officia nisi dolore consequat irure et excepteur. Sit qui elit tempor magna qui cillum anim amet proident exercitation proident. Eu cupidatat laborum consectetur duis ullamco irure nulla. Adipisicing culpa non reprehenderit anim aute.\\r\\n\",\n  \"Eu est laborum culpa velit dolore non sunt. Tempor magna veniam ea sit non qui Lorem qui exercitation aliqua aliqua et excepteur eiusmod. Culpa aute anim proident culpa adipisicing duis tempor elit aliquip elit nulla laboris esse dolore. Sit adipisicing non dolor eiusmod occaecat cupidatat.\\r\\n\",\n  \"Culpa velit eu esse sunt. 
Laborum irure aliqua reprehenderit velit ipsum fugiat officia dolor ut aute officia deserunt. Ipsum sit quis fugiat nostrud aliqua cupidatat ex pariatur et. Cillum proident est irure nisi dolor aliqua deserunt esse occaecat velit dolor.\\r\\n\",\n  \"Exercitation nulla officia sit eiusmod cillum eu incididunt officia exercitation qui Lorem deserunt. Voluptate Lorem minim commodo laborum esse in duis excepteur do duis aliquip nisi voluptate consectetur. Amet tempor officia enim ex esse minim reprehenderit.\\r\\n\",\n  \"Laboris sint deserunt ad aute incididunt. Anim officia sunt elit qui laborum labore commodo irure non. Mollit adipisicing ullamco do aute nulla eu laborum et quis sint aute adipisicing amet. Aliqua officia irure nostrud duis ex.\\r\\n\",\n  \"Eiusmod ipsum aliqua reprehenderit esse est non aute id veniam eiusmod. Elit consequat ad sit tempor elit eu incididunt quis irure ad. Eu incididunt veniam consequat Lorem nostrud cillum officia ea consequat ad cillum. Non nisi irure cupidatat incididunt pariatur incididunt. Duis velit officia ad cillum qui. Aliquip consequat sint aute nisi cillum. Officia commodo nisi incididunt laborum nisi voluptate aliquip Lorem cupidatat anim consequat sit laboris.\\r\\n\",\n  \"Veniam cupidatat et incididunt mollit do ex voluptate veniam nostrud labore esse. Eiusmod irure sint fugiat esse. Aute irure consectetur ut mollit nulla sint esse. Lorem ut quis ex proident nostrud mollit nostrud ea duis duis in magna anim consectetur.\\r\\n\",\n  \"Irure culpa esse qui do dolor fugiat veniam ad. Elit commodo aute elit magna incididunt tempor pariatur velit irure pariatur cillum et ea ad. Ad consequat ea et ad minim ut sunt qui commodo voluptate. Laboris est aliquip anim reprehenderit eu officia et exercitation. Occaecat laboris cupidatat Lorem ullamco in nostrud commodo ipsum in quis esse ex.\\r\\n\",\n  \"Incididunt officia quis voluptate eiusmod esse nisi ipsum quis commodo. 
Eiusmod dolore tempor occaecat sit exercitation aliqua minim consequat minim mollit qui ad nisi. Aute quis irure adipisicing veniam nisi nisi velit deserunt incididunt anim nostrud.\\r\\n\",\n  \"Voluptate exercitation exercitation id minim excepteur excepteur mollit. Fugiat aute proident nulla ullamco ea. Nisi ea culpa duis dolore veniam anim tempor officia in dolore exercitation exercitation. Dolore quis cillum adipisicing sunt do nulla esse proident ad sint.\\r\\n\",\n  \"Laborum ut mollit sint commodo nulla laborum deserunt Lorem magna commodo mollit tempor deserunt ut. Qui aliquip commodo ea id. Consectetur dolor fugiat dolor excepteur eiusmod. Eu excepteur ex aute ex ex elit ex esse officia cillum exercitation. Duis ut labore ea nostrud excepteur. Reprehenderit labore aute sunt nisi quis Lorem officia. Ad aliquip cupidatat voluptate exercitation voluptate ad irure magna quis.\\r\\n\",\n  \"Tempor velit veniam sit labore elit minim do elit cillum eiusmod sunt excepteur nisi. Aliquip est deserunt excepteur duis fugiat incididunt veniam fugiat. Pariatur sit irure labore et minim non. Cillum quis aute anim sint laboris laboris ullamco exercitation nostrud. Nulla pariatur id laborum minim nisi est adipisicing irure.\\r\\n\",\n  \"Irure exercitation laboris nostrud in do consectetur ad. Magna aliqua Lorem culpa exercitation sint do culpa incididunt mollit eu exercitation. Elit tempor Lorem dolore enim deserunt. Anim et ullamco sint ullamco mollit cillum officia et. Proident incididunt laboris aliquip laborum sint veniam deserunt eu consequat deserunt voluptate laboris. Anim Lorem non laborum exercitation voluptate. Cupidatat reprehenderit culpa Lorem fugiat enim minim consectetur tempor quis ad reprehenderit laboris irure.\\r\\n\",\n  \"Deserunt elit mollit nostrud occaecat labore reprehenderit laboris ex. Esse reprehenderit adipisicing cillum minim in esse aliquip excepteur ex et nisi cillum quis. Cillum labore ut ex sunt. 
Occaecat proident et mollit magna consequat irure esse. Dolor do enim esse nisi ad.\\r\\n\",\n  \"Pariatur est anim cillum minim elit magna adipisicing quis tempor proident nisi laboris incididunt cupidatat. Nulla est adipisicing sit adipisicing id nostrud amet qui consequat eiusmod tempor voluptate ad. Adipisicing non magna sit occaecat magna mollit ad ex nulla velit ea pariatur. Irure labore ad ea exercitation ex cillum.\\r\\n\",\n  \"Lorem fugiat eu eu cillum nulla tempor sint. Lorem id officia nulla velit labore ut duis ad tempor non. Excepteur quis aute adipisicing nisi nisi consectetur aliquip enim Lorem id ullamco cillum sint voluptate. Qui aliquip incididunt tempor aliqua voluptate labore reprehenderit. Veniam eiusmod elit occaecat voluptate tempor culpa consectetur ea ut exercitation eiusmod exercitation qui.\\r\\n\",\n  \"Aliqua esse pariatur nulla veniam velit ea. Aliquip consectetur tempor ex magna sit aliquip exercitation veniam. Dolor ullamco minim commodo pariatur. Et amet reprehenderit dolore proident elit tempor eiusmod eu incididunt enim ullamco. Adipisicing id officia incididunt esse dolor sunt cupidatat do deserunt mollit do non. Magna ut officia fugiat adipisicing quis ea cillum laborum dolore ad nostrud magna minim est. Dolor voluptate officia proident enim ea deserunt eu voluptate dolore proident laborum officia ea.\\r\\n\",\n  \"Culpa aute consequat esse fugiat cupidatat minim voluptate voluptate eiusmod irure anim elit. Do eiusmod culpa laboris consequat incididunt minim nostrud eiusmod commodo velit ea ullamco proident. Culpa pariatur magna ut mollit nisi. Ea officia do magna deserunt minim nisi tempor ea deserunt veniam cillum exercitation esse.\\r\\n\",\n  \"Anim ullamco nostrud commodo Lorem. Do sunt laborum exercitation proident proident magna. Lorem officia laborum laborum dolor sunt duis commodo Lorem. Officia aute adipisicing ea cupidatat ea dolore. 
Aliquip adipisicing pariatur consectetur aliqua sit amet officia reprehenderit laborum culpa. Occaecat Lorem eu nisi do Lorem occaecat enim eiusmod laboris id quis. Ad mollit adipisicing sunt adipisicing esse.\\r\\n\",\n  \"Laborum quis sit adipisicing cupidatat. Veniam Lorem eiusmod esse esse sint nisi labore elit et. Deserunt aliqua mollit ut commodo aliqua non incididunt ipsum reprehenderit consectetur. Eiusmod nulla minim laboris Lorem ea Lorem aute tempor pariatur in sit. Incididunt culpa ut do irure amet irure cupidatat est anim anim culpa occaecat. Est velit consectetur eiusmod veniam reprehenderit officia sunt occaecat eiusmod ut sunt occaecat amet.\\r\\n\",\n  \"Elit minim aute fugiat nulla ex quis. Labore fugiat sint nostrud amet quis culpa excepteur in. Consectetur exercitation cupidatat laborum sit. Aute nisi eu aliqua est deserunt eiusmod commodo dolor id. Mollit laborum esse sint ipsum voluptate reprehenderit velit et. Veniam aliquip enim in veniam Lorem voluptate quis deserunt consequat qui commodo ut excepteur aute.\\r\\n\",\n  \"Dolore deserunt veniam aute nisi labore sunt et voluptate irure nisi anim ea. Magna nisi quis anim mollit nisi est dolor do ex aliquip elit aliquip ipsum minim. Dolore est officia nostrud eiusmod ex laborum ea amet est. Officia culpa non est et tempor consectetur exercitation tempor eiusmod enim. Ea tempor laboris qui amet ex nisi culpa dolore consectetur incididunt sunt sunt. Lorem aliquip incididunt magna do et ullamco ex elit aliqua eiusmod qui. Commodo amet dolor sint incididunt ex veniam non Lorem fugiat.\\r\\n\",\n  \"Officia culpa enim voluptate dolore commodo. Minim commodo aliqua minim ex sint excepteur cupidatat adipisicing eu irure. Anim magna deserunt anim Lorem non.\\r\\n\",\n  \"Cupidatat aliquip nulla excepteur sunt cupidatat cupidatat laborum cupidatat exercitation. Laboris minim ex cupidatat culpa elit. Amet enim reprehenderit aliqua laborum est tempor exercitation cupidatat ex dolore do. 
Do incididunt labore fugiat commodo consectetur nisi incididunt irure sit culpa sit. Elit aute occaecat qui excepteur velit proident cillum qui aliqua ex do ex. Dolore irure ex excepteur veniam id proident mollit Lorem.\\r\\n\",\n  \"Ad commodo cillum duis deserunt elit officia consectetur veniam eiusmod. Reprehenderit et veniam ad commodo reprehenderit magna elit laboris sunt non quis. Adipisicing dolor aute proident ea magna sunt et proident in consectetur.\\r\\n\",\n  \"Veniam exercitation esse esse veniam est nisi. Minim velit incididunt sint aute dolor anim. Fugiat cupidatat id ad nisi in voluptate dolor culpa eiusmod magna eiusmod amet id. Duis aliquip labore et ex amet amet aliquip laborum eiusmod ipsum. Quis qui ut duis duis. Minim in voluptate reprehenderit aliqua.\\r\\n\",\n  \"Elit ut pariatur dolor veniam ipsum consequat. Voluptate Lorem mollit et esse dolore mollit Lorem ad. Elit nostrud eu Lorem labore mollit minim cupidatat officia quis minim dolore incididunt. In cillum aute cillum ut.\\r\\n\",\n  \"Commodo laborum deserunt ut cupidatat pariatur ullamco in esse anim exercitation cillum duis. Consectetur incididunt sit esse Lorem in aute. Eiusmod mollit Lorem consequat minim reprehenderit laborum enim excepteur irure nisi elit. Laborum esse proident aute aute proident adipisicing laborum. Pariatur tempor duis incididunt qui velit pariatur ut officia ea mollit labore dolore. Cillum pariatur minim ullamco sunt incididunt culpa id ullamco exercitation consectetur. Ea exercitation consequat reprehenderit ut ullamco velit eu ad velit magna excepteur eiusmod.\\r\\n\",\n  \"Eu deserunt magna laboris laborum laborum in consequat dolore. Officia proident consectetur proident do occaecat minim pariatur officia ipsum sit non velit officia cillum. Laborum excepteur labore eu minim eiusmod. Sit anim dolore cillum ad do minim culpa sit est ad.\\r\\n\",\n  \"Cupidatat dolor nostrud Lorem sint consequat quis. Quis labore sint incididunt officia tempor. 
Fugiat nostrud in elit reprehenderit dolor. Nisi sit enim officia minim est adipisicing nulla aute labore nulla nostrud cupidatat est. Deserunt dolore qui irure Lorem esse voluptate velit qui nostrud.\\r\\n\",\n  \"Fugiat Lorem amet nulla nisi qui amet laboris enim cillum. Dolore occaecat exercitation id labore velit do commodo ut cupidatat laborum velit fugiat mollit. Ut et aliqua pariatur occaecat. Lorem occaecat dolore quis esse enim cupidatat exercitation ut tempor sit laboris fugiat adipisicing. Est tempor ex irure consectetur ipsum magna labore. Lorem non quis qui minim nisi magna amet aliquip ex cillum fugiat tempor.\\r\\n\",\n  \"Aliquip eiusmod laborum ipsum deserunt velit esse do magna excepteur consectetur exercitation sit. Minim ullamco reprehenderit commodo nostrud exercitation id irure ex qui ullamco sit esse laboris. Nulla cillum non minim qui cillum nisi aute proident. Dolor anim culpa elit quis excepteur aliqua eiusmod. Elit ea est excepteur consectetur sunt eiusmod enim id commodo irure amet et pariatur laboris. Voluptate magna ad magna dolore cillum cillum irure laboris ipsum officia id Lorem veniam.\\r\\n\",\n  \"Esse sunt elit est aliquip cupidatat commodo deserunt. Deserunt pariatur ipsum qui ad esse esse magna qui cillum laborum. Exercitation veniam pariatur elit amet enim.\\r\\n\",\n  \"Esse quis in id elit nulla occaecat incididunt. Et amet Lorem mollit in veniam do. Velit mollit Lorem consequat commodo Lorem aliquip cupidatat. Minim consequat nostrud nulla in nostrud.\\r\\n\",\n  \"Cillum nulla et eu est nostrud quis elit cupidatat dolor enim excepteur exercitation nisi voluptate. Nulla dolore non ex velit et qui tempor proident id deserunt nisi eu. Tempor ad Lorem ipsum reprehenderit in anim. Anim dolore ullamco enim deserunt quis ex id exercitation velit. Magna exercitation fugiat mollit pariatur ipsum ex consectetur nostrud. Id dolore officia nostrud excepteur laborum. 
Magna incididunt elit ipsum pariatur adipisicing enim duis est qui commodo velit aute.\\r\\n\",\n  \"Quis esse ex qui nisi dolor. Ullamco laborum dolor esse laboris eiusmod ea magna laboris ea esse ut. Dolore ipsum pariatur veniam sint mollit. Lorem ea proident fugiat ullamco ut nisi culpa eu exercitation exercitation aliquip veniam laborum consectetur.\\r\\n\",\n  \"Pariatur veniam laboris sit aliquip pariatur tempor aute sunt id et ut. Laboris excepteur eiusmod nisi qui quis elit enim ut cupidatat. Et et laborum in fugiat veniam consectetur ipsum laboris duis excepteur ullamco aliqua dolor Lorem. Aliqua ex amet sint anim cupidatat nisi ipsum anim et sunt deserunt. Occaecat culpa ut tempor cillum pariatur ex tempor.\\r\\n\",\n  \"Dolor deserunt eiusmod magna do officia voluptate excepteur est cupidatat. Veniam qui cupidatat amet anim est qui consectetur sit commodo commodo ea ad. Enim ad adipisicing qui nostrud. Non nulla esse ullamco nulla et ex.\\r\\n\",\n  \"Id ullamco ea consectetur est incididunt deserunt et esse. Elit nostrud voluptate eiusmod ut. Excepteur adipisicing qui cupidatat consequat labore id. Qui dolor aliqua do dolore do cupidatat labore ex consectetur ea sit cillum. Sint veniam eiusmod in consectetur consequat fugiat et mollit ut fugiat esse dolor adipisicing.\\r\\n\",\n  \"Ea magna proident labore duis pariatur. Esse cillum aliquip dolor duis fugiat ea ex officia ea irure. Sint elit nisi pariatur sunt nostrud exercitation ullamco culpa magna do.\\r\\n\",\n  \"Minim aliqua voluptate dolor consequat sint tempor deserunt amet magna excepteur. Irure do voluptate magna velit. Nostrud in reprehenderit magna officia nostrud. Cupidatat nulla irure laboris non fugiat ex ex est cupidatat excepteur officia aute velit duis. Sit voluptate id ea exercitation deserunt culpa voluptate nostrud est adipisicing incididunt. 
Amet proident laborum commodo magna ipsum quis.\\r\\n\",\n  \"Ipsum consectetur consectetur excepteur tempor eiusmod ea fugiat aute velit magna in officia sunt. Sit ut sunt dolore cupidatat dolor adipisicing. Veniam nisi adipisicing esse reprehenderit amet aliqua voluptate ex commodo occaecat est voluptate mollit sunt. Pariatur aliqua qui qui in dolor. Fugiat reprehenderit sit nostrud do sint esse. Tempor sit irure adipisicing ea pariatur duis est sit est incididunt laboris quis do. Et voluptate anim minim aliquip excepteur consequat nisi anim pariatur aliquip ut ipsum dolor magna.\\r\\n\",\n  \"Cillum sit labore excepteur magna id aliqua exercitation consequat laborum Lorem id pariatur nostrud. Lorem qui est labore sint cupidatat sint excepteur nulla in eu aliqua et. Adipisicing velit do enim occaecat laboris quis excepteur ipsum dolor occaecat Lorem dolore id exercitation.\\r\\n\",\n  \"Incididunt in laborum reprehenderit eiusmod irure ex. Elit duis consequat minim magna. Esse consectetur aliquip cillum excepteur excepteur fugiat. Sint tempor consequat minim reprehenderit consectetur adipisicing dolor id Lorem elit non. Occaecat esse quis mollit ea et sint aute fugiat qui tempor. Adipisicing tempor duis non dolore irure elit deserunt qui do.\\r\\n\",\n  \"Labore fugiat eiusmod sint laborum sit duis occaecat. Magna in laborum non cillum excepteur nostrud sit proident pariatur voluptate voluptate adipisicing exercitation occaecat. Ad non dolor aute ex sint do do minim exercitation veniam laborum irure magna ea. Magna do non quis sit consequat Lorem aliquip.\\r\\n\",\n  \"Velit anim do laborum laboris laborum Lorem. Sunt do Lorem amet ipsum est sint velit sit do voluptate mollit veniam enim. Commodo do deserunt in pariatur ut elit sint elit deserunt ea. Ad dolor anim consequat aliquip ut mollit nostrud tempor sunt mollit elit. Reprehenderit laboris labore excepteur occaecat veniam adipisicing cupidatat esse. Ad enim aliquip ea minim excepteur magna. 
Sint velit veniam pariatur qui dolor est adipisicing ex laboris.\\r\\n\",\n  \"Ea cupidatat ex nulla in sunt est sit dolor enim ad. Eu tempor consequat cupidatat consequat ex incididunt sint culpa. Est Lorem Lorem non cupidatat sunt ut aliqua non nostrud do ullamco. Reprehenderit ad ad nulla nostrud do nulla in. Ipsum adipisicing commodo mollit ipsum exercitation. Aliqua ea anim anim est elit. Ea incididunt consequat minim ad sunt eu cillum.\\r\\n\",\n  \"Tempor quis excepteur eiusmod cupidatat ipsum occaecat id et occaecat. Eiusmod magna aliquip excepteur id amet elit. Ullamco dolore amet anim dolor enim ea magna magna elit. Occaecat magna pariatur in deserunt consectetur officia aliquip ullamco ex aute anim. Minim laborum eu sit elit officia esse do irure pariatur tempor et reprehenderit ullamco labore.\\r\\n\",\n  \"Sit tempor eu minim dolore velit pariatur magna duis reprehenderit ea nulla in. Amet est do consectetur commodo do adipisicing adipisicing in amet. Cillum id ut commodo do pariatur duis aliqua nisi sint ad irure officia reprehenderit. Mollit labore id enim fugiat ullamco irure mollit cupidatat. Quis nisi amet labore eu dolor occaecat commodo aliqua laboris deserunt excepteur deserunt officia. Aliqua non ut sit ad. Laborum veniam ad velit minim dolore ea id magna dolor qui in.\\r\\n\",\n  \"Dolore nostrud ipsum aliqua pariatur id reprehenderit enim ad eiusmod qui. Deserunt anim commodo pariatur excepteur velit eu irure nulla ex labore ipsum aliqua minim aute. Id consequat amet tempor aliquip ex elit adipisicing est do. Eu enim Lorem consectetur minim id irure nulla culpa. Consectetur do consequat aute tempor anim. Qui ad non elit dolor est adipisicing nisi amet cillum sunt quis anim laboris incididunt. Incididunt proident adipisicing labore Lorem.\\r\\n\",\n  \"Et reprehenderit ea officia veniam. Aliquip ullamco consequat elit nisi magna mollit id elit. Amet amet sint velit labore ad nisi. Consectetur tempor id dolor aliqua esse deserunt amet. 
Qui laborum enim proident voluptate aute eu aute aute sit sit incididunt eu. Sunt ullamco nisi nostrud labore commodo non consectetur quis do duis minim irure. Tempor sint dolor sint aliquip dolore nostrud fugiat.\\r\\n\",\n  \"Aute ullamco quis nisi ut excepteur nostrud duis elit. Veniam ex ad incididunt veniam voluptate. Commodo dolore ullamco sit sint adipisicing proident amet aute duis deserunt.\\r\\n\",\n  \"Labore velit eu cillum nisi. Laboris do cupidatat et non duis cillum. Ullamco dolor tempor cupidatat voluptate laborum ullamco ea duis.\\r\\n\",\n  \"Deserunt consequat aliqua duis aliquip nostrud nostrud dolore nisi. Culpa do sint laborum consectetur ipsum quis laborum laborum pariatur eiusmod. Consectetur laboris ad ad ut quis. Ullamco laboris qui velit id laborum voluptate qui aute nostrud aliquip ea.\\r\\n\",\n  \"Ad cillum anim ex est consectetur mollit id in. Non enim aliquip consequat qui deserunt commodo cillum ad laborum fugiat. Dolor deserunt amet laborum tempor adipisicing voluptate dolor pariatur dolor cillum. Eu mollit ex sunt officia veniam qui est sunt proident. Non aliqua qui elit eu cupidatat ex enim ex proident. Lorem sit minim ullamco officia cupidatat duis minim. Exercitation laborum deserunt voluptate culpa tempor quis nulla id pariatur.\\r\\n\",\n  \"Nostrud quis consectetur ut aliqua excepteur elit consectetur occaecat. Occaecat voluptate Lorem pariatur consequat ullamco fugiat minim. Anim voluptate eu eu cillum tempor dolore aliquip aliqua. Fugiat incididunt ut tempor amet minim. Voluptate nostrud minim pariatur non excepteur ullamco.\\r\\n\",\n  \"Dolore nulla velit officia exercitation irure laboris incididunt anim in laborum in fugiat ut proident. Fugiat aute id consequat fugiat officia ut. Labore sint amet proident amet sint nisi laboris amet id ullamco culpa quis consequat proident. Magna do fugiat veniam dolore elit irure minim. Esse ullamco excepteur labore tempor labore fugiat dolore nisi cupidatat irure dolor pariatur. 
Magna excepteur laboris nisi eiusmod sit pariatur mollit.\\r\\n\",\n  \"In enim aliquip officia ea ad exercitation cillum culpa occaecat dolore Lorem. Irure cillum commodo adipisicing sunt pariatur ea duis fugiat exercitation laboris culpa ullamco aute. Ut voluptate exercitation qui dolor. Irure et duis elit consequat deserunt proident.\\r\\n\",\n  \"Officia ea Lorem sunt culpa id et tempor excepteur enim deserunt proident. Dolore aliquip dolor laboris cillum proident velit. Et culpa occaecat exercitation cupidatat irure sint adipisicing excepteur pariatur incididunt ad occaecat. Qui proident ipsum cillum minim. Quis ut culpa irure aliqua minim fugiat. In voluptate cupidatat fugiat est laborum dolor esse in pariatur voluptate.\\r\\n\",\n  \"Voluptate enim ipsum officia aute ea adipisicing nisi ut ex do aliquip amet. Reprehenderit enim voluptate tempor ex adipisicing culpa. Culpa occaecat voluptate dolor mollit ipsum exercitation labore et tempor sit ea consectetur aliqua. Elit elit sit minim ea ea commodo do tempor cupidatat irure dolore. Occaecat esse adipisicing anim eiusmod commodo fugiat mollit amet. Incididunt tempor tempor qui occaecat cupidatat in.\\r\\n\",\n  \"Ut qui anim velit enim aliquip do ut nulla labore. Mollit ut commodo ut eiusmod consectetur laboris aliqua qui voluptate culpa fugiat incididunt elit. Lorem ullamco esse elit elit. Labore amet incididunt ea nulla aliquip eiusmod. Sit nulla est voluptate officia ipsum aute aute cillum tempor deserunt. Laboris commodo eiusmod labore sunt aute excepteur ea consectetur reprehenderit veniam nisi. Culpa nisi sint sunt sint tempor laboris dolore cupidatat.\\r\\n\",\n  \"Duis cillum qui nisi duis amet velit ad cillum ut elit aute sint ad. Amet laboris pariatur excepteur ipsum Lorem aliqua veniam Lorem quis mollit cupidatat aliqua exercitation. Pariatur ex ullamco sit commodo cillum eiusmod ut proident elit cillum. 
Commodo ut ipsum excepteur occaecat sint elit consequat ex dolor adipisicing consectetur id ut ad. Velit sit eiusmod est esse tempor incididunt consectetur eiusmod duis commodo veniam.\\r\\n\",\n  \"Ut sunt qui officia anim laboris exercitation Lorem quis laborum do eiusmod officia. Enim consectetur occaecat fugiat cillum cillum. Dolore dolore nostrud in commodo fugiat mollit consequat occaecat non et et elit ullamco. Sit voluptate minim ut est culpa velit nulla fugiat reprehenderit eu aliquip adipisicing labore. Sit minim minim do dolor dolor. Lorem Lorem labore exercitation magna veniam eiusmod do.\\r\\n\",\n  \"Fugiat dolor adipisicing quis aliquip aute dolore. Qui proident anim elit veniam ex aliquip eiusmod ipsum sunt pariatur est. Non fugiat duis do est officia adipisicing.\\r\\n\",\n  \"Nulla deserunt do laboris cupidatat veniam do consectetur ipsum elit veniam in mollit eu. Ea in consequat cupidatat laboris sint fugiat irure. In commodo esse reprehenderit deserunt minim velit ullamco enim eu cupidatat tempor ex. Ullamco in non id culpa amet occaecat culpa nostrud id. Non occaecat culpa magna incididunt.\\r\\n\",\n  \"Enim laboris ex mollit reprehenderit eiusmod exercitation magna. Exercitation Lorem ex mollit non non culpa labore enim. Adipisicing labore dolore incididunt do amet aliquip excepteur ad et nostrud officia aute veniam voluptate. Fugiat enim eiusmod Lorem esse. Minim ullamco commodo consequat ex commodo aliqua eu nulla eu. Veniam non enim nulla ut Lorem nostrud minim sint duis.\\r\\n\",\n  \"Enim duis consectetur in ullamco cillum veniam nulla amet. Exercitation nisi sunt sunt duis in culpa nisi magna ex id ipsum laboris reprehenderit qui. Officia pariatur qui ex fugiat veniam et sunt sit nostrud. Veniam ullamco tempor fugiat minim Lorem proident velit in eiusmod elit. Enim minim excepteur aute aliquip ex magna commodo dolore qui et labore. Proident eu aliquip cillum dolor. 
Nostrud ipsum ut irure consequat fugiat nulla proident occaecat laborum.\\r\\n\",\n  \"Amet duis eiusmod sunt adipisicing esse ex nostrud consectetur voluptate cillum. Ipsum occaecat sit et anim velit irure ea incididunt cupidatat ullamco in nisi quis. Esse officia ipsum commodo qui quis qui do. Commodo aliquip amet aute sit sit ut cupidatat elit nostrud.\\r\\n\",\n  \"Laboris laboris sit mollit cillum nulla deserunt commodo culpa est commodo anim id anim sit. Officia id consectetur velit incididunt est dolor sunt ipsum magna aliqua consectetur. Eiusmod pariatur minim deserunt cupidatat veniam Lorem aliquip sunt proident eu Lorem sit dolor fugiat. Proident qui ut ex in incididunt nulla nulla dolor ex laboris ea ad.\\r\\n\",\n  \"Ex incididunt enim labore nulla cupidatat elit. Quis ut incididunt incididunt non irure commodo do mollit cillum anim excepteur. Qui consequat laborum dolore elit tempor aute ut nulla pariatur eu ullamco veniam. Nisi non velit labore in commodo excepteur culpa nulla tempor cillum. Ipsum qui sit sint reprehenderit ut labore incididunt dolor aliquip sunt. Reprehenderit occaecat tempor nisi laborum.\\r\\n\",\n  \"Lorem officia ullamco eu occaecat in magna eiusmod consectetur nisi aliqua mollit esse. Ullamco ex aute nostrud pariatur do enim cillum sint do fugiat nostrud culpa tempor. Do aliquip excepteur nostrud culpa eu pariatur eiusmod cillum excepteur do. Est sunt non quis cillum voluptate ex.\\r\\n\",\n  \"Deserunt consectetur tempor irure mollit qui tempor et. Labore enim eu irure laboris in. Nisi in tempor ex occaecat amet cupidatat laboris occaecat amet minim ut magna incididunt id. Consequat cillum laborum commodo mollit. Et magna culpa sunt dolore consequat laboris et sit. Deserunt qui voluptate excepteur dolor. Eu qui amet est proident.\\r\\n\",\n  \"Eu elit minim eiusmod occaecat eu nostrud dolor qui ut elit. Sunt dolore proident ea eu do eiusmod fugiat incididunt pariatur duis amet Lorem nisi ut. 
Adipisicing quis veniam cupidatat Lorem sint culpa sunt veniam sint. Excepteur eu exercitation est magna pariatur veniam dolore qui fugiat labore proident eiusmod cillum. Commodo reprehenderit elit proident duis sint magna.\\r\\n\",\n  \"Ut aliquip pariatur deserunt nostrud commodo ad proident est exercitation. Sit minim do ea enim sint officia nisi incididunt laborum. Ex amet duis commodo fugiat. Ut aute tempor deserunt irure occaecat aliquip voluptate cillum aute elit qui nostrud.\\r\\n\",\n  \"Irure et quis consectetur sit est do sunt aliquip eu. Cupidatat pariatur consequat dolore consectetur. Adipisicing magna velit mollit occaecat do id. Nisi pariatur cupidatat cillum incididunt excepteur consectetur excepteur do laborum deserunt irure pariatur cillum.\\r\\n\",\n  \"Adipisicing esse incididunt cillum est irure consequat irure ad aute voluptate. Incididunt do occaecat nostrud do ipsum pariatur Lorem qui laboris et pariatur. Est exercitation dolor culpa ad velit ut et.\\r\\n\",\n  \"Sit eiusmod id enim ad ex dolor pariatur do. Ullamco occaecat quis dolor minim non elit labore amet est. Commodo velit eu nulla eiusmod ullamco. Incididunt anim pariatur aute eiusmod veniam tempor enim officia elit id. Elit Lorem est commodo dolore nostrud. Labore et consectetur do exercitation veniam laboris incididunt aliqua proident dolore ea officia cupidatat. Velit laboris aliquip deserunt labore commodo.\\r\\n\",\n  \"Proident nostrud labore eu nostrud. Excepteur ut in velit labore ea proident labore ea sint cillum. Incididunt ipsum consectetur officia irure sit pariatur veniam id velit officia mollit. Adipisicing magna voluptate velit excepteur enim consectetur incididunt voluptate tempor occaecat fugiat velit excepteur labore. Do do incididunt qui nisi voluptate enim. Laboris aute sit voluptate cillum pariatur minim excepteur ullamco mollit deserunt.\\r\\n\",\n  \"Excepteur laborum adipisicing nisi elit fugiat tempor. Elit laboris qui enim labore duis. 
Proident tempor in consectetur proident excepteur do ex laboris sit.\\r\\n\",\n  \"Dolore do ea incididunt do duis dolore eu labore nisi cupidatat voluptate amet incididunt minim. Nulla pariatur mollit cupidatat adipisicing nulla et. Dolor aliquip in ex magna excepteur. Nulla consequat minim consequat ullamco dolor laboris ullamco eu reprehenderit duis nostrud pariatur.\\r\\n\",\n  \"Id nisi labore duis qui. Incididunt laboris tempor aute do sit. Occaecat excepteur est mollit ea in mollit ullamco est amet reprehenderit.\\r\\n\",\n  \"Aute labore ipsum velit non voluptate eiusmod et reprehenderit cupidatat occaecat. Lorem tempor tempor consectetur exercitation qui nostrud sunt cillum quis ut non dolore. Reprehenderit consequat reprehenderit laborum qui pariatur anim et officia est cupidatat enim velit velit.\\r\\n\",\n  \"Commodo ex et fugiat cupidatat non adipisicing commodo. Minim ad dolore fugiat mollit cupidatat aliqua sunt dolor sit. Labore esse labore velit aute enim. Nulla duis incididunt est aliquip consectetur elit qui incididunt minim minim labore amet sit cillum.\\r\\n\"\n]"
  },
  {
    "path": "C/thirdparty/rapidjson/bin/types/readme.txt",
    "content": "Test data obtained from https://github.com/xpol/lua-rapidjson/tree/master/performance\n"
  },
  {
    "path": "C/thirdparty/rapidjson/contrib/natvis/LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2017 Bart Muzzin\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\nDerived from:\n\nThe MIT License (MIT)\n\nCopyright (c) 2015 mojmir svoboda\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "C/thirdparty/rapidjson/contrib/natvis/README.md",
    "content": "# rapidjson.natvis\n\nThis file can be used as a [Visual Studio Visualizer](https://docs.microsoft.com/en-gb/visualstudio/debugger/create-custom-views-of-native-objects) to aid in visualizing rapidjson structures within the Visual Studio debugger. Natvis visualizers are supported in Visual Studio 2012 and later. To install, copy the file into this directory:\n\n`%USERPROFILE%\\Documents\\Visual Studio 2012\\Visualizers`\n\nEach version of Visual Studio has a similar directory, it must be copied into each directory to be used with that particular version. In Visual Studio 2015 and later, this can be done without restarting Visual Studio (a new debugging session must be started).\n"
  },
  {
    "path": "C/thirdparty/rapidjson/contrib/natvis/rapidjson.natvis",
    "content": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<AutoVisualizer xmlns=\"http://schemas.microsoft.com/vstudio/debugger/natvis/2010\">\n\t<!-- rapidjson::GenericValue - basic support -->\n\t<Type Name=\"rapidjson::GenericValue&lt;*,*&gt;\">\n\t\t<DisplayString Condition=\"(data_.f.flags &amp; kTypeMask) == rapidjson::kNullType\">null</DisplayString>\n\t\t<DisplayString Condition=\"data_.f.flags == kTrueFlag\">true</DisplayString>\n\t\t<DisplayString Condition=\"data_.f.flags == kFalseFlag\">false</DisplayString>\n\t\t<DisplayString Condition=\"data_.f.flags == kShortStringFlag\">{(const Ch*)data_.ss.str,na}</DisplayString>\n\t\t<DisplayString Condition=\"(data_.f.flags &amp; kTypeMask) == rapidjson::kStringType\">{(const Ch*)((size_t)data_.s.str &amp; 0x0000FFFFFFFFFFFF),[data_.s.length]na}</DisplayString>\n\t\t<DisplayString Condition=\"(data_.f.flags &amp; kNumberIntFlag) == kNumberIntFlag\">{data_.n.i.i}</DisplayString>\n\t\t<DisplayString Condition=\"(data_.f.flags &amp; kNumberUintFlag) == kNumberUintFlag\">{data_.n.u.u}</DisplayString>\n\t\t<DisplayString Condition=\"(data_.f.flags &amp; kNumberInt64Flag) == kNumberInt64Flag\">{data_.n.i64}</DisplayString>\n\t\t<DisplayString Condition=\"(data_.f.flags &amp; kNumberUint64Flag) == kNumberUint64Flag\">{data_.n.u64}</DisplayString>\n\t\t<DisplayString Condition=\"(data_.f.flags &amp; kNumberDoubleFlag) == kNumberDoubleFlag\">{data_.n.d}</DisplayString>\n\t\t<DisplayString Condition=\"data_.f.flags == rapidjson::kObjectType\">Object members={data_.o.size}</DisplayString>\n\t\t<DisplayString Condition=\"data_.f.flags == rapidjson::kArrayType\">Array members={data_.a.size}</DisplayString>\n\t\t<Expand>\n\t\t\t<Item Condition=\"data_.f.flags == rapidjson::kObjectType\" Name=\"[size]\">data_.o.size</Item>\n\t\t\t<Item Condition=\"data_.f.flags == rapidjson::kObjectType\" Name=\"[capacity]\">data_.o.capacity</Item>\n\t\t\t<ArrayItems Condition=\"data_.f.flags == 
rapidjson::kObjectType\">\n\t\t\t\t<Size>data_.o.size</Size>\n\t\t\t\t<!-- NOTE: Rapidjson stores some extra data in the high bits of pointers, which is why the mask -->\n\t\t\t\t<ValuePointer>(rapidjson::GenericMember&lt;$T1,$T2&gt;*)(((size_t)data_.o.members) &amp; 0x0000FFFFFFFFFFFF)</ValuePointer>\n\t\t\t</ArrayItems>\n\n\t\t\t<Item Condition=\"data_.f.flags == rapidjson::kArrayType\" Name=\"[size]\">data_.a.size</Item>\n\t\t\t<Item Condition=\"data_.f.flags == rapidjson::kArrayType\" Name=\"[capacity]\">data_.a.capacity</Item>\n\t\t\t<ArrayItems Condition=\"data_.f.flags == rapidjson::kArrayType\">\n\t\t\t\t<Size>data_.a.size</Size>\n\t\t\t\t<!-- NOTE: Rapidjson stores some extra data in the high bits of pointers, which is why the mask -->\n\t\t\t\t<ValuePointer>(rapidjson::GenericValue&lt;$T1,$T2&gt;*)(((size_t)data_.a.elements) &amp; 0x0000FFFFFFFFFFFF)</ValuePointer>\n\t\t\t</ArrayItems>\n\n\t\t</Expand>\n\t</Type>\n\n</AutoVisualizer>\n\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/CMakeLists.txt",
    "content": "find_package(Doxygen)\n\nIF(NOT DOXYGEN_FOUND)\n    MESSAGE(STATUS \"No Doxygen found. Documentation won't be built\")\nELSE()\n    file(GLOB SOURCES ${CMAKE_CURRENT_LIST_DIR}/../include/*)\n    file(GLOB MARKDOWN_DOC ${CMAKE_CURRENT_LIST_DIR}/../doc/*.md)\n    list(APPEND MARKDOWN_DOC ${CMAKE_CURRENT_LIST_DIR}/../readme.md)\n\n    CONFIGURE_FILE(Doxyfile.in Doxyfile @ONLY)\n    CONFIGURE_FILE(Doxyfile.zh-cn.in Doxyfile.zh-cn @ONLY)\n\n    file(GLOB DOXYFILES ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile*)\n    \n    add_custom_command(OUTPUT html\n        COMMAND ${DOXYGEN_EXECUTABLE} ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile\n        COMMAND ${DOXYGEN_EXECUTABLE} ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile.zh-cn\n        COMMAND ${CMAKE_COMMAND} -E touch ${CMAKE_CURRENT_BINARY_DIR}/html\n        DEPENDS ${MARKDOWN_DOC} ${SOURCES} ${DOXYFILES}\n        WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}/../\n        )\n\n    add_custom_target(doc ALL DEPENDS html)\n    install(DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/html\n        DESTINATION ${DOC_INSTALL_DIR}\n        COMPONENT doc)\nENDIF()\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/Doxyfile.in",
    "content": "# Doxyfile 1.8.7\n\n# This file describes the settings to be used by the documentation system\n# doxygen (www.doxygen.org) for a project.\n#\n# All text after a double hash (##) is considered a comment and is placed in\n# front of the TAG it is preceding.\n#\n# All text after a single hash (#) is considered a comment and will be ignored.\n# The format is:\n# TAG = value [value, ...]\n# For lists, items can also be appended using:\n# TAG += value [value, ...]\n# Values that contain spaces should be placed between quotes (\\\" \\\").\n\n#---------------------------------------------------------------------------\n# Project related configuration options\n#---------------------------------------------------------------------------\n\n# This tag specifies the encoding used for all characters in the config file\n# that follow. The default is UTF-8 which is also the encoding used for all text\n# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv\n# built into libc) for the transcoding. See http://www.gnu.org/software/libiconv\n# for the list of possible encodings.\n# The default value is: UTF-8.\n\nDOXYFILE_ENCODING      = UTF-8\n\n# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by\n# double-quotes, unless you are using Doxywizard) that should identify the\n# project for which the documentation is generated. This name is used in the\n# title of most generated pages and in a few other places.\n# The default value is: My Project.\n\nPROJECT_NAME           = RapidJSON\n\n# The PROJECT_NUMBER tag can be used to enter a project or revision number. This\n# could be handy for archiving the generated documentation or if some version\n# control system is used.\n\nPROJECT_NUMBER         =\n\n# Using the PROJECT_BRIEF tag one can provide an optional one line description\n# for a project that appears at the top of each page and should give viewer a\n# quick idea about the purpose of the project. 
Keep the description short.\n\nPROJECT_BRIEF          = \"A fast JSON parser/generator for C++ with both SAX/DOM style API\"\n\n# With the PROJECT_LOGO tag one can specify a logo or icon that is included in\n# the documentation. The maximum height of the logo should not exceed 55 pixels\n# and the maximum width should not exceed 200 pixels. Doxygen will copy the logo\n# to the output directory.\n\nPROJECT_LOGO           =\n\n# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path\n# into which the generated documentation will be written. If a relative path is\n# entered, it will be relative to the location where doxygen was started. If\n# left blank the current directory will be used.\n\nOUTPUT_DIRECTORY       = @CMAKE_CURRENT_BINARY_DIR@\n\n# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create 4096 sub-\n# directories (in 2 levels) under the output directory of each output format and\n# will distribute the generated files over these directories. Enabling this\n# option can be useful when feeding doxygen a huge amount of source files, where\n# putting all generated files in the same directory would otherwise cause\n# performance problems for the file system.\n# The default value is: NO.\n\nCREATE_SUBDIRS         = NO\n\n# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII\n# characters to appear in the names of generated files. If set to NO, non-ASCII\n# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode\n# U+3044.\n# The default value is: NO.\n\nALLOW_UNICODE_NAMES    = NO\n\n# The OUTPUT_LANGUAGE tag is used to specify the language in which all\n# documentation generated by doxygen is written. 
Doxygen will use this\n# information to generate all constant output in the proper language.\n# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,\n# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),\n# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,\n# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),\n# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,\n# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,\n# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,\n# Ukrainian and Vietnamese.\n# The default value is: English.\n\nOUTPUT_LANGUAGE        = English\n\n# If the BRIEF_MEMBER_DESC tag is set to YES doxygen will include brief member\n# descriptions after the members that are listed in the file and class\n# documentation (similar to Javadoc). Set to NO to disable this.\n# The default value is: YES.\n\nBRIEF_MEMBER_DESC      = YES\n\n# If the REPEAT_BRIEF tag is set to YES doxygen will prepend the brief\n# description of a member or function before the detailed description\n#\n# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the\n# brief descriptions will be completely suppressed.\n# The default value is: YES.\n\nREPEAT_BRIEF           = YES\n\n# This tag implements a quasi-intelligent brief description abbreviator that is\n# used to form the text in various listings. Each string in this list, if found\n# as the leading text of the brief description, will be stripped from the text\n# and the result, after processing the whole list, is used as the annotated\n# text. Otherwise, the brief description is used as-is. 
If left blank, the\n# following values are used ($name is automatically replaced with the name of\n# the entity):The $name class, The $name widget, The $name file, is, provides,\n# specifies, contains, represents, a, an and the.\n\nABBREVIATE_BRIEF       = \"The $name class\" \\\n                         \"The $name widget\" \\\n                         \"The $name file\" \\\n                         is \\\n                         provides \\\n                         specifies \\\n                         contains \\\n                         represents \\\n                         a \\\n                         an \\\n                         the\n\n# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then\n# doxygen will generate a detailed section even if there is only a brief\n# description.\n# The default value is: NO.\n\nALWAYS_DETAILED_SEC    = NO\n\n# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all\n# inherited members of a class in the documentation of that class as if those\n# members were ordinary class members. Constructors, destructors and assignment\n# operators of the base classes will not be shown.\n# The default value is: NO.\n\nINLINE_INHERITED_MEMB  = NO\n\n# If the FULL_PATH_NAMES tag is set to YES doxygen will prepend the full path\n# before files name in the file list and in the header files. If set to NO the\n# shortest path that makes the file name unique will be used\n# The default value is: YES.\n\nFULL_PATH_NAMES        = YES\n\n# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.\n# Stripping is only done if one of the specified strings matches the left-hand\n# part of the path. 
The tag can be used to show relative paths in the file list.\n# If left blank the directory from which doxygen is run is used as the path to\n# strip.\n#\n# Note that you can specify absolute paths here, but also relative paths, which\n# will be relative from the directory where doxygen is started.\n# This tag requires that the tag FULL_PATH_NAMES is set to YES.\n\nSTRIP_FROM_PATH        =\n\n# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the\n# path mentioned in the documentation of a class, which tells the reader which\n# header file to include in order to use a class. If left blank only the name of\n# the header file containing the class definition is used. Otherwise one should\n# specify the list of include paths that are normally passed to the compiler\n# using the -I flag.\n\nSTRIP_FROM_INC_PATH    =\n\n# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but\n# less readable) file names. This can be useful if your file system doesn't\n# support long names like on DOS, Mac, or CD-ROM.\n# The default value is: NO.\n\nSHORT_NAMES            = NO\n\n# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the\n# first line (until the first dot) of a Javadoc-style comment as the brief\n# description. If set to NO, the Javadoc-style will behave just like regular Qt-\n# style comments (thus requiring an explicit @brief command for a brief\n# description.)\n# The default value is: NO.\n\nJAVADOC_AUTOBRIEF      = NO\n\n# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first\n# line (until the first dot) of a Qt-style comment as the brief description. If\n# set to NO, the Qt-style will behave just like regular Qt-style comments (thus\n# requiring an explicit \\brief command for a brief description.)\n# The default value is: NO.\n\nQT_AUTOBRIEF           = NO\n\n# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a\n# multi-line C++ special comment block (i.e. 
a block of //! or /// comments) as\n# a brief description. This used to be the default behavior. The new default is\n# to treat a multi-line C++ comment block as a detailed description. Set this\n# tag to YES if you prefer the old behavior instead.\n#\n# Note that setting this tag to YES also means that rational rose comments are\n# not recognized any more.\n# The default value is: NO.\n\nMULTILINE_CPP_IS_BRIEF = NO\n\n# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the\n# documentation from any documented member that it re-implements.\n# The default value is: YES.\n\nINHERIT_DOCS           = YES\n\n# If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce a\n# new page for each member. If set to NO, the documentation of a member will be\n# part of the file/class/namespace that contains it.\n# The default value is: NO.\n\nSEPARATE_MEMBER_PAGES  = NO\n\n# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen\n# uses this value to replace tabs by spaces in code fragments.\n# Minimum value: 1, maximum value: 16, default value: 4.\n\nTAB_SIZE               = 4\n\n# This tag can be used to specify a number of aliases that act as commands in\n# the documentation. An alias has the form:\n# name=value\n# For example adding\n# \"sideeffect=@par Side Effects:\\n\"\n# will allow you to put the command \\sideeffect (or @sideeffect) in the\n# documentation, which will result in a user-defined paragraph with heading\n# \"Side Effects:\". You can put \\n's in the value part of an alias to insert\n# newlines.\n\nALIASES                =\n\n# This tag can be used to specify a number of word-keyword mappings (TCL only).\n# A mapping has the form \"name=value\". For example adding \"class=itcl::class\"\n# will allow you to use the command class in the itcl::class meaning.\n\nTCL_SUBST              =\n\n# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources\n# only. 
Doxygen will then generate output that is more tailored for C. For\n# instance, some of the names that are used will be different. The list of all\n# members will be omitted, etc.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_FOR_C  = NO\n\n# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or\n# Python sources only. Doxygen will then generate output that is more tailored\n# for that language. For instance, namespaces will be presented as packages,\n# qualified scopes will look different, etc.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_JAVA   = NO\n\n# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran\n# sources. Doxygen will then generate output that is tailored for Fortran.\n# The default value is: NO.\n\nOPTIMIZE_FOR_FORTRAN   = NO\n\n# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL\n# sources. Doxygen will then generate output that is tailored for VHDL.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_VHDL   = NO\n\n# Doxygen selects the parser to use depending on the extension of the files it\n# parses. With this tag you can assign which parser to use for a given\n# extension. Doxygen has a built-in mapping, but you can override or extend it\n# using this tag. The format is ext=language, where ext is a file extension, and\n# language is one of the parsers supported by doxygen: IDL, Java, Javascript,\n# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:\n# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:\n# Fortran. In the latter case the parser tries to guess whether the code is fixed\n# or free formatted code, this is the default for Fortran type files), VHDL. 
For\n# instance to make doxygen treat .inc files as Fortran files (default is PHP),\n# and .f files as C (default is Fortran), use: inc=Fortran f=C.\n#\n# Note: For files without extension you can use no_extension as a placeholder.\n#\n# Note that for custom extensions you also need to set FILE_PATTERNS otherwise\n# the files are not read by doxygen.\n\nEXTENSION_MAPPING      =\n\n# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments\n# according to the Markdown format, which allows for more readable\n# documentation. See http://daringfireball.net/projects/markdown/ for details.\n# The output of markdown processing is further processed by doxygen, so you can\n# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in\n# case of backward compatibility issues.\n# The default value is: YES.\n\nMARKDOWN_SUPPORT       = YES\n\n# When enabled doxygen tries to link words that correspond to documented\n# classes, or namespaces to their corresponding documentation. Such a link can\n# be prevented in individual cases by putting a % sign in front of the word\n# or globally by setting AUTOLINK_SUPPORT to NO.\n# The default value is: YES.\n\nAUTOLINK_SUPPORT       = YES\n\n# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want\n# to include (a tag file for) the STL sources as input, then you should set this\n# tag to YES in order to let doxygen match function declarations and\n# definitions whose arguments contain STL classes (e.g. func(std::string);\n# versus func(std::string) {}). 
This also makes the inheritance and collaboration\n# diagrams that involve STL classes more complete and accurate.\n# The default value is: NO.\n\nBUILTIN_STL_SUPPORT    = NO\n\n# If you use Microsoft's C++/CLI language, you should set this option to YES to\n# enable parsing support.\n# The default value is: NO.\n\nCPP_CLI_SUPPORT        = NO\n\n# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:\n# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen\n# will parse them like normal C++ but will assume all classes use public instead\n# of private inheritance when no explicit protection keyword is present.\n# The default value is: NO.\n\nSIP_SUPPORT            = NO\n\n# For Microsoft's IDL there are propget and propput attributes to indicate\n# getter and setter methods for a property. Setting this option to YES will make\n# doxygen replace the get and set methods by a property in the documentation.\n# This will only work if the methods are indeed getting or setting a simple\n# type. If this is not the case, or you want to show the methods anyway, you\n# should set this option to NO.\n# The default value is: YES.\n\nIDL_PROPERTY_SUPPORT   = YES\n\n# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC\n# tag is set to YES, then doxygen will reuse the documentation of the first\n# member in the group (if any) for the other members of the group. By default\n# all members of a group must be documented explicitly.\n# The default value is: NO.\n\nDISTRIBUTE_GROUP_DOC   = NO\n\n# Set the SUBGROUPING tag to YES to allow class member groups of the same type\n# (for instance a group of public functions) to be put as a subgroup of that\n# type (e.g. under the Public Functions section). Set it to NO to prevent\n# subgrouping. 
Alternatively, this can be done per class using the\n# \\nosubgrouping command.\n# The default value is: YES.\n\nSUBGROUPING            = YES\n\n# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions\n# are shown inside the group in which they are included (e.g. using \\ingroup)\n# instead of on a separate page (for HTML and Man pages) or section (for LaTeX\n# and RTF).\n#\n# Note that this feature does not work in combination with\n# SEPARATE_MEMBER_PAGES.\n# The default value is: NO.\n\nINLINE_GROUPED_CLASSES = YES\n\n# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions\n# with only public data fields or simple typedef fields will be shown inline in\n# the documentation of the scope in which they are defined (i.e. file,\n# namespace, or group documentation), provided this scope is documented. If set\n# to NO, structs, classes, and unions are shown on a separate page (for HTML and\n# Man pages) or section (for LaTeX and RTF).\n# The default value is: NO.\n\nINLINE_SIMPLE_STRUCTS  = NO\n\n# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or\n# enum is documented as struct, union, or enum with the name of the typedef. So\n# typedef struct TypeS {} TypeT, will appear in the documentation as a struct\n# with name TypeT. When disabled the typedef will appear as a member of a file,\n# namespace, or class. And the struct will be named TypeS. This can typically be\n# useful for C code in case the coding convention dictates that all compound\n# types are typedef'ed and only the typedef is referenced, never the tag name.\n# The default value is: NO.\n\nTYPEDEF_HIDES_STRUCT   = NO\n\n# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This\n# cache is used to resolve symbols given their name and scope. Since this can be\n# an expensive process and often the same symbol appears multiple times in the\n# code, doxygen keeps a cache of pre-resolved symbols. 
If the cache is too small\n# doxygen will become slower. If the cache is too large, memory is wasted. The\n# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range\n# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536\n# symbols. At the end of a run doxygen will report the cache usage and suggest\n# the optimal cache size from a speed point of view.\n# Minimum value: 0, maximum value: 9, default value: 0.\n\nLOOKUP_CACHE_SIZE      = 0\n\n#---------------------------------------------------------------------------\n# Build related configuration options\n#---------------------------------------------------------------------------\n\n# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in\n# documentation are documented, even if no documentation was available. Private\n# class members and static file members will be hidden unless the\n# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.\n# Note: This will also disable the warnings about undocumented members that are\n# normally produced when WARNINGS is set to YES.\n# The default value is: NO.\n\nEXTRACT_ALL            = NO\n\n# If the EXTRACT_PRIVATE tag is set to YES all private members of a class will\n# be included in the documentation.\n# The default value is: NO.\n\nEXTRACT_PRIVATE        = NO\n\n# If the EXTRACT_PACKAGE tag is set to YES all members with package or internal\n# scope will be included in the documentation.\n# The default value is: NO.\n\nEXTRACT_PACKAGE        = NO\n\n# If the EXTRACT_STATIC tag is set to YES all static members of a file will be\n# included in the documentation.\n# The default value is: NO.\n\nEXTRACT_STATIC         = NO\n\n# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) defined\n# locally in source files will be included in the documentation. If set to NO\n# only classes defined in header files are included. 
Does not have any effect\n# for Java sources.\n# The default value is: YES.\n\nEXTRACT_LOCAL_CLASSES  = YES\n\n# This flag is only useful for Objective-C code. When set to YES local methods,\n# which are defined in the implementation section but not in the interface are\n# included in the documentation. If set to NO only methods in the interface are\n# included.\n# The default value is: NO.\n\nEXTRACT_LOCAL_METHODS  = NO\n\n# If this flag is set to YES, the members of anonymous namespaces will be\n# extracted and appear in the documentation as a namespace called\n# 'anonymous_namespace{file}', where file will be replaced with the base name of\n# the file that contains the anonymous namespace. By default anonymous namespaces\n# are hidden.\n# The default value is: NO.\n\nEXTRACT_ANON_NSPACES   = NO\n\n# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all\n# undocumented members inside documented classes or files. If set to NO these\n# members will be included in the various overviews, but no documentation\n# section is generated. This option has no effect if EXTRACT_ALL is enabled.\n# The default value is: NO.\n\nHIDE_UNDOC_MEMBERS     = NO\n\n# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all\n# undocumented classes that are normally visible in the class hierarchy. If set\n# to NO these classes will be included in the various overviews. This option has\n# no effect if EXTRACT_ALL is enabled.\n# The default value is: NO.\n\nHIDE_UNDOC_CLASSES     = NO\n\n# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend\n# (class|struct|union) declarations. If set to NO these declarations will be\n# included in the documentation.\n# The default value is: NO.\n\nHIDE_FRIEND_COMPOUNDS  = NO\n\n# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any\n# documentation blocks found inside the body of a function. 
If set to NO these\n# blocks will be appended to the function's detailed documentation block.\n# The default value is: NO.\n\nHIDE_IN_BODY_DOCS      = NO\n\n# The INTERNAL_DOCS tag determines if documentation that is typed after a\n# \\internal command is included. If the tag is set to NO then the documentation\n# will be excluded. Set it to YES to include the internal documentation.\n# The default value is: NO.\n\nINTERNAL_DOCS          = NO\n\n# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file\n# names in lower-case letters. If set to YES upper-case letters are also\n# allowed. This is useful if you have classes or files whose names only differ\n# in case and if your file system supports case sensitive file names. Windows\n# and Mac users are advised to set this option to NO.\n# The default value is: system dependent.\n\nCASE_SENSE_NAMES       = NO\n\n# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with\n# their full class and namespace scopes in the documentation. 
If set to YES the\n# scope will be hidden.\n# The default value is: NO.\n\nHIDE_SCOPE_NAMES       = NO\n\n# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of\n# the files that are included by a file in the documentation of that file.\n# The default value is: YES.\n\nSHOW_INCLUDE_FILES     = YES\n\n# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each\n# grouped member an include statement to the documentation, telling the reader\n# which file to include in order to use the member.\n# The default value is: NO.\n\nSHOW_GROUPED_MEMB_INC  = NO\n\n# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include\n# files with double quotes in the documentation rather than with sharp brackets.\n# The default value is: NO.\n\nFORCE_LOCAL_INCLUDES   = NO\n\n# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the\n# documentation for inline members.\n# The default value is: YES.\n\nINLINE_INFO            = YES\n\n# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the\n# (detailed) documentation of file and class members alphabetically by member\n# name. If set to NO the members will appear in declaration order.\n# The default value is: YES.\n\nSORT_MEMBER_DOCS       = YES\n\n# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief\n# descriptions of file, namespace and class members alphabetically by member\n# name. If set to NO the members will appear in declaration order. Note that\n# this will also influence the order of the classes in the class list.\n# The default value is: NO.\n\nSORT_BRIEF_DOCS        = NO\n\n# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the\n# (brief and detailed) documentation of class members so that constructors and\n# destructors are listed first. 
If set to NO the constructors will appear in the\n# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.\n# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief\n# member documentation.\n# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting\n# detailed member documentation.\n# The default value is: NO.\n\nSORT_MEMBERS_CTORS_1ST = NO\n\n# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy\n# of group names into alphabetical order. If set to NO the group names will\n# appear in their defined order.\n# The default value is: NO.\n\nSORT_GROUP_NAMES       = NO\n\n# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by\n# fully-qualified names, including namespaces. If set to NO, the class list will\n# be sorted only by class name, not including the namespace part.\n# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.\n# Note: This option applies only to the class list, not to the alphabetical\n# list.\n# The default value is: NO.\n\nSORT_BY_SCOPE_NAME     = NO\n\n# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper\n# type resolution of all parameters of a function it will reject a match between\n# the prototype and the implementation of a member function even if there is\n# only one candidate or it is obvious which candidate to choose by doing a\n# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still\n# accept a match between prototype and implementation in such cases.\n# The default value is: NO.\n\nSTRICT_PROTO_MATCHING  = NO\n\n# The GENERATE_TODOLIST tag can be used to enable ( YES) or disable ( NO) the\n# todo list. This list is created by putting \\todo commands in the\n# documentation.\n# The default value is: YES.\n\nGENERATE_TODOLIST      = YES\n\n# The GENERATE_TESTLIST tag can be used to enable ( YES) or disable ( NO) the\n# test list. 
This list is created by putting \\test commands in the\n# documentation.\n# The default value is: YES.\n\nGENERATE_TESTLIST      = YES\n\n# The GENERATE_BUGLIST tag can be used to enable ( YES) or disable ( NO) the bug\n# list. This list is created by putting \\bug commands in the documentation.\n# The default value is: YES.\n\nGENERATE_BUGLIST       = YES\n\n# The GENERATE_DEPRECATEDLIST tag can be used to enable ( YES) or disable ( NO)\n# the deprecated list. This list is created by putting \\deprecated commands in\n# the documentation.\n# The default value is: YES.\n\nGENERATE_DEPRECATEDLIST= YES\n\n# The ENABLED_SECTIONS tag can be used to enable conditional documentation\n# sections, marked by \\if <section_label> ... \\endif and \\cond <section_label>\n# ... \\endcond blocks.\n\nENABLED_SECTIONS       = $(RAPIDJSON_SECTIONS)\n\n# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the\n# initial value of a variable or macro / define can have for it to appear in the\n# documentation. If the initializer consists of more lines than specified here\n# it will be hidden. Use a value of 0 to hide initializers completely. The\n# appearance of the value of individual variables and macros / defines can be\n# controlled using \\showinitializer or \\hideinitializer command in the\n# documentation regardless of this setting.\n# Minimum value: 0, maximum value: 10000, default value: 30.\n\nMAX_INITIALIZER_LINES  = 30\n\n# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at\n# the bottom of the documentation of classes and structs. If set to YES the list\n# will mention the files that were used to generate the documentation.\n# The default value is: YES.\n\nSHOW_USED_FILES        = YES\n\n# Set the SHOW_FILES tag to NO to disable the generation of the Files page. 
This\n# will remove the Files entry from the Quick Index and from the Folder Tree View\n# (if specified).\n# The default value is: YES.\n\nSHOW_FILES             = YES\n\n# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces\n# page. This will remove the Namespaces entry from the Quick Index and from the\n# Folder Tree View (if specified).\n# The default value is: YES.\n\nSHOW_NAMESPACES        = NO\n\n# The FILE_VERSION_FILTER tag can be used to specify a program or script that\n# doxygen should invoke to get the current version for each file (typically from\n# the version control system). Doxygen will invoke the program by executing (via\n# popen()) the command command input-file, where command is the value of the\n# FILE_VERSION_FILTER tag, and input-file is the name of an input file provided\n# by doxygen. Whatever the program writes to standard output is used as the file\n# version. For an example see the documentation.\n\nFILE_VERSION_FILTER    =\n\n# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed\n# by doxygen. The layout file controls the global structure of the generated\n# output files in an output format independent way. To create the layout file\n# that represents doxygen's defaults, run doxygen with the -l option. You can\n# optionally specify a file name after the option, if omitted DoxygenLayout.xml\n# will be used as the name of the layout file.\n#\n# Note that if you run doxygen from a directory containing a file called\n# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE\n# tag is left empty.\n\nLAYOUT_FILE            =\n\n# The CITE_BIB_FILES tag can be used to specify one or more bib files containing\n# the reference definitions. This must be a list of .bib files. The .bib\n# extension is automatically appended if omitted. This requires the bibtex tool\n# to be installed. 
See also http://en.wikipedia.org/wiki/BibTeX for more info.\n# For LaTeX the style of the bibliography can be controlled using\n# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the\n# search path. Do not use file names with spaces, bibtex cannot handle them. See\n# also \\cite for info how to create references.\n\nCITE_BIB_FILES         =\n\n#---------------------------------------------------------------------------\n# Configuration options related to warning and progress messages\n#---------------------------------------------------------------------------\n\n# The QUIET tag can be used to turn on/off the messages that are generated to\n# standard output by doxygen. If QUIET is set to YES this implies that the\n# messages are off.\n# The default value is: NO.\n\nQUIET                  = NO\n\n# The WARNINGS tag can be used to turn on/off the warning messages that are\n# generated to standard error ( stderr) by doxygen. If WARNINGS is set to YES\n# this implies that the warnings are on.\n#\n# Tip: Turn warnings on while writing the documentation.\n# The default value is: YES.\n\nWARNINGS               = YES\n\n# If the WARN_IF_UNDOCUMENTED tag is set to YES, then doxygen will generate\n# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag\n# will automatically be disabled.\n# The default value is: YES.\n\nWARN_IF_UNDOCUMENTED   = YES\n\n# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for\n# potential errors in the documentation, such as not documenting some parameters\n# in a documented function, or documenting parameters that don't exist or using\n# markup commands wrongly.\n# The default value is: YES.\n\nWARN_IF_DOC_ERROR      = YES\n\n# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that\n# are documented, but have no documentation for their parameters or return\n# value. 
If set to NO doxygen will only warn about wrong or incomplete parameter\n# documentation, but not about the absence of documentation.\n# The default value is: NO.\n\nWARN_NO_PARAMDOC       = NO\n\n# The WARN_FORMAT tag determines the format of the warning messages that doxygen\n# can produce. The string should contain the $file, $line, and $text tags, which\n# will be replaced by the file and line number from which the warning originated\n# and the warning text. Optionally the format may contain $version, which will\n# be replaced by the version of the file (if it could be obtained via\n# FILE_VERSION_FILTER)\n# The default value is: $file:$line: $text.\n\nWARN_FORMAT            = \"$file:$line: $text\"\n\n# The WARN_LOGFILE tag can be used to specify a file to which warning and error\n# messages should be written. If left blank the output is written to standard\n# error (stderr).\n\nWARN_LOGFILE           =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the input files\n#---------------------------------------------------------------------------\n\n# The INPUT tag is used to specify the files and/or directories that contain\n# documented source files. You may enter file names like myfile.cpp or\n# directories like /usr/src/myproject. 
Separate the files or directories with\n# spaces.\n# Note: If this tag is empty the current directory is searched.\n\nINPUT                  = readme.md \\\n                         CHANGELOG.md \\\n                         include/rapidjson/rapidjson.h \\\n                         include/ \\\n                         doc/features.md \\\n                         doc/tutorial.md \\\n                         doc/pointer.md \\\n                         doc/stream.md \\\n                         doc/encoding.md \\\n                         doc/dom.md \\\n                         doc/sax.md \\\n                         doc/schema.md \\\n                         doc/performance.md \\\n                         doc/internals.md \\\n                         doc/faq.md\n\n# This tag can be used to specify the character encoding of the source files\n# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses\n# libiconv (or the iconv built into libc) for the transcoding. See the libiconv\n# documentation (see: http://www.gnu.org/software/libiconv) for the list of\n# possible encodings.\n# The default value is: UTF-8.\n\nINPUT_ENCODING         = UTF-8\n\n# If the value of the INPUT tag contains directories, you can use the\n# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and\n# *.h) to filter out the source-files in the directories. 
If left blank the\n# following patterns are tested:*.c, *.cc, *.cxx, *.cpp, *.c++, *.java, *.ii,\n# *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h, *.hh, *.hxx, *.hpp,\n# *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc, *.m, *.markdown,\n# *.md, *.mm, *.dox, *.py, *.f90, *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf,\n# *.qsf, *.as and *.js.\n\nFILE_PATTERNS          = *.c \\\n                         *.cc \\\n                         *.cxx \\\n                         *.cpp \\\n                         *.h \\\n                         *.hh \\\n                         *.hxx \\\n                         *.hpp \\\n                         *.inc \\\n                         *.md\n\n# The RECURSIVE tag can be used to specify whether or not subdirectories should\n# be searched for input files as well.\n# The default value is: NO.\n\nRECURSIVE              = YES\n\n# The EXCLUDE tag can be used to specify files and/or directories that should be\n# excluded from the INPUT source files. This way you can easily exclude a\n# subdirectory from a directory tree whose root is specified with the INPUT tag.\n#\n# Note that relative paths are relative to the directory from which doxygen is\n# run.\n\nEXCLUDE                = ./include/rapidjson/msinttypes/\n\n# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or\n# directories that are symbolic links (a Unix file system feature) are excluded\n# from the input.\n# The default value is: NO.\n\nEXCLUDE_SYMLINKS       = NO\n\n# If the value of the INPUT tag contains directories, you can use the\n# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude\n# certain files from those directories.\n#\n# Note that the wildcards are matched against the file with absolute path, so to\n# exclude all test directories for example use the pattern */test/*\n\nEXCLUDE_PATTERNS       =\n\n# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names\n# (namespaces, classes, functions, etc.) 
that should be excluded from the\n# output. The symbol name can be a fully qualified name, a word, or if the\n# wildcard * is used, a substring. Examples: ANamespace, AClass,\n# AClass::ANamespace, ANamespace::*Test\n#\n# Note that the wildcards are matched against the file with absolute path, so to\n# exclude all test directories use the pattern */test/*\n\nEXCLUDE_SYMBOLS        = internal\n\n# The EXAMPLE_PATH tag can be used to specify one or more files or directories\n# that contain example code fragments that are included (see the \\include\n# command).\n\nEXAMPLE_PATH           =\n\n# If the value of the EXAMPLE_PATH tag contains directories, you can use the\n# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and\n# *.h) to filter out the source-files in the directories. If left blank all\n# files are included.\n\nEXAMPLE_PATTERNS       = *\n\n# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be\n# searched for input files to be used with the \\include or \\dontinclude commands\n# irrespective of the value of the RECURSIVE tag.\n# The default value is: NO.\n\nEXAMPLE_RECURSIVE      = NO\n\n# The IMAGE_PATH tag can be used to specify one or more files or directories\n# that contain images that are to be included in the documentation (see the\n# \\image command).\n\nIMAGE_PATH             = ./doc\n\n# The INPUT_FILTER tag can be used to specify a program that doxygen should\n# invoke to filter for each input file. Doxygen will invoke the filter program\n# by executing (via popen()) the command:\n#\n# <filter> <input-file>\n#\n# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the\n# name of an input file. Doxygen will then use the output that the filter\n# program writes to standard output. If FILTER_PATTERNS is specified, this tag\n# will be ignored.\n#\n# Note that the filter must not add or remove lines; it is applied before the\n# code is scanned, but not when the output code is generated. 
If lines are added\n# or removed, the anchors will not be placed correctly.\n\nINPUT_FILTER           =\n\n# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern\n# basis. Doxygen will compare the file name with each pattern and apply the\n# filter if there is a match. The filters are a list of the form: pattern=filter\n# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how\n# filters are used. If the FILTER_PATTERNS tag is empty or if none of the\n# patterns match the file name, INPUT_FILTER is applied.\n\nFILTER_PATTERNS        =\n\n# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using\n# INPUT_FILTER) will also be used to filter the input files that are used for\n# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).\n# The default value is: NO.\n\nFILTER_SOURCE_FILES    = NO\n\n# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file\n# pattern. A pattern will override the setting for FILTER_PATTERNS (if any) and\n# it is also possible to disable source filtering for a specific pattern using\n# *.ext= (so without naming a filter).\n# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.\n\nFILTER_SOURCE_PATTERNS =\n\n# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that\n# is part of the input, its contents will be placed on the main page\n# (index.html). This can be useful if you have a project on, for instance,\n# GitHub and want to reuse the introduction page also for the doxygen output.\n\nUSE_MDFILE_AS_MAINPAGE = readme.md\n\n#---------------------------------------------------------------------------\n# Configuration options related to source browsing\n#---------------------------------------------------------------------------\n\n# If the SOURCE_BROWSER tag is set to YES then a list of source files will be\n# generated. 
Documented entities will be cross-referenced with these sources.\n#\n# Note: To get rid of all source code in the generated output, make sure that\n# also VERBATIM_HEADERS is set to NO.\n# The default value is: NO.\n\nSOURCE_BROWSER         = NO\n\n# Setting the INLINE_SOURCES tag to YES will include the body of functions,\n# classes and enums directly into the documentation.\n# The default value is: NO.\n\nINLINE_SOURCES         = NO\n\n# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any\n# special comment blocks from generated source code fragments. Normal C, C++ and\n# Fortran comments will always remain visible.\n# The default value is: YES.\n\nSTRIP_CODE_COMMENTS    = NO\n\n# If the REFERENCED_BY_RELATION tag is set to YES then for each documented\n# function all documented functions referencing it will be listed.\n# The default value is: NO.\n\nREFERENCED_BY_RELATION = NO\n\n# If the REFERENCES_RELATION tag is set to YES then for each documented function\n# all documented entities called/used by that function will be listed.\n# The default value is: NO.\n\nREFERENCES_RELATION    = NO\n\n# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set\n# to YES, then the hyperlinks from functions in REFERENCES_RELATION and\n# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will\n# link to the documentation.\n# The default value is: YES.\n\nREFERENCES_LINK_SOURCE = YES\n\n# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the\n# source code will show a tooltip with additional information such as prototype,\n# brief description and links to the definition and documentation. 
Since this\n# will make the HTML file larger and loading of large files a bit slower, you\n# can opt to disable this feature.\n# The default value is: YES.\n# This tag requires that the tag SOURCE_BROWSER is set to YES.\n\nSOURCE_TOOLTIPS        = YES\n\n# If the USE_HTAGS tag is set to YES then the references to source code will\n# point to the HTML generated by the htags(1) tool instead of doxygen's built-in\n# source browser. The htags tool is part of GNU's global source tagging system\n# (see http://www.gnu.org/software/global/global.html). You will need version\n# 4.8.6 or higher.\n#\n# To use it do the following:\n# - Install the latest version of global\n# - Enable SOURCE_BROWSER and USE_HTAGS in the config file\n# - Make sure the INPUT points to the root of the source tree\n# - Run doxygen as normal\n#\n# Doxygen will invoke htags (and that will in turn invoke gtags), so these\n# tools must be available from the command line (i.e. in the search path).\n#\n# The result: instead of the source browser generated by doxygen, the links to\n# source code will now point to the output of htags.\n# The default value is: NO.\n# This tag requires that the tag SOURCE_BROWSER is set to YES.\n\nUSE_HTAGS              = NO\n\n# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a\n# verbatim copy of the header file for each class for which an include is\n# specified. Set to NO to disable this.\n# See also: Section \class.\n# The default value is: YES.\n\nVERBATIM_HEADERS       = YES\n\n# If the CLANG_ASSISTED_PARSING tag is set to YES, then doxygen will use the\n# clang parser (see: http://clang.llvm.org/) for more accurate parsing at the\n# cost of reduced performance. 
This can be particularly helpful with template\n# rich C++ code for which doxygen's built-in parser lacks the necessary type\n# information.\n# Note: The availability of this option depends on whether or not doxygen was\n# compiled with the --with-libclang option.\n# The default value is: NO.\n\nCLANG_ASSISTED_PARSING = NO\n\n# If clang assisted parsing is enabled you can provide the compiler with command\n# line options that you would normally use when invoking the compiler. Note that\n# the include paths will already be set by doxygen for the files and directories\n# specified with INPUT and INCLUDE_PATH.\n# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.\n\nCLANG_OPTIONS          =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the alphabetical class index\n#---------------------------------------------------------------------------\n\n# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all\n# compounds will be generated. Enable this if the project contains a lot of\n# classes, structs, unions or interfaces.\n# The default value is: YES.\n\nALPHABETICAL_INDEX     = NO\n\n# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in\n# which the alphabetical index list will be split.\n# Minimum value: 1, maximum value: 20, default value: 5.\n# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.\n\nCOLS_IN_ALPHA_INDEX    = 5\n\n# In case all classes in a project start with a common prefix, all classes will\n# be put under the same header in the alphabetical index. 
The IGNORE_PREFIX tag\n# can be used to specify a prefix (or a list of prefixes) that should be ignored\n# while generating the index headers.\n# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.\n\nIGNORE_PREFIX          =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the HTML output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_HTML tag is set to YES doxygen will generate HTML output.\n# The default value is: YES.\n\nGENERATE_HTML          = YES\n\n# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: html.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_OUTPUT            = html\n\n# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each\n# generated HTML page (for example: .htm, .php, .asp).\n# The default value is: .html.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_FILE_EXTENSION    = .html\n\n# The HTML_HEADER tag can be used to specify a user-defined HTML header file for\n# each generated HTML page. If the tag is left blank doxygen will generate a\n# standard header.\n#\n# To get valid HTML you need a header file that includes any scripts and style\n# sheets that doxygen needs, which depend on the configuration options used\n# (e.g. the setting GENERATE_TREEVIEW). It is highly recommended to start with\n# a default header using\n# doxygen -w html new_header.html new_footer.html new_stylesheet.css\n# YourConfigFile\n# and then modify the file new_header.html. 
See also section \"Doxygen usage\"\n# for information on how to generate the default header that doxygen normally\n# uses.\n# Note: The header is subject to change so you typically have to regenerate the\n# default header when upgrading to a newer version of doxygen. For a description\n# of the possible markers and block names see the documentation.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_HEADER            = ./doc/misc/header.html\n\n# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each\n# generated HTML page. If the tag is left blank doxygen will generate a standard\n# footer. See HTML_HEADER for more information on how to generate a default\n# footer and what special commands can be used inside the footer. See also\n# section \"Doxygen usage\" for information on how to generate the default footer\n# that doxygen normally uses.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_FOOTER            = ./doc/misc/footer.html\n\n# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style\n# sheet that is used by each HTML page. It can be used to fine-tune the look of\n# the HTML output. If left blank doxygen will generate a default style sheet.\n# See also section \"Doxygen usage\" for information on how to generate the style\n# sheet that doxygen normally uses.\n# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as\n# it is more robust and this tag (HTML_STYLESHEET) will in the future become\n# obsolete.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_STYLESHEET        =\n\n# The HTML_EXTRA_STYLESHEET tag can be used to specify an additional user-\n# defined cascading style sheet that is included after the standard style sheets\n# created by doxygen. 
Using this option one can overrule certain style aspects.\n# This is preferred over using HTML_STYLESHEET since it does not replace the\n# standard style sheet and is therefore more robust against future updates.\n# Doxygen will copy the style sheet file to the output directory. For an example\n# see the documentation.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_EXTRA_STYLESHEET  = ./doc/misc/doxygenextra.css\n\n# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or\n# other source files which should be copied to the HTML output directory. Note\n# that these files will be copied to the base HTML output directory. Use the\n# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these\n# files. In the HTML_STYLESHEET file, use the file name only. Also note that the\n# files will be copied as-is; there are no commands or markers available.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_EXTRA_FILES       =\n\n# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen\n# will adjust the colors in the stylesheet and background images according to\n# this color. Hue is specified as an angle on a colorwheel, see\n# http://en.wikipedia.org/wiki/Hue for more information. For instance the value\n# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300\n# purple, and 360 is red again.\n# Minimum value: 0, maximum value: 359, default value: 220.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_HUE    = 220\n\n# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors\n# in the HTML output. For a value of 0 the output will use grayscales only. 
A\n# value of 255 will produce the most vivid colors.\n# Minimum value: 0, maximum value: 255, default value: 100.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_SAT    = 100\n\n# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the\n# luminance component of the colors in the HTML output. Values below 100\n# gradually make the output lighter, whereas values above 100 make the output\n# darker. The value divided by 100 is the actual gamma applied, so 80 represents\n# a gamma of 0.8; the value 220 represents a gamma of 2.2, and 100 does not\n# change the gamma.\n# Minimum value: 40, maximum value: 240, default value: 80.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_GAMMA  = 80\n\n# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML\n# page will contain the date and time when the page was generated. Setting this\n# to NO can help when comparing the output of multiple runs.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_TIMESTAMP         = YES\n\n# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML\n# documentation will contain sections that can be hidden and shown after the\n# page has loaded.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_DYNAMIC_SECTIONS  = NO\n\n# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries\n# shown in the various tree structured indices initially; the user can expand\n# and collapse entries dynamically later on. Doxygen will expand the tree to\n# such a level that at most the specified number of entries are visible (unless\n# a fully collapsed tree already exceeds this amount). So setting the number of\n# entries 1 will produce a fully collapsed tree by default. 
0 is a special value\n# representing an infinite number of entries and will result in a full expanded\n# tree by default.\n# Minimum value: 0, maximum value: 9999, default value: 100.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_INDEX_NUM_ENTRIES = 100\n\n# If the GENERATE_DOCSET tag is set to YES, additional index files will be\n# generated that can be used as input for Apple's Xcode 3 integrated development\n# environment (see: http://developer.apple.com/tools/xcode/), introduced with\n# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a\n# Makefile in the HTML output directory. Running make will produce the docset in\n# that directory and running make install will install the docset in\n# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at\n# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html\n# for more information.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_DOCSET        = NO\n\n# This tag determines the name of the docset feed. A documentation feed provides\n# an umbrella under which multiple documentation sets from a single provider\n# (such as a company or product suite) can be grouped.\n# The default value is: Doxygen generated docs.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_FEEDNAME        = \"Doxygen generated docs\"\n\n# This tag specifies a string that should uniquely identify the documentation\n# set bundle. This should be a reverse domain-name style string, e.g.\n# com.mycompany.MyDocSet. Doxygen will append .docset to the name.\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_BUNDLE_ID       = org.doxygen.Project\n\n# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify\n# the documentation publisher. This should be a reverse domain-name style\n# string, e.g. 
com.mycompany.MyDocSet.documentation.\n# The default value is: org.doxygen.Publisher.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_PUBLISHER_ID    = org.doxygen.Publisher\n\n# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.\n# The default value is: Publisher.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_PUBLISHER_NAME  = Publisher\n\n# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three\n# additional HTML index files: index.hhp, index.hhc, and index.hhk. The\n# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop\n# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on\n# Windows.\n#\n# The HTML Help Workshop contains a compiler that can convert all HTML output\n# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML\n# files are now used as the Windows 98 help format, and will replace the old\n# Windows help format (.hlp) on all Windows platforms in the future. Compressed\n# HTML files also contain an index, a table of contents, and you can search for\n# words in the documentation. The HTML workshop also contains a viewer for\n# compressed HTML files.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_HTMLHELP      = NO\n\n# The CHM_FILE tag can be used to specify the file name of the resulting .chm\n# file. You can add a path in front of the file if the result should not be\n# written to the html output directory.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nCHM_FILE               =\n\n# The HHC_LOCATION tag can be used to specify the location (absolute path\n# including file name) of the HTML help compiler ( hhc.exe). 
If non-empty\n# doxygen will try to run the HTML help compiler on the generated index.hhp.\n# The file has to be specified with full path.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nHHC_LOCATION           =\n\n# The GENERATE_CHI flag controls if a separate .chi index file is generated (\n# YES) or that it should be included in the master .chm file ( NO).\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nGENERATE_CHI           = NO\n\n# The CHM_INDEX_ENCODING is used to encode HtmlHelp index ( hhk), content ( hhc)\n# and project file content.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nCHM_INDEX_ENCODING     =\n\n# The BINARY_TOC flag controls whether a binary table of contents is generated (\n# YES) or a normal table of contents ( NO) in the .chm file. Furthermore it\n# enables the Previous and Next buttons.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nBINARY_TOC             = NO\n\n# The TOC_EXPAND flag can be set to YES to add extra items for group members to\n# the table of contents of the HTML help documentation and to the tree view.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nTOC_EXPAND             = NO\n\n# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and\n# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that\n# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help\n# (.qch) of the generated HTML documentation.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_QHP           = NO\n\n# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify\n# the file name of the resulting .qch file. 
The path specified is relative to\n# the HTML output folder.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQCH_FILE               =\n\n# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help\n# Project output. For more information please see Qt Help Project / Namespace\n# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_NAMESPACE          = org.doxygen.Project\n\n# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt\n# Help Project output. For more information please see Qt Help Project / Virtual\n# Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-\n# folders).\n# The default value is: doc.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_VIRTUAL_FOLDER     = doc\n\n# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom\n# filter to add. For more information please see Qt Help Project / Custom\n# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-\n# filters).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_CUST_FILTER_NAME   =\n\n# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the\n# custom filter to add. For more information please see Qt Help Project / Custom\n# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-\n# filters).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_CUST_FILTER_ATTRS  =\n\n# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this\n# project's filter section matches. 
Qt Help Project / Filter Attributes (see:\n# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_SECT_FILTER_ATTRS  =\n\n# The QHG_LOCATION tag can be used to specify the location of Qt's\n# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the\n# generated .qhp file.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHG_LOCATION           =\n\n# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be\n# generated, together with the HTML files, they form an Eclipse help plugin. To\n# install this plugin and make it available under the help contents menu in\n# Eclipse, the contents of the directory containing the HTML and XML files needs\n# to be copied into the plugins directory of eclipse. The name of the directory\n# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.\n# After copying Eclipse needs to be restarted before the help appears.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_ECLIPSEHELP   = NO\n\n# A unique identifier for the Eclipse help plugin. When installing the plugin\n# the directory name containing the HTML and XML files should also have this\n# name. Each documentation set should have its own identifier.\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.\n\nECLIPSE_DOC_ID         = org.doxygen.Project\n\n# If you want full control over the layout of the generated HTML pages it might\n# be necessary to disable the index and replace it with your own. The\n# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top\n# of each HTML page. A value of NO enables the index and the value YES disables\n# it. 
Since the tabs in the index contain the same information as the navigation\n# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nDISABLE_INDEX          = YES\n\n# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index\n# structure should be generated to display hierarchical information. If the tag\n# value is set to YES, a side panel will be generated containing a tree-like\n# index structure (just like the one that is generated for HTML Help). For this\n# to work a browser that supports JavaScript, DHTML, CSS and frames is required\n# (i.e. any modern browser). Windows users are probably better off using the\n# HTML help feature. Via custom stylesheets (see HTML_EXTRA_STYLESHEET) one can\n# further fine-tune the look of the index. As an example, the default style\n# sheet generated by doxygen has an example that shows how to put an image at\n# the root of the tree instead of the PROJECT_NAME. 
Since the tree basically has\n# the same information as the tab index, you could consider setting\n# DISABLE_INDEX to YES when enabling this option.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_TREEVIEW      = YES\n\n# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that\n# doxygen will group on one line in the generated HTML documentation.\n#\n# Note that a value of 0 will completely suppress the enum values from appearing\n# in the overview section.\n# Minimum value: 0, maximum value: 20, default value: 4.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nENUM_VALUES_PER_LINE   = 4\n\n# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used\n# to set the initial width (in pixels) of the frame in which the tree is shown.\n# Minimum value: 0, maximum value: 1500, default value: 250.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nTREEVIEW_WIDTH         = 250\n\n# When the EXT_LINKS_IN_WINDOW option is set to YES doxygen will open links to\n# external symbols imported via tag files in a separate window.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nEXT_LINKS_IN_WINDOW    = NO\n\n# Use this tag to change the font size of LaTeX formulas included as images in\n# the HTML documentation. When you change the font size after a successful\n# doxygen run you need to manually remove any form_*.png images from the HTML\n# output directory to force them to be regenerated.\n# Minimum value: 8, maximum value: 50, default value: 10.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nFORMULA_FONTSIZE       = 10\n\n# Use the FORMULA_TRANSPARENT tag to determine whether or not the images\n# generated for formulas are transparent PNGs. 
Transparent PNGs are not\n# supported properly for IE 6.0, but are supported on all modern browsers.\n#\n# Note that when changing this option you need to delete any form_*.png files in\n# the HTML output directory before the changes take effect.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nFORMULA_TRANSPARENT    = YES\n\n# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see\n# http://www.mathjax.org) which uses client side Javascript for the rendering\n# instead of using prerendered bitmaps. Use this if you do not have LaTeX\n# installed or if you want formulas to look prettier in the HTML output. When\n# enabled you may also need to install MathJax separately and configure the path\n# to it using the MATHJAX_RELPATH option.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nUSE_MATHJAX            = NO\n\n# When MathJax is enabled you can set the default output format to be used for\n# the MathJax output. See the MathJax site (see:\n# http://docs.mathjax.org/en/latest/output.html) for more details.\n# Possible values are: HTML-CSS (which is slower, but has the best\n# compatibility), NativeMML (i.e. MathML) and SVG.\n# The default value is: HTML-CSS.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_FORMAT         = HTML-CSS\n\n# When MathJax is enabled you need to specify the location relative to the HTML\n# output directory using the MATHJAX_RELPATH option. The destination directory\n# should contain the MathJax.js script. For instance, if the mathjax directory\n# is located at the same level as the HTML output directory, then\n# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax\n# Content Delivery Network so you can quickly see the result without installing\n# MathJax. 
However, it is strongly recommended to install a local copy of\n# MathJax from http://www.mathjax.org before deployment.\n# The default value is: http://cdn.mathjax.org/mathjax/latest.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_RELPATH        = http://www.mathjax.org/mathjax\n\n# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax\n# extension names that should be enabled during MathJax rendering. For example\n# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_EXTENSIONS     =\n\n# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces\n# of code that will be used on startup of the MathJax code. See the MathJax site\n# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an\n# example see the documentation.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_CODEFILE       =\n\n# When the SEARCHENGINE tag is enabled doxygen will generate a search box for\n# the HTML output. The underlying search engine uses javascript and DHTML and\n# should work on any modern browser. Note that when using HTML help\n# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)\n# there is already a search function so this one should typically be disabled.\n# For large projects the javascript based search engine can be slow, then\n# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to\n# search using the keyboard; to jump to the search box use <access key> + S\n# (what the <access key> is depends on the OS and browser, but it is typically\n# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down\n# key> to jump into the search results window, the results can be navigated\n# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel\n# the search. 
The filter options can be selected when the cursor is inside the\n# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>\n# to select a filter and <Enter> or <escape> to activate or cancel the filter\n# option.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nSEARCHENGINE           = YES\n\n# When the SERVER_BASED_SEARCH tag is enabled the search engine will be\n# implemented using a web server instead of a web client using Javascript. There\n# are two flavors of web server based searching depending on the EXTERNAL_SEARCH\n# setting. When disabled, doxygen will generate a PHP script for searching and\n# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing\n# and searching needs to be provided by external tools. See the section\n# \"External Indexing and Searching\" for details.\n# The default value is: NO.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSERVER_BASED_SEARCH    = NO\n\n# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP\n# script for searching. Instead the search results are written to an XML file\n# which needs to be processed by an external indexer. 
Doxygen will invoke an\n# external search engine pointed to by the SEARCHENGINE_URL option to obtain the\n# search results.\n#\n# Doxygen ships with an example indexer ( doxyindexer) and search engine\n# (doxysearch.cgi) which are based on the open source search engine library\n# Xapian (see: http://xapian.org/).\n#\n# See the section \"External Indexing and Searching\" for details.\n# The default value is: NO.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTERNAL_SEARCH        = NO\n\n# The SEARCHENGINE_URL should point to a search engine hosted by a web server\n# which will return the search results when EXTERNAL_SEARCH is enabled.\n#\n# Doxygen ships with an example indexer ( doxyindexer) and search engine\n# (doxysearch.cgi) which are based on the open source search engine library\n# Xapian (see: http://xapian.org/). See the section \"External Indexing and\n# Searching\" for details.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSEARCHENGINE_URL       =\n\n# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed\n# search data is written to a file for indexing by an external tool. With the\n# SEARCHDATA_FILE tag the name of this file can be specified.\n# The default file is: searchdata.xml.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSEARCHDATA_FILE        = searchdata.xml\n\n# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the\n# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is\n# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple\n# projects and redirect the results back to the right project.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTERNAL_SEARCH_ID     =\n\n# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen\n# projects other than the one defined by this configuration file, but that are\n# all added to the same external search index. 
Each project needs to have a\n# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id\n# to a relative location where the documentation can be found. The format is:\n# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTRA_SEARCH_MAPPINGS  =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the LaTeX output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_LATEX tag is set to YES doxygen will generate LaTeX output.\n# The default value is: YES.\n\nGENERATE_LATEX         = NO\n\n# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: latex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_OUTPUT           = latex\n\n# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be\n# invoked.\n#\n# Note that when enabling USE_PDFLATEX this option is only used for generating\n# bitmaps for formulas in the HTML output, but not in the Makefile that is\n# written to the output directory.\n# The default file is: latex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_CMD_NAME         = latex\n\n# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate\n# index for LaTeX.\n# The default file is: makeindex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nMAKEINDEX_CMD_NAME     = makeindex\n\n# If the COMPACT_LATEX tag is set to YES doxygen generates more compact LaTeX\n# documents. 
This may be useful for small projects and may help to save some\n# trees in general.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nCOMPACT_LATEX          = NO\n\n# The PAPER_TYPE tag can be used to set the paper type that is used by the\n# printer.\n# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x\n# 14 inches) and executive (7.25 x 10.5 inches).\n# The default value is: a4.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nPAPER_TYPE             = a4\n\n# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names\n# that should be included in the LaTeX output. To get the times font for\n# instance you can specify\n# EXTRA_PACKAGES=times\n# If left blank no extra packages will be included.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nEXTRA_PACKAGES         =\n\n# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the\n# generated LaTeX document. The header should contain everything until the first\n# chapter. If it is left blank doxygen will generate a standard header. See\n# section \"Doxygen usage\" for information on how to let doxygen write the\n# default header to a separate file.\n#\n# Note: Only use a user-defined header if you know what you are doing! The\n# following commands have a special meaning inside the header: $title,\n# $datetime, $date, $doxygenversion, $projectname, $projectnumber. Doxygen will\n# replace them by respectively the title of the page, the current date and time,\n# only the current date, the version number of doxygen, the project name (see\n# PROJECT_NAME), or the project number (see PROJECT_NUMBER).\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_HEADER           =\n\n# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the\n# generated LaTeX document. The footer should contain everything after the last\n# chapter. 
If it is left blank doxygen will generate a standard footer.\n#\n# Note: Only use a user-defined footer if you know what you are doing!\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_FOOTER           =\n\n# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or\n# other source files which should be copied to the LATEX_OUTPUT output\n# directory. Note that the files will be copied as-is; there are no commands or\n# markers available.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_EXTRA_FILES      =\n\n# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is\n# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will\n# contain links (just like the HTML output) instead of page references. This\n# makes the output suitable for online browsing using a PDF viewer.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nPDF_HYPERLINKS         = YES\n\n# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate\n# the PDF file directly from the LaTeX files. Set this option to YES to get a\n# higher quality PDF documentation.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nUSE_PDFLATEX           = YES\n\n# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode\n# command to the generated LaTeX files. This will instruct LaTeX to keep running\n# if errors occur, instead of asking the user for help. This option is also used\n# when generating formulas in HTML.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_BATCHMODE        = NO\n\n# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the\n# index chapters (such as File Index, Compound Index, etc.) 
in the output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_HIDE_INDICES     = NO\n\n# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source\n# code with syntax highlighting in the LaTeX output.\n#\n# Note that which sources are shown also depends on other settings such as\n# SOURCE_BROWSER.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_SOURCE_CODE      = NO\n\n# The LATEX_BIB_STYLE tag can be used to specify the style to use for the\n# bibliography, e.g. plainnat, or ieeetr. See\n# http://en.wikipedia.org/wiki/BibTeX and \\cite for more info.\n# The default value is: plain.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_BIB_STYLE        = plain\n\n#---------------------------------------------------------------------------\n# Configuration options related to the RTF output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_RTF tag is set to YES doxygen will generate RTF output. The\n# RTF output is optimized for Word 97 and may not look too pretty with other RTF\n# readers/editors.\n# The default value is: NO.\n\nGENERATE_RTF           = NO\n\n# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: rtf.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_OUTPUT             = rtf\n\n# If the COMPACT_RTF tag is set to YES doxygen generates more compact RTF\n# documents. This may be useful for small projects and may help to save some\n# trees in general.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nCOMPACT_RTF            = NO\n\n# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will\n# contain hyperlink fields. 
The RTF file will contain links (just like the HTML\n# output) instead of page references. This makes the output suitable for online\n# browsing using Word or some other Word compatible readers that support those\n# fields.\n#\n# Note: WordPad (write) and others do not support links.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_HYPERLINKS         = NO\n\n# Load stylesheet definitions from file. Syntax is similar to doxygen's config\n# file, i.e. a series of assignments. You only have to provide replacements,\n# missing definitions are set to their default value.\n#\n# See also section \"Doxygen usage\" for information on how to generate the\n# default style sheet that doxygen normally uses.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_STYLESHEET_FILE    =\n\n# Set optional variables used in the generation of an RTF document. Syntax is\n# similar to doxygen's config file. A template extensions file can be generated\n# using doxygen -e rtf extensionFile.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_EXTENSIONS_FILE    =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the man page output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_MAN tag is set to YES doxygen will generate man pages for\n# classes and files.\n# The default value is: NO.\n\nGENERATE_MAN           = NO\n\n# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it. A directory man3 will be created inside the directory specified by\n# MAN_OUTPUT.\n# The default directory is: man.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_OUTPUT             = man\n\n# The MAN_EXTENSION tag determines the extension that is added to the generated\n# man pages. 
In case the manual section does not start with a number, the number\n# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is\n# optional.\n# The default value is: .3.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_EXTENSION          = .3\n\n# The MAN_SUBDIR tag determines the name of the directory created within\n# MAN_OUTPUT in which the man pages are placed. It defaults to man followed by\n# MAN_EXTENSION with the initial . removed.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_SUBDIR             =\n\n# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it\n# will generate one additional man file for each entity documented in the real\n# man page(s). These additional files only source the real man page, but without\n# them the man command would be unable to find the correct page.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_LINKS              = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the XML output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_XML tag is set to YES doxygen will generate an XML file that\n# captures the structure of the code including all documentation.\n# The default value is: NO.\n\nGENERATE_XML           = NO\n\n# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: xml.\n# This tag requires that the tag GENERATE_XML is set to YES.\n\nXML_OUTPUT             = xml\n\n# If the XML_PROGRAMLISTING tag is set to YES doxygen will dump the program\n# listings (including syntax highlighting and cross-referencing information) to\n# the XML output. 
Note that enabling this will significantly increase the size\n# of the XML output.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_XML is set to YES.\n\nXML_PROGRAMLISTING     = YES\n\n#---------------------------------------------------------------------------\n# Configuration options related to the DOCBOOK output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_DOCBOOK tag is set to YES doxygen will generate Docbook files\n# that can be used to generate PDF.\n# The default value is: NO.\n\nGENERATE_DOCBOOK       = NO\n\n# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.\n# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in\n# front of it.\n# The default directory is: docbook.\n# This tag requires that the tag GENERATE_DOCBOOK is set to YES.\n\nDOCBOOK_OUTPUT         = docbook\n\n#---------------------------------------------------------------------------\n# Configuration options for the AutoGen Definitions output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_AUTOGEN_DEF tag is set to YES doxygen will generate an AutoGen\n# Definitions (see http://autogen.sf.net) file that captures the structure of\n# the code including all documentation. 
Note that this feature is still\n# experimental and incomplete at the moment.\n# The default value is: NO.\n\nGENERATE_AUTOGEN_DEF   = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the Perl module output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_PERLMOD tag is set to YES doxygen will generate a Perl module\n# file that captures the structure of the code including all documentation.\n#\n# Note that this feature is still experimental and incomplete at the moment.\n# The default value is: NO.\n\nGENERATE_PERLMOD       = NO\n\n# If the PERLMOD_LATEX tag is set to YES doxygen will generate the necessary\n# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI\n# output from the Perl module output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_LATEX          = NO\n\n# If the PERLMOD_PRETTY tag is set to YES the Perl module output will be nicely\n# formatted so it can be parsed by a human reader. This is useful if you want to\n# understand what is going on. On the other hand, if this tag is set to NO the\n# size of the Perl module output will be much smaller and Perl will parse it\n# just the same.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_PRETTY         = YES\n\n# The names of the make variables in the generated doxyrules.make file are\n# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. 
This is useful\n# so different doxyrules.make files included by the same Makefile don't\n# overwrite each other's variables.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_MAKEVAR_PREFIX =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the preprocessor\n#---------------------------------------------------------------------------\n\n# If the ENABLE_PREPROCESSING tag is set to YES doxygen will evaluate all\n# C-preprocessor directives found in the sources and include files.\n# The default value is: YES.\n\nENABLE_PREPROCESSING   = YES\n\n# If the MACRO_EXPANSION tag is set to YES doxygen will expand all macro names\n# in the source code. If set to NO only conditional compilation will be\n# performed. Macro expansion can be done in a controlled way by setting\n# EXPAND_ONLY_PREDEF to YES.\n# The default value is: NO.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nMACRO_EXPANSION        = YES\n\n# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then\n# the macro expansion is limited to the macros specified with the PREDEFINED and\n# EXPAND_AS_DEFINED tags.\n# The default value is: NO.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nEXPAND_ONLY_PREDEF     = YES\n\n# If the SEARCH_INCLUDES tag is set to YES the includes files in the\n# INCLUDE_PATH will be searched if a #include is found.\n# The default value is: YES.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nSEARCH_INCLUDES        = YES\n\n# The INCLUDE_PATH tag can be used to specify one or more directories that\n# contain include files that are not input files but should be processed by the\n# preprocessor.\n# This tag requires that the tag SEARCH_INCLUDES is set to YES.\n\nINCLUDE_PATH           =\n\n# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard\n# patterns (like *.h and *.hpp) to filter 
out the header-files in the\n# directories. If left blank, the patterns specified with FILE_PATTERNS will be\n# used.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nINCLUDE_FILE_PATTERNS  =\n\n# The PREDEFINED tag can be used to specify one or more macro names that are\n# defined before the preprocessor is started (similar to the -D option of e.g.\n# gcc). The argument of the tag is a list of macros of the form: name or\n# name=definition (no spaces). If the definition and the \"=\" are omitted, \"=1\"\n# is assumed. To prevent a macro definition from being undefined via #undef or\n# recursively expanded use the := operator instead of the = operator.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nPREDEFINED             = \\\n\tRAPIDJSON_DOXYGEN_RUNNING \\\n\tRAPIDJSON_NAMESPACE_BEGIN=\"namespace rapidjson {\" \\\n\tRAPIDJSON_NAMESPACE_END=\"}\" \\\n\tRAPIDJSON_REMOVEFPTR_(x)=x \\\n\tRAPIDJSON_ENABLEIF_RETURN(cond,returntype)=\"RAPIDJSON_REMOVEFPTR_ returntype\" \\\n\tRAPIDJSON_DISABLEIF_RETURN(cond,returntype)=\"RAPIDJSON_REMOVEFPTR_ returntype\"\n\n# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this\n# tag can be used to specify a list of macro names that should be expanded. The\n# macro definition that is found in the sources will be used. Use the PREDEFINED\n# tag if you want to use a different macro definition that overrules the\n# definition found in the source code.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nEXPAND_AS_DEFINED      = \\\n\tRAPIDJSON_NOEXCEPT\n\n# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will\n# remove all references to function-like macros that are alone on a line, have\n# an all uppercase name, and do not end with a semicolon. 
Such function macros\n# are typically used for boiler-plate code, and will confuse the parser if not\n# removed.\n# The default value is: YES.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nSKIP_FUNCTION_MACROS   = YES\n\n#---------------------------------------------------------------------------\n# Configuration options related to external references\n#---------------------------------------------------------------------------\n\n# The TAGFILES tag can be used to specify one or more tag files. For each tag\n# file the location of the external documentation should be added. The format of\n# a tag file without this location is as follows:\n# TAGFILES = file1 file2 ...\n# Adding location for the tag files is done as follows:\n# TAGFILES = file1=loc1 \"file2 = loc2\" ...\n# where loc1 and loc2 can be relative or absolute paths or URLs. See the\n# section \"Linking to external documentation\" for more information about the use\n# of tag files.\n# Note: Each tag file must have a unique name (where the name does NOT include\n# the path). If a tag file is not located in the directory in which doxygen is\n# run, you must also specify the path to the tagfile here.\n\nTAGFILES               =\n\n# When a file name is specified after GENERATE_TAGFILE, doxygen will create a\n# tag file that is based on the input files it reads. See section \"Linking to\n# external documentation\" for more information about the usage of tag files.\n\nGENERATE_TAGFILE       =\n\n# If the ALLEXTERNALS tag is set to YES all external classes will be listed in\n# the class index. If set to NO only the inherited external classes will be\n# listed.\n# The default value is: NO.\n\nALLEXTERNALS           = NO\n\n# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed in\n# the modules index. 
If set to NO, only the current project's groups will be\n# listed.\n# The default value is: YES.\n\nEXTERNAL_GROUPS        = YES\n\n# If the EXTERNAL_PAGES tag is set to YES all external pages will be listed in\n# the related pages index. If set to NO, only the current project's pages will\n# be listed.\n# The default value is: YES.\n\nEXTERNAL_PAGES         = YES\n\n# The PERL_PATH should be the absolute path and name of the perl script\n# interpreter (i.e. the result of 'which perl').\n# The default file (with absolute path) is: /usr/bin/perl.\n\nPERL_PATH              = /usr/bin/perl\n\n#---------------------------------------------------------------------------\n# Configuration options related to the dot tool\n#---------------------------------------------------------------------------\n\n# If the CLASS_DIAGRAMS tag is set to YES doxygen will generate a class diagram\n# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to\n# NO turns the diagrams off. Note that this option also works with HAVE_DOT\n# disabled, but it is recommended to install and use dot, since it yields more\n# powerful graphs.\n# The default value is: YES.\n\nCLASS_DIAGRAMS         = YES\n\n# You can define message sequence charts within doxygen comments using the \\msc\n# command. Doxygen will then run the mscgen tool (see:\n# http://www.mcternan.me.uk/mscgen/)) to produce the chart and insert it in the\n# documentation. The MSCGEN_PATH tag allows you to specify the directory where\n# the mscgen tool resides. If left empty the tool is assumed to be found in the\n# default search path.\n\nMSCGEN_PATH            =\n\n# You can include diagrams made with dia in doxygen documentation. Doxygen will\n# then run dia to produce the diagram and insert it in the documentation. 
The\n# DIA_PATH tag allows you to specify the directory where the dia binary resides.\n# If left empty dia is assumed to be found in the default search path.\n\nDIA_PATH               =\n\n# If set to YES, the inheritance and collaboration graphs will hide inheritance\n# and usage relations if the target is undocumented or is not a class.\n# The default value is: YES.\n\nHIDE_UNDOC_RELATIONS   = YES\n\n# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is\n# available from the path. This tool is part of Graphviz (see:\n# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent\n# Bell Labs. The other options in this section have no effect if this option is\n# set to NO.\n# The default value is: NO.\n\nHAVE_DOT               = NO\n\n# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed\n# to run in parallel. When set to 0 doxygen will base this on the number of\n# processors available in the system. You can set it explicitly to a value\n# larger than 0 to get control over the balance between CPU load and processing\n# speed.\n# Minimum value: 0, maximum value: 32, default value: 0.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_NUM_THREADS        = 0\n\n# When you want a differently looking font in the dot files that doxygen\n# generates you can specify the font name using DOT_FONTNAME. 
You need to make\n# sure dot is able to find the font, which can be done by putting it in a\n# standard location or by setting the DOTFONTPATH environment variable or by\n# setting DOT_FONTPATH to the directory containing the font.\n# The default value is: Helvetica.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTNAME           = Helvetica\n\n# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of\n# dot graphs.\n# Minimum value: 4, maximum value: 24, default value: 10.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTSIZE           = 10\n\n# By default doxygen will tell dot to use the default font as specified with\n# DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set\n# the path where dot can find it using this tag.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTPATH           =\n\n# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for\n# each documented class showing the direct and indirect inheritance relations.\n# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCLASS_GRAPH            = YES\n\n# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a\n# graph for each documented class showing the direct and indirect implementation\n# dependencies (inheritance, containment, and class references variables) of the\n# class with other documented classes.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCOLLABORATION_GRAPH    = YES\n\n# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for\n# groups, showing the direct groups dependencies.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGROUP_GRAPHS           = YES\n\n# If the UML_LOOK tag is set to YES doxygen will generate inheritance and\n# 
collaboration diagrams in a style similar to the OMG's Unified Modeling\n# Language.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nUML_LOOK               = NO\n\n# If the UML_LOOK tag is enabled, the fields and methods are shown inside the\n# class node. If there are many fields or methods and many nodes the graph may\n# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the\n# number of items for each type to make the size more manageable. Set this to 0\n# for no limit. Note that the threshold may be exceeded by 50% before the limit\n# is enforced. So when you set the threshold to 10, up to 15 fields may appear,\n# but if the number exceeds 15, the total amount of fields shown is limited to\n# 10.\n# Minimum value: 0, maximum value: 100, default value: 10.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nUML_LIMIT_NUM_FIELDS   = 10\n\n# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and\n# collaboration graphs will show the relations between templates and their\n# instances.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nTEMPLATE_RELATIONS     = NO\n\n# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to\n# YES then doxygen will generate a graph for each documented file showing the\n# direct and indirect include dependencies of the file with other documented\n# files.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINCLUDE_GRAPH          = YES\n\n# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are\n# set to YES then doxygen will generate a graph for each documented file showing\n# the direct and indirect include dependencies of the file with other documented\n# files.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINCLUDED_BY_GRAPH      = YES\n\n# If the CALL_GRAPH tag is set to YES then 
doxygen will generate a call\n# dependency graph for every global function or class method.\n#\n# Note that enabling this option will significantly increase the time of a run.\n# So in most cases it will be better to enable call graphs for selected\n# functions only using the \\callgraph command.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCALL_GRAPH             = NO\n\n# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller\n# dependency graph for every global function or class method.\n#\n# Note that enabling this option will significantly increase the time of a run.\n# So in most cases it will be better to enable caller graphs for selected\n# functions only using the \\callergraph command.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCALLER_GRAPH           = NO\n\n# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a\n# graphical hierarchy of all classes instead of a textual one.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGRAPHICAL_HIERARCHY    = YES\n\n# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the\n# dependencies a directory has on other directories in a graphical way. 
The\n# dependency relations are determined by the #include relations between the\n# files in the directories.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDIRECTORY_GRAPH        = YES\n\n# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images\n# generated by dot.\n# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order\n# to make the SVG files visible in IE 9+ (other browsers do not have this\n# requirement).\n# Possible values are: png, jpg, gif and svg.\n# The default value is: png.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_IMAGE_FORMAT       = png\n\n# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to\n# enable generation of interactive SVG images that allow zooming and panning.\n#\n# Note that this requires a modern browser other than Internet Explorer. Tested\n# and working are Firefox, Chrome, Safari, and Opera.\n# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make\n# the SVG files visible. Older versions of IE do not have SVG support.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINTERACTIVE_SVG        = NO\n\n# The DOT_PATH tag can be used to specify the path where the dot tool can be\n# found. 
If left blank, it is assumed the dot tool can be found in the path.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_PATH               =\n\n# The DOTFILE_DIRS tag can be used to specify one or more directories that\n# contain dot files that are included in the documentation (see the \\dotfile\n# command).\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOTFILE_DIRS           =\n\n# The MSCFILE_DIRS tag can be used to specify one or more directories that\n# contain msc files that are included in the documentation (see the \\mscfile\n# command).\n\nMSCFILE_DIRS           =\n\n# The DIAFILE_DIRS tag can be used to specify one or more directories that\n# contain dia files that are included in the documentation (see the \\diafile\n# command).\n\nDIAFILE_DIRS           =\n\n# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes\n# that will be shown in the graph. If the number of nodes in a graph becomes\n# larger than this value, doxygen will truncate the graph, which is visualized\n# by representing a node as a red box. Note that if the number of direct\n# children of the root node in a graph is already larger than\n# DOT_GRAPH_MAX_NODES then the graph will not be shown at all. Also note that\n# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.\n# Minimum value: 0, maximum value: 10000, default value: 50.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_GRAPH_MAX_NODES    = 50\n\n# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs\n# generated by dot. A depth value of 3 means that only nodes reachable from the\n# root by following a path via at most 3 edges will be shown. Nodes that lie\n# further from the root node will be omitted. Note that setting this option to 1\n# or 2 may greatly reduce the computation time needed for large code bases. Also\n# note that the size of a graph can be further restricted by\n# DOT_GRAPH_MAX_NODES. 
Using a depth of 0 means no depth restriction.\n# Minimum value: 0, maximum value: 1000, default value: 0.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nMAX_DOT_GRAPH_DEPTH    = 0\n\n# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent\n# background. This is disabled by default, because dot on Windows does not seem\n# to support this out of the box.\n#\n# Warning: Depending on the platform used, enabling this option may lead to\n# badly anti-aliased labels on the edges of a graph (i.e. they become hard to\n# read).\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_TRANSPARENT        = NO\n\n# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output\n# files in one run (i.e. multiple -o and -T options on the command line). This\n# makes dot run faster, but since only newer versions of dot (>1.8.10) support\n# this, this feature is disabled by default.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_MULTI_TARGETS      = NO\n\n# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page\n# explaining the meaning of the various boxes and arrows in the dot generated\n# graphs.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGENERATE_LEGEND        = YES\n\n# If the DOT_CLEANUP tag is set to YES doxygen will remove the intermediate dot\n# files that are used to generate the various graphs.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_CLEANUP            = YES\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/Doxyfile.zh-cn.in",
"content": "# Doxyfile 1.8.7\n\n# This file describes the settings to be used by the documentation system\n# doxygen (www.doxygen.org) for a project.\n#\n# All text after a double hash (##) is considered a comment and is placed in\n# front of the TAG it is preceding.\n#\n# All text after a single hash (#) is considered a comment and will be ignored.\n# The format is:\n# TAG = value [value, ...]\n# For lists, items can also be appended using:\n# TAG += value [value, ...]\n# Values that contain spaces should be placed between quotes (\\\" \\\").\n\n#---------------------------------------------------------------------------\n# Project related configuration options\n#---------------------------------------------------------------------------\n\n# This tag specifies the encoding used for all characters in the config file\n# that follow. The default is UTF-8 which is also the encoding used for all text\n# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv\n# built into libc) for the transcoding. See http://www.gnu.org/software/libiconv\n# for the list of possible encodings.\n# The default value is: UTF-8.\n\nDOXYFILE_ENCODING      = UTF-8\n\n# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by\n# double-quotes, unless you are using Doxywizard) that should identify the\n# project for which the documentation is generated. This name is used in the\n# title of most generated pages and in a few other places.\n# The default value is: My Project.\n\nPROJECT_NAME           = RapidJSON\n\n# The PROJECT_NUMBER tag can be used to enter a project or revision number. This\n# could be handy for archiving the generated documentation or if some version\n# control system is used.\n\nPROJECT_NUMBER         =\n\n# Using the PROJECT_BRIEF tag one can provide an optional one line description\n# for a project that appears at the top of each page and should give the viewer\n# a quick idea about the purpose of the project. 
Keep the description short.\n\nPROJECT_BRIEF          = \"一个C++快速JSON解析器及生成器，包含SAX/DOM风格API\"\n\n# With the PROJECT_LOGO tag one can specify a logo or icon that is included in\n# the documentation. The maximum height of the logo should not exceed 55 pixels\n# and the maximum width should not exceed 200 pixels. Doxygen will copy the logo\n# to the output directory.\n\nPROJECT_LOGO           =\n\n# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path\n# into which the generated documentation will be written. If a relative path is\n# entered, it will be relative to the location where doxygen was started. If\n# left blank the current directory will be used.\n\nOUTPUT_DIRECTORY       = @CMAKE_CURRENT_BINARY_DIR@\n\n# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create 4096 sub-\n# directories (in 2 levels) under the output directory of each output format and\n# will distribute the generated files over these directories. Enabling this\n# option can be useful when feeding doxygen a huge amount of source files, where\n# putting all generated files in the same directory would otherwise cause\n# performance problems for the file system.\n# The default value is: NO.\n\nCREATE_SUBDIRS         = NO\n\n# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII\n# characters to appear in the names of generated files. If set to NO, non-ASCII\n# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode\n# U+3044.\n# The default value is: NO.\n\nALLOW_UNICODE_NAMES    = NO\n\n# The OUTPUT_LANGUAGE tag is used to specify the language in which all\n# documentation generated by doxygen is written. 
Doxygen will use this\n# information to generate all constant output in the proper language.\n# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,\n# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),\n# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,\n# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),\n# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,\n# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,\n# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,\n# Ukrainian and Vietnamese.\n# The default value is: English.\n\nOUTPUT_LANGUAGE        = Chinese\n\n# If the BRIEF_MEMBER_DESC tag is set to YES doxygen will include brief member\n# descriptions after the members that are listed in the file and class\n# documentation (similar to Javadoc). Set to NO to disable this.\n# The default value is: YES.\n\nBRIEF_MEMBER_DESC      = YES\n\n# If the REPEAT_BRIEF tag is set to YES doxygen will prepend the brief\n# description of a member or function before the detailed description\n#\n# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the\n# brief descriptions will be completely suppressed.\n# The default value is: YES.\n\nREPEAT_BRIEF           = YES\n\n# This tag implements a quasi-intelligent brief description abbreviator that is\n# used to form the text in various listings. Each string in this list, if found\n# as the leading text of the brief description, will be stripped from the text\n# and the result, after processing the whole list, is used as the annotated\n# text. Otherwise, the brief description is used as-is. 
If left blank, the\n# following values are used ($name is automatically replaced with the name of\n# the entity): The $name class, The $name widget, The $name file, is, provides,\n# specifies, contains, represents, a, an and the.\n\nABBREVIATE_BRIEF       = \"The $name class\" \\\n                         \"The $name widget\" \\\n                         \"The $name file\" \\\n                         is \\\n                         provides \\\n                         specifies \\\n                         contains \\\n                         represents \\\n                         a \\\n                         an \\\n                         the\n\n# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then\n# doxygen will generate a detailed section even if there is only a brief\n# description.\n# The default value is: NO.\n\nALWAYS_DETAILED_SEC    = NO\n\n# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all\n# inherited members of a class in the documentation of that class as if those\n# members were ordinary class members. Constructors, destructors and assignment\n# operators of the base classes will not be shown.\n# The default value is: NO.\n\nINLINE_INHERITED_MEMB  = NO\n\n# If the FULL_PATH_NAMES tag is set to YES doxygen will prepend the full path\n# before each file's name in the file list and in the header files. If set to NO\n# the shortest path that makes the file name unique will be used.\n# The default value is: YES.\n\nFULL_PATH_NAMES        = YES\n\n# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.\n# Stripping is only done if one of the specified strings matches the left-hand\n# part of the path. 
The tag can be used to show relative paths in the file list.\n# If left blank the directory from which doxygen is run is used as the path to\n# strip.\n#\n# Note that you can specify absolute paths here, but also relative paths, which\n# will be relative from the directory where doxygen is started.\n# This tag requires that the tag FULL_PATH_NAMES is set to YES.\n\nSTRIP_FROM_PATH        =\n\n# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the\n# path mentioned in the documentation of a class, which tells the reader which\n# header file to include in order to use a class. If left blank only the name of\n# the header file containing the class definition is used. Otherwise one should\n# specify the list of include paths that are normally passed to the compiler\n# using the -I flag.\n\nSTRIP_FROM_INC_PATH    =\n\n# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but\n# less readable) file names. This can be useful if your file system doesn't\n# support long names like on DOS, Mac, or CD-ROM.\n# The default value is: NO.\n\nSHORT_NAMES            = NO\n\n# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the\n# first line (until the first dot) of a Javadoc-style comment as the brief\n# description. If set to NO, the Javadoc-style will behave just like regular Qt-\n# style comments (thus requiring an explicit @brief command for a brief\n# description.)\n# The default value is: NO.\n\nJAVADOC_AUTOBRIEF      = NO\n\n# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first\n# line (until the first dot) of a Qt-style comment as the brief description. If\n# set to NO, the Qt-style will behave just like regular Qt-style comments (thus\n# requiring an explicit \\brief command for a brief description.)\n# The default value is: NO.\n\nQT_AUTOBRIEF           = NO\n\n# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a\n# multi-line C++ special comment block (i.e. 
a block of //! or /// comments) as\n# a brief description. This used to be the default behavior. The new default is\n# to treat a multi-line C++ comment block as a detailed description. Set this\n# tag to YES if you prefer the old behavior instead.\n#\n# Note that setting this tag to YES also means that Rational Rose comments are\n# not recognized any more.\n# The default value is: NO.\n\nMULTILINE_CPP_IS_BRIEF = NO\n\n# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the\n# documentation from any documented member that it re-implements.\n# The default value is: YES.\n\nINHERIT_DOCS           = YES\n\n# If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce a\n# new page for each member. If set to NO, the documentation of a member will be\n# part of the file/class/namespace that contains it.\n# The default value is: NO.\n\nSEPARATE_MEMBER_PAGES  = NO\n\n# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen\n# uses this value to replace tabs by spaces in code fragments.\n# Minimum value: 1, maximum value: 16, default value: 4.\n\nTAB_SIZE               = 4\n\n# This tag can be used to specify a number of aliases that act as commands in\n# the documentation. An alias has the form:\n# name=value\n# For example adding\n# \"sideeffect=@par Side Effects:\\n\"\n# will allow you to put the command \\sideeffect (or @sideeffect) in the\n# documentation, which will result in a user-defined paragraph with heading\n# \"Side Effects:\". You can put \\n's in the value part of an alias to insert\n# newlines.\n\nALIASES                =\n\n# This tag can be used to specify a number of word-keyword mappings (TCL only).\n# A mapping has the form \"name=value\". For example adding \"class=itcl::class\"\n# will allow you to use the command class in the itcl::class meaning.\n\nTCL_SUBST              =\n\n# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources\n# only. 
Doxygen will then generate output that is more tailored for C. For\n# instance, some of the names that are used will be different. The list of all\n# members will be omitted, etc.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_FOR_C  = NO\n\n# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or\n# Python sources only. Doxygen will then generate output that is more tailored\n# for that language. For instance, namespaces will be presented as packages,\n# qualified scopes will look different, etc.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_JAVA   = NO\n\n# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran\n# sources. Doxygen will then generate output that is tailored for Fortran.\n# The default value is: NO.\n\nOPTIMIZE_FOR_FORTRAN   = NO\n\n# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL\n# sources. Doxygen will then generate output that is tailored for VHDL.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_VHDL   = NO\n\n# Doxygen selects the parser to use depending on the extension of the files it\n# parses. With this tag you can assign which parser to use for a given\n# extension. Doxygen has a built-in mapping, but you can override or extend it\n# using this tag. The format is ext=language, where ext is a file extension, and\n# language is one of the parsers supported by doxygen: IDL, Java, Javascript,\n# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:\n# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:\n# Fortran. In the latter case the parser tries to guess whether the code is\n# fixed or free formatted code, this is the default for Fortran type files), VHDL. 
For\n# instance to make doxygen treat .inc files as Fortran files (default is PHP),\n# and .f files as C (default is Fortran), use: inc=Fortran f=C.\n#\n# Note: For files without extension you can use no_extension as a placeholder.\n#\n# Note that for custom extensions you also need to set FILE_PATTERNS otherwise\n# the files are not read by doxygen.\n\nEXTENSION_MAPPING      =\n\n# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments\n# according to the Markdown format, which allows for more readable\n# documentation. See http://daringfireball.net/projects/markdown/ for details.\n# The output of markdown processing is further processed by doxygen, so you can\n# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in\n# case of backward compatibility issues.\n# The default value is: YES.\n\nMARKDOWN_SUPPORT       = YES\n\n# When enabled doxygen tries to link words that correspond to documented\n# classes, or namespaces to their corresponding documentation. Such a link can\n# be prevented in individual cases by putting a % sign in front of the word\n# or globally by setting AUTOLINK_SUPPORT to NO.\n# The default value is: YES.\n\nAUTOLINK_SUPPORT       = YES\n\n# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want\n# to include (a tag file for) the STL sources as input, then you should set this\n# tag to YES in order to let doxygen match function declarations and\n# definitions whose arguments contain STL classes (e.g. func(std::string);\n# versus func(std::string) {}). 
This also makes the inheritance and collaboration\n# diagrams that involve STL classes more complete and accurate.\n# The default value is: NO.\n\nBUILTIN_STL_SUPPORT    = NO\n\n# If you use Microsoft's C++/CLI language, you should set this option to YES to\n# enable parsing support.\n# The default value is: NO.\n\nCPP_CLI_SUPPORT        = NO\n\n# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:\n# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen\n# will parse them like normal C++ but will assume all classes use public instead\n# of private inheritance when no explicit protection keyword is present.\n# The default value is: NO.\n\nSIP_SUPPORT            = NO\n\n# For Microsoft's IDL there are propget and propput attributes to indicate\n# getter and setter methods for a property. Setting this option to YES will make\n# doxygen replace the get and set methods by a property in the documentation.\n# This will only work if the methods are indeed getting or setting a simple\n# type. If this is not the case, or you want to show the methods anyway, you\n# should set this option to NO.\n# The default value is: YES.\n\nIDL_PROPERTY_SUPPORT   = YES\n\n# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC\n# tag is set to YES, then doxygen will reuse the documentation of the first\n# member in the group (if any) for the other members of the group. By default\n# all members of a group must be documented explicitly.\n# The default value is: NO.\n\nDISTRIBUTE_GROUP_DOC   = NO\n\n# Set the SUBGROUPING tag to YES to allow class member groups of the same type\n# (for instance a group of public functions) to be put as a subgroup of that\n# type (e.g. under the Public Functions section). Set it to NO to prevent\n# subgrouping. 
Alternatively, this can be done per class using the\n# \\nosubgrouping command.\n# The default value is: YES.\n\nSUBGROUPING            = YES\n\n# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions\n# are shown inside the group in which they are included (e.g. using \\ingroup)\n# instead of on a separate page (for HTML and Man pages) or section (for LaTeX\n# and RTF).\n#\n# Note that this feature does not work in combination with\n# SEPARATE_MEMBER_PAGES.\n# The default value is: NO.\n\nINLINE_GROUPED_CLASSES = YES\n\n# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions\n# with only public data fields or simple typedef fields will be shown inline in\n# the documentation of the scope in which they are defined (i.e. file,\n# namespace, or group documentation), provided this scope is documented. If set\n# to NO, structs, classes, and unions are shown on a separate page (for HTML and\n# Man pages) or section (for LaTeX and RTF).\n# The default value is: NO.\n\nINLINE_SIMPLE_STRUCTS  = NO\n\n# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or\n# enum is documented as struct, union, or enum with the name of the typedef. So\n# typedef struct TypeS {} TypeT, will appear in the documentation as a struct\n# with name TypeT. When disabled the typedef will appear as a member of a file,\n# namespace, or class. And the struct will be named TypeS. This can typically be\n# useful for C code in case the coding convention dictates that all compound\n# types are typedef'ed and only the typedef is referenced, never the tag name.\n# The default value is: NO.\n\nTYPEDEF_HIDES_STRUCT   = NO\n\n# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This\n# cache is used to resolve symbols given their name and scope. Since this can be\n# an expensive process and often the same symbol appears multiple times in the\n# code, doxygen keeps a cache of pre-resolved symbols. 
If the cache is too small\n# doxygen will become slower. If the cache is too large, memory is wasted. The\n# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range\n# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536\n# symbols. At the end of a run doxygen will report the cache usage and suggest\n# the optimal cache size from a speed point of view.\n# Minimum value: 0, maximum value: 9, default value: 0.\n\nLOOKUP_CACHE_SIZE      = 0\n\n#---------------------------------------------------------------------------\n# Build related configuration options\n#---------------------------------------------------------------------------\n\n# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in\n# documentation are documented, even if no documentation was available. Private\n# class members and static file members will be hidden unless the\n# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.\n# Note: This will also disable the warnings about undocumented members that are\n# normally produced when WARNINGS is set to YES.\n# The default value is: NO.\n\nEXTRACT_ALL            = NO\n\n# If the EXTRACT_PRIVATE tag is set to YES all private members of a class will\n# be included in the documentation.\n# The default value is: NO.\n\nEXTRACT_PRIVATE        = NO\n\n# If the EXTRACT_PACKAGE tag is set to YES all members with package or internal\n# scope will be included in the documentation.\n# The default value is: NO.\n\nEXTRACT_PACKAGE        = NO\n\n# If the EXTRACT_STATIC tag is set to YES all static members of a file will be\n# included in the documentation.\n# The default value is: NO.\n\nEXTRACT_STATIC         = NO\n\n# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) defined\n# locally in source files will be included in the documentation. If set to NO\n# only classes defined in header files are included. 
Does not have any effect\n# for Java sources.\n# The default value is: YES.\n\nEXTRACT_LOCAL_CLASSES  = YES\n\n# This flag is only useful for Objective-C code. When set to YES local methods,\n# which are defined in the implementation section but not in the interface are\n# included in the documentation. If set to NO only methods in the interface are\n# included.\n# The default value is: NO.\n\nEXTRACT_LOCAL_METHODS  = NO\n\n# If this flag is set to YES, the members of anonymous namespaces will be\n# extracted and appear in the documentation as a namespace called\n# 'anonymous_namespace{file}', where file will be replaced with the base name of\n# the file that contains the anonymous namespace. By default anonymous\n# namespaces are hidden.\n# The default value is: NO.\n\nEXTRACT_ANON_NSPACES   = NO\n\n# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all\n# undocumented members inside documented classes or files. If set to NO these\n# members will be included in the various overviews, but no documentation\n# section is generated. This option has no effect if EXTRACT_ALL is enabled.\n# The default value is: NO.\n\nHIDE_UNDOC_MEMBERS     = NO\n\n# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all\n# undocumented classes that are normally visible in the class hierarchy. If set\n# to NO these classes will be included in the various overviews. This option has\n# no effect if EXTRACT_ALL is enabled.\n# The default value is: NO.\n\nHIDE_UNDOC_CLASSES     = NO\n\n# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend\n# (class|struct|union) declarations. If set to NO these declarations will be\n# included in the documentation.\n# The default value is: NO.\n\nHIDE_FRIEND_COMPOUNDS  = NO\n\n# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any\n# documentation blocks found inside the body of a function. 
If set to NO these\n# blocks will be appended to the function's detailed documentation block.\n# The default value is: NO.\n\nHIDE_IN_BODY_DOCS      = NO\n\n# The INTERNAL_DOCS tag determines if documentation that is typed after a\n# \\internal command is included. If the tag is set to NO then the documentation\n# will be excluded. Set it to YES to include the internal documentation.\n# The default value is: NO.\n\nINTERNAL_DOCS          = NO\n\n# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file\n# names in lower-case letters. If set to YES upper-case letters are also\n# allowed. This is useful if you have classes or files whose names only differ\n# in case and if your file system supports case sensitive file names. Windows\n# and Mac users are advised to set this option to NO.\n# The default value is: system dependent.\n\nCASE_SENSE_NAMES       = NO\n\n# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with\n# their full class and namespace scopes in the documentation. 
If set to YES the\n# scope will be hidden.\n# The default value is: NO.\n\nHIDE_SCOPE_NAMES       = NO\n\n# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of\n# the files that are included by a file in the documentation of that file.\n# The default value is: YES.\n\nSHOW_INCLUDE_FILES     = YES\n\n# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each\n# grouped member an include statement to the documentation, telling the reader\n# which file to include in order to use the member.\n# The default value is: NO.\n\nSHOW_GROUPED_MEMB_INC  = NO\n\n# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include\n# files with double quotes in the documentation rather than with sharp brackets.\n# The default value is: NO.\n\nFORCE_LOCAL_INCLUDES   = NO\n\n# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the\n# documentation for inline members.\n# The default value is: YES.\n\nINLINE_INFO            = YES\n\n# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the\n# (detailed) documentation of file and class members alphabetically by member\n# name. If set to NO the members will appear in declaration order.\n# The default value is: YES.\n\nSORT_MEMBER_DOCS       = YES\n\n# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief\n# descriptions of file, namespace and class members alphabetically by member\n# name. If set to NO the members will appear in declaration order. Note that\n# this will also influence the order of the classes in the class list.\n# The default value is: NO.\n\nSORT_BRIEF_DOCS        = NO\n\n# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the\n# (brief and detailed) documentation of class members so that constructors and\n# destructors are listed first. 
If set to NO the constructors will appear in the\n# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.\n# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief\n# member documentation.\n# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting\n# detailed member documentation.\n# The default value is: NO.\n\nSORT_MEMBERS_CTORS_1ST = NO\n\n# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy\n# of group names into alphabetical order. If set to NO the group names will\n# appear in their defined order.\n# The default value is: NO.\n\nSORT_GROUP_NAMES       = NO\n\n# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by\n# fully-qualified names, including namespaces. If set to NO, the class list will\n# be sorted only by class name, not including the namespace part.\n# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.\n# Note: This option applies only to the class list, not to the alphabetical\n# list.\n# The default value is: NO.\n\nSORT_BY_SCOPE_NAME     = NO\n\n# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper\n# type resolution of all parameters of a function it will reject a match between\n# the prototype and the implementation of a member function even if there is\n# only one candidate or it is obvious which candidate to choose by doing a\n# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still\n# accept a match between prototype and implementation in such cases.\n# The default value is: NO.\n\nSTRICT_PROTO_MATCHING  = NO\n\n# The GENERATE_TODOLIST tag can be used to enable ( YES) or disable ( NO) the\n# todo list. This list is created by putting \\todo commands in the\n# documentation.\n# The default value is: YES.\n\nGENERATE_TODOLIST      = YES\n\n# The GENERATE_TESTLIST tag can be used to enable ( YES) or disable ( NO) the\n# test list. 
This list is created by putting \\test commands in the\n# documentation.\n# The default value is: YES.\n\nGENERATE_TESTLIST      = YES\n\n# The GENERATE_BUGLIST tag can be used to enable ( YES) or disable ( NO) the bug\n# list. This list is created by putting \\bug commands in the documentation.\n# The default value is: YES.\n\nGENERATE_BUGLIST       = YES\n\n# The GENERATE_DEPRECATEDLIST tag can be used to enable ( YES) or disable ( NO)\n# the deprecated list. This list is created by putting \\deprecated commands in\n# the documentation.\n# The default value is: YES.\n\nGENERATE_DEPRECATEDLIST= YES\n\n# The ENABLED_SECTIONS tag can be used to enable conditional documentation\n# sections, marked by \\if <section_label> ... \\endif and \\cond <section_label>\n# ... \\endcond blocks.\n\nENABLED_SECTIONS       = $(RAPIDJSON_SECTIONS)\n\n# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the\n# initial value of a variable or macro / define can have for it to appear in the\n# documentation. If the initializer consists of more lines than specified here\n# it will be hidden. Use a value of 0 to hide initializers completely. The\n# appearance of the value of individual variables and macros / defines can be\n# controlled using \\showinitializer or \\hideinitializer command in the\n# documentation regardless of this setting.\n# Minimum value: 0, maximum value: 10000, default value: 30.\n\nMAX_INITIALIZER_LINES  = 30\n\n# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at\n# the bottom of the documentation of classes and structs. If set to YES the list\n# will mention the files that were used to generate the documentation.\n# The default value is: YES.\n\nSHOW_USED_FILES        = YES\n\n# Set the SHOW_FILES tag to NO to disable the generation of the Files page. 
This\n# will remove the Files entry from the Quick Index and from the Folder Tree View\n# (if specified).\n# The default value is: YES.\n\nSHOW_FILES             = YES\n\n# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces\n# page. This will remove the Namespaces entry from the Quick Index and from the\n# Folder Tree View (if specified).\n# The default value is: YES.\n\nSHOW_NAMESPACES        = NO\n\n# The FILE_VERSION_FILTER tag can be used to specify a program or script that\n# doxygen should invoke to get the current version for each file (typically from\n# the version control system). Doxygen will invoke the program by executing (via\n# popen()) the command command input-file, where command is the value of the\n# FILE_VERSION_FILTER tag, and input-file is the name of an input file provided\n# by doxygen. Whatever the program writes to standard output is used as the file\n# version. For an example see the documentation.\n\nFILE_VERSION_FILTER    =\n\n# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed\n# by doxygen. The layout file controls the global structure of the generated\n# output files in an output format independent way. To create the layout file\n# that represents doxygen's defaults, run doxygen with the -l option. You can\n# optionally specify a file name after the option, if omitted DoxygenLayout.xml\n# will be used as the name of the layout file.\n#\n# Note that if you run doxygen from a directory containing a file called\n# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE\n# tag is left empty.\n\nLAYOUT_FILE            =\n\n# The CITE_BIB_FILES tag can be used to specify one or more bib files containing\n# the reference definitions. This must be a list of .bib files. The .bib\n# extension is automatically appended if omitted. This requires the bibtex tool\n# to be installed. 
See also http://en.wikipedia.org/wiki/BibTeX for more info.\n# For LaTeX the style of the bibliography can be controlled using\n# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the\n# search path. Do not use file names with spaces, bibtex cannot handle them. See\n# also \\cite for info how to create references.\n\nCITE_BIB_FILES         =\n\n#---------------------------------------------------------------------------\n# Configuration options related to warning and progress messages\n#---------------------------------------------------------------------------\n\n# The QUIET tag can be used to turn on/off the messages that are generated to\n# standard output by doxygen. If QUIET is set to YES this implies that the\n# messages are off.\n# The default value is: NO.\n\nQUIET                  = NO\n\n# The WARNINGS tag can be used to turn on/off the warning messages that are\n# generated to standard error ( stderr) by doxygen. If WARNINGS is set to YES\n# this implies that the warnings are on.\n#\n# Tip: Turn warnings on while writing the documentation.\n# The default value is: YES.\n\nWARNINGS               = YES\n\n# If the WARN_IF_UNDOCUMENTED tag is set to YES, then doxygen will generate\n# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag\n# will automatically be disabled.\n# The default value is: YES.\n\nWARN_IF_UNDOCUMENTED   = YES\n\n# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for\n# potential errors in the documentation, such as not documenting some parameters\n# in a documented function, or documenting parameters that don't exist or using\n# markup commands wrongly.\n# The default value is: YES.\n\nWARN_IF_DOC_ERROR      = YES\n\n# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that\n# are documented, but have no documentation for their parameters or return\n# value. 
If set to NO doxygen will only warn about wrong or incomplete parameter\n# documentation, but not about the absence of documentation.\n# The default value is: NO.\n\nWARN_NO_PARAMDOC       = NO\n\n# The WARN_FORMAT tag determines the format of the warning messages that doxygen\n# can produce. The string should contain the $file, $line, and $text tags, which\n# will be replaced by the file and line number from which the warning originated\n# and the warning text. Optionally the format may contain $version, which will\n# be replaced by the version of the file (if it could be obtained via\n# FILE_VERSION_FILTER)\n# The default value is: $file:$line: $text.\n\nWARN_FORMAT            = \"$file:$line: $text\"\n\n# The WARN_LOGFILE tag can be used to specify a file to which warning and error\n# messages should be written. If left blank the output is written to standard\n# error (stderr).\n\nWARN_LOGFILE           =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the input files\n#---------------------------------------------------------------------------\n\n# The INPUT tag is used to specify the files and/or directories that contain\n# documented source files. You may enter file names like myfile.cpp or\n# directories like /usr/src/myproject. 
Separate the files or directories with\n# spaces.\n# Note: If this tag is empty the current directory is searched.\n\nINPUT                  = readme.zh-cn.md \\\n                         CHANGELOG.md \\\n                         include/rapidjson/rapidjson.h \\\n                         include/ \\\n                         doc/features.zh-cn.md \\\n                         doc/tutorial.zh-cn.md \\\n                         doc/pointer.zh-cn.md \\\n                         doc/stream.zh-cn.md \\\n                         doc/encoding.zh-cn.md \\\n                         doc/dom.zh-cn.md \\\n                         doc/sax.zh-cn.md \\\n                         doc/schema.zh-cn.md \\\n                         doc/performance.zh-cn.md \\\n                         doc/internals.zh-cn.md \\\n                         doc/faq.zh-cn.md\n\n# This tag can be used to specify the character encoding of the source files\n# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses\n# libiconv (or the iconv built into libc) for the transcoding. See the libiconv\n# documentation (see: http://www.gnu.org/software/libiconv) for the list of\n# possible encodings.\n# The default value is: UTF-8.\n\nINPUT_ENCODING         = UTF-8\n\n# If the value of the INPUT tag contains directories, you can use the\n# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and\n# *.h) to filter out the source-files in the directories. 
If left blank the\n# following patterns are tested:*.c, *.cc, *.cxx, *.cpp, *.c++, *.java, *.ii,\n# *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h, *.hh, *.hxx, *.hpp,\n# *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc, *.m, *.markdown,\n# *.md, *.mm, *.dox, *.py, *.f90, *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf,\n# *.qsf, *.as and *.js.\n\nFILE_PATTERNS          = *.c \\\n                         *.cc \\\n                         *.cxx \\\n                         *.cpp \\\n                         *.h \\\n                         *.hh \\\n                         *.hxx \\\n                         *.hpp \\\n                         *.inc \\\n                         *.md\n\n# The RECURSIVE tag can be used to specify whether or not subdirectories should\n# be searched for input files as well.\n# The default value is: NO.\n\nRECURSIVE              = YES\n\n# The EXCLUDE tag can be used to specify files and/or directories that should be\n# excluded from the INPUT source files. This way you can easily exclude a\n# subdirectory from a directory tree whose root is specified with the INPUT tag.\n#\n# Note that relative paths are relative to the directory from which doxygen is\n# run.\n\nEXCLUDE                = ./include/rapidjson/msinttypes/\n\n# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or\n# directories that are symbolic links (a Unix file system feature) are excluded\n# from the input.\n# The default value is: NO.\n\nEXCLUDE_SYMLINKS       = NO\n\n# If the value of the INPUT tag contains directories, you can use the\n# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude\n# certain files from those directories.\n#\n# Note that the wildcards are matched against the file with absolute path, so to\n# exclude all test directories for example use the pattern */test/*\n\nEXCLUDE_PATTERNS       =\n\n# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names\n# (namespaces, classes, functions, etc.) 
that should be excluded from the\n# output. The symbol name can be a fully qualified name, a word, or if the\n# wildcard * is used, a substring. Examples: ANamespace, AClass,\n# AClass::ANamespace, ANamespace::*Test\n#\n# Note that the wildcards are matched against the file with absolute path, so to\n# exclude all test directories use the pattern */test/*\n\nEXCLUDE_SYMBOLS        = internal\n\n# The EXAMPLE_PATH tag can be used to specify one or more files or directories\n# that contain example code fragments that are included (see the \\include\n# command).\n\nEXAMPLE_PATH           =\n\n# If the value of the EXAMPLE_PATH tag contains directories, you can use the\n# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and\n# *.h) to filter out the source-files in the directories. If left blank all\n# files are included.\n\nEXAMPLE_PATTERNS       = *\n\n# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be\n# searched for input files to be used with the \\include or \\dontinclude commands\n# irrespective of the value of the RECURSIVE tag.\n# The default value is: NO.\n\nEXAMPLE_RECURSIVE      = NO\n\n# The IMAGE_PATH tag can be used to specify one or more files or directories\n# that contain images that are to be included in the documentation (see the\n# \\image command).\n\nIMAGE_PATH             = ./doc\n\n# The INPUT_FILTER tag can be used to specify a program that doxygen should\n# invoke to filter for each input file. Doxygen will invoke the filter program\n# by executing (via popen()) the command:\n#\n# <filter> <input-file>\n#\n# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the\n# name of an input file. Doxygen will then use the output that the filter\n# program writes to standard output. If FILTER_PATTERNS is specified, this tag\n# will be ignored.\n#\n# Note that the filter must not add or remove lines; it is applied before the\n# code is scanned, but not when the output code is generated. 
If lines are added\n# or removed, the anchors will not be placed correctly.\n\nINPUT_FILTER           =\n\n# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern\n# basis. Doxygen will compare the file name with each pattern and apply the\n# filter if there is a match. The filters are a list of the form: pattern=filter\n# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how\n# filters are used. If the FILTER_PATTERNS tag is empty or if none of the\n# patterns match the file name, INPUT_FILTER is applied.\n\nFILTER_PATTERNS        =\n\n# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using\n# INPUT_FILTER ) will also be used to filter the input files that are used for\n# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).\n# The default value is: NO.\n\nFILTER_SOURCE_FILES    = NO\n\n# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file\n# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and\n# it is also possible to disable source filtering for a specific pattern using\n# *.ext= (so without naming a filter).\n# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.\n\nFILTER_SOURCE_PATTERNS =\n\n# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that\n# is part of the input, its contents will be placed on the main page\n# (index.html). This can be useful if you have a project on for instance GitHub\n# and want to reuse the introduction page also for the doxygen output.\n\nUSE_MDFILE_AS_MAINPAGE = readme.zh-cn.md\n\n#---------------------------------------------------------------------------\n# Configuration options related to source browsing\n#---------------------------------------------------------------------------\n\n# If the SOURCE_BROWSER tag is set to YES then a list of source files will be\n# generated. 
Documented entities will be cross-referenced with these sources.\n#\n# Note: To get rid of all source code in the generated output, make sure that\n# also VERBATIM_HEADERS is set to NO.\n# The default value is: NO.\n\nSOURCE_BROWSER         = NO\n\n# Setting the INLINE_SOURCES tag to YES will include the body of functions,\n# classes and enums directly into the documentation.\n# The default value is: NO.\n\nINLINE_SOURCES         = NO\n\n# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any\n# special comment blocks from generated source code fragments. Normal C, C++ and\n# Fortran comments will always remain visible.\n# The default value is: YES.\n\nSTRIP_CODE_COMMENTS    = NO\n\n# If the REFERENCED_BY_RELATION tag is set to YES then for each documented\n# function all documented functions referencing it will be listed.\n# The default value is: NO.\n\nREFERENCED_BY_RELATION = NO\n\n# If the REFERENCES_RELATION tag is set to YES then for each documented function\n# all documented entities called/used by that function will be listed.\n# The default value is: NO.\n\nREFERENCES_RELATION    = NO\n\n# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set\n# to YES, then the hyperlinks from functions in REFERENCES_RELATION and\n# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will\n# link to the documentation.\n# The default value is: YES.\n\nREFERENCES_LINK_SOURCE = YES\n\n# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the\n# source code will show a tooltip with additional information such as prototype,\n# brief description and links to the definition and documentation. 
Since this\n# will make the HTML file larger and loading of large files a bit slower, you\n# can opt to disable this feature.\n# The default value is: YES.\n# This tag requires that the tag SOURCE_BROWSER is set to YES.\n\nSOURCE_TOOLTIPS        = YES\n\n# If the USE_HTAGS tag is set to YES then the references to source code will\n# point to the HTML generated by the htags(1) tool instead of doxygen's built-in\n# source browser. The htags tool is part of GNU's global source tagging system\n# (see http://www.gnu.org/software/global/global.html). You will need version\n# 4.8.6 or higher.\n#\n# To use it do the following:\n# - Install the latest version of global\n# - Enable SOURCE_BROWSER and USE_HTAGS in the config file\n# - Make sure the INPUT points to the root of the source tree\n# - Run doxygen as normal\n#\n# Doxygen will invoke htags (and that will in turn invoke gtags), so these\n# tools must be available from the command line (i.e. in the search path).\n#\n# The result: instead of the source browser generated by doxygen, the links to\n# source code will now point to the output of htags.\n# The default value is: NO.\n# This tag requires that the tag SOURCE_BROWSER is set to YES.\n\nUSE_HTAGS              = NO\n\n# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a\n# verbatim copy of the header file for each class for which an include is\n# specified. Set to NO to disable this.\n# See also: Section \\class.\n# The default value is: YES.\n\nVERBATIM_HEADERS       = YES\n\n# If the CLANG_ASSISTED_PARSING tag is set to YES, then doxygen will use the\n# clang parser (see: http://clang.llvm.org/) for more accurate parsing at the\n# cost of reduced performance. 
This can be particularly helpful with template\n# rich C++ code for which doxygen's built-in parser lacks the necessary type\n# information.\n# Note: The availability of this option depends on whether or not doxygen was\n# compiled with the --with-libclang option.\n# The default value is: NO.\n\nCLANG_ASSISTED_PARSING = NO\n\n# If clang assisted parsing is enabled you can provide the compiler with command\n# line options that you would normally use when invoking the compiler. Note that\n# the include paths will already be set by doxygen for the files and directories\n# specified with INPUT and INCLUDE_PATH.\n# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.\n\nCLANG_OPTIONS          =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the alphabetical class index\n#---------------------------------------------------------------------------\n\n# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all\n# compounds will be generated. Enable this if the project contains a lot of\n# classes, structs, unions or interfaces.\n# The default value is: YES.\n\nALPHABETICAL_INDEX     = NO\n\n# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in\n# which the alphabetical index list will be split.\n# Minimum value: 1, maximum value: 20, default value: 5.\n# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.\n\nCOLS_IN_ALPHA_INDEX    = 5\n\n# In case all classes in a project start with a common prefix, all classes will\n# be put under the same header in the alphabetical index. 
The IGNORE_PREFIX tag\n# can be used to specify a prefix (or a list of prefixes) that should be ignored\n# while generating the index headers.\n# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.\n\nIGNORE_PREFIX          =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the HTML output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output.\n# The default value is: YES.\n\nGENERATE_HTML          = YES\n\n# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: html.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_OUTPUT            = html/zh-cn\n\n# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each\n# generated HTML page (for example: .htm, .php, .asp).\n# The default value is: .html.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_HEADER tag can be used to specify a user-defined HTML header file for\n# each generated HTML page. If the tag is left blank doxygen will generate a\n# standard header.\n#\n# To get valid HTML, the header file must include any scripts and style sheets\n# that doxygen needs, which depend on the configuration options used (e.g.\n# the setting GENERATE_TREEVIEW). It is highly recommended to start with a\n# default header using\n# doxygen -w html new_header.html new_footer.html new_stylesheet.css\n# YourConfigFile\n# and then modify the file new_header.html. 
See also section \"Doxygen usage\"\n# for information on how to generate the default header that doxygen normally\n# uses.\n# Note: The header is subject to change so you typically have to regenerate the\n# default header when upgrading to a newer version of doxygen. For a description\n# of the possible markers and block names see the documentation.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_HEADER            = ./doc/misc/header.html\n\n# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each\n# generated HTML page. If the tag is left blank doxygen will generate a standard\n# footer. See HTML_HEADER for more information on how to generate a default\n# footer and what special commands can be used inside the footer. See also\n# section \"Doxygen usage\" for information on how to generate the default footer\n# that doxygen normally uses.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_FOOTER            = ./doc/misc/footer.html\n\n# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style\n# sheet that is used by each HTML page. It can be used to fine-tune the look of\n# the HTML output. If left blank doxygen will generate a default style sheet.\n# See also section \"Doxygen usage\" for information on how to generate the style\n# sheet that doxygen normally uses.\n# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as\n# it is more robust and this tag (HTML_STYLESHEET) will in the future become\n# obsolete.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_STYLESHEET        =\n\n# The HTML_EXTRA_STYLESHEET tag can be used to specify an additional user-\n# defined cascading style sheet that is included after the standard style sheets\n# created by doxygen. 
Using this option one can overrule certain style aspects.\n# This is preferred over using HTML_STYLESHEET since it does not replace the\n# standard style sheet and is therefore more robust against future updates.\n# Doxygen will copy the style sheet file to the output directory. For an example\n# see the documentation.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_EXTRA_STYLESHEET  = ./doc/misc/doxygenextra.css\n\n# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or\n# other source files which should be copied to the HTML output directory. Note\n# that these files will be copied to the base HTML output directory. Use the\n# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these\n# files. In the HTML_STYLESHEET file, use the file name only. Also note that the\n# files will be copied as-is; there are no commands or markers available.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_EXTRA_FILES       =\n\n# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen\n# will adjust the colors in the stylesheet and background images according to\n# this color. Hue is specified as an angle on a colorwheel, see\n# http://en.wikipedia.org/wiki/Hue for more information. For instance the value\n# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300\n# purple, and 360 is red again.\n# Minimum value: 0, maximum value: 359, default value: 220.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_HUE    = 220\n\n# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors\n# in the HTML output. For a value of 0 the output will use grayscales only. 
A\n# value of 255 will produce the most vivid colors.\n# Minimum value: 0, maximum value: 255, default value: 100.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_SAT    = 100\n\n# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the\n# luminance component of the colors in the HTML output. Values below 100\n# gradually make the output lighter, whereas values above 100 make the output\n# darker. The value divided by 100 is the actual gamma applied, so 80 represents\n# a gamma of 0.8. The value 220 represents a gamma of 2.2, and 100 does not\n# change the gamma.\n# Minimum value: 40, maximum value: 240, default value: 80.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_GAMMA  = 80\n\n# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML\n# page will contain the date and time when the page was generated. Setting this\n# to NO can help when comparing the output of multiple runs.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_TIMESTAMP         = YES\n\n# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML\n# documentation will contain sections that can be hidden and shown after the\n# page has loaded.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_DYNAMIC_SECTIONS  = NO\n\n# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries\n# shown in the various tree structured indices initially; the user can expand\n# and collapse entries dynamically later on. Doxygen will expand the tree to\n# such a level that at most the specified number of entries are visible (unless\n# a fully collapsed tree already exceeds this amount). So setting the number of\n# entries to 1 will produce a fully collapsed tree by default. 
0 is a special value\n# representing an infinite number of entries and will result in a fully expanded\n# tree by default.\n# Minimum value: 0, maximum value: 9999, default value: 100.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_INDEX_NUM_ENTRIES = 100\n\n# If the GENERATE_DOCSET tag is set to YES, additional index files will be\n# generated that can be used as input for Apple's Xcode 3 integrated development\n# environment (see: http://developer.apple.com/tools/xcode/), introduced with\n# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a\n# Makefile in the HTML output directory. Running make will produce the docset in\n# that directory and running make install will install the docset in\n# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at\n# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html\n# for more information.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_DOCSET        = NO\n\n# This tag determines the name of the docset feed. A documentation feed provides\n# an umbrella under which multiple documentation sets from a single provider\n# (such as a company or product suite) can be grouped.\n# The default value is: Doxygen generated docs.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_FEEDNAME        = \"Doxygen generated docs\"\n\n# This tag specifies a string that should uniquely identify the documentation\n# set bundle. This should be a reverse domain-name style string, e.g.\n# com.mycompany.MyDocSet. Doxygen will append .docset to the name.\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_BUNDLE_ID       = org.doxygen.Project\n\n# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify\n# the documentation publisher. This should be a reverse domain-name style\n# string, e.g. 
com.mycompany.MyDocSet.documentation.\n# The default value is: org.doxygen.Publisher.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_PUBLISHER_ID    = org.doxygen.Publisher\n\n# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.\n# The default value is: Publisher.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_PUBLISHER_NAME  = Publisher\n\n# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three\n# additional HTML index files: index.hhp, index.hhc, and index.hhk. The\n# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop\n# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on\n# Windows.\n#\n# The HTML Help Workshop contains a compiler that can convert all HTML output\n# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML\n# files are now used as the Windows 98 help format, and will replace the old\n# Windows help format (.hlp) on all Windows platforms in the future. Compressed\n# HTML files also contain an index, a table of contents, and you can search for\n# words in the documentation. The HTML workshop also contains a viewer for\n# compressed HTML files.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_HTMLHELP      = NO\n\n# The CHM_FILE tag can be used to specify the file name of the resulting .chm\n# file. You can add a path in front of the file if the result should not be\n# written to the html output directory.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nCHM_FILE               =\n\n# The HHC_LOCATION tag can be used to specify the location (absolute path\n# including file name) of the HTML help compiler ( hhc.exe). 
If non-empty\n# doxygen will try to run the HTML help compiler on the generated index.hhp.\n# The file has to be specified with full path.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nHHC_LOCATION           =\n\n# The GENERATE_CHI flag controls whether a separate .chi index file is generated\n# ( YES) or included in the master .chm file ( NO).\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nGENERATE_CHI           = NO\n\n# The CHM_INDEX_ENCODING tag is used to encode HtmlHelp index ( hhk), content (\n# hhc) and project file content.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nCHM_INDEX_ENCODING     =\n\n# The BINARY_TOC flag controls whether a binary table of contents is generated (\n# YES) or a normal table of contents ( NO) in the .chm file. Furthermore it\n# enables the Previous and Next buttons.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nBINARY_TOC             = NO\n\n# The TOC_EXPAND flag can be set to YES to add extra items for group members to\n# the table of contents of the HTML help documentation and to the tree view.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nTOC_EXPAND             = NO\n\n# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and\n# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that\n# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help\n# (.qch) of the generated HTML documentation.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_QHP           = NO\n\n# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify\n# the file name of the resulting .qch file. 
The path specified is relative to\n# the HTML output folder.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQCH_FILE               =\n\n# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help\n# Project output. For more information please see Qt Help Project / Namespace\n# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_NAMESPACE          = org.doxygen.Project\n\n# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt\n# Help Project output. For more information please see Qt Help Project / Virtual\n# Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-\n# folders).\n# The default value is: doc.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_VIRTUAL_FOLDER     = doc\n\n# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom\n# filter to add. For more information please see Qt Help Project / Custom\n# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-\n# filters).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_CUST_FILTER_NAME   =\n\n# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the\n# custom filter to add. For more information please see Qt Help Project / Custom\n# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-\n# filters).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_CUST_FILTER_ATTRS  =\n\n# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this\n# project's filter section matches. 
Qt Help Project / Filter Attributes (see:\n# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_SECT_FILTER_ATTRS  =\n\n# The QHG_LOCATION tag can be used to specify the location of Qt's\n# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the\n# generated .qhp file.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHG_LOCATION           =\n\n# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be\n# generated, together with the HTML files, they form an Eclipse help plugin. To\n# install this plugin and make it available under the help contents menu in\n# Eclipse, the contents of the directory containing the HTML and XML files needs\n# to be copied into the plugins directory of eclipse. The name of the directory\n# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.\n# After copying Eclipse needs to be restarted before the help appears.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_ECLIPSEHELP   = NO\n\n# A unique identifier for the Eclipse help plugin. When installing the plugin\n# the directory name containing the HTML and XML files should also have this\n# name. Each documentation set should have its own identifier.\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.\n\nECLIPSE_DOC_ID         = org.doxygen.Project\n\n# If you want full control over the layout of the generated HTML pages it might\n# be necessary to disable the index and replace it with your own. The\n# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top\n# of each HTML page. A value of NO enables the index and the value YES disables\n# it. 
Since the tabs in the index contain the same information as the navigation\n# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nDISABLE_INDEX          = YES\n\n# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index\n# structure should be generated to display hierarchical information. If the tag\n# value is set to YES, a side panel will be generated containing a tree-like\n# index structure (just like the one that is generated for HTML Help). For this\n# to work a browser that supports JavaScript, DHTML, CSS and frames is required\n# (i.e. any modern browser). Windows users are probably better off using the\n# HTML help feature. Via custom stylesheets (see HTML_EXTRA_STYLESHEET) one can\n# further fine-tune the look of the index. As an example, the default style\n# sheet generated by doxygen has an example that shows how to put an image at\n# the root of the tree instead of the PROJECT_NAME. 
Since the tree basically has\n# the same information as the tab index, you could consider setting\n# DISABLE_INDEX to YES when enabling this option.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_TREEVIEW      = YES\n\n# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that\n# doxygen will group on one line in the generated HTML documentation.\n#\n# Note that a value of 0 will completely suppress the enum values from appearing\n# in the overview section.\n# Minimum value: 0, maximum value: 20, default value: 4.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nENUM_VALUES_PER_LINE   = 4\n\n# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used\n# to set the initial width (in pixels) of the frame in which the tree is shown.\n# Minimum value: 0, maximum value: 1500, default value: 250.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nTREEVIEW_WIDTH         = 250\n\n# When the EXT_LINKS_IN_WINDOW option is set to YES doxygen will open links to\n# external symbols imported via tag files in a separate window.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nEXT_LINKS_IN_WINDOW    = NO\n\n# Use this tag to change the font size of LaTeX formulas included as images in\n# the HTML documentation. When you change the font size after a successful\n# doxygen run you need to manually remove any form_*.png images from the HTML\n# output directory to force them to be regenerated.\n# Minimum value: 8, maximum value: 50, default value: 10.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nFORMULA_FONTSIZE       = 10\n\n# Use the FORMULA_TRANSPARENT tag to determine whether or not the images\n# generated for formulas are transparent PNGs. 
Transparent PNGs are not\n# supported properly for IE 6.0, but are supported on all modern browsers.\n#\n# Note that when changing this option you need to delete any form_*.png files in\n# the HTML output directory before the changes take effect.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nFORMULA_TRANSPARENT    = YES\n\n# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see\n# http://www.mathjax.org) which uses client side Javascript for the rendering\n# instead of using prerendered bitmaps. Use this if you do not have LaTeX\n# installed or if you want the formulas to look prettier in the HTML output. When\n# enabled you may also need to install MathJax separately and configure the path\n# to it using the MATHJAX_RELPATH option.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nUSE_MATHJAX            = NO\n\n# When MathJax is enabled you can set the default output format to be used for\n# the MathJax output. See the MathJax site (see:\n# http://docs.mathjax.org/en/latest/output.html) for more details.\n# Possible values are: HTML-CSS (which is slower, but has the best\n# compatibility), NativeMML (i.e. MathML) and SVG.\n# The default value is: HTML-CSS.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_FORMAT         = HTML-CSS\n\n# When MathJax is enabled you need to specify the location relative to the HTML\n# output directory using the MATHJAX_RELPATH option. The destination directory\n# should contain the MathJax.js script. For instance, if the mathjax directory\n# is located at the same level as the HTML output directory, then\n# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax\n# Content Delivery Network so you can quickly see the result without installing\n# MathJax. 
However, it is strongly recommended to install a local copy of\n# MathJax from http://www.mathjax.org before deployment.\n# The default value is: http://cdn.mathjax.org/mathjax/latest.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_RELPATH        = http://www.mathjax.org/mathjax\n\n# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax\n# extension names that should be enabled during MathJax rendering. For example\n# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_EXTENSIONS     =\n\n# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces\n# of code that will be used on startup of the MathJax code. See the MathJax site\n# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an\n# example see the documentation.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_CODEFILE       =\n\n# When the SEARCHENGINE tag is enabled doxygen will generate a search box for\n# the HTML output. The underlying search engine uses javascript and DHTML and\n# should work on any modern browser. Note that when using HTML help\n# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)\n# there is already a search function so this one should typically be disabled.\n# For large projects the javascript based search engine can be slow, then\n# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to\n# search using the keyboard; to jump to the search box use <access key> + S\n# (what the <access key> is depends on the OS and browser, but it is typically\n# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down\n# key> to jump into the search results window, the results can be navigated\n# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel\n# the search. 
The filter options can be selected when the cursor is inside the\n# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>\n# to select a filter and <Enter> or <escape> to activate or cancel the filter\n# option.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nSEARCHENGINE           = YES\n\n# When the SERVER_BASED_SEARCH tag is enabled the search engine will be\n# implemented using a web server instead of a web client using Javascript. There\n# are two flavors of web server based searching depending on the EXTERNAL_SEARCH\n# setting. When disabled, doxygen will generate a PHP script for searching and\n# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing\n# and searching needs to be provided by external tools. See the section\n# \"External Indexing and Searching\" for details.\n# The default value is: NO.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSERVER_BASED_SEARCH    = NO\n\n# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP\n# script for searching. Instead the search results are written to an XML file\n# which needs to be processed by an external indexer. 
Doxygen will invoke an\n# external search engine pointed to by the SEARCHENGINE_URL option to obtain the\n# search results.\n#\n# Doxygen ships with an example indexer ( doxyindexer) and search engine\n# (doxysearch.cgi) which are based on the open source search engine library\n# Xapian (see: http://xapian.org/).\n#\n# See the section \"External Indexing and Searching\" for details.\n# The default value is: NO.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTERNAL_SEARCH        = NO\n\n# The SEARCHENGINE_URL should point to a search engine hosted by a web server\n# which will return the search results when EXTERNAL_SEARCH is enabled.\n#\n# Doxygen ships with an example indexer ( doxyindexer) and search engine\n# (doxysearch.cgi) which are based on the open source search engine library\n# Xapian (see: http://xapian.org/). See the section \"External Indexing and\n# Searching\" for details.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSEARCHENGINE_URL       =\n\n# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed\n# search data is written to a file for indexing by an external tool. With the\n# SEARCHDATA_FILE tag the name of this file can be specified.\n# The default file is: searchdata.xml.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSEARCHDATA_FILE        = searchdata.xml\n\n# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the\n# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is\n# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple\n# projects and redirect the results back to the right project.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTERNAL_SEARCH_ID     =\n\n# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen\n# projects other than the one defined by this configuration file, but that are\n# all added to the same external search index. 
Each project needs to have a\n# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of\n# a project to a relative location where the documentation can be found. The format is:\n# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTRA_SEARCH_MAPPINGS  =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the LaTeX output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_LATEX tag is set to YES doxygen will generate LaTeX output.\n# The default value is: YES.\n\nGENERATE_LATEX         = NO\n\n# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: latex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_OUTPUT           = latex\n\n# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be\n# invoked.\n#\n# Note that when enabling USE_PDFLATEX this option is only used for generating\n# bitmaps for formulas in the HTML output, but not in the Makefile that is\n# written to the output directory.\n# The default file is: latex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_CMD_NAME         = latex\n\n# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate\n# index for LaTeX.\n# The default file is: makeindex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nMAKEINDEX_CMD_NAME     = makeindex\n\n# If the COMPACT_LATEX tag is set to YES doxygen generates more compact LaTeX\n# documents. 
This may be useful for small projects and may help to save some\n# trees in general.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nCOMPACT_LATEX          = NO\n\n# The PAPER_TYPE tag can be used to set the paper type that is used by the\n# printer.\n# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x\n# 14 inches) and executive (7.25 x 10.5 inches).\n# The default value is: a4.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nPAPER_TYPE             = a4\n\n# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names\n# that should be included in the LaTeX output. To get the times font for\n# instance you can specify\n# EXTRA_PACKAGES=times\n# If left blank no extra packages will be included.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nEXTRA_PACKAGES         =\n\n# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the\n# generated LaTeX document. The header should contain everything until the first\n# chapter. If it is left blank doxygen will generate a standard header. See\n# section \"Doxygen usage\" for information on how to let doxygen write the\n# default header to a separate file.\n#\n# Note: Only use a user-defined header if you know what you are doing! The\n# following commands have a special meaning inside the header: $title,\n# $datetime, $date, $doxygenversion, $projectname, $projectnumber. Doxygen will\n# replace them by respectively the title of the page, the current date and time,\n# only the current date, the version number of doxygen, the project name (see\n# PROJECT_NAME), or the project number (see PROJECT_NUMBER).\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_HEADER           =\n\n# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the\n# generated LaTeX document. The footer should contain everything after the last\n# chapter. 
If it is left blank doxygen will generate a standard footer.\n#\n# Note: Only use a user-defined footer if you know what you are doing!\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_FOOTER           =\n\n# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or\n# other source files which should be copied to the LATEX_OUTPUT output\n# directory. Note that the files will be copied as-is; there are no commands or\n# markers available.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_EXTRA_FILES      =\n\n# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is\n# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will\n# contain links (just like the HTML output) instead of page references. This\n# makes the output suitable for online browsing using a PDF viewer.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nPDF_HYPERLINKS         = YES\n\n# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate\n# the PDF file directly from the LaTeX files. Set this option to YES to get a\n# higher quality PDF documentation.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nUSE_PDFLATEX           = YES\n\n# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode\n# command to the generated LaTeX files. This will instruct LaTeX to keep running\n# if errors occur, instead of asking the user for help. This option is also used\n# when generating formulas in HTML.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_BATCHMODE        = NO\n\n# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the\n# index chapters (such as File Index, Compound Index, etc.) 
in the output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_HIDE_INDICES     = NO\n\n# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source\n# code with syntax highlighting in the LaTeX output.\n#\n# Note that which sources are shown also depends on other settings such as\n# SOURCE_BROWSER.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_SOURCE_CODE      = NO\n\n# The LATEX_BIB_STYLE tag can be used to specify the style to use for the\n# bibliography, e.g. plainnat, or ieeetr. See\n# http://en.wikipedia.org/wiki/BibTeX and \\cite for more info.\n# The default value is: plain.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_BIB_STYLE        = plain\n\n#---------------------------------------------------------------------------\n# Configuration options related to the RTF output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_RTF tag is set to YES doxygen will generate RTF output. The\n# RTF output is optimized for Word 97 and may not look too pretty with other RTF\n# readers/editors.\n# The default value is: NO.\n\nGENERATE_RTF           = NO\n\n# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: rtf.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_OUTPUT             = rtf\n\n# If the COMPACT_RTF tag is set to YES doxygen generates more compact RTF\n# documents. This may be useful for small projects and may help to save some\n# trees in general.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nCOMPACT_RTF            = NO\n\n# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will\n# contain hyperlink fields. 
The RTF file will contain links (just like the HTML\n# output) instead of page references. This makes the output suitable for online\n# browsing using Word or some other Word compatible readers that support those\n# fields.\n#\n# Note: WordPad (write) and others do not support links.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_HYPERLINKS         = NO\n\n# Load stylesheet definitions from file. Syntax is similar to doxygen's config\n# file, i.e. a series of assignments. You only have to provide replacements,\n# missing definitions are set to their default value.\n#\n# See also section \"Doxygen usage\" for information on how to generate the\n# default style sheet that doxygen normally uses.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_STYLESHEET_FILE    =\n\n# Set optional variables used in the generation of an RTF document. Syntax is\n# similar to doxygen's config file. A template extensions file can be generated\n# using doxygen -e rtf extensionFile.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_EXTENSIONS_FILE    =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the man page output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_MAN tag is set to YES doxygen will generate man pages for\n# classes and files.\n# The default value is: NO.\n\nGENERATE_MAN           = NO\n\n# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it. A directory man3 will be created inside the directory specified by\n# MAN_OUTPUT.\n# The default directory is: man.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_OUTPUT             = man\n\n# The MAN_EXTENSION tag determines the extension that is added to the generated\n# man pages. 
In case the manual section does not start with a number, the number\n# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is\n# optional.\n# The default value is: .3.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_EXTENSION          = .3\n\n# The MAN_SUBDIR tag determines the name of the directory created within\n# MAN_OUTPUT in which the man pages are placed. It defaults to man followed by\n# MAN_EXTENSION with the initial . removed.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_SUBDIR             =\n\n# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it\n# will generate one additional man file for each entity documented in the real\n# man page(s). These additional files only source the real man page, but without\n# them the man command would be unable to find the correct page.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_LINKS              = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the XML output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_XML tag is set to YES doxygen will generate an XML file that\n# captures the structure of the code including all documentation.\n# The default value is: NO.\n\nGENERATE_XML           = NO\n\n# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: xml.\n# This tag requires that the tag GENERATE_XML is set to YES.\n\nXML_OUTPUT             = xml\n\n# If the XML_PROGRAMLISTING tag is set to YES doxygen will dump the program\n# listings (including syntax highlighting and cross-referencing information) to\n# the XML output. 
Note that enabling this will significantly increase the size\n# of the XML output.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_XML is set to YES.\n\nXML_PROGRAMLISTING     = YES\n\n#---------------------------------------------------------------------------\n# Configuration options related to the DOCBOOK output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_DOCBOOK tag is set to YES doxygen will generate Docbook files\n# that can be used to generate PDF.\n# The default value is: NO.\n\nGENERATE_DOCBOOK       = NO\n\n# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.\n# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in\n# front of it.\n# The default directory is: docbook.\n# This tag requires that the tag GENERATE_DOCBOOK is set to YES.\n\nDOCBOOK_OUTPUT         = docbook\n\n#---------------------------------------------------------------------------\n# Configuration options for the AutoGen Definitions output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_AUTOGEN_DEF tag is set to YES doxygen will generate an AutoGen\n# Definitions (see http://autogen.sf.net) file that captures the structure of\n# the code including all documentation. 
Note that this feature is still\n# experimental and incomplete at the moment.\n# The default value is: NO.\n\nGENERATE_AUTOGEN_DEF   = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the Perl module output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_PERLMOD tag is set to YES doxygen will generate a Perl module\n# file that captures the structure of the code including all documentation.\n#\n# Note that this feature is still experimental and incomplete at the moment.\n# The default value is: NO.\n\nGENERATE_PERLMOD       = NO\n\n# If the PERLMOD_LATEX tag is set to YES doxygen will generate the necessary\n# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI\n# output from the Perl module output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_LATEX          = NO\n\n# If the PERLMOD_PRETTY tag is set to YES the Perl module output will be nicely\n# formatted so it can be parsed by a human reader. This is useful if you want to\n# understand what is going on. On the other hand, if this tag is set to NO the\n# size of the Perl module output will be much smaller and Perl will parse it\n# just the same.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_PRETTY         = YES\n\n# The names of the make variables in the generated doxyrules.make file are\n# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. 
This is useful\n# so different doxyrules.make files included by the same Makefile don't\n# overwrite each other's variables.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_MAKEVAR_PREFIX =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the preprocessor\n#---------------------------------------------------------------------------\n\n# If the ENABLE_PREPROCESSING tag is set to YES doxygen will evaluate all\n# C-preprocessor directives found in the sources and include files.\n# The default value is: YES.\n\nENABLE_PREPROCESSING   = YES\n\n# If the MACRO_EXPANSION tag is set to YES doxygen will expand all macro names\n# in the source code. If set to NO only conditional compilation will be\n# performed. Macro expansion can be done in a controlled way by setting\n# EXPAND_ONLY_PREDEF to YES.\n# The default value is: NO.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nMACRO_EXPANSION        = YES\n\n# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then\n# the macro expansion is limited to the macros specified with the PREDEFINED and\n# EXPAND_AS_DEFINED tags.\n# The default value is: NO.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nEXPAND_ONLY_PREDEF     = YES\n\n# If the SEARCH_INCLUDES tag is set to YES the includes files in the\n# INCLUDE_PATH will be searched if a #include is found.\n# The default value is: YES.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nSEARCH_INCLUDES        = YES\n\n# The INCLUDE_PATH tag can be used to specify one or more directories that\n# contain include files that are not input files but should be processed by the\n# preprocessor.\n# This tag requires that the tag SEARCH_INCLUDES is set to YES.\n\nINCLUDE_PATH           =\n\n# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard\n# patterns (like *.h and *.hpp) to filter 
out the header-files in the\n# directories. If left blank, the patterns specified with FILE_PATTERNS will be\n# used.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nINCLUDE_FILE_PATTERNS  =\n\n# The PREDEFINED tag can be used to specify one or more macro names that are\n# defined before the preprocessor is started (similar to the -D option of e.g.\n# gcc). The argument of the tag is a list of macros of the form: name or\n# name=definition (no spaces). If the definition and the \"=\" are omitted, \"=1\"\n# is assumed. To prevent a macro definition from being undefined via #undef or\n# recursively expanded use the := operator instead of the = operator.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nPREDEFINED             = \\\n\tRAPIDJSON_DOXYGEN_RUNNING \\\n\tRAPIDJSON_NAMESPACE_BEGIN=\"namespace rapidjson {\" \\\n\tRAPIDJSON_NAMESPACE_END=\"}\" \\\n\tRAPIDJSON_REMOVEFPTR_(x)=x \\\n\tRAPIDJSON_ENABLEIF_RETURN(cond,returntype)=\"RAPIDJSON_REMOVEFPTR_ returntype\" \\\n\tRAPIDJSON_DISABLEIF_RETURN(cond,returntype)=\"RAPIDJSON_REMOVEFPTR_ returntype\"\n\n# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this\n# tag can be used to specify a list of macro names that should be expanded. The\n# macro definition that is found in the sources will be used. Use the PREDEFINED\n# tag if you want to use a different macro definition that overrules the\n# definition found in the source code.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nEXPAND_AS_DEFINED      = \\\n\tRAPIDJSON_NOEXCEPT\n\n# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will\n# remove all references to function-like macros that are alone on a line, have\n# an all uppercase name, and do not end with a semicolon. 
Such function macros\n# are typically used for boiler-plate code, and will confuse the parser if not\n# removed.\n# The default value is: YES.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nSKIP_FUNCTION_MACROS   = YES\n\n#---------------------------------------------------------------------------\n# Configuration options related to external references\n#---------------------------------------------------------------------------\n\n# The TAGFILES tag can be used to specify one or more tag files. For each tag\n# file the location of the external documentation should be added. The format of\n# a tag file without this location is as follows:\n# TAGFILES = file1 file2 ...\n# Adding location for the tag files is done as follows:\n# TAGFILES = file1=loc1 \"file2 = loc2\" ...\n# where loc1 and loc2 can be relative or absolute paths or URLs. See the\n# section \"Linking to external documentation\" for more information about the use\n# of tag files.\n# Note: Each tag file must have a unique name (where the name does NOT include\n# the path). If a tag file is not located in the directory in which doxygen is\n# run, you must also specify the path to the tagfile here.\n\nTAGFILES               =\n\n# When a file name is specified after GENERATE_TAGFILE, doxygen will create a\n# tag file that is based on the input files it reads. See section \"Linking to\n# external documentation\" for more information about the usage of tag files.\n\nGENERATE_TAGFILE       =\n\n# If the ALLEXTERNALS tag is set to YES all external class will be listed in the\n# class index. If set to NO only the inherited external classes will be listed.\n# The default value is: NO.\n\nALLEXTERNALS           = NO\n\n# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed in\n# the modules index. 
If set to NO, only the current project's groups will be\n# listed.\n# The default value is: YES.\n\nEXTERNAL_GROUPS        = YES\n\n# If the EXTERNAL_PAGES tag is set to YES all external pages will be listed in\n# the related pages index. If set to NO, only the current project's pages will\n# be listed.\n# The default value is: YES.\n\nEXTERNAL_PAGES         = YES\n\n# The PERL_PATH should be the absolute path and name of the perl script\n# interpreter (i.e. the result of 'which perl').\n# The default file (with absolute path) is: /usr/bin/perl.\n\nPERL_PATH              = /usr/bin/perl\n\n#---------------------------------------------------------------------------\n# Configuration options related to the dot tool\n#---------------------------------------------------------------------------\n\n# If the CLASS_DIAGRAMS tag is set to YES doxygen will generate a class diagram\n# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to\n# NO turns the diagrams off. Note that this option also works with HAVE_DOT\n# disabled, but it is recommended to install and use dot, since it yields more\n# powerful graphs.\n# The default value is: YES.\n\nCLASS_DIAGRAMS         = YES\n\n# You can define message sequence charts within doxygen comments using the \\msc\n# command. Doxygen will then run the mscgen tool (see:\n# http://www.mcternan.me.uk/mscgen/)) to produce the chart and insert it in the\n# documentation. The MSCGEN_PATH tag allows you to specify the directory where\n# the mscgen tool resides. If left empty the tool is assumed to be found in the\n# default search path.\n\nMSCGEN_PATH            =\n\n# You can include diagrams made with dia in doxygen documentation. Doxygen will\n# then run dia to produce the diagram and insert it in the documentation. 
The\n# DIA_PATH tag allows you to specify the directory where the dia binary resides.\n# If left empty dia is assumed to be found in the default search path.\n\nDIA_PATH               =\n\n# If set to YES, the inheritance and collaboration graphs will hide inheritance\n# and usage relations if the target is undocumented or is not a class.\n# The default value is: YES.\n\nHIDE_UNDOC_RELATIONS   = YES\n\n# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is\n# available from the path. This tool is part of Graphviz (see:\n# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent\n# Bell Labs. The other options in this section have no effect if this option is\n# set to NO\n# The default value is: NO.\n\nHAVE_DOT               = NO\n\n# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed\n# to run in parallel. When set to 0 doxygen will base this on the number of\n# processors available in the system. You can set it explicitly to a value\n# larger than 0 to get control over the balance between CPU load and processing\n# speed.\n# Minimum value: 0, maximum value: 32, default value: 0.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_NUM_THREADS        = 0\n\n# When you want a differently looking font in the dot files that doxygen\n# generates you can specify the font name using DOT_FONTNAME. 
You need to make\n# sure dot is able to find the font, which can be done by putting it in a\n# standard location or by setting the DOTFONTPATH environment variable or by\n# setting DOT_FONTPATH to the directory containing the font.\n# The default value is: Helvetica.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTNAME           = Helvetica\n\n# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of\n# dot graphs.\n# Minimum value: 4, maximum value: 24, default value: 10.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTSIZE           = 10\n\n# By default doxygen will tell dot to use the default font as specified with\n# DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set\n# the path where dot can find it using this tag.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTPATH           =\n\n# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for\n# each documented class showing the direct and indirect inheritance relations.\n# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCLASS_GRAPH            = YES\n\n# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a\n# graph for each documented class showing the direct and indirect implementation\n# dependencies (inheritance, containment, and class references variables) of the\n# class with other documented classes.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCOLLABORATION_GRAPH    = YES\n\n# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for\n# groups, showing the direct groups dependencies.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGROUP_GRAPHS           = YES\n\n# If the UML_LOOK tag is set to YES doxygen will generate inheritance and\n# 
collaboration diagrams in a style similar to the OMG's Unified Modeling\n# Language.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nUML_LOOK               = NO\n\n# If the UML_LOOK tag is enabled, the fields and methods are shown inside the\n# class node. If there are many fields or methods and many nodes the graph may\n# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the\n# number of items for each type to make the size more manageable. Set this to 0\n# for no limit. Note that the threshold may be exceeded by 50% before the limit\n# is enforced. So when you set the threshold to 10, up to 15 fields may appear,\n# but if the number exceeds 15, the total amount of fields shown is limited to\n# 10.\n# Minimum value: 0, maximum value: 100, default value: 10.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nUML_LIMIT_NUM_FIELDS   = 10\n\n# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and\n# collaboration graphs will show the relations between templates and their\n# instances.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nTEMPLATE_RELATIONS     = NO\n\n# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to\n# YES then doxygen will generate a graph for each documented file showing the\n# direct and indirect include dependencies of the file with other documented\n# files.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINCLUDE_GRAPH          = YES\n\n# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are\n# set to YES then doxygen will generate a graph for each documented file showing\n# the direct and indirect include dependencies of the file with other documented\n# files.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINCLUDED_BY_GRAPH      = YES\n\n# If the CALL_GRAPH tag is set to YES then 
doxygen will generate a call\n# dependency graph for every global function or class method.\n#\n# Note that enabling this option will significantly increase the time of a run.\n# So in most cases it will be better to enable call graphs for selected\n# functions only using the \\callgraph command.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCALL_GRAPH             = NO\n\n# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller\n# dependency graph for every global function or class method.\n#\n# Note that enabling this option will significantly increase the time of a run.\n# So in most cases it will be better to enable caller graphs for selected\n# functions only using the \\callergraph command.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCALLER_GRAPH           = NO\n\n# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a graphical\n# hierarchy of all classes instead of a textual one.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGRAPHICAL_HIERARCHY    = YES\n\n# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the\n# dependencies a directory has on other directories in a graphical way. 
The\n# dependency relations are determined by the #include relations between the\n# files in the directories.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDIRECTORY_GRAPH        = YES\n\n# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images\n# generated by dot.\n# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order\n# to make the SVG files visible in IE 9+ (other browsers do not have this\n# requirement).\n# Possible values are: png, jpg, gif and svg.\n# The default value is: png.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_IMAGE_FORMAT       = png\n\n# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to\n# enable generation of interactive SVG images that allow zooming and panning.\n#\n# Note that this requires a modern browser other than Internet Explorer. Tested\n# and working are Firefox, Chrome, Safari, and Opera.\n# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make\n# the SVG files visible. Older versions of IE do not have SVG support.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINTERACTIVE_SVG        = NO\n\n# The DOT_PATH tag can be used to specify the path where the dot tool can be\n# found. 
If left blank, it is assumed the dot tool can be found in the path.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_PATH               =\n\n# The DOTFILE_DIRS tag can be used to specify one or more directories that\n# contain dot files that are included in the documentation (see the \\dotfile\n# command).\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOTFILE_DIRS           =\n\n# The MSCFILE_DIRS tag can be used to specify one or more directories that\n# contain msc files that are included in the documentation (see the \\mscfile\n# command).\n\nMSCFILE_DIRS           =\n\n# The DIAFILE_DIRS tag can be used to specify one or more directories that\n# contain dia files that are included in the documentation (see the \\diafile\n# command).\n\nDIAFILE_DIRS           =\n\n# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes\n# that will be shown in the graph. If the number of nodes in a graph becomes\n# larger than this value, doxygen will truncate the graph, which is visualized\n# by representing a node as a red box. Note that if the number of direct\n# children of the root node in a graph is already larger than\n# DOT_GRAPH_MAX_NODES then the graph will not be shown at all. Also note that\n# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.\n# Minimum value: 0, maximum value: 10000, default value: 50.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_GRAPH_MAX_NODES    = 50\n\n# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs\n# generated by dot. A depth value of 3 means that only nodes reachable from the\n# root by following a path via at most 3 edges will be shown. Nodes that lay\n# further from the root node will be omitted. Note that setting this option to 1\n# or 2 may greatly reduce the computation time needed for large code bases. Also\n# note that the size of a graph can be further restricted by\n# DOT_GRAPH_MAX_NODES. 
Using a depth of 0 means no depth restriction.\n# Minimum value: 0, maximum value: 1000, default value: 0.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nMAX_DOT_GRAPH_DEPTH    = 0\n\n# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent\n# background. This is disabled by default, because dot on Windows does not seem\n# to support this out of the box.\n#\n# Warning: Depending on the platform used, enabling this option may lead to\n# badly anti-aliased labels on the edges of a graph (i.e. they become hard to\n# read).\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_TRANSPARENT        = NO\n\n# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output\n# files in one run (i.e. multiple -o and -T options on the command line). This\n# makes dot run faster, but since only newer versions of dot (>1.8.10) support\n# this, this feature is disabled by default.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_MULTI_TARGETS      = NO\n\n# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page\n# explaining the meaning of the various boxes and arrows in the dot generated\n# graphs.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGENERATE_LEGEND        = YES\n\n# If the DOT_CLEANUP tag is set to YES doxygen will remove the intermediate dot\n# files that are used to generate the various graphs.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_CLEANUP            = YES\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/diagram/architecture.dot",
    "content": "digraph {\n\tcompound=true\n\tfontname=\"Inconsolata, Consolas\"\n\tfontsize=10\n\tmargin=\"0,0\"\n\tranksep=0.2\n\tnodesep=0.5\n\tpenwidth=0.5\n\tcolorscheme=spectral7\n\t\n\tnode [shape=box, fontname=\"Inconsolata, Consolas\", fontsize=10, penwidth=0.5, style=filled, fillcolor=white]\n\tedge [fontname=\"Inconsolata, Consolas\", fontsize=10, penwidth=0.5]\n\n\tsubgraph cluster1 {\n\t\tmargin=\"10,10\"\n\t\tlabeljust=\"left\"\n\t\tlabel = \"SAX\"\n\t\tstyle=filled\n\t\tfillcolor=6\n\n\t\tReader -> Writer [style=invis]\n\t}\n\n\tsubgraph cluster2 {\n\t\tmargin=\"10,10\"\n\t\tlabeljust=\"left\"\n\t\tlabel = \"DOM\"\n\t\tstyle=filled\n\t\tfillcolor=7\n\n\t\tValue\n\t\tDocument\n\t}\n\n\tHandler [label=\"<<concept>>\\nHandler\"]\n\n\t{\n\t\tedge [arrowtail=onormal, dir=back]\n\t\tValue -> Document\n\t\tHandler -> Document\n\t\tHandler -> Writer\n\t}\n\n\t{\n\t\tedge [arrowhead=vee, style=dashed, constraint=false]\n\t\tReader -> Handler [label=\"calls\"]\n\t\tValue -> Handler [label=\"calls\"]\n\t\tDocument -> Reader [label=\"uses\"]\n\t}\n}"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/diagram/insituparsing.dot",
    "content": "digraph {\n\tcompound=true\n\tfontname=\"Inconsolata, Consolas\"\n\tfontsize=10\n\tmargin=\"0,0\"\n\tranksep=0.2\n\tpenwidth=0.5\n\t\n\tnode [fontname=\"Inconsolata, Consolas\", fontsize=10, penwidth=0.5]\n\tedge [fontname=\"Inconsolata, Consolas\", fontsize=10, arrowhead=normal]\n\n\t{\n\t\tnode [shape=record, fontsize=\"8\", margin=\"0.04\", height=0.2, color=gray]\n\t\toldjson [label=\"\\{|\\\"|m|s|g|\\\"|:|\\\"|H|e|l|l|o|\\\\|n|W|o|r|l|d|!|\\\"|,|\\\"|\\\\|u|0|0|7|3|t|a|r|s|\\\"|:|1|0|\\}\", xlabel=\"Before Parsing\"]\n\t\t//newjson [label=\"\\{|\\\"|<a>m|s|g|\\\\0|:|\\\"|<b>H|e|l|l|o|\\\\n|W|o|r|l|d|!|\\\\0|\\\"|,|\\\"|<c>s|t|a|r|s|\\\\0|t|a|r|s|:|1|0|\\}\", xlabel=\"After Parsing\"]\n\t\tnewjson [shape=plaintext, label=<\n<table BORDER=\"0\" CELLBORDER=\"1\" CELLSPACING=\"0\" CELLPADDING=\"2\"><tr>\n<td>{</td>\n<td>\"</td><td port=\"a\">m</td><td>s</td><td>g</td><td bgcolor=\"yellow\">\\\\0</td>\n<td>:</td>\n<td>\"</td><td port=\"b\">H</td><td>e</td><td>l</td><td>l</td><td>o</td><td bgcolor=\"yellow\">\\\\n</td><td bgcolor=\"yellow\">W</td><td bgcolor=\"yellow\">o</td><td bgcolor=\"yellow\">r</td><td bgcolor=\"yellow\">l</td><td bgcolor=\"yellow\">d</td><td bgcolor=\"yellow\">!</td><td bgcolor=\"yellow\">\\\\0</td><td>\"</td>\n<td>,</td>\n<td>\"</td><td port=\"c\" bgcolor=\"yellow\">s</td><td bgcolor=\"yellow\">t</td><td bgcolor=\"yellow\">a</td><td bgcolor=\"yellow\">r</td><td bgcolor=\"yellow\">s</td><td bgcolor=\"yellow\">\\\\0</td><td>t</td><td>a</td><td>r</td><td>s</td>\n<td>:</td>\n<td>1</td><td>0</td>\n<td>}</td>\n</tr></table>\n>, xlabel=\"After Parsing\"]\n\t}\n\n\tsubgraph cluster1 {\n\t\tmargin=\"10,10\"\n\t\tlabeljust=\"left\"\n\t\tlabel = \"Document by In situ Parsing\"\n\t\tstyle=filled\n\t\tfillcolor=gray95\n\t\tnode [shape=Mrecord, style=filled, colorscheme=spectral7]\n\t\t\n\t\troot [label=\"{object|}\", fillcolor=3]\n\n\t\t{\t\t\t\n\t\t\tmsg [label=\"{string|<a>}\", fillcolor=5]\n\t\t\thelloworld [label=\"{string|<a>}\", 
fillcolor=5]\n\t\t\tstars [label=\"{string|<a>}\", fillcolor=5]\n\t\t\tten [label=\"{number|10}\", fillcolor=6]\n\t\t}\n\t}\n\n\toldjson -> root [label=\" ParseInsitu()\" lhead=\"cluster1\"]\n\tedge [arrowhead=vee]\n\troot -> { msg; stars }\n\n\tedge [arrowhead=\"none\"]\n\tmsg  -> helloworld\n\tstars -> ten\n\n\t{\n\t\tedge [arrowhead=vee, arrowtail=dot, arrowsize=0.5, dir=both, tailclip=false]\n\t\tmsg:a:c -> newjson:a\n\t\thelloworld:a:c -> newjson:b\n\t\tstars:a:c -> newjson:c\n\t}\n\n\t//oldjson -> newjson [style=invis]\n}"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/diagram/iterative-parser-states-diagram.dot",
    "content": "digraph {\n    fontname=\"Inconsolata, Consolas\"\n    fontsize=10\n    margin=\"0,0\"\n    penwidth=0.0\n    \n    node [fontname=\"Inconsolata, Consolas\", fontsize=10, penwidth=0.5]\n    edge [fontname=\"Inconsolata, Consolas\", fontsize=10, penwidth=0.5]\n\n    node [shape = doublecircle]; Start; Finish;\n    node [shape = box; style = \"rounded, filled\"; fillcolor=white ];\n\n    Start -> ArrayInitial [label=\" [\"];\n    Start -> ObjectInitial [label=\" {\"];\n\n    subgraph clusterArray {\n        margin=\"10,10\"\n        style=filled\n        fillcolor=gray95\n        label = \"Array\"\n        \n        ArrayInitial; Element; ElementDelimiter; ArrayFinish;\n    }\n\n    subgraph clusterObject {\n        margin=\"10,10\"\n        style=filled\n        fillcolor=gray95\n        label = \"Object\"\n\n        ObjectInitial; MemberKey; KeyValueDelimiter; MemberValue; MemberDelimiter; ObjectFinish;\n    }\n\n    ArrayInitial -> ArrayInitial [label=\"[\"];\n    ArrayInitial -> ArrayFinish [label=\" ]\"];\n    ArrayInitial -> ObjectInitial [label=\"{\", constraint=false];\n    ArrayInitial -> Element [label=\"string\\nfalse\\ntrue\\nnull\\nnumber\"];\n\n    Element -> ArrayFinish [label=\"]\"];\n    Element -> ElementDelimiter [label=\",\"];\n\n    ElementDelimiter -> ArrayInitial [label=\" [\"];\n    ElementDelimiter -> ObjectInitial [label=\"{\"];\n    ElementDelimiter -> Element [label=\"string\\nfalse\\ntrue\\nnull\\nnumber\"];\n\n    ObjectInitial -> ObjectFinish [label=\" }\"];\n    ObjectInitial -> MemberKey [label=\" string \"];\n\n    MemberKey -> KeyValueDelimiter [label=\":\"];\n\n    KeyValueDelimiter -> ArrayInitial [label=\"[\"];\n    KeyValueDelimiter -> ObjectInitial [label=\" {\"];\n    KeyValueDelimiter -> MemberValue [label=\" string\\n false\\n true\\n null\\n number\"];\n\n    MemberValue -> ObjectFinish [label=\"}\"];\n    MemberValue -> MemberDelimiter [label=\",\"];\n\n    MemberDelimiter -> MemberKey [label=\" string 
\"];\n\n    ArrayFinish -> Finish;\n    ObjectFinish -> Finish;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/diagram/move1.dot",
    "content": "digraph {\n\tcompound=true\n\tfontname=\"Inconsolata, Consolas\"\n\tfontsize=10\n\tmargin=\"0,0\"\n\tranksep=0.2\n\tpenwidth=0.5\n\n\tnode [fontname=\"Inconsolata, Consolas\", fontsize=10, penwidth=0.5]\n\tedge [fontname=\"Inconsolata, Consolas\", fontsize=10, arrowhead=normal]\n\n\tsubgraph cluster1 {\n\t\tmargin=\"10,10\"\n\t\tlabeljust=\"left\"\n\t\tlabel = \"Before\"\n\t\tstyle=filled\n\t\tfillcolor=gray95\n\n\t\tnode [shape=Mrecord, style=filled, colorscheme=spectral7]\n\n\t\t{\n\t\t\trank = same\n\t\t\tb1 [label=\"{b:number|456}\", fillcolor=6]\n\t\t\ta1 [label=\"{a:number|123}\", fillcolor=6]\n\t\t}\n\n\t\ta1 -> b1 [style=\"dashed\", label=\"Move\", dir=back]\n\t}\n\n\tsubgraph cluster2 {\n\t\tmargin=\"10,10\"\n\t\tlabeljust=\"left\"\n\t\tlabel = \"After\"\n\t\tstyle=filled\n\t\tfillcolor=gray95\n\n\t\tnode [shape=Mrecord, style=filled, colorscheme=spectral7]\n\n\t\t{\n\t\t\trank = same\n\t\t\tb2 [label=\"{b:null|}\", fillcolor=1]\n\t\t\ta2 [label=\"{a:number|456}\", fillcolor=6]\n\t\t}\n\t\ta2 -> b2 [style=invis, dir=back]\n\t}\n\tb1 -> b2 [style=invis]\n}"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/diagram/move2.dot",
"content": "digraph {\n\tcompound=true\n\tfontname=\"Inconsolata, Consolas\"\n\tfontsize=10\n\tmargin=\"0,0\"\n\tranksep=0.2\n\tpenwidth=0.5\n\n\tnode [fontname=\"Inconsolata, Consolas\", fontsize=10, penwidth=0.5]\n\tedge [fontname=\"Inconsolata, Consolas\", fontsize=10, arrowhead=normal]\n\n\tsubgraph cluster1 {\n\t\tmargin=\"10,10\"\n\t\tlabeljust=\"left\"\n\t\tlabel = \"Before Copying (Hypothetical)\"\n\t\tstyle=filled\n\t\tfillcolor=gray95\n\n\t\tnode [shape=Mrecord, style=filled, colorscheme=spectral7]\n\n\t\tc1 [label=\"{contacts:array|}\", fillcolor=4]\n\t\tc11 [label=\"{|}\"]\n\t\tc12 [label=\"{|}\"]\n\t\tc13 [shape=\"none\", label=\"...\", style=\"solid\"]\n\t\to1 [label=\"{o:object|}\", fillcolor=3]\n\t\tghost [label=\"{o:object|}\", style=invis]\n\n\t\tc1 -> o1 [style=\"dashed\", label=\"AddMember\", constraint=false]\n\n\t\tedge [arrowhead=vee]\n\t\tc1 -> { c11; c12; c13 }\n\t\to1 -> ghost [style=invis]\n\t}\n\n\tsubgraph cluster2 {\n\t\tmargin=\"10,10\"\n\t\tlabeljust=\"left\"\n\t\tlabel = \"After Copying (Hypothetical)\"\n\t\tstyle=filled\n\t\tfillcolor=gray95\n\n\t\tnode [shape=Mrecord, style=filled, colorscheme=spectral7]\n\n\t\tc2 [label=\"{contacts:array|}\", fillcolor=4]\n\t\tc3 [label=\"{array|}\", fillcolor=4]\n\t\tc21 [label=\"{|}\"]\n\t\tc22 [label=\"{|}\"]\n\t\tc23 [shape=none, label=\"...\", style=\"solid\"]\n\t\to2 [label=\"{o:object|}\", fillcolor=3]\n\t\tcs [label=\"{string|\\\"contacts\\\"}\", fillcolor=5]\n\t\tc31 [label=\"{|}\"]\n\t\tc32 [label=\"{|}\"]\n\t\tc33 [shape=\"none\", label=\"...\", style=\"solid\"]\n\n\t\tedge [arrowhead=vee]\n\t\tc2 -> { c21; c22; c23 }\n\t\to2 -> cs\n\t\tcs -> c3 [arrowhead=none]\n\t\tc3 -> { c31; c32; c33 }\n\t}\n\tghost -> o2 [style=invis]\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/diagram/move3.dot",
    "content": "digraph {\n\tcompound=true\n\tfontname=\"Inconsolata, Consolas\"\n\tfontsize=10\n\tmargin=\"0,0\"\n\tranksep=0.2\n\tpenwidth=0.5\n\tforcelabels=true\n\n\tnode [fontname=\"Inconsolata, Consolas\", fontsize=10, penwidth=0.5]\n\tedge [fontname=\"Inconsolata, Consolas\", fontsize=10, arrowhead=normal]\n\n\tsubgraph cluster1 {\n\t\tmargin=\"10,10\"\n\t\tlabeljust=\"left\"\n\t\tlabel = \"Before Moving\"\n\t\tstyle=filled\n\t\tfillcolor=gray95\n\n\t\tnode [shape=Mrecord, style=filled, colorscheme=spectral7]\n\n\t\tc1 [label=\"{contacts:array|}\", fillcolor=4]\n\t\tc11 [label=\"{|}\"]\n\t\tc12 [label=\"{|}\"]\n\t\tc13 [shape=none, label=\"...\", style=\"solid\"]\n\t\to1 [label=\"{o:object|}\", fillcolor=3]\n\t\tghost [label=\"{o:object|}\", style=invis]\n\n\t\tc1 -> o1 [style=\"dashed\", constraint=false, label=\"AddMember\"]\n\n\t\tedge [arrowhead=vee]\n\t\tc1 -> { c11; c12; c13 }\n\t\to1 -> ghost [style=invis]\n\t}\n\n\tsubgraph cluster2 {\n\t\tmargin=\"10,10\"\n\t\tlabeljust=\"left\"\n\t\tlabel = \"After Moving\"\n\t\tstyle=filled\n\t\tfillcolor=gray95\n\n\t\tnode [shape=Mrecord, style=filled, colorscheme=spectral7]\n\n\t\tc2 [label=\"{contacts:null|}\", fillcolor=1]\n\t\tc3 [label=\"{array|}\", fillcolor=4]\n\t\tc21 [label=\"{|}\"]\n\t\tc22 [label=\"{|}\"]\n\t\tc23 [shape=\"none\", label=\"...\", style=\"solid\"]\n\t\to2 [label=\"{o:object|}\", fillcolor=3]\n\t\tcs [label=\"{string|\\\"contacts\\\"}\", fillcolor=5]\n\t\tc2 -> o2 [style=\"dashed\", constraint=false, label=\"AddMember\", style=invis]\n\n\t\tedge [arrowhead=vee]\n\t\tc3 -> { c21; c22; c23 }\n\t\to2 -> cs\n\t\tcs -> c3 [arrowhead=none]\n\t}\n\tghost -> o2 [style=invis]\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/diagram/normalparsing.dot",
    "content": "digraph {\n\tcompound=true\n\tfontname=\"Inconsolata, Consolas\"\n\tfontsize=10\n\tmargin=\"0,0\"\n\tranksep=0.2\n\tpenwidth=0.5\n\t\n\tnode [fontname=\"Inconsolata, Consolas\", fontsize=10, penwidth=0.5]\n\tedge [fontname=\"Inconsolata, Consolas\", fontsize=10, arrowhead=normal]\n\n\t{\n\t\tnode [shape=record, fontsize=\"8\", margin=\"0.04\", height=0.2, color=gray]\n\t\tnormaljson [label=\"\\{|\\\"|m|s|g|\\\"|:|\\\"|H|e|l|l|o|\\\\|n|W|o|r|l|d|!|\\\"|,|\\\"|\\\\|u|0|0|7|3|t|a|r|s\\\"|:|1|0|\\}\"]\n\n\t\t{\n\t\t\trank = same\n\t\t\tmsgstring  [label=\"m|s|g|\\\\0\"]\n\t\t\thelloworldstring  [label=\"H|e|l|l|o|\\\\n|W|o|r|l|d|!|\\\\0\"]\n\t\t\tstarsstring [label=\"s|t|a|r|s\\\\0\"]\n\t\t}\n\t}\n\n\tsubgraph cluster1 {\n\t\tmargin=\"10,10\"\n\t\tlabeljust=\"left\"\n\t\tlabel = \"Document by Normal Parsing\"\n\t\tstyle=filled\n\t\tfillcolor=gray95\n\t\tnode [shape=Mrecord, style=filled, colorscheme=spectral7]\n\t\t\n\t\troot [label=\"{object|}\", fillcolor=3]\n\n\t\t{\t\t\t\n\t\t\tmsg [label=\"{string|<a>}\", fillcolor=5]\n\t\t\thelloworld [label=\"{string|<a>}\", fillcolor=5]\n\t\t\tstars [label=\"{string|<a>}\", fillcolor=5]\n\t\t\tten [label=\"{number|10}\", fillcolor=6]\n\t\t}\n\t}\n\n\tnormaljson -> root [label=\" Parse()\" lhead=\"cluster1\"]\n\tedge [arrowhead=vee]\n\troot -> { msg; stars }\n\n\tedge [arrowhead=\"none\"]\n\tmsg  -> helloworld\n\tstars -> ten\n\n\tedge [arrowhead=vee, arrowtail=dot, arrowsize=0.5, dir=both, tailclip=false]\n\tmsg:a:c -> msgstring:w\n\thelloworld:a:c -> helloworldstring:w\n\tstars:a:c -> starsstring:w\n\n\tmsgstring -> helloworldstring -> starsstring [style=invis]\n}"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/diagram/simpledom.dot",
    "content": "digraph {\n\tcompound=true\n\tfontname=\"Inconsolata, Consolas\"\n\tfontsize=10\n\tmargin=\"0,0\"\n\tranksep=0.2\n\tpenwidth=0.5\n\t\n\tnode [fontname=\"Inconsolata, Consolas\", fontsize=10, penwidth=0.5]\n\tedge [fontname=\"Inconsolata, Consolas\", fontsize=10, arrowhead=normal]\n\n\t{\n\t\tnode [shape=record, fontsize=\"8\", margin=\"0.04\", height=0.2, color=gray]\n\t\tsrcjson [label=\"\\{|\\\"|p|r|o|j|e|c|t|\\\"|:|\\\"|r|a|p|i|d|j|s|o|n|\\\"|,|\\\"|s|t|a|r|s|\\\"|:|1|0|\\}\"]\n\t\tdstjson [label=\"\\{|\\\"|p|r|o|j|e|c|t|\\\"|:|\\\"|r|a|p|i|d|j|s|o|n|\\\"|,|\\\"|s|t|a|r|s|\\\"|:|1|1|\\}\"]\n\t}\n\n\t{\n\t\tnode [shape=\"box\", style=\"filled\", fillcolor=\"gray95\"]\n\t\tDocument2 [label=\"(Modified) Document\"]\n\t\tWriter\n\t}\n\n\tsubgraph cluster1 {\n\t\tmargin=\"10,10\"\n\t\tlabeljust=\"left\"\n\t\tlabel = \"Document\"\n\t\tstyle=filled\n\t\tfillcolor=gray95\n\t\tnode [shape=Mrecord, style=filled, colorscheme=spectral7]\n\t\t\n\t\troot [label=\"{object|}\", fillcolor=3]\n\n\t\t{\t\t\t\n\t\t\tproject [label=\"{string|\\\"project\\\"}\", fillcolor=5]\n\t\t\trapidjson [label=\"{string|\\\"rapidjson\\\"}\", fillcolor=5]\n\t\t\tstars [label=\"{string|\\\"stars\\\"}\", fillcolor=5]\n\t\t\tten [label=\"{number|10}\", fillcolor=6]\n\t\t}\n\n\t\tedge [arrowhead=vee]\n\t\troot -> { project; stars }\n\n\t\tedge [arrowhead=\"none\"]\n\t\tproject -> rapidjson\n\t\tstars -> ten\n\t}\n\n\tsrcjson -> root [label=\" Parse()\", lhead=\"cluster1\"]\n\n\tten -> Document2 [label=\" Increase \\\"stars\\\"\", ltail=\"cluster1\" ]\n\tDocument2  -> Writer [label=\" Traverse DOM by Accept()\"]\n\tWriter -> dstjson [label=\" Output to StringBuffer\"]\n}"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/diagram/tutorial.dot",
    "content": "digraph {\n\tcompound=true\n\tfontname=\"Inconsolata, Consolas\"\n\tfontsize=10\n\tmargin=\"0,0\"\n\tranksep=0.2\n\tpenwidth=0.5\n\t\n\tnode [fontname=\"Inconsolata, Consolas\", fontsize=10, penwidth=0.5]\n\tedge [fontname=\"Inconsolata, Consolas\", fontsize=10]\n\n\tsubgraph cluster1 {\n\t\tmargin=\"10,10\"\n\t\tlabeljust=\"left\"\n\t\tlabel = \"Document\"\n\t\tstyle=filled\n\t\tfillcolor=gray95\n\t\tnode [shape=Mrecord, style=filled, colorscheme=spectral7]\n\t\t\n\t\troot [label=\"{object|}\", fillcolor=3]\n\n\t\t{\t\t\t\n\t\t\thello [label=\"{string|\\\"hello\\\"}\", fillcolor=5]\n\t\t\tt [label=\"{string|\\\"t\\\"}\", fillcolor=5]\n\t\t\tf [label=\"{string|\\\"f\\\"}\", fillcolor=5]\n\t\t\tn [label=\"{string|\\\"n\\\"}\", fillcolor=5]\n\t\t\ti [label=\"{string|\\\"i\\\"}\", fillcolor=5]\n\t\t\tpi [label=\"{string|\\\"pi\\\"}\", fillcolor=5]\n\t\t\ta [label=\"{string|\\\"a\\\"}\", fillcolor=5]\n\n\t\t\tworld [label=\"{string|\\\"world\\\"}\", fillcolor=5]\n\t\t\ttrue [label=\"{true|}\", fillcolor=7]\n\t\t\tfalse [label=\"{false|}\", fillcolor=2]\n\t\t\tnull [label=\"{null|}\", fillcolor=1]\n\t\t\ti1 [label=\"{number|123}\", fillcolor=6]\n\t\t\tpi1 [label=\"{number|3.1416}\", fillcolor=6]\n\t\t\tarray [label=\"{array|size=4}\", fillcolor=4]\n\n\t\t\ta1 [label=\"{number|1}\", fillcolor=6]\n\t\t\ta2 [label=\"{number|2}\", fillcolor=6]\n\t\t\ta3 [label=\"{number|3}\", fillcolor=6]\n\t\t\ta4 [label=\"{number|4}\", fillcolor=6]\n\t\t}\n\n\t\tedge [arrowhead=vee]\n\t\troot -> { hello; t; f; n; i; pi; a }\t\t\n\t\tarray -> { a1; a2; a3; a4 }\n\n\t\tedge [arrowhead=none]\n\t\thello -> world\n\t\tt -> true\n\t\tf -> false\n\t\tn -> null\n\t\ti -> i1\n\t\tpi -> pi1\n\t\ta -> array\n\t}\n}"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/diagram/utilityclass.dot",
"content": "digraph {\n\trankdir=LR\n\tcompound=true\n\tfontname=\"Inconsolata, Consolas\"\n\tfontsize=10\n\tmargin=\"0,0\"\n\tranksep=0.3\n\tnodesep=0.15\n\tpenwidth=0.5\n\tcolorscheme=spectral7\n\t\n\tnode [shape=box, fontname=\"Inconsolata, Consolas\", fontsize=10, penwidth=0.5, style=filled, fillcolor=white]\n\tedge [fontname=\"Inconsolata, Consolas\", fontsize=10, penwidth=0.5]\n\n\tsubgraph cluster0 {\n\t\tstyle=filled\n\t\tfillcolor=4\n\n\t\tEncoding [label=\"<<concept>>\\nEncoding\"]\n\n\t\tedge [arrowtail=onormal, dir=back]\n\t\tEncoding -> { UTF8; UTF16; UTF32; ASCII; AutoUTF }\n\t\tUTF16 -> { UTF16LE; UTF16BE }\n\t\tUTF32 -> { UTF32LE; UTF32BE }\n\t}\n\n\tsubgraph cluster1 {\n\t\tstyle=filled\n\t\tfillcolor=5\n\n\t\tStream [label=\"<<concept>>\\nStream\"]\n\t\tInputByteStream [label=\"<<concept>>\\nInputByteStream\"]\n\t\tOutputByteStream [label=\"<<concept>>\\nOutputByteStream\"]\n\n\t\tedge [arrowtail=onormal, dir=back]\n\t\tStream -> { \n\t\t\tStringStream; InsituStringStream; StringBuffer; \n\t\t\tEncodedInputStream; EncodedOutputStream; \n\t\t\tAutoUTFInputStream; AutoUTFOutputStream \n\t\t\tInputByteStream; OutputByteStream\n\t\t}\n\n\t\tInputByteStream ->\t{ MemoryStream; FileReadStream }\n\t\tOutputByteStream -> { MemoryBuffer; FileWriteStream } \n\t}\n\n\tsubgraph cluster2 {\n\t\tstyle=filled\n\t\tfillcolor=3\n\n\t\tAllocator [label=\"<<concept>>\\nAllocator\"]\n\n\t\tedge [arrowtail=onormal, dir=back]\n\t\tAllocator -> { CrtAllocator; MemoryPoolAllocator }\n\t}\n\n\t{\n\t\tedge [arrowtail=odiamond, arrowhead=vee, dir=both]\n\t\tEncodedInputStream -> InputByteStream\n\t\tEncodedOutputStream -> OutputByteStream\n\t\tAutoUTFInputStream -> InputByteStream\n\t\tAutoUTFOutputStream -> OutputByteStream\n\t\tMemoryPoolAllocator -> Allocator [label=\"base\", tailport=s]\n\t}\n\n\t{\n\t\tedge [arrowhead=vee, style=dashed]\n\t\tAutoUTFInputStream -> AutoUTF\n\t\tAutoUTFOutputStream -> AutoUTF\n\t}\n\n\t//UTF32LE -> Stream [style=invis]\n}"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/dom.md",
"content": "# DOM\n\nDocument Object Model (DOM) is an in-memory representation of JSON for query and manipulation. The basic usage of DOM is described in [Tutorial](doc/tutorial.md). This section will describe some details and more advanced usages.\n\n[TOC]\n\n# Template {#Template}\n\nIn the tutorial, `Value` and `Document` were used. Similarly to `std::string`, these are actually `typedef`s of template classes:\n\n~~~~~~~~~~cpp\nnamespace rapidjson {\n\ntemplate <typename Encoding, typename Allocator = MemoryPoolAllocator<> >\nclass GenericValue {\n    // ...\n};\n\ntemplate <typename Encoding, typename Allocator = MemoryPoolAllocator<> >\nclass GenericDocument : public GenericValue<Encoding, Allocator> {\n    // ...\n};\n\ntypedef GenericValue<UTF8<> > Value;\ntypedef GenericDocument<UTF8<> > Document;\n\n} // namespace rapidjson\n~~~~~~~~~~\n\nUsers can customize these template parameters.\n\n## Encoding {#Encoding}\n\nThe `Encoding` parameter specifies the encoding of JSON string values in memory. Possible options are `UTF8`, `UTF16`, `UTF32`. Note that these 3 types are also template classes: `UTF8<>` is `UTF8<char>`, which means using `char` to store the characters. You may refer to [Encoding](doc/encoding.md) for details.\n\nSuppose a Windows application needs to query localization strings stored in JSON files. Unicode-enabled functions in Windows use UTF-16 (wide character) encoding. 
No matter what encoding was used in the JSON files, we can store the strings in UTF-16 in memory.\n\n~~~~~~~~~~cpp\nusing namespace rapidjson;\n\ntypedef GenericDocument<UTF16<> > WDocument;\ntypedef GenericValue<UTF16<> > WValue;\n\nFILE* fp = fopen(\"localization.json\", \"rb\"); // non-Windows use \"r\"\n\nchar readBuffer[256];\nFileReadStream bis(fp, readBuffer, sizeof(readBuffer));\n\nAutoUTFInputStream<unsigned, FileReadStream> eis(bis);  // wraps bis into eis\n\nWDocument d;\nd.ParseStream<0, AutoUTF<unsigned> >(eis);\n\nconst WValue locale(L\"ja\"); // Japanese\n\nMessageBoxW(hWnd, d[locale].GetString(), L\"Test\", MB_OK);\n~~~~~~~~~~\n\n## Allocator {#Allocator}\n\nThe `Allocator` defines which allocator class is used when allocating/deallocating memory for `Document`/`Value`. `Document` owns, or references, an `Allocator` instance. On the other hand, `Value` does not, in order to reduce memory consumption.\n\nThe default allocator used in `GenericDocument` is `MemoryPoolAllocator`. This allocator allocates memory sequentially, and cannot deallocate it piece by piece. This is very suitable when parsing a JSON into a DOM tree.\n\nAnother allocator is `CrtAllocator`, where CRT is short for C RunTime library. This allocator simply calls the standard `malloc()`/`realloc()`/`free()`. When there are many add and remove operations, this allocator may be preferred. However, it is far less efficient than `MemoryPoolAllocator`.\n\n# Parsing {#Parsing}\n\n`Document` provides several functions for parsing. 
Below, (1) is the fundamental function, while the others are helpers that call (1).\n\n~~~~~~~~~~cpp\nusing namespace rapidjson;\n\n// (1) Fundamental\ntemplate <unsigned parseFlags, typename SourceEncoding, typename InputStream>\nGenericDocument& GenericDocument::ParseStream(InputStream& is);\n\n// (2) Using the same Encoding for stream\ntemplate <unsigned parseFlags, typename InputStream>\nGenericDocument& GenericDocument::ParseStream(InputStream& is);\n\n// (3) Using default parse flags\ntemplate <typename InputStream>\nGenericDocument& GenericDocument::ParseStream(InputStream& is);\n\n// (4) In situ parsing\ntemplate <unsigned parseFlags>\nGenericDocument& GenericDocument::ParseInsitu(Ch* str);\n\n// (5) In situ parsing, using default parse flags\nGenericDocument& GenericDocument::ParseInsitu(Ch* str);\n\n// (6) Normal parsing of a string\ntemplate <unsigned parseFlags, typename SourceEncoding>\nGenericDocument& GenericDocument::Parse(const Ch* str);\n\n// (7) Normal parsing of a string, using same Encoding of Document\ntemplate <unsigned parseFlags>\nGenericDocument& GenericDocument::Parse(const Ch* str);\n\n// (8) Normal parsing of a string, using default parse flags\nGenericDocument& GenericDocument::Parse(const Ch* str);\n~~~~~~~~~~\n\nThe examples in [tutorial](doc/tutorial.md) use (8) for normal parsing of a string. The examples in [stream](doc/stream.md) use the first three. *In situ* parsing will be described soon.\n\nThe `parseFlags` are a combination of the following bit-flags:\n\nParse flags                   | Meaning\n------------------------------|-----------------------------------\n`kParseNoFlags`               | No flag is set.\n`kParseDefaultFlags`          | Default parse flags. 
It is equal to the macro `RAPIDJSON_PARSE_DEFAULT_FLAGS`, which is defined as `kParseNoFlags`.\n`kParseInsituFlag`            | In-situ (destructive) parsing.\n`kParseValidateEncodingFlag`  | Validate encoding of JSON strings.\n`kParseIterativeFlag`         | Iterative (constant complexity in terms of function call stack size) parsing.\n`kParseStopWhenDoneFlag`      | After parsing a complete JSON root from the stream, stop processing the rest of the stream. When this flag is used, the parser will not generate a `kParseErrorDocumentRootNotSingular` error. Use this flag for parsing multiple JSONs in the same stream.\n`kParseFullPrecisionFlag`     | Parse numbers in full precision (slower). If this flag is not set, normal precision (faster) is used. Normal precision has a maximum error of 3 [ULP](http://en.wikipedia.org/wiki/Unit_in_the_last_place).\n`kParseCommentsFlag`          | Allow one-line `// ...` and multi-line `/* ... */` comments (relaxed JSON syntax).\n`kParseNumbersAsStringsFlag`  | Parse numerical type values as strings.\n`kParseTrailingCommasFlag`    | Allow trailing commas at the end of objects and arrays (relaxed JSON syntax).\n`kParseNanAndInfFlag`         | Allow parsing `NaN`, `Inf`, `Infinity`, `-Inf` and `-Infinity` as `double` values (relaxed JSON syntax).\n`kParseEscapedApostropheFlag` | Allow escaped apostrophe `\\'` in strings (relaxed JSON syntax).\n\nBy using a non-type template parameter instead of a function parameter, the C++ compiler can generate code optimized for the specified combination, improving speed and reducing code size (if only a single specialization is used). The downside is that the flags need to be determined at compile time.\n\nThe `SourceEncoding` parameter defines what encoding is in the stream. This can differ from the `Encoding` of the `Document`. 
See the [Transcoding and Validation](#TranscodingAndValidation) section for details.\n\nAnd `InputStream` is the type of the input stream.\n\n## Parse Error {#ParseError}\n\nWhen parsing succeeds, the `Document` contains the parse results. When there is an error, the original DOM is left *unchanged*, and the error state of parsing can be obtained via `bool HasParseError()`, `ParseErrorCode GetParseError()` and `size_t GetErrorOffset()`.\n\nParse Error Code                            | Description\n--------------------------------------------|---------------------------------------------------\n`kParseErrorNone`                           | No error.\n`kParseErrorDocumentEmpty`                  | The document is empty.\n`kParseErrorDocumentRootNotSingular`        | The document root must not be followed by other values.\n`kParseErrorValueInvalid`                   | Invalid value.\n`kParseErrorObjectMissName`                 | Missing a name for an object member.\n`kParseErrorObjectMissColon`                | Missing a colon after the name of an object member.\n`kParseErrorObjectMissCommaOrCurlyBracket`  | Missing a comma or `}` after an object member.\n`kParseErrorArrayMissCommaOrSquareBracket`  | Missing a comma or `]` after an array element.\n`kParseErrorStringUnicodeEscapeInvalidHex`  | Incorrect hex digit after `\\\\u` escape in string.\n`kParseErrorStringUnicodeSurrogateInvalid`  | The surrogate pair in the string is invalid.\n`kParseErrorStringEscapeInvalid`            | Invalid escape character in string.\n`kParseErrorStringMissQuotationMark`        | Missing a closing quotation mark in string.\n`kParseErrorStringInvalidEncoding`          | Invalid encoding in string.\n`kParseErrorNumberTooBig`                   | Number too big to be stored in `double`.\n`kParseErrorNumberMissFraction`             | Missing fraction part in number.\n`kParseErrorNumberMissExponent`             | Missing exponent in number.\n\nThe offset of the error is defined as the character number from the beginning of the 
stream. Currently, RapidJSON does not keep track of line numbers.\n\nTo get an error message, RapidJSON provides English messages in `rapidjson/error/en.h`. Users can customize them for other locales, or use a custom localization system.\n\nHere is an example of parse error handling.\n\n~~~~~~~~~~cpp\n#include \"rapidjson/document.h\"\n#include \"rapidjson/error/en.h\"\n\n// ...\nDocument d;\nif (d.Parse(json).HasParseError()) {\n    fprintf(stderr, \"\\nError(offset %u): %s\\n\", \n        (unsigned)d.GetErrorOffset(),\n        GetParseError_En(d.GetParseError()));\n    // ...\n}\n~~~~~~~~~~\n\n## In Situ Parsing {#InSituParsing}\n\nFrom [Wikipedia](http://en.wikipedia.org/wiki/In_situ):\n\n> *In situ* ... is a Latin phrase that translates literally to \"on site\" or \"in position\". It means \"locally\", \"on site\", \"on the premises\" or \"in place\" to describe an event where it takes place, and is used in many different contexts.\n> ...\n> (In computer science) An algorithm is said to be an in situ algorithm, or in-place algorithm, if the extra amount of memory required to execute the algorithm is O(1), that is, does not exceed a constant no matter how large the input. For example, heapsort is an in situ sorting algorithm.\n\nIn the normal parsing process, a large overhead is to decode JSON strings and copy them to other buffers. *In situ* parsing decodes those JSON strings at the place where they are stored. This is possible in JSON because the length of a decoded string is always shorter than or equal to the one in the JSON. In this context, decoding a JSON string means processing the escapes, such as `\"\\n\"`, `\"\\u1234\"`, etc., and adding a null terminator (`'\\0'`) at the end of the string.\n\nThe following diagrams compare normal and *in situ* parsing. The JSON string values contain pointers to the decoded strings.\n\n![normal parsing](diagram/normalparsing.png)\n\nIn normal parsing, the decoded strings are copied to freshly allocated buffers. 
`\"\\\\n\"` (2 characters) is decoded as `\"\\n\"` (1 character). `\"\\\\u0073\"` (6 characters) is decoded as `\"s\"` (1 character).\n\n![instiu parsing](diagram/insituparsing.png)\n\n*In situ* parsing just modified the original JSON. Updated characters are highlighted in the diagram. If the JSON string does not contain escape character, such as `\"msg\"`, the parsing process merely replace the closing double quotation mark with a null character.\n\nSince *in situ* parsing modify the input, the parsing API needs `char*` instead of `const char*`.\n\n~~~~~~~~~~cpp\n// Read whole file into a buffer\nFILE* fp = fopen(\"test.json\", \"r\");\nfseek(fp, 0, SEEK_END);\nsize_t filesize = (size_t)ftell(fp);\nfseek(fp, 0, SEEK_SET);\nchar* buffer = (char*)malloc(filesize + 1);\nsize_t readLength = fread(buffer, 1, filesize, fp);\nbuffer[readLength] = '\\0';\nfclose(fp);\n\n// In situ parsing the buffer into d, buffer will also be modified\nDocument d;\nd.ParseInsitu(buffer);\n\n// Query/manipulate the DOM here...\n\nfree(buffer);\n// Note: At this point, d may have dangling pointers pointed to the deallocated buffer.\n~~~~~~~~~~\n\nThe JSON strings are marked as const-string. But they may not be really \"constant\". The life cycle of it depends on the JSON buffer.\n\nIn situ parsing minimizes allocation overheads and memory copying. Generally this improves cache coherence, which is an important factor of performance in modern computer.\n\nThere are some limitations of *in situ* parsing:\n\n1. The whole JSON is in memory.\n2. The source encoding in stream and target encoding in document must be the same.\n3. The buffer need to be retained until the document is no longer used.\n4. If the DOM need to be used for long period after parsing, and there are few JSON strings in the DOM, retaining the buffer may be a memory waste.\n\n*In situ* parsing is mostly suitable for short-term JSON that only need to be processed once, and then be released from memory. 
In practice, such situations are very common, for example, deserializing JSON into C++ objects, processing web requests represented in JSON, etc.\n\n## Transcoding and Validation {#TranscodingAndValidation}\n\nRapidJSON supports conversion between Unicode formats (officially termed UCS Transformation Format) internally. During DOM parsing, the source encoding of the stream can be different from the encoding of the DOM. For example, the source stream may contain a UTF-8 JSON, while the DOM uses UTF-16 encoding. There is example code in [EncodedInputStream](doc/stream.md).\n\nWhen writing a JSON from the DOM to an output stream, transcoding can also be used. An example is in [EncodedOutputStream](doc/stream.md).\n\nDuring transcoding, the source string is decoded into Unicode code points, and then the code points are encoded in the target format. During decoding, the byte sequence in the source string is validated. If it is not a valid sequence, the parser stops with a `kParseErrorStringInvalidEncoding` error.\n\nWhen the source encoding of the stream is the same as the encoding of the DOM, by default the parser will *not* validate the sequence. Users may use `kParseValidateEncodingFlag` to force validation.\n\n# Techniques {#Techniques}\n\nSome techniques for using the DOM API are discussed here.\n\n## DOM as SAX Event Publisher\n\nIn RapidJSON, stringifying a DOM with `Writer` may look a little bit weird.\n\n~~~~~~~~~~cpp\n// ...\nWriter<StringBuffer> writer(buffer);\nd.Accept(writer);\n~~~~~~~~~~\n\nActually, `Value::Accept()` is responsible for publishing SAX events about the value to the handler. With this design, `Value` and `Writer` are decoupled. `Value` can generate SAX events, and `Writer` can handle those events.\n\nUsers may create custom handlers for transforming the DOM into other formats. 
For example, a handler which converts the DOM into XML.\n\nFor more about SAX events and handlers, please refer to [SAX](doc/sax.md).\n\n## User Buffer {#UserBuffer}\n\nSome applications may try to avoid memory allocations whenever possible.\n\n`MemoryPoolAllocator` can support this by letting the user provide a buffer. The buffer can be on the program stack, or a \"scratch buffer\" which is statically allocated (a static/global array) for storing temporary data.\n\n`MemoryPoolAllocator` will use the user buffer to satisfy allocations. When the user buffer is used up, it will allocate a chunk of memory from the base allocator (by default the `CrtAllocator`).\n\nHere is an example of using stack memory. The first allocator is for storing values, while the second allocator is for storing temporary data during parsing.\n\n~~~~~~~~~~cpp\ntypedef GenericDocument<UTF8<>, MemoryPoolAllocator<>, MemoryPoolAllocator<>> DocumentType;\nchar valueBuffer[4096];\nchar parseBuffer[1024];\nMemoryPoolAllocator<> valueAllocator(valueBuffer, sizeof(valueBuffer));\nMemoryPoolAllocator<> parseAllocator(parseBuffer, sizeof(parseBuffer));\nDocumentType d(&valueAllocator, sizeof(parseBuffer), &parseAllocator);\nd.Parse(json);\n~~~~~~~~~~\n\nIf the total size of allocations is less than 4096+1024 bytes during parsing, this code does not invoke any heap allocation (via `new` or `malloc()`) at all.\n\nUsers can query the current memory consumption in bytes via `MemoryPoolAllocator::Size()`, and can then determine a suitable size for the user buffer.\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/dom.zh-cn.md",
    "content": "# DOM\n\n文档对象模型（Document Object Model, DOM）是一种罝于内存中的 JSON 表示方式，以供查询及操作。我们已于 [教程](doc/tutorial.zh-cn.md) 中介绍了 DOM 的基本用法，本节将讲述一些细节及高级用法。\n\n[TOC]\n\n# 模板 {#Template}\n\n教程中使用了 `Value` 和 `Document` 类型。与 `std::string` 相似，这些类型其实是两个模板类的 `typedef`：\n\n~~~~~~~~~~cpp\nnamespace rapidjson {\n\ntemplate <typename Encoding, typename Allocator = MemoryPoolAllocator<> >\nclass GenericValue {\n    // ...\n};\n\ntemplate <typename Encoding, typename Allocator = MemoryPoolAllocator<> >\nclass GenericDocument : public GenericValue<Encoding, Allocator> {\n    // ...\n};\n\ntypedef GenericValue<UTF8<> > Value;\ntypedef GenericDocument<UTF8<> > Document;\n\n} // namespace rapidjson\n~~~~~~~~~~\n\n使用者可以自定义这些模板参数。\n\n## 编码 {#Encoding}\n\n`Encoding` 参数指明在内存中的 JSON String 使用哪种编码。可行的选项有 `UTF8`、`UTF16`、`UTF32`。要注意这 3 个类型其实也是模板类。`UTF8<>` 等同 `UTF8<char>`，这代表它使用 `char` 来存储字符串。更多细节可以参考 [编码](doc/encoding.zh-cn.md)。\n\n这里是一个例子。假设一个 Windows 应用软件希望查询存储于 JSON 中的本地化字符串。Windows 中含 Unicode 的函数使用 UTF-16（宽字符）编码。无论 JSON 文件使用哪种编码，我们都可以把字符串以 UTF-16 形式存储在内存。\n\n~~~~~~~~~~cpp\nusing namespace rapidjson;\n\ntypedef GenericDocument<UTF16<> > WDocument;\ntypedef GenericValue<UTF16<> > WValue;\n\nFILE* fp = fopen(\"localization.json\", \"rb\"); // 非 Windows 平台使用 \"r\"\n\nchar readBuffer[256];\nFileReadStream bis(fp, readBuffer, sizeof(readBuffer));\n\nAutoUTFInputStream<unsigned, FileReadStream> eis(bis);  // 包装 bis 成 eis\n\nWDocument d;\nd.ParseStream<0, AutoUTF<unsigned> >(eis);\n\nconst WValue locale(L\"ja\"); // Japanese\n\nMessageBoxW(hWnd, d[locale].GetString(), L\"Test\", MB_OK);\n~~~~~~~~~~\n\n## 分配器 {#Allocator}\n\n`Allocator` 定义当 `Document`/`Value` 分配或释放内存时使用那个分配类。`Document` 拥有或引用到一个 `Allocator` 实例。而为了节省内存，`Value` 没有这么做。\n\n`GenericDocument` 的缺省分配器是 `MemoryPoolAllocator`。此分配器实际上会顺序地分配内存，并且不能逐一释放。当要解析一个 JSON 并生成 DOM，这种分配器是非常合适的。\n\nRapidJSON 还提供另一个分配器 `CrtAllocator`，当中 CRT 是 C 运行库（C RunTime library）的缩写。此分配器简单地读用标准的 `malloc()`/`realloc()`/`free()`。当我们需要许多增减操作，这种分配器会更为适合。然而这种分配器远远比 
`MemoryPoolAllocator` 低效。\n\n# 解析 {#Parsing}\n\n`Document` 提供几个解析函数。以下的 (1) 是根本的函数，其他都是调用 (1) 的协助函数。\n\n~~~~~~~~~~cpp\nusing namespace rapidjson;\n\n// (1) 根本\ntemplate <unsigned parseFlags, typename SourceEncoding, typename InputStream>\nGenericDocument& GenericDocument::ParseStream(InputStream& is);\n\n// (2) 使用流的编码\ntemplate <unsigned parseFlags, typename InputStream>\nGenericDocument& GenericDocument::ParseStream(InputStream& is);\n\n// (3) 使用缺省标志\ntemplate <typename InputStream>\nGenericDocument& GenericDocument::ParseStream(InputStream& is);\n\n// (4) 原位解析\ntemplate <unsigned parseFlags>\nGenericDocument& GenericDocument::ParseInsitu(Ch* str);\n\n// (5) 原位解析，使用缺省标志\nGenericDocument& GenericDocument::ParseInsitu(Ch* str);\n\n// (6) 正常解析一个字符串\ntemplate <unsigned parseFlags, typename SourceEncoding>\nGenericDocument& GenericDocument::Parse(const Ch* str);\n\n// (7) 正常解析一个字符串，使用 Document 的编码\ntemplate <unsigned parseFlags>\nGenericDocument& GenericDocument::Parse(const Ch* str);\n\n// (8) 正常解析一个字符串，使用缺省标志\nGenericDocument& GenericDocument::Parse(const Ch* str);\n~~~~~~~~~~\n\n[教程](doc/tutorial.zh-cn.md) 中的例子使用 (8) 去正常解析字符串。而 [流](doc/stream.zh-cn.md) 的例子使用前 3 个函数。我们将稍后介绍原位（*In situ*）解析。\n\n`parseFlags` 是以下位标志的组合：\n\n解析位标志                    | 意义\n------------------------------|-----------------------------------\n`kParseNoFlags`               | 没有任何标志。\n`kParseDefaultFlags`          | 缺省的解析选项。它等于 `RAPIDJSON_PARSE_DEFAULT_FLAGS` 宏，此宏定义为 `kParseNoFlags`。\n`kParseInsituFlag`            | 原位（破坏性）解析。\n`kParseValidateEncodingFlag`  | 校验 JSON 字符串的编码。\n`kParseIterativeFlag`         | 迭代式（调用堆栈大小为常数复杂度）解析。\n`kParseStopWhenDoneFlag`      | 当从流解析了一个完整的 JSON 根节点之后，停止继续处理余下的流。当使用了此标志，解析器便不会产生 `kParseErrorDocumentRootNotSingular` 错误。可使用本标志去解析同一个流里的多个 JSON。\n`kParseFullPrecisionFlag`     | 使用完整的精确度去解析数字（较慢）。如不设置此标志，则会使用正常的精确度（较快）。正常精确度会有最多 3 个 [ULP](http://en.wikipedia.org/wiki/Unit_in_the_last_place) 的误差。\n`kParseCommentsFlag`          | 容许单行 `// ...` 及多行 `/* ... 
*/` 注释（放宽的 JSON 语法）。\n`kParseNumbersAsStringsFlag`  | 把数字类型解析成字符串。\n`kParseTrailingCommasFlag`    | 容许在对象和数组结束前含有逗号（放宽的 JSON 语法）。\n`kParseNanAndInfFlag`         | 容许 `NaN`、`Inf`、`Infinity`、`-Inf` 及 `-Infinity` 作为 `double` 值（放宽的 JSON 语法）。\n`kParseEscapedApostropheFlag` | 容许字符串中转义单引号 `\\'` （放宽的 JSON 语法）。\n\n由于使用了非类型模板参数，而不是函数参数，C++ 编译器能为个别组合生成代码，以改善性能及减少代码尺寸（当只用单种特化）。缺点是需要在编译期决定标志。\n\n`SourceEncoding` 参数定义流使用了什么编码。这与 `Document` 的 `Encoding` 不相同。细节可参考 [转码和校验](#TranscodingAndValidation) 一节。\n\n此外 `InputStream` 是输入流的类型。\n\n## 解析错误 {#ParseError}\n\n当解析过程顺利完成，`Document` 便会含有解析结果。当过程出现错误，原来的 DOM 会*维持不变*。可使用 `bool HasParseError()`、`ParseErrorCode GetParseError()` 及 `size_t GetErrorOffset()` 获取解析的错误状态。\n\n解析错误代号                                | 描述\n--------------------------------------------|---------------------------------------------------\n`kParseErrorNone`                           | 无错误。\n`kParseErrorDocumentEmpty`                  | 文档是空的。\n`kParseErrorDocumentRootNotSingular`        | 文档的根后面不能有其它值。\n`kParseErrorValueInvalid`                   | 不合法的值。\n`kParseErrorObjectMissName`                 | Object 成员缺少名字。\n`kParseErrorObjectMissColon`                | Object 成员名字后缺少冒号。\n`kParseErrorObjectMissCommaOrCurlyBracket`  | Object 成员后缺少逗号或 `}`。\n`kParseErrorArrayMissCommaOrSquareBracket`  | Array 元素后缺少逗号或 `]` 。\n`kParseErrorStringUnicodeEscapeInvalidHex`  | String 中的 `\\\\u` 转义符后含非十六进位数字。\n`kParseErrorStringUnicodeSurrogateInvalid`  | String 中的代理对（surrogate pair）不合法。\n`kParseErrorStringEscapeInvalid`            | String 含非法转义字符。\n`kParseErrorStringMissQuotationMark`        | String 缺少关闭引号。\n`kParseErrorStringInvalidEncoding`          | String 含非法编码。\n`kParseErrorNumberTooBig`                   | Number 的值太大，不能存储于 `double`。\n`kParseErrorNumberMissFraction`             | Number 缺少了小数部分。\n`kParseErrorNumberMissExponent`             | Number 缺少了指数。\n\n错误的偏移量定义为从流开始至错误处的字符数量。目前 RapidJSON 不记录错误行号。\n\n要取得错误讯息，RapidJSON 在 `rapidjson/error/en.h` 
中提供了英文错误讯息。使用者可以修改它用于其他语言环境，或使用一个自定义的本地化系统。\n\n以下是一个处理错误的例子。\n\n~~~~~~~~~~cpp\n#include \"rapidjson/document.h\"\n#include \"rapidjson/error/en.h\"\n\n// ...\nDocument d;\nif (d.Parse(json).HasParseError()) {\n    fprintf(stderr, \"\\nError(offset %u): %s\\n\", \n        (unsigned)d.GetErrorOffset(),\n        GetParseError_En(d.GetParseError()));\n    // ...\n}\n~~~~~~~~~~\n\n## 原位解析 {#InSituParsing}\n\n根据 [维基百科](http://en.wikipedia.org/wiki/In_situ):\n\n> *In situ* ... is a Latin phrase that translates literally to \"on site\" or \"in position\". It means \"locally\", \"on site\", \"on the premises\" or \"in place\" to describe an event where it takes place, and is used in many different contexts.\n> ...\n> (In computer science) An algorithm is said to be an in situ algorithm, or in-place algorithm, if the extra amount of memory required to execute the algorithm is O(1), that is, does not exceed a constant no matter how large the input. For example, heapsort is an in situ sorting algorithm.\n\n> 翻译：*In situ*……是一个拉丁文片语，字面上的意思是指「现场」、「在位置」。在许多不同语境中，它描述一个事件发生的位置，意指「本地」、「现场」、「在处所」、「就位」。\n> ……\n> （在计算机科学中）一个算法若称为原位算法，或在位算法，是指执行该算法所需的额外内存空间是 O(1) 的，换句话说，无论输入大小都只需要常数空间。例如，堆排序是一个原位排序算法。\n\n在正常的解析过程中，对 JSON string 解码并复制至其他缓冲区是一个很大的开销。原位解析（*in situ* parsing）把这些 JSON string 直接解码于它原来存储的地方。由于解码后的 string 长度总是短于或等于原来储存于 JSON 的 string，所以这是可行的。在这个语境下，对 JSON string 进行解码是指处理转义符，如 `\"\\n\"`、`\"\\u1234\"` 等，以及在 string 末端加入空终止符号 (`'\\0'`)。\n\n以下的图比较正常及原位解析。JSON string 值包含指向解码后的字符串。\n\n![正常解析](diagram/normalparsing.png)\n\n在正常解析中，解码后的字符串被复制至全新分配的缓冲区中。`\"\\\\n\"`（2 个字符）被解码成 `\"\\n\"`（1 个字符）。`\"\\\\u0073\"`（6 个字符）被解码成 `\"s\"`（1 个字符）。\n\n![原位解析](diagram/insituparsing.png)\n\n原位解析直接修改了原来的 JSON。图中高亮了被更新的字符。若 JSON string 不含转义符，例如 `\"msg\"`，那么解析过程仅仅是以空字符代替结束双引号。\n\n由于原位解析修改了输入，其解析 API 需要 `char*` 而非 `const char*`。\n\n~~~~~~~~~~cpp\n// 把整个文件读入 buffer\nFILE* fp = fopen(\"test.json\", \"r\");\nfseek(fp, 0, SEEK_END);\nsize_t filesize = (size_t)ftell(fp);\nfseek(fp, 0, SEEK_SET);\nchar* buffer = 
(char*)malloc(filesize + 1);\nsize_t readLength = fread(buffer, 1, filesize, fp);\nbuffer[readLength] = '\\0';\nfclose(fp);\n\n// 原位解析 buffer 至 d，buffer 内容会被修改。\nDocument d;\nd.ParseInsitu(buffer);\n\n// 在此查询、修改 DOM……\n\nfree(buffer);\n// 注意：在这个位置，d 可能含有指向已被释放的 buffer 的悬空指针\n~~~~~~~~~~\n\nJSON string 会被打上 const-string 的标志。但它们可能并非真正的「常数」。它的生命周期取决于存储 JSON 的缓冲区。\n\n原位解析把分配开销及内存复制减至最小。通常这样做能改善缓存一致性，而这对现代计算机来说是一个重要的性能因素。\n\n原位解析有以下限制：\n\n1. 整个 JSON 须存储在内存之中。\n2. 流的来源编码与文档的目标编码必须相同。\n3. 需要保留缓冲区，直至文档不再被使用。\n4. 若 DOM 需要在解析后被长期使用，而 DOM 内只有很少 JSON string，保留缓冲区可能造成内存浪费。\n\n原位解析最适合用于短期的、用完即弃的 JSON。实际应用中，这些场合是非常普遍的，例如反序列化 JSON 至 C++ 对象、处理以 JSON 表示的 web 请求等。\n\n## 转码与校验 {#TranscodingAndValidation}\n\nRapidJSON 内部支持不同 Unicode 格式（正式的术语是 UCS 变换格式）间的转换。在 DOM 解析时，流的来源编码与 DOM 的编码可以不同。例如，来源流可能含有 UTF-8 的 JSON，而 DOM 则使用 UTF-16 编码。在 [EncodedInputStream](doc/stream.zh-cn.md) 一节里有一个例子。\n\n当从 DOM 输出一个 JSON 至输出流之时，也可以使用转码功能。在 [EncodedOutputStream](doc/stream.zh-cn.md) 一节里有一个例子。\n\n在转码过程中，会把来源 string 解码成 Unicode 码点，然后把码点编码成目标格式。在解码时，它会校验来源 string 的字节序列是否合法。若遇上非合法序列，解析器会停止并返回 `kParseErrorStringInvalidEncoding` 错误。\n\n当来源编码与 DOM 的编码相同，解析器缺省地 *不会* 校验序列。使用者可开启 `kParseValidateEncodingFlag` 去强制校验。\n\n# 技巧 {#Techniques}\n\n这里讨论一些 DOM API 的使用技巧。\n\n## 把 DOM 作为 SAX 事件发表者\n\n在 RapidJSON 中，利用 `Writer` 把 DOM 生成 JSON 的做法，看来有点奇怪。\n\n~~~~~~~~~~cpp\n// ...\nWriter<StringBuffer> writer(buffer);\nd.Accept(writer);\n~~~~~~~~~~\n\n实际上，`Value::Accept()` 是负责发布该值相关的 SAX 事件至处理器的。通过这个设计，`Value` 及 `Writer` 解除了耦合。`Value` 可生成 SAX 事件，而 `Writer` 则可以处理这些事件。\n\n使用者可以创建自定义的处理器，去把 DOM 转换成其它格式。例如，一个把 DOM 转换成 XML 的处理器。\n\n要知道更多关于 SAX 事件与处理器，可参阅 [SAX](doc/sax.zh-cn.md)。\n\n## 使用者缓冲区 {#UserBuffer}\n\n许多应用软件可能需要尽量减少内存分配。\n\n`MemoryPoolAllocator` 可以帮助这方面，它容许使用者提供一个缓冲区。该缓冲区可能置于程序堆栈，或是一个静态分配的「草稿缓冲区（scratch buffer）」（一个静态／全局的数组），用于储存临时数据。\n\n`MemoryPoolAllocator` 会先用使用者缓冲区去解决分配请求。当使用者缓冲区用完，就会从基础分配器（缺省为 `CrtAllocator`）分配一块内存。\n\n以下是使用堆栈内存的例子，第一个分配器用于存储值，第二个用于解析时的临时缓冲。\n\n~~~~~~~~~~cpp\ntypedef GenericDocument<UTF8<>, 
MemoryPoolAllocator<>, MemoryPoolAllocator<>> DocumentType;\nchar valueBuffer[4096];\nchar parseBuffer[1024];\nMemoryPoolAllocator<> valueAllocator(valueBuffer, sizeof(valueBuffer));\nMemoryPoolAllocator<> parseAllocator(parseBuffer, sizeof(parseBuffer));\nDocumentType d(&valueAllocator, sizeof(parseBuffer), &parseAllocator);\nd.Parse(json);\n~~~~~~~~~~\n\n若解析时的分配总量少于 4096+1024 字节，这段代码不会造成任何堆内存分配（经 `new` 或 `malloc()`）。\n\n使用者可以通过 `MemoryPoolAllocator::Size()` 查询当前已分配的内存大小。那么使用者可以拟定使用者缓冲区的合适大小。\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/encoding.md",
    "content": "# Encoding\n\nAccording to [ECMA-404](http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-404.pdf),\n\n> (in Introduction) JSON text is a sequence of Unicode code points.\n\nThe earlier [RFC4627](http://www.ietf.org/rfc/rfc4627.txt) stated that,\n\n> (in §3) JSON text SHALL be encoded in Unicode.  The default encoding is UTF-8.\n\n> (in §6) JSON may be represented using UTF-8, UTF-16, or UTF-32. When JSON is written in UTF-8, JSON is 8bit compatible.  When JSON is written in UTF-16 or UTF-32, the binary content-transfer-encoding must be used.\n\nRapidJSON supports various encodings. It can also validate the encodings of JSON, and transcoding JSON among encodings. All these features are implemented internally, without the need for external libraries (e.g. [ICU](http://site.icu-project.org/)).\n\n[TOC]\n\n# Unicode {#Unicode}\nFrom [Unicode's official website](http://www.unicode.org/standard/WhatIsUnicode.html):\n> Unicode provides a unique number for every character, \n> no matter what the platform,\n> no matter what the program,\n> no matter what the language.\n\nThose unique numbers are called code points, which is in the range `0x0` to `0x10FFFF`.\n\n## Unicode Transformation Format {#UTF}\n\nThere are various encodings for storing Unicode code points. These are called Unicode Transformation Format (UTF). RapidJSON supports the most commonly used UTFs, including\n\n* UTF-8: 8-bit variable-width encoding. It maps a code point to 1–4 bytes.\n* UTF-16: 16-bit variable-width encoding. It maps a code point to 1–2 16-bit code units (i.e., 2–4 bytes).\n* UTF-32: 32-bit fixed-width encoding. It directly maps a code point to a single 32-bit code unit (i.e. 4 bytes).\n\nFor UTF-16 and UTF-32, the byte order (endianness) does matter. Within computer memory, they are often stored in the computer's endianness. 
However, when it is stored in a file or transferred over a network, we need to state the byte order of the byte sequence, either little-endian (LE) or big-endian (BE). \n\nRapidJSON provides these encodings via the structs in `rapidjson/encodings.h`:\n\n~~~~~~~~~~cpp\nnamespace rapidjson {\n\ntemplate<typename CharType = char>\nstruct UTF8;\n\ntemplate<typename CharType = wchar_t>\nstruct UTF16;\n\ntemplate<typename CharType = wchar_t>\nstruct UTF16LE;\n\ntemplate<typename CharType = wchar_t>\nstruct UTF16BE;\n\ntemplate<typename CharType = unsigned>\nstruct UTF32;\n\ntemplate<typename CharType = unsigned>\nstruct UTF32LE;\n\ntemplate<typename CharType = unsigned>\nstruct UTF32BE;\n\n} // namespace rapidjson\n~~~~~~~~~~\n\nFor processing text in memory, we normally use `UTF8`, `UTF16` or `UTF32`. For processing text via I/O, we may use `UTF8`, `UTF16LE`, `UTF16BE`, `UTF32LE` or `UTF32BE`.\n\nWhen using the DOM-style API, the `Encoding` template parameter in `GenericValue<Encoding>` and `GenericDocument<Encoding>` indicates the encoding used to represent JSON strings in memory. So normally we will use `UTF8`, `UTF16` or `UTF32` for this template parameter. The choice depends on the operating system and other libraries that the application is using. For example, the Windows API represents Unicode characters in UTF-16, while most Linux distributions and applications prefer UTF-8.\n\nExample of a UTF-16 DOM declaration:\n\n~~~~~~~~~~cpp\ntypedef GenericDocument<UTF16<> > WDocument;\ntypedef GenericValue<UTF16<> > WValue;\n~~~~~~~~~~\n\nFor a detailed example, please check the example in the [DOM's Encoding](doc/stream.md) section.\n\n## Character Type {#CharacterType}\n\nAs shown in the declaration, each encoding has a `CharType` template parameter. This may be a little bit confusing, but each `CharType` stores a code unit, not a character (code point). 
As mentioned in the previous section, a code point may be encoded into 1–4 code units in UTF-8.\n\nFor `UTF16(LE|BE)` and `UTF32(LE|BE)`, the `CharType` must be an integer type of at least 2 and 4 bytes, respectively.\n\nNote that C++11 introduces `char16_t` and `char32_t`, which can be used for `UTF16` and `UTF32` respectively.\n\n## AutoUTF {#AutoUTF}\n\nThe previous encodings are statically bound at compile time. In other words, the user must know exactly which encodings will be used in memory or in the streams. However, sometimes we may need to read/write files of different encodings, so the encoding needs to be decided at runtime.\n\n`AutoUTF` is an encoding designed for this purpose. It chooses which encoding to use according to the input or output stream. Currently, it should be used with `EncodedInputStream` and `EncodedOutputStream`.\n\n## ASCII {#ASCII}\n\nAlthough the JSON standards do not mention [ASCII](http://en.wikipedia.org/wiki/ASCII), sometimes we would like to write 7-bit ASCII JSON for applications that cannot handle UTF-8. Since any JSON can represent Unicode characters with the escape sequence `\\uXXXX`, JSON can always be encoded in ASCII.\n\nHere is an example of writing a UTF-8 DOM as ASCII:\n\n~~~~~~~~~~cpp\nusing namespace rapidjson;\nDocument d; // UTF8<>\n// ...\nStringBuffer buffer;\nWriter<StringBuffer, Document::EncodingType, ASCII<> > writer(buffer);\nd.Accept(writer);\nstd::cout << buffer.GetString();\n~~~~~~~~~~\n\nASCII can be used in an input stream. If the input stream contains bytes with values above 127, it will cause a `kParseErrorStringInvalidEncoding` error.\n\nASCII *cannot* be used in memory (the encoding of `Document` or the target encoding of `Reader`), as it cannot represent Unicode code points.\n\n# Validation & Transcoding {#ValidationTranscoding}\n\nWhen RapidJSON parses a JSON, it can validate whether the input is a valid sequence of a specified encoding. 
This option can be turned on by adding `kParseValidateEncodingFlag` to the `parseFlags` template parameter.\n\nIf the input encoding and output encoding are different, `Reader` and `Writer` will automatically transcode (convert) the text. In this case, `kParseValidateEncodingFlag` is not necessary, as the transcoder must decode the input sequence anyway; if the sequence cannot be decoded, it must be invalid.\n\n## Transcoder {#Transcoder}\n\nAlthough the encoding functions in RapidJSON are designed for JSON parsing/generation, users may abuse them for transcoding non-JSON strings.\n\nHere is an example of transcoding a string from UTF-8 to UTF-16:\n\n~~~~~~~~~~cpp\n#include \"rapidjson/encodings.h\"\n\nusing namespace rapidjson;\n\nconst char* s = \"...\"; // UTF-8 string\nStringStream source(s);\nGenericStringBuffer<UTF16<> > target;\n\nbool hasError = false;\nwhile (source.Peek() != '\\0')\n    if (!Transcoder<UTF8<>, UTF16<> >::Transcode(source, target)) {\n        hasError = true;\n        break;\n    }\n\nif (!hasError) {\n    const wchar_t* t = target.GetString();\n    // ...\n}\n~~~~~~~~~~\n\nYou may also use `AutoUTF` and the associated streams for setting the source/target encoding at runtime.\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/encoding.zh-cn.md",
    "content": "# 编码\n\n根据 [ECMA-404](http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-404.pdf)：\n\n> (in Introduction) JSON text is a sequence of Unicode code points.\n> \n> 翻译：JSON 文本是 Unicode 码点的序列。\n\n较早的 [RFC4627](http://www.ietf.org/rfc/rfc4627.txt) 申明：\n\n> (in §3) JSON text SHALL be encoded in Unicode.  The default encoding is UTF-8.\n> \n> 翻译：JSON 文本应该以 Unicode 编码。缺省的编码为 UTF-8。\n\n> (in §6) JSON may be represented using UTF-8, UTF-16, or UTF-32. When JSON is written in UTF-8, JSON is 8bit compatible.  When JSON is written in UTF-16 or UTF-32, the binary content-transfer-encoding must be used.\n> \n> 翻译：JSON 可使用 UTF-8、UTF-16 或 UTF-32 表示。当 JSON 以 UTF-8 写入，该 JSON 是 8 位兼容的。当 JSON 以 UTF-16 或 UTF-32 写入，就必须使用二进制的内容传送编码。\n\nRapidJSON 支持多种编码。它也能检查 JSON 的编码，以及在不同编码中进行转码。所有这些功能都是在内部实现，无需使用外部的程序库（如 [ICU](http://site.icu-project.org/)）。\n\n[TOC]\n\n# Unicode {#Unicode}\n根据 [Unicode 的官方网站](http://www.unicode.org/standard/translations/t-chinese.html)：\n>Unicode 给每个字符提供了一个唯一的数字，\n不论是什么平台、\n不论是什么程序、\n不论是什么语言。\n\n这些唯一数字称为码点（code point），其范围介乎 `0x0` 至 `0x10FFFF` 之间。\n\n## Unicode 转换格式 {#UTF}\n\n存储 Unicode 码点有多种编码方式。这些称为 Unicode 转换格式（Unicode Transformation Format, UTF）。RapidJSON 支持最常用的 UTF，包括：\n\n* UTF-8：8 位可变长度编码。它把一个码点映射至 1 至 4 个字节。\n* UTF-16：16 位可变长度编码。它把一个码点映射至 1 至 2 个 16 位编码单元（即 2 至 4 个字节）。\n* UTF-32：32 位固定长度编码。它直接把码点映射至单个 32 位编码单元（即 4 字节）。\n\n对于 UTF-16 及 UTF-32 来说，字节序（endianness）是有影响的。在内存中，它们通常都是以该计算机的字节序来存储。然而，当要储存在文件中或在网上传输，我们需要指明字节序列的字节序，是小端（little endian, LE）还是大端（big-endian, BE）。 \n\nRapidJSON 通过 `rapidjson/encodings.h` 中的 struct 去提供各种编码：\n\n~~~~~~~~~~cpp\nnamespace rapidjson {\n\ntemplate<typename CharType = char>\nstruct UTF8;\n\ntemplate<typename CharType = wchar_t>\nstruct UTF16;\n\ntemplate<typename CharType = wchar_t>\nstruct UTF16LE;\n\ntemplate<typename CharType = wchar_t>\nstruct UTF16BE;\n\ntemplate<typename CharType = unsigned>\nstruct UTF32;\n\ntemplate<typename CharType = unsigned>\nstruct UTF32LE;\n\ntemplate<typename CharType = 
unsigned>\nstruct UTF32BE;\n\n} // namespace rapidjson\n~~~~~~~~~~\n\n对于在内存中的文本，我们正常会使用 `UTF8`、`UTF16` 或 `UTF32`。对于处理经过 I/O 的文本，我们可使用 `UTF8`、`UTF16LE`、`UTF16BE`、`UTF32LE` 或 `UTF32BE`。\n\n当使用 DOM 风格的 API，`GenericValue<Encoding>` 及 `GenericDocument<Encoding>` 里的 `Encoding` 模板参数是用于指明内存中存储的 JSON 字符串使用哪种编码。因此通常我们会在此参数中使用 `UTF8`、`UTF16` 或 `UTF32`。如何选择，视乎应用软件所使用的操作系统及其他程序库。例如，Windows API 使用 UTF-16 表示 Unicode 字符，而多数的 Linux 发行版本及应用软件则更喜欢 UTF-8。\n\n使用 UTF-16 的 DOM 声明例子：\n\n~~~~~~~~~~cpp\ntypedef GenericDocument<UTF16<> > WDocument;\ntypedef GenericValue<UTF16<> > WValue;\n~~~~~~~~~~\n\n可以在 [DOM's Encoding](doc/stream.zh-cn.md) 一节看到更详细的使用例子。\n\n## 字符类型 {#CharacterType}\n\n从之前的声明中可以看到，每个编码都有一个 `CharType` 模板参数。这可能比较容易混淆，实际上，每个 `CharType` 存储一个编码单元，而不是一个字符（码点）。如之前所谈及，在 UTF-8 中一个码点可能会编码成 1 至 4 个编码单元。\n\n对于 `UTF16(LE|BE)` 及 `UTF32(LE|BE)` 来说，`CharType` 必须分别是一个至少 2 及 4 字节的整数类型。\n\n注意 C++11 新添了 `char16_t` 及 `char32_t` 类型，也可分别用于 `UTF16` 及 `UTF32`。\n\n## AutoUTF {#AutoUTF}\n\n上述所介绍的编码都是在编译期静态绑定的。换句话说，使用者必须知道内存或流之中使用了哪种编码。然而，有时候我们可能需要读写不同编码的文件，而且这些编码需要在运行时才能决定。\n\n`AutoUTF` 是为此而设计的编码。它根据输入或输出流来选择使用哪种编码。目前它应该与 `EncodedInputStream` 及 `EncodedOutputStream` 结合使用。\n\n## ASCII {#ASCII}\n\n虽然 JSON 标准并未提及 [ASCII](http://en.wikipedia.org/wiki/ASCII)，有时候我们希望写入 7 位的 ASCII JSON，以供未能处理 UTF-8 的应用程序使用。由于任何 JSON 都可以把 Unicode 字符表示为 `\\uXXXX` 转义序列，JSON 总是可用 ASCII 来编码。\n\n以下的例子把 UTF-8 的 DOM 写成 ASCII 的 JSON：\n\n~~~~~~~~~~cpp\nusing namespace rapidjson;\nDocument d; // UTF8<>\n// ...\nStringBuffer buffer;\nWriter<StringBuffer, Document::EncodingType, ASCII<> > writer(buffer);\nd.Accept(writer);\nstd::cout << buffer.GetString();\n~~~~~~~~~~\n\nASCII 可用于输入流。当输入流包含大于 127 的字节，就会导致 `kParseErrorStringInvalidEncoding` 错误。\n\nASCII *不能* 用于内存（`Document` 的编码，或 `Reader` 的目标编码），因为它不能表示 Unicode 码点。\n\n# 校验及转码 {#ValidationTranscoding}\n\n当 RapidJSON 解析一个 JSON 时，它能校验输入 JSON，判断它是否为所标明编码的合法序列。要开启此选项，请把 `kParseValidateEncodingFlag` 加入 `parseFlags` 模板参数。\n\n若输入编码和输出编码并不相同，`Reader` 及 `Writer` 会自动把文本转码。在这种情况下，并不需要 
`kParseValidateEncodingFlag`，因为它必须解码输入序列。若序列不能被解码，它必然是不合法的。\n\n## 转码器 {#Transcoder}\n\n虽然 RapidJSON 的编码功能是为 JSON 解析／生成而设计，使用者也可以“滥用”它们来为非 JSON 字符串转码。\n\n以下的例子把 UTF-8 字符串转码成 UTF-16：\n\n~~~~~~~~~~cpp\n#include \"rapidjson/encodings.h\"\n\nusing namespace rapidjson;\n\nconst char* s = \"...\"; // UTF-8 string\nStringStream source(s);\nGenericStringBuffer<UTF16<> > target;\n\nbool hasError = false;\nwhile (source.Peek() != '\\0')\n    if (!Transcoder<UTF8<>, UTF16<> >::Transcode(source, target)) {\n        hasError = true;\n        break;\n    }\n\nif (!hasError) {\n    const wchar_t* t = target.GetString();\n    // ...\n}\n~~~~~~~~~~\n\n你也可以用 `AutoUTF` 及对应的流来在运行时设置来源／目的之编码。\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/faq.md",
    "content": "# FAQ\n\n[TOC]\n\n## General\n\n1. What is RapidJSON?\n\n   RapidJSON is a C++ library for parsing and generating JSON. You may check all [features](doc/features.md) of it.\n\n2. Why is RapidJSON named so?\n\n   It is inspired by [RapidXML](http://rapidxml.sourceforge.net/), which is a fast XML DOM parser.\n\n3. Is RapidJSON similar to RapidXML?\n\n   RapidJSON borrowed some designs from RapidXML, including *in situ* parsing and being a header-only library. But the two APIs are completely different. RapidJSON also provides many features that are not in RapidXML.\n\n4. Is RapidJSON free?\n\n   Yes, it is free under the MIT license. It can be used in commercial applications. Please check the details in [license.txt](https://github.com/Tencent/rapidjson/blob/master/license.txt).\n\n5. Is RapidJSON small? What are its dependencies? \n\n   Yes. A simple executable which parses a JSON and prints its statistics is less than 30KB on Windows.\n\n   RapidJSON depends only on the C++ standard library.\n\n6. How to install RapidJSON?\n\n   Check [Installation section](https://miloyip.github.io/rapidjson/).\n\n7. Can RapidJSON run on my platform?\n\n   RapidJSON has been tested in many combinations of operating systems, compilers and CPU architectures by the community. But we cannot ensure that it runs on your particular platform. Building and running the unit test suite will give you the answer.\n\n8. Does RapidJSON support C++03? C++11?\n\n   RapidJSON was first implemented for C++03. Later it added optional support for some C++11 features (e.g., move constructors, `noexcept`). RapidJSON should be compatible with C++03 or C++11 compliant compilers.\n\n9. Does RapidJSON really work in real applications?\n\n   Yes. It is deployed in real client and server applications. A community member reported that RapidJSON in their system parses 50 million JSONs daily.\n\n10. How is RapidJSON tested?\n\n   RapidJSON contains a unit test suite for automatic testing. 
[Travis](https://travis-ci.org/Tencent/rapidjson/) (for Linux) and [AppVeyor](https://ci.appveyor.com/project/Tencent/rapidjson/) (for Windows) will compile and run the unit test suite for all modifications. The test process also uses Valgrind (in Linux) to detect memory leaks.\n\n11. Is RapidJSON well documented?\n\n   RapidJSON provides a user guide and API documentation.\n\n12. Are there alternatives?\n\n   Yes, there are a lot of alternatives. For example, [nativejson-benchmark](https://github.com/miloyip/nativejson-benchmark) has a listing of open-source C/C++ JSON libraries. [json.org](http://www.json.org/) also has a list.\n\n## JSON\n\n1. What is JSON?\n\n   JSON (JavaScript Object Notation) is a lightweight data-interchange format. It uses a human-readable text format. More details of JSON can be found in [RFC7159](http://www.ietf.org/rfc/rfc7159.txt) and [ECMA-404](http://www.ecma-international.org/publications/standards/Ecma-404.htm).\n\n2. What are the applications of JSON?\n\n   JSON is commonly used in web applications for transferring structured data. It is also used as a file format for data persistence.\n\n3. Does RapidJSON conform to the JSON standard?\n\n   Yes. RapidJSON is fully compliant with [RFC7159](http://www.ietf.org/rfc/rfc7159.txt) and [ECMA-404](http://www.ecma-international.org/publications/standards/Ecma-404.htm). It can handle corner cases, such as supporting null characters and surrogate pairs in JSON strings.\n\n4. Does RapidJSON support relaxed syntax?\n\n   Currently no. RapidJSON only supports the strict standardized format. Support for relaxed syntax is under discussion in this [issue](https://github.com/Tencent/rapidjson/issues/36).\n\n## DOM and SAX\n\n1. What is the DOM style API?\n\n   Document Object Model (DOM) is an in-memory representation of JSON for query and manipulation.\n\n2. What is the SAX style API?\n\n   SAX is an event-driven API for parsing and generation.\n\n3. 
Should I choose DOM or SAX?\n\n   DOM is easy for query and manipulation. SAX is very fast and memory-saving but often more difficult to apply.\n\n4. What is *in situ* parsing?\n\n   *in situ* parsing decodes the JSON strings directly into the input JSON. This is an optimization which can reduce memory consumption and improve performance, but the input JSON will be modified. Check [in-situ parsing](doc/dom.md) for details.\n\n5. When does parsing generate an error?\n\n   The parser generates an error when the input JSON contains invalid syntax, or a value cannot be represented (a number is too big), or the parser's handler terminates the parsing. Check [parse error](doc/dom.md) for details.\n\n6. What error information is provided? \n\n   The error is stored in `ParseResult`, which includes the error code and offset (number of characters from the beginning of JSON). The error code can be translated into a human-readable error message.\n\n7. Why not just use `double` to represent a JSON number?\n\n   Some applications use 64-bit unsigned/signed integers, and these integers cannot be converted into `double` without loss of precision. So the parser detects whether a JSON number is convertible to different types of integers and/or `double`.\n\n8. How to clear-and-minimize a document or value?\n\n   Call one of the `SetXXX()` methods - they call the destructor, which deallocates DOM data:\n\n   ~~~~~~~~~~cpp\n   Document d;\n   ...\n   d.SetObject();  // clear and minimize\n   ~~~~~~~~~~\n\n   Alternatively, use the equivalent of the [C++ swap with temporary idiom](https://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Clear-and-minimize):\n   ~~~~~~~~~~cpp\n   Value(kObjectType).Swap(d);\n   ~~~~~~~~~~\n   or equivalent, but slightly longer to type:\n   ~~~~~~~~~~cpp\n   d.Swap(Value(kObjectType).Move()); \n   ~~~~~~~~~~\n\n9. 
How to insert a document node into another document?\n\n   Let's take the following two DOM trees represented as JSON documents:\n   ~~~~~~~~~~cpp\n   Document person;\n   person.Parse(\"{\\\"person\\\":{\\\"name\\\":{\\\"first\\\":\\\"Adam\\\",\\\"last\\\":\\\"Thomas\\\"}}}\");\n   \n   Document address;\n   address.Parse(\"{\\\"address\\\":{\\\"city\\\":\\\"Moscow\\\",\\\"street\\\":\\\"Quiet\\\"}}\");\n   ~~~~~~~~~~\n   Let's assume we want to merge them in such a way that the whole `address` document becomes a node of the `person`:\n   ~~~~~~~~~~js\n   { \"person\": {\n      \"name\": { \"first\": \"Adam\", \"last\": \"Thomas\" },\n      \"address\": { \"city\": \"Moscow\", \"street\": \"Quiet\" }\n      }\n   }\n   ~~~~~~~~~~\n\n   The most important requirement is to take care of document and value life-cycles, as well as consistent memory management using the right allocator during the value transfer.\n   \n   The simplest yet most efficient way to achieve that is to modify the `address` definition above to initialize it with the allocator of the `person` document; then we just add the root member of the value:\n   ~~~~~~~~~~cpp\n   Document address(&person.GetAllocator());\n   ...\n   person[\"person\"].AddMember(\"address\", address[\"address\"], person.GetAllocator());\n   ~~~~~~~~~~\n   Alternatively, if we don't want to explicitly refer to the root value of `address` by name, we can refer to it via an iterator:\n   ~~~~~~~~~~cpp\n   auto addressRoot = address.MemberBegin();\n   person[\"person\"].AddMember(addressRoot->name, addressRoot->value, person.GetAllocator());\n   ~~~~~~~~~~\n   \n   A second way is to deep-clone the value from the address document:\n   ~~~~~~~~~~cpp\n   Value addressValue = Value(address[\"address\"], person.GetAllocator());\n   person[\"person\"].AddMember(\"address\", addressValue, person.GetAllocator());\n   ~~~~~~~~~~\n\n## Document/Value (DOM)\n\n1. What is move semantics? Why?\n\n   Instead of copy semantics, move semantics is used in `Value`. 
That means, when assigning a source value to a target value, the ownership of the source value is moved to the target value.\n\n   Since moving is faster than copying, this design decision forces the user to be aware of the copying overhead.\n\n2. How to copy a value?\n\n   There are two APIs: constructor with allocator, and `CopyFrom()`. See [Deep Copy Value](doc/tutorial.md) for an example.\n\n3. Why do I need to provide the length of string?\n\n   Since a C string is null-terminated, the length of a string needs to be computed via `strlen()`, with linear runtime complexity. This incurs unnecessary overhead in many operations if the user already knows the length of the string.\n\n   Also, RapidJSON can handle `\\u0000` (null character) within a string. If a string contains null characters, `strlen()` cannot return the true length of it. In such cases the user must provide the length of the string explicitly.\n\n4. Why do I need to provide an allocator parameter in many DOM manipulation APIs?\n\n   Since the APIs are member functions of `Value`, we do not want to save an allocator pointer in every `Value`.\n\n5. Does it convert between numerical types?\n\n   When using `GetInt()`, `GetUint()`, ... conversion may occur. For integer-to-integer conversion, it only converts when it is safe (otherwise it will assert). However, when converting a 64-bit signed/unsigned integer to double, it will convert, but be aware that it may lose precision. A number with a fraction, or an integer larger than 64-bit, can only be obtained by `GetDouble()`.\n\n## Reader/Writer (SAX)\n\n1. Why don't we just `printf` a JSON? Why do we need a `Writer`? \n\n   Most importantly, `Writer` will ensure the output JSON is well-formed. Calling SAX events incorrectly (e.g. `StartObject()` pairing with `EndArray()`) will assert. Besides, `Writer` will escape strings (e.g., `\\n`). Finally, the numeric output of `printf()` may not be a valid JSON number, especially in some locales with digit delimiters. 
And the number-to-string conversion in `Writer` is implemented with very fast algorithms, which outperform `printf()` and `iostream`.\n\n2. Can I pause the parsing process and resume it later?\n\n   This is not directly supported in the current version due to performance considerations. However, if the execution environment supports multi-threading, the user can parse a JSON in a separate thread, and pause it by blocking in the input stream.\n\n## Unicode\n\n1. Does it support UTF-8, UTF-16 and other formats?\n\n   Yes. It fully supports UTF-8, UTF-16 (LE/BE), UTF-32 (LE/BE) and ASCII. \n\n2. Can it validate the encoding?\n\n   Yes, just pass `kParseValidateEncodingFlag` to `Parse()`. If there is invalid encoding in the stream, it will generate a `kParseErrorStringInvalidEncoding` error.\n\n3. What is a surrogate pair? Does RapidJSON support it?\n\n   JSON uses UTF-16 encoding when escaping Unicode characters, e.g. `\\u5927` representing the Chinese character \"big\". To handle characters outside the basic multilingual plane (BMP), UTF-16 encodes those characters with two 16-bit values, which is called a UTF-16 surrogate pair. For example, the Emoji character U+1F602 can be encoded as `\\uD83D\\uDE02` in JSON.\n\n   RapidJSON fully supports parsing/generating UTF-16 surrogates. \n\n4. Can it handle `\\u0000` (null character) in JSON strings?\n\n   Yes. RapidJSON fully supports null characters in JSON strings. However, users need to be aware of this and use `GetStringLength()` and related APIs to obtain the true length of the string.\n\n5. Can I output `\\uxxxx` for all non-ASCII characters?\n\n   Yes, using `ASCII<>` as the output encoding template parameter in `Writer` enforces escaping of those characters.\n\n## Stream\n\n1. I have a big JSON file. Should I load the whole file into memory?\n\n   Users can use `FileReadStream` to read the file chunk-by-chunk. But for *in situ* parsing, the whole file must be loaded.\n\n2. Can I parse JSON while it is streamed from the network?\n\n   Yes. 
Users can implement a custom stream for this. Please refer to the implementation of `FileReadStream`.\n\n3. I don't know what encoding the JSON will be in. How do I handle it?\n\n   You may use `AutoUTFInputStream`, which detects the encoding of the input stream automatically. However, it will incur some performance overhead.\n\n4. What is BOM? How does RapidJSON handle it?\n\n   A [byte order mark (BOM)](http://en.wikipedia.org/wiki/Byte_order_mark) sometimes resides at the beginning of a file/stream to indicate its UTF encoding type.\n\n   RapidJSON's `EncodedInputStream` can detect/consume a BOM. `EncodedOutputStream` can optionally write a BOM. See [Encoded Streams](doc/stream.md) for examples.\n\n5. Why does endianness matter?\n\n   Endianness is an issue for UTF-16 and UTF-32 streams, but not for UTF-8 streams.\n\n## Performance\n\n1. Is RapidJSON really fast?\n\n   Yes. It may be the fastest open source JSON library. There is a [benchmark](https://github.com/miloyip/nativejson-benchmark) for evaluating performance of C/C++ JSON libraries.\n\n2. Why is it fast?\n\n   Many design decisions of RapidJSON are aimed at time/space performance. These may reduce the user-friendliness of the APIs. Besides, it also employs low-level optimizations (intrinsics, SIMD) and special algorithms (custom double-to-string, string-to-double conversions).\n\n3. What is SIMD? How is it applied in RapidJSON?\n\n   [SIMD](http://en.wikipedia.org/wiki/SIMD) instructions can perform parallel computation in modern CPUs. RapidJSON supports Intel's SSE2/SSE4.2 and ARM's Neon to accelerate whitespace/tabspace/carriage-return/line-feed skipping. This improves performance of parsing indent-formatted JSON. Define the `RAPIDJSON_SSE2`, `RAPIDJSON_SSE42` or `RAPIDJSON_NEON` macro to enable this feature. However, running the executable on a machine without such instruction set support will make it crash.\n\n4. 
Does it consume a lot of memory?\n\n   The design of RapidJSON aims at reducing memory footprint.\n\n   In the SAX API, `Reader` consumes memory proportional to the maximum depth of the JSON tree, plus the maximum length of a JSON string.\n\n   In the DOM API, each `Value` consumes exactly 16/24 bytes for 32/64-bit architectures respectively. RapidJSON also uses a special memory allocator to minimize the overhead of allocations.\n\n5. What is the purpose of being high performance?\n\n   Some applications need to process very large JSON files. Some server-side applications need to process huge amounts of JSON. Being high performance can improve both latency and throughput. In a broad sense, it will also save energy.\n\n## Gossip\n\n1. Who are the developers of RapidJSON?\n\n   Milo Yip ([miloyip](https://github.com/miloyip)) is the original author of RapidJSON. Many contributors from around the world have improved RapidJSON. Philipp A. Hartmann ([pah](https://github.com/pah)) has implemented a lot of improvements, set up automatic testing and is also involved in many community discussions. Don Ding ([thebusytypist](https://github.com/thebusytypist)) implemented the iterative parser. Andrii Senkovych ([jollyroger](https://github.com/jollyroger)) completed the CMake migration. Kosta ([Kosta-Github](https://github.com/Kosta-Github)) provided a very neat short-string optimization. Thanks to all other contributors and community members as well.\n\n2. Why did you develop RapidJSON?\n\n   It was just a hobby project initially in 2011. Milo Yip is a game programmer who had just learned about JSON at that time and wanted to apply JSON in future projects. As JSON seemed very simple, he wanted to write a fast, header-only library.\n\n3. Why was there a long gap in development?\n\n   It is basically due to personal issues, such as getting new family members. 
Also, Milo Yip spent a lot of spare time on translating \"Game Engine Architecture\" by Jason Gregory into Chinese.\n\n4. Why did the repository move from Google Code to GitHub?\n\n   It follows the general trend, and GitHub is much more powerful and convenient.\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/faq.zh-cn.md",
    "content": "# 常见问题\n\n[TOC]\n\n## 一般问题\n\n1. RapidJSON 是什么？\n\n   RapidJSON 是一个 C++ 库，用于解析及生成 JSON。读者可参考它的所有 [特点](doc/features.zh-cn.md)。\n\n2. 为什么称作 RapidJSON？\n\n   它的灵感来自于 [RapidXML](http://rapidxml.sourceforge.net/)，RapidXML 是一个高速的 XML DOM 解析器。\n\n3. RapidJSON 与 RapidXML 相似么？\n\n   RapidJSON 借镜了 RapidXML 的一些设计, 包括原位（*in situ*）解析、只有头文件的库。但两者的 API 是完全不同的。此外 RapidJSON 也提供许多 RapidXML 没有的特点。\n\n4. RapidJSON 是免费的么？\n\n   是的，它在 MIT 协议下免费。它可用于商业软件。详情请参看 [license.txt](https://github.com/Tencent/rapidjson/blob/master/license.txt)。\n\n5. RapidJSON 很小么？它有何依赖？\n\n   是的。在 Windows 上，一个解析 JSON 并打印出统计的可执行文件少于 30KB。\n\n   RapidJSON 仅依赖于 C++ 标准库。\n\n6. 怎样安装 RapidJSON？\n\n   见 [安装一节](../readme.zh-cn.md#安装)。\n\n7. RapidJSON 能否运行于我的平台？\n\n   社区已在多个操作系统／编译器／CPU 架构的组合上测试 RapidJSON。但我们无法确保它能运行于你特定的平台上。只需要生成及执行单元测试便能获取答案。\n\n8. RapidJSON 支持 C++03 么？C++11 呢？\n\n   RapidJSON 开始时在 C++03 上实现。后来加入了可选的 C++11 特性支持（如转移构造函数、`noexcept`）。RapidJSON 应该兼容所有遵从 C++03 或 C++11 的编译器。\n\n9. RapidJSON 是否真的用于实际应用？\n\n   是的。它被配置于前台及后台的真实应用中。一个社区成员说 RapidJSON 在他们的系统中每日解析 5 千万个 JSON。\n\n10. RapidJSON 是如何被测试的？\n\n   RapidJSON 包含一组单元测试去执行自动测试。[Travis](https://travis-ci.org/Tencent/rapidjson/)（供 Linux 平台）及 [AppVeyor](https://ci.appveyor.com/project/Tencent/rapidjson/)（供 Windows 平台）会对所有修改进行编译及执行单元测试。在 Linux 下还会使用 Valgrind 去检测内存泄漏。\n\n11. RapidJSON 是否有完整的文档？\n\n   RapidJSON 提供了使用手册及 API 说明文档。\n\n12. 有没有其他替代品？\n\n   有许多替代品。例如 [nativejson-benchmark](https://github.com/miloyip/nativejson-benchmark) 列出了一些开源的 C/C++ JSON 库。[json.org](http://www.json.org/) 也有一个列表。\n\n## JSON\n\n1. 什么是 JSON？\n\n   JSON (JavaScript Object Notation) 是一个轻量的数据交换格式。它使用人类可读的文本格式。更多关于 JSON 的细节可考 [RFC7159](http://www.ietf.org/rfc/rfc7159.txt) 及 [ECMA-404](http://www.ecma-international.org/publications/standards/Ecma-404.htm)。\n\n2. JSON 有什么应用场合？\n\n   JSON 常用于网页应用程序，以传送结构化数据。它也可作为文件格式用于数据持久化。\n\n3. 
RapidJSON 是否符合 JSON 标准？\n\n   是。RapidJSON 完全符合 [RFC7159](http://www.ietf.org/rfc/rfc7159.txt) 及 [ECMA-404](http://www.ecma-international.org/publications/standards/Ecma-404.htm)。它能处理一些特殊情况，例如支持 JSON 字符串中含有空字符及代理对（surrogate pair）。\n\n4. RapidJSON 是否支持宽松的语法？\n\n   目前不支持。RapidJSON 只支持严格的标准格式。宽松语法可以在这个 [issue](https://github.com/Tencent/rapidjson/issues/36) 中进行讨论。\n\n## DOM 与 SAX\n\n1. 什么是 DOM 风格 API？\n\n   Document Object Model（DOM）是一个储存于内存的 JSON 表示方式，用于查询及修改 JSON。\n\n2. 什么是 SAX 风格 API?\n\n   SAX 是一个事件驱动的 API，用于解析及生成 JSON。\n\n3. 我应用 DOM 还是 SAX？\n\n   DOM 易于查询及修改。SAX 则是非常快及省内存的，但通常较难使用。\n\n4. 什么是原位（*in situ*）解析？\n\n   原位解析会把 JSON 字符串直接解码至输入的 JSON 中。这是一个优化，可减少内存消耗及提升性能，但输入的 JSON 会被更改。进一步细节请参考 [原位解析](doc/dom.zh-cn.md) 。\n\n5. 什么时候会产生解析错误？\n\n   当输入的 JSON 包含非法语法，或不能表示一个值（如 Number 太大），或解析器的处理器中断解析过程，解析器都会产生一个错误。详情请参考 [解析错误](doc/dom.zh-cn.md)。\n\n6. 有什么错误信息？\n\n   错误信息存储在 `ParseResult`，它包含错误代号及偏移值（从 JSON 开始至错误处的字符数目）。可以把错误代号翻译为人类可读的错误讯息。\n\n7. 为何不只使用 `double` 去表示 JSON number？\n\n   一些应用需要使用 64 位无号／有号整数。这些整数不能无损地转换成 `double`。因此解析器会检测一个 JSON number 是否能转换至各种整数类型及 `double`。\n\n8. 如何清空并最小化 `document` 或 `value` 的容量？\n\n   调用 `SetXXX()` 方法 - 这些方法会调用析构函数，并重建空的 Object 或 Array:\n\n   ~~~~~~~~~~cpp\n   Document d;\n   ...\n   d.SetObject();  // clear and minimize\n   ~~~~~~~~~~\n\n   另外，也可以参考在 [C++ swap with temporary idiom](https://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Clear-and-minimize) 中的一种等价的方法:\n   ~~~~~~~~~~cpp\n   Value(kObjectType).Swap(d);\n   ~~~~~~~~~~\n   或者，使用这个稍微长一点的代码也能完成同样的事情:\n   ~~~~~~~~~~cpp\n   d.Swap(Value(kObjectType).Move()); \n   ~~~~~~~~~~\n\n9. 
如何将一个 `document` 节点插入到另一个 `document` 中？\n\n   比如有以下两个 document(DOM):\n   ~~~~~~~~~~cpp\n   Document person;\n   person.Parse(\"{\\\"person\\\":{\\\"name\\\":{\\\"first\\\":\\\"Adam\\\",\\\"last\\\":\\\"Thomas\\\"}}}\");\n   \n   Document address;\n   address.Parse(\"{\\\"address\\\":{\\\"city\\\":\\\"Moscow\\\",\\\"street\\\":\\\"Quiet\\\"}}\");\n   ~~~~~~~~~~\n   假设我们希望将整个 `address` 插入到 `person` 中，作为它的一个子节点:\n   ~~~~~~~~~~js\n   { \"person\": {\n      \"name\": { \"first\": \"Adam\", \"last\": \"Thomas\" },\n      \"address\": { \"city\": \"Moscow\", \"street\": \"Quiet\" }\n      }\n   }\n   ~~~~~~~~~~\n\n   在插入节点的过程中需要注意 `document` 和 `value` 的生命周期并且正确地使用 allocator 进行内存分配和管理。\n\n   一个简单有效的方法就是修改上述 `address` 变量的定义，让其使用 `person` 的 allocator 初始化，然后将其添加到根节点。\n\n   ~~~~~~~~~~cpp\n   Document address(&person.GetAllocator());\n   ...\n   person[\"person\"].AddMember(\"address\", address[\"address\"], person.GetAllocator());\n   ~~~~~~~~~~\n   当然，如果你不想通过显式地写出 `address` 的 key 来得到其值，可以使用迭代器来实现:\n   ~~~~~~~~~~cpp\n   auto addressRoot = address.MemberBegin();\n   person[\"person\"].AddMember(addressRoot->name, addressRoot->value, person.GetAllocator());\n   ~~~~~~~~~~\n   \n   此外，还可以通过深拷贝 address document 来实现:\n   ~~~~~~~~~~cpp\n   Value addressValue = Value(address[\"address\"], person.GetAllocator());\n   person[\"person\"].AddMember(\"address\", addressValue, person.GetAllocator());\n   ~~~~~~~~~~\n\n## Document/Value (DOM)\n\n1. 什么是转移语义？为什么？\n\n   `Value` 不用复制语义，而使用了转移语义。这是指，当把来源值赋值于目标值时，来源值的所有权会转移至目标值。\n\n   由于转移快于复制，此设计决定强迫使用者注意到复制的消耗。\n\n2. 怎样去复制一个值？\n\n   有两个 API 可用：含 allocator 的构造函数，以及 `CopyFrom()`。可参考 [深复制 Value](doc/tutorial.zh-cn.md) 里的用例。\n\n3. 为什么我需要提供字符串的长度？\n\n   由于 C 字符串是空字符结尾的，需要使用 `strlen()` 去计算其长度，这是线性复杂度的操作。若使用者已知字符串的长度，对很多操作来说会造成不必要的消耗。\n\n   此外，RapidJSON 可处理含有 `\\u0000`（空字符）的字符串。若一个字符串含有空字符，`strlen()` 便不能返回真正的字符串长度。在这种情况下使用者必须明确地提供字符串长度。\n\n4. 为什么在许多 DOM 操作 API 中要提供分配器作为参数？\n\n   由于这些 API 是 `Value` 的成员函数，我们不希望为每个 `Value` 储存一个分配器指针。\n\n5. 
它会转换各种数值类型么？\n\n   当使用 `GetInt()`、`GetUint()` 等 API 时，可能会发生转换。对于整数至整数转换，仅当保证转换安全才会转换（否则会断言失败）。然而，当把一个 64 位有号／无号整数转换至 double 时，它会转换，但有可能会损失精度。含有小数的数字、或大于 64 位的整数，都只能使用 `GetDouble()` 获取其值。\n\n## Reader/Writer (SAX)\n\n1. 为什么不仅仅用 `printf` 输出一个 JSON？为什么需要 `Writer`？\n\n   最重要的是，`Writer` 能确保输出的 JSON 是格式正确的。错误地调用 SAX 事件（如 `StartObject()` 错配 `EndArray()`）会造成断言失败。此外，`Writer` 会把字符串进行转义（如 `\\n`）。最后，`printf()` 的数值输出可能并不是一个合法的 JSON number，特别是某些 locale 会有数字分隔符。而且 `Writer` 的数值字符串转换是使用非常快的算法来实现的，胜过 `printf()` 及 `iostream`。\n\n2. 我能否暂停解析过程，并在稍后继续？\n\n   基于性能考虑，目前版本并不直接支持此功能。然而，若执行环境支持多线程，使用者可以在另一线程解析 JSON，并通过阻塞输入流去暂停。\n\n## Unicode\n\n1. 它是否支持 UTF-8、UTF-16 及其他格式？\n\n   是。它完全支持 UTF-8、UTF-16（大端／小端）、UTF-32（大端／小端）及 ASCII。\n\n2. 它能否检测编码的合法性？\n\n   能。只需把 `kParseValidateEncodingFlag` 参考传给 `Parse()`。若发现在输入流中有非法的编码，它就会产生 `kParseErrorStringInvalidEncoding` 错误。\n\n3. 什么是代理对（surrogate pair)？RapidJSON 是否支持？\n\n   JSON 使用 UTF-16 编码去转义 Unicode 字符，例如 `\\u5927` 表示中文字“大”。要处理基本多文种平面（basic multilingual plane，BMP）以外的字符时，UTF-16 会把那些字符编码成两个 16 位值，这称为 UTF-16 代理对。例如，绘文字字符 U+1F602 在 JSON 中可被编码成 `\\uD83D\\uDE02`。\n\n   RapidJSON 完全支持解析及生成 UTF-16 代理对。 \n\n4. 它能否处理 JSON 字符串中的 `\\u0000`（空字符）？\n\n   能。RapidJSON 完全支持 JSON 字符串中的空字符。然而，使用者需要注意到这件事，并使用 `GetStringLength()` 及相关 API 去取得字符串真正长度。\n\n5. 能否对所有非 ASCII 字符输出成 `\\uxxxx` 形式？\n\n   可以。只要在 `Writer` 中使用 `ASCII<>` 作为输出编码参数，就可以强逼转义那些字符。\n\n## 流\n\n1. 我有一个很大的 JSON 文件。我应否把它整个载入内存中？\n\n   使用者可使用 `FileReadStream` 去逐块读入文件。但若使用于原位解析，必须载入整个文件。\n\n2. 我能否解析一个从网络上串流进来的 JSON？\n\n   可以。使用者可根据 `FileReadStream` 的实现，去实现一个自定义的流。\n\n3. 我不知道一些 JSON 将会使用哪种编码。怎样处理它们？\n\n   你可以使用 `AutoUTFInputStream`，它能自动检测输入流的编码。然而，它会带来一些性能开销。\n\n4. 什么是 BOM？RapidJSON 怎样处理它？\n\n   [字节顺序标记（byte order mark, BOM）](http://en.wikipedia.org/wiki/Byte_order_mark) 有时会出现于文件／流的开始，以表示其 UTF 编码类型。\n\n   RapidJSON 的 `EncodedInputStream` 可检测／跳过 BOM。`EncodedOutputStream` 可选择是否写入 BOM。可参考 [编码流](doc/stream.zh-cn.md) 中的例子。\n\n5. 为什么会涉及大端／小端？\n\n   流的大端／小端是 UTF-16 及 UTF-32 流要处理的问题，而 UTF-8 不需要处理。\n\n## 性能\n\n1. 
RapidJSON 是否真的快？\n\n   是。它可能是最快的开源 JSON 库。有一个 [评测](https://github.com/miloyip/nativejson-benchmark) 评估 C/C++ JSON 库的性能。\n\n2. 为什么它会快？\n\n   RapidJSON 的许多设计是针对时间／空间性能来设计的，这些决定可能会影响 API 的易用性。此外，它也使用了许多底层优化（内部函数／intrinsic、SIMD）及特别的算法（自定义的 double 至字符串转换、字符串至 double 的转换）。\n\n3. 什是是 SIMD？它如何用于 RapidJSON？\n\n   [SIMD](http://en.wikipedia.org/wiki/SIMD) 指令可以在现代 CPU 中执行并行运算。RapidJSON 支持使用 Intel 的 SSE2/SSE4.2 和 ARM 的 Neon 来加速对空白符、制表符、回车符和换行符的过滤处理。在解析含缩进的 JSON 时，这能提升性能。只要定义名为 `RAPIDJSON_SSE2` ，`RAPIDJSON_SSE42` 或 `RAPIDJSON_NEON` 的宏，就能启动这个功能。然而，若在不支持这些指令集的机器上执行这些可执行文件，会导致崩溃。\n\n4. 它会消耗许多内存么？\n\n   RapidJSON 的设计目标是减低内存占用。\n\n   在 SAX API 中，`Reader` 消耗的内存与 JSON 树深度加上最长 JSON 字符成正比。\n\n   在 DOM API 中，每个 `Value` 在 32/64 位架构下分别消耗 16/24 字节。RapidJSON 也使用一个特殊的内存分配器去减少分配的额外开销。\n\n5. 高性能的意义何在？\n\n   有些应用程序需要处理非常大的 JSON 文件。而有些后台应用程序需要处理大量的 JSON。达到高性能同时改善延时及吞吐量。更广义来说，这也可以节省能源。\n\n## 八卦\n\n1. 谁是 RapidJSON 的开发者？\n\n   叶劲峰（Milo Yip，[miloyip](https://github.com/miloyip)）是 RapidJSON 的原作者。全世界许多贡献者一直在改善 RapidJSON。Philipp A. Hartmann（[pah](https://github.com/pah)）实现了许多改进，也设置了自动化测试，而且还参与许多社区讨论。丁欧南（Don Ding，[thebusytypist](https://github.com/thebusytypist)）实现了迭代式解析器。Andrii Senkovych（[jollyroger](https://github.com/jollyroger)）完成了向 CMake 的迁移。Kosta（[Kosta-Github](https://github.com/Kosta-Github)）提供了一个非常灵巧的短字符串优化。也需要感谢其他献者及社区成员。\n\n2. 为何你要开发 RapidJSON？\n\n   在 2011 年开始这项目时，它只是一个兴趣项目。Milo Yip 是一个游戏程序员，他在那时候认识到 JSON 并希望在未来的项目中使用。由于 JSON 好像很简单，他希望写一个快速的仅有头文件的程序库。\n\n3. 为什么开发中段有一段长期空档？\n\n   主要是个人因素，例如加入新家庭成员。另外，Milo Yip 也花了许多业余时间去翻译 Jason Gregory 的《Game Engine Architecture》至中文版《游戏引擎架构》。\n\n4. 为什么这个项目从 Google Code 搬到 GitHub？\n\n   这是大势所趋，而且 GitHub 更为强大及方便。\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/features.md",
    "content": "# Features\n\n## General\n\n* Cross-platform\n * Compilers: Visual Studio, gcc, clang, etc.\n * Architectures: x86, x64, ARM, etc.\n * Operating systems: Windows, Mac OS X, Linux, iOS, Android, etc.\n* Easy installation\n * Header-file-only library. Just copy the headers to your project.\n* Self-contained, minimal dependencies\n * No STL, BOOST, etc.\n * Only includes `<cstdio>`, `<cstdlib>`, `<cstring>`, `<inttypes.h>`, `<new>`, `<stdint.h>`. \n* No C++ exceptions or RTTI\n* High performance\n * Uses templates and inline functions to reduce function call overheads.\n * Internally optimized Grisu2 and floating-point parsing implementations.\n * Optional SSE2/SSE4.2 support.\n\n## Standard compliance\n\n* RapidJSON should be fully RFC4627/ECMA-404 compliant.\n* Support JSON Pointer (RFC6901).\n* Support JSON Schema Draft v4.\n* Support Swagger v2 schema.\n* Support OpenAPI v3.0.x schema.\n* Support Unicode surrogate pairs.\n* Support null characters (`\"\\u0000\"`)\n * For example, `[\"Hello\\u0000World\"]` can be parsed and handled gracefully. There are APIs for getting/setting the lengths of strings.\n* Support optional relaxed syntax.\n * Single line (`// ...`) and multiple line (`/* ... */`) comments (`kParseCommentsFlag`). 
\n * Trailing commas at the end of objects and arrays (`kParseTrailingCommasFlag`).\n * `NaN`, `Inf`, `Infinity`, `-Inf` and `-Infinity` as `double` values (`kParseNanAndInfFlag`)\n* [NPM compliant](http://github.com/Tencent/rapidjson/blob/master/doc/npm.md).\n\n## Unicode\n\n* Support UTF-8, UTF-16, UTF-32 encodings, including little endian and big endian.\n * These encodings are used in input/output streams and the in-memory representation.\n* Support automatic detection of encodings in the input stream.\n* Support transcoding between encodings internally.\n * For example, you can read a UTF-8 file and let RapidJSON transcode the JSON strings into UTF-16 in the DOM.\n* Support encoding validation internally.\n * For example, you can read a UTF-8 file, and let RapidJSON check whether all JSON strings are valid UTF-8 byte sequences.\n* Support custom character types.\n * By default the character types are `char` for UTF8, `wchar_t` for UTF16, `uint32_t` for UTF32.\n* Support custom encodings.\n\n## API styles\n\n* SAX (Simple API for XML) style API\n * Similar to [SAX](http://en.wikipedia.org/wiki/Simple_API_for_XML), RapidJSON provides an event-driven, sequential-access parser API (`rapidjson::GenericReader`). It also provides a generator API (`rapidjson::Writer`) which consumes the same set of events.\n* DOM (Document Object Model) style API\n * Similar to [DOM](http://en.wikipedia.org/wiki/Document_Object_Model) for HTML/XML, RapidJSON can parse JSON into a DOM representation (`rapidjson::GenericDocument`) for easy manipulation, and finally stringify it back to JSON if needed.\n * The DOM style API (`rapidjson::GenericDocument`) is actually implemented with the SAX style API (`rapidjson::GenericReader`). SAX is faster but sometimes DOM is easier. 
Users can choose according to their scenarios.\n\n## Parsing\n\n* Recursive (default) and iterative parsers\n * The recursive parser is faster but prone to stack overflow in extreme cases.\n * The iterative parser uses a custom stack to keep the parsing state.\n* Support *in situ* parsing.\n * Parses JSON string values in-place in the source JSON, and then the DOM points to the addresses of those strings.\n * Faster than conventional parsing: no allocation for strings, no copy (if a string does not contain escapes), cache-friendly.\n* Support 32-bit/64-bit signed/unsigned integers and `double` for the JSON number type.\n* Support parsing multiple JSONs in an input stream (`kParseStopWhenDoneFlag`).\n* Error handling\n * Supports comprehensive error codes if parsing fails.\n * Supports error message localization.\n\n## DOM (Document)\n\n* RapidJSON checks the range of numerical values for conversions.\n* Optimization for string literals\n * Only stores a pointer instead of copying.\n* Optimization for \"short\" strings\n * Stores short strings in `Value` internally without additional allocation.\n * For UTF-8 strings: maximum 11 characters in 32-bit, 21 characters in 64-bit (13 characters in x86-64).\n* Optionally support `std::string` (define `RAPIDJSON_HAS_STDSTRING=1`)\n\n## Generation\n\n* Support `rapidjson::PrettyWriter` for adding newlines and indentation.\n\n## Stream\n\n* Support `rapidjson::GenericStringBuffer` for storing the output JSON as a string.\n* Support `rapidjson::FileReadStream` and `rapidjson::FileWriteStream` for input/output `FILE` objects.\n* Support custom streams.\n\n## Memory\n\n* Minimize memory overheads for DOM.\n * Each JSON value occupies exactly 16/20 bytes on most 32/64-bit machines (excluding the text string).\n* Support a fast default allocator.\n * A stack-based allocator (allocates sequentially; individual allocations cannot be freed; suitable for parsing).\n * Users can provide a pre-allocated buffer. 
(Possible to parse a number of JSONs without any CRT allocation)\n* Support standard CRT (C-runtime) allocator.\n* Support custom allocators.\n\n## Miscellaneous\n\n* Some C++11 support (optional)\n * Rvalue reference\n * `noexcept` specifier\n * Range-based for loop\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/features.zh-cn.md",
    "content": "# 特点\n\n## 总体\n\n* 跨平台\n * 编译器：Visual Studio、gcc、clang 等\n * 架构：x86、x64、ARM 等\n * 操作系统：Windows、Mac OS X、Linux、iOS、Android 等\n* 容易安装\n * 只有头文件的库。只需把头文件复制至你的项目中。\n* 独立、最小依赖\n * 不需依赖 STL、BOOST 等。\n * 只包含 `<cstdio>`, `<cstdlib>`, `<cstring>`, `<inttypes.h>`, `<new>`, `<stdint.h>`。 \n* 没使用 C++ 异常、RTTI\n* 高性能\n * 使用模版及内联函数去降低函数调用开销。\n * 内部经优化的 Grisu2 及浮点数解析实现。\n * 可选的 SSE2/SSE4.2 支持。\n\n## 符合标准\n\n* RapidJSON 应完全符合 RFC4627/ECMA-404 标准。\n* 支持 JSON Pointer (RFC6901).\n* 支持 JSON Schema Draft v4.\n* 支持 Unicode 代理对（surrogate pair）。\n* 支持空字符（`\"\\u0000\"`）。\n * 例如，可以优雅地解析及处理 `[\"Hello\\u0000World\"]`。含读写字符串长度的 API。\n* 支持可选的放宽语法\n * 单行（`// ...`）及多行（`/* ... */`） 注释 (`kParseCommentsFlag`)。\n * 在对象和数组结束前含逗号 (`kParseTrailingCommasFlag`)。\n * `NaN`、`Inf`、`Infinity`、`-Inf` 及 `-Infinity` 作为 `double` 值 (`kParseNanAndInfFlag`)\n* [NPM 兼容](https://github.com/Tencent/rapidjson/blob/master/doc/npm.md).\n\n## Unicode\n\n* 支持 UTF-8、UTF-16、UTF-32 编码，包括小端序和大端序。\n * 这些编码用于输入输出流，以及内存中的表示。\n* 支持从输入流自动检测编码。\n* 内部支持编码的转换。\n * 例如，你可以读取一个 UTF-8 文件，让 RapidJSON 把 JSON 字符串转换至 UTF-16 的 DOM。\n* 内部支持编码校验。\n * 例如，你可以读取一个 UTF-8 文件，让 RapidJSON 检查是否所有 JSON 字符串是合法的 UTF-8 字节序列。\n* 支持自定义的字符类型。\n * 预设的字符类型是：UTF-8 为 `char`，UTF-16 为 `wchar_t`，UTF32 为 `uint32_t`。\n* 支持自定义的编码。\n\n## API 风格\n\n* SAX（Simple API for XML）风格 API\n * 类似于 [SAX](http://en.wikipedia.org/wiki/Simple_API_for_XML), RapidJSON 提供一个事件循序访问的解析器 API（`rapidjson::GenericReader`）。RapidJSON 也提供一个生成器 API（`rapidjson::Writer`），可以处理相同的事件集合。\n* DOM（Document Object Model）风格 API\n * 类似于 HTML／XML 的 [DOM](http://en.wikipedia.org/wiki/Document_Object_Model)，RapidJSON 可把 JSON 解析至一个 DOM 表示方式（`rapidjson::GenericDocument`），以方便操作。如有需要，可把 DOM 转换（stringify）回 JSON。\n * DOM 风格 API（`rapidjson::GenericDocument`）实际上是由 SAX 风格 API（`rapidjson::GenericReader`）实现的。SAX 更快，但有时 DOM 更易用。用户可根据情况作出选择。\n\n## 解析\n\n* 递归式（预设）及迭代式解析器\n * 递归式解析器较快，但在极端情况下可出现堆栈溢出。\n * 迭代式解析器使用自定义的堆栈去维持解析状态。\n* 支持原位（*in situ*）解析。\n * 把 JSON 字符串的值解析至原 JSON 之中，然后让 DOM 指向那些字符串。\n * 
比常规分析更快：不需字符串的内存分配、不需复制（如字符串不含转义符）、缓存友好。\n* 对于 JSON 数字类型，支持 32-bit/64-bit 的有号／无号整数，以及 `double`。\n* 错误处理\n * 支持详尽的解析错误代号。\n * 支持本地化错误信息。\n\n## DOM (Document)\n\n* RapidJSON 在类型转换时会检查数值的范围。\n* 字符串字面量的优化\n * 只储存指针，不作复制\n* 优化“短”字符串\n * 在 `Value` 内储存短字符串，无需额外分配。\n * 对 UTF-8 字符串来说，32 位架构下可存储最多 11 字符，64 位下 21 字符（x86-64 下 13 字符）。\n* 可选地支持 `std::string`（定义 `RAPIDJSON_HAS_STDSTRING=1`）\n\n## 生成\n\n* 支持 `rapidjson::PrettyWriter` 去加入换行及缩进。\n\n## 输入输出流\n\n* 支持 `rapidjson::GenericStringBuffer`，把输出的 JSON 储存于字符串内。\n* 支持 `rapidjson::FileReadStream` 及 `rapidjson::FileWriteStream`，使用 `FILE` 对象作输入输出。\n* 支持自定义输入输出流。\n\n## 内存\n\n* 最小化 DOM 的内存开销。\n * 对大部分 32／64 位机器而言，每个 JSON 值只占 16 或 20 字节（不包含字符串）。\n* 支持快速的预设分配器。\n * 它是一个堆栈形式的分配器（顺序分配，不容许单独释放，适合解析过程之用）。\n * 使用者也可提供一个预分配的缓冲区。（有可能达至无需 CRT 分配就能解析多个 JSON）\n* 支持标准 CRT（C-runtime）分配器。\n* 支持自定义分配器。\n\n## 其他\n\n* 一些 C++11 的支持（可选）\n * 右值引用（rvalue reference）\n * `noexcept` 修饰符\n * 范围 for 循环\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/internals.md",
    "content": "# Internals\n\nThis section records some design and implementation details.\n\n[TOC]\n\n# Architecture {#Architecture}\n\n## SAX and DOM\n\nThe basic relationships of SAX and DOM are shown in the following UML diagram.\n\n![Architecture UML class diagram](diagram/architecture.png)\n\nThe core of the relationship is the `Handler` concept. From the SAX side, `Reader` parses a JSON from a stream and publishes events to a `Handler`. `Writer` implements the `Handler` concept to handle the same set of events. From the DOM side, `Document` implements the `Handler` concept to build a DOM according to the events. `Value` supports a `Value::Accept(Handler&)` function, which traverses the DOM to publish events.\n\nWith this design, SAX is not dependent on DOM. Even `Reader` and `Writer` have no dependencies between them. This provides flexibility to chain event publishers and handlers. Besides, `Value` does not depend on SAX either. So, in addition to stringifying a DOM to JSON, users may also stringify it to an XML writer, or do anything else.\n\n## Utility Classes\n\nBoth SAX and DOM APIs depend on three additional concepts: `Allocator`, `Encoding` and `Stream`. Their inheritance hierarchy is shown below.\n\n![Utility classes UML class diagram](diagram/utilityclass.png)\n\n# Value {#Value}\n\n`Value` (actually a typedef of `GenericValue<UTF8<>>`) is the core of the DOM API. This section describes its design.\n\n## Data Layout {#DataLayout}\n\n`Value` is a [variant type](http://en.wikipedia.org/wiki/Variant_type). In RapidJSON's context, an instance of `Value` can contain one of six JSON value types. This is made possible by using a `union`. Each `Value` contains two members: `union Data data_` and an `unsigned flags_`. The `flags_` indicates the JSON type, and also additional information. \n\nThe following tables show the data layout of each type. 
The 32-bit/64-bit columns indicate the size of the field in bytes.\n\n| Null              |                                  |32-bit|64-bit|\n|-------------------|----------------------------------|:----:|:----:|\n| (unused)          |                                  |4     |8     | \n| (unused)          |                                  |4     |4     |\n| (unused)          |                                  |4     |4     |\n| `unsigned flags_` | `kNullType kNullFlag`            |4     |4     |\n\n| Bool              |                                                    |32-bit|64-bit|\n|-------------------|----------------------------------------------------|:----:|:----:|\n| (unused)          |                                                    |4     |8     | \n| (unused)          |                                                    |4     |4     |\n| (unused)          |                                                    |4     |4     |\n| `unsigned flags_` | `kBoolType` (either `kTrueFlag` or `kFalseFlag`) |4     |4     |\n\n| String              |                                     |32-bit|64-bit|\n|---------------------|-------------------------------------|:----:|:----:|\n| `Ch* str`           | Pointer to the string (may own)     |4     |8     | \n| `SizeType length`   | Length of string                    |4     |4     |\n| (unused)            |                                     |4     |4     |\n| `unsigned flags_`   | `kStringType kStringFlag ...`       |4     |4     |\n\n| Object              |                                     |32-bit|64-bit|\n|---------------------|-------------------------------------|:----:|:----:|\n| `Member* members`   | Pointer to array of members (owned) |4     |8     | \n| `SizeType size`     | Number of members                   |4     |4     |\n| `SizeType capacity` | Capacity of members                 |4     |4     |\n| `unsigned flags_`   | `kObjectType kObjectFlag`           |4     |4     |\n\n| Array               
|                                     |32-bit|64-bit|\n|---------------------|-------------------------------------|:----:|:----:|\n| `Value* values`     | Pointer to array of values (owned)  |4     |8     | \n| `SizeType size`     | Number of values                    |4     |4     |\n| `SizeType capacity` | Capacity of values                  |4     |4     |\n| `unsigned flags_`   | `kArrayType kArrayFlag`             |4     |4     |\n\n| Number (Int)        |                                     |32-bit|64-bit|\n|---------------------|-------------------------------------|:----:|:----:|\n| `int i`             | 32-bit signed integer               |4     |4     | \n| (zero padding)      | 0                                   |4     |4     |\n| (unused)            |                                     |4     |8     |\n| `unsigned flags_`   | `kNumberType kNumberFlag kIntFlag kInt64Flag ...` |4     |4     |\n\n| Number (UInt)       |                                     |32-bit|64-bit|\n|---------------------|-------------------------------------|:----:|:----:|\n| `unsigned u`        | 32-bit unsigned integer             |4     |4     | \n| (zero padding)      | 0                                   |4     |4     |\n| (unused)            |                                     |4     |8     |\n| `unsigned flags_`   | `kNumberType kNumberFlag kUintFlag kUint64Flag ...` |4     |4     |\n\n| Number (Int64)      |                                     |32-bit|64-bit|\n|---------------------|-------------------------------------|:----:|:----:|\n| `int64_t i64`       | 64-bit signed integer               |8     |8     | \n| (unused)            |                                     |4     |8     |\n| `unsigned flags_`   | `kNumberType kNumberFlag kInt64Flag ...`          |4     |4     |\n\n| Number (Uint64)     |                                     |32-bit|64-bit|\n|---------------------|-------------------------------------|:----:|:----:|\n| `uint64_t i64`      | 64-bit unsigned 
integer             |8     |8     | \n| (unused)            |                                     |4     |8     |\n| `unsigned flags_`   | `kNumberType kNumberFlag kInt64Flag ...`          |4     |4     |\n\n| Number (Double)     |                                     |32-bit|64-bit|\n|---------------------|-------------------------------------|:----:|:----:|\n| `uint64_t i64`      | Double precision floating-point     |8     |8     | \n| (unused)            |                                     |4     |8     |\n| `unsigned flags_`   | `kNumberType kNumberFlag kDoubleFlag` |4     |4     |\n\nHere are some notes:\n* To reduce memory consumption for 64-bit architecture, `SizeType` is a typedef of `unsigned` instead of `size_t`.\n* Zero padding for 32-bit numbers may be placed after or before the actual type, according to the endianness. This makes it possible to interpret a 32-bit integer as a 64-bit integer without any conversion.\n* An `Int` is always an `Int64`, but the converse is not always true.\n\n## Flags {#Flags}\n\nThe 32-bit `flags_` contains both the JSON type and other additional information. As shown in the above tables, each JSON type contains redundant `kXXXType` and `kXXXFlag`. This design is for optimizing the operation of testing bit-flags (`IsNumber()`) and obtaining a sequential number for each type (`GetType()`).\n\nString has two optional flags. `kCopyFlag` means that the string owns a copy of the string. `kInlineStrFlag` means using [Short-String Optimization](#ShortString).\n\nNumber is a bit more complicated. For normal integer values, it can contain `kIntFlag`, `kUintFlag`, `kInt64Flag` and/or `kUint64Flag`, according to the range of the integer. Numbers with a fraction part, and integers beyond the 64-bit range, are stored as `double` with `kDoubleFlag`.\n\n## Short-String Optimization {#ShortString}\n\n[Kosta](https://github.com/Kosta-Github) provided a very neat short-string optimization. The optimization idea is as follows. 
Excluding the `flags_`, a `Value` has 12 or 16 bytes (32-bit or 64-bit) for storing actual data. Instead of storing a pointer to a string, it is possible to store short strings in this space internally. For an encoding with a 1-byte character type (e.g. `char`), it can store strings of at most 11 or 15 characters inside the `Value` type.\n\n| ShortString (Ch=char) |                                   |32-bit|64-bit|\n|---------------------|-------------------------------------|:----:|:----:|\n| `Ch str[MaxChars]`  | String buffer                       |11    |15    | \n| `Ch invLength`      | MaxChars - Length                   |1     |1     |\n| `unsigned flags_`   | `kStringType kStringFlag ...`       |4     |4     |\n\nA special technique is applied. Instead of storing the length of the string directly, it stores (MaxChars - length). This makes it possible to store 11 characters with a trailing `\\0`.\n\nThis optimization can reduce memory usage for copy-strings. It can also improve cache coherence and thus runtime performance.\n\n# Allocator {#InternalAllocator}\n\n`Allocator` is a concept in RapidJSON:\n~~~cpp\nconcept Allocator {\n    static const bool kNeedFree;    //!< Whether this allocator needs to call Free().\n\n    // Allocate a memory block.\n    // \\param size Size of the memory block in bytes.\n    // \\returns pointer to the memory block.\n    void* Malloc(size_t size);\n\n    // Resize a memory block.\n    // \\param originalPtr The pointer to current memory block. Null pointer is permitted.\n    // \\param originalSize The current size in bytes. (Design issue: since some allocators may not book-keep this, explicitly passing it can save memory.)\n    // \\param newSize the new size in bytes.\n    void* Realloc(void* originalPtr, size_t originalSize, size_t newSize);\n\n    // Free a memory block.\n    // \\param ptr Pointer to the memory block. 
Null pointer is permitted.\n    static void Free(void *ptr);\n};\n~~~\n\nNote that `Malloc()` and `Realloc()` are member functions but `Free()` is a static member function.\n\n## MemoryPoolAllocator {#MemoryPoolAllocator}\n\n`MemoryPoolAllocator` is the default allocator for DOM. It allocates but does not free memory. This is suitable for building a DOM tree.\n\nInternally, it allocates chunks of memory from the base allocator (by default `CrtAllocator`) and stores the chunks as a singly linked list. When the user requests an allocation, it allocates memory in the following order:\n\n1. The user-supplied buffer, if it is available. (See [User Buffer section in DOM](doc/dom.md))\n2. If the user-supplied buffer is full, use the current memory chunk.\n3. If the current chunk is full, allocate a new chunk of memory.\n\n# Parsing Optimization {#ParsingOptimization}\n\n## Skip Whitespaces with SIMD {#SkipwhitespaceWithSIMD}\n\nWhen parsing JSON from a stream, the parser needs to skip 4 whitespace characters:\n\n1. Space (`U+0020`)\n2. Character Tabulation (`U+0009`)\n3. Line Feed (`U+000A`)\n4. Carriage Return (`U+000D`)\n\nA simple implementation would be:\n~~~cpp\nvoid SkipWhitespace(InputStream& s) {\n    while (s.Peek() == ' ' || s.Peek() == '\\n' || s.Peek() == '\\r' || s.Peek() == '\\t')\n        s.Take();\n}\n~~~\n\nHowever, this requires 4 comparisons and some branching for each character. This was found to be a hot spot.\n\nTo accelerate this process, SIMD is applied to compare 16 characters against the 4 whitespace characters in each iteration. Currently RapidJSON supports SSE2, SSE4.2 and ARM Neon instructions for this. It is only activated for UTF-8 memory streams, including string streams and *in situ* parsing.\n\nTo enable this optimization, you need to define `RAPIDJSON_SSE2`, `RAPIDJSON_SSE42` or `RAPIDJSON_NEON` before including `rapidjson.h`. 
Some compilers can detect the setting, as in `perftest.h`:\n\n~~~cpp\n// __SSE2__ and __SSE4_2__ are recognized by gcc, clang, and the Intel compiler.\n// We use -march=native with gmake to enable -msse2 and -msse4.2, if supported.\n// Likewise, __ARM_NEON is used to detect Neon.\n#if defined(__SSE4_2__)\n#  define RAPIDJSON_SSE42\n#elif defined(__SSE2__)\n#  define RAPIDJSON_SSE2\n#elif defined(__ARM_NEON)\n#  define RAPIDJSON_NEON\n#endif\n~~~\n\nNote that these are compile-time settings. Running the executable on a machine without such instruction set support will make it crash.\n\n### Page boundary issue\n\nIn an early version of RapidJSON, [an issue](https://code.google.com/archive/p/rapidjson/issues/104) reported that `SkipWhitespace_SIMD()` caused a crash very rarely (around 1 in 500,000). After investigation, it was suspected that `_mm_loadu_si128()` accessed bytes after `'\\0'`, and across a protected page boundary.\n\nIn the [Intel® 64 and IA-32 Architectures Optimization Reference Manual](http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-optimization-manual.html), section 10.2.1:\n\n> To support algorithms requiring unaligned 128-bit SIMD memory accesses, memory buffer allocation by a caller function should consider adding some pad space so that a callee function can safely use the address pointer with unaligned 128-bit SIMD memory operations.\n> The minimal padding size should be the width of the SIMD register that might be used in conjunction with unaligned SIMD memory access.\n\nThis is not feasible, as RapidJSON should not enforce such a requirement.\n\nTo fix this issue, the routine currently processes bytes one by one up to the next aligned address. After that, aligned reads are used to perform SIMD processing. 
Also see [#85](https://github.com/Tencent/rapidjson/issues/85).\n\n## Local Stream Copy {#LocalStreamCopy}\n\nDuring optimization, it was found that some compilers cannot localize some member data accesses of streams into local variables or registers. Experimental results show that for some stream types, making a copy of the stream and using it in the inner loop can improve performance. For example, the actual (non-SIMD) `SkipWhitespace()` is implemented as:\n\n~~~cpp\ntemplate<typename InputStream>\nvoid SkipWhitespace(InputStream& is) {\n    internal::StreamLocalCopy<InputStream> copy(is);\n    InputStream& s(copy.s);\n\n    while (s.Peek() == ' ' || s.Peek() == '\\n' || s.Peek() == '\\r' || s.Peek() == '\\t')\n        s.Take();\n}\n~~~\n\nDepending on the traits of the stream, `StreamLocalCopy` will make (or not make) a copy of the stream object, use it locally, and copy the state of the stream back to the original stream.\n\n## Parsing to Double {#ParsingDouble}\n\nParsing a string into a `double` is difficult. The standard library function `strtod()` can do the job but it is slow. By default, the parser uses a normal-precision setting. This has a maximum 3 [ULP](http://en.wikipedia.org/wiki/Unit_in_the_last_place) error and is implemented in `internal::StrtodNormalPrecision()`.\n\nWhen using `kParseFullPrecisionFlag`, the parser calls `internal::StrtodFullPrecision()` instead, which implements 3 conversion methods.\n1. [Fast-Path](http://www.exploringbinary.com/fast-path-decimal-to-floating-point-conversion/).\n2. Custom DIY-FP implementation as in [double-conversion](https://github.com/floitsch/double-conversion).\n3. Big Integer Method as in (Clinger, William D. How to read floating point numbers accurately. Vol. 25. No. 6. 
ACM, 1990).\n\nIf the first conversion method fails, it tries the second, and so on.\n\n# Generation Optimization {#GenerationOptimization}\n\n## Integer-to-String conversion {#itoa}\n\nThe naive algorithm for integer-to-string conversion involves a division per decimal digit. We implemented and evaluated various approaches in [itoa-benchmark](https://github.com/miloyip/itoa-benchmark).\n\nAlthough the SSE2 version is the fastest, the difference is minor compared to the runner-up, `branchlut`. Since `branchlut` is a pure C++ implementation, RapidJSON adopts `branchlut`.\n\n## Double-to-String conversion {#dtoa}\n\nOriginally RapidJSON used `snprintf(..., ..., \"%g\")` for double-to-string conversion. This is not accurate, as the default precision is 6. Later we also found that this is slow and that there is an alternative.\n\nGoogle's V8 [double-conversion](https://github.com/floitsch/double-conversion) implemented a newer, fast algorithm called Grisu3 (Loitsch, Florian. \"Printing floating-point numbers quickly and accurately with integers.\" ACM Sigplan Notices 45.6 (2010): 233-243.).\n\nHowever, since it is not header-only, we implemented a header-only version of Grisu2. This algorithm guarantees that the result is always accurate. 
In most cases it produces the shortest (optimal) string representation.\n\nThe header-only conversion function has been evaluated in [dtoa-benchmark](https://github.com/miloyip/dtoa-benchmark).\n\n# Parser {#Parser}\n\n## Iterative Parser {#IterativeParser}\n\nThe iterative parser is a recursive descent LL(1) parser\nimplemented in a non-recursive manner.\n\n### Grammar {#IterativeParserGrammar}\n\nThe grammar used for this parser is based on strict JSON syntax:\n~~~~~~~~~~\nS -> array | object\narray -> [ values ]\nobject -> { members }\nvalues -> non-empty-values | ε\nnon-empty-values -> value addition-values\naddition-values -> ε | , non-empty-values\nmembers -> non-empty-members | ε\nnon-empty-members -> member addition-members\naddition-members -> ε | , non-empty-members\nmember -> STRING : value\nvalue -> STRING | NUMBER | NULL | BOOLEAN | object | array\n~~~~~~~~~~\n\nNote that left factoring is applied to non-terminals `values` and `members`\nto make the grammar LL(1).\n\n### Parsing Table {#IterativeParserParsingTable}\n\nBased on the grammar, we can construct the FIRST and FOLLOW sets.\n\nThe FIRST set of non-terminals is listed below:\n\n|    NON-TERMINAL   |               FIRST              |\n|:-----------------:|:--------------------------------:|\n|       array       |                 [                |\n|       object      |                 {                |\n|       values      | ε STRING NUMBER NULL BOOLEAN { [ |\n|  addition-values  |              ε COMMA             |\n|      members      |             ε STRING             |\n|  addition-members |              ε COMMA             |\n|       member      |              STRING              |\n|       value       |  STRING NUMBER NULL BOOLEAN { [  |\n|         S         |                [ {               |\n| non-empty-members |              STRING              |\n|  non-empty-values |  STRING NUMBER NULL BOOLEAN { [  |\n\nThe FOLLOW set is listed below:\n\n|    NON-TERMINAL   |  FOLLOW 
|\n|:-----------------:|:-------:|\n|         S         |    $    |\n|       array       | , $ } ] |\n|       object      | , $ } ] |\n|       values      |    ]    |\n|  non-empty-values |    ]    |\n|  addition-values  |    ]    |\n|      members      |    }    |\n| non-empty-members |    }    |\n|  addition-members |    }    |\n|       member      |   , }   |\n|       value       |  , } ]  |\n\nFinally the parsing table can be constructed from FIRST and FOLLOW set:\n\n|    NON-TERMINAL   |           [           |           {           |          ,          | : | ] | } |          STRING         |         NUMBER        |          NULL         |        BOOLEAN        |\n|:-----------------:|:---------------------:|:---------------------:|:-------------------:|:-:|:-:|:-:|:-----------------------:|:---------------------:|:---------------------:|:---------------------:|\n|         S         |         array         |         object        |                     |   |   |   |                         |                       |                       |                       |\n|       array       |       [ values ]      |                       |                     |   |   |   |                         |                       |                       |                       |\n|       object      |                       |      { members }      |                     |   |   |   |                         |                       |                       |                       |\n|       values      |    non-empty-values   |    non-empty-values   |                     |   | ε |   |     non-empty-values    |    non-empty-values   |    non-empty-values   |    non-empty-values   |\n|  non-empty-values | value addition-values | value addition-values |                     |   |   |   |  value addition-values  | value addition-values | value addition-values | value addition-values |\n|  addition-values  |                       |                       |  , non-empty-values |   | ε |   
|                         |                       |                       |                       |\n|      members      |                       |                       |                     |   |   | ε |    non-empty-members    |                       |                       |                       |\n| non-empty-members |                       |                       |                     |   |   |   | member addition-members |                       |                       |                       |\n|  addition-members |                       |                       | , non-empty-members |   |   | ε |                         |                       |                       |                       |\n|       member      |                       |                       |                     |   |   |   |      STRING : value     |                       |                       |                       |\n|       value       |         array         |         object        |                     |   |   |   |          STRING         |         NUMBER        |          NULL         |        BOOLEAN        |\n\nThere is a great [tool](http://hackingoff.com/compilers/predict-first-follow-set) for the above grammar analysis.\n\n### Implementation {#IterativeParserImplementation}\n\nBased on the parsing table, a direct (or conventional) implementation\nthat pushes the production body in reverse order\nwhile generating a production could work.\n\nIn RapidJSON, several modifications (or adaptations to the current design) are made to a direct implementation.\n\nFirst, the parsing table is encoded in a state machine in RapidJSON.\nStates are constructed from the head and body of a production.\nState transitions are constructed from production rules.\nBesides, extra states are added for productions involving `array` and `object`.\nIn this way the generation of array values or object members is a single state transition,\nrather than several pop/push operations in the direct 
implementation.\nThis also makes the estimation of stack size easier.\n\nThe state diagram is shown as follows:\n\n![State Diagram](diagram/iterative-parser-states-diagram.png)\n\nSecond, the iterative parser also keeps track of the array's value count and the object's member count\nin its internal stack, which may differ from a conventional implementation.\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/internals.zh-cn.md",
    "content": "# 内部架构\n\n本部分记录了一些设计和实现细节。\n\n[TOC]\n\n# 架构 {#Architecture}\n\n## SAX 和 DOM\n\n下面的 UML 图显示了 SAX 和 DOM 的基本关系。\n\n![架构 UML 类图](diagram/architecture.png)\n\n关系的核心是 `Handler` 概念。在 SAX 一边，`Reader` 从流解析 JSON 并将事件发送到 `Handler`。`Writer` 实现了 `Handler` 概念，用于处理相同的事件。在 DOM 一边，`Document` 实现了 `Handler` 概念，用于通过这些事件来构建 DOM。`Value` 支持了 `Value::Accept(Handler&)` 函数，它可以将 DOM 转换为事件进行发送。\n\n在这个设计中，SAX 是不依赖于 DOM 的。甚至 `Reader` 和 `Writer` 之间也没有依赖。这提供了连接事件发送器和处理器的灵活性。除此之外，`Value` 也是不依赖于 SAX 的。所以，除了将 DOM 序列化为 JSON 之外，用户也可以将其序列化为 XML，或者做任何其他事情。\n\n## 工具类\n\nSAX 和 DOM API 都依赖于3个额外的概念：`Allocator`、`Encoding` 和 `Stream`。它们的继承层次结构如下图所示。\n\n![工具类 UML 类图](diagram/utilityclass.png)\n\n# 值（Value） {#Value}\n\n`Value` （实际上被定义为 `GenericValue<UTF8<>>`）是 DOM API 的核心。本部分描述了它的设计。\n\n## 数据布局 {#DataLayout}\n\n`Value` 是[可变类型](http://en.wikipedia.org/wiki/Variant_type)。在 RapidJSON 的上下文中，一个 `Value` 的实例可以包含6种 JSON 数据类型之一。通过使用 `union` ，这是可能实现的。每一个 `Value` 包含两个成员：`union Data data_` 和 `unsigned flags_`。`flags_` 表明了 JSON 类型，以及附加的信息。\n\n下表显示了所有类型的数据布局。32位/64位列表明了字段所占用的字节数。\n\n| Null              |                                  | 32位 | 64位 |\n|-------------------|----------------------------------|:----:|:----:|\n| （未使用）        |                                  |4     |8     |\n| （未使用）        |                                  |4     |4     |\n| （未使用）        |                                  |4     |4     |\n| `unsigned flags_` | `kNullType kNullFlag`            |4     |4     |\n\n| Bool              |                                                    | 32位 | 64位 |\n|-------------------|----------------------------------------------------|:----:|:----:|\n| （未使用）        |                                                    |4     |8     |\n| （未使用）        |                                                    |4     |4     |\n| （未使用）        |                                                    |4     |4     |\n| `unsigned flags_` | `kBoolType` (either `kTrueFlag` or `kFalseFlag`)   |4     |4     |\n\n| 
String              |                                     | 32位 | 64位 |\n|---------------------|-------------------------------------|:----:|:----:|\n| `Ch* str`           | 指向字符串的指针（可能拥有所有权）  |4     |8     |\n| `SizeType length`   | 字符串长度                          |4     |4     |\n| （未使用）          |                                     |4     |4     |\n| `unsigned flags_`   | `kStringType kStringFlag ...`       |4     |4     |\n\n| Object              |                                     | 32位 | 64位 |\n|---------------------|-------------------------------------|:----:|:----:|\n| `Member* members`   | 指向成员数组的指针（拥有所有权）    |4     |8     |\n| `SizeType size`     | 成员数量                            |4     |4     |\n| `SizeType capacity` | 成员容量                            |4     |4     |\n| `unsigned flags_`   | `kObjectType kObjectFlag`           |4     |4     |\n\n| Array               |                                     | 32位 | 64位 |\n|---------------------|-------------------------------------|:----:|:----:|\n| `Value* values`     | 指向值数组的指针（拥有所有权）      |4     |8     |\n| `SizeType size`     | 值数量                              |4     |4     |\n| `SizeType capacity` | 值容量                              |4     |4     |\n| `unsigned flags_`   | `kArrayType kArrayFlag`             |4     |4     |\n\n| Number (Int)        |                                     | 32位 | 64位 |\n|---------------------|-------------------------------------|:----:|:----:|\n| `int i`             | 32位有符号整数                      |4     |4     |\n| （零填充）          | 0                                   |4     |4     |\n| （未使用）          |                                     |4     |8     |\n| `unsigned flags_`   | `kNumberType kNumberFlag kIntFlag kInt64Flag ...` |4     |4     |\n\n| Number (UInt)       |                                     | 32位 | 64位 |\n|---------------------|-------------------------------------|:----:|:----:|\n| `unsigned u`        | 32位无符号整数                      |4     |4     |\n| 
（零填充）          | 0                                   |4     |4     |\n| （未使用）          |                                     |4     |8     |\n| `unsigned flags_`   | `kNumberType kNumberFlag kUintFlag kUint64Flag ...` |4     |4     |\n\n| Number (Int64)      |                                     | 32位 | 64位 |\n|---------------------|-------------------------------------|:----:|:----:|\n| `int64_t i64`       | 64位有符号整数                      |8     |8     |\n| （未使用）          |                                     |4     |8     |\n| `unsigned flags_`   | `kNumberType kNumberFlag kInt64Flag ...`          |4     |4     |\n\n| Number (Uint64)     |                                     | 32位 | 64位 |\n|---------------------|-------------------------------------|:----:|:----:|\n| `uint64_t i64`      | 64位无符号整数                      |8     |8     |\n| （未使用）          |                                     |4     |8     |\n| `unsigned flags_`   | `kNumberType kNumberFlag kInt64Flag ...`          |4     |4     |\n\n| Number (Double)     |                                     | 32位 | 64位 |\n|---------------------|-------------------------------------|:----:|:----:|\n| `uint64_t i64`      | 双精度浮点数                        |8     |8     |\n| （未使用）          |                                     |4     |8     |\n| `unsigned flags_`   |`kNumberType kNumberFlag kDoubleFlag`|4     |4     |\n\n这里有一些需要注意的地方：\n* 为了减少在64位架构上的内存消耗，`SizeType` 被定义为 `unsigned` 而不是 `size_t`。\n* 32位整数的零填充可能被放在实际类型的前面或后面，这依赖于字节序。这使得它可以将32位整数不经过任何转换就可以解释为64位整数。\n* `Int` 永远是 `Int64`，反之不然。\n\n## 标志 {#Flags}\n\n32位的 `flags_` 包含了 JSON 类型和其他信息。如前文中的表所述，每一种 JSON 类型包含了冗余的 `kXXXType` 和 `kXXXFlag`。这个设计是为了优化测试位标志（`IsNumber()`）和获取每一种类型的序列号（`GetType()`）。\n\n字符串有两个可选的标志。`kCopyFlag` 表明这个字符串拥有字符串拷贝的所有权。而 `kInlineStrFlag` 意味着使用了[短字符串优化](#ShortString)。\n\n数字更加复杂一些。对于普通的整数值，它可以包含 `kIntFlag`、`kUintFlag`、 `kInt64Flag` 和/或 `kUint64Flag`，这由整数的范围决定。带有小数或者超过64位所能表达的范围的整数的数字会被存储为带有 `kDoubleFlag` 的 `double`。\n\n## 短字符串优化 
{#ShortString}\n\n[Kosta](https://github.com/Kosta-Github) 提供了很棒的短字符串优化。这个优化的方案如下所述。除去 `flags_`，`Value` 有12或16字节（对于32位或64位）来存储实际的数据。这为在其内部直接存储短字符串而不是存储字符串的指针创造了可能。对于1字节的字符类型（例如 `char`），它可以在 `Value` 类型内部存储至多11或15个字符的字符串。\n\n|ShortString (Ch=char)|                                     | 32位 | 64位 |\n|---------------------|-------------------------------------|:----:|:----:|\n| `Ch str[MaxChars]`  | 字符串缓冲区                        |11    |15    |\n| `Ch invLength`      | MaxChars - Length                   |1     |1     |\n| `unsigned flags_`   | `kStringType kStringFlag ...`       |4     |4     |\n\n这里使用了一项特殊的技术。它存储了 (MaxChars - length) 而不直接存储字符串的长度。这使得存储11个字符并带有结尾的 `\\0` 成为可能：当字符串恰好有 MaxChars 个字符时，`invLength` 为 0，正好充当了结尾的 `\\0`。\n\n这个优化可以减少拷贝字符串时的内存占用。它也改善了缓存一致性，并进一步提高了运行时性能。\n\n# 分配器（Allocator） {#InternalAllocator}\n\n`Allocator` 是 RapidJSON 中的概念：\n~~~cpp\nconcept Allocator {\n    static const bool kNeedFree;    //!< 表明这个分配器是否需要调用 Free()。\n\n    // 申请内存块。\n    // \\param size 内存块的大小，以字节记。\n    // \\returns 指向内存块的指针。\n    void* Malloc(size_t size);\n\n    // 调整内存块的大小。\n    // \\param originalPtr 当前内存块的指针。空指针是被允许的。\n    // \\param originalSize 当前大小，以字节记。（设计问题：因为有些分配器可能不会记录它，显式地传递它可以节约内存。）\n    // \\param newSize 新大小，以字节记。\n    void* Realloc(void* originalPtr, size_t originalSize, size_t newSize);\n\n    // 释放内存块。\n    // \\param ptr 指向内存块的指针。空指针是被允许的。\n    static void Free(void *ptr);\n};\n~~~\n\n需要注意的是 `Malloc()` 和 `Realloc()` 是成员函数而 `Free()` 是静态成员函数。\n\n## MemoryPoolAllocator {#MemoryPoolAllocator}\n\n`MemoryPoolAllocator` 是 DOM 的默认内存分配器。它只申请内存而不释放内存。这对于构建 DOM 树非常合适。\n\n在它的内部，它从基础的内存分配器申请内存块（默认为 `CrtAllocator`）并将这些内存块存储为单向链表。当用户请求申请内存时，它会遵循下列步骤来申请内存：\n\n1. 如果可用，使用用户提供的缓冲区。（见 [User Buffer section in DOM](doc/dom.md)）\n2. 如果用户提供的缓冲区已满，使用当前内存块。\n3. 如果当前内存块已满，申请新的内存块。\n\n# 解析优化 {#ParsingOptimization}\n\n## 使用 SIMD 跳过空格 {#SkipwhitespaceWithSIMD}\n\n当从流中解析 JSON 时，解析器需要跳过4种空格字符：\n\n1. 空格 (`U+0020`)\n2. 制表符 (`U+0009`)\n3. 换行 (`U+000A`)\n4. 
回车 (`U+000D`)\n\n这是一份简单的实现：\n~~~cpp\nvoid SkipWhitespace(InputStream& s) {\n    while (s.Peek() == ' ' || s.Peek() == '\\n' || s.Peek() == '\\r' || s.Peek() == '\\t')\n        s.Take();\n}\n~~~\n\n但是，这需要对每个字符进行4次比较以及一些分支。这被发现是一个热点。\n\n为了加速这一处理，RapidJSON 使用 SIMD 指令，在一次迭代中把16个字符与4个空格字符进行比较。目前 RapidJSON 支持 SSE2、SSE4.2 和 ARM Neon 指令。同时它也只会对 UTF-8 内存流启用，包括字符串流或 *原位* 解析。\n\n你可以通过在包含 `rapidjson.h` 之前定义 `RAPIDJSON_SSE2`、`RAPIDJSON_SSE42` 或 `RAPIDJSON_NEON` 来启用这个优化。一些编译器可以检测这个设置，如 `perftest.h`：\n\n~~~cpp\n// __SSE2__ 和 __SSE4_2__ 可被 gcc、clang 和 Intel 编译器识别：\n// 如果支持的话，我们在 gmake 中使用了 -march=native 来启用 -msse2 和 -msse4.2\n// 同样地，__ARM_NEON 被用于识别 Neon\n#if defined(__SSE4_2__)\n#  define RAPIDJSON_SSE42\n#elif defined(__SSE2__)\n#  define RAPIDJSON_SSE2\n#elif defined(__ARM_NEON)\n#  define RAPIDJSON_NEON\n#endif\n~~~\n\n需要注意的是，这是编译期的设置。在不支持这些指令的机器上运行可执行文件会使它崩溃。\n\n### 页面对齐问题\n\n在 RapidJSON 的早期版本中，有用户报告了[一个问题](https://code.google.com/archive/p/rapidjson/issues/104)：`SkipWhitespace_SIMD()` 会罕见地导致崩溃（约五十万分之一的几率）。在调查之后，怀疑是 `_mm_loadu_si128()` 访问了 `'\\0'` 之后的内存，并越过被保护的页面边界。\n\n在 [Intel® 64 and IA-32 Architectures Optimization Reference Manual](http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-optimization-manual.html) 中，章节 10.2.1：\n\n> 为了支持需要非对齐的128位 SIMD 内存访问的算法，调用者的内存缓冲区申请应当考虑添加一些填充空间，这样被调用的函数可以安全地将地址指针用于未对齐的128位 SIMD 内存操作。\n> 在结合非对齐的 SIMD 内存操作中，最小的对齐大小应该等于 SIMD 寄存器的大小。\n\n对于 RapidJSON 来说，这显然是不可行的，因为 RapidJSON 不应当强迫用户进行内存对齐。\n\n为了修复这个问题，当前的代码会先按字节处理直到下一个对齐的地址。在这之后，使用对齐读取来进行 SIMD 处理。见 [#85](https://github.com/Tencent/rapidjson/issues/85)。\n\n## 局部流拷贝 {#LocalStreamCopy}\n\n在优化的过程中，我们发现一些编译器不能把对流的某些成员数据的访问放入局部变量或者寄存器中。测试结果显示，对于一些流类型，创建流的拷贝并将其用于内层循环中可以改善性能。例如，实际（非 SIMD）的 `SkipWhitespace()` 被实现为：\n\n~~~cpp\ntemplate<typename InputStream>\nvoid SkipWhitespace(InputStream& is) {\n    internal::StreamLocalCopy<InputStream> copy(is);\n    InputStream& s(copy.s);\n\n    while (s.Peek() == ' ' || s.Peek() == '\\n' || s.Peek() == '\\r' || s.Peek() == '\\t')\n        s.Take();\n}\n~~~\n\n基于流的特征，`StreamLocalCopy` 会创建（或不创建）流对象的拷贝，在局部使用它并将流的状态拷贝回原来的流。\n\n## 解析为双精度浮点数 {#ParsingDouble}\n\n将字符串解析为 `double` 并不简单。标准库函数 `strtod()` 可以胜任这项工作，但它比较缓慢。默认情况下，解析器使用常规精度设置。这最多有 3 [ULP](http://en.wikipedia.org/wiki/Unit_in_the_last_place) 的误差，并实现在 `internal::StrtodNormalPrecision()` 中。\n\n当使用 `kParseFullPrecisionFlag` 时，解析器会改为调用 `internal::StrtodFullPrecision()`，这个函数实现了三个版本的转换：\n1. [Fast-Path](http://www.exploringbinary.com/fast-path-decimal-to-floating-point-conversion/)。\n2. [double-conversion](https://github.com/floitsch/double-conversion) 中的自定义 DIY-FP 实现。\n3. （Clinger, William D. How to read floating point numbers accurately. Vol. 25. No. 6. ACM, 1990）中的大整数算法。\n\n如果第一个转换方法失败，则尝试使用第二种方法，以此类推。\n\n# 生成优化 {#GenerationOptimization}\n\n## 整数到字符串的转换 {#itoa}\n\n整数到字符串转换的朴素算法需要对每一个十进制位进行一次除法。我们实现了若干版本并在 [itoa-benchmark](https://github.com/miloyip/itoa-benchmark) 中对它们进行了评估。\n\n虽然 SSE2 版本是最快的，但它和第二快的 `branchlut` 差距不大。而且 `branchlut` 是纯 C++ 实现，所以我们在 RapidJSON 中使用了 `branchlut`。\n\n## 双精度浮点数到字符串的转换 {#dtoa}\n\n原来 RapidJSON 使用 `snprintf(..., ..., \"%g\")` 来进行双精度浮点数到字符串的转换。这是不准确的，因为默认的精度是6。随后我们发现它很缓慢，而且有其它的替代品。\n\nGoogle 的 V8 [double-conversion](https://github.com/floitsch/double-conversion) 实现了更新的、快速的被称为 Grisu3 的算法（Loitsch, Florian. 
\"Printing floating-point numbers quickly and accurately with integers.\" ACM Sigplan Notices 45.6 (2010): 233-243.）。\n\n然而，这个实现不是仅头文件的，所以我们实现了一个仅头文件的 Grisu2 版本。这个算法保证了结果永远精确。而且在大多数情况下，它会生成最短的（最优）字符串表示。\n\n这个仅头文件的转换函数在 [dtoa-benchmark](https://github.com/miloyip/dtoa-benchmark) 中进行评估。\n\n# 解析器 {#Parser}\n\n## 迭代解析 {#IterativeParser}\n\n迭代解析器是一个以非递归方式实现的递归下降 LL(1) 解析器。\n\n### 语法 {#IterativeParserGrammar}\n\n解析器使用的语法是基于严格 JSON 语法的：\n~~~~~~~~~~\nS -> array | object\narray -> [ values ]\nobject -> { members }\nvalues -> non-empty-values | ε\nnon-empty-values -> value addition-values\naddition-values -> ε | , non-empty-values\nmembers -> non-empty-members | ε\nnon-empty-members -> member addition-members\naddition-members -> ε | , non-empty-members\nmember -> STRING : value\nvalue -> STRING | NUMBER | NULL | BOOLEAN | object | array\n~~~~~~~~~~\n\n注意，为了保证语法是 LL(1) 的，这里对非终结符 `values` 和 `members` 进行了左因子提取。\n\n### 解析表 {#IterativeParserParsingTable}\n\n基于这份语法，我们可以构造 FIRST 和 FOLLOW 集合。\n\n非终结符的 FIRST 集合如下所示：\n\n|    NON-TERMINAL   |               FIRST              |\n|:-----------------:|:--------------------------------:|\n|       array       |                 [                |\n|       object      |                 {                |\n|       values      | ε STRING NUMBER NULL BOOLEAN { [ |\n|  addition-values  |              ε COMMA             |\n|      members      |             ε STRING             |\n|  addition-members |              ε COMMA             |\n|       member      |              STRING              |\n|       value       |  STRING NUMBER NULL BOOLEAN { [  |\n|         S         |                [ {               |\n| non-empty-members |              STRING              |\n|  non-empty-values |  STRING NUMBER NULL BOOLEAN { [  |\n\nFOLLOW 集合如下所示：\n\n|    NON-TERMINAL   |  FOLLOW |\n|:-----------------:|:-------:|\n|         S         |    $    |\n|       array       | , $ } ] |\n|       object      | , $ } ] |\n|       values      |    ]    |\n|  
non-empty-values |    ]    |\n|  addition-values  |    ]    |\n|      members      |    }    |\n| non-empty-members |    }    |\n|  addition-members |    }    |\n|       member      |   , }   |\n|       value       |  , } ]  |\n\n最终可以从 FIRST 和 FOLLOW 集合生成解析表：\n\n|    NON-TERMINAL   |           [           |           {           |          ,          | : | ] | } |          STRING         |         NUMBER        |          NULL         |        BOOLEAN        |\n|:-----------------:|:---------------------:|:---------------------:|:-------------------:|:-:|:-:|:-:|:-----------------------:|:---------------------:|:---------------------:|:---------------------:|\n|         S         |         array         |         object        |                     |   |   |   |                         |                       |                       |                       |\n|       array       |       [ values ]      |                       |                     |   |   |   |                         |                       |                       |                       |\n|       object      |                       |      { members }      |                     |   |   |   |                         |                       |                       |                       |\n|       values      |    non-empty-values   |    non-empty-values   |                     |   | ε |   |     non-empty-values    |    non-empty-values   |    non-empty-values   |    non-empty-values   |\n|  non-empty-values | value addition-values | value addition-values |                     |   |   |   |  value addition-values  | value addition-values | value addition-values | value addition-values |\n|  addition-values  |                       |                       |  , non-empty-values |   | ε |   |                         |                       |                       |                       |\n|      members      |                       |                       |                     |   |   | ε |    
non-empty-members    |                       |                       |                       |\n| non-empty-members |                       |                       |                     |   |   |   | member addition-members |                       |                       |                       |\n|  addition-members |                       |                       | , non-empty-members |   |   | ε |                         |                       |                       |                       |\n|       member      |                       |                       |                     |   |   |   |      STRING : value     |                       |                       |                       |\n|       value       |         array         |         object        |                     |   |   |   |          STRING         |         NUMBER        |          NULL         |        BOOLEAN        |\n\n对于上面的语法分析，这里有一个很棒的[工具](http://hackingoff.com/compilers/predict-first-follow-set)。\n\n### 实现 {#IterativeParserImplementation}\n\n基于这份解析表，一个将规则主体逆序入栈的直接（常规）实现就可以正常工作。\n\n在 RapidJSON 中，对直接的实现进行了一些修改：\n\n首先，在 RapidJSON 中，这份解析表被编码为状态机。\n规则由头部和主体组成。\n状态转换由规则构造。\n除此之外，额外的状态被添加到与 `array` 和 `object` 有关的规则。\n通过这种方式，生成数组值或对象成员只需一次状态转移便可完成，\n而不需要直接实现中的多次出栈/入栈操作。\n这也使得估计栈的大小更加容易。\n\n状态图如下所示：\n\n![状态图](diagram/iterative-parser-states-diagram.png)\n\n其次，迭代解析器也在内部栈保存了数组的值个数和对象成员的数量，这也与传统的实现不同。\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/misc/DoxygenLayout.xml",
    "content": "<doxygenlayout version=\"1.0\">\n  <!-- Generated by doxygen 1.8.7 -->\n  <!-- Navigation index tabs for HTML output -->\n  <navindex>\n    <tab type=\"mainpage\" visible=\"yes\" title=\"\"/>\n    <tab type=\"pages\" visible=\"yes\" title=\"\" intro=\"\"/>\n    <tab type=\"modules\" visible=\"yes\" title=\"\" intro=\"\"/>\n    <tab type=\"namespaces\" visible=\"yes\" title=\"\">\n      <tab type=\"namespacelist\" visible=\"yes\" title=\"\" intro=\"\"/>\n      <tab type=\"namespacemembers\" visible=\"yes\" title=\"\" intro=\"\"/>\n    </tab>\n    <tab type=\"classes\" visible=\"yes\" title=\"\">\n      <tab type=\"classlist\" visible=\"yes\" title=\"\" intro=\"\"/>\n      <tab type=\"classindex\" visible=\"$ALPHABETICAL_INDEX\" title=\"\"/> \n      <tab type=\"hierarchy\" visible=\"yes\" title=\"\" intro=\"\"/>\n      <tab type=\"classmembers\" visible=\"yes\" title=\"\" intro=\"\"/>\n    </tab>\n    <tab type=\"files\" visible=\"yes\" title=\"\">\n      <tab type=\"filelist\" visible=\"yes\" title=\"\" intro=\"\"/>\n      <tab type=\"globals\" visible=\"yes\" title=\"\" intro=\"\"/>\n    </tab>\n    <tab type=\"examples\" visible=\"yes\" title=\"\" intro=\"\"/>  \n  </navindex>\n\n  <!-- Layout definition for a class page -->\n  <class>\n    <briefdescription visible=\"yes\"/>\n    <includes visible=\"$SHOW_INCLUDE_FILES\"/>\n    <inheritancegraph visible=\"$CLASS_GRAPH\"/>\n    <collaborationgraph visible=\"$COLLABORATION_GRAPH\"/>\n    <memberdecl>\n      <nestedclasses visible=\"yes\" title=\"\"/>\n      <publictypes title=\"\"/>\n      <services title=\"\"/>\n      <interfaces title=\"\"/>\n      <publicslots title=\"\"/>\n      <signals title=\"\"/>\n      <publicmethods title=\"\"/>\n      <publicstaticmethods title=\"\"/>\n      <publicattributes title=\"\"/>\n      <publicstaticattributes title=\"\"/>\n      <protectedtypes title=\"\"/>\n      <protectedslots title=\"\"/>\n      <protectedmethods title=\"\"/>\n      <protectedstaticmethods 
title=\"\"/>\n      <protectedattributes title=\"\"/>\n      <protectedstaticattributes title=\"\"/>\n      <packagetypes title=\"\"/>\n      <packagemethods title=\"\"/>\n      <packagestaticmethods title=\"\"/>\n      <packageattributes title=\"\"/>\n      <packagestaticattributes title=\"\"/>\n      <properties title=\"\"/>\n      <events title=\"\"/>\n      <privatetypes title=\"\"/>\n      <privateslots title=\"\"/>\n      <privatemethods title=\"\"/>\n      <privatestaticmethods title=\"\"/>\n      <privateattributes title=\"\"/>\n      <privatestaticattributes title=\"\"/>\n      <friends title=\"\"/>\n      <related title=\"\" subtitle=\"\"/>\n      <membergroups visible=\"yes\"/>\n    </memberdecl>\n    <detaileddescription title=\"\"/>\n    <memberdef>\n      <inlineclasses title=\"\"/>\n      <typedefs title=\"\"/>\n      <enums title=\"\"/>\n      <services title=\"\"/>\n      <interfaces title=\"\"/>\n      <constructors title=\"\"/>\n      <functions title=\"\"/>\n      <related title=\"\"/>\n      <variables title=\"\"/>\n      <properties title=\"\"/>\n      <events title=\"\"/>\n    </memberdef>\n    <allmemberslink visible=\"yes\"/>\n    <usedfiles visible=\"$SHOW_USED_FILES\"/>\n    <authorsection visible=\"yes\"/>\n  </class>\n\n  <!-- Layout definition for a namespace page -->\n  <namespace>\n    <briefdescription visible=\"yes\"/>\n    <memberdecl>\n      <nestednamespaces visible=\"yes\" title=\"\"/>\n      <constantgroups visible=\"yes\" title=\"\"/>\n      <classes visible=\"yes\" title=\"\"/>\n      <typedefs title=\"\"/>\n      <enums title=\"\"/>\n      <functions title=\"\"/>\n      <variables title=\"\"/>\n      <membergroups visible=\"yes\"/>\n    </memberdecl>\n    <detaileddescription title=\"\"/>\n    <memberdef>\n      <inlineclasses title=\"\"/>\n      <typedefs title=\"\"/>\n      <enums title=\"\"/>\n      <functions title=\"\"/>\n      <variables title=\"\"/>\n    </memberdef>\n    <authorsection visible=\"yes\"/>\n  
</namespace>\n\n  <!-- Layout definition for a file page -->\n  <file>\n    <briefdescription visible=\"yes\"/>\n    <includes visible=\"$SHOW_INCLUDE_FILES\"/>\n    <includegraph visible=\"$INCLUDE_GRAPH\"/>\n    <includedbygraph visible=\"$INCLUDED_BY_GRAPH\"/>\n    <sourcelink visible=\"yes\"/>\n    <memberdecl>\n      <classes visible=\"yes\" title=\"\"/>\n      <namespaces visible=\"yes\" title=\"\"/>\n      <constantgroups visible=\"yes\" title=\"\"/>\n      <defines title=\"\"/>\n      <typedefs title=\"\"/>\n      <enums title=\"\"/>\n      <functions title=\"\"/>\n      <variables title=\"\"/>\n      <membergroups visible=\"yes\"/>\n    </memberdecl>\n    <detaileddescription title=\"\"/>\n    <memberdef>\n      <inlineclasses title=\"\"/>\n      <defines title=\"\"/>\n      <typedefs title=\"\"/>\n      <enums title=\"\"/>\n      <functions title=\"\"/>\n      <variables title=\"\"/>\n    </memberdef>\n    <authorsection/>\n  </file>\n\n  <!-- Layout definition for a group page -->\n  <group>\n    <briefdescription visible=\"yes\"/>\n    <groupgraph visible=\"$GROUP_GRAPHS\"/>\n    <memberdecl>\n      <nestedgroups visible=\"yes\" title=\"\"/>\n      <dirs visible=\"yes\" title=\"\"/>\n      <files visible=\"yes\" title=\"\"/>\n      <namespaces visible=\"yes\" title=\"\"/>\n      <classes visible=\"yes\" title=\"\"/>\n      <defines title=\"\"/>\n      <typedefs title=\"\"/>\n      <enums title=\"\"/>\n      <enumvalues title=\"\"/>\n      <functions title=\"\"/>\n      <variables title=\"\"/>\n      <signals title=\"\"/>\n      <publicslots title=\"\"/>\n      <protectedslots title=\"\"/>\n      <privateslots title=\"\"/>\n      <events title=\"\"/>\n      <properties title=\"\"/>\n      <friends title=\"\"/>\n      <membergroups visible=\"yes\"/>\n    </memberdecl>\n    <detaileddescription title=\"\"/>\n    <memberdef>\n      <pagedocs/>\n      <inlineclasses title=\"\"/>\n      <defines title=\"\"/>\n      <typedefs title=\"\"/>\n      <enums 
title=\"\"/>\n      <enumvalues title=\"\"/>\n      <functions title=\"\"/>\n      <variables title=\"\"/>\n      <signals title=\"\"/>\n      <publicslots title=\"\"/>\n      <protectedslots title=\"\"/>\n      <privateslots title=\"\"/>\n      <events title=\"\"/>\n      <properties title=\"\"/>\n      <friends title=\"\"/>\n    </memberdef>\n    <authorsection visible=\"yes\"/>\n  </group>\n\n  <!-- Layout definition for a directory page -->\n  <directory>\n    <briefdescription visible=\"yes\"/>\n    <directorygraph visible=\"yes\"/>\n    <memberdecl>\n      <dirs visible=\"yes\"/>\n      <files visible=\"yes\"/>\n    </memberdecl>\n    <detaileddescription title=\"\"/>\n  </directory>\n</doxygenlayout>\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/misc/doxygenextra.css",
    "content": "body code {\n\tmargin: 0;\n\tborder: 1px solid #ddd;\n\tbackground-color: #f8f8f8;\n\tborder-radius: 3px;\n\tpadding: 0;\n}\n\na {\n\tcolor: #4183c4;\n}\n\na.el {\n\tfont-weight: normal;\n}\n\nbody, table, div, p, dl {\n\tcolor: #333333;\n\tfont-family: Helvetica, arial, freesans, clean, sans-serif, 'Segoe UI Emoji', 'Segoe UI Symbol';\n\tfont-size: 15px;\n\tfont-style: normal;\n\tfont-variant: normal;\n\tfont-weight: normal;\n\tline-height: 25.5px;\n}\n\nbody {\n\tbackground-color: #eee;\n}\n\ndiv.header {\n\tbackground-image: none;\n\tbackground-color: white;\n\tmargin: 0px;\n\tborder: 0px;\n}\n\ndiv.headertitle {\n\twidth: 858px;\n\tmargin: 30px;\n\tpadding: 0px;\n}\n\ndiv.toc {\n\tbackground-color: #f8f8f8;\n\tborder-color: #ddd;\n\tmargin-right: 10px;\n\tmargin-left: 20px;\n}\ndiv.toc h3 {\n\tcolor: #333333;\n\tfont-family: Helvetica, arial, freesans, clean, sans-serif, 'Segoe UI Emoji', 'Segoe UI Symbol';\n\tfont-size: 18px;\n\tfont-style: normal;\n\tfont-variant: normal;\n\tfont-weight: normal;\n}\ndiv.toc li {\n\tcolor: #333333;\n\tfont-family: Helvetica, arial, freesans, clean, sans-serif, 'Segoe UI Emoji', 'Segoe UI Symbol';\n\tfont-size: 12px;\n\tfont-style: normal;\n\tfont-variant: normal;\n\tfont-weight: normal;\n}\n\n.title {\n\tfont-size: 2.5em;\n\tline-height: 63.75px;\n\tborder-bottom: 1px solid #ddd;\n\tmargin-bottom: 15px;\n\tmargin-left: 0px;\n\tmargin-right: 0px;\n\tmargin-top: 0px;\n}\n\n.summary {\n\tfloat: none !important;\n\twidth: auto !important;\n\tpadding-top: 10px;\n\tpadding-right: 10px !important;\n}\n\n.summary + .headertitle .title {\n\tfont-size: 1.5em;\n\tline-height: 2.0em;\n}\n\nbody h1 {\n\tfont-size: 2em;\n\tline-height: 1.7;\n\tborder-bottom: 1px solid #eee;\n\tmargin: 1em 0 15px;\n\tpadding: 0;\n\toverflow: hidden;\n}\n\nbody h2 {\n\tfont-size: 1.5em;\n\tline-height: 1.7;\n\tmargin: 1em 0 15px;\n\tpadding: 0;\n}\n\npre.fragment {\n\tfont-family: Consolas, 'Liberation Mono', Menlo, Courier, 
monospace;\n\tfont-size: 13px;\n\tfont-style: normal;\n\tfont-variant: normal;\n\tfont-weight: normal;\n\tline-height: 19px;\n}\n\ntable.doxtable th {\n\tbackground-color: #f8f8f8;\n\tcolor: #333333;\n\tfont-size: 15px;\n}\n\ntable.doxtable td, table.doxtable th {\n\tborder: 1px solid #ddd;\n}\n\n#doc-content {\n\tbackground-color: #fff;\n\twidth: 918px;\n\theight: auto !important;\n\tmargin-left: 270px !important;\n}\n\ndiv.contents {\n\twidth: 858px;\n\tmargin: 30px;\n}\n\ndiv.line {\n\tfont-family: Consolas, 'Liberation Mono', Menlo, Courier, monospace;\n\tfont-size: 13px;\n\tfont-style: normal;\n\tfont-variant: normal;\n\tfont-weight: normal;\n\tline-height: 19px;\t\n}\n\ntt, code, pre {\n\tfont-family: Consolas, \"Liberation Mono\", Menlo, Courier, monospace;\n\tfont-size: 12px;\n}\n\ndiv.fragment {\n\tbackground-color: #f8f8f8;\n\tborder: 1px solid #ddd;\n\tfont-size: 13px;\n\tline-height: 19px;\n\toverflow: auto;\n\tpadding: 6px 10px;\n\tborder-radius: 3px;\n}\n\n#topbanner {\n\tposition: fixed;\n\tmargin: 15px;\n\tz-index: 101;\n}\n\n#projectname\n{\n\tfont-family: Helvetica, arial, freesans, clean, sans-serif, 'Segoe UI Emoji', 'Segoe UI Symbol';\n\tfont-size: 38px;\n\tfont-weight: bold;\n\tline-height: 63.75px;\n\tmargin: 0px;\n\tpadding: 2px 0px;\n}\n    \n#projectbrief\n{\n\tfont-family: Helvetica, arial, freesans, clean, sans-serif, 'Segoe UI Emoji', 'Segoe UI Symbol';\n\tfont-size: 16px;\n\tline-height: 22.4px;\n\tmargin: 0px 0px 13px 0px;\n\tpadding: 2px;\n}\n\n/* side bar and search */\n\n#side-nav\n{\n\tpadding: 10px 0px 20px 20px;\n\tborder-top: 60px solid #2980b9;\n\tbackground-color: #343131;\n\twidth: 250px !important;\n\theight: 100% !important;\n\tposition: fixed;\n}\n\n#nav-tree\n{\n\tbackground-color: transparent;\n\tbackground-image: none;\n\theight: 100% !important;\n}\n\n#nav-tree .label\n{\n\tfont-family: Helvetica, arial, freesans, clean, sans-serif, 'Segoe UI Emoji', 'Segoe UI Symbol';\n\tline-height: 25.5px;\t\n\tfont-size: 
15px;\n}\n\n#nav-tree\n{\n\tcolor: #b3b3b3;\n}\n\n#nav-tree .selected {\n\tbackground-image: none;\n}\n\n#nav-tree a\n{\n\tcolor: #b3b3b3;\n}\n\n#github\n{\n\tposition: fixed;\n\tleft: auto;\n\tright: auto;\n\twidth: 250px;\n}\n\n#MSearchBox\n{\n\tmargin: 20px;\n\tleft: 40px;\n\tright: auto;\n\tposition: fixed;\n\twidth: 180px;\n}\n\n#MSearchField\n{\n\twidth: 121px;\n}\n\n#MSearchResultsWindow\n{\n\tleft: 45px !important;\n}\n\n#nav-sync\n{\n\tdisplay: none;\n}\n\n.ui-resizable .ui-resizable-handle\n{\n\twidth: 0px;\n}\n\n#nav-path\n{\n\tdisplay: none;\n}\n\n/* external link icon */\ndiv.contents a[href ^= \"http\"]:after {\n     content: \" \" url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAVklEQVR4Xn3PgQkAMQhDUXfqTu7kTtkpd5RA8AInfArtQ2iRXFWT2QedAfttj2FsPIOE1eCOlEuoWWjgzYaB/IkeGOrxXhqB+uA9Bfcm0lAZuh+YIeAD+cAqSz4kCMUAAAAASUVORK5CYII=);\n}\n\n.githublogo {\n\tcontent: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyRpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuNS1jMDIxIDc5LjE1NDkxMSwgMjAxMy8xMC8yOS0xMTo0NzoxNiAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8iIHhtcE1NOkRvY3VtZW50SUQ9InhtcC5kaWQ6RERCMUIwOUY4NkNFMTFFM0FBNTJFRTMzNTJEMUJDNDYiIHhtcE1NOkluc3RhbmNlSUQ9InhtcC5paWQ6RERCMUIwOUU4NkNFMTFFM0FBNTJFRTMzNTJEMUJDNDYiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENTNiAoTWFjaW50b3NoKSI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOkU1MTc4QTJBOTlBMDExRTI5QTE1QkMxMDQ2QTg5MDREIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOkU1MTc4QTJCOTlBMDExRTI
5QTE1QkMxMDQ2QTg5MDREIi8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+jUqS1wAAApVJREFUeNq0l89rE1EQx3e3gVJoSPzZeNEWPKgHoa0HBak0iHiy/4C3WvDmoZ56qJ7txVsPQu8qlqqHIhRKJZceesmhioQEfxTEtsoSpdJg1u/ABJ7Pmc1m8zLwgWTmzcw3L+/te+tHUeQltONgCkyCi2AEDHLsJ6iBMlgHL8FeoqokoA2j4CloRMmtwTmj7erHBXPgCWhG6a3JNXKdCiDl1cidVbXZkJoXQRi5t5BrxwoY71FzU8S4JuAIqFkJ2+BFSlEh525b/hr3+k/AklDkNsf6wTT4yv46KIMNpsy+iMdMc47HNWxbsgVcUn7FmLAzzoFAWDsBx+wVP6bUpp5ewI+DOeUx0Wd9D8F70BTGNjkWtqnhmT1JQAHcUgZd8Lo3rQb1LAT8eJVUfgGvHQigGp+V2Z0iAUUl8QH47kAA1XioxIo+bRN8OG8F/oBjwv+Z1nJgX5jpdzQDw0LCjsPmrcW7I/iHScCAEDj03FtD8A0EyuChHgg4KTlJQF3wZ7WELppnBX+dBFSVpJsOBWi1qiRgSwnOgoyD5hmuJdkWCVhTgnTvW3AgYIFrSbZGh0UW/Io5Vp+DQoK7o80pztWMemZbgxeNwCNwDbw1fIfgGZjhU6xPaJgBV8BdsMw5cbZoHsenwYFxkZzl83xTSKTiviCAfCsJLysH3POfC8m8NegyGAGfLP/VmGmfSChgXroR0RSWjEFv2J/nG84cuKFMf4sTCZqXuJd4KaXFVjEG3+tw4eXbNK/YC9oXXs3O8NY8y99L4BXY5cvLY/Bb2VZ58EOJVcB18DHJq9lRsKr8inyKGVjlmh29mtHs3AHfuhCwy1vXT/Nu2GKQt+UHsGdctyX6eQyNvc+5sfX9Dl7Pe2J/BRgAl2CpwmrsHR0AAAAASUVORK5CYII=);\n}"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/misc/footer.html",
    "content": "<!-- HTML footer for doxygen 1.8.7-->\n<!-- start footer part -->\n<!--BEGIN GENERATE_TREEVIEW-->\n<div id=\"nav-path\" class=\"navpath\"><!-- id is needed for treeview function! -->\n  <ul>\n    $navpath\n  </ul>\n</div>\n<!--END GENERATE_TREEVIEW-->\n</body>\n</html>\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/misc/header.html",
    "content": "<!-- HTML header for doxygen 1.8.7-->\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\n<head>\n<meta http-equiv=\"Content-Type\" content=\"text/xhtml;charset=UTF-8\"/>\n<meta http-equiv=\"X-UA-Compatible\" content=\"IE=9\"/>\n<meta name=\"generator\" content=\"Doxygen $doxygenversion\"/>\n<!--BEGIN PROJECT_NAME--><title>$projectname: $title</title><!--END PROJECT_NAME-->\n<!--BEGIN !PROJECT_NAME--><title>$title</title><!--END !PROJECT_NAME-->\n<link href=\"$relpath^tabs.css\" rel=\"stylesheet\" type=\"text/css\"/>\n<script type=\"text/javascript\" src=\"$relpath^jquery.js\"></script>\n<script type=\"text/javascript\" src=\"$relpath^dynsections.js\"></script>\n$treeview\n$search\n$mathjax\n<link href=\"$relpath^$stylesheet\" rel=\"stylesheet\" type=\"text/css\" />\n$extrastylesheet\n</head>\n<body>\n<div id=\"top\"><!-- do not remove this div, it is closed by doxygen! -->\n<div id=\"topbanner\"><a href=\"https://github.com/Tencent/rapidjson\" title=\"RapidJSON GitHub\"><i class=\"githublogo\"></i></a></div>\n$searchbox\n<!--END TITLEAREA-->\n<!-- end header part -->\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/npm.md",
    "content": "## NPM\n\n# package.json {#package}\n\n~~~~~~~~~~js\n{\n  ...\n  \"dependencies\": {\n    ...\n    \"rapidjson\": \"git@github.com:Tencent/rapidjson.git\"\n  },\n  ...\n  \"gypfile\": true\n}\n~~~~~~~~~~\n\n# binding.gyp {#binding}\n\n~~~~~~~~~~js\n{\n  ...\n  'targets': [\n    {\n      ...\n      'include_dirs': [\n        '<!(node -e \\'require(\"rapidjson\")\\')'\n      ]\n    }\n  ]\n}\n~~~~~~~~~~\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/performance.md",
"content": "# Performance\n\nThere is a [native JSON benchmark collection][1] which evaluates speed, memory usage and code size of various operations among 37 JSON libraries.\n\n[1]: https://github.com/miloyip/nativejson-benchmark\n\nThe old performance article for RapidJSON 0.1 is provided [here](https://code.google.com/p/rapidjson/wiki/Performance).\n\nAdditionally, you may refer to the following third-party benchmarks.\n\n## Third-party benchmarks\n\n* [Basic benchmarks for miscellaneous C++ JSON parsers and generators](https://github.com/mloskot/json_benchmark) by Mateusz Loskot (Jun 2013)\n * [casablanca](https://casablanca.codeplex.com/)\n * [json_spirit](https://github.com/cierelabs/json_spirit)\n * [jsoncpp](http://jsoncpp.sourceforge.net/)\n * [libjson](http://sourceforge.net/projects/libjson/)\n * [rapidjson](https://github.com/Tencent/rapidjson/)\n * [QJsonDocument](http://qt-project.org/doc/qt-5.0/qtcore/qjsondocument.html)\n \n* [JSON Parser Benchmarking](http://chadaustin.me/2013/01/json-parser-benchmarking/) by Chad Austin (Jan 2013)\n * [sajson](https://github.com/chadaustin/sajson)\n * [rapidjson](https://github.com/Tencent/rapidjson/)\n * [vjson](https://code.google.com/p/vjson/)\n * [YAJL](http://lloyd.github.com/yajl/)\n * [Jansson](http://www.digip.org/jansson/)\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/performance.zh-cn.md",
    "content": "# 性能\n\n有一个 [native JSON benchmark collection][1] 项目，能评估 37 个 JSON 库在不同操作下的速度、內存用量及代码大小。\n\n[1]: https://github.com/miloyip/nativejson-benchmark\n\nRapidJSON 0.1 版本的性能测试文章位于 [这里](https://code.google.com/p/rapidjson/wiki/Performance).\n\n此外，你也可以参考以下这些第三方的评测。\n\n## 第三方评测\n\n* [Basic benchmarks for miscellaneous C++ JSON parsers and generators](https://github.com/mloskot/json_benchmark) by Mateusz Loskot (Jun 2013)\n * [casablanca](https://casablanca.codeplex.com/)\n * [json_spirit](https://github.com/cierelabs/json_spirit)\n * [jsoncpp](http://jsoncpp.sourceforge.net/)\n * [libjson](http://sourceforge.net/projects/libjson/)\n * [rapidjson](https://github.com/Tencent/rapidjson/)\n * [QJsonDocument](http://qt-project.org/doc/qt-5.0/qtcore/qjsondocument.html)\n \n* [JSON Parser Benchmarking](http://chadaustin.me/2013/01/json-parser-benchmarking/) by Chad Austin (Jan 2013)\n * [sajson](https://github.com/chadaustin/sajson)\n * [rapidjson](https://github.com/Tencent/rapidjson/)\n * [vjson](https://code.google.com/p/vjson/)\n * [YAJL](http://lloyd.github.com/yajl/)\n * [Jansson](http://www.digip.org/jansson/)\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/pointer.md",
"content": "# Pointer\n\n(This feature was released in v1.1.0)\n\nJSON Pointer is a standardized ([RFC6901]) way to select a value inside a JSON Document (DOM). It is analogous to XPath for XML documents. However, JSON Pointer is much simpler, and a single JSON Pointer can only point to a single value.\n\nUsing RapidJSON's implementation of JSON Pointer can simplify some manipulations of the DOM.\n\n[TOC]\n\n# JSON Pointer {#JsonPointer}\n\nA JSON Pointer is a list of zero-to-many tokens, each prefixed by `/`. Each token can be a string or a number. For example, given a JSON:\n~~~javascript\n{\n    \"foo\" : [\"bar\", \"baz\"],\n    \"pi\" : 3.1416\n}\n~~~\n\nThe following JSON Pointers resolve this JSON as:\n\n1. `\"/foo\"` → `[ \"bar\", \"baz\" ]`\n2. `\"/foo/0\"` → `\"bar\"`\n3. `\"/foo/1\"` → `\"baz\"`\n4. `\"/pi\"` → `3.1416`\n\nNote that an empty JSON Pointer `\"\"` (zero tokens) resolves to the whole JSON.\n\n# Basic Usage {#BasicUsage}\n\nThe following example code is self-explanatory.\n\n~~~cpp\n#include \"rapidjson/pointer.h\"\n\n// ...\nDocument d;\n\n// Create DOM by Set()\nPointer(\"/project\").Set(d, \"RapidJSON\");\nPointer(\"/stars\").Set(d, 10);\n\n// { \"project\" : \"RapidJSON\", \"stars\" : 10 }\n\n// Access DOM by Get(). It returns nullptr if the value does not exist.\nif (Value* stars = Pointer(\"/stars\").Get(d))\n    stars->SetInt(stars->GetInt() + 1);\n\n// { \"project\" : \"RapidJSON\", \"stars\" : 11 }\n\n// Set() and Create() automatically generate parents if they do not exist.\nPointer(\"/a/b/0\").Create(d);\n\n// { \"project\" : \"RapidJSON\", \"stars\" : 11, \"a\" : { \"b\" : [ null ] } }\n\n// GetWithDefault() returns a reference. 
And it deep clones the default value.\nValue& hello = Pointer(\"/hello\").GetWithDefault(d, \"world\");\n\n// { \"project\" : \"RapidJSON\", \"stars\" : 11, \"a\" : { \"b\" : [ null ] }, \"hello\" : \"world\" }\n\n// Swap() is similar to Set()\nValue x(\"C++\");\nPointer(\"/hello\").Swap(d, x);\n\n// { \"project\" : \"RapidJSON\", \"stars\" : 11, \"a\" : { \"b\" : [ null ] }, \"hello\" : \"C++\" }\n// x becomes \"world\"\n\n// Erase a member or element, return true if the value exists\nbool success = Pointer(\"/a\").Erase(d);\nassert(success);\n\n// { \"project\" : \"RapidJSON\", \"stars\" : 11, \"hello\" : \"C++\" }\n~~~\n\n# Helper Functions {#HelperFunctions}\n\nSince the object-oriented calling convention may be unintuitive, RapidJSON also provides helper functions, which wrap the member functions as free functions.\n\nThe following example does exactly the same as the above one.\n\n~~~cpp\nDocument d;\n\nSetValueByPointer(d, \"/project\", \"RapidJSON\");\nSetValueByPointer(d, \"/stars\", 10);\n\nif (Value* stars = GetValueByPointer(d, \"/stars\"))\n    stars->SetInt(stars->GetInt() + 1);\n\nCreateValueByPointer(d, \"/a/b/0\");\n\nValue& hello = GetValueByPointerWithDefault(d, \"/hello\", \"world\");\n\nValue x(\"C++\");\nSwapValueByPointer(d, \"/hello\", x);\n\nbool success = EraseValueByPointer(d, \"/a\");\nassert(success);\n~~~\n\nThe conventions are shown here for comparison:\n\n1. `Pointer(source).<Method>(root, ...)`\n2. `<Method>ValueByPointer(root, Pointer(source), ...)`\n3. `<Method>ValueByPointer(root, source, ...)`\n\n# Resolving Pointer {#ResolvingPointer}\n\n`Pointer::Get()` or `GetValueByPointer()` does not modify the DOM. If the tokens cannot match a value in the DOM, it returns `nullptr`. Users can use this to check whether a value exists.\n\nNote that a numerical token can represent an array index or a member name. 
The resolving process matches values according to their types.\n\n~~~javascript\n{\n    \"0\" : 123,\n    \"1\" : [456]\n}\n~~~\n\n1. `\"/0\"` → `123`\n2. `\"/1/0\"` → `456`\n\nThe token `\"0\"` is treated as a member name in the first pointer and as an array index in the second.\n\nThe other functions, including `Create()`, `GetWithDefault()`, `Set()` and `Swap()`, will change the DOM. These functions always succeed. They will create the parent values if they do not exist. If the parent values do not match the tokens, they will also be forced to change their type. Changing the type also means full removal of that DOM subtree.\n\nAfter parsing the above JSON into `d`,\n\n~~~cpp\nSetValueByPointer(d, \"/1/a\", 789); // { \"0\" : 123, \"1\" : { \"a\" : 789 } }\n~~~\n\n## Resolving Minus Sign Token\n\nIn addition, [RFC6901] defines a special token `-` (a single minus sign), which represents the past-the-end element of an array. `Get()` only treats this token as the member name `\"-\"`. The other functions, however, resolve it for arrays, equivalent to calling `Value::PushBack()` on the array.\n\n~~~cpp\nDocument d;\nd.Parse(\"{\\\"foo\\\":[123]}\");\nSetValueByPointer(d, \"/foo/-\", 456); // { \"foo\" : [123, 456] }\nSetValueByPointer(d, \"/-\", 789);    // { \"foo\" : [123, 456], \"-\" : 789 }\n~~~\n\n## Resolving Document and Value\n\nWhen using `p.Get(root)` or `GetValueByPointer(root, p)`, `root` is a (const) `Value&`. That means it can be a subtree of the DOM.\n\nThe other functions have two groups of signatures. One group uses `Document& document` as a parameter, the other uses `Value& root`. The first group uses `document.GetAllocator()` for creating values, while the second group requires the user to supply an allocator, like the functions in DOM.\n\nAll examples above do not require an allocator parameter, because the first parameter is a `Document&`. 
But if you want to resolve a pointer to a subtree, you need to supply the allocator as in the following example:\n\n~~~cpp\nclass Person {\npublic:\n    Person() {\n        document_ = new Document();\n        // CreateValueByPointer() does not need an allocator here\n        SetLocation(CreateValueByPointer(*document_, \"/residence\"), ...);\n        SetLocation(CreateValueByPointer(*document_, \"/office\"), ...);\n    }\n\nprivate:\n    void SetLocation(Value& location, const char* country, const char* addresses[2]) {\n        Value::AllocatorType& a = document_->GetAllocator();\n        // SetValueByPointer() needs an allocator here\n        SetValueByPointer(location, \"/country\", country, a);\n        SetValueByPointer(location, \"/address/0\", addresses[0], a);\n        SetValueByPointer(location, \"/address/1\", addresses[1], a);\n    }\n\n    // ...\n\n    Document* document_;\n};\n~~~\n\n`Erase()` and `EraseValueByPointer()` do not need an allocator. They return `true` if the value is erased successfully.\n\n# Error Handling {#ErrorHandling}\n\nA `Pointer` parses its source string in its constructor. If there is a parsing error, `Pointer::IsValid()` returns `false`, and you can use `Pointer::GetParseErrorCode()` and `GetParseErrorOffset()` to retrieve the error information.\n\nNote that all resolving functions assume a valid pointer. Resolving with an invalid pointer causes an assertion failure.\n\n# URI Fragment Representation {#URIFragment}\n\nIn addition to the string representation of JSON pointer that we have been using so far, [RFC6901] also defines a URI fragment representation of JSON pointer. URI fragments are specified in [RFC3986] \"Uniform Resource Identifier (URI): Generic Syntax\".\n\nThe main differences are that the URI fragment always begins with a `#` (pound sign), and some characters are percent-encoded as UTF-8 sequences. 
For example, the following table shows the C/C++ string literals of the different representations.\n\nString Representation | URI Fragment Representation | Pointer Tokens (UTF-8)\n----------------------|-----------------------------|------------------------\n`\"/foo/0\"`            | `\"#/foo/0\"`                 | `{\"foo\", 0}`\n`\"/a~1b\"`             | `\"#/a~1b\"`                  | `{\"a/b\"}`\n`\"/m~0n\"`             | `\"#/m~0n\"`                  | `{\"m~n\"}`\n`\"/ \"`                | `\"#/%20\"`                   | `{\" \"}`\n`\"/\\0\"`               | `\"#/%00\"`                   | `{\"\\0\"}`\n`\"/€\"`                | `\"#/%E2%82%AC\"`             | `{\"€\"}`\n\nRapidJSON fully supports the URI fragment representation. It automatically detects the leading pound sign during parsing.\n\n# Stringify\n\nYou may also stringify a `Pointer` to a string or another output stream. This can be done by:\n\n~~~cpp\nPointer p(...);\nStringBuffer sb;\np.Stringify(sb);\nstd::cout << sb.GetString() << std::endl;\n~~~\n\nIt can also be stringified to the URI fragment representation with `StringifyUriFragment()`.\n\n# User-Supplied Tokens {#UserSuppliedTokens}\n\nIf a pointer will be resolved multiple times, it should be constructed once and then applied to different DOMs or at different times. 
This saves the time and memory allocation of constructing a `Pointer` multiple times.\n\nWe can go one step further: to completely eliminate the parsing process and dynamic memory allocation, we can construct the token array directly:\n\n~~~cpp\n#define NAME(s) { s, sizeof(s) / sizeof(s[0]) - 1, kPointerInvalidIndex }\n#define INDEX(i) { #i, sizeof(#i) - 1, i }\n\nstatic const Pointer::Token kTokens[] = { NAME(\"foo\"), INDEX(123) };\nstatic const Pointer p(kTokens, sizeof(kTokens) / sizeof(kTokens[0]));\n// Equivalent to static const Pointer p(\"/foo/123\");\n~~~\n\nThis may be useful for memory-constrained systems.\n\n[RFC3986]: https://tools.ietf.org/html/rfc3986\n[RFC6901]: https://tools.ietf.org/html/rfc6901\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/pointer.zh-cn.md",
    "content": "# Pointer\n\n（本功能于 v1.1.0 发布）\n\nJSON Pointer 是一个标准化（[RFC6901]）的方式去选取一个 JSON Document（DOM）中的值。这类似于 XML 的 XPath。然而，JSON Pointer 简单得多，而且每个 JSON Pointer 仅指向单个值。\n\n使用 RapidJSON 的 JSON Pointer 实现能简化一些 DOM 的操作。\n\n[TOC]\n\n# JSON Pointer {#JsonPointer}\n\n一个 JSON Pointer 由一串（零至多个）token 所组成，每个 token 都有 `/` 前缀。每个 token 可以是一个字符串或数字。例如，给定一个 JSON：\n~~~javascript\n{\n    \"foo\" : [\"bar\", \"baz\"],\n    \"pi\" : 3.1416\n}\n~~~\n\n以下的 JSON Pointer 解析为：\n\n1. `\"/foo\"` → `[ \"bar\", \"baz\" ]`\n2. `\"/foo/0\"` → `\"bar\"`\n3. `\"/foo/1\"` → `\"baz\"`\n4. `\"/pi\"` → `3.1416`\n\n要注意，一个空 JSON Pointer `\"\"` （零个 token）解析为整个 JSON。\n\n# 基本使用方法 {#BasicUsage}\n\n以下的代码范例不解自明。\n\n~~~cpp\n#include \"rapidjson/pointer.h\"\n\n// ...\nDocument d;\n\n// 使用 Set() 创建 DOM\nPointer(\"/project\").Set(d, \"RapidJSON\");\nPointer(\"/stars\").Set(d, 10);\n\n// { \"project\" : \"RapidJSON\", \"stars\" : 10 }\n\n// 使用 Get() 访问 DOM。若该值不存在则返回 nullptr。\nif (Value* stars = Pointer(\"/stars\").Get(d))\n    stars->SetInt(stars->GetInt() + 1);\n\n// { \"project\" : \"RapidJSON\", \"stars\" : 11 }\n\n// Set() 和 Create() 自动生成父值（如果它们不存在）。\nPointer(\"/a/b/0\").Create(d);\n\n// { \"project\" : \"RapidJSON\", \"stars\" : 11, \"a\" : { \"b\" : [ null ] } }\n\n// GetWithDefault() 返回引用。若该值不存在则会深拷贝缺省值。\nValue& hello = Pointer(\"/hello\").GetWithDefault(d, \"world\");\n\n// { \"project\" : \"RapidJSON\", \"stars\" : 11, \"a\" : { \"b\" : [ null ] }, \"hello\" : \"world\" }\n\n// Swap() 和 Set() 相似\nValue x(\"C++\");\nPointer(\"/hello\").Swap(d, x);\n\n// { \"project\" : \"RapidJSON\", \"stars\" : 11, \"a\" : { \"b\" : [ null ] }, \"hello\" : \"C++\" }\n// x 变成 \"world\"\n\n// 删去一个成员或元素，若值存在返回 true\nbool success = Pointer(\"/a\").Erase(d);\nassert(success);\n\n// { \"project\" : \"RapidJSON\", \"stars\" : 10 }\n~~~\n\n# 辅助函数 {#HelperFunctions}\n\n由于面向对象的调用习惯可能不符直觉，RapidJSON 也提供了一些辅助函数，它们把成员函数包装成自由函数。\n\n以下的例子与上面例子所做的事情完全相同。\n\n~~~cpp\nDocument d;\n\nSetValueByPointer(d, \"/project\", 
\"RapidJSON\");\nSetValueByPointer(d, \"/stars\", 10);\n\nif (Value* stars = GetValueByPointer(d, \"/stars\"))\n    stars->SetInt(stars->GetInt() + 1);\n\nCreateValueByPointer(d, \"/a/b/0\");\n\nValue& hello = GetValueByPointerWithDefault(d, \"/hello\", \"world\");\n\nValue x(\"C++\");\nSwapValueByPointer(d, \"/hello\", x);\n\nbool success = EraseValueByPointer(d, \"/a\");\nassert(success);\n~~~\n\n以下对比 3 种调用方式：\n\n1. `Pointer(source).<Method>(root, ...)`\n2. `<Method>ValueByPointer(root, Pointer(source), ...)`\n3. `<Method>ValueByPointer(root, source, ...)`\n\n# 解析 Pointer {#ResolvingPointer}\n\n`Pointer::Get()` 或 `GetValueByPointer()` 函数并不修改 DOM。若那些 token 不能匹配 DOM 里的值，这些函数便返回 `nullptr`。使用者可利用这个方法来检查一个值是否存在。\n\n注意，数值 token 可表示数组索引或成员名字。解析过程中会按值的类型来匹配。\n\n~~~javascript\n{\n    \"0\" : 123,\n    \"1\" : [456]\n}\n~~~\n\n1. `\"/0\"` → `123`\n2. `\"/1/0\"` → `456`\n\nToken `\"0\"` 在第一个 pointer 中被当作成员名字。它在第二个 pointer 中被当作成数组索引。\n\n其他函数会改变 DOM，包括 `Create()`、`GetWithDefault()`、`Set()`、`Swap()`。这些函数总是成功的。若一些父值不存在，就会创建它们。若父值类型不匹配 token，也会强行改变其类型。改变类型也意味着完全移除其 DOM 子树的内容。\n\n例如，把上面的 JSON 解译至 `d` 之后，\n\n~~~cpp\nSetValueByPointer(d, \"1/a\", 789); // { \"0\" : 123, \"1\" : { \"a\" : 789 } }\n~~~\n\n## 解析负号 token\n\n另外，[RFC6901] 定义了一个特殊 token `-` （单个负号），用于表示数组最后元素的下一个元素。 `Get()` 只会把此 token 当作成员名字 '\"-\"'。而其他函数则会以此解析数组，等同于对数组调用 `Value::PushBack()` 。\n\n~~~cpp\nDocument d;\nd.Parse(\"{\\\"foo\\\":[123]}\");\nSetValueByPointer(d, \"/foo/-\", 456); // { \"foo\" : [123, 456] }\nSetValueByPointer(d, \"/-\", 789);    // { \"foo\" : [123, 456], \"-\" : 789 }\n~~~\n\n## 解析 Document 及 Value\n\n当使用 `p.Get(root)` 或 `GetValueByPointer(root, p)`，`root` 是一个（常数） `Value&`。这意味着，它也可以是 DOM 里的一个子树。\n\n其他函数有两组签名。一组使用 `Document& document` 作为参数，另一组使用 `Value& root`。第一组使用 `document.GetAllocator()` 去创建值，而第二组则需要使用者提供一个 allocator，如同 DOM 里的函数。\n\n以上例子都不需要 allocator 参数，因为它的第一个参数是 `Document&`。但如果你需要对一个子树进行解析，就需要如下面的例子般提供 allocator：\n\n~~~cpp\nclass Person {\npublic:\n    Person() {\n        document_ = new 
Document();\n        // CreateValueByPointer() here no need allocator\n        SetLocation(CreateValueByPointer(*document_, \"/residence\"), ...);\n        SetLocation(CreateValueByPointer(*document_, \"/office\"), ...);\n    };\n\nprivate:\n    void SetLocation(Value& location, const char* country, const char* addresses[2]) {\n        Value::Allocator& a = document_->GetAllocator();\n        // SetValueByPointer() here need allocator\n        SetValueByPointer(location, \"/country\", country, a);\n        SetValueByPointer(location, \"/address/0\", address[0], a);\n        SetValueByPointer(location, \"/address/1\", address[1], a);\n    }\n\n    // ...\n\n    Document* document_;\n};\n~~~\n\n`Erase()` 或 `EraseValueByPointer()` 不需要 allocator。而且它们成功删除值之后会返回 `true`。\n\n# 错误处理 {#ErrorHandling}\n\n`Pointer` 在其建构函数里会解译源字符串。若有解析错误，`Pointer::IsValid()` 返回 `false`。你可使用 `Pointer::GetParseErrorCode()` 和 `GetParseErrorOffset()` 去获取错信息。\n\n要注意的是，所有解析函数都假设 pointer 是合法的。对一个非法 pointer 解析会造成断言失败。\n\n# URI 片段表示方式 {#URIFragment}\n\n除了我们一直在使用的字符串方式表示 JSON pointer，[RFC6901] 也定义了一个 JSON Pointer 的 URI 片段（fragment）表示方式。URI 片段是定义于 [RFC3986] \"Uniform Resource Identifier (URI): Generic Syntax\"。\n\nURI 片段的主要分别是必然以 `#` （pound sign）开头，而一些字符也会以百分比编码成 UTF-8 序列。例如，以下的表展示了不同表示法下的 C/C++ 字符串常数。\n\n字符串表示方式 | URI 片段表示方式 | Pointer Tokens （UTF-8）\n----------------------|-----------------------------|------------------------\n`\"/foo/0\"`            | `\"#/foo/0\"`                 | `{\"foo\", 0}`\n`\"/a~1b\"`             | `\"#/a~1b\"`                  | `{\"a/b\"}`\n`\"/m~0n\"`             | `\"#/m~0n\"`                  | `{\"m~n\"}`\n`\"/ \"`                | `\"#/%20\"`                   | `{\" \"}`\n`\"/\\0\"`               | `\"#/%00\"`                   | `{\"\\0\"}`\n`\"/€\"`                | `\"#/%E2%82%AC\"`             | `{\"€\"}`\n\nRapidJSON 完全支持 URI 片段表示方式。它在解译时会自动检测 `#` 号。\n\n# 字符串化\n\n你也可以把一个 `Pointer` 字符串化，储存于字符串或其他输出流。例如：\n\n~~~\nPointer p(...);\nStringBuffer 
sb;\np.Stringify(sb);\nstd::cout << sb.GetString() << std::endl;\n~~~\n\n使用 `StringifyUriFragment()` 可以把 pointer 字符串化为 URI 片段表示法。\n\n# 使用者提供的 tokens {#UserSuppliedTokens}\n\n若一个 pointer 会用于多次解析，它应该只被创建一次，然后再施于不同的 DOM ，或在不同时间做解析。这样可以避免多次创键 `Pointer`，节省时间和内存分配。\n\n我们甚至可以再更进一步，完全消去解析过程及动态内存分配。我们可以直接生成 token 数组：\n\n~~~cpp\n#define NAME(s) { s, sizeof(s) / sizeof(s[0]) - 1, kPointerInvalidIndex }\n#define INDEX(i) { #i, sizeof(#i) - 1, i }\n\nstatic const Pointer::Token kTokens[] = { NAME(\"foo\"), INDEX(123) };\nstatic const Pointer p(kTokens, sizeof(kTokens) / sizeof(kTokens[0]));\n// Equivalent to static const Pointer p(\"/foo/123\");\n~~~\n\n这种做法可能适合内存受限的系统。\n\n[RFC3986]: https://tools.ietf.org/html/rfc3986\n[RFC6901]: https://tools.ietf.org/html/rfc6901\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/sax.md",
    "content": "# SAX\n\nThe term \"SAX\" originated from [Simple API for XML](http://en.wikipedia.org/wiki/Simple_API_for_XML). We borrowed this term for JSON parsing and generation.\n\nIn RapidJSON, `Reader` (typedef of `GenericReader<...>`) is the SAX-style parser for JSON, and `Writer` (typedef of `GenericWriter<...>`) is the SAX-style generator for JSON.\n\n[TOC]\n\n# Reader {#Reader}\n\n`Reader` parses a JSON from a stream. While it reads characters from the stream, it analyzes the characters according to the syntax of JSON, and publishes events to a handler.\n\nFor example, here is a JSON.\n\n~~~~~~~~~~js\n{\n    \"hello\": \"world\",\n    \"t\": true ,\n    \"f\": false,\n    \"n\": null,\n    \"i\": 123,\n    \"pi\": 3.1416,\n    \"a\": [1, 2, 3, 4]\n}\n~~~~~~~~~~\n\nWhen a `Reader` parses this JSON, it publishes the following events to the handler sequentially:\n\n~~~~~~~~~~\nStartObject()\nKey(\"hello\", 5, true)\nString(\"world\", 5, true)\nKey(\"t\", 1, true)\nBool(true)\nKey(\"f\", 1, true)\nBool(false)\nKey(\"n\", 1, true)\nNull()\nKey(\"i\")\nUint(123)\nKey(\"pi\")\nDouble(3.1416)\nKey(\"a\")\nStartArray()\nUint(1)\nUint(2)\nUint(3)\nUint(4)\nEndArray(4)\nEndObject(7)\n~~~~~~~~~~\n\nThese events can be easily matched with the JSON, but some event parameters need further explanation. 
Let's see the `simplereader` example, which produces exactly the same output as above:\n\n~~~~~~~~~~cpp\n#include \"rapidjson/reader.h\"\n#include <iostream>\n\nusing namespace rapidjson;\nusing namespace std;\n\nstruct MyHandler : public BaseReaderHandler<UTF8<>, MyHandler> {\n    bool Null() { cout << \"Null()\" << endl; return true; }\n    bool Bool(bool b) { cout << \"Bool(\" << boolalpha << b << \")\" << endl; return true; }\n    bool Int(int i) { cout << \"Int(\" << i << \")\" << endl; return true; }\n    bool Uint(unsigned u) { cout << \"Uint(\" << u << \")\" << endl; return true; }\n    bool Int64(int64_t i) { cout << \"Int64(\" << i << \")\" << endl; return true; }\n    bool Uint64(uint64_t u) { cout << \"Uint64(\" << u << \")\" << endl; return true; }\n    bool Double(double d) { cout << \"Double(\" << d << \")\" << endl; return true; }\n    bool String(const char* str, SizeType length, bool copy) { \n        cout << \"String(\" << str << \", \" << length << \", \" << boolalpha << copy << \")\" << endl;\n        return true;\n    }\n    bool StartObject() { cout << \"StartObject()\" << endl; return true; }\n    bool Key(const char* str, SizeType length, bool copy) { \n        cout << \"Key(\" << str << \", \" << length << \", \" << boolalpha << copy << \")\" << endl;\n        return true;\n    }\n    bool EndObject(SizeType memberCount) { cout << \"EndObject(\" << memberCount << \")\" << endl; return true; }\n    bool StartArray() { cout << \"StartArray()\" << endl; return true; }\n    bool EndArray(SizeType elementCount) { cout << \"EndArray(\" << elementCount << \")\" << endl; return true; }\n};\n\nint main() {\n    const char json[] = \" { \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3, 4] } \";\n\n    MyHandler handler;\n    Reader reader;\n    StringStream ss(json);\n    reader.Parse(ss, handler);\n}\n~~~~~~~~~~\n\nNote that RapidJSON uses templates to 
statically bind the `Reader` type and the handler type, instead of using classes with virtual functions. This paradigm can improve performance by inlining functions.\n\n## Handler {#Handler}\n\nAs shown in the previous example, the user needs to implement a handler that consumes the events (via function calls) from the `Reader`. The handler must contain the following member functions.\n\n~~~~~~~~~~cpp\nclass Handler {\n    bool Null();\n    bool Bool(bool b);\n    bool Int(int i);\n    bool Uint(unsigned i);\n    bool Int64(int64_t i);\n    bool Uint64(uint64_t i);\n    bool Double(double d);\n    bool RawNumber(const Ch* str, SizeType length, bool copy);\n    bool String(const Ch* str, SizeType length, bool copy);\n    bool StartObject();\n    bool Key(const Ch* str, SizeType length, bool copy);\n    bool EndObject(SizeType memberCount);\n    bool StartArray();\n    bool EndArray(SizeType elementCount);\n};\n~~~~~~~~~~\n\n`Null()` is called when the `Reader` encounters a JSON null value.\n\n`Bool(bool)` is called when the `Reader` encounters a JSON true or false value.\n\nWhen the `Reader` encounters a JSON number, it chooses a suitable C++ type mapping, and then calls *one* function out of `Int(int)`, `Uint(unsigned)`, `Int64(int64_t)`, `Uint64(uint64_t)` and `Double(double)`. If `kParseNumbersAsStrings` is enabled, `Reader` will always call `RawNumber()` instead.\n\n`String(const char* str, SizeType length, bool copy)` is called when the `Reader` encounters a string. The first parameter is a pointer to the string. The second parameter is the length of the string (excluding the null terminator). Note that RapidJSON supports the null character `\\0` inside a string. In that case, `strlen(str) < length`. The last parameter, `copy`, indicates whether the handler needs to make a copy of the string. For normal parsing, `copy = true`. Only when *insitu* parsing is used is `copy = false`. 
Be aware that the character type depends on the target encoding, which will be explained later.\n\nWhen the `Reader` encounters the beginning of an object, it calls `StartObject()`. An object in JSON is a set of name-value pairs. If the object contains members, it first calls `Key()` for the name of the member, and then calls functions depending on the type of the value. These name-value pair calls repeat until `EndObject(SizeType memberCount)` is called. Note that the `memberCount` parameter is just an aid for the handler; users who do not need this parameter may ignore it.\n\nArrays are similar to objects, but simpler. At the beginning of an array, the `Reader` calls `StartArray()`. If there are elements, it calls functions according to the types of the elements. Similarly, in the last call `EndArray(SizeType elementCount)`, the parameter `elementCount` is just an aid for the handler.\n\nEvery handler function returns a `bool`. Normally it should return `true`. If the handler encounters an error, it can return `false` to notify the event publisher to stop further processing.\n\nFor example, when we parse a JSON with `Reader` and the handler detects that the JSON does not conform to the required schema, the handler can return `false` and let the `Reader` stop further parsing. This will place the `Reader` in an error state, with error code `kParseErrorTermination`.\n\n## GenericReader {#GenericReader}\n\nAs mentioned before, `Reader` is a typedef of a template class `GenericReader`:\n\n~~~~~~~~~~cpp\nnamespace rapidjson {\n\ntemplate <typename SourceEncoding, typename TargetEncoding, typename Allocator = MemoryPoolAllocator<> >\nclass GenericReader {\n    // ...\n};\n\ntypedef GenericReader<UTF8<>, UTF8<> > Reader;\n\n} // namespace rapidjson\n~~~~~~~~~~\n\nThe `Reader` uses UTF-8 as both source and target encoding. The source encoding means the encoding in the JSON stream. The target encoding means the encoding of the `str` parameter in `String()` calls. 
For example, to parse a UTF-8 stream and output UTF-16 string events, you can define a reader by:\n\n~~~~~~~~~~cpp\nGenericReader<UTF8<>, UTF16<> > reader;\n~~~~~~~~~~\n\nNote that the default character type of `UTF16` is `wchar_t`. So this `reader` needs to call `String(const wchar_t*, SizeType, bool)` of the handler.\n\nThe third template parameter `Allocator` is the allocator type for the internal data structure (actually a stack).\n\n## Parsing {#SaxParsing}\n\nThe main function of `Reader` is used to parse JSON:\n\n~~~~~~~~~~cpp\ntemplate <unsigned parseFlags, typename InputStream, typename Handler>\nbool Parse(InputStream& is, Handler& handler);\n\n// with parseFlags = kParseDefaultFlags\ntemplate <typename InputStream, typename Handler>\nbool Parse(InputStream& is, Handler& handler);\n~~~~~~~~~~\n\nIf an error occurs during parsing, it will return `false`. Users can also call `bool HasParseError()`, `ParseErrorCode GetParseErrorCode()` and `size_t GetErrorOffset()` to obtain the error state. In fact, `Document` uses these `Reader` functions to obtain parse errors. Please refer to [DOM](doc/dom.md) for details about parse errors.\n\n## Token-by-Token Parsing {#TokenByTokenParsing}\n\nSome users may wish to parse a JSON input stream a single token at a time, instead of immediately parsing an entire document without stopping. 
To parse JSON this way, instead of calling `Parse`, you can use the `IterativeParse` set of functions:\n\n~~~~~~~~~~cpp\n    void IterativeParseInit();\n\n    template <unsigned parseFlags, typename InputStream, typename Handler>\n    bool IterativeParseNext(InputStream& is, Handler& handler);\n\n    bool IterativeParseComplete();\n~~~~~~~~~~\n\nHere is an example of iteratively parsing JSON, token by token:\n\n~~~~~~~~~~cpp\n    reader.IterativeParseInit();\n    while (!reader.IterativeParseComplete()) {\n        reader.IterativeParseNext<kParseDefaultFlags>(is, handler);\n        // Your handler has been called once.\n    }\n~~~~~~~~~~\n\n# Writer {#Writer}\n\n`Reader` converts (parses) JSON into events. `Writer` does exactly the opposite. It converts events into JSON. \n\n`Writer` is very easy to use. If your application only needs to convert some data into JSON, it may be a good choice to use `Writer` directly, instead of building a `Document` and then stringifying it with a `Writer`.\n\nIn the `simplewriter` example, we do exactly the reverse of `simplereader`.\n\n~~~~~~~~~~cpp\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include <iostream>\n\nusing namespace rapidjson;\nusing namespace std;\n\nint main() {\n    StringBuffer s;\n    Writer<StringBuffer> writer(s);\n    \n    writer.StartObject();\n    writer.Key(\"hello\");\n    writer.String(\"world\");\n    writer.Key(\"t\");\n    writer.Bool(true);\n    writer.Key(\"f\");\n    writer.Bool(false);\n    writer.Key(\"n\");\n    writer.Null();\n    writer.Key(\"i\");\n    writer.Uint(123);\n    writer.Key(\"pi\");\n    writer.Double(3.1416);\n    writer.Key(\"a\");\n    writer.StartArray();\n    for (unsigned i = 0; i < 4; i++)\n        writer.Uint(i);\n    writer.EndArray();\n    writer.EndObject();\n\n    cout << s.GetString() << endl;\n}\n~~~~~~~~~~\n\n~~~~~~~~~~\n{\"hello\":\"world\",\"t\":true,\"f\":false,\"n\":null,\"i\":123,\"pi\":3.1416,\"a\":[0,1,2,3]}\n~~~~~~~~~~\n\nThere are 
two overloads of `String()` and `Key()`. One is the same as defined in the handler concept, with 3 parameters; it can handle strings with null characters. The other is the simpler version used in the above example.\n\nNote that the example code does not pass any parameters to `EndArray()` and `EndObject()`. A `SizeType` can be passed, but it will simply be ignored by `Writer`.\n\nYou may wonder why not just use `sprintf()` or `std::stringstream` to build a JSON?\n\nThere are various reasons:\n1. `Writer` must output a well-formed JSON. If the event sequence is incorrect (e.g. `Int()` right after `StartObject()`), it triggers an assertion failure in debug mode.\n2. `Writer::String()` can handle string escaping (e.g. converting code point `U+000A` to `\\n`) and Unicode transcoding.\n3. `Writer` handles number output consistently.\n4. `Writer` implements the event handler concept. It can be used to handle events from `Reader`, `Document` or another event publisher.\n5. `Writer` can be optimized for different platforms.\n\nAnyway, using the `Writer` API is even simpler than generating JSON by ad hoc methods.\n\n## Template {#WriterTemplate}\n\n`Writer` has a minor design difference from `Reader`. `Writer` is a template class, not a typedef. There is no `GenericWriter`. The following is the declaration.\n\n~~~~~~~~~~cpp\nnamespace rapidjson {\n\ntemplate<typename OutputStream, typename SourceEncoding = UTF8<>, typename TargetEncoding = UTF8<>, typename Allocator = CrtAllocator, unsigned writeFlags = kWriteDefaultFlags>\nclass Writer {\npublic:\n    Writer(OutputStream& os, Allocator* allocator = 0, size_t levelDepth = kDefaultLevelDepth);\n// ...\n};\n\n} // namespace rapidjson\n~~~~~~~~~~\n\nThe `OutputStream` template parameter is the type of output stream. 
It cannot be deduced and must be specified by the user.\n\nThe `SourceEncoding` template parameter specifies the encoding to be used in `String(const Ch*, ...)`.\n\nThe `TargetEncoding` template parameter specifies the encoding in the output stream.\n\nThe `Allocator` is the type of allocator, which is used for allocating the internal data structure (a stack).\n\nThe `writeFlags` are a combination of the following bit-flags:\n\nWrite flags                   | Meaning\n------------------------------|-----------------------------------\n`kWriteNoFlags`               | No flag is set.\n`kWriteDefaultFlags`          | Default write flags. It is equal to macro `RAPIDJSON_WRITE_DEFAULT_FLAGS`, which is defined as `kWriteNoFlags`.\n`kWriteValidateEncodingFlag`  | Validate encoding of JSON strings.\n`kWriteNanAndInfFlag`         | Allow writing of `Infinity`, `-Infinity` and `NaN`.\n\nBesides, the constructor of `Writer` has a `levelDepth` parameter. This parameter affects the initial memory allocated for storing information per hierarchy level.\n\n## PrettyWriter {#PrettyWriter}\n\nWhile the output of `Writer` is the most condensed JSON without white-spaces, suitable for network transfer or storage, it is not easily readable by humans.\n\nTherefore, RapidJSON provides a `PrettyWriter`, which adds indentation and line feeds in the output.\n\nThe usage of `PrettyWriter` is exactly the same as `Writer`, except that `PrettyWriter` provides a `SetIndent(Ch indentChar, unsigned indentCharCount)` function. The default is 4 spaces.\n\n## Completeness and Reset {#CompletenessReset}\n\nA `Writer` can only output a single JSON, which can be any JSON type at the root. Once the singular event for root (e.g. `String()`), or the last matching `EndObject()` or `EndArray()` event, is handled, the output JSON is well-formed and complete. Users can detect this state by calling `Writer::IsComplete()`.\n\nWhen a JSON is complete, the `Writer` cannot accept any new events. 
Otherwise the output will be invalid (i.e. having more than one root). To reuse the `Writer` object, users can call `Writer::Reset(OutputStream& os)` to reset all internal states of the `Writer` with a new output stream.\n\n# Techniques {#SaxTechniques}\n\n## Parsing JSON to Custom Data Structure {#CustomDataStructure}\n\n`Document`'s parsing capability is completely based on `Reader`. Actually `Document` is a handler that receives events from a reader to build a DOM during parsing.\n\nUsers may use `Reader` to build other data structures directly. This eliminates building a DOM, reducing memory usage and improving performance.\n\nIn the following `messagereader` example, `ParseMessages()` parses a JSON that should be an object of key-string pairs.\n\n~~~~~~~~~~cpp\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/error/en.h\"\n#include <iostream>\n#include <string>\n#include <map>\n\nusing namespace std;\nusing namespace rapidjson;\n\ntypedef map<string, string> MessageMap;\n\nstruct MessageHandler\n    : public BaseReaderHandler<UTF8<>, MessageHandler> {\n    MessageHandler() : state_(kExpectObjectStart) {\n    }\n\n    bool StartObject() {\n        switch (state_) {\n        case kExpectObjectStart:\n            state_ = kExpectNameOrObjectEnd;\n            return true;\n        default:\n            return false;\n        }\n    }\n\n    bool String(const char* str, SizeType length, bool) {\n        switch (state_) {\n        case kExpectNameOrObjectEnd:\n            name_ = string(str, length);\n            state_ = kExpectValue;\n            return true;\n        case kExpectValue:\n            messages_.insert(MessageMap::value_type(name_, string(str, length)));\n            state_ = kExpectNameOrObjectEnd;\n            return true;\n        default:\n            return false;\n        }\n    }\n\n    bool EndObject(SizeType) { return state_ == kExpectNameOrObjectEnd; }\n\n    bool Default() { return false; } // All other events are invalid.\n\n    
MessageMap messages_;\n    enum State {\n        kExpectObjectStart,\n        kExpectNameOrObjectEnd,\n        kExpectValue,\n    } state_;\n    std::string name_;\n};\n\nvoid ParseMessages(const char* json, MessageMap& messages) {\n    Reader reader;\n    MessageHandler handler;\n    StringStream ss(json);\n    if (reader.Parse(ss, handler))\n        messages.swap(handler.messages_);   // Only change it on success.\n    else {\n        ParseErrorCode e = reader.GetParseErrorCode();\n        size_t o = reader.GetErrorOffset();\n        cout << \"Error: \" << GetParseError_En(e) << endl;\n        cout << \" at offset \" << o << \" near '\" << string(json).substr(o, 10) << \"...'\" << endl;\n    }\n}\n\nint main() {\n    MessageMap messages;\n\n    const char* json1 = \"{ \\\"greeting\\\" : \\\"Hello!\\\", \\\"farewell\\\" : \\\"bye-bye!\\\" }\";\n    cout << json1 << endl;\n    ParseMessages(json1, messages);\n\n    for (MessageMap::const_iterator itr = messages.begin(); itr != messages.end(); ++itr)\n        cout << itr->first << \": \" << itr->second << endl;\n\n    cout << endl << \"Parse a JSON with invalid schema.\" << endl;\n    const char* json2 = \"{ \\\"greeting\\\" : \\\"Hello!\\\", \\\"farewell\\\" : \\\"bye-bye!\\\", \\\"foo\\\" : {} }\";\n    cout << json2 << endl;\n    ParseMessages(json2, messages);\n\n    return 0;\n}\n~~~~~~~~~~\n\n~~~~~~~~~~\n{ \"greeting\" : \"Hello!\", \"farewell\" : \"bye-bye!\" }\nfarewell: bye-bye!\ngreeting: Hello!\n\nParse a JSON with invalid schema.\n{ \"greeting\" : \"Hello!\", \"farewell\" : \"bye-bye!\", \"foo\" : {} }\nError: Terminate parsing due to Handler error.\n at offset 59 near '} }...'\n~~~~~~~~~~\n\nThe first JSON (`json1`) was successfully parsed into `MessageMap`. Since `MessageMap` is a `std::map`, the printing order is sorted by the key. This order differs from the order in the JSON.\n\nIn the second JSON (`json2`), `foo`'s value is an empty object. 
As it is an object, `MessageHandler::StartObject()` will be called. However, at that moment `state_ = kExpectValue`, so that function returns `false` and causes the parsing process to be terminated. The error code is `kParseErrorTermination`.\n\n## Filtering of JSON {#Filtering}\n\nAs mentioned earlier, `Writer` can handle the events published by `Reader`. The `condense` example simply sets a `Writer` as the handler of a `Reader`, so it can remove all white-spaces in a JSON. The `pretty` example uses the same relationship, but replaces `Writer` with `PrettyWriter`. So `pretty` can be used to reformat a JSON with indentation and line feeds.\n\nActually, we can add intermediate layer(s) to filter the contents of JSON via this SAX-style API. For example, the `capitalize` example capitalizes all strings in a JSON.\n\n~~~~~~~~~~cpp\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/error/en.h\"\n#include <vector>\n#include <cctype>\n\nusing namespace rapidjson;\n\ntemplate<typename OutputHandler>\nstruct CapitalizeFilter {\n    CapitalizeFilter(OutputHandler& out) : out_(out), buffer_() {\n    }\n\n    bool Null() { return out_.Null(); }\n    bool Bool(bool b) { return out_.Bool(b); }\n    bool Int(int i) { return out_.Int(i); }\n    bool Uint(unsigned u) { return out_.Uint(u); }\n    bool Int64(int64_t i) { return out_.Int64(i); }\n    bool Uint64(uint64_t u) { return out_.Uint64(u); }\n    bool Double(double d) { return out_.Double(d); }\n    bool RawNumber(const char* str, SizeType length, bool copy) { return out_.RawNumber(str, length, copy); }\n    bool String(const char* str, SizeType length, bool) { \n        buffer_.clear();\n        for (SizeType i = 0; i < length; i++)\n            buffer_.push_back(static_cast<char>(std::toupper(str[i])));\n        return out_.String(&buffer_.front(), length, true); // true = output handler needs to copy the string\n    }\n    bool StartObject() { 
return out_.StartObject(); }\n    bool Key(const char* str, SizeType length, bool copy) { return String(str, length, copy); }\n    bool EndObject(SizeType memberCount) { return out_.EndObject(memberCount); }\n    bool StartArray() { return out_.StartArray(); }\n    bool EndArray(SizeType elementCount) { return out_.EndArray(elementCount); }\n\n    OutputHandler& out_;\n    std::vector<char> buffer_;\n};\n\nint main(int, char*[]) {\n    // Prepare JSON reader and input stream.\n    Reader reader;\n    char readBuffer[65536];\n    FileReadStream is(stdin, readBuffer, sizeof(readBuffer));\n\n    // Prepare JSON writer and output stream.\n    char writeBuffer[65536];\n    FileWriteStream os(stdout, writeBuffer, sizeof(writeBuffer));\n    Writer<FileWriteStream> writer(os);\n\n    // The JSON reader parses from the input stream and lets the writer generate the output.\n    CapitalizeFilter<Writer<FileWriteStream> > filter(writer);\n    if (!reader.Parse(is, filter)) {\n        fprintf(stderr, \"\\nError(%u): %s\\n\", (unsigned)reader.GetErrorOffset(), GetParseError_En(reader.GetParseErrorCode()));\n        return 1;\n    }\n\n    return 0;\n}\n~~~~~~~~~~\n\nNote that it is incorrect to simply capitalize the JSON as a string. For example:\n~~~~~~~~~~\n[\"Hello\\nWorld\"]\n~~~~~~~~~~\n\nSimply capitalizing the whole JSON would produce an incorrect escape sequence:\n~~~~~~~~~~\n[\"HELLO\\NWORLD\"]\n~~~~~~~~~~\n\nThe correct result by `capitalize`:\n~~~~~~~~~~\n[\"HELLO\\nWORLD\"]\n~~~~~~~~~~\n\nMore complicated filters can be developed. However, since the SAX-style API can only provide information about a single event at a time, users may need to book-keep contextual information (e.g. the path from the root value, storage of other related values). Some processing may be easier to implement in DOM than in SAX.\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/sax.zh-cn.md",
    "content": "# SAX\n\n\"SAX\" 此术语源于 [Simple API for XML](http://en.wikipedia.org/wiki/Simple_API_for_XML)。我们借了此术语去套用在 JSON 的解析及生成。\n\n在 RapidJSON 中，`Reader`（`GenericReader<...>` 的 typedef）是 JSON 的 SAX 风格解析器，而 `Writer`（`GenericWriter<...>` 的 typedef）则是 JSON 的 SAX 风格生成器。\n\n[TOC]\n\n# Reader {#Reader}\n\n`Reader` 从输入流解析一个 JSON。当它从流中读取字符时，它会基于 JSON 的语法去分析字符，并向处理器发送事件。\n\n例如，以下是一个 JSON。\n\n~~~~~~~~~~js\n{\n    \"hello\": \"world\",\n    \"t\": true ,\n    \"f\": false,\n    \"n\": null,\n    \"i\": 123,\n    \"pi\": 3.1416,\n    \"a\": [1, 2, 3, 4]\n}\n~~~~~~~~~~\n\n当一个 `Reader` 解析此 JSON 时，它会顺序地向处理器发送以下的事件：\n\n~~~~~~~~~~\nStartObject()\nKey(\"hello\", 5, true)\nString(\"world\", 5, true)\nKey(\"t\", 1, true)\nBool(true)\nKey(\"f\", 1, true)\nBool(false)\nKey(\"n\", 1, true)\nNull()\nKey(\"i\")\nUint(123)\nKey(\"pi\")\nDouble(3.1416)\nKey(\"a\")\nStartArray()\nUint(1)\nUint(2)\nUint(3)\nUint(4)\nEndArray(4)\nEndObject(7)\n~~~~~~~~~~\n\n除了一些事件参数需要再作解释，这些事件可以轻松地与 JSON 对上。我们可以看看 `simplereader` 例子怎样产生和以上完全相同的结果：\n\n~~~~~~~~~~cpp\n#include \"rapidjson/reader.h\"\n#include <iostream>\n\nusing namespace rapidjson;\nusing namespace std;\n\nstruct MyHandler : public BaseReaderHandler<UTF8<>, MyHandler> {\n    bool Null() { cout << \"Null()\" << endl; return true; }\n    bool Bool(bool b) { cout << \"Bool(\" << boolalpha << b << \")\" << endl; return true; }\n    bool Int(int i) { cout << \"Int(\" << i << \")\" << endl; return true; }\n    bool Uint(unsigned u) { cout << \"Uint(\" << u << \")\" << endl; return true; }\n    bool Int64(int64_t i) { cout << \"Int64(\" << i << \")\" << endl; return true; }\n    bool Uint64(uint64_t u) { cout << \"Uint64(\" << u << \")\" << endl; return true; }\n    bool Double(double d) { cout << \"Double(\" << d << \")\" << endl; return true; }\n    bool String(const char* str, SizeType length, bool copy) { \n        cout << \"String(\" << str << \", \" << length << \", \" << boolalpha << copy << \")\" << endl;\n        return true;\n    }\n    
bool StartObject() { cout << \"StartObject()\" << endl; return true; }\n    bool Key(const char* str, SizeType length, bool copy) { \n        cout << \"Key(\" << str << \", \" << length << \", \" << boolalpha << copy << \")\" << endl;\n        return true;\n    }\n    bool EndObject(SizeType memberCount) { cout << \"EndObject(\" << memberCount << \")\" << endl; return true; }\n    bool StartArray() { cout << \"StartArray()\" << endl; return true; }\n    bool EndArray(SizeType elementCount) { cout << \"EndArray(\" << elementCount << \")\" << endl; return true; }\n};\n\nint main() {\n    const char json[] = \" { \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3, 4] } \";\n\n    MyHandler handler;\n    Reader reader;\n    StringStream ss(json);\n    reader.Parse(ss, handler);\n}\n~~~~~~~~~~\n\n注意 RapidJSON 使用模板去静态绑定 `Reader` 类型及处理器的类型，而不是使用含虚函数的类。这个范式可以通过把函数内联而改善性能。\n\n## 处理器 {#Handler}\n\n如前例所示，使用者需要实现一个处理器（handler），用于处理来自 `Reader` 的事件（函数调用）。处理器必须包含以下的成员函数。\n\n~~~~~~~~~~cpp\nclass Handler {\n    bool Null();\n    bool Bool(bool b);\n    bool Int(int i);\n    bool Uint(unsigned i);\n    bool Int64(int64_t i);\n    bool Uint64(uint64_t i);\n    bool Double(double d);\n    bool RawNumber(const Ch* str, SizeType length, bool copy);\n    bool String(const Ch* str, SizeType length, bool copy);\n    bool StartObject();\n    bool Key(const Ch* str, SizeType length, bool copy);\n    bool EndObject(SizeType memberCount);\n    bool StartArray();\n    bool EndArray(SizeType elementCount);\n};\n~~~~~~~~~~\n\n当 `Reader` 遇到 JSON null 值时会调用 `Null()`。\n\n当 `Reader` 遇到 JSON true 或 false 值时会调用 `Bool(bool)`。\n\n当 `Reader` 遇到 JSON number，它会选择一个合适的 C++ 类型映射，然后调用 `Int(int)`、`Uint(unsigned)`、`Int64(int64_t)`、`Uint64(uint64_t)` 及 `Double(double)` 的 *其中之一个*。若开启了 `kParseNumbersAsStrings` 选项，`Reader` 便会改为调用 `RawNumber()`。\n\n当 `Reader` 遇到 JSON string，它会调用 `String(const char* str, SizeType length, bool 
copy)`。第一个参数是字符串的指针。第二个参数是字符串的长度（不包含空终止符号）。注意 RapidJSON 支持字串中含有空字符 `\\0`。若出现这种情况，便会有 `strlen(str) < length`。最后的 `copy` 参数表示处理器是否需要复制该字符串。在正常解析时，`copy = true`。仅当使用原位解析时，`copy = false`。此外，还要注意字符的类型与目标编码相关，我们稍后会再谈这一点。\n\n当 `Reader` 遇到 JSON object 的开始之时，它会调用 `StartObject()`。JSON 的 object 是一个键值对（成员）的集合。若 object 包含成员，它会先为成员的名字调用 `Key()`，然后再按值的类型调用函数。它不断调用这些键值对，直至最终调用 `EndObject(SizeType memberCount)`。注意 `memberCount` 参数对处理器来说只是协助性质，使用者可能不需要此参数。\n\nJSON array 与 object 相似，但更简单。在 array 开始时，`Reader` 会调用 `StartArray()`。若 array 含有元素，它会按元素的类型来调用函数。相似地，最后它会调用 `EndArray(SizeType elementCount)`，其中 `elementCount` 参数对处理器来说只是协助性质。\n\n每个处理器函数都返回一个 `bool`。正常它们应返回 `true`。若处理器遇到错误，它可以返回 `false` 去通知事件发送方停止继续处理。\n\n例如，当我们用 `Reader` 解析一个 JSON 时，处理器检测到该 JSON 并不符合所需的 schema，那么处理器可以返回 `false`，令 `Reader` 停止之后的解析工作。而 `Reader` 会进入一个错误状态，并以 `kParseErrorTermination` 错误码标识。\n\n## GenericReader {#GenericReader}\n\n前面提及，`Reader` 是 `GenericReader` 模板类的 typedef：\n\n~~~~~~~~~~cpp\nnamespace rapidjson {\n\ntemplate <typename SourceEncoding, typename TargetEncoding, typename Allocator = MemoryPoolAllocator<> >\nclass GenericReader {\n    // ...\n};\n\ntypedef GenericReader<UTF8<>, UTF8<> > Reader;\n\n} // namespace rapidjson\n~~~~~~~~~~\n\n`Reader` 使用 UTF-8 作为来源及目标编码。来源编码是指 JSON 流的编码。目标编码是指 `String()` 的 `str` 参数所用的编码。例如，要解析一个 UTF-8 流并输出至 UTF-16 string 事件，你需要这么定义一个 reader：\n\n~~~~~~~~~~cpp\nGenericReader<UTF8<>, UTF16<> > reader;\n~~~~~~~~~~\n\n注意到 `UTF16` 的缺省类型是 `wchar_t`。因此这个 `reader` 需要调用处理器的 `String(const wchar_t*, SizeType, bool)`。\n\n第三个模板参数 `Allocator` 是内部数据结构（实际上是一个堆栈）的分配器类型。\n\n## 解析 {#SaxParsing}\n\n`Reader` 的唯一功能就是解析 JSON。 \n\n~~~~~~~~~~cpp\ntemplate <unsigned parseFlags, typename InputStream, typename Handler>\nbool Parse(InputStream& is, Handler& handler);\n\n// 使用 parseFlags = kDefaultParseFlags\ntemplate <typename InputStream, typename Handler>\nbool Parse(InputStream& is, Handler& handler);\n~~~~~~~~~~\n\n若在解析中出现错误，它会返回 `false`。使用者可调用 `bool HasParseError()`, `ParseErrorCode 
GetParseErrorCode()` 及 `size_t GetErrorOffset()` 获取错误状态。实际上 `Document` 使用这些 `Reader` 函数去获取解析错误。请参考 [DOM](doc/dom.zh-cn.md) 去了解有关解析错误的细节。\n\n# Writer {#Writer}\n\n`Reader` 把 JSON 转换（解析）成为事件。`Writer` 做完全相反的事情。它把事件转换成 JSON。\n\n`Writer` 是非常容易使用的。若你的应用程序只需把一些数据转换成 JSON，可能直接使用 `Writer`，会比建立一个 `Document` 然后用 `Writer` 把它转换成 JSON 更加方便。\n\n在 `simplewriter` 例子里，我们做 `simplereader` 完全相反的事情。\n\n~~~~~~~~~~cpp\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include <iostream>\n\nusing namespace rapidjson;\nusing namespace std;\n\nint main() {\n    StringBuffer s;\n    Writer<StringBuffer> writer(s);\n    \n    writer.StartObject();\n    writer.Key(\"hello\");\n    writer.String(\"world\");\n    writer.Key(\"t\");\n    writer.Bool(true);\n    writer.Key(\"f\");\n    writer.Bool(false);\n    writer.Key(\"n\");\n    writer.Null();\n    writer.Key(\"i\");\n    writer.Uint(123);\n    writer.Key(\"pi\");\n    writer.Double(3.1416);\n    writer.Key(\"a\");\n    writer.StartArray();\n    for (unsigned i = 0; i < 4; i++)\n        writer.Uint(i);\n    writer.EndArray();\n    writer.EndObject();\n\n    cout << s.GetString() << endl;\n}\n~~~~~~~~~~\n\n~~~~~~~~~~\n{\"hello\":\"world\",\"t\":true,\"f\":false,\"n\":null,\"i\":123,\"pi\":3.1416,\"a\":[0,1,2,3]}\n~~~~~~~~~~\n\n`String()` 及 `Key()` 各有两个重载。一个是如处理器 concept 般，有 3 个参数。它能处理含空字符的字符串。另一个是如上中使用的较简单版本。\n\n注意到，例子代码中的 `EndArray()` 及 `EndObject()` 并没有参数。可以传递一个 `SizeType` 的参数，但它会被 `Writer` 忽略。\n\n你可能会怀疑，为什么不使用 `sprintf()` 或 `std::stringstream` 去建立一个 JSON？\n\n这有几个原因：\n1. `Writer` 必然会输出一个结构良好（well-formed）的 JSON。若然有错误的事件次序（如 `Int()` 紧随 `StartObject()` 出现），它会在调试模式中产生断言失败。\n2. `Writer::String()` 可处理字符串转义（如把码点 `U+000A` 转换成 `\\n`）及进行 Unicode 转码。\n3. `Writer` 一致地处理 number 的输出。\n4. `Writer` 实现了事件处理器 concept。可用于处理来自 `Reader`、`Document` 或其他事件发生器。\n5. 
`Writer` 可对不同平台进行优化。\n\n无论如何，使用 `Writer` API 去生成 JSON 甚至乎比这些临时方法更简单。\n\n## 模板 {#WriterTemplate}\n\n`Writer` 与 `Reader` 有少许设计区别。`Writer` 是一个模板类，而不是一个 typedef。 并没有 `GenericWriter`。以下是 `Writer` 的声明。\n\n~~~~~~~~~~cpp\nnamespace rapidjson {\n\ntemplate<typename OutputStream, typename SourceEncoding = UTF8<>, typename TargetEncoding = UTF8<>, typename Allocator = CrtAllocator<> >\nclass Writer {\npublic:\n    Writer(OutputStream& os, Allocator* allocator = 0, size_t levelDepth = kDefaultLevelDepth)\n// ...\n};\n\n} // namespace rapidjson\n~~~~~~~~~~\n\n`OutputStream` 模板参数是输出流的类型。它的类型不可以被自动推断，必须由使用者提供。\n\n`SourceEncoding` 模板参数指定了 `String(const Ch*, ...)` 的编码。\n\n`TargetEncoding` 模板参数指定输出流的编码。\n\n`Allocator` 是分配器的类型，用于分配内部数据结构（一个堆栈）。\n\n`writeFlags` 是以下位标志的组合：\n\n写入位标志                     | 意义\n------------------------------|-----------------------------------\n`kWriteNoFlags`               | 没有任何标志。\n`kWriteDefaultFlags`          | 缺省的解析选项。它等于 `RAPIDJSON_WRITE_DEFAULT_FLAGS` 宏，此宏定义为  `kWriteNoFlags`。\n`kWriteValidateEncodingFlag`  | 校验 JSON 字符串的编码。\n`kWriteNanAndInfFlag`         | 容许写入 `Infinity`, `-Infinity` 及 `NaN`。\n\n此外，`Writer` 的构造函数有一 `levelDepth` 参数。存储每层阶信息的初始内存分配量受此参数影响。\n\n## PrettyWriter {#PrettyWriter}\n\n`Writer` 所输出的是没有空格字符的最紧凑 JSON，适合网络传输或储存，但不适合人类阅读。\n\n因此，RapidJSON 提供了一个 `PrettyWriter`，它在输出中加入缩进及换行。\n\n`PrettyWriter` 的用法与 `Writer` 几乎一样，不同之处是 `PrettyWriter` 提供了一个 `SetIndent(Ch indentChar, unsigned indentCharCount)` 函数。缺省的缩进是 4 个空格。\n\n## 完整性及重置 {#CompletenessReset}\n\n一个 `Writer` 只可输出单个 JSON，其根节点可以是任何 JSON 类型。当处理完单个根节点事件（如 `String()`），或匹配的最后 `EndObject()` 或 `EndArray()` 事件，输出的 JSON 是结构完整（well-formed）及完整的。使用者可调用 `Writer::IsComplete()` 去检测完整性。\n\n当 JSON 完整时，`Writer` 不能再接受新的事件。不然其输出便会是不合法的（例如有超过一个根节点）。为了重新利用 `Writer` 对象，使用者可调用 `Writer::Reset(OutputStream& os)` 去重置其所有内部状态及设置新的输出流。\n\n# 技巧 {#SaxTechniques}\n\n## 解析 JSON 至自定义结构 {#CustomDataStructure}\n\n`Document` 的解析功能完全依靠 `Reader`。实际上 `Document` 是一个处理器，在解析 JSON 时接收事件去建立一个 DOM。\n\n使用者可以直接使用 `Reader` 
去建立其他数据结构。这消除了建立 DOM 的步骤，从而减少了内存开销并改善性能。\n\n在以下的 `messagereader` 例子中，`ParseMessages()` 解析一个 JSON，该 JSON 应该是一个含键值对的 object。\n\n~~~~~~~~~~cpp\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/error/en.h\"\n#include <iostream>\n#include <string>\n#include <map>\n\nusing namespace std;\nusing namespace rapidjson;\n\ntypedef map<string, string> MessageMap;\n\nstruct MessageHandler\n    : public BaseReaderHandler<UTF8<>, MessageHandler> {\n    MessageHandler() : state_(kExpectObjectStart) {\n    }\n\n    bool StartObject() {\n        switch (state_) {\n        case kExpectObjectStart:\n            state_ = kExpectNameOrObjectEnd;\n            return true;\n        default:\n            return false;\n        }\n    }\n\n    bool String(const char* str, SizeType length, bool) {\n        switch (state_) {\n        case kExpectNameOrObjectEnd:\n            name_ = string(str, length);\n            state_ = kExpectValue;\n            return true;\n        case kExpectValue:\n            messages_.insert(MessageMap::value_type(name_, string(str, length)));\n            state_ = kExpectNameOrObjectEnd;\n            return true;\n        default:\n            return false;\n        }\n    }\n\n    bool EndObject(SizeType) { return state_ == kExpectNameOrObjectEnd; }\n\n    bool Default() { return false; } // All other events are invalid.\n\n    MessageMap messages_;\n    enum State {\n        kExpectObjectStart,\n        kExpectNameOrObjectEnd,\n        kExpectValue,\n    }state_;\n    std::string name_;\n};\n\nvoid ParseMessages(const char* json, MessageMap& messages) {\n    Reader reader;\n    MessageHandler handler;\n    StringStream ss(json);\n    if (reader.Parse(ss, handler))\n        messages.swap(handler.messages_);   // Only change it if success.\n    else {\n        ParseErrorCode e = reader.GetParseErrorCode();\n        size_t o = reader.GetErrorOffset();\n        cout << \"Error: \" << GetParseError_En(e) << endl;;\n        cout << \" at offset \" << o << \" 
near '\" << string(json).substr(o, 10) << \"...'\" << endl;\n    }\n}\n\nint main() {\n    MessageMap messages;\n\n    const char* json1 = \"{ \\\"greeting\\\" : \\\"Hello!\\\", \\\"farewell\\\" : \\\"bye-bye!\\\" }\";\n    cout << json1 << endl;\n    ParseMessages(json1, messages);\n\n    for (MessageMap::const_iterator itr = messages.begin(); itr != messages.end(); ++itr)\n        cout << itr->first << \": \" << itr->second << endl;\n\n    cout << endl << \"Parse a JSON with invalid schema.\" << endl;\n    const char* json2 = \"{ \\\"greeting\\\" : \\\"Hello!\\\", \\\"farewell\\\" : \\\"bye-bye!\\\", \\\"foo\\\" : {} }\";\n    cout << json2 << endl;\n    ParseMessages(json2, messages);\n\n    return 0;\n}\n~~~~~~~~~~\n\n~~~~~~~~~~\n{ \"greeting\" : \"Hello!\", \"farewell\" : \"bye-bye!\" }\nfarewell: bye-bye!\ngreeting: Hello!\n\nParse a JSON with invalid schema.\n{ \"greeting\" : \"Hello!\", \"farewell\" : \"bye-bye!\", \"foo\" : {} }\nError: Terminate parsing due to Handler error.\n at offset 59 near '} }...'\n~~~~~~~~~~\n\n第一个 JSON（`json1`）被成功地解析至 `MessageMap`。由于 `MessageMap` 是一个 `std::map`，打印次序按键值排序。此次序与 JSON 中的次序不同。\n\n在第二个 JSON（`json2`）中，`foo` 的值是一个空 object。由于它是一个 object，`MessageHandler::StartObject()` 会被调用。然而，在 `state_ = kExpectValue` 的情况下，该函数会返回 `false`，并导致解析过程终止。错误代码是 `kParseErrorTermination`。\n\n## 过滤 JSON {#Filtering}\n\n如前面提及过，`Writer` 可处理 `Reader` 发出的事件。`example/condense/condense.cpp` 例子简单地设置 `Writer` 作为一个 `Reader` 的处理器，因此它能移除 JSON 中的所有空白字符。`example/pretty/pretty.cpp` 例子使用同样的关系，只是以 `PrettyWriter` 取代 `Writer`。因此 `pretty` 能够重新格式化 JSON，加入缩进及换行。\n\n实际上，我们可以使用 SAX 风格 API 去加入（多个）中间层去过滤 JSON 的内容。例如 `capitalize` 例子可以把所有 JSON string 改为大写。\n\n~~~~~~~~~~cpp\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/error/en.h\"\n#include <vector>\n#include <cctype>\n\nusing namespace rapidjson;\n\ntemplate<typename OutputHandler>\nstruct 
CapitalizeFilter {\n    CapitalizeFilter(OutputHandler& out) : out_(out), buffer_() {\n    }\n\n    bool Null() { return out_.Null(); }\n    bool Bool(bool b) { return out_.Bool(b); }\n    bool Int(int i) { return out_.Int(i); }\n    bool Uint(unsigned u) { return out_.Uint(u); }\n    bool Int64(int64_t i) { return out_.Int64(i); }\n    bool Uint64(uint64_t u) { return out_.Uint64(u); }\n    bool Double(double d) { return out_.Double(d); }\n    bool RawNumber(const char* str, SizeType length, bool copy) { return out_.RawNumber(str, length, copy); }\n    bool String(const char* str, SizeType length, bool) { \n        buffer_.clear();\n        for (SizeType i = 0; i < length; i++)\n            buffer_.push_back(std::toupper(str[i]));\n        return out_.String(&buffer_.front(), length, true); // true = output handler need to copy the string\n    }\n    bool StartObject() { return out_.StartObject(); }\n    bool Key(const char* str, SizeType length, bool copy) { return String(str, length, copy); }\n    bool EndObject(SizeType memberCount) { return out_.EndObject(memberCount); }\n    bool StartArray() { return out_.StartArray(); }\n    bool EndArray(SizeType elementCount) { return out_.EndArray(elementCount); }\n\n    OutputHandler& out_;\n    std::vector<char> buffer_;\n};\n\nint main(int, char*[]) {\n    // Prepare JSON reader and input stream.\n    Reader reader;\n    char readBuffer[65536];\n    FileReadStream is(stdin, readBuffer, sizeof(readBuffer));\n\n    // Prepare JSON writer and output stream.\n    char writeBuffer[65536];\n    FileWriteStream os(stdout, writeBuffer, sizeof(writeBuffer));\n    Writer<FileWriteStream> writer(os);\n\n    // JSON reader parse from the input stream and let writer generate the output.\n    CapitalizeFilter<Writer<FileWriteStream> > filter(writer);\n    if (!reader.Parse(is, filter)) {\n        fprintf(stderr, \"\\nError(%u): %s\\n\", (unsigned)reader.GetErrorOffset(), GetParseError_En(reader.GetParseErrorCode()));\n        
return 1;\n    }\n\n    return 0;\n}\n~~~~~~~~~~\n\n注意到，不可简单地把 JSON 当作字符串去改为大写。例如：\n~~~~~~~~~~\n[\"Hello\\nWorld\"]\n~~~~~~~~~~\n\n简单地把整个 JSON 转为大写的话会产生错误的转义符：\n~~~~~~~~~~\n[\"HELLO\\NWORLD\"]\n~~~~~~~~~~\n\n而 `capitalize` 就会产生正确的结果：\n~~~~~~~~~~\n[\"HELLO\\nWORLD\"]\n~~~~~~~~~~\n\n我们还可以开发更复杂的过滤器。然而，由于 SAX 风格 API 在某一时间点只能提供单一事件的信息，使用者需要自行记录一些上下文信息（例如从根节点起的路径、储存其他相关值）。对于处理某些情况，用 DOM 会比 SAX 更容易实现。\n\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/schema.md",
    "content": "# Schema\n\n(This feature was released in v1.1.0)\n\nJSON Schema is a draft standard for describing the format of JSON data. The schema itself is also JSON data. By validating a JSON structure with JSON Schema, your code can safely access the DOM without manually checking types, or whether a key exists, etc. It can also ensure that the serialized JSON conforms to a specified schema.\n\nRapidJSON implements a JSON Schema validator for [JSON Schema Draft v4](http://json-schema.org/documentation.html). If you are not familiar with JSON Schema, you may refer to [Understanding JSON Schema](http://spacetelescope.github.io/understanding-json-schema/).\n\n[TOC]\n\n# Basic Usage {#Basic}\n\nFirst of all, you need to parse a JSON Schema into a `Document`, and then compile the `Document` into a `SchemaDocument`.\n\nSecondly, construct a `SchemaValidator` with the `SchemaDocument`. It is similar to a `Writer` in the sense of handling SAX events. So, you can use `document.Accept(validator)` to validate a document, and then check the validity.\n\n~~~cpp\n#include \"rapidjson/schema.h\"\n\n// ...\n\nDocument sd;\nif (sd.Parse(schemaJson).HasParseError()) {\n    // the schema is not a valid JSON.\n    // ...       \n}\n\nSchemaDocument schema(sd); // Compile a Document to SchemaDocument\nif (!schema.GetError().ObjectEmpty()) {\n    // there was a problem compiling the schema\n    StringBuffer sb;\n    Writer<StringBuffer> w(sb);\n    schema.GetError().Accept(w);\n    printf(\"Invalid schema: %s\\n\", sb.GetString());\n}\n// sd is no longer needed here.\n\nDocument d;\nif (d.Parse(inputJson).HasParseError()) {\n    // the input is not a valid JSON.\n    // ...       
\n}\n\nSchemaValidator validator(schema);\nif (!d.Accept(validator)) {\n    // Input JSON is invalid according to the schema\n    // Output diagnostic information\n    StringBuffer sb;\n    validator.GetInvalidSchemaPointer().StringifyUriFragment(sb);\n    printf(\"Invalid schema: %s\\n\", sb.GetString());\n    printf(\"Invalid keyword: %s\\n\", validator.GetInvalidSchemaKeyword());\n    sb.Clear();\n    validator.GetInvalidDocumentPointer().StringifyUriFragment(sb);\n    printf(\"Invalid document: %s\\n\", sb.GetString());\n}\n~~~\n\nSome notes:\n\n* One `SchemaDocument` can be referenced by multiple `SchemaValidator`s. It will not be modified by `SchemaValidator`s.\n* A `SchemaValidator` may be reused to validate multiple documents. To run it for other documents, call `validator.Reset()` first.\n\n# Validation during parsing/serialization {#Fused}\n\nUnlike most JSON Schema validator implementations, RapidJSON provides a SAX-based schema validator. Therefore, you can parse a JSON from a stream while validating it on the fly. If the validator encounters a JSON value that invalidates the supplied schema, the parsing will be terminated immediately. This design is especially useful for parsing large JSON files.\n\n## DOM parsing {#DOM}\n\nFor using DOM in parsing, `Document` needs some preparation and finalizing tasks, in addition to receiving SAX events, thus it needs some work to route the reader, validator and the document. 
`SchemaValidatingReader` is a helper class that does this work.\n\n~~~cpp\n#include \"rapidjson/filereadstream.h\"\n\n// ...\nSchemaDocument schema(sd); // Compile a Document to SchemaDocument\n\n// Use reader to parse the JSON\nFILE* fp = fopen(\"big.json\", \"r\");\nFileReadStream is(fp, buffer, sizeof(buffer));\n\n// Parse JSON from reader, validate the SAX events, and store in d.\nDocument d;\nSchemaValidatingReader<kParseDefaultFlags, FileReadStream, UTF8<> > reader(is, schema);\nd.Populate(reader);\n\nif (!reader.GetParseResult()) {\n    // Not a valid JSON\n    // When reader.GetParseResult().Code() == kParseErrorTermination,\n    // it may be terminated by:\n    // (1) the validator found that the JSON is invalid according to schema; or\n    // (2) the input stream has I/O error.\n\n    // Check the validation result\n    if (!reader.IsValid()) {\n        // Input JSON is invalid according to the schema\n        // Output diagnostic information\n        StringBuffer sb;\n        reader.GetInvalidSchemaPointer().StringifyUriFragment(sb);\n        printf(\"Invalid schema: %s\\n\", sb.GetString());\n        printf(\"Invalid keyword: %s\\n\", reader.GetInvalidSchemaKeyword());\n        sb.Clear();\n        reader.GetInvalidDocumentPointer().StringifyUriFragment(sb);\n        printf(\"Invalid document: %s\\n\", sb.GetString());\n    }\n}\n~~~\n\n## SAX parsing {#SAX}\n\nUsing SAX in parsing is much simpler. If you only need to validate the JSON without further processing, it is simply:\n\n~~~\nSchemaValidator validator(schema);\nReader reader;\nif (!reader.Parse(stream, validator)) {\n    if (!validator.IsValid()) {\n        // ...    \n    }\n}\n~~~\n\nThis is exactly the method used in the [schemavalidator](example/schemavalidator/schemavalidator.cpp) example. 
The distinct advantage is low memory usage, no matter how big the JSON is (the memory usage depends on the complexity of the schema).\n\nIf you need to handle the SAX events further, then you need to use the template class `GenericSchemaValidator` to set the output handler of the validator:\n\n~~~\nMyHandler handler;\nGenericSchemaValidator<SchemaDocument, MyHandler> validator(schema, handler);\nReader reader;\nif (!reader.Parse(ss, validator)) {\n    if (!validator.IsValid()) {\n        // ...    \n    }\n}\n~~~\n\n## Serialization {#Serialization}\n\nIt is also possible to do validation during serialization. This can ensure the resulting JSON is valid according to the JSON schema.\n\n~~~\nStringBuffer sb;\nWriter<StringBuffer> writer(sb);\nGenericSchemaValidator<SchemaDocument, Writer<StringBuffer> > validator(s, writer);\nif (!d.Accept(validator)) {\n    // Some problem during Accept(), it may be validation or encoding issues.\n    if (!validator.IsValid()) {\n        // ...\n    }\n}\n~~~\n\nOf course, if your application only needs SAX-style serialization, it can simply send SAX events to `SchemaValidator` instead of `Writer`.\n\n# Remote Schema {#Remote}\n\nJSON Schema supports the [`$ref` keyword](http://spacetelescope.github.io/understanding-json-schema/structuring.html), which is a [JSON pointer](doc/pointer.md) referencing a local or remote schema. A local pointer is prefixed with `#`, while a remote pointer is a relative or absolute URI. 
For example:\n\n~~~js\n{ \"$ref\": \"definitions.json#/address\" }\n~~~\n\nAs `SchemaDocument` does not know how to resolve such a URI, it needs a user-provided `IRemoteSchemaDocumentProvider` instance to do so.\n\n~~~\nclass MyRemoteSchemaDocumentProvider : public IRemoteSchemaDocumentProvider {\npublic:\n    virtual const SchemaDocument* GetRemoteDocument(const char* uri, SizeType length) {\n        // Resolves the URI and returns a pointer to that schema.\n    }\n};\n\n// ...\n\nMyRemoteSchemaDocumentProvider provider;\nSchemaDocument schema(sd, &provider);\n~~~\n\n# Conformance {#Conformance}\n\nRapidJSON passed 262 out of 263 tests in the [JSON Schema Test Suite](https://github.com/json-schema/JSON-Schema-Test-Suite) (JSON Schema draft 4).\n\nThe failed test is \"changed scope ref invalid\" of \"change resolution scope\" in `refRemote.json`. This is because the `id` schema keyword and URI combining function are not implemented.\n\nBesides, the `format` schema keyword for string values is ignored, since it is not required by the specification.\n\n## Regular Expression {#Regex}\n\nThe schema keywords `pattern` and `patternProperties` use regular expressions to match the required pattern.\n\nRapidJSON implements a simple NFA regular expression engine, which is used by default. It supports the following syntax.\n\n|Syntax|Description|\n|------|-----------|\n|`ab`    | Concatenation |\n|<code>a&#124;b</code>   | Alternation |\n|`a?`    | Zero or one |\n|`a*`    | Zero or more |\n|`a+`    | One or more |\n|`a{3}`  | Exactly 3 times |\n|`a{3,}` | At least 3 times |\n|`a{3,5}`| 3 to 5 times |\n|`(ab)`  | Grouping |\n|`^a`    | At the beginning |\n|`a$`    | At the end |\n|`.`     | Any character |\n|`[abc]` | Character classes |\n|`[a-c]` | Character class range |\n|`[a-z0-9_]` | Character class combination |\n|`[^abc]` | Negated character classes |\n|`[^a-c]` | Negated character class range |\n|`[\\b]`   | Backspace (U+0008) |\n|<code>\\\\&#124;</code>, `\\\\`, ...  
| Escape characters |\n|`\\f` | Form feed (U+000C) |\n|`\\n` | Line feed (U+000A) |\n|`\\r` | Carriage return (U+000D) |\n|`\\t` | Tab (U+0009) |\n|`\\v` | Vertical tab (U+000B) |\n\nFor C++11 compiler, it is also possible to use the `std::regex` by defining `RAPIDJSON_SCHEMA_USE_INTERNALREGEX=0` and `RAPIDJSON_SCHEMA_USE_STDREGEX=1`. If your schemas do not need `pattern` and `patternProperties`, you can set both macros to zero to disable this feature, which will reduce some code size.\n\n# Performance {#Performance}\n\nMost C++ JSON libraries do not yet support JSON Schema. So we tried to evaluate the performance of RapidJSON's JSON Schema validator according to [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark), which tests 11 JavaScript libraries running on Node.js.\n\nThat benchmark runs validations on [JSON Schema Test Suite](https://github.com/json-schema/JSON-Schema-Test-Suite), in which some test suites and tests are excluded. We made the same benchmarking procedure in [`schematest.cpp`](test/perftest/schematest.cpp).\n\nOn a Mac Book Pro (2.8 GHz Intel Core i7), the following results are collected.\n\n|Validator|Relative speed|Number of test runs per second|\n|---------|:------------:|:----------------------------:|\n|RapidJSON|155%|30682|\n|[`ajv`](https://github.com/epoberezkin/ajv)|100%|19770 (± 1.31%)|\n|[`is-my-json-valid`](https://github.com/mafintosh/is-my-json-valid)|70%|13835 (± 2.84%)|\n|[`jsen`](https://github.com/bugventure/jsen)|57.7%|11411 (± 1.27%)|\n|[`schemasaurus`](https://github.com/AlexeyGrishin/schemasaurus)|26%|5145 (± 1.62%)|\n|[`themis`](https://github.com/playlyfe/themis)|19.9%|3935 (± 2.69%)|\n|[`z-schema`](https://github.com/zaggino/z-schema)|7%|1388 (± 0.84%)|\n|[`jsck`](https://github.com/pandastrike/jsck#readme)|3.1%|606 (± 2.84%)|\n|[`jsonschema`](https://github.com/tdegrunt/jsonschema#readme)|0.9%|185 (± 1.01%)|\n|[`skeemas`](https://github.com/Prestaul/skeemas#readme)|0.8%|154 (± 0.79%)|\n|tv4|0.5%|93 
(± 0.94%)|\n|[`jayschema`](https://github.com/natesilva/jayschema)|0.1%|21 (± 1.14%)|\n\nThat is, RapidJSON is about 1.5x faster than the fastest JavaScript library (ajv). And 1400x faster than the slowest one.\n\n# Schema violation reporting {#Reporting}\n\n(Unreleased as of 2017-09-20)\n\nWhen validating an instance against a JSON Schema,\nit is often desirable to report not only whether the instance is valid,\nbut also the ways in which it violates the schema.\n\nThe `SchemaValidator` class\ncollects errors encountered during validation\ninto a JSON `Value`.\nThis error object can then be accessed as `validator.GetError()`.\n\nThe structure of the error object is subject to change\nin future versions of RapidJSON,\nas there is no standard schema for violations.\nThe details below this point are provisional only.\n\n## General provisions {#ReportingGeneral}\n\nValidation of an instance value against a schema\nproduces an error value.\nThe error value is always an object.\nAn empty object `{}` indicates the instance is valid.\n\n* The name of each member\n  corresponds to the JSON Schema keyword that is violated.\n* The value is either an object describing a single violation,\n  or an array of such objects.\n\nEach violation object contains two string-valued members\nnamed `instanceRef` and `schemaRef`.\n`instanceRef` contains the URI fragment serialization\nof a JSON Pointer to the instance subobject\nin which the violation was detected.\n`schemaRef` contains the URI of the schema\nand the fragment serialization of a JSON Pointer\nto the subschema that was violated.\n\nIndividual violation objects can contain other keyword-specific members.\nThese are detailed further.\n\nFor example, validating this instance:\n\n~~~json\n{\"numbers\": [1, 2, \"3\", 4, 5]}\n~~~\n\nagainst this schema:\n\n~~~json\n{\n  \"type\": \"object\",\n  \"properties\": {\n    \"numbers\": {\"$ref\": \"numbers.schema.json\"}\n  }\n}\n~~~\n\nwhere `numbers.schema.json` refers\n(via a suitable 
`IRemoteSchemaDocumentProvider`)\nto this schema:\n\n~~~json\n{\n  \"type\": \"array\",\n  \"items\": {\"type\": \"number\"}\n}\n~~~\n\nproduces the following error object:\n\n~~~json\n{\n  \"type\": {\n    \"instanceRef\": \"#/numbers/2\",\n    \"schemaRef\": \"numbers.schema.json#/items\",\n    \"expected\": [\"number\"],\n    \"actual\": \"string\"\n  }\n}\n~~~\n\n## Validation keywords for numbers {#Numbers}\n\n### multipleOf {#multipleof}\n\n* `expected`: required number strictly greater than 0.\n  The value of the `multipleOf` keyword specified in the schema.\n* `actual`: required number.\n  The instance value.\n\n### maximum {#maximum}\n\n* `expected`: required number.\n  The value of the `maximum` keyword specified in the schema.\n* `exclusiveMaximum`: optional boolean.\n  This will be true if the schema specified `\"exclusiveMaximum\": true`,\n  and will be omitted otherwise.\n* `actual`: required number.\n  The instance value.\n\n### minimum {#minimum}\n\n* `expected`: required number.\n  The value of the `minimum` keyword specified in the schema.\n* `exclusiveMinimum`: optional boolean.\n  This will be true if the schema specified `\"exclusiveMinimum\": true`,\n  and will be omitted otherwise.\n* `actual`: required number.\n  The instance value.\n\n## Validation keywords for strings {#Strings}\n\n### maxLength {#maxLength}\n\n* `expected`: required number greater than or equal to 0.\n  The value of the `maxLength` keyword specified in the schema.\n* `actual`: required string.\n  The instance value.\n\n### minLength {#minLength}\n\n* `expected`: required number greater than or equal to 0.\n  The value of the `minLength` keyword specified in the schema.\n* `actual`: required string.\n  The instance value.\n\n### pattern {#pattern}\n\n* `actual`: required string.\n  The instance value.\n\n(The expected pattern is not reported\nbecause the internal representation in `SchemaDocument`\ndoes not store the pattern in original string form.)\n\n## Validation 
keywords for arrays {#Arrays}\n\n### additionalItems {#additionalItems}\n\nThis keyword is reported\nwhen the value of `items` schema keyword is an array,\nthe value of `additionalItems` is `false`,\nand the instance is an array\nwith more items than specified in the `items` array.\n\n* `disallowed`: required integer greater than or equal to 0.\n  The index of the first item that has no corresponding schema.\n\n### maxItems and minItems {#maxItems-minItems}\n\n* `expected`: required integer greater than or equal to 0.\n  The value of `maxItems` (respectively, `minItems`)\n  specified in the schema.\n* `actual`: required integer greater than or equal to 0.\n  Number of items in the instance array.\n\n### uniqueItems {#uniqueItems}\n\n* `duplicates`: required array\n  whose items are integers greater than or equal to 0.\n  Indices of items of the instance that are equal.\n\n(RapidJSON only reports the first two equal items,\nfor performance reasons.)\n\n## Validation keywords for objects\n\n### maxProperties and minProperties {#maxProperties-minProperties}\n\n* `expected`: required integer greater than or equal to 0.\n  The value of `maxProperties` (respectively, `minProperties`)\n  specified in the schema.\n* `actual`: required integer greater than or equal to 0.\n  Number of properties in the instance object.\n\n### required {#required}\n\n* `missing`: required array of one or more unique strings.\n  The names of properties\n  that are listed in the value of the `required` schema keyword\n  but not present in the instance object.\n\n### additionalProperties {#additionalProperties}\n\nThis keyword is reported\nwhen the schema specifies `additionalProperties: false`\nand the name of a property of the instance is\nneither listed in the `properties` keyword\nnor matches any regular expression in the `patternProperties` keyword.\n\n* `disallowed`: required string.\n  Name of the offending property of the instance.\n\n(For performance reasons,\nRapidJSON only reports the 
first such property encountered.)\n\n### dependencies {#dependencies}\n\n* `errors`: required object with one or more properties.\n  Names and values of its properties are described below.\n\nRecall that JSON Schema Draft 04 supports\n*schema dependencies*,\nwhere presence of a named *controlling* property\nrequires the instance object to be valid against a subschema,\nand *property dependencies*,\nwhere presence of a controlling property\nrequires other *dependent* properties to be also present.\n\nFor a violated schema dependency,\n`errors` will contain a property\nwith the name of the controlling property\nand its value will be the error object\nproduced by validating the instance object\nagainst the dependent schema.\n\nFor a violated property dependency,\n`errors` will contain a property\nwith the name of the controlling property\nand its value will be an array of one or more unique strings\nlisting the missing dependent properties.\n\n## Validation keywords for any instance type {#AnyTypes}\n\n### enum {#enum}\n\nThis keyword has no additional properties\nbeyond `instanceRef` and `schemaRef`.\n\n* The allowed values are not listed\n  because `SchemaDocument` does not store them in original form.\n* The violating value is not reported\n  because it might be unwieldy.\n\nIf you need to report these details to your users,\nyou can access the necessary information\nby following `instanceRef` and `schemaRef`.\n\n### type {#type}\n\n* `expected`: required array of one or more unique strings,\n  each of which is one of the seven primitive types\n  defined by the JSON Schema Draft 04 Core specification.\n  Lists the types allowed by the `type` schema keyword.\n* `actual`: required string, also one of seven primitive types.\n  The primitive type of the instance.\n\n### allOf, anyOf, and oneOf {#allOf-anyOf-oneOf}\n\n* `errors`: required array of at least one object.\n  There will be as many items as there are subschemas\n  in the `allOf`, `anyOf` or `oneOf` schema 
keyword, respectively.\n  Each item will be the error value\n  produced by validating the instance\n  against the corresponding subschema.\n\nFor `allOf`, at least one error value will be non-empty.\nFor `anyOf`, all error values will be non-empty.\nFor `oneOf`, either all error values will be non-empty,\nor more than one will be empty.\n\n### not {#not}\n\nThis keyword has no additional properties\napart from `instanceRef` and `schemaRef`.\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/schema.zh-cn.md",
"content": "# Schema\n\n（本功能于 v1.1.0 发布）\n\nJSON Schema 是描述 JSON 格式的一个标准草案。一个 schema 本身也是一个 JSON。使用 JSON Schema 去校验 JSON，可以让你的代码安全地访问 DOM，而无须检查类型或键值是否存在等。这也能确保输出的 JSON 是符合指定的 schema。\n\nRapidJSON 实现了一个 [JSON Schema Draft v4](http://json-schema.org/documentation.html) 的校验器。若你不熟悉 JSON Schema，可以参考 [Understanding JSON Schema](http://spacetelescope.github.io/understanding-json-schema/)。\n\n[TOC]\n\n# 基本用法 {#BasicUsage}\n\n首先，你要把 JSON Schema 解析成 `Document`，再把它编译成一个 `SchemaDocument`。\n\n然后，利用该 `SchemaDocument` 创建一个 `SchemaValidator`。它与 `Writer` 相似，都是能够处理 SAX 事件的。因此，你可以用 `document.Accept(validator)` 去校验一个 JSON，然后再获取校验结果。\n\n~~~cpp\n#include \"rapidjson/schema.h\"\n\n// ...\n\nDocument sd;\nif (sd.Parse(schemaJson).HasParseError()) {\n    // 此 schema 不是合法的 JSON\n    // ...       \n}\nSchemaDocument schema(sd); // 把一个 Document 编译至 SchemaDocument\n// 之后不再需要 sd\n\nDocument d;\nif (d.Parse(inputJson).HasParseError()) {\n    // 输入不是一个合法的 JSON\n    // ...       \n}\n\nSchemaValidator validator(schema);\nif (!d.Accept(validator)) {\n    // 输入的 JSON 不合乎 schema\n    // 打印诊断信息\n    StringBuffer sb;\n    validator.GetInvalidSchemaPointer().StringifyUriFragment(sb);\n    printf(\"Invalid schema: %s\\n\", sb.GetString());\n    printf(\"Invalid keyword: %s\\n\", validator.GetInvalidSchemaKeyword());\n    sb.Clear();\n    validator.GetInvalidDocumentPointer().StringifyUriFragment(sb);\n    printf(\"Invalid document: %s\\n\", sb.GetString());\n}\n~~~\n\n一些注意点：\n\n* 一个 `SchemaDocument` 能被多个 `SchemaValidator` 引用。它不会被 `SchemaValidator` 修改。\n* 可以重复使用一个 `SchemaValidator` 来校验多个文件。在校验其他文件前，须先调用 `validator.Reset()`。\n\n# 在解析／生成时进行校验 {#ParsingSerialization}\n\n与大部分 JSON Schema 校验器有所不同，RapidJSON 提供了一个基于 SAX 的 schema 校验器实现。因此，你可以在输入流解析 JSON 的同时进行校验。若校验器遇到一个与 schema 不符的值，就会立即终止解析。这设计对于解析大型 JSON 文件时特别有用。\n\n## DOM 解析 {#DomParsing}\n\n在使用 DOM 进行解析时，`Document` 除了接收 SAX 事件外，还需做一些准备及结束工作，因此，为了连接 `Reader`、`SchemaValidator` 和 `Document` 要做多一点事情。`SchemaValidatingReader` 是一个辅助类去做那些工作。\n\n~~~cpp\n#include 
\"rapidjson/filereadstream.h\"\n\n// ...\nSchemaDocument schema(sd); // 把一个 Document 编译至 SchemaDocument\n\n// 使用 reader 解析 JSON\nFILE* fp = fopen(\"big.json\", \"r\");\nFileReadStream is(fp, buffer, sizeof(buffer));\n\n// 用 reader 解析 JSON，校验它的 SAX 事件，并存储至 d\nDocument d;\nSchemaValidatingReader<kParseDefaultFlags, FileReadStream, UTF8<> > reader(is, schema);\nd.Populate(reader);\n\nif (!reader.GetParseResult()) {\n    // 不是一个合法的 JSON\n    // 当 reader.GetParseResult().Code() == kParseErrorTermination,\n    // 它可能是被以下原因中止：\n    // (1) 校验器发现 JSON 不合乎 schema；或\n    // (2) 输入流有 I/O 错误。\n\n    // 检查校验结果\n    if (!reader.IsValid()) {\n        // 输入的 JSON 不合乎 schema\n        // 打印诊断信息\n        StringBuffer sb;\n        reader.GetInvalidSchemaPointer().StringifyUriFragment(sb);\n        printf(\"Invalid schema: %s\\n\", sb.GetString());\n        printf(\"Invalid keyword: %s\\n\", reader.GetInvalidSchemaKeyword());\n        sb.Clear();\n        reader.GetInvalidDocumentPointer().StringifyUriFragment(sb);\n        printf(\"Invalid document: %s\\n\", sb.GetString());\n    }\n}\n~~~\n\n## SAX 解析 {#SaxParsing}\n\n使用 SAX 解析时，情况就简单得多。若只需要校验 JSON 而无需进一步处理，那么仅需要：\n\n~~~\nSchemaValidator validator(schema);\nReader reader;\nif (!reader.Parse(stream, validator)) {\n    if (!validator.IsValid()) {\n        // ...    \n    }\n}\n~~~\n\n这种方式和 [schemavalidator](example/schemavalidator/schemavalidator.cpp) 例子完全相同。这带来的独特优势是，无论 JSON 多巨大，永远维持低内存用量（内存用量只与 Schema 的复杂度相关）。\n\n若你需要进一步处理 SAX 事件，便可使用模板类 `GenericSchemaValidator` 去设置校验器的输出 `Handler`：\n\n~~~\nMyHandler handler;\nGenericSchemaValidator<SchemaDocument, MyHandler> validator(schema, handler);\nReader reader;\nif (!reader.Parse(ss, validator)) {\n    if (!validator.IsValid()) {\n        // ...    
\n    }\n}\n~~~\n\n## 生成 {#Serialization}\n\n我们也可以在生成（serialization）的时候进行校验。这能确保输出的 JSON 符合一个 JSON Schema。\n\n~~~\nStringBuffer sb;\nWriter<StringBuffer> writer(sb);\nGenericSchemaValidator<SchemaDocument, Writer<StringBuffer> > validator(s, writer);\nif (!d.Accept(validator)) {\n    // Some problem during Accept(), it may be validation or encoding issues.\n    if (!validator.IsValid()) {\n        // ...\n    }\n}\n~~~\n\n当然，如果你的应用仅需要 SAX 风格的生成，那么只需要把 SAX 事件由原来发送到 `Writer`，改为发送到 `SchemaValidator`。\n\n# 远程 Schema {#RemoteSchema}\n\nJSON Schema 支持 [`$ref` 关键字](http://spacetelescope.github.io/understanding-json-schema/structuring.html)，它是一个 [JSON pointer](doc/pointer.zh-cn.md) 引用至一个本地（local）或远程（remote） schema。本地指针的首字符是 `#`，而远程指针是一个相对或绝对 URI。例如：\n\n~~~js\n{ \"$ref\": \"definitions.json#/address\" }\n~~~\n\n由于 `SchemaDocument` 并不知道如何处理那些 URI，它需要使用者提供一个 `IRemoteSchemaDocumentProvider` 的实例去处理。\n\n~~~\nclass MyRemoteSchemaDocumentProvider : public IRemoteSchemaDocumentProvider {\npublic:\n    virtual const SchemaDocument* GetRemoteDocument(const char* uri, SizeType length) {\n        // Resolve the uri and returns a pointer to that schema.\n    }\n};\n\n// ...\n\nMyRemoteSchemaDocumentProvider provider;\nSchemaDocument schema(sd, &provider);\n~~~\n\n# 标准的符合程度 {#Conformance}\n\nRapidJSON 通过了 [JSON Schema Test Suite](https://github.com/json-schema/JSON-Schema-Test-Suite) (Json Schema draft 4) 中 263 个测试的 262 个。\n\n没通过的测试是 `refRemote.json` 中的 \"change resolution scope\" - \"changed scope ref invalid\"。这是由于未实现 `id` schema 关键字及 URI 合并功能。\n\n除此以外，关于字符串类型的 `format` schema 关键字也会被忽略，因为标准中并没需求必须实现。\n\n## 正则表达式 {#RegEx}\n\n`pattern` 及 `patternProperties` 这两个 schema 关键字使用了正则表达式去匹配所需的模式。\n\nRapidJSON 实现了一个简单的 NFA 正则表达式引擎，并预设使用。它支持以下语法。\n\n|语法|描述|\n|------|-----------|\n|`ab`    | 串联 |\n|<code>a&#124;b</code>   | 交替 |\n|`a?`    | 零或一次 |\n|`a*`    | 零或多次 |\n|`a+`    | 一或多次 |\n|`a{3}`  | 刚好 3 次 |\n|`a{3,}` | 至少 3 次 |\n|`a{3,5}`| 3 至 5 次 |\n|`(ab)`  | 分组 |\n|`^a`    | 在开始处 |\n|`a$`    | 
在结束处 |\n|`.`     | 任何字符 |\n|`[abc]` | 字符组 |\n|`[a-c]` | 字符组范围 |\n|`[a-z0-9_]` | 字符组组合 |\n|`[^abc]` | 字符组取反 |\n|`[^a-c]` | 字符组范围取反 |\n|`[\\b]`   | 退格符 (U+0008) |\n|<code>\\\\&#124;</code>, `\\\\`, ...  | 转义字符 |\n|`\\f` | 馈页 (U+000C) |\n|`\\n` | 馈行 (U+000A) |\n|`\\r` | 回车 (U+000D) |\n|`\\t` | 制表 (U+0009) |\n|`\\v` | 垂直制表 (U+000B) |\n\n对于使用 C++11 编译器的使用者，也可使用 `std::regex`，只需定义 `RAPIDJSON_SCHEMA_USE_INTERNALREGEX=0` 及 `RAPIDJSON_SCHEMA_USE_STDREGEX=1`。若你的 schema 无需使用 `pattern` 或 `patternProperties`，可以把两个宏都设为零，以禁用此功能，这样做可节省一些代码体积。\n\n# 性能 {#Performance}\n\n大部分 C++ JSON 库都未支持 JSON Schema。因此我们尝试按照 [json-schema-benchmark](https://github.com/ebdrup/json-schema-benchmark) 去评估 RapidJSON 的 JSON Schema 校验器。该评测测试了 11 个运行在 node.js 上的 JavaScript 库。\n\n该评测校验 [JSON Schema Test Suite](https://github.com/json-schema/JSON-Schema-Test-Suite) 中的测试，当中排除了一些测试套件及个别测试。我们在 [`schematest.cpp`](test/perftest/schematest.cpp) 实现了相同的评测。\n\n在 MacBook Pro (2.8 GHz Intel Core i7) 上收集到以下结果。\n\n|校验器|相对速度|每秒执行的测试数目|\n|---------|:------------:|:----------------------------:|\n|RapidJSON|155%|30682|\n|[`ajv`](https://github.com/epoberezkin/ajv)|100%|19770 (± 1.31%)|\n|[`is-my-json-valid`](https://github.com/mafintosh/is-my-json-valid)|70%|13835 (± 2.84%)|\n|[`jsen`](https://github.com/bugventure/jsen)|57.7%|11411 (± 1.27%)|\n|[`schemasaurus`](https://github.com/AlexeyGrishin/schemasaurus)|26%|5145 (± 1.62%)|\n|[`themis`](https://github.com/playlyfe/themis)|19.9%|3935 (± 2.69%)|\n|[`z-schema`](https://github.com/zaggino/z-schema)|7%|1388 (± 0.84%)|\n|[`jsck`](https://github.com/pandastrike/jsck#readme)|3.1%|606 (± 2.84%)|\n|[`jsonschema`](https://github.com/tdegrunt/jsonschema#readme)|0.9%|185 (± 1.01%)|\n|[`skeemas`](https://github.com/Prestaul/skeemas#readme)|0.8%|154 (± 0.79%)|\n|tv4|0.5%|93 (± 0.94%)|\n|[`jayschema`](https://github.com/natesilva/jayschema)|0.1%|21 (± 1.14%)|\n\n换言之，RapidJSON 比最快的 JavaScript 库（ajv）快约 1.5x。比最慢的快 1400x。\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/stream.md",
"content": "# Stream\n\nIn RapidJSON, `rapidjson::Stream` is a concept for reading/writing JSON. Here we'll first show you how to use the provided streams, and then see how to create a custom stream.\n\n[TOC]\n\n# Memory Streams {#MemoryStreams}\n\nMemory streams store JSON in memory.\n\n## StringStream (Input) {#StringStream}\n\n`StringStream` is the most basic input stream. It represents a complete, read-only JSON stored in memory. It is defined in `rapidjson/rapidjson.h`.\n\n~~~~~~~~~~cpp\n#include \"rapidjson/document.h\" // will include \"rapidjson/rapidjson.h\"\n\nusing namespace rapidjson;\n\n// ...\nconst char json[] = \"[1, 2, 3, 4]\";\nStringStream s(json);\n\nDocument d;\nd.ParseStream(s);\n~~~~~~~~~~\n\nSince this is a very common usage, `Document::Parse(const char*)` is provided to do exactly the same as above:\n\n~~~~~~~~~~cpp\n// ...\nconst char json[] = \"[1, 2, 3, 4]\";\nDocument d;\nd.Parse(json);\n~~~~~~~~~~\n\nNote that `StringStream` is a typedef of `GenericStringStream<UTF8<> >`; the user may use other encodings to represent the character set of the stream.\n\n## StringBuffer (Output) {#StringBuffer}\n\n`StringBuffer` is a simple output stream. It allocates a memory buffer for writing the whole JSON. Use `GetString()` to obtain the buffer.\n\n~~~~~~~~~~cpp\n#include \"rapidjson/stringbuffer.h\"\n#include <rapidjson/writer.h>\n\nStringBuffer buffer;\nWriter<StringBuffer> writer(buffer);\nd.Accept(writer);\n\nconst char* output = buffer.GetString();\n~~~~~~~~~~\n\nWhen the buffer is full, it will increase the capacity automatically. The default capacity is 256 characters (256 bytes for UTF8, 512 bytes for UTF16, etc.). 
The user can provide an allocator and an initial capacity.\n\n~~~~~~~~~~cpp\nStringBuffer buffer1(0, 1024); // Use its allocator, initial size = 1024\nStringBuffer buffer2(allocator, 1024);\n~~~~~~~~~~\n\nBy default, `StringBuffer` will instantiate an internal allocator.\n\nSimilarly, `StringBuffer` is a typedef of `GenericStringBuffer<UTF8<> >`.\n\n# File Streams {#FileStreams}\n\nWhen parsing a JSON from a file, you may read the whole JSON into memory and use ``StringStream`` above.\n\nHowever, if the JSON is big, or memory is limited, you can use `FileReadStream`. It only reads a part of the JSON from the file into a buffer, and then lets that part be parsed. If it runs out of characters in the buffer, it will read the next part from the file.\n\n## FileReadStream (Input) {#FileReadStream}\n\n`FileReadStream` reads the file via a `FILE` pointer. The user needs to provide a buffer.\n\n~~~~~~~~~~cpp\n#include \"rapidjson/filereadstream.h\"\n#include <cstdio>\n\nusing namespace rapidjson;\n\nFILE* fp = fopen(\"big.json\", \"rb\"); // non-Windows use \"r\"\n\nchar readBuffer[65536];\nFileReadStream is(fp, readBuffer, sizeof(readBuffer));\n\nDocument d;\nd.ParseStream(is);\n\nfclose(fp);\n~~~~~~~~~~\n\nDifferent from string streams, `FileReadStream` is a byte stream. It does not handle encodings. If the file is not UTF-8, the byte stream can be wrapped in an `EncodedInputStream`. We will discuss this later in this tutorial.\n\nApart from reading files, the user can also use `FileReadStream` to read `stdin`.\n\n## FileWriteStream (Output) {#FileWriteStream}\n\n`FileWriteStream` is a buffered output stream. 
Its usage is very similar to `FileReadStream`.\n\n~~~~~~~~~~cpp\n#include \"rapidjson/filewritestream.h\"\n#include <rapidjson/writer.h>\n#include <cstdio>\n\nusing namespace rapidjson;\n\nDocument d;\nd.Parse(json);\n// ...\n\nFILE* fp = fopen(\"output.json\", \"wb\"); // non-Windows use \"w\"\n\nchar writeBuffer[65536];\nFileWriteStream os(fp, writeBuffer, sizeof(writeBuffer));\n\nWriter<FileWriteStream> writer(os);\nd.Accept(writer);\n\nfclose(fp);\n~~~~~~~~~~\n\nIt can also redirect the output to `stdout`.\n\n# iostream Wrapper {#iostreamWrapper}\n\nDue to users' requests, RapidJSON also provides official wrappers for `std::basic_istream` and `std::basic_ostream`. However, please note that the performance will be much lower than the other streams above.\n\n## IStreamWrapper {#IStreamWrapper}\n\n`IStreamWrapper` wraps any class derived from `std::istream`, such as `std::istringstream`, `std::stringstream`, `std::ifstream`, `std::fstream`, into RapidJSON's input stream.\n\n~~~cpp\n#include <rapidjson/document.h>\n#include <rapidjson/istreamwrapper.h>\n#include <fstream>\n\nusing namespace rapidjson;\nusing namespace std;\n\nifstream ifs(\"test.json\");\nIStreamWrapper isw(ifs);\n\nDocument d;\nd.ParseStream(isw);\n~~~\n\nFor classes derived from `std::wistream`, use `WIStreamWrapper`.\n\n## OStreamWrapper {#OStreamWrapper}\n\nSimilarly, `OStreamWrapper` wraps any class derived from `std::ostream`, such as `std::ostringstream`, `std::stringstream`, `std::ofstream`, `std::fstream`, into RapidJSON's output stream.\n\n~~~cpp\n#include <rapidjson/document.h>\n#include <rapidjson/ostreamwrapper.h>\n#include <rapidjson/writer.h>\n#include <fstream>\n\nusing namespace rapidjson;\nusing namespace std;\n\nDocument d;\nd.Parse(json);\n\n// ...\n\nofstream ofs(\"output.json\");\nOStreamWrapper osw(ofs);\n\nWriter<OStreamWrapper> writer(osw);\nd.Accept(writer);\n~~~\n\nFor classes derived from `std::wostream`, use `WOStreamWrapper`.\n\n# Encoded Streams 
{#EncodedStreams}\n\nEncoded streams do not contain JSON itself, but they wrap byte streams to provide basic encoding/decoding functions.\n\nAs mentioned above, UTF-8 byte streams can be read directly. However, UTF-16 and UTF-32 have an endianness issue. To handle endianness correctly, the stream needs to convert bytes into characters (e.g. `wchar_t` for UTF-16) while reading, and characters into bytes while writing.\n\nBesides, the stream also needs to handle the [byte order mark (BOM)](http://en.wikipedia.org/wiki/Byte_order_mark). When reading from a byte stream, the BOM needs to be detected, or simply consumed if it exists. When writing to a byte stream, a BOM can optionally be written.\n\nIf the encoding of the stream is known at compile time, you may use `EncodedInputStream` and `EncodedOutputStream`. If the stream can be UTF-8, UTF-16LE, UTF-16BE, UTF-32LE or UTF-32BE JSON, and the encoding is only known at runtime, you may use `AutoUTFInputStream` and `AutoUTFOutputStream`. These streams are defined in `rapidjson/encodedstream.h`.\n\nNote that these encoded streams can be applied to streams other than files. For example, a file in memory or a custom byte stream can be wrapped in an encoded stream.\n\n## EncodedInputStream {#EncodedInputStream}\n\n`EncodedInputStream` has two template parameters. The first one is an `Encoding` class, such as `UTF8` or `UTF16LE`, defined in `rapidjson/encodings.h`. 
The second one is the class of stream to be wrapped.\n\n~~~~~~~~~~cpp\n#include \"rapidjson/document.h\"\n#include \"rapidjson/filereadstream.h\"   // FileReadStream\n#include \"rapidjson/encodedstream.h\"    // EncodedInputStream\n#include <cstdio>\n\nusing namespace rapidjson;\n\nFILE* fp = fopen(\"utf16le.json\", \"rb\"); // non-Windows use \"r\"\n\nchar readBuffer[256];\nFileReadStream bis(fp, readBuffer, sizeof(readBuffer));\n\nEncodedInputStream<UTF16LE<>, FileReadStream> eis(bis);  // wraps bis into eis\n\nDocument d; // Document is GenericDocument<UTF8<> > \nd.ParseStream<0, UTF16LE<> >(eis);  // Parses UTF-16LE file into UTF-8 in memory\n\nfclose(fp);\n~~~~~~~~~~\n\n## EncodedOutputStream {#EncodedOutputStream}\n\n`EncodedOutputStream` is similar, but it has a `bool putBOM` parameter in the constructor, controlling whether to write a BOM into the output byte stream.\n\n~~~~~~~~~~cpp\n#include \"rapidjson/filewritestream.h\"  // FileWriteStream\n#include \"rapidjson/encodedstream.h\"    // EncodedOutputStream\n#include <rapidjson/writer.h>\n#include <cstdio>\n\nDocument d;         // Document is GenericDocument<UTF8<> > \n// ...\n\nFILE* fp = fopen(\"output_utf32le.json\", \"wb\"); // non-Windows use \"w\"\n\nchar writeBuffer[256];\nFileWriteStream bos(fp, writeBuffer, sizeof(writeBuffer));\n\ntypedef EncodedOutputStream<UTF32LE<>, FileWriteStream> OutputStream;\nOutputStream eos(bos, true);   // Write BOM\n\nWriter<OutputStream, UTF8<>, UTF32LE<>> writer(eos);\nd.Accept(writer);   // This generates a UTF-32LE file from UTF-8 in memory\n\nfclose(fp);\n~~~~~~~~~~\n\n## AutoUTFInputStream {#AutoUTFInputStream}\n\nSometimes an application may want to handle all supported JSON encodings. `AutoUTFInputStream` first detects the encoding by BOM. If a BOM is unavailable, it uses characteristics of valid JSON to detect the encoding. 
If neither method succeeds, it falls back to the UTF type provided in the constructor.\n\nSince the characters (code units) may be 8-bit, 16-bit or 32-bit, `AutoUTFInputStream` requires a character type which can hold at least 32 bits. We may use `unsigned` as the template parameter:\n\n~~~~~~~~~~cpp\n#include \"rapidjson/document.h\"\n#include \"rapidjson/filereadstream.h\"   // FileReadStream\n#include \"rapidjson/encodedstream.h\"    // AutoUTFInputStream\n#include <cstdio>\n\nusing namespace rapidjson;\n\nFILE* fp = fopen(\"any.json\", \"rb\"); // non-Windows use \"r\"\n\nchar readBuffer[256];\nFileReadStream bis(fp, readBuffer, sizeof(readBuffer));\n\nAutoUTFInputStream<unsigned, FileReadStream> eis(bis);  // wraps bis into eis\n\nDocument d;         // Document is GenericDocument<UTF8<> > \nd.ParseStream<0, AutoUTF<unsigned> >(eis); // This parses any UTF file into UTF-8 in memory\n\nfclose(fp);\n~~~~~~~~~~\n\nWhen specifying the encoding of the stream, use `AutoUTF<CharType>` as in the `ParseStream()` call above.\n\nYou can obtain the UTF type via `UTFType GetType()`, and check whether a BOM was found with `HasBOM()`.\n\n## AutoUTFOutputStream {#AutoUTFOutputStream}\n\nSimilarly, to choose the output encoding at runtime, we can use `AutoUTFOutputStream`. This class is not automatic *per se*. You need to specify the UTF type, and whether to write a BOM, at runtime.\n\n~~~~~~~~~~cpp\nusing namespace rapidjson;\n\nvoid WriteJSONFile(FILE* fp, UTFType type, bool putBOM, const Document& d) {\n    char writeBuffer[256];\n    FileWriteStream bos(fp, writeBuffer, sizeof(writeBuffer));\n\n    typedef AutoUTFOutputStream<unsigned, FileWriteStream> OutputStream;\n    OutputStream eos(bos, type, putBOM);\n    \n    Writer<OutputStream, UTF8<>, AutoUTF<> > writer(eos);\n    d.Accept(writer);\n}\n~~~~~~~~~~\n\n`AutoUTFInputStream` and `AutoUTFOutputStream` are more convenient than `EncodedInputStream` and `EncodedOutputStream`. 
They just incur a little runtime overhead.\n\n# Custom Stream {#CustomStream}\n\nIn addition to memory/file streams, users can create their own stream classes which fit RapidJSON's API. For example, you may create a network stream, a stream from a compressed file, etc.\n\nRapidJSON combines different types using templates. A class containing all required interfaces can be a stream. The Stream interface is defined in the comments of `rapidjson/rapidjson.h`:\n\n~~~~~~~~~~cpp\nconcept Stream {\n    typename Ch;    //!< Character type of the stream.\n\n    //! Read the current character from stream without moving the read cursor.\n    Ch Peek() const;\n\n    //! Read the current character from stream and move the read cursor to the next character.\n    Ch Take();\n\n    //! Get the current read cursor.\n    //! \\return Number of characters read from start.\n    size_t Tell();\n\n    //! Begin writing operation at the current read pointer.\n    //! \\return The begin writer pointer.\n    Ch* PutBegin();\n\n    //! Write a character.\n    void Put(Ch c);\n\n    //! Flush the buffer.\n    void Flush();\n\n    //! End the writing operation.\n    //! \\param begin The begin write pointer returned by PutBegin().\n    //! \\return Number of characters written.\n    size_t PutEnd(Ch* begin);\n}\n~~~~~~~~~~\n\nInput streams must implement `Peek()`, `Take()` and `Tell()`.\nOutput streams must implement `Put()` and `Flush()`. \nThere are two special interfaces, `PutBegin()` and `PutEnd()`, which are only for *in situ* parsing. Normal streams do not implement them. 
However, if an interface is not needed for a particular stream, it still needs a dummy implementation; otherwise a compilation error will be generated.\n\n## Example: istream wrapper {#ExampleIStreamWrapper}\n\nThe following example is a simple wrapper of `std::istream`, which only implements 3 functions.\n\n~~~~~~~~~~cpp\nclass MyIStreamWrapper {\npublic:\n    typedef char Ch;\n\n    MyIStreamWrapper(std::istream& is) : is_(is) {\n    }\n\n    Ch Peek() const { // 1\n        int c = is_.peek();\n        return c == std::char_traits<char>::eof() ? '\\0' : (Ch)c;\n    }\n\n    Ch Take() { // 2\n        int c = is_.get();\n        return c == std::char_traits<char>::eof() ? '\\0' : (Ch)c;\n    }\n\n    size_t Tell() const { return (size_t)is_.tellg(); } // 3\n\n    Ch* PutBegin() { assert(false); return 0; }\n    void Put(Ch) { assert(false); }\n    void Flush() { assert(false); }\n    size_t PutEnd(Ch*) { assert(false); return 0; }\n\nprivate:\n    MyIStreamWrapper(const MyIStreamWrapper&);\n    MyIStreamWrapper& operator=(const MyIStreamWrapper&);\n\n    std::istream& is_;\n};\n~~~~~~~~~~\n\nUsers can use it to wrap instances of `std::stringstream` or `std::ifstream`.\n\n~~~~~~~~~~cpp\nconst char* json = \"[1,2,3,4]\";\nstd::stringstream ss(json);\nMyIStreamWrapper is(ss);\n\nDocument d;\nd.ParseStream(is);\n~~~~~~~~~~\n\nNote that this implementation may not be as efficient as RapidJSON's memory or file streams, due to internal overheads of the standard library.\n\n## Example: ostream wrapper {#ExampleOStreamWrapper}\n\nThe following example is a simple wrapper of `std::ostream`, which only implements 2 functions.\n\n~~~~~~~~~~cpp\nclass MyOStreamWrapper {\npublic:\n    typedef char Ch;\n\n    MyOStreamWrapper(std::ostream& os) : os_(os) {\n    }\n\n    Ch Peek() const { assert(false); return '\\0'; }\n    Ch Take() { assert(false); return '\\0'; }\n    size_t Tell() const { assert(false); return 0; }\n\n    Ch* PutBegin() { assert(false); return 0; }\n    void Put(Ch c) { os_.put(c); }    
              // 1\n    void Flush() { os_.flush(); }                   // 2\n    size_t PutEnd(Ch*) { assert(false); return 0; }\n\nprivate:\n    MyOStreamWrapper(const MyOStreamWrapper&);\n    MyOStreamWrapper& operator=(const MyOStreamWrapper&);\n\n    std::ostream& os_;\n};\n~~~~~~~~~~\n\nUsers can use it to wrap instances of `std::stringstream` or `std::ofstream`.\n\n~~~~~~~~~~cpp\nDocument d;\n// ...\n\nstd::stringstream ss;\nMyOStreamWrapper os(ss);\n\nWriter<MyOStreamWrapper> writer(os);\nd.Accept(writer);\n~~~~~~~~~~\n\nNote that this implementation may not be as efficient as RapidJSON's memory or file streams, due to internal overheads of the standard library.\n\n# Summary {#Summary}\n\nThis section describes the stream classes available in RapidJSON. Memory streams are simple. File streams can reduce the memory required during JSON parsing and generation, if the JSON is stored in the file system. Encoded streams convert between byte streams and character streams. Finally, users may create custom streams using a simple interface.\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/stream.zh-cn.md",
    "content": "# 流\n\n在 RapidJSON 中，`rapidjson::Stream` 是用於读写 JSON 的概念（概念是指 C++ 的 concept）。在这里我们先介绍如何使用 RapidJSON 提供的各种流。然后再看看如何自行定义流。\n\n[TOC]\n\n# 内存流 {#MemoryStreams}\n\n内存流把 JSON 存储在内存之中。\n\n## StringStream（输入）{#StringStream}\n\n`StringStream` 是最基本的输入流，它表示一个完整的、只读的、存储于内存的 JSON。它在 `rapidjson/rapidjson.h` 中定义。\n\n~~~~~~~~~~cpp\n#include \"rapidjson/document.h\" // 会包含 \"rapidjson/rapidjson.h\"\n\nusing namespace rapidjson;\n\n// ...\nconst char json[] = \"[1, 2, 3, 4]\";\nStringStream s(json);\n\nDocument d;\nd.ParseStream(s);\n~~~~~~~~~~\n\n由于这是非常常用的用法，RapidJSON 提供 `Document::Parse(const char*)` 去做完全相同的事情：\n\n~~~~~~~~~~cpp\n// ...\nconst char json[] = \"[1, 2, 3, 4]\";\nDocument d;\nd.Parse(json);\n~~~~~~~~~~\n\n需要注意，`StringStream` 是 `GenericStringStream<UTF8<> >` 的 typedef，使用者可用其他编码类去代表流所使用的字符集。\n\n## StringBuffer（输出）{#StringBuffer}\n\n`StringBuffer` 是一个简单的输出流。它分配一个内存缓冲区，供写入整个 JSON。可使用 `GetString()` 来获取该缓冲区。\n\n~~~~~~~~~~cpp\n#include \"rapidjson/stringbuffer.h\"\n#include <rapidjson/writer.h>\n\nStringBuffer buffer;\nWriter<StringBuffer> writer(buffer);\nd.Accept(writer);\n\nconst char* output = buffer.GetString();\n~~~~~~~~~~\n\n当缓冲区满溢，它将自动增加容量。缺省容量是 256 个字符（UTF8 是 256 字节，UTF16 是 512 字节等）。使用者能自行提供分配器及初始容量。\n\n~~~~~~~~~~cpp\nStringBuffer buffer1(0, 1024); // 使用它的分配器，初始大小 = 1024\nStringBuffer buffer2(allocator, 1024);\n~~~~~~~~~~\n\n如无设置分配器，`StringBuffer` 会自行实例化一个内部分配器。\n\n相似地，`StringBuffer` 是 `GenericStringBuffer<UTF8<> >` 的 typedef。\n\n# 文件流 {#FileStreams}\n\n当要从文件解析一个 JSON，你可以把整个 JSON 读入内存并使用上述的 `StringStream`。\n\n然而，若 JSON 很大，或是内存有限，你可以改用 `FileReadStream`。它只会从文件读取一部分至缓冲区，然后让那部分被解析。若缓冲区的字符都被读完，它会再从文件读取下一部分。\n\n## FileReadStream（输入） {#FileReadStream}\n\n`FileReadStream` 通过 `FILE` 指针读取文件。使用者需要提供一个缓冲区。\n\n~~~~~~~~~~cpp\n#include \"rapidjson/filereadstream.h\"\n#include <cstdio>\n\nusing namespace rapidjson;\n\nFILE* fp = fopen(\"big.json\", \"rb\"); // 非 Windows 平台使用 \"r\"\n\nchar readBuffer[65536];\nFileReadStream is(fp, readBuffer, 
sizeof(readBuffer));\n\nDocument d;\nd.ParseStream(is);\n\nfclose(fp);\n~~~~~~~~~~\n\n与 `StringStream` 不一样，`FileReadStream` 是一个字节流。它不处理编码。若文件并非 UTF-8 编码，可以把字节流用 `EncodedInputStream` 包装。我们很快会讨论这个问题。\n\n除了读取文件，使用者也可以使用 `FileReadStream` 来读取 `stdin`。\n\n## FileWriteStream（输出）{#FileWriteStream}\n\n`FileWriteStream` 是一个含缓冲功能的输出流。它的用法与 `FileReadStream` 非常相似。\n\n~~~~~~~~~~cpp\n#include \"rapidjson/filewritestream.h\"\n#include <rapidjson/writer.h>\n#include <cstdio>\n\nusing namespace rapidjson;\n\nDocument d;\nd.Parse(json);\n// ...\n\nFILE* fp = fopen(\"output.json\", \"wb\"); // 非 Windows 平台使用 \"w\"\n\nchar writeBuffer[65536];\nFileWriteStream os(fp, writeBuffer, sizeof(writeBuffer));\n\nWriter<FileWriteStream> writer(os);\nd.Accept(writer);\n\nfclose(fp);\n~~~~~~~~~~\n\n它也可以把输出导向 `stdout`。\n\n# iostream 包装类 {#iostreamWrapper}\n\n基于用户的要求，RapidJSON 提供了正式的 `std::basic_istream` 和 `std::basic_ostream` 包装类。然而，请注意其性能会大大低于以上的其他流。\n\n## IStreamWrapper {#IStreamWrapper}\n\n`IStreamWrapper` 把任何继承自 `std::istream` 的类（如 `std::istringstream`、`std::stringstream`、`std::ifstream`、`std::fstream`）包装成 RapidJSON 的输入流。\n\n~~~cpp\n#include <rapidjson/document.h>\n#include <rapidjson/istreamwrapper.h>\n#include <fstream>\n\nusing namespace rapidjson;\nusing namespace std;\n\nifstream ifs(\"test.json\");\nIStreamWrapper isw(ifs);\n\nDocument d;\nd.ParseStream(isw);\n~~~\n\n对于继承自 `std::wistream` 的类，则使用 `WIStreamWrapper`。\n\n## OStreamWrapper {#OStreamWrapper}\n\n相似地，`OStreamWrapper` 把任何继承自 `std::ostream` 的类（如 `std::ostringstream`、`std::stringstream`、`std::ofstream`、`std::fstream`）包装成 RapidJSON 的输出流。\n\n~~~cpp\n#include <rapidjson/document.h>\n#include <rapidjson/ostreamwrapper.h>\n#include <rapidjson/writer.h>\n#include <fstream>\n\nusing namespace rapidjson;\nusing namespace std;\n\nDocument d;\nd.Parse(json);\n\n// ...\n\nofstream ofs(\"output.json\");\nOStreamWrapper osw(ofs);\n\nWriter<OStreamWrapper> writer(osw);\nd.Accept(writer);\n~~~\n\n对于继承自 `std::wostream` 的类，则使用 
`WOStreamWrapper`。\n\n# 编码流 {#EncodedStreams}\n\n编码流（encoded streams）本身不存储 JSON，它们是通过包装字节流来提供基本的编码／解码功能。\n\n如上所述，我们可以直接读入 UTF-8 字节流。然而，UTF-16 及 UTF-32 有字节序（endian）问题。要正确地处理字节序，需要在读取时把字节转换成字符（如对 UTF-16 使用 `wchar_t`），以及在写入时把字符转换为字节。\n\n除此以外，我们也需要处理 [字节顺序标记（byte order mark, BOM）](http://en.wikipedia.org/wiki/Byte_order_mark)。当从一个字节流读取时，需要检测 BOM，或者仅仅是把存在的 BOM 消去。当把 JSON 写入字节流时，也可选择写入 BOM。\n\n若一个流的编码在编译期已知，你可使用 `EncodedInputStream` 及 `EncodedOutputStream`。若一个流可能存储 UTF-8、UTF-16LE、UTF-16BE、UTF-32LE、UTF-32BE 的 JSON，并且编码只能在运行时得知，你便可以使用 `AutoUTFInputStream` 及 `AutoUTFOutputStream`。这些流定义在 `rapidjson/encodedstream.h`。\n\n注意到，这些编码流可以施于文件以外的流。例如，你可以用编码流包装内存中的文件或自定义的字节流。\n\n## EncodedInputStream {#EncodedInputStream}\n\n`EncodedInputStream` 含两个模板参数。第一个是 `Encoding` 类型，例如定义于 `rapidjson/encodings.h` 的 `UTF8`、`UTF16LE`。第二个参数是被包装的流的类型。\n\n~~~~~~~~~~cpp\n#include \"rapidjson/document.h\"\n#include \"rapidjson/filereadstream.h\"   // FileReadStream\n#include \"rapidjson/encodedstream.h\"    // EncodedInputStream\n#include <cstdio>\n\nusing namespace rapidjson;\n\nFILE* fp = fopen(\"utf16le.json\", \"rb\"); // 非 Windows 平台使用 \"r\"\n\nchar readBuffer[256];\nFileReadStream bis(fp, readBuffer, sizeof(readBuffer));\n\nEncodedInputStream<UTF16LE<>, FileReadStream> eis(bis);  // 用 eis 包装 bis\n\nDocument d; // Document 为 GenericDocument<UTF8<> > \nd.ParseStream<0, UTF16LE<> >(eis);  // 把 UTF-16LE 文件解析至内存中的 UTF-8\n\nfclose(fp);\n~~~~~~~~~~\n\n## EncodedOutputStream {#EncodedOutputStream}\n\n`EncodedOutputStream` 也是相似的，但它的构造函数有一个 `bool putBOM` 参数，用于控制是否在输出字节流写入 BOM。\n\n~~~~~~~~~~cpp\n#include \"rapidjson/filewritestream.h\"  // FileWriteStream\n#include \"rapidjson/encodedstream.h\"    // EncodedOutputStream\n#include <rapidjson/writer.h>\n#include <cstdio>\n\nDocument d;         // Document 为 GenericDocument<UTF8<> > \n// ...\n\nFILE* fp = fopen(\"output_utf32le.json\", \"wb\"); // 非 Windows 平台使用 \"w\"\n\nchar writeBuffer[256];\nFileWriteStream bos(fp, writeBuffer, 
sizeof(writeBuffer));\n\ntypedef EncodedOutputStream<UTF32LE<>, FileWriteStream> OutputStream;\nOutputStream eos(bos, true);   // 写入 BOM\n\nWriter<OutputStream, UTF8<>, UTF32LE<>> writer(eos);\nd.Accept(writer);   // 这里从内存的 UTF-8 生成 UTF-32LE 文件\n\nfclose(fp);\n~~~~~~~~~~\n\n## AutoUTFInputStream {#AutoUTFInputStream}\n\n有时候，应用软件可能需要处理所有可支持的 JSON 编码。`AutoUTFInputStream` 会先使用 BOM 来检测编码。若 BOM 不存在，它便会使用合法 JSON 的特性来检测。若两种方法都失败，它就会倒退至构造函数提供的 UTF 类型。\n\n由于字符（编码单元／code unit）可能是 8 位、16 位或 32 位，`AutoUTFInputStream` 需要一个能至少储存 32 位的字符类型。我们可以使用 `unsigned` 作为模板参数：\n\n~~~~~~~~~~cpp\n#include \"rapidjson/document.h\"\n#include \"rapidjson/filereadstream.h\"   // FileReadStream\n#include \"rapidjson/encodedstream.h\"    // AutoUTFInputStream\n#include <cstdio>\n\nusing namespace rapidjson;\n\nFILE* fp = fopen(\"any.json\", \"rb\"); // 非 Windows 平台使用 \"r\"\n\nchar readBuffer[256];\nFileReadStream bis(fp, readBuffer, sizeof(readBuffer));\n\nAutoUTFInputStream<unsigned, FileReadStream> eis(bis);  // 用 eis 包装 bis\n\nDocument d;         // Document 为 GenericDocument<UTF8<> > \nd.ParseStream<0, AutoUTF<unsigned> >(eis); // 把任何 UTF 编码的文件解析至内存中的 UTF-8\n\nfclose(fp);\n~~~~~~~~~~\n\n当要指定流的编码时，可使用上面例子中 `ParseStream()` 的参数 `AutoUTF<CharType>`。\n\n你可以使用 `UTFType GetType()` 去获取 UTF 类型，并且用 `HasBOM()` 检测输入流是否含有 BOM。\n\n## AutoUTFOutputStream {#AutoUTFOutputStream}\n\n相似地，要在运行时选择输出的编码，我们可使用 `AutoUTFOutputStream`。这个类本身并非「自动」。你需要在运行时指定 UTF 类型，以及是否写入 BOM。\n\n~~~~~~~~~~cpp\nusing namespace rapidjson;\n\nvoid WriteJSONFile(FILE* fp, UTFType type, bool putBOM, const Document& d) {\n    char writeBuffer[256];\n    FileWriteStream bos(fp, writeBuffer, sizeof(writeBuffer));\n\n    typedef AutoUTFOutputStream<unsigned, FileWriteStream> OutputStream;\n    OutputStream eos(bos, type, putBOM);\n    \n    Writer<OutputStream, UTF8<>, AutoUTF<> > writer(eos);\n    d.Accept(writer);\n}\n~~~~~~~~~~\n\n`AutoUTFInputStream`／`AutoUTFOutputStream` 比 `EncodedInputStream`／`EncodedOutputStream` 方便，但前者会产生一点运行期额外开销。\n\n# 自定义流 
{#CustomStream}\n\n除了内存／文件流，使用者可创建自定义的、适配 RapidJSON API 的流类。例如，你可以创建网络流、从压缩文件读取的流等等。\n\nRapidJSON 利用模板结合不同的类型。只要一个类包含所有所需的接口，就可以作为一个流。流的接口定义在 `rapidjson/rapidjson.h` 的注释里：\n\n~~~~~~~~~~cpp\nconcept Stream {\n    typename Ch;    //!< 流的字符类型\n\n    //! 从流读取当前字符，不移动读取指针（read cursor）\n    Ch Peek() const;\n\n    //! 从流读取当前字符，移动读取指针至下一字符。\n    Ch Take();\n\n    //! 获取读取指针。\n    //! \\return 从开始以来所读过的字符数量。\n    size_t Tell();\n\n    //! 从当前读取指针开始写入操作。\n    //! \\return 返回开始写入的指针。\n    Ch* PutBegin();\n\n    //! 写入一个字符。\n    void Put(Ch c);\n\n    //! 清空缓冲区。\n    void Flush();\n\n    //! 完成写入操作。\n    //! \\param begin PutBegin() 返回的开始写入指针。\n    //! \\return 已写入的字符数量。\n    size_t PutEnd(Ch* begin);\n}\n~~~~~~~~~~\n\n输入流必须实现 `Peek()`、`Take()` 及 `Tell()`。\n输出流必须实现 `Put()` 及 `Flush()`。\n`PutBegin()` 及 `PutEnd()` 是特殊的接口，仅用于原位（*in situ*）解析。一般的流不需实现它们。然而，即使接口不需用于某些流，仍然需要提供空实现，否则会产生编译错误。\n\n## 例子：istream 的包装类 {#ExampleIStreamWrapper}\n\n以下的简单例子是 `std::istream` 的包装类，它只需实现 3 个函数。\n\n~~~~~~~~~~cpp\nclass MyIStreamWrapper {\npublic:\n    typedef char Ch;\n\n    MyIStreamWrapper(std::istream& is) : is_(is) {\n    }\n\n    Ch Peek() const { // 1\n        int c = is_.peek();\n        return c == std::char_traits<char>::eof() ? '\\0' : (Ch)c;\n    }\n\n    Ch Take() { // 2\n        int c = is_.get();\n        return c == std::char_traits<char>::eof() ? 
'\\0' : (Ch)c;\n    }\n\n    size_t Tell() const { return (size_t)is_.tellg(); } // 3\n\n    Ch* PutBegin() { assert(false); return 0; }\n    void Put(Ch) { assert(false); }\n    void Flush() { assert(false); }\n    size_t PutEnd(Ch*) { assert(false); return 0; }\n\nprivate:\n    MyIStreamWrapper(const MyIStreamWrapper&);\n    MyIStreamWrapper& operator=(const MyIStreamWrapper&);\n\n    std::istream& is_;\n};\n~~~~~~~~~~\n\n使用者能用它来包装 `std::stringstream`、`std::ifstream` 的实例。\n\n~~~~~~~~~~cpp\nconst char* json = \"[1,2,3,4]\";\nstd::stringstream ss(json);\nMyIStreamWrapper is(ss);\n\nDocument d;\nd.ParseStream(is);\n~~~~~~~~~~\n\n但要注意，由于标准库的内部开销问题，此实现的性能可能不如 RapidJSON 的内存／文件流。\n\n## 例子：ostream 的包装类 {#ExampleOStreamWrapper}\n\n以下的例子是 `std::ostream` 的包装类，它只需实现 2 个函数。\n\n~~~~~~~~~~cpp\nclass MyOStreamWrapper {\npublic:\n    typedef char Ch;\n\n    MyOStreamWrapper(std::ostream& os) : os_(os) {\n    }\n\n    Ch Peek() const { assert(false); return '\\0'; }\n    Ch Take() { assert(false); return '\\0'; }\n    size_t Tell() const { assert(false); return 0; }\n\n    Ch* PutBegin() { assert(false); return 0; }\n    void Put(Ch c) { os_.put(c); }                  // 1\n    void Flush() { os_.flush(); }                   // 2\n    size_t PutEnd(Ch*) { assert(false); return 0; }\n\nprivate:\n    MyOStreamWrapper(const MyOStreamWrapper&);\n    MyOStreamWrapper& operator=(const MyOStreamWrapper&);\n\n    std::ostream& os_;\n};\n~~~~~~~~~~\n\n使用者能用它来包装 `std::stringstream`、`std::ofstream` 的实例。\n\n~~~~~~~~~~cpp\nDocument d;\n// ...\n\nstd::stringstream ss;\nMyOStreamWrapper os(ss);\n\nWriter<MyOStreamWrapper> writer(os);\nd.Accept(writer);\n~~~~~~~~~~\n\n但要注意，由于标准库的内部开销问题，此实现的性能可能不如 RapidJSON 的内存／文件流。\n\n# 总结 {#Summary}\n\n本节描述了 RapidJSON 提供的各种流的类。内存流很简单。若 JSON 存储在文件中，文件流可减少 JSON 解析及生成所需的内存量。编码流在字节流和字符流之间作转换。最后，使用者可使用一个简单接口创建自定义的流。\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/tutorial.md",
    "content": "# Tutorial\n\nThis tutorial introduces the basics of the Document Object Model (DOM) API.\n\nAs shown in [Usage at a glance](@ref index), JSON can be parsed into a DOM, and then the DOM can be queried and modified easily, and finally be converted back to JSON.\n\n[TOC]\n\n# Value & Document {#ValueDocument}\n\nEach JSON value is stored in a type called `Value`. A `Document`, representing the DOM, contains the root `Value` of the DOM tree. All public types and functions of RapidJSON are defined in the `rapidjson` namespace.\n\n# Query Value {#QueryValue}\n\nIn this section, we will use excerpts from `example/tutorial/tutorial.cpp`.\n\nAssume we have the following JSON stored in a C string (`const char* json`):\n~~~~~~~~~~js\n{\n    \"hello\": \"world\",\n    \"t\": true,\n    \"f\": false,\n    \"n\": null,\n    \"i\": 123,\n    \"pi\": 3.1416,\n    \"a\": [1, 2, 3, 4]\n}\n~~~~~~~~~~\n\nParse it into a `Document`:\n~~~~~~~~~~cpp\n#include \"rapidjson/document.h\"\n\nusing namespace rapidjson;\n\n// ...\nDocument document;\ndocument.Parse(json);\n~~~~~~~~~~\n\nThe JSON is now parsed into `document` as a *DOM tree*:\n\n![DOM in the tutorial](diagram/tutorial.png)\n\nSince the update to RFC 7159, the root of a conforming JSON document can be any JSON value.  In the earlier RFC 4627, only objects or arrays were allowed as root values. In this case, the root is an object.\n~~~~~~~~~~cpp\nassert(document.IsObject());\n~~~~~~~~~~\n\nLet's query whether a `\"hello\"` member exists in the root object. Since a `Value` can contain different types of value, we may need to verify its type and use a suitable API to obtain the value. 
In this example, the `\"hello\"` member is associated with a JSON string.\n~~~~~~~~~~cpp\nassert(document.HasMember(\"hello\"));\nassert(document[\"hello\"].IsString());\nprintf(\"hello = %s\\n\", document[\"hello\"].GetString());\n~~~~~~~~~~\n\n~~~~~~~~~~\nhello = world\n~~~~~~~~~~\n\nJSON true/false values are represented as `bool`.\n~~~~~~~~~~cpp\nassert(document[\"t\"].IsBool());\nprintf(\"t = %s\\n\", document[\"t\"].GetBool() ? \"true\" : \"false\");\n~~~~~~~~~~\n\n~~~~~~~~~~\nt = true\n~~~~~~~~~~\n\nJSON null can be queried with `IsNull()`.\n~~~~~~~~~~cpp\nprintf(\"n = %s\\n\", document[\"n\"].IsNull() ? \"null\" : \"?\");\n~~~~~~~~~~\n\n~~~~~~~~~~\nn = null\n~~~~~~~~~~\n\nThe JSON number type represents all numeric values. However, C++ needs a more specific type for manipulation.\n\n~~~~~~~~~~cpp\nassert(document[\"i\"].IsNumber());\n\n// In this case, IsUint()/IsInt64()/IsUint64() also return true.\nassert(document[\"i\"].IsInt());          \nprintf(\"i = %d\\n\", document[\"i\"].GetInt());\n// Alternatively (int)document[\"i\"]\n\nassert(document[\"pi\"].IsNumber());\nassert(document[\"pi\"].IsDouble());\nprintf(\"pi = %g\\n\", document[\"pi\"].GetDouble());\n~~~~~~~~~~\n\n~~~~~~~~~~\ni = 123\npi = 3.1416\n~~~~~~~~~~\n\nA JSON array contains a number of elements.\n~~~~~~~~~~cpp\n// Using a reference for consecutive access is handy and faster.\nconst Value& a = document[\"a\"];\nassert(a.IsArray());\nfor (SizeType i = 0; i < a.Size(); i++) // Uses SizeType instead of size_t\n        printf(\"a[%d] = %d\\n\", i, a[i].GetInt());\n~~~~~~~~~~\n\n~~~~~~~~~~\na[0] = 1\na[1] = 2\na[2] = 3\na[3] = 4\n~~~~~~~~~~\n\nNote that, RapidJSON does not automatically convert values between JSON types. For example, if a value is a string, it is invalid to call `GetInt()`. In debug mode it will fail on assertion. 
In release mode, the behavior is undefined.\n\nIn the following sections we discuss details about querying individual types.\n\n## Query Array {#QueryArray}\n\nBy default, `SizeType` is a typedef of `unsigned`. In most systems, an array is limited to storing up to 2^32-1 elements.\n\nYou may access the elements in an array by integer literal, for example, `a[0]`, `a[1]`, `a[2]`.\n\nArray is similar to `std::vector`: instead of using indices, you may also use iterators to access all the elements.\n~~~~~~~~~~cpp\nfor (Value::ConstValueIterator itr = a.Begin(); itr != a.End(); ++itr)\n    printf(\"%d \", itr->GetInt());\n~~~~~~~~~~\n\nAnd other familiar query functions:\n* `SizeType Capacity() const`\n* `bool Empty() const`\n\n### Range-based For Loop (New in v1.1.0)\n\nWhen C++11 is enabled, you can use a range-based for loop to access all elements in an array.\n\n~~~~~~~~~~cpp\nfor (auto& v : a.GetArray())\n    printf(\"%d \", v.GetInt());\n~~~~~~~~~~\n\n## Query Object {#QueryObject}\n\nSimilar to Array, we can access all object members by iterator:\n\n~~~~~~~~~~cpp\nstatic const char* kTypeNames[] = \n    { \"Null\", \"False\", \"True\", \"Object\", \"Array\", \"String\", \"Number\" };\n\nfor (Value::ConstMemberIterator itr = document.MemberBegin();\n    itr != document.MemberEnd(); ++itr)\n{\n    printf(\"Type of member %s is %s\\n\",\n        itr->name.GetString(), kTypeNames[itr->value.GetType()]);\n}\n~~~~~~~~~~\n\n~~~~~~~~~~\nType of member hello is String\nType of member t is True\nType of member f is False\nType of member n is Null\nType of member i is Number\nType of member pi is Number\nType of member a is Array\n~~~~~~~~~~\n\nNote that, when `operator[](const char*)` cannot find the member, it will fail on assertion.\n\nIf we are unsure whether a member exists, we need to call `HasMember()` before calling `operator[](const char*)`. However, this incurs two lookups. 
A better way is to call `FindMember()`, which can check the existence of a member and obtain its value at once:\n\n~~~~~~~~~~cpp\nValue::ConstMemberIterator itr = document.FindMember(\"hello\");\nif (itr != document.MemberEnd())\n    printf(\"%s\\n\", itr->value.GetString());\n~~~~~~~~~~\n\n### Range-based For Loop (New in v1.1.0)\n\nWhen C++11 is enabled, you can use a range-based for loop to access all members in an object.\n\n~~~~~~~~~~cpp\nfor (auto& m : document.GetObject())\n    printf(\"Type of member %s is %s\\n\",\n        m.name.GetString(), kTypeNames[m.value.GetType()]);\n~~~~~~~~~~\n\n## Querying Number {#QueryNumber}\n\nJSON provides a single numerical type called Number. Number can be an integer or a real number. RFC 4627 says the range of Number is specified by the parser implementation.\n\nAs C++ provides several integer and floating point number types, the DOM tries to handle these with the widest possible range and good performance.\n\nWhen a Number is parsed, it is stored in the DOM as one of the following types:\n\nType       | Description\n-----------|---------------------------------------\n`unsigned` | 32-bit unsigned integer\n`int`      | 32-bit signed integer\n`uint64_t` | 64-bit unsigned integer\n`int64_t`  | 64-bit signed integer\n`double`   | 64-bit double precision floating point\n\nWhen querying a number, you can check whether the number can be obtained as the target type:\n\nChecking          | Obtaining\n------------------|---------------------\n`bool IsNumber()` | N/A\n`bool IsUint()`   | `unsigned GetUint()`\n`bool IsInt()`    | `int GetInt()`\n`bool IsUint64()` | `uint64_t GetUint64()`\n`bool IsInt64()`  | `int64_t GetInt64()`\n`bool IsDouble()` | `double GetDouble()`\n\nNote that, an integer value may be obtained in various ways without conversion. For example, a value `x` containing 123 will make `x.IsInt() == x.IsUint() == x.IsInt64() == x.IsUint64() == true`. 
But a value `y` containing -3000000000 will only make `y.IsInt64() == true`.\n\nWhen obtaining the numeric values, `GetDouble()` will convert the internal integer representation to a `double`. Note that, `int` and `unsigned` can be safely converted to `double`, but `int64_t` and `uint64_t` may lose precision (since the mantissa of `double` is only 52 bits).\n\n## Query String {#QueryString}\n\nIn addition to `GetString()`, the `Value` class also contains `GetStringLength()`. Here is why:\n\nAccording to RFC 4627, JSON strings can contain the Unicode character `U+0000`, which must be escaped as `\"\\u0000\"`. The problem is that, C/C++ often uses null-terminated strings, which treat `\\0` as the terminator symbol.\n\nTo conform with RFC 4627, RapidJSON supports strings containing the `U+0000` character. If you need to handle this, you can use `GetStringLength()` to obtain the correct string length.\n\nFor example, after parsing the following JSON to `Document d`:\n\n~~~~~~~~~~js\n{ \"s\" :  \"a\\u0000b\" }\n~~~~~~~~~~\nThe correct length of the string `\"a\\u0000b\"` is 3, as returned by `GetStringLength()`. But `strlen()` returns 1.\n\n`GetStringLength()` can also improve performance, as users often need to call `strlen()` when allocating a buffer.\n\nBesides, `std::string` also supports a constructor:\n\n~~~~~~~~~~cpp\nstring(const char* s, size_t count);\n~~~~~~~~~~\n\nwhich accepts the length of the string as a parameter. This constructor supports storing a null character within the string, and should also provide better performance.\n\n## Comparing values\n\nYou can use `==` and `!=` to compare values. Two values are equal if and only if they have the same type and contents. You can also compare values with primitive types. 
Here is an example:\n\n~~~~~~~~~~cpp\nif (document[\"hello\"] == document[\"n\"]) /*...*/;    // Compare values\nif (document[\"hello\"] == \"world\") /*...*/;          // Compare value with literal string\nif (document[\"i\"] != 123) /*...*/;                  // Compare with integers\nif (document[\"pi\"] != 3.14) /*...*/;                // Compare with double.\n~~~~~~~~~~\n\nArrays/objects compare their elements/members in order. They are equal if and only if their whole subtrees are equal.\n\nNote that, currently if an object contains duplicated named members, comparing equality with any object is always `false`.\n\n# Create/Modify Values {#CreateModifyValues}\n\nThere are several ways to create values. After a DOM tree is created and/or modified, it can be saved as JSON again using `Writer`.\n\n## Change Value Type {#ChangeValueType}\nWhen creating a `Value` or `Document` by the default constructor, its type is Null. To change its type, call `SetXXX()` or an assignment operator, for example:\n\n~~~~~~~~~~cpp\nDocument d; // Null\nd.SetObject();\n\nValue v;    // Null\nv.SetInt(10);\nv = 10;     // Shortcut, same as above\n~~~~~~~~~~\n\n### Overloaded Constructors\nThere are also overloaded constructors for several types:\n\n~~~~~~~~~~cpp\nValue b(true);    // calls Value(bool)\nValue i(-123);    // calls Value(int)\nValue u(123u);    // calls Value(unsigned)\nValue d(1.5);     // calls Value(double)\n~~~~~~~~~~\n\nTo create an empty object or array, you may use `SetObject()`/`SetArray()` after the default constructor, or use `Value(Type)` in one call:\n\n~~~~~~~~~~cpp\nValue o(kObjectType);\nValue a(kArrayType);\n~~~~~~~~~~\n\n## Move Semantics {#MoveSemantics}\n\nA very special decision during the design of RapidJSON is that assignment of a value does not copy the source value to the destination value. Instead, the value from the source is moved to the destination. 
For example,\n\n~~~~~~~~~~cpp\nValue a(123);\nValue b(456);\na = b;         // b becomes a Null value, a becomes number 456.\n~~~~~~~~~~\n\n![Assignment with move semantics.](diagram/move1.png)\n\nWhy? What is the advantage of this semantics?\n\nThe simple answer is performance. For fixed-size JSON types (Number, True, False, Null), copying them is fast and easy. However, for variable-size JSON types (String, Array, Object), copying them will incur a lot of overhead, and these overheads are often unnoticed. Especially when we need to create a temporary object, copy it to another variable, and then destruct it.\n\nFor example, if normal *copy* semantics were used:\n\n~~~~~~~~~~cpp\nDocument d;\nValue o(kObjectType);\n{\n    Value contacts(kArrayType);\n    // adding elements to contacts array.\n    // ...\n    o.AddMember(\"contacts\", contacts, d.GetAllocator());  // deep clone contacts (may be with lots of allocations)\n    // destruct contacts.\n}\n~~~~~~~~~~\n\n![Copy semantics makes a lot of copy operations.](diagram/move2.png)\n\nThe object `o` needs to allocate a buffer of the same size as contacts, make a deep clone of it, and then finally contacts is destructed. This will incur a lot of unnecessary allocations/deallocations and memory copying.\n\nThere are solutions to prevent actually copying these data, such as reference counting and garbage collection (GC).\n\nTo make RapidJSON simple and fast, we chose to use *move* semantics for assignment. It is similar to `std::auto_ptr` which transfers ownership during assignment. 
Move is much faster and simpler: it just destructs the original value, `memcpy()`s the source to the destination, and finally sets the source as Null type.\n\nSo, with move semantics, the above example becomes:\n\n~~~~~~~~~~cpp\nDocument d;\nValue o(kObjectType);\n{\n    Value contacts(kArrayType);\n    // adding elements to contacts array.\n    o.AddMember(\"contacts\", contacts, d.GetAllocator());  // just memcpy() of contacts itself to the value of new member (16 bytes)\n    // contacts became Null here. Its destruction is trivial.\n}\n~~~~~~~~~~\n\n![Move semantics makes no copying.](diagram/move3.png)\n\nThis is called the move assignment operator in C++11. As RapidJSON supports C++03, it adopts move semantics in its assignment operator and in all other modifying functions, like `AddMember()` and `PushBack()`.\n\n### Move semantics and temporary values {#TemporaryValues}\n\nSometimes, it is convenient to construct a Value in place, before passing it to one of the \"moving\" functions, like `PushBack()` or `AddMember()`.  As temporary objects can't be converted to proper Value references, the convenience function `Move()` is available:\n\n~~~~~~~~~~cpp\nValue a(kArrayType);\nDocument::AllocatorType& allocator = document.GetAllocator();\n// a.PushBack(Value(42), allocator);       // will not compile\na.PushBack(Value().SetInt(42), allocator); // fluent API\na.PushBack(Value(42).Move(), allocator);   // same as above\n~~~~~~~~~~\n\n## Create String {#CreateString}\nRapidJSON provides two strategies for storing strings.\n\n1. copy-string: allocates a buffer, and then copies the source data into it.\n2. const-string: simply stores a pointer to the string.\n\nCopy-string is always safe because it owns a copy of the data. Const-string can be used for storing a string literal, and for in-situ parsing which will be mentioned in the DOM section.\n\nTo make memory allocation customizable, RapidJSON requires users to pass an instance of allocator, whenever an operation may require allocation. 
This design is needed to prevent storing an allocator (or Document) pointer per Value.\n\nTherefore, when we assign a copy-string, we call this overloaded `SetString()` with allocator:\n\n~~~~~~~~~~cpp\nDocument document;\nValue author;\nchar buffer[10];\nint len = sprintf(buffer, \"%s %s\", \"Milo\", \"Yip\"); // dynamically created string.\nauthor.SetString(buffer, len, document.GetAllocator());\nmemset(buffer, 0, sizeof(buffer));\n// author.GetString() still contains \"Milo Yip\" after buffer is destroyed\n~~~~~~~~~~\n\nIn this example, we get the allocator from a `Document` instance. This is a common idiom when using RapidJSON. But you may use other instances of allocator.\n\nBesides, the above `SetString()` requires length. This can handle null characters within a string. There is another `SetString()` overloaded function without the length parameter. And it assumes the input is null-terminated and calls a `strlen()`-like function to obtain the length.\n\nFinally, for a string literal or string with a safe life-cycle one can use the const-string version of `SetString()`, which lacks an allocator parameter.  For string literals (or constant character arrays), simply passing the literal as parameter is safe and efficient:\n\n~~~~~~~~~~cpp\nValue s;\ns.SetString(\"rapidjson\");    // can contain null character, length derived at compile time\ns = \"rapidjson\";             // shortcut, same as above\n~~~~~~~~~~\n\nFor a character pointer, RapidJSON requires it to be marked as safe before using it without copying. 
This can be achieved by using the `StringRef` function:\n\n~~~~~~~~~cpp\nconst char * cstr = getenv(\"USER\");\nsize_t cstr_len = ...;                 // in case length is available\nValue s;\n// s.SetString(cstr);                  // will not compile\ns.SetString(StringRef(cstr));          // ok, assume safe lifetime, null-terminated\ns = StringRef(cstr);                   // shortcut, same as above\ns.SetString(StringRef(cstr,cstr_len)); // faster, can contain null character\ns = StringRef(cstr,cstr_len);          // shortcut, same as above\n\n~~~~~~~~~\n\n## Modify Array {#ModifyArray}\nValue with array type provides an API similar to `std::vector`.\n\n* `Clear()`\n* `Reserve(SizeType, Allocator&)`\n* `Value& PushBack(Value&, Allocator&)`\n* `template <typename T> GenericValue& PushBack(T, Allocator&)`\n* `Value& PopBack()`\n* `ValueIterator Erase(ConstValueIterator pos)`\n* `ValueIterator Erase(ConstValueIterator first, ConstValueIterator last)`\n\nNote that, `Reserve(...)` and `PushBack(...)` may allocate memory for the array elements, therefore requiring an allocator.\n\nHere is an example of `PushBack()`:\n\n~~~~~~~~~~cpp\nValue a(kArrayType);\nDocument::AllocatorType& allocator = document.GetAllocator();\n\nfor (int i = 5; i <= 10; i++)\n    a.PushBack(i, allocator);   // allocator is needed for potential realloc().\n\n// Fluent interface\na.PushBack(\"Lua\", allocator).PushBack(\"Mio\", allocator);\n~~~~~~~~~~\n\nThis API differs from STL in that `PushBack()`/`PopBack()` return the array reference itself. This is called _fluent interface_.\n\nIf you want to add a non-constant string or a string without sufficient lifetime (see [Create String](#CreateString)) to the array, you need to create a string Value by using the copy-string API.  
To avoid the need for an intermediate variable, you can use a [temporary value](#TemporaryValues) in place:\n\n~~~~~~~~~~cpp\n// in-place Value parameter\ncontact.PushBack(Value(\"copy\", document.GetAllocator()).Move(), // copy string\n                 document.GetAllocator());\n\n// explicit parameters\nValue val(\"key\", document.GetAllocator()); // copy string\ncontact.PushBack(val, document.GetAllocator());\n~~~~~~~~~~\n\n## Modify Object {#ModifyObject}\nThe Object class is a collection of key-value pairs (members). Each key must be a string value. To modify an object, either add or remove members. The following API is for adding members:\n\n* `Value& AddMember(Value&, Value&, Allocator& allocator)`\n* `Value& AddMember(StringRefType, Value&, Allocator&)`\n* `template <typename T> Value& AddMember(StringRefType, T value, Allocator&)`\n\nHere is an example.\n\n~~~~~~~~~~cpp\nValue contact(kObjectType);\ncontact.AddMember(\"name\", \"Milo\", document.GetAllocator());\ncontact.AddMember(\"married\", true, document.GetAllocator());\n~~~~~~~~~~\n\nThe name parameter with `StringRefType` is similar to the interface of the `SetString` function for string values. These overloads are used to avoid the need for copying the `name` string, since constant key names are very common in JSON objects.\n\nIf you need to create a name from a non-constant string or a string without sufficient lifetime (see [Create String](#CreateString)), you need to create a string Value by using the copy-string API.  
To avoid the need for an intermediate variable, you can use a [temporary value](#TemporaryValues) in place:\n\n~~~~~~~~~~cpp\n// in-place Value parameter\ncontact.AddMember(Value(\"copy\", document.GetAllocator()).Move(), // copy string\n                  Value().Move(),                                // null value\n                  document.GetAllocator());\n\n// explicit parameters\nValue key(\"key\", document.GetAllocator()); // copy string name\nValue val(42);                             // some value\ncontact.AddMember(key, val, document.GetAllocator());\n~~~~~~~~~~\n\nFor removing members, there are several choices: \n\n* `bool RemoveMember(const Ch* name)`: Remove a member by searching its name (linear time complexity).\n* `bool RemoveMember(const Value& name)`: same as above but `name` is a Value.\n* `MemberIterator RemoveMember(MemberIterator)`: Remove a member by iterator (_constant_ time complexity).\n* `MemberIterator EraseMember(MemberIterator)`: similar to the above but it preserves order of members (linear time complexity).\n* `MemberIterator EraseMember(MemberIterator first, MemberIterator last)`: remove a range of members, preserves order (linear time complexity).\n\n`MemberIterator RemoveMember(MemberIterator)` uses a \"move-last\" trick to achieve constant time complexity. Basically the member at the iterator is destructed, and then the last element is moved to that position. 
So the order of the remaining members is changed.\n\n## Deep Copy Value {#DeepCopyValue}\nIf we really need to copy a DOM tree, we can use two APIs for deep copy: constructor with allocator, and `CopyFrom()`.\n\n~~~~~~~~~~cpp\nDocument d;\nDocument::AllocatorType& a = d.GetAllocator();\nValue v1(\"foo\");\n// Value v2(v1); // not allowed\n\nValue v2(v1, a);                      // make a copy\nassert(v1.IsString());                // v1 untouched\nd.SetArray().PushBack(v1, a).PushBack(v2, a);\nassert(v1.IsNull() && v2.IsNull());   // both moved to d\n\nv2.CopyFrom(d, a);                    // copy whole document to v2\nassert(d.IsArray() && d.Size() == 2); // d untouched\nv1.SetObject().AddMember(\"array\", v2, a);\nd.PushBack(v1, a);\n~~~~~~~~~~\n\n## Swap Values {#SwapValues}\n\n`Swap()` is also provided.\n\n~~~~~~~~~~cpp\nValue a(123);\nValue b(\"Hello\");\na.Swap(b);\nassert(a.IsString());\nassert(b.IsInt());\n~~~~~~~~~~\n\nSwapping two DOM trees is fast (constant time), despite the complexity of the trees.\n\n# What's next {#WhatsNext}\n\nThis tutorial shows the basics of DOM tree query and manipulation. There are several important concepts in RapidJSON:\n\n1. [Streams](doc/stream.md) are channels for reading/writing JSON, which can be an in-memory string, a file stream, etc. Users can also create their own streams.\n2. [Encoding](doc/encoding.md) defines which character encoding is used in streams and memory. RapidJSON also provides Unicode conversion/validation internally.\n3. [DOM](doc/dom.md)'s basics are already covered in this tutorial. Discover more advanced features such as *in situ* parsing, other parsing options and advanced usages.\n4. [SAX](doc/sax.md) is the foundation of the parsing/generating facilities in RapidJSON. Learn how to use `Reader`/`Writer` to implement even faster applications. Also try `PrettyWriter` to format the JSON.\n5. [Performance](doc/performance.md) shows some in-house and third-party benchmarks.\n6. 
[Internals](doc/internals.md) describes some internal designs and techniques of RapidJSON.\n\nYou may also refer to the [FAQ](doc/faq.md), API documentation, examples and unit tests.\n"
  },
  {
    "path": "C/thirdparty/rapidjson/doc/tutorial.zh-cn.md",
    "content": "# 教程\n\n本教程简介文件对象模型（Document Object Model, DOM）API。\n\n如 [用法一览](../readme.zh-cn.md#用法一览) 中所示，可以解析一个 JSON 至 DOM，然后就可以轻松查询及修改 DOM，并最终转换回 JSON。\n\n[TOC]\n\n# Value 及 Document {#ValueDocument}\n\n每个 JSON 值都储存为 `Value` 类，而 `Document` 类则表示整个 DOM，它存储了一个 DOM 树的根 `Value`。RapidJSON 的所有公开类型及函数都在 `rapidjson` 命名空间中。\n\n# 查询 Value {#QueryValue}\n\n在本节中，我们会使用到 `example/tutorial/tutorial.cpp` 中的代码片段。\n\n假设我们用 C 语言的字符串储存一个 JSON（`const char* json`）：\n~~~~~~~~~~js\n{\n    \"hello\": \"world\",\n    \"t\": true,\n    \"f\": false,\n    \"n\": null,\n    \"i\": 123,\n    \"pi\": 3.1416,\n    \"a\": [1, 2, 3, 4]\n}\n~~~~~~~~~~\n\n把它解析至一个 `Document`：\n~~~~~~~~~~cpp\n#include \"rapidjson/document.h\"\n\nusing namespace rapidjson;\n\n// ...\nDocument document;\ndocument.Parse(json);\n~~~~~~~~~~\n\n那么现在该 JSON 就会被解析至 `document` 中，成为一棵 *DOM 树*：\n\n![教程中的 DOM](diagram/tutorial.png)\n\n自从 RFC 7159 作出更新，合法 JSON 文件的根可以是任何类型的 JSON 值。而在较早的 RFC 4627 中，根值只允许是 Object 或 Array。而在上述例子中，根是一个 Object。\n~~~~~~~~~~cpp\nassert(document.IsObject());\n~~~~~~~~~~\n\n让我们查询一下根 Object 中有没有 `\"hello\"` 成员。由于一个 `Value` 可包含不同类型的值，我们可能需要验证它的类型，并使用合适的 API 去获取其值。在此例中，`\"hello\"` 成员关联到一个 JSON String。\n~~~~~~~~~~cpp\nassert(document.HasMember(\"hello\"));\nassert(document[\"hello\"].IsString());\nprintf(\"hello = %s\\n\", document[\"hello\"].GetString());\n~~~~~~~~~~\n\n~~~~~~~~~~\nhello = world\n~~~~~~~~~~\n\nJSON True/False 值是以 `bool` 表示的。\n~~~~~~~~~~cpp\nassert(document[\"t\"].IsBool());\nprintf(\"t = %s\\n\", document[\"t\"].GetBool() ? \"true\" : \"false\");\n~~~~~~~~~~\n\n~~~~~~~~~~\nt = true\n~~~~~~~~~~\n\nJSON Null 值可用 `IsNull()` 查询。\n~~~~~~~~~~cpp\nprintf(\"n = %s\\n\", document[\"n\"].IsNull() ? 
\"null\" : \"?\");\n~~~~~~~~~~\n\n~~~~~~~~~~\nn = null\n~~~~~~~~~~\n\nJSON Number 类型表示所有数值。然而，C++ 需要使用更专门的类型。\n\n~~~~~~~~~~cpp\nassert(document[\"i\"].IsNumber());\n\n// 在此情况下，IsUint()/IsInt64()/IsUint64() 也会返回 true\nassert(document[\"i\"].IsInt());          \nprintf(\"i = %d\\n\", document[\"i\"].GetInt());\n// 另一种用法： (int)document[\"i\"]\n\nassert(document[\"pi\"].IsNumber());\nassert(document[\"pi\"].IsDouble());\nprintf(\"pi = %g\\n\", document[\"pi\"].GetDouble());\n~~~~~~~~~~\n\n~~~~~~~~~~\ni = 123\npi = 3.1416\n~~~~~~~~~~\n\nJSON Array 包含一些元素。\n~~~~~~~~~~cpp\n// 使用引用来连续访问，方便之余还更高效。\nconst Value& a = document[\"a\"];\nassert(a.IsArray());\nfor (SizeType i = 0; i < a.Size(); i++) // 使用 SizeType 而不是 size_t\n        printf(\"a[%d] = %d\\n\", i, a[i].GetInt());\n~~~~~~~~~~\n\n~~~~~~~~~~\na[0] = 1\na[1] = 2\na[2] = 3\na[3] = 4\n~~~~~~~~~~\n\n注意，RapidJSON 并不自动转换各种 JSON 类型。例如，对一个 String 的 Value 调用 `GetInt()` 是非法的。在调试模式下，它会断言失败。在发布模式下，其行为是未定义的。\n\n以下将会讨论有关查询各类型的细节。\n\n## 查询 Array {#QueryArray}\n\n缺省情况下，`SizeType` 是 `unsigned` 的 typedef。在多数系统中，Array 最多能存储 2^32-1 个元素。\n\n你可以用整数字面量访问元素，如 `a[0]`、`a[1]`、`a[2]`。\n\nArray 与 `std::vector` 相似，除了使用索引，也可使用迭代器来访问所有元素。\n~~~~~~~~~~cpp\nfor (Value::ConstValueIterator itr = a.Begin(); itr != a.End(); ++itr)\n    printf(\"%d \", itr->GetInt());\n~~~~~~~~~~\n\n还有一些熟悉的查询函数：\n* `SizeType Capacity() const`\n* `bool Empty() const`\n\n### 范围 for 循环 (v1.1.0 中的新功能)\n\n当使用 C++11 功能时，你可使用范围 for 循环去访问 Array 内的所有元素。\n\n~~~~~~~~~~cpp\nfor (auto& v : a.GetArray())\n    printf(\"%d \", v.GetInt());\n~~~~~~~~~~\n\n## 查询 Object {#QueryObject}\n\n和 Array 相似，我们可以用迭代器去访问所有 Object 成员：\n\n~~~~~~~~~~cpp\nstatic const char* kTypeNames[] = \n    { \"Null\", \"False\", \"True\", \"Object\", \"Array\", \"String\", \"Number\" };\n\nfor (Value::ConstMemberIterator itr = document.MemberBegin();\n    itr != document.MemberEnd(); ++itr)\n{\n    printf(\"Type of member %s is %s\\n\",\n        itr->name.GetString(), 
kTypeNames[itr->value.GetType()]);\n}\n~~~~~~~~~~\n\n~~~~~~~~~~\nType of member hello is String\nType of member t is True\nType of member f is False\nType of member n is Null\nType of member i is Number\nType of member pi is Number\nType of member a is Array\n~~~~~~~~~~\n\n注意，当 `operator[](const char*)` 找不到成员，它会断言失败。\n\n若我们不确定一个成员是否存在，便需要在调用 `operator[](const char*)` 前先调用 `HasMember()`。然而，这会导致两次查找。更好的做法是调用 `FindMember()`，它能同时检查成员是否存在并返回它的 Value：\n\n~~~~~~~~~~cpp\nValue::ConstMemberIterator itr = document.FindMember(\"hello\");\nif (itr != document.MemberEnd())\n    printf(\"%s\\n\", itr->value.GetString());\n~~~~~~~~~~\n\n### 范围 for 循环 (v1.1.0 中的新功能)\n\n当使用 C++11 功能时，你可使用范围 for 循环去访问 Object 内的所有成员。\n\n~~~~~~~~~~cpp\nfor (auto& m : document.GetObject())\n    printf(\"Type of member %s is %s\\n\",\n        m.name.GetString(), kTypeNames[m.value.GetType()]);\n~~~~~~~~~~\n\n## 查询 Number {#QueryNumber}\n\nJSON 只提供一种数值类型──Number。数字可以是整数或实数。RFC 4627 规定数字的范围由解析器指定。\n\n由于 C++ 提供多种整数及浮点数类型，DOM 尝试尽量提供最广的范围及良好性能。\n\n当解析一个 Number 时，它会被存储在 DOM 之中，成为下列其中一个类型：\n\n类型       | 描述\n-----------|---------------------------------------\n`unsigned` | 32 位无号整数\n`int`      | 32 位有号整数\n`uint64_t` | 64 位无号整数\n`int64_t`  | 64 位有号整数\n`double`   | 64 位双精度浮点数\n\n当查询一个 Number 时，你可以检查该数字是否能以目标类型来提取：\n\n检查              | 提取\n------------------|---------------------\n`bool IsNumber()` | 不适用\n`bool IsUint()`   | `unsigned GetUint()`\n`bool IsInt()`    | `int GetInt()`\n`bool IsUint64()` | `uint64_t GetUint64()`\n`bool IsInt64()`  | `int64_t GetInt64()`\n`bool IsDouble()` | `double GetDouble()`\n\n注意，一个整数可能用几种类型来提取，而无需转换。例如，一个名为 `x` 的 Value 包含 123，那么 `x.IsInt() == x.IsUint() == x.IsInt64() == x.IsUint64() == true`。但如果一个名为 `y` 的 Value 包含 -3000000000，那么仅会令 `y.IsInt64() == true`。\n\n当要提取 Number 类型时，`GetDouble()` 会把内部整数的表示转换成 `double`。注意 `int` 和 `unsigned` 可以安全地转换至 `double`，但 `int64_t` 及 `uint64_t` 可能会丧失精度（因为 `double` 的尾数只有 52 位）。\n\n## 查询 String {#QueryString}\n\n除了 `GetString()`，`Value` 类也有一个 
`GetStringLength()`. This section explains why.\n\nAccording to RFC 4627, JSON Strings can contain the Unicode character `U+0000`, which is represented as `\"\\u0000\"` in JSON. The problem is that C/C++ often uses null-terminated strings, which treat `\\0` as the terminator.\n\nTo conform to RFC 4627, RapidJSON supports Strings that contain `U+0000`. If you need to handle such Strings, use `GetStringLength()` to obtain the correct string length.\n\nFor example, after parsing the following JSON into a `Document d`:\n\n~~~~~~~~~~js\n{ \"s\" :  \"a\\u0000b\" }\n~~~~~~~~~~\nThe correct length of the value `\"a\\u0000b\"` is 3, but `strlen()` returns 1.\n\n`GetStringLength()` can also improve performance, since the user may otherwise need to call `strlen()` when allocating a buffer.\n\nBesides, `std::string` also supports this constructor:\n\n~~~~~~~~~~cpp\nstring(const char* s, size_t count);\n~~~~~~~~~~\n\nThis constructor accepts the string length as a parameter. It supports storing null characters within the string and should also give better performance.\n\n## Compare Two Values\n\nYou can compare two Values with `==` and `!=`. Two Values are equal if and only if their types and contents are the same. You can also compare a Value with primitive types. Here is an example.\n\n~~~~~~~~~~cpp\nif (document[\"hello\"] == document[\"n\"]) /*...*/;    // Compare two values\nif (document[\"hello\"] == \"world\") /*...*/;          // Compare with a string literal\nif (document[\"i\"] != 123) /*...*/;                  // Compare with an integer\nif (document[\"pi\"] != 3.14) /*...*/;                // Compare with a double\n~~~~~~~~~~\n\nArrays/Objects are compared by their elements/members in order. They are equal if and only if their whole subtrees are equal.\n\nNote that, currently, if an Object contains duplicate named members, comparing it with any Object always returns `false`.\n\n# Create/Modify Values {#CreateModifyValues}\n\nThere are several ways to create values. After a DOM tree is created or modified, it can be saved as JSON again using `Writer`.\n\n## Change Value Type {#ChangeValueType}\nWhen a Value or Document is created with the default constructor, its type is Null. To change its type, call a `SetXXX()` method or the assignment operator, for example:\n\n~~~~~~~~~~cpp\nDocument d; // Null\nd.SetObject();\n\nValue v;    // Null\nv.SetInt(10);\nv = 10;     // Shortcut, same as above\n~~~~~~~~~~\n\n### Overloaded Constructors\nSeveral types also have overloaded constructors:\n\n~~~~~~~~~~cpp\nValue b(true);    // calls Value(bool)\nValue i(-123);    // calls Value(int)\nValue u(123u);    // calls Value(unsigned)\nValue d(1.5);     // calls Value(double)\n~~~~~~~~~~\n\nTo create an empty Object or Array, you can use `SetObject()`/`SetArray()` after the default constructor, or the `Value(Type)` constructor in one shot:\n\n~~~~~~~~~~cpp\nValue o(kObjectType);\nValue a(kArrayType);\n~~~~~~~~~~\n\n## Move Semantics {#MoveSemantics}\n\nA very special decision in the design of RapidJSON is that, when assigning one Value to another, the source Value is not copied to the destination Value but *moved* to the destination 
Value. For example:\n\n~~~~~~~~~~cpp\nValue a(123);\nValue b(456);\nb = a;         // a becomes Null; b becomes the number 123.\n~~~~~~~~~~\n\n![Assignment with move semantics.](diagram/move1.png)\n\nWhy? What are the advantages of this semantics?\n\nThe simple answer is performance. For fixed-size JSON types (Number, True, False, Null), copying them is fast and easy. For variable-size JSON types (String, Array, Object), however, copying them incurs a lot of overhead, and the overhead often goes unnoticed, especially when we need to create a temporary Object, copy it to another variable, and then destruct it.\n\nFor example, with normal *copy* semantics:\n\n~~~~~~~~~~cpp\nValue o(kObjectType);\n{\n    Value contacts(kArrayType);\n    // adding elements to contacts array.\n    // ...\n    o.AddMember(\"contacts\", contacts, d.GetAllocator());  // deep clone contacts (may incur lots of allocations)\n    // destruct contacts.\n}\n~~~~~~~~~~\n\n![Copy semantics requires a lot of copy operations.](diagram/move2.png)\n\nThe Object `o` needs to allocate a buffer of the same size as contacts, make a deep copy of contacts, and finally destruct contacts. This incurs a lot of unnecessary allocations/deallocations and memory copying.\n\nThere are solutions to avoid actually copying this data, such as reference counting and garbage collection (GC).\n\nTo keep RapidJSON simple and fast, we chose *move* semantics for assignment. This approach is similar to `std::auto_ptr`, which transfers ownership on assignment. Moving is much faster and simpler: it just destructs the original value at the destination, `memcpy()`s the source to the destination, and finally sets the source to the Null type.\n\nSo, with move semantics, the above example becomes:\n\n~~~~~~~~~~cpp\nValue o(kObjectType);\n{\n    Value contacts(kArrayType);\n    // adding elements to contacts array.\n    o.AddMember(\"contacts\", contacts, d.GetAllocator());  // just memcpy() contacts itself to the new member's Value (16 bytes)\n    // contacts becomes Null here. Its destruction is trivial.\n}\n~~~~~~~~~~\n\n![Move semantics requires no copying.](diagram/move3.png)\n\nIn C++11 this is called a move assignment operator. Since RapidJSON supports C++03, it adopts move semantics in the assignment operator, and other modifying functions such as `AddMember()` and `PushBack()` also adopt move semantics.\n\n### Move Semantics and Temporary Values {#TemporaryValues}\n\nSometimes, we want to construct a Value directly and pass it to a \"moving\" function (such as `PushBack()` or `AddMember()`). Since a temporary object cannot be converted to a normal Value reference, a convenient `Move()` function is provided:\n\n~~~~~~~~~~cpp\nValue a(kArrayType);\nDocument::AllocatorType& allocator = document.GetAllocator();\n// a.PushBack(Value(42), allocator);       // cannot compile\na.PushBack(Value().SetInt(42), allocator); // fluent API\na.PushBack(Value(42).Move(), allocator);   // same as above\n~~~~~~~~~~\n\n## Create String {#CreateString}\nRapidJSON provides two storage strategies for Strings.\n\n1. copy-string: allocates a buffer and copies the source data into it.\n2. 
const-string: simply stores a pointer to the string.\n\nCopy-string is always safe because it owns a clone of the data. Const-string can be used to store string literals, and for in-situ parsing, which will be mentioned in the DOM section.\n\nTo let users customize memory allocation, whenever an operation may require allocation, RapidJSON asks the user to pass an allocator instance as an API parameter. This design avoids storing an allocator (or document) pointer in every Value.\n\nTherefore, when we assign a copy-string, we call the `SetString()` overload that takes an allocator:\n\n~~~~~~~~~~cpp\nDocument document;\nValue author;\nchar buffer[10];\nint len = sprintf(buffer, \"%s %s\", \"Milo\", \"Yip\"); // dynamically created string.\nauthor.SetString(buffer, len, document.GetAllocator());\nmemset(buffer, 0, sizeof(buffer));\n// author.GetString() still contains \"Milo Yip\" after buffer is cleared\n~~~~~~~~~~\n\nIn this example, we use the allocator of the `Document` instance. This is a common idiom when using RapidJSON, but you may use other allocator instances as well.\n\nBesides, the `SetString()` above requires a length parameter, so this API can handle strings that contain null characters. Another `SetString()` overload has no length parameter; it assumes the input is null-terminated and calls a `strlen()`-like function to obtain the length.\n\nFinally, for string literals or strings with a safe lifetime, you can use the const-string version of `SetString()`, which has no\nallocator parameter. For string literals (or constant character arrays), simply passing the literal is both safe and efficient:\n\n~~~~~~~~~~cpp\nValue s;\ns.SetString(\"rapidjson\");    // can contain null characters, length is deduced at compile time\ns = \"rapidjson\";             // shortcut, same as above\n~~~~~~~~~~\n\nFor a character pointer, RapidJSON requires a mark indicating that it is safe not to copy it. This can be done with the `StringRef` function:\n\n~~~~~~~~~~cpp\nconst char * cstr = getenv(\"USER\");\nsize_t cstr_len = ...;                 // in case the length is available\nValue s;\n// s.SetString(cstr);                  // will not compile\ns.SetString(StringRef(cstr));          // ok, assuming the lifetime is safe and it is null-terminated\ns = StringRef(cstr);                   // shortcut, same as above\ns.SetString(StringRef(cstr, cstr_len));// faster, can contain null characters\ns = StringRef(cstr, cstr_len);         // shortcut, same as above\n~~~~~~~~~~\n\n## Modify Array {#ModifyArray}\nValues of Array type provide an API similar to `std::vector`.\n\n* `Clear()`\n* `Reserve(SizeType, Allocator&)`\n* `Value& PushBack(Value&, Allocator&)`\n* `template <typename T> GenericValue& PushBack(T, Allocator&)`\n* `Value& PopBack()`\n* `ValueIterator Erase(ConstValueIterator pos)`\n* `ValueIterator Erase(ConstValueIterator first, ConstValueIterator last)`\n\nNote that `Reserve(...)` and `PushBack(...)` may allocate memory for the array elements, so they need an allocator.\n\nHere is an example of `PushBack()`
:\n\n~~~~~~~~~~cpp\nValue a(kArrayType);\nDocument::AllocatorType& allocator = document.GetAllocator();\n\nfor (int i = 5; i <= 10; i++)\n    a.PushBack(i, allocator);   // may call realloc(), so an allocator is needed\n\n// Fluent interface\na.PushBack(\"Lua\", allocator).PushBack(\"Mio\", allocator);\n~~~~~~~~~~\n\nUnlike the STL, `PushBack()`/`PopBack()` return a reference to the Array itself. This is called a _fluent interface_.\n\nIf you want to add a non-constant string, or a string without sufficient lifetime (see [Create String](#CreateString)), to an Array, you need to create a String Value with the copy-string API. To avoid an intermediate variable, you can use a [temporary value](#TemporaryValues) in place:\n\n~~~~~~~~~~cpp\n// in-place Value parameter\ncontact.PushBack(Value(\"copy\", document.GetAllocator()).Move(), // copy string\n                 document.GetAllocator());\n\n// explicit Value parameter\nValue val(\"key\", document.GetAllocator()); // copy string\ncontact.PushBack(val, document.GetAllocator());\n~~~~~~~~~~\n\n## Modify Object {#ModifyObject}\nObject is a collection of key-value pairs, and each key must be a String. An Object is modified by adding or removing members. The following APIs add members:\n\n* `Value& AddMember(Value&, Value&, Allocator& allocator)`\n* `Value& AddMember(StringRefType, Value&, Allocator&)`\n* `template <typename T> Value& AddMember(StringRefType, T value, Allocator&)`\n\nHere is an example.\n\n~~~~~~~~~~cpp\nValue contact(kObjectType);\ncontact.AddMember(\"name\", \"Milo\", document.GetAllocator());\ncontact.AddMember(\"married\", true, document.GetAllocator());\n~~~~~~~~~~\n\nThe overloads that take `StringRefType` as the name parameter are similar to the `SetString` interface for strings. These overloads exist to avoid copying the `name` string, since constant key names are very common in JSON objects.\n\nIf you need to create a key name from a non-constant string or a string with insufficient lifetime (see [Create String](#CreateString)), you need to use the copy-string API. To avoid an intermediate variable, use a [temporary value](#TemporaryValues) in place:\n\n~~~~~~~~~~cpp\n// in-place Value parameters\ncontact.AddMember(Value(\"copy\", document.GetAllocator()).Move(), // copy string\n                  Value().Move(),                                // null value\n                  document.GetAllocator());\n\n// explicit parameters\nValue key(\"key\", document.GetAllocator()); // copy string name\nValue val(42);                             // some Value\ncontact.AddMember(key, val, 
document.GetAllocator());\n~~~~~~~~~~\n\nThere are several options for removing members:\n\n* `bool RemoveMember(const Ch* name)`: removes a member by its name (linear time complexity).\n* `bool RemoveMember(const Value& name)`: same as above, except that `name` is a Value.\n* `MemberIterator RemoveMember(MemberIterator)`: removes a member by iterator (_constant_ time complexity).\n* `MemberIterator EraseMember(MemberIterator)`: similar to the above but preserves the order of members (linear time complexity).\n* `MemberIterator EraseMember(MemberIterator first, MemberIterator last)`: removes a range of members, preserving order (linear time complexity).\n\n`MemberIterator RemoveMember(MemberIterator)` uses a \"move last\" trick to achieve constant time complexity. Basically, the member at the iterator is destructed, and the last member is moved to its position. Therefore, the order of members is changed.\n\n## Deep Copy Value {#DeepCopyValue}\nIf we really need to copy a DOM tree, there are two APIs for deep copying: a constructor that takes an allocator, and `CopyFrom()`.\n\n~~~~~~~~~~cpp\nDocument d;\nDocument::AllocatorType& a = d.GetAllocator();\nValue v1(\"foo\");\n// Value v2(v1); // not allowed\n\nValue v2(v1, a);                      // make a clone\nassert(v1.IsString());                // v1 untouched\nd.SetArray().PushBack(v1, a).PushBack(v2, a);\nassert(v1.IsNull() && v2.IsNull());   // both moved to d\n\nv2.CopyFrom(d, a);                    // copy the whole document to v2\nassert(d.IsArray() && d.Size() == 2); // d untouched\nv1.SetObject().AddMember(\"array\", v2, a);\nd.PushBack(v1, a);\n~~~~~~~~~~\n\n## Swap Values {#SwapValues}\n\nRapidJSON also provides `Swap()`.\n\n~~~~~~~~~~cpp\nValue a(123);\nValue b(\"Hello\");\na.Swap(b);\nassert(a.IsString());\nassert(b.IsInt());\n~~~~~~~~~~\n\nSwapping two DOM trees is fast (constant time), no matter how complex they are.\n\n# What's Next {#WhatsNext}\n\nThis tutorial has shown how to query and modify the DOM tree. There are several other important concepts in RapidJSON:\n\n1. [Streams](doc/stream.zh-cn.md) are channels for reading/writing JSON. A stream can be an in-memory string, a file stream, etc. Users can also define their own streams.\n2. [Encoding](doc/encoding.zh-cn.md) defines the character encoding used in streams and memory. RapidJSON also provides Unicode conversion and validation internally.\n3. [DOM](doc/dom.zh-cn.md) basics have been covered in this tutorial. There are more advanced features, such as *in situ* parsing, other parse options, and advanced usage.\n4. [SAX](doc/sax.zh-cn.md) is the foundation of RapidJSON's parsing/generating facilities. Learn to use `Reader`/`Writer` to build higher-performance applications. You can also use `PrettyWriter` to format JSON.\n5. [Performance](doc/performance.zh-cn.md) shows some in-house and third-party benchmarks.\n6. [Internals](doc/internals.md) describes some of RapidJSON's internal designs and techniques.\n\nYou may also refer to the [FAQ](doc/faq.zh-cn.md), the API documentation, the examples, and the unit tests.\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.8)\n\nif(POLICY CMP0054)\n  cmake_policy(SET CMP0054 NEW)\nendif()\n\nset(EXAMPLES\n    capitalize\n    condense\n    filterkey\n    filterkeydom\n    jsonx\n    lookaheadparser\n    messagereader\n    parsebyparts\n    pretty\n    prettyauto\n    schemavalidator\n    serialize\n    simpledom\n    simplereader\n    simplepullreader\n    simplewriter\n    sortkeys\n    tutorial)\n    \ninclude_directories(\"../include/\")\n\nadd_definitions(-D__STDC_FORMAT_MACROS)\nset_property(DIRECTORY PROPERTY COMPILE_OPTIONS ${EXTRA_CXX_FLAGS})\n\nif (\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -pthread\")\nendif()\n\nadd_executable(archivertest archiver/archiver.cpp archiver/archivertest.cpp)\n\nforeach (example ${EXAMPLES})\n    add_executable(${example} ${example}/${example}.cpp)\nendforeach()\n\nif (CMAKE_CXX_COMPILER_ID MATCHES \"Clang\")\n    target_link_libraries(parsebyparts pthread)\nendif()\n\nadd_custom_target(examples ALL DEPENDS ${EXAMPLES})\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/capitalize/capitalize.cpp",
"content": "// JSON capitalize example\n\n// This example parses JSON from stdin with validation, \n// and re-outputs the JSON content to stdout with all strings capitalized, and without whitespace.\n\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/error/en.h\"\n#include <vector>\n#include <cctype>\n\nusing namespace rapidjson;\n\ntemplate<typename OutputHandler>\nstruct CapitalizeFilter {\n    CapitalizeFilter(OutputHandler& out) : out_(out), buffer_() {}\n\n    bool Null() { return out_.Null(); }\n    bool Bool(bool b) { return out_.Bool(b); }\n    bool Int(int i) { return out_.Int(i); }\n    bool Uint(unsigned u) { return out_.Uint(u); }\n    bool Int64(int64_t i) { return out_.Int64(i); }\n    bool Uint64(uint64_t u) { return out_.Uint64(u); }\n    bool Double(double d) { return out_.Double(d); }\n    bool RawNumber(const char* str, SizeType length, bool copy) { return out_.RawNumber(str, length, copy); }\n    bool String(const char* str, SizeType length, bool) {\n        buffer_.clear();\n        for (SizeType i = 0; i < length; i++)\n            buffer_.push_back(static_cast<char>(std::toupper(static_cast<unsigned char>(str[i])))); // unsigned char cast avoids UB for negative char values\n        return out_.String(&buffer_.front(), length, true); // true = output handler needs to copy the string\n    }\n    bool StartObject() { return out_.StartObject(); }\n    bool Key(const char* str, SizeType length, bool copy) { return String(str, length, copy); }\n    bool EndObject(SizeType memberCount) { return out_.EndObject(memberCount); }\n    bool StartArray() { return out_.StartArray(); }\n    bool EndArray(SizeType elementCount) { return out_.EndArray(elementCount); }\n\n    OutputHandler& out_;\n    std::vector<char> buffer_;\n\nprivate:\n    CapitalizeFilter(const CapitalizeFilter&);\n    CapitalizeFilter& operator=(const CapitalizeFilter&);\n};\n\nint main(int, char*[]) {\n    // Prepare JSON reader and input stream.\n    
Reader reader;\n    char readBuffer[65536];\n    FileReadStream is(stdin, readBuffer, sizeof(readBuffer));\n\n    // Prepare JSON writer and output stream.\n    char writeBuffer[65536];\n    FileWriteStream os(stdout, writeBuffer, sizeof(writeBuffer));\n    Writer<FileWriteStream> writer(os);\n\n    // JSON reader parse from the input stream and let writer generate the output.\n    CapitalizeFilter<Writer<FileWriteStream> > filter(writer);\n    if (!reader.Parse(is, filter)) {\n        fprintf(stderr, \"\\nError(%u): %s\\n\", static_cast<unsigned>(reader.GetErrorOffset()), GetParseError_En(reader.GetParseErrorCode()));\n        return 1;\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/condense/condense.cpp",
"content": "// JSON condenser example\n\n// This example parses JSON text from stdin with validation, \n// and re-outputs the JSON content to stdout without whitespace.\n\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/error/en.h\"\n\nusing namespace rapidjson;\n\nint main(int, char*[]) {\n    // Prepare JSON reader and input stream.\n    Reader reader;\n    char readBuffer[65536];\n    FileReadStream is(stdin, readBuffer, sizeof(readBuffer));\n\n    // Prepare JSON writer and output stream.\n    char writeBuffer[65536];\n    FileWriteStream os(stdout, writeBuffer, sizeof(writeBuffer));\n    Writer<FileWriteStream> writer(os);\n\n    // The JSON reader parses from the input stream and lets the writer generate the output.\n    if (!reader.Parse(is, writer)) {\n        fprintf(stderr, \"\\nError(%u): %s\\n\", static_cast<unsigned>(reader.GetErrorOffset()), GetParseError_En(reader.GetParseErrorCode()));\n        return 1;\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/filterkey/filterkey.cpp",
    "content": "// JSON filterkey example with SAX-style API.\n\n// This example parses JSON text from stdin with validation.\n// During parsing, specified key will be filtered using a SAX handler.\n// It re-output the JSON content to stdout without whitespace.\n\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/error/en.h\"\n#include <stack>\n\nusing namespace rapidjson;\n\n// This handler forwards event into an output handler, with filtering the descendent events of specified key.\ntemplate <typename OutputHandler>\nclass FilterKeyHandler {\npublic:\n    typedef char Ch;\n\n    FilterKeyHandler(OutputHandler& outputHandler, const Ch* keyString, SizeType keyLength) : \n        outputHandler_(outputHandler), keyString_(keyString), keyLength_(keyLength), filterValueDepth_(), filteredKeyCount_()\n    {}\n\n    bool Null()             { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Null()    && EndValue(); }\n    bool Bool(bool b)       { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Bool(b)   && EndValue(); }\n    bool Int(int i)         { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Int(i)    && EndValue(); }\n    bool Uint(unsigned u)   { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Uint(u)   && EndValue(); }\n    bool Int64(int64_t i)   { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Int64(i)  && EndValue(); }\n    bool Uint64(uint64_t u) { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Uint64(u) && EndValue(); }\n    bool Double(double d)   { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Double(d) && EndValue(); }\n    bool RawNumber(const Ch* str, SizeType len, bool copy) { return filterValueDepth_ > 0 ? 
EndValue() : outputHandler_.RawNumber(str, len, copy) && EndValue(); }\n    bool String   (const Ch* str, SizeType len, bool copy) { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.String   (str, len, copy) && EndValue(); }\n    \n    bool StartObject() { \n        if (filterValueDepth_ > 0) {\n            filterValueDepth_++;\n            return true;\n        }\n        else {\n            filteredKeyCount_.push(0);\n            return outputHandler_.StartObject();\n        }\n    }\n    \n    bool Key(const Ch* str, SizeType len, bool copy) { \n        if (filterValueDepth_ > 0) \n            return true;\n        else if (len == keyLength_ && std::memcmp(str, keyString_, len) == 0) {\n            filterValueDepth_ = 1;\n            return true;\n        }\n        else {\n            ++filteredKeyCount_.top();\n            return outputHandler_.Key(str, len, copy);\n        }\n    }\n\n    bool EndObject(SizeType) {\n        if (filterValueDepth_ > 0) {\n            filterValueDepth_--;\n            return EndValue();\n        }\n        else {\n            // Use our own filtered memberCount\n            SizeType memberCount = filteredKeyCount_.top();\n            filteredKeyCount_.pop();\n            return outputHandler_.EndObject(memberCount) && EndValue();\n        }\n    }\n\n    bool StartArray() {\n        if (filterValueDepth_ > 0) {\n            filterValueDepth_++;\n            return true;\n        }\n        else\n            return outputHandler_.StartArray();\n    }\n\n    bool EndArray(SizeType elementCount) {\n        if (filterValueDepth_ > 0) {\n            filterValueDepth_--;\n            return EndValue();\n        }\n        else\n            return outputHandler_.EndArray(elementCount) && EndValue();\n    }\n\nprivate:\n    FilterKeyHandler(const FilterKeyHandler&);\n    FilterKeyHandler& operator=(const FilterKeyHandler&);\n\n    bool EndValue() {\n        if (filterValueDepth_ == 1) // Just at the end of value after filtered 
key\n            filterValueDepth_ = 0;\n        return true;\n    }\n    \n    OutputHandler& outputHandler_;\n    const char* keyString_;\n    const SizeType keyLength_;\n    unsigned filterValueDepth_;\n    std::stack<SizeType> filteredKeyCount_;\n};\n\nint main(int argc, char* argv[]) {\n    if (argc != 2) {\n        fprintf(stderr, \"filterkey key < input.json > output.json\\n\");\n        return 1;\n    }\n\n    // Prepare JSON reader and input stream.\n    Reader reader;\n    char readBuffer[65536];\n    FileReadStream is(stdin, readBuffer, sizeof(readBuffer));\n\n    // Prepare JSON writer and output stream.\n    char writeBuffer[65536];\n    FileWriteStream os(stdout, writeBuffer, sizeof(writeBuffer));\n    Writer<FileWriteStream> writer(os);\n\n    // Prepare Filter\n    FilterKeyHandler<Writer<FileWriteStream> > filter(writer, argv[1], static_cast<SizeType>(strlen(argv[1])));\n\n    // JSON reader parse from the input stream, filter handler filters the events, and forward to writer.\n    // i.e. the events flow is: reader -> filter -> writer\n    if (!reader.Parse(is, filter)) {\n        fprintf(stderr, \"\\nError(%u): %s\\n\", static_cast<unsigned>(reader.GetErrorOffset()), GetParseError_En(reader.GetParseErrorCode()));\n        return 1;\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/filterkeydom/filterkeydom.cpp",
    "content": "// JSON filterkey example which populates filtered SAX events into a Document.\n\n// This example parses JSON text from stdin with validation.\n// During parsing, specified key will be filtered using a SAX handler.\n// And finally the filtered events are used to populate a Document.\n// As an example, the document is written to standard output.\n\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/error/en.h\"\n#include <stack>\n\nusing namespace rapidjson;\n\n// This handler forwards event into an output handler, with filtering the descendent events of specified key.\ntemplate <typename OutputHandler>\nclass FilterKeyHandler {\npublic:\n    typedef char Ch;\n\n    FilterKeyHandler(OutputHandler& outputHandler, const Ch* keyString, SizeType keyLength) : \n        outputHandler_(outputHandler), keyString_(keyString), keyLength_(keyLength), filterValueDepth_(), filteredKeyCount_()\n    {}\n\n    bool Null()             { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Null()    && EndValue(); }\n    bool Bool(bool b)       { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Bool(b)   && EndValue(); }\n    bool Int(int i)         { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Int(i)    && EndValue(); }\n    bool Uint(unsigned u)   { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Uint(u)   && EndValue(); }\n    bool Int64(int64_t i)   { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Int64(i)  && EndValue(); }\n    bool Uint64(uint64_t u) { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Uint64(u) && EndValue(); }\n    bool Double(double d)   { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.Double(d) && EndValue(); }\n    bool RawNumber(const Ch* str, SizeType len, bool copy) { return filterValueDepth_ > 0 ? 
EndValue() : outputHandler_.RawNumber(str, len, copy) && EndValue(); }\n    bool String   (const Ch* str, SizeType len, bool copy) { return filterValueDepth_ > 0 ? EndValue() : outputHandler_.String   (str, len, copy) && EndValue(); }\n    \n    bool StartObject() { \n        if (filterValueDepth_ > 0) {\n            filterValueDepth_++;\n            return true;\n        }\n        else {\n            filteredKeyCount_.push(0);\n            return outputHandler_.StartObject();\n        }\n    }\n    \n    bool Key(const Ch* str, SizeType len, bool copy) { \n        if (filterValueDepth_ > 0) \n            return true;\n        else if (len == keyLength_ && std::memcmp(str, keyString_, len) == 0) {\n            filterValueDepth_ = 1;\n            return true;\n        }\n        else {\n            ++filteredKeyCount_.top();\n            return outputHandler_.Key(str, len, copy);\n        }\n    }\n\n    bool EndObject(SizeType) {\n        if (filterValueDepth_ > 0) {\n            filterValueDepth_--;\n            return EndValue();\n        }\n        else {\n            // Use our own filtered memberCount\n            SizeType memberCount = filteredKeyCount_.top();\n            filteredKeyCount_.pop();\n            return outputHandler_.EndObject(memberCount) && EndValue();\n        }\n    }\n\n    bool StartArray() {\n        if (filterValueDepth_ > 0) {\n            filterValueDepth_++;\n            return true;\n        }\n        else\n            return outputHandler_.StartArray();\n    }\n\n    bool EndArray(SizeType elementCount) {\n        if (filterValueDepth_ > 0) {\n            filterValueDepth_--;\n            return EndValue();\n        }\n        else\n            return outputHandler_.EndArray(elementCount) && EndValue();\n    }\n\nprivate:\n    FilterKeyHandler(const FilterKeyHandler&);\n    FilterKeyHandler& operator=(const FilterKeyHandler&);\n\n    bool EndValue() {\n        if (filterValueDepth_ == 1) // Just at the end of value after filtered 
key\n            filterValueDepth_ = 0;\n        return true;\n    }\n\n    OutputHandler& outputHandler_;\n    const char* keyString_;\n    const SizeType keyLength_;\n    unsigned filterValueDepth_;\n    std::stack<SizeType> filteredKeyCount_;\n};\n\n// Implements a generator for Document::Populate()\ntemplate <typename InputStream>\nclass FilterKeyReader {\npublic:\n    typedef char Ch;\n\n    FilterKeyReader(InputStream& is, const Ch* keyString, SizeType keyLength) : \n        is_(is), keyString_(keyString), keyLength_(keyLength), parseResult_()\n    {}\n\n    // SAX event flow: reader -> filter -> handler\n    template <typename Handler>\n    bool operator()(Handler& handler) {\n        FilterKeyHandler<Handler> filter(handler, keyString_, keyLength_);\n        Reader reader;\n        parseResult_ = reader.Parse(is_, filter);\n        return parseResult_;\n    }\n\n    const ParseResult& GetParseResult() const { return parseResult_; }\n\nprivate:\n    FilterKeyReader(const FilterKeyReader&);\n    FilterKeyReader& operator=(const FilterKeyReader&);\n\n    InputStream& is_;\n    const char* keyString_;\n    const SizeType keyLength_;\n    ParseResult parseResult_;\n};\n\nint main(int argc, char* argv[]) {\n    if (argc != 2) {\n        fprintf(stderr, \"filterkeydom key < input.json > output.json\\n\");\n        return 1;\n    }\n\n    // Prepare input stream.\n    char readBuffer[65536];\n    FileReadStream is(stdin, readBuffer, sizeof(readBuffer));\n\n    // Prepare Filter\n    FilterKeyReader<FileReadStream> reader(is, argv[1], static_cast<SizeType>(strlen(argv[1])));\n\n    // Populates the filtered events from reader\n    Document document;\n    document.Populate(reader);\n    ParseResult pr = reader.GetParseResult();\n    if (!pr) {\n        fprintf(stderr, \"\\nError(%u): %s\\n\", static_cast<unsigned>(pr.Offset()), GetParseError_En(pr.Code()));\n        return 1;\n    }\n\n    // Prepare JSON writer and output stream.\n    char writeBuffer[65536];\n    
FileWriteStream os(stdout, writeBuffer, sizeof(writeBuffer));\n    Writer<FileWriteStream> writer(os);\n\n    // Write the document to standard output\n    document.Accept(writer);\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/jsonx/jsonx.cpp",
    "content": "// JSON to JSONx conversion example, using SAX API.\n// JSONx is an IBM standard format to represent JSON as XML.\n// https://www-01.ibm.com/support/knowledgecenter/SS9H2Y_7.1.0/com.ibm.dp.doc/json_jsonx.html\n// This example parses JSON text from stdin with validation, \n// and convert to JSONx format to stdout.\n// Need compile with -D__STDC_FORMAT_MACROS for defining PRId64 and PRIu64 macros.\n\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/error/en.h\"\n#include <cstdio>\n\nusing namespace rapidjson;\n\n// For simplicity, this example only read/write in UTF-8 encoding\ntemplate <typename OutputStream>\nclass JsonxWriter {\npublic:\n    JsonxWriter(OutputStream& os) : os_(os), name_(), level_(0), hasName_(false) {\n    }\n\n    bool Null() {\n        return WriteStartElement(\"null\", true);\n    }\n    \n    bool Bool(bool b) {\n        return \n            WriteStartElement(\"boolean\") &&\n            WriteString(b ? 
\"true\" : \"false\") &&\n            WriteEndElement(\"boolean\");\n    }\n    \n    bool Int(int i) {\n        char buffer[12];\n        return WriteNumberElement(buffer, sprintf(buffer, \"%d\", i));\n    }\n    \n    bool Uint(unsigned i) {\n        char buffer[11];\n        return WriteNumberElement(buffer, sprintf(buffer, \"%u\", i));\n    }\n    \n    bool Int64(int64_t i) {\n        char buffer[21];\n        return WriteNumberElement(buffer, sprintf(buffer, \"%\" PRId64, i));\n    }\n    \n    bool Uint64(uint64_t i) {\n        char buffer[21];\n        return WriteNumberElement(buffer, sprintf(buffer, \"%\" PRIu64, i));\n    }\n    \n    bool Double(double d) {\n        char buffer[30];\n        return WriteNumberElement(buffer, sprintf(buffer, \"%.17g\", d));\n    }\n\n    bool RawNumber(const char* str, SizeType length, bool) {\n        return\n            WriteStartElement(\"number\") &&\n            WriteEscapedText(str, length) &&\n            WriteEndElement(\"number\");\n    }\n\n    bool String(const char* str, SizeType length, bool) {\n        return\n            WriteStartElement(\"string\") &&\n            WriteEscapedText(str, length) &&\n            WriteEndElement(\"string\");\n    }\n\n    bool StartObject() {\n        return WriteStartElement(\"object\");\n    }\n\n    bool Key(const char* str, SizeType length, bool) {\n        // backup key to name_\n        name_.Clear();\n        for (SizeType i = 0; i < length; i++)\n            name_.Put(str[i]);\n        hasName_ = true;\n        return true;\n    }\n\n    bool EndObject(SizeType) {\n        return WriteEndElement(\"object\");\n    }\n\n    bool StartArray() {\n        return WriteStartElement(\"array\");\n    }\n\n    bool EndArray(SizeType) {\n        return WriteEndElement(\"array\");\n    }\n\nprivate:\n    bool WriteString(const char* s) {\n        while (*s)\n            os_.Put(*s++);\n        return true;\n    }\n\n    bool WriteEscapedAttributeValue(const char* s, size_t 
length) {\n        for (size_t i = 0; i < length; i++) {\n            switch (s[i]) {\n                case '&': WriteString(\"&amp;\"); break;\n                case '<': WriteString(\"&lt;\"); break;\n                case '\"': WriteString(\"&quot;\"); break;\n                default: os_.Put(s[i]); break;\n            }\n        }\n        return true;\n    }\n\n    bool WriteEscapedText(const char* s, size_t length) {\n        for (size_t i = 0; i < length; i++) {\n            switch (s[i]) {\n                case '&': WriteString(\"&amp;\"); break;\n                case '<': WriteString(\"&lt;\"); break;\n                default: os_.Put(s[i]); break;\n            }\n        }\n        return true;\n    }\n\n    bool WriteStartElement(const char* type, bool emptyElement = false) {\n        if (level_ == 0)\n            if (!WriteString(\"<?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?>\"))\n                return false;\n\n        if (!WriteString(\"<json:\") || !WriteString(type))\n            return false;\n\n        // For root element, need to add declarations\n        if (level_ == 0) {\n            if (!WriteString(\n                \" xsi:schemaLocation=\\\"http://www.datapower.com/schemas/json jsonx.xsd\\\"\"\n                \" xmlns:xsi=\\\"http://www.w3.org/2001/XMLSchema-instance\\\"\"\n                \" xmlns:json=\\\"http://www.ibm.com/xmlns/prod/2009/jsonx\\\"\"))\n                return false;\n        }\n\n        if (hasName_) {\n            hasName_ = false;\n            if (!WriteString(\" name=\\\"\") ||\n                !WriteEscapedAttributeValue(name_.GetString(), name_.GetSize()) ||\n                !WriteString(\"\\\"\"))\n                return false;\n        }\n\n        if (emptyElement)\n            return WriteString(\"/>\");\n        else {\n            level_++;\n            return WriteString(\">\");\n        }\n    }\n\n    bool WriteEndElement(const char* type) {\n        if (!WriteString(\"</json:\") ||\n            
!WriteString(type) ||\n            !WriteString(\">\"))\n            return false;\n\n        // For the last end tag, flush the output stream.\n        if (--level_ == 0)\n            os_.Flush();\n\n        return true;\n    }\n\n    bool WriteNumberElement(const char* buffer, int length) {\n        if (!WriteStartElement(\"number\"))\n            return false;\n        for (int j = 0; j < length; j++)\n            os_.Put(buffer[j]);\n        return WriteEndElement(\"number\");\n    }\n\n    OutputStream& os_;\n    StringBuffer name_;\n    unsigned level_;\n    bool hasName_;\n};\n\nint main(int, char*[]) {\n    // Prepare JSON reader and input stream.\n    Reader reader;\n    char readBuffer[65536];\n    FileReadStream is(stdin, readBuffer, sizeof(readBuffer));\n\n    // Prepare JSON writer and output stream.\n    char writeBuffer[65536];\n    FileWriteStream os(stdout, writeBuffer, sizeof(writeBuffer));\n    JsonxWriter<FileWriteStream> writer(os);\n\n    // JSON reader parse from the input stream and let writer generate the output.\n    if (!reader.Parse(is, writer)) {\n        fprintf(stderr, \"\\nError(%u): %s\\n\", static_cast<unsigned>(reader.GetErrorOffset()), GetParseError_En(reader.GetParseErrorCode()));\n        return 1;\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/lookaheadparser/lookaheadparser.cpp",
    "content": "#include \"rapidjson/reader.h\"\n#include \"rapidjson/document.h\"\n#include <iostream>\n\nRAPIDJSON_DIAG_PUSH\n#ifdef __GNUC__\nRAPIDJSON_DIAG_OFF(effc++)\n#endif\n\n// This example demonstrates JSON token-by-token parsing with an API that is\n// more direct; you don't need to design your logic around a handler object and\n// callbacks. Instead, you retrieve values from the JSON stream by calling\n// GetInt(), GetDouble(), GetString() and GetBool(), traverse into structures\n// by calling EnterObject() and EnterArray(), and skip over unwanted data by\n// calling SkipValue(). When you know your JSON's structure, this can be quite\n// convenient.\n//\n// If you aren't sure of what's next in the JSON data, you can use PeekType() and\n// PeekValue() to look ahead to the next object before reading it.\n//\n// If you call the wrong retrieval method--e.g. GetInt when the next JSON token is\n// not an int, EnterObject or EnterArray when there isn't actually an object or array\n// to read--the stream parsing will end immediately and no more data will be delivered.\n//\n// After calling EnterObject, you retrieve keys via NextObjectKey() and values via\n// the normal getters. When NextObjectKey() returns null, you have exited the\n// object, or you can call SkipObject() to skip to the end of the object\n// immediately. If you fetch the entire object (i.e. NextObjectKey() returned  null),\n// you should not call SkipObject().\n//\n// After calling EnterArray(), you must alternate between calling NextArrayValue()\n// to see if the array has more data, and then retrieving values via the normal\n// getters. You can call SkipArray() to skip to the end of the array immediately.\n// If you fetch the entire array (i.e. 
NextArrayValue() returned null),\n// you should not call SkipArray().\n//\n// This parser uses in-situ strings, so the JSON buffer will be altered during the\n// parse.\n\nusing namespace rapidjson;\n\n\nclass LookaheadParserHandler {\npublic:\n    bool Null() { st_ = kHasNull; v_.SetNull(); return true; }\n    bool Bool(bool b) { st_ = kHasBool; v_.SetBool(b); return true; }\n    bool Int(int i) { st_ = kHasNumber; v_.SetInt(i); return true; }\n    bool Uint(unsigned u) { st_ = kHasNumber; v_.SetUint(u); return true; }\n    bool Int64(int64_t i) { st_ = kHasNumber; v_.SetInt64(i); return true; }\n    bool Uint64(uint64_t u) { st_ = kHasNumber; v_.SetUint64(u); return true; }\n    bool Double(double d) { st_ = kHasNumber; v_.SetDouble(d); return true; }\n    bool RawNumber(const char*, SizeType, bool) { return false; }\n    bool String(const char* str, SizeType length, bool) { st_ = kHasString; v_.SetString(str, length); return true; }\n    bool StartObject() { st_ = kEnteringObject; return true; }\n    bool Key(const char* str, SizeType length, bool) { st_ = kHasKey; v_.SetString(str, length); return true; }\n    bool EndObject(SizeType) { st_ = kExitingObject; return true; }\n    bool StartArray() { st_ = kEnteringArray; return true; }\n    bool EndArray(SizeType) { st_ = kExitingArray; return true; }\n\nprotected:\n    LookaheadParserHandler(char* str);\n    void ParseNext();\n\nprotected:\n    enum LookaheadParsingState {\n        kInit,\n        kError,\n        kHasNull,\n        kHasBool,\n        kHasNumber,\n        kHasString,\n        kHasKey,\n        kEnteringObject,\n        kExitingObject,\n        kEnteringArray,\n        kExitingArray\n    };\n    \n    Value v_;\n    LookaheadParsingState st_;\n    Reader r_;\n    InsituStringStream ss_;\n    \n    static const int parseFlags = kParseDefaultFlags | kParseInsituFlag;\n};\n\nLookaheadParserHandler::LookaheadParserHandler(char* str) : v_(), st_(kInit), r_(), ss_(str) {\n    r_.IterativeParseInit();\n 
   ParseNext();\n}\n\nvoid LookaheadParserHandler::ParseNext() {\n    if (r_.HasParseError()) {\n        st_ = kError;\n        return;\n    }\n    \n    r_.IterativeParseNext<parseFlags>(ss_, *this);\n}\n\nclass LookaheadParser : protected LookaheadParserHandler {\npublic:\n    LookaheadParser(char* str) : LookaheadParserHandler(str) {}\n    \n    bool EnterObject();\n    bool EnterArray();\n    const char* NextObjectKey();\n    bool NextArrayValue();\n    int GetInt();\n    double GetDouble();\n    const char* GetString();\n    bool GetBool();\n    void GetNull();\n\n    void SkipObject();\n    void SkipArray();\n    void SkipValue();\n    Value* PeekValue();\n    int PeekType(); // returns a rapidjson::Type, or -1 for no value (at end of object/array)\n    \n    bool IsValid() { return st_ != kError; }\n    \nprotected:\n    void SkipOut(int depth);\n};\n\nbool LookaheadParser::EnterObject() {\n    if (st_ != kEnteringObject) {\n        st_  = kError;\n        return false;\n    }\n    \n    ParseNext();\n    return true;\n}\n\nbool LookaheadParser::EnterArray() {\n    if (st_ != kEnteringArray) {\n        st_  = kError;\n        return false;\n    }\n    \n    ParseNext();\n    return true;\n}\n\nconst char* LookaheadParser::NextObjectKey() {\n    if (st_ == kHasKey) {\n        const char* result = v_.GetString();\n        ParseNext();\n        return result;\n    }\n    \n    if (st_ != kExitingObject) {\n        st_ = kError;\n        return 0;\n    }\n    \n    ParseNext();\n    return 0;\n}\n\nbool LookaheadParser::NextArrayValue() {\n    if (st_ == kExitingArray) {\n        ParseNext();\n        return false;\n    }\n    \n    if (st_ == kError || st_ == kExitingObject || st_ == kHasKey) {\n        st_ = kError;\n        return false;\n    }\n\n    return true;\n}\n\nint LookaheadParser::GetInt() {\n    if (st_ != kHasNumber || !v_.IsInt()) {\n        st_ = kError;\n        return 0;\n    }\n\n    int result = v_.GetInt();\n    ParseNext();\n    return 
result;\n}\n\ndouble LookaheadParser::GetDouble() {\n    if (st_ != kHasNumber) {\n        st_  = kError;\n        return 0.;\n    }\n    \n    double result = v_.GetDouble();\n    ParseNext();\n    return result;\n}\n\nbool LookaheadParser::GetBool() {\n    if (st_ != kHasBool) {\n        st_  = kError;\n        return false;\n    }\n    \n    bool result = v_.GetBool();\n    ParseNext();\n    return result;\n}\n\nvoid LookaheadParser::GetNull() {\n    if (st_ != kHasNull) {\n        st_  = kError;\n        return;\n    }\n\n    ParseNext();\n}\n\nconst char* LookaheadParser::GetString() {\n    if (st_ != kHasString) {\n        st_  = kError;\n        return 0;\n    }\n    \n    const char* result = v_.GetString();\n    ParseNext();\n    return result;\n}\n\nvoid LookaheadParser::SkipOut(int depth) {\n    do {\n        if (st_ == kEnteringArray || st_ == kEnteringObject) {\n            ++depth;\n        }\n        else if (st_ == kExitingArray || st_ == kExitingObject) {\n            --depth;\n        }\n        else if (st_ == kError) {\n            return;\n        }\n\n        ParseNext();\n    }\n    while (depth > 0);\n}\n\nvoid LookaheadParser::SkipValue() {\n    SkipOut(0);\n}\n\nvoid LookaheadParser::SkipArray() {\n    SkipOut(1);\n}\n\nvoid LookaheadParser::SkipObject() {\n    SkipOut(1);\n}\n\nValue* LookaheadParser::PeekValue() {\n    if (st_ >= kHasNull && st_ <= kHasKey) {\n        return &v_;\n    }\n    \n    return 0;\n}\n\nint LookaheadParser::PeekType() {\n    if (st_ >= kHasNull && st_ <= kHasKey) {\n        return v_.GetType();\n    }\n    \n    if (st_ == kEnteringArray) {\n        return kArrayType;\n    }\n    \n    if (st_ == kEnteringObject) {\n        return kObjectType;\n    }\n\n    return -1;\n}\n\n//-------------------------------------------------------------------------\n\nint main() {\n    using namespace std;\n\n    char json[] = \" { \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null,\"\n        
\"\\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[-1, 2, 3, 4, \\\"array\\\", []], \\\"skipArrays\\\":[1, 2, [[[3]]]], \"\n        \"\\\"skipObject\\\":{ \\\"i\\\":0, \\\"t\\\":true, \\\"n\\\":null, \\\"d\\\":123.45 }, \"\n        \"\\\"skipNested\\\":[[[[{\\\"\\\":0}, {\\\"\\\":[-9.87]}]]], [], []], \"\n        \"\\\"skipString\\\":\\\"zzz\\\", \\\"reachedEnd\\\":null, \\\"t\\\":true }\";\n\n    LookaheadParser r(json);\n    \n    RAPIDJSON_ASSERT(r.PeekType() == kObjectType);\n\n    r.EnterObject();\n    while (const char* key = r.NextObjectKey()) {\n        if (0 == strcmp(key, \"hello\")) {\n            RAPIDJSON_ASSERT(r.PeekType() == kStringType);\n            cout << key << \":\" << r.GetString() << endl;\n        }\n        else if (0 == strcmp(key, \"t\") || 0 == strcmp(key, \"f\")) {\n            RAPIDJSON_ASSERT(r.PeekType() == kTrueType || r.PeekType() == kFalseType);\n            cout << key << \":\" << r.GetBool() << endl;\n            continue;\n        }\n        else if (0 == strcmp(key, \"n\")) {\n            RAPIDJSON_ASSERT(r.PeekType() == kNullType);\n            r.GetNull();\n            cout << key << endl;\n            continue;\n        }\n        else if (0 == strcmp(key, \"pi\")) {\n            RAPIDJSON_ASSERT(r.PeekType() == kNumberType);\n            cout << key << \":\" << r.GetDouble() << endl;\n            continue;\n        }\n        else if (0 == strcmp(key, \"a\")) {\n            RAPIDJSON_ASSERT(r.PeekType() == kArrayType);\n            \n            r.EnterArray();\n            \n            cout << key << \":[ \";\n            while (r.NextArrayValue()) {\n                if (r.PeekType() == kNumberType) {\n                    cout << r.GetDouble() << \" \";\n                }\n                else if (r.PeekType() == kStringType) {\n                    cout << r.GetString() << \" \";\n                }\n                else {\n                    r.SkipArray();\n                    break;\n                }\n            }\n    
        \n            cout << \"]\" << endl;\n        }\n        else {\n            cout << key << \":skipped\" << endl;\n            r.SkipValue();\n        }\n    }\n    \n    return 0;\n}\n\nRAPIDJSON_DIAG_POP\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/messagereader/messagereader.cpp",
    "content": "// Reading a message JSON with Reader (SAX-style API).\n// The JSON should be an object with key-string pairs.\n\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/error/en.h\"\n#include <iostream>\n#include <string>\n#include <map>\n\nusing namespace std;\nusing namespace rapidjson;\n\ntypedef map<string, string> MessageMap;\n\n#if defined(__GNUC__)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++)\n#endif\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(switch-enum)\n#endif\n\nstruct MessageHandler\n    : public BaseReaderHandler<UTF8<>, MessageHandler> {\n    MessageHandler() : messages_(), state_(kExpectObjectStart), name_() {}\n\n    bool StartObject() {\n        switch (state_) {\n        case kExpectObjectStart:\n            state_ = kExpectNameOrObjectEnd;\n            return true;\n        default:\n            return false;\n        }\n    }\n\n    bool String(const char* str, SizeType length, bool) {\n        switch (state_) {\n        case kExpectNameOrObjectEnd:\n            name_ = string(str, length);\n            state_ = kExpectValue;\n            return true;\n        case kExpectValue:\n            messages_.insert(MessageMap::value_type(name_, string(str, length)));\n            state_ = kExpectNameOrObjectEnd;\n            return true;\n        default:\n            return false;\n        }\n    }\n\n    bool EndObject(SizeType) { return state_ == kExpectNameOrObjectEnd; }\n\n    bool Default() { return false; } // All other events are invalid.\n\n    MessageMap messages_;\n    enum State {\n        kExpectObjectStart,\n        kExpectNameOrObjectEnd,\n        kExpectValue\n    }state_;\n    std::string name_;\n};\n\n#if defined(__GNUC__)\nRAPIDJSON_DIAG_POP\n#endif\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n\nstatic void ParseMessages(const char* json, MessageMap& messages) {\n    Reader reader;\n    MessageHandler handler;\n    StringStream ss(json);\n    if (reader.Parse(ss, handler))\n        
messages.swap(handler.messages_);   // Only change it if success.\n    else {\n        ParseErrorCode e = reader.GetParseErrorCode();\n        size_t o = reader.GetErrorOffset();\n        cout << \"Error: \" << GetParseError_En(e) << endl;;\n        cout << \" at offset \" << o << \" near '\" << string(json).substr(o, 10) << \"...'\" << endl;\n    }\n}\n\nint main() {\n    MessageMap messages;\n\n    const char* json1 = \"{ \\\"greeting\\\" : \\\"Hello!\\\", \\\"farewell\\\" : \\\"bye-bye!\\\" }\";\n    cout << json1 << endl;\n    ParseMessages(json1, messages);\n\n    for (MessageMap::const_iterator itr = messages.begin(); itr != messages.end(); ++itr)\n        cout << itr->first << \": \" << itr->second << endl;\n\n    cout << endl << \"Parse a JSON with invalid schema.\" << endl;\n    const char* json2 = \"{ \\\"greeting\\\" : \\\"Hello!\\\", \\\"farewell\\\" : \\\"bye-bye!\\\", \\\"foo\\\" : {} }\";\n    cout << json2 << endl;\n    ParseMessages(json2, messages);\n\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/parsebyparts/parsebyparts.cpp",
    "content": "// Example of parsing JSON to document by parts.\n\n// Using C++11 threads\n// Temporarily disable for clang (older version) due to incompatibility with libstdc++\n#if (__cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1700)) && !defined(__clang__)\n\n#include \"rapidjson/document.h\"\n#include \"rapidjson/error/en.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/ostreamwrapper.h\"\n#include <condition_variable>\n#include <iostream>\n#include <mutex>\n#include <thread>\n\nusing namespace rapidjson;\n\ntemplate<unsigned parseFlags = kParseDefaultFlags>\nclass AsyncDocumentParser {\npublic:\n    AsyncDocumentParser(Document& d)\n        : stream_(*this)\n        , d_(d)\n        , parseThread_()\n        , mutex_()\n        , notEmpty_()\n        , finish_()\n        , completed_()\n    {\n        // Create and execute thread after all member variables are initialized.\n        parseThread_ = std::thread(&AsyncDocumentParser::Parse, this);\n    }\n\n    ~AsyncDocumentParser() {\n        if (!parseThread_.joinable())\n            return;\n\n        {        \n            std::unique_lock<std::mutex> lock(mutex_);\n\n            // Wait until the buffer is read up (or parsing is completed)\n            while (!stream_.Empty() && !completed_)\n                finish_.wait(lock);\n\n            // Automatically append '\\0' as the terminator in the stream.\n            static const char terminator[] = \"\";\n            stream_.src_ = terminator;\n            stream_.end_ = terminator + 1;\n            notEmpty_.notify_one(); // unblock the AsyncStringStream\n        }\n\n        parseThread_.join();\n    }\n\n    void ParsePart(const char* buffer, size_t length) {\n        std::unique_lock<std::mutex> lock(mutex_);\n        \n        // Wait until the buffer is read up (or parsing is completed)\n        while (!stream_.Empty() && !completed_)\n            finish_.wait(lock);\n\n        // Stop further parsing if the parsing process is 
completed.\n        if (completed_)\n            return;\n\n        // Set the buffer to stream and unblock the AsyncStringStream\n        stream_.src_ = buffer;\n        stream_.end_ = buffer + length;\n        notEmpty_.notify_one();\n    }\n\nprivate:\n    void Parse() {\n        d_.ParseStream<parseFlags>(stream_);\n\n        // The stream may not be fully read, notify finish anyway to unblock ParsePart()\n        std::unique_lock<std::mutex> lock(mutex_);\n        completed_ = true;      // Parsing process is completed\n        finish_.notify_one();   // Unblock ParsePart() or destructor if they are waiting.\n    }\n\n    struct AsyncStringStream {\n        typedef char Ch;\n\n        AsyncStringStream(AsyncDocumentParser& parser) : parser_(parser), src_(), end_(), count_() {}\n\n        char Peek() const {\n            std::unique_lock<std::mutex> lock(parser_.mutex_);\n\n            // If nothing in stream, block to wait.\n            while (Empty())\n                parser_.notEmpty_.wait(lock);\n\n            return *src_;\n        }\n\n        char Take() {\n            std::unique_lock<std::mutex> lock(parser_.mutex_);\n\n            // If nothing in stream, block to wait.\n            while (Empty())\n                parser_.notEmpty_.wait(lock);\n\n            count_++;\n            char c = *src_++;\n\n            // If all stream is read up, notify that the stream is finish.\n            if (Empty())\n                parser_.finish_.notify_one();\n\n            return c;\n        }\n\n        size_t Tell() const { return count_; }\n\n        // Not implemented\n        char* PutBegin() { return 0; }\n        void Put(char) {}\n        void Flush() {}\n        size_t PutEnd(char*) { return 0; }\n\n        bool Empty() const { return src_ == end_; }\n\n        AsyncDocumentParser& parser_;\n        const char* src_;     //!< Current read position.\n        const char* end_;     //!< End of buffer\n        size_t count_;        //!< Number of characters 
taken so far.\n    };\n\n    AsyncStringStream stream_;\n    Document& d_;\n    std::thread parseThread_;\n    std::mutex mutex_;\n    std::condition_variable notEmpty_;\n    std::condition_variable finish_;\n    bool completed_;\n};\n\nint main() {\n    Document d;\n\n    {\n        AsyncDocumentParser<> parser(d);\n\n        const char json1[] = \" { \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : tr\";\n        //const char json1[] = \" { \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : trX\"; // For test parsing error\n        const char json2[] = \"ue, \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.14\";\n        const char json3[] = \"16, \\\"a\\\":[1, 2, 3, 4] } \";\n\n        parser.ParsePart(json1, sizeof(json1) - 1);\n        parser.ParsePart(json2, sizeof(json2) - 1);\n        parser.ParsePart(json3, sizeof(json3) - 1);\n    }\n\n    if (d.HasParseError()) {\n        std::cout << \"Error at offset \" << d.GetErrorOffset() << \": \" << GetParseError_En(d.GetParseError()) << std::endl;\n        return EXIT_FAILURE;\n    }\n    \n    // Stringify the JSON to cout\n    OStreamWrapper os(std::cout);\n    Writer<OStreamWrapper> writer(os);\n    d.Accept(writer);\n    std::cout << std::endl;\n\n    return EXIT_SUCCESS;\n}\n\n#else // Not supporting C++11 \n\n#include <iostream>\nint main() {\n    std::cout << \"This example requires C++11 compiler\" << std::endl;\n}\n\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/pretty/pretty.cpp",
    "content": "// JSON pretty formatting example\n// This example can only handle UTF-8. For handling other encodings, see prettyauto example.\n\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/prettywriter.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/error/en.h\"\n\nusing namespace rapidjson;\n\nint main(int, char*[]) {\n    // Prepare reader and input stream.\n    Reader reader;\n    char readBuffer[65536];\n    FileReadStream is(stdin, readBuffer, sizeof(readBuffer));\n\n    // Prepare writer and output stream.\n    char writeBuffer[65536];\n    FileWriteStream os(stdout, writeBuffer, sizeof(writeBuffer));\n    PrettyWriter<FileWriteStream> writer(os);\n\n    // JSON reader parse from the input stream and let writer generate the output.\n    if (!reader.Parse<kParseValidateEncodingFlag>(is, writer)) {\n        fprintf(stderr, \"\\nError(%u): %s\\n\", static_cast<unsigned>(reader.GetErrorOffset()), GetParseError_En(reader.GetParseErrorCode()));\n        return 1;\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/prettyauto/prettyauto.cpp",
    "content": "// JSON pretty formatting example\n// This example can handle UTF-8/UTF-16LE/UTF-16BE/UTF-32LE/UTF-32BE.\n// The input firstly convert to UTF8, and then write to the original encoding with pretty formatting.\n\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/prettywriter.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/encodedstream.h\"    // NEW\n#include \"rapidjson/error/en.h\"\n#ifdef _WIN32\n#include <fcntl.h>\n#include <io.h>\n#endif\n\nusing namespace rapidjson;\n\nint main(int, char*[]) {\n#ifdef _WIN32\n    // Prevent Windows converting between CR+LF and LF\n    _setmode(_fileno(stdin), _O_BINARY);    // NEW\n    _setmode(_fileno(stdout), _O_BINARY);   // NEW\n#endif\n\n    // Prepare reader and input stream.\n    //Reader reader;\n    GenericReader<AutoUTF<unsigned>, UTF8<> > reader;       // CHANGED\n    char readBuffer[65536];\n    FileReadStream is(stdin, readBuffer, sizeof(readBuffer));\n    AutoUTFInputStream<unsigned, FileReadStream> eis(is);   // NEW\n\n    // Prepare writer and output stream.\n    char writeBuffer[65536];\n    FileWriteStream os(stdout, writeBuffer, sizeof(writeBuffer));\n\n#if 1\n    // Use the same Encoding of the input. 
Also use BOM according to input.\n    typedef AutoUTFOutputStream<unsigned, FileWriteStream> OutputStream;    // NEW\n    OutputStream eos(os, eis.GetType(), eis.HasBOM());                      // NEW\n    PrettyWriter<OutputStream, UTF8<>, AutoUTF<unsigned> > writer(eos);     // CHANGED\n#else\n    // You may also use static bound encoding type, such as output to UTF-16LE with BOM\n    typedef EncodedOutputStream<UTF16LE<>,FileWriteStream> OutputStream;    // NEW\n    OutputStream eos(os, true);                                             // NEW\n    PrettyWriter<OutputStream, UTF8<>, UTF16LE<> > writer(eos);             // CHANGED\n#endif\n\n    // JSON reader parse from the input stream and let writer generate the output.\n    //if (!reader.Parse<kParseValidateEncodingFlag>(is, writer)) {\n    if (!reader.Parse<kParseValidateEncodingFlag>(eis, writer)) {   // CHANGED\n        fprintf(stderr, \"\\nError(%u): %s\\n\", static_cast<unsigned>(reader.GetErrorOffset()), GetParseError_En(reader.GetParseErrorCode()));\n        return 1;\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/schemavalidator/schemavalidator.cpp",
    "content": "// Schema Validator example\n\n// The example validates JSON text from stdin with a JSON schema specified in the argument.\n\n#define RAPIDJSON_HAS_STDSTRING 1\n\n#include \"rapidjson/error/en.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/schema.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/prettywriter.h\"\n#include <string>\n#include <iostream>\n#include <sstream>\n\nusing namespace rapidjson;\n\ntypedef GenericValue<UTF8<>, CrtAllocator > ValueType;\n\n// Forward ref\nstatic void CreateErrorMessages(const ValueType& errors, size_t depth, const char* context);\n\n// Convert GenericValue to std::string\nstatic std::string GetString(const ValueType& val) {\n  std::ostringstream s;\n  if (val.IsString())\n    s << val.GetString();\n  else if (val.IsDouble())\n    s << val.GetDouble();\n  else if (val.IsUint())\n   s << val.GetUint();\n  else if (val.IsInt())\n    s << val.GetInt();\n  else if (val.IsUint64())\n    s << val.GetUint64();\n  else if (val.IsInt64())\n    s <<  val.GetInt64();\n  else if (val.IsBool() && val.GetBool())\n    s << \"true\";\n  else if (val.IsBool())\n    s << \"false\";\n  else if (val.IsFloat())\n    s << val.GetFloat();\n  return s.str();\n}\n\n// Create the error message for a named error\n// The error object can either be empty or contain at least member properties:\n// {\"errorCode\": <code>, \"instanceRef\": \"<pointer>\", \"schemaRef\": \"<pointer>\" }\n// Additional properties may be present for use as inserts.\n// An \"errors\" property may be present if there are child errors.\nstatic void HandleError(const char* errorName, const ValueType& error, size_t depth, const char* context) {\n  if (!error.ObjectEmpty()) {\n    // Get error code and look up error message text (English)\n    int code = error[\"errorCode\"].GetInt();\n    std::string message(GetValidateError_En(static_cast<ValidateErrorCode>(code)));\n    // For each member property in the error, see if its name exists 
as an insert in the error message and if so replace with the stringified property value\n    // So for example - \"Number '%actual' is not a multiple of the 'multipleOf' value '%expected'.\" - we would expect \"actual\" and \"expected\" members.\n    for (ValueType::ConstMemberIterator insertsItr = error.MemberBegin();\n      insertsItr != error.MemberEnd(); ++insertsItr) {\n      std::string insertName(\"%\");\n      insertName += insertsItr->name.GetString(); // eg \"%actual\"\n      size_t insertPos = message.find(insertName);\n      if (insertPos != std::string::npos) {\n        std::string insertString(\"\");\n        const ValueType &insert = insertsItr->value;\n        if (insert.IsArray()) {\n          // Member is an array so create comma-separated list of items for the insert string\n          for (ValueType::ConstValueIterator itemsItr = insert.Begin(); itemsItr != insert.End(); ++itemsItr) {\n            if (itemsItr != insert.Begin()) insertString += \",\";\n            insertString += GetString(*itemsItr);\n          }\n        } else {\n          insertString += GetString(insert);\n        }\n        message.replace(insertPos, insertName.length(), insertString);\n      }\n    }\n    // Output error message, references, context\n    std::string indent(depth * 2, ' ');\n    std::cout << indent << \"Error Name: \" << errorName << std::endl;\n    std::cout << indent << \"Message: \" << message.c_str() << std::endl;\n    std::cout << indent << \"Instance: \" << error[\"instanceRef\"].GetString() << std::endl;\n    std::cout << indent << \"Schema: \" << error[\"schemaRef\"].GetString() << std::endl;\n    if (depth > 0) std::cout << indent << \"Context: \" << context << std::endl;\n    std::cout << std::endl;\n\n    // If child errors exist, apply the process recursively to each error structure.\n    // This occurs for \"oneOf\", \"allOf\", \"anyOf\" and \"dependencies\" errors, so pass the error name as context.\n    if (error.HasMember(\"errors\")) {\n    
  depth++;\n      const ValueType &childErrors = error[\"errors\"];\n      if (childErrors.IsArray()) {\n        // Array - each item is an error structure - example\n        // \"anyOf\": {\"errorCode\": ..., \"errors\":[{\"pattern\": {\"errorCode\\\": ...\\\"}}, {\"pattern\": {\"errorCode\\\": ...}}]\n        for (ValueType::ConstValueIterator errorsItr = childErrors.Begin();\n             errorsItr != childErrors.End(); ++errorsItr) {\n          CreateErrorMessages(*errorsItr, depth, errorName);\n        }\n      } else if (childErrors.IsObject()) {\n        // Object - each member is an error structure - example\n        // \"dependencies\": {\"errorCode\": ..., \"errors\": {\"address\": {\"required\": {\"errorCode\": ...}}, \"name\": {\"required\": {\"errorCode\": ...}}}\n        for (ValueType::ConstMemberIterator propsItr = childErrors.MemberBegin();\n             propsItr != childErrors.MemberEnd(); ++propsItr) {\n          CreateErrorMessages(propsItr->value, depth, errorName);\n        }\n      }\n    }\n  }\n}\n\n// Create error message for all errors in an error structure\n// Context is used to indicate whether the error structure has a parent 'dependencies', 'allOf', 'anyOf' or 'oneOf' error\nstatic void CreateErrorMessages(const ValueType& errors, size_t depth = 0, const char* context = 0) {\n    // Each member property contains one or more errors of a given type\n    for (ValueType::ConstMemberIterator errorTypeItr = errors.MemberBegin(); errorTypeItr != errors.MemberEnd(); ++errorTypeItr) {\n        const char* errorName = errorTypeItr->name.GetString();\n        const ValueType& errorContent = errorTypeItr->value;\n        if (errorContent.IsArray()) {\n            // Member is an array where each item is an error - eg \"type\": [{\"errorCode\": ...}, {\"errorCode\": ...}]\n            for (ValueType::ConstValueIterator contentItr = errorContent.Begin(); contentItr != errorContent.End(); ++contentItr) {\n                HandleError(errorName, 
*contentItr, depth, context);\n            }\n        } else if (errorContent.IsObject()) {\n            // Member is an object which is a single error - eg \"type\": {\"errorCode\": ... }\n            HandleError(errorName, errorContent, depth, context);\n        }\n    }\n}\n\nint main(int argc, char *argv[]) {\n    if (argc != 2) {\n        fprintf(stderr, \"Usage: schemavalidator schema.json < input.json\\n\");\n        return EXIT_FAILURE;\n    }\n\n    // Read a JSON schema from file into Document\n    Document d;\n    char buffer[4096];\n\n    {\n        FILE *fp = fopen(argv[1], \"r\");\n        if (!fp) {\n            printf(\"Schema file '%s' not found\\n\", argv[1]);\n            return -1;\n        }\n        FileReadStream fs(fp, buffer, sizeof(buffer));\n        d.ParseStream(fs);\n        if (d.HasParseError()) {\n            fprintf(stderr, \"Schema file '%s' is not a valid JSON\\n\", argv[1]);\n            fprintf(stderr, \"Error(offset %u): %s\\n\",\n                static_cast<unsigned>(d.GetErrorOffset()),\n                GetParseError_En(d.GetParseError()));\n            fclose(fp);\n            return EXIT_FAILURE;\n        }\n        fclose(fp);\n    }\n    \n    // Then convert the Document into SchemaDocument\n    SchemaDocument sd(d);\n\n    // Use reader to parse the JSON in stdin, and forward SAX events to validator\n    SchemaValidator validator(sd);\n    Reader reader;\n    FileReadStream is(stdin, buffer, sizeof(buffer));\n    if (!reader.Parse(is, validator) && reader.GetParseErrorCode() != kParseErrorTermination) {\n        // Schema validator error would cause kParseErrorTermination, which will handle it in next step.\n        fprintf(stderr, \"Input is not a valid JSON\\n\");\n        fprintf(stderr, \"Error(offset %u): %s\\n\",\n            static_cast<unsigned>(reader.GetErrorOffset()),\n            GetParseError_En(reader.GetParseErrorCode()));\n    }\n\n    // Check the validation result\n    if (validator.IsValid()) {\n      
  printf(\"Input JSON is valid.\\n\");\n        return EXIT_SUCCESS;\n    }\n    else {\n        printf(\"Input JSON is invalid.\\n\");\n        StringBuffer sb;\n        validator.GetInvalidSchemaPointer().StringifyUriFragment(sb);\n        fprintf(stderr, \"Invalid schema: %s\\n\", sb.GetString());\n        fprintf(stderr, \"Invalid keyword: %s\\n\", validator.GetInvalidSchemaKeyword());\n        fprintf(stderr, \"Invalid code: %d\\n\", validator.GetInvalidSchemaCode());\n        fprintf(stderr, \"Invalid message: %s\\n\", GetValidateError_En(validator.GetInvalidSchemaCode()));\n        sb.Clear();\n        validator.GetInvalidDocumentPointer().StringifyUriFragment(sb);\n        fprintf(stderr, \"Invalid document: %s\\n\", sb.GetString());\n        // Detailed violation report is available as a JSON value\n        sb.Clear();\n        PrettyWriter<StringBuffer> w(sb);\n        validator.GetError().Accept(w);\n        fprintf(stderr, \"Error report:\\n%s\\n\", sb.GetString());\n        CreateErrorMessages(validator.GetError());\n        return EXIT_FAILURE;\n    }\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/serialize/serialize.cpp",
    "content": "// Serialize example\n// This example shows writing JSON string with writer directly.\n\n#include \"rapidjson/prettywriter.h\" // for stringify JSON\n#include <cstdio>\n#include <string>\n#include <vector>\n\nusing namespace rapidjson;\n\nclass Person {\npublic:\n    Person(const std::string& name, unsigned age) : name_(name), age_(age) {}\n    Person(const Person& rhs) : name_(rhs.name_), age_(rhs.age_) {}\n    virtual ~Person();\n\n    Person& operator=(const Person& rhs) {\n        name_ = rhs.name_;\n        age_ = rhs.age_;\n        return *this;\n    }\n\nprotected:\n    template <typename Writer>\n    void Serialize(Writer& writer) const {\n        // This base class just write out name-value pairs, without wrapping within an object.\n        writer.String(\"name\");\n#if RAPIDJSON_HAS_STDSTRING\n        writer.String(name_);\n#else\n        writer.String(name_.c_str(), static_cast<SizeType>(name_.length())); // Supplying length of string is faster.\n#endif\n        writer.String(\"age\");\n        writer.Uint(age_);\n    }\n\nprivate:\n    std::string name_;\n    unsigned age_;\n};\n\nPerson::~Person() {\n}\n\nclass Education {\npublic:\n    Education(const std::string& school, double GPA) : school_(school), GPA_(GPA) {}\n    Education(const Education& rhs) : school_(rhs.school_), GPA_(rhs.GPA_) {}\n\n    template <typename Writer>\n    void Serialize(Writer& writer) const {\n        writer.StartObject();\n        \n        writer.String(\"school\");\n#if RAPIDJSON_HAS_STDSTRING\n        writer.String(school_);\n#else\n        writer.String(school_.c_str(), static_cast<SizeType>(school_.length()));\n#endif\n\n        writer.String(\"GPA\");\n        writer.Double(GPA_);\n\n        writer.EndObject();\n    }\n\nprivate:\n    std::string school_;\n    double GPA_;\n};\n\nclass Dependent : public Person {\npublic:\n    Dependent(const std::string& name, unsigned age, Education* education = 0) : Person(name, age), education_(education) {}\n    
Dependent(const Dependent& rhs) : Person(rhs), education_(0) { education_ = (rhs.education_ == 0) ? 0 : new Education(*rhs.education_); }\n    virtual ~Dependent();\n\n    Dependent& operator=(const Dependent& rhs) {\n        if (this == &rhs)\n            return *this;\n        delete education_;\n        education_ = (rhs.education_ == 0) ? 0 : new Education(*rhs.education_);\n        return *this;\n    }\n\n    template <typename Writer>\n    void Serialize(Writer& writer) const {\n        writer.StartObject();\n\n        Person::Serialize(writer);\n\n        writer.String(\"education\");\n        if (education_)\n            education_->Serialize(writer);\n        else\n            writer.Null();\n\n        writer.EndObject();\n    }\n\nprivate:\n\n    Education *education_;\n};\n\nDependent::~Dependent() {\n    delete education_; \n}\n\nclass Employee : public Person {\npublic:\n    Employee(const std::string& name, unsigned age, bool married) : Person(name, age), dependents_(), married_(married) {}\n    Employee(const Employee& rhs) : Person(rhs), dependents_(rhs.dependents_), married_(rhs.married_) {}\n    virtual ~Employee();\n\n    Employee& operator=(const Employee& rhs) {\n        static_cast<Person&>(*this) = rhs;\n        dependents_ = rhs.dependents_;\n        married_ = rhs.married_;\n        return *this;\n    }\n\n    void AddDependent(const Dependent& dependent) {\n        dependents_.push_back(dependent);\n    }\n\n    template <typename Writer>\n    void Serialize(Writer& writer) const {\n        writer.StartObject();\n\n        Person::Serialize(writer);\n\n        writer.String(\"married\");\n        writer.Bool(married_);\n\n        writer.String((\"dependents\"));\n        writer.StartArray();\n        for (std::vector<Dependent>::const_iterator dependentItr = dependents_.begin(); dependentItr != dependents_.end(); ++dependentItr)\n            dependentItr->Serialize(writer);\n        writer.EndArray();\n\n        writer.EndObject();\n    
}\n\nprivate:\n    std::vector<Dependent> dependents_;\n    bool married_;\n};\n\nEmployee::~Employee() {\n}\n\nint main(int, char*[]) {\n    std::vector<Employee> employees;\n\n    employees.push_back(Employee(\"Milo YIP\", 34, true));\n    employees.back().AddDependent(Dependent(\"Lua YIP\", 3, new Education(\"Happy Kindergarten\", 3.5)));\n    employees.back().AddDependent(Dependent(\"Mio YIP\", 1));\n\n    employees.push_back(Employee(\"Percy TSE\", 30, false));\n\n    StringBuffer sb;\n    PrettyWriter<StringBuffer> writer(sb);\n\n    writer.StartArray();\n    for (std::vector<Employee>::const_iterator employeeItr = employees.begin(); employeeItr != employees.end(); ++employeeItr)\n        employeeItr->Serialize(writer);\n    writer.EndArray();\n\n    puts(sb.GetString());\n\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/simpledom/simpledom.cpp",
    "content": "// JSON simple example\n// This example does not handle errors.\n\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include <iostream>\n\nusing namespace rapidjson;\n\nint main() {\n    // 1. Parse a JSON string into DOM.\n    const char* json = \"{\\\"project\\\":\\\"rapidjson\\\",\\\"stars\\\":10}\";\n    Document d;\n    d.Parse(json);\n\n    // 2. Modify it by DOM.\n    Value& s = d[\"stars\"];\n    s.SetInt(s.GetInt() + 1);\n\n    // 3. Stringify the DOM\n    StringBuffer buffer;\n    Writer<StringBuffer> writer(buffer);\n    d.Accept(writer);\n\n    // Output {\"project\":\"rapidjson\",\"stars\":11}\n    std::cout << buffer.GetString() << std::endl;\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/simplepullreader/simplepullreader.cpp",
    "content": "#include \"rapidjson/reader.h\"\n#include <iostream>\n#include <sstream>\n\nusing namespace rapidjson;\nusing namespace std;\n\n// If you can require C++11, you could use std::to_string here\ntemplate <typename T> std::string stringify(T x) {\n    std::stringstream ss;\n    ss << x;\n    return ss.str();\n}\n\nstruct MyHandler {\n    const char* type;\n    std::string data;\n    \n    MyHandler() : type(), data() {}\n\n    bool Null() { type = \"Null\"; data.clear(); return true; }\n    bool Bool(bool b) { type = \"Bool:\"; data = b? \"true\": \"false\"; return true; }\n    bool Int(int i) { type = \"Int:\"; data = stringify(i); return true; }\n    bool Uint(unsigned u) { type = \"Uint:\"; data = stringify(u); return true; }\n    bool Int64(int64_t i) { type = \"Int64:\"; data = stringify(i); return true; }\n    bool Uint64(uint64_t u) { type = \"Uint64:\"; data = stringify(u); return true; }\n    bool Double(double d) { type = \"Double:\"; data = stringify(d); return true; }\n    bool RawNumber(const char* str, SizeType length, bool) { type = \"Number:\"; data = std::string(str, length); return true; }\n    bool String(const char* str, SizeType length, bool) { type = \"String:\"; data = std::string(str, length); return true; }\n    bool StartObject() { type = \"StartObject\"; data.clear(); return true; }\n    bool Key(const char* str, SizeType length, bool) { type = \"Key:\"; data = std::string(str, length); return true; }\n    bool EndObject(SizeType memberCount) { type = \"EndObject:\"; data = stringify(memberCount); return true; }\n    bool StartArray() { type = \"StartArray\"; data.clear(); return true; }\n    bool EndArray(SizeType elementCount) { type = \"EndArray:\"; data = stringify(elementCount); return true; }\nprivate:\n    MyHandler(const MyHandler& noCopyConstruction);\n    MyHandler& operator=(const MyHandler& noAssignment);\n};\n\nint main() {\n    const char json[] = \" { \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : 
false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3, 4] } \";\n\n    MyHandler handler;\n    Reader reader;\n    StringStream ss(json);\n    reader.IterativeParseInit();\n    while (!reader.IterativeParseComplete()) {\n        reader.IterativeParseNext<kParseDefaultFlags>(ss, handler);\n        cout << handler.type << handler.data << endl;\n    }\n\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/simplereader/simplereader.cpp",
    "content": "#include \"rapidjson/reader.h\"\n#include <iostream>\n\nusing namespace rapidjson;\nusing namespace std;\n\nstruct MyHandler {\n    bool Null() { cout << \"Null()\" << endl; return true; }\n    bool Bool(bool b) { cout << \"Bool(\" << boolalpha << b << \")\" << endl; return true; }\n    bool Int(int i) { cout << \"Int(\" << i << \")\" << endl; return true; }\n    bool Uint(unsigned u) { cout << \"Uint(\" << u << \")\" << endl; return true; }\n    bool Int64(int64_t i) { cout << \"Int64(\" << i << \")\" << endl; return true; }\n    bool Uint64(uint64_t u) { cout << \"Uint64(\" << u << \")\" << endl; return true; }\n    bool Double(double d) { cout << \"Double(\" << d << \")\" << endl; return true; }\n    bool RawNumber(const char* str, SizeType length, bool copy) { \n        cout << \"Number(\" << str << \", \" << length << \", \" << boolalpha << copy << \")\" << endl;\n        return true;\n    }\n    bool String(const char* str, SizeType length, bool copy) { \n        cout << \"String(\" << str << \", \" << length << \", \" << boolalpha << copy << \")\" << endl;\n        return true;\n    }\n    bool StartObject() { cout << \"StartObject()\" << endl; return true; }\n    bool Key(const char* str, SizeType length, bool copy) {\n        cout << \"Key(\" << str << \", \" << length << \", \" << boolalpha << copy << \")\" << endl;\n        return true;\n    }\n    bool EndObject(SizeType memberCount) { cout << \"EndObject(\" << memberCount << \")\" << endl; return true; }\n    bool StartArray() { cout << \"StartArray()\" << endl; return true; }\n    bool EndArray(SizeType elementCount) { cout << \"EndArray(\" << elementCount << \")\" << endl; return true; }\n};\n\nint main() {\n    const char json[] = \" { \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3, 4] } \";\n\n    MyHandler handler;\n    Reader reader;\n    StringStream ss(json);\n    reader.Parse(ss, 
handler);\n\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/simplewriter/simplewriter.cpp",
    "content": "#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include <iostream>\n\nusing namespace rapidjson;\nusing namespace std;\n\nint main() {\n    StringBuffer s;\n    Writer<StringBuffer> writer(s);\n    \n    writer.StartObject();               // Between StartObject()/EndObject(), \n    writer.Key(\"hello\");                // output a key,\n    writer.String(\"world\");             // follow by a value.\n    writer.Key(\"t\");\n    writer.Bool(true);\n    writer.Key(\"f\");\n    writer.Bool(false);\n    writer.Key(\"n\");\n    writer.Null();\n    writer.Key(\"i\");\n    writer.Uint(123);\n    writer.Key(\"pi\");\n    writer.Double(3.1416);\n    writer.Key(\"a\");\n    writer.StartArray();                // Between StartArray()/EndArray(),\n    for (unsigned i = 0; i < 4; i++)\n        writer.Uint(i);                 // all values are elements of the array.\n    writer.EndArray();\n    writer.EndObject();\n\n    // {\"hello\":\"world\",\"t\":true,\"f\":false,\"n\":null,\"i\":123,\"pi\":3.1416,\"a\":[0,1,2,3]}\n    cout << s.GetString() << endl;\n\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/example/tutorial/tutorial.cpp",
    "content": "// Hello World example\n// This example shows basic usage of DOM-style API.\n\n#include \"rapidjson/document.h\"     // rapidjson's DOM-style API\n#include \"rapidjson/prettywriter.h\" // for stringify JSON\n#include <cstdio>\n\nusing namespace rapidjson;\nusing namespace std;\n\nint main(int, char*[]) {\n    ////////////////////////////////////////////////////////////////////////////\n    // 1. Parse a JSON text string to a document.\n\n    const char json[] = \" { \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3, 4] } \";\n    printf(\"Original JSON:\\n %s\\n\", json);\n\n    Document document;  // Default template parameter uses UTF8 and MemoryPoolAllocator.\n\n#if 0\n    // \"normal\" parsing, decode strings to new buffers. Can use other input stream via ParseStream().\n    if (document.Parse(json).HasParseError())\n        return 1;\n#else\n    // In-situ parsing, decode strings directly in the source string. Source must be string.\n    char buffer[sizeof(json)];\n    memcpy(buffer, json, sizeof(json));\n    if (document.ParseInsitu(buffer).HasParseError())\n        return 1;\n#endif\n\n    printf(\"\\nParsing to document succeeded.\\n\");\n\n    ////////////////////////////////////////////////////////////////////////////\n    // 2. Access values in document. \n\n    printf(\"\\nAccess values in document:\\n\");\n    assert(document.IsObject());    // Document is a JSON value represents the root of DOM. 
Root can be either an object or array.\n\n    assert(document.HasMember(\"hello\"));\n    assert(document[\"hello\"].IsString());\n    printf(\"hello = %s\\n\", document[\"hello\"].GetString());\n\n    // Since version 0.2, you can use single lookup to check the existing of member and its value:\n    Value::MemberIterator hello = document.FindMember(\"hello\");\n    assert(hello != document.MemberEnd());\n    assert(hello->value.IsString());\n    assert(strcmp(\"world\", hello->value.GetString()) == 0);\n    (void)hello;\n\n    assert(document[\"t\"].IsBool());     // JSON true/false are bool. Can also uses more specific function IsTrue().\n    printf(\"t = %s\\n\", document[\"t\"].GetBool() ? \"true\" : \"false\");\n\n    assert(document[\"f\"].IsBool());\n    printf(\"f = %s\\n\", document[\"f\"].GetBool() ? \"true\" : \"false\");\n\n    printf(\"n = %s\\n\", document[\"n\"].IsNull() ? \"null\" : \"?\");\n\n    assert(document[\"i\"].IsNumber());   // Number is a JSON type, but C++ needs more specific type.\n    assert(document[\"i\"].IsInt());      // In this case, IsUint()/IsInt64()/IsUint64() also return true.\n    printf(\"i = %d\\n\", document[\"i\"].GetInt()); // Alternative (int)document[\"i\"]\n\n    assert(document[\"pi\"].IsNumber());\n    assert(document[\"pi\"].IsDouble());\n    printf(\"pi = %g\\n\", document[\"pi\"].GetDouble());\n\n    {\n        const Value& a = document[\"a\"]; // Using a reference for consecutive access is handy and faster.\n        assert(a.IsArray());\n        for (SizeType i = 0; i < a.Size(); i++) // rapidjson uses SizeType instead of size_t.\n            printf(\"a[%d] = %d\\n\", i, a[i].GetInt());\n        \n        int y = a[0].GetInt();\n        (void)y;\n\n        // Iterating array with iterators\n        printf(\"a = \");\n        for (Value::ConstValueIterator itr = a.Begin(); itr != a.End(); ++itr)\n            printf(\"%d \", itr->GetInt());\n        printf(\"\\n\");\n    }\n\n    // Iterating object members\n    
static const char* kTypeNames[] = { \"Null\", \"False\", \"True\", \"Object\", \"Array\", \"String\", \"Number\" };\n    for (Value::ConstMemberIterator itr = document.MemberBegin(); itr != document.MemberEnd(); ++itr)\n        printf(\"Type of member %s is %s\\n\", itr->name.GetString(), kTypeNames[itr->value.GetType()]);\n\n    ////////////////////////////////////////////////////////////////////////////\n    // 3. Modify values in document.\n\n    // Change i to a bigger number\n    {\n        uint64_t f20 = 1;   // compute factorial of 20\n        for (uint64_t j = 1; j <= 20; j++)\n            f20 *= j;\n        document[\"i\"] = f20;    // Alternate form: document[\"i\"].SetUint64(f20)\n        assert(!document[\"i\"].IsInt()); // No longer can be cast as int or uint.\n    }\n\n    // Adding values to array.\n    {\n        Value& a = document[\"a\"];   // This time we uses non-const reference.\n        Document::AllocatorType& allocator = document.GetAllocator();\n        for (int i = 5; i <= 10; i++)\n            a.PushBack(i, allocator);   // May look a bit strange, allocator is needed for potentially realloc. 
We normally uses the document's.\n\n        // Fluent API\n        a.PushBack(\"Lua\", allocator).PushBack(\"Mio\", allocator);\n    }\n\n    // Making string values.\n\n    // This version of SetString() just store the pointer to the string.\n    // So it is for literal and string that exists within value's life-cycle.\n    {\n        document[\"hello\"] = \"rapidjson\";    // This will invoke strlen()\n        // Faster version:\n        // document[\"hello\"].SetString(\"rapidjson\", 9);\n    }\n\n    // This version of SetString() needs an allocator, which means it will allocate a new buffer and copy the the string into the buffer.\n    Value author;\n    {\n        char buffer2[10];\n        int len = sprintf(buffer2, \"%s %s\", \"Milo\", \"Yip\");  // synthetic example of dynamically created string.\n\n        author.SetString(buffer2, static_cast<SizeType>(len), document.GetAllocator());\n        // Shorter but slower version:\n        // document[\"hello\"].SetString(buffer, document.GetAllocator());\n\n        // Constructor version: \n        // Value author(buffer, len, document.GetAllocator());\n        // Value author(buffer, document.GetAllocator());\n        memset(buffer2, 0, sizeof(buffer2)); // For demonstration purpose.\n    }\n    // Variable 'buffer' is unusable now but 'author' has already made a copy.\n    document.AddMember(\"author\", author, document.GetAllocator());\n\n    assert(author.IsNull());        // Move semantic for assignment. After this variable is assigned as a member, the variable becomes null.\n\n    ////////////////////////////////////////////////////////////////////////////\n    // 4. Stringify JSON\n\n    printf(\"\\nModified JSON with reformatting:\\n\");\n    StringBuffer sb;\n    PrettyWriter<StringBuffer> writer(sb);\n    document.Accept(writer);    // Accept() traverses the DOM and generates Handler events.\n    puts(sb.GetString());\n\n    return 0;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/allocators.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_ALLOCATORS_H_\n#define RAPIDJSON_ALLOCATORS_H_\n\n#include \"rapidjson.h\"\n#include \"internal/meta.h\"\n\n#include <memory>\n#include <limits>\n\n#if RAPIDJSON_HAS_CXX11\n#include <type_traits>\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n///////////////////////////////////////////////////////////////////////////////\n// Allocator\n\n/*! \\class rapidjson::Allocator\n    \\brief Concept for allocating, resizing and freeing memory block.\n    \n    Note that Malloc() and Realloc() are non-static but Free() is static.\n    \n    So if an allocator need to support Free(), it needs to put its pointer in \n    the header of memory block.\n\n\\code\nconcept Allocator {\n    static const bool kNeedFree;    //!< Whether this allocator needs to call Free().\n\n    // Allocate a memory block.\n    // \\param size of the memory block in bytes.\n    // \\returns pointer to the memory block.\n    void* Malloc(size_t size);\n\n    // Resize a memory block.\n    // \\param originalPtr The pointer to current memory block. Null pointer is permitted.\n    // \\param originalSize The current size in bytes. 
(Design issue: since some allocator may not book-keep this, explicitly pass to it can save memory.)\n    // \\param newSize the new size in bytes.\n    void* Realloc(void* originalPtr, size_t originalSize, size_t newSize);\n\n    // Free a memory block.\n    // \\param pointer to the memory block. Null pointer is permitted.\n    static void Free(void *ptr);\n};\n\\endcode\n*/\n\n\n/*! \\def RAPIDJSON_ALLOCATOR_DEFAULT_CHUNK_CAPACITY\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief User-defined kDefaultChunkCapacity definition.\n\n    User can define this as any \\c size that is a power of 2.\n*/\n\n#ifndef RAPIDJSON_ALLOCATOR_DEFAULT_CHUNK_CAPACITY\n#define RAPIDJSON_ALLOCATOR_DEFAULT_CHUNK_CAPACITY (64 * 1024)\n#endif\n\n\n///////////////////////////////////////////////////////////////////////////////\n// CrtAllocator\n\n//! C-runtime library allocator.\n/*! This class is just wrapper for standard C library memory routines.\n    \\note implements Allocator concept\n*/\nclass CrtAllocator {\npublic:\n    static const bool kNeedFree = true;\n    void* Malloc(size_t size) { \n        if (size) //  behavior of malloc(0) is implementation defined.\n            return RAPIDJSON_MALLOC(size);\n        else\n            return NULL; // standardize to returning NULL.\n    }\n    void* Realloc(void* originalPtr, size_t originalSize, size_t newSize) {\n        (void)originalSize;\n        if (newSize == 0) {\n            RAPIDJSON_FREE(originalPtr);\n            return NULL;\n        }\n        return RAPIDJSON_REALLOC(originalPtr, newSize);\n    }\n    static void Free(void *ptr) RAPIDJSON_NOEXCEPT { RAPIDJSON_FREE(ptr); }\n\n    bool operator==(const CrtAllocator&) const RAPIDJSON_NOEXCEPT {\n        return true;\n    }\n    bool operator!=(const CrtAllocator&) const RAPIDJSON_NOEXCEPT {\n        return false;\n    }\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// MemoryPoolAllocator\n\n//! 
Default memory allocator used by the parser and DOM.\n/*! This allocator allocate memory blocks from pre-allocated memory chunks. \n\n    It does not free memory blocks. And Realloc() only allocate new memory.\n\n    The memory chunks are allocated by BaseAllocator, which is CrtAllocator by default.\n\n    User may also supply a buffer as the first chunk.\n\n    If the user-buffer is full then additional chunks are allocated by BaseAllocator.\n\n    The user-buffer is not deallocated by this allocator.\n\n    \\tparam BaseAllocator the allocator type for allocating memory chunks. Default is CrtAllocator.\n    \\note implements Allocator concept\n*/\ntemplate <typename BaseAllocator = CrtAllocator>\nclass MemoryPoolAllocator {\n    //! Chunk header for perpending to each chunk.\n    /*! Chunks are stored as a singly linked list.\n    */\n    struct ChunkHeader {\n        size_t capacity;    //!< Capacity of the chunk in bytes (excluding the header itself).\n        size_t size;        //!< Current size of allocated memory in bytes.\n        ChunkHeader *next;  //!< Next chunk in the linked list.\n    };\n\n    struct SharedData {\n        ChunkHeader *chunkHead;  //!< Head of the chunk linked-list. 
Only the head chunk serves allocation.\n        BaseAllocator* ownBaseAllocator; //!< base allocator created by this object.\n        size_t refcount;\n        bool ownBuffer;\n    };\n\n    static const size_t SIZEOF_SHARED_DATA = RAPIDJSON_ALIGN(sizeof(SharedData));\n    static const size_t SIZEOF_CHUNK_HEADER = RAPIDJSON_ALIGN(sizeof(ChunkHeader));\n\n    static inline ChunkHeader *GetChunkHead(SharedData *shared)\n    {\n        return reinterpret_cast<ChunkHeader*>(reinterpret_cast<uint8_t*>(shared) + SIZEOF_SHARED_DATA);\n    }\n    static inline uint8_t *GetChunkBuffer(SharedData *shared)\n    {\n        return reinterpret_cast<uint8_t*>(shared->chunkHead) + SIZEOF_CHUNK_HEADER;\n    }\n\n    static const size_t kDefaultChunkCapacity = RAPIDJSON_ALLOCATOR_DEFAULT_CHUNK_CAPACITY; //!< Default chunk capacity.\n\npublic:\n    static const bool kNeedFree = false;    //!< Tell users that no need to call Free() with this allocator. (concept Allocator)\n    static const bool kRefCounted = true;   //!< Tell users that this allocator is reference counted on copy\n\n    //! Constructor with chunkSize.\n    /*! \\param chunkSize The size of memory chunk. The default is kDefaultChunkSize.\n        \\param baseAllocator The allocator for allocating memory chunks.\n    */\n    explicit\n    MemoryPoolAllocator(size_t chunkSize = kDefaultChunkCapacity, BaseAllocator* baseAllocator = 0) : \n        chunk_capacity_(chunkSize),\n        baseAllocator_(baseAllocator ? baseAllocator : RAPIDJSON_NEW(BaseAllocator)()),\n        shared_(static_cast<SharedData*>(baseAllocator_ ? 
baseAllocator_->Malloc(SIZEOF_SHARED_DATA + SIZEOF_CHUNK_HEADER) : 0))\n    {\n        RAPIDJSON_ASSERT(baseAllocator_ != 0);\n        RAPIDJSON_ASSERT(shared_ != 0);\n        if (baseAllocator) {\n            shared_->ownBaseAllocator = 0;\n        }\n        else {\n            shared_->ownBaseAllocator = baseAllocator_;\n        }\n        shared_->chunkHead = GetChunkHead(shared_);\n        shared_->chunkHead->capacity = 0;\n        shared_->chunkHead->size = 0;\n        shared_->chunkHead->next = 0;\n        shared_->ownBuffer = true;\n        shared_->refcount = 1;\n    }\n\n    //! Constructor with user-supplied buffer.\n    /*! The user buffer will be used firstly. When it is full, memory pool allocates new chunk with chunk size.\n\n        The user buffer will not be deallocated when this allocator is destructed.\n\n        \\param buffer User supplied buffer.\n        \\param size Size of the buffer in bytes. It must at least larger than sizeof(ChunkHeader).\n        \\param chunkSize The size of memory chunk. 
The default is kDefaultChunkSize.\n        \\param baseAllocator The allocator for allocating memory chunks.\n    */\n    MemoryPoolAllocator(void *buffer, size_t size, size_t chunkSize = kDefaultChunkCapacity, BaseAllocator* baseAllocator = 0) :\n        chunk_capacity_(chunkSize),\n        baseAllocator_(baseAllocator),\n        shared_(static_cast<SharedData*>(AlignBuffer(buffer, size)))\n    {\n        RAPIDJSON_ASSERT(size >= SIZEOF_SHARED_DATA + SIZEOF_CHUNK_HEADER);\n        shared_->chunkHead = GetChunkHead(shared_);\n        shared_->chunkHead->capacity = size - SIZEOF_SHARED_DATA - SIZEOF_CHUNK_HEADER;\n        shared_->chunkHead->size = 0;\n        shared_->chunkHead->next = 0;\n        shared_->ownBaseAllocator = 0;\n        shared_->ownBuffer = false;\n        shared_->refcount = 1;\n    }\n\n    MemoryPoolAllocator(const MemoryPoolAllocator& rhs) RAPIDJSON_NOEXCEPT :\n        chunk_capacity_(rhs.chunk_capacity_),\n        baseAllocator_(rhs.baseAllocator_),\n        shared_(rhs.shared_)\n    {\n        RAPIDJSON_NOEXCEPT_ASSERT(shared_->refcount > 0);\n        ++shared_->refcount;\n    }\n    MemoryPoolAllocator& operator=(const MemoryPoolAllocator& rhs) RAPIDJSON_NOEXCEPT\n    {\n        RAPIDJSON_NOEXCEPT_ASSERT(rhs.shared_->refcount > 0);\n        ++rhs.shared_->refcount;\n        this->~MemoryPoolAllocator();\n        baseAllocator_ = rhs.baseAllocator_;\n        chunk_capacity_ = rhs.chunk_capacity_;\n        shared_ = rhs.shared_;\n        return *this;\n    }\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    MemoryPoolAllocator(MemoryPoolAllocator&& rhs) RAPIDJSON_NOEXCEPT :\n        chunk_capacity_(rhs.chunk_capacity_),\n        baseAllocator_(rhs.baseAllocator_),\n        shared_(rhs.shared_)\n    {\n        RAPIDJSON_NOEXCEPT_ASSERT(rhs.shared_->refcount > 0);\n        rhs.shared_ = 0;\n    }\n    MemoryPoolAllocator& operator=(MemoryPoolAllocator&& rhs) RAPIDJSON_NOEXCEPT\n    {\n        RAPIDJSON_NOEXCEPT_ASSERT(rhs.shared_->refcount > 0);\n     
   this->~MemoryPoolAllocator();\n        baseAllocator_ = rhs.baseAllocator_;\n        chunk_capacity_ = rhs.chunk_capacity_;\n        shared_ = rhs.shared_;\n        rhs.shared_ = 0;\n        return *this;\n    }\n#endif\n\n    //! Destructor.\n    /*! This deallocates all memory chunks, excluding the user-supplied buffer.\n    */\n    ~MemoryPoolAllocator() RAPIDJSON_NOEXCEPT {\n        if (!shared_) {\n            // do nothing if moved\n            return;\n        }\n        if (shared_->refcount > 1) {\n            --shared_->refcount;\n            return;\n        }\n        Clear();\n        BaseAllocator *a = shared_->ownBaseAllocator;\n        if (shared_->ownBuffer) {\n            baseAllocator_->Free(shared_);\n        }\n        RAPIDJSON_DELETE(a);\n    }\n\n    //! Deallocates all memory chunks, excluding the first/user one.\n    void Clear() RAPIDJSON_NOEXCEPT {\n        RAPIDJSON_NOEXCEPT_ASSERT(shared_->refcount > 0);\n        for (;;) {\n            ChunkHeader* c = shared_->chunkHead;\n            if (!c->next) {\n                break;\n            }\n            shared_->chunkHead = c->next;\n            baseAllocator_->Free(c);\n        }\n        shared_->chunkHead->size = 0;\n    }\n\n    //! Computes the total capacity of allocated memory chunks.\n    /*! \\return total capacity in bytes.\n    */\n    size_t Capacity() const RAPIDJSON_NOEXCEPT {\n        RAPIDJSON_NOEXCEPT_ASSERT(shared_->refcount > 0);\n        size_t capacity = 0;\n        for (ChunkHeader* c = shared_->chunkHead; c != 0; c = c->next)\n            capacity += c->capacity;\n        return capacity;\n    }\n\n    //! Computes the memory blocks allocated.\n    /*! \\return total used bytes.\n    */\n    size_t Size() const RAPIDJSON_NOEXCEPT {\n        RAPIDJSON_NOEXCEPT_ASSERT(shared_->refcount > 0);\n        size_t size = 0;\n        for (ChunkHeader* c = shared_->chunkHead; c != 0; c = c->next)\n            size += c->size;\n        return size;\n    }\n\n    //! 
Whether the allocator is shared.\n    /*! \\return true or false.\n    */\n    bool Shared() const RAPIDJSON_NOEXCEPT {\n        RAPIDJSON_NOEXCEPT_ASSERT(shared_->refcount > 0);\n        return shared_->refcount > 1;\n    }\n\n    //! Allocates a memory block. (concept Allocator)\n    void* Malloc(size_t size) {\n        RAPIDJSON_NOEXCEPT_ASSERT(shared_->refcount > 0);\n        if (!size)\n            return NULL;\n\n        size = RAPIDJSON_ALIGN(size);\n        if (RAPIDJSON_UNLIKELY(shared_->chunkHead->size + size > shared_->chunkHead->capacity))\n            if (!AddChunk(chunk_capacity_ > size ? chunk_capacity_ : size))\n                return NULL;\n\n        void *buffer = GetChunkBuffer(shared_) + shared_->chunkHead->size;\n        shared_->chunkHead->size += size;\n        return buffer;\n    }\n\n    //! Resizes a memory block (concept Allocator)\n    void* Realloc(void* originalPtr, size_t originalSize, size_t newSize) {\n        if (originalPtr == 0)\n            return Malloc(newSize);\n\n        RAPIDJSON_NOEXCEPT_ASSERT(shared_->refcount > 0);\n        if (newSize == 0)\n            return NULL;\n\n        originalSize = RAPIDJSON_ALIGN(originalSize);\n        newSize = RAPIDJSON_ALIGN(newSize);\n\n        // Do not shrink if new size is smaller than original\n        if (originalSize >= newSize)\n            return originalPtr;\n\n        // Simply expand it if it is the last allocation and there is sufficient space\n        if (originalPtr == GetChunkBuffer(shared_) + shared_->chunkHead->size - originalSize) {\n            size_t increment = static_cast<size_t>(newSize - originalSize);\n            if (shared_->chunkHead->size + increment <= shared_->chunkHead->capacity) {\n                shared_->chunkHead->size += increment;\n                return originalPtr;\n            }\n        }\n\n        // Realloc process: allocate and copy memory, do not free original buffer.\n        if (void* newBuffer = Malloc(newSize)) {\n            if 
(originalSize)\n                std::memcpy(newBuffer, originalPtr, originalSize);\n            return newBuffer;\n        }\n        else\n            return NULL;\n    }\n\n    //! Frees a memory block (concept Allocator)\n    static void Free(void *ptr) RAPIDJSON_NOEXCEPT { (void)ptr; } // Do nothing\n\n    //! Compare (equality) with another MemoryPoolAllocator\n    bool operator==(const MemoryPoolAllocator& rhs) const RAPIDJSON_NOEXCEPT {\n        RAPIDJSON_NOEXCEPT_ASSERT(shared_->refcount > 0);\n        RAPIDJSON_NOEXCEPT_ASSERT(rhs.shared_->refcount > 0);\n        return shared_ == rhs.shared_;\n    }\n    //! Compare (inequality) with another MemoryPoolAllocator\n    bool operator!=(const MemoryPoolAllocator& rhs) const RAPIDJSON_NOEXCEPT {\n        return !operator==(rhs);\n    }\n\nprivate:\n    //! Creates a new chunk.\n    /*! \\param capacity Capacity of the chunk in bytes.\n        \\return true if success.\n    */\n    bool AddChunk(size_t capacity) {\n        if (!baseAllocator_)\n            shared_->ownBaseAllocator = baseAllocator_ = RAPIDJSON_NEW(BaseAllocator)();\n        if (ChunkHeader* chunk = static_cast<ChunkHeader*>(baseAllocator_->Malloc(SIZEOF_CHUNK_HEADER + capacity))) {\n            chunk->capacity = capacity;\n            chunk->size = 0;\n            chunk->next = shared_->chunkHead;\n            shared_->chunkHead = chunk;\n            return true;\n        }\n        else\n            return false;\n    }\n\n    static inline void* AlignBuffer(void* buf, size_t &size)\n    {\n        RAPIDJSON_NOEXCEPT_ASSERT(buf != 0);\n        const uintptr_t mask = sizeof(void*) - 1;\n        const uintptr_t ubuf = reinterpret_cast<uintptr_t>(buf);\n        if (RAPIDJSON_UNLIKELY(ubuf & mask)) {\n            const uintptr_t abuf = (ubuf + mask) & ~mask;\n            RAPIDJSON_ASSERT(size >= abuf - ubuf);\n            buf = reinterpret_cast<void*>(abuf);\n            size -= abuf - ubuf;\n        }\n        return buf;\n    }\n\n    size_t 
chunk_capacity_;     //!< The minimum capacity of chunk when they are allocated.\n    BaseAllocator* baseAllocator_;  //!< base allocator for allocating memory chunks.\n    SharedData *shared_;        //!< The shared data of the allocator\n};\n\nnamespace internal {\n    template<typename, typename = void>\n    struct IsRefCounted :\n        public FalseType\n    { };\n    template<typename T>\n    struct IsRefCounted<T, typename internal::EnableIfCond<T::kRefCounted>::Type> :\n        public TrueType\n    { };\n}\n\ntemplate<typename T, typename A>\ninline T* Realloc(A& a, T* old_p, size_t old_n, size_t new_n)\n{\n    RAPIDJSON_NOEXCEPT_ASSERT(old_n <= (std::numeric_limits<size_t>::max)() / sizeof(T) && new_n <= (std::numeric_limits<size_t>::max)() / sizeof(T));\n    return static_cast<T*>(a.Realloc(old_p, old_n * sizeof(T), new_n * sizeof(T)));\n}\n\ntemplate<typename T, typename A>\ninline T *Malloc(A& a, size_t n = 1)\n{\n    return Realloc<T, A>(a, NULL, 0, n);\n}\n\ntemplate<typename T, typename A>\ninline void Free(A& a, T *p, size_t n = 1)\n{\n    static_cast<void>(Realloc<T, A>(a, p, n, 0));\n}\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++) // std::allocator can safely be inherited\n#endif\n\ntemplate <typename T, typename BaseAllocator = CrtAllocator>\nclass StdAllocator :\n    public std::allocator<T>\n{\n    typedef std::allocator<T> allocator_type;\n#if RAPIDJSON_HAS_CXX11\n    typedef std::allocator_traits<allocator_type> traits_type;\n#else\n    typedef allocator_type traits_type;\n#endif\n\npublic:\n    typedef BaseAllocator BaseAllocatorType;\n\n    StdAllocator() RAPIDJSON_NOEXCEPT :\n        allocator_type(),\n        baseAllocator_()\n    { }\n\n    StdAllocator(const StdAllocator& rhs) RAPIDJSON_NOEXCEPT :\n        allocator_type(rhs),\n        baseAllocator_(rhs.baseAllocator_)\n    { }\n\n    template<typename U>\n    StdAllocator(const StdAllocator<U, BaseAllocator>& rhs) RAPIDJSON_NOEXCEPT :\n        
allocator_type(rhs),\n        baseAllocator_(rhs.baseAllocator_)\n    { }\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    StdAllocator(StdAllocator&& rhs) RAPIDJSON_NOEXCEPT :\n        allocator_type(std::move(rhs)),\n        baseAllocator_(std::move(rhs.baseAllocator_))\n    { }\n#endif\n#if RAPIDJSON_HAS_CXX11\n    using propagate_on_container_move_assignment = std::true_type;\n    using propagate_on_container_swap = std::true_type;\n#endif\n\n    /* implicit */\n    StdAllocator(const BaseAllocator& baseAllocator) RAPIDJSON_NOEXCEPT :\n        allocator_type(),\n        baseAllocator_(baseAllocator)\n    { }\n\n    ~StdAllocator() RAPIDJSON_NOEXCEPT\n    { }\n\n    template<typename U>\n    struct rebind {\n        typedef StdAllocator<U, BaseAllocator> other;\n    };\n\n    typedef typename traits_type::size_type         size_type;\n    typedef typename traits_type::difference_type   difference_type;\n\n    typedef typename traits_type::value_type        value_type;\n    typedef typename traits_type::pointer           pointer;\n    typedef typename traits_type::const_pointer     const_pointer;\n\n#if RAPIDJSON_HAS_CXX11\n\n    typedef typename std::add_lvalue_reference<value_type>::type &reference;\n    typedef typename std::add_lvalue_reference<typename std::add_const<value_type>::type>::type &const_reference;\n\n    pointer address(reference r) const RAPIDJSON_NOEXCEPT\n    {\n        return std::addressof(r);\n    }\n    const_pointer address(const_reference r) const RAPIDJSON_NOEXCEPT\n    {\n        return std::addressof(r);\n    }\n\n    size_type max_size() const RAPIDJSON_NOEXCEPT\n    {\n        return traits_type::max_size(*this);\n    }\n\n    template <typename ...Args>\n    void construct(pointer p, Args&&... 
args)\n    {\n        traits_type::construct(*this, p, std::forward<Args>(args)...);\n    }\n    void destroy(pointer p)\n    {\n        traits_type::destroy(*this, p);\n    }\n\n#else // !RAPIDJSON_HAS_CXX11\n\n    typedef typename allocator_type::reference       reference;\n    typedef typename allocator_type::const_reference const_reference;\n\n    pointer address(reference r) const RAPIDJSON_NOEXCEPT\n    {\n        return allocator_type::address(r);\n    }\n    const_pointer address(const_reference r) const RAPIDJSON_NOEXCEPT\n    {\n        return allocator_type::address(r);\n    }\n\n    size_type max_size() const RAPIDJSON_NOEXCEPT\n    {\n        return allocator_type::max_size();\n    }\n\n    void construct(pointer p, const_reference r)\n    {\n        allocator_type::construct(p, r);\n    }\n    void destroy(pointer p)\n    {\n        allocator_type::destroy(p);\n    }\n\n#endif // !RAPIDJSON_HAS_CXX11\n\n    template <typename U>\n    U* allocate(size_type n = 1, const void* = 0)\n    {\n        return RAPIDJSON_NAMESPACE::Malloc<U>(baseAllocator_, n);\n    }\n    template <typename U>\n    void deallocate(U* p, size_type n = 1)\n    {\n        RAPIDJSON_NAMESPACE::Free<U>(baseAllocator_, p, n);\n    }\n\n    pointer allocate(size_type n = 1, const void* = 0)\n    {\n        return allocate<value_type>(n);\n    }\n    void deallocate(pointer p, size_type n = 1)\n    {\n        deallocate<value_type>(p, n);\n    }\n\n#if RAPIDJSON_HAS_CXX11\n    using is_always_equal = std::is_empty<BaseAllocator>;\n#endif\n\n    template<typename U>\n    bool operator==(const StdAllocator<U, BaseAllocator>& rhs) const RAPIDJSON_NOEXCEPT\n    {\n        return baseAllocator_ == rhs.baseAllocator_;\n    }\n    template<typename U>\n    bool operator!=(const StdAllocator<U, BaseAllocator>& rhs) const RAPIDJSON_NOEXCEPT\n    {\n        return !operator==(rhs);\n    }\n\n    //! 
rapidjson Allocator concept\n    static const bool kNeedFree = BaseAllocator::kNeedFree;\n    static const bool kRefCounted = internal::IsRefCounted<BaseAllocator>::Value;\n    void* Malloc(size_t size)\n    {\n        return baseAllocator_.Malloc(size);\n    }\n    void* Realloc(void* originalPtr, size_t originalSize, size_t newSize)\n    {\n        return baseAllocator_.Realloc(originalPtr, originalSize, newSize);\n    }\n    static void Free(void *ptr) RAPIDJSON_NOEXCEPT\n    {\n        BaseAllocator::Free(ptr);\n    }\n\nprivate:\n    template <typename, typename>\n    friend class StdAllocator; // access to StdAllocator<!T>.*\n\n    BaseAllocator baseAllocator_;\n};\n\n#if !RAPIDJSON_HAS_CXX17 // std::allocator<void> deprecated in C++17\ntemplate <typename BaseAllocator>\nclass StdAllocator<void, BaseAllocator> :\n    public std::allocator<void>\n{\n    typedef std::allocator<void> allocator_type;\n\npublic:\n    typedef BaseAllocator BaseAllocatorType;\n\n    StdAllocator() RAPIDJSON_NOEXCEPT :\n        allocator_type(),\n        baseAllocator_()\n    { }\n\n    StdAllocator(const StdAllocator& rhs) RAPIDJSON_NOEXCEPT :\n        allocator_type(rhs),\n        baseAllocator_(rhs.baseAllocator_)\n    { }\n\n    template<typename U>\n    StdAllocator(const StdAllocator<U, BaseAllocator>& rhs) RAPIDJSON_NOEXCEPT :\n        allocator_type(rhs),\n        baseAllocator_(rhs.baseAllocator_)\n    { }\n\n    /* implicit */\n    StdAllocator(const BaseAllocator& baseAllocator) RAPIDJSON_NOEXCEPT :\n        allocator_type(),\n        baseAllocator_(baseAllocator)\n    { }\n\n    ~StdAllocator() RAPIDJSON_NOEXCEPT\n    { }\n\n    template<typename U>\n    struct rebind {\n        typedef StdAllocator<U, BaseAllocator> other;\n    };\n\n    typedef typename allocator_type::value_type value_type;\n\nprivate:\n    template <typename, typename>\n    friend class StdAllocator; // access to StdAllocator<!T>.*\n\n    BaseAllocator baseAllocator_;\n};\n#endif\n\n#ifdef 
__GNUC__\nRAPIDJSON_DIAG_POP\n#endif\n\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_ALLOCATORS_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/cursorstreamwrapper.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_CURSORSTREAMWRAPPER_H_\n#define RAPIDJSON_CURSORSTREAMWRAPPER_H_\n\n#include \"stream.h\"\n\n#if defined(__GNUC__)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++)\n#endif\n\n#if defined(_MSC_VER) && _MSC_VER <= 1800\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(4702)  // unreachable code\nRAPIDJSON_DIAG_OFF(4512)  // assignment operator could not be generated\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n\n//! Cursor stream wrapper for counting line and column number if error exists.\n/*!\n    \\tparam InputStream     Any stream that implements Stream Concept\n*/\ntemplate <typename InputStream, typename Encoding = UTF8<> >\nclass CursorStreamWrapper : public GenericStreamWrapper<InputStream, Encoding> {\npublic:\n    typedef typename Encoding::Ch Ch;\n\n    CursorStreamWrapper(InputStream& is):\n        GenericStreamWrapper<InputStream, Encoding>(is), line_(1), col_(0) {}\n\n    // counting line and column number\n    Ch Take() {\n        Ch ch = this->is_.Take();\n        if(ch == '\\n') {\n            line_ ++;\n            col_ = 0;\n        } else {\n            col_ ++;\n        }\n        return ch;\n    }\n\n    //! Get the error line number, if error exists.\n    size_t GetLine() const { return line_; }\n    //! 
Get the error column number, if error exists.\n    size_t GetColumn() const { return col_; }\n\nprivate:\n    size_t line_;   //!< Current Line\n    size_t col_;    //!< Current Column\n};\n\n#if defined(_MSC_VER) && _MSC_VER <= 1800\nRAPIDJSON_DIAG_POP\n#endif\n\n#if defined(__GNUC__)\nRAPIDJSON_DIAG_POP\n#endif\n\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_CURSORSTREAMWRAPPER_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/document.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_DOCUMENT_H_\n#define RAPIDJSON_DOCUMENT_H_\n\n/*! \\file document.h */\n\n#include \"reader.h\"\n#include \"internal/meta.h\"\n#include \"internal/strfunc.h\"\n#include \"memorystream.h\"\n#include \"encodedstream.h\"\n#include <new>      // placement new\n#include <limits>\n#ifdef __cpp_lib_three_way_comparison\n#include <compare>\n#endif\n\nRAPIDJSON_DIAG_PUSH\n#ifdef __clang__\nRAPIDJSON_DIAG_OFF(padded)\nRAPIDJSON_DIAG_OFF(switch-enum)\nRAPIDJSON_DIAG_OFF(c++98-compat)\n#elif defined(_MSC_VER)\nRAPIDJSON_DIAG_OFF(4127) // conditional expression is constant\nRAPIDJSON_DIAG_OFF(4244) // conversion from kXxxFlags to 'uint16_t', possible loss of data\n#endif\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_OFF(effc++)\n#endif // __GNUC__\n\n#ifdef GetObject\n// see https://github.com/Tencent/rapidjson/issues/1448\n// a former included windows.h might have defined a macro called GetObject, which affects\n// GetObject defined here. 
This ensures the macro does not get applied\n#pragma push_macro(\"GetObject\")\n#define RAPIDJSON_WINDOWS_GETOBJECT_WORKAROUND_APPLIED\n#undef GetObject\n#endif\n\n#ifndef RAPIDJSON_NOMEMBERITERATORCLASS\n#include <iterator> // std::random_access_iterator_tag\n#endif\n\n#if RAPIDJSON_USE_MEMBERSMAP\n#include <map> // std::multimap\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n// Forward declaration.\ntemplate <typename Encoding, typename Allocator>\nclass GenericValue;\n\ntemplate <typename Encoding, typename Allocator, typename StackAllocator>\nclass GenericDocument;\n\n/*! \\def RAPIDJSON_DEFAULT_ALLOCATOR\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief Allows to choose default allocator.\n\n    User can define this to use CrtAllocator or MemoryPoolAllocator.\n*/\n#ifndef RAPIDJSON_DEFAULT_ALLOCATOR\n#define RAPIDJSON_DEFAULT_ALLOCATOR ::RAPIDJSON_NAMESPACE::MemoryPoolAllocator<::RAPIDJSON_NAMESPACE::CrtAllocator>\n#endif\n\n/*! \\def RAPIDJSON_DEFAULT_STACK_ALLOCATOR\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief Allows to choose default stack allocator for Document.\n\n    User can define this to use CrtAllocator or MemoryPoolAllocator.\n*/\n#ifndef RAPIDJSON_DEFAULT_STACK_ALLOCATOR\n#define RAPIDJSON_DEFAULT_STACK_ALLOCATOR ::RAPIDJSON_NAMESPACE::CrtAllocator\n#endif\n\n/*! \\def RAPIDJSON_VALUE_DEFAULT_OBJECT_CAPACITY\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief User defined kDefaultObjectCapacity value.\n\n    User can define this as any natural number.\n*/\n#ifndef RAPIDJSON_VALUE_DEFAULT_OBJECT_CAPACITY\n// number of objects that rapidjson::Value allocates memory for by default\n#define RAPIDJSON_VALUE_DEFAULT_OBJECT_CAPACITY 16\n#endif\n\n/*! 
\\def RAPIDJSON_VALUE_DEFAULT_ARRAY_CAPACITY\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief User defined kDefaultArrayCapacity value.\n\n    User can define this as any natural number.\n*/\n#ifndef RAPIDJSON_VALUE_DEFAULT_ARRAY_CAPACITY\n// number of array elements that rapidjson::Value allocates memory for by default\n#define RAPIDJSON_VALUE_DEFAULT_ARRAY_CAPACITY 16\n#endif\n\n//! Name-value pair in a JSON object value.\n/*!\n    This class was internal to GenericValue. It used to be an inner struct.\n    But a compiler (IBM XL C/C++ for AIX) was reported to have a problem with that, so it was moved to a namespace-scope struct.\n    https://code.google.com/p/rapidjson/issues/detail?id=64\n*/\ntemplate <typename Encoding, typename Allocator> \nclass GenericMember {\npublic:\n    GenericValue<Encoding, Allocator> name;     //!< name of member (must be a string)\n    GenericValue<Encoding, Allocator> value;    //!< value of member.\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    //! Move constructor in C++11\n    GenericMember(GenericMember&& rhs) RAPIDJSON_NOEXCEPT\n        : name(std::move(rhs.name)),\n          value(std::move(rhs.value))\n    {\n    }\n\n    //! Move assignment in C++11\n    GenericMember& operator=(GenericMember&& rhs) RAPIDJSON_NOEXCEPT {\n        return *this = static_cast<GenericMember&>(rhs);\n    }\n#endif\n\n    //! Assignment with move semantics.\n    /*! \\param rhs Source of the assignment. Its name and value will become a null value after assignment.\n    */\n    GenericMember& operator=(GenericMember& rhs) RAPIDJSON_NOEXCEPT {\n        if (RAPIDJSON_LIKELY(this != &rhs)) {\n            name = rhs.name;\n            value = rhs.value;\n        }\n        return *this;\n    }\n\n    // swap() for std::sort() and other potential use in STL.\n    friend inline void swap(GenericMember& a, GenericMember& b) RAPIDJSON_NOEXCEPT {\n        a.name.Swap(b.name);\n        a.value.Swap(b.value);\n    }\n\nprivate:\n    //! 
Copy constructor is not permitted.\n    GenericMember(const GenericMember& rhs);\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// GenericMemberIterator\n\n#ifndef RAPIDJSON_NOMEMBERITERATORCLASS\n\n//! (Constant) member iterator for a JSON object value\n/*!\n    \\tparam Const Is this a constant iterator?\n    \\tparam Encoding    Encoding of the value. (Even non-string values need to have the same encoding in a document)\n    \\tparam Allocator   Allocator type for allocating memory of object, array and string.\n\n    This class implements a Random Access Iterator for GenericMember elements\n    of a GenericValue, see ISO/IEC 14882:2003(E) C++ standard, 24.1 [lib.iterator.requirements].\n\n    \\note This iterator implementation is mainly intended to avoid implicit\n        conversions from iterator values to \\c NULL,\n        e.g. from GenericValue::FindMember.\n\n    \\note Define \\c RAPIDJSON_NOMEMBERITERATORCLASS to fall back to a\n        pointer-based implementation, if your platform doesn't provide\n        the C++ <iterator> header.\n\n    \\see GenericMember, GenericValue::MemberIterator, GenericValue::ConstMemberIterator\n */\ntemplate <bool Const, typename Encoding, typename Allocator>\nclass GenericMemberIterator {\n\n    friend class GenericValue<Encoding,Allocator>;\n    template <bool, typename, typename> friend class GenericMemberIterator;\n\n    typedef GenericMember<Encoding,Allocator> PlainType;\n    typedef typename internal::MaybeAddConst<Const,PlainType>::Type ValueType;\n\npublic:\n    //! Iterator type itself\n    typedef GenericMemberIterator Iterator;\n    //! Constant iterator type\n    typedef GenericMemberIterator<true,Encoding,Allocator>  ConstIterator;\n    //! 
Non-constant iterator type\n    typedef GenericMemberIterator<false,Encoding,Allocator> NonConstIterator;\n\n    /** \\name std::iterator_traits support */\n    //@{\n    typedef ValueType      value_type;\n    typedef ValueType *    pointer;\n    typedef ValueType &    reference;\n    typedef std::ptrdiff_t difference_type;\n    typedef std::random_access_iterator_tag iterator_category;\n    //@}\n\n    //! Pointer to (const) GenericMember\n    typedef pointer         Pointer;\n    //! Reference to (const) GenericMember\n    typedef reference       Reference;\n    //! Signed integer type (e.g. \\c ptrdiff_t)\n    typedef difference_type DifferenceType;\n\n    //! Default constructor (singular value)\n    /*! Creates an iterator pointing to no element.\n        \\note All operations, except for comparisons, are undefined on such values.\n     */\n    GenericMemberIterator() : ptr_() {}\n\n    //! Iterator conversions to more const\n    /*!\n        \\param it (Non-const) iterator to copy from\n\n        Allows the creation of an iterator from another GenericMemberIterator\n        that is \"less const\".  Especially, creating a non-constant iterator\n        from a constant iterator are disabled:\n        \\li const -> non-const (not ok)\n        \\li const -> const (ok)\n        \\li non-const -> const (ok)\n        \\li non-const -> non-const (ok)\n\n        \\note If the \\c Const template parameter is already \\c false, this\n            constructor effectively defines a regular copy-constructor.\n            Otherwise, the copy constructor is implicitly defined.\n    */\n    GenericMemberIterator(const NonConstIterator & it) : ptr_(it.ptr_) {}\n    Iterator& operator=(const NonConstIterator & it) { ptr_ = it.ptr_; return *this; }\n\n    //! 
@name stepping\n    //@{\n    Iterator& operator++(){ ++ptr_; return *this; }\n    Iterator& operator--(){ --ptr_; return *this; }\n    Iterator  operator++(int){ Iterator old(*this); ++ptr_; return old; }\n    Iterator  operator--(int){ Iterator old(*this); --ptr_; return old; }\n    //@}\n\n    //! @name increment/decrement\n    //@{\n    Iterator operator+(DifferenceType n) const { return Iterator(ptr_+n); }\n    Iterator operator-(DifferenceType n) const { return Iterator(ptr_-n); }\n\n    Iterator& operator+=(DifferenceType n) { ptr_+=n; return *this; }\n    Iterator& operator-=(DifferenceType n) { ptr_-=n; return *this; }\n    //@}\n\n    //! @name relations\n    //@{\n    template <bool Const_> bool operator==(const GenericMemberIterator<Const_, Encoding, Allocator>& that) const { return ptr_ == that.ptr_; }\n    template <bool Const_> bool operator!=(const GenericMemberIterator<Const_, Encoding, Allocator>& that) const { return ptr_ != that.ptr_; }\n    template <bool Const_> bool operator<=(const GenericMemberIterator<Const_, Encoding, Allocator>& that) const { return ptr_ <= that.ptr_; }\n    template <bool Const_> bool operator>=(const GenericMemberIterator<Const_, Encoding, Allocator>& that) const { return ptr_ >= that.ptr_; }\n    template <bool Const_> bool operator< (const GenericMemberIterator<Const_, Encoding, Allocator>& that) const { return ptr_ < that.ptr_; }\n    template <bool Const_> bool operator> (const GenericMemberIterator<Const_, Encoding, Allocator>& that) const { return ptr_ > that.ptr_; }\n\n#ifdef __cpp_lib_three_way_comparison\n    template <bool Const_> std::strong_ordering operator<=>(const GenericMemberIterator<Const_, Encoding, Allocator>& that) const { return ptr_ <=> that.ptr_; }\n#endif\n    //@}\n\n    //! @name dereference\n    //@{\n    Reference operator*() const { return *ptr_; }\n    Pointer   operator->() const { return ptr_; }\n    Reference operator[](DifferenceType n) const { return ptr_[n]; }\n    //@}\n\n    //! 
Distance\n    DifferenceType operator-(ConstIterator that) const { return ptr_-that.ptr_; }\n\nprivate:\n    //! Internal constructor from plain pointer\n    explicit GenericMemberIterator(Pointer p) : ptr_(p) {}\n\n    Pointer ptr_; //!< raw pointer\n};\n\n#else // RAPIDJSON_NOMEMBERITERATORCLASS\n\n// class-based member iterator implementation disabled, use plain pointers\n\ntemplate <bool Const, typename Encoding, typename Allocator>\nclass GenericMemberIterator;\n\n//! non-const GenericMemberIterator\ntemplate <typename Encoding, typename Allocator>\nclass GenericMemberIterator<false,Encoding,Allocator> {\npublic:\n    //! use plain pointer as iterator type\n    typedef GenericMember<Encoding,Allocator>* Iterator;\n};\n//! const GenericMemberIterator\ntemplate <typename Encoding, typename Allocator>\nclass GenericMemberIterator<true,Encoding,Allocator> {\npublic:\n    //! use plain const pointer as iterator type\n    typedef const GenericMember<Encoding,Allocator>* Iterator;\n};\n\n#endif // RAPIDJSON_NOMEMBERITERATORCLASS\n\n///////////////////////////////////////////////////////////////////////////////\n// GenericStringRef\n\n//! Reference to a constant string (not taking a copy)\n/*!\n    \\tparam CharType character type of the string\n\n    This helper class is used to automatically infer constant string\n    references for string literals, especially from \\c const \\b (!)\n    character arrays.\n\n    The main use is for creating JSON string values without copying the\n    source string via an \\ref Allocator.  
This requires that the referenced\n    string pointers have a sufficient lifetime, which exceeds the lifetime\n    of the associated GenericValue.\n\n    \\b Example\n    \\code\n    Value v(\"foo\");   // ok, no need to copy & calculate length\n    const char foo[] = \"foo\";\n    v.SetString(foo); // ok\n\n    const char* bar = foo;\n    // Value x(bar); // not ok, can't rely on bar's lifetime\n    Value x(StringRef(bar)); // lifetime explicitly guaranteed by user\n    Value y(StringRef(bar, 3));  // ok, explicitly pass length\n    \\endcode\n\n    \\see StringRef, GenericValue::SetString\n*/\ntemplate<typename CharType>\nstruct GenericStringRef {\n    typedef CharType Ch; //!< character type of the string\n\n    //! Create string reference from \\c const character array\n#ifndef __clang__ // -Wdocumentation\n    /*!\n        This constructor implicitly creates a constant string reference from\n        a \\c const character array.  It has better performance than\n        \\ref StringRef(const CharType*) by inferring the string \\ref length\n        from the array length, and also supports strings containing null\n        characters.\n\n        \\tparam N length of the string, automatically inferred\n\n        \\param str Constant character array, lifetime assumed to be longer\n            than the use of the string in e.g. a GenericValue\n\n        \\post \\ref s == str\n\n        \\note Constant complexity.\n        \\note There is a hidden, private overload to disallow references to\n            non-const character arrays to be created via this constructor.\n            By this, e.g. function-scope arrays used to be filled via\n            \\c snprintf are excluded from consideration.\n            In such cases, the referenced string should be \\b copied to the\n            GenericValue instead.\n     */\n#endif\n    template<SizeType N>\n    GenericStringRef(const CharType (&str)[N]) RAPIDJSON_NOEXCEPT\n        : s(str), length(N-1) {}\n\n    //! 
Explicitly create string reference from \\c const character pointer\n#ifndef __clang__ // -Wdocumentation\n    /*!\n        This constructor can be used to \\b explicitly  create a reference to\n        a constant string pointer.\n\n        \\see StringRef(const CharType*)\n\n        \\param str Constant character pointer, lifetime assumed to be longer\n            than the use of the string in e.g. a GenericValue\n\n        \\post \\ref s == str\n\n        \\note There is a hidden, private overload to disallow references to\n            non-const character arrays to be created via this constructor.\n            By this, e.g. function-scope arrays used to be filled via\n            \\c snprintf are excluded from consideration.\n            In such cases, the referenced string should be \\b copied to the\n            GenericValue instead.\n     */\n#endif\n    explicit GenericStringRef(const CharType* str)\n        : s(str), length(NotNullStrLen(str)) {}\n\n    //! Create constant string reference from pointer and length\n#ifndef __clang__ // -Wdocumentation\n    /*! \\param str constant string, lifetime assumed to be longer than the use of the string in e.g. a GenericValue\n        \\param len length of the string, excluding the trailing NULL terminator\n\n        \\post \\ref s == str && \\ref length == len\n        \\note Constant complexity.\n     */\n#endif\n    GenericStringRef(const CharType* str, SizeType len)\n        : s(RAPIDJSON_LIKELY(str) ? str : emptyString), length(len) { RAPIDJSON_ASSERT(str != 0 || len == 0u); }\n\n    GenericStringRef(const GenericStringRef& rhs) : s(rhs.s), length(rhs.length) {}\n\n    //! 
implicit conversion to plain CharType pointer\n    operator const Ch *() const { return s; }\n\n    const Ch* const s; //!< plain CharType pointer\n    const SizeType length; //!< length of the string (excluding the trailing NULL terminator)\n\nprivate:\n    SizeType NotNullStrLen(const CharType* str) {\n        RAPIDJSON_ASSERT(str != 0);\n        return internal::StrLen(str);\n    }\n\n    /// Empty string - used when passing in a NULL pointer\n    static const Ch emptyString[];\n\n    //! Disallow construction from non-const array\n    template<SizeType N>\n    GenericStringRef(CharType (&str)[N]) /* = delete */;\n    //! Copy assignment operator not permitted - immutable type\n    GenericStringRef& operator=(const GenericStringRef& rhs) /* = delete */;\n};\n\ntemplate<typename CharType>\nconst CharType GenericStringRef<CharType>::emptyString[] = { CharType() };\n\n//! Mark a character pointer as constant string\n/*! Mark a plain character pointer as a \"string literal\".  This function\n    can be used to avoid copying a character string to be referenced as a\n    value in a JSON GenericValue object, if the string's lifetime is known\n    to be valid long enough.\n    \\tparam CharType Character type of the string\n    \\param str Constant string, lifetime assumed to be longer than the use of the string in e.g. a GenericValue\n    \\return GenericStringRef string reference object\n    \\relatesalso GenericStringRef\n\n    \\see GenericValue::GenericValue(StringRefType), GenericValue::operator=(StringRefType), GenericValue::SetString(StringRefType), GenericValue::PushBack(StringRefType, Allocator&), GenericValue::AddMember\n*/\ntemplate<typename CharType>\ninline GenericStringRef<CharType> StringRef(const CharType* str) {\n    return GenericStringRef<CharType>(str);\n}\n\n//! Mark a character pointer as constant string\n/*! Mark a plain character pointer as a \"string literal\".  
This function\n    can be used to avoid copying a character string to be referenced as a\n    value in a JSON GenericValue object, if the string's lifetime is known\n    to be valid long enough.\n\n    This version has better performance with supplied length, and also\n    supports string containing null characters.\n\n    \\tparam CharType character type of the string\n    \\param str Constant string, lifetime assumed to be longer than the use of the string in e.g. a GenericValue\n    \\param length The length of source string.\n    \\return GenericStringRef string reference object\n    \\relatesalso GenericStringRef\n*/\ntemplate<typename CharType>\ninline GenericStringRef<CharType> StringRef(const CharType* str, size_t length) {\n    return GenericStringRef<CharType>(str, SizeType(length));\n}\n\n#if RAPIDJSON_HAS_STDSTRING\n//! Mark a string object as constant string\n/*! Mark a string object (e.g. \\c std::string) as a \"string literal\".\n    This function can be used to avoid copying a string to be referenced as a\n    value in a JSON GenericValue object, if the string's lifetime is known\n    to be valid long enough.\n\n    \\tparam CharType character type of the string\n    \\param str Constant string, lifetime assumed to be longer than the use of the string in e.g. 
a GenericValue\n    \\return GenericStringRef string reference object\n    \\relatesalso GenericStringRef\n    \\note Requires the definition of the preprocessor symbol \\ref RAPIDJSON_HAS_STDSTRING.\n*/\ntemplate<typename CharType>\ninline GenericStringRef<CharType> StringRef(const std::basic_string<CharType>& str) {\n    return GenericStringRef<CharType>(str.data(), SizeType(str.size()));\n}\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// GenericValue type traits\nnamespace internal {\n\ntemplate <typename T, typename Encoding = void, typename Allocator = void>\nstruct IsGenericValueImpl : FalseType {};\n\n// select candidates according to nested encoding and allocator types\ntemplate <typename T> struct IsGenericValueImpl<T, typename Void<typename T::EncodingType>::Type, typename Void<typename T::AllocatorType>::Type>\n    : IsBaseOf<GenericValue<typename T::EncodingType, typename T::AllocatorType>, T>::Type {};\n\n// helper to match arbitrary GenericValue instantiations, including derived classes\ntemplate <typename T> struct IsGenericValue : IsGenericValueImpl<T>::Type {};\n\n} // namespace internal\n\n///////////////////////////////////////////////////////////////////////////////\n// TypeHelper\n\nnamespace internal {\n\ntemplate <typename ValueType, typename T>\nstruct TypeHelper {};\n\ntemplate<typename ValueType> \nstruct TypeHelper<ValueType, bool> {\n    static bool Is(const ValueType& v) { return v.IsBool(); }\n    static bool Get(const ValueType& v) { return v.GetBool(); }\n    static ValueType& Set(ValueType& v, bool data) { return v.SetBool(data); }\n    static ValueType& Set(ValueType& v, bool data, typename ValueType::AllocatorType&) { return v.SetBool(data); }\n};\n\ntemplate<typename ValueType> \nstruct TypeHelper<ValueType, int> {\n    static bool Is(const ValueType& v) { return v.IsInt(); }\n    static int Get(const ValueType& v) { return v.GetInt(); }\n    static ValueType& Set(ValueType& v, int 
data) { return v.SetInt(data); }\n    static ValueType& Set(ValueType& v, int data, typename ValueType::AllocatorType&) { return v.SetInt(data); }\n};\n\ntemplate<typename ValueType> \nstruct TypeHelper<ValueType, unsigned> {\n    static bool Is(const ValueType& v) { return v.IsUint(); }\n    static unsigned Get(const ValueType& v) { return v.GetUint(); }\n    static ValueType& Set(ValueType& v, unsigned data) { return v.SetUint(data); }\n    static ValueType& Set(ValueType& v, unsigned data, typename ValueType::AllocatorType&) { return v.SetUint(data); }\n};\n\n#ifdef _MSC_VER\nRAPIDJSON_STATIC_ASSERT(sizeof(long) == sizeof(int));\ntemplate<typename ValueType>\nstruct TypeHelper<ValueType, long> {\n    static bool Is(const ValueType& v) { return v.IsInt(); }\n    static long Get(const ValueType& v) { return v.GetInt(); }\n    static ValueType& Set(ValueType& v, long data) { return v.SetInt(data); }\n    static ValueType& Set(ValueType& v, long data, typename ValueType::AllocatorType&) { return v.SetInt(data); }\n};\n\nRAPIDJSON_STATIC_ASSERT(sizeof(unsigned long) == sizeof(unsigned));\ntemplate<typename ValueType>\nstruct TypeHelper<ValueType, unsigned long> {\n    static bool Is(const ValueType& v) { return v.IsUint(); }\n    static unsigned long Get(const ValueType& v) { return v.GetUint(); }\n    static ValueType& Set(ValueType& v, unsigned long data) { return v.SetUint(data); }\n    static ValueType& Set(ValueType& v, unsigned long data, typename ValueType::AllocatorType&) { return v.SetUint(data); }\n};\n#endif\n\ntemplate<typename ValueType> \nstruct TypeHelper<ValueType, int64_t> {\n    static bool Is(const ValueType& v) { return v.IsInt64(); }\n    static int64_t Get(const ValueType& v) { return v.GetInt64(); }\n    static ValueType& Set(ValueType& v, int64_t data) { return v.SetInt64(data); }\n    static ValueType& Set(ValueType& v, int64_t data, typename ValueType::AllocatorType&) { return v.SetInt64(data); }\n};\n\ntemplate<typename ValueType> \nstruct 
TypeHelper<ValueType, uint64_t> {\n    static bool Is(const ValueType& v) { return v.IsUint64(); }\n    static uint64_t Get(const ValueType& v) { return v.GetUint64(); }\n    static ValueType& Set(ValueType& v, uint64_t data) { return v.SetUint64(data); }\n    static ValueType& Set(ValueType& v, uint64_t data, typename ValueType::AllocatorType&) { return v.SetUint64(data); }\n};\n\ntemplate<typename ValueType> \nstruct TypeHelper<ValueType, double> {\n    static bool Is(const ValueType& v) { return v.IsDouble(); }\n    static double Get(const ValueType& v) { return v.GetDouble(); }\n    static ValueType& Set(ValueType& v, double data) { return v.SetDouble(data); }\n    static ValueType& Set(ValueType& v, double data, typename ValueType::AllocatorType&) { return v.SetDouble(data); }\n};\n\ntemplate<typename ValueType> \nstruct TypeHelper<ValueType, float> {\n    static bool Is(const ValueType& v) { return v.IsFloat(); }\n    static float Get(const ValueType& v) { return v.GetFloat(); }\n    static ValueType& Set(ValueType& v, float data) { return v.SetFloat(data); }\n    static ValueType& Set(ValueType& v, float data, typename ValueType::AllocatorType&) { return v.SetFloat(data); }\n};\n\ntemplate<typename ValueType> \nstruct TypeHelper<ValueType, const typename ValueType::Ch*> {\n    typedef const typename ValueType::Ch* StringType;\n    static bool Is(const ValueType& v) { return v.IsString(); }\n    static StringType Get(const ValueType& v) { return v.GetString(); }\n    static ValueType& Set(ValueType& v, const StringType data) { return v.SetString(typename ValueType::StringRefType(data)); }\n    static ValueType& Set(ValueType& v, const StringType data, typename ValueType::AllocatorType& a) { return v.SetString(data, a); }\n};\n\n#if RAPIDJSON_HAS_STDSTRING\ntemplate<typename ValueType> \nstruct TypeHelper<ValueType, std::basic_string<typename ValueType::Ch> > {\n    typedef std::basic_string<typename ValueType::Ch> StringType;\n    static bool Is(const 
ValueType& v) { return v.IsString(); }\n    static StringType Get(const ValueType& v) { return StringType(v.GetString(), v.GetStringLength()); }\n    static ValueType& Set(ValueType& v, const StringType& data, typename ValueType::AllocatorType& a) { return v.SetString(data, a); }\n};\n#endif\n\ntemplate<typename ValueType> \nstruct TypeHelper<ValueType, typename ValueType::Array> {\n    typedef typename ValueType::Array ArrayType;\n    static bool Is(const ValueType& v) { return v.IsArray(); }\n    static ArrayType Get(ValueType& v) { return v.GetArray(); }\n    static ValueType& Set(ValueType& v, ArrayType data) { return v = data; }\n    static ValueType& Set(ValueType& v, ArrayType data, typename ValueType::AllocatorType&) { return v = data; }\n};\n\ntemplate<typename ValueType> \nstruct TypeHelper<ValueType, typename ValueType::ConstArray> {\n    typedef typename ValueType::ConstArray ArrayType;\n    static bool Is(const ValueType& v) { return v.IsArray(); }\n    static ArrayType Get(const ValueType& v) { return v.GetArray(); }\n};\n\ntemplate<typename ValueType> \nstruct TypeHelper<ValueType, typename ValueType::Object> {\n    typedef typename ValueType::Object ObjectType;\n    static bool Is(const ValueType& v) { return v.IsObject(); }\n    static ObjectType Get(ValueType& v) { return v.GetObject(); }\n    static ValueType& Set(ValueType& v, ObjectType data) { return v = data; }\n    static ValueType& Set(ValueType& v, ObjectType data, typename ValueType::AllocatorType&) { return v = data; }\n};\n\ntemplate<typename ValueType> \nstruct TypeHelper<ValueType, typename ValueType::ConstObject> {\n    typedef typename ValueType::ConstObject ObjectType;\n    static bool Is(const ValueType& v) { return v.IsObject(); }\n    static ObjectType Get(const ValueType& v) { return v.GetObject(); }\n};\n\n} // namespace internal\n\n// Forward declarations\ntemplate <bool, typename> class GenericArray;\ntemplate <bool, typename> class 
GenericObject;\n\n///////////////////////////////////////////////////////////////////////////////\n// GenericValue\n\n//! Represents a JSON value. Use Value for UTF8 encoding and default allocator.\n/*!\n    A JSON value can be one of 7 types. This class is a variant type supporting\n    these types.\n\n    Use the \\ref Value typedef if UTF8 encoding and the default allocator suffice.\n\n    \\tparam Encoding    Encoding of the value. (Even non-string values need to have the same encoding in a document)\n    \\tparam Allocator   Allocator type for allocating memory of object, array and string.\n*/\ntemplate <typename Encoding, typename Allocator = RAPIDJSON_DEFAULT_ALLOCATOR >\nclass GenericValue {\npublic:\n    //! Name-value pair in an object.\n    typedef GenericMember<Encoding, Allocator> Member;\n    typedef Encoding EncodingType;                  //!< Encoding type from template parameter.\n    typedef Allocator AllocatorType;                //!< Allocator type from template parameter.\n    typedef typename Encoding::Ch Ch;               //!< Character type derived from Encoding.\n    typedef GenericStringRef<Ch> StringRefType;     //!< Reference to a constant string\n    typedef typename GenericMemberIterator<false,Encoding,Allocator>::Iterator MemberIterator;  //!< Member iterator for iterating in object.\n    typedef typename GenericMemberIterator<true,Encoding,Allocator>::Iterator ConstMemberIterator;  //!< Constant member iterator for iterating in object.\n    typedef GenericValue* ValueIterator;            //!< Value iterator for iterating in array.\n    typedef const GenericValue* ConstValueIterator; //!< Constant value iterator for iterating in array.\n    typedef GenericValue<Encoding, Allocator> ValueType;    //!< Value type of itself.\n    typedef GenericArray<false, ValueType> Array;\n    typedef GenericArray<true, ValueType> ConstArray;\n    typedef GenericObject<false, ValueType> Object;\n    typedef GenericObject<true, ValueType> ConstObject;\n\n    //!@name Constructors and 
destructor.\n    //@{\n\n    //! Default constructor creates a null value.\n    GenericValue() RAPIDJSON_NOEXCEPT : data_() { data_.f.flags = kNullFlag; }\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    //! Move constructor in C++11\n    GenericValue(GenericValue&& rhs) RAPIDJSON_NOEXCEPT : data_(rhs.data_) {\n        rhs.data_.f.flags = kNullFlag; // give up contents\n    }\n#endif\n\nprivate:\n    //! Copy constructor is not permitted.\n    GenericValue(const GenericValue& rhs);\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    //! Moving from a GenericDocument is not permitted.\n    template <typename StackAllocator>\n    GenericValue(GenericDocument<Encoding,Allocator,StackAllocator>&& rhs);\n\n    //! Move assignment from a GenericDocument is not permitted.\n    template <typename StackAllocator>\n    GenericValue& operator=(GenericDocument<Encoding,Allocator,StackAllocator>&& rhs);\n#endif\n\npublic:\n\n    //! Constructor with JSON value type.\n    /*! This creates a Value of specified type with default content.\n        \\param type Type of the value.\n        \\note Default content for number is zero.\n    */\n    explicit GenericValue(Type type) RAPIDJSON_NOEXCEPT : data_() {\n        static const uint16_t defaultFlags[] = {\n            kNullFlag, kFalseFlag, kTrueFlag, kObjectFlag, kArrayFlag, kShortStringFlag,\n            kNumberAnyFlag\n        };\n        RAPIDJSON_NOEXCEPT_ASSERT(type >= kNullType && type <= kNumberType);\n        data_.f.flags = defaultFlags[type];\n\n        // Use ShortString to store empty string.\n        if (type == kStringType)\n            data_.ss.SetLength(0);\n    }\n\n    //! Explicit copy constructor (with allocator)\n    /*! Creates a copy of a Value by using the given Allocator\n        \\tparam SourceAllocator allocator of \\c rhs\n        \\param rhs Value to copy from (read-only)\n        \\param allocator Allocator for allocating copied elements and buffers. 
Commonly use GenericDocument::GetAllocator().\n        \\param copyConstStrings Force copying of constant strings (e.g. referencing an in-situ buffer)\n        \\see CopyFrom()\n    */\n    template <typename SourceAllocator>\n    GenericValue(const GenericValue<Encoding,SourceAllocator>& rhs, Allocator& allocator, bool copyConstStrings = false) {\n        switch (rhs.GetType()) {\n        case kObjectType:\n            DoCopyMembers(rhs, allocator, copyConstStrings);\n            break;\n        case kArrayType: {\n                SizeType count = rhs.data_.a.size;\n                GenericValue* le = reinterpret_cast<GenericValue*>(allocator.Malloc(count * sizeof(GenericValue)));\n                const GenericValue<Encoding,SourceAllocator>* re = rhs.GetElementsPointer();\n                for (SizeType i = 0; i < count; i++)\n                    new (&le[i]) GenericValue(re[i], allocator, copyConstStrings);\n                data_.f.flags = kArrayFlag;\n                data_.a.size = data_.a.capacity = count;\n                SetElementsPointer(le);\n            }\n            break;\n        case kStringType:\n            if (rhs.data_.f.flags == kConstStringFlag && !copyConstStrings) {\n                data_.f.flags = rhs.data_.f.flags;\n                data_  = *reinterpret_cast<const Data*>(&rhs.data_);\n            }\n            else\n                SetStringRaw(StringRef(rhs.GetString(), rhs.GetStringLength()), allocator);\n            break;\n        default:\n            data_.f.flags = rhs.data_.f.flags;\n            data_  = *reinterpret_cast<const Data*>(&rhs.data_);\n            break;\n        }\n    }\n\n    //! Constructor for boolean value.\n    /*! \\param b Boolean value\n        \\note This constructor is limited to \\em real boolean values and rejects\n            implicitly converted types like arbitrary pointers.  
Use an explicit cast\n            to \\c bool, if you want to construct a boolean JSON value in such cases.\n     */\n#ifndef RAPIDJSON_DOXYGEN_RUNNING // hide SFINAE from Doxygen\n    template <typename T>\n    explicit GenericValue(T b, RAPIDJSON_ENABLEIF((internal::IsSame<bool, T>))) RAPIDJSON_NOEXCEPT  // See #472\n#else\n    explicit GenericValue(bool b) RAPIDJSON_NOEXCEPT\n#endif\n        : data_() {\n            // safe-guard against failing SFINAE\n            RAPIDJSON_STATIC_ASSERT((internal::IsSame<bool,T>::Value));\n            data_.f.flags = b ? kTrueFlag : kFalseFlag;\n    }\n\n    //! Constructor for int value.\n    explicit GenericValue(int i) RAPIDJSON_NOEXCEPT : data_() {\n        data_.n.i64 = i;\n        data_.f.flags = (i >= 0) ? (kNumberIntFlag | kUintFlag | kUint64Flag) : kNumberIntFlag;\n    }\n\n    //! Constructor for unsigned value.\n    explicit GenericValue(unsigned u) RAPIDJSON_NOEXCEPT : data_() {\n        data_.n.u64 = u; \n        data_.f.flags = (u & 0x80000000) ? kNumberUintFlag : (kNumberUintFlag | kIntFlag | kInt64Flag);\n    }\n\n    //! Constructor for int64_t value.\n    explicit GenericValue(int64_t i64) RAPIDJSON_NOEXCEPT : data_() {\n        data_.n.i64 = i64;\n        data_.f.flags = kNumberInt64Flag;\n        if (i64 >= 0) {\n            data_.f.flags |= kNumberUint64Flag;\n            if (!(static_cast<uint64_t>(i64) & RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0x00000000)))\n                data_.f.flags |= kUintFlag;\n            if (!(static_cast<uint64_t>(i64) & RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0x80000000)))\n                data_.f.flags |= kIntFlag;\n        }\n        else if (i64 >= static_cast<int64_t>(RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0x80000000)))\n            data_.f.flags |= kIntFlag;\n    }\n\n    //! 
Constructor for uint64_t value.\n    explicit GenericValue(uint64_t u64) RAPIDJSON_NOEXCEPT : data_() {\n        data_.n.u64 = u64;\n        data_.f.flags = kNumberUint64Flag;\n        if (!(u64 & RAPIDJSON_UINT64_C2(0x80000000, 0x00000000)))\n            data_.f.flags |= kInt64Flag;\n        if (!(u64 & RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0x00000000)))\n            data_.f.flags |= kUintFlag;\n        if (!(u64 & RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0x80000000)))\n            data_.f.flags |= kIntFlag;\n    }\n\n    //! Constructor for double value.\n    explicit GenericValue(double d) RAPIDJSON_NOEXCEPT : data_() { data_.n.d = d; data_.f.flags = kNumberDoubleFlag; }\n\n    //! Constructor for float value.\n    explicit GenericValue(float f) RAPIDJSON_NOEXCEPT : data_() { data_.n.d = static_cast<double>(f); data_.f.flags = kNumberDoubleFlag; }\n\n    //! Constructor for constant string (i.e. do not make a copy of string)\n    GenericValue(const Ch* s, SizeType length) RAPIDJSON_NOEXCEPT : data_() { SetStringRaw(StringRef(s, length)); }\n\n    //! Constructor for constant string (i.e. do not make a copy of string)\n    explicit GenericValue(StringRefType s) RAPIDJSON_NOEXCEPT : data_() { SetStringRaw(s); }\n\n    //! Constructor for copy-string (i.e. do make a copy of string)\n    GenericValue(const Ch* s, SizeType length, Allocator& allocator) : data_() { SetStringRaw(StringRef(s, length), allocator); }\n\n    //! Constructor for copy-string (i.e. do make a copy of string)\n    GenericValue(const Ch*s, Allocator& allocator) : data_() { SetStringRaw(StringRef(s), allocator); }\n\n#if RAPIDJSON_HAS_STDSTRING\n    //! Constructor for copy-string from a string object (i.e. do make a copy of string)\n    /*! \\note Requires the definition of the preprocessor symbol \\ref RAPIDJSON_HAS_STDSTRING.\n     */\n    GenericValue(const std::basic_string<Ch>& s, Allocator& allocator) : data_() { SetStringRaw(StringRef(s), allocator); }\n#endif\n\n    //! 
Constructor for Array.\n    /*!\n        \\param a An array obtained by \\c GetArray().\n        \\note \\c Array is always pass-by-value.\n        \\note The source array is moved into this value and becomes empty.\n    */\n    GenericValue(Array a) RAPIDJSON_NOEXCEPT : data_(a.value_.data_) {\n        a.value_.data_ = Data();\n        a.value_.data_.f.flags = kArrayFlag;\n    }\n\n    //! Constructor for Object.\n    /*!\n        \\param o An object obtained by \\c GetObject().\n        \\note \\c Object is always pass-by-value.\n        \\note The source object is moved into this value and becomes empty.\n    */\n    GenericValue(Object o) RAPIDJSON_NOEXCEPT : data_(o.value_.data_) {\n        o.value_.data_ = Data();\n        o.value_.data_.f.flags = kObjectFlag;\n    }\n\n    //! Destructor.\n    /*! Need to destruct elements of array, members of object, or copy-string.\n    */\n    ~GenericValue() {\n        // With RAPIDJSON_USE_MEMBERSMAP, the maps need to be destroyed to release\n        // their Allocator if it's refcounted (e.g. 
MemoryPoolAllocator).\n        if (Allocator::kNeedFree || (RAPIDJSON_USE_MEMBERSMAP+0 &&\n                                     internal::IsRefCounted<Allocator>::Value)) {\n            switch(data_.f.flags) {\n            case kArrayFlag:\n                {\n                    GenericValue* e = GetElementsPointer();\n                    for (GenericValue* v = e; v != e + data_.a.size; ++v)\n                        v->~GenericValue();\n                    if (Allocator::kNeedFree) { // Shortcut by Allocator's trait\n                        Allocator::Free(e);\n                    }\n                }\n                break;\n\n            case kObjectFlag:\n                DoFreeMembers();\n                break;\n\n            case kCopyStringFlag:\n                if (Allocator::kNeedFree) { // Shortcut by Allocator's trait\n                    Allocator::Free(const_cast<Ch*>(GetStringPointer()));\n                }\n                break;\n\n            default:\n                break;  // Do nothing for other types.\n            }\n        }\n    }\n\n    //@}\n\n    //!@name Assignment operators\n    //@{\n\n    //! Assignment with move semantics.\n    /*! \\param rhs Source of the assignment. It will become a null value after assignment.\n    */\n    GenericValue& operator=(GenericValue& rhs) RAPIDJSON_NOEXCEPT {\n        if (RAPIDJSON_LIKELY(this != &rhs)) {\n            // Can't destroy \"this\" before assigning \"rhs\", otherwise \"rhs\"\n            // could be used after free if it's a sub-Value of \"this\",\n            // hence the temporary dance.\n            GenericValue temp;\n            temp.RawAssign(rhs);\n            this->~GenericValue();\n            RawAssign(temp);\n        }\n        return *this;\n    }\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    //! Move assignment in C++11\n    GenericValue& operator=(GenericValue&& rhs) RAPIDJSON_NOEXCEPT {\n        return *this = rhs.Move();\n    }\n#endif\n\n    //! 
Assignment of constant string reference (no copy)\n    /*! \\param str Constant string reference to be assigned\n        \\note This overload is needed to avoid clashes with the generic primitive type assignment overload below.\n        \\see GenericStringRef, operator=(T)\n    */\n    GenericValue& operator=(StringRefType str) RAPIDJSON_NOEXCEPT {\n        GenericValue s(str);\n        return *this = s;\n    }\n\n    //! Assignment with primitive types.\n    /*! \\tparam T Either \\ref Type, \\c int, \\c unsigned, \\c int64_t, \\c uint64_t\n        \\param value The value to be assigned.\n\n        \\note The source type \\c T explicitly disallows all pointer types,\n            especially (\\c const) \\ref Ch*.  This helps avoiding implicitly\n            referencing character strings with insufficient lifetime, use\n            \\ref SetString(const Ch*, Allocator&) (for copying) or\n            \\ref StringRef() (to explicitly mark the pointer as constant) instead.\n            All other pointer types would implicitly convert to \\c bool,\n            use \\ref SetBool() instead.\n    */\n    template <typename T>\n    RAPIDJSON_DISABLEIF_RETURN((internal::IsPointer<T>), (GenericValue&))\n    operator=(T value) {\n        GenericValue v(value);\n        return *this = v;\n    }\n\n    //! Deep-copy assignment from Value\n    /*! Assigns a \\b copy of the Value to the current Value object\n        \\tparam SourceAllocator Allocator type of \\c rhs\n        \\param rhs Value to copy from (read-only)\n        \\param allocator Allocator to use for copying\n        \\param copyConstStrings Force copying of constant strings (e.g. 
referencing an in-situ buffer)\n     */\n    template <typename SourceAllocator>\n    GenericValue& CopyFrom(const GenericValue<Encoding, SourceAllocator>& rhs, Allocator& allocator, bool copyConstStrings = false) {\n        RAPIDJSON_ASSERT(static_cast<void*>(this) != static_cast<void const*>(&rhs));\n        this->~GenericValue();\n        new (this) GenericValue(rhs, allocator, copyConstStrings);\n        return *this;\n    }\n\n    //! Exchange the contents of this value with those of other.\n    /*!\n        \\param other Another value.\n        \\note Constant complexity.\n    */\n    GenericValue& Swap(GenericValue& other) RAPIDJSON_NOEXCEPT {\n        GenericValue temp;\n        temp.RawAssign(*this);\n        RawAssign(other);\n        other.RawAssign(temp);\n        return *this;\n    }\n\n    //! free-standing swap function helper\n    /*!\n        Helper function to enable support for common swap implementation pattern based on \\c std::swap:\n        \\code\n        void swap(MyClass& a, MyClass& b) {\n            using std::swap;\n            swap(a.value, b.value);\n            // ...\n        }\n        \\endcode\n        \\see Swap()\n     */\n    friend inline void swap(GenericValue& a, GenericValue& b) RAPIDJSON_NOEXCEPT { a.Swap(b); }\n\n    //! Prepare Value for move semantics\n    /*! \\return *this */\n    GenericValue& Move() RAPIDJSON_NOEXCEPT { return *this; }\n    //@}\n\n    //!@name Equal-to and not-equal-to operators\n    //@{\n    //! 
Equal-to operator\n    /*!\n        \\note If an object contains duplicated member names, comparing equality with any object is always \\c false.\n        \\note Complexity is quadratic in Object's member number and linear for the rest (number of all values in the subtree and total lengths of all strings).\n    */\n    template <typename SourceAllocator>\n    bool operator==(const GenericValue<Encoding, SourceAllocator>& rhs) const {\n        typedef GenericValue<Encoding, SourceAllocator> RhsType;\n        if (GetType() != rhs.GetType())\n            return false;\n\n        switch (GetType()) {\n        case kObjectType: // Warning: O(n^2) inner-loop\n            if (data_.o.size != rhs.data_.o.size)\n                return false;\n            for (ConstMemberIterator lhsMemberItr = MemberBegin(); lhsMemberItr != MemberEnd(); ++lhsMemberItr) {\n                typename RhsType::ConstMemberIterator rhsMemberItr = rhs.FindMember(lhsMemberItr->name);\n                if (rhsMemberItr == rhs.MemberEnd() || (!(lhsMemberItr->value == rhsMemberItr->value)))\n                    return false;\n            }\n            return true;\n\n        case kArrayType:\n            if (data_.a.size != rhs.data_.a.size)\n                return false;\n            for (SizeType i = 0; i < data_.a.size; i++)\n                if (!((*this)[i] == rhs[i]))\n                    return false;\n            return true;\n\n        case kStringType:\n            return StringEqual(rhs);\n\n        case kNumberType:\n            if (IsDouble() || rhs.IsDouble()) {\n                double a = GetDouble();     // May convert from integer to double.\n                double b = rhs.GetDouble(); // Ditto\n                return a >= b && a <= b;    // Prevent -Wfloat-equal\n            }\n            else\n                return data_.n.u64 == rhs.data_.n.u64;\n\n        default:\n            return true;\n        }\n    }\n\n    //! 
Equal-to operator with const C-string pointer\n    bool operator==(const Ch* rhs) const { return *this == GenericValue(StringRef(rhs)); }\n\n#if RAPIDJSON_HAS_STDSTRING\n    //! Equal-to operator with string object\n    /*! \\note Requires the definition of the preprocessor symbol \\ref RAPIDJSON_HAS_STDSTRING.\n     */\n    bool operator==(const std::basic_string<Ch>& rhs) const { return *this == GenericValue(StringRef(rhs)); }\n#endif\n\n    //! Equal-to operator with primitive types\n    /*! \\tparam T Either \\ref Type, \\c int, \\c unsigned, \\c int64_t, \\c uint64_t, \\c double, \\c true, \\c false\n    */\n    template <typename T> RAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T>,internal::IsGenericValue<T> >), (bool)) operator==(const T& rhs) const { return *this == GenericValue(rhs); }\n\n#ifndef __cpp_impl_three_way_comparison\n    //! Not-equal-to operator\n    /*! \\return !(*this == rhs)\n     */\n    template <typename SourceAllocator>\n    bool operator!=(const GenericValue<Encoding, SourceAllocator>& rhs) const { return !(*this == rhs); }\n\n    //! Not-equal-to operator with const C-string pointer\n    bool operator!=(const Ch* rhs) const { return !(*this == rhs); }\n\n    //! Not-equal-to operator with arbitrary types\n    /*! \\return !(*this == rhs)\n     */\n    template <typename T> RAPIDJSON_DISABLEIF_RETURN((internal::IsGenericValue<T>), (bool)) operator!=(const T& rhs) const { return !(*this == rhs); }\n\n    //! Equal-to operator with arbitrary types (symmetric version)\n    /*! \\return (rhs == lhs)\n     */\n    template <typename T> friend RAPIDJSON_DISABLEIF_RETURN((internal::IsGenericValue<T>), (bool)) operator==(const T& lhs, const GenericValue& rhs) { return rhs == lhs; }\n\n    //! Not-Equal-to operator with arbitrary types (symmetric version)\n    /*! 
\\return !(rhs == lhs)\n     */\n    template <typename T> friend RAPIDJSON_DISABLEIF_RETURN((internal::IsGenericValue<T>), (bool)) operator!=(const T& lhs, const GenericValue& rhs) { return !(rhs == lhs); }\n    //@}\n#endif\n\n    //!@name Type\n    //@{\n\n    Type GetType()  const { return static_cast<Type>(data_.f.flags & kTypeMask); }\n    bool IsNull()   const { return data_.f.flags == kNullFlag; }\n    bool IsFalse()  const { return data_.f.flags == kFalseFlag; }\n    bool IsTrue()   const { return data_.f.flags == kTrueFlag; }\n    bool IsBool()   const { return (data_.f.flags & kBoolFlag) != 0; }\n    bool IsObject() const { return data_.f.flags == kObjectFlag; }\n    bool IsArray()  const { return data_.f.flags == kArrayFlag; }\n    bool IsNumber() const { return (data_.f.flags & kNumberFlag) != 0; }\n    bool IsInt()    const { return (data_.f.flags & kIntFlag) != 0; }\n    bool IsUint()   const { return (data_.f.flags & kUintFlag) != 0; }\n    bool IsInt64()  const { return (data_.f.flags & kInt64Flag) != 0; }\n    bool IsUint64() const { return (data_.f.flags & kUint64Flag) != 0; }\n    bool IsDouble() const { return (data_.f.flags & kDoubleFlag) != 0; }\n    bool IsString() const { return (data_.f.flags & kStringFlag) != 0; }\n\n    // Checks whether a number can be losslessly converted to a double.\n    bool IsLosslessDouble() const {\n        if (!IsNumber()) return false;\n        if (IsUint64()) {\n            uint64_t u = GetUint64();\n            volatile double d = static_cast<double>(u);\n            return (d >= 0.0)\n                && (d < static_cast<double>((std::numeric_limits<uint64_t>::max)()))\n                && (u == static_cast<uint64_t>(d));\n        }\n        if (IsInt64()) {\n            int64_t i = GetInt64();\n            volatile double d = static_cast<double>(i);\n            return (d >= static_cast<double>((std::numeric_limits<int64_t>::min)()))\n                && (d < 
static_cast<double>((std::numeric_limits<int64_t>::max)()))\n                && (i == static_cast<int64_t>(d));\n        }\n        return true; // double, int, uint are always lossless\n    }\n\n    // Checks whether a number is a float (possibly lossy).\n    bool IsFloat() const  {\n        if ((data_.f.flags & kDoubleFlag) == 0)\n            return false;\n        double d = GetDouble();\n        return d >= -3.4028234e38 && d <= 3.4028234e38;\n    }\n    // Checks whether a number can be losslessly converted to a float.\n    bool IsLosslessFloat() const {\n        if (!IsNumber()) return false;\n        double a = GetDouble();\n        if (a < static_cast<double>(-(std::numeric_limits<float>::max)())\n                || a > static_cast<double>((std::numeric_limits<float>::max)()))\n            return false;\n        double b = static_cast<double>(static_cast<float>(a));\n        return a >= b && a <= b;    // Prevent -Wfloat-equal\n    }\n\n    //@}\n\n    //!@name Null\n    //@{\n\n    GenericValue& SetNull() { this->~GenericValue(); new (this) GenericValue(); return *this; }\n\n    //@}\n\n    //!@name Bool\n    //@{\n\n    bool GetBool() const { RAPIDJSON_ASSERT(IsBool()); return data_.f.flags == kTrueFlag; }\n    //! Set boolean value\n    /*! \\post IsBool() == true */\n    GenericValue& SetBool(bool b) { this->~GenericValue(); new (this) GenericValue(b); return *this; }\n\n    //@}\n\n    //!@name Object\n    //@{\n\n    //! Set this value as an empty object.\n    /*! \\post IsObject() == true */\n    GenericValue& SetObject() { this->~GenericValue(); new (this) GenericValue(kObjectType); return *this; }\n\n    //! Get the number of members in the object.\n    SizeType MemberCount() const { RAPIDJSON_ASSERT(IsObject()); return data_.o.size; }\n\n    //! Get the capacity of the object.\n    SizeType MemberCapacity() const { RAPIDJSON_ASSERT(IsObject()); return data_.o.capacity; }\n\n    //! 
Check whether the object is empty.\n    bool ObjectEmpty() const { RAPIDJSON_ASSERT(IsObject()); return data_.o.size == 0; }\n\n    //! Get a value from an object associated with the name.\n    /*! \\pre IsObject() == true\n        \\tparam T Either \\c Ch or \\c const \\c Ch (template used for disambiguation with \\ref operator[](SizeType))\n        \\note In version 0.1x, if the member was not found, this function returned a null value. This caused issue 7.\n        Since 0.2, it asserts if the member is not found.\n        If the user is unsure whether a member exists, HasMember() should be used first.\n        A better approach is to use FindMember().\n        \\note Linear time complexity.\n    */\n    template <typename T>\n    RAPIDJSON_DISABLEIF_RETURN((internal::NotExpr<internal::IsSame<typename internal::RemoveConst<T>::Type, Ch> >),(GenericValue&)) operator[](T* name) {\n        GenericValue n(StringRef(name));\n        return (*this)[n];\n    }\n    template <typename T>\n    RAPIDJSON_DISABLEIF_RETURN((internal::NotExpr<internal::IsSame<typename internal::RemoveConst<T>::Type, Ch> >),(const GenericValue&)) operator[](T* name) const { return const_cast<GenericValue&>(*this)[name]; }\n\n    //! Get a value from an object associated with the name.\n    /*! 
\\pre IsObject() == true\n        \\tparam SourceAllocator Allocator of the \\c name value\n\n        \\note Compared to \\ref operator[](T*), this version is faster because it does not need a StrLen().\n        And it can also handle strings with embedded null characters.\n\n        \\note Linear time complexity.\n    */\n    template <typename SourceAllocator>\n    GenericValue& operator[](const GenericValue<Encoding, SourceAllocator>& name) {\n        MemberIterator member = FindMember(name);\n        if (member != MemberEnd())\n            return member->value;\n        else {\n            RAPIDJSON_ASSERT(false);    // see above note\n\n#if RAPIDJSON_HAS_CXX11\n            // Use thread-local storage to prevent races between threads.\n            // Use static buffer and placement-new to prevent destruction, with\n            // alignas() to ensure proper alignment.\n            alignas(GenericValue) thread_local static char buffer[sizeof(GenericValue)];\n            return *new (buffer) GenericValue();\n#elif defined(_MSC_VER) && _MSC_VER < 1900\n            // There's no way to solve both thread locality and proper alignment\n            // simultaneously.\n            __declspec(thread) static char buffer[sizeof(GenericValue)];\n            return *new (buffer) GenericValue();\n#elif defined(__GNUC__) || defined(__clang__)\n            // This will generate -Wexit-time-destructors in clang, but that's\n            // better than having under-alignment.\n            __thread static GenericValue buffer;\n            return buffer;\n#else\n            // Don't know what compiler this is, so don't know how to ensure\n            // thread-locality.\n            static GenericValue buffer;\n            return buffer;\n#endif\n        }\n    }\n    template <typename SourceAllocator>\n    const GenericValue& operator[](const GenericValue<Encoding, SourceAllocator>& name) const { return const_cast<GenericValue&>(*this)[name]; }\n\n#if RAPIDJSON_HAS_STDSTRING\n    
//! Get a value from an object associated with name (string object).\n    GenericValue& operator[](const std::basic_string<Ch>& name) { return (*this)[GenericValue(StringRef(name))]; }\n    const GenericValue& operator[](const std::basic_string<Ch>& name) const { return (*this)[GenericValue(StringRef(name))]; }\n#endif\n\n    //! Const member iterator\n    /*! \\pre IsObject() == true */\n    ConstMemberIterator MemberBegin() const { RAPIDJSON_ASSERT(IsObject()); return ConstMemberIterator(GetMembersPointer()); }\n    //! Const \\em past-the-end member iterator\n    /*! \\pre IsObject() == true */\n    ConstMemberIterator MemberEnd() const   { RAPIDJSON_ASSERT(IsObject()); return ConstMemberIterator(GetMembersPointer() + data_.o.size); }\n    //! Member iterator\n    /*! \\pre IsObject() == true */\n    MemberIterator MemberBegin()            { RAPIDJSON_ASSERT(IsObject()); return MemberIterator(GetMembersPointer()); }\n    //! \\em Past-the-end member iterator\n    /*! \\pre IsObject() == true */\n    MemberIterator MemberEnd()              { RAPIDJSON_ASSERT(IsObject()); return MemberIterator(GetMembersPointer() + data_.o.size); }\n\n    //! Request the object to have enough capacity to store members.\n    /*! \\param newCapacity  The minimum capacity the object needs to have.\n        \\param allocator    Allocator for reallocating memory. It must be the same one as used before. Commonly use GenericDocument::GetAllocator().\n        \\return The value itself for fluent API.\n        \\note Linear time complexity.\n    */\n    GenericValue& MemberReserve(SizeType newCapacity, Allocator &allocator) {\n        RAPIDJSON_ASSERT(IsObject());\n        DoReserveMembers(newCapacity, allocator);\n        return *this;\n    }\n\n    //! 
Check whether a member exists in the object.\n    /*!\n        \\param name Member name to be searched.\n        \\pre IsObject() == true\n        \\return Whether a member with that name exists.\n        \\note It is better to use FindMember() directly if you need to obtain the value as well.\n        \\note Linear time complexity.\n    */\n    bool HasMember(const Ch* name) const { return FindMember(name) != MemberEnd(); }\n\n#if RAPIDJSON_HAS_STDSTRING\n    //! Check whether a member exists in the object with string object.\n    /*!\n        \\param name Member name to be searched.\n        \\pre IsObject() == true\n        \\return Whether a member with that name exists.\n        \\note It is better to use FindMember() directly if you need to obtain the value as well.\n        \\note Linear time complexity.\n    */\n    bool HasMember(const std::basic_string<Ch>& name) const { return FindMember(name) != MemberEnd(); }\n#endif\n\n    //! Check whether a member exists in the object with GenericValue name.\n    /*!\n        This version is faster because it does not need a StrLen(). It can also handle strings with null characters.\n        \\param name Member name to be searched.\n        \\pre IsObject() == true\n        \\return Whether a member with that name exists.\n        \\note It is better to use FindMember() directly if you need to obtain the value as well.\n        \\note Linear time complexity.\n    */\n    template <typename SourceAllocator>\n    bool HasMember(const GenericValue<Encoding, SourceAllocator>& name) const { return FindMember(name) != MemberEnd(); }\n\n    //! Find member by name.\n    /*!\n        \\param name Member name to be searched.\n        \\pre IsObject() == true\n        \\return Iterator to member, if it exists.\n            Otherwise returns \\ref MemberEnd().\n\n        \\note Earlier versions of RapidJSON returned a \\c NULL pointer, in case\n            the requested member doesn't exist. 
For consistency with e.g.\n            \\c std::map, this has been changed to MemberEnd() now.\n        \\note Linear time complexity.\n    */\n    MemberIterator FindMember(const Ch* name) {\n        GenericValue n(StringRef(name));\n        return FindMember(n);\n    }\n\n    ConstMemberIterator FindMember(const Ch* name) const { return const_cast<GenericValue&>(*this).FindMember(name); }\n\n    //! Find member by name.\n    /*!\n        This version is faster because it does not need a StrLen(). It can also handle strings containing null characters.\n        \\param name Member name to be searched.\n        \\pre IsObject() == true\n        \\return Iterator to member, if it exists.\n            Otherwise returns \\ref MemberEnd().\n\n        \\note Earlier versions of Rapidjson returned a \\c NULL pointer, in case\n            the requested member doesn't exist. For consistency with e.g.\n            \\c std::map, this has been changed to MemberEnd() now.\n        \\note Linear time complexity.\n    */\n    template <typename SourceAllocator>\n    MemberIterator FindMember(const GenericValue<Encoding, SourceAllocator>& name) {\n        RAPIDJSON_ASSERT(IsObject());\n        RAPIDJSON_ASSERT(name.IsString());\n        return DoFindMember(name);\n    }\n    template <typename SourceAllocator> ConstMemberIterator FindMember(const GenericValue<Encoding, SourceAllocator>& name) const { return const_cast<GenericValue&>(*this).FindMember(name); }\n\n#if RAPIDJSON_HAS_STDSTRING\n    //! 
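Example (illustrative sketch, not part of the API): looking up a member with\n    //! FindMember(), assuming an object-valued \\c rapidjson::Value \\c v; doSomethingWith() is a hypothetical helper.\n    /*! \\code\n        Value::ConstMemberIterator it = v.FindMember(\"name\");\n        if (it != v.MemberEnd() && it->value.IsString())\n            doSomethingWith(it->value.GetString());    // member found and is a string\n        \\endcode\n    */\n\n    //! 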
Find member by string object name.\n    /*!\n        \\param name Member name to be searched.\n        \\pre IsObject() == true\n        \\return Iterator to member, if it exists.\n            Otherwise returns \\ref MemberEnd().\n    */\n    MemberIterator FindMember(const std::basic_string<Ch>& name) { return FindMember(GenericValue(StringRef(name))); }\n    ConstMemberIterator FindMember(const std::basic_string<Ch>& name) const { return FindMember(GenericValue(StringRef(name))); }\n#endif\n\n    //! Add a member (name-value pair) to the object.\n    /*! \\param name A string value as name of member.\n        \\param value Value of any type.\n        \\param allocator    Allocator for reallocating memory. It must be the same one as used before. Commonly use GenericDocument::GetAllocator().\n        \\return The value itself for fluent API.\n        \\note The ownership of \\c name and \\c value will be transferred to this object on success.\n        \\pre  IsObject() && name.IsString()\n        \\post name.IsNull() && value.IsNull()\n        \\note Amortized Constant time complexity.\n    */\n    GenericValue& AddMember(GenericValue& name, GenericValue& value, Allocator& allocator) {\n        RAPIDJSON_ASSERT(IsObject());\n        RAPIDJSON_ASSERT(name.IsString());\n        DoAddMember(name, value, allocator);\n        return *this;\n    }\n\n    //! Add a constant string value as member (name-value pair) to the object.\n    /*! \\param name A string value as name of member.\n        \\param value constant string reference as value of member.\n        \\param allocator    Allocator for reallocating memory. It must be the same one as used before. 
Commonly use GenericDocument::GetAllocator().\n        \\return The value itself for fluent API.\n        \\pre  IsObject()\n        \\note This overload is needed to avoid clashes with the generic primitive type AddMember(GenericValue&,T,Allocator&) overload below.\n        \\note Amortized Constant time complexity.\n    */\n    GenericValue& AddMember(GenericValue& name, StringRefType value, Allocator& allocator) {\n        GenericValue v(value);\n        return AddMember(name, v, allocator);\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    //! Add a string object as member (name-value pair) to the object.\n    /*! \\param name A string value as name of member.\n        \\param value constant string reference as value of member.\n        \\param allocator    Allocator for reallocating memory. It must be the same one as used before. Commonly use GenericDocument::GetAllocator().\n        \\return The value itself for fluent API.\n        \\pre  IsObject()\n        \\note This overload is needed to avoid clashes with the generic primitive type AddMember(GenericValue&,T,Allocator&) overload below.\n        \\note Amortized Constant time complexity.\n    */\n    GenericValue& AddMember(GenericValue& name, std::basic_string<Ch>& value, Allocator& allocator) {\n        GenericValue v(value, allocator);\n        return AddMember(name, v, allocator);\n    }\n#endif\n\n    //! Add any primitive value as member (name-value pair) to the object.\n    /*! \\tparam T Either \\ref Type, \\c int, \\c unsigned, \\c int64_t, \\c uint64_t\n        \\param name A string value as name of member.\n        \\param value Value of primitive type \\c T as value of member\n        \\param allocator Allocator for reallocating memory. Commonly use GenericDocument::GetAllocator().\n        \\return The value itself for fluent API.\n        \\pre  IsObject()\n\n        \\note The source type \\c T explicitly disallows all pointer types,\n            especially (\\c const) \\ref Ch*.  
This helps avoid implicitly\n            referencing character strings with insufficient lifetime; use\n            \\ref AddMember(StringRefType, GenericValue&, Allocator&) or \\ref\n            AddMember(StringRefType, StringRefType, Allocator&).\n            All other pointer types would implicitly convert to \\c bool;\n            use an explicit cast instead, if needed.\n        \\note Amortized Constant time complexity.\n    */\n    template <typename T>\n    RAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T>, internal::IsGenericValue<T> >), (GenericValue&))\n    AddMember(GenericValue& name, T value, Allocator& allocator) {\n        GenericValue v(value);\n        return AddMember(name, v, allocator);\n    }\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    GenericValue& AddMember(GenericValue&& name, GenericValue&& value, Allocator& allocator) {\n        return AddMember(name, value, allocator);\n    }\n    GenericValue& AddMember(GenericValue&& name, GenericValue& value, Allocator& allocator) {\n        return AddMember(name, value, allocator);\n    }\n    GenericValue& AddMember(GenericValue& name, GenericValue&& value, Allocator& allocator) {\n        return AddMember(name, value, allocator);\n    }\n    GenericValue& AddMember(StringRefType name, GenericValue&& value, Allocator& allocator) {\n        GenericValue n(name);\n        return AddMember(n, value, allocator);\n    }\n#endif // RAPIDJSON_HAS_CXX11_RVALUE_REFS\n\n\n    //! Add a member (name-value pair) to the object.\n    /*! \\param name A constant string reference as name of member.\n        \\param value Value of any type.\n        \\param allocator    Allocator for reallocating memory. It must be the same one as used before. 
Commonly use GenericDocument::GetAllocator().\n        \\return The value itself for fluent API.\n        \\note The ownership of \\c value will be transferred to this object on success.\n        \\pre  IsObject()\n        \\post value.IsNull()\n        \\note Amortized Constant time complexity.\n    */\n    GenericValue& AddMember(StringRefType name, GenericValue& value, Allocator& allocator) {\n        GenericValue n(name);\n        return AddMember(n, value, allocator);\n    }\n\n    //! Add a constant string value as member (name-value pair) to the object.\n    /*! \\param name A constant string reference as name of member.\n        \\param value constant string reference as value of member.\n        \\param allocator    Allocator for reallocating memory. It must be the same one as used before. Commonly use GenericDocument::GetAllocator().\n        \\return The value itself for fluent API.\n        \\pre  IsObject()\n        \\note This overload is needed to avoid clashes with the generic primitive type AddMember(StringRefType,T,Allocator&) overload below.\n        \\note Amortized Constant time complexity.\n    */\n    GenericValue& AddMember(StringRefType name, StringRefType value, Allocator& allocator) {\n        GenericValue v(value);\n        return AddMember(name, v, allocator);\n    }\n\n    //! Add any primitive value as member (name-value pair) to the object.\n    /*! \\tparam T Either \\ref Type, \\c int, \\c unsigned, \\c int64_t, \\c uint64_t\n        \\param name A constant string reference as name of member.\n        \\param value Value of primitive type \\c T as value of member\n        \\param allocator Allocator for reallocating memory. Commonly use GenericDocument::GetAllocator().\n        \\return The value itself for fluent API.\n        \\pre  IsObject()\n\n        \\note The source type \\c T explicitly disallows all pointer types,\n            especially (\\c const) \\ref Ch*.  
This helps avoid implicitly\n            referencing character strings with insufficient lifetime; use\n            \\ref AddMember(StringRefType, GenericValue&, Allocator&) or \\ref\n            AddMember(StringRefType, StringRefType, Allocator&).\n            All other pointer types would implicitly convert to \\c bool;\n            use an explicit cast instead, if needed.\n        \\note Amortized Constant time complexity.\n    */\n    template <typename T>\n    RAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T>, internal::IsGenericValue<T> >), (GenericValue&))\n    AddMember(StringRefType name, T value, Allocator& allocator) {\n        GenericValue n(name);\n        return AddMember(n, value, allocator);\n    }\n\n    //! Remove all members in the object.\n    /*! This function does not deallocate memory in the object, i.e. the capacity is unchanged.\n        \\note Linear time complexity.\n    */\n    void RemoveAllMembers() {\n        RAPIDJSON_ASSERT(IsObject()); \n        DoClearMembers();\n    }\n\n    //! Remove a member in object by its name.\n    /*! \\param name Name of member to be removed.\n        \\return Whether the member existed.\n        \\note This function may reorder the object members. 
Use \\ref\n            EraseMember(ConstMemberIterator) if you need to preserve the\n            relative order of the remaining members.\n        \\note Linear time complexity.\n    */\n    bool RemoveMember(const Ch* name) {\n        GenericValue n(StringRef(name));\n        return RemoveMember(n);\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    bool RemoveMember(const std::basic_string<Ch>& name) { return RemoveMember(GenericValue(StringRef(name))); }\n#endif\n\n    template <typename SourceAllocator>\n    bool RemoveMember(const GenericValue<Encoding, SourceAllocator>& name) {\n        MemberIterator m = FindMember(name);\n        if (m != MemberEnd()) {\n            RemoveMember(m);\n            return true;\n        }\n        else\n            return false;\n    }\n\n    //! Remove a member in object by iterator.\n    /*! \\param m member iterator (obtained by FindMember() or MemberBegin()).\n        \\return the new iterator after removal.\n        \\note This function may reorder the object members. Use \\ref\n            EraseMember(ConstMemberIterator) if you need to preserve the\n            relative order of the remaining members.\n        \\note Constant time complexity.\n    */\n    MemberIterator RemoveMember(MemberIterator m) {\n        RAPIDJSON_ASSERT(IsObject());\n        RAPIDJSON_ASSERT(data_.o.size > 0);\n        RAPIDJSON_ASSERT(GetMembersPointer() != 0);\n        RAPIDJSON_ASSERT(m >= MemberBegin() && m < MemberEnd());\n        return DoRemoveMember(m);\n    }\n\n    //! Remove a member from an object by iterator.\n    /*! \\param pos iterator to the member to remove\n        \\pre IsObject() == true && \\ref MemberBegin() <= \\c pos < \\ref MemberEnd()\n        \\return Iterator following the removed element.\n            If the iterator \\c pos refers to the last element, the \\ref MemberEnd() iterator is returned.\n        \\note This function preserves the relative order of the remaining object\n            members. 
If you do not need this, use the more efficient \\ref RemoveMember(MemberIterator).\n        \\note Linear time complexity.\n    */\n    MemberIterator EraseMember(ConstMemberIterator pos) {\n        return EraseMember(pos, pos + 1);\n    }\n\n    //! Remove members in the range [first, last) from an object.\n    /*! \\param first iterator to the first member to remove\n        \\param last  iterator following the last member to remove\n        \\pre IsObject() == true && \\ref MemberBegin() <= \\c first <= \\c last <= \\ref MemberEnd()\n        \\return Iterator following the last removed element.\n        \\note This function preserves the relative order of the remaining object\n            members.\n        \\note Linear time complexity.\n    */\n    MemberIterator EraseMember(ConstMemberIterator first, ConstMemberIterator last) {\n        RAPIDJSON_ASSERT(IsObject());\n        RAPIDJSON_ASSERT(data_.o.size > 0);\n        RAPIDJSON_ASSERT(GetMembersPointer() != 0);\n        RAPIDJSON_ASSERT(first >= MemberBegin());\n        RAPIDJSON_ASSERT(first <= last);\n        RAPIDJSON_ASSERT(last <= MemberEnd());\n        return DoEraseMembers(first, last);\n    }\n\n    //! Erase a member in object by its name.\n    /*! 
\\param name Name of member to be removed.\n        \\return Whether the member existed.\n        \\note Linear time complexity.\n    */\n    bool EraseMember(const Ch* name) {\n        GenericValue n(StringRef(name));\n        return EraseMember(n);\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    bool EraseMember(const std::basic_string<Ch>& name) { return EraseMember(GenericValue(StringRef(name))); }\n#endif\n\n    template <typename SourceAllocator>\n    bool EraseMember(const GenericValue<Encoding, SourceAllocator>& name) {\n        MemberIterator m = FindMember(name);\n        if (m != MemberEnd()) {\n            EraseMember(m);\n            return true;\n        }\n        else\n            return false;\n    }\n\n    Object GetObject() { RAPIDJSON_ASSERT(IsObject()); return Object(*this); }\n    Object GetObj() { RAPIDJSON_ASSERT(IsObject()); return Object(*this); }\n    ConstObject GetObject() const { RAPIDJSON_ASSERT(IsObject()); return ConstObject(*this); }\n    ConstObject GetObj() const { RAPIDJSON_ASSERT(IsObject()); return ConstObject(*this); }\n\n    //@}\n\n    //!@name Array\n    //@{\n\n    //! Set this value as an empty array.\n    /*! \\post IsArray() == true */\n    GenericValue& SetArray() { this->~GenericValue(); new (this) GenericValue(kArrayType); return *this; }\n\n    //! Get the number of elements in array.\n    SizeType Size() const { RAPIDJSON_ASSERT(IsArray()); return data_.a.size; }\n\n    //! Get the capacity of array.\n    SizeType Capacity() const { RAPIDJSON_ASSERT(IsArray()); return data_.a.capacity; }\n\n    //! Check whether the array is empty.\n    bool Empty() const { RAPIDJSON_ASSERT(IsArray()); return data_.a.size == 0; }\n\n    //! Remove all elements in the array.\n    /*! This function does not deallocate memory in the array, i.e. 
the capacity is unchanged.\n        \\note Linear time complexity.\n    */\n    void Clear() {\n        RAPIDJSON_ASSERT(IsArray()); \n        GenericValue* e = GetElementsPointer();\n        for (GenericValue* v = e; v != e + data_.a.size; ++v)\n            v->~GenericValue();\n        data_.a.size = 0;\n    }\n\n    //! Get an element from array by index.\n    /*! \\pre IsArray() == true\n        \\param index Zero-based index of element.\n        \\see operator[](T*)\n    */\n    GenericValue& operator[](SizeType index) {\n        RAPIDJSON_ASSERT(IsArray());\n        RAPIDJSON_ASSERT(index < data_.a.size);\n        return GetElementsPointer()[index];\n    }\n    const GenericValue& operator[](SizeType index) const { return const_cast<GenericValue&>(*this)[index]; }\n\n    //! Element iterator\n    /*! \\pre IsArray() == true */\n    ValueIterator Begin() { RAPIDJSON_ASSERT(IsArray()); return GetElementsPointer(); }\n    //! \\em Past-the-end element iterator\n    /*! \\pre IsArray() == true */\n    ValueIterator End() { RAPIDJSON_ASSERT(IsArray()); return GetElementsPointer() + data_.a.size; }\n    //! Constant element iterator\n    /*! \\pre IsArray() == true */\n    ConstValueIterator Begin() const { return const_cast<GenericValue&>(*this).Begin(); }\n    //! Constant \\em past-the-end element iterator\n    /*! \\pre IsArray() == true */\n    ConstValueIterator End() const { return const_cast<GenericValue&>(*this).End(); }\n\n    //! Request the array to have enough capacity to store elements.\n    /*! \\param newCapacity  The capacity that the array at least needs to have.\n        \\param allocator    Allocator for reallocating memory. It must be the same one as used before. 
Commonly use GenericDocument::GetAllocator().\n        \\return The value itself for fluent API.\n        \\note Linear time complexity.\n    */\n    GenericValue& Reserve(SizeType newCapacity, Allocator &allocator) {\n        RAPIDJSON_ASSERT(IsArray());\n        if (newCapacity > data_.a.capacity) {\n            SetElementsPointer(reinterpret_cast<GenericValue*>(allocator.Realloc(GetElementsPointer(), data_.a.capacity * sizeof(GenericValue), newCapacity * sizeof(GenericValue))));\n            data_.a.capacity = newCapacity;\n        }\n        return *this;\n    }\n\n    //! Append a GenericValue at the end of the array.\n    /*! \\param value        Value to be appended.\n        \\param allocator    Allocator for reallocating memory. It must be the same one as used before. Commonly use GenericDocument::GetAllocator().\n        \\pre IsArray() == true\n        \\post value.IsNull() == true\n        \\return The value itself for fluent API.\n        \\note The ownership of \\c value will be transferred to this array on success.\n        \\note If the number of elements to be appended is known, calling Reserve() once first may be more efficient.\n        \\note Amortized constant time complexity.\n    */\n    GenericValue& PushBack(GenericValue& value, Allocator& allocator) {\n        RAPIDJSON_ASSERT(IsArray());\n        if (data_.a.size >= data_.a.capacity)\n            Reserve(data_.a.capacity == 0 ? kDefaultArrayCapacity : (data_.a.capacity + (data_.a.capacity + 1) / 2), allocator);\n        GetElementsPointer()[data_.a.size++].RawAssign(value);\n        return *this;\n    }\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    GenericValue& PushBack(GenericValue&& value, Allocator& allocator) {\n        return PushBack(value, allocator);\n    }\n#endif // RAPIDJSON_HAS_CXX11_RVALUE_REFS\n\n    //! Append a constant string reference at the end of the array.\n    /*! 
\\param value        Constant string reference to be appended.\n        \\param allocator    Allocator for reallocating memory. It must be the same one used previously. Commonly use GenericDocument::GetAllocator().\n        \\pre IsArray() == true\n        \\return The value itself for fluent API.\n        \\note If the number of elements to be appended is known, calling Reserve() once first may be more efficient.\n        \\note Amortized constant time complexity.\n        \\see GenericStringRef\n    */\n    GenericValue& PushBack(StringRefType value, Allocator& allocator) {\n        return (*this).template PushBack<StringRefType>(value, allocator);\n    }\n\n    //! Append a primitive value at the end of the array.\n    /*! \\tparam T Either \\ref Type, \\c int, \\c unsigned, \\c int64_t, \\c uint64_t\n        \\param value Value of primitive type T to be appended.\n        \\param allocator    Allocator for reallocating memory. It must be the same one as used before. Commonly use GenericDocument::GetAllocator().\n        \\pre IsArray() == true\n        \\return The value itself for fluent API.\n        \\note If the number of elements to be appended is known, calling Reserve() once first may be more efficient.\n\n        \\note The source type \\c T explicitly disallows all pointer types,\n            especially (\\c const) \\ref Ch*.  
This helps avoid implicitly\n            referencing character strings with insufficient lifetime; use\n            \\ref PushBack(GenericValue&, Allocator&) or \\ref\n            PushBack(StringRefType, Allocator&).\n            All other pointer types would implicitly convert to \\c bool;\n            use an explicit cast instead, if needed.\n        \\note Amortized constant time complexity.\n    */\n    template <typename T>\n    RAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T>, internal::IsGenericValue<T> >), (GenericValue&))\n    PushBack(T value, Allocator& allocator) {\n        GenericValue v(value);\n        return PushBack(v, allocator);\n    }\n\n    //! Remove the last element in the array.\n    /*!\n        \\note Constant time complexity.\n    */\n    GenericValue& PopBack() {\n        RAPIDJSON_ASSERT(IsArray());\n        RAPIDJSON_ASSERT(!Empty());\n        GetElementsPointer()[--data_.a.size].~GenericValue();\n        return *this;\n    }\n\n    //! Remove an element of array by iterator.\n    /*!\n        \\param pos iterator to the element to remove\n        \\pre IsArray() == true && \\ref Begin() <= \\c pos < \\ref End()\n        \\return Iterator following the removed element. If the iterator pos refers to the last element, the End() iterator is returned.\n        \\note Linear time complexity.\n    */\n    ValueIterator Erase(ConstValueIterator pos) {\n        return Erase(pos, pos + 1);\n    }\n\n    //! 
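Example (illustrative sketch, not part of the API): building an array and erasing\n    //! from it, assuming a \\c rapidjson::Document \\c d used only for its allocator.\n    /*! \\code\n        Value a(kArrayType);\n        a.PushBack(1, d.GetAllocator()).PushBack(2, d.GetAllocator());  // fluent API\n        a.Erase(a.Begin());             // a now holds the single element 2\n        \\endcode\n    */\n\n    //! 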
Remove elements in the range [first, last) of the array.\n    /*!\n        \\param first iterator to the first element to remove\n        \\param last  iterator following the last element to remove\n        \\pre IsArray() == true && \\ref Begin() <= \\c first <= \\c last <= \\ref End()\n        \\return Iterator following the last removed element.\n        \\note Linear time complexity.\n    */\n    ValueIterator Erase(ConstValueIterator first, ConstValueIterator last) {\n        RAPIDJSON_ASSERT(IsArray());\n        RAPIDJSON_ASSERT(data_.a.size > 0);\n        RAPIDJSON_ASSERT(GetElementsPointer() != 0);\n        RAPIDJSON_ASSERT(first >= Begin());\n        RAPIDJSON_ASSERT(first <= last);\n        RAPIDJSON_ASSERT(last <= End());\n        ValueIterator pos = Begin() + (first - Begin());\n        for (ValueIterator itr = pos; itr != last; ++itr)\n            itr->~GenericValue();\n        std::memmove(static_cast<void*>(pos), last, static_cast<size_t>(End() - last) * sizeof(GenericValue));\n        data_.a.size -= static_cast<SizeType>(last - first);\n        return pos;\n    }\n\n    Array GetArray() { RAPIDJSON_ASSERT(IsArray()); return Array(*this); }\n    ConstArray GetArray() const { RAPIDJSON_ASSERT(IsArray()); return ConstArray(*this); }\n\n    //@}\n\n    //!@name Number\n    //@{\n\n    int GetInt() const          { RAPIDJSON_ASSERT(data_.f.flags & kIntFlag);   return data_.n.i.i;   }\n    unsigned GetUint() const    { RAPIDJSON_ASSERT(data_.f.flags & kUintFlag);  return data_.n.u.u;   }\n    int64_t GetInt64() const    { RAPIDJSON_ASSERT(data_.f.flags & kInt64Flag); return data_.n.i64; }\n    uint64_t GetUint64() const  { RAPIDJSON_ASSERT(data_.f.flags & kUint64Flag); return data_.n.u64; }\n\n    //! Get the value as double type.\n    /*! \\note If the value is 64-bit integer type, it may lose precision. 
Use \\c IsLosslessDouble() to check whether the conversion is lossless.\n    */\n    double GetDouble() const {\n        RAPIDJSON_ASSERT(IsNumber());\n        if ((data_.f.flags & kDoubleFlag) != 0)                return data_.n.d;   // exact type, no conversion.\n        if ((data_.f.flags & kIntFlag) != 0)                   return data_.n.i.i; // int -> double\n        if ((data_.f.flags & kUintFlag) != 0)                  return data_.n.u.u; // unsigned -> double\n        if ((data_.f.flags & kInt64Flag) != 0)                 return static_cast<double>(data_.n.i64); // int64_t -> double (may lose precision)\n        RAPIDJSON_ASSERT((data_.f.flags & kUint64Flag) != 0);  return static_cast<double>(data_.n.u64); // uint64_t -> double (may lose precision)\n    }\n\n    //! Get the value as float type.\n    /*! \\note If the value is 64-bit integer type, it may lose precision. Use \\c IsLosslessFloat() to check whether the conversion is lossless.\n    */\n    float GetFloat() const {\n        return static_cast<float>(GetDouble());\n    }\n\n    GenericValue& SetInt(int i)             { this->~GenericValue(); new (this) GenericValue(i);    return *this; }\n    GenericValue& SetUint(unsigned u)       { this->~GenericValue(); new (this) GenericValue(u);    return *this; }\n    GenericValue& SetInt64(int64_t i64)     { this->~GenericValue(); new (this) GenericValue(i64);  return *this; }\n    GenericValue& SetUint64(uint64_t u64)   { this->~GenericValue(); new (this) GenericValue(u64);  return *this; }\n    GenericValue& SetDouble(double d)       { this->~GenericValue(); new (this) GenericValue(d);    return *this; }\n    GenericValue& SetFloat(float f)         { this->~GenericValue(); new (this) GenericValue(static_cast<double>(f)); return *this; }\n\n    //@}\n\n    //!@name String\n    //@{\n\n    const Ch* GetString() const { RAPIDJSON_ASSERT(IsString()); return DataString(data_); }\n\n    //! Get the length of string.\n    /*! 
Since rapidjson permits \"\\u0000\" in the json string, strlen(v.GetString()) may not equal v.GetStringLength().\n    */\n    SizeType GetStringLength() const { RAPIDJSON_ASSERT(IsString()); return DataStringLength(data_); }\n\n    //! Set this value as a string without copying source string.\n    /*! This version has better performance with supplied length, and also supports strings containing null characters.\n        \\param s source string pointer. \n        \\param length The length of source string, excluding the trailing null terminator.\n        \\return The value itself for fluent API.\n        \\post IsString() == true && GetString() == s && GetStringLength() == length\n        \\see SetString(StringRefType)\n    */\n    GenericValue& SetString(const Ch* s, SizeType length) { return SetString(StringRef(s, length)); }\n\n    //! Set this value as a string without copying source string.\n    /*! \\param s source string reference\n        \\return The value itself for fluent API.\n        \\post IsString() == true && GetString() == s && GetStringLength() == s.length\n    */\n    GenericValue& SetString(StringRefType s) { this->~GenericValue(); SetStringRaw(s); return *this; }\n\n    //! Set this value as a string by copying from source string.\n    /*! This version has better performance with supplied length, and also supports strings containing null characters.\n        \\param s source string. \n        \\param length The length of source string, excluding the trailing null terminator.\n        \\param allocator Allocator for allocating copied buffer. Commonly use GenericDocument::GetAllocator().\n        \\return The value itself for fluent API.\n        \\post IsString() == true && GetString() != s && strcmp(GetString(),s) == 0 && GetStringLength() == length\n    */\n    GenericValue& SetString(const Ch* s, SizeType length, Allocator& allocator) { return SetString(StringRef(s, length), allocator); }\n\n    //! 
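Example (illustrative sketch, not part of the API): copying a string into a value,\n    //! assuming a \\c rapidjson::Document \\c d used only for its allocator.\n    /*! \\code\n        Value s;\n        s.SetString(\"hello\", d.GetAllocator());   // copies the characters into d's allocator\n        \\endcode\n    */\n\n    //! 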
Set this value as a string by copying from source string.\n    /*! \\param s source string. \n        \\param allocator Allocator for allocating copied buffer. Commonly use GenericDocument::GetAllocator().\n        \\return The value itself for fluent API.\n        \\post IsString() == true && GetString() != s && strcmp(GetString(),s) == 0 && GetStringLength() == StrLen(s)\n    */\n    GenericValue& SetString(const Ch* s, Allocator& allocator) { return SetString(StringRef(s), allocator); }\n\n    //! Set this value as a string by copying from source string.\n    /*! \\param s source string reference\n        \\param allocator Allocator for allocating copied buffer. Commonly use GenericDocument::GetAllocator().\n        \\return The value itself for fluent API.\n        \\post IsString() == true && GetString() != s.s && strcmp(GetString(),s) == 0 && GetStringLength() == s.length\n    */\n    GenericValue& SetString(StringRefType s, Allocator& allocator) { this->~GenericValue(); SetStringRaw(s, allocator); return *this; }\n\n#if RAPIDJSON_HAS_STDSTRING\n    //! Set this value as a string by copying from source string.\n    /*! \\param s source string.\n        \\param allocator Allocator for allocating copied buffer. Commonly use GenericDocument::GetAllocator().\n        \\return The value itself for fluent API.\n        \\post IsString() == true && GetString() != s.data() && strcmp(GetString(), s.data()) == 0 && GetStringLength() == s.size()\n        \\note Requires the definition of the preprocessor symbol \\ref RAPIDJSON_HAS_STDSTRING.\n    */\n    GenericValue& SetString(const std::basic_string<Ch>& s, Allocator& allocator) { return SetString(StringRef(s), allocator); }\n#endif\n\n    //@}\n\n    //!@name Templated accessors\n    //@{\n\n    //! 
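Example (illustrative sketch, not part of the API): templated type checks and\n    //! getters, assuming an integer-valued \\c rapidjson::Value \\c v.\n    /*! \\code\n        bool ok = v.Is<int>();    // true for an int value\n        int n = v.Get<int>();     // equivalent to v.GetInt()\n        \\endcode\n    */\n\n    //! 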
Templated version for checking whether this value is type T.\n    /*!\n        \\tparam T Either \\c bool, \\c int, \\c unsigned, \\c int64_t, \\c uint64_t, \\c double, \\c float, \\c const \\c char*, \\c std::basic_string<Ch>\n    */\n    template <typename T>\n    bool Is() const { return internal::TypeHelper<ValueType, T>::Is(*this); }\n\n    template <typename T>\n    T Get() const { return internal::TypeHelper<ValueType, T>::Get(*this); }\n\n    template <typename T>\n    T Get() { return internal::TypeHelper<ValueType, T>::Get(*this); }\n\n    template<typename T>\n    ValueType& Set(const T& data) { return internal::TypeHelper<ValueType, T>::Set(*this, data); }\n\n    template<typename T>\n    ValueType& Set(const T& data, AllocatorType& allocator) { return internal::TypeHelper<ValueType, T>::Set(*this, data, allocator); }\n\n    //@}\n\n    //! Generate events of this value to a Handler.\n    /*! This function adopts the GoF visitor pattern.\n        Typical usage is to output this JSON value as JSON text via Writer, which is a Handler.\n        It can also be used to deep clone this value via GenericDocument, which is also a Handler.\n        \\tparam Handler type of handler.\n        \\param handler An object implementing concept Handler.\n    */\n    template <typename Handler>\n    bool Accept(Handler& handler) const {\n        switch(GetType()) {\n        case kNullType:     return handler.Null();\n        case kFalseType:    return handler.Bool(false);\n        case kTrueType:     return handler.Bool(true);\n\n        case kObjectType:\n            if (RAPIDJSON_UNLIKELY(!handler.StartObject()))\n                return false;\n            for (ConstMemberIterator m = MemberBegin(); m != MemberEnd(); ++m) {\n                RAPIDJSON_ASSERT(m->name.IsString()); // User may change the type of name by MemberIterator.\n                if (RAPIDJSON_UNLIKELY(!handler.Key(m->name.GetString(), m->name.GetStringLength(), (m->name.data_.f.flags & kCopyFlag) != 
0)))\n                    return false;\n                if (RAPIDJSON_UNLIKELY(!m->value.Accept(handler)))\n                    return false;\n            }\n            return handler.EndObject(data_.o.size);\n\n        case kArrayType:\n            if (RAPIDJSON_UNLIKELY(!handler.StartArray()))\n                return false;\n            for (ConstValueIterator v = Begin(); v != End(); ++v)\n                if (RAPIDJSON_UNLIKELY(!v->Accept(handler)))\n                    return false;\n            return handler.EndArray(data_.a.size);\n    \n        case kStringType:\n            return handler.String(GetString(), GetStringLength(), (data_.f.flags & kCopyFlag) != 0);\n    \n        default:\n            RAPIDJSON_ASSERT(GetType() == kNumberType);\n            if (IsDouble())         return handler.Double(data_.n.d);\n            else if (IsInt())       return handler.Int(data_.n.i.i);\n            else if (IsUint())      return handler.Uint(data_.n.u.u);\n            else if (IsInt64())     return handler.Int64(data_.n.i64);\n            else                    return handler.Uint64(data_.n.u64);\n        }\n    }\n\nprivate:\n    template <typename, typename> friend class GenericValue;\n    template <typename, typename, typename> friend class GenericDocument;\n\n    enum {\n        kBoolFlag       = 0x0008,\n        kNumberFlag     = 0x0010,\n        kIntFlag        = 0x0020,\n        kUintFlag       = 0x0040,\n        kInt64Flag      = 0x0080,\n        kUint64Flag     = 0x0100,\n        kDoubleFlag     = 0x0200,\n        kStringFlag     = 0x0400,\n        kCopyFlag       = 0x0800,\n        kInlineStrFlag  = 0x1000,\n\n        // Initial flags of different types.\n        kNullFlag = kNullType,\n        // These casts are added to suppress the warning on MSVC about bitwise operations between enums of different types.\n        kTrueFlag = static_cast<int>(kTrueType) | static_cast<int>(kBoolFlag),\n        kFalseFlag = static_cast<int>(kFalseType) | 
static_cast<int>(kBoolFlag),\n        kNumberIntFlag = static_cast<int>(kNumberType) | static_cast<int>(kNumberFlag | kIntFlag | kInt64Flag),\n        kNumberUintFlag = static_cast<int>(kNumberType) | static_cast<int>(kNumberFlag | kUintFlag | kUint64Flag | kInt64Flag),\n        kNumberInt64Flag = static_cast<int>(kNumberType) | static_cast<int>(kNumberFlag | kInt64Flag),\n        kNumberUint64Flag = static_cast<int>(kNumberType) | static_cast<int>(kNumberFlag | kUint64Flag),\n        kNumberDoubleFlag = static_cast<int>(kNumberType) | static_cast<int>(kNumberFlag | kDoubleFlag),\n        kNumberAnyFlag = static_cast<int>(kNumberType) | static_cast<int>(kNumberFlag | kIntFlag | kInt64Flag | kUintFlag | kUint64Flag | kDoubleFlag),\n        kConstStringFlag = static_cast<int>(kStringType) | static_cast<int>(kStringFlag),\n        kCopyStringFlag = static_cast<int>(kStringType) | static_cast<int>(kStringFlag | kCopyFlag),\n        kShortStringFlag = static_cast<int>(kStringType) | static_cast<int>(kStringFlag | kCopyFlag | kInlineStrFlag),\n        kObjectFlag = kObjectType,\n        kArrayFlag = kArrayType,\n\n        kTypeMask = 0x07\n    };\n\n    static const SizeType kDefaultArrayCapacity = RAPIDJSON_VALUE_DEFAULT_ARRAY_CAPACITY;\n    static const SizeType kDefaultObjectCapacity = RAPIDJSON_VALUE_DEFAULT_OBJECT_CAPACITY;\n\n    struct Flag {\n#if RAPIDJSON_48BITPOINTER_OPTIMIZATION\n        char payload[sizeof(SizeType) * 2 + 6];     // 2 x SizeType + lower 48-bit pointer\n#elif RAPIDJSON_64BIT\n        char payload[sizeof(SizeType) * 2 + sizeof(void*) + 6]; // 6 padding bytes\n#else\n        char payload[sizeof(SizeType) * 2 + sizeof(void*) + 2]; // 2 padding bytes\n#endif\n        uint16_t flags;\n    };\n\n    struct String {\n        SizeType length;\n        SizeType hashcode;  //!< reserved\n        const Ch* str;\n    };  // 12 bytes in 32-bit mode, 16 bytes in 64-bit mode\n\n    // implementation detail: ShortString can represent zero-terminated strings 
up to MaxSize chars\n    // (excluding the terminating zero) and store a value to determine the length of the contained\n    // string in the last character str[LenPos] by storing \"MaxSize - length\" there. If the string\n    // to store has the maximal length of MaxSize then str[LenPos] will be 0 and therefore act as\n    // the string terminator as well. To get the string length back from that value, use\n    // \"MaxSize - str[LenPos]\".\n    // This allows storing 13-char strings in 32-bit mode, 21-char strings in 64-bit mode,\n    // and 13-char strings inline for RAPIDJSON_48BITPOINTER_OPTIMIZATION=1 (for `UTF8`-encoded strings).\n    struct ShortString {\n        enum { MaxChars = sizeof(static_cast<Flag*>(0)->payload) / sizeof(Ch), MaxSize = MaxChars - 1, LenPos = MaxSize };\n        Ch str[MaxChars];\n\n        inline static bool Usable(SizeType len) { return                       (MaxSize >= len); }\n        inline void     SetLength(SizeType len) { str[LenPos] = static_cast<Ch>(MaxSize -  len); }\n        inline SizeType GetLength() const       { return  static_cast<SizeType>(MaxSize -  str[LenPos]); }\n    };  // at most as many bytes as \"String\" above => 12 bytes in 32-bit mode, 16 bytes in 64-bit mode\n\n    // By using a proper binary layout, retrieval of the different integer types does not need conversions.\n    union Number {\n#if RAPIDJSON_ENDIAN == RAPIDJSON_LITTLEENDIAN\n        struct I {\n            int i;\n            char padding[4];\n        }i;\n        struct U {\n            unsigned u;\n            char padding2[4];\n        }u;\n#else\n        struct I {\n            char padding[4];\n            int i;\n        }i;\n        struct U {\n            char padding2[4];\n            unsigned u;\n        }u;\n#endif\n        int64_t i64;\n        uint64_t u64;\n        double d;\n    };  // 8 bytes\n\n    struct ObjectData {\n        SizeType size;\n        SizeType capacity;\n        Member* members;\n    };  // 12 bytes in 32-bit 
mode, 16 bytes in 64-bit mode\n\n    struct ArrayData {\n        SizeType size;\n        SizeType capacity;\n        GenericValue* elements;\n    };  // 12 bytes in 32-bit mode, 16 bytes in 64-bit mode\n\n    union Data {\n        String s;\n        ShortString ss;\n        Number n;\n        ObjectData o;\n        ArrayData a;\n        Flag f;\n    };  // 16 bytes in 32-bit mode, 24 bytes in 64-bit mode, 16 bytes in 64-bit with RAPIDJSON_48BITPOINTER_OPTIMIZATION\n\n    static RAPIDJSON_FORCEINLINE const Ch* DataString(const Data& data) {\n        return (data.f.flags & kInlineStrFlag) ? data.ss.str : RAPIDJSON_GETPOINTER(Ch, data.s.str);\n    }\n    static RAPIDJSON_FORCEINLINE SizeType DataStringLength(const Data& data) {\n        return (data.f.flags & kInlineStrFlag) ? data.ss.GetLength() : data.s.length;\n    }\n\n    RAPIDJSON_FORCEINLINE const Ch* GetStringPointer() const { return RAPIDJSON_GETPOINTER(Ch, data_.s.str); }\n    RAPIDJSON_FORCEINLINE const Ch* SetStringPointer(const Ch* str) { return RAPIDJSON_SETPOINTER(Ch, data_.s.str, str); }\n    RAPIDJSON_FORCEINLINE GenericValue* GetElementsPointer() const { return RAPIDJSON_GETPOINTER(GenericValue, data_.a.elements); }\n    RAPIDJSON_FORCEINLINE GenericValue* SetElementsPointer(GenericValue* elements) { return RAPIDJSON_SETPOINTER(GenericValue, data_.a.elements, elements); }\n    RAPIDJSON_FORCEINLINE Member* GetMembersPointer() const { return RAPIDJSON_GETPOINTER(Member, data_.o.members); }\n    RAPIDJSON_FORCEINLINE Member* SetMembersPointer(Member* members) { return RAPIDJSON_SETPOINTER(Member, data_.o.members, members); }\n\n#if RAPIDJSON_USE_MEMBERSMAP\n\n    struct MapTraits {\n        struct Less {\n            bool operator()(const Data& s1, const Data& s2) const {\n                SizeType n1 = DataStringLength(s1), n2 = DataStringLength(s2);\n                int cmp = std::memcmp(DataString(s1), DataString(s2), sizeof(Ch) * (n1 < n2 ? 
n1 : n2));\n                return cmp < 0 || (cmp == 0 && n1 < n2);\n            }\n        };\n        typedef std::pair<const Data, SizeType> Pair;\n        typedef std::multimap<Data, SizeType, Less, StdAllocator<Pair, Allocator> > Map;\n        typedef typename Map::iterator Iterator;\n    };\n    typedef typename MapTraits::Map         Map;\n    typedef typename MapTraits::Less        MapLess;\n    typedef typename MapTraits::Pair        MapPair;\n    typedef typename MapTraits::Iterator    MapIterator;\n\n    //\n    // Layout of the members' map/array, re(al)located according to the needed capacity:\n    //\n    //    {Map*}<>{capacity}<>{Member[capacity]}<>{MapIterator[capacity]}\n    //\n    // (where <> stands for the RAPIDJSON_ALIGN-ment, if needed)\n    //\n\n    static RAPIDJSON_FORCEINLINE size_t GetMapLayoutSize(SizeType capacity) {\n        return RAPIDJSON_ALIGN(sizeof(Map*)) +\n               RAPIDJSON_ALIGN(sizeof(SizeType)) +\n               RAPIDJSON_ALIGN(capacity * sizeof(Member)) +\n               capacity * sizeof(MapIterator);\n    }\n\n    static RAPIDJSON_FORCEINLINE SizeType &GetMapCapacity(Map* &map) {\n        return *reinterpret_cast<SizeType*>(reinterpret_cast<uintptr_t>(&map) +\n                                            RAPIDJSON_ALIGN(sizeof(Map*)));\n    }\n\n    static RAPIDJSON_FORCEINLINE Member* GetMapMembers(Map* &map) {\n        return reinterpret_cast<Member*>(reinterpret_cast<uintptr_t>(&map) +\n                                         RAPIDJSON_ALIGN(sizeof(Map*)) +\n                                         RAPIDJSON_ALIGN(sizeof(SizeType)));\n    }\n\n    static RAPIDJSON_FORCEINLINE MapIterator* GetMapIterators(Map* &map) {\n        return reinterpret_cast<MapIterator*>(reinterpret_cast<uintptr_t>(&map) +\n                                              RAPIDJSON_ALIGN(sizeof(Map*)) +\n                                              RAPIDJSON_ALIGN(sizeof(SizeType)) +\n                                              
RAPIDJSON_ALIGN(GetMapCapacity(map) * sizeof(Member)));\n    }\n\n    static RAPIDJSON_FORCEINLINE Map* &GetMap(Member* members) {\n        RAPIDJSON_ASSERT(members != 0);\n        return *reinterpret_cast<Map**>(reinterpret_cast<uintptr_t>(members) -\n                                        RAPIDJSON_ALIGN(sizeof(SizeType)) -\n                                        RAPIDJSON_ALIGN(sizeof(Map*)));\n    }\n\n    // Some compilers' debug mechanisms want all iterators to be destroyed, for their accounting..\n    RAPIDJSON_FORCEINLINE MapIterator DropMapIterator(MapIterator& rhs) {\n#if RAPIDJSON_HAS_CXX11\n        MapIterator ret = std::move(rhs);\n#else\n        MapIterator ret = rhs;\n#endif\n        rhs.~MapIterator();\n        return ret;\n    }\n\n    Map* &DoReallocMap(Map** oldMap, SizeType newCapacity, Allocator& allocator) {\n        Map **newMap = static_cast<Map**>(allocator.Malloc(GetMapLayoutSize(newCapacity)));\n        GetMapCapacity(*newMap) = newCapacity;\n        if (!oldMap) {\n            *newMap = new (allocator.Malloc(sizeof(Map))) Map(MapLess(), allocator);\n        }\n        else {\n            *newMap = *oldMap;\n            size_t count = (*oldMap)->size();\n            std::memcpy(static_cast<void*>(GetMapMembers(*newMap)),\n                        static_cast<void*>(GetMapMembers(*oldMap)),\n                        count * sizeof(Member));\n            MapIterator *oldIt = GetMapIterators(*oldMap),\n                        *newIt = GetMapIterators(*newMap);\n            while (count--) {\n                new (&newIt[count]) MapIterator(DropMapIterator(oldIt[count]));\n            }\n            Allocator::Free(oldMap);\n        }\n        return *newMap;\n    }\n\n    RAPIDJSON_FORCEINLINE Member* DoAllocMembers(SizeType capacity, Allocator& allocator) {\n        return GetMapMembers(DoReallocMap(0, capacity, allocator));\n    }\n\n    void DoReserveMembers(SizeType newCapacity, Allocator& allocator) {\n        ObjectData& o = data_.o;\n  
      if (newCapacity > o.capacity) {\n            Member* oldMembers = GetMembersPointer();\n            Map **oldMap = oldMembers ? &GetMap(oldMembers) : 0,\n                *&newMap = DoReallocMap(oldMap, newCapacity, allocator);\n            RAPIDJSON_SETPOINTER(Member, o.members, GetMapMembers(newMap));\n            o.capacity = newCapacity;\n        }\n    }\n\n    template <typename SourceAllocator>\n    MemberIterator DoFindMember(const GenericValue<Encoding, SourceAllocator>& name) {\n        if (Member* members = GetMembersPointer()) {\n            Map* &map = GetMap(members);\n            MapIterator mit = map->find(reinterpret_cast<const Data&>(name.data_));\n            if (mit != map->end()) {\n                return MemberIterator(&members[mit->second]);\n            }\n        }\n        return MemberEnd();\n    }\n\n    void DoClearMembers() {\n        if (Member* members = GetMembersPointer()) {\n            Map* &map = GetMap(members);\n            MapIterator* mit = GetMapIterators(map);\n            for (SizeType i = 0; i < data_.o.size; i++) {\n                map->erase(DropMapIterator(mit[i]));\n                members[i].~Member();\n            }\n            data_.o.size = 0;\n        }\n    }\n\n    void DoFreeMembers() {\n        if (Member* members = GetMembersPointer()) {\n            GetMap(members)->~Map();\n            for (SizeType i = 0; i < data_.o.size; i++) {\n                members[i].~Member();\n            }\n            if (Allocator::kNeedFree) { // Shortcut by Allocator's trait\n                Map** map = &GetMap(members);\n                Allocator::Free(*map);\n                Allocator::Free(map);\n            }\n        }\n    }\n\n#else // !RAPIDJSON_USE_MEMBERSMAP\n\n    RAPIDJSON_FORCEINLINE Member* DoAllocMembers(SizeType capacity, Allocator& allocator) {\n        return Malloc<Member>(allocator, capacity);\n    }\n\n    void DoReserveMembers(SizeType newCapacity, Allocator& allocator) {\n        ObjectData& o = 
data_.o;\n        if (newCapacity > o.capacity) {\n            Member* newMembers = Realloc<Member>(allocator, GetMembersPointer(), o.capacity, newCapacity);\n            RAPIDJSON_SETPOINTER(Member, o.members, newMembers);\n            o.capacity = newCapacity;\n        }\n    }\n\n    template <typename SourceAllocator>\n    MemberIterator DoFindMember(const GenericValue<Encoding, SourceAllocator>& name) {\n        MemberIterator member = MemberBegin();\n        for ( ; member != MemberEnd(); ++member)\n            if (name.StringEqual(member->name))\n                break;\n        return member;\n    }\n\n    void DoClearMembers() {\n        for (MemberIterator m = MemberBegin(); m != MemberEnd(); ++m)\n            m->~Member();\n        data_.o.size = 0;\n    }\n\n    void DoFreeMembers() {\n        for (MemberIterator m = MemberBegin(); m != MemberEnd(); ++m)\n            m->~Member();\n        Allocator::Free(GetMembersPointer());\n    }\n\n#endif // !RAPIDJSON_USE_MEMBERSMAP\n\n    void DoAddMember(GenericValue& name, GenericValue& value, Allocator& allocator) {\n        ObjectData& o = data_.o;\n        if (o.size >= o.capacity)\n            DoReserveMembers(o.capacity ? 
(o.capacity + (o.capacity + 1) / 2) : kDefaultObjectCapacity, allocator);\n        Member* members = GetMembersPointer();\n        Member* m = members + o.size;\n        m->name.RawAssign(name);\n        m->value.RawAssign(value);\n#if RAPIDJSON_USE_MEMBERSMAP\n        Map* &map = GetMap(members);\n        MapIterator* mit = GetMapIterators(map);\n        new (&mit[o.size]) MapIterator(map->insert(MapPair(m->name.data_, o.size)));\n#endif\n        ++o.size;\n    }\n\n    MemberIterator DoRemoveMember(MemberIterator m) {\n        ObjectData& o = data_.o;\n        Member* members = GetMembersPointer();\n#if RAPIDJSON_USE_MEMBERSMAP\n        Map* &map = GetMap(members);\n        MapIterator* mit = GetMapIterators(map);\n        SizeType mpos = static_cast<SizeType>(&*m - members);\n        map->erase(DropMapIterator(mit[mpos]));\n#endif\n        MemberIterator last(members + (o.size - 1));\n        if (o.size > 1 && m != last) {\n#if RAPIDJSON_USE_MEMBERSMAP\n            new (&mit[mpos]) MapIterator(DropMapIterator(mit[&*last - members]));\n            mit[mpos]->second = mpos;\n#endif\n            *m = *last; // Move the last one to this place\n        }\n        else {\n            m->~Member(); // Only one left, just destroy\n        }\n        --o.size;\n        return m;\n    }\n\n    MemberIterator DoEraseMembers(ConstMemberIterator first, ConstMemberIterator last) {\n        ObjectData& o = data_.o;\n        MemberIterator beg = MemberBegin(),\n                       pos = beg + (first - beg),\n                       end = MemberEnd();\n#if RAPIDJSON_USE_MEMBERSMAP\n        Map* &map = GetMap(GetMembersPointer());\n        MapIterator* mit = GetMapIterators(map);\n#endif\n        for (MemberIterator itr = pos; itr != last; ++itr) {\n#if RAPIDJSON_USE_MEMBERSMAP\n            map->erase(DropMapIterator(mit[itr - beg]));\n#endif\n            itr->~Member();\n        }\n#if RAPIDJSON_USE_MEMBERSMAP\n        if (first != last) {\n            // Move remaining 
members/iterators\n            MemberIterator next = pos + (last - first);\n            for (MemberIterator itr = pos; next != end; ++itr, ++next) {\n                std::memcpy(static_cast<void*>(&*itr), &*next, sizeof(Member));\n                SizeType mpos = static_cast<SizeType>(itr - beg);\n                new (&mit[mpos]) MapIterator(DropMapIterator(mit[next - beg]));\n                mit[mpos]->second = mpos;\n            }\n        }\n#else\n        std::memmove(static_cast<void*>(&*pos), &*last,\n                     static_cast<size_t>(end - last) * sizeof(Member));\n#endif\n        o.size -= static_cast<SizeType>(last - first);\n        return pos;\n    }\n\n    template <typename SourceAllocator>\n    void DoCopyMembers(const GenericValue<Encoding,SourceAllocator>& rhs, Allocator& allocator, bool copyConstStrings) {\n        RAPIDJSON_ASSERT(rhs.GetType() == kObjectType);\n\n        data_.f.flags = kObjectFlag;\n        SizeType count = rhs.data_.o.size;\n        Member* lm = DoAllocMembers(count, allocator);\n        const typename GenericValue<Encoding,SourceAllocator>::Member* rm = rhs.GetMembersPointer();\n#if RAPIDJSON_USE_MEMBERSMAP\n        Map* &map = GetMap(lm);\n        MapIterator* mit = GetMapIterators(map);\n#endif\n        for (SizeType i = 0; i < count; i++) {\n            new (&lm[i].name) GenericValue(rm[i].name, allocator, copyConstStrings);\n            new (&lm[i].value) GenericValue(rm[i].value, allocator, copyConstStrings);\n#if RAPIDJSON_USE_MEMBERSMAP\n            new (&mit[i]) MapIterator(map->insert(MapPair(lm[i].name.data_, i)));\n#endif\n        }\n        data_.o.size = data_.o.capacity = count;\n        SetMembersPointer(lm);\n    }\n\n    // Initialize this value as array with initial data, without calling destructor.\n    void SetArrayRaw(GenericValue* values, SizeType count, Allocator& allocator) {\n        data_.f.flags = kArrayFlag;\n        if (count) {\n            GenericValue* e = 
static_cast<GenericValue*>(allocator.Malloc(count * sizeof(GenericValue)));\n            SetElementsPointer(e);\n            std::memcpy(static_cast<void*>(e), values, count * sizeof(GenericValue));\n        }\n        else\n            SetElementsPointer(0);\n        data_.a.size = data_.a.capacity = count;\n    }\n\n    //! Initialize this value as object with initial data, without calling destructor.\n    void SetObjectRaw(Member* members, SizeType count, Allocator& allocator) {\n        data_.f.flags = kObjectFlag;\n        if (count) {\n            Member* m = DoAllocMembers(count, allocator);\n            SetMembersPointer(m);\n            std::memcpy(static_cast<void*>(m), members, count * sizeof(Member));\n#if RAPIDJSON_USE_MEMBERSMAP\n            Map* &map = GetMap(m);\n            MapIterator* mit = GetMapIterators(map);\n            for (SizeType i = 0; i < count; i++) {\n                new (&mit[i]) MapIterator(map->insert(MapPair(m[i].name.data_, i)));\n            }\n#endif\n        }\n        else\n            SetMembersPointer(0);\n        data_.o.size = data_.o.capacity = count;\n    }\n\n    //! Initialize this value as constant string, without calling destructor.\n    void SetStringRaw(StringRefType s) RAPIDJSON_NOEXCEPT {\n        data_.f.flags = kConstStringFlag;\n        SetStringPointer(s);\n        data_.s.length = s.length;\n    }\n\n    //! 
Initialize this value as copy string with initial data, without calling destructor.\n    void SetStringRaw(StringRefType s, Allocator& allocator) {\n        Ch* str = 0;\n        if (ShortString::Usable(s.length)) {\n            data_.f.flags = kShortStringFlag;\n            data_.ss.SetLength(s.length);\n            str = data_.ss.str;\n            std::memmove(str, s, s.length * sizeof(Ch));\n        } else {\n            data_.f.flags = kCopyStringFlag;\n            data_.s.length = s.length;\n            str = static_cast<Ch *>(allocator.Malloc((s.length + 1) * sizeof(Ch)));\n            SetStringPointer(str);\n            std::memcpy(str, s, s.length * sizeof(Ch));\n        }\n        str[s.length] = '\\0';\n    }\n\n    //! Assignment without calling destructor\n    void RawAssign(GenericValue& rhs) RAPIDJSON_NOEXCEPT {\n        data_ = rhs.data_;\n        // data_.f.flags = rhs.data_.f.flags;\n        rhs.data_.f.flags = kNullFlag;\n    }\n\n    template <typename SourceAllocator>\n    bool StringEqual(const GenericValue<Encoding, SourceAllocator>& rhs) const {\n        RAPIDJSON_ASSERT(IsString());\n        RAPIDJSON_ASSERT(rhs.IsString());\n\n        const SizeType len1 = GetStringLength();\n        const SizeType len2 = rhs.GetStringLength();\n        if(len1 != len2) { return false; }\n\n        const Ch* const str1 = GetString();\n        const Ch* const str2 = rhs.GetString();\n        if(str1 == str2) { return true; } // fast path for constant string\n\n        return (std::memcmp(str1, str2, sizeof(Ch) * len1) == 0);\n    }\n\n    Data data_;\n};\n\n//! GenericValue with UTF8 encoding\ntypedef GenericValue<UTF8<> > Value;\n\n///////////////////////////////////////////////////////////////////////////////\n// GenericDocument \n\n//! 
A document for parsing JSON text as DOM.\n/*!\n    \\note implements Handler concept\n    \\tparam Encoding Encoding for both parsing and string storage.\n    \\tparam Allocator Allocator for allocating memory for the DOM\n    \\tparam StackAllocator Allocator for allocating memory for stack during parsing.\n    \\warning Although GenericDocument inherits from GenericValue, the API does \\b not provide any virtual functions, especially no virtual destructor.  To avoid memory leaks, do not \\c delete a GenericDocument object via a pointer to a GenericValue.\n*/\ntemplate <typename Encoding, typename Allocator = RAPIDJSON_DEFAULT_ALLOCATOR, typename StackAllocator = RAPIDJSON_DEFAULT_STACK_ALLOCATOR >\nclass GenericDocument : public GenericValue<Encoding, Allocator> {\npublic:\n    typedef typename Encoding::Ch Ch;                       //!< Character type derived from Encoding.\n    typedef GenericValue<Encoding, Allocator> ValueType;    //!< Value type of the document.\n    typedef Allocator AllocatorType;                        //!< Allocator type from template parameter.\n    typedef StackAllocator StackAllocatorType;              //!< StackAllocator type from template parameter.\n\n    //! Constructor\n    /*! 
Creates an empty document of the specified type.\n        \\param type             Mandatory type of object to create.\n        \\param allocator        Optional allocator for allocating memory.\n        \\param stackCapacity    Optional initial capacity of stack in bytes.\n        \\param stackAllocator   Optional allocator for allocating memory for stack.\n    */\n    explicit GenericDocument(Type type, Allocator* allocator = 0, size_t stackCapacity = kDefaultStackCapacity, StackAllocator* stackAllocator = 0) :\n        GenericValue<Encoding, Allocator>(type),  allocator_(allocator), ownAllocator_(0), stack_(stackAllocator, stackCapacity), parseResult_()\n    {\n        if (!allocator_)\n            ownAllocator_ = allocator_ = RAPIDJSON_NEW(Allocator)();\n    }\n\n    //! Constructor\n    /*! Creates an empty document whose type is Null.\n        \\param allocator        Optional allocator for allocating memory.\n        \\param stackCapacity    Optional initial capacity of stack in bytes.\n        \\param stackAllocator   Optional allocator for allocating memory for stack.\n    */\n    GenericDocument(Allocator* allocator = 0, size_t stackCapacity = kDefaultStackCapacity, StackAllocator* stackAllocator = 0) : \n        allocator_(allocator), ownAllocator_(0), stack_(stackAllocator, stackCapacity), parseResult_()\n    {\n        if (!allocator_)\n            ownAllocator_ = allocator_ = RAPIDJSON_NEW(Allocator)();\n    }\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    //! 
Move constructor in C++11\n    GenericDocument(GenericDocument&& rhs) RAPIDJSON_NOEXCEPT\n        : ValueType(std::forward<ValueType>(rhs)), // explicit cast to avoid prohibited move from Document\n          allocator_(rhs.allocator_),\n          ownAllocator_(rhs.ownAllocator_),\n          stack_(std::move(rhs.stack_)),\n          parseResult_(rhs.parseResult_)\n    {\n        rhs.allocator_ = 0;\n        rhs.ownAllocator_ = 0;\n        rhs.parseResult_ = ParseResult();\n    }\n#endif\n\n    ~GenericDocument() {\n        // Clear the ::ValueType before ownAllocator is destroyed, ~ValueType()\n        // runs last and may access its elements or members which would be freed\n        // with an allocator like MemoryPoolAllocator (CrtAllocator does not\n        // free its data when destroyed, but MemoryPoolAllocator does).\n        if (ownAllocator_) {\n            ValueType::SetNull();\n        }\n        Destroy();\n    }\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    //! Move assignment in C++11\n    GenericDocument& operator=(GenericDocument&& rhs) RAPIDJSON_NOEXCEPT\n    {\n        // The cast to ValueType is necessary here, because otherwise it would\n        // attempt to call GenericValue's templated assignment operator.\n        ValueType::operator=(std::forward<ValueType>(rhs));\n\n        // Calling the destructor here would prematurely call stack_'s destructor\n        Destroy();\n\n        allocator_ = rhs.allocator_;\n        ownAllocator_ = rhs.ownAllocator_;\n        stack_ = std::move(rhs.stack_);\n        parseResult_ = rhs.parseResult_;\n\n        rhs.allocator_ = 0;\n        rhs.ownAllocator_ = 0;\n        rhs.parseResult_ = ParseResult();\n\n        return *this;\n    }\n#endif\n\n    //! 
Exchange the contents of this document with those of another.\n    /*!\n        \\param rhs Another document.\n        \\note Constant complexity.\n        \\see GenericValue::Swap\n    */\n    GenericDocument& Swap(GenericDocument& rhs) RAPIDJSON_NOEXCEPT {\n        ValueType::Swap(rhs);\n        stack_.Swap(rhs.stack_);\n        internal::Swap(allocator_, rhs.allocator_);\n        internal::Swap(ownAllocator_, rhs.ownAllocator_);\n        internal::Swap(parseResult_, rhs.parseResult_);\n        return *this;\n    }\n\n    // Allow Swap with ValueType.\n    // Refer to Effective C++ 3rd Edition/Item 33: Avoid hiding inherited names.\n    using ValueType::Swap;\n\n    //! free-standing swap function helper\n    /*!\n        Helper function to enable support for common swap implementation pattern based on \\c std::swap:\n        \\code\n        void swap(MyClass& a, MyClass& b) {\n            using std::swap;\n            swap(a.doc, b.doc);\n            // ...\n        }\n        \\endcode\n        \\see Swap()\n     */\n    friend inline void swap(GenericDocument& a, GenericDocument& b) RAPIDJSON_NOEXCEPT { a.Swap(b); }\n\n    //! Populate this document by a generator which produces SAX events.\n    /*! \\tparam Generator A functor with <tt>bool f(Handler)</tt> prototype.\n        \\param g Generator functor which sends SAX events to the parameter.\n        \\return The document itself for fluent API.\n    */\n    template <typename Generator>\n    GenericDocument& Populate(Generator& g) {\n        ClearStackOnExit scope(*this);\n        if (g(*this)) {\n            RAPIDJSON_ASSERT(stack_.GetSize() == sizeof(ValueType)); // Got one and only one root object\n            ValueType::operator=(*stack_.template Pop<ValueType>(1));// Move value from stack to document\n        }\n        return *this;\n    }\n\n    //!@name Parse from stream\n    //!@{\n\n    //! Parse JSON text from an input stream (with Encoding conversion)\n    /*! 
\\tparam parseFlags Combination of \\ref ParseFlag.\n        \\tparam SourceEncoding Encoding of input stream\n        \\tparam InputStream Type of input stream, implementing Stream concept\n        \\param is Input stream to be parsed.\n        \\return The document itself for fluent API.\n    */\n    template <unsigned parseFlags, typename SourceEncoding, typename InputStream>\n    GenericDocument& ParseStream(InputStream& is) {\n        GenericReader<SourceEncoding, Encoding, StackAllocator> reader(\n            stack_.HasAllocator() ? &stack_.GetAllocator() : 0);\n        ClearStackOnExit scope(*this);\n        parseResult_ = reader.template Parse<parseFlags>(is, *this);\n        if (parseResult_) {\n            RAPIDJSON_ASSERT(stack_.GetSize() == sizeof(ValueType)); // Got one and only one root object\n            ValueType::operator=(*stack_.template Pop<ValueType>(1));// Move value from stack to document\n        }\n        return *this;\n    }\n\n    //! Parse JSON text from an input stream\n    /*! \\tparam parseFlags Combination of \\ref ParseFlag.\n        \\tparam InputStream Type of input stream, implementing Stream concept\n        \\param is Input stream to be parsed.\n        \\return The document itself for fluent API.\n    */\n    template <unsigned parseFlags, typename InputStream>\n    GenericDocument& ParseStream(InputStream& is) {\n        return ParseStream<parseFlags, Encoding, InputStream>(is);\n    }\n\n    //! Parse JSON text from an input stream (with \\ref kParseDefaultFlags)\n    /*! \\tparam InputStream Type of input stream, implementing Stream concept\n        \\param is Input stream to be parsed.\n        \\return The document itself for fluent API.\n    */\n    template <typename InputStream>\n    GenericDocument& ParseStream(InputStream& is) {\n        return ParseStream<kParseDefaultFlags, Encoding, InputStream>(is);\n    }\n    //!@}\n\n    //!@name Parse in-place from mutable string\n    //!@{\n\n    //! 
Parse JSON text from a mutable string\n    /*! \\tparam parseFlags Combination of \\ref ParseFlag.\n        \\param str Mutable zero-terminated string to be parsed.\n        \\return The document itself for fluent API.\n    */\n    template <unsigned parseFlags>\n    GenericDocument& ParseInsitu(Ch* str) {\n        GenericInsituStringStream<Encoding> s(str);\n        return ParseStream<parseFlags | kParseInsituFlag>(s);\n    }\n\n    //! Parse JSON text from a mutable string (with \\ref kParseDefaultFlags)\n    /*! \\param str Mutable zero-terminated string to be parsed.\n        \\return The document itself for fluent API.\n    */\n    GenericDocument& ParseInsitu(Ch* str) {\n        return ParseInsitu<kParseDefaultFlags>(str);\n    }\n    //!@}\n\n    //!@name Parse from read-only string\n    //!@{\n\n    //! Parse JSON text from a read-only string (with Encoding conversion)\n    /*! \\tparam parseFlags Combination of \\ref ParseFlag (must not contain \\ref kParseInsituFlag).\n        \\tparam SourceEncoding Transcoding from input Encoding\n        \\param str Read-only zero-terminated string to be parsed.\n    */\n    template <unsigned parseFlags, typename SourceEncoding>\n    GenericDocument& Parse(const typename SourceEncoding::Ch* str) {\n        RAPIDJSON_ASSERT(!(parseFlags & kParseInsituFlag));\n        GenericStringStream<SourceEncoding> s(str);\n        return ParseStream<parseFlags, SourceEncoding>(s);\n    }\n\n    //! Parse JSON text from a read-only string\n    /*! \\tparam parseFlags Combination of \\ref ParseFlag (must not contain \\ref kParseInsituFlag).\n        \\param str Read-only zero-terminated string to be parsed.\n    */\n    template <unsigned parseFlags>\n    GenericDocument& Parse(const Ch* str) {\n        return Parse<parseFlags, Encoding>(str);\n    }\n\n    //! Parse JSON text from a read-only string (with \\ref kParseDefaultFlags)\n    /*! 
\\param str Read-only zero-terminated string to be parsed.\n    */\n    GenericDocument& Parse(const Ch* str) {\n        return Parse<kParseDefaultFlags>(str);\n    }\n\n    template <unsigned parseFlags, typename SourceEncoding>\n    GenericDocument& Parse(const typename SourceEncoding::Ch* str, size_t length) {\n        RAPIDJSON_ASSERT(!(parseFlags & kParseInsituFlag));\n        MemoryStream ms(reinterpret_cast<const char*>(str), length * sizeof(typename SourceEncoding::Ch));\n        EncodedInputStream<SourceEncoding, MemoryStream> is(ms);\n        ParseStream<parseFlags, SourceEncoding>(is);\n        return *this;\n    }\n\n    template <unsigned parseFlags>\n    GenericDocument& Parse(const Ch* str, size_t length) {\n        return Parse<parseFlags, Encoding>(str, length);\n    }\n    \n    GenericDocument& Parse(const Ch* str, size_t length) {\n        return Parse<kParseDefaultFlags>(str, length);\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    template <unsigned parseFlags, typename SourceEncoding>\n    GenericDocument& Parse(const std::basic_string<typename SourceEncoding::Ch>& str) {\n        // c_str() is constant complexity according to standard. Should be faster than Parse(const char*, size_t)\n        return Parse<parseFlags, SourceEncoding>(str.c_str());\n    }\n\n    template <unsigned parseFlags>\n    GenericDocument& Parse(const std::basic_string<Ch>& str) {\n        return Parse<parseFlags, Encoding>(str.c_str());\n    }\n\n    GenericDocument& Parse(const std::basic_string<Ch>& str) {\n        return Parse<kParseDefaultFlags>(str);\n    }\n#endif // RAPIDJSON_HAS_STDSTRING    \n\n    //!@}\n\n    //!@name Handling parse errors\n    //!@{\n\n    //! Whether a parse error has occurred in the last parsing.\n    bool HasParseError() const { return parseResult_.IsError(); }\n\n    //! Get the \\ref ParseErrorCode of last parsing.\n    ParseErrorCode GetParseError() const { return parseResult_.Code(); }\n\n    //! 
Get the position of last parsing error in input, 0 otherwise.\n    size_t GetErrorOffset() const { return parseResult_.Offset(); }\n\n    //! Implicit conversion to get the last parse result\n#ifndef __clang // -Wdocumentation\n    /*! \\return \\ref ParseResult of the last parse operation\n\n        \\code\n          Document doc;\n          ParseResult ok = doc.Parse(json);\n          if (!ok)\n            printf( \"JSON parse error: %s (%u)\\n\", GetParseError_En(ok.Code()), ok.Offset());\n        \\endcode\n     */\n#endif\n    operator ParseResult() const { return parseResult_; }\n    //!@}\n\n    //! Get the allocator of this document.\n    Allocator& GetAllocator() {\n        RAPIDJSON_ASSERT(allocator_);\n        return *allocator_;\n    }\n\n    //! Get the capacity of stack in bytes.\n    size_t GetStackCapacity() const { return stack_.GetCapacity(); }\n\nprivate:\n    // clear stack on any exit from ParseStream, e.g. due to exception\n    struct ClearStackOnExit {\n        explicit ClearStackOnExit(GenericDocument& d) : d_(d) {}\n        ~ClearStackOnExit() { d_.ClearStack(); }\n    private:\n        ClearStackOnExit(const ClearStackOnExit&);\n        ClearStackOnExit& operator=(const ClearStackOnExit&);\n        GenericDocument& d_;\n    };\n\n    // callers of the following private Handler functions\n    // template <typename,typename,typename> friend class GenericReader; // for parsing\n    template <typename, typename> friend class GenericValue; // for deep copying\n\npublic:\n    // Implementation of Handler\n    bool Null() { new (stack_.template Push<ValueType>()) ValueType(); return true; }\n    bool Bool(bool b) { new (stack_.template Push<ValueType>()) ValueType(b); return true; }\n    bool Int(int i) { new (stack_.template Push<ValueType>()) ValueType(i); return true; }\n    bool Uint(unsigned i) { new (stack_.template Push<ValueType>()) ValueType(i); return true; }\n    bool Int64(int64_t i) { new (stack_.template Push<ValueType>()) 
ValueType(i); return true; }\n    bool Uint64(uint64_t i) { new (stack_.template Push<ValueType>()) ValueType(i); return true; }\n    bool Double(double d) { new (stack_.template Push<ValueType>()) ValueType(d); return true; }\n\n    bool RawNumber(const Ch* str, SizeType length, bool copy) { \n        if (copy) \n            new (stack_.template Push<ValueType>()) ValueType(str, length, GetAllocator());\n        else\n            new (stack_.template Push<ValueType>()) ValueType(str, length);\n        return true;\n    }\n\n    bool String(const Ch* str, SizeType length, bool copy) { \n        if (copy) \n            new (stack_.template Push<ValueType>()) ValueType(str, length, GetAllocator());\n        else\n            new (stack_.template Push<ValueType>()) ValueType(str, length);\n        return true;\n    }\n\n    bool StartObject() { new (stack_.template Push<ValueType>()) ValueType(kObjectType); return true; }\n    \n    bool Key(const Ch* str, SizeType length, bool copy) { return String(str, length, copy); }\n\n    bool EndObject(SizeType memberCount) {\n        typename ValueType::Member* members = stack_.template Pop<typename ValueType::Member>(memberCount);\n        stack_.template Top<ValueType>()->SetObjectRaw(members, memberCount, GetAllocator());\n        return true;\n    }\n\n    bool StartArray() { new (stack_.template Push<ValueType>()) ValueType(kArrayType); return true; }\n    \n    bool EndArray(SizeType elementCount) {\n        ValueType* elements = stack_.template Pop<ValueType>(elementCount);\n        stack_.template Top<ValueType>()->SetArrayRaw(elements, elementCount, GetAllocator());\n        return true;\n    }\n\nprivate:\n    //! Prohibit copying\n    GenericDocument(const GenericDocument&);\n    //! 
Prohibit assignment\n    GenericDocument& operator=(const GenericDocument&);\n\n    void ClearStack() {\n        if (Allocator::kNeedFree)\n            while (stack_.GetSize() > 0)    // Here assumes all elements in stack array are GenericValue (Member is actually 2 GenericValue objects)\n                (stack_.template Pop<ValueType>(1))->~ValueType();\n        else\n            stack_.Clear();\n        stack_.ShrinkToFit();\n    }\n\n    void Destroy() {\n        RAPIDJSON_DELETE(ownAllocator_);\n    }\n\n    static const size_t kDefaultStackCapacity = 1024;\n    Allocator* allocator_;\n    Allocator* ownAllocator_;\n    internal::Stack<StackAllocator> stack_;\n    ParseResult parseResult_;\n};\n\n//! GenericDocument with UTF8 encoding\ntypedef GenericDocument<UTF8<> > Document;\n\n\n//! Helper class for accessing Value of array type.\n/*!\n    Instance of this helper class is obtained by \\c GenericValue::GetArray().\n    In addition to all APIs for array type, it provides range-based for loop if \\c RAPIDJSON_HAS_CXX11_RANGE_FOR=1.\n*/\ntemplate <bool Const, typename ValueT>\nclass GenericArray {\npublic:\n    typedef GenericArray<true, ValueT> ConstArray;\n    typedef GenericArray<false, ValueT> Array;\n    typedef ValueT PlainType;\n    typedef typename internal::MaybeAddConst<Const,PlainType>::Type ValueType;\n    typedef ValueType* ValueIterator;  // This may be const or non-const iterator\n    typedef const ValueT* ConstValueIterator;\n    typedef typename ValueType::AllocatorType AllocatorType;\n    typedef typename ValueType::StringRefType StringRefType;\n\n    template <typename, typename>\n    friend class GenericValue;\n\n    GenericArray(const GenericArray& rhs) : value_(rhs.value_) {}\n    GenericArray& operator=(const GenericArray& rhs) { value_ = rhs.value_; return *this; }\n    ~GenericArray() {}\n\n    operator ValueType&() const { return value_; }\n    SizeType Size() const { return value_.Size(); }\n    SizeType Capacity() const { return 
value_.Capacity(); }\n    bool Empty() const { return value_.Empty(); }\n    void Clear() const { value_.Clear(); }\n    ValueType& operator[](SizeType index) const {  return value_[index]; }\n    ValueIterator Begin() const { return value_.Begin(); }\n    ValueIterator End() const { return value_.End(); }\n    GenericArray Reserve(SizeType newCapacity, AllocatorType &allocator) const { value_.Reserve(newCapacity, allocator); return *this; }\n    GenericArray PushBack(ValueType& value, AllocatorType& allocator) const { value_.PushBack(value, allocator); return *this; }\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    GenericArray PushBack(ValueType&& value, AllocatorType& allocator) const { value_.PushBack(value, allocator); return *this; }\n#endif // RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    GenericArray PushBack(StringRefType value, AllocatorType& allocator) const { value_.PushBack(value, allocator); return *this; }\n    template <typename T> RAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T>, internal::IsGenericValue<T> >), (const GenericArray&)) PushBack(T value, AllocatorType& allocator) const { value_.PushBack(value, allocator); return *this; }\n    GenericArray PopBack() const { value_.PopBack(); return *this; }\n    ValueIterator Erase(ConstValueIterator pos) const { return value_.Erase(pos); }\n    ValueIterator Erase(ConstValueIterator first, ConstValueIterator last) const { return value_.Erase(first, last); }\n\n#if RAPIDJSON_HAS_CXX11_RANGE_FOR\n    ValueIterator begin() const { return value_.Begin(); }\n    ValueIterator end() const { return value_.End(); }\n#endif\n\nprivate:\n    GenericArray();\n    GenericArray(ValueType& value) : value_(value) {}\n    ValueType& value_;\n};\n\n//! 
Helper class for accessing Value of object type.\n/*!\n    Instance of this helper class is obtained by \\c GenericValue::GetObject().\n    In addition to all APIs for array type, it provides range-based for loop if \\c RAPIDJSON_HAS_CXX11_RANGE_FOR=1.\n*/\ntemplate <bool Const, typename ValueT>\nclass GenericObject {\npublic:\n    typedef GenericObject<true, ValueT> ConstObject;\n    typedef GenericObject<false, ValueT> Object;\n    typedef ValueT PlainType;\n    typedef typename internal::MaybeAddConst<Const,PlainType>::Type ValueType;\n    typedef GenericMemberIterator<Const, typename ValueT::EncodingType, typename ValueT::AllocatorType> MemberIterator;  // This may be const or non-const iterator\n    typedef GenericMemberIterator<true, typename ValueT::EncodingType, typename ValueT::AllocatorType> ConstMemberIterator;\n    typedef typename ValueType::AllocatorType AllocatorType;\n    typedef typename ValueType::StringRefType StringRefType;\n    typedef typename ValueType::EncodingType EncodingType;\n    typedef typename ValueType::Ch Ch;\n\n    template <typename, typename>\n    friend class GenericValue;\n\n    GenericObject(const GenericObject& rhs) : value_(rhs.value_) {}\n    GenericObject& operator=(const GenericObject& rhs) { value_ = rhs.value_; return *this; }\n    ~GenericObject() {}\n\n    operator ValueType&() const { return value_; }\n    SizeType MemberCount() const { return value_.MemberCount(); }\n    SizeType MemberCapacity() const { return value_.MemberCapacity(); }\n    bool ObjectEmpty() const { return value_.ObjectEmpty(); }\n    template <typename T> ValueType& operator[](T* name) const { return value_[name]; }\n    template <typename SourceAllocator> ValueType& operator[](const GenericValue<EncodingType, SourceAllocator>& name) const { return value_[name]; }\n#if RAPIDJSON_HAS_STDSTRING\n    ValueType& operator[](const std::basic_string<Ch>& name) const { return value_[name]; }\n#endif\n    MemberIterator MemberBegin() const { return 
value_.MemberBegin(); }\n    MemberIterator MemberEnd() const { return value_.MemberEnd(); }\n    GenericObject MemberReserve(SizeType newCapacity, AllocatorType &allocator) const { value_.MemberReserve(newCapacity, allocator); return *this; }\n    bool HasMember(const Ch* name) const { return value_.HasMember(name); }\n#if RAPIDJSON_HAS_STDSTRING\n    bool HasMember(const std::basic_string<Ch>& name) const { return value_.HasMember(name); }\n#endif\n    template <typename SourceAllocator> bool HasMember(const GenericValue<EncodingType, SourceAllocator>& name) const { return value_.HasMember(name); }\n    MemberIterator FindMember(const Ch* name) const { return value_.FindMember(name); }\n    template <typename SourceAllocator> MemberIterator FindMember(const GenericValue<EncodingType, SourceAllocator>& name) const { return value_.FindMember(name); }\n#if RAPIDJSON_HAS_STDSTRING\n    MemberIterator FindMember(const std::basic_string<Ch>& name) const { return value_.FindMember(name); }\n#endif\n    GenericObject AddMember(ValueType& name, ValueType& value, AllocatorType& allocator) const { value_.AddMember(name, value, allocator); return *this; }\n    GenericObject AddMember(ValueType& name, StringRefType value, AllocatorType& allocator) const { value_.AddMember(name, value, allocator); return *this; }\n#if RAPIDJSON_HAS_STDSTRING\n    GenericObject AddMember(ValueType& name, std::basic_string<Ch>& value, AllocatorType& allocator) const { value_.AddMember(name, value, allocator); return *this; }\n#endif\n    template <typename T> RAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T>, internal::IsGenericValue<T> >), (ValueType&)) AddMember(ValueType& name, T value, AllocatorType& allocator) const { value_.AddMember(name, value, allocator); return *this; }\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    GenericObject AddMember(ValueType&& name, ValueType&& value, AllocatorType& allocator) const { value_.AddMember(name, value, allocator); return *this; }\n    
GenericObject AddMember(ValueType&& name, ValueType& value, AllocatorType& allocator) const { value_.AddMember(name, value, allocator); return *this; }\n    GenericObject AddMember(ValueType& name, ValueType&& value, AllocatorType& allocator) const { value_.AddMember(name, value, allocator); return *this; }\n    GenericObject AddMember(StringRefType name, ValueType&& value, AllocatorType& allocator) const { value_.AddMember(name, value, allocator); return *this; }\n#endif // RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    GenericObject AddMember(StringRefType name, ValueType& value, AllocatorType& allocator) const { value_.AddMember(name, value, allocator); return *this; }\n    GenericObject AddMember(StringRefType name, StringRefType value, AllocatorType& allocator) const { value_.AddMember(name, value, allocator); return *this; }\n    template <typename T> RAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T>, internal::IsGenericValue<T> >), (GenericObject)) AddMember(StringRefType name, T value, AllocatorType& allocator) const { value_.AddMember(name, value, allocator); return *this; }\n    void RemoveAllMembers() { value_.RemoveAllMembers(); }\n    bool RemoveMember(const Ch* name) const { return value_.RemoveMember(name); }\n#if RAPIDJSON_HAS_STDSTRING\n    bool RemoveMember(const std::basic_string<Ch>& name) const { return value_.RemoveMember(name); }\n#endif\n    template <typename SourceAllocator> bool RemoveMember(const GenericValue<EncodingType, SourceAllocator>& name) const { return value_.RemoveMember(name); }\n    MemberIterator RemoveMember(MemberIterator m) const { return value_.RemoveMember(m); }\n    MemberIterator EraseMember(ConstMemberIterator pos) const { return value_.EraseMember(pos); }\n    MemberIterator EraseMember(ConstMemberIterator first, ConstMemberIterator last) const { return value_.EraseMember(first, last); }\n    bool EraseMember(const Ch* name) const { return value_.EraseMember(name); }\n#if RAPIDJSON_HAS_STDSTRING\n    bool 
EraseMember(const std::basic_string<Ch>& name) const { return EraseMember(ValueType(StringRef(name))); }\n#endif\n    template <typename SourceAllocator> bool EraseMember(const GenericValue<EncodingType, SourceAllocator>& name) const { return value_.EraseMember(name); }\n\n#if RAPIDJSON_HAS_CXX11_RANGE_FOR\n    MemberIterator begin() const { return value_.MemberBegin(); }\n    MemberIterator end() const { return value_.MemberEnd(); }\n#endif\n\nprivate:\n    GenericObject();\n    GenericObject(ValueType& value) : value_(value) {}\n    ValueType& value_;\n};\n\nRAPIDJSON_NAMESPACE_END\nRAPIDJSON_DIAG_POP\n\n#ifdef RAPIDJSON_WINDOWS_GETOBJECT_WORKAROUND_APPLIED\n#pragma pop_macro(\"GetObject\")\n#undef RAPIDJSON_WINDOWS_GETOBJECT_WORKAROUND_APPLIED\n#endif\n\n#endif // RAPIDJSON_DOCUMENT_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/encodedstream.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_ENCODEDSTREAM_H_\n#define RAPIDJSON_ENCODEDSTREAM_H_\n\n#include \"stream.h\"\n#include \"memorystream.h\"\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++)\n#endif\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(padded)\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n//! Input byte stream wrapper with a statically bound encoding.\n/*!\n    \\tparam Encoding The interpretation of encoding of the stream. Either UTF8, UTF16LE, UTF16BE, UTF32LE, UTF32BE.\n    \\tparam InputByteStream Type of input byte stream. 
For example, FileReadStream.\n*/\ntemplate <typename Encoding, typename InputByteStream>\nclass EncodedInputStream {\n    RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\npublic:\n    typedef typename Encoding::Ch Ch;\n\n    EncodedInputStream(InputByteStream& is) : is_(is) { \n        current_ = Encoding::TakeBOM(is_);\n    }\n\n    Ch Peek() const { return current_; }\n    Ch Take() { Ch c = current_; current_ = Encoding::Take(is_); return c; }\n    size_t Tell() const { return is_.Tell(); }\n\n    // Not implemented\n    void Put(Ch) { RAPIDJSON_ASSERT(false); }\n    void Flush() { RAPIDJSON_ASSERT(false); } \n    Ch* PutBegin() { RAPIDJSON_ASSERT(false); return 0; }\n    size_t PutEnd(Ch*) { RAPIDJSON_ASSERT(false); return 0; }\n\nprivate:\n    EncodedInputStream(const EncodedInputStream&);\n    EncodedInputStream& operator=(const EncodedInputStream&);\n\n    InputByteStream& is_;\n    Ch current_;\n};\n\n//! Specialized for UTF8 MemoryStream.\ntemplate <>\nclass EncodedInputStream<UTF8<>, MemoryStream> {\npublic:\n    typedef UTF8<>::Ch Ch;\n\n    EncodedInputStream(MemoryStream& is) : is_(is) {\n        if (static_cast<unsigned char>(is_.Peek()) == 0xEFu) is_.Take();\n        if (static_cast<unsigned char>(is_.Peek()) == 0xBBu) is_.Take();\n        if (static_cast<unsigned char>(is_.Peek()) == 0xBFu) is_.Take();\n    }\n    Ch Peek() const { return is_.Peek(); }\n    Ch Take() { return is_.Take(); }\n    size_t Tell() const { return is_.Tell(); }\n\n    // Not implemented\n    void Put(Ch) {}\n    void Flush() {} \n    Ch* PutBegin() { return 0; }\n    size_t PutEnd(Ch*) { return 0; }\n\n    MemoryStream& is_;\n\nprivate:\n    EncodedInputStream(const EncodedInputStream&);\n    EncodedInputStream& operator=(const EncodedInputStream&);\n};\n\n//! Output byte stream wrapper with statically bound encoding.\n/*!\n    \\tparam Encoding The interpretation of encoding of the stream. 
Either UTF8, UTF16LE, UTF16BE, UTF32LE, UTF32BE.\n    \\tparam OutputByteStream Type of input byte stream. For example, FileWriteStream.\n*/\ntemplate <typename Encoding, typename OutputByteStream>\nclass EncodedOutputStream {\n    RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\npublic:\n    typedef typename Encoding::Ch Ch;\n\n    EncodedOutputStream(OutputByteStream& os, bool putBOM = true) : os_(os) { \n        if (putBOM)\n            Encoding::PutBOM(os_);\n    }\n\n    void Put(Ch c) { Encoding::Put(os_, c);  }\n    void Flush() { os_.Flush(); }\n\n    // Not implemented\n    Ch Peek() const { RAPIDJSON_ASSERT(false); return 0;}\n    Ch Take() { RAPIDJSON_ASSERT(false); return 0;}\n    size_t Tell() const { RAPIDJSON_ASSERT(false);  return 0; }\n    Ch* PutBegin() { RAPIDJSON_ASSERT(false); return 0; }\n    size_t PutEnd(Ch*) { RAPIDJSON_ASSERT(false); return 0; }\n\nprivate:\n    EncodedOutputStream(const EncodedOutputStream&);\n    EncodedOutputStream& operator=(const EncodedOutputStream&);\n\n    OutputByteStream& os_;\n};\n\n#define RAPIDJSON_ENCODINGS_FUNC(x) UTF8<Ch>::x, UTF16LE<Ch>::x, UTF16BE<Ch>::x, UTF32LE<Ch>::x, UTF32BE<Ch>::x\n\n//! Input stream wrapper with dynamically bound encoding and automatic encoding detection.\n/*!\n    \\tparam CharType Type of character for reading.\n    \\tparam InputByteStream type of input byte stream to be wrapped.\n*/\ntemplate <typename CharType, typename InputByteStream>\nclass AutoUTFInputStream {\n    RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\npublic:\n    typedef CharType Ch;\n\n    //! 
Constructor.\n    /*!\n        \\param is input stream to be wrapped.\n        \\param type UTF encoding type if it is not detected from the stream.\n    */\n    AutoUTFInputStream(InputByteStream& is, UTFType type = kUTF8) : is_(&is), type_(type), hasBOM_(false) {\n        RAPIDJSON_ASSERT(type >= kUTF8 && type <= kUTF32BE);        \n        DetectType();\n        static const TakeFunc f[] = { RAPIDJSON_ENCODINGS_FUNC(Take) };\n        takeFunc_ = f[type_];\n        current_ = takeFunc_(*is_);\n    }\n\n    UTFType GetType() const { return type_; }\n    bool HasBOM() const { return hasBOM_; }\n\n    Ch Peek() const { return current_; }\n    Ch Take() { Ch c = current_; current_ = takeFunc_(*is_); return c; }\n    size_t Tell() const { return is_->Tell(); }\n\n    // Not implemented\n    void Put(Ch) { RAPIDJSON_ASSERT(false); }\n    void Flush() { RAPIDJSON_ASSERT(false); } \n    Ch* PutBegin() { RAPIDJSON_ASSERT(false); return 0; }\n    size_t PutEnd(Ch*) { RAPIDJSON_ASSERT(false); return 0; }\n\nprivate:\n    AutoUTFInputStream(const AutoUTFInputStream&);\n    AutoUTFInputStream& operator=(const AutoUTFInputStream&);\n\n    // Detect encoding type with BOM or RFC 4627\n    void DetectType() {\n        // BOM (Byte Order Mark):\n        // 00 00 FE FF  UTF-32BE\n        // FF FE 00 00  UTF-32LE\n        // FE FF        UTF-16BE\n        // FF FE        UTF-16LE\n        // EF BB BF     UTF-8\n\n        const unsigned char* c = reinterpret_cast<const unsigned char *>(is_->Peek4());\n        if (!c)\n            return;\n\n        unsigned bom = static_cast<unsigned>(c[0] | (c[1] << 8) | (c[2] << 16) | (c[3] << 24));\n        hasBOM_ = false;\n        if (bom == 0xFFFE0000)                  { type_ = kUTF32BE; hasBOM_ = true; is_->Take(); is_->Take(); is_->Take(); is_->Take(); }\n        else if (bom == 0x0000FEFF)             { type_ = kUTF32LE; hasBOM_ = true; is_->Take(); is_->Take(); is_->Take(); is_->Take(); }\n        else if ((bom & 0xFFFF) == 0xFFFE)      { 
type_ = kUTF16BE; hasBOM_ = true; is_->Take(); is_->Take();                           }\n        else if ((bom & 0xFFFF) == 0xFEFF)      { type_ = kUTF16LE; hasBOM_ = true; is_->Take(); is_->Take();                           }\n        else if ((bom & 0xFFFFFF) == 0xBFBBEF)  { type_ = kUTF8;    hasBOM_ = true; is_->Take(); is_->Take(); is_->Take();              }\n\n        // RFC 4627: Section 3\n        // \"Since the first two characters of a JSON text will always be ASCII\n        // characters [RFC0020], it is possible to determine whether an octet\n        // stream is UTF-8, UTF-16 (BE or LE), or UTF-32 (BE or LE) by looking\n        // at the pattern of nulls in the first four octets.\"\n        // 00 00 00 xx  UTF-32BE\n        // 00 xx 00 xx  UTF-16BE\n        // xx 00 00 00  UTF-32LE\n        // xx 00 xx 00  UTF-16LE\n        // xx xx xx xx  UTF-8\n\n        if (!hasBOM_) {\n            int pattern = (c[0] ? 1 : 0) | (c[1] ? 2 : 0) | (c[2] ? 4 : 0) | (c[3] ? 8 : 0);\n            switch (pattern) {\n            case 0x08: type_ = kUTF32BE; break;\n            case 0x0A: type_ = kUTF16BE; break;\n            case 0x01: type_ = kUTF32LE; break;\n            case 0x05: type_ = kUTF16LE; break;\n            case 0x0F: type_ = kUTF8;    break;\n            default: break; // Use type defined by user.\n            }\n        }\n\n        // Runtime check whether the size of character type is sufficient. It only perform checks with assertion.\n        if (type_ == kUTF16LE || type_ == kUTF16BE) RAPIDJSON_ASSERT(sizeof(Ch) >= 2);\n        if (type_ == kUTF32LE || type_ == kUTF32BE) RAPIDJSON_ASSERT(sizeof(Ch) >= 4);\n    }\n\n    typedef Ch (*TakeFunc)(InputByteStream& is);\n    InputByteStream* is_;\n    UTFType type_;\n    Ch current_;\n    TakeFunc takeFunc_;\n    bool hasBOM_;\n};\n\n//! 
Output stream wrapper with dynamically bound encoding and automatic encoding detection.\n/*!\n    \\tparam CharType Type of character for writing.\n    \\tparam OutputByteStream type of output byte stream to be wrapped.\n*/\ntemplate <typename CharType, typename OutputByteStream>\nclass AutoUTFOutputStream {\n    RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\npublic:\n    typedef CharType Ch;\n\n    //! Constructor.\n    /*!\n        \\param os output stream to be wrapped.\n        \\param type UTF encoding type.\n        \\param putBOM Whether to write BOM at the beginning of the stream.\n    */\n    AutoUTFOutputStream(OutputByteStream& os, UTFType type, bool putBOM) : os_(&os), type_(type) {\n        RAPIDJSON_ASSERT(type >= kUTF8 && type <= kUTF32BE);\n\n        // Runtime check whether the size of character type is sufficient. It only perform checks with assertion.\n        if (type_ == kUTF16LE || type_ == kUTF16BE) RAPIDJSON_ASSERT(sizeof(Ch) >= 2);\n        if (type_ == kUTF32LE || type_ == kUTF32BE) RAPIDJSON_ASSERT(sizeof(Ch) >= 4);\n\n        static const PutFunc f[] = { RAPIDJSON_ENCODINGS_FUNC(Put) };\n        putFunc_ = f[type_];\n\n        if (putBOM)\n            PutBOM();\n    }\n\n    UTFType GetType() const { return type_; }\n\n    void Put(Ch c) { putFunc_(*os_, c); }\n    void Flush() { os_->Flush(); } \n\n    // Not implemented\n    Ch Peek() const { RAPIDJSON_ASSERT(false); return 0;}\n    Ch Take() { RAPIDJSON_ASSERT(false); return 0;}\n    size_t Tell() const { RAPIDJSON_ASSERT(false); return 0; }\n    Ch* PutBegin() { RAPIDJSON_ASSERT(false); return 0; }\n    size_t PutEnd(Ch*) { RAPIDJSON_ASSERT(false); return 0; }\n\nprivate:\n    AutoUTFOutputStream(const AutoUTFOutputStream&);\n    AutoUTFOutputStream& operator=(const AutoUTFOutputStream&);\n\n    void PutBOM() { \n        typedef void (*PutBOMFunc)(OutputByteStream&);\n        static const PutBOMFunc f[] = { RAPIDJSON_ENCODINGS_FUNC(PutBOM) };\n        
f[type_](*os_);\n    }\n\n    typedef void (*PutFunc)(OutputByteStream&, Ch);\n\n    OutputByteStream* os_;\n    UTFType type_;\n    PutFunc putFunc_;\n};\n\n#undef RAPIDJSON_ENCODINGS_FUNC\n\nRAPIDJSON_NAMESPACE_END\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_ENCODEDSTREAM_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/encodings.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_ENCODINGS_H_\n#define RAPIDJSON_ENCODINGS_H_\n\n#include \"rapidjson.h\"\n\n#if defined(_MSC_VER) && !defined(__clang__)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(4244) // conversion from 'type1' to 'type2', possible loss of data\nRAPIDJSON_DIAG_OFF(4702)  // unreachable code\n#elif defined(__GNUC__)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++)\nRAPIDJSON_DIAG_OFF(overflow)\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n///////////////////////////////////////////////////////////////////////////////\n// Encoding\n\n/*! \\class rapidjson::Encoding\n    \\brief Concept for encoding of Unicode characters.\n\n\\code\nconcept Encoding {\n    typename Ch;    //! Type of character. A \"character\" is actually a code unit in unicode's definition.\n\n    enum { supportUnicode = 1 }; // or 0 if not supporting unicode\n\n    //! \\brief Encode a Unicode codepoint to an output stream.\n    //! \\param os Output stream.\n    //! \\param codepoint An unicode codepoint, ranging from 0x0 to 0x10FFFF inclusively.\n    template<typename OutputStream>\n    static void Encode(OutputStream& os, unsigned codepoint);\n\n    //! \\brief Decode a Unicode codepoint from an input stream.\n    //! \\param is Input stream.\n    //! 
\\param codepoint Output of the unicode codepoint.\n    //! \\return true if a valid codepoint can be decoded from the stream.\n    template <typename InputStream>\n    static bool Decode(InputStream& is, unsigned* codepoint);\n\n    //! \\brief Validate one Unicode codepoint from an encoded stream.\n    //! \\param is Input stream to obtain codepoint.\n    //! \\param os Output for copying one codepoint.\n    //! \\return true if it is valid.\n    //! \\note This function just validating and copying the codepoint without actually decode it.\n    template <typename InputStream, typename OutputStream>\n    static bool Validate(InputStream& is, OutputStream& os);\n\n    // The following functions are deal with byte streams.\n\n    //! Take a character from input byte stream, skip BOM if exist.\n    template <typename InputByteStream>\n    static CharType TakeBOM(InputByteStream& is);\n\n    //! Take a character from input byte stream.\n    template <typename InputByteStream>\n    static Ch Take(InputByteStream& is);\n\n    //! Put BOM to output byte stream.\n    template <typename OutputByteStream>\n    static void PutBOM(OutputByteStream& os);\n\n    //! Put a character to output byte stream.\n    template <typename OutputByteStream>\n    static void Put(OutputByteStream& os, Ch c);\n};\n\\endcode\n*/\n\n///////////////////////////////////////////////////////////////////////////////\n// UTF8\n\n//! UTF-8 encoding.\n/*! http://en.wikipedia.org/wiki/UTF-8\n    http://tools.ietf.org/html/rfc3629\n    \\tparam CharType Code unit for storing 8-bit UTF-8 data. 
Default is char.\n    \\note implements Encoding concept\n*/\ntemplate<typename CharType = char>\nstruct UTF8 {\n    typedef CharType Ch;\n\n    enum { supportUnicode = 1 };\n\n    template<typename OutputStream>\n    static void Encode(OutputStream& os, unsigned codepoint) {\n        if (codepoint <= 0x7F) \n            os.Put(static_cast<Ch>(codepoint & 0xFF));\n        else if (codepoint <= 0x7FF) {\n            os.Put(static_cast<Ch>(0xC0 | ((codepoint >> 6) & 0xFF)));\n            os.Put(static_cast<Ch>(0x80 | ((codepoint & 0x3F))));\n        }\n        else if (codepoint <= 0xFFFF) {\n            os.Put(static_cast<Ch>(0xE0 | ((codepoint >> 12) & 0xFF)));\n            os.Put(static_cast<Ch>(0x80 | ((codepoint >> 6) & 0x3F)));\n            os.Put(static_cast<Ch>(0x80 | (codepoint & 0x3F)));\n        }\n        else {\n            RAPIDJSON_ASSERT(codepoint <= 0x10FFFF);\n            os.Put(static_cast<Ch>(0xF0 | ((codepoint >> 18) & 0xFF)));\n            os.Put(static_cast<Ch>(0x80 | ((codepoint >> 12) & 0x3F)));\n            os.Put(static_cast<Ch>(0x80 | ((codepoint >> 6) & 0x3F)));\n            os.Put(static_cast<Ch>(0x80 | (codepoint & 0x3F)));\n        }\n    }\n\n    template<typename OutputStream>\n    static void EncodeUnsafe(OutputStream& os, unsigned codepoint) {\n        if (codepoint <= 0x7F) \n            PutUnsafe(os, static_cast<Ch>(codepoint & 0xFF));\n        else if (codepoint <= 0x7FF) {\n            PutUnsafe(os, static_cast<Ch>(0xC0 | ((codepoint >> 6) & 0xFF)));\n            PutUnsafe(os, static_cast<Ch>(0x80 | ((codepoint & 0x3F))));\n        }\n        else if (codepoint <= 0xFFFF) {\n            PutUnsafe(os, static_cast<Ch>(0xE0 | ((codepoint >> 12) & 0xFF)));\n            PutUnsafe(os, static_cast<Ch>(0x80 | ((codepoint >> 6) & 0x3F)));\n            PutUnsafe(os, static_cast<Ch>(0x80 | (codepoint & 0x3F)));\n        }\n        else {\n            RAPIDJSON_ASSERT(codepoint <= 0x10FFFF);\n            PutUnsafe(os, static_cast<Ch>(0xF0 
| ((codepoint >> 18) & 0xFF)));\n            PutUnsafe(os, static_cast<Ch>(0x80 | ((codepoint >> 12) & 0x3F)));\n            PutUnsafe(os, static_cast<Ch>(0x80 | ((codepoint >> 6) & 0x3F)));\n            PutUnsafe(os, static_cast<Ch>(0x80 | (codepoint & 0x3F)));\n        }\n    }\n\n    template <typename InputStream>\n    static bool Decode(InputStream& is, unsigned* codepoint) {\n#define RAPIDJSON_COPY() c = is.Take(); *codepoint = (*codepoint << 6) | (static_cast<unsigned char>(c) & 0x3Fu)\n#define RAPIDJSON_TRANS(mask) result &= ((GetRange(static_cast<unsigned char>(c)) & mask) != 0)\n#define RAPIDJSON_TAIL() RAPIDJSON_COPY(); RAPIDJSON_TRANS(0x70)\n        typename InputStream::Ch c = is.Take();\n        if (!(c & 0x80)) {\n            *codepoint = static_cast<unsigned char>(c);\n            return true;\n        }\n\n        unsigned char type = GetRange(static_cast<unsigned char>(c));\n        if (type >= 32) {\n            *codepoint = 0;\n        } else {\n            *codepoint = (0xFFu >> type) & static_cast<unsigned char>(c);\n        }\n        bool result = true;\n        switch (type) {\n        case 2: RAPIDJSON_TAIL(); return result;\n        case 3: RAPIDJSON_TAIL(); RAPIDJSON_TAIL(); return result;\n        case 4: RAPIDJSON_COPY(); RAPIDJSON_TRANS(0x50); RAPIDJSON_TAIL(); return result;\n        case 5: RAPIDJSON_COPY(); RAPIDJSON_TRANS(0x10); RAPIDJSON_TAIL(); RAPIDJSON_TAIL(); return result;\n        case 6: RAPIDJSON_TAIL(); RAPIDJSON_TAIL(); RAPIDJSON_TAIL(); return result;\n        case 10: RAPIDJSON_COPY(); RAPIDJSON_TRANS(0x20); RAPIDJSON_TAIL(); return result;\n        case 11: RAPIDJSON_COPY(); RAPIDJSON_TRANS(0x60); RAPIDJSON_TAIL(); RAPIDJSON_TAIL(); return result;\n        default: return false;\n        }\n#undef RAPIDJSON_COPY\n#undef RAPIDJSON_TRANS\n#undef RAPIDJSON_TAIL\n    }\n\n    template <typename InputStream, typename OutputStream>\n    static bool Validate(InputStream& is, OutputStream& os) {\n#define RAPIDJSON_COPY() 
os.Put(c = is.Take())\n#define RAPIDJSON_TRANS(mask) result &= ((GetRange(static_cast<unsigned char>(c)) & mask) != 0)\n#define RAPIDJSON_TAIL() RAPIDJSON_COPY(); RAPIDJSON_TRANS(0x70)\n        Ch c;\n        RAPIDJSON_COPY();\n        if (!(c & 0x80))\n            return true;\n\n        bool result = true;\n        switch (GetRange(static_cast<unsigned char>(c))) {\n        case 2: RAPIDJSON_TAIL(); return result;\n        case 3: RAPIDJSON_TAIL(); RAPIDJSON_TAIL(); return result;\n        case 4: RAPIDJSON_COPY(); RAPIDJSON_TRANS(0x50); RAPIDJSON_TAIL(); return result;\n        case 5: RAPIDJSON_COPY(); RAPIDJSON_TRANS(0x10); RAPIDJSON_TAIL(); RAPIDJSON_TAIL(); return result;\n        case 6: RAPIDJSON_TAIL(); RAPIDJSON_TAIL(); RAPIDJSON_TAIL(); return result;\n        case 10: RAPIDJSON_COPY(); RAPIDJSON_TRANS(0x20); RAPIDJSON_TAIL(); return result;\n        case 11: RAPIDJSON_COPY(); RAPIDJSON_TRANS(0x60); RAPIDJSON_TAIL(); RAPIDJSON_TAIL(); return result;\n        default: return false;\n        }\n#undef RAPIDJSON_COPY\n#undef RAPIDJSON_TRANS\n#undef RAPIDJSON_TAIL\n    }\n\n    static unsigned char GetRange(unsigned char c) {\n        // Referring to DFA of http://bjoern.hoehrmann.de/utf-8/decoder/dfa/\n        // With new mapping 1 -> 0x10, 7 -> 0x20, 9 -> 0x40, such that AND operation can test multiple types.\n        static const unsigned char type[] = {\n            0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,  0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,\n            0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,  0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,\n            0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,  0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,\n            0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,  0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,\n            0x10,0x10,0x10,0x10,0x10,0x10,0x10,0x10,0x10,0x10,0x10,0x10,0x10,0x10,0x10,0x10,\n            0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,\n            0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,\n         
   0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,0x20,\n            8,8,2,2,2,2,2,2,2,2,2,2,2,2,2,2,  2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,\n            10,3,3,3,3,3,3,3,3,3,3,3,3,4,3,3, 11,6,6,6,5,8,8,8,8,8,8,8,8,8,8,8,\n        };\n        return type[c];\n    }\n\n    template <typename InputByteStream>\n    static CharType TakeBOM(InputByteStream& is) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\n        typename InputByteStream::Ch c = Take(is);\n        if (static_cast<unsigned char>(c) != 0xEFu) return c;\n        c = is.Take();\n        if (static_cast<unsigned char>(c) != 0xBBu) return c;\n        c = is.Take();\n        if (static_cast<unsigned char>(c) != 0xBFu) return c;\n        c = is.Take();\n        return c;\n    }\n\n    template <typename InputByteStream>\n    static Ch Take(InputByteStream& is) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\n        return static_cast<Ch>(is.Take());\n    }\n\n    template <typename OutputByteStream>\n    static void PutBOM(OutputByteStream& os) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\n        os.Put(static_cast<typename OutputByteStream::Ch>(0xEFu));\n        os.Put(static_cast<typename OutputByteStream::Ch>(0xBBu));\n        os.Put(static_cast<typename OutputByteStream::Ch>(0xBFu));\n    }\n\n    template <typename OutputByteStream>\n    static void Put(OutputByteStream& os, Ch c) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\n        os.Put(static_cast<typename OutputByteStream::Ch>(c));\n    }\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// UTF16\n\n//! UTF-16 encoding.\n/*! http://en.wikipedia.org/wiki/UTF-16\n    http://tools.ietf.org/html/rfc2781\n    \\tparam CharType Type for storing 16-bit UTF-16 data. Default is wchar_t. 
C++11 may use char16_t instead.\n    \\note implements Encoding concept\n\n    \\note For in-memory access, no need to concern endianness. The code units and code points are represented by CPU's endianness.\n    For streaming, use UTF16LE and UTF16BE, which handle endianness.\n*/\ntemplate<typename CharType = wchar_t>\nstruct UTF16 {\n    typedef CharType Ch;\n    RAPIDJSON_STATIC_ASSERT(sizeof(Ch) >= 2);\n\n    enum { supportUnicode = 1 };\n\n    template<typename OutputStream>\n    static void Encode(OutputStream& os, unsigned codepoint) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputStream::Ch) >= 2);\n        if (codepoint <= 0xFFFF) {\n            RAPIDJSON_ASSERT(codepoint < 0xD800 || codepoint > 0xDFFF); // Code point itself cannot be surrogate pair \n            os.Put(static_cast<typename OutputStream::Ch>(codepoint));\n        }\n        else {\n            RAPIDJSON_ASSERT(codepoint <= 0x10FFFF);\n            unsigned v = codepoint - 0x10000;\n            os.Put(static_cast<typename OutputStream::Ch>((v >> 10) | 0xD800));\n            os.Put(static_cast<typename OutputStream::Ch>((v & 0x3FF) | 0xDC00));\n        }\n    }\n\n\n    template<typename OutputStream>\n    static void EncodeUnsafe(OutputStream& os, unsigned codepoint) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputStream::Ch) >= 2);\n        if (codepoint <= 0xFFFF) {\n            RAPIDJSON_ASSERT(codepoint < 0xD800 || codepoint > 0xDFFF); // Code point itself cannot be surrogate pair \n            PutUnsafe(os, static_cast<typename OutputStream::Ch>(codepoint));\n        }\n        else {\n            RAPIDJSON_ASSERT(codepoint <= 0x10FFFF);\n            unsigned v = codepoint - 0x10000;\n            PutUnsafe(os, static_cast<typename OutputStream::Ch>((v >> 10) | 0xD800));\n            PutUnsafe(os, static_cast<typename OutputStream::Ch>((v & 0x3FF) | 0xDC00));\n        }\n    }\n\n    template <typename InputStream>\n    static bool Decode(InputStream& is, unsigned* 
codepoint) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputStream::Ch) >= 2);\n        typename InputStream::Ch c = is.Take();\n        if (c < 0xD800 || c > 0xDFFF) {\n            *codepoint = static_cast<unsigned>(c);\n            return true;\n        }\n        else if (c <= 0xDBFF) {\n            *codepoint = (static_cast<unsigned>(c) & 0x3FF) << 10;\n            c = is.Take();\n            *codepoint |= (static_cast<unsigned>(c) & 0x3FF);\n            *codepoint += 0x10000;\n            return c >= 0xDC00 && c <= 0xDFFF;\n        }\n        return false;\n    }\n\n    template <typename InputStream, typename OutputStream>\n    static bool Validate(InputStream& is, OutputStream& os) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputStream::Ch) >= 2);\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputStream::Ch) >= 2);\n        typename InputStream::Ch c;\n        os.Put(static_cast<typename OutputStream::Ch>(c = is.Take()));\n        if (c < 0xD800 || c > 0xDFFF)\n            return true;\n        else if (c <= 0xDBFF) {\n            os.Put(c = is.Take());\n            return c >= 0xDC00 && c <= 0xDFFF;\n        }\n        return false;\n    }\n};\n\n//! UTF-16 little endian encoding.\ntemplate<typename CharType = wchar_t>\nstruct UTF16LE : UTF16<CharType> {\n    template <typename InputByteStream>\n    static CharType TakeBOM(InputByteStream& is) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\n        CharType c = Take(is);\n        return static_cast<uint16_t>(c) == 0xFEFFu ? 
Take(is) : c;\n    }\n\n    template <typename InputByteStream>\n    static CharType Take(InputByteStream& is) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\n        unsigned c = static_cast<uint8_t>(is.Take());\n        c |= static_cast<unsigned>(static_cast<uint8_t>(is.Take())) << 8;\n        return static_cast<CharType>(c);\n    }\n\n    template <typename OutputByteStream>\n    static void PutBOM(OutputByteStream& os) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\n        os.Put(static_cast<typename OutputByteStream::Ch>(0xFFu));\n        os.Put(static_cast<typename OutputByteStream::Ch>(0xFEu));\n    }\n\n    template <typename OutputByteStream>\n    static void Put(OutputByteStream& os, CharType c) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\n        os.Put(static_cast<typename OutputByteStream::Ch>(static_cast<unsigned>(c) & 0xFFu));\n        os.Put(static_cast<typename OutputByteStream::Ch>((static_cast<unsigned>(c) >> 8) & 0xFFu));\n    }\n};\n\n//! UTF-16 big endian encoding.\ntemplate<typename CharType = wchar_t>\nstruct UTF16BE : UTF16<CharType> {\n    template <typename InputByteStream>\n    static CharType TakeBOM(InputByteStream& is) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\n        CharType c = Take(is);\n        return static_cast<uint16_t>(c) == 0xFEFFu ? 
Take(is) : c;\n    }\n\n    template <typename InputByteStream>\n    static CharType Take(InputByteStream& is) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\n        unsigned c = static_cast<unsigned>(static_cast<uint8_t>(is.Take())) << 8;\n        c |= static_cast<unsigned>(static_cast<uint8_t>(is.Take()));\n        return static_cast<CharType>(c);\n    }\n\n    template <typename OutputByteStream>\n    static void PutBOM(OutputByteStream& os) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\n        os.Put(static_cast<typename OutputByteStream::Ch>(0xFEu));\n        os.Put(static_cast<typename OutputByteStream::Ch>(0xFFu));\n    }\n\n    template <typename OutputByteStream>\n    static void Put(OutputByteStream& os, CharType c) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\n        os.Put(static_cast<typename OutputByteStream::Ch>((static_cast<unsigned>(c) >> 8) & 0xFFu));\n        os.Put(static_cast<typename OutputByteStream::Ch>(static_cast<unsigned>(c) & 0xFFu));\n    }\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// UTF32\n\n//! UTF-32 encoding. \n/*! http://en.wikipedia.org/wiki/UTF-32\n    \\tparam CharType Type for storing 32-bit UTF-32 data. Default is unsigned. C++11 may use char32_t instead.\n    \\note implements Encoding concept\n\n    \\note For in-memory access, no need to concern endianness. 
The code units and code points are represented by CPU's endianness.\n    For streaming, use UTF32LE and UTF32BE, which handle endianness.\n*/\ntemplate<typename CharType = unsigned>\nstruct UTF32 {\n    typedef CharType Ch;\n    RAPIDJSON_STATIC_ASSERT(sizeof(Ch) >= 4);\n\n    enum { supportUnicode = 1 };\n\n    template<typename OutputStream>\n    static void Encode(OutputStream& os, unsigned codepoint) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputStream::Ch) >= 4);\n        RAPIDJSON_ASSERT(codepoint <= 0x10FFFF);\n        os.Put(codepoint);\n    }\n\n    template<typename OutputStream>\n    static void EncodeUnsafe(OutputStream& os, unsigned codepoint) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputStream::Ch) >= 4);\n        RAPIDJSON_ASSERT(codepoint <= 0x10FFFF);\n        PutUnsafe(os, codepoint);\n    }\n\n    template <typename InputStream>\n    static bool Decode(InputStream& is, unsigned* codepoint) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputStream::Ch) >= 4);\n        Ch c = is.Take();\n        *codepoint = c;\n        return c <= 0x10FFFF;\n    }\n\n    template <typename InputStream, typename OutputStream>\n    static bool Validate(InputStream& is, OutputStream& os) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputStream::Ch) >= 4);\n        Ch c;\n        os.Put(c = is.Take());\n        return c <= 0x10FFFF;\n    }\n};\n\n//! UTF-32 little endian encoding.\ntemplate<typename CharType = unsigned>\nstruct UTF32LE : UTF32<CharType> {\n    template <typename InputByteStream>\n    static CharType TakeBOM(InputByteStream& is) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\n        CharType c = Take(is);\n        return static_cast<uint32_t>(c) == 0x0000FEFFu ? 
Take(is) : c;\n    }\n\n    template <typename InputByteStream>\n    static CharType Take(InputByteStream& is) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\n        unsigned c = static_cast<uint8_t>(is.Take());\n        c |= static_cast<unsigned>(static_cast<uint8_t>(is.Take())) << 8;\n        c |= static_cast<unsigned>(static_cast<uint8_t>(is.Take())) << 16;\n        c |= static_cast<unsigned>(static_cast<uint8_t>(is.Take())) << 24;\n        return static_cast<CharType>(c);\n    }\n\n    template <typename OutputByteStream>\n    static void PutBOM(OutputByteStream& os) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\n        os.Put(static_cast<typename OutputByteStream::Ch>(0xFFu));\n        os.Put(static_cast<typename OutputByteStream::Ch>(0xFEu));\n        os.Put(static_cast<typename OutputByteStream::Ch>(0x00u));\n        os.Put(static_cast<typename OutputByteStream::Ch>(0x00u));\n    }\n\n    template <typename OutputByteStream>\n    static void Put(OutputByteStream& os, CharType c) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\n        os.Put(static_cast<typename OutputByteStream::Ch>(c & 0xFFu));\n        os.Put(static_cast<typename OutputByteStream::Ch>((c >> 8) & 0xFFu));\n        os.Put(static_cast<typename OutputByteStream::Ch>((c >> 16) & 0xFFu));\n        os.Put(static_cast<typename OutputByteStream::Ch>((c >> 24) & 0xFFu));\n    }\n};\n\n//! UTF-32 big endian encoding.\ntemplate<typename CharType = unsigned>\nstruct UTF32BE : UTF32<CharType> {\n    template <typename InputByteStream>\n    static CharType TakeBOM(InputByteStream& is) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\n        CharType c = Take(is);\n        return static_cast<uint32_t>(c) == 0x0000FEFFu ? 
Take(is) : c; \n    }\n\n    template <typename InputByteStream>\n    static CharType Take(InputByteStream& is) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\n        unsigned c = static_cast<unsigned>(static_cast<uint8_t>(is.Take())) << 24;\n        c |= static_cast<unsigned>(static_cast<uint8_t>(is.Take())) << 16;\n        c |= static_cast<unsigned>(static_cast<uint8_t>(is.Take())) << 8;\n        c |= static_cast<unsigned>(static_cast<uint8_t>(is.Take()));\n        return static_cast<CharType>(c);\n    }\n\n    template <typename OutputByteStream>\n    static void PutBOM(OutputByteStream& os) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\n        os.Put(static_cast<typename OutputByteStream::Ch>(0x00u));\n        os.Put(static_cast<typename OutputByteStream::Ch>(0x00u));\n        os.Put(static_cast<typename OutputByteStream::Ch>(0xFEu));\n        os.Put(static_cast<typename OutputByteStream::Ch>(0xFFu));\n    }\n\n    template <typename OutputByteStream>\n    static void Put(OutputByteStream& os, CharType c) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\n        os.Put(static_cast<typename OutputByteStream::Ch>((c >> 24) & 0xFFu));\n        os.Put(static_cast<typename OutputByteStream::Ch>((c >> 16) & 0xFFu));\n        os.Put(static_cast<typename OutputByteStream::Ch>((c >> 8) & 0xFFu));\n        os.Put(static_cast<typename OutputByteStream::Ch>(c & 0xFFu));\n    }\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// ASCII\n\n//! ASCII encoding.\n/*! http://en.wikipedia.org/wiki/ASCII\n    \\tparam CharType Code unit for storing 7-bit ASCII data. 
Default is char.\n    \\note implements Encoding concept\n*/\ntemplate<typename CharType = char>\nstruct ASCII {\n    typedef CharType Ch;\n\n    enum { supportUnicode = 0 };\n\n    template<typename OutputStream>\n    static void Encode(OutputStream& os, unsigned codepoint) {\n        RAPIDJSON_ASSERT(codepoint <= 0x7F);\n        os.Put(static_cast<Ch>(codepoint & 0xFF));\n    }\n\n    template<typename OutputStream>\n    static void EncodeUnsafe(OutputStream& os, unsigned codepoint) {\n        RAPIDJSON_ASSERT(codepoint <= 0x7F);\n        PutUnsafe(os, static_cast<Ch>(codepoint & 0xFF));\n    }\n\n    template <typename InputStream>\n    static bool Decode(InputStream& is, unsigned* codepoint) {\n        uint8_t c = static_cast<uint8_t>(is.Take());\n        *codepoint = c;\n        return c <= 0X7F;\n    }\n\n    template <typename InputStream, typename OutputStream>\n    static bool Validate(InputStream& is, OutputStream& os) {\n        uint8_t c = static_cast<uint8_t>(is.Take());\n        os.Put(static_cast<typename OutputStream::Ch>(c));\n        return c <= 0x7F;\n    }\n\n    template <typename InputByteStream>\n    static CharType TakeBOM(InputByteStream& is) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\n        uint8_t c = static_cast<uint8_t>(Take(is));\n        return static_cast<Ch>(c);\n    }\n\n    template <typename InputByteStream>\n    static Ch Take(InputByteStream& is) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename InputByteStream::Ch) == 1);\n        return static_cast<Ch>(is.Take());\n    }\n\n    template <typename OutputByteStream>\n    static void PutBOM(OutputByteStream& os) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\n        (void)os;\n    }\n\n    template <typename OutputByteStream>\n    static void Put(OutputByteStream& os, Ch c) {\n        RAPIDJSON_STATIC_ASSERT(sizeof(typename OutputByteStream::Ch) == 1);\n        os.Put(static_cast<typename 
OutputByteStream::Ch>(c));\n    }\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// AutoUTF\n\n//! Runtime-specified UTF encoding type of a stream.\nenum UTFType {\n    kUTF8 = 0,      //!< UTF-8.\n    kUTF16LE = 1,   //!< UTF-16 little endian.\n    kUTF16BE = 2,   //!< UTF-16 big endian.\n    kUTF32LE = 3,   //!< UTF-32 little endian.\n    kUTF32BE = 4    //!< UTF-32 big endian.\n};\n\n//! Dynamically select encoding according to stream's runtime-specified UTF encoding type.\n/*! \\note This class can be used with AutoUTFInputStream and AutoUTFOutputStream, which provide GetType().\n*/\ntemplate<typename CharType>\nstruct AutoUTF {\n    typedef CharType Ch;\n\n    enum { supportUnicode = 1 };\n\n#define RAPIDJSON_ENCODINGS_FUNC(x) UTF8<Ch>::x, UTF16LE<Ch>::x, UTF16BE<Ch>::x, UTF32LE<Ch>::x, UTF32BE<Ch>::x\n\n    template<typename OutputStream>\n    static RAPIDJSON_FORCEINLINE void Encode(OutputStream& os, unsigned codepoint) {\n        typedef void (*EncodeFunc)(OutputStream&, unsigned);\n        static const EncodeFunc f[] = { RAPIDJSON_ENCODINGS_FUNC(Encode) };\n        (*f[os.GetType()])(os, codepoint);\n    }\n\n    template<typename OutputStream>\n    static RAPIDJSON_FORCEINLINE void EncodeUnsafe(OutputStream& os, unsigned codepoint) {\n        typedef void (*EncodeFunc)(OutputStream&, unsigned);\n        static const EncodeFunc f[] = { RAPIDJSON_ENCODINGS_FUNC(EncodeUnsafe) };\n        (*f[os.GetType()])(os, codepoint);\n    }\n\n    template <typename InputStream>\n    static RAPIDJSON_FORCEINLINE bool Decode(InputStream& is, unsigned* codepoint) {\n        typedef bool (*DecodeFunc)(InputStream&, unsigned*);\n        static const DecodeFunc f[] = { RAPIDJSON_ENCODINGS_FUNC(Decode) };\n        return (*f[is.GetType()])(is, codepoint);\n    }\n\n    template <typename InputStream, typename OutputStream>\n    static RAPIDJSON_FORCEINLINE bool Validate(InputStream& is, OutputStream& os) {\n        typedef bool 
(*ValidateFunc)(InputStream&, OutputStream&);\n        static const ValidateFunc f[] = { RAPIDJSON_ENCODINGS_FUNC(Validate) };\n        return (*f[is.GetType()])(is, os);\n    }\n\n#undef RAPIDJSON_ENCODINGS_FUNC\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// Transcoder\n\n//! Encoding conversion.\ntemplate<typename SourceEncoding, typename TargetEncoding>\nstruct Transcoder {\n    //! Take one Unicode codepoint from source encoding, convert it to target encoding and put it to the output stream.\n    template<typename InputStream, typename OutputStream>\n    static RAPIDJSON_FORCEINLINE bool Transcode(InputStream& is, OutputStream& os) {\n        unsigned codepoint;\n        if (!SourceEncoding::Decode(is, &codepoint))\n            return false;\n        TargetEncoding::Encode(os, codepoint);\n        return true;\n    }\n\n    template<typename InputStream, typename OutputStream>\n    static RAPIDJSON_FORCEINLINE bool TranscodeUnsafe(InputStream& is, OutputStream& os) {\n        unsigned codepoint;\n        if (!SourceEncoding::Decode(is, &codepoint))\n            return false;\n        TargetEncoding::EncodeUnsafe(os, codepoint);\n        return true;\n    }\n\n    //! Validate one Unicode codepoint from an encoded stream.\n    template<typename InputStream, typename OutputStream>\n    static RAPIDJSON_FORCEINLINE bool Validate(InputStream& is, OutputStream& os) {\n        return Transcode(is, os);   // Since source/target encoding is different, must transcode.\n    }\n};\n\n// Forward declaration.\ntemplate<typename Stream>\ninline void PutUnsafe(Stream& stream, typename Stream::Ch c);\n\n//! 
Specialization of Transcoder with same source and target encoding.\ntemplate<typename Encoding>\nstruct Transcoder<Encoding, Encoding> {\n    template<typename InputStream, typename OutputStream>\n    static RAPIDJSON_FORCEINLINE bool Transcode(InputStream& is, OutputStream& os) {\n        os.Put(is.Take());  // Just copy one code unit. This semantic is different from primary template class.\n        return true;\n    }\n    \n    template<typename InputStream, typename OutputStream>\n    static RAPIDJSON_FORCEINLINE bool TranscodeUnsafe(InputStream& is, OutputStream& os) {\n        PutUnsafe(os, is.Take());  // Just copy one code unit. This semantic is different from primary template class.\n        return true;\n    }\n    \n    template<typename InputStream, typename OutputStream>\n    static RAPIDJSON_FORCEINLINE bool Validate(InputStream& is, OutputStream& os) {\n        return Encoding::Validate(is, os);  // source/target encoding are the same\n    }\n};\n\nRAPIDJSON_NAMESPACE_END\n\n#if defined(__GNUC__) || (defined(_MSC_VER) && !defined(__clang__))\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_ENCODINGS_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/error/en.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_ERROR_EN_H_\n#define RAPIDJSON_ERROR_EN_H_\n\n#include \"error.h\"\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(switch-enum)\nRAPIDJSON_DIAG_OFF(covered-switch-default)\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n//! Maps error code of parsing into error message.\n/*!\n    \\ingroup RAPIDJSON_ERRORS\n    \\param parseErrorCode Error code obtained in parsing.\n    \\return the error message.\n    \\note User can make a copy of this function for localization.\n        Using switch-case is safer for future modification of error codes.\n*/\ninline const RAPIDJSON_ERROR_CHARTYPE* GetParseError_En(ParseErrorCode parseErrorCode) {\n    switch (parseErrorCode) {\n        case kParseErrorNone:                           return RAPIDJSON_ERROR_STRING(\"No error.\");\n\n        case kParseErrorDocumentEmpty:                  return RAPIDJSON_ERROR_STRING(\"The document is empty.\");\n        case kParseErrorDocumentRootNotSingular:        return RAPIDJSON_ERROR_STRING(\"The document root must not be followed by other values.\");\n\n        case kParseErrorValueInvalid:                   return RAPIDJSON_ERROR_STRING(\"Invalid value.\");\n\n        case kParseErrorObjectMissName:                 return RAPIDJSON_ERROR_STRING(\"Missing a 
name for object member.\");\n        case kParseErrorObjectMissColon:                return RAPIDJSON_ERROR_STRING(\"Missing a colon after a name of object member.\");\n        case kParseErrorObjectMissCommaOrCurlyBracket:  return RAPIDJSON_ERROR_STRING(\"Missing a comma or '}' after an object member.\");\n\n        case kParseErrorArrayMissCommaOrSquareBracket:  return RAPIDJSON_ERROR_STRING(\"Missing a comma or ']' after an array element.\");\n\n        case kParseErrorStringUnicodeEscapeInvalidHex:  return RAPIDJSON_ERROR_STRING(\"Incorrect hex digit after \\\\u escape in string.\");\n        case kParseErrorStringUnicodeSurrogateInvalid:  return RAPIDJSON_ERROR_STRING(\"The surrogate pair in string is invalid.\");\n        case kParseErrorStringEscapeInvalid:            return RAPIDJSON_ERROR_STRING(\"Invalid escape character in string.\");\n        case kParseErrorStringMissQuotationMark:        return RAPIDJSON_ERROR_STRING(\"Missing a closing quotation mark in string.\");\n        case kParseErrorStringInvalidEncoding:          return RAPIDJSON_ERROR_STRING(\"Invalid encoding in string.\");\n\n        case kParseErrorNumberTooBig:                   return RAPIDJSON_ERROR_STRING(\"Number too big to be stored in double.\");\n        case kParseErrorNumberMissFraction:             return RAPIDJSON_ERROR_STRING(\"Miss fraction part in number.\");\n        case kParseErrorNumberMissExponent:             return RAPIDJSON_ERROR_STRING(\"Miss exponent in number.\");\n\n        case kParseErrorTermination:                    return RAPIDJSON_ERROR_STRING(\"Terminate parsing due to Handler error.\");\n        case kParseErrorUnspecificSyntaxError:          return RAPIDJSON_ERROR_STRING(\"Unspecific syntax error.\");\n\n        default:                                        return RAPIDJSON_ERROR_STRING(\"Unknown error.\");\n    }\n}\n\n//! 
Maps error code of validation into error message.\n/*!\n    \\ingroup RAPIDJSON_ERRORS\n    \\param validateErrorCode Error code obtained from validator.\n    \\return the error message.\n    \\note User can make a copy of this function for localization.\n        Using switch-case is safer for future modification of error codes.\n*/\ninline const RAPIDJSON_ERROR_CHARTYPE* GetValidateError_En(ValidateErrorCode validateErrorCode) {\n    switch (validateErrorCode) {\n        case kValidateErrors:                           return RAPIDJSON_ERROR_STRING(\"One or more validation errors have occurred\");\n        case kValidateErrorNone:                        return RAPIDJSON_ERROR_STRING(\"No error.\");\n\n        case kValidateErrorMultipleOf:                  return RAPIDJSON_ERROR_STRING(\"Number '%actual' is not a multiple of the 'multipleOf' value '%expected'.\");\n        case kValidateErrorMaximum:                     return RAPIDJSON_ERROR_STRING(\"Number '%actual' is greater than the 'maximum' value '%expected'.\");\n        case kValidateErrorExclusiveMaximum:            return RAPIDJSON_ERROR_STRING(\"Number '%actual' is greater than or equal to the 'exclusiveMaximum' value '%expected'.\");\n        case kValidateErrorMinimum:                     return RAPIDJSON_ERROR_STRING(\"Number '%actual' is less than the 'minimum' value '%expected'.\");\n        case kValidateErrorExclusiveMinimum:            return RAPIDJSON_ERROR_STRING(\"Number '%actual' is less than or equal to the 'exclusiveMinimum' value '%expected'.\");\n\n        case kValidateErrorMaxLength:                   return RAPIDJSON_ERROR_STRING(\"String '%actual' is longer than the 'maxLength' value '%expected'.\");\n        case kValidateErrorMinLength:                   return RAPIDJSON_ERROR_STRING(\"String '%actual' is shorter than the 'minLength' value '%expected'.\");\n        case kValidateErrorPattern:                     return RAPIDJSON_ERROR_STRING(\"String '%actual' does not match the 
'pattern' regular expression.\");\n\n        case kValidateErrorMaxItems:                    return RAPIDJSON_ERROR_STRING(\"Array of length '%actual' is longer than the 'maxItems' value '%expected'.\");\n        case kValidateErrorMinItems:                    return RAPIDJSON_ERROR_STRING(\"Array of length '%actual' is shorter than the 'minItems' value '%expected'.\");\n        case kValidateErrorUniqueItems:                 return RAPIDJSON_ERROR_STRING(\"Array has duplicate items at indices '%duplicates' but 'uniqueItems' is true.\");\n        case kValidateErrorAdditionalItems:             return RAPIDJSON_ERROR_STRING(\"Array has an additional item at index '%disallowed' that is not allowed by the schema.\");\n\n        case kValidateErrorMaxProperties:               return RAPIDJSON_ERROR_STRING(\"Object has '%actual' members which is more than 'maxProperties' value '%expected'.\");\n        case kValidateErrorMinProperties:               return RAPIDJSON_ERROR_STRING(\"Object has '%actual' members which is less than 'minProperties' value '%expected'.\");\n        case kValidateErrorRequired:                    return RAPIDJSON_ERROR_STRING(\"Object is missing the following members required by the schema: '%missing'.\");\n        case kValidateErrorAdditionalProperties:        return RAPIDJSON_ERROR_STRING(\"Object has an additional member '%disallowed' that is not allowed by the schema.\");\n        case kValidateErrorPatternProperties:           return RAPIDJSON_ERROR_STRING(\"Object has 'patternProperties' that are not allowed by the schema.\");\n        case kValidateErrorDependencies:                return RAPIDJSON_ERROR_STRING(\"Object has missing property or schema dependencies, refer to following errors.\");\n\n        case kValidateErrorEnum:                        return RAPIDJSON_ERROR_STRING(\"Property has a value that is not one of its allowed enumerated values.\");\n        case kValidateErrorType:                        return 
RAPIDJSON_ERROR_STRING(\"Property has a type '%actual' that is not in the following list: '%expected'.\");\n\n        case kValidateErrorOneOf:                       return RAPIDJSON_ERROR_STRING(\"Property did not match any of the sub-schemas specified by 'oneOf', refer to following errors.\");\n        case kValidateErrorOneOfMatch:                  return RAPIDJSON_ERROR_STRING(\"Property matched more than one of the sub-schemas specified by 'oneOf', indices '%matches'.\");\n        case kValidateErrorAllOf:                       return RAPIDJSON_ERROR_STRING(\"Property did not match all of the sub-schemas specified by 'allOf', refer to following errors.\");\n        case kValidateErrorAnyOf:                       return RAPIDJSON_ERROR_STRING(\"Property did not match any of the sub-schemas specified by 'anyOf', refer to following errors.\");\n        case kValidateErrorNot:                         return RAPIDJSON_ERROR_STRING(\"Property matched the sub-schema specified by 'not'.\");\n\n        case kValidateErrorReadOnly:                    return RAPIDJSON_ERROR_STRING(\"Property is read-only but has been provided when validation is for writing.\");\n        case kValidateErrorWriteOnly:                   return RAPIDJSON_ERROR_STRING(\"Property is write-only but has been provided when validation is for reading.\");\n\n        default:                                        return RAPIDJSON_ERROR_STRING(\"Unknown error.\");\n    }\n}\n\n//! 
Maps error code of schema document compilation into error message.\n/*!\n    \\ingroup RAPIDJSON_ERRORS\n    \\param schemaErrorCode Error code obtained from compiling the schema document.\n    \\return the error message.\n    \\note User can make a copy of this function for localization.\n        Using switch-case is safer for future modification of error codes.\n*/\n  inline const RAPIDJSON_ERROR_CHARTYPE* GetSchemaError_En(SchemaErrorCode schemaErrorCode) {\n      switch (schemaErrorCode) {\n          case kSchemaErrorNone:                        return RAPIDJSON_ERROR_STRING(\"No error.\");\n\n          case kSchemaErrorStartUnknown:                return RAPIDJSON_ERROR_STRING(\"Pointer '%value' to start of schema does not resolve to a location in the document.\");\n          case kSchemaErrorRefPlainName:                return RAPIDJSON_ERROR_STRING(\"$ref fragment '%value' must be a JSON pointer.\");\n          case kSchemaErrorRefInvalid:                  return RAPIDJSON_ERROR_STRING(\"$ref must not be an empty string.\");\n          case kSchemaErrorRefPointerInvalid:           return RAPIDJSON_ERROR_STRING(\"$ref fragment '%value' is not a valid JSON pointer at offset '%offset'.\");\n          case kSchemaErrorRefUnknown:                  return RAPIDJSON_ERROR_STRING(\"$ref '%value' does not resolve to a location in the target document.\");\n          case kSchemaErrorRefCyclical:                 return RAPIDJSON_ERROR_STRING(\"$ref '%value' is cyclical.\");\n          case kSchemaErrorRefNoRemoteProvider:         return RAPIDJSON_ERROR_STRING(\"$ref is remote but there is no remote provider.\");\n          case kSchemaErrorRefNoRemoteSchema:           return RAPIDJSON_ERROR_STRING(\"$ref '%value' is remote but the remote provider did not return a schema.\");\n          case kSchemaErrorRegexInvalid:                return RAPIDJSON_ERROR_STRING(\"Invalid regular expression '%value' in 'pattern' or 'patternProperties'.\");\n          case 
kSchemaErrorSpecUnknown:                 return RAPIDJSON_ERROR_STRING(\"JSON schema draft or OpenAPI version is not recognized.\");\n          case kSchemaErrorSpecUnsupported:             return RAPIDJSON_ERROR_STRING(\"JSON schema draft or OpenAPI version is not supported.\");\n          case kSchemaErrorSpecIllegal:                 return RAPIDJSON_ERROR_STRING(\"Both JSON schema draft and OpenAPI version found in document.\");\n          case kSchemaErrorReadOnlyAndWriteOnly:        return RAPIDJSON_ERROR_STRING(\"Property must not be both 'readOnly' and 'writeOnly'.\");\n\n          default:                                      return RAPIDJSON_ERROR_STRING(\"Unknown error.\");\n    }\n  }\n\n//! Maps error code of pointer parse into error message.\n/*!\n    \\ingroup RAPIDJSON_ERRORS\n    \\param pointerParseErrorCode Error code obtained from pointer parse.\n    \\return the error message.\n    \\note User can make a copy of this function for localization.\n        Using switch-case is safer for future modification of error codes.\n*/\ninline const RAPIDJSON_ERROR_CHARTYPE* GetPointerParseError_En(PointerParseErrorCode pointerParseErrorCode) {\n    switch (pointerParseErrorCode) {\n        case kPointerParseErrorNone:                       return RAPIDJSON_ERROR_STRING(\"No error.\");\n\n        case kPointerParseErrorTokenMustBeginWithSolidus:  return RAPIDJSON_ERROR_STRING(\"A token must begin with a '/'.\");\n        case kPointerParseErrorInvalidEscape:              return RAPIDJSON_ERROR_STRING(\"Invalid escape.\");\n        case kPointerParseErrorInvalidPercentEncoding:     return RAPIDJSON_ERROR_STRING(\"Invalid percent encoding in URI fragment.\");\n        case kPointerParseErrorCharacterMustPercentEncode: return RAPIDJSON_ERROR_STRING(\"A character must be percent encoded in a URI fragment.\");\n\n        default:                                           return RAPIDJSON_ERROR_STRING(\"Unknown error.\");\n    
}\n}\n\nRAPIDJSON_NAMESPACE_END\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_ERROR_EN_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/error/error.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_ERROR_ERROR_H_\n#define RAPIDJSON_ERROR_ERROR_H_\n\n#include \"../rapidjson.h\"\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(padded)\n#endif\n\n/*! \\file error.h */\n\n/*! \\defgroup RAPIDJSON_ERRORS RapidJSON error handling */\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_ERROR_CHARTYPE\n\n//! Character type of error messages.\n/*! \\ingroup RAPIDJSON_ERRORS\n    The default character type is \\c char.\n    On Windows, user can define this macro as \\c TCHAR for supporting both\n    unicode/non-unicode settings.\n*/\n#ifndef RAPIDJSON_ERROR_CHARTYPE\n#define RAPIDJSON_ERROR_CHARTYPE char\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_ERROR_STRING\n\n//! Macro for converting string literal to \\ref RAPIDJSON_ERROR_CHARTYPE[].\n/*! 
\\ingroup RAPIDJSON_ERRORS\n    By default this conversion macro does nothing.\n    On Windows, user can define this macro as \\c _T(x) for supporting both\n    unicode/non-unicode settings.\n*/\n#ifndef RAPIDJSON_ERROR_STRING\n#define RAPIDJSON_ERROR_STRING(x) x\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n///////////////////////////////////////////////////////////////////////////////\n// ParseErrorCode\n\n//! Error code of parsing.\n/*! \\ingroup RAPIDJSON_ERRORS\n    \\see GenericReader::Parse, GenericReader::GetParseErrorCode\n*/\nenum ParseErrorCode {\n    kParseErrorNone = 0,                        //!< No error.\n\n    kParseErrorDocumentEmpty,                   //!< The document is empty.\n    kParseErrorDocumentRootNotSingular,         //!< The document root must not follow by other values.\n\n    kParseErrorValueInvalid,                    //!< Invalid value.\n\n    kParseErrorObjectMissName,                  //!< Missing a name for object member.\n    kParseErrorObjectMissColon,                 //!< Missing a colon after a name of object member.\n    kParseErrorObjectMissCommaOrCurlyBracket,   //!< Missing a comma or '}' after an object member.\n\n    kParseErrorArrayMissCommaOrSquareBracket,   //!< Missing a comma or ']' after an array element.\n\n    kParseErrorStringUnicodeEscapeInvalidHex,   //!< Incorrect hex digit after \\\\u escape in string.\n    kParseErrorStringUnicodeSurrogateInvalid,   //!< The surrogate pair in string is invalid.\n    kParseErrorStringEscapeInvalid,             //!< Invalid escape character in string.\n    kParseErrorStringMissQuotationMark,         //!< Missing a closing quotation mark in string.\n    kParseErrorStringInvalidEncoding,           //!< Invalid encoding in string.\n\n    kParseErrorNumberTooBig,                    //!< Number too big to be stored in double.\n    kParseErrorNumberMissFraction,              //!< Miss fraction part in number.\n    kParseErrorNumberMissExponent,              //!< Miss exponent in 
number.\n\n    kParseErrorTermination,                     //!< Parsing was terminated.\n    kParseErrorUnspecificSyntaxError            //!< Unspecific syntax error.\n};\n\n//! Result of parsing (wraps ParseErrorCode)\n/*!\n    \\ingroup RAPIDJSON_ERRORS\n    \\code\n        Document doc;\n        ParseResult ok = doc.Parse(\"[42]\");\n        if (!ok) {\n            fprintf(stderr, \"JSON parse error: %s (%u)\",\n                    GetParseError_En(ok.Code()), ok.Offset());\n            exit(EXIT_FAILURE);\n        }\n    \\endcode\n    \\see GenericReader::Parse, GenericDocument::Parse\n*/\nstruct ParseResult {\n    //!! Unspecified boolean type\n    typedef bool (ParseResult::*BooleanType)() const;\npublic:\n    //! Default constructor, no error.\n    ParseResult() : code_(kParseErrorNone), offset_(0) {}\n    //! Constructor to set an error.\n    ParseResult(ParseErrorCode code, size_t offset) : code_(code), offset_(offset) {}\n\n    //! Get the error code.\n    ParseErrorCode Code() const { return code_; }\n    //! Get the error offset, if \\ref IsError(), 0 otherwise.\n    size_t Offset() const { return offset_; }\n\n    //! Explicit conversion to \\c bool, returns \\c true, iff !\\ref IsError().\n    operator BooleanType() const { return !IsError() ? &ParseResult::IsError : NULL; }\n    //! Whether the result is an error.\n    bool IsError() const { return code_ != kParseErrorNone; }\n\n    bool operator==(const ParseResult& that) const { return code_ == that.code_; }\n    bool operator==(ParseErrorCode code) const { return code_ == code; }\n    friend bool operator==(ParseErrorCode code, const ParseResult & err) { return code == err.code_; }\n\n    bool operator!=(const ParseResult& that) const { return !(*this == that); }\n    bool operator!=(ParseErrorCode code) const { return !(*this == code); }\n    friend bool operator!=(ParseErrorCode code, const ParseResult & err) { return err != code; }\n\n    //! 
Reset error code.\n    void Clear() { Set(kParseErrorNone); }\n    //! Update error code and offset.\n    void Set(ParseErrorCode code, size_t offset = 0) { code_ = code; offset_ = offset; }\n\nprivate:\n    ParseErrorCode code_;\n    size_t offset_;\n};\n\n//! Function pointer type of GetParseError().\n/*! \\ingroup RAPIDJSON_ERRORS\n\n    This is the prototype for \\c GetParseError_X(), where \\c X is a locale.\n    User can dynamically change locale in runtime, e.g.:\n\\code\n    GetParseErrorFunc GetParseError = GetParseError_En; // or whatever\n    const RAPIDJSON_ERROR_CHARTYPE* s = GetParseError(document.GetParseErrorCode());\n\\endcode\n*/\ntypedef const RAPIDJSON_ERROR_CHARTYPE* (*GetParseErrorFunc)(ParseErrorCode);\n\n///////////////////////////////////////////////////////////////////////////////\n// ValidateErrorCode\n\n//! Error codes when validating.\n/*! \\ingroup RAPIDJSON_ERRORS\n    \\see GenericSchemaValidator\n*/\nenum ValidateErrorCode {\n    kValidateErrors    = -1,                   //!< Top level error code when kValidateContinueOnErrorsFlag set.\n    kValidateErrorNone = 0,                    //!< No error.\n\n    kValidateErrorMultipleOf,                  //!< Number is not a multiple of the 'multipleOf' value.\n    kValidateErrorMaximum,                     //!< Number is greater than the 'maximum' value.\n    kValidateErrorExclusiveMaximum,            //!< Number is greater than or equal to the 'maximum' value.\n    kValidateErrorMinimum,                     //!< Number is less than the 'minimum' value.\n    kValidateErrorExclusiveMinimum,            //!< Number is less than or equal to the 'minimum' value.\n\n    kValidateErrorMaxLength,                   //!< String is longer than the 'maxLength' value.\n    kValidateErrorMinLength,                   //!< String is shorter than the 'minLength' value.\n    kValidateErrorPattern,                     //!< String does not match the 'pattern' regular expression.\n\n    kValidateErrorMaxItems, 
                   //!< Array is longer than the 'maxItems' value.\n    kValidateErrorMinItems,                    //!< Array is shorter than the 'minItems' value.\n    kValidateErrorUniqueItems,                 //!< Array has duplicate items but 'uniqueItems' is true.\n    kValidateErrorAdditionalItems,             //!< Array has additional items that are not allowed by the schema.\n\n    kValidateErrorMaxProperties,               //!< Object has more members than 'maxProperties' value.\n    kValidateErrorMinProperties,               //!< Object has less members than 'minProperties' value.\n    kValidateErrorRequired,                    //!< Object is missing one or more members required by the schema.\n    kValidateErrorAdditionalProperties,        //!< Object has additional members that are not allowed by the schema.\n    kValidateErrorPatternProperties,           //!< See other errors.\n    kValidateErrorDependencies,                //!< Object has missing property or schema dependencies.\n\n    kValidateErrorEnum,                        //!< Property has a value that is not one of its allowed enumerated values.\n    kValidateErrorType,                        //!< Property has a type that is not allowed by the schema.\n\n    kValidateErrorOneOf,                       //!< Property did not match any of the sub-schemas specified by 'oneOf'.\n    kValidateErrorOneOfMatch,                  //!< Property matched more than one of the sub-schemas specified by 'oneOf'.\n    kValidateErrorAllOf,                       //!< Property did not match all of the sub-schemas specified by 'allOf'.\n    kValidateErrorAnyOf,                       //!< Property did not match any of the sub-schemas specified by 'anyOf'.\n    kValidateErrorNot,                         //!< Property matched the sub-schema specified by 'not'.\n\n    kValidateErrorReadOnly,                    //!< Property is read-only but has been provided when validation is for writing\n    kValidateErrorWriteOnly     
               //!< Property is write-only but has been provided when validation is for reading\n};\n\n//! Function pointer type of GetValidateError().\n/*! \\ingroup RAPIDJSON_ERRORS\n\n    This is the prototype for \\c GetValidateError_X(), where \\c X is a locale.\n    User can dynamically change locale in runtime, e.g.:\n\\code\n    GetValidateErrorFunc GetValidateError = GetValidateError_En; // or whatever\n    const RAPIDJSON_ERROR_CHARTYPE* s = GetValidateError(validator.GetInvalidSchemaCode());\n\\endcode\n*/\ntypedef const RAPIDJSON_ERROR_CHARTYPE* (*GetValidateErrorFunc)(ValidateErrorCode);\n\n///////////////////////////////////////////////////////////////////////////////\n// SchemaErrorCode\n\n//! Error codes when validating.\n/*! \\ingroup RAPIDJSON_ERRORS\n    \\see GenericSchemaValidator\n*/\nenum SchemaErrorCode {\n    kSchemaErrorNone = 0,                      //!< No error.\n\n    kSchemaErrorStartUnknown,                  //!< Pointer to start of schema does not resolve to a location in the document\n    kSchemaErrorRefPlainName,                  //!< $ref fragment must be a JSON pointer\n    kSchemaErrorRefInvalid,                    //!< $ref must not be an empty string\n    kSchemaErrorRefPointerInvalid,             //!< $ref fragment is not a valid JSON pointer at offset\n    kSchemaErrorRefUnknown,                    //!< $ref does not resolve to a location in the target document\n    kSchemaErrorRefCyclical,                   //!< $ref is cyclical\n    kSchemaErrorRefNoRemoteProvider,           //!< $ref is remote but there is no remote provider\n    kSchemaErrorRefNoRemoteSchema,             //!< $ref is remote but the remote provider did not return a schema\n    kSchemaErrorRegexInvalid,                  //!< Invalid regular expression in 'pattern' or 'patternProperties'\n    kSchemaErrorSpecUnknown,                   //!< JSON schema draft or OpenAPI version is not recognized\n    kSchemaErrorSpecUnsupported,               //!< JSON 
 schema draft or OpenAPI version is not supported\n    kSchemaErrorSpecIllegal,                   //!< Both JSON schema draft and OpenAPI version found in document\n    kSchemaErrorReadOnlyAndWriteOnly           //!< Property must not be both 'readOnly' and 'writeOnly'\n};\n\n//! Function pointer type of GetSchemaError().\n/*! \\ingroup RAPIDJSON_ERRORS\n\n    This is the prototype for \\c GetSchemaError_X(), where \\c X is a locale.\n    User can dynamically change locale in runtime, e.g.:\n\\code\n    GetSchemaErrorFunc GetSchemaError = GetSchemaError_En; // or whatever\n    const RAPIDJSON_ERROR_CHARTYPE* s = GetSchemaError(validator.GetInvalidSchemaCode());\n\\endcode\n*/\ntypedef const RAPIDJSON_ERROR_CHARTYPE* (*GetSchemaErrorFunc)(SchemaErrorCode);\n\n///////////////////////////////////////////////////////////////////////////////\n// PointerParseErrorCode\n\n//! Error code of JSON pointer parsing.\n/*! \\ingroup RAPIDJSON_ERRORS\n    \\see GenericPointer::GenericPointer, GenericPointer::GetParseErrorCode\n*/\nenum PointerParseErrorCode {\n    kPointerParseErrorNone = 0,                     //!< The parse is successful\n\n    kPointerParseErrorTokenMustBeginWithSolidus,    //!< A token must begin with a '/'\n    kPointerParseErrorInvalidEscape,                //!< Invalid escape\n    kPointerParseErrorInvalidPercentEncoding,       //!< Invalid percent encoding in URI fragment\n    kPointerParseErrorCharacterMustPercentEncode    //!< A character must be percent encoded in a URI fragment\n};\n\n//! Function pointer type of GetPointerParseError().\n/*! 
\\ingroup RAPIDJSON_ERRORS\n\n    This is the prototype for \\c GetPointerParseError_X(), where \\c X is a locale.\n    User can dynamically change locale in runtime, e.g.:\n\\code\n    GetPointerParseErrorFunc GetPointerParseError = GetPointerParseError_En; // or whatever\n    const RAPIDJSON_ERROR_CHARTYPE* s = GetPointerParseError(pointer.GetParseErrorCode());\n\\endcode\n*/\ntypedef const RAPIDJSON_ERROR_CHARTYPE* (*GetPointerParseErrorFunc)(PointerParseErrorCode);\n\n\nRAPIDJSON_NAMESPACE_END\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_ERROR_ERROR_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/filereadstream.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_FILEREADSTREAM_H_\n#define RAPIDJSON_FILEREADSTREAM_H_\n\n#include \"stream.h\"\n#include <cstdio>\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(padded)\nRAPIDJSON_DIAG_OFF(unreachable-code)\nRAPIDJSON_DIAG_OFF(missing-noreturn)\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n//! File byte stream for input using fread().\n/*!\n    \\note implements Stream concept\n*/\nclass FileReadStream {\npublic:\n    typedef char Ch;    //!< Character type (byte).\n\n    //! Constructor.\n    /*!\n        \\param fp File pointer opened for read.\n        \\param buffer user-supplied buffer.\n        \\param bufferSize size of buffer in bytes. 
Must be >= 4 bytes.\n    */\n    FileReadStream(std::FILE* fp, char* buffer, size_t bufferSize) : fp_(fp), buffer_(buffer), bufferSize_(bufferSize), bufferLast_(0), current_(buffer_), readCount_(0), count_(0), eof_(false) { \n        RAPIDJSON_ASSERT(fp_ != 0);\n        RAPIDJSON_ASSERT(bufferSize >= 4);\n        Read();\n    }\n\n    Ch Peek() const { return *current_; }\n    Ch Take() { Ch c = *current_; Read(); return c; }\n    size_t Tell() const { return count_ + static_cast<size_t>(current_ - buffer_); }\n\n    // Not implemented\n    void Put(Ch) { RAPIDJSON_ASSERT(false); }\n    void Flush() { RAPIDJSON_ASSERT(false); } \n    Ch* PutBegin() { RAPIDJSON_ASSERT(false); return 0; }\n    size_t PutEnd(Ch*) { RAPIDJSON_ASSERT(false); return 0; }\n\n    // For encoding detection only.\n    const Ch* Peek4() const {\n        return (current_ + 4 - !eof_ <= bufferLast_) ? current_ : 0;\n    }\n\nprivate:\n    void Read() {\n        if (current_ < bufferLast_)\n            ++current_;\n        else if (!eof_) {\n            count_ += readCount_;\n            readCount_ = std::fread(buffer_, 1, bufferSize_, fp_);\n            bufferLast_ = buffer_ + readCount_ - 1;\n            current_ = buffer_;\n\n            if (readCount_ < bufferSize_) {\n                buffer_[readCount_] = '\\0';\n                ++bufferLast_;\n                eof_ = true;\n            }\n        }\n    }\n\n    std::FILE* fp_;\n    Ch *buffer_;\n    size_t bufferSize_;\n    Ch *bufferLast_;\n    Ch *current_;\n    size_t readCount_;\n    size_t count_;  //!< Number of characters read\n    bool eof_;\n};\n\nRAPIDJSON_NAMESPACE_END\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_FILEREADSTREAM_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/filewritestream.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_FILEWRITESTREAM_H_\n#define RAPIDJSON_FILEWRITESTREAM_H_\n\n#include \"stream.h\"\n#include <cstdio>\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(unreachable-code)\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n//! Wrapper of C file stream for output using fwrite().\n/*!\n    \\note implements Stream concept\n*/\nclass FileWriteStream {\npublic:\n    typedef char Ch;    //!< Character type. 
Only support char.\n\n    FileWriteStream(std::FILE* fp, char* buffer, size_t bufferSize) : fp_(fp), buffer_(buffer), bufferEnd_(buffer + bufferSize), current_(buffer_) { \n        RAPIDJSON_ASSERT(fp_ != 0);\n    }\n\n    void Put(char c) { \n        if (current_ >= bufferEnd_)\n            Flush();\n\n        *current_++ = c;\n    }\n\n    void PutN(char c, size_t n) {\n        size_t avail = static_cast<size_t>(bufferEnd_ - current_);\n        while (n > avail) {\n            std::memset(current_, c, avail);\n            current_ += avail;\n            Flush();\n            n -= avail;\n            avail = static_cast<size_t>(bufferEnd_ - current_);\n        }\n\n        if (n > 0) {\n            std::memset(current_, c, n);\n            current_ += n;\n        }\n    }\n\n    void Flush() {\n        if (current_ != buffer_) {\n            size_t result = std::fwrite(buffer_, 1, static_cast<size_t>(current_ - buffer_), fp_);\n            if (result < static_cast<size_t>(current_ - buffer_)) {\n                // failure deliberately ignored at this time\n                // added to avoid warn_unused_result build errors\n            }\n            current_ = buffer_;\n        }\n    }\n\n    // Not implemented\n    char Peek() const { RAPIDJSON_ASSERT(false); return 0; }\n    char Take() { RAPIDJSON_ASSERT(false); return 0; }\n    size_t Tell() const { RAPIDJSON_ASSERT(false); return 0; }\n    char* PutBegin() { RAPIDJSON_ASSERT(false); return 0; }\n    size_t PutEnd(char*) { RAPIDJSON_ASSERT(false); return 0; }\n\nprivate:\n    // Prohibit copy constructor & assignment operator.\n    FileWriteStream(const FileWriteStream&);\n    FileWriteStream& operator=(const FileWriteStream&);\n\n    std::FILE* fp_;\n    char *buffer_;\n    char *bufferEnd_;\n    char *current_;\n};\n\n//! 
Implement specialized version of PutN() with memset() for better performance.\ntemplate<>\ninline void PutN(FileWriteStream& stream, char c, size_t n) {\n    stream.PutN(c, n);\n}\n\nRAPIDJSON_NAMESPACE_END\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_FILEWRITESTREAM_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/fwd.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_FWD_H_\n#define RAPIDJSON_FWD_H_\n\n#include \"rapidjson.h\"\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n// encodings.h\n\ntemplate<typename CharType> struct UTF8;\ntemplate<typename CharType> struct UTF16;\ntemplate<typename CharType> struct UTF16BE;\ntemplate<typename CharType> struct UTF16LE;\ntemplate<typename CharType> struct UTF32;\ntemplate<typename CharType> struct UTF32BE;\ntemplate<typename CharType> struct UTF32LE;\ntemplate<typename CharType> struct ASCII;\ntemplate<typename CharType> struct AutoUTF;\n\ntemplate<typename SourceEncoding, typename TargetEncoding>\nstruct Transcoder;\n\n// allocators.h\n\nclass CrtAllocator;\n\ntemplate <typename BaseAllocator>\nclass MemoryPoolAllocator;\n\n// stream.h\n\ntemplate <typename Encoding>\nstruct GenericStringStream;\n\ntypedef GenericStringStream<UTF8<char> > StringStream;\n\ntemplate <typename Encoding>\nstruct GenericInsituStringStream;\n\ntypedef GenericInsituStringStream<UTF8<char> > InsituStringStream;\n\n// stringbuffer.h\n\ntemplate <typename Encoding, typename Allocator>\nclass GenericStringBuffer;\n\ntypedef GenericStringBuffer<UTF8<char>, CrtAllocator> StringBuffer;\n\n// filereadstream.h\n\nclass FileReadStream;\n\n// filewritestream.h\n\nclass FileWriteStream;\n\n// 
memorybuffer.h\n\ntemplate <typename Allocator>\nstruct GenericMemoryBuffer;\n\ntypedef GenericMemoryBuffer<CrtAllocator> MemoryBuffer;\n\n// memorystream.h\n\nstruct MemoryStream;\n\n// reader.h\n\ntemplate<typename Encoding, typename Derived>\nstruct BaseReaderHandler;\n\ntemplate <typename SourceEncoding, typename TargetEncoding, typename StackAllocator>\nclass GenericReader;\n\ntypedef GenericReader<UTF8<char>, UTF8<char>, CrtAllocator> Reader;\n\n// writer.h\n\ntemplate<typename OutputStream, typename SourceEncoding, typename TargetEncoding, typename StackAllocator, unsigned writeFlags>\nclass Writer;\n\n// prettywriter.h\n\ntemplate<typename OutputStream, typename SourceEncoding, typename TargetEncoding, typename StackAllocator, unsigned writeFlags>\nclass PrettyWriter;\n\n// document.h\n\ntemplate <typename Encoding, typename Allocator> \nclass GenericMember;\n\ntemplate <bool Const, typename Encoding, typename Allocator>\nclass GenericMemberIterator;\n\ntemplate<typename CharType>\nstruct GenericStringRef;\n\ntemplate <typename Encoding, typename Allocator> \nclass GenericValue;\n\ntypedef GenericValue<UTF8<char>, MemoryPoolAllocator<CrtAllocator> > Value;\n\ntemplate <typename Encoding, typename Allocator, typename StackAllocator>\nclass GenericDocument;\n\ntypedef GenericDocument<UTF8<char>, MemoryPoolAllocator<CrtAllocator>, CrtAllocator> Document;\n\n// pointer.h\n\ntemplate <typename ValueType, typename Allocator>\nclass GenericPointer;\n\ntypedef GenericPointer<Value, CrtAllocator> Pointer;\n\n// schema.h\n\ntemplate <typename SchemaDocumentType>\nclass IGenericRemoteSchemaDocumentProvider;\n\ntemplate <typename ValueT, typename Allocator>\nclass GenericSchemaDocument;\n\ntypedef GenericSchemaDocument<Value, CrtAllocator> SchemaDocument;\ntypedef IGenericRemoteSchemaDocumentProvider<SchemaDocument> IRemoteSchemaDocumentProvider;\n\ntemplate <\n    typename SchemaDocumentType,\n    typename OutputHandler,\n    typename StateAllocator>\nclass 
GenericSchemaValidator;\n\ntypedef GenericSchemaValidator<SchemaDocument, BaseReaderHandler<UTF8<char>, void>, CrtAllocator> SchemaValidator;\n\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_FWD_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/internal/biginteger.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_BIGINTEGER_H_\n#define RAPIDJSON_BIGINTEGER_H_\n\n#include \"../rapidjson.h\"\n\n#if defined(_MSC_VER) && !defined(__INTEL_COMPILER) && defined(_M_AMD64)\n#include <intrin.h> // for _umul128\n#if !defined(_ARM64EC_)\n#pragma intrinsic(_umul128)\n#else\n#pragma comment(lib,\"softintrin\")\n#endif\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\nnamespace internal {\n\nclass BigInteger {\npublic:\n    typedef uint64_t Type;\n\n    BigInteger(const BigInteger& rhs) : count_(rhs.count_) {\n        std::memcpy(digits_, rhs.digits_, count_ * sizeof(Type));\n    }\n\n    explicit BigInteger(uint64_t u) : count_(1) {\n        digits_[0] = u;\n    }\n\n    template<typename Ch>\n    BigInteger(const Ch* decimals, size_t length) : count_(1) {\n        RAPIDJSON_ASSERT(length > 0);\n        digits_[0] = 0;\n        size_t i = 0;\n        const size_t kMaxDigitPerIteration = 19;  // 2^64 = 18446744073709551616 > 10^19\n        while (length >= kMaxDigitPerIteration) {\n            AppendDecimal64(decimals + i, decimals + i + kMaxDigitPerIteration);\n            length -= kMaxDigitPerIteration;\n            i += kMaxDigitPerIteration;\n        }\n\n        if (length > 0)\n            AppendDecimal64(decimals + i, decimals + i + length);\n    }\n    \n    
BigInteger& operator=(const BigInteger &rhs)\n    {\n        if (this != &rhs) {\n            count_ = rhs.count_;\n            std::memcpy(digits_, rhs.digits_, count_ * sizeof(Type));\n        }\n        return *this;\n    }\n    \n    BigInteger& operator=(uint64_t u) {\n        digits_[0] = u;            \n        count_ = 1;\n        return *this;\n    }\n\n    BigInteger& operator+=(uint64_t u) {\n        Type backup = digits_[0];\n        digits_[0] += u;\n        for (size_t i = 0; i < count_ - 1; i++) {\n            if (digits_[i] >= backup)\n                return *this; // no carry\n            backup = digits_[i + 1];\n            digits_[i + 1] += 1;\n        }\n\n        // Last carry\n        if (digits_[count_ - 1] < backup)\n            PushBack(1);\n\n        return *this;\n    }\n\n    BigInteger& operator*=(uint64_t u) {\n        if (u == 0) return *this = 0;\n        if (u == 1) return *this;\n        if (*this == 1) return *this = u;\n\n        uint64_t k = 0;\n        for (size_t i = 0; i < count_; i++) {\n            uint64_t hi;\n            digits_[i] = MulAdd64(digits_[i], u, k, &hi);\n            k = hi;\n        }\n        \n        if (k > 0)\n            PushBack(k);\n\n        return *this;\n    }\n\n    BigInteger& operator*=(uint32_t u) {\n        if (u == 0) return *this = 0;\n        if (u == 1) return *this;\n        if (*this == 1) return *this = u;\n\n        uint64_t k = 0;\n        for (size_t i = 0; i < count_; i++) {\n            const uint64_t c = digits_[i] >> 32;\n            const uint64_t d = digits_[i] & 0xFFFFFFFF;\n            const uint64_t uc = u * c;\n            const uint64_t ud = u * d;\n            const uint64_t p0 = ud + k;\n            const uint64_t p1 = uc + (p0 >> 32);\n            digits_[i] = (p0 & 0xFFFFFFFF) | (p1 << 32);\n            k = p1 >> 32;\n        }\n        \n        if (k > 0)\n            PushBack(k);\n\n        return *this;\n    }\n\n    BigInteger& operator<<=(size_t shift) {\n      
  if (IsZero() || shift == 0) return *this;\n\n        size_t offset = shift / kTypeBit;\n        size_t interShift = shift % kTypeBit;\n        RAPIDJSON_ASSERT(count_ + offset <= kCapacity);\n\n        if (interShift == 0) {\n            std::memmove(digits_ + offset, digits_, count_ * sizeof(Type));\n            count_ += offset;\n        }\n        else {\n            digits_[count_] = 0;\n            for (size_t i = count_; i > 0; i--)\n                digits_[i + offset] = (digits_[i] << interShift) | (digits_[i - 1] >> (kTypeBit - interShift));\n            digits_[offset] = digits_[0] << interShift;\n            count_ += offset;\n            if (digits_[count_])\n                count_++;\n        }\n\n        std::memset(digits_, 0, offset * sizeof(Type));\n\n        return *this;\n    }\n\n    bool operator==(const BigInteger& rhs) const {\n        return count_ == rhs.count_ && std::memcmp(digits_, rhs.digits_, count_ * sizeof(Type)) == 0;\n    }\n\n    bool operator==(const Type rhs) const {\n        return count_ == 1 && digits_[0] == rhs;\n    }\n\n    BigInteger& MultiplyPow5(unsigned exp) {\n        static const uint32_t kPow5[12] = {\n            5,\n            5 * 5,\n            5 * 5 * 5,\n            5 * 5 * 5 * 5,\n            5 * 5 * 5 * 5 * 5,\n            5 * 5 * 5 * 5 * 5 * 5,\n            5 * 5 * 5 * 5 * 5 * 5 * 5,\n            5 * 5 * 5 * 5 * 5 * 5 * 5 * 5,\n            5 * 5 * 5 * 5 * 5 * 5 * 5 * 5 * 5,\n            5 * 5 * 5 * 5 * 5 * 5 * 5 * 5 * 5 * 5,\n            5 * 5 * 5 * 5 * 5 * 5 * 5 * 5 * 5 * 5 * 5,\n            5 * 5 * 5 * 5 * 5 * 5 * 5 * 5 * 5 * 5 * 5 * 5\n        };\n        if (exp == 0) return *this;\n        for (; exp >= 27; exp -= 27) *this *= RAPIDJSON_UINT64_C2(0X6765C793, 0XFA10079D); // 5^27\n        for (; exp >= 13; exp -= 13) *this *= static_cast<uint32_t>(1220703125u); // 5^13\n        if (exp > 0)                 *this *= kPow5[exp - 1];\n        return *this;\n    }\n\n    // Compute absolute difference of 
this and rhs.\n    // Assume this != rhs\n    bool Difference(const BigInteger& rhs, BigInteger* out) const {\n        int cmp = Compare(rhs);\n        RAPIDJSON_ASSERT(cmp != 0);\n        const BigInteger *a, *b;  // Makes a > b\n        bool ret;\n        if (cmp < 0) { a = &rhs; b = this; ret = true; }\n        else         { a = this; b = &rhs; ret = false; }\n\n        Type borrow = 0;\n        for (size_t i = 0; i < a->count_; i++) {\n            Type d = a->digits_[i] - borrow;\n            if (i < b->count_)\n                d -= b->digits_[i];\n            borrow = (d > a->digits_[i]) ? 1 : 0;\n            out->digits_[i] = d;\n            if (d != 0)\n                out->count_ = i + 1;\n        }\n\n        return ret;\n    }\n\n    int Compare(const BigInteger& rhs) const {\n        if (count_ != rhs.count_)\n            return count_ < rhs.count_ ? -1 : 1;\n\n        for (size_t i = count_; i-- > 0;)\n            if (digits_[i] != rhs.digits_[i])\n                return digits_[i] < rhs.digits_[i] ? 
-1 : 1;\n\n        return 0;\n    }\n\n    size_t GetCount() const { return count_; }\n    Type GetDigit(size_t index) const { RAPIDJSON_ASSERT(index < count_); return digits_[index]; }\n    bool IsZero() const { return count_ == 1 && digits_[0] == 0; }\n\nprivate:\n    template<typename Ch>\n    void AppendDecimal64(const Ch* begin, const Ch* end) {\n        uint64_t u = ParseUint64(begin, end);\n        if (IsZero())\n            *this = u;\n        else {\n            unsigned exp = static_cast<unsigned>(end - begin);\n            (MultiplyPow5(exp) <<= exp) += u;   // *this = *this * 10^exp + u\n        }\n    }\n\n    void PushBack(Type digit) {\n        RAPIDJSON_ASSERT(count_ < kCapacity);\n        digits_[count_++] = digit;\n    }\n\n    template<typename Ch>\n    static uint64_t ParseUint64(const Ch* begin, const Ch* end) {\n        uint64_t r = 0;\n        for (const Ch* p = begin; p != end; ++p) {\n            RAPIDJSON_ASSERT(*p >= Ch('0') && *p <= Ch('9'));\n            r = r * 10u + static_cast<unsigned>(*p - Ch('0'));\n        }\n        return r;\n    }\n\n    // Assume a * b + k < 2^128\n    static uint64_t MulAdd64(uint64_t a, uint64_t b, uint64_t k, uint64_t* outHigh) {\n#if defined(_MSC_VER) && defined(_M_AMD64)\n        uint64_t low = _umul128(a, b, outHigh) + k;\n        if (low < k)\n            (*outHigh)++;\n        return low;\n#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) && defined(__x86_64__)\n        __extension__ typedef unsigned __int128 uint128;\n        uint128 p = static_cast<uint128>(a) * static_cast<uint128>(b);\n        p += k;\n        *outHigh = static_cast<uint64_t>(p >> 64);\n        return static_cast<uint64_t>(p);\n#else\n        const uint64_t a0 = a & 0xFFFFFFFF, a1 = a >> 32, b0 = b & 0xFFFFFFFF, b1 = b >> 32;\n        uint64_t x0 = a0 * b0, x1 = a0 * b1, x2 = a1 * b0, x3 = a1 * b1;\n        x1 += (x0 >> 32); // can't give carry\n        x1 += x2;\n        if (x1 < x2)\n            
x3 += (static_cast<uint64_t>(1) << 32);\n        uint64_t lo = (x1 << 32) + (x0 & 0xFFFFFFFF);\n        uint64_t hi = x3 + (x1 >> 32);\n\n        lo += k;\n        if (lo < k)\n            hi++;\n        *outHigh = hi;\n        return lo;\n#endif\n    }\n\n    static const size_t kBitCount = 3328;  // 64bit * 54 > 10^1000\n    static const size_t kCapacity = kBitCount / sizeof(Type);\n    static const size_t kTypeBit = sizeof(Type) * 8;\n\n    Type digits_[kCapacity];\n    size_t count_;\n};\n\n} // namespace internal\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_BIGINTEGER_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/internal/clzll.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_CLZLL_H_\n#define RAPIDJSON_CLZLL_H_\n\n#include \"../rapidjson.h\"\n\n#if defined(_MSC_VER) && !defined(UNDER_CE)\n#include <intrin.h>\n#if defined(_WIN64)\n#pragma intrinsic(_BitScanReverse64)\n#else\n#pragma intrinsic(_BitScanReverse)\n#endif\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\nnamespace internal {\n\ninline uint32_t clzll(uint64_t x) {\n    // Passing 0 to __builtin_clzll is UB in GCC and results in an\n    // infinite loop in the software implementation.\n    RAPIDJSON_ASSERT(x != 0);\n\n#if defined(_MSC_VER) && !defined(UNDER_CE)\n    unsigned long r = 0;\n#if defined(_WIN64)\n    _BitScanReverse64(&r, x);\n#else\n    // Scan the high 32 bits.\n    if (_BitScanReverse(&r, static_cast<uint32_t>(x >> 32)))\n        return 63 - (r + 32);\n\n    // Scan the low 32 bits.\n    _BitScanReverse(&r, static_cast<uint32_t>(x & 0xFFFFFFFF));\n#endif // _WIN64\n\n    return 63 - r;\n#elif (defined(__GNUC__) && __GNUC__ >= 4) || RAPIDJSON_HAS_BUILTIN(__builtin_clzll)\n    // __builtin_clzll wrapper\n    return static_cast<uint32_t>(__builtin_clzll(x));\n#else\n    // naive version\n    uint32_t r = 0;\n    while (!(x & (static_cast<uint64_t>(1) << 63))) {\n        x <<= 1;\n        ++r;\n    }\n\n    return r;\n#endif // _MSC_VER\n}\n\n#define 
RAPIDJSON_CLZLL RAPIDJSON_NAMESPACE::internal::clzll\n\n} // namespace internal\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_CLZLL_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/internal/diyfp.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n// This is a C++ header-only implementation of Grisu2 algorithm from the publication:\n// Loitsch, Florian. \"Printing floating-point numbers quickly and accurately with\n// integers.\" ACM Sigplan Notices 45.6 (2010): 233-243.\n\n#ifndef RAPIDJSON_DIYFP_H_\n#define RAPIDJSON_DIYFP_H_\n\n#include \"../rapidjson.h\"\n#include \"clzll.h\"\n#include <limits>\n\n#if defined(_MSC_VER) && defined(_M_AMD64) && !defined(__INTEL_COMPILER)\n#include <intrin.h>\n#if !defined(_ARM64EC_)\n#pragma intrinsic(_umul128)\n#else\n#pragma comment(lib,\"softintrin\")\n#endif\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\nnamespace internal {\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++)\n#endif\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(padded)\n#endif\n\nstruct DiyFp {\n    DiyFp() : f(), e() {}\n\n    DiyFp(uint64_t fp, int exp) : f(fp), e(exp) {}\n\n    explicit DiyFp(double d) {\n        union {\n            double d;\n            uint64_t u64;\n        } u = { d };\n\n        int biased_e = static_cast<int>((u.u64 & kDpExponentMask) >> kDpSignificandSize);\n        uint64_t significand = (u.u64 & kDpSignificandMask);\n        if (biased_e != 0) {\n            f = significand + kDpHiddenBit;\n            e = biased_e - kDpExponentBias;\n     
   }\n        else {\n            f = significand;\n            e = kDpMinExponent + 1;\n        }\n    }\n\n    DiyFp operator-(const DiyFp& rhs) const {\n        return DiyFp(f - rhs.f, e);\n    }\n\n    DiyFp operator*(const DiyFp& rhs) const {\n#if defined(_MSC_VER) && defined(_M_AMD64)\n        uint64_t h;\n        uint64_t l = _umul128(f, rhs.f, &h);\n        if (l & (uint64_t(1) << 63)) // rounding\n            h++;\n        return DiyFp(h, e + rhs.e + 64);\n#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) && defined(__x86_64__)\n        __extension__ typedef unsigned __int128 uint128;\n        uint128 p = static_cast<uint128>(f) * static_cast<uint128>(rhs.f);\n        uint64_t h = static_cast<uint64_t>(p >> 64);\n        uint64_t l = static_cast<uint64_t>(p);\n        if (l & (uint64_t(1) << 63)) // rounding\n            h++;\n        return DiyFp(h, e + rhs.e + 64);\n#else\n        const uint64_t M32 = 0xFFFFFFFF;\n        const uint64_t a = f >> 32;\n        const uint64_t b = f & M32;\n        const uint64_t c = rhs.f >> 32;\n        const uint64_t d = rhs.f & M32;\n        const uint64_t ac = a * c;\n        const uint64_t bc = b * c;\n        const uint64_t ad = a * d;\n        const uint64_t bd = b * d;\n        uint64_t tmp = (bd >> 32) + (ad & M32) + (bc & M32);\n        tmp += 1U << 31;  /// mult_round\n        return DiyFp(ac + (ad >> 32) + (bc >> 32) + (tmp >> 32), e + rhs.e + 64);\n#endif\n    }\n\n    DiyFp Normalize() const {\n        int s = static_cast<int>(clzll(f));\n        return DiyFp(f << s, e - s);\n    }\n\n    DiyFp NormalizeBoundary() const {\n        DiyFp res = *this;\n        while (!(res.f & (kDpHiddenBit << 1))) {\n            res.f <<= 1;\n            res.e--;\n        }\n        res.f <<= (kDiySignificandSize - kDpSignificandSize - 2);\n        res.e = res.e - (kDiySignificandSize - kDpSignificandSize - 2);\n        return res;\n    }\n\n    void NormalizedBoundaries(DiyFp* minus, DiyFp* 
plus) const {\n        DiyFp pl = DiyFp((f << 1) + 1, e - 1).NormalizeBoundary();\n        DiyFp mi = (f == kDpHiddenBit) ? DiyFp((f << 2) - 1, e - 2) : DiyFp((f << 1) - 1, e - 1);\n        mi.f <<= mi.e - pl.e;\n        mi.e = pl.e;\n        *plus = pl;\n        *minus = mi;\n    }\n\n    double ToDouble() const {\n        union {\n            double d;\n            uint64_t u64;\n        }u;\n        RAPIDJSON_ASSERT(f <= kDpHiddenBit + kDpSignificandMask);\n        if (e < kDpDenormalExponent) {\n            // Underflow.\n            return 0.0;\n        }\n        if (e >= kDpMaxExponent) {\n            // Overflow.\n            return std::numeric_limits<double>::infinity();\n        }\n        const uint64_t be = (e == kDpDenormalExponent && (f & kDpHiddenBit) == 0) ? 0 :\n            static_cast<uint64_t>(e + kDpExponentBias);\n        u.u64 = (f & kDpSignificandMask) | (be << kDpSignificandSize);\n        return u.d;\n    }\n\n    static const int kDiySignificandSize = 64;\n    static const int kDpSignificandSize = 52;\n    static const int kDpExponentBias = 0x3FF + kDpSignificandSize;\n    static const int kDpMaxExponent = 0x7FF - kDpExponentBias;\n    static const int kDpMinExponent = -kDpExponentBias;\n    static const int kDpDenormalExponent = -kDpExponentBias + 1;\n    static const uint64_t kDpExponentMask = RAPIDJSON_UINT64_C2(0x7FF00000, 0x00000000);\n    static const uint64_t kDpSignificandMask = RAPIDJSON_UINT64_C2(0x000FFFFF, 0xFFFFFFFF);\n    static const uint64_t kDpHiddenBit = RAPIDJSON_UINT64_C2(0x00100000, 0x00000000);\n\n    uint64_t f;\n    int e;\n};\n\ninline DiyFp GetCachedPowerByIndex(size_t index) {\n    // 10^-348, 10^-340, ..., 10^340\n    static const uint64_t kCachedPowers_F[] = {\n        RAPIDJSON_UINT64_C2(0xfa8fd5a0, 0x081c0288), RAPIDJSON_UINT64_C2(0xbaaee17f, 0xa23ebf76),\n        RAPIDJSON_UINT64_C2(0x8b16fb20, 0x3055ac76), RAPIDJSON_UINT64_C2(0xcf42894a, 0x5dce35ea),\n        RAPIDJSON_UINT64_C2(0x9a6bb0aa, 0x55653b2d), 
RAPIDJSON_UINT64_C2(0xe61acf03, 0x3d1a45df),\n        RAPIDJSON_UINT64_C2(0xab70fe17, 0xc79ac6ca), RAPIDJSON_UINT64_C2(0xff77b1fc, 0xbebcdc4f),\n        RAPIDJSON_UINT64_C2(0xbe5691ef, 0x416bd60c), RAPIDJSON_UINT64_C2(0x8dd01fad, 0x907ffc3c),\n        RAPIDJSON_UINT64_C2(0xd3515c28, 0x31559a83), RAPIDJSON_UINT64_C2(0x9d71ac8f, 0xada6c9b5),\n        RAPIDJSON_UINT64_C2(0xea9c2277, 0x23ee8bcb), RAPIDJSON_UINT64_C2(0xaecc4991, 0x4078536d),\n        RAPIDJSON_UINT64_C2(0x823c1279, 0x5db6ce57), RAPIDJSON_UINT64_C2(0xc2109436, 0x4dfb5637),\n        RAPIDJSON_UINT64_C2(0x9096ea6f, 0x3848984f), RAPIDJSON_UINT64_C2(0xd77485cb, 0x25823ac7),\n        RAPIDJSON_UINT64_C2(0xa086cfcd, 0x97bf97f4), RAPIDJSON_UINT64_C2(0xef340a98, 0x172aace5),\n        RAPIDJSON_UINT64_C2(0xb23867fb, 0x2a35b28e), RAPIDJSON_UINT64_C2(0x84c8d4df, 0xd2c63f3b),\n        RAPIDJSON_UINT64_C2(0xc5dd4427, 0x1ad3cdba), RAPIDJSON_UINT64_C2(0x936b9fce, 0xbb25c996),\n        RAPIDJSON_UINT64_C2(0xdbac6c24, 0x7d62a584), RAPIDJSON_UINT64_C2(0xa3ab6658, 0x0d5fdaf6),\n        RAPIDJSON_UINT64_C2(0xf3e2f893, 0xdec3f126), RAPIDJSON_UINT64_C2(0xb5b5ada8, 0xaaff80b8),\n        RAPIDJSON_UINT64_C2(0x87625f05, 0x6c7c4a8b), RAPIDJSON_UINT64_C2(0xc9bcff60, 0x34c13053),\n        RAPIDJSON_UINT64_C2(0x964e858c, 0x91ba2655), RAPIDJSON_UINT64_C2(0xdff97724, 0x70297ebd),\n        RAPIDJSON_UINT64_C2(0xa6dfbd9f, 0xb8e5b88f), RAPIDJSON_UINT64_C2(0xf8a95fcf, 0x88747d94),\n        RAPIDJSON_UINT64_C2(0xb9447093, 0x8fa89bcf), RAPIDJSON_UINT64_C2(0x8a08f0f8, 0xbf0f156b),\n        RAPIDJSON_UINT64_C2(0xcdb02555, 0x653131b6), RAPIDJSON_UINT64_C2(0x993fe2c6, 0xd07b7fac),\n        RAPIDJSON_UINT64_C2(0xe45c10c4, 0x2a2b3b06), RAPIDJSON_UINT64_C2(0xaa242499, 0x697392d3),\n        RAPIDJSON_UINT64_C2(0xfd87b5f2, 0x8300ca0e), RAPIDJSON_UINT64_C2(0xbce50864, 0x92111aeb),\n        RAPIDJSON_UINT64_C2(0x8cbccc09, 0x6f5088cc), RAPIDJSON_UINT64_C2(0xd1b71758, 0xe219652c),\n        RAPIDJSON_UINT64_C2(0x9c400000, 0x00000000), 
RAPIDJSON_UINT64_C2(0xe8d4a510, 0x00000000),\n        RAPIDJSON_UINT64_C2(0xad78ebc5, 0xac620000), RAPIDJSON_UINT64_C2(0x813f3978, 0xf8940984),\n        RAPIDJSON_UINT64_C2(0xc097ce7b, 0xc90715b3), RAPIDJSON_UINT64_C2(0x8f7e32ce, 0x7bea5c70),\n        RAPIDJSON_UINT64_C2(0xd5d238a4, 0xabe98068), RAPIDJSON_UINT64_C2(0x9f4f2726, 0x179a2245),\n        RAPIDJSON_UINT64_C2(0xed63a231, 0xd4c4fb27), RAPIDJSON_UINT64_C2(0xb0de6538, 0x8cc8ada8),\n        RAPIDJSON_UINT64_C2(0x83c7088e, 0x1aab65db), RAPIDJSON_UINT64_C2(0xc45d1df9, 0x42711d9a),\n        RAPIDJSON_UINT64_C2(0x924d692c, 0xa61be758), RAPIDJSON_UINT64_C2(0xda01ee64, 0x1a708dea),\n        RAPIDJSON_UINT64_C2(0xa26da399, 0x9aef774a), RAPIDJSON_UINT64_C2(0xf209787b, 0xb47d6b85),\n        RAPIDJSON_UINT64_C2(0xb454e4a1, 0x79dd1877), RAPIDJSON_UINT64_C2(0x865b8692, 0x5b9bc5c2),\n        RAPIDJSON_UINT64_C2(0xc83553c5, 0xc8965d3d), RAPIDJSON_UINT64_C2(0x952ab45c, 0xfa97a0b3),\n        RAPIDJSON_UINT64_C2(0xde469fbd, 0x99a05fe3), RAPIDJSON_UINT64_C2(0xa59bc234, 0xdb398c25),\n        RAPIDJSON_UINT64_C2(0xf6c69a72, 0xa3989f5c), RAPIDJSON_UINT64_C2(0xb7dcbf53, 0x54e9bece),\n        RAPIDJSON_UINT64_C2(0x88fcf317, 0xf22241e2), RAPIDJSON_UINT64_C2(0xcc20ce9b, 0xd35c78a5),\n        RAPIDJSON_UINT64_C2(0x98165af3, 0x7b2153df), RAPIDJSON_UINT64_C2(0xe2a0b5dc, 0x971f303a),\n        RAPIDJSON_UINT64_C2(0xa8d9d153, 0x5ce3b396), RAPIDJSON_UINT64_C2(0xfb9b7cd9, 0xa4a7443c),\n        RAPIDJSON_UINT64_C2(0xbb764c4c, 0xa7a44410), RAPIDJSON_UINT64_C2(0x8bab8eef, 0xb6409c1a),\n        RAPIDJSON_UINT64_C2(0xd01fef10, 0xa657842c), RAPIDJSON_UINT64_C2(0x9b10a4e5, 0xe9913129),\n        RAPIDJSON_UINT64_C2(0xe7109bfb, 0xa19c0c9d), RAPIDJSON_UINT64_C2(0xac2820d9, 0x623bf429),\n        RAPIDJSON_UINT64_C2(0x80444b5e, 0x7aa7cf85), RAPIDJSON_UINT64_C2(0xbf21e440, 0x03acdd2d),\n        RAPIDJSON_UINT64_C2(0x8e679c2f, 0x5e44ff8f), RAPIDJSON_UINT64_C2(0xd433179d, 0x9c8cb841),\n        RAPIDJSON_UINT64_C2(0x9e19db92, 0xb4e31ba9), 
RAPIDJSON_UINT64_C2(0xeb96bf6e, 0xbadf77d9),\n        RAPIDJSON_UINT64_C2(0xaf87023b, 0x9bf0ee6b)\n    };\n    static const int16_t kCachedPowers_E[] = {\n        -1220, -1193, -1166, -1140, -1113, -1087, -1060, -1034, -1007,  -980,\n        -954,  -927,  -901,  -874,  -847,  -821,  -794,  -768,  -741,  -715,\n        -688,  -661,  -635,  -608,  -582,  -555,  -529,  -502,  -475,  -449,\n        -422,  -396,  -369,  -343,  -316,  -289,  -263,  -236,  -210,  -183,\n        -157,  -130,  -103,   -77,   -50,   -24,     3,    30,    56,    83,\n        109,   136,   162,   189,   216,   242,   269,   295,   322,   348,\n        375,   402,   428,   455,   481,   508,   534,   561,   588,   614,\n        641,   667,   694,   720,   747,   774,   800,   827,   853,   880,\n        907,   933,   960,   986,  1013,  1039,  1066\n    };\n    RAPIDJSON_ASSERT(index < 87);\n    return DiyFp(kCachedPowers_F[index], kCachedPowers_E[index]);\n}\n\ninline DiyFp GetCachedPower(int e, int* K) {\n\n    //int k = static_cast<int>(ceil((-61 - e) * 0.30102999566398114)) + 374;\n    double dk = (-61 - e) * 0.30102999566398114 + 347;  // dk must be positive, so can do ceiling in positive\n    int k = static_cast<int>(dk);\n    if (dk - k > 0.0)\n        k++;\n\n    unsigned index = static_cast<unsigned>((k >> 3) + 1);\n    *K = -(-348 + static_cast<int>(index << 3));    // decimal exponent no need lookup table\n\n    return GetCachedPowerByIndex(index);\n}\n\ninline DiyFp GetCachedPower10(int exp, int *outExp) {\n    RAPIDJSON_ASSERT(exp >= -348);\n    unsigned index = static_cast<unsigned>(exp + 348) / 8u;\n    *outExp = -348 + static_cast<int>(index) * 8;\n    return GetCachedPowerByIndex(index);\n}\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_POP\n#endif\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\nRAPIDJSON_DIAG_OFF(padded)\n#endif\n\n} // namespace internal\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_DIYFP_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/internal/dtoa.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n// This is a C++ header-only implementation of Grisu2 algorithm from the publication:\n// Loitsch, Florian. \"Printing floating-point numbers quickly and accurately with\n// integers.\" ACM Sigplan Notices 45.6 (2010): 233-243.\n\n#ifndef RAPIDJSON_DTOA_\n#define RAPIDJSON_DTOA_\n\n#include \"itoa.h\" // GetDigitsLut()\n#include \"diyfp.h\"\n#include \"ieee754.h\"\n\nRAPIDJSON_NAMESPACE_BEGIN\nnamespace internal {\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++)\nRAPIDJSON_DIAG_OFF(array-bounds) // some gcc versions generate wrong warnings https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59124\n#endif\n\ninline void GrisuRound(char* buffer, int len, uint64_t delta, uint64_t rest, uint64_t ten_kappa, uint64_t wp_w) {\n    while (rest < wp_w && delta - rest >= ten_kappa &&\n           (rest + ten_kappa < wp_w ||  /// closer\n            wp_w - rest > rest + ten_kappa - wp_w)) {\n        buffer[len - 1]--;\n        rest += ten_kappa;\n    }\n}\n\ninline int CountDecimalDigit32(uint32_t n) {\n    // Simple pure C++ implementation was faster than __builtin_clz version in this situation.\n    if (n < 10) return 1;\n    if (n < 100) return 2;\n    if (n < 1000) return 3;\n    if (n < 10000) return 4;\n    if (n < 100000) return 5;\n    if (n < 
1000000) return 6;\n    if (n < 10000000) return 7;\n    if (n < 100000000) return 8;\n    // Will not reach 10 digits in DigitGen()\n    //if (n < 1000000000) return 9;\n    //return 10;\n    return 9;\n}\n\ninline void DigitGen(const DiyFp& W, const DiyFp& Mp, uint64_t delta, char* buffer, int* len, int* K) {\n    static const uint64_t kPow10[] = { 1ULL, 10ULL, 100ULL, 1000ULL, 10000ULL, 100000ULL, 1000000ULL, 10000000ULL, 100000000ULL,\n                                       1000000000ULL, 10000000000ULL, 100000000000ULL, 1000000000000ULL,\n                                       10000000000000ULL, 100000000000000ULL, 1000000000000000ULL,\n                                       10000000000000000ULL, 100000000000000000ULL, 1000000000000000000ULL,\n                                       10000000000000000000ULL };\n    const DiyFp one(uint64_t(1) << -Mp.e, Mp.e);\n    const DiyFp wp_w = Mp - W;\n    uint32_t p1 = static_cast<uint32_t>(Mp.f >> -one.e);\n    uint64_t p2 = Mp.f & (one.f - 1);\n    int kappa = CountDecimalDigit32(p1); // kappa in [0, 9]\n    *len = 0;\n\n    while (kappa > 0) {\n        uint32_t d = 0;\n        switch (kappa) {\n            case  9: d = p1 /  100000000; p1 %=  100000000; break;\n            case  8: d = p1 /   10000000; p1 %=   10000000; break;\n            case  7: d = p1 /    1000000; p1 %=    1000000; break;\n            case  6: d = p1 /     100000; p1 %=     100000; break;\n            case  5: d = p1 /      10000; p1 %=      10000; break;\n            case  4: d = p1 /       1000; p1 %=       1000; break;\n            case  3: d = p1 /        100; p1 %=        100; break;\n            case  2: d = p1 /         10; p1 %=         10; break;\n            case  1: d = p1;              p1 =           0; break;\n            default:;\n        }\n        if (d || *len)\n            buffer[(*len)++] = static_cast<char>('0' + static_cast<char>(d));\n        kappa--;\n        uint64_t tmp = (static_cast<uint64_t>(p1) << -one.e) + p2;\n      
  if (tmp <= delta) {\n            *K += kappa;\n            GrisuRound(buffer, *len, delta, tmp, kPow10[kappa] << -one.e, wp_w.f);\n            return;\n        }\n    }\n\n    // kappa = 0\n    for (;;) {\n        p2 *= 10;\n        delta *= 10;\n        char d = static_cast<char>(p2 >> -one.e);\n        if (d || *len)\n            buffer[(*len)++] = static_cast<char>('0' + d);\n        p2 &= one.f - 1;\n        kappa--;\n        if (p2 < delta) {\n            *K += kappa;\n            int index = -kappa;\n            GrisuRound(buffer, *len, delta, p2, one.f, wp_w.f * (index < 20 ? kPow10[index] : 0));\n            return;\n        }\n    }\n}\n\ninline void Grisu2(double value, char* buffer, int* length, int* K) {\n    const DiyFp v(value);\n    DiyFp w_m, w_p;\n    v.NormalizedBoundaries(&w_m, &w_p);\n\n    const DiyFp c_mk = GetCachedPower(w_p.e, K);\n    const DiyFp W = v.Normalize() * c_mk;\n    DiyFp Wp = w_p * c_mk;\n    DiyFp Wm = w_m * c_mk;\n    Wm.f++;\n    Wp.f--;\n    DigitGen(W, Wp, Wp.f - Wm.f, buffer, length, K);\n}\n\ninline char* WriteExponent(int K, char* buffer) {\n    if (K < 0) {\n        *buffer++ = '-';\n        K = -K;\n    }\n\n    if (K >= 100) {\n        *buffer++ = static_cast<char>('0' + static_cast<char>(K / 100));\n        K %= 100;\n        const char* d = GetDigitsLut() + K * 2;\n        *buffer++ = d[0];\n        *buffer++ = d[1];\n    }\n    else if (K >= 10) {\n        const char* d = GetDigitsLut() + K * 2;\n        *buffer++ = d[0];\n        *buffer++ = d[1];\n    }\n    else\n        *buffer++ = static_cast<char>('0' + static_cast<char>(K));\n\n    return buffer;\n}\n\ninline char* Prettify(char* buffer, int length, int k, int maxDecimalPlaces) {\n    const int kk = length + k;  // 10^(kk-1) <= v < 10^kk\n\n    if (0 <= k && kk <= 21) {\n        // 1234e7 -> 12340000000\n        for (int i = length; i < kk; i++)\n            buffer[i] = '0';\n        buffer[kk] = '.';\n        buffer[kk + 1] = '0';\n        return 
&buffer[kk + 2];\n    }\n    else if (0 < kk && kk <= 21) {\n        // 1234e-2 -> 12.34\n        std::memmove(&buffer[kk + 1], &buffer[kk], static_cast<size_t>(length - kk));\n        buffer[kk] = '.';\n        if (0 > k + maxDecimalPlaces) {\n            // When maxDecimalPlaces = 2, 1.2345 -> 1.23, 1.102 -> 1.1\n            // Remove extra trailing zeros (at least one) after truncation.\n            for (int i = kk + maxDecimalPlaces; i > kk + 1; i--)\n                if (buffer[i] != '0')\n                    return &buffer[i + 1];\n            return &buffer[kk + 2]; // Reserve one zero\n        }\n        else\n            return &buffer[length + 1];\n    }\n    else if (-6 < kk && kk <= 0) {\n        // 1234e-6 -> 0.001234\n        const int offset = 2 - kk;\n        std::memmove(&buffer[offset], &buffer[0], static_cast<size_t>(length));\n        buffer[0] = '0';\n        buffer[1] = '.';\n        for (int i = 2; i < offset; i++)\n            buffer[i] = '0';\n        if (length - kk > maxDecimalPlaces) {\n            // When maxDecimalPlaces = 2, 0.123 -> 0.12, 0.102 -> 0.1\n            // Remove extra trailing zeros (at least one) after truncation.\n            for (int i = maxDecimalPlaces + 1; i > 2; i--)\n                if (buffer[i] != '0')\n                    return &buffer[i + 1];\n            return &buffer[3]; // Reserve one zero\n        }\n        else\n            return &buffer[length + offset];\n    }\n    else if (kk < -maxDecimalPlaces) {\n        // Truncate to zero\n        buffer[0] = '0';\n        buffer[1] = '.';\n        buffer[2] = '0';\n        return &buffer[3];\n    }\n    else if (length == 1) {\n        // 1e30\n        buffer[1] = 'e';\n        return WriteExponent(kk - 1, &buffer[2]);\n    }\n    else {\n        // 1234e30 -> 1.234e33\n        std::memmove(&buffer[2], &buffer[1], static_cast<size_t>(length - 1));\n        buffer[1] = '.';\n        buffer[length + 1] = 'e';\n        return WriteExponent(kk - 1, &buffer[0 + 
length + 2]);\n    }\n}\n\ninline char* dtoa(double value, char* buffer, int maxDecimalPlaces = 324) {\n    RAPIDJSON_ASSERT(maxDecimalPlaces >= 1);\n    Double d(value);\n    if (d.IsZero()) {\n        if (d.Sign())\n            *buffer++ = '-';     // -0.0, Issue #289\n        buffer[0] = '0';\n        buffer[1] = '.';\n        buffer[2] = '0';\n        return &buffer[3];\n    }\n    else {\n        if (value < 0) {\n            *buffer++ = '-';\n            value = -value;\n        }\n        int length, K;\n        Grisu2(value, buffer, &length, &K);\n        return Prettify(buffer, length, K, maxDecimalPlaces);\n    }\n}\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_POP\n#endif\n\n} // namespace internal\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_DTOA_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/internal/ieee754.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_IEEE754_\n#define RAPIDJSON_IEEE754_\n\n#include \"../rapidjson.h\"\n\nRAPIDJSON_NAMESPACE_BEGIN\nnamespace internal {\n\nclass Double {\npublic:\n    Double() {}\n    Double(double d) : d_(d) {}\n    Double(uint64_t u) : u_(u) {}\n\n    double Value() const { return d_; }\n    uint64_t Uint64Value() const { return u_; }\n\n    double NextPositiveDouble() const {\n        RAPIDJSON_ASSERT(!Sign());\n        return Double(u_ + 1).Value();\n    }\n\n    bool Sign() const { return (u_ & kSignMask) != 0; }\n    uint64_t Significand() const { return u_ & kSignificandMask; }\n    int Exponent() const { return static_cast<int>(((u_ & kExponentMask) >> kSignificandSize) - kExponentBias); }\n\n    bool IsNan() const { return (u_ & kExponentMask) == kExponentMask && Significand() != 0; }\n    bool IsInf() const { return (u_ & kExponentMask) == kExponentMask && Significand() == 0; }\n    bool IsNanOrInf() const { return (u_ & kExponentMask) == kExponentMask; }\n    bool IsNormal() const { return (u_ & kExponentMask) != 0 || Significand() == 0; }\n    bool IsZero() const { return (u_ & (kExponentMask | kSignificandMask)) == 0; }\n\n    uint64_t IntegerSignificand() const { return IsNormal() ? 
Significand() | kHiddenBit : Significand(); }\n    int IntegerExponent() const { return (IsNormal() ? Exponent() : kDenormalExponent) - kSignificandSize; }\n    uint64_t ToBias() const { return (u_ & kSignMask) ? ~u_ + 1 : u_ | kSignMask; }\n\n    static int EffectiveSignificandSize(int order) {\n        if (order >= -1021)\n            return 53;\n        else if (order <= -1074)\n            return 0;\n        else\n            return order + 1074;\n    }\n\nprivate:\n    static const int kSignificandSize = 52;\n    static const int kExponentBias = 0x3FF;\n    static const int kDenormalExponent = 1 - kExponentBias;\n    static const uint64_t kSignMask = RAPIDJSON_UINT64_C2(0x80000000, 0x00000000);\n    static const uint64_t kExponentMask = RAPIDJSON_UINT64_C2(0x7FF00000, 0x00000000);\n    static const uint64_t kSignificandMask = RAPIDJSON_UINT64_C2(0x000FFFFF, 0xFFFFFFFF);\n    static const uint64_t kHiddenBit = RAPIDJSON_UINT64_C2(0x00100000, 0x00000000);\n\n    union {\n        double d_;\n        uint64_t u_;\n    };\n};\n\n} // namespace internal\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_IEEE754_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/internal/itoa.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_ITOA_\n#define RAPIDJSON_ITOA_\n\n#include \"../rapidjson.h\"\n\nRAPIDJSON_NAMESPACE_BEGIN\nnamespace internal {\n\ninline const char* GetDigitsLut() {\n    static const char cDigitsLut[200] = {\n        '0','0','0','1','0','2','0','3','0','4','0','5','0','6','0','7','0','8','0','9',\n        '1','0','1','1','1','2','1','3','1','4','1','5','1','6','1','7','1','8','1','9',\n        '2','0','2','1','2','2','2','3','2','4','2','5','2','6','2','7','2','8','2','9',\n        '3','0','3','1','3','2','3','3','3','4','3','5','3','6','3','7','3','8','3','9',\n        '4','0','4','1','4','2','4','3','4','4','4','5','4','6','4','7','4','8','4','9',\n        '5','0','5','1','5','2','5','3','5','4','5','5','5','6','5','7','5','8','5','9',\n        '6','0','6','1','6','2','6','3','6','4','6','5','6','6','6','7','6','8','6','9',\n        '7','0','7','1','7','2','7','3','7','4','7','5','7','6','7','7','7','8','7','9',\n        '8','0','8','1','8','2','8','3','8','4','8','5','8','6','8','7','8','8','8','9',\n        '9','0','9','1','9','2','9','3','9','4','9','5','9','6','9','7','9','8','9','9'\n    };\n    return cDigitsLut;\n}\n\ninline char* u32toa(uint32_t value, char* buffer) {\n    RAPIDJSON_ASSERT(buffer != 0);\n\n    const char* cDigitsLut = 
GetDigitsLut();\n\n    if (value < 10000) {\n        const uint32_t d1 = (value / 100) << 1;\n        const uint32_t d2 = (value % 100) << 1;\n\n        if (value >= 1000)\n            *buffer++ = cDigitsLut[d1];\n        if (value >= 100)\n            *buffer++ = cDigitsLut[d1 + 1];\n        if (value >= 10)\n            *buffer++ = cDigitsLut[d2];\n        *buffer++ = cDigitsLut[d2 + 1];\n    }\n    else if (value < 100000000) {\n        // value = bbbbcccc\n        const uint32_t b = value / 10000;\n        const uint32_t c = value % 10000;\n\n        const uint32_t d1 = (b / 100) << 1;\n        const uint32_t d2 = (b % 100) << 1;\n\n        const uint32_t d3 = (c / 100) << 1;\n        const uint32_t d4 = (c % 100) << 1;\n\n        if (value >= 10000000)\n            *buffer++ = cDigitsLut[d1];\n        if (value >= 1000000)\n            *buffer++ = cDigitsLut[d1 + 1];\n        if (value >= 100000)\n            *buffer++ = cDigitsLut[d2];\n        *buffer++ = cDigitsLut[d2 + 1];\n\n        *buffer++ = cDigitsLut[d3];\n        *buffer++ = cDigitsLut[d3 + 1];\n        *buffer++ = cDigitsLut[d4];\n        *buffer++ = cDigitsLut[d4 + 1];\n    }\n    else {\n        // value = aabbbbcccc in decimal\n\n        const uint32_t a = value / 100000000; // 1 to 42\n        value %= 100000000;\n\n        if (a >= 10) {\n            const unsigned i = a << 1;\n            *buffer++ = cDigitsLut[i];\n            *buffer++ = cDigitsLut[i + 1];\n        }\n        else\n            *buffer++ = static_cast<char>('0' + static_cast<char>(a));\n\n        const uint32_t b = value / 10000; // 0 to 9999\n        const uint32_t c = value % 10000; // 0 to 9999\n\n        const uint32_t d1 = (b / 100) << 1;\n        const uint32_t d2 = (b % 100) << 1;\n\n        const uint32_t d3 = (c / 100) << 1;\n        const uint32_t d4 = (c % 100) << 1;\n\n        *buffer++ = cDigitsLut[d1];\n        *buffer++ = cDigitsLut[d1 + 1];\n        *buffer++ = cDigitsLut[d2];\n        *buffer++ = 
cDigitsLut[d2 + 1];\n        *buffer++ = cDigitsLut[d3];\n        *buffer++ = cDigitsLut[d3 + 1];\n        *buffer++ = cDigitsLut[d4];\n        *buffer++ = cDigitsLut[d4 + 1];\n    }\n    return buffer;\n}\n\ninline char* i32toa(int32_t value, char* buffer) {\n    RAPIDJSON_ASSERT(buffer != 0);\n    uint32_t u = static_cast<uint32_t>(value);\n    if (value < 0) {\n        *buffer++ = '-';\n        u = ~u + 1;\n    }\n\n    return u32toa(u, buffer);\n}\n\ninline char* u64toa(uint64_t value, char* buffer) {\n    RAPIDJSON_ASSERT(buffer != 0);\n    const char* cDigitsLut = GetDigitsLut();\n    const uint64_t  kTen8 = 100000000;\n    const uint64_t  kTen9 = kTen8 * 10;\n    const uint64_t kTen10 = kTen8 * 100;\n    const uint64_t kTen11 = kTen8 * 1000;\n    const uint64_t kTen12 = kTen8 * 10000;\n    const uint64_t kTen13 = kTen8 * 100000;\n    const uint64_t kTen14 = kTen8 * 1000000;\n    const uint64_t kTen15 = kTen8 * 10000000;\n    const uint64_t kTen16 = kTen8 * kTen8;\n\n    if (value < kTen8) {\n        uint32_t v = static_cast<uint32_t>(value);\n        if (v < 10000) {\n            const uint32_t d1 = (v / 100) << 1;\n            const uint32_t d2 = (v % 100) << 1;\n\n            if (v >= 1000)\n                *buffer++ = cDigitsLut[d1];\n            if (v >= 100)\n                *buffer++ = cDigitsLut[d1 + 1];\n            if (v >= 10)\n                *buffer++ = cDigitsLut[d2];\n            *buffer++ = cDigitsLut[d2 + 1];\n        }\n        else {\n            // value = bbbbcccc\n            const uint32_t b = v / 10000;\n            const uint32_t c = v % 10000;\n\n            const uint32_t d1 = (b / 100) << 1;\n            const uint32_t d2 = (b % 100) << 1;\n\n            const uint32_t d3 = (c / 100) << 1;\n            const uint32_t d4 = (c % 100) << 1;\n\n            if (value >= 10000000)\n                *buffer++ = cDigitsLut[d1];\n            if (value >= 1000000)\n                *buffer++ = cDigitsLut[d1 + 1];\n            if (value >= 
100000)\n                *buffer++ = cDigitsLut[d2];\n            *buffer++ = cDigitsLut[d2 + 1];\n\n            *buffer++ = cDigitsLut[d3];\n            *buffer++ = cDigitsLut[d3 + 1];\n            *buffer++ = cDigitsLut[d4];\n            *buffer++ = cDigitsLut[d4 + 1];\n        }\n    }\n    else if (value < kTen16) {\n        const uint32_t v0 = static_cast<uint32_t>(value / kTen8);\n        const uint32_t v1 = static_cast<uint32_t>(value % kTen8);\n\n        const uint32_t b0 = v0 / 10000;\n        const uint32_t c0 = v0 % 10000;\n\n        const uint32_t d1 = (b0 / 100) << 1;\n        const uint32_t d2 = (b0 % 100) << 1;\n\n        const uint32_t d3 = (c0 / 100) << 1;\n        const uint32_t d4 = (c0 % 100) << 1;\n\n        const uint32_t b1 = v1 / 10000;\n        const uint32_t c1 = v1 % 10000;\n\n        const uint32_t d5 = (b1 / 100) << 1;\n        const uint32_t d6 = (b1 % 100) << 1;\n\n        const uint32_t d7 = (c1 / 100) << 1;\n        const uint32_t d8 = (c1 % 100) << 1;\n\n        if (value >= kTen15)\n            *buffer++ = cDigitsLut[d1];\n        if (value >= kTen14)\n            *buffer++ = cDigitsLut[d1 + 1];\n        if (value >= kTen13)\n            *buffer++ = cDigitsLut[d2];\n        if (value >= kTen12)\n            *buffer++ = cDigitsLut[d2 + 1];\n        if (value >= kTen11)\n            *buffer++ = cDigitsLut[d3];\n        if (value >= kTen10)\n            *buffer++ = cDigitsLut[d3 + 1];\n        if (value >= kTen9)\n            *buffer++ = cDigitsLut[d4];\n\n        *buffer++ = cDigitsLut[d4 + 1];\n        *buffer++ = cDigitsLut[d5];\n        *buffer++ = cDigitsLut[d5 + 1];\n        *buffer++ = cDigitsLut[d6];\n        *buffer++ = cDigitsLut[d6 + 1];\n        *buffer++ = cDigitsLut[d7];\n        *buffer++ = cDigitsLut[d7 + 1];\n        *buffer++ = cDigitsLut[d8];\n        *buffer++ = cDigitsLut[d8 + 1];\n    }\n    else {\n        const uint32_t a = static_cast<uint32_t>(value / kTen16); // 1 to 1844\n        value %= kTen16;\n\n       
 if (a < 10)\n            *buffer++ = static_cast<char>('0' + static_cast<char>(a));\n        else if (a < 100) {\n            const uint32_t i = a << 1;\n            *buffer++ = cDigitsLut[i];\n            *buffer++ = cDigitsLut[i + 1];\n        }\n        else if (a < 1000) {\n            *buffer++ = static_cast<char>('0' + static_cast<char>(a / 100));\n\n            const uint32_t i = (a % 100) << 1;\n            *buffer++ = cDigitsLut[i];\n            *buffer++ = cDigitsLut[i + 1];\n        }\n        else {\n            const uint32_t i = (a / 100) << 1;\n            const uint32_t j = (a % 100) << 1;\n            *buffer++ = cDigitsLut[i];\n            *buffer++ = cDigitsLut[i + 1];\n            *buffer++ = cDigitsLut[j];\n            *buffer++ = cDigitsLut[j + 1];\n        }\n\n        const uint32_t v0 = static_cast<uint32_t>(value / kTen8);\n        const uint32_t v1 = static_cast<uint32_t>(value % kTen8);\n\n        const uint32_t b0 = v0 / 10000;\n        const uint32_t c0 = v0 % 10000;\n\n        const uint32_t d1 = (b0 / 100) << 1;\n        const uint32_t d2 = (b0 % 100) << 1;\n\n        const uint32_t d3 = (c0 / 100) << 1;\n        const uint32_t d4 = (c0 % 100) << 1;\n\n        const uint32_t b1 = v1 / 10000;\n        const uint32_t c1 = v1 % 10000;\n\n        const uint32_t d5 = (b1 / 100) << 1;\n        const uint32_t d6 = (b1 % 100) << 1;\n\n        const uint32_t d7 = (c1 / 100) << 1;\n        const uint32_t d8 = (c1 % 100) << 1;\n\n        *buffer++ = cDigitsLut[d1];\n        *buffer++ = cDigitsLut[d1 + 1];\n        *buffer++ = cDigitsLut[d2];\n        *buffer++ = cDigitsLut[d2 + 1];\n        *buffer++ = cDigitsLut[d3];\n        *buffer++ = cDigitsLut[d3 + 1];\n        *buffer++ = cDigitsLut[d4];\n        *buffer++ = cDigitsLut[d4 + 1];\n        *buffer++ = cDigitsLut[d5];\n        *buffer++ = cDigitsLut[d5 + 1];\n        *buffer++ = cDigitsLut[d6];\n        *buffer++ = cDigitsLut[d6 + 1];\n        *buffer++ = cDigitsLut[d7];\n        *buffer++ 
= cDigitsLut[d7 + 1];\n        *buffer++ = cDigitsLut[d8];\n        *buffer++ = cDigitsLut[d8 + 1];\n    }\n\n    return buffer;\n}\n\ninline char* i64toa(int64_t value, char* buffer) {\n    RAPIDJSON_ASSERT(buffer != 0);\n    uint64_t u = static_cast<uint64_t>(value);\n    if (value < 0) {\n        *buffer++ = '-';\n        u = ~u + 1;\n    }\n\n    return u64toa(u, buffer);\n}\n\n} // namespace internal\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_ITOA_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/internal/meta.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_INTERNAL_META_H_\n#define RAPIDJSON_INTERNAL_META_H_\n\n#include \"../rapidjson.h\"\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++)\n#endif\n\n#if defined(_MSC_VER) && !defined(__clang__)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(6334)\n#endif\n\n#if RAPIDJSON_HAS_CXX11_TYPETRAITS\n#include <type_traits>\n#endif\n\n//@cond RAPIDJSON_INTERNAL\nRAPIDJSON_NAMESPACE_BEGIN\nnamespace internal {\n\n// Helper to wrap/convert arbitrary types to void, useful for arbitrary type matching\ntemplate <typename T> struct Void { typedef void Type; };\n\n///////////////////////////////////////////////////////////////////////////////\n// BoolType, TrueType, FalseType\n//\ntemplate <bool Cond> struct BoolType {\n    static const bool Value = Cond;\n    typedef BoolType Type;\n};\ntypedef BoolType<true> TrueType;\ntypedef BoolType<false> FalseType;\n\n\n///////////////////////////////////////////////////////////////////////////////\n// SelectIf, BoolExpr, NotExpr, AndExpr, OrExpr\n//\n\ntemplate <bool C> struct SelectIfImpl { template <typename T1, typename T2> struct Apply { typedef T1 Type; }; };\ntemplate <> struct SelectIfImpl<false> { template <typename T1, typename T2> struct Apply { typedef T2 Type; }; };\ntemplate <bool C, 
typename T1, typename T2> struct SelectIfCond : SelectIfImpl<C>::template Apply<T1,T2> {};\ntemplate <typename C, typename T1, typename T2> struct SelectIf : SelectIfCond<C::Value, T1, T2> {};\n\ntemplate <bool Cond1, bool Cond2> struct AndExprCond : FalseType {};\ntemplate <> struct AndExprCond<true, true> : TrueType {};\ntemplate <bool Cond1, bool Cond2> struct OrExprCond : TrueType {};\ntemplate <> struct OrExprCond<false, false> : FalseType {};\n\ntemplate <typename C> struct BoolExpr : SelectIf<C,TrueType,FalseType>::Type {};\ntemplate <typename C> struct NotExpr  : SelectIf<C,FalseType,TrueType>::Type {};\ntemplate <typename C1, typename C2> struct AndExpr : AndExprCond<C1::Value, C2::Value>::Type {};\ntemplate <typename C1, typename C2> struct OrExpr  : OrExprCond<C1::Value, C2::Value>::Type {};\n\n\n///////////////////////////////////////////////////////////////////////////////\n// AddConst, MaybeAddConst, RemoveConst\ntemplate <typename T> struct AddConst { typedef const T Type; };\ntemplate <bool Constify, typename T> struct MaybeAddConst : SelectIfCond<Constify, const T, T> {};\ntemplate <typename T> struct RemoveConst { typedef T Type; };\ntemplate <typename T> struct RemoveConst<const T> { typedef T Type; };\n\n\n///////////////////////////////////////////////////////////////////////////////\n// IsSame, IsConst, IsMoreConst, IsPointer\n//\ntemplate <typename T, typename U> struct IsSame : FalseType {};\ntemplate <typename T> struct IsSame<T, T> : TrueType {};\n\ntemplate <typename T> struct IsConst : FalseType {};\ntemplate <typename T> struct IsConst<const T> : TrueType {};\n\ntemplate <typename CT, typename T>\nstruct IsMoreConst\n    : AndExpr<IsSame<typename RemoveConst<CT>::Type, typename RemoveConst<T>::Type>,\n              BoolType<IsConst<CT>::Value >= IsConst<T>::Value> >::Type {};\n\ntemplate <typename T> struct IsPointer : FalseType {};\ntemplate <typename T> struct IsPointer<T*> : TrueType 
{};\n\n///////////////////////////////////////////////////////////////////////////////\n// IsBaseOf\n//\n#if RAPIDJSON_HAS_CXX11_TYPETRAITS\n\ntemplate <typename B, typename D> struct IsBaseOf\n    : BoolType< ::std::is_base_of<B,D>::value> {};\n\n#else // simplified version adopted from Boost\n\ntemplate<typename B, typename D> struct IsBaseOfImpl {\n    RAPIDJSON_STATIC_ASSERT(sizeof(B) != 0);\n    RAPIDJSON_STATIC_ASSERT(sizeof(D) != 0);\n\n    typedef char (&Yes)[1];\n    typedef char (&No) [2];\n\n    template <typename T>\n    static Yes Check(const D*, T);\n    static No  Check(const B*, int);\n\n    struct Host {\n        operator const B*() const;\n        operator const D*();\n    };\n\n    enum { Value = (sizeof(Check(Host(), 0)) == sizeof(Yes)) };\n};\n\ntemplate <typename B, typename D> struct IsBaseOf\n    : OrExpr<IsSame<B, D>, BoolExpr<IsBaseOfImpl<B, D> > >::Type {};\n\n#endif // RAPIDJSON_HAS_CXX11_TYPETRAITS\n\n\n//////////////////////////////////////////////////////////////////////////\n// EnableIf / DisableIf\n//\ntemplate <bool Condition, typename T = void> struct EnableIfCond  { typedef T Type; };\ntemplate <typename T> struct EnableIfCond<false, T> { /* empty */ };\n\ntemplate <bool Condition, typename T = void> struct DisableIfCond { typedef T Type; };\ntemplate <typename T> struct DisableIfCond<true, T> { /* empty */ };\n\ntemplate <typename Condition, typename T = void>\nstruct EnableIf : EnableIfCond<Condition::Value, T> {};\n\ntemplate <typename Condition, typename T = void>\nstruct DisableIf : DisableIfCond<Condition::Value, T> {};\n\n// SFINAE helpers\nstruct SfinaeTag {};\ntemplate <typename T> struct RemoveSfinaeTag;\ntemplate <typename T> struct RemoveSfinaeTag<SfinaeTag&(*)(T)> { typedef T Type; };\n\n#define RAPIDJSON_REMOVEFPTR_(type) \\\n    typename ::RAPIDJSON_NAMESPACE::internal::RemoveSfinaeTag \\\n        < ::RAPIDJSON_NAMESPACE::internal::SfinaeTag&(*) type>::Type\n\n#define RAPIDJSON_ENABLEIF(cond) \\\n    typename 
::RAPIDJSON_NAMESPACE::internal::EnableIf \\\n        <RAPIDJSON_REMOVEFPTR_(cond)>::Type * = NULL\n\n#define RAPIDJSON_DISABLEIF(cond) \\\n    typename ::RAPIDJSON_NAMESPACE::internal::DisableIf \\\n        <RAPIDJSON_REMOVEFPTR_(cond)>::Type * = NULL\n\n#define RAPIDJSON_ENABLEIF_RETURN(cond,returntype) \\\n    typename ::RAPIDJSON_NAMESPACE::internal::EnableIf \\\n        <RAPIDJSON_REMOVEFPTR_(cond), \\\n         RAPIDJSON_REMOVEFPTR_(returntype)>::Type\n\n#define RAPIDJSON_DISABLEIF_RETURN(cond,returntype) \\\n    typename ::RAPIDJSON_NAMESPACE::internal::DisableIf \\\n        <RAPIDJSON_REMOVEFPTR_(cond), \\\n         RAPIDJSON_REMOVEFPTR_(returntype)>::Type\n\n} // namespace internal\nRAPIDJSON_NAMESPACE_END\n//@endcond\n\n#if defined(_MSC_VER) && !defined(__clang__)\nRAPIDJSON_DIAG_POP\n#endif\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_INTERNAL_META_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/internal/pow10.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_POW10_\n#define RAPIDJSON_POW10_\n\n#include \"../rapidjson.h\"\n\nRAPIDJSON_NAMESPACE_BEGIN\nnamespace internal {\n\n//! Computes integer powers of 10 in double (10.0^n).\n/*! This function uses lookup table for fast and accurate results.\n    \\param n non-negative exponent. 
Must <= 308.\n    \\return 10.0^n\n*/\ninline double Pow10(int n) {\n    static const double e[] = { // 1e-0...1e308: 309 * 8 bytes = 2472 bytes\n        1e+0,  \n        1e+1,  1e+2,  1e+3,  1e+4,  1e+5,  1e+6,  1e+7,  1e+8,  1e+9,  1e+10, 1e+11, 1e+12, 1e+13, 1e+14, 1e+15, 1e+16, 1e+17, 1e+18, 1e+19, 1e+20, \n        1e+21, 1e+22, 1e+23, 1e+24, 1e+25, 1e+26, 1e+27, 1e+28, 1e+29, 1e+30, 1e+31, 1e+32, 1e+33, 1e+34, 1e+35, 1e+36, 1e+37, 1e+38, 1e+39, 1e+40,\n        1e+41, 1e+42, 1e+43, 1e+44, 1e+45, 1e+46, 1e+47, 1e+48, 1e+49, 1e+50, 1e+51, 1e+52, 1e+53, 1e+54, 1e+55, 1e+56, 1e+57, 1e+58, 1e+59, 1e+60,\n        1e+61, 1e+62, 1e+63, 1e+64, 1e+65, 1e+66, 1e+67, 1e+68, 1e+69, 1e+70, 1e+71, 1e+72, 1e+73, 1e+74, 1e+75, 1e+76, 1e+77, 1e+78, 1e+79, 1e+80,\n        1e+81, 1e+82, 1e+83, 1e+84, 1e+85, 1e+86, 1e+87, 1e+88, 1e+89, 1e+90, 1e+91, 1e+92, 1e+93, 1e+94, 1e+95, 1e+96, 1e+97, 1e+98, 1e+99, 1e+100,\n        1e+101,1e+102,1e+103,1e+104,1e+105,1e+106,1e+107,1e+108,1e+109,1e+110,1e+111,1e+112,1e+113,1e+114,1e+115,1e+116,1e+117,1e+118,1e+119,1e+120,\n        1e+121,1e+122,1e+123,1e+124,1e+125,1e+126,1e+127,1e+128,1e+129,1e+130,1e+131,1e+132,1e+133,1e+134,1e+135,1e+136,1e+137,1e+138,1e+139,1e+140,\n        1e+141,1e+142,1e+143,1e+144,1e+145,1e+146,1e+147,1e+148,1e+149,1e+150,1e+151,1e+152,1e+153,1e+154,1e+155,1e+156,1e+157,1e+158,1e+159,1e+160,\n        1e+161,1e+162,1e+163,1e+164,1e+165,1e+166,1e+167,1e+168,1e+169,1e+170,1e+171,1e+172,1e+173,1e+174,1e+175,1e+176,1e+177,1e+178,1e+179,1e+180,\n        1e+181,1e+182,1e+183,1e+184,1e+185,1e+186,1e+187,1e+188,1e+189,1e+190,1e+191,1e+192,1e+193,1e+194,1e+195,1e+196,1e+197,1e+198,1e+199,1e+200,\n        1e+201,1e+202,1e+203,1e+204,1e+205,1e+206,1e+207,1e+208,1e+209,1e+210,1e+211,1e+212,1e+213,1e+214,1e+215,1e+216,1e+217,1e+218,1e+219,1e+220,\n        1e+221,1e+222,1e+223,1e+224,1e+225,1e+226,1e+227,1e+228,1e+229,1e+230,1e+231,1e+232,1e+233,1e+234,1e+235,1e+236,1e+237,1e+238,1e+239,1e+240,\n        
1e+241,1e+242,1e+243,1e+244,1e+245,1e+246,1e+247,1e+248,1e+249,1e+250,1e+251,1e+252,1e+253,1e+254,1e+255,1e+256,1e+257,1e+258,1e+259,1e+260,\n        1e+261,1e+262,1e+263,1e+264,1e+265,1e+266,1e+267,1e+268,1e+269,1e+270,1e+271,1e+272,1e+273,1e+274,1e+275,1e+276,1e+277,1e+278,1e+279,1e+280,\n        1e+281,1e+282,1e+283,1e+284,1e+285,1e+286,1e+287,1e+288,1e+289,1e+290,1e+291,1e+292,1e+293,1e+294,1e+295,1e+296,1e+297,1e+298,1e+299,1e+300,\n        1e+301,1e+302,1e+303,1e+304,1e+305,1e+306,1e+307,1e+308\n    };\n    RAPIDJSON_ASSERT(n >= 0 && n <= 308);\n    return e[n];\n}\n\n} // namespace internal\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_POW10_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/internal/regex.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_INTERNAL_REGEX_H_\n#define RAPIDJSON_INTERNAL_REGEX_H_\n\n#include \"../allocators.h\"\n#include \"../stream.h\"\n#include \"stack.h\"\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(padded)\nRAPIDJSON_DIAG_OFF(switch-enum)\n#elif defined(_MSC_VER)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(4512) // assignment operator could not be generated\n#endif\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++)\n#endif\n\n#ifndef RAPIDJSON_REGEX_VERBOSE\n#define RAPIDJSON_REGEX_VERBOSE 0\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\nnamespace internal {\n\n///////////////////////////////////////////////////////////////////////////////\n// DecodedStream\n\ntemplate <typename SourceStream, typename Encoding>\nclass DecodedStream {\npublic:\n    DecodedStream(SourceStream& ss) : ss_(ss), codepoint_() { Decode(); }\n    unsigned Peek() { return codepoint_; }\n    unsigned Take() {\n        unsigned c = codepoint_;\n        if (c) // No further decoding when '\\0'\n            Decode();\n        return c;\n    }\n\nprivate:\n    void Decode() {\n        if (!Encoding::Decode(ss_, &codepoint_))\n            codepoint_ = 0;\n    }\n\n    SourceStream& ss_;\n    unsigned 
codepoint_;\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// GenericRegex\n\nstatic const SizeType kRegexInvalidState = ~SizeType(0);  //!< Represents an invalid index in GenericRegex::State::out, out1\nstatic const SizeType kRegexInvalidRange = ~SizeType(0);\n\ntemplate <typename Encoding, typename Allocator>\nclass GenericRegexSearch;\n\n//! Regular expression engine with subset of ECMAscript grammar.\n/*!\n    Supported regular expression syntax:\n    - \\c ab     Concatenation\n    - \\c a|b    Alternation\n    - \\c a?     Zero or one\n    - \\c a*     Zero or more\n    - \\c a+     One or more\n    - \\c a{3}   Exactly 3 times\n    - \\c a{3,}  At least 3 times\n    - \\c a{3,5} 3 to 5 times\n    - \\c (ab)   Grouping\n    - \\c ^a     At the beginning\n    - \\c a$     At the end\n    - \\c .      Any character\n    - \\c [abc]  Character classes\n    - \\c [a-c]  Character class range\n    - \\c [a-z0-9_] Character class combination\n    - \\c [^abc] Negated character classes\n    - \\c [^a-c] Negated character class range\n    - \\c [\\b]   Backspace (U+0008)\n    - \\c \\\\| \\\\\\\\ ...  Escape characters\n    - \\c \\\\f Form feed (U+000C)\n    - \\c \\\\n Line feed (U+000A)\n    - \\c \\\\r Carriage return (U+000D)\n    - \\c \\\\t Tab (U+0009)\n    - \\c \\\\v Vertical tab (U+000B)\n\n    \\note This is a Thompson NFA engine, implemented with reference to \n        Cox, Russ. \"Regular Expression Matching Can Be Simple And Fast (but is slow in Java, Perl, PHP, Python, Ruby,...).\", \n        https://swtch.com/~rsc/regexp/regexp1.html \n*/\ntemplate <typename Encoding, typename Allocator = CrtAllocator>\nclass GenericRegex {\npublic:\n    typedef Encoding EncodingType;\n    typedef typename Encoding::Ch Ch;\n    template <typename, typename> friend class GenericRegexSearch;\n\n    GenericRegex(const Ch* source, Allocator* allocator = 0) : \n        ownAllocator_(allocator ? 
0 : RAPIDJSON_NEW(Allocator)()), allocator_(allocator ? allocator : ownAllocator_), \n        states_(allocator_, 256), ranges_(allocator_, 256), root_(kRegexInvalidState), stateCount_(), rangeCount_(), \n        anchorBegin_(), anchorEnd_()\n    {\n        GenericStringStream<Encoding> ss(source);\n        DecodedStream<GenericStringStream<Encoding>, Encoding> ds(ss);\n        Parse(ds);\n    }\n\n    ~GenericRegex()\n    {\n        RAPIDJSON_DELETE(ownAllocator_);\n    }\n\n    bool IsValid() const {\n        return root_ != kRegexInvalidState;\n    }\n\nprivate:\n    enum Operator {\n        kZeroOrOne,\n        kZeroOrMore,\n        kOneOrMore,\n        kConcatenation,\n        kAlternation,\n        kLeftParenthesis\n    };\n\n    static const unsigned kAnyCharacterClass = 0xFFFFFFFF;   //!< For '.'\n    static const unsigned kRangeCharacterClass = 0xFFFFFFFE;\n    static const unsigned kRangeNegationFlag = 0x80000000;\n\n    struct Range {\n        unsigned start; // \n        unsigned end;\n        SizeType next;\n    };\n\n    struct State {\n        SizeType out;     //!< Equals to kInvalid for matching state\n        SizeType out1;    //!< Equals to non-kInvalid for split\n        SizeType rangeStart;\n        unsigned codepoint;\n    };\n\n    struct Frag {\n        Frag(SizeType s, SizeType o, SizeType m) : start(s), out(o), minIndex(m) {}\n        SizeType start;\n        SizeType out; //!< link-list of all output states\n        SizeType minIndex;\n    };\n\n    State& GetState(SizeType index) {\n        RAPIDJSON_ASSERT(index < stateCount_);\n        return states_.template Bottom<State>()[index];\n    }\n\n    const State& GetState(SizeType index) const {\n        RAPIDJSON_ASSERT(index < stateCount_);\n        return states_.template Bottom<State>()[index];\n    }\n\n    Range& GetRange(SizeType index) {\n        RAPIDJSON_ASSERT(index < rangeCount_);\n        return ranges_.template Bottom<Range>()[index];\n    }\n\n    const Range& 
GetRange(SizeType index) const {\n        RAPIDJSON_ASSERT(index < rangeCount_);\n        return ranges_.template Bottom<Range>()[index];\n    }\n\n    template <typename InputStream>\n    void Parse(DecodedStream<InputStream, Encoding>& ds) {\n        Stack<Allocator> operandStack(allocator_, 256);    // Frag\n        Stack<Allocator> operatorStack(allocator_, 256);   // Operator\n        Stack<Allocator> atomCountStack(allocator_, 256);  // unsigned (Atom per parenthesis)\n\n        *atomCountStack.template Push<unsigned>() = 0;\n\n        unsigned codepoint;\n        while (ds.Peek() != 0) {\n            switch (codepoint = ds.Take()) {\n                case '^':\n                    anchorBegin_ = true;\n                    break;\n\n                case '$':\n                    anchorEnd_ = true;\n                    break;\n\n                case '|':\n                    while (!operatorStack.Empty() && *operatorStack.template Top<Operator>() < kAlternation)\n                        if (!Eval(operandStack, *operatorStack.template Pop<Operator>(1)))\n                            return;\n                    *operatorStack.template Push<Operator>() = kAlternation;\n                    *atomCountStack.template Top<unsigned>() = 0;\n                    break;\n\n                case '(':\n                    *operatorStack.template Push<Operator>() = kLeftParenthesis;\n                    *atomCountStack.template Push<unsigned>() = 0;\n                    break;\n\n                case ')':\n                    while (!operatorStack.Empty() && *operatorStack.template Top<Operator>() != kLeftParenthesis)\n                        if (!Eval(operandStack, *operatorStack.template Pop<Operator>(1)))\n                            return;\n                    if (operatorStack.Empty())\n                        return;\n                    operatorStack.template Pop<Operator>(1);\n                    atomCountStack.template Pop<unsigned>(1);\n                    
ImplicitConcatenation(atomCountStack, operatorStack);\n                    break;\n\n                case '?':\n                    if (!Eval(operandStack, kZeroOrOne))\n                        return;\n                    break;\n\n                case '*':\n                    if (!Eval(operandStack, kZeroOrMore))\n                        return;\n                    break;\n\n                case '+':\n                    if (!Eval(operandStack, kOneOrMore))\n                        return;\n                    break;\n\n                case '{':\n                    {\n                        unsigned n, m;\n                        if (!ParseUnsigned(ds, &n))\n                            return;\n\n                        if (ds.Peek() == ',') {\n                            ds.Take();\n                            if (ds.Peek() == '}')\n                                m = kInfinityQuantifier;\n                            else if (!ParseUnsigned(ds, &m) || m < n)\n                                return;\n                        }\n                        else\n                            m = n;\n\n                        if (!EvalQuantifier(operandStack, n, m) || ds.Peek() != '}')\n                            return;\n                        ds.Take();\n                    }\n                    break;\n\n                case '.':\n                    PushOperand(operandStack, kAnyCharacterClass);\n                    ImplicitConcatenation(atomCountStack, operatorStack);\n                    break;\n\n                case '[':\n                    {\n                        SizeType range;\n                        if (!ParseRange(ds, &range))\n                            return;\n                        SizeType s = NewState(kRegexInvalidState, kRegexInvalidState, kRangeCharacterClass);\n                        GetState(s).rangeStart = range;\n                        *operandStack.template Push<Frag>() = Frag(s, s, s);\n                    }\n                    
ImplicitConcatenation(atomCountStack, operatorStack);\n                    break;\n\n                case '\\\\': // Escape character\n                    if (!CharacterEscape(ds, &codepoint))\n                        return; // Unsupported escape character\n                    // fall through to default\n                    RAPIDJSON_DELIBERATE_FALLTHROUGH;\n\n                default: // Pattern character\n                    PushOperand(operandStack, codepoint);\n                    ImplicitConcatenation(atomCountStack, operatorStack);\n            }\n        }\n\n        while (!operatorStack.Empty())\n            if (!Eval(operandStack, *operatorStack.template Pop<Operator>(1)))\n                return;\n\n        // Link the operand to matching state.\n        if (operandStack.GetSize() == sizeof(Frag)) {\n            Frag* e = operandStack.template Pop<Frag>(1);\n            Patch(e->out, NewState(kRegexInvalidState, kRegexInvalidState, 0));\n            root_ = e->start;\n\n#if RAPIDJSON_REGEX_VERBOSE\n            printf(\"root: %d\\n\", root_);\n            for (SizeType i = 0; i < stateCount_ ; i++) {\n                State& s = GetState(i);\n                printf(\"[%2d] out: %2d out1: %2d c: '%c'\\n\", i, s.out, s.out1, (char)s.codepoint);\n            }\n            printf(\"\\n\");\n#endif\n        }\n    }\n\n    SizeType NewState(SizeType out, SizeType out1, unsigned codepoint) {\n        State* s = states_.template Push<State>();\n        s->out = out;\n        s->out1 = out1;\n        s->codepoint = codepoint;\n        s->rangeStart = kRegexInvalidRange;\n        return stateCount_++;\n    }\n\n    void PushOperand(Stack<Allocator>& operandStack, unsigned codepoint) {\n        SizeType s = NewState(kRegexInvalidState, kRegexInvalidState, codepoint);\n        *operandStack.template Push<Frag>() = Frag(s, s, s);\n    }\n\n    void ImplicitConcatenation(Stack<Allocator>& atomCountStack, Stack<Allocator>& operatorStack) {\n        if 
(*atomCountStack.template Top<unsigned>())\n            *operatorStack.template Push<Operator>() = kConcatenation;\n        (*atomCountStack.template Top<unsigned>())++;\n    }\n\n    SizeType Append(SizeType l1, SizeType l2) {\n        SizeType old = l1;\n        while (GetState(l1).out != kRegexInvalidState)\n            l1 = GetState(l1).out;\n        GetState(l1).out = l2;\n        return old;\n    }\n\n    void Patch(SizeType l, SizeType s) {\n        for (SizeType next; l != kRegexInvalidState; l = next) {\n            next = GetState(l).out;\n            GetState(l).out = s;\n        }\n    }\n\n    bool Eval(Stack<Allocator>& operandStack, Operator op) {\n        switch (op) {\n            case kConcatenation:\n                RAPIDJSON_ASSERT(operandStack.GetSize() >= sizeof(Frag) * 2);\n                {\n                    Frag e2 = *operandStack.template Pop<Frag>(1);\n                    Frag e1 = *operandStack.template Pop<Frag>(1);\n                    Patch(e1.out, e2.start);\n                    *operandStack.template Push<Frag>() = Frag(e1.start, e2.out, Min(e1.minIndex, e2.minIndex));\n                }\n                return true;\n\n            case kAlternation:\n                if (operandStack.GetSize() >= sizeof(Frag) * 2) {\n                    Frag e2 = *operandStack.template Pop<Frag>(1);\n                    Frag e1 = *operandStack.template Pop<Frag>(1);\n                    SizeType s = NewState(e1.start, e2.start, 0);\n                    *operandStack.template Push<Frag>() = Frag(s, Append(e1.out, e2.out), Min(e1.minIndex, e2.minIndex));\n                    return true;\n                }\n                return false;\n\n            case kZeroOrOne:\n                if (operandStack.GetSize() >= sizeof(Frag)) {\n                    Frag e = *operandStack.template Pop<Frag>(1);\n                    SizeType s = NewState(kRegexInvalidState, e.start, 0);\n                    *operandStack.template Push<Frag>() = Frag(s, 
Append(e.out, s), e.minIndex);\n                    return true;\n                }\n                return false;\n\n            case kZeroOrMore:\n                if (operandStack.GetSize() >= sizeof(Frag)) {\n                    Frag e = *operandStack.template Pop<Frag>(1);\n                    SizeType s = NewState(kRegexInvalidState, e.start, 0);\n                    Patch(e.out, s);\n                    *operandStack.template Push<Frag>() = Frag(s, s, e.minIndex);\n                    return true;\n                }\n                return false;\n\n            case kOneOrMore:\n                if (operandStack.GetSize() >= sizeof(Frag)) {\n                    Frag e = *operandStack.template Pop<Frag>(1);\n                    SizeType s = NewState(kRegexInvalidState, e.start, 0);\n                    Patch(e.out, s);\n                    *operandStack.template Push<Frag>() = Frag(e.start, s, e.minIndex);\n                    return true;\n                }\n                return false;\n\n            default: \n                // syntax error (e.g. unclosed kLeftParenthesis)\n                return false;\n        }\n    }\n\n    bool EvalQuantifier(Stack<Allocator>& operandStack, unsigned n, unsigned m) {\n        RAPIDJSON_ASSERT(n <= m);\n        RAPIDJSON_ASSERT(operandStack.GetSize() >= sizeof(Frag));\n\n        if (n == 0) {\n            if (m == 0)                             // a{0} not supported\n                return false;\n            else if (m == kInfinityQuantifier)\n                Eval(operandStack, kZeroOrMore);    // a{0,} -> a*\n            else {\n                Eval(operandStack, kZeroOrOne);         // a{0,5} -> a?\n                for (unsigned i = 0; i < m - 1; i++)\n                    CloneTopOperand(operandStack);      // a{0,5} -> a? a? a? a? 
a?\n                for (unsigned i = 0; i < m - 1; i++)\n                    Eval(operandStack, kConcatenation); // a{0,5} -> a?a?a?a?a?\n            }\n            return true;\n        }\n\n        for (unsigned i = 0; i < n - 1; i++)        // a{3} -> a a a\n            CloneTopOperand(operandStack);\n\n        if (m == kInfinityQuantifier)\n            Eval(operandStack, kOneOrMore);         // a{3,} -> a a a+\n        else if (m > n) {\n            CloneTopOperand(operandStack);          // a{3,5} -> a a a a\n            Eval(operandStack, kZeroOrOne);         // a{3,5} -> a a a a?\n            for (unsigned i = n; i < m - 1; i++)\n                CloneTopOperand(operandStack);      // a{3,5} -> a a a a? a?\n            for (unsigned i = n; i < m; i++)\n                Eval(operandStack, kConcatenation); // a{3,5} -> a a aa?a?\n        }\n\n        for (unsigned i = 0; i < n - 1; i++)\n            Eval(operandStack, kConcatenation);     // a{3} -> aaa, a{3,} -> aaa+, a{3,5} -> aaaa?a?\n\n        return true;\n    }\n\n    static SizeType Min(SizeType a, SizeType b) { return a < b ? 
a : b; }\n\n    void CloneTopOperand(Stack<Allocator>& operandStack) {\n        const Frag src = *operandStack.template Top<Frag>(); // Copy constructor to prevent invalidation\n        SizeType count = stateCount_ - src.minIndex; // Assumes top operand contains states in [src->minIndex, stateCount_)\n        State* s = states_.template Push<State>(count);\n        memcpy(s, &GetState(src.minIndex), count * sizeof(State));\n        for (SizeType j = 0; j < count; j++) {\n            if (s[j].out != kRegexInvalidState)\n                s[j].out += count;\n            if (s[j].out1 != kRegexInvalidState)\n                s[j].out1 += count;\n        }\n        *operandStack.template Push<Frag>() = Frag(src.start + count, src.out + count, src.minIndex + count);\n        stateCount_ += count;\n    }\n\n    template <typename InputStream>\n    bool ParseUnsigned(DecodedStream<InputStream, Encoding>& ds, unsigned* u) {\n        unsigned r = 0;\n        if (ds.Peek() < '0' || ds.Peek() > '9')\n            return false;\n        while (ds.Peek() >= '0' && ds.Peek() <= '9') {\n            if (r >= 429496729 && ds.Peek() > '5') // 2^32 - 1 = 4294967295\n                return false; // overflow\n            r = r * 10 + (ds.Take() - '0');\n        }\n        *u = r;\n        return true;\n    }\n\n    template <typename InputStream>\n    bool ParseRange(DecodedStream<InputStream, Encoding>& ds, SizeType* range) {\n        bool isBegin = true;\n        bool negate = false;\n        int step = 0;\n        SizeType start = kRegexInvalidRange;\n        SizeType current = kRegexInvalidRange;\n        unsigned codepoint;\n        while ((codepoint = ds.Take()) != 0) {\n            if (isBegin) {\n                isBegin = false;\n                if (codepoint == '^') {\n                    negate = true;\n                    continue;\n                }\n            }\n\n            switch (codepoint) {\n            case ']':\n                if (start == kRegexInvalidRange)\n     
               return false;   // Error: nothing inside []\n                if (step == 2) { // Add trailing '-'\n                    SizeType r = NewRange('-');\n                    RAPIDJSON_ASSERT(current != kRegexInvalidRange);\n                    GetRange(current).next = r;\n                }\n                if (negate)\n                    GetRange(start).start |= kRangeNegationFlag;\n                *range = start;\n                return true;\n\n            case '\\\\':\n                if (ds.Peek() == 'b') {\n                    ds.Take();\n                    codepoint = 0x0008; // Escape backspace character\n                }\n                else if (!CharacterEscape(ds, &codepoint))\n                    return false;\n                // fall through to default\n                RAPIDJSON_DELIBERATE_FALLTHROUGH;\n\n            default:\n                switch (step) {\n                case 1:\n                    if (codepoint == '-') {\n                        step++;\n                        break;\n                    }\n                    // fall through to step 0 for other characters\n                    RAPIDJSON_DELIBERATE_FALLTHROUGH;\n\n                case 0:\n                    {\n                        SizeType r = NewRange(codepoint);\n                        if (current != kRegexInvalidRange)\n                            GetRange(current).next = r;\n                        if (start == kRegexInvalidRange)\n                            start = r;\n                        current = r;\n                    }\n                    step = 1;\n                    break;\n\n                default:\n                    RAPIDJSON_ASSERT(step == 2);\n                    GetRange(current).end = codepoint;\n                    step = 0;\n                }\n            }\n        }\n        return false;\n    }\n    \n    SizeType NewRange(unsigned codepoint) {\n        Range* r = ranges_.template Push<Range>();\n        r->start = r->end = 
codepoint;\n        r->next = kRegexInvalidRange;\n        return rangeCount_++;\n    }\n\n    template <typename InputStream>\n    bool CharacterEscape(DecodedStream<InputStream, Encoding>& ds, unsigned* escapedCodepoint) {\n        unsigned codepoint;\n        switch (codepoint = ds.Take()) {\n            case '^':\n            case '$':\n            case '|':\n            case '(':\n            case ')':\n            case '?':\n            case '*':\n            case '+':\n            case '.':\n            case '[':\n            case ']':\n            case '{':\n            case '}':\n            case '\\\\':\n                *escapedCodepoint = codepoint; return true;\n            case 'f': *escapedCodepoint = 0x000C; return true;\n            case 'n': *escapedCodepoint = 0x000A; return true;\n            case 'r': *escapedCodepoint = 0x000D; return true;\n            case 't': *escapedCodepoint = 0x0009; return true;\n            case 'v': *escapedCodepoint = 0x000B; return true;\n            default:\n                return false; // Unsupported escape character\n        }\n    }\n\n    Allocator* ownAllocator_;\n    Allocator* allocator_;\n    Stack<Allocator> states_;\n    Stack<Allocator> ranges_;\n    SizeType root_;\n    SizeType stateCount_;\n    SizeType rangeCount_;\n\n    static const unsigned kInfinityQuantifier = ~0u;\n\n    // For SearchWithAnchoring()\n    bool anchorBegin_;\n    bool anchorEnd_;\n};\n\ntemplate <typename RegexType, typename Allocator = CrtAllocator>\nclass GenericRegexSearch {\npublic:\n    typedef typename RegexType::EncodingType Encoding;\n    typedef typename Encoding::Ch Ch;\n\n    GenericRegexSearch(const RegexType& regex, Allocator* allocator = 0) : \n        regex_(regex), allocator_(allocator), ownAllocator_(0),\n        state0_(allocator, 0), state1_(allocator, 0), stateSet_()\n    {\n        RAPIDJSON_ASSERT(regex_.IsValid());\n        if (!allocator_)\n            ownAllocator_ = allocator_ = 
RAPIDJSON_NEW(Allocator)();\n        stateSet_ = static_cast<uint32_t*>(allocator_->Malloc(GetStateSetSize()));\n        state0_.template Reserve<SizeType>(regex_.stateCount_);\n        state1_.template Reserve<SizeType>(regex_.stateCount_);\n    }\n\n    ~GenericRegexSearch() {\n        Allocator::Free(stateSet_);\n        RAPIDJSON_DELETE(ownAllocator_);\n    }\n\n    template <typename InputStream>\n    bool Match(InputStream& is) {\n        return SearchWithAnchoring(is, true, true);\n    }\n\n    bool Match(const Ch* s) {\n        GenericStringStream<Encoding> is(s);\n        return Match(is);\n    }\n\n    template <typename InputStream>\n    bool Search(InputStream& is) {\n        return SearchWithAnchoring(is, regex_.anchorBegin_, regex_.anchorEnd_);\n    }\n\n    bool Search(const Ch* s) {\n        GenericStringStream<Encoding> is(s);\n        return Search(is);\n    }\n\nprivate:\n    typedef typename RegexType::State State;\n    typedef typename RegexType::Range Range;\n\n    template <typename InputStream>\n    bool SearchWithAnchoring(InputStream& is, bool anchorBegin, bool anchorEnd) {\n        DecodedStream<InputStream, Encoding> ds(is);\n\n        state0_.Clear();\n        Stack<Allocator> *current = &state0_, *next = &state1_;\n        const size_t stateSetSize = GetStateSetSize();\n        std::memset(stateSet_, 0, stateSetSize);\n\n        bool matched = AddState(*current, regex_.root_);\n        unsigned codepoint;\n        while (!current->Empty() && (codepoint = ds.Take()) != 0) {\n            std::memset(stateSet_, 0, stateSetSize);\n            next->Clear();\n            matched = false;\n            for (const SizeType* s = current->template Bottom<SizeType>(); s != current->template End<SizeType>(); ++s) {\n                const State& sr = regex_.GetState(*s);\n                if (sr.codepoint == codepoint ||\n                    sr.codepoint == RegexType::kAnyCharacterClass || \n                    (sr.codepoint == 
RegexType::kRangeCharacterClass && MatchRange(sr.rangeStart, codepoint)))\n                {\n                    matched = AddState(*next, sr.out) || matched;\n                    if (!anchorEnd && matched)\n                        return true;\n                }\n                if (!anchorBegin)\n                    AddState(*next, regex_.root_);\n            }\n            internal::Swap(current, next);\n        }\n\n        return matched;\n    }\n\n    size_t GetStateSetSize() const {\n        return (regex_.stateCount_ + 31) / 32 * 4;\n    }\n\n    // Return whether the added state is a match state\n    bool AddState(Stack<Allocator>& l, SizeType index) {\n        RAPIDJSON_ASSERT(index != kRegexInvalidState);\n\n        const State& s = regex_.GetState(index);\n        if (s.out1 != kRegexInvalidState) { // Split\n            bool matched = AddState(l, s.out);\n            return AddState(l, s.out1) || matched;\n        }\n        else if (!(stateSet_[index >> 5] & (1u << (index & 31)))) {\n            stateSet_[index >> 5] |= (1u << (index & 31));\n            *l.template PushUnsafe<SizeType>() = index;\n        }\n        return s.out == kRegexInvalidState; // by using PushUnsafe() above, we can ensure s is not invalidated due to reallocation.\n    }\n\n    bool MatchRange(SizeType rangeIndex, unsigned codepoint) const {\n        bool yes = (regex_.GetRange(rangeIndex).start & RegexType::kRangeNegationFlag) == 0;\n        while (rangeIndex != kRegexInvalidRange) {\n            const Range& r = regex_.GetRange(rangeIndex);\n            if (codepoint >= (r.start & ~RegexType::kRangeNegationFlag) && codepoint <= r.end)\n                return yes;\n            rangeIndex = r.next;\n        }\n        return !yes;\n    }\n\n    const RegexType& regex_;\n    Allocator* allocator_;\n    Allocator* ownAllocator_;\n    Stack<Allocator> state0_;\n    Stack<Allocator> state1_;\n    uint32_t* stateSet_;\n};\n\ntypedef GenericRegex<UTF8<> > Regex;\ntypedef 
GenericRegexSearch<Regex> RegexSearch;\n\n} // namespace internal\nRAPIDJSON_NAMESPACE_END\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_POP\n#endif\n\n#if defined(__clang__) || defined(_MSC_VER)\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_INTERNAL_REGEX_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/internal/stack.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_INTERNAL_STACK_H_\n#define RAPIDJSON_INTERNAL_STACK_H_\n\n#include \"../allocators.h\"\n#include \"swap.h\"\n#include <cstddef>\n\n#if defined(__clang__)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(c++98-compat)\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\nnamespace internal {\n\n///////////////////////////////////////////////////////////////////////////////\n// Stack\n\n//! A type-unsafe stack for storing different types of data.\n/*! 
\\tparam Allocator Allocator for allocating stack memory.\n*/\ntemplate <typename Allocator>\nclass Stack {\npublic:\n    // Optimization note: Do not allocate memory for stack_ in constructor.\n    // Do it lazily when first Push() -> Expand() -> Resize().\n    Stack(Allocator* allocator, size_t stackCapacity) : allocator_(allocator), ownAllocator_(0), stack_(0), stackTop_(0), stackEnd_(0), initialCapacity_(stackCapacity) {\n    }\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    Stack(Stack&& rhs)\n        : allocator_(rhs.allocator_),\n          ownAllocator_(rhs.ownAllocator_),\n          stack_(rhs.stack_),\n          stackTop_(rhs.stackTop_),\n          stackEnd_(rhs.stackEnd_),\n          initialCapacity_(rhs.initialCapacity_)\n    {\n        rhs.allocator_ = 0;\n        rhs.ownAllocator_ = 0;\n        rhs.stack_ = 0;\n        rhs.stackTop_ = 0;\n        rhs.stackEnd_ = 0;\n        rhs.initialCapacity_ = 0;\n    }\n#endif\n\n    ~Stack() {\n        Destroy();\n    }\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    Stack& operator=(Stack&& rhs) {\n        if (&rhs != this)\n        {\n            Destroy();\n\n            allocator_ = rhs.allocator_;\n            ownAllocator_ = rhs.ownAllocator_;\n            stack_ = rhs.stack_;\n            stackTop_ = rhs.stackTop_;\n            stackEnd_ = rhs.stackEnd_;\n            initialCapacity_ = rhs.initialCapacity_;\n\n            rhs.allocator_ = 0;\n            rhs.ownAllocator_ = 0;\n            rhs.stack_ = 0;\n            rhs.stackTop_ = 0;\n            rhs.stackEnd_ = 0;\n            rhs.initialCapacity_ = 0;\n        }\n        return *this;\n    }\n#endif\n\n    void Swap(Stack& rhs) RAPIDJSON_NOEXCEPT {\n        internal::Swap(allocator_, rhs.allocator_);\n        internal::Swap(ownAllocator_, rhs.ownAllocator_);\n        internal::Swap(stack_, rhs.stack_);\n        internal::Swap(stackTop_, rhs.stackTop_);\n        internal::Swap(stackEnd_, rhs.stackEnd_);\n        internal::Swap(initialCapacity_, 
rhs.initialCapacity_);\n    }\n\n    void Clear() { stackTop_ = stack_; }\n\n    void ShrinkToFit() { \n        if (Empty()) {\n            // If the stack is empty, completely deallocate the memory.\n            Allocator::Free(stack_); // NOLINT (+clang-analyzer-unix.Malloc)\n            stack_ = 0;\n            stackTop_ = 0;\n            stackEnd_ = 0;\n        }\n        else\n            Resize(GetSize());\n    }\n\n    // Optimization note: try to minimize the size of this function for force inline.\n    // Expansion is run very infrequently, so it is moved to another (probably non-inline) function.\n    template<typename T>\n    RAPIDJSON_FORCEINLINE void Reserve(size_t count = 1) {\n         // Expand the stack if needed\n        if (RAPIDJSON_UNLIKELY(static_cast<std::ptrdiff_t>(sizeof(T) * count) > (stackEnd_ - stackTop_)))\n            Expand<T>(count);\n    }\n\n    template<typename T>\n    RAPIDJSON_FORCEINLINE T* Push(size_t count = 1) {\n        Reserve<T>(count);\n        return PushUnsafe<T>(count);\n    }\n\n    template<typename T>\n    RAPIDJSON_FORCEINLINE T* PushUnsafe(size_t count = 1) {\n        RAPIDJSON_ASSERT(stackTop_);\n        RAPIDJSON_ASSERT(static_cast<std::ptrdiff_t>(sizeof(T) * count) <= (stackEnd_ - stackTop_));\n        T* ret = reinterpret_cast<T*>(stackTop_);\n        stackTop_ += sizeof(T) * count;\n        return ret;\n    }\n\n    template<typename T>\n    T* Pop(size_t count) {\n        RAPIDJSON_ASSERT(GetSize() >= count * sizeof(T));\n        stackTop_ -= count * sizeof(T);\n        return reinterpret_cast<T*>(stackTop_);\n    }\n\n    template<typename T>\n    T* Top() { \n        RAPIDJSON_ASSERT(GetSize() >= sizeof(T));\n        return reinterpret_cast<T*>(stackTop_ - sizeof(T));\n    }\n\n    template<typename T>\n    const T* Top() const {\n        RAPIDJSON_ASSERT(GetSize() >= sizeof(T));\n        return reinterpret_cast<T*>(stackTop_ - sizeof(T));\n    }\n\n    template<typename T>\n    T* End() { return 
reinterpret_cast<T*>(stackTop_); }\n\n    template<typename T>\n    const T* End() const { return reinterpret_cast<T*>(stackTop_); }\n\n    template<typename T>\n    T* Bottom() { return reinterpret_cast<T*>(stack_); }\n\n    template<typename T>\n    const T* Bottom() const { return reinterpret_cast<T*>(stack_); }\n\n    bool HasAllocator() const {\n        return allocator_ != 0;\n    }\n\n    Allocator& GetAllocator() {\n        RAPIDJSON_ASSERT(allocator_);\n        return *allocator_;\n    }\n\n    bool Empty() const { return stackTop_ == stack_; }\n    size_t GetSize() const { return static_cast<size_t>(stackTop_ - stack_); }\n    size_t GetCapacity() const { return static_cast<size_t>(stackEnd_ - stack_); }\n\nprivate:\n    template<typename T>\n    void Expand(size_t count) {\n        // Only expand the capacity if the current stack exists. Otherwise just create a stack with initial capacity.\n        size_t newCapacity;\n        if (stack_ == 0) {\n            if (!allocator_)\n                ownAllocator_ = allocator_ = RAPIDJSON_NEW(Allocator)();\n            newCapacity = initialCapacity_;\n        } else {\n            newCapacity = GetCapacity();\n            newCapacity += (newCapacity + 1) / 2;\n        }\n        size_t newSize = GetSize() + sizeof(T) * count;\n        if (newCapacity < newSize)\n            newCapacity = newSize;\n\n        Resize(newCapacity);\n    }\n\n    void Resize(size_t newCapacity) {\n        const size_t size = GetSize();  // Backup the current size\n        stack_ = static_cast<char*>(allocator_->Realloc(stack_, GetCapacity(), newCapacity));\n        stackTop_ = stack_ + size;\n        stackEnd_ = stack_ + newCapacity;\n    }\n\n    void Destroy() {\n        Allocator::Free(stack_);\n        RAPIDJSON_DELETE(ownAllocator_); // Only delete if it is owned by the stack\n    }\n\n    // Prohibit copy constructor & assignment operator.\n    Stack(const Stack&);\n    Stack& operator=(const Stack&);\n\n    Allocator* 
allocator_;\n    Allocator* ownAllocator_;\n    char *stack_;\n    char *stackTop_;\n    char *stackEnd_;\n    size_t initialCapacity_;\n};\n\n} // namespace internal\nRAPIDJSON_NAMESPACE_END\n\n#if defined(__clang__)\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_STACK_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/internal/strfunc.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_INTERNAL_STRFUNC_H_\n#define RAPIDJSON_INTERNAL_STRFUNC_H_\n\n#include \"../stream.h\"\n#include <cwchar>\n\nRAPIDJSON_NAMESPACE_BEGIN\nnamespace internal {\n\n//! Custom strlen() which works on different character types.\n/*! \\tparam Ch Character type (e.g. char, wchar_t, short)\n    \\param s Null-terminated input string.\n    \\return Number of characters in the string. \n    \\note This has the same semantics as strlen(), the return value is not number of Unicode codepoints.\n*/\ntemplate <typename Ch>\ninline SizeType StrLen(const Ch* s) {\n    RAPIDJSON_ASSERT(s != 0);\n    const Ch* p = s;\n    while (*p) ++p;\n    return SizeType(p - s);\n}\n\ntemplate <>\ninline SizeType StrLen(const char* s) {\n    return SizeType(std::strlen(s));\n}\n\ntemplate <>\ninline SizeType StrLen(const wchar_t* s) {\n    return SizeType(std::wcslen(s));\n}\n\n//! Custom strcmp() which works on different character types.\n/*! \\tparam Ch Character type (e.g. 
char, wchar_t, short)\n    \\param s1 Null-terminated input string.\n    \\param s2 Null-terminated input string.\n    \\return 0 if equal\n*/\ntemplate<typename Ch>\ninline int StrCmp(const Ch* s1, const Ch* s2) {\n    RAPIDJSON_ASSERT(s1 != 0);\n    RAPIDJSON_ASSERT(s2 != 0);\n    while(*s1 && (*s1 == *s2)) { s1++; s2++; }\n    return static_cast<unsigned>(*s1) < static_cast<unsigned>(*s2) ? -1 : static_cast<unsigned>(*s1) > static_cast<unsigned>(*s2);\n}\n\n//! Returns number of code points in an encoded string.\ntemplate<typename Encoding>\nbool CountStringCodePoint(const typename Encoding::Ch* s, SizeType length, SizeType* outCount) {\n    RAPIDJSON_ASSERT(s != 0);\n    RAPIDJSON_ASSERT(outCount != 0);\n    GenericStringStream<Encoding> is(s);\n    const typename Encoding::Ch* end = s + length;\n    SizeType count = 0;\n    while (is.src_ < end) {\n        unsigned codepoint;\n        if (!Encoding::Decode(is, &codepoint))\n            return false;\n        count++;\n    }\n    *outCount = count;\n    return true;\n}\n\n} // namespace internal\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_INTERNAL_STRFUNC_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/internal/strtod.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_STRTOD_\n#define RAPIDJSON_STRTOD_\n\n#include \"ieee754.h\"\n#include \"biginteger.h\"\n#include \"diyfp.h\"\n#include \"pow10.h\"\n#include <climits>\n#include <limits>\n\nRAPIDJSON_NAMESPACE_BEGIN\nnamespace internal {\n\ninline double FastPath(double significand, int exp) {\n    if (exp < -308)\n        return 0.0;\n    else if (exp >= 0)\n        return significand * internal::Pow10(exp);\n    else\n        return significand / internal::Pow10(-exp);\n}\n\ninline double StrtodNormalPrecision(double d, int p) {\n    if (p < -308) {\n        // Prevent expSum < -308, making Pow10(p) = 0\n        d = FastPath(d, -308);\n        d = FastPath(d, p + 308);\n    }\n    else\n        d = FastPath(d, p);\n    return d;\n}\n\ntemplate <typename T>\ninline T Min3(T a, T b, T c) {\n    T m = a;\n    if (m > b) m = b;\n    if (m > c) m = c;\n    return m;\n}\n\ninline int CheckWithinHalfULP(double b, const BigInteger& d, int dExp) {\n    const Double db(b);\n    const uint64_t bInt = db.IntegerSignificand();\n    const int bExp = db.IntegerExponent();\n    const int hExp = bExp - 1;\n\n    int dS_Exp2 = 0, dS_Exp5 = 0, bS_Exp2 = 0, bS_Exp5 = 0, hS_Exp2 = 0, hS_Exp5 = 0;\n\n    // Adjust for decimal exponent\n    if (dExp >= 0) {\n        
dS_Exp2 += dExp;\n        dS_Exp5 += dExp;\n    }\n    else {\n        bS_Exp2 -= dExp;\n        bS_Exp5 -= dExp;\n        hS_Exp2 -= dExp;\n        hS_Exp5 -= dExp;\n    }\n\n    // Adjust for binary exponent\n    if (bExp >= 0)\n        bS_Exp2 += bExp;\n    else {\n        dS_Exp2 -= bExp;\n        hS_Exp2 -= bExp;\n    }\n\n    // Adjust for half ulp exponent\n    if (hExp >= 0)\n        hS_Exp2 += hExp;\n    else {\n        dS_Exp2 -= hExp;\n        bS_Exp2 -= hExp;\n    }\n\n    // Remove common power of two factor from all three scaled values\n    int common_Exp2 = Min3(dS_Exp2, bS_Exp2, hS_Exp2);\n    dS_Exp2 -= common_Exp2;\n    bS_Exp2 -= common_Exp2;\n    hS_Exp2 -= common_Exp2;\n\n    BigInteger dS = d;\n    dS.MultiplyPow5(static_cast<unsigned>(dS_Exp5)) <<= static_cast<unsigned>(dS_Exp2);\n\n    BigInteger bS(bInt);\n    bS.MultiplyPow5(static_cast<unsigned>(bS_Exp5)) <<= static_cast<unsigned>(bS_Exp2);\n\n    BigInteger hS(1);\n    hS.MultiplyPow5(static_cast<unsigned>(hS_Exp5)) <<= static_cast<unsigned>(hS_Exp2);\n\n    BigInteger delta(0);\n    dS.Difference(bS, &delta);\n\n    return delta.Compare(hS);\n}\n\ninline bool StrtodFast(double d, int p, double* result) {\n    // Use fast path for string-to-double conversion if possible\n    // see http://www.exploringbinary.com/fast-path-decimal-to-floating-point-conversion/\n    if (p > 22  && p < 22 + 16) {\n        // Fast Path Cases In Disguise\n        d *= internal::Pow10(p - 22);\n        p = 22;\n    }\n\n    if (p >= -22 && p <= 22 && d <= 9007199254740991.0) { // 2^53 - 1\n        *result = FastPath(d, p);\n        return true;\n    }\n    else\n        return false;\n}\n\n// Compute an approximation and see if it is within 1/2 ULP\ntemplate<typename Ch>\ninline bool StrtodDiyFp(const Ch* decimals, int dLen, int dExp, double* result) {\n    uint64_t significand = 0;\n    int i = 0;   // 2^64 - 1 = 18446744073709551615, 1844674407370955161 = 0x1999999999999999    \n    for (; i < dLen; i++) {\n 
       if (significand  >  RAPIDJSON_UINT64_C2(0x19999999, 0x99999999) ||\n            (significand == RAPIDJSON_UINT64_C2(0x19999999, 0x99999999) && decimals[i] > Ch('5')))\n            break;\n        significand = significand * 10u + static_cast<unsigned>(decimals[i] - Ch('0'));\n    }\n    \n    if (i < dLen && decimals[i] >= Ch('5')) // Rounding\n        significand++;\n\n    int remaining = dLen - i;\n    const int kUlpShift = 3;\n    const int kUlp = 1 << kUlpShift;\n    int64_t error = (remaining == 0) ? 0 : kUlp / 2;\n\n    DiyFp v(significand, 0);\n    v = v.Normalize();\n    error <<= -v.e;\n\n    dExp += remaining;\n\n    int actualExp;\n    DiyFp cachedPower = GetCachedPower10(dExp, &actualExp);\n    if (actualExp != dExp) {\n        static const DiyFp kPow10[] = {\n            DiyFp(RAPIDJSON_UINT64_C2(0xa0000000, 0x00000000), -60),  // 10^1\n            DiyFp(RAPIDJSON_UINT64_C2(0xc8000000, 0x00000000), -57),  // 10^2\n            DiyFp(RAPIDJSON_UINT64_C2(0xfa000000, 0x00000000), -54),  // 10^3\n            DiyFp(RAPIDJSON_UINT64_C2(0x9c400000, 0x00000000), -50),  // 10^4\n            DiyFp(RAPIDJSON_UINT64_C2(0xc3500000, 0x00000000), -47),  // 10^5\n            DiyFp(RAPIDJSON_UINT64_C2(0xf4240000, 0x00000000), -44),  // 10^6\n            DiyFp(RAPIDJSON_UINT64_C2(0x98968000, 0x00000000), -40)   // 10^7\n        };\n        int adjustment = dExp - actualExp;\n        RAPIDJSON_ASSERT(adjustment >= 1 && adjustment < 8);\n        v = v * kPow10[adjustment - 1];\n        if (dLen + adjustment > 19) // has more digits than decimal digits in 64-bit\n            error += kUlp / 2;\n    }\n\n    v = v * cachedPower;\n\n    error += kUlp + (error == 0 ? 
0 : 1);\n\n    const int oldExp = v.e;\n    v = v.Normalize();\n    error <<= oldExp - v.e;\n\n    const int effectiveSignificandSize = Double::EffectiveSignificandSize(64 + v.e);\n    int precisionSize = 64 - effectiveSignificandSize;\n    if (precisionSize + kUlpShift >= 64) {\n        int scaleExp = (precisionSize + kUlpShift) - 63;\n        v.f >>= scaleExp;\n        v.e += scaleExp; \n        error = (error >> scaleExp) + 1 + kUlp;\n        precisionSize -= scaleExp;\n    }\n\n    DiyFp rounded(v.f >> precisionSize, v.e + precisionSize);\n    const uint64_t precisionBits = (v.f & ((uint64_t(1) << precisionSize) - 1)) * kUlp;\n    const uint64_t halfWay = (uint64_t(1) << (precisionSize - 1)) * kUlp;\n    if (precisionBits >= halfWay + static_cast<unsigned>(error)) {\n        rounded.f++;\n        if (rounded.f & (DiyFp::kDpHiddenBit << 1)) { // rounding overflows mantissa (issue #340)\n            rounded.f >>= 1;\n            rounded.e++;\n        }\n    }\n\n    *result = rounded.ToDouble();\n\n    return halfWay - static_cast<unsigned>(error) >= precisionBits || precisionBits >= halfWay + static_cast<unsigned>(error);\n}\n\ntemplate<typename Ch>\ninline double StrtodBigInteger(double approx, const Ch* decimals, int dLen, int dExp) {\n    RAPIDJSON_ASSERT(dLen >= 0);\n    const BigInteger dInt(decimals, static_cast<unsigned>(dLen));\n    Double a(approx);\n    int cmp = CheckWithinHalfULP(a.Value(), dInt, dExp);\n    if (cmp < 0)\n        return a.Value();  // within half ULP\n    else if (cmp == 0) {\n        // Round towards even\n        if (a.Significand() & 1)\n            return a.NextPositiveDouble();\n        else\n            return a.Value();\n    }\n    else // adjustment\n        return a.NextPositiveDouble();\n}\n\ntemplate<typename Ch>\ninline double StrtodFullPrecision(double d, int p, const Ch* decimals, size_t length, size_t decimalPosition, int exp) {\n    RAPIDJSON_ASSERT(d >= 0.0);\n    RAPIDJSON_ASSERT(length >= 1);\n\n    double result = 
0.0;\n    if (StrtodFast(d, p, &result))\n        return result;\n\n    RAPIDJSON_ASSERT(length <= INT_MAX);\n    int dLen = static_cast<int>(length);\n\n    RAPIDJSON_ASSERT(length >= decimalPosition);\n    RAPIDJSON_ASSERT(length - decimalPosition <= INT_MAX);\n    int dExpAdjust = static_cast<int>(length - decimalPosition);\n\n    RAPIDJSON_ASSERT(exp >= INT_MIN + dExpAdjust);\n    int dExp = exp - dExpAdjust;\n\n    // Make sure length+dExp does not overflow\n    RAPIDJSON_ASSERT(dExp <= INT_MAX - dLen);\n\n    // Trim leading zeros\n    while (dLen > 0 && *decimals == '0') {\n        dLen--;\n        decimals++;\n    }\n\n    // Trim trailing zeros\n    while (dLen > 0 && decimals[dLen - 1] == '0') {\n        dLen--;\n        dExp++;\n    }\n\n    if (dLen == 0) { // Buffer only contains zeros.\n        return 0.0;\n    }\n\n    // Trim right-most digits\n    const int kMaxDecimalDigit = 767 + 1;\n    if (dLen > kMaxDecimalDigit) {\n        dExp += dLen - kMaxDecimalDigit;\n        dLen = kMaxDecimalDigit;\n    }\n\n    // If too small, underflow to zero.\n    // Any x <= 10^-324 is interpreted as zero.\n    if (dLen + dExp <= -324)\n        return 0.0;\n\n    // If too large, overflow to infinity.\n    // Any x >= 10^309 is interpreted as +infinity.\n    if (dLen + dExp > 309)\n        return std::numeric_limits<double>::infinity();\n\n    if (StrtodDiyFp(decimals, dLen, dExp, &result))\n        return result;\n\n    // Use approximation from StrtodDiyFp and make adjustment with BigInteger comparison\n    return StrtodBigInteger(result, decimals, dLen, dExp);\n}\n\n} // namespace internal\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_STRTOD_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/internal/swap.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_INTERNAL_SWAP_H_\n#define RAPIDJSON_INTERNAL_SWAP_H_\n\n#include \"../rapidjson.h\"\n\n#if defined(__clang__)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(c++98-compat)\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\nnamespace internal {\n\n//! Custom swap() to avoid dependency on C++ <algorithm> header\n/*! \\tparam T Type of the arguments to swap, should be instantiated with primitive C++ types only.\n    \\note This has the same semantics as std::swap().\n*/\ntemplate <typename T>\ninline void Swap(T& a, T& b) RAPIDJSON_NOEXCEPT {\n    T tmp = a;\n        a = b;\n        b = tmp;\n}\n\n} // namespace internal\nRAPIDJSON_NAMESPACE_END\n\n#if defined(__clang__)\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_INTERNAL_SWAP_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/istreamwrapper.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_ISTREAMWRAPPER_H_\n#define RAPIDJSON_ISTREAMWRAPPER_H_\n\n#include \"stream.h\"\n#include <iosfwd>\n#include <ios>\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(padded)\n#elif defined(_MSC_VER)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(4351) // new behavior: elements of array 'array' will be default initialized\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n//! Wrapper of \\c std::basic_istream into RapidJSON's Stream concept.\n/*!\n    The classes can be wrapped including but not limited to:\n\n    - \\c std::istringstream\n    - \\c std::stringstream\n    - \\c std::wistringstream\n    - \\c std::wstringstream\n    - \\c std::ifstream\n    - \\c std::fstream\n    - \\c std::wifstream\n    - \\c std::wfstream\n\n    \\tparam StreamType Class derived from \\c std::basic_istream.\n*/\n   \ntemplate <typename StreamType>\nclass BasicIStreamWrapper {\npublic:\n    typedef typename StreamType::char_type Ch;\n\n    //! Constructor.\n    /*!\n        \\param stream stream opened for read.\n    */\n    BasicIStreamWrapper(StreamType &stream) : stream_(stream), buffer_(peekBuffer_), bufferSize_(4), bufferLast_(0), current_(buffer_), readCount_(0), count_(0), eof_(false) { \n        Read();\n    }\n\n    //! 
Constructor.\n    /*!\n        \\param stream stream opened for read.\n        \\param buffer user-supplied buffer.\n        \\param bufferSize size of buffer in bytes. Must >=4 bytes.\n    */\n    BasicIStreamWrapper(StreamType &stream, char* buffer, size_t bufferSize) : stream_(stream), buffer_(buffer), bufferSize_(bufferSize), bufferLast_(0), current_(buffer_), readCount_(0), count_(0), eof_(false) { \n        RAPIDJSON_ASSERT(bufferSize >= 4);\n        Read();\n    }\n\n    Ch Peek() const { return *current_; }\n    Ch Take() { Ch c = *current_; Read(); return c; }\n    size_t Tell() const { return count_ + static_cast<size_t>(current_ - buffer_); }\n\n    // Not implemented\n    void Put(Ch) { RAPIDJSON_ASSERT(false); }\n    void Flush() { RAPIDJSON_ASSERT(false); } \n    Ch* PutBegin() { RAPIDJSON_ASSERT(false); return 0; }\n    size_t PutEnd(Ch*) { RAPIDJSON_ASSERT(false); return 0; }\n\n    // For encoding detection only.\n    const Ch* Peek4() const {\n        return (current_ + 4 - !eof_ <= bufferLast_) ? 
current_ : 0;\n    }\n\nprivate:\n    BasicIStreamWrapper();\n    BasicIStreamWrapper(const BasicIStreamWrapper&);\n    BasicIStreamWrapper& operator=(const BasicIStreamWrapper&);\n\n    void Read() {\n        if (current_ < bufferLast_)\n            ++current_;\n        else if (!eof_) {\n            count_ += readCount_;\n            readCount_ = bufferSize_;\n            bufferLast_ = buffer_ + readCount_ - 1;\n            current_ = buffer_;\n\n            if (!stream_.read(buffer_, static_cast<std::streamsize>(bufferSize_))) {\n                readCount_ = static_cast<size_t>(stream_.gcount());\n                *(bufferLast_ = buffer_ + readCount_) = '\\0';\n                eof_ = true;\n            }\n        }\n    }\n\n    StreamType &stream_;\n    Ch peekBuffer_[4], *buffer_;\n    size_t bufferSize_;\n    Ch *bufferLast_;\n    Ch *current_;\n    size_t readCount_;\n    size_t count_;  //!< Number of characters read\n    bool eof_;\n};\n\ntypedef BasicIStreamWrapper<std::istream> IStreamWrapper;\ntypedef BasicIStreamWrapper<std::wistream> WIStreamWrapper;\n\n#if defined(__clang__) || defined(_MSC_VER)\nRAPIDJSON_DIAG_POP\n#endif\n\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_ISTREAMWRAPPER_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/memorybuffer.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_MEMORYBUFFER_H_\n#define RAPIDJSON_MEMORYBUFFER_H_\n\n#include \"stream.h\"\n#include \"internal/stack.h\"\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n//! Represents an in-memory output byte stream.\n/*!\n    This class is mainly for being wrapped by EncodedOutputStream or AutoUTFOutputStream.\n\n    It is similar to FileWriteBuffer but the destination is an in-memory buffer instead of a file.\n\n    Differences between MemoryBuffer and StringBuffer:\n    1. StringBuffer has Encoding but MemoryBuffer is only a byte buffer. \n    2. StringBuffer::GetString() returns a null-terminated string. 
MemoryBuffer::GetBuffer() returns a buffer without terminator.\n\n    \\tparam Allocator type for allocating memory buffer.\n    \\note implements Stream concept\n*/\ntemplate <typename Allocator = CrtAllocator>\nstruct GenericMemoryBuffer {\n    typedef char Ch; // byte\n\n    GenericMemoryBuffer(Allocator* allocator = 0, size_t capacity = kDefaultCapacity) : stack_(allocator, capacity) {}\n\n    void Put(Ch c) { *stack_.template Push<Ch>() = c; }\n    void Flush() {}\n\n    void Clear() { stack_.Clear(); }\n    void ShrinkToFit() { stack_.ShrinkToFit(); }\n    Ch* Push(size_t count) { return stack_.template Push<Ch>(count); }\n    void Pop(size_t count) { stack_.template Pop<Ch>(count); }\n\n    const Ch* GetBuffer() const {\n        return stack_.template Bottom<Ch>();\n    }\n\n    size_t GetSize() const { return stack_.GetSize(); }\n\n    static const size_t kDefaultCapacity = 256;\n    mutable internal::Stack<Allocator> stack_;\n};\n\ntypedef GenericMemoryBuffer<> MemoryBuffer;\n\n//! Implement specialized version of PutN() with memset() for better performance.\ntemplate<>\ninline void PutN(MemoryBuffer& memoryBuffer, char c, size_t n) {\n    std::memset(memoryBuffer.stack_.Push<char>(n), c, n * sizeof(c));\n}\n\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_MEMORYBUFFER_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/memorystream.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_MEMORYSTREAM_H_\n#define RAPIDJSON_MEMORYSTREAM_H_\n\n#include \"stream.h\"\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(unreachable-code)\nRAPIDJSON_DIAG_OFF(missing-noreturn)\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n//! Represents an in-memory input byte stream.\n/*!\n    This class is mainly for being wrapped by EncodedInputStream or AutoUTFInputStream.\n\n    It is similar to FileReadBuffer but the source is an in-memory buffer instead of a file.\n\n    Differences between MemoryStream and StringStream:\n    1. StringStream has encoding but MemoryStream is a byte stream.\n    2. MemoryStream needs size of the source buffer and the buffer don't need to be null terminated. StringStream assume null-terminated string as source.\n    3. MemoryStream supports Peek4() for encoding detection. StringStream is specified with an encoding so it should not have Peek4().\n    \\note implements Stream concept\n*/\nstruct MemoryStream {\n    typedef char Ch; // byte\n\n    MemoryStream(const Ch *src, size_t size) : src_(src), begin_(src), end_(src + size), size_(size) {}\n\n    Ch Peek() const { return RAPIDJSON_UNLIKELY(src_ == end_) ? '\\0' : *src_; }\n    Ch Take() { return RAPIDJSON_UNLIKELY(src_ == end_) ? 
'\\0' : *src_++; }\n    size_t Tell() const { return static_cast<size_t>(src_ - begin_); }\n\n    Ch* PutBegin() { RAPIDJSON_ASSERT(false); return 0; }\n    void Put(Ch) { RAPIDJSON_ASSERT(false); }\n    void Flush() { RAPIDJSON_ASSERT(false); }\n    size_t PutEnd(Ch*) { RAPIDJSON_ASSERT(false); return 0; }\n\n    // For encoding detection only.\n    const Ch* Peek4() const {\n        return Tell() + 4 <= size_ ? src_ : 0;\n    }\n\n    const Ch* src_;     //!< Current read position.\n    const Ch* begin_;   //!< Original head of the string.\n    const Ch* end_;     //!< End of stream.\n    size_t size_;       //!< Size of the stream.\n};\n\nRAPIDJSON_NAMESPACE_END\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_MEMORYSTREAM_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/msinttypes/inttypes.h",
    "content": "// ISO C9x  compliant inttypes.h for Microsoft Visual Studio\n// Based on ISO/IEC 9899:TC2 Committee draft (May 6, 2005) WG14/N1124 \n// \n//  Copyright (c) 2006-2013 Alexander Chemeris\n// \n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are met:\n// \n//   1. Redistributions of source code must retain the above copyright notice,\n//      this list of conditions and the following disclaimer.\n// \n//   2. Redistributions in binary form must reproduce the above copyright\n//      notice, this list of conditions and the following disclaimer in the\n//      documentation and/or other materials provided with the distribution.\n// \n//   3. Neither the name of the product nor the names of its contributors may\n//      be used to endorse or promote products derived from this software\n//      without specific prior written permission.\n// \n// THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED\n// WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF\n// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO\n// EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;\n// OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, \n// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR\n// OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF\n// ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n// \n///////////////////////////////////////////////////////////////////////////////\n\n// The above software in this distribution may have been modified by \n// THL A29 Limited (\"Tencent Modifications\"). 
\n// All Tencent Modifications are Copyright (C) 2015 THL A29 Limited.\n\n#ifndef _MSC_VER // [\n#error \"Use this header only with Microsoft Visual C++ compilers!\"\n#endif // _MSC_VER ]\n\n#ifndef _MSC_INTTYPES_H_ // [\n#define _MSC_INTTYPES_H_\n\n#if _MSC_VER > 1000\n#pragma once\n#endif\n\n#include \"stdint.h\"\n\n// miloyip: VC supports inttypes.h since VC2013\n#if _MSC_VER >= 1800\n#include <inttypes.h>\n#else\n\n// 7.8 Format conversion of integer types\n\ntypedef struct {\n   intmax_t quot;\n   intmax_t rem;\n} imaxdiv_t;\n\n// 7.8.1 Macros for format specifiers\n\n#if !defined(__cplusplus) || defined(__STDC_FORMAT_MACROS) // [   See footnote 185 at page 198\n\n// The fprintf macros for signed integers are:\n#define PRId8       \"d\"\n#define PRIi8       \"i\"\n#define PRIdLEAST8  \"d\"\n#define PRIiLEAST8  \"i\"\n#define PRIdFAST8   \"d\"\n#define PRIiFAST8   \"i\"\n\n#define PRId16       \"hd\"\n#define PRIi16       \"hi\"\n#define PRIdLEAST16  \"hd\"\n#define PRIiLEAST16  \"hi\"\n#define PRIdFAST16   \"hd\"\n#define PRIiFAST16   \"hi\"\n\n#define PRId32       \"I32d\"\n#define PRIi32       \"I32i\"\n#define PRIdLEAST32  \"I32d\"\n#define PRIiLEAST32  \"I32i\"\n#define PRIdFAST32   \"I32d\"\n#define PRIiFAST32   \"I32i\"\n\n#define PRId64       \"I64d\"\n#define PRIi64       \"I64i\"\n#define PRIdLEAST64  \"I64d\"\n#define PRIiLEAST64  \"I64i\"\n#define PRIdFAST64   \"I64d\"\n#define PRIiFAST64   \"I64i\"\n\n#define PRIdMAX     \"I64d\"\n#define PRIiMAX     \"I64i\"\n\n#define PRIdPTR     \"Id\"\n#define PRIiPTR     \"Ii\"\n\n// The fprintf macros for unsigned integers are:\n#define PRIo8       \"o\"\n#define PRIu8       \"u\"\n#define PRIx8       \"x\"\n#define PRIX8       \"X\"\n#define PRIoLEAST8  \"o\"\n#define PRIuLEAST8  \"u\"\n#define PRIxLEAST8  \"x\"\n#define PRIXLEAST8  \"X\"\n#define PRIoFAST8   \"o\"\n#define PRIuFAST8   \"u\"\n#define PRIxFAST8   \"x\"\n#define PRIXFAST8   \"X\"\n\n#define PRIo16       \"ho\"\n#define PRIu16       
\"hu\"\n#define PRIx16       \"hx\"\n#define PRIX16       \"hX\"\n#define PRIoLEAST16  \"ho\"\n#define PRIuLEAST16  \"hu\"\n#define PRIxLEAST16  \"hx\"\n#define PRIXLEAST16  \"hX\"\n#define PRIoFAST16   \"ho\"\n#define PRIuFAST16   \"hu\"\n#define PRIxFAST16   \"hx\"\n#define PRIXFAST16   \"hX\"\n\n#define PRIo32       \"I32o\"\n#define PRIu32       \"I32u\"\n#define PRIx32       \"I32x\"\n#define PRIX32       \"I32X\"\n#define PRIoLEAST32  \"I32o\"\n#define PRIuLEAST32  \"I32u\"\n#define PRIxLEAST32  \"I32x\"\n#define PRIXLEAST32  \"I32X\"\n#define PRIoFAST32   \"I32o\"\n#define PRIuFAST32   \"I32u\"\n#define PRIxFAST32   \"I32x\"\n#define PRIXFAST32   \"I32X\"\n\n#define PRIo64       \"I64o\"\n#define PRIu64       \"I64u\"\n#define PRIx64       \"I64x\"\n#define PRIX64       \"I64X\"\n#define PRIoLEAST64  \"I64o\"\n#define PRIuLEAST64  \"I64u\"\n#define PRIxLEAST64  \"I64x\"\n#define PRIXLEAST64  \"I64X\"\n#define PRIoFAST64   \"I64o\"\n#define PRIuFAST64   \"I64u\"\n#define PRIxFAST64   \"I64x\"\n#define PRIXFAST64   \"I64X\"\n\n#define PRIoMAX     \"I64o\"\n#define PRIuMAX     \"I64u\"\n#define PRIxMAX     \"I64x\"\n#define PRIXMAX     \"I64X\"\n\n#define PRIoPTR     \"Io\"\n#define PRIuPTR     \"Iu\"\n#define PRIxPTR     \"Ix\"\n#define PRIXPTR     \"IX\"\n\n// The fscanf macros for signed integers are:\n#define SCNd8       \"d\"\n#define SCNi8       \"i\"\n#define SCNdLEAST8  \"d\"\n#define SCNiLEAST8  \"i\"\n#define SCNdFAST8   \"d\"\n#define SCNiFAST8   \"i\"\n\n#define SCNd16       \"hd\"\n#define SCNi16       \"hi\"\n#define SCNdLEAST16  \"hd\"\n#define SCNiLEAST16  \"hi\"\n#define SCNdFAST16   \"hd\"\n#define SCNiFAST16   \"hi\"\n\n#define SCNd32       \"ld\"\n#define SCNi32       \"li\"\n#define SCNdLEAST32  \"ld\"\n#define SCNiLEAST32  \"li\"\n#define SCNdFAST32   \"ld\"\n#define SCNiFAST32   \"li\"\n\n#define SCNd64       \"I64d\"\n#define SCNi64       \"I64i\"\n#define SCNdLEAST64  \"I64d\"\n#define SCNiLEAST64  \"I64i\"\n#define SCNdFAST64   
\"I64d\"\n#define SCNiFAST64   \"I64i\"\n\n#define SCNdMAX     \"I64d\"\n#define SCNiMAX     \"I64i\"\n\n#ifdef _WIN64 // [\n#  define SCNdPTR     \"I64d\"\n#  define SCNiPTR     \"I64i\"\n#else  // _WIN64 ][\n#  define SCNdPTR     \"ld\"\n#  define SCNiPTR     \"li\"\n#endif  // _WIN64 ]\n\n// The fscanf macros for unsigned integers are:\n#define SCNo8       \"o\"\n#define SCNu8       \"u\"\n#define SCNx8       \"x\"\n#define SCNX8       \"X\"\n#define SCNoLEAST8  \"o\"\n#define SCNuLEAST8  \"u\"\n#define SCNxLEAST8  \"x\"\n#define SCNXLEAST8  \"X\"\n#define SCNoFAST8   \"o\"\n#define SCNuFAST8   \"u\"\n#define SCNxFAST8   \"x\"\n#define SCNXFAST8   \"X\"\n\n#define SCNo16       \"ho\"\n#define SCNu16       \"hu\"\n#define SCNx16       \"hx\"\n#define SCNX16       \"hX\"\n#define SCNoLEAST16  \"ho\"\n#define SCNuLEAST16  \"hu\"\n#define SCNxLEAST16  \"hx\"\n#define SCNXLEAST16  \"hX\"\n#define SCNoFAST16   \"ho\"\n#define SCNuFAST16   \"hu\"\n#define SCNxFAST16   \"hx\"\n#define SCNXFAST16   \"hX\"\n\n#define SCNo32       \"lo\"\n#define SCNu32       \"lu\"\n#define SCNx32       \"lx\"\n#define SCNX32       \"lX\"\n#define SCNoLEAST32  \"lo\"\n#define SCNuLEAST32  \"lu\"\n#define SCNxLEAST32  \"lx\"\n#define SCNXLEAST32  \"lX\"\n#define SCNoFAST32   \"lo\"\n#define SCNuFAST32   \"lu\"\n#define SCNxFAST32   \"lx\"\n#define SCNXFAST32   \"lX\"\n\n#define SCNo64       \"I64o\"\n#define SCNu64       \"I64u\"\n#define SCNx64       \"I64x\"\n#define SCNX64       \"I64X\"\n#define SCNoLEAST64  \"I64o\"\n#define SCNuLEAST64  \"I64u\"\n#define SCNxLEAST64  \"I64x\"\n#define SCNXLEAST64  \"I64X\"\n#define SCNoFAST64   \"I64o\"\n#define SCNuFAST64   \"I64u\"\n#define SCNxFAST64   \"I64x\"\n#define SCNXFAST64   \"I64X\"\n\n#define SCNoMAX     \"I64o\"\n#define SCNuMAX     \"I64u\"\n#define SCNxMAX     \"I64x\"\n#define SCNXMAX     \"I64X\"\n\n#ifdef _WIN64 // [\n#  define SCNoPTR     \"I64o\"\n#  define SCNuPTR     \"I64u\"\n#  define SCNxPTR     \"I64x\"\n#  define SCNXPTR   
  \"I64X\"\n#else  // _WIN64 ][\n#  define SCNoPTR     \"lo\"\n#  define SCNuPTR     \"lu\"\n#  define SCNxPTR     \"lx\"\n#  define SCNXPTR     \"lX\"\n#endif  // _WIN64 ]\n\n#endif // __STDC_FORMAT_MACROS ]\n\n// 7.8.2 Functions for greatest-width integer types\n\n// 7.8.2.1 The imaxabs function\n#define imaxabs _abs64\n\n// 7.8.2.2 The imaxdiv function\n\n// This is modified version of div() function from Microsoft's div.c found\n// in %MSVC.NET%\\crt\\src\\div.c\n#ifdef STATIC_IMAXDIV // [\nstatic\n#else // STATIC_IMAXDIV ][\n_inline\n#endif // STATIC_IMAXDIV ]\nimaxdiv_t __cdecl imaxdiv(intmax_t numer, intmax_t denom)\n{\n   imaxdiv_t result;\n\n   result.quot = numer / denom;\n   result.rem = numer % denom;\n\n   if (numer < 0 && result.rem > 0) {\n      // did division wrong; must fix up\n      ++result.quot;\n      result.rem -= denom;\n   }\n\n   return result;\n}\n\n// 7.8.2.3 The strtoimax and strtoumax functions\n#define strtoimax _strtoi64\n#define strtoumax _strtoui64\n\n// 7.8.2.4 The wcstoimax and wcstoumax functions\n#define wcstoimax _wcstoi64\n#define wcstoumax _wcstoui64\n\n#endif // _MSC_VER >= 1800\n\n#endif // _MSC_INTTYPES_H_ ]\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/msinttypes/stdint.h",
    "content": "// ISO C9x  compliant stdint.h for Microsoft Visual Studio\n// Based on ISO/IEC 9899:TC2 Committee draft (May 6, 2005) WG14/N1124 \n// \n//  Copyright (c) 2006-2013 Alexander Chemeris\n// \n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are met:\n// \n//   1. Redistributions of source code must retain the above copyright notice,\n//      this list of conditions and the following disclaimer.\n// \n//   2. Redistributions in binary form must reproduce the above copyright\n//      notice, this list of conditions and the following disclaimer in the\n//      documentation and/or other materials provided with the distribution.\n// \n//   3. Neither the name of the product nor the names of its contributors may\n//      be used to endorse or promote products derived from this software\n//      without specific prior written permission.\n// \n// THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED\n// WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF\n// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO\n// EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;\n// OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, \n// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR\n// OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF\n// ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n// \n///////////////////////////////////////////////////////////////////////////////\n\n// The above software in this distribution may have been modified by \n// THL A29 Limited (\"Tencent Modifications\"). 
\n// All Tencent Modifications are Copyright (C) 2015 THL A29 Limited.\n\n#ifndef _MSC_VER // [\n#error \"Use this header only with Microsoft Visual C++ compilers!\"\n#endif // _MSC_VER ]\n\n#ifndef _MSC_STDINT_H_ // [\n#define _MSC_STDINT_H_\n\n#if _MSC_VER > 1000\n#pragma once\n#endif\n\n// miloyip: Originally Visual Studio 2010 uses its own stdint.h. However it generates warning with INT64_C(), so change to use this file for vs2010.\n#if _MSC_VER >= 1600 // [\n#include <stdint.h>\n\n#if !defined(__cplusplus) || defined(__STDC_CONSTANT_MACROS) // [   See footnote 224 at page 260\n\n#undef INT8_C\n#undef INT16_C\n#undef INT32_C\n#undef INT64_C\n#undef UINT8_C\n#undef UINT16_C\n#undef UINT32_C\n#undef UINT64_C\n\n// 7.18.4.1 Macros for minimum-width integer constants\n\n#define INT8_C(val)  val##i8\n#define INT16_C(val) val##i16\n#define INT32_C(val) val##i32\n#define INT64_C(val) val##i64\n\n#define UINT8_C(val)  val##ui8\n#define UINT16_C(val) val##ui16\n#define UINT32_C(val) val##ui32\n#define UINT64_C(val) val##ui64\n\n// 7.18.4.2 Macros for greatest-width integer constants\n// These #ifndef's are needed to prevent collisions with <boost/cstdint.hpp>.\n// Check out Issue 9 for the details.\n#ifndef INTMAX_C //   [\n#  define INTMAX_C   INT64_C\n#endif // INTMAX_C    ]\n#ifndef UINTMAX_C //  [\n#  define UINTMAX_C  UINT64_C\n#endif // UINTMAX_C   ]\n\n#endif // __STDC_CONSTANT_MACROS ]\n\n#else // ] _MSC_VER >= 1700 [\n\n#include <limits.h>\n\n// For Visual Studio 6 in C++ mode and for many Visual Studio versions when\n// compiling for ARM we have to wrap <wchar.h> include with 'extern \"C++\" {}'\n// or compiler would give many errors like this:\n//   error C2733: second C linkage of overloaded function 'wmemchr' not allowed\n#if defined(__cplusplus) && !defined(_M_ARM)\nextern \"C\" {\n#endif\n#  include <wchar.h>\n#if defined(__cplusplus) && !defined(_M_ARM)\n}\n#endif\n\n// Define _W64 macros to mark types changing their size, like intptr_t.\n#ifndef _W64\n# 
 if !defined(__midl) && (defined(_X86_) || defined(_M_IX86)) && _MSC_VER >= 1300\n#     define _W64 __w64\n#  else\n#     define _W64\n#  endif\n#endif\n\n\n// 7.18.1 Integer types\n\n// 7.18.1.1 Exact-width integer types\n\n// Visual Studio 6 and Embedded Visual C++ 4 doesn't\n// realize that, e.g. char has the same size as __int8\n// so we give up on __intX for them.\n#if (_MSC_VER < 1300)\n   typedef signed char       int8_t;\n   typedef signed short      int16_t;\n   typedef signed int        int32_t;\n   typedef unsigned char     uint8_t;\n   typedef unsigned short    uint16_t;\n   typedef unsigned int      uint32_t;\n#else\n   typedef signed __int8     int8_t;\n   typedef signed __int16    int16_t;\n   typedef signed __int32    int32_t;\n   typedef unsigned __int8   uint8_t;\n   typedef unsigned __int16  uint16_t;\n   typedef unsigned __int32  uint32_t;\n#endif\ntypedef signed __int64       int64_t;\ntypedef unsigned __int64     uint64_t;\n\n\n// 7.18.1.2 Minimum-width integer types\ntypedef int8_t    int_least8_t;\ntypedef int16_t   int_least16_t;\ntypedef int32_t   int_least32_t;\ntypedef int64_t   int_least64_t;\ntypedef uint8_t   uint_least8_t;\ntypedef uint16_t  uint_least16_t;\ntypedef uint32_t  uint_least32_t;\ntypedef uint64_t  uint_least64_t;\n\n// 7.18.1.3 Fastest minimum-width integer types\ntypedef int8_t    int_fast8_t;\ntypedef int16_t   int_fast16_t;\ntypedef int32_t   int_fast32_t;\ntypedef int64_t   int_fast64_t;\ntypedef uint8_t   uint_fast8_t;\ntypedef uint16_t  uint_fast16_t;\ntypedef uint32_t  uint_fast32_t;\ntypedef uint64_t  uint_fast64_t;\n\n// 7.18.1.4 Integer types capable of holding object pointers\n#ifdef _WIN64 // [\n   typedef signed __int64    intptr_t;\n   typedef unsigned __int64  uintptr_t;\n#else // _WIN64 ][\n   typedef _W64 signed int   intptr_t;\n   typedef _W64 unsigned int uintptr_t;\n#endif // _WIN64 ]\n\n// 7.18.1.5 Greatest-width integer types\ntypedef int64_t   intmax_t;\ntypedef uint64_t  uintmax_t;\n\n\n// 7.18.2 
Limits of specified-width integer types\n\n#if !defined(__cplusplus) || defined(__STDC_LIMIT_MACROS) // [   See footnote 220 at page 257 and footnote 221 at page 259\n\n// 7.18.2.1 Limits of exact-width integer types\n#define INT8_MIN     ((int8_t)_I8_MIN)\n#define INT8_MAX     _I8_MAX\n#define INT16_MIN    ((int16_t)_I16_MIN)\n#define INT16_MAX    _I16_MAX\n#define INT32_MIN    ((int32_t)_I32_MIN)\n#define INT32_MAX    _I32_MAX\n#define INT64_MIN    ((int64_t)_I64_MIN)\n#define INT64_MAX    _I64_MAX\n#define UINT8_MAX    _UI8_MAX\n#define UINT16_MAX   _UI16_MAX\n#define UINT32_MAX   _UI32_MAX\n#define UINT64_MAX   _UI64_MAX\n\n// 7.18.2.2 Limits of minimum-width integer types\n#define INT_LEAST8_MIN    INT8_MIN\n#define INT_LEAST8_MAX    INT8_MAX\n#define INT_LEAST16_MIN   INT16_MIN\n#define INT_LEAST16_MAX   INT16_MAX\n#define INT_LEAST32_MIN   INT32_MIN\n#define INT_LEAST32_MAX   INT32_MAX\n#define INT_LEAST64_MIN   INT64_MIN\n#define INT_LEAST64_MAX   INT64_MAX\n#define UINT_LEAST8_MAX   UINT8_MAX\n#define UINT_LEAST16_MAX  UINT16_MAX\n#define UINT_LEAST32_MAX  UINT32_MAX\n#define UINT_LEAST64_MAX  UINT64_MAX\n\n// 7.18.2.3 Limits of fastest minimum-width integer types\n#define INT_FAST8_MIN    INT8_MIN\n#define INT_FAST8_MAX    INT8_MAX\n#define INT_FAST16_MIN   INT16_MIN\n#define INT_FAST16_MAX   INT16_MAX\n#define INT_FAST32_MIN   INT32_MIN\n#define INT_FAST32_MAX   INT32_MAX\n#define INT_FAST64_MIN   INT64_MIN\n#define INT_FAST64_MAX   INT64_MAX\n#define UINT_FAST8_MAX   UINT8_MAX\n#define UINT_FAST16_MAX  UINT16_MAX\n#define UINT_FAST32_MAX  UINT32_MAX\n#define UINT_FAST64_MAX  UINT64_MAX\n\n// 7.18.2.4 Limits of integer types capable of holding object pointers\n#ifdef _WIN64 // [\n#  define INTPTR_MIN   INT64_MIN\n#  define INTPTR_MAX   INT64_MAX\n#  define UINTPTR_MAX  UINT64_MAX\n#else // _WIN64 ][\n#  define INTPTR_MIN   INT32_MIN\n#  define INTPTR_MAX   INT32_MAX\n#  define UINTPTR_MAX  UINT32_MAX\n#endif // _WIN64 ]\n\n// 7.18.2.5 Limits of 
greatest-width integer types\n#define INTMAX_MIN   INT64_MIN\n#define INTMAX_MAX   INT64_MAX\n#define UINTMAX_MAX  UINT64_MAX\n\n// 7.18.3 Limits of other integer types\n\n#ifdef _WIN64 // [\n#  define PTRDIFF_MIN  _I64_MIN\n#  define PTRDIFF_MAX  _I64_MAX\n#else  // _WIN64 ][\n#  define PTRDIFF_MIN  _I32_MIN\n#  define PTRDIFF_MAX  _I32_MAX\n#endif  // _WIN64 ]\n\n#define SIG_ATOMIC_MIN  INT_MIN\n#define SIG_ATOMIC_MAX  INT_MAX\n\n#ifndef SIZE_MAX // [\n#  ifdef _WIN64 // [\n#     define SIZE_MAX  _UI64_MAX\n#  else // _WIN64 ][\n#     define SIZE_MAX  _UI32_MAX\n#  endif // _WIN64 ]\n#endif // SIZE_MAX ]\n\n// WCHAR_MIN and WCHAR_MAX are also defined in <wchar.h>\n#ifndef WCHAR_MIN // [\n#  define WCHAR_MIN  0\n#endif  // WCHAR_MIN ]\n#ifndef WCHAR_MAX // [\n#  define WCHAR_MAX  _UI16_MAX\n#endif  // WCHAR_MAX ]\n\n#define WINT_MIN  0\n#define WINT_MAX  _UI16_MAX\n\n#endif // __STDC_LIMIT_MACROS ]\n\n\n// 7.18.4 Limits of other integer types\n\n#if !defined(__cplusplus) || defined(__STDC_CONSTANT_MACROS) // [   See footnote 224 at page 260\n\n// 7.18.4.1 Macros for minimum-width integer constants\n\n#define INT8_C(val)  val##i8\n#define INT16_C(val) val##i16\n#define INT32_C(val) val##i32\n#define INT64_C(val) val##i64\n\n#define UINT8_C(val)  val##ui8\n#define UINT16_C(val) val##ui16\n#define UINT32_C(val) val##ui32\n#define UINT64_C(val) val##ui64\n\n// 7.18.4.2 Macros for greatest-width integer constants\n// These #ifndef's are needed to prevent collisions with <boost/cstdint.hpp>.\n// Check out Issue 9 for the details.\n#ifndef INTMAX_C //   [\n#  define INTMAX_C   INT64_C\n#endif // INTMAX_C    ]\n#ifndef UINTMAX_C //  [\n#  define UINTMAX_C  UINT64_C\n#endif // UINTMAX_C   ]\n\n#endif // __STDC_CONSTANT_MACROS ]\n\n#endif // _MSC_VER >= 1600 ]\n\n#endif // _MSC_STDINT_H_ ]\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/ostreamwrapper.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_OSTREAMWRAPPER_H_\n#define RAPIDJSON_OSTREAMWRAPPER_H_\n\n#include \"stream.h\"\n#include <iosfwd>\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(padded)\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n//! Wrapper of \\c std::basic_ostream into RapidJSON's Stream concept.\n/*!\n    The classes can be wrapped including but not limited to:\n\n    - \\c std::ostringstream\n    - \\c std::stringstream\n    - \\c std::wpstringstream\n    - \\c std::wstringstream\n    - \\c std::ifstream\n    - \\c std::fstream\n    - \\c std::wofstream\n    - \\c std::wfstream\n\n    \\tparam StreamType Class derived from \\c std::basic_ostream.\n*/\n   \ntemplate <typename StreamType>\nclass BasicOStreamWrapper {\npublic:\n    typedef typename StreamType::char_type Ch;\n    BasicOStreamWrapper(StreamType& stream) : stream_(stream) {}\n\n    void Put(Ch c) {\n        stream_.put(c);\n    }\n\n    void Flush() {\n        stream_.flush();\n    }\n\n    // Not implemented\n    char Peek() const { RAPIDJSON_ASSERT(false); return 0; }\n    char Take() { RAPIDJSON_ASSERT(false); return 0; }\n    size_t Tell() const { RAPIDJSON_ASSERT(false); return 0; }\n    char* PutBegin() { RAPIDJSON_ASSERT(false); return 0; }\n    size_t PutEnd(char*) { 
RAPIDJSON_ASSERT(false); return 0; }\n\nprivate:\n    BasicOStreamWrapper(const BasicOStreamWrapper&);\n    BasicOStreamWrapper& operator=(const BasicOStreamWrapper&);\n\n    StreamType& stream_;\n};\n\ntypedef BasicOStreamWrapper<std::ostream> OStreamWrapper;\ntypedef BasicOStreamWrapper<std::wostream> WOStreamWrapper;\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_OSTREAMWRAPPER_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/pointer.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_POINTER_H_\n#define RAPIDJSON_POINTER_H_\n\n#include \"document.h\"\n#include \"uri.h\"\n#include \"internal/itoa.h\"\n#include \"error/error.h\" // PointerParseErrorCode\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(switch-enum)\n#elif defined(_MSC_VER)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(4512) // assignment operator could not be generated\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\nstatic const SizeType kPointerInvalidIndex = ~SizeType(0);  //!< Represents an invalid index in GenericPointer::Token\n\n///////////////////////////////////////////////////////////////////////////////\n// GenericPointer\n\n//! Represents a JSON Pointer. Use Pointer for UTF8 encoding and default allocator.\n/*!\n    This class implements RFC 6901 \"JavaScript Object Notation (JSON) Pointer\" \n    (https://tools.ietf.org/html/rfc6901).\n\n    A JSON pointer is for identifying a specific value in a JSON document\n    (GenericDocument). It can simplify coding of DOM tree manipulation, because it\n    can access multiple-level depth of DOM tree with single API call.\n\n    After it parses a string representation (e.g. \"/foo/0\" or URI fragment \n    representation (e.g. 
\"#/foo/0\") into its internal representation (tokens),\n    it can be used to resolve a specific value in multiple documents, or sub-tree \n    of documents.\n\n    Contrary to GenericValue, Pointer can be copy constructed and copy assigned.\n    Apart from assignment, a Pointer cannot be modified after construction.\n\n    Although Pointer is very convenient, please be aware that constructing a Pointer\n    involves parsing and dynamic memory allocation. A special constructor with user-\n    supplied tokens eliminates these.\n\n    GenericPointer depends on GenericDocument and GenericValue.\n\n    \\tparam ValueType The value type of the DOM tree. E.g. GenericValue<UTF8<> >\n    \\tparam Allocator The allocator type for allocating memory for internal representation.\n\n    \\note GenericPointer uses the same encoding as ValueType.\n    However, the Allocator of GenericPointer is independent of the Allocator of Value.\n*/\ntemplate <typename ValueType, typename Allocator = CrtAllocator>\nclass GenericPointer {\npublic:\n    typedef typename ValueType::EncodingType EncodingType;  //!< Encoding type from Value\n    typedef typename ValueType::Ch Ch;                      //!< Character type from Value\n    typedef GenericUri<ValueType, Allocator> UriType;\n\n\n    //! A token is the basic unit of internal representation.\n    /*!\n        A JSON pointer string representation \"/foo/123\" is parsed to two tokens: \n        \"foo\" and 123. 123 will be represented in both numeric form and string form.\n        They are resolved according to the actual value type (object or array).\n\n        For tokens that are not numbers, or whose numeric value is out of bounds\n        (greater than the limits of SizeType), only the string form is used\n        (i.e. 
the token's index will be equal to kPointerInvalidIndex).\n\n        This struct is public so that user can create a Pointer without parsing and \n        allocation, using a special constructor.\n    */\n    struct Token {\n        const Ch* name;             //!< Name of the token. It has null character at the end but it can contain null character.\n        SizeType length;            //!< Length of the name.\n        SizeType index;             //!< A valid array index, if it is not equal to kPointerInvalidIndex.\n    };\n\n    //!@name Constructors and destructor.\n    //@{\n\n    //! Default constructor.\n    GenericPointer(Allocator* allocator = 0) : allocator_(allocator), ownAllocator_(), nameBuffer_(), tokens_(), tokenCount_(), parseErrorOffset_(), parseErrorCode_(kPointerParseErrorNone) {}\n\n    //! Constructor that parses a string or URI fragment representation.\n    /*!\n        \\param source A null-terminated, string or URI fragment representation of JSON pointer.\n        \\param allocator User supplied allocator for this pointer. If no allocator is provided, it creates a self-owned one.\n    */\n    explicit GenericPointer(const Ch* source, Allocator* allocator = 0) : allocator_(allocator), ownAllocator_(), nameBuffer_(), tokens_(), tokenCount_(), parseErrorOffset_(), parseErrorCode_(kPointerParseErrorNone) {\n        Parse(source, internal::StrLen(source));\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    //! Constructor that parses a string or URI fragment representation.\n    /*!\n        \\param source A string or URI fragment representation of JSON pointer.\n        \\param allocator User supplied allocator for this pointer. 
If no allocator is provided, it creates a self-owned one.\n        \\note Requires the definition of the preprocessor symbol \\ref RAPIDJSON_HAS_STDSTRING.\n    */\n    explicit GenericPointer(const std::basic_string<Ch>& source, Allocator* allocator = 0) : allocator_(allocator), ownAllocator_(), nameBuffer_(), tokens_(), tokenCount_(), parseErrorOffset_(), parseErrorCode_(kPointerParseErrorNone) {\n        Parse(source.c_str(), source.size());\n    }\n#endif\n\n    //! Constructor that parses a string or URI fragment representation, with length of the source string.\n    /*!\n        \\param source A string or URI fragment representation of JSON pointer.\n        \\param length Length of source.\n        \\param allocator User supplied allocator for this pointer. If no allocator is provided, it creates a self-owned one.\n        \\note Slightly faster than the overload without length.\n    */\n    GenericPointer(const Ch* source, size_t length, Allocator* allocator = 0) : allocator_(allocator), ownAllocator_(), nameBuffer_(), tokens_(), tokenCount_(), parseErrorOffset_(), parseErrorCode_(kPointerParseErrorNone) {\n        Parse(source, length);\n    }\n\n    //! 
Constructor with user-supplied tokens.\n    /*!\n        This constructor lets the user supply a const array of tokens.\n        This avoids the parsing process and eliminates allocation.\n        This is preferred for memory-constrained environments.\n\n        \\param tokens A constant array of tokens representing the JSON pointer.\n        \\param tokenCount Number of tokens.\n\n        \\b Example\n        \\code\n        #define NAME(s) { s, sizeof(s) / sizeof(s[0]) - 1, kPointerInvalidIndex }\n        #define INDEX(i) { #i, sizeof(#i) - 1, i }\n\n        static const Pointer::Token kTokens[] = { NAME(\"foo\"), INDEX(123) };\n        static const Pointer p(kTokens, sizeof(kTokens) / sizeof(kTokens[0]));\n        // Equivalent to static const Pointer p(\"/foo/123\");\n\n        #undef NAME\n        #undef INDEX\n        \\endcode\n    */\n    GenericPointer(const Token* tokens, size_t tokenCount) : allocator_(), ownAllocator_(), nameBuffer_(), tokens_(const_cast<Token*>(tokens)), tokenCount_(tokenCount), parseErrorOffset_(), parseErrorCode_(kPointerParseErrorNone) {}\n\n    //! Copy constructor.\n    GenericPointer(const GenericPointer& rhs) : allocator_(), ownAllocator_(), nameBuffer_(), tokens_(), tokenCount_(), parseErrorOffset_(), parseErrorCode_(kPointerParseErrorNone) {\n        *this = rhs;\n    }\n\n    //! Copy constructor with a user-supplied allocator.\n    GenericPointer(const GenericPointer& rhs, Allocator* allocator) : allocator_(allocator), ownAllocator_(), nameBuffer_(), tokens_(), tokenCount_(), parseErrorOffset_(), parseErrorCode_(kPointerParseErrorNone) {\n        *this = rhs;\n    }\n\n    //! Destructor.\n    ~GenericPointer() {\n        if (nameBuffer_)    // If the user-supplied tokens constructor was used, nameBuffer_ is nullptr and tokens_ are not deallocated.\n            Allocator::Free(tokens_);\n        RAPIDJSON_DELETE(ownAllocator_);\n    }\n\n    //! 
Assignment operator.\n    GenericPointer& operator=(const GenericPointer& rhs) {\n        if (this != &rhs) {\n            // Do not delete ownAllocator_\n            if (nameBuffer_)\n                Allocator::Free(tokens_);\n\n            tokenCount_ = rhs.tokenCount_;\n            parseErrorOffset_ = rhs.parseErrorOffset_;\n            parseErrorCode_ = rhs.parseErrorCode_;\n\n            if (rhs.nameBuffer_)\n                CopyFromRaw(rhs); // Normally parsed tokens.\n            else {\n                tokens_ = rhs.tokens_; // User supplied const tokens.\n                nameBuffer_ = 0;\n            }\n        }\n        return *this;\n    }\n\n    //! Swap the content of this pointer with another.\n    /*!\n        \\param other The pointer to swap with.\n        \\note Constant complexity.\n    */\n    GenericPointer& Swap(GenericPointer& other) RAPIDJSON_NOEXCEPT {\n        internal::Swap(allocator_, other.allocator_);\n        internal::Swap(ownAllocator_, other.ownAllocator_);\n        internal::Swap(nameBuffer_, other.nameBuffer_);\n        internal::Swap(tokens_, other.tokens_);\n        internal::Swap(tokenCount_, other.tokenCount_);\n        internal::Swap(parseErrorOffset_, other.parseErrorOffset_);\n        internal::Swap(parseErrorCode_, other.parseErrorCode_);\n        return *this;\n    }\n\n    //! Free-standing swap function helper\n    /*!\n        Helper function to enable support for the common swap implementation pattern based on \\c std::swap:\n        \\code\n        void swap(MyClass& a, MyClass& b) {\n            using std::swap;\n            swap(a.pointer, b.pointer);\n            // ...\n        }\n        \\endcode\n        \\see Swap()\n     */\n    friend inline void swap(GenericPointer& a, GenericPointer& b) RAPIDJSON_NOEXCEPT { a.Swap(b); }\n\n    //@}\n\n    //!@name Append token\n    //@{\n\n    //! 
Append a token and return a new Pointer\n    /*!\n        \\param token Token to be appended.\n        \\param allocator Allocator for the newly return Pointer.\n        \\return A new Pointer with appended token.\n    */\n    GenericPointer Append(const Token& token, Allocator* allocator = 0) const {\n        GenericPointer r;\n        r.allocator_ = allocator;\n        Ch *p = r.CopyFromRaw(*this, 1, token.length + 1);\n        std::memcpy(p, token.name, (token.length + 1) * sizeof(Ch));\n        r.tokens_[tokenCount_].name = p;\n        r.tokens_[tokenCount_].length = token.length;\n        r.tokens_[tokenCount_].index = token.index;\n        return r;\n    }\n\n    //! Append a name token with length, and return a new Pointer\n    /*!\n        \\param name Name to be appended.\n        \\param length Length of name.\n        \\param allocator Allocator for the newly return Pointer.\n        \\return A new Pointer with appended token.\n    */\n    GenericPointer Append(const Ch* name, SizeType length, Allocator* allocator = 0) const {\n        Token token = { name, length, kPointerInvalidIndex };\n        return Append(token, allocator);\n    }\n\n    //! Append a name token without length, and return a new Pointer\n    /*!\n        \\param name Name (const Ch*) to be appended.\n        \\param allocator Allocator for the newly return Pointer.\n        \\return A new Pointer with appended token.\n    */\n    template <typename T>\n    RAPIDJSON_DISABLEIF_RETURN((internal::NotExpr<internal::IsSame<typename internal::RemoveConst<T>::Type, Ch> >), (GenericPointer))\n    Append(T* name, Allocator* allocator = 0) const {\n        return Append(name, internal::StrLen(name), allocator);\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    //! 
Append a name token, and return a new Pointer\n    /*!\n        \\param name Name to be appended.\n        \\param allocator Allocator for the newly returned Pointer.\n        \\return A new Pointer with the appended token.\n    */\n    GenericPointer Append(const std::basic_string<Ch>& name, Allocator* allocator = 0) const {\n        return Append(name.c_str(), static_cast<SizeType>(name.size()), allocator);\n    }\n#endif\n\n    //! Append an index token, and return a new Pointer\n    /*!\n        \\param index Index to be appended.\n        \\param allocator Allocator for the newly returned Pointer.\n        \\return A new Pointer with the appended token.\n    */\n    GenericPointer Append(SizeType index, Allocator* allocator = 0) const {\n        char buffer[21];\n        char* end = sizeof(SizeType) == 4 ? internal::u32toa(index, buffer) : internal::u64toa(index, buffer);\n        SizeType length = static_cast<SizeType>(end - buffer);\n        buffer[length] = '\\0';\n\n        if (sizeof(Ch) == 1) {\n            Token token = { reinterpret_cast<Ch*>(buffer), length, index };\n            return Append(token, allocator);\n        }\n        else {\n            Ch name[21];\n            for (size_t i = 0; i <= length; i++)\n                name[i] = static_cast<Ch>(buffer[i]);\n            Token token = { name, length, index };\n            return Append(token, allocator);\n        }\n    }\n\n    //! 
Append a token by value, and return a new Pointer\n    /*!\n        \\param token token to be appended.\n        \\param allocator Allocator for the newly return Pointer.\n        \\return A new Pointer with appended token.\n    */\n    GenericPointer Append(const ValueType& token, Allocator* allocator = 0) const {\n        if (token.IsString())\n            return Append(token.GetString(), token.GetStringLength(), allocator);\n        else {\n            RAPIDJSON_ASSERT(token.IsUint64());\n            RAPIDJSON_ASSERT(token.GetUint64() <= SizeType(~0));\n            return Append(static_cast<SizeType>(token.GetUint64()), allocator);\n        }\n    }\n\n    //!@name Handling Parse Error\n    //@{\n\n    //! Check whether this is a valid pointer.\n    bool IsValid() const { return parseErrorCode_ == kPointerParseErrorNone; }\n\n    //! Get the parsing error offset in code unit.\n    size_t GetParseErrorOffset() const { return parseErrorOffset_; }\n\n    //! Get the parsing error code.\n    PointerParseErrorCode GetParseErrorCode() const { return parseErrorCode_; }\n\n    //@}\n\n    //! Get the allocator of this pointer.\n    Allocator& GetAllocator() { return *allocator_; }\n\n    //!@name Tokens\n    //@{\n\n    //! Get the token array (const version only).\n    const Token* GetTokens() const { return tokens_; }\n\n    //! Get the number of tokens.\n    size_t GetTokenCount() const { return tokenCount_; }\n\n    //@}\n\n    //!@name Equality/inequality operators\n    //@{\n\n    //! 
Equality operator.\n    /*!\n        \\note When any pointers are invalid, always returns false.\n    */\n    bool operator==(const GenericPointer& rhs) const {\n        if (!IsValid() || !rhs.IsValid() || tokenCount_ != rhs.tokenCount_)\n            return false;\n\n        for (size_t i = 0; i < tokenCount_; i++) {\n            if (tokens_[i].index != rhs.tokens_[i].index ||\n                tokens_[i].length != rhs.tokens_[i].length || \n                (tokens_[i].length != 0 && std::memcmp(tokens_[i].name, rhs.tokens_[i].name, sizeof(Ch)* tokens_[i].length) != 0))\n            {\n                return false;\n            }\n        }\n\n        return true;\n    }\n\n    //! Inequality operator.\n    /*!\n        \\note When any pointers are invalid, always returns true.\n    */\n    bool operator!=(const GenericPointer& rhs) const { return !(*this == rhs); }\n\n    //! Less than operator.\n    /*!\n        \\note Invalid pointers are always greater than valid ones.\n    */\n    bool operator<(const GenericPointer& rhs) const {\n        if (!IsValid())\n            return false;\n        if (!rhs.IsValid())\n            return true;\n\n        if (tokenCount_ != rhs.tokenCount_)\n            return tokenCount_ < rhs.tokenCount_;\n\n        for (size_t i = 0; i < tokenCount_; i++) {\n            if (tokens_[i].index != rhs.tokens_[i].index)\n                return tokens_[i].index < rhs.tokens_[i].index;\n\n            if (tokens_[i].length != rhs.tokens_[i].length)\n                return tokens_[i].length < rhs.tokens_[i].length;\n\n            if (int cmp = std::memcmp(tokens_[i].name, rhs.tokens_[i].name, sizeof(Ch) * tokens_[i].length))\n                return cmp < 0;\n        }\n\n        return false;\n    }\n\n    //@}\n\n    //!@name Stringify\n    //@{\n\n    //! 
Stringify the pointer into its string representation.\n    /*!\n        \\tparam OutputStream Type of output stream.\n        \\param os The output stream.\n    */\n    template<typename OutputStream>\n    bool Stringify(OutputStream& os) const {\n        return Stringify<false, OutputStream>(os);\n    }\n\n    //! Stringify the pointer into its URI fragment representation.\n    /*!\n        \\tparam OutputStream Type of output stream.\n        \\param os The output stream.\n    */\n    template<typename OutputStream>\n    bool StringifyUriFragment(OutputStream& os) const {\n        return Stringify<true, OutputStream>(os);\n    }\n\n    //@}\n\n    //!@name Create value\n    //@{\n\n    //! Create a value in a subtree.\n    /*!\n        If the value does not exist, it creates all parent values and a JSON Null value.\n        So it always succeeds and returns the newly created or existing value.\n\n        Note that it may change the types of parents according to the tokens, so it \n        potentially removes previously stored values. For example, if a document \n        was an array, and \"/foo\" is used to create a value, then the document \n        will be changed to an object, and all existing array elements are lost.\n\n        \\param root Root value of a DOM subtree to be resolved. 
It can be any value other than document root.\n        \\param allocator Allocator for creating the values if the specified value or its parents are not exist.\n        \\param alreadyExist If non-null, it stores whether the resolved value is already exist.\n        \\return The resolved newly created (a JSON Null value), or already exists value.\n    */\n    ValueType& Create(ValueType& root, typename ValueType::AllocatorType& allocator, bool* alreadyExist = 0) const {\n        RAPIDJSON_ASSERT(IsValid());\n        ValueType* v = &root;\n        bool exist = true;\n        for (const Token *t = tokens_; t != tokens_ + tokenCount_; ++t) {\n            if (v->IsArray() && t->name[0] == '-' && t->length == 1) {\n                v->PushBack(ValueType().Move(), allocator);\n                v = &((*v)[v->Size() - 1]);\n                exist = false;\n            }\n            else {\n                if (t->index == kPointerInvalidIndex) { // must be object name\n                    if (!v->IsObject())\n                        v->SetObject(); // Change to Object\n                }\n                else { // object name or array index\n                    if (!v->IsArray() && !v->IsObject())\n                        v->SetArray(); // Change to Array\n                }\n\n                if (v->IsArray()) {\n                    if (t->index >= v->Size()) {\n                        v->Reserve(t->index + 1, allocator);\n                        while (t->index >= v->Size())\n                            v->PushBack(ValueType().Move(), allocator);\n                        exist = false;\n                    }\n                    v = &((*v)[t->index]);\n                }\n                else {\n                    typename ValueType::MemberIterator m = v->FindMember(GenericValue<EncodingType>(GenericStringRef<Ch>(t->name, t->length)));\n                    if (m == v->MemberEnd()) {\n                        v->AddMember(ValueType(t->name, t->length, allocator).Move(), 
ValueType().Move(), allocator);\n                        m = v->MemberEnd();\n                        v = &(--m)->value; // Assumes AddMember() appends at the end\n                        exist = false;\n                    }\n                    else\n                        v = &m->value;\n                }\n            }\n        }\n\n        if (alreadyExist)\n            *alreadyExist = exist;\n\n        return *v;\n    }\n\n    //! Creates a value in a document.\n    /*!\n        \\param document A document to be resolved.\n        \\param alreadyExist If non-null, it stores whether the resolved value is already exist.\n        \\return The resolved newly created, or already exists value.\n    */\n    template <typename stackAllocator>\n    ValueType& Create(GenericDocument<EncodingType, typename ValueType::AllocatorType, stackAllocator>& document, bool* alreadyExist = 0) const {\n        return Create(document, document.GetAllocator(), alreadyExist);\n    }\n\n    //@}\n\n    //!@name Compute URI\n    //@{\n\n    //! Compute the in-scope URI for a subtree.\n    //  For use with JSON pointers into JSON schema documents.\n    /*!\n        \\param root Root value of a DOM sub-tree to be resolved. It can be any value other than document root.\n        \\param rootUri Root URI\n        \\param unresolvedTokenIndex If the pointer cannot resolve a token in the pointer, this parameter can obtain the index of unresolved token.\n        \\param allocator Allocator for Uris\n        \\return Uri if it can be resolved. Otherwise null.\n\n        \\note\n        There are only 3 situations when a URI cannot be resolved:\n        1. A value in the path is not an array nor object.\n        2. An object value does not contain the token.\n        3. 
A token is out of range of an array value.\n\n        Use unresolvedTokenIndex to retrieve the token index.\n    */\n    UriType GetUri(ValueType& root, const UriType& rootUri, size_t* unresolvedTokenIndex = 0, Allocator* allocator = 0) const {\n        static const Ch kIdString[] = { 'i', 'd', '\\0' };\n        static const ValueType kIdValue(kIdString, 2);\n        UriType base = UriType(rootUri, allocator);\n        RAPIDJSON_ASSERT(IsValid());\n        ValueType* v = &root;\n        for (const Token *t = tokens_; t != tokens_ + tokenCount_; ++t) {\n            switch (v->GetType()) {\n                case kObjectType:\n                {\n                    // See if we have an id, and if so resolve with the current base\n                    typename ValueType::MemberIterator m = v->FindMember(kIdValue);\n                    if (m != v->MemberEnd() && (m->value).IsString()) {\n                        UriType here = UriType(m->value, allocator).Resolve(base, allocator);\n                        base = here;\n                    }\n                    m = v->FindMember(GenericValue<EncodingType>(GenericStringRef<Ch>(t->name, t->length)));\n                    if (m == v->MemberEnd())\n                        break;\n                    v = &m->value;\n                }\n                  continue;\n                case kArrayType:\n                    if (t->index == kPointerInvalidIndex || t->index >= v->Size())\n                        break;\n                    v = &((*v)[t->index]);\n                    continue;\n                default:\n                    break;\n            }\n\n            // Error: unresolved token\n            if (unresolvedTokenIndex)\n                *unresolvedTokenIndex = static_cast<size_t>(t - tokens_);\n            return UriType(allocator);\n        }\n        return base;\n    }\n\n    UriType GetUri(const ValueType& root, const UriType& rootUri, size_t* unresolvedTokenIndex = 0, Allocator* allocator = 0) const {\n      
return GetUri(const_cast<ValueType&>(root), rootUri, unresolvedTokenIndex, allocator);\n    }\n\n\n    //!@name Query value\n    //@{\n\n    //! Query a value in a subtree.\n    /*!\n        \\param root Root value of a DOM sub-tree to be resolved. It can be any value other than document root.\n        \\param unresolvedTokenIndex If the pointer cannot resolve a token in the pointer, this parameter can obtain the index of unresolved token.\n        \\return Pointer to the value if it can be resolved. Otherwise null.\n\n        \\note\n        There are only 3 situations when a value cannot be resolved:\n        1. A value in the path is not an array nor object.\n        2. An object value does not contain the token.\n        3. A token is out of range of an array value.\n\n        Use unresolvedTokenIndex to retrieve the token index.\n    */\n    ValueType* Get(ValueType& root, size_t* unresolvedTokenIndex = 0) const {\n        RAPIDJSON_ASSERT(IsValid());\n        ValueType* v = &root;\n        for (const Token *t = tokens_; t != tokens_ + tokenCount_; ++t) {\n            switch (v->GetType()) {\n            case kObjectType:\n                {\n                    typename ValueType::MemberIterator m = v->FindMember(GenericValue<EncodingType>(GenericStringRef<Ch>(t->name, t->length)));\n                    if (m == v->MemberEnd())\n                        break;\n                    v = &m->value;\n                }\n                continue;\n            case kArrayType:\n                if (t->index == kPointerInvalidIndex || t->index >= v->Size())\n                    break;\n                v = &((*v)[t->index]);\n                continue;\n            default:\n                break;\n            }\n\n            // Error: unresolved token\n            if (unresolvedTokenIndex)\n                *unresolvedTokenIndex = static_cast<size_t>(t - tokens_);\n            return 0;\n        }\n        return v;\n    }\n\n    //! 
Query a const value in a const subtree.\n    /*!\n        \\param root Root value of a DOM sub-tree to be resolved. It can be any value other than document root.\n        \\return Pointer to the value if it can be resolved. Otherwise null.\n    */\n    const ValueType* Get(const ValueType& root, size_t* unresolvedTokenIndex = 0) const { \n        return Get(const_cast<ValueType&>(root), unresolvedTokenIndex);\n    }\n\n    //@}\n\n    //!@name Query a value with default\n    //@{\n\n    //! Query a value in a subtree with default value.\n    /*!\n        Similar to Get(), but if the specified value do not exists, it creates all parents and clone the default value.\n        So that this function always succeed.\n\n        \\param root Root value of a DOM sub-tree to be resolved. It can be any value other than document root.\n        \\param defaultValue Default value to be cloned if the value was not exists.\n        \\param allocator Allocator for creating the values if the specified value or its parents are not exist.\n        \\see Create()\n    */\n    ValueType& GetWithDefault(ValueType& root, const ValueType& defaultValue, typename ValueType::AllocatorType& allocator) const {\n        bool alreadyExist;\n        ValueType& v = Create(root, allocator, &alreadyExist);\n        return alreadyExist ? v : v.CopyFrom(defaultValue, allocator);\n    }\n\n    //! Query a value in a subtree with default null-terminated string.\n    ValueType& GetWithDefault(ValueType& root, const Ch* defaultValue, typename ValueType::AllocatorType& allocator) const {\n        bool alreadyExist;\n        ValueType& v = Create(root, allocator, &alreadyExist);\n        return alreadyExist ? v : v.SetString(defaultValue, allocator);\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    //! 
Query a value in a subtree with default std::basic_string.\n    ValueType& GetWithDefault(ValueType& root, const std::basic_string<Ch>& defaultValue, typename ValueType::AllocatorType& allocator) const {\n        bool alreadyExist;\n        ValueType& v = Create(root, allocator, &alreadyExist);\n        return alreadyExist ? v : v.SetString(defaultValue, allocator);\n    }\n#endif\n\n    //! Query a value in a subtree with default primitive value.\n    /*!\n        \\tparam T Either \\ref Type, \\c int, \\c unsigned, \\c int64_t, \\c uint64_t, \\c bool\n    */\n    template <typename T>\n    RAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T>, internal::IsGenericValue<T> >), (ValueType&))\n    GetWithDefault(ValueType& root, T defaultValue, typename ValueType::AllocatorType& allocator) const {\n        return GetWithDefault(root, ValueType(defaultValue).Move(), allocator);\n    }\n\n    //! Query a value in a document with default value.\n    template <typename stackAllocator>\n    ValueType& GetWithDefault(GenericDocument<EncodingType, typename ValueType::AllocatorType, stackAllocator>& document, const ValueType& defaultValue) const {\n        return GetWithDefault(document, defaultValue, document.GetAllocator());\n    }\n\n    //! Query a value in a document with default null-terminated string.\n    template <typename stackAllocator>\n    ValueType& GetWithDefault(GenericDocument<EncodingType, typename ValueType::AllocatorType, stackAllocator>& document, const Ch* defaultValue) const {\n        return GetWithDefault(document, defaultValue, document.GetAllocator());\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    //! 
Query a value in a document with default std::basic_string.\n    template <typename stackAllocator>\n    ValueType& GetWithDefault(GenericDocument<EncodingType, typename ValueType::AllocatorType, stackAllocator>& document, const std::basic_string<Ch>& defaultValue) const {\n        return GetWithDefault(document, defaultValue, document.GetAllocator());\n    }\n#endif\n\n    //! Query a value in a document with default primitive value.\n    /*!\n        \\tparam T Either \\ref Type, \\c int, \\c unsigned, \\c int64_t, \\c uint64_t, \\c bool\n    */\n    template <typename T, typename stackAllocator>\n    RAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T>, internal::IsGenericValue<T> >), (ValueType&))\n    GetWithDefault(GenericDocument<EncodingType, typename ValueType::AllocatorType, stackAllocator>& document, T defaultValue) const {\n        return GetWithDefault(document, defaultValue, document.GetAllocator());\n    }\n\n    //@}\n\n    //!@name Set a value\n    //@{\n\n    //! Set a value in a subtree, with move semantics.\n    /*!\n        It creates all parents if they are not exist or types are different to the tokens.\n        So this function always succeeds but potentially remove existing values.\n\n        \\param root Root value of a DOM sub-tree to be resolved. It can be any value other than document root.\n        \\param value Value to be set.\n        \\param allocator Allocator for creating the values if the specified value or its parents are not exist.\n        \\see Create()\n    */\n    ValueType& Set(ValueType& root, ValueType& value, typename ValueType::AllocatorType& allocator) const {\n        return Create(root, allocator) = value;\n    }\n\n    //! Set a value in a subtree, with copy semantics.\n    ValueType& Set(ValueType& root, const ValueType& value, typename ValueType::AllocatorType& allocator) const {\n        return Create(root, allocator).CopyFrom(value, allocator);\n    }\n\n    //! 
Set a null-terminated string in a subtree.\n    ValueType& Set(ValueType& root, const Ch* value, typename ValueType::AllocatorType& allocator) const {\n        return Create(root, allocator) = ValueType(value, allocator).Move();\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    //! Set a std::basic_string in a subtree.\n    ValueType& Set(ValueType& root, const std::basic_string<Ch>& value, typename ValueType::AllocatorType& allocator) const {\n        return Create(root, allocator) = ValueType(value, allocator).Move();\n    }\n#endif\n\n    //! Set a primitive value in a subtree.\n    /*!\n        \\tparam T Either \\ref Type, \\c int, \\c unsigned, \\c int64_t, \\c uint64_t, \\c bool\n    */\n    template <typename T>\n    RAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T>, internal::IsGenericValue<T> >), (ValueType&))\n    Set(ValueType& root, T value, typename ValueType::AllocatorType& allocator) const {\n        return Create(root, allocator) = ValueType(value).Move();\n    }\n\n    //! Set a value in a document, with move semantics.\n    template <typename stackAllocator>\n    ValueType& Set(GenericDocument<EncodingType, typename ValueType::AllocatorType, stackAllocator>& document, ValueType& value) const {\n        return Create(document) = value;\n    }\n\n    //! Set a value in a document, with copy semantics.\n    template <typename stackAllocator>\n    ValueType& Set(GenericDocument<EncodingType, typename ValueType::AllocatorType, stackAllocator>& document, const ValueType& value) const {\n        return Create(document).CopyFrom(value, document.GetAllocator());\n    }\n\n    //! Set a null-terminated string in a document.\n    template <typename stackAllocator>\n    ValueType& Set(GenericDocument<EncodingType, typename ValueType::AllocatorType, stackAllocator>& document, const Ch* value) const {\n        return Create(document) = ValueType(value, document.GetAllocator()).Move();\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    //! 
Sets a std::basic_string in a document.\n    template <typename stackAllocator>\n    ValueType& Set(GenericDocument<EncodingType, typename ValueType::AllocatorType, stackAllocator>& document, const std::basic_string<Ch>& value) const {\n        return Create(document) = ValueType(value, document.GetAllocator()).Move();\n    }\n#endif\n\n    //! Set a primitive value in a document.\n    /*!\n    \\tparam T Either \\ref Type, \\c int, \\c unsigned, \\c int64_t, \\c uint64_t, \\c bool\n    */\n    template <typename T, typename stackAllocator>\n    RAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T>, internal::IsGenericValue<T> >), (ValueType&))\n        Set(GenericDocument<EncodingType, typename ValueType::AllocatorType, stackAllocator>& document, T value) const {\n            return Create(document) = value;\n    }\n\n    //@}\n\n    //!@name Swap a value\n    //@{\n\n    //! Swap a value with a value in a subtree.\n    /*!\n        It creates all parents if they are not exist or types are different to the tokens.\n        So this function always succeeds but potentially remove existing values.\n\n        \\param root Root value of a DOM sub-tree to be resolved. It can be any value other than document root.\n        \\param value Value to be swapped.\n        \\param allocator Allocator for creating the values if the specified value or its parents are not exist.\n        \\see Create()\n    */\n    ValueType& Swap(ValueType& root, ValueType& value, typename ValueType::AllocatorType& allocator) const {\n        return Create(root, allocator).Swap(value);\n    }\n\n    //! Swap a value with a value in a document.\n    template <typename stackAllocator>\n    ValueType& Swap(GenericDocument<EncodingType, typename ValueType::AllocatorType, stackAllocator>& document, ValueType& value) const {\n        return Create(document).Swap(value);\n    }\n\n    //@}\n\n    //! 
Erase a value in a subtree.\n    /*!\n        \param root Root value of a DOM sub-tree to be resolved. It can be any value other than document root.\n        \return Whether the resolved value is found and erased.\n\n        \note Erasing with an empty pointer \c Pointer(\"\"), i.e. the root, always fails and returns false.\n    */\n    bool Erase(ValueType& root) const {\n        RAPIDJSON_ASSERT(IsValid());\n        if (tokenCount_ == 0) // Cannot erase the root\n            return false;\n\n        ValueType* v = &root;\n        const Token* last = tokens_ + (tokenCount_ - 1);\n        for (const Token *t = tokens_; t != last; ++t) {\n            switch (v->GetType()) {\n            case kObjectType:\n                {\n                    typename ValueType::MemberIterator m = v->FindMember(GenericValue<EncodingType>(GenericStringRef<Ch>(t->name, t->length)));\n                    if (m == v->MemberEnd())\n                        return false;\n                    v = &m->value;\n                }\n                break;\n            case kArrayType:\n                if (t->index == kPointerInvalidIndex || t->index >= v->Size())\n                    return false;\n                v = &((*v)[t->index]);\n                break;\n            default:\n                return false;\n            }\n        }\n\n        switch (v->GetType()) {\n        case kObjectType:\n            return v->EraseMember(GenericStringRef<Ch>(last->name, last->length));\n        case kArrayType:\n            if (last->index == kPointerInvalidIndex || last->index >= v->Size())\n                return false;\n            v->Erase(v->Begin() + last->index);\n            return true;\n        default:\n            return false;\n        }\n    }\n\nprivate:\n    //! 
Clone the content from rhs to this.\n    /*!\n        \\param rhs Source pointer.\n        \\param extraToken Extra tokens to be allocated.\n        \\param extraNameBufferSize Extra name buffer size (in number of Ch) to be allocated.\n        \\return Start of non-occupied name buffer, for storing extra names.\n    */\n    Ch* CopyFromRaw(const GenericPointer& rhs, size_t extraToken = 0, size_t extraNameBufferSize = 0) {\n        if (!allocator_) // allocator is independently owned.\n            ownAllocator_ = allocator_ = RAPIDJSON_NEW(Allocator)();\n\n        size_t nameBufferSize = rhs.tokenCount_; // null terminators for tokens\n        for (Token *t = rhs.tokens_; t != rhs.tokens_ + rhs.tokenCount_; ++t)\n            nameBufferSize += t->length;\n\n        tokenCount_ = rhs.tokenCount_ + extraToken;\n        tokens_ = static_cast<Token *>(allocator_->Malloc(tokenCount_ * sizeof(Token) + (nameBufferSize + extraNameBufferSize) * sizeof(Ch)));\n        nameBuffer_ = reinterpret_cast<Ch *>(tokens_ + tokenCount_);\n        if (rhs.tokenCount_ > 0) {\n            std::memcpy(tokens_, rhs.tokens_, rhs.tokenCount_ * sizeof(Token));\n        }\n        if (nameBufferSize > 0) {\n            std::memcpy(nameBuffer_, rhs.nameBuffer_, nameBufferSize * sizeof(Ch));\n        }\n\n        // The names of each token point to a string in the nameBuffer_. 
The\n        // previous memcpy copied over string pointers into the rhs.nameBuffer_,\n        // but they should point to the strings in the new nameBuffer_.\n        for (size_t i = 0; i < rhs.tokenCount_; ++i) {\n          // The offset between the string address and the name buffer should\n          // still be constant, so we can just get this offset and set each new\n          // token name according to the new buffer start + the known offset.\n          std::ptrdiff_t name_offset = rhs.tokens_[i].name - rhs.nameBuffer_;\n          tokens_[i].name = nameBuffer_ + name_offset;\n        }\n\n        return nameBuffer_ + nameBufferSize;\n    }\n\n    //! Check whether a character should be percent-encoded.\n    /*!\n        According to RFC 3986 2.3 Unreserved Characters.\n        \param c The character (code unit) to be tested.\n    */\n    bool NeedPercentEncode(Ch c) const {\n        return !((c >= '0' && c <= '9') || (c >= 'A' && c <='Z') || (c >= 'a' && c <= 'z') || c == '-' || c == '.' || c == '_' || c =='~');\n    }\n\n    //! Parse a JSON String or its URI fragment representation into tokens.\n#ifndef __clang__ // -Wdocumentation\n    /*!\n        \param source Either a JSON Pointer string, or its URI fragment representation. Need not be null-terminated.\n        \param length Length of the source string.\n        \note Source cannot be JSON String Representation of JSON Pointer, e.g. 
In \"/\\u0000\", \\u0000 will not be unescaped.\n    */\n#endif\n    void Parse(const Ch* source, size_t length) {\n        RAPIDJSON_ASSERT(source != NULL);\n        RAPIDJSON_ASSERT(nameBuffer_ == 0);\n        RAPIDJSON_ASSERT(tokens_ == 0);\n\n        // Create own allocator if user did not supply.\n        if (!allocator_)\n            ownAllocator_ = allocator_ = RAPIDJSON_NEW(Allocator)();\n\n        // Count number of '/' as tokenCount\n        tokenCount_ = 0;\n        for (const Ch* s = source; s != source + length; s++) \n            if (*s == '/')\n                tokenCount_++;\n\n        Token* token = tokens_ = static_cast<Token *>(allocator_->Malloc(tokenCount_ * sizeof(Token) + length * sizeof(Ch)));\n        Ch* name = nameBuffer_ = reinterpret_cast<Ch *>(tokens_ + tokenCount_);\n        size_t i = 0;\n\n        // Detect if it is a URI fragment\n        bool uriFragment = false;\n        if (source[i] == '#') {\n            uriFragment = true;\n            i++;\n        }\n\n        if (i != length && source[i] != '/') {\n            parseErrorCode_ = kPointerParseErrorTokenMustBeginWithSolidus;\n            goto error;\n        }\n\n        while (i < length) {\n            RAPIDJSON_ASSERT(source[i] == '/');\n            i++; // consumes '/'\n\n            token->name = name;\n            bool isNumber = true;\n\n            while (i < length && source[i] != '/') {\n                Ch c = source[i];\n                if (uriFragment) {\n                    // Decoding percent-encoding for URI fragment\n                    if (c == '%') {\n                        PercentDecodeStream is(&source[i], source + length);\n                        GenericInsituStringStream<EncodingType> os(name);\n                        Ch* begin = os.PutBegin();\n                        if (!Transcoder<UTF8<>, EncodingType>().Validate(is, os) || !is.IsValid()) {\n                            parseErrorCode_ = kPointerParseErrorInvalidPercentEncoding;\n                    
        goto error;\n                        }\n                        size_t len = os.PutEnd(begin);\n                        i += is.Tell() - 1;\n                        if (len == 1)\n                            c = *name;\n                        else {\n                            name += len;\n                            isNumber = false;\n                            i++;\n                            continue;\n                        }\n                    }\n                    else if (NeedPercentEncode(c)) {\n                        parseErrorCode_ = kPointerParseErrorCharacterMustPercentEncode;\n                        goto error;\n                    }\n                }\n\n                i++;\n\n                // Escaping \"~0\" -> '~', \"~1\" -> '/'\n                if (c == '~') {\n                    if (i < length) {\n                        c = source[i];\n                        if (c == '0')       c = '~';\n                        else if (c == '1')  c = '/';\n                        else {\n                            parseErrorCode_ = kPointerParseErrorInvalidEscape;\n                            goto error;\n                        }\n                        i++;\n                    }\n                    else {\n                        parseErrorCode_ = kPointerParseErrorInvalidEscape;\n                        goto error;\n                    }\n                }\n\n                // First check for index: all of characters are digit\n                if (c < '0' || c > '9')\n                    isNumber = false;\n\n                *name++ = c;\n            }\n            token->length = static_cast<SizeType>(name - token->name);\n            if (token->length == 0)\n                isNumber = false;\n            *name++ = '\\0'; // Null terminator\n\n            // Second check for index: more than one digit cannot have leading zero\n            if (isNumber && token->length > 1 && token->name[0] == '0')\n                isNumber = 
false;\n\n            // String to SizeType conversion\n            SizeType n = 0;\n            if (isNumber) {\n                for (size_t j = 0; j < token->length; j++) {\n                    SizeType m = n * 10 + static_cast<SizeType>(token->name[j] - '0');\n                    if (m < n) {   // overflow detection\n                        isNumber = false;\n                        break;\n                    }\n                    n = m;\n                }\n            }\n\n            token->index = isNumber ? n : kPointerInvalidIndex;\n            token++;\n        }\n\n        RAPIDJSON_ASSERT(name <= nameBuffer_ + length); // Should not overflow buffer\n        parseErrorCode_ = kPointerParseErrorNone;\n        return;\n\n    error:\n        Allocator::Free(tokens_);\n        nameBuffer_ = 0;\n        tokens_ = 0;\n        tokenCount_ = 0;\n        parseErrorOffset_ = i;\n        return;\n    }\n\n    //! Stringify to string or URI fragment representation.\n    /*!\n        \\tparam uriFragment True for stringifying to URI fragment representation. 
False for string representation.\n        \\tparam OutputStream type of output stream.\n        \\param os The output stream.\n    */\n    template<bool uriFragment, typename OutputStream>\n    bool Stringify(OutputStream& os) const {\n        RAPIDJSON_ASSERT(IsValid());\n\n        if (uriFragment)\n            os.Put('#');\n\n        for (Token *t = tokens_; t != tokens_ + tokenCount_; ++t) {\n            os.Put('/');\n            for (size_t j = 0; j < t->length; j++) {\n                Ch c = t->name[j];\n                if (c == '~') {\n                    os.Put('~');\n                    os.Put('0');\n                }\n                else if (c == '/') {\n                    os.Put('~');\n                    os.Put('1');\n                }\n                else if (uriFragment && NeedPercentEncode(c)) { \n                    // Transcode to UTF8 sequence\n                    GenericStringStream<typename ValueType::EncodingType> source(&t->name[j]);\n                    PercentEncodeStream<OutputStream> target(os);\n                    if (!Transcoder<EncodingType, UTF8<> >().Validate(source, target))\n                        return false;\n                    j += source.Tell() - 1;\n                }\n                else\n                    os.Put(c);\n            }\n        }\n        return true;\n    }\n\n    //! A helper stream for decoding a percent-encoded sequence into code unit.\n    /*!\n        This stream decodes %XY triplet into code unit (0-255).\n        If it encounters invalid characters, it sets output code unit as 0 and \n        mark invalid, and to be checked by IsValid().\n    */\n    class PercentDecodeStream {\n    public:\n        typedef typename ValueType::Ch Ch;\n\n        //! 
Constructor\n        /*!\n            \\param source Start of the stream\n            \\param end Past-the-end of the stream.\n        */\n        PercentDecodeStream(const Ch* source, const Ch* end) : src_(source), head_(source), end_(end), valid_(true) {}\n\n        Ch Take() {\n            if (*src_ != '%' || src_ + 3 > end_) { // %XY triplet\n                valid_ = false;\n                return 0;\n            }\n            src_++;\n            Ch c = 0;\n            for (int j = 0; j < 2; j++) {\n                c = static_cast<Ch>(c << 4);\n                Ch h = *src_;\n                if      (h >= '0' && h <= '9') c = static_cast<Ch>(c + h - '0');\n                else if (h >= 'A' && h <= 'F') c = static_cast<Ch>(c + h - 'A' + 10);\n                else if (h >= 'a' && h <= 'f') c = static_cast<Ch>(c + h - 'a' + 10);\n                else {\n                    valid_ = false;\n                    return 0;\n                }\n                src_++;\n            }\n            return c;\n        }\n\n        size_t Tell() const { return static_cast<size_t>(src_ - head_); }\n        bool IsValid() const { return valid_; }\n\n    private:\n        const Ch* src_;     //!< Current read position.\n        const Ch* head_;    //!< Original head of the string.\n        const Ch* end_;     //!< Past-the-end position.\n        bool valid_;        //!< Whether the parsing is valid.\n    };\n\n    //! 
A helper stream to encode character (UTF-8 code unit) into percent-encoded sequence.\n    template <typename OutputStream>\n    class PercentEncodeStream {\n    public:\n        PercentEncodeStream(OutputStream& os) : os_(os) {}\n        void Put(char c) { // UTF-8 must be byte\n            unsigned char u = static_cast<unsigned char>(c);\n            static const char hexDigits[16] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F' };\n            os_.Put('%');\n            os_.Put(static_cast<typename OutputStream::Ch>(hexDigits[u >> 4]));\n            os_.Put(static_cast<typename OutputStream::Ch>(hexDigits[u & 15]));\n        }\n    private:\n        OutputStream& os_;\n    };\n\n    Allocator* allocator_;                  //!< The current allocator. It is either user-supplied or equal to ownAllocator_.\n    Allocator* ownAllocator_;               //!< Allocator owned by this Pointer.\n    Ch* nameBuffer_;                        //!< A buffer containing all names in tokens.\n    Token* tokens_;                         //!< A list of tokens.\n    size_t tokenCount_;                     //!< Number of tokens in tokens_.\n    size_t parseErrorOffset_;               //!< Offset in code unit when parsing fail.\n    PointerParseErrorCode parseErrorCode_;  //!< Parsing error code.\n};\n\n//! 
GenericPointer for Value (UTF-8, default allocator).\ntypedef GenericPointer<Value> Pointer;\n\n//!@name Helper functions for GenericPointer\n//@{\n\n//////////////////////////////////////////////////////////////////////////////\n\ntemplate <typename T>\ntypename T::ValueType& CreateValueByPointer(T& root, const GenericPointer<typename T::ValueType>& pointer, typename T::AllocatorType& a) {\n    return pointer.Create(root, a);\n}\n\ntemplate <typename T, typename CharType, size_t N>\ntypename T::ValueType& CreateValueByPointer(T& root, const CharType(&source)[N], typename T::AllocatorType& a) {\n    return GenericPointer<typename T::ValueType>(source, N - 1).Create(root, a);\n}\n\n// No allocator parameter\n\ntemplate <typename DocumentType>\ntypename DocumentType::ValueType& CreateValueByPointer(DocumentType& document, const GenericPointer<typename DocumentType::ValueType>& pointer) {\n    return pointer.Create(document);\n}\n\ntemplate <typename DocumentType, typename CharType, size_t N>\ntypename DocumentType::ValueType& CreateValueByPointer(DocumentType& document, const CharType(&source)[N]) {\n    return GenericPointer<typename DocumentType::ValueType>(source, N - 1).Create(document);\n}\n\n//////////////////////////////////////////////////////////////////////////////\n\ntemplate <typename T>\ntypename T::ValueType* GetValueByPointer(T& root, const GenericPointer<typename T::ValueType>& pointer, size_t* unresolvedTokenIndex = 0) {\n    return pointer.Get(root, unresolvedTokenIndex);\n}\n\ntemplate <typename T>\nconst typename T::ValueType* GetValueByPointer(const T& root, const GenericPointer<typename T::ValueType>& pointer, size_t* unresolvedTokenIndex = 0) {\n    return pointer.Get(root, unresolvedTokenIndex);\n}\n\ntemplate <typename T, typename CharType, size_t N>\ntypename T::ValueType* GetValueByPointer(T& root, const CharType (&source)[N], size_t* unresolvedTokenIndex = 0) {\n    return GenericPointer<typename T::ValueType>(source, N - 1).Get(root, 
unresolvedTokenIndex);\n}\n\ntemplate <typename T, typename CharType, size_t N>\nconst typename T::ValueType* GetValueByPointer(const T& root, const CharType(&source)[N], size_t* unresolvedTokenIndex = 0) {\n    return GenericPointer<typename T::ValueType>(source, N - 1).Get(root, unresolvedTokenIndex);\n}\n\n//////////////////////////////////////////////////////////////////////////////\n\ntemplate <typename T>\ntypename T::ValueType& GetValueByPointerWithDefault(T& root, const GenericPointer<typename T::ValueType>& pointer, const typename T::ValueType& defaultValue, typename T::AllocatorType& a) {\n    return pointer.GetWithDefault(root, defaultValue, a);\n}\n\ntemplate <typename T>\ntypename T::ValueType& GetValueByPointerWithDefault(T& root, const GenericPointer<typename T::ValueType>& pointer, const typename T::Ch* defaultValue, typename T::AllocatorType& a) {\n    return pointer.GetWithDefault(root, defaultValue, a);\n}\n\n#if RAPIDJSON_HAS_STDSTRING\ntemplate <typename T>\ntypename T::ValueType& GetValueByPointerWithDefault(T& root, const GenericPointer<typename T::ValueType>& pointer, const std::basic_string<typename T::Ch>& defaultValue, typename T::AllocatorType& a) {\n    return pointer.GetWithDefault(root, defaultValue, a);\n}\n#endif\n\ntemplate <typename T, typename T2>\nRAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T2>, internal::IsGenericValue<T2> >), (typename T::ValueType&))\nGetValueByPointerWithDefault(T& root, const GenericPointer<typename T::ValueType>& pointer, T2 defaultValue, typename T::AllocatorType& a) {\n    return pointer.GetWithDefault(root, defaultValue, a);\n}\n\ntemplate <typename T, typename CharType, size_t N>\ntypename T::ValueType& GetValueByPointerWithDefault(T& root, const CharType(&source)[N], const typename T::ValueType& defaultValue, typename T::AllocatorType& a) {\n    return GenericPointer<typename T::ValueType>(source, N - 1).GetWithDefault(root, defaultValue, a);\n}\n\ntemplate <typename T, typename 
CharType, size_t N>\ntypename T::ValueType& GetValueByPointerWithDefault(T& root, const CharType(&source)[N], const typename T::Ch* defaultValue, typename T::AllocatorType& a) {\n    return GenericPointer<typename T::ValueType>(source, N - 1).GetWithDefault(root, defaultValue, a);\n}\n\n#if RAPIDJSON_HAS_STDSTRING\ntemplate <typename T, typename CharType, size_t N>\ntypename T::ValueType& GetValueByPointerWithDefault(T& root, const CharType(&source)[N], const std::basic_string<typename T::Ch>& defaultValue, typename T::AllocatorType& a) {\n    return GenericPointer<typename T::ValueType>(source, N - 1).GetWithDefault(root, defaultValue, a);\n}\n#endif\n\ntemplate <typename T, typename CharType, size_t N, typename T2>\nRAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T2>, internal::IsGenericValue<T2> >), (typename T::ValueType&))\nGetValueByPointerWithDefault(T& root, const CharType(&source)[N], T2 defaultValue, typename T::AllocatorType& a) {\n    return GenericPointer<typename T::ValueType>(source, N - 1).GetWithDefault(root, defaultValue, a);\n}\n\n// No allocator parameter\n\ntemplate <typename DocumentType>\ntypename DocumentType::ValueType& GetValueByPointerWithDefault(DocumentType& document, const GenericPointer<typename DocumentType::ValueType>& pointer, const typename DocumentType::ValueType& defaultValue) {\n    return pointer.GetWithDefault(document, defaultValue);\n}\n\ntemplate <typename DocumentType>\ntypename DocumentType::ValueType& GetValueByPointerWithDefault(DocumentType& document, const GenericPointer<typename DocumentType::ValueType>& pointer, const typename DocumentType::Ch* defaultValue) {\n    return pointer.GetWithDefault(document, defaultValue);\n}\n\n#if RAPIDJSON_HAS_STDSTRING\ntemplate <typename DocumentType>\ntypename DocumentType::ValueType& GetValueByPointerWithDefault(DocumentType& document, const GenericPointer<typename DocumentType::ValueType>& pointer, const std::basic_string<typename DocumentType::Ch>& 
defaultValue) {\n    return pointer.GetWithDefault(document, defaultValue);\n}\n#endif\n\ntemplate <typename DocumentType, typename T2>\nRAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T2>, internal::IsGenericValue<T2> >), (typename DocumentType::ValueType&))\nGetValueByPointerWithDefault(DocumentType& document, const GenericPointer<typename DocumentType::ValueType>& pointer, T2 defaultValue) {\n    return pointer.GetWithDefault(document, defaultValue);\n}\n\ntemplate <typename DocumentType, typename CharType, size_t N>\ntypename DocumentType::ValueType& GetValueByPointerWithDefault(DocumentType& document, const CharType(&source)[N], const typename DocumentType::ValueType& defaultValue) {\n    return GenericPointer<typename DocumentType::ValueType>(source, N - 1).GetWithDefault(document, defaultValue);\n}\n\ntemplate <typename DocumentType, typename CharType, size_t N>\ntypename DocumentType::ValueType& GetValueByPointerWithDefault(DocumentType& document, const CharType(&source)[N], const typename DocumentType::Ch* defaultValue) {\n    return GenericPointer<typename DocumentType::ValueType>(source, N - 1).GetWithDefault(document, defaultValue);\n}\n\n#if RAPIDJSON_HAS_STDSTRING\ntemplate <typename DocumentType, typename CharType, size_t N>\ntypename DocumentType::ValueType& GetValueByPointerWithDefault(DocumentType& document, const CharType(&source)[N], const std::basic_string<typename DocumentType::Ch>& defaultValue) {\n    return GenericPointer<typename DocumentType::ValueType>(source, N - 1).GetWithDefault(document, defaultValue);\n}\n#endif\n\ntemplate <typename DocumentType, typename CharType, size_t N, typename T2>\nRAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T2>, internal::IsGenericValue<T2> >), (typename DocumentType::ValueType&))\nGetValueByPointerWithDefault(DocumentType& document, const CharType(&source)[N], T2 defaultValue) {\n    return GenericPointer<typename DocumentType::ValueType>(source, N - 
1).GetWithDefault(document, defaultValue);\n}\n\n//////////////////////////////////////////////////////////////////////////////\n\ntemplate <typename T>\ntypename T::ValueType& SetValueByPointer(T& root, const GenericPointer<typename T::ValueType>& pointer, typename T::ValueType& value, typename T::AllocatorType& a) {\n    return pointer.Set(root, value, a);\n}\n\ntemplate <typename T>\ntypename T::ValueType& SetValueByPointer(T& root, const GenericPointer<typename T::ValueType>& pointer, const typename T::ValueType& value, typename T::AllocatorType& a) {\n    return pointer.Set(root, value, a);\n}\n\ntemplate <typename T>\ntypename T::ValueType& SetValueByPointer(T& root, const GenericPointer<typename T::ValueType>& pointer, const typename T::Ch* value, typename T::AllocatorType& a) {\n    return pointer.Set(root, value, a);\n}\n\n#if RAPIDJSON_HAS_STDSTRING\ntemplate <typename T>\ntypename T::ValueType& SetValueByPointer(T& root, const GenericPointer<typename T::ValueType>& pointer, const std::basic_string<typename T::Ch>& value, typename T::AllocatorType& a) {\n    return pointer.Set(root, value, a);\n}\n#endif\n\ntemplate <typename T, typename T2>\nRAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T2>, internal::IsGenericValue<T2> >), (typename T::ValueType&))\nSetValueByPointer(T& root, const GenericPointer<typename T::ValueType>& pointer, T2 value, typename T::AllocatorType& a) {\n    return pointer.Set(root, value, a);\n}\n\ntemplate <typename T, typename CharType, size_t N>\ntypename T::ValueType& SetValueByPointer(T& root, const CharType(&source)[N], typename T::ValueType& value, typename T::AllocatorType& a) {\n    return GenericPointer<typename T::ValueType>(source, N - 1).Set(root, value, a);\n}\n\ntemplate <typename T, typename CharType, size_t N>\ntypename T::ValueType& SetValueByPointer(T& root, const CharType(&source)[N], const typename T::ValueType& value, typename T::AllocatorType& a) {\n    return GenericPointer<typename 
T::ValueType>(source, N - 1).Set(root, value, a);\n}\n\ntemplate <typename T, typename CharType, size_t N>\ntypename T::ValueType& SetValueByPointer(T& root, const CharType(&source)[N], const typename T::Ch* value, typename T::AllocatorType& a) {\n    return GenericPointer<typename T::ValueType>(source, N - 1).Set(root, value, a);\n}\n\n#if RAPIDJSON_HAS_STDSTRING\ntemplate <typename T, typename CharType, size_t N>\ntypename T::ValueType& SetValueByPointer(T& root, const CharType(&source)[N], const std::basic_string<typename T::Ch>& value, typename T::AllocatorType& a) {\n    return GenericPointer<typename T::ValueType>(source, N - 1).Set(root, value, a);\n}\n#endif\n\ntemplate <typename T, typename CharType, size_t N, typename T2>\nRAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T2>, internal::IsGenericValue<T2> >), (typename T::ValueType&))\nSetValueByPointer(T& root, const CharType(&source)[N], T2 value, typename T::AllocatorType& a) {\n    return GenericPointer<typename T::ValueType>(source, N - 1).Set(root, value, a);\n}\n\n// No allocator parameter\n\ntemplate <typename DocumentType>\ntypename DocumentType::ValueType& SetValueByPointer(DocumentType& document, const GenericPointer<typename DocumentType::ValueType>& pointer, typename DocumentType::ValueType& value) {\n    return pointer.Set(document, value);\n}\n\ntemplate <typename DocumentType>\ntypename DocumentType::ValueType& SetValueByPointer(DocumentType& document, const GenericPointer<typename DocumentType::ValueType>& pointer, const typename DocumentType::ValueType& value) {\n    return pointer.Set(document, value);\n}\n\ntemplate <typename DocumentType>\ntypename DocumentType::ValueType& SetValueByPointer(DocumentType& document, const GenericPointer<typename DocumentType::ValueType>& pointer, const typename DocumentType::Ch* value) {\n    return pointer.Set(document, value);\n}\n\n#if RAPIDJSON_HAS_STDSTRING\ntemplate <typename DocumentType>\ntypename DocumentType::ValueType& 
SetValueByPointer(DocumentType& document, const GenericPointer<typename DocumentType::ValueType>& pointer, const std::basic_string<typename DocumentType::Ch>& value) {\n    return pointer.Set(document, value);\n}\n#endif\n\ntemplate <typename DocumentType, typename T2>\nRAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T2>, internal::IsGenericValue<T2> >), (typename DocumentType::ValueType&))\nSetValueByPointer(DocumentType& document, const GenericPointer<typename DocumentType::ValueType>& pointer, T2 value) {\n    return pointer.Set(document, value);\n}\n\ntemplate <typename DocumentType, typename CharType, size_t N>\ntypename DocumentType::ValueType& SetValueByPointer(DocumentType& document, const CharType(&source)[N], typename DocumentType::ValueType& value) {\n    return GenericPointer<typename DocumentType::ValueType>(source, N - 1).Set(document, value);\n}\n\ntemplate <typename DocumentType, typename CharType, size_t N>\ntypename DocumentType::ValueType& SetValueByPointer(DocumentType& document, const CharType(&source)[N], const typename DocumentType::ValueType& value) {\n    return GenericPointer<typename DocumentType::ValueType>(source, N - 1).Set(document, value);\n}\n\ntemplate <typename DocumentType, typename CharType, size_t N>\ntypename DocumentType::ValueType& SetValueByPointer(DocumentType& document, const CharType(&source)[N], const typename DocumentType::Ch* value) {\n    return GenericPointer<typename DocumentType::ValueType>(source, N - 1).Set(document, value);\n}\n\n#if RAPIDJSON_HAS_STDSTRING\ntemplate <typename DocumentType, typename CharType, size_t N>\ntypename DocumentType::ValueType& SetValueByPointer(DocumentType& document, const CharType(&source)[N], const std::basic_string<typename DocumentType::Ch>& value) {\n    return GenericPointer<typename DocumentType::ValueType>(source, N - 1).Set(document, value);\n}\n#endif\n\ntemplate <typename DocumentType, typename CharType, size_t N, typename 
T2>\nRAPIDJSON_DISABLEIF_RETURN((internal::OrExpr<internal::IsPointer<T2>, internal::IsGenericValue<T2> >), (typename DocumentType::ValueType&))\nSetValueByPointer(DocumentType& document, const CharType(&source)[N], T2 value) {\n    return GenericPointer<typename DocumentType::ValueType>(source, N - 1).Set(document, value);\n}\n\n//////////////////////////////////////////////////////////////////////////////\n\ntemplate <typename T>\ntypename T::ValueType& SwapValueByPointer(T& root, const GenericPointer<typename T::ValueType>& pointer, typename T::ValueType& value, typename T::AllocatorType& a) {\n    return pointer.Swap(root, value, a);\n}\n\ntemplate <typename T, typename CharType, size_t N>\ntypename T::ValueType& SwapValueByPointer(T& root, const CharType(&source)[N], typename T::ValueType& value, typename T::AllocatorType& a) {\n    return GenericPointer<typename T::ValueType>(source, N - 1).Swap(root, value, a);\n}\n\ntemplate <typename DocumentType>\ntypename DocumentType::ValueType& SwapValueByPointer(DocumentType& document, const GenericPointer<typename DocumentType::ValueType>& pointer, typename DocumentType::ValueType& value) {\n    return pointer.Swap(document, value);\n}\n\ntemplate <typename DocumentType, typename CharType, size_t N>\ntypename DocumentType::ValueType& SwapValueByPointer(DocumentType& document, const CharType(&source)[N], typename DocumentType::ValueType& value) {\n    return GenericPointer<typename DocumentType::ValueType>(source, N - 1).Swap(document, value);\n}\n\n//////////////////////////////////////////////////////////////////////////////\n\ntemplate <typename T>\nbool EraseValueByPointer(T& root, const GenericPointer<typename T::ValueType>& pointer) {\n    return pointer.Erase(root);\n}\n\ntemplate <typename T, typename CharType, size_t N>\nbool EraseValueByPointer(T& root, const CharType(&source)[N]) {\n    return GenericPointer<typename T::ValueType>(source, N - 1).Erase(root);\n}\n\n//@}\n\nRAPIDJSON_NAMESPACE_END\n\n#if 
defined(__clang__) || defined(_MSC_VER)\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_POINTER_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/prettywriter.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_PRETTYWRITER_H_\n#define RAPIDJSON_PRETTYWRITER_H_\n\n#include \"writer.h\"\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++)\n#endif\n\n#if defined(__clang__)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(c++98-compat)\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n//! Combination of PrettyWriter format flags.\n/*! \\see PrettyWriter::SetFormatOptions\n */\nenum PrettyFormatOptions {\n    kFormatDefault = 0,         //!< Default pretty formatting.\n    kFormatSingleLineArray = 1  //!< Format arrays on a single line.\n};\n\n//! 
Writer with indentation and spacing.\n/*!\n    \\tparam OutputStream Type of output os.\n    \\tparam SourceEncoding Encoding of source string.\n    \\tparam TargetEncoding Encoding of output stream.\n    \\tparam StackAllocator Type of allocator for allocating memory of stack.\n*/\ntemplate<typename OutputStream, typename SourceEncoding = UTF8<>, typename TargetEncoding = UTF8<>, typename StackAllocator = CrtAllocator, unsigned writeFlags = kWriteDefaultFlags>\nclass PrettyWriter : public Writer<OutputStream, SourceEncoding, TargetEncoding, StackAllocator, writeFlags> {\npublic:\n    typedef Writer<OutputStream, SourceEncoding, TargetEncoding, StackAllocator, writeFlags> Base;\n    typedef typename Base::Ch Ch;\n\n    //! Constructor\n    /*! \\param os Output stream.\n        \\param allocator User supplied allocator. If it is null, it will create a private one.\n        \\param levelDepth Initial capacity of stack.\n    */\n    explicit PrettyWriter(OutputStream& os, StackAllocator* allocator = 0, size_t levelDepth = Base::kDefaultLevelDepth) : \n        Base(os, allocator, levelDepth), indentChar_(' '), indentCharCount_(4), formatOptions_(kFormatDefault) {}\n\n\n    explicit PrettyWriter(StackAllocator* allocator = 0, size_t levelDepth = Base::kDefaultLevelDepth) : \n        Base(allocator, levelDepth), indentChar_(' '), indentCharCount_(4), formatOptions_(kFormatDefault) {}\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    PrettyWriter(PrettyWriter&& rhs) :\n        Base(std::forward<PrettyWriter>(rhs)), indentChar_(rhs.indentChar_), indentCharCount_(rhs.indentCharCount_), formatOptions_(rhs.formatOptions_) {}\n#endif\n\n    //! Set custom indentation.\n    /*! \\param indentChar       Character for indentation. 
Must be whitespace character (' ', '\\\\t', '\\\\n', '\\\\r').\n        \\param indentCharCount  Number of indent characters for each indentation level.\n        \\note The default indentation is 4 spaces.\n    */\n    PrettyWriter& SetIndent(Ch indentChar, unsigned indentCharCount) {\n        RAPIDJSON_ASSERT(indentChar == ' ' || indentChar == '\\t' || indentChar == '\\n' || indentChar == '\\r');\n        indentChar_ = indentChar;\n        indentCharCount_ = indentCharCount;\n        return *this;\n    }\n\n    //! Set pretty writer formatting options.\n    /*! \\param options Formatting options.\n    */\n    PrettyWriter& SetFormatOptions(PrettyFormatOptions options) {\n        formatOptions_ = options;\n        return *this;\n    }\n\n    /*! @name Implementation of Handler\n        \\see Handler\n    */\n    //@{\n\n    bool Null()                 { PrettyPrefix(kNullType);   return Base::EndValue(Base::WriteNull()); }\n    bool Bool(bool b)           { PrettyPrefix(b ? kTrueType : kFalseType); return Base::EndValue(Base::WriteBool(b)); }\n    bool Int(int i)             { PrettyPrefix(kNumberType); return Base::EndValue(Base::WriteInt(i)); }\n    bool Uint(unsigned u)       { PrettyPrefix(kNumberType); return Base::EndValue(Base::WriteUint(u)); }\n    bool Int64(int64_t i64)     { PrettyPrefix(kNumberType); return Base::EndValue(Base::WriteInt64(i64)); }\n    bool Uint64(uint64_t u64)   { PrettyPrefix(kNumberType); return Base::EndValue(Base::WriteUint64(u64));  }\n    bool Double(double d)       { PrettyPrefix(kNumberType); return Base::EndValue(Base::WriteDouble(d)); }\n\n    bool RawNumber(const Ch* str, SizeType length, bool copy = false) {\n        RAPIDJSON_ASSERT(str != 0);\n        (void)copy;\n        PrettyPrefix(kNumberType);\n        return Base::EndValue(Base::WriteString(str, length));\n    }\n\n    bool String(const Ch* str, SizeType length, bool copy = false) {\n        RAPIDJSON_ASSERT(str != 0);\n        (void)copy;\n        
PrettyPrefix(kStringType);\n        return Base::EndValue(Base::WriteString(str, length));\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    bool String(const std::basic_string<Ch>& str) {\n        return String(str.data(), SizeType(str.size()));\n    }\n#endif\n\n    bool StartObject() {\n        PrettyPrefix(kObjectType);\n        new (Base::level_stack_.template Push<typename Base::Level>()) typename Base::Level(false);\n        return Base::WriteStartObject();\n    }\n\n    bool Key(const Ch* str, SizeType length, bool copy = false) { return String(str, length, copy); }\n\n#if RAPIDJSON_HAS_STDSTRING\n    bool Key(const std::basic_string<Ch>& str) {\n        return Key(str.data(), SizeType(str.size()));\n    }\n#endif\n\t\n    bool EndObject(SizeType memberCount = 0) {\n        (void)memberCount;\n        RAPIDJSON_ASSERT(Base::level_stack_.GetSize() >= sizeof(typename Base::Level)); // not inside an Object\n        RAPIDJSON_ASSERT(!Base::level_stack_.template Top<typename Base::Level>()->inArray); // currently inside an Array, not Object\n        RAPIDJSON_ASSERT(0 == Base::level_stack_.template Top<typename Base::Level>()->valueCount % 2); // Object has a Key without a Value\n       \n        bool empty = Base::level_stack_.template Pop<typename Base::Level>(1)->valueCount == 0;\n\n        if (!empty) {\n            Base::os_->Put('\\n');\n            WriteIndent();\n        }\n        bool ret = Base::EndValue(Base::WriteEndObject());\n        (void)ret;\n        RAPIDJSON_ASSERT(ret == true);\n        if (Base::level_stack_.Empty()) // end of json text\n            Base::Flush();\n        return true;\n    }\n\n    bool StartArray() {\n        PrettyPrefix(kArrayType);\n        new (Base::level_stack_.template Push<typename Base::Level>()) typename Base::Level(true);\n        return Base::WriteStartArray();\n    }\n\n    bool EndArray(SizeType memberCount = 0) {\n        (void)memberCount;\n        RAPIDJSON_ASSERT(Base::level_stack_.GetSize() >= sizeof(typename 
Base::Level));\n        RAPIDJSON_ASSERT(Base::level_stack_.template Top<typename Base::Level>()->inArray);\n        bool empty = Base::level_stack_.template Pop<typename Base::Level>(1)->valueCount == 0;\n\n        if (!empty && !(formatOptions_ & kFormatSingleLineArray)) {\n            Base::os_->Put('\\n');\n            WriteIndent();\n        }\n        bool ret = Base::EndValue(Base::WriteEndArray());\n        (void)ret;\n        RAPIDJSON_ASSERT(ret == true);\n        if (Base::level_stack_.Empty()) // end of json text\n            Base::Flush();\n        return true;\n    }\n\n    //@}\n\n    /*! @name Convenience extensions */\n    //@{\n\n    //! Simpler but slower overload.\n    bool String(const Ch* str) { return String(str, internal::StrLen(str)); }\n    bool Key(const Ch* str) { return Key(str, internal::StrLen(str)); }\n\n    //@}\n\n    //! Write a raw JSON value.\n    /*!\n        For user to write a stringified JSON as a value.\n\n        \\param json A well-formed JSON value. 
It should not contain null character within [0, length - 1] range.\n        \\param length Length of the json.\n        \\param type Type of the root of json.\n        \\note When using PrettyWriter::RawValue(), the result json may not be indented correctly.\n    */\n    bool RawValue(const Ch* json, size_t length, Type type) {\n        RAPIDJSON_ASSERT(json != 0);\n        PrettyPrefix(type);\n        return Base::EndValue(Base::WriteRawValue(json, length));\n    }\n\nprotected:\n    void PrettyPrefix(Type type) {\n        (void)type;\n        if (Base::level_stack_.GetSize() != 0) { // this value is not at root\n            typename Base::Level* level = Base::level_stack_.template Top<typename Base::Level>();\n\n            if (level->inArray) {\n                if (level->valueCount > 0) {\n                    Base::os_->Put(','); // add comma if it is not the first element in array\n                    if (formatOptions_ & kFormatSingleLineArray)\n                        Base::os_->Put(' ');\n                }\n\n                if (!(formatOptions_ & kFormatSingleLineArray)) {\n                    Base::os_->Put('\\n');\n                    WriteIndent();\n                }\n            }\n            else {  // in object\n                if (level->valueCount > 0) {\n                    if (level->valueCount % 2 == 0) {\n                        Base::os_->Put(',');\n                        Base::os_->Put('\\n');\n                    }\n                    else {\n                        Base::os_->Put(':');\n                        Base::os_->Put(' ');\n                    }\n                }\n                else\n                    Base::os_->Put('\\n');\n\n                if (level->valueCount % 2 == 0)\n                    WriteIndent();\n            }\n            if (!level->inArray && level->valueCount % 2 == 0)\n                RAPIDJSON_ASSERT(type == kStringType);  // if it's in object, then even number should be a name\n            
level->valueCount++;\n        }\n        else {\n            RAPIDJSON_ASSERT(!Base::hasRoot_);  // Should have one and only one root.\n            Base::hasRoot_ = true;\n        }\n    }\n\n    void WriteIndent()  {\n        size_t count = (Base::level_stack_.GetSize() / sizeof(typename Base::Level)) * indentCharCount_;\n        PutN(*Base::os_, static_cast<typename OutputStream::Ch>(indentChar_), count);\n    }\n\n    Ch indentChar_;\n    unsigned indentCharCount_;\n    PrettyFormatOptions formatOptions_;\n\nprivate:\n    // Prohibit copy constructor & assignment operator.\n    PrettyWriter(const PrettyWriter&);\n    PrettyWriter& operator=(const PrettyWriter&);\n};\n\nRAPIDJSON_NAMESPACE_END\n\n#if defined(__clang__)\nRAPIDJSON_DIAG_POP\n#endif\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_PRETTYWRITER_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/rapidjson.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_RAPIDJSON_H_\n#define RAPIDJSON_RAPIDJSON_H_\n\n/*!\\file rapidjson.h\n    \\brief common definitions and configuration\n\n    \\see RAPIDJSON_CONFIG\n */\n\n/*! \\defgroup RAPIDJSON_CONFIG RapidJSON configuration\n    \\brief Configuration macros for library features\n\n    Some RapidJSON features are configurable to adapt the library to a wide\n    variety of platforms, environments and usage scenarios.  
Most of the\n    features can be configured in terms of overridden or predefined\n    preprocessor macros at compile-time.\n\n    Some additional customization is available in the \\ref RAPIDJSON_ERRORS APIs.\n\n    \\note These macros should be given on the compiler command-line\n          (where applicable)  to avoid inconsistent values when compiling\n          different translation units of a single application.\n */\n\n#include <cstdlib>  // malloc(), realloc(), free(), size_t\n#include <cstring>  // memset(), memcpy(), memmove(), memcmp()\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_VERSION_STRING\n//\n// ALWAYS synchronize the following 3 macros with corresponding variables in /CMakeLists.txt.\n//\n\n//!@cond RAPIDJSON_HIDDEN_FROM_DOXYGEN\n// token stringification\n#define RAPIDJSON_STRINGIFY(x) RAPIDJSON_DO_STRINGIFY(x)\n#define RAPIDJSON_DO_STRINGIFY(x) #x\n\n// token concatenation\n#define RAPIDJSON_JOIN(X, Y) RAPIDJSON_DO_JOIN(X, Y)\n#define RAPIDJSON_DO_JOIN(X, Y) RAPIDJSON_DO_JOIN2(X, Y)\n#define RAPIDJSON_DO_JOIN2(X, Y) X##Y\n//!@endcond\n\n/*! \\def RAPIDJSON_MAJOR_VERSION\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief Major version of RapidJSON in integer.\n*/\n/*! \\def RAPIDJSON_MINOR_VERSION\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief Minor version of RapidJSON in integer.\n*/\n/*! \\def RAPIDJSON_PATCH_VERSION\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief Patch version of RapidJSON in integer.\n*/\n/*! 
\\def RAPIDJSON_VERSION_STRING\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief Version of RapidJSON in \"<major>.<minor>.<patch>\" string format.\n*/\n#define RAPIDJSON_MAJOR_VERSION 1\n#define RAPIDJSON_MINOR_VERSION 1\n#define RAPIDJSON_PATCH_VERSION 0\n#define RAPIDJSON_VERSION_STRING \\\n    RAPIDJSON_STRINGIFY(RAPIDJSON_MAJOR_VERSION.RAPIDJSON_MINOR_VERSION.RAPIDJSON_PATCH_VERSION)\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_NAMESPACE_(BEGIN|END)\n/*! \\def RAPIDJSON_NAMESPACE\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief   provide custom rapidjson namespace\n\n    In order to avoid symbol clashes and/or \"One Definition Rule\" errors\n    between multiple inclusions of (different versions of) RapidJSON in\n    a single binary, users can customize the name of the main RapidJSON\n    namespace.\n\n    In case of a single nesting level, defining \\c RAPIDJSON_NAMESPACE\n    to a custom name (e.g. \\c MyRapidJSON) is sufficient.  If multiple\n    levels are needed, both \\ref RAPIDJSON_NAMESPACE_BEGIN and \\ref\n    RAPIDJSON_NAMESPACE_END need to be defined as well:\n\n    \\code\n    // in some .cpp file\n    #define RAPIDJSON_NAMESPACE my::rapidjson\n    #define RAPIDJSON_NAMESPACE_BEGIN namespace my { namespace rapidjson {\n    #define RAPIDJSON_NAMESPACE_END   } }\n    #include \"rapidjson/...\"\n    \\endcode\n\n    \\see rapidjson\n */\n/*! \\def RAPIDJSON_NAMESPACE_BEGIN\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief   provide custom rapidjson namespace (opening expression)\n    \\see RAPIDJSON_NAMESPACE\n*/\n/*! 
\\def RAPIDJSON_NAMESPACE_END\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief   provide custom rapidjson namespace (closing expression)\n    \\see RAPIDJSON_NAMESPACE\n*/\n#ifndef RAPIDJSON_NAMESPACE\n#define RAPIDJSON_NAMESPACE rapidjson\n#endif\n#ifndef RAPIDJSON_NAMESPACE_BEGIN\n#define RAPIDJSON_NAMESPACE_BEGIN namespace RAPIDJSON_NAMESPACE {\n#endif\n#ifndef RAPIDJSON_NAMESPACE_END\n#define RAPIDJSON_NAMESPACE_END }\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// __cplusplus macro\n\n//!@cond RAPIDJSON_HIDDEN_FROM_DOXYGEN\n\n#if defined(_MSC_VER)\n#define RAPIDJSON_CPLUSPLUS _MSVC_LANG\n#else\n#define RAPIDJSON_CPLUSPLUS __cplusplus\n#endif\n\n//!@endcond\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_HAS_STDSTRING\n\n#ifndef RAPIDJSON_HAS_STDSTRING\n#ifdef RAPIDJSON_DOXYGEN_RUNNING\n#define RAPIDJSON_HAS_STDSTRING 1 // force generation of documentation\n#else\n#define RAPIDJSON_HAS_STDSTRING 0 // no std::string support by default\n#endif\n/*! \\def RAPIDJSON_HAS_STDSTRING\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief Enable RapidJSON support for \\c std::string\n\n    By defining this preprocessor symbol to \\c 1, several convenience functions for using\n    \\ref rapidjson::GenericValue with \\c std::string are enabled, especially\n    for construction and comparison.\n\n    \\hideinitializer\n*/\n#endif // !defined(RAPIDJSON_HAS_STDSTRING)\n\n#if RAPIDJSON_HAS_STDSTRING\n#include <string>\n#endif // RAPIDJSON_HAS_STDSTRING\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_USE_MEMBERSMAP\n\n/*! 
\\def RAPIDJSON_USE_MEMBERSMAP\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief Enable RapidJSON support for object members handling in a \\c std::multimap\n\n    By defining this preprocessor symbol to \\c 1, \\ref rapidjson::GenericValue object\n    members are stored in a \\c std::multimap for faster lookup and deletion times, a\n    trade off with a slightly slower insertion time and a small object allocat(or)ed\n    memory overhead.\n\n    \\hideinitializer\n*/\n#ifndef RAPIDJSON_USE_MEMBERSMAP\n#define RAPIDJSON_USE_MEMBERSMAP 0 // not by default\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_NO_INT64DEFINE\n\n/*! \\def RAPIDJSON_NO_INT64DEFINE\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief Use external 64-bit integer types.\n\n    RapidJSON requires the 64-bit integer types \\c int64_t and  \\c uint64_t types\n    to be available at global scope.\n\n    If users have their own definition, define RAPIDJSON_NO_INT64DEFINE to\n    prevent RapidJSON from defining its own types.\n*/\n#ifndef RAPIDJSON_NO_INT64DEFINE\n//!@cond RAPIDJSON_HIDDEN_FROM_DOXYGEN\n#if defined(_MSC_VER) && (_MSC_VER < 1800) // Visual Studio 2013\n#include \"msinttypes/stdint.h\"\n#include \"msinttypes/inttypes.h\"\n#else\n// Other compilers should have this.\n#include <stdint.h>\n#include <inttypes.h>\n#endif\n//!@endcond\n#ifdef RAPIDJSON_DOXYGEN_RUNNING\n#define RAPIDJSON_NO_INT64DEFINE\n#endif\n#endif // RAPIDJSON_NO_INT64TYPEDEF\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_FORCEINLINE\n\n#ifndef RAPIDJSON_FORCEINLINE\n//!@cond RAPIDJSON_HIDDEN_FROM_DOXYGEN\n#if defined(_MSC_VER) && defined(NDEBUG)\n#define RAPIDJSON_FORCEINLINE __forceinline\n#elif defined(__GNUC__) && __GNUC__ >= 4 && defined(NDEBUG)\n#define RAPIDJSON_FORCEINLINE __attribute__((always_inline))\n#else\n#define RAPIDJSON_FORCEINLINE\n#endif\n//!@endcond\n#endif // 
RAPIDJSON_FORCEINLINE\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_ENDIAN\n#define RAPIDJSON_LITTLEENDIAN  0   //!< Little endian machine\n#define RAPIDJSON_BIGENDIAN     1   //!< Big endian machine\n\n//! Endianness of the machine.\n/*!\n    \\def RAPIDJSON_ENDIAN\n    \\ingroup RAPIDJSON_CONFIG\n\n    GCC 4.6 provided macro for detecting endianness of the target machine. But other\n    compilers may not have this. User can define RAPIDJSON_ENDIAN to either\n    \\ref RAPIDJSON_LITTLEENDIAN or \\ref RAPIDJSON_BIGENDIAN.\n\n    Default detection implemented with reference to\n    \\li https://gcc.gnu.org/onlinedocs/gcc-4.6.0/cpp/Common-Predefined-Macros.html\n    \\li http://www.boost.org/doc/libs/1_42_0/boost/detail/endian.hpp\n*/\n#ifndef RAPIDJSON_ENDIAN\n// Detect with GCC 4.6's macro\n#  ifdef __BYTE_ORDER__\n#    if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__\n#      define RAPIDJSON_ENDIAN RAPIDJSON_LITTLEENDIAN\n#    elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__\n#      define RAPIDJSON_ENDIAN RAPIDJSON_BIGENDIAN\n#    else\n#      error Unknown machine endianness detected. User needs to define RAPIDJSON_ENDIAN.\n#    endif // __BYTE_ORDER__\n// Detect with GLIBC's endian.h\n#  elif defined(__GLIBC__)\n#    include <endian.h>\n#    if (__BYTE_ORDER == __LITTLE_ENDIAN)\n#      define RAPIDJSON_ENDIAN RAPIDJSON_LITTLEENDIAN\n#    elif (__BYTE_ORDER == __BIG_ENDIAN)\n#      define RAPIDJSON_ENDIAN RAPIDJSON_BIGENDIAN\n#    else\n#      error Unknown machine endianness detected. 
User needs to define RAPIDJSON_ENDIAN.\n#   endif // __GLIBC__\n// Detect with _LITTLE_ENDIAN and _BIG_ENDIAN macro\n#  elif defined(_LITTLE_ENDIAN) && !defined(_BIG_ENDIAN)\n#    define RAPIDJSON_ENDIAN RAPIDJSON_LITTLEENDIAN\n#  elif defined(_BIG_ENDIAN) && !defined(_LITTLE_ENDIAN)\n#    define RAPIDJSON_ENDIAN RAPIDJSON_BIGENDIAN\n// Detect with architecture macros\n#  elif defined(__sparc) || defined(__sparc__) || defined(_POWER) || defined(__powerpc__) || defined(__ppc__) || defined(__ppc64__) || defined(__hpux) || defined(__hppa) || defined(_MIPSEB) || defined(_POWER) || defined(__s390__)\n#    define RAPIDJSON_ENDIAN RAPIDJSON_BIGENDIAN\n#  elif defined(__i386__) || defined(__alpha__) || defined(__ia64) || defined(__ia64__) || defined(_M_IX86) || defined(_M_IA64) || defined(_M_ALPHA) || defined(__amd64) || defined(__amd64__) || defined(_M_AMD64) || defined(__x86_64) || defined(__x86_64__) || defined(_M_X64) || defined(__bfin__)\n#    define RAPIDJSON_ENDIAN RAPIDJSON_LITTLEENDIAN\n#  elif defined(_MSC_VER) && (defined(_M_ARM) || defined(_M_ARM64))\n#    define RAPIDJSON_ENDIAN RAPIDJSON_LITTLEENDIAN\n#  elif defined(RAPIDJSON_DOXYGEN_RUNNING)\n#    define RAPIDJSON_ENDIAN\n#  else\n#    error Unknown machine endianness detected. User needs to define RAPIDJSON_ENDIAN.\n#  endif\n#endif // RAPIDJSON_ENDIAN\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_64BIT\n\n//! Whether using 64-bit architecture\n#ifndef RAPIDJSON_64BIT\n#if defined(__LP64__) || (defined(__x86_64__) && defined(__ILP32__)) || defined(_WIN64) || defined(__EMSCRIPTEN__)\n#define RAPIDJSON_64BIT 1\n#else\n#define RAPIDJSON_64BIT 0\n#endif\n#endif // RAPIDJSON_64BIT\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_ALIGN\n\n//! Data alignment of the machine.\n/*! \\ingroup RAPIDJSON_CONFIG\n    \\param x pointer to align\n\n    Some machines require strict data alignment. 
The default is 8 bytes.\n    User can customize by defining the RAPIDJSON_ALIGN function macro.\n*/\n#ifndef RAPIDJSON_ALIGN\n#define RAPIDJSON_ALIGN(x) (((x) + static_cast<size_t>(7u)) & ~static_cast<size_t>(7u))\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_UINT64_C2\n\n//! Construct a 64-bit literal by a pair of 32-bit integer.\n/*!\n    64-bit literal with or without ULL suffix is prone to compiler warnings.\n    UINT64_C() is C macro which cause compilation problems.\n    Use this macro to define 64-bit constants by a pair of 32-bit integer.\n*/\n#ifndef RAPIDJSON_UINT64_C2\n#define RAPIDJSON_UINT64_C2(high32, low32) ((static_cast<uint64_t>(high32) << 32) | static_cast<uint64_t>(low32))\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_48BITPOINTER_OPTIMIZATION\n\n//! Use only lower 48-bit address for some pointers.\n/*!\n    \\ingroup RAPIDJSON_CONFIG\n\n    This optimization uses the fact that current X86-64 architecture only implement lower 48-bit virtual address.\n    The higher 16-bit can be used for storing other data.\n    \\c GenericValue uses this optimization to reduce its size from 24 bytes to 16 bytes in 64-bit architecture.\n*/\n#ifndef RAPIDJSON_48BITPOINTER_OPTIMIZATION\n#if defined(__amd64__) || defined(__amd64) || defined(__x86_64__) || defined(__x86_64) || defined(_M_X64) || defined(_M_AMD64)\n#define RAPIDJSON_48BITPOINTER_OPTIMIZATION 1\n#else\n#define RAPIDJSON_48BITPOINTER_OPTIMIZATION 0\n#endif\n#endif // RAPIDJSON_48BITPOINTER_OPTIMIZATION\n\n#if RAPIDJSON_48BITPOINTER_OPTIMIZATION == 1\n#if RAPIDJSON_64BIT != 1\n#error RAPIDJSON_48BITPOINTER_OPTIMIZATION can only be set to 1 when RAPIDJSON_64BIT=1\n#endif\n#define RAPIDJSON_SETPOINTER(type, p, x) (p = reinterpret_cast<type *>((reinterpret_cast<uintptr_t>(p) & static_cast<uintptr_t>(RAPIDJSON_UINT64_C2(0xFFFF0000, 0x00000000))) | 
reinterpret_cast<uintptr_t>(reinterpret_cast<const void*>(x))))\n#define RAPIDJSON_GETPOINTER(type, p) (reinterpret_cast<type *>(reinterpret_cast<uintptr_t>(p) & static_cast<uintptr_t>(RAPIDJSON_UINT64_C2(0x0000FFFF, 0xFFFFFFFF))))\n#else\n#define RAPIDJSON_SETPOINTER(type, p, x) (p = (x))\n#define RAPIDJSON_GETPOINTER(type, p) (p)\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_SSE2/RAPIDJSON_SSE42/RAPIDJSON_NEON/RAPIDJSON_SIMD\n\n/*! \\def RAPIDJSON_SIMD\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief Enable SSE2/SSE4.2/Neon optimization.\n\n    RapidJSON supports optimized implementations for some parsing operations\n    based on the SSE2, SSE4.2 or Neon SIMD extensions on modern Intel\n    or ARM compatible processors.\n\n    To enable these optimizations, three different symbols can be defined;\n    \\code\n    // Enable SSE2 optimization.\n    #define RAPIDJSON_SSE2\n\n    // Enable SSE4.2 optimization.\n    #define RAPIDJSON_SSE42\n    \\endcode\n\n    \\code\n    // Enable ARM Neon optimization.\n    #define RAPIDJSON_NEON\n    \\endcode\n\n    \\c RAPIDJSON_SSE42 takes precedence over SSE2, if both are defined.\n\n    If any of these symbols is defined, RapidJSON defines the macro\n    \\c RAPIDJSON_SIMD to indicate the availability of the optimized code.\n*/\n#if defined(RAPIDJSON_SSE2) || defined(RAPIDJSON_SSE42) \\\n    || defined(RAPIDJSON_NEON) || defined(RAPIDJSON_DOXYGEN_RUNNING)\n#define RAPIDJSON_SIMD\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_NO_SIZETYPEDEFINE\n\n#ifndef RAPIDJSON_NO_SIZETYPEDEFINE\n/*! 
\\def RAPIDJSON_NO_SIZETYPEDEFINE\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief User-provided \\c SizeType definition.\n\n    In order to avoid using 32-bit size types for indexing strings and arrays,\n    define this preprocessor symbol and provide the type rapidjson::SizeType\n    before including RapidJSON:\n    \\code\n    #define RAPIDJSON_NO_SIZETYPEDEFINE\n    namespace rapidjson { typedef ::std::size_t SizeType; }\n    #include \"rapidjson/...\"\n    \\endcode\n\n    \\see rapidjson::SizeType\n*/\n#ifdef RAPIDJSON_DOXYGEN_RUNNING\n#define RAPIDJSON_NO_SIZETYPEDEFINE\n#endif\nRAPIDJSON_NAMESPACE_BEGIN\n//! Size type (for string lengths, array sizes, etc.)\n/*! RapidJSON uses 32-bit array/string indices even on 64-bit platforms,\n    instead of using \\c size_t. Users may override the SizeType by defining\n    \\ref RAPIDJSON_NO_SIZETYPEDEFINE.\n*/\ntypedef unsigned SizeType;\nRAPIDJSON_NAMESPACE_END\n#endif\n\n// always import std::size_t to rapidjson namespace\nRAPIDJSON_NAMESPACE_BEGIN\nusing std::size_t;\nRAPIDJSON_NAMESPACE_END\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_ASSERT\n\n//! Assertion.\n/*! 
\\ingroup RAPIDJSON_CONFIG\n    By default, rapidjson uses C \\c assert() for internal assertions.\n    User can override it by defining RAPIDJSON_ASSERT(x) macro.\n\n    \\note Parsing errors are handled and can be customized by the\n          \\ref RAPIDJSON_ERRORS APIs.\n*/\n#ifndef RAPIDJSON_ASSERT\n#include <cassert>\n#define RAPIDJSON_ASSERT(x) assert(x)\n#endif // RAPIDJSON_ASSERT\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_STATIC_ASSERT\n\n// Prefer C++11 static_assert, if available\n#ifndef RAPIDJSON_STATIC_ASSERT\n#if RAPIDJSON_CPLUSPLUS >= 201103L || ( defined(_MSC_VER) && _MSC_VER >= 1800 )\n#define RAPIDJSON_STATIC_ASSERT(x) \\\n   static_assert(x, RAPIDJSON_STRINGIFY(x))\n#endif // C++11\n#endif // RAPIDJSON_STATIC_ASSERT\n\n// Adopt C++03 implementation from boost\n#ifndef RAPIDJSON_STATIC_ASSERT\n#ifndef __clang__\n//!@cond RAPIDJSON_HIDDEN_FROM_DOXYGEN\n#endif\nRAPIDJSON_NAMESPACE_BEGIN\ntemplate <bool x> struct STATIC_ASSERTION_FAILURE;\ntemplate <> struct STATIC_ASSERTION_FAILURE<true> { enum { value = 1 }; };\ntemplate <size_t x> struct StaticAssertTest {};\nRAPIDJSON_NAMESPACE_END\n\n#if defined(__GNUC__) || defined(__clang__)\n#define RAPIDJSON_STATIC_ASSERT_UNUSED_ATTRIBUTE __attribute__((unused))\n#else\n#define RAPIDJSON_STATIC_ASSERT_UNUSED_ATTRIBUTE \n#endif\n#ifndef __clang__\n//!@endcond\n#endif\n\n/*! 
\\def RAPIDJSON_STATIC_ASSERT\n    \\brief (Internal) macro to check for conditions at compile-time\n    \\param x compile-time condition\n    \\hideinitializer\n */\n#define RAPIDJSON_STATIC_ASSERT(x) \\\n    typedef ::RAPIDJSON_NAMESPACE::StaticAssertTest< \\\n      sizeof(::RAPIDJSON_NAMESPACE::STATIC_ASSERTION_FAILURE<bool(x) >)> \\\n    RAPIDJSON_JOIN(StaticAssertTypedef, __LINE__) RAPIDJSON_STATIC_ASSERT_UNUSED_ATTRIBUTE\n#endif // RAPIDJSON_STATIC_ASSERT\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_LIKELY, RAPIDJSON_UNLIKELY\n\n//! Compiler branching hint for expression with high probability to be true.\n/*!\n    \\ingroup RAPIDJSON_CONFIG\n    \\param x Boolean expression likely to be true.\n*/\n#ifndef RAPIDJSON_LIKELY\n#if defined(__GNUC__) || defined(__clang__)\n#define RAPIDJSON_LIKELY(x) __builtin_expect(!!(x), 1)\n#else\n#define RAPIDJSON_LIKELY(x) (x)\n#endif\n#endif\n\n//! Compiler branching hint for expression with low probability to be true.\n/*!\n    \\ingroup RAPIDJSON_CONFIG\n    \\param x Boolean expression unlikely to be true.\n*/\n#ifndef RAPIDJSON_UNLIKELY\n#if defined(__GNUC__) || defined(__clang__)\n#define RAPIDJSON_UNLIKELY(x) __builtin_expect(!!(x), 0)\n#else\n#define RAPIDJSON_UNLIKELY(x) (x)\n#endif\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// Helpers\n\n//!@cond RAPIDJSON_HIDDEN_FROM_DOXYGEN\n\n#define RAPIDJSON_MULTILINEMACRO_BEGIN do {\n#define RAPIDJSON_MULTILINEMACRO_END \\\n} while((void)0, 0)\n\n// adopted from Boost\n#define RAPIDJSON_VERSION_CODE(x,y,z) \\\n  (((x)*100000) + ((y)*100) + (z))\n\n#if defined(__has_builtin)\n#define RAPIDJSON_HAS_BUILTIN(x) __has_builtin(x)\n#else\n#define RAPIDJSON_HAS_BUILTIN(x) 0\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_DIAG_PUSH/POP, RAPIDJSON_DIAG_OFF\n\n#if defined(__GNUC__)\n#define RAPIDJSON_GNUC \\\n    
RAPIDJSON_VERSION_CODE(__GNUC__,__GNUC_MINOR__,__GNUC_PATCHLEVEL__)\n#endif\n\n#if defined(__clang__) || (defined(RAPIDJSON_GNUC) && RAPIDJSON_GNUC >= RAPIDJSON_VERSION_CODE(4,2,0))\n\n#define RAPIDJSON_PRAGMA(x) _Pragma(RAPIDJSON_STRINGIFY(x))\n#define RAPIDJSON_DIAG_PRAGMA(x) RAPIDJSON_PRAGMA(GCC diagnostic x)\n#define RAPIDJSON_DIAG_OFF(x) \\\n    RAPIDJSON_DIAG_PRAGMA(ignored RAPIDJSON_STRINGIFY(RAPIDJSON_JOIN(-W,x)))\n\n// push/pop support in Clang and GCC>=4.6\n#if defined(__clang__) || (defined(RAPIDJSON_GNUC) && RAPIDJSON_GNUC >= RAPIDJSON_VERSION_CODE(4,6,0))\n#define RAPIDJSON_DIAG_PUSH RAPIDJSON_DIAG_PRAGMA(push)\n#define RAPIDJSON_DIAG_POP  RAPIDJSON_DIAG_PRAGMA(pop)\n#else // GCC >= 4.2, < 4.6\n#define RAPIDJSON_DIAG_PUSH /* ignored */\n#define RAPIDJSON_DIAG_POP /* ignored */\n#endif\n\n#elif defined(_MSC_VER)\n\n// pragma (MSVC specific)\n#define RAPIDJSON_PRAGMA(x) __pragma(x)\n#define RAPIDJSON_DIAG_PRAGMA(x) RAPIDJSON_PRAGMA(warning(x))\n\n#define RAPIDJSON_DIAG_OFF(x) RAPIDJSON_DIAG_PRAGMA(disable: x)\n#define RAPIDJSON_DIAG_PUSH RAPIDJSON_DIAG_PRAGMA(push)\n#define RAPIDJSON_DIAG_POP  RAPIDJSON_DIAG_PRAGMA(pop)\n\n#else\n\n#define RAPIDJSON_DIAG_OFF(x) /* ignored */\n#define RAPIDJSON_DIAG_PUSH   /* ignored */\n#define RAPIDJSON_DIAG_POP    /* ignored */\n\n#endif // RAPIDJSON_DIAG_*\n\n///////////////////////////////////////////////////////////////////////////////\n// C++11 features\n\n#ifndef RAPIDJSON_HAS_CXX11\n#define RAPIDJSON_HAS_CXX11 (RAPIDJSON_CPLUSPLUS >= 201103L)\n#endif\n\n#ifndef RAPIDJSON_HAS_CXX11_RVALUE_REFS\n#if RAPIDJSON_HAS_CXX11\n#define RAPIDJSON_HAS_CXX11_RVALUE_REFS 1\n#elif defined(__clang__)\n#if __has_feature(cxx_rvalue_references) && \\\n    (defined(_MSC_VER) || defined(_LIBCPP_VERSION) || defined(__GLIBCXX__) && __GLIBCXX__ >= 20080306)\n#define RAPIDJSON_HAS_CXX11_RVALUE_REFS 1\n#else\n#define RAPIDJSON_HAS_CXX11_RVALUE_REFS 0\n#endif\n#elif (defined(RAPIDJSON_GNUC) && (RAPIDJSON_GNUC >= 
RAPIDJSON_VERSION_CODE(4,3,0)) && defined(__GXX_EXPERIMENTAL_CXX0X__)) || \\\n      (defined(_MSC_VER) && _MSC_VER >= 1600) || \\\n      (defined(__SUNPRO_CC) && __SUNPRO_CC >= 0x5140 && defined(__GXX_EXPERIMENTAL_CXX0X__))\n\n#define RAPIDJSON_HAS_CXX11_RVALUE_REFS 1\n#else\n#define RAPIDJSON_HAS_CXX11_RVALUE_REFS 0\n#endif\n#endif // RAPIDJSON_HAS_CXX11_RVALUE_REFS\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n#include <utility> // std::move\n#endif\n\n#ifndef RAPIDJSON_HAS_CXX11_NOEXCEPT\n#if RAPIDJSON_HAS_CXX11\n#define RAPIDJSON_HAS_CXX11_NOEXCEPT 1\n#elif defined(__clang__)\n#define RAPIDJSON_HAS_CXX11_NOEXCEPT __has_feature(cxx_noexcept)\n#elif (defined(RAPIDJSON_GNUC) && (RAPIDJSON_GNUC >= RAPIDJSON_VERSION_CODE(4,6,0)) && defined(__GXX_EXPERIMENTAL_CXX0X__)) || \\\n    (defined(_MSC_VER) && _MSC_VER >= 1900) || \\\n    (defined(__SUNPRO_CC) && __SUNPRO_CC >= 0x5140 && defined(__GXX_EXPERIMENTAL_CXX0X__))\n#define RAPIDJSON_HAS_CXX11_NOEXCEPT 1\n#else\n#define RAPIDJSON_HAS_CXX11_NOEXCEPT 0\n#endif\n#endif\n#ifndef RAPIDJSON_NOEXCEPT\n#if RAPIDJSON_HAS_CXX11_NOEXCEPT\n#define RAPIDJSON_NOEXCEPT noexcept\n#else\n#define RAPIDJSON_NOEXCEPT throw()\n#endif // RAPIDJSON_HAS_CXX11_NOEXCEPT\n#endif\n\n// no automatic detection, yet\n#ifndef RAPIDJSON_HAS_CXX11_TYPETRAITS\n#if (defined(_MSC_VER) && _MSC_VER >= 1700)\n#define RAPIDJSON_HAS_CXX11_TYPETRAITS 1\n#else\n#define RAPIDJSON_HAS_CXX11_TYPETRAITS 0\n#endif\n#endif\n\n#ifndef RAPIDJSON_HAS_CXX11_RANGE_FOR\n#if defined(__clang__)\n#define RAPIDJSON_HAS_CXX11_RANGE_FOR __has_feature(cxx_range_for)\n#elif (defined(RAPIDJSON_GNUC) && (RAPIDJSON_GNUC >= RAPIDJSON_VERSION_CODE(4,6,0)) && defined(__GXX_EXPERIMENTAL_CXX0X__)) || \\\n      (defined(_MSC_VER) && _MSC_VER >= 1700) || \\\n      (defined(__SUNPRO_CC) && __SUNPRO_CC >= 0x5140 && defined(__GXX_EXPERIMENTAL_CXX0X__))\n#define RAPIDJSON_HAS_CXX11_RANGE_FOR 1\n#else\n#define RAPIDJSON_HAS_CXX11_RANGE_FOR 0\n#endif\n#endif // 
RAPIDJSON_HAS_CXX11_RANGE_FOR\n\n///////////////////////////////////////////////////////////////////////////////\n// C++17 features\n\n#ifndef RAPIDJSON_HAS_CXX17\n#define RAPIDJSON_HAS_CXX17 (RAPIDJSON_CPLUSPLUS >= 201703L)\n#endif\n\n#if RAPIDJSON_HAS_CXX17\n# define RAPIDJSON_DELIBERATE_FALLTHROUGH [[fallthrough]]\n#elif defined(__has_cpp_attribute)\n# if __has_cpp_attribute(clang::fallthrough)\n#  define RAPIDJSON_DELIBERATE_FALLTHROUGH [[clang::fallthrough]]\n# elif __has_cpp_attribute(fallthrough)\n#  define RAPIDJSON_DELIBERATE_FALLTHROUGH __attribute__((fallthrough))\n# else\n#  define RAPIDJSON_DELIBERATE_FALLTHROUGH\n# endif\n#else\n# define RAPIDJSON_DELIBERATE_FALLTHROUGH\n#endif\n\n//!@endcond\n\n//! Assertion (in non-throwing contexts).\n /*! \\ingroup RAPIDJSON_CONFIG\n    Some functions provide a \\c noexcept guarantee, if the compiler supports it.\n    In these cases, the \\ref RAPIDJSON_ASSERT macro cannot be overridden to\n    throw an exception.  This macro adds a separate customization point for\n    such cases.\n\n    Defaults to C \\c assert() (as \\ref RAPIDJSON_ASSERT), if \\c noexcept is\n    supported, and to \\ref RAPIDJSON_ASSERT otherwise.\n */\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_NOEXCEPT_ASSERT\n\n#ifndef RAPIDJSON_NOEXCEPT_ASSERT\n#ifdef RAPIDJSON_ASSERT_THROWS\n#include <cassert>\n#define RAPIDJSON_NOEXCEPT_ASSERT(x) assert(x)\n#else\n#define RAPIDJSON_NOEXCEPT_ASSERT(x) RAPIDJSON_ASSERT(x)\n#endif // RAPIDJSON_ASSERT_THROWS\n#endif // RAPIDJSON_NOEXCEPT_ASSERT\n\n///////////////////////////////////////////////////////////////////////////////\n// malloc/realloc/free\n\n#ifndef RAPIDJSON_MALLOC\n///! customization point for global \\c malloc\n#define RAPIDJSON_MALLOC(size) std::malloc(size)\n#endif\n#ifndef RAPIDJSON_REALLOC\n///! 
customization point for global \\c realloc\n#define RAPIDJSON_REALLOC(ptr, new_size) std::realloc(ptr, new_size)\n#endif\n#ifndef RAPIDJSON_FREE\n///! customization point for global \\c free\n#define RAPIDJSON_FREE(ptr) std::free(ptr)\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// new/delete\n\n#ifndef RAPIDJSON_NEW\n///! customization point for global \\c new\n#define RAPIDJSON_NEW(TypeName) new TypeName\n#endif\n#ifndef RAPIDJSON_DELETE\n///! customization point for global \\c delete\n#define RAPIDJSON_DELETE(x) delete x\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// Type\n\n/*! \\namespace rapidjson\n    \\brief main RapidJSON namespace\n    \\see RAPIDJSON_NAMESPACE\n*/\nRAPIDJSON_NAMESPACE_BEGIN\n\n//! Type of JSON value\nenum Type {\n    kNullType = 0,      //!< null\n    kFalseType = 1,     //!< false\n    kTrueType = 2,      //!< true\n    kObjectType = 3,    //!< object\n    kArrayType = 4,     //!< array\n    kStringType = 5,    //!< string\n    kNumberType = 6     //!< number\n};\n\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_RAPIDJSON_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/reader.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_READER_H_\n#define RAPIDJSON_READER_H_\n\n/*! \\file reader.h */\n\n#include \"allocators.h\"\n#include \"stream.h\"\n#include \"encodedstream.h\"\n#include \"internal/clzll.h\"\n#include \"internal/meta.h\"\n#include \"internal/stack.h\"\n#include \"internal/strtod.h\"\n#include <limits>\n\n#if defined(RAPIDJSON_SIMD) && defined(_MSC_VER)\n#include <intrin.h>\n#pragma intrinsic(_BitScanForward)\n#endif\n#ifdef RAPIDJSON_SSE42\n#include <nmmintrin.h>\n#elif defined(RAPIDJSON_SSE2)\n#include <emmintrin.h>\n#elif defined(RAPIDJSON_NEON)\n#include <arm_neon.h>\n#endif\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(old-style-cast)\nRAPIDJSON_DIAG_OFF(padded)\nRAPIDJSON_DIAG_OFF(switch-enum)\n#elif defined(_MSC_VER)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(4127)  // conditional expression is constant\nRAPIDJSON_DIAG_OFF(4702)  // unreachable code\n#endif\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++)\n#endif\n\n//!@cond RAPIDJSON_HIDDEN_FROM_DOXYGEN\n#define RAPIDJSON_NOTHING /* deliberately empty */\n#ifndef RAPIDJSON_PARSE_ERROR_EARLY_RETURN\n#define RAPIDJSON_PARSE_ERROR_EARLY_RETURN(value) \\\n    RAPIDJSON_MULTILINEMACRO_BEGIN \\\n    if (RAPIDJSON_UNLIKELY(HasParseError())) { return value; } \\\n    
RAPIDJSON_MULTILINEMACRO_END\n#endif\n#define RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID \\\n    RAPIDJSON_PARSE_ERROR_EARLY_RETURN(RAPIDJSON_NOTHING)\n//!@endcond\n\n/*! \\def RAPIDJSON_PARSE_ERROR_NORETURN\n    \\ingroup RAPIDJSON_ERRORS\n    \\brief Macro to indicate a parse error.\n    \\param parseErrorCode \\ref rapidjson::ParseErrorCode of the error\n    \\param offset  position of the error in JSON input (\\c size_t)\n\n    This macros can be used as a customization point for the internal\n    error handling mechanism of RapidJSON.\n\n    A common usage model is to throw an exception instead of requiring the\n    caller to explicitly check the \\ref rapidjson::GenericReader::Parse's\n    return value:\n\n    \\code\n    #define RAPIDJSON_PARSE_ERROR_NORETURN(parseErrorCode,offset) \\\n       throw ParseException(parseErrorCode, #parseErrorCode, offset)\n\n    #include <stdexcept>               // std::runtime_error\n    #include \"rapidjson/error/error.h\" // rapidjson::ParseResult\n\n    struct ParseException : std::runtime_error, rapidjson::ParseResult {\n      ParseException(rapidjson::ParseErrorCode code, const char* msg, size_t offset)\n        : std::runtime_error(msg), ParseResult(code, offset) {}\n    };\n\n    #include \"rapidjson/reader.h\"\n    \\endcode\n\n    \\see RAPIDJSON_PARSE_ERROR, rapidjson::GenericReader::Parse\n */\n#ifndef RAPIDJSON_PARSE_ERROR_NORETURN\n#define RAPIDJSON_PARSE_ERROR_NORETURN(parseErrorCode, offset) \\\n    RAPIDJSON_MULTILINEMACRO_BEGIN \\\n    RAPIDJSON_ASSERT(!HasParseError()); /* Error can only be assigned once */ \\\n    SetParseError(parseErrorCode, offset); \\\n    RAPIDJSON_MULTILINEMACRO_END\n#endif\n\n/*! 
\\def RAPIDJSON_PARSE_ERROR\n    \\ingroup RAPIDJSON_ERRORS\n    \\brief (Internal) macro to indicate and handle a parse error.\n    \\param parseErrorCode \\ref rapidjson::ParseErrorCode of the error\n    \\param offset  position of the error in JSON input (\\c size_t)\n\n    Invokes RAPIDJSON_PARSE_ERROR_NORETURN and stops the parsing.\n\n    \\see RAPIDJSON_PARSE_ERROR_NORETURN\n    \\hideinitializer\n */\n#ifndef RAPIDJSON_PARSE_ERROR\n#define RAPIDJSON_PARSE_ERROR(parseErrorCode, offset) \\\n    RAPIDJSON_MULTILINEMACRO_BEGIN \\\n    RAPIDJSON_PARSE_ERROR_NORETURN(parseErrorCode, offset); \\\n    RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID; \\\n    RAPIDJSON_MULTILINEMACRO_END\n#endif\n\n#include \"error/error.h\" // ParseErrorCode, ParseResult\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n///////////////////////////////////////////////////////////////////////////////\n// ParseFlag\n\n/*! \\def RAPIDJSON_PARSE_DEFAULT_FLAGS\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief User-defined kParseDefaultFlags definition.\n\n    User can define this as any \\c ParseFlag combinations.\n*/\n#ifndef RAPIDJSON_PARSE_DEFAULT_FLAGS\n#define RAPIDJSON_PARSE_DEFAULT_FLAGS kParseNoFlags\n#endif\n\n//! Combination of parseFlags\n/*! \\see Reader::Parse, Document::Parse, Document::ParseInsitu, Document::ParseStream\n */\nenum ParseFlag {\n    kParseNoFlags = 0,              //!< No flags are set.\n    kParseInsituFlag = 1,           //!< In-situ(destructive) parsing.\n    kParseValidateEncodingFlag = 2, //!< Validate encoding of JSON strings.\n    kParseIterativeFlag = 4,        //!< Iterative(constant complexity in terms of function call stack size) parsing.\n    kParseStopWhenDoneFlag = 8,     //!< After parsing a complete JSON root from stream, stop further processing the rest of stream. 
When this flag is used, parser will not generate kParseErrorDocumentRootNotSingular error.\n    kParseFullPrecisionFlag = 16,   //!< Parse number in full precision (but slower).\n    kParseCommentsFlag = 32,        //!< Allow one-line (//) and multi-line (/**/) comments.\n    kParseNumbersAsStringsFlag = 64,    //!< Parse all numbers (ints/doubles) as strings.\n    kParseTrailingCommasFlag = 128, //!< Allow trailing commas at the end of objects and arrays.\n    kParseNanAndInfFlag = 256,      //!< Allow parsing NaN, Inf, Infinity, -Inf and -Infinity as doubles.\n    kParseEscapedApostropheFlag = 512,  //!< Allow escaped apostrophe in strings.\n    kParseDefaultFlags = RAPIDJSON_PARSE_DEFAULT_FLAGS  //!< Default parse flags. Can be customized by defining RAPIDJSON_PARSE_DEFAULT_FLAGS\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// Handler\n\n/*! \\class rapidjson::Handler\n    \\brief Concept for receiving events from GenericReader upon parsing.\n    The functions return true if no error occurs. If they return false,\n    the event publisher should terminate the process.\n\\code\nconcept Handler {\n    typename Ch;\n\n    bool Null();\n    bool Bool(bool b);\n    bool Int(int i);\n    bool Uint(unsigned i);\n    bool Int64(int64_t i);\n    bool Uint64(uint64_t i);\n    bool Double(double d);\n    /// enabled via kParseNumbersAsStringsFlag, string is not null-terminated (use length)\n    bool RawNumber(const Ch* str, SizeType length, bool copy);\n    bool String(const Ch* str, SizeType length, bool copy);\n    bool StartObject();\n    bool Key(const Ch* str, SizeType length, bool copy);\n    bool EndObject(SizeType memberCount);\n    bool StartArray();\n    bool EndArray(SizeType elementCount);\n};\n\\endcode\n*/\n///////////////////////////////////////////////////////////////////////////////\n// BaseReaderHandler\n\n//! Default implementation of Handler.\n/*! 
This can be used as base class of any reader handler.\n    \\note implements Handler concept\n*/\ntemplate<typename Encoding = UTF8<>, typename Derived = void>\nstruct BaseReaderHandler {\n    typedef typename Encoding::Ch Ch;\n\n    typedef typename internal::SelectIf<internal::IsSame<Derived, void>, BaseReaderHandler, Derived>::Type Override;\n\n    bool Default() { return true; }\n    bool Null() { return static_cast<Override&>(*this).Default(); }\n    bool Bool(bool) { return static_cast<Override&>(*this).Default(); }\n    bool Int(int) { return static_cast<Override&>(*this).Default(); }\n    bool Uint(unsigned) { return static_cast<Override&>(*this).Default(); }\n    bool Int64(int64_t) { return static_cast<Override&>(*this).Default(); }\n    bool Uint64(uint64_t) { return static_cast<Override&>(*this).Default(); }\n    bool Double(double) { return static_cast<Override&>(*this).Default(); }\n    /// enabled via kParseNumbersAsStringsFlag, string is not null-terminated (use length)\n    bool RawNumber(const Ch* str, SizeType len, bool copy) { return static_cast<Override&>(*this).String(str, len, copy); }\n    bool String(const Ch*, SizeType, bool) { return static_cast<Override&>(*this).Default(); }\n    bool StartObject() { return static_cast<Override&>(*this).Default(); }\n    bool Key(const Ch* str, SizeType len, bool copy) { return static_cast<Override&>(*this).String(str, len, copy); }\n    bool EndObject(SizeType) { return static_cast<Override&>(*this).Default(); }\n    bool StartArray() { return static_cast<Override&>(*this).Default(); }\n    bool EndArray(SizeType) { return static_cast<Override&>(*this).Default(); }\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// StreamLocalCopy\n\nnamespace internal {\n\ntemplate<typename Stream, int = StreamTraits<Stream>::copyOptimization>\nclass StreamLocalCopy;\n\n//! 
Do copy optimization.\ntemplate<typename Stream>\nclass StreamLocalCopy<Stream, 1> {\npublic:\n    StreamLocalCopy(Stream& original) : s(original), original_(original) {}\n    ~StreamLocalCopy() { original_ = s; }\n\n    Stream s;\n\nprivate:\n    StreamLocalCopy& operator=(const StreamLocalCopy&) /* = delete */;\n\n    Stream& original_;\n};\n\n//! Keep reference.\ntemplate<typename Stream>\nclass StreamLocalCopy<Stream, 0> {\npublic:\n    StreamLocalCopy(Stream& original) : s(original) {}\n\n    Stream& s;\n\nprivate:\n    StreamLocalCopy& operator=(const StreamLocalCopy&) /* = delete */;\n};\n\n} // namespace internal\n\n///////////////////////////////////////////////////////////////////////////////\n// SkipWhitespace\n\n//! Skip the JSON white spaces in a stream.\n/*! \\param is A input stream for skipping white spaces.\n    \\note This function has SSE2/SSE4.2 specialization.\n*/\ntemplate<typename InputStream>\nvoid SkipWhitespace(InputStream& is) {\n    internal::StreamLocalCopy<InputStream> copy(is);\n    InputStream& s(copy.s);\n\n    typename InputStream::Ch c;\n    while ((c = s.Peek()) == ' ' || c == '\\n' || c == '\\r' || c == '\\t')\n        s.Take();\n}\n\ninline const char* SkipWhitespace(const char* p, const char* end) {\n    while (p != end && (*p == ' ' || *p == '\\n' || *p == '\\r' || *p == '\\t'))\n        ++p;\n    return p;\n}\n\n#ifdef RAPIDJSON_SSE42\n//! 
Skip whitespace with SSE 4.2 pcmpistrm instruction, testing 16 8-byte characters at once.\ninline const char *SkipWhitespace_SIMD(const char* p) {\n    // Fast return for single non-whitespace\n    if (*p == ' ' || *p == '\\n' || *p == '\\r' || *p == '\\t')\n        ++p;\n    else\n        return p;\n\n    // 16-byte align to the next boundary\n    const char* nextAligned = reinterpret_cast<const char*>((reinterpret_cast<size_t>(p) + 15) & static_cast<size_t>(~15));\n    while (p != nextAligned)\n        if (*p == ' ' || *p == '\\n' || *p == '\\r' || *p == '\\t')\n            ++p;\n        else\n            return p;\n\n    // The rest of string using SIMD\n    static const char whitespace[16] = \" \\n\\r\\t\";\n    const __m128i w = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&whitespace[0]));\n\n    for (;; p += 16) {\n        const __m128i s = _mm_load_si128(reinterpret_cast<const __m128i *>(p));\n        const int r = _mm_cmpistri(w, s, _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT | _SIDD_NEGATIVE_POLARITY);\n        if (r != 16)    // some of characters is non-whitespace\n            return p + r;\n    }\n}\n\ninline const char *SkipWhitespace_SIMD(const char* p, const char* end) {\n    // Fast return for single non-whitespace\n    if (p != end && (*p == ' ' || *p == '\\n' || *p == '\\r' || *p == '\\t'))\n        ++p;\n    else\n        return p;\n\n    // The middle of string using SIMD\n    static const char whitespace[16] = \" \\n\\r\\t\";\n    const __m128i w = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&whitespace[0]));\n\n    for (; p <= end - 16; p += 16) {\n        const __m128i s = _mm_loadu_si128(reinterpret_cast<const __m128i *>(p));\n        const int r = _mm_cmpistri(w, s, _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT | _SIDD_NEGATIVE_POLARITY);\n        if (r != 16)    // some of characters is non-whitespace\n            return p + r;\n    }\n\n    return SkipWhitespace(p, end);\n}\n\n#elif 
defined(RAPIDJSON_SSE2)\n\n//! Skip whitespace with SSE2 instructions, testing 16 8-byte characters at once.\ninline const char *SkipWhitespace_SIMD(const char* p) {\n    // Fast return for single non-whitespace\n    if (*p == ' ' || *p == '\\n' || *p == '\\r' || *p == '\\t')\n        ++p;\n    else\n        return p;\n\n    // 16-byte align to the next boundary\n    const char* nextAligned = reinterpret_cast<const char*>((reinterpret_cast<size_t>(p) + 15) & static_cast<size_t>(~15));\n    while (p != nextAligned)\n        if (*p == ' ' || *p == '\\n' || *p == '\\r' || *p == '\\t')\n            ++p;\n        else\n            return p;\n\n    // The rest of string\n    #define C16(c) { c, c, c, c, c, c, c, c, c, c, c, c, c, c, c, c }\n    static const char whitespaces[4][16] = { C16(' '), C16('\\n'), C16('\\r'), C16('\\t') };\n    #undef C16\n\n    const __m128i w0 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&whitespaces[0][0]));\n    const __m128i w1 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&whitespaces[1][0]));\n    const __m128i w2 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&whitespaces[2][0]));\n    const __m128i w3 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&whitespaces[3][0]));\n\n    for (;; p += 16) {\n        const __m128i s = _mm_load_si128(reinterpret_cast<const __m128i *>(p));\n        __m128i x = _mm_cmpeq_epi8(s, w0);\n        x = _mm_or_si128(x, _mm_cmpeq_epi8(s, w1));\n        x = _mm_or_si128(x, _mm_cmpeq_epi8(s, w2));\n        x = _mm_or_si128(x, _mm_cmpeq_epi8(s, w3));\n        unsigned short r = static_cast<unsigned short>(~_mm_movemask_epi8(x));\n        if (r != 0) {   // some of characters may be non-whitespace\n#ifdef _MSC_VER         // Find the index of first non-whitespace\n            unsigned long offset;\n            _BitScanForward(&offset, r);\n            return p + offset;\n#else\n            return p + __builtin_ffs(r) - 1;\n#endif\n        }\n    }\n}\n\ninline const char 
*SkipWhitespace_SIMD(const char* p, const char* end) {\n    // Fast return for single non-whitespace\n    if (p != end && (*p == ' ' || *p == '\\n' || *p == '\\r' || *p == '\\t'))\n        ++p;\n    else\n        return p;\n\n    // The rest of string\n    #define C16(c) { c, c, c, c, c, c, c, c, c, c, c, c, c, c, c, c }\n    static const char whitespaces[4][16] = { C16(' '), C16('\\n'), C16('\\r'), C16('\\t') };\n    #undef C16\n\n    const __m128i w0 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&whitespaces[0][0]));\n    const __m128i w1 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&whitespaces[1][0]));\n    const __m128i w2 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&whitespaces[2][0]));\n    const __m128i w3 = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&whitespaces[3][0]));\n\n    for (; p <= end - 16; p += 16) {\n        const __m128i s = _mm_loadu_si128(reinterpret_cast<const __m128i *>(p));\n        __m128i x = _mm_cmpeq_epi8(s, w0);\n        x = _mm_or_si128(x, _mm_cmpeq_epi8(s, w1));\n        x = _mm_or_si128(x, _mm_cmpeq_epi8(s, w2));\n        x = _mm_or_si128(x, _mm_cmpeq_epi8(s, w3));\n        unsigned short r = static_cast<unsigned short>(~_mm_movemask_epi8(x));\n        if (r != 0) {   // some of characters may be non-whitespace\n#ifdef _MSC_VER         // Find the index of first non-whitespace\n            unsigned long offset;\n            _BitScanForward(&offset, r);\n            return p + offset;\n#else\n            return p + __builtin_ffs(r) - 1;\n#endif\n        }\n    }\n\n    return SkipWhitespace(p, end);\n}\n\n#elif defined(RAPIDJSON_NEON)\n\n//! 
Skip whitespace with ARM Neon instructions, testing 16 8-byte characters at once.\ninline const char *SkipWhitespace_SIMD(const char* p) {\n    // Fast return for single non-whitespace\n    if (*p == ' ' || *p == '\\n' || *p == '\\r' || *p == '\\t')\n        ++p;\n    else\n        return p;\n\n    // 16-byte align to the next boundary\n    const char* nextAligned = reinterpret_cast<const char*>((reinterpret_cast<size_t>(p) + 15) & static_cast<size_t>(~15));\n    while (p != nextAligned)\n        if (*p == ' ' || *p == '\\n' || *p == '\\r' || *p == '\\t')\n            ++p;\n        else\n            return p;\n\n    const uint8x16_t w0 = vmovq_n_u8(' ');\n    const uint8x16_t w1 = vmovq_n_u8('\\n');\n    const uint8x16_t w2 = vmovq_n_u8('\\r');\n    const uint8x16_t w3 = vmovq_n_u8('\\t');\n\n    for (;; p += 16) {\n        const uint8x16_t s = vld1q_u8(reinterpret_cast<const uint8_t *>(p));\n        uint8x16_t x = vceqq_u8(s, w0);\n        x = vorrq_u8(x, vceqq_u8(s, w1));\n        x = vorrq_u8(x, vceqq_u8(s, w2));\n        x = vorrq_u8(x, vceqq_u8(s, w3));\n\n        x = vmvnq_u8(x);                       // Negate\n        x = vrev64q_u8(x);                     // Rev in 64\n        uint64_t low = vgetq_lane_u64(vreinterpretq_u64_u8(x), 0);   // extract\n        uint64_t high = vgetq_lane_u64(vreinterpretq_u64_u8(x), 1);  // extract\n\n        if (low == 0) {\n            if (high != 0) {\n                uint32_t lz = internal::clzll(high);\n                return p + 8 + (lz >> 3);\n            }\n        } else {\n            uint32_t lz = internal::clzll(low);\n            return p + (lz >> 3);\n        }\n    }\n}\n\ninline const char *SkipWhitespace_SIMD(const char* p, const char* end) {\n    // Fast return for single non-whitespace\n    if (p != end && (*p == ' ' || *p == '\\n' || *p == '\\r' || *p == '\\t'))\n        ++p;\n    else\n        return p;\n\n    const uint8x16_t w0 = vmovq_n_u8(' ');\n    const uint8x16_t w1 = vmovq_n_u8('\\n');\n    const 
uint8x16_t w2 = vmovq_n_u8('\\r');\n    const uint8x16_t w3 = vmovq_n_u8('\\t');\n\n    for (; p <= end - 16; p += 16) {\n        const uint8x16_t s = vld1q_u8(reinterpret_cast<const uint8_t *>(p));\n        uint8x16_t x = vceqq_u8(s, w0);\n        x = vorrq_u8(x, vceqq_u8(s, w1));\n        x = vorrq_u8(x, vceqq_u8(s, w2));\n        x = vorrq_u8(x, vceqq_u8(s, w3));\n\n        x = vmvnq_u8(x);                       // Negate\n        x = vrev64q_u8(x);                     // Rev in 64\n        uint64_t low = vgetq_lane_u64(vreinterpretq_u64_u8(x), 0);   // extract\n        uint64_t high = vgetq_lane_u64(vreinterpretq_u64_u8(x), 1);  // extract\n\n        if (low == 0) {\n            if (high != 0) {\n                uint32_t lz = internal::clzll(high);\n                return p + 8 + (lz >> 3);\n            }\n        } else {\n            uint32_t lz = internal::clzll(low);\n            return p + (lz >> 3);\n        }\n    }\n\n    return SkipWhitespace(p, end);\n}\n\n#endif // RAPIDJSON_NEON\n\n#ifdef RAPIDJSON_SIMD\n//! Template function specialization for InsituStringStream\ntemplate<> inline void SkipWhitespace(InsituStringStream& is) {\n    is.src_ = const_cast<char*>(SkipWhitespace_SIMD(is.src_));\n}\n\n//! Template function specialization for StringStream\ntemplate<> inline void SkipWhitespace(StringStream& is) {\n    is.src_ = SkipWhitespace_SIMD(is.src_);\n}\n\ntemplate<> inline void SkipWhitespace(EncodedInputStream<UTF8<>, MemoryStream>& is) {\n    is.is_.src_ = SkipWhitespace_SIMD(is.is_.src_, is.is_.end_);\n}\n#endif // RAPIDJSON_SIMD\n\n///////////////////////////////////////////////////////////////////////////////\n// GenericReader\n\n//! SAX-style JSON parser. Use \\ref Reader for UTF8 encoding and default allocator.\n/*! 
GenericReader parses JSON text from a stream, and send events synchronously to an\n    object implementing Handler concept.\n\n    It needs to allocate a stack for storing a single decoded string during\n    non-destructive parsing.\n\n    For in-situ parsing, the decoded string is directly written to the source\n    text string, no temporary buffer is required.\n\n    A GenericReader object can be reused for parsing multiple JSON text.\n\n    \\tparam SourceEncoding Encoding of the input stream.\n    \\tparam TargetEncoding Encoding of the parse output.\n    \\tparam StackAllocator Allocator type for stack.\n*/\ntemplate <typename SourceEncoding, typename TargetEncoding, typename StackAllocator = CrtAllocator>\nclass GenericReader {\npublic:\n    typedef typename SourceEncoding::Ch Ch; //!< SourceEncoding character type\n\n    //! Constructor.\n    /*! \\param stackAllocator Optional allocator for allocating stack memory. (Only use for non-destructive parsing)\n        \\param stackCapacity stack capacity in bytes for storing a single decoded string.  (Only use for non-destructive parsing)\n    */\n    GenericReader(StackAllocator* stackAllocator = 0, size_t stackCapacity = kDefaultStackCapacity) :\n        stack_(stackAllocator, stackCapacity), parseResult_(), state_(IterativeParsingStartState) {}\n\n    //! Parse JSON text.\n    /*! 
\\tparam parseFlags Combination of \\ref ParseFlag.\n        \\tparam InputStream Type of input stream, implementing Stream concept.\n        \\tparam Handler Type of handler, implementing Handler concept.\n        \\param is Input stream to be parsed.\n        \\param handler The handler to receive events.\n        \\return Whether the parsing is successful.\n    */\n    template <unsigned parseFlags, typename InputStream, typename Handler>\n    ParseResult Parse(InputStream& is, Handler& handler) {\n        if (parseFlags & kParseIterativeFlag)\n            return IterativeParse<parseFlags>(is, handler);\n\n        parseResult_.Clear();\n\n        ClearStackOnExit scope(*this);\n\n        SkipWhitespaceAndComments<parseFlags>(is);\n        RAPIDJSON_PARSE_ERROR_EARLY_RETURN(parseResult_);\n\n        if (RAPIDJSON_UNLIKELY(is.Peek() == '\\0')) {\n            RAPIDJSON_PARSE_ERROR_NORETURN(kParseErrorDocumentEmpty, is.Tell());\n            RAPIDJSON_PARSE_ERROR_EARLY_RETURN(parseResult_);\n        }\n        else {\n            ParseValue<parseFlags>(is, handler);\n            RAPIDJSON_PARSE_ERROR_EARLY_RETURN(parseResult_);\n\n            if (!(parseFlags & kParseStopWhenDoneFlag)) {\n                SkipWhitespaceAndComments<parseFlags>(is);\n                RAPIDJSON_PARSE_ERROR_EARLY_RETURN(parseResult_);\n\n                if (RAPIDJSON_UNLIKELY(is.Peek() != '\\0')) {\n                    RAPIDJSON_PARSE_ERROR_NORETURN(kParseErrorDocumentRootNotSingular, is.Tell());\n                    RAPIDJSON_PARSE_ERROR_EARLY_RETURN(parseResult_);\n                }\n            }\n        }\n\n        return parseResult_;\n    }\n\n    //! Parse JSON text (with \\ref kParseDefaultFlags)\n    /*! 
\\tparam InputStream Type of input stream, implementing Stream concept\n        \\tparam Handler Type of handler, implementing Handler concept.\n        \\param is Input stream to be parsed.\n        \\param handler The handler to receive events.\n        \\return Whether the parsing is successful.\n    */\n    template <typename InputStream, typename Handler>\n    ParseResult Parse(InputStream& is, Handler& handler) {\n        return Parse<kParseDefaultFlags>(is, handler);\n    }\n\n    //! Initialize JSON text token-by-token parsing\n    /*!\n     */\n    void IterativeParseInit() {\n        parseResult_.Clear();\n        state_ = IterativeParsingStartState;\n    }\n\n    //! Parse one token from JSON text\n    /*! \\tparam InputStream Type of input stream, implementing Stream concept\n        \\tparam Handler Type of handler, implementing Handler concept.\n        \\param is Input stream to be parsed.\n        \\param handler The handler to receive events.\n        \\return Whether the parsing is successful.\n     */\n    template <unsigned parseFlags, typename InputStream, typename Handler>\n    bool IterativeParseNext(InputStream& is, Handler& handler) {\n        while (RAPIDJSON_LIKELY(is.Peek() != '\\0')) {\n            SkipWhitespaceAndComments<parseFlags>(is);\n\n            Token t = Tokenize(is.Peek());\n            IterativeParsingState n = Predict(state_, t);\n            IterativeParsingState d = Transit<parseFlags>(state_, t, n, is, handler);\n\n            // If we've finished or hit an error...\n            if (RAPIDJSON_UNLIKELY(IsIterativeParsingCompleteState(d))) {\n                // Report errors.\n                if (d == IterativeParsingErrorState) {\n                    HandleError(state_, is);\n                    return false;\n                }\n\n                // Transition to the finish state.\n                RAPIDJSON_ASSERT(d == IterativeParsingFinishState);\n                state_ = d;\n\n                // If StopWhenDone is not 
set...\n                if (!(parseFlags & kParseStopWhenDoneFlag)) {\n                    // ... and extra non-whitespace data is found...\n                    SkipWhitespaceAndComments<parseFlags>(is);\n                    if (is.Peek() != '\\0') {\n                        // ... this is considered an error.\n                        HandleError(state_, is);\n                        return false;\n                    }\n                }\n\n                // Success! We are done!\n                return true;\n            }\n\n            // Transition to the new state.\n            state_ = d;\n\n            // If we parsed anything other than a delimiter, we invoked the handler, so we can return true now.\n            if (!IsIterativeParsingDelimiterState(n))\n                return true;\n        }\n\n        // We reached the end of file.\n        stack_.Clear();\n\n        if (state_ != IterativeParsingFinishState) {\n            HandleError(state_, is);\n            return false;\n        }\n\n        return true;\n    }\n\n    //! Check if token-by-token parsing JSON text is complete\n    /*! \\return Whether the JSON has been fully decoded.\n     */\n    RAPIDJSON_FORCEINLINE bool IterativeParseComplete() const {\n        return IsIterativeParsingCompleteState(state_);\n    }\n\n    //! Whether a parse error has occurred in the last parsing.\n    bool HasParseError() const { return parseResult_.IsError(); }\n\n    //! Get the \\ref ParseErrorCode of last parsing.\n    ParseErrorCode GetParseErrorCode() const { return parseResult_.Code(); }\n\n    //! 
Get the position of last parsing error in input, 0 otherwise.\n    size_t GetErrorOffset() const { return parseResult_.Offset(); }\n\nprotected:\n    void SetParseError(ParseErrorCode code, size_t offset) { parseResult_.Set(code, offset); }\n\nprivate:\n    // Prohibit copy constructor & assignment operator.\n    GenericReader(const GenericReader&);\n    GenericReader& operator=(const GenericReader&);\n\n    void ClearStack() { stack_.Clear(); }\n\n    // clear stack on any exit from ParseStream, e.g. due to exception\n    struct ClearStackOnExit {\n        explicit ClearStackOnExit(GenericReader& r) : r_(r) {}\n        ~ClearStackOnExit() { r_.ClearStack(); }\n    private:\n        GenericReader& r_;\n        ClearStackOnExit(const ClearStackOnExit&);\n        ClearStackOnExit& operator=(const ClearStackOnExit&);\n    };\n\n    template<unsigned parseFlags, typename InputStream>\n    void SkipWhitespaceAndComments(InputStream& is) {\n        SkipWhitespace(is);\n\n        if (parseFlags & kParseCommentsFlag) {\n            while (RAPIDJSON_UNLIKELY(Consume(is, '/'))) {\n                if (Consume(is, '*')) {\n                    while (true) {\n                        if (RAPIDJSON_UNLIKELY(is.Peek() == '\\0'))\n                            RAPIDJSON_PARSE_ERROR(kParseErrorUnspecificSyntaxError, is.Tell());\n                        else if (Consume(is, '*')) {\n                            if (Consume(is, '/'))\n                                break;\n                        }\n                        else\n                            is.Take();\n                    }\n                }\n                else if (RAPIDJSON_LIKELY(Consume(is, '/')))\n                    while (is.Peek() != '\\0' && is.Take() != '\\n') {}\n                else\n                    RAPIDJSON_PARSE_ERROR(kParseErrorUnspecificSyntaxError, is.Tell());\n\n                SkipWhitespace(is);\n            }\n        }\n    }\n\n    // Parse object: { string : value, ... 
}\n    template<unsigned parseFlags, typename InputStream, typename Handler>\n    void ParseObject(InputStream& is, Handler& handler) {\n        RAPIDJSON_ASSERT(is.Peek() == '{');\n        is.Take();  // Skip '{'\n\n        if (RAPIDJSON_UNLIKELY(!handler.StartObject()))\n            RAPIDJSON_PARSE_ERROR(kParseErrorTermination, is.Tell());\n\n        SkipWhitespaceAndComments<parseFlags>(is);\n        RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n\n        if (Consume(is, '}')) {\n            if (RAPIDJSON_UNLIKELY(!handler.EndObject(0)))  // empty object\n                RAPIDJSON_PARSE_ERROR(kParseErrorTermination, is.Tell());\n            return;\n        }\n\n        for (SizeType memberCount = 0;;) {\n            if (RAPIDJSON_UNLIKELY(is.Peek() != '\"'))\n                RAPIDJSON_PARSE_ERROR(kParseErrorObjectMissName, is.Tell());\n\n            ParseString<parseFlags>(is, handler, true);\n            RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n\n            SkipWhitespaceAndComments<parseFlags>(is);\n            RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n\n            if (RAPIDJSON_UNLIKELY(!Consume(is, ':')))\n                RAPIDJSON_PARSE_ERROR(kParseErrorObjectMissColon, is.Tell());\n\n            SkipWhitespaceAndComments<parseFlags>(is);\n            RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n\n            ParseValue<parseFlags>(is, handler);\n            RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n\n            SkipWhitespaceAndComments<parseFlags>(is);\n            RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n\n            ++memberCount;\n\n            switch (is.Peek()) {\n                case ',':\n                    is.Take();\n                    SkipWhitespaceAndComments<parseFlags>(is);\n                    RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n                    break;\n                case '}':\n                    is.Take();\n                    if (RAPIDJSON_UNLIKELY(!handler.EndObject(memberCount)))\n                        
RAPIDJSON_PARSE_ERROR(kParseErrorTermination, is.Tell());\n                    return;\n                default:\n                    RAPIDJSON_PARSE_ERROR(kParseErrorObjectMissCommaOrCurlyBracket, is.Tell()); break; // This useless break is only for making warning and coverage happy\n            }\n\n            if (parseFlags & kParseTrailingCommasFlag) {\n                if (is.Peek() == '}') {\n                    if (RAPIDJSON_UNLIKELY(!handler.EndObject(memberCount)))\n                        RAPIDJSON_PARSE_ERROR(kParseErrorTermination, is.Tell());\n                    is.Take();\n                    return;\n                }\n            }\n        }\n    }\n\n    // Parse array: [ value, ... ]\n    template<unsigned parseFlags, typename InputStream, typename Handler>\n    void ParseArray(InputStream& is, Handler& handler) {\n        RAPIDJSON_ASSERT(is.Peek() == '[');\n        is.Take();  // Skip '['\n\n        if (RAPIDJSON_UNLIKELY(!handler.StartArray()))\n            RAPIDJSON_PARSE_ERROR(kParseErrorTermination, is.Tell());\n\n        SkipWhitespaceAndComments<parseFlags>(is);\n        RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n\n        if (Consume(is, ']')) {\n            if (RAPIDJSON_UNLIKELY(!handler.EndArray(0))) // empty array\n                RAPIDJSON_PARSE_ERROR(kParseErrorTermination, is.Tell());\n            return;\n        }\n\n        for (SizeType elementCount = 0;;) {\n            ParseValue<parseFlags>(is, handler);\n            RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n\n            ++elementCount;\n            SkipWhitespaceAndComments<parseFlags>(is);\n            RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n\n            if (Consume(is, ',')) {\n                SkipWhitespaceAndComments<parseFlags>(is);\n                RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n            }\n            else if (Consume(is, ']')) {\n                if (RAPIDJSON_UNLIKELY(!handler.EndArray(elementCount)))\n                    
RAPIDJSON_PARSE_ERROR(kParseErrorTermination, is.Tell());\n                return;\n            }\n            else\n                RAPIDJSON_PARSE_ERROR(kParseErrorArrayMissCommaOrSquareBracket, is.Tell());\n\n            if (parseFlags & kParseTrailingCommasFlag) {\n                if (is.Peek() == ']') {\n                    if (RAPIDJSON_UNLIKELY(!handler.EndArray(elementCount)))\n                        RAPIDJSON_PARSE_ERROR(kParseErrorTermination, is.Tell());\n                    is.Take();\n                    return;\n                }\n            }\n        }\n    }\n\n    template<unsigned parseFlags, typename InputStream, typename Handler>\n    void ParseNull(InputStream& is, Handler& handler) {\n        RAPIDJSON_ASSERT(is.Peek() == 'n');\n        is.Take();\n\n        if (RAPIDJSON_LIKELY(Consume(is, 'u') && Consume(is, 'l') && Consume(is, 'l'))) {\n            if (RAPIDJSON_UNLIKELY(!handler.Null()))\n                RAPIDJSON_PARSE_ERROR(kParseErrorTermination, is.Tell());\n        }\n        else\n            RAPIDJSON_PARSE_ERROR(kParseErrorValueInvalid, is.Tell());\n    }\n\n    template<unsigned parseFlags, typename InputStream, typename Handler>\n    void ParseTrue(InputStream& is, Handler& handler) {\n        RAPIDJSON_ASSERT(is.Peek() == 't');\n        is.Take();\n\n        if (RAPIDJSON_LIKELY(Consume(is, 'r') && Consume(is, 'u') && Consume(is, 'e'))) {\n            if (RAPIDJSON_UNLIKELY(!handler.Bool(true)))\n                RAPIDJSON_PARSE_ERROR(kParseErrorTermination, is.Tell());\n        }\n        else\n            RAPIDJSON_PARSE_ERROR(kParseErrorValueInvalid, is.Tell());\n    }\n\n    template<unsigned parseFlags, typename InputStream, typename Handler>\n    void ParseFalse(InputStream& is, Handler& handler) {\n        RAPIDJSON_ASSERT(is.Peek() == 'f');\n        is.Take();\n\n        if (RAPIDJSON_LIKELY(Consume(is, 'a') && Consume(is, 'l') && Consume(is, 's') && Consume(is, 'e'))) {\n            if 
(RAPIDJSON_UNLIKELY(!handler.Bool(false)))\n                RAPIDJSON_PARSE_ERROR(kParseErrorTermination, is.Tell());\n        }\n        else\n            RAPIDJSON_PARSE_ERROR(kParseErrorValueInvalid, is.Tell());\n    }\n\n    template<typename InputStream>\n    RAPIDJSON_FORCEINLINE static bool Consume(InputStream& is, typename InputStream::Ch expect) {\n        if (RAPIDJSON_LIKELY(is.Peek() == expect)) {\n            is.Take();\n            return true;\n        }\n        else\n            return false;\n    }\n\n    // Helper function to parse four hexadecimal digits in \\uXXXX in ParseString().\n    template<typename InputStream>\n    unsigned ParseHex4(InputStream& is, size_t escapeOffset) {\n        unsigned codepoint = 0;\n        for (int i = 0; i < 4; i++) {\n            Ch c = is.Peek();\n            codepoint <<= 4;\n            codepoint += static_cast<unsigned>(c);\n            if (c >= '0' && c <= '9')\n                codepoint -= '0';\n            else if (c >= 'A' && c <= 'F')\n                codepoint -= 'A' - 10;\n            else if (c >= 'a' && c <= 'f')\n                codepoint -= 'a' - 10;\n            else {\n                RAPIDJSON_PARSE_ERROR_NORETURN(kParseErrorStringUnicodeEscapeInvalidHex, escapeOffset);\n                RAPIDJSON_PARSE_ERROR_EARLY_RETURN(0);\n            }\n            is.Take();\n        }\n        return codepoint;\n    }\n\n    template <typename CharType>\n    class StackStream {\n    public:\n        typedef CharType Ch;\n\n        StackStream(internal::Stack<StackAllocator>& stack) : stack_(stack), length_(0) {}\n        RAPIDJSON_FORCEINLINE void Put(Ch c) {\n            *stack_.template Push<Ch>() = c;\n            ++length_;\n        }\n\n        RAPIDJSON_FORCEINLINE void* Push(SizeType count) {\n            length_ += count;\n            return stack_.template Push<Ch>(count);\n        }\n\n        size_t Length() const { return length_; }\n\n        Ch* Pop() {\n            return stack_.template 
Pop<Ch>(length_);\n        }\n\n    private:\n        StackStream(const StackStream&);\n        StackStream& operator=(const StackStream&);\n\n        internal::Stack<StackAllocator>& stack_;\n        SizeType length_;\n    };\n\n    // Parse string and generate String event. Different code paths for kParseInsituFlag.\n    template<unsigned parseFlags, typename InputStream, typename Handler>\n    void ParseString(InputStream& is, Handler& handler, bool isKey = false) {\n        internal::StreamLocalCopy<InputStream> copy(is);\n        InputStream& s(copy.s);\n\n        RAPIDJSON_ASSERT(s.Peek() == '\\\"');\n        s.Take();  // Skip '\\\"'\n\n        bool success = false;\n        if (parseFlags & kParseInsituFlag) {\n            typename InputStream::Ch *head = s.PutBegin();\n            ParseStringToStream<parseFlags, SourceEncoding, SourceEncoding>(s, s);\n            RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n            size_t length = s.PutEnd(head) - 1;\n            RAPIDJSON_ASSERT(length <= 0xFFFFFFFF);\n            const typename TargetEncoding::Ch* const str = reinterpret_cast<typename TargetEncoding::Ch*>(head);\n            success = (isKey ? handler.Key(str, SizeType(length), false) : handler.String(str, SizeType(length), false));\n        }\n        else {\n            StackStream<typename TargetEncoding::Ch> stackStream(stack_);\n            ParseStringToStream<parseFlags, SourceEncoding, TargetEncoding>(s, stackStream);\n            RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n            SizeType length = static_cast<SizeType>(stackStream.Length()) - 1;\n            const typename TargetEncoding::Ch* const str = stackStream.Pop();\n            success = (isKey ? 
handler.Key(str, length, true) : handler.String(str, length, true));\n        }\n        if (RAPIDJSON_UNLIKELY(!success))\n            RAPIDJSON_PARSE_ERROR(kParseErrorTermination, s.Tell());\n    }\n\n    // Parse string to an output stream\n    // This function handles the prefix/suffix double quotes, escaping, and optional encoding validation.\n    template<unsigned parseFlags, typename SEncoding, typename TEncoding, typename InputStream, typename OutputStream>\n    RAPIDJSON_FORCEINLINE void ParseStringToStream(InputStream& is, OutputStream& os) {\n//!@cond RAPIDJSON_HIDDEN_FROM_DOXYGEN\n#define Z16 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\n        static const char escape[256] = {\n            Z16, Z16, 0, 0,'\\\"', 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '/',\n            Z16, Z16, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,'\\\\', 0, 0, 0,\n            0, 0,'\\b', 0, 0, 0,'\\f', 0, 0, 0, 0, 0, 0, 0,'\\n', 0,\n            0, 0,'\\r', 0,'\\t', 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n            Z16, Z16, Z16, Z16, Z16, Z16, Z16, Z16\n        };\n#undef Z16\n//!@endcond\n\n        for (;;) {\n            // Scan and copy string before \"\\\\\\\"\" or < 0x20. 
This is an optional optimization.\n            if (!(parseFlags & kParseValidateEncodingFlag))\n                ScanCopyUnescapedString(is, os);\n\n            Ch c = is.Peek();\n            if (RAPIDJSON_UNLIKELY(c == '\\\\')) {    // Escape\n                size_t escapeOffset = is.Tell();    // For invalid escaping, report the initial '\\\\' as error offset\n                is.Take();\n                Ch e = is.Peek();\n                if ((sizeof(Ch) == 1 || unsigned(e) < 256) && RAPIDJSON_LIKELY(escape[static_cast<unsigned char>(e)])) {\n                    is.Take();\n                    os.Put(static_cast<typename TEncoding::Ch>(escape[static_cast<unsigned char>(e)]));\n                }\n                else if ((parseFlags & kParseEscapedApostropheFlag) && RAPIDJSON_LIKELY(e == '\\'')) { // Allow escaped apostrophe\n                    is.Take();\n                    os.Put('\\'');\n                }\n                else if (RAPIDJSON_LIKELY(e == 'u')) {    // Unicode\n                    is.Take();\n                    unsigned codepoint = ParseHex4(is, escapeOffset);\n                    RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n                    if (RAPIDJSON_UNLIKELY(codepoint >= 0xD800 && codepoint <= 0xDFFF)) {\n                        // high surrogate, check if followed by valid low surrogate\n                        if (RAPIDJSON_LIKELY(codepoint <= 0xDBFF)) {\n                            // Handle UTF-16 surrogate pair\n                            if (RAPIDJSON_UNLIKELY(!Consume(is, '\\\\') || !Consume(is, 'u')))\n                                RAPIDJSON_PARSE_ERROR(kParseErrorStringUnicodeSurrogateInvalid, escapeOffset);\n                            unsigned codepoint2 = ParseHex4(is, escapeOffset);\n                            RAPIDJSON_PARSE_ERROR_EARLY_RETURN_VOID;\n                            if (RAPIDJSON_UNLIKELY(codepoint2 < 0xDC00 || codepoint2 > 0xDFFF))\n                                
RAPIDJSON_PARSE_ERROR(kParseErrorStringUnicodeSurrogateInvalid, escapeOffset);\n                            codepoint = (((codepoint - 0xD800) << 10) | (codepoint2 - 0xDC00)) + 0x10000;\n                        }\n                        // single low surrogate\n                        else\n                        {\n                            RAPIDJSON_PARSE_ERROR(kParseErrorStringUnicodeSurrogateInvalid, escapeOffset);\n                        }\n                    }\n                    TEncoding::Encode(os, codepoint);\n                }\n                else\n                    RAPIDJSON_PARSE_ERROR(kParseErrorStringEscapeInvalid, escapeOffset);\n            }\n            else if (RAPIDJSON_UNLIKELY(c == '\"')) {    // Closing double quote\n                is.Take();\n                os.Put('\\0');   // null-terminate the string\n                return;\n            }\n            else if (RAPIDJSON_UNLIKELY(static_cast<unsigned>(c) < 0x20)) { // RFC 4627: unescaped = %x20-21 / %x23-5B / %x5D-10FFFF\n                if (c == '\\0')\n                    RAPIDJSON_PARSE_ERROR(kParseErrorStringMissQuotationMark, is.Tell());\n                else\n                    RAPIDJSON_PARSE_ERROR(kParseErrorStringInvalidEncoding, is.Tell());\n            }\n            else {\n                size_t offset = is.Tell();\n                if (RAPIDJSON_UNLIKELY((parseFlags & kParseValidateEncodingFlag ?\n                    !Transcoder<SEncoding, TEncoding>::Validate(is, os) :\n                    !Transcoder<SEncoding, TEncoding>::Transcode(is, os))))\n                    RAPIDJSON_PARSE_ERROR(kParseErrorStringInvalidEncoding, offset);\n            }\n        }\n    }\n\n    template<typename InputStream, typename OutputStream>\n    static RAPIDJSON_FORCEINLINE void ScanCopyUnescapedString(InputStream&, OutputStream&) {\n            // Do nothing for generic version\n    }\n\n#if defined(RAPIDJSON_SSE2) || defined(RAPIDJSON_SSE42)\n    // StringStream -> 
StackStream<char>\n    static RAPIDJSON_FORCEINLINE void ScanCopyUnescapedString(StringStream& is, StackStream<char>& os) {\n        const char* p = is.src_;\n\n        // Scan one by one until alignment (unaligned load may cross page boundary and cause crash)\n        const char* nextAligned = reinterpret_cast<const char*>((reinterpret_cast<size_t>(p) + 15) & static_cast<size_t>(~15));\n        while (p != nextAligned)\n            if (RAPIDJSON_UNLIKELY(*p == '\\\"') || RAPIDJSON_UNLIKELY(*p == '\\\\') || RAPIDJSON_UNLIKELY(static_cast<unsigned>(*p) < 0x20)) {\n                is.src_ = p;\n                return;\n            }\n            else\n                os.Put(*p++);\n\n        // The rest of string using SIMD\n        static const char dquote[16] = { '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"' };\n        static const char bslash[16] = { '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\' };\n        static const char space[16]  = { 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F };\n        const __m128i dq = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&dquote[0]));\n        const __m128i bs = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&bslash[0]));\n        const __m128i sp = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&space[0]));\n\n        for (;; p += 16) {\n            const __m128i s = _mm_load_si128(reinterpret_cast<const __m128i *>(p));\n            const __m128i t1 = _mm_cmpeq_epi8(s, dq);\n            const __m128i t2 = _mm_cmpeq_epi8(s, bs);\n            const __m128i t3 = _mm_cmpeq_epi8(_mm_max_epu8(s, sp), sp); // s < 0x20 <=> max(s, 0x1F) == 0x1F\n            const __m128i x = _mm_or_si128(_mm_or_si128(t1, t2), t3);\n            unsigned short r = static_cast<unsigned short>(_mm_movemask_epi8(x));\n            if 
(RAPIDJSON_UNLIKELY(r != 0)) {   // some of characters is escaped\n                SizeType length;\n    #ifdef _MSC_VER         // Find the index of first escaped\n                unsigned long offset;\n                _BitScanForward(&offset, r);\n                length = offset;\n    #else\n                length = static_cast<SizeType>(__builtin_ffs(r) - 1);\n    #endif\n                if (length != 0) {\n                    char* q = reinterpret_cast<char*>(os.Push(length));\n                    for (size_t i = 0; i < length; i++)\n                        q[i] = p[i];\n\n                    p += length;\n                }\n                break;\n            }\n            _mm_storeu_si128(reinterpret_cast<__m128i *>(os.Push(16)), s);\n        }\n\n        is.src_ = p;\n    }\n\n    // InsituStringStream -> InsituStringStream\n    static RAPIDJSON_FORCEINLINE void ScanCopyUnescapedString(InsituStringStream& is, InsituStringStream& os) {\n        RAPIDJSON_ASSERT(&is == &os);\n        (void)os;\n\n        if (is.src_ == is.dst_) {\n            SkipUnescapedString(is);\n            return;\n        }\n\n        char* p = is.src_;\n        char *q = is.dst_;\n\n        // Scan one by one until alignment (unaligned load may cross page boundary and cause crash)\n        const char* nextAligned = reinterpret_cast<const char*>((reinterpret_cast<size_t>(p) + 15) & static_cast<size_t>(~15));\n        while (p != nextAligned)\n            if (RAPIDJSON_UNLIKELY(*p == '\\\"') || RAPIDJSON_UNLIKELY(*p == '\\\\') || RAPIDJSON_UNLIKELY(static_cast<unsigned>(*p) < 0x20)) {\n                is.src_ = p;\n                is.dst_ = q;\n                return;\n            }\n            else\n                *q++ = *p++;\n\n        // The rest of string using SIMD\n        static const char dquote[16] = { '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"' };\n        static const char bslash[16] = { 
'\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\' };\n        static const char space[16] = { 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F };\n        const __m128i dq = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&dquote[0]));\n        const __m128i bs = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&bslash[0]));\n        const __m128i sp = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&space[0]));\n\n        for (;; p += 16, q += 16) {\n            const __m128i s = _mm_load_si128(reinterpret_cast<const __m128i *>(p));\n            const __m128i t1 = _mm_cmpeq_epi8(s, dq);\n            const __m128i t2 = _mm_cmpeq_epi8(s, bs);\n            const __m128i t3 = _mm_cmpeq_epi8(_mm_max_epu8(s, sp), sp); // s < 0x20 <=> max(s, 0x1F) == 0x1F\n            const __m128i x = _mm_or_si128(_mm_or_si128(t1, t2), t3);\n            unsigned short r = static_cast<unsigned short>(_mm_movemask_epi8(x));\n            if (RAPIDJSON_UNLIKELY(r != 0)) {   // some of characters is escaped\n                size_t length;\n#ifdef _MSC_VER         // Find the index of first escaped\n                unsigned long offset;\n                _BitScanForward(&offset, r);\n                length = offset;\n#else\n                length = static_cast<size_t>(__builtin_ffs(r) - 1);\n#endif\n                for (const char* pend = p + length; p != pend; )\n                    *q++ = *p++;\n                break;\n            }\n            _mm_storeu_si128(reinterpret_cast<__m128i *>(q), s);\n        }\n\n        is.src_ = p;\n        is.dst_ = q;\n    }\n\n    // When read/write pointers are the same for insitu stream, just skip unescaped characters\n    static RAPIDJSON_FORCEINLINE void SkipUnescapedString(InsituStringStream& is) {\n        RAPIDJSON_ASSERT(is.src_ == is.dst_);\n        char* p = is.src_;\n\n        // Scan one by one until alignment 
(unaligned load may cross page boundary and cause crash)\n        const char* nextAligned = reinterpret_cast<const char*>((reinterpret_cast<size_t>(p) + 15) & static_cast<size_t>(~15));\n        for (; p != nextAligned; p++)\n            if (RAPIDJSON_UNLIKELY(*p == '\\\"') || RAPIDJSON_UNLIKELY(*p == '\\\\') || RAPIDJSON_UNLIKELY(static_cast<unsigned>(*p) < 0x20)) {\n                is.src_ = is.dst_ = p;\n                return;\n            }\n\n        // The rest of string using SIMD\n        static const char dquote[16] = { '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"' };\n        static const char bslash[16] = { '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\' };\n        static const char space[16] = { 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F };\n        const __m128i dq = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&dquote[0]));\n        const __m128i bs = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&bslash[0]));\n        const __m128i sp = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&space[0]));\n\n        for (;; p += 16) {\n            const __m128i s = _mm_load_si128(reinterpret_cast<const __m128i *>(p));\n            const __m128i t1 = _mm_cmpeq_epi8(s, dq);\n            const __m128i t2 = _mm_cmpeq_epi8(s, bs);\n            const __m128i t3 = _mm_cmpeq_epi8(_mm_max_epu8(s, sp), sp); // s < 0x20 <=> max(s, 0x1F) == 0x1F\n            const __m128i x = _mm_or_si128(_mm_or_si128(t1, t2), t3);\n            unsigned short r = static_cast<unsigned short>(_mm_movemask_epi8(x));\n            if (RAPIDJSON_UNLIKELY(r != 0)) {   // some of characters is escaped\n                size_t length;\n#ifdef _MSC_VER         // Find the index of first escaped\n                unsigned long offset;\n                _BitScanForward(&offset, 
r);\n                length = offset;\n#else\n                length = static_cast<size_t>(__builtin_ffs(r) - 1);\n#endif\n                p += length;\n                break;\n            }\n        }\n\n        is.src_ = is.dst_ = p;\n    }\n#elif defined(RAPIDJSON_NEON)\n    // StringStream -> StackStream<char>\n    static RAPIDJSON_FORCEINLINE void ScanCopyUnescapedString(StringStream& is, StackStream<char>& os) {\n        const char* p = is.src_;\n\n        // Scan one by one until alignment (unaligned load may cross page boundary and cause crash)\n        const char* nextAligned = reinterpret_cast<const char*>((reinterpret_cast<size_t>(p) + 15) & static_cast<size_t>(~15));\n        while (p != nextAligned)\n            if (RAPIDJSON_UNLIKELY(*p == '\\\"') || RAPIDJSON_UNLIKELY(*p == '\\\\') || RAPIDJSON_UNLIKELY(static_cast<unsigned>(*p) < 0x20)) {\n                is.src_ = p;\n                return;\n            }\n            else\n                os.Put(*p++);\n\n        // The rest of string using SIMD\n        const uint8x16_t s0 = vmovq_n_u8('\"');\n        const uint8x16_t s1 = vmovq_n_u8('\\\\');\n        const uint8x16_t s2 = vmovq_n_u8('\\b');\n        const uint8x16_t s3 = vmovq_n_u8(32);\n\n        for (;; p += 16) {\n            const uint8x16_t s = vld1q_u8(reinterpret_cast<const uint8_t *>(p));\n            uint8x16_t x = vceqq_u8(s, s0);\n            x = vorrq_u8(x, vceqq_u8(s, s1));\n            x = vorrq_u8(x, vceqq_u8(s, s2));\n            x = vorrq_u8(x, vcltq_u8(s, s3));\n\n            x = vrev64q_u8(x);                     // Rev in 64\n            uint64_t low = vgetq_lane_u64(vreinterpretq_u64_u8(x), 0);   // extract\n            uint64_t high = vgetq_lane_u64(vreinterpretq_u64_u8(x), 1);  // extract\n\n            SizeType length = 0;\n            bool escaped = false;\n            if (low == 0) {\n                if (high != 0) {\n                    uint32_t lz = internal::clzll(high);\n                    length = 8 + (lz >> 
3);\n                    escaped = true;\n                }\n            } else {\n                uint32_t lz = internal::clzll(low);\n                length = lz >> 3;\n                escaped = true;\n            }\n            if (RAPIDJSON_UNLIKELY(escaped)) {   // some of characters is escaped\n                if (length != 0) {\n                    char* q = reinterpret_cast<char*>(os.Push(length));\n                    for (size_t i = 0; i < length; i++)\n                        q[i] = p[i];\n\n                    p += length;\n                }\n                break;\n            }\n            vst1q_u8(reinterpret_cast<uint8_t *>(os.Push(16)), s);\n        }\n\n        is.src_ = p;\n    }\n\n    // InsituStringStream -> InsituStringStream\n    static RAPIDJSON_FORCEINLINE void ScanCopyUnescapedString(InsituStringStream& is, InsituStringStream& os) {\n        RAPIDJSON_ASSERT(&is == &os);\n        (void)os;\n\n        if (is.src_ == is.dst_) {\n            SkipUnescapedString(is);\n            return;\n        }\n\n        char* p = is.src_;\n        char *q = is.dst_;\n\n        // Scan one by one until alignment (unaligned load may cross page boundary and cause crash)\n        const char* nextAligned = reinterpret_cast<const char*>((reinterpret_cast<size_t>(p) + 15) & static_cast<size_t>(~15));\n        while (p != nextAligned)\n            if (RAPIDJSON_UNLIKELY(*p == '\\\"') || RAPIDJSON_UNLIKELY(*p == '\\\\') || RAPIDJSON_UNLIKELY(static_cast<unsigned>(*p) < 0x20)) {\n                is.src_ = p;\n                is.dst_ = q;\n                return;\n            }\n            else\n                *q++ = *p++;\n\n        // The rest of string using SIMD\n        const uint8x16_t s0 = vmovq_n_u8('\"');\n        const uint8x16_t s1 = vmovq_n_u8('\\\\');\n        const uint8x16_t s2 = vmovq_n_u8('\\b');\n        const uint8x16_t s3 = vmovq_n_u8(32);\n\n        for (;; p += 16, q += 16) {\n            const uint8x16_t s = 
vld1q_u8(reinterpret_cast<uint8_t *>(p));\n            uint8x16_t x = vceqq_u8(s, s0);\n            x = vorrq_u8(x, vceqq_u8(s, s1));\n            x = vorrq_u8(x, vceqq_u8(s, s2));\n            x = vorrq_u8(x, vcltq_u8(s, s3));\n\n            x = vrev64q_u8(x);                     // Rev in 64\n            uint64_t low = vgetq_lane_u64(vreinterpretq_u64_u8(x), 0);   // extract\n            uint64_t high = vgetq_lane_u64(vreinterpretq_u64_u8(x), 1);  // extract\n\n            SizeType length = 0;\n            bool escaped = false;\n            if (low == 0) {\n                if (high != 0) {\n                    uint32_t lz = internal::clzll(high);\n                    length = 8 + (lz >> 3);\n                    escaped = true;\n                }\n            } else {\n                uint32_t lz = internal::clzll(low);\n                length = lz >> 3;\n                escaped = true;\n            }\n            if (RAPIDJSON_UNLIKELY(escaped)) {   // some of characters is escaped\n                for (const char* pend = p + length; p != pend; ) {\n                    *q++ = *p++;\n                }\n                break;\n            }\n            vst1q_u8(reinterpret_cast<uint8_t *>(q), s);\n        }\n\n        is.src_ = p;\n        is.dst_ = q;\n    }\n\n    // When read/write pointers are the same for insitu stream, just skip unescaped characters\n    static RAPIDJSON_FORCEINLINE void SkipUnescapedString(InsituStringStream& is) {\n        RAPIDJSON_ASSERT(is.src_ == is.dst_);\n        char* p = is.src_;\n\n        // Scan one by one until alignment (unaligned load may cross page boundary and cause crash)\n        const char* nextAligned = reinterpret_cast<const char*>((reinterpret_cast<size_t>(p) + 15) & static_cast<size_t>(~15));\n        for (; p != nextAligned; p++)\n            if (RAPIDJSON_UNLIKELY(*p == '\\\"') || RAPIDJSON_UNLIKELY(*p == '\\\\') || RAPIDJSON_UNLIKELY(static_cast<unsigned>(*p) < 0x20)) {\n                is.src_ = is.dst_ = p;\n    
            return;\n            }\n\n        // The rest of string using SIMD\n        const uint8x16_t s0 = vmovq_n_u8('\"');\n        const uint8x16_t s1 = vmovq_n_u8('\\\\');\n        const uint8x16_t s2 = vmovq_n_u8('\\b');\n        const uint8x16_t s3 = vmovq_n_u8(32);\n\n        for (;; p += 16) {\n            const uint8x16_t s = vld1q_u8(reinterpret_cast<uint8_t *>(p));\n            uint8x16_t x = vceqq_u8(s, s0);\n            x = vorrq_u8(x, vceqq_u8(s, s1));\n            x = vorrq_u8(x, vceqq_u8(s, s2));\n            x = vorrq_u8(x, vcltq_u8(s, s3));\n\n            x = vrev64q_u8(x);                     // Rev in 64\n            uint64_t low = vgetq_lane_u64(vreinterpretq_u64_u8(x), 0);   // extract\n            uint64_t high = vgetq_lane_u64(vreinterpretq_u64_u8(x), 1);  // extract\n\n            if (low == 0) {\n                if (high != 0) {\n                    uint32_t lz = internal::clzll(high);\n                    p += 8 + (lz >> 3);\n                    break;\n                }\n            } else {\n                uint32_t lz = internal::clzll(low);\n                p += lz >> 3;\n                break;\n            }\n        }\n\n        is.src_ = is.dst_ = p;\n    }\n#endif // RAPIDJSON_NEON\n\n    template<typename InputStream, typename StackCharacter, bool backup, bool pushOnTake>\n    class NumberStream;\n\n    template<typename InputStream, typename StackCharacter>\n    class NumberStream<InputStream, StackCharacter, false, false> {\n    public:\n        typedef typename InputStream::Ch Ch;\n\n        NumberStream(GenericReader& reader, InputStream& s) : is(s) { (void)reader;  }\n\n        RAPIDJSON_FORCEINLINE Ch Peek() const { return is.Peek(); }\n        RAPIDJSON_FORCEINLINE Ch TakePush() { return is.Take(); }\n        RAPIDJSON_FORCEINLINE Ch Take() { return is.Take(); }\n        RAPIDJSON_FORCEINLINE void Push(char) {}\n\n        size_t Tell() { return is.Tell(); }\n        size_t Length() { return 0; }\n        const 
StackCharacter* Pop() { return 0; }\n\n    protected:\n        NumberStream& operator=(const NumberStream&);\n\n        InputStream& is;\n    };\n\n    template<typename InputStream, typename StackCharacter>\n    class NumberStream<InputStream, StackCharacter, true, false> : public NumberStream<InputStream, StackCharacter, false, false> {\n        typedef NumberStream<InputStream, StackCharacter, false, false> Base;\n    public:\n        NumberStream(GenericReader& reader, InputStream& s) : Base(reader, s), stackStream(reader.stack_) {}\n\n        RAPIDJSON_FORCEINLINE Ch TakePush() {\n            stackStream.Put(static_cast<StackCharacter>(Base::is.Peek()));\n            return Base::is.Take();\n        }\n\n        RAPIDJSON_FORCEINLINE void Push(StackCharacter c) {\n            stackStream.Put(c);\n        }\n\n        size_t Length() { return stackStream.Length(); }\n\n        const StackCharacter* Pop() {\n            stackStream.Put('\\0');\n            return stackStream.Pop();\n        }\n\n    private:\n        StackStream<StackCharacter> stackStream;\n    };\n\n    template<typename InputStream, typename StackCharacter>\n    class NumberStream<InputStream, StackCharacter, true, true> : public NumberStream<InputStream, StackCharacter, true, false> {\n        typedef NumberStream<InputStream, StackCharacter, true, false> Base;\n    public:\n        NumberStream(GenericReader& reader, InputStream& s) : Base(reader, s) {}\n\n        RAPIDJSON_FORCEINLINE Ch Take() { return Base::TakePush(); }\n    };\n\n    template<unsigned parseFlags, typename InputStream, typename Handler>\n    void ParseNumber(InputStream& is, Handler& handler) {\n        typedef typename internal::SelectIf<internal::BoolType<(parseFlags & kParseNumbersAsStringsFlag) != 0>, typename TargetEncoding::Ch, char>::Type NumberCharacter;\n\n        internal::StreamLocalCopy<InputStream> copy(is);\n        NumberStream<InputStream, NumberCharacter,\n            ((parseFlags & 
kParseNumbersAsStringsFlag) != 0) ?\n                ((parseFlags & kParseInsituFlag) == 0) :\n                ((parseFlags & kParseFullPrecisionFlag) != 0),\n            (parseFlags & kParseNumbersAsStringsFlag) != 0 &&\n                (parseFlags & kParseInsituFlag) == 0> s(*this, copy.s);\n\n        size_t startOffset = s.Tell();\n        double d = 0.0;\n        bool useNanOrInf = false;\n\n        // Parse minus\n        bool minus = Consume(s, '-');\n\n        // Parse int: zero / ( digit1-9 *DIGIT )\n        unsigned i = 0;\n        uint64_t i64 = 0;\n        bool use64bit = false;\n        int significandDigit = 0;\n        if (RAPIDJSON_UNLIKELY(s.Peek() == '0')) {\n            i = 0;\n            s.TakePush();\n        }\n        else if (RAPIDJSON_LIKELY(s.Peek() >= '1' && s.Peek() <= '9')) {\n            i = static_cast<unsigned>(s.TakePush() - '0');\n\n            if (minus)\n                while (RAPIDJSON_LIKELY(s.Peek() >= '0' && s.Peek() <= '9')) {\n                    if (RAPIDJSON_UNLIKELY(i >= 214748364)) { // 2^31 = 2147483648\n                        if (RAPIDJSON_LIKELY(i != 214748364 || s.Peek() > '8')) {\n                            i64 = i;\n                            use64bit = true;\n                            break;\n                        }\n                    }\n                    i = i * 10 + static_cast<unsigned>(s.TakePush() - '0');\n                    significandDigit++;\n                }\n            else\n                while (RAPIDJSON_LIKELY(s.Peek() >= '0' && s.Peek() <= '9')) {\n                    if (RAPIDJSON_UNLIKELY(i >= 429496729)) { // 2^32 - 1 = 4294967295\n                        if (RAPIDJSON_LIKELY(i != 429496729 || s.Peek() > '5')) {\n                            i64 = i;\n                            use64bit = true;\n                            break;\n                        }\n                    }\n                    i = i * 10 + static_cast<unsigned>(s.TakePush() - '0');\n                    
significandDigit++;\n                }\n        }\n        // Parse NaN or Infinity here\n        else if ((parseFlags & kParseNanAndInfFlag) && RAPIDJSON_LIKELY((s.Peek() == 'I' || s.Peek() == 'N'))) {\n            if (Consume(s, 'N')) {\n                if (Consume(s, 'a') && Consume(s, 'N')) {\n                    d = std::numeric_limits<double>::quiet_NaN();\n                    useNanOrInf = true;\n                }\n            }\n            else if (RAPIDJSON_LIKELY(Consume(s, 'I'))) {\n                if (Consume(s, 'n') && Consume(s, 'f')) {\n                    d = (minus ? -std::numeric_limits<double>::infinity() : std::numeric_limits<double>::infinity());\n                    useNanOrInf = true;\n\n                    if (RAPIDJSON_UNLIKELY(s.Peek() == 'i' && !(Consume(s, 'i') && Consume(s, 'n')\n                                                                && Consume(s, 'i') && Consume(s, 't') && Consume(s, 'y')))) {\n                        RAPIDJSON_PARSE_ERROR(kParseErrorValueInvalid, s.Tell());\n                    }\n                }\n            }\n\n            if (RAPIDJSON_UNLIKELY(!useNanOrInf)) {\n                RAPIDJSON_PARSE_ERROR(kParseErrorValueInvalid, s.Tell());\n            }\n        }\n        else\n            RAPIDJSON_PARSE_ERROR(kParseErrorValueInvalid, s.Tell());\n\n        // Parse 64bit int\n        bool useDouble = false;\n        if (use64bit) {\n            if (minus)\n                while (RAPIDJSON_LIKELY(s.Peek() >= '0' && s.Peek() <= '9')) {\n                     if (RAPIDJSON_UNLIKELY(i64 >= RAPIDJSON_UINT64_C2(0x0CCCCCCC, 0xCCCCCCCC))) // 2^63 = 9223372036854775808\n                        if (RAPIDJSON_LIKELY(i64 != RAPIDJSON_UINT64_C2(0x0CCCCCCC, 0xCCCCCCCC) || s.Peek() > '8')) {\n                            d = static_cast<double>(i64);\n                            useDouble = true;\n                            break;\n                        }\n                    i64 = i64 * 10 + 
static_cast<unsigned>(s.TakePush() - '0');\n                    significandDigit++;\n                }\n            else\n                while (RAPIDJSON_LIKELY(s.Peek() >= '0' && s.Peek() <= '9')) {\n                    if (RAPIDJSON_UNLIKELY(i64 >= RAPIDJSON_UINT64_C2(0x19999999, 0x99999999))) // 2^64 - 1 = 18446744073709551615\n                        if (RAPIDJSON_LIKELY(i64 != RAPIDJSON_UINT64_C2(0x19999999, 0x99999999) || s.Peek() > '5')) {\n                            d = static_cast<double>(i64);\n                            useDouble = true;\n                            break;\n                        }\n                    i64 = i64 * 10 + static_cast<unsigned>(s.TakePush() - '0');\n                    significandDigit++;\n                }\n        }\n\n        // Force double for big integer\n        if (useDouble) {\n            while (RAPIDJSON_LIKELY(s.Peek() >= '0' && s.Peek() <= '9')) {\n                d = d * 10 + (s.TakePush() - '0');\n            }\n        }\n\n        // Parse frac = decimal-point 1*DIGIT\n        int expFrac = 0;\n        size_t decimalPosition;\n        if (!useNanOrInf && Consume(s, '.')) {\n            decimalPosition = s.Length();\n\n            if (RAPIDJSON_UNLIKELY(!(s.Peek() >= '0' && s.Peek() <= '9')))\n                RAPIDJSON_PARSE_ERROR(kParseErrorNumberMissFraction, s.Tell());\n\n            if (!useDouble) {\n#if RAPIDJSON_64BIT\n                // Use i64 to store significand in 64-bit architecture\n                if (!use64bit)\n                    i64 = i;\n\n                while (RAPIDJSON_LIKELY(s.Peek() >= '0' && s.Peek() <= '9')) {\n                    if (i64 > RAPIDJSON_UINT64_C2(0x1FFFFF, 0xFFFFFFFF)) // 2^53 - 1 for fast path\n                        break;\n                    else {\n                        i64 = i64 * 10 + static_cast<unsigned>(s.TakePush() - '0');\n                        --expFrac;\n                        if (i64 != 0)\n                            significandDigit++;\n      
              }\n                }\n\n                d = static_cast<double>(i64);\n#else\n                // Use double to store significand in 32-bit architecture\n                d = static_cast<double>(use64bit ? i64 : i);\n#endif\n                useDouble = true;\n            }\n\n            while (RAPIDJSON_LIKELY(s.Peek() >= '0' && s.Peek() <= '9')) {\n                if (significandDigit < 17) {\n                    d = d * 10.0 + (s.TakePush() - '0');\n                    --expFrac;\n                    if (RAPIDJSON_LIKELY(d > 0.0))\n                        significandDigit++;\n                }\n                else\n                    s.TakePush();\n            }\n        }\n        else\n            decimalPosition = s.Length(); // decimal position at the end of integer.\n\n        // Parse exp = e [ minus / plus ] 1*DIGIT\n        int exp = 0;\n        if (!useNanOrInf && (Consume(s, 'e') || Consume(s, 'E'))) {\n            if (!useDouble) {\n                d = static_cast<double>(use64bit ? 
i64 : i);\n                useDouble = true;\n            }\n\n            bool expMinus = false;\n            if (Consume(s, '+'))\n                ;\n            else if (Consume(s, '-'))\n                expMinus = true;\n\n            if (RAPIDJSON_LIKELY(s.Peek() >= '0' && s.Peek() <= '9')) {\n                exp = static_cast<int>(s.Take() - '0');\n                if (expMinus) {\n                    // (exp + expFrac) must not underflow int => we're detecting when -exp gets\n                    // dangerously close to INT_MIN (a pessimistic next digit 9 would push it into\n                    // underflow territory):\n                    //\n                    //        -(exp * 10 + 9) + expFrac >= INT_MIN\n                    //   <=>  exp <= (expFrac - INT_MIN - 9) / 10\n                    RAPIDJSON_ASSERT(expFrac <= 0);\n                    int maxExp = (expFrac + 2147483639) / 10;\n\n                    while (RAPIDJSON_LIKELY(s.Peek() >= '0' && s.Peek() <= '9')) {\n                        exp = exp * 10 + static_cast<int>(s.Take() - '0');\n                        if (RAPIDJSON_UNLIKELY(exp > maxExp)) {\n                            while (RAPIDJSON_UNLIKELY(s.Peek() >= '0' && s.Peek() <= '9'))  // Consume the rest of exponent\n                                s.Take();\n                        }\n                    }\n                }\n                else {  // positive exp\n                    int maxExp = 308 - expFrac;\n                    while (RAPIDJSON_LIKELY(s.Peek() >= '0' && s.Peek() <= '9')) {\n                        exp = exp * 10 + static_cast<int>(s.Take() - '0');\n                        if (RAPIDJSON_UNLIKELY(exp > maxExp))\n                            RAPIDJSON_PARSE_ERROR(kParseErrorNumberTooBig, startOffset);\n                    }\n                }\n            }\n            else\n                RAPIDJSON_PARSE_ERROR(kParseErrorNumberMissExponent, s.Tell());\n\n            if (expMinus)\n                exp = -exp;\n        
}\n\n        // Finish parsing, call event according to the type of number.\n        bool cont = true;\n\n        if (parseFlags & kParseNumbersAsStringsFlag) {\n            if (parseFlags & kParseInsituFlag) {\n                s.Pop();  // Pop stack no matter if it will be used or not.\n                typename InputStream::Ch* head = is.PutBegin();\n                const size_t length = s.Tell() - startOffset;\n                RAPIDJSON_ASSERT(length <= 0xFFFFFFFF);\n                // unable to insert the \\0 character here, it will erase the comma after this number\n                const typename TargetEncoding::Ch* const str = reinterpret_cast<typename TargetEncoding::Ch*>(head);\n                cont = handler.RawNumber(str, SizeType(length), false);\n            }\n            else {\n                SizeType numCharsToCopy = static_cast<SizeType>(s.Length());\n                GenericStringStream<UTF8<NumberCharacter> > srcStream(s.Pop());\n                StackStream<typename TargetEncoding::Ch> dstStream(stack_);\n                while (numCharsToCopy--) {\n                    Transcoder<UTF8<typename TargetEncoding::Ch>, TargetEncoding>::Transcode(srcStream, dstStream);\n                }\n                dstStream.Put('\\0');\n                const typename TargetEncoding::Ch* str = dstStream.Pop();\n                const SizeType length = static_cast<SizeType>(dstStream.Length()) - 1;\n                cont = handler.RawNumber(str, SizeType(length), true);\n            }\n        }\n        else {\n           size_t length = s.Length();\n           const NumberCharacter* decimal = s.Pop();  // Pop stack no matter if it will be used or not.\n\n           if (useDouble) {\n               int p = exp + expFrac;\n               if (parseFlags & kParseFullPrecisionFlag)\n                   d = internal::StrtodFullPrecision(d, p, decimal, length, decimalPosition, exp);\n               else\n                   d = internal::StrtodNormalPrecision(d, p);\n\n      
         // Use > max, instead of == inf, to fix bogus warning -Wfloat-equal\n               if (d > (std::numeric_limits<double>::max)()) {\n                   // Overflow\n                   // TODO: internal::StrtodX should report overflow (or underflow)\n                   RAPIDJSON_PARSE_ERROR(kParseErrorNumberTooBig, startOffset);\n               }\n\n               cont = handler.Double(minus ? -d : d);\n           }\n           else if (useNanOrInf) {\n               cont = handler.Double(d);\n           }\n           else {\n               if (use64bit) {\n                   if (minus)\n                       cont = handler.Int64(static_cast<int64_t>(~i64 + 1));\n                   else\n                       cont = handler.Uint64(i64);\n               }\n               else {\n                   if (minus)\n                       cont = handler.Int(static_cast<int32_t>(~i + 1));\n                   else\n                       cont = handler.Uint(i);\n               }\n           }\n        }\n        if (RAPIDJSON_UNLIKELY(!cont))\n            RAPIDJSON_PARSE_ERROR(kParseErrorTermination, startOffset);\n    }\n\n    // Parse any JSON value\n    template<unsigned parseFlags, typename InputStream, typename Handler>\n    void ParseValue(InputStream& is, Handler& handler) {\n        switch (is.Peek()) {\n            case 'n': ParseNull  <parseFlags>(is, handler); break;\n            case 't': ParseTrue  <parseFlags>(is, handler); break;\n            case 'f': ParseFalse <parseFlags>(is, handler); break;\n            case '\"': ParseString<parseFlags>(is, handler); break;\n            case '{': ParseObject<parseFlags>(is, handler); break;\n            case '[': ParseArray <parseFlags>(is, handler); break;\n            default :\n                      ParseNumber<parseFlags>(is, handler);\n                      break;\n\n        }\n    }\n\n    // Iterative Parsing\n\n    // States\n    enum IterativeParsingState {\n        IterativeParsingFinishState = 0, // 
sink states at top\n        IterativeParsingErrorState,      // sink states at top\n        IterativeParsingStartState,\n\n        // Object states\n        IterativeParsingObjectInitialState,\n        IterativeParsingMemberKeyState,\n        IterativeParsingMemberValueState,\n        IterativeParsingObjectFinishState,\n\n        // Array states\n        IterativeParsingArrayInitialState,\n        IterativeParsingElementState,\n        IterativeParsingArrayFinishState,\n\n        // Single value state\n        IterativeParsingValueState,\n\n        // Delimiter states (at bottom)\n        IterativeParsingElementDelimiterState,\n        IterativeParsingMemberDelimiterState,\n        IterativeParsingKeyValueDelimiterState,\n\n        cIterativeParsingStateCount\n    };\n\n    // Tokens\n    enum Token {\n        LeftBracketToken = 0,\n        RightBracketToken,\n\n        LeftCurlyBracketToken,\n        RightCurlyBracketToken,\n\n        CommaToken,\n        ColonToken,\n\n        StringToken,\n        FalseToken,\n        TrueToken,\n        NullToken,\n        NumberToken,\n\n        kTokenCount\n    };\n\n    RAPIDJSON_FORCEINLINE Token Tokenize(Ch c) const {\n\n//!@cond RAPIDJSON_HIDDEN_FROM_DOXYGEN\n#define N NumberToken\n#define N16 N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N\n        // Maps from ASCII to Token\n        static const unsigned char tokenMap[256] = {\n            N16, // 00~0F\n            N16, // 10~1F\n            N, N, StringToken, N, N, N, N, N, N, N, N, N, CommaToken, N, N, N, // 20~2F\n            N, N, N, N, N, N, N, N, N, N, ColonToken, N, N, N, N, N, // 30~3F\n            N16, // 40~4F\n            N, N, N, N, N, N, N, N, N, N, N, LeftBracketToken, N, RightBracketToken, N, N, // 50~5F\n            N, N, N, N, N, N, FalseToken, N, N, N, N, N, N, N, NullToken, N, // 60~6F\n            N, N, N, N, TrueToken, N, N, N, N, N, N, LeftCurlyBracketToken, N, RightCurlyBracketToken, N, N, // 70~7F\n            N16, N16, N16, N16, N16, N16, N16, N16 // 80~FF\n  
      };\n#undef N\n#undef N16\n//!@endcond\n\n        if (sizeof(Ch) == 1 || static_cast<unsigned>(c) < 256)\n            return static_cast<Token>(tokenMap[static_cast<unsigned char>(c)]);\n        else\n            return NumberToken;\n    }\n\n    RAPIDJSON_FORCEINLINE IterativeParsingState Predict(IterativeParsingState state, Token token) const {\n        // current state x one lookahead token -> new state\n        static const char G[cIterativeParsingStateCount][kTokenCount] = {\n            // Finish(sink state)\n            {\n                IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState,\n                IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState,\n                IterativeParsingErrorState\n            },\n            // Error(sink state)\n            {\n                IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState,\n                IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState,\n                IterativeParsingErrorState\n            },\n            // Start\n            {\n                IterativeParsingArrayInitialState,  // Left bracket\n                IterativeParsingErrorState,         // Right bracket\n                IterativeParsingObjectInitialState, // Left curly bracket\n                IterativeParsingErrorState,         // Right curly bracket\n                IterativeParsingErrorState,         // Comma\n                IterativeParsingErrorState,         // Colon\n                IterativeParsingValueState,         // String\n                IterativeParsingValueState,         // False\n                IterativeParsingValueState,         // True\n                
IterativeParsingValueState,         // Null\n                IterativeParsingValueState          // Number\n            },\n            // ObjectInitial\n            {\n                IterativeParsingErrorState,         // Left bracket\n                IterativeParsingErrorState,         // Right bracket\n                IterativeParsingErrorState,         // Left curly bracket\n                IterativeParsingObjectFinishState,  // Right curly bracket\n                IterativeParsingErrorState,         // Comma\n                IterativeParsingErrorState,         // Colon\n                IterativeParsingMemberKeyState,     // String\n                IterativeParsingErrorState,         // False\n                IterativeParsingErrorState,         // True\n                IterativeParsingErrorState,         // Null\n                IterativeParsingErrorState          // Number\n            },\n            // MemberKey\n            {\n                IterativeParsingErrorState,             // Left bracket\n                IterativeParsingErrorState,             // Right bracket\n                IterativeParsingErrorState,             // Left curly bracket\n                IterativeParsingErrorState,             // Right curly bracket\n                IterativeParsingErrorState,             // Comma\n                IterativeParsingKeyValueDelimiterState, // Colon\n                IterativeParsingErrorState,             // String\n                IterativeParsingErrorState,             // False\n                IterativeParsingErrorState,             // True\n                IterativeParsingErrorState,             // Null\n                IterativeParsingErrorState              // Number\n            },\n            // MemberValue\n            {\n                IterativeParsingErrorState,             // Left bracket\n                IterativeParsingErrorState,             // Right bracket\n                IterativeParsingErrorState,             // Left curly 
bracket\n                IterativeParsingObjectFinishState,      // Right curly bracket\n                IterativeParsingMemberDelimiterState,   // Comma\n                IterativeParsingErrorState,             // Colon\n                IterativeParsingErrorState,             // String\n                IterativeParsingErrorState,             // False\n                IterativeParsingErrorState,             // True\n                IterativeParsingErrorState,             // Null\n                IterativeParsingErrorState              // Number\n            },\n            // ObjectFinish(sink state)\n            {\n                IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState,\n                IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState,\n                IterativeParsingErrorState\n            },\n            // ArrayInitial\n            {\n                IterativeParsingArrayInitialState,      // Left bracket(push Element state)\n                IterativeParsingArrayFinishState,       // Right bracket\n                IterativeParsingObjectInitialState,     // Left curly bracket(push Element state)\n                IterativeParsingErrorState,             // Right curly bracket\n                IterativeParsingErrorState,             // Comma\n                IterativeParsingErrorState,             // Colon\n                IterativeParsingElementState,           // String\n                IterativeParsingElementState,           // False\n                IterativeParsingElementState,           // True\n                IterativeParsingElementState,           // Null\n                IterativeParsingElementState            // Number\n            },\n            // Element\n            {\n                IterativeParsingErrorState,             // Left bracket\n                
IterativeParsingArrayFinishState,       // Right bracket\n                IterativeParsingErrorState,             // Left curly bracket\n                IterativeParsingErrorState,             // Right curly bracket\n                IterativeParsingElementDelimiterState,  // Comma\n                IterativeParsingErrorState,             // Colon\n                IterativeParsingErrorState,             // String\n                IterativeParsingErrorState,             // False\n                IterativeParsingErrorState,             // True\n                IterativeParsingErrorState,             // Null\n                IterativeParsingErrorState              // Number\n            },\n            // ArrayFinish(sink state)\n            {\n                IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState,\n                IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState,\n                IterativeParsingErrorState\n            },\n            // Single Value (sink state)\n            {\n                IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState,\n                IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState, IterativeParsingErrorState,\n                IterativeParsingErrorState\n            },\n            // ElementDelimiter\n            {\n                IterativeParsingArrayInitialState,      // Left bracket(push Element state)\n                IterativeParsingArrayFinishState,       // Right bracket\n                IterativeParsingObjectInitialState,     // Left curly bracket(push Element state)\n                IterativeParsingErrorState,             // Right curly bracket\n                IterativeParsingErrorState,        
     // Comma\n                IterativeParsingErrorState,             // Colon\n                IterativeParsingElementState,           // String\n                IterativeParsingElementState,           // False\n                IterativeParsingElementState,           // True\n                IterativeParsingElementState,           // Null\n                IterativeParsingElementState            // Number\n            },\n            // MemberDelimiter\n            {\n                IterativeParsingErrorState,         // Left bracket\n                IterativeParsingErrorState,         // Right bracket\n                IterativeParsingErrorState,         // Left curly bracket\n                IterativeParsingObjectFinishState,  // Right curly bracket\n                IterativeParsingErrorState,         // Comma\n                IterativeParsingErrorState,         // Colon\n                IterativeParsingMemberKeyState,     // String\n                IterativeParsingErrorState,         // False\n                IterativeParsingErrorState,         // True\n                IterativeParsingErrorState,         // Null\n                IterativeParsingErrorState          // Number\n            },\n            // KeyValueDelimiter\n            {\n                IterativeParsingArrayInitialState,      // Left bracket(push MemberValue state)\n                IterativeParsingErrorState,             // Right bracket\n                IterativeParsingObjectInitialState,     // Left curly bracket(push MemberValue state)\n                IterativeParsingErrorState,             // Right curly bracket\n                IterativeParsingErrorState,             // Comma\n                IterativeParsingErrorState,             // Colon\n                IterativeParsingMemberValueState,       // String\n                IterativeParsingMemberValueState,       // False\n                IterativeParsingMemberValueState,       // True\n                IterativeParsingMemberValueState,    
   // Null\n                IterativeParsingMemberValueState        // Number\n            },\n        }; // End of G\n\n        return static_cast<IterativeParsingState>(G[state][token]);\n    }\n\n    // Make an advance in the token stream and state based on the candidate destination state which was returned by Transit().\n    // May return a new state on state pop.\n    template <unsigned parseFlags, typename InputStream, typename Handler>\n    RAPIDJSON_FORCEINLINE IterativeParsingState Transit(IterativeParsingState src, Token token, IterativeParsingState dst, InputStream& is, Handler& handler) {\n        (void)token;\n\n        switch (dst) {\n        case IterativeParsingErrorState:\n            return dst;\n\n        case IterativeParsingObjectInitialState:\n        case IterativeParsingArrayInitialState:\n        {\n            // Push the state(Element or MemeberValue) if we are nested in another array or value of member.\n            // In this way we can get the correct state on ObjectFinish or ArrayFinish by frame pop.\n            IterativeParsingState n = src;\n            if (src == IterativeParsingArrayInitialState || src == IterativeParsingElementDelimiterState)\n                n = IterativeParsingElementState;\n            else if (src == IterativeParsingKeyValueDelimiterState)\n                n = IterativeParsingMemberValueState;\n            // Push current state.\n            *stack_.template Push<SizeType>(1) = n;\n            // Initialize and push the member/element count.\n            *stack_.template Push<SizeType>(1) = 0;\n            // Call handler\n            bool hr = (dst == IterativeParsingObjectInitialState) ? 
handler.StartObject() : handler.StartArray();\n            // On handler short circuits the parsing.\n            if (!hr) {\n                RAPIDJSON_PARSE_ERROR_NORETURN(kParseErrorTermination, is.Tell());\n                return IterativeParsingErrorState;\n            }\n            else {\n                is.Take();\n                return dst;\n            }\n        }\n\n        case IterativeParsingMemberKeyState:\n            ParseString<parseFlags>(is, handler, true);\n            if (HasParseError())\n                return IterativeParsingErrorState;\n            else\n                return dst;\n\n        case IterativeParsingKeyValueDelimiterState:\n            RAPIDJSON_ASSERT(token == ColonToken);\n            is.Take();\n            return dst;\n\n        case IterativeParsingMemberValueState:\n            // Must be non-compound value. Or it would be ObjectInitial or ArrayInitial state.\n            ParseValue<parseFlags>(is, handler);\n            if (HasParseError()) {\n                return IterativeParsingErrorState;\n            }\n            return dst;\n\n        case IterativeParsingElementState:\n            // Must be non-compound value. 
Or it would be ObjectInitial or ArrayInitial state.\n            ParseValue<parseFlags>(is, handler);\n            if (HasParseError()) {\n                return IterativeParsingErrorState;\n            }\n            return dst;\n\n        case IterativeParsingMemberDelimiterState:\n        case IterativeParsingElementDelimiterState:\n            is.Take();\n            // Update member/element count.\n            *stack_.template Top<SizeType>() = *stack_.template Top<SizeType>() + 1;\n            return dst;\n\n        case IterativeParsingObjectFinishState:\n        {\n            // Transit from delimiter is only allowed when trailing commas are enabled\n            if (!(parseFlags & kParseTrailingCommasFlag) && src == IterativeParsingMemberDelimiterState) {\n                RAPIDJSON_PARSE_ERROR_NORETURN(kParseErrorObjectMissName, is.Tell());\n                return IterativeParsingErrorState;\n            }\n            // Get member count.\n            SizeType c = *stack_.template Pop<SizeType>(1);\n            // If the object is not empty, count the last member.\n            if (src == IterativeParsingMemberValueState)\n                ++c;\n            // Restore the state.\n            IterativeParsingState n = static_cast<IterativeParsingState>(*stack_.template Pop<SizeType>(1));\n            // Transit to Finish state if this is the topmost scope.\n            if (n == IterativeParsingStartState)\n                n = IterativeParsingFinishState;\n            // Call handler\n            bool hr = handler.EndObject(c);\n            // On handler short circuits the parsing.\n            if (!hr) {\n                RAPIDJSON_PARSE_ERROR_NORETURN(kParseErrorTermination, is.Tell());\n                return IterativeParsingErrorState;\n            }\n            else {\n                is.Take();\n                return n;\n            }\n        }\n\n        case IterativeParsingArrayFinishState:\n        {\n            // Transit from delimiter is only 
allowed when trailing commas are enabled\n            if (!(parseFlags & kParseTrailingCommasFlag) && src == IterativeParsingElementDelimiterState) {\n                RAPIDJSON_PARSE_ERROR_NORETURN(kParseErrorValueInvalid, is.Tell());\n                return IterativeParsingErrorState;\n            }\n            // Get element count.\n            SizeType c = *stack_.template Pop<SizeType>(1);\n            // If the array is not empty, count the last element.\n            if (src == IterativeParsingElementState)\n                ++c;\n            // Restore the state.\n            IterativeParsingState n = static_cast<IterativeParsingState>(*stack_.template Pop<SizeType>(1));\n            // Transit to Finish state if this is the topmost scope.\n            if (n == IterativeParsingStartState)\n                n = IterativeParsingFinishState;\n            // Call handler\n            bool hr = handler.EndArray(c);\n            // On handler short circuits the parsing.\n            if (!hr) {\n                RAPIDJSON_PARSE_ERROR_NORETURN(kParseErrorTermination, is.Tell());\n                return IterativeParsingErrorState;\n            }\n            else {\n                is.Take();\n                return n;\n            }\n        }\n\n        default:\n            // This branch is for IterativeParsingValueState actually.\n            // Use `default:` rather than\n            // `case IterativeParsingValueState:` is for code coverage.\n\n            // The IterativeParsingStartState is not enumerated in this switch-case.\n            // It is impossible for that case. And it can be caught by following assertion.\n\n            // The IterativeParsingFinishState is not enumerated in this switch-case either.\n            // It is a \"derivative\" state which cannot triggered from Predict() directly.\n            // Therefore it cannot happen here. 
And it can be caught by following assertion.\n            RAPIDJSON_ASSERT(dst == IterativeParsingValueState);\n\n            // Must be non-compound value. Or it would be ObjectInitial or ArrayInitial state.\n            ParseValue<parseFlags>(is, handler);\n            if (HasParseError()) {\n                return IterativeParsingErrorState;\n            }\n            return IterativeParsingFinishState;\n        }\n    }\n\n    template <typename InputStream>\n    void HandleError(IterativeParsingState src, InputStream& is) {\n        if (HasParseError()) {\n            // Error flag has been set.\n            return;\n        }\n\n        switch (src) {\n        case IterativeParsingStartState:            RAPIDJSON_PARSE_ERROR(kParseErrorDocumentEmpty, is.Tell()); return;\n        case IterativeParsingFinishState:           RAPIDJSON_PARSE_ERROR(kParseErrorDocumentRootNotSingular, is.Tell()); return;\n        case IterativeParsingObjectInitialState:\n        case IterativeParsingMemberDelimiterState:  RAPIDJSON_PARSE_ERROR(kParseErrorObjectMissName, is.Tell()); return;\n        case IterativeParsingMemberKeyState:        RAPIDJSON_PARSE_ERROR(kParseErrorObjectMissColon, is.Tell()); return;\n        case IterativeParsingMemberValueState:      RAPIDJSON_PARSE_ERROR(kParseErrorObjectMissCommaOrCurlyBracket, is.Tell()); return;\n        case IterativeParsingKeyValueDelimiterState:\n        case IterativeParsingArrayInitialState:\n        case IterativeParsingElementDelimiterState: RAPIDJSON_PARSE_ERROR(kParseErrorValueInvalid, is.Tell()); return;\n        default: RAPIDJSON_ASSERT(src == IterativeParsingElementState); RAPIDJSON_PARSE_ERROR(kParseErrorArrayMissCommaOrSquareBracket, is.Tell()); return;\n        }\n    }\n\n    RAPIDJSON_FORCEINLINE bool IsIterativeParsingDelimiterState(IterativeParsingState s) const {\n        return s >= IterativeParsingElementDelimiterState;\n    }\n\n    RAPIDJSON_FORCEINLINE bool 
IsIterativeParsingCompleteState(IterativeParsingState s) const {\n        return s <= IterativeParsingErrorState;\n    }\n\n    template <unsigned parseFlags, typename InputStream, typename Handler>\n    ParseResult IterativeParse(InputStream& is, Handler& handler) {\n        parseResult_.Clear();\n        ClearStackOnExit scope(*this);\n        IterativeParsingState state = IterativeParsingStartState;\n\n        SkipWhitespaceAndComments<parseFlags>(is);\n        RAPIDJSON_PARSE_ERROR_EARLY_RETURN(parseResult_);\n        while (is.Peek() != '\\0') {\n            Token t = Tokenize(is.Peek());\n            IterativeParsingState n = Predict(state, t);\n            IterativeParsingState d = Transit<parseFlags>(state, t, n, is, handler);\n\n            if (d == IterativeParsingErrorState) {\n                HandleError(state, is);\n                break;\n            }\n\n            state = d;\n\n            // Do not further consume streams if a root JSON has been parsed.\n            if ((parseFlags & kParseStopWhenDoneFlag) && state == IterativeParsingFinishState)\n                break;\n\n            SkipWhitespaceAndComments<parseFlags>(is);\n            RAPIDJSON_PARSE_ERROR_EARLY_RETURN(parseResult_);\n        }\n\n        // Handle the end of file.\n        if (state != IterativeParsingFinishState)\n            HandleError(state, is);\n\n        return parseResult_;\n    }\n\n    static const size_t kDefaultStackCapacity = 256;    //!< Default stack capacity in bytes for storing a single decoded string.\n    internal::Stack<StackAllocator> stack_;  //!< A stack for storing decoded string temporarily during non-destructive parsing.\n    ParseResult parseResult_;\n    IterativeParsingState state_;\n}; // class GenericReader\n\n//! 
Reader with UTF8 encoding and default allocator.\ntypedef GenericReader<UTF8<>, UTF8<> > Reader;\n\nRAPIDJSON_NAMESPACE_END\n\n#if defined(__clang__) || defined(_MSC_VER)\nRAPIDJSON_DIAG_POP\n#endif\n\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_READER_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/schema.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip. All rights reserved.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_SCHEMA_H_\n#define RAPIDJSON_SCHEMA_H_\n\n#include \"document.h\"\n#include \"pointer.h\"\n#include \"stringbuffer.h\"\n#include \"error/en.h\"\n#include \"uri.h\"\n#include <cmath> // abs, floor\n\n#if !defined(RAPIDJSON_SCHEMA_USE_INTERNALREGEX)\n#define RAPIDJSON_SCHEMA_USE_INTERNALREGEX 1\n#endif\n\n#if !defined(RAPIDJSON_SCHEMA_USE_STDREGEX) || !(__cplusplus >=201103L || (defined(_MSC_VER) && _MSC_VER >= 1800))\n#define RAPIDJSON_SCHEMA_USE_STDREGEX 0\n#endif\n\n#if RAPIDJSON_SCHEMA_USE_INTERNALREGEX\n#include \"internal/regex.h\"\n#elif RAPIDJSON_SCHEMA_USE_STDREGEX\n#include <regex>\n#endif\n\n#if RAPIDJSON_SCHEMA_USE_INTERNALREGEX || RAPIDJSON_SCHEMA_USE_STDREGEX\n#define RAPIDJSON_SCHEMA_HAS_REGEX 1\n#else\n#define RAPIDJSON_SCHEMA_HAS_REGEX 0\n#endif\n\n#ifndef RAPIDJSON_SCHEMA_VERBOSE\n#define RAPIDJSON_SCHEMA_VERBOSE 0\n#endif\n\nRAPIDJSON_DIAG_PUSH\n\n#if defined(__GNUC__)\nRAPIDJSON_DIAG_OFF(effc++)\n#endif\n\n#ifdef __clang__\nRAPIDJSON_DIAG_OFF(weak-vtables)\nRAPIDJSON_DIAG_OFF(exit-time-destructors)\nRAPIDJSON_DIAG_OFF(c++98-compat-pedantic)\nRAPIDJSON_DIAG_OFF(variadic-macros)\n#elif defined(_MSC_VER)\nRAPIDJSON_DIAG_OFF(4512) // assignment operator could not be 
generated\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n///////////////////////////////////////////////////////////////////////////////\n// Verbose Utilities\n\n#if RAPIDJSON_SCHEMA_VERBOSE\n\nnamespace internal {\n\ninline void PrintInvalidKeywordData(const char* keyword) {\n    printf(\"    Fail keyword: '%s'\\n\", keyword);\n}\n\ninline void PrintInvalidKeywordData(const wchar_t* keyword) {\n    wprintf(L\"    Fail keyword: '%ls'\\n\", keyword);\n}\n\ninline void PrintInvalidDocumentData(const char* document) {\n    printf(\"    Fail document: '%s'\\n\", document);\n}\n\ninline void PrintInvalidDocumentData(const wchar_t* document) {\n    wprintf(L\"    Fail document: '%ls'\\n\", document);\n}\n\ninline void PrintValidatorPointersData(const char* s, const char* d, unsigned depth) {\n    printf(\"    Sch: %*s'%s'\\n    Doc: %*s'%s'\\n\", depth * 4, \" \", s, depth * 4, \" \", d);\n}\n\ninline void PrintValidatorPointersData(const wchar_t* s, const wchar_t* d, unsigned depth) {\n    wprintf(L\"    Sch: %*ls'%ls'\\n    Doc: %*ls'%ls'\\n\", depth * 4, L\" \", s, depth * 4, L\" \", d);\n}\n\ninline void PrintSchemaIdsData(const char* base, const char* local, const char* resolved) {\n    printf(\"    Resolving id: Base: '%s', Local: '%s', Resolved: '%s'\\n\", base, local, resolved);\n}\n\ninline void PrintSchemaIdsData(const wchar_t* base, const wchar_t* local, const wchar_t* resolved) {\n    wprintf(L\"    Resolving id: Base: '%ls', Local: '%ls', Resolved: '%ls'\\n\", base, local, resolved);\n}\n\ninline void PrintMethodData(const char* method) {\n    printf(\"%s\\n\", method);\n}\n\ninline void PrintMethodData(const char* method, bool b) {\n    printf(\"%s, Data: '%s'\\n\", method, b ? 
\"true\" : \"false\");\n}\n\ninline void PrintMethodData(const char* method, int64_t i) {\n    printf(\"%s, Data: '%\" PRId64 \"'\\n\", method, i);\n}\n\ninline void PrintMethodData(const char* method, uint64_t u) {\n    printf(\"%s, Data: '%\" PRIu64 \"'\\n\", method, u);\n}\n\ninline void PrintMethodData(const char* method, double d) {\n    printf(\"%s, Data: '%lf'\\n\", method, d);\n}\n\ninline void PrintMethodData(const char* method, const char* s) {\n    printf(\"%s, Data: '%s'\\n\", method, s);\n}\n\ninline void PrintMethodData(const char* method, const wchar_t* s) {\n    wprintf(L\"%hs, Data: '%ls'\\n\", method, s);\n}\n\ninline void PrintMethodData(const char* method, const char* s1, const char* s2) {\n    printf(\"%s, Data: '%s', '%s'\\n\", method, s1, s2);\n}\n\ninline void PrintMethodData(const char* method, const wchar_t* s1, const wchar_t* s2) {\n    wprintf(L\"%hs, Data: '%ls', '%ls'\\n\", method, s1, s2);\n}\n\n} // namespace internal\n\n#endif // RAPIDJSON_SCHEMA_VERBOSE\n\n#ifndef RAPIDJSON_SCHEMA_PRINT\n#if RAPIDJSON_SCHEMA_VERBOSE\n#define RAPIDJSON_SCHEMA_PRINT(name, ...) internal::Print##name##Data(__VA_ARGS__)\n#else\n#define RAPIDJSON_SCHEMA_PRINT(name, ...)\n#endif\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// RAPIDJSON_INVALID_KEYWORD_RETURN\n\n#define RAPIDJSON_INVALID_KEYWORD_RETURN(code)\\\nRAPIDJSON_MULTILINEMACRO_BEGIN\\\n    context.invalidCode = code;\\\n    context.invalidKeyword = SchemaType::GetValidateErrorKeyword(code).GetString();\\\n    RAPIDJSON_SCHEMA_PRINT(InvalidKeyword, context.invalidKeyword);\\\n    return false;\\\nRAPIDJSON_MULTILINEMACRO_END\n\n///////////////////////////////////////////////////////////////////////////////\n// ValidateFlag\n\n/*! 
\\def RAPIDJSON_VALIDATE_DEFAULT_FLAGS\n    \\ingroup RAPIDJSON_CONFIG\n    \\brief User-defined kValidateDefaultFlags definition.\n\n    User can define this as any \\c ValidateFlag combinations.\n*/\n#ifndef RAPIDJSON_VALIDATE_DEFAULT_FLAGS\n#define RAPIDJSON_VALIDATE_DEFAULT_FLAGS kValidateNoFlags\n#endif\n\n//! Combination of validate flags\nenum ValidateFlag {\n    kValidateNoFlags = 0,                                       //!< No flags are set.\n    kValidateContinueOnErrorFlag = 1,                           //!< Don't stop after first validation error.\n    kValidateReadFlag = 2,                                      //!< Validation is for a read semantic.\n    kValidateWriteFlag = 4,                                     //!< Validation is for a write semantic.\n    kValidateDefaultFlags = RAPIDJSON_VALIDATE_DEFAULT_FLAGS    //!< Default validate flags. Can be customized by defining RAPIDJSON_VALIDATE_DEFAULT_FLAGS\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// Specification\nenum SchemaDraft {\n    kDraftUnknown = -1,\n    kDraftNone = 0,\n    kDraft03 = 3,\n    kDraftMin = 4,                       //!< Current minimum supported draft\n    kDraft04 = 4,\n    kDraft05 = 5,\n    kDraftMax = 5,                       //!< Current maximum supported draft\n    kDraft06 = 6,\n    kDraft07 = 7,\n    kDraft2019_09 = 8,\n    kDraft2020_12 = 9\n};\n\nenum OpenApiVersion {\n    kVersionUnknown = -1,\n    kVersionNone = 0,\n    kVersionMin = 2,                      //!< Current minimum supported version\n    kVersion20 = 2,\n    kVersion30 = 3,\n    kVersionMax = 3,                      //!< Current maximum supported version\n    kVersion31 = 4,\n};\n\nstruct Specification {\n    Specification(SchemaDraft d) : draft(d), oapi(kVersionNone) {}\n    Specification(OpenApiVersion o) : oapi(o) {\n        if (oapi == kVersion20) draft = kDraft04;\n        else if (oapi == kVersion30) draft = kDraft05;\n        else if (oapi == 
kVersion31) draft = kDraft2020_12;\n        else draft = kDraft04;\n    }\n    ~Specification() {}\n    bool IsSupported() const {\n        return ((draft >= kDraftMin && draft <= kDraftMax) && ((oapi == kVersionNone) || (oapi >= kVersionMin && oapi <= kVersionMax)));\n    }\n    SchemaDraft draft;\n    OpenApiVersion oapi;\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// Forward declarations\n\ntemplate <typename ValueType, typename Allocator>\nclass GenericSchemaDocument;\n\nnamespace internal {\n\ntemplate <typename SchemaDocumentType>\nclass Schema;\n\n///////////////////////////////////////////////////////////////////////////////\n// ISchemaValidator\n\nclass ISchemaValidator {\npublic:\n    virtual ~ISchemaValidator() {}\n    virtual bool IsValid() const = 0;\n    virtual void SetValidateFlags(unsigned flags) = 0;\n    virtual unsigned GetValidateFlags() const = 0;\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// ISchemaStateFactory\n\ntemplate <typename SchemaType>\nclass ISchemaStateFactory {\npublic:\n    virtual ~ISchemaStateFactory() {}\n    virtual ISchemaValidator* CreateSchemaValidator(const SchemaType&, const bool inheritContinueOnErrors) = 0;\n    virtual void DestroySchemaValidator(ISchemaValidator* validator) = 0;\n    virtual void* CreateHasher() = 0;\n    virtual uint64_t GetHashCode(void* hasher) = 0;\n    virtual void DestroryHasher(void* hasher) = 0;\n    virtual void* MallocState(size_t size) = 0;\n    virtual void FreeState(void* p) = 0;\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// IValidationErrorHandler\n\ntemplate <typename SchemaType>\nclass IValidationErrorHandler {\npublic:\n    typedef typename SchemaType::Ch Ch;\n    typedef typename SchemaType::SValue SValue;\n\n    virtual ~IValidationErrorHandler() {}\n\n    virtual void NotMultipleOf(int64_t actual, const SValue& expected) = 0;\n    virtual void 
NotMultipleOf(uint64_t actual, const SValue& expected) = 0;\n    virtual void NotMultipleOf(double actual, const SValue& expected) = 0;\n    virtual void AboveMaximum(int64_t actual, const SValue& expected, bool exclusive) = 0;\n    virtual void AboveMaximum(uint64_t actual, const SValue& expected, bool exclusive) = 0;\n    virtual void AboveMaximum(double actual, const SValue& expected, bool exclusive) = 0;\n    virtual void BelowMinimum(int64_t actual, const SValue& expected, bool exclusive) = 0;\n    virtual void BelowMinimum(uint64_t actual, const SValue& expected, bool exclusive) = 0;\n    virtual void BelowMinimum(double actual, const SValue& expected, bool exclusive) = 0;\n\n    virtual void TooLong(const Ch* str, SizeType length, SizeType expected) = 0;\n    virtual void TooShort(const Ch* str, SizeType length, SizeType expected) = 0;\n    virtual void DoesNotMatch(const Ch* str, SizeType length) = 0;\n\n    virtual void DisallowedItem(SizeType index) = 0;\n    virtual void TooFewItems(SizeType actualCount, SizeType expectedCount) = 0;\n    virtual void TooManyItems(SizeType actualCount, SizeType expectedCount) = 0;\n    virtual void DuplicateItems(SizeType index1, SizeType index2) = 0;\n\n    virtual void TooManyProperties(SizeType actualCount, SizeType expectedCount) = 0;\n    virtual void TooFewProperties(SizeType actualCount, SizeType expectedCount) = 0;\n    virtual void StartMissingProperties() = 0;\n    virtual void AddMissingProperty(const SValue& name) = 0;\n    virtual bool EndMissingProperties() = 0;\n    virtual void PropertyViolations(ISchemaValidator** subvalidators, SizeType count) = 0;\n    virtual void DisallowedProperty(const Ch* name, SizeType length) = 0;\n\n    virtual void StartDependencyErrors() = 0;\n    virtual void StartMissingDependentProperties() = 0;\n    virtual void AddMissingDependentProperty(const SValue& targetName) = 0;\n    virtual void EndMissingDependentProperties(const SValue& sourceName) = 0;\n    virtual void 
AddDependencySchemaError(const SValue& souceName, ISchemaValidator* subvalidator) = 0;\n    virtual bool EndDependencyErrors() = 0;\n\n    virtual void DisallowedValue(const ValidateErrorCode code) = 0;\n    virtual void StartDisallowedType() = 0;\n    virtual void AddExpectedType(const typename SchemaType::ValueType& expectedType) = 0;\n    virtual void EndDisallowedType(const typename SchemaType::ValueType& actualType) = 0;\n    virtual void NotAllOf(ISchemaValidator** subvalidators, SizeType count) = 0;\n    virtual void NoneOf(ISchemaValidator** subvalidators, SizeType count) = 0;\n    virtual void NotOneOf(ISchemaValidator** subvalidators, SizeType count) = 0;\n    virtual void MultipleOneOf(SizeType index1, SizeType index2) = 0;\n    virtual void Disallowed() = 0;\n    virtual void DisallowedWhenWriting() = 0;\n    virtual void DisallowedWhenReading() = 0;\n};\n\n\n///////////////////////////////////////////////////////////////////////////////\n// Hasher\n\n// For comparison of compound value\ntemplate<typename Encoding, typename Allocator>\nclass Hasher {\npublic:\n    typedef typename Encoding::Ch Ch;\n\n    Hasher(Allocator* allocator = 0, size_t stackCapacity = kDefaultSize) : stack_(allocator, stackCapacity) {}\n\n    bool Null() { return WriteType(kNullType); }\n    bool Bool(bool b) { return WriteType(b ? 
kTrueType : kFalseType); }\n    bool Int(int i) { Number n; n.u.i = i; n.d = static_cast<double>(i); return WriteNumber(n); }\n    bool Uint(unsigned u) { Number n; n.u.u = u; n.d = static_cast<double>(u); return WriteNumber(n); }\n    bool Int64(int64_t i) { Number n; n.u.i = i; n.d = static_cast<double>(i); return WriteNumber(n); }\n    bool Uint64(uint64_t u) { Number n; n.u.u = u; n.d = static_cast<double>(u); return WriteNumber(n); }\n    bool Double(double d) {\n        Number n;\n        if (d < 0) n.u.i = static_cast<int64_t>(d);\n        else       n.u.u = static_cast<uint64_t>(d);\n        n.d = d;\n        return WriteNumber(n);\n    }\n\n    bool RawNumber(const Ch* str, SizeType len, bool) {\n        WriteBuffer(kNumberType, str, len * sizeof(Ch));\n        return true;\n    }\n\n    bool String(const Ch* str, SizeType len, bool) {\n        WriteBuffer(kStringType, str, len * sizeof(Ch));\n        return true;\n    }\n\n    bool StartObject() { return true; }\n    bool Key(const Ch* str, SizeType len, bool copy) { return String(str, len, copy); }\n    bool EndObject(SizeType memberCount) { \n        uint64_t h = Hash(0, kObjectType);\n        uint64_t* kv = stack_.template Pop<uint64_t>(memberCount * 2);\n        for (SizeType i = 0; i < memberCount; i++)\n            // Issue #2205\n            // Hasing the key to avoid key=value cases with bug-prone zero-value hash\n            h ^= Hash(Hash(0, kv[i * 2]), kv[i * 2 + 1]);  // Use xor to achieve member order insensitive\n        *stack_.template Push<uint64_t>() = h;\n        return true;\n    }\n    \n    bool StartArray() { return true; }\n    bool EndArray(SizeType elementCount) { \n        uint64_t h = Hash(0, kArrayType);\n        uint64_t* e = stack_.template Pop<uint64_t>(elementCount);\n        for (SizeType i = 0; i < elementCount; i++)\n            h = Hash(h, e[i]); // Use hash to achieve element order sensitive\n        *stack_.template Push<uint64_t>() = h;\n        return true;\n    
}\n\n    bool IsValid() const { return stack_.GetSize() == sizeof(uint64_t); }\n\n    uint64_t GetHashCode() const {\n        RAPIDJSON_ASSERT(IsValid());\n        return *stack_.template Top<uint64_t>();\n    }\n\nprivate:\n    static const size_t kDefaultSize = 256;\n    struct Number {\n        union U {\n            uint64_t u;\n            int64_t i;\n        }u;\n        double d;\n    };\n\n    bool WriteType(Type type) { return WriteBuffer(type, 0, 0); }\n    \n    bool WriteNumber(const Number& n) { return WriteBuffer(kNumberType, &n, sizeof(n)); }\n    \n    bool WriteBuffer(Type type, const void* data, size_t len) {\n        // FNV-1a from http://isthe.com/chongo/tech/comp/fnv/\n        uint64_t h = Hash(RAPIDJSON_UINT64_C2(0xcbf29ce4, 0x84222325), type);\n        const unsigned char* d = static_cast<const unsigned char*>(data);\n        for (size_t i = 0; i < len; i++)\n            h = Hash(h, d[i]);\n        *stack_.template Push<uint64_t>() = h;\n        return true;\n    }\n\n    static uint64_t Hash(uint64_t h, uint64_t d) {\n        static const uint64_t kPrime = RAPIDJSON_UINT64_C2(0x00000100, 0x000001b3);\n        h ^= d;\n        h *= kPrime;\n        return h;\n    }\n\n    Stack<Allocator> stack_;\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// SchemaValidationContext\n\ntemplate <typename SchemaDocumentType>\nstruct SchemaValidationContext {\n    typedef Schema<SchemaDocumentType> SchemaType;\n    typedef ISchemaStateFactory<SchemaType> SchemaValidatorFactoryType;\n    typedef IValidationErrorHandler<SchemaType> ErrorHandlerType;\n    typedef typename SchemaType::ValueType ValueType;\n    typedef typename ValueType::Ch Ch;\n\n    enum PatternValidatorType {\n        kPatternValidatorOnly,\n        kPatternValidatorWithProperty,\n        kPatternValidatorWithAdditionalProperty\n    };\n\n    SchemaValidationContext(SchemaValidatorFactoryType& f, ErrorHandlerType& eh, const SchemaType* s, unsigned fl = 
0) :\n        factory(f),\n        error_handler(eh),\n        schema(s),\n        flags(fl),\n        valueSchema(),\n        invalidKeyword(),\n        invalidCode(),\n        hasher(),\n        arrayElementHashCodes(),\n        validators(),\n        validatorCount(),\n        patternPropertiesValidators(),\n        patternPropertiesValidatorCount(),\n        patternPropertiesSchemas(),\n        patternPropertiesSchemaCount(),\n        valuePatternValidatorType(kPatternValidatorOnly),\n        propertyExist(),\n        inArray(false),\n        valueUniqueness(false),\n        arrayUniqueness(false)\n    {\n    }\n\n    ~SchemaValidationContext() {\n        if (hasher)\n            factory.DestroryHasher(hasher);\n        if (validators) {\n            for (SizeType i = 0; i < validatorCount; i++) {\n                if (validators[i]) {\n                    factory.DestroySchemaValidator(validators[i]);\n                }\n            }\n            factory.FreeState(validators);\n        }\n        if (patternPropertiesValidators) {\n            for (SizeType i = 0; i < patternPropertiesValidatorCount; i++) {\n                if (patternPropertiesValidators[i]) {\n                    factory.DestroySchemaValidator(patternPropertiesValidators[i]);\n                }\n            }\n            factory.FreeState(patternPropertiesValidators);\n        }\n        if (patternPropertiesSchemas)\n            factory.FreeState(patternPropertiesSchemas);\n        if (propertyExist)\n            factory.FreeState(propertyExist);\n    }\n\n    SchemaValidatorFactoryType& factory;\n    ErrorHandlerType& error_handler;\n    const SchemaType* schema;\n    unsigned flags;\n    const SchemaType* valueSchema;\n    const Ch* invalidKeyword;\n    ValidateErrorCode invalidCode;\n    void* hasher; // Only validator access\n    void* arrayElementHashCodes; // Only validator access this\n    ISchemaValidator** validators;\n    SizeType validatorCount;\n    ISchemaValidator** 
patternPropertiesValidators;\n    SizeType patternPropertiesValidatorCount;\n    const SchemaType** patternPropertiesSchemas;\n    SizeType patternPropertiesSchemaCount;\n    PatternValidatorType valuePatternValidatorType;\n    PatternValidatorType objectPatternValidatorType;\n    SizeType arrayElementIndex;\n    bool* propertyExist;\n    bool inArray;\n    bool valueUniqueness;\n    bool arrayUniqueness;\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// Schema\n\ntemplate <typename SchemaDocumentType>\nclass Schema {\npublic:\n    typedef typename SchemaDocumentType::ValueType ValueType;\n    typedef typename SchemaDocumentType::AllocatorType AllocatorType;\n    typedef typename SchemaDocumentType::PointerType PointerType;\n    typedef typename ValueType::EncodingType EncodingType;\n    typedef typename EncodingType::Ch Ch;\n    typedef SchemaValidationContext<SchemaDocumentType> Context;\n    typedef Schema<SchemaDocumentType> SchemaType;\n    typedef GenericValue<EncodingType, AllocatorType> SValue;\n    typedef IValidationErrorHandler<Schema> ErrorHandler;\n    typedef GenericUri<ValueType, AllocatorType> UriType;\n    friend class GenericSchemaDocument<ValueType, AllocatorType>;\n\n    Schema(SchemaDocumentType* schemaDocument, const PointerType& p, const ValueType& value, const ValueType& document, AllocatorType* allocator, const UriType& id = UriType()) :\n        allocator_(allocator),\n        uri_(schemaDocument->GetURI(), *allocator),\n        id_(id, allocator),\n        spec_(schemaDocument->GetSpecification()),\n        pointer_(p, allocator),\n        typeless_(schemaDocument->GetTypeless()),\n        enum_(),\n        enumCount_(),\n        not_(),\n        type_((1 << kTotalSchemaType) - 1), // typeless\n        validatorCount_(),\n        notValidatorIndex_(),\n        properties_(),\n        additionalPropertiesSchema_(),\n        patternProperties_(),\n        patternPropertyCount_(),\n        
propertyCount_(),\n        minProperties_(),\n        maxProperties_(SizeType(~0)),\n        additionalProperties_(true),\n        hasDependencies_(),\n        hasRequired_(),\n        hasSchemaDependencies_(),\n        additionalItemsSchema_(),\n        itemsList_(),\n        itemsTuple_(),\n        itemsTupleCount_(),\n        minItems_(),\n        maxItems_(SizeType(~0)),\n        additionalItems_(true),\n        uniqueItems_(false),\n        pattern_(),\n        minLength_(0),\n        maxLength_(~SizeType(0)),\n        exclusiveMinimum_(false),\n        exclusiveMaximum_(false),\n        defaultValueLength_(0),\n        readOnly_(false),\n        writeOnly_(false),\n        nullable_(false)\n    {\n        GenericStringBuffer<EncodingType> sb;\n        p.StringifyUriFragment(sb);\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::Schema\", sb.GetString(), id.GetString());\n\n        typedef typename ValueType::ConstValueIterator ConstValueIterator;\n        typedef typename ValueType::ConstMemberIterator ConstMemberIterator;\n\n        // PR #1393\n        // Early add this Schema and its $ref(s) in schemaDocument's map to avoid infinite\n        // recursion (with recursive schemas), since schemaDocument->getSchema() is always\n        // checked before creating a new one. 
Don't cache typeless_, though.\n        if (this != typeless_) {\n          typedef typename SchemaDocumentType::SchemaEntry SchemaEntry;\n          SchemaEntry *entry = schemaDocument->schemaMap_.template Push<SchemaEntry>();\n          new (entry) SchemaEntry(pointer_, this, true, allocator_);\n          schemaDocument->AddSchemaRefs(this);\n        }\n\n        if (!value.IsObject())\n            return;\n\n        // If we have an id property, resolve it with the in-scope id\n        // Not supported for open api 2.0 or 3.0\n        if (spec_.oapi != kVersion20 && spec_.oapi != kVersion30)\n        if (const ValueType* v = GetMember(value, GetIdString())) {\n            if (v->IsString()) {\n                UriType local(*v, allocator);\n                id_ = local.Resolve(id_, allocator);\n                    RAPIDJSON_SCHEMA_PRINT(SchemaIds, id.GetString(), v->GetString(), id_.GetString());\n            }\n        }\n\n        if (const ValueType* v = GetMember(value, GetTypeString())) {\n            type_ = 0;\n            if (v->IsString())\n                AddType(*v);\n            else if (v->IsArray())\n                for (ConstValueIterator itr = v->Begin(); itr != v->End(); ++itr)\n                    AddType(*itr);\n        }\n\n        if (const ValueType* v = GetMember(value, GetEnumString())) {\n            if (v->IsArray() && v->Size() > 0) {\n                enum_ = static_cast<uint64_t*>(allocator_->Malloc(sizeof(uint64_t) * v->Size()));\n                for (ConstValueIterator itr = v->Begin(); itr != v->End(); ++itr) {\n                    typedef Hasher<EncodingType, MemoryPoolAllocator<AllocatorType> > EnumHasherType;\n                    char buffer[256u + 24];\n                    MemoryPoolAllocator<AllocatorType> hasherAllocator(buffer, sizeof(buffer));\n                    EnumHasherType h(&hasherAllocator, 256);\n                    itr->Accept(h);\n                    enum_[enumCount_++] = h.GetHashCode();\n                }\n        
    }\n        }\n\n        if (schemaDocument)\n            AssignIfExist(allOf_, *schemaDocument, p, value, GetAllOfString(), document);\n\n        // AnyOf, OneOf, Not not supported for open api 2.0\n        if (schemaDocument && spec_.oapi != kVersion20) {\n            AssignIfExist(anyOf_, *schemaDocument, p, value, GetAnyOfString(), document);\n            AssignIfExist(oneOf_, *schemaDocument, p, value, GetOneOfString(), document);\n\n            if (const ValueType* v = GetMember(value, GetNotString())) {\n                schemaDocument->CreateSchema(&not_, p.Append(GetNotString(), allocator_), *v, document, id_);\n                notValidatorIndex_ = validatorCount_;\n                validatorCount_++;\n            }\n        }\n\n        // Object\n\n        const ValueType* properties = GetMember(value, GetPropertiesString());\n        const ValueType* required = GetMember(value, GetRequiredString());\n        const ValueType* dependencies = GetMember(value, GetDependenciesString());\n        {\n            // Gather properties from properties/required/dependencies\n            SValue allProperties(kArrayType);\n\n            if (properties && properties->IsObject())\n                for (ConstMemberIterator itr = properties->MemberBegin(); itr != properties->MemberEnd(); ++itr)\n                    AddUniqueElement(allProperties, itr->name);\n\n            if (required && required->IsArray())\n                for (ConstValueIterator itr = required->Begin(); itr != required->End(); ++itr)\n                    if (itr->IsString())\n                        AddUniqueElement(allProperties, *itr);\n\n            // Dependencies not supported for open api 2.0 and 3.0\n            if (spec_.oapi != kVersion20 && spec_.oapi != kVersion30)\n            if (dependencies && dependencies->IsObject())\n                for (ConstMemberIterator itr = dependencies->MemberBegin(); itr != dependencies->MemberEnd(); ++itr) {\n                    
AddUniqueElement(allProperties, itr->name);\n                    if (itr->value.IsArray())\n                        for (ConstValueIterator i = itr->value.Begin(); i != itr->value.End(); ++i)\n                            if (i->IsString())\n                                AddUniqueElement(allProperties, *i);\n                }\n\n            if (allProperties.Size() > 0) {\n                propertyCount_ = allProperties.Size();\n                properties_ = static_cast<Property*>(allocator_->Malloc(sizeof(Property) * propertyCount_));\n                for (SizeType i = 0; i < propertyCount_; i++) {\n                    new (&properties_[i]) Property();\n                    properties_[i].name = allProperties[i];\n                    properties_[i].schema = typeless_;\n                }\n            }\n        }\n\n        if (properties && properties->IsObject()) {\n            PointerType q = p.Append(GetPropertiesString(), allocator_);\n            for (ConstMemberIterator itr = properties->MemberBegin(); itr != properties->MemberEnd(); ++itr) {\n                SizeType index;\n                if (FindPropertyIndex(itr->name, &index))\n                    schemaDocument->CreateSchema(&properties_[index].schema, q.Append(itr->name, allocator_), itr->value, document, id_);\n            }\n        }\n\n        // PatternProperties not supported for open api 2.0 and 3.0\n        if (spec_.oapi != kVersion20 && spec_.oapi != kVersion30)\n        if (const ValueType* v = GetMember(value, GetPatternPropertiesString())) {\n            PointerType q = p.Append(GetPatternPropertiesString(), allocator_);\n            patternProperties_ = static_cast<PatternProperty*>(allocator_->Malloc(sizeof(PatternProperty) * v->MemberCount()));\n            patternPropertyCount_ = 0;\n\n            for (ConstMemberIterator itr = v->MemberBegin(); itr != v->MemberEnd(); ++itr) {\n                new (&patternProperties_[patternPropertyCount_]) PatternProperty();\n                
PointerType r = q.Append(itr->name, allocator_);\n                patternProperties_[patternPropertyCount_].pattern = CreatePattern(itr->name, schemaDocument, r);\n                schemaDocument->CreateSchema(&patternProperties_[patternPropertyCount_].schema, r, itr->value, document, id_);\n                patternPropertyCount_++;\n            }\n        }\n\n        if (required && required->IsArray())\n            for (ConstValueIterator itr = required->Begin(); itr != required->End(); ++itr)\n                if (itr->IsString()) {\n                    SizeType index;\n                    if (FindPropertyIndex(*itr, &index)) {\n                        properties_[index].required = true;\n                        hasRequired_ = true;\n                    }\n                }\n\n        // Dependencies not supported for open api 2.0 and 3.0\n        if (spec_.oapi != kVersion20 && spec_.oapi != kVersion30)\n        if (dependencies && dependencies->IsObject()) {\n            PointerType q = p.Append(GetDependenciesString(), allocator_);\n            hasDependencies_ = true;\n            for (ConstMemberIterator itr = dependencies->MemberBegin(); itr != dependencies->MemberEnd(); ++itr) {\n                SizeType sourceIndex;\n                if (FindPropertyIndex(itr->name, &sourceIndex)) {\n                    if (itr->value.IsArray()) {\n                        properties_[sourceIndex].dependencies = static_cast<bool*>(allocator_->Malloc(sizeof(bool) * propertyCount_));\n                        std::memset(properties_[sourceIndex].dependencies, 0, sizeof(bool)* propertyCount_);\n                        for (ConstValueIterator targetItr = itr->value.Begin(); targetItr != itr->value.End(); ++targetItr) {\n                            SizeType targetIndex;\n                            if (FindPropertyIndex(*targetItr, &targetIndex))\n                                properties_[sourceIndex].dependencies[targetIndex] = true;\n                        }\n                 
   }\n                    else if (itr->value.IsObject()) {\n                        hasSchemaDependencies_ = true;\n                        schemaDocument->CreateSchema(&properties_[sourceIndex].dependenciesSchema, q.Append(itr->name, allocator_), itr->value, document, id_);\n                        properties_[sourceIndex].dependenciesValidatorIndex = validatorCount_;\n                        validatorCount_++;\n                    }\n                }\n            }\n        }\n\n        if (const ValueType* v = GetMember(value, GetAdditionalPropertiesString())) {\n            if (v->IsBool())\n                additionalProperties_ = v->GetBool();\n            else if (v->IsObject())\n                schemaDocument->CreateSchema(&additionalPropertiesSchema_, p.Append(GetAdditionalPropertiesString(), allocator_), *v, document, id_);\n        }\n\n        AssignIfExist(minProperties_, value, GetMinPropertiesString());\n        AssignIfExist(maxProperties_, value, GetMaxPropertiesString());\n\n        // Array\n        if (const ValueType* v = GetMember(value, GetItemsString())) {\n            PointerType q = p.Append(GetItemsString(), allocator_);\n            if (v->IsObject()) // List validation\n                schemaDocument->CreateSchema(&itemsList_, q, *v, document, id_);\n            else if (v->IsArray()) { // Tuple validation\n                itemsTuple_ = static_cast<const Schema**>(allocator_->Malloc(sizeof(const Schema*) * v->Size()));\n                SizeType index = 0;\n                for (ConstValueIterator itr = v->Begin(); itr != v->End(); ++itr, index++)\n                    schemaDocument->CreateSchema(&itemsTuple_[itemsTupleCount_++], q.Append(index, allocator_), *itr, document, id_);\n            }\n        }\n\n        AssignIfExist(minItems_, value, GetMinItemsString());\n        AssignIfExist(maxItems_, value, GetMaxItemsString());\n\n        // AdditionalItems not supported for openapi 2.0 and 3.0\n        if (spec_.oapi != kVersion20 && 
spec_.oapi != kVersion30)\n        if (const ValueType* v = GetMember(value, GetAdditionalItemsString())) {\n            if (v->IsBool())\n                additionalItems_ = v->GetBool();\n            else if (v->IsObject())\n                schemaDocument->CreateSchema(&additionalItemsSchema_, p.Append(GetAdditionalItemsString(), allocator_), *v, document, id_);\n        }\n\n        AssignIfExist(uniqueItems_, value, GetUniqueItemsString());\n\n        // String\n        AssignIfExist(minLength_, value, GetMinLengthString());\n        AssignIfExist(maxLength_, value, GetMaxLengthString());\n\n        if (const ValueType* v = GetMember(value, GetPatternString()))\n            pattern_ = CreatePattern(*v, schemaDocument, p.Append(GetPatternString(), allocator_));\n\n        // Number\n        if (const ValueType* v = GetMember(value, GetMinimumString()))\n            if (v->IsNumber())\n                minimum_.CopyFrom(*v, *allocator_);\n\n        if (const ValueType* v = GetMember(value, GetMaximumString()))\n            if (v->IsNumber())\n                maximum_.CopyFrom(*v, *allocator_);\n\n        AssignIfExist(exclusiveMinimum_, value, GetExclusiveMinimumString());\n        AssignIfExist(exclusiveMaximum_, value, GetExclusiveMaximumString());\n\n        if (const ValueType* v = GetMember(value, GetMultipleOfString()))\n            if (v->IsNumber() && v->GetDouble() > 0.0)\n                multipleOf_.CopyFrom(*v, *allocator_);\n\n        // Default\n        if (const ValueType* v = GetMember(value, GetDefaultValueString()))\n            if (v->IsString())\n                defaultValueLength_ = v->GetStringLength();\n\n        // ReadOnly - open api only (until draft 7 supported)\n        // WriteOnly - open api 3 only (until draft 7 supported)\n        // Both can't be true\n        if (spec_.oapi != kVersionNone)\n            AssignIfExist(readOnly_, value, GetReadOnlyString());\n        if (spec_.oapi >= kVersion30)\n            AssignIfExist(writeOnly_, 
value, GetWriteOnlyString());\n        if (readOnly_ && writeOnly_)\n            schemaDocument->SchemaError(kSchemaErrorReadOnlyAndWriteOnly, p);\n\n        // Nullable - open api 3 only\n        // If true add 'null' as allowable type\n        if (spec_.oapi >= kVersion30) {\n            AssignIfExist(nullable_, value, GetNullableString());\n            if (nullable_)\n                AddType(GetNullString());\n        }\n    }\n\n    ~Schema() {\n        AllocatorType::Free(enum_);\n        if (properties_) {\n            for (SizeType i = 0; i < propertyCount_; i++)\n                properties_[i].~Property();\n            AllocatorType::Free(properties_);\n        }\n        if (patternProperties_) {\n            for (SizeType i = 0; i < patternPropertyCount_; i++)\n                patternProperties_[i].~PatternProperty();\n            AllocatorType::Free(patternProperties_);\n        }\n        AllocatorType::Free(itemsTuple_);\n#if RAPIDJSON_SCHEMA_HAS_REGEX\n        if (pattern_) {\n            pattern_->~RegexType();\n            AllocatorType::Free(pattern_);\n        }\n#endif\n    }\n\n    const SValue& GetURI() const {\n        return uri_;\n    }\n\n    const UriType& GetId() const {\n        return id_;\n    }\n\n    const Specification& GetSpecification() const {\n        return spec_;\n    }\n\n    const PointerType& GetPointer() const {\n        return pointer_;\n    }\n\n    bool BeginValue(Context& context) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::BeginValue\");\n        if (context.inArray) {\n            if (uniqueItems_)\n                context.valueUniqueness = true;\n\n            if (itemsList_)\n                context.valueSchema = itemsList_;\n            else if (itemsTuple_) {\n                if (context.arrayElementIndex < itemsTupleCount_)\n                    context.valueSchema = itemsTuple_[context.arrayElementIndex];\n                else if (additionalItemsSchema_)\n                    context.valueSchema = 
additionalItemsSchema_;\n                else if (additionalItems_)\n                    context.valueSchema = typeless_;\n                else {\n                    context.error_handler.DisallowedItem(context.arrayElementIndex);\n                    // Must set valueSchema for when kValidateContinueOnErrorFlag is set, else reports spurious type error\n                    context.valueSchema = typeless_;\n                    // Must bump arrayElementIndex for when kValidateContinueOnErrorFlag is set\n                    context.arrayElementIndex++;\n                    RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorAdditionalItems);\n                }\n            }\n            else\n                context.valueSchema = typeless_;\n\n            context.arrayElementIndex++;\n        }\n        return true;\n    }\n\n    RAPIDJSON_FORCEINLINE bool EndValue(Context& context) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::EndValue\");\n        // Only check pattern properties if we have validators\n        if (context.patternPropertiesValidatorCount > 0) {\n            bool otherValid = false;\n            SizeType count = context.patternPropertiesValidatorCount;\n            if (context.objectPatternValidatorType != Context::kPatternValidatorOnly)\n                otherValid = context.patternPropertiesValidators[--count]->IsValid();\n\n            bool patternValid = true;\n            for (SizeType i = 0; i < count; i++)\n                if (!context.patternPropertiesValidators[i]->IsValid()) {\n                    patternValid = false;\n                    break;\n                }\n\n            if (context.objectPatternValidatorType == Context::kPatternValidatorOnly) {\n                if (!patternValid) {\n                    context.error_handler.PropertyViolations(context.patternPropertiesValidators, count);\n                    RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorPatternProperties);\n                }\n            }\n            else 
if (context.objectPatternValidatorType == Context::kPatternValidatorWithProperty) {\n                if (!patternValid || !otherValid) {\n                    context.error_handler.PropertyViolations(context.patternPropertiesValidators, count + 1);\n                    RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorPatternProperties);\n                }\n            }\n            else if (!patternValid && !otherValid) { // kPatternValidatorWithAdditionalProperty)\n                context.error_handler.PropertyViolations(context.patternPropertiesValidators, count + 1);\n                RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorPatternProperties);\n            }\n        }\n\n        // For enums only check if we have a hasher\n        if (enum_ && context.hasher) {\n            const uint64_t h = context.factory.GetHashCode(context.hasher);\n            for (SizeType i = 0; i < enumCount_; i++)\n                if (enum_[i] == h)\n                    goto foundEnum;\n            context.error_handler.DisallowedValue(kValidateErrorEnum);\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorEnum);\n            foundEnum:;\n        }\n\n        // Only check allOf etc if we have validators\n        if (context.validatorCount > 0) {\n            if (allOf_.schemas)\n                for (SizeType i = allOf_.begin; i < allOf_.begin + allOf_.count; i++)\n                    if (!context.validators[i]->IsValid()) {\n                        context.error_handler.NotAllOf(&context.validators[allOf_.begin], allOf_.count);\n                        RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorAllOf);\n                    }\n\n            if (anyOf_.schemas) {\n                for (SizeType i = anyOf_.begin; i < anyOf_.begin + anyOf_.count; i++)\n                    if (context.validators[i]->IsValid())\n                        goto foundAny;\n                context.error_handler.NoneOf(&context.validators[anyOf_.begin], anyOf_.count);\n                
RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorAnyOf);\n                foundAny:;\n            }\n\n            if (oneOf_.schemas) {\n                bool oneValid = false;\n                SizeType firstMatch = 0;\n                for (SizeType i = oneOf_.begin; i < oneOf_.begin + oneOf_.count; i++)\n                    if (context.validators[i]->IsValid()) {\n                        if (oneValid) {\n                            context.error_handler.MultipleOneOf(firstMatch, i - oneOf_.begin);\n                            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorOneOfMatch);\n                        } else {\n                            oneValid = true;\n                            firstMatch = i - oneOf_.begin;\n                        }\n                    }\n                if (!oneValid) {\n                    context.error_handler.NotOneOf(&context.validators[oneOf_.begin], oneOf_.count);\n                    RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorOneOf);\n                }\n            }\n\n            if (not_ && context.validators[notValidatorIndex_]->IsValid()) {\n                context.error_handler.Disallowed();\n                RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorNot);\n            }\n        }\n\n        return true;\n    }\n\n    bool Null(Context& context) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::Null\");\n        if (!(type_ & (1 << kNullSchemaType))) {\n            DisallowedType(context, GetNullString());\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorType);\n        }\n        return CreateParallelValidator(context);\n    }\n\n    bool Bool(Context& context, bool b) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::Bool\", b);\n        if (!CheckBool(context, b))\n            return false;\n        return CreateParallelValidator(context);\n    }\n\n    bool Int(Context& context, int i) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::Int\", (int64_t)i);\n        if 
(!CheckInt(context, i))\n            return false;\n        return CreateParallelValidator(context);\n    }\n\n    bool Uint(Context& context, unsigned u) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::Uint\", (uint64_t)u);\n        if (!CheckUint(context, u))\n            return false;\n        return CreateParallelValidator(context);\n    }\n\n    bool Int64(Context& context, int64_t i) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::Int64\", i);\n        if (!CheckInt(context, i))\n            return false;\n        return CreateParallelValidator(context);\n    }\n\n    bool Uint64(Context& context, uint64_t u) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::Uint64\", u);\n        if (!CheckUint(context, u))\n            return false;\n        return CreateParallelValidator(context);\n    }\n\n    bool Double(Context& context, double d) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::Double\", d);\n        if (!(type_ & (1 << kNumberSchemaType))) {\n            DisallowedType(context, GetNumberString());\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorType);\n        }\n\n        if (!minimum_.IsNull() && !CheckDoubleMinimum(context, d))\n            return false;\n\n        if (!maximum_.IsNull() && !CheckDoubleMaximum(context, d))\n            return false;\n\n        if (!multipleOf_.IsNull() && !CheckDoubleMultipleOf(context, d))\n            return false;\n\n        return CreateParallelValidator(context);\n    }\n\n    bool String(Context& context, const Ch* str, SizeType length, bool) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::String\", str);\n        if (!(type_ & (1 << kStringSchemaType))) {\n            DisallowedType(context, GetStringString());\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorType);\n        }\n\n        if (minLength_ != 0 || maxLength_ != SizeType(~0)) {\n            SizeType count;\n            if (internal::CountStringCodePoint<EncodingType>(str, 
length, &count)) {\n                if (count < minLength_) {\n                    context.error_handler.TooShort(str, length, minLength_);\n                    RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorMinLength);\n                }\n                if (count > maxLength_) {\n                    context.error_handler.TooLong(str, length, maxLength_);\n                    RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorMaxLength);\n                }\n            }\n        }\n\n        if (pattern_ && !IsPatternMatch(pattern_, str, length)) {\n            context.error_handler.DoesNotMatch(str, length);\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorPattern);\n        }\n\n        return CreateParallelValidator(context);\n    }\n\n    bool StartObject(Context& context) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::StartObject\");\n        if (!(type_ & (1 << kObjectSchemaType))) {\n            DisallowedType(context, GetObjectString());\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorType);\n        }\n\n        if (hasDependencies_ || hasRequired_) {\n            context.propertyExist = static_cast<bool*>(context.factory.MallocState(sizeof(bool) * propertyCount_));\n            std::memset(context.propertyExist, 0, sizeof(bool) * propertyCount_);\n        }\n\n        if (patternProperties_) { // pre-allocate schema array\n            SizeType count = patternPropertyCount_ + 1; // extra for valuePatternValidatorType\n            context.patternPropertiesSchemas = static_cast<const SchemaType**>(context.factory.MallocState(sizeof(const SchemaType*) * count));\n            context.patternPropertiesSchemaCount = 0;\n            std::memset(context.patternPropertiesSchemas, 0, sizeof(SchemaType*) * count);\n        }\n\n        return CreateParallelValidator(context);\n    }\n\n    bool Key(Context& context, const Ch* str, SizeType len, bool) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::Key\", str);\n\n        if 
(patternProperties_) {\n            context.patternPropertiesSchemaCount = 0;\n            for (SizeType i = 0; i < patternPropertyCount_; i++)\n                if (patternProperties_[i].pattern && IsPatternMatch(patternProperties_[i].pattern, str, len)) {\n                    context.patternPropertiesSchemas[context.patternPropertiesSchemaCount++] = patternProperties_[i].schema;\n                    context.valueSchema = typeless_;\n                }\n        }\n\n        SizeType index  = 0;\n        if (FindPropertyIndex(ValueType(str, len).Move(), &index)) {\n            if (context.patternPropertiesSchemaCount > 0) {\n                context.patternPropertiesSchemas[context.patternPropertiesSchemaCount++] = properties_[index].schema;\n                context.valueSchema = typeless_;\n                context.valuePatternValidatorType = Context::kPatternValidatorWithProperty;\n            }\n            else\n                context.valueSchema = properties_[index].schema;\n\n            if (context.propertyExist)\n                context.propertyExist[index] = true;\n\n            return true;\n        }\n\n        if (additionalPropertiesSchema_) {\n            if (context.patternPropertiesSchemaCount > 0) {\n                context.patternPropertiesSchemas[context.patternPropertiesSchemaCount++] = additionalPropertiesSchema_;\n                context.valueSchema = typeless_;\n                context.valuePatternValidatorType = Context::kPatternValidatorWithAdditionalProperty;\n            }\n            else\n                context.valueSchema = additionalPropertiesSchema_;\n            return true;\n        }\n        else if (additionalProperties_) {\n            context.valueSchema = typeless_;\n            return true;\n        }\n\n        if (context.patternPropertiesSchemaCount == 0) { // patternProperties are not additional properties\n            // Must set valueSchema for when kValidateContinueOnErrorFlag is set, else reports spurious type error\n 
           context.valueSchema = typeless_;\n            context.error_handler.DisallowedProperty(str, len);\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorAdditionalProperties);\n        }\n\n        return true;\n    }\n\n    bool EndObject(Context& context, SizeType memberCount) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::EndObject\");\n        if (hasRequired_) {\n            context.error_handler.StartMissingProperties();\n            for (SizeType index = 0; index < propertyCount_; index++)\n                if (properties_[index].required && !context.propertyExist[index])\n                    if (properties_[index].schema->defaultValueLength_ == 0 )\n                        context.error_handler.AddMissingProperty(properties_[index].name);\n            if (context.error_handler.EndMissingProperties())\n                RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorRequired);\n        }\n\n        if (memberCount < minProperties_) {\n            context.error_handler.TooFewProperties(memberCount, minProperties_);\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorMinProperties);\n        }\n\n        if (memberCount > maxProperties_) {\n            context.error_handler.TooManyProperties(memberCount, maxProperties_);\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorMaxProperties);\n        }\n\n        if (hasDependencies_) {\n            context.error_handler.StartDependencyErrors();\n            for (SizeType sourceIndex = 0; sourceIndex < propertyCount_; sourceIndex++) {\n                const Property& source = properties_[sourceIndex];\n                if (context.propertyExist[sourceIndex]) {\n                    if (source.dependencies) {\n                        context.error_handler.StartMissingDependentProperties();\n                        for (SizeType targetIndex = 0; targetIndex < propertyCount_; targetIndex++)\n                            if (source.dependencies[targetIndex] && 
!context.propertyExist[targetIndex])\n                                context.error_handler.AddMissingDependentProperty(properties_[targetIndex].name);\n                        context.error_handler.EndMissingDependentProperties(source.name);\n                    }\n                    else if (source.dependenciesSchema) {\n                        ISchemaValidator* dependenciesValidator = context.validators[source.dependenciesValidatorIndex];\n                        if (!dependenciesValidator->IsValid())\n                            context.error_handler.AddDependencySchemaError(source.name, dependenciesValidator);\n                    }\n                }\n            }\n            if (context.error_handler.EndDependencyErrors())\n                RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorDependencies);\n        }\n\n        return true;\n    }\n\n    bool StartArray(Context& context) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::StartArray\");\n        context.arrayElementIndex = 0;\n        context.inArray = true;  // Ensure we note that we are in an array\n\n        if (!(type_ & (1 << kArraySchemaType))) {\n            DisallowedType(context, GetArrayString());\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorType);\n        }\n\n        return CreateParallelValidator(context);\n    }\n\n    bool EndArray(Context& context, SizeType elementCount) const {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"Schema::EndArray\");\n        context.inArray = false;\n\n        if (elementCount < minItems_) {\n            context.error_handler.TooFewItems(elementCount, minItems_);\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorMinItems);\n        }\n\n        if (elementCount > maxItems_) {\n            context.error_handler.TooManyItems(elementCount, maxItems_);\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorMaxItems);\n        }\n\n        return true;\n    }\n\n    static const ValueType& 
GetValidateErrorKeyword(ValidateErrorCode validateErrorCode) {\n        switch (validateErrorCode) {\n            case kValidateErrorMultipleOf:              return GetMultipleOfString();\n            case kValidateErrorMaximum:                 return GetMaximumString();\n            case kValidateErrorExclusiveMaximum:        return GetMaximumString(); // Same\n            case kValidateErrorMinimum:                 return GetMinimumString();\n            case kValidateErrorExclusiveMinimum:        return GetMinimumString(); // Same\n\n            case kValidateErrorMaxLength:               return GetMaxLengthString();\n            case kValidateErrorMinLength:               return GetMinLengthString();\n            case kValidateErrorPattern:                 return GetPatternString();\n\n            case kValidateErrorMaxItems:                return GetMaxItemsString();\n            case kValidateErrorMinItems:                return GetMinItemsString();\n            case kValidateErrorUniqueItems:             return GetUniqueItemsString();\n            case kValidateErrorAdditionalItems:         return GetAdditionalItemsString();\n\n            case kValidateErrorMaxProperties:           return GetMaxPropertiesString();\n            case kValidateErrorMinProperties:           return GetMinPropertiesString();\n            case kValidateErrorRequired:                return GetRequiredString();\n            case kValidateErrorAdditionalProperties:    return GetAdditionalPropertiesString();\n            case kValidateErrorPatternProperties:       return GetPatternPropertiesString();\n            case kValidateErrorDependencies:            return GetDependenciesString();\n\n            case kValidateErrorEnum:                    return GetEnumString();\n            case kValidateErrorType:                    return GetTypeString();\n\n            case kValidateErrorOneOf:                   return GetOneOfString();\n            case kValidateErrorOneOfMatch:            
  return GetOneOfString(); // Same\n            case kValidateErrorAllOf:                   return GetAllOfString();\n            case kValidateErrorAnyOf:                   return GetAnyOfString();\n            case kValidateErrorNot:                     return GetNotString();\n\n            case kValidateErrorReadOnly:                return GetReadOnlyString();\n            case kValidateErrorWriteOnly:               return GetWriteOnlyString();\n\n            default:                                    return GetNullString();\n        }\n    }\n\n\n    // Generate functions for string literal according to Ch\n#define RAPIDJSON_STRING_(name, ...) \\\n    static const ValueType& Get##name##String() {\\\n        static const Ch s[] = { __VA_ARGS__, '\\0' };\\\n        static const ValueType v(s, static_cast<SizeType>(sizeof(s) / sizeof(Ch) - 1));\\\n        return v;\\\n    }\n\n    RAPIDJSON_STRING_(Null, 'n', 'u', 'l', 'l')\n    RAPIDJSON_STRING_(Boolean, 'b', 'o', 'o', 'l', 'e', 'a', 'n')\n    RAPIDJSON_STRING_(Object, 'o', 'b', 'j', 'e', 'c', 't')\n    RAPIDJSON_STRING_(Array, 'a', 'r', 'r', 'a', 'y')\n    RAPIDJSON_STRING_(String, 's', 't', 'r', 'i', 'n', 'g')\n    RAPIDJSON_STRING_(Number, 'n', 'u', 'm', 'b', 'e', 'r')\n    RAPIDJSON_STRING_(Integer, 'i', 'n', 't', 'e', 'g', 'e', 'r')\n    RAPIDJSON_STRING_(Type, 't', 'y', 'p', 'e')\n    RAPIDJSON_STRING_(Enum, 'e', 'n', 'u', 'm')\n    RAPIDJSON_STRING_(AllOf, 'a', 'l', 'l', 'O', 'f')\n    RAPIDJSON_STRING_(AnyOf, 'a', 'n', 'y', 'O', 'f')\n    RAPIDJSON_STRING_(OneOf, 'o', 'n', 'e', 'O', 'f')\n    RAPIDJSON_STRING_(Not, 'n', 'o', 't')\n    RAPIDJSON_STRING_(Properties, 'p', 'r', 'o', 'p', 'e', 'r', 't', 'i', 'e', 's')\n    RAPIDJSON_STRING_(Required, 'r', 'e', 'q', 'u', 'i', 'r', 'e', 'd')\n    RAPIDJSON_STRING_(Dependencies, 'd', 'e', 'p', 'e', 'n', 'd', 'e', 'n', 'c', 'i', 'e', 's')\n    RAPIDJSON_STRING_(PatternProperties, 'p', 'a', 't', 't', 'e', 'r', 'n', 'P', 'r', 'o', 'p', 'e', 'r', 't', 'i', 'e', 
's')\n    RAPIDJSON_STRING_(AdditionalProperties, 'a', 'd', 'd', 'i', 't', 'i', 'o', 'n', 'a', 'l', 'P', 'r', 'o', 'p', 'e', 'r', 't', 'i', 'e', 's')\n    RAPIDJSON_STRING_(MinProperties, 'm', 'i', 'n', 'P', 'r', 'o', 'p', 'e', 'r', 't', 'i', 'e', 's')\n    RAPIDJSON_STRING_(MaxProperties, 'm', 'a', 'x', 'P', 'r', 'o', 'p', 'e', 'r', 't', 'i', 'e', 's')\n    RAPIDJSON_STRING_(Items, 'i', 't', 'e', 'm', 's')\n    RAPIDJSON_STRING_(MinItems, 'm', 'i', 'n', 'I', 't', 'e', 'm', 's')\n    RAPIDJSON_STRING_(MaxItems, 'm', 'a', 'x', 'I', 't', 'e', 'm', 's')\n    RAPIDJSON_STRING_(AdditionalItems, 'a', 'd', 'd', 'i', 't', 'i', 'o', 'n', 'a', 'l', 'I', 't', 'e', 'm', 's')\n    RAPIDJSON_STRING_(UniqueItems, 'u', 'n', 'i', 'q', 'u', 'e', 'I', 't', 'e', 'm', 's')\n    RAPIDJSON_STRING_(MinLength, 'm', 'i', 'n', 'L', 'e', 'n', 'g', 't', 'h')\n    RAPIDJSON_STRING_(MaxLength, 'm', 'a', 'x', 'L', 'e', 'n', 'g', 't', 'h')\n    RAPIDJSON_STRING_(Pattern, 'p', 'a', 't', 't', 'e', 'r', 'n')\n    RAPIDJSON_STRING_(Minimum, 'm', 'i', 'n', 'i', 'm', 'u', 'm')\n    RAPIDJSON_STRING_(Maximum, 'm', 'a', 'x', 'i', 'm', 'u', 'm')\n    RAPIDJSON_STRING_(ExclusiveMinimum, 'e', 'x', 'c', 'l', 'u', 's', 'i', 'v', 'e', 'M', 'i', 'n', 'i', 'm', 'u', 'm')\n    RAPIDJSON_STRING_(ExclusiveMaximum, 'e', 'x', 'c', 'l', 'u', 's', 'i', 'v', 'e', 'M', 'a', 'x', 'i', 'm', 'u', 'm')\n    RAPIDJSON_STRING_(MultipleOf, 'm', 'u', 'l', 't', 'i', 'p', 'l', 'e', 'O', 'f')\n    RAPIDJSON_STRING_(DefaultValue, 'd', 'e', 'f', 'a', 'u', 'l', 't')\n    RAPIDJSON_STRING_(Schema, '$', 's', 'c', 'h', 'e', 'm', 'a')\n    RAPIDJSON_STRING_(Ref, '$', 'r', 'e', 'f')\n    RAPIDJSON_STRING_(Id, 'i', 'd')\n    RAPIDJSON_STRING_(Swagger, 's', 'w', 'a', 'g', 'g', 'e', 'r')\n    RAPIDJSON_STRING_(OpenApi, 'o', 'p', 'e', 'n', 'a', 'p', 'i')\n    RAPIDJSON_STRING_(ReadOnly, 'r', 'e', 'a', 'd', 'O', 'n', 'l', 'y')\n    RAPIDJSON_STRING_(WriteOnly, 'w', 'r', 'i', 't', 'e', 'O', 'n', 'l', 'y')\n    RAPIDJSON_STRING_(Nullable, 'n', 
'u', 'l', 'l', 'a', 'b', 'l', 'e')\n\n#undef RAPIDJSON_STRING_\n\nprivate:\n    enum SchemaValueType {\n        kNullSchemaType,\n        kBooleanSchemaType,\n        kObjectSchemaType,\n        kArraySchemaType,\n        kStringSchemaType,\n        kNumberSchemaType,\n        kIntegerSchemaType,\n        kTotalSchemaType\n    };\n\n#if RAPIDJSON_SCHEMA_USE_INTERNALREGEX\n        typedef internal::GenericRegex<EncodingType, AllocatorType> RegexType;\n#elif RAPIDJSON_SCHEMA_USE_STDREGEX\n        typedef std::basic_regex<Ch> RegexType;\n#else\n        typedef char RegexType;\n#endif\n\n    struct SchemaArray {\n        SchemaArray() : schemas(), count() {}\n        ~SchemaArray() { AllocatorType::Free(schemas); }\n        const SchemaType** schemas;\n        SizeType begin; // begin index of context.validators\n        SizeType count;\n    };\n\n    template <typename V1, typename V2>\n    void AddUniqueElement(V1& a, const V2& v) {\n        for (typename V1::ConstValueIterator itr = a.Begin(); itr != a.End(); ++itr)\n            if (*itr == v)\n                return;\n        V1 c(v, *allocator_);\n        a.PushBack(c, *allocator_);\n    }\n\n    static const ValueType* GetMember(const ValueType& value, const ValueType& name) {\n        typename ValueType::ConstMemberIterator itr = value.FindMember(name);\n        return itr != value.MemberEnd() ? 
&(itr->value) : 0;\n    }\n\n    static void AssignIfExist(bool& out, const ValueType& value, const ValueType& name) {\n        if (const ValueType* v = GetMember(value, name))\n            if (v->IsBool())\n                out = v->GetBool();\n    }\n\n    static void AssignIfExist(SizeType& out, const ValueType& value, const ValueType& name) {\n        if (const ValueType* v = GetMember(value, name))\n            if (v->IsUint64() && v->GetUint64() <= SizeType(~0))\n                out = static_cast<SizeType>(v->GetUint64());\n    }\n\n    void AssignIfExist(SchemaArray& out, SchemaDocumentType& schemaDocument, const PointerType& p, const ValueType& value, const ValueType& name, const ValueType& document) {\n        if (const ValueType* v = GetMember(value, name)) {\n            if (v->IsArray() && v->Size() > 0) {\n                PointerType q = p.Append(name, allocator_);\n                out.count = v->Size();\n                out.schemas = static_cast<const Schema**>(allocator_->Malloc(out.count * sizeof(const Schema*)));\n                memset(out.schemas, 0, sizeof(Schema*)* out.count);\n                for (SizeType i = 0; i < out.count; i++)\n                    schemaDocument.CreateSchema(&out.schemas[i], q.Append(i, allocator_), (*v)[i], document, id_);\n                out.begin = validatorCount_;\n                validatorCount_ += out.count;\n            }\n        }\n    }\n\n#if RAPIDJSON_SCHEMA_USE_INTERNALREGEX\n    template <typename ValueType>\n    RegexType* CreatePattern(const ValueType& value, SchemaDocumentType* sd, const PointerType& p) {\n        if (value.IsString()) {\n            RegexType* r = new (allocator_->Malloc(sizeof(RegexType))) RegexType(value.GetString(), allocator_);\n            if (!r->IsValid()) {\n                sd->SchemaErrorValue(kSchemaErrorRegexInvalid, p, value.GetString(), value.GetStringLength());\n                r->~RegexType();\n                AllocatorType::Free(r);\n                r = 0;\n            
}\n            return r;\n        }\n        return 0;\n    }\n\n    static bool IsPatternMatch(const RegexType* pattern, const Ch *str, SizeType) {\n        GenericRegexSearch<RegexType> rs(*pattern);\n        return rs.Search(str);\n    }\n#elif RAPIDJSON_SCHEMA_USE_STDREGEX\n    template <typename ValueType>\n    RegexType* CreatePattern(const ValueType& value, SchemaDocumentType* sd, const PointerType& p) {\n        if (value.IsString()) {\n            RegexType *r = static_cast<RegexType*>(allocator_->Malloc(sizeof(RegexType)));\n            try {\n                return new (r) RegexType(value.GetString(), std::size_t(value.GetStringLength()), std::regex_constants::ECMAScript);\n            }\n            catch (const std::regex_error&) { // exception object unused; left unnamed to avoid unreferenced-variable warnings\n                sd->SchemaErrorValue(kSchemaErrorRegexInvalid, p, value.GetString(), value.GetStringLength());\n                AllocatorType::Free(r);\n            }\n        }\n        return 0;\n    }\n\n    static bool IsPatternMatch(const RegexType* pattern, const Ch *str, SizeType length) {\n        std::match_results<const Ch*> r;\n        return std::regex_search(str, str + length, r, *pattern);\n    }\n#else\n    template <typename ValueType>\n    RegexType* CreatePattern(const ValueType&) {\n        return 0;\n    }\n\n    static bool IsPatternMatch(const RegexType*, const Ch *, SizeType) { return true; }\n#endif // RAPIDJSON_SCHEMA_USE_STDREGEX\n\n    void AddType(const ValueType& type) {\n        if      (type == GetNullString()   ) type_ |= 1 << kNullSchemaType;\n        else if (type == GetBooleanString()) type_ |= 1 << kBooleanSchemaType;\n        else if (type == GetObjectString() ) type_ |= 1 << kObjectSchemaType;\n        else if (type == GetArrayString()  ) type_ |= 1 << kArraySchemaType;\n        else if (type == GetStringString() ) type_ |= 1 << kStringSchemaType;\n        else if (type == GetIntegerString()) type_ |= 1 << kIntegerSchemaType;\n        else if (type == GetNumberString() ) type_ |= (1 
<< kNumberSchemaType) | (1 << kIntegerSchemaType);\n    }\n\n    // Creates parallel validators for allOf, anyOf, oneOf, not and schema dependencies, if required.\n    // Also creates a hasher for enums and array uniqueness, if required.\n    // Also a useful place to add type-independent error checks.\n    bool CreateParallelValidator(Context& context) const {\n        if (enum_ || context.arrayUniqueness)\n            context.hasher = context.factory.CreateHasher();\n\n        if (validatorCount_) {\n            RAPIDJSON_ASSERT(context.validators == 0);\n            context.validators = static_cast<ISchemaValidator**>(context.factory.MallocState(sizeof(ISchemaValidator*) * validatorCount_));\n            std::memset(context.validators, 0, sizeof(ISchemaValidator*) * validatorCount_);\n            context.validatorCount = validatorCount_;\n\n            // Always return after first failure for these sub-validators\n            if (allOf_.schemas)\n                CreateSchemaValidators(context, allOf_, false);\n\n            if (anyOf_.schemas)\n                CreateSchemaValidators(context, anyOf_, false);\n\n            if (oneOf_.schemas)\n                CreateSchemaValidators(context, oneOf_, false);\n\n            if (not_)\n                context.validators[notValidatorIndex_] = context.factory.CreateSchemaValidator(*not_, false);\n\n            if (hasSchemaDependencies_) {\n                for (SizeType i = 0; i < propertyCount_; i++)\n                    if (properties_[i].dependenciesSchema)\n                        context.validators[properties_[i].dependenciesValidatorIndex] = context.factory.CreateSchemaValidator(*properties_[i].dependenciesSchema, false);\n            }\n        }\n\n        // Add any other type-independent checks here\n        if (readOnly_ && (context.flags & kValidateWriteFlag)) {\n            context.error_handler.DisallowedWhenWriting();\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorReadOnly);\n        }\n     
   if (writeOnly_ && (context.flags & kValidateReadFlag)) {\n            context.error_handler.DisallowedWhenReading();\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorWriteOnly);\n        }\n\n        return true;\n    }\n\n    void CreateSchemaValidators(Context& context, const SchemaArray& schemas, const bool inheritContinueOnErrors) const {\n        for (SizeType i = 0; i < schemas.count; i++)\n            context.validators[schemas.begin + i] = context.factory.CreateSchemaValidator(*schemas.schemas[i], inheritContinueOnErrors);\n    }\n\n    // O(n)\n    bool FindPropertyIndex(const ValueType& name, SizeType* outIndex) const {\n        SizeType len = name.GetStringLength();\n        const Ch* str = name.GetString();\n        for (SizeType index = 0; index < propertyCount_; index++)\n            if (properties_[index].name.GetStringLength() == len &&\n                (std::memcmp(properties_[index].name.GetString(), str, sizeof(Ch) * len) == 0))\n            {\n                *outIndex = index;\n                return true;\n            }\n        return false;\n    }\n\n    bool CheckBool(Context& context, bool) const {\n        if (!(type_ & (1 << kBooleanSchemaType))) {\n            DisallowedType(context, GetBooleanString());\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorType);\n        }\n        return true;\n    }\n\n    bool CheckInt(Context& context, int64_t i) const {\n        if (!(type_ & ((1 << kIntegerSchemaType) | (1 << kNumberSchemaType)))) {\n            DisallowedType(context, GetIntegerString());\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorType);\n        }\n\n        if (!minimum_.IsNull()) {\n            if (minimum_.IsInt64()) {\n                if (exclusiveMinimum_ ? i <= minimum_.GetInt64() : i < minimum_.GetInt64()) {\n                    context.error_handler.BelowMinimum(i, minimum_, exclusiveMinimum_);\n                    RAPIDJSON_INVALID_KEYWORD_RETURN(exclusiveMinimum_ ? 
kValidateErrorExclusiveMinimum : kValidateErrorMinimum);\n                }\n            }\n            else if (minimum_.IsUint64()) {\n                context.error_handler.BelowMinimum(i, minimum_, exclusiveMinimum_);\n                RAPIDJSON_INVALID_KEYWORD_RETURN(exclusiveMinimum_ ? kValidateErrorExclusiveMinimum : kValidateErrorMinimum); // i <= max(int64_t) < minimum_.GetUint64()\n            }\n            else if (!CheckDoubleMinimum(context, static_cast<double>(i)))\n                return false;\n        }\n\n        if (!maximum_.IsNull()) {\n            if (maximum_.IsInt64()) {\n                if (exclusiveMaximum_ ? i >= maximum_.GetInt64() : i > maximum_.GetInt64()) {\n                    context.error_handler.AboveMaximum(i, maximum_, exclusiveMaximum_);\n                    RAPIDJSON_INVALID_KEYWORD_RETURN(exclusiveMaximum_ ? kValidateErrorExclusiveMaximum : kValidateErrorMaximum);\n                }\n            }\n            else if (maximum_.IsUint64())\n                /* do nothing */; // i <= max(int64_t) < maximum_.GetUint64()\n            else if (!CheckDoubleMaximum(context, static_cast<double>(i)))\n                return false;\n        }\n\n        if (!multipleOf_.IsNull()) {\n            if (multipleOf_.IsUint64()) {\n                if (static_cast<uint64_t>(i >= 0 ? 
i : -i) % multipleOf_.GetUint64() != 0) {\n                    context.error_handler.NotMultipleOf(i, multipleOf_);\n                    RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorMultipleOf);\n                }\n            }\n            else if (!CheckDoubleMultipleOf(context, static_cast<double>(i)))\n                return false;\n        }\n\n        return true;\n    }\n\n    bool CheckUint(Context& context, uint64_t i) const {\n        if (!(type_ & ((1 << kIntegerSchemaType) | (1 << kNumberSchemaType)))) {\n            DisallowedType(context, GetIntegerString());\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorType);\n        }\n\n        if (!minimum_.IsNull()) {\n            if (minimum_.IsUint64()) {\n                if (exclusiveMinimum_ ? i <= minimum_.GetUint64() : i < minimum_.GetUint64()) {\n                    context.error_handler.BelowMinimum(i, minimum_, exclusiveMinimum_);\n                    RAPIDJSON_INVALID_KEYWORD_RETURN(exclusiveMinimum_ ? kValidateErrorExclusiveMinimum : kValidateErrorMinimum);\n                }\n            }\n            else if (minimum_.IsInt64())\n                /* do nothing */; // i >= 0 > minimum_.GetInt64()\n            else if (!CheckDoubleMinimum(context, static_cast<double>(i)))\n                return false;\n        }\n\n        if (!maximum_.IsNull()) {\n            if (maximum_.IsUint64()) {\n                if (exclusiveMaximum_ ? i >= maximum_.GetUint64() : i > maximum_.GetUint64()) {\n                    context.error_handler.AboveMaximum(i, maximum_, exclusiveMaximum_);\n                    RAPIDJSON_INVALID_KEYWORD_RETURN(exclusiveMaximum_ ? kValidateErrorExclusiveMaximum : kValidateErrorMaximum);\n                }\n            }\n            else if (maximum_.IsInt64()) {\n                context.error_handler.AboveMaximum(i, maximum_, exclusiveMaximum_);\n                RAPIDJSON_INVALID_KEYWORD_RETURN(exclusiveMaximum_ ? 
kValidateErrorExclusiveMaximum : kValidateErrorMaximum); // i >= 0 > maximum_\n            }\n            else if (!CheckDoubleMaximum(context, static_cast<double>(i)))\n                return false;\n        }\n\n        if (!multipleOf_.IsNull()) {\n            if (multipleOf_.IsUint64()) {\n                if (i % multipleOf_.GetUint64() != 0) {\n                    context.error_handler.NotMultipleOf(i, multipleOf_);\n                    RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorMultipleOf);\n                }\n            }\n            else if (!CheckDoubleMultipleOf(context, static_cast<double>(i)))\n                return false;\n        }\n\n        return true;\n    }\n\n    bool CheckDoubleMinimum(Context& context, double d) const {\n        if (exclusiveMinimum_ ? d <= minimum_.GetDouble() : d < minimum_.GetDouble()) {\n            context.error_handler.BelowMinimum(d, minimum_, exclusiveMinimum_);\n            RAPIDJSON_INVALID_KEYWORD_RETURN(exclusiveMinimum_ ? kValidateErrorExclusiveMinimum : kValidateErrorMinimum);\n        }\n        return true;\n    }\n\n    bool CheckDoubleMaximum(Context& context, double d) const {\n        if (exclusiveMaximum_ ? d >= maximum_.GetDouble() : d > maximum_.GetDouble()) {\n            context.error_handler.AboveMaximum(d, maximum_, exclusiveMaximum_);\n            RAPIDJSON_INVALID_KEYWORD_RETURN(exclusiveMaximum_ ? 
kValidateErrorExclusiveMaximum : kValidateErrorMaximum);\n        }\n        return true;\n    }\n\n    bool CheckDoubleMultipleOf(Context& context, double d) const {\n        double a = std::abs(d), b = std::abs(multipleOf_.GetDouble());\n        double q = a / b;\n        double qRounded = std::floor(q + 0.5);\n        double scaledEpsilon = (q + qRounded) * std::numeric_limits<double>::epsilon();\n        double difference = std::abs(qRounded - q);\n        bool isMultiple = (difference <= scaledEpsilon)\n                                        || (difference < std::numeric_limits<double>::min());\n        if (!isMultiple) {\n            context.error_handler.NotMultipleOf(d, multipleOf_);\n            RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorMultipleOf);\n        }\n        return true;\n    }\n\n    void DisallowedType(Context& context, const ValueType& actualType) const {\n        ErrorHandler& eh = context.error_handler;\n        eh.StartDisallowedType();\n\n        if (type_ & (1 << kNullSchemaType)) eh.AddExpectedType(GetNullString());\n        if (type_ & (1 << kBooleanSchemaType)) eh.AddExpectedType(GetBooleanString());\n        if (type_ & (1 << kObjectSchemaType)) eh.AddExpectedType(GetObjectString());\n        if (type_ & (1 << kArraySchemaType)) eh.AddExpectedType(GetArrayString());\n        if (type_ & (1 << kStringSchemaType)) eh.AddExpectedType(GetStringString());\n\n        if (type_ & (1 << kNumberSchemaType)) eh.AddExpectedType(GetNumberString());\n        else if (type_ & (1 << kIntegerSchemaType)) eh.AddExpectedType(GetIntegerString());\n\n        eh.EndDisallowedType(actualType);\n    }\n\n    struct Property {\n        Property() : schema(), dependenciesSchema(), dependenciesValidatorIndex(), dependencies(), required(false) {}\n        ~Property() { AllocatorType::Free(dependencies); }\n        SValue name;\n        const SchemaType* schema;\n        const SchemaType* dependenciesSchema;\n        SizeType dependenciesValidatorIndex;\n 
       bool* dependencies;\n        bool required;\n    };\n\n    struct PatternProperty {\n        PatternProperty() : schema(), pattern() {}\n        ~PatternProperty() {\n            if (pattern) {\n                pattern->~RegexType();\n                AllocatorType::Free(pattern);\n            }\n        }\n        const SchemaType* schema;\n        RegexType* pattern;\n    };\n\n    AllocatorType* allocator_;\n    SValue uri_;\n    UriType id_;\n    Specification spec_;\n    PointerType pointer_;\n    const SchemaType* typeless_;\n    uint64_t* enum_;\n    SizeType enumCount_;\n    SchemaArray allOf_;\n    SchemaArray anyOf_;\n    SchemaArray oneOf_;\n    const SchemaType* not_;\n    unsigned type_; // bitmask of kSchemaType\n    SizeType validatorCount_;\n    SizeType notValidatorIndex_;\n\n    Property* properties_;\n    const SchemaType* additionalPropertiesSchema_;\n    PatternProperty* patternProperties_;\n    SizeType patternPropertyCount_;\n    SizeType propertyCount_;\n    SizeType minProperties_;\n    SizeType maxProperties_;\n    bool additionalProperties_;\n    bool hasDependencies_;\n    bool hasRequired_;\n    bool hasSchemaDependencies_;\n\n    const SchemaType* additionalItemsSchema_;\n    const SchemaType* itemsList_;\n    const SchemaType** itemsTuple_;\n    SizeType itemsTupleCount_;\n    SizeType minItems_;\n    SizeType maxItems_;\n    bool additionalItems_;\n    bool uniqueItems_;\n\n    RegexType* pattern_;\n    SizeType minLength_;\n    SizeType maxLength_;\n\n    SValue minimum_;\n    SValue maximum_;\n    SValue multipleOf_;\n    bool exclusiveMinimum_;\n    bool exclusiveMaximum_;\n\n    SizeType defaultValueLength_;\n\n    bool readOnly_;\n    bool writeOnly_;\n    bool nullable_;\n};\n\ntemplate<typename Stack, typename Ch>\nstruct TokenHelper {\n    RAPIDJSON_FORCEINLINE static void AppendIndexToken(Stack& documentStack, SizeType index) {\n        *documentStack.template Push<Ch>() = '/';\n        char buffer[21];\n        size_t 
length = static_cast<size_t>((sizeof(SizeType) == 4 ? u32toa(index, buffer) : u64toa(index, buffer)) - buffer);\n        for (size_t i = 0; i < length; i++)\n            *documentStack.template Push<Ch>() = static_cast<Ch>(buffer[i]);\n    }\n};\n\n// Partial specialized version for char to prevent buffer copying.\ntemplate <typename Stack>\nstruct TokenHelper<Stack, char> {\n    RAPIDJSON_FORCEINLINE static void AppendIndexToken(Stack& documentStack, SizeType index) {\n        if (sizeof(SizeType) == 4) {\n            char *buffer = documentStack.template Push<char>(1 + 10); // '/' + uint\n            *buffer++ = '/';\n            const char* end = internal::u32toa(index, buffer);\n             documentStack.template Pop<char>(static_cast<size_t>(10 - (end - buffer)));\n        }\n        else {\n            char *buffer = documentStack.template Push<char>(1 + 20); // '/' + uint64\n            *buffer++ = '/';\n            const char* end = internal::u64toa(index, buffer);\n            documentStack.template Pop<char>(static_cast<size_t>(20 - (end - buffer)));\n        }\n    }\n};\n\n} // namespace internal\n\n///////////////////////////////////////////////////////////////////////////////\n// IGenericRemoteSchemaDocumentProvider\n\ntemplate <typename SchemaDocumentType>\nclass IGenericRemoteSchemaDocumentProvider {\npublic:\n    typedef typename SchemaDocumentType::Ch Ch;\n    typedef typename SchemaDocumentType::ValueType ValueType;\n    typedef typename SchemaDocumentType::AllocatorType AllocatorType;\n\n    virtual ~IGenericRemoteSchemaDocumentProvider() {}\n    virtual const SchemaDocumentType* GetRemoteDocument(const Ch* uri, SizeType length) = 0;\n    virtual const SchemaDocumentType* GetRemoteDocument(const GenericUri<ValueType, AllocatorType> uri, Specification& spec) {\n        // Default implementation just calls through for compatibility\n        // Following line suppresses unused parameter warning\n        (void)spec;\n        // 
printf(\"GetRemoteDocument: %d %d\\n\", spec.draft, spec.oapi);\n        return GetRemoteDocument(uri.GetBaseString(), uri.GetBaseStringLength());\n    }\n};\n\n///////////////////////////////////////////////////////////////////////////////\n// GenericSchemaDocument\n\n//! JSON schema document.\n/*!\n    A JSON schema document is a compiled version of a JSON schema.\n    It is basically a tree of internal::Schema.\n\n    \\note This is an immutable class (i.e. its instance cannot be modified after construction).\n    \\tparam ValueT Type of JSON value (e.g. \\c Value ), which also determine the encoding.\n    \\tparam Allocator Allocator type for allocating memory of this document.\n*/\ntemplate <typename ValueT, typename Allocator = CrtAllocator>\nclass GenericSchemaDocument {\npublic:\n    typedef ValueT ValueType;\n    typedef IGenericRemoteSchemaDocumentProvider<GenericSchemaDocument> IRemoteSchemaDocumentProviderType;\n    typedef Allocator AllocatorType;\n    typedef typename ValueType::EncodingType EncodingType;\n    typedef typename EncodingType::Ch Ch;\n    typedef internal::Schema<GenericSchemaDocument> SchemaType;\n    typedef GenericPointer<ValueType, Allocator> PointerType;\n    typedef GenericValue<EncodingType, AllocatorType> GValue;\n    typedef GenericUri<ValueType, Allocator> UriType;\n    typedef GenericStringRef<Ch> StringRefType;\n    friend class internal::Schema<GenericSchemaDocument>;\n    template <typename, typename, typename>\n    friend class GenericSchemaValidator;\n\n    //! Constructor.\n    /*!\n        Compile a JSON document into schema document.\n\n        \\param document A JSON document as source.\n        \\param uri The base URI of this schema document for purposes of violation reporting.\n        \\param uriLength Length of \\c name, in code points.\n        \\param remoteProvider An optional remote schema document provider for resolving remote reference. 
Can be null.\n        \\param allocator An optional allocator instance for allocating memory. Can be null.\n        \\param pointer An optional JSON pointer to the start of the schema document\n        \\param spec Optional schema draft or OpenAPI version. Used if no specification in document. Defaults to draft-04.\n    */\n    explicit GenericSchemaDocument(const ValueType& document, const Ch* uri = 0, SizeType uriLength = 0,\n        IRemoteSchemaDocumentProviderType* remoteProvider = 0, Allocator* allocator = 0,\n        const PointerType& pointer = PointerType(), // PR #1393\n        const Specification& spec = Specification(kDraft04)) :\n        remoteProvider_(remoteProvider),\n        allocator_(allocator),\n        ownAllocator_(),\n        root_(),\n        typeless_(),\n        schemaMap_(allocator, kInitialSchemaMapSize),\n        schemaRef_(allocator, kInitialSchemaRefSize),\n        spec_(spec),\n        error_(kObjectType),\n        currentError_()\n    {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaDocument::GenericSchemaDocument\");\n        if (!allocator_)\n            ownAllocator_ = allocator_ = RAPIDJSON_NEW(Allocator)();\n\n        Ch noUri[1] = {0};\n        uri_.SetString(uri ? 
uri : noUri, uriLength, *allocator_);\n        docId_ = UriType(uri_, allocator_);\n\n        typeless_ = static_cast<SchemaType*>(allocator_->Malloc(sizeof(SchemaType)));\n        new (typeless_) SchemaType(this, PointerType(), ValueType(kObjectType).Move(), ValueType(kObjectType).Move(), allocator_, docId_);\n\n        // Establish the schema draft or open api version.\n        // We only ever look for '$schema' or 'swagger' or 'openapi' at the root of the document.\n        SetSchemaSpecification(document);\n\n        // Generate root schema, it will call CreateSchema() to create sub-schemas,\n        // And call HandleRefSchema() if there are $ref.\n        // PR #1393 use input pointer if supplied\n        root_ = typeless_;\n        if (pointer.GetTokenCount() == 0) {\n            CreateSchemaRecursive(&root_, pointer, document, document, docId_);\n        }\n        else if (const ValueType* v = pointer.Get(document)) {\n            CreateSchema(&root_, pointer, *v, document, docId_);\n        }\n        else {\n            GenericStringBuffer<EncodingType> sb;\n            pointer.StringifyUriFragment(sb);\n            SchemaErrorValue(kSchemaErrorStartUnknown, PointerType(), sb.GetString(), static_cast<SizeType>(sb.GetSize() / sizeof(Ch)));\n        }\n\n        RAPIDJSON_ASSERT(root_ != 0);\n\n        schemaRef_.ShrinkToFit(); // Deallocate all memory for ref\n    }\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    //! 
Move constructor in C++11\n    GenericSchemaDocument(GenericSchemaDocument&& rhs) RAPIDJSON_NOEXCEPT :\n        remoteProvider_(rhs.remoteProvider_),\n        allocator_(rhs.allocator_),\n        ownAllocator_(rhs.ownAllocator_),\n        root_(rhs.root_),\n        typeless_(rhs.typeless_),\n        schemaMap_(std::move(rhs.schemaMap_)),\n        schemaRef_(std::move(rhs.schemaRef_)),\n        uri_(std::move(rhs.uri_)),\n        docId_(std::move(rhs.docId_)),\n        spec_(rhs.spec_),\n        error_(std::move(rhs.error_)),\n        currentError_(std::move(rhs.currentError_))\n    {\n        rhs.remoteProvider_ = 0;\n        rhs.allocator_ = 0;\n        rhs.ownAllocator_ = 0;\n        rhs.typeless_ = 0;\n    }\n#endif\n\n    //! Destructor\n    ~GenericSchemaDocument() {\n        while (!schemaMap_.Empty())\n            schemaMap_.template Pop<SchemaEntry>(1)->~SchemaEntry();\n\n        if (typeless_) {\n            typeless_->~SchemaType();\n            Allocator::Free(typeless_);\n        }\n\n        // these may contain some allocator data so clear before deleting ownAllocator_\n        uri_.SetNull();\n        error_.SetNull();\n        currentError_.SetNull();\n\n        RAPIDJSON_DELETE(ownAllocator_);\n    }\n\n    const GValue& GetURI() const { return uri_; }\n\n    const Specification& GetSpecification() const { return spec_; }\n    bool IsSupportedSpecification() const { return spec_.IsSupported(); }\n\n    //! Static method to get the specification of any schema document\n    //  Returns kDraftNone if document is silent\n    static const Specification GetSpecification(const ValueType& document) {\n      SchemaDraft draft = GetSchemaDraft(document);\n      if (draft != kDraftNone)\n        return Specification(draft);\n      else {\n        OpenApiVersion oapi = GetOpenApiVersion(document);\n        if (oapi != kVersionNone)\n          return Specification(oapi);\n      }\n      return Specification(kDraftNone);\n    }\n\n    //! 
Get the root schema.\n    const SchemaType& GetRoot() const { return *root_; }\n\n    //! Gets the error object.\n    GValue& GetError() { return error_; }\n    const GValue& GetError() const { return error_; }\n\n    static const StringRefType& GetSchemaErrorKeyword(SchemaErrorCode schemaErrorCode) {\n        switch (schemaErrorCode) {\n            case kSchemaErrorStartUnknown:             return GetStartUnknownString();\n            case kSchemaErrorRefPlainName:             return GetRefPlainNameString();\n            case kSchemaErrorRefInvalid:               return GetRefInvalidString();\n            case kSchemaErrorRefPointerInvalid:        return GetRefPointerInvalidString();\n            case kSchemaErrorRefUnknown:               return GetRefUnknownString();\n            case kSchemaErrorRefCyclical:              return GetRefCyclicalString();\n            case kSchemaErrorRefNoRemoteProvider:      return GetRefNoRemoteProviderString();\n            case kSchemaErrorRefNoRemoteSchema:        return GetRefNoRemoteSchemaString();\n            case kSchemaErrorRegexInvalid:             return GetRegexInvalidString();\n            case kSchemaErrorSpecUnknown:              return GetSpecUnknownString();\n            case kSchemaErrorSpecUnsupported:          return GetSpecUnsupportedString();\n            case kSchemaErrorSpecIllegal:              return GetSpecIllegalString();\n            case kSchemaErrorReadOnlyAndWriteOnly:     return GetReadOnlyAndWriteOnlyString();\n            default:                                   return GetNullString();\n        }\n    }\n\n    //! Default error method\n    void SchemaError(const SchemaErrorCode code, const PointerType& location) {\n      currentError_ = GValue(kObjectType);\n      AddCurrentError(code, location);\n    }\n\n    //! 
Method for error with single string value insert\n    void SchemaErrorValue(const SchemaErrorCode code, const PointerType& location, const Ch* value, SizeType length) {\n      currentError_ = GValue(kObjectType);\n      currentError_.AddMember(GetValueString(), GValue(value, length, *allocator_).Move(), *allocator_);\n      AddCurrentError(code, location);\n    }\n\n    //! Method for error with invalid pointer\n    void SchemaErrorPointer(const SchemaErrorCode code, const PointerType& location, const Ch* value, SizeType length, const PointerType& pointer) {\n      currentError_ = GValue(kObjectType);\n      currentError_.AddMember(GetValueString(), GValue(value, length, *allocator_).Move(), *allocator_);\n      currentError_.AddMember(GetOffsetString(), static_cast<SizeType>(pointer.GetParseErrorOffset() / sizeof(Ch)), *allocator_);\n      AddCurrentError(code, location);\n    }\n\n  private:\n    //! Prohibit copying\n    GenericSchemaDocument(const GenericSchemaDocument&);\n    //! Prohibit assignment\n    GenericSchemaDocument& operator=(const GenericSchemaDocument&);\n\n    typedef const PointerType* SchemaRefPtr; // PR #1393\n\n    struct SchemaEntry {\n        SchemaEntry(const PointerType& p, SchemaType* s, bool o, Allocator* allocator) : pointer(p, allocator), schema(s), owned(o) {}\n        ~SchemaEntry() {\n            if (owned) {\n                schema->~SchemaType();\n                Allocator::Free(schema);\n            }\n        }\n        PointerType pointer;\n        SchemaType* schema;\n        bool owned;\n    };\n\n    void AddErrorInstanceLocation(GValue& result, const PointerType& location) {\n      GenericStringBuffer<EncodingType> sb;\n      location.StringifyUriFragment(sb);\n      GValue instanceRef(sb.GetString(), static_cast<SizeType>(sb.GetSize() / sizeof(Ch)), *allocator_);\n      result.AddMember(GetInstanceRefString(), instanceRef, *allocator_);\n    }\n\n    void AddError(GValue& keyword, GValue& error) {\n      typename 
GValue::MemberIterator member = error_.FindMember(keyword);\n      if (member == error_.MemberEnd())\n        error_.AddMember(keyword, error, *allocator_);\n      else {\n        if (member->value.IsObject()) {\n          GValue errors(kArrayType);\n          errors.PushBack(member->value, *allocator_);\n          member->value = errors;\n        }\n        member->value.PushBack(error, *allocator_);\n      }\n    }\n\n    void AddCurrentError(const SchemaErrorCode code, const PointerType& location) {\n      RAPIDJSON_SCHEMA_PRINT(InvalidKeyword, GetSchemaErrorKeyword(code));\n      currentError_.AddMember(GetErrorCodeString(), code, *allocator_);\n      AddErrorInstanceLocation(currentError_, location);\n      AddError(GValue(GetSchemaErrorKeyword(code)).Move(), currentError_);\n    }\n\n#define RAPIDJSON_STRING_(name, ...) \\\n    static const StringRefType& Get##name##String() {\\\n        static const Ch s[] = { __VA_ARGS__, '\\0' };\\\n        static const StringRefType v(s, static_cast<SizeType>(sizeof(s) / sizeof(Ch) - 1)); \\\n        return v;\\\n    }\n\n    RAPIDJSON_STRING_(InstanceRef, 'i', 'n', 's', 't', 'a', 'n', 'c', 'e', 'R', 'e', 'f')\n    RAPIDJSON_STRING_(ErrorCode, 'e', 'r', 'r', 'o', 'r', 'C', 'o', 'd', 'e')\n    RAPIDJSON_STRING_(Value, 'v', 'a', 'l', 'u', 'e')\n    RAPIDJSON_STRING_(Offset, 'o', 'f', 'f', 's', 'e', 't')\n\n    RAPIDJSON_STRING_(Null, 'n', 'u', 'l', 'l')\n    RAPIDJSON_STRING_(SpecUnknown, 'S', 'p', 'e', 'c', 'U', 'n', 'k', 'n', 'o', 'w', 'n')\n    RAPIDJSON_STRING_(SpecUnsupported, 'S', 'p', 'e', 'c', 'U', 'n', 's', 'u', 'p', 'p', 'o', 'r', 't', 'e', 'd')\n    RAPIDJSON_STRING_(SpecIllegal, 'S', 'p', 'e', 'c', 'I', 'l', 'l', 'e', 'g', 'a', 'l')\n    RAPIDJSON_STRING_(StartUnknown, 'S', 't', 'a', 'r', 't', 'U', 'n', 'k', 'n', 'o', 'w', 'n')\n    RAPIDJSON_STRING_(RefPlainName, 'R', 'e', 'f', 'P', 'l', 'a', 'i', 'n', 'N', 'a', 'm', 'e')\n    RAPIDJSON_STRING_(RefInvalid, 'R', 'e', 'f', 'I', 'n', 'v', 'a', 'l', 'i', 'd')\n    
RAPIDJSON_STRING_(RefPointerInvalid, 'R', 'e', 'f', 'P', 'o', 'i', 'n', 't', 'e', 'r', 'I', 'n', 'v', 'a', 'l', 'i', 'd')\n    RAPIDJSON_STRING_(RefUnknown, 'R', 'e', 'f', 'U', 'n', 'k', 'n', 'o', 'w', 'n')\n    RAPIDJSON_STRING_(RefCyclical, 'R', 'e', 'f', 'C', 'y', 'c', 'l', 'i', 'c', 'a', 'l')\n    RAPIDJSON_STRING_(RefNoRemoteProvider, 'R', 'e', 'f', 'N', 'o', 'R', 'e', 'm', 'o', 't', 'e', 'P', 'r', 'o', 'v', 'i', 'd', 'e', 'r')\n    RAPIDJSON_STRING_(RefNoRemoteSchema, 'R', 'e', 'f', 'N', 'o', 'R', 'e', 'm', 'o', 't', 'e', 'S', 'c', 'h', 'e', 'm', 'a')\n    RAPIDJSON_STRING_(ReadOnlyAndWriteOnly, 'R', 'e', 'a', 'd', 'O', 'n', 'l', 'y', 'A', 'n', 'd', 'W', 'r', 'i', 't', 'e', 'O', 'n', 'l', 'y')\n    RAPIDJSON_STRING_(RegexInvalid, 'R', 'e', 'g', 'e', 'x', 'I', 'n', 'v', 'a', 'l', 'i', 'd')\n\n#undef RAPIDJSON_STRING_\n\n    // Static method to get schema draft of any schema document\n    static SchemaDraft GetSchemaDraft(const ValueType& document) {\n        static const Ch kDraft03String[] = { 'h', 't', 't', 'p', ':', '/', '/', 'j', 's', 'o', 'n', '-', 's', 'c', 'h', 'e', 'm', 'a', '.', 'o', 'r', 'g', '/', 'd', 'r', 'a', 'f', 't', '-', '0', '3', '/', 's', 'c', 'h', 'e', 'm', 'a', '#', '\\0' };\n        static const Ch kDraft04String[] = { 'h', 't', 't', 'p', ':', '/', '/', 'j', 's', 'o', 'n', '-', 's', 'c', 'h', 'e', 'm', 'a', '.', 'o', 'r', 'g', '/', 'd', 'r', 'a', 'f', 't', '-', '0', '4', '/', 's', 'c', 'h', 'e', 'm', 'a', '#', '\\0' };\n        static const Ch kDraft05String[] = { 'h', 't', 't', 'p', ':', '/', '/', 'j', 's', 'o', 'n', '-', 's', 'c', 'h', 'e', 'm', 'a', '.', 'o', 'r', 'g', '/', 'd', 'r', 'a', 'f', 't', '-', '0', '5', '/', 's', 'c', 'h', 'e', 'm', 'a', '#', '\\0' };\n        static const Ch kDraft06String[] = { 'h', 't', 't', 'p', ':', '/', '/', 'j', 's', 'o', 'n', '-', 's', 'c', 'h', 'e', 'm', 'a', '.', 'o', 'r', 'g', '/', 'd', 'r', 'a', 'f', 't', '-', '0', '6', '/', 's', 'c', 'h', 'e', 'm', 'a', '#', '\\0' };\n        static const Ch 
kDraft07String[] = { 'h', 't', 't', 'p', ':', '/', '/', 'j', 's', 'o', 'n', '-', 's', 'c', 'h', 'e', 'm', 'a', '.', 'o', 'r', 'g', '/', 'd', 'r', 'a', 'f', 't', '-', '0', '7', '/', 's', 'c', 'h', 'e', 'm', 'a', '#', '\\0' };\n        static const Ch kDraft2019_09String[] = { 'h', 't', 't', 'p', 's', ':', '/', '/', 'j', 's', 'o', 'n', '-', 's', 'c', 'h', 'e', 'm', 'a', '.', 'o', 'r', 'g', '/', 'd', 'r', 'a', 'f', 't', '/', '2', '0', '1', '9', '-', '0', '9', '/', 's', 'c', 'h', 'e', 'm', 'a', '\\0' };\n        static const Ch kDraft2020_12String[] = { 'h', 't', 't', 'p', 's', ':', '/', '/', 'j', 's', 'o', 'n', '-', 's', 'c', 'h', 'e', 'm', 'a', '.', 'o', 'r', 'g', '/', 'd', 'r', 'a', 'f', 't', '/', '2', '0', '2', '0', '-', '1', '2', '/', 's', 'c', 'h', 'e', 'm', 'a', '\\0' };\n\n        if (!document.IsObject()) {\n            return kDraftNone;\n        }\n\n        // Get the schema draft from the $schema keyword at the supplied location\n        typename ValueType::ConstMemberIterator itr = document.FindMember(SchemaType::GetSchemaString());\n        if (itr != document.MemberEnd()) {\n            if (!itr->value.IsString()) return kDraftUnknown;\n            const UriType draftUri(itr->value);\n            // Check base uri for match\n            if (draftUri.Match(UriType(kDraft04String), false)) return kDraft04;\n            if (draftUri.Match(UriType(kDraft05String), false)) return kDraft05;\n            if (draftUri.Match(UriType(kDraft06String), false)) return kDraft06;\n            if (draftUri.Match(UriType(kDraft07String), false)) return kDraft07;\n            if (draftUri.Match(UriType(kDraft03String), false)) return kDraft03;\n            if (draftUri.Match(UriType(kDraft2019_09String), false)) return kDraft2019_09;\n            if (draftUri.Match(UriType(kDraft2020_12String), false)) return kDraft2020_12;\n            return kDraftUnknown;\n        }\n        // $schema not found\n        return kDraftNone;\n    }\n\n\n    // Get open api version of 
any schema document\n    static OpenApiVersion GetOpenApiVersion(const ValueType& document) {\n        static const Ch kVersion20String[] = { '2', '.', '0', '\\0' };\n        static const Ch kVersion30String[] = { '3', '.', '0', '.', '\\0' }; // ignore patch level\n        static const Ch kVersion31String[] = { '3', '.', '1', '.', '\\0' }; // ignore patch level\n        static SizeType len = internal::StrLen<Ch>(kVersion30String);\n\n        if (!document.IsObject()) {\n            return kVersionNone;\n        }\n\n        // Get the open api version from the swagger / openapi keyword at the supplied location\n        typename ValueType::ConstMemberIterator itr = document.FindMember(SchemaType::GetSwaggerString());\n        if (itr == document.MemberEnd()) itr = document.FindMember(SchemaType::GetOpenApiString());\n        if (itr != document.MemberEnd()) {\n            if (!itr->value.IsString()) return kVersionUnknown;\n            const ValueType kVersion20Value(kVersion20String);\n            if (kVersion20Value == itr->value) return kVersion20; // must match 2.0 exactly\n            const ValueType kVersion30Value(kVersion30String);\n            if (itr->value.GetStringLength() > len && kVersion30Value == ValueType(itr->value.GetString(), len)) return kVersion30; // must match 3.0.x\n            const ValueType kVersion31Value(kVersion31String);\n            if (itr->value.GetStringLength() > len && kVersion31Value == ValueType(itr->value.GetString(), len)) return kVersion31; // must match 3.1.x\n            return kVersionUnknown;\n        }\n        // swagger or openapi not found\n        return kVersionNone;\n    }\n\n    // Get the draft of the schema or the open api version (which implies the draft).\n    // Report an error if schema draft or open api version not supported or not recognized, or both in document, and carry on.\n    void SetSchemaSpecification(const ValueType& document) {\n        // Look for '$schema', 'swagger' or 'openapi' keyword at 
document root\n        SchemaDraft docDraft = GetSchemaDraft(document);\n        OpenApiVersion docOapi = GetOpenApiVersion(document);\n        // Error if both in document\n        if (docDraft != kDraftNone && docOapi != kVersionNone)\n          SchemaError(kSchemaErrorSpecIllegal, PointerType());\n        // Use document draft or open api version if present or use spec from constructor\n        if (docDraft != kDraftNone)\n            spec_ = Specification(docDraft);\n        else if (docOapi != kVersionNone)\n            spec_ = Specification(docOapi);\n        // Error if draft or version unknown\n        if (spec_.draft == kDraftUnknown || spec_.oapi == kVersionUnknown)\n          SchemaError(kSchemaErrorSpecUnknown, PointerType());\n        else if (!spec_.IsSupported())\n            SchemaError(kSchemaErrorSpecUnsupported, PointerType());\n    }\n\n    // Changed by PR #1393\n    void CreateSchemaRecursive(const SchemaType** schema, const PointerType& pointer, const ValueType& v, const ValueType& document, const UriType& id) {\n        if (v.GetType() == kObjectType) {\n            UriType newid = UriType(CreateSchema(schema, pointer, v, document, id), allocator_);\n\n            for (typename ValueType::ConstMemberIterator itr = v.MemberBegin(); itr != v.MemberEnd(); ++itr)\n                CreateSchemaRecursive(0, pointer.Append(itr->name, allocator_), itr->value, document, newid);\n        }\n        else if (v.GetType() == kArrayType)\n            for (SizeType i = 0; i < v.Size(); i++)\n                CreateSchemaRecursive(0, pointer.Append(i, allocator_), v[i], document, id);\n    }\n\n    // Changed by PR #1393\n    const UriType& CreateSchema(const SchemaType** schema, const PointerType& pointer, const ValueType& v, const ValueType& document, const UriType& id) {\n        RAPIDJSON_ASSERT(pointer.IsValid());\n        GenericStringBuffer<EncodingType> sb;\n        pointer.StringifyUriFragment(sb);\n        RAPIDJSON_SCHEMA_PRINT(Method, 
\"GenericSchemaDocument::CreateSchema\", sb.GetString(), id.GetString());\n        if (v.IsObject()) {\n            if (const SchemaType* sc = GetSchema(pointer)) {\n                if (schema)\n                    *schema = sc;\n                AddSchemaRefs(const_cast<SchemaType*>(sc));\n            }\n            else if (!HandleRefSchema(pointer, schema, v, document, id)) {\n                // The new schema constructor adds itself and its $ref(s) to schemaMap_\n                SchemaType* s = new (allocator_->Malloc(sizeof(SchemaType))) SchemaType(this, pointer, v, document, allocator_, id);\n                if (schema)\n                    *schema = s;\n                return s->GetId();\n            }\n        }\n        else {\n            if (schema)\n                *schema = typeless_;\n            AddSchemaRefs(typeless_);\n        }\n        return id;\n    }\n\n    // Changed by PR #1393\n    // TODO should this return a UriType& ?\n    bool HandleRefSchema(const PointerType& source, const SchemaType** schema, const ValueType& v, const ValueType& document, const UriType& id) {\n        typename ValueType::ConstMemberIterator itr = v.FindMember(SchemaType::GetRefString());\n        if (itr == v.MemberEnd())\n            return false;\n\n        GenericStringBuffer<EncodingType> sb;\n        source.StringifyUriFragment(sb);\n        RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaDocument::HandleRefSchema\", sb.GetString(), id.GetString());\n        // Resolve the source pointer to the $ref'ed schema (finally)\n        new (schemaRef_.template Push<SchemaRefPtr>()) SchemaRefPtr(&source);\n\n        if (itr->value.IsString()) {\n            SizeType len = itr->value.GetStringLength();\n            if (len == 0)\n                SchemaError(kSchemaErrorRefInvalid, source);\n            else {\n                // First resolve $ref against the in-scope id\n                UriType scopeId = UriType(id, allocator_);\n                UriType ref = 
UriType(itr->value, allocator_).Resolve(scopeId, allocator_);\n                RAPIDJSON_SCHEMA_PRINT(SchemaIds, id.GetString(), itr->value.GetString(), ref.GetString());\n                // See if the resolved $ref minus the fragment matches a resolved id in this document\n                // Search from the root. Returns the subschema in the document and its absolute JSON pointer.\n                PointerType basePointer = PointerType();\n                const ValueType *base = FindId(document, ref, basePointer, docId_, false);\n                if (!base) {\n                    // Remote reference - call the remote document provider\n                    if (!remoteProvider_)\n                        SchemaError(kSchemaErrorRefNoRemoteProvider, source);\n                    else {\n                        if (const GenericSchemaDocument* remoteDocument = remoteProvider_->GetRemoteDocument(ref, spec_)) {\n                            const Ch* s = ref.GetFragString();\n                            len = ref.GetFragStringLength();\n                            if (len <= 1 || s[1] == '/') {\n                                // JSON pointer fragment, absolute in the remote schema\n                                const PointerType pointer(s, len, allocator_);\n                                if (!pointer.IsValid())\n                                    SchemaErrorPointer(kSchemaErrorRefPointerInvalid, source, s, len, pointer);\n                                else {\n                                    // Get the subschema\n                                    if (const SchemaType *sc = remoteDocument->GetSchema(pointer)) {\n                                        if (schema)\n                                            *schema = sc;\n                                        AddSchemaRefs(const_cast<SchemaType *>(sc));\n                                        return true;\n                                    } else\n                                        
SchemaErrorValue(kSchemaErrorRefUnknown, source, ref.GetString(), ref.GetStringLength());\n                                }\n                            } else\n                                // Plain name fragment, not allowed in remote schema\n                                SchemaErrorValue(kSchemaErrorRefPlainName, source, s, len);\n                        } else\n                          SchemaErrorValue(kSchemaErrorRefNoRemoteSchema, source, ref.GetString(), ref.GetStringLength());\n                    }\n                }\n                else { // Local reference\n                    const Ch* s = ref.GetFragString();\n                    len = ref.GetFragStringLength();\n                    if (len <= 1 || s[1] == '/') {\n                        // JSON pointer fragment, relative to the resolved URI\n                        const PointerType relPointer(s, len, allocator_);\n                        if (!relPointer.IsValid())\n                            SchemaErrorPointer(kSchemaErrorRefPointerInvalid, source, s, len, relPointer);\n                        else {\n                            // Get the subschema\n                            if (const ValueType *pv = relPointer.Get(*base)) {\n                                // Now get the absolute JSON pointer by adding relative to base\n                                PointerType pointer(basePointer, allocator_);\n                                for (SizeType i = 0; i < relPointer.GetTokenCount(); i++)\n                                    pointer = pointer.Append(relPointer.GetTokens()[i], allocator_);\n                                if (IsCyclicRef(pointer))\n                                    SchemaErrorValue(kSchemaErrorRefCyclical, source, ref.GetString(), ref.GetStringLength());\n                                else {\n                                    // Call CreateSchema recursively, but first compute the in-scope id for the $ref target as we have jumped there\n                                  
  // TODO: cache pointer <-> id mapping\n                                    size_t unresolvedTokenIndex;\n                                    scopeId = pointer.GetUri(document, docId_, &unresolvedTokenIndex, allocator_);\n                                    CreateSchema(schema, pointer, *pv, document, scopeId);\n                                    return true;\n                                }\n                            } else\n                                SchemaErrorValue(kSchemaErrorRefUnknown, source, ref.GetString(), ref.GetStringLength());\n                        }\n                    } else {\n                        // Plain name fragment, relative to the resolved URI\n                        // Not supported in open api 2.0 and 3.0\n                        PointerType pointer(allocator_);\n                        if (spec_.oapi == kVersion20 || spec_.oapi == kVersion30)\n                            SchemaErrorValue(kSchemaErrorRefPlainName, source, s, len);\n                        // See if the fragment matches an id in this document.\n                        // Search from the base we just established. 
Returns the subschema in the document and its absolute JSON pointer.\n                        else if (const ValueType *pv = FindId(*base, ref, pointer, UriType(ref.GetBaseString(), ref.GetBaseStringLength(), allocator_), true, basePointer)) {\n                            if (IsCyclicRef(pointer))\n                                SchemaErrorValue(kSchemaErrorRefCyclical, source, ref.GetString(), ref.GetStringLength());\n                            else {\n                                // Call CreateSchema recursively, but first compute the in-scope id for the $ref target as we have jumped there\n                                // TODO: cache pointer <-> id mapping\n                                size_t unresolvedTokenIndex;\n                                scopeId = pointer.GetUri(document, docId_, &unresolvedTokenIndex, allocator_);\n                                CreateSchema(schema, pointer, *pv, document, scopeId);\n                                return true;\n                            }\n                        } else\n                            SchemaErrorValue(kSchemaErrorRefUnknown, source, ref.GetString(), ref.GetStringLength());\n                    }\n                }\n            }\n        }\n\n        // Invalid/Unknown $ref\n        if (schema)\n            *schema = typeless_;\n        AddSchemaRefs(typeless_);\n        return true;\n    }\n\n    //! 
Find the first subschema with a resolved 'id' that matches the specified URI.\n    // If 'full' is specified, match the entire URI; otherwise ignore the fragment.\n    // If found, return a pointer to the subschema and its JSON pointer.\n    // TODO: cache pointer <-> id mapping\n    ValueType* FindId(const ValueType& doc, const UriType& finduri, PointerType& resptr, const UriType& baseuri, bool full, const PointerType& here = PointerType()) const {\n        SizeType i = 0;\n        ValueType* resval = 0;\n        UriType tempuri = UriType(finduri, allocator_);\n        UriType localuri = UriType(baseuri, allocator_);\n        if (doc.GetType() == kObjectType) {\n            // Establish the base URI of this object\n            typename ValueType::ConstMemberIterator m = doc.FindMember(SchemaType::GetIdString());\n            if (m != doc.MemberEnd() && m->value.GetType() == kStringType) {\n                localuri = UriType(m->value, allocator_).Resolve(baseuri, allocator_);\n            }\n            // See if it matches\n            if (localuri.Match(finduri, full)) {\n                RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaDocument::FindId (match)\", full ? 
localuri.GetString() : localuri.GetBaseString());\n                resval = const_cast<ValueType *>(&doc);\n                resptr = here;\n                return resval;\n            }\n            // No match, continue looking\n            for (m = doc.MemberBegin(); m != doc.MemberEnd(); ++m) {\n                if (m->value.GetType() == kObjectType || m->value.GetType() == kArrayType) {\n                    resval = FindId(m->value, finduri, resptr, localuri, full, here.Append(m->name.GetString(), m->name.GetStringLength(), allocator_));\n                }\n                if (resval) break;\n            }\n        } else if (doc.GetType() == kArrayType) {\n            // Continue looking\n            for (typename ValueType::ConstValueIterator v = doc.Begin(); v != doc.End(); ++v) {\n                if (v->GetType() == kObjectType || v->GetType() == kArrayType) {\n                    resval = FindId(*v, finduri, resptr, localuri, full, here.Append(i, allocator_));\n                }\n                if (resval) break;\n                i++;\n            }\n        }\n        return resval;\n    }\n\n    // Added by PR #1393\n    void AddSchemaRefs(SchemaType* schema) {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaDocument::AddSchemaRefs\");\n        while (!schemaRef_.Empty()) {\n            SchemaRefPtr *ref = schemaRef_.template Pop<SchemaRefPtr>(1);\n            SchemaEntry *entry = schemaMap_.template Push<SchemaEntry>();\n            new (entry) SchemaEntry(**ref, schema, false, allocator_);\n        }\n    }\n\n    // Added by PR #1393\n    bool IsCyclicRef(const PointerType& pointer) const {\n        for (const SchemaRefPtr* ref = schemaRef_.template Bottom<SchemaRefPtr>(); ref != schemaRef_.template End<SchemaRefPtr>(); ++ref)\n            if (pointer == **ref)\n                return true;\n        return false;\n    }\n\n    const SchemaType* GetSchema(const PointerType& pointer) const {\n        for (const SchemaEntry* target = 
schemaMap_.template Bottom<SchemaEntry>(); target != schemaMap_.template End<SchemaEntry>(); ++target)\n            if (pointer == target->pointer)\n                return target->schema;\n        return 0;\n    }\n\n    PointerType GetPointer(const SchemaType* schema) const {\n        for (const SchemaEntry* target = schemaMap_.template Bottom<SchemaEntry>(); target != schemaMap_.template End<SchemaEntry>(); ++target)\n            if (schema == target->schema)\n                return target->pointer;\n        return PointerType();\n    }\n\n    const SchemaType* GetTypeless() const { return typeless_; }\n\n    static const size_t kInitialSchemaMapSize = 64;\n    static const size_t kInitialSchemaRefSize = 64;\n\n    IRemoteSchemaDocumentProviderType* remoteProvider_;\n    Allocator *allocator_;\n    Allocator *ownAllocator_;\n    const SchemaType* root_;                //!< Root schema.\n    SchemaType* typeless_;\n    internal::Stack<Allocator> schemaMap_;  // Stores created Pointer -> Schemas\n    internal::Stack<Allocator> schemaRef_;  // Stores Pointer(s) from $ref(s) until resolved\n    GValue uri_;                            // Schema document URI\n    UriType docId_;\n    Specification spec_;\n    GValue error_;\n    GValue currentError_;\n};\n\n//! GenericSchemaDocument using Value type.\ntypedef GenericSchemaDocument<Value> SchemaDocument;\n//! IGenericRemoteSchemaDocumentProvider using SchemaDocument.\ntypedef IGenericRemoteSchemaDocumentProvider<SchemaDocument> IRemoteSchemaDocumentProvider;\n\n///////////////////////////////////////////////////////////////////////////////\n// GenericSchemaValidator\n\n//! 
JSON Schema Validator.\n/*!\n    A SAX style JSON schema validator.\n    It uses a \\c GenericSchemaDocument to validate SAX events.\n    It delegates the incoming SAX events to an output handler.\n    The default output handler does nothing.\n    It can be reused multiple times by calling \\c Reset().\n\n    \\tparam SchemaDocumentType Type of schema document.\n    \\tparam OutputHandler Type of output handler. Default handler does nothing.\n    \\tparam StateAllocator Allocator for storing the internal validation states.\n*/\ntemplate <\n    typename SchemaDocumentType,\n    typename OutputHandler = BaseReaderHandler<typename SchemaDocumentType::SchemaType::EncodingType>,\n    typename StateAllocator = CrtAllocator>\nclass GenericSchemaValidator :\n    public internal::ISchemaStateFactory<typename SchemaDocumentType::SchemaType>, \n    public internal::ISchemaValidator,\n    public internal::IValidationErrorHandler<typename SchemaDocumentType::SchemaType> {\npublic:\n    typedef typename SchemaDocumentType::SchemaType SchemaType;\n    typedef typename SchemaDocumentType::PointerType PointerType;\n    typedef typename SchemaType::EncodingType EncodingType;\n    typedef typename SchemaType::SValue SValue;\n    typedef typename EncodingType::Ch Ch;\n    typedef GenericStringRef<Ch> StringRefType;\n    typedef GenericValue<EncodingType, StateAllocator> ValueType;\n\n    //! 
Constructor without output handler.\n    /*!\n        \\param schemaDocument The schema document to conform to.\n        \\param allocator Optional allocator for storing internal validation states.\n        \\param schemaStackCapacity Optional initial capacity of schema path stack.\n        \\param documentStackCapacity Optional initial capacity of document path stack.\n    */\n    GenericSchemaValidator(\n        const SchemaDocumentType& schemaDocument,\n        StateAllocator* allocator = 0, \n        size_t schemaStackCapacity = kDefaultSchemaStackCapacity,\n        size_t documentStackCapacity = kDefaultDocumentStackCapacity)\n        :\n        schemaDocument_(&schemaDocument),\n        root_(schemaDocument.GetRoot()),\n        stateAllocator_(allocator),\n        ownStateAllocator_(0),\n        schemaStack_(allocator, schemaStackCapacity),\n        documentStack_(allocator, documentStackCapacity),\n        outputHandler_(0),\n        error_(kObjectType),\n        currentError_(),\n        missingDependents_(),\n        valid_(true),\n        flags_(kValidateDefaultFlags),\n        depth_(0)\n    {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaValidator::GenericSchemaValidator\");\n    }\n\n    //! 
Constructor with output handler.\n    /*!\n        \\param schemaDocument The schema document to conform to.\n        \\param allocator Optional allocator for storing internal validation states.\n        \\param schemaStackCapacity Optional initial capacity of schema path stack.\n        \\param documentStackCapacity Optional initial capacity of document path stack.\n    */\n    GenericSchemaValidator(\n        const SchemaDocumentType& schemaDocument,\n        OutputHandler& outputHandler,\n        StateAllocator* allocator = 0, \n        size_t schemaStackCapacity = kDefaultSchemaStackCapacity,\n        size_t documentStackCapacity = kDefaultDocumentStackCapacity)\n        :\n        schemaDocument_(&schemaDocument),\n        root_(schemaDocument.GetRoot()),\n        stateAllocator_(allocator),\n        ownStateAllocator_(0),\n        schemaStack_(allocator, schemaStackCapacity),\n        documentStack_(allocator, documentStackCapacity),\n        outputHandler_(&outputHandler),\n        error_(kObjectType),\n        currentError_(),\n        missingDependents_(),\n        valid_(true),\n        flags_(kValidateDefaultFlags),\n        depth_(0)\n    {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaValidator::GenericSchemaValidator (output handler)\");\n    }\n\n    //! Destructor.\n    ~GenericSchemaValidator() {\n        Reset();\n        RAPIDJSON_DELETE(ownStateAllocator_);\n    }\n\n    //! Reset the internal states.\n    void Reset() {\n        while (!schemaStack_.Empty())\n            PopSchema();\n        documentStack_.Clear();\n        ResetError();\n    }\n\n    //! Reset the error state.\n    void ResetError() {\n        error_.SetObject();\n        currentError_.SetNull();\n        missingDependents_.SetNull();\n        valid_ = true;\n    }\n\n    //! 
Implementation of ISchemaValidator\n    void SetValidateFlags(unsigned flags) {\n        flags_ = flags;\n    }\n    virtual unsigned GetValidateFlags() const {\n        return flags_;\n    }\n\n    virtual bool IsValid() const {\n        if (!valid_) return false;\n        if (GetContinueOnErrors() && !error_.ObjectEmpty()) return false;\n        return true;\n    }\n    //! End of Implementation of ISchemaValidator\n\n    //! Gets the error object.\n    ValueType& GetError() { return error_; }\n    const ValueType& GetError() const { return error_; }\n\n    //! Gets the JSON pointer pointed to the invalid schema.\n    //  If reporting all errors, the stack will be empty.\n    PointerType GetInvalidSchemaPointer() const {\n        return schemaStack_.Empty() ? PointerType() : CurrentSchema().GetPointer();\n    }\n\n    //! Gets the keyword of invalid schema.\n    //  If reporting all errors, the stack will be empty, so return \"errors\".\n    const Ch* GetInvalidSchemaKeyword() const {\n        if (!schemaStack_.Empty()) return CurrentContext().invalidKeyword;\n        if (GetContinueOnErrors() && !error_.ObjectEmpty()) return static_cast<const Ch*>(GetErrorsString());\n        return 0;\n    }\n\n    //! Gets the error code of invalid schema.\n    //  If reporting all errors, the stack will be empty, so return kValidateErrors.\n    ValidateErrorCode GetInvalidSchemaCode() const {\n        if (!schemaStack_.Empty()) return CurrentContext().invalidCode;\n        if (GetContinueOnErrors() && !error_.ObjectEmpty()) return kValidateErrors;\n        return kValidateErrorNone;\n    }\n\n    //! 
Gets the JSON pointer pointed to the invalid value.\n    //  If reporting all errors, the stack will be empty.\n    PointerType GetInvalidDocumentPointer() const {\n        if (documentStack_.Empty()) {\n            return PointerType();\n        }\n        else {\n            return PointerType(documentStack_.template Bottom<Ch>(), documentStack_.GetSize() / sizeof(Ch));\n        }\n    }\n\n    void NotMultipleOf(int64_t actual, const SValue& expected) {\n        AddNumberError(kValidateErrorMultipleOf, ValueType(actual).Move(), expected);\n    }\n    void NotMultipleOf(uint64_t actual, const SValue& expected) {\n        AddNumberError(kValidateErrorMultipleOf, ValueType(actual).Move(), expected);\n    }\n    void NotMultipleOf(double actual, const SValue& expected) {\n        AddNumberError(kValidateErrorMultipleOf, ValueType(actual).Move(), expected);\n    }\n    void AboveMaximum(int64_t actual, const SValue& expected, bool exclusive) {\n        AddNumberError(exclusive ? kValidateErrorExclusiveMaximum : kValidateErrorMaximum, ValueType(actual).Move(), expected,\n            exclusive ? &SchemaType::GetExclusiveMaximumString : 0);\n    }\n    void AboveMaximum(uint64_t actual, const SValue& expected, bool exclusive) {\n        AddNumberError(exclusive ? kValidateErrorExclusiveMaximum : kValidateErrorMaximum, ValueType(actual).Move(), expected,\n            exclusive ? &SchemaType::GetExclusiveMaximumString : 0);\n    }\n    void AboveMaximum(double actual, const SValue& expected, bool exclusive) {\n        AddNumberError(exclusive ? kValidateErrorExclusiveMaximum : kValidateErrorMaximum, ValueType(actual).Move(), expected,\n            exclusive ? &SchemaType::GetExclusiveMaximumString : 0);\n    }\n    void BelowMinimum(int64_t actual, const SValue& expected, bool exclusive) {\n        AddNumberError(exclusive ? kValidateErrorExclusiveMinimum : kValidateErrorMinimum, ValueType(actual).Move(), expected,\n            exclusive ? 
&SchemaType::GetExclusiveMinimumString : 0);\n    }\n    void BelowMinimum(uint64_t actual, const SValue& expected, bool exclusive) {\n        AddNumberError(exclusive ? kValidateErrorExclusiveMinimum : kValidateErrorMinimum, ValueType(actual).Move(), expected,\n            exclusive ? &SchemaType::GetExclusiveMinimumString : 0);\n    }\n    void BelowMinimum(double actual, const SValue& expected, bool exclusive) {\n        AddNumberError(exclusive ? kValidateErrorExclusiveMinimum : kValidateErrorMinimum, ValueType(actual).Move(), expected,\n            exclusive ? &SchemaType::GetExclusiveMinimumString : 0);\n    }\n\n    void TooLong(const Ch* str, SizeType length, SizeType expected) {\n        AddNumberError(kValidateErrorMaxLength,\n            ValueType(str, length, GetStateAllocator()).Move(), SValue(expected).Move());\n    }\n    void TooShort(const Ch* str, SizeType length, SizeType expected) {\n        AddNumberError(kValidateErrorMinLength,\n            ValueType(str, length, GetStateAllocator()).Move(), SValue(expected).Move());\n    }\n    void DoesNotMatch(const Ch* str, SizeType length) {\n        currentError_.SetObject();\n        currentError_.AddMember(GetActualString(), ValueType(str, length, GetStateAllocator()).Move(), GetStateAllocator());\n        AddCurrentError(kValidateErrorPattern);\n    }\n\n    void DisallowedItem(SizeType index) {\n        currentError_.SetObject();\n        currentError_.AddMember(GetDisallowedString(), ValueType(index).Move(), GetStateAllocator());\n        AddCurrentError(kValidateErrorAdditionalItems, true);\n    }\n    void TooFewItems(SizeType actualCount, SizeType expectedCount) {\n        AddNumberError(kValidateErrorMinItems,\n            ValueType(actualCount).Move(), SValue(expectedCount).Move());\n    }\n    void TooManyItems(SizeType actualCount, SizeType expectedCount) {\n        AddNumberError(kValidateErrorMaxItems,\n            ValueType(actualCount).Move(), SValue(expectedCount).Move());\n    }\n    
void DuplicateItems(SizeType index1, SizeType index2) {\n        ValueType duplicates(kArrayType);\n        duplicates.PushBack(index1, GetStateAllocator());\n        duplicates.PushBack(index2, GetStateAllocator());\n        currentError_.SetObject();\n        currentError_.AddMember(GetDuplicatesString(), duplicates, GetStateAllocator());\n        AddCurrentError(kValidateErrorUniqueItems, true);\n    }\n\n    void TooManyProperties(SizeType actualCount, SizeType expectedCount) {\n        AddNumberError(kValidateErrorMaxProperties,\n            ValueType(actualCount).Move(), SValue(expectedCount).Move());\n    }\n    void TooFewProperties(SizeType actualCount, SizeType expectedCount) {\n        AddNumberError(kValidateErrorMinProperties,\n            ValueType(actualCount).Move(), SValue(expectedCount).Move());\n    }\n    void StartMissingProperties() {\n        currentError_.SetArray();\n    }\n    void AddMissingProperty(const SValue& name) {\n        currentError_.PushBack(ValueType(name, GetStateAllocator()).Move(), GetStateAllocator());\n    }\n    bool EndMissingProperties() {\n        if (currentError_.Empty())\n            return false;\n        ValueType error(kObjectType);\n        error.AddMember(GetMissingString(), currentError_, GetStateAllocator());\n        currentError_ = error;\n        AddCurrentError(kValidateErrorRequired);\n        return true;\n    }\n    void PropertyViolations(ISchemaValidator** subvalidators, SizeType count) {\n        for (SizeType i = 0; i < count; ++i)\n            MergeError(static_cast<GenericSchemaValidator*>(subvalidators[i])->GetError());\n    }\n    void DisallowedProperty(const Ch* name, SizeType length) {\n        currentError_.SetObject();\n        currentError_.AddMember(GetDisallowedString(), ValueType(name, length, GetStateAllocator()).Move(), GetStateAllocator());\n        AddCurrentError(kValidateErrorAdditionalProperties, true);\n    }\n\n    void StartDependencyErrors() {\n        
currentError_.SetObject();\n    }\n    void StartMissingDependentProperties() {\n        missingDependents_.SetArray();\n    }\n    void AddMissingDependentProperty(const SValue& targetName) {\n        missingDependents_.PushBack(ValueType(targetName, GetStateAllocator()).Move(), GetStateAllocator());\n    }\n    void EndMissingDependentProperties(const SValue& sourceName) {\n        if (!missingDependents_.Empty()) {\n            // Create equivalent 'required' error\n            ValueType error(kObjectType);\n            ValidateErrorCode code = kValidateErrorRequired;\n            error.AddMember(GetMissingString(), missingDependents_.Move(), GetStateAllocator());\n            AddErrorCode(error, code);\n            AddErrorInstanceLocation(error, false);\n            // When appending to a pointer ensure its allocator is used\n            PointerType schemaRef = GetInvalidSchemaPointer().Append(SchemaType::GetValidateErrorKeyword(kValidateErrorDependencies), &GetInvalidSchemaPointer().GetAllocator());\n            AddErrorSchemaLocation(error, schemaRef.Append(sourceName.GetString(), sourceName.GetStringLength(), &GetInvalidSchemaPointer().GetAllocator()));\n            ValueType wrapper(kObjectType);\n            wrapper.AddMember(ValueType(SchemaType::GetValidateErrorKeyword(code), GetStateAllocator()).Move(), error, GetStateAllocator());\n            currentError_.AddMember(ValueType(sourceName, GetStateAllocator()).Move(), wrapper, GetStateAllocator());\n        }\n    }\n    void AddDependencySchemaError(const SValue& sourceName, ISchemaValidator* subvalidator) {\n        currentError_.AddMember(ValueType(sourceName, GetStateAllocator()).Move(),\n            static_cast<GenericSchemaValidator*>(subvalidator)->GetError(), GetStateAllocator());\n    }\n    bool EndDependencyErrors() {\n        if (currentError_.ObjectEmpty())\n            return false;\n        ValueType error(kObjectType);\n        error.AddMember(GetErrorsString(), currentError_, 
GetStateAllocator());\n        currentError_ = error;\n        AddCurrentError(kValidateErrorDependencies);\n        return true;\n    }\n\n    void DisallowedValue(const ValidateErrorCode code = kValidateErrorEnum) {\n        currentError_.SetObject();\n        AddCurrentError(code);\n    }\n    void StartDisallowedType() {\n        currentError_.SetArray();\n    }\n    void AddExpectedType(const typename SchemaType::ValueType& expectedType) {\n        currentError_.PushBack(ValueType(expectedType, GetStateAllocator()).Move(), GetStateAllocator());\n    }\n    void EndDisallowedType(const typename SchemaType::ValueType& actualType) {\n        ValueType error(kObjectType);\n        error.AddMember(GetExpectedString(), currentError_, GetStateAllocator());\n        error.AddMember(GetActualString(), ValueType(actualType, GetStateAllocator()).Move(), GetStateAllocator());\n        currentError_ = error;\n        AddCurrentError(kValidateErrorType);\n    }\n    void NotAllOf(ISchemaValidator** subvalidators, SizeType count) {\n        // Treat allOf like oneOf and anyOf to match https://rapidjson.org/md_doc_schema.html#allOf-anyOf-oneOf\n        AddErrorArray(kValidateErrorAllOf, subvalidators, count);\n        //for (SizeType i = 0; i < count; ++i) {\n        //    MergeError(static_cast<GenericSchemaValidator*>(subvalidators[i])->GetError());\n        //}\n    }\n    void NoneOf(ISchemaValidator** subvalidators, SizeType count) {\n        AddErrorArray(kValidateErrorAnyOf, subvalidators, count);\n    }\n    void NotOneOf(ISchemaValidator** subvalidators, SizeType count) {\n        AddErrorArray(kValidateErrorOneOf, subvalidators, count);\n    }\n    void MultipleOneOf(SizeType index1, SizeType index2) {\n        ValueType matches(kArrayType);\n        matches.PushBack(index1, GetStateAllocator());\n        matches.PushBack(index2, GetStateAllocator());\n        currentError_.SetObject();\n        currentError_.AddMember(GetMatchesString(), matches, 
GetStateAllocator());\n        AddCurrentError(kValidateErrorOneOfMatch);\n    }\n    void Disallowed() {\n        currentError_.SetObject();\n        AddCurrentError(kValidateErrorNot);\n    }\n    void DisallowedWhenWriting() {\n        currentError_.SetObject();\n        AddCurrentError(kValidateErrorReadOnly);\n    }\n    void DisallowedWhenReading() {\n        currentError_.SetObject();\n        AddCurrentError(kValidateErrorWriteOnly);\n    }\n\n#define RAPIDJSON_STRING_(name, ...) \\\n    static const StringRefType& Get##name##String() {\\\n        static const Ch s[] = { __VA_ARGS__, '\\0' };\\\n        static const StringRefType v(s, static_cast<SizeType>(sizeof(s) / sizeof(Ch) - 1)); \\\n        return v;\\\n    }\n\n    RAPIDJSON_STRING_(InstanceRef, 'i', 'n', 's', 't', 'a', 'n', 'c', 'e', 'R', 'e', 'f')\n    RAPIDJSON_STRING_(SchemaRef, 's', 'c', 'h', 'e', 'm', 'a', 'R', 'e', 'f')\n    RAPIDJSON_STRING_(Expected, 'e', 'x', 'p', 'e', 'c', 't', 'e', 'd')\n    RAPIDJSON_STRING_(Actual, 'a', 'c', 't', 'u', 'a', 'l')\n    RAPIDJSON_STRING_(Disallowed, 'd', 'i', 's', 'a', 'l', 'l', 'o', 'w', 'e', 'd')\n    RAPIDJSON_STRING_(Missing, 'm', 'i', 's', 's', 'i', 'n', 'g')\n    RAPIDJSON_STRING_(Errors, 'e', 'r', 'r', 'o', 'r', 's')\n    RAPIDJSON_STRING_(ErrorCode, 'e', 'r', 'r', 'o', 'r', 'C', 'o', 'd', 'e')\n    RAPIDJSON_STRING_(ErrorMessage, 'e', 'r', 'r', 'o', 'r', 'M', 'e', 's', 's', 'a', 'g', 'e')\n    RAPIDJSON_STRING_(Duplicates, 'd', 'u', 'p', 'l', 'i', 'c', 'a', 't', 'e', 's')\n    RAPIDJSON_STRING_(Matches, 'm', 'a', 't', 'c', 'h', 'e', 's')\n\n#undef RAPIDJSON_STRING_\n\n#define RAPIDJSON_SCHEMA_HANDLE_BEGIN_(method, arg1)\\\n    if (!valid_) return false; \\\n    if ((!BeginValue() && !GetContinueOnErrors()) || (!CurrentSchema().method arg1 && !GetContinueOnErrors())) {\\\n        *documentStack_.template Push<Ch>() = '\\0';\\\n        documentStack_.template Pop<Ch>(1);\\\n        RAPIDJSON_SCHEMA_PRINT(InvalidDocument, documentStack_.template 
Bottom<Ch>());\\\n        valid_ = false;\\\n        return valid_;\\\n    }\n\n#define RAPIDJSON_SCHEMA_HANDLE_PARALLEL_(method, arg2)\\\n    for (Context* context = schemaStack_.template Bottom<Context>(); context != schemaStack_.template End<Context>(); context++) {\\\n        if (context->hasher)\\\n            static_cast<HasherType*>(context->hasher)->method arg2;\\\n        if (context->validators)\\\n            for (SizeType i_ = 0; i_ < context->validatorCount; i_++)\\\n                static_cast<GenericSchemaValidator*>(context->validators[i_])->method arg2;\\\n        if (context->patternPropertiesValidators)\\\n            for (SizeType i_ = 0; i_ < context->patternPropertiesValidatorCount; i_++)\\\n                static_cast<GenericSchemaValidator*>(context->patternPropertiesValidators[i_])->method arg2;\\\n    }\n\n#define RAPIDJSON_SCHEMA_HANDLE_END_(method, arg2)\\\n    valid_ = (EndValue() || GetContinueOnErrors()) && (!outputHandler_ || outputHandler_->method arg2);\\\n    return valid_;\n\n#define RAPIDJSON_SCHEMA_HANDLE_VALUE_(method, arg1, arg2) \\\n    RAPIDJSON_SCHEMA_HANDLE_BEGIN_   (method, arg1);\\\n    RAPIDJSON_SCHEMA_HANDLE_PARALLEL_(method, arg2);\\\n    RAPIDJSON_SCHEMA_HANDLE_END_     (method, arg2)\n\n    bool Null()             { RAPIDJSON_SCHEMA_HANDLE_VALUE_(Null,   (CurrentContext()), ( )); }\n    bool Bool(bool b)       { RAPIDJSON_SCHEMA_HANDLE_VALUE_(Bool,   (CurrentContext(), b), (b)); }\n    bool Int(int i)         { RAPIDJSON_SCHEMA_HANDLE_VALUE_(Int,    (CurrentContext(), i), (i)); }\n    bool Uint(unsigned u)   { RAPIDJSON_SCHEMA_HANDLE_VALUE_(Uint,   (CurrentContext(), u), (u)); }\n    bool Int64(int64_t i)   { RAPIDJSON_SCHEMA_HANDLE_VALUE_(Int64,  (CurrentContext(), i), (i)); }\n    bool Uint64(uint64_t u) { RAPIDJSON_SCHEMA_HANDLE_VALUE_(Uint64, (CurrentContext(), u), (u)); }\n    bool Double(double d)   { RAPIDJSON_SCHEMA_HANDLE_VALUE_(Double, (CurrentContext(), d), (d)); }\n    bool RawNumber(const Ch* str, 
SizeType length, bool copy)\n                                    { RAPIDJSON_SCHEMA_HANDLE_VALUE_(String, (CurrentContext(), str, length, copy), (str, length, copy)); }\n    bool String(const Ch* str, SizeType length, bool copy)\n                                    { RAPIDJSON_SCHEMA_HANDLE_VALUE_(String, (CurrentContext(), str, length, copy), (str, length, copy)); }\n\n    bool StartObject() {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaValidator::StartObject\");\n        RAPIDJSON_SCHEMA_HANDLE_BEGIN_(StartObject, (CurrentContext()));\n        RAPIDJSON_SCHEMA_HANDLE_PARALLEL_(StartObject, ());\n        valid_ = !outputHandler_ || outputHandler_->StartObject();\n        return valid_;\n    }\n    \n    bool Key(const Ch* str, SizeType len, bool copy) {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaValidator::Key\", str);\n        if (!valid_) return false;\n        AppendToken(str, len);\n        if (!CurrentSchema().Key(CurrentContext(), str, len, copy) && !GetContinueOnErrors()) {\n            valid_ = false;\n            return valid_;\n        }\n        RAPIDJSON_SCHEMA_HANDLE_PARALLEL_(Key, (str, len, copy));\n        valid_ = !outputHandler_ || outputHandler_->Key(str, len, copy);\n        return valid_;\n    }\n    \n    bool EndObject(SizeType memberCount) {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaValidator::EndObject\");\n        if (!valid_) return false;\n        RAPIDJSON_SCHEMA_HANDLE_PARALLEL_(EndObject, (memberCount));\n        if (!CurrentSchema().EndObject(CurrentContext(), memberCount) && !GetContinueOnErrors()) { \n            valid_ = false; \n            return valid_; \n        }\n        RAPIDJSON_SCHEMA_HANDLE_END_(EndObject, (memberCount));\n    }\n\n    bool StartArray() {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaValidator::StartArray\");\n        RAPIDJSON_SCHEMA_HANDLE_BEGIN_(StartArray, (CurrentContext()));\n        RAPIDJSON_SCHEMA_HANDLE_PARALLEL_(StartArray, ());\n        valid_ = 
!outputHandler_ || outputHandler_->StartArray();\n        return valid_;\n    }\n    \n    bool EndArray(SizeType elementCount) {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaValidator::EndArray\");\n        if (!valid_) return false;\n        RAPIDJSON_SCHEMA_HANDLE_PARALLEL_(EndArray, (elementCount));\n        if (!CurrentSchema().EndArray(CurrentContext(), elementCount) && !GetContinueOnErrors()) {\n            valid_ = false;\n            return valid_;\n        }\n        RAPIDJSON_SCHEMA_HANDLE_END_(EndArray, (elementCount));\n    }\n\n#undef RAPIDJSON_SCHEMA_HANDLE_BEGIN_\n#undef RAPIDJSON_SCHEMA_HANDLE_PARALLEL_\n#undef RAPIDJSON_SCHEMA_HANDLE_VALUE_\n\n    // Implementation of ISchemaStateFactory<SchemaType>\n    virtual ISchemaValidator* CreateSchemaValidator(const SchemaType& root, const bool inheritContinueOnErrors) {\n        *documentStack_.template Push<Ch>() = '\\0';\n        documentStack_.template Pop<Ch>(1);\n        ISchemaValidator* sv = new (GetStateAllocator().Malloc(sizeof(GenericSchemaValidator))) GenericSchemaValidator(*schemaDocument_, root, documentStack_.template Bottom<char>(), documentStack_.GetSize(),\n        depth_ + 1,\n        &GetStateAllocator());\n        sv->SetValidateFlags(inheritContinueOnErrors ? 
GetValidateFlags() : GetValidateFlags() & ~static_cast<unsigned>(kValidateContinueOnErrorFlag));\n        return sv;\n    }\n\n    virtual void DestroySchemaValidator(ISchemaValidator* validator) {\n        GenericSchemaValidator* v = static_cast<GenericSchemaValidator*>(validator);\n        v->~GenericSchemaValidator();\n        StateAllocator::Free(v);\n    }\n\n    virtual void* CreateHasher() {\n        return new (GetStateAllocator().Malloc(sizeof(HasherType))) HasherType(&GetStateAllocator());\n    }\n\n    virtual uint64_t GetHashCode(void* hasher) {\n        return static_cast<HasherType*>(hasher)->GetHashCode();\n    }\n\n    virtual void DestroryHasher(void* hasher) {\n        HasherType* h = static_cast<HasherType*>(hasher);\n        h->~HasherType();\n        StateAllocator::Free(h);\n    }\n\n    virtual void* MallocState(size_t size) {\n        return GetStateAllocator().Malloc(size);\n    }\n\n    virtual void FreeState(void* p) {\n        StateAllocator::Free(p);\n    }\n    // End of implementation of ISchemaStateFactory<SchemaType>\n\nprivate:\n    typedef typename SchemaType::Context Context;\n    typedef GenericValue<UTF8<>, StateAllocator> HashCodeArray;\n    typedef internal::Hasher<EncodingType, StateAllocator> HasherType;\n\n    GenericSchemaValidator( \n        const SchemaDocumentType& schemaDocument,\n        const SchemaType& root,\n        const char* basePath, size_t basePathSize,\n        unsigned depth,\n        StateAllocator* allocator = 0,\n        size_t schemaStackCapacity = kDefaultSchemaStackCapacity,\n        size_t documentStackCapacity = kDefaultDocumentStackCapacity)\n        :\n        schemaDocument_(&schemaDocument),\n        root_(root),\n        stateAllocator_(allocator),\n        ownStateAllocator_(0),\n        schemaStack_(allocator, schemaStackCapacity),\n        documentStack_(allocator, documentStackCapacity),\n        outputHandler_(0),\n        error_(kObjectType),\n        currentError_(),\n        
missingDependents_(),\n        valid_(true),\n        flags_(kValidateDefaultFlags),\n        depth_(depth)\n    {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaValidator::GenericSchemaValidator (internal)\", basePath && basePathSize ? basePath : \"\");\n        if (basePath && basePathSize)\n            memcpy(documentStack_.template Push<char>(basePathSize), basePath, basePathSize);\n    }\n\n    StateAllocator& GetStateAllocator() {\n        if (!stateAllocator_)\n            stateAllocator_ = ownStateAllocator_ = RAPIDJSON_NEW(StateAllocator)();\n        return *stateAllocator_;\n    }\n\n    bool GetContinueOnErrors() const {\n        return flags_ & kValidateContinueOnErrorFlag;\n    }\n\n    bool BeginValue() {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaValidator::BeginValue\");\n        if (schemaStack_.Empty())\n            PushSchema(root_);\n        else {\n            if (CurrentContext().inArray)\n                internal::TokenHelper<internal::Stack<StateAllocator>, Ch>::AppendIndexToken(documentStack_, CurrentContext().arrayElementIndex);\n\n            if (!CurrentSchema().BeginValue(CurrentContext()) && !GetContinueOnErrors())\n                return false;\n\n            SizeType count = CurrentContext().patternPropertiesSchemaCount;\n            const SchemaType** sa = CurrentContext().patternPropertiesSchemas;\n            typename Context::PatternValidatorType patternValidatorType = CurrentContext().valuePatternValidatorType;\n            bool valueUniqueness = CurrentContext().valueUniqueness;\n            RAPIDJSON_ASSERT(CurrentContext().valueSchema);\n            PushSchema(*CurrentContext().valueSchema);\n\n            if (count > 0) {\n                CurrentContext().objectPatternValidatorType = patternValidatorType;\n                ISchemaValidator**& va = CurrentContext().patternPropertiesValidators;\n                SizeType& validatorCount = CurrentContext().patternPropertiesValidatorCount;\n                va = 
static_cast<ISchemaValidator**>(MallocState(sizeof(ISchemaValidator*) * count));\n                std::memset(va, 0, sizeof(ISchemaValidator*) * count);\n                for (SizeType i = 0; i < count; i++)\n                    va[validatorCount++] = CreateSchemaValidator(*sa[i], true);  // Inherit continueOnError\n            }\n\n            CurrentContext().arrayUniqueness = valueUniqueness;\n        }\n        return true;\n    }\n\n    bool EndValue() {\n        RAPIDJSON_SCHEMA_PRINT(Method, \"GenericSchemaValidator::EndValue\");\n        if (!CurrentSchema().EndValue(CurrentContext()) && !GetContinueOnErrors())\n            return false;\n\n        GenericStringBuffer<EncodingType> sb;\n        schemaDocument_->GetPointer(&CurrentSchema()).StringifyUriFragment(sb);\n        *documentStack_.template Push<Ch>() = '\\0';\n        documentStack_.template Pop<Ch>(1);\n        RAPIDJSON_SCHEMA_PRINT(ValidatorPointers, sb.GetString(), documentStack_.template Bottom<Ch>(), depth_);\n        void* hasher = CurrentContext().hasher;\n        uint64_t h = hasher && CurrentContext().arrayUniqueness ? 
static_cast<HasherType*>(hasher)->GetHashCode() : 0;\n        \n        PopSchema();\n\n        if (!schemaStack_.Empty()) {\n            Context& context = CurrentContext();\n            // Only check uniqueness if there is a hasher\n            if (hasher && context.valueUniqueness) {\n                HashCodeArray* a = static_cast<HashCodeArray*>(context.arrayElementHashCodes);\n                if (!a)\n                    CurrentContext().arrayElementHashCodes = a = new (GetStateAllocator().Malloc(sizeof(HashCodeArray))) HashCodeArray(kArrayType);\n                for (typename HashCodeArray::ConstValueIterator itr = a->Begin(); itr != a->End(); ++itr)\n                    if (itr->GetUint64() == h) {\n                        DuplicateItems(static_cast<SizeType>(itr - a->Begin()), a->Size());\n                        // Cleanup before returning if continuing\n                        if (GetContinueOnErrors()) {\n                            a->PushBack(h, GetStateAllocator());\n                            while (!documentStack_.Empty() && *documentStack_.template Pop<Ch>(1) != '/');\n                        }\n                        RAPIDJSON_INVALID_KEYWORD_RETURN(kValidateErrorUniqueItems);\n                    }\n                a->PushBack(h, GetStateAllocator());\n            }\n        }\n\n        // Remove the last token of document pointer\n        while (!documentStack_.Empty() && *documentStack_.template Pop<Ch>(1) != '/')\n            ;\n\n        return true;\n    }\n\n    void AppendToken(const Ch* str, SizeType len) {\n        documentStack_.template Reserve<Ch>(1 + len * 2); // worst case all characters are escaped as two characters\n        *documentStack_.template PushUnsafe<Ch>() = '/';\n        for (SizeType i = 0; i < len; i++) {\n            if (str[i] == '~') {\n                *documentStack_.template PushUnsafe<Ch>() = '~';\n                *documentStack_.template PushUnsafe<Ch>() = '0';\n            }\n            else if (str[i] == 
'/') {\n                *documentStack_.template PushUnsafe<Ch>() = '~';\n                *documentStack_.template PushUnsafe<Ch>() = '1';\n            }\n            else\n                *documentStack_.template PushUnsafe<Ch>() = str[i];\n        }\n    }\n\n    RAPIDJSON_FORCEINLINE void PushSchema(const SchemaType& schema) { new (schemaStack_.template Push<Context>()) Context(*this, *this, &schema, flags_); }\n    \n    RAPIDJSON_FORCEINLINE void PopSchema() {\n        Context* c = schemaStack_.template Pop<Context>(1);\n        if (HashCodeArray* a = static_cast<HashCodeArray*>(c->arrayElementHashCodes)) {\n            a->~HashCodeArray();\n            StateAllocator::Free(a);\n        }\n        c->~Context();\n    }\n\n    void AddErrorInstanceLocation(ValueType& result, bool parent) {\n        GenericStringBuffer<EncodingType> sb;\n        PointerType instancePointer = GetInvalidDocumentPointer();\n        ((parent && instancePointer.GetTokenCount() > 0)\n         ? PointerType(instancePointer.GetTokens(), instancePointer.GetTokenCount() - 1)\n         : instancePointer).StringifyUriFragment(sb);\n        ValueType instanceRef(sb.GetString(), static_cast<SizeType>(sb.GetSize() / sizeof(Ch)),\n                              GetStateAllocator());\n        result.AddMember(GetInstanceRefString(), instanceRef, GetStateAllocator());\n    }\n\n    void AddErrorSchemaLocation(ValueType& result, PointerType schema = PointerType()) {\n        GenericStringBuffer<EncodingType> sb;\n        SizeType len = CurrentSchema().GetURI().GetStringLength();\n        if (len) memcpy(sb.Push(len), CurrentSchema().GetURI().GetString(), len * sizeof(Ch));\n        if (schema.GetTokenCount()) schema.StringifyUriFragment(sb);\n        else GetInvalidSchemaPointer().StringifyUriFragment(sb);\n        ValueType schemaRef(sb.GetString(), static_cast<SizeType>(sb.GetSize() / sizeof(Ch)),\n            GetStateAllocator());\n        result.AddMember(GetSchemaRefString(), schemaRef, 
GetStateAllocator());\n    }\n\n    void AddErrorCode(ValueType& result, const ValidateErrorCode code) {\n        result.AddMember(GetErrorCodeString(), code, GetStateAllocator());\n    }\n\n    void AddError(ValueType& keyword, ValueType& error) {\n        typename ValueType::MemberIterator member = error_.FindMember(keyword);\n        if (member == error_.MemberEnd())\n            error_.AddMember(keyword, error, GetStateAllocator());\n        else {\n            if (member->value.IsObject()) {\n                ValueType errors(kArrayType);\n                errors.PushBack(member->value, GetStateAllocator());\n                member->value = errors;\n            }\n            member->value.PushBack(error, GetStateAllocator());\n        }\n    }\n\n    void AddCurrentError(const ValidateErrorCode code, bool parent = false) {\n        AddErrorCode(currentError_, code);\n        AddErrorInstanceLocation(currentError_, parent);\n        AddErrorSchemaLocation(currentError_);\n        AddError(ValueType(SchemaType::GetValidateErrorKeyword(code), GetStateAllocator(), false).Move(), currentError_);\n    }\n\n    void MergeError(ValueType& other) {\n        for (typename ValueType::MemberIterator it = other.MemberBegin(), end = other.MemberEnd(); it != end; ++it) {\n            AddError(it->name, it->value);\n        }\n    }\n\n    void AddNumberError(const ValidateErrorCode code, ValueType& actual, const SValue& expected,\n        const typename SchemaType::ValueType& (*exclusive)() = 0) {\n        currentError_.SetObject();\n        currentError_.AddMember(GetActualString(), actual, GetStateAllocator());\n        currentError_.AddMember(GetExpectedString(), ValueType(expected, GetStateAllocator()).Move(), GetStateAllocator());\n        if (exclusive)\n            currentError_.AddMember(ValueType(exclusive(), GetStateAllocator()).Move(), true, GetStateAllocator());\n        AddCurrentError(code);\n    }\n\n    void AddErrorArray(const ValidateErrorCode code,\n        
ISchemaValidator** subvalidators, SizeType count) {\n        ValueType errors(kArrayType);\n        for (SizeType i = 0; i < count; ++i)\n            errors.PushBack(static_cast<GenericSchemaValidator*>(subvalidators[i])->GetError(), GetStateAllocator());\n        currentError_.SetObject();\n        currentError_.AddMember(GetErrorsString(), errors, GetStateAllocator());\n        AddCurrentError(code);\n    }\n\n    const SchemaType& CurrentSchema() const { return *schemaStack_.template Top<Context>()->schema; }\n    Context& CurrentContext() { return *schemaStack_.template Top<Context>(); }\n    const Context& CurrentContext() const { return *schemaStack_.template Top<Context>(); }\n\n    static const size_t kDefaultSchemaStackCapacity = 1024;\n    static const size_t kDefaultDocumentStackCapacity = 256;\n    const SchemaDocumentType* schemaDocument_;\n    const SchemaType& root_;\n    StateAllocator* stateAllocator_;\n    StateAllocator* ownStateAllocator_;\n    internal::Stack<StateAllocator> schemaStack_;    //!< stack to store the current path of schema (BaseSchemaType *)\n    internal::Stack<StateAllocator> documentStack_;  //!< stack to store the current path of validating document (Ch)\n    OutputHandler* outputHandler_;\n    ValueType error_;\n    ValueType currentError_;\n    ValueType missingDependents_;\n    bool valid_;\n    unsigned flags_;\n    unsigned depth_;\n};\n\ntypedef GenericSchemaValidator<SchemaDocument> SchemaValidator;\n\n///////////////////////////////////////////////////////////////////////////////\n// SchemaValidatingReader\n\n//! 
A helper class for parsing with validation.\n/*!\n    This helper class is a functor, designed as a parameter of \\ref GenericDocument::Populate().\n\n    \\tparam parseFlags Combination of \\ref ParseFlag.\n    \\tparam InputStream Type of input stream, implementing Stream concept.\n    \\tparam SourceEncoding Encoding of the input stream.\n    \\tparam SchemaDocumentType Type of schema document.\n    \\tparam StackAllocator Allocator type for stack.\n*/\ntemplate <\n    unsigned parseFlags,\n    typename InputStream,\n    typename SourceEncoding,\n    typename SchemaDocumentType = SchemaDocument,\n    typename StackAllocator = CrtAllocator>\nclass SchemaValidatingReader {\npublic:\n    typedef typename SchemaDocumentType::PointerType PointerType;\n    typedef typename InputStream::Ch Ch;\n    typedef GenericValue<SourceEncoding, StackAllocator> ValueType;\n\n    //! Constructor\n    /*!\n        \\param is Input stream.\n        \\param sd Schema document.\n    */\n    SchemaValidatingReader(InputStream& is, const SchemaDocumentType& sd) : is_(is), sd_(sd), invalidSchemaKeyword_(), invalidSchemaCode_(kValidateErrorNone), error_(kObjectType), isValid_(true) {}\n\n    template <typename Handler>\n    bool operator()(Handler& handler) {\n        GenericReader<SourceEncoding, typename SchemaDocumentType::EncodingType, StackAllocator> reader;\n        GenericSchemaValidator<SchemaDocumentType, Handler> validator(sd_, handler);\n        parseResult_ = reader.template Parse<parseFlags>(is_, validator);\n\n        isValid_ = validator.IsValid();\n        if (isValid_) {\n            invalidSchemaPointer_ = PointerType();\n            invalidSchemaKeyword_ = 0;\n            invalidDocumentPointer_ = PointerType();\n            error_.SetObject();\n        }\n        else {\n            invalidSchemaPointer_ = validator.GetInvalidSchemaPointer();\n            invalidSchemaKeyword_ = validator.GetInvalidSchemaKeyword();\n            invalidSchemaCode_ = 
validator.GetInvalidSchemaCode();\n            invalidDocumentPointer_ = validator.GetInvalidDocumentPointer();\n            error_.CopyFrom(validator.GetError(), allocator_);\n        }\n\n        return parseResult_;\n    }\n\n    const ParseResult& GetParseResult() const { return parseResult_; }\n    bool IsValid() const { return isValid_; }\n    const PointerType& GetInvalidSchemaPointer() const { return invalidSchemaPointer_; }\n    const Ch* GetInvalidSchemaKeyword() const { return invalidSchemaKeyword_; }\n    const PointerType& GetInvalidDocumentPointer() const { return invalidDocumentPointer_; }\n    const ValueType& GetError() const { return error_; }\n    ValidateErrorCode GetInvalidSchemaCode() const { return invalidSchemaCode_; }\n\nprivate:\n    InputStream& is_;\n    const SchemaDocumentType& sd_;\n\n    ParseResult parseResult_;\n    PointerType invalidSchemaPointer_;\n    const Ch* invalidSchemaKeyword_;\n    PointerType invalidDocumentPointer_;\n    ValidateErrorCode invalidSchemaCode_;\n    StackAllocator allocator_;\n    ValueType error_;\n    bool isValid_;\n};\n\nRAPIDJSON_NAMESPACE_END\nRAPIDJSON_DIAG_POP\n\n#endif // RAPIDJSON_SCHEMA_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/stream.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n#include \"rapidjson.h\"\n\n#ifndef RAPIDJSON_STREAM_H_\n#define RAPIDJSON_STREAM_H_\n\n#include \"encodings.h\"\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n///////////////////////////////////////////////////////////////////////////////\n//  Stream\n\n/*! \\class rapidjson::Stream\n    \\brief Concept for reading and writing characters.\n\n    For read-only stream, no need to implement PutBegin(), Put(), Flush() and PutEnd().\n\n    For write-only stream, only need to implement Put() and Flush().\n\n\\code\nconcept Stream {\n    typename Ch;    //!< Character type of the stream.\n\n    //! Read the current character from stream without moving the read cursor.\n    Ch Peek() const;\n\n    //! Read the current character from stream and moving the read cursor to next character.\n    Ch Take();\n\n    //! Get the current read cursor.\n    //! \\return Number of characters read from start.\n    size_t Tell();\n\n    //! Begin writing operation at the current read pointer.\n    //! \\return The begin writer pointer.\n    Ch* PutBegin();\n\n    //! Write a character.\n    void Put(Ch c);\n\n    //! Flush the buffer.\n    void Flush();\n\n    //! End the writing operation.\n    //! \\param begin The begin write pointer returned by PutBegin().\n    //! 
\\return Number of characters written.\n    size_t PutEnd(Ch* begin);\n}\n\\endcode\n*/\n\n//! Provides additional information for stream.\n/*!\n    By using traits pattern, this type provides a default configuration for stream.\n    For custom stream, this type can be specialized for other configuration.\n    See TEST(Reader, CustomStringStream) in readertest.cpp for example.\n*/\ntemplate<typename Stream>\nstruct StreamTraits {\n    //! Whether to make local copy of stream for optimization during parsing.\n    /*!\n        By default, for safety, streams do not use local copy optimization.\n        Stream that can be copied fast should specialize this, like StreamTraits<StringStream>.\n    */\n    enum { copyOptimization = 0 };\n};\n\n//! Reserve n characters for writing to a stream.\ntemplate<typename Stream>\ninline void PutReserve(Stream& stream, size_t count) {\n    (void)stream;\n    (void)count;\n}\n\n//! Write character to a stream, presuming buffer is reserved.\ntemplate<typename Stream>\ninline void PutUnsafe(Stream& stream, typename Stream::Ch c) {\n    stream.Put(c);\n}\n\n//! Put N copies of a character to a stream.\ntemplate<typename Stream, typename Ch>\ninline void PutN(Stream& stream, Ch c, size_t n) {\n    PutReserve(stream, n);\n    for (size_t i = 0; i < n; i++)\n        PutUnsafe(stream, c);\n}\n\n///////////////////////////////////////////////////////////////////////////////\n// GenericStreamWrapper\n\n//! A Stream Wrapper\n/*! 
\\tThis string stream is a wrapper for any stream by just forwarding any\n    \\treceived message to the origin stream.\n    \\note implements Stream concept\n*/\n\n#if defined(_MSC_VER) && _MSC_VER <= 1800\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(4702)  // unreachable code\nRAPIDJSON_DIAG_OFF(4512)  // assignment operator could not be generated\n#endif\n\ntemplate <typename InputStream, typename Encoding = UTF8<> >\nclass GenericStreamWrapper {\npublic:\n    typedef typename Encoding::Ch Ch;\n    GenericStreamWrapper(InputStream& is): is_(is) {}\n\n    Ch Peek() const { return is_.Peek(); }\n    Ch Take() { return is_.Take(); }\n    size_t Tell() { return is_.Tell(); }\n    Ch* PutBegin() { return is_.PutBegin(); }\n    void Put(Ch ch) { is_.Put(ch); }\n    void Flush() { is_.Flush(); }\n    size_t PutEnd(Ch* ch) { return is_.PutEnd(ch); }\n\n    // wrapper for MemoryStream\n    const Ch* Peek4() const { return is_.Peek4(); }\n\n    // wrapper for AutoUTFInputStream\n    UTFType GetType() const { return is_.GetType(); }\n    bool HasBOM() const { return is_.HasBOM(); }\n\nprotected:\n    InputStream& is_;\n};\n\n#if defined(_MSC_VER) && _MSC_VER <= 1800\nRAPIDJSON_DIAG_POP\n#endif\n\n///////////////////////////////////////////////////////////////////////////////\n// StringStream\n\n//! Read-only string stream.\n/*! 
\\note implements Stream concept\n*/\ntemplate <typename Encoding>\nstruct GenericStringStream {\n    typedef typename Encoding::Ch Ch;\n\n    GenericStringStream(const Ch *src) : src_(src), head_(src) {}\n\n    Ch Peek() const { return *src_; }\n    Ch Take() { return *src_++; }\n    size_t Tell() const { return static_cast<size_t>(src_ - head_); }\n\n    Ch* PutBegin() { RAPIDJSON_ASSERT(false); return 0; }\n    void Put(Ch) { RAPIDJSON_ASSERT(false); }\n    void Flush() { RAPIDJSON_ASSERT(false); }\n    size_t PutEnd(Ch*) { RAPIDJSON_ASSERT(false); return 0; }\n\n    const Ch* src_;     //!< Current read position.\n    const Ch* head_;    //!< Original head of the string.\n};\n\ntemplate <typename Encoding>\nstruct StreamTraits<GenericStringStream<Encoding> > {\n    enum { copyOptimization = 1 };\n};\n\n//! String stream with UTF8 encoding.\ntypedef GenericStringStream<UTF8<> > StringStream;\n\n///////////////////////////////////////////////////////////////////////////////\n// InsituStringStream\n\n//! A read-write string stream.\n/*! 
This string stream is particularly designed for in-situ parsing.\n    \\note implements Stream concept\n*/\ntemplate <typename Encoding>\nstruct GenericInsituStringStream {\n    typedef typename Encoding::Ch Ch;\n\n    GenericInsituStringStream(Ch *src) : src_(src), dst_(0), head_(src) {}\n\n    // Read\n    Ch Peek() { return *src_; }\n    Ch Take() { return *src_++; }\n    size_t Tell() { return static_cast<size_t>(src_ - head_); }\n\n    // Write\n    void Put(Ch c) { RAPIDJSON_ASSERT(dst_ != 0); *dst_++ = c; }\n\n    Ch* PutBegin() { return dst_ = src_; }\n    size_t PutEnd(Ch* begin) { return static_cast<size_t>(dst_ - begin); }\n    void Flush() {}\n\n    Ch* Push(size_t count) { Ch* begin = dst_; dst_ += count; return begin; }\n    void Pop(size_t count) { dst_ -= count; }\n\n    Ch* src_;\n    Ch* dst_;\n    Ch* head_;\n};\n\ntemplate <typename Encoding>\nstruct StreamTraits<GenericInsituStringStream<Encoding> > {\n    enum { copyOptimization = 1 };\n};\n\n//! Insitu string stream with UTF8 encoding.\ntypedef GenericInsituStringStream<UTF8<> > InsituStringStream;\n\nRAPIDJSON_NAMESPACE_END\n\n#endif // RAPIDJSON_STREAM_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/stringbuffer.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_STRINGBUFFER_H_\n#define RAPIDJSON_STRINGBUFFER_H_\n\n#include \"stream.h\"\n#include \"internal/stack.h\"\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n#include <utility> // std::move\n#endif\n\n#include \"internal/stack.h\"\n\n#if defined(__clang__)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(c++98-compat)\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n//! 
Represents an in-memory output stream.\n/*!\n    \\tparam Encoding Encoding of the stream.\n    \\tparam Allocator type for allocating memory buffer.\n    \\note implements Stream concept\n*/\ntemplate <typename Encoding, typename Allocator = CrtAllocator>\nclass GenericStringBuffer {\npublic:\n    typedef typename Encoding::Ch Ch;\n\n    GenericStringBuffer(Allocator* allocator = 0, size_t capacity = kDefaultCapacity) : stack_(allocator, capacity) {}\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    GenericStringBuffer(GenericStringBuffer&& rhs) : stack_(std::move(rhs.stack_)) {}\n    GenericStringBuffer& operator=(GenericStringBuffer&& rhs) {\n        if (&rhs != this)\n            stack_ = std::move(rhs.stack_);\n        return *this;\n    }\n#endif\n\n    void Put(Ch c) { *stack_.template Push<Ch>() = c; }\n    void PutUnsafe(Ch c) { *stack_.template PushUnsafe<Ch>() = c; }\n    void Flush() {}\n\n    void Clear() { stack_.Clear(); }\n    void ShrinkToFit() {\n        // Push and pop a null terminator. This is safe.\n        *stack_.template Push<Ch>() = '\\0';\n        stack_.ShrinkToFit();\n        stack_.template Pop<Ch>(1);\n    }\n\n    void Reserve(size_t count) { stack_.template Reserve<Ch>(count); }\n    Ch* Push(size_t count) { return stack_.template Push<Ch>(count); }\n    Ch* PushUnsafe(size_t count) { return stack_.template PushUnsafe<Ch>(count); }\n    void Pop(size_t count) { stack_.template Pop<Ch>(count); }\n\n    const Ch* GetString() const {\n        // Push and pop a null terminator. This is safe.\n        *stack_.template Push<Ch>() = '\\0';\n        stack_.template Pop<Ch>(1);\n\n        return stack_.template Bottom<Ch>();\n    }\n\n    //! Get the size of string in bytes in the string buffer.\n    size_t GetSize() const { return stack_.GetSize(); }\n\n    //! 
Get the length of string in Ch in the string buffer.\n    size_t GetLength() const { return stack_.GetSize() / sizeof(Ch); }\n\n    static const size_t kDefaultCapacity = 256;\n    mutable internal::Stack<Allocator> stack_;\n\nprivate:\n    // Prohibit copy constructor & assignment operator.\n    GenericStringBuffer(const GenericStringBuffer&);\n    GenericStringBuffer& operator=(const GenericStringBuffer&);\n};\n\n//! String buffer with UTF8 encoding\ntypedef GenericStringBuffer<UTF8<> > StringBuffer;\n\ntemplate<typename Encoding, typename Allocator>\ninline void PutReserve(GenericStringBuffer<Encoding, Allocator>& stream, size_t count) {\n    stream.Reserve(count);\n}\n\ntemplate<typename Encoding, typename Allocator>\ninline void PutUnsafe(GenericStringBuffer<Encoding, Allocator>& stream, typename Encoding::Ch c) {\n    stream.PutUnsafe(c);\n}\n\n//! Implement specialized version of PutN() with memset() for better performance.\ntemplate<>\ninline void PutN(GenericStringBuffer<UTF8<> >& stream, char c, size_t n) {\n    std::memset(stream.stack_.Push<char>(n), c, n * sizeof(c));\n}\n\nRAPIDJSON_NAMESPACE_END\n\n#if defined(__clang__)\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_STRINGBUFFER_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/uri.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// (C) Copyright IBM Corporation 2021\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_URI_H_\n#define RAPIDJSON_URI_H_\n\n#include \"internal/strfunc.h\"\n\n#if defined(__clang__)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(c++98-compat)\n#elif defined(_MSC_VER)\nRAPIDJSON_DIAG_OFF(4512) // assignment operator could not be generated\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n///////////////////////////////////////////////////////////////////////////////\n// GenericUri\n\ntemplate <typename ValueType, typename Allocator=CrtAllocator>\nclass GenericUri {\npublic:\n    typedef typename ValueType::Ch Ch;\n#if RAPIDJSON_HAS_STDSTRING\n    typedef std::basic_string<Ch> String;\n#endif\n\n    //! 
Constructors\n    GenericUri(Allocator* allocator = 0) : uri_(), base_(), scheme_(), auth_(), path_(), query_(), frag_(), allocator_(allocator), ownAllocator_() {\n    }\n\n    GenericUri(const Ch* uri, SizeType len, Allocator* allocator = 0) : uri_(), base_(), scheme_(), auth_(), path_(), query_(), frag_(), allocator_(allocator), ownAllocator_() {\n        Parse(uri, len);\n    }\n\n    GenericUri(const Ch* uri, Allocator* allocator = 0) : uri_(), base_(), scheme_(), auth_(), path_(), query_(), frag_(), allocator_(allocator), ownAllocator_() {\n        Parse(uri, internal::StrLen<Ch>(uri));\n    }\n\n    // Use with specializations of GenericValue\n    template<typename T> GenericUri(const T& uri, Allocator* allocator = 0) : uri_(), base_(), scheme_(), auth_(), path_(), query_(), frag_(), allocator_(allocator), ownAllocator_() {\n        const Ch* u = uri.template Get<const Ch*>(); // TypeHelper from document.h\n        Parse(u, internal::StrLen<Ch>(u));\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    GenericUri(const String& uri, Allocator* allocator = 0) : uri_(), base_(), scheme_(), auth_(), path_(), query_(), frag_(), allocator_(allocator), ownAllocator_() {\n        Parse(uri.c_str(), internal::StrLen<Ch>(uri.c_str()));\n    }\n#endif\n\n    //! Copy constructor\n    GenericUri(const GenericUri& rhs) : uri_(), base_(), scheme_(), auth_(), path_(), query_(), frag_(), allocator_(), ownAllocator_() {\n        *this = rhs;\n    }\n\n    //! Copy constructor\n    GenericUri(const GenericUri& rhs, Allocator* allocator) : uri_(), base_(), scheme_(), auth_(), path_(), query_(), frag_(), allocator_(allocator), ownAllocator_() {\n        *this = rhs;\n    }\n\n    //! Destructor.\n    ~GenericUri() {\n        Free();\n        RAPIDJSON_DELETE(ownAllocator_);\n    }\n\n    //! 
Assignment operator\n    GenericUri& operator=(const GenericUri& rhs) {\n        if (this != &rhs) {\n            // Do not delete ownAllocator\n            Free();\n            Allocate(rhs.GetStringLength());\n            auth_ = CopyPart(scheme_, rhs.scheme_, rhs.GetSchemeStringLength());\n            path_ = CopyPart(auth_, rhs.auth_, rhs.GetAuthStringLength());\n            query_ = CopyPart(path_, rhs.path_, rhs.GetPathStringLength());\n            frag_ = CopyPart(query_, rhs.query_, rhs.GetQueryStringLength());\n            base_ = CopyPart(frag_, rhs.frag_, rhs.GetFragStringLength());\n            uri_ = CopyPart(base_, rhs.base_, rhs.GetBaseStringLength());\n            CopyPart(uri_, rhs.uri_, rhs.GetStringLength());\n        }\n        return *this;\n    }\n\n    //! Getters\n    // Use with specializations of GenericValue\n    template<typename T> void Get(T& uri, Allocator& allocator) {\n        uri.template Set<const Ch*>(this->GetString(), allocator); // TypeHelper from document.h\n    }\n\n    const Ch* GetString() const { return uri_; }\n    SizeType GetStringLength() const { return uri_ == 0 ? 0 : internal::StrLen<Ch>(uri_); }\n    const Ch* GetBaseString() const { return base_; }\n    SizeType GetBaseStringLength() const { return base_ == 0 ? 0 : internal::StrLen<Ch>(base_); }\n    const Ch* GetSchemeString() const { return scheme_; }\n    SizeType GetSchemeStringLength() const { return scheme_ == 0 ? 0 : internal::StrLen<Ch>(scheme_); }\n    const Ch* GetAuthString() const { return auth_; }\n    SizeType GetAuthStringLength() const { return auth_ == 0 ? 0 : internal::StrLen<Ch>(auth_); }\n    const Ch* GetPathString() const { return path_; }\n    SizeType GetPathStringLength() const { return path_ == 0 ? 0 : internal::StrLen<Ch>(path_); }\n    const Ch* GetQueryString() const { return query_; }\n    SizeType GetQueryStringLength() const { return query_ == 0 ? 
0 : internal::StrLen<Ch>(query_); }\n    const Ch* GetFragString() const { return frag_; }\n    SizeType GetFragStringLength() const { return frag_ == 0 ? 0 : internal::StrLen<Ch>(frag_); }\n\n#if RAPIDJSON_HAS_STDSTRING\n    static String Get(const GenericUri& uri) { return String(uri.GetString(), uri.GetStringLength()); }\n    static String GetBase(const GenericUri& uri) { return String(uri.GetBaseString(), uri.GetBaseStringLength()); }\n    static String GetScheme(const GenericUri& uri) { return String(uri.GetSchemeString(), uri.GetSchemeStringLength()); }\n    static String GetAuth(const GenericUri& uri) { return String(uri.GetAuthString(), uri.GetAuthStringLength()); }\n    static String GetPath(const GenericUri& uri) { return String(uri.GetPathString(), uri.GetPathStringLength()); }\n    static String GetQuery(const GenericUri& uri) { return String(uri.GetQueryString(), uri.GetQueryStringLength()); }\n    static String GetFrag(const GenericUri& uri) { return String(uri.GetFragString(), uri.GetFragStringLength()); }\n#endif\n\n    //! Equality operators\n    bool operator==(const GenericUri& rhs) const {\n        return Match(rhs, true);\n    }\n\n    bool operator!=(const GenericUri& rhs) const {\n        return !Match(rhs, true);\n    }\n\n    bool Match(const GenericUri& uri, bool full = true) const {\n        Ch* s1;\n        Ch* s2;\n        if (full) {\n            s1 = uri_;\n            s2 = uri.uri_;\n        } else {\n            s1 = base_;\n            s2 = uri.base_;\n        }\n        if (s1 == s2) return true;\n        if (s1 == 0 || s2 == 0) return false;\n        return internal::StrCmp<Ch>(s1, s2) == 0;\n    }\n\n    //! 
Resolve this URI against another (base) URI in accordance with URI resolution rules.\n    // See https://tools.ietf.org/html/rfc3986\n    // Use for resolving an id or $ref with an in-scope id.\n    // Returns a new GenericUri for the resolved URI.\n    GenericUri Resolve(const GenericUri& baseuri, Allocator* allocator = 0) {\n        GenericUri resuri;\n        resuri.allocator_ = allocator;\n        // Ensure enough space for combining paths\n        resuri.Allocate(GetStringLength() + baseuri.GetStringLength() + 1); // + 1 for joining slash\n\n        if (!(GetSchemeStringLength() == 0)) {\n            // Use all of this URI\n            resuri.auth_ = CopyPart(resuri.scheme_, scheme_, GetSchemeStringLength());\n            resuri.path_ = CopyPart(resuri.auth_, auth_, GetAuthStringLength());\n            resuri.query_ = CopyPart(resuri.path_, path_, GetPathStringLength());\n            resuri.frag_ = CopyPart(resuri.query_, query_, GetQueryStringLength());\n            resuri.RemoveDotSegments();\n        } else {\n            // Use the base scheme\n            resuri.auth_ = CopyPart(resuri.scheme_, baseuri.scheme_, baseuri.GetSchemeStringLength());\n            if (!(GetAuthStringLength() == 0)) {\n                // Use this auth, path, query\n                resuri.path_ = CopyPart(resuri.auth_, auth_, GetAuthStringLength());\n                resuri.query_ = CopyPart(resuri.path_, path_, GetPathStringLength());\n                resuri.frag_ = CopyPart(resuri.query_, query_, GetQueryStringLength());\n                resuri.RemoveDotSegments();\n            } else {\n                // Use the base auth\n                resuri.path_ = CopyPart(resuri.auth_, baseuri.auth_, baseuri.GetAuthStringLength());\n                if (GetPathStringLength() == 0) {\n                    // Use the base path\n                    resuri.query_ = CopyPart(resuri.path_, baseuri.path_, baseuri.GetPathStringLength());\n                    if (GetQueryStringLength() == 0) {\n    
                    // Use the base query\n                        resuri.frag_ = CopyPart(resuri.query_, baseuri.query_, baseuri.GetQueryStringLength());\n                    } else {\n                        // Use this query\n                        resuri.frag_ = CopyPart(resuri.query_, query_, GetQueryStringLength());\n                    }\n                } else {\n                    if (path_[0] == '/') {\n                        // Absolute path - use all of this path\n                        resuri.query_ = CopyPart(resuri.path_, path_, GetPathStringLength());\n                        resuri.RemoveDotSegments();\n                    } else {\n                        // Relative path - append this path to base path after base path's last slash\n                        size_t pos = 0;\n                        if (!(baseuri.GetAuthStringLength() == 0) && baseuri.GetPathStringLength() == 0) {\n                            resuri.path_[pos] = '/';\n                            pos++;\n                        }\n                        size_t lastslashpos = baseuri.GetPathStringLength();\n                        while (lastslashpos > 0) {\n                            if (baseuri.path_[lastslashpos - 1] == '/') break;\n                            lastslashpos--;\n                        }\n                        std::memcpy(&resuri.path_[pos], baseuri.path_, lastslashpos * sizeof(Ch));\n                        pos += lastslashpos;\n                        resuri.query_ = CopyPart(&resuri.path_[pos], path_, GetPathStringLength());\n                        resuri.RemoveDotSegments();\n                    }\n                    // Use this query\n                    resuri.frag_ = CopyPart(resuri.query_, query_, GetQueryStringLength());\n                }\n            }\n        }\n        // Always use this frag\n        resuri.base_ = CopyPart(resuri.frag_, frag_, GetFragStringLength());\n\n        // Re-constitute base_ and uri_\n        resuri.SetBase();\n      
  resuri.uri_ = resuri.base_ + resuri.GetBaseStringLength() + 1;\n        resuri.SetUri();\n        return resuri;\n    }\n\n    //! Get the allocator of this GenericUri.\n    Allocator& GetAllocator() { return *allocator_; }\n\nprivate:\n    // Allocate memory for a URI\n    // Returns total amount allocated\n    std::size_t Allocate(std::size_t len) {\n        // Create own allocator if user did not supply.\n        if (!allocator_)\n            ownAllocator_ =  allocator_ = RAPIDJSON_NEW(Allocator)();\n\n        // Allocate one block containing each part of the URI (5) plus base plus full URI, all null terminated.\n        // Order: scheme, auth, path, query, frag, base, uri\n        // Note need to set, increment, assign in 3 stages to avoid compiler warning bug.\n        size_t total = (3 * len + 7) * sizeof(Ch);\n        scheme_ = static_cast<Ch*>(allocator_->Malloc(total));\n        *scheme_ = '\\0';\n        auth_ = scheme_;\n        auth_++;\n        *auth_ = '\\0';\n        path_ = auth_;\n        path_++;\n        *path_ = '\\0';\n        query_ = path_;\n        query_++;\n        *query_ = '\\0';\n        frag_ = query_;\n        frag_++;\n        *frag_ = '\\0';\n        base_ = frag_;\n        base_++;\n        *base_ = '\\0';\n        uri_ = base_;\n        uri_++;\n        *uri_ = '\\0';\n        return total;\n    }\n\n    // Free memory for a URI\n    void Free() {\n        if (scheme_) {\n            Allocator::Free(scheme_);\n            scheme_ = 0;\n        }\n    }\n\n    // Parse a URI into constituent scheme, authority, path, query, & fragment parts\n    // Supports URIs that match regex ^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\\?([^#]*))?(#(.*))? 
as per\n    // https://tools.ietf.org/html/rfc3986\n    void Parse(const Ch* uri, std::size_t len) {\n        std::size_t start = 0, pos1 = 0, pos2 = 0;\n        Allocate(len);\n\n        // Look for scheme ([^:/?#]+):)?\n        if (start < len) {\n            while (pos1 < len) {\n                if (uri[pos1] == ':') break;\n                pos1++;\n            }\n            if (pos1 != len) {\n                while (pos2 < len) {\n                    if (uri[pos2] == '/') break;\n                    if (uri[pos2] == '?') break;\n                    if (uri[pos2] == '#') break;\n                    pos2++;\n                }\n                if (pos1 < pos2) {\n                    pos1++;\n                    std::memcpy(scheme_, &uri[start], pos1 * sizeof(Ch));\n                    scheme_[pos1] = '\\0';\n                    start = pos1;\n                }\n            }\n        }\n        // Look for auth (//([^/?#]*))?\n        // Note need to set, increment, assign in 3 stages to avoid compiler warning bug.\n        auth_ = scheme_ + GetSchemeStringLength();\n        auth_++;\n        *auth_ = '\\0';\n        if (start < len - 1 && uri[start] == '/' && uri[start + 1] == '/') {\n            pos2 = start + 2;\n            while (pos2 < len) {\n                if (uri[pos2] == '/') break;\n                if (uri[pos2] == '?') break;\n                if (uri[pos2] == '#') break;\n                pos2++;\n            }\n            std::memcpy(auth_, &uri[start], (pos2 - start) * sizeof(Ch));\n            auth_[pos2 - start] = '\\0';\n            start = pos2;\n        }\n        // Look for path ([^?#]*)\n        // Note need to set, increment, assign in 3 stages to avoid compiler warning bug.\n        path_ = auth_ + GetAuthStringLength();\n        path_++;\n        *path_ = '\\0';\n        if (start < len) {\n            pos2 = start;\n            while (pos2 < len) {\n                if (uri[pos2] == '?') break;\n                if (uri[pos2] == '#') 
break;\n                pos2++;\n            }\n            if (start != pos2) {\n                std::memcpy(path_, &uri[start], (pos2 - start) * sizeof(Ch));\n                path_[pos2 - start] = '\\0';\n                if (path_[0] == '/')\n                    RemoveDotSegments();   // absolute path - normalize\n                start = pos2;\n            }\n        }\n        // Look for query (\\?([^#]*))?\n        // Note need to set, increment, assign in 3 stages to avoid compiler warning bug.\n        query_ = path_ + GetPathStringLength();\n        query_++;\n        *query_ = '\\0';\n        if (start < len && uri[start] == '?') {\n            pos2 = start + 1;\n            while (pos2 < len) {\n                if (uri[pos2] == '#') break;\n                pos2++;\n            }\n            if (start != pos2) {\n                std::memcpy(query_, &uri[start], (pos2 - start) * sizeof(Ch));\n                query_[pos2 - start] = '\\0';\n                start = pos2;\n            }\n        }\n        // Look for fragment (#(.*))?\n        // Note need to set, increment, assign in 3 stages to avoid compiler warning bug.\n        frag_ = query_ + GetQueryStringLength();\n        frag_++;\n        *frag_ = '\\0';\n        if (start < len && uri[start] == '#') {\n            std::memcpy(frag_, &uri[start], (len - start) * sizeof(Ch));\n            frag_[len - start] = '\\0';\n        }\n\n        // Re-constitute base_ and uri_\n        base_ = frag_ + GetFragStringLength() + 1;\n        SetBase();\n        uri_ = base_ + GetBaseStringLength() + 1;\n        SetUri();\n    }\n\n    // Reconstitute base\n    void SetBase() {\n        Ch* next = base_;\n        std::memcpy(next, scheme_, GetSchemeStringLength() * sizeof(Ch));\n        next+= GetSchemeStringLength();\n        std::memcpy(next, auth_, GetAuthStringLength() * sizeof(Ch));\n        next+= GetAuthStringLength();\n        std::memcpy(next, path_, GetPathStringLength() * sizeof(Ch));\n        next+= 
GetPathStringLength();\n        std::memcpy(next, query_, GetQueryStringLength() * sizeof(Ch));\n        next+= GetQueryStringLength();\n        *next = '\\0';\n    }\n\n    // Reconstitute uri\n    void SetUri() {\n        Ch* next = uri_;\n        std::memcpy(next, base_, GetBaseStringLength() * sizeof(Ch));\n        next+= GetBaseStringLength();\n        std::memcpy(next, frag_, GetFragStringLength() * sizeof(Ch));\n        next+= GetFragStringLength();\n        *next = '\\0';\n    }\n\n    // Copy a part from one GenericUri to another\n    // Return the pointer to the next part to be copied to\n    Ch* CopyPart(Ch* to, Ch* from, std::size_t len) {\n        RAPIDJSON_ASSERT(to != 0);\n        RAPIDJSON_ASSERT(from != 0);\n        std::memcpy(to, from, len * sizeof(Ch));\n        to[len] = '\\0';\n        Ch* next = to + len + 1;\n        return next;\n    }\n\n    // Remove . and .. segments from the path_ member.\n    // https://tools.ietf.org/html/rfc3986\n    // This is done in place as we are only removing segments.\n    void RemoveDotSegments() {\n        std::size_t pathlen = GetPathStringLength();\n        std::size_t pathpos = 0;  // Position in path_\n        std::size_t newpos = 0;   // Position in new path_\n\n        // Loop through each segment in original path_\n        while (pathpos < pathlen) {\n            // Get next segment, bounded by '/' or end\n            size_t slashpos = 0;\n            while ((pathpos + slashpos) < pathlen) {\n                if (path_[pathpos + slashpos] == '/') break;\n                slashpos++;\n            }\n            // Check for .. and . segments\n            if (slashpos == 2 && path_[pathpos] == '.' && path_[pathpos + 1] == '.') {\n                // Backup a .. 
segment in the new path_\n                // We expect to find a previously added slash at the end or nothing\n                RAPIDJSON_ASSERT(newpos == 0 || path_[newpos - 1] == '/');\n                size_t lastslashpos = newpos;\n                // Make sure we don't go beyond the start segment\n                if (lastslashpos > 1) {\n                    // Find the next to last slash and back up to it\n                    lastslashpos--;\n                    while (lastslashpos > 0) {\n                        if (path_[lastslashpos - 1] == '/') break;\n                        lastslashpos--;\n                    }\n                    // Set the new path_ position\n                    newpos = lastslashpos;\n                }\n            } else if (slashpos == 1 && path_[pathpos] == '.') {\n                // Discard . segment, leaves new path_ unchanged\n            } else {\n                // Move any other kind of segment to the new path_\n                RAPIDJSON_ASSERT(newpos <= pathpos);\n                std::memmove(&path_[newpos], &path_[pathpos], slashpos * sizeof(Ch));\n                newpos += slashpos;\n                // Add slash if not at end\n                if ((pathpos + slashpos) < pathlen) {\n                    path_[newpos] = '/';\n                    newpos++;\n                }\n            }\n            // Move to next segment\n            pathpos += slashpos + 1;\n        }\n        path_[newpos] = '\\0';\n    }\n\n    Ch* uri_;    // Everything\n    Ch* base_;   // Everything except fragment\n    Ch* scheme_; // Includes the :\n    Ch* auth_;   // Includes the //\n    Ch* path_;   // Absolute if starts with /\n    Ch* query_;  // Includes the ?\n    Ch* frag_;   // Includes the #\n\n    Allocator* allocator_;      //!< The current allocator. It is either user-supplied or equal to ownAllocator_.\n    Allocator* ownAllocator_;   //!< Allocator owned by this Uri.\n};\n\n//! 
GenericUri for Value (UTF-8, default allocator).\ntypedef GenericUri<Value> Uri;\n\nRAPIDJSON_NAMESPACE_END\n\n#if defined(__clang__)\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_URI_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include/rapidjson/writer.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef RAPIDJSON_WRITER_H_\n#define RAPIDJSON_WRITER_H_\n\n#include \"stream.h\"\n#include \"internal/clzll.h\"\n#include \"internal/meta.h\"\n#include \"internal/stack.h\"\n#include \"internal/strfunc.h\"\n#include \"internal/dtoa.h\"\n#include \"internal/itoa.h\"\n#include \"stringbuffer.h\"\n#include <new>      // placement new\n\n#if defined(RAPIDJSON_SIMD) && defined(_MSC_VER)\n#include <intrin.h>\n#pragma intrinsic(_BitScanForward)\n#endif\n#ifdef RAPIDJSON_SSE42\n#include <nmmintrin.h>\n#elif defined(RAPIDJSON_SSE2)\n#include <emmintrin.h>\n#elif defined(RAPIDJSON_NEON)\n#include <arm_neon.h>\n#endif\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(padded)\nRAPIDJSON_DIAG_OFF(unreachable-code)\nRAPIDJSON_DIAG_OFF(c++98-compat)\n#elif defined(_MSC_VER)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(4127) // conditional expression is constant\n#endif\n\nRAPIDJSON_NAMESPACE_BEGIN\n\n///////////////////////////////////////////////////////////////////////////////\n// WriteFlag\n\n/*! 
\\def RAPIDJSON_WRITE_DEFAULT_FLAGS \n    \\ingroup RAPIDJSON_CONFIG\n    \\brief User-defined kWriteDefaultFlags definition.\n\n    User can define this as any \\c WriteFlag combinations.\n*/\n#ifndef RAPIDJSON_WRITE_DEFAULT_FLAGS\n#define RAPIDJSON_WRITE_DEFAULT_FLAGS kWriteNoFlags\n#endif\n\n//! Combination of writeFlags\nenum WriteFlag {\n    kWriteNoFlags = 0,              //!< No flags are set.\n    kWriteValidateEncodingFlag = 1, //!< Validate encoding of JSON strings.\n    kWriteNanAndInfFlag = 2,        //!< Allow writing of Infinity, -Infinity and NaN.\n    kWriteNanAndInfNullFlag = 4,    //!< Allow writing of Infinity, -Infinity and NaN as null.\n    kWriteDefaultFlags = RAPIDJSON_WRITE_DEFAULT_FLAGS  //!< Default write flags. Can be customized by defining RAPIDJSON_WRITE_DEFAULT_FLAGS\n};\n\n//! JSON writer\n/*! Writer implements the concept Handler.\n    It generates JSON text by events to an output os.\n\n    User may programmatically calls the functions of a writer to generate JSON text.\n\n    On the other side, a writer can also be passed to objects that generates events, \n\n    for example Reader::Parse() and Document::Accept().\n\n    \\tparam OutputStream Type of output stream.\n    \\tparam SourceEncoding Encoding of source string.\n    \\tparam TargetEncoding Encoding of output stream.\n    \\tparam StackAllocator Type of allocator for allocating memory of stack.\n    \\note implements Handler concept\n*/\ntemplate<typename OutputStream, typename SourceEncoding = UTF8<>, typename TargetEncoding = UTF8<>, typename StackAllocator = CrtAllocator, unsigned writeFlags = kWriteDefaultFlags>\nclass Writer {\npublic:\n    typedef typename SourceEncoding::Ch Ch;\n\n    static const int kDefaultMaxDecimalPlaces = 324;\n\n    //! Constructor\n    /*! \\param os Output stream.\n        \\param stackAllocator User supplied allocator. 
If it is null, it will create a private one.\n        \\param levelDepth Initial capacity of stack.\n    */\n    explicit\n    Writer(OutputStream& os, StackAllocator* stackAllocator = 0, size_t levelDepth = kDefaultLevelDepth) : \n        os_(&os), level_stack_(stackAllocator, levelDepth * sizeof(Level)), maxDecimalPlaces_(kDefaultMaxDecimalPlaces), hasRoot_(false) {}\n\n    explicit\n    Writer(StackAllocator* allocator = 0, size_t levelDepth = kDefaultLevelDepth) :\n        os_(0), level_stack_(allocator, levelDepth * sizeof(Level)), maxDecimalPlaces_(kDefaultMaxDecimalPlaces), hasRoot_(false) {}\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    Writer(Writer&& rhs) :\n        os_(rhs.os_), level_stack_(std::move(rhs.level_stack_)), maxDecimalPlaces_(rhs.maxDecimalPlaces_), hasRoot_(rhs.hasRoot_) {\n        rhs.os_ = 0;\n    }\n#endif\n\n    //! Reset the writer with a new stream.\n    /*!\n        This function reset the writer with a new stream and default settings,\n        in order to make a Writer object reusable for output multiple JSONs.\n\n        \\param os New output stream.\n        \\code\n        Writer<OutputStream> writer(os1);\n        writer.StartObject();\n        // ...\n        writer.EndObject();\n\n        writer.Reset(os2);\n        writer.StartObject();\n        // ...\n        writer.EndObject();\n        \\endcode\n    */\n    void Reset(OutputStream& os) {\n        os_ = &os;\n        hasRoot_ = false;\n        level_stack_.Clear();\n    }\n\n    //! Checks whether the output is a complete JSON.\n    /*!\n        A complete JSON has a complete root object or array.\n    */\n    bool IsComplete() const {\n        return hasRoot_ && level_stack_.Empty();\n    }\n\n    int GetMaxDecimalPlaces() const {\n        return maxDecimalPlaces_;\n    }\n\n    //! 
Sets the maximum number of decimal places for double output.\n    /*!\n        This setting truncates the output with specified number of decimal places.\n\n        For example, \n\n        \\code\n        writer.SetMaxDecimalPlaces(3);\n        writer.StartArray();\n        writer.Double(0.12345);                 // \"0.123\"\n        writer.Double(0.0001);                  // \"0.0\"\n        writer.Double(1.234567890123456e30);    // \"1.234567890123456e30\" (do not truncate significand for positive exponent)\n        writer.Double(1.23e-4);                 // \"0.0\"                  (do truncate significand for negative exponent)\n        writer.EndArray();\n        \\endcode\n\n        The default setting does not truncate any decimal places. You can restore to this setting by calling\n        \\code\n        writer.SetMaxDecimalPlaces(Writer::kDefaultMaxDecimalPlaces);\n        \\endcode\n    */\n    void SetMaxDecimalPlaces(int maxDecimalPlaces) {\n        maxDecimalPlaces_ = maxDecimalPlaces;\n    }\n\n    /*!@name Implementation of Handler\n        \\see Handler\n    */\n    //@{\n\n    bool Null()                 { Prefix(kNullType);   return EndValue(WriteNull()); }\n    bool Bool(bool b)           { Prefix(b ? kTrueType : kFalseType); return EndValue(WriteBool(b)); }\n    bool Int(int i)             { Prefix(kNumberType); return EndValue(WriteInt(i)); }\n    bool Uint(unsigned u)       { Prefix(kNumberType); return EndValue(WriteUint(u)); }\n    bool Int64(int64_t i64)     { Prefix(kNumberType); return EndValue(WriteInt64(i64)); }\n    bool Uint64(uint64_t u64)   { Prefix(kNumberType); return EndValue(WriteUint64(u64)); }\n\n    //! 
Writes the given \\c double value to the stream\n    /*!\n        \\param d The value to be written.\n        \\return Whether it is succeed.\n    */\n    bool Double(double d)       { Prefix(kNumberType); return EndValue(WriteDouble(d)); }\n\n    bool RawNumber(const Ch* str, SizeType length, bool copy = false) {\n        RAPIDJSON_ASSERT(str != 0);\n        (void)copy;\n        Prefix(kNumberType);\n        return EndValue(WriteString(str, length));\n    }\n\n    bool String(const Ch* str, SizeType length, bool copy = false) {\n        RAPIDJSON_ASSERT(str != 0);\n        (void)copy;\n        Prefix(kStringType);\n        return EndValue(WriteString(str, length));\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    bool String(const std::basic_string<Ch>& str) {\n        return String(str.data(), SizeType(str.size()));\n    }\n#endif\n\n    bool StartObject() {\n        Prefix(kObjectType);\n        new (level_stack_.template Push<Level>()) Level(false);\n        return WriteStartObject();\n    }\n\n    bool Key(const Ch* str, SizeType length, bool copy = false) { return String(str, length, copy); }\n\n#if RAPIDJSON_HAS_STDSTRING\n    bool Key(const std::basic_string<Ch>& str)\n    {\n      return Key(str.data(), SizeType(str.size()));\n    }\n#endif\n\n    bool EndObject(SizeType memberCount = 0) {\n        (void)memberCount;\n        RAPIDJSON_ASSERT(level_stack_.GetSize() >= sizeof(Level)); // not inside an Object\n        RAPIDJSON_ASSERT(!level_stack_.template Top<Level>()->inArray); // currently inside an Array, not Object\n        RAPIDJSON_ASSERT(0 == level_stack_.template Top<Level>()->valueCount % 2); // Object has a Key without a Value\n        level_stack_.template Pop<Level>(1);\n        return EndValue(WriteEndObject());\n    }\n\n    bool StartArray() {\n        Prefix(kArrayType);\n        new (level_stack_.template Push<Level>()) Level(true);\n        return WriteStartArray();\n    }\n\n    bool EndArray(SizeType elementCount = 0) {\n        
(void)elementCount;\n        RAPIDJSON_ASSERT(level_stack_.GetSize() >= sizeof(Level));\n        RAPIDJSON_ASSERT(level_stack_.template Top<Level>()->inArray);\n        level_stack_.template Pop<Level>(1);\n        return EndValue(WriteEndArray());\n    }\n    //@}\n\n    /*! @name Convenience extensions */\n    //@{\n\n    //! Simpler but slower overload.\n    bool String(const Ch* const& str) { return String(str, internal::StrLen(str)); }\n    bool Key(const Ch* const& str) { return Key(str, internal::StrLen(str)); }\n    \n    //@}\n\n    //! Write a raw JSON value.\n    /*!\n        For user to write a stringified JSON as a value.\n\n        \\param json A well-formed JSON value. It should not contain null character within [0, length - 1] range.\n        \\param length Length of the json.\n        \\param type Type of the root of json.\n    */\n    bool RawValue(const Ch* json, size_t length, Type type) {\n        RAPIDJSON_ASSERT(json != 0);\n        Prefix(type);\n        return EndValue(WriteRawValue(json, length));\n    }\n\n    //! Flush the output stream.\n    /*!\n        Allows the user to flush the output stream immediately.\n     */\n    void Flush() {\n        os_->Flush();\n    }\n\n    static const size_t kDefaultLevelDepth = 32;\n\nprotected:\n    //! 
Information for each nested level\n    struct Level {\n        Level(bool inArray_) : valueCount(0), inArray(inArray_) {}\n        size_t valueCount;  //!< number of values in this level\n        bool inArray;       //!< true if in array, otherwise in object\n    };\n\n    bool WriteNull()  {\n        PutReserve(*os_, 4);\n        PutUnsafe(*os_, 'n'); PutUnsafe(*os_, 'u'); PutUnsafe(*os_, 'l'); PutUnsafe(*os_, 'l'); return true;\n    }\n\n    bool WriteBool(bool b)  {\n        if (b) {\n            PutReserve(*os_, 4);\n            PutUnsafe(*os_, 't'); PutUnsafe(*os_, 'r'); PutUnsafe(*os_, 'u'); PutUnsafe(*os_, 'e');\n        }\n        else {\n            PutReserve(*os_, 5);\n            PutUnsafe(*os_, 'f'); PutUnsafe(*os_, 'a'); PutUnsafe(*os_, 'l'); PutUnsafe(*os_, 's'); PutUnsafe(*os_, 'e');\n        }\n        return true;\n    }\n\n    bool WriteInt(int i) {\n        char buffer[11];\n        const char* end = internal::i32toa(i, buffer);\n        PutReserve(*os_, static_cast<size_t>(end - buffer));\n        for (const char* p = buffer; p != end; ++p)\n            PutUnsafe(*os_, static_cast<typename OutputStream::Ch>(*p));\n        return true;\n    }\n\n    bool WriteUint(unsigned u) {\n        char buffer[10];\n        const char* end = internal::u32toa(u, buffer);\n        PutReserve(*os_, static_cast<size_t>(end - buffer));\n        for (const char* p = buffer; p != end; ++p)\n            PutUnsafe(*os_, static_cast<typename OutputStream::Ch>(*p));\n        return true;\n    }\n\n    bool WriteInt64(int64_t i64) {\n        char buffer[21];\n        const char* end = internal::i64toa(i64, buffer);\n        PutReserve(*os_, static_cast<size_t>(end - buffer));\n        for (const char* p = buffer; p != end; ++p)\n            PutUnsafe(*os_, static_cast<typename OutputStream::Ch>(*p));\n        return true;\n    }\n\n    bool WriteUint64(uint64_t u64) {\n        char buffer[20];\n        char* end = internal::u64toa(u64, buffer);\n        
PutReserve(*os_, static_cast<size_t>(end - buffer));\n        for (char* p = buffer; p != end; ++p)\n            PutUnsafe(*os_, static_cast<typename OutputStream::Ch>(*p));\n        return true;\n    }\n\n    bool WriteDouble(double d) {\n        if (internal::Double(d).IsNanOrInf()) {\n            if (!(writeFlags & kWriteNanAndInfFlag) && !(writeFlags & kWriteNanAndInfNullFlag))\n                return false;\n            if (writeFlags & kWriteNanAndInfNullFlag) {\n                PutReserve(*os_, 4);\n                PutUnsafe(*os_, 'n'); PutUnsafe(*os_, 'u'); PutUnsafe(*os_, 'l'); PutUnsafe(*os_, 'l');\n                return true;\n            }\n            if (internal::Double(d).IsNan()) {\n                PutReserve(*os_, 3);\n                PutUnsafe(*os_, 'N'); PutUnsafe(*os_, 'a'); PutUnsafe(*os_, 'N');\n                return true;\n            }\n            if (internal::Double(d).Sign()) {\n                PutReserve(*os_, 9);\n                PutUnsafe(*os_, '-');\n            }\n            else\n                PutReserve(*os_, 8);\n            PutUnsafe(*os_, 'I'); PutUnsafe(*os_, 'n'); PutUnsafe(*os_, 'f');\n            PutUnsafe(*os_, 'i'); PutUnsafe(*os_, 'n'); PutUnsafe(*os_, 'i'); PutUnsafe(*os_, 't'); PutUnsafe(*os_, 'y');\n            return true;\n        }\n\n        char buffer[25];\n        char* end = internal::dtoa(d, buffer, maxDecimalPlaces_);\n        PutReserve(*os_, static_cast<size_t>(end - buffer));\n        for (char* p = buffer; p != end; ++p)\n            PutUnsafe(*os_, static_cast<typename OutputStream::Ch>(*p));\n        return true;\n    }\n\n    bool WriteString(const Ch* str, SizeType length)  {\n        static const typename OutputStream::Ch hexDigits[16] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F' };\n        static const char escape[256] = {\n#define Z16 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\n            //0    1    2    3    4    5    6    7    8    9    A    B    C    D    E   
 F\n            'u', 'u', 'u', 'u', 'u', 'u', 'u', 'u', 'b', 't', 'n', 'u', 'f', 'r', 'u', 'u', // 00\n            'u', 'u', 'u', 'u', 'u', 'u', 'u', 'u', 'u', 'u', 'u', 'u', 'u', 'u', 'u', 'u', // 10\n              0,   0, '\"',   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0, // 20\n            Z16, Z16,                                                                       // 30~4F\n              0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,'\\\\',   0,   0,   0, // 50\n            Z16, Z16, Z16, Z16, Z16, Z16, Z16, Z16, Z16, Z16                                // 60~FF\n#undef Z16\n        };\n\n        if (TargetEncoding::supportUnicode)\n            PutReserve(*os_, 2 + length * 6); // \"\\uxxxx...\"\n        else\n            PutReserve(*os_, 2 + length * 12);  // \"\\uxxxx\\uyyyy...\"\n\n        PutUnsafe(*os_, '\\\"');\n        GenericStringStream<SourceEncoding> is(str);\n        while (ScanWriteUnescapedString(is, length)) {\n            const Ch c = is.Peek();\n            if (!TargetEncoding::supportUnicode && static_cast<unsigned>(c) >= 0x80) {\n                // Unicode escaping\n                unsigned codepoint;\n                if (RAPIDJSON_UNLIKELY(!SourceEncoding::Decode(is, &codepoint)))\n                    return false;\n                PutUnsafe(*os_, '\\\\');\n                PutUnsafe(*os_, 'u');\n                if (codepoint <= 0xD7FF || (codepoint >= 0xE000 && codepoint <= 0xFFFF)) {\n                    PutUnsafe(*os_, hexDigits[(codepoint >> 12) & 15]);\n                    PutUnsafe(*os_, hexDigits[(codepoint >>  8) & 15]);\n                    PutUnsafe(*os_, hexDigits[(codepoint >>  4) & 15]);\n                    PutUnsafe(*os_, hexDigits[(codepoint      ) & 15]);\n                }\n                else {\n                    RAPIDJSON_ASSERT(codepoint >= 0x010000 && codepoint <= 0x10FFFF);\n                    // Surrogate pair\n                    unsigned s = codepoint - 0x010000;\n                 
   unsigned lead = (s >> 10) + 0xD800;\n                    unsigned trail = (s & 0x3FF) + 0xDC00;\n                    PutUnsafe(*os_, hexDigits[(lead >> 12) & 15]);\n                    PutUnsafe(*os_, hexDigits[(lead >>  8) & 15]);\n                    PutUnsafe(*os_, hexDigits[(lead >>  4) & 15]);\n                    PutUnsafe(*os_, hexDigits[(lead      ) & 15]);\n                    PutUnsafe(*os_, '\\\\');\n                    PutUnsafe(*os_, 'u');\n                    PutUnsafe(*os_, hexDigits[(trail >> 12) & 15]);\n                    PutUnsafe(*os_, hexDigits[(trail >>  8) & 15]);\n                    PutUnsafe(*os_, hexDigits[(trail >>  4) & 15]);\n                    PutUnsafe(*os_, hexDigits[(trail      ) & 15]);                    \n                }\n            }\n            else if ((sizeof(Ch) == 1 || static_cast<unsigned>(c) < 256) && RAPIDJSON_UNLIKELY(escape[static_cast<unsigned char>(c)]))  {\n                is.Take();\n                PutUnsafe(*os_, '\\\\');\n                PutUnsafe(*os_, static_cast<typename OutputStream::Ch>(escape[static_cast<unsigned char>(c)]));\n                if (escape[static_cast<unsigned char>(c)] == 'u') {\n                    PutUnsafe(*os_, '0');\n                    PutUnsafe(*os_, '0');\n                    PutUnsafe(*os_, hexDigits[static_cast<unsigned char>(c) >> 4]);\n                    PutUnsafe(*os_, hexDigits[static_cast<unsigned char>(c) & 0xF]);\n                }\n            }\n            else if (RAPIDJSON_UNLIKELY(!(writeFlags & kWriteValidateEncodingFlag ? 
\n                Transcoder<SourceEncoding, TargetEncoding>::Validate(is, *os_) :\n                Transcoder<SourceEncoding, TargetEncoding>::TranscodeUnsafe(is, *os_))))\n                return false;\n        }\n        PutUnsafe(*os_, '\\\"');\n        return true;\n    }\n\n    bool ScanWriteUnescapedString(GenericStringStream<SourceEncoding>& is, size_t length) {\n        return RAPIDJSON_LIKELY(is.Tell() < length);\n    }\n\n    bool WriteStartObject() { os_->Put('{'); return true; }\n    bool WriteEndObject()   { os_->Put('}'); return true; }\n    bool WriteStartArray()  { os_->Put('['); return true; }\n    bool WriteEndArray()    { os_->Put(']'); return true; }\n\n    bool WriteRawValue(const Ch* json, size_t length) {\n        PutReserve(*os_, length);\n        GenericStringStream<SourceEncoding> is(json);\n        while (RAPIDJSON_LIKELY(is.Tell() < length)) {\n            RAPIDJSON_ASSERT(is.Peek() != '\\0');\n            if (RAPIDJSON_UNLIKELY(!(writeFlags & kWriteValidateEncodingFlag ? \n                Transcoder<SourceEncoding, TargetEncoding>::Validate(is, *os_) :\n                Transcoder<SourceEncoding, TargetEncoding>::TranscodeUnsafe(is, *os_))))\n                return false;\n        }\n        return true;\n    }\n\n    void Prefix(Type type) {\n        (void)type;\n        if (RAPIDJSON_LIKELY(level_stack_.GetSize() != 0)) { // this value is not at root\n            Level* level = level_stack_.template Top<Level>();\n            if (level->valueCount > 0) {\n                if (level->inArray) \n                    os_->Put(','); // add comma if it is not the first element in array\n                else  // in object\n                    os_->Put((level->valueCount % 2 == 0) ? 
',' : ':');\n            }\n            if (!level->inArray && level->valueCount % 2 == 0)\n                RAPIDJSON_ASSERT(type == kStringType);  // if it's in object, then even number should be a name\n            level->valueCount++;\n        }\n        else {\n            RAPIDJSON_ASSERT(!hasRoot_);    // Should only has one and only one root.\n            hasRoot_ = true;\n        }\n    }\n\n    // Flush the value if it is the top level one.\n    bool EndValue(bool ret) {\n        if (RAPIDJSON_UNLIKELY(level_stack_.Empty()))   // end of json text\n            Flush();\n        return ret;\n    }\n\n    OutputStream* os_;\n    internal::Stack<StackAllocator> level_stack_;\n    int maxDecimalPlaces_;\n    bool hasRoot_;\n\nprivate:\n    // Prohibit copy constructor & assignment operator.\n    Writer(const Writer&);\n    Writer& operator=(const Writer&);\n};\n\n// Full specialization for StringStream to prevent memory copying\n\ntemplate<>\ninline bool Writer<StringBuffer>::WriteInt(int i) {\n    char *buffer = os_->Push(11);\n    const char* end = internal::i32toa(i, buffer);\n    os_->Pop(static_cast<size_t>(11 - (end - buffer)));\n    return true;\n}\n\ntemplate<>\ninline bool Writer<StringBuffer>::WriteUint(unsigned u) {\n    char *buffer = os_->Push(10);\n    const char* end = internal::u32toa(u, buffer);\n    os_->Pop(static_cast<size_t>(10 - (end - buffer)));\n    return true;\n}\n\ntemplate<>\ninline bool Writer<StringBuffer>::WriteInt64(int64_t i64) {\n    char *buffer = os_->Push(21);\n    const char* end = internal::i64toa(i64, buffer);\n    os_->Pop(static_cast<size_t>(21 - (end - buffer)));\n    return true;\n}\n\ntemplate<>\ninline bool Writer<StringBuffer>::WriteUint64(uint64_t u) {\n    char *buffer = os_->Push(20);\n    const char* end = internal::u64toa(u, buffer);\n    os_->Pop(static_cast<size_t>(20 - (end - buffer)));\n    return true;\n}\n\ntemplate<>\ninline bool Writer<StringBuffer>::WriteDouble(double d) {\n    if 
(internal::Double(d).IsNanOrInf()) {\n        // Note: This code path can only be reached if (RAPIDJSON_WRITE_DEFAULT_FLAGS & kWriteNanAndInfFlag).\n        if (!(kWriteDefaultFlags & kWriteNanAndInfFlag))\n            return false;\n        if (kWriteDefaultFlags & kWriteNanAndInfNullFlag) {\n            PutReserve(*os_, 4);\n            PutUnsafe(*os_, 'n'); PutUnsafe(*os_, 'u'); PutUnsafe(*os_, 'l'); PutUnsafe(*os_, 'l');\n            return true;\n        }\n        if (internal::Double(d).IsNan()) {\n            PutReserve(*os_, 3);\n            PutUnsafe(*os_, 'N'); PutUnsafe(*os_, 'a'); PutUnsafe(*os_, 'N');\n            return true;\n        }\n        if (internal::Double(d).Sign()) {\n            PutReserve(*os_, 9);\n            PutUnsafe(*os_, '-');\n        }\n        else\n            PutReserve(*os_, 8);\n        PutUnsafe(*os_, 'I'); PutUnsafe(*os_, 'n'); PutUnsafe(*os_, 'f');\n        PutUnsafe(*os_, 'i'); PutUnsafe(*os_, 'n'); PutUnsafe(*os_, 'i'); PutUnsafe(*os_, 't'); PutUnsafe(*os_, 'y');\n        return true;\n    }\n    \n    char *buffer = os_->Push(25);\n    char* end = internal::dtoa(d, buffer, maxDecimalPlaces_);\n    os_->Pop(static_cast<size_t>(25 - (end - buffer)));\n    return true;\n}\n\n#if defined(RAPIDJSON_SSE2) || defined(RAPIDJSON_SSE42)\ntemplate<>\ninline bool Writer<StringBuffer>::ScanWriteUnescapedString(StringStream& is, size_t length) {\n    if (length < 16)\n        return RAPIDJSON_LIKELY(is.Tell() < length);\n\n    if (!RAPIDJSON_LIKELY(is.Tell() < length))\n        return false;\n\n    const char* p = is.src_;\n    const char* end = is.head_ + length;\n    const char* nextAligned = reinterpret_cast<const char*>((reinterpret_cast<size_t>(p) + 15) & static_cast<size_t>(~15));\n    const char* endAligned = reinterpret_cast<const char*>(reinterpret_cast<size_t>(end) & static_cast<size_t>(~15));\n    if (nextAligned > end)\n        return true;\n\n    while (p != nextAligned)\n        if (*p < 0x20 || *p == '\\\"' || *p == 
'\\\\') {\n            is.src_ = p;\n            return RAPIDJSON_LIKELY(is.Tell() < length);\n        }\n        else\n            os_->PutUnsafe(*p++);\n\n    // The rest of string using SIMD\n    static const char dquote[16] = { '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"', '\\\"' };\n    static const char bslash[16] = { '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\', '\\\\' };\n    static const char space[16]  = { 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F, 0x1F };\n    const __m128i dq = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&dquote[0]));\n    const __m128i bs = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&bslash[0]));\n    const __m128i sp = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&space[0]));\n\n    for (; p != endAligned; p += 16) {\n        const __m128i s = _mm_load_si128(reinterpret_cast<const __m128i *>(p));\n        const __m128i t1 = _mm_cmpeq_epi8(s, dq);\n        const __m128i t2 = _mm_cmpeq_epi8(s, bs);\n        const __m128i t3 = _mm_cmpeq_epi8(_mm_max_epu8(s, sp), sp); // s < 0x20 <=> max(s, 0x1F) == 0x1F\n        const __m128i x = _mm_or_si128(_mm_or_si128(t1, t2), t3);\n        unsigned short r = static_cast<unsigned short>(_mm_movemask_epi8(x));\n        if (RAPIDJSON_UNLIKELY(r != 0)) {   // some of characters is escaped\n            SizeType len;\n#ifdef _MSC_VER         // Find the index of first escaped\n            unsigned long offset;\n            _BitScanForward(&offset, r);\n            len = offset;\n#else\n            len = static_cast<SizeType>(__builtin_ffs(r) - 1);\n#endif\n            char* q = reinterpret_cast<char*>(os_->PushUnsafe(len));\n            for (size_t i = 0; i < len; i++)\n                q[i] = p[i];\n\n            p += len;\n            break;\n        }\n        
_mm_storeu_si128(reinterpret_cast<__m128i *>(os_->PushUnsafe(16)), s);\n    }\n\n    is.src_ = p;\n    return RAPIDJSON_LIKELY(is.Tell() < length);\n}\n#elif defined(RAPIDJSON_NEON)\ntemplate<>\ninline bool Writer<StringBuffer>::ScanWriteUnescapedString(StringStream& is, size_t length) {\n    if (length < 16)\n        return RAPIDJSON_LIKELY(is.Tell() < length);\n\n    if (!RAPIDJSON_LIKELY(is.Tell() < length))\n        return false;\n\n    const char* p = is.src_;\n    const char* end = is.head_ + length;\n    const char* nextAligned = reinterpret_cast<const char*>((reinterpret_cast<size_t>(p) + 15) & static_cast<size_t>(~15));\n    const char* endAligned = reinterpret_cast<const char*>(reinterpret_cast<size_t>(end) & static_cast<size_t>(~15));\n    if (nextAligned > end)\n        return true;\n\n    while (p != nextAligned)\n        if (*p < 0x20 || *p == '\\\"' || *p == '\\\\') {\n            is.src_ = p;\n            return RAPIDJSON_LIKELY(is.Tell() < length);\n        }\n        else\n            os_->PutUnsafe(*p++);\n\n    // The rest of string using SIMD\n    const uint8x16_t s0 = vmovq_n_u8('\"');\n    const uint8x16_t s1 = vmovq_n_u8('\\\\');\n    const uint8x16_t s2 = vmovq_n_u8('\\b');\n    const uint8x16_t s3 = vmovq_n_u8(32);\n\n    for (; p != endAligned; p += 16) {\n        const uint8x16_t s = vld1q_u8(reinterpret_cast<const uint8_t *>(p));\n        uint8x16_t x = vceqq_u8(s, s0);\n        x = vorrq_u8(x, vceqq_u8(s, s1));\n        x = vorrq_u8(x, vceqq_u8(s, s2));\n        x = vorrq_u8(x, vcltq_u8(s, s3));\n\n        x = vrev64q_u8(x);                     // Rev in 64\n        uint64_t low = vgetq_lane_u64(vreinterpretq_u64_u8(x), 0);   // extract\n        uint64_t high = vgetq_lane_u64(vreinterpretq_u64_u8(x), 1);  // extract\n\n        SizeType len = 0;\n        bool escaped = false;\n        if (low == 0) {\n            if (high != 0) {\n                uint32_t lz = internal::clzll(high);\n                len = 8 + (lz >> 3);\n                
escaped = true;\n            }\n        } else {\n            uint32_t lz = internal::clzll(low);\n            len = lz >> 3;\n            escaped = true;\n        }\n        if (RAPIDJSON_UNLIKELY(escaped)) {   // some of characters is escaped\n            char* q = reinterpret_cast<char*>(os_->PushUnsafe(len));\n            for (size_t i = 0; i < len; i++)\n                q[i] = p[i];\n\n            p += len;\n            break;\n        }\n        vst1q_u8(reinterpret_cast<uint8_t *>(os_->PushUnsafe(16)), s);\n    }\n\n    is.src_ = p;\n    return RAPIDJSON_LIKELY(is.Tell() < length);\n}\n#endif // RAPIDJSON_NEON\n\nRAPIDJSON_NAMESPACE_END\n\n#if defined(_MSC_VER) || defined(__clang__)\nRAPIDJSON_DIAG_POP\n#endif\n\n#endif // RAPIDJSON_RAPIDJSON_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/include_dirs.js",
    "content": "var path = require('path');\nconsole.log(path.join(path.relative('.', __dirname), 'include'));\n"
  },
  {
    "path": "C/thirdparty/rapidjson/library.json",
    "content": "{\n  \"name\": \"RapidJSON\",\n  \"version\": \"1.1.0\",\n  \"keywords\": \"json, sax, dom, parser, generator\",\n  \"description\": \"A fast JSON parser/generator for C++ with both SAX/DOM style API\",\n  \"export\": {\n    \"include\": \"include\"\n  },\n  \"examples\": \"example/*/*.cpp\",\n  \"repository\":\n  {\n    \"type\": \"git\",\n    \"url\": \"https://github.com/Tencent/rapidjson\"\n  }\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/license.txt",
    "content": "Tencent is pleased to support the open source community by making RapidJSON available. \n \nCopyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.  All rights reserved.\n\nIf you have downloaded a copy of the RapidJSON binary from Tencent, please note that the RapidJSON binary is licensed under the MIT License.\nIf you have downloaded a copy of the RapidJSON source code from Tencent, please note that RapidJSON source code is licensed under the MIT License, except for the third-party components listed below which are subject to different license terms.  Your integration of RapidJSON into your own projects may require compliance with the MIT License, as well as the other licenses applicable to the third-party components included within RapidJSON. To avoid the problematic JSON license in your own projects, it's sufficient to exclude the bin/jsonchecker/ directory, as it's the only code under the JSON license.\nA copy of the MIT License is included in this file.\n\nOther dependencies and licenses:\n\nOpen Source Software Licensed Under the BSD License:\n--------------------------------------------------------------------\n\nThe msinttypes r29 \nCopyright (c) 2006-2013 Alexander Chemeris \nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 
\n* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.\n* Neither the name of  copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS AND CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nOpen Source Software Licensed Under the JSON License:\n--------------------------------------------------------------------\n\njson.org \nCopyright (c) 2002 JSON.org\nAll Rights Reserved.\n\nJSON_checker\nCopyright (c) 2002 JSON.org\nAll Rights Reserved.\n\n\t\nTerms of the JSON License:\n---------------------------------------------------\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in 
all copies or substantial portions of the Software.\n\nThe Software shall be used for Good, not Evil.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\nTerms of the MIT License:\n--------------------------------------------------------------------\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "C/thirdparty/rapidjson/package.json",
    "content": "{\n  \"name\": \"rapidjson\",\n  \"version\": \"1.0.4\",\n  \"description\": \"![](doc/logo/rapidjson.png)\",\n  \"main\": \"include_dirs.js\",\n  \"directories\": {\n    \"doc\": \"doc\",\n    \"example\": \"example\",\n    \"test\": \"test\"\n  },\n  \"scripts\": {\n    \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\"\n  },\n  \"repository\": {\n    \"type\": \"git\",\n    \"url\": \"git+https://github.com/Tencent/rapidjson.git\"\n  },\n  \"author\": \"\",\n  \"license\": \"ISC\",\n  \"bugs\": {\n    \"url\": \"https://github.com/Tencent/rapidjson/issues\"\n  },\n  \"homepage\": \"https://github.com/Tencent/rapidjson#readme\"\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/rapidjson.autopkg",
    "content": "nuget {\n\t//Usage:  Write-NuGetPackage rapidjson.autopkg -defines:MYVERSION=1.1.0\n\t//Be sure you are running Powershell 3.0 and have the CoApp powershell extensions installed properly.\n\tnuspec {\n\t\tid = rapidjson;\n\t\tversion : ${MYVERSION};\n\t\ttitle: \"rapidjson\";\n\t\tauthors: {\"https://github.com/Tencent/rapidjson/releases/tag/v1.1.0\"};\n\t\towners: {\"@lsantos (github)\"};\n\t\tlicenseUrl: \"https://github.com/Tencent/rapidjson/blob/master/license.txt\";\n\t\tprojectUrl: \"https://github.com/Tencent/rapidjson/\";\n\t\ticonUrl: \"https://cdn1.iconfinder.com/data/icons/fatcow/32x32/json.png\";\n\t\trequireLicenseAcceptance:false;\n\t\tsummary: @\"A fast JSON parser/generator for C++ with both SAX/DOM style API\";\n\t\t\n\t\t// if you need to span several lines you can prefix a string with an @ symbol (exactly like c# does).\n\t\tdescription: @\"Rapidjson is an attempt to create the fastest JSON parser and generator.\n\n              - Small but complete. Supports both SAX and DOM style API. SAX parser only a few hundred lines of code.\n              - Fast. In the order of magnitude of strlen(). Optionally supports SSE2/SSE4.2 for acceleration.\n              - Self-contained. Minimal dependency on standard libraries. No BOOST, not even STL.\n              - Compact. Each JSON value is 16 or 20 bytes for 32 or 64-bit machines respectively (excluding text string storage). With the custom memory allocator, parser allocates memory compactly during parsing.\n              - Full  RFC4627 compliance. Supports UTF-8, UTF-16 and UTF-32.\n              - Support both in-situ parsing (directly decode strings into the source JSON text) and non-destructive parsing (decode strings into new buffers).\n              - Parse number to int/unsigned/int64_t/uint64_t/double depending on input\n              - Support custom memory allocation. 
Also, the default memory pool allocator can also be supplied with a user buffer (such as a buffer allocated on user's heap or - programme stack) to minimize allocation.\n\n              As the name implies, rapidjson is inspired by rapidxml.\";\n\t\t\n\t\treleaseNotes: @\"\nAdded\n\tAdd Value::XXXMember(...) overloads for std::string (#335)\n\nFixed\n\tInclude rapidjson.h for all internal/error headers.\n\tParsing some numbers incorrectly in full-precision mode (kFullPrecisionParseFlag) (#342)\n\tFix alignment of 64bit platforms (#328)\n\tFix MemoryPoolAllocator::Clear() to clear user-buffer (0691502)\n\nChanged\n\tCMakeLists for include as a thirdparty in projects (#334, #337)\n\tChange Document::ParseStream() to use stack allocator for Reader (ffbe386)\";\n\n\t\tcopyright: \"Copyright 2015\";\n\t\ttags: { native, coapp, JSON, nativepackage };\n\t\tlanguage: en-US;\n\t};\n\t\n\tdependencies {\n\t\tpackages : {\n\t\t\t//TODO:  Add dependencies here in [pkg.name]/[version] form per newline\t\t\n\t\t\t//zlib/[1.2.8],\t\t\t\n\t\t};\n\t}\n\t\n\t// the files that go into the content folders\n\tfiles {\t\n\t\t#defines {\n\t\t\tSDK_ROOT \t = .\\;\t\t\t\n\t\t}\n\n\t\t// grab all the files in the include folder\n\t\t// the folder that contains all the .h files will \n\t\t// automatically get added to the Includes path.\n\t\tnestedinclude += {\n\t\t\t#destination = ${d_include}rapidjson;\n\t\t\t\"${SDK_ROOT}include\\rapidjson\\**\\*.h\"\n\t\t};\n\t};\n\t\n\ttargets {\n\t\t// We're trying to be standard about these sorts of thing. (Will help with config.h later :D)\n\t\t//Defines += HAS_EQCORE;\n\t\t// Fix creating the package with Raggles' fork of CoApp\n\t\tIncludes += \"$(MSBuildThisFileDirectory)../..${d_include}\";\n\t};\n}"
  },
  {
    "path": "C/thirdparty/rapidjson/readme.md",
    "content": "![RapidJSON logo](doc/logo/rapidjson.png)\n\n![Release version](https://img.shields.io/badge/release-v1.1.0-blue.svg)\n\n## A fast JSON parser/generator for C++ with both SAX/DOM style API\n\nTencent is pleased to support the open source community by making RapidJSON available.\n\nCopyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n\n* [RapidJSON GitHub](https://github.com/Tencent/rapidjson/)\n* RapidJSON Documentation\n  * [English](http://rapidjson.org/)\n  * [简体中文](http://rapidjson.org/zh-cn/)\n  * [GitBook](https://www.gitbook.com/book/miloyip/rapidjson/) with downloadable PDF/EPUB/MOBI, without API reference.\n\n## Build status\n\n| [Linux][lin-link] | [Windows][win-link] | [Coveralls][cov-link] |\n| :---------------: | :-----------------: | :-------------------: |\n| ![lin-badge]      | ![win-badge]        | ![cov-badge]          |\n\n[lin-badge]: https://travis-ci.org/Tencent/rapidjson.svg?branch=master \"Travis build status\"\n[lin-link]:  https://travis-ci.org/Tencent/rapidjson \"Travis build status\"\n[win-badge]: https://ci.appveyor.com/api/projects/status/l6qulgqahcayidrf/branch/master?svg=true \"AppVeyor build status\"\n[win-link]:  https://ci.appveyor.com/project/miloyip/rapidjson-0fdqj/branch/master \"AppVeyor build status\"\n[cov-badge]: https://coveralls.io/repos/Tencent/rapidjson/badge.svg?branch=master \"Coveralls coverage\"\n[cov-link]:  https://coveralls.io/r/Tencent/rapidjson?branch=master \"Coveralls coverage\"\n\n## Introduction\n\nRapidJSON is a JSON parser and generator for C++. It was inspired by [RapidXml](http://rapidxml.sourceforge.net/).\n\n* RapidJSON is **small** but **complete**. It supports both SAX and DOM style API. The SAX parser is only a half thousand lines of code.\n\n* RapidJSON is **fast**. Its performance can be comparable to `strlen()`. It also optionally supports SSE2/SSE4.2 for acceleration.\n\n* RapidJSON is **self-contained** and **header-only**. 
It does not depend on external libraries such as BOOST. It even does not depend on STL.\n\n* RapidJSON is **memory-friendly**. Each JSON value occupies exactly 16 bytes for most 32/64-bit machines (excluding text string). By default it uses a fast memory allocator, and the parser allocates memory compactly during parsing.\n\n* RapidJSON is **Unicode-friendly**. It supports UTF-8, UTF-16, UTF-32 (LE & BE), and their detection, validation and transcoding internally. For example, you can read a UTF-8 file and let RapidJSON transcode the JSON strings into UTF-16 in the DOM. It also supports surrogates and \"\\u0000\" (null character).\n\nMore features can be read [here](doc/features.md).\n\nJSON(JavaScript Object Notation) is a light-weight data exchange format. RapidJSON should be in full compliance with RFC7159/ECMA-404, with optional support of relaxed syntax. More information about JSON can be obtained at\n* [Introducing JSON](http://json.org/)\n* [RFC7159: The JavaScript Object Notation (JSON) Data Interchange Format](https://tools.ietf.org/html/rfc7159)\n* [Standard ECMA-404: The JSON Data Interchange Format](https://www.ecma-international.org/publications/standards/Ecma-404.htm)\n\n## Highlights in v1.1 (2016-8-25)\n\n* Added [JSON Pointer](doc/pointer.md)\n* Added [JSON Schema](doc/schema.md)\n* Added [relaxed JSON syntax](doc/dom.md) (comment, trailing comma, NaN/Infinity)\n* Iterating array/object with [C++11 Range-based for loop](doc/tutorial.md)\n* Reduce memory overhead of each `Value` from 24 bytes to 16 bytes in x86-64 architecture.\n\nFor other changes please refer to [change log](CHANGELOG.md).\n\n## Compatibility\n\nRapidJSON is cross-platform. 
Some platform/compiler combinations which have been tested are shown as follows.\n* Visual C++ 2008/2010/2013 on Windows (32/64-bit)\n* GNU C++ 3.8.x on Cygwin\n* Clang 3.4 on Mac OS X (32/64-bit) and iOS\n* Clang 3.4 on Android NDK\n\nUsers can build and run the unit tests on their platform/compiler.\n\n## Installation\n\nRapidJSON is a header-only C++ library. Just copy the `include/rapidjson` folder to system or project's include path.\n\nAlternatively, if you are using the [vcpkg](https://github.com/Microsoft/vcpkg/) dependency manager you can download and install rapidjson with CMake integration in a single command:\n* vcpkg install rapidjson\n\nRapidJSON uses following software as its dependencies:\n* [CMake](https://cmake.org/) as a general build tool\n* (optional) [Doxygen](http://www.doxygen.org) to build documentation\n* (optional) [googletest](https://github.com/google/googletest) for unit and performance testing\n\nTo generate user documentation and run tests please proceed with the steps below:\n\n1. Execute `git submodule update --init` to get the files of thirdparty submodules (google test).\n2. Create directory called `build` in rapidjson source directory.\n3. Change to `build` directory and run `cmake ..` command to configure your build. Windows users can do the same with cmake-gui application.\n4. On Windows, build the solution found in the build directory. On Linux, run `make` from the build directory.\n\nOn successful build you will find compiled test and example binaries in `bin`\ndirectory. The generated documentation will be available in `doc/html`\ndirectory of the build tree. To run tests after finished build please run `make\ntest` or `ctest` from your build tree. You can get detailed output using `ctest\n-V` command.\n\nIt is possible to install library system-wide by running `make install` command\nfrom the build tree with administrative privileges. This will install all files\naccording to system preferences.  
Once RapidJSON is installed, it is possible\nto use it from other CMake projects by adding `find_package(RapidJSON)` line to\nyour CMakeLists.txt.\n\n## Usage at a glance\n\nThis simple example parses a JSON string into a document (DOM), make a simple modification of the DOM, and finally stringify the DOM to a JSON string.\n\n~~~~~~~~~~cpp\n// rapidjson/example/simpledom/simpledom.cpp`\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include <iostream>\n\nusing namespace rapidjson;\n\nint main() {\n    // 1. Parse a JSON string into DOM.\n    const char* json = \"{\\\"project\\\":\\\"rapidjson\\\",\\\"stars\\\":10}\";\n    Document d;\n    d.Parse(json);\n\n    // 2. Modify it by DOM.\n    Value& s = d[\"stars\"];\n    s.SetInt(s.GetInt() + 1);\n\n    // 3. Stringify the DOM\n    StringBuffer buffer;\n    Writer<StringBuffer> writer(buffer);\n    d.Accept(writer);\n\n    // Output {\"project\":\"rapidjson\",\"stars\":11}\n    std::cout << buffer.GetString() << std::endl;\n    return 0;\n}\n~~~~~~~~~~\n\nNote that this example did not handle potential errors.\n\nThe following diagram shows the process.\n\n![simpledom](doc/diagram/simpledom.png)\n\nMore [examples](https://github.com/Tencent/rapidjson/tree/master/example) are available:\n\n* DOM API\n  * [tutorial](https://github.com/Tencent/rapidjson/blob/master/example/tutorial/tutorial.cpp): Basic usage of DOM API.\n\n* SAX API\n  * [simplereader](https://github.com/Tencent/rapidjson/blob/master/example/simplereader/simplereader.cpp): Dumps all SAX events while parsing a JSON by `Reader`.\n  * [condense](https://github.com/Tencent/rapidjson/blob/master/example/condense/condense.cpp): A command line tool to rewrite a JSON, with all whitespaces removed.\n  * [pretty](https://github.com/Tencent/rapidjson/blob/master/example/pretty/pretty.cpp): A command line tool to rewrite a JSON with indents and newlines by `PrettyWriter`.\n  * 
[capitalize](https://github.com/Tencent/rapidjson/blob/master/example/capitalize/capitalize.cpp): A command line tool to capitalize strings in JSON.\n  * [messagereader](https://github.com/Tencent/rapidjson/blob/master/example/messagereader/messagereader.cpp): Parse a JSON message with SAX API.\n  * [serialize](https://github.com/Tencent/rapidjson/blob/master/example/serialize/serialize.cpp): Serialize a C++ object into JSON with SAX API.\n  * [jsonx](https://github.com/Tencent/rapidjson/blob/master/example/jsonx/jsonx.cpp): Implements a `JsonxWriter` which stringify SAX events into [JSONx](https://www-01.ibm.com/support/knowledgecenter/SS9H2Y_7.1.0/com.ibm.dp.doc/json_jsonx.html) (a kind of XML) format. The example is a command line tool which converts input JSON into JSONx format.\n\n* Schema\n  * [schemavalidator](https://github.com/Tencent/rapidjson/blob/master/example/schemavalidator/schemavalidator.cpp) : A command line tool to validate a JSON with a JSON schema.\n\n* Advanced\n  * [prettyauto](https://github.com/Tencent/rapidjson/blob/master/example/prettyauto/prettyauto.cpp): A modified version of [pretty](https://github.com/Tencent/rapidjson/blob/master/example/pretty/pretty.cpp) to automatically handle JSON with any UTF encodings.\n  * [parsebyparts](https://github.com/Tencent/rapidjson/blob/master/example/parsebyparts/parsebyparts.cpp): Implements an `AsyncDocumentParser` which can parse JSON in parts, using C++11 thread.\n  * [filterkey](https://github.com/Tencent/rapidjson/blob/master/example/filterkey/filterkey.cpp): A command line tool to remove all values with user-specified key.\n  * [filterkeydom](https://github.com/Tencent/rapidjson/blob/master/example/filterkeydom/filterkeydom.cpp): Same tool as above, but it demonstrates how to use a generator to populate a `Document`.\n\n## Contributing\n\nRapidJSON welcomes contributions. 
When contributing, please follow the code below.\n\n### Issues\n\nFeel free to submit issues and enhancement requests.\n\nPlease help us by providing **minimal reproducible examples**, because source code is easier to let other people understand what happens.\nFor crash problems on certain platforms, please bring stack dump content with the detail of the OS, compiler, etc.\n\nPlease try breakpoint debugging first, tell us what you found, see if we can start exploring based on more information been prepared.\n\n### Workflow\n\nIn general, we follow the \"fork-and-pull\" Git workflow.\n\n 1. **Fork** the repo on GitHub\n 2. **Clone** the project to your own machine\n 3. **Checkout** a new branch on your fork, start developing on the branch\n 4. **Test** the change before commit, Make sure the changes pass all the tests, including `unittest` and `preftest`, please add test case for each new feature or bug-fix if needed.\n 5. **Commit** changes to your own branch\n 6. **Push** your work back up to your fork\n 7. Submit a **Pull request** so that we can review your changes\n\nNOTE: Be sure to merge the latest from \"upstream\" before making a pull request!\n\n### Copyright and Licensing\n\nYou can copy and paste the license summary from below.\n\n```\nTencent is pleased to support the open source community by making RapidJSON available.\n\nCopyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n\nLicensed under the MIT License (the \"License\"); you may not use this file except\nin compliance with the License. You may obtain a copy of the License at\n\nhttp://opensource.org/licenses/MIT\n\nUnless required by applicable law or agreed to in writing, software distributed \nunder the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \nCONDITIONS OF ANY KIND, either express or implied. See the License for the \nspecific language governing permissions and limitations under the License.\n```\n"
  },
  {
    "path": "C/thirdparty/rapidjson/readme.zh-cn.md",
    "content": "![RapidJSON logo](doc/logo/rapidjson.png)\n\n![Release version](https://img.shields.io/badge/release-v1.1.0-blue.svg)\n\n## 高效的 C++ JSON 解析／生成器，提供 SAX 及 DOM 风格 API\n\nTencent is pleased to support the open source community by making RapidJSON available.\n\nCopyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n\n* [RapidJSON GitHub](https://github.com/Tencent/rapidjson/)\n* RapidJSON 文档\n  * [English](http://rapidjson.org/)\n  * [简体中文](http://rapidjson.org/zh-cn/)\n  * [GitBook](https://www.gitbook.com/book/miloyip/rapidjson/details/zh-cn) 可下载 PDF/EPUB/MOBI，但不含 API 参考手册。\n\n## Build 状态\n\n| [Linux][lin-link] | [Windows][win-link] | [Coveralls][cov-link] |\n| :---------------: | :-----------------: | :-------------------: |\n| ![lin-badge]      | ![win-badge]        | ![cov-badge]          |\n\n[lin-badge]: https://travis-ci.org/Tencent/rapidjson.svg?branch=master \"Travis build status\"\n[lin-link]:  https://travis-ci.org/Tencent/rapidjson \"Travis build status\"\n[win-badge]: https://ci.appveyor.com/api/projects/status/l6qulgqahcayidrf/branch/master?svg=true \"AppVeyor build status\"\n[win-link]:  https://ci.appveyor.com/project/miloyip/rapidjson-0fdqj/branch/master \"AppVeyor build status\"\n[cov-badge]: https://coveralls.io/repos/Tencent/rapidjson/badge.svg?branch=master \"Coveralls coverage\"\n[cov-link]:  https://coveralls.io/r/Tencent/rapidjson?branch=master \"Coveralls coverage\"\n\n## 简介\n\nRapidJSON 是一个 C++ 的 JSON 解析器及生成器。它的灵感来自 [RapidXml](http://rapidxml.sourceforge.net/)。\n\n* RapidJSON 小而全。它同时支持 SAX 和 DOM 风格的 API。SAX 解析器只有约 500 行代码。\n\n* RapidJSON 快。它的性能可与 `strlen()` 相比。可支持 SSE2/SSE4.2 加速。\n\n* RapidJSON 独立。它不依赖于 BOOST 等外部库。它甚至不依赖于 STL。\n\n* RapidJSON 对内存友好。在大部分 32/64 位机器上，每个 JSON 值只占 16 字节（除字符串外）。它预设使用一个快速的内存分配器，令分析器可以紧凑地分配内存。\n\n* RapidJSON 对 Unicode 友好。它支持 UTF-8、UTF-16、UTF-32 (大端序／小端序)，并内部支持这些编码的检测、校验及转码。例如，RapidJSON 可以在分析一个 UTF-8 文件至 DOM 时，把当中的 JSON 字符串转码至 UTF-16。它也支持代理对（surrogate pair）及 `\"\\u0000\"`（空字符）。\n\n在 
[这里](doc/features.zh-cn.md) 可读取更多特点。\n\nJSON（JavaScript Object Notation）是一个轻量的数据交换格式。RapidJSON 应该完全遵从 RFC7159/ECMA-404，并支持可选的放宽语法。 关于 JSON 的更多信息可参考：\n* [Introducing JSON](http://json.org/)\n* [RFC7159: The JavaScript Object Notation (JSON) Data Interchange Format](https://tools.ietf.org/html/rfc7159)\n* [Standard ECMA-404: The JSON Data Interchange Format](https://www.ecma-international.org/publications/standards/Ecma-404.htm)\n\n## v1.1 中的亮点 (2016-8-25)\n\n* 加入 [JSON Pointer](doc/pointer.zh-cn.md) 功能，可更简单地访问及更改 DOM。\n* 加入 [JSON Schema](doc/schema.zh-cn.md) 功能，可在解析或生成 JSON 时进行校验。\n* 加入 [放宽的 JSON 语法](doc/dom.zh-cn.md) （注释、尾随逗号、NaN/Infinity）\n* 使用 [C++11 范围 for 循环](doc/tutorial.zh-cn.md) 去遍历 array 和 object。\n* 在 x86-64 架构下，缩减每个 `Value` 的内存开销从 24 字节至 16 字节。\n\n其他改动请参考 [change log](CHANGELOG.md).\n\n## 兼容性\n\nRapidJSON 是跨平台的。以下是一些曾测试的平台／编译器组合：\n* Visual C++ 2008/2010/2013 在 Windows (32/64-bit)\n* GNU C++ 3.8.x 在 Cygwin\n* Clang 3.4 在 Mac OS X (32/64-bit) 及 iOS\n* Clang 3.4 在 Android NDK\n\n用户也可以在他们的平台上生成及执行单元测试。\n\n## 安装\n\nRapidJSON 是只有头文件的 C++ 库。只需把 `include/rapidjson` 目录复制至系统或项目的 include 目录中。\n\nRapidJSON 依赖于以下软件：\n* [CMake](https://cmake.org/) 作为通用生成工具\n* (optional) [Doxygen](http://www.doxygen.org) 用于生成文档\n* (optional) [googletest](https://github.com/google/googletest) 用于单元及性能测试\n\n生成测试及例子的步骤：\n\n1. 执行 `git submodule update --init` 去获取 thirdparty submodules (google test)。\n2. 在 rapidjson 目录下，建立一个 `build` 目录。\n3. 在 `build` 目录下执行 `cmake ..` 命令以设置生成。Windows 用户可使用 cmake-gui 应用程序。\n4. 
在 Windows 下，编译生成在 build 目录中的 solution。在 Linux 下，于 build 目录运行 `make`。\n\n成功生成后，你会在 `bin` 的目录下找到编译后的测试及例子可执行文件。而生成的文档将位于 build 下的 `doc/html` 目录。要执行测试，请在 build 下执行 `make test` 或 `ctest`。使用 `ctest -V` 命令可获取详细的输出。\n\n我们也可以把程序库安装至全系统中，只要在具管理权限下从 build 目录执行 `make install` 命令。这样会按系统的偏好设置安装所有文件。当安装 RapidJSON 后，其他的 CMake 项目需要使用它时，可以通过在 `CMakeLists.txt` 加入一句 `find_package(RapidJSON)`。\n\n## 用法一览\n\n此简单例子解析一个 JSON 字符串至一个 document (DOM)，对 DOM 作出简单修改，最终把 DOM 转换（stringify）至 JSON 字符串。\n\n~~~~~~~~~~cpp\n// rapidjson/example/simpledom/simpledom.cpp`\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include <iostream>\n\nusing namespace rapidjson;\n\nint main() {\n    // 1. 把 JSON 解析至 DOM。\n    const char* json = \"{\\\"project\\\":\\\"rapidjson\\\",\\\"stars\\\":10}\";\n    Document d;\n    d.Parse(json);\n\n    // 2. 利用 DOM 作出修改。\n    Value& s = d[\"stars\"];\n    s.SetInt(s.GetInt() + 1);\n\n    // 3. 把 DOM 转换（stringify）成 JSON。\n    StringBuffer buffer;\n    Writer<StringBuffer> writer(buffer);\n    d.Accept(writer);\n\n    // Output {\"project\":\"rapidjson\",\"stars\":11}\n    std::cout << buffer.GetString() << std::endl;\n    return 0;\n}\n~~~~~~~~~~\n\n注意此例子并没有处理潜在错误。\n\n下图展示执行过程。\n\n![simpledom](doc/diagram/simpledom.png)\n\n还有许多 [例子](https://github.com/Tencent/rapidjson/tree/master/example) 可供参考：\n\n* DOM API\n  * [tutorial](https://github.com/Tencent/rapidjson/blob/master/example/tutorial/tutorial.cpp): DOM API 的基本使用方法。\n\n* SAX API\n  * [simplereader](https://github.com/Tencent/rapidjson/blob/master/example/simplereader/simplereader.cpp): 使用 `Reader` 解析 JSON 时，打印所有 SAX 事件。\n  * [condense](https://github.com/Tencent/rapidjson/blob/master/example/condense/condense.cpp): 移除 JSON 中所有空白符的命令行工具。\n  * [pretty](https://github.com/Tencent/rapidjson/blob/master/example/pretty/pretty.cpp): 为 JSON 加入缩进与换行的命令行工具，当中使用了 `PrettyWriter`。\n  * 
[capitalize](https://github.com/Tencent/rapidjson/blob/master/example/capitalize/capitalize.cpp): 把 JSON 中所有字符串改为大写的命令行工具。\n  * [messagereader](https://github.com/Tencent/rapidjson/blob/master/example/messagereader/messagereader.cpp): 使用 SAX API 去解析一个 JSON 报文。\n  * [serialize](https://github.com/Tencent/rapidjson/blob/master/example/serialize/serialize.cpp): 使用 SAX API 去序列化 C++ 对象，生成 JSON。\n  * [jsonx](https://github.com/Tencent/rapidjson/blob/master/example/jsonx/jsonx.cpp): 实现了一个 `JsonxWriter`，它能把 SAX 事件写成 [JSONx](https://www-01.ibm.com/support/knowledgecenter/SS9H2Y_7.1.0/com.ibm.dp.doc/json_jsonx.html)（一种 XML）格式。这个例子是把 JSON 输入转换成 JSONx 格式的命令行工具。\n\n* Schema API\n  * [schemavalidator](https://github.com/Tencent/rapidjson/blob/master/example/schemavalidator/schemavalidator.cpp): 使用 JSON Schema 去校验 JSON 的命令行工具。\n\n* 进阶\n  * [prettyauto](https://github.com/Tencent/rapidjson/blob/master/example/prettyauto/prettyauto.cpp): [pretty](https://github.com/Tencent/rapidjson/blob/master/example/pretty/pretty.cpp) 的修改版本，可自动处理任何 UTF 编码的 JSON。\n  * [parsebyparts](https://github.com/Tencent/rapidjson/blob/master/example/parsebyparts/parsebyparts.cpp): 这例子中的 `AsyncDocumentParser` 类使用 C++ 线程来逐段解析 JSON。\n  * [filterkey](https://github.com/Tencent/rapidjson/blob/master/example/filterkey/filterkey.cpp): 移除使用者指定的键值的命令行工具。\n  * [filterkeydom](https://github.com/Tencent/rapidjson/blob/master/example/filterkeydom/filterkeydom.cpp): 如上的工具，但展示如何使用生成器（generator）去填充一个 `Document`。\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/CMakeLists.txt",
    "content": "find_package(GTestSrc)\n\nIF(GTESTSRC_FOUND)\n    enable_testing()\n\n    if (WIN32 AND (NOT CYGWIN) AND (NOT MINGW))\n        set(gtest_disable_pthreads ON)\n        set(gtest_force_shared_crt ON)\n    endif()\n\n    add_subdirectory(${GTEST_SOURCE_DIR} ${CMAKE_BINARY_DIR}/googletest)\n    include_directories(SYSTEM ${GTEST_INCLUDE_DIR})\n\n    set(TEST_LIBRARIES gtest gtest_main)\n\n    add_custom_target(tests ALL)\n    add_subdirectory(perftest)\n    add_subdirectory(unittest)\n\nENDIF(GTESTSRC_FOUND)\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/perftest/CMakeLists.txt",
    "content": "set(PERFTEST_SOURCES\n    misctest.cpp\n    perftest.cpp\n    platformtest.cpp\n    rapidjsontest.cpp\n    schematest.cpp)\n\nadd_executable(perftest ${PERFTEST_SOURCES})\ntarget_link_libraries(perftest ${TEST_LIBRARIES})\n\nadd_dependencies(tests perftest)\n\nfind_program(CCACHE_FOUND ccache)\nif(CCACHE_FOUND)\n    set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE ccache)\n    set_property(GLOBAL PROPERTY RULE_LAUNCH_LINK ccache)\n    if (CMAKE_CXX_COMPILER_ID MATCHES \"Clang\")\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -Qunused-arguments -fcolor-diagnostics\")\n    endif()\nendif(CCACHE_FOUND)\n\nset_property(DIRECTORY PROPERTY COMPILE_OPTIONS ${EXTRA_CXX_FLAGS})\n\nIF(NOT (CMAKE_BUILD_TYPE STREQUAL \"Debug\"))\nadd_test(NAME perftest\n    COMMAND ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/perftest\n    WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}/bin)\nENDIF()\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/perftest/misctest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"perftest.h\"\n\n#if TEST_MISC\n\n#define __STDC_FORMAT_MACROS\n#include \"rapidjson/stringbuffer.h\"\n\n#define protected public\n#include \"rapidjson/writer.h\"\n#undef private\n\nclass Misc : public PerfTest {\n};\n\n// Copyright (c) 2008-2010 Bjoern Hoehrmann <bjoern@hoehrmann.de>\n// See http://bjoern.hoehrmann.de/utf-8/decoder/dfa/ for details.\n\n#define UTF8_ACCEPT 0\n#define UTF8_REJECT 12\n\nstatic const unsigned char utf8d[] = {\n    // The first part of the table maps bytes to character classes that\n    // to reduce the size of the transition table and create bitmasks.\n    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,  0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,\n    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,  0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,\n    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,  0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,\n    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,  0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,\n    1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,  9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,\n    7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,  7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,\n    8,8,2,2,2,2,2,2,2,2,2,2,2,2,2,2,  2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,\n    10,3,3,3,3,3,3,3,3,3,3,3,3,4,3,3, 11,6,6,6,5,8,8,8,8,8,8,8,8,8,8,8,\n\n    // The second part is a transition table that maps a combination\n    // of a state of 
the automaton and a character class to a state.\n    0,12,24,36,60,96,84,12,12,12,48,72, 12,12,12,12,12,12,12,12,12,12,12,12,\n    12, 0,12,12,12,12,12, 0,12, 0,12,12, 12,24,12,12,12,12,12,24,12,24,12,12,\n    12,12,12,12,12,12,12,24,12,12,12,12, 12,24,12,12,12,12,12,12,12,24,12,12,\n    12,12,12,12,12,12,12,36,12,36,12,12, 12,36,12,12,12,12,12,36,12,36,12,12,\n    12,36,12,12,12,12,12,12,12,12,12,12, \n};\n\nstatic unsigned inline decode(unsigned* state, unsigned* codep, unsigned byte) {\n    unsigned type = utf8d[byte];\n\n    *codep = (*state != UTF8_ACCEPT) ?\n        (byte & 0x3fu) | (*codep << 6) :\n    (0xff >> type) & (byte);\n\n    *state = utf8d[256 + *state + type];\n    return *state;\n}\n\nstatic bool IsUTF8(unsigned char* s) {\n    unsigned codepoint, state = 0;\n\n    while (*s)\n        decode(&state, &codepoint, *s++);\n\n    return state == UTF8_ACCEPT;\n}\n\nTEST_F(Misc, Hoehrmann_IsUTF8) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        EXPECT_TRUE(IsUTF8((unsigned char*)json_));\n    }\n}\n\n////////////////////////////////////////////////////////////////////////////////\n// CountDecimalDigit: Count number of decimal places\n\ninline unsigned CountDecimalDigit_naive(unsigned n) {\n    unsigned count = 1;\n    while (n >= 10) {\n        n /= 10;\n        count++;\n    }\n    return count;\n}\n\ninline unsigned CountDecimalDigit_enroll4(unsigned n) {\n    unsigned count = 1;\n    while (n >= 10000) {\n        n /= 10000u;\n        count += 4;\n    }\n    if (n < 10) return count;\n    if (n < 100) return count + 1;\n    if (n < 1000) return count + 2;\n    return count + 3;\n}\n\ninline unsigned CountDecimalDigit64_enroll4(uint64_t n) {\n    unsigned count = 1;\n    while (n >= 10000) {\n        n /= 10000u;\n        count += 4;\n    }\n    if (n < 10) return count;\n    if (n < 100) return count + 1;\n    if (n < 1000) return count + 2;\n    return count + 3;\n}\n\ninline unsigned CountDecimalDigit_fast(unsigned n) {\n    static const 
uint32_t powers_of_10[] = {\n        0,\n        10,\n        100,\n        1000,\n        10000,\n        100000,\n        1000000,\n        10000000,\n        100000000,\n        1000000000\n    };\n\n#if defined(_M_IX86) || defined(_M_X64)\n    unsigned long i = 0;\n    _BitScanReverse(&i, n | 1);\n    uint32_t t = (i + 1) * 1233 >> 12;\n#elif defined(__GNUC__)\n    uint32_t t = (32 - __builtin_clz(n | 1)) * 1233 >> 12;\n#else\n#error\n#endif\n    return t - (n < powers_of_10[t]) + 1;\n}\n\ninline unsigned CountDecimalDigit64_fast(uint64_t n) {\n    static const uint64_t powers_of_10[] = {\n        0,\n        10,\n        100,\n        1000,\n        10000,\n        100000,\n        1000000,\n        10000000,\n        100000000,\n        1000000000,\n        10000000000,\n        100000000000,\n        1000000000000,\n        10000000000000,\n        100000000000000,\n        1000000000000000,\n        10000000000000000,\n        100000000000000000,\n        1000000000000000000,\n        10000000000000000000U\n    };\n\n#if defined(_M_IX86)\n    uint64_t m = n | 1;\n    unsigned long i = 0;\n    if (_BitScanReverse(&i, m >> 32))\n        i += 32;\n    else\n        _BitScanReverse(&i, m & 0xFFFFFFFF);\n    uint32_t t = (i + 1) * 1233 >> 12;\n#elif defined(_M_X64)\n    unsigned long i = 0;\n    _BitScanReverse64(&i, n | 1);\n    uint32_t t = (i + 1) * 1233 >> 12;\n#elif defined(__GNUC__)\n    uint32_t t = (64 - __builtin_clzll(n | 1)) * 1233 >> 12;\n#else\n#error\n#endif\n\n    return t - (n < powers_of_10[t]) + 1;\n}\n\n#if 0\n// Exhaustive, very slow\nTEST_F(Misc, CountDecimalDigit_Verify) {\n    unsigned i = 0;\n    do {\n        if (i % (65536 * 256) == 0)\n            printf(\"%u\\n\", i);\n        ASSERT_EQ(CountDecimalDigit_enroll4(i), CountDecimalDigit_fast(i));\n        i++;\n    } while (i != 0);\n}\n\nstatic const unsigned kDigits10Trial = 1000000000u;\nTEST_F(Misc, CountDecimalDigit_naive) {\n    unsigned sum = 0;\n    for (unsigned i = 0; i < 
kDigits10Trial; i++)\n        sum += CountDecimalDigit_naive(i);\n    printf(\"%u\\n\", sum);\n}\n\nTEST_F(Misc, CountDecimalDigit_enroll4) {\n    unsigned sum = 0;\n    for (unsigned i = 0; i < kDigits10Trial; i++)\n        sum += CountDecimalDigit_enroll4(i);\n    printf(\"%u\\n\", sum);\n}\n\nTEST_F(Misc, CountDecimalDigit_fast) {\n    unsigned sum = 0;\n    for (unsigned i = 0; i < kDigits10Trial; i++)\n        sum += CountDecimalDigit_fast(i);\n    printf(\"%u\\n\", sum);\n}\n#endif\n\nTEST_F(Misc, CountDecimalDigit64_VerifyFast) {\n    uint64_t i = 1, j;\n    do {\n        //printf(\"%\" PRIu64 \"\\n\", i);\n        ASSERT_EQ(CountDecimalDigit64_enroll4(i), CountDecimalDigit64_fast(i));\n        j = i;\n        i *= 3;\n    } while (j < i);\n}\n\n////////////////////////////////////////////////////////////////////////////////\n// integer-to-string conversion\n\n// https://gist.github.com/anonymous/7179097\nstatic const int randval[] ={\n     936116,  369532,  453755,  -72860,  209713,  268347,  435278, -360266, -416287, -182064,\n    -644712,  944969,  640463, -366588,  471577,  -69401, -744294, -505829,  923883,  831785,\n    -601136, -636767, -437054,  591718,  100758,  231907, -719038,  973540, -605220,  506659,\n    -871653,  462533,  764843, -919138,  404305, -630931, -288711, -751454, -173726, -718208,\n     432689, -281157,  360737,  659827,   19174, -376450,  769984, -858198,  439127,  734703,\n    -683426,       7,  386135,  186997, -643900, -744422, -604708, -629545,   42313, -933592,\n    -635566,  182308,  439024, -367219,  -73924, -516649,  421935, -470515,  413507,  -78952,\n    -427917, -561158,  737176,   94538,  572322,  405217,  709266, -357278, -908099, -425447,\n     601119,  750712, -862285, -177869,  900102,  384877,  157859, -641680,  503738, -702558,\n     278225,  463290,  268378, -212840,  580090,  347346, -473985, -950968, -114547, -839893,\n    -738032, -789424,  409540,  493495,  432099,  119755,  905004, -174834,  338266,  
234298,\n      74641, -965136, -754593,  685273,  466924,  920560,  385062,  796402,  -67229,  994864,\n     376974,  299869, -647540, -128724,  469890, -163167, -547803, -743363,  486463, -621028,\n     612288,   27459, -514224,  126342,  -66612,  803409, -777155, -336453, -284002,  472451,\n     342390, -163630,  908356, -456147, -825607,  268092, -974715,  287227,  227890, -524101,\n     616370, -782456,  922098, -624001, -813690,  171605, -192962,  796151,  707183,  -95696,\n     -23163, -721260,  508892,  430715,  791331,  482048, -996102,  863274,  275406,   -8279,\n    -556239, -902076,  268647, -818565,  260069, -798232, -172924, -566311, -806503, -885992,\n     813969,  -78468,  956632,  304288,  494867, -508784,  381751,  151264,  762953,   76352,\n     594902,  375424,  271700, -743062,  390176,  924237,  772574,  676610,  435752, -153847,\n       3959, -971937, -294181, -538049, -344620, -170136,   19120, -703157,  868152, -657961,\n    -818631,  219015, -872729, -940001, -956570,  880727, -345910,  942913, -942271, -788115,\n     225294,  701108, -517736, -416071,  281940,  488730,  942698,  711494,  838382, -892302,\n    -533028,  103052,  528823,  901515,  949577,  159364,  718227, -241814, -733661, -462928,\n    -495829,  165170,  513580, -629188, -509571, -459083,  198437,   77198, -644612,  811276,\n    -422298, -860842,  -52584,  920369,  686424, -530667, -243476,   49763,  345866, -411960,\n    -114863,  470810, -302860,  683007, -509080,       2, -174981, -772163,  -48697,  447770,\n    -268246,  213268,  269215,   78810, -236340, -639140, -864323,  505113, -986569, -325215,\n     541859,  163070, -819998, -645161, -583336,  573414,  696417, -132375,       3, -294501,\n     320435,  682591,  840008,  351740,  426951,  609354,  898154, -943254,  227321, -859793,\n    -727993,   44137, -497965, -782239,   14955, -746080, -243366,    9837, -233083,  606507,\n    -995864, -615287, -994307,  602715,  770771, -315040,  610860,  446102, -307120,  
710728,\n    -590392, -230474, -762625, -637525,  134963, -202700, -766902, -985541,  218163,  682009,\n     926051,  525156,  -61195,  403211, -810098,  245539, -431733,  179998, -806533,  745943,\n     447597,  131973, -187130,  826019,  286107, -937230, -577419,   20254,  681802, -340500,\n     323080,  266283, -667617,  309656,  416386,  611863,  759991, -534257,  523112, -634892,\n    -169913, -204905, -909867, -882185, -944908,  741811, -717675,  967007, -317396,  407230,\n    -412805,  792905,  994873,  744793, -456797,  713493,  355232,  116900, -945199,  880539,\n     342505, -580824, -262273,  982968, -349497, -735488,  311767, -455191,  570918,  389734,\n    -958386,   10262,  -99267,  155481,  304210,  204724,  704367, -144893, -233664, -671441,\n     896849,  408613,  762236,  322697,  981321,  688476,   13663, -970704, -379507,  896412,\n     977084,  348869,  875948,  341348,  318710,  512081,    6163,  669044,  833295,  811883,\n     708756, -802534, -536057,  608413, -389625, -694603,  541106, -110037,  720322, -540581,\n     645420,   32980,   62442,  510157, -981870,  -87093, -325960, -500494, -718291,  -67889,\n     991501,  374804,  769026, -978869,  294747,  714623,  413327, -199164,  671368,  804789,\n    -362507,  798196, -170790, -568895, -869379,   62020, -316693, -837793,  644994,  -39341,\n    -417504, -243068, -957756,   99072,  622234, -739992,  225668,    8863, -505910,   82483,\n    -559244,  241572,    1315,  -36175,  -54990,  376813,     -11,  162647, -688204, -486163,\n     -54934, -197470,  744223, -762707,  732540,  996618,  351561, -445933, -898491,  486531,\n     456151,   15276,  290186, -817110,  -52995,  313046, -452533,  -96267,   94470, -500176,\n    -818026, -398071, -810548, -143325, -819741,    1338, -897676, -101577, -855445,   37309,\n     285742,  953804, -777927, -926962, -811217, -936744, -952245, -802300, -490188, -964953,\n    -552279,  329142, -570048, -505756,  682898, -381089,  -14352,  175138,  152390, 
-582268,\n    -485137,  717035,  805329,  239572, -730409,  209643, -184403, -385864,  675086,  819648,\n     629058, -527109, -488666, -171981,  532788,  552441,  174666,  984921,  766514,  758787,\n     716309,  338801, -978004, -412163,  876079, -734212,  789557, -160491, -522719,   56644,\n       -991, -286038,  -53983,  663740,  809812,  919889, -717502, -137704,  220511,  184396,\n    -825740, -588447,  430870,  124309,  135956,  558662, -307087, -788055, -451328,  812260,\n     931601,  324347, -482989, -117858, -278861,  189068, -172774,  929057,  293787,  198161,\n    -342386,  -47173,  906555, -759955,  -12779,  777604,  -97869,  899320,  927486,  -25284,\n    -848550,  259450, -485856,  -17820,      88,  171400,  235492, -326783, -340793,  886886,\n     112428, -246280,    5979,  648444, -114982,  991013,  -56489,   -9497,  419706,  632820,\n    -341664,  393926, -848977,  -22538,  257307,  773731, -905319,  491153,  734883, -868212,\n    -951053,  644458, -580758,  764735,  584316,  297077,   28852, -397710, -953669,  201772,\n     879050, -198237, -588468,  448102, -116837,  770007, -231812,  642906, -582166, -885828,\n          9,  305082, -996577,  303559,   75008, -772956, -447960,  599825, -295552,  870739,\n    -386278, -950300,  485359, -457081,  629461, -850276,  550496, -451755, -620841,  -11766,\n    -950137,  832337,   28711, -273398, -507197,   91921, -271360, -705991, -753220, -388968,\n     967945,  340434, -320883, -662793, -554617, -574568,  477946,   -6148, -129519,  689217,\n     920020, -656315, -974523, -212525,   80921, -612532,  645096,  545655,  655713, -591631,\n    -307385, -816688, -618823, -113713,  526430,  673063,  735916, -809095, -850417,  639004,\n     432281, -388185,  270708,  860146,  -39902, -786157, -258180, -246169, -966720, -264957,\n     548072, -306010,  -57367, -635665,  933824,   70553, -989936, -488741,   72411, -452509,\n     529831,  956277,  449019, -577850, -360986, -803418,   48833,  296073,  203430,  
609591,\n     715483,  470964,  658106, -718254,  -96424,  790163,  334739,  181070, -373578,       5,\n    -435088,  329841,  330939, -256602,  394355,  912412,  231910,  927278, -661933,  788539,\n    -769664, -893274,  -96856,  298205,  901043, -608122, -527430,  183618, -553963,  -35246,\n    -393924,  948832, -483198,  594501,   35460, -407007,   93494, -336881, -634072,  984205,\n    -812161,  944664,  -31062,  753872,  823933,  -69566,   50445,  290147,   85134,   34706,\n     551902,  405202, -991246,  -84642,  154341,  316432, -695101, -651588,   -5030,  137564,\n    -294665,  332541,  528307,  -90572, -344923,  523766, -758498, -968047,  339028,  494578,\n     593129, -725773,   31834, -718406, -208638,  159665,   -2043,  673344, -442767,   75816,\n     755442,  769257, -158730, -410272,  691688,  589550, -878398, -184121,  460679,  346312,\n     294163, -544602,  653308,  254167, -276979,   52073, -892684,  887653,  -41222,  983065,\n     -68258, -408799,  -99069, -674069, -863635,  -32890,  622757, -743862,   40872,   -4837,\n    -967228,  522370, -903951, -818669,  524459,  514702,  925801,   20007, -299229,  579348,\n     626021,  430089,  348139, -562692, -607728, -130606, -928451, -424793, -458647, -448892,\n    -312230,  143337,  109746,  880042, -339658, -785614,  938995,  540916,  118429,  661351,\n    -402967,  404729,  -40918, -976535,  743230,  713110,  440182, -381314, -499252,   74613,\n     193652,  912717,  491323,  583633,  324691,  459397,  281253,  195540,   -2764, -888651,\n     892449,  132663, -478373, -430002, -314551,  527826,  247165,  557966,  554778,  481531,\n    -946634,  431685, -769059, -348371,  174046,  184597, -354867,  584422,  227390, -850397,\n    -542924, -849093, -737769,  325359,  736314,  269101,  767940,  674809,   81413, -447458,\n     445076,  189072,  906218,  502688, -718476, -863827, -731381,  100660,  623249,  710008,\n     572060,  922203,  685740,   55096,  263394, -243695, -353910, -516788,  388471,  
455165,\n     844103, -643772,  363976,  268875, -899450,  104470,  104029, -238874, -274659,  732969,\n    -676443,  953291, -916289, -861849, -242344,  958083, -479593, -970395,  799831,  277841,\n    -243236, -283462, -201510,  166263, -259105, -575706,  878926,  891064,  895297,  655262,\n     -34807, -809833,  -89281,  342585,  554920,       1,  902141, -333425,  139703,  852318,\n    -618438,  329498, -932596, -692836, -513372,  733656, -523411,   85779,  500478, -682697,\n    -502836,  138776,  156341, -420037, -557964, -556378,  710993,  -50383, -877159,  916334,\n     132996,  583516, -603392, -111615,  -12288, -780214,  476780,  123327,  137607,  519956,\n     745837,   17358, -158581,  -53490\n};\nstatic const size_t randvalCount = sizeof(randval) / sizeof(randval[0]);\nstatic const size_t kItoaTrialCount = 10000;\n\nstatic const char digits[201] =\n\"0001020304050607080910111213141516171819\"\n\"2021222324252627282930313233343536373839\"\n\"4041424344454647484950515253545556575859\"\n\"6061626364656667686970717273747576777879\"\n\"8081828384858687888990919293949596979899\";\n\n// Prevent code being optimized out\n//#define OUTPUT_LENGTH(length) printf(\"\", length)\n#define OUTPUT_LENGTH(length) printf(\"%u\\n\", (unsigned)length)\n\ntemplate<typename OutputStream>\nclass Writer1 {\npublic:\n    Writer1() : os_() {}\n    Writer1(OutputStream& os) : os_(&os) {}\n\n    void Reset(OutputStream& os) {\n        os_ = &os;\n    }\n\n    bool WriteInt(int i) {\n        if (i < 0) {\n            os_->Put('-');\n            i = -i;\n        }\n        return WriteUint((unsigned)i);\n    }\n\n    bool WriteUint(unsigned u) {\n        char buffer[10];\n        char *p = buffer;\n        do {\n            *p++ = char(u % 10) + '0';\n            u /= 10;\n        } while (u > 0);\n\n        do {\n            --p;\n            os_->Put(*p);\n        } while (p != buffer);\n        return true;\n    }\n\n    bool WriteInt64(int64_t i64) {\n        if (i64 < 0) {\n     
       os_->Put('-');\n            i64 = -i64;\n        }\n        WriteUint64((uint64_t)i64);\n        return true;\n    }\n\n    bool WriteUint64(uint64_t u64) {\n        char buffer[20];\n        char *p = buffer;\n        do {\n            *p++ = char(u64 % 10) + '0';\n            u64 /= 10;\n        } while (u64 > 0);\n\n        do {\n            --p;\n            os_->Put(*p);\n        } while (p != buffer);\n        return true;\n    }\n\nprivate:\n    OutputStream* os_;\n};\n\ntemplate<>\nbool Writer1<rapidjson::StringBuffer>::WriteUint(unsigned u) {\n    char buffer[10];\n    char* p = buffer;\n    do {\n        *p++ = char(u % 10) + '0';\n        u /= 10;\n    } while (u > 0);\n\n    char* d = os_->Push(p - buffer);\n    do {\n        --p;\n        *d++ = *p;\n    } while (p != buffer);\n    return true;\n}\n\n// Using digits LUT to reduce division/modulo\ntemplate<typename OutputStream>\nclass Writer2 {\npublic:\n    Writer2() : os_() {}\n    Writer2(OutputStream& os) : os_(&os) {}\n\n    void Reset(OutputStream& os) {\n        os_ = &os;\n    }\n\n    bool WriteInt(int i) {\n        if (i < 0) {\n            os_->Put('-');\n            i = -i;\n        }\n        return WriteUint((unsigned)i);\n    }\n\n    bool WriteUint(unsigned u) {\n        char buffer[10];\n        char* p = buffer;\n        while (u >= 100) {\n            const unsigned i = (u % 100) << 1;\n            u /= 100;\n            *p++ = digits[i + 1];\n            *p++ = digits[i];\n        }\n        if (u < 10)\n            *p++ = char(u) + '0';\n        else {\n            const unsigned i = u << 1;\n            *p++ = digits[i + 1];\n            *p++ = digits[i];\n        }\n\n        do {\n            --p;\n            os_->Put(*p);\n        } while (p != buffer);\n        return true;\n    }\n\n    bool WriteInt64(int64_t i64) {\n        if (i64 < 0) {\n            os_->Put('-');\n            i64 = -i64;\n        }\n        WriteUint64((uint64_t)i64);\n        return true;\n    
}\n\n    bool WriteUint64(uint64_t u64) {\n        char buffer[20];\n        char* p = buffer;\n        while (u64 >= 100) {\n            const unsigned i = static_cast<unsigned>(u64 % 100) << 1;\n            u64 /= 100;\n            *p++ = digits[i + 1];\n            *p++ = digits[i];\n        }\n        if (u64 < 10)\n            *p++ = char(u64) + '0';\n        else {\n            const unsigned i = static_cast<unsigned>(u64) << 1;\n            *p++ = digits[i + 1];\n            *p++ = digits[i];\n        }\n\n        do {\n            --p;\n            os_->Put(*p);\n        } while (p != buffer);\n        return true;\n    }\n\nprivate:\n    OutputStream* os_;\n};\n\n// First pass to count digits\ntemplate<typename OutputStream>\nclass Writer3 {\npublic:\n    Writer3() : os_() {}\n    Writer3(OutputStream& os) : os_(&os) {}\n\n    void Reset(OutputStream& os) {\n        os_ = &os;\n    }\n\n    bool WriteInt(int i) {\n        if (i < 0) {\n            os_->Put('-');\n            i = -i;\n        }\n        return WriteUint((unsigned)i);\n    }\n\n    bool WriteUint(unsigned u) {\n        char buffer[10];\n        char *p = buffer;\n        do {\n            *p++ = char(u % 10) + '0';\n            u /= 10;\n        } while (u > 0);\n\n        do {\n            --p;\n            os_->Put(*p);\n        } while (p != buffer);\n        return true;\n    }\n\n    bool WriteInt64(int64_t i64) {\n        if (i64 < 0) {\n            os_->Put('-');\n            i64 = -i64;\n        }\n        WriteUint64((uint64_t)i64);\n        return true;\n    }\n\n    bool WriteUint64(uint64_t u64) {\n        char buffer[20];\n        char *p = buffer;\n        do {\n            *p++ = char(u64 % 10) + '0';\n            u64 /= 10;\n        } while (u64 > 0);\n\n        do {\n            --p;\n            os_->Put(*p);\n        } while (p != buffer);\n        return true;\n    }\n\nprivate:\n    void WriteUintReverse(char* d, unsigned u) {\n        do {\n            *--d = char(u % 
10) + '0';\n            u /= 10;\n        } while (u > 0);\n    }\n\n    void WriteUint64Reverse(char* d, uint64_t u) {\n        do {\n            *--d = char(u % 10) + '0';\n            u /= 10;\n        } while (u > 0);\n    }\n\n    OutputStream* os_;\n};\n\ntemplate<>\ninline bool Writer3<rapidjson::StringBuffer>::WriteUint(unsigned u) {\n    unsigned digit = CountDecimalDigit_fast(u);\n    WriteUintReverse(os_->Push(digit) + digit, u);\n    return true;\n}\n\ntemplate<>\ninline bool Writer3<rapidjson::InsituStringStream>::WriteUint(unsigned u) {\n    unsigned digit = CountDecimalDigit_fast(u);\n    WriteUintReverse(os_->Push(digit) + digit, u);\n    return true;\n}\n\ntemplate<>\ninline bool Writer3<rapidjson::StringBuffer>::WriteUint64(uint64_t u) {\n    unsigned digit = CountDecimalDigit64_fast(u);\n    WriteUint64Reverse(os_->Push(digit) + digit, u);\n    return true;\n}\n\ntemplate<>\ninline bool Writer3<rapidjson::InsituStringStream>::WriteUint64(uint64_t u) {\n    unsigned digit = CountDecimalDigit64_fast(u);\n    WriteUint64Reverse(os_->Push(digit) + digit, u);\n    return true;\n}\n\n// Using digits LUT to reduce division/modulo, two passes\ntemplate<typename OutputStream>\nclass Writer4 {\npublic:\n    Writer4() : os_() {}\n    Writer4(OutputStream& os) : os_(&os) {}\n\n    void Reset(OutputStream& os) {\n        os_ = &os;\n    }\n\n    bool WriteInt(int i) {\n        if (i < 0) {\n            os_->Put('-');\n            i = -i;\n        }\n        return WriteUint((unsigned)i);\n    }\n\n    bool WriteUint(unsigned u) {\n        char buffer[10];\n        char* p = buffer;\n        while (u >= 100) {\n            const unsigned i = (u % 100) << 1;\n            u /= 100;\n            *p++ = digits[i + 1];\n            *p++ = digits[i];\n        }\n        if (u < 10)\n            *p++ = char(u) + '0';\n        else {\n            const unsigned i = u << 1;\n            *p++ = digits[i + 1];\n            *p++ = digits[i];\n        }\n\n        do {\n   
         --p;\n            os_->Put(*p);\n        } while (p != buffer);\n        return true;\n    }\n\n    bool WriteInt64(int64_t i64) {\n        if (i64 < 0) {\n            os_->Put('-');\n            i64 = -i64;\n        }\n        WriteUint64((uint64_t)i64);\n        return true;\n    }\n\n    bool WriteUint64(uint64_t u64) {\n        char buffer[20];\n        char* p = buffer;\n        while (u64 >= 100) {\n            const unsigned i = static_cast<unsigned>(u64 % 100) << 1;\n            u64 /= 100;\n            *p++ = digits[i + 1];\n            *p++ = digits[i];\n        }\n        if (u64 < 10)\n            *p++ = char(u64) + '0';\n        else {\n            const unsigned i = static_cast<unsigned>(u64) << 1;\n            *p++ = digits[i + 1];\n            *p++ = digits[i];\n        }\n\n        do {\n            --p;\n            os_->Put(*p);\n        } while (p != buffer);\n        return true;\n    }\n\nprivate:\n    void WriteUintReverse(char* d, unsigned u) {\n        while (u >= 100) {\n            const unsigned i = (u % 100) << 1;\n            u /= 100;\n            *--d = digits[i + 1];\n            *--d = digits[i];\n        }\n        if (u < 10) {\n            *--d = char(u) + '0';\n        }\n        else {\n            const unsigned i = u << 1;\n            *--d = digits[i + 1];\n            *--d = digits[i];\n        }\n    }\n\n    void WriteUint64Reverse(char* d, uint64_t u) {\n        while (u >= 100) {\n            const unsigned i = (u % 100) << 1;\n            u /= 100;\n            *--d = digits[i + 1];\n            *--d = digits[i];\n        }\n        if (u < 10) {\n            *--d = char(u) + '0';\n        }\n        else {\n            const unsigned i = u << 1;\n            *--d = digits[i + 1];\n            *--d = digits[i];\n        }\n    }\n\n    OutputStream* os_;\n};\n\ntemplate<>\ninline bool Writer4<rapidjson::StringBuffer>::WriteUint(unsigned u) {\n    unsigned digit = CountDecimalDigit_fast(u);\n    
WriteUintReverse(os_->Push(digit) + digit, u);\n    return true;\n}\n\ntemplate<>\ninline bool Writer4<rapidjson::InsituStringStream>::WriteUint(unsigned u) {\n    unsigned digit = CountDecimalDigit_fast(u);\n    WriteUintReverse(os_->Push(digit) + digit, u);\n    return true;\n}\n\ntemplate<>\ninline bool Writer4<rapidjson::StringBuffer>::WriteUint64(uint64_t u) {\n    unsigned digit = CountDecimalDigit64_fast(u);\n    WriteUint64Reverse(os_->Push(digit) + digit, u);\n    return true;\n}\n\ntemplate<>\ninline bool Writer4<rapidjson::InsituStringStream>::WriteUint64(uint64_t u) {\n    unsigned digit = CountDecimalDigit64_fast(u);\n    WriteUint64Reverse(os_->Push(digit) + digit, u);\n    return true;\n}\n\ntemplate <typename Writer>\nvoid itoa_Writer_StringBufferVerify() {\n    rapidjson::StringBuffer sb;\n    Writer writer(sb);\n    for (size_t j = 0; j < randvalCount; j++) {\n        char buffer[32];\n        sprintf(buffer, \"%d\", randval[j]);\n        writer.WriteInt(randval[j]);\n        ASSERT_STREQ(buffer, sb.GetString());\n        sb.Clear();\n    }\n}\n\ntemplate <typename Writer>\nvoid itoa_Writer_InsituStringStreamVerify() {\n    Writer writer;\n    for (size_t j = 0; j < randvalCount; j++) {\n        char buffer[32];\n        sprintf(buffer, \"%d\", randval[j]);\n        char buffer2[32];\n        rapidjson::InsituStringStream ss(buffer2);\n        writer.Reset(ss);\n        char* begin = ss.PutBegin();\n        writer.WriteInt(randval[j]);\n        ss.Put('\\0');\n        ss.PutEnd(begin);\n        ASSERT_STREQ(buffer, buffer2);\n    }\n}\n\ntemplate <typename Writer>\nvoid itoa_Writer_StringBuffer() {\n    size_t length = 0;\n\n    rapidjson::StringBuffer sb;\n    Writer writer(sb);\n\n    for (size_t i = 0; i < kItoaTrialCount; i++) {\n        for (size_t j = 0; j < randvalCount; j++) {\n            writer.WriteInt(randval[j]);\n            length += sb.GetSize();\n            sb.Clear();\n        }\n    }\n    OUTPUT_LENGTH(length);\n}\n\ntemplate 
<typename Writer>\nvoid itoa_Writer_InsituStringStream() {\n    size_t length = 0;\n\n    char buffer[32];\n    Writer writer;\n    for (size_t i = 0; i < kItoaTrialCount; i++) {\n        for (size_t j = 0; j < randvalCount; j++) {\n            rapidjson::InsituStringStream ss(buffer);\n            writer.Reset(ss);\n            char* begin = ss.PutBegin();\n            writer.WriteInt(randval[j]);\n            length += ss.PutEnd(begin);\n        }\n    }\n    OUTPUT_LENGTH(length);\n};\n\ntemplate <typename Writer>\nvoid itoa64_Writer_StringBufferVerify() {\n    rapidjson::StringBuffer sb;\n    Writer writer(sb);\n    for (size_t j = 0; j < randvalCount; j++) {\n        char buffer[32];\n        int64_t x = randval[j] * randval[j];\n        sprintf(buffer, \"%\" PRIi64, x);\n        writer.WriteInt64(x);\n        ASSERT_STREQ(buffer, sb.GetString());\n        sb.Clear();\n    }\n}\n\ntemplate <typename Writer>\nvoid itoa64_Writer_InsituStringStreamVerify() {\n    Writer writer;\n    for (size_t j = 0; j < randvalCount; j++) {\n        char buffer[32];\n        int64_t x = randval[j] * randval[j];\n        sprintf(buffer, \"%\" PRIi64, x);\n        char buffer2[32];\n        rapidjson::InsituStringStream ss(buffer2);\n        writer.Reset(ss);\n        char* begin = ss.PutBegin();\n        writer.WriteInt64(x);\n        ss.Put('\\0');\n        ss.PutEnd(begin);\n        ASSERT_STREQ(buffer, buffer2);\n    }\n}\n\ntemplate <typename Writer>\nvoid itoa64_Writer_StringBuffer() {\n    size_t length = 0;\n\n    rapidjson::StringBuffer sb;\n    Writer writer(sb);\n\n    for (size_t i = 0; i < kItoaTrialCount; i++) {\n        for (size_t j = 0; j < randvalCount; j++) {\n            writer.WriteInt64(randval[j] * randval[j]);\n            length += sb.GetSize();\n            sb.Clear();\n        }\n    }\n    OUTPUT_LENGTH(length);\n}\n\ntemplate <typename Writer>\nvoid itoa64_Writer_InsituStringStream() {\n    size_t length = 0;\n\n    char buffer[32];\n    Writer 
writer;\n    for (size_t i = 0; i < kItoaTrialCount; i++) {\n        for (size_t j = 0; j < randvalCount; j++) {\n            rapidjson::InsituStringStream ss(buffer);\n            writer.Reset(ss);\n            char* begin = ss.PutBegin();\n            writer.WriteInt64(randval[j] * randval[j]);\n            length += ss.PutEnd(begin);\n        }\n    }\n    OUTPUT_LENGTH(length);\n};\n\n// Full specialization for InsituStringStream to prevent memory copying \n// (normally we will not use InsituStringStream for writing, just for testing)\n\nnamespace rapidjson {\n\ntemplate<>\nbool rapidjson::Writer<InsituStringStream>::WriteInt(int i) {\n    char *buffer = os_->Push(11);\n    const char* end = internal::i32toa(i, buffer);\n    os_->Pop(11 - (end - buffer));\n    return true;\n}\n\ntemplate<>\nbool Writer<InsituStringStream>::WriteUint(unsigned u) {\n    char *buffer = os_->Push(10);\n    const char* end = internal::u32toa(u, buffer);\n    os_->Pop(10 - (end - buffer));\n    return true;\n}\n\ntemplate<>\nbool Writer<InsituStringStream>::WriteInt64(int64_t i64) {\n    char *buffer = os_->Push(21);\n    const char* end = internal::i64toa(i64, buffer);\n    os_->Pop(21 - (end - buffer));\n    return true;\n}\n\ntemplate<>\nbool Writer<InsituStringStream>::WriteUint64(uint64_t u) {\n    char *buffer = os_->Push(20);\n    const char* end = internal::u64toa(u, buffer);\n    os_->Pop(20 - (end - buffer));\n    return true;\n}\n\n} // namespace rapidjson\n\nTEST_F(Misc, itoa_Writer_StringBufferVerify) { itoa_Writer_StringBufferVerify<rapidjson::Writer<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa_Writer1_StringBufferVerify) { itoa_Writer_StringBufferVerify<Writer1<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa_Writer2_StringBufferVerify) { itoa_Writer_StringBufferVerify<Writer2<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa_Writer3_StringBufferVerify) { itoa_Writer_StringBufferVerify<Writer3<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, 
itoa_Writer4_StringBufferVerify) { itoa_Writer_StringBufferVerify<Writer4<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa_Writer_InsituStringStreamVerify) { itoa_Writer_InsituStringStreamVerify<rapidjson::Writer<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa_Writer1_InsituStringStreamVerify) { itoa_Writer_InsituStringStreamVerify<Writer1<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa_Writer2_InsituStringStreamVerify) { itoa_Writer_InsituStringStreamVerify<Writer2<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa_Writer3_InsituStringStreamVerify) { itoa_Writer_InsituStringStreamVerify<Writer3<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa_Writer4_InsituStringStreamVerify) { itoa_Writer_InsituStringStreamVerify<Writer4<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa_Writer_StringBuffer) { itoa_Writer_StringBuffer<rapidjson::Writer<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa_Writer1_StringBuffer) { itoa_Writer_StringBuffer<Writer1<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa_Writer2_StringBuffer) { itoa_Writer_StringBuffer<Writer2<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa_Writer3_StringBuffer) { itoa_Writer_StringBuffer<Writer3<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa_Writer4_StringBuffer) { itoa_Writer_StringBuffer<Writer4<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa_Writer_InsituStringStream) { itoa_Writer_InsituStringStream<rapidjson::Writer<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa_Writer1_InsituStringStream) { itoa_Writer_InsituStringStream<Writer1<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa_Writer2_InsituStringStream) { itoa_Writer_InsituStringStream<Writer2<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa_Writer3_InsituStringStream) { itoa_Writer_InsituStringStream<Writer3<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa_Writer4_InsituStringStream) { itoa_Writer_InsituStringStream<Writer4<rapidjson::InsituStringStream> >(); 
}\n\nTEST_F(Misc, itoa64_Writer_StringBufferVerify) { itoa64_Writer_StringBufferVerify<rapidjson::Writer<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa64_Writer1_StringBufferVerify) { itoa64_Writer_StringBufferVerify<Writer1<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa64_Writer2_StringBufferVerify) { itoa64_Writer_StringBufferVerify<Writer2<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa64_Writer3_StringBufferVerify) { itoa64_Writer_StringBufferVerify<Writer3<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa64_Writer4_StringBufferVerify) { itoa64_Writer_StringBufferVerify<Writer4<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa64_Writer_InsituStringStreamVerify) { itoa64_Writer_InsituStringStreamVerify<rapidjson::Writer<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa64_Writer1_InsituStringStreamVerify) { itoa64_Writer_InsituStringStreamVerify<Writer1<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa64_Writer2_InsituStringStreamVerify) { itoa64_Writer_InsituStringStreamVerify<Writer2<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa64_Writer3_InsituStringStreamVerify) { itoa64_Writer_InsituStringStreamVerify<Writer3<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa64_Writer4_InsituStringStreamVerify) { itoa64_Writer_InsituStringStreamVerify<Writer4<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa64_Writer_StringBuffer) { itoa64_Writer_StringBuffer<rapidjson::Writer<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa64_Writer1_StringBuffer) { itoa64_Writer_StringBuffer<Writer1<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa64_Writer2_StringBuffer) { itoa64_Writer_StringBuffer<Writer2<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa64_Writer3_StringBuffer) { itoa64_Writer_StringBuffer<Writer3<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa64_Writer4_StringBuffer) { itoa64_Writer_StringBuffer<Writer4<rapidjson::StringBuffer> >(); }\nTEST_F(Misc, itoa64_Writer_InsituStringStream) { 
itoa64_Writer_InsituStringStream<rapidjson::Writer<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa64_Writer1_InsituStringStream) { itoa64_Writer_InsituStringStream<Writer1<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa64_Writer2_InsituStringStream) { itoa64_Writer_InsituStringStream<Writer2<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa64_Writer3_InsituStringStream) { itoa64_Writer_InsituStringStream<Writer3<rapidjson::InsituStringStream> >(); }\nTEST_F(Misc, itoa64_Writer4_InsituStringStream) { itoa64_Writer_InsituStringStream<Writer4<rapidjson::InsituStringStream> >(); }\n\n#endif // TEST_MISC\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/perftest/perftest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"perftest.h\"\n\nint main(int argc, char **argv) {\n#if _MSC_VER\n    _CrtSetDbgFlag ( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );\n    //void *testWhetherMemoryLeakDetectionWorks = malloc(1);\n#endif\n    ::testing::InitGoogleTest(&argc, argv);\n    return RUN_ALL_TESTS();\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/perftest/perftest.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef PERFTEST_H_\n#define PERFTEST_H_\n\n#define TEST_RAPIDJSON  1\n#define TEST_PLATFORM   0\n#define TEST_MISC       0\n\n#define TEST_VERSION_CODE(x,y,z) \\\n  (((x)*100000) + ((y)*100) + (z))\n\n// __SSE2__ and __SSE4_2__ are recognized by gcc, clang, and the Intel compiler.\n// We use -march=native with gmake to enable -msse2 and -msse4.2, if supported.\n// Likewise, __ARM_NEON is used to detect Neon.\n#if defined(__SSE4_2__)\n#  define RAPIDJSON_SSE42\n#elif defined(__SSE2__)\n#  define RAPIDJSON_SSE2\n#elif defined(__ARM_NEON)\n#  define RAPIDJSON_NEON\n#endif\n\n#define RAPIDJSON_HAS_STDSTRING 1\n\n////////////////////////////////////////////////////////////////////////////////\n// Google Test\n\n#ifdef __cplusplus\n\n// gtest indirectly included inttypes.h, without __STDC_CONSTANT_MACROS.\n#ifndef __STDC_CONSTANT_MACROS\n#  define __STDC_CONSTANT_MACROS 1 // required by C++ standard\n#endif\n\n#if defined(__clang__) || defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 2))\n#if defined(__clang__) || (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6))\n#pragma GCC diagnostic push\n#endif\n#pragma GCC diagnostic ignored \"-Weffc++\"\n#endif\n\n#include \"gtest/gtest.h\"\n\n#if defined(__clang__) || 
defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6))\n#pragma GCC diagnostic pop\n#endif\n\n#ifdef _MSC_VER\n#define _CRTDBG_MAP_ALLOC\n#include <crtdbg.h>\n#pragma warning(disable : 4996) // 'function': was declared deprecated\n#endif\n\n//! Base class for all performance tests\nclass PerfTest : public ::testing::Test {\npublic:\n    PerfTest() : filename_(), json_(), length_(), whitespace_(), whitespace_length_() {}\n\n    virtual void SetUp() {\n        {\n            const char *paths[] = {\n                \"data/sample.json\",\n                \"bin/data/sample.json\",\n                \"../bin/data/sample.json\",\n                \"../../bin/data/sample.json\",\n                \"../../../bin/data/sample.json\"\n            };\n\n            FILE *fp = 0;\n            for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {\n                fp = fopen(filename_ = paths[i], \"rb\");\n                if (fp)\n                    break;\n            }\n            ASSERT_TRUE(fp != 0);\n\n            fseek(fp, 0, SEEK_END);\n            length_ = (size_t)ftell(fp);\n            fseek(fp, 0, SEEK_SET);\n            json_ = (char*)malloc(length_ + 1);\n            ASSERT_EQ(length_, fread(json_, 1, length_, fp));\n            json_[length_] = '\\0';\n            fclose(fp);\n        }\n\n        // whitespace test\n        {\n            whitespace_length_ = 1024 * 1024;\n            whitespace_ = (char *)malloc(whitespace_length_  + 4);\n            char *p = whitespace_;\n            for (size_t i = 0; i < whitespace_length_; i += 4) {\n                *p++ = ' ';\n                *p++ = '\\n';\n                *p++ = '\\r';\n                *p++ = '\\t';\n            }\n            *p++ = '[';\n            *p++ = '0';\n            *p++ = ']';\n            *p++ = '\\0';\n        }\n\n        // types test\n        {\n            const char *typespaths[] = {\n                \"data/types\",\n                \"bin/types\",\n        
        \"../bin/types\",\n                \"../../bin/types/\",\n                \"../../../bin/types\"\n            };\n\n            const char* typesfilenames[] = {\n                \"booleans.json\",\n                \"floats.json\",\n                \"guids.json\",\n                \"integers.json\",\n                \"mixed.json\",\n                \"nulls.json\",\n                \"paragraphs.json\",\n                \"alotofkeys.json\"\n            };\n\n            for (size_t j = 0; j < sizeof(typesfilenames) / sizeof(typesfilenames[0]); j++) {\n                types_[j] = 0;\n                for (size_t i = 0; i < sizeof(typespaths) / sizeof(typespaths[0]); i++) {\n                    char filename[256];\n                    sprintf(filename, \"%s/%s\", typespaths[i], typesfilenames[j]);\n                    if (FILE* fp = fopen(filename, \"rb\")) {\n                        fseek(fp, 0, SEEK_END);\n                        typesLength_[j] = (size_t)ftell(fp);\n                        fseek(fp, 0, SEEK_SET);\n                        types_[j] = (char*)malloc(typesLength_[j] + 1);\n                        ASSERT_EQ(typesLength_[j], fread(types_[j], 1, typesLength_[j], fp));\n                        types_[j][typesLength_[j]] = '\\0';\n                        fclose(fp);\n                        break;\n                    }\n                }\n            }\n        }\n    }\n\n    virtual void TearDown() {\n        free(json_);\n        free(whitespace_);\n        json_ = 0;\n        whitespace_ = 0;\n        for (size_t i = 0; i < 8; i++) {\n            free(types_[i]);\n            types_[i] = 0;\n        }\n    }\n\nprivate:\n    PerfTest(const PerfTest&);\n    PerfTest& operator=(const PerfTest&);\n\nprotected:\n    const char* filename_;\n    char *json_;\n    size_t length_;\n    char *whitespace_;\n    size_t whitespace_length_;\n    char *types_[8];\n    size_t typesLength_[8];\n\n    static const size_t kTrialCount = 1000;\n};\n\n#endif // 
__cplusplus\n\n#endif // PERFTEST_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/perftest/platformtest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"perftest.h\"\n\n// This file is for giving the performance characteristics of the platform (compiler/OS/CPU).\n\n#if TEST_PLATFORM\n\n#include <cmath>\n#include <fcntl.h>\n\n// Windows\n#ifdef _WIN32\n#include <windows.h>\n#endif\n\n// UNIX\n#if defined(unix) || defined(__unix__) || defined(__unix)\n#include <unistd.h>\n#ifdef _POSIX_MAPPED_FILES\n#include <sys/mman.h>\n#endif\n#endif\n\nclass Platform : public PerfTest {\npublic:\n    virtual void SetUp() {\n        PerfTest::SetUp();\n\n        // temp buffer for testing\n        temp_ = (char *)malloc(length_ + 1);\n        memcpy(temp_, json_, length_);\n        checkSum_ = CheckSum();\n    }\n\n    char CheckSum() {\n        char c = 0;\n        for (size_t i = 0; i < length_; ++i)\n            c += temp_[i];\n        return c;\n    }\n\n    virtual void TearDown() {\n        PerfTest::TearDown();\n        free(temp_);\n    }\n\nprotected:\n    char *temp_;\n    char checkSum_;\n};\n\nTEST_F(Platform, CheckSum) {\n    for (int i = 0; i < kTrialCount; i++)\n        EXPECT_EQ(checkSum_, CheckSum());\n}\n\nTEST_F(Platform, strlen) {\n    for (int i = 0; i < kTrialCount; i++) {\n        size_t l = strlen(json_);\n        EXPECT_EQ(length_, l);\n    }\n}\n\nTEST_F(Platform, memcmp) {\n    
for (int i = 0; i < kTrialCount; i++) {\n        EXPECT_EQ(0u, memcmp(temp_, json_, length_));\n    }\n}\n\nTEST_F(Platform, pow) {\n    double sum = 0;\n    for (int i = 0; i < kTrialCount * kTrialCount; i++)\n        sum += pow(10.0, i & 255);\n    EXPECT_GT(sum, 0.0);\n}\n\nTEST_F(Platform, Whitespace_strlen) {\n    for (int i = 0; i < kTrialCount; i++) {\n        size_t l = strlen(whitespace_);\n        EXPECT_GT(l, whitespace_length_);\n    }       \n}\n\nTEST_F(Platform, Whitespace_strspn) {\n    for (int i = 0; i < kTrialCount; i++) {\n        size_t l = strspn(whitespace_, \" \\n\\r\\t\");\n        EXPECT_EQ(whitespace_length_, l);\n    }       \n}\n\nTEST_F(Platform, fread) {\n    for (int i = 0; i < kTrialCount; i++) {\n        FILE *fp = fopen(filename_, \"rb\");\n        ASSERT_EQ(length_, fread(temp_, 1, length_, fp));\n        EXPECT_EQ(checkSum_, CheckSum());\n        fclose(fp);\n    }\n}\n\n#ifdef _MSC_VER\nTEST_F(Platform, read) {\n    for (int i = 0; i < kTrialCount; i++) {\n        int fd = _open(filename_, _O_BINARY | _O_RDONLY);\n        ASSERT_NE(-1, fd);\n        ASSERT_EQ(length_, _read(fd, temp_, length_));\n        EXPECT_EQ(checkSum_, CheckSum());\n        _close(fd);\n    }\n}\n#else\nTEST_F(Platform, read) {\n    for (int i = 0; i < kTrialCount; i++) {\n        int fd = open(filename_, O_RDONLY);\n        ASSERT_NE(-1, fd);\n        ASSERT_EQ(length_, read(fd, temp_, length_));\n        EXPECT_EQ(checkSum_, CheckSum());\n        close(fd);\n    }\n}\n#endif\n\n#ifdef _WIN32\nTEST_F(Platform, MapViewOfFile) {\n    for (int i = 0; i < kTrialCount; i++) {\n        HANDLE file = CreateFile(filename_, GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);\n        ASSERT_NE(INVALID_HANDLE_VALUE, file);\n        HANDLE mapObject = CreateFileMapping(file, NULL, PAGE_READONLY, 0, length_, NULL);\n        ASSERT_NE(INVALID_HANDLE_VALUE, mapObject);\n        void *p = MapViewOfFile(mapObject, FILE_MAP_READ, 0, 0, length_);\n        
ASSERT_TRUE(p != NULL);\n        EXPECT_EQ(checkSum_, CheckSum());\n        ASSERT_TRUE(UnmapViewOfFile(p) == TRUE);\n        ASSERT_TRUE(CloseHandle(mapObject) == TRUE);\n        ASSERT_TRUE(CloseHandle(file) == TRUE);\n    }\n}\n#endif\n\n#ifdef _POSIX_MAPPED_FILES\nTEST_F(Platform, mmap) {\n    for (int i = 0; i < kTrialCount; i++) {\n        int fd = open(filename_, O_RDONLY);\n        ASSERT_NE(-1, fd);\n        void *p = mmap(NULL, length_, PROT_READ, MAP_PRIVATE, fd, 0);\n        ASSERT_TRUE(p != NULL);\n        EXPECT_EQ(checkSum_, CheckSum());\n        munmap(p, length_);\n        close(fd);\n    }\n}\n#endif\n\n#endif // TEST_PLATFORM\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/perftest/rapidjsontest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"perftest.h\"\n\n#if TEST_RAPIDJSON\n\n#include \"rapidjson/rapidjson.h\"\n#include \"rapidjson/document.h\"\n#include \"rapidjson/prettywriter.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/istreamwrapper.h\"\n#include \"rapidjson/encodedstream.h\"\n#include \"rapidjson/memorystream.h\"\n\n#include <fstream>\n#include <vector>\n\n#ifdef RAPIDJSON_SSE2\n#define SIMD_SUFFIX(name) name##_SSE2\n#elif defined(RAPIDJSON_SSE42)\n#define SIMD_SUFFIX(name) name##_SSE42\n#elif defined(RAPIDJSON_NEON)\n#define SIMD_SUFFIX(name) name##_NEON\n#else\n#define SIMD_SUFFIX(name) name\n#endif\n\nusing namespace rapidjson;\n\nclass RapidJson : public PerfTest {\npublic:\n    RapidJson() : temp_(), doc_() {}\n\n    virtual void SetUp() {\n        PerfTest::SetUp();\n\n        // temp buffer for insitu parsing.\n        temp_ = (char *)malloc(length_ + 1);\n\n        // Parse as a document\n        EXPECT_FALSE(doc_.Parse(json_).HasParseError());\n\n        for (size_t i = 0; i < 8; i++)\n            EXPECT_FALSE(typesDoc_[i].Parse(types_[i]).HasParseError());\n    }\n\n    virtual void TearDown() {\n        PerfTest::TearDown();\n        free(temp_);\n    }\n\nprivate:\n    RapidJson(const 
RapidJson&);\n    RapidJson& operator=(const RapidJson&);\n\nprotected:\n    char *temp_;\n    Document doc_;\n    Document typesDoc_[8];\n};\n\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParseInsitu_DummyHandler)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        memcpy(temp_, json_, length_ + 1);\n        InsituStringStream s(temp_);\n        BaseReaderHandler<> h;\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseInsituFlag>(s, h));\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParseInsitu_DummyHandler_ValidateEncoding)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        memcpy(temp_, json_, length_ + 1);\n        InsituStringStream s(temp_);\n        BaseReaderHandler<> h;\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseInsituFlag | kParseValidateEncodingFlag>(s, h));\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParse_DummyHandler)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        StringStream s(json_);\n        BaseReaderHandler<> h;\n        Reader reader;\n        EXPECT_TRUE(reader.Parse(s, h));\n    }\n}\n\n#define TEST_TYPED(index, Name)\\\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParse_DummyHandler_##Name)) {\\\n    for (size_t i = 0; i < kTrialCount * 10; i++) {\\\n        StringStream s(types_[index]);\\\n        BaseReaderHandler<> h;\\\n        Reader reader;\\\n        EXPECT_TRUE(reader.Parse(s, h));\\\n    }\\\n}\\\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParseInsitu_DummyHandler_##Name)) {\\\n    for (size_t i = 0; i < kTrialCount * 10; i++) {\\\n        memcpy(temp_, types_[index], typesLength_[index] + 1);\\\n        InsituStringStream s(temp_);\\\n        BaseReaderHandler<> h;\\\n        Reader reader;\\\n        EXPECT_TRUE(reader.Parse<kParseInsituFlag>(s, h));\\\n    }\\\n}\n\nTEST_TYPED(0, Booleans)\nTEST_TYPED(1, Floats)\nTEST_TYPED(2, Guids)\nTEST_TYPED(3, Integers)\nTEST_TYPED(4, Mixed)\nTEST_TYPED(5, Nulls)\nTEST_TYPED(6, Paragraphs)\n\n#undef TEST_TYPED\n\nTEST_F(RapidJson, 
SIMD_SUFFIX(ReaderParse_DummyHandler_FullPrecision)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        StringStream s(json_);\n        BaseReaderHandler<> h;\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseFullPrecisionFlag>(s, h));\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParseIterative_DummyHandler)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        StringStream s(json_);\n        BaseReaderHandler<> h;\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseIterativeFlag>(s, h));\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParseIterativeInsitu_DummyHandler)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        memcpy(temp_, json_, length_ + 1);\n        InsituStringStream s(temp_);\n        BaseReaderHandler<> h;\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseIterativeFlag|kParseInsituFlag>(s, h));\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParseIterativePull_DummyHandler)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        StringStream s(json_);\n        BaseReaderHandler<> h;\n        Reader reader;\n        reader.IterativeParseInit();\n        while (!reader.IterativeParseComplete()) {\n            if (!reader.IterativeParseNext<kParseDefaultFlags>(s, h))\n                break;\n        }\n        EXPECT_FALSE(reader.HasParseError());\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParseIterativePullInsitu_DummyHandler)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        memcpy(temp_, json_, length_ + 1);\n        InsituStringStream s(temp_);\n        BaseReaderHandler<> h;\n        Reader reader;\n        reader.IterativeParseInit();\n        while (!reader.IterativeParseComplete()) {\n            if (!reader.IterativeParseNext<kParseDefaultFlags|kParseInsituFlag>(s, h))\n                break;\n        }\n        EXPECT_FALSE(reader.HasParseError());\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParse_DummyHandler_ValidateEncoding)) {\n    for (size_t i 
= 0; i < kTrialCount; i++) {\n        StringStream s(json_);\n        BaseReaderHandler<> h;\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseValidateEncodingFlag>(s, h));\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(DocumentParseInsitu_MemoryPoolAllocator)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        memcpy(temp_, json_, length_ + 1);\n        Document doc;\n        doc.ParseInsitu(temp_);\n        ASSERT_TRUE(doc.IsObject());\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(DocumentParseIterativeInsitu_MemoryPoolAllocator)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        memcpy(temp_, json_, length_ + 1);\n        Document doc;\n        doc.ParseInsitu<kParseIterativeFlag>(temp_);\n        ASSERT_TRUE(doc.IsObject());\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(DocumentParse_MemoryPoolAllocator)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        Document doc;\n        doc.Parse(json_);\n        ASSERT_TRUE(doc.IsObject());\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(DocumentParseLength_MemoryPoolAllocator)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        Document doc;\n        doc.Parse(json_, length_);\n        ASSERT_TRUE(doc.IsObject());\n    }\n}\n\n#if RAPIDJSON_HAS_STDSTRING\nTEST_F(RapidJson, SIMD_SUFFIX(DocumentParseStdString_MemoryPoolAllocator)) {\n    const std::string s(json_, length_);\n    for (size_t i = 0; i < kTrialCount; i++) {\n        Document doc;\n        doc.Parse(s);\n        ASSERT_TRUE(doc.IsObject());\n    }\n}\n#endif\n\nTEST_F(RapidJson, SIMD_SUFFIX(DocumentParseIterative_MemoryPoolAllocator)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        Document doc;\n        doc.Parse<kParseIterativeFlag>(json_);\n        ASSERT_TRUE(doc.IsObject());\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(DocumentParse_CrtAllocator)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        memcpy(temp_, json_, length_ + 1);\n        GenericDocument<UTF8<>, CrtAllocator> doc;\n        
doc.Parse(temp_);\n        ASSERT_TRUE(doc.IsObject());\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(DocumentParseEncodedInputStream_MemoryStream)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        MemoryStream ms(json_, length_);\n        EncodedInputStream<UTF8<>, MemoryStream> is(ms);\n        Document doc;\n        doc.ParseStream<0, UTF8<> >(is);\n        ASSERT_TRUE(doc.IsObject());\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(DocumentParseAutoUTFInputStream_MemoryStream)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        MemoryStream ms(json_, length_);\n        AutoUTFInputStream<unsigned, MemoryStream> is(ms);\n        Document doc;\n        doc.ParseStream<0, AutoUTF<unsigned> >(is);\n        ASSERT_TRUE(doc.IsObject());\n    }\n}\n\ntemplate<typename T>\nsize_t Traverse(const T& value) {\n    size_t count = 1;\n    switch(value.GetType()) {\n        case kObjectType:\n            for (typename T::ConstMemberIterator itr = value.MemberBegin(); itr != value.MemberEnd(); ++itr) {\n                count++;    // name\n                count += Traverse(itr->value);\n            }\n            break;\n\n        case kArrayType:\n            for (typename T::ConstValueIterator itr = value.Begin(); itr != value.End(); ++itr)\n                count += Traverse(*itr);\n            break;\n\n        default:\n            // Do nothing.\n            break;\n    }\n    return count;\n}\n\nTEST_F(RapidJson, DocumentTraverse) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        size_t count = Traverse(doc_);\n        EXPECT_EQ(4339u, count);\n        //if (i == 0)\n        //  std::cout << count << std::endl;\n    }\n}\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++)\n#endif\n\nstruct ValueCounter : public BaseReaderHandler<> {\n    ValueCounter() : count_(1) {}   // root\n\n    bool EndObject(SizeType memberCount) { count_ += memberCount * 2; return true; }\n    bool EndArray(SizeType elementCount) { count_ += elementCount; 
return true; }\n\n    SizeType count_;\n};\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_POP\n#endif\n\nTEST_F(RapidJson, DocumentAccept) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        ValueCounter counter;\n        doc_.Accept(counter);\n        EXPECT_EQ(4339u, counter.count_);\n    }\n}\n\nTEST_F(RapidJson, DocumentFind) {\n    typedef Document::ValueType ValueType;\n    typedef ValueType::ConstMemberIterator ConstMemberIterator;\n    const Document &doc = typesDoc_[7]; // alotofkeys.json\n    if (doc.IsObject()) {\n        std::vector<const ValueType*> keys;\n        for (ConstMemberIterator it = doc.MemberBegin(); it != doc.MemberEnd(); ++it) {\n            keys.push_back(&it->name);\n        }\n        for (size_t i = 0; i < kTrialCount; i++) {\n            for (size_t j = 0; j < keys.size(); j++) {\n                EXPECT_TRUE(doc.FindMember(*keys[j]) != doc.MemberEnd());\n            }\n        }\n    }\n}\n\nstruct NullStream {\n    typedef char Ch;\n\n    NullStream() /*: length_(0)*/ {}\n    void Put(Ch) { /*++length_;*/ }\n    void Flush() {}\n    //size_t length_;\n};\n\nTEST_F(RapidJson, Writer_NullStream) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        NullStream s;\n        Writer<NullStream> writer(s);\n        doc_.Accept(writer);\n        //if (i == 0)\n        //  std::cout << s.length_ << std::endl;\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(Writer_StringBuffer)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        StringBuffer s(0, 1024 * 1024);\n        Writer<StringBuffer> writer(s);\n        doc_.Accept(writer);\n        const char* str = s.GetString();\n        (void)str;\n        //if (i == 0)\n        //  std::cout << strlen(str) << std::endl;\n    }\n}\n\n#define TEST_TYPED(index, Name)\\\nTEST_F(RapidJson, SIMD_SUFFIX(Writer_StringBuffer_##Name)) {\\\n    for (size_t i = 0; i < kTrialCount * 10; i++) {\\\n        StringBuffer s(0, 1024 * 1024);\\\n        Writer<StringBuffer> writer(s);\\\n        
typesDoc_[index].Accept(writer);\\\n        const char* str = s.GetString();\\\n        (void)str;\\\n    }\\\n}\n\nTEST_TYPED(0, Booleans)\nTEST_TYPED(1, Floats)\nTEST_TYPED(2, Guids)\nTEST_TYPED(3, Integers)\nTEST_TYPED(4, Mixed)\nTEST_TYPED(5, Nulls)\nTEST_TYPED(6, Paragraphs)\n\n#undef TEST_TYPED\n\nTEST_F(RapidJson, SIMD_SUFFIX(PrettyWriter_StringBuffer)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        StringBuffer s(0, 2048 * 1024);\n        PrettyWriter<StringBuffer> writer(s);\n        writer.SetIndent(' ', 1);\n        doc_.Accept(writer);\n        const char* str = s.GetString();\n        (void)str;\n        //if (i == 0)\n        //  std::cout << strlen(str) << std::endl;\n    }\n}\n\nTEST_F(RapidJson, internal_Pow10) {\n    double sum = 0;\n    for (size_t i = 0; i < kTrialCount * kTrialCount; i++)\n        sum += internal::Pow10(int(i & 255));\n    EXPECT_GT(sum, 0.0);\n}\n\nTEST_F(RapidJson, SkipWhitespace_Basic) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        rapidjson::StringStream s(whitespace_);\n        while (s.Peek() == ' ' || s.Peek() == '\\n' || s.Peek() == '\\r' || s.Peek() == '\\t')\n            s.Take();\n        ASSERT_EQ('[', s.Peek());\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(SkipWhitespace)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        rapidjson::StringStream s(whitespace_);\n        rapidjson::SkipWhitespace(s);\n        ASSERT_EQ('[', s.Peek());\n    }\n}\n\nTEST_F(RapidJson, SkipWhitespace_strspn) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        const char* s = whitespace_ + std::strspn(whitespace_, \" \\t\\r\\n\");\n        ASSERT_EQ('[', *s);\n    }\n}\n\nTEST_F(RapidJson, UTF8_Validate) {\n    NullStream os;\n\n    for (size_t i = 0; i < kTrialCount; i++) {\n        StringStream is(json_);\n        bool result = true;\n        while (is.Peek() != '\\0')\n            result &= UTF8<>::Validate(is, os);\n        EXPECT_TRUE(result);\n    }\n}\n\nTEST_F(RapidJson, FileReadStream) {\n   
 for (size_t i = 0; i < kTrialCount; i++) {\n        FILE *fp = fopen(filename_, \"rb\");\n        char buffer[65536];\n        FileReadStream s(fp, buffer, sizeof(buffer));\n        while (s.Take() != '\\0')\n            ;\n        fclose(fp);\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParse_DummyHandler_FileReadStream)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        FILE *fp = fopen(filename_, \"rb\");\n        char buffer[65536];\n        FileReadStream s(fp, buffer, sizeof(buffer));\n        BaseReaderHandler<> h;\n        Reader reader;\n        reader.Parse(s, h);\n        fclose(fp);\n    }\n}\n\nTEST_F(RapidJson, IStreamWrapper) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        std::ifstream is(filename_, std::ios::in | std::ios::binary);\n        char buffer[65536];\n        IStreamWrapper isw(is, buffer, sizeof(buffer));\n        while (isw.Take() != '\\0')\n            ;\n        is.close();\n    }\n}\n\nTEST_F(RapidJson, IStreamWrapper_Unbuffered) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        std::ifstream is(filename_, std::ios::in | std::ios::binary);\n        IStreamWrapper isw(is);\n        while (isw.Take() != '\\0')\n            ;\n        is.close();\n    }\n}\n\nTEST_F(RapidJson, IStreamWrapper_Setbuffered) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        std::ifstream is;\n        char buffer[65536];\n        is.rdbuf()->pubsetbuf(buffer, sizeof(buffer));\n        is.open(filename_, std::ios::in | std::ios::binary);\n        IStreamWrapper isw(is);\n        while (isw.Take() != '\\0')\n            ;\n        is.close();\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParse_DummyHandler_IStreamWrapper)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        std::ifstream is(filename_, std::ios::in | std::ios::binary);\n        char buffer[65536];\n        IStreamWrapper isw(is, buffer, sizeof(buffer));\n        BaseReaderHandler<> h;\n        Reader reader;\n        reader.Parse(isw, h);\n   
     is.close();\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParse_DummyHandler_IStreamWrapper_Unbuffered)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        std::ifstream is(filename_, std::ios::in | std::ios::binary);\n        IStreamWrapper isw(is);\n        BaseReaderHandler<> h;\n        Reader reader;\n        reader.Parse(isw, h);\n        is.close();\n    }\n}\n\nTEST_F(RapidJson, SIMD_SUFFIX(ReaderParse_DummyHandler_IStreamWrapper_Setbuffered)) {\n    for (size_t i = 0; i < kTrialCount; i++) {\n        std::ifstream is;\n        char buffer[65536];\n        is.rdbuf()->pubsetbuf(buffer, sizeof(buffer));\n        is.open(filename_, std::ios::in | std::ios::binary);\n        IStreamWrapper isw(is);\n        BaseReaderHandler<> h;\n        Reader reader;\n        reader.Parse(isw, h);\n        is.close();\n    }\n}\n\nTEST_F(RapidJson, StringBuffer) {\n    StringBuffer sb;\n    for (int i = 0; i < 32 * 1024 * 1024; i++)\n        sb.Put(i & 0x7f);\n}\n\n#endif // TEST_RAPIDJSON\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/perftest/schematest.cpp",
    "content": "#include \"perftest.h\"\n\n#if TEST_RAPIDJSON\n\n#include \"rapidjson/schema.h\"\n#include <ctime>\n#include <string>\n#include <vector>\n\n#define ARRAY_SIZE(a) sizeof(a) / sizeof(a[0])\n\nusing namespace rapidjson;\n\nRAPIDJSON_DIAG_PUSH\n#if defined(__GNUC__) && __GNUC__ >= 7\nRAPIDJSON_DIAG_OFF(format-overflow)\n#endif\n\ntemplate <typename Allocator>\nstatic char* ReadFile(const char* filename, Allocator& allocator) {\n    const char *paths[] = {\n        \"\",\n        \"bin/\",\n        \"../bin/\",\n        \"../../bin/\",\n        \"../../../bin/\"\n    };\n    char buffer[1024];\n    FILE *fp = 0;\n    for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {\n        sprintf(buffer, \"%s%s\", paths[i], filename);\n        fp = fopen(buffer, \"rb\");\n        if (fp)\n            break;\n    }\n\n    if (!fp)\n        return 0;\n\n    fseek(fp, 0, SEEK_END);\n    size_t length = static_cast<size_t>(ftell(fp));\n    fseek(fp, 0, SEEK_SET);\n    char* json = reinterpret_cast<char*>(allocator.Malloc(length + 1));\n    size_t readLength = fread(json, 1, length, fp);\n    json[readLength] = '\\0';\n    fclose(fp);\n    return json;\n}\n\nRAPIDJSON_DIAG_POP\n\nclass Schema : public PerfTest {\npublic:\n    Schema() {}\n\n    virtual void SetUp() {\n        PerfTest::SetUp();\n\n        const char* filenames[] = {\n            \"additionalItems.json\",\n            \"additionalProperties.json\",\n            \"allOf.json\",\n            \"anyOf.json\",\n            \"default.json\",\n            \"definitions.json\",\n            \"dependencies.json\",\n            \"enum.json\",\n            \"items.json\",\n            \"maximum.json\",\n            \"maxItems.json\",\n            \"maxLength.json\",\n            \"maxProperties.json\",\n            \"minimum.json\",\n            \"minItems.json\",\n            \"minLength.json\",\n            \"minProperties.json\",\n            \"multipleOf.json\",\n            \"not.json\",\n           
 \"oneOf.json\",\n            \"pattern.json\",\n            \"patternProperties.json\",\n            \"properties.json\",\n            \"ref.json\",\n            \"refRemote.json\",\n            \"required.json\",\n            \"type.json\",\n            \"uniqueItems.json\"\n        };\n\n        char jsonBuffer[65536];\n        MemoryPoolAllocator<> jsonAllocator(jsonBuffer, sizeof(jsonBuffer));\n\n        for (size_t i = 0; i < ARRAY_SIZE(filenames); i++) {\n            char filename[FILENAME_MAX];\n            sprintf(filename, \"jsonschema/tests/draft4/%s\", filenames[i]);\n            char* json = ReadFile(filename, jsonAllocator);\n            if (!json) {\n                printf(\"json test suite file %s not found\", filename);\n                return;\n            }\n\n            Document d;\n            d.Parse(json);\n            if (d.HasParseError()) {\n                printf(\"json test suite file %s has parse error\", filename);\n                return;\n            }\n\n            for (Value::ConstValueIterator schemaItr = d.Begin(); schemaItr != d.End(); ++schemaItr) {\n                std::string schemaDescription = (*schemaItr)[\"description\"].GetString();\n                if (IsExcludeTestSuite(schemaDescription))\n                    continue;\n\n                TestSuite* ts = new TestSuite;\n                ts->schema = new SchemaDocument((*schemaItr)[\"schema\"]);\n\n                const Value& tests = (*schemaItr)[\"tests\"];\n                for (Value::ConstValueIterator testItr = tests.Begin(); testItr != tests.End(); ++testItr) {\n                    if (IsExcludeTest(schemaDescription + \", \" + (*testItr)[\"description\"].GetString()))\n                        continue;\n\n                    Document* d2 = new Document;\n                    d2->CopyFrom((*testItr)[\"data\"], d2->GetAllocator());\n                    ts->tests.push_back(d2);\n                }\n                testSuites.push_back(ts);\n            }\n        }\n 
   }\n\n    virtual void TearDown() {\n        PerfTest::TearDown();\n        for (TestSuiteList::const_iterator itr = testSuites.begin(); itr != testSuites.end(); ++itr)\n            delete *itr;\n        testSuites.clear();\n    }\n\nprivate:\n    // Using the same exclusion in https://github.com/json-schema/JSON-Schema-Test-Suite\n    static bool IsExcludeTestSuite(const std::string& description) {\n        const char* excludeTestSuites[] = {\n            //lots of validators fail these tests\n            \"remote ref\",\n            \"remote ref, containing refs itself\",\n            \"fragment within remote ref\",\n            \"ref within remote ref\",\n            \"change resolution scope\",\n            // these below were added to get jsck in the benchmarks\n            \"uniqueItems validation\",\n            \"valid definition\",\n            \"invalid definition\"\n        };\n\n        for (size_t i = 0; i < ARRAY_SIZE(excludeTestSuites); i++)\n            if (excludeTestSuites[i] == description)\n                return true;\n        return false;\n    }\n\n    // Using the same exclusion in https://github.com/json-schema/JSON-Schema-Test-Suite\n    static bool IsExcludeTest(const std::string& description) {\n        const char* excludeTests[] = {\n            //lots of validators fail these\n            \"invalid definition, invalid definition schema\",\n            \"maxLength validation, two supplementary Unicode code points is long enough\",\n            \"minLength validation, one supplementary Unicode code point is not long enough\",\n            //this is to get tv4 in the benchmarks\n            \"heterogeneous enum validation, something else is invalid\"\n        };\n\n        for (size_t i = 0; i < ARRAY_SIZE(excludeTests); i++)\n            if (excludeTests[i] == description)\n                return true;\n        return false;\n    }\n\n    Schema(const Schema&);\n    Schema& operator=(const Schema&);\n\nprotected:\n    typedef 
std::vector<Document*> DocumentList;\n\n    struct TestSuite {\n        TestSuite() : schema() {}\n        ~TestSuite() {\n            delete schema;\n            for (DocumentList::iterator itr = tests.begin(); itr != tests.end(); ++itr)\n                delete *itr;\n        }\n        SchemaDocument* schema;\n        DocumentList tests;\n    };\n\n    typedef std::vector<TestSuite* > TestSuiteList;\n    TestSuiteList testSuites;\n};\n\nTEST_F(Schema, TestSuite) {\n    char validatorBuffer[65536];\n    MemoryPoolAllocator<> validatorAllocator(validatorBuffer, sizeof(validatorBuffer));\n\n    const int trialCount = 100000;\n    int testCount = 0;\n    clock_t start = clock();\n    for (int i = 0; i < trialCount; i++) {\n        for (TestSuiteList::const_iterator itr = testSuites.begin(); itr != testSuites.end(); ++itr) {\n            const TestSuite& ts = **itr;\n            GenericSchemaValidator<SchemaDocument, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> >  validator(*ts.schema, &validatorAllocator);\n            for (DocumentList::const_iterator testItr = ts.tests.begin(); testItr != ts.tests.end(); ++testItr) {\n                validator.Reset();\n                (*testItr)->Accept(validator);\n                testCount++;\n            }\n            validatorAllocator.Clear();\n        }\n    }\n    clock_t end = clock();\n    double duration = double(end - start) / CLOCKS_PER_SEC;\n    printf(\"%d trials in %f s -> %f trials per sec\\n\", trialCount, duration, trialCount / duration);\n    printf(\"%d tests per trial\\n\", testCount / trialCount);\n}\n\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/CMakeLists.txt",
    "content": "include(CheckCXXCompilerFlag)\n\nset(UNITTEST_SOURCES\n\tallocatorstest.cpp\n    bigintegertest.cpp\n    clzlltest.cpp\n\tcursorstreamwrappertest.cpp\n    documenttest.cpp\n    dtoatest.cpp\n    encodedstreamtest.cpp\n    encodingstest.cpp\n    fwdtest.cpp\n    filestreamtest.cpp\n    itoatest.cpp\n    istreamwrappertest.cpp\n    jsoncheckertest.cpp\n    namespacetest.cpp\n    pointertest.cpp\n    platformtest.cpp\n    prettywritertest.cpp\n    ostreamwrappertest.cpp\n    readertest.cpp\n    regextest.cpp\n\tschematest.cpp\n\tsimdtest.cpp\n    strfunctest.cpp\n    stringbuffertest.cpp\n    strtodtest.cpp\n    unittest.cpp\n    uritest.cpp\n    valuetest.cpp\n    writertest.cpp)\n\nfind_program(CCACHE_FOUND ccache)\nif(CCACHE_FOUND)\n    set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE ccache)\n    set_property(GLOBAL PROPERTY RULE_LAUNCH_LINK ccache)\n    if (CMAKE_CXX_COMPILER_ID MATCHES \"Clang\")\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -Qunused-arguments -fcolor-diagnostics\")\n\t\tendif()\nendif(CCACHE_FOUND)\n\nset_property(DIRECTORY PROPERTY COMPILE_OPTIONS ${EXTRA_CXX_FLAGS})\n\nif (CMAKE_CXX_COMPILER_ID MATCHES \"Clang\")\n    # If the user is running a newer version of Clang that includes the\n    # -Wdouble-promotion, we will ignore that warning.\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 3.7)\n        CHECK_CXX_COMPILER_FLAG(\"-Wno-double-promotion\" HAS_NO_DOUBLE_PROMOTION)\n        if (HAS_NO_DOUBLE_PROMOTION)\n            set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -Wno-double-promotion\")\n        endif()\n    endif()\nelseif (\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"MSVC\")\n    # Force to always compile with /W4\n    if(CMAKE_CXX_FLAGS MATCHES \"/W[0-4]\")\n        string(REGEX REPLACE \"/W[0-4]\" \"/W4\" CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS}\")\n    else()\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} /W4\")\n    endif()\n\n    # Force to always compile with /WX\n    if(CMAKE_CXX_FLAGS MATCHES \"/WX-\")\n        
string(REGEX REPLACE \"/WX-\" \"/WX\" CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS}\")\n    else()\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} /WX\")\n    endif()\nendif()\n\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DRAPIDJSON_HAS_STDSTRING=1\")\n\nadd_library(namespacetest STATIC namespacetest.cpp)\n\nadd_executable(unittest ${UNITTEST_SOURCES})\ntarget_link_libraries(unittest ${TEST_LIBRARIES} namespacetest)\n\nadd_dependencies(tests unittest)\n\nadd_test(NAME unittest\n    COMMAND ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/unittest\n    WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}/bin)\n\nif(NOT MSVC AND VALGRIND_FOUND)\n    # Not running SIMD.* unit test cases for Valgrind\n    add_test(NAME valgrind_unittest\n        COMMAND valgrind --suppressions=${CMAKE_SOURCE_DIR}/test/valgrind.supp --leak-check=full --error-exitcode=1 ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/unittest --gtest_filter=-SIMD.*\n        WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}/bin)\n\n    if(CMAKE_BUILD_TYPE STREQUAL \"Debug\")\n        add_test(NAME symbol_check\n        COMMAND sh -c \"objdump -t -C libnamespacetest.a | grep rapidjson ; test $? -ne 0\"\n        WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})\n    endif(CMAKE_BUILD_TYPE STREQUAL \"Debug\")\n\nendif(NOT MSVC AND VALGRIND_FOUND)\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/allocatorstest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n\n#include \"rapidjson/allocators.h\"\n\n#include <map>\n#include <string>\n#include <utility>\n#include <functional>\n\nusing namespace rapidjson;\n\ntemplate <typename Allocator>\nvoid TestAllocator(Allocator& a) {\n    EXPECT_TRUE(a.Malloc(0) == 0);\n\n    uint8_t* p = static_cast<uint8_t*>(a.Malloc(100));\n    EXPECT_TRUE(p != 0);\n    for (size_t i = 0; i < 100; i++)\n        p[i] = static_cast<uint8_t>(i);\n\n    // Expand\n    uint8_t* q = static_cast<uint8_t*>(a.Realloc(p, 100, 200));\n    EXPECT_TRUE(q != 0);\n    for (size_t i = 0; i < 100; i++)\n        EXPECT_EQ(i, q[i]);\n    for (size_t i = 100; i < 200; i++)\n        q[i] = static_cast<uint8_t>(i);\n\n    // Shrink\n    uint8_t *r = static_cast<uint8_t*>(a.Realloc(q, 200, 150));\n    EXPECT_TRUE(r != 0);\n    for (size_t i = 0; i < 150; i++)\n        EXPECT_EQ(i, r[i]);\n\n    Allocator::Free(r);\n\n    // Realloc to zero size\n    EXPECT_TRUE(a.Realloc(a.Malloc(1), 1, 0) == 0);\n}\n\nstruct TestStdAllocatorData {\n    TestStdAllocatorData(int &constructions, int &destructions) :\n        constructions_(&constructions),\n        destructions_(&destructions)\n    {\n        ++*constructions_;\n    }\n    TestStdAllocatorData(const TestStdAllocatorData& rhs) :\n   
     constructions_(rhs.constructions_),\n        destructions_(rhs.destructions_)\n    {\n        ++*constructions_;\n    }\n    TestStdAllocatorData& operator=(const TestStdAllocatorData& rhs)\n    {\n        this->~TestStdAllocatorData();\n        constructions_ = rhs.constructions_;\n        destructions_ = rhs.destructions_;\n        ++*constructions_;\n        return *this;\n    }\n    ~TestStdAllocatorData()\n    {\n        ++*destructions_;\n    }\nprivate:\n    TestStdAllocatorData();\n    int *constructions_,\n        *destructions_;\n};\n\ntemplate <typename Allocator>\nvoid TestStdAllocator(const Allocator& a) {\n#if RAPIDJSON_HAS_CXX17\n    typedef StdAllocator<bool, Allocator> BoolAllocator;\n#else\n    typedef StdAllocator<void, Allocator> VoidAllocator;\n    typedef typename VoidAllocator::template rebind<bool>::other BoolAllocator;\n#endif\n    BoolAllocator ba(a), ba2(a);\n    EXPECT_TRUE(ba == ba2);\n    EXPECT_FALSE(ba!= ba2);\n    ba.deallocate(ba.allocate());\n    EXPECT_TRUE(ba == ba2);\n    EXPECT_FALSE(ba != ba2);\n\n    unsigned long long ll = 0, *llp = &ll;\n    const unsigned long long cll = 0, *cllp = &cll;\n    StdAllocator<unsigned long long, Allocator> lla(a);\n    EXPECT_EQ(lla.address(ll), llp);\n    EXPECT_EQ(lla.address(cll), cllp);\n    EXPECT_TRUE(lla.max_size() > 0 && lla.max_size() <= SIZE_MAX / sizeof(unsigned long long));\n\n    int *arr;\n    StdAllocator<int, Allocator> ia(a);\n    arr = ia.allocate(10 * sizeof(int));\n    EXPECT_TRUE(arr != 0);\n    for (int i = 0; i < 10; ++i) {\n        arr[i] = 0x0f0f0f0f;\n    }\n    ia.deallocate(arr, 10);\n    arr = Malloc<int>(ia, 10);\n    EXPECT_TRUE(arr != 0);\n    for (int i = 0; i < 10; ++i) {\n        arr[i] = 0x0f0f0f0f;\n    }\n    arr = Realloc<int>(ia, arr, 10, 20);\n    EXPECT_TRUE(arr != 0);\n    for (int i = 0; i < 10; ++i) {\n        EXPECT_EQ(arr[i], 0x0f0f0f0f);\n    }\n    for (int i = 10; i < 20; i++) {\n        arr[i] = 0x0f0f0f0f;\n    }\n    Free<int>(ia, arr, 
20);\n\n    int cons = 0, dest = 0;\n    StdAllocator<TestStdAllocatorData, Allocator> da(a);\n    for (int i = 1; i < 10; i++) {\n        TestStdAllocatorData *d = da.allocate();\n        EXPECT_TRUE(d != 0);\n\n        da.destroy(new(d) TestStdAllocatorData(cons, dest));\n        EXPECT_EQ(cons, i);\n        EXPECT_EQ(dest, i);\n\n        da.deallocate(d);\n    }\n\n    typedef StdAllocator<char, Allocator> CharAllocator;\n    typedef std::basic_string<char, std::char_traits<char>, CharAllocator> String;\n#if RAPIDJSON_HAS_CXX11\n    String s(CharAllocator{a});\n#else\n    CharAllocator ca(a);\n    String s(ca);\n#endif\n    for (int i = 0; i < 26; i++) {\n        s.push_back(static_cast<char>('A' + i));\n    }\n    EXPECT_TRUE(s == \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\");\n\n    typedef StdAllocator<std::pair<const int, bool>, Allocator> MapAllocator;\n    typedef std::map<int, bool, std::less<int>, MapAllocator> Map;\n#if RAPIDJSON_HAS_CXX11\n    Map map(std::less<int>(), MapAllocator{a});\n#else\n    MapAllocator ma(a);\n    Map map(std::less<int>(), ma);\n#endif\n    for (int i = 0; i < 10; i++) {\n        map.insert(std::make_pair(i, (i % 2) == 0));\n    }\n    EXPECT_TRUE(map.size() == 10);\n    for (int i = 0; i < 10; i++) {\n        typename Map::iterator it = map.find(i);\n        EXPECT_TRUE(it != map.end());\n        EXPECT_TRUE(it->second == ((i % 2) == 0));\n    }\n}\n\nTEST(Allocator, CrtAllocator) {\n    CrtAllocator a;\n\n    TestAllocator(a);\n    TestStdAllocator(a);\n\n    CrtAllocator a2;\n    EXPECT_TRUE(a == a2);\n    EXPECT_FALSE(a != a2);\n    a2.Free(a2.Malloc(1));\n    EXPECT_TRUE(a == a2);\n    EXPECT_FALSE(a != a2);\n}\n\nTEST(Allocator, MemoryPoolAllocator) {\n    const size_t capacity = RAPIDJSON_ALLOCATOR_DEFAULT_CHUNK_CAPACITY;\n    MemoryPoolAllocator<> a(capacity);\n\n    a.Clear(); // noop\n    EXPECT_EQ(a.Size(), 0u);\n    EXPECT_EQ(a.Capacity(), 0u);\n    EXPECT_EQ(a.Shared(), false);\n    {\n        MemoryPoolAllocator<> a2(a);\n     
   EXPECT_EQ(a2.Shared(), true);\n        EXPECT_EQ(a.Shared(), true);\n        EXPECT_TRUE(a == a2);\n        EXPECT_FALSE(a != a2);\n        a2.Free(a2.Malloc(1));\n        EXPECT_TRUE(a == a2);\n        EXPECT_FALSE(a != a2);\n    }\n    EXPECT_EQ(a.Shared(), false);\n    EXPECT_EQ(a.Capacity(), capacity);\n    EXPECT_EQ(a.Size(), 8u); // aligned\n    a.Clear();\n    EXPECT_EQ(a.Capacity(), 0u);\n    EXPECT_EQ(a.Size(), 0u);\n\n    TestAllocator(a);\n    TestStdAllocator(a);\n\n    for (size_t i = 1; i < 1000; i++) {\n        EXPECT_TRUE(a.Malloc(i) != 0);\n        EXPECT_LE(a.Size(), a.Capacity());\n    }\n\n    CrtAllocator baseAllocator;\n    a = MemoryPoolAllocator<>(capacity, &baseAllocator);\n    EXPECT_EQ(a.Capacity(), 0u);\n    EXPECT_EQ(a.Size(), 0u);\n    a.Free(a.Malloc(1));\n    EXPECT_EQ(a.Capacity(), capacity);\n    EXPECT_EQ(a.Size(), 8u); // aligned\n\n    {\n        a.Clear();\n        const size_t bufSize = 1024;\n        char *buffer = static_cast<char *>(a.Malloc(bufSize));\n        MemoryPoolAllocator<> aligned_a(buffer, bufSize);\n        EXPECT_TRUE(aligned_a.Capacity() > 0 && aligned_a.Capacity() <= bufSize);\n        EXPECT_EQ(aligned_a.Size(), 0u);\n        aligned_a.Free(aligned_a.Malloc(1));\n        EXPECT_TRUE(aligned_a.Capacity() > 0 && aligned_a.Capacity() <= bufSize);\n        EXPECT_EQ(aligned_a.Size(), 8u); // aligned\n    }\n\n    {\n        a.Clear();\n        const size_t bufSize = 1024;\n        char *buffer = static_cast<char *>(a.Malloc(bufSize));\n        RAPIDJSON_ASSERT(bufSize % sizeof(void*) == 0);\n        MemoryPoolAllocator<> unaligned_a(buffer + 1, bufSize - 1);\n        EXPECT_TRUE(unaligned_a.Capacity() > 0 && unaligned_a.Capacity() <= bufSize - sizeof(void*));\n        EXPECT_EQ(unaligned_a.Size(), 0u);\n        unaligned_a.Free(unaligned_a.Malloc(1));\n        EXPECT_TRUE(unaligned_a.Capacity() > 0 && unaligned_a.Capacity() <= bufSize - sizeof(void*));\n        EXPECT_EQ(unaligned_a.Size(), 8u); // aligned\n  
  }\n}\n\nTEST(Allocator, Alignment) {\n    if (sizeof(size_t) >= 8) {\n        EXPECT_EQ(RAPIDJSON_UINT64_C2(0x00000000, 0x00000000), RAPIDJSON_ALIGN(0));\n        for (uint64_t i = 1; i < 8; i++) {\n            EXPECT_EQ(RAPIDJSON_UINT64_C2(0x00000000, 0x00000008), RAPIDJSON_ALIGN(i));\n            EXPECT_EQ(RAPIDJSON_UINT64_C2(0x00000000, 0x00000010), RAPIDJSON_ALIGN(RAPIDJSON_UINT64_C2(0x00000000, 0x00000008) + i));\n            EXPECT_EQ(RAPIDJSON_UINT64_C2(0x00000001, 0x00000000), RAPIDJSON_ALIGN(RAPIDJSON_UINT64_C2(0x00000000, 0xFFFFFFF8) + i));\n            EXPECT_EQ(RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0xFFFFFFF8), RAPIDJSON_ALIGN(RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0xFFFFFFF0) + i));\n        }\n    }\n\n    EXPECT_EQ(0u, RAPIDJSON_ALIGN(0u));\n    for (uint32_t i = 1; i < 8; i++) {\n        EXPECT_EQ(8u, RAPIDJSON_ALIGN(i));\n        EXPECT_EQ(0xFFFFFFF8u, RAPIDJSON_ALIGN(0xFFFFFFF0u + i));\n    }\n}\n\nTEST(Allocator, Issue399) {\n    MemoryPoolAllocator<> a;\n    void* p = a.Malloc(100);\n    void* q = a.Realloc(p, 100, 200);\n    EXPECT_EQ(p, q);\n\n    // exhaustive testing\n    for (size_t j = 1; j < 32; j++) {\n        a.Clear();\n        a.Malloc(j); // some unaligned size\n        p = a.Malloc(1);\n        for (size_t i = 1; i < 1024; i++) {\n            q = a.Realloc(p, i, i + 1);\n            EXPECT_EQ(p, q);\n            p = q;\n        }\n    }\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/bigintegertest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n\n#include \"rapidjson/internal/biginteger.h\"\n\nusing namespace rapidjson::internal;\n\n#define BIGINTEGER_LITERAL(s) BigInteger(s, sizeof(s) - 1)\n\nstatic const BigInteger kZero(0);\nstatic const BigInteger kOne(1);\nstatic const BigInteger kUint64Max = BIGINTEGER_LITERAL(\"18446744073709551615\");\nstatic const BigInteger kTwo64 = BIGINTEGER_LITERAL(\"18446744073709551616\");\n\nTEST(BigInteger, Constructor) {\n    EXPECT_TRUE(kZero.IsZero());\n    EXPECT_TRUE(kZero == kZero);\n    EXPECT_TRUE(kZero == BIGINTEGER_LITERAL(\"0\"));\n    EXPECT_TRUE(kZero == BIGINTEGER_LITERAL(\"00\"));\n\n    const BigInteger a(123);\n    EXPECT_TRUE(a == a);\n    EXPECT_TRUE(a == BIGINTEGER_LITERAL(\"123\"));\n    EXPECT_TRUE(a == BIGINTEGER_LITERAL(\"0123\"));\n\n    EXPECT_EQ(2u, kTwo64.GetCount());\n    EXPECT_EQ(0u, kTwo64.GetDigit(0));\n    EXPECT_EQ(1u, kTwo64.GetDigit(1));\n}\n\nTEST(BigInteger, AddUint64) {\n    BigInteger a = kZero;\n    a += 0u;\n    EXPECT_TRUE(kZero == a);\n\n    a += 1u;\n    EXPECT_TRUE(kOne == a);\n\n    a += 1u;\n    EXPECT_TRUE(BigInteger(2) == a);\n\n    EXPECT_TRUE(BigInteger(RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0xFFFFFFFF)) == kUint64Max);\n    BigInteger b = kUint64Max;\n    b += 1u;\n    EXPECT_TRUE(kTwo64 
== b);\n    b += RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0xFFFFFFFF);\n    EXPECT_TRUE(BIGINTEGER_LITERAL(\"36893488147419103231\") == b);\n}\n\nTEST(BigInteger, MultiplyUint64) {\n    BigInteger a = kZero;\n    a *= static_cast <uint64_t>(0);\n    EXPECT_TRUE(kZero == a);\n    a *= static_cast <uint64_t>(123);\n    EXPECT_TRUE(kZero == a);\n\n    BigInteger b = kOne;\n    b *= static_cast<uint64_t>(1);\n    EXPECT_TRUE(kOne == b);\n    b *= static_cast<uint64_t>(0);\n    EXPECT_TRUE(kZero == b);\n\n    BigInteger c(123);\n    c *= static_cast<uint64_t>(456u);\n    EXPECT_TRUE(BigInteger(123u * 456u) == c);\n    c *= RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0xFFFFFFFF);\n    EXPECT_TRUE(BIGINTEGER_LITERAL(\"1034640981606221330982120\") == c);\n    c *= RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0xFFFFFFFF);\n    EXPECT_TRUE(BIGINTEGER_LITERAL(\"19085757395861596536664473018420572782123800\") == c);\n}\n\nTEST(BigInteger, MultiplyUint32) {\n    BigInteger a = kZero;\n    a *= static_cast <uint32_t>(0);\n    EXPECT_TRUE(kZero == a);\n    a *= static_cast <uint32_t>(123);\n    EXPECT_TRUE(kZero == a);\n\n    BigInteger b = kOne;\n    b *= static_cast<uint32_t>(1);\n    EXPECT_TRUE(kOne == b);\n    b *= static_cast<uint32_t>(0);\n    EXPECT_TRUE(kZero == b);\n\n    BigInteger c(123);\n    c *= static_cast<uint32_t>(456u);\n    EXPECT_TRUE(BigInteger(123u * 456u) == c);\n    c *= 0xFFFFFFFFu;\n    EXPECT_TRUE(BIGINTEGER_LITERAL(\"240896125641960\") == c);\n    c *= 0xFFFFFFFFu;\n    EXPECT_TRUE(BIGINTEGER_LITERAL(\"1034640981124429079698200\") == c);\n}\n\nTEST(BigInteger, LeftShift) {\n    BigInteger a = kZero;\n    a <<= 1;\n    EXPECT_TRUE(kZero == a);\n    a <<= 64;\n    EXPECT_TRUE(kZero == a);\n\n    a = BigInteger(123);\n    a <<= 0;\n    EXPECT_TRUE(BigInteger(123) == a);\n    a <<= 1;\n    EXPECT_TRUE(BigInteger(246) == a);\n    a <<= 64;\n    EXPECT_TRUE(BIGINTEGER_LITERAL(\"4537899042132549697536\") == a);\n    a <<= 99;\n    
EXPECT_TRUE(BIGINTEGER_LITERAL(\"2876235222267216943024851750785644982682875244576768\") == a);\n\n    a = 1;\n    a <<= 64; // a.count_ != 1\n    a <<= 256; // interShift == 0\n    EXPECT_TRUE(BIGINTEGER_LITERAL(\"2135987035920910082395021706169552114602704522356652769947041607822219725780640550022962086936576\") == a);\n}\n\nTEST(BigInteger, Compare) {\n    EXPECT_EQ(0, kZero.Compare(kZero));\n    EXPECT_EQ(1, kOne.Compare(kZero));\n    EXPECT_EQ(-1, kZero.Compare(kOne));\n    EXPECT_EQ(0, kUint64Max.Compare(kUint64Max));\n    EXPECT_EQ(0, kTwo64.Compare(kTwo64));\n    EXPECT_EQ(-1, kUint64Max.Compare(kTwo64));\n    EXPECT_EQ(1, kTwo64.Compare(kUint64Max));\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/documenttest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/encodedstream.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include <sstream>\n#include <algorithm>\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(c++98-compat)\nRAPIDJSON_DIAG_OFF(missing-variable-declarations)\n#endif\n\nusing namespace rapidjson;\n\ntemplate <typename DocumentType>\nvoid ParseCheck(DocumentType& doc) {\n    typedef typename DocumentType::ValueType ValueType;\n\n    EXPECT_FALSE(doc.HasParseError());\n    if (doc.HasParseError())\n        printf(\"Error: %d at %zu\\n\", static_cast<int>(doc.GetParseError()), doc.GetErrorOffset());\n    EXPECT_TRUE(static_cast<ParseResult>(doc));\n\n    EXPECT_TRUE(doc.IsObject());\n\n    EXPECT_TRUE(doc.HasMember(\"hello\"));\n    const ValueType& hello = doc[\"hello\"];\n    EXPECT_TRUE(hello.IsString());\n    EXPECT_STREQ(\"world\", hello.GetString());\n\n    EXPECT_TRUE(doc.HasMember(\"t\"));\n    const ValueType& t = doc[\"t\"];\n    EXPECT_TRUE(t.IsTrue());\n\n    EXPECT_TRUE(doc.HasMember(\"f\"));\n    const ValueType& f = doc[\"f\"];\n    EXPECT_TRUE(f.IsFalse());\n\n    EXPECT_TRUE(doc.HasMember(\"n\"));\n    const ValueType& 
n = doc[\"n\"];\n    EXPECT_TRUE(n.IsNull());\n\n    EXPECT_TRUE(doc.HasMember(\"i\"));\n    const ValueType& i = doc[\"i\"];\n    EXPECT_TRUE(i.IsNumber());\n    EXPECT_EQ(123, i.GetInt());\n\n    EXPECT_TRUE(doc.HasMember(\"pi\"));\n    const ValueType& pi = doc[\"pi\"];\n    EXPECT_TRUE(pi.IsNumber());\n    EXPECT_DOUBLE_EQ(3.1416, pi.GetDouble());\n\n    EXPECT_TRUE(doc.HasMember(\"a\"));\n    const ValueType& a = doc[\"a\"];\n    EXPECT_TRUE(a.IsArray());\n    EXPECT_EQ(4u, a.Size());\n    for (SizeType j = 0; j < 4; j++)\n        EXPECT_EQ(j + 1, a[j].GetUint());\n}\n\ntemplate <typename Allocator, typename StackAllocator>\nvoid ParseTest() {\n    typedef GenericDocument<UTF8<>, Allocator, StackAllocator> DocumentType;\n    DocumentType doc;\n\n    const char* json = \" { \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3, 4] } \";\n\n    doc.Parse(json);\n    ParseCheck(doc);\n\n    doc.SetNull();\n    StringStream s(json);\n    doc.template ParseStream<0>(s);\n    ParseCheck(doc);\n\n    doc.SetNull();\n    char *buffer = strdup(json);\n    doc.ParseInsitu(buffer);\n    ParseCheck(doc);\n    free(buffer);\n\n    // Parse(const Ch*, size_t)\n    size_t length = strlen(json);\n    buffer = reinterpret_cast<char*>(malloc(length * 2));\n    memcpy(buffer, json, length);\n    memset(buffer + length, 'X', length);\n#if RAPIDJSON_HAS_STDSTRING\n    std::string s2(buffer, length); // backup buffer\n#endif\n    doc.SetNull();\n    doc.Parse(buffer, length);\n    free(buffer);\n    ParseCheck(doc);\n\n#if RAPIDJSON_HAS_STDSTRING\n    // Parse(std::string)\n    doc.SetNull();\n    doc.Parse(s2);\n    ParseCheck(doc);\n#endif\n}\n\nTEST(Document, Parse) {\n    ParseTest<MemoryPoolAllocator<>, CrtAllocator>();\n    ParseTest<MemoryPoolAllocator<>, MemoryPoolAllocator<> >();\n    ParseTest<CrtAllocator, MemoryPoolAllocator<> >();\n    ParseTest<CrtAllocator, 
CrtAllocator>();\n}\n\nTEST(Document, UnchangedOnParseError) {\n    Document doc;\n    doc.SetArray().PushBack(0, doc.GetAllocator());\n\n    ParseResult noError;\n    EXPECT_TRUE(noError);\n\n    ParseResult err = doc.Parse(\"{]\");\n    EXPECT_TRUE(doc.HasParseError());\n    EXPECT_NE(err, noError);\n    EXPECT_NE(err.Code(), noError);\n    EXPECT_NE(noError, doc.GetParseError());\n    EXPECT_EQ(err.Code(), doc.GetParseError());\n    EXPECT_EQ(err.Offset(), doc.GetErrorOffset());\n    EXPECT_TRUE(doc.IsArray());\n    EXPECT_EQ(doc.Size(), 1u);\n\n    err = doc.Parse(\"{}\");\n    EXPECT_FALSE(doc.HasParseError());\n    EXPECT_FALSE(err.IsError());\n    EXPECT_TRUE(err);\n    EXPECT_EQ(err, noError);\n    EXPECT_EQ(err.Code(), noError);\n    EXPECT_EQ(err.Code(), doc.GetParseError());\n    EXPECT_EQ(err.Offset(), doc.GetErrorOffset());\n    EXPECT_TRUE(doc.IsObject());\n    EXPECT_EQ(doc.MemberCount(), 0u);\n}\n\nstatic FILE* OpenEncodedFile(const char* filename) {\n    const char *paths[] = {\n        \"encodings\",\n        \"bin/encodings\",\n        \"../bin/encodings\",\n        \"../../bin/encodings\",\n        \"../../../bin/encodings\"\n    };\n    char buffer[1024];\n    for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {\n        sprintf(buffer, \"%s/%s\", paths[i], filename);\n        FILE *fp = fopen(buffer, \"rb\");\n        if (fp)\n            return fp;\n    }\n    return 0;\n}\n\nTEST(Document, Parse_Encoding) {\n    const char* json = \" { \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3, 4] } \";\n\n    typedef GenericDocument<UTF16<> > DocumentType;\n    DocumentType doc;\n    \n    // Parse<unsigned, SourceEncoding>(const SourceEncoding::Ch*)\n    // doc.Parse<kParseDefaultFlags, UTF8<> >(json);\n    // EXPECT_FALSE(doc.HasParseError());\n    // EXPECT_EQ(0, StrCmp(doc[L\"hello\"].GetString(), L\"world\"));\n\n    // Parse<unsigned, 
SourceEncoding>(const SourceEncoding::Ch*, size_t)\n    size_t length = strlen(json);\n    char* buffer = reinterpret_cast<char*>(malloc(length * 2));\n    memcpy(buffer, json, length);\n    memset(buffer + length, 'X', length);\n#if RAPIDJSON_HAS_STDSTRING\n    std::string s2(buffer, length); // backup buffer\n#endif\n    doc.SetNull();\n    doc.Parse<kParseDefaultFlags, UTF8<> >(buffer, length);\n    free(buffer);\n    EXPECT_FALSE(doc.HasParseError());\n    if (doc.HasParseError())\n        printf(\"Error: %d at %zu\\n\", static_cast<int>(doc.GetParseError()), doc.GetErrorOffset());\n    EXPECT_EQ(0, StrCmp(doc[L\"hello\"].GetString(), L\"world\"));\n\n#if RAPIDJSON_HAS_STDSTRING\n    // Parse<unsigned, SourceEncoding>(std::string)\n    doc.SetNull();\n\n#if defined(_MSC_VER) && _MSC_VER < 1800\n    doc.Parse<kParseDefaultFlags, UTF8<> >(s2.c_str()); // VS2010 or below cannot handle templated function overloading. Use const char* instead.\n#else\n    doc.Parse<kParseDefaultFlags, UTF8<> >(s2);\n#endif\n    EXPECT_FALSE(doc.HasParseError());\n    EXPECT_EQ(0, StrCmp(doc[L\"hello\"].GetString(), L\"world\"));\n#endif\n}\n\nTEST(Document, ParseStream_EncodedInputStream) {\n    // UTF8 -> UTF16\n    FILE* fp = OpenEncodedFile(\"utf8.json\");\n    char buffer[256];\n    FileReadStream bis(fp, buffer, sizeof(buffer));\n    EncodedInputStream<UTF8<>, FileReadStream> eis(bis);\n\n    GenericDocument<UTF16<> > d;\n    d.ParseStream<0, UTF8<> >(eis);\n    EXPECT_FALSE(d.HasParseError());\n\n    fclose(fp);\n\n    wchar_t expected[] = L\"I can eat glass and it doesn't hurt me.\";\n    GenericValue<UTF16<> >& v = d[L\"en\"];\n    EXPECT_TRUE(v.IsString());\n    EXPECT_EQ(sizeof(expected) / sizeof(wchar_t) - 1, v.GetStringLength());\n    EXPECT_EQ(0, StrCmp(expected, v.GetString()));\n\n    // UTF16 -> UTF8 in memory\n    StringBuffer bos;\n    typedef EncodedOutputStream<UTF8<>, StringBuffer> OutputStream;\n    OutputStream eos(bos, false);   // Not writing BOM\n    {\n     
   Writer<OutputStream, UTF16<>, UTF8<> > writer(eos);\n        d.Accept(writer);\n    }\n\n    // Condense the original file and compare.\n    fp = OpenEncodedFile(\"utf8.json\");\n    FileReadStream is(fp, buffer, sizeof(buffer));\n    Reader reader;\n    StringBuffer bos2;\n    Writer<StringBuffer> writer2(bos2);\n    reader.Parse(is, writer2);\n    fclose(fp);\n\n    EXPECT_EQ(bos.GetSize(), bos2.GetSize());\n    EXPECT_EQ(0, memcmp(bos.GetString(), bos2.GetString(), bos2.GetSize()));\n}\n\nTEST(Document, ParseStream_AutoUTFInputStream) {\n    // Any -> UTF8\n    FILE* fp = OpenEncodedFile(\"utf32be.json\");\n    char buffer[256];\n    FileReadStream bis(fp, buffer, sizeof(buffer));\n    AutoUTFInputStream<unsigned, FileReadStream> eis(bis);\n\n    Document d;\n    d.ParseStream<0, AutoUTF<unsigned> >(eis);\n    EXPECT_FALSE(d.HasParseError());\n\n    fclose(fp);\n\n    char expected[] = \"I can eat glass and it doesn't hurt me.\";\n    Value& v = d[\"en\"];\n    EXPECT_TRUE(v.IsString());\n    EXPECT_EQ(sizeof(expected) - 1, v.GetStringLength());\n    EXPECT_EQ(0, StrCmp(expected, v.GetString()));\n\n    // UTF8 -> UTF8 in memory\n    StringBuffer bos;\n    Writer<StringBuffer> writer(bos);\n    d.Accept(writer);\n\n    // Condense the original file and compare.\n    fp = OpenEncodedFile(\"utf8.json\");\n    FileReadStream is(fp, buffer, sizeof(buffer));\n    Reader reader;\n    StringBuffer bos2;\n    Writer<StringBuffer> writer2(bos2);\n    reader.Parse(is, writer2);\n    fclose(fp);\n\n    EXPECT_EQ(bos.GetSize(), bos2.GetSize());\n    EXPECT_EQ(0, memcmp(bos.GetString(), bos2.GetString(), bos2.GetSize()));\n}\n\nTEST(Document, Swap) {\n    Document d1;\n    Document::AllocatorType& a = d1.GetAllocator();\n\n    d1.SetArray().PushBack(1, a).PushBack(2, a);\n\n    Value o;\n    o.SetObject().AddMember(\"a\", 1, a);\n\n    // Swap between Document and Value\n    d1.Swap(o);\n    EXPECT_TRUE(d1.IsObject());\n    EXPECT_TRUE(o.IsArray());\n\n    d1.Swap(o);\n   
 EXPECT_TRUE(d1.IsArray());\n    EXPECT_TRUE(o.IsObject());\n\n    o.Swap(d1);\n    EXPECT_TRUE(d1.IsObject());\n    EXPECT_TRUE(o.IsArray());\n\n    // Swap between Document and Document\n    Document d2;\n    d2.SetArray().PushBack(3, a);\n    d1.Swap(d2);\n    EXPECT_TRUE(d1.IsArray());\n    EXPECT_TRUE(d2.IsObject());\n    EXPECT_EQ(&d2.GetAllocator(), &a);\n\n    // reset value\n    Value().Swap(d1);\n    EXPECT_TRUE(d1.IsNull());\n\n    // reset document, including allocator\n    // so clear o before so that it doesnt contain dangling elements\n    o.Clear();\n    Document().Swap(d2);\n    EXPECT_TRUE(d2.IsNull());\n    EXPECT_NE(&d2.GetAllocator(), &a);\n\n    // testing std::swap compatibility\n    d1.SetBool(true);\n    using std::swap;\n    swap(d1, d2);\n    EXPECT_TRUE(d1.IsNull());\n    EXPECT_TRUE(d2.IsTrue());\n\n    swap(o, d2);\n    EXPECT_TRUE(o.IsTrue());\n    EXPECT_TRUE(d2.IsArray());\n}\n\n\n// This should be slow due to assignment in inner-loop.\nstruct OutputStringStream : public std::ostringstream {\n    typedef char Ch;\n\n    virtual ~OutputStringStream();\n\n    void Put(char c) {\n        put(c);\n    }\n    void Flush() {}\n};\n\nOutputStringStream::~OutputStringStream() {}\n\nTEST(Document, AcceptWriter) {\n    Document doc;\n    doc.Parse(\" { \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3, 4] } \");\n\n    OutputStringStream os;\n    Writer<OutputStringStream> writer(os);\n    doc.Accept(writer);\n\n    EXPECT_EQ(\"{\\\"hello\\\":\\\"world\\\",\\\"t\\\":true,\\\"f\\\":false,\\\"n\\\":null,\\\"i\\\":123,\\\"pi\\\":3.1416,\\\"a\\\":[1,2,3,4]}\", os.str());\n}\n\nTEST(Document, UserBuffer) {\n    typedef GenericDocument<UTF8<>, MemoryPoolAllocator<>, MemoryPoolAllocator<> > DocumentType;\n    char valueBuffer[4096];\n    char parseBuffer[1024];\n    MemoryPoolAllocator<> valueAllocator(valueBuffer, sizeof(valueBuffer));\n    
MemoryPoolAllocator<> parseAllocator(parseBuffer, sizeof(parseBuffer));\n    DocumentType doc(&valueAllocator, sizeof(parseBuffer) / 2, &parseAllocator);\n    doc.Parse(\" { \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3, 4] } \");\n    EXPECT_FALSE(doc.HasParseError());\n    EXPECT_LE(valueAllocator.Size(), sizeof(valueBuffer));\n    EXPECT_LE(parseAllocator.Size(), sizeof(parseBuffer));\n\n    // Cover MemoryPoolAllocator::Capacity()\n    EXPECT_LE(valueAllocator.Size(), valueAllocator.Capacity());\n    EXPECT_LE(parseAllocator.Size(), parseAllocator.Capacity());\n}\n\n// Issue 226: Value of string type should not point to NULL\nTEST(Document, AssertAcceptInvalidNameType) {\n    Document doc;\n    doc.SetObject();\n    doc.AddMember(\"a\", 0, doc.GetAllocator());\n    doc.FindMember(\"a\")->name.SetNull(); // Change name to non-string type.\n\n    OutputStringStream os;\n    Writer<OutputStringStream> writer(os);\n    ASSERT_THROW(doc.Accept(writer), AssertException);\n}\n\n// Issue 44:    SetStringRaw doesn't work with wchar_t\nTEST(Document, UTF16_Document) {\n    GenericDocument< UTF16<> > json;\n    json.Parse<kParseValidateEncodingFlag>(L\"[{\\\"created_at\\\":\\\"Wed Oct 30 17:13:20 +0000 2012\\\"}]\");\n\n    ASSERT_TRUE(json.IsArray());\n    GenericValue< UTF16<> >& v = json[0];\n    ASSERT_TRUE(v.IsObject());\n\n    GenericValue< UTF16<> >& s = v[L\"created_at\"];\n    ASSERT_TRUE(s.IsString());\n\n    EXPECT_EQ(0, memcmp(L\"Wed Oct 30 17:13:20 +0000 2012\", s.GetString(), (s.GetStringLength() + 1) * sizeof(wchar_t)));\n}\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n\n#if 0 // Many old compiler does not support these. 
Turn it off temporaily.\n\n#include <type_traits>\n\nTEST(Document, Traits) {\n    static_assert(std::is_constructible<Document>::value, \"\");\n    static_assert(std::is_default_constructible<Document>::value, \"\");\n#ifndef _MSC_VER\n    static_assert(!std::is_copy_constructible<Document>::value, \"\");\n#endif\n    static_assert(std::is_move_constructible<Document>::value, \"\");\n\n    static_assert(!std::is_nothrow_constructible<Document>::value, \"\");\n    static_assert(!std::is_nothrow_default_constructible<Document>::value, \"\");\n#ifndef _MSC_VER\n    static_assert(!std::is_nothrow_copy_constructible<Document>::value, \"\");\n    static_assert(std::is_nothrow_move_constructible<Document>::value, \"\");\n#endif\n\n    static_assert(std::is_assignable<Document,Document>::value, \"\");\n#ifndef _MSC_VER\n  static_assert(!std::is_copy_assignable<Document>::value, \"\");\n#endif\n    static_assert(std::is_move_assignable<Document>::value, \"\");\n\n#ifndef _MSC_VER\n    static_assert(std::is_nothrow_assignable<Document, Document>::value, \"\");\n#endif\n    static_assert(!std::is_nothrow_copy_assignable<Document>::value, \"\");\n#ifndef _MSC_VER\n    static_assert(std::is_nothrow_move_assignable<Document>::value, \"\");\n#endif\n\n    static_assert( std::is_destructible<Document>::value, \"\");\n#ifndef _MSC_VER\n    static_assert(std::is_nothrow_destructible<Document>::value, \"\");\n#endif\n}\n\n#endif\n\ntemplate <typename Allocator>\nstruct DocumentMove: public ::testing::Test {\n};\n\ntypedef ::testing::Types< CrtAllocator, MemoryPoolAllocator<> > MoveAllocatorTypes;\nTYPED_TEST_CASE(DocumentMove, MoveAllocatorTypes);\n\nTYPED_TEST(DocumentMove, MoveConstructor) {\n    typedef TypeParam Allocator;\n    typedef GenericDocument<UTF8<>, Allocator> D;\n    Allocator allocator;\n\n    D a(&allocator);\n    a.Parse(\"[\\\"one\\\", \\\"two\\\", \\\"three\\\"]\");\n    EXPECT_FALSE(a.HasParseError());\n    EXPECT_TRUE(a.IsArray());\n    EXPECT_EQ(3u, 
a.Size());\n    EXPECT_EQ(&a.GetAllocator(), &allocator);\n\n    // Document b(a); // does not compile (!is_copy_constructible)\n    D b(std::move(a));\n    EXPECT_TRUE(a.IsNull());\n    EXPECT_TRUE(b.IsArray());\n    EXPECT_EQ(3u, b.Size());\n    EXPECT_THROW(a.GetAllocator(), AssertException);\n    EXPECT_EQ(&b.GetAllocator(), &allocator);\n\n    b.Parse(\"{\\\"Foo\\\": \\\"Bar\\\", \\\"Baz\\\": 42}\");\n    EXPECT_FALSE(b.HasParseError());\n    EXPECT_TRUE(b.IsObject());\n    EXPECT_EQ(2u, b.MemberCount());\n\n    // Document c = a; // does not compile (!is_copy_constructible)\n    D c = std::move(b);\n    EXPECT_TRUE(b.IsNull());\n    EXPECT_TRUE(c.IsObject());\n    EXPECT_EQ(2u, c.MemberCount());\n    EXPECT_THROW(b.GetAllocator(), AssertException);\n    EXPECT_EQ(&c.GetAllocator(), &allocator);\n}\n\nTYPED_TEST(DocumentMove, MoveConstructorParseError) {\n    typedef TypeParam Allocator;\n    typedef GenericDocument<UTF8<>, Allocator> D;\n\n    ParseResult noError;\n    D a;\n    a.Parse(\"{ 4 = 4]\");\n    ParseResult error(a.GetParseError(), a.GetErrorOffset());\n    EXPECT_TRUE(a.HasParseError());\n    EXPECT_NE(error, noError);\n    EXPECT_NE(error.Code(), noError);\n    EXPECT_NE(error.Code(), noError.Code());\n    EXPECT_NE(error.Offset(), noError.Offset());\n\n    D b(std::move(a));\n    EXPECT_FALSE(a.HasParseError());\n    EXPECT_TRUE(b.HasParseError());\n    EXPECT_EQ(a.GetParseError(), noError);\n    EXPECT_EQ(a.GetParseError(), noError.Code());\n    EXPECT_EQ(a.GetErrorOffset(), noError.Offset());\n    EXPECT_EQ(b.GetParseError(), error);\n    EXPECT_EQ(b.GetParseError(), error.Code());\n    EXPECT_EQ(b.GetErrorOffset(), error.Offset());\n\n    D c(std::move(b));\n    EXPECT_FALSE(b.HasParseError());\n    EXPECT_TRUE(c.HasParseError());\n    EXPECT_EQ(b.GetParseError(), noError.Code());\n    EXPECT_EQ(c.GetParseError(), error.Code());\n    EXPECT_EQ(b.GetErrorOffset(), noError.Offset());\n    EXPECT_EQ(c.GetErrorOffset(), error.Offset());\n}\n\n// 
This test does not properly use parsing, just for testing.\n// It must call ClearStack() explicitly to prevent memory leak.\n// But here we cannot as ClearStack() is private.\n#if 0\nTYPED_TEST(DocumentMove, MoveConstructorStack) {\n    typedef TypeParam Allocator;\n    typedef UTF8<> Encoding;\n    typedef GenericDocument<Encoding, Allocator> Document;\n\n    Document a;\n    size_t defaultCapacity = a.GetStackCapacity();\n\n    // Trick Document into getting GetStackCapacity() to return non-zero\n    typedef GenericReader<Encoding, Encoding, Allocator> Reader;\n    Reader reader(&a.GetAllocator());\n    GenericStringStream<Encoding> is(\"[\\\"one\\\", \\\"two\\\", \\\"three\\\"]\");\n    reader.template Parse<kParseDefaultFlags>(is, a);\n    size_t capacity = a.GetStackCapacity();\n    EXPECT_GT(capacity, 0u);\n\n    Document b(std::move(a));\n    EXPECT_EQ(a.GetStackCapacity(), defaultCapacity);\n    EXPECT_EQ(b.GetStackCapacity(), capacity);\n\n    Document c = std::move(b);\n    EXPECT_EQ(b.GetStackCapacity(), defaultCapacity);\n    EXPECT_EQ(c.GetStackCapacity(), capacity);\n}\n#endif\n\nTYPED_TEST(DocumentMove, MoveAssignment) {\n    typedef TypeParam Allocator;\n    typedef GenericDocument<UTF8<>, Allocator> D;\n    Allocator allocator;\n\n    D a(&allocator);\n    a.Parse(\"[\\\"one\\\", \\\"two\\\", \\\"three\\\"]\");\n    EXPECT_FALSE(a.HasParseError());\n    EXPECT_TRUE(a.IsArray());\n    EXPECT_EQ(3u, a.Size());\n    EXPECT_EQ(&a.GetAllocator(), &allocator);\n\n    // Document b; b = a; // does not compile (!is_copy_assignable)\n    D b;\n    b = std::move(a);\n    EXPECT_TRUE(a.IsNull());\n    EXPECT_TRUE(b.IsArray());\n    EXPECT_EQ(3u, b.Size());\n    EXPECT_THROW(a.GetAllocator(), AssertException);\n    EXPECT_EQ(&b.GetAllocator(), &allocator);\n\n    b.Parse(\"{\\\"Foo\\\": \\\"Bar\\\", \\\"Baz\\\": 42}\");\n    EXPECT_FALSE(b.HasParseError());\n    EXPECT_TRUE(b.IsObject());\n    EXPECT_EQ(2u, b.MemberCount());\n\n    // Document c; c = a; // 
does not compile (see static_assert)\n    D c;\n    c = std::move(b);\n    EXPECT_TRUE(b.IsNull());\n    EXPECT_TRUE(c.IsObject());\n    EXPECT_EQ(2u, c.MemberCount());\n    EXPECT_THROW(b.GetAllocator(), AssertException);\n    EXPECT_EQ(&c.GetAllocator(), &allocator);\n}\n\nTYPED_TEST(DocumentMove, MoveAssignmentParseError) {\n    typedef TypeParam Allocator;\n    typedef GenericDocument<UTF8<>, Allocator> D;\n\n    ParseResult noError;\n    D a;\n    a.Parse(\"{ 4 = 4]\");\n    ParseResult error(a.GetParseError(), a.GetErrorOffset());\n    EXPECT_TRUE(a.HasParseError());\n    EXPECT_NE(error.Code(), noError.Code());\n    EXPECT_NE(error.Offset(), noError.Offset());\n\n    D b;\n    b = std::move(a);\n    EXPECT_FALSE(a.HasParseError());\n    EXPECT_TRUE(b.HasParseError());\n    EXPECT_EQ(a.GetParseError(), noError.Code());\n    EXPECT_EQ(b.GetParseError(), error.Code());\n    EXPECT_EQ(a.GetErrorOffset(), noError.Offset());\n    EXPECT_EQ(b.GetErrorOffset(), error.Offset());\n\n    D c;\n    c = std::move(b);\n    EXPECT_FALSE(b.HasParseError());\n    EXPECT_TRUE(c.HasParseError());\n    EXPECT_EQ(b.GetParseError(), noError.Code());\n    EXPECT_EQ(c.GetParseError(), error.Code());\n    EXPECT_EQ(b.GetErrorOffset(), noError.Offset());\n    EXPECT_EQ(c.GetErrorOffset(), error.Offset());\n}\n\n// This test does not properly use parsing, just for testing.\n// It must call ClearStack() explicitly to prevent memory leak.\n// But here we cannot as ClearStack() is private.\n#if 0\nTYPED_TEST(DocumentMove, MoveAssignmentStack) {\n    typedef TypeParam Allocator;\n    typedef UTF8<> Encoding;\n    typedef GenericDocument<Encoding, Allocator> D;\n\n    D a;\n    size_t defaultCapacity = a.GetStackCapacity();\n\n    // Trick Document into getting GetStackCapacity() to return non-zero\n    typedef GenericReader<Encoding, Encoding, Allocator> Reader;\n    Reader reader(&a.GetAllocator());\n    GenericStringStream<Encoding> is(\"[\\\"one\\\", \\\"two\\\", \\\"three\\\"]\");\n   
 reader.template Parse<kParseDefaultFlags>(is, a);\n    size_t capacity = a.GetStackCapacity();\n    EXPECT_GT(capacity, 0u);\n\n    D b;\n    b = std::move(a);\n    EXPECT_EQ(a.GetStackCapacity(), defaultCapacity);\n    EXPECT_EQ(b.GetStackCapacity(), capacity);\n\n    D c;\n    c = std::move(b);\n    EXPECT_EQ(b.GetStackCapacity(), defaultCapacity);\n    EXPECT_EQ(c.GetStackCapacity(), capacity);\n}\n#endif\n\n#endif // RAPIDJSON_HAS_CXX11_RVALUE_REFS\n\n// Issue 22: Memory corruption via operator=\n// Fixed by making unimplemented assignment operator private.\n//TEST(Document, Assignment) {\n//  Document d1;\n//  Document d2;\n//  d1 = d2;\n//}\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/dtoatest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n#include \"rapidjson/internal/dtoa.h\"\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(type-limits)\n#endif\n\nusing namespace rapidjson::internal;\n\nTEST(dtoa, normal) {\n    char buffer[30];\n\n#define TEST_DTOA(d, a)\\\n    *dtoa(d, buffer) = '\\0';\\\n    EXPECT_STREQ(a, buffer)\n\n    TEST_DTOA(0.0, \"0.0\");\n    TEST_DTOA(-0.0, \"-0.0\");\n    TEST_DTOA(1.0, \"1.0\");\n    TEST_DTOA(-1.0, \"-1.0\");\n    TEST_DTOA(1.2345, \"1.2345\");\n    TEST_DTOA(1.2345678, \"1.2345678\");\n    TEST_DTOA(0.123456789012, \"0.123456789012\");\n    TEST_DTOA(1234567.8, \"1234567.8\");\n    TEST_DTOA(-79.39773355813419, \"-79.39773355813419\");\n    TEST_DTOA(-36.973846435546875, \"-36.973846435546875\");\n    TEST_DTOA(0.000001, \"0.000001\");\n    TEST_DTOA(0.0000001, \"1e-7\");\n    TEST_DTOA(1e30, \"1e30\");\n    TEST_DTOA(1.234567890123456e30, \"1.234567890123456e30\");\n    TEST_DTOA(5e-324, \"5e-324\"); // Min subnormal positive double\n    TEST_DTOA(2.225073858507201e-308, \"2.225073858507201e-308\"); // Max subnormal positive double\n    TEST_DTOA(2.2250738585072014e-308, \"2.2250738585072014e-308\"); // Min normal positive double\n    TEST_DTOA(1.7976931348623157e308, \"1.7976931348623157e308\"); // Max 
double\n\n#undef TEST_DTOA\n}\n\nTEST(dtoa, maxDecimalPlaces) {\n    char buffer[30];\n\n#define TEST_DTOA(m, d, a)\\\n    *dtoa(d, buffer, m) = '\\0';\\\n    EXPECT_STREQ(a, buffer)\n\n    TEST_DTOA(3, 0.0, \"0.0\");\n    TEST_DTOA(1, 0.0, \"0.0\");\n    TEST_DTOA(3, -0.0, \"-0.0\");\n    TEST_DTOA(3, 1.0, \"1.0\");\n    TEST_DTOA(3, -1.0, \"-1.0\");\n    TEST_DTOA(3, 1.2345, \"1.234\");\n    TEST_DTOA(2, 1.2345, \"1.23\");\n    TEST_DTOA(1, 1.2345, \"1.2\");\n    TEST_DTOA(3, 1.2345678, \"1.234\");\n    TEST_DTOA(3, 1.0001, \"1.0\");\n    TEST_DTOA(2, 1.0001, \"1.0\");\n    TEST_DTOA(1, 1.0001, \"1.0\");\n    TEST_DTOA(3, 0.123456789012, \"0.123\");\n    TEST_DTOA(2, 0.123456789012, \"0.12\");\n    TEST_DTOA(1, 0.123456789012, \"0.1\");\n    TEST_DTOA(4, 0.0001, \"0.0001\");\n    TEST_DTOA(3, 0.0001, \"0.0\");\n    TEST_DTOA(2, 0.0001, \"0.0\");\n    TEST_DTOA(1, 0.0001, \"0.0\");\n    TEST_DTOA(3, 1234567.8, \"1234567.8\");\n    TEST_DTOA(3, 1e30, \"1e30\");\n    TEST_DTOA(3, 5e-324, \"0.0\"); // Min subnormal positive double\n    TEST_DTOA(3, 2.225073858507201e-308, \"0.0\"); // Max subnormal positive double\n    TEST_DTOA(3, 2.2250738585072014e-308, \"0.0\"); // Min normal positive double\n    TEST_DTOA(3, 1.7976931348623157e308, \"1.7976931348623157e308\"); // Max double\n    TEST_DTOA(5, -0.14000000000000001, \"-0.14\");\n    TEST_DTOA(4, -0.14000000000000001, \"-0.14\");\n    TEST_DTOA(3, -0.14000000000000001, \"-0.14\");\n    TEST_DTOA(3, -0.10000000000000001, \"-0.1\");\n    TEST_DTOA(2, -0.10000000000000001, \"-0.1\");\n    TEST_DTOA(1, -0.10000000000000001, \"-0.1\");\n\n#undef TEST_DTOA\n}\n\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_POP\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/encodedstreamtest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/encodedstream.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/memorystream.h\"\n#include \"rapidjson/memorybuffer.h\"\n\nusing namespace rapidjson;\n\nclass EncodedStreamTest : public ::testing::Test {\npublic:\n    EncodedStreamTest() : json_(), length_() {}\n    virtual ~EncodedStreamTest();\n\n    virtual void SetUp() {\n        json_ = ReadFile(\"utf8.json\", true, &length_);\n    }\n\n    virtual void TearDown() {\n        free(json_);\n        json_ = 0;\n    }\n\nprivate:\n    EncodedStreamTest(const EncodedStreamTest&);\n    EncodedStreamTest& operator=(const EncodedStreamTest&);\n    \nprotected:\n    static FILE* Open(const char* filename) {\n        const char *paths[] = {\n            \"encodings\",\n            \"bin/encodings\",\n            \"../bin/encodings\",\n            \"../../bin/encodings\",\n            \"../../../bin/encodings\"\n        };\n        char buffer[1024];\n        for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {\n            sprintf(buffer, \"%s/%s\", paths[i], filename);\n            FILE *fp = fopen(buffer, \"rb\");\n            if (fp)\n                
return fp;\n        }\n        return 0;\n    }\n\n    static char *ReadFile(const char* filename, bool appendPath, size_t* outLength) {\n        FILE *fp = appendPath ? Open(filename) : fopen(filename, \"rb\");\n\n        if (!fp) {\n            *outLength = 0;\n            return 0;\n        }\n\n        fseek(fp, 0, SEEK_END);\n        *outLength = static_cast<size_t>(ftell(fp));\n        fseek(fp, 0, SEEK_SET);\n        char* buffer = static_cast<char*>(malloc(*outLength + 1));\n        size_t readLength = fread(buffer, 1, *outLength, fp);\n        buffer[readLength] = '\\0';\n        fclose(fp);\n        return buffer;\n    }\n\n    template <typename FileEncoding, typename MemoryEncoding>\n    void TestEncodedInputStream(const char* filename) {\n        // Test FileReadStream\n        {\n            char buffer[16];\n            FILE *fp = Open(filename);\n            ASSERT_TRUE(fp != 0);\n            FileReadStream fs(fp, buffer, sizeof(buffer));\n            EncodedInputStream<FileEncoding, FileReadStream> eis(fs);\n            StringStream s(json_);\n\n            while (eis.Peek() != '\\0') {\n                unsigned expected, actual;\n                EXPECT_TRUE(UTF8<>::Decode(s, &expected));\n                EXPECT_TRUE(MemoryEncoding::Decode(eis, &actual));\n                EXPECT_EQ(expected, actual);\n            }\n            EXPECT_EQ('\\0', s.Peek());\n            fclose(fp);\n        }\n\n        // Test MemoryStream\n        {\n            size_t size;\n            char* data = ReadFile(filename, true, &size);\n            MemoryStream ms(data, size);\n            EncodedInputStream<FileEncoding, MemoryStream> eis(ms);\n            StringStream s(json_);\n\n            while (eis.Peek() != '\\0') {\n                unsigned expected, actual;\n                EXPECT_TRUE(UTF8<>::Decode(s, &expected));\n                EXPECT_TRUE(MemoryEncoding::Decode(eis, &actual));\n                EXPECT_EQ(expected, actual);\n            }\n            
EXPECT_EQ('\\0', s.Peek());\n            EXPECT_EQ(size, eis.Tell());\n            free(data);\n        }\n    }\n\n    void TestAutoUTFInputStream(const char *filename, bool expectHasBOM) {\n        // Test FileReadStream\n        {\n            char buffer[16];\n            FILE *fp = Open(filename);\n            ASSERT_TRUE(fp != 0);\n            FileReadStream fs(fp, buffer, sizeof(buffer));\n            AutoUTFInputStream<unsigned, FileReadStream> eis(fs);\n            EXPECT_EQ(expectHasBOM, eis.HasBOM());\n            StringStream s(json_);\n            while (eis.Peek() != '\\0') {\n                unsigned expected, actual;\n                EXPECT_TRUE(UTF8<>::Decode(s, &expected));\n                EXPECT_TRUE(AutoUTF<unsigned>::Decode(eis, &actual));\n                EXPECT_EQ(expected, actual);\n            }\n            EXPECT_EQ('\\0', s.Peek());\n            fclose(fp);\n        }\n\n        // Test MemoryStream\n        {\n            size_t size;\n            char* data = ReadFile(filename, true, &size);\n            MemoryStream ms(data, size);\n            AutoUTFInputStream<unsigned, MemoryStream> eis(ms);\n            EXPECT_EQ(expectHasBOM, eis.HasBOM());\n            StringStream s(json_);\n\n            while (eis.Peek() != '\\0') {\n                unsigned expected, actual;\n                EXPECT_TRUE(UTF8<>::Decode(s, &expected));\n                EXPECT_TRUE(AutoUTF<unsigned>::Decode(eis, &actual));\n                EXPECT_EQ(expected, actual);\n            }\n            EXPECT_EQ('\\0', s.Peek());\n            free(data);\n            EXPECT_EQ(size, eis.Tell());\n        }\n    }\n\n    template <typename FileEncoding, typename MemoryEncoding>\n    void TestEncodedOutputStream(const char* expectedFilename, bool putBOM) {\n        // Test FileWriteStream\n        {\n            char filename[L_tmpnam];\n            FILE* fp = TempFile(filename);\n            char buffer[16];\n            FileWriteStream os(fp, buffer, 
sizeof(buffer));\n            EncodedOutputStream<FileEncoding, FileWriteStream> eos(os, putBOM);\n            StringStream s(json_);\n            while (s.Peek() != '\\0') {\n                bool success = Transcoder<UTF8<>, MemoryEncoding>::Transcode(s, eos);\n                EXPECT_TRUE(success);\n            }\n            eos.Flush();\n            fclose(fp);\n            EXPECT_TRUE(CompareFile(filename, expectedFilename));\n            remove(filename);\n        }\n\n        // Test MemoryBuffer\n        {\n            MemoryBuffer mb;\n            EncodedOutputStream<FileEncoding, MemoryBuffer> eos(mb, putBOM);\n            StringStream s(json_);\n            while (s.Peek() != '\\0') {\n                bool success = Transcoder<UTF8<>, MemoryEncoding>::Transcode(s, eos);\n                EXPECT_TRUE(success);\n            }\n            eos.Flush();\n            EXPECT_TRUE(CompareBufferFile(mb.GetBuffer(), mb.GetSize(), expectedFilename));\n        }\n    }\n\n    void TestAutoUTFOutputStream(UTFType type, bool putBOM, const char *expectedFilename) {\n        // Test FileWriteStream\n        {\n            char filename[L_tmpnam];\n            FILE* fp = TempFile(filename);\n\n            char buffer[16];\n            FileWriteStream os(fp, buffer, sizeof(buffer));\n            AutoUTFOutputStream<unsigned, FileWriteStream> eos(os, type, putBOM);\n            StringStream s(json_);\n            while (s.Peek() != '\\0') {\n                bool success = Transcoder<UTF8<>, AutoUTF<unsigned> >::Transcode(s, eos);\n                EXPECT_TRUE(success);\n            }\n            eos.Flush();\n            fclose(fp);\n            EXPECT_TRUE(CompareFile(filename, expectedFilename));\n            remove(filename);\n        }\n\n        // Test MemoryBuffer\n        {\n            MemoryBuffer mb;\n            AutoUTFOutputStream<unsigned, MemoryBuffer> eos(mb, type, putBOM);\n            StringStream s(json_);\n            while (s.Peek() != '\\0') {\n        
        bool success = Transcoder<UTF8<>, AutoUTF<unsigned> >::Transcode(s, eos);\n                EXPECT_TRUE(success);\n            }\n            eos.Flush();\n            EXPECT_TRUE(CompareBufferFile(mb.GetBuffer(), mb.GetSize(), expectedFilename));\n        }\n    }\n\n    bool CompareFile(const char* filename, const char* expectedFilename) {\n        size_t actualLength, expectedLength;\n        char* actualBuffer = ReadFile(filename, false, &actualLength);\n        char* expectedBuffer = ReadFile(expectedFilename, true, &expectedLength);\n        bool ret = (expectedLength == actualLength) && memcmp(expectedBuffer, actualBuffer, actualLength) == 0;\n        free(actualBuffer);\n        free(expectedBuffer);\n        return ret;\n    }\n\n    bool CompareBufferFile(const char* actualBuffer, size_t actualLength, const char* expectedFilename) {\n        size_t expectedLength;\n        char* expectedBuffer = ReadFile(expectedFilename, true, &expectedLength);\n        bool ret = (expectedLength == actualLength) && memcmp(expectedBuffer, actualBuffer, actualLength) == 0;\n        free(expectedBuffer);\n        return ret;\n    }\n\n    char *json_;\n    size_t length_;\n};\n\nEncodedStreamTest::~EncodedStreamTest() {}\n\nTEST_F(EncodedStreamTest, EncodedInputStream) {\n    TestEncodedInputStream<UTF8<>,    UTF8<>  >(\"utf8.json\");\n    TestEncodedInputStream<UTF8<>,    UTF8<>  >(\"utf8bom.json\");\n    TestEncodedInputStream<UTF16LE<>, UTF16<> >(\"utf16le.json\");\n    TestEncodedInputStream<UTF16LE<>, UTF16<> >(\"utf16lebom.json\");\n    TestEncodedInputStream<UTF16BE<>, UTF16<> >(\"utf16be.json\");\n    TestEncodedInputStream<UTF16BE<>, UTF16<> >(\"utf16bebom.json\");\n    TestEncodedInputStream<UTF32LE<>, UTF32<> >(\"utf32le.json\");\n    TestEncodedInputStream<UTF32LE<>, UTF32<> >(\"utf32lebom.json\");\n    TestEncodedInputStream<UTF32BE<>, UTF32<> >(\"utf32be.json\");\n    TestEncodedInputStream<UTF32BE<>, UTF32<> 
>(\"utf32bebom.json\");\n}\n\nTEST_F(EncodedStreamTest, AutoUTFInputStream) {\n    TestAutoUTFInputStream(\"utf8.json\",      false);\n    TestAutoUTFInputStream(\"utf8bom.json\",   true);\n    TestAutoUTFInputStream(\"utf16le.json\",   false);\n    TestAutoUTFInputStream(\"utf16lebom.json\",true);\n    TestAutoUTFInputStream(\"utf16be.json\",   false);\n    TestAutoUTFInputStream(\"utf16bebom.json\",true);\n    TestAutoUTFInputStream(\"utf32le.json\",   false);\n    TestAutoUTFInputStream(\"utf32lebom.json\",true);\n    TestAutoUTFInputStream(\"utf32be.json\",   false);\n    TestAutoUTFInputStream(\"utf32bebom.json\", true);\n\n    {\n        // Auto detection fail, use user defined UTF type\n        const char json[] = \"{ }\";\n        MemoryStream ms(json, sizeof(json));\n        AutoUTFInputStream<unsigned, MemoryStream> eis(ms, kUTF8);\n        EXPECT_FALSE(eis.HasBOM());\n        EXPECT_EQ(kUTF8, eis.GetType());\n    }\n}\n\nTEST_F(EncodedStreamTest, EncodedOutputStream) {\n    TestEncodedOutputStream<UTF8<>,     UTF8<>  >(\"utf8.json\",      false);\n    TestEncodedOutputStream<UTF8<>,     UTF8<>  >(\"utf8bom.json\",   true);\n    TestEncodedOutputStream<UTF16LE<>,  UTF16<> >(\"utf16le.json\",   false);\n    TestEncodedOutputStream<UTF16LE<>,  UTF16<> >(\"utf16lebom.json\",true);\n    TestEncodedOutputStream<UTF16BE<>,  UTF16<> >(\"utf16be.json\",   false);\n    TestEncodedOutputStream<UTF16BE<>,  UTF16<> >(\"utf16bebom.json\",true);\n    TestEncodedOutputStream<UTF32LE<>,  UTF32<> >(\"utf32le.json\",   false);\n    TestEncodedOutputStream<UTF32LE<>,  UTF32<> >(\"utf32lebom.json\",true);\n    TestEncodedOutputStream<UTF32BE<>,  UTF32<> >(\"utf32be.json\",   false);\n    TestEncodedOutputStream<UTF32BE<>,  UTF32<> >(\"utf32bebom.json\",true);\n}\n\nTEST_F(EncodedStreamTest, AutoUTFOutputStream) {\n    TestAutoUTFOutputStream(kUTF8,      false,  \"utf8.json\");\n    TestAutoUTFOutputStream(kUTF8,      true,   \"utf8bom.json\");\n    
TestAutoUTFOutputStream(kUTF16LE,   false,  \"utf16le.json\");\n    TestAutoUTFOutputStream(kUTF16LE,   true,   \"utf16lebom.json\");\n    TestAutoUTFOutputStream(kUTF16BE,   false,  \"utf16be.json\");\n    TestAutoUTFOutputStream(kUTF16BE,   true,   \"utf16bebom.json\");\n    TestAutoUTFOutputStream(kUTF32LE,   false,  \"utf32le.json\");\n    TestAutoUTFOutputStream(kUTF32LE,   true,   \"utf32lebom.json\");\n    TestAutoUTFOutputStream(kUTF32BE,   false,  \"utf32be.json\");\n    TestAutoUTFOutputStream(kUTF32BE,   true,   \"utf32bebom.json\");\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/encodingstest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/encodedstream.h\"\n#include \"rapidjson/stringbuffer.h\"\n\nusing namespace rapidjson;\n\n// Verification of encoders/decoders with Hoehrmann's UTF8 decoder\n\n// http://www.unicode.org/Public/UNIDATA/Blocks.txt\nstatic const unsigned kCodepointRanges[] = {\n    0x0000,     0x007F,     // Basic Latin\n    0x0080,     0x00FF,     // Latin-1 Supplement\n    0x0100,     0x017F,     // Latin Extended-A\n    0x0180,     0x024F,     // Latin Extended-B\n    0x0250,     0x02AF,     // IPA Extensions\n    0x02B0,     0x02FF,     // Spacing Modifier Letters\n    0x0300,     0x036F,     // Combining Diacritical Marks\n    0x0370,     0x03FF,     // Greek and Coptic\n    0x0400,     0x04FF,     // Cyrillic\n    0x0500,     0x052F,     // Cyrillic Supplement\n    0x0530,     0x058F,     // Armenian\n    0x0590,     0x05FF,     // Hebrew\n    0x0600,     0x06FF,     // Arabic\n    0x0700,     0x074F,     // Syriac\n    0x0750,     0x077F,     // Arabic Supplement\n    0x0780,     0x07BF,     // Thaana\n    0x07C0,     0x07FF,     // NKo\n    0x0800,     0x083F,     // Samaritan\n    0x0840,     0x085F,     // Mandaic\n    0x0900,     
0x097F,     // Devanagari\n    0x0980,     0x09FF,     // Bengali\n    0x0A00,     0x0A7F,     // Gurmukhi\n    0x0A80,     0x0AFF,     // Gujarati\n    0x0B00,     0x0B7F,     // Oriya\n    0x0B80,     0x0BFF,     // Tamil\n    0x0C00,     0x0C7F,     // Telugu\n    0x0C80,     0x0CFF,     // Kannada\n    0x0D00,     0x0D7F,     // Malayalam\n    0x0D80,     0x0DFF,     // Sinhala\n    0x0E00,     0x0E7F,     // Thai\n    0x0E80,     0x0EFF,     // Lao\n    0x0F00,     0x0FFF,     // Tibetan\n    0x1000,     0x109F,     // Myanmar\n    0x10A0,     0x10FF,     // Georgian\n    0x1100,     0x11FF,     // Hangul Jamo\n    0x1200,     0x137F,     // Ethiopic\n    0x1380,     0x139F,     // Ethiopic Supplement\n    0x13A0,     0x13FF,     // Cherokee\n    0x1400,     0x167F,     // Unified Canadian Aboriginal Syllabics\n    0x1680,     0x169F,     // Ogham\n    0x16A0,     0x16FF,     // Runic\n    0x1700,     0x171F,     // Tagalog\n    0x1720,     0x173F,     // Hanunoo\n    0x1740,     0x175F,     // Buhid\n    0x1760,     0x177F,     // Tagbanwa\n    0x1780,     0x17FF,     // Khmer\n    0x1800,     0x18AF,     // Mongolian\n    0x18B0,     0x18FF,     // Unified Canadian Aboriginal Syllabics Extended\n    0x1900,     0x194F,     // Limbu\n    0x1950,     0x197F,     // Tai Le\n    0x1980,     0x19DF,     // New Tai Lue\n    0x19E0,     0x19FF,     // Khmer Symbols\n    0x1A00,     0x1A1F,     // Buginese\n    0x1A20,     0x1AAF,     // Tai Tham\n    0x1B00,     0x1B7F,     // Balinese\n    0x1B80,     0x1BBF,     // Sundanese\n    0x1BC0,     0x1BFF,     // Batak\n    0x1C00,     0x1C4F,     // Lepcha\n    0x1C50,     0x1C7F,     // Ol Chiki\n    0x1CD0,     0x1CFF,     // Vedic Extensions\n    0x1D00,     0x1D7F,     // Phonetic Extensions\n    0x1D80,     0x1DBF,     // Phonetic Extensions Supplement\n    0x1DC0,     0x1DFF,     // Combining Diacritical Marks Supplement\n    0x1E00,     0x1EFF,     // Latin Extended Additional\n    0x1F00,     0x1FFF,     // 
Greek Extended\n    0x2000,     0x206F,     // General Punctuation\n    0x2070,     0x209F,     // Superscripts and Subscripts\n    0x20A0,     0x20CF,     // Currency Symbols\n    0x20D0,     0x20FF,     // Combining Diacritical Marks for Symbols\n    0x2100,     0x214F,     // Letterlike Symbols\n    0x2150,     0x218F,     // Number Forms\n    0x2190,     0x21FF,     // Arrows\n    0x2200,     0x22FF,     // Mathematical Operators\n    0x2300,     0x23FF,     // Miscellaneous Technical\n    0x2400,     0x243F,     // Control Pictures\n    0x2440,     0x245F,     // Optical Character Recognition\n    0x2460,     0x24FF,     // Enclosed Alphanumerics\n    0x2500,     0x257F,     // Box Drawing\n    0x2580,     0x259F,     // Block Elements\n    0x25A0,     0x25FF,     // Geometric Shapes\n    0x2600,     0x26FF,     // Miscellaneous Symbols\n    0x2700,     0x27BF,     // Dingbats\n    0x27C0,     0x27EF,     // Miscellaneous Mathematical Symbols-A\n    0x27F0,     0x27FF,     // Supplemental Arrows-A\n    0x2800,     0x28FF,     // Braille Patterns\n    0x2900,     0x297F,     // Supplemental Arrows-B\n    0x2980,     0x29FF,     // Miscellaneous Mathematical Symbols-B\n    0x2A00,     0x2AFF,     // Supplemental Mathematical Operators\n    0x2B00,     0x2BFF,     // Miscellaneous Symbols and Arrows\n    0x2C00,     0x2C5F,     // Glagolitic\n    0x2C60,     0x2C7F,     // Latin Extended-C\n    0x2C80,     0x2CFF,     // Coptic\n    0x2D00,     0x2D2F,     // Georgian Supplement\n    0x2D30,     0x2D7F,     // Tifinagh\n    0x2D80,     0x2DDF,     // Ethiopic Extended\n    0x2DE0,     0x2DFF,     // Cyrillic Extended-A\n    0x2E00,     0x2E7F,     // Supplemental Punctuation\n    0x2E80,     0x2EFF,     // CJK Radicals Supplement\n    0x2F00,     0x2FDF,     // Kangxi Radicals\n    0x2FF0,     0x2FFF,     // Ideographic Description Characters\n    0x3000,     0x303F,     // CJK Symbols and Punctuation\n    0x3040,     0x309F,     // Hiragana\n    0x30A0,     
0x30FF,     // Katakana\n    0x3100,     0x312F,     // Bopomofo\n    0x3130,     0x318F,     // Hangul Compatibility Jamo\n    0x3190,     0x319F,     // Kanbun\n    0x31A0,     0x31BF,     // Bopomofo Extended\n    0x31C0,     0x31EF,     // CJK Strokes\n    0x31F0,     0x31FF,     // Katakana Phonetic Extensions\n    0x3200,     0x32FF,     // Enclosed CJK Letters and Months\n    0x3300,     0x33FF,     // CJK Compatibility\n    0x3400,     0x4DBF,     // CJK Unified Ideographs Extension A\n    0x4DC0,     0x4DFF,     // Yijing Hexagram Symbols\n    0x4E00,     0x9FFF,     // CJK Unified Ideographs\n    0xA000,     0xA48F,     // Yi Syllables\n    0xA490,     0xA4CF,     // Yi Radicals\n    0xA4D0,     0xA4FF,     // Lisu\n    0xA500,     0xA63F,     // Vai\n    0xA640,     0xA69F,     // Cyrillic Extended-B\n    0xA6A0,     0xA6FF,     // Bamum\n    0xA700,     0xA71F,     // Modifier Tone Letters\n    0xA720,     0xA7FF,     // Latin Extended-D\n    0xA800,     0xA82F,     // Syloti Nagri\n    0xA830,     0xA83F,     // Common Indic Number Forms\n    0xA840,     0xA87F,     // Phags-pa\n    0xA880,     0xA8DF,     // Saurashtra\n    0xA8E0,     0xA8FF,     // Devanagari Extended\n    0xA900,     0xA92F,     // Kayah Li\n    0xA930,     0xA95F,     // Rejang\n    0xA960,     0xA97F,     // Hangul Jamo Extended-A\n    0xA980,     0xA9DF,     // Javanese\n    0xAA00,     0xAA5F,     // Cham\n    0xAA60,     0xAA7F,     // Myanmar Extended-A\n    0xAA80,     0xAADF,     // Tai Viet\n    0xAB00,     0xAB2F,     // Ethiopic Extended-A\n    0xABC0,     0xABFF,     // Meetei Mayek\n    0xAC00,     0xD7AF,     // Hangul Syllables\n    0xD7B0,     0xD7FF,     // Hangul Jamo Extended-B\n    //0xD800,       0xDB7F,     // High Surrogates\n    //0xDB80,       0xDBFF,     // High Private Use Surrogates\n    //0xDC00,       0xDFFF,     // Low Surrogates\n    0xE000,     0xF8FF,     // Private Use Area\n    0xF900,     0xFAFF,     // CJK Compatibility Ideographs\n    0xFB00,  
   0xFB4F,     // Alphabetic Presentation Forms\n    0xFB50,     0xFDFF,     // Arabic Presentation Forms-A\n    0xFE00,     0xFE0F,     // Variation Selectors\n    0xFE10,     0xFE1F,     // Vertical Forms\n    0xFE20,     0xFE2F,     // Combining Half Marks\n    0xFE30,     0xFE4F,     // CJK Compatibility Forms\n    0xFE50,     0xFE6F,     // Small Form Variants\n    0xFE70,     0xFEFF,     // Arabic Presentation Forms-B\n    0xFF00,     0xFFEF,     // Halfwidth and Fullwidth Forms\n    0xFFF0,     0xFFFF,     // Specials\n    0x10000,    0x1007F,    // Linear B Syllabary\n    0x10080,    0x100FF,    // Linear B Ideograms\n    0x10100,    0x1013F,    // Aegean Numbers\n    0x10140,    0x1018F,    // Ancient Greek Numbers\n    0x10190,    0x101CF,    // Ancient Symbols\n    0x101D0,    0x101FF,    // Phaistos Disc\n    0x10280,    0x1029F,    // Lycian\n    0x102A0,    0x102DF,    // Carian\n    0x10300,    0x1032F,    // Old Italic\n    0x10330,    0x1034F,    // Gothic\n    0x10380,    0x1039F,    // Ugaritic\n    0x103A0,    0x103DF,    // Old Persian\n    0x10400,    0x1044F,    // Deseret\n    0x10450,    0x1047F,    // Shavian\n    0x10480,    0x104AF,    // Osmanya\n    0x10800,    0x1083F,    // Cypriot Syllabary\n    0x10840,    0x1085F,    // Imperial Aramaic\n    0x10900,    0x1091F,    // Phoenician\n    0x10920,    0x1093F,    // Lydian\n    0x10A00,    0x10A5F,    // Kharoshthi\n    0x10A60,    0x10A7F,    // Old South Arabian\n    0x10B00,    0x10B3F,    // Avestan\n    0x10B40,    0x10B5F,    // Inscriptional Parthian\n    0x10B60,    0x10B7F,    // Inscriptional Pahlavi\n    0x10C00,    0x10C4F,    // Old Turkic\n    0x10E60,    0x10E7F,    // Rumi Numeral Symbols\n    0x11000,    0x1107F,    // Brahmi\n    0x11080,    0x110CF,    // Kaithi\n    0x12000,    0x123FF,    // Cuneiform\n    0x12400,    0x1247F,    // Cuneiform Numbers and Punctuation\n    0x13000,    0x1342F,    // Egyptian Hieroglyphs\n    0x16800,    0x16A3F,    // Bamum 
Supplement\n    0x1B000,    0x1B0FF,    // Kana Supplement\n    0x1D000,    0x1D0FF,    // Byzantine Musical Symbols\n    0x1D100,    0x1D1FF,    // Musical Symbols\n    0x1D200,    0x1D24F,    // Ancient Greek Musical Notation\n    0x1D300,    0x1D35F,    // Tai Xuan Jing Symbols\n    0x1D360,    0x1D37F,    // Counting Rod Numerals\n    0x1D400,    0x1D7FF,    // Mathematical Alphanumeric Symbols\n    0x1F000,    0x1F02F,    // Mahjong Tiles\n    0x1F030,    0x1F09F,    // Domino Tiles\n    0x1F0A0,    0x1F0FF,    // Playing Cards\n    0x1F100,    0x1F1FF,    // Enclosed Alphanumeric Supplement\n    0x1F200,    0x1F2FF,    // Enclosed Ideographic Supplement\n    0x1F300,    0x1F5FF,    // Miscellaneous Symbols And Pictographs\n    0x1F600,    0x1F64F,    // Emoticons\n    0x1F680,    0x1F6FF,    // Transport And Map Symbols\n    0x1F700,    0x1F77F,    // Alchemical Symbols\n    0x20000,    0x2A6DF,    // CJK Unified Ideographs Extension B\n    0x2A700,    0x2B73F,    // CJK Unified Ideographs Extension C\n    0x2B740,    0x2B81F,    // CJK Unified Ideographs Extension D\n    0x2F800,    0x2FA1F,    // CJK Compatibility Ideographs Supplement\n    0xE0000,    0xE007F,    // Tags\n    0xE0100,    0xE01EF,    // Variation Selectors Supplement\n    0xF0000,    0xFFFFF,    // Supplementary Private Use Area-A\n    0x100000,   0x10FFFF,   // Supplementary Private Use Area-B\n    0xFFFFFFFF\n};\n\n// Copyright (c) 2008-2010 Bjoern Hoehrmann <bjoern@hoehrmann.de>\n// See http://bjoern.hoehrmann.de/utf-8/decoder/dfa/ for details.\n\n#define UTF8_ACCEPT 0u\n\nstatic const unsigned char utf8d[] = {\n    // The first part of the table maps bytes to character classes in order\n    // to reduce the size of the transition table and create bitmasks.\n    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,  0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,\n    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,  0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,\n    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,  0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,\n    
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,  0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,\n    1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,  9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,\n    7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,  7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,\n    8,8,2,2,2,2,2,2,2,2,2,2,2,2,2,2,  2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,\n    10,3,3,3,3,3,3,3,3,3,3,3,3,4,3,3, 11,6,6,6,5,8,8,8,8,8,8,8,8,8,8,8,\n\n    // The second part is a transition table that maps a combination\n    // of a state of the automaton and a character class to a state.\n    0,12,24,36,60,96,84,12,12,12,48,72, 12,12,12,12,12,12,12,12,12,12,12,12,\n    12, 0,12,12,12,12,12, 0,12, 0,12,12, 12,24,12,12,12,12,12,24,12,24,12,12,\n    12,12,12,12,12,12,12,24,12,12,12,12, 12,24,12,12,12,12,12,12,12,24,12,12,\n    12,12,12,12,12,12,12,36,12,36,12,12, 12,36,12,12,12,12,12,36,12,36,12,12,\n    12,36,12,12,12,12,12,12,12,12,12,12, \n};\n\nstatic unsigned inline decode(unsigned* state, unsigned* codep, unsigned byte) {\n    unsigned type = utf8d[byte];\n\n    *codep = (*state != UTF8_ACCEPT) ?\n        (byte & 0x3fu) | (*codep << 6) :\n    (0xffu >> type) & (byte);\n\n    *state = utf8d[256 + *state + type];\n    return *state;\n}\n\n//static bool IsUTF8(unsigned char* s) {\n//  unsigned codepoint, state = 0;\n//\n//  while (*s)\n//      decode(&state, &codepoint, *s++);\n//\n//  return state == UTF8_ACCEPT;\n//}\n\nTEST(EncodingsTest, UTF8) {\n    StringBuffer os, os2;\n    for (const unsigned* range = kCodepointRanges; *range != 0xFFFFFFFF; range += 2) {\n        for (unsigned codepoint = range[0]; codepoint <= range[1]; ++codepoint) {\n            os.Clear();\n            UTF8<>::Encode(os, codepoint);\n            const char* encodedStr = os.GetString();\n\n            // Decode with Hoehrmann\n            {\n                unsigned decodedCodepoint = 0;\n                unsigned state = 0;\n\n                unsigned decodedCount = 0;\n                for (const char* s = encodedStr; *s; ++s)\n                    if (!decode(&state, 
&decodedCodepoint, static_cast<unsigned char>(*s))) {\n                        EXPECT_EQ(codepoint, decodedCodepoint);\n                        decodedCount++;\n                    }\n\n                if (*encodedStr) {                  // This decoder cannot handle U+0000\n                    EXPECT_EQ(1u, decodedCount);    // Should only contain one code point\n                }\n\n                EXPECT_EQ(UTF8_ACCEPT, state);\n                if (UTF8_ACCEPT != state)\n                    std::cout << std::hex << codepoint << \" \" << decodedCodepoint << std::endl;\n            }\n\n            // Decode\n            {\n                StringStream is(encodedStr);\n                unsigned decodedCodepoint;\n                bool result = UTF8<>::Decode(is, &decodedCodepoint);\n                EXPECT_TRUE(result);\n                EXPECT_EQ(codepoint, decodedCodepoint);\n                if (!result || codepoint != decodedCodepoint)\n                    std::cout << std::hex << codepoint << \" \" << decodedCodepoint << std::endl;\n            }\n\n            // Validate\n            {\n                StringStream is(encodedStr);\n                os2.Clear();\n                bool result = UTF8<>::Validate(is, os2);\n                EXPECT_TRUE(result);\n                EXPECT_EQ(0, StrCmp(encodedStr, os2.GetString()));\n            }\n        }\n    }\n}\n\nTEST(EncodingsTest, UTF16) {\n    GenericStringBuffer<UTF16<> > os, os2;\n    GenericStringBuffer<UTF8<> > utf8os;\n    for (const unsigned* range = kCodepointRanges; *range != 0xFFFFFFFF; range += 2) {\n        for (unsigned codepoint = range[0]; codepoint <= range[1]; ++codepoint) {\n            os.Clear();\n            UTF16<>::Encode(os, codepoint);\n            const UTF16<>::Ch* encodedStr = os.GetString();\n\n            // Encode with Hoehrmann's code\n            if (codepoint != 0) // cannot handle U+0000\n            {\n                // encode with UTF8<> first\n                
utf8os.Clear();\n                UTF8<>::Encode(utf8os, codepoint);\n\n                // transcode from UTF8 to UTF16 with Hoehrmann's code\n                unsigned decodedCodepoint = 0;\n                unsigned state = 0;\n                UTF16<>::Ch buffer[3], *p = &buffer[0];\n                for (const char* s = utf8os.GetString(); *s; ++s) {\n                    if (!decode(&state, &decodedCodepoint, static_cast<unsigned char>(*s)))\n                        break;\n                }\n\n                if (codepoint <= 0xFFFF)\n                    *p++ = static_cast<UTF16<>::Ch>(decodedCodepoint);\n                else {\n                    // Encode code points above U+FFFF as surrogate pair.\n                    *p++ = static_cast<UTF16<>::Ch>(0xD7C0 + (decodedCodepoint >> 10));\n                    *p++ = static_cast<UTF16<>::Ch>(0xDC00 + (decodedCodepoint & 0x3FF));\n                }\n                *p++ = '\\0';\n\n                EXPECT_EQ(0, StrCmp(buffer, encodedStr));\n            }\n\n            // Decode\n            {\n                GenericStringStream<UTF16<> > is(encodedStr);\n                unsigned decodedCodepoint;\n                bool result = UTF16<>::Decode(is, &decodedCodepoint);\n                EXPECT_TRUE(result);\n                EXPECT_EQ(codepoint, decodedCodepoint);         \n                if (!result || codepoint != decodedCodepoint)\n                    std::cout << std::hex << codepoint << \" \" << decodedCodepoint << std::endl;\n            }\n\n            // Validate\n            {\n                GenericStringStream<UTF16<> > is(encodedStr);\n                os2.Clear();\n                bool result = UTF16<>::Validate(is, os2);\n                EXPECT_TRUE(result);\n                EXPECT_EQ(0, StrCmp(encodedStr, os2.GetString()));\n            }\n        }\n    }\n}\n\nTEST(EncodingsTest, UTF32) {\n    GenericStringBuffer<UTF32<> > os, os2;\n    for (const unsigned* range = kCodepointRanges; *range != 
0xFFFFFFFF; range += 2) {\n        for (unsigned codepoint = range[0]; codepoint <= range[1]; ++codepoint) {\n            os.Clear();\n            UTF32<>::Encode(os, codepoint);\n            const UTF32<>::Ch* encodedStr = os.GetString();\n\n            // Decode\n            {\n                GenericStringStream<UTF32<> > is(encodedStr);\n                unsigned decodedCodepoint;\n                bool result = UTF32<>::Decode(is, &decodedCodepoint);\n                EXPECT_TRUE(result);\n                EXPECT_EQ(codepoint, decodedCodepoint);\n                if (!result || codepoint != decodedCodepoint)\n                    std::cout << std::hex << codepoint << \" \" << decodedCodepoint << std::endl;\n            }\n\n            // Validate\n            {\n                GenericStringStream<UTF32<> > is(encodedStr);\n                os2.Clear();\n                bool result = UTF32<>::Validate(is, os2);\n                EXPECT_TRUE(result);\n                EXPECT_EQ(0, StrCmp(encodedStr, os2.GetString()));\n            }\n        }\n    }\n}\n\nTEST(EncodingsTest, ASCII) {\n    StringBuffer os, os2;\n    for (unsigned codepoint = 0; codepoint < 128; codepoint++) {\n        os.Clear();\n        ASCII<>::Encode(os, codepoint);\n        const ASCII<>::Ch* encodedStr = os.GetString();\n\n        // Decode\n        {\n            StringStream is(encodedStr);\n            unsigned decodedCodepoint;\n            bool result = ASCII<>::Decode(is, &decodedCodepoint);\n            EXPECT_TRUE(result);\n            EXPECT_EQ(codepoint, decodedCodepoint);\n            if (!result || codepoint != decodedCodepoint)\n                std::cout << std::hex << codepoint << \" \" << decodedCodepoint << std::endl;\n        }\n\n        // Validate\n        {\n            StringStream is(encodedStr);\n            os2.Clear();\n            bool result = ASCII<>::Validate(is, os2);\n            EXPECT_TRUE(result);\n            EXPECT_EQ(0, StrCmp(encodedStr, os2.GetString()));\n        }\n    }\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/filestreamtest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/encodedstream.h\"\n\nusing namespace rapidjson;\n\nclass FileStreamTest : public ::testing::Test {\npublic:\n    FileStreamTest() : filename_(), json_(), length_(), abcde_() {}\n    virtual ~FileStreamTest();\n\n    virtual void SetUp() {\n        const char *paths[] = {\n            \"data/sample.json\",\n            \"bin/data/sample.json\",\n            \"../bin/data/sample.json\",\n            \"../../bin/data/sample.json\",\n            \"../../../bin/data/sample.json\"\n        };\n        FILE* fp = 0;\n        for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {\n            fp = fopen(paths[i], \"rb\");\n            if (fp) {\n                filename_ = paths[i];\n                break;\n            }\n        }\n        ASSERT_TRUE(fp != 0);\n\n        fseek(fp, 0, SEEK_END);\n        length_ = static_cast<size_t>(ftell(fp));\n        fseek(fp, 0, SEEK_SET);\n        json_ = static_cast<char*>(malloc(length_ + 1));\n        size_t readLength = fread(json_, 1, length_, fp);\n        json_[readLength] = '\\0';\n        fclose(fp);\n\n        const char *abcde_paths[] = {\n            \"data/abcde.txt\",\n 
           \"bin/data/abcde.txt\",\n            \"../bin/data/abcde.txt\",\n            \"../../bin/data/abcde.txt\",\n            \"../../../bin/data/abcde.txt\"\n        };\n        fp = 0;\n        for (size_t i = 0; i < sizeof(abcde_paths) / sizeof(abcde_paths[0]); i++) {\n            fp = fopen(abcde_paths[i], \"rb\");\n            if (fp) {\n                abcde_ = abcde_paths[i];\n                break;\n            }\n        }\n        ASSERT_TRUE(fp != 0);\n        fclose(fp);\n    }\n\n    virtual void TearDown() {\n        free(json_);\n        json_ = 0;\n    }\n\nprivate:\n    FileStreamTest(const FileStreamTest&);\n    FileStreamTest& operator=(const FileStreamTest&);\n    \nprotected:\n    const char* filename_;\n    char *json_;\n    size_t length_;\n    const char* abcde_;\n};\n\nFileStreamTest::~FileStreamTest() {}\n\nTEST_F(FileStreamTest, FileReadStream) {\n    FILE *fp = fopen(filename_, \"rb\");\n    ASSERT_TRUE(fp != 0);\n    char buffer[65536];\n    FileReadStream s(fp, buffer, sizeof(buffer));\n\n    for (size_t i = 0; i < length_; i++) {\n        EXPECT_EQ(json_[i], s.Peek());\n        EXPECT_EQ(json_[i], s.Peek());  // 2nd time should be the same\n        EXPECT_EQ(json_[i], s.Take());\n    }\n\n    EXPECT_EQ(length_, s.Tell());\n    EXPECT_EQ('\\0', s.Peek());\n\n    fclose(fp);\n}\n\nTEST_F(FileStreamTest, FileReadStream_Peek4) {\n    FILE *fp = fopen(abcde_, \"rb\");\n    ASSERT_TRUE(fp != 0);\n    char buffer[4];\n    FileReadStream s(fp, buffer, sizeof(buffer));\n\n    const char* c = s.Peek4();\n    for (int i = 0; i < 4; i++)\n        EXPECT_EQ('a' + i, c[i]);\n    EXPECT_EQ(0u, s.Tell());\n\n    for (int i = 0; i < 5; i++) {\n        EXPECT_EQ(static_cast<size_t>(i), s.Tell());\n        EXPECT_EQ('a' + i, s.Peek());\n        EXPECT_EQ('a' + i, s.Peek());\n        EXPECT_EQ('a' + i, s.Take());\n    }\n    EXPECT_EQ(5u, s.Tell());\n    EXPECT_EQ(0, s.Peek());\n    EXPECT_EQ(0, s.Take());\n\n    
fclose(fp);\n}\n\nTEST_F(FileStreamTest, FileWriteStream) {\n    char filename[L_tmpnam];\n    FILE* fp = TempFile(filename);\n\n    char buffer[65536];\n    FileWriteStream os(fp, buffer, sizeof(buffer));\n    for (size_t i = 0; i < length_; i++)\n        os.Put(json_[i]);\n    os.Flush();\n    fclose(fp);\n\n    // Read it back to verify\n    fp = fopen(filename, \"rb\");\n    FileReadStream is(fp, buffer, sizeof(buffer));\n\n    for (size_t i = 0; i < length_; i++)\n        EXPECT_EQ(json_[i], is.Take());\n\n    EXPECT_EQ(length_, is.Tell());\n    fclose(fp);\n\n    //std::cout << filename << std::endl;\n    remove(filename);\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/fwdtest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n\n// Using forward declared types here.\n\n#include \"rapidjson/fwd.h\"\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++)\n#endif\n\nusing namespace rapidjson;\n\nstruct Foo {\n    Foo();\n    ~Foo();\n\n    // encodings.h\n    UTF8<char>* utf8;\n    UTF16<wchar_t>* utf16;\n    UTF16BE<wchar_t>* utf16be;\n    UTF16LE<wchar_t>* utf16le;\n    UTF32<unsigned>* utf32;\n    UTF32BE<unsigned>* utf32be;\n    UTF32LE<unsigned>* utf32le;\n    ASCII<char>* ascii;\n    AutoUTF<unsigned>* autoutf;\n    Transcoder<UTF8<char>, UTF8<char> >* transcoder;\n\n    // allocators.h\n    CrtAllocator* crtallocator;\n    MemoryPoolAllocator<CrtAllocator>* memorypoolallocator;\n\n    // stream.h\n    StringStream* stringstream;\n    InsituStringStream* insitustringstream;\n\n    // stringbuffer.h\n    StringBuffer* stringbuffer;\n\n    // // filereadstream.h\n    // FileReadStream* filereadstream;\n\n    // // filewritestream.h\n    // FileWriteStream* filewritestream;\n\n    // memorybuffer.h\n    MemoryBuffer* memorybuffer;\n\n    // memorystream.h\n    MemoryStream* memorystream;\n\n    // reader.h\n    BaseReaderHandler<UTF8<char>, void>* basereaderhandler;\n    Reader* reader;\n\n    // writer.h\n    Writer<StringBuffer, 
UTF8<char>, UTF8<char>, CrtAllocator, 0>* writer;\n\n    // prettywriter.h\n    PrettyWriter<StringBuffer, UTF8<char>, UTF8<char>, CrtAllocator, 0>* prettywriter;\n\n    // document.h\n    Value* value;\n    Document* document;\n\n    // pointer.h\n    Pointer* pointer;\n\n    // schema.h\n    SchemaDocument* schemadocument;\n    SchemaValidator* schemavalidator;\n\n    // char buffer[16];\n};\n\n// Using type definitions here.\n\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/memorybuffer.h\"\n#include \"rapidjson/memorystream.h\"\n#include \"rapidjson/document.h\" // -> reader.h\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/prettywriter.h\"\n#include \"rapidjson/schema.h\"   // -> pointer.h\n\ntypedef Transcoder<UTF8<>, UTF8<> > TranscoderUtf8ToUtf8;\ntypedef BaseReaderHandler<UTF8<>, void> BaseReaderHandlerUtf8Void;\n\nFoo::Foo() : \n    // encodings.h\n    utf8(RAPIDJSON_NEW(UTF8<>)),\n    utf16(RAPIDJSON_NEW(UTF16<>)),\n    utf16be(RAPIDJSON_NEW(UTF16BE<>)),\n    utf16le(RAPIDJSON_NEW(UTF16LE<>)),\n    utf32(RAPIDJSON_NEW(UTF32<>)),\n    utf32be(RAPIDJSON_NEW(UTF32BE<>)),\n    utf32le(RAPIDJSON_NEW(UTF32LE<>)),\n    ascii(RAPIDJSON_NEW(ASCII<>)),\n    autoutf(RAPIDJSON_NEW(AutoUTF<unsigned>)),\n    transcoder(RAPIDJSON_NEW(TranscoderUtf8ToUtf8)),\n\n    // allocators.h\n    crtallocator(RAPIDJSON_NEW(CrtAllocator)),\n    memorypoolallocator(RAPIDJSON_NEW(MemoryPoolAllocator<>)),\n\n    // stream.h\n    stringstream(RAPIDJSON_NEW(StringStream)(NULL)),\n    insitustringstream(RAPIDJSON_NEW(InsituStringStream)(NULL)),\n\n    // stringbuffer.h\n    stringbuffer(RAPIDJSON_NEW(StringBuffer)),\n\n    // // filereadstream.h\n    // filereadstream(RAPIDJSON_NEW(FileReadStream)(stdout, buffer, sizeof(buffer))),\n\n    // // filewritestream.h\n    // filewritestream(RAPIDJSON_NEW(FileWriteStream)(stdout, buffer, sizeof(buffer))),\n\n    // memorybuffer.h\n    
memorybuffer(RAPIDJSON_NEW(MemoryBuffer)),\n\n    // memorystream.h\n    memorystream(RAPIDJSON_NEW(MemoryStream)(NULL, 0)),\n\n    // reader.h\n    basereaderhandler(RAPIDJSON_NEW(BaseReaderHandlerUtf8Void)),\n    reader(RAPIDJSON_NEW(Reader)),\n\n    // writer.h\n    writer(RAPIDJSON_NEW(Writer<StringBuffer>)),\n\n    // prettywriter.h\n    prettywriter(RAPIDJSON_NEW(PrettyWriter<StringBuffer>)),\n\n    // document.h\n    value(RAPIDJSON_NEW(Value)),\n    document(RAPIDJSON_NEW(Document)),\n\n    // pointer.h\n    pointer(RAPIDJSON_NEW(Pointer)),\n\n    // schema.h\n    schemadocument(RAPIDJSON_NEW(SchemaDocument)(*document)),\n    schemavalidator(RAPIDJSON_NEW(SchemaValidator)(*schemadocument))\n{\n\n}\n\nFoo::~Foo() {\n    // encodings.h\n    RAPIDJSON_DELETE(utf8);\n    RAPIDJSON_DELETE(utf16);\n    RAPIDJSON_DELETE(utf16be);\n    RAPIDJSON_DELETE(utf16le);\n    RAPIDJSON_DELETE(utf32);\n    RAPIDJSON_DELETE(utf32be);\n    RAPIDJSON_DELETE(utf32le);\n    RAPIDJSON_DELETE(ascii);\n    RAPIDJSON_DELETE(autoutf);\n    RAPIDJSON_DELETE(transcoder);\n\n    // allocators.h\n    RAPIDJSON_DELETE(crtallocator);\n    RAPIDJSON_DELETE(memorypoolallocator);\n\n    // stream.h\n    RAPIDJSON_DELETE(stringstream);\n    RAPIDJSON_DELETE(insitustringstream);\n\n    // stringbuffer.h\n    RAPIDJSON_DELETE(stringbuffer);\n\n    // // filereadstream.h\n    // RAPIDJSON_DELETE(filereadstream);\n\n    // // filewritestream.h\n    // RAPIDJSON_DELETE(filewritestream);\n\n    // memorybuffer.h\n    RAPIDJSON_DELETE(memorybuffer);\n\n    // memorystream.h\n    RAPIDJSON_DELETE(memorystream);\n\n    // reader.h\n    RAPIDJSON_DELETE(basereaderhandler);\n    RAPIDJSON_DELETE(reader);\n\n    // writer.h\n    RAPIDJSON_DELETE(writer);\n\n    // prettywriter.h\n    RAPIDJSON_DELETE(prettywriter);\n\n    // document.h\n    RAPIDJSON_DELETE(value);\n    RAPIDJSON_DELETE(document);\n\n    // pointer.h\n    RAPIDJSON_DELETE(pointer);\n\n    // schema.h\n    
RAPIDJSON_DELETE(schemadocument);\n    RAPIDJSON_DELETE(schemavalidator);\n}\n\nTEST(Fwd, Fwd) {\n    Foo f;\n}\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_POP\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/istreamwrappertest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n\n#include \"rapidjson/istreamwrapper.h\"\n#include \"rapidjson/encodedstream.h\"\n#include \"rapidjson/document.h\"\n#include <sstream>\n#include <fstream>\n\n#if defined(_MSC_VER) && !defined(__clang__)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(4702) // unreachable code\n#endif\n\nusing namespace rapidjson;\nusing namespace std;\n\ntemplate <typename StringStreamType>\nstatic void TestStringStream() {\n    typedef typename StringStreamType::char_type Ch;\n\n    {\n        StringStreamType iss;\n        BasicIStreamWrapper<StringStreamType> is(iss);\n        EXPECT_EQ(0u, is.Tell());\n        if (sizeof(Ch) == 1) {\n            EXPECT_EQ(0, is.Peek4());\n            EXPECT_EQ(0u, is.Tell());\n        }\n        EXPECT_EQ(0, is.Peek());\n        EXPECT_EQ(0, is.Take());\n        EXPECT_EQ(0u, is.Tell());\n    }\n\n    {\n        Ch s[] = { 'A', 'B', 'C', '\\0' };\n        StringStreamType iss(s);\n        BasicIStreamWrapper<StringStreamType> is(iss);\n        EXPECT_EQ(0u, is.Tell());\n        if (sizeof(Ch) == 1) {\n            EXPECT_EQ(0, is.Peek4()); // less than 4 bytes\n        }\n        for (int i = 0; i < 3; i++) {\n            EXPECT_EQ(static_cast<size_t>(i), is.Tell());\n            EXPECT_EQ('A' + i, is.Peek());\n 
           EXPECT_EQ('A' + i, is.Peek());\n            EXPECT_EQ('A' + i, is.Take());\n        }\n        EXPECT_EQ(3u, is.Tell());\n        EXPECT_EQ(0, is.Peek());\n        EXPECT_EQ(0, is.Take());\n    }\n\n    {\n        Ch s[] = { 'A', 'B', 'C', 'D', 'E', '\\0' };\n        StringStreamType iss(s);\n        BasicIStreamWrapper<StringStreamType> is(iss);\n        if (sizeof(Ch) == 1) {\n            const Ch* c = is.Peek4();\n            for (int i = 0; i < 4; i++)\n                EXPECT_EQ('A' + i, c[i]);\n            EXPECT_EQ(0u, is.Tell());\n        }\n        for (int i = 0; i < 5; i++) {\n            EXPECT_EQ(static_cast<size_t>(i), is.Tell());\n            EXPECT_EQ('A' + i, is.Peek());\n            EXPECT_EQ('A' + i, is.Peek());\n            EXPECT_EQ('A' + i, is.Take());\n        }\n        EXPECT_EQ(5u, is.Tell());\n        EXPECT_EQ(0, is.Peek());\n        EXPECT_EQ(0, is.Take());\n    }\n}\n\nTEST(IStreamWrapper, istringstream) {\n    TestStringStream<istringstream>();\n}\n\nTEST(IStreamWrapper, stringstream) {\n    TestStringStream<stringstream>();\n}\n\nTEST(IStreamWrapper, wistringstream) {\n    TestStringStream<wistringstream>();\n}\n\nTEST(IStreamWrapper, wstringstream) {\n    TestStringStream<wstringstream>();\n}\n\ntemplate <typename FileStreamType>\nstatic bool Open(FileStreamType& fs, const char* filename) {\n    const char *paths[] = {\n        \"encodings\",\n        \"bin/encodings\",\n        \"../bin/encodings\",\n        \"../../bin/encodings\",\n        \"../../../bin/encodings\"\n    };\n    char buffer[1024];\n    for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {\n        sprintf(buffer, \"%s/%s\", paths[i], filename);\n        fs.open(buffer, ios_base::in | ios_base::binary);\n        if (fs.is_open())\n            return true;\n    }\n    return false;\n}\n\nTEST(IStreamWrapper, ifstream) {\n    ifstream ifs;\n    ASSERT_TRUE(Open(ifs, \"utf8bom.json\"));\n    IStreamWrapper isw(ifs);\n    EncodedInputStream<UTF8<>, 
IStreamWrapper> eis(isw);\n    Document d;\n    EXPECT_TRUE(!d.ParseStream(eis).HasParseError());\n    EXPECT_TRUE(d.IsObject());\n    EXPECT_EQ(5u, d.MemberCount());\n}\n\nTEST(IStreamWrapper, fstream) {\n    fstream fs;\n    ASSERT_TRUE(Open(fs, \"utf8bom.json\"));\n    IStreamWrapper isw(fs);\n    EncodedInputStream<UTF8<>, IStreamWrapper> eis(isw);\n    Document d;\n    EXPECT_TRUE(!d.ParseStream(eis).HasParseError());\n    EXPECT_TRUE(d.IsObject());\n    EXPECT_EQ(5u, d.MemberCount());\n}\n\n// wifstream/wfstream only works on C++11 with codecvt_utf16\n// But many C++11 library still not have it.\n#if 0\n#include <codecvt>\n\nTEST(IStreamWrapper, wifstream) {\n    wifstream ifs;\n    ASSERT_TRUE(Open(ifs, \"utf16bebom.json\"));\n    ifs.imbue(std::locale(ifs.getloc(),\n       new std::codecvt_utf16<wchar_t, 0x10ffff, std::consume_header>));\n    WIStreamWrapper isw(ifs);\n    GenericDocument<UTF16<> > d;\n    d.ParseStream<kParseDefaultFlags, UTF16<>, WIStreamWrapper>(isw);\n    EXPECT_TRUE(!d.HasParseError());\n    EXPECT_TRUE(d.IsObject());\n    EXPECT_EQ(5, d.MemberCount());\n}\n\nTEST(IStreamWrapper, wfstream) {\n    wfstream fs;\n    ASSERT_TRUE(Open(fs, \"utf16bebom.json\"));\n    fs.imbue(std::locale(fs.getloc(),\n       new std::codecvt_utf16<wchar_t, 0x10ffff, std::consume_header>));\n    WIStreamWrapper isw(fs);\n    GenericDocument<UTF16<> > d;\n    d.ParseStream<kParseDefaultFlags, UTF16<>, WIStreamWrapper>(isw);\n    EXPECT_TRUE(!d.HasParseError());\n    EXPECT_TRUE(d.IsObject());\n    EXPECT_EQ(5, d.MemberCount());\n}\n\n#endif\n\n#if defined(_MSC_VER) && !defined(__clang__)\nRAPIDJSON_DIAG_POP\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/itoatest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n#include \"rapidjson/internal/itoa.h\"\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(type-limits)\n#endif\n\nusing namespace rapidjson::internal;\n\ntemplate <typename T>\nstruct Traits {\n};\n\ntemplate <>\nstruct Traits<uint32_t> {\n    enum { kBufferSize = 11 };\n    enum { kMaxDigit = 10 };\n    static uint32_t Negate(uint32_t x) { return x; }\n};\n\ntemplate <>\nstruct Traits<int32_t> {\n    enum { kBufferSize = 12 };\n    enum { kMaxDigit = 10 };\n    static int32_t Negate(int32_t x) { return -x; }\n};\n\ntemplate <>\nstruct Traits<uint64_t> {\n    enum { kBufferSize = 21 };\n    enum { kMaxDigit = 20 };\n    static uint64_t Negate(uint64_t x) { return x; }\n};\n\ntemplate <>\nstruct Traits<int64_t> {\n    enum { kBufferSize = 22 };\n    enum { kMaxDigit = 20 };\n    static int64_t Negate(int64_t x) { return -x; }\n};\n\ntemplate <typename T>\nstatic void VerifyValue(T value, void(*f)(T, char*), char* (*g)(T, char*)) {\n    char buffer1[Traits<T>::kBufferSize];\n    char buffer2[Traits<T>::kBufferSize];\n\n    f(value, buffer1);\n    *g(value, buffer2) = '\\0';\n\n\n    EXPECT_STREQ(buffer1, buffer2);\n}\n\ntemplate <typename T>\nstatic void Verify(void(*f)(T, char*), char* (*g)(T, char*)) {\n    // Boundary 
cases\n    VerifyValue<T>(0, f, g);\n    VerifyValue<T>((std::numeric_limits<T>::min)(), f, g);\n    VerifyValue<T>((std::numeric_limits<T>::max)(), f, g);\n\n    // 2^n - 1, 2^n, 10^n - 1, 10^n until overflow\n    for (int power = 2; power <= 10; power += 8) {\n        T i = 1, last;\n        do {\n            VerifyValue<T>(i - 1, f, g);\n            VerifyValue<T>(i, f, g);\n            if ((std::numeric_limits<T>::min)() < 0) {\n                VerifyValue<T>(Traits<T>::Negate(i), f, g);\n                VerifyValue<T>(Traits<T>::Negate(i + 1), f, g);\n            }\n            last = i;\n            if (i > static_cast<T>((std::numeric_limits<T>::max)() / static_cast<T>(power)))\n                break;\n            i *= static_cast<T>(power);\n        } while (last < i);\n    }\n}\n\nstatic void u32toa_naive(uint32_t value, char* buffer) {\n    char temp[10];\n    char *p = temp;\n    do {\n        *p++ = static_cast<char>(char(value % 10) + '0');\n        value /= 10;\n    } while (value > 0);\n\n    do {\n        *buffer++ = *--p;\n    } while (p != temp);\n\n    *buffer = '\\0';\n}\n\nstatic void i32toa_naive(int32_t value, char* buffer) {\n    uint32_t u = static_cast<uint32_t>(value);\n    if (value < 0) {\n        *buffer++ = '-';\n        u = ~u + 1;\n    }\n    u32toa_naive(u, buffer);\n}\n\nstatic void u64toa_naive(uint64_t value, char* buffer) {\n    char temp[20];\n    char *p = temp;\n    do {\n        *p++ = static_cast<char>(char(value % 10) + '0');\n        value /= 10;\n    } while (value > 0);\n\n    do {\n        *buffer++ = *--p;\n    } while (p != temp);\n\n    *buffer = '\\0';\n}\n\nstatic void i64toa_naive(int64_t value, char* buffer) {\n    uint64_t u = static_cast<uint64_t>(value);\n    if (value < 0) {\n        *buffer++ = '-';\n        u = ~u + 1;\n    }\n    u64toa_naive(u, buffer);\n}\n\nTEST(itoa, u32toa) {\n    Verify(u32toa_naive, u32toa);\n}\n\nTEST(itoa, i32toa) {\n    Verify(i32toa_naive, i32toa);\n}\n\nTEST(itoa, u64toa) {\n 
   Verify(u64toa_naive, u64toa);\n}\n\nTEST(itoa, i64toa) {\n    Verify(i64toa_naive, i64toa);\n}\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_POP\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/jsoncheckertest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n\n#include \"rapidjson/document.h\"\n\nusing namespace rapidjson;\n\nstatic char* ReadFile(const char* filename, size_t& length) {\n    const char *paths[] = {\n        \"jsonchecker\",\n        \"bin/jsonchecker\",\n        \"../bin/jsonchecker\",\n        \"../../bin/jsonchecker\",\n        \"../../../bin/jsonchecker\"\n    };\n    char buffer[1024];\n    FILE *fp = 0;\n    for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {\n        sprintf(buffer, \"%s/%s\", paths[i], filename);\n        fp = fopen(buffer, \"rb\");\n        if (fp)\n            break;\n    }\n\n    if (!fp)\n        return 0;\n\n    fseek(fp, 0, SEEK_END);\n    length = static_cast<size_t>(ftell(fp));\n    fseek(fp, 0, SEEK_SET);\n    char* json = static_cast<char*>(malloc(length + 1));\n    size_t readLength = fread(json, 1, length, fp);\n    json[readLength] = '\\0';\n    fclose(fp);\n    return json;\n}\n\nstruct NoOpHandler {\n    bool Null() { return true; }\n    bool Bool(bool) { return true; }\n    bool Int(int) { return true; }\n    bool Uint(unsigned) { return true; }\n    bool Int64(int64_t) { return true; }\n    bool Uint64(uint64_t) { return true; }\n    bool Double(double) { return true; }\n    bool RawNumber(const char*, SizeType, 
bool) { return true; }\n    bool String(const char*, SizeType, bool) { return true; }\n    bool StartObject() { return true; }\n    bool Key(const char*, SizeType, bool) { return true; }\n    bool EndObject(SizeType) { return true; }\n    bool StartArray() { return true; }\n    bool EndArray(SizeType) { return true; }\n};\n\n\nTEST(JsonChecker, Reader) {\n    char filename[256];\n\n    // jsonchecker/failXX.json\n    for (int i = 1; i <= 33; i++) {\n        if (i == 1) // fail1.json is valid in rapidjson, which has no limitation on type of root element (RFC 7159).\n            continue;\n        if (i == 18)    // fail18.json is valid in rapidjson, which has no limitation on depth of nesting.\n            continue;\n\n        sprintf(filename, \"fail%d.json\", i);\n        size_t length;\n        char* json = ReadFile(filename, length);\n        if (!json) {\n            printf(\"jsonchecker file %s not found\", filename);\n            ADD_FAILURE();\n            continue;\n        }\n\n        // Test stack-based parsing.\n        GenericDocument<UTF8<>, CrtAllocator> document; // Use Crt allocator to check exception-safety (no memory leak)\n        document.Parse(json);\n        EXPECT_TRUE(document.HasParseError()) << filename;\n\n        // Test iterative parsing.\n        document.Parse<kParseIterativeFlag>(json);\n        EXPECT_TRUE(document.HasParseError()) << filename;\n\n        // Test iterative pull-parsing.\n        Reader reader;\n        StringStream ss(json);\n        NoOpHandler h;\n        reader.IterativeParseInit();\n        while (!reader.IterativeParseComplete()) {\n            if (!reader.IterativeParseNext<kParseDefaultFlags>(ss, h))\n                break;\n        }\n        EXPECT_TRUE(reader.HasParseError()) << filename;\n        \n        free(json);\n    }\n\n    // passX.json\n    for (int i = 1; i <= 3; i++) {\n        sprintf(filename, \"pass%d.json\", i);\n        size_t length;\n        char* json = ReadFile(filename, length);\n   
     if (!json) {\n            printf(\"jsonchecker file %s not found\", filename);\n            continue;\n        }\n\n        // Test stack-based parsing.\n        GenericDocument<UTF8<>, CrtAllocator> document; // Use Crt allocator to check exception-safety (no memory leak)\n        document.Parse(json);\n        EXPECT_FALSE(document.HasParseError()) << filename;\n\n        // Test iterative parsing.\n        document.Parse<kParseIterativeFlag>(json);\n        EXPECT_FALSE(document.HasParseError()) << filename;\n        \n        // Test iterative pull-parsing.\n        Reader reader;\n        StringStream ss(json);\n        NoOpHandler h;\n        reader.IterativeParseInit();\n        while (!reader.IterativeParseComplete()) {\n            if (!reader.IterativeParseNext<kParseDefaultFlags>(ss, h))\n                break;\n        }\n        EXPECT_FALSE(reader.HasParseError()) << filename;\n\n        free(json);\n    }\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/namespacetest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n\n// test another instantiation of RapidJSON in a different namespace \n\n#define RAPIDJSON_NAMESPACE my::rapid::json\n#define RAPIDJSON_NAMESPACE_BEGIN namespace my { namespace rapid { namespace json {\n#define RAPIDJSON_NAMESPACE_END } } }\n\n// include lots of RapidJSON files\n\n#include \"rapidjson/document.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/filereadstream.h\"\n#include \"rapidjson/filewritestream.h\"\n#include \"rapidjson/encodedstream.h\"\n#include \"rapidjson/stringbuffer.h\"\n\nstatic const char json[] = \"{\\\"hello\\\":\\\"world\\\",\\\"t\\\":true,\\\"f\\\":false,\\\"n\\\":null,\\\"i\\\":123,\\\"pi\\\":3.1416,\\\"a\\\":[1,2,3,4]}\";\n\nTEST(NamespaceTest,Using) {\n    using namespace RAPIDJSON_NAMESPACE;\n    typedef GenericDocument<UTF8<>, CrtAllocator> DocumentType;\n    DocumentType doc;\n\n    doc.Parse(json);\n    EXPECT_TRUE(!doc.HasParseError());\n}\n\nTEST(NamespaceTest,Direct) {\n    typedef RAPIDJSON_NAMESPACE::Document Document;\n    typedef RAPIDJSON_NAMESPACE::Reader Reader;\n    typedef RAPIDJSON_NAMESPACE::StringStream StringStream;\n    typedef RAPIDJSON_NAMESPACE::StringBuffer StringBuffer;\n    typedef RAPIDJSON_NAMESPACE::Writer<StringBuffer> WriterType;\n\n    StringStream 
s(json);\n    StringBuffer buffer;\n    WriterType writer(buffer);\n    buffer.ShrinkToFit();\n    Reader reader;\n    reader.Parse(s, writer);\n\n    EXPECT_STREQ(json, buffer.GetString());\n    EXPECT_EQ(sizeof(json)-1, buffer.GetSize());\n    EXPECT_TRUE(writer.IsComplete());\n\n    Document doc;\n    doc.Parse(buffer.GetString());\n    EXPECT_TRUE(!doc.HasParseError());\n\n    buffer.Clear();\n    writer.Reset(buffer);\n    doc.Accept(writer);\n    EXPECT_STREQ(json, buffer.GetString());\n    EXPECT_TRUE(writer.IsComplete());\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/ostreamwrappertest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n\n#include \"rapidjson/ostreamwrapper.h\"\n#include \"rapidjson/encodedstream.h\"\n#include \"rapidjson/document.h\"\n#include <sstream>\n#include <fstream>\n\nusing namespace rapidjson;\nusing namespace std;\n\ntemplate <typename StringStreamType>\nstatic void TestStringStream() {\n    typedef typename StringStreamType::char_type Ch;\n\n    Ch s[] = { 'A', 'B', 'C', '\\0' };\n    StringStreamType oss(s);\n    BasicOStreamWrapper<StringStreamType> os(oss);\n    for (size_t i = 0; i < 3; i++)\n        os.Put(s[i]);\n    os.Flush();\n    for (size_t i = 0; i < 3; i++)\n        EXPECT_EQ(s[i], oss.str()[i]);\n}\n\nTEST(OStreamWrapper, ostringstream) {\n    TestStringStream<ostringstream>();\n}\n\nTEST(OStreamWrapper, stringstream) {\n    TestStringStream<stringstream>();\n}\n\nTEST(OStreamWrapper, wostringstream) {\n    TestStringStream<wostringstream>();\n}\n\nTEST(OStreamWrapper, wstringstream) {\n    TestStringStream<wstringstream>();\n}\n\nTEST(OStreamWrapper, cout) {\n    OStreamWrapper os(cout);\n    const char* s = \"Hello World!\\n\";\n    while (*s)\n        os.Put(*s++);\n    os.Flush();\n}\n\ntemplate <typename FileStreamType>\nstatic void TestFileStream() {\n    char filename[L_tmpnam];\n    FILE* fp = 
TempFile(filename);\n    fclose(fp);\n\n    const char* s = \"Hello World!\\n\";\n    {\n        FileStreamType ofs(filename, ios::out | ios::binary);\n        BasicOStreamWrapper<FileStreamType> osw(ofs);\n        for (const char* p = s; *p; p++)\n            osw.Put(*p);\n        osw.Flush();\n    }\n\n    fp = fopen(filename, \"r\");\n    ASSERT_TRUE( fp != NULL );\n    for (const char* p = s; *p; p++)\n        EXPECT_EQ(*p, static_cast<char>(fgetc(fp)));\n    fclose(fp);\n}\n\nTEST(OStreamWrapper, ofstream) {\n    TestFileStream<ofstream>();\n}\n\nTEST(OStreamWrapper, fstream) {\n    TestFileStream<fstream>();\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/pointertest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n#include \"rapidjson/pointer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/ostreamwrapper.h\"\n#include <sstream>\n#include <map>\n#include <algorithm>\n\nusing namespace rapidjson;\n\nstatic const char kJson[] = \"{\\n\"\n\"    \\\"foo\\\":[\\\"bar\\\", \\\"baz\\\"],\\n\"\n\"    \\\"\\\" : 0,\\n\"\n\"    \\\"a/b\\\" : 1,\\n\"\n\"    \\\"c%d\\\" : 2,\\n\"\n\"    \\\"e^f\\\" : 3,\\n\"\n\"    \\\"g|h\\\" : 4,\\n\"\n\"    \\\"i\\\\\\\\j\\\" : 5,\\n\"\n\"    \\\"k\\\\\\\"l\\\" : 6,\\n\"\n\"    \\\" \\\" : 7,\\n\"\n\"    \\\"m~n\\\" : 8\\n\"\n\"}\";\n\nTEST(Pointer, DefaultConstructor) {\n    Pointer p;\n    EXPECT_TRUE(p.IsValid());\n    EXPECT_EQ(0u, p.GetTokenCount());\n}\n\nTEST(Pointer, Parse) {\n    {\n        Pointer p(\"\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(0u, p.GetTokenCount());\n    }\n\n    {\n        Pointer p(\"/\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_EQ(0u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"\", p.GetTokens()[0].name);\n        EXPECT_EQ(kPointerInvalidIndex, p.GetTokens()[0].index);\n    }\n\n    {\n        Pointer p(\"/foo\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, 
p.GetTokenCount());\n        EXPECT_EQ(3u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"foo\", p.GetTokens()[0].name);\n        EXPECT_EQ(kPointerInvalidIndex, p.GetTokens()[0].index);\n    }\n\n    #if RAPIDJSON_HAS_STDSTRING\n    {\n        Pointer p(std::string(\"/foo\"));\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_EQ(3u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"foo\", p.GetTokens()[0].name);\n        EXPECT_EQ(kPointerInvalidIndex, p.GetTokens()[0].index);\n    }\n    #endif\n\n    {\n        Pointer p(\"/foo/0\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(2u, p.GetTokenCount());\n        EXPECT_EQ(3u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"foo\", p.GetTokens()[0].name);\n        EXPECT_EQ(kPointerInvalidIndex, p.GetTokens()[0].index);\n        EXPECT_EQ(1u, p.GetTokens()[1].length);\n        EXPECT_STREQ(\"0\", p.GetTokens()[1].name);\n        EXPECT_EQ(0u, p.GetTokens()[1].index);\n    }\n\n    {\n        // Unescape ~1\n        Pointer p(\"/a~1b\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_EQ(3u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"a/b\", p.GetTokens()[0].name);\n    }\n\n    {\n        // Unescape ~0\n        Pointer p(\"/m~0n\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_EQ(3u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"m~n\", p.GetTokens()[0].name);\n    }\n\n    {\n        // empty name\n        Pointer p(\"/\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_EQ(0u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"\", p.GetTokens()[0].name);\n    }\n\n    {\n        // empty and non-empty name\n        Pointer p(\"//a\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(2u, p.GetTokenCount());\n        EXPECT_EQ(0u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"\", 
p.GetTokens()[0].name);\n        EXPECT_EQ(1u, p.GetTokens()[1].length);\n        EXPECT_STREQ(\"a\", p.GetTokens()[1].name);\n    }\n\n    {\n        // Null characters\n        Pointer p(\"/\\0\\0\", 3);\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_EQ(2u, p.GetTokens()[0].length);\n        EXPECT_EQ('\\0', p.GetTokens()[0].name[0]);\n        EXPECT_EQ('\\0', p.GetTokens()[0].name[1]);\n        EXPECT_EQ('\\0', p.GetTokens()[0].name[2]);\n    }\n\n    {\n        // Valid index\n        Pointer p(\"/123\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_STREQ(\"123\", p.GetTokens()[0].name);\n        EXPECT_EQ(123u, p.GetTokens()[0].index);\n    }\n\n    {\n        // Invalid index (with leading zero)\n        Pointer p(\"/01\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_STREQ(\"01\", p.GetTokens()[0].name);\n        EXPECT_EQ(kPointerInvalidIndex, p.GetTokens()[0].index);\n    }\n\n    if (sizeof(SizeType) == 4) {\n        // Invalid index (overflow)\n        Pointer p(\"/4294967296\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_STREQ(\"4294967296\", p.GetTokens()[0].name);\n        EXPECT_EQ(kPointerInvalidIndex, p.GetTokens()[0].index);\n    }\n\n    {\n        // kPointerParseErrorTokenMustBeginWithSolidus\n        Pointer p(\" \");\n        EXPECT_FALSE(p.IsValid());\n        EXPECT_EQ(kPointerParseErrorTokenMustBeginWithSolidus, p.GetParseErrorCode());\n        EXPECT_EQ(0u, p.GetParseErrorOffset());\n    }\n\n    {\n        // kPointerParseErrorInvalidEscape\n        Pointer p(\"/~\");\n        EXPECT_FALSE(p.IsValid());\n        EXPECT_EQ(kPointerParseErrorInvalidEscape, p.GetParseErrorCode());\n        EXPECT_EQ(2u, p.GetParseErrorOffset());\n    }\n\n    {\n        // kPointerParseErrorInvalidEscape\n        Pointer p(\"/~2\");\n        
EXPECT_FALSE(p.IsValid());\n        EXPECT_EQ(kPointerParseErrorInvalidEscape, p.GetParseErrorCode());\n        EXPECT_EQ(2u, p.GetParseErrorOffset());\n    }\n}\n\nTEST(Pointer, Parse_URIFragment) {\n    {\n        Pointer p(\"#\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(0u, p.GetTokenCount());\n    }\n\n    {\n        Pointer p(\"#/foo\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_EQ(3u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"foo\", p.GetTokens()[0].name);\n    }\n\n    {\n        Pointer p(\"#/foo/0\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(2u, p.GetTokenCount());\n        EXPECT_EQ(3u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"foo\", p.GetTokens()[0].name);\n        EXPECT_EQ(1u, p.GetTokens()[1].length);\n        EXPECT_STREQ(\"0\", p.GetTokens()[1].name);\n        EXPECT_EQ(0u, p.GetTokens()[1].index);\n    }\n\n    {\n        // Unescape ~1\n        Pointer p(\"#/a~1b\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_EQ(3u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"a/b\", p.GetTokens()[0].name);\n    }\n\n    {\n        // Unescape ~0\n        Pointer p(\"#/m~0n\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_EQ(3u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"m~n\", p.GetTokens()[0].name);\n    }\n\n    {\n        // empty name\n        Pointer p(\"#/\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_EQ(0u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"\", p.GetTokens()[0].name);\n    }\n\n    {\n        // empty and non-empty name\n        Pointer p(\"#//a\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(2u, p.GetTokenCount());\n        EXPECT_EQ(0u, p.GetTokens()[0].length);\n        EXPECT_STREQ(\"\", p.GetTokens()[0].name);\n        EXPECT_EQ(1u, p.GetTokens()[1].length);\n        
EXPECT_STREQ(\"a\", p.GetTokens()[1].name);\n    }\n\n    {\n        // Null characters\n        Pointer p(\"#/%00%00\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_EQ(2u, p.GetTokens()[0].length);\n        EXPECT_EQ('\\0', p.GetTokens()[0].name[0]);\n        EXPECT_EQ('\\0', p.GetTokens()[0].name[1]);\n        EXPECT_EQ('\\0', p.GetTokens()[0].name[2]);\n    }\n\n    {\n        // Percentage Escapes\n        EXPECT_STREQ(\"c%d\", Pointer(\"#/c%25d\").GetTokens()[0].name);\n        EXPECT_STREQ(\"e^f\", Pointer(\"#/e%5Ef\").GetTokens()[0].name);\n        EXPECT_STREQ(\"g|h\", Pointer(\"#/g%7Ch\").GetTokens()[0].name);\n        EXPECT_STREQ(\"i\\\\j\", Pointer(\"#/i%5Cj\").GetTokens()[0].name);\n        EXPECT_STREQ(\"k\\\"l\", Pointer(\"#/k%22l\").GetTokens()[0].name);\n        EXPECT_STREQ(\" \", Pointer(\"#/%20\").GetTokens()[0].name);\n    }\n\n    {\n        // Valid index\n        Pointer p(\"#/123\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_STREQ(\"123\", p.GetTokens()[0].name);\n        EXPECT_EQ(123u, p.GetTokens()[0].index);\n    }\n\n    {\n        // Invalid index (with leading zero)\n        Pointer p(\"#/01\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_STREQ(\"01\", p.GetTokens()[0].name);\n        EXPECT_EQ(kPointerInvalidIndex, p.GetTokens()[0].index);\n    }\n\n    if (sizeof(SizeType) == 4) {\n        // Invalid index (overflow)\n        Pointer p(\"#/4294967296\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_STREQ(\"4294967296\", p.GetTokens()[0].name);\n        EXPECT_EQ(kPointerInvalidIndex, p.GetTokens()[0].index);\n    }\n\n    {\n        // Decode UTF-8 percent encoding to UTF-8\n        Pointer p(\"#/%C2%A2\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_STREQ(\"\\xC2\\xA2\", 
p.GetTokens()[0].name);\n    }\n\n    {\n        // Decode UTF-8 percent encoding to UTF-16\n        GenericPointer<GenericValue<UTF16<> > > p(L\"#/%C2%A2\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_EQ(static_cast<UTF16<>::Ch>(0x00A2), p.GetTokens()[0].name[0]);\n        EXPECT_EQ(1u, p.GetTokens()[0].length);\n    }\n\n    {\n        // Decode UTF-8 percent encoding to UTF-16\n        GenericPointer<GenericValue<UTF16<> > > p(L\"#/%E2%82%AC\");\n        EXPECT_TRUE(p.IsValid());\n        EXPECT_EQ(1u, p.GetTokenCount());\n        EXPECT_EQ(static_cast<UTF16<>::Ch>(0x20AC), p.GetTokens()[0].name[0]);\n        EXPECT_EQ(1u, p.GetTokens()[0].length);\n    }\n\n    {\n        // kPointerParseErrorTokenMustBeginWithSolidus\n        Pointer p(\"# \");\n        EXPECT_FALSE(p.IsValid());\n        EXPECT_EQ(kPointerParseErrorTokenMustBeginWithSolidus, p.GetParseErrorCode());\n        EXPECT_EQ(1u, p.GetParseErrorOffset());\n    }\n\n    {\n        // kPointerParseErrorInvalidEscape\n        Pointer p(\"#/~\");\n        EXPECT_FALSE(p.IsValid());\n        EXPECT_EQ(kPointerParseErrorInvalidEscape, p.GetParseErrorCode());\n        EXPECT_EQ(3u, p.GetParseErrorOffset());\n    }\n\n    {\n        // kPointerParseErrorInvalidEscape\n        Pointer p(\"#/~2\");\n        EXPECT_FALSE(p.IsValid());\n        EXPECT_EQ(kPointerParseErrorInvalidEscape, p.GetParseErrorCode());\n        EXPECT_EQ(3u, p.GetParseErrorOffset());\n    }\n\n    {\n        // kPointerParseErrorInvalidPercentEncoding\n        Pointer p(\"#/%\");\n        EXPECT_FALSE(p.IsValid());\n        EXPECT_EQ(kPointerParseErrorInvalidPercentEncoding, p.GetParseErrorCode());\n        EXPECT_EQ(2u, p.GetParseErrorOffset());\n    }\n\n    {\n        // kPointerParseErrorInvalidPercentEncoding (invalid hex)\n        Pointer p(\"#/%g0\");\n        EXPECT_FALSE(p.IsValid());\n        EXPECT_EQ(kPointerParseErrorInvalidPercentEncoding, p.GetParseErrorCode());\n        
EXPECT_EQ(2u, p.GetParseErrorOffset());\n    }\n\n    {\n        // kPointerParseErrorInvalidPercentEncoding (invalid hex)\n        Pointer p(\"#/%0g\");\n        EXPECT_FALSE(p.IsValid());\n        EXPECT_EQ(kPointerParseErrorInvalidPercentEncoding, p.GetParseErrorCode());\n        EXPECT_EQ(2u, p.GetParseErrorOffset());\n    }\n\n    {\n        // kPointerParseErrorInvalidPercentEncoding (incomplete UTF-8 sequence)\n        Pointer p(\"#/%C2\");\n        EXPECT_FALSE(p.IsValid());\n        EXPECT_EQ(kPointerParseErrorInvalidPercentEncoding, p.GetParseErrorCode());\n        EXPECT_EQ(2u, p.GetParseErrorOffset());\n    }\n\n    {\n        // kPointerParseErrorCharacterMustPercentEncode\n        Pointer p(\"#/ \");\n        EXPECT_FALSE(p.IsValid());\n        EXPECT_EQ(kPointerParseErrorCharacterMustPercentEncode, p.GetParseErrorCode());\n        EXPECT_EQ(2u, p.GetParseErrorOffset());\n    }\n\n    {\n        // kPointerParseErrorCharacterMustPercentEncode\n        Pointer p(\"#/\\n\");\n        EXPECT_FALSE(p.IsValid());\n        EXPECT_EQ(kPointerParseErrorCharacterMustPercentEncode, p.GetParseErrorCode());\n        EXPECT_EQ(2u, p.GetParseErrorOffset());\n    }\n}\n\nTEST(Pointer, Stringify) {\n    // Test by roundtrip\n    const char* sources[] = {\n        \"\",\n        \"/foo\",\n        \"/foo/0\",\n        \"/\",\n        \"/a~1b\",\n        \"/c%d\",\n        \"/e^f\",\n        \"/g|h\",\n        \"/i\\\\j\",\n        \"/k\\\"l\",\n        \"/ \",\n        \"/m~0n\",\n        \"/\\xC2\\xA2\",\n        \"/\\xE2\\x82\\xAC\",\n        \"/\\xF0\\x9D\\x84\\x9E\"\n    };\n\n    for (size_t i = 0; i < sizeof(sources) / sizeof(sources[0]); i++) {\n        Pointer p(sources[i]);\n        StringBuffer s;\n        EXPECT_TRUE(p.Stringify(s));\n        EXPECT_STREQ(sources[i], s.GetString());\n\n        // Stringify to URI fragment\n        StringBuffer s2;\n        EXPECT_TRUE(p.StringifyUriFragment(s2));\n        Pointer p2(s2.GetString(), s2.GetSize());\n        
EXPECT_TRUE(p2.IsValid());\n        EXPECT_TRUE(p == p2);\n    }\n\n    {\n        // Stringify to URI fragment with an invalid UTF-8 sequence\n        Pointer p(\"/\xC2\");\n        StringBuffer s;\n        EXPECT_FALSE(p.StringifyUriFragment(s));\n    }\n}\n\n// Construct a Pointer with static tokens, no dynamic allocation involved.\n#define NAME(s) { s, static_cast<SizeType>(sizeof(s) / sizeof(s[0]) - 1), kPointerInvalidIndex }\n#define INDEX(i) { #i, static_cast<SizeType>(sizeof(#i) - 1), i }\n\nstatic const Pointer::Token kTokens[] = { NAME(\"foo\"), INDEX(0) }; // equivalent to \"/foo/0\"\n\n#undef NAME\n#undef INDEX\n\nTEST(Pointer, ConstructorWithToken) {\n    Pointer p(kTokens, sizeof(kTokens) / sizeof(kTokens[0]));\n    EXPECT_TRUE(p.IsValid());\n    EXPECT_EQ(2u, p.GetTokenCount());\n    EXPECT_EQ(3u, p.GetTokens()[0].length);\n    EXPECT_STREQ(\"foo\", p.GetTokens()[0].name);\n    EXPECT_EQ(1u, p.GetTokens()[1].length);\n    EXPECT_STREQ(\"0\", p.GetTokens()[1].name);\n    EXPECT_EQ(0u, p.GetTokens()[1].index);\n}\n\nTEST(Pointer, CopyConstructor) {\n    {\n        CrtAllocator allocator;\n        Pointer p(\"/foo/0\", &allocator);\n        Pointer q(p);\n        EXPECT_TRUE(q.IsValid());\n        EXPECT_EQ(2u, q.GetTokenCount());\n        EXPECT_EQ(3u, q.GetTokens()[0].length);\n        EXPECT_STREQ(\"foo\", q.GetTokens()[0].name);\n        EXPECT_EQ(1u, q.GetTokens()[1].length);\n        EXPECT_STREQ(\"0\", q.GetTokens()[1].name);\n        EXPECT_EQ(0u, q.GetTokens()[1].index);\n\n        // Copied pointer needs to have its own allocator\n        EXPECT_NE(&p.GetAllocator(), &q.GetAllocator());\n    }\n\n    // Static tokens\n    {\n        Pointer p(kTokens, sizeof(kTokens) / sizeof(kTokens[0]));\n        Pointer q(p);\n        EXPECT_TRUE(q.IsValid());\n        EXPECT_EQ(2u, q.GetTokenCount());\n        EXPECT_EQ(3u, q.GetTokens()[0].length);\n        EXPECT_STREQ(\"foo\", q.GetTokens()[0].name);\n        EXPECT_EQ(1u, q.GetTokens()[1].length);\n    
    EXPECT_STREQ(\"0\", q.GetTokens()[1].name);\n        EXPECT_EQ(0u, q.GetTokens()[1].index);\n    }\n}\n\nTEST(Pointer, Assignment) {\n    {\n        CrtAllocator allocator;\n        Pointer p(\"/foo/0\", &allocator);\n        Pointer q;\n        q = p;\n        EXPECT_TRUE(q.IsValid());\n        EXPECT_EQ(2u, q.GetTokenCount());\n        EXPECT_EQ(3u, q.GetTokens()[0].length);\n        EXPECT_STREQ(\"foo\", q.GetTokens()[0].name);\n        EXPECT_EQ(1u, q.GetTokens()[1].length);\n        EXPECT_STREQ(\"0\", q.GetTokens()[1].name);\n        EXPECT_EQ(0u, q.GetTokens()[1].index);\n        EXPECT_NE(&p.GetAllocator(), &q.GetAllocator());\n        q = static_cast<const Pointer &>(q);\n        EXPECT_TRUE(q.IsValid());\n        EXPECT_EQ(2u, q.GetTokenCount());\n        EXPECT_EQ(3u, q.GetTokens()[0].length);\n        EXPECT_STREQ(\"foo\", q.GetTokens()[0].name);\n        EXPECT_EQ(1u, q.GetTokens()[1].length);\n        EXPECT_STREQ(\"0\", q.GetTokens()[1].name);\n        EXPECT_EQ(0u, q.GetTokens()[1].index);\n        EXPECT_NE(&p.GetAllocator(), &q.GetAllocator());\n    }\n\n    // Static tokens\n    {\n        Pointer p(kTokens, sizeof(kTokens) / sizeof(kTokens[0]));\n        Pointer q;\n        q = p;\n        EXPECT_TRUE(q.IsValid());\n        EXPECT_EQ(2u, q.GetTokenCount());\n        EXPECT_EQ(3u, q.GetTokens()[0].length);\n        EXPECT_STREQ(\"foo\", q.GetTokens()[0].name);\n        EXPECT_EQ(1u, q.GetTokens()[1].length);\n        EXPECT_STREQ(\"0\", q.GetTokens()[1].name);\n        EXPECT_EQ(0u, q.GetTokens()[1].index);\n    }\n}\n\nTEST(Pointer, Swap) {\n    Pointer p(\"/foo/0\");\n    Pointer q(&p.GetAllocator());\n\n    q.Swap(p);\n    EXPECT_EQ(&q.GetAllocator(), &p.GetAllocator());\n    EXPECT_TRUE(p.IsValid());\n    EXPECT_TRUE(q.IsValid());\n    EXPECT_EQ(0u, p.GetTokenCount());\n    EXPECT_EQ(2u, q.GetTokenCount());\n    EXPECT_EQ(3u, q.GetTokens()[0].length);\n    EXPECT_STREQ(\"foo\", q.GetTokens()[0].name);\n    EXPECT_EQ(1u, 
q.GetTokens()[1].length);\n    EXPECT_STREQ(\"0\", q.GetTokens()[1].name);\n    EXPECT_EQ(0u, q.GetTokens()[1].index);\n\n    // std::swap compatibility\n    std::swap(p, q);\n    EXPECT_EQ(&p.GetAllocator(), &q.GetAllocator());\n    EXPECT_TRUE(q.IsValid());\n    EXPECT_TRUE(p.IsValid());\n    EXPECT_EQ(0u, q.GetTokenCount());\n    EXPECT_EQ(2u, p.GetTokenCount());\n    EXPECT_EQ(3u, p.GetTokens()[0].length);\n    EXPECT_STREQ(\"foo\", p.GetTokens()[0].name);\n    EXPECT_EQ(1u, p.GetTokens()[1].length);\n    EXPECT_STREQ(\"0\", p.GetTokens()[1].name);\n    EXPECT_EQ(0u, p.GetTokens()[1].index);\n}\n\nTEST(Pointer, Append) {\n    {\n        Pointer p;\n        Pointer q = p.Append(\"foo\");\n        EXPECT_TRUE(Pointer(\"/foo\") == q);\n        q = q.Append(1234);\n        EXPECT_TRUE(Pointer(\"/foo/1234\") == q);\n        q = q.Append(\"\");\n        EXPECT_TRUE(Pointer(\"/foo/1234/\") == q);\n    }\n\n    {\n        Pointer p;\n        Pointer q = p.Append(Value(\"foo\").Move());\n        EXPECT_TRUE(Pointer(\"/foo\") == q);\n        q = q.Append(Value(1234).Move());\n        EXPECT_TRUE(Pointer(\"/foo/1234\") == q);\n        q = q.Append(Value(kStringType).Move());\n        EXPECT_TRUE(Pointer(\"/foo/1234/\") == q);\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    {\n        Pointer p;\n        Pointer q = p.Append(std::string(\"foo\"));\n        EXPECT_TRUE(Pointer(\"/foo\") == q);\n    }\n#endif\n}\n\nTEST(Pointer, Equality) {\n    EXPECT_TRUE(Pointer(\"/foo/0\") == Pointer(\"/foo/0\"));\n    EXPECT_FALSE(Pointer(\"/foo/0\") == Pointer(\"/foo/1\"));\n    EXPECT_FALSE(Pointer(\"/foo/0\") == Pointer(\"/foo/0/1\"));\n    EXPECT_FALSE(Pointer(\"/foo/0\") == Pointer(\"a\"));\n    EXPECT_FALSE(Pointer(\"a\") == Pointer(\"a\")); // Invalid always not equal\n}\n\nTEST(Pointer, Inequality) {\n    EXPECT_FALSE(Pointer(\"/foo/0\") != Pointer(\"/foo/0\"));\n    EXPECT_TRUE(Pointer(\"/foo/0\") != Pointer(\"/foo/1\"));\n    EXPECT_TRUE(Pointer(\"/foo/0\") != 
Pointer(\"/foo/0/1\"));\n    EXPECT_TRUE(Pointer(\"/foo/0\") != Pointer(\"a\"));\n    EXPECT_TRUE(Pointer(\"a\") != Pointer(\"a\")); // Invalid always not equal\n}\n\nTEST(Pointer, Create) {\n    Document d;\n    {\n        Value* v = &Pointer(\"\").Create(d, d.GetAllocator());\n        EXPECT_EQ(&d, v);\n    }\n    {\n        Value* v = &Pointer(\"/foo\").Create(d, d.GetAllocator());\n        EXPECT_EQ(&d[\"foo\"], v);\n    }\n    {\n        Value* v = &Pointer(\"/foo/0\").Create(d, d.GetAllocator());\n        EXPECT_EQ(&d[\"foo\"][0], v);\n    }\n    {\n        Value* v = &Pointer(\"/foo/-\").Create(d, d.GetAllocator());\n        EXPECT_EQ(&d[\"foo\"][1], v);\n    }\n\n    {\n        Value* v = &Pointer(\"/foo/-/-\").Create(d, d.GetAllocator());\n        // \"foo/-\" is a newly created null value x.\n        // \"foo/-/-\" finds that x is not an array, it converts x to empty object\n        // and treats - as \"-\" member name\n        EXPECT_EQ(&d[\"foo\"][2][\"-\"], v);\n    }\n\n    {\n        // Document with no allocator\n        Value* v = &Pointer(\"/foo/-\").Create(d);\n        EXPECT_EQ(&d[\"foo\"][3], v);\n    }\n\n    {\n        // Value (not document) must give allocator\n        Value* v = &Pointer(\"/-\").Create(d[\"foo\"], d.GetAllocator());\n        EXPECT_EQ(&d[\"foo\"][4], v);\n    }\n}\n\nstatic const char kJsonIds[] = \"{\\n\"\n   \"    \\\"id\\\": \\\"/root/\\\",\"\n   \"    \\\"foo\\\":[\\\"bar\\\", \\\"baz\\\", {\\\"id\\\": \\\"inarray\\\", \\\"child\\\": 1}],\\n\"\n   \"    \\\"int\\\" : 2,\\n\"\n   \"    \\\"str\\\" : \\\"val\\\",\\n\"\n   \"    \\\"obj\\\": {\\\"id\\\": \\\"inobj\\\", \\\"child\\\": 3},\\n\"\n   \"    \\\"jbo\\\": {\\\"id\\\": true, \\\"child\\\": 4}\\n\"\n   \"}\";\n\n\nTEST(Pointer, GetUri) {\n    CrtAllocator allocator;\n    Document d;\n    d.Parse(kJsonIds);\n    Pointer::UriType doc(\"http://doc\");\n    Pointer::UriType root(\"http://doc/root/\");\n    Pointer::UriType empty = Pointer::UriType();\n\n    
// GetUri returns the URI in scope at the target value: a string-valued \"id\" member\n    // encountered along the path rebases the current URI (see kJsonIds above).\n    EXPECT_TRUE(Pointer(\"\").GetUri(d, doc) == doc);\n    EXPECT_TRUE(Pointer(\"/foo\").GetUri(d, doc) == root);\n    EXPECT_TRUE(Pointer(\"/foo/0\").GetUri(d, doc) == root);\n    EXPECT_TRUE(Pointer(\"/foo/2\").GetUri(d, doc) == root);\n    EXPECT_TRUE(Pointer(\"/foo/2/child\").GetUri(d, doc) == Pointer::UriType(\"http://doc/root/inarray\"));\n    EXPECT_TRUE(Pointer(\"/int\").GetUri(d, doc) == root);\n    EXPECT_TRUE(Pointer(\"/str\").GetUri(d, doc) == root);\n    EXPECT_TRUE(Pointer(\"/obj\").GetUri(d, doc) == root);\n    EXPECT_TRUE(Pointer(\"/obj/child\").GetUri(d, doc) == Pointer::UriType(\"http://doc/root/inobj\"));\n    EXPECT_TRUE(Pointer(\"/jbo\").GetUri(d, doc) == root);\n    EXPECT_TRUE(Pointer(\"/jbo/child\").GetUri(d, doc) == root); // id not string\n\n    size_t unresolvedTokenIndex;\n    EXPECT_TRUE(Pointer(\"/abc\").GetUri(d, doc, &unresolvedTokenIndex, &allocator) == empty); // Out of boundary\n    EXPECT_EQ(0u, unresolvedTokenIndex);\n    EXPECT_TRUE(Pointer(\"/foo/3\").GetUri(d, doc, &unresolvedTokenIndex, &allocator) == empty); // Out of boundary\n    EXPECT_EQ(1u, unresolvedTokenIndex);\n    EXPECT_TRUE(Pointer(\"/foo/a\").GetUri(d, doc, &unresolvedTokenIndex, &allocator) == empty); // \"/foo\" is an array, cannot query by \"a\"\n    EXPECT_EQ(1u, unresolvedTokenIndex);\n    EXPECT_TRUE(Pointer(\"/foo/0/0\").GetUri(d, doc, &unresolvedTokenIndex, &allocator) == empty); // \"/foo/0\" is a string, cannot query further\n    EXPECT_EQ(2u, unresolvedTokenIndex);\n    EXPECT_TRUE(Pointer(\"/foo/0/a\").GetUri(d, doc, &unresolvedTokenIndex, &allocator) == empty); // \"/foo/0\" is a string, cannot query further\n    EXPECT_EQ(2u, unresolvedTokenIndex);\n\n    Pointer::Token tokens[] = { { \"foo ...\", 3, kPointerInvalidIndex } }; // only the first 3 characters of name are used (\"foo\")\n    EXPECT_TRUE(Pointer(tokens, 1).GetUri(d, doc) == root);\n}\n\nTEST(Pointer, Get) {\n    Document d;\n    d.Parse(kJson);\n\n    EXPECT_EQ(&d, Pointer(\"\").Get(d));\n    EXPECT_EQ(&d[\"foo\"], Pointer(\"/foo\").Get(d));\n   
 EXPECT_EQ(&d[\"foo\"][0], Pointer(\"/foo/0\").Get(d));\n    EXPECT_EQ(&d[\"\"], Pointer(\"/\").Get(d));\n    EXPECT_EQ(&d[\"a/b\"], Pointer(\"/a~1b\").Get(d));\n    EXPECT_EQ(&d[\"c%d\"], Pointer(\"/c%d\").Get(d));\n    EXPECT_EQ(&d[\"e^f\"], Pointer(\"/e^f\").Get(d));\n    EXPECT_EQ(&d[\"g|h\"], Pointer(\"/g|h\").Get(d));\n    EXPECT_EQ(&d[\"i\\\\j\"], Pointer(\"/i\\\\j\").Get(d));\n    EXPECT_EQ(&d[\"k\\\"l\"], Pointer(\"/k\\\"l\").Get(d));\n    EXPECT_EQ(&d[\" \"], Pointer(\"/ \").Get(d));\n    EXPECT_EQ(&d[\"m~n\"], Pointer(\"/m~0n\").Get(d));\n\n    EXPECT_TRUE(Pointer(\"/abc\").Get(d) == 0);  // Out of boundary\n    size_t unresolvedTokenIndex;\n    EXPECT_TRUE(Pointer(\"/foo/2\").Get(d, &unresolvedTokenIndex) == 0); // Out of boundary\n    EXPECT_EQ(1u, unresolvedTokenIndex);\n    EXPECT_TRUE(Pointer(\"/foo/a\").Get(d, &unresolvedTokenIndex) == 0); // \"/foo\" is an array, cannot query by \"a\"\n    EXPECT_EQ(1u, unresolvedTokenIndex);\n    EXPECT_TRUE(Pointer(\"/foo/0/0\").Get(d, &unresolvedTokenIndex) == 0); // \"/foo/0\" is a string, cannot query further\n    EXPECT_EQ(2u, unresolvedTokenIndex);\n    EXPECT_TRUE(Pointer(\"/foo/0/a\").Get(d, &unresolvedTokenIndex) == 0); // \"/foo/0\" is a string, cannot query further\n    EXPECT_EQ(2u, unresolvedTokenIndex);\n\n    Pointer::Token tokens[] = { { \"foo ...\", 3, kPointerInvalidIndex } };\n    EXPECT_EQ(&d[\"foo\"], Pointer(tokens, 1).Get(d));\n}\n\nTEST(Pointer, GetWithDefault) {\n    Document d;\n    d.Parse(kJson);\n\n    // Value version\n    Document::AllocatorType& a = d.GetAllocator();\n    const Value v(\"qux\");\n    EXPECT_TRUE(Value(\"bar\") == Pointer(\"/foo/0\").GetWithDefault(d, v, a));\n    EXPECT_TRUE(Value(\"baz\") == Pointer(\"/foo/1\").GetWithDefault(d, v, a));\n    EXPECT_TRUE(Value(\"qux\") == Pointer(\"/foo/2\").GetWithDefault(d, v, a));\n    EXPECT_TRUE(Value(\"last\") == Pointer(\"/foo/-\").GetWithDefault(d, Value(\"last\").Move(), a));\n    EXPECT_STREQ(\"last\", 
d[\"foo\"][3].GetString());\n\n    EXPECT_TRUE(Pointer(\"/foo/null\").GetWithDefault(d, Value().Move(), a).IsNull());\n    EXPECT_TRUE(Pointer(\"/foo/null\").GetWithDefault(d, \"x\", a).IsNull());\n\n    // Generic version\n    EXPECT_EQ(-1, Pointer(\"/foo/int\").GetWithDefault(d, -1, a).GetInt());\n    EXPECT_EQ(-1, Pointer(\"/foo/int\").GetWithDefault(d, -2, a).GetInt());\n    EXPECT_EQ(0x87654321, Pointer(\"/foo/uint\").GetWithDefault(d, 0x87654321, a).GetUint());\n    EXPECT_EQ(0x87654321, Pointer(\"/foo/uint\").GetWithDefault(d, 0x12345678, a).GetUint());\n\n    const int64_t i64 = static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0));\n    EXPECT_EQ(i64, Pointer(\"/foo/int64\").GetWithDefault(d, i64, a).GetInt64());\n    EXPECT_EQ(i64, Pointer(\"/foo/int64\").GetWithDefault(d, i64 + 1, a).GetInt64());\n\n    const uint64_t u64 = RAPIDJSON_UINT64_C2(0xFFFFFFFFF, 0xFFFFFFFFF);\n    EXPECT_EQ(u64, Pointer(\"/foo/uint64\").GetWithDefault(d, u64, a).GetUint64());\n    EXPECT_EQ(u64, Pointer(\"/foo/uint64\").GetWithDefault(d, u64 - 1, a).GetUint64());\n\n    EXPECT_TRUE(Pointer(\"/foo/true\").GetWithDefault(d, true, a).IsTrue());\n    EXPECT_TRUE(Pointer(\"/foo/true\").GetWithDefault(d, false, a).IsTrue());\n\n    EXPECT_TRUE(Pointer(\"/foo/false\").GetWithDefault(d, false, a).IsFalse());\n    EXPECT_TRUE(Pointer(\"/foo/false\").GetWithDefault(d, true, a).IsFalse());\n\n    // StringRef version\n    EXPECT_STREQ(\"Hello\", Pointer(\"/foo/hello\").GetWithDefault(d, \"Hello\", a).GetString());\n\n    // Copy string version\n    {\n        char buffer[256];\n        strcpy(buffer, \"World\");\n        EXPECT_STREQ(\"World\", Pointer(\"/foo/world\").GetWithDefault(d, buffer, a).GetString());\n        memset(buffer, 0, sizeof(buffer));\n    }\n    EXPECT_STREQ(\"World\", GetValueByPointer(d, \"/foo/world\")->GetString());\n\n#if RAPIDJSON_HAS_STDSTRING\n    EXPECT_STREQ(\"C++\", Pointer(\"/foo/C++\").GetWithDefault(d, std::string(\"C++\"), 
a).GetString());\n#endif\n}\n\nTEST(Pointer, GetWithDefault_NoAllocator) {\n    Document d;\n    d.Parse(kJson);\n\n    // Value version\n    const Value v(\"qux\");\n    EXPECT_TRUE(Value(\"bar\") == Pointer(\"/foo/0\").GetWithDefault(d, v));\n    EXPECT_TRUE(Value(\"baz\") == Pointer(\"/foo/1\").GetWithDefault(d, v));\n    EXPECT_TRUE(Value(\"qux\") == Pointer(\"/foo/2\").GetWithDefault(d, v));\n    EXPECT_TRUE(Value(\"last\") == Pointer(\"/foo/-\").GetWithDefault(d, Value(\"last\").Move()));\n    EXPECT_STREQ(\"last\", d[\"foo\"][3].GetString());\n\n    EXPECT_TRUE(Pointer(\"/foo/null\").GetWithDefault(d, Value().Move()).IsNull());\n    EXPECT_TRUE(Pointer(\"/foo/null\").GetWithDefault(d, \"x\").IsNull());\n\n    // Generic version\n    EXPECT_EQ(-1, Pointer(\"/foo/int\").GetWithDefault(d, -1).GetInt());\n    EXPECT_EQ(-1, Pointer(\"/foo/int\").GetWithDefault(d, -2).GetInt());\n    EXPECT_EQ(0x87654321, Pointer(\"/foo/uint\").GetWithDefault(d, 0x87654321).GetUint());\n    EXPECT_EQ(0x87654321, Pointer(\"/foo/uint\").GetWithDefault(d, 0x12345678).GetUint());\n\n    const int64_t i64 = static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0));\n    EXPECT_EQ(i64, Pointer(\"/foo/int64\").GetWithDefault(d, i64).GetInt64());\n    EXPECT_EQ(i64, Pointer(\"/foo/int64\").GetWithDefault(d, i64 + 1).GetInt64());\n\n    const uint64_t u64 = RAPIDJSON_UINT64_C2(0xFFFFFFFFF, 0xFFFFFFFFF);\n    EXPECT_EQ(u64, Pointer(\"/foo/uint64\").GetWithDefault(d, u64).GetUint64());\n    EXPECT_EQ(u64, Pointer(\"/foo/uint64\").GetWithDefault(d, u64 - 1).GetUint64());\n\n    EXPECT_TRUE(Pointer(\"/foo/true\").GetWithDefault(d, true).IsTrue());\n    EXPECT_TRUE(Pointer(\"/foo/true\").GetWithDefault(d, false).IsTrue());\n\n    EXPECT_TRUE(Pointer(\"/foo/false\").GetWithDefault(d, false).IsFalse());\n    EXPECT_TRUE(Pointer(\"/foo/false\").GetWithDefault(d, true).IsFalse());\n\n    // StringRef version\n    EXPECT_STREQ(\"Hello\", Pointer(\"/foo/hello\").GetWithDefault(d, 
\"Hello\").GetString());\n\n    // Copy string version\n    {\n        char buffer[256];\n        strcpy(buffer, \"World\");\n        EXPECT_STREQ(\"World\", Pointer(\"/foo/world\").GetWithDefault(d, buffer).GetString());\n        memset(buffer, 0, sizeof(buffer));\n    }\n    EXPECT_STREQ(\"World\", GetValueByPointer(d, \"/foo/world\")->GetString());\n\n#if RAPIDJSON_HAS_STDSTRING\n    EXPECT_STREQ(\"C++\", Pointer(\"/foo/C++\").GetWithDefault(d, std::string(\"C++\")).GetString());\n#endif\n}\n\nTEST(Pointer, Set) {\n    Document d;\n    d.Parse(kJson);\n    Document::AllocatorType& a = d.GetAllocator();\n    \n    // Value version\n    Pointer(\"/foo/0\").Set(d, Value(123).Move(), a);\n    EXPECT_EQ(123, d[\"foo\"][0].GetInt());\n\n    Pointer(\"/foo/-\").Set(d, Value(456).Move(), a);\n    EXPECT_EQ(456, d[\"foo\"][2].GetInt());\n\n    Pointer(\"/foo/null\").Set(d, Value().Move(), a);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/null\")->IsNull());\n\n    // Const Value version\n    const Value foo(d[\"foo\"], a);\n    Pointer(\"/clone\").Set(d, foo, a);\n    EXPECT_EQ(foo, *GetValueByPointer(d, \"/clone\"));\n\n    // Generic version\n    Pointer(\"/foo/int\").Set(d, -1, a);\n    EXPECT_EQ(-1, GetValueByPointer(d, \"/foo/int\")->GetInt());\n\n    Pointer(\"/foo/uint\").Set(d, 0x87654321, a);\n    EXPECT_EQ(0x87654321, GetValueByPointer(d, \"/foo/uint\")->GetUint());\n\n    const int64_t i64 = static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0));\n    Pointer(\"/foo/int64\").Set(d, i64, a);\n    EXPECT_EQ(i64, GetValueByPointer(d, \"/foo/int64\")->GetInt64());\n\n    const uint64_t u64 = RAPIDJSON_UINT64_C2(0xFFFFFFFFF, 0xFFFFFFFFF);\n    Pointer(\"/foo/uint64\").Set(d, u64, a);\n    EXPECT_EQ(u64, GetValueByPointer(d, \"/foo/uint64\")->GetUint64());\n\n    Pointer(\"/foo/true\").Set(d, true, a);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/true\")->IsTrue());\n\n    Pointer(\"/foo/false\").Set(d, false, a);\n    EXPECT_TRUE(GetValueByPointer(d, 
\"/foo/false\")->IsFalse());\n\n    // StringRef version\n    Pointer(\"/foo/hello\").Set(d, \"Hello\", a);\n    EXPECT_STREQ(\"Hello\", GetValueByPointer(d, \"/foo/hello\")->GetString());\n\n    // Copy string version\n    {\n        char buffer[256];\n        strcpy(buffer, \"World\");\n        Pointer(\"/foo/world\").Set(d, buffer, a);\n        memset(buffer, 0, sizeof(buffer));\n    }\n    EXPECT_STREQ(\"World\", GetValueByPointer(d, \"/foo/world\")->GetString());\n\n#if RAPIDJSON_HAS_STDSTRING\n    Pointer(\"/foo/c++\").Set(d, std::string(\"C++\"), a);\n    EXPECT_STREQ(\"C++\", GetValueByPointer(d, \"/foo/c++\")->GetString());\n#endif\n}\n\nTEST(Pointer, Set_NoAllocator) {\n    Document d;\n    d.Parse(kJson);\n    \n    // Value version\n    Pointer(\"/foo/0\").Set(d, Value(123).Move());\n    EXPECT_EQ(123, d[\"foo\"][0].GetInt());\n\n    Pointer(\"/foo/-\").Set(d, Value(456).Move());\n    EXPECT_EQ(456, d[\"foo\"][2].GetInt());\n\n    Pointer(\"/foo/null\").Set(d, Value().Move());\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/null\")->IsNull());\n\n    // Const Value version\n    const Value foo(d[\"foo\"], d.GetAllocator());\n    Pointer(\"/clone\").Set(d, foo);\n    EXPECT_EQ(foo, *GetValueByPointer(d, \"/clone\"));\n\n    // Generic version\n    Pointer(\"/foo/int\").Set(d, -1);\n    EXPECT_EQ(-1, GetValueByPointer(d, \"/foo/int\")->GetInt());\n\n    Pointer(\"/foo/uint\").Set(d, 0x87654321);\n    EXPECT_EQ(0x87654321, GetValueByPointer(d, \"/foo/uint\")->GetUint());\n\n    const int64_t i64 = static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0));\n    Pointer(\"/foo/int64\").Set(d, i64);\n    EXPECT_EQ(i64, GetValueByPointer(d, \"/foo/int64\")->GetInt64());\n\n    const uint64_t u64 = RAPIDJSON_UINT64_C2(0xFFFFFFFFF, 0xFFFFFFFFF);\n    Pointer(\"/foo/uint64\").Set(d, u64);\n    EXPECT_EQ(u64, GetValueByPointer(d, \"/foo/uint64\")->GetUint64());\n\n    Pointer(\"/foo/true\").Set(d, true);\n    EXPECT_TRUE(GetValueByPointer(d, 
\"/foo/true\")->IsTrue());\n\n    Pointer(\"/foo/false\").Set(d, false);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/false\")->IsFalse());\n\n    // StringRef version\n    Pointer(\"/foo/hello\").Set(d, \"Hello\");\n    EXPECT_STREQ(\"Hello\", GetValueByPointer(d, \"/foo/hello\")->GetString());\n\n    // Copy string version\n    {\n        char buffer[256];\n        strcpy(buffer, \"World\");\n        Pointer(\"/foo/world\").Set(d, buffer);\n        memset(buffer, 0, sizeof(buffer));\n    }\n    EXPECT_STREQ(\"World\", GetValueByPointer(d, \"/foo/world\")->GetString());\n\n#if RAPIDJSON_HAS_STDSTRING\n    Pointer(\"/foo/c++\").Set(d, std::string(\"C++\"));\n    EXPECT_STREQ(\"C++\", GetValueByPointer(d, \"/foo/c++\")->GetString());\n#endif\n}\n\nTEST(Pointer, Swap_Value) {\n    Document d;\n    d.Parse(kJson);\n    Document::AllocatorType& a = d.GetAllocator();\n    Pointer(\"/foo/0\").Swap(d, *Pointer(\"/foo/1\").Get(d), a);\n    EXPECT_STREQ(\"baz\", d[\"foo\"][0].GetString());\n    EXPECT_STREQ(\"bar\", d[\"foo\"][1].GetString());\n}\n\nTEST(Pointer, Swap_Value_NoAllocator) {\n    Document d;\n    d.Parse(kJson);\n    Pointer(\"/foo/0\").Swap(d, *Pointer(\"/foo/1\").Get(d));\n    EXPECT_STREQ(\"baz\", d[\"foo\"][0].GetString());\n    EXPECT_STREQ(\"bar\", d[\"foo\"][1].GetString());\n}\n\nTEST(Pointer, Erase) {\n    Document d;\n    d.Parse(kJson);\n\n    EXPECT_FALSE(Pointer(\"\").Erase(d));\n    EXPECT_FALSE(Pointer(\"/nonexist\").Erase(d));\n    EXPECT_FALSE(Pointer(\"/nonexist/nonexist\").Erase(d));\n    EXPECT_FALSE(Pointer(\"/foo/nonexist\").Erase(d));\n    EXPECT_FALSE(Pointer(\"/foo/nonexist/nonexist\").Erase(d));\n    EXPECT_FALSE(Pointer(\"/foo/0/nonexist\").Erase(d));\n    EXPECT_FALSE(Pointer(\"/foo/0/nonexist/nonexist\").Erase(d));\n    EXPECT_FALSE(Pointer(\"/foo/2/nonexist\").Erase(d));\n    EXPECT_TRUE(Pointer(\"/foo/0\").Erase(d));\n    EXPECT_EQ(1u, d[\"foo\"].Size());\n    EXPECT_STREQ(\"baz\", d[\"foo\"][0].GetString());\n    
EXPECT_TRUE(Pointer(\"/foo/0\").Erase(d));\n    EXPECT_TRUE(d[\"foo\"].Empty());\n    EXPECT_TRUE(Pointer(\"/foo\").Erase(d));\n    EXPECT_TRUE(Pointer(\"/foo\").Get(d) == 0);\n\n    Pointer(\"/a/0/b/0\").Create(d);\n\n    EXPECT_TRUE(Pointer(\"/a/0/b/0\").Get(d) != 0);\n    EXPECT_TRUE(Pointer(\"/a/0/b/0\").Erase(d));\n    EXPECT_TRUE(Pointer(\"/a/0/b/0\").Get(d) == 0);\n\n    EXPECT_TRUE(Pointer(\"/a/0/b\").Get(d) != 0);\n    EXPECT_TRUE(Pointer(\"/a/0/b\").Erase(d));\n    EXPECT_TRUE(Pointer(\"/a/0/b\").Get(d) == 0);\n\n    EXPECT_TRUE(Pointer(\"/a/0\").Get(d) != 0);\n    EXPECT_TRUE(Pointer(\"/a/0\").Erase(d));\n    EXPECT_TRUE(Pointer(\"/a/0\").Get(d) == 0);\n\n    EXPECT_TRUE(Pointer(\"/a\").Get(d) != 0);\n    EXPECT_TRUE(Pointer(\"/a\").Erase(d));\n    EXPECT_TRUE(Pointer(\"/a\").Get(d) == 0);\n}\n\nTEST(Pointer, CreateValueByPointer) {\n    Document d;\n    Document::AllocatorType& a = d.GetAllocator();\n\n    {\n        Value& v = CreateValueByPointer(d, Pointer(\"/foo/0\"), a);\n        EXPECT_EQ(&d[\"foo\"][0], &v);\n    }\n    {\n        Value& v = CreateValueByPointer(d, \"/foo/1\", a);\n        EXPECT_EQ(&d[\"foo\"][1], &v);\n    }\n}\n\nTEST(Pointer, CreateValueByPointer_NoAllocator) {\n    Document d;\n\n    {\n        Value& v = CreateValueByPointer(d, Pointer(\"/foo/0\"));\n        EXPECT_EQ(&d[\"foo\"][0], &v);\n    }\n    {\n        Value& v = CreateValueByPointer(d, \"/foo/1\");\n        EXPECT_EQ(&d[\"foo\"][1], &v);\n    }\n}\n\nTEST(Pointer, GetValueByPointer) {\n    Document d;\n    d.Parse(kJson);\n\n    EXPECT_EQ(&d[\"foo\"][0], GetValueByPointer(d, Pointer(\"/foo/0\")));\n    EXPECT_EQ(&d[\"foo\"][0], GetValueByPointer(d, \"/foo/0\"));\n\n    size_t unresolvedTokenIndex;\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/2\", &unresolvedTokenIndex) == 0); // Out of boundary\n    EXPECT_EQ(1u, unresolvedTokenIndex);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/a\", &unresolvedTokenIndex) == 0); // \"/foo\" is an array, cannot query by \"a\"\n   
 EXPECT_EQ(1u, unresolvedTokenIndex);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/0/0\", &unresolvedTokenIndex) == 0); // \"/foo/0\" is a string, cannot query further\n    EXPECT_EQ(2u, unresolvedTokenIndex);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/0/a\", &unresolvedTokenIndex) == 0); // \"/foo/0\" is a string, cannot query further\n    EXPECT_EQ(2u, unresolvedTokenIndex);\n\n    // const version\n    const Value& v = d;\n    EXPECT_EQ(&d[\"foo\"][0], GetValueByPointer(v, Pointer(\"/foo/0\")));\n    EXPECT_EQ(&d[\"foo\"][0], GetValueByPointer(v, \"/foo/0\"));\n\n    EXPECT_TRUE(GetValueByPointer(v, \"/foo/2\", &unresolvedTokenIndex) == 0); // Out of boundary\n    EXPECT_EQ(1u, unresolvedTokenIndex);\n    EXPECT_TRUE(GetValueByPointer(v, \"/foo/a\", &unresolvedTokenIndex) == 0); // \"/foo\" is an array, cannot query by \"a\"\n    EXPECT_EQ(1u, unresolvedTokenIndex);\n    EXPECT_TRUE(GetValueByPointer(v, \"/foo/0/0\", &unresolvedTokenIndex) == 0); // \"/foo/0\" is a string, cannot query further\n    EXPECT_EQ(2u, unresolvedTokenIndex);\n    EXPECT_TRUE(GetValueByPointer(v, \"/foo/0/a\", &unresolvedTokenIndex) == 0); // \"/foo/0\" is a string, cannot query further\n    EXPECT_EQ(2u, unresolvedTokenIndex);\n}\n\nTEST(Pointer, GetValueByPointerWithDefault_Pointer) {\n    Document d;\n    d.Parse(kJson);\n\n    Document::AllocatorType& a = d.GetAllocator();\n    const Value v(\"qux\");\n    EXPECT_TRUE(Value(\"bar\") == GetValueByPointerWithDefault(d, Pointer(\"/foo/0\"), v, a));\n    EXPECT_TRUE(Value(\"bar\") == GetValueByPointerWithDefault(d, Pointer(\"/foo/0\"), v, a));\n    EXPECT_TRUE(Value(\"baz\") == GetValueByPointerWithDefault(d, Pointer(\"/foo/1\"), v, a));\n    EXPECT_TRUE(Value(\"qux\") == GetValueByPointerWithDefault(d, Pointer(\"/foo/2\"), v, a));\n    EXPECT_TRUE(Value(\"last\") == GetValueByPointerWithDefault(d, Pointer(\"/foo/-\"), Value(\"last\").Move(), a));\n    EXPECT_STREQ(\"last\", d[\"foo\"][3].GetString());\n\n    
EXPECT_TRUE(GetValueByPointerWithDefault(d, Pointer(\"/foo/null\"), Value().Move(), a).IsNull());\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, Pointer(\"/foo/null\"), \"x\", a).IsNull());\n\n    // Generic version\n    EXPECT_EQ(-1, GetValueByPointerWithDefault(d, Pointer(\"/foo/int\"), -1, a).GetInt());\n    EXPECT_EQ(-1, GetValueByPointerWithDefault(d, Pointer(\"/foo/int\"), -2, a).GetInt());\n    EXPECT_EQ(0x87654321, GetValueByPointerWithDefault(d, Pointer(\"/foo/uint\"), 0x87654321, a).GetUint());\n    EXPECT_EQ(0x87654321, GetValueByPointerWithDefault(d, Pointer(\"/foo/uint\"), 0x12345678, a).GetUint());\n\n    const int64_t i64 = static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0));\n    EXPECT_EQ(i64, GetValueByPointerWithDefault(d, Pointer(\"/foo/int64\"), i64, a).GetInt64());\n    EXPECT_EQ(i64, GetValueByPointerWithDefault(d, Pointer(\"/foo/int64\"), i64 + 1, a).GetInt64());\n\n    const uint64_t u64 = RAPIDJSON_UINT64_C2(0xFFFFFFFFF, 0xFFFFFFFFF);\n    EXPECT_EQ(u64, GetValueByPointerWithDefault(d, Pointer(\"/foo/uint64\"), u64, a).GetUint64());\n    EXPECT_EQ(u64, GetValueByPointerWithDefault(d, Pointer(\"/foo/uint64\"), u64 - 1, a).GetUint64());\n\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, Pointer(\"/foo/true\"), true, a).IsTrue());\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, Pointer(\"/foo/true\"), false, a).IsTrue());\n\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, Pointer(\"/foo/false\"), false, a).IsFalse());\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, Pointer(\"/foo/false\"), true, a).IsFalse());\n\n    // StringRef version\n    EXPECT_STREQ(\"Hello\", GetValueByPointerWithDefault(d, Pointer(\"/foo/hello\"), \"Hello\", a).GetString());\n\n    // Copy string version\n    {\n        char buffer[256];\n        strcpy(buffer, \"World\");\n        EXPECT_STREQ(\"World\", GetValueByPointerWithDefault(d, Pointer(\"/foo/world\"), buffer, a).GetString());\n        memset(buffer, 0, sizeof(buffer));\n    }\n    
EXPECT_STREQ(\"World\", GetValueByPointer(d, Pointer(\"/foo/world\"))->GetString());\n\n#if RAPIDJSON_HAS_STDSTRING\n    EXPECT_STREQ(\"C++\", GetValueByPointerWithDefault(d, Pointer(\"/foo/C++\"), std::string(\"C++\"), a).GetString());\n#endif\n}\n\nTEST(Pointer, GetValueByPointerWithDefault_String) {\n    Document d;\n    d.Parse(kJson);\n\n    Document::AllocatorType& a = d.GetAllocator();\n    const Value v(\"qux\");\n    EXPECT_TRUE(Value(\"bar\") == GetValueByPointerWithDefault(d, \"/foo/0\", v, a));\n    EXPECT_TRUE(Value(\"bar\") == GetValueByPointerWithDefault(d, \"/foo/0\", v, a));\n    EXPECT_TRUE(Value(\"baz\") == GetValueByPointerWithDefault(d, \"/foo/1\", v, a));\n    EXPECT_TRUE(Value(\"qux\") == GetValueByPointerWithDefault(d, \"/foo/2\", v, a));\n    EXPECT_TRUE(Value(\"last\") == GetValueByPointerWithDefault(d, \"/foo/-\", Value(\"last\").Move(), a));\n    EXPECT_STREQ(\"last\", d[\"foo\"][3].GetString());\n\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, \"/foo/null\", Value().Move(), a).IsNull());\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, \"/foo/null\", \"x\", a).IsNull());\n\n    // Generic version\n    EXPECT_EQ(-1, GetValueByPointerWithDefault(d, \"/foo/int\", -1, a).GetInt());\n    EXPECT_EQ(-1, GetValueByPointerWithDefault(d, \"/foo/int\", -2, a).GetInt());\n    EXPECT_EQ(0x87654321, GetValueByPointerWithDefault(d, \"/foo/uint\", 0x87654321, a).GetUint());\n    EXPECT_EQ(0x87654321, GetValueByPointerWithDefault(d, \"/foo/uint\", 0x12345678, a).GetUint());\n\n    const int64_t i64 = static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0));\n    EXPECT_EQ(i64, GetValueByPointerWithDefault(d, \"/foo/int64\", i64, a).GetInt64());\n    EXPECT_EQ(i64, GetValueByPointerWithDefault(d, \"/foo/int64\", i64 + 1, a).GetInt64());\n\n    const uint64_t u64 = RAPIDJSON_UINT64_C2(0xFFFFFFFFF, 0xFFFFFFFFF);\n    EXPECT_EQ(u64, GetValueByPointerWithDefault(d, \"/foo/uint64\", u64, a).GetUint64());\n    EXPECT_EQ(u64, GetValueByPointerWithDefault(d, 
\"/foo/uint64\", u64 - 1, a).GetUint64());\n\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, \"/foo/true\", true, a).IsTrue());\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, \"/foo/true\", false, a).IsTrue());\n\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, \"/foo/false\", false, a).IsFalse());\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, \"/foo/false\", true, a).IsFalse());\n\n    // StringRef version\n    EXPECT_STREQ(\"Hello\", GetValueByPointerWithDefault(d, \"/foo/hello\", \"Hello\", a).GetString());\n\n    // Copy string version\n    {\n        char buffer[256];\n        strcpy(buffer, \"World\");\n        EXPECT_STREQ(\"World\", GetValueByPointerWithDefault(d, \"/foo/world\", buffer, a).GetString());\n        memset(buffer, 0, sizeof(buffer));\n    }\n    EXPECT_STREQ(\"World\", GetValueByPointer(d, \"/foo/world\")->GetString());\n\n#if RAPIDJSON_HAS_STDSTRING\n    EXPECT_STREQ(\"C++\", GetValueByPointerWithDefault(d, \"/foo/C++\", std::string(\"C++\"), a).GetString());\n#endif\n}\n\nTEST(Pointer, GetValueByPointerWithDefault_Pointer_NoAllocator) {\n    Document d;\n    d.Parse(kJson);\n\n    const Value v(\"qux\");\n    EXPECT_TRUE(Value(\"bar\") == GetValueByPointerWithDefault(d, Pointer(\"/foo/0\"), v));\n    EXPECT_TRUE(Value(\"bar\") == GetValueByPointerWithDefault(d, Pointer(\"/foo/0\"), v));\n    EXPECT_TRUE(Value(\"baz\") == GetValueByPointerWithDefault(d, Pointer(\"/foo/1\"), v));\n    EXPECT_TRUE(Value(\"qux\") == GetValueByPointerWithDefault(d, Pointer(\"/foo/2\"), v));\n    EXPECT_TRUE(Value(\"last\") == GetValueByPointerWithDefault(d, Pointer(\"/foo/-\"), Value(\"last\").Move()));\n    EXPECT_STREQ(\"last\", d[\"foo\"][3].GetString());\n\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, Pointer(\"/foo/null\"), Value().Move()).IsNull());\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, Pointer(\"/foo/null\"), \"x\").IsNull());\n\n    // Generic version\n    EXPECT_EQ(-1, GetValueByPointerWithDefault(d, Pointer(\"/foo/int\"), 
-1).GetInt());\n    EXPECT_EQ(-1, GetValueByPointerWithDefault(d, Pointer(\"/foo/int\"), -2).GetInt());\n    EXPECT_EQ(0x87654321, GetValueByPointerWithDefault(d, Pointer(\"/foo/uint\"), 0x87654321).GetUint());\n    EXPECT_EQ(0x87654321, GetValueByPointerWithDefault(d, Pointer(\"/foo/uint\"), 0x12345678).GetUint());\n\n    const int64_t i64 = static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0));\n    EXPECT_EQ(i64, GetValueByPointerWithDefault(d, Pointer(\"/foo/int64\"), i64).GetInt64());\n    EXPECT_EQ(i64, GetValueByPointerWithDefault(d, Pointer(\"/foo/int64\"), i64 + 1).GetInt64());\n\n    const uint64_t u64 = RAPIDJSON_UINT64_C2(0xFFFFFFFFF, 0xFFFFFFFFF);\n    EXPECT_EQ(u64, GetValueByPointerWithDefault(d, Pointer(\"/foo/uint64\"), u64).GetUint64());\n    EXPECT_EQ(u64, GetValueByPointerWithDefault(d, Pointer(\"/foo/uint64\"), u64 - 1).GetUint64());\n\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, Pointer(\"/foo/true\"), true).IsTrue());\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, Pointer(\"/foo/true\"), false).IsTrue());\n\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, Pointer(\"/foo/false\"), false).IsFalse());\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, Pointer(\"/foo/false\"), true).IsFalse());\n\n    // StringRef version\n    EXPECT_STREQ(\"Hello\", GetValueByPointerWithDefault(d, Pointer(\"/foo/hello\"), \"Hello\").GetString());\n\n    // Copy string version\n    {\n        char buffer[256];\n        strcpy(buffer, \"World\");\n        EXPECT_STREQ(\"World\", GetValueByPointerWithDefault(d, Pointer(\"/foo/world\"), buffer).GetString());\n        memset(buffer, 0, sizeof(buffer));\n    }\n    EXPECT_STREQ(\"World\", GetValueByPointer(d, Pointer(\"/foo/world\"))->GetString());\n\n#if RAPIDJSON_HAS_STDSTRING\n    EXPECT_STREQ(\"C++\", GetValueByPointerWithDefault(d, Pointer(\"/foo/C++\"), std::string(\"C++\")).GetString());\n#endif\n}\n\nTEST(Pointer, GetValueByPointerWithDefault_String_NoAllocator) {\n    Document d;\n    
d.Parse(kJson);\n\n    const Value v(\"qux\");\n    EXPECT_TRUE(Value(\"bar\") == GetValueByPointerWithDefault(d, \"/foo/0\", v));\n    EXPECT_TRUE(Value(\"bar\") == GetValueByPointerWithDefault(d, \"/foo/0\", v));\n    EXPECT_TRUE(Value(\"baz\") == GetValueByPointerWithDefault(d, \"/foo/1\", v));\n    EXPECT_TRUE(Value(\"qux\") == GetValueByPointerWithDefault(d, \"/foo/2\", v));\n    EXPECT_TRUE(Value(\"last\") == GetValueByPointerWithDefault(d, \"/foo/-\", Value(\"last\").Move()));\n    EXPECT_STREQ(\"last\", d[\"foo\"][3].GetString());\n\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, \"/foo/null\", Value().Move()).IsNull());\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, \"/foo/null\", \"x\").IsNull());\n\n    // Generic version\n    EXPECT_EQ(-1, GetValueByPointerWithDefault(d, \"/foo/int\", -1).GetInt());\n    EXPECT_EQ(-1, GetValueByPointerWithDefault(d, \"/foo/int\", -2).GetInt());\n    EXPECT_EQ(0x87654321, GetValueByPointerWithDefault(d, \"/foo/uint\", 0x87654321).GetUint());\n    EXPECT_EQ(0x87654321, GetValueByPointerWithDefault(d, \"/foo/uint\", 0x12345678).GetUint());\n\n    const int64_t i64 = static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0));\n    EXPECT_EQ(i64, GetValueByPointerWithDefault(d, \"/foo/int64\", i64).GetInt64());\n    EXPECT_EQ(i64, GetValueByPointerWithDefault(d, \"/foo/int64\", i64 + 1).GetInt64());\n\n    const uint64_t u64 = RAPIDJSON_UINT64_C2(0xFFFFFFFFF, 0xFFFFFFFFF);\n    EXPECT_EQ(u64, GetValueByPointerWithDefault(d, \"/foo/uint64\", u64).GetUint64());\n    EXPECT_EQ(u64, GetValueByPointerWithDefault(d, \"/foo/uint64\", u64 - 1).GetUint64());\n\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, \"/foo/true\", true).IsTrue());\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, \"/foo/true\", false).IsTrue());\n\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, \"/foo/false\", false).IsFalse());\n    EXPECT_TRUE(GetValueByPointerWithDefault(d, \"/foo/false\", true).IsFalse());\n\n    // StringRef version\n    
EXPECT_STREQ(\"Hello\", GetValueByPointerWithDefault(d, \"/foo/hello\", \"Hello\").GetString());\n\n    // Copy string version\n    {\n        char buffer[256];\n        strcpy(buffer, \"World\");\n        EXPECT_STREQ(\"World\", GetValueByPointerWithDefault(d, \"/foo/world\", buffer).GetString());\n        memset(buffer, 0, sizeof(buffer));\n    }\n    EXPECT_STREQ(\"World\", GetValueByPointer(d, \"/foo/world\")->GetString());\n\n#if RAPIDJSON_HAS_STDSTRING\n    EXPECT_STREQ(\"C++\", GetValueByPointerWithDefault(d, \"/foo/C++\", std::string(\"C++\")).GetString());\n#endif\n}\n\nTEST(Pointer, SetValueByPointer_Pointer) {\n    Document d;\n    d.Parse(kJson);\n    Document::AllocatorType& a = d.GetAllocator();\n\n    // Value version\n    SetValueByPointer(d, Pointer(\"/foo/0\"), Value(123).Move(), a);\n    EXPECT_EQ(123, d[\"foo\"][0].GetInt());\n\n    SetValueByPointer(d, Pointer(\"/foo/null\"), Value().Move(), a);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/null\")->IsNull());\n\n    // Const Value version\n    const Value foo(d[\"foo\"], d.GetAllocator());\n    SetValueByPointer(d, Pointer(\"/clone\"), foo, a);\n    EXPECT_EQ(foo, *GetValueByPointer(d, \"/clone\"));\n\n    // Generic version\n    SetValueByPointer(d, Pointer(\"/foo/int\"), -1, a);\n    EXPECT_EQ(-1, GetValueByPointer(d, \"/foo/int\")->GetInt());\n\n    SetValueByPointer(d, Pointer(\"/foo/uint\"), 0x87654321, a);\n    EXPECT_EQ(0x87654321, GetValueByPointer(d, \"/foo/uint\")->GetUint());\n\n    const int64_t i64 = static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0));\n    SetValueByPointer(d, Pointer(\"/foo/int64\"), i64, a);\n    EXPECT_EQ(i64, GetValueByPointer(d, \"/foo/int64\")->GetInt64());\n\n    const uint64_t u64 = RAPIDJSON_UINT64_C2(0xFFFFFFFFF, 0xFFFFFFFFF);\n    SetValueByPointer(d, Pointer(\"/foo/uint64\"), u64, a);\n    EXPECT_EQ(u64, GetValueByPointer(d, \"/foo/uint64\")->GetUint64());\n\n    SetValueByPointer(d, Pointer(\"/foo/true\"), true, a);\n    
EXPECT_TRUE(GetValueByPointer(d, \"/foo/true\")->IsTrue());\n\n    SetValueByPointer(d, Pointer(\"/foo/false\"), false, a);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/false\")->IsFalse());\n\n    // StringRef version\n    SetValueByPointer(d, Pointer(\"/foo/hello\"), \"Hello\", a);\n    EXPECT_STREQ(\"Hello\", GetValueByPointer(d, \"/foo/hello\")->GetString());\n\n    // Copy string version\n    {\n        char buffer[256];\n        strcpy(buffer, \"World\");\n        SetValueByPointer(d, Pointer(\"/foo/world\"), buffer, a);\n        memset(buffer, 0, sizeof(buffer));\n    }\n    EXPECT_STREQ(\"World\", GetValueByPointer(d, \"/foo/world\")->GetString());\n\n#if RAPIDJSON_HAS_STDSTRING\n    SetValueByPointer(d, Pointer(\"/foo/c++\"), std::string(\"C++\"), a);\n    EXPECT_STREQ(\"C++\", GetValueByPointer(d, \"/foo/c++\")->GetString());\n#endif\n}\n\nTEST(Pointer, SetValueByPointer_String) {\n    Document d;\n    d.Parse(kJson);\n    Document::AllocatorType& a = d.GetAllocator();\n\n    // Value version\n    SetValueByPointer(d, \"/foo/0\", Value(123).Move(), a);\n    EXPECT_EQ(123, d[\"foo\"][0].GetInt());\n\n    SetValueByPointer(d, \"/foo/null\", Value().Move(), a);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/null\")->IsNull());\n\n    // Const Value version\n    const Value foo(d[\"foo\"], d.GetAllocator());\n    SetValueByPointer(d, \"/clone\", foo, a);\n    EXPECT_EQ(foo, *GetValueByPointer(d, \"/clone\"));\n\n    // Generic version\n    SetValueByPointer(d, \"/foo/int\", -1, a);\n    EXPECT_EQ(-1, GetValueByPointer(d, \"/foo/int\")->GetInt());\n\n    SetValueByPointer(d, \"/foo/uint\", 0x87654321, a);\n    EXPECT_EQ(0x87654321, GetValueByPointer(d, \"/foo/uint\")->GetUint());\n\n    const int64_t i64 = static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0));\n    SetValueByPointer(d, \"/foo/int64\", i64, a);\n    EXPECT_EQ(i64, GetValueByPointer(d, \"/foo/int64\")->GetInt64());\n\n    const uint64_t u64 = RAPIDJSON_UINT64_C2(0xFFFFFFFFF, 0xFFFFFFFFF);\n    
SetValueByPointer(d, \"/foo/uint64\", u64, a);\n    EXPECT_EQ(u64, GetValueByPointer(d, \"/foo/uint64\")->GetUint64());\n\n    SetValueByPointer(d, \"/foo/true\", true, a);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/true\")->IsTrue());\n\n    SetValueByPointer(d, \"/foo/false\", false, a);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/false\")->IsFalse());\n\n    // StringRef version\n    SetValueByPointer(d, \"/foo/hello\", \"Hello\", a);\n    EXPECT_STREQ(\"Hello\", GetValueByPointer(d, \"/foo/hello\")->GetString());\n\n    // Copy string version\n    {\n        char buffer[256];\n        strcpy(buffer, \"World\");\n        SetValueByPointer(d, \"/foo/world\", buffer, a);\n        memset(buffer, 0, sizeof(buffer));\n    }\n    EXPECT_STREQ(\"World\", GetValueByPointer(d, \"/foo/world\")->GetString());\n\n#if RAPIDJSON_HAS_STDSTRING\n    SetValueByPointer(d, \"/foo/c++\", std::string(\"C++\"), a);\n    EXPECT_STREQ(\"C++\", GetValueByPointer(d, \"/foo/c++\")->GetString());\n#endif\n}\n\nTEST(Pointer, SetValueByPointer_Pointer_NoAllocator) {\n    Document d;\n    d.Parse(kJson);\n\n    // Value version\n    SetValueByPointer(d, Pointer(\"/foo/0\"), Value(123).Move());\n    EXPECT_EQ(123, d[\"foo\"][0].GetInt());\n\n    SetValueByPointer(d, Pointer(\"/foo/null\"), Value().Move());\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/null\")->IsNull());\n\n    // Const Value version\n    const Value foo(d[\"foo\"], d.GetAllocator());\n    SetValueByPointer(d, Pointer(\"/clone\"), foo);\n    EXPECT_EQ(foo, *GetValueByPointer(d, \"/clone\"));\n\n    // Generic version\n    SetValueByPointer(d, Pointer(\"/foo/int\"), -1);\n    EXPECT_EQ(-1, GetValueByPointer(d, \"/foo/int\")->GetInt());\n\n    SetValueByPointer(d, Pointer(\"/foo/uint\"), 0x87654321);\n    EXPECT_EQ(0x87654321, GetValueByPointer(d, \"/foo/uint\")->GetUint());\n\n    const int64_t i64 = static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0));\n    SetValueByPointer(d, Pointer(\"/foo/int64\"), i64);\n    
EXPECT_EQ(i64, GetValueByPointer(d, \"/foo/int64\")->GetInt64());\n\n    const uint64_t u64 = RAPIDJSON_UINT64_C2(0xFFFFFFFFF, 0xFFFFFFFFF);\n    SetValueByPointer(d, Pointer(\"/foo/uint64\"), u64);\n    EXPECT_EQ(u64, GetValueByPointer(d, \"/foo/uint64\")->GetUint64());\n\n    SetValueByPointer(d, Pointer(\"/foo/true\"), true);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/true\")->IsTrue());\n\n    SetValueByPointer(d, Pointer(\"/foo/false\"), false);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/false\")->IsFalse());\n\n    // StringRef version\n    SetValueByPointer(d, Pointer(\"/foo/hello\"), \"Hello\");\n    EXPECT_STREQ(\"Hello\", GetValueByPointer(d, \"/foo/hello\")->GetString());\n\n    // Copy string version\n    {\n        char buffer[256];\n        strcpy(buffer, \"World\");\n        SetValueByPointer(d, Pointer(\"/foo/world\"), buffer);\n        memset(buffer, 0, sizeof(buffer));\n    }\n    EXPECT_STREQ(\"World\", GetValueByPointer(d, \"/foo/world\")->GetString());\n\n#if RAPIDJSON_HAS_STDSTRING\n    SetValueByPointer(d, Pointer(\"/foo/c++\"), std::string(\"C++\"));\n    EXPECT_STREQ(\"C++\", GetValueByPointer(d, \"/foo/c++\")->GetString());\n#endif\n}\n\nTEST(Pointer, SetValueByPointer_String_NoAllocator) {\n    Document d;\n    d.Parse(kJson);\n\n    // Value version\n    SetValueByPointer(d, \"/foo/0\", Value(123).Move());\n    EXPECT_EQ(123, d[\"foo\"][0].GetInt());\n\n    SetValueByPointer(d, \"/foo/null\", Value().Move());\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/null\")->IsNull());\n\n    // Const Value version\n    const Value foo(d[\"foo\"], d.GetAllocator());\n    SetValueByPointer(d, \"/clone\", foo);\n    EXPECT_EQ(foo, *GetValueByPointer(d, \"/clone\"));\n\n    // Generic version\n    SetValueByPointer(d, \"/foo/int\", -1);\n    EXPECT_EQ(-1, GetValueByPointer(d, \"/foo/int\")->GetInt());\n\n    SetValueByPointer(d, \"/foo/uint\", 0x87654321);\n    EXPECT_EQ(0x87654321, GetValueByPointer(d, \"/foo/uint\")->GetUint());\n\n    const 
int64_t i64 = static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0));\n    SetValueByPointer(d, \"/foo/int64\", i64);\n    EXPECT_EQ(i64, GetValueByPointer(d, \"/foo/int64\")->GetInt64());\n\n    const uint64_t u64 = RAPIDJSON_UINT64_C2(0xFFFFFFFFF, 0xFFFFFFFFF);\n    SetValueByPointer(d, \"/foo/uint64\", u64);\n    EXPECT_EQ(u64, GetValueByPointer(d, \"/foo/uint64\")->GetUint64());\n\n    SetValueByPointer(d, \"/foo/true\", true);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/true\")->IsTrue());\n\n    SetValueByPointer(d, \"/foo/false\", false);\n    EXPECT_TRUE(GetValueByPointer(d, \"/foo/false\")->IsFalse());\n\n    // StringRef version\n    SetValueByPointer(d, \"/foo/hello\", \"Hello\");\n    EXPECT_STREQ(\"Hello\", GetValueByPointer(d, \"/foo/hello\")->GetString());\n\n    // Copy string version\n    {\n        char buffer[256];\n        strcpy(buffer, \"World\");\n        SetValueByPointer(d, \"/foo/world\", buffer);\n        memset(buffer, 0, sizeof(buffer));\n    }\n    EXPECT_STREQ(\"World\", GetValueByPointer(d, \"/foo/world\")->GetString());\n\n#if RAPIDJSON_HAS_STDSTRING\n    SetValueByPointer(d, \"/foo/c++\", std::string(\"C++\"));\n    EXPECT_STREQ(\"C++\", GetValueByPointer(d, \"/foo/c++\")->GetString());\n#endif\n}\n\nTEST(Pointer, SwapValueByPointer) {\n    Document d;\n    d.Parse(kJson);\n    Document::AllocatorType& a = d.GetAllocator();\n    SwapValueByPointer(d, Pointer(\"/foo/0\"), *GetValueByPointer(d, \"/foo/1\"), a);\n    EXPECT_STREQ(\"baz\", d[\"foo\"][0].GetString());\n    EXPECT_STREQ(\"bar\", d[\"foo\"][1].GetString());\n\n    SwapValueByPointer(d, \"/foo/0\", *GetValueByPointer(d, \"/foo/1\"), a);\n    EXPECT_STREQ(\"bar\", d[\"foo\"][0].GetString());\n    EXPECT_STREQ(\"baz\", d[\"foo\"][1].GetString());\n}\n\nTEST(Pointer, SwapValueByPointer_NoAllocator) {\n    Document d;\n    d.Parse(kJson);\n    SwapValueByPointer(d, Pointer(\"/foo/0\"), *GetValueByPointer(d, \"/foo/1\"));\n    EXPECT_STREQ(\"baz\", 
d[\"foo\"][0].GetString());\n    EXPECT_STREQ(\"bar\", d[\"foo\"][1].GetString());\n\n    SwapValueByPointer(d, \"/foo/0\", *GetValueByPointer(d, \"/foo/1\"));\n    EXPECT_STREQ(\"bar\", d[\"foo\"][0].GetString());\n    EXPECT_STREQ(\"baz\", d[\"foo\"][1].GetString());\n}\n\nTEST(Pointer, EraseValueByPointer_Pointer) {\n    Document d;\n    d.Parse(kJson);\n\n    EXPECT_FALSE(EraseValueByPointer(d, Pointer(\"\")));\n    EXPECT_FALSE(Pointer(\"/foo/nonexist\").Erase(d));\n    EXPECT_TRUE(EraseValueByPointer(d, Pointer(\"/foo/0\")));\n    EXPECT_EQ(1u, d[\"foo\"].Size());\n    EXPECT_STREQ(\"baz\", d[\"foo\"][0].GetString());\n    EXPECT_TRUE(EraseValueByPointer(d, Pointer(\"/foo/0\")));\n    EXPECT_TRUE(d[\"foo\"].Empty());\n    EXPECT_TRUE(EraseValueByPointer(d, Pointer(\"/foo\")));\n    EXPECT_TRUE(Pointer(\"/foo\").Get(d) == 0);\n}\n\nTEST(Pointer, EraseValueByPointer_String) {\n    Document d;\n    d.Parse(kJson);\n\n    EXPECT_FALSE(EraseValueByPointer(d, \"\"));\n    EXPECT_FALSE(Pointer(\"/foo/nonexist\").Erase(d));\n    EXPECT_TRUE(EraseValueByPointer(d, \"/foo/0\"));\n    EXPECT_EQ(1u, d[\"foo\"].Size());\n    EXPECT_STREQ(\"baz\", d[\"foo\"][0].GetString());\n    EXPECT_TRUE(EraseValueByPointer(d, \"/foo/0\"));\n    EXPECT_TRUE(d[\"foo\"].Empty());\n    EXPECT_TRUE(EraseValueByPointer(d, \"/foo\"));\n    EXPECT_TRUE(Pointer(\"/foo\").Get(d) == 0);\n}\n\nTEST(Pointer, Ambiguity) {\n    {\n        Document d;\n        d.Parse(\"{\\\"0\\\" : [123]}\");\n        EXPECT_EQ(123, Pointer(\"/0/0\").Get(d)->GetInt());\n        Pointer(\"/0/a\").Set(d, 456);    // Change array [123] to object {456}\n        EXPECT_EQ(456, Pointer(\"/0/a\").Get(d)->GetInt());\n    }\n\n    {\n        Document d;\n        EXPECT_FALSE(d.Parse(\"[{\\\"0\\\": 123}]\").HasParseError());\n        EXPECT_EQ(123, Pointer(\"/0/0\").Get(d)->GetInt());\n        Pointer(\"/0/1\").Set(d, 456); // 1 is treated as \"1\" to index object\n        EXPECT_EQ(123, Pointer(\"/0/0\").Get(d)->GetInt());\n 
       EXPECT_EQ(456, Pointer(\"/0/1\").Get(d)->GetInt());\n    }\n}\n\nTEST(Pointer, ResolveOnObject) {\n    Document d;\n    EXPECT_FALSE(d.Parse(\"{\\\"a\\\": 123}\").HasParseError());\n\n    {\n        Value::ConstObject o = static_cast<const Document&>(d).GetObject();\n        EXPECT_EQ(123, Pointer(\"/a\").Get(o)->GetInt());\n    }\n\n    {\n        Value::Object o = d.GetObject();\n        Pointer(\"/a\").Set(o, 456, d.GetAllocator());\n        EXPECT_EQ(456, Pointer(\"/a\").Get(o)->GetInt());\n    }\n}\n\nTEST(Pointer, ResolveOnArray) {\n    Document d;\n    EXPECT_FALSE(d.Parse(\"[1, 2, 3]\").HasParseError());\n\n    {\n        Value::ConstArray a = static_cast<const Document&>(d).GetArray();\n        EXPECT_EQ(2, Pointer(\"/1\").Get(a)->GetInt());\n    }\n\n    {\n        Value::Array a = d.GetArray();\n        Pointer(\"/1\").Set(a, 123, d.GetAllocator());\n        EXPECT_EQ(123, Pointer(\"/1\").Get(a)->GetInt());\n    }\n}\n\nTEST(Pointer, LessThan) {\n    static const struct {\n        const char *str;\n        bool valid;\n    } pointers[] = {\n        { \"/a/b\",       true },\n        { \"/a\",         true },\n        { \"/d/1\",       true },\n        { \"/d/2/z\",     true },\n        { \"/d/2/3\",     true },\n        { \"/d/2\",       true },\n        { \"/a/c\",       true },\n        { \"/e/f~g\",     false },\n        { \"/d/2/zz\",    true },\n        { \"/d/1\",       true },\n        { \"/d/2/z\",     true },\n        { \"/e/f~~g\",    false },\n        { \"/e/f~0g\",    true },\n        { \"/e/f~1g\",    true },\n        { \"/e/f.g\",     true },\n        { \"\",           true }\n    };\n    static const char *ordered_pointers[] = {\n        \"\",\n        \"/a\",\n        \"/a/b\",\n        \"/a/c\",\n        \"/d/1\",\n        \"/d/1\",\n        \"/d/2\",\n        \"/e/f.g\",\n        \"/e/f~1g\",\n        \"/e/f~0g\",\n        \"/d/2/3\",\n        \"/d/2/z\",\n        \"/d/2/z\",\n        \"/d/2/zz\",\n        NULL,       // was 
invalid \"/e/f~g\"\n        NULL        // was invalid \"/e/f~~g\"\n    };\n    typedef MemoryPoolAllocator<> AllocatorType;\n    typedef GenericPointer<Value, AllocatorType> PointerType;\n    typedef std::multimap<PointerType, size_t> PointerMap;\n    PointerMap map;\n    PointerMap::iterator it;\n    AllocatorType allocator;\n    size_t i;\n\n    EXPECT_EQ(sizeof(pointers) / sizeof(pointers[0]),\n              sizeof(ordered_pointers) / sizeof(ordered_pointers[0]));\n\n    for (i = 0; i < sizeof(pointers) / sizeof(pointers[0]); ++i) {\n        it = map.insert(PointerMap::value_type(PointerType(pointers[i].str, &allocator), i));\n        if (!it->first.IsValid()) {\n            EXPECT_EQ(++it, map.end());\n        }\n    }\n\n    for (i = 0, it = map.begin(); it != map.end(); ++it, ++i) {\n        EXPECT_TRUE(it->second < sizeof(pointers) / sizeof(pointers[0]));\n        EXPECT_EQ(it->first.IsValid(), pointers[it->second].valid);\n        EXPECT_TRUE(i < sizeof(ordered_pointers) / sizeof(ordered_pointers[0]));\n        EXPECT_EQ(it->first.IsValid(), !!ordered_pointers[i]);\n        if (it->first.IsValid()) {\n            std::stringstream ss;\n            OStreamWrapper os(ss);\n            EXPECT_TRUE(it->first.Stringify(os));\n            EXPECT_EQ(ss.str(), pointers[it->second].str);\n            EXPECT_EQ(ss.str(), ordered_pointers[i]);\n        }\n    }\n}\n\n// https://github.com/Tencent/rapidjson/issues/483\nnamespace myjson {\n\nclass MyAllocator\n{\npublic:\n    static const bool kNeedFree = true;\n    void * Malloc(size_t _size) { return malloc(_size); }\n    void * Realloc(void *_org_p, size_t _org_size, size_t _new_size) { (void)_org_size; return realloc(_org_p, _new_size); }\n    static void Free(void *_p) { return free(_p); }\n};\n\ntypedef rapidjson::GenericDocument<\n            rapidjson::UTF8<>,\n            rapidjson::MemoryPoolAllocator< MyAllocator >,\n            MyAllocator\n        > Document;\n\ntypedef rapidjson::GenericPointer<\n         
   ::myjson::Document::ValueType,\n            MyAllocator\n        > Pointer;\n\ntypedef ::myjson::Document::ValueType Value;\n\n}\n\nTEST(Pointer, Issue483) {\n    std::string mystr, path;\n    myjson::Document document;\n    myjson::Value value(rapidjson::kStringType);\n    value.SetString(mystr.c_str(), static_cast<SizeType>(mystr.length()), document.GetAllocator());\n    myjson::Pointer(path.c_str()).Set(document, value, document.GetAllocator());\n}\n\nTEST(Pointer, Issue1899) {\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    PointerType p;\n    PointerType q = p.Append(\"foo\");\n    EXPECT_TRUE(PointerType(\"/foo\") == q);\n    q = q.Append(1234);\n    EXPECT_TRUE(PointerType(\"/foo/1234\") == q);\n    q = q.Append(\"\");\n    EXPECT_TRUE(PointerType(\"/foo/1234/\") == q);\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/prettywritertest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/prettywriter.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/filewritestream.h\"\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(c++98-compat)\n#endif\n\nusing namespace rapidjson;\n\nstatic const char kJson[] = \"{\\\"hello\\\":\\\"world\\\",\\\"t\\\":true,\\\"f\\\":false,\\\"n\\\":null,\\\"i\\\":123,\\\"pi\\\":3.1416,\\\"a\\\":[1,2,3,-1],\\\"u64\\\":1234567890123456789,\\\"i64\\\":-1234567890123456789}\";\nstatic const char kPrettyJson[] =\n\"{\\n\"\n\"    \\\"hello\\\": \\\"world\\\",\\n\"\n\"    \\\"t\\\": true,\\n\"\n\"    \\\"f\\\": false,\\n\"\n\"    \\\"n\\\": null,\\n\"\n\"    \\\"i\\\": 123,\\n\"\n\"    \\\"pi\\\": 3.1416,\\n\"\n\"    \\\"a\\\": [\\n\"\n\"        1,\\n\"\n\"        2,\\n\"\n\"        3,\\n\"\n\"        -1\\n\"\n\"    ],\\n\"\n\"    \\\"u64\\\": 1234567890123456789,\\n\"\n\"    \\\"i64\\\": -1234567890123456789\\n\"\n\"}\";\n\nstatic const char kPrettyJson_FormatOptions_SLA[] =\n\"{\\n\"\n\"    \\\"hello\\\": \\\"world\\\",\\n\"\n\"    \\\"t\\\": true,\\n\"\n\"    \\\"f\\\": false,\\n\"\n\"    \\\"n\\\": null,\\n\"\n\"    \\\"i\\\": 123,\\n\"\n\"    \\\"pi\\\": 3.1416,\\n\"\n\"    \\\"a\\\": [1, 2, 3, -1],\\n\"\n\"    
\\\"u64\\\": 1234567890123456789,\\n\"\n\"    \\\"i64\\\": -1234567890123456789\\n\"\n\"}\";\n\nTEST(PrettyWriter, Basic) {\n    StringBuffer buffer;\n    PrettyWriter<StringBuffer> writer(buffer);\n    Reader reader;\n    StringStream s(kJson);\n    reader.Parse(s, writer);\n    EXPECT_STREQ(kPrettyJson, buffer.GetString());\n}\n\nTEST(PrettyWriter, FormatOptions) {\n    StringBuffer buffer;\n    PrettyWriter<StringBuffer> writer(buffer);\n    writer.SetFormatOptions(kFormatSingleLineArray);\n    Reader reader;\n    StringStream s(kJson);\n    reader.Parse(s, writer);\n    EXPECT_STREQ(kPrettyJson_FormatOptions_SLA, buffer.GetString());\n}\n\nTEST(PrettyWriter, SetIndent) {\n    StringBuffer buffer;\n    PrettyWriter<StringBuffer> writer(buffer);\n    writer.SetIndent('\\t', 1);\n    Reader reader;\n    StringStream s(kJson);\n    reader.Parse(s, writer);\n    EXPECT_STREQ(\n        \"{\\n\"\n        \"\\t\\\"hello\\\": \\\"world\\\",\\n\"\n        \"\\t\\\"t\\\": true,\\n\"\n        \"\\t\\\"f\\\": false,\\n\"\n        \"\\t\\\"n\\\": null,\\n\"\n        \"\\t\\\"i\\\": 123,\\n\"\n        \"\\t\\\"pi\\\": 3.1416,\\n\"\n        \"\\t\\\"a\\\": [\\n\"\n        \"\\t\\t1,\\n\"\n        \"\\t\\t2,\\n\"\n        \"\\t\\t3,\\n\"\n        \"\\t\\t-1\\n\"\n        \"\\t],\\n\"\n        \"\\t\\\"u64\\\": 1234567890123456789,\\n\"\n        \"\\t\\\"i64\\\": -1234567890123456789\\n\"\n        \"}\",\n        buffer.GetString());\n}\n\nTEST(PrettyWriter, String) {\n    StringBuffer buffer;\n    PrettyWriter<StringBuffer> writer(buffer);\n    EXPECT_TRUE(writer.StartArray());\n    EXPECT_TRUE(writer.String(\"Hello\\n\"));\n    EXPECT_TRUE(writer.EndArray());\n    EXPECT_STREQ(\"[\\n    \\\"Hello\\\\n\\\"\\n]\", buffer.GetString());\n}\n\n#if RAPIDJSON_HAS_STDSTRING\nTEST(PrettyWriter, String_STDSTRING) {\n    StringBuffer buffer;\n    PrettyWriter<StringBuffer> writer(buffer);\n    EXPECT_TRUE(writer.StartArray());\n    EXPECT_TRUE(writer.String(std::string(\"Hello\\n\")));\n 
   EXPECT_TRUE(writer.EndArray());\n    EXPECT_STREQ(\"[\\n    \\\"Hello\\\\n\\\"\\n]\", buffer.GetString());\n}\n#endif\n\n#include <sstream>\n\nclass OStreamWrapper {\npublic:\n    typedef char Ch;\n\n    OStreamWrapper(std::ostream& os) : os_(os) {}\n\n    Ch Peek() const { assert(false); return '\\0'; }\n    Ch Take() { assert(false); return '\\0'; }\n    size_t Tell() const { return 0; }\n\n    Ch* PutBegin() { assert(false); return 0; }\n    void Put(Ch c) { os_.put(c); }\n    void Flush() { os_.flush(); }\n    size_t PutEnd(Ch*) { assert(false); return 0; }\n\nprivate:\n    OStreamWrapper(const OStreamWrapper&);\n    OStreamWrapper& operator=(const OStreamWrapper&);\n\n    std::ostream& os_;\n};\n\n// For covering PutN() generic version\nTEST(PrettyWriter, OStreamWrapper) {\n    StringStream s(kJson);\n    \n    std::stringstream ss;\n    OStreamWrapper os(ss);\n    \n    PrettyWriter<OStreamWrapper> writer(os);\n\n    Reader reader;\n    reader.Parse(s, writer);\n    \n    std::string actual = ss.str();\n    EXPECT_STREQ(kPrettyJson, actual.c_str());\n}\n\n// For covering FileWriteStream::PutN()\nTEST(PrettyWriter, FileWriteStream) {\n    char filename[L_tmpnam];\n    FILE* fp = TempFile(filename);\n    ASSERT_TRUE(fp!=NULL);\n    char buffer[16];\n    FileWriteStream os(fp, buffer, sizeof(buffer));\n    PrettyWriter<FileWriteStream> writer(os);\n    Reader reader;\n    StringStream s(kJson);\n    reader.Parse(s, writer);\n    fclose(fp);\n\n    fp = fopen(filename, \"rb\");\n    fseek(fp, 0, SEEK_END);\n    size_t size = static_cast<size_t>(ftell(fp));\n    fseek(fp, 0, SEEK_SET);\n    char* json = static_cast<char*>(malloc(size + 1));\n    size_t readLength = fread(json, 1, size, fp);\n    json[readLength] = '\\0';\n    fclose(fp);\n    remove(filename);\n    EXPECT_STREQ(kPrettyJson, json);\n    free(json);\n}\n\nTEST(PrettyWriter, RawValue) {\n    StringBuffer buffer;\n    PrettyWriter<StringBuffer> writer(buffer);\n    writer.StartObject();\n    
writer.Key(\"a\");\n    writer.Int(1);\n    writer.Key(\"raw\");\n    const char json[] = \"[\\\"Hello\\\\nWorld\\\", 123.456]\";\n    writer.RawValue(json, strlen(json), kArrayType);\n    writer.EndObject();\n    EXPECT_TRUE(writer.IsComplete());\n    EXPECT_STREQ(\n        \"{\\n\"\n        \"    \\\"a\\\": 1,\\n\"\n        \"    \\\"raw\\\": [\\\"Hello\\\\nWorld\\\", 123.456]\\n\" // no indentation within raw value\n        \"}\",\n        buffer.GetString());\n}\n\nTEST(PrettyWriter, InvalidEventSequence) {\n    // {]\n    {\n        StringBuffer buffer;\n        PrettyWriter<StringBuffer> writer(buffer);\n        writer.StartObject();\n        EXPECT_THROW(writer.EndArray(), AssertException);\n        EXPECT_FALSE(writer.IsComplete());\n    }\n    \n    // [}\n    {\n        StringBuffer buffer;\n        PrettyWriter<StringBuffer> writer(buffer);\n        writer.StartArray();\n        EXPECT_THROW(writer.EndObject(), AssertException);\n        EXPECT_FALSE(writer.IsComplete());\n    }\n    \n    // { 1:\n    {\n        StringBuffer buffer;\n        PrettyWriter<StringBuffer> writer(buffer);\n        writer.StartObject();\n        EXPECT_THROW(writer.Int(1), AssertException);\n        EXPECT_FALSE(writer.IsComplete());\n    }\n    \n    // { 'a' }\n    {\n        StringBuffer buffer;\n        PrettyWriter<StringBuffer> writer(buffer);\n        writer.StartObject();\n        writer.Key(\"a\");\n        EXPECT_THROW(writer.EndObject(), AssertException);\n        EXPECT_FALSE(writer.IsComplete());\n    }\n    \n    // { 'a':'b','c' }\n    {\n        StringBuffer buffer;\n        PrettyWriter<StringBuffer> writer(buffer);\n        writer.StartObject();\n        writer.Key(\"a\");\n        writer.String(\"b\");\n        writer.Key(\"c\");\n        EXPECT_THROW(writer.EndObject(), AssertException);\n        EXPECT_FALSE(writer.IsComplete());\n    }\n}\n\nTEST(PrettyWriter, NaN) {\n    double nan = std::numeric_limits<double>::quiet_NaN();\n\n    
EXPECT_TRUE(internal::Double(nan).IsNan());\n    StringBuffer buffer;\n    {\n        PrettyWriter<StringBuffer> writer(buffer);\n        EXPECT_FALSE(writer.Double(nan));\n    }\n    {\n        PrettyWriter<StringBuffer, UTF8<>, UTF8<>, CrtAllocator, kWriteNanAndInfFlag> writer(buffer);\n        EXPECT_TRUE(writer.Double(nan));\n        EXPECT_STREQ(\"NaN\", buffer.GetString());\n    }\n    GenericStringBuffer<UTF16<> > buffer2;\n    PrettyWriter<GenericStringBuffer<UTF16<> > > writer2(buffer2);\n    EXPECT_FALSE(writer2.Double(nan));\n}\n\nTEST(PrettyWriter, Inf) {\n    double inf = std::numeric_limits<double>::infinity();\n\n    EXPECT_TRUE(internal::Double(inf).IsInf());\n    StringBuffer buffer;\n    {\n        PrettyWriter<StringBuffer> writer(buffer);\n        EXPECT_FALSE(writer.Double(inf));\n    }\n    {\n        PrettyWriter<StringBuffer> writer(buffer);\n        EXPECT_FALSE(writer.Double(-inf));\n    }\n    {\n        PrettyWriter<StringBuffer, UTF8<>, UTF8<>, CrtAllocator, kWriteNanAndInfFlag> writer(buffer);\n        EXPECT_TRUE(writer.Double(inf));\n    }\n    {\n        PrettyWriter<StringBuffer, UTF8<>, UTF8<>, CrtAllocator, kWriteNanAndInfFlag> writer(buffer);\n        EXPECT_TRUE(writer.Double(-inf));\n    }\n    EXPECT_STREQ(\"Infinity-Infinity\", buffer.GetString());\n}\n\nTEST(PrettyWriter, Issue_889) {\n    char buf[100] = \"Hello\";\n    \n    StringBuffer buffer;\n    PrettyWriter<StringBuffer> writer(buffer);\n    writer.StartArray();\n    writer.String(buf);\n    writer.EndArray();\n    \n    EXPECT_STREQ(\"[\\n    \\\"Hello\\\"\\n]\", buffer.GetString());\n    EXPECT_TRUE(writer.IsComplete());\n}\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n\nstatic PrettyWriter<StringBuffer> WriterGen(StringBuffer &target) {\n    PrettyWriter<StringBuffer> writer(target);\n    writer.StartObject();\n    writer.Key(\"a\");\n    writer.Int(1);\n    return writer;\n}\n\nTEST(PrettyWriter, MoveCtor) {\n    StringBuffer buffer;\n    
PrettyWriter<StringBuffer> writer(WriterGen(buffer));\n    writer.EndObject();\n    EXPECT_TRUE(writer.IsComplete());\n    EXPECT_STREQ(\n        \"{\\n\"\n        \"    \\\"a\\\": 1\\n\"\n        \"}\",\n        buffer.GetString());\n}\n#endif\n\nTEST(PrettyWriter, Issue_1336) {\n#define T(meth, val, expected)                          \\\n    {                                                   \\\n        StringBuffer buffer;                            \\\n        PrettyWriter<StringBuffer> writer(buffer);      \\\n        writer.meth(val);                               \\\n                                                        \\\n        EXPECT_STREQ(expected, buffer.GetString());     \\\n        EXPECT_TRUE(writer.IsComplete());               \\\n    }\n\n    T(Bool, false, \"false\");\n    T(Bool, true, \"true\");\n    T(Int, 0, \"0\");\n    T(Uint, 0, \"0\");\n    T(Int64, 0, \"0\");\n    T(Uint64, 0, \"0\");\n    T(Double, 0, \"0.0\");\n    T(String, \"Hello\", \"\\\"Hello\\\"\");\n#undef T\n\n    StringBuffer buffer;\n    PrettyWriter<StringBuffer> writer(buffer);\n    writer.Null();\n\n    EXPECT_STREQ(\"null\", buffer.GetString());\n    EXPECT_TRUE(writer.IsComplete());\n}\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/readertest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n//\n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/internal/dtoa.h\"\n#include \"rapidjson/internal/itoa.h\"\n#include \"rapidjson/memorystream.h\"\n\n#include <limits>\n\nusing namespace rapidjson;\n\nRAPIDJSON_DIAG_PUSH\n#ifdef __GNUC__\nRAPIDJSON_DIAG_OFF(effc++)\nRAPIDJSON_DIAG_OFF(float-equal)\nRAPIDJSON_DIAG_OFF(missing-noreturn)\n#if __GNUC__ >= 7\nRAPIDJSON_DIAG_OFF(dangling-else)\n#endif\n#endif // __GNUC__\n\n#ifdef __clang__\nRAPIDJSON_DIAG_OFF(variadic-macros)\nRAPIDJSON_DIAG_OFF(c++98-compat-pedantic)\n#endif\n\ntemplate<bool expect>\nstruct ParseBoolHandler : BaseReaderHandler<UTF8<>, ParseBoolHandler<expect> > {\n    ParseBoolHandler() : step_(0) {}\n    bool Default() { ADD_FAILURE(); return false; }\n    // gcc 4.8.x generates warning in EXPECT_EQ(bool, bool) on this gtest version.\n    // Workaround with EXPECT_TRUE().\n    bool Bool(bool b) { /*EXPECT_EQ(expect, b); */EXPECT_TRUE(expect == b);  ++step_; return true; }\n\n    unsigned step_;\n};\n\nTEST(Reader, ParseTrue) {\n    StringStream s(\"true\");\n    ParseBoolHandler<true> h;\n    Reader reader;\n    reader.Parse(s, h);\n    EXPECT_EQ(1u, h.step_);\n}\n\nTEST(Reader, ParseFalse) {\n    StringStream s(\"false\");\n    ParseBoolHandler<false> h;\n    
Reader reader;\n    reader.Parse(s, h);\n    EXPECT_EQ(1u, h.step_);\n}\n\nstruct ParseIntHandler : BaseReaderHandler<UTF8<>, ParseIntHandler> {\n    ParseIntHandler() : step_(0), actual_() {}\n    bool Default() { ADD_FAILURE(); return false; }\n    bool Int(int i) { actual_ = i; step_++; return true; }\n\n    unsigned step_;\n    int actual_;\n};\n\nstruct ParseUintHandler : BaseReaderHandler<UTF8<>, ParseUintHandler> {\n    ParseUintHandler() : step_(0), actual_() {}\n    bool Default() { ADD_FAILURE(); return false; }\n    bool Uint(unsigned i) { actual_ = i; step_++; return true; }\n\n    unsigned step_;\n    unsigned actual_;\n};\n\nstruct ParseInt64Handler : BaseReaderHandler<UTF8<>, ParseInt64Handler> {\n    ParseInt64Handler() : step_(0), actual_() {}\n    bool Default() { ADD_FAILURE(); return false; }\n    bool Int64(int64_t i) { actual_ = i; step_++; return true; }\n\n    unsigned step_;\n    int64_t actual_;\n};\n\nstruct ParseUint64Handler : BaseReaderHandler<UTF8<>, ParseUint64Handler> {\n    ParseUint64Handler() : step_(0), actual_() {}\n    bool Default() { ADD_FAILURE(); return false; }\n    bool Uint64(uint64_t i) { actual_ = i; step_++; return true; }\n\n    unsigned step_;\n    uint64_t actual_;\n};\n\nstruct ParseDoubleHandler : BaseReaderHandler<UTF8<>, ParseDoubleHandler> {\n    ParseDoubleHandler() : step_(0), actual_() {}\n    bool Default() { ADD_FAILURE(); return false; }\n    bool Double(double d) { actual_ = d; step_++; return true; }\n\n    unsigned step_;\n    double actual_;\n};\n\nTEST(Reader, ParseNumber_Integer) {\n#define TEST_INTEGER(Handler, str, x) \\\n    { \\\n        StringStream s(str); \\\n        Handler h; \\\n        Reader reader; \\\n        reader.Parse(s, h); \\\n        EXPECT_EQ(1u, h.step_); \\\n        EXPECT_EQ(x, h.actual_); \\\n    }\n\n    TEST_INTEGER(ParseUintHandler, \"0\", 0u);\n    TEST_INTEGER(ParseUintHandler, \"123\", 123u);\n    TEST_INTEGER(ParseUintHandler, \"2147483648\", 2147483648u);       // 
2^31 (cannot be stored in int)\n    TEST_INTEGER(ParseUintHandler, \"4294967295\", 4294967295u);\n\n    TEST_INTEGER(ParseIntHandler, \"-123\", -123);\n    TEST_INTEGER(ParseIntHandler, \"-2147483648\", static_cast<int32_t>(0x80000000));     // -2^31 (min of int)\n\n    TEST_INTEGER(ParseUint64Handler, \"4294967296\", RAPIDJSON_UINT64_C2(1, 0));   // 2^32 (max of unsigned + 1, force to use uint64_t)\n    TEST_INTEGER(ParseUint64Handler, \"18446744073709551615\", RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0xFFFFFFFF));   // 2^64 - 1 (max of uint64_t)\n\n    TEST_INTEGER(ParseInt64Handler, \"-2147483649\", static_cast<int64_t>(RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0x7FFFFFFF)));   // -2^31 - 1 (min of int - 1, force to use int64_t)\n    TEST_INTEGER(ParseInt64Handler, \"-9223372036854775808\", static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0x00000000)));       // -2^63 (min of int64_t)\n\n    // Random test for uint32_t/int32_t\n    {\n        union {\n            uint32_t u;\n            int32_t i;\n        }u;\n        Random r;\n\n        for (unsigned i = 0; i < 100000; i++) {\n            u.u = r();\n\n            char buffer[32];\n            *internal::u32toa(u.u, buffer) = '\\0';\n            TEST_INTEGER(ParseUintHandler, buffer, u.u);\n\n            if (u.i < 0) {\n                *internal::i32toa(u.i, buffer) = '\\0';\n                TEST_INTEGER(ParseIntHandler, buffer, u.i);\n            }\n        }\n    }\n\n    // Random test for uint64_t/int64_t\n    {\n        union {\n            uint64_t u;\n            int64_t i;\n        }u;\n        Random r;\n\n        for (unsigned i = 0; i < 100000; i++) {\n            u.u = uint64_t(r()) << 32;\n            u.u |= r();\n\n            char buffer[32];\n            if (u.u > uint64_t(4294967295u)) {\n                *internal::u64toa(u.u, buffer) = '\\0';\n                TEST_INTEGER(ParseUint64Handler, buffer, u.u);\n            }\n\n            if (u.i < -int64_t(2147483648u)) {\n                
*internal::i64toa(u.i, buffer) = '\\0';\n                TEST_INTEGER(ParseInt64Handler, buffer, u.i);\n            }\n        }\n    }\n#undef TEST_INTEGER\n}\n\ntemplate<bool fullPrecision>\nstatic void TestParseDouble() {\n#define TEST_DOUBLE(fullPrecision, str, x) \\\n    { \\\n        StringStream s(str); \\\n        ParseDoubleHandler h; \\\n        Reader reader; \\\n        ASSERT_EQ(kParseErrorNone, reader.Parse<fullPrecision ? kParseFullPrecisionFlag : 0>(s, h).Code()); \\\n        EXPECT_EQ(1u, h.step_); \\\n        internal::Double e(x), a(h.actual_); \\\n        if (fullPrecision) { \\\n            EXPECT_EQ(e.Uint64Value(), a.Uint64Value()); \\\n            if (e.Uint64Value() != a.Uint64Value()) \\\n                printf(\"  String: %s\\n  Actual: %.17g\\nExpected: %.17g\\n\", str, h.actual_, x); \\\n        } \\\n        else { \\\n            EXPECT_EQ(e.Sign(), a.Sign()); /* for 0.0 != -0.0 */ \\\n            EXPECT_DOUBLE_EQ(x, h.actual_); \\\n        } \\\n    }\n\n    TEST_DOUBLE(fullPrecision, \"0.0\", 0.0);\n    TEST_DOUBLE(fullPrecision, \"-0.0\", -0.0); // For checking issue #289\n    TEST_DOUBLE(fullPrecision, \"0e100\", 0.0); // For checking issue #1249\n    TEST_DOUBLE(fullPrecision, \"1.0\", 1.0);\n    TEST_DOUBLE(fullPrecision, \"-1.0\", -1.0);\n    TEST_DOUBLE(fullPrecision, \"1.5\", 1.5);\n    TEST_DOUBLE(fullPrecision, \"-1.5\", -1.5);\n    TEST_DOUBLE(fullPrecision, \"3.1416\", 3.1416);\n    TEST_DOUBLE(fullPrecision, \"1E10\", 1E10);\n    TEST_DOUBLE(fullPrecision, \"1e10\", 1e10);\n    TEST_DOUBLE(fullPrecision, \"1E+10\", 1E+10);\n    TEST_DOUBLE(fullPrecision, \"1E-10\", 1E-10);\n    TEST_DOUBLE(fullPrecision, \"-1E10\", -1E10);\n    TEST_DOUBLE(fullPrecision, \"-1e10\", -1e10);\n    TEST_DOUBLE(fullPrecision, \"-1E+10\", -1E+10);\n    TEST_DOUBLE(fullPrecision, \"-1E-10\", -1E-10);\n    TEST_DOUBLE(fullPrecision, \"1.234E+10\", 1.234E+10);\n    TEST_DOUBLE(fullPrecision, \"1.234E-10\", 1.234E-10);\n    
TEST_DOUBLE(fullPrecision, \"1.79769e+308\", 1.79769e+308);\n    TEST_DOUBLE(fullPrecision, \"2.22507e-308\", 2.22507e-308);\n    TEST_DOUBLE(fullPrecision, \"-1.79769e+308\", -1.79769e+308);\n    TEST_DOUBLE(fullPrecision, \"-2.22507e-308\", -2.22507e-308);\n    TEST_DOUBLE(fullPrecision, \"4.9406564584124654e-324\", 4.9406564584124654e-324); // minimum denormal\n    TEST_DOUBLE(fullPrecision, \"2.2250738585072009e-308\", 2.2250738585072009e-308); // Max subnormal double\n    TEST_DOUBLE(fullPrecision, \"2.2250738585072014e-308\", 2.2250738585072014e-308); // Min normal positive double\n    TEST_DOUBLE(fullPrecision, \"1.7976931348623157e+308\", 1.7976931348623157e+308); // Max double\n    TEST_DOUBLE(fullPrecision, \"1e-10000\", 0.0);                                    // must underflow\n    TEST_DOUBLE(fullPrecision, \"18446744073709551616\", 18446744073709551616.0);     // 2^64 (max of uint64_t + 1, force to use double)\n    TEST_DOUBLE(fullPrecision, \"-9223372036854775809\", -9223372036854775809.0);     // -2^63 - 1 (min of int64_t - 1, force to use double)\n    TEST_DOUBLE(fullPrecision, \"0.9868011474609375\", 0.9868011474609375);           // https://github.com/Tencent/rapidjson/issues/120\n    TEST_DOUBLE(fullPrecision, \"123e34\", 123e34);                                   // Fast Path Cases In Disguise\n    TEST_DOUBLE(fullPrecision, \"45913141877270640000.0\", 45913141877270640000.0);\n    TEST_DOUBLE(fullPrecision, \"2.2250738585072011e-308\", 2.2250738585072011e-308); // http://www.exploringbinary.com/php-hangs-on-numeric-value-2-2250738585072011e-308/\n    TEST_DOUBLE(fullPrecision, \"1e-00011111111111\", 0.0);                           // Issue #313\n    TEST_DOUBLE(fullPrecision, \"-1e-00011111111111\", -0.0);\n    TEST_DOUBLE(fullPrecision, \"1e-214748363\", 0.0);                                  // Maximum supported negative exponent\n    TEST_DOUBLE(fullPrecision, \"1e-214748364\", 0.0);\n    TEST_DOUBLE(fullPrecision, \"1e-21474836311\", 
0.0);\n    TEST_DOUBLE(fullPrecision, \"1.00000000001e-2147483638\", 0.0);\n    TEST_DOUBLE(fullPrecision, \"0.017976931348623157e+310\", 1.7976931348623157e+308); // Max double in another form\n    TEST_DOUBLE(fullPrecision, \"128.74836467836484838364836483643636483648e-336\", 0.0); // Issue #1251\n\n    // Since\n    // abs((2^-1022 - 2^-1074) - 2.2250738585072012e-308) = 3.109754131239141401123495768877590405345064751974375599... x 10^-324\n    // abs((2^-1022) - 2.2250738585072012e-308) = 1.830902327173324040642192159804623318305533274168872044... x 10 ^ -324\n    // So 2.2250738585072012e-308 should round to 2^-1022 = 2.2250738585072014e-308\n    TEST_DOUBLE(fullPrecision, \"2.2250738585072012e-308\", 2.2250738585072014e-308); // http://www.exploringbinary.com/java-hangs-when-converting-2-2250738585072012e-308/\n\n    // More closer to normal/subnormal boundary\n    // boundary = 2^-1022 - 2^-1075 = 2.225073858507201136057409796709131975934819546351645648... x 10^-308\n    TEST_DOUBLE(fullPrecision, \"2.22507385850720113605740979670913197593481954635164564e-308\", 2.2250738585072009e-308);\n    TEST_DOUBLE(fullPrecision, \"2.22507385850720113605740979670913197593481954635164565e-308\", 2.2250738585072014e-308);\n\n    // 1.0 is in (1.0 - 2^-54, 1.0 + 2^-53)\n    // 1.0 - 2^-54 = 0.999999999999999944488848768742172978818416595458984375\n    TEST_DOUBLE(fullPrecision, \"0.999999999999999944488848768742172978818416595458984375\", 1.0); // round to even\n    TEST_DOUBLE(fullPrecision, \"0.999999999999999944488848768742172978818416595458984374\", 0.99999999999999989); // previous double\n    TEST_DOUBLE(fullPrecision, \"0.999999999999999944488848768742172978818416595458984376\", 1.0); // next double\n    // 1.0 + 2^-53 = 1.00000000000000011102230246251565404236316680908203125\n    TEST_DOUBLE(fullPrecision, \"1.00000000000000011102230246251565404236316680908203125\", 1.0); // round to even\n    TEST_DOUBLE(fullPrecision, 
\"1.00000000000000011102230246251565404236316680908203124\", 1.0); // previous double\n    TEST_DOUBLE(fullPrecision, \"1.00000000000000011102230246251565404236316680908203126\", 1.00000000000000022); // next double\n\n    // Numbers from https://github.com/floitsch/double-conversion/blob/master/test/cctest/test-strtod.cc\n\n    TEST_DOUBLE(fullPrecision, \"72057594037927928.0\", 72057594037927928.0);\n    TEST_DOUBLE(fullPrecision, \"72057594037927936.0\", 72057594037927936.0);\n    TEST_DOUBLE(fullPrecision, \"72057594037927932.0\", 72057594037927936.0);\n    TEST_DOUBLE(fullPrecision, \"7205759403792793199999e-5\", 72057594037927928.0);\n    TEST_DOUBLE(fullPrecision, \"7205759403792793200001e-5\", 72057594037927936.0);\n\n    TEST_DOUBLE(fullPrecision, \"9223372036854774784.0\", 9223372036854774784.0);\n    TEST_DOUBLE(fullPrecision, \"9223372036854775808.0\", 9223372036854775808.0);\n    TEST_DOUBLE(fullPrecision, \"9223372036854775296.0\", 9223372036854775808.0);\n    TEST_DOUBLE(fullPrecision, \"922337203685477529599999e-5\", 9223372036854774784.0);\n    TEST_DOUBLE(fullPrecision, \"922337203685477529600001e-5\", 9223372036854775808.0);\n\n    TEST_DOUBLE(fullPrecision, \"10141204801825834086073718800384\", 10141204801825834086073718800384.0);\n    TEST_DOUBLE(fullPrecision, \"10141204801825835211973625643008\", 10141204801825835211973625643008.0);\n    TEST_DOUBLE(fullPrecision, \"10141204801825834649023672221696\", 10141204801825835211973625643008.0);\n    TEST_DOUBLE(fullPrecision, \"1014120480182583464902367222169599999e-5\", 10141204801825834086073718800384.0);\n    TEST_DOUBLE(fullPrecision, \"1014120480182583464902367222169600001e-5\", 10141204801825835211973625643008.0);\n\n    TEST_DOUBLE(fullPrecision, \"5708990770823838890407843763683279797179383808\", 5708990770823838890407843763683279797179383808.0);\n    TEST_DOUBLE(fullPrecision, \"5708990770823839524233143877797980545530986496\", 5708990770823839524233143877797980545530986496.0);\n    
TEST_DOUBLE(fullPrecision, \"5708990770823839207320493820740630171355185152\", 5708990770823839524233143877797980545530986496.0);\n    TEST_DOUBLE(fullPrecision, \"5708990770823839207320493820740630171355185151999e-3\", 5708990770823838890407843763683279797179383808.0);\n    TEST_DOUBLE(fullPrecision, \"5708990770823839207320493820740630171355185152001e-3\", 5708990770823839524233143877797980545530986496.0);\n\n    {\n        char n1e308[310];   // '1' followed by 308 '0'\n        n1e308[0] = '1';\n        for (int i = 1; i < 309; i++)\n            n1e308[i] = '0';\n        n1e308[309] = '\\0';\n        TEST_DOUBLE(fullPrecision, n1e308, 1E308);\n    }\n\n    // Cover trimming\n    TEST_DOUBLE(fullPrecision,\n\"2.22507385850720113605740979670913197593481954635164564802342610972482222202107694551652952390813508\"\n\"7914149158913039621106870086438694594645527657207407820621743379988141063267329253552286881372149012\"\n\"9811224514518898490572223072852551331557550159143974763979834118019993239625482890171070818506906306\"\n\"6665599493827577257201576306269066333264756530000924588831643303777979186961204949739037782970490505\"\n\"1080609940730262937128958950003583799967207254304360284078895771796150945516748243471030702609144621\"\n\"5722898802581825451803257070188608721131280795122334262883686223215037756666225039825343359745688844\"\n\"2390026549819838548794829220689472168983109969836584681402285424333066033985088644580400103493397042\"\n\"7567186443383770486037861622771738545623065874679014086723327636718751234567890123456789012345678901\"\n\"e-308\",\n    2.2250738585072014e-308);\n\n    {\n        static const unsigned count = 100; // Tested with 1000000 locally\n        Random r;\n        Reader reader; // Reusing reader to prevent heap allocation\n\n        // Exhaustively test different exponents with random significand\n        for (uint64_t exp = 0; exp < 2047; exp++) {\n            for (unsigned i = 0; i < count; i++) {\n                // 
Need to call r() in two statements for cross-platform coherent sequence.\n                uint64_t u = (exp << 52) | uint64_t(r() & 0x000FFFFF) << 32;\n                u |= uint64_t(r());\n                internal::Double d = internal::Double(u);\n\n                char buffer[32];\n                *internal::dtoa(d.Value(), buffer) = '\\0';\n\n                StringStream s(buffer);\n                ParseDoubleHandler h;\n                ASSERT_EQ(kParseErrorNone, reader.Parse<fullPrecision ? kParseFullPrecisionFlag : 0>(s, h).Code());\n                EXPECT_EQ(1u, h.step_);\n                internal::Double a(h.actual_);\n                if (fullPrecision) {\n                    EXPECT_EQ(d.Uint64Value(), a.Uint64Value());\n                    if (d.Uint64Value() != a.Uint64Value())\n                        printf(\"  String: %s\\n  Actual: %.17g\\nExpected: %.17g\\n\", buffer, h.actual_, d.Value());\n                }\n                else {\n                    EXPECT_EQ(d.Sign(), a.Sign()); // for 0.0 != -0.0\n                    EXPECT_DOUBLE_EQ(d.Value(), h.actual_);\n                }\n            }\n        }\n    }\n\n    // Issue #340\n    TEST_DOUBLE(fullPrecision, \"7.450580596923828e-9\", 7.450580596923828e-9);\n    {\n        internal::Double d(1.0);\n        for (int i = 0; i < 324; i++) {\n            char buffer[32];\n            *internal::dtoa(d.Value(), buffer) = '\\0';\n\n            StringStream s(buffer);\n            ParseDoubleHandler h;\n            Reader reader;\n            ASSERT_EQ(kParseErrorNone, reader.Parse<fullPrecision ? 
kParseFullPrecisionFlag : 0>(s, h).Code());\n            EXPECT_EQ(1u, h.step_);\n            internal::Double a(h.actual_);\n            if (fullPrecision) {\n                EXPECT_EQ(d.Uint64Value(), a.Uint64Value());\n                if (d.Uint64Value() != a.Uint64Value())\n                    printf(\"  String: %s\\n  Actual: %.17g\\nExpected: %.17g\\n\", buffer, h.actual_, d.Value());\n            }\n            else {\n                EXPECT_EQ(d.Sign(), a.Sign()); // for 0.0 != -0.0\n                EXPECT_DOUBLE_EQ(d.Value(), h.actual_);\n            }\n\n\n            d = d.Value() * 0.5;\n        }\n    }\n\n    // Issue 1249\n    TEST_DOUBLE(fullPrecision, \"0e100\", 0.0);\n\n    // Issue 1251\n    TEST_DOUBLE(fullPrecision, \"128.74836467836484838364836483643636483648e-336\", 0.0);\n\n    // Issue 1256\n    TEST_DOUBLE(fullPrecision,\n        \"6223372036854775296.1701512723685473547372536854755293372036854685477\"\n        \"529752233737201701512337200972013723685473123372036872036854236854737\"\n        \"247372368372367752975258547752975254729752547372368737201701512354737\"\n        \"83723677529752585477247372368372368547354737253685475529752\",\n        6223372036854775808.0);\n\n#if 0\n    // Test (length + exponent) overflow\n    TEST_DOUBLE(fullPrecision, \"0e+2147483647\", 0.0);\n    TEST_DOUBLE(fullPrecision, \"0e-2147483648\", 0.0);\n    TEST_DOUBLE(fullPrecision, \"1e-2147483648\", 0.0);\n    TEST_DOUBLE(fullPrecision, \"0e+9223372036854775807\", 0.0);\n    TEST_DOUBLE(fullPrecision, \"0e-9223372036854775808\", 0.0);\n#endif\n\n    if (fullPrecision)\n    {\n        TEST_DOUBLE(fullPrecision, \"1e-325\", 0.0);\n        TEST_DOUBLE(fullPrecision, \"1e-324\", 0.0);\n        TEST_DOUBLE(fullPrecision, \"2e-324\", 0.0);\n        TEST_DOUBLE(fullPrecision, \"2.4703282292062327e-324\", 0.0);\n        TEST_DOUBLE(fullPrecision, \"2.4703282292062328e-324\", 5e-324);\n        TEST_DOUBLE(fullPrecision, \"2.48e-324\",5e-324);\n        
TEST_DOUBLE(fullPrecision, \"2.5e-324\", 5e-324);\n\n        // Slightly above max-normal\n        TEST_DOUBLE(fullPrecision, \"1.7976931348623158e+308\", 1.7976931348623158e+308);\n\n        TEST_DOUBLE(fullPrecision,\n            \"17976931348623157081452742373170435679807056752584499659891747680315726\"\n            \"07800285387605895586327668781715404589535143824642343213268894641827684\"\n            \"67546703537516986049910576551282076245490090389328944075868508455133942\"\n            \"30458323690322294816580855933212334827479782620414472316873817718091929\"\n            \"9881250404026184124858368\",\n            (std::numeric_limits<double>::max)());\n\n        TEST_DOUBLE(fullPrecision,\n            \"243546080556034731077856379609316893158278902575447060151047\"\n            \"212703405344938119816206067372775299130836050315842578309818\"\n            \"316450894337978612745889730079163798234256495613858256849283\"\n            \"467066859489192118352020514036083287319232435355752493038825\"\n            \"828481044358810649108367633313557305310641892225870327827273\"\n            \"41408256.000000\",\n            2.4354608055603473e+307);\n        // 9007199254740991 * 2^971 (max normal)\n        TEST_DOUBLE(fullPrecision,\n            \"1.797693134862315708145274237317043567980705675258449965989174768031572607800285\"\n            \"38760589558632766878171540458953514382464234321326889464182768467546703537516986\"\n            \"04991057655128207624549009038932894407586850845513394230458323690322294816580855\"\n            \"9332123348274797826204144723168738177180919299881250404026184124858368e+308\",\n            1.797693134862315708e+308 //        0x1.fffffffffffffp1023\n            );\n#if 0\n        // TODO:\n        // Should work at least in full-precision mode...\n        TEST_DOUBLE(fullPrecision,\n            \"0.00000000000000000000000000000000000000000000000000000000000\"\n            
\"0000000000000000000000000000000000000000000000000000000000000\"\n            \"0000000000000000000000000000000000000000000000000000000000000\"\n            \"0000000000000000000000000000000000000000000000000000000000000\"\n            \"0000000000000000000000000000000000000000000000000000000000000\"\n            \"0000000000000000000024703282292062327208828439643411068618252\"\n            \"9901307162382212792841250337753635104375932649918180817996189\"\n            \"8982823477228588654633283551779698981993873980053909390631503\"\n            \"5659515570226392290858392449105184435931802849936536152500319\"\n            \"3704576782492193656236698636584807570015857692699037063119282\"\n            \"7955855133292783433840935197801553124659726357957462276646527\"\n            \"2827220056374006485499977096599470454020828166226237857393450\"\n            \"7363390079677619305775067401763246736009689513405355374585166\"\n            \"6113422376667860416215968046191446729184030053005753084904876\"\n            \"5391711386591646239524912623653881879636239373280423891018672\"\n            \"3484976682350898633885879256283027559956575244555072551893136\"\n            \"9083625477918694866799496832404970582102851318545139621383772\"\n            \"2826145437693412532098591327667236328125\",\n            0.0);\n#endif\n        // 9007199254740991 * 2^-1074 = (2^53 - 1) * 2^-1074\n        TEST_DOUBLE(fullPrecision,\n            \"4.450147717014402272114819593418263951869639092703291296046852219449644444042153\"\n            \"89103305904781627017582829831782607924221374017287738918929105531441481564124348\"\n            \"67599762821265346585071045737627442980259622449029037796981144446145705102663115\"\n            \"10031828794952795966823603998647925096578034214163701381261333311989876551545144\"\n            \"03152612538132666529513060001849177663286607555958373922409899478075565940981010\"\n            
\"21612198814605258742579179000071675999344145086087205681577915435923018910334964\"\n            \"86942061405218289243144579760516365090360651414037721744226256159024466852576737\"\n            \"24464300755133324500796506867194913776884780053099639677097589658441378944337966\"\n            \"21993967316936280457084866613206797017728916080020698679408551343728867675409720\"\n            \"757232455434770912461317493580281734466552734375e-308\",\n            4.450147717014402272e-308 //        0x1.fffffffffffffp-1022\n            );\n        // 9007199254740990 * 2^-1074\n        TEST_DOUBLE(fullPrecision,\n            \"4.450147717014401778049173752171719775300846224481918930987049605124880018456471\"\n            \"39035755177760751831052846195619008686241717547743167145836439860405887584484471\"\n            \"19639655002484083577939142623582164522087943959208000909794783876158397872163051\"\n            \"22622675229968408654350206725478309956546318828765627255022767720818849892988457\"\n            \"26333908582101604036318532842699932130356061901518261174396928478121372742040102\"\n            \"17446565569357687263889031732270082446958029584739170416643195242132750803227473\"\n            \"16608838720742955671061336566907126801014814608027120593609275183716632624844904\"\n            \"31985250929886016737037234388448352929102742708402644340627409931664203093081360\"\n            \"70794835812045179006047003875039546061891526346421705014598610179523165038319441\"\n            \"51446491086954182492263498716056346893310546875e-308\",\n            4.450147717014401778e-308 //        0x1.ffffffffffffep-1022\n            );\n        // half way between the two numbers above.\n        // round to nearest even.\n        TEST_DOUBLE(fullPrecision,\n            \"4.450147717014402025081996672794991863585242658592605113516950912287262231249312\"\n            \"64069530541271189424317838013700808305231545782515453032382772695923684574304409\"\n            
\"93619708911874715081505094180604803751173783204118519353387964161152051487413083\"\n            \"16327252012460602310586905362063117526562176521464664318142050516404363222266800\"\n            \"64743260560117135282915796422274554896821334728738317548403413978098469341510556\"\n            \"19529382191981473003234105366170879223151087335413188049110555339027884856781219\"\n            \"01775450062980622457102958163711745945687733011032421168917765671370549738710820\"\n            \"78224775842509670618916870627821633352993761380751142008862499795052791018709663\"\n            \"46394401564490729731565935244123171539810221213221201847003580761626016356864581\"\n            \"1358486831521563686919762403704226016998291015625e-308\",\n            4.450147717014401778e-308 //        0x1.ffffffffffffep-1022\n            );\n        TEST_DOUBLE(fullPrecision,\n            \"4.450147717014402025081996672794991863585242658592605113516950912287262231249312\"\n            \"64069530541271189424317838013700808305231545782515453032382772695923684574304409\"\n            \"93619708911874715081505094180604803751173783204118519353387964161152051487413083\"\n            \"16327252012460602310586905362063117526562176521464664318142050516404363222266800\"\n            \"64743260560117135282915796422274554896821334728738317548403413978098469341510556\"\n            \"19529382191981473003234105366170879223151087335413188049110555339027884856781219\"\n            \"01775450062980622457102958163711745945687733011032421168917765671370549738710820\"\n            \"78224775842509670618916870627821633352993761380751142008862499795052791018709663\"\n            \"46394401564490729731565935244123171539810221213221201847003580761626016356864581\"\n            \"13584868315215636869197624037042260169982910156250000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            
\"00000000000000000000000000000000000000000000000000000000000000000000000000000000e-308\",\n            4.450147717014401778e-308 //        0x1.ffffffffffffep-1022\n            );\n#if 0\n        // ... round up\n        // TODO:\n        // Should work at least in full-precision mode...\n        TEST_DOUBLE(fullPrecision,\n            \"4.450147717014402025081996672794991863585242658592605113516950912287262231249312\"\n            \"64069530541271189424317838013700808305231545782515453032382772695923684574304409\"\n            \"93619708911874715081505094180604803751173783204118519353387964161152051487413083\"\n            \"16327252012460602310586905362063117526562176521464664318142050516404363222266800\"\n            \"64743260560117135282915796422274554896821334728738317548403413978098469341510556\"\n            \"19529382191981473003234105366170879223151087335413188049110555339027884856781219\"\n            \"01775450062980622457102958163711745945687733011032421168917765671370549738710820\"\n            \"78224775842509670618916870627821633352993761380751142008862499795052791018709663\"\n            \"46394401564490729731565935244123171539810221213221201847003580761626016356864581\"\n            \"13584868315215636869197624037042260169982910156250000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000001e-308\",\n            4.450147717014402272e-308 //        0x1.fffffffffffffp-1022\n            );\n#endif\n        // ... 
round down\n        TEST_DOUBLE(fullPrecision,\n            \"4.450147717014402025081996672794991863585242658592605113516950912287262231249312\"\n            \"64069530541271189424317838013700808305231545782515453032382772695923684574304409\"\n            \"93619708911874715081505094180604803751173783204118519353387964161152051487413083\"\n            \"16327252012460602310586905362063117526562176521464664318142050516404363222266800\"\n            \"64743260560117135282915796422274554896821334728738317548403413978098469341510556\"\n            \"19529382191981473003234105366170879223151087335413188049110555339027884856781219\"\n            \"01775450062980622457102958163711745945687733011032421168917765671370549738710820\"\n            \"78224775842509670618916870627821633352993761380751142008862499795052791018709663\"\n            \"46394401564490729731565935244123171539810221213221201847003580761626016356864581\"\n            \"13584868315215636869197624037042260169982910156249999999999999999999999999999999\"\n            \"99999999999999999999999999999999999999999999999999999999999999999999999999999999\"\n            \"99999999999999999999999999999999999999999999999999999999999999999999999999999999e-308\",\n            4.450147717014401778e-308 //        0x1.ffffffffffffep-1022\n            );\n        // Slightly below half way between max-normal and infinity.\n        // Should round down.\n        TEST_DOUBLE(fullPrecision,\n            \"1.797693134862315807937289714053034150799341327100378269361737789804449682927647\"\n            \"50946649017977587207096330286416692887910946555547851940402630657488671505820681\"\n            \"90890200070838367627385484581771153176447573027006985557136695962284291481986083\"\n            \"49364752927190741684443655107043427115596995080930428801779041744977919999999999\"\n            \"99999999999999999999999999999999999999999999999999999999999999999999999999999999\"\n            
\"99999999999999999999999999999999999999999999999999999999999999999999999999999999\"\n            \"99999999999999999999999999999999999999999999999999999999999999999999999999999999\"\n            \"99999999999999999999999999999999999999999999999999999999999999999999999999999999\"\n            \"99999999999999999999999999999999999999999999999999999999999999999999999999999999\"\n            \"99999999999999999999999999999999999999999999999999999999999999999999999999999999\"\n            \"99999999999999999999999999999999999999999999999999999999999999999999999999999999\"\n            \"99999999999999999999999999999999999999999999999999999999999999999999999999999999\"\n            \"99999999999999999999999999999999999999999999999999999999999999999999999999999999\"\n            \"99999999999999999999999999999999999999999999999999999999999999999999999999999999\"\n            \"99999999999999999999999999999999999999999999999999999999999999999999999999999999\"\n            \"99999999999999999999999999999999999999999999999999999999999999999999999999999999e+308\",\n            1.797693134862315708e+308 //        0x1.fffffffffffffp1023\n            );\n    }\n\n#undef TEST_DOUBLE\n}\n\nTEST(Reader, ParseNumber_NormalPrecisionDouble) {\n    TestParseDouble<false>();\n}\n\nTEST(Reader, ParseNumber_FullPrecisionDouble) {\n    TestParseDouble<true>();\n}\n\nTEST(Reader, ParseNumber_NormalPrecisionError) {\n    static unsigned count = 1000000;\n    Random r;\n\n    double ulpSum = 0.0;\n    double ulpMax = 0.0;\n    for (unsigned i = 0; i < count; i++) {\n        internal::Double e, a;\n        do {\n            // Need to call r() in two statements for cross-platform coherent sequence.\n            uint64_t u = uint64_t(r()) << 32;\n            u |= uint64_t(r());\n            e = u;\n        } while (e.IsNan() || e.IsInf() || !e.IsNormal());\n\n        char buffer[32];\n        *internal::dtoa(e.Value(), buffer) = '\\0';\n\n        StringStream s(buffer);\n        
ParseDoubleHandler h;\n        Reader reader;\n        ASSERT_EQ(kParseErrorNone, reader.Parse(s, h).Code());\n        EXPECT_EQ(1u, h.step_);\n\n        a = h.actual_;\n        uint64_t bias1 = e.ToBias();\n        uint64_t bias2 = a.ToBias();\n        double ulp = static_cast<double>(bias1 >= bias2 ? bias1 - bias2 : bias2 - bias1);\n        ulpMax = (std::max)(ulpMax, ulp);\n        ulpSum += ulp;\n    }\n    printf(\"ULP Average = %g, Max = %g \\n\", ulpSum / count, ulpMax);\n}\n\ntemplate<bool fullPrecision>\nstatic void TestParseNumberError() {\n#define TEST_NUMBER_ERROR(errorCode, str, errorOffset, streamPos) \\\n    { \\\n        char buffer[2048]; \\\n        ASSERT_LT(std::strlen(str), 2048u); \\\n        sprintf(buffer, \"%s\", str); \\\n        InsituStringStream s(buffer); \\\n        BaseReaderHandler<> h; \\\n        Reader reader; \\\n        EXPECT_FALSE(reader.Parse<fullPrecision ? kParseFullPrecisionFlag : 0>(s, h)); \\\n        EXPECT_EQ(errorCode, reader.GetParseErrorCode());\\\n        EXPECT_EQ(errorOffset, reader.GetErrorOffset());\\\n        EXPECT_EQ(streamPos, s.Tell());\\\n    }\n\n    // Number too big to be stored in double.\n    {\n        char n1e309[311];   // '1' followed by 309 '0'\n        n1e309[0] = '1';\n        for (int i = 1; i < 310; i++)\n            n1e309[i] = '0';\n        n1e309[310] = '\\0';\n        TEST_NUMBER_ERROR(kParseErrorNumberTooBig, n1e309, 0u, 310u);\n    }\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig, \"1e309\", 0u, 5u);\n\n    // Miss fraction part in number.\n    TEST_NUMBER_ERROR(kParseErrorNumberMissFraction, \"1.\", 2u, 2u);\n    TEST_NUMBER_ERROR(kParseErrorNumberMissFraction, \"1.a\", 2u, 2u);\n\n    // Miss exponent in number.\n    TEST_NUMBER_ERROR(kParseErrorNumberMissExponent, \"1e\", 2u, 2u);\n    TEST_NUMBER_ERROR(kParseErrorNumberMissExponent, \"1e_\", 2u, 2u);\n\n    // Issue 849\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig, \"1.8e308\", 0u, 7u);\n    
TEST_NUMBER_ERROR(kParseErrorNumberTooBig, \"5e308\", 0u, 5u);\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig, \"1e309\", 0u, 5u);\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig, \"1.0e310\", 0u, 7u);\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig, \"1.00e310\", 0u, 8u);\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig, \"-1.8e308\", 0u, 8u);\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig, \"-1e309\", 0u, 6u);\n\n    // Issue 1253\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig, \"2e308\", 0u, 5u);\n\n    // Issue 1259\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig,\n        \"88474320368547737236837236775298547354737253685475547552933720368546854775297525\"\n        \"29337203685468547770151233720097201372368547312337203687203685423685123372036872\"\n        \"03685473724737236837236775297525854775297525472975254737236873720170151235473783\"\n        \"7236737247372368772473723683723456789012E66\", 0u, 283u);\n\n#if 0\n    // Test (length + exponent) overflow\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig, \"1e+2147483647\", 0u, 13u);\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig, \"1e+9223372036854775807\", 0u, 22u);\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig, \"1e+10000\", 0u, 8u);\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig, \"1e+50000\", 0u, 8u);\n#endif\n\n    // 9007199254740992 * 2^971 (\"infinity\")\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig,\n        \"1.797693134862315907729305190789024733617976978942306572734300811577326758055009\"\n        \"63132708477322407536021120113879871393357658789768814416622492847430639474124377\"\n        \"76789342486548527630221960124609411945308295208500576883815068234246288147391311\"\n        \"0540827237163350510684586298239947245938479716304835356329624224137216e+308\", 0u, 315u);\n\n    // TODO:\n    // These tests (currently) fail in normal-precision mode\n    if (fullPrecision)\n    {\n        // Half way between max-normal and infinity\n        // Should round to infinity in nearest-even mode.\n        
TEST_NUMBER_ERROR(kParseErrorNumberTooBig,\n            \"1.797693134862315807937289714053034150799341327100378269361737789804449682927647\"\n            \"50946649017977587207096330286416692887910946555547851940402630657488671505820681\"\n            \"90890200070838367627385484581771153176447573027006985557136695962284291481986083\"\n            \"49364752927190741684443655107043427115596995080930428801779041744977920000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000e+308\", 0u, 1125u);\n        // ...round up\n        TEST_NUMBER_ERROR(kParseErrorNumberTooBig,\n            \"1.797693134862315807937289714053034150799341327100378269361737789804449682927647\"\n            \"50946649017977587207096330286416692887910946555547851940402630657488671505820681\"\n            \"90890200070838367627385484581771153176447573027006985557136695962284291481986083\"\n            \"49364752927190741684443655107043427115596995080930428801779041744977920000000000\"\n            
\"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n            \"00000000000000000000000000000000000000000000000000000000000000000000000000000001e+308\", 0u, 1205u);\n    }\n\n    TEST_NUMBER_ERROR(kParseErrorNumberTooBig,\n        \"10000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n        \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n        \"00000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n        \"0000000000000000000000000000000000000000000000000000000000000000000001\", 0u, 310u);\n\n#undef TEST_NUMBER_ERROR\n}\n\nTEST(Reader, ParseNumberError_NormalPrecisionDouble) {\n    TestParseNumberError<false>();\n}\n\nTEST(Reader, ParseNumberError_FullPrecisionDouble) {\n    TestParseNumberError<true>();\n}\n\ntemplate <typename Encoding>\nstruct ParseStringHandler : BaseReaderHandler<Encoding, ParseStringHandler<Encoding> > {\n    ParseStringHandler() : str_(0), length_(0), copy_() {}\n    ~ParseStringHandler() { EXPECT_TRUE(str_ != 0); if (copy_) 
free(const_cast<typename Encoding::Ch*>(str_)); }\n\n    ParseStringHandler(const ParseStringHandler&);\n    ParseStringHandler& operator=(const ParseStringHandler&);\n\n    bool Default() { ADD_FAILURE(); return false; }\n    bool String(const typename Encoding::Ch* str, size_t length, bool copy) {\n        EXPECT_EQ(0, str_);\n        if (copy) {\n            str_ = static_cast<typename Encoding::Ch*>(malloc((length + 1) * sizeof(typename Encoding::Ch)));\n            memcpy(const_cast<typename Encoding::Ch*>(str_), str, (length + 1) * sizeof(typename Encoding::Ch));\n        }\n        else\n            str_ = str;\n        length_ = length;\n        copy_ = copy;\n        return true;\n    }\n\n    const typename Encoding::Ch* str_;\n    size_t length_;\n    bool copy_;\n};\n\nTEST(Reader, ParseString) {\n#define TEST_STRING(Encoding, e, x) \\\n    { \\\n        Encoding::Ch* buffer = StrDup(x); \\\n        GenericInsituStringStream<Encoding> is(buffer); \\\n        ParseStringHandler<Encoding> h; \\\n        GenericReader<Encoding, Encoding> reader; \\\n        reader.Parse<kParseInsituFlag | kParseValidateEncodingFlag>(is, h); \\\n        EXPECT_EQ(0, StrCmp<Encoding::Ch>(e, h.str_)); \\\n        EXPECT_EQ(StrLen(e), h.length_); \\\n        free(buffer); \\\n        GenericStringStream<Encoding> s(x); \\\n        ParseStringHandler<Encoding> h2; \\\n        GenericReader<Encoding, Encoding> reader2; \\\n        reader2.Parse(s, h2); \\\n        EXPECT_EQ(0, StrCmp<Encoding::Ch>(e, h2.str_)); \\\n        EXPECT_EQ(StrLen(e), h2.length_); \\\n    }\n\n    // String constant L\"\\xXX\" can only specify character code in bytes, which is not endianness-neutral.\n    // And old compiler does not support u\"\" and U\"\" string literal. 
So here specify string literal by array of Ch.\n    // In addition, GCC 4.8 generates -Wnarrowing warnings when character code >= 128 are assigned to signed integer types.\n    // Therefore, utype is added for declaring unsigned array, and then cast it to Encoding::Ch.\n#define ARRAY(...) { __VA_ARGS__ }\n#define TEST_STRINGARRAY(Encoding, utype, array, x) \\\n    { \\\n        static const utype ue[] = array; \\\n        static const Encoding::Ch* e = reinterpret_cast<const Encoding::Ch *>(&ue[0]); \\\n        TEST_STRING(Encoding, e, x); \\\n    }\n\n#define TEST_STRINGARRAY2(Encoding, utype, earray, xarray) \\\n    { \\\n        static const utype ue[] = earray; \\\n        static const utype xe[] = xarray; \\\n        static const Encoding::Ch* e = reinterpret_cast<const Encoding::Ch *>(&ue[0]); \\\n        static const Encoding::Ch* x = reinterpret_cast<const Encoding::Ch *>(&xe[0]); \\\n        TEST_STRING(Encoding, e, x); \\\n    }\n\n    TEST_STRING(UTF8<>, \"\", \"\\\"\\\"\");\n    TEST_STRING(UTF8<>, \"Hello\", \"\\\"Hello\\\"\");\n    TEST_STRING(UTF8<>, \"Hello\\nWorld\", \"\\\"Hello\\\\nWorld\\\"\");\n    TEST_STRING(UTF8<>, \"\\\"\\\\/\\b\\f\\n\\r\\t\", \"\\\"\\\\\\\"\\\\\\\\/\\\\b\\\\f\\\\n\\\\r\\\\t\\\"\");\n    TEST_STRING(UTF8<>, \"\\x24\", \"\\\"\\\\u0024\\\"\");         // Dollar sign U+0024\n    TEST_STRING(UTF8<>, \"\\xC2\\xA2\", \"\\\"\\\\u00A2\\\"\");     // Cents sign U+00A2\n    TEST_STRING(UTF8<>, \"\\xE2\\x82\\xAC\", \"\\\"\\\\u20AC\\\"\"); // Euro sign U+20AC\n    TEST_STRING(UTF8<>, \"\\xF0\\x9D\\x84\\x9E\", \"\\\"\\\\uD834\\\\uDD1E\\\"\");  // G clef sign U+1D11E\n\n    // UTF16\n    TEST_STRING(UTF16<>, L\"\", L\"\\\"\\\"\");\n    TEST_STRING(UTF16<>, L\"Hello\", L\"\\\"Hello\\\"\");\n    TEST_STRING(UTF16<>, L\"Hello\\nWorld\", L\"\\\"Hello\\\\nWorld\\\"\");\n    TEST_STRING(UTF16<>, L\"\\\"\\\\/\\b\\f\\n\\r\\t\", L\"\\\"\\\\\\\"\\\\\\\\/\\\\b\\\\f\\\\n\\\\r\\\\t\\\"\");\n    TEST_STRINGARRAY(UTF16<>, wchar_t, ARRAY(0x0024, 0x0000), 
L\"\\\"\\\\u0024\\\"\");\n    TEST_STRINGARRAY(UTF16<>, wchar_t, ARRAY(0x00A2, 0x0000), L\"\\\"\\\\u00A2\\\"\");  // Cents sign U+00A2\n    TEST_STRINGARRAY(UTF16<>, wchar_t, ARRAY(0x20AC, 0x0000), L\"\\\"\\\\u20AC\\\"\");  // Euro sign U+20AC\n    TEST_STRINGARRAY(UTF16<>, wchar_t, ARRAY(0xD834, 0xDD1E, 0x0000), L\"\\\"\\\\uD834\\\\uDD1E\\\"\");   // G clef sign U+1D11E\n\n    // UTF32\n    TEST_STRINGARRAY2(UTF32<>, unsigned, ARRAY('\\0'), ARRAY('\\\"', '\\\"', '\\0'));\n    TEST_STRINGARRAY2(UTF32<>, unsigned, ARRAY('H', 'e', 'l', 'l', 'o', '\\0'), ARRAY('\\\"', 'H', 'e', 'l', 'l', 'o', '\\\"', '\\0'));\n    TEST_STRINGARRAY2(UTF32<>, unsigned, ARRAY('H', 'e', 'l', 'l', 'o', '\\n', 'W', 'o', 'r', 'l', 'd', '\\0'), ARRAY('\\\"', 'H', 'e', 'l', 'l', 'o', '\\\\', 'n', 'W', 'o', 'r', 'l', 'd', '\\\"', '\\0'));\n    TEST_STRINGARRAY2(UTF32<>, unsigned, ARRAY('\\\"', '\\\\', '/', '\\b', '\\f', '\\n', '\\r', '\\t', '\\0'), ARRAY('\\\"', '\\\\', '\\\"', '\\\\', '\\\\', '/', '\\\\', 'b', '\\\\', 'f', '\\\\', 'n', '\\\\', 'r', '\\\\', 't', '\\\"', '\\0'));\n    TEST_STRINGARRAY2(UTF32<>, unsigned, ARRAY(0x00024, 0x0000), ARRAY('\\\"', '\\\\', 'u', '0', '0', '2', '4', '\\\"', '\\0'));\n    TEST_STRINGARRAY2(UTF32<>, unsigned, ARRAY(0x000A2, 0x0000), ARRAY('\\\"', '\\\\', 'u', '0', '0', 'A', '2', '\\\"', '\\0'));   // Cents sign U+00A2\n    TEST_STRINGARRAY2(UTF32<>, unsigned, ARRAY(0x020AC, 0x0000), ARRAY('\\\"', '\\\\', 'u', '2', '0', 'A', 'C', '\\\"', '\\0'));   // Euro sign U+20AC\n    TEST_STRINGARRAY2(UTF32<>, unsigned, ARRAY(0x1D11E, 0x0000), ARRAY('\\\"', '\\\\', 'u', 'D', '8', '3', '4', '\\\\', 'u', 'D', 'D', '1', 'E', '\\\"', '\\0'));    // G clef sign U+1D11E\n\n#undef TEST_STRINGARRAY\n#undef ARRAY\n#undef TEST_STRING\n\n    // Support of null character in string\n    {\n        StringStream s(\"\\\"Hello\\\\u0000World\\\"\");\n        const char e[] = \"Hello\\0World\";\n        ParseStringHandler<UTF8<> > h;\n        Reader reader;\n        reader.Parse(s, 
h);\n        EXPECT_EQ(0, memcmp(e, h.str_, h.length_ + 1));\n        EXPECT_EQ(11u, h.length_);\n    }\n}\n\nTEST(Reader, ParseString_Transcoding) {\n    const char* x = \"\\\"Hello\\\"\";\n    const wchar_t* e = L\"Hello\";\n    GenericStringStream<UTF8<> > is(x);\n    GenericReader<UTF8<>, UTF16<> > reader;\n    ParseStringHandler<UTF16<> > h;\n    reader.Parse(is, h);\n    EXPECT_EQ(0, StrCmp<UTF16<>::Ch>(e, h.str_));\n    EXPECT_EQ(StrLen(e), h.length_);\n}\n\nTEST(Reader, ParseString_TranscodingWithValidation) {\n    const char* x = \"\\\"Hello\\\"\";\n    const wchar_t* e = L\"Hello\";\n    GenericStringStream<UTF8<> > is(x);\n    GenericReader<UTF8<>, UTF16<> > reader;\n    ParseStringHandler<UTF16<> > h;\n    reader.Parse<kParseValidateEncodingFlag>(is, h);\n    EXPECT_EQ(0, StrCmp<UTF16<>::Ch>(e, h.str_));\n    EXPECT_EQ(StrLen(e), h.length_);\n}\n\nTEST(Reader, ParseString_NonDestructive) {\n    StringStream s(\"\\\"Hello\\\\nWorld\\\"\");\n    ParseStringHandler<UTF8<> > h;\n    Reader reader;\n    reader.Parse(s, h);\n    EXPECT_EQ(0, StrCmp(\"Hello\\nWorld\", h.str_));\n    EXPECT_EQ(11u, h.length_);\n}\n\ntemplate <typename Encoding>\nParseErrorCode TestString(const typename Encoding::Ch* str) {\n    GenericStringStream<Encoding> s(str);\n    BaseReaderHandler<Encoding> h;\n    GenericReader<Encoding, Encoding> reader;\n    reader.template Parse<kParseValidateEncodingFlag>(s, h);\n    return reader.GetParseErrorCode();\n}\n\nTEST(Reader, ParseString_Error) {\n#define TEST_STRING_ERROR(errorCode, str, errorOffset, streamPos)\\\n{\\\n    GenericStringStream<UTF8<> > s(str);\\\n    BaseReaderHandler<UTF8<> > h;\\\n    GenericReader<UTF8<> , UTF8<> > reader;\\\n    reader.Parse<kParseValidateEncodingFlag>(s, h);\\\n    EXPECT_EQ(errorCode, reader.GetParseErrorCode());\\\n    EXPECT_EQ(errorOffset, reader.GetErrorOffset());\\\n    EXPECT_EQ(streamPos, s.Tell());\\\n}\n\n#define ARRAY(...) 
{ __VA_ARGS__ }\n#define TEST_STRINGENCODING_ERROR(Encoding, TargetEncoding, utype, array) \\\n    { \\\n        static const utype ue[] = array; \\\n        static const Encoding::Ch* e = reinterpret_cast<const Encoding::Ch *>(&ue[0]); \\\n        EXPECT_EQ(kParseErrorStringInvalidEncoding, TestString<Encoding>(e));\\\n        /* decode error */\\\n        GenericStringStream<Encoding> s(e);\\\n        BaseReaderHandler<TargetEncoding> h;\\\n        GenericReader<Encoding, TargetEncoding> reader;\\\n        reader.Parse(s, h);\\\n        EXPECT_EQ(kParseErrorStringInvalidEncoding, reader.GetParseErrorCode());\\\n    }\n\n    // Invalid escape character in string.\n    TEST_STRING_ERROR(kParseErrorStringEscapeInvalid, \"[\\\"\\\\a\\\"]\", 2u, 3u);\n\n    // Incorrect hex digit after \\\\u escape in string.\n    TEST_STRING_ERROR(kParseErrorStringUnicodeEscapeInvalidHex, \"[\\\"\\\\uABCG\\\"]\", 2u, 7u);\n\n    // Quotation in \\\\u escape in string (Issue #288)\n    TEST_STRING_ERROR(kParseErrorStringUnicodeEscapeInvalidHex, \"[\\\"\\\\uaaa\\\"]\", 2u, 7u);\n    TEST_STRING_ERROR(kParseErrorStringUnicodeEscapeInvalidHex, \"[\\\"\\\\uD800\\\\uFFF\\\"]\", 2u, 13u);\n\n    // The surrogate pair in string is invalid.\n    TEST_STRING_ERROR(kParseErrorStringUnicodeSurrogateInvalid, \"[\\\"\\\\uD800X\\\"]\", 2u, 8u);\n    TEST_STRING_ERROR(kParseErrorStringUnicodeSurrogateInvalid, \"[\\\"\\\\uD800\\\\uFFFF\\\"]\", 2u, 14u);\n\n    // Single low surrogate pair in string is invalid.\n    TEST_STRING_ERROR(kParseErrorStringUnicodeSurrogateInvalid, \"[\\\"\\\\udc4d\\\"]\", 2u, 8u);\n\n    // Missing a closing quotation mark in string.\n    TEST_STRING_ERROR(kParseErrorStringMissQuotationMark, \"[\\\"Test]\", 7u, 7u);\n\n    // http://www.cl.cam.ac.uk/~mgk25/ucs/examples/UTF-8-test.txt\n\n    // 3  Malformed sequences\n\n    // 3.1 Unexpected continuation bytes\n    {\n         char e[] = { '[', '\\\"', 0, '\\\"', ']', '\\0' };\n         for (unsigned char c = 0x80u; c <= 
0xBFu; c++) {\n            e[2] = static_cast<char>(c);\n            ParseErrorCode error = TestString<UTF8<> >(e);\n            EXPECT_EQ(kParseErrorStringInvalidEncoding, error);\n            if (error != kParseErrorStringInvalidEncoding)\n                std::cout << static_cast<unsigned>(c) << std::endl;\n         }\n    }\n\n    // 3.2 Lonely start characters, 3.5 Impossible bytes\n    {\n        char e[] = { '[', '\\\"', 0, ' ', '\\\"', ']', '\\0' };\n        for (unsigned c = 0xC0u; c <= 0xFFu; c++) {\n            e[2] = static_cast<char>(c);\n            unsigned streamPos;\n            if (c <= 0xC1u)\n                streamPos = 3; // 0xC0 - 0xC1\n            else if (c <= 0xDFu)\n                streamPos = 4; // 0xC2 - 0xDF\n            else if (c <= 0xEFu)\n                streamPos = 5; // 0xE0 - 0xEF\n            else if (c <= 0xF4u)\n                streamPos = 6; // 0xF0 - 0xF4\n            else\n                streamPos = 3; // 0xF5 - 0xFF\n            TEST_STRING_ERROR(kParseErrorStringInvalidEncoding, e, 2u, streamPos);\n        }\n    }\n\n    // 4  Overlong sequences\n\n    // 4.1  Examples of an overlong ASCII character\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xC0u, 0xAFu, '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xE0u, 0x80u, 0xAFu, '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xF0u, 0x80u, 0x80u, 0xAFu, '\\\"', ']', '\\0'));\n\n    // 4.2  Maximum overlong sequences\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xC1u, 0xBFu, '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xE0u, 0x9Fu, 0xBFu, '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xF0u, 0x8Fu, 0xBFu, 0xBFu, '\\\"', ']', '\\0'));\n\n    // 4.3  Overlong representation of the 
NUL character\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xC0u, 0x80u, '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xE0u, 0x80u, 0x80u, '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xF0u, 0x80u, 0x80u, 0x80u, '\\\"', ']', '\\0'));\n\n    // 5  Illegal code positions\n\n    // 5.1 Single UTF-16 surrogates\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xEDu, 0xA0u, 0x80u, '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xEDu, 0xADu, 0xBFu, '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xEDu, 0xAEu, 0x80u, '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xEDu, 0xAFu, 0xBFu, '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xEDu, 0xB0u, 0x80u, '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xEDu, 0xBEu, 0x80u, '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(UTF8<>, UTF16<>, unsigned char, ARRAY('[', '\\\"', 0xEDu, 0xBFu, 0xBFu, '\\\"', ']', '\\0'));\n\n    // Malformed UTF-16 sequences\n    TEST_STRINGENCODING_ERROR(UTF16<>, UTF8<>, wchar_t, ARRAY('[', '\\\"', 0xDC00, 0xDC00, '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(UTF16<>, UTF8<>, wchar_t, ARRAY('[', '\\\"', 0xD800, 0xD800, '\\\"', ']', '\\0'));\n\n    // Malformed UTF-32 sequence\n    TEST_STRINGENCODING_ERROR(UTF32<>, UTF8<>, unsigned, ARRAY('[', '\\\"', 0x110000, '\\\"', ']', '\\0'));\n\n    // Malformed ASCII sequence\n    TEST_STRINGENCODING_ERROR(ASCII<>, UTF8<>, char, ARRAY('[', '\\\"', char(0x80u), '\\\"', ']', '\\0'));\n    TEST_STRINGENCODING_ERROR(ASCII<>, UTF8<>, char, ARRAY('[', '\\\"', char(0x01u), '\\\"', ']', '\\0'));\n    
TEST_STRINGENCODING_ERROR(ASCII<>, UTF8<>, char, ARRAY('[', '\\\"', char(0x1Cu), '\\\"', ']', '\\0'));\n\n#undef ARRAY\n#undef TEST_STRING_ERROR\n#undef TEST_STRINGENCODING_ERROR\n}\n\ntemplate <unsigned count>\nstruct ParseArrayHandler : BaseReaderHandler<UTF8<>, ParseArrayHandler<count> > {\n    ParseArrayHandler() : step_(0) {}\n\n    bool Default() { ADD_FAILURE(); return false; }\n    bool Uint(unsigned i) { EXPECT_EQ(step_, i); step_++; return true; }\n    bool StartArray() { EXPECT_EQ(0u, step_); step_++; return true; }\n    bool EndArray(SizeType) { step_++; return true; }\n\n    unsigned step_;\n};\n\nTEST(Reader, ParseEmptyArray) {\n    char *json = StrDup(\"[ ] \");\n    InsituStringStream s(json);\n    ParseArrayHandler<0> h;\n    Reader reader;\n    reader.Parse(s, h);\n    EXPECT_EQ(2u, h.step_);\n    free(json);\n}\n\nTEST(Reader, ParseArray) {\n    char *json = StrDup(\"[1, 2, 3, 4]\");\n    InsituStringStream s(json);\n    ParseArrayHandler<4> h;\n    Reader reader;\n    reader.Parse(s, h);\n    EXPECT_EQ(6u, h.step_);\n    free(json);\n}\n\nTEST(Reader, ParseArray_Error) {\n#define TEST_ARRAY_ERROR(errorCode, str, errorOffset) \\\n    { \\\n        unsigned streamPos = errorOffset; \\\n        char buffer[1001]; \\\n        strncpy(buffer, str, 1000); \\\n        InsituStringStream s(buffer); \\\n        BaseReaderHandler<> h; \\\n        GenericReader<UTF8<>, UTF8<>, CrtAllocator> reader; \\\n        EXPECT_FALSE(reader.Parse(s, h)); \\\n        EXPECT_EQ(errorCode, reader.GetParseErrorCode());\\\n        EXPECT_EQ(errorOffset, reader.GetErrorOffset());\\\n        EXPECT_EQ(streamPos, s.Tell());\\\n    }\n\n    // Missing a comma or ']' after an array element.\n    TEST_ARRAY_ERROR(kParseErrorArrayMissCommaOrSquareBracket, \"[1\", 2u);\n    TEST_ARRAY_ERROR(kParseErrorArrayMissCommaOrSquareBracket, \"[1}\", 2u);\n    TEST_ARRAY_ERROR(kParseErrorArrayMissCommaOrSquareBracket, \"[1 2]\", 3u);\n\n    // Array cannot have a trailing comma (without kParseTrailingCommasFlag);\n    
// a value must follow a comma\n    TEST_ARRAY_ERROR(kParseErrorValueInvalid, \"[1,]\", 3u);\n\n#undef TEST_ARRAY_ERROR\n}\n\nstruct ParseObjectHandler : BaseReaderHandler<UTF8<>, ParseObjectHandler> {\n    ParseObjectHandler() : step_(0) {}\n\n    bool Default() { ADD_FAILURE(); return false; }\n    bool Null() { EXPECT_EQ(8u, step_); step_++; return true; }\n    bool Bool(bool b) {\n        switch(step_) {\n            case 4: EXPECT_TRUE(b); step_++; return true;\n            case 6: EXPECT_FALSE(b); step_++; return true;\n            default: ADD_FAILURE(); return false;\n        }\n    }\n    bool Int(int i) {\n        switch(step_) {\n            case 10: EXPECT_EQ(123, i); step_++; return true;\n            case 15: EXPECT_EQ(1, i); step_++; return true;\n            case 16: EXPECT_EQ(2, i); step_++; return true;\n            case 17: EXPECT_EQ(3, i); step_++; return true;\n            default: ADD_FAILURE(); return false;\n        }\n    }\n    bool Uint(unsigned i) { return Int(static_cast<int>(i)); }\n    bool Double(double d) { EXPECT_EQ(12u, step_); EXPECT_DOUBLE_EQ(3.1416, d); step_++; return true; }\n    bool String(const char* str, size_t, bool) {\n        switch(step_) {\n            case 1: EXPECT_STREQ(\"hello\", str); step_++; return true;\n            case 2: EXPECT_STREQ(\"world\", str); step_++; return true;\n            case 3: EXPECT_STREQ(\"t\", str); step_++; return true;\n            case 5: EXPECT_STREQ(\"f\", str); step_++; return true;\n            case 7: EXPECT_STREQ(\"n\", str); step_++; return true;\n            case 9: EXPECT_STREQ(\"i\", str); step_++; return true;\n            case 11: EXPECT_STREQ(\"pi\", str); step_++; return true;\n            case 13: EXPECT_STREQ(\"a\", str); step_++; return true;\n            default: ADD_FAILURE(); return false;\n        }\n    }\n    bool StartObject() { EXPECT_EQ(0u, step_); step_++; return true; }\n    bool EndObject(SizeType memberCount) { EXPECT_EQ(19u, step_); EXPECT_EQ(7u, 
memberCount); step_++; return true; }\n    bool StartArray() { EXPECT_EQ(14u, step_); step_++; return true; }\n    bool EndArray(SizeType elementCount) { EXPECT_EQ(18u, step_); EXPECT_EQ(3u, elementCount); step_++; return true; }\n\n    unsigned step_;\n};\n\nTEST(Reader, ParseObject) {\n    const char* json = \"{ \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3] } \";\n\n    // Insitu\n    {\n        char* json2 = StrDup(json);\n        InsituStringStream s(json2);\n        ParseObjectHandler h;\n        Reader reader;\n        reader.Parse<kParseInsituFlag>(s, h);\n        EXPECT_EQ(20u, h.step_);\n        free(json2);\n    }\n\n    // Normal\n    {\n        StringStream s(json);\n        ParseObjectHandler h;\n        Reader reader;\n        reader.Parse(s, h);\n        EXPECT_EQ(20u, h.step_);\n    }\n}\n\nstruct ParseEmptyObjectHandler : BaseReaderHandler<UTF8<>, ParseEmptyObjectHandler> {\n    ParseEmptyObjectHandler() : step_(0) {}\n\n    bool Default() { ADD_FAILURE(); return false; }\n    bool StartObject() { EXPECT_EQ(0u, step_); step_++; return true; }\n    bool EndObject(SizeType) { EXPECT_EQ(1u, step_); step_++; return true; }\n\n    unsigned step_;\n};\n\nTEST(Reader, Parse_EmptyObject) {\n    StringStream s(\"{ } \");\n    ParseEmptyObjectHandler h;\n    Reader reader;\n    reader.Parse(s, h);\n    EXPECT_EQ(2u, h.step_);\n}\n\nstruct ParseMultipleRootHandler : BaseReaderHandler<UTF8<>, ParseMultipleRootHandler> {\n    ParseMultipleRootHandler() : step_(0) {}\n\n    bool Default() { ADD_FAILURE(); return false; }\n    bool StartObject() { EXPECT_EQ(0u, step_); step_++; return true; }\n    bool EndObject(SizeType) { EXPECT_EQ(1u, step_); step_++; return true; }\n    bool StartArray() { EXPECT_EQ(2u, step_); step_++; return true; }\n    bool EndArray(SizeType) { EXPECT_EQ(3u, step_); step_++; return true; }\n\n    unsigned step_;\n};\n\ntemplate <unsigned 
parseFlags>\nvoid TestMultipleRoot() {\n    StringStream s(\"{}[] a\");\n    ParseMultipleRootHandler h;\n    Reader reader;\n    EXPECT_TRUE(reader.Parse<parseFlags>(s, h));\n    EXPECT_EQ(2u, h.step_);\n    EXPECT_TRUE(reader.Parse<parseFlags>(s, h));\n    EXPECT_EQ(4u, h.step_);\n    EXPECT_EQ(' ', s.Take());\n    EXPECT_EQ('a', s.Take());\n}\n\nTEST(Reader, Parse_MultipleRoot) {\n    TestMultipleRoot<kParseStopWhenDoneFlag>();\n}\n\nTEST(Reader, ParseIterative_MultipleRoot) {\n    TestMultipleRoot<kParseIterativeFlag | kParseStopWhenDoneFlag>();\n}\n\ntemplate <unsigned parseFlags>\nvoid TestInsituMultipleRoot() {\n    char* buffer = strdup(\"{}[] a\");\n    InsituStringStream s(buffer);\n    ParseMultipleRootHandler h;\n    Reader reader;\n    EXPECT_TRUE(reader.Parse<kParseInsituFlag | parseFlags>(s, h));\n    EXPECT_EQ(2u, h.step_);\n    EXPECT_TRUE(reader.Parse<kParseInsituFlag | parseFlags>(s, h));\n    EXPECT_EQ(4u, h.step_);\n    EXPECT_EQ(' ', s.Take());\n    EXPECT_EQ('a', s.Take());\n    free(buffer);\n}\n\nTEST(Reader, ParseInsitu_MultipleRoot) {\n    TestInsituMultipleRoot<kParseStopWhenDoneFlag>();\n}\n\nTEST(Reader, ParseInsituIterative_MultipleRoot) {\n    TestInsituMultipleRoot<kParseIterativeFlag | kParseStopWhenDoneFlag>();\n}\n\n#define TEST_ERROR(errorCode, str, errorOffset) \\\n    { \\\n        unsigned streamPos = errorOffset; \\\n        char buffer[1001]; \\\n        strncpy(buffer, str, 1000); \\\n        InsituStringStream s(buffer); \\\n        BaseReaderHandler<> h; \\\n        Reader reader; \\\n        EXPECT_FALSE(reader.Parse(s, h)); \\\n        EXPECT_EQ(errorCode, reader.GetParseErrorCode());\\\n        EXPECT_EQ(errorOffset, reader.GetErrorOffset());\\\n        EXPECT_EQ(streamPos, s.Tell());\\\n    }\n\nTEST(Reader, ParseDocument_Error) {\n    // The document is empty.\n    TEST_ERROR(kParseErrorDocumentEmpty, \"\", 0u);\n    TEST_ERROR(kParseErrorDocumentEmpty, \" \", 1u);\n    TEST_ERROR(kParseErrorDocumentEmpty, \" \\n\", 
2u);\n\n    // The document root must not be followed by other values.\n    TEST_ERROR(kParseErrorDocumentRootNotSingular, \"[] 0\", 3u);\n    TEST_ERROR(kParseErrorDocumentRootNotSingular, \"{} 0\", 3u);\n    TEST_ERROR(kParseErrorDocumentRootNotSingular, \"null []\", 5u);\n    TEST_ERROR(kParseErrorDocumentRootNotSingular, \"0 {}\", 2u);\n}\n\nTEST(Reader, ParseValue_Error) {\n    // Invalid value.\n    TEST_ERROR(kParseErrorValueInvalid, \"nulL\", 3u);\n    TEST_ERROR(kParseErrorValueInvalid, \"truE\", 3u);\n    TEST_ERROR(kParseErrorValueInvalid, \"falsE\", 4u);\n    TEST_ERROR(kParseErrorValueInvalid, \"a]\", 0u);\n    TEST_ERROR(kParseErrorValueInvalid, \".1\", 0u);\n}\n\nTEST(Reader, ParseObject_Error) {\n    // Missing a name for object member.\n    TEST_ERROR(kParseErrorObjectMissName, \"{1}\", 1u);\n    TEST_ERROR(kParseErrorObjectMissName, \"{:1}\", 1u);\n    TEST_ERROR(kParseErrorObjectMissName, \"{null:1}\", 1u);\n    TEST_ERROR(kParseErrorObjectMissName, \"{true:1}\", 1u);\n    TEST_ERROR(kParseErrorObjectMissName, \"{false:1}\", 1u);\n    TEST_ERROR(kParseErrorObjectMissName, \"{1:1}\", 1u);\n    TEST_ERROR(kParseErrorObjectMissName, \"{[]:1}\", 1u);\n    TEST_ERROR(kParseErrorObjectMissName, \"{{}:1}\", 1u);\n    TEST_ERROR(kParseErrorObjectMissName, \"{xyz:1}\", 1u);\n\n    // Missing a colon after the name of an object member.\n    TEST_ERROR(kParseErrorObjectMissColon, \"{\\\"a\\\" 1}\", 5u);\n    TEST_ERROR(kParseErrorObjectMissColon, \"{\\\"a\\\",1}\", 4u);\n\n    // There must be a comma or '}' after an object member.\n    TEST_ERROR(kParseErrorObjectMissCommaOrCurlyBracket, \"{\\\"a\\\":1]\", 6u);\n\n    // Object cannot have a trailing comma (without kParseTrailingCommasFlag);\n    // an object member name must follow a comma\n    TEST_ERROR(kParseErrorObjectMissName, \"{\\\"a\\\":1,}\", 7u);\n\n    // This tests that MemoryStream is checking the length in Peek().\n    {\n        MemoryStream ms(\"{\\\"a\\\"\", 1);\n        BaseReaderHandler<> h;\n        
Reader reader;\n        EXPECT_FALSE(reader.Parse<kParseStopWhenDoneFlag>(ms, h));\n        EXPECT_EQ(kParseErrorObjectMissName, reader.GetParseErrorCode());\n    }\n}\n\n#undef TEST_ERROR\n\nTEST(Reader, SkipWhitespace) {\n    StringStream ss(\" A \\t\\tB\\n \\n\\nC\\r\\r \\rD \\t\\n\\r E\");\n    const char* expected = \"ABCDE\";\n    for (size_t i = 0; i < 5; i++) {\n        SkipWhitespace(ss);\n        EXPECT_EQ(expected[i], ss.Take());\n    }\n}\n\n// Test implementing a stream without copy stream optimization.\n// Cloned from GenericStringStream, except that the copy constructor is disabled.\ntemplate <typename Encoding>\nclass CustomStringStream {\npublic:\n    typedef typename Encoding::Ch Ch;\n\n    CustomStringStream(const Ch *src) : src_(src), head_(src) {}\n\n    Ch Peek() const { return *src_; }\n    Ch Take() { return *src_++; }\n    size_t Tell() const { return static_cast<size_t>(src_ - head_); }\n\n    Ch* PutBegin() { RAPIDJSON_ASSERT(false); return 0; }\n    void Put(Ch) { RAPIDJSON_ASSERT(false); }\n    void Flush() { RAPIDJSON_ASSERT(false); }\n    size_t PutEnd(Ch*) { RAPIDJSON_ASSERT(false); return 0; }\n\nprivate:\n    // Prohibit copy constructor & assignment operator.\n    CustomStringStream(const CustomStringStream&);\n    CustomStringStream& operator=(const CustomStringStream&);\n\n    const Ch* src_;     //!< Current read position.\n    const Ch* head_;    //!< Original head of the string.\n};\n\n// If the following code is compiled, it should generate a compilation error as predicted,\n// because CustomStringStream<> is made non-copyable by declaring its copy constructor private.\n#if 0\nnamespace rapidjson {\n\ntemplate <typename Encoding>\nstruct StreamTraits<CustomStringStream<Encoding> > {\n    enum { copyOptimization = 1 };\n};\n\n} // namespace rapidjson\n#endif\n\nTEST(Reader, CustomStringStream) {\n    const char* json = \"{ \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, 
\\\"a\\\":[1, 2, 3] } \";\n    CustomStringStream<UTF8<char> > s(json);\n    ParseObjectHandler h;\n    Reader reader;\n    reader.Parse(s, h);\n    EXPECT_EQ(20u, h.step_);\n}\n\n#include <sstream>\n\nclass IStreamWrapper {\npublic:\n    typedef char Ch;\n\n    IStreamWrapper(std::istream& is) : is_(is) {}\n\n    Ch Peek() const {\n        int c = is_.peek();\n        return c == std::char_traits<char>::eof() ? '\\0' : static_cast<Ch>(c);\n    }\n\n    Ch Take() {\n        int c = is_.get();\n        return c == std::char_traits<char>::eof() ? '\\0' : static_cast<Ch>(c);\n    }\n\n    size_t Tell() const { return static_cast<size_t>(is_.tellg()); }\n\n    Ch* PutBegin() { assert(false); return 0; }\n    void Put(Ch) { assert(false); }\n    void Flush() { assert(false); }\n    size_t PutEnd(Ch*) { assert(false); return 0; }\n\nprivate:\n    IStreamWrapper(const IStreamWrapper&);\n    IStreamWrapper& operator=(const IStreamWrapper&);\n\n    std::istream& is_;\n};\n\nclass WIStreamWrapper {\npublic:\n  typedef wchar_t Ch;\n\n  WIStreamWrapper(std::wistream& is) : is_(is) {}\n\n  Ch Peek() const {\n    unsigned c = is_.peek();\n    return c == std::char_traits<wchar_t>::eof() ? Ch('\\0') : static_cast<Ch>(c);\n  }\n\n  Ch Take() {\n    unsigned c = is_.get();\n    return c == std::char_traits<wchar_t>::eof() ? 
Ch('\\0') : static_cast<Ch>(c);\n  }\n\n  size_t Tell() const { return static_cast<size_t>(is_.tellg()); }\n\n  Ch* PutBegin() { assert(false); return 0; }\n  void Put(Ch) { assert(false); }\n  void Flush() { assert(false); }\n  size_t PutEnd(Ch*) { assert(false); return 0; }\n\nprivate:\n  WIStreamWrapper(const WIStreamWrapper&);\n  WIStreamWrapper& operator=(const WIStreamWrapper&);\n\n  std::wistream& is_;\n};\n\nTEST(Reader, Parse_IStreamWrapper_StringStream) {\n    const char* json = \"[1,2,3,4]\";\n\n    std::stringstream ss(json);\n    IStreamWrapper is(ss);\n\n    Reader reader;\n    ParseArrayHandler<4> h;\n    reader.Parse(is, h);\n    EXPECT_FALSE(reader.HasParseError());\n}\n\n// Test iterative parsing.\n\n#define TESTERRORHANDLING(text, errorCode, offset)\\\n{\\\n    unsigned streamPos = offset; \\\n    StringStream json(text); \\\n    BaseReaderHandler<> handler; \\\n    Reader reader; \\\n    reader.Parse<kParseIterativeFlag>(json, handler); \\\n    EXPECT_TRUE(reader.HasParseError()); \\\n    EXPECT_EQ(errorCode, reader.GetParseErrorCode()); \\\n    EXPECT_EQ(offset, reader.GetErrorOffset()); \\\n    EXPECT_EQ(streamPos, json.Tell()); \\\n}\n\nTEST(Reader, IterativeParsing_ErrorHandling) {\n    TESTERRORHANDLING(\"{\\\"a\\\": a}\", kParseErrorValueInvalid, 6u);\n\n    TESTERRORHANDLING(\"\", kParseErrorDocumentEmpty, 0u);\n    TESTERRORHANDLING(\"{}{}\", kParseErrorDocumentRootNotSingular, 2u);\n\n    TESTERRORHANDLING(\"{1}\", kParseErrorObjectMissName, 1u);\n    TESTERRORHANDLING(\"{\\\"a\\\", 1}\", kParseErrorObjectMissColon, 4u);\n    TESTERRORHANDLING(\"{\\\"a\\\"}\", kParseErrorObjectMissColon, 4u);\n    TESTERRORHANDLING(\"{\\\"a\\\": 1\", kParseErrorObjectMissCommaOrCurlyBracket, 7u);\n    TESTERRORHANDLING(\"[1 2 3]\", kParseErrorArrayMissCommaOrSquareBracket, 3u);\n    TESTERRORHANDLING(\"{\\\"a: 1\", kParseErrorStringMissQuotationMark, 6u);\n    TESTERRORHANDLING(\"{\\\"a\\\":}\", kParseErrorValueInvalid, 5u);\n    
TESTERRORHANDLING(\"{\\\"a\\\":]\", kParseErrorValueInvalid, 5u);\n    TESTERRORHANDLING(\"[1,2,}\", kParseErrorValueInvalid, 5u);\n    TESTERRORHANDLING(\"[}]\", kParseErrorValueInvalid, 1u);\n    TESTERRORHANDLING(\"[,]\", kParseErrorValueInvalid, 1u);\n    TESTERRORHANDLING(\"[1,,]\", kParseErrorValueInvalid, 3u);\n\n    // Trailing commas are not allowed without kParseTrailingCommasFlag\n    TESTERRORHANDLING(\"{\\\"a\\\": 1,}\", kParseErrorObjectMissName, 8u);\n    TESTERRORHANDLING(\"[1,2,3,]\", kParseErrorValueInvalid, 7u);\n\n    // Any JSON value can be a valid root element in RFC7159.\n    TESTERRORHANDLING(\"\\\"ab\", kParseErrorStringMissQuotationMark, 3u);\n    TESTERRORHANDLING(\"truE\", kParseErrorValueInvalid, 3u);\n    TESTERRORHANDLING(\"False\", kParseErrorValueInvalid, 0u);\n    TESTERRORHANDLING(\"true, false\", kParseErrorDocumentRootNotSingular, 4u);\n    TESTERRORHANDLING(\"false, false\", kParseErrorDocumentRootNotSingular, 5u);\n    TESTERRORHANDLING(\"nulL\", kParseErrorValueInvalid, 3u);\n    TESTERRORHANDLING(\"null , null\", kParseErrorDocumentRootNotSingular, 5u);\n    TESTERRORHANDLING(\"1a\", kParseErrorDocumentRootNotSingular, 1u);\n}\n\ntemplate<typename Encoding = UTF8<> >\nstruct IterativeParsingReaderHandler {\n    typedef typename Encoding::Ch Ch;\n\n    const static uint32_t LOG_NULL        = 0x10000000;\n    const static uint32_t LOG_BOOL        = 0x20000000;\n    const static uint32_t LOG_INT         = 0x30000000;\n    const static uint32_t LOG_UINT        = 0x40000000;\n    const static uint32_t LOG_INT64       = 0x50000000;\n    const static uint32_t LOG_UINT64      = 0x60000000;\n    const static uint32_t LOG_DOUBLE      = 0x70000000;\n    const static uint32_t LOG_STRING      = 0x80000000;\n    const static uint32_t LOG_STARTOBJECT = 0x90000000;\n    const static uint32_t LOG_KEY         = 0xA0000000;\n    const static uint32_t LOG_ENDOBJECT   = 0xB0000000;\n    const static uint32_t LOG_STARTARRAY  = 0xC0000000;\n    
const static uint32_t LOG_ENDARRAY    = 0xD0000000;\n\n    const static size_t LogCapacity = 256;\n    uint32_t Logs[LogCapacity];\n    size_t LogCount;\n\n    IterativeParsingReaderHandler() : LogCount(0) {\n    }\n\n    bool Null() { RAPIDJSON_ASSERT(LogCount < LogCapacity); Logs[LogCount++] = LOG_NULL; return true; }\n\n    bool Bool(bool) { RAPIDJSON_ASSERT(LogCount < LogCapacity); Logs[LogCount++] = LOG_BOOL; return true; }\n\n    bool Int(int) { RAPIDJSON_ASSERT(LogCount < LogCapacity); Logs[LogCount++] = LOG_INT; return true; }\n\n    bool Uint(unsigned) { RAPIDJSON_ASSERT(LogCount < LogCapacity); Logs[LogCount++] = LOG_INT; return true; }\n\n    bool Int64(int64_t) { RAPIDJSON_ASSERT(LogCount < LogCapacity); Logs[LogCount++] = LOG_INT64; return true; }\n\n    bool Uint64(uint64_t) { RAPIDJSON_ASSERT(LogCount < LogCapacity); Logs[LogCount++] = LOG_UINT64; return true; }\n\n    bool Double(double) { RAPIDJSON_ASSERT(LogCount < LogCapacity); Logs[LogCount++] = LOG_DOUBLE; return true; }\n\n    bool RawNumber(const Ch*, SizeType, bool) { RAPIDJSON_ASSERT(LogCount < LogCapacity); Logs[LogCount++] = LOG_STRING; return true; }\n\n    bool String(const Ch*, SizeType, bool) { RAPIDJSON_ASSERT(LogCount < LogCapacity); Logs[LogCount++] = LOG_STRING; return true; }\n\n    bool StartObject() { RAPIDJSON_ASSERT(LogCount < LogCapacity); Logs[LogCount++] = LOG_STARTOBJECT; return true; }\n\n    bool Key (const Ch*, SizeType, bool) { RAPIDJSON_ASSERT(LogCount < LogCapacity); Logs[LogCount++] = LOG_KEY; return true; }\n\n    bool EndObject(SizeType c) {\n        RAPIDJSON_ASSERT(LogCount < LogCapacity);\n        RAPIDJSON_ASSERT((static_cast<uint32_t>(c) & 0xF0000000) == 0);\n        Logs[LogCount++] = LOG_ENDOBJECT | static_cast<uint32_t>(c);\n        return true;\n    }\n\n    bool StartArray() { RAPIDJSON_ASSERT(LogCount < LogCapacity); Logs[LogCount++] = LOG_STARTARRAY; return true; }\n\n    bool EndArray(SizeType c) {\n        RAPIDJSON_ASSERT(LogCount < LogCapacity);\n 
       RAPIDJSON_ASSERT((static_cast<uint32_t>(c) & 0xF0000000) == 0);\n        Logs[LogCount++] = LOG_ENDARRAY | static_cast<uint32_t>(c);\n        return true;\n    }\n};\n\nTEST(Reader, IterativeParsing_General) {\n    {\n        StringStream is(\"[1, {\\\"k\\\": [1, 2]}, null, false, true, \\\"string\\\", 1.2]\");\n        Reader reader;\n        IterativeParsingReaderHandler<> handler;\n\n        ParseResult r = reader.Parse<kParseIterativeFlag>(is, handler);\n\n        EXPECT_FALSE(r.IsError());\n        EXPECT_FALSE(reader.HasParseError());\n\n        uint32_t e[] = {\n            handler.LOG_STARTARRAY,\n            handler.LOG_INT,\n            handler.LOG_STARTOBJECT,\n            handler.LOG_KEY,\n            handler.LOG_STARTARRAY,\n            handler.LOG_INT,\n            handler.LOG_INT,\n            handler.LOG_ENDARRAY | 2,\n            handler.LOG_ENDOBJECT | 1,\n            handler.LOG_NULL,\n            handler.LOG_BOOL,\n            handler.LOG_BOOL,\n            handler.LOG_STRING,\n            handler.LOG_DOUBLE,\n            handler.LOG_ENDARRAY | 7\n        };\n\n        EXPECT_EQ(sizeof(e) / sizeof(int), handler.LogCount);\n\n        for (size_t i = 0; i < handler.LogCount; ++i) {\n            EXPECT_EQ(e[i], handler.Logs[i]) << \"i = \" << i;\n        }\n    }\n}\n\nTEST(Reader, IterativeParsing_Count) {\n    {\n        StringStream is(\"[{}, {\\\"k\\\": 1}, [1], []]\");\n        Reader reader;\n        IterativeParsingReaderHandler<> handler;\n\n        ParseResult r = reader.Parse<kParseIterativeFlag>(is, handler);\n\n        EXPECT_FALSE(r.IsError());\n        EXPECT_FALSE(reader.HasParseError());\n\n        uint32_t e[] = {\n            handler.LOG_STARTARRAY,\n            handler.LOG_STARTOBJECT,\n            handler.LOG_ENDOBJECT | 0,\n            handler.LOG_STARTOBJECT,\n            handler.LOG_KEY,\n            handler.LOG_INT,\n            handler.LOG_ENDOBJECT | 1,\n            handler.LOG_STARTARRAY,\n            
handler.LOG_INT,\n            handler.LOG_ENDARRAY | 1,\n            handler.LOG_STARTARRAY,\n            handler.LOG_ENDARRAY | 0,\n            handler.LOG_ENDARRAY | 4\n        };\n\n        EXPECT_EQ(sizeof(e) / sizeof(int), handler.LogCount);\n\n        for (size_t i = 0; i < handler.LogCount; ++i) {\n            EXPECT_EQ(e[i], handler.Logs[i]) << \"i = \" << i;\n        }\n    }\n}\n\nTEST(Reader, IterativePullParsing_General) {\n    {\n        IterativeParsingReaderHandler<> handler;\n        uint32_t e[] = {\n            handler.LOG_STARTARRAY,\n            handler.LOG_INT,\n            handler.LOG_STARTOBJECT,\n            handler.LOG_KEY,\n            handler.LOG_STARTARRAY,\n            handler.LOG_INT,\n            handler.LOG_INT,\n            handler.LOG_ENDARRAY | 2,\n            handler.LOG_ENDOBJECT | 1,\n            handler.LOG_NULL,\n            handler.LOG_BOOL,\n            handler.LOG_BOOL,\n            handler.LOG_STRING,\n            handler.LOG_DOUBLE,\n            handler.LOG_ENDARRAY | 7\n        };\n\n        StringStream is(\"[1, {\\\"k\\\": [1, 2]}, null, false, true, \\\"string\\\", 1.2]\");\n        Reader reader;\n\n        reader.IterativeParseInit();\n        while (!reader.IterativeParseComplete()) {\n            size_t oldLogCount = handler.LogCount;\n            EXPECT_TRUE(oldLogCount < sizeof(e) / sizeof(int)) << \"overrun\";\n\n            EXPECT_TRUE(reader.IterativeParseNext<kParseDefaultFlags>(is, handler)) << \"parse fail\";\n            EXPECT_EQ(handler.LogCount, oldLogCount + 1) << \"handler should be invoked exactly once each time\";\n            EXPECT_EQ(e[oldLogCount], handler.Logs[oldLogCount]) << \"wrong event returned\";\n        }\n\n        EXPECT_FALSE(reader.HasParseError());\n        EXPECT_EQ(sizeof(e) / sizeof(int), handler.LogCount) << \"handler invoked wrong number of times\";\n\n        // The handler should not be invoked when the JSON has been fully read, but it should not fail\n        size_t 
oldLogCount = handler.LogCount;\n        EXPECT_TRUE(reader.IterativeParseNext<kParseDefaultFlags>(is, handler)) << \"parse-next past complete is allowed\";\n        EXPECT_EQ(handler.LogCount, oldLogCount) << \"parse-next past complete should not invoke handler\";\n        EXPECT_FALSE(reader.HasParseError()) << \"parse-next past complete should not generate parse error\";\n    }\n}\n\n// Test iterative parsing on kParseErrorTermination.\nstruct HandlerTerminateAtStartObject : public IterativeParsingReaderHandler<> {\n    bool StartObject() { return false; }\n};\n\nstruct HandlerTerminateAtStartArray : public IterativeParsingReaderHandler<> {\n    bool StartArray() { return false; }\n};\n\nstruct HandlerTerminateAtEndObject : public IterativeParsingReaderHandler<> {\n    bool EndObject(SizeType) { return false; }\n};\n\nstruct HandlerTerminateAtEndArray : public IterativeParsingReaderHandler<> {\n    bool EndArray(SizeType) { return false; }\n};\n\nTEST(Reader, IterativeParsing_ShortCircuit) {\n    {\n        HandlerTerminateAtStartObject handler;\n        Reader reader;\n        StringStream is(\"[1, {}]\");\n\n        ParseResult r = reader.Parse<kParseIterativeFlag>(is, handler);\n\n        EXPECT_TRUE(reader.HasParseError());\n        EXPECT_EQ(kParseErrorTermination, r.Code());\n        EXPECT_EQ(4u, r.Offset());\n    }\n\n    {\n        HandlerTerminateAtStartArray handler;\n        Reader reader;\n        StringStream is(\"{\\\"a\\\": []}\");\n\n        ParseResult r = reader.Parse<kParseIterativeFlag>(is, handler);\n\n        EXPECT_TRUE(reader.HasParseError());\n        EXPECT_EQ(kParseErrorTermination, r.Code());\n        EXPECT_EQ(6u, r.Offset());\n    }\n\n    {\n        HandlerTerminateAtEndObject handler;\n        Reader reader;\n        StringStream is(\"[1, {}]\");\n\n        ParseResult r = reader.Parse<kParseIterativeFlag>(is, handler);\n\n        EXPECT_TRUE(reader.HasParseError());\n        EXPECT_EQ(kParseErrorTermination, r.Code());\n        
EXPECT_EQ(5u, r.Offset());\n    }\n\n    {\n        HandlerTerminateAtEndArray handler;\n        Reader reader;\n        StringStream is(\"{\\\"a\\\": []}\");\n\n        ParseResult r = reader.Parse<kParseIterativeFlag>(is, handler);\n\n        EXPECT_TRUE(reader.HasParseError());\n        EXPECT_EQ(kParseErrorTermination, r.Code());\n        EXPECT_EQ(7u, r.Offset());\n    }\n}\n\n// For covering BaseReaderHandler default functions\nTEST(Reader, BaseReaderHandler_Default) {\n    BaseReaderHandler<> h;\n    Reader reader;\n    StringStream is(\"[null, true, -1, 1, -1234567890123456789, 1234567890123456789, 3.14, \\\"s\\\", { \\\"a\\\" : 1 }]\");\n    EXPECT_TRUE(reader.Parse(is, h));\n}\n\ntemplate <int e>\nstruct TerminateHandler {\n    bool Null() { return e != 0; }\n    bool Bool(bool) { return e != 1; }\n    bool Int(int) { return e != 2; }\n    bool Uint(unsigned) { return e != 3; }\n    bool Int64(int64_t) { return e != 4; }\n    bool Uint64(uint64_t) { return e != 5;  }\n    bool Double(double) { return e != 6; }\n    bool RawNumber(const char*, SizeType, bool) { return e != 7; }\n    bool String(const char*, SizeType, bool) { return e != 8; }\n    bool StartObject() { return e != 9; }\n    bool Key(const char*, SizeType, bool)  { return e != 10; }\n    bool EndObject(SizeType) { return e != 11; }\n    bool StartArray() { return e != 12; }\n    bool EndArray(SizeType) { return e != 13; }\n};\n\n#define TEST_TERMINATION(e, json)\\\n{\\\n    Reader reader;\\\n    TerminateHandler<e> h;\\\n    StringStream is(json);\\\n    EXPECT_FALSE(reader.Parse(is, h));\\\n    EXPECT_EQ(kParseErrorTermination, reader.GetParseErrorCode());\\\n}\n\nTEST(Reader, ParseTerminationByHandler) {\n    TEST_TERMINATION(0, \"[null\");\n    TEST_TERMINATION(1, \"[true\");\n    TEST_TERMINATION(1, \"[false\");\n    TEST_TERMINATION(2, \"[-1\");\n    TEST_TERMINATION(3, \"[1\");\n    TEST_TERMINATION(4, \"[-1234567890123456789\");\n    TEST_TERMINATION(5, \"[1234567890123456789\");\n    
TEST_TERMINATION(6, \"[0.5]\");\n    // RawNumber() is never called\n    TEST_TERMINATION(8, \"[\\\"a\\\"\");\n    TEST_TERMINATION(9, \"[{\");\n    TEST_TERMINATION(10, \"[{\\\"a\\\"\");\n    TEST_TERMINATION(11, \"[{}\");\n    TEST_TERMINATION(11, \"[{\\\"a\\\":1}\"); // non-empty object\n    TEST_TERMINATION(12, \"{\\\"a\\\":[\");\n    TEST_TERMINATION(13, \"{\\\"a\\\":[]\");\n    TEST_TERMINATION(13, \"{\\\"a\\\":[1]\"); // non-empty array\n}\n\nTEST(Reader, ParseComments) {\n    const char* json =\n    \"// Here is a one-line comment.\\n\"\n    \"{// And here's another one\\n\"\n    \"   /*And here's an in-line one.*/\\\"hello\\\" : \\\"world\\\",\"\n    \"   \\\"t\\\" :/* And one with '*' symbol*/true ,\"\n    \"/* A multiline comment\\n\"\n    \"   goes here*/\"\n    \"   \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3]\"\n    \"}/*And the last one to be sure */\";\n\n    StringStream s(json);\n    ParseObjectHandler h;\n    Reader reader;\n    EXPECT_TRUE(reader.Parse<kParseCommentsFlag>(s, h));\n    EXPECT_EQ(20u, h.step_);\n}\n\nTEST(Reader, ParseEmptyInlineComment) {\n    const char* json = \"{/**/\\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true, \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3] }\";\n\n    StringStream s(json);\n    ParseObjectHandler h;\n    Reader reader;\n    EXPECT_TRUE(reader.Parse<kParseCommentsFlag>(s, h));\n    EXPECT_EQ(20u, h.step_);\n}\n\nTEST(Reader, ParseEmptyOnelineComment) {\n    const char* json = \"{//\\n\\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true, \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3] }\";\n\n    StringStream s(json);\n    ParseObjectHandler h;\n    Reader reader;\n    EXPECT_TRUE(reader.Parse<kParseCommentsFlag>(s, h));\n    EXPECT_EQ(20u, h.step_);\n}\n\nTEST(Reader, ParseMultipleCommentsInARow) {\n    const char* json =\n    \"{/* first comment *//* second */\\n\"\n    \"/* third 
*/ /*fourth*/// last one\\n\"\n    \"\\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true, \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3] }\";\n\n    StringStream s(json);\n    ParseObjectHandler h;\n    Reader reader;\n    EXPECT_TRUE(reader.Parse<kParseCommentsFlag>(s, h));\n    EXPECT_EQ(20u, h.step_);\n}\n\nTEST(Reader, InlineCommentsAreDisabledByDefault) {\n    {\n        const char* json = \"{/* Inline comment. */\\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true, \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3] }\";\n\n        StringStream s(json);\n        ParseObjectHandler h;\n        Reader reader;\n        EXPECT_FALSE(reader.Parse<kParseDefaultFlags>(s, h));\n    }\n\n    {\n        const char* json =\n        \"{\\\"hello\\\" : /* Multiline comment starts here\\n\"\n        \" continues here\\n\"\n        \" and ends here */\\\"world\\\", \\\"t\\\" :true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3] }\";\n\n        StringStream s(json);\n        ParseObjectHandler h;\n        Reader reader;\n        EXPECT_FALSE(reader.Parse<kParseDefaultFlags>(s, h));\n    }\n}\n\nTEST(Reader, OnelineCommentsAreDisabledByDefault) {\n    const char* json = \"{// One-line comment\\n\\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3] }\";\n\n    StringStream s(json);\n    ParseObjectHandler h;\n    Reader reader;\n    EXPECT_FALSE(reader.Parse<kParseDefaultFlags>(s, h));\n}\n\nTEST(Reader, EofAfterOneLineComment) {\n    const char* json = \"{\\\"hello\\\" : \\\"world\\\" // EOF is here -->\\0 \\n}\";\n\n    StringStream s(json);\n    ParseObjectHandler h;\n    Reader reader;\n    EXPECT_FALSE(reader.Parse<kParseCommentsFlag>(s, h));\n    EXPECT_EQ(kParseErrorObjectMissCommaOrCurlyBracket, reader.GetParseErrorCode());\n}\n\nTEST(Reader, 
IncompleteMultilineComment) {\n    const char* json = \"{\\\"hello\\\" : \\\"world\\\" /* EOF is here -->\\0 */}\";\n\n    StringStream s(json);\n    ParseObjectHandler h;\n    Reader reader;\n    EXPECT_FALSE(reader.Parse<kParseCommentsFlag>(s, h));\n    EXPECT_EQ(kParseErrorUnspecificSyntaxError, reader.GetParseErrorCode());\n}\n\nTEST(Reader, IncompleteMultilineComment2) {\n    const char* json = \"{\\\"hello\\\" : \\\"world\\\" /* *\\0 */}\";\n\n    StringStream s(json);\n    ParseObjectHandler h;\n    Reader reader;\n    EXPECT_FALSE(reader.Parse<kParseCommentsFlag>(s, h));\n    EXPECT_EQ(kParseErrorUnspecificSyntaxError, reader.GetParseErrorCode());\n}\n\nTEST(Reader, UnrecognizedComment) {\n    const char* json = \"{\\\"hello\\\" : \\\"world\\\" /! }\";\n\n    StringStream s(json);\n    ParseObjectHandler h;\n    Reader reader;\n    EXPECT_FALSE(reader.Parse<kParseCommentsFlag>(s, h));\n    EXPECT_EQ(kParseErrorUnspecificSyntaxError, reader.GetParseErrorCode());\n}\n\nstruct NumbersAsStringsHandler {\n    bool Null() { return true; }\n    bool Bool(bool) { return true; }\n    bool Int(int) { return true; }\n    bool Uint(unsigned) { return true; }\n    bool Int64(int64_t) { return true; }\n    bool Uint64(uint64_t) { return true;  }\n    bool Double(double) { return true; }\n    // 'str' is not null-terminated\n    bool RawNumber(const char* str, SizeType length, bool) {\n        EXPECT_TRUE(str != 0);\n        EXPECT_TRUE(expected_len_ == length);\n        EXPECT_TRUE(strncmp(str, expected_, length) == 0);\n        return true;\n    }\n    bool String(const char*, SizeType, bool) { return true; }\n    bool StartObject() { return true; }\n    bool Key(const char*, SizeType, bool) { return true; }\n    bool EndObject(SizeType) { return true; }\n    bool StartArray() { return true; }\n    bool EndArray(SizeType) { return true; }\n\n    NumbersAsStringsHandler(const char* expected)\n        : expected_(expected)\n        , expected_len_(strlen(expected)) {}\n\n 
   const char* expected_;\n    size_t expected_len_;\n};\n\nTEST(Reader, NumbersAsStrings) {\n    {\n        const char* json = \"{ \\\"pi\\\": 3.1416 } \";\n        StringStream s(json);\n        NumbersAsStringsHandler h(\"3.1416\");\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseNumbersAsStringsFlag>(s, h));\n    }\n    {\n        char* json = StrDup(\"{ \\\"pi\\\": 3.1416 } \");\n        InsituStringStream s(json);\n        NumbersAsStringsHandler h(\"3.1416\");\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseInsituFlag|kParseNumbersAsStringsFlag>(s, h));\n        free(json);\n    }\n    {\n        const char* json = \"{ \\\"gigabyte\\\": 1.0e9 } \";\n        StringStream s(json);\n        NumbersAsStringsHandler h(\"1.0e9\");\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseNumbersAsStringsFlag>(s, h));\n    }\n    {\n        char* json = StrDup(\"{ \\\"gigabyte\\\": 1.0e9 } \");\n        InsituStringStream s(json);\n        NumbersAsStringsHandler h(\"1.0e9\");\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseInsituFlag|kParseNumbersAsStringsFlag>(s, h));\n        free(json);\n    }\n    {\n        const char* json = \"{ \\\"pi\\\": 314.159e-2 } \";\n        StringStream s(json);\n        NumbersAsStringsHandler h(\"314.159e-2\");\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseNumbersAsStringsFlag>(s, h));\n    }\n    {\n        char* json = StrDup(\"{ \\\"gigabyte\\\": 314.159e-2 } \");\n        InsituStringStream s(json);\n        NumbersAsStringsHandler h(\"314.159e-2\");\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseInsituFlag|kParseNumbersAsStringsFlag>(s, h));\n        free(json);\n    }\n    {\n        const char* json = \"{ \\\"negative\\\": -1.54321 } \";\n        StringStream s(json);\n        NumbersAsStringsHandler h(\"-1.54321\");\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseNumbersAsStringsFlag>(s, h));\n    }\n    {\n        
char* json = StrDup(\"{ \\\"negative\\\": -1.54321 } \");\n        InsituStringStream s(json);\n        NumbersAsStringsHandler h(\"-1.54321\");\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseInsituFlag|kParseNumbersAsStringsFlag>(s, h));\n        free(json);\n    }\n    {\n        const char* json = \"{ \\\"pi\\\": 314.159e-2 } \";\n        std::stringstream ss(json);\n        IStreamWrapper s(ss);\n        NumbersAsStringsHandler h(\"314.159e-2\");\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseNumbersAsStringsFlag>(s, h));\n    }\n    {\n        char n1e319[321];   // '1' followed by 319 '0'\n        n1e319[0] = '1';\n        for (int i = 1; i < 320; i++)\n            n1e319[i] = '0';\n        n1e319[320] = '\\0';\n        StringStream s(n1e319);\n        NumbersAsStringsHandler h(n1e319);\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<kParseNumbersAsStringsFlag>(s, h));\n    }\n}\n\nstruct NumbersAsStringsHandlerWChar_t {\n  bool Null() { return true; }\n  bool Bool(bool) { return true; }\n  bool Int(int) { return true; }\n  bool Uint(unsigned) { return true; }\n  bool Int64(int64_t) { return true; }\n  bool Uint64(uint64_t) { return true; }\n  bool Double(double) { return true; }\n  // 'str' is not null-terminated\n  bool RawNumber(const wchar_t* str, SizeType length, bool) {\n    EXPECT_TRUE(str != 0);\n    EXPECT_TRUE(expected_len_ == length);\n    EXPECT_TRUE(wcsncmp(str, expected_, length) == 0);\n    return true;\n  }\n  bool String(const wchar_t*, SizeType, bool) { return true; }\n  bool StartObject() { return true; }\n  bool Key(const wchar_t*, SizeType, bool) { return true; }\n  bool EndObject(SizeType) { return true; }\n  bool StartArray() { return true; }\n  bool EndArray(SizeType) { return true; }\n\n  NumbersAsStringsHandlerWChar_t(const wchar_t* expected)\n    : expected_(expected)\n    , expected_len_(wcslen(expected)) {}\n\n  const wchar_t* expected_;\n  size_t expected_len_;\n};\n\nTEST(Reader, 
NumbersAsStringsWChar_t) {\n  {\n    const wchar_t* json = L\"{ \\\"pi\\\": 3.1416 } \";\n    GenericStringStream<UTF16<> > s(json);\n    NumbersAsStringsHandlerWChar_t h(L\"3.1416\");\n    GenericReader<UTF16<>, UTF16<> > reader;\n    EXPECT_TRUE(reader.Parse<kParseNumbersAsStringsFlag>(s, h));\n  }\n  {\n    wchar_t* json = StrDup(L\"{ \\\"pi\\\": 3.1416 } \");\n    GenericInsituStringStream<UTF16<> > s(json);\n    NumbersAsStringsHandlerWChar_t h(L\"3.1416\");\n    GenericReader<UTF16<>, UTF16<> > reader;\n    EXPECT_TRUE(reader.Parse<kParseInsituFlag | kParseNumbersAsStringsFlag>(s, h));\n    free(json);\n  }\n  {\n    const wchar_t* json = L\"{ \\\"gigabyte\\\": 1.0e9 } \";\n    GenericStringStream<UTF16<> > s(json);\n    NumbersAsStringsHandlerWChar_t h(L\"1.0e9\");\n    GenericReader<UTF16<>, UTF16<> > reader;\n    EXPECT_TRUE(reader.Parse<kParseNumbersAsStringsFlag>(s, h));\n  }\n  {\n    wchar_t* json = StrDup(L\"{ \\\"gigabyte\\\": 1.0e9 } \");\n    GenericInsituStringStream<UTF16<> > s(json);\n    NumbersAsStringsHandlerWChar_t h(L\"1.0e9\");\n    GenericReader<UTF16<>, UTF16<> > reader;\n    EXPECT_TRUE(reader.Parse<kParseInsituFlag | kParseNumbersAsStringsFlag>(s, h));\n    free(json);\n  }\n  {\n    const wchar_t* json = L\"{ \\\"pi\\\": 314.159e-2 } \";\n    GenericStringStream<UTF16<> > s(json);\n    NumbersAsStringsHandlerWChar_t h(L\"314.159e-2\");\n    GenericReader<UTF16<>, UTF16<> > reader;\n    EXPECT_TRUE(reader.Parse<kParseNumbersAsStringsFlag>(s, h));\n  }\n  {\n    wchar_t* json = StrDup(L\"{ \\\"gigabyte\\\": 314.159e-2 } \");\n    GenericInsituStringStream<UTF16<> > s(json);\n    NumbersAsStringsHandlerWChar_t h(L\"314.159e-2\");\n    GenericReader<UTF16<>, UTF16<> > reader;\n    EXPECT_TRUE(reader.Parse<kParseInsituFlag | kParseNumbersAsStringsFlag>(s, h));\n    free(json);\n  }\n  {\n    const wchar_t* json = L\"{ \\\"negative\\\": -1.54321 } \";\n    GenericStringStream<UTF16<> > s(json);\n    NumbersAsStringsHandlerWChar_t 
h(L\"-1.54321\");\n    GenericReader<UTF16<>, UTF16<> > reader;\n    EXPECT_TRUE(reader.Parse<kParseNumbersAsStringsFlag>(s, h));\n  }\n  {\n    wchar_t* json = StrDup(L\"{ \\\"negative\\\": -1.54321 } \");\n    GenericInsituStringStream<UTF16<> > s(json);\n    NumbersAsStringsHandlerWChar_t h(L\"-1.54321\");\n    GenericReader<UTF16<>, UTF16<> > reader;\n    EXPECT_TRUE(reader.Parse<kParseInsituFlag | kParseNumbersAsStringsFlag>(s, h));\n    free(json);\n  }\n  {\n    const wchar_t* json = L\"{ \\\"pi\\\": 314.159e-2 } \";\n    std::wstringstream ss(json);\n    WIStreamWrapper s(ss);\n    NumbersAsStringsHandlerWChar_t h(L\"314.159e-2\");\n    GenericReader<UTF16<>, UTF16<> > reader;\n    EXPECT_TRUE(reader.Parse<kParseNumbersAsStringsFlag>(s, h));\n  }\n  {\n    wchar_t n1e319[321];   // '1' followed by 319 '0'\n    n1e319[0] = L'1';\n    for(int i = 1; i < 320; i++)\n      n1e319[i] = L'0';\n    n1e319[320] = L'\\0';\n    GenericStringStream<UTF16<> > s(n1e319);\n    NumbersAsStringsHandlerWChar_t h(n1e319);\n    GenericReader<UTF16<>, UTF16<> > reader;\n    EXPECT_TRUE(reader.Parse<kParseNumbersAsStringsFlag>(s, h));\n  }\n}\n\ntemplate <unsigned extraFlags>\nvoid TestTrailingCommas() {\n    {\n        StringStream s(\"[1,2,3,]\");\n        ParseArrayHandler<3> h;\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<extraFlags|kParseTrailingCommasFlag>(s, h));\n        EXPECT_EQ(5u, h.step_);\n    }\n    {\n        const char* json = \"{ \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false,\"\n                \"\\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3],}\";\n        StringStream s(json);\n        ParseObjectHandler h;\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<extraFlags|kParseTrailingCommasFlag>(s, h));\n        EXPECT_EQ(20u, h.step_);\n    }\n    {\n        // whitespace around trailing commas\n        const char* json = \"{ \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : 
false,\"\n                \"\\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3\\n,\\n]\\n,\\n} \";\n        StringStream s(json);\n        ParseObjectHandler h;\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<extraFlags|kParseTrailingCommasFlag>(s, h));\n        EXPECT_EQ(20u, h.step_);\n    }\n    {\n        // comments around trailing commas\n        const char* json = \"{ \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null,\"\n                \"\\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3/*test*/,/*test*/]/*test*/,/*test*/}\";\n        StringStream s(json);\n        ParseObjectHandler h;\n        Reader reader;\n        EXPECT_TRUE(reader.Parse<extraFlags|kParseTrailingCommasFlag|kParseCommentsFlag>(s, h));\n        EXPECT_EQ(20u, h.step_);\n    }\n}\n\nTEST(Reader, TrailingCommas) {\n    TestTrailingCommas<kParseNoFlags>();\n}\n\nTEST(Reader, TrailingCommasIterative) {\n    TestTrailingCommas<kParseIterativeFlag>();\n}\n\ntemplate <unsigned extraFlags>\nvoid TestMultipleTrailingCommaErrors() {\n    // only a single trailing comma is allowed.\n    {\n        StringStream s(\"[1,2,3,,]\");\n        ParseArrayHandler<3> h;\n        Reader reader;\n        ParseResult r = reader.Parse<extraFlags|kParseTrailingCommasFlag>(s, h);\n        EXPECT_TRUE(reader.HasParseError());\n        EXPECT_EQ(kParseErrorValueInvalid, r.Code());\n        EXPECT_EQ(7u, r.Offset());\n    }\n    {\n        const char* json = \"{ \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false,\"\n                \"\\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3,],,}\";\n        StringStream s(json);\n        ParseObjectHandler h;\n        Reader reader;\n        ParseResult r = reader.Parse<extraFlags|kParseTrailingCommasFlag>(s, h);\n        EXPECT_TRUE(reader.HasParseError());\n        EXPECT_EQ(kParseErrorObjectMissName, r.Code());\n        EXPECT_EQ(95u, r.Offset());\n    
}\n}\n\nTEST(Reader, MultipleTrailingCommaErrors) {\n    TestMultipleTrailingCommaErrors<kParseNoFlags>();\n}\n\nTEST(Reader, MultipleTrailingCommaErrorsIterative) {\n    TestMultipleTrailingCommaErrors<kParseIterativeFlag>();\n}\n\ntemplate <unsigned extraFlags>\nvoid TestEmptyExceptForCommaErrors() {\n    // not allowed even with trailing commas enabled; the\n    // trailing comma must follow a value.\n    {\n        StringStream s(\"[,]\");\n        ParseArrayHandler<3> h;\n        Reader reader;\n        ParseResult r = reader.Parse<extraFlags|kParseTrailingCommasFlag>(s, h);\n        EXPECT_TRUE(reader.HasParseError());\n        EXPECT_EQ(kParseErrorValueInvalid, r.Code());\n        EXPECT_EQ(1u, r.Offset());\n    }\n    {\n        StringStream s(\"{,}\");\n        ParseObjectHandler h;\n        Reader reader;\n        ParseResult r = reader.Parse<extraFlags|kParseTrailingCommasFlag>(s, h);\n        EXPECT_TRUE(reader.HasParseError());\n        EXPECT_EQ(kParseErrorObjectMissName, r.Code());\n        EXPECT_EQ(1u, r.Offset());\n    }\n}\n\nTEST(Reader, EmptyExceptForCommaErrors) {\n    TestEmptyExceptForCommaErrors<kParseNoFlags>();\n}\n\nTEST(Reader, EmptyExceptForCommaErrorsIterative) {\n    TestEmptyExceptForCommaErrors<kParseIterativeFlag>();\n}\n\ntemplate <unsigned extraFlags>\nvoid TestTrailingCommaHandlerTermination() {\n    {\n        HandlerTerminateAtEndArray h;\n        Reader reader;\n        StringStream s(\"[1,2,3,]\");\n        ParseResult r = reader.Parse<extraFlags|kParseTrailingCommasFlag>(s, h);\n        EXPECT_TRUE(reader.HasParseError());\n        EXPECT_EQ(kParseErrorTermination, r.Code());\n        EXPECT_EQ(7u, r.Offset());\n    }\n    {\n        HandlerTerminateAtEndObject h;\n        Reader reader;\n        StringStream s(\"{\\\"t\\\": true, \\\"f\\\": false,}\");\n        ParseResult r = reader.Parse<extraFlags|kParseTrailingCommasFlag>(s, h);\n        EXPECT_TRUE(reader.HasParseError());\n        EXPECT_EQ(kParseErrorTermination, 
r.Code());\n        EXPECT_EQ(23u, r.Offset());\n    }\n}\n\nTEST(Reader, TrailingCommaHandlerTermination) {\n    TestTrailingCommaHandlerTermination<kParseNoFlags>();\n}\n\nTEST(Reader, TrailingCommaHandlerTerminationIterative) {\n    TestTrailingCommaHandlerTermination<kParseIterativeFlag>();\n}\n\nTEST(Reader, ParseNanAndInfinity) {\n#define TEST_NAN_INF(str, x) \\\n    { \\\n        { \\\n            StringStream s(str); \\\n            ParseDoubleHandler h; \\\n            Reader reader; \\\n            ASSERT_EQ(kParseErrorNone, reader.Parse<kParseNanAndInfFlag>(s, h).Code()); \\\n            EXPECT_EQ(1u, h.step_); \\\n            internal::Double e(x), a(h.actual_); \\\n            EXPECT_EQ(e.IsNan(), a.IsNan()); \\\n            EXPECT_EQ(e.IsInf(), a.IsInf()); \\\n            if (!e.IsNan()) \\\n                EXPECT_EQ(e.Sign(), a.Sign()); \\\n        } \\\n        { \\\n            const char* json = \"{ \\\"naninfdouble\\\": \" str \" } \"; \\\n            StringStream s(json); \\\n            NumbersAsStringsHandler h(str); \\\n            Reader reader; \\\n            EXPECT_TRUE(reader.Parse<kParseNumbersAsStringsFlag|kParseNanAndInfFlag>(s, h)); \\\n        } \\\n        { \\\n            char* json = StrDup(\"{ \\\"naninfdouble\\\": \" str \" } \"); \\\n            InsituStringStream s(json); \\\n            NumbersAsStringsHandler h(str); \\\n            Reader reader; \\\n            EXPECT_TRUE(reader.Parse<kParseInsituFlag|kParseNumbersAsStringsFlag|kParseNanAndInfFlag>(s, h)); \\\n            free(json); \\\n        } \\\n    }\n#define TEST_NAN_INF_ERROR(errorCode, str, errorOffset) \\\n    { \\\n        unsigned streamPos = errorOffset; \\\n        char buffer[1001]; \\\n        strncpy(buffer, str, 1000); \\\n        InsituStringStream s(buffer); \\\n        BaseReaderHandler<> h; \\\n        Reader reader; \\\n        EXPECT_FALSE(reader.Parse<kParseNanAndInfFlag>(s, h)); \\\n        EXPECT_EQ(errorCode, reader.GetParseErrorCode());\\\n 
       EXPECT_EQ(errorOffset, reader.GetErrorOffset());\\\n        EXPECT_EQ(streamPos, s.Tell());\\\n    }\n\n    double nan = std::numeric_limits<double>::quiet_NaN();\n    double inf = std::numeric_limits<double>::infinity();\n\n    TEST_NAN_INF(\"NaN\", nan);\n    TEST_NAN_INF(\"-NaN\", nan);\n    TEST_NAN_INF(\"Inf\", inf);\n    TEST_NAN_INF(\"Infinity\", inf);\n    TEST_NAN_INF(\"-Inf\", -inf);\n    TEST_NAN_INF(\"-Infinity\", -inf);\n    TEST_NAN_INF_ERROR(kParseErrorValueInvalid, \"NInf\", 1u);\n    TEST_NAN_INF_ERROR(kParseErrorValueInvalid, \"NaInf\", 2u);\n    TEST_NAN_INF_ERROR(kParseErrorValueInvalid, \"INan\", 1u);\n    TEST_NAN_INF_ERROR(kParseErrorValueInvalid, \"InNan\", 2u);\n    TEST_NAN_INF_ERROR(kParseErrorValueInvalid, \"nan\", 1u);\n    TEST_NAN_INF_ERROR(kParseErrorValueInvalid, \"-nan\", 1u);\n    TEST_NAN_INF_ERROR(kParseErrorValueInvalid, \"NAN\", 1u);\n    TEST_NAN_INF_ERROR(kParseErrorValueInvalid, \"-Infinty\", 6u);\n    TEST_NAN_INF_ERROR(kParseErrorDocumentRootNotSingular, \"NaN.2e2\", 3u);\n    TEST_NAN_INF_ERROR(kParseErrorDocumentRootNotSingular, \"Inf.2\", 3u);\n    TEST_NAN_INF_ERROR(kParseErrorDocumentRootNotSingular, \"-InfE2\", 4u);\n\n#undef TEST_NAN_INF_ERROR\n#undef TEST_NAN_INF\n}\n\nTEST(Reader, EscapedApostrophe) {\n    const char json[] = \" { \\\"foo\\\": \\\"bar\\\\'buzz\\\" } \";\n\n    BaseReaderHandler<> h;\n\n    {\n        StringStream s(json);\n        Reader reader;\n        ParseResult r = reader.Parse<kParseNoFlags>(s, h);\n        EXPECT_TRUE(reader.HasParseError());\n        EXPECT_EQ(kParseErrorStringEscapeInvalid, r.Code());\n        EXPECT_EQ(14u, r.Offset());\n    }\n\n    {\n        StringStream s(json);\n        Reader reader;\n        ParseResult r = reader.Parse<kParseEscapedApostropheFlag>(s, h);\n        EXPECT_FALSE(reader.HasParseError());\n        EXPECT_EQ(kParseErrorNone, r.Code());\n        EXPECT_EQ(0u, r.Offset());\n    }\n}\n\nRAPIDJSON_DIAG_POP\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/regextest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n#include \"rapidjson/internal/regex.h\"\n\nusing namespace rapidjson::internal;\n\nTEST(Regex, Single) {\n    Regex re(\"a\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"a\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"b\"));\n}\n\nTEST(Regex, Concatenation) {\n    Regex re(\"abc\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"abc\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"a\"));\n    EXPECT_FALSE(rs.Match(\"b\"));\n    EXPECT_FALSE(rs.Match(\"ab\"));\n    EXPECT_FALSE(rs.Match(\"abcd\"));\n}\n\nTEST(Regex, Alternation1) {\n    Regex re(\"abab|abbb\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"abab\"));\n    EXPECT_TRUE(rs.Match(\"abbb\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"ab\"));\n    EXPECT_FALSE(rs.Match(\"ababa\"));\n    EXPECT_FALSE(rs.Match(\"abb\"));\n    EXPECT_FALSE(rs.Match(\"abbbb\"));\n}\n\nTEST(Regex, Alternation2) {\n    Regex re(\"a|b|c\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"a\"));\n    EXPECT_TRUE(rs.Match(\"b\"));\n    EXPECT_TRUE(rs.Match(\"c\"));\n    
EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"aa\"));\n    EXPECT_FALSE(rs.Match(\"ab\"));\n}\n\nTEST(Regex, Parenthesis1) {\n    Regex re(\"(ab)c\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"abc\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"a\"));\n    EXPECT_FALSE(rs.Match(\"b\"));\n    EXPECT_FALSE(rs.Match(\"ab\"));\n    EXPECT_FALSE(rs.Match(\"abcd\"));\n}\n\nTEST(Regex, Parenthesis2) {\n    Regex re(\"a(bc)\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"abc\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"a\"));\n    EXPECT_FALSE(rs.Match(\"b\"));\n    EXPECT_FALSE(rs.Match(\"ab\"));\n    EXPECT_FALSE(rs.Match(\"abcd\"));\n}\n\nTEST(Regex, Parenthesis3) {\n    Regex re(\"(a|b)(c|d)\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"ac\"));\n    EXPECT_TRUE(rs.Match(\"ad\"));\n    EXPECT_TRUE(rs.Match(\"bc\"));\n    EXPECT_TRUE(rs.Match(\"bd\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"ab\"));\n    EXPECT_FALSE(rs.Match(\"cd\"));\n}\n\nTEST(Regex, ZeroOrOne1) {\n    Regex re(\"a?\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"\"));\n    EXPECT_TRUE(rs.Match(\"a\"));\n    EXPECT_FALSE(rs.Match(\"aa\"));\n}\n\nTEST(Regex, ZeroOrOne2) {\n    Regex re(\"a?b\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"b\"));\n    EXPECT_TRUE(rs.Match(\"ab\"));\n    EXPECT_FALSE(rs.Match(\"a\"));\n    EXPECT_FALSE(rs.Match(\"aa\"));\n    EXPECT_FALSE(rs.Match(\"bb\"));\n    EXPECT_FALSE(rs.Match(\"ba\"));\n}\n\nTEST(Regex, ZeroOrOne3) {\n    Regex re(\"ab?\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"a\"));\n    EXPECT_TRUE(rs.Match(\"ab\"));\n    EXPECT_FALSE(rs.Match(\"b\"));\n    EXPECT_FALSE(rs.Match(\"aa\"));\n    EXPECT_FALSE(rs.Match(\"bb\"));\n   
 EXPECT_FALSE(rs.Match(\"ba\"));\n}\n\nTEST(Regex, ZeroOrOne4) {\n    Regex re(\"a?b?\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"\"));\n    EXPECT_TRUE(rs.Match(\"a\"));\n    EXPECT_TRUE(rs.Match(\"b\"));\n    EXPECT_TRUE(rs.Match(\"ab\"));\n    EXPECT_FALSE(rs.Match(\"aa\"));\n    EXPECT_FALSE(rs.Match(\"bb\"));\n    EXPECT_FALSE(rs.Match(\"ba\"));\n    EXPECT_FALSE(rs.Match(\"abc\"));\n}\n\nTEST(Regex, ZeroOrOne5) {\n    Regex re(\"a(ab)?b\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"ab\"));\n    EXPECT_TRUE(rs.Match(\"aabb\"));\n    EXPECT_FALSE(rs.Match(\"aab\"));\n    EXPECT_FALSE(rs.Match(\"abb\"));\n}\n\nTEST(Regex, ZeroOrMore1) {\n    Regex re(\"a*\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"\"));\n    EXPECT_TRUE(rs.Match(\"a\"));\n    EXPECT_TRUE(rs.Match(\"aa\"));\n    EXPECT_FALSE(rs.Match(\"b\"));\n    EXPECT_FALSE(rs.Match(\"ab\"));\n}\n\nTEST(Regex, ZeroOrMore2) {\n    Regex re(\"a*b\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"b\"));\n    EXPECT_TRUE(rs.Match(\"ab\"));\n    EXPECT_TRUE(rs.Match(\"aab\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"bb\"));\n}\n\nTEST(Regex, ZeroOrMore3) {\n    Regex re(\"a*b*\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"\"));\n    EXPECT_TRUE(rs.Match(\"a\"));\n    EXPECT_TRUE(rs.Match(\"aa\"));\n    EXPECT_TRUE(rs.Match(\"b\"));\n    EXPECT_TRUE(rs.Match(\"bb\"));\n    EXPECT_TRUE(rs.Match(\"ab\"));\n    EXPECT_TRUE(rs.Match(\"aabb\"));\n    EXPECT_FALSE(rs.Match(\"ba\"));\n}\n\nTEST(Regex, ZeroOrMore4) {\n    Regex re(\"a(ab)*b\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"ab\"));\n    EXPECT_TRUE(rs.Match(\"aabb\"));\n    EXPECT_TRUE(rs.Match(\"aababb\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    
EXPECT_FALSE(rs.Match(\"aa\"));\n}\n\nTEST(Regex, OneOrMore1) {\n    Regex re(\"a+\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"a\"));\n    EXPECT_TRUE(rs.Match(\"aa\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"b\"));\n    EXPECT_FALSE(rs.Match(\"ab\"));\n}\n\nTEST(Regex, OneOrMore2) {\n    Regex re(\"a+b\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"ab\"));\n    EXPECT_TRUE(rs.Match(\"aab\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"b\"));\n}\n\nTEST(Regex, OneOrMore3) {\n    Regex re(\"a+b+\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"ab\"));\n    EXPECT_TRUE(rs.Match(\"aab\"));\n    EXPECT_TRUE(rs.Match(\"abb\"));\n    EXPECT_TRUE(rs.Match(\"aabb\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"b\"));\n    EXPECT_FALSE(rs.Match(\"ba\"));\n}\n\nTEST(Regex, OneOrMore4) {\n    Regex re(\"a(ab)+b\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"aabb\"));\n    EXPECT_TRUE(rs.Match(\"aababb\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"ab\"));\n}\n\nTEST(Regex, QuantifierExact1) {\n    Regex re(\"ab{3}c\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"abbbc\"));\n    EXPECT_FALSE(rs.Match(\"ac\"));\n    EXPECT_FALSE(rs.Match(\"abc\"));\n    EXPECT_FALSE(rs.Match(\"abbc\"));\n    EXPECT_FALSE(rs.Match(\"abbbbc\"));\n}\n\nTEST(Regex, QuantifierExact2) {\n    Regex re(\"a(bc){3}d\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"abcbcbcd\"));\n    EXPECT_FALSE(rs.Match(\"ad\"));\n    EXPECT_FALSE(rs.Match(\"abcd\"));\n    EXPECT_FALSE(rs.Match(\"abcbcd\"));\n    EXPECT_FALSE(rs.Match(\"abcbcbcbcd\"));\n}\n\nTEST(Regex, QuantifierExact3) {\n    Regex re(\"a(b|c){3}d\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    
EXPECT_TRUE(rs.Match(\"abbbd\"));\n    EXPECT_TRUE(rs.Match(\"acccd\"));\n    EXPECT_TRUE(rs.Match(\"abcbd\"));\n    EXPECT_FALSE(rs.Match(\"ad\"));\n    EXPECT_FALSE(rs.Match(\"abbd\"));\n    EXPECT_FALSE(rs.Match(\"accccd\"));\n    EXPECT_FALSE(rs.Match(\"abbbbd\"));\n}\n\nTEST(Regex, QuantifierMin1) {\n    Regex re(\"ab{3,}c\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"abbbc\"));\n    EXPECT_TRUE(rs.Match(\"abbbbc\"));\n    EXPECT_TRUE(rs.Match(\"abbbbbc\"));\n    EXPECT_FALSE(rs.Match(\"ac\"));\n    EXPECT_FALSE(rs.Match(\"abc\"));\n    EXPECT_FALSE(rs.Match(\"abbc\"));\n}\n\nTEST(Regex, QuantifierMin2) {\n    Regex re(\"a(bc){3,}d\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"abcbcbcd\"));\n    EXPECT_TRUE(rs.Match(\"abcbcbcbcd\"));\n    EXPECT_FALSE(rs.Match(\"ad\"));\n    EXPECT_FALSE(rs.Match(\"abcd\"));\n    EXPECT_FALSE(rs.Match(\"abcbcd\"));\n}\n\nTEST(Regex, QuantifierMin3) {\n    Regex re(\"a(b|c){3,}d\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"abbbd\"));\n    EXPECT_TRUE(rs.Match(\"acccd\"));\n    EXPECT_TRUE(rs.Match(\"abcbd\"));\n    EXPECT_TRUE(rs.Match(\"accccd\"));\n    EXPECT_TRUE(rs.Match(\"abbbbd\"));\n    EXPECT_FALSE(rs.Match(\"ad\"));\n    EXPECT_FALSE(rs.Match(\"abbd\"));\n}\n\nTEST(Regex, QuantifierMinMax1) {\n    Regex re(\"ab{3,5}c\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"abbbc\"));\n    EXPECT_TRUE(rs.Match(\"abbbbc\"));\n    EXPECT_TRUE(rs.Match(\"abbbbbc\"));\n    EXPECT_FALSE(rs.Match(\"ac\"));\n    EXPECT_FALSE(rs.Match(\"abc\"));\n    EXPECT_FALSE(rs.Match(\"abbc\"));\n    EXPECT_FALSE(rs.Match(\"abbbbbbc\"));\n}\n\nTEST(Regex, QuantifierMinMax2) {\n    Regex re(\"a(bc){3,5}d\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"abcbcbcd\"));\n    EXPECT_TRUE(rs.Match(\"abcbcbcbcd\"));\n    
EXPECT_TRUE(rs.Match(\"abcbcbcbcbcd\"));\n    EXPECT_FALSE(rs.Match(\"ad\"));\n    EXPECT_FALSE(rs.Match(\"abcd\"));\n    EXPECT_FALSE(rs.Match(\"abcbcd\"));\n    EXPECT_FALSE(rs.Match(\"abcbcbcbcbcbcd\"));\n}\n\nTEST(Regex, QuantifierMinMax3) {\n    Regex re(\"a(b|c){3,5}d\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"abbbd\"));\n    EXPECT_TRUE(rs.Match(\"acccd\"));\n    EXPECT_TRUE(rs.Match(\"abcbd\"));\n    EXPECT_TRUE(rs.Match(\"accccd\"));\n    EXPECT_TRUE(rs.Match(\"abbbbd\"));\n    EXPECT_TRUE(rs.Match(\"acccccd\"));\n    EXPECT_TRUE(rs.Match(\"abbbbbd\"));\n    EXPECT_FALSE(rs.Match(\"ad\"));\n    EXPECT_FALSE(rs.Match(\"abbd\"));\n    EXPECT_FALSE(rs.Match(\"accccccd\"));\n    EXPECT_FALSE(rs.Match(\"abbbbbbd\"));\n}\n\n// Issue538\nTEST(Regex, QuantifierMinMax4) {\n    Regex re(\"a(b|c){0,3}d\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"ad\"));\n    EXPECT_TRUE(rs.Match(\"abd\"));\n    EXPECT_TRUE(rs.Match(\"acd\"));\n    EXPECT_TRUE(rs.Match(\"abbd\"));\n    EXPECT_TRUE(rs.Match(\"accd\"));\n    EXPECT_TRUE(rs.Match(\"abcd\"));\n    EXPECT_TRUE(rs.Match(\"abbbd\"));\n    EXPECT_TRUE(rs.Match(\"acccd\"));\n    EXPECT_FALSE(rs.Match(\"abbbbd\"));\n    EXPECT_FALSE(rs.Match(\"add\"));\n    EXPECT_FALSE(rs.Match(\"accccd\"));\n    EXPECT_FALSE(rs.Match(\"abcbcd\"));\n}\n\n// Issue538\nTEST(Regex, QuantifierMinMax5) {\n    Regex re(\"a(b|c){0,}d\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"ad\"));\n    EXPECT_TRUE(rs.Match(\"abd\"));\n    EXPECT_TRUE(rs.Match(\"acd\"));\n    EXPECT_TRUE(rs.Match(\"abbd\"));\n    EXPECT_TRUE(rs.Match(\"accd\"));\n    EXPECT_TRUE(rs.Match(\"abcd\"));\n    EXPECT_TRUE(rs.Match(\"abbbd\"));\n    EXPECT_TRUE(rs.Match(\"acccd\"));\n    EXPECT_TRUE(rs.Match(\"abbbbd\"));\n    EXPECT_TRUE(rs.Match(\"accccd\"));\n    EXPECT_TRUE(rs.Match(\"abcbcd\"));\n    EXPECT_FALSE(rs.Match(\"add\"));\n    
EXPECT_FALSE(rs.Match(\"aad\"));\n}\n\n#define EURO \"\\xE2\\x82\\xAC\" // \"\\xE2\\x82\\xAC\" is UTF-8 rsquence of Euro sign U+20AC\n\nTEST(Regex, Unicode) {\n    Regex re(\"a\" EURO \"+b\"); \n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"a\" EURO \"b\"));\n    EXPECT_TRUE(rs.Match(\"a\" EURO EURO \"b\"));\n    EXPECT_FALSE(rs.Match(\"a?b\"));\n    EXPECT_FALSE(rs.Match(\"a\" EURO \"\\xAC\" \"b\")); // unaware of UTF-8 will match\n}\n\nTEST(Regex, AnyCharacter) {\n    Regex re(\".\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"a\"));\n    EXPECT_TRUE(rs.Match(\"b\"));\n    EXPECT_TRUE(rs.Match(EURO));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"aa\"));\n}\n\nTEST(Regex, CharacterRange1) {\n    Regex re(\"[abc]\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"a\"));\n    EXPECT_TRUE(rs.Match(\"b\"));\n    EXPECT_TRUE(rs.Match(\"c\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"`\"));\n    EXPECT_FALSE(rs.Match(\"d\"));\n    EXPECT_FALSE(rs.Match(\"aa\"));\n}\n\nTEST(Regex, CharacterRange2) {\n    Regex re(\"[^abc]\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"`\"));\n    EXPECT_TRUE(rs.Match(\"d\"));\n    EXPECT_FALSE(rs.Match(\"a\"));\n    EXPECT_FALSE(rs.Match(\"b\"));\n    EXPECT_FALSE(rs.Match(\"c\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"aa\"));\n}\n\nTEST(Regex, CharacterRange3) {\n    Regex re(\"[a-c]\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"a\"));\n    EXPECT_TRUE(rs.Match(\"b\"));\n    EXPECT_TRUE(rs.Match(\"c\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"`\"));\n    EXPECT_FALSE(rs.Match(\"d\"));\n    EXPECT_FALSE(rs.Match(\"aa\"));\n}\n\nTEST(Regex, CharacterRange4) {\n    Regex re(\"[^a-c]\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    
EXPECT_TRUE(rs.Match(\"`\"));\n    EXPECT_TRUE(rs.Match(\"d\"));\n    EXPECT_FALSE(rs.Match(\"a\"));\n    EXPECT_FALSE(rs.Match(\"b\"));\n    EXPECT_FALSE(rs.Match(\"c\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"aa\"));\n}\n\nTEST(Regex, CharacterRange5) {\n    Regex re(\"[-]\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"-\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"a\"));\n}\n\nTEST(Regex, CharacterRange6) {\n    Regex re(\"[a-]\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"a\"));\n    EXPECT_TRUE(rs.Match(\"-\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"`\"));\n    EXPECT_FALSE(rs.Match(\"b\"));\n}\n\nTEST(Regex, CharacterRange7) {\n    Regex re(\"[-a]\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"a\"));\n    EXPECT_TRUE(rs.Match(\"-\"));\n    EXPECT_FALSE(rs.Match(\"\"));\n    EXPECT_FALSE(rs.Match(\"`\"));\n    EXPECT_FALSE(rs.Match(\"b\"));\n}\n\nTEST(Regex, CharacterRange8) {\n    Regex re(\"[a-zA-Z0-9]*\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"Milo\"));\n    EXPECT_TRUE(rs.Match(\"MT19937\"));\n    EXPECT_TRUE(rs.Match(\"43\"));\n    EXPECT_FALSE(rs.Match(\"a_b\"));\n    EXPECT_FALSE(rs.Match(\"!\"));\n}\n\nTEST(Regex, Search) {\n    Regex re(\"abc\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Search(\"abc\"));\n    EXPECT_TRUE(rs.Search(\"_abc\"));\n    EXPECT_TRUE(rs.Search(\"abc_\"));\n    EXPECT_TRUE(rs.Search(\"_abc_\"));\n    EXPECT_TRUE(rs.Search(\"__abc__\"));\n    EXPECT_TRUE(rs.Search(\"abcabc\"));\n    EXPECT_FALSE(rs.Search(\"a\"));\n    EXPECT_FALSE(rs.Search(\"ab\"));\n    EXPECT_FALSE(rs.Search(\"bc\"));\n    EXPECT_FALSE(rs.Search(\"cba\"));\n}\n\nTEST(Regex, Search_BeginAnchor) {\n    Regex re(\"^abc\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch 
rs(re);\n    EXPECT_TRUE(rs.Search(\"abc\"));\n    EXPECT_TRUE(rs.Search(\"abc_\"));\n    EXPECT_TRUE(rs.Search(\"abcabc\"));\n    EXPECT_FALSE(rs.Search(\"_abc\"));\n    EXPECT_FALSE(rs.Search(\"_abc_\"));\n    EXPECT_FALSE(rs.Search(\"a\"));\n    EXPECT_FALSE(rs.Search(\"ab\"));\n    EXPECT_FALSE(rs.Search(\"bc\"));\n    EXPECT_FALSE(rs.Search(\"cba\"));\n}\n\nTEST(Regex, Search_EndAnchor) {\n    Regex re(\"abc$\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Search(\"abc\"));\n    EXPECT_TRUE(rs.Search(\"_abc\"));\n    EXPECT_TRUE(rs.Search(\"abcabc\"));\n    EXPECT_FALSE(rs.Search(\"abc_\"));\n    EXPECT_FALSE(rs.Search(\"_abc_\"));\n    EXPECT_FALSE(rs.Search(\"a\"));\n    EXPECT_FALSE(rs.Search(\"ab\"));\n    EXPECT_FALSE(rs.Search(\"bc\"));\n    EXPECT_FALSE(rs.Search(\"cba\"));\n}\n\nTEST(Regex, Search_BothAnchor) {\n    Regex re(\"^abc$\");\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Search(\"abc\"));\n    EXPECT_FALSE(rs.Search(\"\"));\n    EXPECT_FALSE(rs.Search(\"a\"));\n    EXPECT_FALSE(rs.Search(\"b\"));\n    EXPECT_FALSE(rs.Search(\"ab\"));\n    EXPECT_FALSE(rs.Search(\"abcd\"));\n}\n\nTEST(Regex, Escape) {\n    const char* s = \"\\\\^\\\\$\\\\|\\\\(\\\\)\\\\?\\\\*\\\\+\\\\.\\\\[\\\\]\\\\{\\\\}\\\\\\\\\\\\f\\\\n\\\\r\\\\t\\\\v[\\\\b][\\\\[][\\\\]]\";\n    Regex re(s);\n    ASSERT_TRUE(re.IsValid());\n    RegexSearch rs(re);\n    EXPECT_TRUE(rs.Match(\"^$|()?*+.[]{}\\\\\\x0C\\n\\r\\t\\x0B\\b[]\"));\n    EXPECT_FALSE(rs.Match(s)); // Not escaping\n}\n\nTEST(Regex, Invalid) {\n#define TEST_INVALID(s) \\\n    {\\\n        Regex re(s);\\\n        EXPECT_FALSE(re.IsValid());\\\n    }\n\n    TEST_INVALID(\"\");\n    TEST_INVALID(\"a|\");\n    TEST_INVALID(\"()\");\n    TEST_INVALID(\"(\");\n    TEST_INVALID(\")\");\n    TEST_INVALID(\"(a))\");\n    TEST_INVALID(\"(a|)\");\n    TEST_INVALID(\"(a||b)\");\n    TEST_INVALID(\"(|b)\");\n    TEST_INVALID(\"?\");\n    TEST_INVALID(\"*\");\n    
TEST_INVALID(\"+\");\n    TEST_INVALID(\"{\");\n    TEST_INVALID(\"{}\");\n    TEST_INVALID(\"a{a}\");\n    TEST_INVALID(\"a{0}\");\n    TEST_INVALID(\"a{-1}\");\n    TEST_INVALID(\"a{}\");\n    // TEST_INVALID(\"a{0,}\");   // Support now\n    TEST_INVALID(\"a{,0}\");\n    TEST_INVALID(\"a{1,0}\");\n    TEST_INVALID(\"a{-1,0}\");\n    TEST_INVALID(\"a{-1,1}\");\n    TEST_INVALID(\"a{4294967296}\"); // overflow of unsigned\n    TEST_INVALID(\"a{1a}\");\n    TEST_INVALID(\"[\");\n    TEST_INVALID(\"[]\");\n    TEST_INVALID(\"[^]\");\n    TEST_INVALID(\"[\\\\a]\");\n    TEST_INVALID(\"\\\\a\");\n\n#undef TEST_INVALID\n}\n\nTEST(Regex, Issue538) {\n    Regex re(\"^[0-9]+(\\\\\\\\.[0-9]+){0,2}\");\n    EXPECT_TRUE(re.IsValid());\n}\n\nTEST(Regex, Issue583) {\n    Regex re(\"[0-9]{99999}\");\n    ASSERT_TRUE(re.IsValid());\n}\n\n#undef EURO\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/schematest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#define RAPIDJSON_SCHEMA_VERBOSE 0\n#define RAPIDJSON_HAS_STDSTRING 1\n\n#include \"unittest.h\"\n#include \"rapidjson/schema.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/error/error.h\"\n#include \"rapidjson/error/en.h\"\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(variadic-macros)\n#elif defined(_MSC_VER)\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(4822) // local class member function does not have a body\n#endif\n\nusing namespace rapidjson;\n\n#define TEST_HASHER(json1, json2, expected) \\\n{\\\n    Document d1, d2;\\\n    d1.Parse(json1);\\\n    ASSERT_FALSE(d1.HasParseError());\\\n    d2.Parse(json2);\\\n    ASSERT_FALSE(d2.HasParseError());\\\n    internal::Hasher<Value, CrtAllocator> h1, h2;\\\n    d1.Accept(h1);\\\n    d2.Accept(h2);\\\n    ASSERT_TRUE(h1.IsValid());\\\n    ASSERT_TRUE(h2.IsValid());\\\n    /*printf(\"%s: 0x%016llx\\n%s: 0x%016llx\\n\\n\", json1, h1.GetHashCode(), json2, h2.GetHashCode());*/\\\n    EXPECT_TRUE(expected == (h1.GetHashCode() == h2.GetHashCode()));\\\n}\n\nTEST(SchemaValidator, Hasher) {\n    TEST_HASHER(\"null\", \"null\", true);\n\n    TEST_HASHER(\"true\", \"true\", true);\n    TEST_HASHER(\"false\", \"false\", true);\n    TEST_HASHER(\"true\", 
\"false\", false);\n    TEST_HASHER(\"false\", \"true\", false);\n    TEST_HASHER(\"true\", \"null\", false);\n    TEST_HASHER(\"false\", \"null\", false);\n\n    TEST_HASHER(\"1\", \"1\", true);\n    TEST_HASHER(\"2147483648\", \"2147483648\", true); // 2^31 can only be fit in unsigned\n    TEST_HASHER(\"-2147483649\", \"-2147483649\", true); // -2^31 - 1 can only be fit in int64_t\n    TEST_HASHER(\"2147483648\", \"2147483648\", true); // 2^31 can only be fit in unsigned\n    TEST_HASHER(\"4294967296\", \"4294967296\", true); // 2^32 can only be fit in int64_t\n    TEST_HASHER(\"9223372036854775808\", \"9223372036854775808\", true); // 2^63 can only be fit in uint64_t\n    TEST_HASHER(\"1.5\", \"1.5\", true);\n    TEST_HASHER(\"1\", \"1.0\", true);\n    TEST_HASHER(\"1\", \"-1\", false);\n    TEST_HASHER(\"0.0\", \"-0.0\", false);\n    TEST_HASHER(\"1\", \"true\", false);\n    TEST_HASHER(\"0\", \"false\", false);\n    TEST_HASHER(\"0\", \"null\", false);\n\n    TEST_HASHER(\"\\\"\\\"\", \"\\\"\\\"\", true);\n    TEST_HASHER(\"\\\"\\\"\", \"\\\"\\\\u0000\\\"\", false);\n    TEST_HASHER(\"\\\"Hello\\\"\", \"\\\"Hello\\\"\", true);\n    TEST_HASHER(\"\\\"Hello\\\"\", \"\\\"World\\\"\", false);\n    TEST_HASHER(\"\\\"Hello\\\"\", \"null\", false);\n    TEST_HASHER(\"\\\"Hello\\\\u0000\\\"\", \"\\\"Hello\\\"\", false);\n    TEST_HASHER(\"\\\"\\\"\", \"null\", false);\n    TEST_HASHER(\"\\\"\\\"\", \"true\", false);\n    TEST_HASHER(\"\\\"\\\"\", \"false\", false);\n\n    TEST_HASHER(\"[]\", \"[ ]\", true);\n    TEST_HASHER(\"[1, true, false]\", \"[1, true, false]\", true);\n    TEST_HASHER(\"[1, true, false]\", \"[1, true]\", false);\n    TEST_HASHER(\"[1, 2]\", \"[2, 1]\", false);\n    TEST_HASHER(\"[[1], 2]\", \"[[1, 2]]\", false);\n    TEST_HASHER(\"[1, 2]\", \"[1, [2]]\", false);\n    TEST_HASHER(\"[]\", \"null\", false);\n    TEST_HASHER(\"[]\", \"true\", false);\n    TEST_HASHER(\"[]\", \"false\", false);\n    TEST_HASHER(\"[]\", \"0\", false);\n    
TEST_HASHER(\"[]\", \"0.0\", false);\n    TEST_HASHER(\"[]\", \"\\\"\\\"\", false);\n\n    TEST_HASHER(\"{}\", \"{ }\", true);\n    TEST_HASHER(\"{\\\"a\\\":1}\", \"{\\\"a\\\":1}\", true);\n    TEST_HASHER(\"{\\\"a\\\":1}\", \"{\\\"b\\\":1}\", false);\n    TEST_HASHER(\"{\\\"a\\\":1}\", \"{\\\"a\\\":2}\", false);\n    TEST_HASHER(\"{\\\"a\\\":\\\"a\\\"}\", \"{\\\"b\\\":\\\"b\\\"}\", false); // Key equals value hashing\n    TEST_HASHER(\"{\\\"a\\\":\\\"a\\\", \\\"b\\\":\\\"b\\\"}\", \"{\\\"c\\\":\\\"c\\\", \\\"d\\\":\\\"d\\\"}\", false);\n    TEST_HASHER(\"{\\\"a\\\":\\\"a\\\"}\", \"{\\\"b\\\":\\\"b\\\", \\\"c\\\":\\\"c\\\"}\", false);\n    TEST_HASHER(\"{\\\"a\\\":1, \\\"b\\\":2}\", \"{\\\"b\\\":2, \\\"a\\\":1}\", true); // Member order insensitive\n    TEST_HASHER(\"{}\", \"null\", false);\n    TEST_HASHER(\"{}\", \"false\", false);\n    TEST_HASHER(\"{}\", \"true\", false);\n    TEST_HASHER(\"{}\", \"0\", false);\n    TEST_HASHER(\"{}\", \"0.0\", false);\n    TEST_HASHER(\"{}\", \"\\\"\\\"\", false);\n}\n\n// Test cases following http://spacetelescope.github.io/understanding-json-schema\n\n#define VALIDATE(schema, json, expected) \\\n{\\\n    VALIDATE_(schema, json, expected, true) \\\n}\n\n#define VALIDATE_(schema, json, expected, expected2) \\\n{\\\n    EXPECT_TRUE(expected2 == schema.GetError().ObjectEmpty());\\\n    EXPECT_TRUE(schema.IsSupportedSpecification());\\\n    SchemaValidator validator(schema);\\\n    Document d;\\\n    /*printf(\"\\n%s\\n\", json);*/\\\n    d.Parse(json);\\\n    EXPECT_FALSE(d.HasParseError());\\\n    EXPECT_TRUE(expected == d.Accept(validator));\\\n    EXPECT_TRUE(expected == validator.IsValid());\\\n    ValidateErrorCode code = validator.GetInvalidSchemaCode();\\\n    if (expected) {\\\n      EXPECT_TRUE(code == kValidateErrorNone);\\\n      EXPECT_TRUE(validator.GetInvalidSchemaKeyword() == 0);\\\n    }\\\n    if ((expected) && !validator.IsValid()) {\\\n        StringBuffer sb;\\\n        
validator.GetInvalidSchemaPointer().StringifyUriFragment(sb);\\\n        printf(\"Invalid schema: %s\\n\", sb.GetString());\\\n        printf(\"Invalid keyword: %s\\n\", validator.GetInvalidSchemaKeyword());\\\n        printf(\"Invalid code: %d\\n\", code);\\\n        printf(\"Invalid message: %s\\n\", GetValidateError_En(code));\\\n        sb.Clear();\\\n        validator.GetInvalidDocumentPointer().StringifyUriFragment(sb);\\\n        printf(\"Invalid document: %s\\n\", sb.GetString());\\\n        sb.Clear();\\\n        Writer<StringBuffer> w(sb);\\\n        validator.GetError().Accept(w);\\\n        printf(\"Validation error: %s\\n\", sb.GetString());\\\n    }\\\n}\n\n#define INVALIDATE(schema, json, invalidSchemaPointer, invalidSchemaKeyword, invalidDocumentPointer, error) \\\n{\\\n    INVALIDATE_(schema, json, invalidSchemaPointer, invalidSchemaKeyword, invalidDocumentPointer, error, kValidateDefaultFlags, SchemaValidator, Pointer) \\\n}\n\n#define INVALIDATE_(schema, json, invalidSchemaPointer, invalidSchemaKeyword, invalidDocumentPointer, error, \\\n    flags, SchemaValidatorType, PointerType) \\\n{\\\n    EXPECT_TRUE(schema.GetError().ObjectEmpty());\\\n    EXPECT_TRUE(schema.IsSupportedSpecification());\\\n    SchemaValidatorType validator(schema);\\\n    validator.SetValidateFlags(flags);\\\n    Document d;\\\n    /*printf(\"\\n%s\\n\", json);*/\\\n    d.Parse(json);\\\n    EXPECT_FALSE(d.HasParseError());\\\n    d.Accept(validator);\\\n    EXPECT_FALSE(validator.IsValid());\\\n    ValidateErrorCode code = validator.GetInvalidSchemaCode();\\\n    ASSERT_TRUE(code != kValidateErrorNone);\\\n    ASSERT_TRUE(strcmp(GetValidateError_En(code), \"Unknown error.\") != 0);\\\n    if (validator.GetInvalidSchemaPointer() != PointerType(invalidSchemaPointer)) {\\\n        StringBuffer sb;\\\n        validator.GetInvalidSchemaPointer().Stringify(sb);\\\n        printf(\"GetInvalidSchemaPointer() Expected: %s Actual: %s\\n\", invalidSchemaPointer, sb.GetString());\\\n 
       ADD_FAILURE();\\\n    }\\\n    ASSERT_TRUE(validator.GetInvalidSchemaKeyword() != 0);\\\n    if (strcmp(validator.GetInvalidSchemaKeyword(), invalidSchemaKeyword) != 0) {\\\n        printf(\"GetInvalidSchemaKeyword() Expected: %s Actual: %s\\n\", invalidSchemaKeyword, validator.GetInvalidSchemaKeyword());\\\n        ADD_FAILURE();\\\n    }\\\n    if (validator.GetInvalidDocumentPointer() != PointerType(invalidDocumentPointer)) {\\\n        StringBuffer sb;\\\n        validator.GetInvalidDocumentPointer().Stringify(sb);\\\n        printf(\"GetInvalidDocumentPointer() Expected: %s Actual: %s\\n\", invalidDocumentPointer, sb.GetString());\\\n        ADD_FAILURE();\\\n    }\\\n    Document e;\\\n    e.Parse(error);\\\n    if (validator.GetError() != e) {\\\n        StringBuffer sb;\\\n        Writer<StringBuffer> w(sb);\\\n        validator.GetError().Accept(w);\\\n        printf(\"GetError() Expected: %s Actual: %s\\n\", error, sb.GetString());\\\n        ADD_FAILURE();\\\n    }\\\n}\n\n// Use for checking whether a compiled schema document contains errors\n#define SCHEMAERROR(schema, error) \\\n{\\\n    Document e;\\\n    e.Parse(error);\\\n    if (schema.GetError() != e) {\\\n        StringBuffer sb;\\\n        Writer<StringBuffer> w(sb);\\\n        schema.GetError().Accept(w);\\\n        printf(\"GetError() Expected: %s Actual: %s\\n\", error, sb.GetString());\\\n        ADD_FAILURE();\\\n    }\\\n}\n\nTEST(SchemaValidator, Typeless) {\n    Document sd;\n    sd.Parse(\"{}\");\n    SchemaDocument s(sd);\n    \n    VALIDATE(s, \"42\", true);\n    VALIDATE(s, \"\\\"I'm a string\\\"\", true);\n    VALIDATE(s, \"{ \\\"an\\\": [ \\\"arbitrarily\\\", \\\"nested\\\" ], \\\"data\\\": \\\"structure\\\" }\", true);\n}\n\nTEST(SchemaValidator, MultiType) {\n    Document sd;\n    sd.Parse(\"{ \\\"type\\\": [\\\"number\\\", \\\"string\\\"] }\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"42\", true);\n    VALIDATE(s, \"\\\"Life, the universe, and everything\\\"\", 
true);\n    INVALIDATE(s, \"[\\\"Life\\\", \\\"the universe\\\", \\\"and everything\\\"]\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\", \\\"number\\\"], \\\"actual\\\": \\\"array\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Enum_Typed) {\n    Document sd;\n    sd.Parse(\"{ \\\"type\\\": \\\"string\\\", \\\"enum\\\" : [\\\"red\\\", \\\"amber\\\", \\\"green\\\"] }\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"\\\"red\\\"\", true);\n    INVALIDATE(s, \"\\\"blue\\\"\", \"\", \"enum\", \"\",\n        \"{ \\\"enum\\\": { \\\"errorCode\\\": 19, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\" }}\");\n}\n\nTEST(SchemaValidator, Enum_Typeless) {\n    Document sd;\n    sd.Parse(\"{  \\\"enum\\\": [\\\"red\\\", \\\"amber\\\", \\\"green\\\", null, 42] }\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"\\\"red\\\"\", true);\n    VALIDATE(s, \"null\", true);\n    VALIDATE(s, \"42\", true);\n    INVALIDATE(s, \"0\", \"\", \"enum\", \"\",\n        \"{ \\\"enum\\\": { \\\"errorCode\\\": 19, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\" }}\");\n}\n\nTEST(SchemaValidator, Enum_InvalidType) {\n    Document sd;\n    sd.Parse(\"{ \\\"type\\\": \\\"string\\\", \\\"enum\\\": [\\\"red\\\", \\\"amber\\\", \\\"green\\\", null] }\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"\\\"red\\\"\", true);\n    INVALIDATE(s, \"null\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"null\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, AllOf) {\n    {\n        Document sd;\n        sd.Parse(\"{\\\"allOf\\\": [{ \\\"type\\\": \\\"string\\\" }, { \\\"type\\\": \\\"string\\\", 
\\\"maxLength\\\": 5 }]}\");\n        SchemaDocument s(sd);\n\n        VALIDATE(s, \"\\\"ok\\\"\", true);\n        INVALIDATE(s, \"\\\"too long\\\"\", \"\", \"allOf\", \"\",\n            \"{ \\\"allOf\\\": {\"\n            \"    \\\"errors\\\": [\"\n            \"      {},\"\n            \"      {\\\"maxLength\\\": {\\\"errorCode\\\": 6, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/1\\\", \\\"expected\\\": 5, \\\"actual\\\": \\\"too long\\\"}}\"\n            \"    ],\"\n            \"    \\\"errorCode\\\": 23, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\"\"\n            \"}}\");\n    }\n    {\n        Document sd;\n        sd.Parse(\"{\\\"allOf\\\": [{ \\\"type\\\": \\\"string\\\" }, { \\\"type\\\": \\\"number\\\" } ] }\");\n        SchemaDocument s(sd);\n\n        VALIDATE(s, \"\\\"No way\\\"\", false);\n        INVALIDATE(s, \"-1\", \"\", \"allOf\", \"\",\n            \"{ \\\"allOf\\\": {\"\n            \"    \\\"errors\\\": [\"\n            \"      {\\\"type\\\": { \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/0\\\", \\\"errorCode\\\": 20, \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"integer\\\"}},\"\n            \"      {}\"\n            \"    ],\"\n            \"    \\\"errorCode\\\": 23, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\"\"\n            \"}}\");\n    }\n}\n\nTEST(SchemaValidator, AnyOf) {\n    Document sd;\n    sd.Parse(\"{\\\"anyOf\\\": [{ \\\"type\\\": \\\"string\\\" }, { \\\"type\\\": \\\"number\\\" } ] }\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"\\\"Yes\\\"\", true);\n    VALIDATE(s, \"42\", true);\n    INVALIDATE(s, \"{ \\\"Not a\\\": \\\"string or number\\\" }\", \"\", \"anyOf\", \"\",\n        \"{ \\\"anyOf\\\": {\"\n        \"    \\\"errorCode\\\": 24,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\", \"\n        \"    \\\"errors\\\": [\"\n        \"      { \\\"type\\\": {\"\n        \"          \\\"errorCode\\\": 20,\"\n       
 \"          \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/anyOf/0\\\",\"\n        \"          \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"object\\\"\"\n        \"      }},\"\n        \"      { \\\"type\\\": {\"\n        \"          \\\"errorCode\\\": 20,\"\n        \"          \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/anyOf/1\\\",\"\n        \"          \\\"expected\\\": [\\\"number\\\"], \\\"actual\\\": \\\"object\\\"\"\n        \"      }}\"\n        \"    ]\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, OneOf) {\n    Document sd;\n    sd.Parse(\"{\\\"oneOf\\\": [{ \\\"type\\\": \\\"number\\\", \\\"multipleOf\\\": 5 }, { \\\"type\\\": \\\"number\\\", \\\"multipleOf\\\": 3 } ] }\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"10\", true);\n    VALIDATE(s, \"9\", true);\n    INVALIDATE(s, \"2\", \"\", \"oneOf\", \"\",\n        \"{ \\\"oneOf\\\": {\"\n        \"    \\\"errorCode\\\": 21,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"errors\\\": [\"\n        \"      { \\\"multipleOf\\\": {\"\n        \"          \\\"errorCode\\\": 1,\"\n        \"          \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/oneOf/0\\\",\"\n        \"          \\\"expected\\\": 5, \\\"actual\\\": 2\"\n        \"      }},\"\n        \"      { \\\"multipleOf\\\": {\"\n        \"          \\\"errorCode\\\": 1,\"\n        \"          \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/oneOf/1\\\",\"\n        \"          \\\"expected\\\": 3, \\\"actual\\\": 2\"\n        \"      }}\"\n        \"    ]\"\n        \"}}\");\n    INVALIDATE(s, \"15\", \"\", \"oneOf\", \"\",\n        \"{ \\\"oneOf\\\": { \\\"errorCode\\\": 22, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\", \\\"matches\\\": [0,1]}}\");\n}\n\nTEST(SchemaValidator, Not) {\n    Document sd;\n    sd.Parse(\"{\\\"not\\\":{ \\\"type\\\": \\\"string\\\"}}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"42\", 
true);\n    VALIDATE(s, \"{ \\\"key\\\": \\\"value\\\" }\", true);\n    INVALIDATE(s, \"\\\"I am a string\\\"\", \"\", \"not\", \"\",\n        \"{ \\\"not\\\": { \\\"errorCode\\\": 25, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\" }}\");\n}\n\nTEST(SchemaValidator, Ref) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"  \\\"$schema\\\": \\\"http://json-schema.org/draft-04/schema#\\\",\"\n        \"\"\n        \"  \\\"definitions\\\": {\"\n        \"    \\\"address\\\": {\"\n        \"      \\\"type\\\": \\\"object\\\",\"\n        \"      \\\"properties\\\": {\"\n        \"        \\\"street_address\\\": { \\\"type\\\": \\\"string\\\" },\"\n        \"        \\\"city\\\":           { \\\"type\\\": \\\"string\\\" },\"\n        \"        \\\"state\\\":          { \\\"type\\\": \\\"string\\\" }\"\n        \"      },\"\n        \"      \\\"required\\\": [\\\"street_address\\\", \\\"city\\\", \\\"state\\\"]\"\n        \"    }\"\n        \"  },\"\n        \"  \\\"type\\\": \\\"object\\\",\"\n        \"  \\\"properties\\\": {\"\n        \"    \\\"billing_address\\\": { \\\"$ref\\\": \\\"#/definitions/address\\\" },\"\n        \"    \\\"shipping_address\\\": { \\\"$ref\\\": \\\"#/definitions/address\\\" }\"\n        \"  }\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{\\\"shipping_address\\\": {\\\"street_address\\\": \\\"1600 Pennsylvania Avenue NW\\\", \\\"city\\\": \\\"Washington\\\", \\\"state\\\": \\\"DC\\\"}, \\\"billing_address\\\": {\\\"street_address\\\": \\\"1st Street SE\\\", \\\"city\\\": \\\"Washington\\\", \\\"state\\\": \\\"DC\\\"} }\", true);\n}\n\nTEST(SchemaValidator, Ref_AllOf) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"  \\\"$schema\\\": \\\"http://json-schema.org/draft-04/schema#\\\",\"\n        \"\"\n        \"  \\\"definitions\\\": {\"\n        \"    \\\"address\\\": {\"\n        \"      \\\"type\\\": \\\"object\\\",\"\n        \"      \\\"properties\\\": {\"\n        \"        
\\\"street_address\\\": { \\\"type\\\": \\\"string\\\" },\"\n        \"        \\\"city\\\":           { \\\"type\\\": \\\"string\\\" },\"\n        \"        \\\"state\\\":          { \\\"type\\\": \\\"string\\\" }\"\n        \"      },\"\n        \"      \\\"required\\\": [\\\"street_address\\\", \\\"city\\\", \\\"state\\\"]\"\n        \"    }\"\n        \"  },\"\n        \"  \\\"type\\\": \\\"object\\\",\"\n        \"  \\\"properties\\\": {\"\n        \"    \\\"billing_address\\\": { \\\"$ref\\\": \\\"#/definitions/address\\\" },\"\n        \"    \\\"shipping_address\\\": {\"\n        \"      \\\"allOf\\\": [\"\n        \"        { \\\"$ref\\\": \\\"#/definitions/address\\\" },\"\n        \"        { \\\"properties\\\":\"\n        \"          { \\\"type\\\": { \\\"enum\\\": [ \\\"residential\\\", \\\"business\\\" ] } },\"\n        \"          \\\"required\\\": [\\\"type\\\"]\"\n        \"        }\"\n        \"      ]\"\n        \"    }\"\n        \"  }\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    INVALIDATE(s, \"{\\\"shipping_address\\\": {\\\"street_address\\\": \\\"1600 Pennsylvania Avenue NW\\\", \\\"city\\\": \\\"Washington\\\", \\\"state\\\": \\\"DC\\\"} }\", \"/properties/shipping_address\", \"allOf\", \"/shipping_address\",\n        \"{ \\\"allOf\\\": {\"\n        \"    \\\"errors\\\": [\"\n        \"      {},\"\n        \"      {\\\"required\\\": {\\\"errorCode\\\": 15, \\\"instanceRef\\\": \\\"#/shipping_address\\\", \\\"schemaRef\\\": \\\"#/properties/shipping_address/allOf/1\\\", \\\"missing\\\": [\\\"type\\\"]}}\"\n        \"    ],\"\n        \"    \\\"errorCode\\\":23,\\\"instanceRef\\\":\\\"#/shipping_address\\\",\\\"schemaRef\\\":\\\"#/properties/shipping_address\\\"\"\n        \"}}\");\n    VALIDATE(s, \"{\\\"shipping_address\\\": {\\\"street_address\\\": \\\"1600 Pennsylvania Avenue NW\\\", \\\"city\\\": \\\"Washington\\\", \\\"state\\\": \\\"DC\\\", \\\"type\\\": \\\"business\\\"} }\", true);\n}\n\nTEST(SchemaValidator, String) {\n    
Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"string\\\"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"\\\"I'm a string\\\"\", true);\n    INVALIDATE(s, \"42\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"integer\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"2147483648\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"integer\\\"\"\n        \"}}\"); // 2^31 can only be fit in unsigned\n    INVALIDATE(s, \"-2147483649\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"integer\\\"\"\n        \"}}\"); // -2^31 - 1 can only be fit in int64_t\n    INVALIDATE(s, \"4294967296\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"integer\\\"\"\n        \"}}\"); // 2^32 can only be fit in int64_t\n    INVALIDATE(s, \"3.1415926\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"number\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, String_LengthRange) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"string\\\",\\\"minLength\\\":2,\\\"maxLength\\\":3}\");\n    SchemaDocument s(sd);\n\n    INVALIDATE(s, 
\"\\\"A\\\"\", \"\", \"minLength\", \"\",\n        \"{ \\\"minLength\\\": {\"\n        \"    \\\"errorCode\\\": 7,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 2, \\\"actual\\\": \\\"A\\\"\"\n        \"}}\");\n    VALIDATE(s, \"\\\"AB\\\"\", true);\n    VALIDATE(s, \"\\\"ABC\\\"\", true);\n    INVALIDATE(s, \"\\\"ABCD\\\"\", \"\", \"maxLength\", \"\",\n        \"{ \\\"maxLength\\\": {\"\n        \"    \\\"errorCode\\\": 6,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 3, \\\"actual\\\": \\\"ABCD\\\"\"\n        \"}}\");\n}\n\n#if RAPIDJSON_SCHEMA_HAS_REGEX\nTEST(SchemaValidator, String_Pattern) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"string\\\",\\\"pattern\\\":\\\"^(\\\\\\\\([0-9]{3}\\\\\\\\))?[0-9]{3}-[0-9]{4}$\\\"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"\\\"555-1212\\\"\", true);\n    VALIDATE(s, \"\\\"(888)555-1212\\\"\", true);\n    INVALIDATE(s, \"\\\"(888)555-1212 ext. 532\\\"\", \"\", \"pattern\", \"\",\n        \"{ \\\"pattern\\\": {\"\n        \"    \\\"errorCode\\\": 8,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"actual\\\": \\\"(888)555-1212 ext. 
532\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"\\\"(800)FLOWERS\\\"\", \"\", \"pattern\", \"\",\n        \"{ \\\"pattern\\\": {\"\n        \"    \\\"errorCode\\\": 8,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"actual\\\": \\\"(800)FLOWERS\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, String_Pattern_Invalid) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"string\\\",\\\"pattern\\\":\\\"a{0}\\\"}\");\n    SchemaDocument s(sd);\n    SCHEMAERROR(s, \"{\\\"RegexInvalid\\\":{\\\"errorCode\\\":9,\\\"instanceRef\\\":\\\"#/pattern\\\",\\\"value\\\":\\\"a{0}\\\"}}\");\n\n    VALIDATE_(s, \"\\\"\\\"\", true, false);\n    VALIDATE_(s, \"\\\"a\\\"\", true, false);\n    VALIDATE_(s, \"\\\"aa\\\"\", true, false);\n}\n#endif\n\nTEST(SchemaValidator, Integer) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"integer\\\"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"42\", true);\n    VALIDATE(s, \"-1\", true);\n    VALIDATE(s, \"2147483648\", true); // 2^31 can only be fit in unsigned\n    VALIDATE(s, \"-2147483649\", true); // -2^31 - 1 can only be fit in int64_t\n    VALIDATE(s, \"4294967296\", true); // 2^32 can only be fit in int64_t\n    INVALIDATE(s, \"3.1415926\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"number\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"\\\"42\\\"\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"string\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Integer_Range) {\n    Document sd;\n    
sd.Parse(\"{\\\"type\\\":\\\"integer\\\",\\\"minimum\\\":0,\\\"maximum\\\":100,\\\"exclusiveMaximum\\\":true}\");\n    SchemaDocument s(sd);\n\n    INVALIDATE(s, \"-1\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 0, \\\"actual\\\": -1\"\n        \"}}\");\n    VALIDATE(s, \"0\", true);\n    VALIDATE(s, \"10\", true);\n    VALIDATE(s, \"99\", true);\n    INVALIDATE(s, \"100\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 100\"\n        \"}}\");\n    INVALIDATE(s, \"101\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 101\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Integer_Range64Boundary) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"integer\\\",\\\"minimum\\\":-9223372036854775807,\\\"maximum\\\":9223372036854775806}\");\n    SchemaDocument s(sd);\n\n    INVALIDATE(s, \"-9223372036854775808\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": -9223372036854775807, \\\"actual\\\": -9223372036854775808\"\n        \"}}\");\n    VALIDATE(s, \"-9223372036854775807\", true);\n    VALIDATE(s, \"-2147483648\", true); // int min\n    VALIDATE(s, \"0\", true);\n    VALIDATE(s, \"2147483647\", true);  // int max\n    VALIDATE(s, \"2147483648\", true);  // unsigned first\n    VALIDATE(s, \"4294967295\", true);  // 
unsigned max\n    VALIDATE(s, \"9223372036854775806\", true);\n    INVALIDATE(s, \"9223372036854775807\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 2,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775806, \\\"actual\\\": 9223372036854775807\"\n        \"}}\");\n    INVALIDATE(s, \"18446744073709551615\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 2,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775806, \\\"actual\\\": 18446744073709551615\"\n        \"}}\");   // uint64_t max\n}\n\nTEST(SchemaValidator, Integer_RangeU64Boundary) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"integer\\\",\\\"minimum\\\":9223372036854775808,\\\"maximum\\\":18446744073709551614}\");\n    SchemaDocument s(sd);\n\n    INVALIDATE(s, \"-9223372036854775808\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775808, \\\"actual\\\": -9223372036854775808\"\n        \"}}\");\n    INVALIDATE(s, \"9223372036854775807\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775808, \\\"actual\\\": 9223372036854775807\"\n        \"}}\");\n    INVALIDATE(s, \"-2147483648\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775808, \\\"actual\\\": -2147483648\"\n        \"}}\"); // int min\n    INVALIDATE(s, \"0\", \"\", 
\"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775808, \\\"actual\\\": 0\"\n        \"}}\");\n    INVALIDATE(s, \"2147483647\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775808, \\\"actual\\\": 2147483647\"\n        \"}}\");  // int max\n    INVALIDATE(s, \"2147483648\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775808, \\\"actual\\\": 2147483648\"\n        \"}}\");  // unsigned first\n    INVALIDATE(s, \"4294967295\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775808, \\\"actual\\\": 4294967295\"\n        \"}}\");  // unsigned max\n    VALIDATE(s, \"9223372036854775808\", true);\n    VALIDATE(s, \"18446744073709551614\", true);\n    INVALIDATE(s, \"18446744073709551615\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 2,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 18446744073709551614, \\\"actual\\\": 18446744073709551615\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Integer_Range64BoundaryExclusive) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"integer\\\",\\\"minimum\\\":-9223372036854775808,\\\"maximum\\\":18446744073709551615,\\\"exclusiveMinimum\\\":true,\\\"exclusiveMaximum\\\":true}\");\n    SchemaDocument s(sd);\n\n    INVALIDATE(s, 
\"-9223372036854775808\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 5,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": -9223372036854775808, \\\"exclusiveMinimum\\\": true, \"\n        \"    \\\"actual\\\": -9223372036854775808\"\n        \"}}\");\n    VALIDATE(s, \"-9223372036854775807\", true);\n    VALIDATE(s, \"18446744073709551614\", true);\n    INVALIDATE(s, \"18446744073709551615\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 18446744073709551615, \\\"exclusiveMaximum\\\": true, \"\n        \"    \\\"actual\\\": 18446744073709551615\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Integer_MultipleOf) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"integer\\\",\\\"multipleOf\\\":10}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"0\", true);\n    VALIDATE(s, \"10\", true);\n    VALIDATE(s, \"-10\", true);\n    VALIDATE(s, \"20\", true);\n    INVALIDATE(s, \"23\", \"\", \"multipleOf\", \"\",\n        \"{ \\\"multipleOf\\\": {\"\n        \"    \\\"errorCode\\\": 1,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 10, \\\"actual\\\": 23\"\n        \"}}\");\n    INVALIDATE(s, \"-23\", \"\", \"multipleOf\", \"\",\n        \"{ \\\"multipleOf\\\": {\"\n        \"    \\\"errorCode\\\": 1,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 10, \\\"actual\\\": -23\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Integer_MultipleOf64Boundary) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"integer\\\",\\\"multipleOf\\\":18446744073709551615}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"0\", true);\n    VALIDATE(s, \"18446744073709551615\", 
true);\n    INVALIDATE(s, \"18446744073709551614\", \"\", \"multipleOf\", \"\",\n        \"{ \\\"multipleOf\\\": {\"\n        \"    \\\"errorCode\\\": 1,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 18446744073709551615, \\\"actual\\\": 18446744073709551614\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Number_Range) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"number\\\",\\\"minimum\\\":0,\\\"maximum\\\":100,\\\"exclusiveMaximum\\\":true}\");\n    SchemaDocument s(sd);\n\n    INVALIDATE(s, \"-1\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 0, \\\"actual\\\": -1\"\n        \"}}\");\n    VALIDATE(s, \"0\", true);\n    VALIDATE(s, \"0.1\", true);\n    VALIDATE(s, \"10\", true);\n    VALIDATE(s, \"99\", true);\n    VALIDATE(s, \"99.9\", true);\n    INVALIDATE(s, \"100\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 100\"\n        \"}}\");\n    INVALIDATE(s, \"100.0\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 100.0\"\n        \"}}\");\n    INVALIDATE(s, \"101.5\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 101.5\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Number_RangeInt) {\n  
  Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"number\\\",\\\"minimum\\\":-100,\\\"maximum\\\":-1,\\\"exclusiveMaximum\\\":true}\");\n    SchemaDocument s(sd);\n\n    INVALIDATE(s, \"-101\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": -100, \\\"actual\\\": -101\"\n        \"}}\");\n    INVALIDATE(s, \"-100.1\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": -100, \\\"actual\\\": -100.1\"\n        \"}}\");\n    VALIDATE(s, \"-100\", true);\n    VALIDATE(s, \"-2\", true);\n    INVALIDATE(s, \"-1\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": -1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": -1\"\n        \"}}\");\n    INVALIDATE(s, \"-0.9\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": -1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": -0.9\"\n        \"}}\");\n    INVALIDATE(s, \"0\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": -1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 0\"\n        \"}}\");\n    INVALIDATE(s, \"2147483647\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": -1, 
\\\"exclusiveMaximum\\\": true, \\\"actual\\\": 2147483647\"\n        \"}}\");  // int max\n    INVALIDATE(s, \"2147483648\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": -1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 2147483648\"\n        \"}}\");  // unsigned first\n    INVALIDATE(s, \"4294967295\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": -1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 4294967295\"\n        \"}}\");  // unsigned max\n    INVALIDATE(s, \"9223372036854775808\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": -1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 9223372036854775808\"\n        \"}}\");\n    INVALIDATE(s, \"18446744073709551614\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": -1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 18446744073709551614\"\n        \"}}\");\n    INVALIDATE(s, \"18446744073709551615\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": -1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 18446744073709551615\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Number_RangeDouble) {\n    Document sd;\n    
sd.Parse(\"{\\\"type\\\":\\\"number\\\",\\\"minimum\\\":0.1,\\\"maximum\\\":100.1,\\\"exclusiveMaximum\\\":true}\");\n    SchemaDocument s(sd);\n\n    INVALIDATE(s, \"-9223372036854775808\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 0.1, \\\"actual\\\": -9223372036854775808\"\n        \"}}\");\n    INVALIDATE(s, \"-2147483648\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 0.1, \\\"actual\\\": -2147483648\"\n        \"}}\"); // int min\n    INVALIDATE(s, \"-1\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 0.1, \\\"actual\\\": -1\"\n        \"}}\");\n    VALIDATE(s, \"0.1\", true);\n    VALIDATE(s, \"10\", true);\n    VALIDATE(s, \"99\", true);\n    VALIDATE(s, \"100\", true);\n    INVALIDATE(s, \"101\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100.1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 101\"\n        \"}}\");\n    INVALIDATE(s, \"101.5\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100.1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 101.5\"\n        \"}}\");\n    INVALIDATE(s, \"18446744073709551614\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    
\\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100.1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 18446744073709551614\"\n        \"}}\");\n    INVALIDATE(s, \"18446744073709551615\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100.1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 18446744073709551615\"\n        \"}}\");\n    INVALIDATE(s, \"2147483647\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100.1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 2147483647\"\n        \"}}\");  // int max\n    INVALIDATE(s, \"2147483648\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100.1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 2147483648\"\n        \"}}\");  // unsigned first\n    INVALIDATE(s, \"4294967295\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100.1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 4294967295\"\n        \"}}\");  // unsigned max\n    INVALIDATE(s, \"9223372036854775808\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100.1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 9223372036854775808\"\n        \"}}\");\n    INVALIDATE(s, \"18446744073709551614\", \"\", \"maximum\", 
\"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100.1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 18446744073709551614\"\n        \"}}\");\n    INVALIDATE(s, \"18446744073709551615\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 100.1, \\\"exclusiveMaximum\\\": true, \\\"actual\\\": 18446744073709551615\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Number_RangeDoubleU64Boundary) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"number\\\",\\\"minimum\\\":9223372036854775808.0,\\\"maximum\\\":18446744073709550000.0}\");\n    SchemaDocument s(sd);\n\n    INVALIDATE(s, \"-9223372036854775808\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775808.0, \\\"actual\\\": -9223372036854775808\"\n        \"}}\");\n    INVALIDATE(s, \"-2147483648\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775808.0, \\\"actual\\\": -2147483648\"\n        \"}}\"); // int min\n    INVALIDATE(s, \"0\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775808.0, \\\"actual\\\": 0\"\n        \"}}\");\n    INVALIDATE(s, \"2147483647\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    
\\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775808.0, \\\"actual\\\": 2147483647\"\n        \"}}\");  // int max\n    INVALIDATE(s, \"2147483648\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775808.0, \\\"actual\\\": 2147483648\"\n        \"}}\");  // unsigned first\n    INVALIDATE(s, \"4294967295\", \"\", \"minimum\", \"\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 9223372036854775808.0, \\\"actual\\\": 4294967295\"\n        \"}}\");  // unsigned max\n    VALIDATE(s, \"9223372036854775808\", true);\n    VALIDATE(s, \"18446744073709540000\", true);\n    INVALIDATE(s, \"18446744073709551615\", \"\", \"maximum\", \"\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 2,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 18446744073709550000.0, \\\"actual\\\": 18446744073709551615\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Number_MultipleOf) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"number\\\",\\\"multipleOf\\\":10.0}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"0\", true);\n    VALIDATE(s, \"10\", true);\n    VALIDATE(s, \"-10\", true);\n    VALIDATE(s, \"20\", true);\n    INVALIDATE(s, \"23\", \"\", \"multipleOf\", \"\",\n        \"{ \\\"multipleOf\\\": {\"\n        \"    \\\"errorCode\\\": 1,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 10.0, \\\"actual\\\": 23\"\n        \"}}\");\n    INVALIDATE(s, \"-2147483648\", \"\", \"multipleOf\", \"\",\n        \"{ \\\"multipleOf\\\": {\"\n        \"    
\\\"errorCode\\\": 1,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 10.0, \\\"actual\\\": -2147483648\"\n        \"}}\");  // int min\n    VALIDATE(s, \"-2147483640\", true);\n    INVALIDATE(s, \"2147483647\", \"\", \"multipleOf\", \"\",\n        \"{ \\\"multipleOf\\\": {\"\n        \"    \\\"errorCode\\\": 1,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 10.0, \\\"actual\\\": 2147483647\"\n        \"}}\");  // int max\n    INVALIDATE(s, \"2147483648\", \"\", \"multipleOf\", \"\",\n        \"{ \\\"multipleOf\\\": {\"\n        \"    \\\"errorCode\\\": 1,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 10.0, \\\"actual\\\": 2147483648\"\n        \"}}\");  // unsigned first\n    VALIDATE(s, \"2147483650\", true);\n    INVALIDATE(s, \"4294967295\", \"\", \"multipleOf\", \"\",\n        \"{ \\\"multipleOf\\\": {\"\n        \"    \\\"errorCode\\\": 1,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 10.0, \\\"actual\\\": 4294967295\"\n        \"}}\");  // unsigned max\n    VALIDATE(s, \"4294967300\", true);\n}\n\nTEST(SchemaValidator, Number_MultipleOfOne) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"number\\\",\\\"multipleOf\\\":1}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"42\", true);\n    VALIDATE(s, \"42.0\", true);\n    INVALIDATE(s, \"3.1415926\", \"\", \"multipleOf\", \"\",\n        \"{ \\\"multipleOf\\\": {\"\n        \"    \\\"errorCode\\\": 1,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 1, \\\"actual\\\": 3.1415926\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Object) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"object\\\"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, 
\"{\\\"key\\\":\\\"value\\\",\\\"another_key\\\":\\\"another_value\\\"}\", true);\n    VALIDATE(s, \"{\\\"Sun\\\":1.9891e30,\\\"Jupiter\\\":1.8986e27,\\\"Saturn\\\":5.6846e26,\\\"Neptune\\\":10.243e25,\\\"Uranus\\\":8.6810e25,\\\"Earth\\\":5.9736e24,\\\"Venus\\\":4.8685e24,\\\"Mars\\\":6.4185e23,\\\"Mercury\\\":3.3022e23,\\\"Moon\\\":7.349e22,\\\"Pluto\\\":1.25e22}\", true);    \n    INVALIDATE(s, \"[\\\"An\\\", \\\"array\\\", \\\"not\\\", \\\"an\\\", \\\"object\\\"]\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"object\\\"], \\\"actual\\\": \\\"array\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"\\\"Not an object\\\"\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"object\\\"], \\\"actual\\\": \\\"string\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Object_Properties) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"    \\\"type\\\": \\\"object\\\",\"\n        \"    \\\"properties\\\" : {\"\n        \"        \\\"number\\\": { \\\"type\\\": \\\"number\\\" },\"\n        \"        \\\"street_name\\\" : { \\\"type\\\": \\\"string\\\" },\"\n        \"        \\\"street_type\\\" : { \\\"type\\\": \\\"string\\\", \\\"enum\\\" : [\\\"Street\\\", \\\"Avenue\\\", \\\"Boulevard\\\"] }\"\n        \"    }\"\n        \"}\");\n\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{ \\\"number\\\": 1600, \\\"street_name\\\": \\\"Pennsylvania\\\", \\\"street_type\\\": \\\"Avenue\\\" }\", true);\n    INVALIDATE(s, \"{ \\\"number\\\": \\\"1600\\\", \\\"street_name\\\": \\\"Pennsylvania\\\", \\\"street_type\\\": \\\"Avenue\\\" }\", \"/properties/number\", \"type\", \"/number\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 
20,\"\n        \"    \\\"instanceRef\\\": \\\"#/number\\\", \\\"schemaRef\\\": \\\"#/properties/number\\\",\"\n        \"    \\\"expected\\\": [\\\"number\\\"], \\\"actual\\\": \\\"string\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"{ \\\"number\\\": \\\"One\\\", \\\"street_name\\\": \\\"Microsoft\\\", \\\"street_type\\\": \\\"Way\\\" }\",\n        \"/properties/number\", \"type\", \"/number\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/number\\\", \\\"schemaRef\\\": \\\"#/properties/number\\\",\"\n        \"    \\\"expected\\\": [\\\"number\\\"], \\\"actual\\\": \\\"string\\\"\"\n        \"}}\"); // fail fast\n    VALIDATE(s, \"{ \\\"number\\\": 1600, \\\"street_name\\\": \\\"Pennsylvania\\\" }\", true);\n    VALIDATE(s, \"{}\", true);\n    VALIDATE(s, \"{ \\\"number\\\": 1600, \\\"street_name\\\": \\\"Pennsylvania\\\", \\\"street_type\\\": \\\"Avenue\\\", \\\"direction\\\": \\\"NW\\\" }\", true);\n}\n\nTEST(SchemaValidator, Object_AdditionalPropertiesBoolean) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"    \\\"type\\\": \\\"object\\\",\"\n        \"        \\\"properties\\\" : {\"\n        \"        \\\"number\\\": { \\\"type\\\": \\\"number\\\" },\"\n        \"            \\\"street_name\\\" : { \\\"type\\\": \\\"string\\\" },\"\n        \"            \\\"street_type\\\" : { \\\"type\\\": \\\"string\\\",\"\n        \"            \\\"enum\\\" : [\\\"Street\\\", \\\"Avenue\\\", \\\"Boulevard\\\"]\"\n        \"        }\"\n        \"    },\"\n        \"    \\\"additionalProperties\\\": false\"\n        \"}\");\n\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{ \\\"number\\\": 1600, \\\"street_name\\\": \\\"Pennsylvania\\\", \\\"street_type\\\": \\\"Avenue\\\" }\", true);\n    INVALIDATE(s, \"{ \\\"number\\\": 1600, \\\"street_name\\\": \\\"Pennsylvania\\\", \\\"street_type\\\": \\\"Avenue\\\", \\\"direction\\\": \\\"NW\\\" }\", \"\", \"additionalProperties\", \"/direction\",\n   
     \"{ \\\"additionalProperties\\\": {\"\n        \"    \\\"errorCode\\\": 16,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"disallowed\\\": \\\"direction\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Object_AdditionalPropertiesObject) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"    \\\"type\\\": \\\"object\\\",\"\n        \"    \\\"properties\\\" : {\"\n        \"        \\\"number\\\": { \\\"type\\\": \\\"number\\\" },\"\n        \"        \\\"street_name\\\" : { \\\"type\\\": \\\"string\\\" },\"\n        \"        \\\"street_type\\\" : { \\\"type\\\": \\\"string\\\",\"\n        \"            \\\"enum\\\" : [\\\"Street\\\", \\\"Avenue\\\", \\\"Boulevard\\\"]\"\n        \"        }\"\n        \"    },\"\n        \"    \\\"additionalProperties\\\": { \\\"type\\\": \\\"string\\\" }\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{ \\\"number\\\": 1600, \\\"street_name\\\": \\\"Pennsylvania\\\", \\\"street_type\\\": \\\"Avenue\\\" }\", true);\n    VALIDATE(s, \"{ \\\"number\\\": 1600, \\\"street_name\\\": \\\"Pennsylvania\\\", \\\"street_type\\\": \\\"Avenue\\\", \\\"direction\\\": \\\"NW\\\" }\", true);\n    INVALIDATE(s, \"{ \\\"number\\\": 1600, \\\"street_name\\\": \\\"Pennsylvania\\\", \\\"street_type\\\": \\\"Avenue\\\", \\\"office_number\\\": 201 }\", \"/additionalProperties\", \"type\", \"/office_number\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/office_number\\\", \\\"schemaRef\\\": \\\"#/additionalProperties\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"integer\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Object_Required) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"    \\\"type\\\": \\\"object\\\",\"\n        \"    \\\"properties\\\" : {\"\n        \"        \\\"name\\\":      { \\\"type\\\": \\\"string\\\" },\"\n        \"        
\\\"email\\\" : { \\\"type\\\": \\\"string\\\" },\"\n        \"        \\\"address\\\" : { \\\"type\\\": \\\"string\\\" },\"\n        \"        \\\"telephone\\\" : { \\\"type\\\": \\\"string\\\" }\"\n        \"    },\"\n        \"    \\\"required\\\":[\\\"name\\\", \\\"email\\\"]\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{ \\\"name\\\": \\\"William Shakespeare\\\", \\\"email\\\" : \\\"bill@stratford-upon-avon.co.uk\\\" }\", true);\n    VALIDATE(s, \"{ \\\"name\\\": \\\"William Shakespeare\\\", \\\"email\\\" : \\\"bill@stratford-upon-avon.co.uk\\\", \\\"address\\\" : \\\"Henley Street, Stratford-upon-Avon, Warwickshire, England\\\", \\\"authorship\\\" : \\\"in question\\\"}\", true);\n    INVALIDATE(s, \"{ \\\"name\\\": \\\"William Shakespeare\\\", \\\"address\\\" : \\\"Henley Street, Stratford-upon-Avon, Warwickshire, England\\\" }\", \"\", \"required\", \"\",\n        \"{ \\\"required\\\": {\"\n        \"    \\\"errorCode\\\": 15,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"missing\\\": [\\\"email\\\"]\"\n        \"}}\");\n    INVALIDATE(s, \"{}\", \"\", \"required\", \"\",\n        \"{ \\\"required\\\": {\"\n        \"    \\\"errorCode\\\": 15,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"missing\\\": [\\\"name\\\", \\\"email\\\"]\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Object_Required_PassWithDefault) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"    \\\"type\\\": \\\"object\\\",\"\n        \"    \\\"properties\\\" : {\"\n        \"        \\\"name\\\":      { \\\"type\\\": \\\"string\\\", \\\"default\\\": \\\"William Shakespeare\\\" },\"\n        \"        \\\"email\\\" : { \\\"type\\\": \\\"string\\\", \\\"default\\\": \\\"\\\" },\"\n        \"        \\\"address\\\" : { \\\"type\\\": \\\"string\\\" },\"\n        \"        \\\"telephone\\\" : { \\\"type\\\": \\\"string\\\" }\"\n        \"    },\"\n       
 \"    \\\"required\\\":[\\\"name\\\", \\\"email\\\"]\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{ \\\"email\\\" : \\\"bill@stratford-upon-avon.co.uk\\\", \\\"address\\\" : \\\"Henley Street, Stratford-upon-Avon, Warwickshire, England\\\", \\\"authorship\\\" : \\\"in question\\\"}\", true);\n    INVALIDATE(s, \"{ \\\"name\\\": \\\"William Shakespeare\\\", \\\"address\\\" : \\\"Henley Street, Stratford-upon-Avon, Warwickshire, England\\\" }\", \"\", \"required\", \"\",\n        \"{ \\\"required\\\": {\"\n        \"    \\\"errorCode\\\": 15,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"missing\\\": [\\\"email\\\"]\"\n        \"}}\");\n    INVALIDATE(s, \"{}\", \"\", \"required\", \"\",\n        \"{ \\\"required\\\": {\"\n        \"    \\\"errorCode\\\": 15,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"missing\\\": [\\\"email\\\"]\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Object_PropertiesRange) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"object\\\", \\\"minProperties\\\":2, \\\"maxProperties\\\":3}\");\n    SchemaDocument s(sd);\n\n    INVALIDATE(s, \"{}\", \"\", \"minProperties\", \"\",\n        \"{ \\\"minProperties\\\": {\"\n        \"    \\\"errorCode\\\": 14,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 2, \\\"actual\\\": 0\"\n        \"}}\");\n    INVALIDATE(s, \"{\\\"a\\\":0}\", \"\", \"minProperties\", \"\",\n        \"{ \\\"minProperties\\\": {\"\n        \"    \\\"errorCode\\\": 14,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 2, \\\"actual\\\": 1\"\n        \"}}\");\n    VALIDATE(s, \"{\\\"a\\\":0,\\\"b\\\":1}\", true);\n    VALIDATE(s, \"{\\\"a\\\":0,\\\"b\\\":1,\\\"c\\\":2}\", true);\n    INVALIDATE(s, \"{\\\"a\\\":0,\\\"b\\\":1,\\\"c\\\":2,\\\"d\\\":3}\", \"\", 
\"maxProperties\", \"\",\n        \"{ \\\"maxProperties\\\": {\"\n        \"    \\\"errorCode\\\": 13,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\", \"\n        \"    \\\"expected\\\": 3, \\\"actual\\\": 4\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Object_PropertyDependencies) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"  \\\"type\\\": \\\"object\\\",\"\n        \"  \\\"properties\\\": {\"\n        \"    \\\"name\\\": { \\\"type\\\": \\\"string\\\" },\"\n        \"    \\\"credit_card\\\": { \\\"type\\\": \\\"number\\\" },\"\n        \"    \\\"cvv_code\\\": { \\\"type\\\": \\\"number\\\" },\"\n        \"    \\\"billing_address\\\": { \\\"type\\\": \\\"string\\\" }\"\n        \"  },\"\n        \"  \\\"required\\\": [\\\"name\\\"],\"\n        \"  \\\"dependencies\\\": {\"\n        \"    \\\"credit_card\\\": [\\\"cvv_code\\\", \\\"billing_address\\\"]\"\n        \"  }\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{ \\\"name\\\": \\\"John Doe\\\", \\\"credit_card\\\": 5555555555555555, \\\"cvv_code\\\": 777, \"\n        \"\\\"billing_address\\\": \\\"555 Debtor's Lane\\\" }\", true);\n    INVALIDATE(s, \"{ \\\"name\\\": \\\"John Doe\\\", \\\"credit_card\\\": 5555555555555555 }\", \"\", \"dependencies\", \"\",\n        \"{ \\\"dependencies\\\": {\"\n        \"    \\\"errorCode\\\": 18,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"errors\\\": {\"\n        \"       \\\"credit_card\\\": {\"\n        \"        \\\"required\\\": {\"\n        \"          \\\"errorCode\\\": 15,\"\n        \"          \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/dependencies/credit_card\\\",\"\n        \"          \\\"missing\\\": [\\\"cvv_code\\\", \\\"billing_address\\\"]\"\n        \"    } } }\"\n        \"}}\");\n    VALIDATE(s, \"{ \\\"name\\\": \\\"John Doe\\\"}\", true);\n    VALIDATE(s, \"{ \\\"name\\\": \\\"John Doe\\\", \\\"cvv_code\\\": 777, 
\\\"billing_address\\\": \\\"555 Debtor's Lane\\\" }\", true);\n}\n\nTEST(SchemaValidator, Object_SchemaDependencies) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"    \\\"type\\\": \\\"object\\\",\"\n        \"    \\\"properties\\\" : {\"\n        \"        \\\"name\\\": { \\\"type\\\": \\\"string\\\" },\"\n        \"        \\\"credit_card\\\" : { \\\"type\\\": \\\"number\\\" }\"\n        \"    },\"\n        \"    \\\"required\\\" : [\\\"name\\\"],\"\n        \"    \\\"dependencies\\\" : {\"\n        \"        \\\"credit_card\\\": {\"\n        \"            \\\"properties\\\": {\"\n        \"                \\\"billing_address\\\": { \\\"type\\\": \\\"string\\\" }\"\n        \"            },\"\n        \"            \\\"required\\\" : [\\\"billing_address\\\"]\"\n        \"        }\"\n        \"    }\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{\\\"name\\\": \\\"John Doe\\\", \\\"credit_card\\\" : 5555555555555555,\\\"billing_address\\\" : \\\"555 Debtor's Lane\\\"}\", true);\n    INVALIDATE(s, \"{\\\"name\\\": \\\"John Doe\\\", \\\"credit_card\\\" : 5555555555555555 }\", \"\", \"dependencies\", \"\",\n        \"{ \\\"dependencies\\\": {\"\n        \"    \\\"errorCode\\\": 18,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"errors\\\": {\"\n        \"      \\\"credit_card\\\": {\"\n        \"        \\\"required\\\": {\"\n        \"          \\\"errorCode\\\": 15,\"\n        \"          \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/dependencies/credit_card\\\",\"\n        \"          \\\"missing\\\": [\\\"billing_address\\\"]\"\n        \"    } } }\"\n        \"}}\");\n    VALIDATE(s, \"{\\\"name\\\": \\\"John Doe\\\", \\\"billing_address\\\" : \\\"555 Debtor's Lane\\\"}\", true);\n}\n\n#if RAPIDJSON_SCHEMA_HAS_REGEX\nTEST(SchemaValidator, Object_PatternProperties) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"  \\\"type\\\": \\\"object\\\",\"\n    
    \"  \\\"patternProperties\\\": {\"\n        \"    \\\"^S_\\\": { \\\"type\\\": \\\"string\\\" },\"\n        \"    \\\"^I_\\\": { \\\"type\\\": \\\"integer\\\" }\"\n        \"  }\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{ \\\"S_25\\\": \\\"This is a string\\\" }\", true);\n    VALIDATE(s, \"{ \\\"I_0\\\": 42 }\", true);\n    INVALIDATE(s, \"{ \\\"S_0\\\": 42 }\", \"\", \"patternProperties\", \"/S_0\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/S_0\\\", \\\"schemaRef\\\": \\\"#/patternProperties/%5ES_\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"integer\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"{ \\\"I_42\\\": \\\"This is a string\\\" }\", \"\", \"patternProperties\", \"/I_42\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/I_42\\\", \\\"schemaRef\\\": \\\"#/patternProperties/%5EI_\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"string\\\"\"\n        \"}}\");\n    VALIDATE(s, \"{ \\\"keyword\\\": \\\"value\\\" }\", true);\n}\n\nTEST(SchemaValidator, Object_PatternProperties_ErrorConflict) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"  \\\"type\\\": \\\"object\\\",\"\n        \"  \\\"patternProperties\\\": {\"\n        \"    \\\"^I_\\\": { \\\"multipleOf\\\": 5 },\"\n        \"    \\\"30$\\\": { \\\"multipleOf\\\": 6 }\"\n        \"  }\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{ \\\"I_30\\\": 30 }\", true);\n    INVALIDATE(s, \"{ \\\"I_30\\\": 7 }\", \"\", \"patternProperties\", \"/I_30\",\n        \"{ \\\"multipleOf\\\": [\"\n        \"    {\"\n        \"      \\\"errorCode\\\": 1,\"\n        \"      \\\"instanceRef\\\": \\\"#/I_30\\\", \\\"schemaRef\\\": \\\"#/patternProperties/%5EI_\\\",\"\n        \"      \\\"expected\\\": 5, \\\"actual\\\": 7\"\n        \"    }, {\"\n        \"      
\\\"errorCode\\\": 1,\"\n        \"      \\\"instanceRef\\\": \\\"#/I_30\\\", \\\"schemaRef\\\": \\\"#/patternProperties/30%24\\\",\"\n        \"      \\\"expected\\\": 6, \\\"actual\\\": 7\"\n        \"    }\"\n        \"]}\");\n}\n\nTEST(SchemaValidator, Object_Properties_PatternProperties) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"  \\\"type\\\": \\\"object\\\",\"\n        \"  \\\"properties\\\": {\"\n        \"    \\\"I_42\\\": { \\\"type\\\": \\\"integer\\\", \\\"minimum\\\": 73 }\"\n        \"  },\"\n        \"  \\\"patternProperties\\\": {\"\n        \"    \\\"^I_\\\": { \\\"type\\\": \\\"integer\\\", \\\"multipleOf\\\": 6 }\"\n        \"  }\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{ \\\"I_6\\\": 6 }\", true);\n    VALIDATE(s, \"{ \\\"I_42\\\": 78 }\", true);\n    INVALIDATE(s, \"{ \\\"I_42\\\": 42 }\", \"\", \"patternProperties\", \"/I_42\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#/I_42\\\", \\\"schemaRef\\\": \\\"#/properties/I_42\\\",\"\n        \"    \\\"expected\\\": 73, \\\"actual\\\": 42\"\n        \"}}\");\n    INVALIDATE(s, \"{ \\\"I_42\\\": 7 }\", \"\", \"patternProperties\", \"/I_42\",\n        \"{ \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 4,\"\n        \"    \\\"instanceRef\\\": \\\"#/I_42\\\", \\\"schemaRef\\\": \\\"#/properties/I_42\\\",\"\n        \"    \\\"expected\\\": 73, \\\"actual\\\": 7\"\n        \"  },\"\n        \"  \\\"multipleOf\\\": {\"\n        \"    \\\"errorCode\\\": 1,\"\n        \"    \\\"instanceRef\\\": \\\"#/I_42\\\", \\\"schemaRef\\\": \\\"#/patternProperties/%5EI_\\\",\"\n        \"    \\\"expected\\\": 6, \\\"actual\\\": 7\"\n        \"  }\"\n        \"}\");\n}\n\nTEST(SchemaValidator, Object_PatternProperties_AdditionalPropertiesObject) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"  \\\"type\\\": \\\"object\\\",\"\n        \"  \\\"properties\\\": {\"\n        \"    
\\\"builtin\\\": { \\\"type\\\": \\\"number\\\" }\"\n        \"  },\"\n        \"  \\\"patternProperties\\\": {\"\n        \"    \\\"^S_\\\": { \\\"type\\\": \\\"string\\\" },\"\n        \"    \\\"^I_\\\": { \\\"type\\\": \\\"integer\\\" }\"\n        \"  },\"\n        \"  \\\"additionalProperties\\\": { \\\"type\\\": \\\"string\\\" }\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{ \\\"builtin\\\": 42 }\", true);\n    VALIDATE(s, \"{ \\\"keyword\\\": \\\"value\\\" }\", true);\n    INVALIDATE(s, \"{ \\\"keyword\\\": 42 }\", \"/additionalProperties\", \"type\", \"/keyword\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/keyword\\\", \\\"schemaRef\\\": \\\"#/additionalProperties\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"integer\\\"\"\n        \"}}\");\n}\n\n// Replaces test Issue285 and tests failure as well as success\nTEST(SchemaValidator, Object_PatternProperties_AdditionalPropertiesBoolean) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"  \\\"type\\\": \\\"object\\\",\"\n        \"  \\\"patternProperties\\\": {\"\n        \"    \\\"^S_\\\": { \\\"type\\\": \\\"string\\\" },\"\n        \"    \\\"^I_\\\": { \\\"type\\\": \\\"integer\\\" }\"\n        \"  },\"\n        \"  \\\"additionalProperties\\\": false\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{ \\\"S_25\\\": \\\"This is a string\\\" }\", true);\n    VALIDATE(s, \"{ \\\"I_0\\\": 42 }\", true);\n    INVALIDATE(s, \"{ \\\"keyword\\\": \\\"value\\\" }\", \"\", \"additionalProperties\", \"/keyword\",\n        \"{ \\\"additionalProperties\\\": {\"\n        \"    \\\"errorCode\\\": 16,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"disallowed\\\": \\\"keyword\\\"\"\n        \"}}\");\n}\n#endif\n\nTEST(SchemaValidator, Array) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"array\\\"}\");\n    
SchemaDocument s(sd);\n\n    VALIDATE(s, \"[1, 2, 3, 4, 5]\", true);\n    VALIDATE(s, \"[3, \\\"different\\\", { \\\"types\\\" : \\\"of values\\\" }]\", true);\n    INVALIDATE(s, \"{\\\"Not\\\": \\\"an array\\\"}\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"array\\\"], \\\"actual\\\": \\\"object\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Array_ItemsList) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"    \\\"type\\\": \\\"array\\\",\"\n        \"    \\\"items\\\" : {\"\n        \"        \\\"type\\\": \\\"number\\\"\"\n        \"    }\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"[1, 2, 3, 4, 5]\", true);\n    INVALIDATE(s, \"[1, 2, \\\"3\\\", 4, 5]\", \"/items\", \"type\", \"/2\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/2\\\", \\\"schemaRef\\\": \\\"#/items\\\",\"\n        \"    \\\"expected\\\": [\\\"number\\\"], \\\"actual\\\": \\\"string\\\"\"\n        \"}}\");\n    VALIDATE(s, \"[]\", true);\n}\n\nTEST(SchemaValidator, Array_ItemsTuple) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"  \\\"type\\\": \\\"array\\\",\"\n        \"  \\\"items\\\": [\"\n        \"    {\"\n        \"      \\\"type\\\": \\\"number\\\"\"\n        \"    },\"\n        \"    {\"\n        \"      \\\"type\\\": \\\"string\\\"\"\n        \"    },\"\n        \"    {\"\n        \"      \\\"type\\\": \\\"string\\\",\"\n        \"      \\\"enum\\\": [\\\"Street\\\", \\\"Avenue\\\", \\\"Boulevard\\\"]\"\n        \"    },\"\n        \"    {\"\n        \"      \\\"type\\\": \\\"string\\\",\"\n        \"      \\\"enum\\\": [\\\"NW\\\", \\\"NE\\\", \\\"SW\\\", \\\"SE\\\"]\"\n        \"    }\"\n        \"  ]\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"[1600, \\\"Pennsylvania\\\", 
\\\"Avenue\\\", \\\"NW\\\"]\", true);\n    INVALIDATE(s, \"[24, \\\"Sussex\\\", \\\"Drive\\\"]\", \"/items/2\", \"enum\", \"/2\",\n        \"{ \\\"enum\\\": { \\\"errorCode\\\": 19, \\\"instanceRef\\\": \\\"#/2\\\", \\\"schemaRef\\\": \\\"#/items/2\\\" }}\");\n    INVALIDATE(s, \"[\\\"Palais de l'Elysee\\\"]\", \"/items/0\", \"type\", \"/0\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/0\\\", \\\"schemaRef\\\": \\\"#/items/0\\\",\"\n        \"    \\\"expected\\\": [\\\"number\\\"], \\\"actual\\\": \\\"string\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"[\\\"Twenty-four\\\", \\\"Sussex\\\", \\\"Drive\\\"]\", \"/items/0\", \"type\", \"/0\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/0\\\", \\\"schemaRef\\\": \\\"#/items/0\\\",\"\n        \"    \\\"expected\\\": [\\\"number\\\"], \\\"actual\\\": \\\"string\\\"\"\n        \"}}\"); // fail fast\n    VALIDATE(s, \"[10, \\\"Downing\\\", \\\"Street\\\"]\", true);\n    VALIDATE(s, \"[1600, \\\"Pennsylvania\\\", \\\"Avenue\\\", \\\"NW\\\", \\\"Washington\\\"]\", true);\n}\n\nTEST(SchemaValidator, Array_AdditionalItems) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"  \\\"type\\\": \\\"array\\\",\"\n        \"  \\\"items\\\": [\"\n        \"    {\"\n        \"      \\\"type\\\": \\\"number\\\"\"\n        \"    },\"\n        \"    {\"\n        \"      \\\"type\\\": \\\"string\\\"\"\n        \"    },\"\n        \"    {\"\n        \"      \\\"type\\\": \\\"string\\\",\"\n        \"      \\\"enum\\\": [\\\"Street\\\", \\\"Avenue\\\", \\\"Boulevard\\\"]\"\n        \"    },\"\n        \"    {\"\n        \"      \\\"type\\\": \\\"string\\\",\"\n        \"      \\\"enum\\\": [\\\"NW\\\", \\\"NE\\\", \\\"SW\\\", \\\"SE\\\"]\"\n        \"    }\"\n        \"  ],\"\n        \"  \\\"additionalItems\\\": false\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"[1600, 
\\\"Pennsylvania\\\", \\\"Avenue\\\", \\\"NW\\\"]\", true);\n    VALIDATE(s, \"[1600, \\\"Pennsylvania\\\", \\\"Avenue\\\"]\", true);\n    INVALIDATE(s, \"[1600, \\\"Pennsylvania\\\", \\\"Avenue\\\", \\\"NW\\\", \\\"Washington\\\"]\", \"\", \"additionalItems\", \"/4\",\n        \"{ \\\"additionalItems\\\": {\"\n        \"    \\\"errorCode\\\": 12,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"disallowed\\\": 4\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Array_ItemsRange) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"array\\\",\\\"minItems\\\": 2,\\\"maxItems\\\" : 3}\");\n    SchemaDocument s(sd);\n\n    INVALIDATE(s, \"[]\", \"\", \"minItems\", \"\",\n        \"{ \\\"minItems\\\": {\"\n        \"    \\\"errorCode\\\": 10,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 2, \\\"actual\\\": 0\"\n        \"}}\");\n    INVALIDATE(s, \"[1]\", \"\", \"minItems\", \"\",\n        \"{ \\\"minItems\\\": {\"\n        \"    \\\"errorCode\\\": 10,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 2, \\\"actual\\\": 1\"\n        \"}}\");\n    VALIDATE(s, \"[1, 2]\", true);\n    VALIDATE(s, \"[1, 2, 3]\", true);\n    INVALIDATE(s, \"[1, 2, 3, 4]\", \"\", \"maxItems\", \"\",\n        \"{ \\\"maxItems\\\": {\"\n        \"    \\\"errorCode\\\": 9,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 3, \\\"actual\\\": 4\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Array_UniqueItems) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"array\\\", \\\"uniqueItems\\\": true}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"[1, 2, 3, 4, 5]\", true);\n    INVALIDATE(s, \"[1, 2, 3, 3, 4]\", \"\", \"uniqueItems\", \"/3\",\n        \"{ \\\"uniqueItems\\\": {\"\n        \"    \\\"errorCode\\\": 11,\"\n        \"    
\\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"duplicates\\\": [2, 3]\"\n        \"}}\");\n    INVALIDATE(s, \"[1, 2, 3, 3, 3]\", \"\", \"uniqueItems\", \"/3\",\n        \"{ \\\"uniqueItems\\\": {\"\n        \"    \\\"errorCode\\\": 11,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"duplicates\\\": [2, 3]\"\n        \"}}\"); // fail fast\n    VALIDATE(s, \"[]\", true);\n}\n\nTEST(SchemaValidator, Boolean) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"boolean\\\"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"true\", true);\n    VALIDATE(s, \"false\", true);\n    INVALIDATE(s, \"\\\"true\\\"\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"boolean\\\"], \\\"actual\\\": \\\"string\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"0\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"boolean\\\"], \\\"actual\\\": \\\"integer\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, Null) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"null\\\"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"null\", true);\n    INVALIDATE(s, \"false\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"null\\\"], \\\"actual\\\": \\\"boolean\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"0\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"null\\\"], 
\\\"actual\\\": \\\"integer\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"\\\"\\\"\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"null\\\"], \\\"actual\\\": \\\"string\\\"\"\n        \"}}\");\n}\n\n// Additional tests\n\nTEST(SchemaValidator, ObjectInArray) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"array\\\", \\\"items\\\": { \\\"type\\\":\\\"string\\\" }}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"[\\\"a\\\"]\", true);\n    INVALIDATE(s, \"[1]\", \"/items\", \"type\", \"/0\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/0\\\", \\\"schemaRef\\\": \\\"#/items\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"integer\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"[{}]\", \"/items\", \"type\", \"/0\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/0\\\", \\\"schemaRef\\\": \\\"#/items\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"object\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, MultiTypeInObject) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"    \\\"type\\\":\\\"object\\\",\"\n        \"    \\\"properties\\\": {\"\n        \"        \\\"tel\\\" : {\"\n        \"            \\\"type\\\":[\\\"integer\\\", \\\"string\\\"]\"\n        \"        }\"\n        \"    }\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{ \\\"tel\\\": 999 }\", true);\n    VALIDATE(s, \"{ \\\"tel\\\": \\\"123-456\\\" }\", true);\n    INVALIDATE(s, \"{ \\\"tel\\\": true }\", \"/properties/tel\", \"type\", \"/tel\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/tel\\\", \\\"schemaRef\\\": \\\"#/properties/tel\\\",\"\n    
    \"    \\\"expected\\\": [\\\"string\\\", \\\"integer\\\"], \\\"actual\\\": \\\"boolean\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, MultiTypeWithObject) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"    \\\"type\\\": [\\\"object\\\",\\\"string\\\"],\"\n        \"    \\\"properties\\\": {\"\n        \"        \\\"tel\\\" : {\"\n        \"            \\\"type\\\": \\\"integer\\\"\"\n        \"        }\"\n        \"    }\"\n        \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"\\\"Hello\\\"\", true);\n    VALIDATE(s, \"{ \\\"tel\\\": 999 }\", true);\n    INVALIDATE(s, \"{ \\\"tel\\\": \\\"fail\\\" }\", \"/properties/tel\", \"type\", \"/tel\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/tel\\\", \\\"schemaRef\\\": \\\"#/properties/tel\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"string\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, AllOf_Nested) {\n    Document sd;\n    sd.Parse(\n    \"{\"\n    \"    \\\"allOf\\\": [\"\n    \"        { \\\"type\\\": \\\"string\\\", \\\"minLength\\\": 2 },\"\n    \"        { \\\"type\\\": \\\"string\\\", \\\"maxLength\\\": 5 },\"\n    \"        { \\\"allOf\\\": [ { \\\"enum\\\" : [\\\"ok\\\", \\\"okay\\\", \\\"OK\\\", \\\"o\\\"] }, { \\\"enum\\\" : [\\\"ok\\\", \\\"OK\\\", \\\"o\\\"]} ] }\"\n    \"    ]\"\n    \"}\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"\\\"ok\\\"\", true);\n    VALIDATE(s, \"\\\"OK\\\"\", true);\n    INVALIDATE(s, \"\\\"okay\\\"\", \"\", \"allOf\", \"\",\n        \"{ \\\"allOf\\\": {\"\n        \"    \\\"errors\\\": [\"\n        \"    {},{},\"\n        \"    { \\\"allOf\\\": {\"\n        \"      \\\"errors\\\": [\"\n        \"        {},\"\n        \"        { \\\"enum\\\": {\\\"errorCode\\\": 19, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/2/allOf/1\\\" }}\"\n        \"      ],\"\n        \"      \\\"errorCode\\\": 23, \\\"instanceRef\\\": 
\\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/2\\\"\"\n        \"    }}],\"\n        \"    \\\"errorCode\\\": 23, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"\\\"o\\\"\", \"\", \"allOf\", \"\",\n        \"{ \\\"allOf\\\": {\"\n        \"  \\\"errors\\\": [\"\n        \"    { \\\"minLength\\\": {\\\"actual\\\": \\\"o\\\", \\\"expected\\\": 2, \\\"errorCode\\\": 7, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/0\\\" }},\"\n        \"    {},{}\"\n        \"  ],\"\n        \"  \\\"errorCode\\\": 23, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"\\\"n\\\"\", \"\", \"allOf\", \"\",\n        \"{ \\\"allOf\\\": {\"\n        \"    \\\"errors\\\": [\"\n        \"      { \\\"minLength\\\": {\\\"actual\\\": \\\"n\\\", \\\"expected\\\": 2, \\\"errorCode\\\": 7, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/0\\\" }},\"\n        \"      {},\"\n        \"      { \\\"allOf\\\": {\"\n        \"          \\\"errors\\\": [\"\n        \"            { \\\"enum\\\": {\\\"errorCode\\\": 19 ,\\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/2/allOf/0\\\"}},\"\n        \"            { \\\"enum\\\": {\\\"errorCode\\\": 19, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/2/allOf/1\\\"}}\"\n        \"          ],\"\n        \"          \\\"errorCode\\\": 23, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/2\\\"\"\n        \"      }}\"\n        \"    ],\"\n        \"    \\\"errorCode\\\":23,\\\"instanceRef\\\":\\\"#\\\",\\\"schemaRef\\\":\\\"#\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"\\\"too long\\\"\", \"\", \"allOf\", \"\",\n        \"{ \\\"allOf\\\": {\"\n        \"    \\\"errors\\\": [\"\n        \"      {},\"\n        \"      { \\\"maxLength\\\": {\\\"actual\\\": \\\"too long\\\", \\\"expected\\\": 5, \\\"errorCode\\\": 6, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/1\\\" }},\"\n     
   \"      { \\\"allOf\\\": {\"\n        \"          \\\"errors\\\": [\"\n        \"            { \\\"enum\\\": {\\\"errorCode\\\": 19 ,\\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/2/allOf/0\\\"}},\"\n        \"            { \\\"enum\\\": {\\\"errorCode\\\": 19, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/2/allOf/1\\\"}}\"\n        \"          ],\"\n        \"          \\\"errorCode\\\": 23, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/2\\\"\"\n        \"      }}\"\n        \"    ],\"\n        \"    \\\"errorCode\\\":23,\\\"instanceRef\\\":\\\"#\\\",\\\"schemaRef\\\":\\\"#\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"123\", \"\", \"allOf\", \"\",\n        \"{ \\\"allOf\\\": {\"\n        \"    \\\"errors\\\": [\"\n        \"      {\\\"type\\\": {\\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"integer\\\", \\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/0\\\"}},\"\n        \"      {\\\"type\\\": {\\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"integer\\\", \\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/1\\\"}},\"\n        \"      { \\\"allOf\\\": {\"\n        \"          \\\"errors\\\": [\"\n        \"            { \\\"enum\\\": {\\\"errorCode\\\": 19 ,\\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/2/allOf/0\\\"}},\"\n        \"            { \\\"enum\\\": {\\\"errorCode\\\": 19, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/2/allOf/1\\\"}}\"\n        \"          ],\"\n        \"          \\\"errorCode\\\": 23, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/allOf/2\\\"\"\n        \"      }}\"\n        \"    ],\"\n        \"    \\\"errorCode\\\":23,\\\"instanceRef\\\":\\\"#\\\",\\\"schemaRef\\\":\\\"#\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, EscapedPointer) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"  \\\"type\\\": \\\"object\\\",\"\n        \"  
\\\"properties\\\": {\"\n        \"    \\\"~/\\\": { \\\"type\\\": \\\"number\\\" }\"\n        \"  }\"\n        \"}\");\n    SchemaDocument s(sd);\n    INVALIDATE(s, \"{\\\"~/\\\":true}\", \"/properties/~0~1\", \"type\", \"/~0~1\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/~0~1\\\", \\\"schemaRef\\\": \\\"#/properties/~0~1\\\",\"\n        \"    \\\"expected\\\": [\\\"number\\\"], \\\"actual\\\": \\\"boolean\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, SchemaPointer) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"  \\\"swagger\\\": \\\"2.0\\\",\"\n        \"  \\\"paths\\\": {\"\n        \"    \\\"/some/path\\\": {\"\n        \"      \\\"post\\\": {\"\n        \"        \\\"parameters\\\": [\"\n        \"          {\"\n        \"            \\\"in\\\": \\\"body\\\",\"\n        \"            \\\"name\\\": \\\"body\\\",\"\n        \"            \\\"schema\\\": {\"\n        \"              \\\"properties\\\": {\"\n        \"                \\\"a\\\": {\"\n        \"                  \\\"$ref\\\": \\\"#/definitions/Prop_a\\\"\"\n        \"                },\"\n        \"                \\\"b\\\": {\"\n        \"                  \\\"type\\\": \\\"integer\\\"\"\n        \"                }\"\n        \"              },\"\n        \"              \\\"type\\\": \\\"object\\\"\"\n        \"            }\"\n        \"          }\"\n        \"        ],\"\n        \"        \\\"responses\\\": {\"\n        \"          \\\"200\\\": {\"\n        \"            \\\"schema\\\": {\"\n        \"              \\\"$ref\\\": \\\"#/definitions/Resp_200\\\"\"\n        \"            }\"\n        \"          }\"\n        \"        }\"\n        \"      }\"\n        \"    }\"\n        \"  },\"\n        \"  \\\"definitions\\\": {\"\n        \"    \\\"Prop_a\\\": {\"\n        \"      \\\"properties\\\": {\"\n        \"        \\\"c\\\": {\"\n        \"          \\\"enum\\\": [\"\n        \"            
\\\"C1\\\",\"\n        \"            \\\"C2\\\",\"\n        \"            \\\"C3\\\"\"\n        \"          ],\"\n        \"          \\\"type\\\": \\\"string\\\"\"\n        \"        },\"\n        \"        \\\"d\\\": {\"\n        \"          \\\"$ref\\\": \\\"#/definitions/Prop_d\\\"\"\n        \"        },\"\n        \"        \\\"s\\\": {\"\n        \"          \\\"type\\\": \\\"string\\\"\"\n        \"        }\"\n        \"      },\"\n        \"      \\\"required\\\": [\\\"c\\\"],\"\n        \"      \\\"type\\\": \\\"object\\\"\"\n        \"    },\"\n        \"    \\\"Prop_d\\\": {\"\n        \"      \\\"properties\\\": {\"\n        \"        \\\"a\\\": {\"\n        \"          \\\"$ref\\\": \\\"#/definitions/Prop_a\\\"\"\n        \"        },\"\n        \"        \\\"c\\\": {\"\n        \"          \\\"$ref\\\": \\\"#/definitions/Prop_a/properties/c\\\"\"\n        \"        }\"\n        \"      },\"\n        \"      \\\"type\\\": \\\"object\\\"\"\n        \"    },\"\n        \"    \\\"Resp_200\\\": {\"\n        \"      \\\"properties\\\": {\"\n        \"        \\\"e\\\": {\"\n        \"          \\\"type\\\": \\\"string\\\"\"\n        \"        },\"\n        \"        \\\"f\\\": {\"\n        \"          \\\"type\\\": \\\"boolean\\\"\"\n        \"        }\"\n        \"      },\"\n        \"      \\\"type\\\": \\\"object\\\"\"\n        \"    }\"\n        \"  }\"\n        \"}\");\n    SchemaDocument s1(sd, NULL, 0, NULL, NULL, Pointer(\"#/paths/~1some~1path/post/parameters/0/schema\"));\n    VALIDATE(s1,\n        \"{\"\n        \"  \\\"a\\\": {\"\n        \"    \\\"c\\\": \\\"C1\\\",\"\n        \"    \\\"d\\\": {\"\n        \"      \\\"a\\\": {\"\n        \"        \\\"c\\\": \\\"C2\\\"\"\n        \"      },\"\n        \"      \\\"c\\\": \\\"C3\\\"\"\n        \"    }\"\n        \"  },\"\n        \"  \\\"b\\\": 123\"\n        \"}\",\n         true);\n    INVALIDATE(s1,\n        \"{\"\n        \"  \\\"a\\\": {\"\n        \"    \\\"c\\\": \\\"C1\\\",\"\n        
\"    \\\"d\\\": {\"\n        \"      \\\"a\\\": {\"\n        \"        \\\"c\\\": \\\"C2\\\"\"\n        \"      },\"\n        \"      \\\"c\\\": \\\"C3\\\"\"\n        \"    }\"\n        \"  },\"\n        \"  \\\"b\\\": \\\"should be an int\\\"\"\n        \"}\",\n        \"#/paths/~1some~1path/post/parameters/0/schema/properties/b\", \"type\", \"#/b\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\":\\\"#/b\\\",\"\n        \"    \\\"schemaRef\\\":\\\"#/paths/~1some~1path/post/parameters/0/schema/properties/b\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\":\\\"string\\\"\"\n        \"}}\");\n    INVALIDATE(s1,\n        \"{\"\n        \"  \\\"a\\\": {\"\n        \"    \\\"c\\\": \\\"C1\\\",\"\n        \"    \\\"d\\\": {\"\n        \"      \\\"a\\\": {\"\n        \"        \\\"c\\\": \\\"should be within enum\\\"\"\n        \"      },\"\n        \"      \\\"c\\\": \\\"C3\\\"\"\n        \"    }\"\n        \"  },\"\n        \"  \\\"b\\\": 123\"\n        \"}\",\n        \"#/definitions/Prop_a/properties/c\", \"enum\", \"#/a/d/a/c\",\n        \"{ \\\"enum\\\": {\"\n        \"    \\\"errorCode\\\": 19,\"\n        \"    \\\"instanceRef\\\":\\\"#/a/d/a/c\\\",\"\n        \"    \\\"schemaRef\\\":\\\"#/definitions/Prop_a/properties/c\\\"\"\n        \"}}\");\n    INVALIDATE(s1,\n        \"{\"\n        \"  \\\"a\\\": {\"\n        \"    \\\"c\\\": \\\"C1\\\",\"\n        \"    \\\"d\\\": {\"\n        \"      \\\"a\\\": {\"\n        \"        \\\"s\\\": \\\"required 'c' is missing\\\"\"\n        \"      }\"\n        \"    }\"\n        \"  },\"\n        \"  \\\"b\\\": 123\"\n        \"}\",\n        \"#/definitions/Prop_a\", \"required\", \"#/a/d/a\",\n        \"{ \\\"required\\\": {\"\n        \"    \\\"errorCode\\\": 15,\"\n        \"    \\\"missing\\\":[\\\"c\\\"],\"\n        \"    \\\"instanceRef\\\":\\\"#/a/d/a\\\",\"\n        \"    \\\"schemaRef\\\":\\\"#/definitions/Prop_a\\\"\"\n        
\"}}\");\n    SchemaDocument s2(sd, NULL, 0, NULL, NULL, Pointer(\"#/paths/~1some~1path/post/responses/200/schema\"));\n    VALIDATE(s2,\n        \"{ \\\"e\\\": \\\"some string\\\", \\\"f\\\": false }\",\n        true);\n    INVALIDATE(s2,\n        \"{ \\\"e\\\": true, \\\"f\\\": false }\",\n        \"#/definitions/Resp_200/properties/e\", \"type\", \"#/e\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\":\\\"#/e\\\",\"\n        \"    \\\"schemaRef\\\":\\\"#/definitions/Resp_200/properties/e\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\":\\\"boolean\\\"\"\n        \"}}\");\n    INVALIDATE(s2,\n        \"{ \\\"e\\\": \\\"some string\\\", \\\"f\\\": 123 }\",\n        \"#/definitions/Resp_200/properties/f\", \"type\", \"#/f\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\":\\\"#/f\\\",\"\n        \"    \\\"schemaRef\\\":\\\"#/definitions/Resp_200/properties/f\\\",\"\n        \"    \\\"expected\\\": [\\\"boolean\\\"], \\\"actual\\\":\\\"integer\\\"\"\n        \"}}\");\n}\n\ntemplate <typename Allocator>\nstatic char* ReadFile(const char* filename, Allocator& allocator) {\n    const char *paths[] = {\n        \"\",\n        \"bin/\",\n        \"../bin/\",\n        \"../../bin/\",\n        \"../../../bin/\"\n    };\n    char buffer[1024];\n    FILE *fp = 0;\n    for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {\n        sprintf(buffer, \"%s%s\", paths[i], filename);\n        fp = fopen(buffer, \"rb\");\n        if (fp)\n            break;\n    }\n\n    if (!fp)\n        return 0;\n\n    fseek(fp, 0, SEEK_END);\n    size_t length = static_cast<size_t>(ftell(fp));\n    fseek(fp, 0, SEEK_SET);\n    char* json = reinterpret_cast<char*>(allocator.Malloc(length + 1));\n    size_t readLength = fread(json, 1, length, fp);\n    json[readLength] = '\\0';\n    fclose(fp);\n    return json;\n}\n\nTEST(SchemaValidator, 
ValidateMetaSchema) {\n    CrtAllocator allocator;\n    char* json = ReadFile(\"draft-04/schema\", allocator);\n    Document d;\n    d.Parse(json);\n    ASSERT_FALSE(d.HasParseError());\n    SchemaDocument sd(d);\n    SchemaValidator validator(sd);\n    d.Accept(validator);\n    if (!validator.IsValid()) {\n        StringBuffer sb;\n        validator.GetInvalidSchemaPointer().StringifyUriFragment(sb);\n        printf(\"Invalid schema: %s\\n\", sb.GetString());\n        printf(\"Invalid keyword: %s\\n\", validator.GetInvalidSchemaKeyword());\n        printf(\"Invalid code: %d\\n\", validator.GetInvalidSchemaCode());\n        printf(\"Invalid message: %s\\n\", GetValidateError_En(validator.GetInvalidSchemaCode()));\n        sb.Clear();\n        validator.GetInvalidDocumentPointer().StringifyUriFragment(sb);\n        printf(\"Invalid document: %s\\n\", sb.GetString());\n        sb.Clear();\n        Writer<StringBuffer> w(sb);\n        validator.GetError().Accept(w);\n        printf(\"Validation error: %s\\n\", sb.GetString());\n        ADD_FAILURE();\n    }\n    CrtAllocator::Free(json);\n}\n\nTEST(SchemaValidator, ValidateMetaSchema_UTF16) {\n    typedef GenericDocument<UTF16<> > D;\n    typedef GenericSchemaDocument<D::ValueType> SD;\n    typedef GenericSchemaValidator<SD> SV;\n\n    CrtAllocator allocator;\n    char* json = ReadFile(\"draft-04/schema\", allocator);\n\n    D d;\n    StringStream ss(json);\n    d.ParseStream<0, UTF8<> >(ss);\n    ASSERT_FALSE(d.HasParseError());\n    SD sd(d);\n    SV validator(sd);\n    d.Accept(validator);\n    if (!validator.IsValid()) {\n        GenericStringBuffer<UTF16<> > sb;\n        validator.GetInvalidSchemaPointer().StringifyUriFragment(sb);\n        wprintf(L\"Invalid schema: %ls\\n\", sb.GetString());\n        wprintf(L\"Invalid keyword: %ls\\n\", validator.GetInvalidSchemaKeyword());\n        sb.Clear();\n        validator.GetInvalidDocumentPointer().StringifyUriFragment(sb);\n        wprintf(L\"Invalid document: 
%ls\\n\", sb.GetString());\n        sb.Clear();\n        Writer<GenericStringBuffer<UTF16<> >, UTF16<> > w(sb);\n        validator.GetError().Accept(w);\n        printf(\"Validation error: %ls\\n\", sb.GetString());\n        ADD_FAILURE();\n    }\n    CrtAllocator::Free(json);\n}\n\ntemplate <typename SchemaDocumentType = SchemaDocument>\nclass RemoteSchemaDocumentProvider : public IGenericRemoteSchemaDocumentProvider<SchemaDocumentType> {\npublic:\n    RemoteSchemaDocumentProvider() : \n        documentAllocator_(documentBuffer_, sizeof(documentBuffer_)), \n        schemaAllocator_(schemaBuffer_, sizeof(schemaBuffer_)) \n    {\n        const char* filenames[kCount] = {\n            \"jsonschema/remotes/integer.json\",\n            \"jsonschema/remotes/subSchemas.json\",\n            \"jsonschema/remotes/folder/folderInteger.json\",\n            \"draft-04/schema\",\n            \"unittestschema/address.json\"\n        };\n        const char* uris[kCount] = {\n            \"http://localhost:1234/integer.json\",\n            \"http://localhost:1234/subSchemas.json\",\n            \"http://localhost:1234/folder/folderInteger.json\",\n            \"http://json-schema.org/draft-04/schema\",\n            \"http://localhost:1234/address.json\"\n        };\n\n        for (size_t i = 0; i < kCount; i++) {\n            sd_[i] = 0;\n\n            char jsonBuffer[8192];\n            MemoryPoolAllocator<> jsonAllocator(jsonBuffer, sizeof(jsonBuffer));\n            char* json = ReadFile(filenames[i], jsonAllocator);\n            if (!json) {\n                printf(\"json remote file %s not found\", filenames[i]);\n                ADD_FAILURE();\n            }\n            else {\n                char stackBuffer[4096];\n                MemoryPoolAllocator<> stackAllocator(stackBuffer, sizeof(stackBuffer));\n                DocumentType d(&documentAllocator_, 1024, &stackAllocator);\n                d.Parse(json);\n                sd_[i] = new SchemaDocumentType(d, uris[i], 
static_cast<SizeType>(strlen(uris[i])), 0, &schemaAllocator_);\n                MemoryPoolAllocator<>::Free(json);\n            }\n        }\n    }\n\n    ~RemoteSchemaDocumentProvider() {\n        for (size_t i = 0; i < kCount; i++)\n            delete sd_[i];\n    }\n\n    virtual const SchemaDocumentType* GetRemoteDocument(const char* uri, SizeType length) {\n        //printf(\"GetRemoteDocument : %s\\n\", uri);\n        for (size_t i = 0; i < kCount; i++)\n            if (typename SchemaDocumentType::GValue(uri, length) == sd_[i]->GetURI()) {\n                //printf(\"Matched document\");\n                return sd_[i];\n            }\n        //printf(\"No matched document\");\n        return 0;\n    }\n\nprivate:\n    typedef GenericDocument<typename SchemaDocumentType::EncodingType, MemoryPoolAllocator<>, MemoryPoolAllocator<> > DocumentType;\n\n    RemoteSchemaDocumentProvider(const RemoteSchemaDocumentProvider&);\n    RemoteSchemaDocumentProvider& operator=(const RemoteSchemaDocumentProvider&);\n\n    static const size_t kCount = 5;\n    SchemaDocumentType* sd_[kCount];\n    typename DocumentType::AllocatorType documentAllocator_;\n    typename SchemaDocumentType::AllocatorType schemaAllocator_;\n    char documentBuffer_[16384];\n    char schemaBuffer_[128u * 1024];\n};\n\nTEST(SchemaValidator, TestSuite) {\n    const char* filenames[] = {\n        \"additionalItems.json\",\n        \"additionalProperties.json\",\n        \"allOf.json\",\n        \"anyOf.json\",\n        \"default.json\",\n        \"definitions.json\",\n        \"dependencies.json\",\n        \"enum.json\",\n        \"items.json\",\n        \"maximum.json\",\n        \"maxItems.json\",\n        \"maxLength.json\",\n        \"maxProperties.json\",\n        \"minimum.json\",\n        \"minItems.json\",\n        \"minLength.json\",\n        \"minProperties.json\",\n        \"multipleOf.json\",\n        \"not.json\",\n        \"oneOf.json\",\n        \"pattern.json\",\n        
\"patternProperties.json\",\n        \"properties.json\",\n        \"ref.json\",\n        \"refRemote.json\",\n        \"required.json\",\n        \"type.json\",\n        \"uniqueItems.json\"\n    };\n\n    const char* onlyRunDescription = 0;\n    //const char* onlyRunDescription = \"a string is a string\";\n\n    unsigned testCount = 0;\n    unsigned passCount = 0;\n\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n\n    char jsonBuffer[65536];\n    char documentBuffer[65536];\n    char documentStackBuffer[65536];\n    char schemaBuffer[65536];\n    char validatorBuffer[65536];\n    MemoryPoolAllocator<> jsonAllocator(jsonBuffer, sizeof(jsonBuffer));\n    MemoryPoolAllocator<> documentAllocator(documentBuffer, sizeof(documentBuffer));\n    MemoryPoolAllocator<> documentStackAllocator(documentStackBuffer, sizeof(documentStackBuffer));\n    MemoryPoolAllocator<> schemaAllocator(schemaBuffer, sizeof(schemaBuffer));\n    MemoryPoolAllocator<> validatorAllocator(validatorBuffer, sizeof(validatorBuffer));\n\n    for (size_t i = 0; i < sizeof(filenames) / sizeof(filenames[0]); i++) {\n        char filename[FILENAME_MAX];\n        sprintf(filename, \"jsonschema/tests/draft4/%s\", filenames[i]);\n        char* json = ReadFile(filename, jsonAllocator);\n        if (!json) {\n            printf(\"json test suite file %s not found\", filename);\n            ADD_FAILURE();\n        }\n        else {\n            //printf(\"\\njson test suite file %s parsed ok\\n\", filename);\n            GenericDocument<UTF8<>, MemoryPoolAllocator<>, MemoryPoolAllocator<> > d(&documentAllocator, 1024, &documentStackAllocator);\n            d.Parse(json);\n            if (d.HasParseError()) {\n                printf(\"json test suite file %s has parse error\", filename);\n                ADD_FAILURE();\n            }\n            else {\n                for (Value::ConstValueIterator schemaItr 
= d.Begin(); schemaItr != d.End(); ++schemaItr) {\n                    {\n                        const char* description1 = (*schemaItr)[\"description\"].GetString();\n                        //printf(\"\\ncompiling schema for json test %s \\n\", description1);\n                        SchemaDocumentType schema((*schemaItr)[\"schema\"], filenames[i], static_cast<SizeType>(strlen(filenames[i])), &provider, &schemaAllocator);\n                        GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > validator(schema, &validatorAllocator);\n                        const Value& tests = (*schemaItr)[\"tests\"];\n                        for (Value::ConstValueIterator testItr = tests.Begin(); testItr != tests.End(); ++testItr) {\n                            const char* description2 = (*testItr)[\"description\"].GetString();\n                            //printf(\"running json test %s \\n\", description2);\n                            if (!onlyRunDescription || strcmp(description2, onlyRunDescription) == 0) {\n                                const Value& data = (*testItr)[\"data\"];\n                                bool expected = (*testItr)[\"valid\"].GetBool();\n                                testCount++;\n                                validator.Reset();\n                                data.Accept(validator);\n                                bool actual = validator.IsValid();\n                                if (expected != actual)\n                                    printf(\"Fail: %30s \\\"%s\\\" \\\"%s\\\"\\n\", filename, description1, description2);\n                                else {\n                                    //printf(\"Passed: %30s \\\"%s\\\" \\\"%s\\\"\\n\", filename, description1, description2);\n                                    passCount++;\n                                }\n                            }\n                        }\n                        //printf(\"%zu %zu %zu\\n\", 
documentAllocator.Size(), schemaAllocator.Size(), validatorAllocator.Size());\n                    }\n                    schemaAllocator.Clear();\n                    validatorAllocator.Clear();\n                }\n            }\n        }\n        documentAllocator.Clear();\n        MemoryPoolAllocator<>::Free(json);\n        jsonAllocator.Clear();\n    }\n    printf(\"%u / %u passed (%2u%%)\\n\", passCount, testCount, passCount * 100 / testCount);\n    if (passCount != testCount)\n        ADD_FAILURE();\n}\n\nTEST(SchemaValidatingReader, Simple) {\n    Document sd;\n    sd.Parse(\"{ \\\"type\\\": \\\"string\\\", \\\"enum\\\" : [\\\"red\\\", \\\"amber\\\", \\\"green\\\"] }\");\n    SchemaDocument s(sd);\n\n    Document d;\n    StringStream ss(\"\\\"red\\\"\");\n    SchemaValidatingReader<kParseDefaultFlags, StringStream, UTF8<> > reader(ss, s);\n    d.Populate(reader);\n    EXPECT_TRUE(reader.GetParseResult());\n    EXPECT_TRUE(reader.IsValid());\n    EXPECT_TRUE(d.IsString());\n    EXPECT_STREQ(\"red\", d.GetString());\n}\n\nTEST(SchemaValidatingReader, Invalid) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"string\\\",\\\"minLength\\\":2,\\\"maxLength\\\":3}\");\n    SchemaDocument s(sd);\n\n    Document d;\n    StringStream ss(\"\\\"ABCD\\\"\");\n    SchemaValidatingReader<kParseDefaultFlags, StringStream, UTF8<> > reader(ss, s);\n    d.Populate(reader);\n    EXPECT_FALSE(reader.GetParseResult());\n    EXPECT_FALSE(reader.IsValid());\n    EXPECT_EQ(kParseErrorTermination, reader.GetParseResult().Code());\n    EXPECT_STREQ(\"maxLength\", reader.GetInvalidSchemaKeyword());\n    EXPECT_TRUE(reader.GetInvalidSchemaCode() == kValidateErrorMaxLength);\n    EXPECT_TRUE(reader.GetInvalidSchemaPointer() == SchemaDocument::PointerType(\"\"));\n    EXPECT_TRUE(reader.GetInvalidDocumentPointer() == SchemaDocument::PointerType(\"\"));\n    EXPECT_TRUE(d.IsNull());\n    Document e;\n    e.Parse(\n        \"{ \\\"maxLength\\\": {\"\n        \"     \\\"errorCode\\\": 
6,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 3, \\\"actual\\\": \\\"ABCD\\\"\"\n        \"}}\");\n    if (e != reader.GetError()) {\n        ADD_FAILURE();\n    }\n}\n\nTEST(SchemaValidatingWriter, Simple) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"string\\\",\\\"minLength\\\":2,\\\"maxLength\\\":3}\");\n    SchemaDocument s(sd);\n\n    Document d;\n    StringBuffer sb;\n    Writer<StringBuffer> writer(sb);\n    GenericSchemaValidator<SchemaDocument, Writer<StringBuffer> > validator(s, writer);\n\n    d.Parse(\"\\\"red\\\"\");\n    EXPECT_TRUE(d.Accept(validator));\n    EXPECT_TRUE(validator.IsValid());\n    EXPECT_STREQ(\"\\\"red\\\"\", sb.GetString());\n\n    sb.Clear();\n    validator.Reset();\n    d.Parse(\"\\\"ABCD\\\"\");\n    EXPECT_FALSE(d.Accept(validator));\n    EXPECT_FALSE(validator.IsValid());\n    EXPECT_TRUE(validator.GetInvalidSchemaPointer() == SchemaDocument::PointerType(\"\"));\n    EXPECT_TRUE(validator.GetInvalidDocumentPointer() == SchemaDocument::PointerType(\"\"));\n    EXPECT_TRUE(validator.GetInvalidSchemaCode() == kValidateErrorMaxLength);\n    Document e;\n    e.Parse(\n        \"{ \\\"maxLength\\\": {\"\n\"            \\\"errorCode\\\": 6,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": 3, \\\"actual\\\": \\\"ABCD\\\"\"\n        \"}}\");\n    EXPECT_EQ(e, validator.GetError());\n}\n\nTEST(Schema, Issue848) {\n    rapidjson::Document d;\n    rapidjson::SchemaDocument s(d);\n    rapidjson::GenericSchemaValidator<rapidjson::SchemaDocument, rapidjson::Document> v(s);\n}\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n\nstatic SchemaDocument ReturnSchemaDocument() {\n    Document sd;\n    sd.Parse(\"{ \\\"type\\\": [\\\"number\\\", \\\"string\\\"] }\");\n    SchemaDocument s(sd);\n    return s;\n}\n\nTEST(Schema, Issue552) {\n    SchemaDocument s = ReturnSchemaDocument();\n    VALIDATE(s, \"42\", 
true);\n    VALIDATE(s, \"\\\"Life, the universe, and everything\\\"\", true);\n    INVALIDATE(s, \"[\\\"Life\\\", \\\"the universe\\\", \\\"and everything\\\"]\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\", \\\"number\\\"], \\\"actual\\\": \\\"array\\\"\"\n        \"}}\");\n}\n\n#endif // RAPIDJSON_HAS_CXX11_RVALUE_REFS\n\nTEST(SchemaValidator, Issue608) {\n    Document sd;\n    sd.Parse(\"{\\\"required\\\": [\\\"a\\\", \\\"b\\\"] }\");\n    SchemaDocument s(sd);\n\n    VALIDATE(s, \"{\\\"a\\\" : null, \\\"b\\\": null}\", true);\n    INVALIDATE(s, \"{\\\"a\\\" : null, \\\"a\\\" : null}\", \"\", \"required\", \"\",\n        \"{ \\\"required\\\": {\"\n        \"    \\\"errorCode\\\": 15,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"missing\\\": [\\\"b\\\"]\"\n        \"}}\");\n}\n\n// Fail to resolve $ref in allOf causes crash in SchemaValidator::StartObject()\nTEST(SchemaValidator, Issue728_AllOfRef) {\n    Document sd;\n    sd.Parse(\"{\\\"allOf\\\": [{\\\"$ref\\\": \\\"#/abc\\\"}]}\");\n    SchemaDocument s(sd);\n    SCHEMAERROR(s, \"{\\\"RefUnknown\\\":{\\\"errorCode\\\":5,\\\"instanceRef\\\":\\\"#/allOf/0\\\",\\\"value\\\":\\\"#/abc\\\"}}\");\n\n    VALIDATE_(s, \"{\\\"key1\\\": \\\"abc\\\", \\\"key2\\\": \\\"def\\\"}\", true, false);\n}\n\nTEST(SchemaValidator, Issue1017_allOfHandler) {\n    Document sd;\n    sd.Parse(\"{\\\"allOf\\\": [{\\\"type\\\": \\\"object\\\",\\\"properties\\\": {\\\"cyanArray2\\\": {\\\"type\\\": \\\"array\\\",\\\"items\\\": { \\\"type\\\": \\\"string\\\" }}}},{\\\"type\\\": \\\"object\\\",\\\"properties\\\": {\\\"blackArray\\\": {\\\"type\\\": \\\"array\\\",\\\"items\\\": { \\\"type\\\": \\\"string\\\" }}},\\\"required\\\": [ \\\"blackArray\\\" ]}]}\");\n    SchemaDocument s(sd);\n    StringBuffer sb;\n    
Writer<StringBuffer> writer(sb);\n    GenericSchemaValidator<SchemaDocument, Writer<StringBuffer> > validator(s, writer);\n    EXPECT_TRUE(validator.StartObject());\n    EXPECT_TRUE(validator.Key(\"cyanArray2\", 10, false));\n    EXPECT_TRUE(validator.StartArray());    \n    EXPECT_TRUE(validator.EndArray(0));    \n    EXPECT_TRUE(validator.Key(\"blackArray\", 10, false));\n    EXPECT_TRUE(validator.StartArray());    \n    EXPECT_TRUE(validator.EndArray(0));    \n    EXPECT_TRUE(validator.EndObject(0));\n    EXPECT_TRUE(validator.IsValid());\n    EXPECT_STREQ(\"{\\\"cyanArray2\\\":[],\\\"blackArray\\\":[]}\", sb.GetString());\n}\n\nTEST(SchemaValidator, Ref_remote) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n    Document sd;\n    sd.Parse(\"{\\\"$ref\\\": \\\"http://localhost:1234/subSchemas.json#/integer\\\"}\");\n    SchemaDocumentType s(sd, 0, 0, &provider);\n    typedef GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > SchemaValidatorType;\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    INVALIDATE_(s, \"null\", \"/integer\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\",\"\n        \"    \\\"schemaRef\\\": \\\"http://localhost:1234/subSchemas.json#/integer\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"null\\\"\"\n        \"}}\",\n        kValidateDefaultFlags, SchemaValidatorType, PointerType);\n}\n\n// Merge with id where $ref is full URI\nTEST(SchemaValidator, Ref_remote_change_resolution_scope_uri) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n    Document sd;\n    sd.Parse(\"{\\\"id\\\": \\\"http://ignore/blah#/ref\\\", \\\"type\\\": \\\"object\\\", 
\\\"properties\\\": {\\\"myInt\\\": {\\\"$ref\\\": \\\"http://localhost:1234/subSchemas.json#/integer\\\"}}}\");\n    SchemaDocumentType s(sd, 0, 0, &provider);\n    typedef GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > SchemaValidatorType;\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    INVALIDATE_(s, \"{\\\"myInt\\\": null}\", \"/integer\", \"type\", \"/myInt\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/myInt\\\",\"\n        \"    \\\"schemaRef\\\": \\\"http://localhost:1234/subSchemas.json#/integer\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"null\\\"\"\n        \"}}\",\n        kValidateDefaultFlags, SchemaValidatorType, PointerType);\n}\n\n// Merge with id where $ref is a relative path\nTEST(SchemaValidator, Ref_remote_change_resolution_scope_relative_path) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n    Document sd;\n    sd.Parse(\"{\\\"id\\\": \\\"http://localhost:1234/\\\", \\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt\\\": {\\\"$ref\\\": \\\"subSchemas.json#/integer\\\"}}}\");\n    SchemaDocumentType s(sd, 0, 0, &provider);\n    typedef GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > SchemaValidatorType;\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    INVALIDATE_(s, \"{\\\"myInt\\\": null}\", \"/integer\", \"type\", \"/myInt\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/myInt\\\",\"\n        \"    \\\"schemaRef\\\": \\\"http://localhost:1234/subSchemas.json#/integer\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"null\\\"\"\n        \"}}\",\n        kValidateDefaultFlags, 
SchemaValidatorType, PointerType);\n}\n\n// Merge with id where $ref is an absolute path\nTEST(SchemaValidator, Ref_remote_change_resolution_scope_absolute_path) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n    Document sd;\n    sd.Parse(\"{\\\"id\\\": \\\"http://localhost:1234/xxxx\\\", \\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt\\\": {\\\"$ref\\\": \\\"/subSchemas.json#/integer\\\"}}}\");\n    SchemaDocumentType s(sd, 0, 0, &provider);\n    typedef GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > SchemaValidatorType;\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    INVALIDATE_(s, \"{\\\"myInt\\\": null}\", \"/integer\", \"type\", \"/myInt\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/myInt\\\",\"\n        \"    \\\"schemaRef\\\": \\\"http://localhost:1234/subSchemas.json#/integer\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"null\\\"\"\n        \"}}\",\n        kValidateDefaultFlags, SchemaValidatorType, PointerType);\n}\n\n// Merge with id where $ref is an absolute path, and the document has a base URI\nTEST(SchemaValidator, Ref_remote_change_resolution_scope_absolute_path_document) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt\\\": {\\\"$ref\\\": \\\"/subSchemas.json#/integer\\\"}}}\");\n    SchemaDocumentType s(sd, \"http://localhost:1234/xxxx\", 26, &provider);\n    typedef GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > SchemaValidatorType;\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    
INVALIDATE_(s, \"{\\\"myInt\\\": null}\", \"/integer\", \"type\", \"/myInt\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/myInt\\\",\"\n        \"    \\\"schemaRef\\\": \\\"http://localhost:1234/subSchemas.json#/integer\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"null\\\"\"\n        \"}}\",\n        kValidateDefaultFlags, SchemaValidatorType, PointerType);\n}\n\n// $ref is a non-JSON pointer fragment and there is a matching id\nTEST(SchemaValidator, Ref_internal_id_1) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt1\\\": {\\\"$ref\\\": \\\"#myId\\\"}, \\\"myStr\\\": {\\\"type\\\": \\\"string\\\", \\\"id\\\": \\\"#myStrId\\\"}, \\\"myInt2\\\": {\\\"type\\\": \\\"integer\\\", \\\"id\\\": \\\"#myId\\\"}}}\");\n    SchemaDocumentType s(sd);\n    typedef GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > SchemaValidatorType;\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    INVALIDATE_(s, \"{\\\"myInt1\\\": null}\", \"/properties/myInt2\", \"type\", \"/myInt1\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/myInt1\\\",\"\n        \"    \\\"schemaRef\\\": \\\"#/properties/myInt2\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"null\\\"\"\n        \"}}\",\n        kValidateDefaultFlags, SchemaValidatorType, PointerType);\n}\n\n// $ref is a non-JSON pointer fragment and there are two matching ids so we take the first\nTEST(SchemaValidator, Ref_internal_id_2) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt1\\\": {\\\"$ref\\\": \\\"#myId\\\"}, 
\\\"myInt2\\\": {\\\"type\\\": \\\"integer\\\", \\\"id\\\": \\\"#myId\\\"}, \\\"myStr\\\": {\\\"type\\\": \\\"string\\\", \\\"id\\\": \\\"#myId\\\"}}}\");\n    SchemaDocumentType s(sd);\n    typedef GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > SchemaValidatorType;\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    INVALIDATE_(s, \"{\\\"myInt1\\\": null}\", \"/properties/myInt2\", \"type\", \"/myInt1\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/myInt1\\\",\"\n        \"    \\\"schemaRef\\\": \\\"#/properties/myInt2\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"null\\\"\"\n        \"}}\",\n        kValidateDefaultFlags, SchemaValidatorType, PointerType);\n}\n\n// $ref is a non-JSON pointer fragment and there is a matching id within an array\nTEST(SchemaValidator, Ref_internal_id_in_array) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt1\\\": {\\\"$ref\\\": \\\"#myId\\\"}, \\\"myInt2\\\": {\\\"anyOf\\\": [{\\\"type\\\": \\\"string\\\", \\\"id\\\": \\\"#myStrId\\\"}, {\\\"type\\\": \\\"integer\\\", \\\"id\\\": \\\"#myId\\\"}]}}}\");\n    SchemaDocumentType s(sd);\n    typedef GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > SchemaValidatorType;\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    INVALIDATE_(s, \"{\\\"myInt1\\\": null}\", \"/properties/myInt2/anyOf/1\", \"type\", \"/myInt1\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/myInt1\\\",\"\n        \"    \\\"schemaRef\\\": \\\"#/properties/myInt2/anyOf/1\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"null\\\"\"\n        \"}}\",\n      
  kValidateDefaultFlags, SchemaValidatorType, PointerType);\n}\n\n// $ref is a non-JSON pointer fragment and there is a matching id, and the schema is embedded in the document\nTEST(SchemaValidator, Ref_internal_id_and_schema_pointer) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{ \\\"schema\\\": {\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt1\\\": {\\\"$ref\\\": \\\"#myId\\\"}, \\\"myInt2\\\": {\\\"anyOf\\\": [{\\\"type\\\": \\\"integer\\\", \\\"id\\\": \\\"#myId\\\"}]}}}}\");\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    SchemaDocumentType s(sd, 0, 0, 0, 0, PointerType(\"/schema\"));\n    typedef GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > SchemaValidatorType;\n    INVALIDATE_(s, \"{\\\"myInt1\\\": null}\", \"/schema/properties/myInt2/anyOf/0\", \"type\", \"/myInt1\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#/myInt1\\\",\"\n        \"    \\\"schemaRef\\\": \\\"#/schema/properties/myInt2/anyOf/0\\\",\"\n        \"    \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"null\\\"\"\n        \"}}\",\n        kValidateDefaultFlags, SchemaValidatorType, PointerType);\n}\n\n// Test that $refs are correctly resolved when multiple intermediate ids are present\n// Includes $ref to a part of the document with a different in-scope id, which also contains a $ref.\nTEST(SchemaValidator, Ref_internal_multiple_ids) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    //RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n    CrtAllocator allocator;\n    char* schema = ReadFile(\"unittestschema/idandref.json\", allocator);\n    Document sd;\n    sd.Parse(schema);\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocumentType s(sd, \"http://xyz\", 10/*, &provider*/);\n    typedef 
GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > SchemaValidatorType;\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    INVALIDATE_(s, \"{\\\"PA1\\\": \\\"s\\\", \\\"PA2\\\": \\\"t\\\", \\\"PA3\\\": \\\"r\\\", \\\"PX1\\\": 1, \\\"PX2Y\\\": 2, \\\"PX3Z\\\": 3, \\\"PX4\\\": 4, \\\"PX5\\\": 5, \\\"PX6\\\": 6, \\\"PX7W\\\": 7, \\\"PX8N\\\": { \\\"NX\\\": 8}}\", \"#\", \"errors\", \"#\",\n        \"{ \\\"type\\\": [\"\n        \"    {\\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/PA1\\\", \\\"schemaRef\\\": \\\"http://xyz#/definitions/A\\\", \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"string\\\"},\"\n        \"    {\\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/PA2\\\", \\\"schemaRef\\\": \\\"http://xyz#/definitions/A\\\", \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"string\\\"},\"\n        \"    {\\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/PA3\\\", \\\"schemaRef\\\": \\\"http://xyz#/definitions/A\\\", \\\"expected\\\": [\\\"integer\\\"], \\\"actual\\\": \\\"string\\\"},\"\n        \"    {\\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/PX1\\\", \\\"schemaRef\\\": \\\"http://xyz#/definitions/B/definitions/X\\\", \\\"expected\\\": [\\\"boolean\\\"], \\\"actual\\\": \\\"integer\\\"},\"\n        \"    {\\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/PX2Y\\\", \\\"schemaRef\\\": \\\"http://xyz#/definitions/B/definitions/X\\\", \\\"expected\\\": [\\\"boolean\\\"], \\\"actual\\\": \\\"integer\\\"},\"\n        \"    {\\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/PX3Z\\\", \\\"schemaRef\\\": \\\"http://xyz#/definitions/B/definitions/X\\\", \\\"expected\\\": [\\\"boolean\\\"], \\\"actual\\\": \\\"integer\\\"},\"\n        \"    {\\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/PX4\\\", \\\"schemaRef\\\": \\\"http://xyz#/definitions/B/definitions/X\\\", \\\"expected\\\": [\\\"boolean\\\"], \\\"actual\\\": \\\"integer\\\"},\"\n        \"    {\\\"errorCode\\\": 20, 
\\\"instanceRef\\\": \\\"#/PX5\\\", \\\"schemaRef\\\": \\\"http://xyz#/definitions/B/definitions/X\\\", \\\"expected\\\": [\\\"boolean\\\"], \\\"actual\\\": \\\"integer\\\"},\"\n        \"    {\\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/PX6\\\", \\\"schemaRef\\\": \\\"http://xyz#/definitions/B/definitions/X\\\", \\\"expected\\\": [\\\"boolean\\\"], \\\"actual\\\": \\\"integer\\\"},\"\n        \"    {\\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/PX7W\\\", \\\"schemaRef\\\": \\\"http://xyz#/definitions/B/definitions/X\\\", \\\"expected\\\": [\\\"boolean\\\"], \\\"actual\\\": \\\"integer\\\"},\"\n        \"    {\\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/PX8N/NX\\\", \\\"schemaRef\\\": \\\"http://xyz#/definitions/B/definitions/X\\\", \\\"expected\\\": [\\\"boolean\\\"], \\\"actual\\\": \\\"integer\\\"}\"\n        \"]}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidatorType, PointerType);\n    CrtAllocator::Free(schema);\n}\n\nTEST(SchemaValidator, Ref_remote_issue1210) {\n    class SchemaDocumentProvider : public IRemoteSchemaDocumentProvider {\n        SchemaDocument** collection;\n\n        // Dummy private copy constructor & assignment operator.\n        // Function bodies added so that they compile in MSVC 2019.\n        SchemaDocumentProvider(const SchemaDocumentProvider&) : collection(NULL) {\n        }\n        SchemaDocumentProvider& operator=(const SchemaDocumentProvider&) {\n            return *this;\n        }\n\n        public:\n          SchemaDocumentProvider(SchemaDocument** collection) : collection(collection) { }\n          virtual const SchemaDocument* GetRemoteDocument(const char* uri, SizeType length) {\n            int i = 0;\n            while (collection[i] && SchemaDocument::GValue(uri, length) != collection[i]->GetURI()) ++i;\n            return collection[i];\n          }\n    };\n    SchemaDocument* collection[] = { 0, 0, 0 };\n    SchemaDocumentProvider provider(collection);\n\n    Document x, 
y, z;\n    x.Parse(\"{\\\"properties\\\":{\\\"country\\\":{\\\"$ref\\\":\\\"y.json#/definitions/country_remote\\\"}},\\\"type\\\":\\\"object\\\"}\");\n    y.Parse(\"{\\\"definitions\\\":{\\\"country_remote\\\":{\\\"$ref\\\":\\\"z.json#/definitions/country_list\\\"}}}\");\n    z.Parse(\"{\\\"definitions\\\":{\\\"country_list\\\":{\\\"enum\\\":[\\\"US\\\"]}}}\");\n\n    SchemaDocument sz(z, \"z.json\", 6, &provider);\n    collection[0] = &sz;\n    SchemaDocument sy(y, \"y.json\", 6, &provider);\n    collection[1] = &sy;\n    SchemaDocument sx(x, \"x.json\", 6, &provider);\n\n    VALIDATE(sx, \"{\\\"country\\\":\\\"UK\\\"}\", false);\n    VALIDATE(sx, \"{\\\"country\\\":\\\"US\\\"}\", true);\n}\n\n// Test that when kValidateContinueOnErrorFlag is set, all errors are reported.\nTEST(SchemaValidator, ContinueOnErrors) {\n    CrtAllocator allocator;\n    char* schema = ReadFile(\"unittestschema/address.json\", allocator);\n    Document sd;\n    sd.Parse(schema);\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    VALIDATE(s, \"{\\\"version\\\": 1.0, \\\"address\\\": {\\\"number\\\": 24, \\\"street1\\\": \\\"The Woodlands\\\", \\\"street3\\\": \\\"Ham\\\", \\\"city\\\": \\\"Romsey\\\", \\\"area\\\": \\\"Kent\\\", \\\"country\\\": \\\"UK\\\", \\\"postcode\\\": \\\"SO51 0GP\\\"}, \\\"phones\\\": [\\\"0111-222333\\\", \\\"0777-666888\\\"], \\\"names\\\": [\\\"Fred\\\", \\\"Bloggs\\\"]}\", true);\n    INVALIDATE_(s, \"{\\\"version\\\": 1.01, \\\"address\\\": {\\\"number\\\": 0, \\\"street2\\\": false,  \\\"street3\\\": \\\"Ham\\\", \\\"city\\\": \\\"RomseyTownFC\\\", \\\"area\\\": \\\"Narnia\\\", \\\"country\\\": \\\"USA\\\", \\\"postcode\\\": \\\"999ABC\\\"}, \\\"phones\\\": [], \\\"planet\\\": \\\"Earth\\\", \\\"extra\\\": {\\\"S_xxx\\\": 123}}\", \"#\", \"errors\", \"#\",\n        \"{ \\\"multipleOf\\\": {\"\n        \"    \\\"errorCode\\\": 1, \\\"instanceRef\\\": \\\"#/version\\\", \\\"schemaRef\\\": \\\"#/definitions/decimal_type\\\", 
\\\"expected\\\": 1.0, \\\"actual\\\": 1.01\"\n        \"  },\"\n        \"  \\\"minimum\\\": {\"\n        \"    \\\"errorCode\\\": 5, \\\"instanceRef\\\": \\\"#/address/number\\\", \\\"schemaRef\\\": \\\"#/definitions/positiveInt_type\\\", \\\"expected\\\": 0, \\\"actual\\\": 0, \\\"exclusiveMinimum\\\": true\"\n        \"  },\"\n        \"  \\\"type\\\": [\"\n        \"    {\\\"expected\\\": [\\\"null\\\", \\\"string\\\"], \\\"actual\\\": \\\"boolean\\\", \\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/address/street2\\\", \\\"schemaRef\\\": \\\"#/definitions/address_type/properties/street2\\\"},\"\n        \"    {\\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"integer\\\", \\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/extra/S_xxx\\\", \\\"schemaRef\\\": \\\"#/properties/extra/patternProperties/%5ES_\\\"}\"\n        \"  ],\"\n        \"  \\\"maxLength\\\": {\"\n        \"    \\\"actual\\\": \\\"RomseyTownFC\\\", \\\"expected\\\": 10, \\\"errorCode\\\": 6, \\\"instanceRef\\\": \\\"#/address/city\\\", \\\"schemaRef\\\": \\\"#/definitions/address_type/properties/city\\\"\"\n        \"  },\"\n        \"  \\\"anyOf\\\": {\"\n        \"    \\\"errors\\\":[\"\n        \"      {\\\"pattern\\\": {\\\"actual\\\": \\\"999ABC\\\", \\\"errorCode\\\": 8, \\\"instanceRef\\\": \\\"#/address/postcode\\\", \\\"schemaRef\\\": \\\"#/definitions/address_type/properties/postcode/anyOf/0\\\"}},\"\n        \"      {\\\"pattern\\\": {\\\"actual\\\": \\\"999ABC\\\", \\\"errorCode\\\": 8, \\\"instanceRef\\\": \\\"#/address/postcode\\\", \\\"schemaRef\\\": \\\"#/definitions/address_type/properties/postcode/anyOf/1\\\"}}\"\n        \"    ],\"\n        \"    \\\"errorCode\\\": 24, \\\"instanceRef\\\": \\\"#/address/postcode\\\", \\\"schemaRef\\\": \\\"#/definitions/address_type/properties/postcode\\\"\"\n        \"  },\"\n        \"  \\\"allOf\\\": {\"\n        \"    \\\"errors\\\":[\"\n        \"      
{\\\"enum\\\":{\\\"errorCode\\\":19,\\\"instanceRef\\\":\\\"#/address/country\\\",\\\"schemaRef\\\":\\\"#/definitions/country_type\\\"}}\"\n        \"    ],\"\n        \"    \\\"errorCode\\\":23,\\\"instanceRef\\\":\\\"#/address/country\\\",\\\"schemaRef\\\":\\\"#/definitions/address_type/properties/country\\\"\"\n        \"  },\"\n        \"  \\\"minItems\\\": {\"\n        \"    \\\"actual\\\": 0, \\\"expected\\\": 1, \\\"errorCode\\\": 10, \\\"instanceRef\\\": \\\"#/phones\\\", \\\"schemaRef\\\": \\\"#/properties/phones\\\"\"\n        \"  },\"\n        \"  \\\"additionalProperties\\\": {\"\n        \"    \\\"disallowed\\\": \\\"planet\\\", \\\"errorCode\\\": 16, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\"\"\n        \"  },\"\n        \"  \\\"required\\\": {\"\n        \"    \\\"missing\\\": [\\\"street1\\\"], \\\"errorCode\\\": 15, \\\"instanceRef\\\": \\\"#/address\\\", \\\"schemaRef\\\": \\\"#/definitions/address_type\\\"\"\n        \"  },\"\n        \"  \\\"oneOf\\\": {\"\n        \"    \\\"matches\\\": [0, 1], \\\"errorCode\\\": 22, \\\"instanceRef\\\": \\\"#/address/area\\\", \\\"schemaRef\\\": \\\"#/definitions/address_type/properties/area\\\"\"\n        \"  }\"\n        \"}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidator, Pointer);\n    INVALIDATE_(s, \"{\\\"address\\\": {\\\"number\\\": 200, \\\"street1\\\": {}, \\\"street3\\\": null, \\\"city\\\": \\\"Rom\\\", \\\"area\\\": \\\"Dorset\\\", \\\"postcode\\\": \\\"SO51 0GP\\\"}, \\\"phones\\\": [\\\"0111-222333\\\", \\\"0777-666888\\\", \\\"0777-666888\\\"], \\\"names\\\": [\\\"Fred\\\", \\\"S\\\", \\\"M\\\", \\\"Bloggs\\\"]}\", \"#\", \"errors\", \"#\",\n        \"{ \\\"maximum\\\": {\"\n        \"    \\\"errorCode\\\": 3, \\\"instanceRef\\\": \\\"#/address/number\\\", \\\"schemaRef\\\": \\\"#/definitions/positiveInt_type\\\", \\\"expected\\\": 100, \\\"actual\\\": 200, \\\"exclusiveMaximum\\\": true\"\n        \"  },\"\n        \"  \\\"type\\\": {\"\n     
   \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"object\\\", \\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/address/street1\\\", \\\"schemaRef\\\": \\\"#/definitions/address_type/properties/street1\\\"\"\n        \"  },\"\n        \"  \\\"not\\\": {\"\n        \"    \\\"errorCode\\\": 25, \\\"instanceRef\\\": \\\"#/address/street3\\\", \\\"schemaRef\\\": \\\"#/definitions/address_type/properties/street3\\\"\"\n        \"  },\"\n        \"  \\\"minLength\\\": {\"\n        \"    \\\"actual\\\": \\\"Rom\\\", \\\"expected\\\": 4, \\\"errorCode\\\": 7, \\\"instanceRef\\\": \\\"#/address/city\\\", \\\"schemaRef\\\": \\\"#/definitions/address_type/properties/city\\\"\"\n        \"  },\"\n        \"  \\\"maxItems\\\": {\"\n        \"    \\\"actual\\\": 3, \\\"expected\\\": 2, \\\"errorCode\\\": 9, \\\"instanceRef\\\": \\\"#/phones\\\", \\\"schemaRef\\\": \\\"#/properties/phones\\\"\"\n        \"  },\"\n        \"  \\\"uniqueItems\\\": {\"\n        \"    \\\"duplicates\\\": [1, 2], \\\"errorCode\\\": 11, \\\"instanceRef\\\": \\\"#/phones\\\", \\\"schemaRef\\\": \\\"#/properties/phones\\\"\"\n        \"  },\"\n        \"  \\\"minProperties\\\": {\\\"actual\\\": 6, \\\"expected\\\": 7, \\\"errorCode\\\": 14, \\\"instanceRef\\\": \\\"#/address\\\", \\\"schemaRef\\\": \\\"#/definitions/address_type\\\"\"\n        \"  },\"\n        \"  \\\"additionalItems\\\": [\"\n        \"    {\\\"disallowed\\\": 2, \\\"errorCode\\\": 12, \\\"instanceRef\\\": \\\"#/names\\\", \\\"schemaRef\\\": \\\"#/properties/names\\\"},\"\n        \"    {\\\"disallowed\\\": 3, \\\"errorCode\\\": 12, \\\"instanceRef\\\": \\\"#/names\\\", \\\"schemaRef\\\": \\\"#/properties/names\\\"}\"\n        \"  ],\"\n        \"  \\\"dependencies\\\": {\"\n        \"    \\\"errors\\\": {\"\n        \"      \\\"address\\\": {\\\"required\\\": {\\\"missing\\\": [\\\"version\\\"], \\\"errorCode\\\": 15, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/dependencies/address\\\"}},\"\n        \"      
\\\"names\\\": {\\\"required\\\": {\\\"missing\\\": [\\\"version\\\"], \\\"errorCode\\\": 15, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/dependencies/names\\\"}}\"\n        \"    },\"\n        \"    \\\"errorCode\\\": 18, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\"\"\n        \"  },\"\n        \"  \\\"oneOf\\\": {\"\n        \"    \\\"errors\\\": [\"\n        \"      {\\\"enum\\\": {\\\"errorCode\\\": 19, \\\"instanceRef\\\": \\\"#/address/area\\\", \\\"schemaRef\\\": \\\"#/definitions/county_type\\\"}},\"\n        \"      {\\\"enum\\\": {\\\"errorCode\\\": 19, \\\"instanceRef\\\": \\\"#/address/area\\\", \\\"schemaRef\\\": \\\"#/definitions/province_type\\\"}}\"\n        \"    ],\"\n        \"    \\\"errorCode\\\": 21, \\\"instanceRef\\\": \\\"#/address/area\\\", \\\"schemaRef\\\": \\\"#/definitions/address_type/properties/area\\\"\"\n        \"  }\"\n        \"}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidator, Pointer);\n\n        CrtAllocator::Free(schema);\n}\n\n// Test that when kValidateContinueOnErrorFlag is set, it is not propagated to oneOf sub-validator so we only get the first error.\nTEST(SchemaValidator, ContinueOnErrors_OneOf) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n    CrtAllocator allocator;\n    char* schema = ReadFile(\"unittestschema/oneOf_address.json\", allocator);\n    Document sd;\n    sd.Parse(schema);\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocumentType s(sd, 0, 0, &provider);\n    typedef GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > SchemaValidatorType;\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    INVALIDATE_(s, \"{\\\"version\\\": 1.01, \\\"address\\\": {\\\"number\\\": 0, \\\"street2\\\": false,  \\\"street3\\\": \\\"Ham\\\", \\\"city\\\": \\\"RomseyTownFC\\\", 
\\\"area\\\": \\\"BC\\\", \\\"country\\\": \\\"USA\\\", \\\"postcode\\\": \\\"999ABC\\\"}, \\\"phones\\\": [], \\\"planet\\\": \\\"Earth\\\", \\\"extra\\\": {\\\"S_xxx\\\": 123}}\", \"#\", \"errors\", \"#\",\n        \"{ \\\"oneOf\\\": {\"\n        \"    \\\"errors\\\": [{\"\n        \"      \\\"multipleOf\\\": {\"\n        \"        \\\"errorCode\\\": 1, \\\"instanceRef\\\": \\\"#/version\\\", \\\"schemaRef\\\": \\\"http://localhost:1234/address.json#/definitions/decimal_type\\\", \\\"expected\\\": 1.0, \\\"actual\\\": 1.01\"\n        \"      }\"\n        \"    }],\"\n        \"    \\\"errorCode\\\": 21, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\"\"\n        \"  }\"\n        \"}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidatorType, PointerType);\n    CrtAllocator::Free(schema);\n}\n\n// Test that when kValidateContinueOnErrorFlag is set, it is not propagated to allOf sub-validator so we only get the first error.\nTEST(SchemaValidator, ContinueOnErrors_AllOf) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n    CrtAllocator allocator;\n    char* schema = ReadFile(\"unittestschema/allOf_address.json\", allocator);\n    Document sd;\n    sd.Parse(schema);\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocumentType s(sd, 0, 0, &provider);\n    typedef GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > SchemaValidatorType;\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    INVALIDATE_(s, \"{\\\"version\\\": 1.01, \\\"address\\\": {\\\"number\\\": 0, \\\"street2\\\": false,  \\\"street3\\\": \\\"Ham\\\", \\\"city\\\": \\\"RomseyTownFC\\\", \\\"area\\\": \\\"BC\\\", \\\"country\\\": \\\"USA\\\", \\\"postcode\\\": \\\"999ABC\\\"}, \\\"phones\\\": [], \\\"planet\\\": \\\"Earth\\\", \\\"extra\\\": {\\\"S_xxx\\\": 123}}\", \"#\", \"errors\", \"#\",\n   
     \"{ \\\"allOf\\\": {\"\n        \"    \\\"errors\\\": [{\"\n        \"      \\\"multipleOf\\\": {\"\n        \"        \\\"errorCode\\\": 1, \\\"instanceRef\\\": \\\"#/version\\\", \\\"schemaRef\\\": \\\"http://localhost:1234/address.json#/definitions/decimal_type\\\", \\\"expected\\\": 1.0, \\\"actual\\\": 1.01\"\n        \"      }\"\n        \"    }],\"\n        \"    \\\"errorCode\\\": 23, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\"\"\n        \"  }\"\n        \"}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidatorType, PointerType);\n    CrtAllocator::Free(schema);\n}\n\n// Test that when kValidateContinueOnErrorFlag is set, it is not propagated to anyOf sub-validator so we only get the first error.\nTEST(SchemaValidator, ContinueOnErrors_AnyOf) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n    CrtAllocator allocator;\n    char* schema = ReadFile(\"unittestschema/anyOf_address.json\", allocator);\n    Document sd;\n    sd.Parse(schema);\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocumentType s(sd, 0, 0, &provider);\n    typedef GenericSchemaValidator<SchemaDocumentType, BaseReaderHandler<UTF8<> >, MemoryPoolAllocator<> > SchemaValidatorType;\n    typedef GenericPointer<Value, MemoryPoolAllocator<> > PointerType;\n    INVALIDATE_(s, \"{\\\"version\\\": 1.01, \\\"address\\\": {\\\"number\\\": 0, \\\"street2\\\": false,  \\\"street3\\\": \\\"Ham\\\", \\\"city\\\": \\\"RomseyTownFC\\\", \\\"area\\\": \\\"BC\\\", \\\"country\\\": \\\"USA\\\", \\\"postcode\\\": \\\"999ABC\\\"}, \\\"phones\\\": [], \\\"planet\\\": \\\"Earth\\\", \\\"extra\\\": {\\\"S_xxx\\\": 123}}\", \"#\", \"errors\", \"#\",\n        \"{ \\\"anyOf\\\": {\"\n        \"    \\\"errors\\\": [{\"\n        \"      \\\"multipleOf\\\": {\"\n        \"        \\\"errorCode\\\": 1, \\\"instanceRef\\\": \\\"#/version\\\", \\\"schemaRef\\\": 
\\\"http://localhost:1234/address.json#/definitions/decimal_type\\\", \\\"expected\\\": 1.0, \\\"actual\\\": 1.01\"\n        \"      }\"\n        \"    }],\"\n        \"    \\\"errorCode\\\": 24, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\"\"\n        \"  }\"\n        \"}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidatorType, PointerType);\n\n    CrtAllocator::Free(schema);\n}\n\n// Test that when kValidateContinueOnErrorFlag is set, arrays with uniqueItems:true are correctly processed when an item is invalid.\n// This tests that we don't blow up if a hasher does not get created.\nTEST(SchemaValidator, ContinueOnErrors_UniqueItems) {\n    CrtAllocator allocator;\n    char* schema = ReadFile(\"unittestschema/address.json\", allocator);\n    Document sd;\n    sd.Parse(schema);\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    VALIDATE(s, \"{\\\"phones\\\":[\\\"12-34\\\",\\\"56-78\\\"]}\", true);\n    INVALIDATE_(s, \"{\\\"phones\\\":[\\\"12-34\\\",\\\"12-34\\\"]}\", \"#\", \"errors\", \"#\",\n        \"{\\\"uniqueItems\\\": {\\\"duplicates\\\": [0,1], \\\"errorCode\\\": 11, \\\"instanceRef\\\": \\\"#/phones\\\", \\\"schemaRef\\\": \\\"#/properties/phones\\\"}}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidator, Pointer);\n    INVALIDATE_(s, \"{\\\"phones\\\":[\\\"ab-34\\\",\\\"cd-78\\\"]}\", \"#\", \"errors\", \"#\",\n        \"{\\\"pattern\\\": [\"\n        \"  {\\\"actual\\\": \\\"ab-34\\\", \\\"errorCode\\\": 8, \\\"instanceRef\\\": \\\"#/phones/0\\\", \\\"schemaRef\\\": \\\"#/definitions/phone_type\\\"},\"\n        \"  {\\\"actual\\\": \\\"cd-78\\\", \\\"errorCode\\\": 8, \\\"instanceRef\\\": \\\"#/phones/1\\\", \\\"schemaRef\\\": \\\"#/definitions/phone_type\\\"}\"\n        \"]}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidator, Pointer);\n    CrtAllocator::Free(schema);\n}\n\n// Test that when kValidateContinueOnErrorFlag is set, 
an enum field is correctly processed when it has an invalid value.\n// This tests that we don't blow up if a hasher does not get created.\nTEST(SchemaValidator, ContinueOnErrors_Enum) {\n    CrtAllocator allocator;\n    char* schema = ReadFile(\"unittestschema/address.json\", allocator);\n    Document sd;\n    sd.Parse(schema);\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    VALIDATE(s, \"{\\\"gender\\\":\\\"M\\\"}\", true);\n    INVALIDATE_(s, \"{\\\"gender\\\":\\\"X\\\"}\", \"#\", \"errors\", \"#\",\n        \"{\\\"enum\\\": {\\\"errorCode\\\": 19, \\\"instanceRef\\\": \\\"#/gender\\\", \\\"schemaRef\\\": \\\"#/properties/gender\\\"}}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidator, Pointer);\n    INVALIDATE_(s, \"{\\\"gender\\\":1}\", \"#\", \"errors\", \"#\",\n        \"{\\\"type\\\": {\\\"expected\\\":[\\\"string\\\"], \\\"actual\\\": \\\"integer\\\", \\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/gender\\\", \\\"schemaRef\\\": \\\"#/properties/gender\\\"}}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidator, Pointer);\n    CrtAllocator::Free(schema);\n}\n\n// Test that when kValidateContinueOnErrorFlag is set, an array appearing for an object property is handled\n// This tests that we don't blow up when there is a type mismatch.\nTEST(SchemaValidator, ContinueOnErrors_RogueArray) {\n    CrtAllocator allocator;\n    char* schema = ReadFile(\"unittestschema/address.json\", allocator);\n    Document sd;\n    sd.Parse(schema);\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    INVALIDATE_(s, \"{\\\"address\\\":[{\\\"number\\\": 0}]}\", \"#\", \"errors\", \"#\",\n        \"{\\\"type\\\": {\\\"expected\\\":[\\\"object\\\"], \\\"actual\\\": \\\"array\\\", \\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/address\\\", \\\"schemaRef\\\": \\\"#/definitions/address_type\\\"},\"\n        \"  \\\"dependencies\\\": {\"\n        \"    \\\"errors\\\": {\"\n        \" 
     \\\"address\\\": {\\\"required\\\": {\\\"missing\\\": [\\\"version\\\"], \\\"errorCode\\\": 15, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/dependencies/address\\\"}}\"\n        \"    },\\\"errorCode\\\": 18, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\"}}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidator, Pointer);\n    CrtAllocator::Free(schema);\n}\n\n// Test that when kValidateContinueOnErrorFlag is set, an object appearing for an array property is handled\n// This tests that we don't blow up when there is a type mismatch.\nTEST(SchemaValidator, ContinueOnErrors_RogueObject) {\n    CrtAllocator allocator;\n    char* schema = ReadFile(\"unittestschema/address.json\", allocator);\n    Document sd;\n    sd.Parse(schema);\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    INVALIDATE_(s, \"{\\\"phones\\\":{\\\"number\\\": 0}}\", \"#\", \"errors\", \"#\",\n        \"{\\\"type\\\": {\\\"expected\\\":[\\\"array\\\"], \\\"actual\\\": \\\"object\\\", \\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/phones\\\", \\\"schemaRef\\\": \\\"#/properties/phones\\\"}}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidator, Pointer);\n    CrtAllocator::Free(schema);\n}\n\n// Test that when kValidateContinueOnErrorFlag is set, a string appearing for an array or object property is handled\n// This tests that we don't blow up when there is a type mismatch.\nTEST(SchemaValidator, ContinueOnErrors_RogueString) {\n    CrtAllocator allocator;\n    char* schema = ReadFile(\"unittestschema/address.json\", allocator);\n    Document sd;\n    sd.Parse(schema);\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    INVALIDATE_(s, \"{\\\"address\\\":\\\"number\\\"}\", \"#\", \"errors\", \"#\",\n        \"{\\\"type\\\": {\\\"expected\\\":[\\\"object\\\"], \\\"actual\\\": \\\"string\\\", \\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/address\\\", \\\"schemaRef\\\": 
\\\"#/definitions/address_type\\\"},\"\n        \"  \\\"dependencies\\\": {\"\n        \"    \\\"errors\\\": {\"\n        \"      \\\"address\\\": {\\\"required\\\": {\\\"missing\\\": [\\\"version\\\"], \\\"errorCode\\\": 15, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/dependencies/address\\\"}}\"\n        \"    },\\\"errorCode\\\": 18, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\"}}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidator, Pointer);\n    INVALIDATE_(s, \"{\\\"phones\\\":\\\"number\\\"}\", \"#\", \"errors\", \"#\",\n        \"{\\\"type\\\": {\\\"expected\\\":[\\\"array\\\"], \\\"actual\\\": \\\"string\\\", \\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#/phones\\\", \\\"schemaRef\\\": \\\"#/properties/phones\\\"}}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidator, Pointer);\n    CrtAllocator::Free(schema);\n}\n\n// Test that when kValidateContinueOnErrorFlag is set, an incorrect simple type with a sub-schema is handled correctly.\n// This tests that we don't blow up when there is a type mismatch but there is a sub-schema present\nTEST(SchemaValidator, ContinueOnErrors_BadSimpleType) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\":\\\"string\\\", \\\"anyOf\\\":[{\\\"maxLength\\\":2}]}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    VALIDATE(s, \"\\\"AB\\\"\", true);\n    INVALIDATE_(s, \"\\\"ABC\\\"\", \"#\", \"errors\", \"#\",\n        \"{ \\\"anyOf\\\": {\"\n        \"    \\\"errors\\\": [{\"\n        \"      \\\"maxLength\\\": {\"\n        \"        \\\"errorCode\\\": 6, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#/anyOf/0\\\", \\\"expected\\\": 2, \\\"actual\\\": \\\"ABC\\\"\"\n        \"      }\"\n        \"    }],\"\n        \"    \\\"errorCode\\\": 24, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\"\"\n        \"  }\"\n        \"}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, 
SchemaValidator, Pointer);\n    // Invalid type\n    INVALIDATE_(s, \"333\", \"#\", \"errors\", \"#\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20, \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\", \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"integer\\\"\"\n        \"  }\"\n        \"}\",\n        kValidateDefaultFlags | kValidateContinueOnErrorFlag, SchemaValidator, Pointer);\n}\n\n\nTEST(SchemaValidator, UnknownValidationError) {\n    ASSERT_TRUE(SchemaValidator::SchemaType::GetValidateErrorKeyword(kValidateErrors).GetString() == std::string(\"null\"));\n}\n\n// The first occurrence of a duplicate keyword is taken\nTEST(SchemaValidator, DuplicateKeyword) {\n    Document sd;\n    sd.Parse(\"{ \\\"title\\\": \\\"test\\\",\\\"type\\\": \\\"number\\\", \\\"type\\\": \\\"string\\\" }\");\n    EXPECT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    VALIDATE(s, \"42\", true);\n    INVALIDATE(s, \"\\\"Life, the universe, and everything\\\"\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"number\\\"], \\\"actual\\\": \\\"string\\\"\"\n        \"}}\");\n}\n\n\n// SchemaDocument tests\n\n// Specification (schema draft, open api version)\nTEST(SchemaValidator, Schema_SupportedNotObject) {\n    Document sd;\n    sd.Parse(\"true\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_TRUE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft04);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    EXPECT_TRUE(s.GetError().ObjectEmpty());\n}\n\nTEST(SchemaValidator, Schema_SupportedNoSpec) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_TRUE(s.IsSupportedSpecification());\n    
ASSERT_TRUE(s.GetSpecification().draft == kDraft04);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    EXPECT_TRUE(s.GetError().ObjectEmpty());\n}\n\nTEST(SchemaValidator, Schema_SupportedNoSpecStatic) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    Specification spec = SchemaDocumentType::GetSpecification(sd);\n    ASSERT_FALSE(spec.IsSupported());\n    ASSERT_TRUE(spec.draft == kDraftNone);\n    ASSERT_TRUE(spec.oapi == kVersionNone);\n}\n\nTEST(SchemaValidator, Schema_SupportedDraft5Static) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{\\\"$schema\\\":\\\"http://json-schema.org/draft-05/schema#\\\", \\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    Specification spec = SchemaDocumentType::GetSpecification(sd);\n    ASSERT_TRUE(spec.IsSupported());\n    ASSERT_TRUE(spec.draft == kDraft05);\n    ASSERT_TRUE(spec.oapi == kVersionNone);\n}\n\nTEST(SchemaValidator, Schema_SupportedDraft4) {\n    Document sd;\n    sd.Parse(\"{\\\"$schema\\\":\\\"http://json-schema.org/draft-04/schema#\\\", \\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_TRUE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft04);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    EXPECT_TRUE(s.GetError().ObjectEmpty());\n}\n\nTEST(SchemaValidator, Schema_SupportedDraft4NoFrag) {\n    Document sd;\n    sd.Parse(\"{\\\"$schema\\\":\\\"http://json-schema.org/draft-04/schema\\\", \\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_TRUE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft04);\n    
ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    EXPECT_TRUE(s.GetError().ObjectEmpty());\n}\n\nTEST(SchemaValidator, Schema_SupportedDraft5) {\n    Document sd;\n    sd.Parse(\"{\\\"$schema\\\":\\\"http://json-schema.org/draft-05/schema#\\\", \\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_TRUE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft05);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    EXPECT_TRUE(s.GetError().ObjectEmpty());\n}\n\nTEST(SchemaValidator, Schema_SupportedDraft5NoFrag) {\n    Document sd;\n    sd.Parse(\"{\\\"$schema\\\":\\\"http://json-schema.org/draft-05/schema\\\", \\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_TRUE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft05);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    EXPECT_TRUE(s.GetError().ObjectEmpty());\n}\n\nTEST(SchemaValidator, Schema_IgnoreDraftEmbedded) {\n    Document sd;\n    sd.Parse(\"{\\\"root\\\": {\\\"$schema\\\":\\\"http://json-schema.org/draft-05/schema#\\\", \\\"type\\\": \\\"integer\\\"}}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd, 0, 0, 0, 0, SchemaDocument::PointerType(\"/root\"));\n    ASSERT_TRUE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft04);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    EXPECT_TRUE(s.GetError().ObjectEmpty());\n}\n\nTEST(SchemaValidator, Schema_SupportedDraftOverride) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd, 0, 0, 0, 0, 0, Specification(kDraft04));\n    ASSERT_TRUE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft04);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    
EXPECT_TRUE(s.GetError().ObjectEmpty());\n}\n\nTEST(SchemaValidator, Schema_UnknownDraftOverride) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd, 0, 0, 0, 0, 0, Specification(kDraftUnknown));\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraftUnknown);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    SCHEMAERROR(s, \"{\\\"SpecUnknown\\\":{\\\"errorCode\\\":10,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_UnsupportedDraftOverride) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd, 0, 0, 0, 0, 0, Specification(kDraft03));\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft03);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    SCHEMAERROR(s, \"{\\\"SpecUnsupported\\\":{\\\"errorCode\\\":11,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_UnknownDraft) {\n    Document sd;\n    sd.Parse(\"{\\\"$schema\\\":\\\"http://json-schema.org/draft-xxx/schema#\\\", \\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraftUnknown);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    SCHEMAERROR(s, \"{\\\"SpecUnknown\\\":{\\\"errorCode\\\":10,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_UnknownDraftNotString) {\n    Document sd;\n    sd.Parse(\"{\\\"$schema\\\": 4, \\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraftUnknown);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    
SCHEMAERROR(s, \"{\\\"SpecUnknown\\\":{\\\"errorCode\\\":10,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_UnsupportedDraft3) {\n    Document sd;\n    sd.Parse(\"{\\\"$schema\\\":\\\"http://json-schema.org/draft-03/schema#\\\", \\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft03);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    SCHEMAERROR(s, \"{\\\"SpecUnsupported\\\":{\\\"errorCode\\\":11,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_UnsupportedDraft6) {\n    Document sd;\n    sd.Parse(\"{\\\"$schema\\\":\\\"http://json-schema.org/draft-06/schema#\\\", \\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft06);\n    SCHEMAERROR(s, \"{\\\"SpecUnsupported\\\":{\\\"errorCode\\\":11,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_UnsupportedDraft7) {\n    Document sd;\n    sd.Parse(\"{\\\"$schema\\\":\\\"http://json-schema.org/draft-07/schema#\\\", \\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft07);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    SCHEMAERROR(s, \"{\\\"SpecUnsupported\\\":{\\\"errorCode\\\":11,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_UnsupportedDraft2019_09) {\n    Document sd;\n    sd.Parse(\"{\\\"$schema\\\":\\\"https://json-schema.org/draft/2019-09/schema\\\", \\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == 
kDraft2019_09);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    SCHEMAERROR(s, \"{\\\"SpecUnsupported\\\":{\\\"errorCode\\\":11,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_UnsupportedDraft2020_12) {\n    Document sd;\n    sd.Parse(\"{\\\"$schema\\\":\\\"https://json-schema.org/draft/2020-12/schema\\\", \\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft2020_12);\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionNone);\n    SCHEMAERROR(s, \"{\\\"SpecUnsupported\\\":{\\\"errorCode\\\":11,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_SupportedVersion20Static) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{\\\"swagger\\\":\\\"2.0\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    Specification spec = SchemaDocumentType::GetSpecification(sd);\n    ASSERT_TRUE(spec.IsSupported());\n    ASSERT_TRUE(spec.draft == kDraft04);\n    ASSERT_TRUE(spec.oapi == kVersion20);\n}\n\nTEST(SchemaValidator, Schema_SupportedVersion20) {\n    Document sd;\n    sd.Parse(\"{\\\"swagger\\\":\\\"2.0\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_TRUE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersion20);\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft04);\n    EXPECT_TRUE(s.GetError().ObjectEmpty());\n}\n\nTEST(SchemaValidator, Schema_SupportedVersion30x) {\n    Document sd;\n    sd.Parse(\"{\\\"openapi\\\":\\\"3.0.0\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_TRUE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersion30);\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft05);\n    
EXPECT_TRUE(s.GetError().ObjectEmpty());\n}\n\nTEST(SchemaValidator, Schema_SupportedVersionOverride) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd, 0, 0, 0, 0, 0, Specification(kVersion20));\n    ASSERT_TRUE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersion20);\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft04);\n    EXPECT_TRUE(s.GetError().ObjectEmpty());\n}\n\nTEST(SchemaValidator, Schema_UnknownVersionOverride) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd, 0, 0, 0, 0, 0, Specification(kVersionUnknown));\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionUnknown);\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft04);\n    SCHEMAERROR(s, \"{\\\"SpecUnknown\\\":{\\\"errorCode\\\":10,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_UnsupportedVersionOverride) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd, 0, 0, 0, 0, 0, Specification(kVersion31));\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersion31);\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft2020_12);\n    SCHEMAERROR(s, \"{\\\"SpecUnsupported\\\":{\\\"errorCode\\\":11,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_UnknownVersion) {\n    Document sd;\n    sd.Parse(\"{\\\"openapi\\\":\\\"1.0\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionUnknown);\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft04);\n    SCHEMAERROR(s, 
\"{\\\"SpecUnknown\\\":{\\\"errorCode\\\":10,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_UnknownVersionShort) {\n    Document sd;\n    sd.Parse(\"{\\\"openapi\\\":\\\"3.0.\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionUnknown);\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft04);\n    SCHEMAERROR(s, \"{\\\"SpecUnknown\\\":{\\\"errorCode\\\":10,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_UnknownVersionNotString) {\n    Document sd;\n    sd.Parse(\"{\\\"swagger\\\": 2}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersionUnknown);\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft04);\n    SCHEMAERROR(s, \"{\\\"SpecUnknown\\\":{\\\"errorCode\\\":10,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_UnsupportedVersion31) {\n    Document sd;\n    sd.Parse(\"{\\\"openapi\\\":\\\"3.1.0\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_FALSE(s.IsSupportedSpecification());\n    ASSERT_TRUE(s.GetSpecification().oapi == kVersion31);\n    ASSERT_TRUE(s.GetSpecification().draft == kDraft2020_12);\n    SCHEMAERROR(s, \"{\\\"SpecUnsupported\\\":{\\\"errorCode\\\":11,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_DraftAndVersion) {\n    Document sd;\n    sd.Parse(\"{\\\"swagger\\\": \\\"2.0\\\", \\\"$schema\\\": \\\"http://json-schema.org/draft-04/schema#\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    ASSERT_TRUE(s.IsSupportedSpecification());\n    SCHEMAERROR(s, \"{\\\"SpecIllegal\\\":{\\\"errorCode\\\":12,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_StartUnknown) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": 
\\\"integer\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd, 0, 0, 0, 0, SchemaDocument::PointerType(\"/nowhere\"));\n    SCHEMAERROR(s, \"{\\\"StartUnknown\\\":{\\\"errorCode\\\":1,\\\"instanceRef\\\":\\\"#\\\", \\\"value\\\":\\\"#/nowhere\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_MultipleErrors) {\n    Document sd;\n    sd.Parse(\"{\\\"swagger\\\": \\\"foo\\\", \\\"$schema\\\": \\\"bar\\\"}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s(sd);\n    SCHEMAERROR(s, \"{ \\\"SpecUnknown\\\": {\\\"errorCode\\\":10,\\\"instanceRef\\\":\\\"#\\\"},\"\n                   \"  \\\"SpecIllegal\\\": {\\\"errorCode\\\":12,\\\"instanceRef\\\":\\\"#\\\"}\"\n                   \"}\");\n}\n\n// $ref is a non-JSON pointer fragment - not allowed when OpenAPI\nTEST(SchemaValidator, Schema_RefPlainNameOpenApi) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{\\\"swagger\\\": \\\"2.0\\\", \\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt1\\\": {\\\"$ref\\\": \\\"#myId\\\"}, \\\"myStr\\\": {\\\"type\\\": \\\"string\\\", \\\"id\\\": \\\"#myStrId\\\"}, \\\"myInt2\\\": {\\\"type\\\": \\\"integer\\\", \\\"id\\\": \\\"#myId\\\"}}}\");\n    SchemaDocumentType s(sd);\n    SCHEMAERROR(s, \"{\\\"RefPlainName\\\":{\\\"errorCode\\\":2,\\\"instanceRef\\\":\\\"#/properties/myInt1\\\",\\\"value\\\":\\\"#myId\\\"}}\");\n}\n\n// $ref is a non-JSON pointer fragment - not allowed when remote document\nTEST(SchemaValidator, Schema_RefPlainNameRemote) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt\\\": {\\\"$ref\\\": \\\"/subSchemas.json#plainname\\\"}}}\");\n    SchemaDocumentType s(sd, \"http://localhost:1234/xxxx\", 26, &provider);\n    SCHEMAERROR(s, 
\"{\\\"RefPlainName\\\":{\\\"errorCode\\\":2,\\\"instanceRef\\\":\\\"#/properties/myInt\\\",\\\"value\\\":\\\"#plainname\\\"}}\");\n}\n\n// $ref is an empty string\nTEST(SchemaValidator, Schema_RefEmptyString) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt1\\\": {\\\"$ref\\\": \\\"\\\"}}}\");\n    SchemaDocumentType s(sd);\n    SCHEMAERROR(s, \"{\\\"RefInvalid\\\":{\\\"errorCode\\\":3,\\\"instanceRef\\\":\\\"#/properties/myInt1\\\"}}\");\n}\n\n// $ref is remote but no provider\nTEST(SchemaValidator, Schema_RefNoRemoteProvider) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt\\\": {\\\"$ref\\\": \\\"/subSchemas.json#plainname\\\"}}}\");\n    SchemaDocumentType s(sd, \"http://localhost:1234/xxxx\", 26, 0);\n    SCHEMAERROR(s, \"{\\\"RefNoRemoteProvider\\\":{\\\"errorCode\\\":7,\\\"instanceRef\\\":\\\"#/properties/myInt\\\"}}\");\n}\n\n// $ref is remote but no schema returned\nTEST(SchemaValidator, Schema_RefNoRemoteSchema) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt\\\": {\\\"$ref\\\": \\\"/will-not-resolve.json\\\"}}}\");\n    SchemaDocumentType s(sd, \"http://localhost:1234/xxxx\", 26, &provider);\n    SCHEMAERROR(s, \"{\\\"RefNoRemoteSchema\\\":{\\\"errorCode\\\":8,\\\"instanceRef\\\":\\\"#/properties/myInt\\\",\\\"value\\\":\\\"http://localhost:1234/will-not-resolve.json\\\"}}\");\n}\n\n// $ref pointer is invalid\nTEST(SchemaValidator, Schema_RefPointerInvalid) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": 
\\\"object\\\", \\\"properties\\\": {\\\"myInt\\\": {\\\"$ref\\\": \\\"#/&&&&&\\\"}}}\");\n    SchemaDocumentType s(sd);\n    SCHEMAERROR(s, \"{\\\"RefPointerInvalid\\\":{\\\"errorCode\\\":4,\\\"instanceRef\\\":\\\"#/properties/myInt\\\",\\\"value\\\":\\\"#/&&&&&\\\",\\\"offset\\\":2}}\");\n}\n\n// $ref is remote and pointer is invalid\nTEST(SchemaValidator, Schema_RefPointerInvalidRemote) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt\\\": {\\\"$ref\\\": \\\"/subSchemas.json#/abc&&&&&\\\"}}}\");\n    SchemaDocumentType s(sd, \"http://localhost:1234/xxxx\", 26, &provider);\n    SCHEMAERROR(s, \"{\\\"RefPointerInvalid\\\":{\\\"errorCode\\\":4,\\\"instanceRef\\\":\\\"#/properties/myInt\\\",\\\"value\\\":\\\"#/abc&&&&&\\\",\\\"offset\\\":5}}\");\n}\n\n// $ref is unknown non-pointer\nTEST(SchemaValidator, Schema_RefUnknownPlainName) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt\\\": {\\\"$ref\\\": \\\"#plainname\\\"}}}\");\n    SchemaDocumentType s(sd);\n    SCHEMAERROR(s, \"{\\\"RefUnknown\\\":{\\\"errorCode\\\":5,\\\"instanceRef\\\":\\\"#/properties/myInt\\\",\\\"value\\\":\\\"#plainname\\\"}}\");\n}\n\n/// $ref is unknown pointer\nTEST(SchemaValidator, Schema_RefUnknownPointer) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt\\\": {\\\"$ref\\\": \\\"#/a/b\\\"}}}\");\n    SchemaDocumentType s(sd);\n    SCHEMAERROR(s, \"{\\\"RefUnknown\\\":{\\\"errorCode\\\":5,\\\"instanceRef\\\":\\\"#/properties/myInt\\\",\\\"value\\\":\\\"#/a/b\\\"}}\");\n}\n\n// $ref is remote and unknown 
pointer\nTEST(SchemaValidator, Schema_RefUnknownPointerRemote) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    RemoteSchemaDocumentProvider<SchemaDocumentType> provider;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\\\"myInt\\\": {\\\"$ref\\\": \\\"/subSchemas.json#/a/b\\\"}}}\");\n    SchemaDocumentType s(sd, \"http://localhost:1234/xxxx\", 26, &provider);\n    SCHEMAERROR(s, \"{\\\"RefUnknown\\\":{\\\"errorCode\\\":5,\\\"instanceRef\\\":\\\"#/properties/myInt\\\",\\\"value\\\":\\\"http://localhost:1234/subSchemas.json#/a/b\\\"}}\");\n}\n\n// $ref is cyclical\nTEST(SchemaValidator, Schema_RefCyclical) {\n    typedef GenericSchemaDocument<Value, MemoryPoolAllocator<> > SchemaDocumentType;\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"object\\\", \\\"properties\\\": {\"\n             \"    \\\"cyclic_source\\\": {\"\n             \"         \\\"$ref\\\": \\\"#/properties/cyclic_target\\\"\"\n             \"    },\"\n             \"    \\\"cyclic_target\\\": {\"\n             \"        \\\"$ref\\\": \\\"#/properties/cyclic_source\\\"\"\n             \"    }\"\n             \"}}\");\n    SchemaDocumentType s(sd);\n    SCHEMAERROR(s, \"{\\\"RefCyclical\\\":{\\\"errorCode\\\":6,\\\"instanceRef\\\":\\\"#/properties/cyclic_target\\\",\\\"value\\\":\\\"#/properties/cyclic_source\\\"}}\");\n}\n\nTEST(SchemaValidator, Schema_ReadOnlyAndWriteOnly) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"integer\\\", \\\"readOnly\\\": true, \\\"writeOnly\\\": true}\");\n    ASSERT_FALSE(sd.HasParseError());\n    SchemaDocument s1(sd, 0, 0, 0, 0, 0, Specification(kDraft04));\n    EXPECT_TRUE(s1.GetError().ObjectEmpty());\n    SchemaDocument s2(sd, 0, 0, 0, 0, 0, Specification(kVersion30));\n    SCHEMAERROR(s2, \"{\\\"ReadOnlyAndWriteOnly\\\":{\\\"errorCode\\\":13,\\\"instanceRef\\\":\\\"#\\\"}}\");\n}\n\nTEST(SchemaValidator, ReadOnlyWhenWriting) {\n    Document sd;\n    sd.Parse(\n       
 \"{\"\n        \"    \\\"type\\\":\\\"object\\\",\"\n        \"    \\\"properties\\\": {\"\n        \"        \\\"rprop\\\" : {\"\n        \"            \\\"type\\\": \\\"string\\\",\"\n        \"            \\\"readOnly\\\": true\"\n        \"        }\"\n        \"    }\"\n        \"}\");\n    SchemaDocument s(sd, 0, 0, 0, 0, 0, Specification(kVersion20));\n    VALIDATE(s, \"{ \\\"rprop\\\": \\\"hello\\\" }\", true);\n    INVALIDATE_(s, \"{ \\\"rprop\\\": \\\"hello\\\" }\", \"/properties/rprop\", \"readOnly\", \"/rprop\",\n        \"{ \\\"readOnly\\\": {\"\n        \"    \\\"errorCode\\\": 26, \\\"instanceRef\\\": \\\"#/rprop\\\", \\\"schemaRef\\\": \\\"#/properties/rprop\\\"\"\n        \"  }\"\n        \"}\",\n        kValidateDefaultFlags | kValidateWriteFlag, SchemaValidator, Pointer);\n}\n\nTEST(SchemaValidator, WriteOnlyWhenReading) {\n    Document sd;\n    sd.Parse(\n        \"{\"\n        \"    \\\"type\\\":\\\"object\\\",\"\n        \"    \\\"properties\\\": {\"\n        \"        \\\"wprop\\\" : {\"\n        \"            \\\"type\\\": \\\"boolean\\\",\"\n        \"            \\\"writeOnly\\\": true\"\n        \"        }\"\n        \"    }\"\n        \"}\");\n    SchemaDocument s(sd, 0, 0, 0, 0, 0, Specification(kVersion30));\n    VALIDATE(s, \"{ \\\"wprop\\\": true }\", true);\n    INVALIDATE_(s, \"{ \\\"wprop\\\": true }\", \"/properties/wprop\", \"writeOnly\", \"/wprop\",\n        \"{ \\\"writeOnly\\\": {\"\n        \"    \\\"errorCode\\\": 27, \\\"instanceRef\\\": \\\"#/wprop\\\", \\\"schemaRef\\\": \\\"#/properties/wprop\\\"\"\n        \"  }\"\n        \"}\",\n        kValidateDefaultFlags | kValidateReadFlag, SchemaValidator, Pointer);\n}\n\nTEST(SchemaValidator, NullableTrue) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"string\\\", \\\"nullable\\\": true}\");\n    SchemaDocument s(sd, 0, 0, 0, 0, 0, kVersion20);\n\n    VALIDATE(s, \"\\\"hello\\\"\", true);\n    INVALIDATE(s, \"null\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": 
{\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"null\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"false\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"boolean\\\"\"\n        \"}}\");\n\n    SchemaDocument s30(sd, 0, 0, 0, 0, 0, kVersion30);\n\n    VALIDATE(s30, \"\\\"hello\\\"\", true);\n    VALIDATE(s30, \"null\", true);\n    INVALIDATE(s30, \"false\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"null\\\", \\\"string\\\"], \\\"actual\\\": \\\"boolean\\\"\"\n        \"}}\");\n}\n\nTEST(SchemaValidator, NullableFalse) {\n    Document sd;\n    sd.Parse(\"{\\\"type\\\": \\\"string\\\", \\\"nullable\\\": false}\");\n    SchemaDocument s(sd, 0, 0, 0, 0, 0, kVersion20);\n\n    VALIDATE(s, \"\\\"hello\\\"\", true);\n    INVALIDATE(s, \"null\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"null\\\"\"\n        \"}}\");\n    INVALIDATE(s, \"false\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"boolean\\\"\"\n        \"}}\");\n\n    SchemaDocument s30(sd, 0, 0, 0, 0, 0, kVersion30);\n\n    VALIDATE(s30, \"\\\"hello\\\"\", true);\n    INVALIDATE(s, \"null\", \"\", \"type\", \"\",\n        
\"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"null\\\"\"\n        \"}}\");\n    INVALIDATE(s30, \"false\", \"\", \"type\", \"\",\n        \"{ \\\"type\\\": {\"\n        \"    \\\"errorCode\\\": 20,\"\n        \"    \\\"instanceRef\\\": \\\"#\\\", \\\"schemaRef\\\": \\\"#\\\",\"\n        \"    \\\"expected\\\": [\\\"string\\\"], \\\"actual\\\": \\\"boolean\\\"\"\n        \"}}\");\n}\n\n#if defined(_MSC_VER) || defined(__clang__)\nRAPIDJSON_DIAG_POP\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/simdtest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n// Since Travis CI installs old Valgrind 3.7.0, which fails with some SSE4.2\n// The unit tests prefix with SIMD should be skipped by Valgrind test\n\n// __SSE2__ and __SSE4_2__ are recognized by gcc, clang, and the Intel compiler.\n// We use -march=native with gmake to enable -msse2 and -msse4.2, if supported.\n#if defined(__SSE4_2__)\n#  define RAPIDJSON_SSE42\n#elif defined(__SSE2__)\n#  define RAPIDJSON_SSE2\n#elif defined(__ARM_NEON)\n#  define RAPIDJSON_NEON\n#endif\n\n#define RAPIDJSON_NAMESPACE rapidjson_simd\n\n#include \"unittest.h\"\n\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/writer.h\"\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(effc++)\n#endif\n\nusing namespace rapidjson_simd;\n\n#ifdef RAPIDJSON_SSE2\n#define SIMD_SUFFIX(name) name##_SSE2\n#elif defined(RAPIDJSON_SSE42)\n#define SIMD_SUFFIX(name) name##_SSE42\n#elif defined(RAPIDJSON_NEON)\n#define SIMD_SUFFIX(name) name##_NEON\n#else\n#define SIMD_SUFFIX(name) name\n#endif\n\n#define SIMD_SIZE_ALIGN(n) ((size_t(n) + 15) & ~size_t(15))\n\ntemplate <typename StreamType>\nvoid TestSkipWhitespace() {\n    for (size_t step = 1; step < 32; step++) {\n        char buffer[SIMD_SIZE_ALIGN(1025)];\n        for (size_t i = 0; i < 1024; i++)\n            buffer[i] = \" 
\\t\\r\\n\"[i % 4];\n        for (size_t i = 0; i < 1024; i += step)\n            buffer[i] = 'X';\n        buffer[1024] = '\\0';\n\n        StreamType s(buffer);\n        size_t i = 0;\n        for (;;) {\n            SkipWhitespace(s);\n            if (s.Peek() == '\\0')\n                break;\n            EXPECT_EQ(i, s.Tell());\n            EXPECT_EQ('X', s.Take());\n            i += step;\n        }\n    }\n}\n\nTEST(SIMD, SIMD_SUFFIX(SkipWhitespace)) {\n    TestSkipWhitespace<StringStream>();\n    TestSkipWhitespace<InsituStringStream>();\n}\n\nTEST(SIMD, SIMD_SUFFIX(SkipWhitespace_EncodedMemoryStream)) {\n    for (size_t step = 1; step < 32; step++) {\n        char buffer[SIMD_SIZE_ALIGN(1024)];\n        for (size_t i = 0; i < 1024; i++)\n            buffer[i] = \" \\t\\r\\n\"[i % 4];\n        for (size_t i = 0; i < 1024; i += step)\n            buffer[i] = 'X';\n\n        MemoryStream ms(buffer, 1024);\n        EncodedInputStream<UTF8<>, MemoryStream> s(ms);\n        for (;;) {\n            SkipWhitespace(s);\n            if (s.Peek() == '\\0')\n                break;\n            //EXPECT_EQ(i, s.Tell());\n            EXPECT_EQ('X', s.Take());\n        }\n    }\n}\n\nstruct ScanCopyUnescapedStringHandler : BaseReaderHandler<UTF8<>, ScanCopyUnescapedStringHandler> {\n    bool String(const char* str, size_t length, bool) {\n        memcpy(buffer, str, length + 1);\n        return true;\n    }\n    char buffer[1024 + 5 + 32];\n};\n\ntemplate <unsigned parseFlags, typename StreamType>\nvoid TestScanCopyUnescapedString() {\n    char buffer[SIMD_SIZE_ALIGN(1024u + 5 + 32)];\n    char backup[SIMD_SIZE_ALIGN(1024u + 5 + 32)];\n\n    // Test \"ABCDABCD...\\\\\"\n    for (size_t offset = 0; offset < 32; offset++) {\n        for (size_t step = 0; step < 1024; step++) {\n            char* json = buffer + offset;\n            char *p = json;\n            *p++ = '\\\"';\n            for (size_t i = 0; i < step; i++)\n                *p++ = \"ABCD\"[i % 4];\n            
*p++ = '\\\\';\n            *p++ = '\\\\';\n            *p++ = '\\\"';\n            *p++ = '\\0';\n            strcpy(backup, json); // insitu parsing will overwrite buffer, so need to backup first\n\n            StreamType s(json);\n            Reader reader;\n            ScanCopyUnescapedStringHandler h;\n            reader.Parse<parseFlags>(s, h);\n            EXPECT_TRUE(memcmp(h.buffer, backup + 1, step) == 0);\n            EXPECT_EQ('\\\\', h.buffer[step]);    // escaped\n            EXPECT_EQ('\\0', h.buffer[step + 1]);\n        }\n    }\n\n    // Test \"\\\\ABCDABCD...\"\n    for (size_t offset = 0; offset < 32; offset++) {\n        for (size_t step = 0; step < 1024; step++) {\n            char* json = buffer + offset;\n            char *p = json;\n            *p++ = '\\\"';\n            *p++ = '\\\\';\n            *p++ = '\\\\';\n            for (size_t i = 0; i < step; i++)\n                *p++ = \"ABCD\"[i % 4];\n            *p++ = '\\\"';\n            *p++ = '\\0';\n            strcpy(backup, json); // insitu parsing will overwrite buffer, so need to backup first\n\n            StreamType s(json);\n            Reader reader;\n            ScanCopyUnescapedStringHandler h;\n            reader.Parse<parseFlags>(s, h);\n            EXPECT_TRUE(memcmp(h.buffer + 1, backup + 3, step) == 0);\n            EXPECT_EQ('\\\\', h.buffer[0]);    // escaped\n            EXPECT_EQ('\\0', h.buffer[step + 1]);\n        }\n    }\n}\n\nTEST(SIMD, SIMD_SUFFIX(ScanCopyUnescapedString)) {\n    TestScanCopyUnescapedString<kParseDefaultFlags, StringStream>();\n    TestScanCopyUnescapedString<kParseInsituFlag, InsituStringStream>();\n}\n\nTEST(SIMD, SIMD_SUFFIX(ScanWriteUnescapedString)) {\n    char buffer[SIMD_SIZE_ALIGN(2048 + 1 + 32)];\n    for (size_t offset = 0; offset < 32; offset++) {\n        for (size_t step = 0; step < 1024; step++) {\n            char* s = buffer + offset;\n            char* p = s;\n            for (size_t i = 0; i < step; i++)\n                *p++ 
= \"ABCD\"[i % 4];\n            char escape = \"\\0\\n\\\\\\\"\"[step % 4];\n            *p++ = escape;\n            for (size_t i = 0; i < step; i++)\n                *p++ = \"ABCD\"[i % 4];\n\n            StringBuffer sb;\n            Writer<StringBuffer> writer(sb);\n            writer.String(s, SizeType(step * 2 + 1));\n            const char* q = sb.GetString();\n            EXPECT_EQ('\\\"', *q++);\n            for (size_t i = 0; i < step; i++)\n                EXPECT_EQ(\"ABCD\"[i % 4], *q++);\n            if (escape == '\\0') {\n                EXPECT_EQ('\\\\', *q++);\n                EXPECT_EQ('u', *q++);\n                EXPECT_EQ('0', *q++);\n                EXPECT_EQ('0', *q++);\n                EXPECT_EQ('0', *q++);\n                EXPECT_EQ('0', *q++);\n            }\n            else if (escape == '\\n') {\n                EXPECT_EQ('\\\\', *q++);\n                EXPECT_EQ('n', *q++);\n            }\n            else if (escape == '\\\\') {\n                EXPECT_EQ('\\\\', *q++);\n                EXPECT_EQ('\\\\', *q++);\n            }\n            else if (escape == '\\\"') {\n                EXPECT_EQ('\\\\', *q++);\n                EXPECT_EQ('\\\"', *q++);\n            }\n            for (size_t i = 0; i < step; i++)\n                EXPECT_EQ(\"ABCD\"[i % 4], *q++);\n            EXPECT_EQ('\\\"', *q++);\n            EXPECT_EQ('\\0', *q++);\n        }\n    }\n}\n\n#ifdef __GNUC__\nRAPIDJSON_DIAG_POP\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/strfunctest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n#include \"rapidjson/internal/strfunc.h\"\n\nusing namespace rapidjson;\nusing namespace rapidjson::internal;\n\nTEST(StrFunc, CountStringCodePoint) {\n    SizeType count;\n    EXPECT_TRUE(CountStringCodePoint<UTF8<> >(\"\", 0, &count));\n    EXPECT_EQ(0u, count);\n    EXPECT_TRUE(CountStringCodePoint<UTF8<> >(\"Hello\", 5, &count));\n    EXPECT_EQ(5u, count);\n    EXPECT_TRUE(CountStringCodePoint<UTF8<> >(\"\\xC2\\xA2\\xE2\\x82\\xAC\\xF0\\x9D\\x84\\x9E\", 9, &count)); // cents euro G-clef\n    EXPECT_EQ(3u, count);\n    EXPECT_FALSE(CountStringCodePoint<UTF8<> >(\"\\xC2\\xA2\\xE2\\x82\\xAC\\xF0\\x9D\\x84\\x9E\\x80\", 10, &count));\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/stringbuffertest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/writer.h\"\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(c++98-compat)\n#endif\n\nusing namespace rapidjson;\n\nTEST(StringBuffer, InitialSize) {\n    StringBuffer buffer;\n    EXPECT_EQ(0u, buffer.GetSize());\n    EXPECT_EQ(0u, buffer.GetLength());\n    EXPECT_STREQ(\"\", buffer.GetString());\n}\n\nTEST(StringBuffer, Put) {\n    StringBuffer buffer;\n    buffer.Put('A');\n\n    EXPECT_EQ(1u, buffer.GetSize());\n    EXPECT_EQ(1u, buffer.GetLength());\n    EXPECT_STREQ(\"A\", buffer.GetString());\n}\n\nTEST(StringBuffer, PutN_Issue672) {\n    GenericStringBuffer<UTF8<>, MemoryPoolAllocator<> > buffer;\n    EXPECT_EQ(0u, buffer.GetSize());\n    EXPECT_EQ(0u, buffer.GetLength());\n    rapidjson::PutN(buffer, ' ', 1);\n    EXPECT_EQ(1u, buffer.GetSize());\n    EXPECT_EQ(1u, buffer.GetLength());\n}\n\nTEST(StringBuffer, Clear) {\n    StringBuffer buffer;\n    buffer.Put('A');\n    buffer.Put('B');\n    buffer.Put('C');\n    buffer.Clear();\n\n    EXPECT_EQ(0u, buffer.GetSize());\n    EXPECT_EQ(0u, buffer.GetLength());\n    EXPECT_STREQ(\"\", buffer.GetString());\n}\n\nTEST(StringBuffer, Push) {\n    StringBuffer buffer;\n    buffer.Push(5);\n\n    
EXPECT_EQ(5u, buffer.GetSize());\n    EXPECT_EQ(5u, buffer.GetLength());\n\n    // Causes sudden expansion to make the stack's capacity equal to size\n    buffer.Push(65536u);\n    EXPECT_EQ(5u + 65536u, buffer.GetSize());\n}\n\nTEST(StringBuffer, Pop) {\n    StringBuffer buffer;\n    buffer.Put('A');\n    buffer.Put('B');\n    buffer.Put('C');\n    buffer.Put('D');\n    buffer.Put('E');\n    buffer.Pop(3);\n\n    EXPECT_EQ(2u, buffer.GetSize());\n    EXPECT_EQ(2u, buffer.GetLength());\n    EXPECT_STREQ(\"AB\", buffer.GetString());\n}\n\nTEST(StringBuffer, GetLength_Issue744) {\n    GenericStringBuffer<UTF16<wchar_t> > buffer;\n    buffer.Put('A');\n    buffer.Put('B');\n    buffer.Put('C');\n    EXPECT_EQ(3u * sizeof(wchar_t), buffer.GetSize());\n    EXPECT_EQ(3u, buffer.GetLength());\n}\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n\n#if 0 // Many old compiler does not support these. Turn it off temporaily.\n\n#include <type_traits>\n\nTEST(StringBuffer, Traits) {\n    static_assert( std::is_constructible<StringBuffer>::value, \"\");\n    static_assert( std::is_default_constructible<StringBuffer>::value, \"\");\n#ifndef _MSC_VER\n    static_assert(!std::is_copy_constructible<StringBuffer>::value, \"\");\n#endif\n    static_assert( std::is_move_constructible<StringBuffer>::value, \"\");\n\n    static_assert(!std::is_nothrow_constructible<StringBuffer>::value, \"\");\n    static_assert(!std::is_nothrow_default_constructible<StringBuffer>::value, \"\");\n\n#if !defined(_MSC_VER) || _MSC_VER >= 1800\n    static_assert(!std::is_nothrow_copy_constructible<StringBuffer>::value, \"\");\n    static_assert(!std::is_nothrow_move_constructible<StringBuffer>::value, \"\");\n#endif\n\n    static_assert( std::is_assignable<StringBuffer,StringBuffer>::value, \"\");\n#ifndef _MSC_VER\n    static_assert(!std::is_copy_assignable<StringBuffer>::value, \"\");\n#endif\n    static_assert( std::is_move_assignable<StringBuffer>::value, \"\");\n\n#if !defined(_MSC_VER) || _MSC_VER >= 1800\n    
static_assert(!std::is_nothrow_assignable<StringBuffer, StringBuffer>::value, \"\");\n#endif\n\n    static_assert(!std::is_nothrow_copy_assignable<StringBuffer>::value, \"\");\n    static_assert(!std::is_nothrow_move_assignable<StringBuffer>::value, \"\");\n\n    static_assert( std::is_destructible<StringBuffer>::value, \"\");\n#ifndef _MSC_VER\n    static_assert(std::is_nothrow_destructible<StringBuffer>::value, \"\");\n#endif\n}\n\n#endif\n\nTEST(StringBuffer, MoveConstructor) {\n    StringBuffer x;\n    x.Put('A');\n    x.Put('B');\n    x.Put('C');\n    x.Put('D');\n\n    EXPECT_EQ(4u, x.GetSize());\n    EXPECT_EQ(4u, x.GetLength());\n    EXPECT_STREQ(\"ABCD\", x.GetString());\n\n    // StringBuffer y(x); // does not compile (!is_copy_constructible)\n    StringBuffer y(std::move(x));\n    EXPECT_EQ(0u, x.GetSize());\n    EXPECT_EQ(0u, x.GetLength());\n    EXPECT_EQ(4u, y.GetSize());\n    EXPECT_EQ(4u, y.GetLength());\n    EXPECT_STREQ(\"ABCD\", y.GetString());\n\n    // StringBuffer z = y; // does not compile (!is_copy_assignable)\n    StringBuffer z = std::move(y);\n    EXPECT_EQ(0u, y.GetSize());\n    EXPECT_EQ(0u, y.GetLength());\n    EXPECT_EQ(4u, z.GetSize());\n    EXPECT_EQ(4u, z.GetLength());\n    EXPECT_STREQ(\"ABCD\", z.GetString());\n}\n\nTEST(StringBuffer, MoveAssignment) {\n    StringBuffer x;\n    x.Put('A');\n    x.Put('B');\n    x.Put('C');\n    x.Put('D');\n\n    EXPECT_EQ(4u, x.GetSize());\n    EXPECT_EQ(4u, x.GetLength());\n    EXPECT_STREQ(\"ABCD\", x.GetString());\n\n    StringBuffer y;\n    // y = x; // does not compile (!is_copy_assignable)\n    y = std::move(x);\n    EXPECT_EQ(0u, x.GetSize());\n    EXPECT_EQ(4u, y.GetLength());\n    EXPECT_STREQ(\"ABCD\", y.GetString());\n}\n\n#endif // RAPIDJSON_HAS_CXX11_RVALUE_REFS\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/strtodtest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n\n#include \"rapidjson/internal/strtod.h\"\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(unreachable-code)\n#endif\n\n#define BIGINTEGER_LITERAL(s) BigInteger(s, sizeof(s) - 1)\n\nusing namespace rapidjson::internal;\n\nTEST(Strtod, CheckApproximationCase) {\n    static const int kSignificandSize = 52;\n    static const int kExponentBias = 0x3FF;\n    static const uint64_t kExponentMask = RAPIDJSON_UINT64_C2(0x7FF00000, 0x00000000);\n    static const uint64_t kSignificandMask = RAPIDJSON_UINT64_C2(0x000FFFFF, 0xFFFFFFFF);\n    static const uint64_t kHiddenBit = RAPIDJSON_UINT64_C2(0x00100000, 0x00000000);\n\n    // http://www.exploringbinary.com/using-integers-to-check-a-floating-point-approximation/\n    // Let b = 0x1.465a72e467d88p-149\n    //       = 5741268244528520 x 2^-201\n    union {\n        double d;\n        uint64_t u;\n    }u;\n    u.u = 0x465a72e467d88 | ((static_cast<uint64_t>(-149 + kExponentBias)) << kSignificandSize);\n    const double b = u.d;\n    const uint64_t bInt = (u.u & kSignificandMask) | kHiddenBit;\n    const int bExp = static_cast<int>(((u.u & kExponentMask) >> kSignificandSize) - kExponentBias - kSignificandSize);\n    EXPECT_DOUBLE_EQ(1.7864e-45, b);\n    
EXPECT_EQ(RAPIDJSON_UINT64_C2(0x001465a7, 0x2e467d88), bInt);\n    EXPECT_EQ(-201, bExp);\n\n    // Let d = 17864 x 10-49\n    const char dInt[] = \"17864\";\n    const int dExp = -49;\n\n    // Let h = 2^(bExp-1)\n    const int hExp = bExp - 1;\n    EXPECT_EQ(-202, hExp);\n\n    int dS_Exp2 = 0;\n    int dS_Exp5 = 0;\n    int bS_Exp2 = 0;\n    int bS_Exp5 = 0;\n    int hS_Exp2 = 0;\n    int hS_Exp5 = 0;\n\n    // Adjust for decimal exponent\n    if (dExp >= 0) {\n        dS_Exp2 += dExp;\n        dS_Exp5 += dExp;\n    }\n    else {\n        bS_Exp2 -= dExp;\n        bS_Exp5 -= dExp;\n        hS_Exp2 -= dExp;\n        hS_Exp5 -= dExp;\n    }\n\n    // Adjust for binary exponent\n    if (bExp >= 0)\n        bS_Exp2 += bExp;\n    else {\n        dS_Exp2 -= bExp;\n        hS_Exp2 -= bExp;\n    }\n\n    // Adjust for half ulp exponent\n    if (hExp >= 0)\n        hS_Exp2 += hExp;\n    else {\n        dS_Exp2 -= hExp;\n        bS_Exp2 -= hExp;\n    }\n\n    // Remove common power of two factor from all three scaled values\n    int common_Exp2 = (std::min)(dS_Exp2, (std::min)(bS_Exp2, hS_Exp2));\n    dS_Exp2 -= common_Exp2;\n    bS_Exp2 -= common_Exp2;\n    hS_Exp2 -= common_Exp2;\n\n    EXPECT_EQ(153, dS_Exp2);\n    EXPECT_EQ(0, dS_Exp5);\n    EXPECT_EQ(1, bS_Exp2);\n    EXPECT_EQ(49, bS_Exp5);\n    EXPECT_EQ(0, hS_Exp2);\n    EXPECT_EQ(49, hS_Exp5);\n\n    BigInteger dS = BIGINTEGER_LITERAL(dInt);\n    dS.MultiplyPow5(static_cast<unsigned>(dS_Exp5)) <<= static_cast<size_t>(dS_Exp2);\n\n    BigInteger bS(bInt);\n    bS.MultiplyPow5(static_cast<unsigned>(bS_Exp5)) <<= static_cast<size_t>(bS_Exp2);\n\n    BigInteger hS(1);\n    hS.MultiplyPow5(static_cast<unsigned>(hS_Exp5)) <<= static_cast<size_t>(hS_Exp2);\n\n    EXPECT_TRUE(BIGINTEGER_LITERAL(\"203970822259994138521801764465966248930731085529088\") == dS);\n    EXPECT_TRUE(BIGINTEGER_LITERAL(\"203970822259994122305215569213032722473144531250000\") == bS);\n    
EXPECT_TRUE(BIGINTEGER_LITERAL(\"17763568394002504646778106689453125\") == hS);\n\n    EXPECT_EQ(1, dS.Compare(bS));\n    \n    BigInteger delta(0);\n    EXPECT_FALSE(dS.Difference(bS, &delta));\n    EXPECT_TRUE(BIGINTEGER_LITERAL(\"16216586195252933526457586554279088\") == delta);\n    EXPECT_TRUE(bS.Difference(dS, &delta));\n    EXPECT_TRUE(BIGINTEGER_LITERAL(\"16216586195252933526457586554279088\") == delta);\n\n    EXPECT_EQ(-1, delta.Compare(hS));\n}\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/unittest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n#include \"rapidjson/rapidjson.h\"\n\n#ifdef __clang__\n#pragma GCC diagnostic push\n#if __has_warning(\"-Wdeprecated\")\n#pragma GCC diagnostic ignored \"-Wdeprecated\"\n#endif\n#endif\n\nAssertException::~AssertException() throw() {}\n\n#ifdef __clang__\n#pragma GCC diagnostic pop\n#endif\n\nint main(int argc, char **argv) {\n    ::testing::InitGoogleTest(&argc, argv);\n\n    std::cout << \"RapidJSON v\" << RAPIDJSON_VERSION_STRING << std::endl;\n\n#ifdef _MSC_VER\n    _CrtMemState memoryState = { 0 };\n    (void)memoryState;\n    _CrtMemCheckpoint(&memoryState);\n    //_CrtSetBreakAlloc(X);\n    //void *testWhetherMemoryLeakDetectionWorks = malloc(1);\n#endif\n\n    int ret = RUN_ALL_TESTS();\n\n#ifdef _MSC_VER\n    // Current gtest constantly leak 2 blocks at exit\n    _CrtMemDumpAllObjectsSince(&memoryState);\n#endif\n    return ret;\n}\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/unittest.h",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#ifndef UNITTEST_H_\n#define UNITTEST_H_\n\n// gtest indirectly included inttypes.h, without __STDC_CONSTANT_MACROS.\n#ifndef __STDC_CONSTANT_MACROS\n#ifdef __clang__\n#pragma GCC diagnostic push\n#if __has_warning(\"-Wreserved-id-macro\")\n#pragma GCC diagnostic ignored \"-Wreserved-id-macro\"\n#endif\n#endif\n\n#  define __STDC_CONSTANT_MACROS 1 // required by C++ standard\n\n#ifdef __clang__\n#pragma GCC diagnostic pop\n#endif\n#endif\n\n#ifdef _MSC_VER\n#define _CRTDBG_MAP_ALLOC\n#include <crtdbg.h>\n#pragma warning(disable : 4996) // 'function': was declared deprecated\n#endif\n\n#if defined(__clang__) || defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 2))\n#if defined(__clang__) || (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6))\n#pragma GCC diagnostic push\n#endif\n#pragma GCC diagnostic ignored \"-Weffc++\"\n#endif\n\n#include \"gtest/gtest.h\"\n#include <stdexcept>\n\n#if defined(__clang__) || defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6))\n#pragma GCC diagnostic pop\n#endif\n\n#ifdef __clang__\n// All TEST() macro generated this warning, disable globally\n#pragma GCC diagnostic ignored \"-Wglobal-constructors\"\n#endif\n\ntemplate <typename Ch>\ninline unsigned StrLen(const 
Ch* s) {\n    const Ch* p = s;\n    while (*p) p++;\n    return unsigned(p - s);\n}\n\ntemplate<typename Ch>\ninline int StrCmp(const Ch* s1, const Ch* s2) {\n    while(*s1 && (*s1 == *s2)) { s1++; s2++; }\n    return static_cast<unsigned>(*s1) < static_cast<unsigned>(*s2) ? -1 : static_cast<unsigned>(*s1) > static_cast<unsigned>(*s2);\n}\n\ntemplate <typename Ch>\ninline Ch* StrDup(const Ch* str) {\n    size_t bufferSize = sizeof(Ch) * (StrLen(str) + 1);\n    Ch* buffer = static_cast<Ch*>(malloc(bufferSize));\n    memcpy(buffer, str, bufferSize);\n    return buffer;\n}\n\ninline FILE* TempFile(char *filename) {\n#if defined(__WIN32__) || defined(_MSC_VER)\n    filename = tmpnam(filename);\n\n    // For Visual Studio, tmpnam() adds a backslash in front. Remove it.\n    if (filename[0] == '\\\\')\n        for (int i = 0; filename[i] != '\\0'; i++)\n            filename[i] = filename[i + 1];\n        \n    return fopen(filename, \"wb\");\n#else\n    strcpy(filename, \"/tmp/fileXXXXXX\");\n    int fd = mkstemp(filename);\n    return fdopen(fd, \"w\");\n#endif\n}\n\n// Use exception for catching assert\n#ifdef _MSC_VER\n#pragma warning(disable : 4127)\n#endif\n\n#ifdef __clang__\n#pragma GCC diagnostic push\n#if __has_warning(\"-Wdeprecated\")\n#pragma GCC diagnostic ignored \"-Wdeprecated\"\n#endif\n#endif\n\nclass AssertException : public std::logic_error {\npublic:\n    AssertException(const char* w) : std::logic_error(w) {}\n    AssertException(const AssertException& rhs) : std::logic_error(rhs) {}\n    virtual ~AssertException() throw();\n};\n\n#ifdef __clang__\n#pragma GCC diagnostic pop\n#endif\n\n// Not using noexcept for testing RAPIDJSON_ASSERT()\n#define RAPIDJSON_HAS_CXX11_NOEXCEPT 0\n\n#ifndef RAPIDJSON_ASSERT\n#define RAPIDJSON_ASSERT(x) (!(x) ? 
throw AssertException(RAPIDJSON_STRINGIFY(x)) : (void)0u)\n#ifndef RAPIDJSON_ASSERT_THROWS\n#define RAPIDJSON_ASSERT_THROWS\n#endif\n#endif\n\nclass Random {\npublic:\n    Random(unsigned seed = 0) : mSeed(seed) {}\n\n    unsigned operator()() {\n        mSeed = 214013 * mSeed + 2531011;\n        return mSeed;\n    }\n\nprivate:\n    unsigned mSeed;\n};\n\n#endif // UNITTEST_H_\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/valuetest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n#include \"rapidjson/document.h\"\n#include <algorithm>\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(c++98-compat)\n#endif\n\nusing namespace rapidjson;\n\nTEST(Value, Size) {\n    if (sizeof(SizeType) == 4) {\n#if RAPIDJSON_48BITPOINTER_OPTIMIZATION\n        EXPECT_EQ(16u, sizeof(Value));\n#elif RAPIDJSON_64BIT\n        EXPECT_EQ(24u, sizeof(Value));\n#else\n        EXPECT_EQ(16u, sizeof(Value));\n#endif\n    }\n}\n\nTEST(Value, DefaultConstructor) {\n    Value x;\n    EXPECT_EQ(kNullType, x.GetType());\n    EXPECT_TRUE(x.IsNull());\n\n    //std::cout << \"sizeof(Value): \" << sizeof(x) << std::endl;\n}\n\n// Should not pass compilation\n//TEST(Value, copy_constructor) {\n//  Value x(1234);\n//  Value y = x;\n//}\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n\n#if 0 // Many old compiler does not support these. 
Turn it off temporaily.\n\n#include <type_traits>\n\nTEST(Value, Traits) {\n    typedef GenericValue<UTF8<>, CrtAllocator> Value;\n    static_assert(std::is_constructible<Value>::value, \"\");\n    static_assert(std::is_default_constructible<Value>::value, \"\");\n#ifndef _MSC_VER\n    static_assert(!std::is_copy_constructible<Value>::value, \"\");\n#endif\n    static_assert(std::is_move_constructible<Value>::value, \"\");\n\n#ifndef _MSC_VER\n    static_assert(std::is_nothrow_constructible<Value>::value, \"\");\n    static_assert(std::is_nothrow_default_constructible<Value>::value, \"\");\n    static_assert(!std::is_nothrow_copy_constructible<Value>::value, \"\");\n    static_assert(std::is_nothrow_move_constructible<Value>::value, \"\");\n#endif\n\n    static_assert(std::is_assignable<Value,Value>::value, \"\");\n#ifndef _MSC_VER\n    static_assert(!std::is_copy_assignable<Value>::value, \"\");\n#endif\n    static_assert(std::is_move_assignable<Value>::value, \"\");\n\n#ifndef _MSC_VER\n    static_assert(std::is_nothrow_assignable<Value, Value>::value, \"\");\n#endif\n    static_assert(!std::is_nothrow_copy_assignable<Value>::value, \"\");\n#ifndef _MSC_VER\n    static_assert(std::is_nothrow_move_assignable<Value>::value, \"\");\n#endif\n\n    static_assert(std::is_destructible<Value>::value, \"\");\n#ifndef _MSC_VER\n    static_assert(std::is_nothrow_destructible<Value>::value, \"\");\n#endif\n}\n\n#endif\n\nTEST(Value, MoveConstructor) {\n    typedef GenericValue<UTF8<>, CrtAllocator> V;\n    V::AllocatorType allocator;\n\n    V x((V(kArrayType)));\n    x.Reserve(4u, allocator);\n    x.PushBack(1, allocator).PushBack(2, allocator).PushBack(3, allocator).PushBack(4, allocator);\n    EXPECT_TRUE(x.IsArray());\n    EXPECT_EQ(4u, x.Size());\n\n    // Value y(x); // does not compile (!is_copy_constructible)\n    V y(std::move(x));\n    EXPECT_TRUE(x.IsNull());\n    EXPECT_TRUE(y.IsArray());\n    EXPECT_EQ(4u, y.Size());\n\n    // Value z = y; // does not compile 
(!is_copy_assignable)\n    V z = std::move(y);\n    EXPECT_TRUE(y.IsNull());\n    EXPECT_TRUE(z.IsArray());\n    EXPECT_EQ(4u, z.Size());\n}\n\n#endif // RAPIDJSON_HAS_CXX11_RVALUE_REFS\n\nTEST(Value, AssignmentOperator) {\n    Value x(1234);\n    Value y;\n    y = x;\n    EXPECT_TRUE(x.IsNull());    // move semantic\n    EXPECT_EQ(1234, y.GetInt());\n\n    y = 5678;\n    EXPECT_TRUE(y.IsInt());\n    EXPECT_EQ(5678, y.GetInt());\n\n    x = \"Hello\";\n    EXPECT_TRUE(x.IsString());\n    EXPECT_STREQ(x.GetString(),\"Hello\");\n\n    y = StringRef(x.GetString(),x.GetStringLength());\n    EXPECT_TRUE(y.IsString());\n    EXPECT_EQ(y.GetString(),x.GetString());\n    EXPECT_EQ(y.GetStringLength(),x.GetStringLength());\n\n    static char mstr[] = \"mutable\";\n    // y = mstr; // should not compile\n    y = StringRef(mstr);\n    EXPECT_TRUE(y.IsString());\n    EXPECT_EQ(y.GetString(),mstr);\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    // C++11 move assignment\n    x = Value(\"World\");\n    EXPECT_TRUE(x.IsString());\n    EXPECT_STREQ(\"World\", x.GetString());\n\n    x = std::move(y);\n    EXPECT_TRUE(y.IsNull());\n    EXPECT_TRUE(x.IsString());\n    EXPECT_EQ(x.GetString(), mstr);\n\n    y = std::move(Value().SetInt(1234));\n    EXPECT_TRUE(y.IsInt());\n    EXPECT_EQ(1234, y);\n#endif // RAPIDJSON_HAS_CXX11_RVALUE_REFS\n}\n\ntemplate <typename A, typename B> \nvoid TestEqual(const A& a, const B& b) {\n    EXPECT_TRUE (a == b);\n    EXPECT_FALSE(a != b);\n    EXPECT_TRUE (b == a);\n    EXPECT_FALSE(b != a);\n}\n\ntemplate <typename A, typename B> \nvoid TestUnequal(const A& a, const B& b) {\n    EXPECT_FALSE(a == b);\n    EXPECT_TRUE (a != b);\n    EXPECT_FALSE(b == a);\n    EXPECT_TRUE (b != a);\n}\n\nTEST(Value, EqualtoOperator) {\n    Value::AllocatorType allocator;\n    Value x(kObjectType);\n    x.AddMember(\"hello\", \"world\", allocator)\n        .AddMember(\"t\", Value(true).Move(), allocator)\n        .AddMember(\"f\", Value(false).Move(), allocator)\n        
.AddMember(\"n\", Value(kNullType).Move(), allocator)\n        .AddMember(\"i\", 123, allocator)\n        .AddMember(\"pi\", 3.14, allocator)\n        .AddMember(\"a\", Value(kArrayType).Move().PushBack(1, allocator).PushBack(2, allocator).PushBack(3, allocator), allocator);\n\n    // Test templated operator==() and operator!=()\n    TestEqual(x[\"hello\"], \"world\");\n    const char* cc = \"world\";\n    TestEqual(x[\"hello\"], cc);\n    char* c = strdup(\"world\");\n    TestEqual(x[\"hello\"], c);\n    free(c);\n\n    TestEqual(x[\"t\"], true);\n    TestEqual(x[\"f\"], false);\n    TestEqual(x[\"i\"], 123);\n    TestEqual(x[\"pi\"], 3.14);\n\n    // Test operator==() (including different allocators)\n    CrtAllocator crtAllocator;\n    GenericValue<UTF8<>, CrtAllocator> y;\n    GenericDocument<UTF8<>, CrtAllocator> z(&crtAllocator);\n    y.CopyFrom(x, crtAllocator);\n    z.CopyFrom(y, z.GetAllocator());\n    TestEqual(x, y);\n    TestEqual(y, z);\n    TestEqual(z, x);\n\n    // Swapping member order should be fine.\n    EXPECT_TRUE(y.RemoveMember(\"t\"));\n    TestUnequal(x, y);\n    TestUnequal(z, y);\n    EXPECT_TRUE(z.RemoveMember(\"t\"));\n    TestUnequal(x, z);\n    TestEqual(y, z);\n    y.AddMember(\"t\", false, crtAllocator);\n    z.AddMember(\"t\", false, z.GetAllocator());\n    TestUnequal(x, y);\n    TestUnequal(z, x);\n    y[\"t\"] = true;\n    z[\"t\"] = true;\n    TestEqual(x, y);\n    TestEqual(y, z);\n    TestEqual(z, x);\n\n    // Swapping element order is not OK\n    x[\"a\"][0].Swap(x[\"a\"][1]);\n    TestUnequal(x, y);\n    x[\"a\"][0].Swap(x[\"a\"][1]);\n    TestEqual(x, y);\n\n    // Array of different size\n    x[\"a\"].PushBack(4, allocator);\n    TestUnequal(x, y);\n    x[\"a\"].PopBack();\n    TestEqual(x, y);\n\n    // Issue #129: compare Uint64\n    x.SetUint64(RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0xFFFFFFF0));\n    y.SetUint64(RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0xFFFFFFFF));\n    TestUnequal(x, y);\n}\n\ntemplate <typename Value>\nvoid 
TestCopyFrom() {\n    typename Value::AllocatorType a;\n    Value v1(1234);\n    Value v2(v1, a); // deep copy constructor\n    EXPECT_TRUE(v1.GetType() == v2.GetType());\n    EXPECT_EQ(v1.GetInt(), v2.GetInt());\n\n    v1.SetString(\"foo\");\n    v2.CopyFrom(v1, a);\n    EXPECT_TRUE(v1.GetType() == v2.GetType());\n    EXPECT_STREQ(v1.GetString(), v2.GetString());\n    EXPECT_EQ(v1.GetString(), v2.GetString()); // string NOT copied\n\n    v1.SetString(\"bar\", a); // copy string\n    v2.CopyFrom(v1, a);\n    EXPECT_TRUE(v1.GetType() == v2.GetType());\n    EXPECT_STREQ(v1.GetString(), v2.GetString());\n    EXPECT_NE(v1.GetString(), v2.GetString()); // string copied\n\n\n    v1.SetArray().PushBack(1234, a);\n    v2.CopyFrom(v1, a);\n    EXPECT_TRUE(v2.IsArray());\n    EXPECT_EQ(v1.Size(), v2.Size());\n\n    v1.PushBack(Value().SetString(\"foo\", a), a); // push string copy\n    EXPECT_TRUE(v1.Size() != v2.Size());\n    v2.CopyFrom(v1, a);\n    EXPECT_TRUE(v1.Size() == v2.Size());\n    EXPECT_STREQ(v1[1].GetString(), v2[1].GetString());\n    EXPECT_NE(v1[1].GetString(), v2[1].GetString()); // string got copied\n}\n\nTEST(Value, CopyFrom) {\n    TestCopyFrom<Value>();\n    TestCopyFrom<GenericValue<UTF8<>, CrtAllocator> >();\n}\n\nTEST(Value, Swap) {\n    Value v1(1234);\n    Value v2(kObjectType);\n\n    EXPECT_EQ(&v1, &v1.Swap(v2));\n    EXPECT_TRUE(v1.IsObject());\n    EXPECT_TRUE(v2.IsInt());\n    EXPECT_EQ(1234, v2.GetInt());\n\n    // testing std::swap compatibility\n    using std::swap;\n    swap(v1, v2);\n    EXPECT_TRUE(v1.IsInt());\n    EXPECT_TRUE(v2.IsObject());\n}\n\nTEST(Value, Null) {\n    // Default constructor\n    Value x;\n    EXPECT_EQ(kNullType, x.GetType());\n    EXPECT_TRUE(x.IsNull());\n\n    EXPECT_FALSE(x.IsTrue());\n    EXPECT_FALSE(x.IsFalse());\n    EXPECT_FALSE(x.IsNumber());\n    EXPECT_FALSE(x.IsString());\n    EXPECT_FALSE(x.IsObject());\n    EXPECT_FALSE(x.IsArray());\n\n    // Constructor with type\n    Value y(kNullType);\n    
EXPECT_TRUE(y.IsNull());\n\n    // SetNull();\n    Value z(true);\n    z.SetNull();\n    EXPECT_TRUE(z.IsNull());\n}\n\nTEST(Value, True) {\n    // Constructor with bool\n    Value x(true);\n    EXPECT_EQ(kTrueType, x.GetType());\n    EXPECT_TRUE(x.GetBool());\n    EXPECT_TRUE(x.IsBool());\n    EXPECT_TRUE(x.IsTrue());\n\n    EXPECT_FALSE(x.IsNull());\n    EXPECT_FALSE(x.IsFalse());\n    EXPECT_FALSE(x.IsNumber());\n    EXPECT_FALSE(x.IsString());\n    EXPECT_FALSE(x.IsObject());\n    EXPECT_FALSE(x.IsArray());\n\n    // Constructor with type\n    Value y(kTrueType);\n    EXPECT_TRUE(y.IsTrue());\n\n    // SetBool()\n    Value z;\n    z.SetBool(true);\n    EXPECT_TRUE(z.IsTrue());\n\n    // Templated functions\n    EXPECT_TRUE(z.Is<bool>());\n    EXPECT_TRUE(z.Get<bool>());\n    EXPECT_FALSE(z.Set<bool>(false).Get<bool>());\n    EXPECT_TRUE(z.Set(true).Get<bool>());\n}\n\nTEST(Value, False) {\n    // Constructor with bool\n    Value x(false);\n    EXPECT_EQ(kFalseType, x.GetType());\n    EXPECT_TRUE(x.IsBool());\n    EXPECT_TRUE(x.IsFalse());\n\n    EXPECT_FALSE(x.IsNull());\n    EXPECT_FALSE(x.IsTrue());\n    EXPECT_FALSE(x.GetBool());\n    //EXPECT_FALSE((bool)x);\n    EXPECT_FALSE(x.IsNumber());\n    EXPECT_FALSE(x.IsString());\n    EXPECT_FALSE(x.IsObject());\n    EXPECT_FALSE(x.IsArray());\n\n    // Constructor with type\n    Value y(kFalseType);\n    EXPECT_TRUE(y.IsFalse());\n\n    // SetBool()\n    Value z;\n    z.SetBool(false);\n    EXPECT_TRUE(z.IsFalse());\n}\n\nTEST(Value, Int) {\n    // Constructor with int\n    Value x(1234);\n    EXPECT_EQ(kNumberType, x.GetType());\n    EXPECT_EQ(1234, x.GetInt());\n    EXPECT_EQ(1234u, x.GetUint());\n    EXPECT_EQ(1234, x.GetInt64());\n    EXPECT_EQ(1234u, x.GetUint64());\n    EXPECT_NEAR(1234.0, x.GetDouble(), 0.0);\n    //EXPECT_EQ(1234, (int)x);\n    //EXPECT_EQ(1234, (unsigned)x);\n    //EXPECT_EQ(1234, (int64_t)x);\n    //EXPECT_EQ(1234, (uint64_t)x);\n    //EXPECT_EQ(1234, (double)x);\n    
EXPECT_TRUE(x.IsNumber());\n    EXPECT_TRUE(x.IsInt());\n    EXPECT_TRUE(x.IsUint());\n    EXPECT_TRUE(x.IsInt64());\n    EXPECT_TRUE(x.IsUint64());\n\n    EXPECT_FALSE(x.IsDouble());\n    EXPECT_FALSE(x.IsFloat());\n    EXPECT_FALSE(x.IsNull());\n    EXPECT_FALSE(x.IsBool());\n    EXPECT_FALSE(x.IsFalse());\n    EXPECT_FALSE(x.IsTrue());\n    EXPECT_FALSE(x.IsString());\n    EXPECT_FALSE(x.IsObject());\n    EXPECT_FALSE(x.IsArray());\n\n    Value nx(-1234);\n    EXPECT_EQ(-1234, nx.GetInt());\n    EXPECT_EQ(-1234, nx.GetInt64());\n    EXPECT_TRUE(nx.IsInt());\n    EXPECT_TRUE(nx.IsInt64());\n    EXPECT_FALSE(nx.IsUint());\n    EXPECT_FALSE(nx.IsUint64());\n\n    // Constructor with type\n    Value y(kNumberType);\n    EXPECT_TRUE(y.IsNumber());\n    EXPECT_TRUE(y.IsInt());\n    EXPECT_EQ(0, y.GetInt());\n\n    // SetInt()\n    Value z;\n    z.SetInt(1234);\n    EXPECT_EQ(1234, z.GetInt());\n\n    // operator=(int)\n    z = 5678;\n    EXPECT_EQ(5678, z.GetInt());\n\n    // Templated functions\n    EXPECT_TRUE(z.Is<int>());\n    EXPECT_EQ(5678, z.Get<int>());\n    EXPECT_EQ(5679, z.Set(5679).Get<int>());\n    EXPECT_EQ(5680, z.Set<int>(5680).Get<int>());\n\n#ifdef _MSC_VER\n    // long as int on MSC platforms\n    RAPIDJSON_STATIC_ASSERT(sizeof(long) == sizeof(int));\n    z.SetInt(2222);\n    EXPECT_TRUE(z.Is<long>());\n    EXPECT_EQ(2222l, z.Get<long>());\n    EXPECT_EQ(3333l, z.Set(3333l).Get<long>());\n    EXPECT_EQ(4444l, z.Set<long>(4444l).Get<long>());\n    EXPECT_TRUE(z.IsInt());\n#endif\n}\n\nTEST(Value, Uint) {\n    // Constructor with unsigned\n    Value x(1234u);\n    EXPECT_EQ(kNumberType, x.GetType());\n    EXPECT_EQ(1234, x.GetInt());\n    EXPECT_EQ(1234u, x.GetUint());\n    EXPECT_EQ(1234, x.GetInt64());\n    EXPECT_EQ(1234u, x.GetUint64());\n    EXPECT_TRUE(x.IsNumber());\n    EXPECT_TRUE(x.IsInt());\n    EXPECT_TRUE(x.IsUint());\n    EXPECT_TRUE(x.IsInt64());\n    EXPECT_TRUE(x.IsUint64());\n    EXPECT_NEAR(1234.0, x.GetDouble(), 0.0);   // Number can 
always be cast as double but !IsDouble().\n\n    EXPECT_FALSE(x.IsDouble());\n    EXPECT_FALSE(x.IsFloat());\n    EXPECT_FALSE(x.IsNull());\n    EXPECT_FALSE(x.IsBool());\n    EXPECT_FALSE(x.IsFalse());\n    EXPECT_FALSE(x.IsTrue());\n    EXPECT_FALSE(x.IsString());\n    EXPECT_FALSE(x.IsObject());\n    EXPECT_FALSE(x.IsArray());\n\n    // SetUint()\n    Value z;\n    z.SetUint(1234);\n    EXPECT_EQ(1234u, z.GetUint());\n\n    // operator=(unsigned)\n    z = 5678u;\n    EXPECT_EQ(5678u, z.GetUint());\n\n    z = 2147483648u;    // 2^31, cannot cast as int\n    EXPECT_EQ(2147483648u, z.GetUint());\n    EXPECT_FALSE(z.IsInt());\n    EXPECT_TRUE(z.IsInt64());   // Issue 41: Incorrect parsing of unsigned int number types\n\n    // Templated functions\n    EXPECT_TRUE(z.Is<unsigned>());\n    EXPECT_EQ(2147483648u, z.Get<unsigned>());\n    EXPECT_EQ(2147483649u, z.Set(2147483649u).Get<unsigned>());\n    EXPECT_EQ(2147483650u, z.Set<unsigned>(2147483650u).Get<unsigned>());\n\n#ifdef _MSC_VER\n    // unsigned long as unsigned on MSC platforms\n    RAPIDJSON_STATIC_ASSERT(sizeof(unsigned long) == sizeof(unsigned));\n    z.SetUint(2222);\n    EXPECT_TRUE(z.Is<unsigned long>());\n    EXPECT_EQ(2222ul, z.Get<unsigned long>());\n    EXPECT_EQ(3333ul, z.Set(3333ul).Get<unsigned long>());\n    EXPECT_EQ(4444ul, z.Set<unsigned long>(4444ul).Get<unsigned long>());\n    EXPECT_TRUE(z.IsUint());\n#endif\n}\n\nTEST(Value, Int64) {\n    // Constructor with int64_t\n    Value x(int64_t(1234));\n    EXPECT_EQ(kNumberType, x.GetType());\n    EXPECT_EQ(1234, x.GetInt());\n    EXPECT_EQ(1234u, x.GetUint());\n    EXPECT_EQ(1234, x.GetInt64());\n    EXPECT_EQ(1234u, x.GetUint64());\n    EXPECT_TRUE(x.IsNumber());\n    EXPECT_TRUE(x.IsInt());\n    EXPECT_TRUE(x.IsUint());\n    EXPECT_TRUE(x.IsInt64());\n    EXPECT_TRUE(x.IsUint64());\n\n    EXPECT_FALSE(x.IsDouble());\n    EXPECT_FALSE(x.IsFloat());\n    EXPECT_FALSE(x.IsNull());\n    EXPECT_FALSE(x.IsBool());\n    EXPECT_FALSE(x.IsFalse());\n    
EXPECT_FALSE(x.IsTrue());\n    EXPECT_FALSE(x.IsString());\n    EXPECT_FALSE(x.IsObject());\n    EXPECT_FALSE(x.IsArray());\n\n    Value nx(int64_t(-1234));\n    EXPECT_EQ(-1234, nx.GetInt());\n    EXPECT_EQ(-1234, nx.GetInt64());\n    EXPECT_TRUE(nx.IsInt());\n    EXPECT_TRUE(nx.IsInt64());\n    EXPECT_FALSE(nx.IsUint());\n    EXPECT_FALSE(nx.IsUint64());\n\n    // SetInt64()\n    Value z;\n    z.SetInt64(1234);\n    EXPECT_EQ(1234, z.GetInt64());\n\n    z.SetInt64(2147483648u);   // 2^31, cannot cast as int\n    EXPECT_FALSE(z.IsInt());\n    EXPECT_TRUE(z.IsUint());\n    EXPECT_NEAR(2147483648.0, z.GetDouble(), 0.0);\n\n    z.SetInt64(int64_t(4294967295u) + 1);   // 2^32, cannot cast as uint\n    EXPECT_FALSE(z.IsInt());\n    EXPECT_FALSE(z.IsUint());\n    EXPECT_NEAR(4294967296.0, z.GetDouble(), 0.0);\n\n    z.SetInt64(-int64_t(2147483648u) - 1);   // -2^31-1, cannot cast as int\n    EXPECT_FALSE(z.IsInt());\n    EXPECT_NEAR(-2147483649.0, z.GetDouble(), 0.0);\n\n    int64_t i = static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x80000000, 0x00000000));\n    z.SetInt64(i);\n    EXPECT_DOUBLE_EQ(-9223372036854775808.0, z.GetDouble());\n\n    // Templated functions\n    EXPECT_TRUE(z.Is<int64_t>());\n    EXPECT_EQ(i, z.Get<int64_t>());\n#if 0 // signed integer underflow is undefined behaviour\n    EXPECT_EQ(i - 1, z.Set(i - 1).Get<int64_t>());\n    EXPECT_EQ(i - 2, z.Set<int64_t>(i - 2).Get<int64_t>());\n#endif\n}\n\nTEST(Value, Uint64) {\n    // Constructor with uint64_t\n    Value x(uint64_t(1234));\n    EXPECT_EQ(kNumberType, x.GetType());\n    EXPECT_EQ(1234, x.GetInt());\n    EXPECT_EQ(1234u, x.GetUint());\n    EXPECT_EQ(1234, x.GetInt64());\n    EXPECT_EQ(1234u, x.GetUint64());\n    EXPECT_TRUE(x.IsNumber());\n    EXPECT_TRUE(x.IsInt());\n    EXPECT_TRUE(x.IsUint());\n    EXPECT_TRUE(x.IsInt64());\n    EXPECT_TRUE(x.IsUint64());\n\n    EXPECT_FALSE(x.IsDouble());\n    EXPECT_FALSE(x.IsFloat());\n    EXPECT_FALSE(x.IsNull());\n    EXPECT_FALSE(x.IsBool());\n    
EXPECT_FALSE(x.IsFalse());\n    EXPECT_FALSE(x.IsTrue());\n    EXPECT_FALSE(x.IsString());\n    EXPECT_FALSE(x.IsObject());\n    EXPECT_FALSE(x.IsArray());\n\n    // SetUint64()\n    Value z;\n    z.SetUint64(1234);\n    EXPECT_EQ(1234u, z.GetUint64());\n\n    z.SetUint64(uint64_t(2147483648u));  // 2^31, cannot cast as int\n    EXPECT_FALSE(z.IsInt());\n    EXPECT_TRUE(z.IsUint());\n    EXPECT_TRUE(z.IsInt64());\n\n    z.SetUint64(uint64_t(4294967295u) + 1);  // 2^32, cannot cast as uint\n    EXPECT_FALSE(z.IsInt());\n    EXPECT_FALSE(z.IsUint());\n    EXPECT_TRUE(z.IsInt64());\n\n    uint64_t u = RAPIDJSON_UINT64_C2(0x80000000, 0x00000000);\n    z.SetUint64(u);    // 2^63 cannot cast as int64\n    EXPECT_FALSE(z.IsInt64());\n    EXPECT_EQ(u, z.GetUint64()); // Issue 48\n    EXPECT_DOUBLE_EQ(9223372036854775808.0, z.GetDouble());\n\n    // Templated functions\n    EXPECT_TRUE(z.Is<uint64_t>());\n    EXPECT_EQ(u, z.Get<uint64_t>());\n    EXPECT_EQ(u + 1, z.Set(u + 1).Get<uint64_t>());\n    EXPECT_EQ(u + 2, z.Set<uint64_t>(u + 2).Get<uint64_t>());\n}\n\nTEST(Value, Double) {\n    // Constructor with double\n    Value x(12.34);\n    EXPECT_EQ(kNumberType, x.GetType());\n    EXPECT_NEAR(12.34, x.GetDouble(), 0.0);\n    EXPECT_TRUE(x.IsNumber());\n    EXPECT_TRUE(x.IsDouble());\n\n    EXPECT_FALSE(x.IsInt());\n    EXPECT_FALSE(x.IsNull());\n    EXPECT_FALSE(x.IsBool());\n    EXPECT_FALSE(x.IsFalse());\n    EXPECT_FALSE(x.IsTrue());\n    EXPECT_FALSE(x.IsString());\n    EXPECT_FALSE(x.IsObject());\n    EXPECT_FALSE(x.IsArray());\n\n    // SetDouble()\n    Value z;\n    z.SetDouble(12.34);\n    EXPECT_NEAR(12.34, z.GetDouble(), 0.0);\n\n    z = 56.78;\n    EXPECT_NEAR(56.78, z.GetDouble(), 0.0);\n\n    // Templated functions\n    EXPECT_TRUE(z.Is<double>());\n    EXPECT_EQ(56.78, z.Get<double>());\n    EXPECT_EQ(57.78, z.Set(57.78).Get<double>());\n    EXPECT_EQ(58.78, z.Set<double>(58.78).Get<double>());\n}\n\nTEST(Value, Float) {\n    // Constructor with float\n    
Value x(12.34f);\n    EXPECT_EQ(kNumberType, x.GetType());\n    EXPECT_NEAR(12.34f, x.GetFloat(), 0.0);\n    EXPECT_TRUE(x.IsNumber());\n    EXPECT_TRUE(x.IsDouble());\n    EXPECT_TRUE(x.IsFloat());\n\n    EXPECT_FALSE(x.IsInt());\n    EXPECT_FALSE(x.IsNull());\n    EXPECT_FALSE(x.IsBool());\n    EXPECT_FALSE(x.IsFalse());\n    EXPECT_FALSE(x.IsTrue());\n    EXPECT_FALSE(x.IsString());\n    EXPECT_FALSE(x.IsObject());\n    EXPECT_FALSE(x.IsArray());\n\n    // SetFloat()\n    Value z;\n    z.SetFloat(12.34f);\n    EXPECT_NEAR(12.34f, z.GetFloat(), 0.0f);\n\n    // Issue 573\n    z.SetInt(0);\n    EXPECT_EQ(0.0f, z.GetFloat());\n\n    z = 56.78f;\n    EXPECT_NEAR(56.78f, z.GetFloat(), 0.0f);\n\n    // Templated functions\n    EXPECT_TRUE(z.Is<float>());\n    EXPECT_EQ(56.78f, z.Get<float>());\n    EXPECT_EQ(57.78f, z.Set(57.78f).Get<float>());\n    EXPECT_EQ(58.78f, z.Set<float>(58.78f).Get<float>());\n}\n\nTEST(Value, IsLosslessDouble) {\n    EXPECT_TRUE(Value(0.0).IsLosslessDouble());\n    EXPECT_TRUE(Value(12.34).IsLosslessDouble());\n    EXPECT_TRUE(Value(-123).IsLosslessDouble());\n    EXPECT_TRUE(Value(2147483648u).IsLosslessDouble());\n    EXPECT_TRUE(Value(-static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x40000000, 0x00000000))).IsLosslessDouble());\n#if !(defined(_MSC_VER) && _MSC_VER < 1800) // VC2010 has problem\n    EXPECT_TRUE(Value(RAPIDJSON_UINT64_C2(0xA0000000, 0x00000000)).IsLosslessDouble());\n#endif\n\n    EXPECT_FALSE(Value(static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x7FFFFFFF, 0xFFFFFFFF))).IsLosslessDouble()); // INT64_MAX\n    EXPECT_FALSE(Value(-static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x7FFFFFFF, 0xFFFFFFFF))).IsLosslessDouble()); // -INT64_MAX\n    EXPECT_TRUE(Value(-static_cast<int64_t>(RAPIDJSON_UINT64_C2(0x7FFFFFFF, 0xFFFFFFFF)) - 1).IsLosslessDouble()); // INT64_MIN\n    EXPECT_FALSE(Value(RAPIDJSON_UINT64_C2(0xFFFFFFFF, 0xFFFFFFFF)).IsLosslessDouble()); // UINT64_MAX\n\n    EXPECT_TRUE(Value(3.4028234e38f).IsLosslessDouble()); // FLT_MAX\n    
EXPECT_TRUE(Value(-3.4028234e38f).IsLosslessDouble()); // -FLT_MAX\n    EXPECT_TRUE(Value(1.17549435e-38f).IsLosslessDouble()); // FLT_MIN\n    EXPECT_TRUE(Value(-1.17549435e-38f).IsLosslessDouble()); // -FLT_MIN\n    EXPECT_TRUE(Value(1.7976931348623157e+308).IsLosslessDouble()); // DBL_MAX\n    EXPECT_TRUE(Value(-1.7976931348623157e+308).IsLosslessDouble()); // -DBL_MAX\n    EXPECT_TRUE(Value(2.2250738585072014e-308).IsLosslessDouble()); // DBL_MIN\n    EXPECT_TRUE(Value(-2.2250738585072014e-308).IsLosslessDouble()); // -DBL_MIN\n}\n\nTEST(Value, IsLosslessFloat) {\n    EXPECT_TRUE(Value(12.25).IsLosslessFloat());\n    EXPECT_TRUE(Value(-123).IsLosslessFloat());\n    EXPECT_TRUE(Value(2147483648u).IsLosslessFloat());\n    EXPECT_TRUE(Value(3.4028234e38f).IsLosslessFloat());\n    EXPECT_TRUE(Value(-3.4028234e38f).IsLosslessFloat());\n    EXPECT_FALSE(Value(3.4028235e38).IsLosslessFloat());\n    EXPECT_FALSE(Value(0.3).IsLosslessFloat());\n}\n\nTEST(Value, String) {\n    // Construction with const string\n    Value x(\"Hello\", 5); // literal\n    EXPECT_EQ(kStringType, x.GetType());\n    EXPECT_TRUE(x.IsString());\n    EXPECT_STREQ(\"Hello\", x.GetString());\n    EXPECT_EQ(5u, x.GetStringLength());\n\n    EXPECT_FALSE(x.IsNumber());\n    EXPECT_FALSE(x.IsNull());\n    EXPECT_FALSE(x.IsBool());\n    EXPECT_FALSE(x.IsFalse());\n    EXPECT_FALSE(x.IsTrue());\n    EXPECT_FALSE(x.IsObject());\n    EXPECT_FALSE(x.IsArray());\n\n    static const char cstr[] = \"World\"; // const array\n    Value(cstr).Swap(x);\n    EXPECT_TRUE(x.IsString());\n    EXPECT_EQ(x.GetString(), cstr);\n    EXPECT_EQ(x.GetStringLength(), sizeof(cstr)-1);\n\n    static char mstr[] = \"Howdy\"; // non-const array\n    // Value(mstr).Swap(x); // should not compile\n    Value(StringRef(mstr)).Swap(x);\n    EXPECT_TRUE(x.IsString());\n    EXPECT_EQ(x.GetString(), mstr);\n    EXPECT_EQ(x.GetStringLength(), sizeof(mstr)-1);\n    strncpy(mstr,\"Hello\", sizeof(mstr));\n    EXPECT_STREQ(x.GetString(), 
\"Hello\");\n\n    const char* pstr = cstr;\n    //Value(pstr).Swap(x); // should not compile\n    Value(StringRef(pstr)).Swap(x);\n    EXPECT_TRUE(x.IsString());\n    EXPECT_EQ(x.GetString(), cstr);\n    EXPECT_EQ(x.GetStringLength(), sizeof(cstr)-1);\n\n    char* mpstr = mstr;\n    Value(StringRef(mpstr,sizeof(mstr)-1)).Swap(x);\n    EXPECT_TRUE(x.IsString());\n    EXPECT_EQ(x.GetString(), mstr);\n    EXPECT_EQ(x.GetStringLength(), 5u);\n    EXPECT_STREQ(x.GetString(), \"Hello\");\n\n    // Constructor with copy string\n    MemoryPoolAllocator<> allocator;\n    Value c(x.GetString(), x.GetStringLength(), allocator);\n    EXPECT_NE(x.GetString(), c.GetString());\n    EXPECT_EQ(x.GetStringLength(), c.GetStringLength());\n    EXPECT_STREQ(x.GetString(), c.GetString());\n    //x.SetString(\"World\");\n    x.SetString(\"World\", 5);\n    EXPECT_STREQ(\"Hello\", c.GetString());\n    EXPECT_EQ(5u, c.GetStringLength());\n\n    // Constructor with type\n    Value y(kStringType);\n    EXPECT_TRUE(y.IsString());\n    EXPECT_STREQ(\"\", y.GetString());    // Empty string should be \"\" instead of 0 (issue 226)\n    EXPECT_EQ(0u, y.GetStringLength());\n\n    // SetConsttring()\n    Value z;\n    z.SetString(\"Hello\");\n    EXPECT_TRUE(x.IsString());\n    z.SetString(\"Hello\", 5);\n    EXPECT_STREQ(\"Hello\", z.GetString());\n    EXPECT_STREQ(\"Hello\", z.GetString());\n    EXPECT_EQ(5u, z.GetStringLength());\n\n    z.SetString(\"Hello\");\n    EXPECT_TRUE(z.IsString());\n    EXPECT_STREQ(\"Hello\", z.GetString());\n\n    //z.SetString(mstr); // should not compile\n    //z.SetString(pstr); // should not compile\n    z.SetString(StringRef(mstr));\n    EXPECT_TRUE(z.IsString());\n    EXPECT_STREQ(z.GetString(), mstr);\n\n    z.SetString(cstr);\n    EXPECT_TRUE(z.IsString());\n    EXPECT_EQ(cstr, z.GetString());\n\n    z = cstr;\n    EXPECT_TRUE(z.IsString());\n    EXPECT_EQ(cstr, z.GetString());\n\n    // SetString()\n    char s[] = \"World\";\n    Value w;\n    w.SetString(s, 
static_cast<SizeType>(strlen(s)), allocator);\n    s[0] = '\\0';\n    EXPECT_STREQ(\"World\", w.GetString());\n    EXPECT_EQ(5u, w.GetStringLength());\n\n    // templated functions\n    EXPECT_TRUE(z.Is<const char*>());\n    EXPECT_STREQ(cstr, z.Get<const char*>());\n    EXPECT_STREQ(\"Apple\", z.Set<const char*>(\"Apple\").Get<const char*>());\n\n#if RAPIDJSON_HAS_STDSTRING\n    {\n        std::string str = \"Hello World\";\n        str[5] = '\\0';\n        EXPECT_STREQ(str.data(),\"Hello\"); // embedded '\\0'\n        EXPECT_EQ(str.size(), 11u);\n\n        // no copy\n        Value vs0(StringRef(str));\n        EXPECT_TRUE(vs0.IsString());\n        EXPECT_EQ(vs0.GetString(), str.data());\n        EXPECT_EQ(vs0.GetStringLength(), str.size());\n        TestEqual(vs0, str);\n\n        // do copy\n        Value vs1(str, allocator);\n        EXPECT_TRUE(vs1.IsString());\n        EXPECT_NE(vs1.GetString(), str.data());\n        EXPECT_NE(vs1.GetString(), str); // not equal due to embedded '\\0'\n        EXPECT_EQ(vs1.GetStringLength(), str.size());\n        TestEqual(vs1, str);\n\n        // SetString\n        str = \"World\";\n        vs0.SetNull().SetString(str, allocator);\n        EXPECT_TRUE(vs0.IsString());\n        EXPECT_STREQ(vs0.GetString(), str.c_str());\n        EXPECT_EQ(vs0.GetStringLength(), str.size());\n        TestEqual(str, vs0);\n        TestUnequal(str, vs1);\n\n        // vs1 = str; // should not compile\n        vs1 = StringRef(str);\n        TestEqual(str, vs1);\n        TestEqual(vs0, vs1);\n\n        // Templated function.\n        EXPECT_TRUE(vs0.Is<std::string>());\n        EXPECT_EQ(str, vs0.Get<std::string>());\n        vs0.Set<std::string>(std::string(\"Apple\"), allocator);\n        EXPECT_EQ(std::string(\"Apple\"), vs0.Get<std::string>());\n        vs0.Set(std::string(\"Orange\"), allocator);\n        EXPECT_EQ(std::string(\"Orange\"), vs0.Get<std::string>());\n    }\n#endif // RAPIDJSON_HAS_STDSTRING\n}\n\n// Issue 226: Value of string 
type should not point to NULL\nTEST(Value, SetStringNull) {\n\n    MemoryPoolAllocator<> allocator;\n    const char* nullPtr = 0;\n    {\n        // Construction with string type creates empty string\n        Value v(kStringType);\n        EXPECT_NE(v.GetString(), nullPtr); // non-null string returned\n        EXPECT_EQ(v.GetStringLength(), 0u);\n\n        // Construction from/setting to null without length not allowed\n        EXPECT_THROW(Value(StringRef(nullPtr)), AssertException);\n        EXPECT_THROW(Value(StringRef(nullPtr), allocator), AssertException);\n        EXPECT_THROW(v.SetString(nullPtr, allocator), AssertException);\n\n        // Non-empty length with null string is not allowed\n        EXPECT_THROW(v.SetString(nullPtr, 17u), AssertException);\n        EXPECT_THROW(v.SetString(nullPtr, 42u, allocator), AssertException);\n\n        // Setting to null string with empty length is allowed\n        v.SetString(nullPtr, 0u);\n        EXPECT_NE(v.GetString(), nullPtr); // non-null string returned\n        EXPECT_EQ(v.GetStringLength(), 0u);\n\n        v.SetNull();\n        v.SetString(nullPtr, 0u, allocator);\n        EXPECT_NE(v.GetString(), nullPtr); // non-null string returned\n        EXPECT_EQ(v.GetStringLength(), 0u);\n    }\n    // Construction with null string and empty length is allowed\n    {\n        Value v(nullPtr,0u);\n        EXPECT_NE(v.GetString(), nullPtr); // non-null string returned\n        EXPECT_EQ(v.GetStringLength(), 0u);\n    }\n    {\n        Value v(nullPtr, 0u, allocator);\n        EXPECT_NE(v.GetString(), nullPtr); // non-null string returned\n        EXPECT_EQ(v.GetStringLength(), 0u);\n    }\n}\n\ntemplate <typename T, typename Allocator>\nstatic void TestArray(T& x, Allocator& allocator) {\n    const T& y = x;\n\n    // PushBack()\n    Value v;\n    x.PushBack(v, allocator);\n    v.SetBool(true);\n    x.PushBack(v, allocator);\n    v.SetBool(false);\n    x.PushBack(v, allocator);\n    v.SetInt(123);\n    x.PushBack(v, 
allocator);\n    //x.PushBack((const char*)\"foo\", allocator); // should not compile\n    x.PushBack(\"foo\", allocator);\n\n    EXPECT_FALSE(x.Empty());\n    EXPECT_EQ(5u, x.Size());\n    EXPECT_FALSE(y.Empty());\n    EXPECT_EQ(5u, y.Size());\n    EXPECT_TRUE(x[SizeType(0)].IsNull());\n    EXPECT_TRUE(x[1].IsTrue());\n    EXPECT_TRUE(x[2].IsFalse());\n    EXPECT_TRUE(x[3].IsInt());\n    EXPECT_EQ(123, x[3].GetInt());\n    EXPECT_TRUE(y[SizeType(0)].IsNull());\n    EXPECT_TRUE(y[1].IsTrue());\n    EXPECT_TRUE(y[2].IsFalse());\n    EXPECT_TRUE(y[3].IsInt());\n    EXPECT_EQ(123, y[3].GetInt());\n    EXPECT_TRUE(y[4].IsString());\n    EXPECT_STREQ(\"foo\", y[4].GetString());\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    // PushBack(GenericValue&&, Allocator&);\n    {\n        Value y2(kArrayType);\n        y2.PushBack(Value(true), allocator);\n        y2.PushBack(std::move(Value(kArrayType).PushBack(Value(1), allocator).PushBack(\"foo\", allocator)), allocator);\n        EXPECT_EQ(2u, y2.Size());\n        EXPECT_TRUE(y2[0].IsTrue());\n        EXPECT_TRUE(y2[1].IsArray());\n        EXPECT_EQ(2u, y2[1].Size());\n        EXPECT_TRUE(y2[1][0].IsInt());\n        EXPECT_TRUE(y2[1][1].IsString());\n    }\n#endif\n\n    // iterator\n    typename T::ValueIterator itr = x.Begin();\n    EXPECT_TRUE(itr != x.End());\n    EXPECT_TRUE(itr->IsNull());\n    ++itr;\n    EXPECT_TRUE(itr != x.End());\n    EXPECT_TRUE(itr->IsTrue());\n    ++itr;\n    EXPECT_TRUE(itr != x.End());\n    EXPECT_TRUE(itr->IsFalse());\n    ++itr;\n    EXPECT_TRUE(itr != x.End());\n    EXPECT_TRUE(itr->IsInt());\n    EXPECT_EQ(123, itr->GetInt());\n    ++itr;\n    EXPECT_TRUE(itr != x.End());\n    EXPECT_TRUE(itr->IsString());\n    EXPECT_STREQ(\"foo\", itr->GetString());\n\n    // const iterator\n    typename T::ConstValueIterator citr = y.Begin();\n    EXPECT_TRUE(citr != y.End());\n    EXPECT_TRUE(citr->IsNull());\n    ++citr;\n    EXPECT_TRUE(citr != y.End());\n    EXPECT_TRUE(citr->IsTrue());\n    ++citr;\n 
   EXPECT_TRUE(citr != y.End());\n    EXPECT_TRUE(citr->IsFalse());\n    ++citr;\n    EXPECT_TRUE(citr != y.End());\n    EXPECT_TRUE(citr->IsInt());\n    EXPECT_EQ(123, citr->GetInt());\n    ++citr;\n    EXPECT_TRUE(citr != y.End());\n    EXPECT_TRUE(citr->IsString());\n    EXPECT_STREQ(\"foo\", citr->GetString());\n\n    // PopBack()\n    x.PopBack();\n    EXPECT_EQ(4u, x.Size());\n    EXPECT_TRUE(y[SizeType(0)].IsNull());\n    EXPECT_TRUE(y[1].IsTrue());\n    EXPECT_TRUE(y[2].IsFalse());\n    EXPECT_TRUE(y[3].IsInt());\n\n    // Clear()\n    x.Clear();\n    EXPECT_TRUE(x.Empty());\n    EXPECT_EQ(0u, x.Size());\n    EXPECT_TRUE(y.Empty());\n    EXPECT_EQ(0u, y.Size());\n\n    // Erase(ValueIterator)\n\n    // Use array of array to ensure removed elements' destructor is called.\n    // [[0],[1],[2],...]\n    for (int i = 0; i < 10; i++)\n        x.PushBack(Value(kArrayType).PushBack(i, allocator).Move(), allocator);\n\n    // Erase the first\n    itr = x.Erase(x.Begin());\n    EXPECT_EQ(x.Begin(), itr);\n    EXPECT_EQ(9u, x.Size());\n    for (int i = 0; i < 9; i++)\n        EXPECT_EQ(i + 1, x[static_cast<SizeType>(i)][0].GetInt());\n\n    // Erase the last\n    itr = x.Erase(x.End() - 1);\n    EXPECT_EQ(x.End(), itr);\n    EXPECT_EQ(8u, x.Size());\n    for (int i = 0; i < 8; i++)\n        EXPECT_EQ(i + 1, x[static_cast<SizeType>(i)][0].GetInt());\n\n    // Erase the middle\n    itr = x.Erase(x.Begin() + 4);\n    EXPECT_EQ(x.Begin() + 4, itr);\n    EXPECT_EQ(7u, x.Size());\n    for (int i = 0; i < 4; i++)\n        EXPECT_EQ(i + 1, x[static_cast<SizeType>(i)][0].GetInt());\n    for (int i = 4; i < 7; i++)\n        EXPECT_EQ(i + 2, x[static_cast<SizeType>(i)][0].GetInt());\n\n    // Erase(ValueIterator, ValueIterator)\n    // Exhaustive test with all 0 <= first < n, first <= last <= n cases\n    const unsigned n = 10;\n    for (unsigned first = 0; first < n; first++) {\n        for (unsigned last = first; last <= n; last++) {\n            x.Clear();\n            for 
(unsigned i = 0; i < n; i++)\n                x.PushBack(Value(kArrayType).PushBack(i, allocator).Move(), allocator);\n\n            itr = x.Erase(x.Begin() + first, x.Begin() + last);\n            if (last == n)\n                EXPECT_EQ(x.End(), itr);\n            else\n                EXPECT_EQ(x.Begin() + first, itr);\n\n            size_t removeCount = last - first;\n            EXPECT_EQ(n - removeCount, x.Size());\n            for (unsigned i = 0; i < first; i++)\n                EXPECT_EQ(i, x[i][0].GetUint());\n            for (unsigned i = first; i < n - removeCount; i++)\n                EXPECT_EQ(i + removeCount, x[static_cast<SizeType>(i)][0].GetUint());\n        }\n    }\n}\n\nTEST(Value, Array) {\n    Value::AllocatorType allocator;\n    Value x(kArrayType);\n    const Value& y = x;\n\n    EXPECT_EQ(kArrayType, x.GetType());\n    EXPECT_TRUE(x.IsArray());\n    EXPECT_TRUE(x.Empty());\n    EXPECT_EQ(0u, x.Size());\n    EXPECT_TRUE(y.IsArray());\n    EXPECT_TRUE(y.Empty());\n    EXPECT_EQ(0u, y.Size());\n\n    EXPECT_FALSE(x.IsNull());\n    EXPECT_FALSE(x.IsBool());\n    EXPECT_FALSE(x.IsFalse());\n    EXPECT_FALSE(x.IsTrue());\n    EXPECT_FALSE(x.IsString());\n    EXPECT_FALSE(x.IsObject());\n\n    TestArray(x, allocator);\n\n    // Working in gcc without C++11, but VS2013 cannot compile. 
To be diagnosed.\n    // http://en.wikipedia.org/wiki/Erase-remove_idiom\n    x.Clear();\n    for (int i = 0; i < 10; i++)\n        if (i % 2 == 0)\n            x.PushBack(i, allocator);\n        else\n            x.PushBack(Value(kNullType).Move(), allocator);\n\n    const Value null(kNullType);\n    x.Erase(std::remove(x.Begin(), x.End(), null), x.End());\n    EXPECT_EQ(5u, x.Size());\n    for (int i = 0; i < 5; i++)\n        EXPECT_EQ(i * 2, x[static_cast<SizeType>(i)]);\n\n    // SetArray()\n    Value z;\n    z.SetArray();\n    EXPECT_TRUE(z.IsArray());\n    EXPECT_TRUE(z.Empty());\n\n    // PR #1503: assign from inner Value\n    {\n        CrtAllocator a; // Free() is not a noop\n        GenericValue<UTF8<>, CrtAllocator> nullValue;\n        GenericValue<UTF8<>, CrtAllocator> arrayValue(kArrayType);\n        arrayValue.PushBack(nullValue, a);\n        arrayValue = arrayValue[0]; // shouldn't crash (use after free)\n        EXPECT_TRUE(arrayValue.IsNull());\n    }\n}\n\nTEST(Value, ArrayHelper) {\n    Value::AllocatorType allocator;\n    {\n        Value x(kArrayType);\n        Value::Array a = x.GetArray();\n        TestArray(a, allocator);\n    }\n\n    {\n        Value x(kArrayType);\n        Value::Array a = x.GetArray();\n        a.PushBack(1, allocator);\n\n        Value::Array a2(a); // copy constructor\n        EXPECT_EQ(1u, a2.Size());\n\n        Value::Array a3 = a;\n        EXPECT_EQ(1u, a3.Size());\n\n        Value::ConstArray y = static_cast<const Value&>(x).GetArray();\n        (void)y;\n        // y.PushBack(1, allocator); // should not compile\n\n        // Templated functions\n        x.Clear();\n        EXPECT_TRUE(x.Is<Value::Array>());\n        EXPECT_TRUE(x.Is<Value::ConstArray>());\n        a.PushBack(1, allocator);\n        EXPECT_EQ(1, x.Get<Value::Array>()[0].GetInt());\n        EXPECT_EQ(1, x.Get<Value::ConstArray>()[0].GetInt());\n\n        Value x2;\n        x2.Set<Value::Array>(a);\n        EXPECT_TRUE(x.IsArray());   // IsArray() 
is invariant after moving.\n        EXPECT_EQ(1, x2.Get<Value::Array>()[0].GetInt());\n    }\n\n    {\n        Value y(kArrayType);\n        y.PushBack(123, allocator);\n\n        Value x(y.GetArray());      // Construct value from array.\n        EXPECT_TRUE(x.IsArray());\n        EXPECT_EQ(123, x[0].GetInt());\n        EXPECT_TRUE(y.IsArray());   // Invariant\n        EXPECT_TRUE(y.Empty());\n    }\n\n    {\n        Value x(kArrayType);\n        Value y(kArrayType);\n        y.PushBack(123, allocator);\n        x.PushBack(y.GetArray(), allocator);    // Implicit constructor to convert Array to GenericValue\n\n        EXPECT_EQ(1u, x.Size());\n        EXPECT_EQ(123, x[0][0].GetInt());\n        EXPECT_TRUE(y.IsArray());\n        EXPECT_TRUE(y.Empty());\n    }\n}\n\n#if RAPIDJSON_HAS_CXX11_RANGE_FOR\nTEST(Value, ArrayHelperRangeFor) {\n    Value::AllocatorType allocator;\n    Value x(kArrayType);\n\n    for (int i = 0; i < 10; i++)\n        x.PushBack(i, allocator);\n\n    {\n        int i = 0;\n        for (auto& v : x.GetArray()) {\n            EXPECT_EQ(i, v.GetInt());\n            i++;\n        }\n        EXPECT_EQ(i, 10);\n    }\n    {\n        int i = 0;\n        for (const auto& v : const_cast<const Value&>(x).GetArray()) {\n            EXPECT_EQ(i, v.GetInt());\n            i++;\n        }\n        EXPECT_EQ(i, 10);\n    }\n\n    // Array a = x.GetArray();\n    // Array ca = const_cast<const Value&>(x).GetArray();\n}\n#endif\n\ntemplate <typename T, typename Allocator>\nstatic void TestObject(T& x, Allocator& allocator) {\n    const T& y = x; // const version\n\n    // AddMember()\n    x.AddMember(\"A\", \"Apple\", allocator);\n    EXPECT_FALSE(x.ObjectEmpty());\n    EXPECT_EQ(1u, x.MemberCount());\n\n    Value value(\"Banana\", 6);\n    x.AddMember(\"B\", \"Banana\", allocator);\n    EXPECT_EQ(2u, x.MemberCount());\n\n    // AddMember<T>(StringRefType, T, Allocator)\n    {\n        Value o(kObjectType);\n        o.AddMember(\"true\", true, allocator);\n     
   o.AddMember(\"false\", false, allocator);\n        o.AddMember(\"int\", -1, allocator);\n        o.AddMember(\"uint\", 1u, allocator);\n        o.AddMember(\"int64\", int64_t(-4294967296), allocator);\n        o.AddMember(\"uint64\", uint64_t(4294967296), allocator);\n        o.AddMember(\"double\", 3.14, allocator);\n        o.AddMember(\"string\", \"Jelly\", allocator);\n\n        EXPECT_TRUE(o[\"true\"].GetBool());\n        EXPECT_FALSE(o[\"false\"].GetBool());\n        EXPECT_EQ(-1, o[\"int\"].GetInt());\n        EXPECT_EQ(1u, o[\"uint\"].GetUint());\n        EXPECT_EQ(int64_t(-4294967296), o[\"int64\"].GetInt64());\n        EXPECT_EQ(uint64_t(4294967296), o[\"uint64\"].GetUint64());\n        EXPECT_STREQ(\"Jelly\",o[\"string\"].GetString());\n        EXPECT_EQ(8u, o.MemberCount());\n    }\n\n    // AddMember<T>(Value&, T, Allocator)\n    {\n        Value o(kObjectType);\n\n        Value n(\"s\");\n        o.AddMember(n, \"string\", allocator);\n        EXPECT_EQ(1u, o.MemberCount());\n\n        Value count(\"#\");\n        o.AddMember(count, o.MemberCount(), allocator);\n        EXPECT_EQ(2u, o.MemberCount());\n    }\n\n#if RAPIDJSON_HAS_STDSTRING\n    {\n        // AddMember(StringRefType, const std::string&, Allocator)\n        Value o(kObjectType);\n        o.AddMember(\"b\", std::string(\"Banana\"), allocator);\n        EXPECT_STREQ(\"Banana\", o[\"b\"].GetString());\n\n        // RemoveMember(const std::string&)\n        o.RemoveMember(std::string(\"b\"));\n        EXPECT_TRUE(o.ObjectEmpty());\n    }\n#endif\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\n    // AddMember(GenericValue&&, ...) 
variants\n    {\n        Value o(kObjectType);\n        o.AddMember(Value(\"true\"), Value(true), allocator);\n        o.AddMember(Value(\"false\"), Value(false).Move(), allocator);    // value is lvalue ref\n        o.AddMember(Value(\"int\").Move(), Value(-1), allocator);         // name is lvalue ref\n        o.AddMember(\"uint\", std::move(Value().SetUint(1u)), allocator); // name is literal, value is rvalue\n        EXPECT_TRUE(o[\"true\"].GetBool());\n        EXPECT_FALSE(o[\"false\"].GetBool());\n        EXPECT_EQ(-1, o[\"int\"].GetInt());\n        EXPECT_EQ(1u, o[\"uint\"].GetUint());\n        EXPECT_EQ(4u, o.MemberCount());\n    }\n#endif\n\n    // Tests a member with null character\n    Value name;\n    const Value C0D(\"C\\0D\", 3);\n    name.SetString(C0D.GetString(), 3);\n    value.SetString(\"CherryD\", 7);\n    x.AddMember(name, value, allocator);\n\n    // HasMember()\n    EXPECT_TRUE(x.HasMember(\"A\"));\n    EXPECT_TRUE(x.HasMember(\"B\"));\n    EXPECT_TRUE(y.HasMember(\"A\"));\n    EXPECT_TRUE(y.HasMember(\"B\"));\n\n#if RAPIDJSON_HAS_STDSTRING\n    EXPECT_TRUE(x.HasMember(std::string(\"A\")));\n#endif\n\n    name.SetString(\"C\\0D\");\n    EXPECT_TRUE(x.HasMember(name));\n    EXPECT_TRUE(y.HasMember(name));\n\n    GenericValue<UTF8<>, CrtAllocator> othername(\"A\");\n    EXPECT_TRUE(x.HasMember(othername));\n    EXPECT_TRUE(y.HasMember(othername));\n    othername.SetString(\"C\\0D\");\n    EXPECT_TRUE(x.HasMember(othername));\n    EXPECT_TRUE(y.HasMember(othername));\n\n    // operator[]\n    EXPECT_STREQ(\"Apple\", x[\"A\"].GetString());\n    EXPECT_STREQ(\"Banana\", x[\"B\"].GetString());\n    EXPECT_STREQ(\"CherryD\", x[C0D].GetString());\n    EXPECT_STREQ(\"CherryD\", x[othername].GetString());\n    EXPECT_THROW(x[\"nonexist\"], AssertException);\n\n    // const operator[]\n    EXPECT_STREQ(\"Apple\", y[\"A\"].GetString());\n    EXPECT_STREQ(\"Banana\", y[\"B\"].GetString());\n    EXPECT_STREQ(\"CherryD\", y[C0D].GetString());\n\n#if 
RAPIDJSON_HAS_STDSTRING\n    EXPECT_STREQ(\"Apple\", x[\"A\"].GetString());\n    EXPECT_STREQ(\"Apple\", y[std::string(\"A\")].GetString());\n#endif\n\n    // member iterator\n    Value::MemberIterator itr = x.MemberBegin(); \n    EXPECT_TRUE(itr != x.MemberEnd());\n    EXPECT_STREQ(\"A\", itr->name.GetString());\n    EXPECT_STREQ(\"Apple\", itr->value.GetString());\n    ++itr;\n    EXPECT_TRUE(itr != x.MemberEnd());\n    EXPECT_STREQ(\"B\", itr->name.GetString());\n    EXPECT_STREQ(\"Banana\", itr->value.GetString());\n    ++itr;\n    EXPECT_TRUE(itr != x.MemberEnd());\n    EXPECT_TRUE(memcmp(itr->name.GetString(), \"C\\0D\", 4) == 0);\n    EXPECT_STREQ(\"CherryD\", itr->value.GetString());\n    ++itr;\n    EXPECT_FALSE(itr != x.MemberEnd());\n\n    // const member iterator\n    Value::ConstMemberIterator citr = y.MemberBegin(); \n    EXPECT_TRUE(citr != y.MemberEnd());\n    EXPECT_STREQ(\"A\", citr->name.GetString());\n    EXPECT_STREQ(\"Apple\", citr->value.GetString());\n    ++citr;\n    EXPECT_TRUE(citr != y.MemberEnd());\n    EXPECT_STREQ(\"B\", citr->name.GetString());\n    EXPECT_STREQ(\"Banana\", citr->value.GetString());\n    ++citr;\n    EXPECT_TRUE(citr != y.MemberEnd());\n    EXPECT_TRUE(memcmp(citr->name.GetString(), \"C\\0D\", 4) == 0);\n    EXPECT_STREQ(\"CherryD\", citr->value.GetString());\n    ++citr;\n    EXPECT_FALSE(citr != y.MemberEnd());\n\n    // member iterator conversions/relations\n    itr  = x.MemberBegin();\n    citr = x.MemberBegin(); // const conversion\n    TestEqual(itr, citr);\n    EXPECT_TRUE(itr < x.MemberEnd());\n    EXPECT_FALSE(itr > y.MemberEnd());\n    EXPECT_TRUE(citr < x.MemberEnd());\n    EXPECT_FALSE(citr > y.MemberEnd());\n    ++citr;\n    TestUnequal(itr, citr);\n    EXPECT_FALSE(itr < itr);\n    EXPECT_TRUE(itr < citr);\n    EXPECT_FALSE(itr > itr);\n    EXPECT_TRUE(citr > itr);\n    EXPECT_EQ(1, citr - x.MemberBegin());\n    EXPECT_EQ(0, itr - y.MemberBegin());\n    itr += citr - x.MemberBegin();\n    EXPECT_EQ(1, 
itr - y.MemberBegin());\n    TestEqual(citr, itr);\n    EXPECT_TRUE(itr <= citr);\n    EXPECT_TRUE(citr <= itr);\n    itr++;\n    EXPECT_TRUE(itr >= citr);\n    EXPECT_FALSE(citr >= itr);\n\n    // RemoveMember()\n    EXPECT_TRUE(x.RemoveMember(\"A\"));\n    EXPECT_FALSE(x.HasMember(\"A\"));\n\n    EXPECT_TRUE(x.RemoveMember(\"B\"));\n    EXPECT_FALSE(x.HasMember(\"B\"));\n\n    EXPECT_FALSE(x.RemoveMember(\"nonexist\"));\n\n    EXPECT_TRUE(x.RemoveMember(othername));\n    EXPECT_FALSE(x.HasMember(name));\n\n    EXPECT_TRUE(x.MemberBegin() == x.MemberEnd());\n\n    // EraseMember(ConstMemberIterator)\n\n    // Use array members to ensure removed elements' destructor is called.\n    // { \"a\": [0], \"b\": [1],[2],...]\n    const char keys[][2] = { \"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\", \"j\" };\n    for (int i = 0; i < 10; i++)\n        x.AddMember(keys[i], Value(kArrayType).PushBack(i, allocator), allocator);\n\n    // MemberCount, iterator difference\n    EXPECT_EQ(x.MemberCount(), SizeType(x.MemberEnd() - x.MemberBegin()));\n\n    // Erase the first\n    itr = x.EraseMember(x.MemberBegin());\n    EXPECT_FALSE(x.HasMember(keys[0]));\n    EXPECT_EQ(x.MemberBegin(), itr);\n    EXPECT_EQ(9u, x.MemberCount());\n    for (; itr != x.MemberEnd(); ++itr) {\n        size_t i = static_cast<size_t>((itr - x.MemberBegin())) + 1;\n        EXPECT_STREQ(itr->name.GetString(), keys[i]);\n        EXPECT_EQ(static_cast<int>(i), itr->value[0].GetInt());\n    }\n\n    // Erase the last\n    itr = x.EraseMember(x.MemberEnd() - 1);\n    EXPECT_FALSE(x.HasMember(keys[9]));\n    EXPECT_EQ(x.MemberEnd(), itr);\n    EXPECT_EQ(8u, x.MemberCount());\n    for (; itr != x.MemberEnd(); ++itr) {\n        size_t i = static_cast<size_t>(itr - x.MemberBegin()) + 1;\n        EXPECT_STREQ(itr->name.GetString(), keys[i]);\n        EXPECT_EQ(static_cast<int>(i), itr->value[0].GetInt());\n    }\n\n    // Erase the middle\n    itr = x.EraseMember(x.MemberBegin() + 4);\n    
EXPECT_FALSE(x.HasMember(keys[5]));\n    EXPECT_EQ(x.MemberBegin() + 4, itr);\n    EXPECT_EQ(7u, x.MemberCount());\n    for (; itr != x.MemberEnd(); ++itr) {\n        size_t i = static_cast<size_t>(itr - x.MemberBegin());\n        i += (i < 4) ? 1 : 2;\n        EXPECT_STREQ(itr->name.GetString(), keys[i]);\n        EXPECT_EQ(static_cast<int>(i), itr->value[0].GetInt());\n    }\n\n    // EraseMember(ConstMemberIterator, ConstMemberIterator)\n    // Exhaustive test with all 0 <= first < n, first <= last <= n cases\n    const unsigned n = 10;\n    for (unsigned first = 0; first < n; first++) {\n        for (unsigned last = first; last <= n; last++) {\n            x.RemoveAllMembers();\n            for (unsigned i = 0; i < n; i++)\n                x.AddMember(keys[i], Value(kArrayType).PushBack(i, allocator), allocator);\n\n            itr = x.EraseMember(x.MemberBegin() + static_cast<int>(first), x.MemberBegin() + static_cast<int>(last));\n            if (last == n)\n                EXPECT_EQ(x.MemberEnd(), itr);\n            else\n                EXPECT_EQ(x.MemberBegin() + static_cast<int>(first), itr);\n\n            size_t removeCount = last - first;\n            EXPECT_EQ(n - removeCount, x.MemberCount());\n            for (unsigned i = 0; i < first; i++)\n                EXPECT_EQ(i, x[keys[i]][0].GetUint());\n            for (unsigned i = first; i < n - removeCount; i++)\n                EXPECT_EQ(i + removeCount, x[keys[i+removeCount]][0].GetUint());\n        }\n    }\n\n    // RemoveAllMembers()\n    x.RemoveAllMembers();\n    EXPECT_TRUE(x.ObjectEmpty());\n    EXPECT_EQ(0u, x.MemberCount());\n}\n\nTEST(Value, Object) {\n    Value::AllocatorType allocator;\n    Value x(kObjectType);\n    const Value& y = x; // const version\n\n    EXPECT_EQ(kObjectType, x.GetType());\n    EXPECT_TRUE(x.IsObject());\n    EXPECT_TRUE(x.ObjectEmpty());\n    EXPECT_EQ(0u, x.MemberCount());\n    EXPECT_EQ(kObjectType, y.GetType());\n    EXPECT_TRUE(y.IsObject());\n    
EXPECT_TRUE(y.ObjectEmpty());\n    EXPECT_EQ(0u, y.MemberCount());\n\n    TestObject(x, allocator);\n\n    // SetObject()\n    Value z;\n    z.SetObject();\n    EXPECT_TRUE(z.IsObject());\n}\n\nTEST(Value, ObjectHelper) {\n    Value::AllocatorType allocator;\n    {\n        Value x(kObjectType);\n        Value::Object o = x.GetObject();\n        TestObject(o, allocator);\n    }\n\n    {\n        Value x(kObjectType);\n        Value::Object o = x.GetObject();\n        o.AddMember(\"1\", 1, allocator);\n\n        Value::Object o2(o); // copy constructor\n        EXPECT_EQ(1u, o2.MemberCount());\n\n        Value::Object o3 = o;\n        EXPECT_EQ(1u, o3.MemberCount());\n\n        Value::ConstObject y = static_cast<const Value&>(x).GetObject();\n        (void)y;\n        // y.AddMember(\"1\", 1, allocator); // should not compile\n\n        // Templated functions\n        x.RemoveAllMembers();\n        EXPECT_TRUE(x.Is<Value::Object>());\n        EXPECT_TRUE(x.Is<Value::ConstObject>());\n        o.AddMember(\"1\", 1, allocator);\n        EXPECT_EQ(1, x.Get<Value::Object>()[\"1\"].GetInt());\n        EXPECT_EQ(1, x.Get<Value::ConstObject>()[\"1\"].GetInt());\n\n        Value x2;\n        x2.Set<Value::Object>(o);\n        EXPECT_TRUE(x.IsObject());   // IsObject() is invariant after moving\n        EXPECT_EQ(1, x2.Get<Value::Object>()[\"1\"].GetInt());\n    }\n\n    {\n        Value x(kObjectType);\n        x.AddMember(\"a\", \"apple\", allocator);\n        Value y(x.GetObject());\n        EXPECT_STREQ(\"apple\", y[\"a\"].GetString());\n        EXPECT_TRUE(x.IsObject());  // Invariant\n    }\n\n    {\n        Value x(kObjectType);\n        x.AddMember(\"a\", \"apple\", allocator);\n        Value y(kObjectType);\n        y.AddMember(\"fruits\", x.GetObject(), allocator);\n        EXPECT_STREQ(\"apple\", y[\"fruits\"][\"a\"].GetString());\n        EXPECT_TRUE(x.IsObject());  // Invariant\n    }\n}\n\n#if RAPIDJSON_HAS_CXX11_RANGE_FOR\nTEST(Value, ObjectHelperRangeFor) {\n  
  Value::AllocatorType allocator;\n    Value x(kObjectType);\n\n    for (int i = 0; i < 10; i++) {\n        char name[10];\n        Value n(name, static_cast<SizeType>(sprintf(name, \"%d\", i)), allocator);\n        x.AddMember(n, i, allocator);\n    }\n\n    {\n        int i = 0;\n        for (auto& m : x.GetObject()) {\n            char name[11];\n            sprintf(name, \"%d\", i);\n            EXPECT_STREQ(name, m.name.GetString());\n            EXPECT_EQ(i, m.value.GetInt());\n            i++;\n        }\n        EXPECT_EQ(i, 10);\n    }\n    {\n        int i = 0;\n        for (const auto& m : const_cast<const Value&>(x).GetObject()) {\n            char name[11];\n            sprintf(name, \"%d\", i);\n            EXPECT_STREQ(name, m.name.GetString());\n            EXPECT_EQ(i, m.value.GetInt());\n            i++;\n        }\n        EXPECT_EQ(i, 10);\n    }\n\n    // Object a = x.GetObject();\n    // Object ca = const_cast<const Value&>(x).GetObject();\n}\n#endif\n\nTEST(Value, EraseMember_String) {\n    Value::AllocatorType allocator;\n    Value x(kObjectType);\n    x.AddMember(\"A\", \"Apple\", allocator);\n    x.AddMember(\"B\", \"Banana\", allocator);\n\n    EXPECT_TRUE(x.EraseMember(\"B\"));\n    EXPECT_FALSE(x.HasMember(\"B\"));\n\n    EXPECT_FALSE(x.EraseMember(\"nonexist\"));\n\n    GenericValue<UTF8<>, CrtAllocator> othername(\"A\");\n    EXPECT_TRUE(x.EraseMember(othername));\n    EXPECT_FALSE(x.HasMember(\"A\"));\n\n    EXPECT_TRUE(x.MemberBegin() == x.MemberEnd());\n}\n\nTEST(Value, BigNestedArray) {\n    MemoryPoolAllocator<> allocator;\n    Value x(kArrayType);\n    static const SizeType  n = 200;\n\n    for (SizeType i = 0; i < n; i++) {\n        Value y(kArrayType);\n        for (SizeType  j = 0; j < n; j++) {\n            Value number(static_cast<int>(i * n + j));\n            y.PushBack(number, allocator);\n        }\n        x.PushBack(y, allocator);\n    }\n\n    for (SizeType i = 0; i < n; i++)\n        for (SizeType j = 0; j < n; j++) 
{\n            EXPECT_TRUE(x[i][j].IsInt());\n            EXPECT_EQ(static_cast<int>(i * n + j), x[i][j].GetInt());\n        }\n}\n\nTEST(Value, BigNestedObject) {\n    MemoryPoolAllocator<> allocator;\n    Value x(kObjectType);\n    static const SizeType n = 200;\n    const char* format = std::numeric_limits<SizeType>::is_signed ? \"%d\" : \"%u\";\n\n    for (SizeType i = 0; i < n; i++) {\n        char name1[10];\n        sprintf(name1, format, i);\n\n        // Value name(name1); // should not compile\n        Value name(name1, static_cast<SizeType>(strlen(name1)), allocator);\n        Value object(kObjectType);\n\n        for (SizeType j = 0; j < n; j++) {\n            char name2[10];\n            sprintf(name2, format, j);\n\n            Value name3(name2, static_cast<SizeType>(strlen(name2)), allocator);\n            Value number(static_cast<int>(i * n + j));\n            object.AddMember(name3, number, allocator);\n        }\n\n        // x.AddMember(name1, object, allocator); // should not compile\n        x.AddMember(name, object, allocator);\n    }\n\n    for (SizeType i = 0; i < n; i++) {\n        char name1[10];\n        sprintf(name1, format, i);\n\n        for (SizeType j = 0; j < n; j++) {\n            char name2[10];\n            sprintf(name2, format, j);\n            x[name1];\n            EXPECT_EQ(static_cast<int>(i * n + j), x[name1][name2].GetInt());\n        }\n    }\n}\n\n// Issue 18: Error removing last element of object\n// http://code.google.com/p/rapidjson/issues/detail?id=18\nTEST(Value, RemoveLastElement) {\n    rapidjson::Document doc;\n    rapidjson::Document::AllocatorType& allocator = doc.GetAllocator();\n    rapidjson::Value objVal(rapidjson::kObjectType);\n    objVal.AddMember(\"var1\", 123, allocator);\n    objVal.AddMember(\"var2\", \"444\", allocator);\n    objVal.AddMember(\"var3\", 555, allocator);\n    EXPECT_TRUE(objVal.HasMember(\"var3\"));\n    objVal.RemoveMember(\"var3\");    // Assertion here in r61\n    
EXPECT_FALSE(objVal.HasMember(\"var3\"));\n}\n\n// Issue 38:    Segmentation fault with CrtAllocator\nTEST(Document, CrtAllocator) {\n    typedef GenericValue<UTF8<>, CrtAllocator> V;\n\n    V::AllocatorType allocator;\n    V o(kObjectType);\n    o.AddMember(\"x\", 1, allocator); // Should not call destructor on uninitialized name/value of newly allocated members.\n\n    V a(kArrayType);\n    a.PushBack(1, allocator);   // Should not call destructor on uninitialized Value of newly allocated elements.\n}\n\nstatic void TestShortStringOptimization(const char* str) {\n    const rapidjson::SizeType len = static_cast<rapidjson::SizeType>(strlen(str));\n\n    rapidjson::Document doc;\n    rapidjson::Value val;\n    val.SetString(str, len, doc.GetAllocator());\n\n    EXPECT_EQ(val.GetStringLength(), len);\n    EXPECT_STREQ(val.GetString(), str);\n}\n\nTEST(Value, AllocateShortString) {\n    TestShortStringOptimization(\"\");                 // edge case: empty string\n    TestShortStringOptimization(\"12345678\");         // regular case for short strings: 8 chars\n    TestShortStringOptimization(\"12345678901\");      // edge case: 11 chars in 32-bit mode (=> short string)\n    TestShortStringOptimization(\"123456789012\");     // edge case: 12 chars in 32-bit mode (=> regular string)\n    TestShortStringOptimization(\"123456789012345\");  // edge case: 15 chars in 64-bit mode (=> short string)\n    TestShortStringOptimization(\"1234567890123456\"); // edge case: 16 chars in 64-bit mode (=> regular string)\n}\n\ntemplate <int e>\nstruct TerminateHandler {\n    bool Null() { return e != 0; }\n    bool Bool(bool) { return e != 1; }\n    bool Int(int) { return e != 2; }\n    bool Uint(unsigned) { return e != 3; }\n    bool Int64(int64_t) { return e != 4; }\n    bool Uint64(uint64_t) { return e != 5; }\n    bool Double(double) { return e != 6; }\n    bool RawNumber(const char*, SizeType, bool) { return e != 7; }\n    bool String(const char*, SizeType, bool) { return e != 8; 
}\n    bool StartObject() { return e != 9; }\n    bool Key(const char*, SizeType, bool)  { return e != 10; }\n    bool EndObject(SizeType) { return e != 11; }\n    bool StartArray() { return e != 12; }\n    bool EndArray(SizeType) { return e != 13; }\n};\n\n#define TEST_TERMINATION(e, json)\\\n{\\\n    Document d; \\\n    EXPECT_FALSE(d.Parse(json).HasParseError()); \\\n    Reader reader; \\\n    TerminateHandler<e> h;\\\n    EXPECT_FALSE(d.Accept(h));\\\n}\n\nTEST(Value, AcceptTerminationByHandler) {\n    TEST_TERMINATION(0, \"[null]\");\n    TEST_TERMINATION(1, \"[true]\");\n    TEST_TERMINATION(1, \"[false]\");\n    TEST_TERMINATION(2, \"[-1]\");\n    TEST_TERMINATION(3, \"[2147483648]\");\n    TEST_TERMINATION(4, \"[-1234567890123456789]\");\n    TEST_TERMINATION(5, \"[9223372036854775808]\");\n    TEST_TERMINATION(6, \"[0.5]\");\n    // RawNumber() is never called\n    TEST_TERMINATION(8, \"[\\\"a\\\"]\");\n    TEST_TERMINATION(9, \"[{}]\");\n    TEST_TERMINATION(10, \"[{\\\"a\\\":1}]\");\n    TEST_TERMINATION(11, \"[{}]\");\n    TEST_TERMINATION(12, \"{\\\"a\\\":[]}\");\n    TEST_TERMINATION(13, \"{\\\"a\\\":[]}\");\n}\n\nstruct ValueIntComparer {\n    bool operator()(const Value& lhs, const Value& rhs) const {\n        return lhs.GetInt() < rhs.GetInt();\n    }\n};\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\nTEST(Value, Sorting) {\n    Value::AllocatorType allocator;\n    Value a(kArrayType);\n    a.PushBack(5, allocator);\n    a.PushBack(1, allocator);\n    a.PushBack(3, allocator);\n    std::sort(a.Begin(), a.End(), ValueIntComparer());\n    EXPECT_EQ(1, a[0].GetInt());\n    EXPECT_EQ(3, a[1].GetInt());\n    EXPECT_EQ(5, a[2].GetInt());\n}\n#endif\n\n// http://stackoverflow.com/questions/35222230/\n\nstatic void MergeDuplicateKey(Value& v, Value::AllocatorType& a) {\n    if (v.IsObject()) {\n        // Convert all key:value into key:[value]\n        for (Value::MemberIterator itr = v.MemberBegin(); itr != v.MemberEnd(); ++itr)\n            itr->value = 
Value(kArrayType).Move().PushBack(itr->value, a);\n\n        // Merge arrays if key is duplicated\n        for (Value::MemberIterator itr = v.MemberBegin(); itr != v.MemberEnd();) {\n            Value::MemberIterator itr2 = v.FindMember(itr->name);\n            if (itr != itr2) {\n                itr2->value.PushBack(itr->value[0], a);\n                itr = v.EraseMember(itr);\n            }\n            else\n                ++itr;\n        }\n\n        // Convert key:[values] back to key:value if there is only one value\n        for (Value::MemberIterator itr = v.MemberBegin(); itr != v.MemberEnd(); ++itr) {\n            if (itr->value.Size() == 1)\n                itr->value = itr->value[0];\n            MergeDuplicateKey(itr->value, a); // Recursion on the value\n        }\n    }\n    else if (v.IsArray())\n        for (Value::ValueIterator itr = v.Begin(); itr != v.End(); ++itr)\n            MergeDuplicateKey(*itr, a);\n}\n\nTEST(Value, MergeDuplicateKey) {\n    Document d;\n    d.Parse(\n        \"{\"\n        \"    \\\"key1\\\": {\"\n        \"        \\\"a\\\": \\\"asdf\\\",\"\n        \"        \\\"b\\\": \\\"foo\\\",\"\n        \"        \\\"b\\\": \\\"bar\\\",\"\n        \"        \\\"c\\\": \\\"fdas\\\"\"\n        \"    }\"\n        \"}\");\n\n    Document d2;\n    d2.Parse(\n        \"{\"\n        \"    \\\"key1\\\": {\"\n        \"        \\\"a\\\": \\\"asdf\\\",\"\n        \"        \\\"b\\\": [\"\n        \"            \\\"foo\\\",\"\n        \"            \\\"bar\\\"\"\n        \"        ],\"\n        \"        \\\"c\\\": \\\"fdas\\\"\"\n        \"    }\"\n        \"}\");\n\n    EXPECT_NE(d2, d);\n    MergeDuplicateKey(d, d.GetAllocator());\n    EXPECT_EQ(d2, d);\n}\n\nTEST(Value, SSOMemoryOverlapTest) {\n    Document d;\n    d.Parse(\"{\\\"project\\\":\\\"rapidjson\\\",\\\"stars\\\":\\\"ssovalue\\\"}\");\n    Value &s = d[\"stars\"];\n    s.SetString(GenericStringRef<char>(&(s.GetString()[1]), 5), d.GetAllocator());\n    
EXPECT_TRUE(strcmp(s.GetString(),\"soval\") == 0);\n}\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/unittest/writertest.cpp",
    "content": "// Tencent is pleased to support the open source community by making RapidJSON available.\n// \n// Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip.\n//\n// Licensed under the MIT License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// http://opensource.org/licenses/MIT\n//\n// Unless required by applicable law or agreed to in writing, software distributed \n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR \n// CONDITIONS OF ANY KIND, either express or implied. See the License for the \n// specific language governing permissions and limitations under the License.\n\n#include \"unittest.h\"\n\n#include \"rapidjson/document.h\"\n#include \"rapidjson/reader.h\"\n#include \"rapidjson/writer.h\"\n#include \"rapidjson/stringbuffer.h\"\n#include \"rapidjson/memorybuffer.h\"\n\n#ifdef __clang__\nRAPIDJSON_DIAG_PUSH\nRAPIDJSON_DIAG_OFF(c++98-compat)\n#endif\n\nusing namespace rapidjson;\n\nTEST(Writer, Compact) {\n    StringStream s(\"{ \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3] } \");\n    StringBuffer buffer;\n    Writer<StringBuffer> writer(buffer);\n    buffer.ShrinkToFit();\n    Reader reader;\n    reader.Parse<0>(s, writer);\n    EXPECT_STREQ(\"{\\\"hello\\\":\\\"world\\\",\\\"t\\\":true,\\\"f\\\":false,\\\"n\\\":null,\\\"i\\\":123,\\\"pi\\\":3.1416,\\\"a\\\":[1,2,3]}\", buffer.GetString());\n    EXPECT_EQ(77u, buffer.GetSize());\n    EXPECT_TRUE(writer.IsComplete());\n}\n\n// json -> parse -> writer -> json\n#define TEST_ROUNDTRIP(json) \\\n    { \\\n        StringStream s(json); \\\n        StringBuffer buffer; \\\n        Writer<StringBuffer> writer(buffer); \\\n        Reader reader; \\\n        reader.Parse<kParseFullPrecisionFlag>(s, writer); \\\n        EXPECT_STREQ(json, buffer.GetString()); \\\n        
EXPECT_TRUE(writer.IsComplete()); \\\n    }\n\nTEST(Writer, Root) {\n    TEST_ROUNDTRIP(\"null\");\n    TEST_ROUNDTRIP(\"true\");\n    TEST_ROUNDTRIP(\"false\");\n    TEST_ROUNDTRIP(\"0\");\n    TEST_ROUNDTRIP(\"\\\"foo\\\"\");\n    TEST_ROUNDTRIP(\"[]\");\n    TEST_ROUNDTRIP(\"{}\");\n}\n\nTEST(Writer, Int) {\n    TEST_ROUNDTRIP(\"[-1]\");\n    TEST_ROUNDTRIP(\"[-123]\");\n    TEST_ROUNDTRIP(\"[-2147483648]\");\n}\n\nTEST(Writer, UInt) {\n    TEST_ROUNDTRIP(\"[0]\");\n    TEST_ROUNDTRIP(\"[1]\");\n    TEST_ROUNDTRIP(\"[123]\");\n    TEST_ROUNDTRIP(\"[2147483647]\");\n    TEST_ROUNDTRIP(\"[4294967295]\");\n}\n\nTEST(Writer, Int64) {\n    TEST_ROUNDTRIP(\"[-1234567890123456789]\");\n    TEST_ROUNDTRIP(\"[-9223372036854775808]\");\n}\n\nTEST(Writer, Uint64) {\n    TEST_ROUNDTRIP(\"[1234567890123456789]\");\n    TEST_ROUNDTRIP(\"[9223372036854775807]\");\n}\n\nTEST(Writer, String) {\n    TEST_ROUNDTRIP(\"[\\\"Hello\\\"]\");\n    TEST_ROUNDTRIP(\"[\\\"Hello\\\\u0000World\\\"]\");\n    TEST_ROUNDTRIP(\"[\\\"\\\\\\\"\\\\\\\\/\\\\b\\\\f\\\\n\\\\r\\\\t\\\"]\");\n\n#if RAPIDJSON_HAS_STDSTRING\n    {\n        StringBuffer buffer;\n        Writer<StringBuffer> writer(buffer);\n        writer.String(std::string(\"Hello\\n\"));\n        EXPECT_STREQ(\"\\\"Hello\\\\n\\\"\", buffer.GetString());\n    }\n#endif\n}\n\nTEST(Writer, Issue_889) {\n    char buf[100] = \"Hello\";\n    \n    StringBuffer buffer;\n    Writer<StringBuffer> writer(buffer);\n    writer.StartArray();\n    writer.String(buf);\n    writer.EndArray();\n    \n    EXPECT_STREQ(\"[\\\"Hello\\\"]\", buffer.GetString());\n    EXPECT_TRUE(writer.IsComplete());\n}\n\nTEST(Writer, ScanWriteUnescapedString) {\n    const char json[] = \"[\\\" \\\\\\\"0123456789ABCDEF\\\"]\";\n    //                       ^ scanning stops here.\n    char buffer2[sizeof(json) + 32];\n\n    // Use different offset to test different alignments\n    for (int i = 0; i < 32; i++) {\n        char* p = buffer2 + i;\n        memcpy(p, json, 
sizeof(json));\n        TEST_ROUNDTRIP(p);\n    }\n}\n\nTEST(Writer, Double) {\n    TEST_ROUNDTRIP(\"[1.2345,1.2345678,0.123456789012,1234567.8]\");\n    TEST_ROUNDTRIP(\"0.0\");\n    TEST_ROUNDTRIP(\"-0.0\"); // Issue #289\n    TEST_ROUNDTRIP(\"1e30\");\n    TEST_ROUNDTRIP(\"1.0\");\n    TEST_ROUNDTRIP(\"5e-324\"); // Min subnormal positive double\n    TEST_ROUNDTRIP(\"2.225073858507201e-308\"); // Max subnormal positive double\n    TEST_ROUNDTRIP(\"2.2250738585072014e-308\"); // Min normal positive double\n    TEST_ROUNDTRIP(\"1.7976931348623157e308\"); // Max double\n\n}\n\n// UTF8 -> TargetEncoding -> UTF8\ntemplate <typename TargetEncoding>\nvoid TestTranscode(const char* json) {\n    StringStream s(json);\n    GenericStringBuffer<TargetEncoding> buffer;\n    Writer<GenericStringBuffer<TargetEncoding>, UTF8<>, TargetEncoding> writer(buffer);\n    Reader reader;\n    reader.Parse(s, writer);\n\n    StringBuffer buffer2;\n    Writer<StringBuffer> writer2(buffer2);\n    GenericReader<TargetEncoding, UTF8<> > reader2;\n    GenericStringStream<TargetEncoding> s2(buffer.GetString());\n    reader2.Parse(s2, writer2);\n\n    EXPECT_STREQ(json, buffer2.GetString());\n}\n\nTEST(Writer, Transcode) {\n    const char json[] = \"{\\\"hello\\\":\\\"world\\\",\\\"t\\\":true,\\\"f\\\":false,\\\"n\\\":null,\\\"i\\\":123,\\\"pi\\\":3.1416,\\\"a\\\":[1,2,3],\\\"dollar\\\":\\\"\\x24\\\",\\\"cents\\\":\\\"\\xC2\\xA2\\\",\\\"euro\\\":\\\"\\xE2\\x82\\xAC\\\",\\\"gclef\\\":\\\"\\xF0\\x9D\\x84\\x9E\\\"}\";\n\n    // UTF8 -> UTF16 -> UTF8\n    TestTranscode<UTF8<> >(json);\n\n    // UTF8 -> ASCII -> UTF8\n    TestTranscode<ASCII<> >(json);\n\n    // UTF8 -> UTF16 -> UTF8\n    TestTranscode<UTF16<> >(json);\n\n    // UTF8 -> UTF32 -> UTF8\n    TestTranscode<UTF32<> >(json);\n\n    // UTF8 -> AutoUTF -> UTF8\n    UTFType types[] = { kUTF8, kUTF16LE , kUTF16BE, kUTF32LE , kUTF32BE };\n    for (size_t i = 0; i < 5; i++) {\n        StringStream s(json);\n        MemoryBuffer buffer;\n        
AutoUTFOutputStream<unsigned, MemoryBuffer> os(buffer, types[i], true);\n        Writer<AutoUTFOutputStream<unsigned, MemoryBuffer>, UTF8<>, AutoUTF<unsigned> > writer(os);\n        Reader reader;\n        reader.Parse(s, writer);\n\n        StringBuffer buffer2;\n        Writer<StringBuffer> writer2(buffer2);\n        GenericReader<AutoUTF<unsigned>, UTF8<> > reader2;\n        MemoryStream s2(buffer.GetBuffer(), buffer.GetSize());\n        AutoUTFInputStream<unsigned, MemoryStream> is(s2);\n        reader2.Parse(is, writer2);\n\n        EXPECT_STREQ(json, buffer2.GetString());\n    }\n\n}\n\n#include <sstream>\n\nclass OStreamWrapper {\npublic:\n    typedef char Ch;\n\n    OStreamWrapper(std::ostream& os) : os_(os) {}\n\n    Ch Peek() const { assert(false); return '\\0'; }\n    Ch Take() { assert(false); return '\\0'; }\n    size_t Tell() const { return 0; }\n\n    Ch* PutBegin() { assert(false); return 0; }\n    void Put(Ch c) { os_.put(c); }\n    void Flush() { os_.flush(); }\n    size_t PutEnd(Ch*) { assert(false); return 0; }\n\nprivate:\n    OStreamWrapper(const OStreamWrapper&);\n    OStreamWrapper& operator=(const OStreamWrapper&);\n\n    std::ostream& os_;\n};\n\nTEST(Writer, OStreamWrapper) {\n    StringStream s(\"{ \\\"hello\\\" : \\\"world\\\", \\\"t\\\" : true , \\\"f\\\" : false, \\\"n\\\": null, \\\"i\\\":123, \\\"pi\\\": 3.1416, \\\"a\\\":[1, 2, 3], \\\"u64\\\": 1234567890123456789, \\\"i64\\\":-1234567890123456789 } \");\n    \n    std::stringstream ss;\n    OStreamWrapper os(ss);\n    \n    Writer<OStreamWrapper> writer(os);\n\n    Reader reader;\n    reader.Parse<0>(s, writer);\n    \n    std::string actual = ss.str();\n    EXPECT_STREQ(\"{\\\"hello\\\":\\\"world\\\",\\\"t\\\":true,\\\"f\\\":false,\\\"n\\\":null,\\\"i\\\":123,\\\"pi\\\":3.1416,\\\"a\\\":[1,2,3],\\\"u64\\\":1234567890123456789,\\\"i64\\\":-1234567890123456789}\", actual.c_str());\n}\n\nTEST(Writer, AssertRootMayBeAnyValue) {\n#define T(x)\\\n    {\\\n        StringBuffer 
buffer;\\\n        Writer<StringBuffer> writer(buffer);\\\n        EXPECT_TRUE(x);\\\n    }\n    T(writer.Bool(false));\n    T(writer.Bool(true));\n    T(writer.Null());\n    T(writer.Int(0));\n    T(writer.Uint(0));\n    T(writer.Int64(0));\n    T(writer.Uint64(0));\n    T(writer.Double(0));\n    T(writer.String(\"foo\"));\n#undef T\n}\n\nTEST(Writer, AssertIncorrectObjectLevel) {\n    StringBuffer buffer;\n    Writer<StringBuffer> writer(buffer);\n    writer.StartObject();\n    writer.EndObject();\n    ASSERT_THROW(writer.EndObject(), AssertException);\n}\n\nTEST(Writer, AssertIncorrectArrayLevel) {\n    StringBuffer buffer;\n    Writer<StringBuffer> writer(buffer);\n    writer.StartArray();\n    writer.EndArray();\n    ASSERT_THROW(writer.EndArray(), AssertException);\n}\n\nTEST(Writer, AssertIncorrectEndObject) {\n    StringBuffer buffer;\n    Writer<StringBuffer> writer(buffer);\n    writer.StartObject();\n    ASSERT_THROW(writer.EndArray(), AssertException);\n}\n\nTEST(Writer, AssertIncorrectEndArray) {\n    StringBuffer buffer;\n    Writer<StringBuffer> writer(buffer);\n    writer.StartObject();\n    ASSERT_THROW(writer.EndArray(), AssertException);\n}\n\nTEST(Writer, AssertObjectKeyNotString) {\n#define T(x)\\\n    {\\\n        StringBuffer buffer;\\\n        Writer<StringBuffer> writer(buffer);\\\n        writer.StartObject();\\\n        ASSERT_THROW(x, AssertException); \\\n    }\n    T(writer.Bool(false));\n    T(writer.Bool(true));\n    T(writer.Null());\n    T(writer.Int(0));\n    T(writer.Uint(0));\n    T(writer.Int64(0));\n    T(writer.Uint64(0));\n    T(writer.Double(0));\n    T(writer.StartObject());\n    T(writer.StartArray());\n#undef T\n}\n\nTEST(Writer, AssertMultipleRoot) {\n    StringBuffer buffer;\n    Writer<StringBuffer> writer(buffer);\n\n    writer.StartObject();\n    writer.EndObject();\n    ASSERT_THROW(writer.StartObject(), AssertException);\n\n    writer.Reset(buffer);\n    writer.Null();\n    ASSERT_THROW(writer.Int(0), 
AssertException);\n\n    writer.Reset(buffer);\n    writer.String(\"foo\");\n    ASSERT_THROW(writer.StartArray(), AssertException);\n\n    writer.Reset(buffer);\n    writer.StartArray();\n    writer.EndArray();\n    //ASSERT_THROW(writer.Double(3.14), AssertException);\n}\n\nTEST(Writer, RootObjectIsComplete) {\n    StringBuffer buffer;\n    Writer<StringBuffer> writer(buffer);\n    EXPECT_FALSE(writer.IsComplete());\n    writer.StartObject();\n    EXPECT_FALSE(writer.IsComplete());\n    writer.String(\"foo\");\n    EXPECT_FALSE(writer.IsComplete());\n    writer.Int(1);\n    EXPECT_FALSE(writer.IsComplete());\n    writer.EndObject();\n    EXPECT_TRUE(writer.IsComplete());\n}\n\nTEST(Writer, RootArrayIsComplete) {\n    StringBuffer buffer;\n    Writer<StringBuffer> writer(buffer);\n    EXPECT_FALSE(writer.IsComplete());\n    writer.StartArray();\n    EXPECT_FALSE(writer.IsComplete());\n    writer.String(\"foo\");\n    EXPECT_FALSE(writer.IsComplete());\n    writer.Int(1);\n    EXPECT_FALSE(writer.IsComplete());\n    writer.EndArray();\n    EXPECT_TRUE(writer.IsComplete());\n}\n\nTEST(Writer, RootValueIsComplete) {\n#define T(x)\\\n    {\\\n        StringBuffer buffer;\\\n        Writer<StringBuffer> writer(buffer);\\\n        EXPECT_FALSE(writer.IsComplete()); \\\n        x; \\\n        EXPECT_TRUE(writer.IsComplete()); \\\n    }\n    T(writer.Null());\n    T(writer.Bool(true));\n    T(writer.Bool(false));\n    T(writer.Int(0));\n    T(writer.Uint(0));\n    T(writer.Int64(0));\n    T(writer.Uint64(0));\n    T(writer.Double(0));\n    T(writer.String(\"\"));\n#undef T\n}\n\nTEST(Writer, InvalidEncoding) {\n    // Fail in decoding invalid UTF-8 sequence http://www.cl.cam.ac.uk/~mgk25/ucs/examples/UTF-8-test.txt\n    {\n        GenericStringBuffer<UTF16<> > buffer;\n        Writer<GenericStringBuffer<UTF16<> >, UTF8<>, UTF16<> > writer(buffer);\n        writer.StartArray();\n        EXPECT_FALSE(writer.String(\"\\xfe\"));\n        
EXPECT_FALSE(writer.String(\"\\xff\"));\n        EXPECT_FALSE(writer.String(\"\\xfe\\xfe\\xff\\xff\"));\n        writer.EndArray();\n    }\n\n    // Fail in encoding\n    {\n        StringBuffer buffer;\n        Writer<StringBuffer, UTF32<> > writer(buffer);\n        static const UTF32<>::Ch s[] = { 0x110000, 0 }; // Out of U+0000 to U+10FFFF\n        EXPECT_FALSE(writer.String(s));\n    }\n\n    // Fail in unicode escaping in ASCII output\n    {\n        StringBuffer buffer;\n        Writer<StringBuffer, UTF32<>, ASCII<> > writer(buffer);\n        static const UTF32<>::Ch s[] = { 0x110000, 0 }; // Out of U+0000 to U+10FFFF\n        EXPECT_FALSE(writer.String(s));\n    }\n}\n\nTEST(Writer, ValidateEncoding) {\n    {\n        StringBuffer buffer;\n        Writer<StringBuffer, UTF8<>, UTF8<>, CrtAllocator, kWriteValidateEncodingFlag> writer(buffer);\n        writer.StartArray();\n        EXPECT_TRUE(writer.String(\"\\x24\"));             // Dollar sign U+0024\n        EXPECT_TRUE(writer.String(\"\\xC2\\xA2\"));         // Cents sign U+00A2\n        EXPECT_TRUE(writer.String(\"\\xE2\\x82\\xAC\"));     // Euro sign U+20AC\n        EXPECT_TRUE(writer.String(\"\\xF0\\x9D\\x84\\x9E\")); // G clef sign U+1D11E\n        EXPECT_TRUE(writer.String(\"\\x01\"));             // SOH control U+0001\n        EXPECT_TRUE(writer.String(\"\\x1B\"));             // Escape control U+001B\n        writer.EndArray();\n        EXPECT_STREQ(\"[\\\"\\x24\\\",\\\"\\xC2\\xA2\\\",\\\"\\xE2\\x82\\xAC\\\",\\\"\\xF0\\x9D\\x84\\x9E\\\",\\\"\\\\u0001\\\",\\\"\\\\u001B\\\"]\", buffer.GetString());\n    }\n\n    // Fail in decoding invalid UTF-8 sequence http://www.cl.cam.ac.uk/~mgk25/ucs/examples/UTF-8-test.txt\n    {\n        StringBuffer buffer;\n        Writer<StringBuffer, UTF8<>, UTF8<>, CrtAllocator, kWriteValidateEncodingFlag> writer(buffer);\n        writer.StartArray();\n        EXPECT_FALSE(writer.String(\"\\xfe\"));\n        EXPECT_FALSE(writer.String(\"\\xff\"));\n        
EXPECT_FALSE(writer.String(\"\\xfe\\xfe\\xff\\xff\"));\n        writer.EndArray();\n    }\n}\n\nTEST(Writer, InvalidEventSequence) {\n    // {]\n    {\n        StringBuffer buffer;\n        Writer<StringBuffer> writer(buffer);\n        writer.StartObject();\n        EXPECT_THROW(writer.EndArray(), AssertException);\n        EXPECT_FALSE(writer.IsComplete());\n    }\n\n    // [}\n    {\n        StringBuffer buffer;\n        Writer<StringBuffer> writer(buffer);\n        writer.StartArray();\n        EXPECT_THROW(writer.EndObject(), AssertException);\n        EXPECT_FALSE(writer.IsComplete());\n    }\n\n    // { 1: \n    {\n        StringBuffer buffer;\n        Writer<StringBuffer> writer(buffer);\n        writer.StartObject();\n        EXPECT_THROW(writer.Int(1), AssertException);\n        EXPECT_FALSE(writer.IsComplete());\n    }\n\n    // { 'a' }\n    {\n        StringBuffer buffer;\n        Writer<StringBuffer> writer(buffer);\n        writer.StartObject();\n        writer.Key(\"a\");\n        EXPECT_THROW(writer.EndObject(), AssertException);\n        EXPECT_FALSE(writer.IsComplete());\n    }\n\n    // { 'a':'b','c' }\n    {\n        StringBuffer buffer;\n        Writer<StringBuffer> writer(buffer);\n        writer.StartObject();\n        writer.Key(\"a\");\n        writer.String(\"b\");\n        writer.Key(\"c\");\n        EXPECT_THROW(writer.EndObject(), AssertException);\n        EXPECT_FALSE(writer.IsComplete());\n    }\n}\n\nTEST(Writer, NaN) {\n    double nan = std::numeric_limits<double>::quiet_NaN();\n\n    EXPECT_TRUE(internal::Double(nan).IsNan());\n    StringBuffer buffer;\n    {\n        Writer<StringBuffer> writer(buffer);\n        EXPECT_FALSE(writer.Double(nan));\n    }\n    {\n        Writer<StringBuffer, UTF8<>, UTF8<>, CrtAllocator, kWriteNanAndInfFlag> writer(buffer);\n        EXPECT_TRUE(writer.Double(nan));\n        EXPECT_STREQ(\"NaN\", buffer.GetString());\n    }\n    GenericStringBuffer<UTF16<> > buffer2;\n    
Writer<GenericStringBuffer<UTF16<> > > writer2(buffer2);\n    EXPECT_FALSE(writer2.Double(nan));\n}\n\nTEST(Writer, NaNToNull) {\n    double nan = std::numeric_limits<double>::quiet_NaN();\n\n    EXPECT_TRUE(internal::Double(nan).IsNan());\n    {\n        StringBuffer buffer;\n        Writer<StringBuffer, UTF8<>, UTF8<>, CrtAllocator, kWriteNanAndInfNullFlag> writer(buffer);\n        EXPECT_TRUE(writer.Double(nan));\n        EXPECT_STREQ(\"null\", buffer.GetString());\n    }\n}\n\nTEST(Writer, Inf) {\n    double inf = std::numeric_limits<double>::infinity();\n\n    EXPECT_TRUE(internal::Double(inf).IsInf());\n    StringBuffer buffer;\n    {\n        Writer<StringBuffer> writer(buffer);\n        EXPECT_FALSE(writer.Double(inf));\n    }\n    {\n        Writer<StringBuffer> writer(buffer);\n        EXPECT_FALSE(writer.Double(-inf));\n    }\n    {\n        Writer<StringBuffer, UTF8<>, UTF8<>, CrtAllocator, kWriteNanAndInfFlag> writer(buffer);\n        EXPECT_TRUE(writer.Double(inf));\n    }\n    {\n        Writer<StringBuffer, UTF8<>, UTF8<>, CrtAllocator, kWriteNanAndInfFlag> writer(buffer);\n        EXPECT_TRUE(writer.Double(-inf));\n    }\n    EXPECT_STREQ(\"Infinity-Infinity\", buffer.GetString());\n}\n\nTEST(Writer, InfToNull) {\n    double inf = std::numeric_limits<double>::infinity();\n\n    EXPECT_TRUE(internal::Double(inf).IsInf());\n    {\n        StringBuffer buffer;\n        Writer<StringBuffer, UTF8<>, UTF8<>, CrtAllocator, kWriteNanAndInfNullFlag> writer(buffer);\n        EXPECT_TRUE(writer.Double(inf));\n        EXPECT_STREQ(\"null\", buffer.GetString());\n    }\n    {\n        StringBuffer buffer;\n        Writer<StringBuffer, UTF8<>, UTF8<>, CrtAllocator, kWriteNanAndInfNullFlag> writer(buffer);\n        EXPECT_TRUE(writer.Double(-inf));\n        EXPECT_STREQ(\"null\", buffer.GetString());\n    }\n}\n\nTEST(Writer, RawValue) {\n    StringBuffer buffer;\n    Writer<StringBuffer> writer(buffer);\n    writer.StartObject();\n    writer.Key(\"a\");\n    
writer.Int(1);\n    writer.Key(\"raw\");\n    const char json[] = \"[\\\"Hello\\\\nWorld\\\", 123.456]\";\n    writer.RawValue(json, strlen(json), kArrayType);\n    writer.EndObject();\n    EXPECT_TRUE(writer.IsComplete());\n    EXPECT_STREQ(\"{\\\"a\\\":1,\\\"raw\\\":[\\\"Hello\\\\nWorld\\\", 123.456]}\", buffer.GetString());\n}\n\nTEST(Write, RawValue_Issue1152) {\n    {\n        GenericStringBuffer<UTF32<> > sb;\n        Writer<GenericStringBuffer<UTF32<> >, UTF8<>, UTF32<> > writer(sb);\n        writer.RawValue(\"null\", 4, kNullType);\n        EXPECT_TRUE(writer.IsComplete());\n        const unsigned *out = sb.GetString();\n        EXPECT_EQ(static_cast<unsigned>('n'), out[0]);\n        EXPECT_EQ(static_cast<unsigned>('u'), out[1]);\n        EXPECT_EQ(static_cast<unsigned>('l'), out[2]);\n        EXPECT_EQ(static_cast<unsigned>('l'), out[3]);\n        EXPECT_EQ(static_cast<unsigned>(0  ), out[4]);\n    }\n\n    {\n        GenericStringBuffer<UTF8<> > sb;\n        Writer<GenericStringBuffer<UTF8<> >, UTF16<>, UTF8<> > writer(sb);\n        writer.RawValue(L\"null\", 4, kNullType);\n        EXPECT_TRUE(writer.IsComplete());\n        EXPECT_STREQ(\"null\", sb.GetString());\n    }\n\n    {\n        // Fail in transcoding\n        GenericStringBuffer<UTF16<> > buffer;\n        Writer<GenericStringBuffer<UTF16<> >, UTF8<>, UTF16<> > writer(buffer);\n        EXPECT_FALSE(writer.RawValue(\"\\\"\\xfe\\\"\", 3, kStringType));\n    }\n\n    {\n        // Fail in encoding validation\n        StringBuffer buffer;\n        Writer<StringBuffer, UTF8<>, UTF8<>, CrtAllocator, kWriteValidateEncodingFlag> writer(buffer);\n        EXPECT_FALSE(writer.RawValue(\"\\\"\\xfe\\\"\", 3, kStringType));\n    }\n}\n\n#if RAPIDJSON_HAS_CXX11_RVALUE_REFS\nstatic Writer<StringBuffer> WriterGen(StringBuffer &target) {\n    Writer<StringBuffer> writer(target);\n    writer.StartObject();\n    writer.Key(\"a\");\n    writer.Int(1);\n    return writer;\n}\n\nTEST(Writer, MoveCtor) {\n    
StringBuffer buffer;\n    Writer<StringBuffer> writer(WriterGen(buffer));\n    writer.EndObject();\n    EXPECT_TRUE(writer.IsComplete());\n    EXPECT_STREQ(\"{\\\"a\\\":1}\", buffer.GetString());\n}\n#endif\n\n#ifdef __clang__\nRAPIDJSON_DIAG_POP\n#endif\n"
  },
  {
    "path": "C/thirdparty/rapidjson/test/valgrind.supp",
    "content": "{\n\tSuppress wcslen valgrind report 1\n\tMemcheck:Cond\n\tfun:__wcslen_sse2\n}\n\n{\n    Suppress wcslen valgrind report 2\n    Memcheck:Addr8\n    fun:__wcslen_sse2\n}\n\n{\n    Suppress wcslen valgrind report 3\n    Memcheck:Value8\n    fun:__wcslen_sse2\n}\n\n{\n   Suppress wmemcmp valgrind report 4\n   Memcheck:Addr32\n   fun:__wmemcmp_avx2_movbe\n   ...\n   fun:*Uri*Parse_UTF16_Std*\n}\n\n"
  },
  {
    "path": "C/thirdparty/rapidjson/travis-doxygen.sh",
    "content": "#!/bin/bash\n# Update Doxygen documentation after push to 'master'.\n# Author: @pah\n\nset -e\n\nDOXYGEN_VER=1_8_16\nDOXYGEN_URL=\"https://codeload.github.com/doxygen/doxygen/tar.gz/Release_${DOXYGEN_VER}\"\n\n: ${GITHUB_REPO:=\"Tencent/rapidjson\"}\nGITHUB_HOST=\"github.com\"\nGITHUB_CLONE=\"git://${GITHUB_HOST}/${GITHUB_REPO}\"\nGITHUB_URL=\"https://${GITHUB_HOST}/${GITHUB_PUSH-${GITHUB_REPO}}\"\n\n# if not set, ignore password\n#GIT_ASKPASS=\"${TRAVIS_BUILD_DIR}/gh_ignore_askpass.sh\"\n\nskip() {\n\techo \"$@\" 1>&2\n\techo \"Exiting...\" 1>&2\n\texit 0\n}\n\nabort() {\n\techo \"Error: $@\" 1>&2\n\techo \"Exiting...\" 1>&2\n\texit 1\n}\n\n# TRAVIS_BUILD_DIR not set, exiting\n[ -d \"${TRAVIS_BUILD_DIR-/nonexistent}\" ] || \\\n\tabort '${TRAVIS_BUILD_DIR} not set or nonexistent.'\n\n# check for pull-requests\n[ \"${TRAVIS_PULL_REQUEST}\" = \"false\" ] || \\\n\tskip \"Not running Doxygen for pull-requests.\"\n\n# check for branch name\n[ \"${TRAVIS_BRANCH}\" = \"master\" ] || \\\n\tskip \"Running Doxygen only for updates on 'master' branch (current: ${TRAVIS_BRANCH}).\"\n\n# check for job number\n# [ \"${TRAVIS_JOB_NUMBER}\" = \"${TRAVIS_BUILD_NUMBER}.1\" ] || \\\n# \tskip \"Running Doxygen only on first job of build ${TRAVIS_BUILD_NUMBER} (current: ${TRAVIS_JOB_NUMBER}).\"\n\n# install doxygen binary distribution\ndoxygen_install()\n{\n\tcd ${TMPDIR-/tmp}\n\tcurl ${DOXYGEN_URL} -o doxygen.tar.gz\n\ttar zxvf doxygen.tar.gz\n\tmkdir doxygen_build\n\tcd doxygen_build\n\tcmake ../doxygen-Release_${DOXYGEN_VER}/\n\tmake\n    \n\texport PATH=\"${TMPDIR-/tmp}/doxygen_build/bin:$PATH\"\n\t\n\tcd ../../\n}\n\ndoxygen_run()\n{\n\tcd \"${TRAVIS_BUILD_DIR}\";\n\tdoxygen ${TRAVIS_BUILD_DIR}/build/doc/Doxyfile;\n\tdoxygen ${TRAVIS_BUILD_DIR}/build/doc/Doxyfile.zh-cn;\n}\n\ngh_pages_prepare()\n{\n\tcd \"${TRAVIS_BUILD_DIR}/build/doc\";\n\t[ ! 
-d \"html\" ] || \\\n\t\tabort \"Doxygen target directory already exists.\"\n\tgit --version\n\tgit clone --single-branch -b gh-pages \"${GITHUB_CLONE}\" html\n\tcd html\n\t# setup git config (with defaults)\n\tgit config user.name \"${GIT_NAME-travis}\"\n\tgit config user.email \"${GIT_EMAIL-\"travis@localhost\"}\"\n\t# clean working dir\n\trm -f .git/index\n\tgit clean -df\n}\n\ngh_pages_commit() {\n\tcd \"${TRAVIS_BUILD_DIR}/build/doc/html\";\n\techo \"rapidjson.org\" > CNAME\n\tgit add --all;\n\tgit diff-index --quiet HEAD || git commit -m \"Automatic doxygen build\";\n}\n\ngh_setup_askpass() {\n\tcat > ${GIT_ASKPASS} <<EOF\n#!/bin/bash\necho\nexit 0\nEOF\n\tchmod a+x \"$GIT_ASKPASS\"\n}\n\ngh_pages_push() {\n\t# check for secure variables\n\t[ \"${TRAVIS_SECURE_ENV_VARS}\" = \"true\" ] || \\\n\t\tskip \"Secure variables not available, not updating GitHub pages.\"\n\t# check for GitHub access token\n\t[ \"${GH_TOKEN+set}\" = set ] || \\\n\t\tskip \"GitHub access token not available, not updating GitHub pages.\"\n\t[ \"${#GH_TOKEN}\" -eq 40 ] || \\\n\t\tabort \"GitHub token invalid: found ${#GH_TOKEN} characters, expected 40.\"\n\n\tcd \"${TRAVIS_BUILD_DIR}/build/doc/html\";\n\t# setup credentials (hide in \"set -x\" mode)\n\tgit remote set-url --push origin \"${GITHUB_URL}\"\n\tgit config credential.helper 'store'\n\t# ( set +x ; git config credential.username \"${GH_TOKEN}\" )\n\t( set +x ; [ -f ${HOME}/.git-credentials ] || \\\n\t\t\t( echo \"https://${GH_TOKEN}:@${GITHUB_HOST}\" > ${HOME}/.git-credentials ; \\\n\t\t\t chmod go-rw ${HOME}/.git-credentials ) )\n\t# push to GitHub\n\tgit push origin gh-pages\n}\n\ndoxygen_install\ngh_pages_prepare\ndoxygen_run\ngh_pages_commit\ngh_pages_push\n\n"
  },
  {
    "path": "CMakeLists.txt",
    "content": "cmake_minimum_required (VERSION 2.8.8)\nproject (Fledge)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O3\")\n\nEXECUTE_PROCESS( COMMAND grep -o ^NAME=.* /etc/os-release COMMAND cut -f2 -d\\\" COMMAND sed s/\\\"//g OUTPUT_VARIABLE os_name )\nEXECUTE_PROCESS( COMMAND grep -o ^VERSION_ID=.* /etc/os-release COMMAND cut -f2 -d\\\" COMMAND sed s/\\\"//g OUTPUT_VARIABLE os_version )\n\nif ( ( ${os_name} MATCHES \"Red Hat\" OR ${os_name} MATCHES \"CentOS\") AND ( ${os_version} MATCHES \"7\" ) )\n\tadd_compile_options(-D RHEL_CENTOS_7)\n\tmessage( \"System is RHEL/CentOS 7\" )\nelse()\n\tmessage( \"System is not RHEL/CentOS 7\" )\nendif()\n\nfind_package(PkgConfig REQUIRED)\n\nadd_subdirectory(C/common)\nadd_subdirectory(C/services/common)\nadd_subdirectory(C/plugins/common)\nadd_subdirectory(C/plugins/filter/common)\nadd_subdirectory(C/services/storage)\nadd_subdirectory(C/plugins/storage/common)\nadd_subdirectory(C/plugins/storage/postgres)\nadd_subdirectory(C/plugins/storage/sqlite)\nadd_subdirectory(C/plugins/storage/sqlitelb)\nadd_subdirectory(C/plugins/storage/sqlitememory)\nadd_subdirectory(C/services/south)\nadd_subdirectory(C/services/north)\nadd_subdirectory(C/services/south-plugin-interfaces/python)\nadd_subdirectory(C/services/south-plugin-interfaces/python/async_ingest_pymodule)\nadd_subdirectory(C/services/notification-plugin-interfaces/python)\nadd_subdirectory(C/services/filter-plugin-interfaces/python)\nadd_subdirectory(C/services/filter-plugin-interfaces/python/filter_ingest_pymodule)\nadd_subdirectory(C/services/north-plugin-interfaces/python)\nadd_subdirectory(C/tasks/north)\nadd_subdirectory(C/tasks/purge_system)\nadd_subdirectory(C/tasks/check_updates)\nadd_subdirectory(C/tasks/statistics_history)\nadd_subdirectory(C/plugins/utils)\nadd_subdirectory(C/plugins/north/OMF)\n\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing to Fledge\n\nThe project welcomes contributions of all types; documentation, code\nchanges, new plugins, scripts, just simply reports of the way you use Fledge\nor suggestions of features you would like to see within Fledge.\n\nThe following is a set of guidelines for contributing to Fledge IoT\nproject and its plugins, which are hosted in\nthe [fledge-iot Organization](https://github.com/fledge-iot) on GitHub.\n\nTo give us feedback or make suggestions use the fledge or fledge-help Slack Channel on [LFEdge](slack.lfedge.org).\n\nIf you find a security vulnerability within Fledge or any of its plugins then we request that **you inform us via email rather than by opening an issue in GitHub**. This allows us to act on it without giving information that others might exploit. Any security vulnerability will be discussed at the project TSC and user will be informed of the need to upgrade via the Fledge Slack channel. The email address to which vulnerabilities should be reported is security@dianomic.com.\n\n## Pull requests\n\n**Please ask first** before embarking on any significant work (e.g. implementing new features,\nrefactoring code etc.), otherwise you risk spending a lot of time working on something that might\nalready be underway or is unlikely to be merged into the project.\n\nJoin the fledge or fledge-help Slack channel on [LFEdge](slack.lfedge.org). This\nwill allow you to talk to the wider fledge community and discuss your\nproposed changes and get help from the maintainers when needed.\n\nPlease adhere to the coding conventions used throughout the project and\nlimit your changes to functional rather than aesthetic changes to make\nit as easy as possible to review your changes. We also encourage you to\ncomment your code changes for the same reason and for the benefit of those\nthat come after you.\n\nAdhering to the following process is the best way to get your work included in the project:\n\n1. 
[Fork](https://help.github.com/articles/fork-a-repo/) the project, clone your fork, and configure\n   the remotes:\n\n   ```bash\n   # Clone your fork of the repo into the current directory\n   git clone https://github.com/<your-username>/fledge.git\n\n   # Navigate to the newly cloned directory\n   cd fledge\n\n   # Assign the original repo to a remote called \"upstream\"\n   git remote add upstream https://github.com/fledge-iot/fledge.git\n   ```\n\n2. If you cloned a while ago, get the latest changes of the develop branch from upstream:\n\n   ```bash\n   git checkout develop\n   git pull --rebase upstream develop\n   ```\n\n3. Create a new topic branch from `develop`. If you are working on a particular issue from the Project Jira then the convention for branch names is to use the Jira name; otherwise choose a descriptive branch name that contains your GitHub username in order to help us track the changes.\n\n   ```bash\n   git checkout -b [topic-branch-name] upstream/develop\n   ```\n\n4. Commit your changes in logical chunks. When you are ready to commit, make sure to write a Good\n   Commit Message™.  Use [interactive rebase](https://help.github.com/articles/about-git-rebase)\n   to group your commits into logical units of work before making them public.\n\n   Note that every commit you make must be signed off. By signing off your work you indicate that you\n   are accepting the [Developer Certificate of Origin](https://developercertificate.org/).\n\n   Use your real name (sorry, no pseudonyms or anonymous contributions). If you set your `user.name`\n   and `user.email` git configs, you can sign your commit automatically with `git commit -s`.\n\n5. Locally merge (or rebase) the upstream development branch into your topic branch:\n\n   ```bash\n   git pull --rebase upstream develop\n   ```\n\n6. Push your topic branch up to your fork:\n\n   ```bash\n   git push -u origin [topic-branch-name]\n   ```\n\n7. 
[Open a Pull Request](https://help.github.com/articles/using-pull-requests/) with a clear title\n   and detailed description. Always raise the pull request against the `develop` branch of upstream. \n   Request at least one reviewer to expedite the review, and verify the GitHub status checks, which let you know if your commits meet the conditions set for the repository you're contributing to. \n   GitHub status checks are based on external processes, such as continuous integration builds, which run for each push you make to a repository. You can see the pending, passing, or failing state of status checks next to individual commits in your pull request.\n\n\n### Plugins\n\nThe above addresses the main Fledge repository; however, each plugin has\na repository of its own which contains the code for the plugin and the\ndocumentation for the plugin. If you wish to work on an existing plugin\nthen the process is similar to that above; just replace the \"fledge.git\"\nrepository with the fledge-{plugin-type}-{plugin-name}.git repository, for example:\n\n   ```bash\n   # Clone your fork of the repo into the current directory\n   git clone https://github.com/<your-username>/fledge-south-sinusoid.git\n\n   # Navigate to the newly cloned directory\n   cd fledge-south-sinusoid\n\n   # Assign the original repo to a remote called \"upstream\"\n   git remote add upstream https://github.com/fledge-iot/fledge-south-sinusoid.git\n   ```\nThen repeat the remaining steps described [here](#pull-requests).\n\nIf you wish to create a new plugin then contact the maintainers and we\nwill create a blank base repository for you to add your code into.\n"
  },
  {
    "path": "GOVERNANCE.MD",
    "content": "# Governance\n\nProject governance as well as policies, procedures and instructions for contributing to FLEDGE can be found on our Wiki site at the following locations:\n\n\n- [Fledge Governance Wiki Page](https://wiki.lfedge.org/display/FLEDGE/Governance)\n- [Contributor's Guide](CONTRIBUTING.md)\n"
  },
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright 2019 Dianomic Systems\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "Makefile",
    "content": "###############################################################################\n################################### COMMANDS ##################################\n###############################################################################\n# Check RedHat || CentOS\n$(eval PLATFORM_RH=$(shell (lsb_release -ds 2>/dev/null || cat /etc/*release 2>/dev/null | head -n1 || uname -om) | egrep '(Red Hat|CentOS)'))\n$(eval OS_VERSION=$(shell grep -o '^VERSION_ID=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g'))\n# RHEL gives os full version e.g. 7.9; hence check for prefix (major) only\n$(eval OS_VERSION_PREFIX=$(shell echo $(OS_VERSION) | head -c 1))\n\n# Log Platform RedHat || CentOS\n$(if $(PLATFORM_RH), $(info Platform is $(PLATFORM_RH) $(OS_VERSION)))\n\n# For RedHat || CentOS we need rh-python36\nifneq (\"$(PLATFORM_RH)\",\"\")\n   ifeq (\"$(OS_VERSION_PREFIX)\", \"7\")\n\t# CentOS we need rh-python36 and devtoolset-7\n\tPIP_INSTALL_REQUIREMENTS := source scl_source enable rh-python36 && python3 -m pip install -Ir\n\tPYTHON_BUILD_PACKAGE = source scl_source enable rh-python36 && python3 setup.py build -b ../$(PYTHON_BUILD_DIR)\n\tCMAKE := source scl_source enable rh-python36 && source scl_source enable devtoolset-7 && cmake\n    else\n\tPIP_INSTALL_REQUIREMENTS := python3 -m pip install -Ir\n\tPYTHON_BUILD_PACKAGE = python3 setup.py build -b ../$(PYTHON_BUILD_DIR)\n\tCMAKE := cmake\n    endif\nelse\n\tPIP_INSTALL_REQUIREMENTS := python3 -m pip install -Ir\n\tPYTHON_BUILD_PACKAGE = python3 setup.py build -b ../$(PYTHON_BUILD_DIR)\n\tCMAKE := cmake\nendif\n\n# Extract Python version components\nPYTHON_VERSION := $(shell python3 --version 2>&1 | awk '{print $$2}')\nPYTHON_MAJOR := $(word 1, $(subst ., ,$(PYTHON_VERSION)))\nPYTHON_MINOR := $(word 2, $(subst ., ,$(PYTHON_VERSION)))\nPYTHON_PATCH := $(word 3, $(subst ., ,$(PYTHON_VERSION)))\n\n# Apply the --break-system-packages flag only for Python versions 3.11 or later\nifeq ($(shell test 
$(PYTHON_MAJOR) -gt 3 || { [ $(PYTHON_MAJOR) -eq 3 ] && [ $(PYTHON_MINOR) -ge 11 ]; } && echo 1 || echo 0),1)\n    PIP_BREAK_SYSTEM_PACKAGES := --break-system-packages\nelse\n    PIP_BREAK_SYSTEM_PACKAGES :=  # No flag\nendif\n\nMKDIR_PATH := mkdir -p\nCD := cd\nLN := ln -sf\nPIP_USER_FLAG = --user\nUSE_PIP_CACHE := no\n\nRM_DIR := rm -r\nRM_FILE := rm\nMAKE_INSTALL = $(MAKE) install\nCP            := cp\nCP_DIR        := cp -r\nSSL_NAME      := \"fledge\"\nAUTH_NAME     := \"ca\"\nSSL_DAYS      := \"365\"\n\n###############################################################################\n################################### DIRS/FILES ################################\n###############################################################################\n# PARENT DIR\nMKFILE_PATH := $(abspath $(lastword $(MAKEFILE_LIST)))\nCURRENT_DIR := $(dir $(MKFILE_PATH))\n\n# C BUILD DIRS/FILES\nCMAKE_FILE                    := $(CURRENT_DIR)/CMakeLists.txt\nCMAKE_BUILD_DIR               := cmake_build\nCMAKE_GEN_MAKEFILE            := $(CURRENT_DIR)/$(CMAKE_BUILD_DIR)/Makefile\nCMAKE_SERVICES_DIR            := $(CURRENT_DIR)/$(CMAKE_BUILD_DIR)/C/services\nCMAKE_TASKS_DIR               := $(CURRENT_DIR)/$(CMAKE_BUILD_DIR)/C/tasks\nCMAKE_STORAGE_BINARY          := $(CMAKE_SERVICES_DIR)/storage/fledge.services.storage\nCMAKE_SOUTH_BINARY            := $(CMAKE_SERVICES_DIR)/south/fledge.services.south\nCMAKE_NORTH_SERVICE_BINARY    := $(CMAKE_SERVICES_DIR)/north/fledge.services.north\nCMAKE_NORTH_BINARY            := $(CMAKE_TASKS_DIR)/north/sending_process/sending_process\nCMAKE_PURGE_SYSTEM_BINARY     := $(CMAKE_TASKS_DIR)/purge_system/purge_system\nCMAKE_CHECK_UPDATES_BINARY    := $(CMAKE_TASKS_DIR)/check_updates/check_updates\nCMAKE_STATISTICS_BINARY       := $(CMAKE_TASKS_DIR)/statistics_history/statistics_history\nCMAKE_PLUGINS_DIR             := $(CURRENT_DIR)/$(CMAKE_BUILD_DIR)/C/plugins\nDEV_SERVICES_DIR              := $(CURRENT_DIR)/services\nDEV_TASKS_DIR                 := 
$(CURRENT_DIR)/tasks\nSYMLINK_PLUGINS_DIR           := $(CURRENT_DIR)/plugins\nSYMLINK_STORAGE_BINARY        := $(DEV_SERVICES_DIR)/fledge.services.storage\nSYMLINK_SOUTH_BINARY          := $(DEV_SERVICES_DIR)/fledge.services.south\nSYMLINK_NORTH_SERVICE_BINARY  := $(DEV_SERVICES_DIR)/fledge.services.north\nSYMLINK_NORTH_BINARY          := $(DEV_TASKS_DIR)/sending_process\nSYMLINK_PURGE_SYSTEM_BINARY   := $(DEV_TASKS_DIR)/purge_system\nSYMLINK_CHECK_UPDATES_BINARY  := $(DEV_TASKS_DIR)/check_updates\nSYMLINK_STATISTICS_BINARY     := $(DEV_TASKS_DIR)/statistics_history\nASYNC_INGEST_PYMODULE         := $(CURRENT_DIR)/python/async_ingest.so*\nFILTER_INGEST_PYMODULE        := $(CURRENT_DIR)/python/filter_ingest.so*\n\n# PYTHON BUILD DIRS/FILES\nPYTHON_SRC_DIR := python\nPYTHON_BUILD_DIR := python_build_dir\nPYTHON_LIB_DIR := $(PYTHON_BUILD_DIR)/lib\nPYTHON_REQUIREMENTS_FILE := $(PYTHON_SRC_DIR)/requirements.txt\nPYTHON_SETUP_FILE := $(PYTHON_SRC_DIR)/setup.py\n\n# DATA AND ETC DIRS/FILES\nDATA_SRC_DIR := data\n\n# INSTALL DIRS\nINSTALL_DIR=$(DESTDIR)/usr/local/fledge\nPYTHON_INSTALL_DIR=$(INSTALL_DIR)/python\nSCRIPTS_INSTALL_DIR=$(INSTALL_DIR)/scripts\nBIN_INSTALL_DIR=$(INSTALL_DIR)/bin\nEXTRAS_INSTALL_DIR=$(INSTALL_DIR)/extras\nSCRIPT_COMMON_INSTALL_DIR = $(SCRIPTS_INSTALL_DIR)/common\nSCRIPT_PLUGINS_STORAGE_INSTALL_DIR = $(SCRIPTS_INSTALL_DIR)/plugins/storage\nSCRIPT_SERVICES_INSTALL_DIR = $(SCRIPTS_INSTALL_DIR)/services\nSCRIPT_TASKS_INSTALL_DIR = $(SCRIPTS_INSTALL_DIR)/tasks\nFOGBENCH_PYTHON_INSTALL_DIR = $(EXTRAS_INSTALL_DIR)/python\n\n# DB schema update\nSQLITE_SCHEMA_UPDATE_SCRIPT_SRC := scripts/plugins/storage/sqlite/schema_update.sh\nSQLITELB_SCHEMA_UPDATE_SCRIPT_SRC := scripts/plugins/storage/sqlitelb/schema_update.sh\nPOSTGRES_SCHEMA_UPDATE_SCRIPT_SRC := scripts/plugins/storage/postgres/schema_update.sh\nPOSTGRES_SCHEMA_UPDATE_DIR := $(SCRIPTS_INSTALL_DIR)/plugins/storage/postgres\nSQLITE_SCHEMA_UPDATE_DIR := 
$(SCRIPTS_INSTALL_DIR)/plugins/storage/sqlite\nSQLITELB_SCHEMA_UPDATE_DIR := $(SCRIPTS_INSTALL_DIR)/plugins/storage/sqlitelb\n\n# SCRIPTS TO INSTALL IN BIN DIR\nFOGBENCH_SCRIPT_SRC        := scripts/extras/fogbench\nFLEDGE_SCRIPT_SRC         := scripts/fledge\nFLEDGE_UPDATE_SRC         := scripts/extras/fledge_update\nUPDATE_TASK_APT_SRC        := scripts/extras/update_task.apt\nUPDATE_TASK_SNAPPY_SRC     := scripts/extras/update_task.snappy\nUPDATE_TASK_YUM_SRC        := scripts/extras/update_task.yum\nSUDOERS_SRC                := scripts/extras/fledge.sudoers\nSUDOERS_SRC_RH             := scripts/extras/fledge.sudoers_rh\n\n# SCRIPTS TO INSTALL IN SCRIPTS DIR\nCOMMON_SCRIPTS_SRC          := scripts/common\nPOSTGRES_SCRIPT_SRC         := scripts/plugins/storage/postgres.sh\nSQLITE_SCRIPT_SRC           := scripts/plugins/storage/sqlite.sh\nSQLITELB_SCRIPT_SRC         := scripts/plugins/storage/sqlitelb.sh\nSOUTH_C_SCRIPT_SRC          := scripts/services/south_c\nSTORAGE_SERVICE_SCRIPT_SRC  := scripts/services/storage\nSTORAGE_SCRIPT_SRC          := scripts/storage\nNORTH_C_SCRIPT_SRC          := scripts/tasks/north_c\nNORTH_SERVICE_C_SCRIPT_SRC  := scripts/services/north_C\nNOTIFICATION_C_SCRIPT_SRC   := scripts/services/notification_c\nDISPATCHER_C_SCRIPT_SRC     := scripts/services/dispatcher_c\nBUCKET_STORAGE_C_SCRIPT_SRC := scripts/services/bucket_storage_c\nPURGE_SCRIPT_SRC            := scripts/tasks/purge\nPURGE_C_SCRIPT_SRC          := scripts/tasks/purge_system\nCHECK_UPDATES_SCRIPT_SRC    := scripts/tasks/check_updates\nSTATISTICS_SCRIPT_SRC       := scripts/tasks/statistics\nBACKUP_SRC                  := scripts/tasks/backup\nRESTORE_SRC                 := scripts/tasks/restore\nCHECK_CERTS_TASK_SCRIPT_SRC := scripts/tasks/check_certs\nAUTOMATION_TASK_SCRIPT_SRC  := scripts/tasks/automation_script\nCERTIFICATES_SCRIPT_SRC     := scripts/certificates\nAUTH_CERTIFICATES_SCRIPT_SRC := scripts/auth_certificates\nPACKAGE_UPDATE_SCRIPT_SRC   := 
scripts/package\nFLEDGE_MNT_SCRIPT           := scripts/fledge_mnt\n\n# Custom location of SQLite3 library\nFLEDGE_HAS_SQLITE3_PATH    := /tmp/sqlite3-pkg/src\n\n# EXTRA SCRIPTS\nEXTRAS_SCRIPTS_SRC_DIR      := extras/scripts\n\n# FOGBENCH\nFOGBENCH_PYTHON_SRC_DIR     := extras/python/fogbench\n\n# Fledge Version file\nFLEDGE_VERSION_FILE        := VERSION\n\n###############################################################################\n################################### OTHER VARS ################################\n###############################################################################\n# ETC\nPACKAGE_NAME=Fledge\n\n###############################################################################\n############################ PRIMARY TARGETS ##################################\n###############################################################################\n# default\n# compile any code that must be compiled\n# generally prepare the development tree to allow for core to be run\ndefault : apply_version \\\n\tgenerate_selfcertificate \\\n\tc_build $(SYMLINK_STORAGE_BINARY) $(SYMLINK_SOUTH_BINARY) $(SYMLINK_NORTH_SERVICE_BINARY) $(SYMLINK_NORTH_BINARY) $(SYMLINK_PURGE_SYSTEM_BINARY) $(SYMLINK_CHECK_UPDATES_BINARY) $(SYMLINK_STATISTICS_BINARY) $(SYMLINK_PLUGINS_DIR) \\\n\tpython_build python_requirements_user\n\napply_version :\n# VERSION : this file contains Fledge app version and Fledge DB schema revision\n#\n# Example:\n# fledge_version=1.2\n# fledge_schema=3\n#\n# Note: variable names are case insensitive, all spaces are removed\n# Get variables and export FLEDGE_VERSION and FLEDGE_SCHEMA\n\t$(eval FLEDGE_VERSION := $(shell cat $(FLEDGE_VERSION_FILE) | tr -d ' ' | grep -i \"FLEDGE_VERSION=\" | sed -e 's/\\(.*\\)=\\(.*\\)/\\2/g'))\n\t$(eval FLEDGE_SCHEMA := $(shell cat $(FLEDGE_VERSION_FILE) | tr -d ' ' | grep -i \"FLEDGE_SCHEMA=\" | sed -e 's/\\(.*\\)=\\(.*\\)/\\2/g'))\n\t$(if $(FLEDGE_VERSION),$(eval FLEDGE_VERSION=$(FLEDGE_VERSION)),$(error FLEDGE_VERSION 
is not set, check VERSION file))\n\t$(if $(FLEDGE_SCHEMA),$(eval FLEDGE_SCHEMA=$(FLEDGE_SCHEMA)),$(error FLEDGE_SCHEMA is not set, check VERSION file))\n\n# Print build or install message based on MAKECMDGOALS var\nifeq ($(MAKECMDGOALS),install)\n\t$(eval ACTION=\"Installing\")\nelse\n\t$(eval ACTION=\"Building\")\nendif\n\t@echo \"$(ACTION) $(PACKAGE_NAME) version $(FLEDGE_VERSION), DB schema $(FLEDGE_SCHEMA)\"\n\n# Use cache for python requirements depending on the value of USE_PIP_CACHE\nifeq ($(USE_PIP_CACHE), yes)\n    $(eval NO_CACHE_DIR=)\nelse\n    $(eval NO_CACHE_DIR= --no-cache-dir)\nendif\n\n# Check whether this Fledge can be installed over an existing one:\nschema_check : apply_version\n###\n# Call check_schema_update.sh (param 1 is the installed Fledge VERSION file path, param 2 is the new VERSION file path)\n# and grab its output\n# Note: DATA_INSTALL_DIR is passed to the called script via export\n###\n\t@$(eval SCHEMA_CHANGE_OUTPUT=$(shell export DATA_INSTALL_DIR=$(DATA_INSTALL_DIR); scripts/common/check_schema_update.sh \"$(INSTALL_DIR)/${FLEDGE_VERSION_FILE}\" \"${FLEDGE_VERSION_FILE}\"))\n\n# Check for \"error\" \"warning\"\n\t@$(eval SCHEMA_CHANGE_ERROR=$(shell echo $(SCHEMA_CHANGE_OUTPUT) | grep -i error))\n\t@$(eval SCHEMA_CHANGE_WARNING=$(shell echo $(SCHEMA_CHANGE_OUTPUT) | grep -i warning))\n\n# Abort, print warning or info message\n\t$(if $(SCHEMA_CHANGE_ERROR),$(error Fledge DB schema update cannot be performed as pre-install task: $(SCHEMA_CHANGE_ERROR)),)\n\t$(if $(SCHEMA_CHANGE_WARNING),$(warning $(SCHEMA_CHANGE_WARNING)),$(info -- Fledge DB schema check OK: $(SCHEMA_CHANGE_OUTPUT)))\n\n#\n# install\n# Creates a deployment structure in the default destination, /usr/local/fledge\n# Destination may be overridden by use of the DESTDIR=<location> directive\n# This first does a make to build anything needed for the installation.\ninstall : $(INSTALL_DIR) \\\n\tgenerate_selfcertificate \\\n\tschema_check \\\n\tfledge_version_file_install \\\n\tc_install 
\\\n\tpython_install \\\n\tpython_requirements \\\n\tscripts_install \\\n\tbin_install \\\n\textras_install \\\n\tdata_install \n\n###############################################################################\n############################ PRE-REQUISITE SCRIPTS ############################\n###############################################################################\ngenerate_selfcertificate:\n\tscripts/certificates $(SSL_NAME) $(SSL_DAYS)\n\tscripts/auth_certificates ca $(AUTH_NAME) $(SSL_DAYS)\n\tscripts/auth_certificates user user $(SSL_DAYS)\n\tscripts/auth_certificates user admin $(SSL_DAYS)\n\n###############################################################################\n############################ C BUILD/INSTALL TARGETS ##########################\n###############################################################################\n# run make to execute the makefiles produced by cmake\nc_build : $(CMAKE_GEN_MAKEFILE)\n\t$(CD) $(CMAKE_BUILD_DIR) ; $(MAKE)\n# Local copy of sqlite3 command line tool if needed\n# Copy the cmd line tool into sqlite plugin dir\nifneq (\"$(wildcard $(FLEDGE_HAS_SQLITE3_PATH))\",\"\")\n\t$(info  SQLite3 package has been found in $(FLEDGE_HAS_SQLITE3_PATH))\n\t$(CP) $(FLEDGE_HAS_SQLITE3_PATH)/sqlite3 $(CMAKE_PLUGINS_DIR)/storage/sqlite/\nendif\n\n# run cmake to generate makefiles\n# always rerun cmake because:\n#   parent CMakeLists.txt may have changed\n#   CMakeLists.txt files in subdirectories may have changed\n$(CMAKE_GEN_MAKEFILE) : $(CMAKE_FILE) $(CMAKE_BUILD_DIR)\n\t$(CD) $(CMAKE_BUILD_DIR) ; $(CMAKE) $(CURRENT_DIR)\n\n# create build dir\n$(CMAKE_BUILD_DIR) :\n\t$(MKDIR_PATH) $@\n\n# create symlink to storage binary\n$(SYMLINK_STORAGE_BINARY) : $(DEV_SERVICES_DIR)\n\t$(LN) $(CMAKE_STORAGE_BINARY) $(SYMLINK_STORAGE_BINARY)\n\n# create symlink to south binary\n$(SYMLINK_SOUTH_BINARY) : $(DEV_SERVICES_DIR)\n\t$(LN) $(CMAKE_SOUTH_BINARY) $(SYMLINK_SOUTH_BINARY)\n\n# create symlink to north service 
binary\n$(SYMLINK_NORTH_SERVICE_BINARY) : $(DEV_SERVICES_DIR)\n\t$(LN) $(CMAKE_NORTH_SERVICE_BINARY) $(SYMLINK_NORTH_SERVICE_BINARY)\n\n# create services dir\n$(DEV_SERVICES_DIR) :\n\t$(MKDIR_PATH) $(DEV_SERVICES_DIR)\n\n# create symlink to sending_process binary\n$(SYMLINK_NORTH_BINARY) : $(DEV_TASKS_DIR)\n\t$(LN) $(CMAKE_NORTH_BINARY) $(SYMLINK_NORTH_BINARY)\n\n# create symlink to purge_system binary\n$(SYMLINK_PURGE_SYSTEM_BINARY) : $(DEV_TASKS_DIR)\n\t$(LN) $(CMAKE_PURGE_SYSTEM_BINARY) $(SYMLINK_PURGE_SYSTEM_BINARY)\n\n# create symlink to check_updates binary\n$(SYMLINK_CHECK_UPDATES_BINARY) : $(DEV_TASKS_DIR)\n\t$(LN) $(CMAKE_CHECK_UPDATES_BINARY) $(SYMLINK_CHECK_UPDATES_BINARY)\n\n# create symlink to statistics_history binary\n$(SYMLINK_STATISTICS_BINARY) : $(DEV_TASKS_DIR)\n\t$(LN) $(CMAKE_STATISTICS_BINARY) $(SYMLINK_STATISTICS_BINARY)\n\n\n# create tasks dir\n$(DEV_TASKS_DIR) :\n\t$(MKDIR_PATH) $(DEV_TASKS_DIR)\n\n# create symlink for plugins dir\n$(SYMLINK_PLUGINS_DIR) :\n\t$(LN) $(CMAKE_PLUGINS_DIR) $(SYMLINK_PLUGINS_DIR)\n\n# run make install on cmake based components\nc_install : c_build\n\t$(CD) $(CMAKE_BUILD_DIR) ; $(MAKE_INSTALL)\n\n###############################################################################\n###################### PYTHON BUILD/INSTALL TARGETS ###########################\n###############################################################################\n# build python source\npython_build : $(PYTHON_SETUP_FILE)\n\t$(CD) $(PYTHON_SRC_DIR) ; $(PYTHON_BUILD_PACKAGE) ; $(CD) $(CURRENT_DIR) ; $(CP) $(PYTHON_REQUIREMENTS_FILE) $(PYTHON_LIB_DIR)/.\n\n# install python requirements without --user\npython_requirements : $(PYTHON_REQUIREMENTS_FILE)\n\t$(PIP_INSTALL_REQUIREMENTS) $(PYTHON_REQUIREMENTS_FILE) $(NO_CACHE_DIR) $(PIP_BREAK_SYSTEM_PACKAGES)\n\n# install python requirements for user\npython_requirements_user : $(PYTHON_REQUIREMENTS_FILE)\n\t$(PIP_INSTALL_REQUIREMENTS) $(PYTHON_REQUIREMENTS_FILE) $(PIP_USER_FLAG) $(NO_CACHE_DIR) 
$(PIP_BREAK_SYSTEM_PACKAGES)\n\n# create python install dir\n$(PYTHON_INSTALL_DIR) :\n\t$(MKDIR_PATH) $@\n\n# copy python package into install dir\npython_install : python_build $(PYTHON_INSTALL_DIR)\n\t$(CP_DIR) $(PYTHON_LIB_DIR)/* $(PYTHON_INSTALL_DIR)\n\n# copy Fledge version info file into install dir\nfledge_version_file_install :\n\t$(CP) $(FLEDGE_VERSION_FILE) $(INSTALL_DIR)\n\n###############################################################################\n###################### SCRIPTS INSTALL TARGETS ################################\n###############################################################################\n# install scripts\nscripts_install : $(SCRIPTS_INSTALL_DIR) \\\n\tinstall_common_scripts \\\n\tinstall_postgres_script \\\n\tinstall_sqlite_script \\\n\tinstall_sqlitelb_script \\\n\tinstall_south_c_script \\\n\tinstall_storage_service_script \\\n\tinstall_north_c_script \\\n\tinstall_north_service_c_script \\\n\tinstall_notification_c_script \\\n\tinstall_dispatcher_c_script \\\n\tinstall_bucket_storage_c_script \\\n\tinstall_purge_script \\\n\tinstall_check_updates_script \\\n\tinstall_statistics_script \\\n\tinstall_storage_script \\\n\tinstall_backup_script \\\n\tinstall_restore_script \\\n\tinstall_check_certificates_script \\\n\tinstall_automation_script \\\n\tinstall_certificates_script \\\n\tinstall_auth_certificates_script \\\n\tinstall_package_update_script \\\n\tinstall_fledge_mnt_script\n\n# create scripts install dir\n$(SCRIPTS_INSTALL_DIR) :\n\t$(MKDIR_PATH) $@\n\ninstall_common_scripts : $(SCRIPT_COMMON_INSTALL_DIR) $(COMMON_SCRIPTS_SRC)\n\t$(CP) $(COMMON_SCRIPTS_SRC)/*.sh $(SCRIPT_COMMON_INSTALL_DIR)\n\t$(CP) $(COMMON_SCRIPTS_SRC)/*.py $(SCRIPT_COMMON_INSTALL_DIR)\n\ninstall_postgres_script : $(SCRIPT_PLUGINS_STORAGE_INSTALL_DIR) \\\n\t$(POSTGRES_SCHEMA_UPDATE_DIR) $(POSTGRES_SCRIPT_SRC) $(POSTGRES_SCHEMA_UPDATE_SCRIPT_SRC)\n\t$(CP) $(POSTGRES_SCRIPT_SRC) $(SCRIPT_PLUGINS_STORAGE_INSTALL_DIR)\n\t$(CP) 
$(POSTGRES_SCHEMA_UPDATE_SCRIPT_SRC) $(POSTGRES_SCHEMA_UPDATE_DIR)\n\t$(CP_DIR) scripts/plugins/storage/postgres/upgrade $(POSTGRES_SCHEMA_UPDATE_DIR)\n\t$(CP_DIR) scripts/plugins/storage/postgres/downgrade $(POSTGRES_SCHEMA_UPDATE_DIR)\n\ninstall_sqlite_script : $(SCRIPT_PLUGINS_STORAGE_INSTALL_DIR) \\\n\t$(SQLITE_SCHEMA_UPDATE_DIR) $(SQLITE_SCRIPT_SRC) $(SQLITE_SCHEMA_UPDATE_SCRIPT_SRC)\n\t$(CP) $(SQLITE_SCRIPT_SRC) $(SCRIPT_PLUGINS_STORAGE_INSTALL_DIR)\n\t$(CP) $(SQLITE_SCHEMA_UPDATE_SCRIPT_SRC) $(SQLITE_SCHEMA_UPDATE_DIR)\n\t$(CP_DIR) scripts/plugins/storage/sqlite/upgrade $(SQLITE_SCHEMA_UPDATE_DIR)\n\t$(CP_DIR) scripts/plugins/storage/sqlite/downgrade $(SQLITE_SCHEMA_UPDATE_DIR)\n\ninstall_sqlitelb_script : $(SCRIPT_PLUGINS_STORAGE_INSTALL_DIR) \\\n\t$(SQLITELB_SCHEMA_UPDATE_DIR) $(SQLITELB_SCRIPT_SRC) $(SQLITELB_SCHEMA_UPDATE_SCRIPT_SRC)\n\t$(CP) $(SQLITELB_SCRIPT_SRC) $(SCRIPT_PLUGINS_STORAGE_INSTALL_DIR)\n\t$(CP) $(SQLITELB_SCHEMA_UPDATE_SCRIPT_SRC) $(SQLITELB_SCHEMA_UPDATE_DIR)\n\t$(CP_DIR) scripts/plugins/storage/sqlite/upgrade $(SQLITELB_SCHEMA_UPDATE_DIR)\n\t$(CP_DIR) scripts/plugins/storage/sqlite/downgrade $(SQLITELB_SCHEMA_UPDATE_DIR)\n\ninstall_south_c_script : $(SCRIPT_SERVICES_INSTALL_DIR) $(SOUTH_C_SCRIPT_SRC)\n\t$(CP) $(SOUTH_C_SCRIPT_SRC) $(SCRIPT_SERVICES_INSTALL_DIR)\n\ninstall_storage_service_script : $(SCRIPT_SERVICES_INSTALL_DIR) $(STORAGE_SERVICE_SCRIPT_SRC)\n\t$(CP) $(STORAGE_SERVICE_SCRIPT_SRC) $(SCRIPT_SERVICES_INSTALL_DIR)\n\ninstall_north_c_script : $(SCRIPT_TASKS_INSTALL_DIR) $(NORTH_C_SCRIPT_SRC)\n\t$(CP) $(NORTH_C_SCRIPT_SRC) $(SCRIPT_TASKS_INSTALL_DIR)\n\ninstall_north_service_c_script : $(SCRIPT_SERVICES_INSTALL_DIR) $(NORTH_SERVICE_C_SCRIPT_SRC)\n\t$(CP) $(NORTH_SERVICE_C_SCRIPT_SRC) $(SCRIPT_SERVICES_INSTALL_DIR)\n\ninstall_notification_c_script: $(SCRIPT_SERVICES_INSTALL_DIR) $(NOTIFICATION_C_SCRIPT_SRC)\n\t$(CP) $(NOTIFICATION_C_SCRIPT_SRC) $(SCRIPT_SERVICES_INSTALL_DIR)\n\ninstall_dispatcher_c_script: 
$(SCRIPT_SERVICES_INSTALL_DIR) $(DISPATCHER_C_SCRIPT_SRC)\n\t$(CP) $(DISPATCHER_C_SCRIPT_SRC) $(SCRIPT_SERVICES_INSTALL_DIR)\n\ninstall_bucket_storage_c_script: $(SCRIPT_SERVICES_INSTALL_DIR) $(BUCKET_STORAGE_C_SCRIPT_SRC)\n\t$(CP) $(BUCKET_STORAGE_C_SCRIPT_SRC) $(SCRIPT_SERVICES_INSTALL_DIR)\n\ninstall_purge_script : $(SCRIPT_TASKS_INSTALL_DIR) $(PURGE_SCRIPT_SRC)\n\t$(CP) $(PURGE_SCRIPT_SRC) $(SCRIPT_TASKS_INSTALL_DIR)\n\t$(CP) $(PURGE_C_SCRIPT_SRC) $(SCRIPT_TASKS_INSTALL_DIR)\n\ninstall_check_updates_script : $(SCRIPT_TASKS_INSTALL_DIR) $(CHECK_UPDATES_SCRIPT_SRC)\n\t$(CP) $(CHECK_UPDATES_SCRIPT_SRC) $(SCRIPT_TASKS_INSTALL_DIR)\n\ninstall_statistics_script : $(SCRIPT_TASKS_INSTALL_DIR) $(STATISTICS_SCRIPT_SRC)\n\t$(CP) $(STATISTICS_SCRIPT_SRC) $(SCRIPT_TASKS_INSTALL_DIR)\n\ninstall_backup_script : $(SCRIPT_TASKS_INSTALL_DIR) $(BACKUP_SRC)\n\t$(CP) $(BACKUP_SRC) $(SCRIPT_TASKS_INSTALL_DIR)\n\ninstall_restore_script : $(SCRIPT_TASKS_INSTALL_DIR) $(RESTORE_SRC)\n\t$(CP) $(RESTORE_SRC) $(SCRIPT_TASKS_INSTALL_DIR)\n\ninstall_check_certificates_script : $(SCRIPT_TASKS_INSTALL_DIR) $(CHECK_CERTS_TASK_SCRIPT_SRC)\n\t$(CP) $(CHECK_CERTS_TASK_SCRIPT_SRC) $(SCRIPT_TASKS_INSTALL_DIR)\n\ninstall_automation_script : $(SCRIPT_TASKS_INSTALL_DIR) $(AUTOMATION_TASK_SCRIPT_SRC)\n\t$(CP) $(AUTOMATION_TASK_SCRIPT_SRC) $(SCRIPT_TASKS_INSTALL_DIR)\n\ninstall_storage_script : $(SCRIPTS_INSTALL_DIR) $(STORAGE_SCRIPT_SRC)\n\t$(CP) $(STORAGE_SCRIPT_SRC) $(SCRIPTS_INSTALL_DIR)\n\ninstall_certificates_script : $(SCRIPTS_INSTALL_DIR) $(CERTIFICATES_SCRIPT_SRC)\n\t$(CP) $(CERTIFICATES_SCRIPT_SRC) $(SCRIPTS_INSTALL_DIR)\n\ninstall_auth_certificates_script : $(SCRIPTS_INSTALL_DIR) $(AUTH_CERTIFICATES_SCRIPT_SRC)\n\t$(CP) $(AUTH_CERTIFICATES_SCRIPT_SRC) $(SCRIPTS_INSTALL_DIR)\n\ninstall_package_update_script : $(SCRIPTS_INSTALL_DIR) $(PACKAGE_UPDATE_SCRIPT_SRC)\n\t$(CP_DIR) $(PACKAGE_UPDATE_SCRIPT_SRC) $(SCRIPTS_INSTALL_DIR)\n\tchmod -R a-w $(SCRIPTS_INSTALL_DIR)/package\n\tchmod -R u+x 
$(SCRIPTS_INSTALL_DIR)/package\n\ninstall_fledge_mnt_script: $(SCRIPTS_INSTALL_DIR) ${FLEDGE_MNT_SCRIPT}\n\t$(CP) ${FLEDGE_MNT_SCRIPT} $(SCRIPTS_INSTALL_DIR)\n\n$(SCRIPT_COMMON_INSTALL_DIR) :\n\t$(MKDIR_PATH) $@\n\n$(SCRIPT_PLUGINS_STORAGE_INSTALL_DIR) :\n\t$(MKDIR_PATH) $@\n\n$(SCRIPT_SERVICES_INSTALL_DIR) :\n\t$(MKDIR_PATH) $@\n\n$(SCRIPT_STORAGE_INSTALL_DIR) :\n\t$(MKDIR_PATH) $@\n\n$(SCRIPT_TASKS_INSTALL_DIR) :\n\t$(MKDIR_PATH) $@\n\n$(POSTGRES_SCHEMA_UPDATE_DIR) :\n\t$(MKDIR_PATH) $@\n\t$(MKDIR_PATH) $@/upgrade\n\t$(MKDIR_PATH) $@/downgrade\n\n$(SQLITE_SCHEMA_UPDATE_DIR) :\n\t$(MKDIR_PATH) $@\n\t$(MKDIR_PATH) $@/upgrade\n\t$(MKDIR_PATH) $@/downgrade\n\n$(SQLITELB_SCHEMA_UPDATE_DIR) :\n\t$(MKDIR_PATH) $@\n\t$(MKDIR_PATH) $@/upgrade\n\t$(MKDIR_PATH) $@/downgrade\n\n###############################################################################\n########################## BIN INSTALL TARGETS ################################\n###############################################################################\n# install bin\nbin_install : $(BIN_INSTALL_DIR) $(FOGBENCH_SCRIPT_SRC) $(FLEDGE_SCRIPT_SRC)\n\t$(CP) $(FOGBENCH_SCRIPT_SRC) $(BIN_INSTALL_DIR)\n\t$(CP) $(FLEDGE_SCRIPT_SRC) $(BIN_INSTALL_DIR)\n\t$(CP) $(FLEDGE_UPDATE_SRC) $(BIN_INSTALL_DIR)\n\t$(CP) $(UPDATE_TASK_APT_SRC) $(BIN_INSTALL_DIR)\n\t$(CP) $(UPDATE_TASK_SNAPPY_SRC) $(BIN_INSTALL_DIR)\n\t$(CP) $(UPDATE_TASK_YUM_SRC) $(BIN_INSTALL_DIR)\nifneq (\"$(PLATFORM_RH)\",\"\")\n\t$(CP) $(SUDOERS_SRC_RH) $(BIN_INSTALL_DIR)\nelse\n\t$(CP) $(SUDOERS_SRC) $(BIN_INSTALL_DIR)\nendif\n\n# create bin install dir\n$(BIN_INSTALL_DIR) :\n\t$(MKDIR_PATH) $@\n\n###############################################################################\n####################### EXTRAS INSTALL TARGETS ################################\n###############################################################################\n# install extras\nextras_install : $(EXTRAS_INSTALL_DIR) install_python_fogbench install_extras_scripts 
setuid_cmdutil\n\ninstall_python_fogbench : $(FOGBENCH_PYTHON_INSTALL_DIR) $(FOGBENCH_PYTHON_SRC_DIR)\n\t$(CP_DIR) $(FOGBENCH_PYTHON_SRC_DIR) $(FOGBENCH_PYTHON_INSTALL_DIR)\n\n$(FOGBENCH_PYTHON_INSTALL_DIR) :\n\t$(MKDIR_PATH) $@\n\ninstall_extras_scripts : $(EXTRAS_INSTALL_DIR) $(EXTRAS_SCRIPTS_SRC_DIR)\n\t$(CP_DIR) $(EXTRAS_SCRIPTS_SRC_DIR) $(EXTRAS_INSTALL_DIR)\n\n\tsed -i \"s|export FLEDGE_ROOT=.*|export FLEDGE_ROOT=\\\"$(INSTALL_DIR)\\\"|\" $(EXTRAS_INSTALL_DIR)/scripts/setenv.sh\n\tsed -i \"s|^FLEDGE_ROOT=.*|FLEDGE_ROOT=\\\"$(INSTALL_DIR)\\\"|\" $(EXTRAS_INSTALL_DIR)/scripts/fledge.service\n\n# create extras install dir\n$(EXTRAS_INSTALL_DIR) :\n\t$(MKDIR_PATH) $@\n\n###############################################################################\n####################### DATA INSTALL TARGETS ################################\n###############################################################################\n# install data\ndata_install : $(DATA_INSTALL_DIR) install_data\n\ninstall_data : $(DATA_INSTALL_DIR) $(DATA_SRC_DIR)\n\t$(CP_DIR) $(DATA_SRC_DIR) $(INSTALL_DIR)\n\n# data and etc directories should be owned by the user running fledge\n# If install is executed with sudo and the sudo user is root, the data and etc\n# directories must be set to be owned by the calling user.\nifdef SUDO_USER\nifeq (\"$(USER)\",\"root\")\n\n\tchown -R ${SUDO_USER}:${SUDO_USER} $(DATA_SRC_DIR)\n\tchown -R ${SUDO_USER}:${SUDO_USER} $(INSTALL_DIR)/$(DATA_SRC_DIR)\nendif\nendif\n\n# create data install dir\n#$(DATA_INSTALL_DIR) :\n#\t$(MKDIR_PATH) $@\n\n# set setuid bit of cmdutil\nsetuid_cmdutil : c_install\n\tchmod u+s $(EXTRAS_INSTALL_DIR)/C/cmdutil\n\n\n###############################################################################\n######################## SUPPORTING BUILD/INSTALL TARGETS #####################\n###############################################################################\n# create install directory\n$(INSTALL_DIR) :\n\t$(MKDIR_PATH) 
$@\n\n###############################################################################\n###############################################################################\n###################### CLEAN/UNINSTALL TARGETS ################################\n###############################################################################\n# clean\nclean :\n\t-$(RM_DIR) $(CMAKE_BUILD_DIR)\n\t-$(RM_DIR) $(PYTHON_BUILD_DIR)\n\t-$(RM_DIR) $(DEV_SERVICES_DIR)\n\t-$(RM) $(SYMLINK_PLUGINS_DIR)\n\t-$(RM) $(ASYNC_INGEST_PYMODULE)\n\t-$(RM) $(FILTER_INGEST_PYMODULE)\n"
  },
  {
    "path": "README.rst",
    "content": ".. |br| raw:: html\n\n   <br />\n   \n.. Links\n.. |pluginlist| raw:: html\n\n   <a href=\"https://fledge-iot.readthedocs.io/en/develop/fledge_plugins.html\">Fledge Plugins</a>\n   \n   \n.. |quickstart| raw:: html\n\n   <a href=\"https://fledge-iot.readthedocs.io/en/develop/quick_start/installing.html\">Installing Fledge</a>\n\n.. |building| raw:: html\n\n   <a href=\"https://fledge-iot.readthedocs.io/en/develop/building_fledge/building_fledge.html#building-fledge\">Building Fledge</a>\n\n*******\nFledge\n*******\n\nThis is the Fledge project.\n\nFledge is an open source platform for the **Internet of Things**, and an essential component in **Fog Computing**. It uses a modular **microservices architecture** including sensor data collection, storage, processing and forwarding to historians, Enterprise systems and Cloud-based services. Fledge can run in highly available, stand-alone, unattended environments that assume unreliable network connectivity.\n\nFledge also provides a means of buffering data coming from sensors and forwarding that data onto high-level storage systems. It assumes the underlying network layer is not always connected or may not be reliable. Data from sensors may be stored within Fledge for a number of days before being purged from the Fledge storage. 
During this time it may be sent to one or more historians and also accessed via a REST API for use by *local* analytical applications.\n\nFledge has been designed to run in a Linux environment and makes use of Linux services.\n|br| |br|\n\nArchitecture\n============\n\nFledge is built using a microservices architecture for major component areas; these services consist of:\n\n- a **Core service** responsible for the management of the other services, the external REST APIs, scheduling and monitoring of activities.\n- a **South service** responsible for the communication between Fledge and the sensors/actuators.\n- a **Storage service** responsible for the persistence of configuration and metrics and the buffering of sensor data.\n\nThis core set of services may also be extended using optional services that are available within their own repositories, for example a notification service and a control dispatcher service.\n\nFledge makes extensive use of plugin components in order to increase the flexibility of the implementation:\n\n- **South plugins** are used to allow for the easy expansion of Fledge to deal with new South devices and South device connection buses.\n- **North plugins** are used to allow for connection to different historians.\n- **Datastore plugins** are used to allow Fledge to use different storage mechanisms for persisting metadata and the sensor data.\n\nThe South and North plugins are stored in separate source code repositories which are named in the pattern fledge-south-<device> and fledge-north-<service>. A complete list of plugins can be found in the readthedocs documentation for the project. See |pluginlist|.\n\nThe optional services, for example the notification service, also make use of plugins to extend the capabilities of those services.\n\nThe other paradigm that is used extensively within Fledge is the idea of **scheduling processes** to perform specific operations. 
The Fledge core contains a scheduler which can execute processes based on time schedules or triggered by events. This is used to start processes when an event occurs, such as Fledge starting, or based on a time trigger.\n\nScheduled processes are used to send data from Fledge to the historian, to purge data from the Fledge data buffer, to gather statistics for historical analysis and to perform backups of the Fledge environment.\n|br| |br|\n\nPre-built packages for Fledge are available; see |quickstart| for details of how to use these.\n\nBuilding Fledge\n================\n\nSee also |building| in the online documentation.\n\nBuild Prerequisites\n-------------------\n\nFledge is currently based on C/C++ and Python code. The packages needed to build and run Fledge may be installed by running the script *requirements.sh*.\n\n\nLinux distributions\n-------------------\n\nFledge can be built or installed on the following Linux distributions, supporting both x86_64 and aarch64 architectures.\n\n- Ubuntu 20.04, 22.04, 24.04\n- Raspbian Bullseye, Bookworm\n\nInstall the prerequisites\n-------------------------\n\nThe prerequisites required to build Fledge can be installed on any of the supported platforms by running the requirements.sh script from this directory.\n\n.. code-block:: console\n\n\tsudo ./requirements.sh\n\n.. note::\n\n   This script will use the platform's package management software, such as apt on Ubuntu systems, to install the required packages. This must be done as the root user, hence the need to run requirements.sh using the sudo command.\n \nBuild\n-----\n\nTo build Fledge run the command ``make`` in the top level directory. This will compile all the components that need to be compiled and will also create a runnable structure of the Python code components of Fledge.\n\n**NOTE:**\n\n- *The GCC compiler version 5.4 available in Ubuntu 16.04 LTS raises warnings. 
This is a known bug of the compiler and it can be ignored.*\n\n- *The openssl toolkit is a requirement if you want to use the https based REST client and certificate based authentication.*\n\nOnce the *make* has completed you can decide to test Fledge from your development environment or you can install it. \n|br| |br|\n\n\nTesting Fledge from Your Development Environment\n=================================================\n\nYou can test Fledge directly from your development environment. All you need to do is set one environment variable to be able to run Fledge from the development tree.\n::\n   export FLEDGE_ROOT=<basedir>/Fledge\n\nWhere *basedir* is the base directory into which you cloned the Fledge repository.\n\nFinally, start the Fledge core daemon:\n::\n   $FLEDGE_ROOT/scripts/fledge start\n\n|br|\n\nInstalling Fledge\n==================\n\nCreate an installation by executing ``make install``, then set the *FLEDGE_ROOT* environment variable specifying the installation path. By default the installation will be placed in */usr/local/fledge*. You may need to execute ``sudo make install`` to install Fledge where the current user does not have permissions:\n::\n   sudo make install\n   export FLEDGE_ROOT=/usr/local/fledge\n\nThe destination may be overridden by setting the variable *DESTDIR* in the make command line, to a location in which you wish to install Fledge. 
For example, to install Fledge in the */opt* directory use the command:\n::\n   sudo make install DESTDIR=/opt\n   export FLEDGE_ROOT=/opt/usr/local/fledge\n\n|br|\n\nUpgrading Fledge on Debian based systems\n========================================\n\nFledge supports Kerberos authentication starting from version 1.7.1, and so the related packages are installed by the script `requirements.sh <requirements.sh>`_.\nThe *krb5-user* package prompts a question during the installation process asking for the KDC definition; the packages are installed with the environment variable *DEBIAN_FRONTEND* set\nto avoid this interaction:\n::\n\n\t# for Kerberos authentication, avoid interactive questions\n\tDEBIAN_FRONTEND=noninteractive apt install -yq krb5-user\n\tapt install -y libcurl4-openssl-dev\n\nThe upgrade of the Fledge package should follow the same approach; it should be done by executing the command:\n::\n    sudo DEBIAN_FRONTEND=noninteractive apt -y upgrade\n\nBefore the upgrade of Fledge, *SETENV:* should be set/added in */etc/sudoers.d/fledge* to allow *sudo* to support the handling of environment variables; a sample of the file:\n::\n\n    %sudo ALL=(ALL) NOPASSWD:SETENV: /usr/bin/apt -y update, /usr/bin/apt-get -y install fledge, /usr/bin/apt -y install /usr/local/fledge/data/plugins/fledge*.deb, /usr/bin/apt list, /usr/bin/apt -y install fledge*, /usr/bin/apt -y upgrade\n\n|br|\n\nExecuting Fledge\n=================\n\nFledge is now ready to start. Use the command:\n::\n   $FLEDGE_ROOT/bin/fledge start\n\nTo check if Fledge is running, use the command:\n::\n   $FLEDGE_ROOT/bin/fledge status\n\nThe command returns the status of Fledge on the machine on which it has been executed.\n\n\nIf You Use PostgreSQL: Creating the Database Repository\n=======================================================\n\nThis version of Fledge relies on SQLite to run. 
SQLite is embedded into the Storage service, but you may want to use PostgreSQL as a buffer and metadata storage (refer to the documentation on `ReadTheDocs <https://fledge-iot.readthedocs.io/en/develop/building_fledge/building_fledge.html?highlight=appendix#appendix-setting-the-postgresql-database>`_ for more info). With a version of PostgreSQL installed via *apt-get*, you first need to create a new database user with:\n::\n   sudo -u postgres createuser -d <user>\n\nwhere *user* is the name of the Linux user that will run Fledge. The Fledge database user must have *createdb* privileges (i.e. the *-d* argument).\n|br| |br|\n\n\nTroubleshooting\n===============\n\nFledge version 1.7.0\n--------------------\n\n$FLEDGE_ROOT/data/etc directory ownership\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nExecuting *sudo make install* immediately after *git clone* will create a *data/etc* directory owned by the *root* user.\nIt should be owned by the user that will run Fledge; to fix it:\n::\n    chown -R <user>:<user> $FLEDGE_ROOT/data\n\nwhere *user* is the name of the Linux user that will run Fledge.\n|br| |br|\n"
  },
  {
    "path": "SECURITY.MD",
    "content": "<!-- BEGIN Fledge SECURITY.MD V0.0.1 BLOCK -->\n\n## Security\n\nFledge takes the security of our software products and services seriously. This includes all source code repositories managed through our GitHub organizations, including [Fledge](https://github.com/Fledge-iot).\n\n\nIf you believe you have found a security vulnerability in any Fledge repository, please report it to us as described below.\n\n## Reporting Security Issues\n\n**Please do not report security vulnerabilities through public GitHub issues; instead, email security@dianomic.com.**\n\nYou should receive a response soon. If for some reason you do not, please follow up via email to ensure we received your original message. \n\nPlease include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:\n\n  * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)\n  * Full paths of source file(s) related to the manifestation of the issue\n  * The location of the affected source code (tag/branch/commit or direct URL)\n  * Any special configuration required to reproduce the issue\n  * Step-by-step instructions to reproduce the issue\n  * Proof-of-concept or exploit code (if possible)\n  * Impact of the issue, including how an attacker might exploit the issue\n\nThis information will help us triage your report more quickly.\n\n\n## Preferred Languages\n\nWe prefer all communications to be in English.\n\n<!-- END Fledge SECURITY.MD BLOCK -->"
  },
  {
    "path": "VERSION",
    "content": "fledge_version=3.1.0\nfledge_schema=76\n"
  },
  {
    "path": "contrib/.gitkeep",
    "content": ""
  },
  {
    "path": "data/etc/kerberos/README.rst",
    "content": ".. |br| raw:: html\n\n   <br />\n\n\n********\nKerberos\n********\n\nThis directory should contain the keytab file named **piwebapi_kerberos_https.keytab** needed for Kerberos authentication.\n\n|br| |br|\n"
  },
  {
    "path": "data/extras/fogbench/fogbench_sensor_coap.template.json",
    "content": "\n[\n  { \"name\"          : \"asset_1\",\n    \"sensor_values\" : [ { \"name\": \"dp_1\", \"type\": \"number\", \"min\": -2.0, \"max\": 2.0 },\n                        { \"name\": \"dp_2\", \"type\": \"number\", \"min\": -2.0, \"max\": 2.0 },\n                        { \"name\": \"dp_3\", \"type\": \"number\", \"min\": -2.0, \"max\": 2.0 } ] },\n  { \"name\"          : \"asset_2\",\n    \"sensor_values\" : [ { \"name\": \"lux\", \"type\": \"number\", \"min\": 0, \"max\": 130000, \"precision\":3 } ] },\n  { \"name\"          : \"asset_3\",\n    \"sensor_values\" : [ { \"name\": \"pressure\", \"type\": \"number\", \"min\": 800.0, \"max\": 1100.0, \"precision\":1 } ] }\n]\n\n"
  },
  {
    "path": "dco-signoffs/AmandeepSinghArora-dco-signoff.txt",
    "content": "I, Amandeep Singh Arora, hereby sign off all my past commits to this repository\nsubject to the Developer Certificate of Origin (DCO), Version 1.1. In\nthe past I have used emails: aman@dianomic.com\n"
  },
  {
    "path": "dco-signoffs/AshishJabble-dco-signoff.txt",
    "content": "I, Ashish Jabble, hereby sign off all my past commits to this repository\nsubject to the Developer Certificate of Origin (DCO), Version 1.1. In\nthe past I have used emails: ashish@dianomic.com\n"
  },
  {
    "path": "dco-signoffs/AshwinGopalakrishnan-dco-signoff.txt",
    "content": "I, Ashwin Gopalakrishnan, hereby sign off all my past commits to this repository\nsubject to the Developer Certificate of Origin (DCO), Version 1.1. In\nthe past I have used emails: ashwin@dianomic.com\n"
  },
  {
    "path": "dco-signoffs/BillHunt-dco-signoff.txt",
    "content": "I, Bill Hunt, hereby sign off all of my past commits to this repo subject to\nthe Developer Certificate of Origin (DCO), Version 1.1. In the past I have\nused emails: bill@dianomic.com, bill@thomlex.com\n"
  },
  {
    "path": "dco-signoffs/MarkRiddoch-dco-signoff.txt",
    "content": "I, Mark Riddoch, hereby sign off all my past commits to this repository\nsubject to the Developer Certificate of Origin (DCO), Version 1.1. In\nthe past I have used emails: mark@dianomic.com\n"
  },
  {
    "path": "dco-signoffs/MassimilianoPinto-dco-signoff.txt",
    "content": "I, Massimiliano Pinto, hereby sign off all my past commits to this repository\nsubject to the Developer Certificate of Origin (DCO), Version 1.1. In\nthe past I have used emails: massimiliano.pinto@gmail.com\n"
  },
  {
    "path": "dco-signoffs/MohdShariq-dco-signoff.txt",
    "content": "I, Mohd Shariq, hereby sign off all my past commits to this repository\nsubject to the Developer Certificate of Origin (DCO), Version 1.1. In the past,\nI have used emails: mohd.shariq@nerdapplabs.com, mshariqaftab@gmail.com"
  },
  {
    "path": "dco-signoffs/MonikaSharma-dco-signoff.txt",
    "content": "I, Monika Sharma, hereby sign off all my past commits to this repository\nsubject to the Developer Certificate of Origin (DCO), Version 1.1. In\nthe past I have used emails: monika@dianomic.com, monika.sharma@nerdapplabs.com\n"
  },
  {
    "path": "dco-signoffs/OriShadmon-dco-signoff.txt",
    "content": "I, Ori Shadmon, hereby sign off all my past commits to this repository\nsubject to the Developer Certificate of Origin (DCO), Version 1.1. In\nthe past I have used emails: oshadmon@gmail.com\n"
  },
  {
    "path": "dco-signoffs/PraveenGarg-dco-signoff.txt",
    "content": "I, Praveen Garg, hereby sign off all my past commits to this repository\nsubject to the Developer Certificate of Origin (DCO), Version 1.1. In\nthe past I have used emails: praveen.garg@nerdapplabs.com\n"
  },
  {
    "path": "dco-signoffs/StefanoSimonelli-dco-signoff.txt",
    "content": "I, Stefano Simonelli, hereby sign off all my past commits to this repository\nsubject to the Developer Certificate of Origin (DCO), Version 1.1. In\nthe past I have used emails: stefano@dianomic.com\n"
  },
  {
    "path": "dco-signoffs/YashTatkondawar-dco-signoff.txt",
    "content": "I, Yash Tatkondawar, hereby sign off all my past commits to this repository\nsubject to the Developer Certificate of Origin (DCO), Version 1.1. In\nthe past I have used emails: yashtatkondawar@gmail.com\n"
  },
  {
    "path": "dco-signoffs/other-dco-signoff.txt",
    "content": "I, Bill Hunt, CTO of Dianomic, hereby sign off all past commits to this repo not covered by other sign-offs here,\nsubject to the Developer Certificate of Origin (DCO), Version 1.1. In the past, these emails have been used:\n\n43450583+sdauber@users.noreply.github.com\n49699333+dependabot[bot]@users.noreply.github.com\naksinha@nerdapplabs.com\namarendra@dianmoic.com\namarendra@dianomic.com\nbot@dianomic.com\ndlopez@osisoft.com\ndependabot[bot]@users.noreply.github.com\nec2-user@ip-10-0-0-234.ec2.internal\nec2-user@ip-10-0-0-235.ec2.internal\nfoglamp@nerd-034\ngmoffett@gmail.com\nivan.zoratti@gmail.com\nivan@dianomic.com\nivan@scaledb.com\nivan@zoratti.co.uk\njonscott20@gmail.com\nmichael@scaledb.com\nmiguelitoraton@users.noreply.github.com\nmoduser@localhost.localdomain\nterris@dianomic.com\nterris@scaledb.com\nthyagr@outlook.com\nvaibhav@dianomic.com\nvaibhav@scaledb.com\n\n"
  },
  {
    "path": "docs/91_version_history.rst",
    "content": ".. Version History presents a list of versions of Fledge released.\n\n.. |br| raw:: html\n\n   <br />\n\n.. Images\n\n.. Links\n\n.. Links in new tabs\n\n.. |1.1 requirements| raw:: html\n\n   <a href=\"https://github.com/fledge-iot/Fledge/blob/1.1/python/requirements.txt\" target=\"_blank\">check here</a>\n\n.. =============================================\n\n\n***************\nVersion History\n***************\n\nFledge v3\n==========\n\nv3.1.0\n-------\n\nRelease Date: 2025-07-10\n\n- **Fledge Core**\n\n    - New Features:\n\n       - Added new convenience methods to the ConfigCategory class to streamline plugin development in C++.\n       - Added support for running in containers without root privileges.\n       - Added command line interface for debugging processing pipelines in south and north services.\n       - Added new configuration category for selectively enabling/disabling features instance-wide (currently supports control features and pipeline debugger).\n       - Added pipeline debugging capabilities to trace data flow in north and south services.\n       - Added configurable buffer size limit for south services to handle storage service overload scenarios.\n       - Added certificate-based authentication support to Fledge management script.\n       - Improved north service resource usage during connection failures.\n       - Increased minimum key size requirement to 2048 bits for all authentication and encryption keys.\n       - Updated plugin developers guide with notification rule plugin development documentation.\n       - Updated quick start guide with improved formatting and content accuracy.\n       - The newly supported platforms include Ubuntu 24.04 and Raspberry Pi OS (bookworm).\n\n    - Bug Fix:\n\n       - Fixed storage system issue that was causing log flooding due to stale object registrations.\n       - Fixed service registration issue with sinusoid plugin when using reserved asset names.\n       - Fixed critical 
resource leak in plugin interface. **Note:** All plugins must be rebuilt against this version to ensure binary object handling compatibility.\n       - Fixed log display corruption caused by invalid character handling.\n       - Corrected service pipeline configuration to properly restrict control pipeline source/destination options.\n       - Resolved shutdown logging anomalies when running with Python 3.11+.\n       - Resolved certificate chain of trust validation issues in authentication system.\n       - Addressed security vulnerability that allowed unauthorized authentication method changes without proper password configuration.\n       - Addressed security vulnerability that exposed plaintext passwords in support bundles.\n\n\n- **GUI**\n\n    - New Features:\n\n       - Added tooltip to notification log page showing source column description.\n       - Added search functionality to dashboard graph selection dropdown to filter statistics.\n       - Added persistence for menubar collapsed/expanded state between sessions.\n       - Changed default menubar mode to narrow view for improved space utilization.\n       - Enhanced user list display to clearly indicate certificate login issues.\n\n    - Bug Fix:\n\n       - Fixed service status icon incorrectly showing grey state.\n       - Fixed menu sidebar layout issues on different screen sizes.\n       - Fixed disabled scroll arrows on south service configuration page.\n       - Fixed pipeline icon display with long filter/service names.\n       - Fixed screen visibility during restart with collapsed menu.\n       - Standardized Delete key behavior for connections across keyboard and mouse operations.\n       - Improved submenu interaction to prevent accidental selections.\n       - Improved overall UI appearance and consistency.\n\n- **Plugins**\n\n    - New Features:\n\n       - Asset Filter Enhancements:\n\n         * Added new *nest* rule for adding nesting to datapoints.\n         * Added list support to 
*select* rule similar to *remove* rule.\n         * Improved regular expression support with substitution patterns in *rename*, *datapoint map* and *split* rules.\n         * Improved filter performance.\n         * Extended rules to handle lists of datapoints instead of single datapoints.\n         * Restructured documentation with better rule descriptions and additional examples.\n\n       - OMF North Plugin Enhancements:\n\n         * Optimized authentication for AVEVA Data Hub (ADH) by leveraging token expiration time.\n         * Added connectivity checks for AVEVA Data Hub and Edge Data Store (similar to existing PI Web API checks).\n         * Added logging of attempted links when OMF Data message fails to help troubleshoot AF Attribute conflicts.\n         * Added data type coercion support for *number* and *integer* OMFHints to handle filter-induced data type changes.\n         * Added support for Static Data values in Linked Types.\n\n    - Bug Fix:\n\n       - Fixed scale-set filter bug affecting data scaling.\n       - Resolved resource leaks in fledge-filter-asset, fledge-filter-python35, and several other filters.\n       - OMF North Plugin Fixes:\n\n         * Fixed parsing of Static Data values when more than 2 values configured.\n         * Fixed large OMF Data message handling by implementing message splitting.\n         * Fixed authentication endpoint to use correct Region hosting the Namespace.\n         * Updated Tag Name OMFHint to support both Container and PI Point naming.\n         * Enhanced plugin documentation for better clarity.\n\n\nv3.0.0\n-------\n\nRelease Date: 2025-03-13\n\n- **Fledge Core**\n\n    - New Features:\n\n       - New installations will now default to be secured with a username and password. 
Upgrading existing installations will not affect their current settings.\n       - The ability to split a data pipeline and run branched pipelines in parallel has been added.\n       - The SQLite storage plugin has been updated to allow the user to give an indication of the workload the plugin is expected to be used for. This is then used to inform tuning decisions within the SQLite plugin.\n       - The maximum send latency configuration option for south plugins has had an upper bound imposed upon it to prevent large values being accidentally set, as doing so makes it appear as if the south service has broken.\n       - Monitoring of the ingest rate has now been added to the south service. If the ingest rate is seen to suddenly fall, an alert will be raised to the user. If the rate returns to previous levels then the alert will be cleared. This monitoring may not be suitable for services using asynchronous south plugins or that only forward changes of values, and can be disabled in the advanced configuration options of the south service.\n       - A new section has been added to the tuning section of the documentation that discusses tuning the purge processes within Fledge.\n       - The documentation has been updated to improve the discussion on tuning to encompass low-throughput data ingestion.\n       - The documentation on making Fledge secure has been updated to include the different types of user that have been introduced.\n       - Documentation has been added to the plugin developers section of the documentation that describes how to persist data between restarts of services or of the entire system.\n       - In an attempt to make it easier to find plugins, new subsections have been added to the documentation section that lists the available plugins.  It now includes a number of sections that categorise the plugins into those with related functionality. 
\n       - The documentation section for plugin developers has been updated with the latest mechanisms to run Fledge services under the memory analysis tool, valgrind.\n       - The Fledge documentation has been updated to reflect the new security defaults in the 3.0 release and to include a lengthier discussion of other optional security features.\n       - Updates to the documentation have been added that describe how a Fledge instance can be connected to a PostgreSQL instance hosted on a different host or container.\n       - The plugin developers guide has been updated to include more information on the 3.0.0 version of the south plugin interface.\n\n\n    - Bug Fix:\n\n       - An issue with macro substitution incorrectly handling default values has been resolved.\n       - An issue that could cause a problem if certain characters were used in the asset names has been resolved. \n       - An issue with mixed case user names mismatching names in authentication certificates has been resolved. This prevented users with mixed case names from correctly authenticating using certificates. \n       - A number of issues in the handling of plugin configuration updates (such as for fledge-south-s2opcua) have been addressed. This allows for plugin configuration to be updated in new versions of the plugins while migrating the older configuration to the new configuration. In particular this aids the transition of configuration items previously entered as a JSON structure to the new list style of configuration item.\n       - An issue that caused spurious errors to be written to the error log after an extended period of running of a south service has been resolved. \n       - An issue when using the conditional forwarding features of Fledge in conjunction with the PostgreSQL storage engine has been resolved.\n       - When using PostgreSQL as the storage layer, the exit status of the script used to start and stop the system may give incorrect exit status information. 
This has now been resolved.\n       - An issue in the PostgreSQL storage plugin that could cause a failure of the storage engine when purging has been resolved.\n       - An issue that allowed two filters of the same name to be added to different branches of the filter pipeline has been resolved.\n       - An issue that could cause persisted data from plugins not to be written on the second and subsequent restarts of a service has been resolved.\n       - An issue that prevented complex pipelines with multiple branches, one of which is a simple batch that contained no filters, from operating correctly has been resolved.\n       - An issue that allowed duplicate tags to be defined in the new list type mechanism for adding object type lists has been resolved.\n       - An issue that could result in sending of incorrect statistics data by North services has been resolved.\n       - An issue that could cause the north service to needlessly pull data from storage when sending of the data was disabled has been resolved.\n\n\n\n- **GUI**\n\n    - New Features:\n\n       - The ability to import list content from CSV and JSON files has been added for all configuration items that have lists of items. This impacts the fledge-south-s2opcua plugin.\n       - Flow Editor: The default UI for new installations now uses the flow-based editor instead of the tabular pipeline view, with an option to switch via Settings.  Added a confirmation dialog to prevent accidental service disabling. Improved the appearance of the add filter interface. 
Additionally, plugin configuration performance within the flow editor has been optimized.\n       - The look and feel of the south and north menu items has been improved with more intuitive icons.\n       - The configuration tab has been improved to include navigation buttons to easily move between tabs.\n       - The layout of the south service in the tabular view has been improved.\n\n\n    - Bug Fix:\n\n       - An issue with the save button becoming active when it should not in the flow editor has been resolved.\n       - An issue that could cause the Next button to be incorrectly disabled in the notification create pages has been resolved.\n       - An issue that could cause the state of a service to be incorrectly shown in the user interface has been resolved.\n       - An issue that could cause a blank page to be displayed when cancelling the changes to the pipeline flow has been resolved.\n       - An issue that could result in a confirmation dialog not being correctly shown when deleting a filter has been resolved.\n       - A number of issues with entry of negative values into configuration items have been resolved.\n\n\n- **Plugins**\n\n    - New Features:\n\n       - fledge-south-mqtt-sparkplug: Added an option to attach the topic as a datapoint, enabling its use in later filters for applications like passing placement hints to north plugins. Additionally, the plugin now supports long integer and double values.\n       - The fledge-south-benchmark plugin has been enhanced to allow support for multiple datapoints per asset.\n       - fledge-south-s2opcua: Added a new Datapoint Name configuration, allowing users to choose between Browse Name (default) or Node Id for naming datapoints, while asset names derived from parent OPC UA objects remain based on Browse Name. The plugin now supports control operations flowing from Fledge to OPC UA devices. 
Additionally, improvements have been made to subscription configuration, and debug trace output is now included in Fledge support bundles.\n       - Logging in the fledge-south-opcua plugin has been improved to include more data on the low level OPC UA protocol connections.\n       - fledge-filter-asset: Added a new option to select which datapoints are sent onwards in the pipeline and improved error handling for rules configuration.\n       - The fledge-filter-scale-set has been updated to use an improved user interface to define the set of scale factors and offset to apply.\n       - The fledge-filter-metadata plugin has been updated to support substitution of datapoint values and the asset name into the new metadata values created.\n       - The fledge-north-http-c plugin has been updated to support optional HTTP Basic authentication.\n       - OMF North plugin: Added a configuration option to enable or disable OMF message logging. Additionally, various improvements and fixes have been made, including logging of OMF Types and Containers as Information messages, enhanced error checking and logging for PI Server license expiration, improved detection of PI Web API connection loss to prevent failed REST calls, and warnings for unstable destination data archives (detected via HTTP 409 Conflict responses). 
The Troubleshooting the PI Server integration documentation has also been updated to reflect these changes.\n       - The documentation for the fledge-rule-simple-expression plugin has been enhanced to include examples of multiple datapoint expressions.\n       - Documentation has been added to illustrate how the standard HTTP-C plugin can be used to send data to Inductive Automation's Ignition product.\n       - The documentation for the fledge-filter-delta plugin has been improved such that it appears correctly in the table of contents.\n\n\n    - Bug Fix:\n\n       - fledge-south-s2opcua: for the Asset Naming Scheme selection of Single Datapoint or Single Asset, Browse Names of Variables must be unique among all OPC UA Subscriptions. If there are duplicate Browse Names, the plugin should disambiguate the names by concatenating the Variable’s Node Id to the end of the Browse Name. If there are more than 2 duplicate Browse Names, the plugin would concatenate the Node Id too many times. This has been fixed.\n       - An issue with the dynamic reconfiguration of the fledge-south-randomwalk plugin has been resolved. The service no longer requires a restart after reconfiguration.\n       - fledge-south-opcua: Resolved issues causing failures when the service is restarted without an available OPC UA server connection and when the plugin is reconfigured.\n       - A problem with the fledge-south-mqtt plugin that would cause it to not re-establish the connection to the MQTT broker if connectivity was lost has been resolved.\n       - fledge-filter-asset: Fixed issues with rule execution order, ensuring proper sequencing. Resolved a problem where multiple rules might not remove all datapoints from an asset. 
Additionally, improved stability by preventing non-graceful exits when an incomplete configuration is provided.\n       - fledge-filter-metadata: Resolved an issue causing the plugin to terminate a south service due to excessively large integer values and added support for defining nested values.\n       - A problem that could result in excessive memory use when the fledge-filter-delta plugin is used has been resolved.\n       - fledge-north-opcuaclient: Resolved an issue where the plugin attempted to write data to non-existent OPC UA nodes and fixed a problem causing statistics to increase even when no data was being sent.\n       - OMF North plugin: sending OMF Data messages to the Edge Data Store 2020 resulted in the HTTP error code 400 with the message \"One or more errors occurred. (The action 'Update' is not supported for OMF messages.).\" The same problem occurs when sending OMF Data messages to the AVEVA Connector Relay. This has been fixed. This problem does not occur in EDS 2023, EDS 2023 Patch 1 and EDS 2024.\n       - An issue that could result in missing audit log entries when notifications are sent based on statistic history data has been resolved.\n\n\nFledge v2\n==========\n\nv2.6.0\n-------\n\nRelease Date: 2024-10-24\n\n- **Fledge Core**\n\n    - New Features:\n\n       - Monitoring of the available space on the disk that holds the SQLite buffer for readings ingested by Fledge has been added. Entries will be written to the error log to predict the exhaustion of the disk space, and also when the space left available falls below 10% and 5%.\n       - Failures to send data north via the north service raise an alert. This is now cleared if the flow of data north later resumes.\n       - An issue that could cause high CPU utilisation when a north service was unable to send data to the upstream system has been resolved. The user is also made more aware of failed attempts to send data upstream using the alerting feature. 
An alert will be created and shown in the GUI status bar.\n       - It is now possible to create allow and block lists for IP addresses that are allowed access or explicitly denied access to the API port of the instance.\n       - The configuration items within the configuration category now have the ability to limit the user roles that are allowed to update the configuration item.\n       - Configuration items can now be given a permission property that can be used to control which user roles have access to the configuration item.\n       - Performance monitors have been improved.\n       - A new section relating to all aspects of monitoring within Fledge has been added to the documentation.\n       - The tuning documentation has been updated to discuss the use of the configuration cache size tuning parameter and to add a discussion on tuning the log level for the various services.\n\n    - Bug Fix:\n\n       - An issue that caused incorrect payloads to be sent north when choosing the audit log as the data source has been resolved.\n       - An issue that incorrectly prevented some authorised users from executing a control endpoint has been resolved.\n       - An issue with backup and restore when using Postgres as the storage engine to store the configuration data has been resolved.\n       - The manual purge, in the developer features of the user interface, was not working for assets containing '#', '+' or '&' characters in the asset name.\n       - An issue with north services that have been misconfigured not cleanly shutting down has been resolved.\n       - An issue that could cause a backup file to not download has been resolved.\n       - An issue that could allow an unauthorised user to change the password of another user has been resolved.\n       - The sent count in the Fledge GUI dashboard would increase by a few readings after every restart of Fledge, even if no south plugins were running. This has been fixed. 
This was an error in the increment of the counter; there were never any additional readings sent.\n       - An issue with an incorrect audit entry being created when adding new properties to a configuration item has been resolved.\n       - A security issue that could allow one user to see the profile of another has been resolved.\n       - Users now need to have administration permissions to see the user names and roles of other users.\n       - The handling of errors during pipeline creation has been enhanced to give better reporting of filter plugin exceptions.\n       - The notification service has been updated to allow sub-second retrigger times for notifications to be specified.\n       - The documentation for deleting users via the REST API has been corrected.\n\n\n- **GUI**\n\n    - New Features:\n\n       - The rendering of lists in the configuration options has been improved.\n       - The ordering of the tabs in the configuration user interface is no longer alphabetical but controlled by the configuration category itself.\n       - The user interface now hides configuration options the user does not have permission to change.\n       - The rendering of the developer menu has been brought into line with other sub-menu rendering.\n\n\n    - Bug Fix:\n\n       - An issue with the user interface incorrectly displaying a timestamp has been resolved.\n       - An issue with list type configuration items occasionally not rendering correctly for filters has been resolved.\n       - An issue related to improper length validation for certain entries in the configuration items has been resolved.\n       - An issue with forcible session disconnection sometimes failing has been resolved.\n       - The performance related to management of services and plugins has been improved.\n       - An issue that prevented export of persisted data in developer mode has been resolved.\n       - An issue that prevented deletion of persisted data for plugins in developer mode has 
been resolved.\n       - An issue with the filter plugin that involved underscores in the name has been resolved.\n\n\n- **Plugins**\n\n    - New Features:\n\n       - fledge-filter-asset now allows regular expressions to be used in the asset name to match when applying the filter.\n       - fledge-filter-delta has been enhanced to give greater control over the action when one or more datapoints in an asset reading are detected as changing.\n       - fledge-notify-operation now supports data substitution as with the other control-related notification delivery plugins.\n       - fledge-north-OMF plugin's *action* code in the HTTP header is typically set to *update* for OMF Data messages. This setting allows old values to be updated when timestamps match and ensures that new data is compressed correctly by the PI Data Archive. However, if the AVEVA PI Buffer Subsystem is configured, the *update* action code is converted to an internal PI storage mode, which causes new data to bypass compression and may lead to storage of too many data values. To address this issue, the OMF North plugin now allows you to change the OMF Data action code to *create*. This option should only be used when the AVEVA PI Buffer Subsystem is configured.\n       - fledge-south-s2opcua now supports the OPC UA Data Change Filter. This filter type is defined in the `OPC UA Specification, Part 4, Section 7.22.2 <https://reference.opcfoundation.org/Core/Part4/v105/docs/7.22.2>`_. The Data Change Filter allows OPC UA clients (such as this plugin) to request that the OPC UA server send data value updates only if the server's data values have changed significantly. With careful tuning, you can reduce the data traffic from OPC UA server to the client without significant loss of fidelity. 
This plugin has been upgraded to use Systerel's `S2OPC Toolkit Version 1.5.0 <https://gitlab.com/systerel/S2OPC/-/releases/S2OPC_Toolkit_1.5.0>`_.\n\n    - Bug Fix:\n\n       - An issue that could cause a crash during purge operations has been resolved.\n       - An issue with the fledge-south-modbusc plugin that could cause it to fail if the IP address of the MODBUS device was changed incorrectly and then changed back to the correct address has been resolved.\n       - fledge-south-mqtt-sparkplug now supports various data types, including string, integer, float and boolean.\n       - fledge-south-randomwalk has been improved to give a more random result.\n       - fledge-south-s2opcua would sometimes fail to connect to an OPC UA server with a large number of available endpoints on a distant or noisy network. This has been fixed.\n\n\nv2.5.0\n-------\n\nRelease Date: 2024-06-26\n\n- **Fledge Core**\n\n    - New Features:\n\n       - A new parameter has been added to the storage service to configure the number of threads that will be used to interact with the buffered reading data. This limits the impact on heavily loaded systems and also allows for the threads to be pre-created, which slightly reduces the latency.\n       - A new tuning option has been added to the core that tunes the size of the cache maintained by the configuration manager. Increasing this cache size can speed up the startup of the system when large numbers of services are used within the system.\n       - A new tuning option has been added to the north service to control the frequency of updating the stream position when writing data to the north. 
More details are available in the tuning section of the documentation.\n       - A new tuning option has been added to the north service that allows the number of buffers that are prefetched to be tuned.\n       - A small performance enhancement has been made in the south service such that the rate of storing statistics is not dependent on the ingest rate of the service. This will improve the performance of south services with high ingest rates.\n       - A performance enhancement has been added to all SQLite storage plugins that results in higher throughput and lower latency in all cases when using SQLite as a storage engine.\n       - The storage performance monitors have been updated to include the table name in the monitor name. This allows tracking of which tables are being heavily used within the system.\n       - A new security option has been added to allow for a password policy to be set. This policy defines characters that must appear within a user password.\n       - Password encryption has been made more secure.\n       - Some changes have been made to the way the API handles passwords to make them more secure.\n       - The system now tracks failed login attempts and can block accounts that have an excessive number of failed login attempts.\n       - A new security feature has been added to disconnect idle sessions after a configurable time.\n       - Support for adding list names to configuration category items of type list has been added.\n       - Support bundles and backups have been updated to require extra privileges to access.\n       - The documentation on securing Fledge has been updated.\n       - The documentation for the SQLite storage plugin has been improved and some duplication removed.\n       - The configuration category documentation has been updated to include more explanation and examples of the use of the various list type configuration items.\n       - Documentation has been added to the 'Tuning Fledge' section to describe the performance 
counters in the storage layer and also give some general tips on the subject of using the performance counters to tune the Fledge installation.\n\n\n    - Bug Fix:\n\n       - A typo in the statistics description for “Readings Sent North” has been fixed.\n       - An issue that could occasionally cause the storage service to fail during purge when using the default SQLite storage engine for readings data storage has been resolved.\n       - A problem with integer overflow in the SQLite storage engine has been resolved.\n       - An issue with one of the API entry points that could allow for command injection into the underlying operating system has been resolved.\n\n\n- **GUI**\n\n    - New Features:\n\n       - An option to use the flow editor interface for notifications has been added.\n       - An option has been added to the graph display to scroll to the latest readings available.\n       - A facility to read a JSON configuration item from a file and insert the contents into a configuration item has been added to the user interface.\n       - Support has been added to the GUI for key/value lists in configuration items.\n       - The ability to create and manage backups has become a privileged operation. 
Likewise support bundles can only be created by administrators.\n       - The user management screen now shows if a user has been blocked due to excessive failed login attempts.\n       - The documentation on viewing data has been updated in line with a number of recent changes to the user interface.\n       \n\n    - Bug Fix:\n\n       - A missing health icon in the north service flow editor has been added.\n       - An issue whereby deleting a disabled notification service could result in an error has been resolved.\n       - An issue that could result in two identical audit logs when deleting a filter from a pipeline has been resolved.\n       - The north flow editor page was not showing branches in pipelines; this has now been resolved.\n\n\n- **Services & Plugins**\n\n    - New Features:\n\n       - The build mechanism has been updated to support profiled builds.\n       - fledge-north-opcua: In the hierarchy map, forward-slash-separated string tokens in the meta-data and the Asset Name are now parsed and used to construct an object hierarchy in the OPC UA Server's Address Space. Since some South plugins and filters send path information to Fledge that is split between a path Datapoint and the Asset Name, path segments found in the Asset Name will be added to the end of the path Datapoint. The plugin supports the entire path being present in the Asset Name.\n       - OMF North plugin: The default naming convention of PI tags created is Asset name and Datapoint name separated by a dot (.) delimiter. 
It is now possible to choose any single character as the delimiter, provided that character is allowed in OMF field names.\n       - Support has been added to automatically detect new storage engines upon restart of Fledge.\n\n\n    - Bug Fix:\n\n       - A memory leak in the scale filter plugin has been fixed.\n       - A Python compatibility issue with the fledge-south-s7-python plugin has been resolved.\n       - An issue with reordering filters in a control pipeline has been resolved.\n       - An issue that could cause the service to fail if an invalid regular expression was configured in the fledge-filter-omfhint plugin has been resolved.\n       - A problem that could cause the dispatcher service to fail when deleting a filter from a control pipeline has been resolved.\n       - A problem that could cause the control dispatcher to become unresponsive when adding a filter to an active control pipeline has been resolved.\n       - An issue that meant the notification service could not find the control dispatcher if the control dispatcher was started with a non-default name has now been resolved.\n\n\nv2.4.0\n-------\n\nRelease Date: 2024-04-10\n\n- **Fledge Core**\n\n    - New Features:\n\n       - A new feature has been added that allows for internal alerts to be raised. These are used to inform users of any internal issue that may require attention; they are not related to specific data that is flowing through the Fledge data pipelines. 
Examples of alerts that may be raised are that updates to the software are available, a service is repeatedly failing or an exceptional issue has occurred.\n       - A new task (update checker) has been added that will run periodically and raise an alert if there are software updates available.\n       - The internal service monitor has been updated to use the new alerts mechanism to alert a user if services are failing.\n       - A new storage configuration option has been added that allows the server request timeout value to be modified.\n       - The ability to tune the cache flushing frequency of the asset tracker has been added to the advanced configuration options of the south and north services.\n       - The reporting of south service send latency has been updated to give more detail regarding continued send latency issues.\n       - A new tuning parameter has been added to the purge process to control the number of readings purged within single blocks. This can be used to tune the interaction between the purge process and the ingestion of new data in highly loaded systems.\n       - A new list type has been added to the types supported in the configuration category. This allows for improved configuration interactions.\n       - Support has been added in the C++ configuration manager code to allow for the new list, key/value list and object list types in configuration categories. Also some convenience functions have been added for use by plugins that wish to traverse the lists.\n       - In the hierarchy map, forward-slash-separated string tokens in the meta-data are now parsed and used to construct an object hierarchy in the OPC UA Server's Address Space.\n       - The scheduler has been enhanced to provide the capability to order the startup of services when Fledge is started.\n       - A performance improvement, courtesy of a community member, for the JSON escaping code has been added. 
This improves performance of the PostgreSQL storage plugin and other areas of the system.\n       - A new section has been added to the documentation that describes how storage plugins are built.\n       - The plugin developers guide has been updated with information and examples of the new list handling facilities added to configuration items within Fledge.\n       - The tuning section of the documentation has been updated to include details of the service startup ordering enhancement.\n       - The plugin documentation has been updated to include cross referencing between plugins. A new See Also section will be included that links to other plugins that might be useful or related to the plugin being viewed.\n       - The plugin developers guide has been updated to add some additional guidance to the developer as to how to decide whether features should be added to a plugin and also to document common pitfalls that cause problems with plugins.\n       - Documentation that describes what firewall settings are needed to install Fledge has been added to the quick start guide.\n\n\n    - Bug Fix:\n\n       - An issue that prevented configuration categories from containing items called *messages* has been resolved.\n       - An issue that could cause data to be repeated in a north service when using a pipeline in the north that adds new readings to the pipeline has been resolved.\n       - An issue that could cause the order of filters in a control pipeline API to be modified has been fixed.\n       - An issue that could result in services that are already installed being shown in the list of services available to be installed has been resolved.\n       - An issue that could cause some north plugins to fail following a restart when using the SQLite in-memory storage plugin has been fixed.      
\n       - An issue that could prevent a plugin being updated in some circumstances has been resolved.\n       - An issue requiring a restart before the change in log level for the storage service took effect has been resolved.\n       - An issue causing the database to potentially not initialize correctly when switching the readings plugin from SQLite to PostgreSQL has been resolved.\n       - An issue in the control pipeline API related to the type of one of the parameters of the pipeline has been resolved. This issue could manifest itself as an inability to edit a control pipeline.\n       - The return type of plugin_shutdown was incorrectly documented in the plugin developers guide for north plugins. This has now been resolved.\n\n\n- **GUI**\n\n    - New Features:\n\n       - A new page has been added for managing additional services within an instance.\n       - Support for entering simple lists for configuration items has been added.\n       - Support has been added for manipulating key/value lists using the new list configuration type.\n       - Navigation buttons have been added to the tabs in the south and north services to facilitate easier navigation between tabs.\n       - A preview of the new flow editor for the north side has been added. 
This may be enabled via the GUI settings page.\n       - The GUI now shows the internal alerts via an icon in the navigation bar at the top of the screen.\n\n\n    - Bug Fix:\n\n       - An issue with creating an operation in a control script with no parameters in the GUI has been resolved.\n       - An issue with the Next button not being enabled when changing the name of a service in the service creation wizard has been resolved.\n       - An issue that could result in a filter not being added to a control pipeline when the user does not click on the save button has been addressed by adding a check before navigating off the page.\n       - An issue that could result in the JSON code editor being incorrectly displayed for non-JSON code has been resolved.\n       - An issue with the visibility of the last item on the side menu when scrolling in a small window has been resolved.\n\n\n- **Services & Plugins**\n\n    - New Features:\n       \n       - Improvements have been made to the buffering strategy of the OMF north plugin to reduce the overhead in creating outgoing OMF messages.\n       - The control pipelines mechanism has been enhanced to allow pipelines to change the name of the operation that is performed as well as the parameters.\n       - The documentation of the expression filter has been updated to document the restriction on asset and datapoint names.\n\n\n    - Bug Fix:\n\n       - An issue with the dynamic reconfiguration of filters in control pipelines has been resolved.\n       - An issue that could cause the control dispatcher service to fail when changing the destination of a control pipeline has been resolved.\n       - An issue with the control dispatcher that prevented operations with no parameters from being correctly passed via control pipelines has been resolved.\n       - An issue in the control dispatcher that could cause a crash if a control pipeline completely removed the request has now been resolved.\n       - An issue that could cause 
an error to be logged when installing the control dispatcher has been resolved. The error did not prevent the dispatcher from executing.\n       - An issue when using the PostgreSQL storage plugin with data containing double quotes within JSON data has been resolved.\n       - An issue that could cause an error in south plugins written in Python that support control operations has been resolved.\n       - A memory consumption issue in the fledge-filter-asset when using the flatten option has been resolved.\n       - An issue in fledge-filter-asset that caused a deadlock in pipelines containing two instances of the filter has been resolved.\n       - An issue that limited the number of variables the fledge-south-s2opcua plugin could subscribe to has been resolved.\n       - An issue that could result in the sent count being incorrectly incremented when using the fledge-north-kafka (C based) plugin has been resolved.\n       - An issue that could cause excessive messages regarding connection loss and regain to be raised in the OMF north plugin has been resolved.\n       - An issue that caused the fledge-north-kafka (C based) plugin to fetch data when it was disabled has been resolved.\n       - If you set the User Authentication Policy to username, you must select a Security Policy other than None to communicate with the OPC UA Server. Allowing username authentication with None would mean that usernames and passwords would be passed from the plugin to the server as clear text, which is a serious security risk. This is explained in the `OPC UA Specification <https://reference.opcfoundation.org/Core/Part4/v104/docs/7.36.4>`_. In addition, OPC UA defines a Security Policy for a \"UserIdentityToken\". When configuring the fledge-south-s2opcua plugin, the Security Policy selected in your configuration must match a supported \"UserIdentityToken\" Security Policy.  To help troubleshoot configuration problems, log messages for the endpoint search have been improved. 
The documentation includes a new section called \"Username Authentication\".\n       - If a datapoint or asset name contained a reserved mathematical symbol, the fledge-filter-expression plugin was previously unable to use it as a variable in an expression. A mechanism has been added to allow these names.\n       - The Notification service would create Rule and Delivery support objects even if the notification was disabled. When the notification was later enabled, the original objects would remain. This has been fixed.\n       - If the OMF North plugin got an exception when POSTing data to the PI Web API, the plugin would declare the connection to PI broken when it wasn't. This would result in endless connection loss and reconnection messages. This has been fixed. The plugin will now ping the PI Web API every 60 seconds and will determine that the connection has been lost only if this ping fails. The OMFHint LegacyType has been deprecated because a Container cannot be changed after it is created in the PI System. This means there is no way to process the LegacyType hint when readings are processed. If the LegacyType hint appears in any reading, a warning message will be written saying that this hint type has been deprecated.\n       - This fix applies when configuring OMF North to create an Asset Framework (AF) structure. The first time an AF Element holding an AF Attribute pointing to a PI Point (i.e. the Container) is created, it will appear in Asset Framework as a normal AF Element. If the path is then changed using an \"AFLocation hint\", a reference to the AF Element should appear in the hint's location. The original AF Element's location should remain unchanged. This feature was not working correctly but has been fixed. 
Before this fix, the hint's path would be created as expected but no reference to the original data location was created.\n       - The storage service with the SQLite in-memory plugin can consume large amounts of memory while running at higher data rates. Analysis has determined this is not caused by a memory leak but rather by legitimately storing large amounts of data in memory while operating. The reason for the high load on the storage service appears to be database purging but this is a subject for further study.\n       - An issue in the control pipeline documentation that stated that services could only be the source of control pipelines has been fixed to now show that they may be the source or the destination.\n       - It is not possible to change the numeric data type of an OMF Container (which maps to a PI Point) after it has been created. This means it is not possible to enable or disable an integer OMFHint or change the numeric data type in the Fledge GUI after the Container has been created. It is possible to manually correct the problem if it is necessary. OMF North plugin documentation has been updated with the procedure.\n\n\nv2.3.0\n-------\n\nRelease Date: 2023-12-28\n\n- **Fledge Core**\n\n    - New Features:\n\n       - A new REST API has been added to the public API to allow performance counters to be fetched and reset. This API is intended for diagnostic purposes only.\n       - Improvements have been made to the way load issues on the storage service are logged. \n       - Documentation has been added that describes how to extend the API to include custom URLs for executing control functions. 
This documentation also shows how these are then called using the graphical user interface.\n\n    - Bug Fix:\n\n       - An issue with the PostgreSQL storage plugin when very large numbers of readings are ingested, more than 4294967296, has now been resolved.\n       - An issue with services shutting down rather than restarting when they fail to get a valid bearer token has been resolved.\n       - The user interface for creating write API endpoints was incorrectly requiring both a constant and a variable when only one is required. This is now resolved.\n       - A problem that meant parameters to set point control operations were not correctly sent to south plugins written in Python has been resolved.\n\n\n- **GUI**\n\n    - New Features:\n\n       - The user interface has been upgraded to use Angular version 16.\n       - The configuration section of the user interface that allows for instance wide configuration has been improved with a single tree navigation item and improved visual feedback.\n       - A link to the documentation has been added to the Control API pages of the user interface.\n\n\n    - Bug Fix:\n\n       - An issue that could cause some datapoints to display incorrectly in the user interface graph when multiple assets with identically named data points are displayed has been resolved.\n       - An issue in the user interface that meant exporting data as a CSV file created incorrect files if any of the data point names contained a comma has been fixed.\n       - An issue with the user interface not always correctly showing the information for the dispatcher service has been resolved.\n       - A broken link to the documentation in the control pipeline user interface page of the user interface has been fixed.\n\n\n- **Services & Plugins**\n\n    - New Features:\n\n       - The benchmark south plugin has been enhanced to increase the load that can be placed during testing.\n       - The fledge-south-s2opcua south 
plugin has been enhanced to allow filtering of nodes using regular expressions on the Browse Name of the nodes.\n       - The OMF north plugin has been updated to improve both the time and space efficiency of the lookup data used to map to PI Server objects.\n       - OMF North plugin documentation has been updated to show which version of the OMF specification the plugin will adopt when communicating with different versions of AVEVA products: PI Web API, Edge Data Store (EDS) and AVEVA Data Hub (ADH).\n\n\n    - Bug Fix:\n\n       - A memory leak in the SQLite in-memory storage plugin has been resolved.\n       - A memory leak in the OMF north plugin has been resolved.\n       - An issue that could cause data to fail to send using the OMF plugin when the names of data points contain special characters has now been resolved.\n       - When the \"Send full structure\" configuration boolean was false, OMF North would create an AF structure anyway. All AF Elements were at the root of the AF database, with every AF Element having a single AF Attribute mapped to a PI Point. Creation of this AF structure would take a long time for large databases, which would lead to PI Web API POST timeouts. This has been fixed. If the configuration boolean is false, OMF North will create PI Points only. In the configuration page, Send full structure has been renamed to \"Create AF Structure\".\n       - The OMF North plugin was unable to connect to AVEVA Data Hub (ADH) and OSIsoft Cloud Services (OCS) endpoints. This has been fixed.\n       - An issue with using an OMF Hint that defines a specific name to use with a tag has been resolved. 
The issue would show itself as the data not being sent to PI or ADH in some circumstances.\n       - An issue that meant some OPC UA nodes stored in the root of the hierarchy were not correctly ingested in the fledge-south-s2opcua south plugin has been resolved.\n       - The SQLite storage plugin had an issue that caused it to create overflow tables multiple times. This was not a problem in itself, but did cause the database to become locked for excessive periods of time, creating contention and delays for data ingestion operations in progress at the time.\n       - A problem that, in rare circumstances, could result in data being added to the incorrect asset in the SQLite plugin has been resolved. \n       - An issue with assets containing bracket characters not being stored in the PostgreSQL storage plugin has been resolved.\n       - An issue with string type parameters to control operations having extra pairs of quotes added has been resolved.\n       - A problem that caused the dispatcher service to log messages regarding incorrect bearer tokens has been resolved.\n       - The control dispatcher service was previously advertising itself before it had completed initialisation. This meant that a request could be received when it was partially configured, resulting in a crash of the service. Registration now takes place only once the service is completely ready to accept requests.\n       - The control dispatcher was not always using the correct source information when looking for matching pipelines. 
This has now been resolved.\n       - Control pipelines were previously still being executed if the entire pipeline was disabled; this has now been resolved.\n\n\nv2.2.0\n-------\n\nRelease Date: 2023-10-17\n\n- **Fledge Core**\n\n    - New Features:\n\n       - New audit logs have been added to reflect the creation, update and deletion of access control lists.\n       - New public API entry points have been added to allow for the creation and manipulation of control pipelines.\n       - A new user role has been added for those users able to update the control features of the platform.\n       - A new tuning parameter has been added to the PostgreSQL storage plugin to allow the maximum number of readings inserted into the database in a single insert to be limited. This is useful when high data rates or large bursts of readings are received as it limits the memory consumption of the plugin and reduces the lock contention on the database.\n       - The asset tracker component has been optimized in order to improve the ingress and egress performance of Fledge.\n       - The mechanism used by the south and north services to interact with the audit log has been optimized. This improves the ingress and egress performance of the product at the cost of a small delay before the audit log is updated.\n       - A number of optimizations have been made to improve the performance of Python filters within a pipeline.\n       - A number of optimizations to the SQLite in-memory storage plugin and the SQLiteLB storage plugin have been added that increase the rate at which readings can be stored with these plugins.\n       - The support bundle creation process has been updated to include any performance counters available in the system.\n       - The ability to monitor performance counters has been added to Fledge. The South and North services now offer performance counters that can be captured by the system. 
These are designed to provide information useful for tuning the respective services.\n       - The process used to extract log information from the system logs has been updated to improve performance and reduce the system overhead required to extract log data.\n       - A number of changes have been made to improve the performance of sending data north from the system.\n       - The performance of the statistics history task has been improved. It now makes fewer calls to the storage subsystem, improving the overall system performance.\n       - The performance of the asset tracker system has been improved, resulting in an improvement in the ingress performance of the system.\n       - Changes have been made to the purge process in the SQLiteLB and SQLite in-memory plugins in order to improve performance.       \n       - The audit log entries have been updated to include more information when schedules are updated.\n       - Audit logs have been added to the user API of the public REST interface.\n       - The plugin developers guide has been updated to include the mechanism for adding audit trail entries from C++ plugins.\n       - Plugins that run within the south and north services and north tasks now have access to the audit logging system.\n       - The public API has been updated to include the ability to make control requests.\n       - The public API of the system has been updated to allow selection of readings from the storage buffer for given time intervals.      
\n       - The public API that is used to retrieve reading data from the storage layer has been updated to allow data for multiple assets to be retrieved in a single call.\n       - The SQLite in-memory storage plugin now has an option that allows the data to be persisted when shutting the system down and reloaded on startup.\n       - The SQLite storage plugins have been updated to improve the error reporting around database contention issues.\n       - A change has been made to the configuration of the storage plugin such that rather than having to type correct names for storage plugins, the user may now select the plugins to use from a drop-down list. Note however that the system must still be restarted for the new storage plugin to take effect.\n       - The storage service has been updated to allow other services to subscribe to notifications of inserts into the generic tables.\n       - A change has been made to prevent the schedules used to start services from being renamed as this could cause the services to fail.\n       - The default interval for running the purge process has been reduced; the purge process will now run every 10 minutes. This change only affects new installations; the purge process will run as before on systems that are upgraded.       \n       - The ingestion of data from asynchronous south services paid no attention to the advanced configuration option \"Throttle\". This meant that very fast asynchronous south plugins could build extremely large queues of data within the south service, using system resources and taking a long time to shut down. This has now been rectified, with asynchronous south services now subject to flow control if the \"Throttle\" option is set for the service. Unconstrained input is still available if the \"Throttle\" option is not checked.\n       - The south plugin now supports three different modes of polling. 
Polling at fixed intervals from the time started, polling at fixed times or polling on demand via the control mechanisms.\n       - Support has been added to allow filters to pass ingested data onwards during a shutdown of the filter. This allows any buffered data to be flushed to the next filter in the pipeline.\n       - A numeric list data type has been added to the reading ingestion code of the system.\n       - A Python package used by the system was found to have a security vulnerability. The package has been updated.\n       - The format of Python tracebacks has been improved to use multiple lines within the log. This makes the trace easier to understand and prevents the truncation that can occur.\n       - The setting of log levels from a service is now also reflected in any Python code loaded by the service.\n       - The reporting of issues related to failure to load plugins has been improved.\n       - When upgrading the version of a plugin, any new configuration items are added to the relevant configuration categories. However, the operation was not correctly reported as a configuration change in the audit log. This behavior has now been corrected.\n       - An issue which could occasionally result in the bearer token used for authentication between the various services expiring before the completion of the renewal process has been resolved. This could result in the failure of services to communicate with each other.\n       - The configuration category C++ API has been enhanced to allow the retrieval and setting of all the attributes of a configuration item.       \n       - The support bundle has been updated to include a list of the Python packages installed on the machine.\n       - The documentation regarding handling and updating certificates used for authentication has been updated. 
\n       - Added documentation for the performance counters in the tuning guide.\n\n\n    - Bug Fix:\n\n       - An issue with the SQLite in-memory and the SQLiteLB storage plugins that could result in incorrect data being stored has been resolved.\n       - An erroneous message was being produced when starting the system using the SQLite in-memory storage plugin. This has now been resolved.\n       - Support has been improved for switching between different storage plugins that allows for correct schema creation when using different SQLite plugin variants for configuration and readings storage.\n       - An issue that could cause health metrics to not be correctly returned when using the Postgres storage engine has been resolved.\n       - An issue in one of the storage plugins that caused spurious warnings to appear in the logs during a backup has been resolved.\n       - A memory leak in one of the storage plugins has been fixed. This caused the storage service to consume large amounts of memory over time, which could result in the operating system killing the service.\n       - The default SQLite storage plugin has been updated to enable it to handle a large number of distinct asset codes in the readings. Previously the plugin was limited in the number of assets it could support. When the number of asset codes gets large the performance of the plugin will be reduced slightly; however, it will continue to ingest data.\n       - An issue with memory usage in Python plugins used in south services has been resolved.\n       - A number of issues regarding the usage of memory have been resolved, including some small memory leaks. The overall memory footprint of north services should also be reduced in some circumstances. 
\n       - An issue that caused log messages not to be recorded has been resolved.\n       - An issue that could cause the statistics to be displayed with a timestamp in the wrong timezone has been resolved.\n       - A bug in the statistics rate API that would result in incorrect data being returned has been fixed.\n       - An empty statistics entry would erroneously be added for an asset or a service if the advanced parameter to control the statistics was modified from the default before the service was started. This has now been resolved.\n       - A problem with statistics counter overflow that could cause a crash in the statistics collector has been resolved.\n       - An issue that caused the retrieval of system logs to fail for services with white space in the service name has been resolved.\n       - The control dispatcher now has access to the audit logging system.\n       - An issue that required the north service to be restarted if the source of data to send was changed in a running service has been resolved. Changing the data source no longer requires a restart of the north service.\n       - An issue that could sometimes cause a running north service to fail if the configuration for that service is updated has been resolved.\n       - A problem that prevented an updated service from restarting after an upgrade if HTTPS is used for the interface between services has been resolved.\n       - An issue that limited the update of additional services to just the notification service has been resolved. The update mechanism can now update any service that is added to the base system installation.\n       - The Python south plugin mechanism has been updated to fix an issue with ingestion of nested data point values.\n       - When switching a south plugin from a slow poll rate to a faster one the new poll rate does not take effect until the end of the current poll cycle. This could be a very long time. 
This has now been changed so that the south service will take the new poll rate as soon as possible rather than wait for the end of the current poll cycle.\n       - A bug that prevented notification rules from being executed for readings with asset codes starting with numeric values has been resolved.\n       - The data sent to notification rules that register for audit information has been updated to include the complete audit record. This allows for notification rules to be written that trigger on particular auditable operations within the system.\n       - The notification service would sometimes shut down without removing all of the subscriptions it holds with the storage service. This could cause issues for the storage service. Subscriptions are now correctly removed.\n       - The command line interface to view the status of the system has been updated to correctly show the statistics history collection task when it is running.\n       - The issue of incorrect timestamps in reading graphs due to inconsistent timezones in API calls has been resolved. All API calls now return timestamps in UTC unless explicitly specified otherwise.\n       - An issue with the code update mechanism that could cause multiple updates to occur has been resolved. Only a single update should be executed and then the flag allowing for updates to be applied should be removed. This prevents the update mechanism from triggering on each restart of the system.\n       - A problem that prevented the fledge-south-modbus plugin from being updated in the same way as other plugins has been resolved.\n       - An issue that caused the creation of a new user to fail when the user name matched that of a user previously removed from the system has been resolved.\n       - A problem with converting very long integers from JSON has been resolved. This would have manifested itself as a crash when handling datapoints that contain 64-bit integers above a certain value. 
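The class of defect described in the last entry above, an unchecked conversion of a JSON integer that exceeds the signed 64-bit range, can be illustrated with a small range-checking sketch. This is illustrative only and is not the actual Fledge fix; the function name and the type tags are hypothetical.

```python
INT64_MIN = -2**63
INT64_MAX = 2**63 - 1

def datapoint_from_json_number(value):
    """Classify a JSON number for storage in a 64-bit datapoint.

    Illustrative sketch: an unchecked int64 conversion of a value
    outside [INT64_MIN, INT64_MAX] is the kind of overflow that
    previously caused a crash.
    """
    if isinstance(value, int):
        if INT64_MIN <= value <= INT64_MAX:
            return ("INTEGER", value)
        # Out of signed 64-bit range: fall back to floating point
        # rather than overflowing the integer conversion
        return ("FLOAT", float(value))
    return ("FLOAT", float(value))
```

The key point is that the range check happens before any fixed-width conversion is attempted, so oversized values degrade gracefully instead of crashing.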
\n       - An update has been made to prevent the creation of a service with an empty service name.\n\n\n- **GUI**\n\n    - New Features:\n\n       - New controls have been added in the menu pane of the GUI to allow nested commands to be collapsed or expanded, resulting in a smaller menu display.\n       - A new user interface option has been added to the control menu to create control pipelines.\n       - The user interface has been updated such that if the backend system is not available then the user interface components are made non-interactive and blurred.\n       - The interface for updating the filters has been improved when multiple filters are being updated at once.\n       - New controls have been added to the asset browser to pause the automatic refresh of the data and to allow shuffling back and forth along the timeline.\n       - The ability to move backwards and forwards in the timeline of the asset browser graph has been added.\n       - The facility that pauses the automatic update of the asset browser graph has been added.\n       - The ability to graph multiple readings on a single graph has been added to the asset browser graph.\n       - A facility to allow a user to define the default time duration shown in the asset browser graph has been added to the user interface settings page.\n       - The date format has been made more flexible in the asset and readings graph.\n       - The display of image attributes for image type data points has been added to the latest reading display.\n       - The ability to select an area on the graph shown in the asset browser and zoom into the time period defined by that area has been added.\n       - The reading graph time granularity has been improved in the asset browser. 
\n\n\n    - Bug Fix:\n\n       - The user interface for configuring plugins has been improved to make it more obvious when mandatory items are missing.\n       - An issue that allowed view users to update configuration when logged in using certificate based authentication has been resolved.\n       - An issue that prevented the file upload or value update for a script type configuration item unless the item name was also \"script\" has been resolved.\n       - An issue with editing large scripts or JSON items in the configuration has been resolved.\n       - An issue that caused services with quotes in the name to disappear from the user interface has been resolved.\n       - An issue in the latest reading display that resulted in non-image data not being shown when one or more image data points were present in the reading has been resolved.\n       - A text wrapping issue in the system log viewer has been resolved.\n       - An occasional error that appeared on the Control Script and ACL pages has been resolved.\n\n\n- **Services & Plugins**\n\n    - New Features:\n\n       - The OMF north plugin has been updated to correctly handle the set of reserved characters in PI tag names when using the new linked data method for inserting data in the PI Server.\n       - The OMF north plugin has been updated to make an additional test for the server hostname when it is configured. This will give clearer feedback in the error log if a bad hostname is entered or the hostname cannot be resolved. This will also confirm that IP addresses entered are in the correct format.\n       - Some enhancements have been made to the OMF north plugin to improve the performance when there are large numbers of distinct assets to send to the PI Server.\n       - There have been improvements to the OMF north plugin to prevent an issue that could cause the plugin to stop sending data if the type of an individual datapoint changed repeatedly between integer and floating point values. 
The logging of the plugin has also been improved, with clearer messages and less repetition of error conditions that persist for long periods.\n       - Support for multiple data centers for OSIsoft Cloud Services (OCS) has been added in the OMF north plugin. OCS is hosted in the US-West and EU-West regions.\n       - When processing data updates from the PI Server at high rates, the PI Server Update Manager queue might overflow. This was caused by the PI Server not retrieving data updates until all registrations were complete. To address this, the PI Server South plugin has been updated to interleave registration and retrieval of data updates so that data retrieval begins immediately.\n       - Macro substitution has been added to the OMFHint filter, allowing the contents of datapoints and metadata to be incorporated into the values of the OMF Hint; for example, the Asset Framework location can now include data read from the data source.\n       - The fledge-filter-asset has been updated to allow it to split assets into multiple assets, with the different data points in the original asset being assigned to one or more of the new assets created.\n       - The fledge-filter-asset has been enhanced to allow it to flatten a complex asset structure. This allows nested data to be moved to the root level of the asset.\n       - The fledge-filter-asset has been enhanced to allow it to remove data points from readings.\n       - Windowed averages in the notification service preserve the type of the input data when creating the averages. This does not work well for integer values and has been changed such that integer values are promoted to floating point when using windowed averages for notification rule input.\n       - The notification mechanism has been updated to accept raw statistics and statistics rates as an input for notification rules. 
This allows alerts to be raised for pipeline flows and other internal tasks that generate statistics.\n       - Notifications can now register for audit log entries to be sent to notification rules. This allows notifications to be made based on internal state changes of the system.\n       - The fledge-north-opcuaclient has been updated to support multiple values in a single write.\n       - The fledge-north-opcuaclient plugin has been updated to support OPC UA security mode and security policies.\n       - The fledge-north-httpc plugin now supports sending audit log data as well as readings and statistics.\n       - The fledge-north-kafka plugin has been updated to allow for username and password authentication to be supplied when connecting to the Kafka server.\n       - Compression functionality has been added to the fledge-north-kafka plugin.\n       - The average and watchdog rules have been updated to allow selection of data sources other than the readings to be sent to the rules.\n       - The fledge-notify-email notification delivery plugin has been updated to hide the password from view and also allow custom alert messages to be created.\n       - Some devices were not compatible with the optimized block reading of registers performed by the fledge-south-modbus plugin. The plugin has been updated to provide controls that can determine how it reads data from the modbus device. This allows single register reads, single object reads and the current optimized block reads.\n       - The fledge-south-s2opcua now supports an optional datapoint in its Readings that shows the full path of the OPC UA Variable in the server's namespace. It has also been updated to support large numbers of Monitored Items.\n       - The option to configure and use a username and password for authentication to the MQTT broker has been added to the fledge-south-mqtt plugin.\n       - The North service could crash if it retrieved invalid JSON while processing a reconfiguration request. 
This was addressed by adding an exception handler to prevent the crash.\n       - The audit logger has been made available to plugins running within the notification service.\n       - The notification service documentation has been updated to include examples of notifications based on statistics and audit logs.\n       - Documentation of the AF Location OMFHint in the OMF North plugin page has been updated to include an outline of differences in behaviors between Complex Types and the new Linked Types configuration.\n       - The documentation of the OMF North plugin has been updated to conform with the latest look and feel of the configuration user interface. It also contains notes regarding the use of complex types versus the OMF 1.2 linked types.\n       - The documentation for the asset filter has been improved to include more examples and explanations for the various uses of the plugin and to include all the different operations that can be performed with the filter.\n       - The documentation for the control notification plugin has been updated to include examples for all destinations of control requests.\n\n\n    - Bug Fix:\n\n       - The OMF North plugin that is used to send data to the AVEVA PI Server has been updated to improve its performance.\n       - The OMF North plugin sent basic data type definitions to AVEVA Data Hub (ADH) that could not be processed, resulting in a loss of all time series data. This has been fixed.\n       - Recent changes in the OMF North plugin caused the data streaming to the Edge Data Store (EDS) to fail. This has been fixed. The fix has been tested with EDS 2020 (Version 1.0.0.609).\n       - The fledge-north-opcuaclient plugin has been updated to support higher data transfer rates.\n       - An issue with the fledge-south-s2opcua plugin that allowed a negative value to be entered for the minimum reporting interval has been resolved. 
The plugin has also been updated to use the new tab format for configuration item grouping.\n       - An issue with NULL string data being returned from OPC UA servers has been resolved. NULL strings will not be represented in the readings; no data point will be created for a NULL string.\n       - The fledge-south-s2opcua plugin would become unresponsive if the OPC UA server was unavailable or if the server URL was incorrect. The only way to stop the plugin in this state was to shut down Fledge. This has been fixed.\n       - An issue with the fledge-notify-setpoint plugin that allowed control operations to occur before a south plugin was fully ready has been resolved.\n       - An issue with reconfiguring a fledge-north-kafka plugin has been resolved; this now behaves correctly in all cases.\n       - An issue with sending data to Kafka that included image data points has been resolved. There is no support in Kafka for images; they will be removed while the remainder of the data is still sent to Kafka.\n       - An issue with the fledge-south-modbustcp & S7 plugins which caused the polling to fail has been resolved.\n       - A problem with the fledge-south-j1708 & fledge-south-j1939 plugins that caused them to fail if they were added disabled and later enabled has been resolved.\n       - A problem that caused the fledge-north-azure-iot plugin to fail to send data has been corrected.\n       - A product version check was made incorrectly if the OMF endpoint type was not PI Web API. This has been fixed.\n       - When a notification was sent, an audit log entry was created even when the delivery failed. 
It should only be created on successful delivery; this has been fixed.\n       - Problems with the fledge-notify-asset delivery plugin that would sometimes result in stopping the notification service, and that prevented it from creating entries in the asset tracker, have been resolved.\n       - An issue that could cause notifications not to trigger correctly when used with conditional forwarding has been resolved.\n       - An issue with using multiple Python based plugins in a north conditional forwarding pipeline has been resolved.\n       - Changing the name of an asset in a notification rule plugin could sometimes cause an error to be incorrectly logged. This has now been resolved.\n       - An issue related to using averaging with the statistics history input to the notification rules has been fixed.\n       - If a query for AF Attributes includes a search string token that does not exist, PI Web API returns an HTTP 400 error. PI Server South now retrieves error messages if this occurs and logs them.\n       - Various filters summarize data over time; these have been standardized to use the times of the summary calculation.\n       - The fledge-filter-threshold interface has been tidied up, removing duplicate information.\n       - A problem with installation of the fledge-south-person-detection plugin on Ubuntu 20 has been resolved.\n       - The control map configuration item of the fledge-south-modbus plugin was incorrectly described; this has now been resolved.\n\n\nv2.1.0\n-------\n\nRelease Date: 2022-12-26\n\n- **Fledge Core**\n\n    - New Features:\n\n       - North plugins run as a task, rather than a service, would be run by the Python sending task rather than the C++ sending task. This resulted in filter pipelines not being applied to the task. This has now been resolved.\n       - A new mechanism has been introduced that allows configuration items within a category to have a group associated with them. 
This allows items that relate to a particular mechanism to be recognised as related by clients of the API and display decisions to be taken based on these groups.\n       - The asset browser APIs have been enhanced to allow for a window of data in the past to be returned. In conjunction a new timespan entry point has been added to allow the oldest and newest date for which an asset exists within the reading buffer to be returned.\n       - An option has been added to the advanced configuration of south services that allows the statistics that are generated by the south service to be tailored. Statistics may be kept for the service as a whole, each asset ingested by the service or both. This setting relates to a given service and may be different in different south services. Full details are available in the tuning guide within the documentation.\n       - Two new types of user are now available in Fledge; users that can view the configuration only and users that can view the data only.\n\n\n    - Bug Fix:\n\n       - The reset and purge scripts have been improved such that if the reading plugin is different from the storage plugin the data will be removed from the appropriate plugins.\n       - A problem that prevented items from being disabled in the user interface when they were not valid for the current configuration has been resolved.\n       - An issue that would sometimes cause the error `Not all updates in a transaction succeeded` to be logged when updating the user's access token has been resolved.\n       - An issue that could cause properties of configuration items to be lost or incorrectly updated has been resolved.\n\n\n- **GUI**\n\n    - New Features:\n\n       - The graphical user interface for viewing the configuration of the south and north services and tasks has now been updated to display the configuration items in multiple tabs.\n       - The user interface now supports two types of view only users; those that can view the configuration and 
those that can view the data only.\n\n\n    - Bug Fix:\n\n       - An issue that could leave two menu items selected in the menu pane of the user interface has been resolved.\n       - The tab view of tabular data in the user interface has been updated to show the date as well as the time related to readings.\n\n\n- **Services & Plugins**\n\n    - New Features:\n\n       - A new north plugin, fledge-north-opcuaclient, has been created to send data with an OPC UA client to an OPC UA server.\n       - The asset filter has been updated to support the ability to map datapoint names for an asset.\n       - The OMF north plugin now supports all ADH regions.\n       - The OMF north plugin has been updated to allow support for OMF 1.2 features. This allows for better control of types within OMF, resulting in the OMF plugin now dealing more cleanly with assets with different datapoints in different readings. Any assets that are already being sent to an OMF endpoint will continue to use the previous type mechanism. A number of new OMF hints are also supported.\n       - The S2OPCUA south plugin has been updated to allow the timestamp for readings to be taken from the OPC UA server itself rather than the time that it was received by Fledge.\n\n\n\n    - Bug Fix:\n\n       - An issue with building of the DNP3 plugin on the Raspberry Pi platform has been resolved.\n       - The S2OPCUA south plugin has been updated to resolve an issue with duplicate browse names causing data from two OPC UA variables to be stored in the same Fledge datapoint. The plugin has also been updated to give more options for how the assets are structured. The options of a single asset for all datapoints and an asset per OPC UA object have been added. 
It is also possible to use the OPC UA object name as the prefix for asset names in the case of a single variable per asset as well as the current option of a fixed prefix for the browse name of the variable.\n\nv2.0.1\n-------\n\nRelease Date: 2022-10-20\n\n- **Fledge Core**\n\n    - New Features:\n\n       - A new option, healthcheck, has been added to the command line script used to start, stop and monitor the instance. This runs a number of checks against the system to detect common misconfigurations and issues with the environment that have been observed to cause issues with the system.\n       - A third source of data is now available for sending to the north plugins, the internal audit log. This contains information such as configuration changes, service failures and other significant events within the Fledge instance. Note that a plugin must indicate it is able to handle audit data before audit data will be made available to it; currently the OPCUA north plugin is able to accept audit data.\n       - The SQLite storage plugins have been updated to periodically reclaim free storage. This is useful for installations that experience short term peaks in storage demand as it will release the storage used during those peaks back to the operating system.\n       - The API to fetch audit log entries has been enhanced to allow a time based filter to be applied. This allows only audit log entries since a given date to be returned to the caller.\n       - A new API has been added that will fetch the list of packages that are available to be updated on the system.\n       - Two new API entry points have been added that return health data for the logging subsystems and the storage service. These are used by the healthcheck option of the fledge command script.\n       - The nesting of JSON objects that represent readings was previously limited to two levels within JSON; this limitation has now been lifted in line with the internal representation of nested objects. 
This is particularly important when handling audit log data in north plugins and now allows full audit log entries to be transmitted via north plugins.\n       - Improvements have been made to error logs to diagnose certain storage faults. Also the ability to recover from some storage faults connected to the gathering of statistics has been added.\n       - Some improvements to the diagnostics for control operations within the system have been made to aid in the development of control pipelines within the system.\n       - The public REST API documentation has been updated to cover more of the entry points supported and also to include examples of calling the asset browsing and statistics APIs using Grafana.\n\n\n    - Bug Fix:\n\n       - An issue with incorrectly formed JSON when control operations are triggered from the north service has been resolved.\n       - A fix has been added to prevent a crash when the incorrect number of arguments is given to get_plugin_info. Also the function name to extract has been defaulted to be plugin_info.\n       - An issue with control operation parameters which had embedded quotes within the parameter values has been resolved. This previously caused some control operations from north services to not be processed by the control dispatcher service.\n       - When modifying a schedule the audit log entry for that change, SCHCH, was previously added twice. This has now been resolved.\n       - An issue that caused a change to the units used for reading rate, e.g. per second, per minute or per hour, to not be actioned until a service was restarted has now been fixed. If the rate was also changed then this change would be actioned.\n       - It was possible to set a reading rate of 0 readings; this would cause the south service to fail. 
It is now not possible to set a rate of 0.\n\n\n- **Services & Plugins**\n\n    - New Features:\n\n       - Support has been added to the OMF north plugin that allows the AVEVA Data Hub to be specified as a destination.\n       - Documentation has been added for the GCP Pub/Sub north plugin.\n\n\n    - Bug Fix:\n\n       - The service dispatcher was previously looking at the wrong service type when sending operation messages to south services; this has now been resolved.\n       - A bug in the scale-set filter that caused integer values to remain as integers when scaled to a value that could not be represented in an integer, e.g. scaling down or scaling by a non-integer factor, has been resolved.\n       - The S2OPCUA south plugin provides a configuration option, minimum reporting interval, that is used to slow the rate of reporting down for busy items. No reports of changes will be recorded when the change happens more frequently than the value set. In the case of the S2OPCUA plugin this was being ignored. It is now actioned correctly within the plugin.\n\n\nv2.0.0\n-------\n\nRelease Date: 2022-09-09\n\n- **Fledge Core**\n\n    - New Features:\n\n       - Options have been added for choosing the Fledge asset name: Browser Name, Subscription Path and Full Path. The OPC UA Source timestamp is used as the User Timestamp in Fledge.\n       - The storage interface used to query generic configuration tables has been improved to support tests for null and non-null column values.\n       - The ability for north services to support control inputs coming from systems north of Fledge has been introduced.\n       - The handling of a failed storage service has been improved. The clients now attempt to re-connect and if that fails they will shut down. The logging produced is now much less verbose, removing the repeated messages previously seen.\n       - A new service has been added to Fledge to facilitate the routing of control messages within Fledge. 
This service is responsible for determining which south services to send control requests to and also for the security aspects of those requests.\n       - Ensure that new Fledge data types not supported by OMF are not processed.\n       - The storage service now supports a richer set of queries against the generic table interface. In particular, joins between tables are now supported.\n       - OPC UA Security has been enhanced. This plugin now supports Security Policies Basic256 and Basic256Sha256, with Security Modes Sign and Sign & Encrypt. Authentication types are anonymous and username/password.\n       - South services that have a slow poll rate can take a long time to shut down; this sometimes resulted in those services not shutting down cleanly. The shutdown process has been modified such that these services now shut down promptly regardless of polling rate.\n       - A new configuration item type has been added for the selection of access control lists.\n       - Support has been added to the Python query builder for NULL and NOT NULL columns.\n       - The Python query builder has been updated to support nested database queries.\n       - The third party packages on which Fledge is built have been updated to use the latest versions to resolve issues with vulnerabilities in these underlying packages.\n       - When the data stream from a south plugin included an OMF Hint of AFLocation, performance of the OMF North plugin would degrade. In addition, process memory would grow over time. These issues have been fixed.\n       - The version of the PostgreSQL database used by the Postgres storage plugin has been updated to PostgreSQL 13.\n       - An enhancement has been added to the North service to allow the user to specify the block size to use when sending data to the plugin. 
This helps tune the north services and is described in the tuning guide within the documentation.\n       - The notification service would previously output warning messages when it was starting. These were not an indication of a problem and should have been information messages. This has now been resolved.\n       - The backup mechanism has been improved to include some external items in the backup and provide a more secure backup.\n       - The purge option that controls if unsent assets can be purged or not has been enhanced to provide options for sent to any destination or sent to all destinations as well as sent to no destinations.\n       - It is now possible to add control features to Python south plugins.\n       - Certificate based authentication is now possible between services in a single instance. This allows for secure control messages to be implemented between services.\n       - Performance improvements have been made to the display of south service data when large numbers of assets are in use.\n       - The new micro service, the control dispatcher, is now available as a package that can be installed via the package manager.\n       - New data types are now supported for data points within an asset and are encoded into various Python types when passed to Python plugins or scripts run within standard plugins. This includes numpy arrays for images and data buffers, two-dimensional Python lists and others. 
Details of the type encoding can be found in the plugin developers guide of the online product documentation.\n       - The mechanism for online update of configuration has been extended to allow for more configuration to be modified without the need to restart any services.\n       - Support has been added for the Raspberry Pi Bullseye release.\n       - A problem with a file descriptor leak in Python that could cause Fledge to fail has been resolved.\n       - The control of logging levels has now been added to the Python code run within a service such that the advanced settings option is now honoured by the Python code.\n       - Enhancements have been made to the asset tracker API to retrieve the service responsible for the ingest of a given asset.\n       - A new API has been added to allow external viewing and managing of the data that various plugins persist.\n       - A new REST API entry point has been added that allows all instances of a specified asset to be purged from the buffer. A further entry point has also been added to purge all data from the reading buffer. These entry points should be used with care as they will cause data to be discarded.\n       - A new parameter has been added to the asset retrieval API that allows image data to be returned, images=include. By default image type datapoints will be replaced with a message, “Image removed for brevity”, in order to reduce the size of the returned payload.\n       - A new API has been added to the management API that allows services to request that URLs in the public API are proxied to the service API. This is used when extending the functionality of the system with custom microservices.\n       - A new set of API calls have been added to the public REST API of the product to support the control dispatcher and for the creation and management of control scripts.\n       - A new API has been added to the public API that will return the latest reading for a given asset. 
This will return all data types including images.\n       - A new API has been added that allows asset tracking records to be marked as deprecated. This allows the flushing of relationships between assets and the services that have processed them. It is useful only in development systems and should not be used in production systems.\n       - A new API call has been added that allows the persisted data related to a plugin to be retrieved via the public REST API. This is intended for use by plugin writers and to allow for better tracking of data persisted between service executions.\n       - A new query parameter has been added to the API used to fetch log messages from the system log, nontotals. This will increase the performance of the call at the expense of not returning the total number of logs that match the search criteria.\n       - New API entry points have been added for the management of Python packages.\n       - Major performance improvements have been made to the code for retrieving log messages from the system log. This is mainly an issue on systems with very large log files.\n       - The storage service API has been extended to support the creation of private schemas for the use of optional micro services registered to a Fledge instance.\n       - Filtering by service type has now been added to the API that retrieves service information via the public REST API.\n       - A number of new features have been added to the user interface to aid developers creating data pipelines and plugins. These features allow for manual purging of data, deprecating the relationship between the services and the assets they have ingested and viewing the persisted data of the plugins. 
These are all documented in the section on developing pipelines within the online documentation.\n       - A new section has been added to the documentation which discusses the process and best practices for building data pipelines in Fledge.\n       - A glossary has been added to the documentation for the product.\n       - The documentation that describes the writing of asynchronous Python plugins has been updated in line with the latest code changes.\n       - The documentation has been updated to reflect the new tabs available in the Fledge user interface for editing the configuration of services and tasks.\n       - A new introduction section has been added to the Fledge documentation that describes the new features and some typical use cases of Fledge.\n       - A new section has been added to the Fledge Tuning guide that discusses the tuning of North services and tasks. Also scheduler tuning has been added to the tuning guide along with the tuning of the service monitor which is used to detect failures of services within Fledge.\n       - The Tuning Fledge section of the documentation has been updated to include information on tuning the Fledge service monitor that is used to monitor and restart Fledge services. A section has also been added that describes the tuning of north services and tasks. A new section describes the different storage plugins available, when they should be used and how to tune them.\n       - Added an article on Developing with Windows Subsystem for Linux (WSL2) to the Plugin Developer Guide. WSL2 allows you to run a Linux environment directly on Windows without the overhead of Windows Hyper-V. 
You can run Fledge and develop plugins on WSL2.\n       - Documentation has been added for the purge process and the new options recently added.\n       - Documentation has been added to the plugin developer guides that explains what needs to be done to allow the packaging mechanism to be able to package a plugin.\n       - Documentation has been added to the Building Pipelines section of the documentation for the new UI feature that allows Python packages to be installed via the user interface.\n       - Documentation has been updated to show how to build Fledge using the requirements.sh script.\n       - The documentation ordering has been changed to make the section order more logical.\n       - The plugin developers guide has been updated to include information on the various flags that are used to communicate the options implemented by a plugin.\n       - Updated OMF North plugin documentation to include current OSIsoft (AVEVA) product names.\n       - Fixed a typo in the quick start guide.\n       - Improved north plugin developers documentation is now available.\n\n    - Bug Fix:\n\n       - The Fledge control script has options for purge and reset that require a confirmation before it will continue. The message that was produced if this confirmation was not given was unclear. This has now been improved.\n       - An issue that could cause a north service or task that had been disabled for a long period of time to fail to send data when it was re-enabled has been resolved.\n       - S2OPCUA Toolkit changes required an update in build procedures for the S2OPCUA South Plugin.\n       - Previously it was not possible to configure the advanced configuration of a south service until it had been run at least once. 
This has now been resolved and it is possible to add a south service in disabled mode and edit the advanced configuration.\n       - The diagnostics when a plugin fails to load have been improved.\n       - The South Plugin shutdown problem was caused by errors in the plugin startup procedure which would throw an exception for any error. The plugin startup has been fixed so errors are reported properly. The problem of plugin shutdown when adding a filter has been resolved.\n       - The S2OPCUA South Plugin would throw an exception for any error during startup. This would cause the core system to shut down the plugin permanently after a few retries. This has been fixed. Error messages have been recategorized to properly reflect informational, warning and error messages.\n       - The update process has been optimised to remove an unnecessary restart if no new version of the software is available.\n       - The OMF North plugin was unable to process configuration changes or shut down if the PI Web API hostname was not correct. This has been fixed.\n       - S2OPC South plugin builds have been updated to explicitly reference S2OPC Toolkit Version 1.2.0.\n       - An issue that could on rare occasions cause the SQLite plugin to silently discard readings has been resolved.\n       - An issue with the automatic renewal of authentication certificates has been resolved.\n       - Deleting a service which had a filter pipeline could cause some orphaned configuration information to be left stored. This prevented creating filters of the same name in the future. 
This has now been resolved.\n       - The error reporting has been improved when downloading backups from the system.\n       - An issue that could cause north plugins to occasionally fail to shut down correctly has now been resolved.\n       - Some fixes have been made to the package update API that allows the core package to be updated.\n       - The documentation has been updated to correct a statement regarding running the south side as a task.\n\n\n- **GUI**\n\n    - New Features:\n\n        - A new *Developer* item has been added to the user interface to allow for the management of Python packages via the UI. This is enabled by turning on developer features in the user interface *Settings* page.\n        - A control has been added that allows the display of assets in the *South* screen to be collapsed or expanded. This allows for more services to be seen when services ingest multiple assets.\n        - A new feature has been added to the south page that allows the relationship between an asset and a service to be deprecated. This is a special feature enabled with the Developer Features option. See the documentation on building pipelines for a full description.\n        - A new feature has been added to the Assets and Readings page that allows for manual purging of named assets or all assets. This is a developer only feature and should not be used on production systems. The feature is enabled, along with other developer features, via the Settings page.\n        - A new feature has been added to the South and North pages for each service that allows the user to view, import, export and delete the data persisted by a plugin. This is a developer only feature and should not be used on production systems. It is enabled via the Settings page.\n        - A new configuration type, Access Control List, is now supported in the user interface. 
This allows for selection of an ACL from those already created.\n        - A new tabbed layout has been adopted for the editing of south and north services and tasks. Configuration, Advanced and Security tabs are supported, as are tabs for developer features if enabled.\n        - The user interface for displaying system logs has been modified to improve the performance of log viewing.\n        - The User Interface has been updated to use the latest versions of a number of packages it depends upon, due to vulnerabilities reported in those packages.\n        - With the introduction of image data types to the readings supported by the system the user interface has been updated to add visualisation features for these images. A new feature also allows the latest reading for a given asset to be shown.\n        - A new feature has been added to the south and north pages that allows the user to view the logs for the service.\n        - The service status display now includes the Control Dispatcher service if it has been installed.\n        - The user interface now supports the new control dispatcher service. 
This includes the graphical creation and editing of control scripts and access control lists used by control features.\n        - An option has been added to the Asset and Readings page to show just the latest values for a given asset.\n        - The notification user interface now links to the relevant sections of the online documentation allowing users to navigate to the help based on the current context.\n        - Some timezone inconsistencies in the user interface have been resolved.\n\n    - Bug Fix:\n\n        - An issue that would cause the GUI to not always allow JSON data to be saved has been resolved.\n        - An issue with the auto refresh in the system log page that made selecting the service to filter difficult has been resolved.\n        - The sorting of services and tasks in the South and North pages has been improved such that enabled services appear above disabled services.\n        - An issue that prevented gaps in the data from appearing in the graphs displayed by the GUI has now been resolved.\n        - Entering times in the GUI could sometimes be difficult and result in unexpected results. This has now been improved to ease the entry of time values.\n\n\n- **Plugins**\n\n    - New Features:\n\n       - A new fledge-notify-control plugin has been added that allows notifications to be delivered via the control dispatcher service. 
This allows the full features of the control dispatcher to be used with the edge notification path.\n       - A new fledge-notify-customasset notification delivery plugin that creates an event asset in readings.\n       - A new fledge-rule-delta notification rule plugin that triggers when a data point value changes.\n       - A new fledge-rule-watchdog notification rule plugin that allows notifications to be sent if data stops being ingested for specified assets.\n       - Support has been added for proxy servers in the north HTTP-C plugin.\n       - The OPCUA north plugin has been updated to include the ability for systems outside of Fledge to write to the server that Fledge advertises. These writes are taken as control input into the Fledge system.\n       - The HTTPC North plugin has been enhanced to add an optional Python script that can be used to format the payload of the data sent in the HTTP REST request.\n       - The SQLite storage plugins have been updated to support service extension schemas. This is a mechanism that allows services within the Fledge system to add new schemas within the storage service that are exclusive to that service.\n       - The Python35 filter has been updated to use the common Python interpreter. This allows for packages such as numpy to be used. The resilience and error reporting of this plugin have also been improved.\n       - A set of developer only features designed to aid the process of developing data pipelines and plugins has been added in this release. These features are turned on and off via a toggle setting on the Settings page.\n       - A new option has been added to the Python35 filter that changes the way datapoint names are used in the JSON readings. Previously these had to be encoded and decoded by use of the b'xxx' mechanism. 
There is now a toggle that allows either this encoding to be required or plain text strings to be used.\n       - The API of the storage service has been updated to allow for custom schemas to be created by services that extend the core functionality of the system.\n       - New image type datapoints can now be sent between instances using the http north and south plugins.\n       - The ability to define response headers in the http south plugin has been added to aid certain circumstances where CORS is used for data flows.\n       - The documentation of the Python35 filter has been updated to include a fuller description of how to make use of the configuration data block supported by the plugin.\n       - The documentation describing how to run services under the debugger has been improved along with other improvements to the documentation aimed at plugin developers.\n       - Documentation has been added for the fledge-north-azure plugin.\n       - Documentation has now been added for the fledge-north-harperdb plugin.\n\n\n    - Bug Fix:\n\n       - Build procedures were updated to accommodate breaking changes in the S2OPC OPCUA Toolkit.\n       - Occasionally switching from the sqlite to the sqlitememory plugin for the storage of readings would cause a fatal error in the storage layer. This has now been fixed and it is possible to change to sqlitememory without an error.\n       - A race condition within the modbus south plugin that could cause unfair scheduling of read versus write operations has been resolved. This could cause write operations to be delayed in some circumstances. The scheduling of set point write operations is now fairly interleaved between the read operations in all cases.\n       - A problem that caused the HTTPC North plugin to fail if the path component of the URL was omitted has been resolved.\n       - The modbus-c south plugin documentation has been enhanced to include details of the function codes used to read modbus data. 
Incorrect error messages have also been improved to aid in resolving configuration issues. The documentation has been updated to include descriptive text for the error messages that may occur.\n       - The Python35 filter plugin has been updated such that if no data is to be passed onwards it may now simply return the None Python constant or an empty list. The mechanism that allows simple Python scripts to be added into filter pipelines has also had a number of updates to improve the robustness of the plugin in the event of incorrect script code being provided by the user. The behaviour of the plugin has also been updated such that any errors running the script will prevent data being passed onwards along the filter pipeline. An error explaining the exact cause of the failure is now logged in the system log. Its documentation has also been updated to discuss Python package imports and issues when removing previously used imports.\n       - The Average rule has been updated to improve the user interaction during the configuration of the rule.\n       - The first time a plugin that persisted data was executed, spurious errors and warnings would be written to the system log. This has now been resolved.\n       - An issue with the Kafka north plugin not sending data in certain circumstances has been resolved.\n       - Adding some notification plugins would cause incorrect errors to be logged to the system log. The functioning of the notifications was not affected. This has now been resolved and the error logs no longer appear.\n       - The documentation for the fledge-rule-delta plugin has been corrected.\n\n\nFledge v1\n==========\n\n\nv1.9.2\n-------\n\nRelease Date: 2021-09-29\n\n- **Fledge Core**\n\n    - New Features:\n\n       - The ability for south plugins to persist data between executions of south services has been added for plugins written in C/C++. This follows the same model as already available for north plugins.
\n       - Notification delivery plugins now also receive the data that caused the rule to trigger. This can be used to deliver values in the notification delivery plugins.\n       - A new option has been added to the sqlite storage plugin only that allows assets to be excluded from consideration in the purge process.\n       - A new purge process has been added to control the growth of statistics history and audit trails. This new process is known as the \"System Purge\" process.\n       - The support bundle has been updated to include details of the packages installed.\n       - The package repository API endpoint has been updated to support the Ubuntu 20.04 repository endpoint.\n       - The handling of updates from RPM package repositories has been improved.\n       - The certificate store has been updated to support more formats of certificates, including DER, P12 and PFX format certificates.\n       - The documentation has been updated to include an improved and detailed introduction to filters.\n       - The OMF north plugin documentation has been re-organised and updated to include the latest features that have been introduced to this plugin.\n       - A new section has been added to the documentation that discusses the tuning of the edge based control path.\n\n\n    - Bug Fix:\n\n       - A rare race condition during ingestion of readings would cause the south service to terminate and restart. This has now been resolved.\n       - In some circumstances it was seen that north services could send the same data more than once. This has now been corrected.\n       - An issue that caused an intermittent error in the tracking of data sent north has been resolved. 
This only impacted north services and not north tasks.\n       - An optimisation has been added to prevent north plugins being sent empty data sets when the filter chain removes all the data in a reading set.\n       - An issue that prevented a north service restarting correctly when certain combinations of filters were present has been resolved.\n       - The API for retrieving the list of backups on the system has been improved to honour the limit and offset parameters.\n       - An issue with the restore operation always restoring the latest backup rather than the chosen backup has been resolved.\n       - The support package failed to include log data if binary data had been written to syslog. This has now been resolved.\n       - The configuration category for the system purge was in the incorrect location within the configuration category tree; it has now been correctly placed underneath the “Utilities” item.\n       - It was not possible to set a notification to always retrigger as there was a limitation that there must always be 1 second between notification triggers. This restriction has now been removed and it is possible to set a retrigger time of zero.\n       - An error in the documentation for the plugin developers guide which incorrectly documented how to build debug binaries has been corrected.\n\n\n- **GUI**\n\n    - New Features:\n\n       - The user interface has been updated to improve the filtering of logs when a large number of services have been defined within the instance.\n       - The user interface input validation for hostnames and ports has been improved in the setup screen. A message is now displayed when an incorrect port or address is entered.\n       - The user interface now prompts to accept a self-signed certificate if one is configured.\n\n\n    - Bug Fix:\n\n       - If a south or north plugin included a script type configuration item the GUI failed to allow the service or task using this plugin to be created correctly. 
This has now been resolved.\n       - The ability to paste into password fields has been enabled in order to allow copy/paste of keys, tokens, etc. into the configuration of the south and north services.\n       - An issue that could result in filters not being correctly removed from a pipeline of 2 or more filters has been resolved.\n\n\n- **Plugins**\n\n    - New Features:\n\n       - A new OPC/UA south plugin has been created based on the Safe and Secure OPC/UA library. This plugin supports authentication and encryption mechanisms.\n       - Control features have now been added to the modbus south plugin that allow the writing of registers and coils via the south service control channel.\n       - The modbus south control flow has been updated to use both 0x06 and 0x10 function codes. This allows items that are split across multiple modbus registers to be written in a single write operation.\n       - The OMF plugin has been updated to support more complex scenarios for the placement of assets within the PI Asset Framework.\n       - The OMF north plugin hinting mechanism has been extended to support asset framework hierarchy hints.\n       - The OMF north plugin now defaults to using a concise naming scheme for tags in the PI server.\n       - The Kafka north plugin has been updated to allow timestamps of higher granularity than 1 second; previously timestamps would be truncated to the previous second.\n       - The Kafka north plugin has been enhanced to give the option of sending JSON objects as strings to Kafka, as was previously the default, or sending them as JSON objects.\n       - The HTTP-C north plugin has been updated to allow the inclusion of custom HTTP headers.\n       - The Python35 Filter plugin did not correctly handle string type data points. 
This has now been resolved.\n       - The OMF Hint filter documentation has been updated to describe the use of regular expressions when defining the asset name to which the hint should be applied.\n\n\n    - Bug Fix:\n\n       - An issue with string data that had quote characters embedded within the reading data has been resolved. This would cause data to be discarded with a bad formatting message in the log.\n       - An issue that could result in the configuration for the incorrect plugin being displayed has now been resolved.\n       - An issue with the modbus south plugin that could cause resource starvation in the threads used for set point write operations has been resolved.\n       - A race condition in the modbus south plugin that could cause an issue if the plugin configuration is changed during a set point operation has been resolved.\n       - The CSV playback south plugin installation on CentOS 7 platforms has now been corrected.\n       - The error handling of the OMF north plugin has been improved such that assets that contain data types that are not supported by the OMF endpoint of the PI Server are removed and other data continues to be sent to the PI Server.\n       - The Kafka north plugin was not always able to reconnect if the Kafka service was not available when it was first started. This issue has now been resolved.\n       - The Kafka north plugin would on occasion duplicate data if a connection failed and was later reconnected. This has been resolved.\n       - A number of fixes have been made to the Kafka north plugin, these include: fixing issues caused by quoted data in the Kafka payload, sending timestamps accurate to the millisecond, fixing an issue that caused data duplication and switching to the user timestamp.\n       - A problem with the quoting of string type data points on the North HTTP-C plugin has been fixed.\n       - String type variables in the OPC/UA north plugin were incorrectly having extra quotes added to them. 
This has now been resolved.\n       - The delta filter previously did not manage the calculation of delta values when a datapoint changed from being an integer to a floating point value or vice versa. This has now been resolved and delta values are correctly calculated when these changes occur.\n       - The example path shown in the DHT11 plugin in the developers guide was incorrect; this has now been fixed.\n\n\nv1.9.1\n-------\n\nRelease Date: 2021-05-27\n\n- **Fledge Core**\n\n    - New Features:\n\n       - Support has been added for Ubuntu 20.04 LTS.\n       - The core components have been ported to build and run on CentOS 8.\n       - A new option has been added to the command line tool that controls the system. This option, called purge, allows all readings-related data to be purged from the system whilst retaining the configuration. This allows a system to be tested and then reset without losing the configuration.\n       - A new service interface has been added to the south service that allows set point control and operations to be performed via the south interface. 
This is the first phase of the set point control feature in the product.\n       - The documentation has been improved to include the new control functionality in the south plugin developers guide.\n       - An improvement has been made to the documentation layout for default plugins to make the GUI able to find the plugin documentation.\n       - Documentation describing the installation of PostgreSQL on CentOS has been updated.\n       - The documentation has been updated to give more detail around the topic of self-signed certificates.\n\n\n    - Bug Fix:\n\n       - A security flaw that allowed non-privileged users to update the certificate store has been resolved.\n       - A bug that prevented users being created with certificate based authentication rather than password based authentication has been fixed.\n       - Switching storage plugins from SQLite to PostgreSQL caused errors in some circumstances. This has now been resolved.\n       - The HTTP code returned by the ping command has been updated to correctly report 401 errors if the option to allow ping without authentication is turned off.\n       - The HTTP error code returned when the notification service is not available has been corrected.\n       - Disabling and re-enabling the backup and restore task schedules sometimes caused a restart of the system. 
This has now been resolved.\n       - The error message returned when schedules could not be enabled or disabled has been improved.\n       - A problem related to readings with nested data not correctly getting copied has been resolved.\n       - An issue that caused problems if a service was deleted and then a new service was recreated using the name of the previously deleted service has been resolved.\n\n\n- **GUI**\n\n    - New Features:\n\n       - Links to the online help have been added on a number of screens in the user interface.\n       - Improvements have been made to the user management screens of the GUI.\n\n\n- **Plugins**\n\n    - New Features:\n\n       - North services now support Python as well as C++ plugins.\n       - A new delivery notification plugin has been added that uses the set point control mechanism to invoke an action in the south plugin.\n       - A new notification delivery mechanism has been implemented that uses the set point control mechanism to assert control on a south service. The plugin allows you to set the values of one or more control items when the notification triggers and set a different set of values when the notification rule clears.\n       - Support has been added in the OPC/UA north plugin for array data. This allows FFT spectrum data to be represented in the OPC/UA server.\n       - The documentation for the OPC/UA north plugin has been updated to recommend running the plugin as a service.\n       - A new storage plugin has been added that uses SQLite. 
This is designed for situations with low bandwidth sensors and stores all the readings within a single SQLite file.\n       - Support has been added to use RTSP video streams in the person detection plugin.\n       - The delta filter has been updated to allow an optional set of asset specific tolerances to be added in addition to the global tolerance used by the plugin when deciding to forward data.\n       - The Python script run by the MQTT scripted plugin now receives the topic as well as the message.\n       - The OMF plugin has been updated in line with recommendations from the OMF group regarding the use of CSRF defense.\n       - The OMFHint plugin has been updated to support wildcarding of asset names in the rules for the plugin.\n       - New documentation has been added to help in troubleshooting PI connection issues.\n       - The pi_server and ocs north plugins are deprecated in favour of the newer and more feature rich OMF north plugin. These deprecated plugins cannot be used in north services and are only provided for backward compatibility when run as north tasks. These plugins will be removed in a future release.\n\n\n    - Bug Fix:\n\n       - The OMF plugin has been updated to better deal with nested data.\n       - Some improvements to error handling have been added to the InfluxDB north plugin for version 1.x of InfluxDB.\n       - The Python 35 filter stated that it always used Python version 3.5; in reality it uses whatever Python 3 version is installed on your system. The documentation has been updated to reflect this.\n       - Fixed a bug that treated arrays of bytes as if they were strings in the OPC/UA south plugin.\n       - The HTTP North C plugin would not shut down correctly; this affected reconfiguration when run as an always-on service. 
This issue has now been resolved.\n       - An issue with the SQLite in-memory storage plugin that caused database locks under high load conditions has been resolved.\n\n\nv1.9.0\n-------\n\nRelease Date: 2021-02-19\n\n- **Fledge Core**\n\n    - New Features:\n\n       - Support has been added in the Python north sending process for nested JSON reading payloads.\n       - A new section has been added to the documentation to document the process of writing a notification delivery plugin. As part of this documentation a new delivery plugin has also been written which delivers notifications via an MQTT broker.\n       - The plugin developers guide has been updated with information regarding installation and debugging of new plugins.\n       - The developer documentation has been updated to include details for writing both C++ and Python filter plugins.\n       - An always-on north service has been added. This complements the current north task and allows a choice of using scheduled windows to send data north or sending data as soon as it is available.\n       - The Python north sending process previously required JQ filter information to be present in north plugins; this is no longer mandatory. JQ filtering has been deprecated and will be removed in the next major release.\n       - Storage plugins may now have configuration options that are controllable via the API and the graphical interface.\n       - The ping API call has been enhanced to return the version of the core component of the system.\n       - The SQLite storage plugin has been enhanced to distribute readings for multiple assets across multiple databases. 
This improves the ingest performance and also improves the responsiveness of the system when very large numbers of readings are buffered within the instance.\n       - Documentation has been added for configuration of the storage service.\n\n\n    - Bug Fix:\n\n       - The REST API for the notification service was missing the re-trigger time information for configured notifications in the retrieval and update calls. This has now been added.\n       - If the SQLite storage plugin is configured to use managed storage Fledge fails to restart. This has been resolved; the SQLite storage service no longer uses the managed option and will ignore it if set.\n       - An upgraded version of the HTTPS library has been applied; this solves an issue with large payloads in HTTPS exchanges.\n       - A number of Python source files contained incorrect references to the readthedocs page. This has now been resolved.\n       - The retrieval of log information was incorrectly including debug log output if the requested level was information and higher. This is now correctly filtered out.\n       - If a south plugin generates bad data that cannot be inserted into the storage layer, that plugin will buffer the bad data forever and continually attempt to insert it. This causes the queue to build on the south plugin and will eventually exhaust system memory. To prevent this, if data cannot be inserted for a number of attempts it will be discarded in the south service. This allows the bad data to be dropped and newer, good data to be handled correctly.\n       - When a statistics value became greater than 2,147,483,648 the storage layer would fail; this has now been fixed.\n       - During installation of plugins the user interface would occasionally flag the system as down due to congestion in the API layer. 
This has now been resolved and the correct status of the system should be reflected.\n       - The notification service previously logged errors if no rule/delivery notification plugins had been installed. This is no longer the case.\n       - An issue with JSON configuration options that contained escaped strings within the JSON caused the service with the associated configuration to fail to run. This has now been resolved.\n       - The Postgres storage engine limited the length of asset codes to 50 characters, this has now been increased to 255 characters.\n       - Notifications based on asset names that contain the character '.' in the name would not receive any data. This has now been resolved.\n\n    - Known Issues:\n\n       - Known issues with Postgres storage plugins. During the final testing of the 1.9.0 release a problem has been found with switching to the PostgreSQL storage plugin via the user interface. Until this is resolved, switching to PostgreSQL is only supported by manually editing the storage.json as per version 1.8.0. A patch to resolve this is likely to be released in the near future.\n\n\n- **GUI**\n\n    - New Features:\n\n       - The user interface now shows the retrigger time for a notification.\n       - The user interface now supports adding a north service as well as a north task.\n       - A new help menu item has been added to the user interface which will cause the readthedocs documentation to be displayed. Also the wizard to add the south and north services has been enhanced to give an option to display the help for the plugins.\n\n\n    - Bug Fix:\n\n       - The user interface now supports the ability to filter on all severity levels when viewing the system log.\n\n\n- **Plugins**\n\n    - New Features:\n\n       - The OPC/UA south plugin has been updated to allow the definition of the minimum reporting time between updates. 
It has also been updated to support subscription to arrays and DATE_TIME type with the OPC/UA server.\n       - AWS SiteWise requires the SourceTimestamp to be non-null when reading from an OPC/UA server. This was not always the case with the OPC/UA north plugin and caused issues when ingesting data into SiteWise. This has now been corrected such that SourceTimestamp is correctly set in addition to the server timestamp.\n       - The HTTP-C north plugin has been updated to support primary and secondary destinations. It will automatically fail over to the secondary if the primary becomes unavailable. Fail back will occur either when the secondary becomes unavailable or the plugin is restarted.\n\n\n    - Bug Fix:\n\n       - An issue with different versions of the libmodbus library prevented the modbus-c plugin building on Moxa gateways, this has now been resolved.\n       - An issue with building the MQTT notification plugin on CentOS/RedHat platforms has been resolved. This plugin now builds correctly on those platforms.\n       - The modbus plugin has been enhanced to support Modbus over IPv6, and a request timeout has been added as a configuration option. There have also been improvements to the error handling.\n       - The DNP3 south plugin incorrectly treated all data as strings, this meant it was not easy to process the data with generic plugins. This has now been resolved and data is treated as floating point or integer values.\n       - The OMF north plugin previously reported the incorrect version information. This has now been resolved.\n       - A memory issue with the python35 filter integration has been resolved.\n       - Packaging conflicts between plugins that used the same additional libraries have been resolved to allow both plugins to be installed on the same machine. 
This issue impacted the plugins that used MQTT as a transport layer.\n       - The OPC/UA north plugin did not correctly handle the types for integer data, this has now been resolved.\n       - The OPCUA south plugin did not allow subscriptions to integer node IDs. This has now been added.\n       - A problem with reading multiple modbus input registers into a single value has been resolved in the ModbusC plugin.\n       - OPC/UA north nested objects did not always generate unique node IDs in the OPC/UA server. This has now been resolved.\n\n\nv1.8.2\n-------\n\nRelease Date: 2020-11-03\n\n- **Fledge Core**\n\n    - Bug Fix:\n\n      - Following the release of a new version of a Python package the 1.8.1 release was no longer installable. This issue is resolved by the 1.8.2 patch release of the core package. All plugins from the 1.8.1 release will continue to work with the 1.8.2 release.\n\n\nv1.8.1\n-------\n\nRelease Date: 2020-07-08\n\n- **Fledge Core**\n\n    - New Features:\n\n       - Support has been added for the deployment on Moxa gateways running a variant of Debian 9 Stretch.\n       - The purge process has been improved to also purge the statistics history and audit trail of the system. New configuration parameters have been added to manage the amount of data to be retained for each of these.\n       - An issue with installing on the Mendel Day release on Google’s Coral boards has been resolved.\n       - The REST API has been expanded to allow an API call to be made to set the repository from which new packages will be pulled when installing plugins via the API and GUI.\n       - A problem with the service discovery failing to respond correctly after it had been running for a short while has been rectified. 
This allows external micro services to now correctly discover the core micro service.\n       - Details for making contributions to the Fledge project have been added to the source repository.\n       - The support bundle has been improved to include more information needed to diagnose issues with sending data to PI Servers.\n       - The REST API has been extended to add a new call that will return statistics in terms of rates rather than absolute values.\n       - The documentation has been updated to include guidance on setting up package repositories for installing the software and plugins.\n\n\n    - Bug Fix:\n\n       - If JSON type configuration parameters were marked as mandatory, there was an issue that prevented the update of the parameters. This has now been resolved.\n       - After changing the storage engine from sqlite to Postgres using the configuration option in the GUI or via the API, the new storage engine would incorrectly report itself as sqlite in the API and user interface. This has now been resolved.\n       - External micro-services that restarted without a graceful shutdown would fail to register with the service registry as nothing was able to unregister the failed service. This has now been relaxed to allow the recovered service to be correctly registered.\n       - The configuration of the storage system was previously not available via the GUI. This has now been resolved and the configuration can be viewed in the Advanced category of the configuration user interface. Any changes made to the storage configuration will only take effect on the next restart of Fledge. 
This allows administrators to change the storage plugins used without the need to edit the storage.json configuration file.\n\n\n- **GUI**\n\n    - Bug Fix:\n\n       - An improvement to the user experience for editing passwords in the GUI has been implemented that stops the issue with passwords disappearing if the input field is clicked.\n       - Password validation was not correctly occurring in the GUI wizard that adds south plugins. This has now been rectified.\n\n\n- **Plugins**\n\n    - New Features:\n\n       - The Modbus plugin did not gracefully handle interrupted reads of data from Modbus TCP devices during the bulk transfer of data. This would result in assets missing certain data points and subsequent issues in the north systems that received those assets getting changes in the asset data type. This was a particular issue when dealing with the PI Web API and would result in excessive types being created. The Modbus plugin now detects the issues and takes action to ensure complete assets are read.\n       - A new image processing plugin, south human detector, that uses the Google TensorFlow machine learning platform has been added to the Fledge-iot project.\n       - A new Python plugin has been added that can send data north to a Kafka system.\n       - A new south plugin has been added for the Dynamic Ratings B100 Electronic Temperature Monitor used for monitoring the condition of electricity transformers.\n       - A new plugin has been contributed to the project by Nexcom that implements the SAE J1708 protocol for accessing the ECUs of heavy duty vehicles.\n       - An issue with missing dependencies on the Coral Mendel platform prevented 1.8.0 packages from installing correctly without manual intervention. 
This has now been resolved.\n       - The image recognition plugin, south-human-detector, has been updated to work with the Google Coral board running the Mendel Day release of Linux.\n\n\n    - Bug Fix:\n\n       - A missing dependency in v1.8.0 release for the package fledge-south-human-detector meant that it could not be installed without manual intervention. This has now been resolved.\n       - Support has been added to the south-human-detector plugin for the Coral Camera module in addition to the existing support for USB connected cameras.\n       - An issue with installation of the external shared libraries required by the USB4704 plugin has been resolved.\n\n\nv1.8.0\n-------\n\nRelease Date: 2020-05-08\n\n- **Fledge Core**\n\n    - New Features:\n\n       - Documentation has been added for the use of the SQLite in-memory storage plugin.\n       - The support bundle functionality has been improved to include more detail in order to aid tracking down issues in installations.\n       - Improvements have been made to the documentation of the OMF plugin in line with the enhancements to the code. This includes the documentation of OCS and EDS support as well as PI Web API.\n       - An issue with forwarding data between two Fledge instances in different time zones has been resolved.\n       - A new API entry point has been added to the Fledge REST API to allow the removal of plugin packages.\n       - The notification service has been updated to allow for the delivery of multiple notifications in parallel.\n       - Improvements have been made to the handling of asset codes within the buffer in order to improve the ingest performance of Fledge. 
This is transparent to all services outside of the storage service and has no impact on the public APIs.\n       - Extra information has been added to the notification trigger such that the trigger time and the asset that triggered the notification are included.\n       - A new configuration item type of “northTask” has been introduced. It allows the user to enter the name of a northTask in the configuration of another category within Fledge.\n       - Data on multiple assets may now be requested in a single call to the asset browsing API within Fledge.\n       - An additional API has been added to the asset browser to allow time bucketed data to be returned for multiple data points of multiple assets in a single call.\n       - Support has been added for nested readings within the reading data.\n       - Messages about exceeding the configured latency of the south service may be repeated when the latency is above the configured value for a period of time. These have now been replaced with a single message when the latency is exceeded and another when the condition is cleared.\n       - The feedback provided to the user when a configuration item is set to an invalid value has been improved.\n       - Configuration items can now be marked as mandatory, this improves the user experience when configuring plugins.\n       - A new configuration item type, code, has been added to improve the user experience when adding code snippets in configuration data.\n       - Improvements have been made to the caching of configuration data within the core of Fledge.\n       - The logging of package installation has been improved.\n       - Additions have been made to the public API to allow multiple audit log sources to be extracted in a single API call.\n       - The audit trail has been improved to show all package additions and updates.\n       - A new API has been added to allow notification plugin packages to be updated.\n       - A new API has been added to 
allow filter code versions to be updated.\n       - A new API call has been added to allow retrieval of reading data over a period of time which is averaged into time buckets within that time period.\n       - The notification service now supports rule plugins implemented in Python as well as C++.\n       - Improvements have been made to the checking of configuration items such that minimum, maximum values and string lengths are now checked.\n       - The plugin developers documentation has been updated to include a description of building C/C++ south plugins.\n\n\n    - Bug Fix:\n\n       - Improvements have been made to the generation of the support bundle.\n       - An issue in the reporting of the task names in the fledge status script has been resolved.\n       - The purge by size (number of readings) would remove all data if the number of rows to retain was less than 1000, this has now been resolved.\n       - On occasion plugins would disappear from the list of available plugins, this has now been resolved.\n       - Improvements have been made to the management of the certificate store to ensure the correct files are uploaded to the store.\n       - An expensive and unnecessary test was being performed in the asset browsing API of Fledge. This slowed down the user interface and put load on the server. This has now been removed and has improved the performance of examining the buffered data within the Fledge instance.\n       - The FogBench utility used to send data to Fledge has been updated in line with new Python packages for the CoAP protocol.\n       - Configuration category relationships were not always correctly cleaned up when a filter was deleted, this has now been resolved.\n       - The support bundle functionality has been updated to provide information on the Python processes.\n       - The REST API incorrectly allowed configuration categories with a blank name to be created. 
This has now been prevented.\n       - Validation of minimum and maximum configuration item values was not correctly performed in the REST API, this has now been resolved.\n       - Nested objects within readings could cause the storage engine to fail and those readings to not be stored. This has now been resolved.\n       - On occasion shutting down a service may fail if the filters for that service have not been activated, this has now been resolved.\n       - An issue affecting notifications for assets whose names contain special characters has been resolved.\n       - Entries were not always correctly added to the asset tracker, this has now been resolved.\n       - An intermittent issue that prevented the notification service being enabled on the Buster release on Raspberry Pi has been resolved.\n       - An intermittent problem that would cause the north sending process to fail has been resolved.\n       - Performance improvements have been made to the installation of new packages from the package repository from within the Fledge API and user interface.\n       - It is now possible to reuse the name of a north process after deleting one with the same name.\n       - The incorrect HTTP error code was returned by the asset summary API call if an asset did not exist, this has now been resolved.\n       - Deleting and recreating a south service may cause errors in the log to appear. These have now been resolved.\n       - The SQLite and SQLiteInMemory storage engines have been updated to enable a purge to be defined that reduces the number of readings to a specified value rather than simply allowing a purge by the age of the data. This is designed to allow tighter controls on the size of the buffer database when high frequency data in particular is being stored within the Fledge buffer.\n\n\n- **GUI**\n\n    - New Features:\n\n       - The user interface for viewing logs has been improved to allow filtering by service and task.  
A search facility has also been added.\n       - The requirement that a key file is uploaded with every certificate file has been removed from the graphical user interface as a key file is not always required.\n       - The performance of adding a new notification via the graphical user interface has been improved.\n       - The feedback in the graphical user interface has been improved when installation of the notification service fails.\n       - Installing the Fledge graphical user interface on OSX platforms failed due to the new version of the brew package manager. This has now been resolved.\n       - Improved script editing has been added to the graphical user interface.\n       - Improvements have been made to the user interface for the installation and enabling of the notification service.\n       - The notification audit log user interface has been improved in the GUI to allow all the logs relating to notifications to be viewed in a single screen.\n       - The user interface has been redesigned to make better use of the screen space when editing south and north services.\n       - Support has been added to the graphical user interface to determine when configuration items are not valid based on the values of other items. These items that are not valid in the current configuration are greyed out in the interface.\n       - The user interface now shows the version of the code in the settings page.\n       - Improvements have been made to the user interface layout to force footers to stay at the bottom of the screen.\n\n\n    - Bug Fix:\n\n       - Improvements have been made to the zoom and pan options within the graph displays.\n       - The wizard used for the creation of new notifications in the graphical user interface would lose values when going back and forth between pages, this has now been resolved.\n       - A memory leak that was affecting the performance of the graphical user interface has been fixed, improving performance of the interface.\n       - 
Incorrect category names may be displayed in the graphical user interface, this has now been resolved.\n       - Issues with the layout of the graphical user interface when viewed on an Apple iPad have been resolved.\n       - The asset graph in the graphical user interface would sometimes not resize to fit the screen correctly, this has now been resolved.\n       - The “Asset & Readings” option in the graphical user interface was initially slow to respond, this has now been improved.\n       - The pagination of audit logs has been improved when multiple sources are displayed.\n       - The counts in the user interface for notifications have been corrected.\n       - Asset data graphs were not able to correctly handle the transition between one day and the next. This is now resolved.\n\n\n- **Plugins**\n\n    - New Features:\n\n       - The existing set of OMF north plugins have been rationalised and replaced by a single OMF north plugin that is able to support the Connector Relay, PI Web API, EDS and OCS.\n       - When a Modbus TCP connection is closed by the remote end we fail to read a value, we then reconnect and move on to read the next value. On devices with short timeout values, smaller than the poll interval, we fail the same reading every time and never get a value for that reading. The behaviour has been modified to allow us to retry reading the original value after re-establishing the connection.\n       - The OMF north plugin has been updated to support the released version of the OSIsoft EDS product as a destination for data.\n       - New functionality has been added to the north data to PI plugin when using PI Web API that allows the location in the PI Server AF hierarchy to be defined. A default location can be set and an override based on the asset name or metadata within the reading. 
The data may also be placed in multiple locations within the AF hierarchy.\n       - A new notification delivery plugin has been added that allows a north task to be triggered to send data for a period of time either side of the notification trigger event. This allows conditional forwarding of large amounts of data when a trigger event occurs.\n       - The asset notification delivery plugin has been updated to allow creation of new assets both for notifications that are triggered and/or cleared.\n       - The rate filter now allows the termination of sending full rate data either by use of an expression or by specifying a time in milliseconds.\n       - A new simple Python filter has been added that calculates an exponential moving average.\n       - Some typos in the OPCUA south and north plugin configuration have been fixed.\n       - The OPCUA north plugin has been updated to support nested reading objects correctly and also to allow a name to be set for the OPCUA server. There have also been some stability fixes in the underlying OPCUA layer used by this and the south OPCUA plugin.\n       - The modbus map configuration now supports byte swapping and word swapping by use of the {{swap}} property of the map. 
This may take the values {{bytes}}, {{words}} or {{both}}.\n       - The people detection machine learning plugin now supports RTSP streams as input.\n       - The option list items in the OMF plugin have been updated to make them more user-friendly and descriptive.\n       - The threshold notification rule has been updated such that the unused fields in the configuration now correctly grey out in the GUI dependent upon the setting of the window type or single item asset validation.\n       - The configuration of the OMF north plugin for connecting to the PI Server has been improved to give the user better feedback as to what elements are valid based on the choice of connection method and security options chosen.\n       - Support has been added for simple Python code to be entered into a filter that does not require all of the support code. This is designed to allow a user to very quickly develop filters with limited programming.\n       - Support has been added for filters written entirely in Python, these are full featured filters as supported by the C++ filtering mechanism and include dynamic reconfiguration.\n       - The fledge-filter-expression filter has been modified to better deal with streams which contain multiple assets. It is now possible to use the syntax <assetName>.<datapointName> in an expression in addition to the previous <datapointName>. The result is that if two assets in the data stream have the same data point names it is now possible to differentiate between them.\n       - A new plugin to collect variables from Beckhoff PLCs has been written. 
The plugin uses the TwinCAT 2 or TwinCAT 3 protocols to collect specified variables from the running PLC.\n\n\n    - Bug Fix:\n\n       - An issue in the sending of data to the PI server with large values has been resolved.\n       - The playback south plugin was not correctly replaying timestamps within the file, this has now been resolved.\n       - Use of the asset filter in a north task could result in the north task terminating. This has now been resolved.\n       - A small memory leak in the south service statistics handling code was impacting the performance of the south service, this is now resolved.\n       - An issue has been discovered in the Flir camera plugin with the validity attribute of the spot temperatures, this has now been resolved.\n       - It was not possible to send data for the same asset from two different Fledge instances into the PI Server using PI Web API, this has now been resolved.\n       - The filter Fledge RMS Trigger was not able to be dynamically reconfigured, this has now been resolved.\n       - If a filter in the north sending process increased the number of readings it was possible that the limit on the number of readings sent in a single block could be exceeded. The sending process will now ensure this can not happen.\n       - The RMS filter plugin was not able to be dynamically reconfigured, this has now been resolved.\n       - The HTTP South plugin that is used to receive data from another Fledge instance may fail with some combinations of filters applied to the service. This issue has now been resolved.\n       - The rule filter may give errors if expressions have variables not satisfied in the reading data. Under some circumstances it has been seen that the filter fails to process data after giving this error. This has been resolved by changes to make the rate filter more robust.\n       - Blank values for asset names in the south service may cause the service to become unresponsive. 
Blank asset names have now been correctly detected, asset names are required configuration values.\n       - A new version of the driver software for the USB-4704 Data Acquisition Module has been released, the plugin has been updated to use this driver version.\n       - The OPCUA North plugin might report incorrect counts for sent readings on some platforms, this has now been resolved.\n       - The simple Python filter plugin was not adding correct asset tracking data, this has now been updated.\n       - An issue with the asset filter failing when incorrect configuration was present has been resolved.\n       - The benchmark plugin now enforces a minimum number of assets of 1.\n       - The OPCUA plugins are now available on the Raspberry Pi Buster platform.\n       - Errors that prevented the use of the Postgres storage plugin have been resolved.\n\n\nv1.7.0\n-------\n\nRelease Date: 2019-08-15\n\n- **Fledge Core**\n\n    - New Features:\n\n       - Added support for Raspbian Buster.\n       - Additional, optional flow control has been added to the south service to prevent it from overwhelming the storage service. This is enabled via the throttling option in the south service advanced configuration.\n       - The mechanism for including JSON configuration in C++ plugins has been improved and the macros for the inline coding moved to a standard location to prevent duplication.\n       - An option has been added that allows the system to be updated to the latest version of the system packages prior to installing a new plugin or component.\n       - Fledge now supports password type configuration items. 
This allows passwords to be hidden from the user in the user interface.\n       - A new feature has been added that allows the logs of plugin or other package installation to be retrieved.\n       - Installation logs for package installations are now retained and available via the REST API.\n       - A mechanism has been added that allows plugins to be marked as deprecated prior to the removal of these plugins in future releases. Running a deprecated plugin will result in a warning being logged, but otherwise the plugin will operate as normal.\n       - The Fledge REST API has been updated to add a new entry point that will allow a plugin to be updated from the package repository.\n       - An additional API has been added to fetch the set of installed services within a Fledge installation.\n       - An API has been added that allows the caller to retrieve the list of plugins that are available in the Fledge package repository.\n       - The /fledge/plugins REST API has been extended to allow plugins to be installed from an APT/RPM repository.\n       - Addition of support for hybrid plugins. A hybrid plugin is a JSON file that defines another plugin to load along with some default configuration for that plugin. This gives a means to create a new plugin by customising the configuration of an existing plugin. 
An example might be a plugin for a specific modbus device type that uses the generic modbus plugin and a predefined modbus map.\n       - The notification service has been improved to allow the re-trigger time of a notification to be defined by the user on a per notification basis.\n       - A new environment variable, FLEDGE_PLUGIN_PATH, has been added to allow plugins to be stored in multiple locations or locations outside of the usual Fledge installation directory.\n       - Added support for the FLEDGE_PLUGIN_PATH environment variable, which is used for searching additional directory paths for plugins/filters to use with Fledge.\n       - Fledge packages for the Google Coral Edge TPU development board have been made available.\n       - Support has been added to the OMF north plugin for the PI Web API OMF endpoint. The PI Server functionality to support this is currently in beta test.\n\n    - Bug Fix/Improvements:\n\n       - An issue with the notification service becoming unresponsive on the Raspberry Pi Buster release has been resolved.\n       - A debug message was being incorrectly logged as an error when adding a Python south plugin. The message level has now been corrected.\n       - A problem whereby not all properties of configuration items are updated when a new version of a configuration category is installed has been fixed.\n       - The notification service was not correctly honouring the notification types for one shot, toggled and retriggered notifications. This has now been brought in line with the documentation.\n       - The system log was becoming flooded with messages from the plugin discovery utility. This utility now logs at the correct level and only logs errors and warnings by default.\n       - Improvements to the REST API allow for selective sets of statistic history to be retrieved. 
This reduces the size of the returned result set and improves performance.\n       - The order in which filters are shutdown in a pipeline of filters has been reversed to resolve an issue regarding releasing Python interpreters, under some circumstances shutdowns of later filters would fail if multiple Python filters were being used.\n       - The output of the `fledge status` command was corrupt, showing random text after the number of seconds for which fledge has been up. This has now been resolved.\n\n- **GUI**\n\n    - New Features:\n\n       - A new log option has been added to the GUI to show the logs of package installations.\n       - It is now possible to edit Python scripts directly in the GUI for plugins that load Python snippets.\n       - A new log retrieval option has been added to the GUI that will show only notification delivery events. This makes it easier for a user to see what notifications have been sent by the system.\n       - The GUI asset graphs have been improved such that multiple tabs are now available for graphing and tabular display of asset data.\n       - The GUI menu has been reordered to move the Notifications entry below the South and North entries.\n       - Support has been added to the Fledge GUI for entry of password fields. Data is obfuscated as it is entered or edited.\n       - The GUI now shows plugin name and version for each north task defined.\n       - The GUI now shows the plugin name and version for each south service that is configured.\n       - The GUI has been updated such that it can install new plugins from the Fledge package repository for south services and north tasks. A list of available packages from the repository is displayed to allow the user to pick from that list. 
The Fledge instance must have connectivity to the package repository to allow this feature to succeed.\n       - The GUI now supports using certificates to authenticate with the Fledge instance.\n\n    - Bug Fix/Improvements:\n\n       - Improved editing of JSON configuration entities in the configuration editor.\n       - Improvements have been made to the asset browser graphs in the GUI to make better use of the available space to show the graph itself.\n       - The GUI was incorrectly showing Fledge as down in certain circumstances, this has now been resolved.\n       - An issue in the edit dialog for the north plugin which sometimes prevented the enabled state from being correctly modified has been resolved.\n       - Exported CSV data from the GUI would sometimes be missing column headers, these are now always present.\n       - The exporting of data as a CSV file in the GUI has been improved such that it no longer outputs the readings as a block of JSON, but rather individual columns. This allows the data to be imported into a spreadsheet with ease.\n       - Missing help text has been added for notification trigger and enabled elements.\n       - A number of issues in the filter configuration editor have been resolved. These issues meant that sometimes new values were not honoured or when changes were made with multiple filters in a chain only one filter would be updated.\n       - Under some rare circumstances the GUI asset graph may show incorrect dates, this issue has now been resolved.\n       - The Fledge GUI build and start commands did not work on Windows platforms, preventing it from being built and run on those platforms. This has now been resolved and the Fledge GUI can be built and run on Windows platforms.\n       - The GUI was not correctly interpreting the value of the readonly attribute of configuration items when the value was anything other than true. 
This has been resolved.\n       - The Fledge GUI RPM package had an error that caused installation to fail on some systems; this is now resolved.\n\n- **Plugins**\n\n    - New Features:\n\n       - A new filter has been created that looks for changes in values and only sends full rate data around the time of those changes. At other times the filter can be configured to send reduced rate averages of the data.\n       - A new rule plugin has been implemented that will create notifications if the value of a data point moves more than a defined percentage from the average for that data point. A moving average for each data point is calculated by the plugin; this may be a simple average or an exponential moving average.\n       - A new south plugin has been created that supports the DNP3 protocol.\n       - A south plugin has been created based on the Google TensorFlow people detection model. It uses a live feed from a video camera and returns data regarding the number of people detected and the position within the frame.\n       - A south plugin based on the Google TensorFlow demo model for people recognition has been created. The plugin reads an image from a file and returns the co-ordinates of the people it detects within the image.\n       - A new north plugin has been added that creates an OPCUA server based on the data ingested by the Fledge instance.\n       - Support has been added for a Flir Thermal Imaging Camera connected via Modbus TCP. Both a south plugin to gather the data and a filter plugin, to clean the data, have been added.\n       - A new south plugin has been created based on the Google TensorFlow demo model that accepts a live feed from a Raspberry Pi camera and classifies the images.\n       - A new south plugin has been created based on the Google TensorFlow demo model for object detection. 
The plugin returns object count, name, position and confidence data.\n       - The change filter has been made available on CentOS and RedHat 7 releases.\n\n    - Bug Fix/Improvements:\n\n       - Support for reading floating point values in a pair of 16 bit registers has been added to the modbus plugin.\n       - Improvements have been made to the performance of the modbus plugin when large numbers of contiguous registers are read. Also the addition of support for floating point values in modbus registers.\n       - Flir south service has been modified to support the Flir camera range as currently available, i.e. a maximum of 10 areas as opposed to the 20 that were previously supported. This has improved performance, especially on low performance platforms.\n       - The python35 filter plugin did not allow the Python code to add attributes to the data. This has now been resolved.\n       - The playback south plugin did not correctly take the timestamp data from the CSV file. An option is now available that will allow this.\n       - The rate filter has been enhanced to accept a list of assets that should be passed through the filter without having the rate of those assets altered.\n       - The filter plugin python35 crashed on the Buster release on the Raspberry Pi; this has now been resolved.\n       - The FFT filter now enforces that the number of samples must be a power of 2.\n       - The ThingSpeak north plugin was not updated in line with changes to the timestamp handling in Fledge; this resulted in a crash when it tried to send data to ThingSpeak. This has been resolved and the cause of the crash also fixed such that now an error will be logged rather than the task crashing.\n       - The configuration of the simple expression notification rule plugin has been simplified.\n       - The DHT 11 plugin mistakenly had a dependency on the Wiring PI package. 
This has now been removed.\n       - The system information plugin was missing a dependency that would cause it to fail to install on systems that did not already have the package it depended on installed. This has been resolved.\n       - The phidget south plugin reconfiguration method would crash the service on occasions; this has now been resolved.\n       - The notification service would sometimes become unresponsive after calling the notify-python35 plugin; this has now been resolved.\n       - The configuration options regarding notification evaluation of single items and windows have been improved to make them less confusing to end users.\n       - The OverMax and UnderMin notification rules have been combined into a single threshold rule plugin.\n       - The OPCUA south plugin was incorrectly reporting itself as the upcua plugin. This is now resolved.\n       - The OPCUA south plugin has been updated to support subscriptions both using browse names and Node Id’s. Node ID is now the default subscription mechanism as this is much higher performance than traversing the object tree looking at browse names.\n       - Shutting down the OPCUA service when it has failed to connect to an OPCUA server, either because of an incorrect configuration or the OPCUA server being down, resulted in the service crashing. The service now shuts down cleanly.\n       - In order to install the fledge-south-modbus package on RedHat Enterprise Linux or CentOS 7 you must have configured the epel repository by executing the command:\n\n         `sudo yum install epel-release`\n\n       - A number of packages have been renamed in order to obtain better consistency in the naming and to facilitate the upgrade of packages from the API and graphical interface to Fledge. This will result in duplication of certain plugins after upgrading to the release. 
This is only an issue if the plugins had been previously installed; these old plugins should be manually removed from the system to alleviate this problem.\n\n         The plugins involved are:\n\n          * fledge-north-http vs fledge-north-http-north\n\n          * fledge-south-http vs fledge-south-http-south\n\n          * fledge-south-Csv vs fledge-south-csv\n\n          * fledge-south-Expression vs fledge-south-expression\n\n          * fledge-south-dht vs fledge-south-dht11V2\n\n          * fledge-south-modbusc vs fledge-south-modbus\n\n\nv1.6.0\n-------\n\nRelease Date: 2019-05-22\n\n- **Fledge Core**\n\n    - New Features:\n\n       - The scope of the Fledge certificate store has been widened to allow it to store .pem certificates and keys for accessing cloud functions.\n       - The creation of a Docker container for Fledge has been added to the packaging options for Fledge in this version of Fledge.\n       - Red Hat Enterprise Linux packages have been made available from this release of Fledge onwards. 
These packages include all the applicable plugins and notification service for Fledge.\n       - The Fledge API now supports the creation of configuration snapshots which can be used to create configuration checkpoints and roll back configuration changes.\n       - The Fledge administration API has been extended to allow the installation of new plugins via API.\n       \n\n    - Improvements/Bug Fix:\n\n       - A bug that prevented multiple Fledge instances on the same network being discoverable via multicast DNS lookup has been fixed.\n       - Set, unset optional configuration attributes\n\n\n- **GUI**\n\n    - New Features:\n       \n       - The Fledge Graphical User Interface now has the ability to show sets of graphs over a time period for data such as the spectrum analysis produced by the Fast Fourier transform filter.\n       - The Fledge Graphical User Interface is now available as an RPM file that may be installed on Red Hat Enterprise Linux or CentOS.\n\n\n    - Improvements/Bug Fix:\n\n       - Improvements have been made to the Fledge Graphical User Interface to allow more control of the time periods displayed in the graphs of asset values.\n       - Some improvements to screen layout in the Fledge Graphical User Interface have been made in order to improve the look and reduce the screen space used in some of the screens.\n       - Improvements have been made to the appearance of dropdown and other elements within the Fledge Graphical User Interface.\n\n\n- **Plugins**\n\n    - New Features:\n       - A new threshold filter has been added that can be used to block onward transmission of data until a configured expression evaluates to true.\n       - The Modbus RTU/TCP south plugin is now available on CentOS 7 and RHEL 7.\n       - A new north plugin has been added to allow data to be sent to the Google Cloud Platform IoT Core interface.\n       - The FFT filter now has an option to output raw frequency spectra. 
Note this cannot be accepted by all north bound systems.\n       - Changed the release status of the FFT filter plugin.\n       - Added the ability in the modbus plugin to define multiple registers that create composite values. For example two 16 bit registers can be put together to make one 32 bit value. This is done using an array of register values in a modbus map, e.g. {\"name\":\"rpm\",\"slave\":1,\"register\":[33,34],\"scale\":0.1,\"offset\":0}. Register 33 contains the low 16 bits of the RPM and register 34 the high 16 bits of the RPM.\n       - Addition of a new Notification Delivery plugin to send notifications to a Google Hangouts chatroom.\n       - A new plugin has been created that uses machine learning based on Google's TensorFlow technology to classify image data and populate derived information to the north side systems. The current TensorFlow model in use will recognise hand written digits and populate those digits. This plugin is currently a proof of concept for machine learning. \n\n\n    - Improvements/Bug Fix:\n       - Removal of unnecessary include directive from Modbus-C plugin.\n       - Improved error reporting for the modbus-c plugin and added documentation on the configuration of the plugin.\n       - Improved the subscription handling in the OPCUA south plugin.\n       - Stability improvements have been made to the notification service; these relate to the handling of dynamic reconfigurations of the notifications.\n       - Removed erroneous default for script configuration option in Python35 notification delivery plugin.\n       - Corrected description of the enable configuration item.\n\n\nv1.5.2\n-------\n\nRelease Date: 2019-04-08\n\n- **Fledge Core**\n\n    - New Features:\n       - Notification service, notification rule and delivery plugins\n       - Addition of a new notification delivery plugin that will create an asset reading when a notification is delivered. 
This can then be sent to any system north of the Fledge instance via the usual mechanisms\n       - Bulk insert support for SQLite and Postgres storage plugins\n\n    - Enhancements / Bug Fix:\n       - Performance improvements for SQLite storage plugin.\n       - Improved performance of data browsing where large datasets have been acquired\n       - Optimized statistics history collection\n       - Optimized purge task\n       - The readings count shown on the GUI south page and the corresponding API endpoints now shows the total readings count and not what is currently buffered by Fledge, so these counts do not reduce when the purge task runs\n       - Static data in the OMF plugin was not being correctly taken from the plugin configuration\n       - Reduced the number of informational log messages being sent to the syslog\n\n\n- **GUI**\n\n    - New Features:\n       - Notifications UI\n\n    - Bug Fix:\n       - Backup creation time format\n\n\nv1.5.1\n-------\n\nRelease Date: 2019-03-12\n\n- **Fledge Core**\n\n    - Bug Fix: plugin loading errors\n\n\n- **GUI**\n\n    - Bug Fix: uptime shows up to 24 hour clock only\n\n\nv1.5.0\n-------\n\nRelease Date: 2019-02-21\n\n- **Fledge Core**\n\n    - Performance improvements and Bug Fixes\n    - Introduction of Safe Mode in case Fledge is accidentally configured to generate so much data that it is overwhelmed and can no longer be managed.\n\n\n- **GUI**\n\n    - re-organization of screens for Health, Assets, South and North\n    - bug fixes\n\n\n- **South**\n\n    - Many Performance improvements, including conversion to C++\n    - Modbus plugin\n    - many other new south plugins\n\n\n- **North**\n\n    - Compressed data via OMF\n    - Kafka\n\n\n- **Filters**: Perform data pre-processing, and allow distributed applications to be built on Fledge.\n\n    - Delta: only send data upon change\n    - Expression: run a complex mathematical expression across one or more data streams\n    - Python: run arbitrary python code to modify a 
data stream\n    - Asset: modify Asset metadata\n    - RMS: Generate new asset with Root Mean Squared and Peak calculations across data streams\n    - FFT (beta): execute a Fast Fourier Transform across a data stream. Valuable for Vibration Analysis\n    - Many others\n\n\n- **Event Notification Engine (beta)**\n \n    - Run rules to detect conditions and generate events at the edge\n    - Default Delivery Mechanisms: email, external script\n    - Fully pluggable, so custom Rules and Delivery Mechanisms can be easily created\n\n\n- **Debian Packages for All Repo's**\n\n\nv1.4.1\n------\n\nRelease Date: 2018-10-10\n\n\n\nv1.4.0\n------\n\nRelease Date: 2018-09-25\n\n\n\nv1.3.1\n------\n\nRelease Date: 2018-07-13\n\n\nFixed Issues\n~~~~~~~~~~~~\n\n- **Open File Descriptors**\n\n  - **open file descriptors**: Storage service did not close open files, leading to multiple open file descriptors\n\n\n\nv1.3\n----\n\nRelease Date: 2018-07-05\n\n\nNew Features\n~~~~~~~~~~~~\n\n- **Python version upgrade**\n\n  - **python 3 version**: The minimal supported python version is now python 3.5.3. 
\n\n- **aiohttp python package version upgrade**\n\n  - **aiohttp package version**: aiohttp (version 3.2.1) and aiohttp_cors (version 0.7.0) are now being used\n  \n- **Removal of south plugins**\n\n  - **coap**: coap south plugin was moved into its own repository https://github.com/fledge-iot/fledge-south-coap\n  - **http**: http south plugin was moved into its own repository https://github.com/fledge-iot/fledge-south-http\n\n\nKnown Issues\n~~~~~~~~~~~~\n\n- **Issues in Documentation**\n\n  - **plugin documentation**: testing Fledge requires the user to first install the necessary southbound plugins (CoAP, http)\n\n\n\nv1.2\n----\n\nRelease Date: 2018-04-23\n\n\nNew Features\n~~~~~~~~~~~~\n\n- **Changes in the REST API**\n\n  - **ping Method**: the ping method now returns uptime, number of records read/sent/purged and whether Fledge requires REST API authentication.\n\n- **Storage Layer**\n\n  - **Default Storage Engine**: The default storage engine is now SQLite. We provide a script to migrate from PostgreSQL in 1.1.1 version to 1.2. PostgreSQL is still available in the main repository and package, but it will be moved to a separate repository in future versions. \n  \n- **Admin and Maintenance Scripts**\n\n  - **fledge status**: the command now shows what the ``ping`` REST method provides.\n  - **setenv script**: a new script has been added to simplify the user interaction. The script is in *$FLEDGE_ROOT/extras/scripts* and it is called *setenv.sh*.\n  - **fledge service script**: a new service script has been added to set up Fledge as a service. 
The script is in *$FLEDGE_ROOT/extras/scripts* and it is called *fledge.service*.\n\n\nKnown Issues\n~~~~~~~~~~~~\n\n- **Issues in the REST API**\n\n  - **asset method response**: the ``asset`` method returns a JSON object with asset code named ``asset_code`` instead of ``assetCode``\n  - **task method response**: the ``task`` method returns a JSON object with unexpected element ``\"exitCode\"``\n\n\nv1.1.1\n------\n\nRelease Date: 2018-01-18\n\n\nNew Features\n~~~~~~~~~~~~\n\n- **Fixed aiohttp incompatibility**: This fix is for the incompatibility of *aiohttp* with *yarl*, discovered in the previous version. The issue has been fixed.\n- **Fixed avahi-daemon issue**: Avahi daemon is a pre-requisite of Fledge; Fledge can now run as a snap or be built from source without the avahi daemon installed.\n\n\nKnown Issues\n~~~~~~~~~~~~\n\n- **PostgreSQL with Snap**: the issue described in version 1.0 still persists, see :ref:`1.0-known_issues` in v1.0.\n\n\nv1.1\n----\n\nRelease Date: 2018-01-09\n\n\nNew Features\n~~~~~~~~~~~~\n\n- **Startup Script**:\n\n  - ``fledge start`` script now checks if the Core microservice has started.\n  - ``fledge start`` creates a *core.err* file in *$FLEDGE_DATA* and writes the stderr there. \n\n\nKnown Issues\n~~~~~~~~~~~~\n\n- **Incompatibility between aiohttp and yarl when Fledge is built from source**: in this version we use *aiohttp 2.3.6* (|1.1 requirements|). This version is incompatible with updated versions of *yarl* (0.18.0+). 
If you intend to use this version, change the requirement for *aiohttp* to version 2.3.8 or higher.\n- **PostgreSQL with Snap**: the issue described in version 1.0 still persists, see :ref:`1.0-known_issues` in v1.0.\n\n\nv1.0\n----\n\nRelease Date: 2017-12-11\n\n\nFeatures\n~~~~~~~~\n\n- All the essential microservices are now in place: *Core, Storage, South, North*.\n- Storage plugins available in the main repository:\n\n  - **Postgres**: The storage layer relies on PostgreSQL for data and metadata\n\n- South plugins available in the main repository:\n\n  - **CoAP Listener**: A CoAP microservice plugin listening to client applications that send data to Fledge\n\n- North plugins available in the main repository:\n\n  - **OMF Translator**: A task plugin sending data to OSIsoft PI Connector Relay 1.0\n\n\n.. _1.0-known_issues:\n\nKnown Issues\n~~~~~~~~~~~~\n\n- **Startup Script**: ``fledge start`` does not check if the Core microservice has started correctly, hence it may report that \"Fledge started.\" when the process has died. As a workaround, check with ``fledge status`` the presence of the Fledge microservices.\n- **Snap Execution on Raspbian**: there is an issue on Raspbian when the Fledge snap package is used. It is an issue with the snap environment; it looks for a shared object to preload on Raspbian, but the object is not available. As a workaround, a superuser should comment out a line in the file */etc/ld.so.preload*. Add a ``#`` at the beginning of this line: ``/usr/lib/arm-linux-gnueabihf/libarmmem.so``. Save the file and you will be able to immediately use the snap.\n- **OMF Translator North Plugin for Fledge Statistics**: in this version the statistics collected by Fledge are not sent automatically to the PI System via the OMF Translator plugin, as it is supposed to be. 
The issue will be fixed in a future release.\n- **Snap installed in an environment with an existing version of PostgreSQL**: the Fledge snap does not check if another version of PostgreSQL is available on the machine. The result may be a conflict between the tailored version of PostgreSQL installed with the snap and the version of PostgreSQL generally available on the machine. You can check if PostgreSQL is installed using the command ``sudo dpkg -l | grep 'postgres'``. All packages should be removed with ``sudo dpkg --purge <package>``.\n\n\n"
  },
  {
    "path": "docs/92_downloads.rst",
    "content": ".. Downloads\n\n.. |br| raw:: html\n\n   <br />\n\n.. Images\n\n.. Links\n\n.. Links in new tabs\n\n.. |fledge repo| raw:: html\n\n   <a href=\"https://github.com/fledge-iot/fledge\" target=\"_blank\">https://github.com/fledge-iot/fledge</a>\n\n.. |fledge prj| raw:: html\n\n   <a href=\"https://github.com/fledge-iot\" target=\"_blank\">https://github.com/fledge-iot</a>\n\n.. |fledge gui| raw:: html\n\n   <a href=\"https://github.com/fledge-iot/fledge-gui\" target=\"_blank\">https://github.com/fledge-iot/fledge-gui</a>\n\n\n.. |downloads| raw:: html\n\n   <a href=\"http://dianomic.com/download-fledge/\">Downloads Page</a>\n\n\n   \n*********\nDownloads\n*********\n\n\nPackages\n========\n\nPackages for a number of different Linux platforms are available for both Intel and Arm architectures via the Dianomic web site's download page.\n\n- |downloads|\n\n\n\nDownload/Clone from GitHub\n==========================\n\nFledge and the Fledge tools are on GitHub. You can view and download them here:\n\n- **Fledge**: This is the main project for the Fledge platform. |br| |fledge repo|\n- **Fledge GUI**: This is an experimental GUI that connects to the Fledge REST API to configure and administer the platform and to retrieve the data buffered in it. |br| |fledge gui|\n \nThere are many south, north, and filter plugins available on github: |br| |fledge prj|\n"
  },
  {
    "path": "docs/KERBEROS.rst",
"content": ".. |br| raw:: html\n\n   <br />\n\n.. Links\n.. _OMF: https://docs.aveva.com/bundle/omf/page/1283981.html\n\nOMF Kerberos Authentication\n***************************\n\nIntroduction\n============\n\nThe bundled OMF north plugin in Fledge can use a number of different authentication schemes when communicating with the various OSIsoft products. The PI Web API method in the `OMF`_ plugin supports the use of a Kerberos scheme.\n\nThe Fledge *requirements.sh* script installs the Kerberos client to allow integration with what Kerberos terminology calls the KDC (the Kerberos server).\n\nPI Server as the North endpoint\n===============================\n\nThe OSI *Connector Relay* allows token authentication while *PI Web API* supports Basic and Kerberos authentication.\n\nThere is more than one possible configuration for Kerberos authentication;\nthe simplest is for the Windows server on which the PI Server runs to also act as the Kerberos server.\n\nWindows Active Directory should be installed and properly configured to allow the Windows server to authenticate Kerberos requests.\n\nNorth plugin\n============\n\nThe North plugin has a set of configurable options that should be changed, using either the Fledge API or the Fledge GUI,\nto select the Kerberos authentication.\n\nThe North plugin supports the configurable option *PIServerEndpoint* to select the target among:\n\n  - Connector Relay\n\n  - PI Web API\n\n  - Edge Data Store\n\n  - OSIsoft Cloud Services\n\nThe *PIWebAPIAuthenticationMethod* option selects the desired authentication method among:\n\n  - anonymous\n  - basic\n  - kerberos\n\nKerberos authentication requires a keytab file; the *PIWebAPIKerberosKeytabFileName* option specifies the name of the file expected under the directory:\n\n.. 
code-block:: console\n\n  ${FLEDGE_ROOT}/data/etc/kerberos\n\n**NOTE:**\n\n- *A keytab is a file containing pairs of Kerberos principals and encrypted keys (which are derived from the Kerberos password). A keytab file allows authentication to various remote systems using Kerberos without entering a password.*\n\nThe *AFHierarchy1Level* option allows you to specify the first level of the hierarchy that will be created in the Asset Framework and will contain the information for the specific\nNorth plugin.\n\n\nFledge server configuration\n============================\nThe server on which Fledge is going to be executed needs to be properly configured to allow Kerberos authentication.\n\nThe following steps are needed:\n\n  - *IP Address resolution for the KDC*\n\n  - *Kerberos client configuration*\n\n  - *Kerberos keytab file setup*\n\nIP Address resolution of the KDC\n--------------------------------\nThe Kerberos server name should be resolved to the corresponding IP address; editing */etc/hosts* is the easiest way. A sample row to add:\n\n.. code-block:: console\n\n\t192.168.1.51    pi-server.dianomic.com pi-server\n\nTry the resolution of the name using the usual *ping* command:\n\n.. 
code-block:: console\n\n\t$ ping -c 1 pi-server.dianomic.com\n\n\tPING pi-server.dianomic.com (192.168.1.51) 56(84) bytes of data.\n\t64 bytes from pi-server.dianomic.com (192.168.1.51): icmp_seq=1 ttl=128 time=0.317 ms\n\t64 bytes from pi-server.dianomic.com (192.168.1.51): icmp_seq=2 ttl=128 time=0.360 ms\n\t64 bytes from pi-server.dianomic.com (192.168.1.51): icmp_seq=3 ttl=128 time=0.455 ms\n\n**NOTE:**\n\n- *the name of the KDC should be the first in the list of aliases*\n\n\nKerberos client configuration\n-----------------------------\nThe server on which Fledge runs acts as a Kerberos client, and the related configuration file should be edited to allow proper identification of the Kerberos server.\nThe information should be added into the */etc/krb5.conf* file in the corresponding section, for example:\n\n.. code-block:: console\n\n\t[libdefaults]\n\t\tdefault_realm = DIANOMIC.COM\n\n\t[realms]\n\t    DIANOMIC.COM = {\n\t        kdc = pi-server.dianomic.com\n\t        admin_server = pi-server.dianomic.com\n\t    }\n\nKerberos keytab file\n--------------------\nThe keytab file should be generated on the Kerberos server and copied to the Fledge server in the directory:\n\n.. code-block:: console\n\n\t${FLEDGE_DATA}/etc/kerberos\n\n**NOTE:**\n\n- if **FLEDGE_DATA** is not set its value should be *$FLEDGE_ROOT/data*.\n\nThe name of the file should match the value of the North plugin option *PIWebAPIKerberosKeytabFileName*, by default *piwebapi_kerberos_https.keytab*\n\n.. code-block:: console\n\n\t$ ls -l ${FLEDGE_DATA}/etc/kerberos\n\t-rwxrwxrwx 1 fledge fledge  91 Jul 17 09:07 piwebapi_kerberos_https.keytab\n\t-rw-rw-r-- 1 fledge fledge 199 Aug 13 15:30 README.rst\n\nThe way the keytab file is generated depends on the type of Kerberos server; in the case of Windows Active Directory this is a sample command:\n\n.. 
code-block:: console\n\n\tktpass -princ HTTPS/pi-server@DIANOMIC.COM -mapuser Administrator@DIANOMIC.COM -pass Password -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -out C:\\Temp\\piwebapi_kerberos_https.keytab\n\nTroubleshooting the Kerberos authentication\n--------------------------------------------\n\n1) check the North plugin configuration, a sample command\n\n.. code-block:: console\n\n    curl -s -S -X GET http://localhost:8081/fledge/category/North_Readings_to_PI | jq \".|{URL,\"PIServerEndpoint\",PIWebAPIAuthenticationMethod,PIWebAPIKerberosKeytabFileName,AFHierarchy1Level}\"\n\n2) check the presence of the keytab file\n\n.. code-block:: console\n\n\t$ ls -l ${FLEDGE_ROOT}/data/etc/kerberos\n\t-rwxrwxrwx 1 fledge fledge  91 Jul 17 09:07 piwebapi_kerberos_https.keytab\n\t-rw-rw-r-- 1 fledge fledge 199 Aug 13 15:30 README.rst\n\n3) verify the reachability of the Kerberos server (usually the PI Server) - Network reachability\n\n.. code-block:: console\n\n    $ ping pi-server.dianomic.com\n    PING pi-server.dianomic.com (192.168.1.51) 56(84) bytes of data.\n    64 bytes from pi-server.dianomic.com (192.168.1.51): icmp_seq=1 ttl=128 time=5.07 ms\n    64 bytes from pi-server.dianomic.com (192.168.1.51): icmp_seq=2 ttl=128 time=1.92 ms\n\nKerberos reachability and keys retrieval\n\n.. code-block:: console\n\n    $ kinit -p HTTPS/pi-server@DIANOMIC.COM\n    Password for HTTPS/pi-server@DIANOMIC.COM:\n    $ klist\n    Ticket cache: FILE:/tmp/krb5cc_1001\n    Default principal: HTTPS/pi-server@DIANOMIC.COM\n\n    Valid starting       Expires              Service principal\n    09/27/2019 11:51:47  09/27/2019 21:51:47  krbtgt/DIANOMIC.COM@DIANOMIC.COM\n        renew until 09/28/2019 11:51:46\n    $\n"
  },
  {
    "path": "docs/Makefile",
    "content": "# Minimal makefile for Sphinx documentation\n#\n\n# You can set these variables from the command line.\nSPHINXOPTS    =\nSPHINXBUILD   = python3 -msphinx\nSPHINXPROJ    = Fledge\nSOURCEDIR     = .\nBUILDDIR      = _build\n\n# Put it first so that \"make\" without argument is like \"make help\".\nhelp:\tfledge_plugins.rst\n\t@$(SPHINXBUILD) -M help \"$(SOURCEDIR)\" \"$(BUILDDIR)\" $(SPHINXOPTS) $(O)\n\n.PHONY: help Makefile generated fledge_plugins.rst plugin_and_services_configuration clean\n\n# Catch-all target: route all unknown targets to Sphinx using the new\n# \"make mode\" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).\n%: Makefile \n\t@$(SPHINXBUILD) -M $@ \"$(SOURCEDIR)\" \"$(BUILDDIR)\" $(SPHINXOPTS) $(O)\n\ngenerated: fledge_plugins.rst plugin_and_services_configuration\n\nfledge_plugins.rst: \n\t@echo Building page with table of plugins\n\t@bash scripts/fledge_plugin_list fledge_plugins.rst $(DOCBRANCH)\n\nplugin_and_services_configuration:\n\t@echo Building plugin and service configuration appendices\n\t@bash scripts/plugin_and_service_documentation $(DOCBRANCH)\n\nclean:\n\t@echo Clean Doc build artifacts\n\t@rm -f fledge_plugins.rst\n\t@rm -rf plugins\n\t@rm -rf services\n\t@rm -rf \"$(BUILDDIR)\"\n"
  },
  {
    "path": "docs/OMF.rst",
    "content": ".. Images\n.. |PI_connect| image:: images/PI_connect.jpg\n.. |PI_connectors| image:: images/PI_connectors.jpg\n.. |PI_token| image:: images/PI_token.jpg\n.. |omf_plugin_pi_web_config| image:: images/omf-plugin-pi-web.jpg\n.. |omf_plugin_connector_relay_config| image:: images/omf-plugin-connector-relay.jpg\n.. |omf_plugin_eds_config| image:: images/omf-plugin-eds.jpg\n.. |omf_plugin_ocs_config| image:: images/omf-plugin-ocs.jpg\n.. |omf_plugin_adh_config| image:: images/omf-plugin-adh.jpg\n.. |OMF_AF| image:: images/OMF_AF.jpg\n.. |OMF_Auth| image:: images/OMF_Auth.jpg\n.. |OMF_Cloud| image:: images/OMF_Cloud.jpg\n.. |OMF_Connection| image:: images/OMF_Connection.jpg\n.. |OMF_Default| image:: images/OMF_Default.jpg\n.. |OMF_Format| image:: images/OMF_Format.jpg\n.. |OMF_Endpoints| image:: images/OMF_Endpoints.jpg\n.. |OMF_StaticData| image:: images/OMF_StaticData.jpg\n.. |ADH_Regions| image:: images/ADH_Regions.jpg\n\n.. Links\n.. |OMFHint filter plugin| raw:: html\n\n   <a href=\"../fledge-filter-omfhint/index.html\">OMFHint filter plugin</a>\n\n.. |OMF North Troubleshooting| raw:: html\n\n   <a href=\"../../troubleshooting_pi-server_integration.html\">Troubleshooting the PI Server integration</a>\n\nOMF End Points\n--------------\n\nThe OMF Plugin within Fledge supports a number of different OMF Endpoints for sending data out of Fledge.\n\nPI Web API OMF Endpoint\n~~~~~~~~~~~~~~~~~~~~~~~\n\nTo use the PI Web API OMF endpoint first ensure the OMF option was included in your PI Server when it was installed.  \n\nNow go to the Fledge user interface, create a new North instance and select the “OMF” plugin on the first screen.\nIn the second screen select the PI Web API as the OMF endpoint.\n\nAVEVA Data Hub\n~~~~~~~~~~~~~~\n\nThe cloud service from AVEVA that allows you to store your data in the AVEVA cloud.\n\n.. 
_Edge_Data_Store:\n\nEdge Data Store OMF Endpoint\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nTo use the OSIsoft Edge Data Store first install Edge Data Store on the same machine as your Fledge instance. It is a limitation of Edge Data Store that it must reside on the same host as any system that connects to it with OMF.\n\n\n.. _Connector_Relay:\n\nPI Connector Relay\n~~~~~~~~~~~~~~~~~~\n\n**The PI Connector Relay has been discontinued by OSIsoft.**\nAll new deployments should use the PI Web API endpoint.\nExisting installations will still be supported.\nThe PI Connector Relay was the original mechanism by which OMF data could be ingested into a PI Server.\nTo use the PI Connector Relay, open and sign into the PI Relay Data Connection Manager.\n\n+-----------------+\n| |PI_connectors| |\n+-----------------+\n\nTo add a new connector for the Fledge system, click on the drop down menu to the right of \"Connectors\" and select \"Add an OMF application\".  Add and save the requested configuration information.\n\n+--------------+\n| |PI_connect| |\n+--------------+\n\nConnect the new application to the PI Connector Relay by selecting the new Fledge application, clicking the check box for the PI Connector Relay and then clicking \"Save Configuration\".\n\n+------------+\n| |PI_token| |\n+------------+\n\nFinally, select the new Fledge application. Click \"More\" at the bottom of the Configuration panel. Make note of the Producer Token and Relay Ingress URL.\n\nNow go to the Fledge user interface, create a new North instance and select the “OMF” plugin on the first screen. Continue with the configuration, choosing the connector relay as the end point to be connected.\n\nOSIsoft Cloud Services\n~~~~~~~~~~~~~~~~~~~~~~\n\nThe original cloud service from OSIsoft; this has now been superseded by AVEVA Data Hub, and should only be used to support existing workloads. 
All new installations should use AVEVA Data Hub.\n\nConfiguration\n-------------\n\nThe configuration of the plugin is split into a number of tabs in order to reduce the size of each set of values to enter. Each tab contains a set of related items.\n\n  - **Basic**: This tab contains the base set of configuration items that are most commonly changed.\n\n  - **Asset Framework**: The configuration that determines the location within the Asset Framework in which the data will be placed.\n\n  - **Authentication**: The configuration required to authenticate with the OMF end point.\n\n  - **Cloud**: Configuration specific to using the cloud end points for OCS and ADH.\n\n  - **Connection**: This tab contains the configuration items that can be used to tune the connection to the OMF end point.\n\n  - **Formats & Types**: The configuration relating to how types are used and formatted with the OMF data.\n\n  - **Advanced Configuration**: Configuration of the service or task that is supporting the OMF plugin.\n\n  - **Security Configuration**: The configuration options that impact the security of the service that is running OMF.\n\n  - **Developer**: This tab is only visible if the developer features of Fledge have been enabled and will give access to the features aimed at a plugin or pipeline developer.\n\nBasic\n~~~~~\n\nThe *Basic* tab contains the most commonly modified items.\n\n+---------------+\n| |OMF_Default| |\n+---------------+\n\n  - **Endpoint**: The type of OMF end point we are connecting with. The options available are\n\n    +-----------------+\n    | |OMF_Endpoints| |\n    +-----------------+\n\n    - *PI Web API* - A connection to a PI Server that supports the OMF option of the PI Web API. This is the preferred mechanism for sending data to a PI Server.\n\n    - *AVEVA Data Hub* - The AVEVA cloud service.\n\n    - *Connector Relay* - The previous way to send data to a PI Server before PI Web API supported OMF. 
This should only be used for older PI Servers that do not have the support available within PI Web API.\n\n    - *OSIsoft Cloud Services* - The original OSIsoft cloud service; this is currently being replaced by AVEVA Data Hub.\n\n    - *Edge Data Store* - The OSIsoft Edge Data Store.\n\n  - **Create AF Structure**: Used to control whether Asset Framework structure messages are sent to the PI Server. If this is turned off then the data will not be placed in the Asset Framework.\n\n  - **Naming scheme**: Defines the naming scheme to be used when creating the PI points in the PI Data Archive. See :ref:`Naming_Scheme`.\n\n  - **Server hostname**: The hostname or address of the OMF end point. This is only valid if the end point is a PI Server either with PI Web API or the Connector Relay. This is normally the same address as the PI Server.\n\n  - **Server port**: The port the PI Web API OMF endpoint is listening on. Leave as 0 if you are using the default port.\n\n  - **Data Source**: Defines which data is sent to the OMF end point. 
The options available are\n    \n    - *readings* - The data that has been ingested into Fledge via the South services.\n     \n    - *statistics* - Fledge's internal statistics.\n\n  - **Static Data**: Data to include in every Container created by OMF.\n    For example, you can use this to specify the location of the devices being monitored by the Fledge server.\n    See the :ref:`Static Data` section.\n\n  - **Data Stream Name Delimiter**: The plugin creates Container names by concatenating Asset and Datapoint names separated by this single-character delimiter.\n    The default delimiter is a dot (\".\").\n\n  - **Action Code for Data Messages**: Defines the action code in the HTTP header when sending OMF Data messages.\n\n    All OMF messages must have an *action* code in the HTTP header which defines how the server should process the OMF message.\n    For Data messages, the default action code is *update* which means that the server should update the data value if there is already a value at the passed timestamp.\n    If there is no value at the passed timestamp, the data value is inserted into the server's data archive.\n    If the passed data value is newer than the server's snapshot, the new value is processed by the server's compression algorithm.\n    The action code of *update* is the default and should generally be left unchanged.\n\n    The one exception is if the PI Buffer Subsystem is used to buffer data sent to the PI Data Archive.\n    Because of an issue with the PI Buffer Subsystem, OMF data sent with an action code of *update* is converted to the PI Data Archive's internal *replace* storage code.\n    The *replace* storage code causes the PI Data Archive's compression algorithm to be bypassed.\n    When using the PI Buffer Subsystem, set the action code to *create* which will allow new data to be compressed normally.\n    One disadvantage of the *create* action code is that multiple values with the same timestamp will all be stored. 
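\n\n    For illustration, the action code is carried as an HTTP header on each OMF message. A Data message sent with the non-default *create* action code would carry headers along the lines of the following (the header names are those defined by the OMF specification; the values shown are illustrative):\n\n    .. code-block:: console\n\n       messagetype: data\n       action: create\n       messageformat: JSON\n       omfversion: 1.2\n\n    With *create*, a second value arriving for an existing timestamp is stored as an additional event rather than replacing the first.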
\n\n  - **Enable Tracing**: The Enable Tracing flag allows users to toggle the tracing functionality on or off. If enabled, a detailed trace of OMF messages will be written to the `logs/debug-trace/omf.log` file in the Fledge data directory.\n\n\nAsset Framework\n~~~~~~~~~~~~~~~\n\nThe OMF plugin has the ability to interact with the PI Asset Framework and put data into the desired locations within the Asset Framework. It allows a default location to be specified and also a set of rules to be defined that will override that default location.\n\n+----------+\n| |OMF_AF| |\n+----------+\n\n   - **Default Asset Framework Location**: The location in the Asset Framework hierarchy into which the data will be inserted.\n     All data will be inserted at this point in the Asset Framework hierarchy unless a later rule overrides this.\n     Note this field does not include the name of the target Asset Framework Database;\n     the target database is defined on the PI Web API server by the PI Web API Admin Utility.\n\n   - **Asset Framework Hierarchies Rules**: A set of rules that allow specific readings to be placed elsewhere in the Asset Framework. These rules can be based on the name of the asset itself or some metadata associated with the asset. See `Asset Framework Hierarchy Rules`_.\n\nAuthentication\n~~~~~~~~~~~~~~\n\nThe *Authentication* tab allows the configuration of authentication between the OMF plugin and the OMF endpoint.\n\n+------------+\n| |OMF_Auth| |\n+------------+\n\n   - **Producer Token**: The Producer Token provided by the PI Relay Data Connection Manager. 
This is only required when using the older Connector Relay end point for sending data to a PI Server.\n\n   - **PI Web API Authentication Method**: The authentication method to be used:\n\n     - *anonymous* - Anonymous equates to no authentication.\n\n     - *basic* - Basic authentication requires a user name and password.\n\n     - *kerberos* - Kerberos allows integration with your Single Sign-On environment.\n\n   - **PI Web API User Id**: For Basic authentication, the user name to authenticate with the PI Web API.\n\n   - **PI Web API Password**: For Basic authentication, the password of the user we are using to authenticate.\n\n   - **PI Web API Kerberos keytab file**: The Kerberos keytab file used to authenticate.\n\nCloud\n~~~~~\n\nThe *Cloud* tab contains configuration items that are required if the chosen OMF end point is either AVEVA Data Hub or OSIsoft Cloud Services.\n\n+-------------+\n| |OMF_Cloud| |\n+-------------+\n\n  - **Cloud Service Region**: The region in which your AVEVA Data Hub or OSIsoft Cloud Services service is located.\n\n    +---------------+\n    | |ADH_Regions| |\n    +---------------+\n\n  - **Namespace**: Your namespace within the AVEVA Data Hub or OSIsoft Cloud Services.\n\n  - **Tenant ID**: Your AVEVA Data Hub or OSIsoft Cloud Services Tenant ID for your account.\n\n  - **Client ID**: Your AVEVA Data Hub or OSIsoft Cloud Services Client ID for your account.\n\n  - **Client Secret**: Your AVEVA Data Hub or OSIsoft Cloud Services Client Secret.\n\nConnection\n~~~~~~~~~~\n\nThe *Connection* tab allows a set of tuning parameters to be set for the connection from the OMF plugin to the OMF end point.\n\n+------------------+\n| |OMF_Connection| |\n+------------------+\n\n\n   - **Sleep Time Retry**: Number of seconds to wait before retrying the connection (Fledge doubles this time after each failed attempt).\n\n   - **Maximum Retry**: Maximum number of times to retry connecting to the OMF Endpoint.\n\n   - **HTTP 
Timeout**: Number of seconds to wait before Fledge will time out an HTTP connection attempt.\n\n   - **Compression**: Compress the readings data before sending them to the OMF endpoint.\n\nFormats & Types\n~~~~~~~~~~~~~~~\n\nThe *Formats & Types* tab provides a means to specify the data types that will be used and to configure the way complex assets are mapped to OMF types.\nSee the section :ref:`Numeric Data Types` for more information on configuring data types.\n\n+--------------+\n| |OMF_Format| |\n+--------------+\n\n   - **Integer Format**: Used to match Fledge data types to the data type configured in PI. This defaults to int64 but may be set to any OMF data type compatible with integer data, e.g. int32.\n\n   - **Number Format**: Used to match Fledge data types to the data type configured in PI. The default is float64 but may be set to any OMF datatype that supports floating point values.\n\n   - **Complex Types**: Versions of the OMF plugin prior to 2.1 support complex types in which each asset would have a corresponding OMF type created for it. With the introduction of OMF Version 1.2 support in version 2.1.0 of the plugin, support has been added for linked types. These are more versatile and allow for asset structures to change dynamically. The linked types are now the default; however, setting this option can force the older complex types to be used. See :ref:`Linked_Types`. Versions of the PI Server from 2020 or before will always use the complex types. The plugin will normally detect this automatically; however, if the detection does not correctly enforce this setting then this option should be enabled by the user.\n\n.. 
_Naming_Scheme:\n\nNaming Scheme\n-------------\n\nThe naming of objects in the Asset Framework and of the attributes of\nthose objects has a number of constraints that need to be understood when\nstoring data into a PI Server using OMF.\nAn important factor in this is the stability of your data structures.\nIf you have objects in your environment that are likely to change,\nyou may wish to take a different naming approach.\nExamples of changes are a difference in the number of attributes between readings, and a change in the data types of attributes.\n\nThis occurs because of a limitation of the OMF interface to the PI Server.\nData is sent to OMF in a number of stages.\nOne of these is the definition of the Types used to create AF Element Templates.\nOMF uses a Type to define an AF Element Template but once defined it cannot be changed.\nIf an updated Type definition is sent to OMF, it will be used to create a new AF Element Template rather than changing the existing one.\nThis means a new AF Element Template is created each time a Type changes.\n\nThe OMF plugin names objects in the Asset Framework based upon the asset\nname in the reading within Fledge. Asset names are typically added to\nthe readings in the south plugins, however they may be altered by filters\nbetween the south ingest and the north egress points in the data\npipeline. Asset names can be overridden using the :ref:`OMF Hints` mechanism\ndescribed below.\n\nThe attribute names used within the objects in the PI System are based\non the names of the datapoints within each Reading within Fledge. 
Again\n:ref:`OMF Hints` can be used to override this mechanism.\n\nThe naming used within the objects in the Asset Framework is controlled\nby the *Naming Scheme* option:\n\n  Concise\n     No suffix or prefix is added to the asset name and property name when\n     creating objects in the Asset Framework and PI Points in the PI Data Archive.\n     However, if the structure of an asset changes a new AF Element Template\n     will be created which will have the suffix -type*x* appended to it.\n\n  Use Type Suffix\n     The AF Element names will be created from the asset names by appending\n     the suffix -type*x* to the asset name. If the structure\n     of an asset changes a new AF Element name will be created with an\n     updated suffix.\n\n  Use Attribute Hash\n     AF Attribute names will be created using a numerical hash as a prefix.\n\n  Backward Compatibility\n     The naming reverts to the rules that were used by version 1.9.1 and\n     earlier of Fledge: both type suffixes and attribute hashes will be\n     applied to the name.\n\n\nAsset Framework Hierarchy Rules\n-------------------------------\n\nThe Asset Framework rules allow the location of specific assets within\nthe Asset Framework to be controlled. There are two basic types of hint:\n\n  - Asset name placement: the name of the asset determines where in the\n    Asset Framework the asset is placed,\n\n  - Meta data placement: metadata within the reading determines where\n    the asset is placed in the Asset Framework.\n\nThe rules are encoded within a JSON document.\nThis document contains two properties in the root of the document:\none for name-based rules and the other for metadata based rules.\n\n.. 
code-block:: console\n\n    {\n        \"names\" :\n            {\n                \"asset1\" : \"/Building1/EastWing/GroundFloor/Room4\",\n                \"asset2\" : \"Room14\"\n            },\n        \"metadata\" :\n            {\n                \"exist\" :\n                    {\n                        \"temperature\"   : \"temperatures\",\n                        \"power\"         : \"/Electrical/Power\"\n                    },\n                \"nonexist\" :\n                    {\n                        \"unit\"          : \"Uncalibrated\"\n                    },\n                \"equal\" :\n                    {\n                        \"room\"          :\n                            {\n                                \"4\" : \"ElectricalLab\",\n                                \"6\" : \"FluidLab\"\n                            }\n                    },\n                \"notequal\" :\n                    {\n                        \"building\"      :\n                            {\n                                \"plant\" : \"/Office/Environment\"\n                            }\n                    }\n            }\n    }\n\nThe name type rules are simply a set of asset name and Asset Framework location\npairs. The asset names must be complete names; there is no pattern\nmatching within the names.\n\nThe metadata rules are more complex. Four different tests can be applied:\n\n  - **exist**: This test looks for the existence of the named datapoint within the asset.\n\n  - **nonexist**: This test looks for the lack of a named datapoint within the asset.\n\n  - **equal**: This test looks for a named datapoint having a given value.\n\n  - **notequal**: This test looks for a named datapoint having a value different from that specified.\n\nThe *exist* and *nonexist* tests take a set of name/value pairs that\nare tested. The name is the datapoint name to examine and the value is\nthe Asset Framework location to use. For example\n\n.. 
code-block:: console\n\n   \"exist\" :\n       {\n            \"temperature\"   : \"temperatures\",\n            \"power\"         : \"/Electrical/Power\"\n       }\n\nIf an asset has a datapoint called *temperature* it will be stored in\nthe AF hierarchy *temperatures*; if the asset has a datapoint called\n*power* the asset will be placed in the AF hierarchy */Electrical/Power*.\n\nThe *equal* and *notequal* tests take an object as a child; the name of\nthe object is the datapoint to examine, and the child nodes are sets of values\nand locations. For example\n\n.. code-block:: console\n\n   \"equal\" :\n      {\n         \"room\" :\n            {\n               \"4\" : \"ElectricalLab\",\n               \"6\" : \"FluidLab\"\n            }\n      }\n\nIn this case, if the asset has a datapoint called *room* with a value\nof *4* then the asset will be placed in the AF location *ElectricalLab*;\nif it has a value of *6* then it is placed in the AF location *FluidLab*.\n\nIf an asset matches multiple rules in the ruleset it will appear in\nmultiple locations in the hierarchy; the data is shared between each of\nthe locations.\n\nIf an OMF Hint exists within a particular reading this will take\nprecedence over generic rules.\n\nThe AF location may be a simple string or it may also include\nsubstitutions from other datapoints within the reading. For example,\nif the reading has a datapoint called *room* that contains the room\nin which the reading was taken, an AF location of */BuildingA/${room}*\nwould put the reading in the Asset Framework using the value of the room\ndatapoint. The reading\n\n.. code-block:: console\n\n  \"reading\" : {\n       \"temperature\" : 23.4,\n       \"room\"        : \"B114\"\n       }\n\nwould be put in the AF at */BuildingA/B114* whereas a reading of the form\n\n.. 
code-block:: console\n\n  \"reading\" : {\n       \"temperature\" : 24.6,\n       \"room\"        : \"2016\"\n       }\n\nwould be put at the location */BuildingA/2016*.\n\nIt is also possible to define defaults if the referenced datapoint\nis missing. In our example above, if we used the location\n*/BuildingA/${room:unknown}* a reading without a *room* datapoint would\nbe placed in */BuildingA/unknown*. If no default is given and the data\npoint is missing then the level in the hierarchy is ignored. For example, if we\nuse our original location */BuildingA/${room}* and we have the reading\n\n.. code-block:: console\n\n  \"reading\" : {\n       \"temperature\" : 22.8\n       }\n\nthis reading would be stored in */BuildingA*.\n\n.. _OMF Hints:\n\nOMF Hints\n---------\n\nThe OMF plugin also supports the concept of hints in the actual data\nthat determine how the data should be treated by the plugin. Hints are\nencoded in a specially named datapoint within a reading called *OMFHint*.\nThe hints themselves are encoded as JSON within a string.\n\nAn *OMFHint* can be added at any point in the processing of the data.\nA specific plugin called the |OMFHint filter plugin| is available for adding hints.\n\nNumber Format Hint\n~~~~~~~~~~~~~~~~~~\n\nA number format hint tells the plugin what number format to use when inserting data\ninto the PI Server. The following will cause all numeric data within\nthe asset to be written using the format *float32*.\nSee the section :ref:`Numeric Data Types`.\n\n.. 
code-block:: console\n\n   \"OMFHint\"  : { \"number\" : \"float32\" }\n\nThe value of the *number* hint may be one of the two numeric formats supported by the PI Server: *float64* or *float32*.\nThis hint applies to all numeric datapoints in the asset.\nFor Linked Types, you can also use this hint to coerce numeric data to integer: *int64*, *int32*, *int16*, *uint64*, *uint32* or *uint16*.\nTo apply a Number Format hint to a specific datapoint only, see the section :ref:`Datapoint Specific Hints`.\n\nInteger Format Hint\n~~~~~~~~~~~~~~~~~~~\n\nAn integer format hint tells the plugin what integer format to use when inserting\ndata into the PI Server. The following will cause all integer data\nwithin the asset to be written using the format *integer32*.\nSee the section :ref:`Numeric Data Types`.\n\n.. code-block:: console\n\n   \"OMFHint\"  : { \"integer\" : \"integer32\" }\n\nThe value of the *integer* hint may be any integer format that is supported by the PI Server: *int64*, *int32*, *int16*, *uint64*, *uint32* or *uint16*.\nThis hint applies to all integer datapoints in the asset.\nFor Linked Types, you can also use this hint to coerce integer data to numeric: *float64* or *float32*.\nTo apply an Integer Format hint to a specific datapoint only, see the section :ref:`Datapoint Specific Hints`.\n\nType Name Hint\n~~~~~~~~~~~~~~\n\nA type name hint specifies that a particular name should be used when\ndefining the name of the type that will be created to store the object\nin the Asset Framework. This will override the :ref:`Naming_Scheme` currently\nconfigured.\n\n.. code-block:: console\n\n   \"OMFHint\"  : { \"typeName\" : \"substation\" }\n\nType Hint\n~~~~~~~~~\n\nA type hint is similar to a type name hint, but instead of defining\nthe name of a type to create, it defines the name of an existing type\nto use. 
The structure of the asset *must* match the structure of the\nexisting type within the PI Server; it is the responsibility of the person\nthat adds this hint to ensure this is the case.\n\n.. code-block:: console\n\n   \"OMFHint\"  : { \"type\" : \"pump\" }\n\n.. note::\n\n   This hint only has meaning when using the complex type legacy mode with this plugin.\n\nTag Name Hint for a Container\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe default name of an OMF Container is the reading's asset name.\nThe *tagName* hint allows you to override this and specify the OMF Container name.\nIn the AF Database, the *tagName* hint becomes the name of the AF Element that owns the AF Attributes that map the reading's datapoints.\nThis hint does not influence the names of individual PI Points.\nIf you need to specify PI Point names, see :ref:`Datapoint Specific Hints`.\n\n.. code-block:: console\n\n   \"OMFHint\"  : { \"tagName\" : \"Reactor42\" }\n\n.. note::\n\n   If you configure a *tagName* hint to specify the name of the OMF Container, you must do so before you start your OMF North instance for the first time.\n   After that, you must include the *tagName* hint for every reading without changing it.\n   If you don't, the OMF North plugin will create a new OMF Container with the default name along with new PI Points.\n   If this happens, there is a procedure for restoring your configuration in the |OMF North Troubleshooting| guide.\n\nSource Hint\n~~~~~~~~~~~\n\nThe default data source that is associated with tags in the PI Server is Fledge; however, this can be overridden using the data source hint. This hint may be applied to the entire asset or to specific datapoints within the asset.\n\n.. 
code-block:: console\n\n   \"OMFHint\" : { \"source\" : \"Fledge23\" }\n\nAsset Framework Location Hint\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nAn Asset Framework location hint can be added to a reading to control\nthe placement of the asset within the Asset Framework.\nThis hint overrides the path in the *Default Asset Framework Location* for the reading.\nAn Asset Framework hint would be as follows:\n\n.. code-block:: console\n\n   \"OMFHint\"  : { \"AFLocation\" : \"/UK/London/TowerHill/Floor4\" }\n\nNote the following when defining an *AFLocation* hint:\n\n- An asset name in a Fledge Reading is used to create an AF Element in the OSIsoft Asset Framework.\n  Time series data streams become AF Attributes of that AF Element.\n  This means these AF Attributes are mapped to PI Points using the OSIsoft PI Point Data Reference.\n- Deleting the original Reading AF Element is not recommended;\n  if you delete a Reading AF Element, the OMF North plugin will not recreate it.\n- If you wish to move a Reading AF Element, you can do this with the PI System Explorer.\n  Right-click on the AF Element that represents the Reading AF Element.\n  Choose Copy.\n  Select the AF Element that will serve as the new parent of the Reading AF Element.\n  Right-click and choose *Paste* or *Paste Reference*.\n  *Note that PI System Explorer does not have the traditional Cut function for AF Elements*.\n- For Linked Types\n    - If you define an AF Location hint after the Reading AF Element has been created in the default location,\n      a reference will be created in the location defined by the hint.\n    - If an AF Location hint was in place when the Reading AF Element was created and you then disable the hint,\n      a reference will be created in the *Default Asset Framework Location*.\n    - If you edit the AF Location hint, the Reading AF Element will not move.\n      A reference to the Reading AF Element will be created in the new location.\n- For Complex Types\n    - If you disable the OMF 
Hint filter, the Reading AF Element will not move.\n    - If you edit the AF Location hint, the Reading AF Element will move to the new location in the AF hierarchy.\n    - No references are created.\n\n.. _Datapoint Specific Hints:\n\nDatapoint Specific Hints\n~~~~~~~~~~~~~~~~~~~~~~~~\n\nHints may also be targeted to specific data points within an asset by\nusing the *datapoint* hint. A *datapoint* hint takes a JSON object as its value.\nThe object must have the *name* key to identify the datapoint to which to apply the hint.\n\n.. code-block:: console\n\n   \"OMFHint\"  : { \"datapoint\" : { \"name\" : \"voltage\", \"number\" : \"float32\" } }\n\nThe above hint applies to the datapoint *voltage* in the asset and\napplies a *number format* hint to that datapoint.\n\nIf more than one datapoint within a reading is required to have OMF hints\nattached to them, this may be done by using an array as a child of the\ndatapoint item.\n\n.. code-block:: console\n\n   \"OMFHint\"  : { \"datapoint\" : [\n        { \"name\" : \"voltage\", \"number\" : \"float32\", \"uom\" : \"volt\" },\n        { \"name\" : \"current\", \"number\" : \"uint32\", \"uom\" : \"milliampere\" }\n        ]\n   }\n\nThe example above attaches a number hint to both the voltage and current\ndatapoints. It assigns a unit of measure of milliampere to the current\ndatapoint; the unit of measure for the voltage is set to volts.\n\nThis is a list of hints that can be applied to a datapoint:\n\n- Number\n- Integer\n- Unit of Measure\n- Minimum\n- Maximum\n- Interpolation\n- Tag Name\n\nThe following sub-sections outline each datapoint hint.\n\nNumber Format Hint\n##################\n\nA number format hint tells the plugin what number format to use when inserting numeric data into the PI Server.\nThe following will cause all numeric data for the *flow* datapoint within the asset to be written using the format *float32*.\nSee the section :ref:`Numeric Data Types`.\n\n.. 
code-block:: console\n\n   \"OMFHint\"  : { \"datapoint\" : { \"name\" : \"flow\", \"number\" : \"float32\" } }\n\nFor Linked Types, you can also use this hint to coerce numeric data to integer: *int64*, *int32*, *int16*, *uint64*, *uint32* or *uint16*.\n\nInteger Format Hint\n###################\n\nAn integer format hint tells the plugin what integer format to use when inserting integer data into the PI Server.\nThe following will cause all integer data for the *height* datapoint within the asset to be written using the format *integer32*.\nSee the section :ref:`Numeric Data Types`.\n\n.. code-block:: console\n\n   \"OMFHint\"  : { \"datapoint\" : { \"name\" : \"height\", \"integer\" : \"integer32\" } }\n\nFor Linked Types, you can also use this hint to coerce integer data to numeric: *float64* or *float32*.\n\nUnit Of Measure Hint\n####################\n\nA unit of measure, or uom, hint is used to associate one of the units of\nmeasurement defined within your PI Server with a particular data point\nwithin an asset.\n\n.. code-block:: console\n\n   \"OMFHint\"  : { \"datapoint\" : { \"name\" : \"height\", \"uom\" : \"meter\" } }\n\nMinimum Hint\n############\n\nA minimum hint is used to associate a minimum value with the PI Point created for a data point.\n\n.. code-block:: console\n\n   \"OMFHint\"  : { \"datapoint\" : { \"name\" : \"height\", \"minimum\" : \"0\" } }\n\nMaximum Hint\n############\n\nA maximum hint is used to associate a maximum value with the PI Point created for a data point.\n\n.. code-block:: console\n\n   \"OMFHint\"  : { \"datapoint\" : { \"name\" : \"height\", \"maximum\" : \"100000\" } }\n\nInterpolation\n#############\n\nThe interpolation hint sets the interpolation value used within the PI Server; the interpolation values supported are continuous, discrete, stepwisecontinuousleading, and stepwisecontinuousfollowing.\n\n.. 
code-block:: console\n\n   \"OMFHint\"  : { \"datapoint\" : { \"name\" : \"height\", \"interpolation\" : \"continuous\" } }\n\nTag Name Hint for a Datapoint (Linked Types only)\n#################################################\n\nThe default name of a datapoint's PI Point is the reading's asset name concatenated with the datapoint name, separated by a dot (\".\").\nThe datapoint *tagName* hint allows you to override this and specify the name of the PI Point.\nFor example:\n\n.. code-block:: console\n\n   \"OMFHint\"  : { \"datapoint\" : { \"name\" : \"temperature\", \"tagName\" : \"T105.PV\" } }\n\nIf you wish to set PI Point names for multiple datapoints in the same asset, use the datapoint hint array format:\n\n.. code-block:: JSON\n\n   \"OMFHint\": [\n      {\n         \"name\":\"temperature\",\n         \"tagName\":\"T105.PV\"\n      },\n      {\n         \"name\":\"pressure\",\n         \"tagName\":\"P105.PV\"\n      },\n      {\n         \"name\":\"status\",\n         \"tagName\":\"Stat105.bool\"\n      }\n   ]\n\n.. note::\n\n   If you configure a *tagName* hint to specify a PI Point name, you must do so before you start your OMF North instance for the first time.\n   After that, you must include the *tagName* hint for every reading without changing it.\n   If you don't, the OMF North plugin will report errors and stop processing.\n   If this happens, there is a procedure for restoring your configuration in the |OMF North Troubleshooting| guide.\n\n.. 
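\n\nSeveral hints can be combined in a single *OMFHint* datapoint by placing them in one JSON object. The sketch below (the asset values and AF path are illustrative) pairs an asset-level number format hint with an Asset Framework location hint:\n\n.. code-block:: console\n\n  \"reading\" : {\n       \"temperature\" : 23.4,\n       \"OMFHint\"     : { \"number\" : \"float32\", \"AFLocation\" : \"/Plant/PumpHouse\" }\n       }\n\nAll numeric datapoints in this asset would be written as *float32* and the asset would be placed at */Plant/PumpHouse* in the Asset Framework.\n\n.. 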
_Numeric Data Types:\n\nNumeric Data Types\n------------------\n\nConfiguring Numeric Data Types\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIt is possible to configure the exact data types used to send data to the PI Server using OMF.\nTo configure the data types for all integers and numbers (that is, floating point values), you can use the *Formats & Types* tab in the Fledge GUI.\nTo influence the data types for specific assets or datapoints, you can create an OMFHint of type *number* or *integer*.\n\nYou must create your data type configurations before starting your OMF North plugin instance.\nAfter your plugin has run for the first time,\nOMF messages sent by the plugin to the PI Server will cause AF Attributes and PI Points to be created using data types defined by your configuration.\nThe data types of the AF Attributes and PI Points will not change if you edit your OMF North plugin instance configuration.\nFor example, if you disable an *integer* OMFHint,\nyou will change the OMF messages sent to PI but the data in the messages will no longer match the AF Attributes and PI Points in your PI Server.\n\nDetecting the Data Type Mismatch Problem\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nEditing your data type choices in OMF North will cause the following messages to appear in the System Log:\n\n.. 
code-block:: console\n\n   ERROR: HTTP 409: The OMF endpoint reported a Conflict when sending Containers: 1 message\n   ERROR: Message 0 HTTP 409: Error, A container with the supplied ID already exists, but does not match the supplied container.,\n   WARNING: Containers attempted: Calvin1.random\n   WARNING: HTTP Code 409: Processing cannot continue until data archive errors are corrected\n\nThe HTTP Code 409 (Conflict) means that OMF North has encountered a problem that cannot be resolved automatically.\nOMF North will not attempt to send data again.\nYou must shut down the OMF North instance and address the problem.\n\nRecovering from the Data Type Mismatch Problem\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThere are two techniques for recovering from the data type mismatch problem:\n\nReset the PI Point\n##################\n\nAs you experiment with configurations, you may discover that your original assumptions about your data types were not correct and need to be changed.\nIt is possible to repair your PI Server so that you do not need to discard your AF Database and start over.\nThis is the procedure:\n\n- Shut down your OMF North instance.\n- Using PI System Explorer, locate the problematic PI Points.\n  Their names will be listed in the *Containers attempted* warning message.\n  The PI Points are mapped to AF Attributes using the PI Point Data Reference.\n  For each AF Attribute, you can see the name of the PI Point in the Settings pane.\n  While editing the AF Attribute, change the *Value Type* to your intended data type and check in your change.\n- Using PI System Management Tools (PI SMT), open the Point Builder tool (under Points) and locate the problematic PI Points.\n- In the General tab in the Point Builder, locate the Extended Descriptor (*Exdesc*).\n  It will contain a long character string with several OMF tokens such as *OmfPropertyIndexer*, *OmfContainerId* and *OmfTypeId*.\n  Clear the *Exdesc* field completely.\n  Set the *Point 
Source* to *L*.\n  Save your changes.\n- Start up your OMF North instance.\n\nClearing the Extended Descriptor will cause OMF to \"adopt\" the PI Point.\nOMF will update the Extended Descriptor with new values of the OMF tokens.\nWatch the System Log during startup to see if any problems occur.\n\nApply an OMFHint\n################\n\nYou can create an OMFHint of type *number* or *integer* to coerce the datapoint values to the data type of the PI Points already created.\nThe |OMFHint filter plugin| can be used to add these hints to your OMF North configuration.\nOMF North will allow you to coerce integers to numeric values, and numeric values to integers for Linked Type configurations.\n\n.. note::\n\n   Be careful when configuring an OMFHint to coerce floating point datapoint values to any form of unsigned integer (*uint64*, *uint32* or *uint16*).\n   Negative numbers cannot be coerced to unsigned integers.\n   If OMF North encounters this, it will write *null* as the unsigned integer value.\n   AVEVA Data Hub and Edge Data Store represent a *null* value from OMF as *null*.\n   For PI Web API, a *null* value from OMF is represented in the PI Data Archive as the Digital State value *No Data*.\n\n.. note::\n\n   Complex Type configurations do not support coercion of data types, that is, coercion of *number* to *integer*, or *integer* to *number*.\n   Data type mismatch issues are less likely, however, because integer datapoint values are used to create *float64* PI Points.\n   This reduces the likelihood of errors since both numbers and integers can be written to *float64* PI Points.\n\nFurther Troubleshooting\n~~~~~~~~~~~~~~~~~~~~~~~\n\nIf you are unable to locate your problematic PI Points using the PI System Explorer, or if there are simply too many of them, there are advanced techniques available to troubleshoot\nand repair your system.\nContact Technical Support for assistance.\n\n.. 
_Static Data:\n\nStatic Data\n-----------\n\nThis feature allows you to specify static string values that will be included in OMF Containers created by OMF North.\nContainers appear in the target AF Database as AF Elements that own AF Attributes which are mapped to PI Points.\nFor example, this AF Element named *Calvin2* has two AF Attributes that map to PI Points: *random* and *random2*.\nThe AF Element also has three static data values named *Company*, *Domain* and *Location*:\n\n+------------------+\n| |OMF_StaticData| |\n+------------------+\n\nEach Static Data item in the Fledge configuration consists of a key/value pair separated by a colon (\":\"); for example, *Company:Acme* defines a Static Data key named *Company* whose value is the string *Acme*.\nYou can specify multiple Static Data items separated by commas (\",\").\nStatic Data keys and values are applied when a Container is created.\nStatic Data values are always strings.\nOMF North cannot easily change the keys or add new keys after OMF North has started the first time.\nYou can, however, edit the values of the Static Data keys.\n\nStatic Data values are included in AF Element Templates which are then used by OMF to create Containers.\nThe design of the AF Templates depends on whether Linked Types or Complex Types are configured:\n\nLinked Types\n~~~~~~~~~~~~\n\nOMF creates a single AF Element Template called *FledgeAsset*.\nBesides the essential *__id*, *__indexProperty* and *__nameProperty* OMF attributes, the *FledgeAsset* template will have attributes defined by your Static Data configuration.\nThe first OMF North instance to start will create the *FledgeAsset* AF Element Template.\nAny subsequent OMF North instance that starts will not change the *FledgeAsset* template.\nThe best practice is to decide early which Static Data keys should be added to all OMF North configurations.\nEach OMF North instance can have its own values for these Static Data items which will be applied to any Container it creates.\nUpdated Static Data values in your configuration will be applied to both new and existing 
Containers.\n\nComplex Types\n~~~~~~~~~~~~~\n\nOMF creates multiple AF Element Templates, one per asset.\nThese AF Element Templates own the Static Data items and have names that end in \"...\\ *assetname*\\ _typename_sensor.\"\nEach template is used to create a Container for one asset.\nBecause of this, the risk of overlapping definitions is lower.\nOnce Containers are created, editing the Static Data values in your configuration does not update any existing AF Elements.\n\n.. note::\n\n    When OMF North starts, you may see this warning in */var/log/syslog*::\n\n        WARNING: FledgeAsset Type exists with a different definition\n\n    This warning does not affect the normal flow of data.\n    A detailed explanation of this warning and how to address it can be found in the |OMF North Troubleshooting| guide.\n\n.. _Linked_Types:\n\nLinked Types\n------------\n\nVersions of this plugin prior to 2.1.0 created a complex type within OMF for each asset that included all of the data points within that asset. This suffered from a limitation in that readings had to contain values for all of the data points of an asset in order to be accepted by the OMF end point. Following the introduction of OMF version 1.2 it was possible to use the linking features of OMF to avoid the need to create complex types for an asset and instead create empty assets and link the data points to this shell asset. This allows readings to contain only a subset of datapoints and still be successfully sent to the PI Server, or other end points.\n\nAs of version 2.1.0, this linking approach is used for all new assets created. If assets exist within the PI Server from versions of the plugin prior to 2.1.0, then the older complex types will be used. It is possible to force the plugin to use complex types for all assets, both old and new, using the configuration option. 
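\n\nAn OMFHint is a small JSON document attached to readings by the |OMFHint filter plugin|. As a sketch only (see the |OMFHint filter plugin| documentation for the full hint syntax), an asset-level hint that coerces integer datapoint values to an unsigned 32 bit type would look like this:\n\n.. code-block:: json\n\n   {\"integer\": \"uint32\"}\n\n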
It is also possible to force a particular asset to use the complex type mechanism using an OMFHint.\n\nOMF Version Support\n-------------------\n\nTo date, AVEVA has released three versions of the OSIsoft Message Format (OMF) specification: 1.0, 1.1 and 1.2.\nThe OMF Plugin supports all three OMF versions.\nThe plugin will determine the OMF version to use by reading product version information from the AVEVA data destination system.\nThese are the OMF versions the plugin will use to post data:\n\n+-----------+----------+---------------------+\n|OMF Version|PI Web API|Edge Data Store (EDS)|\n+===========+==========+=====================+\n|        1.2|- 2021    |- 2023               |\n|           |- 2021 SP1|- 2023 Patch 1       |\n|           |- 2021 SP2|- 2024               |\n|           |- 2021 SP3|                     |\n|           |- 2023    |                     |\n|           |- 2023 SP1|                     |\n+-----------+----------+---------------------+\n|        1.1|          |                     |\n+-----------+----------+---------------------+\n|        1.0|- 2019    |- 2020               |\n|           |- 2019 SP1|                     |\n+-----------+----------+---------------------+\n\nThe AVEVA Data Hub (ADH) is cloud-deployed and is always at the latest version of OMF support, which is 1.2.\nThis includes the legacy OSIsoft Cloud Services (OCS) endpoints.\n"
  },
  {
    "path": "docs/RASPBIAN.rst",
    "content": ".. |br| raw:: html\n\n   <br />\n\n**************************************\nBuilding and using Fledge on Raspbian\n**************************************\n\nFledge requires the use of Python 3.5.3+ in order to support the\nasynchronous IO mechanisms used by Fledge. Earlier Raspberry Pi Raspbian\ndistributions support Python 3.4 as the latest version of Python.\nIn order to build and run Fledge on Raspbian the version of Python\nmust be updated manually if your distribution has an older version.\n\n**NOTE**: These steps must be executed *in addition* to what is described in the README file when you install Fledge on Raspbian.\n\nCheck your Python version by running the command\n\n.. code-block:: console \n\n    $ python3 --version\n    $\n\n|br|\n\nIf your version is less than 3.5.3 then follow the instructions below to update\nyour Python version.\n\nInstall and update the build tools required for Python to be built\n\n.. code-block:: console \n\n    $ sudo apt-get update\n    $ sudo apt-get install build-essential tk-dev\n    $ sudo apt-get install libncurses5-dev libncursesw5-dev libreadline6-dev\n    $ sudo apt-get install libdb5.3-dev libgdbm-dev libsqlite3-dev libssl-dev\n    $ sudo apt-get install libbz2-dev libexpat1-dev liblzma-dev zlib1g-dev\n    $\n\n|br|\n\nNow build and install the new version of Python\n\n.. code-block:: console \n\n    $ wget https://www.python.org/ftp/python/3.5.3/Python-3.5.3.tgz\n    $ tar zxvf Python-3.5.3.tgz\n    $ cd Python-3.5.3\n    $ ./configure\n    $ make\n    $ sudo make install\n\n|br|\n\nConfirm the Python version\n\n.. code-block:: console \n\n    $ python3 --version\n    $ pip3 --version\n\n|br|\n\nThese should both return a version number as 3.5.3+, if not then check which\npython3 and pip3 you are running and replace these with the newly\nbuilt versions. This may be caused by the newly built version being\ninstalled in /usr/local/bin and the existing python3 and pip3 being\nin /usr/bin. 
If this is the case then remove the /usr/bin versions\n\n.. code-block:: console \n\n    $ sudo rm /usr/bin/python3 /usr/bin/pip3\n\n|br|\n\nYou may also link to the new version if you wish\n\n.. code-block:: console \n\n    $ sudo ln -s /usr/local/bin/python3 /usr/bin/python3\n    $ sudo ln -s /usr/local/bin/pip3 /usr/bin/pip3\n\n|br|\nOnce Python 3.5.3 has been installed you may follow the instructions\nin the README file to build, install and run Fledge on Raspberry\nPi using the Raspbian distribution.\n"
  },
  {
    "path": "docs/_static/.gitkeep",
    "content": ""
  },
  {
    "path": "docs/_static/theme_overrides.css",
    "content": "/* override table width restrictions */\n@media screen and (min-width: 767px) {\n\n   .wy-table-responsive table td {\n      /* !important prevents the common CSS stylesheets from overriding\n         this as on RTD they are loaded after this stylesheet */\n      white-space: normal !important;\n   }\n\n   .wy-table-responsive {\n      overflow: visible !important;\n   }\n}\n"
  },
  {
    "path": "docs/_static/version_menu.css",
    "content": "/* Hide \"On GitHub\" section from versions menu */\ndiv.rst-versions > div.rst-other-versions > div.injected > dl:nth-child(3) {\n    display: none;\n}"
  },
  {
    "path": "docs/_templates/breadcrumbs.html",
    "content": "{%- extends \"sphinx_rtd_theme/breadcrumbs.html\" %}\n\n{% block breadcrumbs_aside %}\n{% endblock %}"
  },
  {
    "path": "docs/acl.rst",
    "content": ".. Images\n.. |ACL_1| image:: images/ACL_1.jpg\n.. |ACL_2| image:: images/ACL_2.jpg\n.. |ACL_3| image:: images/ACL_3.jpg\n.. |ACL_4| image:: images/ACL_4.jpg\n.. |ACL_5| image:: images/ACL_5.jpg\n.. |ACL_6| image:: images/ACL_6.jpg\n.. |ACL_7| image:: images/ACL_7.jpg\n.. |ACL_8| image:: images/ACL_8.jpg\n.. |ACL_9| image:: images/ACL_9.jpg\n\nAccess Control Lists\n--------------------\n\nControl features within Fledge have the ability to add access control to every stage of the control process. Access control lists are used to limit which services have access to specific south services or scripts within the Fledge system.\n\nGraphical Interface\n~~~~~~~~~~~~~~~~~~~\n\nA graphical interfaces is available to allow the creation and management of the access control lists used within the Fledge system. This is available within the *Control Dispatcher* menu item of the Fledge graphical interface.\n\n+---------+\n| |ACL_1| |\n+---------+\n\nAdding An ACL\n#############\n\nClick on the *Add* button in the top right corner of the ACL screen, the following screen will then be displayed.\n\n+---------+\n| |ACL_2| |\n+---------+\n\nYou can enter a name for your access control list in the *Name* item on the screen. You should give each of your ACLs a unique name, this may be anything you like, but should ideally be descriptive or memorable as this is the name you will use when associating the ACL with services and scripts.\n\nThe *Services* section is use to define a set of services that this ACL is allowing access for. You may select services either by name or by service type. Multiple services may be granted access by a single ACL.\n\n+---------+\n| |ACL_3| |\n+---------+\n\nTo add a named service to the ACL select the names drop down list and select the service name from those displayed. 
The display will change to show the service that you added to the ACL.\n\n+---------+\n| |ACL_4| |\n+---------+\n\nMore names may be added to the ACL by selecting the drop down again.\n\n+---------+\n| |ACL_5| |\n+---------+\n\nIf you wish to remove a named service from the list of services, simply click on the small *x* to the left of the service name you wish to remove.\n\nIt is also possible to add a service type to an ACL. In this case all services of this type in the local Fledge instance will be given access via this ACL.\n\n+---------+\n| |ACL_6| |\n+---------+\n\nFor example, to create an ACL that grants access to all north services, you would select *Northbound* in the *Services Types* drop down list.\n\n+---------+\n| |ACL_7| |\n+---------+\n\nThe *URLs* section of the ACL is used to grant access to specific URLs accessing the system.\n\n.. note::\n\n  This is intended to allow control access via the REST API of the Fledge instance and is currently not implemented in Fledge. \n\nOnce you are satisfied with the content of your access control list, click on the *Save* button at the bottom of the page. You will be taken back to a display of the list of ACLs defined in your system.\n\n+---------+\n| |ACL_8| |\n+---------+\n\nUpdating an ACL\n###############\n\nIn the page that displays the set of ACLs in your system, click on the name of the ACL you wish to update; a page will then be displayed showing the current contents of the ACL.\n\n+---------+\n| |ACL_9| |\n+---------+\n\nTo completely remove the ACL from the system, click on the *Delete* button at the bottom of the page.\n\nYou may add and remove service names and types using the same procedure you used when adding the ACL.\n\nOnce you are happy with your updated ACL, click on the *Save* button.\n"
  },
  {
    "path": "docs/build_index.rst",
    "content": "*****************\nBuilding Fledge\n*****************\n\n.. toctree::\n\n    building_fledge/index\n    RASPBIAN\n"
  },
  {
    "path": "docs/building_fledge/01_introduction.rst",
    "content": ".. Fledge documentation master file, created by\n   sphinx-quickstart on Fri Sep 22 02:34:49 2017.\n   You can adapt this file completely to your liking, but it should at least\n   contain the root `toctree` directive.\n\n.. Images\n.. |fledge_all_round| image:: ../images/fledge_all_round_solution.jpg\n\n.. Links\n\n************\nIntroduction\n************\n\nWhat Is Fledge?\n================\n\nFledge is an open source platform for the **Internet of Things** and an\nessential component in **Fog Computing**.  It uses a modular\n**microservices architecture** including sensor data collection, storage, processing and forwarding to historians, Enterprise systems and Cloud-based services. Fledge can run in highly available, stand alone, unattended environments that assume unreliable network connectivity.\n\nBy providing a modular and distributable framework under an open source Apache v2 license, Fledge is the best platform to manage the data infrastructure for IoT. The modules can be distributed in any layer - Edge, Fog and Cloud - and they act together to provide scalability, elasticity and resilience.\n\nFledge offers an \"all-round\" solution for data management, combining a bi-directional **Northbound/Southbound** data and metadata communication with a **Eastbound/Westbound** service and object distribution.\n\n\nFledge Positioning in an IoT and IIoT Infrastructure\n-----------------------------------------------------\n\nFledge can be used in IoT and IIoT infrastructure at Edge and in the Fog.\nIt stretches bi-directionally South-North/North-South and it is distributed\nEast-West/West-East (see figure below).\n\n|fledge_all_round|\n\n.. note:: In this scenario we refer to “Cloud” as the layer above the Fog. “Fog” is where historians, gateways and middle servers coexist. 
In practice, the Cloud may also represent internal Enterprise systems, concentrated in regional or global corporate data centers, where larger historians, Big Data and analytical systems reside.\n\nIn practical terms, this means that:\n\n- Intra-layer communication and data exchange:\n\n  - At the **Edge**, microservices are installed on devices, sensors and actuators.\n  - In the **Fog**, data is collected and aggregated in gateways and regional servers.\n  - In the **Cloud**, data is distributed and analysed on multiple servers, such as Big Data Systems and Data Historians.\n\n- Inter-layer communication and data exchange:\n\n  - From **Edge to Fog**, data is retrieved from multiple sensors and devices and it is aggregated on resilient and highly available middle servers and gateways, whether in traditional Data Historians or in newer Machine Learning systems.\n  - From **Fog to Edge**, configuration information, metadata and other valuable data is transferred to sensors and devices.\n  - From **Fog to Cloud**, the data collected and optionally transformed is transferred to more powerful distributed Cloud and Enterprise systems.\n  - From **Cloud to Fog**, results of complex analysis and other valuable information are sent to the designated gateways and middle servers that will interact with the Edge.\n\n- Intra-layer service distribution:\n\n  - A microservice architecture based on secure communication allows lightweight service distribution and information exchange among **Edge to Edge** devices.\n  - Fledge provides high availability, scalability and data distribution among **Fog-to-Fog** systems. Due to its portability and modularity, Fledge can be installed on a large number of intermediate servers and gateways, as application instances, appliances, containers or virtualized environments.\n  - **Cloud to Cloud Fledge server** capabilities provide scalability and elasticity in data storage, retrieval and analytics. 
The data collected at the Edge and Fog, also combined with external data, can be distributed to multiple systems within a Data Center and replicated to multiple Data Centers to guarantee local and faster access.\n\nAll these operations are **scheduled, automated** and **executed securely, unattended** and in a **transactional** fashion (i.e. the system can always revert to a previous state in case of failures or unexpected events).\n\n\nFledge Features\n================\n\nIn a nutshell, these are the main features of Fledge:\n\n- Transactional, always on, server platform designed to work unattended and with zero maintenance.\n- Microservice architecture with secured inter-communication:\n\n  - Core System\n  - Storage Layer\n  - South side, sensors and device communication\n  - North side, Cloud and Enterprise communication\n  - Application Modules, internal application logic\n\n- Pluggable modules for:\n\n  - South side: multiple, data and metadata bi-directional communication\n  - North side: multiple, data and metadata bi-directional communication\n  - East/West side: IN/OUT Communicator with external applications\n  - Plus:\n\n    - Data and communication authentication\n    - Data and status monitoring and alerting\n    - Data transformation\n    - Data storage and retrieval\n\n- Small memory and processing footprint. Fledge can be installed and executed on inexpensive Edge devices; microservices can be distributed on sensors and actuator boards.\n- Resilient and optionally highly available.\n- Discoverable and cluster-based.\n- Based on APIs (RESTful and non-RESTful) to communicate with sensors and other devices, to interact with user applications, to manage the platform and to be integrated with a Cloud or Data Center-based data infrastructure.\n- Hardened with default secure communication that can be optionally relaxed.\n"
  },
  {
    "path": "docs/building_fledge/04_installation.rst",
    "content": ".. Fledge installation describes how to install Fledge\n\n.. |br| raw:: html\n\n   <br />\n\n.. Images\n\n.. Links\n\n.. Links in new tabs\n\n.. |build fledge| raw:: html\n\n   <a href=\"building_fledge.html\" target=\"_blank\">here</a>\n\n.. |snappy| raw:: html\n\n   <a href=\"https://en.wikipedia.org/wiki/Snappy_(package_manager)\" target=\"_blank\">Snappy</a>\n\n.. |snapcraft| raw:: html\n\n   <a href=\"https://snapcraft.io\" target=\"_blank\">snapcraft.io</a>\n\n.. |x86 Package| raw:: html\n\n   <a href=\"https://s3.amazonaws.com/fledge/snaps/x86_64/fledge_1.0_amd64.snap\" target=\"_blank\">Snap for Intel x86_64 architecture</a>\n\n.. |ARM Package| raw:: html\n\n   <a href=\"https://s3.amazonaws.com/fledge/snaps/armhf/fledge_1.0_armhf.snap\" target=\"_blank\">Snap for ARM (armhf - ARM hard float) / Raspberry PI 2 & 3</a>\n\n.. |Downloads page| raw:: html\n\n   <a href=\"../92_downloads.html\" target=\"_blank\">Downloads page</a>\n\n\n.. =============================================\n\n\n********************\nFledge Installation\n********************\n\nInstalling Fledge using defaults is straightforward: depending on the usage, you may install a new version from source or from a pre-built package. In environments where the defaults do not fit, you will need to execute few more steps. This chapter describes the default installation of Fledge and the most common scenarios where administrators need to modify the default behavior.\n\n\nInstalling Fledge from a Build\n===============================\n\nOnce you have built Fledge following the instructions presented |build fledge|, you can execute the default installation with the ``make install`` command. By default, Fledge is installed from build in the root directory, under */usr/local/fledge*. Since the root directory */* is a protected a system location, you will need superuser privileges to execute the command. 
Therefore, if you are not superuser, you should login as superuser or you should use the ``sudo`` command.\n\n.. code-block:: console\n\n  $ sudo make install\n  mkdir -p /usr/local/fledge\n  Installing Fledge version 1.8.0, DB schema 2\n  -- Fledge DB schema check OK: Info: /usr/local/fledge is empty right now. Skipping DB schema check.\n  cp VERSION /usr/local/fledge\n  cd cmake_build ; cmake /home/fledge/Fledge/\n  -- Boost version: 1.58.0\n  -- Found the following Boost libraries:\n  --   system\n  --   thread\n  --   chrono\n  --   date_time\n  --   atomic\n  -- Found SQLite version 3.11.0: /usr/lib/x86_64-linux-gnu/libsqlite3.so\n  -- Boost version: 1.58.0\n  -- Found the following Boost libraries:\n  --   system\n  --   thread\n  --   chrono\n  --   date_time\n  --   atomic\n  -- Configuring done\n  -- Generating done\n  -- Build files have been written to: /home/fledge/Fledge/cmake_build\n  cd cmake_build ; make\n  make[1]: Entering directory '/home/fledge/Fledge/cmake_build'\n  ...\n  $\n\nThese are the main steps of the installation:\n\n- Create the */usr/local/fledge* directory, if it does not exist\n- Build the code that has not been compiled and built yet\n- Create all the necessary destination directories and copy the executables, scripts and configuration files\n- Change the ownership of the *data* directory, if the install user is a superuser (we recommend running Fledge as a regular user, i.e. not as superuser).\n\nFledge is now present in */usr/local/fledge* and ready to start. The start script is in the */usr/local/fledge/bin* directory\n\n.. 
code-block:: console\n\n  $ cd /usr/local/fledge/\n  $ ls -l\n  total 32\n  drwxr-xr-x 2 root    root    4096 Apr 24 18:07 bin\n  drwxr-xr-x 4 fledge fledge 4096 Apr 24 18:07 data\n  drwxr-xr-x 4 root    root    4096 Apr 24 18:07 extras\n  drwxr-xr-x 4 root    root    4096 Apr 24 18:07 plugins\n  drwxr-xr-x 3 root    root    4096 Apr 24 18:07 python\n  drwxr-xr-x 6 root    root    4096 Apr 24 18:07 scripts\n  drwxr-xr-x 2 root    root    4096 Apr 24 18:07 services\n  -rwxr-xr-x 1 root    root      37 Apr 24 18:07 VERSION\n  $\n  $ bin/fledge\n  Usage: fledge {start|start --safe-mode|stop|status|reset|kill|help|version}\n  $\n  $ bin/fledge help\n  Usage: fledge {start|start --safe-mode|stop|status|reset|kill|help|version}\n  Fledge v1.3.1 admin script\n  The script is used to start Fledge\n  Arguments:\n   start             - Start Fledge core (core will start other services).\n   start --safe-mode - Start in safe mode (only core and storage services will be started)\n   stop              - Stop all Fledge services and processes\n   kill              - Kill all Fledge services and processes\n   status            - Show the status for the Fledge services\n   reset             - Restore Fledge factory settings\n                       WARNING! This command will destroy all your data!\n   version           - Print Fledge version\n   help              - This text\n  $\n  $ bin/fledge start\n  Starting Fledge......\n  Fledge started.\n  $ \n\n\nEnvironment Variables\n---------------------\n\nIn order to operate, Fledge requires two environment variables:\n\n- **FLEDGE_ROOT**: the root directory for Fledge. The default is */usr/local/fledge*\n- **FLEDGE_DATA**: the data directory. 
The default is *$FLEDGE_ROOT/data*, hence whichever value *FLEDGE_ROOT* has plus the *data* sub-directory, or */usr/local/fledge/data* in case *FLEDGE_ROOT* is set as default value.\n\n\nThe setenv.sh Script\n--------------------\n\nIn the *extras/scripts* folder of the newly installed Fledge you can find the *setenv.sh* script. This script can be used to set the environment variables used by Fledge and update your PATH environment variable. |br|\nYou can call the script from your shell or you can add the same command to your *.profile* script:\n\n.. code-block:: console\n\n  $ cat /usr/local/fledge/extras/scripts/setenv.sh\n  #!/bin/sh\n\n  ##--------------------------------------------------------------------\n  ## Copyright (c) 2018 OSIsoft, LLC\n  ##\n  ## Licensed under the Apache License, Version 2.0 (the \"License\");\n  ## you may not use this file except in compliance with the License.\n  ## You may obtain a copy of the License at\n  ##\n  ##     http://www.apache.org/licenses/LICENSE-2.0\n  ##\n  ## Unless required by applicable law or agreed to in writing, software\n  ## distributed under the License is distributed on an \"AS IS\" BASIS,\n  ## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n  ## See the License for the specific language governing permissions and\n  ## limitations under the License.\n  ##--------------------------------------------------------------------\n\n  #\n  # This script sets the user environment to facilitate the administration\n  # of Fledge\n  #\n  # You can execute this script from shell, using for example this command:\n  #\n  # source /usr/local/fledge/extras/scripts/setenv.sh\n  #\n  # or you can add the same command at the bottom of your profile script\n  # {HOME}/.profile.\n  #\n\n  export FLEDGE_ROOT=\"/usr/local/fledge\"\n  export FLEDGE_DATA=\"${FLEDGE_ROOT}/data\"\n\n  export PATH=\"${FLEDGE_ROOT}/bin:${PATH}\"\n  export LD_LIBRARY_PATH=\"${FLEDGE_ROOT}/lib:${LD_LIBRARY_PATH}\"\n\n  $ source 
/usr/local/fledge/extras/scripts/setenv.sh\n  $\n\n\nThe fledge.service Script\n--------------------------\n\nAnother file available in the *extras/scripts* folder is the fledge.service script. This script can be used to set Fledge as a Linux service. If you wish to do so, we recommend installing the Fledge package, but if you have a special build or for other reasons you prefer to work with Fledge built from source, this script will be quite helpful.\n\nYou can install Fledge as a service following these simple steps:\n\n- After the ``make install`` command, copy *fledge.service* to the */etc/init.d* folder with the simple name *fledge*.\n- Execute the command ``systemctl enable fledge.service`` to enable Fledge as a service\n- Execute the command ``systemctl start fledge.service`` if you want to start Fledge\n\n.. code-block:: console\n\n  $ sudo cp /usr/local/fledge/extras/scripts/fledge.service /etc/init.d/fledge\n  $ sudo systemctl status fledge.service\n  ● fledge.service\n     Loaded: not-found (Reason: No such file or directory)\n     Active: inactive (dead)\n  $ sudo systemctl enable fledge.service\n  fledge.service is not a native service, redirecting to systemd-sysv-install\n  Executing /lib/systemd/systemd-sysv-install enable fledge\n  $ sudo systemctl status fledge.service\n  ● fledge.service - LSB: Fledge\n     Loaded: loaded (/etc/init.d/fledge; bad; vendor preset: enabled)\n     Active: inactive (dead)\n       Docs: man:systemd-sysv-generator(8)\n  $ sudo systemctl start fledge.service\n  $ sudo systemctl status fledge.service\n  ● fledge.service - LSB: Fledge\n   Loaded: loaded (/etc/init.d/fledge; generated)\n   Active: active (running) since Thu 2020-05-28 18:42:07 IST; 9min ago\n     Docs: man:systemd-sysv-generator(8)\n   Process: 5047 ExecStart=/etc/init.d/fledge start (code=exited, status=0/SUCCESS)\n     Tasks: 27 (limit: 4680)\n   CGroup: /system.slice/fledge.service\n           ├─5123 python3 -m fledge.services.core\n           ├─5331 
/usr/local/fledge/services/fledge.services.storage --address=0.0.0.0 --port=34827\n           ├─8119 /bin/sh tasks/north_c --port=34827 --address=127.0.0.1 --name=OMF to PI north\n           └─8120 ./tasks/sending_process --port=34827 --address=127.0.0.1 --name=OMF to PI north\n\n  ...\n  $\n\n|br|\n\n\nInstalling the Debian Package\n=============================\n\nWe have versions of Fledge available as Debian packages for you. Check the |Downloads page| to review which versions and platforms are available.\n\n\nObtaining and Installing the Debian Package\n-------------------------------------------\n\nCheck the |Downloads page| to find the package to install.\n\nOnce you have downloaded the package, install it using the ``apt-get`` command. You can use ``apt-get`` to install a local Debian package and automatically retrieve all the necessary packages that are defined as pre-requisites for Fledge.  Note that you may need to install the package as superuser (or by using the ``sudo`` command) and move the package to the apt cache directory first (``/var/cache/apt/archives``).\n\nFor example, if you are installing Fledge on an Intel x86_64 machine, you can type this command to download the package:\n\n.. code-block:: console\n\n  $ wget https://fledge-iot.s3.us-east-1.amazonaws.com/latest/ubuntu2004/x86_64/fledge.tgz\n  --2025-06-26 13:08:24--  https://fledge-iot.s3.us-east-1.amazonaws.com/latest/ubuntu2004/x86_64/fledge.tgz\n  Resolving fledge-iot.s3.us-east-1.amazonaws.com (fledge-iot.s3.us-east-1.amazonaws.com)... 54.231.233.50, 54.231.232.162, 16.182.38.114, ...\n  Connecting to fledge-iot.s3.us-east-1.amazonaws.com (fledge-iot.s3.us-east-1.amazonaws.com)|54.231.233.50|:443... connected.\n  HTTP request sent, awaiting response... 
200 OK\n  Length: 42015699 (40M) [application/x-tar]\n  Saving to: ‘fledge.tgz’\n\n  fledge.tgz                                          100%[===================================================================================================================>]  40.07M  7.03MB/s    in 17s     \n\n  2025-06-26 13:08:42 (2.40 MB/s) - ‘fledge.tgz’ saved [42015699/42015699]\n  $\n\nWe recommend executing an *update-upgrade-update* of the system first; then you may untar the fledge.tgz file, copy the Fledge package into the *apt cache* directory and install it.\n\n\n.. code-block:: console\n\n  $ sudo apt update\n  Hit:1 http://gb.archive.ubuntu.com/ubuntu xenial InRelease\n  ...\n  $ sudo apt upgrade\n  ...\n  $ sudo apt update\n  ...\n  $ sudo cp fledge-1.8.0-x86_64.deb /var/cache/apt/archives/.\n  ...\n  $ sudo apt install /var/cache/apt/archives/fledge-1.8.0-x86_64.deb\n  Reading package lists... Done\n  Building dependency tree\n  Reading state information... Done\n  Note, selecting 'fledge' instead of '/var/cache/apt/archives/fledge-1.8.0-x86_64.deb'\n  The following packages were automatically installed and are no longer required:\n  ...\n  Unpacking fledge (1.8.0) ...\n  Setting up fledge (1.8.0) ...\n  Resolving data directory\n  Data directory does not exist. Using new data directory\n  Installing service script\n  Generating certificate files\n  Certificate files do not exist. Generating new certificate files.\n  Creating a self signed SSL certificate ...\n  Certificates created successfully, and placed in data/etc/certs\n  Generating auth certificate files\n  CA Certificate file does not exist. Generating new CA certificate file.\n  Creating ca SSL certificate ...\n  ca certificate created successfully, and placed in data/etc/certs\n  Admin Certificate file does not exist. 
Generating new admin certificate file.\n  Creating user SSL certificate ...\n  user certificate created successfully for admin, and placed in data/etc/certs\n  User Certificate file does not exist. Generating new user certificate file.\n  Creating user SSL certificate ...\n  user certificate created successfully for user, and placed in data/etc/certs\n  Setting ownership of Fledge files\n  Calling Fledge package update script\n  Linking update task\n  Changing setuid of update_task.apt\n  Removing task/update\n  Create link file\n  Copying sudoers file\n  Setting setuid bit of cmdutil\n  Enabling Fledge service\n  fledge.service is not a native service, redirecting to systemd-sysv-install.\n  Executing: /lib/systemd/systemd-sysv-install enable fledge\n  Starting Fledge service\n  $ \n\nAs you can see from the output, the installation automatically registers Fledge as a service, so it will come up at startup and is already up and running when the command completes.\n\nCheck the newly installed package:\n\n.. code-block:: console\n\n  $ sudo dpkg -l | grep fledge\n  ii  fledge     1.8.0       amd64        Fledge, the open source platform for the Internet of Things\n  $\n\n\nYou can also check the service currently running:\n\n.. 
code-block:: console\n\n  $ sudo systemctl status fledge.service\n  ● fledge.service - LSB: Fledge\n   Loaded: loaded (/etc/init.d/fledge; generated)\n   Active: active (running) since Thu 2020-05-28 18:42:07 IST; 9min ago\n     Docs: man:systemd-sysv-generator(8)\n   Process: 5047 ExecStart=/etc/init.d/fledge start (code=exited, status=0/SUCCESS)\n     Tasks: 27 (limit: 4680)\n   CGroup: /system.slice/fledge.service\n           ├─5123 python3 -m fledge.services.core\n           ├─5331 /usr/local/fledge/services/fledge.services.storage --address=0.0.0.0 --port=34827\n           ├─8119 /bin/sh tasks/north_c --port=34827 --address=127.0.0.1 --name=OMF to PI north\n           └─8120 ./tasks/sending_process --port=34827 --address=127.0.0.1 --name=OMF to PI north\n\n  ...\n  $\n\n\nCheck if Fledge is up and running with the ``fledge`` command:\n\n.. code-block:: console\n\n  $ /usr/local/fledge/bin/fledge status\n  Fledge v1.8.0 running.\n  Fledge Uptime:  162 seconds.\n  Fledge records: 0 read, 0 sent, 0 purged.\n  Fledge does not require authentication.\n  === Fledge services:\n  fledge.services.core\n  ...\n  === Fledge tasks:\n  ...\n  $\n\n\nDon't forget to add the *setenv.sh* script available in the */usr/local/fledge/extras/scripts* directory to your *.profile* user startup script if you want to have easy access to the Fledge tools, and...\n\n\n...Congratulations! This is all you need to do: Fledge is now ready to run.\n\n\nUpgrading or Downgrading Fledge\n--------------------------------\n\nUpgrading or downgrading Fledge, starting from version 1.2, is as easy as installing it from scratch: simply follow the instructions in the previous section regarding the installation and the package will take care of the upgrade/downgrade path. The installation will not proceed if there is no path to upgrade or downgrade from the currently installed version. You should still check the pre-requisites before you apply the upgrade. 
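\n\nIn practice, an upgrade is simply a re-run of the installation commands with the new package. A condensed sketch, assuming the new package file is again named *fledge-1.8.0-x86_64.deb* as in the installation example above:\n\n.. code-block:: console\n\n  $ sudo apt update\n  $ sudo cp fledge-1.8.0-x86_64.deb /var/cache/apt/archives/.\n  $ sudo apt install /var/cache/apt/archives/fledge-1.8.0-x86_64.deb\n\n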
The old data will not be lost; a schema upgrade or downgrade will be performed, if required.\n\n\nUninstalling the Debian Package\n-------------------------------\n\nUse the ``apt`` or the ``apt-get`` command to uninstall Fledge:\n\n.. code-block:: console\n\n  $ sudo apt purge fledge\n  Reading package lists... Done\n  Building dependency tree\n  Reading state information... Done\n  The following packages were automatically installed and are no longer required:\n    libmodbus-dev libmodbus5\n  Use 'sudo apt autoremove' to remove them.\n  The following packages will be REMOVED:\n    fledge*\n  0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.\n  After this operation, 0 B of additional disk space will be used.\n  Do you want to continue? [Y/n] y\n  (Reading database ... 160251 files and directories currently installed.)\n  Removing fledge (1.8.0) ...\n  Fledge is currently running.\n  Stop Fledge service.\n  Kill Fledge.\n  Disable Fledge service.\n  fledge.service is not a native service, redirecting to systemd-sysv-install.\n  Executing: /lib/systemd/systemd-sysv-install disable fledge\n  Remove Fledge service script\n  Reset systemctl\n  Cleanup of files\n  Remove fledge sudoers file\n  (Reading database ... 159822 files and directories currently installed.)\n  Purging configuration files for fledge (1.8.0) ...\n  Cleanup of files\n  Remove fledge sudoers file\n  dpkg: warning: while removing fledge, directory '/usr/local/fledge' not empty so not removed\n  $\n\nThe command also removes the installed service. |br| You may notice the warning in the last row of the command output: this is because the data directory (``/usr/local/fledge/data`` by default) has not been removed, in case an administrator wants to analyze or reuse the data.\n\n|br|\n"
  },
  {
    "path": "docs/building_fledge/04_utilities.rst",
"content": ".. Utilities and Scripts\n.. https://docs.google.com/document/d/1JJDP7g25SWerNVCxgff02qp9msHbqA9nt3RAFx8-Qng\n\n.. |br| raw:: html\n\n   <br />\n\n.. Images\n\n\n.. Links\n\n.. Links in new tabs\n\n\n.. =============================================\n\n\n*****************************\nFledge Utilities and Scripts\n*****************************\n\nThe Fledge platform comes with a set of utilities and scripts to help users, developers and administrators with their day-to-day operations. These tools are under heavy development and you may expect incompatibilities in future versions; therefore it is highly recommended to check the revision history to verify the changes in new versions.\n\n\nfledge\n=======\n\n``fledge`` is the first utility available with the platform; it is the control center for all the admin operations on Fledge.\n\nIn the current implementation, *fledge* provides these features:\n\n- *start* Fledge\n- *stop* Fledge\n- *kill* Fledge processes\n- Check the *status* of Fledge, i.e. whether it is running, starting or not running\n- *reset* Fledge to its factory settings\n\n\nStarting Fledge\n----------------\n\n``fledge start`` is the command to start Fledge. Since only one core microservice of Fledge can be executed in the same environment, the command checks whether Fledge is already running and, if it is, it exits. The command also checks the presence of the *FLEDGE_ROOT* and *FLEDGE_DATA* environment variables. If the variables have not been set, it verifies if Fledge has been installed in the default location, which is */usr/local/fledge*, or in a location defined by the installed package, and it will set the missing variables accordingly. It will also take care of the *PYTHONPATH* variable.\n\nIn more specific terms, the command executes these steps:\n\n- Check if Fledge is already running\n- Check if the storage layer is *managed* or *unmanaged*. \"managed\" means that the storage layer relies on a storage system (i.e. 
a database, a set of files or in-memory structures) that is under the exclusive control of Fledge. \"unmanaged\" means that the storage system is generic and potentially shared with other applications.\n- Check if the storage plugin and the related storage system (for example a PostgreSQL database) are available. \n- Check if the metadata structure that is necessary to execute Fledge is already available in the storage layer. If the metadata is not available, it creates the data model and sets the factory settings that are necessary to start and use Fledge.\n- Start the core microservice.\n- Wait until the core microservice starts the Storage microservice and the initial required processes that are necessary to handle other tasks and microservices.\n\n\nSafe Mode\n---------\n\nIt is possible to start Fledge in safe mode by passing the flag ``--safe-mode`` to the start command. In safe mode Fledge\nwill not start any of the south services or schedule any tasks, such as purge or north bound tasks. Safe mode allows\nFledge to be started and configured in those situations where a previous misconfiguration has rendered it impossible to\nstart and interact with Fledge.\n\nOnce started in safe mode any configuration changes should be made and then Fledge should be restarted in normal mode\nto test those configuration changes.\n\n\nStopping Fledge\n----------------\n\n``fledge stop`` is the command used to stop Fledge. The command waits until all the tasks and services have completed; then it stops the core service.\n\n\nIf Fledge Does Not Stop\n------------------------\n\nIf Fledge does not stop, i.e. if by using the process status command ``ps`` you see Fledge processes still running, you can use ``fledge kill`` to kill them.\n\n.. note:: The command issues a ``kill -9`` against the processes associated with Fledge. This is not recommended, unless Fledge cannot be stopped with the *stop* command. In other words, *kill* is your last resort before a reboot. 
If you must use the kill command, it means that there is a problem: please report this to the Fledge project slack channel.\n\n\nChecking the Status of Fledge\n------------------------------\n\n``fledge status`` is used to provide the current status of tasks and microservices on the machine. The output is something like:\n\n.. code-block:: console\n\n  $ fledge status\n  Fledge running.\n  Fledge uptime:  2034 seconds.\n  === Fledge services:\n  fledge.services.core\n  fledge.services.south --port=33074 --address=127.0.0.1 --name=HTTP_SOUTH\n  fledge.services.south --port=33074 --address=127.0.0.1 --name=COAP\n  === Fledge tasks:\n  $ fledge_use_from_here stop\n  Fledge stopped.\n  $ fledge_use_from_here status\n  Fledge not running.\n  $\n\n- The first row always indicates if Fledge is running or not\n- The second row provides the uptime in seconds\n- The next set of rows provides information regarding the microservices running on the machine\n- The last set of rows provides information regarding the tasks running on the machine\n\n\nResetting Fledge\n-----------------\n\nIt may occur that you want to restore Fledge to its factory settings, and this is what ``fledge reset`` does. The command destroys all the data and all the configuration currently stored in Fledge, so you must use it at your own risk!\n\nFledge can be restored to its factory settings only when it is not running, hence you should stop it first. \n\nThe command forces you to enter the word *YES*, all in uppercase, to continue:\n\n.. code-block:: console\n\n  $ fledge reset\n  This script will remove all data stored in the server.\n  Enter YES if you want to continue: YES\n  $\n\n\n"
  },
  {
    "path": "docs/building_fledge/05_tasks.rst",
"content": ".. Tasks\n\n.. |br| raw:: html\n\n   <br />\n\n.. Images\n\n.. Links\n\n.. Links in new tabs\n\n.. =============================================\n\n\n*************\nFledge Tasks\n*************\n\nTasks are part of the Fledge IoT platform. They are like services, but with a clear distinction:\n\n- *services* are started at a certain point (usually at startup) and they are likely to continue to work until Fledge stops. \n- *tasks* are started when required; they execute a job and then terminate.\n\nIn simple terms, a service is meant to always listen and react to requests, while a task is triggered by an event and, when its job is terminated, the task ends.\n\nThat said, tasks and services share the same features:\n\n- They are both started by the Fledge scheduler. It is likely that services are started at startup, while tasks can start at a given time or interval.\n- They both use the internal API to communicate with other services.\n- They both use the same pluggable architecture to separate a common logic, usually associated with the internal features of Fledge, from a more generic logic, usually closer to the type of operations that must be performed.\n\nIn this chapter we present a set of tasks that are commonly available in Fledge.\n\n\nPurge\n=====\n\nThe *Purge* task is triggered by the scheduler to purge old data that is still stored (buffered) in Fledge. The logic applied to the task is relatively simple:\n\n- The task is called exclusively (i.e. 
there cannot be more than one *Purge* task running at any given time) by the Fledge scheduler every hour (by default).\n- Data that is older than a certain date/time is removed.\n- Optionally, data is removed if the total size of the stored objects is bigger than 1GByte (default).\n- Optionally, data is not removed if it has not been extracted and used by any North task or service yet.\n- All purge operations are stored in the audit log.\n\n\nPurge Schedule\n--------------\n\n*Purge* is one of the tasks launched by the Fledge scheduler. You can retrieve information about the scheduling by calling the *GET* method of the *schedule* call. The name and the process name of the task are both *purge*:\n\n.. code-block:: console\n\n  $ curl -sX GET http://localhost:8081/fledge/schedule\n  ...\n  { \"id\"          : \"cea17db8-6ccc-11e7-907b-a6006ad3dba0\",\n    \"name\"        : \"purge\",\n    \"time\"        : 0,\n    \"enabled\"     : true,\n    \"repeat\"      : 3600,\n    \"type\"        : \"INTERVAL\",\n    \"exclusive\"   : true,\n    \"processName\" : \"purge\",\n    \"day\"         : null },\n  ...\n  $\n\nAs you can see from the JSON output, the task is scheduled to be executed every hour (3,600 seconds). In order to change the interval between *Purge* tasks, you can call the *PUT* method of the *schedule* call by passing the associated *id*. For example, in order to change the task to be executed every 5 minutes (i.e. 300 seconds) you should call:\n\n.. 
code-block:: console\n\n  $ curl -sX PUT http://localhost:8081/fledge/schedule/cea17db8-6ccc-11e7-907b-a6006ad3dba0 -d '{\"repeat\": 300}'\n  { \"schedule\": { \"id\": \"cea17db8-6ccc-11e7-907b-a6006ad3dba0\",\n                  \"name\"        : \"purge\",\n                  \"time\"        : 0,\n                  \"enabled\"     : true,\n                  \"repeat\"      : 300,\n                  \"type\"        : \"INTERVAL\",\n                  \"exclusive\"   : true,\n                  \"processName\" : \"purge\",\n                  \"day\"         : null }\n  }\n  $\n\n\nPurge Configuration\n-------------------\n\nThe configuration of the *Purge* task is stored in the metadata structures of Fledge and it can be retrieved using the *GET* method of the *category/PURGE_READ* call. This is the command used to retrieve the configuration in JSON format:\n\n.. code-block:: console\n\n  $ curl -sX GET http://localhost:8081/fledge/category/PURGE_READ\n  { \"retainUnsent\" : { \"type\": \"boolean\",\n                       \"default\": \"False\",\n                       \"description\": \"Retain data that has not been sent to any historian yet.\",\n                       \"value\": \"False\" },\n    \"age\"          : { \"type\": \"integer\",\n                       \"default\": \"72\",\n                       \"description\": \"Age of data to be retained, all data that is older than this value will be removed,unless retained. (in Hours)\",\n                       \"value\": \"72\" },\n    \"size\"         : { \"type\": \"integer\",\n                       \"default\": \"1000000\",\n                       \"description\": \"Maximum size of data to be retained, the oldest data will be removed to keep below this size, unless retained. (in Kbytes)\",\n                       \"value\": \"1000000\" } }\n  $\n\n\nChanges can be applied using the *PUT* method for each parameter call. 
For example, in order to change the retention policy for data that has not been sent to historians yet, you can use this call:\n\n.. code-block:: console\n\n  $ curl -sX PUT http://localhost:8081/fledge/category/PURGE_READ/retainUnsent -d '{\"value\": \"True\"}'\n  { \"type\": \"boolean\",\n    \"default\": \"False\",\n    \"description\": \"Retain data that has not been sent to any historian yet.\",\n    \"value\": \"True\" }\n  $\n\nThe following table shows the list of parameters that can be changed in the *Purge* task:\n\n.. list-table::\n    :widths: 20 20 20 80\n    :header-rows: 1\n\n    * - Item\n      - Type\n      - Default\n      - Description\n    * - retainUnsent\n      - boolean\n      - False\n      - Retain data that has not been sent to \"North\" yet. When *True*, data that has not yet been retrieved by any North service or task will not be purged. When *False*, data is purged without checking whether it has been sent to a North destination yet or not.\n    * - age\n      - integer\n      - 72\n      - Age in hours of the data to be retained. Data that is older than this value will be purged.\n    * - size\n      - integer\n      - 1000000\n      - Size in KBytes of data that will be retained in Fledge. Older data will be removed to keep the data stored in Fledge below this size.\n"
  },
  {
    "path": "docs/building_fledge/06_testing.rst",
    "content": ".. Fledge testing describes how to test Fledge\n\n.. |br| raw:: html\n\n   <br />\n\n.. Images\n\n.. |postman_ping| image:: https://s3.amazonaws.com/fledge/readthedocs/images/05_postman_ping.jpg\n   :target: https://s3.amazonaws.com/fledge-iot/readthedocs/images/05_postman_ping.jpg\n\n.. |win_server_waiting| image:: https://s3.amazonaws.com/fledge/readthedocs/images/05_win_server_waiting.jpg\n   :target: https://s3.amazonaws.com/fledge-iot/readthedocs/images/05_win_server_waiting.jpg\n\n.. |pi_loaded| image:: https://s3.amazonaws.com/fledge/readthedocs/images/05_pi_loaded.jpg\n   :target: https://s3.amazonaws.com/fledge-iot/readthedocs/images/05_pi_loaded.jpg\n\n.. Links\n\n.. Links in new tabs\n\n.. |curl| raw:: html\n\n   <a href=\"https://en.wikipedia.org/wiki/CURL\" target=\"_blank\">curl</a>\n\n.. |postman| raw:: html\n\n   <a href=\"https://www.getpostman.com\" target=\"_blank\">Postman</a>\n\n.. |here OSIsoft| raw:: html\n\n   <a href=\"https://www.osisoft.com/\" target=\"_blank\">here</a>\n\n.. |here OMF| raw:: html\n\n   <a href=\"http://omf-docs.readthedocs.io\" target=\"_blank\">here</a>\n\n.. |here PI| raw:: html\n\n   <a href=\"https://www.osisoft.com/pi-system\" target=\"_blank\">here</a>\n\n.. |jq| raw:: html\n\n   <a href=\"https://stedolan.github.io/jq\" target=\"_blank\">jq</a>\n\n.. |get start| raw:: html\n\n   <a href=\"building_fledge.html\" target=\"_blank\">Building Fledge</a>\n\n\n.. =============================================\n\n\n***************\nTesting Fledge\n***************\n\nAfter the installation, you are now ready to test Fledge. An end-to-end test involves three types of tests:\n\n- The **South** side, i.e. testing the collection of information from South microservices and associated plugins\n- The **North** side, i.e. testing the tasks that send data North to historians, databases, Enterprise and Cloud systems\n- The **East/West** side, i.e. 
testing the interaction of external applications with Fledge via REST API.\n\nThis chapter describes how to test Fledge in these three directions.\n\n\nFirst Checks: Fledge Status\n============================\n\nBefore we start, let's make sure that Fledge is up and running and that we have the tasks and services in place to execute the tests. |br| First, run the ``fledge status`` command to check if Fledge has already started. The result of the command can be:\n\n- ``Fledge not running.`` - it means that we must start Fledge with ``fledge start``\n- ``Fledge starting.`` - it means that we have started Fledge but the starting phase has not been completed yet. You should wait for a little while (from a few seconds to about a minute) to see Fledge running.\n- ``Fledge running.`` - (plus extra rows giving the uptime and other info). It means that Fledge is up and running, hence it is ready for use.\n\n\nWhen you have a running Fledge, check the extra information provided by the ``fledge status`` command:\n\n.. code-block:: console\n\n  $ fledge status\n  Fledge v1.8.0 running.\n  Fledge Uptime:  9065 seconds.\n  Fledge records: 86299 read, 86851 sent, 0 purged.\n  Fledge does not require authentication.\n  === Fledge services:\n  fledge.services.core\n  fledge.services.storage --address=0.0.0.0 --port=42583\n  fledge.services.south --port=42583 --address=127.0.0.1 --name=Sine\n  fledge.services.notification --port=42583 --address=127.0.0.1 --name=Fledge Notifications\n  === Fledge tasks:\n  fledge.tasks.purge --port=42583 --address=127.0.0.1 --name=purge\n  tasks/sending_process --port=42583 --address=127.0.0.1 --name=PI Server\n  $\n\nLet's analyze the output of the command:\n\n- ``Fledge running.`` - The Fledge Core microservice is running on this machine and it is responding to the status command as *running* because other basic microservices are also running.\n- ``Fledge uptime:  282 seconds.`` - This is a simple uptime in seconds provided by the Core microservice. 
It is equivalent to the ``ping`` method called via the REST API.\n- ``Fledge records:`` - This is a summary of the number of records received from sensors and devices (South), sent to other services (North) and purged from the buffer.\n- ``Fledge authentication`` - This row describes whether a user or an application must authenticate to Fledge in order to operate with the REST API.\n\nThe following lines provide a list of the modules running in this installation of Fledge. They are separated by dots and described in this way:\n\n  - The prefix ``fledge`` is always present and identifies the Fledge modules.\n  - The following term describes the type of module: *services* for microservices, *tasks* for tasks etc.\n  - The following term is the name of the module: *core*, *storage*, *north*, *south*, *app*, *alert*\n  - The last term is the name of the plugin executed as part of the module.\n  - Extra arguments may be available: they are the arguments passed to the module by the core when it is launched.\n\n- ``=== Fledge services:`` - This block contains the list of microservices running in the Fledge platform.\n\n  - ``fledge.services.core`` is the Core microservice itself\n  - ``fledge.services.south --port=44180 --address=127.0.0.1 --name=COAP`` - This South microservice is a listener of data pushed to Fledge via a CoAP protocol\n\n- ``=== Fledge tasks:`` - This block contains the list of tasks running in the Fledge platform.\n\n  - ``fledge.tasks.north.sending_process ... --name=sending process`` is a North task that prepares and sends data collected by the South modules to the OSIsoft PI System in OMF (OSIsoft Message Format).\n  - ``fledge.tasks.north.sending_process ... 
--name=statistics to pi`` is a North task that prepares and sends the internal statistics to the OSIsoft PI System in OMF (OSIsoft Message Format).\n\n\nHello, Foggy World!\n===================\n\nThe output of the ``fledge status`` command gives you an idea of the modules running in your machine, but let's try to get more information from Fledge.\n\n\nThe Fledge REST API\n--------------------\n\nFirst of all, we need to familiarize ourselves with the Fledge REST API. The API provides a set of methods used to monitor and administer the status of Fledge. Users and developers can also use the API to interact with external applications.\n\nThis is a short list of the methods available to the administrators. A more detailed list will be available soon:\n\n- **ping** provides the uptime of the Fledge Core microservice\n- **statistics** provides a set of statistics of the Fledge platform, such as data collected, sent, purged, rejected etc.\n- **asset** provides a list of assets that have readings buffered in Fledge.\n- **category** provides a list of the configuration of the modules and components in Fledge.\n\n\nUseful Tools\n~~~~~~~~~~~~\n\nSystem Administrators and Developers may already have their favorite tools to interact with a REST API, and they can probably use the same tools with Fledge. If you are not familiar with any tool, we recommend one of these:\n\n- If you are familiar with the Linux shell and command lines, |curl| is the simplest and most useful tool available. It comes with every Linux distribution (or you can easily add it if it is not available in the default installation).\n- If you prefer to use a browser-like interface, we recommend |postman|. Postman is an application available on Linux, MacOS and Windows and allows you to save queries and results, and run a set of queries with a single click.\n\n\nHello World!\n------------\n\nLet's execute the *ping* method. First, you must identify the IP address where Fledge is running. 
If you have installed Fledge on your local machine, you can use *localhost*. Alternatively, check the IP address of the machine where Fledge is installed.\n\n.. note:: This version of Fledge does not have any security setup by default, therefore the entry point for the REST API may be accessible from any external application, but there may be security settings in your operating environment that prevent access to specific ports from external applications. If you receive an error using the ping method, and the ``fledge status`` command says that everything is running, it is likely that you are experiencing a security issue.\n\nThe default port for the REST API is 8081. Using curl, try this command:\n\n.. code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/ping ; echo\n  {\"uptime\": 10480, \"dataRead\": 0, \"dataSent\": 0, \"dataPurged\": 0, \"authenticationOptional\": true, \"serviceName\": \"Fledge\", \"hostName\": \"fledge\", \"ipAddresses\": [\"x.x.x.x\", \"x:x:x:x:x:x:x:x\"], \"health\": \"green\", \"safeMode\": false}\n  $\n\nThe ``echo`` at the end of the line is simply used to add an extra new line to the output.\n|br| |br|\nIf you are using Postman, select the *GET* method and type ``http://localhost:8081/fledge/ping`` in the URI address input. If you are accessing a remote machine, replace *localhost* with the correct IP address. The output should be something like:\n\n|postman_ping|\n\nThis is the first message you may receive from Fledge!\n\n\nHello from the Southern Hemisphere of the Fledge World\n=======================================================\n\nLet's now try something more exciting. The primary job of Fledge is to collect data from the Edge (we call it *South*), buffer it in our storage engine and then send the data to Cloud historians and Enterprise Servers (we call them *North*). 
We also offer information to local or networked applications, something we call *East* or *West*.\n|br| |br|\nIn order to insert data you may need a sensor or a device that generates data. If you want to try Fledge but you do not have any sensor at hand, do not worry: we have a tool that can generate data as if it were a sensor.\n\n\nfogbench: a Brief Intro\n-----------------------\n\nFledge comes with a little but pretty handy tool called **fogbench**. The tool is written in Python and uses the same libraries as other modules of Fledge, therefore no extra libraries are needed. With *fogbench* you can do many things, like inserting data stored in files, running benchmarks to understand how Fledge performs in a given environment, or testing an end-to-end installation.\n\nNote: The following instructions assume you have downloaded and installed the CoAP south plugin from https://github.com/fledge-iot/fledge-south-coap.\n\n\n.. code-block:: console\n\n  $ git clone https://github.com/fledge-iot/fledge-south-coap\n  $ cd fledge-south-coap\n  $ sudo cp -r python/fledge/plugins/south/coap /usr/local/fledge/python/fledge/plugins/south/\n  $ sudo cp python/requirements-coap.txt /usr/local/fledge/python/\n  $ sudo python3 -m pip install -r /usr/local/fledge/python/requirements-coap.txt\n  $ sudo chown -R root:root /usr/local/fledge/python/fledge/plugins/south/coap\n  $ curl -sX POST http://localhost:8081/fledge/service -d '{\"name\": \"CoAP\", \"type\": \"south\", \"plugin\": \"coap\", \"enabled\": true}'\n\n\nDepending on your environment, you can call *fogbench* in one of these ways:\n\n- In a development environment, use the script *scripts/extras/fogbench*, inside your project repository (remember to set the *FLEDGE_ROOT* environment variable with the path to your project repository folder).\n- In an environment deployed with ``sudo make install``, use the script *bin/fogbench*.\n\nYou may call the *fogbench* tool like this:\n\n.. 
code-block:: console\n\n  $ /usr/local/fledge/bin/fogbench\n  >>> Make sure south CoAP plugin service is running & listening on specified host and port\n  usage: fogbench [-h] [-v] [-k {y,yes,n,no}] -t TEMPLATE [-o OUTPUT]\n                  [-I ITERATIONS] [-O OCCURRENCES] [-H HOST] [-P PORT]\n                  [-i INTERVAL] [-S {total}]\n  fogbench: error: the following arguments are required: -t/--template\n  $\n\n...or more specifically, when you invoke *fogbench* with the *--help* or *-h* argument:\n\n.. code-block:: console\n\n  $ /usr/local/fledge/bin/fogbench -h\n  >>> Make sure south CoAP plugin service is running & listening on specified host and port\n  usage: fogbench [-h] [-v] [-k {y,yes,n,no}] -t TEMPLATE [-o OUTPUT]\n                  [-I ITERATIONS] [-O OCCURRENCES] [-H HOST] [-P PORT]\n                  [-i INTERVAL] [-S {total}]\n\n  fogbench -- a Python script used to test Fledge (simulate payloads)\n\n  optional arguments:\n    -h, --help            show this help message and exit\n    -v, --version         show program's version number and exit\n    -k {y,yes,n,no}, --keep {y,yes,n,no}\n                            Do not delete the running sample (default: no)\n    -t TEMPLATE, --template TEMPLATE\n                          Set the template file, json extension\n    -o OUTPUT, --output OUTPUT\n                          Set the statistics output file\n    -I ITERATIONS, --iterations ITERATIONS\n                          The number of iterations of the test (default: 1)\n    -O OCCURRENCES, --occurrences OCCURRENCES\n                          The number of occurrences of the template (default: 1)\n    -H HOST, --host HOST  CoAP server host address (default: localhost)\n    -P PORT, --port PORT  The Fledge port. 
(default: 5683)\n    -i INTERVAL, --interval INTERVAL\n                          The interval in seconds for each iteration (default:\n                          0)\n    -S {total}, --statistics {total}\n                          The type of statistics to collect (default: total)\n\n  The initial version of fogbench is meant to test the sensor/device interface\n  of Fledge using CoAP\n  $\n\nIn order to use *fogbench* you need a template file. The template is a set of JSON elements that are used to create a random set of values that simulate the data generated by one or more sensors. Fledge comes with a template file named *fogbench_sensor_coap.template.json*. The template is located here:\n\n- In a development environment, look in *data/extras/fogbench* in the project repository folder.\n- In an environment deployed using ``sudo make install``, look in *$FLEDGE_DATA/extras/fogbench*.\n\nThe template file looks like this:\n\n\n.. code-block:: console\n\n  $ cat /usr/local/fledge/data/extras/fogbench/fogbench_sensor_coap.template.json\n  [\n      { \"name\"          : \"asset_1\",\n        \"sensor_values\" : [ { \"name\": \"dp_1\", \"type\": \"number\", \"min\": -2.0, \"max\": 2.0 },\n                            { \"name\": \"dp_1\", \"type\": \"number\", \"min\": -2.0, \"max\": 2.0 },\n                            { \"name\": \"dp_1\", \"type\": \"number\", \"min\": -2.0, \"max\": 2.0 } ] },\n      { \"name\"          : \"asset_2\",\n        \"sensor_values\" : [ { \"name\": \"lux\", \"type\": \"number\", \"min\": 0, \"max\": 130000, \"precision\":3 } ] },\n      { \"name\"          : \"asset_3\",\n        \"sensor_values\" : [ { \"name\": \"pressure\", \"type\": \"number\", \"min\": 800.0, \"max\": 1100.0, \"precision\":1 } ] }\n  ]\n  $\n\nIn the array, each element simulates a message from a sensor: it has a name and a set of data points, each with its own name, value type and range.\n\n\nData Coming from South\n----------------------\n\nNow you should have all the 
information necessary to test the CoAP South microservice. From the command line, type:\n\n- ``$FLEDGE_ROOT/scripts/extras/fogbench`` ``-t $FLEDGE_ROOT/data/extras/fogbench/fogbench_sensor_coap.template.json``, if you are in a development environment, with the *FLEDGE_ROOT* environment variable set to the path to your project repository folder\n- ``$FLEDGE_ROOT/bin/fogbench -t $FLEDGE_DATA/extras/fogbench/fogbench_sensor_coap.template.json``, if you are in a deployed environment, with *FLEDGE_ROOT* and *FLEDGE_DATA* set correctly.\n  - If you have installed Fledge in the default location (i.e. */usr/local/fledge*), type ``/usr/local/fledge/bin/fogbench -t data/extras/fogbench/fogbench_sensor_coap.template.json``.\n\nIn a development environment, the output of your command should be:\n\n.. code-block:: console\n\n  $ $FLEDGE_ROOT/scripts/extras/fogbench -t $FLEDGE_ROOT/data/extras/fogbench/fogbench_sensor_coap.template.json\n    >>> Make sure south CoAP plugin service is running\n     & listening on specified host and port\n\n    Total Statistics:\n\n    Start Time: 2023-04-14 11:15:50.679366\n    End Time:   2023-04-14 11:15:50.711856\n\n    Total Messages Transferred: 3\n    Total Bytes Transferred:    720\n\n    Total Iterations: 1\n    Total Messages per Iteration: 3.0\n    Total Bytes per Iteration:    720.0\n\n    Min messages/second: 92.33610341643583\n    Max messages/second: 92.33610341643583\n    Avg messages/second: 92.33610341643583\n\n    Min Bytes/second: 22160.6648199446\n    Max Bytes/second: 22160.6648199446\n    Avg Bytes/second: 22160.6648199446\n  $\n\nCongratulations! You have just inserted data into Fledge from the CoAP South microservice. 
More specifically, the output informs you that the data inserted is composed of 3 messages for a total of 720 Bytes, at an average of 92 messages per second and 22,160 Bytes per second.\n\nIf you want to stress Fledge a bit, you may insert the same data sample several times, by using the *-I* or *--iterations* argument:\n\n.. code-block:: console\n\n  $ $FLEDGE_ROOT/scripts/extras/fogbench -t data/extras/fogbench/fogbench_sensor_coap.template.json -I 100\n    >>> Make sure south CoAP plugin service is running & listening on specified host and port\n    Total Statistics:\n\n    Start Time: 2023-04-14 11:18:03.586924\n    End Time:   2023-04-14 11:18:04.582291\n\n    Total Messages Transferred: 300\n    Total Bytes Transferred:    72000\n\n    Total Iterations: 100\n    Total Messages per Iteration: 3.0\n    Total Bytes per Iteration:    720.0\n\n    Min messages/second: 90.53597295992274\n    Max messages/second: 454.33893684688775\n    Avg messages/second: 323.7178365566367\n\n    Min Bytes/second: 21728.63351038146\n    Max Bytes/second: 109041.34484325306\n    Avg Bytes/second: 77692.28077359282\n  $\n\nHere we have inserted the same set of data 100 times, therefore the total number of Bytes inserted is 72,000. The performance and insertion rates vary with each iteration and *fogbench* presents the minimum, maximum and average values.\n\n\nChecking What's Inside Fledge\n==============================\n\nWe can check if Fledge has now stored what we have inserted from the South microservice by using the *asset* API. From curl or Postman, use this URL:\n\n.. code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/asset ; echo\n    [{\"count\": 11, \"assetCode\": \"asset_1\"}, {\"count\": 11, \"assetCode\": \"asset_2\"}, {\"count\": 11, \"assetCode\": \"asset_3\"}]\n  $\n\nThe output of the asset entry point provides a list of assets buffered in Fledge and the count of elements stored. 
The output is a JSON array of objects, each with two elements:\n\n- **count** : the number of occurrences of the asset in the buffer.\n- **assetCode** : the name of the sensor or device that provides the data.\n\n\nFeeding East/West Applications\n------------------------------\n\nLet's suppose that we are interested in the data collected for one of the assets listed in the previous query, for example *asset_2*. The *asset* entry point can be used to retrieve the data points for individual assets by simply adding the code of the asset to the URI:\n\n.. code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/asset/asset_2 ; echo\n    [{\"reading\": {\"lux\": 75723.923}, \"timestamp\": \"2023-04-14 11:25:05.672528\"}, {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"}, {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"}, {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"}, {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"}, {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"}, {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"}, {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"}, {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"}, {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"}, {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"}]\n  $\n\nLet's see the JSON output in a more readable format:\n\n.. 
code-block:: json\n\n    [\n     {\"reading\": {\"lux\": 75723.923}, \"timestamp\": \"2023-04-14 11:25:05.672528\"},\n     {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"},\n     {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"},\n     {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"},\n     {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"},\n     {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"},\n     {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"},\n     {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"},\n     {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"},\n     {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"},\n     {\"reading\": {\"lux\": 50475.99}, \"timestamp\": \"2023-04-14 11:24:49.767983\"}\n   ]\n\nThe JSON structure depends on the sensor and the plugin used to capture the data. In this case, the values shown are:\n\n- **reading** : a JSON structure that is the set of data points provided by the sensor. In this case there is only one data point, named *lux*:\n- **lux** : the lux meter value\n- **timestamp** : the timestamp generated by the sensor. In this case, since we inserted the same value 10 times and a new value once using *fogbench*, the result is 10 timestamps with the same value and one timestamp with a different value.\n\nYou can dig even deeper into the data and extract only a subset of the reading. For example, you can select the lux value and limit the output to the last 5 readings:\n\n\n.. 
code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/asset/asset_2/lux?limit=5 ; echo\n    [\n     {\"timestamp\": \"2023-04-14 11:25:05.672528\", \"lux\": 75723.923},\n     {\"timestamp\": \"2023-04-14 11:24:49.767983\", \"lux\": 50475.99},\n     {\"timestamp\": \"2023-04-14 11:24:49.767983\", \"lux\": 50475.99},\n     {\"timestamp\": \"2023-04-14 11:24:49.767983\", \"lux\": 50475.99},\n     {\"timestamp\": \"2023-04-14 11:24:49.767983\", \"lux\": 50475.99}\n    ]\n  $\n\nWe have beautified the JSON output for you, so it is more readable.\n\n.. note:: When you select a specific element in the reading, the timestamp and the element are presented in the opposite order compared to the previous example. This is a known issue that will be fixed in the next version.\n\nSending Greetings to the Northern Hemisphere\n============================================\n\nThe next and last step is to send data to North, which means that we can take all or some of the data we buffer in Fledge and we can send it to a historian or a database using a North task or microservice.\n\n\nThe OMF Translator\n------------------\n\nFledge comes with a North plugin called *OMF Translator*. OMF is the OSIsoft Message Format, which is the message format accepted by the PI Connector Relay OMF. The PI Connector Relay OMF is provided by OSIsoft and it is used to feed the OSIsoft PI System.\n\n- Information regarding OSIsoft is available |here OSIsoft|\n- Information regarding OMF is available |here OMF|\n- Information regarding the OSIsoft PI System is available |here PI|\n\n*OMF Translator* is scheduled as a North task that is executed every 30 seconds (the time may vary; we set it to 30 seconds to facilitate the testing).\n\n\nPreparing the PI System\n-----------------------\n\nIn order to test the North task and plugin, first you need to set up the PI System. Here we assume you are already familiar with PI and you have a Windows server with PI installed, up and running. 
The minimum installation must include the PI System and the PI Connector Relay OMF. Once you have checked that everything is installed and works correctly, you should collect the IP address of the Windows system.\n\n\nSetting the OMF Translator Plugin\n---------------------------------\n\nFledge uses the same *OMF Translator* plugin to send the data coming from the South modules and buffered in Fledge.\n\n.. note:: In this version, only the South data can be sent to the PI System.\n\nIf you are curious to see which categories are available in Fledge, simply type:\n\n.. code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/category ; echo\n  {\n      \"categories\": [\n        {\n          \"key\": \"Storage\",\n          \"description\": \"Storage configuration\",\n          \"displayName\": \"Storage\"\n        },\n        {\n          \"key\": \"Advanced\",\n          \"description\": \"Advanced\",\n          \"displayName\": \"Advanced\"\n        },\n        {\n          \"key\": \"LOGGING\",\n          \"description\": \"Logging Level of Core Server\",\n          \"displayName\": \"Logging\"\n        },\n        {\n          \"key\": \"SCHEDULER\",\n          \"description\": \"Scheduler configuration\",\n          \"displayName\": \"Scheduler\"\n        },\n        {\n          \"key\": \"SMNTR\",\n          \"description\": \"Service Monitor\",\n          \"displayName\": \"Service Monitor\"\n        },\n        {\n          \"key\": \"rest_api\",\n          \"description\": \"Fledge Admin and User REST API\",\n          \"displayName\": \"Admin API\"\n        },\n        {\n          \"key\": \"password\",\n          \"description\": \"To control the password policy\",\n          \"displayName\": \"Password Policy\"\n        },\n        {\n          \"key\": \"service\",\n          \"description\": \"Fledge Service\",\n          \"displayName\": \"Fledge Service\"\n        },\n        {\n          \"key\": \"Installation\",\n          
\"description\": \"Installation\",\n          \"displayName\": \"Installation\"\n        },\n        {\n          \"key\": \"sqlite\",\n          \"description\": \"Storage Plugin\",\n          \"displayName\": \"sqlite\"\n        },\n        {\n          \"key\": \"General\",\n          \"description\": \"General\",\n          \"displayName\": \"General\"\n        },\n        {\n          \"key\": \"Utilities\",\n          \"description\": \"Utilities\",\n          \"displayName\": \"Utilities\"\n        },\n        {\n          \"key\": \"purge_system\",\n          \"description\": \"Configuration of the Purge System\",\n          \"displayName\": \"Purge System\"\n        },\n        {\n          \"key\": \"PURGE_READ\",\n          \"description\": \"Purge the readings, log, statistics history table\",\n          \"displayName\": \"Purge\"\n        }\n      ]\n  }\n  $\n\n\nFor each plugin you will see a corresponding category, e.g. for fledge-south-coap the registered category will be ``{ \"key\": \"COAP\", \"description\": \"CoAP Listener South Plugin\"}``.\nThe configuration for the OMF Translator used to stream the South data is initially disabled; all you can see of the settings is:\n\n.. 
code-block:: console\n\n  $ curl -sX GET   http://localhost:8081/fledge/category/OMF%20to%20PI%20north\n  {\n    \"enable\": {\n      \"description\": \"A switch that can be used to enable or disable execution of the sending process.\",\n      \"type\": \"boolean\",\n      \"readonly\": \"true\",\n      \"default\": \"true\",\n      \"value\": \"true\"\n    },\n    \"streamId\": {\n      \"description\": \"Identifies the specific stream to handle and the related information, among them the ID of the last object streamed.\",\n      \"type\": \"integer\",\n      \"readonly\": \"true\",\n      \"default\": \"0\",\n      \"value\": \"4\",\n      \"order\": \"16\"\n    },\n    \"plugin\": {\n      \"description\": \"PI Server North C Plugin\",\n      \"type\": \"string\",\n      \"default\": \"OMF\",\n      \"readonly\": \"true\",\n      \"value\": \"OMF\"\n    },\n    \"source\": {\n       \"description\": \"Defines the source of the data to be sent on the stream, this may be one of either readings, statistics or audit.\",\n       \"type\": \"enumeration\",\n       \"options\": [\n         \"readings\",\n         \"statistics\"\n        ],\n      \"default\": \"readings\",\n      \"order\": \"5\",\n      \"displayName\": \"Data Source\",\n      \"value\": \"readings\"\n    },\n  ...}\n  $ curl -sX GET   http://localhost:8081/fledge/category/Stats%20OMF%20to%20PI%20north\n  {\n    \"enable\": {\n      \"description\": \"A switch that can be used to enable or disable execution of the sending process.\",\n      \"type\": \"boolean\",\n      \"readonly\": \"true\",\n      \"default\": \"true\",\n      \"value\": \"true\"\n    },\n    \"streamId\": {\n      \"description\": \"Identifies the specific stream to handle and the related information, among them the ID of the last object streamed.\",\n      \"type\": \"integer\",\n      \"readonly\": \"true\",\n      \"default\": \"0\",\n      \"value\": \"5\",\n      \"order\": \"16\"\n    },\n    \"plugin\": {\n      
\"description\": \"PI Server North C Plugin\",\n      \"type\": \"string\",\n      \"default\": \"OMF\",\n      \"readonly\": \"true\",\n      \"value\": \"OMF\"\n    },\n    \"source\": {\n      \"description\": \"Defines the source of the data to be sent on the stream, this may be one of either readings, statistics or audit.\",\n      \"type\": \"enumeration\",\n      \"options\": [\n        \"readings\",\n        \"statistics\"\n       ],\n      \"default\": \"readings\",\n      \"order\": \"5\",\n      \"displayName\": \"Data Source\",\n      \"value\": \"statistics\"\n    },\n  ...}\n  $\n\nAt this point it may be a good idea to familiarize yourself with the |jq| tool; it will help you a lot in selecting and using data via the REST API. You may remember, we discussed it in the |get start| chapter.\n\nFirst, we can see the list of all the scheduled tasks (the process of sending data to a PI Connector Relay OMF is one of them). The command is:\n\n.. code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/schedule | jq\n  {\n    \"schedules\": [\n      {\n        \"id\": \"ef8bd42b-da9f-47c4-ade8-751ce9a504be\",\n        \"name\": \"OMF to PI north\",\n        \"processName\": \"north_c\",\n        \"type\": \"INTERVAL\",\n        \"repeat\": 30.0,\n        \"time\": 0,\n        \"day\": null,\n        \"exclusive\": true,\n        \"enabled\": false\n      },\n      {\n        \"id\": \"27501b35-e0cd-4340-afc2-a4465fe877d6\",\n        \"name\": \"Stats OMF to PI north\",\n        \"processName\": \"north_c\",\n        \"type\": \"INTERVAL\",\n        \"repeat\": 30.0,\n        \"time\": 0,\n        \"day\": null,\n        \"exclusive\": true,\n        \"enabled\": true\n      },\n    ...\n    ]\n  }\n  $\n\n...which means: \"show me all the tasks that can be scheduled\". The output has been made more readable by jq. There are several tasks; we need to identify the one we need and extract its unique id. 
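\n\nBefore applying jq to the live API, it may help to see its *select* idiom in isolation. The following is a minimal offline sketch; the JSON literal here is invented for illustration and is not Fledge output:\n\n.. code-block:: console\n\n  $ echo '{ \"schedules\": [ { \"name\": \"task A\", \"enabled\": false }, { \"name\": \"task B\", \"enabled\": true } ] }' | jq '.schedules[] | select( .enabled == true ) | .name'\n  \"task B\"\n  $\n\nThe same ``.schedules[] | select( ... )`` pattern, filtering on ``.name`` instead of ``.enabled``, isolates a single schedule from the real list.\n\n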
We can achieve that with the power of jq: first we can select the JSON object that shows the elements of the sending task:\n\n.. code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/schedule | jq '.schedules[] | select( .name == \"OMF to PI north\")'\n  {\n    \"id\": \"ef8bd42b-da9f-47c4-ade8-751ce9a504be\",\n    \"name\": \"OMF to PI north\",\n    \"processName\": \"north_c\",\n    \"type\": \"INTERVAL\",\n    \"repeat\": 30,\n    \"time\": 0,\n    \"day\": null,\n    \"exclusive\": true,\n    \"enabled\": true\n  }\n  $\n\nLet's have a look at what we have found:\n\n- **id** is the unique identifier of the schedule.\n- **name** is a user-friendly name of the schedule.\n- **type** is the type of schedule, in this case a schedule that is triggered at regular intervals.\n- **repeat** specifies the interval of 30 seconds.\n- **time** specifies when the schedule should run: since the type is INTERVAL, this element is irrelevant.\n- **day** indicates the day of the week the schedule should run: since the type is INTERVAL, this element is also irrelevant and the task simply runs every 30 seconds.\n- **exclusive** indicates that only a single instance of this task should run at any time.\n- **processName** is the name of the task to be executed.\n- **enabled** indicates whether the schedule is currently enabled or disabled.\n\nNow let's identify the plugin used to send data to the PI Connector Relay OMF.\n\n.. code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/category | jq '.categories[] | select ( .key == \"OMF to PI north\" )'\n  {\n    \"key\": \"OMF to PI north\",\n    \"description\": \"Configuration of the Sending Process\",\n    \"displayName\": \"OMF to PI north\"\n  }\n  $\n\nWe can get the specific information by adding the name of the task to the URL:\n\n.. 
code-block:: console\n\n  $  curl -s http://localhost:8081/fledge/category/OMF%20to%20PI%20north | jq .plugin\n  {\n    \"description\": \"PI Server North C Plugin\",\n    \"type\": \"string\",\n    \"default\": \"OMF\",\n    \"readonly\": \"true\",\n    \"value\": \"OMF\"\n  }\n  $\n\nNow, the output returned does not say much: this is because the plugin has never been enabled, so the configuration has not been loaded yet. First, let's enable the schedule. From the previous query of the schedulable tasks, we know the id is *ef8bd42b-da9f-47c4-ade8-751ce9a504be*:\n\n.. code-block:: console\n\n  $ curl  -X PUT http://localhost:8081/fledge/schedule/ef8bd42b-da9f-47c4-ade8-751ce9a504be -d '{ \"enabled\" : true }'\n  {\n    \"schedule\": {\n      \"id\": \"ef8bd42b-da9f-47c4-ade8-751ce9a504be\",\n      \"name\": \"OMF to PI north\",\n      \"processName\": \"north_c\",\n      \"type\": \"INTERVAL\",\n      \"repeat\": 30,\n      \"time\": 0,\n      \"day\": null,\n      \"exclusive\": true,\n      \"enabled\": true\n    }\n  }\n  $\n\nOnce enabled, the plugin will be executed inside the *OMF to PI north* task within 30 seconds, so you have to wait up to 30 seconds to see the new, full configuration. After 30 seconds or so, you should see something like this:\n\n.. 
code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/category/OMF%20to%20PI%20north | jq\n  {\n    \"enable\": {\n      \"description\": \"A switch that can be used to enable or disable execution of the sending process.\",\n      \"type\": \"boolean\",\n      \"readonly\": \"true\",\n      \"default\": \"true\",\n      \"value\": \"true\"\n    },\n    \"streamId\": {\n      \"description\": \"Identifies the specific stream to handle and the related information, among them the ID of the last object streamed.\",\n      \"type\": \"integer\",\n      \"readonly\": \"true\",\n      \"default\": \"0\",\n      \"value\": \"4\",\n      \"order\": \"16\"\n    },\n    \"plugin\": {\n      \"description\": \"PI Server North C Plugin\",\n      \"type\": \"string\",\n      \"default\": \"OMF\",\n      \"readonly\": \"true\",\n      \"value\": \"OMF\"\n    },\n    \"PIServerEndpoint\": {\n      \"description\": \"Select the endpoint among PI Web API, Connector Relay, OSIsoft Cloud Services or Edge Data Store\",\n      \"type\": \"enumeration\",\n      \"options\": [\n      \"PI Web API\",\n      \"Connector Relay\",\n      \"OSIsoft Cloud Services\",\n      \"Edge Data Store\"\n    ],\n      \"default\": \"Connector Relay\",\n      \"order\": \"1\",\n      \"displayName\": \"Endpoint\",\n      \"value\": \"Connector Relay\"\n    },\n    \"ServerHostname\": {\n      \"description\": \"Hostname of the server running the endpoint either PI Web API or Connector Relay\",\n      \"type\": \"string\",\n      \"default\": \"localhost\",\n      \"order\": \"2\",\n      \"displayName\": \"Server hostname\",\n      \"validity\": \"PIServerEndpoint != \\\"Edge Data Store\\\" && PIServerEndpoint != \\\"OSIsoft Cloud Services\\\"\",\n      \"value\": \"localhost\"\n    },\n    \"ServerPort\": {\n      \"description\": \"Port on which the endpoint either PI Web API or Connector Relay or Edge Data Store is listening, 0 will use the default one\",\n      \"type\": \"integer\",\n      
\"default\": \"0\",\n      \"order\": \"3\",\n      \"displayName\": \"Server port, 0=use the default\",\n      \"validity\": \"PIServerEndpoint != \\\"OSIsoft Cloud Services\\\"\",\n      \"value\": \"0\"\n    },\n    \"producerToken\": {\n      \"description\": \"The producer token that represents this Fledge stream\",\n      \"type\": \"string\",\n      \"default\": \"omf_north_0001\",\n      \"order\": \"4\",\n      \"displayName\": \"Producer Token\",\n      \"validity\": \"PIServerEndpoint == \\\"Connector Relay\\\"\",\n      \"value\": \"omf_north_0001\"\n    },\n    \"source\": {\n      \"description\": \"Defines the source of the data to be sent on the stream, this may be one of either readings, statistics or audit.\",\n      \"type\": \"enumeration\",\n      \"options\": [\n        \"readings\",\n        \"statistics\"\n      ],\n      \"default\": \"readings\",\n      \"order\": \"5\",\n      \"displayName\": \"Data Source\",\n      \"value\": \"readings\"\n    },\n    \"StaticData\": {\n      \"description\": \"Static data to include in each sensor reading sent to the PI Server.\",\n      \"type\": \"string\",\n      \"default\": \"Location: Palo Alto, Company: Dianomic\",\n      \"order\": \"6\",\n      \"displayName\": \"Static Data\",\n      \"value\": \"Location: Palo Alto, Company: Dianomic\"\n    },\n    \"OMFRetrySleepTime\": {\n      \"description\": \"Seconds between each retry for the communication with the OMF PI Connector Relay, NOTE : the time is doubled at each attempt.\",\n      \"type\": \"integer\",\n      \"default\": \"1\",\n      \"order\": \"7\",\n      \"displayName\": \"Sleep Time Retry\",\n      \"value\": \"1\"\n    },\n\t  \"OMFMaxRetry\": {\n\t\t  \"description\": \"Max number of retries for the communication with the OMF PI Connector Relay\",\n\t\t  \"type\": \"integer\",\n\t\t  \"default\": \"3\",\n\t\t  \"order\": \"8\",\n\t\t  \"displayName\": \"Maximum Retry\",\n\t\t  \"value\": \"3\"\n\t  },\n    \"OMFHttpTimeout\": {\n  
    \"description\": \"Timeout in seconds for the HTTP operations with the OMF PI Connector Relay\",\n      \"type\": \"integer\",\n      \"default\": \"10\",\n      \"order\": \"9\",\n      \"displayName\": \"HTTP Timeout\",\n      \"value\": \"10\"\n    },\n    \"formatInteger\": {\n      \"description\": \"OMF format property to apply to the type Integer\",\n      \"type\": \"string\",\n      \"default\": \"int64\",\n      \"order\": \"10\",\n      \"displayName\": \"Integer Format\",\n      \"value\": \"int64\"\n    },\n    \"formatNumber\": {\n      \"description\": \"OMF format property to apply to the type Number\",\n      \"type\": \"string\",\n      \"default\": \"float64\",\n      \"order\": \"11\",\n      \"displayName\": \"Number Format\",\n      \"value\": \"float64\"\n    },\n    \"compression\": {\n      \"description\": \"Compress readings data before sending to PI server\",\n      \"type\": \"boolean\",\n      \"default\": \"true\",\n      \"order\": \"12\",\n      \"displayName\": \"Compression\",\n      \"value\": \"false\"\n    },\n    \"DefaultAFLocation\": {\n      \"description\": \"Defines the hierarchies tree in Asset Framework in which the assets will be created, each level is separated by /, PI Web API only.\",\n      \"type\": \"string\",\n      \"default\": \"/fledge/data_piwebapi/default\",\n      \"order\": \"13\",\n      \"displayName\": \"Asset Framework hierarchies tree\",\n      \"validity\": \"PIServerEndpoint == \\\"PI Web API\\\"\",\n      \"value\": \"/fledge/data_piwebapi/default\"\n    },\n    \"AFMap\": {\n      \"description\": \"Defines a set of rules to address where assets should be placed in the AF hierarchy.\",\n      \"type\": \"JSON\",\n      \"default\": \"{ }\",\n      \"order\": \"14\",\n      \"displayName\": \"Asset Framework hierarchies rules\",\n      \"validity\": \"PIServerEndpoint == \\\"PI Web API\\\"\",\n      \"value\": \"{ }\"\n    },\n    \"notBlockingErrors\": {\n      \"description\": \"These errors 
are considered not blocking in the communication with the PI Server, the sending operation will proceed with the next block of data if one of these is encountered\",\n      \"type\": \"JSON\",\n      \"default\": \"{ \\\"errors400\\\" : [ \\\"Redefinition of the type with the same ID is not allowed\\\", \\\"Invalid value type for the property\\\", \\\"Property does not exist in the type definition\\\", \\\"Container is not defined\\\", \\\"Unable to find the property of the container of type\\\" ] }\",\n      \"order\": \"15\",\n      \"readonly\": \"true\",\n      \"value\": \"{ \\\"errors400\\\" : [ \\\"Redefinition of the type with the same ID is not allowed\\\", \\\"Invalid value type for the property\\\", \\\"Property does not exist in the type definition\\\", \\\"Container is not defined\\\", \\\"Unable to find the property of the container of type\\\" ] }\"\n    },\n    \"PIWebAPIAuthenticationMethod\": {\n      \"description\": \"Defines the authentication method to be used with the PI Web API.\",\n      \"type\": \"enumeration\",\n      \"options\": [\n        \"anonymous\",\n        \"basic\",\n        \"kerberos\"\n      ],\n      \"default\": \"anonymous\",\n      \"order\": \"17\",\n      \"displayName\": \"PI Web API Authentication Method\",\n      \"validity\": \"PIServerEndpoint == \\\"PI Web API\\\"\",\n      \"value\": \"anonymous\"\n    },\n    \"PIWebAPIUserId\": {\n      \"description\": \"User id of PI Web API to be used with the basic access authentication.\",\n      \"type\": \"string\",\n      \"default\": \"user_id\",\n      \"order\": \"18\",\n      \"displayName\": \"PI Web API User Id\",\n      \"validity\": \"PIServerEndpoint == \\\"PI Web API\\\" && PIWebAPIAuthenticationMethod == \\\"basic\\\"\",\n      \"value\": \"user_id\"\n    },\n    \"PIWebAPIPassword\": {\n      \"description\": \"Password of the user of PI Web API to be used with the basic access authentication.\",\n      \"type\": \"password\",\n      \"default\": 
\"password\",\n      \"order\": \"19\",\n      \"displayName\": \"PI Web API Password\",\n      \"validity\": \"PIServerEndpoint == \\\"PI Web API\\\" && PIWebAPIAuthenticationMethod == \\\"basic\\\"\",\n      \"value\": \"****\"\n    },\n    \"PIWebAPIKerberosKeytabFileName\": {\n      \"description\": \"Keytab file name used for Kerberos authentication in PI Web API.\",\n      \"type\": \"string\",\n      \"default\": \"piwebapi_kerberos_https.keytab\",\n      \"order\": \"20\",\n      \"displayName\": \"PI Web API Kerberos keytab file\",\n      \"validity\": \"PIServerEndpoint == \\\"PI Web API\\\" && PIWebAPIAuthenticationMethod == \\\"kerberos\\\"\",\n      \"value\": \"piwebapi_kerberos_https.keytab\"\n    },\n    \"OCSNamespace\": {\n      \"description\": \"Specifies the OCS namespace where the information are stored and it is used for the interaction with the OCS API\",\n      \"type\": \"string\",\n      \"default\": \"name_space\",\n      \"order\": \"21\",\n      \"displayName\": \"OCS Namespace\",\n      \"validity\": \"PIServerEndpoint == \\\"OSIsoft Cloud Services\\\"\",\n      \"value\": \"name_space\"\n    },\n    \"OCSTenantId\": {\n      \"description\": \"Tenant id associated to the specific OCS account\",\n      \"type\": \"string\",\n      \"default\": \"ocs_tenant_id\",\n      \"order\": \"22\",\n      \"displayName\": \"OCS Tenant ID\",\n      \"validity\": \"PIServerEndpoint == \\\"OSIsoft Cloud Services\\\"\",\n      \"value\": \"ocs_tenant_id\"\n    },\n    \"OCSClientId\": {\n      \"description\": \"Client id associated to the specific OCS account, it is used to authenticate the source for using the OCS API\",\n      \"type\": \"string\",\n      \"default\": \"ocs_client_id\",\n      \"order\": \"23\",\n      \"displayName\": \"OCS Client ID\",\n      \"validity\": \"PIServerEndpoint == \\\"OSIsoft Cloud Services\\\"\",\n      \"value\": \"ocs_client_id\"\n    },\n    \"OCSClientSecret\": {\n      \"description\": \"Client secret 
associated to the specific OCS account, it is used to authenticate the source for using the OCS API\",\n      \"type\": \"password\",\n      \"default\": \"ocs_client_secret\",\n      \"order\": \"24\",\n      \"displayName\": \"OCS Client Secret\",\n      \"validity\": \"PIServerEndpoint == \\\"OSIsoft Cloud Services\\\"\",\n      \"value\": \"****\"\n    }\n  }\n  $\n\nYou can look at the descriptions to have a taste of what you can control with this plugin. The default configuration should be fine, with the exception of the *ServerHostname*, which of course should refer to the IP address of the machine and the port used by the PI Connector Relay OMF. PI Connector Relay OMF version 1.0 used the HTTP protocol on port 8118, while version 1.2 or higher uses HTTPS on port 5460. Assuming that the port is *5460* and the IP address is *192.168.56.101*, you can set the new ServerHostname with this PUT method:\n\n.. code-block:: console\n\n  $ curl -sH'Content-Type: application/json' -X PUT -d '{ \"ServerHostname\": \"192.168.56.101\" }' http://localhost:8081/fledge/category/OMF%20to%20PI%20north | jq\n  \"ServerHostname\": {\n    \"description\": \"Hostname of the server running the endpoint either PI Web API or Connector Relay\",\n    \"type\": \"string\",\n    \"default\": \"localhost\",\n    \"order\": \"2\",\n    \"displayName\": \"Server hostname\",\n    \"validity\": \"PIServerEndpoint != \\\"Edge Data Store\\\" && PIServerEndpoint != \\\"OSIsoft Cloud Services\\\"\",\n    \"value\": \"192.168.56.101\"\n  }\n  $\n\nNote that the *value* element is the only one that can be changed via the PUT request (the other elements are factory settings).\n\nNow we are ready to send data North, to the PI System.\n\n\nSending Data to the PI System\n-----------------------------\n\nThe last bit to accomplish is to start the PI Connector Relay OMF on the Windows Server. 
The output may look like this screenshot, where you can see the Connector Relay debug window on the left and the PI Data Explorer on the right.\n\n|win_server_waiting|\n\nWait a few seconds ...et voilà! Readings and statistics are in the PI System:\n\n|pi_loaded|\n\n\nCongratulations! You have experienced an end-to-end test of Fledge, from South with sensor data, through Fledge and East/West applications, and finally to North towards Historians.\n\n\n"
  },
  {
    "path": "docs/building_fledge/building_fledge.rst",
"content": ".. Getting Started describes how to build and install Fledge\n\n.. |br| raw:: html\n\n   <br />\n\n.. Links in new tabs\n.. |Fledge Repo| raw:: html\n\n   <a href=\"https://github.com/fledge-iot/fledge\" target=\"_blank\">https://github.com/fledge-iot/fledge</a>\n\n.. |GCC Bug| raw:: html\n\n   <a href=\"https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66425\" target=\"_blank\">here</a>\n\n.. =============================================\n\n\n****************\nBuilding Fledge\n****************\n\nLet's get started! In this chapter we will see where to find and how to build, install and run Fledge for the first time.\n\n\nFledge Platforms\n=================\n\nDue to the use of standard libraries, Fledge can run on a large number of platforms and operating environments, but its primary target is Linux distributions. |br| Our testing environment includes Ubuntu 20.04 LTS, Ubuntu 22.04 LTS and Raspbian, but we have installed and tested Fledge on other Linux distributions. In addition to the native support, Fledge can also run on Virtual Machines, Docker and LXC containers.\n\n\nRequirements\n------------\n\nFledge requires a number of software packages and libraries to be installed in order to be built; the process of installing these has been streamlined and automated for all the currently supported platforms. A single script, *requirements.sh*, can be run and this will install all of the packages needed to build and run Fledge.\n\nBuilding Fledge\n================\n\nIn this section we will describe how to build Fledge on any of the supported platforms. If you are not familiar with Linux and you do not want to build Fledge from the source code, you can download a ready-made package (the list of packages is `available here <../92_downloads.html>`_).\n\nObtaining the Source Code\n-------------------------\n\nFledge is available on GitHub. The link to the repository is |Fledge Repo|. In order to clone the code in the repository, type:\n\n.. 
code-block:: console\n\n  $ git clone https://github.com/fledge-iot/fledge.git\n  Cloning into 'fledge'...\n  remote: Enumerating objects: 83394, done.\n  remote: Counting objects: 100% (2093/2093), done.\n  remote: Compressing objects: 100% (903/903), done.\n  remote: Total 83394 (delta 1349), reused 1840 (delta 1161), pack-reused 81301\n  Receiving objects: 100% (83394/83394), 34.85 MiB | 7.38 MiB/s, done.\n  Resolving deltas: 100% (55599/55599), done.\n  $\n\nThe code should now be loaded on your machine, in a directory called *fledge*:\n\n.. code-block:: console\n\n  $ ls -l fledge\n  total 228\n  drwxrwxr-x  7 fledge fledge   4096 Aug 26 11:20 C\n  -rw-rw-r--  1 fledge fledge   1659 Aug 26 11:20 CMakeLists.txt\n  drwxrwxr-x  2 fledge fledge   4096 Aug 26 11:20 contrib\n  -rw-rw-r--  1 fledge fledge   4786 Aug 26 11:20 CONTRIBUTING.md\n  drwxrwxr-x  4 fledge fledge   4096 Aug 26 11:20 data\n  drwxrwxr-x  2 fledge fledge   4096 Aug 26 11:20 dco-signoffs\n  drwxrwxr-x 10 fledge fledge   4096 Aug 26 11:20 docs\n  -rw-rw-r--  1 fledge fledge 108680 Aug 26 11:20 doxy.config\n  drwxrwxr-x  3 fledge fledge   4096 Aug 26 11:20 examples\n  drwxrwxr-x  4 fledge fledge   4096 Aug 26 11:20 extras\n  -rw-rw-r--  1 fledge fledge  11346 Aug 26 11:20 LICENSE\n  -rw-rw-r--  1 fledge fledge  24216 Aug 26 11:20 Makefile\n  -rwxrwxr-x  1 fledge fledge    310 Aug 26 11:20 mkversion\n  drwxrwxr-x  4 fledge fledge   4096 Aug 26 11:20 python\n  -rw-rw-r--  1 fledge fledge   9292 Aug 26 11:20 README.rst\n  -rwxrwxr-x  1 fledge fledge   8177 Aug 26 11:20 requirements.sh\n  drwxrwxr-x  8 fledge fledge   4096 Aug 26 11:20 scripts\n  drwxrwxr-x  4 fledge fledge   4096 Aug 26 11:20 tests\n  drwxrwxr-x  3 fledge fledge   4096 Aug 26 11:20 tests-manual\n  -rwxrwxr-x  1 fledge fledge     38 Aug 26 11:20 VERSION\n  $\n\nSelecting the Correct Version\n-----------------------------\n\nThe git repository cloned on your local machine contains several 
branches. More specifically:\n\n- The **main** branch is the latest, stable version. You should use this branch if you are interested in using Fledge with the latest release features and fixes.\n- The **develop** branch is the current working branch used by our developers. The branch contains the latest version and features, but it may be unstable and there may be issues in the code. You may consider using this branch if you are curious to see one of the latest features we are working on, but you should not use this branch in production.\n- The branches with versions **majorID.minorID** or **majorID.minorID.patchID**, such as *1.0* or *1.4.2*, contain the code of that specific version. You may use one of these branches if you need to check the code used in those versions.\n- The branches with name **FOGL-XXXX**, where 'XXXX' is a sequence number, are working branches used by developers and contributors to add features, fix issues, modify and release code and documentation of Fledge. You are free to look at these branches and learn from the work of the contributors.\n\nNote that the default branch is *develop*.\n\nOnce you have cloned the Fledge project, in order to check the branches available, use the ``git branch`` command:\n\n.. code-block:: console\n\n  $ pwd\n  /home/ubuntu\n  $ cd fledge\n  $ git branch --all\n  * develop\n  remotes/origin/1.0\n  ...\n  remotes/origin/FOGL-822\n  remotes/origin/FOGL-823\n  remotes/origin/HEAD -> origin/develop\n  ...\n  remotes/origin/develop\n  remotes/origin/main\n  $\n\nAssuming you want to use the latest released, stable version, use the ``git checkout`` command to select the *main* branch:\n\n.. 
code-block:: console\n\n  $ git checkout main\n  Branch main set up to track remote branch main from origin.\n  Switched to a new branch 'main'\n  $\n\nYou can always use the ``git status`` command to check the branch you have checked out.\n\n\nBuilding Fledge\n----------------\n\nYou are now ready to build your first Fledge project. \n\n  - Move to the *fledge* project directory\n\n  - Load the requirements needed to build Fledge by typing\n\n    .. code-block:: console\n\n      $ sudo ./requirements.sh\n      [sudo] password for john:\n      Platform is Ubuntu, Version: 20.04\n      apt 1.6.14 (amd64)\n      Reading package lists...\n      Building dependency tree...\n      ...\n\n  - Type the ``make`` command and let the magic happen.\n\n    .. code-block:: console\n\n      $ make\n      Building Fledge version X.X., DB schema X\n      scripts/certificates \"fledge\" \"365\"\n      Creating a self signed SSL certificate ...\n      Certificates created successfully, and placed in data/etc/certs\n      scripts/auth_certificates ca \"ca\" \"365\"\n      ...\n      Successfully installed aiohttp-3.8.1 aiohttp-cors-0.7.0 aiosignal-1.2.0 async-timeout-4.0.2 asynctest-0.13.0 attrs-22.1.0 cchardet-2.1.4 certifi-2022.6.15 charset-normalizer-2.1.1 frozenlist-1.2.0 idna-3.3 idna-ssl-1.1.0 ifaddr-0.2.0 multidict-5.2.0 pyjq-2.3.1 pyjwt-1.6.4 requests-2.27.1 requests-toolbelt-0.9.1 six-1.16.0 typing-extensions-4.1.1 urllib3-1.26.12 yarl-1.7.2 zeroconf-0.27.0\n      $\n\n\nDepending on the version of Ubuntu or other Linux distribution you are using, you may encounter some issues. For example, there is a bug in the GCC compiler that raises a warning under specific circumstances. The output of the build will be something like:\n\n.. 
code-block:: console\n\n  /home/ubuntu/Fledge/C/services/storage/storage.cpp:97:14: warning: ignoring return value of ‘int dup(int)’, declared with attribute warn_unused_result [-Wunused-result]\n    (void)dup(0);     // stdout GCC bug 66425 produces warning\n                ^\n  /home/ubuntu/Fledge/C/services/storage/storage.cpp:98:14: warning: ignoring return value of ‘int dup(int)’, declared with attribute warn_unused_result [-Wunused-result]\n    (void)dup(0);     // stderr GCC bug 66425 produces warning\n                ^\n\nThe bug is documented |GCC Bug|. For our project, you should ignore it.\n\n\nThe other issue is related to the version of pip (more specifically pip3), the Python package manager. If you see this warning in the middle of the build output:\n\n.. code-block:: console\n\n  /usr/lib/python3.6/distutils/dist.py:261: UserWarning: Unknown distribution option: 'python_requires'\n    warnings.warn(msg)\n\n...and this output at the end of the build process:\n\n.. code-block:: console\n\n  You are using pip version 8.1.1, however version 9.0.1 is available.\n  You should consider upgrading via the 'pip install --upgrade pip' command.\n\nIn this case, what you need to do is to upgrade the pip software for Python3:\n\n.. code-block:: console\n\n  $ sudo pip3 install --upgrade pip\n  Collecting pip\n    Downloading pip-9.0.1-py2.py3-none-any.whl (1.3MB)\n      100% |████████████████████████████████| 1.3MB 1.1MB/s\n  Installing collected packages: pip\n  Successfully installed pip-9.0.1\n  $\n\nAt this point, run the ``make`` command again and the Python warning should disappear.\n\n\nTesting Fledge from the Build Environment\n------------------------------------------\n\nIf you are eager to test Fledge straight away, you can do so! All you need to do is to set the *FLEDGE_ROOT* environment variable and you are good to go. 
Stay in the Fledge project directory, set the environment variable with the path to the Fledge directory and start Fledge with the ``fledge start`` command:\n\n.. code-block:: console\n\n  $ pwd\n  /home/ubuntu/fledge\n  $ export FLEDGE_ROOT=/home/ubuntu/fledge\n  $ ./scripts/fledge start\n  Starting Fledge vX.X.....\n  Fledge started.\n  $\n\n\nYou can check the status of Fledge with the ``fledge status`` command. For a few seconds you may see the service starting; then it will show the status of the Fledge services and tasks:\n\n.. code-block:: console\n\n  $ ./scripts/fledge status\n  Fledge starting.\n  $\n  $ scripts/fledge status\n  Fledge vX.X.X running.\n  Fledge Uptime:  9065 seconds.\n  Fledge records: 86299 read, 86851 sent, 0 purged.\n  Fledge does not require authentication.\n  === Fledge services:\n  fledge.services.core\n  fledge.services.storage --address=0.0.0.0 --port=42583\n  fledge.services.south --port=42583 --address=127.0.0.1 --name=Sine\n  fledge.services.notification --port=42583 --address=127.0.0.1 --name=Fledge Notifications\n  === Fledge tasks:\n  fledge.tasks.purge --port=42583 --address=127.0.0.1 --name=purge\n  tasks/sending_process --port=42583 --address=127.0.0.1 --name=PI Server\n  $\n\nIf you are curious to see a proper output from Fledge, you can query the Core microservice using the REST API:\n\n.. code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/ping ; echo\n  {\"uptime\": 10480, \"dataRead\": 0, \"dataSent\": 0, \"dataPurged\": 0, \"authenticationOptional\": true, \"serviceName\": \"Fledge\", \"hostName\": \"fledge\", \"ipAddresses\": [\"x.x.x.x\", \"x:x:x:x:x:x:x:x\"], \"health\": \"green\", \"safeMode\": false}\n  $\n  $ curl -s http://localhost:8081/fledge/statistics ; echo\n  [{\"key\": \"BUFFERED\", \"description\": \"Readings currently in the Fledge buffer\", \"value\": 0}, {\"key\": \"DISCARDED\", \"description\": \"Readings discarded by the South Service before being  placed in the buffer. 
This may be due to an error in the readings themselves.\", \"value\": 0}, {\"key\": \"PURGED\", \"description\": \"Readings removed from the buffer by the purge process\", \"value\": 0}, {\"key\": \"READINGS\", \"description\": \"Readings received by Fledge\", \"value\": 0}, {\"key\": \"UNSENT\", \"description\": \"Readings filtered out in the send process\", \"value\": 0}, {\"key\": \"UNSNPURGED\", \"description\": \"Readings that were purged from the buffer before being sent\", \"value\": 0}]\n  $\n\nCongratulations! You have installed and tested Fledge! If you want to go the extra mile (and make the output of the REST API more readable), download the *jq* JSON processor and pipe the output of the *curl* command to it:\n\n.. code-block:: console\n\n  $ sudo apt install jq\n  ...\n  $\n  $ curl -s http://localhost:8081/fledge/statistics | jq\n  [\n    {\n      \"key\": \"BUFFERED\",\n      \"description\": \"Readings currently in the Fledge buffer\",\n      \"value\": 0\n    },\n    {\n      \"key\": \"DISCARDED\",\n      \"description\": \"Readings discarded by the South Service before being  placed in the buffer. This may be due to an error in the readings themselves.\",\n      \"value\": 0\n    },\n    {\n      \"key\": \"PURGED\",\n      \"description\": \"Readings removed from the buffer by the purge process\",\n      \"value\": 0\n    },\n    {\n      \"key\": \"READINGS\",\n      \"description\": \"Readings received by Fledge\",\n      \"value\": 0\n    },\n    {\n      \"key\": \"UNSENT\",\n      \"description\": \"Readings filtered out in the send process\",\n      \"value\": 0\n    },\n    {\n      \"key\": \"UNSNPURGED\",\n      \"description\": \"Readings that were purged from the buffer before being sent\",\n      \"value\": 0\n    }\n  ]\n  $\n\n\nNow I Want to Stop Fledge!\n---------------------------\n\nEasy: you have learnt ``fledge start`` and ``fledge status``; simply type ``fledge stop``:\n\n\n.. 
code-block:: console\n\n  $ scripts/fledge stop\n  Stopping Fledge.........\n  Fledge stopped.\n  $\n\n|br| |br|\nAs a next step, let's install Fledge!\n\n\nAppendix: Setting the PostgreSQL Database\n=========================================\n\nIf you intend to use the PostgreSQL database as the storage engine, make sure that PostgreSQL is installed and running correctly:\n\n.. code-block:: console\n\n  $ sudo systemctl status postgresql\n  ● postgresql.service - PostgreSQL RDBMS\n     Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)\n     Active: active (exited) since Fri 2017-12-08 15:56:07 GMT; 15min ago\n   Main PID: 14572 (code=exited, status=0/SUCCESS)\n     CGroup: /system.slice/postgresql.service\n\n  Dec 08 15:56:07 ubuntu systemd[1]: Starting PostgreSQL RDBMS...\n  Dec 08 15:56:07 ubuntu systemd[1]: Started PostgreSQL RDBMS.\n  Dec 08 15:56:11 ubuntu systemd[1]: Started PostgreSQL RDBMS.\n  $\n  $ ps -ef | grep postgres\n  postgres 14806     1  0 15:56 ?        00:00:00 /usr/lib/postgresql/9.5/bin/postgres -D /var/lib/postgresql/9.5/main -c config_file=/etc/postgresql/9.5/main/postgresql.conf\n  postgres 14808 14806  0 15:56 ?        00:00:00 postgres: checkpointer process\n  postgres 14809 14806  0 15:56 ?        00:00:00 postgres: writer process\n  postgres 14810 14806  0 15:56 ?        00:00:00 postgres: wal writer process\n  postgres 14811 14806  0 15:56 ?        00:00:00 postgres: autovacuum launcher process\n  postgres 14812 14806  0 15:56 ?        00:00:00 postgres: stats collector process\n  ubuntu   15198  1225  0 17:22 pts/0    00:00:00 grep --color=auto postgres\n  $\n\nPostgreSQL 13 was the version available for Ubuntu 20.04 when we published this page. Other versions of PostgreSQL, from 9.6 onward, work just fine. |br| |br| When you install the Ubuntu package, PostgreSQL is set for *peer authentication*, i.e. the database user must match the Linux user. Other packages may differ. 
You may quickly check the authentication mode set in the *pg_hba.conf* file. The file is in the same directory as the *postgresql.conf* file you may see in the output of the *ps* command shown above, in our case */etc/postgresql/9.5/main*:\n\n.. code-block:: console\n\n  $ sudo grep '^local' /etc/postgresql/9.5/main/pg_hba.conf\n  local   all             postgres                                peer\n  local   all             all                                     peer\n  $\n\nThe installation procedure also creates a Linux *postgres* user. In order to check if everything is set correctly, execute the *psql* utility as the *postgres* user:\n\n.. code-block:: console\n\n  $ sudo -u postgres psql -l\n                                    List of databases\n     Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges\n  -----------+----------+----------+-------------+-------------+-----------------------\n   postgres  | postgres | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 |\n   template0 | postgres | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 | =c/postgres          +\n             |          |          |             |             | postgres=CTc/postgres\n   template1 | postgres | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 | =c/postgres          +\n             |          |          |             |             | postgres=CTc/postgres\n  (3 rows)\n  $\n\nEncoding and collations may differ, depending on the choices made when you installed your operating system. |br| Before you proceed, you must create a PostgreSQL user that matches your Linux user. Supposing that your user is *<fledge_user>*, type:\n\n.. code-block:: console\n\n  $ sudo -u postgres createuser -d <fledge_user>\n\nThe *-d* argument is important because the user will need to create the Fledge database.\n\nA more generic command, which creates a PostgreSQL user matching the current Linux user, is:\n\n.. code-block:: console\n\n  $ sudo -u postgres createuser -d $(whoami)\n\nFinally, you should now be able to see the list of the available databases from your current user:\n\n.. 
code-block:: console\n\n  $ psql -l\n                                    List of databases\n     Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges\n  -----------+----------+----------+-------------+-------------+-----------------------\n   postgres  | postgres | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 |\n   template0 | postgres | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 | =c/postgres          +\n             |          |          |             |             | postgres=CTc/postgres\n   template1 | postgres | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 | =c/postgres          +\n             |          |          |             |             | postgres=CTc/postgres\n  (3 rows)\n  $\n\n"
  },
  {
    "path": "docs/building_fledge/index.rst",
    "content": ".. Building Developers Guide\n\n*************************\nBuilding Developers Guide\n*************************\n\n.. toctree::\n\n    01_introduction\n    building_fledge\n    04_installation\n    06_testing\n    04_utilities\n    05_tasks"
  },
  {
    "path": "docs/building_pipelines.rst",
    "content": "..\n\n.. Images\n.. |pipelines| image:: images/pipelines.png\n.. |logview_1| image:: images/logview_1.png\n.. |logview_2| image:: images/logview_2.png\n.. |view_graph| image:: images/view_graph.jpg\n.. |view_spreadsheet| image:: images/view_spreadsheet.jpg\n.. |developer_features| image:: images/developer_features.jpg\n.. |manual_purge| image:: images/manual_purge.jpg\n.. |eraser| image:: images/eraser.jpg\n.. |deprecated_1| image:: images/deprecated_1.png\n.. |deprecated_2| image:: images/deprecated_2.png\n.. |pip_1| image:: images/pip_1.jpg\n.. |pip_2| image:: images/pip_2.jpg\n.. |pip_3| image:: images/pip_3.jpg\n.. |LathePipeline| image:: images/LathePipeline.jpg\n\n.. Links\n.. |python35| raw:: html\n\n   <a href=\"plugins/fledge-filter-python35/index.html\">fledge-filter-python35</a>\n\n.. |assetFilter| raw:: html\n\n   <a href=\"plugins/fledge-filter-asset/index.html\">asset filter</a>\n\n.. |OMFhints| raw:: html\n\n   <a href=\"plugins/fledge-filter-omfhint/index.html\">OMF Hints</a>\n\n.. |OMF| raw:: html\n\n   <a href=\"plugins/fledge-north-OMF/index.html\">OMF</a>\n\nDeveloping Data Pipelines\n=========================\n\nFledge provides a system of data pipelines that allows data to flow from its point of ingest into the Fledge instance, the south plugin, to the storage layer in which it is buffered. The stages along this pipeline are Fledge processing filters, output of one filter becomes the input of the next. Fledge also supports pipelines on the egress as data flows from the storage layer to the north plugins and onward to the systems integrated upstream of the Fledge instance.\n\nOperations in the south service are performed on the data from a single source, whilst operations in the north are performed on data going to a single destination. 
The filter pipeline in the north will have the data from all sources flowing through it; this data will form a mixed stream that contains all the data in date/time order.\n\n+-------------+\n| |pipelines| |\n+-------------+\n\nBest Practices\n--------------\n\nIt is possible with Fledge to support multiple data pipelines within a single Fledge instance; however, if you have a well established Fledge instance with critical pipelines running on it, it is perhaps not best practice to develop a new, experimental pipeline on that same instance.\n\nLooking first at south plugins, one reason for this is that data that enters the Fledge instance via your new pipeline will be sent to the storage system and then onward to the north destinations mixed with the data from other pipelines on your system. If your new pipeline is incorrect or generates poor data, you are then left in a situation whereby that data has already been sent to your existing upstream systems. \n\nIf it is unavoidable to use the same instance there are techniques that can be used to reduce the risk; namely to use an |assetFilter| to block data from your new south pipeline entering your existing north pipelines and then being sent to your upstream systems. To do this you merely insert a filter at the start of each of your existing north pipelines and set it to exclude the named assets that will be ingested by your new, experimental pipeline. This will allow the data from the existing south service to still flow to the upstream systems, but prevent your new data from streaming out to these systems.\n\nThere are still risks associated with this approach, namely that the new service may produce assets of a different name to those you expect or may produce more assets than you expect. Data is still also sent to the notification service from your new pipeline, which may impact that service, although it is less likely than sending incorrect or unwanted data north. 
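\n\nAs an illustration of the exclusion technique described above, a rules configuration for the |assetFilter| might look something like the sketch below. The asset name is purely hypothetical and the exact property names should be confirmed against the filter's own documentation:\n\n.. code-block:: json\n\n  {\n      \"rules\" : [\n          {\n              \"asset_name\" : \"experimental_asset\",\n              \"action\" : \"exclude\"\n          }\n      ],\n      \"defaultAction\" : \"include\"\n  }\n\nWith such a rule placed at the start of an existing north pipeline, the named asset is dropped while all other assets continue to flow to the upstream system.\n\n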
There is also the limitation that your new data will be discarded from the buffer and can not then be sent to the existing north pipelines if you subsequently decide the data is good. Data with your new asset names, from your new pipeline, will only be sent once you remove the |assetFilter| from those pipelines in the north that send data to your upstream systems.\n\nDeveloping new north pipelines is less risky, as the data that comes from the storage service and is destined for your new pipeline to upstream systems is effectively duplicated as it leaves the storage system. The main risk is that this new service will count as if the data has been sent upstream as far as the storage system is concerned and may make your data eligible for operation by the purge system sooner than would otherwise be the case. If you wish to prevent this you can update the purge configuration to insist the data is sent on all north channels before being considered sent for the purposes of the purge system. In most circumstances this is a precaution that can be ignored, however if you have configured your Fledge system for aggressive purging of data you may wish to consider this.\n\nIncremental Development\n~~~~~~~~~~~~~~~~~~~~~~~\n\nThe Fledge pipeline mechanism is designed for and lends itself to a modular development of the data processing requirements of your application. The pipeline is built from a collection of small, targeted filters that each perform a small, incremental process on the data. When building your pipelines, especially when using the filters that allow the application of scripts to the data, you should consider this approach and not rebuild functionality that can be obtained by applying an existing filter to the pipeline. Rather, use that existing filter and add more steps to your pipeline; the Fledge environment is designed to provide minimal overhead when combining filters into a pipeline. 
The pipeline builder can also make use of well tried and tested filters, thus avoiding the overhead of developing and testing new functionality that is not needed.\n\nThis piecemeal approach can also be adopted in the process of building the pipeline, especially if you use the |assetFilter| to block data from progressing further through the Fledge system once it has been buffered in the storage layer. Simply add your south service, bring the service up and observe the data that is buffered from the service. You can now add another filter to the pipeline and observe how this alters the data that is being buffered. Since you have a block on the data flowing further within your system, this data will disappear as part of the normal purging process and will not end up in upstream systems to the north of Fledge.\n\nIf you are developing on a standalone Fledge instance, with no existing north services, and you still want your experimental data to disappear, this can be achieved by use of the purge process. Simply configure the purge process to frequently purge data and set the process to purge unsent data. This will mean that the data will remain in the buffer for you to examine for a short time before it is purged from that buffer. Simply adjust the purge interval to allow you enough time to view the data in the buffer. 
Provided all the experimental data has been purged before you make your system go live, you will not be troubled with your experimental data being sent upstream.\n\nRemember of course to reconfigure the purge process to be more in line with the duration you wish to keep the data for, and to turn off the purging of unsent data unless you are willing to lose data that can not be sent for a period of time greater than the purge interval.\n\nConfiguring a more aggressive purge system, with the purging of unsent data, is probably not something you would wish to do on an existing system with live data pipelines and should not be used as a technique for developing new pipelines on such a system.\n\nAn alternative approach for removing data from the system is to enable the *Developer Features* in the Fledge User Interface. This can be done by selecting the *Settings* page in the left hand menu and clicking the option on the bottom of that screen.\n\n+----------------------+\n| |developer_features| |\n+----------------------+\n\nAmongst the extra features introduced by selecting *Developer Features* will be the ability to manually purge data from the Fledge data store. This on-demand purging can be applied either to a single asset or to all assets within the data store. The manual purge operations are accessed via the *Assets & Readings* item in the Fledge menu. A number of new icons will appear when the *Developer Features* are turned on, one per asset and one that impacts all assets. \n\n+----------------+\n| |manual_purge| |\n+----------------+\n\n.. image:: images/eraser.jpg\n     :align: left\n\nThese icons resemble erasers and are located in each row of the assets and also in the top right corner next to the help icon. Clicking on the eraser icon in each of the rows will purge the data for just that asset, leaving other assets untouched. 
Clicking on the icon in the top right corner will purge all the assets currently in the data store.\n\nIn both cases a confirmation dialog will be displayed to guard against accidental use. If you choose to proceed, the selected data within the Fledge buffer, either all or a specific asset, will be erased. There is no way to undo this operation or to retrieve the data once it has been purged.\n\nAnother consequence that may occur when developing new pipelines is that assets are created during the development process which are not required in the finished pipeline. The asset however remains associated with the service, and the asset name and a count of the ingested readings will be displayed in the *South Services* page on the user interface.\n\n+----------------+\n| |deprecated_1| |\n+----------------+\n\nIt is possible to deprecate the relationship between the service and the asset name using the developer features of the user interface. To do this you must first enable *Developer Features* in the user interface settings page. Now when you view the *South Services* page you will see an eraser icon next to each asset listed for a service.\n\n+----------------+\n| |deprecated_2| |\n+----------------+\n\nIf you click on this icon you will be prompted to deprecate the relationship between the asset and the service. If you select *Yes* the relationship will be severed and the asset will no longer appear next to the service.\n\nDeprecating the relationship will not remove the statistics for the asset; it will merely remove the relationship with the service and hence it will not be displayed against the service.\n\nIf an asset relationship is deprecated for an asset that is still in use, it will automatically be reinstated the next time a reading is ingested for that asset. 
Since the statistics were not deleted when the relationship was deprecated, the previous readings will still be included in the statistics when the relationship is restored.\n\nThese *Developer Features* are designed to be of use when developing pipelines within Fledge; the functionality is not something that should be used in normal operation and the developer features should be turned off when pipelines are not being developed.\n\nSacrificial North System\n########################\n\nDeveloping north pipelines in a piecemeal fashion can be more of an issue as you are unlikely to want to put poorly formatted data into your upstream systems. One approach to this is to have a sacrificial north system of some type that you can use to develop the pipeline and determine if you are performing the process you need to on that pipeline. This way it is unimportant if that system becomes polluted with data that is not in the form you require. Ideally you would use a system of the same type to do your development and then switch to the production system when you are satisfied your pipeline is correct.\n\nIf this is not possible for some reason, a second choice solution would be to use another Fledge instance as your test north system. Rather than configure the north plugin you ultimately wish to use you would install the north HTTP plugin and connect this to a second Fledge instance running an HTTP plugin. Your data would then be sent to your new Fledge instance where you can then examine the data to see what was sent by the first Fledge instance. You then build up your north pipeline on that first Fledge instance in the same way you did with your south pipeline. 
Once satisfied you will need to carefully recreate your north pipeline against the correct north plugin, and then you may remove your experimental north pipeline and destroy your sacrificial Fledge instance that you used to buffer and view the data.\n\nOMF Specific Considerations\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nCertain north plugins present specific problems to the incremental development approach as changing the format of data that is sent to them can cause them internal issues. The |OMF| plugin that is used to send data to the AVEVA PI Server is one such plugin.\n\nThe problem with the PI Server is that it is designed to store data in fixed formats, therefore having data that is not of a consistent type, i.e. made up of the same set of attributes, can cause issues. In a PI Server each new data type becomes a new tag; this is not a problem if you are happy to use tag naming that is flexible. However if you require fixed name tags within the PI Server, using the |OMFhints| filter, this can be an issue for incremental development of your pipeline. Changing the properties of the tag will result in a new name being required for the tag.\n\nThe simplest approach is to do all the initial development without the fixed name and then do the name mapping as the final step in developing the pipeline. Although not ideal it gives a relatively simple approach to resolving the problem.\n\nShould you subsequently need to reuse the tag names with different types it becomes necessary to clear the type definitions from the PI Server by removing the element templates, the elements themselves and the cache. The PI Web API will then need to be restarted and the Fledge north plugin removed and recreated.\n\nExamining Data\n~~~~~~~~~~~~~~\n\nThe easiest way to examine the data you have ingested via your new south pipeline is to use the Fledge GUI to examine the data that currently resides within the buffer. 
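\n\nThe same buffered data can also be retrieved through the Fledge REST API. As a sketch, assuming a Fledge instance listening on the default port 8081 and a hypothetical asset named *sinusoid*, the first call below lists the buffered assets with a count of readings for each, and the second returns the most recent readings for the named asset:\n\n.. code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/asset | jq\n  $ curl -s 'http://localhost:8081/fledge/asset/sinusoid?limit=2' | jq\n\n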
You can view the data via the graph feature of the Assets & Readings page, which will show the time series data.\n\n+--------------+\n| |view_graph| |\n+--------------+\n\nIf you have data that is not timeseries by nature, such as strings, you may use the tabular display to show the non-timeseries data, images if there are any, or download the data to a spreadsheet view. This latter view will not contain any image data in the readings.\n\n+--------------------+\n| |view_spreadsheet| |\n+--------------------+\n\n.. _AccessingLogs:\n\nExamining Logs\n~~~~~~~~~~~~~~\n\nIt is important to view the logs for your service when building a pipeline; this is due to the Fledge goal that Fledge instances should run as unattended services and hence any errors or warnings generated are written to logs rather than to an interactive user session. The Fledge user interface does however provide a number of mechanisms for viewing the log data and filtering it to particular sources. You may view the log from the “System” item in the Log menu and then filter the source to your particular south or north service. \n\n+-------------+\n| |logview_1| |\n+-------------+\n\nAlternatively if you display the north or south configuration page for your service you will find an icon in the bottom left of the screen that looks like a page of text with the corner folded over. Simply click on this icon and the log screen will be displayed and automatically filtered to view just the logs from the service whose configuration you were previously editing.\n\n+-------------+\n| |logview_2| |\n+-------------+\n\nLogs are displayed with the most recent entry first, with older entries shown as you move down the page. You may move to the next page to view older log entries. It is also possible to view different log severities: fatal, error, warning, info and debug. 
By default a service will not write info and debug messages to the log; it is possible to turn these levels on via the advanced configuration options of the service. This will then cause the log entries to be written, but before you can view them you must set the appropriate level of severity filtering, as the user interface will filter out info and debug messages by default.\n\nIt is important to turn the logging level back down to warning and above once you have finished your debugging session; failure to do this will cause excessive log entries to be written to the system log file.\n\nAlso note that the logs are written to the logging subsystem of the underlying Linux system, either syslog or the messages mechanism depending upon your Linux distribution. This means that these log files will be automatically rotated by the operating system mechanisms and, under normal circumstances, the system will not fill the storage subsystem. Older log files will be kept for a short time, but will be removed automatically after a few days. This should be borne in mind if you have information in the log that you wish to keep. Also the user interface will only allow you to view data in the most recent log file.\n\nIt is also possible to configure the syslog mechanism to write log files to non-standard files or remote machines. The Fledge mechanisms for viewing the system logs do require that the standard names for log files are used.\n\nEnabling and Disabling Filters\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIt should be noted that each filter has an individual enable control; this has the advantage that it is easy to temporarily remove a filter from a pipeline during the development stage. 
However this does have the downside that it is easy to forget to enable a filter in the pipeline or accidentally add a filter in a disabled state.\n\nScripting Plugins\n~~~~~~~~~~~~~~~~~\n\nWhere there is not an existing plugin that does what is required, either in a filter or in south plugins where the data payload of a protocol is highly variable, such as generic REST or MQTT plugins, Fledge offers the option of using a scripting language in order to extend the off the shelf plugin set.\n\nThis scripting is done via the Python scripting language; both Python 3 and Python 2 are supported by Fledge, however it is recommended that the Python 3 variant, |python35|, be used by preference. The Python support allows external libraries to be used to extend the basic functionality of Python, however it should be noted that currently the Python libraries have to be manually installed on the Fledge host machine.\n\nScripting Guidelines\n####################\n\nThe user has the full range of Python functionality available to them within the script code they provide to this filter, however caution should be exercised as it is possible to adversely impact the functionality and performance of the Fledge system by misusing Python features to the detriment of Fledge’s own features.\n\nThe general principles behind all Fledge filters apply to the scripts included in these filters:\n\n  - Do not duplicate existing functionality provided by existing filters.\n\n  - Keep the operations small and focused. 
It is better to have multiple filters each with a specific purpose than to create large, complex Python scripts.\n\n  - Do not buffer large quantities of data, this will affect the footprint of the service and also slow the data pipeline.\n\nImporting Python Packages\n#########################\n\nThe user is free to import whatever packages they wish in a Python script, this includes the likes of the numpy package and others that are limited to a single instance within a Python interpreter.\n\nDo not import packages that you do not use or that are not required. This adds an extra overhead to the filter and can impact the performance of Fledge. Only import packages you actually need.\n\nPython does not provide a mechanism to remove a package that has previously been imported, therefore if you import a package in your script and then update your script to no longer import the package, the package will still be in memory from the previous import. This is because we reload updated scripts without closing down and restarting the Python interpreter. This is part of the sharing of the interpreter that is needed in order to allow packages such as numpy and scipy to be used. This can lead to misleading behavior, as when the service gets restarted the package will not be loaded and the script may break because it still makes use of the package.\n\nIf you remove a package import from your script and you want to be completely satisfied that the script will still run without it, then you must restart the service in which you are using the plugin. This can be done by disabling and then re-enabling the service.\n\nOne of the *Developer Features* of the Fledge user interface allows the management of the installed Python packages from within the user interface. This feature is turned on via the *Developer features* toggle in the *Settings* page and will add a new menu item called *Developer*. 
Navigating to this page will give you the option of managing packages.\n\n+---------+\n| |pip_1| |\n+---------+\n\nClicking on the *Manage packages* link will display the current set of Python packages that are installed on the machine.\n\n+---------+\n| |pip_2| |\n+---------+\n\nTo add a new package click on the *Add +* link in the top right corner. This will display a screen that allows you to enter details of a Python package to install.\n\n+---------+\n| |pip_3| |\n+---------+\n\nEnter the package name and an optional package version and then click on the *Install* button to install a new package via *pip3*.\n\nUse of Global Variables\n#######################\n\nYou may use global variables within your script and these globals will retain their value between invocations of the processing function. You may use global variables as a method to keep information between executions and perform such operations as trend analysis based on data seen in previous calls to the filter function.\n\nAll Python code within a single service shares the same Python interpreter and hence also shares the same set of global variables. This means you must be careful as to how you name global variables, and if you need to have multiple instances of the same filter in a single pipeline you must be aware that the global variables will be shared between them. If your filter uses global variables it is normally not recommended to have multiple instances of it in the same pipeline.\n\nIt is tempting to use this sharing of global variables as a method to share information between filters; this is not recommended and should not be used. 
There are several reasons for this:\n\n  - It creates data coupling between filters; each filter should be independent of every other filter.\n\n  - One of the filters sharing global variables may be disabled by the user with unexpected consequences.\n\n  - Filter order may be changed, resulting in data that is expected by a later filter in the chain not being available.\n\n  - Intervening filters may add or remove readings, resulting in the data in the global variables not referring to the same reading, or set of readings, that it was intended to reference.\n\nIf you wish one filter to pass data on to a later filter in the pipeline this is best done by adding data to the reading, as an extra data point. This data point can then be removed by the later filter. An example of this is the way Fledge adds |OMFhints| to readings that are processed and removed by the |OMF| north plugin.\n\nFor example, let us assume we have calculated some value *delta* that we wish to pass to a later filter; we can add this as a data point to our reading, which we will call *_hintDelta*.\n\n.. code-block:: Python\n\n  def myPython(readings):\n    for elem in list(readings):\n        reading = elem['readings']\n        ...\n        reading['_hintDelta'] = delta\n        ...\n    return readings\n\nThis is far better than using a global as it is attached to the reading to which it refers and will remain attached to that reading until it is removed. 
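\n\nThe later filter can then consume and remove this hint. The following is a minimal sketch, assuming the same reading structure as the example above; the function name *myLaterFilter* is purely illustrative.\n\n.. code-block:: Python\n\n  def myLaterFilter(readings):\n    for elem in list(readings):\n        reading = elem['readings']\n        # Remove the hint so it does not flow further down the pipeline\n        delta = reading.pop('_hintDelta', None)\n        if delta is not None:\n            # Use delta in this filter's processing\n            pass\n    return readings\n\nUsing *pop* with a default value means the filter behaves correctly even if the earlier filter that adds the hint has been disabled. 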
It also means that it is independent of the number of readings that are processed per call, and resilient to readings being added or removed from the stream.\n\nThe name chosen for this data point in the example above has no significance, however it is good practice to choose a name that is unlikely to occur in the data normally and portrays the usage or meaning of the data.\n\nFile IO Operations\n##################\n\nIt is possible to make use of file operations within a Python filter function, however it is not recommended for production use for the following reasons:\n\n  - Pipelines may be moved to other hosts where files may not be accessible.\n\n  - Permissions may change dependent upon how Fledge systems are deployed in the various different scenarios.\n\n  - Edge devices may also not have large, high performance storage available, resulting in performance issues for Fledge or failure due to lack of space.\n\n  - Fledge is designed to be managed solely via the Fledge API and applications that use the API. There is no facility within that API to manage arbitrary files within the filesystem.\n\nIt is common to make use of files during development of a script, writing information to them in order to aid development and debugging, however this should be removed, along with the associated imports of packages required to perform the file IO, when a filter is put into production.\n\nThreads within Python\n#####################\n\nIt is tempting to use threads within Python to perform background activity or to allow processing of data sets in parallel, however there is an issue with threading in Python: the Python Global Interpreter Lock, or GIL. The GIL prevents two Python statements from being executed within the same interpreter by two threads simultaneously. 
Because we use a single interpreter for all Python code running in each service within Fledge, if a Python thread is created that performs CPU intensive work within it, we block all other Python code from running within that Fledge service.\n\nWe therefore avoid using Python threads within Fledge as a means to run CPU intensive tasks, only using Python threads to perform IO intensive tasks, using the asyncio mechanism of Python 3.5.3 or later. In older versions of Fledge we used multiple interpreters, one per filter, in order to work around this issue, however that had the side effect that a number of popular Python packages, such as numpy, pandas and scipy, could not be used as they cannot support multiple interpreters within the same address space. It was decided that the need to use these packages was greater than the need to support multiple interpreters and hence we have a single interpreter per service in order to allow the use of these packages.\n\nInteraction with External Systems\n#################################\n\nInteraction with external systems, using network connections or any form of blocking communication, should be avoided in a filter. 
Any blocking operation will cause data to be blocked in the pipeline and risks either large queues of data accumulating in the case of asynchronous south plugins or data being missed in the case of polled plugins.\n\nScripting Errors\n################\n\nIf an error occurs in the plugin or Python script, including script coding errors and Python exceptions, details will be logged to the error log and data will not flow through the pipeline to the next filter or into the storage service.\n\nWarnings raised will also be logged to the error log but will not cause data to cease flowing through the pipeline.\n\nSee :ref:`AccessingLogs` for details of how to access the system logs.\n\nDebugging & Tracing Pipelines\n-----------------------------\n\nFledge has a feature that allows the debugging of pipelines in south and north services. It provides a mechanism to view the data as it flows through the pipeline.\n\nThe debugger is designed to show the data as it traverses the pipeline within the service. Users may:\n\n   - Show the data at each stage in the pipeline.\n\n   - Configure the number of readings to store at each stage in the pipeline.\n\n   - Pause the ingest into the pipeline.\n\n   - Suspend the output of the pipeline.\n\n   - Ingest a number of readings into the paused pipeline.\n\n   - Replay the currently buffered readings for a pipeline.\n\n   - Resume flow into and out of the pipeline.\n\n\nThe *replay* operation is useful when manipulating the configuration of the pipeline components. The user may change a filter configuration and replay the saved readings to see the impact of the configuration change. This may be repeated multiple times until the user is satisfied with the result of the filter configuration.\n\nCommand Line Interface\n~~~~~~~~~~~~~~~~~~~~~~\n\nA command line interface is provided to access the pipeline debugger. 
The interface can be started by running the *fledge* script that is used to start, stop and monitor Fledge or by calling the *scripts/debug/debug* script directly.\n\n.. note::\n\n   Due to the sensitive nature of pipeline debugging only users with edit permissions may use this interface. Additionally authentication must be enabled on the Fledge instance for this interface to work.\n\nStarting the Debugger\n#####################\n\nThe *debug* command should be given the name of a south or north service to debug.\n\n.. code-block:: bash\n\n   $ scripts/fledge -u admin debug Lathe\n   Debug: Lathe$ \n\nUpon starting, a new prompt will be displayed if the service is a valid service to be debugged and is running. The user may now enter the various debug commands at this prompt. To complete the debugging session type *exit* or Control-D.\n\nTo see a list of available commands enter the command *commands*.\n\n.. code-block:: bash\n\n   Debug: Lathe$ commands\n   attach:              Attach the pipeline debugger\n   buffer:              Return the contents of the buffers at every pipeline element\n   detach:              Detach the debugger from the pipeline\n   isolate:             Isolate the pipeline from the destination\n   replay:              Replay the buffered data through the pipeline\n   resumeIngest:        Resume the flow of data into the pipeline\n   setBuffer:           Set the number of readings to hold in each buffer, passing an integer argument\n   state:               Return the state of the debugger\n   step:                Allow readings to flow into the pipeline. Passed an optional number of readings to ingest; default to 1 if omitted\n   store:               Allow data to flow out of the pipeline into storage\n   suspendIngest:       Suspend the ingestion of data into the pipeline\n   Debug: Lathe$ \n\nAttach\n######\n\nThe first command to issue is usually the *attach* command. 
It attaches the debugger session to the service and begins the process of collecting data at each node within the pipeline. Until the debugger is attached it is not possible to view the data in the pipeline.\n\n.. code-block:: bash\n\n   Debug: Lathe$ attach\n   {\n     \"status\": \"ok\"\n   }\n   Debug: Lathe$\n\nThe data returned from the *attach* command is the status of running that command encoded within a JSON document.\n\nAt this point the debugger is attached and collecting data at each node within the pipeline. By default only one reading is retained at each point in the pipeline; this is the last reading seen at that point in the pipeline.\n\nBuffer\n######\n\nThe *buffer* command will display the data at each node in the pipeline. In the example pipeline used here there are several nodes, including a branch in the pipeline.\n\n+-----------------+\n| |LathePipeline| |\n+-----------------+\n\nThe output of the Lathe simulator south plugin is split into two branches of the pipeline. An asset filter is used at the start of each branch to control which assets are sent down that branch of the pipeline. In this case the *latheCurrent* readings are sent down one branch and all other readings are sent down the other branch.\n\nRunning the *buffer* command will display the data recorded on the input of each node in the pipeline.\n\n.. 
code-block:: bash\n\n   Debug: Lathe$ buffer\n   {\n     \"data\": [\n       {\n         \"name\": \"Branch\",\n         \"readings\": [\n           {\n             \"asset_code\": \"latheIR\",\n             \"user_ts\": \"2025-03-31 16:23:10.008024+00:00\",\n             \"ts\": \"2025-03-31 16:23:10.008024+00:00\",\n             \"reading\": {\n               \"gearbox\": 27.93525,\n               \"motor\": 29.6282,\n               \"headstock\": 25.51,\n               \"tailstock\": 20.52,\n               \"tool\": 24.0905\n             }\n           }\n         ]\n       },\n       [\n         {\n           \"name\": \"CurrentOnly\",\n           \"readings\": [\n             {\n               \"asset_code\": \"latheIR\",\n               \"user_ts\": \"2025-03-31 16:23:10.008024+00:00\",\n               \"ts\": \"2025-03-31 16:23:10.008024+00:00\",\n               \"reading\": {\n                 \"gearbox\": 27.93525,\n                 \"motor\": 29.6282,\n                 \"headstock\": 25.51,\n                 \"tailstock\": 20.52,\n                 \"tool\": 24.0905\n               }\n             }\n           ]\n         },\n         {\n           \"name\": \"RMS\",\n           \"readings\": [\n             {\n               \"asset_code\": \"latheCurrent\",\n               \"user_ts\": \"2025-03-31 16:23:10.008018+00:00\",\n               \"ts\": \"2025-03-31 16:23:10.008018+00:00\",\n               \"reading\": {\n                 \"current\": 767\n               }\n             }\n           ]\n         },\n         {\n           \"name\": \"Writer\",\n           \"readings\": [\n             {\n               \"asset_code\": \"latheCurrent RMS\",\n               \"user_ts\": \"2025-03-31 16:23:02.803881+00:00\",\n               \"ts\": \"2025-03-31 16:23:02.803881+00:00\",\n               \"reading\": {\n                 \"current\": 773.9413414465,\n                 \"currentpeak\": 47,\n                 \"currentcrest\": 0.0607281166\n               
}\n             }\n           ]\n         }\n       ],\n       {\n         \"name\": \"NonCurrent\",\n         \"readings\": [\n           {\n             \"asset_code\": \"latheIR\",\n             \"user_ts\": \"2025-03-31 16:23:10.008024+00:00\",\n             \"ts\": \"2025-03-31 16:23:10.008024+00:00\",\n             \"reading\": {\n               \"gearbox\": 27.93525,\n               \"motor\": 29.6282,\n               \"headstock\": 25.51,\n               \"tailstock\": 20.52,\n               \"tool\": 24.0905\n             }\n           }\n         ]\n       },\n       {\n         \"name\": \"Writer\",\n         \"readings\": [\n           {\n             \"asset_code\": \"latheIR\",\n             \"user_ts\": \"2025-03-31 16:23:10.008024+00:00\",\n             \"ts\": \"2025-03-31 16:23:10.008024+00:00\",\n             \"reading\": {\n               \"gearbox\": 27.93525,\n               \"motor\": 29.6282,\n               \"headstock\": 25.51,\n               \"tailstock\": 20.52,\n               \"tool\": 24.0905\n             }\n           }\n         ]\n       }\n     ]\n   }\n   Debug: Lathe$\n\nData is displayed as a JSON document. Each object in the data represents a node in the pipeline. Each pipeline is enclosed in a JSON array. The *name* property of the object is the name of the filter into which the data is about to be passed. There are two reserved names: *Branch* and *Writer*.\n\nThe *Branch* node represents the data that is about to be branched and sent down each of the separate branches of the pipeline. In the case of this example we see the data that comes from the south plugin and is immediately split to each branch.\n\nThe *Writer* node represents where the data is about to be written to the storage service, since this is a south service we are debugging. 
If we were debugging a north service, the *Writer* node represents data written to the north plugin.\n\nAll other names represent a filter name in the pipeline and the readings are the data that is passed into the filter.\n\nThe example above shows a single reading at each node within the pipeline; this is the default number of readings that are buffered. The readings shown are always the most recent readings to be seen at that point within the pipeline. It is possible to define how many readings are to be buffered at each point in the pipeline using the *setBuffer* command.\n\nsetBuffer\n#########\n\nThe *setBuffer* command is used to set the number of readings to be buffered at each point in the pipeline. The same number must be buffered at all nodes in the pipeline. It is not possible to buffer different numbers of readings at each node in the pipeline.\n\nThe *setBuffer* command is passed a number which is the number of readings to buffer. If omitted then the number of buffered readings will be set to 1.\n\n.. code-block:: bash\n\n   Debug: Lathe$ setBuffer 4\n   {\n     \"status\": \"ok\"\n   }\n   Debug: Lathe$ \n\nThe returned JSON document shows the status of running the *setBuffer* command.\n\n.. note::\n\n   It is possible to enter large numbers to the *setBuffer* command, however it is not recommended as this will considerably increase the memory footprint of the service that is attached to the debugger.\n\nAfter running the *setBuffer* command the next call to the *buffer* command will return up to the number of readings requested in the *setBuffer* command. It may take some time for the requested number to be buffered as it requires new data to be sent through the pipeline to fill those buffers.\n\nMultiple buffers are returned as an array in each of the nodes of the pipeline. Below is a sample for the first few nodes of our pipeline.\n\n.. 
code-block:: bash\n\n   Debug: Lathe$ setBuffer 3\n   {\n     \"status\": \"ok\"\n   }\n   Debug: Lathe$ buffer\n   {\n     \"data\": [\n       {\n         \"name\": \"Branch\",\n         \"readings\": [\n           {\n             \"asset_code\": \"latheCurrent\",\n             \"user_ts\": \"2025-04-01 08:18:42.262970+00:00\",\n             \"ts\": \"2025-04-01 08:18:42.262970+00:00\",\n             \"reading\": {\n               \"current\": 750\n             }\n           },\n           {\n             \"asset_code\": \"latheIR\",\n             \"user_ts\": \"2025-04-01 08:18:42.262975+00:00\",\n             \"ts\": \"2025-04-01 08:18:42.262975+00:00\",\n             \"reading\": {\n               \"gearbox\": 27.9788,\n               \"motor\": 29.9785,\n               \"headstock\": 23.65,\n               \"tailstock\": 20.1666666667,\n               \"tool\": 18.73\n             }\n           },\n           {\n             \"asset_code\": \"lathe\",\n             \"user_ts\": \"2025-04-01 08:18:42.262955+00:00\",\n             \"ts\": \"2025-04-01 08:18:42.262955+00:00\",\n             \"reading\": {\n               \"rpm\": 349,\n               \"x\": 0,\n               \"depth\": 40,\n               \"state\": \"Spining Up\"\n             }\n           }\n         ]\n       },\n       [\n         {\n           \"name\": \"CurrentOnly\",\n           \"readings\": [\n             {\n               \"asset_code\": \"latheCurrent\",\n               \"user_ts\": \"2025-04-01 08:18:42.262970+00:00\",\n               \"ts\": \"2025-04-01 08:18:42.262970+00:00\",\n               \"reading\": {\n                 \"current\": 750\n               }\n             },\n             {\n               \"asset_code\": \"latheIR\",\n               \"user_ts\": \"2025-04-01 08:18:42.262975+00:00\",\n               \"ts\": \"2025-04-01 08:18:42.262975+00:00\",\n               \"reading\": {\n                 \"gearbox\": 27.9788,\n                 \"motor\": 29.9785,\n        
         \"headstock\": 23.65,\n                 \"tailstock\": 20.1666666667,\n                 \"tool\": 18.73\n               }\n             },\n             {\n               \"asset_code\": \"lathe\",\n               \"user_ts\": \"2025-04-01 08:18:42.262955+00:00\",\n               \"ts\": \"2025-04-01 08:18:42.262955+00:00\",\n               \"reading\": {\n                 \"rpm\": 349,\n                 \"x\": 0,\n                 \"depth\": 40,\n                 \"state\": \"Spining Up\"\n               }\n             }\n           ]\n         },\n     ...\n\nThe example has been truncated at the second node merely to save space in the documentation.\n\nsuspendIngest\n#############\n\nWhen the debugger is attached to a service it does not stop the service from ingesting and processing new data; it merely adds a way to view the latest data as it traverses the pipeline. In situations where the user wishes to troubleshoot the operation of the pipeline it may be desirable to suspend the service from ingesting new data. This allows the data to be examined, the configuration to be modified and data resent through the pipeline to observe the impact of configuration changes. To stop new data from being ingested into the pipeline use the *suspendIngest* command.\n\n.. code-block:: bash\n\n   Debug: Lathe$ suspendIngest\n   {\n     \"status\": \"ok\"\n   }\n   Debug: Lathe$\n\nNo new data will be read into the service, either by calling the poll entry point for a south service or fetching data from storage for a north service.\n\n.. note::\n\n   If the south plugin is an asynchronous plugin, any new data that the plugin tries to send into the pipeline will be discarded whilst the pipeline ingest is suspended.\n\nIf troubleshooting a pipeline it is also useful to stop the pipeline sending data out to the storage layer, in the case of a south plugin, or upstream to the destination system in the case of a north plugin. 
This can be done using the *isolate* command.\n\nisolate\n#######\n\nStops the pipeline emitting data into the storage layer, if a south service is attached to the debugger, or to the upstream system if the attached service is a north service.\n\n.. code-block:: bash\n\n   Debug: Lathe$ isolate\n   {\n     \"status\": \"ok\"\n   }\n   Debug: Lathe$ \n\nstate\n#####\n\nIt is always possible to see the state of the debugger and the pipeline it is attached to by using the *state* command.\n\n.. code-block:: bash\n\n   Debug: Lathe$ state\n   {\n     \"debugger\": \"Attached\",\n     \"ingress\": \"Suspended\",\n     \"egress\": \"Isolated\"\n   }\n   Debug: Lathe$\n\nstep\n####\n\nWhen a pipeline has its ingest suspended it can be useful to allow one or more new readings to be ingested to see the impact of any configuration changes on new data. This can be done using the *step* command. It can be passed an optional number of readings to ingest. If no number is passed then a single reading will be ingested.\n\n.. code-block:: bash\n\n   Debug: Lathe$ step\n   {\n     \"status\": \"ok\"\n   }\n   Debug: Lathe$ step 4\n   {\n     \"status\": \"ok\"\n   }\n   Debug: Lathe$\n\nThese new readings will then be ingested into the pipeline. The buffers will be updated with the new data and the results buffered at each node within the pipeline.\n\nreplay\n######\n\nThe *replay* command is useful when the user has an isolated pipeline and they wish to see the impact of updating the configuration of one or more of the filters in the pipeline on the data that flows through the pipeline. Running the *replay* command will resend the data currently in the buffer at the first node in the pipeline through the pipeline again, updating the data in all other nodes in the pipeline.\n\n.. code-block:: bash\n\n   Debug: Lathe$ replay\n   {\n     \"status\": \"ok\"\n   }\n   Debug: Lathe$\n\n.. note::\n\n   The *replay* command should only be used if a pipeline has had ingest suspended. 
It is also probably sensible to isolate the pipeline to prevent readings with the same timestamp as readings already sent upstream from being sent again.\n\nstore\n#####\n\nThe *store* command is used to resume the storage of data that comes from the pipeline, in a south service, or to send the data upstream in the case of a north service. It effectively reverses the effect of the *isolate* command.\n\nresumeIngest\n############\n\nThe *resumeIngest* command will restart the ingest of data into the pipeline, reversing the impact of running the *suspendIngest* command.\n\ndetach\n######\n\nThe *detach* command should be run at the end of the debugging session to detach the debugger. Detaching the debugger will automatically resume the ingest and egress of the pipeline and return the pipeline to normal functioning.\n
  },
  {
    "path": "docs/check-sphinx.py",
    "content": "import subprocess\nimport pytest\n\n\n# noinspection PyClassHasNoInit\nclass TestDoc:\n\n    def test_linkcheck(self, tmpdir):\n        doctrees = tmpdir.join(\"doctrees\")\n        htmldir = tmpdir.join(\"html\")\n        subprocess.check_call([\"sphinx-build\", \"-W\", \"-blinkcheck\", \"-d\", str(doctrees), \".\", str(htmldir)])\n\n    def test_build_docs(self, tmpdir):\n        doctrees = tmpdir.join(\"doctrees\")\n        htmldir = tmpdir.join(\"html\")\n        subprocess.check_call([\"sphinx-build\", \"-W\", \"-bhtml\", \"-d\", str(doctrees), \".\", str(htmldir)])\n"
  },
  {
    "path": "docs/conf.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# Fledge documentation build configuration file, created by\n# sphinx-quickstart on Fri Sep 22 02:34:49 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\nimport subprocess\nimport datetime\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = []\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Fledge'\ncopyright = datetime.date.today().strftime(\"%Y\") + u', Dianomic Systems'\nauthor = u'Dianomic Systems'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = u''\n# The full version, including alpha/beta/rc tags.\nrelease = u''\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'keywords/README.rst', 'OMF.rst']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.  See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further.  For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# This is required for the alabaster theme\n# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars\nhtml_sidebars = {\n    '**': [\n        'about.html',\n        'navigation.html',\n        'relations.html',  # needs 'show_related': True theme option to display\n        'searchbox.html',\n        'donate.html',\n    ]\n}\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Fledgedoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n    # The paper size ('letterpaper' or 'a4paper').\n    #\n    # 'papersize': 'letterpaper',\n\n    # The font size ('10pt', '11pt' or '12pt').\n    #\n    # 'pointsize': '10pt',\n\n    # Additional stuff for the LaTeX preamble.\n    #\n    # 'preamble': '',\n\n    # Latex figure (float) alignment\n    #\n    # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n#  author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n    (master_doc, 'Fledge.tex', u'Fledge Documentation',\n     u'Dianomic Systems', 'manual'),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n    (master_doc, 'fledge', u'Fledge Documentation',\n     [author], 1)\n]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n#  dir menu entry, description, category)\ntexinfo_documents = [\n    (master_doc, 'Fledge', u'Fledge Documentation',\n     author, 'Fledge', 'One line description of project.',\n     'Miscellaneous'),\n]\n\n\nhtml_context = {\n    'css_files': [\n        '_static/theme_overrides.css',  # override wide tables in RTD theme\n        '_static/version_menu.css',     # override options from versions menu\n    ],\n     }\n\n# Pass Plugin DOCBRANCH argument in Makefile ; by default develop\n# NOTE: During release time we need to replace DOCBRANCH with actual released version\nsubprocess.run([\"make generated DOCBRANCH='develop'\"], shell=True, check=True)\n"
  },
  {
    "path": "docs/control.rst",
    "content": ".. Images\n.. |setpoint_1| image:: images/setpoint_1.jpg\n.. |setpoint_2| image:: images/setpoint_2.jpg\n.. |setpoint_3| image:: images/setpoint_3.jpg\n.. |advanced_south| image:: images/advanced_south.jpg\n.. |edge_control_path| image:: images/edge_control_path.jpg\n.. |sine_in| image:: images/sine_in.jpg\n.. |sine_out5| image:: images/sine_out5.jpg\n.. |sine_out_change| image:: images/sine_out_change.jpg\n.. |end_to_end| image:: images/EndToEnd.jpg\n.. |north_map1| image:: images/north_map1.jpg\n.. |north_map2| image:: images/north_map2.jpg\n.. |north_map3| image:: images/north_map3.jpg\n.. |north_map4| image:: images/north_map4.jpg\n.. |opcua_server| image:: images/opcua_server.jpg\n.. |dispatcher_config| image:: images/dispatcher-config.jpg\n.. |pipeline_list| image:: images/control/pipeline_list.jpg\n.. |pipeline_add| image:: images/control/pipeline_add.jpg\n.. |pipeline_menu| image:: images/control/pipeline_menu.jpg\n.. |pipeline_model| image:: images/control/pipeline_model.jpg\n.. |pipeline_source| image:: images/control/pipeline_source.jpg\n.. |pipeline_filter_add| image:: images/control/pipeline_filter_add.jpg\n.. |pipeline_filter_config| image:: images/control/pipeline_filter_config.jpg\n.. |pipeline_context_menu| image:: images/control/pipeline_context_menu.jpg\n.. |pipeline_destination| image:: images/control/pipeline_destination.jpg\n.. |control_api_1| image:: images/control/control_api_1.jpg\n.. |control_api_2| image:: images/control/control_api_2.jpg\n.. |control_api_3| image:: images/control/control_api_3.jpg\n.. |control_api_4| image:: images/control/control_api_4.jpg\n.. |control_api_5| image:: images/control/control_api_5.jpg\n.. |control_api_6| image:: images/control/control_api_6.jpg\n.. |control_api_7| image:: images/control/control_api_7.jpg\n.. |control_api_8| image:: images/control/control_api_8.jpg\n.. |control_api_9| image:: images/control/control_api_9.jpg\n.. |control_api_10| image:: images/control/control_api_10.jpg\n.. 
|features| image:: images/features.jpg\n\n.. Links\n.. |ExpressionFilter| raw:: html\n\n   <a href=\"plugins/fledge-filter-expression/index.html\">expression filter</a>\n\n.. |DeltaFilter| raw:: html\n\n   <a href=\"plugins/fledge-filter-delta/index.html\">delta filter</a>\n\n************************\nFledge Control Features\n************************\n\nFledge supports facilities that allow control of devices via the south service and plugins. This control is known as *set point control* as it is not intended for real time critical control of devices but rather to modify the behavior of a device based on one of many different information flows. The latency involved in these control operations is highly dependent on the control path itself and also the scheduling limitations of the underlying operating system. Hence the caveat that the control functions are not real time or guaranteed to be actioned within a specified time window. This does not mean, however, that they can not be used for non-critical closed loop control, but we would not advise the use of this functionality in safety critical situations.\n\n.. note::\n\n   The control features within Fledge may be disabled globally. The feature configuration is located in the *Configuration* menu item under the *Advanced::Features* configuration category. The *Control* toggle enables or disables access to control features. Administrative rights are required to update the features that are enabled or disabled.\n\n   +------------+\n   | |features| |\n   +------------+\n\nControl Functions\n=================\n\nThere are two types of control function supported:\n\n  - Modify the value in a device via the south service and plugin.\n\n  - Request the device to perform an action.\n\nSet Point\n---------\n\nSetting the value within the device is known as a set point action in Fledge. This can be as simple as setting a speed variable within a controller for a fan or it may be more complex. 
Typically a south plugin would provide a set of values that can be manipulated, giving each a symbolic name that would be available for a set point command. The exact nature of these is defined by the south plugin.\n\nOperation\n---------\n\nOperations, as the name implies, provide a means for the south service to request a device to perform an operation, such as reset or re-calibrate. The names of these operations and any arguments that can be given are defined within the south plugin and are specific to that south plugin.\n\nControl Paths\n=============\n\nSet point control may be invoked via a number of paths within Fledge\n\n  - As the result of a notification within Fledge itself.\n\n  - As a result of a request via the Fledge public REST API.\n\n  - As a result of a control message flowing from a north side system into a north plugin and being routed onward to the south service.\n\nThe use of a notification in the Fledge instance itself provides the fastest response for an edge notification. All the processing for this is done on the edge by Fledge itself.\n\nAs with the data ingress and egress features of Fledge it is also possible to build filter pipelines in the control paths in order to alter the behavior and process the data in the control path. Pipelines in the control path are defined between the different end points of control operations and are defined such that the same pipeline can be utilized by multiple control paths. See :ref:`ControlPipelines`\n\nEdge Based Control\n------------------\n\nEdge based control is the name we use for a class of control applications that take place solely within the Fledge instance at the edge. The data that is required for the control decision to be made is gathered in the Fledge instance, the logic to trigger the control action runs in the Fledge instance and the control action is taken within the Fledge instance. 
Typically this will involve one or more south plugins to gather the data required to make the control decision, possibly some filters to process that data, the notification engine to make the decision and one or more south services to deliver the control messages.\n\nAs an example of how edge based control might work let's consider the following case.\n\nWe have a machine tool that is being monitored by Fledge using the OPC/UA south plugin to read data from the machine tool's controlling PLC. As part of that data we receive an asset which contains the temperature of the motor which is running the tool. We can assume this asset is called *MotorTemperature* and it contains a single data point called *temperature*. \n\nWe also have a fan unit that is able to cool that motor which is controlled via a Modbus interface. The Modbus interface contains a coil that toggles the fan on and off and a register that controls the speed of the fan. We configure the *fledge-south-modbus* as a service called *MotorFan* with a control map that will map the coil and register to a pair of set points. \n\n.. code-block:: JSON\n\n   {\n       \"values\" : [\n                      {\n                          \"name\" : \"run\",\n                          \"coil\" : 1\n                      },\n                      {\n                          \"name\"     : \"speed\",\n                          \"register\" : 1\n                      }\n                  ]\n   }\n\n+--------------+\n| |setpoint_1| |\n+--------------+\n\nIf the measured temperature of the motor goes above 35 degrees centigrade we want to turn the fan on at 1200 RPM. We create a new notification to do this. 
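\n\nWith the control map above in place, the notification delivery configured below will send control messages to the *MotorFan* service using the *run* and *speed* set points. As an illustration only, a trigger message that turns the fan on and sets the speed to 1200 RPM might take a form such as the following; the exact syntax accepted is defined by the *setpoint* delivery plugin, so its documentation should be consulted.\n\n.. code-block:: JSON\n\n   {\n       \"values\" : {\n            \"run\"   : \"1\",\n            \"speed\" : \"1200\"\n            }\n   }\n\n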
The notification uses the *threshold* rule and triggers if the asset *MotorTemperature*, data point *temperature* is greater than 35.\n\n+--------------+\n| |setpoint_2| |\n+--------------+\n\nWe select the *setpoint* delivery plugin from the list and configure it.\n\n+--------------+\n| |setpoint_3| |\n+--------------+\n\n  - In *Service* we set the name of the service we are going to use to control the fan, in this case *MotorFan*.\n\n  - In *Trigger Value* we set the control message we are going to send to the service. This will turn the fan on and set the speed to 1200 RPM.\n\n  - In *Cleared Value* we set the control message we are going to send to turn off the fan when the value falls below 35 degrees.\n\nThe plugin is enabled and we go on to set the notification type to toggled, since we want to turn off the fan if the motor cools down, and set a retrigger time to prevent the fan switching on and off too quickly. The notification type and the retrigger time are important parameters for tuning the behavior of the control system and are discussed in more detail below.\n\nIf we required the fan to speed up at a higher temperature then this could be achieved with a second notification. In this case it would have a higher threshold value and would set the speed to a higher value in the trigger condition and set it back to 1200 in the cleared condition. Since the notification type is *toggled* the notification service will ensure that these are called in the correct order.\n\nData Substitution\n~~~~~~~~~~~~~~~~~\n\nThere is another option that can be considered in our example above that would allow the fan speed to be dependent on the temperature: the use of data substitution in the *setpoint* notification delivery.\n\nData substitution allows the values of a data point in the asset that caused the notification rule to trigger to be substituted into the values passed in the set point operation. 
The data that is available in the substitution is the same data that is given to the notification rule that caused the alert to be triggered. This may be a single asset with all of its data points for simple rules or may be multiple assets for more complex rules. If the notification rule is given averaged data then it is these averages that will be available rather than the individual values.\n\nParameters are substituted using a simple macro mechanism: the name of an asset and a data point within the asset is inserted into the value surrounded by the *$* character. For example to substitute the value of the *temperature* data point of the *MotorTemperature* asset into the *speed* set point parameter we would define the following in the *Trigger Value*\n\n.. code-block:: JSON\n\n   {\n       \"values\" : {\n            \"speed\"  : \"$MotorTemperature.temperature$\"\n            }\n   }\n\nNote that we separate the asset name from the data point name using a period character.\n\nThis would have the effect of setting the fan speed to the temperature of the motor. Whilst allowing us to vary the speed based on temperature it would probably not be what we want as the fan speed is too low. We need a way to map a temperature to a higher speed.\n\nA simple option is to use the macro mechanism to append a couple of 0s to the temperature; a temperature of 21 degrees would result in a fan speed of 2100 RPM.\n\n.. code-block:: JSON\n\n   {\n       \"values\" : {\n            \"speed\"  : \"$MotorTemperature.temperature$00\"\n            }\n   }\n\nThis works, but is a little primitive and limiting. Another option is to add data to the asset that triggers the notification. In this case we could add an expression filter to create a new data point with a desired fan speed. If we were to add an expression filter and give it the expression *desiredSpeed = temperature > 20 ? temperature * 50 + 1200 : 0* then we would create a new data point in the asset called *desiredSpeed*. 
The value of *desiredSpeed* would be 0 if the temperature was 20 degrees or below, however for temperatures above this it would be 1200 plus 50 times the temperature. \n\nThis new desired speed can then be used to set the speed in the *setpoint* notification plugin.\n\n.. code-block:: JSON\n\n   {\n       \"values\" : {\n            \"speed\"  : \"$MotorTemperature.desiredSpeed$\"\n            }\n   }\n\nThe user then has the choice of adding the desired speed item to the data stored in the north, or adding an asset filter in the north to remove this data point from the data that is sent onward to the north.\n\nTuning edge control systems\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe set point control features of Fledge are not intended to replace real time control applications such as would be seen in PLCs that are typically implemented in ladder logic, however Fledge does allow for high performance control to be implemented within the edge device. The precise latency in control decisions is dependent on a large number of factors and there are various tuning parameters that can be used to reduce the latency in the control path.\n\nIn order to understand the latency inherent in the control path we should first start by examining that path to discover where latency can occur. To do this we will choose a simple case of a single south plugin that is gathering data required by a control decision within Fledge. 
The control decision will be taken in a notification rule and delivered via the *fledge-notify-setpoint* plugin to another south service.\n\nA total of four services within Fledge will be involved in the control path\n\n+---------------------+\n| |edge_control_path| |\n+---------------------+\n\n  - the south service that is gathering the data required for the decision\n\n  - the storage service that will dispatch the data to the notification service\n\n  - the notification service that will run the decision rule and trigger the delivery of the control message\n\n  - the south service that will send the control input to the device that is being controlled\n\nEach of these services can add to that latency in the control path, however the way in which these are configured can significantly reduce that latency.\n\nThe south service that is gathering the data will typically either be polling a device or obtaining data asynchronously from the device. This will be sent to the ingest thread of the south service where it will be buffered before sending the data to the storage service.\n\nThe advanced settings for the south service can be used to control how often that data is sent to the storage service. Since it is the storage service that is responsible for routing the data onward to the notification service this impacts the latency of the delivery of the control messages.\n\n+------------------+\n| |advanced_south| |\n+------------------+\n\nThe above shows the default configuration of a south service. In this case data will not be sent to the storage service until there are either 100 readings buffered in the south service, or the oldest reading in the south service buffer has been in the buffer for 5000 milliseconds. In this example we are reading 1 new reading every second, therefore we will send data to the storage service every 5 seconds, when the oldest reading in the buffer has been there for 5000 milliseconds. 
When it sends data it will send all the data it has buffered, in this case 5 readings as one block. If the oldest reading is the one that triggers the notification we have therefore introduced a 5 second latency into the control path.\n\nThe control path latency can be reduced by reducing the *Maximum Reading Latency* of this south plugin. This will of course put greater load on the system as a whole and should be done with caution as it increases the message traffic between the south service and the storage service.\n\nThe storage service has little impact on the latency; it is designed such that it will forward data it receives for buffering to the notification service in parallel to buffering it. The storage service will only forward data the notification service has subscribed to receive and will forward that data in the blocks it arrives at the storage service in. If a block of 5 readings arrives at the storage service then all 5 will be sent to the notification service as a single block.\n\nThe next service in the edge control path is the notification service; this is perhaps the most complex step in the journey. The behavior of the notification service is very dependent upon how each individual notification instance has been configured; the factors that are important are the notification type, the retrigger interval and the evaluation data options.\n\nThe notification type is used to determine when notifications are delivered to the delivery channel, in the case of edge control this might be the *setpoint* plugin or the *operation* plugin. Fledge implements three options for the notification type\n\n    - **One shot**: A one shot notification is sent once when the notification triggers but will not be resent again if the notification triggers on successive evaluations. Once the evaluation does not trigger, the notification is cleared and will be sent again the next time the notification rule triggers.  
One shot notifications may be further tailored with a maximum repeat frequency, e.g. no more than once in any 15 minute period.\n\n    - **Toggle**: A toggle notification is sent when the notification rule triggers and will not be resent again until the rule fails to trigger, in exactly the same way as a one shot trigger. However in this case when the notification rule first stops triggering a cleared notification is sent.  Again this may be modified by the addition of a maximum repeat frequency.\n\n    - **Retriggered**: A retriggered notification will continue to be sent when a notification rule triggers. The rate at which the notification is sent can be controlled by a maximum repeat frequency, e.g. send a notification every 5 minutes until the condition fails to trigger.\n\nIt is very important to choose the right type of notification in order to ensure the data delivered in your set point control path is what you require. The other factor that comes into play is the *Retrigger Time*, which defines a dead period during which notifications will not be sent regardless of the notification type.\n\nSetting a retrigger time that is too high will mean that data that you expect to be sent will not be sent. For example, if you have a new value that you wish to be updated once every 5 seconds then you should use a retriggered type notification and set the retrigger time to less than 5 seconds.\n\nIt is very important to understand however that the retrigger time defines when notifications can be delivered, it does not relate to the interval between readings. As an example, assume we have a retrigger time of 1 second and a reading that arrives every 2 seconds that causes a notification to be sent.\n\n  - If the south service is left with the default buffering configuration it will send the readings in a block to the storage service every 5 seconds, each block containing 2 readings. 
\n\n  - These are sent to the notification service in a single block of two readings.\n\n  - The notification service will evaluate the rule against the first reading in the block.\n\n  - If the rule triggers the notification service will send the notification via the set point plugin.\n\n  - The notification service will now evaluate the rule against the second reading.\n\n  - If the rule triggers the notification service will note that it has been less than 1 second since it sent the last notification and it will not deliver another notification.\n\nTherefore, in this case you appear to see only half of the data points you expect being delivered to your set point notification. In order to rectify this you must alter the tuning parameters of the south service to send data more frequently to the storage service.\n\nThe final hop in the edge control path is the call from the notification service to the south service and the delivery via the plugin in the south service. This is done using the south service interface and is run on a separate thread in the south service. The result would normally be expected to be very low latency, however it should be noted that plugins commonly protect against simultaneous ingress and egress, therefore if the south service being used to deliver the data to the end device is also reading data from that device, there may be a requirement for the current read to complete before the write operation can commence.\n\nTo illustrate how the buffering in the south service might impact the data sent to the set point control service we will use a simple example of sine wave data being created by a south plugin and have every reading sent to a modbus device and then read back from the modbus device. 
The input data as read at the south service gathering the data is a smooth sine wave.\n\n+-----------+\n| |sine_in| |\n+-----------+\n\nThe data observed that is written to the modbus device is not however a clean sine wave as readings have been missed due to the retrigger time eliminating data that arrived in the same buffer.\n\n+-------------+\n| |sine_out5| |\n+-------------+\n\nSome jitter caused by occasional differences in the readings that arrive in a single block can be seen in the data as well.\n\nChanging the buffering on the south service to only buffer a single reading results in a much smoother sine wave, as can be seen below as the data is seen to transition from one buffering policy to the next.\n\n+-------------------+\n| |sine_out_change| |\n+-------------------+\n\nAt the left end of the graph the south service is buffering 5 readings before sending data onward, on the right end it is only buffering one reading.\n\nEnd to End Control\n------------------\n\nThe end to end control path in Fledge is a path that allows control messages to enter the Fledge system from the north and flow through to the south. Both the north and south plugins involved in the path must support control operations. A dedicated service, the control dispatcher, is used to route the control messages from the source of the control input, the north service, to the objects of the control operations, via the south service and plugins. Multiple south services may receive control inputs as a result of a single north control input.\n\n+--------------+\n| |end_to_end| |\n+--------------+\n\nIt is the job of the north plugin to define how the control input is received, as this is specific to the protocol of the device to the north of Fledge. The plugin then takes this input and maps it to a control message that can be routed by the dispatcher. 
The way this mapping is defined is specific to each of the north plugins that provide control input.\n\nThe control messages that the dispatcher is able to route are defined by the following set\n\n  - Write one or more values to a specified south service\n\n  - Write one or more values to the south service that ingests a specified asset\n\n  - Write one or more values to all south services supporting control\n\n  - Run an automation script within the dispatcher service\n\n  - Execute an operation on a specified south service\n\n  - Execute an operation on the south service that ingests a specified asset\n\n  - Execute an operation on all the south services that support control\n\nAn example of how a north plugin might define this mapping is shown below\n\n+--------------+\n| |north_map1| |\n+--------------+\n\nIn this case we have an OPCUA north plugin that offers a writable node called *test*, we have defined this as accepting integer values and also set a destination of *service* and a name of *fan0213*. When the OPCUA node test is written the plugin will send a control message to the dispatcher to ask it to perform a write operation on the named service.\n\nAlternatively the dispatcher can send the request based on the assets that the south service is ingesting. In the following example, again taken from the OPCUA north plugin, we send a value of *EngineSpeed*, which is an integer within the OPCUA server that Fledge presents, to the service that is ingesting the asset *pump0014*.\n\n+--------------+\n| |north_map2| |\n+--------------+\n\nIf browsing the OPCUA server which Fledge is offering via the north service you will see a node with the browse name *EngineSpeed* which when written will cause the north plugin to send a message to the dispatcher service and ultimately cause the south service ingesting *pump0014* to have that value written to its *EngineSpeed* item. 
That south service need not be an OPCUA service, it could be any south service that supports control.\n\n+----------------+\n| |opcua_server| |\n+----------------+\n\nIt is also possible to get the dispatcher to send the control request to all services that support control. In the case of the OPCUA north plugin this is specified by omitting the other types of destination.\n\n+--------------+\n| |north_map3| |\n+--------------+\n\nAll south services that support control will be sent the request; these may be of many different types and are free to ignore the request if it can not be mapped locally to a resource to update. The semantics of how the request is treated are determined by the south plugin; each plugin receiving the request may take different actions.\n\nThe dispatcher can also be instructed to run a local automation script (these are discussed in more detail below) when a write occurs on the OPCUA node via this north plugin. In this case the control map is passed a script key and name to execute. The script will receive the value *EngineSpeed* as a parameter of the script.\n\n+--------------+\n| |north_map4| |\n+--------------+\n\n.. note::\n\n  This is an example and does not mean that all or any plugins will use the exact syntax for mapping described above, the documentation for your particular plugin should be consulted to confirm the mapping implemented by the plugin.\n\nAPI Control Invocation\n======================\n\nFledge allows the administrator of the system to extend the REST API of Fledge to encompass custom defined entry points for invoking control operations within the Fledge instance. These configured API Control entry points can be called with a PUT operation to a URL of the form\n\n.. 
code-block:: console\n\n  /fledge/control/request/{name}\n\n\nWhere *{name}* is a symbolic name that is defined by the user who configures the API request.\n\nA payload can be passed as a JSON document that may be processed into the request that will be sent to the control dispatcher. This process is discussed below.\n\nThis effectively adds a new entry point to the Fledge public API; calling this entry point will call the control dispatcher to route a control operation from the public API to one or more south services. The definition of the Control API Entry point allows restrictions to be placed on what calls can be made, by whom and with what data.\n\nDefining API Control Entry Points\n---------------------------------\n\nA control entry point has the following attributes\n\n  - The type of control, write or operation\n\n  - The destination for the control. This is the ultimate destination, all control requests will be routed via the control dispatcher. The destination may be one of service, asset, script or broadcast.\n\n  - The operation name if the type is operation.\n\n  - A set of constant key/value pairs that are sent either as the items to be written or as parameters if the type of the entry point is operation. Constants are always passed to the dispatcher call with the values defined here.\n\n  - A set of key/value pairs that define the variables that may be passed into the control entry point in the request. The value given here represents a default value to use if no corresponding key is given in the request made via the public API.\n\n  - A set of usernames for the users that are allowed to make requests to this entry point. If this is empty then no users are permitted to call the API entry point unless anonymous access has been enabled. See below.\n\n  - A flag, with the key anonymous, that states if the entry point is open to all users, including where users are not authenticated with the API. 
It may take the values “true” or “false”.\n\n  - The anonymous flag is really intended for situations when no user is logged into the system, i.e. authentication is not mandatory. It also serves a double purpose to allow a control API call to be open to all users. It is **not** recommended that this flag is set to “true” in production environments.\n\nTo define a new control entry point a POST request is made to the URL\n\n.. code-block:: console\n\n  /fledge/control/manage\n\n\nWith a payload such as\n\n.. code-block:: JSON\n\n  {\n        \"name\"           : \"FocusCamera1\",\n        \"description\"    : \"Perform a focus operation on camera 1\",\n        \"type\"           : \"operation\",\n        \"operation_name\" : \"focus\",\n        \"destination\"    : \"service\",\n        \"service\"        : \"camera1\",\n        \"constants\"      : {\n                                \"units\"    : \"cm\"\n                           },\n        \"variables\"      : {\n                                \"distance\" : \"100\"\n                           },\n        \"allow\"          : [ \"john\", \"fred\" ],\n        \"anonymous\"      : false\n  }\n\n\nThe above will define an API entry point that can be called with a PUT request to the URL\n\n.. code-block:: console\n\n  /fledge/control/request/FocusCamera1\n\nThe payload of the request is defined by the set of variables that was created when the entry point was defined. Only keys given as variable names in the definition can be included in the payload of this call. If any variable is omitted from the payload of this call, then the default value given in the definition will be used as the value of that variable in the payload passed to the dispatcher call that will action the request.\n\nThe payload sent to the dispatcher will always contain all of the variables and constants defined in the API entry point. 
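\n\nFollowing the *FocusCamera1* example above, a call to this entry point might supply a value for the *distance* variable only. A hypothetical request payload would then be\n\n.. code-block:: JSON\n\n  {\n        \"distance\" : \"150\"\n  }\n\nThe dispatcher would then receive the constant *units* with its defined value of *cm* and the variable *distance* with the value *150* rather than the default of *100*.\n\n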
The values of the constants are always taken from the original definition, whereas the values of the variables can be given in the public API call; if omitted, the defaults defined when the entry point was created will be used.\n\nGraphical User Interface\n------------------------\n\nThe GUI functionality is accessed via the *API Entry Points* sub-menu of the *Control* menu in the left-hand menu pane. Selecting this option will display a screen that appears as follows.\n\n+-----------------+\n| |control_api_1| |\n+-----------------+\n\nAdding A Control API Entry Point\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\nClicking on the *Add +* item in the top right corner will allow a new entry point to be defined.\n\n+-----------------+\n| |control_api_2| |\n+-----------------+\n\nFollowing the above example we can add the name of the entry point and select the type of control request we wish to make from the drop down menu.\n\n+-----------------+\n| |control_api_3| |\n+-----------------+\n\nWe then enter the destination, in this case service, by selecting it from the drop down. We can also enter the service name.\n\n+-----------------+\n| |control_api_4| |\n+-----------------+\n\nWe can add constant and variable parameters to the entry point via the *Parameters* pane of the add entry page.\n\n+-----------------+\n| |control_api_5| |\n+-----------------+\n\nClicking on the *+ Add new variable* or *+ Add new constant* items will add a pair of entry fields to allow you to enter the name and value for the variable or constant.\n\n+-----------------+\n| |control_api_6| |\n+-----------------+\n\nYou may delete a variable or constant by clicking on the *x* icon next to the entry.\n\nThe *Execution Access* pane allows control of who may execute the endpoint. Selecting the *Anonymous* toggle button will allow any user to execute the API. 
This is not recommended in a production environment.\n\n+-----------------+\n| |control_api_7| |\n+-----------------+\n\nThe *Allow Users* drop down provides a means to limit which users may run the entry point and provides a list of defined users within the system to choose from.\n\nFinally a textual description of the operation may be given in the *Description* field.\n\nClicking on *Save* will save and enable the new API entry point. The new entry point will be displayed on the resultant screen along with any others that have been defined previously.\n\n+-----------------+\n| |control_api_8| |\n+-----------------+\n\nClicking on the three vertical dots will display a menu that allows the details of the entry point to be viewed and updated or to delete the new entry point.\n\n+-----------------+\n| |control_api_9| |\n+-----------------+\n\nIt is also possible to execute the entry point from the GUI by clicking on the name of the entry point. You will be prompted to enter values for any variables that have been defined.\n\n+------------------+\n| |control_api_10| |\n+------------------+\n\nControl Dispatcher Service\n==========================\n\nThe *control dispatcher* service is a service responsible for receiving control messages from other components of the Fledge system and taking the necessary actions against the south services in order to achieve the requested result. This may be as simple as forwarding the write or operation request to one or more south services, or it may require the execution of an automation script by the *dispatcher service*.\n\nForwarding Requests\n-------------------\n\nThe *service dispatcher* supports three forwarding regimes which may be used to forward either write requests or operation requests; these are:\n\n  - Forward to a single service using the name of the service. 
The caller of the dispatcher must provide the name of the service to which the request will be sent.\n\n  - Forward to a single service that is responsible for ingesting a named asset into the Fledge system. The caller of the dispatcher must provide the name of an asset; the *service dispatcher* will then look this asset up in the asset tracker database to determine which service ingested the named asset. The request will then be forwarded to that service.\n\n  - Forward the request to all south services that are currently running and that support control operations. Note that if a service is not running then the request will not be buffered for later sending.\n\nAutomation Scripts\n------------------\n\nThe control dispatcher service supports a limited scripting facility designed to allow users to easily create sequences of operations that can be executed in response to a single control write operation. Scripts are created within Fledge, exist independently of any single control operation and may be executed by more than one control input. These scripts consist of a linear set of steps, each of which results in one of a number of actions; the actions supported are\n\n  - Perform a write request. A new write operation is defined in the step and it may take the form of any of the three styles of forwarding supported by the dispatcher: write to a named service, write to a service providing an asset or write to all south services.\n\n  - Perform an operation request on one or all south services. As with the write request above, the same three ways of defining the target south service are supported.\n\n  - Delay the execution of a script. Add a delay between execution of the script steps.\n\n  - Update the Fledge configuration. Change the value of a configuration item within the system.\n\n  - Execute another script. 
A mechanism for calling another named script; the named script is executed and then the calling script will continue.\n\nThe same data substitution rules described above can also be used within the steps of an automation script. This allows data that is sent to the write or operation request in the dispatcher to be substituted in the steps themselves. For example, a request to run a script with the values *param1* set to *value1* and *param2* set to *value2* would result in a step that wrote the value *$param1$* to a south service actually writing the value *value1*, i.e. the value of *param1*.\n\nEach step may also have a condition associated with it; if specified, that condition must evaluate to true for the step to be executed. If it evaluates to false then the step is not executed and execution moves to the next step in the script.\n\n.. include:: control_scripts.rst\n\nStep Conditions\n~~~~~~~~~~~~~~~\n\nThe conditions that can be applied to a step allow for the checking of the values in the original request sent to the dispatcher. For example, attaching a condition of the form\n\n.. code-block:: console\n\n   speed != 0\n\nto a step would result in the step being executed only if the parameter called *speed* in the original request to the dispatcher had a value other than 0.\n\nConditions may be defined using the equals and not equals operators, or, for numeric values, also the greater than and less than operators.\n\n.. include:: acl.rst\n\nConfiguration\n-------------\n\nThe *control dispatcher service* has a small number of configuration items that are available in the *Dispatcher* configuration category within the general Configuration menu item on the user interface.\n\nTwo subcategories exist, Server and Advanced.\n\nServer Configuration\n~~~~~~~~~~~~~~~~~~~~\n\nThe server section contains a single option which can be used to either turn on or off the forwarding of control messages to the various services within Fledge. 
Clicking this option off will turn off all control message routing within Fledge.\n\nAdvanced Configuration\n~~~~~~~~~~~~~~~~~~~~~~\n\n+---------------------+\n| |dispatcher_config| |\n+---------------------+\n\n  - **Minimum Log Level**: Defines the minimum level at which logs will be written to the system log.\n\n  - **Maximum number of dispatcher threads**: Dispatcher threads are used to execute automation scripts. Each script utilizes a single thread for the duration of the execution of the script. Therefore this setting determines how many scripts can be executed in parallel.\n\n.. _ControlPipelines:\n\nControl Pipelines\n=================\n\nA control pipeline is very similar to pipelines in Fledge's data path, i.e. the ingress pipelines of a south service or the egress pipelines in the north data path. A control pipeline comprises an ordered set of filters through which the data in the control path is pushed. Each individual filter in the pipeline can add, remove or modify the data as it flows through the filter; in this case, however, the data are the set point writes and operations.\n\nThe flow of control requests is organised in such a way that the same filters that are used for data ingress in a south service or data egress in a north service can be used for control pipelines. This is done by mapping control data to asset names, datapoint names and values in the control path pipeline.\n\nMapping Rules\n-------------\n\nFor a set point write the name of the asset will always be set to *reading*; the asset that is created will have a set of datapoints, one for each set point write operation that is to be executed. 
The name of the datapoint is the name of the set point to be written and the value of the datapoint is the value to set.\n\nFor example, if a set point write wishes to set the *Pump Speed* set point to *80* and the *Pump Running* set point to *True* then the reading that would be created and passed to the filter would have the asset_code of *reading* and two data points, one called *Pump Speed* with a value of *80* and another called *Pump Running* with a value of *True*.\n\nThis reading can then be manipulated by a filter in the same way as in any other pipeline. For example the |ExpressionFilter| filter could be used to scale the pump speed. If the requirement was to multiply the pump speed by 10, then the expression defined would be *Pump Speed * 10* .\n\nIn the case of an operation the mapping is very similar, except that the asset_code in the reading becomes the operation name and the data points are the parameters of the operation.\n\nFor example, if an operation *Start Fan* required a parameter of *Fan Speed* then a reading with an asset_code of *Start Fan* with a single datapoint called *Fan Speed* would be created and passed through the filter pipeline.\n\nData Types\n~~~~~~~~~~\n\nThe values of all set points and the parameters of all operations are passed to the control service and between services as string representations; however, they are converted to appropriate types when passed through the filter pipeline. If a value can be represented as an integer it will be, and likewise for floating point values.\n\n.. note::\n\n   Currently complex types such as Image, Data Buffer and Array data can not be represented in the control pipelines.\n\nPipeline Connections\n--------------------\n\nThe control pipelines are not defined against a particular end point as they are with the data pipelines; they are defined separately, and part of that definition includes the input and output end points to which the control pipeline may be attached. 
The input and output of a control pipeline may be defined as being able to connect to one of a set of endpoints.\n\n.. list-table::\n    :widths: 20 20 70\n    :header-rows: 1\n\n    * - Type\n      - Endpoints\n      - Description\n    * - Any\n      - Both\n      - The pipeline can connect to any source or destination. This is only used in situations where an exact match for an endpoint can not be satisfied.\n    * - API\n      - Source\n      - The source of the request is an API call to the public API of the Fledge instance.\n    * - Asset\n      - Destination\n      - The data will be sent to the service that is responsible for ingesting the named asset.\n    * - Broadcast\n      - Destination\n      - The requests will be sent to all south services that support control.\n    * - Notification\n      - Source\n      - The request originated from the named notification.\n    * - Schedule\n      - Source\n      - The request originated from a schedule.\n    * - Script\n      - Source\n      - The request is either originating from a script or being sent to a script.\n    * - Service\n      - Both\n      - The request is either coming from a named service or going to a named service.\n\nControl pipelines are always executed in the control dispatcher service. When a request comes into the service it will look for a pipeline to pass that request through. This process will look at the source and the destination of the request. If a pipeline exists whose source and destination endpoints are an exact match for the source and destination of the control request, then the control request will be processed through that pipeline.\n\nIf no exact match is found then the source of the request will be checked against the defined pipelines for a match with the specified source and a destination of *any*. If there is a pipeline that matches these criteria it will be used. 
If not then a check is made for a pipeline with a source of *any* and a destination that matches the destination of this request.\n\nIf all the above tests fail then a final test is made for a pipeline with a source of *any* and a destination of *any*. If no match occurs then the request is processed without passing through any filters.\n\nIf a request is processed by a script in the control dispatcher then this request may pass through multiple filter pipelines: one from the source to the script and then one for each script step that performs a set point write or operation. Each of these may be a different pipeline.\n\nPipeline Execution Models\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWhen a pipeline is defined it may be set to use a *Shared* execution model or an *Exclusive* execution model. This is only important if any of the filters in the pipeline persist state that impacts future processing.\n\nIn a *Shared* execution model one pipeline instance will be created and any requests that resolve to the pipeline will share the same instance of the pipeline. This saves creating multiple objects within the control dispatcher and is the preferred model to use.\n\nHowever, if the filters in the pipeline store previous data and use it to influence future decisions, such as the |DeltaFilter|, this behavior is undesirable as requests from different sources or destined for different destinations may interfere with each other. In this case the *Exclusive* execution model should be used.\n\nIn an *Exclusive* execution model a new instance of the pipeline will be created for each distinct source and destination of the control request that utilises the pipeline. 
This ensures that the different instances of the pipeline can not interfere with each other.\n\nControl Pipeline Management\n---------------------------\n\nThe Fledge Graphical User Interface provides a mechanism to manage the control pipelines; this is found in the Control sub menu under the pipelines item.\n\n+-----------------+\n| |pipeline_menu| |\n+-----------------+\n\nThe user is presented with a list of the pipelines that have been created to date and an option in the top right corner to add a new pipeline.\n\n+-----------------+\n| |pipeline_list| |\n+-----------------+\n\nThe list displays the name of the pipeline, the source and destination of the pipeline, the filters in the pipeline, the execution model and the enabled/disabled state of the pipeline.\n\nThe user has a number of actions that may be taken from this screen.\n\n  - Enable or disable the pipeline by clicking on the checkbox to the left of the pipeline name\n\n  - Click on the name of the pipeline to view and edit the pipeline.\n\n  - Click on the three vertical dots to view the context menu.\n\n    +-------------------------+\n    | |pipeline_context_menu| |\n    +-------------------------+\n\n    Currently the only operation that is supported is delete.\n\n  - Click on the Add option in the top right corner to define a new pipeline.\n\nAdding A Control Pipeline\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\nClicking on the add option will display the screen to add a new control pipeline.\n\n+----------------+\n| |pipeline_add| |\n+----------------+\n\n  - **Name**: The name of the control pipeline. This should be a unique name that is used to identify the control pipeline.\n\n  - **Execution**: The execution model to use to run this pipeline. 
In most cases the *Shared* execution model is sufficient.\n\n  - **Source**: The control source with which this pipeline should be used.\n\n    +-------------------+\n    | |pipeline_source| |\n    +-------------------+\n\n  - **Filters**: The filters in the pipeline. Click on *Add new filter* to add a new filter to the pipeline.\n\n\n    Clicking on the *Add new filter* link will display a dialog in which the filter plugin can be chosen and named.\n\n    +-----------------------+\n    | |pipeline_filter_add| |\n    +-----------------------+\n\n    Clicking on next from this dialog will display the configuration for the chosen filter, in this case we have chosen the |ExpressionFilter|.\n\n    +--------------------------+\n    | |pipeline_filter_config| |\n    +--------------------------+\n\n    The filter should then be configured in the same way as it would for data path pipelines.\n\n    On clicking *Done* the dialog will disappear and the original screen will be shown with the new filter displayed in the list of filters. More filters can be added by clicking on the *Add new filter* link. If multiple filters are in the pipeline they can be re-ordered by dragging them around to change the order.\n\n  - **Destination**: The control destination with which this pipeline will be used.\n\n    +------------------------+\n    | |pipeline_destination| |\n    +------------------------+\n\n  - **Enabled**: Enable the execution of the pipeline\n\nFinally click on the *Save* button to save the new control pipeline.\n\n"
  },
  {
    "path": "docs/control_scripts.rst",
    "content": ".. Images\n.. |automation_1| image:: images/automation_1.jpg\n.. |automation_2| image:: images/automation_2.jpg\n.. |automation_3| image:: images/automation_3.jpg\n.. |automation_4| image:: images/automation_4.jpg\n.. |automation_5| image:: images/automation_5.jpg\n.. |automation_6| image:: images/automation_6.jpg\n.. |automation_7| image:: images/automation_7.jpg\n.. |automation_8| image:: images/automation_8.jpg\n.. |automation_9| image:: images/automation_9.jpg\n.. |automation_10| image:: images/automation_10.jpg\n.. |automation_11| image:: images/automation_11.jpg\n.. |automation_12| image:: images/automation_12.jpg\n.. |automation_13| image:: images/automation_13.jpg\n.. |automation_14| image:: images/automation_14.jpg\n.. |automation_15| image:: images/automation_15.jpg\n.. |automation_16| image:: images/automation_16.jpg\n.. |automation_17| image:: images/automation_17.jpg\n\nGraphical Interface\n~~~~~~~~~~~~~~~~~~~\n\nThe automation scripts are available via the *Control Service* menu item in the left hand menu panel. Selecting this will give you access to the user interface associated with the control functions of Fledge. Click on the *Scripts* tab to select the scripts; this will display a list of scripts currently defined within the system and also show an add button icon in the top right corner.\n\n+----------------+\n| |automation_1| |\n+----------------+\n\nViewing & Editing Existing Scripts\n##################################\n\nSimply click on the name of a script to view the script.\n\n+----------------+\n| |automation_2| |\n+----------------+\n\nThe steps within the script are each displayed within a panel for that step. The user is then able to edit the script provided they have permission on the script.\n\nThere are then a number of options that allow you to modify the script; note, however, that it is not possible to change the type of a step in the script. 
The user must add a new step and remove the old step they wish to replace.\n\n  - To add a new step to a script click on the *Add new step* button\n\n    - The new step will be created in a new panel and will prompt the user to select the step type\n\n      +----------------+\n      | |automation_5| |\n      +----------------+\n\n    - The next step in the process will depend on the type of automation step chosen.\n\n      - A *Configure* step will request the configuration category to update to be chosen. This is displayed in a drop down menu.\n\n        +----------------+\n        | |automation_6| |\n        +----------------+\n\n        The configuration categories are shown as a tree structure, allowing the user to navigate to the configuration category they wish to change.\n\n        Once chosen the user is presented with the items in that configuration category from which to choose.\n\n        +----------------+\n        | |automation_7| |\n        +----------------+\n\n        Selecting an item will give you a text box with the current value of that item. Simply type into that text box the new value that should be assigned to that item when this step of the script runs.\n\n      - A *Delay* step will request the duration of the delay. The *Duration* is merely typed into the text box and is expressed in milliseconds.\n\n      - An *Operation* step will request you to enter the name of the operation to perform and then select the service to which the operation request should be sent\n\n        +----------------+\n        | |automation_8| |\n        +----------------+\n\n        Operations can be passed zero or more parameters; to add parameters to an operation click on the *Add parameter* option. 
A pair of text boxes will appear allowing you to enter the key and value for the parameter.\n\n        +----------------+\n        | |automation_9| |\n        +----------------+\n\n        To add another parameter simply press the *Add parameter* option again.\n\n      - A *Script* step will request you to choose the name of the script to run from a list of all the currently defined scripts.\n\n        +-----------------+\n        | |automation_10| |\n        +-----------------+\n\n        Note that the script that you are currently editing is not included in this list of scripts. You can then choose if you want the execution of this script to block the execution of the current script or to run in parallel with the execution of the current script.\n\n        +-----------------+\n        | |automation_11| |\n        +-----------------+\n\n        Scripts may also have parameters added by choosing the *Add parameter* option.\n\n      - A *Write* step will request you to choose the service to which you wish to send the write request. The list of available services is given in a drop down selection.\n\n        +-----------------+\n        | |automation_12| |\n        +-----------------+\n\n        Values are added to the write request by clicking on the *Add new value* option. This will present a pair of text boxes in which the key and value of the write request value can be typed.\n\n        +-----------------+\n        | |automation_13| |\n        +-----------------+\n\n        Multiple values can be sent in a single write request; to add another value simply click on the *Add new value* option again.\n\n  - Any step type may have a condition added to it. If a step has a condition associated with it, then that condition must evaluate to true if the step is to be executed. If it does not evaluate to true the step is skipped and the next step is executed. 
To add a condition to a step click on the *Add condition* option within the step's panel.\n\n    +-----------------+\n    | |automation_14| |\n    +-----------------+\n\n    A key and a value text box appear; type the key to test, which is usually a script parameter, and the value to test against. Script parameters are referenced using the *$* character to enclose the name of the script parameter.\n\n    +-----------------+\n    | |automation_15| |\n    +-----------------+\n\n    A selection list is provided that allows the test that you wish to perform to be chosen.\n\n    +-----------------+\n    | |automation_16| |\n    +-----------------+\n\n  - To remove a step from a script click on the bin icon on the right of the step panel\n\n    +----------------+\n    | |automation_4| |\n    +----------------+\n\n  - To reorder the steps in a script it is a simple case of clicking on one of the panels that contains a step and dragging and dropping the step into the new position within the script in which it should run.\n\n    +----------------+\n    | |automation_3| |\n    +----------------+\n\n  - A script may have an access control list associated with it. This controls how a script can be accessed; it allows the script to limit access to certain services, notifications or APIs. The creation of ACLs is covered elsewhere; to associate an ACL with a script simply select the name of the ACL from the ACL drop down at the foot of the screen. 
If no ACL is assigned, access to the script will not be limited.\n\nAdding a Script\n###############\n\nThe process for adding new scripts is similar to editing an existing script.\n\n  - To add a new script click on the *Add* option in the top right corner.\n\n  - Enter a name for the script in the text box that appears\n\n    +-----------------+\n    | |automation_17| |\n    +-----------------+\n\n  - Now start to add the steps to your script in the same way as above when editing an existing script.\n\n  - Once you have added all your steps you may also add an optional access control list\n\n  - Finally click on *Save* to save your script\n"
  },
  {
    "path": "docs/fledge-north-OMF.rst",
    "content": "\nOMF\n===\n\nThe *OMF* north plugin is included in all distributions of the Fledge core and provides the north bound interface to the OSIsoft data historians in all its forms: PI Server, Edge Data Store, AVEVA Data Hub and OSIsoft Cloud Services.\n\n"
  },
  {
    "path": "docs/fledge-rule-DataAvailability/index.rst",
    "content": ".. Images\n.. |data-availability| image:: images/data-availability.png\n\nDataAvailability Rule\n=====================\n\nThis is a built in rule that triggers every time it receives data that matches an asset code or audit code given in the configuration.\n\n+---------------------+\n| |data-availability| |\n+---------------------+\n\n  - **Audit Code**: Audit log code to monitor. Leave blank if not required, or set to * for all codes. To monitor several audit codes, a comma separated list can be entered, e.g. SRVRG, SRVUN\n\n  - **Asset Code**: Asset code to monitor. Leave blank if not required.\n"
  },
  {
    "path": "docs/fledge-rule-Threshold/index.rst",
    "content": ".. Images\n.. |threshold| image:: images/threshold.jpg\n.. |source| image:: images/threshold_source.jpg\n\nThreshold Rule\n==============\n\nThe threshold rule is used to detect the value of a data point within an asset going above or below a set threshold.\n\nThe configuration of the rule allows the threshold value, the condition and the datapoint used to trigger the rule to be set.\n\n+-------------+\n| |threshold| |\n+-------------+\n\n  - **Data Source**: The source of the data used for the rule evaluation. This may be one of Readings, Statistics or Statistics History. See details below.\n\n    +----------+\n    | |source| |\n    +----------+\n\n  - **Name**: The name of the asset or statistic that is tested by the rule.\n\n  - **Value**: The name of the datapoint in the asset used for the test. This is only required if the *Data Source* above is set to *Readings*.\n\n  - **Condition**: The condition that is being tested, this may be one of >, >=, <= or <.\n\n  - **Trigger value**: The value used for the test.\n\n  - **Evaluation data**: Select whether the data evaluated is a single value or a window of values.\n\n  - **Window evaluation**: Only valid if evaluation data is set to Window. This determines if the value used in the rule evaluation is the average, minimum or maximum over the duration of the window.\n\n  - **Time window**: Only valid if evaluation data is set to Window. This determines the time span of the window.\n\nData Source\n-----------\n\nThe rule may be used to test the values of the data that is ingested by\nsouth services within Fledge or the statistics that Fledge itself creates.\n\nWhen the rule examines a reading in the Fledge data stream it must be\ngiven the name of the asset to observe and the name of the data point\nwithin that asset. 
The data points within the asset should contain\nnumeric data.\n\nWhen observing a statistic there are two choices that can be made:\nto monitor the raw statistics value, which is a simple count, or to\nexamine the statistic history. The value received by the threshold rule\nfor a statistic is the increment that is added to the statistic and not\nthe absolute value of the statistic.\n\nThe statistic history is the value seen plotted in\nthe dashboard graphs and shows the change in the statistic value over\na defined period. By default the period is 15 seconds; however, this is\nconfigurable. In the case of statistics all that is required is the name\nof the statistic to monitor; there is no associated data point name as\neach statistic is a single value.\n"
  },
  {
    "path": "docs/fledge_architecture.rst",
    "content": ".. Fledge documentation master file, created by\n   sphinx-quickstart on Fri Sep 22 02:34:49 2017.\n   You can adapt this file completely to your liking, but it should at least\n   contain the root `toctree` directive.\n\n\n.. |br| raw:: html\n\n   <br />\n\n\n.. Images\n.. |fledge_architecture| image:: images/fledge_architecture.png\n.. |pipelines| image:: images/pipelines.png\n\n\n.. Links to open in new tabs:\n.. |Dianomic Website| raw:: html\n\n   <a href=\"http://www.dianomic.com\" target=\"_blank\">Dianomic Website</a>\n\n.. =============================================\n\n\n********************\nFledge Architecture\n********************\n\nThe following diagram shows the architecture of Fledge:\n\n- Components in blue are **plugins**. Plugins are light-weight modules that enable Fledge to be extended. There are a variety of types of plugins: south-facing, north-facing, storage engine, filters, event rules and event delivery mechanisms. Plugins can be written in python (for fast development) or C++ (for high performance).\n\n- Components with a blue line at the top of the box are **microservices**. They can co-exist in the same operating environment or they can be distributed across multiple environments.\n\n|fledge_architecture|\n\n\nFledge Core\n============\n\nThe Core microservice coordinates all of the Fledge operations. Only one Core service can be active at any time.\n\nCore functionality includes:\n\n**Scheduler**: Flexible scheduler to bring up processes.\n\n**Configuration Management**: maintain configuration of all Fledge components. 
Enable software updates across all Fledge components.\n\n**Monitoring**: monitor all Fledge components, and if a problem is discovered (such as an unresponsive microservice), attempt to self-heal.\n\n**REST API**: expose external management and data APIs for functionality across all components.\n\n**Backup**: Fledge system backup and restore functionality.\n\n**Audit Logging**: maintain logs of system changes for auditing purposes.\n\n**Certificate Storage**: maintain security certificates for different components, including south services, north services, and API security.\n\n**User Management**: maintain authentication and permission info on Fledge administrators.\n\n**Asset Browsing**: enable querying of stored asset data.\n\nStorage Layer\n=============\n\nThe Storage microservice provides two principal functions: a) maintenance of Fledge configuration and run-time state, and b) storage/buffering of asset data. The type of storage engine is pluggable, so in installations with a small footprint, a plugin for SQLite may be chosen, or in installations with a high number of concurrent requests and a larger footprint, PostgreSQL may be suitable. In micro installations, for example on Edge devices, or when high bandwidth is required, an in-memory temporary storage may be the best option.\n\nSouth Microservices\n===================\n\nSouth microservices offer bi-directional communication of data and metadata between Edge devices, such as sensors, actuators or PLCs, and Fledge. Smaller systems may have this service installed onboard Edge devices. South components are typically deployed as always-running services, which continuously wait for new data.\n\nNorth Microservices\n===================\n\nNorth microservices offer bi-directional communication of data and metadata between the Fledge platform and larger systems located locally or in the cloud. 
Larger systems may be private or public cloud data services, proprietary solutions or Fledge instances with larger footprints. North components are typically deployed as one-shot tasks, which periodically spin up, send data which has been batched, then spin down. However, they can also be deployed as continually-running services.\n\nFilters\n=======\n\nFilters are plugins which modify streams of data that flow through Fledge. They can be deployed at ingress (in a South service), or at egress (in a North service). Typically, ingress filters are used to transform or enrich data, and egress filters are used to reduce flow to northbound pipes and infrastructure, e.g. by compressing or reducing data that flows out. Multiple filters can be applied in \"pipelines\", and once configured, pipelines can be applied to multiple south or north services.\n\nA sample of existing Filters:\n\n**Expression**: apply an arbitrary mathematical equation across one or more assets.\n\n**Python35**: run user-specified python code across one or more assets.\n\n**Metadata**: apply tags to data, to note the device/location it came from, or to attribute data to a manufactured part.\n\n**RMS/Peak**: summarize vibration data by generating a Root Mean Squared (RMS) across n samples.\n\n**FFT**: generate a Fast Fourier Transform (FFT) of vibration data to discover component waveforms.\n\n**Delta**: only send data that has changed by a specified amount.\n\n**Rate**: buffer data but don’t send it, then if an error condition occurs, send the previous data.\n\n**Contrast**: enhance the contrast of image type data.\n\nFilters may be concatenated together to form a data pipeline from the data source to the storage layer, in the south microservice, or from the storage layer to the destination in the north.\n\n+-------------+\n| |pipelines| |\n+-------------+\n\nThis allows for data processing to be built up via the graphical interface of Fledge with little or no coding required. 
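Conceptually, such a pipeline is just a chain of transformations applied in order to the stream of readings. The sketch below models this in plain Python; the function and field names are hypothetical illustrations of the concept, not the actual Fledge filter plugin API.

```python
# Illustrative model only: a "pipeline" as an ordered list of filter
# functions, each taking and returning a list of readings (dicts).
# Names here (scale, delta, run_pipeline, "value") are hypothetical.

def scale(readings, factor=2.0):
    # An ingress-style transform: multiply every datapoint by a factor.
    return [{k: v * factor for k, v in r.items()} for r in readings]

def delta(readings, threshold=0.5):
    # An egress-style reduction, like the Delta filter described above:
    # pass only readings whose value changed by more than the threshold.
    out, last = [], None
    for r in readings:
        if last is None or abs(r["value"] - last) > threshold:
            out.append(r)
            last = r["value"]
    return out

def run_pipeline(readings, filters):
    # Apply each configured filter in order, as a pipeline does.
    for f in filters:
        readings = f(readings)
    return readings

result = run_pipeline([{"value": 1.0}, {"value": 1.1}, {"value": 2.0}],
                      [scale, delta])
# The small 1.0 -> 1.1 change is suppressed after scaling; the larger
# change passes through.
```

Because each stage has the same shape (readings in, readings out), stages can be reordered or reused across multiple south or north services, which is what makes pipelines composable.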
Filters that are applied in a south service will affect all outgoing streams, whilst those applied in the north only affect the data that is sent on that particular connection to an external system.\n\nEvent Service\n==============\n\nThe event engine maintains zero or more rule/action pairs. Each rule subscribes to desired asset data, and evaluates it. If the rule triggers, its associated action is executed.\n\n**Data Subscriptions**: Rules can evaluate every data point for a specified asset, or they can evaluate the minimum, maximum or average of a specified window of data points.\n\n**Rules**: the most basic rule evaluates if values are over/under a specified threshold. The Expression plugin will evaluate an arbitrary math equation across one or more assets. The Python35 plugin will execute user-specified python code across one or more assets.\n\n**Actions**: A variety of delivery mechanisms exist to execute a python application, create arbitrary data, alter the configuration of Fledge, send a control message, raise a ticket in a problem ticketing system or email/slack/hangout/communicate a message.\n\nSet Point Control Service\n=========================\n\nFledge is not designed to replace real-time control systems; it does, however, allow for non-time-critical control using the control microservice. Control messages may originate from a number of sources: the north microservice, the event service, the REST API or from scheduled events. It is the job of the control service to route these control messages to the correct destination. It also provides a simple form of scripting to allow control messages to generate chains of writes and operations on the south service and also modify the configuration of Fledge itself.\n\nREST API\n========\n\nThe Fledge API provides methods to administer Fledge, and to interact with the data inside it.\n\nGraphical User Interface\n========================\n\nA GUI enables administration of Fledge. 
All GUI capability is through the REST API, so Fledge can also be administered through scripts or other management tools. The GUI contains pages to:\n\n**Health**: See if services are responsive. See data that’s flowed in and out of Fledge\n\n**Assets & Readings**: analytics of data in Fledge\n\n**South**: manage south services\n\n**North**: manage north services\n\n**Notifications**: manage event engine rules and delivery mechanisms\n\n**Configuration Management**: manage configuration of all components\n\n**Schedules**: flexible scheduler management for processes and tasks\n\n**Certificate Store**: manage certificates\n\n**Backup & Restore**: backup/restore Fledge\n\n**Logs**: see system, notification, audit, package and task logging information\n\n**Support**: support bundle contents with system diagnostic reports\n\n**Settings**: set/reset connection and GUI-related settings\n\n"
  },
  {
    "path": "docs/glossary.rst",
    "content": ".. Fledge Glossary\n\n.. |github| raw:: html\n\n    <a href=\"https://github.com/fledge-iot/fledge\">Fledge GitHub repository</a>\n\n********\nGlossary\n********\n\nThe following are a set of definitions for terms used within the Fledge documentation and code. These are designed to be an aid to understanding some of the principles behind Fledge and improve the comprehension of the documentation by ensuring all readers have a common understanding of the terms used. If you feel any terms are missing or not fully explained, please raise an issue against, or contribute to, the documentation in the |github|.\n\n.. glossary::\n\n    Asset\n        A representation of a device, or a set of values about a device or entity, that is being monitored and possibly controlled by Fledge. It may also be used to represent a subset of a device. These values are a collection of :term:`Datapoints<Datapoint>` that are the actual values. An asset contains a unique name that is used to reference the data about the asset. An asset is an abstract concept and has no real implementation within the Fledge code; instead a :term:`reading<Reading>` is used to represent the state of an asset at a point in time. The phrase asset is used to represent a time series collection of zero or more :term:`readings<Reading>`.\n\n    Control Service\n        An optional microservice that is used by the control features of Fledge to route control messages from the various sources of control and send them to the :term:`south service<South Service>` which implements the control path for the :term:`assets<Asset>` under control.\n\n    Core Service\n        The :term:`service<Service>` within Fledge that is responsible for the oversight of all the other services. It provides configuration management, monitoring, registration and routing services. 
It is also responsible for the public API into the Fledge system and the execution of periodic tasks such as :term:`purge<Purge>`, statistics and backup.\n\n    Datapoint\n        A datapoint is a container for data; each datapoint represents a value that is known about an asset and has a name for that value and the value itself. Values may be one of many types: simple scalar values, alphanumeric strings, arrays of scalar values, images, arbitrary binary objects or a collection of datapoints.\n\n    Filter\n        A combination of a :term:`Filter Plugin<Filter Plugin>` and the configuration that makes that filter perform the processing that is required of it.\n\n    Filter Plugin\n        A filter plugin is a :term:`plugin<Plugin>` that implements an operation on one or more :term:`readings<Reading>` as they pass through the Fledge system. This processing may add, remove or augment the data as it passes through Fledge. Filters are arranged as linear :term:`pipelines<Pipeline>` in either the :term:`south service<South Service>` as data is ingested into Fledge or the :term:`north services<North Service>` and :term:`tasks<Task>` as data is passed upstream to the systems that receive data from Fledge.\n\n    Microservice\n        A microservice is a small service that implements parts of the Fledge functionality. They are also referred to as :term:`services<Service>`.\n\n    Notification Delivery Plugin\n        A notification delivery plugin is used by the :term:`notification service<Notification Service>` to deliver notifications when a :term:`notification rule<Notification Rule Plugin>` triggers. A notification delivery plugin may send notification data to external systems, trigger internal Fledge operations or create :term:`reading<Reading>` data within the Fledge :term:`storage service<Storage Service>`.\n\n    Notification Rule Plugin\n        A notification rule plugin is used by the notification service to determine if a notification should be sent. 
The rule plugin receives :term:`reading<Reading>` data from the Fledge :term:`storage service<Storage Service>`, evaluates a rule against that data and returns a triggered or cleared state to the notification service.\n\n    Notification Service\n        An optional :term:`service<Service>` within Fledge that is responsible for the execution and delivery of notifications when events occur in the data that is being ingested into Fledge.\n\n    North\n        An abstract term for any service or system to which Fledge sends data that it has ingested. Fledge may also receive control messages from the north as well as from other locations.\n\n    North Plugin\n        A :term:`plugin<Plugin>` that implements the connection to an upstream system. North plugins are responsible both for implementing the communication to the north systems and for the translation from internal data representations to the representation used in the external system.\n\n    North Service\n        A :term:`service<Service>` responsible for connections upstream from Fledge. These are usually systems that will receive data that Fledge has ingested and/or processed. There may also be control data flows that operate from the north systems into the Fledge system.\n\n    North Task\n        A :term:`task<Task>` that is run to send data to upstream systems from Fledge. 
It is very similar in operation and concept to a :term:`north service<North Service>`, but differs from a north service in that it does not always run; it is scheduled using a time-based schedule and is designed for situations where connection to the upstream system is not always available or desirable.\n\n    Pipeline\n        A linear collection of zero or more :term:`filters<Filter>` connected between the :term:`south plugin<South Plugin>` that ingests data and the :term:`storage service<Storage Service>`, or between the :term:`storage service<Storage Service>` and the :term:`north plugin<North Plugin>` as data exits Fledge to be sent to upstream systems.\n\n    Plugin\n        A dynamically loadable code fragment that is used to enhance the capabilities of Fledge. These plugins may implement a :term:`south<South>` interface to devices and systems, a :term:`north<North>` interface to systems that receive data from Fledge, a :term:`storage plugin<Storage Plugin>` used to buffer :term:`readings<Reading>`, a :term:`filter plugin<Filter Plugin>` used to process data, a :term:`notification rule<Notification Rule Plugin>` or :term:`notification delivery<Notification Delivery Plugin>` plugin. Plugins have well-defined interfaces; they can be written by third parties without recourse to modifying the Fledge services and are shipped externally to Fledge to allow for diverse installations of Fledge. Plugins are the major route by which Fledge is customized for individual use cases.\n\n    Purge\n        The process by which :term:`readings<Reading>` are removed from the :term:`storage service<Storage Service>`.\n\n    Reading\n        A reading is the representation of an :term:`asset<Asset>` at a point in time. It contains the asset name, two timestamps and the collection of :term:`datapoints<Datapoint>` that represent the state of the asset at that point in time. 
A reading has two timestamps to allow for the time to be recorded when Fledge first read the data and also for the device itself to give a time that it sets for when the data was created. Not all devices are capable of reporting timestamps and hence this second timestamp may be the same as the first.\n\n    Service\n        Fledge is implemented as a set of services, each of which runs constantly and implements a subset of the system functionality. There is a small set of fixed services, such as the :term:`core service<Core Service>` or :term:`storage service<Storage Service>`, and optional services for enhanced functionality, such as the :term:`notification service<Notification Service>` and :term:`control service<Control Service>`. There is also a set of non-fixed services of various types used to interact with downstream or :term:`south<South>` devices and upstream or :term:`north<North>` systems.\n\n    South\n        An abstract term for any device or service from which Fledge ingests data or over which Fledge exerts control.\n\n    South Service\n        A :term:`service<Service>` responsible for communication with a device or service from which Fledge is ingesting data. Each south service connects to a single device and can collect data from that device and optionally send control signals to that device. A south service may represent one or more :term:`assets<Asset>`.\n\n    South Plugin\n        A south plugin is a :term:`plugin<Plugin>` that implements the interface to a device or system from which Fledge is collecting data and optionally to which Fledge is sending control signals.\n\n    Storage Service\n        A :term:`microservice<Microservice>` that implements either permanent or transient storage services used to both buffer :term:`readings<Reading>` within Fledge and also to store Fledge's configuration information. 
The storage service uses either one or two :term:`storage plugins<Storage Plugin>` to store the configuration data and the :term:`readings<Reading>` data.\n\n    Storage Plugin\n        A :term:`plugin<Plugin>` that implements the storage requirements of the Fledge :term:`storage service<Storage Service>`. A plugin may implement the storage of both configuration and :term:`readings<Reading>` or it may just implement :term:`readings<Reading>` storage. In this latter case Fledge will use two storage plugins, one to store the configuration and the other to store the readings.\n\n    Task\n        A task implements functionality that runs only at specific times within Fledge. It is used to initiate periodic operations that are not required to be always running. Amongst the tasks that form part of Fledge are the :term:`purge task<Purge>`, :term:`north tasks<North Task>`, backup and statistics gathering tasks.\n"
  },
  {
    "path": "docs/index.rst",
    "content": ".. Fledge documentation master file, created by\n   sphinx-quickstart on Fri Sep 22 02:34:49 2017.\n   You can adapt this file completely to your liking, but it should at least\n   contain the root `toctree` directive.\n\n***********************************\nWelcome to Fledge's documentation!\n***********************************\n\n.. toctree::\n\n    introduction\n    quick_start/index\n    processing_data\n    fledge_architecture\n    storage\n    services/index\n    control\n    plugin_index\n    building_pipelines\n    monitoring/index\n    securing_fledge\n    tuning_fledge\n    troubleshooting_pi-server_integration\n    plugin_developers_guide/index\n    rest_api_guide/index\n    build_index\n    KERBEROS\n    fledge_plugins\n    91_version_history\n    92_downloads\n    glossary\n"
  },
  {
    "path": "docs/introduction.rst",
    "content": ".. Links\n.. |DeveloperGuides| raw:: html\n\n   <a href=\"plugin_developers_guide/index.html\">Developer Guides</a>\n\n.. |FledgeArchitecture| raw:: html\n\n   <a href=\"fledge_architecture.html\">Fledge Architecture</a>\n\n.. |DataPipelines| raw:: html\n\n   <a href=\"building_pipelines.html\">Developing Data Pipelines</a>\n\nIntroduction to Fledge\n=======================\n\nFledge is an open Industrial IoT system designed to make collecting, filtering, processing and using operational data simpler and more open. Core to Fledge is an extensible microservice-based architecture enabling any data to be read, processed and sent to any system. Coupled with this extensibility, Fledge’s Apache 2 license and community of developers result in an ever-growing choice of components that can be used to solve your OT data needs well into the future.\n\nFledge provides a scalable, secure, robust infrastructure for collecting data from sensors, processing data at the edge using intelligent data pipelines and transporting data to historians and other management systems. Fledge also allows for edge-based event detection and notification and control flows as a result of events, stimulus from upstream systems or user action. Fledge can operate over the unreliable, intermittent and low-bandwidth connections often found in industrial or rugged environments.\n\nTypical Use Cases\n-----------------\n\nThe depth and breadth of Industrial IoT use cases is considerable. Fledge is designed to address them. Below are some examples of typical Fledge deployments.\n\nUnified data collection\n    The industrial edge is one of the more challenging environments in computing. Today there are over 100 different protocols, no standards in machine data definitions, different types of data (time-series, vibration, array, image, radiometric, transactional, etc.), and sensors producing bytes/hr to gigabytes/hr, all in environments with network, power and environmental challenges. 
This diversity creates pain in managing, scaling, securing and orchestrating industrial data, ultimately resulting in silos of data with competing context. Fledge is designed to eliminate those silos by providing a very flexible data collection and distribution mechanism, all using the same APIs, features and functions.\n\nSpecialized Analytical Environments\n    With the advent of cloud systems and sophisticated analytic tools, it may no longer be possible to have a single system that is both your system of record and the place on which the analytics takes place. Fledge allows you to distribute your data to multiple systems, either in part or as a whole. This allows you to get just the data you need to the systems that need it without compromising your system of record.\n\nResilience\n    Fledge provides mechanisms to store and forward your data. Data is no longer lost if a connection to some key system is unavailable.\n\nEdge processing\n    Using the Fledge intelligent data pipelines concept, Fledge allows for your data to be processed close to where it is gathered. This can save both network bandwidth and reduce costs when high-bandwidth sensors such as vibration monitors or image capture are used. In addition, it reduces the latency when timely action is required compared with shipping and processing data in the cloud or at some centralized IT location.\n\nNo code/Low code solutions\n    Fledge provides tools that allow the OT engineer to create solutions by use of existing processing elements that can be combined and augmented with little or no coding required. 
This allows the OT organization to quickly and independently obtain the data they need for their specific requirements.\n\nProcess Optimization & Operational Efficiency\n    The Fledge intelligent pipelines, with their prebuilt processing elements and through use of machine learning techniques, can be used to improve operational efficiency by giving operators immediate feedback on the state of the process or product being produced without remote analytics and the associated delays involved.\n\n\nArchitectural Overview\n----------------------\n\nFledge is implemented as a collection of microservices, which include:\n\n  - Core services, including security, monitoring, and storage\n\n  - Data transformation and alerting services\n\n  - South services: Collect data from sensors and other Fledge systems\n\n  - North services: Transmit and integrate data to historians and other systems\n\n  - Edge data processing applications\n\n  - Event detection and notification\n\n  - Set point control\n\nServices can easily be developed and incorporated into the Fledge framework. Fledge services may also be customized by creating new plugins, written in C/C++ or Python, for data collection, processing, export, rule evaluation and event notification. The |DeveloperGuides| describe how to do this.\n\nMore detail on the Fledge architecture can be found in the section |FledgeArchitecture|.\n\nNo-code/Low-code Development\n----------------------------\n\nFledge can be extended by writing code to add new plugins. Additionally, it is easily tailored by combining pre-written data processing filters applied in linear pipelines to data as it comes into or goes out of the Fledge system. A number of filters exist that can be customized with small snippets of code written in the Python scripting language. These snippets of code allow the end user to produce custom processing without the need to develop more complex plugins or other code. 
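A snippet of this kind is typically a short function over a single reading. The example below is illustrative only; the function name, datapoint names and the exact contract a given scripting filter expects are hypothetical, not a specific Fledge filter's API.

```python
# Illustrative sketch of the kind of small Python snippet an OT engineer
# might supply to a scripting filter: convert a Celsius datapoint to
# Fahrenheit and tag the reading. All names here are hypothetical.

def convert(reading):
    # Add a Fahrenheit datapoint alongside the Celsius one, if present.
    if "temperature" in reading:
        reading["temperature_f"] = reading["temperature"] * 9.0 / 5.0 + 32.0
        reading["units"] = "F"
    return reading

processed = convert({"temperature": 100.0})
```

Because the snippet operates on one reading at a time, it can be edited and re-tested in isolation before being attached to a live pipeline.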
The environment also allows them to experiment with these code snippets to obtain the results desired.\n\nData may be processed on the way into Fledge or on the way out. Processing on the way in allows the data to be manipulated into the form the organization wants. Processing on the way out allows the data to be manipulated to suit the upstream system that will use the data without impacting the data that might go to another upstream system.\n\nSee the section |DataPipelines|.\n"
  },
  {
    "path": "docs/keywords/Augmentation",
    "content": "Plugins That Augment Data\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\nA set of filters that augment the data in the pipeline, adding fixed or calculated data to provide more context or quality to the data.\n\n"
  },
  {
    "path": "docs/keywords/Cleansing",
    "content": "Plugins That Improve The Data Quality\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nA set of plugins whose purpose is to help improve the data quality in a pipeline by removing or highlighting data of poor quality or in some way anomalous.\n\n"
  },
  {
    "path": "docs/keywords/Cloud",
    "content": "Plugins That Interact With Clouds\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nA set of plugins that interact with the public cloud providers.\n\n"
  },
  {
    "path": "docs/keywords/Compression",
    "content": "Plugins That Compress Data\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nThe following set of plugins have functionality that can be used to compress the data stream as it flows through the pipeline.\n\n"
  },
  {
    "path": "docs/keywords/Governance",
    "content": "Data Governance Plugins\n^^^^^^^^^^^^^^^^^^^^^^^\n\nA set of plugins designed to aid the process of ensuring availability, usability, integrity, and security of data within the streaming pipelines.\n\n"
  },
  {
    "path": "docs/keywords/Image",
    "content": "Image Data Plugins\n^^^^^^^^^^^^^^^^^^\n\nA set of plugins explicitly designed to deal with images in data pipelines.\n\n"
  },
  {
    "path": "docs/keywords/Labelling",
    "content": "Plugins Used To Label Data\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nA set of plugins that offer the ability to conditionally label the data in a pipeline, to aid in the training of machine learning models or merely to improve the visibility of events that occur in the pipeline.\n"
  },
  {
    "path": "docs/keywords/MQTT",
    "content": "Plugins Utilising The MQTT Protocol\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nPlugins that use MQTT as the transport protocol.\n\n"
  },
  {
    "path": "docs/keywords/Mathematical",
    "content": "Computational Plugins\n^^^^^^^^^^^^^^^^^^^^^\n\nPlugins that apply a mathematical expression or translation to the data as it flows through the data pipeline.\n\n"
  },
  {
    "path": "docs/keywords/ModelExecution",
    "content": "Machine Learning Plugins\n^^^^^^^^^^^^^^^^^^^^^^^^\n\nA set of plugins that execute machine learning models.\n\n"
  },
  {
    "path": "docs/keywords/Namespace",
    "content": "Unified Namespace Management\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nA collection of plugins aimed at helping create, translate or enforce a unified namespace (UNS) within data pipelines.\n\n"
  },
  {
    "path": "docs/keywords/PLC",
    "content": "Plugins That Interact With PLCs\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nA set of plugins that interact with programmable logic controllers (PLCs).\n\n"
  },
  {
    "path": "docs/keywords/README.rst",
    "content": ".. Links\n\n.. _here: ../scripts/fledge_plugin_list\n\nKeyword Introduction\n====================\n\nThis directory consists of a collection of files, with names corresponding to keywords found in the plugin documentation directories.\nThese files are utilized by the script `here`_.\n\nThese files are used by the script to generate plugin groupings by category, where each category corresponds to a keyword for a specific plugin type.\n\nIf a file corresponding to the keyword exists, it will be used as the introduction to the table of plugins associated with that keyword.\n\nIf no such file exists for a given keyword, a standard header will be generated instead.\n\nAdding new keywords to plugin repositories does not require adding new files to this directory; the files are primarily there to enhance the documentation. Additionally, the presence of a file with a specific name in this directory does not necessarily mean that the corresponding keyword is used elsewhere.\n\nKeyword section headers\n=======================\n\nThis directory contains a set of files whose names match keywords used in plugins. Each file will be used to create an introduction in the generated sections for each group of plugins that mention the keyword named in the filename.\n\nIt is optional to create a section header for a keyword; if one is not given, a default header is inserted.\n\nThe format of a keyword header is to include the title for the section and zero or more paragraphs of text in reStructuredText format. The example below shows the header for the *Cleansing* keyword.\n\n.. code-block:: RST\n\n  Plugins That Improve The Data Quality\n  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n  A set of plugins whose purpose is to help improve the data quality in a pipeline by removing or highlighting data of poor quality or in some way anomalous.\n"
  },
  {
    "path": "docs/keywords/Scripted",
    "content": "Plugins That Provide Scripting\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nA set of plugins that offer scripting as a mechanism to extend the operation of the plugin and manipulate the content of the data pipeline.\n\n"
  },
  {
    "path": "docs/keywords/Signal Processing",
    "content": "Plugins That Process A Digital Signal\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nA group of plugins that will process some digital signal, normally applying an algorithm that augments or improves that signal to provide new insights into the signal.\n\n"
  },
  {
    "path": "docs/keywords/Simulation",
    "content": "Data Simulation Plugins\n^^^^^^^^^^^^^^^^^^^^^^^\n\nA set of plugins that create simulated data that may be used in testing of data pipelines, new filters or other plugins. These plugins are also useful as training aids when learning the features of Fledge.\n\n"
  },
  {
    "path": "docs/keywords/Structure",
    "content": "Plugins That Alter The Structure Of Data\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nA collection of plugins that alter the structure of the data within an asset. Typically these plugins are used to make the data match a semantic model of an asset, mapping the data that is available to an idealised form of the asset.\n\n"
  },
  {
    "path": "docs/keywords/Textual",
    "content": "Plugin To Manipulate String Data\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nA set of plugins that perform operations on string rather than numerical data.\n\n"
  },
  {
    "path": "docs/keywords/Vibration",
    "content": "Vibration Process Plugins\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\nA set of plugins designed to process vibration data.\n"
  },
  {
    "path": "docs/make.bat",
    "content": "@ECHO OFF\r\n\r\npushd %~dp0\r\n\r\nREM Command file for Sphinx documentation\r\n\r\nif \"%SPHINXBUILD%\" == \"\" (\r\n\tset SPHINXBUILD=python -msphinx\r\n)\r\nset SOURCEDIR=.\r\nset BUILDDIR=_build\r\nset SPHINXPROJ=Fledge\r\n\r\nif \"%1\" == \"\" goto help\r\n\r\n%SPHINXBUILD% >NUL 2>NUL\r\nif errorlevel 9009 (\r\n\techo.\r\n\techo.The Sphinx module was not found. Make sure you have Sphinx installed,\r\n\techo.then set the SPHINXBUILD environment variable to point to the full\r\n\techo.path of the 'sphinx-build' executable. Alternatively you may add the\r\n\techo.Sphinx directory to PATH.\r\n\techo.\r\n\techo.If you don't have Sphinx installed, grab it from\r\n\techo.http://sphinx-doc.org/\r\n\texit /b 1\r\n)\r\n\r\n%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%\r\ngoto end\r\n\r\n:help\r\n%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%\r\n\r\n:end\r\npopd\r\n"
  },
  {
    "path": "docs/monitoring/configuration.rst",
    "content": ".. |CONCH_available| image:: ../images/CONCH_available.jpg\n.. |CONCH_message| image:: ../images/CONCH_message.jpg\n.. |CONCH_slack| image:: ../images/CONCH_slack.jpg\n.. |north_audit| image:: ../images/north_audit.jpg\n.. |north_change_log| image:: ../images/north_change_log.jpg\n\n\nConfiguration Changes\n=====================\n\nWhenever a configuration category is changed within Fledge, an entry is written to the audit log that documents which category was changed along with the old and new value. For example, if the configuration category for a south service called sine is changed, a record such as the following is written to the audit log with the source of the audit entry set to CONCH.\n\n.. code-block:: JSON\n\n   {\n      \"category\": \"sine\",\n      \"items\": {\n             \"assetName\": {\n                            \"oldValue\": \"sinusoid\",\n                            \"newValue\": \"sinusoid3\"\n                          }\n               }\n   }\n\nSince audit data can be used in the same way as reading data within Fledge, this allows changes of configuration to be monitored and acted upon in the same way as any data that is changed within Fledge.\n\nUsing Notifications\n-------------------\n\nAs an example, let's assume we want to be informed if the configuration of one of the categories in Fledge has been altered. We can create a simple notification instance, using the Data Availability plugin that looks for any audit log entries with a source of CONCH.\n\n+-------------------+\n| |CONCH_available| |\n+-------------------+\n\nWe then define the medium we want to use to deliver our notification. 
For ease of configuration we will use the Slack delivery plugin to do this.\n\n+---------------+\n| |CONCH_slack| |\n+---------------+\n\nEach time any user modifies a configuration item, a Slack message will be sent to alert that the configuration has been altered.\n\n+-----------------+\n| |CONCH_message| |\n+-----------------+\n\nThis is a very simple message that only gives the information that a change has been made; however, more sophisticated delivery mechanisms can be used that will detail the actual change.\n\nSending To External Systems\n---------------------------\n\nAudit log data can also be sent to the north in the same way that reading data can. This can be used to send data to third party systems to maintain a change log of internal Fledge changes in other systems. In order to send audit log data to the north we merely set up a new north service or task. When we configure the north plugin we select the data source as *audit* rather than readings or statistics.\n\n+---------------+\n| |north_audit| |\n+---------------+\n\n.. note::\n\n   Currently not all north destinations support the selection of audit data; however, a growing number are now being extended to support it.\n\nFilters can then be used to filter out particular audit records. The asset filter is a good example of one that can be used. Since the audit code becomes the asset code when sent to a north destination, selecting just the audit code of CONCH can easily be done with an asset filter configuration of\n\n.. 
code-block:: JSON\n\n   {\n       \"rules\" : [\n                    {\n                        \"asset_name\" : \"CONCH\",\n                        \"action\"     : \"include\"\n                    }\n                 ],\n       \"defaultAction\" : \"exclude\"\n   }\n\nA simple north pipeline that can send a change log to an external application using HTTP would look as follows:\n\n+--------------------+\n| |north_change_log| |\n+--------------------+\n\nAudit Log Data\n--------------\n\nOther changes, such as running a service, shutting down a service, executing a purge operation, etc., also cause audit log entries to be made. The complete list of audit log codes is:\n\n+------+------------------------------+\n| Code | Meaning                      |\n+======+==============================+\n| PURGE|Data Purging Process          |\n+------+------------------------------+\n| LOGGN|Logging Process               |\n+------+------------------------------+\n| STRMN|Streaming Process             |\n+------+------------------------------+\n| SYPRG|System Purge                  |\n+------+------------------------------+\n| START|System Startup                |\n+------+------------------------------+\n| FSTOP|System Shutdown               |\n+------+------------------------------+\n| CONCH|Configuration Change          |\n+------+------------------------------+\n| CONAD|Configuration Addition        |\n+------+------------------------------+\n| SCHCH|Schedule Change               |\n+------+------------------------------+\n| SCHAD|Schedule Addition             |\n+------+------------------------------+\n| SRVRG|Service Registered            |\n+------+------------------------------+\n| SRVUN|Service Unregistered          |\n+------+------------------------------+\n| SRVFL|Service Fail                  |\n+------+------------------------------+\n| SRVRS|Service Restart               |\n+------+------------------------------+\n| NHCOM|North Process Complete        
|\n+------+------------------------------+\n| NHDWN|North Destination Unavailable | \n+------+------------------------------+\n| NHAVL|North Destination Available   |\n+------+------------------------------+\n| UPEXC|Update Complete               |\n+------+------------------------------+\n| BKEXC|Backup Complete               |\n+------+------------------------------+\n| NTFDL|Notification Deleted          |\n+------+------------------------------+\n| NTFAD|Notification Added            |\n+------+------------------------------+\n| NTFSN|Notification Sent             |\n+------+------------------------------+\n| NTFCL|Notification Cleared          |\n+------+------------------------------+\n| NTFST|Notification Server Startup   |\n+------+------------------------------+\n| NTFSD|Notification Server Shutdown  |\n+------+------------------------------+\n| PKGIN|Package installation          |\n+------+------------------------------+\n| PKGUP|Package updated               |\n+------+------------------------------+\n| PKGRM|Package purged                |\n+------+------------------------------+\n| DSPST|Dispatcher Startup            |\n+------+------------------------------+\n| DSPSD|Dispatcher Shutdown           |\n+------+------------------------------+\n| ESSRT|External Service Startup      |\n+------+------------------------------+\n| ESSTP|External Service Shutdown     |\n+------+------------------------------+\n| ASTDP|Asset deprecated              |\n+------+------------------------------+\n| ASTUN|Asset un-deprecated           |\n+------+------------------------------+\n| PIPIN|Pip installation              |\n+------+------------------------------+\n| AUMRK|Audit Log Marker              |\n+------+------------------------------+\n| USRAD|User Added                    |\n+------+------------------------------+\n| USRDL|User Deleted                  |\n+------+------------------------------+\n| USRCH|User Changed                  
|\n+------+------------------------------+\n| USRRS|User Restored                 |\n+------+------------------------------+\n| ACLAD|ACL Added                     |\n+------+------------------------------+\n| ACLCH|ACL Changed                   |\n+------+------------------------------+\n| ACLDL|ACL Deleted                   |\n+------+------------------------------+\n| CTSAD|Control Script Added          |\n+------+------------------------------+\n| CTSCH|Control Script Changed        |\n+------+------------------------------+\n| CTSDL|Control Script Deleted        |\n+------+------------------------------+\n| CTPAD|Control Pipeline Added        |\n+------+------------------------------+\n| CTPCH|Control Pipeline Changed      |\n+------+------------------------------+\n| CTPDL|Control Pipeline Deleted      |\n+------+------------------------------+\n| CTEAD|Control Entrypoint Added      |\n+------+------------------------------+\n| CTECH|Control Entrypoint Changed    |\n+------+------------------------------+\n| CTEDL|Control Entrypoint Deleted    |\n+------+------------------------------+\n| BUCAD|Bucket Added                  |\n+------+------------------------------+\n| BUCCH|Bucket Changed                |\n+------+------------------------------+\n| BUCDL|Bucket Deleted                |\n+------+------------------------------+\n| USRBK|User Blocked                  |\n+------+------------------------------+\n| USRUB|User Unblocked                |\n+------+------------------------------+\n\nAs can be seen from the table above, far more than just configuration changes can be monitored by looking at the audit logs of Fledge.\n"
  },
  {
    "path": "docs/monitoring/flow.rst",
    "content": ".. |MonitorNorthRate| image:: ../images/MonitorNorthRate.jpg\n.. |MonitorZendesk| image:: ../images/MonitorZendesk.jpg\n.. |MonitorTrigger| image:: ../images/MonitorTrigger.jpg\n.. |MonitoredBuffered| image:: ../images/MonitoredBuffered.jpg\n\nPipeline Data Flow\n==================\n\nThere are several aspects of monitoring the data pipelines within Fledge. The one we will discuss in this section is how to monitor the flow of data. This is about whether data is flowing, not whether the data that is flowing is good data; the mere existence of a flow of data says nothing about the quality of that data.\n\nThe south services automatically monitor the flow rates out from the south service to the storage layer. The configuration of this monitoring is discussed in the section on tuning Fledge. This automatic monitoring uses statistical methods to detect changes in the flow rate and alerts the user via the status bar of the user interface.\n\nThe main mechanism for monitoring data flow is the statistics that are collected by all the services that implement a Fledge instance.\n\nSouth Service Data Flow\n-----------------------\n\nThe south service maintains two distinct statistics: one for the total ingest count for the service and a set for the ingest counts for each asset ingested by the service. These statistics are collected when the data is sent from the south service to the storage service; they therefore do not include any readings that have been brought into the service and then removed by any of the filters in the filter pipeline.\n\nThe south service may also be configured to reduce the statistics it collects. Very heavily loaded south services, collecting large numbers of distinct assets, are often configured to collect only the total ingest statistics for the service, as per-asset statistics collection can have a very large impact on the efficiency of the service.\n\nNorth Service Data Flow\n-----------------------\n\nThe north services also collect statistics. 
In this case we can get the total number of readings sent to all north systems or the number sent to each north system. Unlike the south services, both of these are always collected and so they can be utilised in all cases.\n\nStatistics Mechanism\n--------------------\n\nStatistics are gathered as pure counters that increment as data traverses the pipeline, or as whatever else the statistic is monitoring occurs. This is not particularly useful when trying to monitor rates. There is, however, another option. The statistics collector is a task that runs at periodic intervals and takes the raw statistics to create a statistics history table. The statistics collector is run by default every 15 seconds, although this interval can be altered via the configuration. The statistics history table will therefore record the increment in each statistic every 15 seconds. This allows us to easily obtain a rate of change for any statistic gathered by the system. It is these statistics history entries that are shown by the Fledge dashboard: it shows the number of readings ingested every 15 seconds by default.\n\nBoth the raw statistics and the statistics history can be used as data feeds into the internal Fledge notification service. This means that we can use the Fledge mechanisms for capturing events and sending notifications to many destinations to monitor the flow rates of our south and north services.\n\nMonitor Low North Flow Rate\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nAs an example we will set up a notification on the north flow rate dropping below a low water mark. For this we will use the threshold notification rule and use the statistics history as the source of data. Let's assume we want to be notified when we send fewer than 400 readings north in any given minute. Given our default 15 second statistics collection schedule, we can either change that to be once a minute or we can set our threshold lower, to 100 readings in every 15 seconds. 
For this example we will do the latter.\n\n+--------------------+\n| |MonitorNorthRate| |\n+--------------------+\n\nWe can then choose one of Fledge's many supported delivery plugins to have this notification sent to the destination that we require. For the purpose of this example we will use the Zendesk plugin to raise a problem ticket to have the cause of the issue investigated.\n\n+------------------+\n| |MonitorZendesk| |\n+------------------+\n\nWe can then decide how we want this to trigger. It is probably most appropriate to have the alert trigger when we first drop below the threshold and then be cleared when we rise back above it. We also do not wish to have it re-trigger again too soon, so we will set the re-trigger interval to 3600 seconds, or every hour. This way we do not raise excessive numbers of tickets.\n\n+------------------+\n| |MonitorTrigger| |\n+------------------+\n\nDetecting Fluctuating Flows\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nAnother useful notification rule plugin that can be used when processing statistical data is the *Simple-Sigma* plugin. This plugin uses statistical methods to track the mean and standard deviation of a data stream and then detect values that lie a long way from the mean, usually more than three times the standard deviation.\n\n+---------------------+\n| |MonitoredBuffered| |\n+---------------------+\n\nIn the above example it is shown monitoring the BUFFERED statistic. This is the number of readings written to the storage layer and represents the aggregate of all the data read by all of the south services. This can then be used to monitor whether the total ingest for the Fledge instance moves outside of the operating range that has been observed previously.\n\n.. note::\n\n   This is very similar to the automated monitoring in each south service. 
Because it is unaware of configured changes to the collection rate, it has the disadvantage that it is unable to distinguish a configured change from a change that occurs because of some failure of equipment or of Fledge.\n"
  },
  {
    "path": "docs/monitoring/index.rst",
    "content": "**********\nMonitoring\n**********\n\n.. toctree::\n\n    introduction\n    service\n    resources\n    configuration\n    flow\n    quality\n"
  },
  {
    "path": "docs/monitoring/introduction.rst",
    "content": "Introduction\n============\n\nMonitoring within Fledge is a varied subject and has different meanings to the different user personas that are using Fledge.\n\nMonitoring can take the form of\n\n  - Monitoring of the state of the various services that are run to implement the overall Fledge functionality.\n\n  - Monitoring the resources that Fledge needs to operate.\n\n  - Monitoring the flow of data within the various pipelines implemented by a Fledge instance.\n\n  - Monitoring for configuration changes of the pipelines that operate on the data.\n\n  - Monitoring the quality of the data that is flowing in the Fledge pipelines.\n\n  - And finally, the one that is the major purpose of Fledge: monitoring the state of the machines and processes connected to the Fledge instance.\n\nFledge offers tools to perform all of these various forms of monitoring. The following sections will detail these tools and describe how to configure them and how to interpret the results of these monitoring tools.\n"
  },
  {
    "path": "docs/monitoring/quality.rst",
    "content": ".. |MonitorWatchDog| image:: ../images/MonitorWatchDog.jpg\n\nData Quality\n============\n\nThere are many different definitions of data quality and many different mechanisms that can be used within Fledge to test the quality of the data ingested into Fledge.\n\nThere are several criteria that can be used to determine poor quality data. The very simplest mechanisms are things such as data falling outside of expected limits. This can be very easily detected by using the threshold notification filter to detect over or under limit values. The next level of data quality monitoring is to look for outliers by observing patterns in the data or by viewing the statistical variations within the data.\n\nCauses of \"poor quality\" data are difficult to ascertain. It could be related to a sensor that is not well calibrated, not well installed or poorly connected. It could equally be because the collection mechanism has problems, or it could simply be that the data looks \"poor\" because the equipment to which it is connected has failed, is worn, or is running outside of its operational limits.\n\nSome sensors may provide a measure of the quality of the data that they return along with the data; a Fledge south plugin connected to a sensor that is able to do this will usually report it as a separate datapoint within the asset. It is then a simple job to use the notification tools to monitor for out of bound values of data quality. For example, if quality is returned as a percentage, the threshold notification rule can be used to trigger when the quality percentage drops below an acceptable threshold.\n\nUnfortunately, not many sensors are able to report quality metrics, and even when they can, this only accounts for a proportion of the quality issues that may exist in data. 
In these other cases the issue becomes distinguishing between data collection or quality problems and actual issues with the equipment that is being monitored.\n\nFailure Modes\n-------------\n\nThere are some common failure modes of the collection infrastructure that can probably be determined by looking at the data:\n\nFlat Line Data\n   The particular data item has stopped changing value for a longer period than is normal for the sensor. This is unlikely to happen unless the monitored equipment has been shut down or put into some form of quiescent state.\n\nOut of Range Data\n   The data that is being read is out of range for the sensor we are reading, i.e. we are receiving values that the sensor should not be physically capable of returning. A similar issue may also exist when data is outside the range a machine should normally report. Although this appears to be the same issue as out of sensor range, it may be legitimate if the machine itself has issues, so it may be a real issue as opposed to a data quality issue.\n\nNo Data Read\n   There has been a period of time during which no data has been ingested from a service or sensor when the service is running.\n\nDetecting Failure Modes\n~~~~~~~~~~~~~~~~~~~~~~~\n\nOut of Range Data\n#################\n\nDetecting out of range data is also done very simply using the threshold notification rule or the out of bound rule. The difference between the two plugins is that the out of bound rule allows an upper and a lower limit to be set, ensuring the value is within the bounds defined, whereas the threshold rule detects a threshold being crossed from either direction and only allows one threshold to be set.\n\nNo Data Read\n############\n\nDetecting the lack of data in Fledge is relatively simple using the watchdog notification rule plugin, which is especially designed for this purpose. This will alert the user when there is no data for a specified period. 
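\n\nThe logic of such a watchdog, alerting when no data has arrived within a given period, can be sketched outside of Fledge as a simple check over reading timestamps. This is purely illustrative; the function below is invented for the example and is not the plugin's actual implementation.\n\n.. code-block:: python\n\n   from datetime import datetime, timedelta\n\n   def watchdog_triggered(last_reading_time, period_seconds, now=None):\n       # True when no reading has been seen within the watchdog period\n       now = now or datetime.utcnow()\n       return (now - last_reading_time) > timedelta(seconds=period_seconds)\n\n   # Last reading 90 seconds ago with a 60 second watchdog period\n   last = datetime.utcnow() - timedelta(seconds=90)\n   print(watchdog_triggered(last, 60))   # True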
\n\n+-------------------+\n| |MonitorWatchDog| |\n+-------------------+\n\n"
  },
  {
    "path": "docs/monitoring/resources.rst",
    "content": ".. |MonitorDiskUsage| image:: ../images/MonitorDiskUsage.jpg\n\nResources\n=========\n\nMonitoring resource usage within a Fledge instance can be achieved by using the south plugin that is designed to monitor the Linux resources of the machine on which Fledge is running. This is the systeminfo south plugin.\n\nThis plugin creates a number of assets with a configurable prefix on the asset name. These include:\n\n  - The aggregate CPU usage for all the CPUs within the machine.\n\n  - The disk traffic for each disk connected to the machine.\n\n  - The load average of the machine.\n\n  - The memory usage information for the machine.\n\n  - The network usage for each network interface on the machine.\n\n  - The paging and swapping events for the machine.\n\n  - Platform information for the machine.\n\n  - A summary of the processes running on the machine.\n\nThere are some other things that are also monitored that are not of particular interest for monitoring resource usage.\n\nExample\n-------\n\nIn this example we will assume we want to use the notification service to monitor the percentage of space remaining on a particular disk within the Fledge machine. We use the systeminfo plugin to get the data we want. In this case we will look at the disk device sda1. Using the defaults of the plugin will create an asset called *system/diskUsage_dev/sda1* with a datapoint called *Use_prcntg*.\n\n.. note::\n\n   The systeminfo plugin imposes a significant load when it collects data. 
Since the data we are interested in does not change at a very high rate, we should reduce the polling rate of this plugin so that monitoring resource usage does not itself take unnecessary resources from the machine.\n\nSince we are looking for a numeric value to go above a certain limit, say 85%, we can simply use the threshold notification rule to detect the disk usage going above this value and deliver a notification using any of the Fledge notification delivery mechanisms.\n\n+--------------------+\n| |MonitorDiskUsage| |\n+--------------------+\n\nDatabase Disk Usage\n===================\n\nFledge has a built-in mechanism that will monitor and predict when the disk hosting the internal storage buffer will become full. It will write predictions to the error log and will additionally raise alerts when the disk usage becomes critical.\n"
  },
  {
    "path": "docs/monitoring/service.rst",
    "content": ".. |MonitorMatch| image:: ../images/MonitorMatch.jpg\n\nService State\n=============\n\nFledge is made up of a set of microservices, the precise number and function of which are determined by the configuration of a Fledge instance. Fledge will internally monitor the state of all the microservices and will perform restarts of failed microservices. It is also possible to determine the state of the microservices by using the Fledge external API.\n\nIt is also possible to use the Fledge internal notification service to take actions when services fail, although this can be an issue if the notification service itself fails. When a service starts, stops or fails, an entry is written to the audit log to this effect. Since the notification service can use the audit log as a source of data to trigger rules, the service state changes can also trigger rules.\n\nThe audit codes that relate to service state changes are:\n\n+------+------------------------------+\n| Code | Meaning                      |\n+======+==============================+\n| SRVRG|Service Registered            |\n+------+------------------------------+\n| SRVUN|Service Unregistered          |\n+------+------------------------------+\n| SRVFL|Service Fail                  |\n+------+------------------------------+\n| SRVRS|Service Restart               |\n+------+------------------------------+\n\nNotifications can be created using the data availability plugin or the match notification rule plugin to match the audit code.\n\n+----------------+\n| |MonitorMatch| |\n+----------------+\n\nThis then allows the notification mechanism to report the change of state of the various services via the notification delivery channels.\n"
  },
  {
    "path": "docs/plugin_developers_guide/00_source_code_doc.rst",
    "content": ".. Fledge Source Code Documentation\n\n.. Links\n\n.. |here| raw:: html\n\n        <a href=\"http://archives.fledge-iot.org/nightly/source-documentation/html/index.html\">here</a>\n\n.. |pdf| raw:: html\n\n        <a href=\"http://archives.fledge-iot.org/nightly/source-documentation/fledge.pdf\">pdf</a>\n\nSource Code Documentation\n=========================\n\nDocumentation of the source code is available |here|\n\nA single PDF document is available |pdf|"
  },
  {
    "path": "docs/plugin_developers_guide/01_01_Data.rst",
    "content": ".. Data\n\n\nRepresenting Data\n=================\n\nThe key purpose of Fledge and the plugins is the manipulation of data; that data is passed around the system and represented in a number of ways. This section will introduce the data representation formats used at various locations within the Fledge system. Conceptually the unit of data that we use is a reading. The reading represents the state of a monitored device at a point in time and has a number of elements.\n\n+-------------+----------------------------------------------------------+\n| Name        | Description                                              |\n+=============+==========================================================+\n| asset       | The name of the asset or device to which the data refers |\n+-------------+----------------------------------------------------------+\n| timestamp   | The point in time at which these values were observed.   |\n+-------------+----------------------------------------------------------+\n| data points | A set of named values for the data held for the asset    |\n+-------------+----------------------------------------------------------+\n\nThere are actually two timestamps within a reading, and these may be different. The *user_ts* is the time the plugin assigned to the reading data and may come from the device itself; the *ts* timestamp is set by the system when the data is read into Fledge. Unless the plugin is able to determine a timestamp from the device, the *user_ts* is usually the same as the *ts*.\n\nThe data points themselves are a set of name and value pairs, with the values supporting a number of different data types. These will be described below.\n\nReading data is nominally stored and passed between the APIs using JSON; however, for convenience it is accessed in different ways within the different languages that can be used to implement Fledge components and plugins. 
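\n\nAs a simple illustration, a reading carrying the elements described above might be rendered in JSON as follows. The asset name, datapoint names and values here are invented for the example, and the exact key names can vary between the different Fledge APIs.\n\n.. code-block:: JSON\n\n   {\n       \"asset\": \"motor1\",\n       \"timestamp\": \"2023-01-01 12:00:00.000\",\n       \"readings\": {\n           \"rpm\": 8450,\n           \"temperature\": 37.2\n       }\n   }\n\n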
In JSON a reading is represented as a JSON DICT whereas in C++ a Reading is a class, as is a data point. The way the different data point types are represented is outlined below.\n\n+-------------------------------+-------------------------+-----------------------+--------------------------------+\n| Type                          | JSON                    | C++                   | Python                         |\n+===============================+=========================+=======================+================================+\n| Integer                       | An integer              | An int                | An integer                     |\n+-------------------------------+-------------------------+-----------------------+--------------------------------+\n| Floating Point                | A floating point value  | A double              | A floating point               |\n+-------------------------------+-------------------------+-----------------------+--------------------------------+\n| Boolean                       | A string either \"true\"  | A bool                | A boolean                      |\n|                               | or \"false\"              |                       |                                |\n+-------------------------------+-------------------------+-----------------------+--------------------------------+\n| String                        | A string                | A std::string pointer | A string                       |\n+-------------------------------+-------------------------+-----------------------+--------------------------------+\n| List of numbers               | An array of floating    | A std::vector<double> | A list of floating point       |\n|                               | point values            |                       | values                         |\n+-------------------------------+-------------------------+-----------------------+--------------------------------+\n| 2 Dimensional list of numbers | A list of 
lists of      | A std::vector of      | A list of lists of floating    |\n|                               | floating point values   | std::vector<double>   | point values                   |\n|                               |                         | pointers              |                                |\n+-------------------------------+-------------------------+-----------------------+--------------------------------+\n| Data buffer                   | A base64 encoded string | A Databuffer class    | A 1 dimensional numpy array    |\n|                               | with a header           |                       | of values                      |\n+-------------------------------+-------------------------+-----------------------+--------------------------------+\n| Image                         | A base64 encoded string | A DPImage class       | A 2 dimensional numpy array of |\n|                               | with a header           |                       | pixels. In the case of RGB     |\n|                               |                         |                       | images each pixel is an array  |\n+-------------------------------+-------------------------+-----------------------+--------------------------------+\n"
  },
  {
    "path": "docs/plugin_developers_guide/01_Fledge_plugins.rst",
    "content": ".. Fledge Plugins\n\n.. |br| raw:: html\n\n   <br />\n\n.. Images\n\n.. Links\n.. |available plugins| raw:: html\n\n   <a href=\"../fledge_plugins.html\">available plugins</a>\n\n.. Links in new tabs\n\n.. |here BT| raw:: html\n\n   <a href=\"https://bugs.launchpad.net/snappy/+bug/1674509\" target=\"_blank\">here</a>\n\n\n.. =============================================\n\n\nFledge makes extensive use of plugin components to extend the base functionality of the platform. In particular, plugins are used to:\n\n  - Extend the set of sensors and actuators that Fledge supports.\n  - Extend the set of services to which Fledge will push accumulated data gathered from those sensors.\n  - Provide the mechanism by which Fledge buffers data internally.\n  - Augment, edit or remove data as it flows through Fledge (filter plugins).\n  - Extend the rules that may trigger the delivery of notifications at the edge (rule plugins).\n  - Allow new notification delivery mechanisms to be integrated into Fledge (notification delivery plugins).\n\nThis chapter presents the plugins that are bundled with Fledge and describes how to write and use new plugins to support different sensors, protocols, historians and storage devices. It will guide you through the process and entry points that are required for the various types of plugin.\n\nThere are also numerous plugins that are available as separate packages or in separate repositories that may be used with Fledge.\n\n\nPlugins\n=======\n\nIn this version of Fledge you have six types of plugins:\n\n- **South Plugins** - They are responsible for communication between Fledge and the sensors and actuators they support. 
Each instance of a Fledge South microservice will use a plugin for the actual communication with the sensors or actuators that that instance of the South microservice supports.\n- **North Plugins** - They are responsible for taking reading data passed to them from the South bound service, doing any necessary conversion to the data and providing the protocol to send that converted data to a north-side task.\n- **Storage Plugins** - They sit between the Storage microservice and the physical data storage mechanism that stores the Fledge configuration and readings data. Storage plugins differ from other plugins in that they are written exclusively in C/C++; however, they share the same common attributes and entry points that the other plugins must support.\n- **Filter Plugins** - Filter plugins are used to modify data as it flows through Fledge. Filter plugins may be combined into a set of ordered filters that are applied as a pipeline to either the south ingress service or the north egress task that sends data to external systems.\n- **Notification Rule Plugins** - These are used by the optional notification service in order to evaluate data that flows into the notification service to determine if a notification should be sent.\n- **Notification Delivery Plugins** - These plugins are used by the optional notification service to deliver a notification to a system when a notification rule has triggered. 
These plugins allow the mechanisms to deliver notifications to be extended.\n\n\nPlugins in this version of Fledge\n----------------------------------\n\nThis version of Fledge provides the following plugins in the main repository:\n\n+---------+------------+------------+-----------------------------+----------------------------+----------------------------------------+\n| Type    | Name       | Initial    | Description                 | Availability               | Notes                                  |\n|         |            | |br| Status|                             |                            |                                        |\n+=========+============+============+=============================+============================+========================================+\n| Storage | SQLite     | Enabled    | SQLite storage |br|         | Ubuntu: x86_64 |br|        |                                        |\n|         |            |            | for data and metadata       | Ubuntu Core: x86, ARM |br| |                                        |\n|         |            |            |                             | Raspbian                   |                                        |\n+---------+------------+------------+-----------------------------+----------------------------+----------------------------------------+\n| Storage | Postgres   | Disabled   | PostgreSQL storage |br|     | Ubuntu: x86_64 |br|        |                                        |\n|         |            |            | for data and metadata       | Ubuntu Core: x86, ARM |br| |                                        |\n|         |            |            |                             | Raspbian                   |                                        |\n+---------+------------+------------+-----------------------------+----------------------------+----------------------------------------+\n| North   | OMF        | Disabled   | OSIsoft Message Format |br| | Ubuntu: x86_64 |br|        | 
It works with PI Connector |br|        |\n|         |            |            | sender to PI Connector |br| | Ubuntu Core: x86, ARM |br| | Relay OMF 1.2.X and 2.2. The plugin    |\n|         |            |            | Relay OMF                   | Raspbian                   | also works against EDS and OCS.        |\n+---------+------------+------------+-----------------------------+----------------------------+----------------------------------------+\n\n\nIn addition to the plugins in the main repository, there are many other plugins available in separate repositories; a list of the |available plugins| is maintained within this document.\n\n\nInstalling New Plugins\n----------------------\n\nAs a general rule and unless the documentation states otherwise, plugins can be installed in one of two ways:\n\n- When the plugin is available as a **package**, it should be installed while **Fledge is running**. |br| This is the required method because the package executes pre- and post-installation tasks that require Fledge to be running.\n- When the plugin is available as **source code**, it can be installed whether **Fledge is running or not**. |br| You will want to manually move the plugin code into the right location where Fledge is installed, install any prerequisites and execute the REST commands necessary to start the plugin **after** you have started Fledge if it is not running when you start this process.\n\nFor example, this is the command to use to install the *OpenWeather* South plugin:\n\n.. 
code-block:: console\n\n  $ sudo systemctl status fledge.service\n  ● fledge.service - LSB: Fledge\n     Loaded: loaded (/etc/init.d/fledge; bad; vendor preset: enabled)\n     Active: active (running) since Wed 2018-05-16 01:32:25 BST; 4min 1s ago\n       Docs: man:systemd-sysv-generator(8)\n     CGroup: /system.slice/fledge.service\n             ├─13741 python3 -m fledge.services.core\n             └─13746 /usr/local/fledge/services/storage --address=0.0.0.0 --port=40138\n\n  May 16 01:36:09 ubuntu python3[13741]: Fledge[13741] INFO: scheduler: fledge.services.core.scheduler.scheduler: Process started: Schedule 'stats collection' process 'stats coll\n                                         ['tasks/statistics', '--port=40138', '--address=127.0.0.1', '--name=stats collector']\n  ...\n  Fledge v1.3.1 running.\n  Fledge Uptime:  266 seconds.\n  Fledge records: 0 read, 0 sent, 0 purged.\n  Fledge does not require authentication.\n  === Fledge services:\n  fledge.services.core\n  === Fledge tasks:\n  $\n  $ sudo cp fledge-south-openweathermap-1.2-x86_64.deb /var/cache/apt/archives/.\n  $ sudo apt install /var/cache/apt/archives/fledge-south-openweathermap-1.2-x86_64.deb\n  Reading package lists... Done\n  Building dependency tree\n  Reading state information... 
Done\n  Note, selecting 'fledge-south-openweathermap' instead of '/var/cache/apt/archives/fledge-south-openweathermap-1.2-x86_64.deb'\n  The following packages were automatically installed and are no longer required:\n    linux-headers-4.4.0-109 linux-headers-4.4.0-109-generic linux-headers-4.4.0-119 linux-headers-4.4.0-119-generic linux-headers-4.4.0-121 linux-headers-4.4.0-121-generic\n    linux-image-4.4.0-109-generic linux-image-4.4.0-119-generic linux-image-4.4.0-121-generic linux-image-extra-4.4.0-109-generic linux-image-extra-4.4.0-119-generic\n    linux-image-extra-4.4.0-121-generic\n  Use 'sudo apt autoremove' to remove them.\n  The following NEW packages will be installed\n    fledge-south-openweathermap\n  0 to upgrade, 1 to newly install, 0 to remove and 0 not to upgrade.\n  Need to get 0 B/3,404 B of archives.\n  After this operation, 0 B of additional disk space will be used.\n  Selecting previously unselected package fledge-south-openweathermap.\n  (Reading database ... 211747 files and directories currently installed.)\n  Preparing to unpack .../fledge-south-openweathermap-1.2-x86_64.deb ...\n  Unpacking fledge-south-openweathermap (1.2) ...\n  Setting up fledge-south-openweathermap (1.2) ...\n  openweathermap plugin installed.\n  $\n  $ fledge status\n  Fledge v1.3.1 running.\n  Fledge Uptime:  271 seconds.\n  Fledge records: 36 read, 0 sent, 0 purged.\n  Fledge does not require authentication.\n  === Fledge services:\n  fledge.services.core\n  fledge.services.south --port=42066 --address=127.0.0.1 --name=openweathermap\n  === Fledge tasks:\n  $\n\nYou may also install new plugins directly from within the Fledge GUI, however you will need to have setup your Linux machine to include the Fledge package repository in the list of repositories the Linux package manager searches for new packages.\n"
  },
  {
    "path": "docs/plugin_developers_guide/02_persisting_data.rst",
"content": ".. |persist_1| image:: ../images/persist_1.png\n.. |persist_2| image:: ../images/persist_2.png\n\n.. |REST API| raw:: html\n\n        <a href=\"../rest_api_guide/05_RESTdeveloper.html#view-plugin-persisted-data\">REST API</a>\n\nPersisting Data\n---------------\n\nSome plugins may wish to persist state between executions of the plugin; we discuss here how this can be done.\n\n.. note::\n\n   This topic covers the persistence of state between executions of the service that is hosting the plugin, not between calls to the plugin entry points. Data is persisted via the plugin handle between these calls and needs no special discussion.\n\nThe plugin information, returned by the plugin_info call, provides Fledge with information about the plugin, including a set of flags that describe how to interact with the plugin. One of these flags, the SP_PERSIST_DATA flag, informs Fledge that the plugin has data it wishes to persist between runs of the plugin.\n\nWhen this flag is present in the plugin information returned, it has two further implications:\n\n  - The plugin_start entry point should be included in the plugin, and that entry point will be altered to allow the passing of a string to the plugin. That string is expected to contain a JSON document which is the data persisted from the previous execution of the plugin.\n\n  - The plugin_shutdown entry point will be altered such that it returns the data the plugin wishes to be persisted after it has shut down. This data will be passed to the plugin_start entry point of the plugin when it is next started. The data is returned in the form of a string that is assumed to contain a JSON document.\n\nThe state information is persisted within the configuration database of the Fledge instance. 
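The round trip described above (state handed back by *plugin_shutdown* and returned to *plugin_start* on the next run) can be pictured with a small, self-contained C++ sketch. The class and member names here (``CountingFilter``, ``readingCount``) are hypothetical, and plain ``sscanf``/``snprintf`` stand in for the RapidJSON parsing a real Fledge plugin would normally use:

```cpp
#include <cstdio>
#include <string>

// Hypothetical filter state holder -- illustrative only, not an actual
// Fledge filter class.
class CountingFilter {
public:
    // Called from plugin_start with the JSON persisted by the last run.
    // An empty string means no state has been persisted yet.
    void loadState(const std::string& data) {
        long count = 0;
        if (!data.empty() &&
            std::sscanf(data.c_str(), "{ \"readingCount\" : %ld }", &count) == 1) {
            m_readingCount = count;
        }
    }

    // Called from plugin_shutdown; the returned JSON is what Fledge persists.
    std::string saveState() const {
        char buf[64];
        std::snprintf(buf, sizeof(buf), "{ \"readingCount\" : %ld }", m_readingCount);
        return std::string(buf);
    }

    void ingest() { m_readingCount++; }   // stand-in for real processing

    long readingCount() const { return m_readingCount; }

private:
    long m_readingCount = 0;   // state that survives restarts
};
```

A plugin using this class would return ``filter->saveState()`` from its *plugin_shutdown* and call ``filter->loadState(data)`` in its *plugin_start*, as in the entry point examples in this section.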
State is preserved per plugin instance rather than per plugin; therefore there is no sharing of persisted data between two instances of the same plugin that are executing either within a single pipeline or in multiple pipelines within a single Fledge instance.\n\n.. note::\n\n   At the time of writing the persistence option is only available for plugins written in C/C++. Support for plugins written in Python will be made available in a future release.\n\nRestoring Persisted Data\n########################\n\nIt may seem strange to discuss restoring persisted data before it has been persisted, but the reality is this is the order in which the plugin will get calls. Persisted data is passed to the plugin via the plugin_start entry point.\n\nThe following is an example taken from a filter plugin:\n\n.. code-block:: C\n\n    /**\n     * Pass saved plugin state\n     */\n    void plugin_start(PLUGIN_HANDLE *handle, string& data)\n    {\n    FILTER_INFO *info = (FILTER_INFO *) handle;\n\n        Sigma *filter = info->handle;\n        filter->loadState(data);\n    }\n\n\nThe plugin_start entry point is called after the plugin_init entry point has been called and before any data is sent to or requested from the plugin.\n\nIf there is no data persisted for the plugin, because it has never previously been run or has not persisted any data via shutdown, an empty string will be passed to the plugin.\n\nPersisting Data\n###############\n\nAs discussed above, data is persisted upon shutdown of the plugin; this is achieved by altering the signature of the plugin_shutdown entry point to return a string. The following example is taken from a filter plugin, but persistence of state can be done for any plugin type.\n\n.. 
code-block:: C\n\n   /**\n    * Call the shutdown method in the plugin\n    */\n   std::string plugin_shutdown(PLUGIN_HANDLE *handle)\n   {\n      FILTER_INFO *info = (FILTER_INFO *) handle;\n      Sigma *filter = info->handle;\n\n      // Fetch the state information to persist\n      std::string state = filter->saveState();\n\n      // Delete the filter\n      delete filter;\n      return state;\n   }\n\n.. note::\n\n   Data is persisted only on orderly shutdown; therefore it is important to avoid uncontrolled shutdowns if you are using this persistence scheme. It is always best to do an orderly shutdown of Fledge, but even more so in these circumstances.\n\nViewing Persisted Data\n######################\n\nThe Fledge user interface has a mechanism that allows features designed for a pipeline developer to be enabled via the settings menu. One of these features allows persisted data to be viewed. Once developer features are enabled a new tab will appear in the configuration of the service. In this *Developer* tab the persisted data can be viewed for the plugins in the pipeline.\n\n+-------------+\n| |persist_1| |\n+-------------+\n\nWhen multiple plugins in the pipeline support persistence, the particular plugin can be selected from the drop-down menu on the left of the tab.\n\nA context menu may be accessed by clicking on the three dots situated to the right of the *Developer* tab.\n\n+-------------+\n| |persist_2| |\n+-------------+\n\nThis menu allows the import and export of the persisted data to a file. This is useful when debugging pipelines, as it allows the state of the plugin to be altered. The import function is only available when the plugin is shut down.\n\nYou can also delete the persisted data, allowing you to purge the saved state of the plugin and force it back to initial operating conditions.\n\nIt is also possible to use the |REST API| to interact with the persisted plugin data.\n"
  },
  {
    "path": "docs/plugin_developers_guide/02_writing_plugins.rst",
"content": ".. Writing and Using Plugins describes how to implement a plugin for Fledge and how to use it\n.. https://docs.google.com/document/d/1IKGXLWbyN6a7vx8UO3uDbq5Df0VvE4oCQIULgZVZbjM\n\n.. |br| raw:: html\n\n   <br />\n\n.. Images\n\n.. Links\n.. |C++ Support Classes| raw:: html\n\n   <a href=\"035_CPP.html\">C++ Support Classes</a>\n\n.. |audit_trail| raw:: html\n\n   <a href=\"../rest_api_guide/03_RESTadmin.html#audit\">Audit Trail</a>\n\n\n\n.. Links in new tabs\n\n.. =============================================\n\n\nWriting and Using Plugins\n=========================\n\nA plugin has a small set of external entry points that must exist in order for Fledge to load and execute that plugin. Currently plugins may be written in either Python or C/C++; the set of entry points is the same for both languages. The entry points detailed here will be presented for both languages; a more in-depth discussion of writing plugins in C/C++ will then follow.\n\nGeneral Guidance\n----------------\n\nBefore delving into the detail of how to write plugins, what entry points have to be provided and how to build and test them, here are a few notes of general guidance that all plugin developers should consider; following them will save the plugin writer difficulty later.\n\n  - The ethos of Fledge is to provide data pipelines that promote easy building of applications through re-use of small, focused processing components. Always try to make use of existing plugins when at all possible. When writing new plugins do not be tempted to make them too specific to a single application. This will mean it is more likely that at some point in the future you will have all the components in your toolbox that you need to create the next application without having to write new plugins.\n\n  - Filters within Fledge are run within a single process which may be a south or north service; they do not run as separate executables. 
Therefore make sure that when you write a new plugin you do not make use of global variables. Global variables will be shared between all the plugins in a service and may clash with other plugins, and they will prevent the same plugin being used multiple times within a pipeline.\n\n  - Do not make assumptions about how the data you are processing in your plugin will be used, or by how many upstream components it will be used. For example, do not put anything in a south plugin or a filter plugin that assumes the data will be consumed by a particular north plugin or will only be consumed by one north plugin. An example of this might be a south plugin that adds OMF AF Location hints to the data it produces. Whilst this works well if the data is sent to OMF, it does not help if the data is sent to a different destination that also requires location information. Adding options for different destinations only compounds the problem; consider, for example, that the data might be sent to multiple destinations. A better approach would be to add generic location metadata to the data and have the hints filters for each of the destinations perform the destination-specific work.\n\nCommon Fledge Plugin API\n-------------------------\n\nEvery plugin provides at least one common API entry point, the *plugin_info* entry point. It is used to obtain information about a plugin before it is initialized and used. It allows Fledge to determine what type of plugin it is, e.g. a South bound plugin or a North bound plugin, obtain default configuration information for the plugin and determine version information.\n\n\nPlugin Information\n~~~~~~~~~~~~~~~~~~\n\nThe information entry point is implemented as a call, *plugin_info*, that takes no arguments. Data is returned from this API call as a JSON document with certain well-known properties.\n\nA typical Python implementation of this would simply return a fixed dictionary object that encodes the required properties.\n\n.. 
code-block:: python\n\n  def plugin_info():\n      \"\"\" Returns information about the plugin.\n\n      Args:\n      Returns:\n          dict: plugin information\n      Raises:\n      \"\"\"\n\n      return {\n          'name': 'DHT11 GPIO',\n          'version': '1.0',\n          'mode': 'poll',\n          'type': 'south',\n          'interface': '1.0',\n          'config': _DEFAULT_CONFIG\n      }\n\nThese are the properties returned in the JSON document:\n\n- **name** - A textual name that will be used for reporting purposes for this plugin.\n- **version** - This property allows the version of the plugin to be communicated to the plugin loader. This is used for reporting purposes only and has no effect on the way Fledge interacts with the plugin.\n- **mode** - A set of options that defines how the plugin operates. Multiple values can be given; the different options are separated from each other using the | symbol.\n- **type** - The type of the plugin, used by the plugin loader to determine if the plugin is being used correctly. The type is a simple string and may be *south*, *north*, *filter*, *rule* or *delivery*.\n\n.. note:: If you browse the Fledge code you may find old plugins with type *device*: this was the type used to indicate a South plugin and it is now deprecated.\n\n- **interface** - This property reports the version of the plugin API to which this plugin was written. It allows Fledge to support upgrades of the API whilst being able to recognise the version that a particular plugin is compliant with. Currently all interfaces are version 1.0.0 except for a number of south plugins that use interface version 2.0.0.\n\n.. note::\n\n   Interface versions from 2.0.0 onwards support a new plugin_poll entry point that can return a vector of readings from a single plugin_poll call as opposed to the single reading that can be returned from the 1.0.0 interface. 
The callback for asynchronous input is also updated in the 2.0.0 interface to expect a vector of readings rather than a single reading.\n\n- **config** - This allows the plugin to return a JSON document which contains the default configuration of the plugin. This is in line with the extensible plugin mechanism of Fledge: each plugin will return a set of configuration items that it wishes to use, and this will then be used to extend the set of Fledge configuration items. This structure, a JSON document, includes default values but no actual values for each configuration option. The first time Fledge’s configuration manager sees a category it will register the category and create values for each item using the default value in the configuration document. On subsequent calls the value already in the configuration manager will be used. |br| This mechanism allows the plugin to extend the set of configuration variables whilst giving the user the opportunity to modify the value of these configuration items. It also allows new versions of plugins to add new configuration items whilst retaining the values of previous items. Any new items will automatically be assigned the default value for that item. |br| As an example, a plugin that wishes to maintain two configuration variables, say a GPIO pin to use and a polling interval, would return a configuration document that looks as follows:\n\n.. 
code-block:: console\n\n  {\n      'pollInterval': {\n          'description': 'The interval between poll calls to the device poll routine expressed in milliseconds.',\n          'type': 'integer',\n          'default': '1000'\n      },\n      'gpiopin': {\n          'description': 'The GPIO pin into which the DHT11 data pin is connected',\n          'type': 'integer',\n          'default': '4'\n      }\n  }\n\n\nThe various values that may appear in the *mode* item are shown in the table below\n\n+---------+---------------------------------------------------------------------------------------+\n| Mode    | Description                                                                           |\n+=========+=======================================================================================+\n| poll    | The plugin is a polled plugin and *plugin_poll* will be called periodically to obtain |\n|         | new values.                                                                           |\n+---------+---------------------------------------------------------------------------------------+\n| async   | The plugin is an asynchronous plugin, *plugin_poll* will not be called and the        |\n|         | plugin will be supplied with a callback function that it calls each time it has a     |\n|         | new value to pass to the system. The *plugin_register_ingest* entry point will be     |\n|         | called to register the callback with the plugin. The *plugin_start* call will be      |\n|         | called once to initiate the asynchronous delivery of data.                            |\n+---------+---------------------------------------------------------------------------------------+\n| none    | This is equivalent to poll.                                                           
|\n+---------+---------------------------------------------------------------------------------------+\n| control | The plugin supports a control flow to the device the plugin is connected to. The      |\n|         | plugin must supply the control entry points *plugin_write* and *plugin_operation*.    |\n+---------+---------------------------------------------------------------------------------------+\n\n|br|\n\nA C/C++ plugin returns the same information as a structure; this structure includes the JSON configuration document as a simple C string.\n\n.. code-block:: C\n\n  #include <plugin_api.h>\n\n  extern \"C\" {\n\n  /**\n   * The plugin information structure\n   */\n  static PLUGIN_INFORMATION info = {\n          \"MyPlugin\",               // Name\n          \"1.0.1\",                  // Version\n          0,    \t\t    // Flags\n          PLUGIN_TYPE_SOUTH,        // Type\n          \"1.0.0\",                  // Interface version\n          default_config            // Default configuration\n  };\n\n  /**\n   * Return the information about this plugin\n   */\n  PLUGIN_INFORMATION *plugin_info()\n  {\n          return &info;\n  }\n\n  }  // End of extern \"C\"\n\nIn the above example the constant *default_config* is a string that contains the JSON configuration document. In order to make the JSON easier to manage a special macro is defined in the *plugin_api.h* header file. This macro is called *QUOTE* and is designed to ease the quoting requirements to create this JSON document.\n\n.. 
code-block:: C\n\n  const char *default_config = QUOTE({\n                \"plugin\" : {\n                        \"description\" : \"My example plugin in C++\",\n                        \"type\" : \"string\",\n                        \"default\" : \"MyPlugin\",\n                        \"readonly\" : \"true\"\n                        },\n                \"asset\" : {\n                        \"description\" : \"The name of the asset the plugin will produce\",\n                        \"type\" : \"string\",\n                        \"default\" : \"MyAsset\"\n                        }\n  });\n\nThe *flags* item contains a bitmask of flag values used to pass information regarding the behavior and requirements of the plugin. The flag values currently supported are shown below\n\n+-------------------+---------------------------------------------------------------------------------+\n| Flag Name         | Description                                                                     |\n+===================+=================================================================================+\n| SP_COMMON         | Used exclusively by storage plugins. The plugin supports the common table       |\n|                   | access needed to store configuration                                            |\n+-------------------+---------------------------------------------------------------------------------+\n| SP_READINGS       | Used exclusively by storage plugins. 
The plugin supports the storage of reading |\n|                   | data                                                                            |\n+-------------------+---------------------------------------------------------------------------------+\n| SP_ASYNC          | The plugin is an asynchronous plugin, *plugin_poll* will not be called and the  |\n|                   | plugin will be supplied with a callback function that it calls each time it has |\n|                   | a new value to pass to the system. The *plugin_register_ingest* entry point will|\n|                   | be called to register the callback with the plugin. The *plugin_start* call will|\n|                   | be called once to initiate the asynchronous delivery of data. This applies      |\n|                   | only to south plugins.                                                          |\n+-------------------+---------------------------------------------------------------------------------+\n| SP_PERSIST_DATA   | The plugin wishes to persist data between executions                            |\n+-------------------+---------------------------------------------------------------------------------+\n| SP_INGEST         | A non-south plugin wishes to ingest new data into the system. 
Used by           |\n|                   | notification plugins                                                            |\n+-------------------+---------------------------------------------------------------------------------+\n| SP_GET_MANAGEMENT | The plugin requires access to the management API interface for the service      |\n+-------------------+---------------------------------------------------------------------------------+\n| SP_GET_STORAGE    | The plugin requires access to the storage service                               |\n+-------------------+---------------------------------------------------------------------------------+\n| SP_DEPRECATED     | The plugin should be considered to be deprecated. New services can not use this |\n|                   | plugin, but existing services may continue to use it                            |\n+-------------------+---------------------------------------------------------------------------------+\n| SP_BUILTIN        | The plugin is not implemented as an external package but is built into the      |\n|                   | system                                                                          |\n+-------------------+---------------------------------------------------------------------------------+\n| SP_CONTROL        | The plugin implements control features                                          |\n+-------------------+---------------------------------------------------------------------------------+\n\nThese flag values may be combined by use of the bitwise *or* operator where more than one of the above options is supported.\n\nPlugin Initialization\n~~~~~~~~~~~~~~~~~~~~~\n\nThe plugin initialization is called after the service that has loaded the plugin has collected the plugin information and resolved the configuration of the plugin, but before any other calls will be made to the plugin. 
The initialization routine is called with the resolved configuration of the plugin; this includes actual values as opposed to the defaults that were returned in the *plugin_info* call.\n\nThis call is used by the plugin to do any initialization or state creation it needs to do. The call returns a handle which will be passed into each subsequent call of the plugin. The handle allows the plugin to create state information that is maintained and passed to it whilst allowing for multiple instances of the same plugin to be loaded by a service if desired. It is equivalent to a *this* or *self* pointer for the plugin, although the plugin is not defined as a class. The handle is the only way in which the plugin should retain information between calls to a given entry point and also the only way information should be passed between entry points.\n\nIn Python, for a simple example of a sensor that reads a GPIO pin for data, we might choose to use that configured GPIO pin as the handle we pass to other calls.\n\n.. code-block:: python\n\n  def plugin_init(config):\n      \"\"\" Initialise the plugin.\n\n      Args:\n          config: JSON configuration document for the device configuration category\n      Returns:\n          handle: JSON object to be used in future calls to the plugin\n      Raises:\n      \"\"\"\n\n      handle = config['gpiopin']['value']\n      return handle\n\nA C/C++ plugin should return a value in a *void* pointer that can then be dereferenced in subsequent calls. A typical C++ implementation might create an instance of a class and use that instance as the handle for the plugin.\n\n.. 
code-block:: C\n\n  /**\n   * Initialise the plugin, called to get the plugin handle\n   */\n  PLUGIN_HANDLE plugin_init(ConfigCategory *config)\n  {\n  MyPluginClass *plugin = new MyPluginClass();\n\n          plugin->configure(config);\n\n          return (PLUGIN_HANDLE)plugin;\n  }\n\nIt should also be observed in the above C/C++ example that the *plugin_init* call is passed a pointer to a *ConfigCategory* class that encapsulates the JSON configuration category for the plugin. Details of the ConfigCategory class are available in the section |C++ Support Classes|.\n\n|br|\n\n\nPlugin Shutdown\n~~~~~~~~~~~~~~~\n\nThe plugin shutdown method is called as part of the shutdown sequence of the service that loaded the plugin. It gives the plugin the opportunity to do any cleanup operations before terminating. As with all calls it is passed the handle of our plugin instance. Plugins can not prevent the shutdown and do not have to implement any actions. In our simple sensor example there is nothing to do in order to shut down the plugin.\n\nA C/C++ plugin might use this *plugin_shutdown* call to delete the plugin class instance it created in the corresponding *plugin_init* call.\n\n.. code-block:: C\n\n  /**\n   * Shutdown the plugin\n   */\n  void plugin_shutdown(PLUGIN_HANDLE *handle)\n  {\n  MyPluginClass *plugin = (MyPluginClass *)handle;\n\n          delete plugin;\n  }\n\n\n|br|\n\n\nPlugin Reconfigure\n~~~~~~~~~~~~~~~~~~\n\nThe plugin reconfigure method is called whenever the configuration of the plugin is changed. It allows for the dynamic reconfiguration of the plugin whilst it is running. The method is called with the handle of the plugin and the updated configuration document. The plugin should take whatever action it needs to and return a new or updated copy of the handle that will be passed to future calls.\n\nThe plugin reconfigure method is shared between most but not all plugin types. 
In particular, it does not exist for the short-lived plugins that are created to perform a single operation and then terminated. These are the north plugins and the notification delivery plugins.\n\nUsing a simple Python example of our sensor reading a GPIO pin, we extract the new pin number from the new configuration data and return that as the new handle for the plugin instance.\n\n.. code-block:: python\n\n  def plugin_reconfigure(handle, new_config):\n      \"\"\" Reconfigures the plugin; it should be called when the configuration of the plugin is changed during the\n          operation of the device service.\n          The new configuration category should be passed.\n\n      Args:\n          handle: handle returned by the plugin initialisation call\n          new_config: JSON object representing the new configuration category for the category\n      Returns:\n          new_handle: new handle to be used in the future calls\n      Raises:\n      \"\"\"\n\n      new_handle = new_config['gpiopin']['value']\n      return new_handle\n\n\nIn C/C++ the *plugin_reconfigure* method is very similar; note however that the *plugin_reconfigure* call is passed the JSON configuration category as a string and not a *ConfigCategory*. It is easy to parse the string and create the C++ class, but a name for the category must be given.\n\n.. 
code-block:: C\n\n  /**\n   * Reconfigure the plugin\n   */\n  void plugin_reconfigure(PLUGIN_HANDLE *handle, string& newConfig)\n  {\n  ConfigCategory\tconfig(\"newConfiguration\", newConfig);\n  MyPluginClass\t\t*plugin = (MyPluginClass *)*handle;\n\n          plugin->configure(&config);\n  }\n\nIt should be noted that the *plugin_reconfigure* call may be delivered in a separate thread for a C/C++ plugin and that the plugin should implement any mutual exclusion mechanisms that are required based on the actions of the *plugin_reconfigure* method.\n\nConfiguration Lifecycle\n-----------------------\n\nFledge has a very particular way of handling configuration; there are a number of design aims that have resulted in the configuration system within Fledge.\n\n  - A desire to allow the plugins to define their own configuration elements.\n\n  - Dynamic configuration that allows for maximum uptime during configuration changes.\n\n  - A descriptive way to define the configuration such that user interfaces can be built without prior knowledge of the elements to be configured.\n\n  - A common approach that will work across many different languages.\n\nFledge divides its configuration into categories, a category being a collection of configuration items. A category is also the smallest item of configuration that can be subscribed to by the code. This subscription mechanism is the way that Fledge facilitates dynamic reconfiguration. It allows a service to subscribe to one or more configuration categories; whenever an item within a category changes, the central configuration manager will call a handler to pass the newly updated configuration category. This handler may be within a service or between services using the micro service management API that every service must support. 
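The subscription mechanism just described can be sketched in miniature. The names below (``CategoryRegistry``, ``subscribe``, ``categoryChanged``) are hypothetical stand-ins for the central configuration manager and the management API that Fledge actually uses; the sketch only illustrates the idea of handlers being passed the newly updated category:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Illustrative sketch only -- not the actual Fledge configuration manager API.
class CategoryRegistry {
public:
    using Handler = std::function<void(const std::string& categoryJSON)>;

    // A service subscribes a handler to a named configuration category.
    void subscribe(const std::string& category, Handler handler) {
        m_handlers[category].push_back(handler);
    }

    // When any item in a category changes, every subscriber is passed the
    // newly updated category content.
    void categoryChanged(const std::string& category, const std::string& json) {
        auto it = m_handlers.find(category);
        if (it == m_handlers.end())
            return;                    // no subscribers for this category
        for (auto& handler : it->second)
            handler(json);
    }

private:
    std::map<std::string, std::vector<Handler>> m_handlers;
};
```

In Fledge itself the handler may live in another service, reached over the management API, but as the surrounding text notes, that transport is transparent to the subscribing code.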
The mechanism however is transparent to the code involved.\n\nThe configuration items within a category are JSON objects; the object key is the name of the configuration item and the object itself contains data about that item. As an example, if we wanted to have a configuration item called *MaxRetries* that is an integer with a default value of 5, then we would configure it using the JSON object\n\n.. code-block:: console\n\n   \"MaxRetries\" : {\n                \"type\" : \"integer\",\n                \"default\" : \"5\"\n                }\n\nWe have used the properties *type* and *default* to define properties of the configuration item *MaxRetries*. These are not the only properties that a configuration item can have; the full set of item types and properties is shown below.\n\nTypes\n~~~~~\n\nThe configuration items within a configuration category can each be defined as one of a set of types. The types currently supported by Fledge are\n\n.. list-table::\n   :header-rows: 1\n\n   * - Type\n     - Description\n   * - integer\n     - An integer numeric value. The value may be positive or negative but may not contain any fractional part. The *minimum* and *maximum* properties may be used to control the limits of the values assigned to an integer.\n   * - float\n     - A floating point numeric item. The *minimum* and *maximum* properties may be used to control the limits of the values assigned to a floating point item.\n   * - string\n     - An alpha-numeric array of characters that may contain any printable characters. The *length* property can be used to constrain the maximum length of the string.\n   * - boolean\n     - A boolean value that can be assigned the values *true* or *false*.\n   * - IPv4\n     - An IP version 4 address.\n   * - IPv6\n     - An IP version 6 address.\n   * - X509 certificate\n     - An X509 certificate.\n   * - password\n     - A string that is used as a password. 
There is no difference between this and a string type other than user interfaces do not show this in plain text.\n   * - JSON\n     - A JSON document. The value is checked to ensure it is a valid JSON document.\n   * - URL\n     - A universal resource locator string. The API will check for correct URL formatting of the value.\n   * - enumeration\n     - The item can be assigned one of a fixed set of values. These values are defined in the *options* property of the item.\n   * - script\n     - A block of text that is executed as a script. The script type should be used for larger blocks of code to be executed.\n   * - code\n     - A block of text that is executed as Python code. This is used for small snippets of Python rather than larger scripts.\n   * - northTask\n     - The name of a north task. The API will check that the value matches the name of an existing north task.\n   * - ACL\n     - An access control list. The value is the string name of an access control list that has been created within Fledge.\n   * - list\n     - A list of items, the items can be of type *string*, *integer*, *float*, *enumeration* or *object*. The type of the items within the list must all be the same, and this is defined via the *items* property of the list. A limit on the maximum number of entries allowed in the list can be enforced by use of the *listSize* property.\n   * - kvlist\n     - A key value pair list. The key is always a string value, but the value of the item in the list may be of type *string*, *enumeration*, *float*, *integer* or *object*. The type of the values in the kvlist is defined by the *items* property of the configuration item. A limit on the maximum number of entries allowed in the list can be enforced by use of the *listSize* property.\n   * - object\n     - A complex configuration type with multiple elements that may be used within *list* and *kvlist* items only; it is not possible to have *object* type items outside of a list. 
Object type configuration items have a set of *properties* defined, each of which is itself a configuration item. \n\nList\n####\n\nA simple list would be defined using the JSON shown below\n\n.. code-block:: JSON\n\n \"tags\" : {\n                \"description\" : \"A set of tag names on which to operate\",\n                \"type\" : \"list\",\n                \"items\" : \"string\",\n                \"default\" : \"[ \\\"speed\\\", \\\"temperature\\\", \\\"voltage\\\" ]\",\n                \"order\" : \"4\",\n                \"displayName\" : \"Labels\"\n           }\n\nThe returned value for such a configuration item is merely an array of strings.\n\n.. code-block:: JSON\n\n    [ \"speed\", \"temperature\", \"voltage\", \"current\", \"status\" ]\n\nIn the above example the user is able to supply any number of unconstrained strings; however, if we want the user to choose from a set of fixed options we may use an enumeration as the type of the items in the list rather than a string.\n\n.. code-block:: JSON\n\n    \"tags\" : {\n                \"description\" : \"A set of tag names on which to operate\",\n                \"type\" : \"list\",\n                \"items\" : \"enumeration\",\n                \"options\" : [ \"speed\", \"current\", \"temperature\", \"voltage\", \"status\", \"runtime\" ],\n                \"default\" : \"[ \\\"speed\\\", \\\"temperature\\\", \\\"voltage\\\" ]\",\n                \"order\" : \"4\",\n                \"displayName\" : \"Labels\"\n             }\n\nThis differs from an enumeration in that the user is allowed to select more than one option from the set of options.\n\nLists may also be used to allow the entry of a list of numeric values, either integer or float, by setting the appropriate type for the items in a list.\n\n.. 
code-block:: JSON\n\n \"tags\" : {\n                \"description\" : \"A set of factor values to use when calculating the best fit\",\n                \"type\" : \"list\",\n                \"items\" : \"integer\",\n                \"default\" : \"[ \\\"1\\\", \\\"2\\\", \\\"3\\\" ]\",\n                \"order\" : \"4\",\n                \"displayName\" : \"Factors\"\n           }\n\n\nKey/Value List\n##############\n\nA key/value list is a way of storing tagged item pairs within a list. For example, to create a list of labels and expressions we can use a kvlist that stores the expressions as string values in the kvlist.\n\n.. code-block:: JSON\n\n  \"expressions\" : {\n                \"description\" : \"A set of expressions used to evaluate and label data\",\n                \"type\" : \"kvlist\",\n                \"items\" : \"string\",\n                \"default\" : \"{\\\"idle\\\" : \\\"speed == 0\\\"}\",\n                \"order\" : \"4\",\n                \"displayName\" : \"Labels\"\n                }\n\nThe key values must be unique within a kvlist, as the data is stored as a JSON object with the key becoming the property name and the value of the property the corresponding value for the key.\n\nA returned list with 4 entries would have a value as shown below.\n\n.. code-block:: JSON\n\n   {\n        \"idle\" : \"speed == 0\",\n        \"operational\" : \"speed == 5000\",\n        \"transitional\" : \"speed > 0 && speed < 5000\",\n        \"overspeed\" : \"speed > 5000\"\n   }\n\nLists of Objects\n################\n\nObject type items may be used in lists and are a mechanism to allow for list of groups of configuration items. The object list type items must specify a property called *properties*. The value of this is a JSON object that contains a list of configuration items that are grouped into the object.\n\nAn example use of an object list might allow for a map structure to be built for accessing a device like a PLC. 
The following shows the definition of a list that contains the map for a PLC.\n\n.. code-block:: JSON\n\n  \"map\": {\n        \"description\": \"A list of datapoints to read and PLC register definitions\",\n        \"type\": \"list\",\n        \"items\" : \"object\",\n        \"default\": \"[ { \\\"datapoint\\\" : \\\"speed\\\", \\\"register\\\" : \\\"10\\\", \\\"width\\\" : \\\"1\\\", \\\"type\\\" : \\\"integer\\\"} ]\",\n        \"order\" : \"3\",\n        \"displayName\" : \"PLC Map\",\n        \"properties\" : {\n                \"datapoint\" : {\n                        \"description\" : \"The name of the datapoint to create for the map entry\",\n                        \"displayName\" : \"Datapoint\",\n                        \"type\" : \"string\",\n                        \"default\" : \"datapoint\"\n                        },\n                \"register\" : {\n                        \"description\" : \"The register number to read\",\n                        \"displayName\" : \"Register\",\n                        \"type\" : \"integer\",\n                        \"default\" : \"0\"\n                        },\n                \"width\" : {\n                        \"description\" : \"Number of registers to read\",\n                        \"displayName\" : \"Width\",\n                        \"type\" : \"integer\",\n                        \"maximum\" : \"4\",\n                        \"default\" : \"1\"\n                        },\n                \"type\" : {\n                        \"description\" : \"The data type to read\",\n                        \"displayName\" : \"Data Type\",\n                        \"type\" : \"enumeration\",\n                        \"options\" : [ \"integer\",\"float\", \"boolean\" ],\n                        \"default\" : \"integer\"\n                        }\n                }\n        }\n\nThe *value* and *default* properties for a list of objects is returned as a JSON structure. 
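As a sketch of how a plugin might consume such a value (the helper below is hypothetical, not part of the Fledge API; note that numeric item values are carried as strings, as in all the examples in this section):

```python
import json

def decode_plc_map(value):
    """Decode a list-of-objects configuration value, converting the
    numeric fields, which are carried as strings, to integers."""
    return [
        {
            "datapoint": entry["datapoint"],
            "register": int(entry["register"]),
            "width": int(entry["width"]),
            "type": entry["type"],
        }
        for entry in json.loads(value)
    ]

plc_map = decode_plc_map(
    '[{"datapoint": "speed", "register": "10", "width": "1", "type": "integer"}]'
)
```
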
An example of the above list with two elements in the list would be returned as follows:\n\n.. code-block:: JSON\n\n  [\n    {\n        \"datapoint\" : \"voltage\",\n        \"register\" : \"10\",\n        \"width\" : \"2\",\n        \"type\" : \"integer\"\n    },\n    {\n        \"datapoint\" : \"current\",\n        \"register\" : \"14\",\n        \"width\" : \"4\",\n        \"type\" : \"float\"\n    }\n  ]\n\nAn alternative might be to use a key/value pair list, *kvlist* type, where the key is the name of the item to be read, the datapoint in the previous example and the value is an object that describes how to read the data from the PLC.\n\n.. code-block:: JSON\n\n  \"map\": {\n        \"description\": \"A list of datapoints to read and PLC register definitions\",\n        \"type\": \"kvlist\",\n        \"items\" : \"object\",\n        \"default\": \"{\\\"speed\\\" : {\\\"register\\\" : \\\"10\\\", \\\"width\\\" : \\\"1\\\", \\\"type\\\" : \\\"integer\\\"}}\",\n        \"order\" : \"3\",\n        \"displayName\" : \"PLC Map\",\n        \"keyName\": \"Datapoints\",\n        \"keyDescription\": \"A list of datapoints to read\",\n        \"properties\" : {\n                \"register\" : {\n                        \"description\" : \"The register number to read\",\n                        \"displayName\" : \"Register\",\n                        \"type\" : \"integer\",\n                        \"default\" : \"0\"\n                        },\n                \"width\" : {\n                        \"description\" : \"Number of registers to read\",\n                        \"displayName\" : \"Width\",\n                        \"type\" : \"integer\",\n                        \"maximum\" : \"4\",\n                        \"default\" : \"1\"\n                        },\n                \"type\" : {\n                        \"description\" : \"The data type to read\",\n                        \"displayName\" : \"Data Type\",\n                        \"type\" : 
\"enumeration\",\n                        \"options\" : [ \"integer\", \"float\", \"boolean\" ],\n                        \"default\" : \"integer\"\n                        }\n                }\n        }\n\nThe *value* and *default* properties for a list of objects are returned as a JSON structure. An example of the above list with two entries, *voltage* and *current*, would be returned as follows:\n\n.. code-block:: JSON\n\n  {\n        \"voltage\" : {\n                        \"register\" : \"10\",\n                        \"width\" : \"2\",\n                        \"type\" : \"integer\"\n                },\n        \"current\" : {\n                        \"register\" : \"14\",\n                        \"width\" : \"4\",\n                        \"type\" : \"float\"\n                }\n  }\n\nThe *keyName* and *keyDescription* will be used by the GUI client to display a name and description associated with an entry in the key/value list.\n\nProperties\n~~~~~~~~~~\n\n.. list-table::\n   :header-rows: 1\n\n   * - Property\n     - Description\n   * - default\n     - The default value for the configuration item. This is always expressed as a string regardless of the type of the configuration item.\n   * - deprecated\n     - A boolean flag to indicate that this item is no longer used and will be removed in a future release.\n   * - description\n     - A description of the configuration item used in the user interface to give more details of the item. Commonly used as a mouse over help prompt.\n   * - displayName\n     - The string to use in the user interface when presenting the configuration item. Generally a more user friendly form of the item name. 
Item names are referenced within the code.\n   * - items\n     - The type of the items in a list or kvlist configuration item.\n   * - length\n     - The maximum length of the string value of the item.\n   * - listSize\n     - The maximum number of entries allowed in a list or kvlist item.\n   * - mandatory\n     - A boolean flag to indicate that this item can not be left blank.\n   * - maximum\n     - The maximum value for a numeric configuration item.\n   * - minimum\n     - The minimum value for a numeric configuration item.\n   * - options\n     - Only used for enumeration type elements. This is a JSON array of strings that contains the options in the enumeration.\n   * - order\n     - Used in the user interface to give an indication of how high up in the dialogue to place this item.\n   * - group\n     - Used to group related items together. The main use of this is within the GUI which will turn each group into a tab in the creation and edit screens.\n   * - readonly\n     - A boolean property that can be used to include items that can not be altered by the API.\n   * - rule\n     - A validation rule that will be run against the value. This must evaluate to true for the new value to be accepted by the API.\n   * - type\n     - The type of the configuration item. The types supported are: integer, float, string, password, enumeration, boolean, JSON, URL, IPv4, IPv6, script, code, X509 certificate, ACL, list, kvlist, object and northTask.\n   * - validity\n     - An expression used to determine if the configuration item is valid. Used in the UI to gray out one value based on the value of others.\n   * - value\n     - The current value of the configuration item. 
This is not included when defining a set of default configuration in, for example, a plugin.\n   * - properties\n     - A set of items that are used in list and kvlist type items to create a list of groups of configuration items.\n   * - keyName\n     - A display name to be used for the entry and display of the key in a key-value list whose items are objects.\n   * - keyDescription\n     - A description of the key in a key-value list whose items are objects.\n   * - permissions\n     - An array of user roles that are allowed to update this configuration item. If not given then the configuration item can be updated by any user. If the permissions property is included in a configuration item the array must have at least one entry.\n\nOf the above properties of a configuration item, *type*, *default* and *description* are mandatory; all others are optional.\n\n.. note::\n\n  It is strongly advised to include a *displayName* and an *order* in every item to improve the GUI rendering of configuration screens. If a configuration category is very large it is also recommended to use the *group* property to group together related items. These grouped items are displayed within separate tabs in the current Fledge GUI.\n\nManagement\n~~~~~~~~~~\n\nConfiguration data is stored by the storage service and is maintained by the configuration manager in the core Fledge service. When code requires configuration it would create a configuration category with a set of items as a JSON document. It would then register that configuration category with the configuration manager. The configuration manager is responsible for storing the data in the storage layer; as it does this it first checks to see if there is already a configuration category from a previous execution of the code. 
If one does exist then the two are merged; this merging process allows updates to the software to extend the configuration category whilst maintaining any changes in values made by the user.\n\nDynamic reconfiguration within Fledge code is supported by allowing code to subscribe for changes in a configuration category. The services that load plugins will automatically register for the plugin configuration category and when changes are seen will call the *plugin_reconfigure* entry point of the plugin with the new configuration. This allows the plugins to receive the updated configuration and take whatever actions they must in order to honour the changes to configuration. This allows for configuration to be changed without the need to stop and restart the services; however, some plugins may need to close connections and reopen them, which may cause a slight interruption in the process of gathering data. That choice is up to the developers of the individual plugins.\n\nDiscovery\n~~~~~~~~~\n\nIt is possible using this system to do a limited amount of discovery and tailoring of plugin configuration. A typical case when discovery might be used is to discover devices on a network that can be monitored. This can be achieved by putting the discovery code in the *plugin_info* entry point and having that discovery code alter the default configuration that is returned as part of the plugin information structure.\n\nAn example of this might be to have an enumeration in the configuration that enumerates the devices to be monitored. The discovery code would then populate the enumeration's *options* property with the various devices it discovered when the *plugin_info* call was made.\n\nAn example of the *plugin_info* entry point that does this might be as follows\n\n.. 
code-block:: C\n\n    /**\n     * Return the information about this plugin\n     */\n    PLUGIN_INFORMATION *plugin_info()\n    {\n    DeviceDiscovery discover;\n\n            char *config = discover.discover(default_config, \"discovered\");\n            info.config = config;\n            return &info;\n    }\n\nThe configuration in *default_config* is assumed to have an enumeration item called *discovered*\n\n.. code-block:: console\n\n        \"discovered\" : {\n                \"description\" : \"The discovered devices, select 'Manual' to manually enter an IP address\",\n                \"type\" : \"enumeration\",\n                \"options\" : [ \"Manual\" ],\n                \"default\" : \"Manual\",\n                \"displayName\": \"Devices\",\n                \"mandatory\": \"true\",\n                \"order\" : \"2\"\n                },\n        \"IP\" : {\n                \"description\" : \"The IP address of your device, used to add a device that could not be discovered\",\n                \"type\" : \"string\",\n                \"default\" : \"127.0.0.1\",\n                \"displayName\": \"IP Address\",\n                \"mandatory\": \"true\",\n                \"order\" : \"3\",\n                \"validity\" : \"discovered == \\\"Manual\\\"\"\n                },\n\nNote the use of the *Manual* option to allow entry of devices that could not be discovered.\n\nThe *discover* method does the actual discovery and manipulates the JSON configuration to add the *options* element of the configuration item.\n\nThe code that connects to the device should then look at the *discovered* configuration item; if it finds it set to *Manual* then it will get an IP address from the *IP* configuration item. 
Otherwise it uses the information in the *discovered* item to connect. Note that this need not be just an IP address; you can format the data in a way that is more user friendly and have the connection code extract what it needs, or create a table in the *discover* method to allow for user meaningful strings to be mapped to network addresses.\n\nThe example here was written in C++; there is nothing that is specific to C++ however, and the same approach can be taken in Python.\n\nOne thing to note however: the *plugin_info* call is used in the display of available plugins, so discovery code that is very slow will impact the performance of plugin selection.\n\nWriting Audit Trail\n~~~~~~~~~~~~~~~~~~~\n\nPlugins are able to write records to the audit trail. These records must use one of the predefined audit codes that are supported by the system. See |audit_trail| for details of the supported audit codes within the system.\n\nIn C++ you use the *AuditLogger* class to write these audit trail entries; this is a singleton object that is accessed via the getLogger method.\n\n.. code-block:: C\n\n   AuditLogger *audit = AuditLogger::getLogger();\n   audit->audit(\"NHDWN\", \"INFORMATION\");\n\nThere is also a convenience function that can be used if you do not want to define a local pointer to the AuditLogger\n\n.. code-block:: C\n\n   AuditLogger::auditLog(\"NHAVL\", \"INFORMATION\");\n\nPermissions\n~~~~~~~~~~~\n\nThe permissions property is used to control the update of configuration items within a category. If Fledge has been configured such that it requires authentication in order to connect to the REST API, or by extension the Fledge GUI, then when a user attempts to update a category which contains an item with the permissions property set, the user role will be fetched and compared to the list of roles able to update the item. 
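The comparison described above is a simple role-membership test; a minimal Python sketch of that behaviour (the function is illustrative, not the actual Fledge implementation):

```python
def can_update(item, user_roles):
    """Return True if a user with the given roles may update the item.
    Items without a permissions property may be updated by any user."""
    allowed = item.get("permissions")
    if allowed is None:
        return True
    return any(role in allowed for role in user_roles)

# Configuration item restricted to the admin role, as in the
# REST API category example.
authentication = {
    "type": "enumeration",
    "default": "optional",
    "permissions": ["admin"],
}
```
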
Items within the category that do not have the permissions property will not be affected.\n\nThe REST API category is an example of using the permissions property to restrict the items in the category that non-admin role users can change. Below we show one of the configuration items that is restricted, the one that controls the requirement to authenticate when connecting to Fledge.\n\n.. code-block:: JSON\n\n  \"authentication\": {\n    \"description\": \"API Call Authentication\",\n    \"type\": \"enumeration\",\n    \"options\": [\n      \"mandatory\",\n      \"optional\"\n    ],\n    \"default\": \"optional\",\n    \"displayName\": \"Authentication\",\n    \"order\": \"5\",\n    \"permissions\": [\n      \"admin\"\n    ],\n    \"value\": \"optional\"\n  }\n\nThe use of the permissions property with the single role of admin means that only admin users can change this setting. If we wished to allow more than just admin users, we could add another role. For example\n\n.. code-block:: JSON\n\n        \"logLevel\": {\n            \"description\": \"Minimum logging level reported for Core server\",\n            \"type\": \"enumeration\",\n            \"displayName\": \"Minimum Log Level\",\n            \"options\": [\"debug\", \"info\", \"warning\", \"error\", \"critical\"],\n            \"default\": \"warning\",\n            \"order\": \"1\",\n            \"permissions\": [\"admin\", \"control\"]\n        }\n\nIn this case users with the role admin or control are allowed to alter the configuration item.\n\n.. note::\n\n   Users created with the role of view configuration or view data are unable to alter any configuration items regardless of the permissions properties on those items.\n"
  },
  {
    "path": "docs/plugin_developers_guide/035_CPP.rst",
"content": "\n.. |br| raw:: html\n\n   <br />\n\n.. Images\n\n.. Links\n\n.. =============================================\n\nC++ Support Classes\n===================\n\nA number of support classes exist within the common library that forms part of every Fledge plugin.\n\nReading\n-------\n\nThe *Reading* class and the associated *Datapoint* and *DatapointValue* classes provide the mechanism within C++ plugins to manipulate the reading asset data. The public part of the *Reading* class is currently defined as follows;\n\n.. code-block:: C\n\n  class Reading {\n          public:\n                  Reading(const std::string& asset, Datapoint *value);\n                  Reading(const std::string& asset, std::vector<Datapoint *> values);\n                  Reading(const std::string& asset, std::vector<Datapoint *> values, const std::string& ts);\n                  Reading(const Reading& orig);\n\n                  ~Reading();\n                  void                            addDatapoint(Datapoint *value);\n                  Datapoint                       *removeDatapoint(const std::string& name);\n                  std::string                     toJSON(bool minimal = false) const;\n                  std::string                     getDatapointsJSON() const;\n                  // Return AssetName\n                  const std::string&              getAssetName() const { return m_asset; };\n                  // Set AssetName\n                  void                            setAssetName(std::string assetName) { m_asset = assetName; };\n                  unsigned int                    getDatapointCount() { return m_values.size(); };\n                  void                            removeAllDatapoints();\n                  // Return Reading datapoints\n                  const std::vector<Datapoint *>  getReadingData() const { return m_values; };\n                  // Return reference to Reading datapoints\n                  std::vector<Datapoint *>&       
getReadingData() { return m_values; };\n                  unsigned long                   getId() const { return m_id; };\n                  unsigned long                   getTimestamp() const { return (unsigned long)m_timestamp.tv_sec; };\n                  unsigned long                   getUserTimestamp() const { return (unsigned long)m_userTimestamp.tv_sec; };\n                  void                            setId(unsigned long id) { m_id = id; };\n                  void                            setTimestamp(unsigned long ts) { m_timestamp.tv_sec = (time_t)ts; };\n                  void                            setTimestamp(struct timeval tm) { m_timestamp = tm; };\n                  void                            setTimestamp(const std::string& timestamp);\n                  void                            getTimestamp(struct timeval *tm) { *tm = m_timestamp; };\n                  void                            setUserTimestamp(unsigned long uTs) { m_userTimestamp.tv_sec = (time_t)uTs; };\n                  void                            setUserTimestamp(struct timeval tm) { m_userTimestamp = tm; };\n                  void                            setUserTimestamp(const std::string& timestamp);\n                  void                            getUserTimestamp(struct timeval *tm) { *tm = m_userTimestamp; };\n\n                  typedef enum dateTimeFormat { FMT_DEFAULT, FMT_STANDARD, FMT_ISO8601 } readingTimeFormat;\n\n                  // Return Reading asset time - ts time\n                  const std::string getAssetDateTime(readingTimeFormat datetimeFmt = FMT_DEFAULT, bool addMs = true) const;\n                  // Return Reading asset time - user_ts time\n                  const std::string getAssetDateUserTime(readingTimeFormat datetimeFmt = FMT_DEFAULT, bool addMs = true) const;\n  }\n\nThe *Reading* class contains a number of items that are mapped to the JSON representation of data that is sent to the Fledge storage service and are used by 
the various services and plugins within Fledge.\n\n  - **Asset Name**: The name of the asset. The asset name is set in the constructor of the reading and retrieved via the *getAssetName()* method.\n\n  - **Timestamp**: The timestamp when the reading was first seen within Fledge.\n\n  - **User Timestamp**: The timestamp for the actual data in the reading. This may differ from the value of Timestamp if the device itself is able to supply a timestamp value.\n\n  - **Datapoints**: The actual data of a reading stored in a Datapoint class.\n\nThe *Datapoint* class provides a name for each data point within a *Reading* and the tagged type data for the reading value. The public definition of the *Datapoint* class is as follows;\n\n.. code-block:: C\n\n  class Datapoint {\n          public:\n                  /**\n                   * Construct with a data point value\n                   */\n                  Datapoint(const std::string& name, DatapointValue& value) : m_name(name), m_value(value);\n                  ~Datapoint();\n                  /**\n                   * Return asset reading data point as a JSON\n                   * property that can be included within a JSON\n                   * document.\n                   */\n                  std::string     toJSONProperty();\n                  const std::string getName() const;\n                  void setName(std::string name);\n                  const DatapointValue getData() const;\n                  DatapointValue& getData();\n  }\n\nClosely associated with the *Datapoint* is the *DatapointValue* which uses a tagged union to store the values. The public definition of the *DatapointValue*  is as follows;\n\n.. 
code-block:: C\n\n   class DatapointValue {\n           public:\n                   /**\n                    * Construct with a string\n                    */\n                   DatapointValue(const std::string& value)\n                   {\n                           m_value.str = new std::string(value);\n                           m_type = T_STRING;\n                   };\n                   /**\n                    * Construct with an integer value\n                    */\n                   DatapointValue(const long value)\n                   {\n                           m_value.i = value;\n                           m_type = T_INTEGER;\n                   };\n                   /**\n                    * Construct with a floating point value\n                    */\n                   DatapointValue(const double value)\n                   {\n                           m_value.f = value;\n                           m_type = T_FLOAT;\n                   };\n                   /**\n                    * Construct with an array of floating point values\n                    */\n                   DatapointValue(const std::vector<double>& values)\n                   {\n                           m_value.a = new std::vector<double>(values);\n                           m_type = T_FLOAT_ARRAY;\n                   };\n\n                   /**\n                    * Construct with an array of Datapoints\n                    */\n                   DatapointValue(std::vector<Datapoint*>*& values, bool isDict)\n                   {\n                           m_value.dpa = values;\n                           m_type = isDict? 
T_DP_DICT : T_DP_LIST;\n                   }\n\n                   /**\n                    * Construct with an Image\n                    */\n                   DatapointValue(const DPImage& value)\n                   {\n                           m_value.image = new DPImage(value);\n                           m_type = T_IMAGE;\n                   }\n\n                   /**\n                    * Construct with a DataBuffer\n                    */\n                   DatapointValue(const DataBuffer& value)\n                   {\n                           m_value.dataBuffer = new DataBuffer(value);\n                           m_type = T_DATABUFFER;\n                   }\n\n                   /**\n                    * Construct with an Image Pointer, the\n                    * image becomes owned by the datapointValue\n                    */\n                   DatapointValue(DPImage *value)\n                   {\n                           m_value.image = value;\n                           m_type = T_IMAGE;\n                   }\n\n                   /**\n                    * Construct with a DataBuffer\n                    */\n                   DatapointValue(DataBuffer *value)\n                   {\n                           m_value.dataBuffer = value;\n                           m_type = T_DATABUFFER;\n                   }\n\n                   /**\n                    * Construct with a 2 dimensional  array of floating point values\n                    */\n                   DatapointValue(const std::vector< std::vector<double> >& values)\n                   {\n                           m_value.a2d = new std::vector< std::vector<double> >();\n                           for (auto row : values)\n                           {\n                                   m_value.a2d->push_back(std::vector<double>(row));\n                           }\n                           m_type = T_2D_FLOAT_ARRAY;\n                   };\n\n                   /**\n                
    * Copy constructor\n                    */\n                   DatapointValue(const DatapointValue& obj);\n\n                   /**\n                    * Assignment Operator\n                    */\n                   DatapointValue& operator=(const DatapointValue& rhs);\n\n                   /**\n                    * Destructor\n                    */\n                   ~DatapointValue();\n\n                   \n                   /**\n                    * Set the value of a datapoint, this may\n                    * also cause the type to be changed.\n                    * @param value\tAn integer value to set\n                    */\n                   void setValue(long value)\n                   {\n                           m_value.i = value;\n                           m_type = T_INTEGER;\n                   }\n\n                   /**\n                    * Set the value of a datapoint, this may\n                    * also cause the type to be changed.\n                    * @param value\tA floating point value to set\n                    */\n                   void setValue(double value)\n                   {\n                           m_value.f = value;\n                           m_type = T_FLOAT;\n                   }\n\n                   /** Set the value of a datapoint to be an image\n                    * @param value The image to set in the data point\n                    */\n                   void setValue(const DPImage& value)\n                   {\n                           m_value.image = new DPImage(value);\n                           m_type = T_IMAGE;\n                   }\n\n                   /**\n                    * Return the value as a string\n                    */\n                   std::string\ttoString() const;\n\n                   /**\n                    * Return string value without trailing/leading quotes\n                    */\n                   std::string\ttoStringValue() const { return *m_value.str; };\n\n    
               /**\n                    * Return long value\n                    */\n                   long toInt() const { return m_value.i; };\n                   /**\n                    * Return double value\n                    */\n                   double toDouble() const { return m_value.f; };\n\n                   // Supported Data Tag Types\n                   typedef enum DatapointTag\n                   {\n                           T_STRING,\n                           T_INTEGER,\n                           T_FLOAT,\n                           T_FLOAT_ARRAY,\n                           T_DP_DICT,\n                           T_DP_LIST,\n                           T_IMAGE,\n                           T_DATABUFFER,\n                           T_2D_FLOAT_ARRAY\n                   } dataTagType;\n\n                   /**\n                    * Return the Tag type\n                    */\n                   dataTagType getType() const\n                   {\n                           return m_type;\n                   }\n\n                   std::string getTypeStr() const\n                   {\n                           switch(m_type)\n                           {\n                                   case T_STRING: return std::string(\"STRING\");\n                                   case T_INTEGER: return std::string(\"INTEGER\");\n                                   case T_FLOAT: return std::string(\"FLOAT\");\n                                   case T_FLOAT_ARRAY: return std::string(\"FLOAT_ARRAY\");\n                                   case T_DP_DICT: return std::string(\"DP_DICT\");\n                                   case T_DP_LIST: return std::string(\"DP_LIST\");\n                                   case T_IMAGE: return std::string(\"IMAGE\");\n                                   case T_DATABUFFER: return std::string(\"DATABUFFER\");\n                                   case T_2D_FLOAT_ARRAY: return std::string(\"2D_FLOAT_ARRAY\");\n                        
           default: return std::string(\"INVALID\");\n                           }\n                   }\n\n                   /**\n                    * Return array of datapoints\n                    */\n                   std::vector<Datapoint*>*& getDpVec()\n                   {\n                           return m_value.dpa;\n                   }\n\n                   /**\n                    * Return array of float\n                    */\n                   std::vector<double>*& getDpArr()\n                   {\n                           return m_value.a;\n                   }\n\n                   /**\n                    * Return 2D array of float\n                    */\n                   std::vector<std::vector<double> >*& getDp2DArr()\n                   {\n                           return m_value.a2d;\n                   }\n\n                   /**\n                    * Return the Image\n                    */\n                   DPImage *getImage()\n                   {\n                           return m_value.image;\n                   }\n\n                   /**\n                    * Return the DataBuffer\n                    */\n                   DataBuffer *getDataBuffer()\n                   {\n                           return m_value.dataBuffer;\n                   }\n   };\n\nThe *DatapointValue* can store data in a number of types\n\n +------------------+---------------------------------------------+\n | Type             | C++ Representation                          |\n +==================+=============================================+\n | T_STRING         | Pointer to std::string                      |\n +------------------+---------------------------------------------+\n | T_INTEGER        | long                                        |\n +------------------+---------------------------------------------+\n | T_FLOAT          | double                                      |\n 
+------------------+---------------------------------------------+\n | T_FLOAT_ARRAY    | Pointer to std::vector<double>              |\n +------------------+---------------------------------------------+\n | T_2D_FLOAT_ARRAY | Pointer to std::vector<std::vector<double>> |\n +------------------+---------------------------------------------+\n | T_DP_DICT        | Pointer to std::vector<Datapoint \\*>        |\n +------------------+---------------------------------------------+\n | T_DP_LIST        | Pointer to std::vector<Datapoint \\*>        |\n +------------------+---------------------------------------------+\n | T_IMAGE          | Pointer to DPImage                          |\n +------------------+---------------------------------------------+\n | T_DATABUFFER     | Pointer to DataBuffer                       |\n +------------------+---------------------------------------------+\n\nConfiguration Category\n----------------------\n\nThe *ConfigCategory* class is a support class for managing configuration information within a plugin and is passed to the plugin entry points. The public definition of the class is as follows;\n\n.. 
code-block:: C\n\n  class ConfigCategory {\n          public:\n                  enum ItemType {\n                          UnknownType,\n                          StringItem,\n                          EnumerationItem,\n                          JsonItem,\n                          BoolItem,\n                          NumberItem,\n                          DoubleItem,\n                          ScriptItem,\n                          CategoryType,\n                          CodeItem\n                  };\n\n                  ConfigCategory(const std::string& name, const std::string& json);\n                  ConfigCategory() {};\n                  ConfigCategory(const ConfigCategory& orig);\n                  ~ConfigCategory();\n                  void                            addItem(const std::string& name, const std::string description,\n                                                          const std::string& type, const std::string def,\n                                                          const std::string& value);\n                  void                            addItem(const std::string& name, const std::string description,\n                                                          const std::string def, const std::string& value,\n                                                          const std::vector<std::string> options);\n                  void                            removeItems();\n                  void                            removeItemsType(ItemType type);\n                  void                            keepItemsType(ItemType type);\n                  bool                            extractSubcategory(ConfigCategory &subCategories);\n                  void                            setDescription(const std::string& description);\n                  std::string                     getName() const;\n                  std::string                     getDescription() const;\n                  unsigned int                    
getCount() const;\n                  bool                            itemExists(const std::string& name) const;\n                  bool                            setItemDisplayName(const std::string& name, const std::string& displayName);\n                  std::string                     getValue(const std::string& name) const;\n                  std::string                     getType(const std::string& name) const;\n                  std::string                     getDescription(const std::string& name) const;\n                  std::string                     getDefault(const std::string& name) const;\n                  bool                            setDefault(const std::string& name, const std::string& value);\n                  std::string                     getDisplayName(const std::string& name) const;\n                  std::vector<std::string>        getOptions(const std::string& name) const;\n                  std::string                     getLength(const std::string& name) const;\n                  std::string                     getMinimum(const std::string& name) const;\n                  std::string                     getMaximum(const std::string& name) const;\n                  bool                            isString(const std::string& name) const;\n                  bool                            isEnumeration(const std::string& name) const;\n                  bool                            isJSON(const std::string& name) const;\n                  bool                            isBool(const std::string& name) const;\n                  bool                            isNumber(const std::string& name) const;\n                  bool                            isDouble(const std::string& name) const;\n                  bool                            isDeprecated(const std::string& name) const;\n                  std::string                     toJSON(const bool full=false) const;\n                  std::string                     
itemsToJSON(const bool full=false) const;\n                  ConfigCategory&                 operator=(ConfigCategory const& rhs);\n                  ConfigCategory&                 operator+=(ConfigCategory const& rhs);\n                  void                            setItemsValueFromDefault();\n                  void                            checkDefaultValuesOnly() const;\n                  std::string                     itemToJSON(const std::string& itemName) const;\n                  enum ItemAttribute              { ORDER_ATTR, READONLY_ATTR, MANDATORY_ATTR, FILE_ATTR};\n                  std::string                     getItemAttribute(const std::string& itemName,\n                                                                   ItemAttribute itemAttribute) const;\n  };\n\nAlthough *ConfigCategory* is a complex class, only a few of the methods are commonly used within a plugin\n\n  - **itemExists** - tests if an expected configuration item exists within the configuration category.\n  - **getValue** - returns the value of a configuration item from within the configuration category\n  - **isBool** - tests if a configuration item is of boolean type\n  - **isNumber** - tests if a configuration item is a number\n  - **isDouble** - tests if a configuration item is valid to be represented as a double\n  - **isString** - tests if a configuration item is a string\n\nLogger\n------\n\nThe *Logger* class is used to write entries to the syslog system within Fledge. A singleton *Logger* exists which can be obtained using the following code snippet;\n\n.. code-block:: C\n\n  Logger *logger = Logger::getLogger();\n  logger->error(\"An error has occurred within the plugin processing\");\n\n\nIt is then possible to log messages at one of five different log levels: *debug*, *info*, *warn*, *error* or *fatal*. Messages may be logged using standard printf formatting strings. The public definition of the *Logger* class is as follows;\n\n.. 
code-block:: C\n\n  class Logger {\n          public:\n                  Logger(const std::string& application);\n                  ~Logger();\n                  static Logger *getLogger();\n                  void debug(const std::string& msg, ...);\n                  void printLongString(const std::string&);\n                  void info(const std::string& msg, ...);\n                  void warn(const std::string& msg, ...);\n                  void error(const std::string& msg, ...);\n                  void fatal(const std::string& msg, ...);\n                  void setMinLevel(const std::string& level);\n  };\n\nThe various log levels should be used as follows;\n\n  - **debug**: should be used to output messages that are relevant only to a programmer that is debugging the plugin.\n  - **info**: should be used for information that is meaningful to the end users, but should not normally be logged.\n  - **warn**: should be used for warning messages that will normally be logged but reflect a condition that does not prevent the plugin from operating.\n  - **error**: should be used for conditions that cause a temporary failure in processing within the plugin.\n  - **fatal**: should be used for conditions that cause the plugin to fail processing permanently, possibly requiring a restart of the microservice in order to resolve.\n"
  },
  {
    "path": "docs/plugin_developers_guide/037_hybrid_plugins.rst",
    "content": ".. |br| raw:: html\n\n   <br />\n\n.. Images\n\n.. Links\n\n.. =============================================\n\nHybrid Plugins\n==============\n\nIn addition to plugins written in Python and C/C++, it is possible to have a hybrid plugin that is a combination of an existing plugin and configuration for that plugin. This is useful in a situation whereby there are multiple sensors or devices that you connect to Fledge that have common configuration. It allows devices to be added without repeating the common configuration.\n\nUsing our example of a *DHT11* sensor connected to a GPIO pin, if we wanted to create a new plugin for a *DHT11* that was always connected to pin 4 then we could do this by creating a JSON file as below that supplies a fixed default value for the GPIO pin.\n\n.. code-block:: JSON\n\n  {\n  \t\"description\" : \"A DHT11 sensor connected to GPIO pin 4\",\n  \t\"name\" : \"DHT11-4\",\n  \t\"connection\" : \"DHT11\",\n  \t\"defaults\" : {\n  \t\t\"pin\" : {\n  \t\t\t\"default\" : \"4\"\n  \t\t}\n  \t}\n  }\n\nThis creates a new hybrid plugin called DHT11-4 that is installed by copying this file into the plugins/south/DHT11-4 directory of your installation. Once installed it can be treated as any other south plugin within Fledge. The effect of this hybrid plugin is to load the *DHT11* plugin and always set the configuration parameter called \"pin\" to the value \"4\". The item \"pin\" will be hidden from the user in the Fledge GUI when they create the instance of the plugin. This allows for a simpler and more streamlined user experience when adding plugins with common configuration.\n\nThe items in the JSON file are;\n\n.. list-table::\n    :widths: 20 80\n    :header-rows: 1\n\n    * - Name\n      - Description\n    * - description\n      - A description of the hybrid plugin. 
This will appear to the right of the selection list in the Fledge user interface when the plugin is selected.\n    * - name\n      - The name of the plugin itself. This must match the filename of the JSON file and also the name of the directory the file is placed in.\n    * - connection\n      - The name of the underlying plugin that will be used as the basis for this hybrid plugin. This must be a C/C++ or Python plugin; it cannot be another hybrid plugin.\n    * - defaults\n      - The set of values to default in this hybrid plugin. These are configuration parameters of the underlying plugin that will be fixed in the hybrid plugin. Each hybrid plugin can have one or more values here.\n\nIt may not be difficult to enter the GPIO pin in each case in this example; where hybrid plugins become more useful is for plugins such as *Modbus*, where a complex map is required to be entered in a JSON document. By using a hybrid plugin we can define the map we need once and then add new sensors of the same type without having to repeat the map. An example of this would be the Flir AX8 camera, which requires a total of 176 Modbus registers to be mapped into 88 different values in an asset. A hybrid plugin *fledge-south-FlirAX8* defines that mapping once, and as a result adding a new Flir AX8 camera is as simple as selecting the FlirAX8 hybrid plugin and entering the IP address of the camera.\n"
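A hybrid plugin may fix more than one configuration item. As a sketch, the file below defines a hypothetical *DHT11-7* variant that reuses the *pin* and *pollInterval* items of the DHT11 plugin described elsewhere in this guide; it would be installed as plugins/south/DHT11-7/DHT11-7.json, with the filename and directory matching the "name" item.

```json
{
    "description" : "A DHT11 sensor connected to GPIO pin 7, polled every 5 seconds",
    "name" : "DHT11-7",
    "connection" : "DHT11",
    "defaults" : {
        "pin" : {
            "default" : "7"
        },
        "pollInterval" : {
            "default" : "5000"
        }
    }
}
```

Both defaulted items would then be hidden from the user when an instance of the hybrid plugin is created.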
  },
  {
    "path": "docs/plugin_developers_guide/03_01_DHT11.rst",
    "content": ".. Writing and Using Plugins describes how to implement a plugin for Fledge and how to use it\n.. https://docs.google.com/document/d/1IKGXLWbyN6a7vx8UO3uDbq5Df0VvE4oCQIULgZVZbjM\n\n.. Images\n\n.. |DHT11 in PI| image:: https://s3.amazonaws.com/fledge/readthedocs/images/06_dht11_tags_in_PI.jpg\n   :target: https://s3.amazonaws.com/fledge-iot/readthedocs/images/06_dht11_tags_in_PI.jpg\n\n.. Links\n.. |these INSTALLATION| raw:: html\n\n   <a href=\"../building_fledge/04_installation.html\">these</a>\n\n.. |Getting Started| raw:: html\n\n   <a href=\"../building_fledge/building_fledge.html\">here</a>\n\n.. Links in new tabs\n\n.. |DHT Repo| raw:: html\n\n   <a href=\"https://github.com/fledge-iot/fledge-south-dht11\" target=\"_blank\">Fledge DHT11 South Plugin</a>\n\n.. |ADAFruit| raw:: html\n\n   <a href=\"https://github.com/adafruit/Adafruit_Python_DHT\" target=\"_blank\">ADAFruit DHT Library</a>\n\n.. |DHT Description| raw:: html\n\n   <a href=\"https://www.adafruit.com/product/386\" target=\"_blank\">DHT11 Product Description</a>\n\n.. |DHT Manual| raw:: html\n\n   <a href=\"https://s3.amazonaws.com/fledge-iot/docs/v1/Common/plugins/South/DHT11/DHT11.pdf\" target=\"_blank\">DHT11 Product Manual</a>\n\n.. |DHT Resistor| raw:: html\n\n   <a href=\"https://s3.amazonaws.com/fledge-iot/docs/v1/Common/plugins/South/DHT11/DHT11-with-resistor.jpg\" target=\"_blank\">This picture</a>\n\n.. |DHT Wired| raw:: html\n\n   <a href=\"https://s3.amazonaws.com/fledge-iot/docs/v1/Common/plugins/South/DHT11/DHT11-RaspPI-wired.jpg\" target=\"_blank\">This picture</a>\n\n.. |DHT Pins| raw:: html\n\n   <a href=\"https://s3.amazonaws.com/fledge-iot/docs/v1/Common/plugins/South/DHT11/DHT11-RaspPI-pins.jpg\" target=\"_blank\">this</a>\n\n.. |GPIO| raw:: html\n\n   <a href=\"https://www.raspberrypi.org/documentation/usage/gpio/README.md\" target=\"_blank\">here</a>\n\n\n.. 
=============================================\n\n\nA South Plugin Example In Python: the DHT11 Sensor\n--------------------------------------------------\n\nLet's try to put all the information together and write a plugin. We can continue to use the example of an inexpensive sensor, the DHT11, used to measure temperature and humidity, directly wired to a Raspberry PI. This plugin is available on github, |DHT Repo|.\n\nFirst, here is a set of links where you can find more information regarding this sensor:\n\n- |DHT Description|\n- |DHT Manual|\n- |ADAFruit|\n\n\nThe Hardware\n~~~~~~~~~~~~\n\nThe DHT sensor is directly connected to a Raspberry PI 2 or 3. You may decide to buy a sensor and a resistor and solder them yourself, or you can buy a ready-made circuit that provides the correct output to wire to the Raspberry PI. |DHT Resistor| shows a DHT11 with resistor that you can buy online.\n\nThe sensor can be directly connected to the Raspberry PI GPIO (General Purpose Input/Output). An introduction to the GPIO and the pinset is available |GPIO|. In our case, you must connect the sensor on these pins:\n\n- **VCC** is connected to PIN #2 (5v Power)\n- **GND** is connected to PIN #6 (Ground)\n- **DATA** is connected to PIN #7 (BCM 4 - GPCLK0)\n\n|DHT Wired| shows the sensor wired to the Raspberry PI and |DHT Pins| is a zoom into the wires used.\n\n\nThe Software\n~~~~~~~~~~~~\n\nFor this plugin we use the ADAFruit Python Library (links to the GitHub repository are above). First, you must install the library (in future versions the library will be provided in a ready-made package):\n\n.. 
code-block:: console\n \n  $ git clone https://github.com/adafruit/Adafruit_Python_DHT.git\n  Cloning into 'Adafruit_Python_DHT'...\n  remote: Counting objects: 249, done.\n  remote: Total 249 (delta 0), reused 0 (delta 0), pack-reused 249\n  Receiving objects: 100% (249/249), 77.00 KiB | 0 bytes/s, done.\n  Resolving deltas: 100% (142/142), done.\n  $ cd Adafruit_Python_DHT\n  $ sudo apt-get install build-essential python-dev\n  Reading package lists... Done\n  Building dependency tree\n  Reading state information... Done\n  The following NEW packages will be installed:\n  build-essential python-dev\n  ...\n  $ sudo python3 setup.py install\n  running install\n  running bdist_egg\n  running egg_info\n  creating Adafruit_DHT.egg-info\n  ...\n  $\n\n\nThe Plugin\n~~~~~~~~~~\n\nThis is the code for the plugin:\n\n.. code-block:: python\n\n  # -*- coding: utf-8 -*-\n\n  # FLEDGE_BEGIN\n  # See: http://fledge-iot.readthedocs.io/\n  # FLEDGE_END\n\n  \"\"\" Plugin for a DHT11 temperature and humidity sensor attached directly\n      to the GPIO pins of a Raspberry Pi\n\n      This plugin uses the Adafruit DHT library, to install this perform\n      the following steps:\n\n          git clone https://github.com/adafruit/Adafruit_Python_DHT.git\n          cd Adafruit_Python_DHT\n          sudo apt-get install build-essential python-dev\n          sudo python setup.py install\n\n      To access the GPIO pins fledge must be able to access /dev/gpiomem,\n      the default access for this is owner and group read/write. 
Either\n      Fledge must be added to the group or the permissions altered to\n      allow Fledge access to the device.\n      \"\"\"\n\n\n  from datetime import datetime, timezone\n  import uuid\n\n  import Adafruit_DHT\n\n  from fledge.common import logger\n  from fledge.services.south import exceptions\n\n  __author__ = \"Mark Riddoch\"\n  __copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n  __license__ = \"Apache 2.0\"\n  __version__ = \"${VERSION}\"\n\n  _DEFAULT_CONFIG = {\n      'plugin': {\n          'description': 'Python module name of the plugin to load',\n          'type': 'string',\n          'default': 'dht11'\n      },\n      'pollInterval': {\n          'description': 'The interval between poll calls to the device poll routine expressed in milliseconds.',\n          'type': 'integer',\n          'default': '1000'\n      },\n      'gpiopin': {\n          'description': 'The GPIO pin into which the DHT11 data pin is connected',\n          'type': 'integer',\n          'default': '4'\n      }\n\n  }\n\n  _LOGGER = logger.setup(__name__)\n  \"\"\" Setup the access to the logging system of Fledge \"\"\"\n\n\n  def plugin_info():\n      \"\"\" Returns information about the plugin.\n\n      Args:\n      Returns:\n          dict: plugin information\n      Raises:\n      \"\"\"\n\n      return {\n          'name': 'DHT11 GPIO',\n          'version': '1.0',\n          'mode': 'poll',\n          'type': 'south',\n          'interface': '1.0',\n          'config': _DEFAULT_CONFIG\n      }\n\n\n  def plugin_init(config):\n      \"\"\" Initialise the plugin.\n\n      Args:\n          config: JSON configuration document for the device configuration category\n      Returns:\n          handle: JSON object to be used in future calls to the plugin\n      Raises:\n      \"\"\"\n\n      handle = config['gpiopin']['value']\n      return handle\n\n\n  def plugin_poll(handle):\n      \"\"\" Extracts data from the sensor and returns it in a JSON document as a Python dict.\n\n      Available 
for poll mode only.\n\n      Args:\n          handle: handle returned by the plugin initialisation call\n      Returns:\n          returns a sensor reading in a JSON document, as a Python dict, if it is available\n          None - If no reading is available\n      Raises:\n          DataRetrievalError\n      \"\"\"\n\n      try:\n          humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT11, handle)\n          if humidity is not None and temperature is not None:\n              time_stamp = str(datetime.now(tz=timezone.utc))\n              readings = {'temperature': temperature, 'humidity': humidity}\n              wrapper = {\n                      'asset':     'dht11',\n                      'timestamp': time_stamp,\n                      'key':       str(uuid.uuid4()),\n                      'readings':  readings\n              }\n              return wrapper\n          else:\n              return None\n\n      except Exception as ex:\n          raise exceptions.DataRetrievalError(ex)\n\n\n  def plugin_reconfigure(handle, new_config):\n      \"\"\" Reconfigures the plugin; it should be called when the configuration of the plugin is changed during the\n          operation of the device service.\n          The new configuration category should be passed.\n\n      Args:\n          handle: handle returned by the plugin initialisation call\n          new_config: JSON object representing the new configuration category for the category\n      Returns:\n          new_handle: new handle to be used in the future calls\n      Raises:\n      \"\"\"\n\n      new_handle = new_config['gpiopin']['value']\n      return new_handle\n\n\n  def plugin_shutdown(handle):\n      \"\"\" Shuts down the plugin, doing any required cleanup; to be called prior to the device service being shut down.\n\n      Args:\n          handle: handle returned by the plugin initialisation call\n      Returns:\n      Raises:\n      \"\"\"\n      pass\n\n\nBuilding Fledge and 
Adding the Plugin\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIf you have not built Fledge yet, follow the steps described |Getting Started|. After the build, you can optionally install Fledge following |these INSTALLATION| steps.\n\n\n- If you have started Fledge from the build directory, copy the structure of the *fledge-south-dht11/python/* directory into the *python* directory:\n\n.. code-block:: console\n\n  $ cd ~/Fledge\n  $ cp -R ~/fledge-south-dht11/python/fledge/plugins/south/dht11 python/fledge/plugins/south/\n  $\n\n- If you have installed Fledge by executing ``sudo make install``, copy the structure of the *fledge-south-dht11/python/* directory into the installed *python* directory:\n\n.. code-block:: console\n\n  $ sudo cp -R ~/fledge-south-dht11/python/fledge/plugins/south/dht11 /usr/local/fledge/python/fledge/plugins/south/\n  $\n\n.. note:: If you have installed Fledge using an alternative *DESTDIR*, remember to add the path to the destination directory to the ``cp`` command.\n\n\n- Add the service\n\n.. code-block:: console\n\n   $ curl -sX POST http://localhost:8081/fledge/service -d '{\"name\": \"dht11\", \"type\": \"south\", \"plugin\": \"dht11\", \"enabled\": true}'\n\n.. note:: Each plugin repository has its own Debian packaging script and documentation; installing from the package is the recommended approach, as the manual method above may require explicit installation of Linux and/or Python dependencies.\n\nUsing the Plugin\n~~~~~~~~~~~~~~~~\n\nOnce the south plugin is added as an enabled service, you are ready to use the DHT11 plugin.\n\n.. code-block:: console\n\n   $ curl -X GET http://localhost:8081/fledge/service | jq\n\nLet's see what we have collected so far:\n\n.. code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/asset | jq\n  [\n    {\n      \"count\": 158,\n      \"asset_code\": \"dht11\"\n    }\n  ]\n  $\n\nFinally, let's extract some values:\n\n.. 
code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/asset/dht11?limit=5 | jq\n  [\n    {\n      \"timestamp\": \"2017-12-30 14:41:39.672\",\n      \"reading\": {\n        \"temperature\": 19,\n        \"humidity\": 62\n      }\n    },\n    {\n      \"timestamp\": \"2017-12-30 14:41:35.615\",\n      \"reading\": {\n        \"temperature\": 19,\n        \"humidity\": 63\n      }\n    },\n    {\n      \"timestamp\": \"2017-12-30 14:41:34.087\",\n      \"reading\": {\n        \"temperature\": 19,\n        \"humidity\": 62\n      }\n    },\n    {\n      \"timestamp\": \"2017-12-30 14:41:32.557\",\n      \"reading\": {\n        \"temperature\": 19,\n        \"humidity\": 63\n      }\n    },\n    {\n      \"timestamp\": \"2017-12-30 14:41:31.028\",\n      \"reading\": {\n        \"temperature\": 19,\n        \"humidity\": 63\n      }\n    }\n  ]\n  $\n\n\nClearly we will not see many changes in temperature or humidity, unless we place our thumb on the sensor or we blow warm breath on it :-)\n\n.. 
code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/asset/dht11?limit=5 | jq\n  [\n    {\n      \"timestamp\": \"2017-12-30 14:43:16.787\",\n      \"reading\": {\n        \"temperature\": 25,\n        \"humidity\": 95\n      }\n    },\n    {\n      \"timestamp\": \"2017-12-30 14:43:15.258\",\n      \"reading\": {\n        \"temperature\": 25,\n        \"humidity\": 95\n      }\n    },\n    {\n      \"timestamp\": \"2017-12-30 14:43:13.729\",\n      \"reading\": {\n        \"temperature\": 24,\n        \"humidity\": 95\n      }\n    },\n    {\n      \"timestamp\": \"2017-12-30 14:43:12.201\",\n      \"reading\": {\n        \"temperature\": 24,\n        \"humidity\": 95\n      }\n    },\n    {\n      \"timestamp\": \"2017-12-30 14:43:05.616\",\n      \"reading\": {\n        \"temperature\": 22,\n        \"humidity\": 95\n      }\n    }\n  ]\n  $\n\nNeedless to say, the North plugin will send the buffered data to the PI system using the OMF plugin or any other north system using the appropriate north plugin.\n\n|DHT11 in PI|\n\n\n"
  },
  {
    "path": "docs/plugin_developers_guide/03_02_Control.rst",
    "content": "Set Point Control\n-----------------\n\nSouth plugins can also be used to exert control on the underlying device to which they are connected. This is not intended for use as a substitute for real time control systems, but rather as a mechanism to make non-time critical changes to a device or to trigger an operation on the device.\n\nTo make a south plugin support control features there are two steps that need to be taken\n\n  - Tag the plugin as supporting control\n\n  - Add the entry points for control\n\n\nEnable Control\n~~~~~~~~~~~~~~\n\nA plugin enables control features by means of the flags in the plugin information data structure which is returned by the *plugin_info* entry point of the plugin. The flag value *SP_CONTROL* should be added to the flags of the plugin.\n\n.. code-block:: console\n\n      /**\n       * The plugin information structure\n       */\n      static PLUGIN_INFORMATION info = {\n              PLUGIN_NAME,              // Name\n              VERSION,                  // Version\n              SP_CONTROL,   \t    // Flags - add control\n              PLUGIN_TYPE_SOUTH,        // Type\n              \"1.0.0\",                  // Interface version\n              CONFIG                    // Default configuration\n      };\n\nAdding this flag will cause the south service to do a number of things when it loads the plugin;\n\n  - The south service will attempt to resolve the two control entry points.\n\n  - A toggle will be added to the advanced configuration category of the service that will permit the disabling of control services.\n\n  - A security category will be added to the south service that contains the access control lists and permissions associated with the service.\n\nControl Entry Points\n~~~~~~~~~~~~~~~~~~~~\n\nTwo entry points are supported for control operations in the south plugin\n\n  - **plugin_write**: which is used to set the value of a parameter within the plugin or device\n\n  - **plugin_operation**: 
which is used to perform an operation on the plugin or device\n\nThe south plugin can support one or both of these entry points as appropriate for the plugin.\n\nWrite Entry Point\n^^^^^^^^^^^^^^^^^\n\nThe write entry point is used to set data in the plugin or write data into the device.\n\nThe plugin write entry point is defined as follows\n\n.. code-block:: C\n\n     bool plugin_write(PLUGIN_HANDLE *handle, string& name, string& value)\n\nWhere the parameters are;\n\n  - **handle** the handle of the plugin instance\n\n  - **name** the name of the item to be changed\n\n  - **value** a string presentation of the new value to assign to the item\n\nThe return value defines if the write was successful or not. True is returned for a successful write.\n\n.. code-block:: C\n\n  bool plugin_write(PLUGIN_HANDLE *handle, string& name, string& value)\n  {\n  Random *random = (Random *)handle;\n\n  \treturn random->write(name, value);\n  }\n\nIn this case the main logic of the write operation is implemented in a class that contains all the plugin logic. Note that the assumption here, and a design pattern often used by plugin writers, is that the *PLUGIN_HANDLE* is actually a pointer to a C++ class instance.\n\nIn this case the implementation in the plugin class is as follows:\n\n.. 
code-block:: C\n\n  bool Random::write(string& name, string& value)\n  {\n        if (name.compare(\"mode\") == 0)\n        {\n                if (value.compare(\"relative\") == 0)\n                {\n                        m_mode = RELATIVE_MODE;\n                }\n                else if (value.compare(\"absolute\") == 0)\n                {\n                        m_mode = ABSOLUTE_MODE;\n                }\n                else\n                {\n                        Logger::getLogger()->error(\"Unknown mode requested '%s' ignored.\", value.c_str());\n                        return false;\n                }\n        }\n        else\n        {\n                Logger::getLogger()->error(\"Unknown control item '%s' ignored.\", name.c_str());\n                return false;\n        }\n        return true;\n  }\n\nIn this case the code is relatively simple as we assume there is a single control parameter that can be written, the mode of operation. We look for the known name and if a different name is passed an error is logged and false is returned. If the correct name is passed in we then check the value and take the appropriate action. If the value is not a recognized value then an error is logged and we again return false.\n\nIn this case we are merely setting a value within the plugin, this could equally well be done via configuration and would in that case be persisted between restarts. Normally control would not be used for this, but rather for making a change to the connected device itself, such as changing a PLC register value. This is simply an example to demonstrate the mechanism.\n\nOperation Entry Point\n^^^^^^^^^^^^^^^^^^^^^\n\nThe plugin will support an operation entry point. This will execute the given operation synchronously. It is expected that this operation entry point will be called using a separate thread, therefore the plugin should implement operations in a thread safe manner.\n\nThe plugin operation entry point is defined as follows\n\n.. 
code-block:: C\n\n     bool plugin_operation(PLUGIN_HANDLE *handle, string& operation, int count, PLUGIN_PARAMETER **params)\n\nWhere the parameters are;\n\n  - **handle** the handle of the plugin instance\n\n  - **operation** the name of the operation to be executed\n\n  - **count** the number of parameters\n\n  - **params** a set of name/value pairs that are passed to the operation\n\nThe *operation* parameter should be used by the plugin to determine which operation is to be performed, that operation may also be passed a number of parameters. The count of these parameters is passed to the plugin in the *count* argument and the actual parameters are passed in an array of key/value pairs as strings.\n\nThe return from the call is a boolean result of the operation; a failure of the operation or a call to an unrecognized operation should be indicated by returning a false value. If the operation succeeds a value of true should be returned.\n\nThe following example shows the implementation of the plugin operation entry point.\n\n.. code-block:: C\n\n  bool plugin_operation(PLUGIN_HANDLE *handle, string& operation, int count, PLUGIN_PARAMETER **params)\n  {\n  Random *random = (Random *)handle;\n\n  \treturn random->operation(operation, count, params);\n  }\n\nIn this case the main logic of the operation is implemented in a class that contains all the plugin logic. Note that the assumption here, and a design pattern often used by plugin writers, is that the *PLUGIN_HANDLE* is actually a pointer to a C++ class instance.\n\nIn this case the implementation in the plugin class is as follows:\n\n.. code-block:: C\n\n  /**\n   * SetPoint operation. 
We support reseeding the random number generator\n   */\n  bool Random::operation(const std::string& operation, int count, PLUGIN_PARAMETER **params)\n  {\n          if (operation.compare(\"seed\") == 0)\n          {\n                  if (count)\n                  {\n                          if (params[0]->name.compare(\"seed\") == 0)\n                          {\n                                  long seed = strtol(params[0]->value.c_str(), NULL, 10);\n                                  srand(seed);\n                          }\n                          else\n                          {\n                                  return false;\n                          }\n                  }\n                  else\n                  {\n                          srand(time(0));\n                  }\n                  Logger::getLogger()->info(\"Reseeded random number generator\");\n                  return true;\n          }\n          Logger::getLogger()->error(\"Unrecognised operation %s\", operation.c_str());\n          return false;\n  }\n\nIn this example, the operation method checks the name of the operation to perform, only a single operation is supported by this plugin. If the operation name is not recognized the method will log an error and return false. If the operation is recognized it will check for any arguments passed in and, if present, retrieve and use them. In this case an optional *seed* argument may be passed.\n\nThere is no actual machine connected here, therefore the operation occurs within the plugin. In the case of a real machine the operation would most likely cause an action on a machine, for example a request to the machine to re-calibrate itself.\n"
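The thread-safety requirement noted above is easy to overlook, since *plugin_operation* may be invoked from a separate thread while the poll thread is using the same state. The sketch below shows one way an operation implementation might serialise access to shared state with a `std::mutex`; the `SafeRandom` class and its members are illustrative stand-ins, not part of the Fledge plugin API.

```cpp
#include <cassert>
#include <cstdlib>
#include <mutex>
#include <string>

// Illustrative sketch only: shows an operation implementation that
// serialises access to shared state, since plugin_operation may be
// called from a thread other than the poll thread.
class SafeRandom {
public:
    // Perform a named operation; a lock guards the shared generator state.
    bool operation(const std::string& name, long seed)
    {
        std::lock_guard<std::mutex> guard(m_mutex);
        if (name == "seed")
        {
            std::srand(static_cast<unsigned int>(seed));
            m_seeded = true;
            return true;        // Operation succeeded
        }
        return false;           // Unrecognised operation
    }

    // Query shared state under the same lock
    bool seeded()
    {
        std::lock_guard<std::mutex> guard(m_mutex);
        return m_seeded;
    }

private:
    std::mutex m_mutex;         // Protects m_seeded and the generator
    bool m_seeded = false;
};
```

The `std::lock_guard` releases the mutex automatically when the method returns, following the RAII pattern, so every exit path (including the error return) leaves the object in a consistent state.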
  },
  {
    "path": "docs/plugin_developers_guide/03_02_DHT11_C.rst",
    "content": ".. Writing and Using Plugins describes how to implement a plugin for Fledge and how to use it\n.. https://docs.google.com/document/d/1IKGXLWbyN6a7vx8UO3uDbq5Df0VvE4oCQIULgZVZbjM\n\n.. Links\n.. |these INSTALLATION| raw:: html\n\n   <a href=\"../building_fledge/04_installation.html\">these</a>\n\n.. |Getting Started| raw:: html\n\n   <a href=\"../building_fledge/building_fledge.html\">here</a>\n\n.. Links in new tabs\n\n.. |fledge DHT| raw:: html\n\n   <a href=\"https://github.com/fledge-iot/fledge-south-dht\" target=\"_blank\">https://github.com/fledge-iot/fledge-south-dht</a>\n   <br />\n\n\n\nA South Plugin Example In C/C++: the DHT11 Sensor\n-------------------------------------------------\n\nUsing the same example as before, the DHT11 temperature and humidity sensor, let's look at how to create the plugin in C/C++.\n\nThe Software\n~~~~~~~~~~~~\n\nFor this plugin we use the wiringpi C library to connect to the hardware of the Raspberry Pi\n\n.. code-block:: console\n \n  $ sudo apt-get install wiringpi\n  Reading package lists... Done\n  Building dependency tree\n  Reading state information... Done\n  The following NEW packages will be installed:\n  wiringpi\n  ...\n  $\n\n\nThe Plugin\n~~~~~~~~~~\n\nThis is the code for the plugin.cpp file that provides the plugin API:\n\n.. 
code-block:: C\n\n  /*\n   * Fledge south plugin.\n   *\n   * Copyright (c) 2018 OSisoft, LLC\n   *\n   * Released under the Apache 2.0 Licence\n   *\n   * Author: Amandeep Singh Arora\n   */\n  #include <dht11.h>\n  #include <plugin_api.h>\n  #include <stdio.h>\n  #include <stdlib.h>\n  #include <strings.h>\n  #include <string>\n  #include <logger.h>\n  #include <plugin_exception.h>\n  #include <config_category.h>\n  #include <rapidjson/document.h>\n  #include <version.h>\n\n  using namespace std;\n  #define PLUGIN_NAME \"dht11_V2\"\n\n  /**\n   * Default configuration\n   */\n  const static char *default_config = QUOTE({\n                  \"plugin\" : { \n                          \"description\" : \"DHT11 C south plugin\",\n                          \"type\" : \"string\",\n                          \"default\" : PLUGIN_NAME,\n                          \"readonly\": \"true\"\n                          },\n                  \"asset\" : {\n                          \"description\" : \"Asset name\",\n                          \"type\" : \"string\",\n                          \"default\" : \"dht11\",\n                          \"order\": \"1\",\n                          \"displayName\": \"Asset Name\",\n                          \"mandatory\" : \"true\"\n                          },\n                  \"pin\" : {\n                          \"description\" : \"Rpi pin to which DHT11 is attached\",\n                          \"type\" : \"integer\",\n                          \"default\" : \"7\",\n                          \"displayName\": \"Rpi Pin\"\n                          }\n                  });\n\n\n  /**\n   * The DHT11 plugin interface\n   */\n  extern \"C\" {\n\n  /**\n   * The plugin information structure\n   */\n  static PLUGIN_INFORMATION info = {\n          PLUGIN_NAME,              // Name\n          VERSION,                  // Version\n          0,                        // Flags\n          PLUGIN_TYPE_SOUTH,        // Type\n          \"1.0.0\",      
            // Interface version\n          default_config            // Default configuration\n  };\n\n  /**\n   * Return the information about this plugin\n   */\n  PLUGIN_INFORMATION *plugin_info()\n  {\n          return &info;\n  }\n\n  /**\n   * Initialise the plugin, called to get the plugin handle\n   */\n  PLUGIN_HANDLE plugin_init(ConfigCategory *config)\n  {\n          unsigned int pin = 7;     // Default pin if not configured\n\n          if (config->itemExists(\"pin\"))\n          {\n                  pin = stoul(config->getValue(\"pin\"), nullptr, 0);\n          }\n\n          DHT11 *dht11 = new DHT11(pin);\n\n          if (config->itemExists(\"asset\"))\n                  dht11->setAssetName(config->getValue(\"asset\"));\n          else\n                  dht11->setAssetName(\"dht11\");\n\n          Logger::getLogger()->info(\"m_assetName set to %s\", dht11->getAssetName());\n\n          return (PLUGIN_HANDLE)dht11;\n  }\n\n  /**\n   * Poll for a plugin reading\n   */\n  Reading plugin_poll(PLUGIN_HANDLE *handle)\n  {\n          DHT11 *dht11 = (DHT11*)handle;\n          return dht11->takeReading();\n  }\n\n  /**\n   * Reconfigure the plugin\n   */\n  void plugin_reconfigure(PLUGIN_HANDLE *handle, string& newConfig)\n  {\n  ConfigCategory\tconf(\"dht\", newConfig);\n  DHT11 *dht11 = (DHT11*)*handle;\n\n          if (conf.itemExists(\"asset\"))\n                  dht11->setAssetName(conf.getValue(\"asset\"));\n          if (conf.itemExists(\"pin\"))\n          {\n                  unsigned int pin = stoul(conf.getValue(\"pin\"), nullptr, 0);\n                  dht11->setPin(pin);\n          }\n  }\n\n  /**\n   * Shutdown the plugin\n   */\n  void plugin_shutdown(PLUGIN_HANDLE *handle)\n  {\n          DHT11 *dht11 = (DHT11*)handle;\n          delete dht11;\n  }\n  };\n\nThe full source code, including the *DHT11* class can be found in GitHub |fledge DHT|\n\nBuilding Fledge and Adding the Plugin\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIf you have not built Fledge yet, follow the steps 
described |Getting Started|. After the build, you can optionally install Fledge following |these INSTALLATION| steps.\n\n\n- Clone the *fledge-south-dht* repository\n\n.. code-block:: console\n\n  $ git clone https://github.com/fledge-iot/fledge-south-dht.git\n  ...\n  $\n\n- Set the environment variable FLEDGE_ROOT to the directory in which you built Fledge\n\n.. code-block:: console\n\n  $ export FLEDGE_ROOT=~/fledge\n  $\n\n- Go to the location in which you cloned the fledge-south-dht repository and create a build directory and run cmake in that directory\n\n.. code-block:: console\n\n  $ cd ~/fledge-south-dht\n  $ mkdir build\n  $ cd build\n  $ cmake ..\n  ...\n  $\n\n- Now make the plugin\n\n.. code-block:: console\n\n  $ make\n  $\n\n- If you have started Fledge from the build directory, copy the plugin into the destination directory\n\n.. code-block:: console\n\n  $ mkdir -p $FLEDGE_ROOT/plugins/south/dht\n  $ cp libdht.so $FLEDGE_ROOT/plugins/south/dht\n  $\n\n- If you have installed Fledge by executing ``sudo make install``, copy the plugin into the destination directory\n\n.. code-block:: console\n\n  $ sudo mkdir -p /usr/local/fledge/plugins/south/dht\n  $ sudo cp libdht.so /usr/local/fledge/plugins/south/dht\n  $\n\n.. note:: If you have installed Fledge using an alternative *DESTDIR*, remember to add the path to the destination directory to the ``cp`` command.\n\n\n- Add service\n\n.. code-block:: console\n\n   $ curl -sX POST http://localhost:8081/fledge/service -d '{\"name\": \"dht\", \"type\": \"south\", \"plugin\": \"dht\", \"enabled\": true}'\n\nYou may now use the C/C++ plugin in exactly the same way as you used a Python plugin earlier.\n"
  },
  {
    "path": "docs/plugin_developers_guide/03_02_south_python_Control.rst",
    "content": "Set Point Control\n-----------------\n\nSouth plugins can also be used to exert control on the underlying device to which they are connected. This is not intended for use as a substitute for real time control systems, but rather as a mechanism to make non-time critical changes to a device or to trigger an operation on the device.\n\nTo make a south plugin support control features there are two steps that need to be taken\n\n  - Tag the plugin as supporting control\n\n  - Add the entry points for control\n\n\nEnable Control\n~~~~~~~~~~~~~~\n\nA plugin enables control features by means of the mode field in the plugin information dict which is returned by the *plugin_info* entry point of the plugin. The flag value *control* should be added to the mode field of the plugin. Multiple flag values are separated by the pipe symbol '|'.\n\n.. code-block:: console\n\n    # plugin information dict\n    {\n        'name': 'Sinusoid Poll plugin',\n        'version': '1.9.2',\n        'mode': 'poll|control',\n        'type': 'south',\n        'interface': '1.0',\n        'config': _DEFAULT_CONFIG\n    }\n\n\nAdding this flag will cause the south service to do a number of things when it loads the plugin;\n\n  - The south service will attempt to resolve the two control entry points.\n\n  - A toggle will be added to the advanced configuration category of the service that will permit the disabling of control services.\n\n  - A security category will be added to the south service that contains the access control lists and permissions associated with the service.\n\nControl Entry Points\n~~~~~~~~~~~~~~~~~~~~\n\nTwo entry points are supported for control operations in the south plugin\n\n  - **plugin_write**: which is used to set the value of a parameter within the plugin or device\n\n  - **plugin_operation**: which is used to perform an operation on the plugin or device\n\nThe south plugin can support one or both of these entry points as appropriate for the 
plugin.\n\nWrite Entry Point\n^^^^^^^^^^^^^^^^^\n\nThe write entry point is used to set data in the plugin or write data into the device.\n\nThe plugin write entry point is defined as follows\n\n.. code-block:: python\n\n     def plugin_write(handle, name, value)\n\nWhere the parameters are;\n\n  - **handle** the handle of the plugin instance\n\n  - **name** the name of the item to be changed\n\n  - **value** a string presentation of the new value to assign to the item\n\nThe return value defines if the write was successful or not. True is returned for a successful write.\n\n.. code-block:: python\n\n  def plugin_write(handle, name, value):\n    \"\"\" Setpoint write operation\n\n    Args:\n        handle: handle returned by the plugin initialisation call\n        name: Name of parameter to write\n        value: Value to be written to that parameter\n    Returns:\n        bool: Result of the write operation\n    \"\"\"\n    _LOGGER.info(\"plugin_write(): name={}, value={}\".format(name, value))\n    return True\n\n\nIn this case we are merely logging the parameter name and the value to be set for this parameter. Normally control would be used for making a change to the connected device itself, such as changing a PLC register value. This is simply an example to demonstrate the API.\n\nOperation Entry Point\n^^^^^^^^^^^^^^^^^^^^^\n\nThe plugin will support an operation entry point. This will execute the given operation synchronously. It is expected that this operation entry point will be called using a separate thread, therefore the plugin should implement operations in a thread safe manner.\n\nThe plugin operation entry point is defined as follows\n\n.. 
code-block:: python\n\n     def plugin_operation(handle, operation, params)\n\nWhere the parameters are;\n\n  - **handle** the handle of the plugin instance\n\n  - **operation** the name of the operation to be executed\n\n  - **params** a list of name/value tuples that are passed to the operation\n\nThe *operation* parameter should be used by the plugin to determine which operation is to be performed. The actual parameters are passed in a list of key/value tuples as strings.\n\nThe return from the call is a boolean result of the operation; a failure of the operation or a call to an unrecognized operation should be indicated by returning a false value. If the operation succeeds a value of true should be returned.\n\nThe following example shows the implementation of the plugin operation entry point.\n\n.. code-block:: python\n\n  def plugin_operation(handle, operation, params):\n    \"\"\" Setpoint control operation\n\n    Args:\n        handle: handle returned by the plugin initialisation call\n        operation: Name of operation\n        params: Parameter list\n    Returns:\n        bool: Result of the operation\n    \"\"\"\n    _LOGGER.info(\"plugin_operation(): operation={}, params={}\".format(operation, params))\n    return True\n\nIn the case of a real machine the operation would most likely cause an action on a machine, for example a request to the machine to re-calibrate itself. The above example is just a demonstration of the API.\n"
  },
  {
    "path": "docs/plugin_developers_guide/03_south_C_plugins.rst",
"content": ".. Writing and Using Plugins describes how to implement a plugin for Fledge and how to use it\n.. https://docs.google.com/document/d/1IKGXLWbyN6a7vx8UO3uDbq5Df0VvE4oCQIULgZVZbjM\n\n.. |br| raw:: html\n\n   <br />\n\n.. Images\n\n.. Links\n.. |C++ Support Classes| raw:: html\n\n   <a href=\"035_CPP.html\">C++ Support Classes</a>\n\n.. =============================================\n\n\nSouth Plugins in C\n==================\n\nSouth plugins written in C/C++ are no different in use to those written in Python, it is merely a case that they are implemented in a different language. The same options of polled or asynchronous methods still exist and the end user of Fledge is not aware in which language the plugin has been written.\n\n\nPolled Mode\n-----------\n\nPolled mode is the simplest form of South plugin that can be written, a poll routine is called at an interval defined in the plugin advanced configuration. The South service determines the type of the plugin by examining the mode property in the information the plugin returns from the *plugin_info* call.\n\n\nPlugin Poll\n~~~~~~~~~~~\n\nThe plugin *poll* method is called periodically to collect the readings from a poll mode sensor. 
As with all other calls the argument passed to the method is the handle returned by the *plugin_init* call, the return of the method should be a *Reading* instance that contains the data read.\n\nThe *Reading* class consists of\n\n+---------------+---------------------------------------------------------+\n| Property      | Description                                             |\n+===============+=========================================================+\n| assetName     | The asset key of the sensor device that is being read   |\n+---------------+---------------------------------------------------------+\n| userTimestamp | A timestamp for the reading data                        |\n+---------------+---------------------------------------------------------+\n| datapoints    | The reading data itself as a set of datapoint instances |\n+---------------+---------------------------------------------------------+\n\nMore detail regarding the *Reading* class can be found in the section |C++ Support Classes|.\n\nIt is important that the *poll* method does not block as this will prevent the proper operation of the South microservice.  Using the example of our simple DHT11 device attached to a GPIO pin, the *poll* routine could be:\n\n.. code-block:: C\n\n  /**\n   * Poll for a plugin reading\n   */\n  Reading plugin_poll(PLUGIN_HANDLE *handle)\n  {\n          DHT11 *dht11 = (DHT11*)handle;\n          return dht11->takeReading();\n  }\n\nWhere our *DHT11* class has a method *takeReading* as follows\n\n.. code-block:: C\n\n  /**\n   * Take reading from sensor\n   *\n   * @param firstReading   This flag indicates whether this is the first reading to be taken from sensor,\n   *                       if so get it reliably even if it takes multiple retries. 
Subsequently (firstReading=false),\n   *                       if reading from sensor fails, last good reading is returned.\n   */\n  Reading DHT11::takeReading(bool firstReading)\n  {\n          static uint8_t sensorData[4] = {0,0,0,0};\n\n          bool valid = false;\n          unsigned int count=0;\n          do {\n                  valid = readSensorData(sensorData);\n                  count++;\n          } while(!valid && firstReading && count < MAX_SENSOR_READ_RETRIES);\n\n          if (firstReading && !valid)\n                  Logger::getLogger()->error(\"Unable to get initial valid reading from DHT11 sensor connected to pin %d even after %d tries\", m_pin, MAX_SENSOR_READ_RETRIES);\n\n          vector<Datapoint *> vec;\n\n          ostringstream tmp;\n          tmp << ((unsigned int)sensorData[0]) << \".\" << ((unsigned int)sensorData[1]);\n          DatapointValue dpv1(stod(tmp.str()));\n          vec.push_back(new Datapoint(\"Humidity\", dpv1));\n\n          ostringstream tmp2;\n          tmp2 << ((unsigned int)sensorData[2]) << \".\" <<  ((unsigned int)sensorData[3]);\n          DatapointValue dpv2(stod(tmp2.str()));\n          vec.push_back(new Datapoint (\"Temperature\", dpv2));\n\n          return Reading(m_assetName, vec);\n  }\n\nWe are creating two *DatapointValues* for the Humidity and Temperature values returned by reading the DHT11 sensor.\n\nPlugin Poll Returning Multiple Values\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIt is possible in a C/C++ plugin to have a plugin that returns multiple readings in a single call to a poll routine. This is done by setting the interface version to 2.0.0 rather than 1.0.0. In this interface version the *plugin_poll* call returns a vector of *Reading* rather than a single *Reading*.\n\n.. 
code-block:: C\n\n  /**\n   * Poll for a plugin reading\n   */\n  std::vector<Reading *> *plugin_poll(PLUGIN_HANDLE *handle)\n  {\n  Modbus *modbus = (Modbus *)handle;\n\n          if (!handle)\n                  throw runtime_error(\"Bad plugin handle\");\n          return modbus->takeReading();\n  }\n\nAsync IO Mode\n-------------\n\nIn asyncio mode the plugin either runs a separate thread or uses some incoming event from a device or callback mechanism to trigger sending data to Fledge. The asynchronous mode uses two additional entry points to the plugin, one to register a callback on which the plugin sends data, *plugin_register_ingest*, and another to start the asynchronous behavior, *plugin_start*.\n\nPlugin Register Ingest\n~~~~~~~~~~~~~~~~~~~~~~\n\nThe *plugin_register_ingest* call is used to allow the south service to pass a callback function to the plugin that the plugin uses to send data to the service every time the plugin has some new data.\n\n.. code-block:: C\n\n  /**\n   * Register ingest callback\n   */\n  void plugin_register_ingest(PLUGIN_HANDLE *handle, INGEST_CB cb, void *data)\n  {\n  MyPluginClass *plugin = (MyPluginClass *)handle;\n\n          if (!handle)\n                  throw runtime_error(\"Bad plugin handle\");\n          plugin->registerIngest(data, cb);\n  }\n\nThe plugin should store the callback function pointer and the data associated with the callback such that it can use that information to pass a reading to the south service. The following code snippets show how a plugin class might store the callback and data and then use it to send readings into Fledge at a later stage.\n\n.. 
code-block:: C\n\n  /**\n   * Record the ingest callback function and data in member variables\n   *\n   * @param data  The Ingest function data\n   * @param cb    The callback function to call\n   */\n  void MyPluginClass::registerIngest(void *data, INGEST_CB cb)\n  {\n          m_ingest = cb;\n          m_data = data;\n  }\n\n  /**\n   * Called when data is available to send to the south service\n   *\n   * @param reading        The reading we have ingested into the plugin\n   */\n  void MyPluginClass::ingest(Reading& reading)\n  {\n          (*m_ingest)(m_data, reading);\n  }\n\nIn version 2.0.0 onward of the south plugin interface, the callback method has been updated to pass multiple readings.\n\nThe *plugin_register_ingest* call in versions 2.0.0 and later of the interface would be rewritten as follows\n\n.. code-block:: C\n\n  /**\n   * Register ingest callback\n   *\n   * @param handle      The plugin handle\n   * @param cb          The callback function to be called with the readings\n   * @param data        Data to be passed to the callback function\n   */\n  void plugin_register_ingest(PLUGIN_HANDLE *handle, INGEST_CB2 cb, void *data)\n  {\n  MyPluginClass *plugin = (MyPluginClass *)handle;\n\n          if (!handle)\n                  throw runtime_error(\"Bad plugin handle\");\n          plugin->registerIngest(data, cb);\n  }\n\nThe plugin should store the callback function pointer and the data associated with the callback such that it can use that information to pass a reading to the south service in the same way as before. The following code snippets show how a plugin class might store the callback and data and then use it to send readings into Fledge at a later stage.\n\n.. 
code-block:: C\n\n  /**\n   * Record the ingest callback function and data in member variables\n   *\n   * @param data  The Ingest function data\n   * @param cb    The callback function to call\n   */\n  void MyPluginClass::registerIngest(void *data, INGEST_CB2 cb)\n  {\n          m_ingest = cb;\n          m_data = data;\n  }\n\n  /**\n   * Called when data is available to send to the south service\n   *\n   * @param readings        A vector of readings to be processed\n   */\n  void MyPluginClass::ingest(std::vector<Reading *>& readings)\n  {\n          (*m_ingest)(m_data, readings);\n  }\n\nThe ownership of the readings in the vector is transferred to the south service by the call to the ingest callback; the south plugin should not delete them.\n\nPlugin Start\n~~~~~~~~~~~~\n\nThe *plugin_start* method, as with other plugin calls, is called with the plugin handle data that was returned from the *plugin_init* call. The *plugin_start* call will only be called once for a plugin, it is the responsibility of *plugin_start* to take whatever action is required in the plugin in order to start the asynchronous actions of the plugin. This might be to start a thread, register an endpoint for a remote connection or call an entry point in a third party library to start asynchronous processing.\n\n.. code-block:: C\n\n  /**     \n   * Start the Async handling for the plugin\n   *\n   * @param handle      The plugin instance handle\n   */\n  void plugin_start(PLUGIN_HANDLE *handle)\n  {\n  MyPluginClass *plugin = (MyPluginClass *)handle;\n\n          if (!handle)\n                  return;\n          plugin->start();\n  }\n\n  /**\n   * Start the asynchronous processing thread\n   */\n  void MyPluginClass::start()\n  {\n          m_running = true;\n          m_thread = new thread(threadWrapper, this);\n  }\n\n.. include:: 03_02_Control.rst\n\n.. include:: 03_02_DHT11_C.rst\n"
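The *threadWrapper* function passed to the thread constructor above is not shown in the guide. The sketch below fills in one plausible shape of the whole start/stop pattern; `MyPluginClass` here is a self-contained stand-in with a simple integer callback in place of the real ingest callback, and the `stop` method mirrors what a *plugin_shutdown* implementation would typically do. None of these names are part of the Fledge API.

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <functional>
#include <thread>

// Hypothetical sketch of the asynchronous-thread pattern described above.
class MyPluginClass {
public:
    using IngestCB = std::function<void(int)>;

    // Store the callback, mirroring plugin_register_ingest
    void registerIngest(IngestCB cb) { m_ingest = cb; }

    // Start the background thread, mirroring plugin_start
    void start()
    {
        m_running = true;
        m_thread = std::thread(threadWrapper, this);
    }

    // Ask the thread to exit and join it, as plugin_shutdown would
    void stop()
    {
        m_running = false;
        if (m_thread.joinable())
            m_thread.join();
    }

private:
    // Static wrapper passed to the std::thread constructor
    static void threadWrapper(MyPluginClass *plugin) { plugin->run(); }

    // The asynchronous loop: produce data and hand it to the callback
    void run()
    {
        int value = 0;
        while (m_running)
        {
            if (m_ingest)
                m_ingest(value++);
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }

    std::atomic<bool> m_running{false};  // Flag checked by the loop
    std::thread m_thread;                // The asynchronous thread
    IngestCB m_ingest;                   // Registered ingest callback
};
```

Using `std::atomic<bool>` for the running flag means `stop` can safely signal the loop from another thread without additional locking; joining the thread before destruction avoids the `std::terminate` that a still-joinable `std::thread` would trigger.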
  },
  {
    "path": "docs/plugin_developers_guide/03_south_plugins.rst",
"content": ".. Writing and Using Plugins describes how to implement a plugin for Fledge and how to use it\n.. https://docs.google.com/document/d/1IKGXLWbyN6a7vx8UO3uDbq5Df0VvE4oCQIULgZVZbjM\n\n.. |br| raw:: html\n\n   <br />\n\n.. Images\n\n.. Links\n\n.. =============================================\n\n\nSouth Plugins\n=============\n\nSouth plugins are used to communicate with sensors and actuators, there are two modes of plugin operation; *asyncio* and *polled*.\n\n\nPolled Mode\n-----------\n\nPolled mode is the simplest form of South plugin that can be written, a poll routine is called at an interval defined in the plugin configuration. The South service determines the type of the plugin by examining the mode property in the information the plugin returns from the *plugin_info* call.\n\n\nPlugin Poll\n~~~~~~~~~~~\n\nThe plugin *poll* method is called periodically to collect the readings from a poll mode sensor. As with all other calls the argument passed to the method is the handle returned by the initialization call, the return of the method should be the JSON payload of the readings to return.\n\nThe JSON payload returned, as a Python dictionary, should contain the properties; asset, timestamp, key and readings.\n\n+-----------+-------------------------------------------------------+\n| Property  | Description                                           |\n+===========+=======================================================+\n| asset     | The asset key of the sensor device that is being read |\n+-----------+-------------------------------------------------------+\n| timestamp | A timestamp for the reading data                      |\n+-----------+-------------------------------------------------------+\n| key       | A UUID which is the unique key of this reading        |\n+-----------+-------------------------------------------------------+\n| readings  | The reading data itself as a JSON object              
|\n+-----------+-------------------------------------------------------+\n\nIt is important that the *poll* method does not block as this will prevent the proper operation of the South microservice.\nUsing the example of our simple DHT11 device attached to a GPIO pin, the *poll* routine could be:\n\n.. code-block:: python\n\n  def plugin_poll(handle):\n      \"\"\" Extracts data from the sensor and returns it in a JSON document as a Python dict.\n\n      Available for poll mode only.\n\n      Args:\n          handle: handle returned by the plugin initialisation call\n      Returns:\n          returns a sensor reading in a JSON document, as a Python dict, if it is available\n          None - If no reading is available\n      Raises:\n          DataRetrievalError\n      \"\"\"\n\n      try:\n          humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT11, handle)\n          if humidity is not None and temperature is not None:\n              time_stamp = str(datetime.now(tz=timezone.utc))\n              readings =  { 'temperature': temperature , 'humidity' : humidity }\n              wrapper = {\n                      'asset': 'dht11',\n                      'timestamp': time_stamp,\n                      'key': str(uuid.uuid4()),\n                      'readings': readings\n              }\n              return wrapper\n          else:\n              return None\n\n      except Exception as ex:\n          raise exceptions.DataRetrievalError(ex)\n\n      return None\n\n\nAsync IO Mode\n-------------\n\nIn asyncio mode the plugin inserts itself into the event processing loop of the South Service itself. This is a more complex mechanism and is intended for plugins that need to block or listen for incoming data via a network.\n\n\nPlugin Start\n~~~~~~~~~~~~\n\nThe *plugin_start* method, as with other plugin calls, is called with the plugin handle data that was returned from the *plugin_init* call. 
The *plugin_start* call will only be called once for a plugin; it is the responsibility of *plugin_start* to install the plugin code into the Python event handling system for asyncio. Assuming an example whereby the interface to a sensor is via HTTP and the sensor will make HTTP POST calls to our plugin in order to send data into Fledge, a *plugin_start* for this scenario would create a web application endpoint for reception of the POST command.\n\n.. code-block:: python\n\n  loop = asyncio.get_event_loop()\n  app = web.Application(middlewares=[middleware.error_middleware])\n  app.router.add_route('POST', '/', SensorPhoneIngest.render_post)\n  handler = app.make_handler()\n  coro = loop.create_server(handler, host, port)\n  server = asyncio.ensure_future(coro)\n\nThis code first gets the event loop for this Python execution; it then creates the web application and adds a route for the POST request. In this case it is calling the *render_post* method of the object *SensorPhoneIngest*. It then goes on to create the handler and install the web server instance into the event system.\n\n\nAsync Data Callback\n~~~~~~~~~~~~~~~~~~~\n\nThe async data callback is used for incoming sensor data and passing that reading data into the Fledge ingest process. Unlike the poll mechanism, this is done from within the callback rather than by passing the data back to the South service itself. A plugin entry point, *plugin_register_ingest*, is called by the south service before the plugin is started to register the callback with the plugin. The plugin would usually save the callback function and the reference data for later use.\n\n.. 
code-block:: python\n\n   def plugin_register_ingest(handle, callback, ingest_ref):\n       \"\"\"Required plugin interface component to communicate to South C server\n\n       Args:\n           handle: handle returned by the plugin initialisation call\n           callback: C opaque object required to be passed back to the C->ingest method\n           ingest_ref: C opaque object required to be passed back to the C->ingest method\n       \"\"\"\n       global c_callback, c_ingest_ref\n       c_callback = callback\n       c_ingest_ref = ingest_ref\n\n\n\nThe plugin then uses these saved references when it has data to be ingested. A new reading is constructed and passed to the callback function using the *async_ingest* module that should be imported by the plugin.\n\n.. code-block:: python\n\n  import async_ingest\n\nThen for each reading to be ingested the data is sent to the ingest thread of the south plugin using the following construct.\n\n.. code-block:: python\n\n    data = {\n                'asset': self.asset_name,\n                'timestamp': utils.local_timestamp(),\n                'readings': reads\n    }\n    async_ingest.ingest_callback(c_callback, c_ingest_ref, data)\n\nIn the HTTP example above, the *render_post* method would conclude by returning a status response to the caller.\n\n.. code-block:: python\n\n  message['status'] = code\n  return web.json_response(message)\n\n\n.. include:: 03_02_south_python_Control.rst\n\n.. include:: 03_01_DHT11.rst\n"
  },
  {
    "path": "docs/plugin_developers_guide/04_north_plugins.rst",
    "content": ".. North Plugins\n\n.. |br| raw:: html\n\n   <br />\n\n.. Images\n\n.. Links\n\n.. Links in new tabs\n\n.. =============================================\n\n\nNorth Plugins\n=============\n\nNorth plugins are used in North tasks and micro services to extract data buffered in Fledge and send it Northbound, i.e. to a server or a service in the Cloud or in an Enterprise data center. North plugins may be written in Python or C/C++, a number of different north plugins are available as examples that may be used when creating new plugins.\n\nA north plugin has a limited number of entry points that it much support, these entry points are the same for both Python and C/C++ north plugins.\n\n.. list-table::\n    :header-rows: 1\n\n    * - Entry Point\n      - Description\n    * - plugin_info\n      - Return information about the plugin including the configuration for the plugin. This is the same as plugin_info in all other types of plugin and is part of the standard plugin interface.\n    * - plugin_init\n      - Also part of the standard plugin interface. This call is passed the request configuration of the plugin and should be used to do any initialization of the plugin.\n    * - plugin_send\n      - This entry point is the north plugin specific entry point that is used to send data from Fledge. This will be called repeatedly with blocks of readings.\n    * - plugin_shutdown\n      - Part of the standard plugin interface, this will be called when the plugin is no longer required and will be the final call to the plugin.\n    * - plugin_register\n      - Register the callback function used for control writes and operations.\n\nThe life cycle of a plugin is very similar regardless of if it is written in Python or C/C++, the *plugin_info* call is made first to determine data about the plugin. The plugin is then initialized by calling the *plugin_init* entry point. 
The *plugin_send* entry point will be called multiple times to send the actual data and finally the *plugin_shutdown* entry point will be called.\n\nIn the following sections each of these calls will be described in detail and samples given in both C/C++ and Python.\n\nPython Plugins\n--------------\n\nPython plugins are loaded dynamically and executed within a task, known as the *sending_task* or *north* task. This code is implemented in C++ and embeds a Python interpreter that is used to run the Python plugin.\n\nThe plugin_info call\n~~~~~~~~~~~~~~~~~~~~\n\nThe *plugin_info* call is the first call that will be made to a plugin and is called only once. It is part of the standard plugin interface that is implemented by north, south, filter, notification rule and notification delivery plugins. No arguments are passed to this call and it should return a *plugin information structure* as a Python dict.\n\nA typical implementation for a simple north plugin simply returns a DICT as follows\n\n.. code-block:: python\n\n    def plugin_info():\n        \"\"\" Used only once when call will be made to a plugin.\n\n            Args:\n            Returns:\n                Information about the plugin including the configuration for the plugin\n        \"\"\"\n        return {\n            'name': 'http',\n            'version': '1.9.1',\n            'type': 'north',\n            'interface': '1.0',\n            'config': _DEFAULT_CONFIG\n        }\n\n\nThe items in the structure returned by *plugin_info* are\n\n.. list-table::\n    :header-rows: 1\n\n    * - Name\n      - Description\n    * - name\n      - The name of the plugin\n    * - version\n      - The version of the plugin. 
Typically this is the same as the version of Fledge it is designed to work with but is not constrained to be the same.\n    * - type\n      - The type of the plugin, in this case the type will always be *north*\n    * - interface\n      - The version of the plugin interface that the plugin supports. In this case the version is 1.0\n    * - config\n      - The DICT that defines the configuration that the plugin has as default.\n\nIn the case above *_DEFAULT_CONFIG* is another Python DICT that contains the defaults for the plugin configuration and will be covered in the Configuration section.\n\n\nConfiguration\n#############\n\nConfiguration within Fledge is represented in a JSON structure that defines a name, value, default, type and a number of other optional parameters. The configuration process works by the plugins having a default configuration that they return from the *plugin_info* call. The Fledge configuration code will then combine this with a copy of that configuration that it holds. The first time a service is created, with no previously held configuration, the configuration manager will take the default values and make those the actual values. The user may then update these to set non-default values. In subsequent executions of the plugin these values will be combined with the defaults to create the in-use configuration that is passed to the *plugin_init* entry point. The mechanism is designed to allow initial execution of a plugin, but also to allow upgrade of a plugin to create new configuration items for the plugins whilst preserving previous configuration values set by the user.\n\nA sample default configuration of the http north Python based plugin is shown below.\n\n.. 
code-block:: json\n\n    {\n    \t\"plugin\": {\n    \t\t\"description\": \"HTTP North Plugin\",\n    \t\t\"type\": \"string\",\n    \t\t\"default\": \"http_north\",\n    \t\t\"readonly\": \"true\"\n    \t},\n    \t\"url\": {\n    \t\t\"description\": \"Destination URL\",\n    \t\t\"type\": \"string\",\n    \t\t\"default\": \"http://localhost:6683/sensor-reading\",\n    \t\t\"order\": \"1\",\n    \t\t\"displayName\": \"URL\"\n    \t},\n    \t\"source\": {\n    \t\t\"description\": \"Source of data to be sent on the stream. May be either readings or statistics.\",\n    \t\t\"type\": \"enumeration\",\n    \t\t\"default\": \"readings\",\n    \t\t\"options\": [\"readings\", \"statistics\"],\n    \t\t\"order\": \"2\",\n    \t\t\"displayName\": \"Source\"\n    \t},\n    \t\"verifySSL\": {\n    \t\t\"description\": \"Verify SSL certificate\",\n    \t\t\"type\": \"boolean\",\n    \t\t\"default\": \"false\",\n    \t\t\"order\": \"3\",\n    \t\t\"displayName\": \"Verify SSL\"\n    \t}\n    }\n\nItems marked as *\"readonly\": \"true\"* will not be presented to the user. The *displayName* and *order* properties are only used by the user interface to display the configuration item. The description, type and default are used by the API to verify the input and also set the initial values when a new configuration item is created.\n\nRules can also be given to the user interface to define the validity of configuration items based upon the values of others, for example\n\n.. 
code-block:: json\n\n    {\n        \"applyFilter\": {\n            \"description\": \"Should filter be applied before processing data\",\n            \"type\": \"boolean\",\n            \"default\": \"false\",\n            \"order\": \"4\",\n            \"displayName\": \"Apply Filter\"\n        },\n        \"filterRule\": {\n            \"description\": \"JQ formatted filter to apply (only applicable if applyFilter is True)\",\n            \"type\": \"string\",\n            \"default\": \".[]\",\n            \"order\": \"5\",\n            \"displayName\": \"Filter Rule\",\n            \"validity\": \"applyFilter == \\\"true\\\"\"\n        }\n    }\n\nThis will only allow entry to the *filterRule* configuration item if the *applyFilter* item has been set to true.\n\nThe plugin_init call\n~~~~~~~~~~~~~~~~~~~~\n\nThe *plugin_init* call will be invoked after the *plugin_info* call has been called to obtain the information regarding the plugin. This call is designed to allow the plugin to do any initialization that is required and also creates the handle which is used in all subsequent calls to identify the instance of the plugin.\n\nThe *plugin_init* is passed a Python DICT as the only argument; this DICT contains the modified configuration for the plugin that is created by taking the default plugin configuration returned by *plugin_info* and adding to that the values the user has configured previously. This is the working configuration that the plugin should use.\n\nThe typical implementation of the *plugin_init* call will create an instance of a Python class which is the main body of the plugin. An object will then be returned which is the handle that will be passed into subsequent calls. This handle, in a simple plugin, is commonly a Python DICT that is the configuration of the plugin, however any values may be returned. 
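A minimal sketch of this pattern, returning a class instance as the handle rather than a DICT; the class name *MyNorthPlugin* and its attributes are hypothetical, not part of the Fledge API:

```python
class MyNorthPlugin:
    # Hypothetical plugin class; a real plugin would also prepare
    # connections to the destination system here.
    def __init__(self, config):
        self.config = config
        self.sent = 0


def plugin_init(data):
    # Return the instance itself as the handle; subsequent calls
    # receive it back and can access the stored configuration.
    return MyNorthPlugin(data)


async def plugin_send(handle, payload, stream_id):
    # The handle is the instance created in plugin_init.
    num_sent = len(payload)
    handle.sent += num_sent
    last_id = payload[-1]['id'] if payload else 0
    return num_sent > 0, last_id, num_sent
```

This avoids module-level globals entirely, which is also the pattern recommended below for filter plugins where multiple instances may share one process.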
The caller treats the handle as opaque data that it stores and passes to further calls to the plugin; it will never look inside that object or have any expectations as to what is stored within that object.\n\nThe *fledge-north-http* plugin implementation of *plugin_init* is shown below as an example\n\n.. code-block:: python\n\n    def plugin_init(data):\n        \"\"\" Used for initialization of a plugin.\n\n        Args:\n            data - Plugin configuration\n        Returns:\n            Dictionary of a Plugin configuration\n        \"\"\"\n        global http_north, config\n        http_north = HttpNorthPlugin()\n        config = data\n        return config\n\nIn this case the plugin creates an object that implements the functionality and stores that object in a global variable. This can be done as only one instance of the north plugin exists within a single process. It is however perhaps better practice to return the instance of the class in the handle rather than use a global variable. Using a global is not recommended for filter plugins as multiple instances of a filter may exist within a single process. In this case the plugin uses the configuration as the handle it returns. \n\nThe plugin_send call\n~~~~~~~~~~~~~~~~~~~~\n\nThe *plugin_send* call is the main entry point of a north plugin; it is used to send a set of readings north to the destination system. It is responsible for both the communication to that system and the translation of the internal representation of the reading data to the representation required by the external system.\n\nThe communication performed by the *plugin_send* routine should use the Python 3 asynchronous I/O primitives; the definition of the *plugin_send* entry point must also use the *async* keyword.\n\nThe *plugin_send* entry point is passed 3 arguments: the plugin handle, the data to send and a stream_id.\n\n.. 
code-block:: python\n\n   async def plugin_send(handle, payload, stream_id):\n\nThe handle is the opaque data returned by the call to *plugin_init* and may be used by the plugin to store data between invocations. The *payload* is a set of readings that should be sent; see below for more details on payload handling. The stream_id is an integer that uniquely identifies the connection from this Fledge instance to the destination system. This id can be used if the plugin needs to have a unique identifier but in most cases can be ignored.\n\nThe *plugin_send* call returns three values: a boolean that indicates if any data has been sent, the object id of the last reading sent and the number of readings sent.\n\nThe code below is the *plugin_send* entry point for the http north plugin.\n\n.. code-block:: python\n\n    async def plugin_send(handle, payload, stream_id):\n        \"\"\" Used to send the readings block from north to the configured destination.\n\n        Args:\n            handle - An object which is returned by plugin_init\n            payload - A List of readings block\n            stream_id - An Integer that uniquely identifies the connection from Fledge instance to the destination system\n        Returns:\n            Tuple which consists of\n            - A Boolean that indicates if any data has been sent\n            - The object id of the last reading which has been sent\n            - Total number of readings which has been sent to the configured destination\n        \"\"\"\n        try:\n            is_data_sent, new_last_object_id, num_sent = await http_north.send_payloads(payload)\n        except asyncio.CancelledError:\n            pass\n        else:\n            return is_data_sent, new_last_object_id, num_sent\n\nThe plugin_shutdown call\n~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe *plugin_shutdown* call is the final entry that is required for a Python north plugin; it is called by the north service or task just prior to the task terminating or in a north 
service if the configuration is changed; see Reconfiguration below. The *plugin_shutdown* call is passed the plugin handle and should perform any cleanup required by the plugin.\n\n.. code-block:: python\n\n   def plugin_shutdown(handle):\n       \"\"\" Used when plugin is no longer required and will be final call to shutdown the plugin. It should do any necessary cleanup if required.\n\n       Args:\n            handle - Plugin handle which is returned by plugin_init\n       Returns:\n       \"\"\"\n\nThe call should not return any data. Once called the handle should no longer be regarded as valid and no further calls will be made to the plugin using this handle.\n\nReconfiguration\n~~~~~~~~~~~~~~~\n\nUnlike other plugins within Fledge the north plugins do not have a reconfiguration entry point; this is due to the original nature of the north implementation in Fledge, which used short lived tasks in order to send data out of the north. Each new execution created a new task with new configuration; it was therefore felt that reconfiguration added a complexity to the north plugins that could be avoided.\n\nSince the introduction of the feature that allows the north to be run as an always-on service, however, this has become an issue. It is resolved by closing down the plugin, calling *plugin_shutdown*, and then restarting by calling *plugin_init* to pass the new configuration and retrieve a new plugin handle with that new configuration.\n\nPayload Handling\n~~~~~~~~~~~~~~~~\n\nThe payload that is passed to the *plugin_send* routine is a Python list of readings; each reading is encoded as a Python DICT. The properties of the reading dict are:\n\n.. list-table::\n    :header-rows: 1\n\n    * - Key\n      - Description\n    * - id\n      - The ID of the reading. Each reading is given an integer id that is an increasing value; it is these id values that are used to track how much data is sent via the north plugin. 
One of the returns from the *plugin_send* routine is the id of the last reading that was successfully sent.\n    * - asset_code\n      - The asset code of the reading. Typically a south service will generate readings for one or more asset codes. These asset codes are used to identify the source of the data. Multiple asset codes may appear in a single block of readings passed to the *plugin_send* routine.\n    * - reading\n      - A nested Python DICT that stores the actual data points associated with the reading. These reading DICTs will contain a key/value pair for each data point within the asset. The value of this pair is the value of the data point and may be numeric, string, an array, or a nested object.\n    * - ts\n      - The timestamp when the reading was first seen by the system.\n    * - user_ts\n      - The timestamp of the data in the reading. This may be the same as *ts* above or in some cases may be a timestamp that has been received from the source of the data itself. This is the timestamp that most accurately represents the timestamp of the data.\n\n\nA sample payload is shown below.\n\n.. 
code-block:: python\n\n    [{'reading': {'sinusoid': 0.0}, 'asset_code': 'sinusoid', 'id': 1, 'ts': '2021-09-27 06:55:52.692000+00:00', 'user_ts': '2021-09-27 06:55:49.947058+00:00'},\n    {'reading': {'sinusoid': 0.104528463}, 'asset_code': 'sinusoid', 'id': 2, 'ts': '2021-09-27 06:55:52.692000+00:00', 'user_ts': '2021-09-27 06:55:50.947110+00:00'}]\n\n\nC/C++ Plugins\n-------------\n\nThe flow of a C/C++ plugin is very similar to that of a Python plugin; the entry points vary slightly compared to Python, mostly for language reasons.\n\nThe plugin_info entry point\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe *plugin_info* is again the first entry point that will be called; in the case of a C/C++ plugin it will return a pointer to a PLUGIN_INFORMATION structure. This structure contains the same elements that are seen in the Python DICT that is returned by Python plugins.\n\n.. code-block:: C\n\n    static PLUGIN_INFORMATION info = {\n            PLUGIN_NAME,                    // Name\n            VERSION,                        // Version\n            0,                              // Flags\n            PLUGIN_TYPE_NORTH,              // Type\n            \"1.0.0\",                        // Interface version\n            default_config                  // Configuration\n    };\n\nIt should be noted that the *PLUGIN_INFORMATION* structure instance is declared as static. All global variables declared within a C/C++ plugin should be declared as static as the mechanism for loading the plugins will share global variables between plugins. Using true global variables can create unexpected interactions between plugins.\n    \nThe items are\n\n.. list-table::\n    :header-rows: 1\n    \n    * - Name\n      - Description\n    * - name\n      - The name of the plugin.\n    * - version\n      - The version of the plugin expressed as a string. 
This usually but not always matches the current version of Fledge.\n    * - flags\n      - A bitmap of flags that give extra information about the plugin.\n    * - interface\n      - The interface version, currently north plugins are at interface version 1.0.0.\n    * - config\n      - The default configuration for the plugin. In C/C++ plugins this is returned as a string containing the JSON structure.\n\nA number of flags are supported by the plugins, however a small subset are supported in north plugins, this subset consists of\n\n.. list-table::\n   :header-rows: 1\n\n   * - Name\n     - Description\n   * - SP_PERSIST_DATA\n     - The plugin persists data and uses the data persistence API extensions.\n   * - SP_BUILTIN\n     - The plugin is builtin with the Fledge core package. This should not be used for any user added plugins.\n\nA typical implementation of the *plugin_info* entry would merely return the *PLUGIN_INFORMATION* structure for the plugin.\n\n.. code-block:: C\n\n    PLUGIN_INFORMATION *plugin_info()\n    {\n        return &info;\n    }\n\nMore complex implementations may tailor the content of the information returned based upon some criteria determined at run time. An example of such a scenario might be to tailor the default configuration based upon some element of discovery that occurs at run time. For example if the plugin is designed to send data to another service the *plugin_info* entry point could perform some service discovery and update a set of options for an enumerated type in the default configuration. This would allow the user interface to give the user a selection list of all the service instances that it found when the plugin was run.\n\nThe plugin_init entry point\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe *plugin_init* entry point is called once the configuration of the plugin has been constructed by combining the default configuration with any stored configuration that the user has set for the plugin. 
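The combination of the default configuration with previously stored user values, described here and in the Python Configuration section above, can be illustrated with a small sketch. This is purely illustrative; Fledge performs this merge internally before invoking *plugin_init*, and the function and data below are not Fledge APIs:

```python
# Illustrative sketch of forming the in-use configuration from the
# plugin defaults and previously stored user values.

def combine_config(default_config, user_values):
    combined = {}
    for name, item in default_config.items():
        merged = dict(item)
        # Every item gets its default as the value unless the user
        # has previously stored a different value for it.
        merged['value'] = user_values.get(name, item['default'])
        combined[name] = merged
    return combined


default_config = {
    'url': {'type': 'string', 'default': 'http://localhost:6683/sensor-reading'},
    'verifySSL': {'type': 'boolean', 'default': 'false'}
}

# Values the user is assumed to have changed through the API.
user_values = {'verifySSL': 'true'}

config = combine_config(default_config, user_values)
```

The resulting structure keeps the defaults alongside the in-use values, which is how a plugin upgrade can introduce new items without losing user settings.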
The configuration is passed as a pointer to a C++ object of class ConfigCategory. This object may then be used to extract data from the configuration.\n\nThe *plugin_init* call should be used to initialize the plugin itself and to extract the configuration from the *ConfigCategory* instance and store it within the instance of the plugin. Details regarding the use of the *ConfigCategory* class can be found in the C++ Support Class section of the Plugin Developers Guide. Typically the north plugin will create an instance of a class that implements the functionality required, store the configuration in that class and return a pointer to that instance as the handle for the plugin. This will ensure that subsequent calls can access that class instance and the associated state, since all future calls will be passed the handle as an argument.\n\nThe following is perhaps the most generic form of the *plugin_init* call. \n\n.. code-block:: C\n\n    PLUGIN_HANDLE plugin_init(ConfigCategory *configData)\n    {\n        return (PLUGIN_HANDLE)(new myNorthPlugin(configData));\n    }\n\nIn this case it assumes we have a class, *myNorthPlugin*, that implements the functionality of the plugin. The constructor takes the *ConfigCategory* pointer as an argument and performs all required initialization from that configuration category.\n\nThe plugin_send entry point\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe *plugin_send* entry point, as with the Python plugins already described, is the heart of a north plugin. It is called with the plugin handle and a block of readings data to be sent north. Typically the *plugin_send* will extract the object created in the *plugin_init* call from the handle and then call the functionality within that object to perform whatever translation and communication logic is required to send the reading data.\n\n.. 
code-block:: C\n\n   uint32_t plugin_send(PLUGIN_HANDLE handle, std::vector<Reading *>& readings)\n   {\n        myNorthPlugin *plugin = (myNorthPlugin *)handle;\n        return plugin->send(readings);\n   }\n\nThe block of readings is sent as a C++ standard template library vector of pointers to instances of the Reading class, also covered above in the section on C++ Support Classes.\n\nThe return from the *plugin_send* function should be a count of the number of readings sent by the plugin.\n\nThe plugin_shutdown entry point\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe *plugin_shutdown* entry point is called when the plugin is no longer required. It should do any necessary cleanup required. As with other entry points, it is called with the handle that was returned by *plugin_init*. In the case of our simple plugin that might simply be to delete the C++ object that implements the plugin functionality.\n\n.. code-block:: C\n\n   void plugin_shutdown(PLUGIN_HANDLE handle)\n   {\n        myNorthPlugin *plugin = (myNorthPlugin *)handle;\n        delete plugin;\n   }\n\nThe plugin_register entry point\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe *plugin_register* entry point is used to pass two function pointers to the plugin. These function pointers are the functions that should be called when either a set point write or a set point operation is required. The plugin should store these function pointers for later use.\n\n.. 
code-block:: C\n\n   void plugin_register(PLUGIN_HANDLE handle,\n                        bool (*write)(char *name, char *value, ControlDestination destination, ...),\n                        int (*operation)(char *operation, int paramCount, char *parameters[], ControlDestination destination, ...))\n   {\n        myNorthPlugin *plugin = (myNorthPlugin *)handle;\n        plugin->setpointCallbacks(write, operation);\n   }\n\nThis call will only be made if the plugin included the *SP_CONTROL* option in the flags field of the *PLUGIN_INFORMATION* structure.\n\nSet Point Control\n-----------------\n\nFledge supports multiple paths for set point control; one of these paths allows for a north service to be bi-directional, with the north plugin receiving a trigger from the system north of Fledge to perform a set point control. This trigger may be the north plugin polling the system or a protocol response from the north.\n\nSet point control is only available for north services; it is not supported for north tasks and will be ignored.\n\nWhen the north plugin requires a set point write operation to be performed it calls the *write* callback that was passed to the plugin in the *plugin_register* entry point. This callback takes a number of arguments:\n\n  - The name of the set point to be written.\n\n  - The value to write to the set point. This is always expressed as a string.\n\n  - The destination of the write operation. This is passed using the *ControlDestination* enumerated type. Currently this may be one of\n\n      - **DestinationBroadcast**: send the write operation to all south services that support control.\n\n      - **DestinationAsset**: send the write request to the south service responsible for ingesting the given asset. 
The asset is passed as the next argument in the *write* call.\n\n      - **DestinationService**: send the write request to the named south service.\n\nFor example if the north plugin wishes to write the set point called *speed* with the value *28* in the south service called *Motor Control* it would make a call as follows.\n\n.. code-block:: C\n\n       (*m_write)(\"speed\", \"28\", DestinationService, \"Motor Control\");\n\nAssuming the member variable *m_write* was used to store the function pointer of the *write* callback.\n\nIf the north plugin requires an operation to be performed, rather than a write, then it should call the *operation* callback, which was passed to it in the *plugin_register* call. This callback takes a set of arguments:\n\n   - The name of the operation to execute.\n\n   - The number of parameters the operation should be passed.\n\n   - An array of parameters, as strings, to pass to the operation\n\n   - The destination of the operation, this is the same set of destinations as per the write call.\n"
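By analogy with the *write* example above, a call through the *operation* callback might look like the sketch below. The callback is stubbed out here purely to show the calling convention; the operation name, its parameters and the *ControlDestination* definition are illustrative, not taken from the Fledge headers:

```c
#include <string.h>

/* Illustrative stand-in for the Fledge ControlDestination enumeration. */
typedef enum {
    DestinationBroadcast,
    DestinationAsset,
    DestinationService
} ControlDestination;

/* Stub operation callback used only to demonstrate the calling
 * convention; the real callback is supplied by the north service
 * via plugin_register and dispatches to the south services. */
static int stub_operation(char *operation, int paramCount, char *parameters[],
                          ControlDestination destination, ...)
{
    if (strcmp(operation, "setSpeed") != 0 || paramCount != 2)
        return 0;
    if (strcmp(parameters[0], "speed") != 0 || strcmp(parameters[1], "28") != 0)
        return 0;
    return destination == DestinationService;
}

/* What a plugin might do with a stored m_operation pointer: request
 * the hypothetical "setSpeed" operation on the "Motor Control" service. */
static int request_set_speed(int (*m_operation)(char *, int, char *[],
                                                ControlDestination, ...))
{
    char *params[] = { "speed", "28" };
    return (*m_operation)("setSpeed", 2, params, DestinationService,
                          "Motor Control");
}
```

As with the *write* callback, the trailing variadic argument carries the destination-specific value, here the name of the target south service.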
  },
  {
    "path": "docs/plugin_developers_guide/05_storage_plugins.rst",
    "content": ".. Storage Plugins\n\n.. |br| raw:: html\n\n   <br />\n\n.. Images\n\n.. Links\n\n.. Links in new tabs\n\n.. =============================================\n\n\nStorage Plugins\n===============\n\nStorage plugins are used to interact with the Storage Microservice and provide the persistent storage of information for Fledge. \n\nThe current version of Fledge comes with three storage plugins:\n\n- The **SQLite plugin**: this is the default plugin and it is used for general purpose storage on constrained devices.\n- The **SQLite In Memory plugin**: this plugin can be used in conjunction with one of the other storage plugins and will provide an in memory storage system for reading data only. Configuration data is stored using the *SQLite* or *PostgreSQL* plugins.\n- The **PostgreSQL plugin**: this plugin can be set on request (or it can be built as a default plugin from source) and it is used for a more significant demand of storage on relatively larger systems.\n\n\nData and Metadata\n-----------------\n\nPersistency is split in two blocks:\n\n- **Metadata persistency**: it refers to the storage of metadata for Fledge, such as the configuration of the plugins, the scheduling of jobs and tasks and the the storage of statistical information.\n- **Data persistency**: it refers to the storage of data collected from sensors and devices by the South microservices. The *SQLite In Memory* plugin is an example of a storage plugin designed to store only the data.\n\nIn the current implementation of Fledge, metadata and data use the same Storage plugin by default. Administrators can select different plugins for these two categories of data, with the most common configuration of this type to use the SQLite In Memory storage service for data and SQLite for the metadata. This is set by editing the storage configuration file. 
Currently there is no interface within Fledge to change the storage configuration.\n\nThe storage configuration file is stored in the Fledge data directory as etc/storage.json, the default storage configuration file is\n\n.. code-block:: JSON\n\n  {\n    \"plugin\": {\n      \"value\": \"sqlite\",\n      \"description\": \"The main storage plugin to load\"\n    },\n    \"readingPlugin\": {\n      \"value\": \"\",\n      \"description\": \"The storage plugin to load for readings data. If blank the main storage plugin is used.\"\n    },\n    \"threads\": {\n      \"value\": \"1\",\n      \"description\": \"The number of threads to run\"\n    },\n    \"managedStatus\": {\n      \"value\": \"false\",\n      \"description\": \"Control if Fledge should manage the storage provider\"\n    },\n    \"port\": {\n      \"value\": \"0\",\n      \"description\": \"The port to listen on\"\n    },\n    \"managementPort\": {\n      \"value\": \"0\",\n      \"description\": \"The management port to listen on.\"\n    }\n  }\n\nThis sets the storage plugin to use as the *SQLite* plugin and leaves the *readingPlugin* blank. If the *readingPlugin* is blank then readings will be stored via the main plugin, if it is populated then a separate plugin will be used to store the readings. As an example, to store the readings in the *SQLite In Memory* plugin the storage.json file would be\n\n.. code-block:: JSON\n\n  {\n    \"plugin\": {\n      \"value\": \"sqlite\",\n      \"description\": \"The main storage plugin to load\"\n    },\n    \"readingPlugin\": {\n      \"value\": \"sqlitememory\",\n      \"description\": \"The storage plugin to load for readings data. 
If blank the main storage plugin is used.\"\n    },\n    \"threads\": {\n      \"value\": \"1\",\n      \"description\": \"The number of threads to run\"\n    },\n    \"managedStatus\": {\n      \"value\": \"false\",\n      \"description\": \"Control if Fledge should manage the storage provider\"\n    },\n    \"port\": {\n      \"value\": \"0\",\n      \"description\": \"The port to listen on\"\n    },\n    \"managementPort\": {\n      \"value\": \"0\",\n      \"description\": \"The management port to listen on.\"\n    }\n  }\n\nFledge must be restarted for changes to the storage.json file to take effect.\n\nIn addition to the definition of the plugins to use, the storage.json file also has a number of other configuration options for the storage service.\n\n- **threads**: The number of threads to use to accept incoming REST requests. This is normally set to 1; increasing the number of threads has minimal impact on performance in normal circumstances.\n\n- **managedStatus**: This configuration option allows Fledge to manage the underlying storage system. If, for example, you used a database server and you wished Fledge to start and stop that server as part of the Fledge start up and shut down procedure, you would set this option to \"true\".\n\n- **port**: This option can be used to make the storage service listen on a fixed port. This is normally not required, but can be used for diagnostic purposes.\n\n- **managementPort**: As with *port* above, this can be used for diagnostic purposes to fix the management API port for the storage service.\n\nCommon Elements for Storage Plugins\n-----------------------------------\n\nIn designing the Storage API and plugins, we have first of all considered that there may be a large number of use cases for data and metadata persistence; therefore, we have designed a flexible architecture that poses very few limitations. 
In practice, this means that developers can build their own Storage plugin and they can rely on anything they want to use as persistent storage. They can use a memory structure, or even a pass-through library, a file, a message queue system, a time series database, a relational database, NoSQL or something else.\n\nAfter having praised the flexibility of the Storage plugins, let's provide guidelines about the basic functionality they should provide, bearing in mind that such functionality may not be relevant for some use cases.\n\n- **Metadata persistency**: As mentioned before, one of the main reasons to use a Storage plugin is to safely store the configuration of the Fledge components. Since the configuration must survive a system crash or reboot, it is fair to say that such information should be stored in one or more files or in a database system.\n- **Data buffering**: The second most important feature of a Storage plugin is the ability to buffer (or store) data coming from the outside world, typically from the South microservices. In some cases this feature may not be necessary, since administrators may want to send data to other systems as soon as possible, using a North task or microservice. Even in situations where data can be sent up North instantaneously, you should consider these scenarios:\n\n  - Fledge may be installed in areas where the network is unreliable. The North plugins provide the logic for retrying to gain connectivity and for resending data when the connection is lost in the middle of a transfer operation.\n  - North services may rely on the use of networks that provide time windows to operate. \n  - Historians and other systems may work better when data is transferred in blocks instead of as a constant stream.\n\n- **Data purging**: Data may persist for the time needed by any specific use case, but it is pretty common that after a while (it can be seconds or minutes, but also days or months) data is no longer needed in Fledge. 
For this reason, the Storage plugin is able to purge data. Purging may be based on time or on space usage, and may take into account whether data has already been transferred to other systems.\n\n- **Data backup/restore**: Data, but especially metadata (i.e. configuration), can be backed up and stored safely on other systems. In case of crash and recovery, the same data may be restored into Fledge. Fledge provides a set of generic APIs to execute backup and restore operations.\n\n\n"
  },
  {
    "path": "docs/plugin_developers_guide/06_filter_plugins.rst",
    "content": ".. Filter Plugins\n\n.. |br| raw:: html\n\n   <br/>\n\n.. Links\n.. |expression filter| raw:: html\n\n   <a href=\"../plugins/fledge-filter-expression/index.html\">expression filter</a>\n\n.. |Python 3.5 filter| raw:: html\n\n   <a href=\"../plugins/fledge-filter-python35/index.html\">Python 3.5 filter</a>\n\nFilter Plugins\n==============\n\nFilter plugins provide a mechanism to alter the data stream as it flows\nthrough a fledge instance, filters may be applied in south or north\nmicro-services and may form a pipeline of multiple processing elements\nthrough which the data flows. Filters applied in a south service will only\nprocess data that is received by the south service, whilst filters placed\nin the north will process all data that flows out of that north interface.\n\nFilters may;\n\n  - augment data by adding static metadata or calculated values to the data\n\n  - remove data from the stream\n\n  - add data to the stream\n\n  - modify data in the stream\n\nIt should be noted that there are some alternatives to creating a filter\nif you wish to make simple changes to the data stream. There are a number of\nexisting filters that provide a degree of programmability. These include\nthe |expression filter| which allows an arbitrary mathematical formula\nto be applied to the data or the |Python 3.5 filter| which allows a\nsmall include Python script to be applied to the data.\n\nFilter plugins may be written in C++ or Python and have a very simple interface. The plugin mechanism and a subset of the API is common between all types of plugins including filters.\n\nConfiguration\n-------------\n\nFilters use the same configuration mechanism as the rest of Fledge,\nusing a JSON document to describe the configuration parameters. As with\nany other plugin the structure is defined by the plugin and retrieve\nby the plugin_info entry point. 
This is then matched with the database\ncontent to pass the configured values to the plugin_init entry point.\n\nC++ Filter Plugin API\n---------------------\n\nThe filter API consists of a small number of C function entry points;\nthese are called in a strict order and based on the same set of common\nAPI entry points for all Fledge plugins.\n\nPlugin Information\n~~~~~~~~~~~~~~~~~~\n\nThe *plugin_info* entry point is the first entry point that is called\nin a filter plugin and returns the plugin information structure. This is\nthe exact same call that every Fledge plugin must support and is used to\ndetermine the type of the plugin and the configuration category defaults\nfor the plugin.\n\nA typical implementation of *plugin_info* would merely return a pointer\nto a static PLUGIN_INFORMATION structure.\n\n.. code-block:: C\n\n\n   PLUGIN_INFORMATION *plugin_info()\n   {\n        return &info;\n   }\n\nPlugin Initialise\n~~~~~~~~~~~~~~~~~\n\nThe *plugin_init* entry point is called after *plugin_info* has been called and before any data is passed to the filter. It is called during the phase in which the service is setting up the filter pipeline, and it provides the filter with its configuration category, which now contains the user supplied values, and with the destination to which the filter will send its output.\n\n.. code-block:: C\n\n   PLUGIN_HANDLE plugin_init(ConfigCategory* config,\n                          OUTPUT_HANDLE *outHandle,\n                          OUTPUT_STREAM output)\n   {\n   }\n\nThe *config* parameter is the configuration category with the user supplied\nvalues inserted; the *outHandle* is a handle for the next filter in the\nchain and the *output* is a function pointer to call to send the data\nto the next filter in the chain. 
The *outHandle* and *output* arguments\nshould be stored for future use in the *plugin_ingest* call when data is to\nbe forwarded within the pipeline.\n\nThe *plugin_init* function returns a handle that will be passed to all\nsubsequent plugin calls. This handle can be used to store state that\nneeds to be passed between calls. Typically the *plugin_init* call will\ncreate a C++ class that implements the filter and return a pointer to the\ninstance as the handle. The instance can then be used to store the state\nof the filter, including the output handle and callback that needs to\nbe used.\n\nFilter classes can also be used to buffer data between calls to the\n*plugin_ingest* entry point, allowing a filter to defer the processing\nof the data until it has a sufficient quantity of buffered data available\nto it.\n\nPlugin Ingest\n~~~~~~~~~~~~~\n\nThe *plugin_ingest* entry point is the workhorse of the filter; it is\ncalled with sets of readings to process and then passes on the new set\nof readings to the next filter in the pipeline. The process of passing on\nthe data to the next filter is via the *OUTPUT_STREAM* function pointer. A\nfilter does not have to output data each time it ingests data; it is free\nto output no data or to output more or less data than it was called with.\n\n.. code-block:: C\n\n   void plugin_ingest(PLUGIN_HANDLE *handle,\n                   READINGSET *readingSet)\n   {\n   }\n\nThe number of readings that a filter is called with will depend on the\nenvironment it is run in and what any filters earlier in the filter\npipeline have produced. A filter that requires a particular sample size\nin order to process a result should therefore be prepared to buffer data\nacross multiple calls to *plugin_ingest*. Several examples of filters\nthat do this are available for reference.\n\nThe *plugin_ingest* call may send data onwards in the filter pipeline\nby using the stored *output* and *outHandle* parameters passed to\n*plugin_init*.\n\n.. 
code-block:: C\n\n    (*output)(outHandle, readings);\n\nPlugin Reconfigure\n~~~~~~~~~~~~~~~~~~\n\nAs with other plugin types the filter may be reconfigured during its\nlifetime. When a reconfiguration operation occurs, the *plugin_reconfigure*\nmethod will be called with the new configuration for the filter.\n\n.. code-block:: C\n\n   void plugin_reconfigure(PLUGIN_HANDLE *handle, const std::string& newConfig)\n   {\n   }\n\nPlugin Shutdown\n~~~~~~~~~~~~~~~\n\nAs with other plugins a shutdown call exists which may be used by\nthe plugin to perform any cleanup that is required when the filter is\nshut down.\n\n.. code-block:: C\n\n   void plugin_shutdown(PLUGIN_HANDLE *handle)\n   {\n   }\n\nC++ Helper Class\n~~~~~~~~~~~~~~~~\n\nIt is expected that filters will be written as C++ classes, with the\nplugin handle being used as a mechanism to store and pass the pointer to\nthe instance of the filter class. In order to make it easier to write\nfilters a base *FledgeFilter* class has been provided; it is recommended\nthat you derive your specific filter class from this base class in order\nto simplify the implementation.\n\n.. 
code-block:: C\n\n    class FledgeFilter {\n            public:\n                    FledgeFilter(const std::string& filterName,\n                                  ConfigCategory& filterConfig,\n                                  OUTPUT_HANDLE *outHandle,\n                                  OUTPUT_STREAM output);\n                    ~FledgeFilter() {};\n                    const std::string&\n                                        getName() const { return m_name; };\n                    bool\t\tisEnabled() const { return m_enabled; };\n                    ConfigCategory&     getConfig() { return m_config; };\n                    void\t\tdisableFilter() { m_enabled = false; };\n                    void\t\tsetConfig(const std::string& newConfig);\n            public:\n                    OUTPUT_HANDLE*\tm_data;\n                    OUTPUT_STREAM\tm_func;\n            protected:\n                    std::string\t        m_name;\n                    ConfigCategory\tm_config;\n                    bool\t\tm_enabled;\n    };\n\nC++ Filter Example\n------------------\n\nThe following example is a simple data processing example. It applies the log() function to numeric data in the data stream.\n\nPlugin Interface\n~~~~~~~~~~~~~~~~\n\nMost plugins written in C++ have a source file that encapsulates the C API to the plugin; this is traditionally called plugin.cpp. The example plugin follows this model with the content of plugin.cpp shown below.\n\nThe first section includes the filter class that is the actual implementation of the filter logic and defines the JSON configuration category. This uses the *QUOTE* macro in order to make the JSON definition more readable.\n\n.. 
code-block:: C\n\n   /*\n    * Fledge \"log\" filter plugin.\n    *\n    * Copyright (c) 2020 Dianomic Systems\n    *\n    * Released under the Apache 2.0 Licence\n    *\n    * Author: Mark Riddoch\n    */\n\n   #include <logFilter.h>\n   #include <version.h>\n\n   #define FILTER_NAME \"log\"\n   const static char *default_config = QUOTE({\n                   \"plugin\" : {\n                           \"description\" : \"Log filter plugin\",\n                           \"type\" : \"string\",\n                           \"default\" : FILTER_NAME,\n                           \"readonly\": \"true\"\n                           },\n                    \"enable\": {\n                           \"description\": \"A switch that can be used to enable or disable execution of the log filter.\", \n                           \"type\": \"boolean\",\n                           \"displayName\": \"Enabled\",\n                           \"default\": \"false\"\n                           },\n                   \"match\" : {\n                           \"description\" : \"An optional regular expression to match in the asset name.\",\n                           \"type\": \"string\",\n                           \"default\": \"\",\n                           \"order\": \"1\",\n                           \"displayName\": \"Asset filter\"}\n                   });\n\n   using namespace std;\n\nWe then define the plugin information contents that will be returned by the *plugin_info* call.\n\n.. 
code-block:: C\n\n   /**\n    * The Filter plugin interface\n    */\n   extern \"C\" {\n\n   /**\n    * The plugin information structure\n    */\n   static PLUGIN_INFORMATION info = {\n           FILTER_NAME,              // Name\n           VERSION,                  // Version\n           0,                        // Flags\n           PLUGIN_TYPE_FILTER,       // Type\n           \"1.0.0\",                  // Interface version\n           default_config\t          // Default plugin configuration\n   };\n\nThe final section of this file consists of the entry points themselves\nand the implementation. The majority of this consists of calls to the\nLogFilter class, which in this case implements the logic of the filter.\n\n.. code-block:: C\n\n   /**\n    * Return the information about this plugin\n    */\n   PLUGIN_INFORMATION *plugin_info()\n   {\n           return &info;\n   }\n\n   /**\n    * Initialise the plugin, called to get the plugin handle.\n    * We merely create an instance of our LogFilter class\n    *\n    * @param config\tThe configuration category for the filter\n    * @param outHandle\tA handle that will be passed to the output stream\n    * @param output\tThe output stream (function pointer) to which data is passed\n    * @return\t\tAn opaque handle that is used in all subsequent calls to the plugin\n    */\n   PLUGIN_HANDLE plugin_init(ConfigCategory* config,\n                             OUTPUT_HANDLE *outHandle,\n                             OUTPUT_STREAM output)\n   {\n           LogFilter *log = new LogFilter(FILTER_NAME,\n                                           *config,\n                                           outHandle,\n                                           output);\n\n           return (PLUGIN_HANDLE)log;\n   }\n\n   /**\n    * Ingest a set of readings into the plugin for processing\n    *\n    * @param handle\tThe plugin handle returned from plugin_init\n    * @param readingSet\tThe readings to process\n    */\n   void 
plugin_ingest(PLUGIN_HANDLE *handle,\n                      READINGSET *readingSet)\n   {\n           LogFilter *log = (LogFilter *) handle;\n           log->ingest(readingSet);\n   }\n\n   /**\n    * Plugin reconfiguration method\n    *\n    * @param handle\tThe plugin handle\n    * @param newConfig\tThe updated configuration\n    */\n   void plugin_reconfigure(PLUGIN_HANDLE *handle, const std::string& newConfig)\n   {\n           LogFilter *log = (LogFilter *)handle;\n           log->reconfigure(newConfig);\n   }\n\n   /**\n    * Call the shutdown method in the plugin\n    */\n   void plugin_shutdown(PLUGIN_HANDLE *handle)\n   {\n           LogFilter *log = (LogFilter *) handle;\n           delete log;\n   }\n\n   // End of extern \"C\"\n   };\n\nFilter Class\n~~~~~~~~~~~~\n\nAlthough it is not mandatory, it is good practice to encapsulate the filter logic in a class; these classes are derived from the FledgeFilter class.\n\n.. code-block:: C\n\n   #ifndef _LOG_FILTER_H\n   #define _LOG_FILTER_H\n   /*\n    * Fledge \"Log\" filter plugin.\n    *\n    * Copyright (c) 2020 Dianomic Systems\n    *\n    * Released under the Apache 2.0 Licence\n    *\n    * Author: Mark Riddoch           \n    */     \n   #include <filter.h>               \n   #include <reading_set.h>\n   #include <config_category.h>\n   #include <string>                 \n   #include <logger.h>\n   #include <mutex>\n   #include <regex>\n   #include <math.h>\n\n\n   /**\n    * Convert the incoming data to use a logarithmic scale\n    */\n   class LogFilter : public FledgeFilter {\n           public:\n                   LogFilter(const std::string& filterName,\n                           ConfigCategory& filterConfig,\n                           OUTPUT_HANDLE *outHandle,\n                           OUTPUT_STREAM output);\n                   ~LogFilter();\n                   void\tingest(READINGSET *readingSet);\n                   void\treconfigure(const std::string& newConfig);\n           private:\n     
              void\t\t\t\thandleConfig(ConfigCategory& config);\n                   std::string\t\t\tm_match;\n                   std::regex\t\t\t*m_regex;\n                   std::mutex\t\t\tm_configMutex;\n   };\n\n\n   #endif\n\nFilter Class Implementation\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe following is the code that implements the filter logic.\n\n.. code-block:: C\n\n   /*\n    * Fledge \"Log\" filter plugin.\n    *\n    * Copyright (c) 2020 Dianomic Systems\n    *\n    * Released under the Apache 2.0 Licence\n    *\n    * Author: Mark Riddoch           \n    */     \n   #include <logFilter.h>               \n\n   using namespace std;\n\n   /**\n    * Constructor for the LogFilter.\n    *\n    * We call the constructor of the base class and handle the initial\n    * configuration of the filter.\n    *\n    * @param\tfilterName      The name of the filter\n    * @param\tfilterConfig    The configuration category for this filter\n    * @param\toutHandle       The handle of the next filter in the chain\n    * @param\toutput          A function pointer to call to output data to the next filter\n    */\n   LogFilter::LogFilter(const std::string& filterName,\n                           ConfigCategory& filterConfig,\n                           OUTPUT_HANDLE *outHandle,\n                           OUTPUT_STREAM output) : FledgeFilter(filterName, filterConfig, outHandle, output),\n                                   m_regex(NULL)\n   {\n           handleConfig(filterConfig);\n   }\n\n   /**\n    * Destructor for this filter class\n    */\n   LogFilter::~LogFilter()\n   {\n           if (m_regex)\n                   delete m_regex;\n   }\n\n   /**\n    * The actual filtering code\n    *\n    * @param readingSet\tThe reading data to filter\n    */\n   void\n   LogFilter::ingest(READINGSET *readingSet)\n   {\n           lock_guard<mutex> guard(m_configMutex);\n\n           if (isEnabled())\t// Filter enabled, process the readings\n           {\n                   const 
vector<Reading *>& readings = ((ReadingSet *)readingSet)->getAllReadings();\n                   for (vector<Reading *>::const_iterator elem = readings.begin();\n                                   elem != readings.end(); ++elem)\n                   {\n                           // If we set a matching regex then compare to the name of this asset\n                           if (!m_match.empty())\n                           {\n                                   string asset = (*elem)->getAssetName();\n                                   if (!regex_match(asset, *m_regex))\n                                   {\n                                           continue;\n                                   }\n                           }\n\n                           // We are modifying this asset so put an entry in the asset tracker\n                           AssetTracker::getAssetTracker()->addAssetTrackingTuple(getName(), (*elem)->getAssetName(), string(\"Filter\"));\n\n                           // Get a reading DataPoints\n                           const vector<Datapoint *>& dataPoints = (*elem)->getReadingData();\n\n                           // Iterate over the datapoints\n                           for (vector<Datapoint *>::const_iterator it = dataPoints.begin(); it != dataPoints.end(); ++it)\n                           {\n                                   // Get the reference to a DataPointValue\n                                   DatapointValue& value = (*it)->getData();\n\n                                   /*\n                                    * Deal with the T_INTEGER and T_FLOAT types.\n                                    * Try to preserve the type if possible but\n                                    * if a floating point log function is applied\n                                    * then T_INTEGER values will turn into T_FLOAT.\n                                    * If the value is zero we do not apply the log function\n                                    
*/\n                                   if (value.getType() == DatapointValue::T_INTEGER)\n                                   {\n                                           long ival = value.toInt();\n                                           if (ival != 0)\n                                           {\n                                                   double newValue = log((double)ival);\n                                                   value.setValue(newValue);\n                                           }\n                                   }\n                                   else if (value.getType() == DatapointValue::T_FLOAT)\n                                   {\n                                           double dval = value.toDouble();\n                                           if (dval != 0.0)\n                                           {\n                                                   value.setValue(log(dval));\n                                           }\n                                   }\n                                   else\n                                   {\n                                           // do nothing for other types\n                                   }\n                           }\n                   }\n           }\n\n           // Pass on all readings in this case\n           (*m_func)(m_data, readingSet);\n   }\n\n   /**\n    * Reconfiguration entry point to the filter.\n    *\n    * This method runs holding the configMutex to prevent\n    * ingest using the regex class that may be destroyed by this\n    * call.\n    *\n    * Pass the configuration to the base FilterPlugin class and\n    * then call the private method to handle the filter specific \n    * configuration.\n    *\n    * @param newConfig\tThe JSON of the new configuration\n    */\n   void\n   LogFilter::reconfigure(const std::string& newConfig)\n   {\n           lock_guard<mutex> guard(m_configMutex);\n           setConfig(newConfig);\t\t// Pass the 
configuration to the base class\n           handleConfig(m_config);\n   }\n\n   /**\n    * Handle the filter specific configuration. In this case\n    * it is just the single item \"match\" that is a regex\n    * expression\n    *\n    * @param config\tThe configuration category\n    */\n   void\n   LogFilter::handleConfig(ConfigCategory& config)\n   {\n           if (config.itemExists(\"match\"))\n           {\n                   m_match = config.getValue(\"match\");\n                   if (m_regex)\n                           delete m_regex;\n                   m_regex = new regex(m_match);\n           }\n   }\n\nPython Filter API\n-----------------\n\nFilters may also be written in Python; the API is very similar to that of a C++ filter and consists of the same set of entry points.\n\nPlugin Information\n~~~~~~~~~~~~~~~~~~\n\nAs with C++ filters, this is the first entry point called; it returns a Python dictionary that describes the filter.\n\n.. code-block:: python\n\n   def plugin_info():\n       \"\"\" Returns information about the plugin\n       Args:\n       Returns:\n           dict: plugin information\n       Raises:\n       \"\"\"\n\nPlugin Initialisation\n~~~~~~~~~~~~~~~~~~~~~\n\nThe *plugin_init* call is used to pass the resolved configuration to the\nplugin and also pass in the handle of the next filter in the pipeline\nand a callback that should be called with the output data of the filter.\n\n.. 
code-block:: python\n\n  def plugin_init(config, ingest_ref, callback):\n      \"\"\" Initialise the plugin\n      Args:\n          config: JSON configuration document for the Filter plugin configuration category\n          ingest_ref: filter ingest reference\n          callback: filter callback\n      Returns:\n          data: JSON object to be used in future calls to the plugin\n      Raises:\n      \"\"\"\n\nPlugin Ingestion\n~~~~~~~~~~~~~~~~\n\nThe *plugin_ingest* method is used to pass data into the plugin; the plugin will then process that data and call the callback that was passed into the *plugin_init* entry point with the *ingest_ref* handle and the data to send along the filter pipeline.\n\n.. code-block:: python\n\n   def plugin_ingest(handle, data):\n       \"\"\" Modify readings data and pass it onward\n\n       Args:\n           handle: handle returned by the plugin initialisation call\n           data: readings data\n       \"\"\"\n\nThe *data* is arranged as an array of Python dictionaries, each of which is a *Reading*. Typically the data can be processed by traversing the array:\n\n.. code-block:: python\n\n   for elem in data:\n       process(elem)\n\nPlugin Reconfigure\n~~~~~~~~~~~~~~~~~~\n\nThe *plugin_reconfigure* entry point is called whenever a configuration change occurs for the filter's configuration category.\n\n.. code-block:: python\n\n   def plugin_reconfigure(handle, new_config):\n       \"\"\" Reconfigures the plugin\n\n       Args:\n           handle: handle returned by the plugin initialisation call\n           new_config: JSON object representing the new configuration category for the category\n       Returns:\n           new_handle: new handle to be used in the future calls\n       \"\"\"\n\nPlugin Shutdown\n~~~~~~~~~~~~~~~\n\nCalled when the plugin is to be shut down to allow it to perform any cleanup operations.\n\n.. 
code-block:: python\n\n   def plugin_shutdown(handle):\n       \"\"\" Shutdowns the plugin doing required cleanup.\n\n       Args:\n           handle: handle returned by the plugin initialisation call\n       Returns:\n           plugin shutdown\n       \"\"\"\n\nPython Filter Example\n---------------------\n\nThe following is an example of a Python filter that calculates an exponential moving average.\n\n.. code-block:: python\n\n\n   # -*- coding: utf-8 -*-\n\n   # Fledge_BEGIN\n   # See: http://fledge-iot.readthedocs.io/\n   # Fledge_END\n\n   \"\"\" Module for EMA filter plugin\n\n   Generate Exponential Moving Average\n   The rate value (x) allows to include x% of current value\n   and (100-x)% of history\n   A datapoint called 'ema' is added to each reading being filtered\n   \"\"\"\n\n   import time\n   import copy\n   import logging\n\n   from fledge.common import logger\n   import filter_ingest\n\n   __author__ = \"Massimiliano Pinto\"\n   __copyright__ = \"Copyright (c) 2022 Dianomic Systems Inc.\"\n   __license__ = \"Apache 2.0\"\n   __version__ = \"${VERSION}\"\n\n   _LOGGER = logger.setup(__name__, level = logging.INFO)\n\n   PLUGIN_NAME = 'ema'\n\n   _DEFAULT_CONFIG = {\n       'plugin': {\n           'description': 'Exponential Moving Average filter plugin',\n           'type': 'string',\n           'default': PLUGIN_NAME,\n           'readonly': 'true'\n       },\n       'enable': {\n           'description': 'Enable ema plugin',\n           'type': 'boolean',\n           'default': 'false',\n           'displayName': 'Enabled',\n           'order': \"3\"\n       },\n       'rate': {\n           'description': 'Rate value: include % of current value',\n           'type': 'float',\n           'default': '0.07',\n           'displayName': 'Rate',\n           'order': \"2\"\n       },\n       'datapoint': {\n           'description': 'Datapoint name for calculated ema value',\n           'type': 'string',\n           'default': PLUGIN_NAME,\n           
'displayName': 'EMA datapoint',\n           'order': \"1\"\n       }\n   }\n\n\n   def compute_ema(handle, reading):\n       \"\"\" Compute EMA\n\n       Args:\n           A reading data\n       \"\"\"\n       rate = float(handle['rate']['value'])\n       for attribute in list(reading):\n           if not handle['latest']:\n               handle['latest'] = reading[attribute]\n           handle['latest'] = reading[attribute] * rate + handle['latest'] * (1 - rate)\n           reading[handle['datapoint']['value']] = handle['latest']\n\n\n   def plugin_info():\n       \"\"\" Returns information about the plugin\n       Args:\n       Returns:\n           dict: plugin information\n       Raises:\n       \"\"\"\n       return {\n           'name': PLUGIN_NAME,\n           'version': '1.9.2',\n           'mode': 'none',\n           'type': 'filter',\n           'interface': '1.0',\n           'config': _DEFAULT_CONFIG\n       }\n\n\n   def plugin_init(config, ingest_ref, callback):\n       \"\"\" Initialise the plugin\n       Args:\n           config: JSON configuration document for the Filter plugin configuration category\n           ingest_ref: filter ingest reference\n           callback: filter callback\n       Returns:\n           data: JSON object to be used in future calls to the plugin\n       Raises:\n       \"\"\"\n       _config = copy.deepcopy(config)\n       _config['ingestRef'] = ingest_ref\n       _config['callback'] = callback\n       _config['latest'] = None\n       _config['shutdownInProgress'] = False\n       return _config\n\n\n   def plugin_reconfigure(handle, new_config):\n       \"\"\" Reconfigures the plugin\n\n       Args:\n           handle: handle returned by the plugin initialisation call\n           new_config: JSON object representing the new configuration category for the category\n       Returns:\n           new_handle: new handle to be used in the future calls\n       \"\"\"\n       _LOGGER.info(\"Old config for ema plugin {} \\n new config 
{}\".format(handle, new_config))\n\n       new_handle = copy.deepcopy(new_config)\n       new_handle['shutdownInProgress'] = False\n       new_handle['latest'] = None\n       new_handle['ingestRef'] = handle['ingestRef']\n       new_handle['callback'] = handle['callback']\n       return new_handle\n\n\n   def plugin_shutdown(handle):\n       \"\"\" Shutdowns the plugin doing required cleanup.\n\n       Args:\n           handle: handle returned by the plugin initialisation call\n       Returns:\n           plugin shutdown\n       \"\"\"\n       handle['shutdownInProgress'] = True\n       time.sleep(1)\n       handle['callback'] = None\n       handle['ingestRef'] = None\n       handle['latest'] = None\n\n       _LOGGER.info('{} filter plugin shutdown.'.format(PLUGIN_NAME))\n\n\n   def plugin_ingest(handle, data):\n       \"\"\" Modify readings data and pass it onward\n\n       Args:\n           handle: handle returned by the plugin initialisation call\n           data: readings data\n       \"\"\"\n       if handle['shutdownInProgress']:\n           return\n\n       if handle['enable']['value'] == 'false':\n           # Filter not enabled, just pass data onwards\n           filter_ingest.filter_ingest_callback(handle['callback'], handle['ingestRef'], data)\n           return\n\n       # Filter is enabled: compute EMA for each reading\n       for elem in data:\n           compute_ema(handle, elem['readings'])\n\n       # Pass data onwards\n       filter_ingest.filter_ingest_callback(handle['callback'], handle['ingestRef'], data)\n\n       _LOGGER.debug(\"{} filter_ingest done.\".format(PLUGIN_NAME))\n\n"
  },
  {
    "path": "docs/plugin_developers_guide/07_rules_plugins.rst",
    "content": ".. Rules Plugins\n\n.. |br| raw:: html\n\n   <br/>\n\n.. Links\n.. |average| raw:: html\n\n   <a href=\"../plugins/fledge-rule-Average/index.html\">Moving Average Rule plugin</a>\n\n.. |code| raw:: html\n\n   <a href=\"https://github.com/fledge-iot/fledge-rule-average.git\">Moving Average Rule source code</a>\n\n\nNotification Rule Plugins\n=========================\n\nNotification rule plugins are used by the notification system to\nevaluate a rule and trigger a notification based on the result of\nthat evaluation. They are where the decisions are made that result\nin event notifications to other systems or devices.\n\nNotification rule plugins may be written in C or C++ and have a very\nsimple interface. The plugin mechanism and a subset of the API are common\nbetween all types of plugins including notification rules. This documentation\nis based on the |code|. The |average| calculates a moving average of the values\nsent to the notification instance and will trigger the notification to be sent\nwhen the value of a datapoint varies from the calculated average by more than\na specified percentage.\n\nConfiguration\n-------------\n\nNotification rule plugins use the same configuration mechanism as the rest of\nFledge, using a JSON document to describe the configuration parameters. In\ncommon with all other plugins the structure is defined by the plugin and retrieved\nvia the *plugin_info* entry point. 
This is then merged with the database\ncontent to pass the configured values to the *plugin_init* entry point.\n\nNotification Rule Plugin API\n----------------------------\n\nThe notification rule plugin API consists of a small number of C\nfunction entry points; these are called in a strict order and are based on\nthe same set of common API entry points for all Fledge plugins.\n\nPlugin Information\n~~~~~~~~~~~~~~~~~~\n\nThe *plugin_info* entry point is the first entry point that is called\nin a notification rule plugin and returns the plugin information\nstructure. This is the exact same call that every Fledge plugin\nmust support and is used to determine the type of the plugin and the\nconfiguration category defaults for the plugin.\n\nA typical implementation of *plugin_info* would merely return a pointer\nto a static PLUGIN_INFORMATION structure.\n\n.. code-block:: C\n\n\n   PLUGIN_INFORMATION *plugin_info()\n   {\n        return &info;\n   }\n\nPlugin Initialise\n~~~~~~~~~~~~~~~~~\n\nThe second call that is made to the plugin is the *plugin_init* call, which is used to retrieve a handle on the plugin instance and to configure the plugin.\n\n.. code-block:: C\n\n   PLUGIN_HANDLE plugin_init(ConfigCategory* config)\n   {\n           AverageRule *average = new AverageRule();\n           average->configure(*config);\n           return (PLUGIN_HANDLE)average;\n   }\n\n\nThe *config* parameter is the configuration category with the user supplied\nvalues inserted; these values are used to configure the behavior of the\nplugin. In the case of our moving average example we use this to construct\nan instance of our AverageRule class and then call the configure method of that\nnewly constructed instance of the class.\n\n.. code-block:: C\n\n    /**\n     * Average rule constructor\n     *      \n     * Call parent class BuiltinRule constructor\n     */     \n    AverageRule::AverageRule() : BuiltinRule()\n    {       \n    }\n\n.. 
note::\n\n    The constructor for the base class *BuiltinRule* is called as part of the\n    construction of a notification rule. This does common initialisation\n    required for all notification rules.\n\nThe *configure* method for our AverageRule class is shown below.\n\n.. code-block:: C\n\n    /**\n     * Configure the rule plugin\n     *\n     * @param    config     The configuration object to process\n     */\n    void AverageRule::configure(const ConfigCategory& config)\n    {\n            // Remove current triggers\n            // Configuration change is protected by a lock\n            lockConfig();\n            if (hasTriggers())\n            {\n                    removeTriggers();\n            }\n            // Release lock\n            unlockConfig();\n\n            string assetName = config.getValue(\"asset\");\n            if (!assetName.empty())\n            {\n                    addTrigger(assetName, NULL);\n            }\n            m_source = config.getValue(\"source\");\n\n            m_deviation = strtol(config.getValue(\"deviation\").c_str(), NULL, 10);\n            m_direction = config.getValue(\"direction\");\n            string aveType = config.getValue(\"averageType\");\n            if (aveType.compare(\"Simple Moving Average\") == 0)\n            {\n                    m_aveType = SMA;\n            }\n            else\n            {\n                    m_aveType = EMA;\n            }\n            m_factor = strtol(config.getValue(\"factor\").c_str(), NULL, 10);\n            for (auto it = m_averages.begin(); it != m_averages.end(); it++)\n            {\n                    it->second->setAverageType(m_aveType, m_factor);\n            }\n    }\n\nWe return the pointer to our AverageRule class as the handle for the plugin. 
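The AverageRule implementation itself is not part of this guide. As an illustrative aside, the two averaging modes that the *configure* method selects between can be sketched as follows; the function names and the exact EMA formula below are assumptions for illustration only, not the plugin's actual code.

```cpp
#include <cassert>
#include <deque>
#include <numeric>

// Illustrative sketch only: the real AverageRule class is not shown in
// this guide. Here 'factor' plays the role of the configured m_factor.

// Exponential moving average: move the average toward the new value
// by 1/factor of the difference (an assumed form of the formula).
double emaUpdate(double average, double value, long factor)
{
        return average + (value - average) / factor;
}

// Simple moving average over the last 'factor' samples held in a
// sliding window.
double smaUpdate(std::deque<double>& window, double value, long factor)
{
        window.push_back(value);
        if ((long)window.size() > factor)
                window.pop_front();
        return std::accumulate(window.begin(), window.end(), 0.0)
                        / window.size();
}
```

In either mode the configured *m_deviation* percentage would then be compared against the difference between the latest value and the computed average to decide whether the rule triggers. The pointer returned from *plugin_init* is the AverageRule instance that holds this state.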
This\nallows subsequent calls to the plugin to reference the instance created\nby the *plugin_init* call.\n\nPlugin Triggers\n~~~~~~~~~~~~~~~\n\nThis is the API call made by the notification service to determine the data it needs to send to the plugin for the purposes of evaluating the rule. Typically the notification rule configuration will include the data it requires to execute the evaluation of the rule.\n\nThe return from the *plugin_triggers* API call is a string that contains a JSON document. This document includes the type and name of the data to be sent to the evaluation entry point of the plugin. The table below lists the valid trigger types and the data associated with each.\n\n..  list-table::\n    :widths: 15 55 30\n    :header-rows: 1\n\n    * - Key\n      - Description\n      - Example\n    * - asset\n      - Readings for the specified asset. The value of the *asset* key is the name of the asset.\n      - { \"triggers\" : [ { \"asset\" : \"sinusoid\" } ] }\n    * - statistic\n      - The cumulative statistics counter.\n      - { \"triggers\" : [ { \"statistic\" : \"Sine-Ingest\" } ] }\n    * - statisticRate\n      - The delta of the statistics counter for the statistic history period. By default this will be the increase in the statistic for a 15 second time interval.\n      - { \"triggers\" : [ { \"statisticRate\" : \"Sine-Ingest\" } ] }\n    * - audit\n      - The audit log code of the audit log events sent to the evaluate entry point. In this example we use the service failed audit log code.\n      - { \"triggers\" : [ { \"audit\" : \"SRVFL\" } ] }\n    * - interval\n      - The interval between which calls are made to the evaluate entry point. 
The *interval* type takes an additional *evaluate* parameter that determines whether evaluation is called whenever any data arrives or only when the interval expires.\n      - { \"triggers\" : [ { \"interval\" : 500, \"evaluate\" : \"any\" } ] }\n\n\nMultiple trigger sources may be combined; to request that the evaluate entry point be called at a particular interval and for a particular asset, the document below would be returned.\n\n.. code-block:: JSON\n\n   {\n       \"triggers\" : [\n           { \"asset\" : \"status\" },\n           { \"interval\" : 1000, \"evaluate\" : \"any\" }\n           ]\n   }\n\nThe above will cause the *plugin_eval* call to be called if the interval expires or if any readings for the asset *status* arrive. With the alternative below, *plugin_eval* will be called only at the defined interval, every 1000 milliseconds; the data will still contain the buffered readings of the asset *status*.\n\n.. code-block:: JSON\n\n   {\n       \"triggers\" : [\n           { \"asset\" : \"status\" },\n           { \"interval\" : 1000, \"evaluate\" : \"interval\" }\n           ]\n   }\n\nThe code for the Moving Average rule plugin's *plugin_triggers* entry point is shown below.\n\n.. 
code-block:: C\n\n    /**\n     * Return triggers JSON document\n     *\n     * @return\tJSON string\n     */\n    string plugin_triggers(PLUGIN_HANDLE handle)\n    {\n            string ret;\n            AverageRule *rule = (AverageRule *)handle;\n\n            if (!rule)\n            {\n                    ret = \"{\\\"triggers\\\" : []}\";\n                    return ret;\n            }\n\n            // Configuration fetch is protected by a lock\n            rule->lockConfig();\n\n            if (!rule->hasTriggers())\n            {\n                    rule->unlockConfig();\n                    ret = \"{\\\"triggers\\\" : []}\";\n                    return ret;\n            }\n\n            ret = \"{\\\"triggers\\\" : [ \";\n            std::map<std::string, RuleTrigger *> triggers = rule->getTriggers();\n            for (auto it = triggers.begin();\n                      it != triggers.end();\n                      ++it)\n            {\n                    string source = rule->getSource();\n                    if (source.compare(\"Readings\") == 0)\n                            ret += \"{ \\\"asset\\\"  : \\\"\" + (*it).first + \"\\\"\";\n                    else if (source.compare(\"Statistics\") == 0)\n                            ret += \"{ \\\"statistic\\\"  : \\\"\" + (*it).first + \"\\\"\";\n                    else if (source.compare(\"Statistics History\") == 0)\n                            ret += \"{ \\\"statisticRate\\\"  : \\\"\" + (*it).first + \"\\\"\";\n                    else\n                    {\n                            ret += \"{ \";\t// Keep JSON valid\n                            Logger::getLogger()->error(\"Unsupported data source %s, rule will not subscribe to any data\", source.c_str());\n                    }\n                    ret += \" }\";\n                    \n                    if (std::next(it, 1) != triggers.end())\n                    {\n                            ret += \", \";\n                    }\n            
}\n\n            ret += \" ] }\";\n\n            // Release lock\n            rule->unlockConfig();\n\n            return ret;\n    }\n\nPlugin Evaluation\n~~~~~~~~~~~~~~~~~\n\nThe *plugin_eval* API entry point is called with the plugin handle and the data, as a string, which holds the values to be evaluated. The return value of this call is a boolean that is the result of the evaluation. A value of true is returned if the conditions of the rule are met. Otherwise the entry point will return false.\n\nBelow is the code for the Moving Average plugin.\n\n.. code-block:: C\n\n    /**\n     * Evaluate notification data received\n     *\n     * @param    assetValues\tJSON string document\n     *\t\t\t\twith notification data.\n     * @return\t\t\tTrue if the rule was triggered,\n     *\t\t\t\tfalse otherwise.\n     */\n    bool plugin_eval(PLUGIN_HANDLE handle,\n                     const string& assetValues)\n    {\n            Document doc;\n            doc.Parse(assetValues.c_str());\n            if (doc.HasParseError())\n            {\n                    return false;\n            }\n\n            bool eval = false; \n            AverageRule *rule = (AverageRule *)handle;\n            map<std::string, RuleTrigger *>& triggers = rule->getTriggers();\n\n            // Iterate through all configured assets\n            // If we have multiple asset the evaluation result is\n            // TRUE only if all assets checks returned true\n            for (auto t = triggers.begin(); t != triggers.end(); ++t)\n            {\n                    string assetName = t->first;\n                    string assetTimestamp = \"timestamp_\" + assetName;\n                    if (doc.HasMember(assetName.c_str()))\n                    {\n                            // Get all datapoints for assetName\n                            const Value& assetValue = doc[assetName.c_str()];\n\n                            for (Value::ConstMemberIterator itr = assetValue.MemberBegin();\n                
                                itr != assetValue.MemberEnd(); ++itr)\n                            {\n                                    if (itr->value.IsInt64())\n                                    {\n                                            eval |= rule->evaluate(assetName, itr->name.GetString(), (long)itr->value.GetInt64());\n                                    }\n                                    else if (itr->value.IsDouble())\n                                    {\n                                            eval |= rule->evaluate(assetName, itr->name.GetString(), itr->value.GetDouble());\n                                    }\n                            }\n                            // Add evaluation timestamp\n                            if (doc.HasMember(assetTimestamp.c_str()))\n                            {\n                                    const Value& assetTime = doc[assetTimestamp.c_str()];\n                                    double timestamp = assetTime.GetDouble();\n                                    rule->setEvalTimestamp(timestamp);\n                            }\n                    }\n            }\n\n            // Set final state: true if any call to evaluate() returned true\n            rule->setState(eval);\n\n            return eval;\n    }\n\nIn this case the code iterates through the trigger names and calls the *evaluate* method in the *AverageRule* class for each trigger and with each value in the incoming data stream.\n\nVarious calls are made that will set the state of the evaluation, namely *setEvalTimestamp* and *setState*. These states may later be used in the *plugin_reason* API.\n\nPlugin Reason\n~~~~~~~~~~~~~\n\nThe *plugin_reason* API call is made to the rule plugin by the notification service when it needs to send a notification. Depending on the result of the last *plugin_eval* call this may be a notification of the condition either triggering or clearing. 
The code for the Moving Average plugin is shown below.\n\n.. code-block:: C\n\n    /**\n     * Return rule trigger reason: trigger or clear the notification. \n     *\n     * @return\t A JSON string\n     */\n    string plugin_reason(PLUGIN_HANDLE handle)\n    {\n            AverageRule* rule = (AverageRule *)handle;\n            BuiltinRule::TriggerInfo info;\n            rule->getFullState(info);\n\n            string ret = \"{ \\\"reason\\\": \\\"\";\n            ret += info.getState() == BuiltinRule::StateTriggered ? \"triggered\" : \"cleared\";\n            ret += \"\\\"\";\n            ret += \", \\\"asset\\\": \" + info.getAssets();\n            if (rule->getEvalTimestamp())\n            {\n                    ret += string(\", \\\"timestamp\\\": \\\"\") + info.getUTCTimestamp() + string(\"\\\"\");\n            }\n            ret += \" }\";\n\n            return ret;\n    }\n\nThe code first fetches the state information that was set by the previous *plugin_eval* call, using the *getFullState()* entry point of the base *BuiltinRule* class.\n\nThe *plugin_reason* call returns a JSON document, within a string. The reason document states why the notification is being sent, the name of the item that triggered the notification and the timestamp of the data that triggered the notification. Below is an example reason document.\n\n.. code-block:: JSON\n\n   {\n       \"reason\" : \"triggered\",\n       \"asset\" : \"sinusoid\",\n       \"timestamp\" : \"2025/03/16 12:33:04.026\"\n   }\n\n.. note::\n\n   The data that triggered the notification is always passed with a key of *asset*, but it may be an asset in a reading, a statistic name or an audit log code.\n\nPlugin Reconfigure\n~~~~~~~~~~~~~~~~~~\n\nAs with other plugin types the notification rule plugin may be\nreconfigured during its lifetime. When a reconfiguration operation occurs\nthe *plugin_reconfigure* method will be called with the new configuration\nfor the plugin.\n\n.. 
code-block:: C\n\n   void plugin_reconfigure(PLUGIN_HANDLE *handle, const std::string& newConfig)\n   {\n        AverageRule *average = (AverageRule *)handle;\n        ConfigCategory category(\"average\", newConfig);\n        average->configure(category);\n        return;\n   }\n\nIn the case of the Moving Average plugin this builds a configuration category from the new configuration document and calls the same *configure* method that was called by the *plugin_init* entry point during initialisation and is shown above.\n\nPlugin Shutdown\n~~~~~~~~~~~~~~~\n\nIn common with all Fledge plugins a shutdown call exists which is used by\nthe plugin to perform any cleanup that is required when the plugin is\nshut down.\n\n.. code-block:: C\n\n   void plugin_shutdown(PLUGIN_HANDLE *handle)\n   {\n        AverageRule *average = (AverageRule *)handle;\n        delete average;\n   }\n\nIn the case of our Moving Average example we merely destroy the instance of the\nAverageRule class and allow the destructor of that class to do any cleanup that\nis required.\n
  },
  {
    "path": "docs/plugin_developers_guide/08_notify_plugins.rst",
    "content": ".. Notification Delivery Plugins\n\n.. |br| raw:: html\n\n   <br/>\n\n.. Links\n.. |mqtt| raw:: html\n\n   <a href=\"../plugins/fledge-notify-mqtt/index.html\">MQTT delivery plugin</a>\n\n.. |code| raw:: html\n\n   <a href=\"https://github.com/fledge-iot/fledge-notify-mqtt.git\">MQTT notification delivery source code</a>\n\nNotification Delivery Plugins\n=============================\n\nNotification delivery plugins are used by the notification system to\nsend a notification to some other system or device. They are the transport\nthat allows the event to be notified to that other system or device.\n\n\nNotification delivery plugins may be written in C or C++ and have a very\nsimple interface. The plugin mechanism and a subset of the API are common\nbetween all types of plugins including notification delivery plugins. This documentation is based\non the |code|. The |mqtt| sends MQTT messages to a configurable MQTT topic\nwhen a notification is triggered and cleared.\n\nConfiguration\n-------------\n\nNotification delivery plugins use the same configuration mechanism as the rest of\nFledge, using a JSON document to describe the configuration parameters. As\nwith any other plugin the structure is defined by the plugin and retrieved\nby the *plugin_info* entry point. This is then merged with the database\ncontent to pass the configured values to the *plugin_init* entry point.\n\nNotification Delivery Plugin API\n---------------------------------\n\nThe notification delivery plugin API consists of a small number of C\nfunction entry points; these are called in a strict order and are based on\nthe same set of common API entry points for all Fledge plugins.\n\nPlugin Information\n~~~~~~~~~~~~~~~~~~\n\nThe *plugin_info* entry point is the first entry point that is called\nin a notification delivery plugin and returns the plugin information\nstructure. 
This is the exact same call that every Fledge plugin\nmust support and is used to determine the type of the plugin and the\nconfiguration category defaults for the plugin.\n\nA typical implementation of *plugin_info* would merely return a pointer\nto a static PLUGIN_INFORMATION structure.\n\n.. code-block:: C\n\n\n   PLUGIN_INFORMATION *plugin_info()\n   {\n        return &info;\n   }\n\nPlugin Initialise\n~~~~~~~~~~~~~~~~~\n\nThe second call that is made to the plugin is the *plugin_init* call, that is used to retrieve a handle on the plugin instance and to configure the plugin.\n\n.. code-block:: C\n\n   PLUGIN_HANDLE plugin_init(ConfigCategory* config)\n   {\n           MQTT *mqtt = new MQTT(config);\n           return (PLUGIN_HANDLE)mqtt;\n   }\n\n\nThe *config* parameter is the configuration category with the user supplied\nvalues inserted, these values are used to configure the behavior of the\nplugin. In the case of our MQTT example we use this to call the constructor\nof our MQTT class.\n\n.. code-block:: C\n\n   /**\n    * Construct a MQTT notification plugin\n    *\n    * @param category\tThe configuration of the plugin\n    */\n   MQTT::MQTT(ConfigCategory *category)\n   {\n           if (category->itemExists(\"broker\"))\n                   m_broker = category->getValue(\"broker\");\n           if (category->itemExists(\"topic\"))\n                   m_topic = category->getValue(\"topic\");\n           if (category->itemExists(\"trigger_payload\"))\n                   m_trigger = category->getValue(\"trigger_payload\");\n           if (category->itemExists(\"clear_payload\"))\n                   m_clear = category->getValue(\"clear_payload\");\n   }\n\nThis constructor merely stores values out of the configuration category\nas private member variables of the MQTT class.\n\nWe return the pointer to our MQTT class as the handle for the plugin. 
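The plugin's default configuration is defined by the plugin itself and is not reproduced in this guide; purely as an illustration, a plausible set of user-supplied values for the four items read by this constructor might look like the fragment below. The broker URI uses the tcp://host:port form accepted by the Paho MQTT client; all names and values here are example assumptions, not the plugin's shipped defaults.\n\n.. code-block:: JSON\n\n   {\n       \"broker\" : { \"value\" : \"tcp://localhost:1883\" },\n       \"topic\" : { \"value\" : \"fledge/alerts\" },\n       \"trigger_payload\" : { \"value\" : \"alert set\" },\n       \"clear_payload\" : { \"value\" : \"alert cleared\" }\n   }\n\nThe handle returned to the notification service is this configured MQTT instance. 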
This\nallows subsequent calls to the plugin to reference the instance created\nby the *plugin_init* call.\n\nPlugin Delivery\n~~~~~~~~~~~~~~~\n\nThis is the API call made whenever the plugin needs to send a triggered or cleared notification state. It may be called multiple times within the lifetime of a plugin.\n\n.. code-block:: C\n\n   bool plugin_deliver(PLUGIN_HANDLE handle,\n                       const std::string& deliveryName,\n                       const std::string& notificationName,\n                       const std::string& triggerReason,\n                       const std::string& message)\n   {\n           MQTT *mqtt = (MQTT *)handle;\n           return mqtt->notify(notificationName, triggerReason, message);\n   }\n\nThe delivery call is passed the handle, which gives us the MQTT class\ninstance in this case, the name of the notification, a trigger reason,\nwhich is a JSON document, and a message. The trigger reason JSON document\ncontains information about why the delivery call was made, including the\ntriggered or cleared status, the timestamp of the reading that caused\nthe notification to trigger and the name of the asset or assets involved\nin the notification rule that triggered this delivery event.\n\n.. code-block:: JSON\n\n   {\n       \"reason\": \"triggered\",\n       \"asset\": [\"sinusoid\"],\n       \"timestamp\": \"2020-11-18 11:52:33.960530+00:00\"\n   }\n\nThe return from the *plugin_deliver* entry point is a boolean that\nindicates if the delivery succeeded or not.\n\nIn the case of our MQTT example we call the notify method of the class;\nthis then interacts with the MQTT broker.\n\n.. 
code-block:: C\n\n   /**\n    * Send a notification via MQTT broker\n    *\n    * @param notificationName \tThe name of this notification\n    * @param triggerReason\t\tWhy the notification is being sent\n    * @param message\t\tThe message to send\n    */\n   bool MQTT::notify(const string& notificationName, const string& triggerReason, const string& message)\n   {\n   string \t\tpayload = m_trigger;\n   MQTTClient\tclient;\n\n           lock_guard<mutex> guard(m_mutex);\n\n           // Parse the JSON that represents the reason data\n           Document doc;\n           doc.Parse(triggerReason.c_str());\n           if (!doc.HasParseError() && doc.HasMember(\"reason\"))\n           {\n                   if (!strcmp(doc[\"reason\"].GetString(), \"cleared\"))\n                           payload = m_clear;\n           }\n\n           // Connect to the MQTT broker\n           MQTTClient_connectOptions conn_opts = MQTTClient_connectOptions_initializer;\n           MQTTClient_message pubmsg = MQTTClient_message_initializer;\n           MQTTClient_deliveryToken token;\n           int rc;\n\n           if ((rc = MQTTClient_create(&client, m_broker.c_str(), CLIENTID,\n                   MQTTCLIENT_PERSISTENCE_NONE, NULL)) != MQTTCLIENT_SUCCESS)\n           {\n                   Logger::getLogger()->error(\"Failed to create client, return code %d\\n\", rc);\n                   return false;\n           }\n\n           conn_opts.keepAliveInterval = 20;\n           conn_opts.cleansession = 1;\n           if ((rc = MQTTClient_connect(client, &conn_opts)) != MQTTCLIENT_SUCCESS)\n           {\n                   Logger::getLogger()->error(\"Failed to connect, return code %d\\n\", rc);\n                   // Release the client before reporting the failure\n                   MQTTClient_destroy(&client);\n                   return false;\n           }\n\n           // Construct the payload\n           pubmsg.payload = (void *)payload.c_str();\n           pubmsg.payloadlen = payload.length();\n           pubmsg.qos = 1;\n           pubmsg.retained = 0;\n\n           // Publish the message\n          
 if ((rc = MQTTClient_publishMessage(client, m_topic.c_str(), &pubmsg, &token)) != MQTTCLIENT_SUCCESS)\n           {\n                   Logger::getLogger()->error(\"Failed to publish message, return code %d\\n\", rc);\n                   // Release the client before reporting the failure\n                   MQTTClient_disconnect(client, 10000);\n                   MQTTClient_destroy(&client);\n                   return false;\n           }\n\n           // Wait for completion and disconnect\n           rc = MQTTClient_waitForCompletion(client, token, TIMEOUT);\n           if ((rc = MQTTClient_disconnect(client, 10000)) != MQTTCLIENT_SUCCESS)\n                   Logger::getLogger()->error(\"Failed to disconnect, return code %d\\n\", rc);\n           MQTTClient_destroy(&client);\n           return true;\n   }\n\nPlugin Reconfigure\n~~~~~~~~~~~~~~~~~~\n\nAs with other plugin types the notification delivery plugin may be\nreconfigured during its lifetime. When a reconfiguration operation occurs\nthe *plugin_reconfigure* method will be called with the new configuration\nfor the plugin.\n\n.. code-block:: C\n\n   void plugin_reconfigure(PLUGIN_HANDLE *handle, const std::string& newConfig)\n   {\n        MQTT *mqtt = (MQTT *)handle;\n        mqtt->reconfigure(newConfig);\n        return;\n   }\n\nIn the case of our MQTT example we call the reconfigure method of our\nMQTT class. In this method the new values are copied into the local\nmember variables of the instance.\n\n.. code-block:: C\n\n   /**\n    * Reconfigure the MQTT delivery plugin\n    *\n    * @param newConfig\tThe new configuration\n    */\n   void MQTT::reconfigure(const string& newConfig)\n   {\n           ConfigCategory category(\"new\", newConfig);\n           lock_guard<mutex> guard(m_mutex);\n           m_broker = category.getValue(\"broker\");\n           m_topic = category.getValue(\"topic\");\n           m_trigger = category.getValue(\"trigger_payload\");\n           m_clear = category.getValue(\"clear_payload\");\n   }\n\nThe mutex is used here to prevent the plugin reconfiguration occurring\nwhen we are delivering a notification. 
The same mutex is held in the\nnotify method of the MQTT class.\n\nPlugin Shutdown\n~~~~~~~~~~~~~~~\n\nAs with other plugins a shutdown call exists which may be used by\nthe plugin to perform any cleanup that is required when the plugin is\nshut down.\n\n.. code-block:: C\n\n   void plugin_shutdown(PLUGIN_HANDLE *handle)\n   {\n        MQTT *mqtt = (MQTT *)handle;\n        delete mqtt;\n   }\n\nIn the case of our MQTT example we merely destroy the instance of the\nMQTT class and allow the destructor of that class to do any cleanup that\nis required. In the case of this example there is no cleanup required.\n"
  },
  {
    "path": "docs/plugin_developers_guide/08_storage.rst",
    "content": "Storage Service And Plugins\n===========================\n\nThe storage component provides a level of abstraction of the database layer used within Fledge. The storage abstraction is explicitly not a SQL layer, and the interface it offers to the clients of the storage layer (the device service, API and send process) is very deliberately not a SQL interface, to facilitate the replacement of the underlying storage with any no-SQL storage mechanism or even a simple file storage mechanism. Different plugins may be used for the structured and unstructured data that is stored by the storage layer.\n\nThe three requirements that have resulted in the plugin architecture and separation of the database access into a microservice within Fledge are:\n\n - A desire to be able to support different storage mechanisms as the deployment and customer requirements dictate. E.g. SQL, no-SQL, in-memory, backing store (disk, SD card etc.) or simple file based mechanisms.\n\n - The ability to separate the storage from the south and north services of Fledge and to allow for distribution of Fledge across multiple physical hardware components.\n\n - To provide flexibility to allow components to be removed from a Fledge deployment, e.g. remove the buffering and have a simple forwarding router implementation of Fledge without storage.\n\nUse of JSON\n-----------\n\nThere are three distinct reasons that JSON is used within the storage layer; these are:\n\n - The REST API uses JSON to encode the payloads within each API entry point. This is the preferred payload type for all REST interfaces in Fledge. The option to use XML has been considered and rejected as the vast majority of REST interfaces now use JSON and not XML. JSON is generally more compact and easier to read than XML.\n\n - The interface between the generic storage layer and the plugin also passes requests and results as JSON. 
This is partly to make it compatible with the REST payloads and partly to give the plugin implementer flexibility and the ability to push functionality down to the plugin layer to be able to exploit storage system specific features for greatest efficiency.\n\n - Some of the structures that are persisted are themselves JSON encoded documents. The assumption is that in this case they will remain as JSON all the way to the storage system itself and be persisted as JSON rather than being translated. These JSON structures are transported within the JSON structure of a request (or response) payload and will be sent as objects within that payload although they are not interpreted as anything other than data to be stored by the storage layer.\n\n\nRequirements\n~~~~~~~~~~~~\n\nThe storage layer represents the interface to persist data for the Fledge appliance; all persisted data will be read or written via this storage layer. This includes:\n\n - Configuration data - this is a set of JSON documents indexed by a key.\n\n - Readings data - the readings coming from the device that have been buffered for a period of time.\n\n - User & credential data - these are usernames, passwords and certificates related to the users of the Fledge API.\n\n - Audit trail data - this is a log of significant events during the lifetime of Fledge.\n\n - Metrics - various modules will hold performance metrics, such as readings in, readings out etc. These will be periodically written by those modules as cumulative totals. These will be collected by the statistics gatherer and interval statistics of the values will be written to the persistent storage.\n\n - Task records - status and history of the tasks that have been scheduled within Fledge.\n\n - Flexible schemas - the storage layer should be written such that the schema, assuming there is a schema based underlying storage mechanism, is not fixed by the storage layer itself, but by the implementation of the storage and the application (Fledge). 
In particular the set of tables and columns in those tables is not preconfigured in the storage layer component (assuming a schema based underlying data store).\n\nImplementation Language\n~~~~~~~~~~~~~~~~~~~~~~~\n\nThe core of the Fledge platform has to date been written using Python; for the storage layer, however, a decision has been taken to implement it in C/C++. There are a number of factors that need to be taken into account as a result of this decision.\n\n - Library choices made for the Python implementation are no longer valid and a choice has to be made for C/C++.\n\n - Common code, such as the microservices management API, cannot be reused and a C/C++ implementation is required.\n\nThe storage service differs from the other services within Fledge as it only supports plugins compiled to shared objects that have the prescribed C interface. The plugin's code itself may be in other languages, but it must compile to a C compatible shared object using the C calling conventions.\n\nLanguage Choice Reasons\n#######################\n\nInitially it was envisaged that the entire Fledge product would be written in Python; after the initial demo implementation, issues started to surface regarding the validity of this choice for implementation of a product such as Fledge. These issues are:\n\n - Scalability - Python is essentially a single threaded language due to the Global Interpreter Lock (GIL), which allows only a single Python statement to execute at any one time.\n\n - Portability - As we started working more with OSIsoft and with ARM it became clear that the option to port Fledge or some of its components to embedded hardware was going to become more of a requirement for us. In particular the ARM mbed platform is one that has been discussed. 
Python is not available on this platform or numerous other embedded platforms.\n\nIf Python was not to be the implementation language in future, then it was decided that the storage layer, as something that had yet to be started, might be best implemented in a different way. Since the design is based on micro-services with REST APIs between them, it is possible to mix and match the implementation of different components amongst different languages.\n\nThe storage layer is a separate micro-service and is not directly linked to any Python code; linkage is only via a REST API. Therefore the storage layer can implement a threading model that best suits it and is not tied to the Python threading model in use in other microservices.\n\nThe choice of C/C++ is based on what is commonly available on all the platforms on which we now envisage Fledge might need to run in the foreseeable future and on the experience available within the team.\n\nLibrary Choice\n##############\n\nOne of the key libraries that will need to be chosen for C/C++ is the JSON library, since there is no native support for this in the language. There are numerous libraries that exist for this purpose, for example rapidjson, Jansson and many more. Some investigation is required to find the most suitable.
The factors to be considered in the choice of library are, in order of importance:\n\n - Functionality - clearly any library chosen must offer the features we need.\n\n - Footprint - Footprint is a major concern for Fledge as we wish to run in constrained devices, with the likelihood that in future the devices we want to run on may become even smaller than we are considering today.\n\n - Thread safety - It is assumed, for reasons of scalability and the nature of a REST interface, that multiple threads will be employed in the implementation, hence thread safety is a major concern when choosing a library.\n\n - Performance - Any library chosen should be reasonably performant at the job it does in order to be considered. We need to avoid choosing libraries that are slow or bloated as part of our drive to run on highly constrained hardware.\n\nThe implementation language of the JSON library is also something to be considered; since JSON objects are passed across the plugin interface, choosing a C++ library would limit both the microservice and the plugins to using C++. It may be preferable to use a C based library and thus have the flexibility to have a C or C++ implementation for either the service itself or for the plugin.\n\nAnother key library choice, in order to support the REST interface, is an HTTP library capable of being used to support the REST interface development and able to support custom header fields and HTTPS. Once again the candidates are numerous, for example libmicrohttpd, Simple-Web-Server and Proxygen.
A choice must be made here also using the same criteria outlined above.\n\nThread safety is likely to be important also, as it is assumed the storage layer will be multi-threaded and almost certainly utilise asynchronous I/O operations.\n\nClasses of Data Stored\n----------------------\n\nThere are two classes of data that Fledge needs to store:\n\n  - Internally generated data\n\n  - Data that emanates from sensors\n\nThe first of these is essentially Fledge's configuration, state and lookup data that it needs to function. The pattern of access to this data is the classic create, retrieve, update and delete operations that are common to most databases. Access is random by nature and usually via some form of indexes and keys.\n\nThe second class of data that is stored, and the one which it is the primary function of Fledge to store, is the data that it receives from sensors. Here the pattern of access is very different;\n\n - New data is always appended to the stored data\n\n - No updates are supported on this data\n\n - Data is predominantly read in sequential blocks (main use case)\n\n - Random access is rare and confined to display and analytics within the user interface or by clients of the public API\n\n - Deletion of data is done based solely on age and entries will not be removed other than in chronological order.\n\nGiven the difference in the nature of the two classes of data and the possibility that this will result in different storage implementations for the two, the interface is split between these two classes of data. This allows:\n\n - Different plugins to be used for each type, perhaps a SQL database for the internal data storage and a specialised time series database or document store for the sensor readings.\n\n - A single plugin to choose to implement only a subset of the plugin API, e.g. the common data access methods, the readings methods, or both.\n\n - Plugins to choose where and how they store the readings to optimize the implementation. E.g.
a SQL database can store the JSON in a single table or a series of tables if preferred.\n\n - The plugins are not forced to store the JSON data in a particular way. For example, a SQL database does not have to use JSON data types in a single column if it does not support them.\n\nThese two classes of data are referred to in this documentation as “common data access” and “readings data”.\n\nCommon Data Access Methods\n--------------------------\n\nMost of these types of data can be accessed by the classic create, update, retrieve and delete methods and consist of data in JSON format with an associated key and timestamp. In this case a simple create with a key and JSON value, an update with the same key and value, a retrieve with an optional key (which returns an array of JSON objects) and a delete with the key is all that is required. Configuration, metrics, task records, audit trail and user data all fall into this category. Readings however do not and have to be treated differently.\n\nReadings Data Access\n--------------------\n\nReadings work differently from other data in the way they are created, retrieved and removed. There is no update functionality required for readings currently; in particular there is no method to update readings data.\n\nThe other difference between readings data and the other data that is managed by the storage layer is related to the volume and use of the data. Readings data is by far the largest volume of data that is managed by Fledge, and has a somewhat different lifecycle and use. The data streams in from external devices, lives within the storage layer for a period of time and is then removed. It may also be retrieved by other processes during the period of time it lives within the buffer.\n\nAnother characteristic of the readings data is the ability to trigger processing based on the arrival of new data.
This could be from a process that blocks, waiting for data to arrive, or as an optimisation when a process wishes to process the new data as it arrives and not retrieve it explicitly from the storage layer. In this latter case the data would still be buffered in the storage layer using the usual rules for storage and purging of that data.\n\nReading Creation\n~~~~~~~~~~~~~~~~\n\nReadings come from the device component of Fledge and are a time series stream of JSON documents. They should be appended to the storage device with unique keys and a timestamp. The appending of readings can be considered as a queuing mechanism into the storage layer.\n\nManaging Blocked Retrievals\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nVarious components, most notably the sending process and north service, read blocks of readings from the storage layer. These components may request a notification when new readings are available; for example the sending process may request a new block of data when there are no more blocks available. This will be registered with the storage layer and the storage layer will notify the sending process that new data is available and that a subsequent call will return a new block of data.\n\nThis is an advanced feature that may be omitted from the first version. It is intended to allow a process that is fetching and processing readings data to have an efficient way to know that new data is available to be processed. One scenario would be a sending process that has sent all of the readings that are available; it wishes to be informed when new readings are available to it for sending.
Rather than poll the storage layer requesting new readings, it may request the storage layer to call it when a number of readings are available beyond the id that the process last fetched.\n\nBypassing Database Storage\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOne potential future optimisation, which the storage layer should be architected to allow, is for a publish/subscribe mechanism to direct the data that flows into the storage layer both to the storage plugin itself and to other services, such as the sending process.\n\nReading Retrieval\n~~~~~~~~~~~~~~~~~\n\nReadings may be retrieved via one of two mechanisms:\n\n - By the sending process, which will request readings within a time window\n\n - From the API layer for analysis within the edge device or by an external entity that is retrieving the data via the Fledge user REST API.\n\nThe sending process and north service may require large volumes of data to be sent; in order to reduce the memory footprint required and to improve reliability, the sending module will require the readings in controllable “chunks”. It will therefore request readings between two timestamps in blocks of x readings and then request each block sequentially. It is the responsibility of the sending process to ensure that it requests blocks of a reasonable size. Since the REST interface is by definition stateless, the storage layer does not need to maintain any information about previous fetches of data.\n\nThe API access to data will be similar, except it will have a limitation on the number of readings; it will request ordered readings between timestamps and ask for readings between the n-th and m-th reading. E.g. return readings between 21:00 on 10th June 2017 and 21:00 on the 11th June, limited to the 100th through 150th readings in that time.
The API layer will enforce a maximum number of readings that can be returned in order to make sure result sets are small.\n\nReading Removal\n~~~~~~~~~~~~~~~\n\nReading removal is done via the purge process; this process will request that readings before a given time be removed from the storage device, based on the timestamp of each reading. Introducing the storage layer and removing the pure SQL interface will alter the nature of the purge process and essentially move the logic of the purge process into the storage layer.\n\nStorage Plugin\n--------------\n\nOne of the requirements that drives the desire to have a storage layer is to isolate the other services and users of the storage layer from the technology that provides that storage. The upper level of the storage service offers a consistent API to the clients of the storage service and provides the common infrastructure to communicate with the other services within Fledge, whilst the lower layer provides the interface to the storage technology that will actually store the data. Since we have a desire to be able to switch between different storage layers, this lower layer will use a plugin mechanism that will allow a common storage service to dynamically load one or more storage plugins.\n\nThe ability to use multiple plugins within a single storage layer would allow a different plugin to be used for each class of data, see Classes of Data Stored. This would give the flexibility to store Fledge's internal data in a generic database whilst storing the readings data in something that was tailored specifically to time series or JSON data. There is no requirement to have multiple plugins in any specific deployment, however if the option is to be made available the code that is initially developed should be aware of this future requirement and be implemented appropriately. It is envisaged that the first version will have a single plugin for both classes of data.
The incremental effort for supporting more than one plugin is virtually zero, hence the inclusion here. \n\nEntry Points\n~~~~~~~~~~~~\n\nThe storage plugin exposes a number of entry points in a similar way to the Python plugins used for the translator interface and the device interface. In the C/C++ environment the mechanism is slightly different from that of Python. A plugin is a shared library that is included with the installation or may be installed later into a known location. The library is loaded using the dlopen() C library function and each entry point is retrieved using the dlsym() call.\n\nThe plugin interface is modeled as a set of C functions rather than as a C++ class in order to give the plugin writer the flexibility to implement the plugin in C or C++ as desired.\n\n.. list-table::\n        :widths: 30 70\n        :header-rows: 1\n\n        * - Entry Point\n          - Summary\n        * - plugin_info\n          - Return information about the plugin.\n        * - plugin_init\n          - Initialise the plugin.\n        * - plugin_common_insert\n          - Insert a row into a data set (table).\n        * - plugin_common_retrieve\n          - Retrieve a result set from a table.\n        * - plugin_common_update\n          - Update data in a data set.\n        * - plugin_common_delete\n          - Delete data from a data set.\n        * - plugin_reading_append\n          - Append one or more readings to the readings table.\n        * - plugin_reading_fetch\n          - Retrieve a block of readings from the readings table.\n        * - plugin_reading_retrieve\n          - Generic retrieve to retrieve data from the readings table based on query parameters.\n        * - plugin_reading_purge\n          - Purge readings from the readings table.\n        * - plugin_release\n          - Release a result set previously returned by the plugin back to the plugin, so that it may be freed.\n        * - plugin_last_error\n          - Return information on the last error
that occurred within the plugin.\n        * - plugin_shutdown\n          - Called prior to the storage service being shut down.\n\n\nPlugin Error Handling\n~~~~~~~~~~~~~~~~~~~~~\n\nErrors that occur within the plugin must be propagated to the generic storage layer with sufficient information to allow the generic layer to report those errors and take appropriate remedial action. The interface to the plugin has been deliberately chosen not to use C++ classes or interfaces so that plugin implementers are not forced to implement plugins in C++. Therefore the error propagation mechanism cannot be C++ exceptions and a much simpler, language agnostic approach must be taken. To that end errors will be indicated by the return status of each call into the interface and a specific plugin entry point will be used to retrieve more details on errors that occur.\n\nPlugin API Header File\n~~~~~~~~~~~~~~~~~~~~~~\n\n.. code-block:: C\n\n  #ifndef _PLUGIN_API\n  #define _PLUGIN_API\n\n  typedef int boolean;    /* C has no native boolean type */\n\n  typedef struct {\n          char         *name;\n          char         *version;\n          unsigned int options;\n          char         *type;\n          char         *interface;\n          char         *config;\n  } PLUGIN_INFORMATION;\n\n  typedef struct {\n          char         *message;\n          char         *entryPoint;\n          boolean      retryable;\n  } PLUGIN_ERROR;\n\n  typedef void * PLUGIN_HANDLE;\n\n  /**\n   * Plugin options bitmask values\n   */\n  #define SP_COMMON       0x0001\n  #define SP_READINGS     0x0002\n\n  /**\n   * Plugin types\n   */\n  #define PLUGIN_TYPE_STORAGE     \"storage\"\n\n  /**\n   * Readings purge flags\n   */\n  #define PLUGIN_PURGE_UNSENT     0x0001\n\n  extern PLUGIN_INFORMATION *plugin_info();\n  extern PLUGIN_HANDLE plugin_init();\n  extern boolean plugin_common_insert(PLUGIN_HANDLE handle, char *table, JSON *data);\n  extern JSON *plugin_common_retrieve(PLUGIN_HANDLE handle, char *table, JSON *query);\n  extern boolean
plugin_common_update(PLUGIN_HANDLE handle, char *table, JSON *data);\n  extern boolean plugin_common_delete(PLUGIN_HANDLE handle, char *table, JSON *condition);\n  extern boolean plugin_reading_append(PLUGIN_HANDLE handle, JSON *reading);\n  extern JSON *plugin_reading_fetch(PLUGIN_HANDLE handle, unsigned long id, unsigned int blksize);\n  extern JSON *plugin_reading_retrieve(PLUGIN_HANDLE handle, JSON *condition);\n  extern unsigned int plugin_reading_purge(PLUGIN_HANDLE handle, unsigned long age, unsigned int flags, unsigned long sent);\n  extern boolean plugin_release(PLUGIN_HANDLE handle, JSON *results);\n  extern PLUGIN_ERROR *plugin_last_error(PLUGIN_HANDLE handle);\n  extern boolean plugin_shutdown(PLUGIN_HANDLE handle);\n\n  #endif\n\n\nPlugin Support\n~~~~~~~~~~~~~~\n\nA storage plugin may support either or both of the two data access methods: the common data access methods and the readings access methods. The storage service can use this mechanism to have one plugin for the common data access methods, and hence a storage system for the general tables and configuration information. It may then load a second plugin in order to support the storage and retrieval of readings.\n\nPlugin Information\n~~~~~~~~~~~~~~~~~~\n\nThe plugin information entry point, plugin_info(), allows the storage service to retrieve information from the plugin. This information comes back as a C structure (PLUGIN_INFORMATION). The PLUGIN_INFORMATION will include a number of fields with information that will be used by the storage service.\n\n..
list-table::\n        :header-rows: 1\n        :widths: 20 60 20\n\n        * - Property\n          - Description\n          - Example\n        * - name\n          - A printable name that can be used to identify the plugin.\n          - Postgres Plugin\n        * - version\n          - A version number of the plugin, again used for diagnostics and status reporting.\n          - 1.0.2\n        * - options\n          - A bitmask of options that describes the level of support offered by this plugin.\n            Currently two options are available: SP_COMMON and SP_READINGS. These bits represent support for the set of common data access methods and the readings access methods respectively. See Plugin Support for details.\n          - SP_COMMON|SP_READINGS\n        * - type\n          - The type of the plugin; this is used to distinguish a storage API plugin from any other type of plugin in Fledge. This should always be the string “storage”.\n          - storage\n        * - interface\n          - The interface version that the plugin implements. Currently the version is 1.0.\n          - 1.0\n\n\nThis is the first call that will be made to the plugin after it has been loaded; it is designed to give the loader enough information to know how to interact with the plugin and to allow it to confirm the plugin is of the correct type.\n\nPlugin Initialisation\n~~~~~~~~~~~~~~~~~~~~~\n\n.. code-block:: C\n\n  extern PLUGIN_HANDLE plugin_init();\n\nCalled after the plugin has been loaded and the plugin information has been successfully retrieved. This will only be called once and should perform any initialisation necessary for the storage plugin. \n\nThe plugin initialisation call returns a handle, of type void \\*, which will be used in future calls to the plugin. This may be used to hold instance or state information that would be needed for any future calls.
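As an illustration only, the following is a minimal sketch of how a plugin might hold its instance state behind the opaque handle; the PluginInstance structure, its fields and the connection string are hypothetical and are not part of the defined interface.

```c
#include <stdlib.h>
#include <string.h>

typedef void *PLUGIN_HANDLE;

/* Hypothetical per-instance state; a real plugin would hold
 * connections, prepared statements, configuration, etc. */
typedef struct {
        char    connection[64];
        int     connected;
} PluginInstance;

PLUGIN_HANDLE plugin_init()
{
        PluginInstance *instance = calloc(1, sizeof(PluginInstance));
        if (instance == NULL)
        {
                return NULL;    /* Initialisation failed */
        }
        /* Hypothetical connection setup */
        strncpy(instance->connection, "dbname=fledge", sizeof(instance->connection) - 1);
        instance->connected = 1;
        return (PLUGIN_HANDLE)instance;
}

void example_shutdown(PLUGIN_HANDLE handle)
{
        /* Later entry points cast the handle back to recover the state */
        PluginInstance *instance = (PluginInstance *)handle;
        instance->connected = 0;
        free(instance);
}
```

Keeping the state behind the handle, rather than in file-scope globals, allows more than one instance of a plugin to coexist within the storage service.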
The handle should be used in preference to global variables within the plugin.\n\nIf the initialisation fails the routine should return a NULL handle; since the plugin interface is plain C, it cannot signal failure by raising an exception. After such a failure the plugin will not be used further.\n\nPlugin Common Insert\n~~~~~~~~~~~~~~~~~~~~\n\n.. code-block:: C\n\n  extern boolean plugin_common_insert(PLUGIN_HANDLE handle, char *table, JSON *data);\n\nInsert the data that is represented by the JSON structure passed into the call into the specified table.\n\nThe handle is the value returned by the call to plugin_init().\n\nThe table is the name of the table, or data set, into which the data is to be inserted.\n\nThe data is a JSON document with a number of property name/value pairs. For example, if the plugin is storing the data in a SQL database, the names are the column names in an equivalent SQL database and the values are the values to write to those columns. Plugins for non-SQL stores, such as document databases, may choose to store the data as it is represented in the JSON document or in a very different structure. Note that the values may be of different JSON types and may be JSON objects themselves. The plugin should do whatever conversion is needed for the particular storage layer based on the JSON type.\n\nThe return value of this call is a boolean that represents the success or failure of the insert.\n\nPlugin Common Retrieve\n~~~~~~~~~~~~~~~~~~~~~~\n\n.. code-block:: C\n\n  extern JSON *plugin_common_retrieve(PLUGIN_HANDLE handle, char *table, JSON *query);\n\nRetrieve a data set from a named table.\n\nThe handle is the value returned by the call to plugin_init().\n\nThe table is the name of the table, or data set, from which the data is to be retrieved.\n\nThe query is a JSON document that encodes the predicates for the query, the where condition in the case of a SQL layer.
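

As an illustration, using hypothetical column names, a query payload selecting the rows in which c1 has the value 15 might be encoded as

.. code-block:: JSON

  {
    "where" : {
                "column"    : "c1",
                "condition" : "=",
                "value"     : 15
              }
  }
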
See Encoding Query Predicates in JSON for details of how this JSON is encoded.\n\nThe return value is the result set of the query encoded as a JSON structure. This encoding takes the form of an array of JSON objects, one per row in the result set. Each object represents a row encoded as name/value pair properties. In addition a count property is included that returns the number of rows in the result set.\n\nA query that returns two rows with columns named “c1”, “c2” and “c3” would be represented as\n\n.. code-block:: JSON\n\n  {\n    \"count\" : 2,\n    \"rows\"  : [ \n                {  \n                   \"c1\" : 1,\n                   \"c2\" : 5,\n                   \"c3\" : 9\n                },\n                {  \n                   \"c1\" : 8,\n                   \"c2\" : 2,\n                   \"c3\" : 15\n                }\n              ]\n  }\n\nThe pointer returned to the caller must be released when the caller has finished with the result set. This is done by calling plugin_release() with the plugin handle and the pointer returned from this call.\n\nPlugin Common Update\n~~~~~~~~~~~~~~~~~~~~\n\n.. code-block:: C\n\n  extern boolean plugin_common_update(PLUGIN_HANDLE handle, char *table, JSON *data);\n\n\nUpdate the contents of a set of rows in the given table.\n\nThe handle is the value returned by the call to plugin_init().\n\nThe table is the name of the table, or data set, in which the data is to be updated.\n\nThe data item is a JSON document that encodes both the values to set in the table and the condition used to select the data. The object contains two properties: a condition, the value of which is a JSON encoded where clause as defined in Encoding Query Predicates in JSON, and a values object. The values object is a set of name/value pairs where the names match column names within the data and the values define the value to set for each column.\n\nThe following JSON example \n\n..
code-block:: JSON\n\n  {\n    \"condition\" : { \n                    \"column\"    : \"c1\",\n                    \"condition\" : \"=\",\n                    \"value\"     : 15\n                  },\n    \"values\"    : {\n                    \"c2\" : 20,\n                    \"c3\" : \"Updated\"\n                  }\n  }\n\n\nwould map to a SQL update statement\n\n.. code-block:: SQL\n\n  UPDATE <table> SET c2 = 20, c3 = 'Updated' WHERE c1 = 15;\n\nPlugin Common Delete\n~~~~~~~~~~~~~~~~~~~~\n\n.. code-block:: C\n\n  extern boolean plugin_common_delete(PLUGIN_HANDLE handle, char *table, JSON *condition);\n\n\nRemove a set of rows from the given table.\n\nThe handle is the value returned by the call to plugin_init().\n\nThe table is the name of the table, or data set, from which the data is to be removed.\nThe condition JSON element defines the condition clause which will select the rows of data to be removed. This condition object follows the same JSON encoding scheme defined in the section Encoding Query Predicates in JSON. A condition object containing\n\n.. code-block:: JSON\n\n  {\n      \"column\"    : \"c1\",\n      \"condition\" : \"=\",\n      \"value\"     : 15\n  }\n\nwould delete all rows where the value of c1 is 15.\n\nPlugin Reading Append\n~~~~~~~~~~~~~~~~~~~~~\n\n.. code-block:: C\n\n  extern boolean plugin_reading_append(PLUGIN_HANDLE handle, JSON *reading);\n\nThe handle is the value returned by the call to plugin_init().\n\nThe reading JSON object is an array of one or more reading objects that should be appended to the readings storage device. \n\nThe return status indicates if the readings have been successfully appended to the storage device or not.\n\nPlugin Reading Fetch\n~~~~~~~~~~~~~~~~~~~~\n\n..
code-block:: C\n\n  extern JSON *plugin_reading_fetch(PLUGIN_HANDLE handle, unsigned long id, unsigned int blksize);\n\nFetch a block of readings, starting from a given id, and return them as a JSON object.\n\nThis call will be used by the sending process to retrieve readings that have been buffered and send them to the historian. The process of sending readings will read a set of consecutive readings from the database and send them as a block rather than send all readings in a single transaction with the historian. This allows the sending process to rate limit the send and also to provide improved error recovery in the case of transmission failure.\n\nThe handle is the value returned by the call to plugin_init().\n\nThe id passed in is the id of the first record to return in the block.\n\nThe blksize is the maximum number of records to return in the block. If there are insufficient readings to return a complete block of readings then a smaller number of readings will be returned. If no readings can be returned then a NULL pointer is returned. This call will not block waiting for new readings.\n\nPlugin Reading Retrieve\n~~~~~~~~~~~~~~~~~~~~~~~\n\n.. code-block:: C\n\n  extern JSON *plugin_reading_retrieve(PLUGIN_HANDLE handle, JSON *condition);\n\nReturn a set of readings as a JSON object based on a query to select those readings.\n\nThe handle is the value returned by the call to plugin_init().\n\nThe condition is a JSON encoded query using the same mechanisms as defined in the section Encoding Query Predicates in JSON. In this case it is expected that the JSON condition would include not just selection criteria but also grouping and aggregation options.\n\nPlugin Reading Purge\n~~~~~~~~~~~~~~~~~~~~\n\n..
code-block:: C\n\n  extern unsigned int plugin_reading_purge(PLUGIN_HANDLE handle, unsigned long age, unsigned int flags, unsigned long sent);\n\nRemove readings data based on the age of the data, with an optional limit to prevent purging of data that has not been sent out of the Fledge device for external storage/processing.\n\nThe handle is the value returned by the call to plugin_init().\n\nThe age defines the maximum age of data that is to be retained.\n\nThe flags define whether the sent or unsent status of the data should be considered. If the flags specify that unsent data should not be purged then the value of the sent parameter is used to determine what data has not been sent, and readings with an id greater than the sent id will not be purged.\n\nPlugin Release\n~~~~~~~~~~~~~~\n\n.. code-block:: C\n\n  extern boolean plugin_release(PLUGIN_HANDLE handle, JSON *json);\n\nThis call is used by the storage service to release a result set or other JSON object that has been returned previously from the plugin to the storage service. JSON structures should only be released to the plugin when the storage service has finished with them, as the plugin will most likely free the memory resources associated with the JSON structure.\n\nPlugin Error Retrieval\n~~~~~~~~~~~~~~~~~~~~~~\n\n.. code-block:: C\n\n  extern PLUGIN_ERROR *plugin_last_error(PLUGIN_HANDLE handle);\n\nReturn more details on the last error that occurred within this instance of a plugin. The returned pointer points to a static area of memory that will be overwritten when the next error occurs within the plugin. There is no requirement for the caller to free any memory returned.\n\nPlugin Shutdown\n~~~~~~~~~~~~~~~\n\n.. code-block:: C\n\n  extern boolean plugin_shutdown(PLUGIN_HANDLE handle);\n\nShutdown the plugin. This is called with the plugin handle returned from plugin_init() and is the last operation that will be performed on the plugin.
It is designed to allow the plugin to complete any outstanding operations it may have, close connections to storage layers and generally release resources.\n\nOnce this call has completed, the plugin handle that was previously given out by the plugin should be considered to be invalid and any future calls using that handle should fail.\n\nEncoding Query Predicates in JSON\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOne particular issue with a storage layer API is how to encode query predicates in a JSON structure that is as expressive as SQL predicates without making the JSON document too complex, whilst still maintaining the flexibility to be able to implement storage plugins that are not based on SQL databases. In traditional REST APIs the HTTP GET operation should be used to retrieve data, however the GET operation does not strictly support body content and therefore any modifiers or queries have to be encoded in the URL. Encoding complex query predicates in a URL quickly becomes an issue; therefore this API layer will not take this approach. It will allow simple predicates in the URL, but will use JSON documents and PUT operations to encode more complex predicates in the body of the PUT operation.\n\nThe same JSON encoding will be used in the storage layer to plugin interface for all retrieval operations.\n\nThe predicates will be encoded in a JSON object that contains a where clause; other optional properties may be added to control aggregation, grouping and sorting of the selected data.\n\nThe where object contains a column name, operation and value to match; it may also optionally contain an and property and an or property. The values of the and and or properties, if they exist, are themselves where objects.\n\nAs an example the following JSON object\n\n..
code-block:: JSON\n\n  {\n    \"where\"  : {\n                 \"column\"    : \"c1\",\n                 \"condition\" : \"=\",\n                 \"value\"     : \"mine\",\n                 \"and\"       : {\n                                 \"column\"    : \"c2\",\n                                 \"condition\" : \"<\",\n                                 \"value\"     : 20\n                               }\n               }\n  }\n\nwould result in a SQL where clause of the form\n\n.. code-block:: console\n\n  WHERE c1 = “mine” AND c2 < 20\n\nA more complex example, using both an and and an or condition, would be\n\n.. code-block:: JSON\n\n  {\n    \"where\" : {\n                \"column\"    : \"id\",\n                \"condition\" : \"<\",\n                \"value\"     : \"3\",\n                \"or\"        : {\n                                \"column\"    : \"id\",\n                                \"condition\" : \">\",\n                                \"value\"     : \"7\",\n                                \"and\"       : {\n                                                \"column\"    : \"description\",\n                                                \"condition\" : \"=\",\n                                                \"value\"     : \"A test row\"\n                                              }\n                              }\n              }\n  }\n\nWhich would yield a traditional SQL query of\n\n.. code-block:: console\n\n  WHERE id < 3 OR id > 7 AND description = “A test row”\n\n.. note::\n\n  It is currently not possible to introduce bracketed conditions.\n\nAggregation\n###########\n\nIn some cases adding aggregation to the results of a record selection is also required. Within the JSON this is represented using an optional aggregate object.\n\n.. code-block:: console\n\n  \"aggregate\" : {\n                  \"operation\" : \"<operation>\",\n                  \"column\"    : \"<column name>\"\n                }\n\nValid operations for aggregations are: min, max, avg, sum and count.\n\nAs an example the following JSON object\n\n..
code-block:: JSON\n\n  {\n    \"where\"     : {\n                     \"column\"    : \"room\",\n                     \"condition\" : \"=\",\n                     \"value\"     : \"kitchen\"\n                  },\n    \"aggregate\" : {\n                     \"operation\" : \"avg\",\n                     \"column\"    : \"temperature\"\n                  }\n  }\n\nMultiple aggregates may be applied, in which case the aggregate property becomes an array of objects rather than a single object.\n\n.. code-block:: JSON\n\n  {\n    \"where\"     : {\n                     \"column\"    : \"room\",\n                     \"condition\" : \"=\",\n                     \"value\"     : \"kitchen\"\n                  },\n    \"aggregate\" : [\n                    {\n                       \"operation\" : \"avg\",\n                       \"column\"    : \"temperature\"\n                    },\n                    {\n                       \"operation\" : \"min\",\n                       \"column\"    : \"temperature\"\n                    },\n                    {\n                       \"operation\" : \"max\",\n                       \"column\"    : \"temperature\"\n                    }\n                  ]\n  }\n\nThe result set JSON that is created for aggregates will have properties with names that are a concatenation of the operation and the column name. For example, the query defined above would result in a response similar to that shown below.\n\n.. code-block:: JSON\n\n  {\n     \"count\": 1,\n     \"rows\" : [\n               {\n                  \"avg_temperature\" : 21.8,\n                  \"min_temperature\" : 18.4,\n                  \"max_temperature\" : 22.6\n               }\n              ]\n  }\n\nAlternatively an \"alias\" property may be added to each aggregate to control the naming of the property in the JSON document that is produced.\n\n.. 
code-block:: JSON\n\n  {\n    \"where\"     : {\n                     \"column\"    : \"room\",\n                     \"condition\" : \"=\",\n                     \"value\"     : \"kitchen\"\n                  },\n    \"aggregate\" : [\n                    {\n                       \"operation\" : \"avg\",\n                       \"column\"    : \"temperature\",\n                       \"alias\"     : \"Average\"\n                    },\n                    {\n                       \"operation\" : \"min\",\n                       \"column\"    : \"temperature\",\n                       \"alias\"     : \"Minimum\"\n                    },\n                    {\n                       \"operation\" : \"max\",\n                       \"column\"    : \"temperature\",\n                       \"alias\"     : \"Maximum\"\n                    }\n                  ]\n  }\n\nwould result in the following output\n\n.. code-block:: JSON\n\n  {\n      \"count\": 1,\n      \"rows\" : [\n                 {\n                   \"Average\" : 21.8,\n                   \"Minimum\" : 18.4,\n                   \"Maximum\" : 22.6\n                 }\n               ]\n  }\n\nWhen the column that is being aggregated contains a JSON document rather than a simple value, the column property is replaced with a json property; this object names the database column and the properties within the JSON document that will be used for aggregation.\n\nThe following is an example of a payload that will query the readings data and return aggregations of the JSON property rate from within the column reading. The column reading is a JSON blob within the database.\n\n.. 
code-block:: JSON\n\n  {\n          \"where\"   : {\n                                  \"column\"    : \"asset_code\",\n                                  \"condition\" : \"=\",\n                                  \"value\"     : \"MyAsset\"\n                          },\n          \"aggregate\" : [\n                          {\n                                  \"operation\" : \"min\",\n                                  \"json\"      : {\n                                                      \"column\"     : \"reading\",\n                                                      \"properties\" : \"rate\"\n                                                  },\n                                  \"alias\"     : \"Minimum\"\n                          },\n                          {\n                                  \"operation\" : \"max\",\n                                  \"json\"      : {\n                                                      \"column\"     : \"reading\",\n                                                      \"properties\" : \"rate\"\n                                                  },\n                                  \"alias\"     : \"Maximum\"\n                          },\n                          {\n                                  \"operation\" : \"avg\",\n                                  \"json\"      : {\n                                                      \"column\" : \"reading\",\n                                                      \"properties\" : \"rate\"\n                                                  },\n                                  \"alias\"     : \"Average\"\n                          }\n                        ],\n          \"group\" : \"asset_code\"\n  }\n\nGrouping\n########\n\nGrouping of records can be achieved by adding a group property to the JSON document, the value of the group property is the column name to group on.\n\n.. 
code-block:: console\n\n  \"group\" : \"<column name>\"\n\nSorting\n#######\n\nWhere the output is required to be sorted a sort object may be added to the JSON document. This contains a column to sort on and a direction for the sort, \"asc\" or \"desc\".\n\n.. code-block:: console\n\n  \"sort\"   : {\n       \"column\"    : \"c1\",\n       \"direction\" : \"asc\"\n     }\n\nIt is also possible to apply multiple sort operations, in which case the sort property becomes an ordered array of objects rather than a single object.\n\n.. code-block:: console\n\n  \"sort\"   : [\n      {\n        \"column\"    : \"c1\",\n        \"direction\" : \"asc\"\n      },\n      {\n        \"column\"    : \"c3\",\n        \"direction\" : \"asc\"\n      }\n     ]\n\n.. note::\n\n    The direction property is optional and if omitted will default to ascending order.\n\nLimit\n#####\n\nA limit property can be included to limit the number of rows returned to no more than the value of the limit property.\n\n.. code-block:: console\n\n   \"limit\" : <number>\n\n\nCreating Time Series Data\n#########################\n\nThe timebucket mechanism in the storage layer allows data that includes a timestamp value to be extracted in timestamp order, grouped over fixed periods of time.\n\nThe time bucket directive defines the timestamp column to use, the size of each time bucket in seconds, an optional date format for the timestamps written in the results and an optional alias for the timestamp property that is written.\n\n.. code-block:: console\n\n\t\"timebucket\" :  {\n\t\t\t   \"timestamp\" : \"user_ts\",\n\t\t\t   \"size\"      : \"5\",\n\t\t\t   \"format\"    : \"DD-MM-YYYY HH24:MI:SS\",\n\t\t\t   \"alias\"     : \"bucket\"\n\t\t\t}\n\nIf no size element is present then the default time bucket size is 1 second.\n\nThis produces a grouping of data results, therefore it is expected to be used in conjunction with aggregates to extract data results. 
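As a minimal sketch of the mechanism (the temperature column here is hypothetical), a time bucket may be combined with a single aggregate as follows:\n\n.. code-block:: JSON\n\n  {\n    \"aggregate\"  : {\n                      \"operation\" : \"avg\",\n                      \"column\"    : \"temperature\"\n                   },\n    \"timebucket\" : {\n                      \"timestamp\" : \"user_ts\",\n                      \"size\"      : \"60\"\n                   }\n  }\n\nThis would return the average of the temperature column for each 60 second bucket, in timestamp order. 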
The following example is the complete payload that would be used to extract assets from the readings interface\n\n.. code-block:: JSON\n\n  {\n\t\"where\" : {\n\t\t\t\t\"column\"    : \"asset_code\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\"     : \"MyAsset\"\n\t\t\t},\n\t\"aggregate\" : [\n\t\t\t{\n\t\t\t\t\"operation\" : \"min\",\n\t\t\t\t\"json\"      : {\n\t\t\t\t\t\t    \"column\"     : \"reading\",\n\t\t\t\t\t\t    \"properties\" : \"rate\"\n\t\t\t\t\t        },\n\t\t\t\t\"alias\"     : \"Minimum\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"max\",\n\t\t\t\t\"json\"      : {\n\t\t\t\t\t\t    \"column\"     : \"reading\",\n\t\t\t\t\t\t    \"properties\" : \"rate\"\n\t\t\t\t\t        },\n\t\t\t\t\"alias\"     : \"Maximum\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"avg\",\n\t\t\t\t\"json\"      : {\n\t\t\t\t\t\t    \"column\"     : \"reading\",\n\t\t\t\t\t\t    \"properties\" : \"rate\"\n\t\t\t\t\t        },\n\t\t\t\t\"alias\"      : \"Average\"\n\t\t\t}\n\t\t      ],\n\t\"timebucket\" :  {\n\t\t\t   \"timestamp\" : \"user_ts\",\n\t\t\t   \"size\"      : \"30\",\n\t\t\t   \"format\"    : \"DD-MM-YYYY HH24:MI:SS\",\n\t\t\t   \"alias\"     : \"Time\"\n\t\t\t}\n  }\n\nIn this case the payload would be sent in a PUT request to the URL /storage/reading/query and the returned values would contain the reading data for the asset called MyAsset which has a sensor value rate in the JSON payload it returns. The data would be aggregated in 30 second time buckets and the return values would be in the JSON format shown below.\n\n.. 
code-block:: JSON\n\n  {\n   \"count\":2,\n   \"rows\":[\n            {\n              \"Minimum\"    : 2,\n              \"Maximum\"    : 96,\n              \"Average\"    : 47.9523809523809,\n              \"asset_code\" : \"MyAsset\",\n              \"Time\"       : \"11-10-2017 15:10:50\"\n             },\n             {\n               \"Minimum\"    : 1,\n               \"Maximum\"    : 98,\n               \"Average\"    : 53.7721518987342,\n               \"asset_code\" : \"MyAsset\",\n               \"Time\"       : \"11-10-2017 15:11:20\"\n             }\n           ]\n  }\n\nJoining Tables\n##############\n\nJoins can be created between tables using the join object. The JSON object contains a table name, a column to join on in the table of the query itself and an optional column in the joined table. It also allows a query to be added that may define a where condition to select rows in the joined table and a return object to define which columns should be returned from that table and how to name them.\n\nThe following example joins the table called attributes to the table given in the URL of the request. It uses a column called parent_id in the attributes table to join to the column id in the table given in the request. If the column name in both tables is the same then there is no need to give the column field in the table object, the column name can be given in the on field instead.\n\n.. 
code-block:: JSON\n\n  {\n        \"join\" : {\n                \"table\"  : {\n                                \"name\"   : \"attributes\",\n                                \"column\" : \"parent_id\"\n                },\n                \"on\"     : \"id\",\n                \"query\"  : {\n                                \"where\" : {\n                                        \"column\"    : \"name\",\n                                        \"condition\" : \"=\",\n                                        \"value\"     : \"MyName\"\n                                        },\n                                \"return\" : [\n                                        \"parent_id\",\n                                        {\n                                                \"column\" : \"name\",\n                                                \"alias\"  : \"attribute_name\"\n                                        },\n                                        {\n                                                \"column\" : \"value\",\n                                                \"alias\"  : \"attribute_value\"\n                                        }\n                                        ]\n                        }\n        }\n  }\n\nAssuming no additional where conditions or return constraints on the main table query, this would yield SQL of the form\n\n.. code-block:: SQL\n\n  select t1.*, t2.parent_id, t2.name as \"attribute_name\", t2.value as \"attribute_value\"  from parent t1, attributes t2 where t1.id = t2.parent_id and t2.name = \"MyName\";\n\nJoins may be nested, allowing more than two tables to be joined. Assume again we have a parent table that contains items and an attributes table that contains attributes of those items. We wish to return the items that have an attribute called MyName and a colour. We need to join the attributes table twice to get the results we require. 
The JSON payload would be as follows\n\n.. code-block:: JSON\n\n  {\n        \"join\" : {\n                \"table\"  : {\n                                \"name\"   : \"attributes\",\n                                \"column\" : \"parent_id\"\n                        },\n                \"on\"     : \"id\",\n                \"query\"  : {\n                                \"where\" : {\n                                        \"column\"    : \"name\",\n                                        \"condition\" : \"=\",\n                                        \"value\"     : \"MyName\"\n                                        },\n                                \"return\" : [\n                                        \"parent_id\",\n                                        {\n                                                \"column\" : \"value\",\n                                                \"alias\"  : \"my_name\"\n                                        }\n                                        ],\n                                \"join\" : {\n                                                \"table\" : {\n                                                        \"name\"   : \"attributes\",\n                                                        \"column\" : \"parent_id\"\n                                                },\n                                                \"on\"     : \"id\",\n                                                \"query\"  : {\n                                                         \"where\" : {\n                                                                \"column\"    : \"name\",\n                                                                \"condition\" : \"=\",\n                                                                \"value\"     : \"colour\"\n                                                                },\n                                                          
\"return\" : [\n                                                                 \"parent_id\",\n                                                                {       \n                                                                         \"column\" : \"value\",\n                                                                         \"alias\"  : \"colour\"\n                                                                }       \n                                                           ]\n                                                }\n                                        }\n                        }\n        }\n  }\n\nAnd the resultant SQL query would be\n\n.. code-block:: SQL\n\n  select t1.*, t2.parent_id, t2.value as \"my_name\", t3.value as \"colour\"  from parent t1, attributes t2, attributes t3 where t1.id = t2.parent_id and t2.name = \"MyName\" and t1.id = t3.parent_id and t3.name = \"colour\";\n \nJSON Predicate Schema\n#####################\n\nThe following is the JSON schema definition for the predicate encoding.\n\n.. 
code-block:: JSON\n\n  {\n    \"$schema\": \"http://json-schema.org/draft-04/schema#\",\n    \"definitions\": {},\n    \"id\": \"http://example.com/example.json\",\n    \"properties\": {\n      \"group\": {\n        \"id\": \"/properties/group\",\n        \"type\": \"string\"\n      },\n      \"sort\": {\n        \"id\": \"/properties/sort\",\n        \"properties\": {\n          \"column\": {\n            \"id\": \"/properties/sort/properties/column\",\n            \"type\": \"string\"\n          },\n          \"direction\": {\n            \"id\": \"/properties/sort/properties/direction\",\n            \"type\": \"string\"\n          }\n        },\n        \"type\": \"object\"\n      },\n      \"aggregate\": {\n        \"id\": \"/properties/aggregate\",\n        \"properties\": {\n          \"column\": {\n            \"id\": \"/properties/aggregate/properties/column\",\n            \"type\": \"string\"\n          },\n          \"operation\": {\n            \"id\": \"/properties/aggregate/properties/operation\",\n            \"type\": \"string\"\n          }\n        },\n        \"type\": \"object\"\n      },\n      \"limit\": {\n        \"id\": \"/properties/limit\",\n        \"type\": \"number\"\n      },\n      \"where\": {\n        \"id\": \"/properties/where\",\n        \"properties\": {\n          \"and\": {\n            \"id\": \"/properties/where/properties/and\",\n            \"properties\": {\n              \"column\": {\n                \"id\": \"/properties/where/properties/and/properties/column\",\n                \"type\": \"string\"\n              },\n              \"condition\": {\n                \"id\": \"/properties/where/properties/and/properties/condition\",\n                \"type\": \"string\"\n              },\n              \"value\": {\n                \"id\": \"/properties/where/properties/and/properties/value\",\n                \"type\": \"string\"\n              }\n            },\n            \"type\": \"object\"\n          },\n          \"column\": {\n            \"id\": \"/properties/where/properties/column\",\n            \"type\": \"string\"\n          },\n          \"condition\": {\n            \"id\": \"/properties/where/properties/condition\",\n            \"type\": \"string\"\n          },\n          \"or\": {\n            \"id\": \"/properties/where/properties/or\",\n            \"properties\": {\n              \"column\": {\n                \"id\": \"/properties/where/properties/or/properties/column\",\n                \"type\": \"string\"\n              },\n              \"condition\": {\n                \"id\": \"/properties/where/properties/or/properties/condition\",\n                \"type\": \"string\"\n              },\n              \"value\": {\n                \"id\": \"/properties/where/properties/or/properties/value\",\n                \"type\": \"string\"\n              }\n            },\n            \"type\": \"object\"\n          },\n          \"value\": {\n            \"id\": \"/properties/where/properties/value\",\n            \"type\": \"string\"\n          }\n        },\n        \"type\": \"object\"\n      }\n    },\n    \"type\": \"object\"\n  }\n\nControlling Returned Values\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe common retrieval API and the reading retrieval API can be controlled to return subsets of the data by defining the columns to be returned in an optional \"return\" object in the JSON payload of these entry points.\n\nReturning Limited Set of Columns\n################################\n\nThe optional \"return\" property is a JSON array that contains the names of the columns to return.\n\n.. code-block:: console\n\n        \"return\" : [ \"column1\", \"column2\", \"column3\" ]\n\nThe array items may be simple strings that name the columns to return or they may be JSON objects which give the column name and an alias for that column\n\n.. 
code-block:: console\n\n        \"return\" : [ \"column1\", {\n                                \"column\" : \"column2\",\n                                \"alias\"  : \"SecondColumn\"\n                                 }\n                    ]\n\n\nSimple strings and objects may be mixed within the same array, as in the example above.\n\nFormatting Columns\n##################\n\nWhen a return object is specified it is also possible to format the returned data; this is particularly applicable to dates. Formatting is done by adding a format property to the column object to be returned.\n\n.. code-block:: console\n\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"format\" : \"DD Mon YYYY\",\n\t\t\t  \"alias\" : \"date\"\n\t\t\t}\n\t\t    ]\n\nThe format string may be for dates or numeric values. The content of the string for dates is a template pattern consisting of a combination of the following.\n\n.. list-table::\n        :widths: 20 80\n        :header-rows: 1\n\n        * - Pattern\n          - Description\n        * - HH\n          - Hour of the day in 12 hour clock\n        * - HH24\n          - Hour of the day in 24 hour clock\n        * - MI\n          - Minute value\n        * - SS\n          - Seconds value\n        * - MS\n          - Milliseconds value\n        * - US\n          - Microseconds value\n        * - SSSS\n          - Seconds since midnight\n        * - YYYY\n          - Year as 4 digits\n        * - YY\n          - Year as 2 digits\n        * - Month\n          - Full month name\n        * - Mon\n          - Month name abbreviated to 3 characters\n        * - MM\n          - Month number\n        * - Day\n          - Full day of the week\n        * - Dy\n          - Abbreviated day of the week\n        * - DDD\n          - Day of the year\n        * - DD\n          - Day of the month\n        * - D\n          - Day of the week\n        * - W\n          - Week of the year\n        * - am\n          - am/pm 
meridian\n\n\nReturn JSON Document Content\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe return mechanism may also be used to return the properties within a JSON document stored within the database.\n\n.. code-block:: JSON\n\n  {\n        \"return\" : [ \n                        \"code\", \n                        { \n                                \"column\" : \"ts\",\n                                \"alias\"  : \"timestamp\" \n                        }, \n                        { \n                                \"json\" : { \n                                                \"column\"     : \"log\", \n                                                \"properties\" : \"reason\" \n                                         }, \n                                \"alias\" : \"myJson\"\n                        } \n                   ]    \n  }\n\nIn the example above a database column called log contains a JSON document with the property reason at the base level of the JSON document. The above statement extracts the value of that JSON property and returns it in the result set using the property name myJson.\n\nTo access properties nested more deeply in the JSON document, the properties property in the above example can also be an array of JSON property names, one for each level in the hierarchy. If the column contains a JSON document as below,\n\n.. code-block:: console\n\n  {\n        \"building\" : {\n                        \"floor\" : {\n                                        \"room\" : {\n                                                        \"number\" : 432,\n                                                        ...\n                                                 }\n                                 }\n                     }\n  }\n\nto access the room number a return fragment as shown below would be used.\n\n.. 
code-block:: JSON\n\n  {       \n        \"return\" : [    \n                        {\n                                \"json\" : { \n                                                \"column\" : \"street\", \n                                                \"properties\" : [\n                                                        \"building\",\n                                                        \"floor\",\n                                                        \"room\",\n                                                        \"number\"\n                                                                ]\n                                         }, \n                                \"alias\" : \"RoomNumber\"\n                        }\n                   ]\n  }\n \n"
  },
  {
    "path": "docs/plugin_developers_guide/09_packaging.rst",
"content": ".. Plugin as a Package\n\nPlugin Packaging\n================\n\nThere is a set of files that must exist within the repository of a plugin that are used to create the package for that plugin on the various supported platforms. The following documents what those files are and what they should contain.\n\nCommon files\n------------\n\n- **Description** - It should contain a brief description of the plugin and will be used as the description for the package that is created. The description must be a single line; multi-line descriptions are not currently supported.\n- **Package** - This is the main file, in which a set of variables is defined.\n\n   - **plugin_name** - Name of the plugin.\n   - **plugin_type** - Type of the plugin.\n   - **plugin_install_dirname** - Installed directory name.\n   - **plugin_package_name (Optional)** - Name of the package. If it is not given then the package name will be the same as the plugin name.\n   - **requirements** - Runtime architecture-specific package list; the values are comma separated, without any spaces.\n\n   .. note::\n      For C-based plugins, if a plugin requires additional libraries to be installed with it then set the additional_libs variable inside the Package file. The value must follow this format:\n\n      additional_libs=\"DIRECTORY_NAME:FILE_NAME\" - in the case of a single library\n      additional_libs=\"DIRECTORY_NAME:FILE_NAME1,DIRECTORY_NAME:FILE_NAME2\" - in the case of multiple libraries, use comma-separated directory and file name pairs\n\n- **service_notification.version** - It is only required if the plugin is a notification rule or notification delivery plugin. 
It contains the minimum version of the notification service which the plugin requires.\n\nC based Plugins\n---------------\n\n- **VERSION** - It contains the version number of the plugin and is used by the build process to include the version number within the code and also within the name of the package file created.\n- **fledge.version** - It contains the minimum version number of Fledge required by the plugin.\n- **requirements.sh (Optional)** - It is used to install any additional libraries or other artifacts that are needed to build the plugin. It takes the form of a shell script. This script, if it exists, will be run as a part of the process of building the plugin before the cmake command is issued in the build process.\n- **extras_install.sh (Optional)** - It is a shell script that is added to the package to allow for extra commands to be executed as part of the package installation. Not all plugins will require this file to be present and it can be omitted if there are no extra steps required during installation.\n\nExamples of filename along with content\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n1. VERSION\n\n.. code-block:: console\n\n    $ cat VERSION\n    1.9.2\n\n2. fledge.version\n\n.. code-block:: console\n\n    $ cat fledge.version\n    fledge_version>=1.9\n\n3. requirements.sh\n\n.. code-block:: console\n\n    $ cat requirements.sh\n    #!/usr/bin/env bash\n    which apt >/dev/null 2>&1\n    if [ $? -eq 0 ]; then\n        sudo apt install -y libmodbus-dev\n    else\n        which yum >/dev/null 2>&1\n        if [ $? -eq 0 ]; then\n            sudo yum -y install epel-release libmodbus libmodbus-devel\n        fi\n    fi\n\n4. Description\n\n.. code-block:: console\n\n    $ cat Description\n    Fledge modbus plugin. Supports modbus RTU and modbus TCP.\n\n5. Package\n\n.. 
code-block:: console\n\n    $ cat Package\n    # A set of variables that define how we package this repository\n    #\n    plugin_name=modbus\n    plugin_type=south\n    plugin_install_dirname=ModbusC\n    plugin_package_name=fledge-south-modbus\n    additional_libs=\"usr/local/lib:/usr/local/lib/libsmod.so*\"\n\n    # Now build up the runtime requirements list. This has 3 components\n    #   1. Generic packages we depend on in all architectures and package managers\n    #   2. Architecture specific packages we depend on\n    #   3. Package manager specific packages we depend on\n    requirements=\"fledge\"\n\n    case \"$arch\" in\n        x86_64)\n            ;;\n        armv7l)\n            ;;\n        aarch64)\n            ;;\n    esac\n    case \"$package_manager\" in\n        deb)\n            requirements=\"${requirements},libmodbus-dev\"\n            ;;\n    esac\n\n.. note::\n    If your package is not supported for a specific platform then you must exit with exitcode 1.\n\n6. service_notification.version\n\n.. code-block:: console\n\n    $ cat service_notification.version\n    service_notification_version>=1.9.2\n\nCommon Additional Libraries Package\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nBelow are the packages created as part of the process of building Fledge that are commonly used in plugins.\n\n- **fledge-mqtt** which is a packaged version of the libpaho-mqtt library.\n- **fledge-iec** which is a packaged version of the IEC 60870 and IEC 61850 libraries.\n- **fledge-s2opcua** which is a packaged version of the libexpat and libs2opc libraries.\n\n\nIf your plugin depends on any of these libraries they should be added to the *requirements* variable in the **Package** file rather than added as *additional_libs*, since the versions of these are managed by the Fledge build and packaging process. Below is an example\n\n.. 
code-block:: console\n\n    requirements=\"fledge,fledge-s2opcua\"\n\nPython based Plugins\n--------------------\n\n- **VERSION.{PLUGIN_TYPE}.{PLUGIN_NAME}** - It contains the packaged version of the plugin and also the minimum Fledge version that the plugin requires.\n- **install_notes.txt (Optional)** - It is a simple text file that can be included if there are specific instructions that need to be shown during the installation of the plugin. These notes will be displayed at the end of the installation process for the package.\n- **extras_install.sh (Optional)** - It is a shell script that is added to the package to allow for extra commands to be executed as part of the package installation. Not all plugins will require this file to be present and it can be omitted if there are no extra steps required during installation.\n- **requirements-{PLUGIN_NAME}.txt (Optional)** - It is a simple text file that can be included if there are pip dependencies that must be installed with the plugin. This file must be placed inside the *python* directory.\n\nExamples of filename along with content\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n1. Description\n\n.. code-block:: console\n\n    $ cat Description\n    Fledge South Sinusoid plugin\n\n2. Package\n\n.. code-block:: console\n\n    $ cat Package\n    # A set of variables that define how we package this repository\n    #\n    plugin_name=sinusoid\n    plugin_type=south\n    plugin_install_dirname=sinusoid\n\n    # Now build up the runtime requirements list. This has 3 components\n    #   1. Generic packages we depend on in all architectures and package managers\n    #   2. Architecture specific packages we depend on\n    #   3. 
Package manager specific packages we depend on\n    requirements=\"fledge\"\n\n    case \"$arch\" in\n        x86_64)\n            ;;\n        armv7l)\n            ;;\n        aarch64)\n            ;;\n    esac\n    case \"$package_manager\" in\n        deb)\n            ;;\n    esac\n\n.. note::\n    If your package is not supported for a specific platform then you must exit with exitcode 1.\n\n3. VERSION.{PLUGIN_TYPE}.{PLUGIN_NAME}\n\n.. code-block:: console\n\n    $ cat VERSION.south.sinusoid\n    fledge_south_sinusoid_version=1.9.2\n    fledge_version>=1.9\n\n4. install_notes.txt\n\n.. code-block:: console\n\n    $ cat install_notes.txt\n    It is required to reboot the RPi, please do the following steps:\n    1) sudo reboot\n\n5. extras_install.sh\n\n.. code-block:: console\n\n    #!/usr/bin/env bash\n\n    os_name=$(grep -o '^NAME=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')\n    os_version=$(grep -o '^VERSION_ID=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')\n    echo \"Platform is ${os_name}, Version: ${os_version}\"\n    arch=`arch`\n    ID=$(cat /etc/os-release | grep -w ID | cut -f2 -d\"=\")\n    case $os_name in\n      *\"Ubuntu\"*)\n        if [ ${arch} = \"aarch64\" ]; then\n          python3 -m pip install --upgrade pip\n        fi\n        ;;\n\n    esac\n\n6. requirements-{PLUGIN_NAME}.txt\n\n.. code-block:: console\n\n    $ cat python/requirements-modbustcp.txt\n    pymodbus3==1.0.0\n\n\nBuilding A Package\n------------------\n\nFirst clone the repository `fledge-pkg <https://github.com/fledge-iot/fledge-pkg>`_, then do the following steps\n\n.. code-block:: console\n\n    $ cd plugins\n    $ ./make_deb -b <BRANCH_NAME> <REPOSITORY_NAME>\n\nIf everything goes well with the above command then you can find your package inside the archive directory.\n\n.. code-block:: console\n\n    $ ls archive\n"
  },
  {
    "path": "docs/plugin_developers_guide/10_testing.rst",
    "content": ".. Testing your Plugin\n\n.. |br| raw:: html\n\n   <br/>\n\n.. Links\n.. |expression filter| raw:: html\n\n   <a href=\"../plugins/fledge-filter-expression/index.html\">expression filter</a>\n\n.. |Python 3.5 filter| raw:: html\n\n   <a href=\"../plugins/fledge-filter-python35/index.html\">Python 3.5 filter</a>\n\nTesting Your Plugin\n===================\n\nThe first step in testing your new plugin is to put the plugin in the\nlocation from which your Fledge system will load it. The exact\nlocation depends on the way you installed your Fledge system and the\ntype of plugin.\n\nIf your Fledge system was installed from a package and you used the\ndefault installation path, then your plugin must be stored under the\ndirectory */usr/local/fledge*. If you installed Fledge in a nonstandard\nlocation or you have built it from the source code, then the plugin\nshould be stored under the directory *$FLEDGE_ROOT*.\n\nA C/C++ plugin or a hybrid plugin should be placed in the directory\n*plugins/<type>/<plugin name>* under the installed directory\ndescribed above. Where *<type>* is one of *south*, *filter*, *north*,\n*notificationRule* or *notificationDelivery*. And *<plugin name>* is\nthe name you gave your plugin.\n\nA south plugin written in C/C++ and called DHT11, for a system\ninstalled from a package, would be installed in a directory called\n*/usr/local/fledge/plugins/south/DHT11*. Within that directory Fledge\nwould expect to find a file called *libDHT11.so*.\n\nA south hybrid plugin called MD1421, for a development system built from\nsource would be installed in *${FLEDGE_ROOT}/plugins/south/MD1421*. In\nthis directory a JSON file called *MD1421.json* should exist; this is\nwhat the system will read to create the plugin.\n\nA Python plugin should be installed in the directory\n*python/fledge/plugins/<plugin type>/<plugin name>* under the installed\ndirectory described above. 
Where *<type>* is one of *south*, *filter*,\n*north*, *notificationRule* or *notificationDelivery*. And *<plugin name>*\nis the name you gave your plugin.\n\nA Python filter plugin called normalise, on a system installed from\na package in the default location should be copied into a directory\n*/usr/local/fledge/python/fledge/plugins/filter/normalise*. Within\nthis directory should be a file called *normalise.py* and an empty file\ncalled *__init__.py*.\n\nInitial Testing\n---------------\n\nAfter you have copied your plugin into the correct location\nyou can test if Fledge is able to see it by running the API call\n*/fledge/plugins/installed*. This will list all the installed plugins\nand their versions.\n\n.. code-block:: console\n\n   $ curl http://localhost:8081/fledge/plugins/installed | jq\n   {\n     \"plugins\": [\n       {\n         \"name\": \"http_north\",\n         \"type\": \"north\",\n         \"description\": \"HTTP North Plugin\",\n         \"version\": \"2.2.0\",\n         \"installedDirectory\": \"north/http_north\",\n         \"packageName\": \"fledge-north-http-north\"\n       },\n       {\n         \"name\": \"Kafka\",\n         \"type\": \"north\",\n         \"description\": \"Simple plugin to send data to Kafka topic\",\n         \"version\": \"2.2.0\",\n         \"installedDirectory\": \"north/Kafka\",\n         \"packageName\": \"fledge-north-kafka\"\n       },\n   ...\n   }\n\nNote, in the above example the *jq* program has been used to format the\nreturned JSON and the output has been truncated for brevity.\n\nIf your plugin does not appear it may be because there was a problem\nloading it or because the *plugin_info* call returned a bad value. 
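\n\nIf you are looking for a specific plugin, the list can be filtered with *jq*; a sketch, assuming a south plugin named *DHT11* (substitute the name of your own plugin):\n\n.. code-block:: console\n\n   $ curl -s http://localhost:8081/fledge/plugins/installed | jq '.plugins[] | select(.name == \"DHT11\")'\n\nAn empty result means that Fledge did not find or could not load the plugin.\n\n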
Examine\nthe syslog file to see if there are any errors recorded during the above\nAPI call.\n\nC/C++ Common Faults\n-------------------\n\nCommon faults for C/C++ plugins are that a symbol could not be resolved\nwhen the plugin was loaded or the JSON for the default configuration\nis malformed.\n\nThere is a utility called *get_plugin_info* that is used by Python code\nto call the C *plugin_info* entry point; this can be used to ascertain the\ncause of some problems. It should return the default configuration of\nyour plugin and will verify that your plugin has no undefined symbols.\n\nThe location of *get_plugin_info* will depend on the type of\ninstallation you have. If you have built from source then it can\nbe found in *./cmake_build/C/plugins/utils/get_plugin_info*. If you\nhave installed a package, or run *make install*, you can find it in\n*/usr/local/fledge/extras/C/get_plugin_info*.\n\nThe utility is passed the library file of your plugin as its first argument\nand the function to call, usually *plugin_info*.\n\n.. 
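code-block:: console\n\n   # Optional quick check for unresolved symbols before running the utility;\n   # the library path below is illustrative, substitute your own plugin\n   $ ldd -r /usr/local/fledge/plugins/north/Kafka/libKafka.so 2>&1 | grep undefined\n\n.. 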
code-block:: console\n\n   $ ./get_plugin_info /usr/local/fledge/plugins/north/Kafka/libKafka.so plugin_info\n    {\"name\": \"Kafka\", \"type\": \"north\", \"flag\": 0, \"version\": \"2.2.0\", \"interface\": \"1.0.0\", \"config\": {\"SSL_CERT\": {\"displayName\": \"Certificate Name\", \"description\": \"Name of client certificate for identity authentications\", \"default\": \"\", \"validity\": \"KafkaSecurityProtocol == \\\"SSL\\\" || KafkaSecurityProtocol == \\\"SASL_SSL\\\"\", \"group\": \"Encryption\", \"type\": \"string\", \"order\": \"10\"}, \"topic\": {\"mandatory\": \"true\", \"description\": \"The topic to send reading data on\", \"default\": \"Fledge\", \"displayName\": \"Kafka Topic\", \"type\": \"string\", \"order\": \"2\"}, \"brokers\": {\"displayName\": \"Bootstrap Brokers\", \"description\": \"The bootstrap broker list to retrieve full Kafka brokers\", \"default\": \"localhost:9092,kafka.local:9092\", \"mandatory\": \"true\", \"type\": \"string\", \"order\": \"1\"}, \"KafkaUserID\": {\"group\": \"Authentication\", \"description\": \"User ID to be used with SASL_PLAINTEXT security protocol\", \"default\": \"user\", \"validity\": \"KafkaSecurityProtocol == \\\"SASL_PLAINTEXT\\\" || KafkaSecurityProtocol == \\\"SASL_SSL\\\"\", \"displayName\": \"User ID\", \"type\": \"string\", \"order\": \"7\"}, \"KafkaSASLMechanism\": {\"group\": \"Authentication\", \"description\": \"Authentication mechanism to be used to connect to kafka broker\", \"default\": \"PLAIN\", \"displayName\": \"SASL Mechanism\", \"type\": \"enumeration\", \"order\": \"6\", \"validity\": \"KafkaSecurityProtocol == \\\"SASL_PLAINTEXT\\\" || KafkaSecurityProtocol == \\\"SASL_SSL\\\"\", \"options\": [\"PLAIN\", \"SCRAM-SHA-256\", \"SCRAM-SHA-512\"]}, \"SSL_Password\": {\"displayName\": \"Certificate Password\", \"description\": \"Optional: Password to be used when loading the certificate chain\", \"default\": \"\", \"validity\": \"KafkaSecurityProtocol == \\\"SSL\\\" || KafkaSecurityProtocol 
== \\\"SASL_SSL\\\"\", \"group\": \"Encryption\", \"type\": \"password\", \"order\": \"12\"}, \"compression\": {\"displayName\": \"Compression Codec\", \"description\": \"The compression codec to be used to send data to the Kafka broker\", \"default\": \"none\", \"order\": \"4\", \"type\": \"enumeration\", \"options\": [\"none\", \"gzip\", \"snappy\", \"lz4\"]}, \"plugin\": {\"default\": \"Kafka\", \"readonly\": \"true\", \"type\": \"string\", \"description\": \"Simple plugin to send data to a Kafka topic\"}, \"KafkaSecurityProtocol\": {\"group\": \"Authentication\", \"description\": \"Security protocol to be used to connect to kafka broker\", \"default\": \"PLAINTEXT\", \"options\": [\"PLAINTEXT\", \"SASL_PLAINTEXT\", \"SSL\", \"SASL_SSL\"], \"displayName\": \"Security Protocol\", \"type\": \"enumeration\", \"order\": \"5\"}, \"source\": {\"displayName\": \"Data Source\", \"description\": \"The source of data to send\", \"default\": \"readings\", \"order\": \"13\", \"type\": \"enumeration\", \"options\": [\"readings\", \"statistics\"]}, \"json\": {\"displayName\": \"Send JSON\", \"description\": \"Send as JSON objects or as strings\", \"default\": \"Strings\", \"order\": \"3\", \"type\": \"enumeration\", \"options\": [\"Objects\", \"Strings\"]}, \"SSL_CA_File\": {\"displayName\": \"Root CA Name\", \"description\": \"Name of the root certificate authority that will be used to verify the certificate\", \"default\": \"\", \"validity\": \"KafkaSecurityProtocol == \\\"SSL\\\" || KafkaSecurityProtocol == \\\"SASL_SSL\\\"\", \"group\": \"Encryption\", \"type\": \"string\", \"order\": \"9\"}, \"SSL_Keyfile\": {\"displayName\": \"Private Key Name\", \"description\": \"Name of client private key required for communication\", \"default\": \"\", \"validity\": \"KafkaSecurityProtocol == \\\"SSL\\\" || KafkaSecurityProtocol == \\\"SASL_SSL\\\"\", \"group\": \"Encryption\", \"type\": \"string\", \"order\": \"11\"}, \"KafkaPassword\": {\"group\": \"Authentication\", 
\"description\": \"Password to be used with SASL_PLAINTEXT security protocol\", \"default\": \"pass\", \"validity\": \"KafkaSecurityProtocol == \\\"SASL_PLAINTEXT\\\" || KafkaSecurityProtocol == \\\"SASL_SSL\\\"\", \"displayName\": \"Password\", \"type\": \"password\", \"order\": \"8\"}}}\n\n\nIf there is an undefined symbol you will get an error from this\nutility. You can also check the validity of your JSON configuration by\npiping the output to a program such as jq.\n\n.. code-block:: console\n\n   $ ./get_plugin_info plugins/south/Random/libRandom.so plugin_info | jq\n    {\n      \"name\": \"Random\",\n      \"version\": \"1.9.2\",\n      \"type\": \"south\",\n      \"interface\": \"1.0.0\",\n      \"flag\": 4096,\n      \"config\": {\n        \"plugin\": {\n          \"description\": \"Random data generation plugin\",\n          \"type\": \"string\",\n          \"default\": \"Random\",\n          \"readonly\": \"true\"\n        },\n        \"asset\": {\n          \"description\": \"Asset name\",\n          \"type\": \"string\",\n          \"default\": \"Random\",\n          \"displayName\": \"Asset Name\",\n          \"mandatory\": \"true\"\n        }\n      }\n    }\n\nRunning Under a Debugger\n------------------------\n\nIf you have a C/C++ plugin that crashes you may want to run the plugin under a debugger. To build with debug symbols use the CMake option *-DCMAKE_BUILD_TYPE=Debug* when you create the *Makefile*.\n\nRunning a Service Under the Debugger\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n.. code-block:: console\n\n   $ cmake -DCMAKE_BUILD_TYPE=Debug ..\n\n\nThe easiest approach to run under a debugger is \n\n  - Create the service that uses your plugin, say a south service and name that service as you normally would.\n   \n  - Disable that service from being started by Fledge\n\n  - Use the fledge status script to find the arguments to pass the service\n\n    .. 
code-block:: console\n\n       $ scripts/fledge status\n       Fledge v1.8.2 running.\n       Fledge Uptime:  1451 seconds.\n       Fledge records: 200889 read, 200740 sent, 120962 purged.\n       Fledge does not require authentication.\n       === Fledge services:\n       fledge.services.core\n       fledge.services.storage --address=0.0.0.0 --port=39821\n       fledge.services.south --port=39821 --address=127.0.0.1 --name=AX8\n       fledge.services.south --port=39821 --address=127.0.0.1 --name=Sine\n       === Fledge tasks:\n\n   - Note the *--port=* and *--address=* arguments\n\n   - Set your LD_LIBRARY_PATH. This is normally done in the script that launches Fledge but will need to be run as a manual step when running under the debugger.\n\n     .. code-block:: console\n\n        export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/fledge/lib\n\n     If you built from source rather than installing a package you will need to include the libraries you built.\n\n     .. code-block:: console\n\n        export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${FLEDGE_ROOT}/cmake_build/C/lib\n\n   - Get a startup token by calling the Fledge API endpoint\n\n     *Note*: the caller must be authenticated as the *admin* user using either the username and password authentication or the certificate authentication mechanism in order to call the API endpoint.\n     You must first set Fledge to require authentication.\n     To do this, launch the Fledge GUI, navigate to Configuration and then Admin API.\n     Set Authentication to *mandatory*.\n     Authentication Method can be left as *any*.\n\n     In order to authenticate as the *admin* user one of the two following methods should be used, the choice of which is dependent on the authentication mechanism configured in your Fledge installation.\n\n     - User and password login\n\n       .. 
code-block:: console\n\n          curl -d '{\"username\": \"admin\", \"password\": \"fledge\"}' -X POST http://localhost:8081/fledge/login\n\n       Successful authentication will produce a response as shown below.\n\n       .. code-block:: console\n\n          {\"message\": \"Logged in successfully\", \"uid\": 1, \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1aWQiOjEsImV4cCI6MTY1NDU5NTIyMn0.IlhIgQ93LbCP-ztGlIuJVd6AJrBlbNBNvCv7SeuMfAs\", \"admin\": true}\n\n     - Certificate login\n\n       .. code-block:: console\n\n          curl -T /some_path/admin.cert -X POST http://localhost:8081/fledge/login\n\n       Successful authentication will produce a response as shown below.\n\n       .. code-block:: console\n\n          {\"message\": \"Logged in successfully\", \"uid\": 1, \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1aWQiOjEsImV4cCI6MTY1NDU5NTkzN30.6VVD_5RwmpLga2A7ri2bXhlo3x_CLqOYiefAAmLP63Y\", \"admin\": true}\n\n   It is now possible to call the API endpoint to retrieve a startup token by passing the authentication token given in the authentication request.\n\n   .. code-block:: console\n\n      curl -X POST 127.0.0.1:8081/fledge/service/ServiceName/otp -H 'authorization: Token'\n\n   Where *ServiceName* is the name you gave your service when you created it and *Token* is the token received from the authentication request above.\n\n   This call will respond with a startup token that can be used to start the service you are debugging. An example response is shown below.\n\n   .. code-block:: console\n\n      {\"startupToken\": \"WvFTYeGUvSEFMndePGbyvOsVYUzbnJdi\"}\n\n   *startupToken* will be passed as the service start argument: --token=*startupToken*\n\n   - Load the service you wish to use to run your plugin, e.g. a south service, under the debugger. This should be run from the FLEDGE_ROOT directory\n\n     .. 
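code-block:: console\n\n        # The login and OTP requests above can be chained; a sketch assuming\n        # username/password authentication and a service named \"ServiceName\"\n        $ AUTH=$(curl -sd '{\"username\": \"admin\", \"password\": \"fledge\"}' -X POST http://localhost:8081/fledge/login | jq -r '.token')\n        $ curl -sX POST 127.0.0.1:8081/fledge/service/ServiceName/otp -H \"authorization: $AUTH\" | jq -r '.startupToken'\n\n     .. 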
code-block:: console\n\n        $ cd $FLEDGE_ROOT\n        $ gdb services/fledge.services.south\n\n   - Run the service passing the *--port=* and *--address=* arguments you noted above and add *-d* and *--name=* with the name of your service and *--token=startupToken*\n\n     .. code-block:: console\n\n        (gdb) run --port=39821 --address=127.0.0.1 --name=ServiceName -d --token=startupToken\n\n     Where *ServiceName* is the name you gave your service when you created it and startupToken is the token issued using the method described above. Note, this token may only be used once; each time the service is restarted using the debugger a new startup token must be obtained.\n\n   - You can now use the debugger in the way you normally would to find any issues.\n\n     .. note::\n     \n        At this stage the plugins have not been loaded into the address space. If you try to set a break point in the plugin code you will get a warning that the break point can not currently be set. However when the plugin is later loaded the break point will be set and behave as expected.\n\nOnly the plugin has been built with debug information; if you wish to be able to single step into the library code that supports the plugin and the services, you must rebuild Fledge itself with debug symbols. There are multiple ways this can be done, but perhaps the simplest approach is to modify the *Makefile* in the root of the Fledge source.\n\nWhen building Fledge the *cmake* command is executed by the make process; hence, rather than manually running cmake and rebuilding, you can simply alter the line\n\n.. code-block:: console\n\n   CMAKE := cmake\n\nin the *Makefile* to read\n\n.. code-block:: console\n\n   CMAKE := cmake -DCMAKE_BUILD_TYPE=Debug\n\nAfter making this change you should run a *make clean* followed by a *make* command\n\n.. 
code-block:: console\n\n   $ make clean\n   $ make\n\nOne side effect of this, caused by running *make clean*, is that the plugins you have previously built have been removed from the $FLEDGE_ROOT/plugins directory and these must be rebuilt.\n\nAlternatively you can manually build a debug version by running the following commands\n\n.. code-block:: console\n\n   $ cd $FLEDGE_ROOT/cmake_build\n   $ cmake -DCMAKE_BUILD_TYPE=Debug ..\n   $ make\n\nThis has the advantage that *make clean* is not run so your plugins will be preserved.\n\nRunning a Task Under the Debugger\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nRunning a task under the debugger is much the same as running a service;\nyou will first need to find the management port and address of the core\nmanagement service. Create the task, e.g. a north sending process, in\nthe same way as you normally would and disable it. You will also need\nto set your LD_LIBRARY_PATH as with running a service under the debugger.\n\nIf you are using a plugin with a task, such as the north sending process\ntask, then the command to use to start the debugger is\n\n.. code-block:: console\n\n   $ gdb tasks/sending_process\n\nRunning the Storage Service Under the Debugger\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nRunning the storage service under the debugger is more difficult as you cannot start the storage service after Fledge has started; the startup of the storage service is coordinated by the core due to the nature of how configuration is stored. It is, however, possible to attach a debugger to a running storage service.\n\n  - Run a command to find the process ID of the storage service\n\n    .. code-block:: console\n\n       $ ps aux | grep fledge.services.storage\n       fledge  23318  0.0  0.3 270848 12388 ?        
Ssl  10:00   0:01 /usr/local/fledge/services/fledge.services.storage --address=0.0.0.0 --port=33761\n       fledge  31033  0.0  0.0  13136  1084 pts/1    S+   10:37   0:00 grep --color=auto fledge.services.storage\n\n    - Use the process ID of the fledge service as an argument to gdb. Note you will need to run gdb as root on some systems\n\n      .. code-block:: console\n\n          $ sudo gdb /usr/local/fledge/services/fledge.services.storage 23318\n          GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git\n          Copyright (C) 2018 Free Software Foundation, Inc.\n          License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>\n          This is free software: you are free to change and redistribute it.\n          There is NO WARRANTY, to the extent permitted by law.  Type \"show copying\"\n          and \"show warranty\" for details.\n          This GDB was configured as \"x86_64-linux-gnu\".\n          Type \"show configuration\" for configuration details.\n          For bug reporting instructions, please see:\n          <http://www.gnu.org/software/gdb/bugs/>.\n          Find the GDB manual and other documentation resources online at:\n          <http://www.gnu.org/software/gdb/documentation/>.\n          For help, type \"help\".\n          Type \"apropos word\" to search for commands related to \"word\"...\n          Reading symbols from services/fledge.services.storage...done.\n          Attaching to program: /usr/local/fledge/services/fledge.services.storage, process 23318\n          [New LWP 23320]\n          [New LWP 23321]\n          [New LWP 23322]\n          [New LWP 23330]\n          [Thread debugging using libthread_db enabled]\n          Using host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n          0x00007f47a3e05d2d in __GI___pthread_timedjoin_ex (threadid=139945627997952, thread_return=0x0, abstime=0x0,\n              block=<optimized out>) at pthread_join_common.c:89\n          
89\tpthread_join_common.c: No such file or directory.\n          (gdb)\n\n   - You can now use gdb to set break points etc. and debug the storage service and plugins.\n\nIf you are debugging a plugin that crashes the system when readings are\nprocessed you should disable the south services until you have connected\nthe debugger to the storage service. If you have a system that is set up\nand crashes, use the --safe-mode flag on the startup of Fledge in order\nto disable all processes and services. This will allow you to disable\nservices or to run a particular service manually.\n\nUsing strace\n------------\n\nYou can also use a similar approach to that used for gdb to run the *strace* command to trace system calls and signals.\n\n  - Create the service that uses your plugin, say a south service, and name that service as you normally would.\n   \n  - Disable that service from being started by Fledge\n\n  - Use the fledge status script to find the arguments to pass the service\n\n    .. code-block:: console\n\n       $ scripts/fledge status\n       Fledge v1.8.2 running.\n       Fledge Uptime:  1451 seconds.\n       Fledge records: 200889 read, 200740 sent, 120962 purged.\n       Fledge does not require authentication.\n       === Fledge services:\n       fledge.services.core\n       fledge.services.storage --address=0.0.0.0 --port=39821\n       fledge.services.south --port=39821 --address=127.0.0.1 --name=AX8\n       fledge.services.south --port=39821 --address=127.0.0.1 --name=Sine\n       === Fledge tasks:\n\n   - Note the *--port=* and *--address=* arguments\n\n   - Run *strace* with the service adding the same set of arguments you used in gdb when running the service\n\n     .. 
code-block:: console\n\n        $ strace services/fledge.services.south --port=39821 --address=127.0.0.1 --name=ServiceName --token=StartupToken -d\n\n     Where *ServiceName* is the name you gave your service and *StartupToken* is the token issued by following the steps above.\n\nMemory Leaks and Corruptions\n----------------------------\n\nFledge has integrated support that allows south and north services to be run using the *valgrind* tool.  This tool makes it easy to find memory corruption and leak issues in your plugin.\n\n  - Create the service that uses your plugin, say a south service, and name that service as you normally would.\n   \n  - Shutdown Fledge\n\n  - If using a south service to test your plugin set the environment variable VALGRIND_SOUTH to be the name of the service you just defined.\n\n  - Start Fledge using the *fledge* script in the scripts directory.\n\n  - Allow Fledge to run for some time. Note that the service running under *valgrind* will run much more slowly than it does outside of *valgrind*. You may have to allow it to run for more time than expected.\n\n  - Shutdown Fledge. Again this may take longer than normal.\n\nYou will see a file created in your home directory called *south.serviceName.valgrind.out*. This is a text file that contains the result of running *valgrind*. Refer to the standard *valgrind* documentation for information on how to interpret this file.\n\nIf developing a plugin to run in a north service, then the variable VALGRIND_NORTH should be set.\n\nMultiple services may be run under *valgrind* by setting the appropriate variable to be a comma separated list of service names.\n\nCompiling in debug mode, by setting *CFLAGS=-DDebug*, will allow *valgrind* to pinpoint memory leaks and corruptions to particular lines of your source code.\n\n.. 
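code-block:: console\n\n   # A sketch of the valgrind workflow described above, assuming a south\n   # service named \"MySouth\"; substitute the name of your own service\n   $ export VALGRIND_SOUTH=MySouth\n   $ scripts/fledge start\n   # ... allow the service to run for a while ...\n   $ scripts/fledge stop\n   $ less ~/south.MySouth.valgrind.out\n\n.. 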
note::\n\n   Don't forget to clear the environment variable once you have completed your analysis, otherwise you will degrade the performance of the service.\n\nPython Plugin Info\n------------------\n\nIt is also possible to test the loading and validity of the *plugin_info* call in a Python plugin.\n\n  - From the */usr/local/fledge/python* or *${FLEDGE_ROOT}/python* directory run the command\n\n    .. code-block:: console\n\n       python3 -c 'from fledge.plugins.south.<name>.<name> import plugin_info; print(plugin_info())'\n\n    Where *<name>* is the name of your plugin.\n\n    .. code-block:: console\n\n       python3 -c 'from fledge.plugins.south.sinusoid.sinusoid import plugin_info; print(plugin_info())'\n       {'name': 'Sinusoid Poll plugin', 'version': '1.8.1', 'mode': 'poll', 'type': 'south', 'interface': '1.0', 'config': {'plugin': {'description': 'Sinusoid Poll Plugin which implements sine wave with data points', 'type': 'string', 'default': 'sinusoid', 'readonly': 'true'}, 'assetName': {'description': 'Name of Asset', 'type': 'string', 'default': 'sinusoid', 'displayName': 'Asset name', 'mandatory': 'true'}}}\n\nThis allows you to confirm the plugin can be loaded and the *plugin_info* entry point can be called.\n\nYou can also check your default configuration, although in Python this is usually harder to get wrong.\n\n.. code-block:: console\n\n   $ python3 -c 'from fledge.plugins.south.sinusoid.sinusoid import plugin_info; print(plugin_info()[\"config\"])'\n   {'plugin': {'description': 'Sinusoid Poll Plugin which implements sine wave with data points', 'type': 'string', 'default': 'sinusoid', 'readonly': 'true'}, 'assetName': {'description': 'Name of Asset', 'type': 'string', 'default': 'sinusoid', 'displayName': 'Asset name', 'mandatory': 'true'}}\n\nPlugin Configuration Validation\n-------------------------------\n\nThe plugin configuration validation feature provides automated connectivity testing for plugin configurations. 
This feature validates that network-related configuration parameters are correct and reachable before deploying or activating a plugin, helping to identify configuration issues early in the plugin testing process.\n\nThis is the first iteration of the configuration validation feature, focusing on network connectivity validation for common plugin patterns.\n\nOverview\n~~~~~~~~\n\nThe validation system performs two primary types of tests:\n\n**Host Reachability Test**\n   Tests whether the configured host or server is reachable over the network using socket-based connectivity testing.\n\n**Listening Test**\n   Tests whether a service is actively listening on the specified host and port combination.\n\nThe validation automatically detects relevant configuration fields and applies appropriate tests based on the field types and naming conventions.\n\nSupported Configuration Keys\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe validation system recognizes the following configuration field patterns (case insensitive):\n\n**Address Fields**\n   - ``address``\n   - ``ip``\n   - ``server``\n   - ``host``\n   - ``hostname``\n\n**Port Fields**\n   - ``port``\n   - ``brokerport``\n\n**URL Fields**\n   - ``url``\n\n**Broker Fields**\n   - ``broker``\n   - ``brokerhost``\n\nConfiguration Value Priority\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nWhen both ``default`` and ``value`` keys are present in a configuration field, the validation system uses the following priority:\n\n1. **``value``** - Takes precedence when present\n2. **``default``** - Used when ``value`` is not present\n3. **Error** - Raised if neither ``value`` nor ``default`` is present\n\nCommon Configuration Patterns\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nHost and Port Configuration\n^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nStandard server configuration with separate host and port fields:\n\n.. 
code-block:: json\n\n   {\n       \"plugin\": {\n           \"description\": \"Modbus TCP Plugin\",\n           \"type\": \"string\",\n           \"default\": \"modbus\",\n           \"readonly\": \"true\"\n       },\n       \"address\": {\n           \"description\": \"Address of Modbus TCP server\",\n           \"type\": \"string\",\n           \"default\": \"192.168.1.100\",\n           \"order\": \"1\",\n           \"displayName\": \"Server Address\"\n       },\n       \"port\": {\n           \"description\": \"Port of Modbus TCP server\",\n           \"type\": \"integer\",\n           \"default\": \"502\",\n           \"order\": \"2\",\n           \"displayName\": \"Port\"\n       }\n   }\n\n**Validation Behavior:**\n   - Tests host reachability to ``192.168.1.100``\n   - Tests service listening on ``192.168.1.100:502``\n\nAddress-Only Configuration\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nIndustrial protocol configuration with address field only:\n\n.. code-block:: json\n\n   {\n       \"plugin\": {\n           \"description\": \"S7 PLC Plugin\",\n           \"type\": \"string\",\n           \"default\": \"s7\",\n           \"readonly\": \"true\"\n       },\n       \"IP\": {\n           \"description\": \"PLC IP Address\",\n           \"type\": \"string\",\n           \"default\": \"192.168.1.151\",\n           \"order\": \"1\",\n           \"displayName\": \"PLC IP Address\"\n       }\n   }\n\n**Validation Behavior:**\n   - Tests host reachability to ``192.168.1.151``\n   - Tests service listening on common industrial ports: ``102`` (S7), ``44818`` (EtherNet/IP), ``502`` (Modbus), ``80`` (HTTP), ``443`` (HTTPS)\n\nIP Address Configuration\n^^^^^^^^^^^^^^^^^^^^^^^^\n\nSimilar to address-only, but specifically for IP fields:\n\n.. 
code-block:: json\n\n   {\n       \"plugin\": {\n           \"description\": \"EtherNet/IP Plugin\",\n           \"type\": \"string\",\n           \"default\": \"etherip\",\n           \"readonly\": \"true\"\n       },\n       \"address\": {\n           \"description\": \"Address of PLC\",\n           \"type\": \"string\",\n           \"default\": \"127.0.0.1\",\n           \"value\": \"192.168.1.199\",\n           \"order\": \"1\",\n           \"displayName\": \"PLC IP Address\"\n       }\n   }\n\n**Validation Behavior:**\n   - Uses ``value`` (``192.168.1.199``) over ``default``\n   - Tests host reachability to ``192.168.1.199``\n   - Tests service listening on default industrial ports\n\nBroker Configuration\n^^^^^^^^^^^^^^^^^^^^\n\nMQTT broker configuration supports multiple patterns:\n\n**Pattern 1: Broker URL**\n\n.. code-block:: json\n\n   {\n       \"plugin\": {\n           \"description\": \"MQTT South Plugin\",\n           \"type\": \"string\",\n           \"default\": \"mqtt\",\n           \"readonly\": \"true\"\n       },\n       \"broker\": {\n           \"description\": \"MQTT Broker URL\",\n           \"type\": \"string\",\n           \"default\": \"tcp://mqtt.local:1883\",\n           \"order\": \"1\",\n           \"displayName\": \"MQTT Broker\"\n       }\n   }\n\n**Validation Behavior:**\n   - Parses URL to extract host (``mqtt.local``) and port (``1883``)\n   - Tests host reachability to ``mqtt.local``\n   - Tests service listening on ``mqtt.local:1883``\n\n**Pattern 2: Separate Host and Port**\n\n.. 
code-block:: json\n\n   {\n       \"brokerHost\": {\n           \"description\": \"Hostname or IP address of the broker\",\n           \"type\": \"string\",\n           \"default\": \"localhost\",\n           \"order\": \"1\",\n           \"displayName\": \"MQTT Broker Host\"\n       },\n       \"brokerPort\": {\n           \"description\": \"The network port of the broker\",\n           \"type\": \"integer\",\n           \"default\": \"1883\",\n           \"order\": \"2\",\n           \"displayName\": \"MQTT Broker Port\"\n       }\n   }\n\n**Validation Behavior:**\n   - Tests host reachability to ``localhost``\n   - Tests service listening on ``localhost:1883``\n\n**Pattern 3: Broker Hostname Only**\n\n.. code-block:: json\n\n   {\n       \"broker\": {\n           \"description\": \"The address of the MQTT broker\",\n           \"type\": \"string\",\n           \"default\": \"localhost\",\n           \"order\": \"1\",\n           \"displayName\": \"MQTT Broker\"\n       }\n   }\n\n**Validation Behavior:**\n   - Tests host reachability to ``localhost``\n   - Tests service listening on default MQTT ports: ``1883``, ``8883``\n\nURL Configuration\n^^^^^^^^^^^^^^^^^\n\nService URL configuration with protocol-specific handling:\n\n.. 
code-block:: json\n\n   {\n       \"plugin\": {\n           \"description\": \"OPC UA South Plugin\",\n           \"type\": \"string\",\n           \"default\": \"opcua\",\n           \"readonly\": \"true\"\n       },\n       \"url\": {\n           \"description\": \"OPC UA Server URL\",\n           \"type\": \"string\",\n           \"default\": \"opc.tcp://localhost:4840/server\",\n           \"order\": \"1\",\n           \"displayName\": \"Server URL\"\n       }\n   }\n\n**Validation Behavior:**\n   - Parses ``opc.tcp://`` URL format\n   - Tests host reachability to ``localhost``\n   - Tests service listening on ``localhost:4840``\n\n**Supported URL Schemes:**\n   - ``http://``, ``https://``\n   - ``opc.tcp://`` (OPC UA)\n   - ``tcp://`` (MQTT)\n   - ``mqtt://``, ``mqtts://``\n   - ``ftp://``\n\nServer and Port Configuration\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nGeneric server configuration pattern:\n\n.. code-block:: json\n\n   {\n       \"plugin\": {\n           \"description\": \"Database North Plugin\",\n           \"type\": \"string\",\n           \"default\": \"database\",\n           \"readonly\": \"true\"\n       },\n       \"ServerHostname\": {\n           \"description\": \"Database server hostname\",\n           \"type\": \"string\",\n           \"default\": \"db.example.com\",\n           \"order\": \"1\",\n           \"displayName\": \"Server Hostname\"\n       },\n       \"ServerPort\": {\n           \"description\": \"Database server port\",\n           \"type\": \"integer\",\n           \"default\": \"5432\",\n           \"order\": \"2\",\n           \"displayName\": \"Server Port\"\n       }\n   }\n\n**Validation Behavior:**\n   - Tests host reachability to ``db.example.com``\n   - Tests service listening on ``db.example.com:5432``\n\nDefault Port Assignment\n~~~~~~~~~~~~~~~~~~~~~~~\n\nWhen no explicit port is configured, the validation system applies default ports based on field names and common protocols:\n\n**IP Fields** (``ip``, ``IP``)\n   - 
``102`` (S7 PLC)\n   - ``44818`` (EtherNet/IP)\n   - ``502`` (Modbus TCP)\n   - ``80`` (HTTP)\n   - ``443`` (HTTPS)\n\n**Address Fields** (``address``)\n   - ``502`` (Modbus TCP)\n   - ``80`` (HTTP)\n   - ``443`` (HTTPS)\n\n**Host/Server Fields** (``host``, ``hostname``, ``server``)\n   - ``80`` (HTTP)\n   - ``443`` (HTTPS)\n\n**Broker Fields** (``broker``, ``brokerhost``)\n   - ``1883`` (MQTT)\n   - ``8883`` (MQTTS)\n\nTesting Configuration Validation\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n**API Endpoint:** ``PUT /fledge/plugin/validate``\n\n**Request Body:** JSON configuration object\n\n**Example Test Request:**\n\n.. code-block:: bash\n\n   curl -X PUT http://localhost:8081/fledge/plugin/validate \\\n        -H \"Content-Type: application/json\" \\\n        -d '{\n            \"plugin\": {\n                \"description\": \"Test Plugin\",\n                \"type\": \"string\",\n                \"default\": \"test\",\n                \"readonly\": \"true\"\n            },\n            \"address\": {\n                \"description\": \"Server address\",\n                \"type\": \"string\",\n                \"default\": \"192.168.1.100\"\n            },\n            \"port\": {\n                \"description\": \"Server port\",\n                \"type\": \"integer\",\n                \"default\": \"502\"\n            }\n        }'\n\nValidation Response Format\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe validation API returns a JSON response with test results:\n\n**Success Response (HTTP 200):**\n\n.. 
code-block:: json\n\n   {\n       \"HostReachable\": {\n           \"description\": \"Host Reachable\",\n           \"result\": \"pass\",\n           \"values\": [\n               {\"address\": \"192.168.1.100\", \"port\": \"502\"}\n           ]\n       },\n       \"Listening\": {\n           \"description\": \"Listening\",\n           \"result\": \"fail\",\n           \"values\": [\n               {\"address\": \"192.168.1.100\", \"port\": \"502\"}\n           ],\n           \"detail\": {\n               \"reason\": \"No service is listening on 192.168.1.100:502\"\n           }\n       }\n   }\n\n**No Validation Required (HTTP 204):**\n\nReturned when no network-related configuration fields are detected.\n\n**Error Response (HTTP 400):**\n\n.. code-block:: json\n\n   {\n       \"error\": \"Configuration item 'address' must have either 'value' or 'default' key\"\n   }\n\nConfiguration Testing Best Practices\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n**During Plugin Development**\n   - Test configurations with the validation API before finalizing plugin design\n   - Use realistic default values that can be validated\n   - Ensure network-related fields follow recognized naming patterns\n\n**Configuration Design**\n   - Use consistent field naming conventions (``address``, ``port``, ``host``)\n   - Provide meaningful default values for testing\n   - Include both ``default`` and ``value`` support for flexibility\n\n**Field Naming**\n   - Use recognized field names for automatic detection\n   - Follow consistent naming patterns across plugins\n   - Consider using descriptive field names that match the validation patterns\n\n**Error Handling**\n   - Always provide either ``default`` or ``value`` keys for network-related fields\n   - Use appropriate data types (``string`` for hostnames, ``integer`` for ports)\n   - Provide clear descriptions for configuration fields\n\n**Integration Testing**\n   - Validate configurations during plugin development\n   - Use validation 
results to verify network connectivity before deployment\n   - Consider validation feedback when designing plugin configurations\n\nValidation Limitations\n~~~~~~~~~~~~~~~~~~~~~~\n\nThis is the first iteration of the plugin configuration validation feature. Current limitations include:\n\n- Network connectivity testing only (no protocol-specific validation)\n- Limited to common network configuration patterns\n- No support for complex authentication scenarios\n- Basic URL scheme support\n- No certificate or encryption validation\n\nFuture enhancements may include additional validation types, protocol-specific testing, and expanded configuration pattern support.\n\n"
  },
  {
    "path": "docs/plugin_developers_guide/11_WSL2.rst",
    "content": ".. Developing with Windows Subsystem for Linux (WSL2)\n\n.. |br| raw:: html\n\n   <br />\n\n.. =============================================\n\n\nDeveloping with Windows Subsystem for Linux (WSL2)\n==================================================\n\n`Windows Subsystem for Linux (WSL2) <https://docs.microsoft.com/en-us/windows/wsl>`_ allows you to run a Linux environment directly on Windows\nwithout the overhead of `Hyper-V on Windows 10 <https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/about>`_ or a dual-boot setup.\nYou can run many Linux command-line tools, utilities, and applications on a special lightweight virtual machine running on Windows.\nIt is possible to run a complete Fledge system on WSL2.\nThis includes the `Fledge GUI <../quick_start/gui.html>`_\nwhich can be accessed from a browser running on the host Windows environment.\n\nMicrosoft's `Visual Studio Code <https://code.visualstudio.com>`_ is a cross-platform editor that supports extensions\nfor building and debugging software in a variety of languages and environments.\nThis article describes how to configure Visual Studio Code to edit, build and debug Fledge plugins written in C++ running in Linux under WSL2.\n\n.. note::\n    It is possible to configure Visual Studio Code to build and test Python code in WSL2 but this is not covered in this article.\n\nPreparing the Development Environment\n-------------------------------------\n\nThis section outlines the steps to configure WSL2 and the Linux environment.\n\nInstalling Windows Subsystem for Linux (WSL2)\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nYou must be running Windows 10 version 2004 and higher (Build 19041 and higher) or Windows 11 to install WSL2.\nThe easiest way to install is to open a Windows Command Prompt as Administrator and run this command:\n\n.. 
code-block:: bash\n\n   wsl --install\n\nWindows will perform all the necessary steps for you.\nIt will install the default Linux distribution which is the latest version of Ubuntu.\nIf you wish to perform the steps manually, or install a Linux distribution other than the default,\nsee the Microsoft documentation on `Installing WSL <https://docs.microsoft.com/en-us/windows/wsl/install>`_.\n\nWhen the installation completes, the Linux distribution will launch in a new window.\nIt will prompt you for a username and password for the account that will serve as the root account.\nThis username has nothing to do with your Windows environment so it can be any name you choose.\n\nYou can start the Linux distribution at any time by finding it in the Windows Start Menu.\nIf you hit the Windows key and type the name of your Linux distribution (default: \"Ubuntu\"), you should see it immediately.\n\nSome Useful Features of WSL2\n############################\n\nA Linux distribution running in WSL2 is a lightweight virtual machine but is well integrated with the Windows environment.\nHere are some useful features:\n\n- *Cut and paste text into and out of the Linux window*: |br|\n  The Linux window behaves just like a Command Prompt window or a Powershell window.\n  You can copy text from any window and paste it into any other.\n- *Access the Linux file system from Windows*: |br|\n  The Linux file system appears as a Network drive in Windows.\n  Open the Windows File Explorer and navigate to \"*\\\\\\\\wsl$*.\"\n  You will see your Linux distributions appear as network folders.\n- *Access the Windows file system from Linux*: |br|\n  From the *bash* command line, navigate to the mount point \"*/mnt*.\"\n  You will see your Windows drive letters in this directory.\n- *Access the Linux environment from the Windows host through the network*: |br|\n  From the *bash* command line, run the command *hostname -I*.\n  The external IP address returned by this command can be used in the Windows host to reach 
Linux.\n- *Access the Windows host from the Linux environment through the network*: |br|\n  From the *bash* command line, run the command *cat /etc/resolv.conf*.\n  The IP address after the label *nameserver* can be used in the Linux environment to reach the Windows host.\n\nPreparing the Linux Distribution for Fledge\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe *systemd* service manager is not configured by default in an Ubuntu distribution running in WSL2.\nSince Fledge relies on *systemd*, you must run a script to enable it.\nFrom your home directory in the Ubuntu window, enter the commands:\n\n.. code-block:: bash\n\n   git clone https://github.com/DamionGans/ubuntu-wsl2-systemd-script.git\n   cd ubuntu-wsl2-systemd-script\n   bash ubuntu-wsl2-systemd-script.sh\n\nRestart the Ubuntu distribution using *sudo reboot* or *sudo systemctl reboot*.\nWhen the distribution has restarted, run the command *systemctl*.\nYou should see no error and a list of units.\nThe script must be run *one time only*.\nWhenever you start up your Ubuntu distribution, *systemd* should be ready.\n\nInstalling Fledge\n~~~~~~~~~~~~~~~~~~\n\nFollow the normal instructions for `Installing Fledge on Ubuntu <../quick_start/installing.html#ubuntu-or-debian>`_.\nMake sure the package repository matches your version of Ubuntu.\nYou can check the operating system version in your distribution with the command *hostnamectl* or *cat /etc/os-release*.\n\nInstalling Visual Studio Code\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nNavigate to the `Visual Studio Code <https://code.visualstudio.com>`_ webpage in your Windows browser.\nClick the *Download for Windows* button.\nRun the installer to install Visual Studio Code.\n\nVisual Studio Code is available for Microsoft Windows, Apple MacOS, and several Linux distributions.\n**Do not install the Linux build of Visual Studio Code in your Linux distribution in WSL2.**\nYou will actually be launching Visual Studio Code for Windows from your Linux 
distribution!\n\nStarting the Linux Distribution\n-------------------------------\n\nPerform these steps every time you start your Linux distribution if you plan to run Fledge:\n\nStarting syslog\n~~~~~~~~~~~~~~~\n\nThe system log */var/log/syslog* is not configured to run automatically in a Linux distribution in WSL2.\nStart *syslog* with the command:\n\n.. code-block:: bash\n\n   sudo service rsyslog start\n\nYou must do this at every startup.\n\nStarting Nginx\n~~~~~~~~~~~~~~\n\nFledge uses `Nginx <https://nginx.org>`_ as a web server to host the Fledge GUI.\nIf you plan to run Fledge GUI during your Linux distribution session, enter the command:\n\n.. code-block:: bash\n\n   sudo service nginx start\n\nYou must do this at every startup if you plan to run the Fledge GUI.\n\nStarting Fledge\n~~~~~~~~~~~~~~~~\n\nStart Fledge normally.\nYou can start it from the normal run directory, or from your build directory by following the directions on the webpage\n`Testing Your Plugin <10_testing.html#testing-your-plugin>`_.\n\nStarting Fledge GUI\n~~~~~~~~~~~~~~~~~~~~\n\nIf *Nginx* is running, you can run the Fledge GUI in a browser in your host Windows environment.\nFind the external IP address for your Linux distribution using the command:\n\n.. code-block:: bash\n\n   hostname -I\n\nThis address is reachable from your Windows environment.\nCopy the IP address to a new tab in your browser and hit Enter.\nYou should see the Fledge GUI Dashboard page.\n\n.. 
note::\n    The Linux distribution's external IP address is (usually) different every time you start it.\n    You will need to run the *hostname -I* command every time to obtain the current IP address.\n\nConfiguring Visual Studio Code\n------------------------------\n\nThis section describes how to configure Visual Studio Code to edit, build and debug your C++ Linux projects.\nThese instructions are summarized from the Visual Studio Code tutorial `Using C++ and WSL in VS Code <https://code.visualstudio.com/docs/cpp/config-wsl>`_.\n\nInstalling Extensions\n~~~~~~~~~~~~~~~~~~~~~\n\nNavigate to a directory containing your C++ source code files and issue the command:\n\n.. code-block:: bash\n\n   code .\n   \nThis will launch Visual Studio Code in your Windows environment but it will be looking at the current directory in your Linux distribution.\nSince you are launching Visual Studio Code from your Linux distribution, Code should prompt you to install two Extensions:\n\n* `Remote-WSL <https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-wsl>`_\n* `C/C++ <https://marketplace.visualstudio.com/items?itemName=ms-vscode.cpptools>`_\n\nIf you are not prompted, follow these links to install the extensions and restart Visual Studio Code.\nIf the extensions are installed and working, you should see a green label in the lower left-hand corner of the Visual Studio Code window\nwith the text *WSL:* followed by the name of your Linux distribution.\n\nConfiguring your Workspace\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nVisual Studio Code refers to your directory of source code files as the *Workspace.*\nIn order to edit, build and debug your code, you must create 3 Json files in a Workspace subdirectory called *.vscode*:\n\n- **c_cpp_properties.json**: compiler path, IntelliSense settings, and include file paths\n- **tasks.json**: build instructions\n- **launch.json**: debugger settings\n\nYou can create these files manually or use Visual Studio Code's configuration 
wizards.\nThese subsections describe the creation and required contents of each of these three files.\n\nCode Editor Configuration: c_cpp_properties.json\n################################################\n\n- Open the Command Palette using the key sequence *Ctrl+Shift+P*. |br|\n- Choose the command *C/C++: Edit Configurations (JSON)*. |br|\n- This will create the *.vscode* subdirectory (if it doesn't already exist) and the *c_cpp_properties.json* file. |br|\n- This Json file will be opened for editing. |br|\n- You will see a new array called *configurations* with a single configuration object defined. |br|\n- This configuration will have a string array called *includePath*. |br|\n- Add the paths to your own include files, and those required by the Fledge API to the *includePath* array. |br|\n- You can use Linux environment variables in your paths. For example: |br|\n\n  .. code-block:: json\n\n    \"${FLEDGE_ROOT}/C/common/include\"\n\n- You can find the list of include files by running your *make* command: |br|\n\n  .. code-block:: bash\n\n    make --just-print\n\nwhich will list all commands defined by *make* without executing them.\nYou will see the include file list in every instance of the *gcc* compiler command.\n\nBuild Configuration: tasks.json\n###############################\n\n- From the Visual Studio Code main menu, choose *Terminal -> Configure Default Build Task*. |br|\n- A dropdown will display a list of available tasks for C++ projects. |br|\n- Choose *g++ build active file*. |br|\n- This will create the *.vscode* subdirectory (if it doesn't already exist) and the *tasks.json* file. |br|\n- Open the Json file for editing. |br|\n\nBuilding the project will be done using the *make* file rather than the *gcc* compiler.\nTo make this change, edit the *command* and *args* entries as follows:\n\n.. 
code-block:: console\n\n   \"command\": \"make\",\n   \"args\": [\n      \"-C\",\n      \"${workspaceFolder}/build\"\n   ],\n\nThe \"-C\" argument for *make* will move into the specified directory before doing anything.\n\nYou can invoke a build from Visual Studio Code at any time with the key sequence *Ctrl+Shift+B*.\n\nDebugger Configuration: launch.json\n###################################\n\n- From the Visual Studio Code main menu, choose *Run -> Add Configuration..*. |br|\n- Choose *C++ (GDB/LLDB)*. |br|\n- This will create the *.vscode* subdirectory (if it doesn't already exist) and the *launch.json* file. |br|\n- Edit the *launch.json* file so it looks like this:\n\n.. code-block:: json\n\n   {\n      \"version\": \"0.2.0\",\n      \"configurations\": [\n         {\n            \"name\": \"Debug Plugin\",\n            \"type\": \"cppdbg\",\n            \"request\": \"launch\",\n            \"targetArchitecture\": \"x86_64\",\n            \"cwd\": \"${fileDirname}\",\n            \"program\": \"/full/path/to/fledge.services.north\",\n            \"externalConsole\": false,\n            \"stopAtEntry\": true,\n            \"MIMode\": \"gdb\",\n            \"avoidWindowsConsoleRedirection\": false,\n            \"args\": [\n                \"--port=42467\",\n                \"--address=0.0.0.0\",\n                \"--name=MyPluginInstance\",\n                \"-d\"\n            ]\n         }\n       ]\n   }\n\n.. 
note::\n    - The *program* attribute holds the program that the *gdb* debugger should launch.\n      For Fledge plugin development, this is either *fledge.services.north* or *fledge.services.south* depending on which one you are building.\n      These service executables will dynamically load your plugin library when they run.\n    - The *args* attribute has the arguments normally passed to the service executable.\n      Since the TCP/IP *port* changes every time Fledge starts up, you must edit this file to update the *port* number before starting your debug session.\n\nStart your debug session from the Visual Studio Code main menu.\nChoose *Run -> Start Debugging*, or hit the F5 key.\n\nKnown Problems\n--------------\n\n- *Environment variables in launch.json*: |br|\n  Support for environment variables in the *program* attribute is inconsistent.\n  Variables created by Visual Studio Code itself will work but user-defined environment variables like FLEDGE_ROOT will not.\n- *gdb startup errors*: |br|\n  It can occur that *gdb* stops with error 42 and exits immediately when you start a debugging session.\n  To fix this, shut down your Linux distributions and reinstall Visual Studio Code in Windows.\n  You will not lose your configuration settings or your installed extensions.\n- *Inconsistent breakpoint lists*: |br|\n  Visual Studio Code shows a list of breakpoints in the lower left corner of the window.\n  The *gdb* debugger maintains its own list of breakpoints.\n  It can occur that the two lists fall out of sync.\n  You can still create, view and delete breakpoints from the *Debug Console* tab at the bottom of the screen which gives you access to the *gdb* command line.\n  When using the *Debug Console*, you must precede all *gdb* commands with \"*-exec*.\" |br|\n\n  To manipulate breakpoints:\n    - Set a breakpoint: *-exec b functionName*.\n    - View breakpoints: *-exec info b*.\n      This will display an ordinal number for each breakpoint.\n    - 
Delete breakpoints: *-exec del ##*. Use the ordinal number returned by *-exec info b* as \"*##*.\"\n\nReferences\n----------\n\n- `Visual Studio Code <https://code.visualstudio.com>`_\n- `Using C++ and WSL in VS Code <https://code.visualstudio.com/docs/cpp/config-wsl>`_\n- `Remote development in WSL <https://code.visualstudio.com/docs/remote/wsl-tutorial>`_\n- `Debug C++ in Visual Studio Code <https://code.visualstudio.com/docs/cpp/cpp-debug>`_\n- `Predefined Variables Reference <https://code.visualstudio.com/docs/editor/variables-reference>`_\n- `C_cpp_properties.json reference <https://code.visualstudio.com/docs/cpp/customize-cpp-settings>`_\n- `Schema for tasks.json <https://code.visualstudio.com/docs/debugtest/tasks>`_\n- `Configuring C/C++ Debugging (launch.json) <https://code.visualstudio.com/docs/cpp/launch-json-reference>`_\n"
  },
  {
    "path": "docs/plugin_developers_guide/index.rst",
    "content": ".. Plugins\n\n***********************\nPlugin Developer Guide\n***********************\n\n.. toctree::\n\n    00_source_code_doc.rst\n    01_Fledge_plugins\n    01_01_Data\n    02_writing_plugins\n    02_persisting_data\n    03_south_plugins\n    03_south_C_plugins\n    035_CPP\n    037_hybrid_plugins\n    04_north_plugins\n    05_storage_plugins\n    06_filter_plugins\n    07_rules_plugins\n    08_notify_plugins\n    08_storage.rst\n    09_packaging.rst\n    10_testing\n    11_WSL2.rst\n    \n"
  },
  {
    "path": "docs/plugin_index.rst",
    "content": "********************\nPlugin Documentation\n********************\n\nThe following external plugins are currently available to extend the functionality of Fledge.\n\n.. toctree::\n\n    plugins/south\n    plugins/north\n    plugins/filter\n    plugins/rule\n    plugins/notify\n"
  },
  {
    "path": "docs/processing_data.rst",
    "content": ".. Images\n.. |filter_south| image:: images/filter_1.jpg\n.. |filter_add| image:: images/filter_2.jpg\n.. |filter_expression| image:: images/filter_3.jpg\n.. |filter_data| image:: images/filter_4.jpg\n.. |filter_pipeline| image:: images/filter_5.jpg\n.. |filter_reorder| image:: images/filter_6.jpg\n.. |filter_edit| image:: images/filter_7.jpg\n.. |filter_north| image:: images/filter_8.jpg\n.. |filter_select| image:: images/filter_9.jpg\n.. |filter_floor| image:: images/filter_10.jpg\n.. |branch_1| image:: images/branch_1.jpg\n.. |branch_2| image:: images/branch_2.jpg\n.. |branch_3| image:: images/branch_3.jpg\n.. |branch_4| image:: images/branch_4.jpg\n.. |flow_addfilter| image:: images/flow_addfilter.jpg\n.. |flow_definefilter| image:: images/flow_definefilter.jpg\n.. |flow_filterconfig| image:: images/flow_filterconfig.jpg\n.. |flow_sinusoid| image:: images/flow_sinusoid.jpg\n.. |flow_south| image:: images/flow_south.jpg\n.. |flow_southplugin| image:: images/flow_southplugin.jpg\n.. |flow_actionbar| image:: images/flow_actionbar.jpg\n.. |flow_southcontrols| image:: images/flow_southcontrols.jpg\n.. |flow_southhover| image:: images/flow_southhover.jpg\n.. |flow_southmenu| image:: images/flow_southmenu.jpg\n.. |flow_filterdone| image:: images/flow_filterdone.jpg\n.. |flow_filteradded| image:: images/flow_filteradded.jpg\n.. |flow_dragging| image:: images/flow_dragging.jpg\n.. |flow_reordered| image:: images/flow_reordered.jpg\n.. |flow_details| image:: images/flow_details.jpg\n\n.. Links\n.. |filter_plugins| raw:: html\n\n   <a href=\"fledge_plugins.html#filter-plugins\">Filter Plugins</a>\n\n\n***************\nProcessing Data\n***************\n\nWe have already seen that Fledge can collect data from a variety of sources, buffer it locally and send it on to one or more destination systems. It is also possible to process the data within Fledge to edit, augment or remove data as it traverses the Fledge system. 
In the same way Fledge makes extensive use of plugin components to add new sources of data and new destinations for that data, Fledge also uses plugins to add processing filters to the Fledge system.\n\nWhy Use Filters?\n================\n\nThe concept behind filters is to create a set of small, useful pieces of\nfunctionality that can be inserted into the data flow from the south data\ningress side to the north data egress side. Making these elements small\nand dedicated to a single task increases the re-usability of the filters\nand greatly improves the chances that a new requirement can be satisfied\nby creating a filter pipeline from existing components, or by augmenting\nexisting components with any incremental processing required. The\nultimate aim is to be able to create new applications within Fledge by\nmerely configuring filters from the existing pool of available filters\ninto a suitable pipeline, without the need to write any new code.\n\nWhat Can Be Done?\n=================\n\nData processing is done via plugins that are known as *filters* in Fledge; it is therefore not possible to give a definitive list of all the different processing that can occur, as the design intent is that the set of filters is expandable by the user. The general types of things that can be done are:\n\n  - **Modify a value in a reading**. This could be as simple as applying a scale factor to convert from one measurement scale to another, or a more complex mathematical operation.\n  - **Modify asset or datapoint names**. Perform a simple textual substitution in order to change the name of an asset or a data point within that asset.\n  - **Add a new calculated value**. A new value can be calculated from a set of values, either over a time period or based on a combination of different values, e.g. calculate power from voltage and current.\n  - **Add metadata to an asset**. 
This allows data such as units of measurement or information about the data source to be added to the data.\n  - **Compress data**. Only send data forward when the data itself shows significant change from previous values. This can be a useful technique to save bandwidth on low-bandwidth or high-cost network connections.\n  - **Conditionally forward data**. Only send data when a condition is satisfied, or send low-rate data unless some *interesting* condition is met.\n  - **Data conditioning**. Remove data from the data stream if the values are suspect or outside of reasonable conditions.\n\nWhere Can It Be Done?\n=====================\n\nFilters can be applied in two locations in the Fledge system:\n\n  - In the south service as data arrives in Fledge and before it is added to the storage subsystem for buffering.\n  - In the north tasks as the data is sent out to the upstream systems that receive data from the Fledge system.\n\nMore than one filter can be added to a single south or north within a Fledge instance. Filters are placed in an ordered pipeline of filters that are applied to the data in the order of the pipeline. The output of the first filter becomes the input to the second. Filters can thus be combined to perform complex sets of operations on a particular data stream into Fledge or out of Fledge.\n\nThe same filter plugin can appear in multiple places within a filter pipeline. A different instance is created for each occurrence, and each has its own configuration. Pipelines can also contain branches to allow parallel processing of data.\n\nAdding a South Filter\n---------------------\n\nIn the following example we will add a filter to a south service. The filter we will use is the *expression* filter and we will convert the incoming value to a logarithmic scale. 
The south plugin used in this simple example is the *sinusoid* plugin that creates a simulated sine wave.\n\nThe process starts by selecting the *South* services in the Fledge GUI from the left-hand menu bar. Then click on the south service of interest. This will display a dialog that allows the south service to be edited.\n\n+----------------+\n| |filter_south| |\n+----------------+\n\nTowards the bottom of this dialog is a section labeled *Applications* with a + icon to the right. Select the + icon to add a filter to the south service. A filter wizard is now shown that allows you to select the filter you wish to add and give that filter a name.\n\n+--------------+\n| |filter_add| |\n+--------------+\n\nSelect the *expression* filter and enter a name in the dialog. Now click on the *Next* button. A new page in the wizard appears that allows the configuration of the filter.\n\n+---------------------+\n| |filter_expression| |\n+---------------------+\n\nIn the case of our expression filter we should add the expression we wish to execute, *log(sinusoid)*, and the name of the datapoint in which to put the result, *LogSine*. We can also choose to enable or disable the execution of this filter. We will enable it and click on *Done* to complete adding the filter.\n\nClick on *Save* in the south edit dialog and our filter is now installed and running.\n\nIf we select the *Assets & Readings* option from the menu bar we can examine the sinusoid asset and view a graph of that asset. We will now see a second datapoint has been added, *LogSine*, which is the result of executing our expression in the filter.\n\n+---------------+\n| |filter_data| |\n+---------------+\n\nA second filter can be added in the same way, for example a *metadata* filter to create a pipeline. 
Now when we go back and view the south service we see two applications in the dialog.\n\n+-------------------+\n| |filter_pipeline| |\n+-------------------+\n\nReordering Filters\n~~~~~~~~~~~~~~~~~~\n\nThe order in which the filters are applied can be changed in the south service dialog by clicking and dragging one filter above another in the *Applications* section of the dialog.\n\n+------------------+\n| |filter_reorder| |\n+------------------+\n\nFilters are always executed in top to bottom order. In some cases the order in which a filter is executed may not matter; in others it can have a significant effect on the result.\n\nEditing Filter Configuration\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nA filter's configuration can be altered from the south service dialog by selecting the down arrow to the right of the filter name. This will open the edit area for that filter and show the configuration that can be altered.\n\n+---------------+\n| |filter_edit| |\n+---------------+\n\nYou can also remove a filter from the pipeline of filters by selecting the trash can icon at the bottom right of the edit area for the filter.\n\nAdding Filters To The North\n---------------------------\n\nFilters can also be added to the north in the same way as the south. The same set of filters can be applied; however, some may be less useful in the north than in the south as they apply to all assets that are sent north.\n\nIn this example we will use the metadata filter to label all the data that goes north as coming via a particular Fledge instance. As with the *South* service we start by selecting our north task from the *North* menu item in the left-hand menu bar.\n\n+----------------+\n| |filter_north| |\n+----------------+\n\nAt the bottom of the dialog there is an *Applications* area; you may have to scroll the dialog to find it. Click on the + icon. A selection dialog appears that allows you to select the filter to use. 
Select the *metadata* filter.\n\n+-----------------+\n| |filter_select| |\n+-----------------+\n\nAfter clicking *Next* you will be shown the configuration page for the particular filter you have chosen. We will edit the JSON that defines the metadata tags to add and set a name of *floor* and a value of *1*.\n\n+----------------+\n| |filter_floor| |\n+----------------+\n\nAfter enabling and clicking on *Done* we save the north changes. All assets sent to this PI Server connection will now be tagged with the tag \"floor\" and value \"1\".\n\nAlthough this is a simple example of labeling data, other things can be done here, such as limiting the rate we send data to the PI Server until an *interesting* condition becomes true, perhaps to save costs on an expensive link or to prevent a network becoming loaded until normal operating conditions resume. Another option might be to block particular assets from being sent on this link; this could be useful if you have two destinations and you wish to send a subset of assets to each.\n\nThis example used a PI Server as the destination; however, the same mechanism and filters may be used for any north destination.\n\nGraphical Pipeline Development\n------------------------------\n\nFrom version 1.0 up until version 2.4 of Fledge, the user interface for creating and editing pipelines was a purely grid-based interface, as illustrated above. In version 2.4 an option was introduced that allowed a more graphical approach to creating and managing data pipelines. This option was activated via the *Settings* menu option and is called the *Flow editor*. Enabling this will change the user interface for pipeline management.\n\n.. note::\n\n   As of version 3.0 the graphical flow editor will be the default user interface mode. 
The previous interface is still available and may be selected by turning off the flow editor in the Settings menu.\n\nAdding A Pipeline\n~~~~~~~~~~~~~~~~~\n\nAdding a pipeline, for example in the south, is done by navigating to the *South* menu item. A page will appear that shows all of the current south services, or pipelines, that have been configured.\n\n+--------------+\n| |flow_south| |\n+--------------+\n\n.. note::\n\n   The north services and tasks are presented in the same way. The interactions are the same as those described here for the south services.\n\nTo add a new service, click on the dotted + icon in the top left corner. You will then be presented with a dialog that allows you to choose the south plugin to use to ingest data. If working on a north service or task you will choose the plugin to use to send data to the system north of Fledge.\n\n+--------------------+\n| |flow_southplugin| |\n+--------------------+\n\nThis dialog will guide you through a number of steps to configure the south plugin. Once complete, you will be presented with your new south pipeline, which consists of a data ingestion plugin and a connection to the internal storage buffer. The example below shows the sinusoid south plugin in use.\n\n+-----------------+\n| |flow_sinusoid| |\n+-----------------+\n\nPlugin Interaction\n~~~~~~~~~~~~~~~~~~\n\nThere are a number of ways of interacting with the south and north plugins. 
If you hover over the plugin it will display a count of the number of readings processed by the pipeline and the number of distinct asset names observed in the pipeline.\n\n+-------------------+\n| |flow_southhover| |\n+-------------------+\n\nThe subscript number is the count of distinct asset names in the pipeline.\n\nThe south plugin also has a number of controls and status indicators around the periphery of the display, as well as labels showing the service name and the name of the south ingest plugin.\n\n+----------------------+\n| |flow_southcontrols| |\n+----------------------+\n\n  - **Service Status** - a coloured indicator of the current monitored status:\n\n    - Green indicates the service is running and the pipeline is processing data.\n\n    - Yellow indicates the service has started to show signs of failure.\n\n    - Red indicates the service has failed.\n\n    - Grey indicates the service is not enabled.\n\n  - **Enable Control** - a toggle control that can be used to enable and disable the service.\n\n  - **Configuration** - a control that can be used to display the configuration dialog for the plugin. This allows the configuration to be changed for an existing south plugin.\n\n  - **Menu** - enable the display of the context menu for the service:\n\n    +------------------+\n    | |flow_southmenu| |\n    +------------------+\n\n    - Readings - display the readings that are buffered in the storage service as a result of this pipeline.\n\n    - Export Readings - export the buffered readings for this pipeline to a CSV file.\n\n    - Delete - delete the service.\n\n  - **Display Readings** - display the readings that are buffered in the storage service as a result of this pipeline.\n\n  - **Show Logs** - display the logs written by this service.\n\n  - **Help** - show the online documentation of this south plugin. 
The online documentation will be shown in a new browser tab.\n\nThese same controls and status indicators are also available in the south page that shows all of the current south services that are configured within the Fledge instance.\n\n+--------------+\n| |flow_south| |\n+--------------+\n\nAdding Filters\n~~~~~~~~~~~~~~\n\nAdding a new filter to a pipeline is a simple process of dragging the blank filter icon from the bottom left of the canvas and dropping it on one of the connecting lines in the filter pipeline.\n\n+------------------+\n| |flow_addfilter| |\n+------------------+\n\nOnce the new filter has been dropped onto the connection, the pipeline will redraw and show an empty filter in the pipeline.\n\n+--------------------+\n| |flow_filteradded| |\n+--------------------+\n\nThe dashed outline of the filter signifies that the filter has yet to be defined. Its place within the pipeline is set, but the actual filter to be inserted has not been selected and it has not been configured. Clicking on the + icon will bring up the filter selection dialog.\n\n+---------------------+\n| |flow_definefilter| |\n+---------------------+\n\nSelect the filter plugin to use from the list of plugins given. If a filter you need has not been installed you may install it by clicking on the *Install from available plugins* link below the list.\n\nGive the filter a name. Names must be unique within the Fledge instance. Once complete click on the *Next* button to configure your chosen filter plugin.\n\nHere we have assumed you selected the *RMS* plugin; the configuration screen for that plugin is shown.\n\n+---------------------+\n| |flow_filterconfig| |\n+---------------------+\n\nConfigure your plugin and then click on *Done*. The filter screen will be displayed with the empty filter now replaced with your newly defined and configured filter.\n\n+-------------------+\n| |flow_filterdone| |\n+-------------------+\n\n.. 
note::\n\n   The colour of the filter component's outline is used to distinguish the type of element it is. South plugins are shown in green and filters in yellow. Filters also display a \"funnel\" icon in the centre of the element.\n\nControls similar to those on the south plugin are shown around the periphery of the filter icon. There is no count display when you hover over the filter and no readings or logs are available that are related just to this filter.\n\nThe context menu has a single entry, *Delete*. This will delete the filter from the pipeline.\n\nThe filter element also has two green dots that represent the ingress and egress points of the filter.\n\nReordering Filters\n~~~~~~~~~~~~~~~~~~\n\nUsing the flow-based pipeline editor the process of reordering filters within a pipeline is simply a case of clicking and dragging the filter you wish to move.\n\n+-----------------+\n| |flow_dragging| |\n+-----------------+\n\nIn this case the filter called *rename* has been dragged from its position between *Sinusoid* and *SineRMS* and will be dropped on the connection coming from *SineRMS* to the storage layer.\n\nReleasing the mouse button to drop the filter on the connection will cause the pipeline to be drawn with the new filter order.\n\n+------------------+\n| |flow_reordered| |\n+------------------+\n\nAccessing An Existing Pipeline\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nTo access an existing pipeline from the *South* or *North* screen you may either click on the service name or select the *Details* item from the service's context menu.\n\n+----------------+\n| |flow_details| |\n+----------------+\n\nCanvas Action Bar\n~~~~~~~~~~~~~~~~~\n\nAt the top of the working canvas for the pipeline flow editor a small action bar allows for some actions related to the drawing of the pipeline.\n\n+------------------+\n| |flow_actionbar| |\n+------------------+\n\nFrom left to right in the action bar the operations supported are:\n\n  - **Reload** - reloads the pipeline, 
discarding any incomplete actions.\n\n  - **Reset** - Resets the canvas view. This is useful if you have scrolled or zoomed the display and want to revert to seeing the entire pipeline.\n\n  - **Undo** - Undo the last action.\n\n  - **Redo** - Redo the last action.\n\n  - **Delete** - Deletes the currently selected item in the pipeline. This may be a filter or connection.\n\nAlso displayed in the bottom right of the working canvas is an overview \"map\" that shows the working area, the pipeline and the area currently shown.  You can click and drag on this to scroll the visible area of the pipeline canvas.\n\nPipeline Branching\n------------------\n\nIt is possible using the graphical pipeline development view to create pipelines that are not linear. This allows for data to be sent via two parallel branches of a pipeline and processed in parallel. This can be very useful when two sets of operations are required to be performed on the same data and you do not wish to have the data from both operations combined in a single asset or pipeline. An example of this might be the processing of vibration data. You may wish to run an FFT filter over the data to examine the frequency distribution of the signals, but also calculate the power in the vibration using the RMS filter. It is unlikely that you would want the results of these two different analyses combined in the same asset.\n\nTo create a branch, drag the blank filter icon from the bottom left of the canvas and drop it onto the canvas in free space.\n\n+------------+\n| |branch_1| |\n+------------+\n\nClick on the green circle on the output side of the pipeline element from which the branch is to start and drag out a connection.\n\n+------------+\n| |branch_2| |\n+------------+\n\n.. note::\n\n   The branch may start from the output of the south plugin or from the output of a filter. 
Outputs of filters are the rightmost green dots and inputs are the leftmost green dots.\n\nDrop the end of the connection on the input of the filter you just placed onto the canvas.\n\n+------------+\n| |branch_3| |\n+------------+\n\nNow click on the output of the filter and again drag out a connection. This time drop the end of the connection onto the input of the storage icon.\n\n+------------+\n| |branch_4| |\n+------------+\n\nNow that the filter is connected into the pipeline you can click on the *+* icon to select the filter plugin.\n\n.. note::\n\n   The workflow shown here connects the input side before the output side. These may actually be done in the opposite order.\n\nTo add more filters on your new branch you can drag and drop the new filter icon from the bottom left of the screen onto one of the connection arrows in the same way as you would with the linear pipeline when using the flow editor view.\n\n.. note::\n\n   It is very important when using branches that, if two or more branches write to the storage service, they do not write data with the same asset name and timestamp. Typically a branch should create an asset name that differs from those on any other branch. The easiest way to ensure this is to use the *rename* filter or the *asset* filter to change the name of the asset within each branch.\n\nSome Useful Filters\n===================\n\nA number of simple filters are worthy of mention here; a complete list of the currently available filters in Fledge can be found in the section |filter_plugins|.\n\nScale\n-----\n\nThe filter *fledge-filter-scale* applies a scale factor and offset to the numeric values within an asset. This is useful for operations such as changing the unit of measurement of a value. An example might be to convert a temperature reading from Centigrade to Fahrenheit.\n\nMetadata\n--------\n\nThe filter *fledge-filter-metadata* will add metadata to an asset. 
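\n\nConceptually, the filter merges a fixed set of name/value pairs into each reading that passes through it. The following sketch shows that merge in Python; the function name and reading layout are illustrative only, not the plugin implementation, which runs inside the Fledge filter pipeline.\n\n.. code-block:: python\n\n   def add_metadata(reading, tags):\n       # Return a copy of the reading with the metadata tags merged in\n       out = dict(reading)\n       out.update(tags)\n       return out\n\n   reading = {\"temperature\": 21.5}\n   print(add_metadata(reading, {\"unit\": \"Celsius\", \"floor\": \"1\"}))\n   # {'temperature': 21.5, 'unit': 'Celsius', 'floor': '1'}\n\n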
This could be used to add information such as unit of measurement, machine data (make, model, serial no) or the location of the asset to the data.\n\nDelta\n-----\n\nThe filter *fledge-filter-delta* allows duplicate data to be removed, only forwarding data that changes by more than a configurable percentage. This can be useful if a value does not change often and there is a desire not to forward all the *similar* values in order to save network bandwidth or reduce storage requirements.\n\nRate\n----\n\nThe filter *fledge-filter-rate* is similar to the delta filter above; however, it forwards data at a fixed rate that is lower than the rate of the incoming data, but can send full rate data should an *interesting* condition be detected. The filter is configured with a rate at which to send data; the values sent at that rate are an average of the values seen since the last value was sent.\n\nA rate of one reading per minute, for example, would average all the values for one minute and then send that average as the reading at the end of that minute. A condition can be added; when that condition is triggered, all data is forwarded at the full rate of the incoming data until a further condition is triggered that causes the reduced rate to be resumed.\n\n"
  },
  {
    "path": "docs/quick_start/backup.rst",
    "content": ".. Images\n.. |backup| image:: ../images/backup.JPG\n\nBacking up and Restoring Fledge\n=================================\n+----------+\n| |backup| |\n+----------+\n\nYou can make a complete backup of all Fledge data and configuration.  To do this, click on \"Backup & Restore\" in the left menu bar. This screen will show a list of all backups on the system and the time they were created.\nTo make a new backup, click the \"Backup\" button in the upper right corner of the screen.  You will briefly see a \"Running\" indicator in the lower left of the screen.  After a period of time, the new backup will appear in the list.  You may need to click the refresh button in the upper left of the screen to refresh the list.\nYou can restore, delete or download any backup simply by clicking the appropriate button next to the backup in the list.\n\n"
  },
  {
    "path": "docs/quick_start/datasources.rst",
    "content": ".. Images\n.. |south_services| image:: ../images/south_services.JPG\n.. |south_service_config| image:: ../images/south_service_config.JPG\n\nManaging Data Sources\n=====================\n+------------------+\n| |south_services| |\n+------------------+\n\nData sources are managed from the South Services screen.  To access this screen, click on “South” from the menu bar on the left side of any screen.\n\nThe South Services screen displays the status of all data sources in the Fledge system.  Each data source will display its status, the data assets it is providing, and the number of readings that have been collected.\n\nAdding Data Sources\n###################\n\nTo add a data source, you will first need to install the plugin for that sensor type.  If you have not already done this, open a terminal session to your Fledge server.  Download the package for the plugin and enter::\n\n  sudo apt -y install PackageName\n\nOnce the plugin is installed, return to the Fledge GUI and click on “Add+” in the upper right of the South Services screen.  Fledge will display a series of screens to add the data source:\n\n1. The first screen will ask you to select the plugin for the data source from the list of installed plugins.  If you do not see the plugin you need, refer to the Installing Fledge section of this manual.  In addition, this screen allows you to specify a display name for the data source.\n\n2. The second screen allows you to configure the plugin and the data assets it will provide. \n\n   .. note::\n\n      Every data asset in Fledge must have a unique name.  If you have multiple sensors using the same plugin, modify the asset names on this screen so they are unique. \n      \n   Some plugins allow you to specify an asset name prefix that will apply to all the asset names for that sensor. Refer to the individual plugin documentation for descriptions of the fields on this screen.\n\n3. 
If you modify any of the configuration fields, click on the “save” button to save them.\n\n4. The final screen allows you to specify whether the service will be enabled immediately for data collection or await enabling in the future.\n\nConfiguring Data Sources\n########################\n+------------------------+\n| |south_service_config| |\n+------------------------+\n\nTo modify the configuration of a data source, click on its name in the South Services screen. This will display a list of all parameters available for that data source.  If you make any changes, click on the “save” button in the top panel to save the new configuration.  Click on the “x” button in the upper right corner to return to the South Services screen.\n\nEnabling and Disabling Data Sources\n###################################\n\nTo enable or disable a data source, click on its name in the South Services screen. Under the list of data source parameters, there is a check box to enable or disable the service.  If you make any changes, click on the “save” button in the bottom panel near the check box to save the new configuration.\n"
  },
  {
    "path": "docs/quick_start/gui.rst",
    "content": ".. Images\n.. |login| image:: ../images/gui_login.jpg\n.. |dashboard| image:: ../images/dashboard.JPG\n.. |DashboardSearch| image:: ../images/DashboardSearch.jpg\n\n.. Links\n.. |secure| raw:: html\n\n   <a href=\"../securing_fledge.html\">Securing Fledge</a>\n\nRunning the Fledge GUI\n=======================\n\nFledge offers an easy-to-use, browser-based GUI. To access the GUI, open your browser and enter the IP address of the Fledge server into the address bar. This will display the Fledge dashboard.\n\n.. note::\n\n   As of version 3.0 of Fledge and onward the user must log in in order to access Fledge. Two default usernames, *admin* and *user*, are created on installation. The password for both of these is set to *fledge*. It is strongly recommended to change both passwords at the earliest opportunity.\n\n+---------+\n| |login| |\n+---------+\n\nEnter a username and password to authenticate; unless changed, the username to use is *admin* with a password of *fledge*.\n\nThe *Setup Instance* link can be used to configure the URL of the Fledge instance if you have installed the Fledge GUI on a different host from Fledge itself.\n\nTo learn more about the Fledge security features please see the |secure| section of the documentation.\n\nYou can easily use the Fledge UI to monitor multiple Fledge servers. To view and manage a different server, click \"Settings\" in the left menu bar. In the \"Connection Setup\" pane, enter the IP address and port number for the new server you wish to manage. Click the \"Set the URL & Restart\" button to switch the UI to the new server.\n\nIf you are managing a very lightweight server or one that is connected via a slow network link, you may want to reduce the UI update frequency to minimize load on the server and network.  You can adjust this rate in the \"GUI Settings\" pane of the Settings screen. 
While the graph rate and ping rate can be adjusted individually, in general you should set them to the same value.\n\nFledge Dashboard\n#################\n+-------------+\n| |dashboard| |\n+-------------+\n\nThis screen provides an overview of Fledge operations. You may view various graphs of the statistics that are gathered as part of the operation of Fledge. Using these statistical displays it is possible to ascertain the operational state of Fledge and the services it is running.\n\nYou can customize the information and time frames displayed on this screen using the drop-down menus in the upper right corner. The information you select will be displayed in a series of graphs.\n\nYou can choose to view a graph of any of the statistics for sensor readings being collected by the Fledge system or the ingest rates for any of the Fledge services. In addition, you can view graphs of the following system-wide statistics:\n\n  - **READINGS:** The total number of data readings collected by Fledge since system boot.\n  - **BUFFERED:** The number of data readings currently in the Fledge buffer.\n  - **DISCARDED:** Number of data readings discarded before being buffered (due to data errors, for example).\n  - **UNSENT:** Number of data readings that were not sent successfully.\n  - **PURGED:** The total number of data readings that have been purged from the system.\n  - **UNSNPURGED:** The number of data readings that were purged without being sent to a North service.\n\nIn a Fledge system that has many services, or where multiple assets are ingested per service, the list of available statistics to display can become long very quickly. 
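\n\nThe statistic search behaves, in effect, like a case-insensitive substring match over the statistic names. A minimal sketch of such filtering, using a few illustrative names:\n\n.. code-block:: python\n\n   stats = [\"READINGS\", \"BUFFERED\", \"SINUSOID-Ingest\", \"PURGED\"]\n\n   def search(names, text):\n       # Keep the names containing the text, ignoring case\n       return [n for n in names if text.lower() in n.lower()]\n\n   print(search(stats, \"sinu\"))   # ['SINUSOID-Ingest']\n\n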
In order to aid the process of finding the statistic you wish to view in the dashboard a search option is provided.\n\n+-------------------+\n| |DashboardSearch| |\n+-------------------+\n\nSimply type part of the name you wish to locate in the drop-down and the content of the drop-down list will be filtered to show just those statistics that match the string you type.\n\n.. note::\n\n   The filtering is independent of case, both in the string you enter and in the name of the matching statistic.\n\n"
  },
  {
    "path": "docs/quick_start/index.rst",
    "content": "*****************\nQuick Start Guide\n*****************\n\n.. toctree::\n\n    platforms\n    installing\n    starting\n    troubleshooting\n    gui\n    datasources\n    viewing\n    north\n    backup\n    support\n    update\n    uninstalling\n"
  },
  {
    "path": "docs/quick_start/installing.rst",
    "content": ".. Links\n.. |Debian PostgreSQL| raw:: html\n\n   <a href=\"../storage.html#ubuntu-install\">For Debian Platform</a>\n\n.. |RPM PostgreSQL| raw:: html\n\n   <a href=\"../storage.html#red-hat-install\">For Red Hat Platform</a>\n\n.. |Configure Storage Plugin| raw:: html\n\n   <a href=\"../storage.html#configuring-the-storage-plugin\">Configure Storage Plugin from GUI</a>\n\n.. |Supported Platforms| raw:: html\n\n   <a href=\"platforms.html\">Supported Platforms</a>\n\n\nInstalling Fledge\n==================\n\nFledge is extremely lightweight and can run on inexpensive edge devices, sensors and actuator boards.  For the purposes of this manual, we assume that all services are running on a Raspberry Pi running the Bullseye operating system. Be sure your system has plenty of storage available for data readings.\n\nIf your system does not have a supported version of the Raspberry Pi Operating System pre-installed, you can find instructions on downloading and installing it at https://www.raspberrypi.org/downloads/operating-systems/.  After installing a supported operating system, ensure you have the latest updates by executing the following commands on your Fledge server::\n\n  sudo apt-get update\n  sudo apt-get upgrade\n  sudo apt-get update\n\n.. include:: instructions.txt\n\nIn general, Fledge installation will require the following packages:\n\n- Fledge core\n- Fledge user interface\n- One or more Fledge South services\n- One or more Fledge North services (OSI PI and OCS north services are included in Fledge core)\n\nUsing the package repository to install Fledge\n###############################################\n\nIf you choose to use the Dianomic Systems package repository to install the packages you will need to follow the steps outlined below for the particular platform you are using.\n\nUbuntu or Debian\n~~~~~~~~~~~~~~~~\n\nOn an Ubuntu or Debian system, including the Raspberry Pi, the package manager that is supported is *apt*. 
You will need to add the Dianomic Systems archive server into the configuration of apt on your system. The first thing that must be done is to add the key that is used to verify the package repository. To do this run the command\n\n.. code-block:: console\n\n   wget -q -O - http://archives.fledge-iot.org/KEY.gpg | sudo apt-key add -\n\nOnce complete you can add the repository itself into the apt configuration file /etc/apt/sources.list. The simplest way to do this is to use the *add-apt-repository* command. The exact command will vary between systems:\n\n  - Raspberry Pi does not have an *add-apt-repository* command; the user must edit the apt sources file manually\n\n    .. code-block:: console\n\n       sudo vi /etc/apt/sources.list\n\n    and add the line\n\n    .. code-block:: console\n\n       deb  http://archives.fledge-iot.org/latest/bullseye/aarch64/ /\n\n    to the end of the file.\n\n  - Users utilizing x86_64 or amd64 architectures on Ubuntu should confirm compatibility with their particular platform and run the following command:\n\n    .. code-block:: console\n\n       sudo add-apt-repository \"deb http://archives.fledge-iot.org/latest/ubuntu2204/x86_64/ / \"\n\n  - Users utilizing aarch64 or arm64 architectures on Ubuntu should confirm compatibility with their particular platform and run the following command:\n\n    .. code-block:: console\n\n       sudo add-apt-repository \"deb http://archives.fledge-iot.org/latest/ubuntu2204/aarch64/ / \"\n\n.. note::\n\n       Explore other |Supported Platforms|\n\n\nOnce the repository has been added you must inform the package manager to go and fetch a list of the packages it supports. To do this run the command\n\n.. code-block:: console\n\n   sudo apt -y update\n\nYou are now ready to install the Fledge packages. You do this by running the command\n\n.. 
code-block:: console\n\n   sudo apt -y install *package*\n\nYou may also install multiple packages in a single command. To install the base fledge package, the fledge user interface and the sinusoid south plugin run the command\n\n.. code-block:: console\n\n   sudo DEBIAN_FRONTEND=noninteractive apt -y install fledge fledge-gui fledge-south-sinusoid\n\n\nInstalling Fledge downloaded packages\n######################################\n\nAssuming you have downloaded the packages from the download link given above, use SSH to log in to the system that will host Fledge services. For each Fledge package that you choose to install, type the following command\n\n.. code-block:: console\n\n  sudo apt -y install <filename>\n\n.. note::\n\n  The downloaded files are named using the package name and the current version of the software. Therefore these names will change over time as new versions are released. At the time of writing the version of the Fledge package is 2.3.0, therefore the package filename is fledge_2.3.0_x86_64.deb on the X86 64bit platform. As a result the filenames shown in the following examples may differ from the names of the files you have downloaded.\n\nThe key packages to install are the Fledge core and the Fledge Graphical User Interface\n\n.. code-block:: console\n\n  sudo DEBIAN_FRONTEND=noninteractive apt -y install ./fledge_2.3.0_x86_64.deb\n  sudo apt -y install ./fledge-gui_2.3.0.deb\n\nYou will need to install one or more South plugins to acquire data.  You can either do this now or when you are adding the data source. For example, to install the plugin for the Sense HAT sensor board, type\n\n.. code-block:: console\n\n  sudo apt -y install ./fledge-south-sensehat_2.3.0_armv7l.deb\n\n.. note::\n\n  In this case we are showing the name for a package on the Raspberry Pi platform. 
The sensehat plugin is not supported on all platforms as it requires Raspberry Pi specific hardware connections.\n\nYou may also need to install one or more North plugins to transmit data.  Support for OSIsoft PI and OCS is included with the Fledge core package, so you don't need to install anything more if you are sending data to only these systems.\n\nFirewall Configuration\n######################\n\nIf you are installing packages within a firewalled environment you will need to open a number of locations for outgoing connections. This will vary depending upon how you install the packages.\n\nIf you are downloading or installing packages on the firewalled machine, that machine will need to access *archives.fledge-iot.org* to be able to pull the Fledge packages. This will use the standard HTTP port, port 80.\n\nIt is also recommended that you allow the machine to access the source of packages for your Linux installation. This allows you to keep the machine updated with important patches and also for the installation of any Linux packages that are required by Fledge or the plugins that you load.\n\nAs part of the installation of the Python components of Fledge a number of Python packages are installed using the *pip* utility. In order to allow this you need to open access to a set of locations that pip will pull packages from. The set of locations required is\n\n  - python.org\n\n  - pypi.org\n\n  - pythonhosted.org\n\nIn all cases the standard HTTPS port, 443, is used for communication and is the only port that needs to be opened.\n\n.. 
note::\n\n   If you download packages on a different machine and copy them to your machine behind the firewall you must still open the access for pip to the Python package locations.\n\nChecking package installation\n#############################\n\nTo check what packages have been installed, ssh into your host system and use the dpkg command::\n\n  dpkg -l | grep 'fledge'\n\n\nRun with PostgreSQL\n###################\n\nTo start Fledge with PostgreSQL, first you need to install the PostgreSQL package explicitly. See the links below for setup\n\n|Debian PostgreSQL|\n\n|RPM PostgreSQL|\n\nYou also need to change the value of the Storage plugin. See |Configure Storage Plugin| or use the curl command below\n\n.. code-block:: console\n\n    $ curl -sX PUT localhost:8081/fledge/category/Storage/plugin -d '{\"value\": \"postgres\"}'\n    {\n      \"description\": \"The main storage plugin to load\",\n      \"type\": \"string\",\n      \"order\": \"1\",\n      \"displayName\": \"Storage Plugin\",\n      \"default\": \"sqlite\",\n      \"value\": \"postgres\"\n    }\n\nNow restart Fledge. Thereafter Fledge will be running with PostgreSQL.\n\n\nUsing Docker Containerizer to install Fledge\n#############################################\n\nFledge Docker containers are provided in a private repository. This repository has no authentication or encryption associated with it.\n\nThe following steps describe how to install Fledge using these containers:\n\n- Edit the daemon.json file, whose default location is /etc/docker/daemon.json on Linux. If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following contents:\n\n.. code-block:: console\n\n    { \"insecure-registries\":[\"54.204.128.201:5000\"] }\n\n- Restart Docker for the changes to take effect\n\n.. code-block:: console\n\n    sudo systemctl restart docker.service\n\n- Check using command\n\n.. 
code-block:: console\n\n    docker info\n\nYou should see the following output:\n\n.. code-block:: console\n\n    Insecure Registries:\n    54.204.128.201:5000\n    127.0.0.0/8\n\nYou may also refer to the Docker documentation `here <https://docs.docker.com/registry/insecure/>`_.\n\nUbuntu 20.04\n~~~~~~~~~~~~\n\n- To pull the Docker image\n\n.. code-block:: console\n\n    docker pull 54.204.128.201:5000/fledge:latest-ubuntu2004\n\n- To run the Docker container\n\n.. code-block:: console\n\n    docker run -d --name fledge -p 8081:8081 -p 1995:1995 -p 8082:80 54.204.128.201:5000/fledge:latest-ubuntu2004\n\nHere the GUI is forwarded to port 8082 on the host machine; it can be any port, and may be omitted if port 80 is free.\n\n- It is possible to check if Fledge and the Fledge GUI are running by using the following commands on the host machine\n\n*Fledge*\n\n.. code-block:: console\n\n    curl -sX GET http://localhost:8081/fledge/ping\n\n*Fledge GUI*\n\n.. code-block:: console\n\n    http://localhost:8082\n\n- To attach to the running container\n\n.. code-block:: console\n\n    docker exec -it fledge bash\n\n.. note::\n    To set up Ubuntu 22.04, simply replace ubuntu2004 with ubuntu2204.\n    Currently, images are only available for Ubuntu versions 20.04 and 22.04 and are compatible with both Intel and AMD architectures."
  },
  {
    "path": "docs/quick_start/instructions.txt",
    "content": "You can obtain Fledge in three ways:\n\n- Dianomic Systems hosts a package repository that allows the Fledge packages to be loaded using the system package manager. This is the recommended method for long term use of Fledge as it gives access to all the Fledge plugins and provides a route for easy upgrade of the Fledge packages. This also has the advantage that once the repository is configured you are able to install new plugins directly from the Fledge user interface without the need to resort to the Linux command line.\n- Dianomic Systems offers pre-built, certified binaries of Fledge for Debian for either Intel or ARM architectures. This is perhaps the simplest method for users unfamiliar with Linux. You can download the complete set of packages from http://dianomic.com/download-fledge/.\n- As source code from https://github.com/fledge-iot/.  Instructions for downloading and building Fledge source code can be found in the Fledge Developer’s Guide.\n"
  },
  {
    "path": "docs/quick_start/north.rst",
    "content": ".. Images\n.. |north_services| image:: ../images/north_services.JPG\n.. |pi_plugin_config| image:: ../images/pi_plugin_config.JPG\n.. |NorthFailure| image:: ../images/NorthFailure.jpg\n\n.. Links\n.. |OMF| raw:: html\n\n   <a href=\"../plugins/fledge-north-OMF/index.html\">OMF</a>\n\nSending Data to Other Systems\n=============================\n+------------------+\n| |north_services| |\n+------------------+\n\nData destinations are managed from the North Services screen.  To access this screen, click on “North” from the menu bar on the left side of any screen.\n\nThe North Services screen displays the status of all data sending processes in the Fledge system.  Each data destination will display its status and the number of readings that have been collected.\n\nAdding Data Destinations\n########################\n\nTo add a data destination, click on “Create North Instance+” in the upper right of the North Services screen. Fledge will display a series of 3 screens to add the data destination:\n\n1. The first screen will ask you to select the plugin for the data destination from the list of installed plugins. If you do not see the plugin you need, refer to the Installing Fledge section of this manual. In addition, this screen allows you to specify a display name for the data destination. You can also specify how frequently data will be forwarded to the destination in days, hours, minutes and seconds. Enter the number of days in the interval in the left box and the number of hours, minutes and seconds in the format HH:MM:SS in the right box.\n2. The second screen allows you to configure the plugin and the data assets it will send. See the section below for specifics of configuring a PI, EDS, OCS or ADH destination.\n\n   .. note::\n\n      An option exists to run a service rather than a task in the north. If run as a service there is no schedule and data is sent as soon as it is available. 
It is recommended, if you have no connection restrictions, to run the north as a service rather than a task as this will give the best performance.\n\n3. The final screen loads the plugin. You can specify whether it will be enabled immediately for data sending or to await enabling in the future.\n\n.. note::\n\n   Fledge supports multiple plugins to send to different north destinations. Multiple north tasks and/or services may be created to send data simultaneously to multiple destinations.\n\nConfiguring Data Destinations\n#############################\n\nTo modify the configuration of a data destination, click on its name in the North Services screen. This will display a list of all parameters available for that data destination.  If you make any changes, click on the “save” button in the top panel to save the new configuration.  Click on the “x” button in the upper right corner to return to the North Services screen.\n\nEnabling and Disabling Data Destinations\n########################################\n\nTo enable or disable a data destination, click on its name in the North Services screen. Under the list of data destination parameters, there is a check box to enable or disable the service.  If you make any changes, click on the “save” button in the bottom panel near the check box to save the new configuration.\n\nFailure to Send Data\n####################\n\nIf Fledge is unable to send data to another system via a north service it will write a log message to the error log and also raise an alert. These alerts are shown in the status bar of the Fledge user interface.\n\n+----------------+\n| |NorthFailure| |\n+----------------+\n\nThe failure could be an incorrect configuration within Fledge for the particular north plugin or it may be caused by the upstream system being unavailable or there being no network route to the upstream system.\n\nOnce the failure is cleared, the alert will be removed from the status bar.\n\n.. 
note::\n\n   If Fledge is shut down and the problem has cleared by the time it is later restarted, the alert may persist in the status bar and should be manually cleared. If the alert is manually cleared but the problem persists then the alert will be recreated.\n\nUsing the OMF plugin\n####################\n\nAVEVA PI (formerly OSIsoft PI) data historians are one of the most common destinations for Fledge data.  Fledge supports the full range of AVEVA historians: the PI System, Edge Data Store (EDS), OSIsoft Cloud Services (OCS) and AVEVA Data Hub (ADH). To send data to a PI Server you may use either the older PI Connector Relay or the newer PI Web API OMF endpoint. It is recommended that new users use the PI Web API OMF endpoint rather than the Connector Relay, which is no longer supported by AVEVA.\n\n.. note::\n\n   The AVEVA Data Hub is now known as CONNECT. See the |OMF| Plugin.\n"
  },
  {
    "path": "docs/quick_start/platforms.rst",
    "content": ".. |br| raw:: html\n\n   <br /><br />\n\n\nSupported Platforms\n===================\n\nFledge can be built or installed on one of the following Linux distributions:\n\n.. list-table::\n    :header-rows: 1\n\n    * - Operating System\n      - System Architecture\n      - Repository Archive URL\n      - Docker Image Identifier\n    * - Ubuntu 24.04\n      - aarch64 |br| x86_64\n      - http://archives.fledge-iot.org/latest/ubuntu2404/aarch64 |br| http://archives.fledge-iot.org/latest/ubuntu2404/x86_64\n      - latest-ubuntu2404-aarch64 |br| latest-ubuntu2404\n    * - Ubuntu 22.04\n      - aarch64 |br| x86_64\n      - http://archives.fledge-iot.org/latest/ubuntu2204/aarch64 |br| http://archives.fledge-iot.org/latest/ubuntu2204/x86_64\n      - latest-ubuntu2204-aarch64 |br| latest-ubuntu2204\n    * - Ubuntu 20.04\n      - aarch64 |br| x86_64\n      - http://archives.fledge-iot.org/latest/ubuntu2004/aarch64 |br| http://archives.fledge-iot.org/latest/ubuntu2004/x86_64\n      - latest-ubuntu2004-aarch64 |br| latest-ubuntu2004\n    * - Raspberry Pi OS (bookworm)\n      - aarch64 |br| armv7l\n      - http://archives.fledge-iot.org/latest/bookworm/aarch64 |br| http://archives.fledge-iot.org/latest/bookworm/armv7l\n      - N/A |br| N/A\n    * - Raspberry Pi OS (bullseye)\n      - aarch64 |br| armv7l\n      - http://archives.fledge-iot.org/latest/bullseye/aarch64 |br| http://archives.fledge-iot.org/latest/bullseye/armv7l\n      - N/A |br| N/A\n\n"
  },
  {
    "path": "docs/quick_start/starting.rst",
    "content": "Starting and stopping Fledge\n=============================\n\nFledge administration is performed using the “fledge” command line utility.  You must first ssh into the host system.  The Fledge utility is installed by default in /usr/local/fledge/bin.\n\nThe following command options are available:\n\n  - **Start:** Start the Fledge system\n  - **Stop:** Stop the Fledge system\n  - **Status:** Lists currently running Fledge services and tasks\n  - **Reset:** Delete all data and configuration and return Fledge to factory settings\n  - **Kill:** Kill Fledge services that have not correctly responded to Stop\n  - **Help:** Describe Fledge options\n\nFor example, to start the Fledge system, open a session to the Fledge device and type::\n\n   /usr/local/fledge/bin/fledge start\n\nIf authentication is enabled, which is the default mode for Fledge version 3.0 onward, then a number of the commands require authentication. Authentication can be accomplished by several means:\n\n  - Set the environment variable *USERNAME* to be the user name.\n\n  - Pass the *-u* flag to the command to specify a user name.\n\n  - If neither of the above is done the user will be prompted to enter a user name.\n\nIn all cases the user will be prompted to enter a password. It is possible, but not recommended, to set an environment variable *PASSWORD* or pass the *-p* flag on the command line, with the plain text version of the password.\n\n.. code-block:: console\n\n   $ /usr/local/fledge/bin/fledge -u admin stop\n   Password:\n   Stopping Fledge..........\n   Fledge Stopped\n\n.. note::\n\n   The *start*, *status* and *help* commands do not require authentication.\n\nIt is also possible to use certificate based authentication to log in to the system. In this case the \"fledge\" command line utility should be passed the *-c* flag with the name of the certificate file to use to authenticate.\n\n.. 
code-block:: console\n\n   $ /usr/local/fledge/bin/fledge -c ~/.fledge/admin.cert stop\n   Stopping Fledge..........\n   Fledge Stopped\n\n.. note::\n\n   Extreme caution should be taken when storing certificate files to ensure that they are not readable by any other user on the system.\n\nFollowing a successful authentication attempt a time-based token is issued that allows the user to run further commands, for a limited time, without the need to authenticate again.\n"
  },
  {
    "path": "docs/quick_start/support.rst",
    "content": ".. Images\n.. |support| image:: ../images/support.JPG\n\nTroubleshooting and Support Information\n=======================================\n+-----------+\n| |support| |\n+-----------+\n\nFledge keeps detailed logs of system events for both auditing and troubleshooting.  To access them, click \"Logs\" in the left menu bar.  There are five logs in the system:\n\n  - **Audit:** Tracks all configuration changes and data uploads performed on the Fledge system.\n  - **Notifications:** If you are using the Fledge notification service this log will give details of notifications that have been triggered.\n  - **Packages:** This log will give you information about the installation and upgrade of Fledge packages for services and plugins.\n  - **System:** All events and scheduled tasks and their status.\n  - **Tasks:** The most recent scheduled tasks that have run and their status.\n\nIf you have a service contract for your Fledge system, your support technician may ask you to send system data to facilitate troubleshooting an issue.  To do this, click on “Support” in the left menu and then “Request New” in the upper right of the screen.  This will create an archive of information.  Click download to retrieve this archive to your system so you can email it to the technician.\n"
  },
  {
    "path": "docs/quick_start/troubleshooting.rst",
    "content": "Troubleshooting Fledge\n#######################\n\nFledge logs status and error messages to syslog.  To troubleshoot a Fledge installation using this information, open a session to the Fledge server and type::\n\n  grep -a 'fledge' /var/log/syslog | tail -n 20\n"
  },
  {
    "path": "docs/quick_start/uninstalling.rst",
    "content": "Package Uninstallation\n======================\n\nDebian Platform\n###############\n\nUse the ``apt`` or the ``apt-get`` command to uninstall Fledge:\n\n.. code-block:: console\n\n  sudo apt -y purge fledge\n\nRed Hat Platform\n################\n\n.. code-block:: console\n\n  sudo yum -y remove fledge\n\n.. note::\n    On the Debian platform you may notice the following warning in the last row of the package removal output:\n\n    dpkg: warning: while removing fledge, directory '/usr/local/fledge' not empty so not removed\n\nThis is due to the fact that the data directory (``/usr/local/fledge/data`` by default) has not been removed, in case you want to analyze or reuse the data later.\nIf you want to remove Fledge completely from your system, also remove the ``/usr/local/fledge`` directory with ``sudo rm -rf /usr/local/fledge``.\n"
  },
  {
    "path": "docs/quick_start/update.rst",
    "content": ".. Images\n.. |alert| image:: ../images/alert.jpg\n\nPackage Updates\n===============\n\nFledge will periodically check for updates to the various packages that are installed. If updates are available then this will be indicated by an indicator on the status bar at the top of the Fledge GUI.\n\n+---------+\n| |alert| |\n+---------+\n\nClicking on the *bell* icon will display the current system alerts, including the details of the packages available to be updated.\n\nInstalling Updates\n------------------\n\nUpdates must either be installed manually from the command line or via the Fledge API. To update via the API a call to the */fledge/update* endpoint should be made using the PUT method.\n\n.. code-block:: console\n\n   curl -X PUT http://localhost:8081/fledge/update\n\nIf the Fledge instance has been configured to require authentication then a valid authentication token must be passed in the request header and that authentication token must be for a user with administration rights on the instance.\n\n.. code-block:: console\n\n    curl -H \"authorization: <token>\" -X PUT http://localhost:8081/fledge/update\n\nManual updates can be done from the command line using the appropriate package manager for your Linux host. If using the *apt* package manager then the command would be\n\n.. code-block:: console\n\n   apt install --only-upgrade 'fledge*'\n\nOr for the *yum* package manager\n\n.. code-block:: console\n\n   yum upgrade 'fledge*'\n\n.. note::\n\n   These commands should be executed as the root user or using the sudo command.\n\n"
  },
  {
    "path": "docs/quick_start/viewing.rst",
    "content": ".. Images\n.. |viewing_data| image:: ../images/viewing_data.jpg\n.. |view_graph| image:: ../images/view_graph.jpg\n.. |view_hide| image:: ../images/view_hide.jpg\n.. |view_summary| image:: ../images/view_summary.jpg\n.. |view_tabular| image:: ../images/view_tabular.jpg\n.. |view_times| image:: ../images/view_times.jpg\n.. |view_spreadsheet| image:: ../images/view_spreadsheet.jpg\n.. |gui_settings| image:: ../images/gui_settings.jpg\n.. |graph_buttons| image:: ../images/view_buttons.jpg\n.. |graph_paused| image:: ../images/view_paused.jpg\n.. |multi_graph1| image:: ../images/multi_graph1.jpg\n.. |multi_graph2| image:: ../images/multi_graph2.jpg\n.. |multi_graph3| image:: ../images/multi_graph3.jpg\n.. |latest_icon| image:: ../images/latest_icon.jpg\n.. |current_icon| image:: ../images/current_icon.jpg\n.. |latest_graph| image:: ../images/latest_graph.jpg\n.. |most_recent_icon| image:: ../images/most_recent_icon.jpg\n.. |most_recent_data| image:: ../images/most_recent_data.jpg\n\n.. |br| raw:: html\n\n   <br />\n\nViewing Data\n############\n\n+----------------+\n| |viewing_data| |\n+----------------+\n\nYou can inspect all the data buffered by the Fledge system on the Assets page.  To access this page, click on “Assets & Readings” from the left-side menu bar.\n\nThis screen will display a list of every data asset in the system.  Alongside each asset are two icons; one to display a graph of the asset and another to download the data stored for that asset as a CSV file.\n\nDisplaying A Graph\n------------------\n\n.. image:: ../images/graph_icon.jpg\n   :align: left\n\nBy clicking on the graph button next to each asset name, you can view a graph of individual data readings. 
A graph will be displayed with a plot for each data point within the asset.\n\n+--------------+\n| |view_graph| |\n+--------------+\n\nIt is possible to change the time period to which the graph refers by use of the time entry field and time units drop down menu in the top right of the screen.\n\n+--------------+\n| |view_times| |\n+--------------+\n\nIt is also possible to change the default duration of a graph when it is first displayed. This is done via the *Settings* menu item.\n\n+----------------+\n| |gui_settings| |\n+----------------+\n\nThis can be useful when very high frequency data is ingested into the system as it will prevent the initial graph that is displayed from pulling large amounts of data from the system and slowing down the response of the system and the GUI.\n\nWhere an asset contains multiple data points each of these is displayed in a different colour. Graphs for particular data points can be toggled on and off by clicking on the key at the top of the graph. Those data points not shown will be indicated by a strike through the name of the data point.\n\n+-------------+\n| |view_hide| |\n+-------------+\n\nAdjusting The Timeframe\n~~~~~~~~~~~~~~~~~~~~~~~\n\nThere are a number of buttons on the top right of the graph window that can be used to control the time period of the graph that is shown. \n\n+-----------------+\n| |graph_buttons| |\n+-----------------+\n\nThe default behavior of the graph window is to show the data up to the current point in time for as far back in time as is defined by the duration as described above. As new data arrives it will be plotted on the right hand side of the graph, and the graph will scroll to the left. Using these control buttons this behavior can be changed.\n\nBefore you can use the two navigation buttons to navigate the graph you must first pause the graph update by clicking on the pause button. 
The two navigation buttons will now become active and the pause button will change to a play symbol.\n\n+----------------+\n| |graph_paused| |\n+----------------+\n\nIf you click on the play button the graph will once more switch to automatically updating and will show the time window for the current time.\n\nThe two double arrow buttons allow you to move backwards and forwards in time. \n\n.. image:: ../images/older.jpg\n   :align: left\n\nClicking the arrows facing left will move you back in time by the current window of data that is shown. This will result in older data being seen.\n\n.. image:: ../images/newer.jpg\n   :align: left\n\nClicking the right arrows moves you forwards in time; newer data will now be seen.\n\n|br|\n\nWhen an asset does not continuously ingest data you may need to move back in time in order to see the last data that was ingested for that asset. The interface provides a convenient shortcut to allow you to quickly navigate back in time to see the last data that was ingested.\n\n.. image:: ../images/current_icon.jpg\n   :align: left\n\nSimply click on the icon to the left of the time navigation buttons. The graph will change to show the latest data available for the chosen asset.\n\n|br|\n\n+----------------+\n| |latest_graph| |\n+----------------+\n\nNotice that the icon used to get this graph has now changed.\n\n.. image:: ../images/current_icon.jpg\n   :align: left\n\nClicking on the icon will cause the graph to return to showing the current time frame and refresh as data is freshly ingested.\n\n|br|\n\nViewing Multiple Assets\n~~~~~~~~~~~~~~~~~~~~~~~\n\nIt is also possible to overlay the graphs for other assets onto the asset you are viewing.\n\n+----------------+\n| |multi_graph1| |\n+----------------+\n\nUsing the pull down menu above the graph you may select another asset to add to the graph.\n\n+----------------+\n| |multi_graph2| |\n+----------------+\n\nAll the data points from that asset will then be added to the graph. 
Multiple assets may be chosen from this pull down in order to build up more complex sets of graphs, individual data points for any of the assets may be hidden as above, or an entire asset may be removed from the graph by clicking on the **x** next to the asset name.\n\n+----------------+\n| |multi_graph3| |\n+----------------+\n\nNon-graphical Data\n~~~~~~~~~~~~~~~~~~\n\nSome types of data are not capable of being displayed in a graph, such as string data and images. These are shown in separate tabs on the screen. String data will be shown in a *Tabular Data* tab.\n\n+----------------+\n| |view_tabular| |\n+----------------+\n\nA summary tab is also available; this will show the minimum, maximum and average values for each of the data points. Click on *Summary* to show the summary tab.\n\n+----------------+\n| |view_summary| |\n+----------------+\n\nDownload Data\n-------------\n\n.. image:: ../images/download_icon.jpg\n   :align: left\n\nBy clicking on the download icon adjacent to each asset you can download the stored data for the asset. The downloaded file is a CSV file that is designed to be loaded into a spreadsheet such as Excel, Numbers or OpenOffice Calc.\n\nThe file contains a header row with the names of the data points within the asset, the first column is always the timestamp when the reading was taken, the header for this being *timestamp*. The data is sorted in reverse chronological order, with the newest data first.\n\n+--------------------+\n| |view_spreadsheet| |\n+--------------------+\n\nMost Recent Data\n----------------\n\n.. image:: ../images/most_recent_icon.jpg\n   :align: left\n\nBy clicking on the most recent reading icon you can view just the latest values that have been read for the given asset. The data will be displayed in a tabular format.\n\n+--------------------+\n| |most_recent_data| |\n+--------------------+\n\nThis data will be automatically refreshed as new data arrives.\n"
  },
  {
    "path": "docs/requirements.txt",
    "content": "Sphinx==3.5.4\ndocutils<0.18\nJinja2<3.1\nurllib3==1.26.15\nsphinx-rtd-theme==1.3.0\n"
  },
  {
    "path": "docs/rest_api_guide/01_REST.rst",
    "content": ".. REST API Guide\n.. https://docs.google.com/document/d/1JJDP7g25SWerNVCxgff02qp9msHbqA9nt3RAFx8-Qng\n\n.. |br| raw:: html\n\n   <br />\n\n.. |ar| raw:: html\n\n   <div align=\"right\">\n\n.. Images\n\n\n.. Links\n\n\n.. =============================================\n\n\n********************\nThe Fledge REST API\n********************\n\nUsers, administrators and applications interact with Fledge via a REST API. This section presents a full reference of the API.\n\n.. note:: The Fledge REST API should not be confused with the internal REST API used by Fledge tasks and microservices to communicate with each other.\n\n\nIntroducing the Fledge REST API\n================================\n\nThe REST API is the route into the Fledge appliance; it provides all user and program interaction to configure, monitor and manage the Fledge system. A separate specification will define the contents of the API; in summary, however, it is designed to allow for: \n\n- The complete configuration of the Fledge appliance\n- Access to monitoring statistics for the Fledge appliance\n- User and role management for access to the API\n- Access to the data buffer contents\n\n\nPort Usage\n----------\n\nIn general Fledge components use dynamic port allocation to determine which port to use; the admin API, however, is an exception to this rule. The Admin API port has to be known to end-users and any user interface or management system that uses it; therefore the port on which the admin API listens must be consistent and fixed between invocations. This does not mean, however, that it cannot be changed by the user. The user must have the option to define the port on which the admin API listens. To achieve this the port will be stored in the configuration data for the admin API, using the configuration category *AdminAPI*, see Configuration. Administrators who have access to the appliance can find information regarding the port and the protocol to be used (i.e. 
HTTP or HTTPS) in the *pid* file stored in *$FLEDGE_DATA/var/run/*:\n\n.. code-block:: console\n\n  $ cat data/var/run/fledge.core.pid\n  { \"adminAPI\"  : { \"protocol\"  : \"HTTP\",\n                    \"port\"      : 8081,\n                    \"addresses\" : [ \"0.0.0.0\" ] },\n    \"processID\" : 3585 }\n  $\n\n\nFledge is shipped with a default port for the admin API to use; however, the user is free to change this after installation. This can be done by first connecting to the port defined as the default and then modifying the port using the admin API. Fledge should then be restarted to make use of this new port.\n\n\nInfrastructure\n--------------\n\nThere are two REST APIs that allow external access to Fledge: the **Administration API** and the **User API**. The User API is intended to allow access to the data in the Fledge storage layer which buffers sensor readings, and it is not part of this current version.\n\nThe Administration API is concerned with all aspects of managing and monitoring the Fledge appliance. This API is used for all configuration operations that occur beyond basic installation.\n\n\n"
  },
  {
    "path": "docs/rest_api_guide/02_RESTauthentication.rst",
    "content": "*******************************\nREST API Users & Authentication\n*******************************\n\nFledge supports a number of different authentication schemes for use of the REST API:\n\n  - Unauthenticated or optional authentication. There is no requirement for any authentication to occur with the Fledge system to use the API. A user may authenticate if they desire, but it is not required.\n\n  - Username/Password authentication. Authentication is required and the user chooses to authenticate using a username and password.\n\n  - Certificate based authentication. Authentication is required and the user presents a token issued using a certificate in order to authenticate.\n\nAuthentication API\n==================\n\nLogin\n-----\n\n``POST /fledge/login`` - Create a login session token that can be used for future calls to the API\n\n\n**Request Payload** \n\nIf the user is connecting with a user name and a password then a JSON structure should be passed as the payload providing the following key/value pairs.\n\n.. list-table::\n    :widths: 20 20 50 30\n    :header-rows: 1\n\n    * - Key Name\n      - Type\n      - Description\n      - Example\n    * - username\n      - string\n      - The username of the user attempting to login\n      - admin\n    * - password\n      - string\n      - The plain text password of the user attempting to login\n      - fledge\n\n**Response Payload**\n\nThe response payload is an authentication token that should be included in all future calls to the API. This token will be included in the header of the subsequent requests as the value of the property authorization.\n\n**Example**\n\nAssuming the authentication provider is a username and password provider.\n\n.. code-block:: console\n\n    curl -X POST http://localhost:8081/fledge/login -d'\n    {\n      \"username\" : \"admin\",\n      \"password\" : \"fledge\"\n    }'\n\nWould return an authentication token\n\n.. 
code-block:: json \n\n    {\n      \"message\": \"Logged in successfully\",\n      \"uid\": 1,\n      \"token\": \"********************\",\n      \"admin\": true\n    }\n\nSubsequent calls should carry an HTTP header with the authorization token given in this response.\n\n.. code-block:: console\n\n   curl -H \"authorization: <token>\" http://localhost:8081/fledge/ping\n\nAlternatively, certificate based authentication can be used, with the user presenting a certificate instead of the JSON payload shown above to the ``/fledge/login`` endpoint.\n\n.. code-block:: console\n\n   curl -T user.cert -X POST http://localhost:8081/fledge/login --insecure\n\nThe payload returned is the same as for username and password based authentication.\n\n.. note::\n\n   The examples above have been shown using HTTP as the transport, however if authentication is in use then it would normally be expected to use HTTPS to encrypt the communication.\n\nLogout\n------\n\n``PUT /fledge/logout`` - Terminate the current login session and invalidate the authentication token\n\nEnds the login session for the current user and invalidates the token given in the header.\n\n``PUT /fledge/{user_id}/logout`` - Terminate all active login sessions for a given user\n\nThe administrator may terminate the login sessions of another user.\n\n.. code-block:: console\n\n   curl -H \"authorization: <token>\" -X PUT http://localhost:8081/fledge/{user_id}/logout\n\nUsers\n=====\n\nFledge supports two levels of user: administration users and normal users. A set of API calls exists to allow users to be created, queried, modified and destroyed. \n\nAdd User\n--------\n\n``POST /fledge/admin/user`` - add a new user to Fledge’s user database\n\n.. note::\n\n   Only admin users are able to create other users.\n\n\n**Request Payload**\n\nA JSON document which describes the user to add.\n\n.. 
list-table::\n    :widths: 20 20 50 30\n    :header-rows: 1\n\n    * - Key Name\n      - Type\n      - Description\n      - Example\n    * - username\n      - string\n      - The username of the new user to add. It is a required field.\n      - david\n    * - password\n      - string\n      - The password to assign to the new user. It is a required field.\n      - Inv1nc!ble\n    * - access_method\n      - string\n      - The access method of the user. It is an optional field.\n      - Possible values are any, pwd and cert.\n    * - real_name\n      - string\n      - The real name of the user. This is used for display purposes only. It is an optional field.\n      - David Brent\n    * - role_id\n      - integer\n      - The role id of the new user. It is an optional field.\n      - 1 for Admin user, 2 for normal user, 3 for view users, 4 for data view users and 5 for control users. To obtain the current list of supported roles the /fledge/user/role entry point may be used. If not given it will be treated as a normal user.\n    * - description\n      - string\n      - Description of the user. It is an optional field.\n      - Member of maintenance team\n\n\n**Response Payload**\n\nThe response payload is a JSON document containing the full details of the newly created user.\n\n**Errors**\n\nThe following error responses may be returned\n\n.. list-table::\n    :widths: 20 50\n    :header-rows: 1\n\n    * - HTTP Code\n      - Reason\n    * - 400\n      - Incomplete or badly formed request payload\n    * - 403\n      - A user without admin permissions tried to add a new user\n    * - 409\n      - The username is already in use\n\n\n**Example**\n\n.. code-block:: console\n\n    curl -H \"authorization: <token>\" -X POST -d '{\"username\": \"david\", \"password\": \"Inv1nc!ble\", \"role_id\": 1, \"real_name\": \"David Brent\"}' http://localhost:8081/fledge/admin/user\n\nTo obtain the full list of supported roles:\n\n.. 
code-block:: console\n\n    curl -H \"authorization: <token>\" http://localhost:8081/fledge/user/role\n\n    {\n      \"roles\": [\n        {\n          \"id\": 1,\n          \"name\": \"admin\",\n          \"description\": \"All CRUD privileges\"\n        },\n        {\n          \"id\": 2,\n          \"name\": \"user\",\n          \"description\": \"All CRUD operations and self profile management\"\n        },\n        {\n          \"id\": 3,\n          \"name\": \"view\",\n          \"description\": \"Only to view the configuration\"\n        },\n        {\n          \"id\": 4,\n          \"name\": \"data-view\",\n          \"description\": \"Only read the data in buffer\"\n        },\n        {\n          \"id\": 5,\n          \"name\": \"control\",\n          \"description\": \"Same as editor can do and also have access for control scripts and pipelines\"\n        }\n      ]\n    }\n\n.. note::\n\n   This entry point is only available to users with the *admin* role.\n\nGet All Users\n-------------\n\n``GET /fledge/user`` - Retrieve data on all users\n\n**Response Payload**\n\nA JSON document which lists all users in a JSON array.\n\n.. list-table::\n    :widths: 20 20 50 30\n    :header-rows: 1\n\n    * - JSON Key\n      - Type\n      - Description\n      - Example\n    * - .users[].userName\n      - string\n      - The username of the user\n      - david\n    * - .users[].roleId\n      - integer\n      - The permissions level of the user\n      - 1\n    * - .users[].realName\n      - string\n      - The real name of the user. This is used for display purposes only.\n      - David Brent\n    * - .users[].description\n      - string\n      - The description of the user.\n      - This is an admin user.\n\n.. note::\n\n   This payload does not include the password of the user.\n\n**Example**\n\n.. code-block:: console\n\n   curl -H \"authorization: <token>\" -X GET http://localhost:8081/fledge/user\n\n\nReturns the response payload\n\n.. 
code-block:: json\n\n    {\n        \"users\" : [\n                    {\n                       \"userId\"       : 1,\n                       \"userName\"     : \"admin\",\n                       \"roleId\"       : 1,\n                       \"accessMethod\" : \"any\",\n                       \"realName\"     : \"Admin user\",\n                       \"description\"  : \"admin user\"\n                    },\n                    {\n                       \"userId\"       : 2,\n                       \"userName\"     : \"david\",\n                       \"realName\"     : \"David Brent\",\n                       \"accessMethod\" : \"any\",\n                       \"roleId\"       : 1,\n                       \"description\"  : \"OT Department Head\"\n                    },\n                    {\n                       \"userId\"       : 3,\n                       \"userName\"     : \"paul\",\n                       \"realName\"     : \"Paul Smith\",\n                       \"roleId\"       : 2,\n                       \"accessMethod\" : \"any\",\n                       \"description\"  : \"OT Supervisor\"\n                    }\n                  ]\n    }\n\nUpdate User\n-----------\n\n``PUT /fledge/user`` - Allows a user to update their own user information\n\n**Request Payload**\n\nA JSON document which describes the updates to the user record.\n\n.. list-table::\n    :widths: 20 20 50 30\n    :header-rows: 1\n\n    * - Key Name\n      - Type\n      - Description\n      - Example\n    * - real_name\n      - string\n      - The real name of the user. This is used for display purposes only.\n      - David Brent\n\n\n.. note::\n\n    A user can only update their own real name; other information must be updated by an admin user.\n\n**Response Payload**\n\nThe response payload is a JSON document containing a message as to the success of the operation.\n\n**Errors**\n\nThe following error responses may be returned\n\n.. 
list-table::\n    :widths: 20 50 \n    :header-rows: 1\n\n    * - HTTP Code\n      - Reason\n    * - 400\n      - Incomplete or badly formed request payload\n\n**Example**\n\n.. code-block:: console\n\n   curl -H \"authorization: <token>\" -X PUT http://localhost:8081/fledge/user -d '{\"real_name\": \"Dave Brent\"}'\n\nChange Password\n---------------\n\n``PUT /fledge/user/{userid}/password`` - change the password for the current user\n\n**Request Payload**\n\nA JSON document that contains the old and new passwords.\n\n.. list-table::\n    :widths: 20 20 50 30\n    :header-rows: 1\n\n    * - Key Name\n      - Type\n      - Description\n      - Example\n    * - current_password\n      - string\n      - The current password of the user\n      - Inv1nc!ble\n    * - new_password\n      - string\n      - The new password of the user\n      - F0gl!mp1\n\n**Response Payload**\n\nA message as to the success of the operation\n\n**Example**\n\n.. code-block:: console\n\n    curl -X PUT -d '{\"current_password\": \"Inv1nc!ble\", \"new_password\": \"F0gl!mp1\"}' http://localhost:8081/fledge/user/{user_id}/password\n\nAdmin Update User\n-----------------\n\n``PUT /fledge/admin/{user_id}`` - An admin user can update any user's information\n\n**Request Payload**\n\nA JSON document which describes the updates to the user record.\n\n.. list-table::\n    :widths: 20 20 50 30\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - description\n      - string\n      - The description of a user\n      - OT Department Head\n    * - access_method\n      - string\n      - The access method that the user should be given\n      - Possible values are any, pwd and cert.\n    * - real_name\n      - string\n      - The real name of the user. This is used for display purposes only.\n      - David Brent\n\n**Response Payload**\n\nThe response payload is a JSON document containing the user information.\n\n**Errors**\n\nThe following error responses may be returned\n\n.. 
list-table::\n    :widths: 20 50 \n    :header-rows: 1\n\n    * - HTTP Code\n      - Reason\n    * - 400\n      - Incomplete or badly formed request payload\n    * - 403\n      - A user without admin permissions tried to update a user\n    * - 409\n      - The username is already in use\n\n**Example**\n\n.. code-block:: console\n\n   curl -H \"authorization: <token>\" -X PUT -d '{\"description\": \"OT Department Head\", \"real_name\": \"David Brent\", \"access_method\": \"pwd\"}' http://localhost:8081/fledge/admin/{user_id}\n\nDelete User\n-----------\n\n``DELETE /fledge/admin/{userID}/delete`` - delete a user\n\n\nThe delete user call can only be made by users with administrator privileges. If a user that is currently logged in is removed, then that user will be forcibly logged out of the system.\n\n.. note::\n\n   The user with the user name admin cannot be removed from the system.\n\n**Example**\n\n.. code-block:: console\n\n   curl -H \"authorization: <token>\" -X DELETE http://localhost:8081/fledge/admin/{user_id}/delete\n"
  },
  {
    "path": "docs/rest_api_guide/03_RESTadmin.rst",
    "content": ".. REST API Guide\n.. https://docs.google.com/document/d/1JJDP7g25SWerNVCxgff02qp9msHbqA9nt3RAFx8-Qng\n\n.. |br| raw:: html\n\n   <br />\n\n.. |ar| raw:: html\n\n   <div align=\"right\">\n\n.. Images\n\n\n.. Links\n\n\n.. =============================================\n\n\n****************************\nAdministration API Reference\n****************************\n\nThis section presents the list of administrative API methods in alphabetical order.\n\n\nAudit Trail\n===========\n\nThe audit trail API is used to interact with the audit trail log tables in the storage microservice. In Fledge, log information is stored in the system log where the microservice is hosted. All the relevant information used for auditing are instead stored inside Fledge and they are accessible through the Admin REST API. The API allows the reading but also the addition of extra audit logs, as if such logs are created within the system.\n\n\naudit\n-----\n\nThe *audit* methods implement the audit trail, they are used to create and retrieve audit logs.\n\nThe set of possible audit sources are\n\n+--------+-------------------------------+\n| Source | Description                   |\n+========+===============================+\n| PURGE  | Data Purging Process          |\n+--------+-------------------------------+\n| LOGGN  | Logging Process               |\n+--------+-------------------------------+\n| STRMN  | Streaming Process             |\n+--------+-------------------------------+\n| SYPRG  | System Purge                  |\n+--------+-------------------------------+\n| START  | System Startup                |\n+--------+-------------------------------+\n| FSTOP  | System Shutdown               |\n+--------+-------------------------------+\n| CONCH  | Configuration Change          |\n+--------+-------------------------------+\n| CONAD  | Configuration Addition        |\n+--------+-------------------------------+\n| SCHCH  | Schedule Change               
|\n+--------+-------------------------------+\n| SCHAD  | Schedule Addition             |\n+--------+-------------------------------+\n| SRVRG  | Service Registered            |\n+--------+-------------------------------+\n| SRVUN  | Service Unregistered          |\n+--------+-------------------------------+\n| SRVFL  | Service Fail                  |\n+--------+-------------------------------+\n| NHCOM  | North Process Complete        |\n+--------+-------------------------------+\n| NHDWN  | North Destination Unavailable |\n+--------+-------------------------------+\n| NHAVL  | North Destination Available   |\n+--------+-------------------------------+\n| UPEXC  | Update Complete               |\n+--------+-------------------------------+\n| BKEXC  | Backup Complete               |\n+--------+-------------------------------+\n| NTFDL  | Notification Deleted          |\n+--------+-------------------------------+\n| NTFAD  | Notification Added            |\n+--------+-------------------------------+\n| NTFSN  | Notification Sent             |\n+--------+-------------------------------+\n| NTFCL  | Notification Cleared          |\n+--------+-------------------------------+\n| NTFST  | Notification Server Startup   |\n+--------+-------------------------------+\n| NTFSD  | Notification Server Shutdown  |\n+--------+-------------------------------+\n| PKGIN  | Package installation          |\n+--------+-------------------------------+\n| PKGUP  | Package updated               |\n+--------+-------------------------------+\n| PKGRM  | Package purged                |\n+--------+-------------------------------+\n| DSPST  | Dispatcher Startup            |\n+--------+-------------------------------+\n| DSPSD  | Dispatcher Shutdown           |\n+--------+-------------------------------+\n| ESSRT  | External Service Startup      |\n+--------+-------------------------------+\n| ESSTP  | External Service Shutdown     |\n+--------+-------------------------------+\n\nGET Audit 
Entries\n~~~~~~~~~~~~~~~~~\n\n``GET /fledge/audit`` - return a list of audit trail entries sorted with most recent first.\n\n**Request Parameters**\n\n- **limit** - limit the number of audit entries returned to the number specified\n- **skip** - skip the first n entries in the audit table, used with limit to implement paged interfaces\n- **source** - filter the audit entries to be only those from the specified source\n- **severity** - filter the audit entries to only those of the specified severity\n\n\n**Response Payload**\n\nThe response payload is an array of JSON objects with the audit trail entries.\n\n+-----------+-----------+-----------------------------------------------+--------------------------------------------------------+\n| Name      | Type      | Description                                   | Example                                                |\n+===========+===========+===============================================+========================================================+\n| timestamp | timestamp | The timestamp when the audit trail |br|       | 2018-04-16 14:33:18.215                                |\n|           |           | item was written.                             |                                                        |\n+-----------+-----------+-----------------------------------------------+--------------------------------------------------------+\n| source    | string    | The source of the audit trail entry.          | CoAP                                                   |\n+-----------+-----------+-----------------------------------------------+--------------------------------------------------------+\n| severity  | string    | The severity of the event that triggered |br| | FAILURE                                                |\n|           |           | the audit trail entry to be written. 
|br|     |                                                        |\n|           |           | This will be one of SUCCESS, FAILURE, |br|    |                                                        |\n|           |           | WARNING or INFORMATION.                       |                                                        |\n+-----------+-----------+-----------------------------------------------+--------------------------------------------------------+\n| details   | object    | A JSON object that describes the detail |br|  | { \"message\" : |br|                                     |\n|           |           | of the audit trail event.                     | \"Sensor readings discarded due to malformed payload\" } |\n+-----------+-----------+-----------------------------------------------+--------------------------------------------------------+\n\n\n**Example**\n\n.. code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/audit?limit=3\n  { \"totalCount\" : 24,\n    \"audit\"      : [ { \"timestamp\" : \"2018-02-25 18:58:07.748\",\n                       \"source\"    : \"SRVRG\",\n                       \"details\"   : { \"name\" : \"COAP\" },\n                       \"severity\"  : \"INFORMATION\" },\n                     { \"timestamp\" : \"2018-02-25 18:58:07.742\",\n                       \"source\"    : \"SRVRG\",\n                       \"details\"   : { \"name\" : \"HTTP_SOUTH\" },\n                       \"severity\"  : \"INFORMATION\" },\n                     { \"timestamp\" : \"2018-02-25 18:58:07.390\",\n                       \"source\"    : \"START\",\n                       \"details\"   : {},\n                       \"severity\"  : \"INFORMATION\" }\n                   ]\n  }\n  $ curl -s 'http://localhost:8081/fledge/audit?source=SRVUN&limit=1'\n  { \"totalCount\" : 4,\n    \"audit\"      : [ { \"timestamp\" : \"2018-02-25 05:22:11.053\",\n                       \"source\"    : \"SRVUN\",\n                       \"details\"   : { 
\"name\": \"COAP\" },\n                       \"severity\"  : \"INFORMATION\" }\n                   ]\n  }\n  $\n\n|br|\n\n\nPOST Audit Entries\n~~~~~~~~~~~~~~~~~~\n\n``POST /fledge/audit`` - create a new audit trail entry.\n\nThe purpose of the create method on an audit trail entry is to allow a user interface or an application that is using the Fledge API to utilize the Fledge audit trail and notification mechanism to raise user defined audit trail entries.\n\n\n**Request Payload**\n\nThe request payload is a JSON object with the audit trail entry minus the timestamp.\n\n+-----------+-----------+-----------------------------------------------+---------------------------+\n| Name      | Type      | Description                                   | Example                   |\n+===========+===========+===============================================+===========================+\n| source    | string    | The source of the audit trail entry.          | LOGGN                     |\n+-----------+-----------+-----------------------------------------------+---------------------------+\n| severity  | string    | The severity of the event that triggered |br| | FAILURE                   |\n|           |           | the audit trail entry to be written. |br|     |                           |\n|           |           | This will be one of SUCCESS, FAILURE, |br|    |                           |\n|           |           | WARNING or INFORMATION.                       |                           |\n+-----------+-----------+-----------------------------------------------+---------------------------+\n| details   | object    | A JSON object that describes the detail |br|  | { \"message\" : |br|        |\n|           |           | of the audit trail event.                     
| \"Internal System Error\" } |\n+-----------+-----------+-----------------------------------------------+---------------------------+\n\n\n**Response Payload**\n\nThe response payload is the newly created audit trail entry.\n\n+-----------+-----------+-----------------------------------------------+---------------------------+\n| Name      | Type      | Description                                   | Example                   |\n+===========+===========+===============================================+===========================+\n| timestamp | timestamp | The timestamp when the audit trail |br|       | 2018-04-16 14:33:18.215   |\n|           |           | item was written.                             |                           |\n+-----------+-----------+-----------------------------------------------+---------------------------+\n| source    | string    | The source of the audit trail entry.          | LOGGN                     |\n+-----------+-----------+-----------------------------------------------+---------------------------+\n| severity  | string    | The severity of the event that triggered |br| | FAILURE                   |\n|           |           | the audit trail entry to be written. |br|     |                           |\n|           |           | This will be one of SUCCESS, FAILURE, |br|    |                           |\n|           |           | WARNING or INFORMATION.                       |                           |\n+-----------+-----------+-----------------------------------------------+---------------------------+\n| details   | object    | A JSON object that describes the detail |br|  | { \"message\" : |br|        |\n|           |           | of the audit trail event.                     | \"Internal System Error\" } |\n+-----------+-----------+-----------------------------------------------+---------------------------+\n\n\n**Example**\n\n.. 
code-block:: console\n\n  $ curl -X POST http://localhost:8081/fledge/audit \\\n  -d '{ \"severity\": \"FAILURE\", \"details\": { \"message\": \"Internal System Error\" }, \"source\": \"LOGGN\" }'\n  { \"source\": \"LOGGN\",\n    \"timestamp\": \"2018-04-17 11:49:55.480\",\n    \"severity\": \"FAILURE\",\n    \"details\": { \"message\": \"Internal System Error\" }\n  }\n  $\n  $ curl -X GET http://localhost:8081/fledge/audit?severity=FAILURE\n  { \"totalCount\": 1,\n    \"audit\": [ { \"timestamp\": \"2018-04-16 18:32:28.427\",\n                 \"source\"   :    \"LOGGN\",\n                 \"details\"  : { \"message\": \"Internal System Error\" },\n                 \"severity\" : \"FAILURE\" }\n             ]\n  }\n  $\n\n|br|\n\n\nConfiguration Management\n========================\n\nConfiguration management is an important aspect of the REST API; however, due to the discoverable form of the Fledge configuration, the API itself is fairly small.\n\nThe configuration REST API interacts with the configuration manager to create, retrieve, update and delete the configuration categories and values. Specifically, all updates must go via the management layer, as this is used to trigger the notifications to the components that have registered interest in configuration categories. 
This is the means by which the dynamic reconfiguration of Fledge is achieved.\n\n\ncategory\n--------\n\nThe *category* interface is part of the Configuration Management for Fledge and it is used to create, retrieve, update and delete configuration categories and items.\n\n\nGET categor(ies)\n~~~~~~~~~~~~~~~~\n\n``GET /fledge/category`` - return the list of known categories in the configuration database\n\n\n**Response Payload**\n\nThe response payload is a JSON object with an array of JSON objects, one per valid category.\n\n+-------------+--------+------------------------------------------------+------------------+\n| Name        | Type   | Description                                    | Example          |\n+=============+========+================================================+==================+\n| key         | string | The category key, each category |br|           | network          |\n|             |        | has a unique textual key that defines it.      |                  |\n+-------------+--------+------------------------------------------------+------------------+\n| description | string | A description of the category that may be |br| | Network Settings |\n|             |        | used for display purposes.                     |                  |\n+-------------+--------+------------------------------------------------+------------------+\n| displayName | string | Name of the category that may be |br|          | Network Settings |\n|             |        | used for display purposes.                     |                  |\n+-------------+--------+------------------------------------------------+------------------+\n\n**Example**\n\n.. 
code-block:: console\n\n  $ curl -X GET http://localhost:8081/fledge/category\n  {\n      \"categories\": [\n        {\n          \"key\": \"Storage\",\n          \"description\": \"Storage configuration\",\n          \"displayName\": \"Storage\"\n        },\n        {\n          \"key\": \"Advanced\",\n          \"description\": \"Advanced\",\n          \"displayName\": \"Advanced\"\n        },\n        {\n          \"key\": \"LOGGING\",\n          \"description\": \"Logging Level of Core Server\",\n          \"displayName\": \"Logging\"\n        },\n        {\n          \"key\": \"SCHEDULER\",\n          \"description\": \"Scheduler configuration\",\n          \"displayName\": \"Scheduler\"\n        },\n        {\n          \"key\": \"SMNTR\",\n          \"description\": \"Service Monitor\",\n          \"displayName\": \"Service Monitor\"\n        },\n        {\n          \"key\": \"rest_api\",\n          \"description\": \"Fledge Admin and User REST API\",\n          \"displayName\": \"Admin API\"\n        },\n        {\n          \"key\": \"password\",\n          \"description\": \"To control the password policy\",\n          \"displayName\": \"Password Policy\"\n        },\n        {\n          \"key\": \"service\",\n          \"description\": \"Fledge Service\",\n          \"displayName\": \"Fledge Service\"\n        },\n        {\n          \"key\": \"Installation\",\n          \"description\": \"Installation\",\n          \"displayName\": \"Installation\"\n        },\n        {\n          \"key\": \"sqlite\",\n          \"description\": \"Storage Plugin\",\n          \"displayName\": \"sqlite\"\n        },\n        {\n          \"key\": \"General\",\n          \"description\": \"General\",\n          \"displayName\": \"General\"\n        },\n        {\n          \"key\": \"Utilities\",\n          \"description\": \"Utilities\",\n          \"displayName\": \"Utilities\"\n        },\n        {\n          \"key\": \"purge_system\",\n          \"description\": 
\"Configuration of the Purge System\",\n          \"displayName\": \"Purge System\"\n        },\n        {\n          \"key\": \"PURGE_READ\",\n          \"description\": \"Purge the readings, log, statistics history table\",\n          \"displayName\": \"Purge\"\n        }\n      ]\n  }\n  $\n\n|br|\n\n\nGET category\n~~~~~~~~~~~~\n\n``GET /fledge/category/{name}`` - return the configuration items in the given category.\n\n\n**Path Parameters**\n\n- **name** is the name of one of the categories returned from the GET /fledge/category call.\n\n\n**Response Payload**\n\nThe response payload is a set of configuration items within the category; each item is a JSON object with the following set of properties.\n\n.. list-table::\n    :widths: 20 20 50 50\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - description\n      - string\n      - A description of the configuration item that may be used in a user interface.\n      - The IPv4 network address of the Fledge server\n    * - type\n      - string\n      - A type that may be used by a user interface to know how to display an item.\n      - IPv4\n    * - default\n      - string\n      - An optional default value for the configuration item.\n      - 127.0.0.1\n    * - displayName\n      - string\n      - Name of the configuration item that may be used for display purposes.\n      - IPv4 address\n    * - order\n      - integer\n      - Order in which the configuration item will be displayed.\n      - 1\n    * - value\n      - string\n      - The current configured value of the configuration item. This may be empty if no value has been set.\n      - 192.168.0.27\n\n\n**Example**\n\n.. 
code-block:: console\n\n  $ curl -X GET http://localhost:8081/fledge/category/rest_api\n  {\n      \"enableHttp\": {\n        \"description\": \"Enable HTTP (disable to use HTTPS)\",\n        \"type\": \"boolean\",\n        \"default\": \"true\",\n        \"displayName\": \"Enable HTTP\",\n        \"order\": \"1\",\n        \"value\": \"true\"\n      },\n      \"httpPort\": {\n        \"description\": \"Port to accept HTTP connections on\",\n        \"type\": \"integer\",\n        \"default\": \"8081\",\n        \"displayName\": \"HTTP Port\",\n        \"order\": \"2\",\n        \"value\": \"8081\"\n      },\n      \"httpsPort\": {\n        \"description\": \"Port to accept HTTPS connections on\",\n        \"type\": \"integer\",\n        \"default\": \"1995\",\n        \"displayName\": \"HTTPS Port\",\n        \"order\": \"3\",\n        \"validity\": \"enableHttp==\\\"false\\\"\",\n        \"value\": \"1995\"\n      },\n      \"certificateName\": {\n        \"description\": \"Certificate file name\",\n        \"type\": \"string\",\n        \"default\": \"fledge\",\n        \"displayName\": \"Certificate Name\",\n        \"order\": \"4\",\n        \"validity\": \"enableHttp==\\\"false\\\"\",\n        \"value\": \"fledge\"\n      },\n      \"authentication\": {\n        \"description\": \"API Call Authentication\",\n        \"type\": \"enumeration\",\n        \"options\": [\n          \"mandatory\",\n          \"optional\"\n        ],\n        \"default\": \"optional\",\n        \"displayName\": \"Authentication\",\n        \"order\": \"5\",\n        \"value\": \"optional\"\n      },\n      \"authMethod\": {\n        \"description\": \"Authentication method\",\n        \"type\": \"enumeration\",\n        \"options\": [\n          \"any\",\n          \"password\",\n          \"certificate\"\n        ],\n        \"default\": \"any\",\n        \"displayName\": \"Authentication method\",\n        \"order\": \"6\",\n        \"value\": \"any\"\n      },\n      
\"authCertificateName\": {\n        \"description\": \"Auth Certificate name\",\n        \"type\": \"string\",\n        \"default\": \"ca\",\n        \"displayName\": \"Auth Certificate\",\n        \"order\": \"7\",\n        \"value\": \"ca\"\n      },\n      \"allowPing\": {\n        \"description\": \"Allow access to ping, regardless of the authentication required and authentication header\",\n        \"type\": \"boolean\",\n        \"default\": \"true\",\n        \"displayName\": \"Allow Ping\",\n        \"order\": \"8\",\n        \"value\": \"true\"\n      },\n      \"authProviders\": {\n        \"description\": \"Authentication providers to use for the interface (JSON array object)\",\n        \"type\": \"JSON\",\n        \"default\": \"{\\\"providers\\\": [\\\"username\\\", \\\"ldap\\\"] }\",\n        \"displayName\": \"Auth Providers\",\n        \"order\": \"9\",\n        \"value\": \"{\\\"providers\\\": [\\\"username\\\", \\\"ldap\\\"] }\"\n      }\n  }\n  $\n\n|br|\n\n\nGET category item\n~~~~~~~~~~~~~~~~~\n\n``GET /fledge/category/{name}/{item}`` - return the configuration item in the given category.\n\n\n**Path Parameters**\n\n- **name** - the name of one of the categories returned from the GET /fledge/category call.\n- **item** - the item within the category to return.\n\n\n**Response Payload**\n\nThe response payload is a configuration item within the category, each item is a JSON object with the following set of properties.\n\n.. 
list-table::\n    :widths: 20 20 50 50\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - description\n      - string\n      - A description of the configuration item that may be used in a user interface.\n      - The IPv4 network address of the Fledge server\n    * - type\n      - string\n      - A type that may be used by a user interface to know how to display an item.\n      - IPv4\n    * - default\n      - string\n      - An optional default value for the configuration item.\n      - 127.0.0.1\n    * - displayName\n      - string\n      - Name of the configuration item that may be used for display purposes.\n      - IPv4 address\n    * - order\n      - integer\n      - Order in which the configuration item will be displayed.\n      - 1\n    * - value\n      - string\n      - The current configured value of the configuration item. This may be empty if no value has been set.\n      - 192.168.0.27\n\n\n**Example**\n\n.. code-block:: console\n\n  $ curl -X GET http://localhost:8081/fledge/category/rest_api/httpsPort\n  {\n      \"description\": \"Port to accept HTTPS connections on\",\n      \"type\": \"integer\",\n      \"default\": \"1995\",\n      \"displayName\": \"HTTPS Port\",\n      \"order\": \"3\",\n      \"validity\": \"enableHttp==\\\"false\\\"\",\n      \"value\": \"1995\"\n  }\n\n  $\n\n|br|\n\n\nPUT category item\n~~~~~~~~~~~~~~~~~\n\n``PUT /fledge/category/{name}/{item}`` - set the configuration item value in the given category.\n\n\n**Path Parameters**\n\n- **name** - the name of one of the categories returned from the GET /fledge/category call.\n- **item** - the item within the category to set.\n\n\n**Request Payload**\n\nA JSON object with the new value to assign to the configuration item.\n\n+-------------+--------+------------------------------------------+--------------+\n| Name        | Type   | Description                              | Example      
|\n+=============+========+==========================================+==============+\n| value       | string | The new value of the configuration item. | 192.168.0.27 |\n+-------------+--------+------------------------------------------+--------------+\n\n\n**Response Payload**\n\nThe response payload is the newly updated configuration item within the category; the item is a JSON object with the following set of properties.\n\n.. list-table::\n    :widths: 20 20 50 50\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - description\n      - string\n      - A description of the configuration item that may be used in a user interface.\n      - The IPv4 network address of the Fledge server\n    * - type\n      - string\n      - A type that may be used by a user interface to know how to display an item.\n      - IPv4\n    * - default\n      - string\n      - An optional default value for the configuration item.\n      - 127.0.0.1\n    * - displayName\n      - string\n      - Name of the configuration item that may be used for display purposes.\n      - IPv4 address\n    * - order\n      - integer\n      - Order in which the configuration item will be displayed.\n      - 1\n    * - value\n      - string\n      - The current configured value of the configuration item. This may be empty if no value has been set.\n      - 192.168.0.27\n\n\n**Example**\n\n.. 
code-block:: console\n\n  $ curl -X PUT http://localhost:8081/fledge/category/rest_api/httpsPort \\\n    -d '{ \"value\" : \"1996\" }'\n  {\n    \"description\": \"Port to accept HTTPS connections on\",\n    \"type\": \"integer\",\n    \"default\": \"1995\",\n    \"displayName\": \"HTTPS Port\",\n    \"order\": \"3\",\n    \"validity\": \"enableHttp==\\\"false\\\"\",\n    \"value\": \"1996\"\n  }\n  $\n\n|br|\n\n\nDELETE category item\n~~~~~~~~~~~~~~~~~~~~\n\n``DELETE /fledge/category/{name}/{item}/value`` - unset the value of the configuration item in the given category.\n\nThis will result in the value being returned to the default value if one is defined. If not, the value will be blank, i.e. the value property of the JSON object will exist with an empty value.\n\n\n**Path Parameters**\n\n- **name** - the name of one of the categories returned from the GET /fledge/category call.\n- **item** - the item within the category to return.\n\n\n**Response Payload**\n\nThe response payload is the newly updated configuration item within the category; the item is a JSON object with the following set of properties.\n\n.. 
list-table::\n    :widths: 20 20 50 50\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - description\n      - string\n      - A description of the configuration item that may be used in a user interface.\n      - The IPv4 network address of the Fledge server\n    * - type\n      - string\n      - A type that may be used by a user interface to know how to display an item.\n      - IPv4\n    * - default\n      - string\n      - An optional default value for the configuration item.\n      - 127.0.0.1\n    * - displayName\n      - string\n      - Name of the configuration item that may be used for display purposes.\n      - IPv4 address\n    * - order\n      - integer\n      - Order in which the configuration item will be displayed.\n      - 1\n    * - value\n      - string\n      - The current configured value of the configuration item. This may be empty if no value has been set.\n      - 127.0.0.1\n\n\n**Example**\n\n.. code-block:: console\n\n  $ curl -X DELETE http://localhost:8081/fledge/category/rest_api/httpsPort/value\n  {\n    \"description\": \"Port to accept HTTPS connections on\",\n    \"type\": \"integer\",\n    \"default\": \"1995\",\n    \"displayName\": \"HTTPS Port\",\n    \"order\": \"3\",\n    \"validity\": \"enableHttp==\\\"false\\\"\",\n    \"value\": \"1995\"\n  }\n  $\n\n|br|\n\n\nPOST category\n~~~~~~~~~~~~~\n\n``POST /fledge/category`` - creates a new category\n\n\n**Request Payload**\n\nA JSON object that defines the category.\n\n+--------------------+--------+------------------------------------------------------+-------------------------------+\n| Name               | Type   | Description                                          | Example                       |\n+====================+========+======================================================+===============================+\n| key                | string | The key that identifies the category. 
|br|           | backup                        |\n|                    |        | If the key already exists as a category |br|         |                               |\n|                    |        | then the contents of this request |br|               |                               |\n|                    |        | is merged with the data stored.                      |                               |\n+--------------------+--------+------------------------------------------------------+-------------------------------+\n| description        | string | A description of the configuration category          | Backup configuration          |\n+--------------------+--------+------------------------------------------------------+-------------------------------+\n| items              | list   | An optional list of items to create in this category |                               |\n+--------------------+--------+------------------------------------------------------+-------------------------------+\n| |ar| *name*        | string | The name of a configuration item                     | destination                   |\n+--------------------+--------+------------------------------------------------------+-------------------------------+\n| |ar| *description* | string | A description of the configuration item              | The destination to which |br| |\n|                    |        |                                                      | the backup will be written    |\n+--------------------+--------+------------------------------------------------------+-------------------------------+\n| |ar| *type*        | string | The type of the configuration item                   | string                        |\n+--------------------+--------+------------------------------------------------------+-------------------------------+\n| |ar| *default*     | string | An optional default value for the configuration item | /backup                       
|\n+--------------------+--------+------------------------------------------------------+-------------------------------+\n\n**NOTE:** by *list* we mean a set of JSON objects in the form { obj1, obj2, etc. }, as distinct from the concept of an *array*, i.e. [ obj1, obj2, etc. ]\n\n\n**Example**\n\n.. code-block:: console\n\n  $ curl -X POST http://localhost:8081/fledge/category \\\n    -d '{ \"key\": \"My Configuration\", \"description\": \"This is my new configuration\",\n        \"value\": { \"item one\": { \"description\": \"The first item\", \"type\": \"string\", \"default\": \"one\" },\n                   \"item two\": { \"description\": \"The second item\", \"type\": \"string\", \"default\": \"two\" },\n                   \"item three\": { \"description\": \"The third item\", \"type\": \"string\", \"default\": \"three\" } } }'\n  { \"description\": \"This is my new configuration\", \"key\": \"My Configuration\", \"value\": {\n        \"item one\": { \"default\": \"one\", \"type\": \"string\", \"description\": \"The first item\", \"value\": \"one\" },\n        \"item two\": { \"default\": \"two\", \"type\": \"string\", \"description\": \"The second item\", \"value\": \"two\" },\n        \"item three\": { \"default\": \"three\", \"type\": \"string\", \"description\": \"The third item\", \"value\": \"three\" } }\n  }\n  $\n\n|br|\n\n\nTask Management\n===============\n\nThe task management APIs allow an administrative user to monitor and control the tasks that are started by the task scheduler, either from a schedule or as a result of an API request.\n\n\ntask\n----\n\nThe *task* interface allows an administrative user to monitor and control Fledge tasks.\n\n\nGET task\n~~~~~~~~\n\n``GET /fledge/task`` - return the list of all known tasks, running or completed\n\n\n**Request Parameters**\n\n- **name** - an optional task name to filter on; only executions of the particular task will be reported.\n- **state** - an optional query parameter that will return only those tasks 
in the given state.\n\n\n**Response Payload**\n\nThe response payload is a JSON object with an array of task objects.\n\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| Name      | Type      | Description                             | Example                              |\n+===========+===========+=========================================+======================================+\n| id        | string    | A unique identifier for the task.  |br| | 0a787bf3-4f48-4235-ae9a-2816f8ac76cc |\n|           |           | This takes the form of a uuid and  |br| |                                      |\n|           |           | not a Linux process id as the ID’s |br| |                                      |\n|           |           | must survive restarts and failovers     |                                      |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| name      | string    | The name of the task                    | purge                                |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| state     | string    | The current state of the task           | Running                              |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| startTime | timestamp | The date and time the task started      | 2018-04-17 08:32:15.071              |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| endTime   | timestamp | The date and time the task ended   |br| | 2018-04-17 08:32:14.872              |\n|           |           | This may not exist if the task is  |br| |                                      |\n|           |           | not completed.                          
|                                      |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| exitCode  | integer   | Exit Code of the task.             |br| | 0                                    |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| reason    | string    | An optional reason string that     |br| | No destination available |br|        |\n|           |           | describes why the task failed.          | to write backup                      |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n\n\n**Example**\n\n.. code-block:: console\n\n  $ curl -X GET http://localhost:8081/fledge/task\n  {\n  \"tasks\": [\n    {\n      \"id\": \"a9967d61-8bec-4d0b-8aa1-8b4dfb1d9855\",\n      \"name\": \"stats collection\",\n      \"processName\": \"stats collector\",\n      \"state\": \"Complete\",\n      \"startTime\": \"2020-05-28 09:21:58.650\",\n      \"endTime\": \"2020-05-28 09:21:59.155\",\n      \"exitCode\": 0,\n      \"reason\": \"\"\n    },\n    {\n      \"id\": \"7706b23c-71a4-410a-a03a-9b517dcd8c93\",\n      \"name\": \"stats collection\",\n      \"processName\": \"stats collector\",\n      \"state\": \"Complete\",\n      \"startTime\": \"2020-05-28 09:22:13.654\",\n      \"endTime\": \"2020-05-28 09:22:14.160\",\n      \"exitCode\": 0,\n      \"reason\": \"\"\n    },\n    ... 
] }\n  $\n  $ curl -X GET http://localhost:8081/fledge/task?name=purge\n  {\n  \"tasks\": [\n    {\n      \"id\": \"c24e006d-22f2-4c52-9f3a-391a9b17b6d6\",\n      \"name\": \"purge\",\n      \"processName\": \"purge\",\n      \"state\": \"Complete\",\n      \"startTime\": \"2020-05-28 09:44:00.175\",\n      \"endTime\": \"2020-05-28 09:44:13.915\",\n      \"exitCode\": 0,\n      \"reason\": \"\"\n    },\n    {\n      \"id\": \"609f35e6-4e89-4749-ac17-841ae3ee2b31\",\n      \"name\": \"purge\",\n      \"processName\": \"purge\",\n      \"state\": \"Complete\",\n      \"startTime\": \"2020-05-28 09:44:15.165\",\n      \"endTime\": \"2020-05-28 09:44:28.154\",\n      \"exitCode\": 0,\n      \"reason\": \"\"\n    },\n  ... ] }\n  $\n  $ curl -X GET http://localhost:8081/fledge/task?state=complete\n  {\n  \"tasks\": [\n    {\n      \"id\": \"a9967d61-8bec-4d0b-8aa1-8b4dfb1d9855\",\n      \"name\": \"stats collection\",\n      \"processName\": \"stats collector\",\n      \"state\": \"Complete\",\n      \"startTime\": \"2020-05-28 09:21:58.650\",\n      \"endTime\": \"2020-05-28 09:21:59.155\",\n      \"exitCode\": 0,\n      \"reason\": \"\"\n    },\n    {\n      \"id\": \"7706b23c-71a4-410a-a03a-9b517dcd8c93\",\n      \"name\": \"stats collection\",\n      \"processName\": \"stats collector\",\n      \"state\": \"Complete\",\n      \"startTime\": \"2020-05-28 09:22:13.654\",\n      \"endTime\": \"2020-05-28 09:22:14.160\",\n      \"exitCode\": 0,\n      \"reason\": \"\"\n    },\n    ... 
] }\n   $\n\n|br|\n\n\nGET task latest\n~~~~~~~~~~~~~~~\n\n``GET /fledge/task/latest`` - return the list of most recent task execution for each name.\n\nThis call is designed to allow a monitoring interface to show when each task was last run and what the status of that task was.\n\n\n**Request Parameters**\n\n- **name** - an optional task name to filter on, only executions of the particular task will be reported.\n- **state** - an optional query parameter that will return only those tasks in the given state.\n\n\n**Response Payload**\n\nThe response payload is a JSON object with an array of task objects.\n\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| Name      | Type      | Description                             | Example                              |\n+===========+===========+=========================================+======================================+\n| id        | string    | A unique identifier for the task.  |br| | 0a787bf3-4f48-4235-ae9a-2816f8ac76cc |\n|           |           | This takes the form of a uuid and  |br| |                                      |\n|           |           | not a Linux process id as the ID’s |br| |                                      |\n|           |           | must survive restarts and failovers     |                                      |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| name      | string    | The name of the task                    | purge                                |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| state     | string    | The current state of the task           | Running                              |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| startTime | timestamp | The date and time the task started      | 2018-04-17 
08:32:15.071              |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| endTime   | timestamp | The date and time the task ended   |br| | 2018-04-17 08:32:14.872              |\n|           |           | This may not exist if the task is  |br| |                                      |\n|           |           | not completed.                          |                                      |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| exitCode  | integer   | Exit Code of the task.             |br| | 0                                    |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| reason    | string    | An optional reason string that     |br| | No destination available |br|        |\n|           |           | describes why the task failed.          | to write backup                      |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| pid       | integer   | Process ID of the task.            |br| | 17481                                |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n\n**Example**\n\n.. 
code-block:: console\n\n  $ curl -X GET http://localhost:8081/fledge/task/latest\n  {\n  \"tasks\": [\n    {\n      \"id\": \"ea334d3b-8a33-4a29-845c-8be50efd44a4\",\n      \"name\": \"certificate checker\",\n      \"processName\": \"certificate checker\",\n      \"state\": \"Complete\",\n      \"startTime\": \"2020-05-28 09:35:00.009\",\n      \"endTime\": \"2020-05-28 09:35:00.057\",\n      \"exitCode\": 0,\n      \"reason\": \"\",\n      \"pid\": 17481\n    },\n    {\n      \"id\": \"794707da-dd32-471e-8537-5d20dc0f401a\",\n      \"name\": \"stats collection\",\n      \"processName\": \"stats collector\",\n      \"state\": \"Complete\",\n      \"startTime\": \"2020-05-28 09:37:28.650\",\n      \"endTime\": \"2020-05-28 09:37:29.138\",\n      \"exitCode\": 0,\n      \"reason\": \"\",\n      \"pid\": 17926\n    }\n    ... ] }\n  $\n  $ curl -X GET http://localhost:8081/fledge/task/latest?name=purge\n  {\n  \"tasks\":  [\n    {\n      \"id\": \"609f35e6-4e89-4749-ac17-841ae3ee2b31\",\n      \"name\": \"purge\",\n      \"processName\": \"purge\",\n      \"state\": \"Complete\",\n      \"startTime\": \"2020-05-28 09:44:15.165\",\n      \"endTime\": \"2020-05-28 09:44:28.154\",\n      \"exitCode\": 0,\n      \"reason\": \"\",\n      \"pid\": 20914\n    }\n  \t]\n  }\n  $\n\n|br|\n\n\nGET task by ID\n~~~~~~~~~~~~~~\n\n``GET /fledge/task/{id}`` - return the task information for the given task\n\n\n**Path Parameters**\n\n- **id** - the uuid of the task whose data should be returned.\n\n\n**Response Payload**\n\nThe response payload is a JSON object containing the task details.\n\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| Name      | Type      | Description                             | Example                              |\n+===========+===========+=========================================+======================================+\n| id        | string    | A unique identifier for the task.  
|br| | 0a787bf3-4f48-4235-ae9a-2816f8ac76cc |\n|           |           | This takes the form of a uuid and  |br| |                                      |\n|           |           | not a Linux process id as the ID’s |br| |                                      |\n|           |           | must survive restarts and failovers     |                                      |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| name      | string    | The name of the task                    | purge                                |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| state     | string    | The current state of the task           | Running                              |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| startTime | timestamp | The date and time the task started      | 2018-04-17 08:32:15.071              |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| endTime   | timestamp | The date and time the task ended   |br| | 2018-04-17 08:32:14.872              |\n|           |           | This may not exist if the task is  |br| |                                      |\n|           |           | not completed.                          |                                      |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| exitCode  | integer   | Exit Code of the task.             |br| | 0                                    |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| reason    | string    | An optional reason string that     |br| | No destination available |br|        |\n|           |           | describes why the task failed.          
| to write backup                      |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n\n\n**Example**\n\n.. code-block:: console\n\n  $ curl -X GET http://localhost:8081/fledge/task/ea334d3b-8a33-4a29-845c-8be50efd44a4\n  {\n    \"id\": \"ea334d3b-8a33-4a29-845c-8be50efd44a4\",\n    \"name\": \"certificate checker\",\n    \"processName\": \"certificate checker\",\n    \"state\": \"Complete\",\n    \"startTime\": \"2020-05-28 09:35:00.009\",\n    \"endTime\": \"2020-05-28 09:35:00.057\",\n    \"exitCode\": 0,\n    \"reason\": \"\"\n  }\n  $\n\n|br|\n\n\nCancel task by ID\n~~~~~~~~~~~~~~~~~\n\n``PUT /fledge/task/{id}/cancel`` - cancel a task\n\n\n**Path Parameters**\n\n- **id** - the uuid of the task to cancel.\n\n\n**Response Payload**\n\nThe response payload is a JSON object with the details of the cancelled task.\n\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| Name      | Type      | Description                             | Example                              |\n+===========+===========+=========================================+======================================+\n| id        | string    | A unique identifier for the task.  
|br| | 0a787bf3-4f48-4235-ae9a-2816f8ac76cc |\n|           |           | This takes the form of a uuid and  |br| |                                      |\n|           |           | not a Linux process id as the ID’s |br| |                                      |\n|           |           | must survive restarts and failovers     |                                      |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| name      | string    | The name of the task                    | purge                                |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| state     | string    | The current state of the task           | Running                              |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| startTime | timestamp | The date and time the task started      | 2018-04-17 08:32:15.071              |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| endTime   | timestamp | The date and time the task ended   |br| | 2018-04-17 08:32:14.872              |\n|           |           | This may not exist if the task is  |br| |                                      |\n|           |           | not completed.                          |                                      |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n| reason    | string    | An optional reason string that     |br| | No destination available |br|        |\n|           |           | describes why the task failed.          | to write backup                      |\n+-----------+-----------+-----------------------------------------+--------------------------------------+\n\n\n**Example**\n\n.. 
code-block:: console\n\n  $ curl -X PUT http://localhost:8081/fledge/task/ea334d3b-8a33-4a29-845c-8be50efd44a4/cancel\n  {\"id\": \"ea334d3b-8a33-4a29-845c-8be50efd44a4\", \"message\": \"Task cancelled successfully\"}\n  $\n\n|br|\n\n\nOther Administrative API calls\n==============================\n\nshutdown\n--------\n\nThe *shutdown* option will cause all Fledge services to be shut down cleanly. Any data held in memory buffers within the services will be sent to the storage layer and the Fledge plugins will persist any state required when they restart.\n\n.. code-block:: console\n\n   $ curl -X PUT http://localhost:8081/fledge/shutdown\n\n.. note::\n\n   If an in-memory storage layer is configured, its contents will **not** be written to any permanent storage and will be lost.\n\nrestart\n-------\n\nThe *restart* option will cause all Fledge services to be shut down cleanly and then restarted. Any data held in memory buffers within the services will be sent to the storage layer and the Fledge plugins will persist any state required when they restart.\n\n.. code-block:: console\n\n   $ curl -X PUT http://localhost:8081/fledge/restart\n\n.. note::\n\n   If an in-memory storage layer is configured, its contents will **not** be written to any permanent storage and will be lost.\n\nping\n----\n\nThe *ping* interface gives a basic confidence check that the Fledge appliance is running and the API aspect of the appliance is functional. It is designed to be a simple test that can be applied by a user or by an HA monitoring system to test the liveness and responsiveness of the system.\n\n\nGET ping\n~~~~~~~~\n\n``GET /fledge/ping`` - return liveness of Fledge\n\n*NOTE:* the GET method can be executed without authentication even when authentication is required. This behaviour is configurable via a configuration option.\n\n\n**Response Payload**\n\nThe response payload is some basic health information in a JSON object.\n\n.. 
list-table::\n    :widths: 20 20 80 20\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - uptime\n      - numeric\n      - Time in seconds since Fledge started\n      - 2113.076449394226\n    * - dataRead\n      - numeric\n      - A count of the number of sensor readings\n      - 1452\n    * - dataSent\n      - numeric\n      - A count of the number of readings sent to PI\n      - 347\n    * - dataPurged\n      - numeric\n      - A count of the number of readings purged\n      - 226\n    * - authenticationOptional\n      - boolean\n      - When true, the REST API does not require authentication. When false, users must successfully log in before calling the REST API. Default is *true*\n      - true\n    * - serviceName\n      - string\n      - Name of service\n      - Fledge\n    * - hostName\n      - string\n      - Name of host machine\n      - fledge\n    * - ipAddresses\n      - list\n      - IPv4 and IPv6 addresses of the host machine\n      - [\"10.0.0.0\",\"123:234:345:456:567:678:789:890\"]\n    * - health\n      - string\n      - Health of Fledge services\n      - \"green\"\n    * - safeMode\n      - boolean\n      - True if Fledge is started in safe mode (only core and storage services will be started)\n      - false\n\n\n**Example**\n\n.. code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/ping\n  {\n    \"uptime\": 276818,\n    \"dataRead\": 0,\n    \"dataSent\": 0,\n    \"dataPurged\": 0,\n    \"authenticationOptional\": true,\n    \"serviceName\": \"Fledge\",\n    \"hostName\": \"fledge\",\n    \"ipAddresses\": [\n      \"x.x.x.x\",\n      \"x:x:x:x:x:x:x:x\"\n    ],\n    \"health\": \"green\",\n    \"safeMode\": false\n  }\n  $\n\n"
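For scripted health monitoring it can be convenient to pull a single field out of the ping response. The sketch below is illustrative only: it parses the sample response shown above with a POSIX sed expression instead of querying a live instance, and this extraction approach is an assumption of the sketch, not part of the Fledge API.

```shell
# Sample body from GET /fledge/ping, copied (abridged) from the example above.
# A live instance would be queried with: curl -s http://localhost:8081/fledge/ping
response='{"uptime": 276818, "health": "green", "safeMode": false}'

# Extract the "health" field with a POSIX sed capture group.
health=$(printf '%s' "$response" | sed -n 's/.*"health": *"\([^"]*\)".*/\1/p')
echo "$health"   # prints: green
```

An HA monitor could run such a check periodically and raise an alert whenever the reported health is not green.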
  },
  {
    "path": "docs/rest_api_guide/03_RESTassetTracker.rst",
"content": "..\n\nAsset Tracker\n-------------\n\nThe *asset tracker* API allows the operations that an asset undergoes whilst traversing the data pipeline within Fledge to be tracked and displayed.\n\n``GET /fledge/track`` - return tracking data for one or more assets\n\n**Parameters**\n\n  - ``asset`` - define the asset to be tracked. If omitted, tracking data for all assets is returned\n\n  - ``event`` - the event to track. If omitted, all events will be returned\n\n  - ``service`` - limit the tracking data to a particular service\n\n**Response Payload**\n\nAn array of tracked events, each of which contains the following\n\n.. list-table::\n    :widths: 20 20 50 50\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - asset\n      - string\n      - The name of the asset to which this tracking event relates\n      - sinusoid\n    * - event\n      - string\n      - The event that was tracked; this will be one of Ingest, Filter or Egress\n      - Ingest\n    * - service\n      - string\n      - The name of the service this event was tracked in\n      - testSignal4\n    * - fledge\n      - string\n      - The name of the Fledge instance this event was tracked in\n      - fledge002\n    * - plugin\n      - string\n      - The name of the plugin this event was tracked in\n      - sinusoid\n    * - timestamp\n      - string\n      - The timestamp when this event was first tracked\n      - 2022-07-06 10:20:13.059\n    * - deprecatedTimestamp\n      - string\n      - The timestamp when this event was deprecated\n      - 2022-07-06 10:20:13.059\n\n.. note::\n\n   Asset tracking deprecation allows old information regarding the plugin that ingested an asset to be hidden when that asset is no longer ingested by the plugin. When this is done the deprecatedTimestamp value is set to a non-empty timestamp.\n\n**Example**\n\nReturn the asset tracking data for the asset called *sinusoid*\n\n.. 
code-block:: console\n\n   curl http://localhost:8081/fledge/track?asset=sinusoid\n\nReturns\n\n.. code-block:: console\n\n    {\n      \"track\": [\n        {\n          \"asset\": \"sinusoid\",\n          \"event\": \"Filter\",\n          \"service\": \"test1\",\n          \"fledge\": \"Fledge\",\n          \"plugin\": \"test2\",\n          \"timestamp\": \"2022-07-06 10:20:13.059\"\n        },\n        {\n          \"asset\": \"sinusoid\",\n          \"event\": \"Ingest\",\n          \"service\": \"test1\",\n          \"fledge\": \"Fledge\",\n          \"plugin\": \"sinusoid\",\n          \"timestamp\": \"2022-07-11 16:12:25.749\"\n        },\n        {\n          \"asset\": \"sinusoid\",\n          \"event\": \"Filter\",\n          \"service\": \"test1\",\n          \"fledge\": \"Fledge\",\n          \"plugin\": \"python35\",\n          \"timestamp\": \"2022-07-13 12:33:10.082\"\n        },\n        {\n          \"asset\": \"sinusoid\",\n          \"event\": \"Egress\",\n          \"service\": \"OMF\",\n          \"fledge\": \"Fledge\",\n          \"plugin\": \"OMF\",\n          \"timestamp\": \"2022-07-15 14:07:14.950\"\n        }\n      ]\n    }\n\nDeprecation\n~~~~~~~~~~~\n\nThere are some circumstances in which old data regarding asset tracking needs to be removed. In particular when a plugin ingests multiple assets or asset names have changed, it is convenient for the user to remove the association with the old asset names.\n\n``PUT /fledge/track/service/service_name/asset/asset_name/event/event_name`` - mark the asset tracking event as deprecated\n\n**Parameters**\n\n  - ``service_name`` - the name of the service for which we want to deprecate the asset tracking event\n\n  - ``asset_name`` - the name of the asset that we should deprecate\n\n  - ``event_name`` - the name of the event to deprecate\n\n.. note::\n\n   There is no API to remove the deprecation of an asset tracking event, this is done automatically when assets are tracked in future events.\n"
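As a minimal sketch of the deprecation call, the URL can be assembled from its three path parameters. The service, asset and event names below are the hypothetical ones used in the examples earlier in this section; no live instance is contacted.

```shell
# Hypothetical names taken from the examples earlier in this section.
service="test1"; asset="sinusoid"; event="Ingest"

# Assemble the deprecation URL from its three path parameters.
url="http://localhost:8081/fledge/track/service/${service}/asset/${asset}/event/${event}"
echo "$url"   # prints: http://localhost:8081/fledge/track/service/test1/asset/sinusoid/event/Ingest

# Against a running instance the request would then be issued with:
#   curl -X PUT "$url"
```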
  },
  {
    "path": "docs/rest_api_guide/03_RESTservices.rst",
"content": "..\n\nWorking With Services\n=====================\n\nThere are a number of API entry points related to working with services within Fledge.\n\nService Status\n--------------\n\nIn order to discover the services registered within a Fledge instance, and what state they are currently in, the API call */fledge/service* can be used. This is a GET call and will return the set of services along with various information regarding each service. A registered service is one that is either currently running or is configured but disabled.\n\n.. code-block:: console\n\n   $ curl http://localhost:8081/fledge/service | jq\n   {\n      \"services\": [\n        {\n          \"name\": \"Fledge Storage\",\n          \"type\": \"Storage\",\n          \"address\": \"localhost\",\n          \"management_port\": 39773,\n          \"service_port\": 46391,\n          \"protocol\": \"http\",\n          \"status\": \"running\"\n        },\n        {\n          \"name\": \"Fledge Core\",\n          \"type\": \"Core\",\n          \"address\": \"0.0.0.0\",\n          \"management_port\": 41547,\n          \"service_port\": 8081,\n          \"protocol\": \"http\",\n          \"status\": \"running\"\n        },\n        {\n          \"name\": \"Notification\",\n          \"type\": \"Notification\",\n          \"address\": \"localhost\",\n          \"management_port\": 40605,\n          \"service_port\": 40779,\n          \"protocol\": \"http\",\n          \"status\": \"shutdown\"\n        },\n        {\n          \"name\": \"dispatcher\",\n          \"type\": \"Dispatcher\",\n          \"address\": \"localhost\",\n          \"management_port\": 46353,\n          \"service_port\": 35605,\n          \"protocol\": \"http\",\n          \"status\": \"shutdown\"\n        },\n        {\n          \"name\": \"lathe1004\",\n          \"type\": \"Southbound\",\n          \"address\": \"localhost\",\n          \"management_port\": 45113,\n          \"service_port\": 34403,\n          
\"protocol\": \"http\",\n          \"status\": \"running\"\n        },\n        {\n          \"name\": \"OPCUA\",\n          \"type\": \"Northbound\",\n          \"address\": \"localhost\",\n          \"management_port\": 42783,\n          \"service_port\": null,\n          \"protocol\": \"http\",\n          \"status\": \"shutdown\"\n        },\n        {\n          \"name\": \"sine2\",\n          \"type\": \"Southbound\",\n          \"address\": \"localhost\",\n          \"management_port\": 38679,\n          \"service_port\": 33433,\n          \"protocol\": \"http\",\n          \"status\": \"running\"\n        }\n      ]\n    }\n\nThe data returned for each service includes\n\n.. list-table::\n    :widths: 20 50\n    :header-rows: 1\n\n    * - Key\n      - Description\n    * - name\n      - The name of the service.\n    * - type\n      - The service type. This may be one of Northbound, Southbound, Core, Storage, Notification or Dispatcher. In addition other service types may also be installed to extend the functionality of Fledge.\n    * - address\n      - The address on which the service can be contacted. Normally localhost or 0.0.0.0 if the service is running on the same machine as the Core service of the Fledge instance.\n    * - management_port\n      - The management port the service is using to communicate with the core.\n    * - service_port\n      - The port the service is using to expose its service API.\n    * - protocol\n      - The protocol the service is using for its control API.\n    * - status\n      - The status of the service. This may be running, shutdown, unresponsive or failed.\n\nParameters\n~~~~~~~~~~\n\nYou may limit the services returned by this call to a particular type by using the *type=* parameter in the URL.\n\n.. 
code-block:: console\n\n    $ curl -sX GET http://localhost:8081/fledge/service?type=Southbound | jq\n    {\n      \"services\": [\n        {\n          \"name\": \"lathe1004\",\n          \"type\": \"Southbound\",\n          \"address\": \"localhost\",\n          \"management_port\": 45113,\n          \"service_port\": 34403,\n          \"protocol\": \"http\",\n          \"status\": \"running\"\n        },\n        {\n          \"name\": \"sine2\",\n          \"type\": \"Southbound\",\n          \"address\": \"localhost\",\n          \"management_port\": 38679,\n          \"service_port\": 33433,\n          \"protocol\": \"http\",\n          \"status\": \"running\"\n        }\n      ]\n    }\n\nSouth and North Services\n~~~~~~~~~~~~~~~~~~~~~~~~\n\nSpecific API calls exist for the two most commonly used service types, the south and north services. These give additional information and are primarily used to give the status of all south or north services in the system.\n\n.. note::\n\n   In the case of the north API entry point, the information returned is for both services and tasks\n\nSouth Services\n~~~~~~~~~~~~~~\n\nThe */fledge/south* call will list all of the south services with the information above and will also list\n\n  - the assets that are ingested by the service,\n\n  - a count for each asset of how many readings have been ingested; this is only applicable if the plugin ingests multiple assets,\n\n  - the name and version of the south plugin used\n\n  - and the current enabled state of the south service.\n\n.. 
code-block:: console \n\n    $ curl -s http://localhost:8081/fledge/south |jq\n    {\n      \"services\": [\n        {\n          \"name\": \"lathe1004\",\n          \"address\": \"localhost\",\n          \"management_port\": 45113,\n          \"service_port\": 34403,\n          \"protocol\": \"http\",\n          \"status\": \"running\",\n          \"assets\": [\n            {\n              \"count\": 520774,\n              \"asset\": \"lathe1004\"\n            },\n            {\n              \"count\": 520774,\n              \"asset\": \"lathe1004Current\"\n            },\n            {\n              \"count\": 520239,\n              \"asset\": \"lathe1004IR\"\n            },\n            {\n              \"count\": 260379,\n              \"asset\": \"lathe1004Vibration\"\n            }\n          ],\n          \"plugin\": {\n            \"name\": \"lathesim\",\n            \"version\": \"1.9.2\"\n          },\n          \"schedule_enabled\": true\n        },\n        {\n          \"name\": \"sine2\",\n          \"address\": \"localhost\",\n          \"management_port\": 38679,\n          \"service_port\": 33433,\n          \"protocol\": \"http\",\n          \"status\": \"running\",\n          \"assets\": [\n            {\n              \"count\": 734,\n              \"asset\": \"sine2\"\n            },\n            {\n              \"count\": 373008,\n              \"asset\": \"sine250\"\n            }\n          ],\n          \"plugin\": {\n            \"name\": \"sinusoid\",\n            \"version\": \"1.9.2\"\n          },\n          \"schedule_enabled\": true\n        },\n        {\n          \"name\": \"test1\",\n          \"address\": \"\",\n          \"management_port\": \"\",\n          \"service_port\": \"\",\n          \"protocol\": \"\",\n          \"status\": \"\",\n          \"assets\": [\n            {\n              \"count\": 76892,\n              \"asset\": \"sinusoid\"\n            },\n            {\n              \"count\": 125681,\n         
     \"asset\": \"sinusoid2\"\n            }\n          ],\n          \"plugin\": {\n            \"name\": \"sinusoid\",\n            \"version\": \"1.9.2\"\n          },\n          \"schedule_enabled\": false\n        },\n        {\n          \"name\": \"testacl\",\n          \"address\": \"\",\n          \"management_port\": \"\",\n          \"service_port\": \"\",\n          \"protocol\": \"\",\n          \"status\": \"\",\n          \"assets\": [\n            {\n              \"count\": 76892,\n              \"asset\": \"sinusoid\"\n            }\n          ],\n          \"plugin\": {\n            \"name\": \"testing\",\n            \"version\": \"1.9.2\"\n          },\n          \"schedule_enabled\": false\n        },\n        {\n          \"name\": \"dsds\",\n          \"address\": \"\",\n          \"management_port\": \"\",\n          \"service_port\": \"\",\n          \"protocol\": \"\",\n          \"status\": \"\",\n          \"assets\": [],\n          \"plugin\": {\n            \"name\": \"Expression\",\n            \"version\": \"1.9.2\"\n          },\n          \"schedule_enabled\": false\n        }\n      ]\n    }\n    $\n\nService Types\n-------------\n\nFledge supports a number of different service types, some of which are included with the base Fledge installation and others that must be installed separately if required.\n\n.. note::\n\n   The API entry points in this section require that the Fledge installation has been configured with access to a Fledge package repository.\n\nInstalled Service Types\n~~~~~~~~~~~~~~~~~~~~~~~\n\nIn order to find out what service types are installed in the system the */fledge/service/installed* call can be used.\n\n.. code-block:: console\n\n    $ curl http://localhost:8081/fledge/service/installed\n    {\"services\": [\"storage\", \"north\", \"dispatcher\", \"notification\", \"south\"]}\n\n.. 
note::\n\n   All Fledge instances have the storage, south and north services installed by default when the Fledge core is installed.\n\nAvailable Service Types\n~~~~~~~~~~~~~~~~~~~~~~~\n\nTo find out what services are available to be installed from the package repository configured for your Fledge instance use the API */fledge/service/available*.\n\n.. code-block:: console\n\n    $ curl -s http://localhost:8081/fledge/service/available |jq\n    {\n      \"services\": [\n        \"fledge-service-notification\"\n      ],\n      \"link\": \"logs/220831-13-26-25-list.log\"\n    }\n\nThe *link* in the returned JSON is a link to a log file that shows the interaction with the package repository.\n\nInstall a Service Type\n~~~~~~~~~~~~~~~~~~~~~~\n\nTo install a new service type the POST method can be used on the */fledge/service* API call with the parameter *action=install*.\n\n.. code-block:: console\n\n   $ curl -X POST http://localhost:8081/fledge/service?action=install -d'{\"format\":\"repository\", \"name\": \"fledge-service-notification\"}'\n\nThis will install the named service from the package repository.\n\n.. note::\n\n   In order to install a package the package repository must be configured and accessible.\n\nCreating A Service\n------------------\n\nA new service can be created using the POST method on the */fledge/service* API call. The payload passed to this request will determine at least the service type and the name of the new service; however, it may also contain further configuration which is dependent on the type of the service.\n\nThe minimum payload content that must be in every create call for a service is the name of the new service, the type of the service and the enabled state of the service. This can be used, for example, to create a notification service or a control dispatcher service that need no further configuration.\n\n.. 
code-block:: console\n\n   $ curl -X POST http://localhost:8081/fledge/service -d'{ \"name\" : \"Notifier\", \"type\" : \"notification\", \"enabled\" : \"true\" }'\n\nOr for a control dispatcher\n\n.. code-block:: console\n\n   $ curl -X POST http://localhost:8081/fledge/service -d'{ \"name\" : \"Control\", \"type\" : \"dispatcher\", \"enabled\" : \"true\" }'\n\nA north or south service needs some extra configuration in the payload. These service types must always have a plugin and can optionally be passed configuration for that plugin. If no plugin configuration is given then the plugin's default configuration values will be used.\n\nTo create a south service using the default values of the *sinusoid* plugin:\n\n.. code-block:: console\n\n   $ curl -X POST http://localhost:8081/fledge/service -d'{ \"name\" : \"Sine\", \"type\" : \"south\", \"enabled\" : \"true\", \"plugin\" : \"sinusoid\" }'\n\nIn the next example we create a north service that will send data to another Fledge instance using the *HTTPC* plugin. We set the value of the configuration item *URL* in the plugin to be the URL of the concentrator Fledge instance.\n\n.. code-block:: console\n\n   $ curl -sX POST http://localhost:8081/fledge/service -d '{\"name\": \"HTTP\", \"plugin\": \"httpc\", \"type\": \"north\", \"enabled\": true, \"config\": {\"URL\": {\"value\": \"http://concentrator.local:6683/buildingA\"}}}'\n\nStopping and Starting Services\n------------------------------\n\nServices within Fledge are started and stopped via the scheduler; normally a service will be started via a schedule that defines the service to run at startup of Fledge. This ensures that the service runs when Fledge is started and will continue to run until Fledge is stopped. To stop a service, therefore, the schedule must be disabled.\n\nDisabling the schedule associated with a service will also stop the service. 
The service will not then be restarted, even if Fledge is restarted, until the schedule is again enabled.\n\nTo disable a schedule you can call the */fledge/schedule/{schedule_id}/disable* API call; however, this requires you to know the ID of the schedule associated with the service. It is possible to find this for a given service, as the schedule name is the same as the service name; however, it is simpler to use the API call */fledge/schedule/disable* as this can be passed the name of the schedule rather than the schedule ID. Since the schedule name and the service name are the same, we effectively pass the name of the service we wish to disable.\n\nTo disable the service called *Sine* we would use the following *curl* command.\n\n.. code-block:: console\n\n   $ curl -X PUT http://localhost:8081/fledge/schedule/disable -d '{\"schedule_name\": \"Sine\"}'\n\nTo enable the service again we can use the */fledge/schedule/enable* API call, which takes an identical payload to the disable API call.\n\n.. code-block:: console\n\n   $ curl -X PUT http://localhost:8081/fledge/schedule/enable -d '{\"schedule_name\": \"Sine\"}'\n\nDeleting a Service\n------------------\n\nServices may be deleted from the system using the */fledge/service* API call with the DELETE method. When a service is deleted it will be stopped, and the service, its configuration and its associated schedule will be removed. Any data that has been read by the service will, however, remain in the readings database.\n\nTo delete the service named *Sine*:\n\n.. code-block:: console\n\n   $ curl -X DELETE http://localhost:8081/fledge/service/Sine\n\n"
  },
  {
    "path": "docs/rest_api_guide/03_RESTstatistics.rst",
    "content": "\nStatistics\n----------\n\nThe *statistics* interface allows the retrieval of live statistics, statistical history and statistical rates for the Fledge device.\n\nFledge records a number of statistics values, some with fixed names and others that reflect the name of a service or an asset. The statistics counters with fixed names are given below.\n\n.. list-table::\n    :widths: 20 50\n    :header-rows: 1\n\n    * - Key\n      - Description\n    * - BUFFERED\n      - Readings currently in the Fledge buffer\n    * - DISCARDED\n      - Readings discarded by the South Service before being placed in the buffer. This may be due to an error in the readings themselves.\n    * - PURGED\n      - Readings removed from the buffer by the purge process\n    * - READINGS\n      - Readings received by Fledge\n    * - UNSENT\n      - Readings filtered out in the send process\n    * - UNSNPURGED\n      - Readings that were purged from the buffer before being sent\n\nIn addition to these fixed names there will be:\n\n  - One statistic per north service or task that is named the same as the service or task name. This will count the number of readings sent out on that service.\n\n  - One statistic per asset that is named the same as the asset. This will be the number of readings that have been ingested for that asset.\n\n  - One statistic per south service, named with the service name and *-Ingest* appended. This is the count of readings read in for that service.\n\nGET statistics\n~~~~~~~~~~~~~~\n\n``GET /fledge/statistics`` - return a general set of statistics\n\n\n**Response Payload**\n\nThe response payload is a JSON document with statistical information (all numerical); these statistics are absolute counts since Fledge started.\n\n\n**Example**\n\n.. 
code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/statistics\n  [ {\n      \"key\": \"BUFFERED\",\n      \"description\": \"Readings currently in the Fledge buffer\",\n      \"value\": 0\n    },\n  ...\n    {\n      \"key\": \"UNSNPURGED\",\n      \"description\": \"Readings that were purged from the buffer before being sent\",\n      \"value\": 0\n    },\n  ... ]\n  $\n\n\nGET statistics/history\n~~~~~~~~~~~~~~~~~~~~~~\n\n``GET /fledge/statistics/history`` - return a historical set of statistics. This interface is normally used to check if a set of sensors or devices are sending data to Fledge, by comparing the recent statistics and the number of readings received for an asset.\n\n\n**Request Parameters**\n\n- **limit** - limit the result set to the *N* most recent entries.\n\n\n**Response Payload**\n\nA JSON document containing an array of statistical information; these statistics are delta counts since the previous entry in the array. The time interval between values is a constant defined in the schedule that runs the gathering process which populates the history statistics in the storage layer.\n\n.. list-table::\n    :widths: 20 50\n    :header-rows: 1\n\n    * - Key\n      - Description\n    * - interval\n      - The interval in seconds between successive statistics values\n    * - statistics[].BUFFERED\n      - Readings currently in the Fledge buffer\n    * - statistics[].DISCARDED\n      - Readings discarded by the South Service before being placed in the buffer. 
This may be due to an error in the readings themselves.\n    * - statistics[].PURGED\n      - Readings removed from the buffer by the purge process\n    * - statistics[].READINGS\n      - Readings received by Fledge\n    * - statistics[].*NORTH_TASK_NAME*\n      - The number of readings sent to the upstream system by the plugin with the north instance name\n    * - statistics[].UNSENT\n      - Readings filtered out in the send process\n    * - statistics[].UNSNPURGED\n      - Readings that were purged from the buffer before being sent\n    * - statistics[].*ASSET-CODE*\n      - The number of readings received by Fledge since startup with name *asset-code*\n\n\n**Example**\n\n.. code-block:: console\n\n  $ curl -s http://localhost:8081/fledge/statistics/history?limit=2\n  {\n    \"interval\": 15,\n    \"statistics\": [\n      {\n        \"history_ts\": \"2020-06-01 11:21:04.357\",\n        \"READINGS\": 0,\n        \"BUFFERED\": 0,\n        \"UNSENT\": 0,\n        \"PURGED\": 0,\n        \"UNSNPURGED\": 0,\n        \"DISCARDED\": 0,\n        \"Readings Sent\": 0\n      },\n      {\n        \"history_ts\": \"2020-06-01 11:20:48.740\",\n        \"READINGS\": 0,\n        \"BUFFERED\": 0,\n        \"UNSENT\": 0,\n        \"PURGED\": 0,\n        \"UNSNPURGED\": 0,\n        \"DISCARDED\": 0,\n        \"Readings Sent\": 0\n      }\n    ]\n  }\n  $\n\n\nGET statistics/rate\n~~~~~~~~~~~~~~~~~~~\n\n``GET /fledge/statistics/rate`` - return a set of rates for a set of statistics. This interface returns the rate of a statistic value in counts per minute over a specified set of averages. 
It is passed two parameters, a comma separated list of intervals in minutes and a comma separated list of statistics.\n\n**Request Parameters**\n\n  - **statistics** - a comma separated list of statistics keys.\n\n  - **periods** - a comma separated list of time periods in minutes.\n\nThe corresponding rate that will be returned for a given value X is the counts per minute over the previous X minutes.\n\n**Example**\n\n.. code-block:: console\n\n   $ curl -sX GET http://localhost:8081/fledge/statistics/rate?statistics=READINGS,Readings%20Sent\\&periods=1,5,15,30,60\n\n    {\n      \"rates\": {\n        \"READINGS\": {\n          \"1\": 2561.0,\n          \"5\": 512.2,\n          \"15\": 170.73333333333332,\n          \"30\": 85.36666666666666,\n          \"60\": 42.68333333333333\n        },\n        \"Readings Sent\": {\n          \"1\": 2225.0,\n          \"5\": 445.0,\n          \"15\": 148.33333333333334,\n          \"30\": 74.16666666666667,\n          \"60\": 37.083333333333336\n        }\n      }\n    }\n"
  },
  {
    "path": "docs/rest_api_guide/03_RESTupdate.rst",
    "content": ".. \n\n\nRepository Configuration\n------------------------\n\n``POST /fledge/repository`` - Configure the package repository to use for the Fledge packages.\n\n**Payload**\n\nThe payload is a JSON document that can have one or two keys defined in\nthe JSON object, *url* and *version*. The *url* item is mandatory and\ngives the URL of the package repository. This is normally set to the\nfledge-iot archives for the fledge packages.\n\n.. code-block:: console\n\n   {\n       \"url\":\"http://archives.fledge-iot.org\",\n       \"version\":\"latest\"\n   }\n\n\nUpdate Packages\n---------------\n\n``PUT /fledge/update`` - Update all of the packages within the Fledge instance\n\nThis call can be used if you have installed some or all of your Fledge\ninstance using packages via the package installation process or using\nthe package installer to add extra plugins. It will update all the Fledge\npackages that you have installed to the latest version.\n\n**Example**\n\n.. code-block:: console\n\n   $ curl -X PUT http://localhost:8081/fledge/update\n\nThe call will return immediately and the package update will occur as a background task.\n\n\n"
  },
  {
    "path": "docs/rest_api_guide/04_RESTuser.rst",
    "content": ".. REST API Guide\n.. https://docs.google.com/document/d/1JJDP7g25SWerNVCxgff02qp9msHbqA9nt3RAFx8-Qng\n\n.. |br| raw:: html\n\n   <br />\n\n.. |ar| raw:: html\n\n   <div align=\"right\">\n\n.. Images\n\n\n.. Links\n\n\n.. =============================================\n\n\n******************\nUser API Reference\n******************\n\nThe user API provides a mechanism to access the data that is buffered within Fledge. It is designed to allow users and applications to get a view of the data that is available in the buffer and do analysis and possibly trigger actions based on recently received sensor readings.\n\nIn order to use the entry points in the user API, with the exception of the ``/fledge/authenticate`` entry point, there must be an authenticated client calling the API. The client must provide a header field in each request, authtoken, the value of which is the token that was retrieved via a call to ``/fledge/authenticate``. This token must be checked for validity and also that the authenticated entity has user or admin permissions.\n\n\nBrowsing Assets\n===============\n\n\nasset\n-----\n\nThe asset method is used to browse all or some assets, based on search and filtering.\n\n\nGET all assets\n~~~~~~~~~~~~~~\n\n``GET /fledge/asset`` - Return an array of asset codes buffered in Fledge and a count of assets by code.\n\n\n**Response Payload**\n\nAn array of JSON objects, one per asset.\n\n.. list-table::\n    :widths: 20 20 50 30\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - count\n      - number\n      - The number of recorded readings for the asset code\n      - 11\n    * - assetCode\n      - string\n      - The code of the asset\n      - asset_1\n\n**Example**\n\n.. 
code-block:: console\n\n  $ curl -sX GET http://localhost:8081/fledge/asset\n  [\n    {\"count\": 11, \"assetCode\": \"asset_1\"},\n    {\"count\": 11, \"assetCode\": \"asset_2\"},\n    {\"count\": 11, \"assetCode\": \"asset_3\"}\n  ]\n  $\n\nGET asset readings\n~~~~~~~~~~~~~~~~~~\n\n``GET /fledge/asset/{code}`` - Return an array of readings for a given asset code.\n\n\n**Path Parameters**\n\n- **code** - the asset code to retrieve.\n\n\n**Request Parameters**\n\n  - **limit** - set the limit of the number of readings to return. If not specified, the default is 20 readings.\n  \n  - **skip** - the number of readings to skip. This is used in conjunction with limit and allows the caller to not just get the last N readings, but to get a set of readings from the past.\n\n  - **seconds** - this is essentially an alternative form of limit, but here the limit is expressed in seconds rather than a number of readings. It will return the readings for the last N seconds. Note that this can not be used in conjunction with the *limit* and *skip* or with *hours* and *minutes* request parameters.\n\n  - **minutes** - this is essentially an alternative form of limit, but here the limit is expressed in minutes rather than a number of readings. It will return the readings for the last N minutes. Note that this can not be used in conjunction with the *limit* and *skip* or with *seconds* and *hours* request parameters.\n\n  - **hours** - this is essentially an alternative form of limit, but here the limit is expressed in hours rather than a number of readings. It will return the readings for the last N hours. Note that this can not be used in conjunction with the *limit* and *skip* or with *seconds* and *minutes* request parameters.\n\n  - **previous** - This is used in conjunction with the *hours*, *minutes* or *seconds* request parameter and allows the caller to get not just the most recent readings but historical readings. 
The value of *previous* is defined in hours, minutes or seconds dependent upon the parameter it is used with and defines how long ago the data that is returned should end. If the caller passes a set of parameters *seconds=30&previous=120* the call will return 30 seconds worth of data and the newest data returned will be 120 seconds old.\n\n**Response Payload**\n\nAn array of JSON objects with the readings data for a series of readings sorted in reverse chronological order.\n\n.. list-table::\n    :widths: 20 20 50 30\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - reading\n      - JSON object\n      - The JSON reading object received from the sensor\n      - {\"pressure\": 885.7}\n    * - timestamp\n      - timestamp\n      - The time at which the reading was received\n      - 2023-04-14 12:04:34.603963\n\n**Example**\n\n.. code-block:: console\n\n  $ curl -sX GET http://localhost:8081/fledge/asset/asset_3\n  [\n    {\"reading\": {\"pressure\": 885.7}, \"timestamp\": \"2023-04-14 12:04:34.603963\"},\n    {\"reading\": {\"pressure\": 846.3}, \"timestamp\": \"2023-04-14 12:02:39.150127\"},\n    {\"reading\": {\"pressure\": 913.0}, \"timestamp\": \"2023-04-14 12:02:26.616218\"},\n    {\"reading\": {\"pressure\": 994.7}, \"timestamp\": \"2023-04-14 12:02:11.171338\"},\n    {\"reading\": {\"pressure\": 960.2}, \"timestamp\": \"2023-04-14 12:01:56.979426\"}\n  ]\n  $\n\n  $ curl -sX GET http://localhost:8081/fledge/asset/asset_3?limit=3\n  [\n    {\"reading\": {\"pressure\": 885.7}, \"timestamp\": \"2023-04-14 12:04:34.603963\"},\n    {\"reading\": {\"pressure\": 846.3}, \"timestamp\": \"2023-04-14 12:02:39.150127\"},\n    {\"reading\": {\"pressure\": 913.0}, \"timestamp\": \"2023-04-14 12:02:26.616218\"}\n  ]\n  $\n\nUsing *seconds* and *previous* to obtain historical data.\n\n.. 
code-block:: console\n\n  $ curl -sX GET http://localhost:8081/fledge/asset/sinusoid?seconds=5\\&previous=60|jq\n  [\n    { \"reading\": { \"sinusoid\": 1 }, \"timestamp\": \"2022-11-09 09:37:51.930688\" },\n    { \"reading\": { \"sinusoid\": 0.994521895 }, \"timestamp\": \"2022-11-09 09:37:50.930887\" },\n    { \"reading\": { \"sinusoid\": 0.978147601 }, \"timestamp\": \"2022-11-09 09:37:49.933698\" },\n    { \"reading\": { \"sinusoid\": 0.951056516 }, \"timestamp\": \"2022-11-09 09:37:48.930644\" },\n    { \"reading\": { \"sinusoid\": 0.913545458 }, \"timestamp\": \"2022-11-09 09:37:47.930950\" }\n  ]\n\nThe above call returned 5 seconds of data from the current time minus 65 seconds to the current time minus 5 seconds.\n\nGET asset reading\n~~~~~~~~~~~~~~~~~\n\n``GET /fledge/asset/{code}/{reading}`` - Return an array of single readings for a given asset code.\n\n\n**Path Parameters**\n\n- **code** - the asset code to retrieve.\n- **reading** - the sensor from the asset's JSON formatted reading.\n\n\n**Request Parameters**\n\n  - **limit** - set the limit of the number of readings to return. If not specified, the default is 20 single readings.\n  \n  - **skip** - the number of readings to skip. This is used in conjunction with limit and allows the caller to not just get the last N readings, but to get a set of readings from the past.\n\n  - **seconds** - this is essentially an alternative form of limit, but here the limit is expressed in seconds rather than a number of readings. It will return the readings for the last N seconds. Note that this can not be used in conjunction with the *limit* and *skip* or with *hours* and *minutes* request parameters.\n\n  - **minutes** - this is essentially an alternative form of limit, but here the limit is expressed in minutes rather than a number of readings. It will return the readings for the last N minutes. 
Note that this can not be used in conjunction with the *limit* and *skip* or with *seconds* and *hours* request parameters.\n\n  - **hours** - this is essentially an alternative form of limit, but here the limit is expressed in hours rather than a number of readings. It will return the readings for the last N hours. Note that this can not be used in conjunction with the *limit* and *skip* or with *seconds* and *minutes* request parameters.\n\n  - **previous** - This is used in conjunction with the *hours*, *minutes* or *seconds* request parameter and allows the caller to get not just the most recent readings but historical readings. The value of *previous* is defined in hours, minutes or seconds dependent upon the parameter it is used with and defines how long ago the data that is returned should end. If the caller passes a set of parameters *seconds=30&previous=120* the call will return 30 seconds worth of data and the newest data returned will be 120 seconds old.\n\n\n**Response Payload**\n\nAn array of JSON objects with a series of readings sorted in reverse chronological order.\n\n.. list-table::\n    :widths: 20 20 50 30\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - timestamp\n      - timestamp\n      - The time at which the reading was received\n      - 2023-04-14 12:04:34.603937\n    * - reading\n      - JSON object\n      - The value of the specified reading\n      - {\"lux\": 47705.68}\n\n**Example**\n\n.. 
code-block:: console\n\n  $ curl -sX GET http://localhost:8081/fledge/asset/asset_2/lux\n  [\n    {\"timestamp\": \"2023-04-14 12:04:34.603937\", \"lux\": 47705.68},\n    {\"timestamp\": \"2023-04-14 12:02:39.150106\", \"lux\": 97967.9},\n    {\"timestamp\": \"2023-04-14 12:02:26.616200\", \"lux\": 28788.154},\n    {\"timestamp\": \"2023-04-14 12:02:11.171319\", \"lux\": 57992.674},\n    {\"timestamp\": \"2023-04-14 12:01:56.979407\", \"lux\": 10373.945}\n  ]\n  $\n\n  $ curl -sX GET http://localhost:8081/fledge/asset/asset_2/lux?limit=3\n  [\n    {\"timestamp\": \"2023-04-14 11:25:05.672528\", \"lux\": 75723.923},\n    {\"timestamp\": \"2023-04-14 11:24:49.767983\", \"lux\": 50475.99},\n    {\"timestamp\": \"2023-04-14 11:23:15.672528\", \"lux\": 75723.923}\n  ]\n  $\n\n\nGET asset reading summary\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\n``GET /fledge/asset/{code}/{reading}/summary`` - Return minimum, maximum and average values of a reading by asset code.\n\n\n**Path Parameters**\n\n- **code** - the asset code to retrieve.\n- **reading** - the sensor from the asset's JSON formatted reading.\n\n\n**Response Payload**\n\nA JSON object of a reading by asset code.\n\n.. list-table::\n    :widths: 20 20 50 30\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - .lux.min\n      - number\n      - The minimum value of the set of sensor values selected in the query string\n      - 10373.945\n    * - .lux.max\n      - number\n      - The maximum value of the set of sensor values selected in the query string\n      - 97967.9\n    * - .lux.average\n      - number\n      - The average value of the set of sensor values selected in the query string\n      - 48565.6706\n\n**Example**\n\n.. 
code-block:: console\n\n  $ curl -sX GET http://localhost:8081/fledge/asset/asset_2/lux/summary\n    {\"lux\": {\"min\": 10373.945, \"max\": 97967.9, \"average\": 48565.6706}}\n  $\n\nGET all asset reading timespan\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n``GET /fledge/asset/timespan`` - Return newest and oldest timestamp of each asset for which we hold readings in the buffer.\n\n\n**Response Payload**\n\nAn array of JSON objects with newest and oldest timestamps of the readings held for each asset.\n\n.. list-table::\n    :widths: 20 20 50 30\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - oldest\n      - string\n      - The oldest timestamp held in the buffer for this asset\n      - 2022-11-08 17:07:02.623258\n    * - newest\n      - string\n      - The newest timestamp held in the buffer for this asset\n      - 2022-11-09 14:52:50.069432\n    * - asset_code\n      - string\n      - The asset code for which the timestamps refer\n      - sinusoid\n\n**Example**\n\n.. code-block:: console\n\n    $ curl -sX GET http://localhost:8081/fledge/asset/timespan\n    [\n      {\n        \"oldest\": \"2022-11-08 17:07:02.623258\",\n        \"newest\": \"2022-11-09 14:52:50.069432\",\n        \"asset_code\": \"sinusoid\"\n      }\n    ]\n\nGET asset reading timespan\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n``GET /fledge/asset/{code}/timespan`` - Return newest and oldest timestamp for which we hold readings in the buffer.\n\n\n**Path Parameters**\n\n- **code** - the asset code to retrieve.\n\n\n**Response Payload**\n\nA JSON object with the newest and oldest timestamps for the asset held in the storage buffer.\n\n.. 
list-table::\n    :widths: 20 20 50 30\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - oldest\n      - string\n      - The oldest timestamp held in the buffer for this asset\n      - 2022-11-08 17:07:02.623258\n    * - newest\n      - string\n      - The newest timestamp held in the buffer for this asset\n      - 2022-11-09 14:52:50.069432\n\n**Example**\n\n.. code-block:: console\n\n    $ curl -sX GET http://localhost:8081/fledge/asset/sinusoid/timespan|jq\n      {\n        \"oldest\": \"2022-11-08 17:07:02.623258\",\n        \"newest\": \"2022-11-09 14:59:14.069207\"\n      }\n\n\nGET timed average asset reading\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n``GET /fledge/asset/{code}/{reading}/series`` - Return minimum, maximum and average values of a reading by asset code in a time series. The default interval in the series is one second.\n\n\n**Path Parameters**\n\n- **code** - the asset code to retrieve.\n- **reading** - the sensor from the asset's JSON formatted reading.\n\n**Request Parameters**\n\n  - **limit** - set the limit of the number of readings to return. If not specified, the default is 20 readings.\n\n  - **skip** - the number of readings to skip. This is used in conjunction with limit and allows the caller to not just get the last N readings, but to get a set of readings from the past.\n\n  - **seconds** - this is essentially an alternative form of limit, but here the limit is expressed in seconds rather than a number of readings. It will return the readings for the last N seconds. Note that this can not be used in conjunction with the *limit* and *skip* or with *hours* and *minutes* request parameters.\n\n  - **minutes** - this is essentially an alternative form of limit, but here the limit is expressed in minutes rather than a number of readings. It will return the readings for the last N minutes. 
Note that this can not be used in conjunction with the *limit* and *skip* or with *seconds* and *hours* request parameters.\n\n  - **hours** - this is essentially an alternative form of limit, but here the limit is expressed in hours rather than a number of readings. It will return the readings for the last N hours. Note that this can not be used in conjunction with the *limit* and *skip* or with *seconds* and *minutes* request parameters.\n\n  - **previous** - This is used in conjunction with the *hours*, *minutes* or *seconds* request parameter and allows the caller to get not just the most recent readings but historical readings. The value of *previous* is defined in hours, minutes or seconds dependent upon the parameter it is used with and defines how long ago the data that is returned should end. If the caller passes a set of parameters *seconds=30&previous=120* the call will return 30 seconds worth of data and the newest data returned will be 120 seconds old.\n\n**Response Payload**\n\nAn array of JSON objects with a series of readings sorted in reverse chronological order.\n\n.. list-table::\n    :widths: 20 20 50 30\n    :header-rows: 1\n\n    * - Name\n      - Type\n      - Description\n      - Example\n    * - min\n      - number\n      - The minimum value of the set of sensor values selected in the query string\n      - 47705.68\n    * - max\n      - number\n      - The maximum value of the set of sensor values selected in the query string\n      - 47705.68\n    * - average\n      - number\n      - The average value of the set of sensor values selected in the query string\n      - 47705.68\n    * - timestamp\n      - timestamp\n      - The time the reading represents\n      - 2023-04-14 12:04:34\n\n**Example**\n\n.. 
code-block:: console\n\n  $ curl -sX GET http://localhost:8081/fledge/asset/asset_2/lux/series\n  [\n    {\"min\": 47705.68, \"max\": 47705.68, \"average\": 47705.68, \"timestamp\": \"2023-04-14 12:04:34\"},\n    {\"min\": 97967.9, \"max\": 97967.9, \"average\": 97967.9, \"timestamp\": \"2023-04-14 12:02:39\"},\n    {\"min\": 28788.154, \"max\": 28788.154, \"average\": 28788.154, \"timestamp\": \"2023-04-14 12:02:26\"},\n    {\"min\": 57992.674, \"max\": 57992.674, \"average\": 57992.674, \"timestamp\": \"2023-04-14 12:02:11\"},\n    {\"min\": 10373.945, \"max\": 10373.945, \"average\": 10373.945, \"timestamp\": \"2023-04-14 12:01:56\"}\n  ]\n  $\n\n  $ curl -sX GET http://localhost:8081/fledge/asset/asset_2/lux/series?limit=3\n  [\n    {\"min\": 47705.68, \"max\": 47705.68, \"average\": 47705.68, \"timestamp\": \"2023-04-14 12:04:34\"},\n    {\"min\": 97967.9, \"max\": 97967.9, \"average\": 97967.9, \"timestamp\": \"2023-04-14 12:02:39\"},\n    {\"min\": 28788.154, \"max\": 28788.154, \"average\": 28788.154, \"timestamp\": \"2023-04-14 12:02:26\"}\n  ]\n\nUsing *seconds* and *previous* to obtain historical data.\n\n.. code-block:: console\n\n    $ curl -sX GET http://localhost:8081/fledge/asset/asset_2/lux/series?seconds=5\\&previous=60\n    [\n        {\"min\": 47705.68, \"max\": 47705.68, \"average\": 47705.68, \"timestamp\": \"2023-04-14 12:04:34\"}\n    ]\n  $\n\nThe above call returned 5 seconds of data from the current time minus 65 seconds to the current time minus 5 seconds.\n"
  },
  {
    "path": "docs/rest_api_guide/05_RESTdeveloper.rst",
    "content": "Developer API Calls\n===================\n\nA number of calls exist in the API that are targeted at those developing\npipelines and plugins for Fledge. These are not actions that are expected\nto be of everyday use, but are to aid in this development process.\n\nPurge Readings\n--------------\n\nUnder ordinary circumstances a user should never need to manually purge\ndata from the Fledge storage buffer; however, during the development\nprocess it can be useful to be able to manually purge data.\n\n``DELETE /fledge/asset`` - Purge data for all assets from the buffer\n\n**Response Payload**\n\nThe response payload is a JSON document that returns the number of readings that have been deleted.\n\n**Example**\n\n.. code-block:: console\n\n   $ curl -X DELETE http://localhost:8081/fledge/asset\n\nThe return from this is the number of readings that have been purged.\n\n.. code-block:: JSON\n\n   { \"purged\" : 3239 }\n\n.. note::\n\n   Great care should be exercised in using this call as **all** data that is currently buffered in the Fledge storage layer will be lost and there is no mechanism to undo this operation.\n\n``DELETE /fledge/asset/{asset name}`` - Purge data for the named asset from the buffer\n\n**Response Payload**\n\nThe response payload is a JSON document that returns the number of readings that have been deleted.\n\n**Example**\n\n.. code-block:: console\n\n   $ curl -X DELETE http://localhost:8081/fledge/asset/sinusoid\n\nThe return from this is the number of readings that have been purged.\n\n.. code-block:: JSON\n\n   { \"purged\" : 435 }\n\n.. note::\n\n   Great care should be exercised in using this call as **all** data for the **named** asset that is currently buffered in the Fledge storage layer will be lost and there is no mechanism to undo this operation.\n\nView Plugin Persisted Data\n--------------------------\n\nFledge plugins may persist data between executions of the plugin. This data takes the form of a JSON document. 
In normal circumstances the user should not need to view or manage this data, as it is the responsibility of the plugin to manage it. However, during the development of a plugin it is useful for the developer to be able to view and manage this data.\n\n``GET /fledge/service/{service_name}/persist`` - get the names of the plugins that persist data within a service.\n\n.. code-block:: console\n\n   curl http://localhost:8081/fledge/service/OMF/persist\n\nThis would return the list of plugins as a JSON document as shown below\n\n.. code-block:: JSON\n\n   {\n      \"persistent\": [\n        \"OMF\"\n      ]\n    }\n\nIf no plugins within this service persist data, the *persistent* array would be empty.\n\n``GET /fledge/service/{service_name}/plugin/{plugin_name}/data`` - view the plugin data persisted by an instance of a plugin\n\n**Parameters**\n\n  - *service_name* - the name of the service in which the plugin is running\n\n  - *plugin_name* - the name of the plugin within the service. For a north or south plugin this is the same as the service name. For a filter this will be the name given to the filter instance when it was added to the pipeline.\n\n**Response Payload**\n\nThe response payload is the persisted data from the plugin.\n\n**Example**\n\n.. code-block:: console\n\n   $ curl http://localhost:8081/fledge/service/OMF/plugin/OMF/data\n\nWhere *OMF* is the name of a north service with an OMF filter connected to a PI Server. In this case the persisted data is the type information we cache locally that describes the types that have been created within the OMF layer of the PI Server.\n\n.. 
code-block:: console\n\n    {\n      \"data\": {\n        \"sentDataTypes\": [\n          {\n            \"sine25\": {\n              \"type-id\": 1,\n              \"dataTypesShort\": \"0x101\",\n              \"hintChecksum\": \"0x0\",\n              \"namingScheme\": 0,\n              \"afhHash\": \"15489826335467873671\",\n              \"afHierarchy\": \"fledge/data_piwebapi/mark\",\n              \"afHierarchyOrig\": \"fledge/data_piwebapi/mark\",\n              \"dataTypes\": {\n                \"sinusoid\": {\n                  \"type\": \"number\",\n                  \"format\": \"float64\"\n                }\n              }\n            }\n          },\n          {\n            \"sinusoid\": {\n              \"type-id\": 1,\n              \"dataTypesShort\": \"0x101\",\n              \"hintChecksum\": \"0x0\",\n              \"namingScheme\": 0,\n              \"afhHash\": \"15489826335467873671\",\n              \"afHierarchy\": \"fledge/data_piwebapi/mark\",\n              \"afHierarchyOrig\": \"fledge/data_piwebapi/mark\",\n              \"dataTypes\": {\n                \"sinusoid\": {\n                  \"type\": \"number\",\n                  \"format\": \"float64\"\n                }\n              }\n            }\n          }\n        ]\n      }\n    }\n\n.. note::\n\n   Persisted data is written when the plugin is shut down. Therefore, in order to obtain accurate results, this call should only be made when the service is shut down. Calling this API when a service is running will return the data from the previous time the service was shut down.\n\n``POST /fledge/service/{service_name}/plugin/{plugin_name}/data`` - write the persisted data for a plugin. Send the data with a payload of the form ``{\"data\": \"<YOUR_VALUE>\"}``\n\nWrite or overwrite data persisted by the plugin. 
The request payload is the data which the plugin should receive and must be in the correct format for the plugin.\n\nThe payload for the POST command is defined by the plugin itself and hence no general example can be given here. It is intended that this is used in conjunction with an earlier GET request, or a GET request on another instance, to restore a previous state.\n\n.. note::\n\n   Persisted data is written when the plugin is shut down. Therefore, in order to obtain predictable results, this call should only be made when the service is shut down. Calling this API when a service is running will result in the data written by the call being overwritten by the plugin when it shuts down.\n\n\n``DELETE /fledge/service/{service_name}/plugin/{plugin_name}/data`` - delete the persisted data for the plugin\n\n.. note::\n\n   Persisted data is written when the plugin is shut down. Therefore, in order to obtain predictable results, this call should only be made when the service is shut down. Calling this API when a service is running will result in the data being written again by the plugin when it shuts down, rendering this delete operation ineffective.\n"
  },
  {
    "path": "docs/rest_api_guide/06_GrafanaExamples.rst",
    "content": "..\n.. Images\n.. |GrafanaAsset| image:: ../images/Grafana_asset.jpg\n.. |GrafanaPing| image:: ../images/Grafana_ping.jpg\n.. |GrafanaReading| image:: ../images/Grafana_reading.jpg\n.. |GrafanaStatistics| image:: ../images/Grafana_statistics.jpg\n.. |GrafanaTimestamp| image:: ../images/Grafana_Timestamp.jpg\n\n.. Links\n.. |grafana| raw:: html\n\n   <a href=\"https://grafana.com\" target=\"_blank\">Grafana</a>\n\nGrafana Examples\n================\n\nThe REST API of Fledge provides a way to integrate other applications with Fledge, these applications can control Fledge or that may be used to monitor the operation of Fledge itself or to visualize the data held within a Fledge instance. One such tool is |grafana|. Here we will show some simple examples of how the Fledge REST API can be used with Grafana and the Infinity data source plugin. This is intended to be a simple example, more complex systems can be built using these tools.\n\nShow Fledge Status\n-------------------\n\nUsing the *GET /fledge/ping* endpoint we can retrieve information about the number of readings read, sent, purged etc. \n\n.. code-block:: console\n\n   $ curl http://localhost:8081/fledge/ping\n\nWhich would return a JSON payload that looks similar to that shown below\n\n.. code-block:: JSON\n\n\t{\n\t  \"uptime\": 13203,\n\t  \"dataRead\": 2045868,\n\t  \"dataSent\": 6700,\n\t  \"dataPurged\": 1293723,\n\t  \"authenticationOptional\": true,\n\t  \"serviceName\": \"Fledge\",\n\t  \"hostName\": \"foglamp-18\",\n\t  \"ipAddresses\": [\n\t    \"192.168.0.172\"\n\t  ],\n\t  \"health\": \"green\",\n\t  \"safeMode\": false,\n\t  \"version\": \"1.9.2\"\n\t}\n\nWe can use this URL as the query for a Grafana dashboard panel to retrieve the basic statistics. 
We then select the items we want to display as columns and set the type of these; in this case we have chosen the basic counters, which are numeric values.\n\n+---------------+\n| |GrafanaPing| |\n+---------------+\n\nDisplay Statistics\n------------------\n\nThis example shows how to take a set of values over time and display them graphically within Grafana. The major difference here is the treatment of the timestamp. In this example we are using the statistics history API to retrieve statistics data over time.\n\nUsing the curl command to look at the API call\n\n.. code-block:: console\n\n    curl http://localhost:8081/fledge/statistics/history|jq\n\nWe get a JSON response as follows\n\n.. code-block:: console\n\n    {\n      \"interval\": 15,\n      \"statistics\": [\n\t{\n\t  \"history_ts\": \"2022-08-25 11:31:29.565\",\n\t  \"READINGS\": 68,\n\t  \"BUFFERED\": 0,\n\t  \"UNSENT\": 0,\n\t  \"PURGED\": 0,\n\t  \"UNSNPURGED\": 0,\n\t  \"DISCARDED\": 0,\n\t  \"coap-Ingest\": 0,\n\t  \"COAP\": 0,\n\t  \"Sine-Ingest\": 0,\n\t  \"SINUSOID\": 0,\n\t  \"exp-Ingest\": 0,\n\t  \"EXPRESSION\": 0,\n\t  \"Readings Sent\": 0,\n\t  \"OP\": 0,\n\t  \"test1-Ingest\": 0,\n\t  \"sine2-Ingest\": 15,\n\t  \"SINE210\": 0,\n\t  \"SINE25\": 0,\n\t  \"SINE2\": 0,\n\t  \"SINE250\": 15,\n\t  \"OMF\": 0,\n\t  \"PRESINE2.SINUSOID\": 0,\n\t  \"SINUSOID2\": 0,\n\t  \"lathe1004-Ingest\": 53,\n\t  \"LATHE1004\": 15,\n\t  \"LATHE1004CURRENT\": 15,\n\t  \"LATHE1004IR\": 15,\n\t  \"LATHE1004VIBRATION\": 8,\n\t  \"testacl-Ingest\": 0,\n\t  \"dsds-Ingest\": 0,\n\t  \"OMF2\": 0,\n\t  \"test-Ingest\": 0\n\t},\n\t...\n    }\n\nWe are interested in the array of data under the *statistics* object in the JSON; therefore we choose a value of *statistics* for the *Rows / Root* value. 
This means that each array element under *statistics* will be considered as a row in the query result.\n\n+---------------------+\n| |GrafanaStatistics| |\n+---------------------+\n\nWe then select the columns as before to extract the values we are interested in displaying. These are all set to be of type *Number*.\n\nIn order to have the data graphed over time we must also select a timestamp column; in this case *history_ts* will be used. We cannot set this as a timestamp type column as the Fledge timestamp format is not directly supported by Grafana. We must set up a transformation to take the string value from *history_ts* and convert it to a timestamp that can be understood by Grafana.\n\n+--------------------+\n| |GrafanaTimestamp| |\n+--------------------+\n\nIn this transform we give it the Fledge timestamp format and set the desired result type to be a Timestamp. This now allows Grafana to understand the timestamps and display the Fledge data.\n\nOne final point to mention: the Fledge timestamps are returned in UTC, whereas Grafana assumes the data is in the local timezone. To resolve this, set the preferences in Grafana to expect UTC data, or add a time adjustment based on the number of hours from UTC at your location.\n\nGraph Reading Data\n------------------\n\nThis example is very similar to the statistics history example above; the major difference is that we are extracting the readings data from the buffer using the */fledge/asset/{assetName}* URL.\n\n+------------------+\n| |GrafanaReading| |\n+------------------+\n\nWe must select the data to display in the same way; we use *limit=* to allow the query to return sufficient data. Ideally we would have a time bound query here, but that is outside the scope of this simple example.\n\n.. 
code-block:: console\n\n    $ curl http://localhost:8081/fledge/asset/sine250?limit=2 |jq\n    [\n      {\n\t\"reading\": {\n\t  \"sinusoid\": -0.951056516\n\t},\n\t\"timestamp\": \"2022-08-25 13:47:45.624800\"\n      },\n      {\n\t\"reading\": {\n\t  \"sinusoid\": -0.978147601\n\t},\n\t\"timestamp\": \"2022-08-25 13:47:44.624586\"\n      }\n    ]\n\nWe add the columns we require, there is no need to select the *Rows / Root* in this example as the array is already at the root of the JSON document returned.\n\nWe must also do the same transformation for the timestamp format we did above.\n"
  },
  {
    "path": "docs/rest_api_guide/index.rst",
    "content": ".. REST API Developers Guide\n\n********************************\nREST API Developers Guide\n********************************\n\n.. toctree::\n\n    01_REST\n    02_RESTauthentication\n    03_RESTadmin\n    03_RESTstatistics\n    03_RESTassetTracker\n    03_RESTupdate\n    03_RESTservices\n    04_RESTuser\n    05_RESTdeveloper\n    06_GrafanaExamples\n"
  },
  {
    "path": "docs/scripts/fledge_plugin_list",
    "content": "#!/usr/bin/env bash\n\nif [[ \"${USERNAME}\" == \"\" ]] || [[ \"${GITHUB_ACCESS_TOKEN}\" == \"\" ]]; then\n    echo \"You must have set a GitHub username & access token environment variable. Like export USERNAME=YOUR_USERNAME; export GITHUB_ACCESS_TOKEN=YOUR_ACCESS_TOKEN\";\n    exit 1\nfi\n\noutput=$1\nrm -f $output\nDOCBRANCH=$2\nheader=\"Authorization: token ${GITHUB_ACCESS_TOKEN}\"\n# Get total number of repository pages in fledge org.\nfledgeRepoPagesCount=$(curl -sI https://api.github.com/orgs/fledge-iot/repos | grep -oP '\\d+(?=>; rel=\"last\")')\nfledgeRepos=$(curl -H \"$header\" -sX GET https://api.github.com/orgs/fledge-iot/repos\\?page=[1-$fledgeRepoPagesCount])\nfledgeRepos=\"$(echo $fledgeRepos | sed 's/\\] \\[/,/g')\"\nfetchTopicReposPyScript='\nimport json,sys;\\\nrepos=json.load(sys.stdin);\\\nfRepos = [r[\"name\"] for r in repos[\"items\"]];\\\nprint(\"\\n\".join(fRepos));\n'\nfledge_wip_repos=$(curl -sX GET -H \"$header\" -H \"Accept: application/vnd.github.mercy-preview+json\" https://api.github.com/search/repositories?q=topic:wip+org:fledge-iot)\nfledge_poc_repos=$(curl -sX GET -H \"$header\" -H \"Accept: application/vnd.github.mercy-preview+json\" https://api.github.com/search/repositories?q=topic:poc+org:fledge-iot)\nfledge_internal_repos=$(curl -sX GET -H \"$header\" -H \"Accept: application/vnd.github.mercy-preview+json\" https://api.github.com/search/repositories?q=topic:internal+org:fledge-iot)\nfledge_obsolete_repos=$(curl -sX GET -H \"$header\" -H \"Accept: application/vnd.github.mercy-preview+json\" https://api.github.com/search/repositories?q=topic:obsolete+org:fledge-iot)\nfledge_wip_repos_name=$(echo ${fledge_wip_repos} | python3 -c \"$fetchTopicReposPyScript\" | sort -f)\nfledge_poc_repos_name=$(echo ${fledge_poc_repos} | python3 -c \"$fetchTopicReposPyScript\" | sort -f)\nfledge_internal_repos_name=$(echo ${fledge_internal_repos} | python3 -c \"$fetchTopicReposPyScript\" | sort -f)\nfledge_obsolete_repos_name=$(echo 
${fledge_obsolete_repos} | python3 -c \"$fetchTopicReposPyScript\" | sort -f)\nexport EXCLUDE_FLEDGE_TOPIC_REPOSITORIES=$(echo ${fledge_wip_repos_name} ${fledge_poc_repos_name} ${fledge_internal_repos_name} ${fledge_obsolete_repos_name} | sort -f)\necho \"EXCLUDED FLEDGE TOPIC REPOS LIST: $EXCLUDE_FLEDGE_TOPIC_REPOSITORIES\"\nfetchFledgeReposPyScript='\nimport os,json,sys;\\\nrepos=json.load(sys.stdin);\\\nexclude_topic_packages=os.environ[\"EXCLUDE_FLEDGE_TOPIC_REPOSITORIES\"];\\\nall_repos = [r[\"name\"] for r in repos if r[\"archived\"] is False];\\\nfRepos = list(set(all_repos) - set(exclude_topic_packages.split()));\\\nprint(\"\\n\".join(fRepos));\n'\nREPOS=$(echo ${fledgeRepos} | python3 -c \"$fetchFledgeReposPyScript\" | sort -f)\nINBUILT_PLUGINS=\"fledge-north-OMF fledge-rule-Threshold fledge-rule-DataAvailability\"\nREPOSITORIES=$(echo ${REPOS} ${INBUILT_PLUGINS} | xargs -n1 | sort -f | xargs)\necho \"REPOSITORIES LIST: \"${REPOSITORIES}\n\n# Clean out any old keywords left behind\nif [[ -d /tmp/keywords ]]; then\n    rm -rf /tmp/keywords\nfi\n\n# Create the list header for a new keyword file\nfunction keyword_header {\n    keyfile=\"$1\"\n    title=$2\n    echo \".. list-table:: ${title}\" >> \"$keyfile\"\n    echo \"    :widths: 20 50\" >> \"$keyfile\"\n    echo \"    :header-rows: 1\" >> \"$keyfile\"\n    echo \"\" >> \"$keyfile\"\n    echo \"    * - Name\" >> \"$keyfile\"\n    echo \"      - Description\" >> \"$keyfile\"\n}\n\n# Create the table of plugins by type. Also generate a set of keyword grouped\n# tables in files stored in /tmp/keywords/{plugin type}\nfunction table {\n    list=\"$1\"\n    title=$2\n    tableType=$3\n    echo \".. 
list-table:: ${title}\" >> \"$output\"\n    echo \"    :widths: 20 50\" >> \"$output\"\n    echo \"    :header-rows: 1\" >> \"$output\"\n    echo \"\" >> \"$output\"\n    echo \"    * - Name\" >> \"$output\"\n    echo \"      - Description\" >> \"$output\"\n    for repo in ${list}\n    do\n        type=$(echo \"${repo}\" | sed -e 's/fledge-//' -e 's/-.*//')\n        name=$(echo \"${repo}\" | sed -e 's/fledge-//' -e \"s/${type}-//\")\n        if [[ ${type} = \"$tableType\" ]]; then\n            if grep -q \"$repo\" <<< \"$INBUILT_PLUGINS\"; then\n                if [[ $repo == \"fledge-north-OMF\" ]]; then\n                    description=\"Send data to OSIsoft PI Server, Edge Data Store or OSIsoft Cloud Services\"\n                    echo \"    * - \\`omf <\"plugins/fledge-north-OMF/index.html\">\\`__\" >> \"$output\"\n                    echo \"      - ${description}\" >> \"$output\"\n                    keyword=OMF\n                    # Use ${type}, not a literal {$type}, so the keyword file lands where add_keywords looks for it\n                    mkdir -p \"/tmp/keywords/${type}\"\n                    keyfile=\"/tmp/keywords/${type}/${keyword}\"\n                    if [[ ! 
-f \"${keyfile}\" ]]; then\n                        keyword_header \"$keyfile\" \"$title\"\n                    fi\n                    if [[ -f ${repo}/docs/index.rst ]]; then\n                        echo \"    * - \\`$name <\"plugins/fledge-north-OMF/index.html\">\\`__\" >> \"$keyfile\"\n                    else\n                        echo \"    * - ${name}\" >> \"${keyfile}\"\n                    fi\n                    echo \"      - ${description}\" >> \"${keyfile}\"\n                elif [[ $repo == \"fledge-rule-DataAvailability\" ]]; then\n                    echo \"    * - \\`data-availability <\"plugins/fledge-rule-DataAvailability/index.html\">\\`__\" >> \"$output\"\n                    echo \"      - Triggers every time when it receives data that matches an asset code or audit code those given in the configuration\" >> \"$output\"\n                elif [[ $repo == \"fledge-rule-Threshold\" ]]; then\n                    echo \"    * - \\`threshold <\"plugins/fledge-rule-Threshold/index.html\">\\`__\" >> \"$output\"\n                    echo \"      - Detect the value of a data point within an asset going above or below a set threshold\" >> \"$output\"\n                fi\n                else\n                    rm -rf ${repo}\n                    git clone https://${USERNAME}:${GITHUB_ACCESS_TOKEN}@github.com/fledge-iot/${repo}.git --branch ${DOCBRANCH} >/dev/null 2>&1\n                    is_branch_exists=$?\n                    if [[ ${is_branch_exists} -eq 0 ]]; then\n                        description=$(echo \"$fledgeRepos\" | python3 -c 'import json,sys;repos=json.load(sys.stdin);fRepo = [r for r in repos if r[\"name\"] == \"'\"${repo}\"'\" ];print(fRepo[0][\"description\"])')\n                        if [[ \"${description}\" = \"None\" ]]; then description=\"A ${name} ${type} plugin\"; fi\n                        # cloned directory replaced with installed directory name which is defined in Package file for each repo\n                        
installed_plugin_dir_name=$(cat ${repo}/Package | grep plugin_install_dirname= | sed -e \"s/plugin_install_dirname=//g\")\n                        if [[ $installed_plugin_dir_name == \"\\${plugin_name}\" ]]; then\n                            installed_plugin_dir_name=$(cat ${repo}/Package | grep plugin_name= | sed -e \"s/plugin_name=//g\")\n                        fi\n                        old_plugin_name=$(echo ${repo} | cut -d '-' -f3-)\n                        new_plugin_name=$(echo ${repo/$old_plugin_name/$installed_plugin_dir_name})\n                        # Only link when doc exists in plugins directory\n                        if [[ -d ${repo}/docs && -f ${repo}/docs/index.rst ]]; then\n                            echo \"    * - \\`$name <\"plugins/$new_plugin_name/index.html\">\\`__\" >> \"$output\"\n                        else\n                            echo \"    * - ${name}\" >> \"$output\"\n                        fi\n                        echo \"      - ${description}\" >> \"$output\"\n                        # Now do the keywords\n                        if [[ -d ${repo}/docs && -f ${repo}/docs/keywords ]]; then\n                            mkdir -p \"/tmp/keywords/${type}\"\n                            keywords=`sed -e 's/ /%20/g' -e 's/,/ /g' < ${repo}/docs/keywords`\n                            for word in $keywords\n                            do\n                                keyword=$(echo $word | sed -e 's/%20/ /g')\n                                keyfile=\"/tmp/keywords/${type}/${keyword}\"\n                                if [[ ! 
-f \"${keyfile}\" ]]; then\n                                    keyword_header \"$keyfile\" \"$title\"\n                                fi\n                                if [[ -f ${repo}/docs/index.rst ]]; then\n                                    echo \"    * - \\`$name <\"plugins/$new_plugin_name/index.html\">\\`__\" >> \"$keyfile\"\n                                else\n                                    echo \"    * - ${name}\" >> \"${keyfile}\"\n                                fi\n                                echo \"      - ${description}\" >> \"${keyfile}\"\n                            done\n                        fi\n                    fi\n                    rm -rf ${repo}\n            fi\n        fi\n    done\n}\n\n# For a given plugin type collect the keyword files from the tmp\n# directory and create the section in the Plugins documentation page\nfunction add_keywords {\n    type=\"$1\"\n    pluginType=\"$2\"\n    find /tmp/keywords/${type} -type f -print0 | while IFS= read -r -d '' file; do\n        keyword=$(basename \"$file\")\n        if [[ -f \"keywords/${keyword}\" ]]; then\n            # Use the keyword override header file\n            cat \"keywords/${keyword}\" >> ${output}\n        else\n            # No override header for this keyword, create a default\n            heading=\"${keyword} Plugins\"\n            echo $heading >> $output\n            echo $heading | sed -e 's/[A-Za-z0-9 ]/^/g' >> $output\n        fi\n        echo >> $output\n        # Add in the table\n        cat \"${file}\" >> $output\n        echo >> $output\n    done\n}\n\ncat >> $output << EOF1\nFledge Plugins\n==============\n\nThe following set of plugins are available for Fledge. 
These plugins\nextend the functionality by adding new sources of data, new destinations,\nprocessing filters that can enhance or modify the data, rules for\nnotification delivery and notification delivery mechanisms.\n\nEOF1\ncat >>$output << EOF2\nSouth Plugins\n-------------\n\nSouth plugins add new ways to get data into Fledge. A number of south\nplugins are available ready built, or users may add new south plugins of\ntheir own by writing them in Python or C/C++.\n\nEOF2\ntable \"$REPOSITORIES\" \"Fledge South Plugins\" south\ncat >>$output << EOF3\n\nNorth Plugins\n-------------\n\nNorth plugins add new destinations to which data may be sent by Fledge. A\nnumber of north plugins are available ready built, or users may add new\nnorth plugins of their own by writing them in Python or C/C++.\n\nEOF3\ntable \"$REPOSITORIES\" \"Fledge North Plugins\" north\ncat >>$output << EOF4\n\nFilter Plugins\n--------------\n\nFilter plugins add new ways in which data may be modified, enhanced\nor cleaned as part of the ingress via a south service or egress to a\ndestination system. A number of filter plugins are available ready built,\nor users may add new filter plugins of their own by writing them in Python\nor C/C++.\n\nIt is also possible, using particular filters, to supply expressions\nor script snippets that can operate on the data as well. This provides a\nsimple way to process the data in Fledge as it is read from devices or\nwritten to destination systems.\n\nEOF4\ntable \"$REPOSITORIES\" \"Fledge Filter Plugins\" filter\ncat >>$output << EOF5\n\nNotification Rule Plugins\n-------------------------\n\nNotification rule plugins provide the logic that is used by the\nnotification service to determine if a condition has been met that should\ntrigger or clear that condition and hence send a notification. 
A number of\nnotification plugins are available as standard, however as with any plugin the\nuser is able to write new plugins in Python or C/C++ to extend the set of\nnotification rules.\n\nEOF5\ntable \"$REPOSITORIES\" \"Fledge Notification Rule Plugins\" rule\ncat >>$output << EOF6\n\nNotification Delivery Plugins\n-----------------------------\n\nNotification delivery plugins provide the mechanisms to deliver the\nnotification messages to the systems that will receive them.  A number\nof notification delivery plugins are available as standard, however as\nwith any plugin the user is able to write new plugins in Python or C/C++\nto extend the set of notification deliveries.\n\nEOF6\ntable \"$REPOSITORIES\" \"Fledge Notification Delivery Plugins\" notify\n\ncat >>$output << EOF7\n\nPlugins Organized by Category\n-----------------------------\n\nSouth\n~~~~~\n\nThe following tables categorize the south plugins.\n\nEOF7\nadd_keywords south \"South\"\n\ncat >>$output << EOF8\n\nNorth\n~~~~~\n\nThe following tables categorize the north plugins.\n\nEOF8\nadd_keywords north \"North\"\n\ncat >>$output << EOF9\n\nFilters\n~~~~~~~\n\nThe following tables categorize the filter plugins.\n\nEOF9\nadd_keywords filter \"Filter\"\n\n# TODO: Add notification based plugins grouping here\n\n# Clean out any old keywords left behind\nif [[ -d /tmp/keywords ]]; then\n    rm -rf /tmp/keywords\nfi\n"
  },
  {
    "path": "docs/scripts/plugin_and_service_documentation",
    "content": "#!/bin/bash\n\nif [[ \"${USERNAME}\" == \"\" ]] || [[ \"${GITHUB_ACCESS_TOKEN}\" == \"\" ]]; then\n    echo \"You must have set a GitHub username & access token environment variable. Like export USERNAME=YOUR_USERNAME; export GITHUB_ACCESS_TOKEN=YOUR_ACCESS_TOKEN\";\n    exit 1\nfi\n# Always create a fresh set of Plugin & Service documentation\nif [ -d plugins ]; then rm -rf plugins; fi\nmkdir plugins\nif [ -d services ]; then rm -rf services; fi\nmkdir services\n\ncat > plugins/south.rst << EOFSOUTH\n********************\nFledge South Plugins\n********************\n\n.. toctree::\n\nEOFSOUTH\ncat > plugins/north.rst << EOFNORTH\n********************\nFledge North Plugins\n********************\n\n.. toctree::\n\nEOFNORTH\ncat > plugins/filter.rst << EOFFILTER\n*********************\nFledge Filter Plugins\n*********************\n\n.. toctree::\n\nEOFFILTER\ncat > plugins/rule.rst << EOFRULE\n********************************\nFledge Notification Rule Plugins\n********************************\n\n.. toctree::\n\nEOFRULE\ncat > plugins/notify.rst << EOFNOTIFY\n************************************\nFledge Notification Delivery Plugins\n************************************\n\n.. 
toctree::\n\nEOFNOTIFY\n\nDOCBRANCH=\"$1\"\n# TODO: source code documentation always point to nightly irrespective of the DOCBRANCH\n#  In future may point to specific branch/release version\nARCHIVE_BUILD=\"nightly\"\necho \"Default ${DOCBRANCH} branch used for plugin documentation and ${ARCHIVE_BUILD} build used for source code documentation\"\n# Tweaks required in plugin developer guide to enable source code documentation\nsed -i 's/ARCHIVE_BUILD_NAME/'\"${ARCHIVE_BUILD}\"'/g' plugin_developers_guide/00_source_code_doc.rst\nheader=\"Authorization: token ${GITHUB_ACCESS_TOKEN}\"\n# Get total number of repository pages in fledge org.\nfledgeRepoPagesCount=$(curl -sI https://api.github.com/orgs/fledge-iot/repos | grep -oP '\\d+(?=>; rel=\"last\")')\nfledgeRepos=$(curl -H \"$header\" -sX GET https://api.github.com/orgs/fledge-iot/repos\\?page=[1-$fledgeRepoPagesCount])\nfledgeRepos=\"$(echo $fledgeRepos | sed 's/\\] \\[/,/g')\"\nfetchTopicReposPyScript='\nimport json,sys;\\\nrepos=json.load(sys.stdin);\\\nfRepos = [r[\"name\"] for r in repos[\"items\"]];\\\nprint(\"\\n\".join(fRepos));\n'\nfledge_wip_repos=$(curl -sX GET -H \"$header\" -H \"Accept: application/vnd.github.mercy-preview+json\" https://api.github.com/search/repositories?q=topic:wip+org:fledge-iot)\nfledge_poc_repos=$(curl -sX GET -H \"$header\" -H \"Accept: application/vnd.github.mercy-preview+json\" https://api.github.com/search/repositories?q=topic:poc+org:fledge-iot)\nfledge_internal_repos=$(curl -sX GET -H \"$header\" -H \"Accept: application/vnd.github.mercy-preview+json\" https://api.github.com/search/repositories?q=topic:internal+org:fledge-iot)\nfledge_obsolete_repos=$(curl -sX GET -H \"$header\" -H \"Accept: application/vnd.github.mercy-preview+json\" https://api.github.com/search/repositories?q=topic:obsolete+org:fledge-iot)\nfledge_wip_repos_name=$(echo ${fledge_wip_repos} | python3 -c \"$fetchTopicReposPyScript\" | sort -f)\nfledge_poc_repos_name=$(echo ${fledge_poc_repos} | python3 -c 
\"$fetchTopicReposPyScript\" | sort -f)\nfledge_internal_repos_name=$(echo ${fledge_internal_repos} | python3 -c \"$fetchTopicReposPyScript\" | sort -f)\nfledge_obsolete_repos_name=$(echo ${fledge_obsolete_repos} | python3 -c \"$fetchTopicReposPyScript\" | sort -f)\nexport EXCLUDE_FLEDGE_TOPIC_REPOSITORIES=$(echo ${fledge_wip_repos_name} ${fledge_poc_repos_name} ${fledge_internal_repos_name} ${fledge_obsolete_repos_name} | sort -f)\necho \"EXCLUDED FLEDGE TOPIC REPOS LIST: $EXCLUDE_FLEDGE_TOPIC_REPOSITORIES\"\nfetchFledgeReposPyScript='\nimport os,json,sys;\\\nrepos=json.load(sys.stdin);\\\nexclude_topic_packages=os.environ[\"EXCLUDE_FLEDGE_TOPIC_REPOSITORIES\"];\\\nall_repos = [r[\"name\"] for r in repos if r[\"archived\"] is False];\\\nfRepos = list(set(all_repos) - set(exclude_topic_packages.split()));\\\nprint(\"\\n\".join(fRepos));\n'\nREPOS=$(echo ${fledgeRepos} | python3 -c \"$fetchFledgeReposPyScript\" | sort -f)\nINBUILT_PLUGINS=\"fledge-north-OMF fledge-rule-Threshold fledge-rule-DataAvailability\"\nREPOSITORIES=$(echo ${REPOS} ${INBUILT_PLUGINS} | xargs -n1 | sort -f | xargs)\necho \"REPOSITORIES LIST: \"${REPOSITORIES}\n\nfunction plugin_and_service_doc {\n    repo_name=$1\n    dest=$2\n    dir_type=$3\n    type=$(echo ${repo_name} | sed -e 's/fledge-//' -e 's/-.*//')\n    name=$(echo ${repo_name} | sed -e 's/fledge-//' -e \"s/${type}-//\")\n    mkdir -p /tmp/doc.$$\n    cd /tmp/doc.$$\n    git clone -b ${DOCBRANCH} --single-branch https://${USERNAME}:${GITHUB_ACCESS_TOKEN}@github.com/fledge-iot/${repo_name}.git\n\n    if [[ ${type} != \"service\" ]]; then\n        # cloned directory replaced with installed directory name which is defined in Package file for each repo\n        installed_plugin_dir_name=$(cat ${repo_name}/Package | grep plugin_install_dirname= | sed -e \"s/plugin_install_dirname=//g\")\n        if [[ ${installed_plugin_dir_name} == \"\\${plugin_name}\" ]]; then\n            installed_plugin_dir_name=$(cat ${repo_name}/Package | grep 
plugin_name= | sed -e \"s/plugin_name=//g\")\n        fi\n        old_plugin_name=$(echo ${repo_name} | cut -d '-' -f3-)\n        new_plugin_name=$(echo ${repo_name/$old_plugin_name/$installed_plugin_dir_name})\n        if [[ ${repo_name} != ${new_plugin_name} ]]; then\n            mv ${repo_name} ${new_plugin_name}\n        fi\n        repo_name=${new_plugin_name}\n    else\n        repo_name=fledge-${type}-${name}\n    fi\n    cd -\n    if [ -d /tmp/doc.$$/${repo_name}/docs ]; then\n        rm -rf ${dir_type}/${repo_name}\n        mkdir -p ${dir_type}/${repo_name}\n        cp -r /tmp/doc.$$/${repo_name}/docs/. ${dir_type}/${repo_name}\n        if [ -f ${dir_type}/${repo_name}/index.rst ]; then\n            echo \"    ${repo_name}/index\" >> $dest\n        else\n            echo \"*** WARNING: index.rst file is missing for ${repo_name}.\"\n        fi\n    else\n        echo \"*** WARNING: ${repo_name} docs directory is missing.\"\n    fi\n    rm -rf /tmp/doc.$$\n}\n\nfor repo in ${REPOSITORIES}\ndo\n    type=$(echo $repo | sed -e 's/fledge-//' -e 's/-.*//')\n    dest=plugins/${type}.rst\n    if grep -q \"$repo\" <<< \"$INBUILT_PLUGINS\"; then\n        if [[ $repo == \"fledge-north-OMF\" ]]; then\n            name=\"fledge-north-OMF\"\n            echo \"    ${name}/index\" >> $dest\n            mkdir plugins/${name}\n            ln -s ../../images plugins/${name}/images\n            echo '.. include:: ../../fledge-north-OMF.rst' > plugins/${name}/index.rst\n            # Append OMF.rst to the end of the file rather than including it so that we may edit the links to prevent duplicates\n            cat OMF.rst >> plugins/${name}/index.rst\n        elif [[ $repo == \"fledge-rule-DataAvailability\" ]]; then\n            name=\"fledge-rule-DataAvailability\"\n            echo \"    ${name}/index\" >> $dest\n            mkdir plugins/${name}\n            ln -s $(pwd)/${name}/images plugins/${name}/images\n            echo '.. 
include:: ../../fledge-rule-DataAvailability/index.rst' > plugins/${name}/index.rst\n        elif [[ $repo == \"fledge-rule-Threshold\" ]]; then\n            name=\"fledge-rule-Threshold\"\n            echo \"    ${name}/index\" >> $dest\n            mkdir plugins/${name}\n            ln -s $(pwd)/${name}/images plugins/${name}/images\n            echo '.. include:: ../../fledge-rule-Threshold/index.rst' > plugins/${name}/index.rst\n        fi\n    elif [ \"$type\" = \"south\" -o \"$type\" = \"north\" -o \"$type\" = \"filter\" -o \"$type\" = \"rule\" -o \"$type\" = \"notify\" ]; then\n        plugin_and_service_doc $repo $dest \"plugins\"\n    fi\ndone\n\ncat > services/index.rst << EOFSERVICES\n*******************\nAdditional Services\n*******************\n\nThe following additional services are currently available to extend the functionality of Fledge. These are optional services not installed as part of the base Fledge installation.\n\n.. toctree::\n\nEOFSERVICES\n\nSERVICE_REPOS=$(echo \"${REPOSITORIES}\" | grep -o \"[a-zA-Z0-9\\-]*-service-[a-zA-Z0-9_\\-]*\" | sed -e 's/\\([a-zA-Z0-9\\-]*\\)\\-service-\\([a-zA-Z0-9]*\\)/\\2-\\1-service-\\2/g' | sort -f | sed -e 's/\\([a-zA-Z0-9]*\\)\\-\\([a-zA-Z0-9\\-]*\\)/\\2/g')\necho \"SERVICE REPOS LIST: \"${SERVICE_REPOS}\nfor repo in ${SERVICE_REPOS}\ndo\n    type=$(echo $repo | sed -e 's/fledge-//' -e 's/-.*//')\n    dest=services/index.rst\n    plugin_and_service_doc $repo $dest \"services\"\ndone\n\n# Cross Referencing list of plugins\nplugins_path=$(pwd)/plugins\n\n# HashMap used for storing keywords and repos\ndeclare -A KEYWORDS\nfor dir in $plugins_path/*\ndo\n    dir_name=$(echo $dir | sed 's/^.*fledge-/fledge-/')\n    if [[ $dir_name == *fledge-* ]]; then\n        if [ -f $plugins_path/$dir_name/keywords ]; then\n            keywords=$(cat $plugins_path/$dir_name/keywords | sed -e \"s/,/ /g\")\n            for k in $keywords\n            do\n                KEYWORDS+=([\"$k\"]+=\"$dir_name \")\n            done\n        fi\n    fi\ndone\n\nfunction get_repos_list_by_keywords() {\n    DIR_NAME=\"$1\"\n    REPOSITORIES_LIST=\"\"\n    # Reset the accumulator on every call so results do not leak between invocations\n    local repos_result=\"\"\n    for i in \"${!KEYWORDS[@]}\"\n    do\n        repos_val=$(echo ${KEYWORDS[$i]} | grep -w \"$DIR_NAME\")\n        if [[ $repos_val != \"\" ]]; then\n            repos_result+=\"$repos_val \"\n        fi\n    done\n    REPOSITORIES_LIST=$(echo \"$repos_result\" | xargs -n1 | grep -v \"^$DIR_NAME$\" | sort -u | xargs)\n    echo \"$REPOSITORIES_LIST\"\n}\n\n# See Also section added as per installed plugins directory path\nfor dir in $plugins_path/*\ndo\n    dir_name=$(echo $dir | sed 's/^.*fledge-/fledge-/')\n    if [[ $dir_name == *fledge-* ]]; then\n        if [ -f $plugins_path/$dir_name/keywords ]; then\n            result=$(get_repos_list_by_keywords \"$dir_name\")\n            echo \"For $dir_name: $result\"\n            if [[ -n \"$result\" ]]; then\n                cat >> $plugins_path/$dir_name/index.rst << EOFPLUGINS\n\n\nSee Also\n--------\nEOFPLUGINS\n                for r in $result\n                do\n                     # Add link and description to the plugin\n                     description=$(cat $(pwd)/fledge_plugins.rst | grep -A1 -w \"plugins/$r/index.html\" | grep -v \"$r\" | head -n 1)\n                     echo \"    \\`$r <../$r/index.html>\\`_  $description\" >> $plugins_path/$dir_name/index.rst\n                     echo -e \"\\n\" >> $plugins_path/$dir_name/index.rst\n                done\n            fi\n        fi\n    fi\ndone\n"
  },
  {
    "path": "docs/securing_fledge.rst",
    "content": ".. Images\n.. |admin_api| image:: images/admin_api.jpg\n.. |enable_https| image:: images/enable_https.jpg\n.. |connection_https| image:: images/connection_https.jpg\n.. |auth_options| image:: images/authentication.jpg\n.. |login| image:: images/login.jpg\n.. |login_dashboard| image:: images/login_dashboard.jpg\n.. |user_pulldown| image:: images/user_pulldown.jpg\n.. |profile| image:: images/profile.jpg\n.. |password| image:: images/password.jpg\n.. |password_rotation| image:: images/password_rotation.jpg\n.. |password_policy| image:: images/password_policy.jpg\n.. |user_management| image:: images/user_management.jpg\n.. |add_user| image:: images/add_user.jpg\n.. |update_user| image:: images/update_user.jpg\n.. |delete_user| image:: images/delete_user.jpg\n.. |change_role| image:: images/change_role.jpg\n.. |reset_password| image:: images/reset_password.jpg\n.. |certificate_store| image:: images/certificate_store.jpg\n.. |update_certificate| image:: images/update_certificate.jpg\n.. |firewall| image:: images/firewall.jpg\n.. |features| image:: images/features.jpg\n\n\n.. Links\n.. |REST API| raw:: html\n\n   <a href=\"rest_api_guide/02_RESTauthentication.html\">REST API</a>\n\n.. |Require User Login| raw:: html\n\n    <a href=\"#requiring-user-login\">Require User Login</a>\n\n.. |User Management| raw:: html\n\n    <a href=\"#user-management\">User Management</a>\n\n.. |control| raw:: html\n\n    <a href=\"control.html\">control</a>\n\n*****************\nSecuring Fledge\n*****************\n\nThe default installation of Fledge in version 3.0 and after have authentication features enabled by default, using the username and password option. Two users are created, one with administrator privileges and another as an edit user. There are other security features that are not enabled by default that should be considered in order to enhance security and for compliance with any local security restrictions.\n\n.. 
note::\n\n   It is strongly recommended that once installation is complete the default passwords are changed at the earliest opportunity.\n\nIn installations of Fledge prior to version 3.0 authentication is disabled by default. Whilst this is acceptable for demonstration purposes or in completely closed networks it is unwise to use Fledge unsecured in real world deployments. Fledge offers several features that can enhance security. Versions of Fledge from 3.0 and onward can still be configured to be open by disabling the security features if desired, although this is not recommended.\n\n  - The REST API by default supports unencrypted HTTP requests, it can be switched to require HTTPS to be used.\n\n  - The REST API and the GUI can be protected by requiring authentication to prevent users being able to change the configuration of the Fledge system. This has now become the default in version 3.0 and later.\n   \n  - Authentication can be via username and password or by means of an authentication certificate. The default authentication in Fledge 3.0 and onward is to use username and password for authentication.\n\n    .. 
note::\n    \n       When using username and password authentication it is recommended to disable support for unencrypted HTTP requests.\n\n  - Fledge supports a number of different user types or roles, care should be taken to only give users access they require and in particular the administration user rights should be reserved.\n\n  - If authentication is via username and password the administrator is able to select one of a set of password policy that define restrictions as to what characters must be included in any password as well as requiring a minimum length of password.\n\n  - The system can be configured to require users to change their password regularly and a list of previous passwords is maintained to prevent users simply reusing old passwords.\n\n  - If a user attempts to authenticate and fails then that user will be blocked for a short time. If multiple failures occur the blocked period will be increased until ultimately the user is blocked for a 24 hour period. This is to prevent automated systems attempting to guess passwords.\n\n  - Fledge maintains full audit logs of all updates to the Fledge configuration, and other events. This allows for complete auditing of who made what changes to the Fledge configuration and when the changes were made.\n\n  - Fledge can have optional allow and deny lists configured that list those hosts that are either permitted to connect to the UI and REST API or are denied access.\n\n.. 
note::\n\n   It is recommended to run Fledge behind firewalls and restrict access to the Fledge API port to trusted networks when using it in production environments.\n\nEnabling HTTPS Encryption\n=========================\n\nFledge can support both HTTP and HTTPS as the transport for the REST API used for management, to switch between there two transport protocols select the *Configuration* option from the left-hand menu and the select *Admin API* from the configuration tree that appears,\n\n+-------------+\n| |admin_api| |\n+-------------+\n\nThe first option you will see is a tick box labeled *Enable HTTP*, to select HTTPS as the protocol to use this tick box should be deselected.\n\n+----------------+\n| |enable_https| |\n+----------------+\n\nWhen this is unticked two options become active on the page, *HTTPS Port* and *Certificate Name*. The HTTPS Port is the port that Fledge will listen on for HTTPS requests, the default for this is port 1995.\n\nThe *Certificate Name* is the name of the certificate that will be used for encryption. The default is to use a self signed certificate called *fledge* that is created as part of the installation process. This certificate is unique per fledge installation but is not signed by a certificate authority. If you require the extra security of using a signed certificate you may use the Fledge :ref:`certificate_store` functionality to upload a certificate that has been created and signed by a certificate authority.\n\nAfter enabling HTTPS and selecting save you must restart Fledge in order for the change to take effect. You must also update the connection setting in the GUI to use the HTTPS transport and the correct port.\n\n.. 
note::\n  If using the default self-signed certificate you might need to authorise the browser to connect to IP:PORT.\n  Just open a new browser tab and type the URL https://YOUR_FLEDGE_IP:1995\n  \n  Then follow the instructions in the browser in order to allow the connection and close the tab.\n  In the Fledge GUI you should see the green icon (Fledge is running).\n\n+--------------------+\n| |connection_https| |\n+--------------------+\n\nAllow & Deny Lists\n==================\n\nFledge supports a pair of optional lists of IP addresses that can be set to allow or deny access to the Fledge API. These lists can be accessed via the *Configuration* menu option in the user interface in the *General*, *Admin API*, *Firewall* configuration category.\n\n+------------+\n| |firewall| |\n+------------+\n\n  - Clicking on the arrow icon beside each list will expand the list and show the current contents of the list.\n\n  - Click on the *Add new item* link to create a new entry in the list.\n\n  - To remove an entry from the list click on the *x* icon to the right of the list item.\n\nIf the allow list is non-empty, then any access, including ping, to the Fledge API port will be checked to see if the source IP address of the request matches an entry in the allow list. If the address of the requester is not in this allow list then the API will not send any response to the caller and the connection will be closed. The only address that is exempt from this checking is the localhost via the loopback interface, 127.0.0.1. This is required for local management of the Fledge instance and must always be accessible.\n\nIf the blocked list is non-empty then any access, including ping, to the API will check the source address of the caller to see if it is included in the block list. If it is then the connection will be closed without sending any response to the caller. 
Again the address 127.0.0.1 is immune from this test.\n\nRequiring User Login\n====================\n\nIn order to set the REST API and GUI to force users to login before accessing Fledge select the *Configuration* option from the left-hand menu and then select *Admin API* from the configuration tree that appears.\n\n+-------------+\n| |admin_api| |\n+-------------+\n\nTwo particular items are of interest in this configuration category that is then displayed; *Authentication* and *Authentication method*\n\n+----------------+\n| |auth_options| |\n+----------------+\n\nSelect the *Authentication* field to be mandatory and the *Authentication method* to be password. Click on *Save* at the bottom of the dialog.\n\nIn order for the changes to take effect Fledge must be restarted, this can be done in the GUI by selecting the restart item in the top status bar of Fledge. Confirm the restart of Fledge and wait for it to be restarted.\n\nOnce restarted refresh your browser page. You should be presented with a login request.\n\n+---------+\n| |login| |\n+---------+\n\nThe default username is \"admin\" with a password of \"fledge\". Use these to login to Fledge, you should be presented with a slightly changed dashboard view.\n\n+-------------------+\n| |login_dashboard| |\n+-------------------+\n\nThe status bar now contains the name of the user that is currently logged in and a new option has appeared in the left-hand menu, *User Management*.\n\n.. note::\n   Any session that is idle for 15 minutes or longer will be disconnected. The user will then be required to authenticate again before being able to issue any further commands via the API or user interface.\n\nFailed Login Attempts\n---------------------\n\nIf a user makes an incorrect login attempt, such as entering the wrong password, that user will be blocked from logging in for a short period. If more than a certain number of consecutive login attempts fail then the user account will be blocked for 24 hours. 
The account may be unblocked by an administrative user before the 24 hours has elapsed.\n\nChanging Your Password\n----------------------\n\nThe top status bar of the Fledge GUI now contains the user name on the right-hand side and a pull down arrow, selecting this arrow gives a number of options including one labeled *Profile*.\n\n+-----------------+\n| |user_pulldown| |\n+-----------------+\n\n.. note::\n   This pulldown menu is also where the *Shutdown* and *Restart* options have moved.\n\nSelecting the *Profile* option will display the profile for the user.\n\n+-----------+\n| |profile| |\n+-----------+\n\nTowards the bottom of this profile display the *change password* option appears. Click on this text and a new password dialog will appear.\n\n+------------+\n| |password| |\n+------------+\n\nThis popup can be used to change your password. On successfully changing your password you will be logged out of the user interface and will be required to log back in using this new password.\n\nPassword Policy\n---------------\n\nFledge provides different policies to control the managed users password. The following options are currently available:\n\n+-------------------+\n| |password_policy| |\n+-------------------+\n\n- *Any characters* - there are no restrictions placed on the characters within a password.\n\n- *Mixed case Alphabetic* -  passwords must contain upper and lower case letters. The user is free to add numeric values and special characters if they wish, but there is no requirement to add these.\n\n- *Mixed case and numeric* - password must contain upper, lower case letters and numeric values.\n\n- *Mixed case, numeric and special characters* - password must contain at least one upper and lower case letter, numeric and special characters.\n\n.. note::\n\n    In addition to the above rules on password content, the minimum password length is by default 6 and can be controlled with the 'Minimum length' configuration item. 
The maximum password length that can be configured is 80 characters.\n\nPassword Rotation Mechanism\n---------------------------\n\nFledge provides a mechanism to limit the age of passwords in use within the system. A value for the maximum allowed age of a password is defined in the configuration page of the user interface.\n\n+---------------------+\n| |password_rotation| |\n+---------------------+\n\nWhenever a user logs into Fledge the age of their password is checked against the maximum allowed password age. If their password has reached that age then the user is not logged in, but is instead forced to enter a new password. They must then login with that new password. In addition the system maintains a history of the last three passwords the user has used and prevents them being reused.\n\n\nUser Management\n===============\n\nThe user management option becomes active once the Fledge has been configured to require authentication of users. This is enabled via the *Admin API* page of the *Configuration* menu item. A new menu item *User Management* will appear in the left hand menu.\n\n.. note::\n\n   After setting the Authentication option to mandatory in the configuration page the Fledge instance should be restarted.\n\n\n+-------------------+\n| |user_management| |\n+-------------------+\n\nThe user management pages allows\n\n  - Adding new users.\n  - Deleting users.\n  - Resetting user passwords.\n  - Changing the role of a user.\n  - Changing the details of a user\n\nFledge currently supports a number of roles for users:\n\n  - **Administrator**: a user with admin role is able to fully configure Fledge, view the data read by the Fledge instance and also manage Fledge users, backups and support bundles.\n\n  - **Control**: a user with this role is able to configure Fledge, execute control scripts and pipelines and also view the data read by Fledge. 
The user can not manage other users or add new users.\n\n  - **Editor**: a user with this role is able to configure Fledge and view the data read by Fledge. The user can not manage other users or add new users.\n\n  - **Viewer**: a user that can only view the configuration of the Fledge instance and the data that has been read by Fledge. The user has no ability to modify the Fledge instance in any way.\n\n  - **Data Viewer**: a user that can only view the data in Fledge and not the configuration of Fledge itself. The user has no ability to modify the Fledge instance in any way.\n\nRestrictions apply to both the API calls that can be made when authenticated as particular users and the access the user will have to the graphical user interface. Users will observe both that menu items will be removed completely or options on certain pages will be unavailable if they are not privileged to access those features.\n\nAdding Users\n------------\n\nTo add a new user from the *User Management* page select the *Add User* icon in the top right of the *User Management* pane. a new dialog will appear that will allow you to enter details of that user.\n\n+------------+\n| |add_user| |\n+------------+\n\nYou can select a role for the new user, a user name and an initial password for the user. Only users with the role *admin* can add new users.\n\nUpdate User Details\n-------------------\n\nThe edit user option allows the name, authentication method and description of a user to be updated. This option is only available to users with the *admin* role.\n\n+---------------+\n| |update_user| |\n+---------------+\n\nChanging User Roles\n-------------------\n\nThe role that a particular user has when the login can be changed from the *User Management* page. Simply select on the *change role* link next to the user you wish to change the role of. \n\n+---------------+\n| |change_role| |\n+---------------+\n\nSelect the new role for the user from the drop down list and click on update. 
The new role will take effect the next time the user logs in.\n\nReset User Password\n-------------------\n\nUsers with the *admin* role may reset the password of other users. In the *User Management* page select the *reset password* link to the right of the user name of the user you wish to reset the password of. A new dialog will appear prompting for a new password to be created for the user.\n\n+------------------+\n| |reset_password| |\n+------------------+\n\nEnter the new password and confirm that password by entering it a second time and click on *Update*.\n\nDelete A User\n-------------\n\nUsers may be deleted from the *User Management* page. Select the *delete* link to the right of the user you wish to delete. A confirmation dialog will appear. Select *Delete* and the user will be deleted.\n\n+---------------+\n| |delete_user| |\n+---------------+\n\nYou can not delete the last user with role *admin* as this will prevent you from being able to manage Fledge.\n\n.. _certificate_store:\n\nCertificate Store\n=================\n\nThe Fledge *Certificate Store* allows certificates to be stored that may be referenced by various components within the system, in particular these certificates are used for the encryption of the REST API traffic and authentication. They may also be used by particular plugins that require a certificate of one type or another. A number of different certificate types re supported by the certificate store;\n\n  - PEM files as created by most certificate authorities\n  - CRT files as used by GlobalSign, VeriSign and Thawte\n  - Binary CER X.509 certificates\n  - JSON certificates as used by Google Cloud Platform\n\nThe *Certificate Store* functionality is available in the left-hand menu by selecting *Certificate Store*. 
When selected it will show the current content of the store.\n\n+---------------------+\n| |certificate_store| |\n+---------------------+\n\nCertificates may be removed by selecting the delete option next to the certificate name, note that the keys and certificates can be deleted independently.\nThe self signed certificate that is created at installation time can not be deleted.\n\nTo add a new certificate select the *Import* icon in the top right of the certificate store display.\n\n+----------------------+\n| |update_certificate| |\n+----------------------+\n\nA dialog will appear that allows a key file and/or a certificate file to be selected and uploaded to the *Certificate Store*. An option allows to allow overwrite of an existing certificate. By default certificates may not be overwritten.\n\n\nCustom Certificate Authority (CA)\n---------------------------------\n\nFledge is not restricted to utilizing its own CA certificates; you have the option to use your own CA certificates.\n\nTo configure Fledge with your own Root CA certificate:\n\n- Make sure that the Root CA certificate is correctly utilized to sign the user authentication certificates.\n- If your certificate chain contains intermediate certificates, generate a **certificate bundle** by combining the intermediate certificates in the proper sequence. This bundle guarantees that the complete trust chain is acknowledged during the validation process.\n\nPosition the Root CA certificate (and its bundle, if necessary) in the `$FLEDGE_DATA` directory or the `$FLEDGE_ROOT/data` directory.\n\nAfter obtaining the certificates, modify the **Root CA Certificate** setting (currently labeled as **Auth Certificate**) in the **Admin and User REST API configuration** section to reference your custom CA.\n\n+-------------+\n| |admin_api| |\n+-------------+\n\n\n.. 
note::\n\n    If there are any intermediate certificates forming a chain, consolidate them into a single file named **intermediate.cert or intermediate.pem** and store it in the same directory specified earlier.\n\nGenerate a new auth certificates for user login\n-----------------------------------------------\n\nDefault ca certificate is available inside $FLEDGE_DATA/etc/certs and named as ca.cert. Also default admin and non-admin certs are available in the same location which will be used for Login with Certificate in Fledge i.e admin.cert, user.cert. See |Require User Login|\n\nBelow are the steps to create custom certificate along with existing fledge based ca signed for auth certificates.\n\n**Using cURL**\n\n- Create User\n\n.. note::\n\n    Usernames must be distinct, are not case-sensitive, and require administrative privileges to execute this action.\n\nFor example, Add a username with the name **test**\n\n.. code-block:: console\n\n    $ AUTH_TOKEN=$(curl -d '{\"username\": \"<ADMIN_USERNAME>\", \"password\": \"<ADMIN_PASSWORD>\"}' -sX POST <PROTOCOL>://<FLEDGE_IP>:<FLEDGE_REST_API_PORT>/fledge/login | jq '.token' | tr -d '\"\"')\n    $ USER_ID=$(curl -H \"authorization: $AUTH_TOKEN\" -skX POST <PROTOCOL>://<FLEDGE_IP>:<FLEDGE_REST_API_PORT>/fledge/admin/user -d '{\"username\":\"test\",\"real_name\":\"Test\",\"access_method\":\"cert\",\"description\":\"Non-admin based role\",\"role_id\":2}' | jq '.user.userId')\n\n- Create Authentication Certificate\n\n.. code-block:: console\n\n    $ curl -o filename.cert -H \"authorization: $AUTH_TOKEN\" -sX POST <PROTOCOL>://<FLEDGE_IP>:<FLEDGE_REST_API_PORT>/fledge/admin/$USER_ID/authcertificate\n\nThe certificate is available for download and can be used to log in.\n\n.. note::\n\n   Fledge supports a number of different user roles, the appropriate role_id should be passed for the user role required. The full list of supported role_id's can be obtained by called the /fledge/user/role GET API entry point. 
This entry point is only available to users with the *admin* role.\n\nYou may also refer the documentation of |REST API| cURL commands. If you are not comfortable with cURL commands then use the GUI steps |User Management| and make sure Login with admin user.\n\n.. note::\n\n   Steps a (cert creation) and b (create user) can be executed in any order.\n\nc) Now you can login with the newly created user **test**, with the following cURL\n\n.. code-block:: console\n\n    $ curl -T $FLEDGE_DATA/etc/certs/test.cert -skX POST <PROTOCOL>://<FLEDGE_IP>:<FLEDGE_REST_API_PORT>/fledge/login\n\nOr use GUI |Require User Login|\n\nManaging Features\n=================\n\nFledge provides mechanisms whereby the administration user can disable access to features which may not be desirable in a production system or may not be required for a particular installation.\n\nThe interface to enable or disable these features can be found in the *Configuration* menu item under the configuration category *Advanced::Features*.\n\n+------------+\n| |features| |\n+------------+\n\nCurrently there are two features that can be disabled on an instance wide basis: *Control* and *Pipeline Debugging*.\n\nThe *Control* toggle button can be used to disable all write and operation calls from south plugins back to devices. To disable |control| features for a Fledge instance uncheck the *Control* toggle button.\n\nThe *Pipeline Debugger* toggle button will control the ability to perform pipeline debugging in any north or south service within Fledge. If there are any pipeline debugging sessions in progress when the toggle is unset, they will be terminated. No new debugging sessions can be started if the pipeline debugger option is not enabled in the *Features* configuration category.\n\n"
  },
  {
    "path": "docs/storage.rst",
    "content": ".. Images\n.. |storage_01| image:: images/storage_01.jpg\n.. |storage_03| image:: images/storage_03.jpg\n.. |sqlite_01| image:: images/sqlite_storage_configuration.jpg\n.. |purge_01| image:: images/purge_01.jpg\n.. |purge_02| image:: images/purge_02.jpg\n.. |purge_03| image:: images/purge_03.jpg\n.. |postgres_01| image:: images/postgres_01.jpg\n\n\n\n*******************\nBuffering & Storage\n*******************\n\nOne of the micro-services that makes up the core of a Fledge\nimplementation is the storage micro-service. This is responsible for\n\n  - storing the configuration of Fledge\n\n  - buffering the data read from the south\n\n  - maintaining the Fledge audit log\n\n  - persisting the state of the system\n\nThe storage service is configurable, like other services within Fledge\nand uses plugins to extend the functionality of the storage system. These\nstorage plugins provide the underlying mechanism by which data is\nstored within Fledge. Fledge can make use of either one or two of these\nplugins at any one time. If a single plugin is used then this plugin\nprovides the storage for all data. If two plugins are used, one will\nbe for the buffering of readings and the other for the storage of the\nconfiguration.\n\nAs standard Fledge comes with 3 storage plugins\n\n  - **SQLite**: A plugin that can store both configuration data and the readings data using SQLite files as the backing store. The plugin uses multiple SQLite database to store different assets, allowing for high bandwidth data at the expense of limiting the number of assets that a single instance can ingest.\n\n  - **SQLiteLB**: A plugin that can store both configuration data and the readings data using SQLite files as the backing store. This version of the SQLite plugin uses a single readings database and is better suited for environments that do not have very high bandwidth data. 
It does not limit the number of distinct assets that can be ingested.\n\n  - **PostgreSQL**: A plugin that can store both configuration and readings data which uses the PostgreSQL SQL server as a storage medium.\n\n  - **SQLiteMemory**: A plugin that can only be used to store reading data. It uses SQLite's in memory storage engine to store the reading data. This provides a high performance reading store however capacity is limited by available memory and if Fledge is stopped or there is a power failure the buffered data will be lost.\n\n\nThe default configuration uses the SQLite disk based storage engine for\nboth configuration and reading data\n\nConfiguring The Storage Plugin\n==============================\n\nOnce installed the storage plugin can be reconfigured in much the same\nway as any Fledge configuration, either using the API or the graphical\nuser interface to set the storage engine and its options.\n\n  - Using the user interface to configuration the storage, select the *Configuration* item in the left hand menu bar.\n\n  - In the category category tree select *Advanced* and under that select *Storage*.\n\n    +--------------+\n    | |storage_01| |\n    +--------------+\n  \n - To change the storage plugin to use for both configuration and readings enter the name of the new plugin in the *Storage Plugin* entry field. If *Readings Plugin* is left empty then the storage plugin will also be used to store reading data. The default set of plugins installed with Fledge that can be used as *Storage Plugin* values are:\n\n     - *sqlite* - the SQLite file based storage engine.\n\n     - *postgres* - the PostgreSQL server.\n       \n       .. note::\n\n          The PostgreSQL server is not installed by default when Fledge is installed and must be installed before it can be used. 
See :ref:`InstallingPostgreSQL`.\n\n  - The *Readings Plugin* may be set to any of the above and may also be set to use the SQLite In Memory plugin by entering the value *sqlitememory* into the configuration field.\n\n  - The *Storage API threads* field allows for the number of threads used by the Storage service API for handling incoming requests to be tuned.\n\n  - The *Worker thread pool* is used for executing operations against the readings data with Fledge. The number of threads in this thread pool is controlled via this setting. Increasing the thread pool size will increase the number of operations the can be executed in parallel, however care should be exercised as increasing this will increase any contention issues within the storage layer and will become counter productive once the storage plugin is overloaded.\n\n  - The *Manage Storage* option is only used when the database storage uses an external database server, such as PostgreSQL. Toggling this option on causes Fledge to start as stop the database server when Fledge is started and stopped. If it s left off then Fledge will assume the database server is running when it starts.\n\n  - The *Management Port* and *Service Port* options allow fixed ports to be assigned to the storage service. These settings are for debugging purposes only and the values should be set to 0 in normal operation.\n\n\n.. note::\n\n   Additional storage engines may be installed to extend the set\n   that is delivered with the standard Fledge installation. These will be\n   documented in the packages that provide the storage plugin.\n\n   Storage plugin configurations are not dynamic and Fledge *must* be\n   restarted after changing these values. 
Changing the plugin used to store\n   readings will *not* cause the data in the previous storage system to be\n   migrated to the new storage system and this data may be lost if it has\n   not been sent onward from Fledge.\n\n   If selecting the PostgreSQL storage engine then PostgreSQL must be installed and running with a fledge user created in order for Fledge to start successfully.\n\n\nSQLite Plugin Configuration\n---------------------------\n\nThe SQLite storage engine has further options that may be used to\nconfigure its behavior. To access these configuration parameters click\non the *sqlite* option under the *Storage* category in the configuration\npage.\n\n+-------------+\n| |sqlite_01| |\n+-------------+\n\nMany of these configuration options control the performance of SQLite and\nit is important to have some background on how readings are stored within\nSQLite. The plugin is designed to allow greater ingests rates in\nsituations where multiple different assets are being ingested by a\nsingle instance.\n\nThe storage plugin stores readings for each distinct asset in\na table for that asset. These tables are stored within a database, however\nthe SQLite database engine will lock an entire database to insert into\nany table within that database. In order to improve concurrency, multiple\ndatabases are used within the storage plugin. A set of parameters are\nused to define how these tables and databases are used.\n\n.. note::\n\n   SQLite has a limitation on the number of databases that can be attached\n   to a single process. Therefore we can not create an unlimited number\n   of databases and attach them.\n\nOnce the tables within all the databases have been assigned to a\nparticular asset, any new assets ingested will be inserted into an\noverflow tables that contains multiple assets. There is one overflow\ntable per database within the process. 
The impact of this is that once\nthe total number of distinct assets exceeds the number of tables allocated,\nthe gain in performance from using multiple tables in multiple databases\nstarts to diminish.\n\n  - **Pool Size**: The number of connections to create in the database connection pool.\n\n  - **No. Readings per database**: This option controls how many assets can be stored in a single database. Each asset will be stored in a distinct table within the database. Once all tables within a database are allocated the plugin will use more databases to store further assets.\n\n  - **No. databases allocate in advance**: This option defines how many databases are created initially by the SQLite plugin.\n\n  - **Database allocation threshold**: The number of unused databases that must exist within the system. Once the number of available databases falls below this value the system will begin the process of creating extra databases.\n\n  - **Database allocation size**: The number of databases to create when the above threshold is crossed. Database creation is a slow process and hence the tuning of these parameters can impact performance when an instance receives a large number of new asset names for which it has not previously allocated readings tables.\n\n  - **Purge Exclusions**: This option allows the user to specify that the purge process should not be applied to particular assets. The user can give a comma separated list of asset names that should be excluded from the purge process. Note, it is recommended that this option is only used for extremely low bandwidth, lookup data that would otherwise be completely purged from the system when the purge process runs.\n\n  - **Vacuum Interval**: The interval in hours between running a database vacuum command to reclaim space. 
Setting this too low will impact performance, while setting it too high will mean that more storage may be required for longer periods.\n\nPostgreSQL Plugin Configuration\n-------------------------------\n\nFledge supports PostgreSQL as a storage solution for both configuration and reading data. It can be used to store both types of data, or just one of them in combination with another storage plugin.\n\n.. note::\n\n   PostgreSQL is not installed as part of the installation of Fledge. If you wish to make use of the PostgreSQL storage plugin you may need to install PostgreSQL on the Fledge host or another host that Fledge can communicate with. Installing PostgreSQL in both these configurations is covered in the section :ref:`PostgreSQL`.\n\nThe PostgreSQL storage engine has further options that may be used to\nconfigure its behavior. To access these configuration parameters click\non the *postgres* option under the *Storage* category in the configuration\npage.\n\n+---------------+\n| |postgres_01| |\n+---------------+\n\nThere are a number of configuration items that can be used to tune the performance of the PostgreSQL storage plugin.\n\n  - **Pool Size**: The number of connections to create in the database connection pool.\n\n  - **Max. Insert Rows**: The maximum number of readings that will be inserted within a single SQL statement. \n\n.. _PostgreSQL:\n\nPostgreSQL Installation\n=======================\n\nPostgreSQL may be installed locally on the same Linux host as Fledge or remotely on a separate host. The option to install PostgreSQL on a separate host makes it an ideal choice for containerised environments, where a stateless Fledge installation is a common goal. This allows all configuration state and buffered readings to be held outside of the Fledge container, in a separate PostgreSQL container.\n\n.. note::\n\n    Some state, such as scripts, may need to be stored outside the database. 
These will also need to be managed in order to achieve a truly stateless Fledge installation in all cases.\n\n.. _InstallingPostgreSQL:\n\nInstalling A PostgreSQL Server\n------------------------------\n\nThe precise commands needed to install a PostgreSQL server vary from system\nto system; in general a packaged version of PostgreSQL is best used.\nThese are often available within the standard package repositories for\nyour system.\n\nUbuntu Install\n~~~~~~~~~~~~~~\n\nOn Ubuntu or other apt based distributions the command to install PostgreSQL would be:\n\n.. code-block:: console\n\n  sudo apt install -y postgresql postgresql-client\n\nNow, make sure that PostgreSQL is installed and running correctly:\n\n.. code-block:: console\n\n  sudo systemctl status postgresql\n\nBefore you proceed, you must create a PostgreSQL user that matches your Linux user. Supposing that user is *<fledge_user>*, type:\n\n.. code-block:: console\n\n  sudo -u postgres createuser -d <fledge_user>\n\nThe *-d* argument is important because the user will need to create the Fledge database.\n\nA more generic command is:\n\n.. code-block:: console\n\n  sudo -u postgres createuser -d $(whoami)\n\nRed Hat Install\n~~~~~~~~~~~~~~~\n\nOn Red Hat or other yum based distributions, to install PostgreSQL:\n\nAdd the PostgreSQL YUM repository to your system\n\n.. code-block:: console\n\n    sudo yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm\n\nCheck whether PostgreSQL 13 is available using the command shown below\n\n.. code-block:: console\n\n    sudo yum search -y postgresql13\n\nOnce you have confirmed that the PostgreSQL 13 repositories are available on your system, you can proceed to install PostgreSQL 13\n\n.. code-block:: console\n\n    sudo yum install -y postgresql13 postgresql13-server\n\nBefore using the PostgreSQL server, you need to first initialize the database service using the command\n\n.. 
code-block:: console\n\n    sudo /usr/pgsql-13/bin/postgresql-13-setup initdb\n\nYou can then proceed to start the database server as follows\n\n.. code-block:: console\n\n    sudo systemctl enable --now postgresql-13\n\nConfirm that the service started above is running by checking its status using the command\n\n.. code-block:: console\n\n    sudo systemctl status postgresql-13\n\nNext, you **must** create a PostgreSQL user that matches your Linux user.\n\n.. code-block:: console\n\n  sudo -u postgres createuser -d $(whoami)\n\n.. note::\n\n   The example above is based on the use of PostgreSQL version 13, the latest verified version at the time of writing. If version 13 is not available for your platform, or there is a local requirement to use a different version, use of a later version should not cause any problems. Likewise, older versions can be used; Fledge has been tested using version 9 and later.\n\n\nUsing a Remote PostgreSQL Server\n--------------------------------\n\nFollow the steps below to set up PostgreSQL on a remote machine and enable secure network connections to the PostgreSQL server.\n\n   #. Install PostgreSQL on the remote machine\n\n      Refer to the section :ref:`InstallingPostgreSQL` for detailed PostgreSQL installation instructions.\n\n   #. Configure PostgreSQL to allow network connections\n\n      By default, PostgreSQL only listens for connections on the local machine. To allow access over the network, from other hosts or containers, you need to modify the PostgreSQL configuration file, `postgresql.conf`.\n\n      - Open the configuration file for editing:\n\n         .. code-block:: bash\n\n             sudo nano /etc/postgresql/<version>/main/postgresql.conf\n\n         Replace `<version>` with the installed PostgreSQL version (e.g. `12`, `14`, etc.).\n\n      - Locate the following line in the configuration file:\n\n         .. 
code-block:: ini\n\n              #listen_addresses = 'localhost'\n\n      - Update the line to:\n\n         .. code-block:: ini\n\n            listen_addresses = '*'\n\n         This setting instructs PostgreSQL to listen for connections on all available network interfaces.\n\n         .. note::\n\n            In production environments, avoid using `'*'` unless absolutely necessary. Instead, restrict connections to specific IP addresses to enhance security.\n\n      - Save the file and exit the editor.\n\n\n   #. Update client authentication rules\n\n      PostgreSQL uses the `pg_hba.conf` file (Host-Based Authentication) to control how clients authenticate and connect to the database. To allow network connections, you need to update the rules in this file.\n\n      - Open the `pg_hba.conf` file for editing:\n\n         .. code-block:: bash\n\n            sudo nano /etc/postgresql/<version>/main/pg_hba.conf\n\n      - Add the following entry at the end of the file:\n\n         .. code-block:: ini\n\n            host    all    all    0.0.0.0/0    trust\n\n         Each entry in the file consists of five fields; these fields are described below:\n\n            .. list-table::\n               :widths: 25 55 20\n               :header-rows: 1\n\n               * - Field\n                 - Description\n                 - Example\n               * - Connection\n                 - This specifies that the rule applies to TCP/IP connections.\n                 - host\n               * - Database\n                 - The name of the database to which the rule is applied. The reserved name *all* applies the rule to all databases.\n                 - all\n               * - User\n                 - The user to which the rule applies. The reserved username of *all* may be used to apply the rule to all users.\n                 - all\n               * - Address\n                 - The network mask which defines the sub-networks from which connections are allowed. 
The example provided permits connections from all IPv4 addresses. For improved security, replace this with a specific network mask (e.g. `192.168.1.0/24` limits connections to just those from the network 192.168.1.xxx). Either IPv4 or IPv6 network masks may be supplied.\n                 - 0.0.0.0/0\n               * - Authentication Method\n                 - The authentication method to be used. The trust method disables password authentication. While trust is convenient for testing, it is not recommended for production environments. For secure authentication in production, use `md5` or `scram-sha-256`.\n                 - trust\n\n        An example entry for use in a production environment might be as follows:\n\n        .. code-block:: ini\n\n           host    all    all    0.0.0.0/0    md5\n\n        .. note::\n\n           Security can be further strengthened by enforcing SSL encryption on connections. This is done by specifying a connection type of *hostssl* rather than *host* in the configuration record. You must also enable SSL support in PostgreSQL; refer to the PostgreSQL documentation for the settings required.\n\n      - Save the file and exit the editor.\n\n\n   #. Restart the PostgreSQL Service\n\n      In order for the changes to take effect it is necessary to restart PostgreSQL.\n\n      .. note::\n\n         Once the configuration is complete, ensure that the machine's firewall settings allow incoming connections to PostgreSQL's default port (5432).\n\n   #. Setting Up PostgreSQL Client on the Local Machine\n\n      This section explains how to set up the PostgreSQL client on the machine running Fledge.\n\n      - Install PostgreSQL Client\n\n         The PostgreSQL client tools are required to allow Fledge to interact with a remote PostgreSQL server. 
Install the `postgresql-client` package using your system's package manager.\n\n      - Export PostgreSQL Environment Variables\n\n         Configure environment variables to specify the connection details for the PostgreSQL server. These variables ensure that Fledge can communicate with the server seamlessly.\n\n         The environment variables include the host, user, password and, optionally, the port for the PostgreSQL server. \n\n         .. code-block:: bash\n\n             export PGHOST=<host_ip_address>\n             export PGUSER=<postgres_user_name>\n             export PGPASSWORD=<postgres_user_password>\n             export PGPORT=<postgres_port>\n\n         These may be set in the login script run by the user who runs Fledge or, if using a container, passed into the Fledge container at startup using the *-e* flag to the container.\n\n         The description of each of the environment variables supported is shown below.\n\n         .. list-table::\n            :header-rows: 1\n\n            * - Environment Variable\n              - Description\n            * - PGHOST\n              - The IP address or hostname of the PostgreSQL server.\n            * - PGUSER\n              - The username for the PostgreSQL database (e.g. `postgres`).\n            * - PGPASSWORD\n              - The password for the specified user.\n            * - PGPORT\n              - An optional environment variable that can be specified if the PostgreSQL installation is not listening on the default PostgreSQL port of 5432.\n\n\n\nStorage Management\n==================\n\nFledge manages the amount of storage used by means of purge processes that run periodically to remove older data and thus limit the growth of storage use. The purging operations are implemented as Fledge tasks that can be scheduled to run periodically. 
There are two distinct tasks that are run:\n\n  - **purge**: This task is responsible for limiting the readings that are maintained within the Fledge buffer.\n\n  - **system purge**: This task limits the amount of system data in the form of logs, audit trail and task history that is maintained.\n\nPurge Task\n----------\n\nThe purge task is run via a schedule called *purge*; the default for this schedule is to run the purge task every hour. This can be modified via the user interface in the *Schedules* menu entry or via the REST API by updating the schedule.\n\nThe purge task has two metrics it takes into consideration: the age of the readings within the system and the number of readings in the system. These can be configured to control how much data is retained within the system. Note however that this does not mean that there will never be data older than specified or more rows than specified, as purge runs periodically and between executions of the purge task the buffered readings will continue to grow.\n\nThe configuration of the purge task can be found in the *Configuration* menu item under the *Utilities* section.\n\n+------------+\n| |purge_01| |\n+------------+\n\n  - **Age Of Data To Be Retained**: This configuration option sets the limit on how old data has to be before it is considered for purging from the system. It defines a value in hours, and only data older than this is considered for purging from the system.\n\n  - **Max rows of data to retain**: This defines how many readings should be retained in the buffer. This can override the age of data to retain and defines the maximum allowed number of readings that should be in the buffer after the purge process has completed.\n\n  - **Retain Unsent Data**: This defines how to treat data that has been read by Fledge but not yet sent onward to one or more of the north destinations for data. 
It supports a number of options:\n\n    +------------+\n    | |purge_02| |\n    +------------+\n\n    - **purge unsent**: Data will be purged regardless of whether it has been sent onward from Fledge or not.\n\n    - **retain unsent to any destination**: Data will not be purged, i.e. it will be retained, if it has not been sent to any of the north destinations. If it has been sent to at least one of the north destinations then it will be purged.\n\n    - **retain unsent to all destinations**: Data will be retained until it has been sent to all north destinations that are enabled at the time the purge process runs. Disabled north destinations are not included in order to prevent them from stopping all data from being purged.\n\n\nNote: This configuration category will not appear until after the purge process has run for the first time. By default this will be 1 hour after the Fledge instance is started for the first time.\n\n\nSystem Purge Task\n-----------------\n\nThe system purge task is run via a schedule called *system_purge*; the default for this schedule is to run the system purge task every 23 hours and 50 minutes. This can be modified via the user interface in the *Schedules* menu entry or via the REST API by updating the schedule.\n\nThe configuration category for the system purge can be found in the *Configuration* menu item under the *Utilities* section.\n\n+------------+\n| |purge_03| |\n+------------+\n\n  - **Statistics Retention**: This defines the number of days for which full statistics are held within Fledge. Statistics older than this number of days are removed and only a summary of the statistics is held.\n\n  - **Audit Retention**: This defines the number of days for which the audit log entries will be retained. 
Once the entries reach this age they will be removed from the system.\n\n  - **Task Retention**: This defines the number of days for which the history of task execution within Fledge is maintained.\n\nNote: This configuration category will not appear until after the system purge process has run for the first time.\n"
  },
  {
    "path": "docs/troubleshooting_pi-server_integration.rst",
    "content": ".. Images\n.. |img_001| image:: images/tshooting_pi_001.jpg\n.. |img_002| image:: images/tshooting_pi_002.jpg\n.. |img_003| image:: images/tshooting_pi_003.png\n.. |img_004| image:: images/tshooting_pi_004.jpg\n.. |img_005| image:: images/tshooting_pi_005.jpg\n.. |img_006| image:: images/tshooting_pi_006.jpg\n.. |img_007| image:: images/tshooting_pi_007.jpg\n.. |img_008| image:: images/tshooting_pi_008.jpg\n.. |img_009| image:: images/tshooting_pi_009.jpg\n.. |img_010| image:: images/tshooting_pi_010.jpg\n.. |AFElement_Default| image:: images/tshooting_pi_011.jpg\n.. |AFAttribute_Default| image:: images/tshooting_pi_012.jpg\n.. |OMF_tabs| image:: images/OMF_tabs.png\n.. |OMF_Persisted| image:: images/OMF_Persisted.png\n.. |PersistedPlugins| image:: images/PersistedPlugins.png\n.. |PersistedActions| image:: images/PersistActions.png\n.. |OMF_Formats| image:: images/OMF_Formats.jpg\n\n*****************************************\nTroubleshooting the PI Server integration\n*****************************************\n\nThis section describes how to troubleshoot issues with the PI Server integration using the OMF North plugin.\nYou should be running PI Web API 2019 SP1 (1.13.0.6518) or later.\n\n- `Log files`_\n- `How to confirm that PI Web API is installed and running`_\n- `Error Messages and Causes`_\n\n  - `Loss of Connection to the PI Web API Server`_\n  - `HTTP Code 409: Processing cannot continue until data archive errors are corrected`_\n  - `HTTP Code 409: The supplied container overlaps with a different existing container`_\n  - `HTTP Code 409:  One or more PI Points could not be created`_\n  - `PI License Expired or Limit Exceeded`_\n  - `HTTP Code 409: AF Element could not be created`_\n  - `HTTP Code 409: An existing LINK includes an AF Attribute that overlaps`_\n  - `HTTP Code 413: Payload Too Large`_\n  - `WARNING: FledgeAsset Type exists with a different definition`_\n  - `Changing the Tag Name OMF Hint`_\n- `Possible Solutions to Common 
Problems`_\n\nComplex Types vs. Linked Types\n==============================\n\nThis section describes the representation of Readings in the PI System using either Complex Types or Linked Types.\n**Linked Types are highly recommended for new OMF North configurations.**\n\nComplex Types\n-------------\n\nWhen the OMF North plugin operates, it accepts Readings from the Fledge storage and uses them to create OMF Types, Containers and Data messages.\nIn the initial release of the OMF North plugin, all Types were Complex Types.\nThis means the Type would include data streams that represent all Datapoints in the Reading.\nIf a Reading arrived later with the same asset name but with additional Datapoints, OMF would be forced to create a new Type and new Containers.\nPreviously defined Containers would be abandoned.\n\n.. _Linked Types Description:\n\nLinked Types\n------------\n\nIn Fledge 2.1.0, support for OMF version 1.2 was introduced to the OMF North plugin.\nOMF 1.2 includes support for Linked Types.\nEach Datapoint becomes an AF Attribute mapped to a PI Point.\nOMF then links the AF Attribute to a parent AF Element to create a complete representation of the Reading in AF.\nA major advantage of Linked Types is that new Readings with additional Datapoints will not break any OMF Types.\nOMF will simply create a new AF Attribute and PI Point and link it to the existing parent AF Element.\n\nUnderstanding Types when upgrading Fledge\n------------------------------------------\n\nWhen upgrading from a Fledge version prior to 2.1 where data had previously been sent to OMF, the plugin will continue to use the pre-OMF 1.2 Complex Types definitions to send data.\nThis ensures that data will continue to be written to the same PI Points within the PI Server or other OMF end points. 
New OMF North instances will send data using the newer OMF 1.2 mechanism.\n\nIt is possible to create a new OMF North plugin instance that sends data using Complex Types (that is, pre-OMF 1.2 format) by turning on the option *Complex Types* in the *Formats & Types* tab of the plugin configuration.\n**This is not recommended.**\n\n+---------------+\n| |OMF_Formats| |\n+---------------+\n\nLog files\n=========\n\nFledge logs messages at error and warning levels by default; it is possible to increase the verbosity of the messages logged to also include information and debug messages. This is done by altering the minimum log level setting for the north service or task. To change the minimum log level within the graphical user interface, select the north service or task, click on the advanced settings link and then select a new minimum log level from the option list presented.\n\nThe name of the north instance should be used to extract just the logs about the PI Server integration, as in this example:\n\n*Screenshot from the Fledge GUI*\n\n|img_003|\n\n.. code-block:: console\n\n    $ sudo cat /var/log/syslog | grep North_Readings_to_PI\n\nInformation Messages\n--------------------\n\nIf the minimum logging level is set to Information, the OMF North plugin will write messages that list the OMF Types, Containers and Links it creates.\nThese messages can aid in understanding the artifacts created in the PI AF Server and the PI Data Archive and can help with troubleshooting.\nFor example, if the *Default Asset Framework Location* is left at its default of */fledge/data_piwebapi/default* and the target AF Database is new,\nthe following informational messages will be written to the log:\n\n.. 
code-block:: bash\n\n    INFO: Created Type 16634532276179036866_fledge_typeid\n    INFO: Created Element fledge\n    INFO: Created Type 4258086130383245257_data_piwebapi_typeid\n    INFO: Created Element data_piwebapi\n    INFO: Created Link 16634532276179036866_fledge to 4258086130383245257_data_piwebapi\n    INFO: Created Type 13582600746897291705_default_typeid\n    INFO: Created Element default\n    INFO: Created Link 4258086130383245257_data_piwebapi to 13582600746897291705_default\n\nTypes, Elements and Links\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOMF Types appear in the AF Database as AF Element Templates.\nTemplates can be viewed by running the PI System Explorer and navigating to the *Library* tab and expanding *Element Templates*.\nWhen an AF Element is created by OMF, its defining AF Template is displayed in the *Template* field of the *General* tab.\nThis is the PI System Explorer view of the *default* AF Element:\n\n|AFElement_Default|\n\nThe *Created Link* messages show the creation of links between parent and child AF Elements thereby creating a hierarchy.\nThe identifiers in the *Created Link* messages are the \"index\" values of the AF Elements.\nThe index values can be viewed by clicking the *Attributes* tab for an AF Element in PI System Explorer.\nLook for the value of the Attribute \"__id.\"\nThis is a view of the Attributes of the *default* AF Element:\n\n|AFAttribute_Default|\n\nContainers\n~~~~~~~~~~\n\nThe Information messages logged for a Linked Types configuration are different from messages logged for a Complex Types configuration.\nEach Datapoint in a Reading will become a PI Point mapped to an AF Attribute for both configuration types but how they are organized is different.\n\nContainers with Linked Types\n############################\n\nWith Linked Types, Containers become PI Points mapped to AF Attributes.\nContainers defined by Datapoints in a single Reading are created at once but additional Containers can be added later without breaking 
the configuration.\nFor example, if a Reading with an asset named \"Calvin\" and 3 Datapoints named \"random1\" through \"random3\" is received by the plugin,\nthe following message will be logged when the Containers are created:\n\n.. code-block:: bash\n\n    INFO: Containers created: Calvin.random1,Calvin.random2,Calvin.random3\n\nIf at a later time another Reading named \"Calvin\" is received but with 4 Datapoints, a new message will be logged:\n\n.. code-block:: bash\n\n    INFO: Containers confirmed: Calvin.random1,Calvin.random2,Calvin.random3,Calvin.random4\n\nStrictly speaking, this message is not completely accurate.\nThe first three Containers already exist so their presence is confirmed.\nThe last Container (Calvin.random4) will be new.\n\nThe data type of the created PI Points and AF Attributes is not logged.\nYou can check the data types by using the PI System Explorer to view the AF Attributes of the AF Element \"Calvin\" or by using PI System Management Tools to view the PI Points.\n\nContainers with Complex Types\n#############################\n\nWith Complex Types, Containers are defined by an OMF Type which will have one or more data streams in it.\nThe Type will define the names and data types of the individual data streams.\nTypes are created by the plugin to reflect a Reading and its Datapoints when the Reading is received by the plugin.\nWhen the Container is created, it is important to log its OMF Type as well:\n\n.. 
code-block:: bash\n\n    INFO: Created Container (Type: A_13582600746897291705_default_2_Calvin_typename_measurement) 2measurement_Calvin\n    INFO: Created Element Calvin-type2\n\nThe data streams in this example will be Attributes of a new AF Element called \"Calvin-type2.\"\nTo find the names of the individual data streams, check the definition of the AF Element Template \"*A_13582600746897291705_default_2_Calvin_typename_measurement*\"\nusing PI System Explorer.\nYou will see this AF Template has 3 AF Attributes named \"random1\" through \"random3.\"\nThe names of the underlying PI Points will be the Container name from the logged message concatenated with the AF Attribute names separated by a dot (\".\").\nThis means the PI Point names will be *2measurement_Calvin.random1*, *2measurement_Calvin.random2* and *2measurement_Calvin.random3*.\n\nIf at a later time another Reading named \"Calvin\" is received but with 4 Datapoints, the situation is much more complicated than for Linked Types.\nOnce created, an OMF Type cannot be redefined to allow for additional data streams.\nThe plugin will attempt to match the new Reading to the existing Type but this will fail:\n\n.. code-block:: bash\n\n    ERROR: Error 409 creating Type A_13582600746897291705_default_2_Calvin_typename_sensor\n    ERROR: Error 409 creating Type A_13582600746897291705_default_2_Calvin_typename_measurement\n    ERROR: HTTP 409: Type conflict for Calvin (random1,random2,random3,random4). Creating a new Type: 2 messages\n    WARNING: Message 0 HTTP 200: Warning, The type with the supplied ID and version already exists.,\n    ERROR: Message 1 HTTP 409: Error, A type with the supplied ID and version already exists, but it does not match the supplied type.,\n\nThis is not a fatal error.\nThe plugin will search for an existing Type that matches the definition of the newest Reading.\nIf it can't find one, it will create a new Type.\nThe process should end with messages like these:\n\n.. 
code-block:: bash\n\n    INFO: Created Type A_13582600746897291705_default_3_Calvin_typename_sensor\n    INFO: Created Type A_13582600746897291705_default_3_Calvin_typename_measurement\n    INFO: Created Container (Type: A_13582600746897291705_default_3_Calvin_typename_measurement) 3measurement_Calvin\n    INFO: Created Element Calvin-type3\n\nThis means the new PI Point names will be *3measurement_Calvin.random1*, *3measurement_Calvin.random2* and *3measurement_Calvin.random3*.\nUnfortunately, the previously-defined Containers with their underlying AF Attributes and PI Points cannot be reused.\n\nCreated vs. Confirmed\n~~~~~~~~~~~~~~~~~~~~~\n\nYou may see the terms *Created* and *Confirmed* in the Information messages.\nThey have specific meanings:\n\n- *Created* means an item did not exist in the PI Server and was created.\n- *Confirmed* means an item already exists and is correctly defined.\n\n.. note::\n\n    The plugin makes this distinction by evaluating the HTTP return code from OMF POST calls.\n    If an OMF POST call returns an HTTP return code of 200 (OK), it means an item already exists and is correctly defined.\n    If an OMF POST call returns an HTTP return code of 201 (Created), it means a new item has been created.\n    \nTracing File\n------------\n\nIt is possible to generate a detailed trace of all OMF messages POSTed to the AVEVA web server for troubleshooting purposes.\nThis applies to all AVEVA OMF web server types: PI Web API, AVEVA CONNECT and Edge Data Store.\nTo enable this feature, click the *Enable Tracing* checkbox on the `OMF Basic tab <plugins/fledge-north-OMF/index.html#basic>`_.\n\n.. 
note::\n\n    The *Enable Tracing* feature should be disabled in production environments.\n    The *omf.log* file can grow to be quite large if the feature is left enabled.\n\nThe web server's response to the POSTing of an OMF message is almost always a JSON document which is included in the *omf.log* trace file.\nYou can temporarily configure PI Web API to include additional information for debugging purposes.\nTo include debugging information, set the *DebugMode* boolean attribute to *true* in the PI Web API System Configuration.\nSee the `Configuration at runtime <https://docs.aveva.com/bundle/pi-web-api/page/1023022.html>`_\nand `Other security settings <https://docs.aveva.com/bundle/pi-web-api/page/1023034.html>`_ webpages on the AVEVA documentation website for instructions on how to do this.\nDebug information for OMF messages appears as a new *Parameters* array in an *EventInfo* object.\nFor example, this JSON response snippet includes the identifier of the OMF Container and the name of the underlying PI Point:\n\n.. code-block:: json\n\n       \"Parameters\":[\n          {\n             \"Name\":\"Container.Id\",\n             \"Value\":\"sinusoid.sinusoid\"\n          },\n          {\n             \"Name\":\"Container.TypeId\",\n             \"Value\":\"Double64\"\n          },\n          {\n             \"Name\":\"Container.TypeVersion\",\n             \"Value\":\"1.0.0.0\"\n          },\n          {\n             \"Name\":\"Property\",\n             \"Value\":\"Double64\"\n          },\n          {\n             \"Name\":\"PIPoint.Name\",\n             \"Value\":\"sinusoid.sinusoid\"\n          }\n       ]\n\n.. 
note::\n\n    AVEVA notes that *DebugMode* should be used for troubleshooting only and should be disabled when you are done.\n    In a production environment, the *DebugMode* attribute should be set to *false* to reduce vulnerability to cross-site scripting (XSS).\n\nHow to confirm that PI Web API is installed and running\n=======================================================\n\nOpen the URL *https://piserver_1/piwebapi* in the browser (substituting *piserver_1* with the name and address of your PI Server) to\nconfirm that your server is reachable and that PI Web API is properly installed.\nIf PI Web API is configured for Basic authentication, a prompt similar to the example shown below requesting entry of the user name and password will be displayed:\n\n|img_002|\n\n**NOTE:**\n\n- *Enter the user name and password which you set in your Fledge configuration.*\n\nThe *PI Web API* *OMF* plugin must be installed to allow the integration with Fledge; in this screenshot the 4th row shows the\nproper installation of the plugin:\n\n|img_001|\n\nSelect the item *System* to verify the installed version:\n\n|img_010|\n\nCommands to check PI Web API\n----------------------------\n\nOpen the PI Web API URL and drill down into the Data Archive and the Asset Framework hierarchies to verify the proper configuration on the PI Server side. 
Also confirm that the correct permissions have been granted to access these hierarchies.\n\n**Data Archive drill down**\n\nFollowing the path *DataServers* -> *Points*:\n\n|img_004|\n\n|img_005|\n\nYou should be able to browse the *PI Points* page and see your *PI Points* if data has already been sent:\n\n|img_006|\n\n**Asset Framework drill down**\n\nFollowing the path *AssetServers* -> Select the *Instance* -> Select the proper *Databases* -> drill down into the AF hierarchy up to the required level -> *Elements*:\n\n|img_007|\n\n*selecting the instance*\n\n|img_008|\n\n*selecting the database*\n\n|img_009|\n\nProceed with the drill down operation up to the desired level/asset.\n\nUnderstanding the OMF Data Cache\n--------------------------------\n\nThe PI Web API maintains two separate caches of PI Server data for best performance: the PI System cache and the OMF cache.\nThe PI System cache pools Asset Framework and Data Archive resources in support of PI Web API data access features.\nThis cache is updated every 5 minutes.\nThe OMF cache, on the other hand, caches Asset Framework resources created by OMF Type and Container messages.\nThis cache is updated every 24 hours.\nThe reason this cache is updated so infrequently is that AVEVA assumes that all AF Database items generated by OMF messages are only ever manipulated through OMF.\nSee `Data Caching <https://docs.aveva.com/bundle/omf-with-pi-web-api/page/1017376.html>`_ on the AVEVA Documentation website for details.\n\n**You should never use other tools such as the PI System Explorer to edit or delete items in your AF Database that were created by OMF.**\nIf you do need to edit the AF Database directly to solve a problem, restart the PI Web API before restarting your OMF North plugin instance.\nThe restarted PI Web API will have no OMF data cached.\n\nError Messages and Causes\n=========================\n\nThis section documents some of the OMF North error messages that can appear in the Linux system log 
file */var/log/syslog*.\n\nLoss of Connection to the PI Web API Server\n-------------------------------------------\n\nIf the OMF North plugin cannot communicate with the PI Web API server over the network, these messages will appear:\n\n.. code-block:: bash\n\n    ERROR: Error sending Data, Failed to send data: Operation canceled - piserver:443 /piwebapi/omf\n    WARNING: Connection to the destination data archive has been lost\n    ERROR: The PI Web API service piserver:443 is not available. HTTP Code: 503\n\nWhenever the message \"*Connection to the destination data archive has been lost*\" appears, OMF North will not attempt to send data again until connection is reestablished.\nOMF North will attempt to reach the PI Web API server every 60 seconds.\nWhen connection is reestablished, these messages will appear:\n\n.. code-block:: bash\n\n    WARNING: PI Web API 2023 SP1-1.19.0.621 reconnected to piserver:443 OMF Version: 1.2\n    INFO: The sending of data has resumed\n\nIf the PI Web API server machine is running but PI Web API itself is not, the \"*Operation canceled*\" message will not appear.\nOMF North's attempt to send data to PI Web API will result in an HTTP return code 503 (Service Unavailable):\n\n.. code-block:: bash\n\n    ERROR: The PI Web API service piserver:443 is not available. 
HTTP Code: 503\n\nHTTP Code 409: Processing cannot continue until data archive errors are corrected\n---------------------------------------------------------------------------------\n\nThe HTTP return code 409 means Conflict.\nIf OMF North receives an HTTP return code 409, it means the message it sent has attempted to create an item that already exists but is defined differently.\nNeither OMF North nor PI Web API can resolve these conflicts automatically.\nOMF North will not attempt to send data again.\nYou must shut down the OMF North instance and address the problem.\n\nManual intervention by the system manager will be necessary.\nThis usually means editing or deleting an item in the PI Asset Framework or the PI Data Archive.\nSome specific examples are listed in this section.\n\nHTTP Code 409: The supplied container overlaps with a different existing container\n----------------------------------------------------------------------------------\n\nThis message means that OMF North is attempting to create a new PI Point but a point with the same name already exists with a different configuration.\nThere is a procedure for repairing the PI Points if this occurs.\nThe context in which this message appears differs between configurations with Complex Types and Linked Types.\nIn both cases, the list of messages ends with \"*Processing cannot continue until data archive errors are corrected.*\"\nThis means OMF North must be shut down to correct the problem.\n\nComplex Types\n~~~~~~~~~~~~~\n\n.. 
code-block:: bash\n\n    INFO: Created Type A_13582600746897291705_default_1_Calvin_typename_sensor\n    INFO: Created Type A_13582600746897291705_default_1_Calvin_typename_measurement\n    ERROR: Error 409 creating Container Calvin\n    ERROR: HTTP 409: A Conflict occurred sending the Container message for the asset Calvin (Type: A_13582600746897291705_default_1_Calvin_typename_measurement): 1 message\n    ERROR: Message 0 HTTP 409: Error, The supplied container overlaps with a different existing container., Data Archive requires PI Point names to be unique, and treats PI Point names as case-insensitive. The specified type and container were translated into PI Point names, but one or more resulting names were already being used.\n    WARNING: HTTP Code 409: Processing cannot continue until data archive errors are corrected\n\nFollow the description in the `Containers with Complex Types`_ section to find the names of the PI Points referenced by these messages.\n\nLinked Types\n~~~~~~~~~~~~\n\n.. code-block:: bash\n\n    ERROR: HTTP 409: The OMF endpoint reported a Conflict when sending Containers: 4 messages\n    WARNING: Message 0 HTTP 200: Warning, The specified container already exists in cache. If the associated points were manually modified or removed and need to be repaired, please restart PI Web API and send the message again.,\n    ERROR: Message 3 HTTP 409: Error, The supplied container overlaps with a different existing container., Data Archive requires PI Point names to be unique, and treats PI Point names as case-insensitive. 
The specified type and container were translated into PI Point names, but one or more resulting names were already being used.\n    WARNING: 2 duplicate messages skipped\n    WARNING: Containers attempted: Calvin.random1,Calvin.random2,Calvin.random3,Calvin.random4\n    WARNING: HTTP Code 409: Processing cannot continue until data archive errors are corrected\n\nFinding the problem PI Points in a Linked Types configuration is straightforward:\nthe point names appear in the *Containers attempted* message.\nIt is not possible to tell which of the PI Points has the problem, but applying the repair procedure to all PI Points listed in the message is safe.\n\nRepair Procedure\n~~~~~~~~~~~~~~~~\n\n- Shut down your OMF North instance\n- Start PI System Management Tools as Administrator\n- Navigate to *Points* then *Point Builder*\n- Search for the problem PI Points\n- Click the *General* tab in the lower pane. For each PI Point you wish to repair:\n\n  - Change *Point Source* to \"L\"\n  - Clear the *Exdesc*\n- Click the *Save* icon at the top of the page, or press Control-S on your keyboard\n- Stop and restart PI Web API\n- Start your OMF North instance\n\nWhen your OMF North instance starts, you may see messages that Containers were created:\n\n.. code-block:: bash\n\n    INFO: Containers created: Calvin.random1,Calvin.random2,Calvin.random3,Calvin.random4\n\nThis does not mean that new PI Points were created.\nIt means the OMF processor in PI Web API overwrote the *Point Source* and *Exdesc* point attributes, thereby adopting the PI Point.\nOMF returns HTTP return code 201 (Created) when it does this, which is why OMF North logs a *Containers created* message.\nIf you are examining the *omf.log* trace file, you will see messages reading \"*A PI Point was overwritten.*\"\n\nHTTP Code 409: One or more PI Points could not be created\n----------------------------------------------------------\n\nIf OMF North cannot create a PI Point, the messages are these:\n\n.. 
code-block:: bash\n\n    ERROR: Error 409 creating Container Calvin\n    ERROR: HTTP 409: A Conflict occurred sending the Container message for the asset Calvin: 1 message\n    ERROR: Message 0 HTTP 409: Error, One or more PI Points could not be created.,\n    WARNING: HTTP Code 409: Processing cannot continue until PI Server errors are corrected\n\nThe reason why a PI Point cannot be created is not provided by PI Web API.\nIt is possible that the user account configured for your OMF North instance does not have privileges to create or edit points.\nYou can test this by starting PI System Management Tools under the same user account and trying to create or edit a PI Point.\n\nIt is possible that your PI License has expired or you have exceeded the licensed number of points.\nIf this is the case, the messages are different.\nSee the next section.\n\nPI License Expired or Limit Exceeded\n------------------------------------\n\nProcessing of OMF Container messages may require creation of one or more PI Points.\nIf the PI Data Archive license has expired or the limit on the number of PI Points has been exceeded, PI Point creation will fail.\nPI Web API responds with an exception which is logged by OMF North:\n\n.. code-block:: bash\n\n    ERROR: HTTP 500: An exception occurred when sending container information to the OMF endpoint: 1 message\n    ERROR: Message 0 HTTP 500: Error, One or more PI Points could not be created.,\n    ERROR: Message 0 Exception: [-12216] Maximum licensed aggregate Point /Module Count exceeded. 
Parameter name: FatalError (System.ArgumentException)\n    WARNING: Containers attempted: Calvin.random4\n    WARNING: HTTP Code 500: Processing cannot continue until data archive errors are corrected\n\nHTTP Code 409: AF Element could not be created\n----------------------------------------------\n\nIf you start your OMF North instance after making manual changes to OMF-generated structures in your AF Database, you may see this pattern of messages:\n\n.. code-block:: bash\n\n    INFO: Containers created: Calvin1.random1,Calvin1.random2,Calvin2.random1,Calvin2.random2\n    ERROR: HTTP 409: Conflict sending Data: 4 messages\n    ERROR: Message 0 HTTP 409: Error, AF Element could not be created.,\n    ERROR: Message 0 Exception: 'Calvin1' already exists. (System.InvalidOperationException)\n    ERROR: Message 1 HTTP 409: Error, AF Element could not be created.,\n    ERROR: Message 1 Exception: 'Calvin2' already exists. (System.InvalidOperationException)\n    ERROR: Message 2 HTTP 409: Error, The specified static instance could not be found.,\n    WARNING: 1 duplicate messages skipped\n    WARNING: HTTP Code 409: Processing cannot continue until data archive errors are corrected\n\nThese messages may not reflect the underlying cause of the problem.\nIf you have the `Tracing File`_ enabled, you may see supporting information labelled *Suggestions*:\n\n.. code-block:: json\n\n    {\n        \"EventInfo\":{\n        \"Message\":\"AF Element could not be created.\",\n        \"Reason\":null,\n        \"Suggestions\":[\n            \"Sibling elements must have unique names. 
Elements are siblings if they share a parent.\"\n        ]\n        }\n    }\n\nThis Suggestion is evidence that the OMF message sent by OMF North included an item that conflicted with an item in the OMF cache\neven though the item had been deleted from the AF Database manually and checked in.\nTo address this, restart the PI Web API.\n\nHTTP Code 409: An existing LINK includes an AF Attribute that overlaps\n----------------------------------------------------------------------\n\nThe full text of this error message is much longer.\nThe context is:\n\n.. code-block:: bash\n\n    ERROR: HTTP 409: Conflict sending Data: 1 message\n    ERROR: Message 0 HTTP 409: Error, An existing LINK includes an AF Attribute that overlaps with the name of an AF Attribute that would be created for the specified LINK.,\n    ERROR: Error 409 creating Link random to random.randomwalk\n    ERROR: Error 409 creating Link random to random.temperature\n    ERROR: Error 409 creating Link random to random.units\n    ERROR: Error 409 creating Link random to random.location\n    WARNING: HTTP Code 409: Processing cannot continue until data archive errors are corrected\n\nA complete description of OMF LINKs can be found in the :ref:`Linked Types<Linked Types Description>` section.\nIn this case, an OMF message is attempting to create a new AF Attribute in an AF Element that already has an AF Attribute with the same name.\nOMF North logs all links that were attempted when this OMF Data message failed.\n\nUnfortunately, OMF does not return the name of the conflicting AF Attribute.\nYou should compare the list of attempted links against the existing AF Attributes of the AF Element.\nThe AF Element name is the first item in each *Error 409 creating Link* message.\nThe AF Attribute name is the text after the dot (\".\") in the second item of the message.\n\nThe solution to this problem depends on the situation.\nIt is possible that an AF user manually added an AF Attribute to the AF Element.\nIf 
this is the case, remove or rename the conflicting AF Attribute.\n\nTake note of the *Static Data* parameter in the OMF North configuration.\nItems in *Static Data* are added to every AF Element that represents an OMF Container.\nIn this example, *Static Data* was left at its default, which is:\n\n.. code-block:: bash\n\n    Location: Palo Alto, Company: Dianomic\n\nThis creates AF Attributes *Location* and *Company* in every AF Element.\nSince comparisons in AF are case-insensitive, the name of the static AF Attribute *Location* conflicted with the *location* datapoint in the OMF Data message.\n\nHTTP Code 413: Payload Too Large\n--------------------------------\n\nThis error means that a message POSTed to the PI Web API server is larger than the server will accept.\nThis can occur if Readings have large numbers of Datapoints or if the data rates into OMF North are high.\nOMF North will work around this error for Data messages but you should be aware of its method for doing this.\nIf this error occurs, you will see the following in */var/log/syslog*:\n\n.. code-block:: bash\n\n    ERROR: Error sending Data, 413 Payload Too Large - mypiserver:443 /piwebapi/omf\n    WARNING: Next POST of Readings will take place in 2 blocks\n\nThe HTTP code of 413 (Payload Too Large) is returned by PI Web API.\nIf this occurs, OMF North will divide the number of Readings it has received into 2 blocks and try to send its Data message again.\nIf HTTP 413 is returned again, OMF North will divide the Readings into 3 blocks and so on.\nThe block count will increase until the OMF Data message size is under the PI Web API limit.\n\nOnce the block count has been set, OMF North will continue to use this value.\nIt will not be reduced automatically.\nIf you believe that OMF Data message sizes will be significantly lower as processing continues, restart your OMF North instance.\nThe block count will be reset to 1.\n\nIf you have enabled the `Tracing File`_, you will see more detail:\n\n.. 
code-block:: bash\n\n    Code: 413 Request Entity Too Large\n    Content: {\"Errors\":[\"Request content exceeds the maximum allowed length 4194304\"]}\n\nThe integer 4194304 in the Content message is the PI Web API default value for the maximum inbound message size, which is 4 Megabytes.\n\nOMF North will divide its Readings into blocks for OMF Data messages only.\nError logs for OMF Data messages begin with \"*ERROR: Error sending Data.*\"\nIf you encounter the HTTP 413 error for any other type of OMF message, you must increase the maximum inbound message size in the PI Web API.\nThe PI Web API parameter name for this message size limit is *MaxRequestContentLength*.\nSee the `AVEVA Documentation page <https://docs.aveva.com/bundle/pi-web-api/page/1023031.html>`_ for instructions on how to edit this limit.\n\n.. note::\n\n    Another way to reduce OMF message size is to reduce the *Data Block Size* on the *Advanced* tab of the Fledge GUI.\n    This will result in fewer Readings being processed by OMF North at once.\n    In general, a *Data Block Size* setting of 2000 offers the best performance, so this solution is not ideal.\n    It may be necessary if OMF Container messages generate the HTTP 413 error and the PI Web API *MaxRequestContentLength* cannot be increased.\n\nWARNING: FledgeAsset Type exists with a different definition\n-------------------------------------------------------------\n\nThis warning can appear in Fledge systems that have already had one or more instances of OMF North running.\nThe first OMF North instance to start will create the *FledgeAsset* AF Element Template, which is used by OMF to create AF Elements that represent Containers in Linked Type configurations.\nThe warning means that the OMF North Static Data parameters have changed since the *FledgeAsset* template was created.\nThe flow of data to the PI System will not stop.\nHowever, any new AF Elements created by OMF North will have AF Attributes defined by the existing definition of 
*FledgeAsset*, not the Static Data parameter in your configuration.\n\n.. note::\n\n    Support for Static Data in Linked Types was introduced in Fledge 3.1.0.\n    Any instance of the *FledgeAsset* AF Element Template created before Fledge 3.1.0 will have only the minimum AF Attribute Templates: *__id*, *__indexProperty* and *__nameProperty*.\n    The presence of even the default value of the Static Data parameter (\"*Location: Palo Alto, Company: Dianomic*\") will generate this warning.\n\n.. note::\n\n    The *FledgeAsset* AF Element Template is not used for Complex Types.\n    If all of your configurations use Complex Types, this warning is benign.\n\nEliminating the Warning\n~~~~~~~~~~~~~~~~~~~~~~~\n\nTechniques for eliminating the warning depend on your requirements for Static Data in your Containers.\n\nClearing the Static Data\n########################\n\nIf you are upgrading to Fledge 3.1.0 and don't need to add Static Data values to your OMF Containers, clear the Static Data configuration using the Fledge GUI.\nThe OMF message that attempts to create the *FledgeAsset* AF Element Template will then match the existing definition of *FledgeAsset*, so there will be no warning.\nYou will see only the message \"*Confirmed FledgeAsset Type.*\"\n\nRecreating FledgeAsset to include Static Data\n##############################################\n\nIf you want to use your Static Data configuration in your Containers, you can delete all AF Elements that derive from *FledgeAsset* and then delete the *FledgeAsset* AF Element Template itself.\nOMF North will recreate the *FledgeAsset* AF Element Template and AF Elements when readings are processed.\nBefore doing any work on your AF Database, shut down any OMF North instance that is sending data to it.\nAfter deleting AF Elements and AF Templates, you must check in your changes.\nRestart PI Web API and then your OMF North instance.\n\nFinding AF Elements that derive from 
FledgeAsset\n#################################################\n\nThe PI System Explorer allows you to search for all AF Elements that derive from the *FledgeAsset* AF Template:\n\n- In the *Elements* tab, locate *Element Searches* in the upper-left pane.\n- Right-click *Element Searches* and choose *New Search*.\n- In the *Element Search* dialog, click the *Template* drop-down and select *FledgeAsset*.\n- Click *OK* to invoke the search.\n- Select all items in the list of matching AF Elements.\n- Right-click and choose *Delete…*\n- In the resulting dialog box, choose \"*Delete these objects and all references to them. Check in is required to complete this action.*\"\n\nChanging the Tag Name OMF Hint\n------------------------------\n\nYou can define a Tag Name OMF Hint two different ways:\n\n    - the Tag Name Hint for a Container overrides the default OMF Container name,\n    - the Tag Name Hint for a Datapoint overrides the default PI Point name.\n\nDetails can be found on the `OMF North plugin <../plugins/fledge-north-OMF/index.html>`_ documentation page.\n\nYou must add the *tagName* hint to every reading sent to the OMF North plugin whose asset or datapoint you wish to rename.\nIf a subsequent reading lacks the *tagName* hint, OMF messages sent by OMF North will have the following effects:\n\nTag Name Hint for a Container\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIf the Container Tag Name OMF Hint is no longer present, OMF North will create new AF Elements named after the reading's asset that will be siblings of the original AF Elements.\nNew PI Points will be created and mapped to new AF Attributes owned by the new AF Elements.\n\n**For example:** assume the first reading contains an asset *Calvin1* with a *tagName* hint of *ABC1* and a datapoint *random*.\nOMF North will create the AF Element *ABC1* with an AF Attribute *random* mapped to a PI Point *ABC1.random*.\nIf a subsequent reading has no *tagName* hint, OMF North will create a new AF Element *Calvin1* with 
an AF Attribute *random* mapped to a new PI Point *Calvin1.random*.\n\nNo error will be reported but time series data will flow into the new PI Point *Calvin1.random* and no longer to *ABC1.random*.\nIf the *tagName* hint is restored in later readings, data will once again flow to *ABC1.random*.\nStoring time series data in two different PI Points makes it almost impossible to use.\n\nTag Name Hint for a Datapoint\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIf the Datapoint Tag Name OMF Hint is no longer present, OMF North will report errors and stop processing until the errors are addressed.\nThe Tag Name OMF Hint will disappear if you are using the OMF Hint Filter and have disabled it.\nWhile the OMF North instance is still running, you will see these errors:\n\n.. code-block:: bash\n\n    ERROR: HTTP 404: Error sending Data: 1 message\n    ERROR: Message 0 HTTP 404: Error, Container not found.,\n    WARNING: HTTP Code 404: Processing cannot continue until data archive errors are corrected\n\nIf you shut down and restart the OMF North instance, you will see these errors:\n\n.. code-block:: bash\n\n    INFO: Containers created: Calvin1.random,Calvin2.random\n    ERROR: HTTP 409: Conflict sending Data: 1 message\n    ERROR: Message 0 HTTP 409: Error, An existing LINK includes an AF Attribute that overlaps with the name of an AF Attribute that would be created for the specified LINK.,\n    WARNING: HTTP Code 409: Processing cannot continue until data archive errors are corrected\n\nThe PI Points in the INFO message (in this example: *Calvin1.random* and *Calvin2.random*) will have been created but will not receive data values.\n\nRepairing the PI System\n#######################\n\nTo repair the PI System, restore the Datapoint Tag Name OMF Hint and then follow this procedure:\n\n- Using PI System Explorer, locate the AF Elements that represent the Containers. 
In the above example, these are *Calvin1* and *Calvin2*\n- Within each AF Element, locate the AF Attribute *random*\n- Delete the AF Attribute\n- Check in the changes\n- Restart PI Web API\n- Restart the OMF North instance\n\nYou can achieve the same result by deleting the Container AF Elements. In this example, these are AF Elements *Calvin1* and *Calvin2*.\nOMF North will recreate the AF Elements and/or AF Attributes.\n\nIf your intention is to stop using the Datapoint Tag Name OMF Hint altogether, the procedure is the same.\nWhen OMF North restarts, it will create (or adopt) PI Points with the default PI Tag names.\nIn this example, the PI Points would be *Calvin1.random* and *Calvin2.random*.\nNote that any data sent previously to the PI Points created with the former *tagName* hint will be abandoned.\n\nOMF Plugin Persisted Data\n=========================\n\nThe OMF North plugin must create type information within the OMF subsystem of the PI Server before any data can be sent. This type information is persisted within the PI Server between sessions and must also be persisted within Fledge for each connection to a PI Server. This is done using the plugin data persistence features of the OMF North plugin.\n\nThis results in an important connection between a north service or task and a PI Server, which adds extra constraints on what may be done at each end. It is very important that this data is kept synchronized between the two ends. In normal circumstances this is not a problem, but there are some actions that can cause problems and require action on both ends.\n\nDelete a north service or task using the OMF plugin\n    If a north service or task using the OMF plugin is deleted, then the persisted data of the plugin is also lost. This is Fledge's record of what types have been created in the PI Server and is no longer synchronized following the deletion of the north service. 
Any new service or task that is created and connected to the same PI Server will receive duplicate type errors from the PI Server. There are two possible solutions to this problem:\n\n        - Remove the type data from the PI Server such that neither end has the type information.\n\n        - Before deleting the north service or task, export the plugin persisted data and import that data into the new service or task.\n\nClean up a PI Server and reuse an existing OMF North service or task\n    This is the opposite of the problem stated above: the plugin will try to send data, thinking that the types have already been created in the PI Server, and will receive an error. Fledge will automatically correct for this and create new types. These new types, however, will be created with new names, which may not be the desired behavior. Type names are created using a fixed algorithm. To re-use the previous names, stopping the north service and deleting the plugin persisted data will reset the algorithm and recreate the types using the names that had been previously used.\n\nTaking an existing Fledge north task or service and moving it to a new PI Server\n    This new PI Server will not have the type information from the old one, and errors will once again occur when sending data due to these missing types. Fledge will automatically correct for this and create new types. These new types, however, will be created with new names, which may not be the desired behavior. Type names are created using a fixed algorithm. 
To re-use the previous names, stopping the north service and deleting the plugin persisted data will reset the algorithm and recreate the types using the names that had been previously used.\n\nManaging Plugin Persisted Data\n------------------------------\n\nThis is not a feature that users would ordinarily need to be concerned with.\nIt is possible to enable *Developer Features* in the Fledge User Interface, which will provide a mechanism to manage this data.\n\nEnable Developer Features\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\nNavigate to the *Settings* page of the GUI and toggle on the *Developer Features* check box on the bottom left of the page.\n\nViewing Persisted Data\n~~~~~~~~~~~~~~~~~~~~~~\n\nIn order to view the persisted data for the plugins of a service, open either the *North* or *South* page on the user interface and select your service or task. A page will open that allows you to update the configuration of the plugin. This contains a set of tabs that may be selected; when *Developer Features* are enabled, one of these tabs will be labeled *Developer*.\n\n+------------+\n| |OMF_tabs| |\n+------------+\n\nThe *Developer* tab will allow the viewing of the persisted data for all of the plugins in that service (filters and either north or south plugins) for which data is persisted.\n\nPersisted data is only written when a plugin is shut down; therefore, to get the most up-to-date view of the data, it is recommended that the service be disabled before viewing the persisted data. 
It is possible to view the persisted data of a running service; however, this will be a snapshot taken the last time the service was shut down.\n\n+-----------------+\n| |OMF_Persisted| |\n+-----------------+\n\nIt is possible for more than one plugin within a pipeline to persist data.\nIn order to select between the plugins that have persisted data, a menu is provided in the top left that lists all those plugins for which data can be viewed.\n\n+--------------------+\n| |PersistedPlugins| |\n+--------------------+\n\nAs well as viewing the persisted data, it is also possible to perform other actions, such as *Delete*, *Export* and *Import*. These actions are available via a menu that appears in the top right of the screen.\n\n+--------------------+\n| |PersistedActions| |\n+--------------------+\n\n.. note::\n\n    The service must be disabled before use of the Delete or Import features and to get the latest values when performing an Export.\n\nUnderstanding The OMF Persisted Data\n------------------------------------\n\nThe persisted data takes the form of a JSON document.\nThe format of the persisted data differs between *Linked Type* and *Complex Type* configurations.\n\nLinked Type Persisted Data\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nPersisted data for Linked Type configurations does not change.\nIt is always:\n\n.. code-block:: json\n\n    {\n        \"type-id\":1\n    }\n\nComplex Type Persisted Data\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe following is an example of an OMF North instance configured for Complex Types with just the Sinusoid plugin:\n\n.. 
code-block:: json\n\n    {\n      \"sentDataTypes\": [\n\t{\n\t  \"sinusoid\": {\n\t    \"type-id\": 1,\n\t    \"dataTypesShort\": \"0x101\",\n\t    \"hintChecksum\": \"0x0\",\n\t    \"namingScheme\": 0,\n\t    \"afhHash\": \"15489826335467873671\",\n\t    \"afHierarchy\": \"fledge/data_piwebapi/mark\",\n\t    \"afHierarchyOrig\": \"fledge/data_piwebapi/mark\",\n\t    \"dataTypes\": {\n\t      \"sinusoid\": {\n\t\t\"type\": \"number\",\n\t\t\"format\": \"float64\"\n\t      }\n\t    }\n\t  }\n\t}\n      ]\n    }\n\nThe *sentDataTypes* value is a JSON array of objects, with each object representing one data type that has been sent to the PI Server. The key/value pairs within each object are as follows:\n\n+-----------------+-------------------------------------------------------------------------------------------+\n| Key             | Description                                                                               |\n+=================+===========================================================================================+\n| type-id         | An index of the different types sent for this asset. Each time a new type is sent to the  |\n|                 | PI Server for this asset this index will be incremented.                                  |\n+-----------------+-------------------------------------------------------------------------------------------+\n| dataTypesShort  | A summary of the types in the datatypes of the asset. The value is an encoded number that |\n|                 | contains the count of each of base types, integer, float and string, in the datapoints of |\n|                 | this asset.                                                                               |\n+-----------------+-------------------------------------------------------------------------------------------+\n| hintChecksum    | A checksum of the OMFHints used to create this type. 0 if no OMF Hint was used.           
|\n+-----------------+-------------------------------------------------------------------------------------------+\n| namingScheme    | The current OMF naming scheme when the type was sent.                                     |\n+-----------------+-------------------------------------------------------------------------------------------+\n| afhHash         | A Hash of the AF settings for the type.                                                   |\n+-----------------+-------------------------------------------------------------------------------------------+\n| afHierarchy     | The AF Hierarchy location.                                                                |\n+-----------------+-------------------------------------------------------------------------------------------+\n| afHierarchyOrig | The original setting of AF Hierarchy. This may differ from the above if specific AF rules |\n|                 | are in place.                                                                             |\n+-----------------+-------------------------------------------------------------------------------------------+\n| dataTypes       | The data type sent to the PI Server. This is the actual OMF type definition and is the    |\n|                 | exact type definition sent to the PI Web API endpoint.                                    
|\n+-----------------+-------------------------------------------------------------------------------------------+\n\nPossible Solutions to Common Problems\n=====================================\n\nThe solutions in this section apply to *Complex Type* configurations only.\n\nRecreate PI Server objects and resend data to the same AF Hierarchy\n-------------------------------------------------------------------\n\nRecreate a single PI Server object or a set of PI Server objects and resend all of their data to the PI Server at the same Asset Framework hierarchy level.\n\nProcedure:\n    - Disable the first OMF North instance\n    - Delete the AF Elements in the AF Database that are to be recreated or were partially sent\n    - Create a new **DISABLED** OMF North instance using a new, unique name and having the same AF hierarchy as the first OMF North instance\n    - Install *fledge-filter-asset* on the new OMF North instance\n    - Configure *fledge-filter-asset* with a rule like this:\n\n    .. code-block:: json\n\n\t{\n\t   \"rules\":[\n\t      {\n\t         \"asset_name\":\"asset_4\",\n\t         \"action\":\"include\"\n\t      }\n\t   ],\n\t   \"defaultAction\":\"exclude\"\n\t}\n\n    - Enable the second OMF North instance\n    - Let the second OMF North instance send the desired amount of data and then disable it\n    - Enable the first OMF North instance\n\n.. 
note::\n\n    - The second OMF North instance will be used only to recreate the objects and resend the data\n    - The second OMF North instance will resend all the data available for the specified *included* assets\n    - There will be some data duplicated for the recreated assets because part of the information will be managed by both north instances\n\nRecreate PI Server objects and resend data to a Different AF Hierarchy\n----------------------------------------------------------------------\n\nThis is similar to the previous procedure except that the destination AF hierarchy will be different from the original.\n\nProcedure:\n    - Disable the first OMF North instance\n    - Create a new OMF North instance using a new, unique name and having a new AF hierarchy.\n      The location in the AF hierarchy is set on the *Asset Framework* tab, *Default Asset Framework Location* field.\n\n.. note::\n\n    - This solution will create a set of new objects unrelated to the previous ones\n    - All the data stored in Fledge will be sent\n\nResend Data with Data Duplication\n---------------------------------\n\nRecreate all the PI Server objects and resend all the data to the PI Server on the same Asset Framework hierarchy level of the first OMF North instance WITH data duplication.\n\nProcedure:\n    - Disable the first OMF North instance\n    - Delete the AF Elements and AF Element Templates in the AF Database that were partially deleted\n    - Stop and restart PI Web API\n    - Create a new OMF North instance using the same AF hierarchy.\n      The location in the AF hierarchy is set on the *Asset Framework* tab, *Default Asset Framework Location* field.\n\n.. 
note::\n\n    - All the Types will be recreated on the PI Server.\n      If the structure of each asset, that is the number and types of its properties, does not change, the data will be accepted and placed into the PI Server without any error.\n      PI Web API 2019 SP1 (1.13.0.6518) and later will accept the data.\n    - Using PI Web API 2019 SP1 1.13.0.6518, the PI Data Archive creates objects with the compression feature disabled.\n      This will cause any data that was previously loaded and is still present in the PI Data Archive to be duplicated.\n\nResend Data without Data Duplication\n------------------------------------\n\nRecreate all the PI Server objects and resend all the data to the PI Server on the same Asset Framework hierarchy level of the first OMF North instance WITHOUT data duplication.\n\nProcedure:\n    - Disable the first OMF North instance\n    - Delete all the AF Elements and AF Element Templates in the AF Database and PI Points in the PI Data Archive that were sent by the first OMF North instance\n    - Stop and restart PI Web API\n    - Create a new OMF North instance using the same AF hierarchy.\n      The location in the AF hierarchy is set on the *Asset Framework* tab, *Default Asset Framework Location* field.\n\n.. note::\n\n    - All the data stored in Fledge will be sent\n"
  },
  {
    "path": "docs/tuning_fledge.rst",
    "content": ".. Images\n.. |south_advanced| image:: images/south_advanced.jpg\n.. |south_alert| image:: images/south_alert.jpg\n.. |stats_options| image:: images/stats_options.jpg\n.. |north_advanced| image:: images/north_advanced.jpg\n.. |service_monitor| image:: images/service_monitor.jpg\n.. |scheduler_advanced| image:: images/scheduler_advanced.jpg\n.. |storage_config| image:: images/storage_config.png\n.. |sqlite_config| image:: images/sqlite_config.png\n.. |sqlitelb_config| image:: images/sqlitelb_config.png\n.. |postgres_config| image:: images/postgres_config.png\n.. |sqlitememory_config| image:: images/sqlitememory_config.png\n.. |poll_type| image:: images/poll_type.png\n.. |config_cache| image:: images/config_cache.jpg\n.. |core_log_level| image:: images/core_log_level.jpg\n.. |PurgeConfig| image:: images/PurgeConfig.png\n.. |PurgeSystemConfig| image:: images/PurgeSystemConfig.png\n.. |PurgeCycles| image:: images/PurgeCycles.png\n.. |PurgeSchedules| image:: images/PurgeSchedules.png\n.. |TaskLog| image:: images/TaskLog.png\n.. |resource_limit_south_advanced| image:: images/resource_limit_south_advanced.png\n.. |support_bundle_configuration| image:: images/support_bundle_configuration.png\n\n***************\nTuning Fledge\n***************\n\nMany factors will impact the performance of a Fledge system\n\n  - The CPU, memory and storage performance of the underlying hardware\n\n  - The communication channel performance to the sensors\n\n  - The communications to the north systems\n\n  - The choice of storage system\n\n  - The external demands via the public REST API\n\n\nMany of these are outside of the control of Fledge itself, however it is possible to tune the way Fledge will use certain resources to achieve better performance within the constraints of a deployment environment.\n\nSetting Log Level\n=================\n\nFledge writes logs via the *syslog* facility of Linux, this allows for multiple different log levels. 
Altering the log level will impact the performance of the system and can use significant disk space when set to *debug* or *info* levels. Each of the services within a Fledge instance may have its log level set individually.\n\nThe logging level for the Fledge core can be set in the *Logging* configuration category in the *Advanced* parent category that can be accessed from the *Configuration* menu item.\n\n+------------------+\n| |core_log_level| |\n+------------------+\n\nThe logging level can be set to *error*, *warning*, *info* or *debug*; the default setting is *warning*. The level set defines the least severe message that will be logged; logs of higher severity than that set will also be logged. In the default setting both *error* and *warning* level logs will be sent to the syslog facility.\n\nThe storage log level setting can be found in the *Storage* configuration category.\n\n+------------------+\n| |storage_config| |\n+------------------+\n\nThe south and north services also have log level settings for each service that can be accessed via the *Advanced* tab within the configuration of each of the services.\n\nAll other optional services will also have a log level setting accessible via the configuration for each service.\n\n.. note::\n\n   It is recommended to only set the log level of a service to *info* or *debug* when actively tracing a problem with the configuration or operation of a service and to always run with the default, *warning*, log level in production.\n\nConfiguration Manager Cache\n===========================\n\nThe Fledge system has an internal configuration manager that is used to load and distribute configuration categories and to dynamically update the other components of the system. These configuration categories are stored in the Fledge storage layer; a cache is maintained by the configuration manager in order to prevent the need to query the database for each request to read a configuration category. 
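The trade-off the cache makes can be illustrated with a minimal sketch. The class, its methods and the storage-read counter are hypothetical, not Fledge internals; the point is only that a larger cache means fewer trips to the storage layer at the cost of more memory:

```python
from collections import OrderedDict

class CategoryCache:
    """Minimal LRU cache of configuration categories. A larger
    max_categories means fewer storage-layer reads but more memory."""
    def __init__(self, max_categories):
        self.max_categories = max_categories
        self._cache = OrderedDict()
        self.storage_reads = 0   # counts trips to the storage layer

    def _fetch_from_storage(self, name):
        self.storage_reads += 1
        return {"category": name}     # placeholder for a storage query

    def get(self, name):
        if name in self._cache:
            self._cache.move_to_end(name)     # mark most recently used
            return self._cache[name]
        value = self._fetch_from_storage(name)
        self._cache[name] = value
        if len(self._cache) > self.max_categories:
            self._cache.popitem(last=False)   # evict least recently used
        return value

cache = CategoryCache(max_categories=2)
for name in ["South", "North", "South", "Advanced", "South"]:
    cache.get(name)
print(cache.storage_reads)   # 3: two requests were served from the cache
```

With a cache of size 2, only the first request for each category (and requests after an eviction) reach the storage layer.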
The size of this cache can be configured in the *Configuration Manager* configuration page which is located within the *Advanced* configuration parent category.\n\n+----------------+\n| |config_cache| |\n+----------------+\n\nThe cache size is expressed as a number of configuration categories to hold in the cache. Increasing this value will increase the amount of memory required for the core service, but will increase the performance, particularly when starting up with a large number of services. Increasing the cache size will also reduce the load on the storage service.\n\nSouth Service Advanced Configuration\n====================================\n\nThe south services within Fledge each have a set of advanced configuration options defined for them. These are accessed by editing the configuration of the south service itself. A screen with a set of tabbed panes will appear; select the tab labeled *Advanced Configuration* to view and edit the advanced configuration options.\n\n+------------------+\n| |south_advanced| |\n+------------------+\n\n  - *Maximum Reading Latency (mS)* - This is the maximum period of time for which a south service will buffer a reading before sending it onward to the storage layer. The value is expressed in milliseconds and it effectively defines the maximum time you can expect to wait before being able to view the data ingested by this south service.\n\n  - *Maximum buffered Readings* - This is the maximum number of readings the south service will buffer before attempting to send those readings onward to the storage service. This and the setting above work together to define the buffering strategy of the south service.\n\n  - *Throttle* - If enabled this allows the reading rate to be throttled by the south service. 
The service will attempt to poll at the rate defined by *Reading Rate*, however if this is not possible, because the readings are being forwarded out of the south service at a lower rate, the reading rate will be reduced to prevent the buffering in the south service from becoming overrun.\n\n  - *Reading Rate* - The rate at which polling occurs for this south service. This parameter only has effect if your south plugin is polled; asynchronous south services do not use this parameter. The units are defined by the setting of the *Reading Rate Per* item.\n\n  - *Asset Tracker Update* - This controls how frequently the asset tracker flushes the cache of asset tracking information to the storage layer. It is a value expressed in milliseconds. The asset tracker only writes updates, therefore if you have a fixed set of assets flowing in a pipeline the asset tracker will only write any data the first time each asset is seen and will then perform no further writes. If you have variability in your assets or asset structure the asset tracker will be more active and it becomes more useful to tune this parameter.\n\n  - *Reading Rate Per* - This defines the units to be used in the *Reading Rate* value. It allows the selection of per *second*, *minute* or *hour*.\n\n  - *Poll Type* - This defines the mechanism used to control the poll requests that will be sent to the plugin. Three options are currently available: interval polling, fixed time polling and polling on demand.\n\n    +-------------+\n    | |poll_type| |\n    +-------------+\n\n    - *Interval* polling will issue a poll request at a fixed rate, that rate being determined by the *Reading Rate* and *Reading Rate Per* settings described above. The first poll request will be issued after startup of the plugin and will not be synchronized to any time or other events within the system.\n\n    - *Fixed Times* polling will issue poll requests at fixed times that are defined by a set of hours, minutes and seconds. 
These times are defined in the local time zone of the machine that is running the Fledge instance.\n\n    - *On Demand* polling will not perform any regular polling; instead it will wait for a control operation to be sent to the service. That operation is named *poll* and takes no arguments. This allows a poll to be triggered by the control mechanisms from notifications, schedules, north services or API requests.\n\n  - *Hours* - This defines the hours when a poll request will be made. The hours are expressed using the 24 hour clock, with poll requests being made only when the current hour matches one of the hours in the comma separated list of hours. If the *Hours* field is left blank then polls will be issued during every hour of the day.\n\n  - *Minutes* - This defines the minutes in the day when poll requests are made. Poll requests are only made when the current minute matches one of the minutes in the comma separated list of minutes. If the *Minutes* field is left blank then poll requests will be made in any minute within the hour.\n\n  - *Seconds* - This defines the seconds when poll requests will be made. Seconds is a comma separated list of seconds; poll requests are made when the current second matches one of the seconds in the list. If *Fixed Times* polling is selected then the *Seconds* field must not be empty.\n\n  - *Minimum Log Level* - This configuration option can be used to set the logs that will be seen for this service. It defines the level of logging that is sent to the syslog and may be set to *error*, *warning*, *info* or *debug*. Logs of the level selected and higher will be sent to the syslog. You may access the contents of these logs by selecting the log icon in the bottom left of this screen.\n\n  - *Statistics Collection* - This configuration option can be used to control how detailed the statistics collected by the south service are. 
There are three options that may be selected\n\n    +-----------------+\n    | |stats_options| |\n    +-----------------+\n\n    The *per asset & per service* setting will collect one statistic per asset ingested and an overall statistic for the entire service. The *per service* option just collects the overall service ingest statistics and the *per asset* option just collects the statistics for each asset and not for the entire service. The default is to collect statistics on a per asset & service basis; this is not the best setting if large numbers of distinct assets are ingested by a single south service. Use of the per asset or the per asset and service options should be limited to south services that collect a relatively small number of distinct assets. Collecting a large number of statistics, for 1000 or more distinct assets, will have a significant performance overhead and may overwhelm less well provisioned Fledge instances. When a large number of assets are ingested by a single south service this value should be set to *per service*.\n\n    .. note::\n\n       The *Statistics Collection* setting will not remove any existing statistics; these will remain and continue to be represented in the statistics history. This only impacts new values that are collected. It is recommended that this be set before a service is started for the first time if the desire is to have no statistics values recorded for either assets or the service.\n\n    .. note::\n\n       If the *per service* option is used then the UI page that displays the south services will not show the asset names and counts for each of the assets that are ingested by that service.\n\n  - *Performance Counters* - This option allows for the collection of performance counters that can be used to help tune the south service.\n\n  - *Monitoring Period* - This defines a period in minutes over which the service collects ingest counts to determine the flow rate of the service. 
This is averaged over a number of samples to build the average rate and standard deviation from that rate in order to detect anomalous changes in the rate. The user is warned when the rate does not appear consistent with the learnt average and standard deviation. Setting this value to 0 will disable the ingest rate monitoring.\n\n  - *Monitoring Sensitivity* - This defines the sensitivity of the rate monitoring reports. It is expressed as a factor and is used to determine how many standard deviations from the mean ingest rate is considered as an anomalous ingest rate. The higher this number the less sensitive the monitoring process is.\n\nPerformance Counters\n--------------------\n\nA number of performance counters can be collected in the south service to help characterise the performance of the service. This is intended to provide input into the tuning of the service and the collection of these counters should not be left on during production use of the service.\n\nPerformance counters are collected in the service and a report is written once per minute to the storage layer for later retrieval. The values written are:\n\n  - The minimum value of the counter observed within the current minute\n\n  - The maximum value of the counter observed within the current minute\n\n  - The average value of the counter observed within the current minute\n\n  - The number of samples of the counter collected within the current minute\n\nIn the current release the performance counters can be retrieved either by direct access to the configuration and statistics database, where they are stored in the *monitors* table, or via the REST API. 
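As an illustration, a per-minute counter report in the shape described above could be processed as follows. The payload here is a hypothetical example; the exact field names returned by the real REST API may differ:

```python
import json

# Hypothetical example of one minute's report for one counter, following
# the minimum/maximum/average/samples structure described above.
payload = json.loads("""
{
  "service": "sine",
  "monitors": [
    {"counter": "queueLength", "minimum": 10, "maximum": 250,
     "average": 87, "samples": 60}
  ]
}
""")

for mon in payload["monitors"]:
    print("{counter}: min={minimum} max={maximum} "
          "avg={average} over {samples} samples".format(**mon))
```

A steadily rising `maximum` relative to `average` across successive minutes would, for example, point at a queue that drains more slowly than it fills.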
Future releases will include tools for the retrieval and analysis of these performance counters.\n\nTo access the performance counters via the REST API use the entry point /fledge/monitors to retrieve all counters, or /fledge/monitors/{service name} to retrieve counters for a single service.\n\nWhen collection is enabled the following counters will be collected for each south service on which it is enabled.\n\n.. list-table::\n    :widths: 15 30 55\n    :header-rows: 1\n\n    * - Counter\n      - Description\n      - Causes & Remedial Actions\n    * - queueLength\n      - The total number of readings that have been queued within the south service for sending to the storage service.\n      - Large queues in the south service will mean that the service will have a larger than normal footprint but may not be an issue in itself. However if the queue size grows continuously then there will eventually be a memory allocation failure in the south service. Turning on throttling of the ingest rate will reduce the data that is added to the queue and may be enough to resolve the problem, however data will be collected at a reduced rate. A faster storage plugin, perhaps using an in-memory storage engine, may be another solution. If your instance has many south services it may be worth considering splitting the south services between multiple instances.\n    * - ingestCount\n      - The number of readings ingested in each plugin interaction.\n      - The counter reflects the number of readings that are returned for each call to the south plugin poll entry point or by the south plugin ingest asynchronous call. Typically this number should be moderately low; if very large numbers are returned in a single call it will result in very large queues building up within the south service and the performance of the system will be degraded with large bursts of data that may overwhelm other layers interspersed with periods of inactivity. 
Ideally the peaks should be eliminated and the rate kept 'flat' in order to make the best use of the system. Consider altering the configuration of the south plugin such that it returns less data but more frequently.\n    * - readLatency\n      - The longest time a reading has spent in the queue between being returned by the south plugin and sent to the storage layer.\n      - This counter describes how long, in milliseconds, the oldest reading waited in the internal south service queue before being sent to the storage layer. This should be less than or equal to the defined maximum latency; it may be a little over to allow for queue management times, but should not be significantly higher. If it is significantly higher for long periods of time it would indicate that the storage service is unable to handle the load that is being placed upon it. Tuning the storage layer, by changing to a higher performance plugin or one that is better suited to your workload, may resolve the problem. Alternatively consider reducing the load by splitting the south services across multiple Fledge instances.\n    * - flow controlled\n      - The number of times the reading rate has been reduced due to excessive queues building up in the south service.\n      - This is closely related to the queueLength counter and has much the same set of actions that should be taken if the service is frequently flow controlled. Reducing the ingest rate, or adding filtering in the pipeline to reduce the amount of data passed onward to the storage service, may alleviate the problem. 
In general if processing can be done that reduces high bandwidth data into lower bandwidth data that can still characterise the high bandwidth content, then this should be done as close as possible to the source of the data to reduce the overall load on the system.\n    * - throttled rate\n      - The rate that data is being ingested at as a result of flow control throttling.\n      - This counter is more for information as to what might make a reasonable ingest rate the system can sustain with the current configuration. It is useful as it gives a good idea of how far the current configuration of the system is from your desired performance.\n    * - storedReadings\n      - The number of readings successfully sent to the storage layer.\n      - This counter gives an indication of the bandwidth available from the service to the storage engine. This should be at least as high as the ingest rate if data is not to accumulate in buffers within the south service. Altering the maximum latency and maximum buffered readings advanced settings in the south service can impact this throughput.\n    * - resendQueued\n      - The number of readings queued for resend. Note that readings may be queued for resend multiple times if the resend also failed.\n      - This is a good indication of overload conditions within the storage engine. Consistent high values of this counter point to the need to improve the performance of the storage layer.\n    * - removedReadings\n      - A count of the readings that have been removed after too many attempts to save them in the storage layer.\n      - This should normally be zero or close to zero. 
Any significant values here are a pointer to a critical error with either the south plugin data that is being created or the operation of the storage layer.\n\nIngest Rate Monitoring\n----------------------\n\nThe ingest rate monitoring in the south service is designed to warn the user when the observed ingest rate of the service falls outside of the expected range observed previously for the service. The mechanism does not rely on an option provided by the user defining an expected rate, but rather uses observed data to determine an expected range of rates that can be considered normal. The user has options to configure the period over which the rate is observed for reporting purposes and also the sensitivity of the monitoring. This has the advantage over simply defining an upper and lower acceptable ingest rate that it does not need to be adjusted each time the poll rate is adjusted and it can be used with asynchronous data sources where the rate may be unknown, provided those sources are relatively consistent in the rate at which they supply data.\n\nThe monitoring period may be adjusted to suit the consistency of the incoming data rate and to tune the frequency with which reports are made. A report can be made at most once every two monitoring periods, therefore setting a long monitoring period will reduce the responsiveness of the alerts to failures. However too short a monitoring period, with rates that fluctuate, can result in false positives because the average rate over the given period is not stable enough to provide consistent results.\n\nIn cases where the data rate is so inconsistent that the monitoring is giving too many false alerts it may be disabled by setting a monitoring period of 0.\n\nThe algorithm uses the well known outlier detection mechanism which states that the distribution of data usually falls within a bell curve, with the likelihood of data being higher closer to the average of the data set. 
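A minimal sketch of this kind of outlier check is shown below. This is illustrative Python, not the Fledge implementation; in the real service the check is additionally gated on two consecutive out-of-range periods, as described later:

```python
import statistics

def anomalous(rates, new_rate, sensitivity):
    """Return True if new_rate lies more than `sensitivity` standard
    deviations away from the mean of previously observed rates."""
    mean = statistics.mean(rates)
    stdev = statistics.pstdev(rates)
    return abs(new_rate - mean) > sensitivity * stdev

# Ten initial monitoring periods with a fairly steady ingest rate.
history = [100, 102, 98, 101, 99, 100, 103, 97, 100, 100]

print(anomalous(history, 101, sensitivity=3))  # False: a normal period
print(anomalous(history, 60, sensitivity=3))   # True: a clear anomaly
```

A higher sensitivity factor widens the accepted band, so fewer periods are flagged.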
It uses standard deviation and mean calculation to determine this and the sensitivity setting defines the number of standard deviations either side of the computed mean that are considered to be good ingest rates.\n\nThe monitoring process will collect a number of samples to create an initial mean and standard deviation before it will start to actively monitor the flow rate. Should the collection rate configuration of the service be altered, the algorithm will discard the learnt mean and standard deviation and restart the collection of the initial sample. The initial sample size is set to be 10 monitoring periods.\n\nOnce the monitoring algorithm has completed the initial sample collection and switched to active monitoring, it will continue to refine the current mean value and standard deviation. This allows the monitoring to adjust to small, natural variations in collection rates over time.\n\nWhen two consecutive monitoring periods are detected in which the number of readings sent falls outside the range defined by the current mean, standard deviation and sensitivity factor, an alert will be displayed in the Fledge status bar and a warning will be written to the error log.\n\n+---------------+\n| |south_alert| |\n+---------------+\n\nThe algorithm requires two consecutive out of range ingest rates to prevent the alert triggering for an isolated peak or trough in data collection caused by a one off action occurring on the host platform, or within Fledge. If in a subsequent monitoring period the flow rate returns to acceptable limits, the alert in the status bar will be cleared.\n\n.. note::\n\n   This ingest rate monitoring is designed to be applicable in as many situations as possible. There are however some cases in which this monitoring will create false reports of issues. These false reports may be reduced or eliminated by using the tuning options, but this may not be true in all cases. 
In particular an asynchronous south plugin that reports data at unpredictable time intervals will most likely not be suitable for this type of monitoring and the monitoring should be disabled by setting a value of 0 for the monitoring interval.\n\nFixed Time Polling\n------------------\n\nThe fixed time polling can be used in a number of ways to control when poll requests occur; amongst the possible scenarios are:\n\n - Poll at fixed times within a minute or hour.\n\n - Poll only for certain periods of the day.\n\nTo poll at fixed, regular times, simply set the times when a poll is required. For example, to poll every 15 seconds, at 0, 15, 30 and 45 seconds past each minute, simply set the *Seconds* field to have the value 0, 15, 30, 45 and leave the minutes and hours blank.\n\nIf you wished to poll at the hour and every 15 minutes thereafter set the *Minutes* field to 0, 15, 30 and 45 and set the *Seconds* field to 0. Setting *Seconds* to another single value, for example 30, would simply move the poll time to be 0 minutes and 30 seconds, 15 minutes and 30 seconds etc. If multiple values of seconds are given then multiple polls would occur. For example, if *Minutes* is set to 0, 15, 30, 45 and *Seconds* is set to 0, 30, a poll would occur at 0 minutes and 0 seconds, 0 minutes and 30 seconds, 15 minutes and 0 seconds, 15 minutes and 30 seconds.\n\nThe *Hours* field, if not left empty, would work in the same way as the minutes above.\n\nAnother use of the feature is to only poll at certain times of the day. As an example, if we wished to poll every 15 minutes between the hours of 8am and 5pm then we can set the *Hours* field to be 8,9,10,11,12,13,14,15,16 and the *Minutes* field to be 0, 15, 30, 45. The seconds field can be left as 0.\n\n.. 
note::\n\n   The last poll of the day would be at 16:45 in the above configuration.\n\nAlthough the intervals between poll times shown in the above examples have all been equal, there is no requirement for this to be the case.\n\nTuning Buffer Usage\n-------------------\n\nThe tuning of the south service allows the way the buffering is used within the south service to be controlled. Setting the latency value low results in frequent calls to send data to the storage service and therefore means data is more quickly available. However sending small quantities of data in each call to the storage system does not result in the most optimal use of the communications or of the storage engine itself. Setting a higher latency value results in more data being sent per transaction with the storage system and a more efficient system. The cost of this is the requirement for more in-memory storage within the south service.\n\nSetting the *Maximum buffered Readings* value allows the user to place a cap on the amount of memory used to buffer within the south service, since when this value is reached, regardless of the age of the data and the setting of the latency parameter, the data will be sent to the storage service. Setting this to a smaller value allows tighter control on the memory footprint at the cost of less efficient use of the communication and storage service.\n\nTuning between performance, latency and memory usage is always a balancing act; there are situations where the performance requirements mean that a high latency will need to be incurred in order to make the most efficient use of the communications between the micro services and the transactional performance of the storage engine. Likewise the memory resources available for buffering may restrict the performance obtainable.\n\nReading Latency\n---------------\n\nClosely related to buffer usage is reading latency in the south service. 
This is a measure of the delay between the south service receiving a new reading and that reading appearing in the storage subsystem. We deliberately delay the forwarding of readings from the south service to storage in order to create blocks of multiple readings to send per call to the storage layer. This increases the overall throughput of the south to storage interface at the cost of increasing the latency. There are two settings that come into play when defining this, the maximum latency we will accept and the maximum number of readings we will buffer.\n\n.. note::\n\n   The maximum reading latency may be set to any value between 0 and 600000 milliseconds. A value of zero will disable the buffering. See below for a discussion of the impact of large values of maximum reading latency.\n\nIn situations where readings are arriving in the south service relatively frequently these can be set to values that allow data to build up into reasonably sized blocks of readings to send and hence be more efficient in sending the data to the storage layer. However if data does not arrive frequently or is not predictable in the way it arrives then these settings may cause unexpected latency and delays within the system.\n\nThe buffering subsystem within the south service will buffer readings in the south as they arrive. It checks the time difference between the oldest buffered reading and the current time to see if the maximum latency setting is about to be exceeded. If it is, it will send the buffered data. If the latency check does not result in the data queue being sent to the storage subsystem, the south service will check the number of readings buffered. If the count of buffered readings is about to exceed the maximum allowed number of buffered readings, the south service will then send all the buffered readings to the storage service. 
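The two checks described above, latency first and then buffer size, can be sketched as follows. The function and its parameters are illustrative names, not Fledge internals:

```python
def should_flush(oldest_reading_age_ms, buffered_count,
                 max_latency_ms, max_buffered):
    """Decide whether the south service should send its buffered
    readings: the latency check first, then the buffer-size check."""
    if oldest_reading_age_ms >= max_latency_ms:
        return True      # the oldest buffered reading is too old
    if buffered_count >= max_buffered:
        return True      # the buffer has reached its size limit
    return False

# With a 5000 ms maximum latency and a 100 reading buffer:
print(should_flush(5200, 10, 5000, 100))   # True: latency exceeded
print(should_flush(1000, 10, 5000, 100))   # False: keep buffering
```

Note that, as the text explains, these checks only run when a reading arrives, which is why sporadic data sources can exceed the configured maximum latency.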
No further checks are done until the next reading arrives.\n\nTherefore, if readings do not arrive very frequently, or the south plugin is asynchronous and data arrives sporadically, then it may not check the buffer status for more than the maximum configured latency period. The requirement for more data to arrive before more checks are made may result in that maximum latency being exceeded. When this occurs a warning message will be logged in the system logs.\n\nIn these circumstances, it is recommended to disable or severely limit the buffering in the south service. This will result in less efficient interactions with the storage system, but these will be infrequent due to the infrequent nature of data arrival.\n\n.. note::\n\n   Data arrives at the buffering subsystem **after** it has passed through the processing pipeline in the south service. Therefore if the pipeline does data compression, for example using the delta filter, this may reduce the arrival rate of data at the buffering subsystem and convert high bandwidth data from the plugin to low bandwidth data to send to the storage subsystem.\n\nThe system imposes an upper limit of 600000 milliseconds (10 minutes) on the maximum send latency to prevent it being set so high that it appears that the south service is no longer functioning. This is really only an issue in situations where the south service does not receive high rates of data and the send latency is set very high. In these cases the data may reside in the south service for a long period, during which it is not accessible to other services within the system. There is also a risk, in these circumstances, that a long period of data might be lost if a failure caused the south service to terminate before sending the data to the storage service.\n\n\nResource Limit of South Services\n================================\n\nThe south service within Fledge will buffer readings before forwarding them on to the internal storage service. 
The default behavior is to buffer all readings until they reach a certain age or a certain number. When this occurs, they will be sent to the storage layer. The thresholds that cause data to be sent to the storage layer can be configured and are discussed in **South Service Advanced Configuration**.\n\nIf there is a problem sending readings to the storage layer, the south service will continue to buffer the data until that fault is cleared. This behavior will cause the south service to consume increasing amounts of system memory until the fault clears. This can be undesirable if that consumption is not checked. The resource limit configuration for the south service allows the administrator to control the buffering that will occur in this case.\n\n+---------------------------------+\n| |resource_limit_south_advanced| |\n+---------------------------------+\n\nThe following parameters are available for configuration:\n\n  - **South Service Buffering** : Defines whether the buffering for South Services is unlimited or capped. If set to `\"Limited\"`, additional configuration options become applicable.  \n\n  - **South Service Buffer Size** : Specifies the maximum number of readings that can be buffered in the South Service. This setting is only valid when the *South Service Buffering* option is set to `\"Limited\"`.  \n\n  - **Discard Policy** : Determines the policy for discarding readings when the buffer limit is reached. This setting is only valid when the *South Service Buffering* option is set to `\"Limited\"`.  \n\n     - **Discard Oldest**: Removes the oldest readings to keep the buffer size within the limit.  \n\n     - **Reduce Fidelity**: Reduces the fidelity of buffered readings by discarding every second reading, starting from the oldest. This policy tracks the next reading to discard to avoid repeated reduction of fidelity for the same data.  \n     \n     - **Discard Newest**: Discards the newest readings to maintain the buffer size.  
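The three discard policies can be illustrated with a small Python sketch. This is an invented illustration only (the function `apply_discard_policy` is not part of Fledge); in particular the real *Reduce Fidelity* implementation tracks the next reading to discard across invocations, which this sketch omits:

```python
from collections import deque

def apply_discard_policy(readings, limit, policy):
    """Trim a buffer of readings (ordered oldest first) to `limit` entries."""
    buf = deque(readings)
    if policy == "Discard Oldest":
        while len(buf) > limit:
            buf.popleft()                 # remove the oldest reading
    elif policy == "Discard Newest":
        while len(buf) > limit:
            buf.pop()                     # remove the newest reading
    elif policy == "Reduce Fidelity":
        while len(buf) > limit:
            # drop every second reading, starting from the oldest
            buf = deque(r for i, r in enumerate(buf) if i % 2 == 1)
    return list(buf)
```

For example, trimming the buffer ``[1, 2, 3, 4, 5]`` to three entries keeps ``[3, 4, 5]`` under *Discard Oldest* but ``[1, 2, 3]`` under *Discard Newest*, while *Reduce Fidelity* halves the buffer repeatedly until it fits.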
\n\nAccess Control\n--------------\nOnly users with administrative privileges can modify the **Resource Limit** configuration items.\n\nBuffering Behavior and Discard Policies\n---------------------------------------\nWhen the **South Service Buffering** option is set to `\"Limited\"`, the following behaviors apply based on the configured **Discard Policy**:\n\n1. **Discard Oldest**:  \n   The oldest readings in the buffer are removed until the buffer size is within the configured limit.  \n\n2. **Reduce Fidelity**:  \n   Every second reading is discarded, starting from the oldest, to reduce the number of buffered readings. The discard mechanism tracks the last removed reading to ensure fidelity reduction is evenly distributed and does not repeatedly affect the same data. If the reading associated with the tracked timestamp is no longer in the queue, the discard mechanism adjusts to the current state of the queue.  \n\n3. **Discard Newest**:  \n   The newest readings are discarded as they arrive, ensuring the buffer size remains within the configured limit.  \n\n\nNorth Advanced Configuration\n============================\n\nIn a similar way to the south services, north services and tasks also have advanced configuration that can be used to tune the operation of the north side of Fledge. The north advanced configuration is accessed in much the same way as the south: select the North page and open the particular north service or task. A tabbed screen will be shown which contains an *Advanced Configuration* tab.\n\n+------------------+\n| |north_advanced| |\n+------------------+\n\n  - *Minimum Log Level* - This configuration option can be used to set the logs that will be seen for this service or task. It defines the level of logging that is sent to the syslog and may be set to *error*, *warning*, *info* or *debug*. Logs of the level selected and higher will be sent to the syslog. 
You may access the contents of these logs by selecting the log icon in the bottom left of this screen.\n\n  - *Data block size* - This defines the number of readings that will be sent to the north plugin for each call to the *plugin_send* entry point. This allows the performance of the north data pipeline to be adjusted, with larger block sizes increasing the performance, by reducing overhead, but at the cost of requiring more memory in the north service or task to buffer the data as it flows through the pipeline. Setting this value too high may cause issues for certain north plugins that have limitations on the number of messages they can handle within a single block.\n\n  - *Stream update frequency* - This controls how frequently the north service updates the current position it has reached in the stream of data it is sending north. The value is expressed as a number of data blocks between updates. Increasing this value will write the position to the storage less frequently, increasing the performance. However in the event of a failure data in the stream may be repeated for this number of blocks.\n\n  - *Data block prefetch* - The north service has a read-ahead buffering scheme to allow a thread to prefetch buffers of readings data ready to be consumed by the thread sending to the plugin. This value allows the number of blocks that will be prefetched to be tuned. If the sending thread is starved of data, and data is available to be sent, increasing this value can increase the overall throughput of the north service. Caution should however be exercised as increasing this value will also increase the amount of memory consumed.\n\n  - *Asset Tracker Update* - This controls how frequently the asset tracker flushes the cache of asset tracking information to the storage layer. It is a value expressed in milliseconds. 
The asset tracker only writes updates, therefore if you have a fixed set of assets flowing in a pipeline the asset tracker will only write any data the first time each asset is seen and will then perform no further writes. If you have variability in your assets or asset structure the asset tracker will be more active and it becomes more useful to tune this parameter.\n\n  - *Performance Counters* - This option allows for collection of performance counters that can be used to help tune the north service.\n\nPerformance Counters\n--------------------\n\nA number of performance counters can be collected in the north service to help characterise the performance of the service. This is intended to provide input into the tuning of the service and the collection of these counters should not be left on during production use of the service.\n\nPerformance counters are collected in the service and a report is written once per minute to the storage layer for later retrieval. The values written are:\n\n  - The minimum value of the counter observed within the current minute\n\n  - The maximum value of the counter observed within the current minute\n\n  - The average value of the counter observed within the current minute\n\n  - The number of samples of the counter collected within the current minute\n\nIn the current release the performance counters can only be retrieved by direct access to the configuration and statistics database; they are stored in the *monitors* table. Future releases will include tools for the retrieval and analysis of these performance counters.\n\nTo access the performance counters via the REST API use the entry point */fledge/monitors* to retrieve all counters, or */fledge/monitors/{service name}* to retrieve counters for a single service.\n\n.. 
code-block:: bash\n\n    $ curl -s http://localhost:8081/fledge/monitors | jq\n    {\n      \"monitors\": [\n        {\n          \"monitor\": \"storedReadings\",\n          \"values\": [\n            {\n              \"average\": 102,\n              \"maximum\": 102,\n              \"minimum\": 102,\n              \"samples\": 20,\n              \"timestamp\": \"2024-02-19 16:33:46.690\",\n              \"service\": \"si\"\n            },\n            {\n              \"average\": 102,\n              \"maximum\": 102,\n              \"minimum\": 102,\n              \"samples\": 20,\n              \"timestamp\": \"2024-02-19 16:34:46.713\",\n              \"service\": \"si\"\n            },\n            {\n              \"average\": 102,\n              \"maximum\": 102,\n              \"minimum\": 102,\n              \"samples\": 20,\n              \"timestamp\": \"2024-02-19 16:35:46.736\",\n              \"service\": \"si\"\n            }\n          ]\n        },\n        {\n          \"monitor\": \"readLatency\",\n          \"values\": [\n            {\n              \"average\": 2055,\n              \"maximum\": 2064,\n              \"minimum\": 2055,\n              \"samples\": 20,\n              \"timestamp\": \"2024-02-19 16:33:46.698\",\n              \"service\": \"si\"\n            },\n            {\n              \"average\": 2056,\n              \"maximum\": 2068,\n              \"minimum\": 2053,\n              \"samples\": 20,\n              \"timestamp\": \"2024-02-19 16:34:46.719\",\n              \"service\": \"si\"\n            },\n            {\n              \"average\": 2058,\n              \"maximum\": 2079,\n              \"minimum\": 2056,\n              \"samples\": 20,\n              \"timestamp\": \"2024-02-19 16:35:46.743\",\n              \"service\": \"si\"\n            }\n          ]\n        },\n        {\n          \"monitor\": \"ingestCount\",\n          \"values\": [\n            {\n              \"average\": 34,\n              
\"maximum\": 34,\n              \"minimum\": 34,\n              \"samples\": 60,\n              \"timestamp\": \"2024-02-19 16:33:46.702\",\n              \"service\": \"si\"\n            },\n            {\n              \"average\": 34,\n              \"maximum\": 34,\n              \"minimum\": 34,\n              \"samples\": 60,\n              \"timestamp\": \"2024-02-19 16:34:46.724\",\n              \"service\": \"si\"\n            },\n            {\n              \"average\": 34,\n              \"maximum\": 34,\n              \"minimum\": 34,\n              \"samples\": 60,\n              \"timestamp\": \"2024-02-19 16:35:46.748\",\n              \"service\": \"si\"\n            }\n          ]\n        },\n        {\n          \"monitor\": \"queueLength\",\n          \"values\": [\n            {\n              \"average\": 55,\n              \"maximum\": 100,\n              \"minimum\": 34,\n              \"samples\": 60,\n              \"timestamp\": \"2024-02-19 16:33:46.706\",\n              \"service\": \"si\"\n            },\n            {\n              \"average\": 55,\n              \"maximum\": 100,\n              \"minimum\": 34,\n              \"samples\": 60,\n              \"timestamp\": \"2024-02-19 16:34:46.729\",\n              \"service\": \"si\"\n            },\n            {\n              \"average\": 55,\n              \"maximum\": 100,\n              \"minimum\": 34,\n              \"samples\": 60,\n              \"timestamp\": \"2024-02-19 16:35:46.753\",\n              \"service\": \"si\"\n            }\n          ]\n        }\n      ]\n    }\n\nWhen collection is enabled the following counters will be collected for the south service that is enabled.\n\n.. 
list-table::\n    :widths: 15 30 55\n    :header-rows: 1\n\n    * - Counter\n      - Description\n      - Causes & Remedial Actions\n    * - No of waits for data\n      - This counter reports how many times the north service requested data from storage and no data was available.\n      - If this value is consistently low or zero it indicates the other services are providing data faster than the north service is able to send that data. Improving the throughput of the north service would be advisable to prevent the accumulation of unsent data in the storage service.\n    * - Block utilisation %\n      - Data is read by the north service in blocks, the size of these blocks is defined in the advanced configuration of the north service. This counter reflects what percentage of the requested blocks are actually populated with data on each call to the storage service.\n      - A constantly high utilisation is an indication that more data is available than can be sent, increasing the block size may improve this situation and allow for higher throughput.\n    * - Reading sets buffered\n      - This is a counter of the number of blocks that are waiting to be sent in the north service.\n      - If this figure is more than a couple of blocks it is an indication that the north plugin is failing to send complete blocks of data and that partial blocks are failing. Reducing the block size may improve the situation and reduce the amount of storage required in the north service.\n    * - Total readings buffered\n      - This is a count of the total number of readings buffered within the north service.\n      - This should be equivalent to 2 or 3 blocks' worth of readings. 
If it is high then it is an indication that the north plugin is not able to sustain a high enough data rate to match the ingest rates of the system.\n    * - Readings sent\n      - This gives an indication, for each block, of how many readings are sent in the block.\n      - This should typically match the blocks read; if not, it is an indication of failures to send data by the north plugin.\n    * - Percentage readings sent\n      - Closely related to the above, this is the percentage of each block read that was actually sent.\n      - In a well tuned system this figure should be close to 100%, if it is not then it may be that the north plugin is failing to send data, possibly because of an issue in an upstream system. Alternatively the block size may be too high for the upstream system to handle and reducing the block size will bring this value closer to 100%.\n    * - Readings added to buffer\n      - An absolute count of the number of readings read into each block.\n      - If this value is significantly less than the block size it is an indication that the block size can be lowered. If it is always close to the block size then consider increasing the block size.\n    * - No data available to fetch\n      - Signifies how often there was no data available to be sent to the north plugin.\n      - This performance monitor is useful to aid in tuning the number of buffers to prefetch. It is set to one each time the north plugin is ready to consume more data and no data is available. The count of samples will indicate how often this condition was true within the one minute sampling period.\n\nHealth Monitoring\n=================\n\nThe Fledge core monitors the health of other services within Fledge. This is done with the *Service Monitor* within Fledge and can be configured via the *Configuration* menu item in the Fledge user interface. 
In the configuration page select the *Advanced* options and then the *Service Monitor* section.\n\n+-------------------+\n| |service_monitor| |\n+-------------------+\n\n  - *Health Check Interval* - This setting determines how often Fledge will send a health check request to each of the microservices within the Fledge instance. The value is expressed in seconds. Making this value small will decrease the amount of time it will take to detect a failure, but will increase the load on the system for performing health checks. Making this too frequent is likely to increase the occurrence of false failure detection.\n\n  - *Ping Timeout* - Amount of time to wait, in seconds, before declaring that a health check request has failed. Failure for a health check response to be seen within this time will mark a service as unresponsive. Small values can result in busy services becoming suspect erroneously.\n\n  - *Max Attempts To Check Heartbeat* - This is the number of heartbeat requests that must fail before the core determines that the service has failed and attempts any restorative action. Reducing this value will cause the service to be declared as failed sooner and hence recovery can be performed sooner. If this value is too small then it can result in multiple instances of a service running or frequent restarts occurring. Making this too long results in loss of data.\n\n  - *Restart Failed* - Determines what action should be taken when a service is detected as failed. Two options are available: *Manual*, in which case no automatic action will be taken, or *Auto*, in which case the service will be automatically restarted.\n\nScheduler\n=========\n\nThe Fledge core contains a scheduler that is used for running periodic tasks. This scheduler has a couple of tuning parameters. 
To access these parameters from the Fledge User Interface, in the configuration page select the *Advanced* options and then the *Scheduler* section.\n\n+----------------------+\n| |scheduler_advanced| |\n+----------------------+\n\n  - *Max Running Tasks* - Specifies the maximum number of tasks that can be running at any one time. This parameter is designed to stop runaway tasks adversely impacting the performance of the system. When this number is reached no new tasks will be created until one or more of the currently running tasks terminates. Set this too low and you will not be able to run all the tasks you require in parallel. Set it too high and the system is more at risk from runaway tasks.\n\n  - *Max Age of Task* - Specifies, in days, how long a task can run for. Tasks that run longer than this will be killed by the system.\n\n.. note::\n\n    Individual tasks have a setting that they may use to stop multiple instances of the same task running in parallel. This also helps protect the system from runaway tasks.\n\nStartup Ordering\n----------------\n\nThe Fledge scheduler also provides for ordering the startup sequence of the various services within a Fledge instance. This ensures that the support services are started before any south or north services are started, with the south services started before the north services.\n\nThere is no ordering within the south or north services, with all south services being started in a single block and all north services started in a single block.\n\nThe order in which a service is started is controlled by assigning a priority to the service. This priority is a numeric value and services are started based on this value. The lower the value the earlier in the sequence the service is started.\n\nPriorities are stored in the database table, scheduled_processes. There is currently no user interface to modify the priority of scheduled processes, but it may be changed by direct access to the database. 
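The priority-based startup rule can be sketched as below. This is a hypothetical illustration only: the priority values shown are invented, and Fledge reads the real values from the scheduled_processes table:

```python
def startup_order(priorities):
    """Return service names in startup order: lowest priority value first.

    Within a block of equal priorities (e.g. all south services) Fledge
    defines no ordering; Python's sort is stable, so insertion order is
    simply preserved here.
    """
    return [name for name, _ in sorted(priorities.items(), key=lambda kv: kv[1])]

# Invented example values: storage starts before south, south before north
services = {"storage": 10, "south_sine": 100, "north_http": 300}
```

Here ``startup_order(services)`` would start the storage service first, then the south service, then the north service.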
Future versions of Fledge may add an interface to allow for the tuning of process startup priorities.\n\nStorage\n=======\n\nThe storage layer is perhaps one of the areas that most impacts the overall performance of the Fledge instance as it is the end point for the data pipelines; the location at which all ingest pipelines in the south terminate and the point of origin for all north pipelines to external systems.\n\nThe storage system in Fledge serves two purposes\n\n  - The storage of configuration and persistent state of Fledge itself\n\n  - The buffering of reading data as it traverses the Fledge instance\n\nThe physical storage is managed by plugins that are loaded dynamically into the storage service in the same way as with other services in Fledge. In the case of the storage service it may have either one or two plugins loaded. If a single plugin is loaded this will be used for the storage of both configuration and readings; if two plugins are loaded then one will be used for storing the configuration and the other for storing the readings. Not all plugins support both classes of data.\n\nChoosing A Storage Plugin\n-------------------------\n\nFledge comes with a number of storage plugins that may be used, each one has its benefits and limitations; below is an overview of each of the plugins that are currently included with Fledge.\n\nsqlite\n    The default storage plugin that is used. It is implemented using the *SQLite* database and is capable of storing both configuration and reading data. It is optimized to allow parallelism when multiple assets are being ingested into the Fledge instance. It does however have limitations on the number of different assets that can be ingested within an instance. The precise limit is dependent upon a number of other factors, but is of the order of 900 unique asset names per instance. 
This is a good general purpose storage plugin and can manage reasonably high rates of data reading.\n\nsqlitelb\n    This is another *SQLite* based plugin able to store both readings and configuration data. It is designed for lower bandwidth data, hence the name suffix *lb*. It does not have the same parallelism optimization as the default *sqlite* plugin, and is therefore less well suited when high rate data spread across multiple assets is being ingested. However it does perform well when ingesting high rates of a single asset or low rates of a very large number of assets. It does not have any limitations on the number of different assets that can be stored within the Fledge instance.\n\nsqlitememory\n    This is a *SQLite* based plugin that uses in memory tables and can only be used to store reading data; it must be used in conjunction with another plugin that will be used to store the configuration. Reading data is stored in tables in memory and thus very high bandwidth data can be supported. If Fledge is shut down, however, the data stored in these tables will be lost.\n\npostgres\n    This plugin is implemented using the *PostgreSQL* database and supports the storage of both configuration and reading data. It uses the standard Postgres storage engine and benefits from the additional features of Postgres for security and replication. It is capable of high levels of concurrency, however it has slightly lower overall performance than the *sqlite* plugins. Postgres also does not work well with certain types of storage media, such as SD cards, as it has a higher wear rate on the media.\n\nIn most cases the default *sqlite* storage plugin is perfectly acceptable, however if very high data rates, or huge volumes of data (i.e. large images at a reasonably high rate) are ingested this plugin can start to exhibit issues. This usually exhibits itself by large queues building in the south service or in extreme cases by transaction failure messages in the log for the storage service. 
If this happens then the recommended course of action is to either switch to a plugin that stores data in memory rather than on external storage, *sqlitememory*, or investigate the media where the data is stored. Low performance storage will adversely impact the *sqlite* plugin.\n\nThe *sqlite* plugin may also prove less than optimal if you are ingesting many hundreds of different assets in the same Fledge instance. The *sqlite* plugin has been optimized to allow concurrent south services to write to the storage in parallel. This is done by the use of multiple databases to improve the concurrency, however there is a limit, imposed by the number of open databases that can be supported. If this limit is exceeded it is recommended to switch to the *sqlitelb* plugin. There are configuration options regarding how these databases are used that can change the point at which it becomes necessary to switch to the other plugin.\n\nIf you wish to use the same plugin to both store the configuration data and the reading data then you may either choose the same plugin for both or select the option *Use main plugin* for the *Reading Plugin* value. Using the latter is perhaps a slightly safer option as changes to the *Storage Plugin* will then automatically cause the readings to use that same plugin.\n\nConfiguring Storage Plugins\n###########################\n\nThe storage plugins to use can be selected in the *Advanced* section of the *Configuration* page. Select the *Storage* category from the category tree display and the following will be displayed.\n\n+------------------+\n| |storage_config| |\n+------------------+\n\n- **Storage Plugin**: The name of the storage plugin to use. This will be used to store the configuration data and must be one of the supported storage plugins. \n    \n.. 
note:: \n\n   This cannot be the *sqlitememory* plugin as that plugin does not support the storage of configuration.\n\n- **Reading Plugin**: The name of the storage plugin that will be used to store the readings data. If left blank then the *Storage Plugin* above will be used to store both configuration and readings.\n\n- **Database threads**: Increase the number of threads used within the storage service to manage the database activity. This is not the number of threads that can be used to read or write the database and increasing this will not improve the throughput of the data.\n\n- **Manage Storage**: This is used when an external storage application, such as the Postgres database, is used that requires separate initialization. If this external process is not run by default, setting this to true will cause Fledge to start the storage process. Normally this is not required as Postgres should be run as a system service and SQLite does not require it.\n\n- **Service Port**: Normally the storage service will dynamically create a service port that will be used by the storage service. Setting this to a value other than 0 will cause a fixed port to be used. This can be useful when developing a new storage plugin or to allow a non-Fledge application access to the storage layer. This should only be changed with extreme caution.\n\n- **Management Port**: Normally the storage service will dynamically create a management port that will be used by the storage service. Setting this to a value other than 0 will cause a fixed port to be used. This can be useful when developing a new storage plugin.\n\n- **Log Level**: This controls the level at which the storage plugin will output logs. \n\n- **Timeout**: Sets the timeout value in seconds for each request to the storage layer. This causes a timeout error to be returned to a client if a storage call takes longer than the specified value.\n\nChanges will be saved once the *save* button is pressed. 
Fledge uses a mechanism whereby this data is not only saved in the configuration database, but also cached to a file called *storage.json* in the *etc* directory of the data directory. This is required such that Fledge can find the configuration database during the boot process. If the configuration becomes corrupt for some reason, simply removing this file and restarting Fledge will cause the default configuration to be restored. The location of the Fledge data directory will depend upon how you installed Fledge and the environment variables used to run Fledge.\n\n- Installation from a package will usually put the data directory in */usr/local/fledge/data*. However this can be overridden by setting the *$FLEDGE_DATA* environment variable to point at a different location.\n\n- When running a copy of Fledge built from source the data directory can be found in *${FLEDGE_ROOT}/data*. Again this may be overridden by setting the *$FLEDGE_DATA* environment variable.\n\n.. note::\n\n    When changing the storage service a reboot of the Fledge instance is required before the new storage plugins will be used. Also, data is not migrated from one plugin to another and hence if there is unsent data within the database this will be lost when changing the storage plugin. The sqlite and sqlitelb plugins, however, share the same configuration data tables and hence configuration will be preserved when changing between these databases, but reading data will not.\n\nsqlite Plugin Configuration\n###########################\n\nThe storage plugin configuration can be found in the *Advanced* section of the *Configuration* page. Select the *Storage* category from the category tree display and the plugin name from beneath that category. In the case of the *sqlite* storage plugin the following will be displayed.\n\n+-----------------+\n| |sqlite_config| |\n+-----------------+\n\n- **Deployment**: This option controls a number of settings within the SQLite storage layer. 
Three options are available:\n\n  - **Small** Used when Fledge is installed with minimal resources. This reduces the disk and memory footprint of the storage layer. It is only recommended when the data flowing through the Fledge instance is of limited quantity and frequency.\n\n  - **Normal** This is the most commonly used setting and provides a compromise of memory and disk footprint for the storage system. This is the setting that is recommended in most circumstances and should be sufficient in most cases.\n\n  - **High Bandwidth** This setting is best when the Fledge instance is being used to process very high traffic loads. It increases both the disk and memory footprint of the storage layer in order to provide for high throughput of data in the storage layer.\n\n- **Pool Size**: The storage service uses a connection pool to communicate with the underlying database; it is this pool size that determines how many parallel operations can be invoked on the database.\n\n  This pool size is only the initial size; the storage service will grow the pool if required, however setting a realistic initial pool size will improve the ramp up performance of Fledge.\n\n.. note::\n\n        Although the pool size denotes the number of parallel operations that can take place, database locking considerations may reduce the number of actual operations in progress at any point in time.\n\n- **No. Readings per database**: The *sqlite* plugin supports multiple readings databases, with the name of the asset used to determine which database to store the readings in. This improves the level of parallelism by reducing the lock contention when data is being written. Setting this value to 1 will cause only a single asset name to be stored within a single readings database, resulting in no contention between assets. However there is a limit on the number of databases, therefore setting this to 1 will limit the number of different assets that can be ingested into the instance.\n\n- **No. 
databases to allocate in advance**: This controls how many reading databases Fledge should initially create. Creating databases is a slow process and thus is best achieved before data starts to flow through Fledge. Setting this too high will cause Fledge to allocate more databases than required and waste open database connections. Ideally set this to the number of different assets you expect to ingest divided by the number of readings per database configured above. This should give you sufficient databases to store the data you require.\n\n- **Database allocation threshold**: The allocation of a new database is a slow process, therefore rather than wait until there are no available databases before allocating new ones, it is possible to pre-allocate databases as the number of free databases becomes low. This value allows you to set the point at which to allocate more databases. As soon as the number of free databases declines to this value the plugin will allocate more databases.\n\n- **Database allocation size**: The number of new databases to create whenever an allocation occurs. This effectively denotes the size of the free pool of databases that should be created.\n\n- **Purge Exclusion**: This is not a performance setting, but allows a number of assets to be exempted from the purge process. This value is a comma separated list of asset names that will be excluded from the purge operation.\n\n- **Vacuum Interval**: The interval between execution of vacuum operations on the database, expressed in hours. A vacuum operation is used to reclaim space occupied in the database by data that has been deleted.\n\nsqlitelb Configuration\n######################\n\nThe storage plugin configuration can be found in the *Advanced* section of the *Configuration* page. Select the *Storage* category from the category tree display and the plugin name from beneath that category. 
In the case of the *sqlitelb* storage plugin the following will be displayed.\n\n+-------------------+\n| |sqlitelb_config| |\n+-------------------+\n\n.. note::\n\n   The *sqlite* configuration is still present and selectable since this instance has run that storage plugin in the past and the configuration is preserved when switching between *sqlite* and *sqlitelb* plugins.\n\n- **Pool Size**: The storage service uses a connection pool to communicate with the underlying database; it is this pool size that determines how many parallel operations can be invoked on the database.\n\n  This pool size is only the initial size; the storage service will grow the pool if required, however setting a realistic initial pool size will improve the ramp up performance of Fledge.\n\n.. note::\n\n    Although the pool size denotes the number of parallel operations that can take place, database locking considerations may reduce the number of actual operations in progress at any point in time.\n\n- **Vacuum Interval**: The interval between execution of vacuum operations on the database, expressed in hours. A vacuum operation is used to reclaim space occupied in the database by data that has been deleted.\n\n- **Purge Block Size**: The maximum number of rows that will be deleted within a single transaction when performing a purge operation on the readings data. Large block sizes are potentially the most efficient in terms of the time to complete the purge operation; however this will increase database contention, as a database lock is required that will cause any ingest operations to be stalled until the purge completes. By setting a lower block size the purge will take longer, but ingest operations can be interleaved with the purging of blocks.\n\npostgres Configuration\n######################\n\nThe storage plugin configuration can be found in the *Advanced* section of the *Configuration* page. 
Select the *Storage* category from the category tree display and the plugin name from beneath that category. In the case of the *postgres* storage plugin the following will be displayed.\n\n+-------------------+\n| |postgres_config| |\n+-------------------+\n\n  - **Pool Size**: The storage service uses a connection pool to communicate with the underlying database; it is this pool size that determines how many parallel operations can be invoked on the database.\n\n    This pool size is only the initial size; the storage service will grow the pool if required, however setting a realistic initial pool size will improve the ramp up performance of Fledge.\n\n  - **Max. Insert Rows**: The maximum number of readings that will be inserted in a single call to Postgres. This is a tuning parameter that has two effects on the system:\n\n    - It limits the size, and hence memory requirements, for a single insert statement\n\n    - It prevents very long running insert transactions from blocking access to the readings table\n\n    This parameter is useful on systems with very high data ingest rates or when the ingest contains sporadic large bursts of readings, to limit resource usage and database lock contention.\n\n.. note::\n\n   Although the pool size denotes the number of parallel operations that can take place, database locking considerations may reduce the number of actual operations in progress at any point in time.\n\nsqlitememory Configuration\n##########################\n\nThe storage plugin configuration can be found in the *Advanced* section of the *Configuration* page. Select the *Storage* category from the category tree display and the plugin name from beneath that category. Since this plugin only supports the storage of readings there will always be at least one other reading plugin displayed. 
Selecting the *sqlitememory* storage plugin the following will be displayed.\n\n+-----------------------+\n| |sqlitememory_config| |\n+-----------------------+\n\n  - **Pool Size**: The storage service uses a connection pool to communicate with the underlying database; it is this pool size that determines how many parallel operations can be invoked on the database.\n\n    This pool size is only the initial size; the storage service will grow the pool if required, however setting a realistic initial pool size will improve the ramp up performance of Fledge.\n\n.. note::\n\n    Although the pool size denotes the number of parallel operations that can take place, database locking considerations may reduce the number of actual operations in progress at any point in time.\n\n - **Persist Data**: Controls the persisting of the in-memory database on shutdown. If enabled the in-memory database will be persisted on shutdown of Fledge and reloaded when Fledge is next started. Selecting this option will slow down the shutdown and startup processing for Fledge.\n\n - **Persist File**: This defines the name of the file to which the in-memory database will be persisted.\n\n - **Purge Block Size**: The maximum number of rows that will be deleted within a single transaction when performing a purge operation on the readings data. Large block sizes are potentially the most efficient in terms of the time to complete the purge operation; however this will increase database contention, as a database lock is required that will cause any ingest operations to be stalled until the purge completes. By setting a lower block size the purge will take longer, but ingest operations can be interleaved with the purging of blocks.\n\nPerformance Counters\n--------------------\n\nA number of performance counters can be collected in the storage service to help characterise the performance of the service. 
This is intended to provide input into the tuning of the service, and the collection of these counters should not be left on during production use of the service.\n\nThe performance counters are turned on and off using a toggle control in the storage service configuration that can be found by selecting the *Advanced* item in the *Configuration* page. Then select the *Storage* category within *Advanced* from the category tree display. The following will be displayed.\n\n+------------------+\n| |storage_config| |\n+------------------+\n\nThe **Performance Counters** tick box indicates the current state of collection of storage layer statistics. Unlike a number of the other items within this configuration category it does not require a reboot of the system for the new setting to take effect.\n\nPerformance counters are collected in the storage service and a report is written once per minute to the configuration database for later retrieval. The values written are:\n\n  - The minimum value of the counter observed within the current minute.\n\n  - The maximum value of the counter observed within the current minute.\n\n  - The average value of the counter observed within the current minute.\n\n  - The number of samples of the counter collected within the current minute. Since one sample is made per call to the storage API, this value actually gives you the number of insert, update, delete or reading append calls made to the storage layer.\n\nIn the current release the performance counters can only be retrieved by direct access to the configuration and statistics database, where they are stored in the *monitors* table, or via the REST API. 
Future releases will include tools for the retrieval and analysis of these performance counters.\n\nTo access the performance counters via the REST API use the entry point /fledge/monitors to retrieve all counters, or /fledge/monitors/Storage to retrieve counters for just the storage service.\n\nWhen collection is enabled the following counters will be collected for the storage service that is enabled.\n\n.. list-table::\n    :widths: 15 30 55\n    :header-rows: 1\n\n    * - Counter\n      - Description\n      - Causes & Remedial Actions\n    * - Reading Append Time (ms)\n      - The amount of time it took to append the readings to the storage system\n      - High values of this could result from high levels of contention within the system or if the underlying storage system does not have enough bandwidth to handle the rate of data ingestion. A number of things can be tried to reduce high values observed here: reducing the number of calls by increasing the maximum block size and latency setting in the south service, switching to a faster plugin, or improving the storage subsystem of the machine hosting Fledge.\n    * - Reading Append Rows <plugin>\n      - The number of readings inserted in each call to the storage layer.\n      - Low values of this can be an indication that the south services are configured with either a latency value that is too low or a maximum number of readings to buffer that is too low. 
If performance is not sufficient then increasing the number of readings sent to the storage service per call can improve the performance.\n    * - Reading Append PayloadSize <plugin>\n      - The size of the JSON payload containing the readings\n      - High payload sizes with small row counts indicate very rich reading contents; reducing the payload size by filtering or processing the data will improve performance and reduce the storage requirements for the Fledge instance.\n    * - insert rows <table>\n      - A set of counters, one per table, that indicate the number of inserts into the table within the one minute collection time. The number of samples equates to the number of calls to the storage API to insert rows. The minimum, average and maximum values refer to the number of rows inserted in a single insert call.\n      - The action to take is very much related to which table is involved. For example if it is the statistics table then reducing the number of statistics maintained by the system will reduce the load on the system to store them.\n    * - update rows <table>\n      - A set of counters, one per table, that indicate the number of updates of the table within the one minute collection time. The number of samples equates to the number of calls to the storage API to update rows. The minimum, average and maximum values refer to the number of rows updated in a single call.\n      - The action to take is very much related to which table is involved. For example if it is the statistics table then reducing the number of statistics maintained by the system will reduce the load on the system to store them.\n    * - delete rows <table>\n      - A set of counters, one per table, that indicate the number of delete calls related to the table within the one minute collection time. The number of samples equates to the number of calls to the storage API to delete rows. 
The minimum, average and maximum values refer to the number of rows deleted in a single call.\n      - The delete API is not frequently used and there is little that is configurable that will impact its usage.\n    * - insert Payload Size <table>\n      - The size of the JSON payload in the insert calls to the storage layer for the given table.\n      - There is little an end user can influence regarding the payload size, however it gives an indication of bandwidth usage for the storage API.\n    * - update Payload Size <table>\n      - The size of the JSON payload in the update calls to the storage layer for the given table.\n      - There is little an end user can influence regarding the payload size, however it gives an indication of bandwidth usage for the storage API.\n    * - delete Payload Size <table>\n      - The size of the JSON payload in the delete calls to the storage layer for the given table.\n      - There is little an end user can influence regarding the payload size, however it gives an indication of bandwidth usage for the storage API.\n\nPurge\n=====\n\nThe purpose of the purge processes within Fledge is to control the usage of the storage system. Fledge has two different purge processes that run, each of which purges a different aspect of the storage within the system.\n\n  - **System Purge** - The system purge process is responsible for purging the logs held internally within the Fledge storage system. There are three types of log information held in the storage system: statistics, the audit trail, and task execution history.\n\n    .. note::\n\n        The *System Logs*, or message logs, are not held within the Fledge storage system but are rather sent to the Linux system logging facility, *syslog*. This is configured within the Linux system itself to rotate, compress and ultimately remove logs using the system defined log rotation settings.\n\n  - **Purge** - The purge process is responsible for purging the readings data from the system. 
\n\nPurge System\n------------\n\nThe log purging is perhaps the simpler of the two purge processes to discuss as it has the least impact on the performance of the system. The configuration of the process itself can be found under the *Configuration* menu option in the *Utilities::Purge System* category.\n\n+---------------------+\n| |PurgeSystemConfig| |\n+---------------------+\n\nThe configuration options merely allow you to set the number of days' worth of data that should be retained for each of the three log categories: audit, tasks and statistics. The important consideration here is that the various logs should not be allowed to grow to such an extent that you risk exhausting the storage system, but should retain sufficient information to be able to examine enough history of the system.\n\nThe other dimension to consider is that performance is known to degrade as these tables become large. It is therefore not advisable to keep an extensive history simply because you have the storage to do so. Reducing the history kept can improve the performance.\n\nTypically the statistics that are held will take the most space in the system, especially if you are collecting per asset ingest statistics and you collect data for many assets. There are actually two forms of statistics kept: the absolute counters and the history snapshot of the statistics. The history snapshot records the statistics values every 15 seconds, creating an entry in the statistics history table for each statistic. It is these statistics history entries that are purged and not the absolute statistics counters. Hence the retention period for statistics, the statistics history, is generally lower.\n\n.. note::\n\n   The 15 second statistics history update can itself be tuned by changing the frequency with which the statistics history task is run. This is done via the *Schedules* menu item by changing the interval for the *stats collection* task. 
Changing this will impact the dashboard seen in Fledge as this shows values from the statistics history table. The values shown are the deltas in the statistics between each run of the stats collection task. Therefore by default the rates shown in the dashboard are per 15 second intervals.\n\nSimilar decisions should be made for the task and the audit log data. In the case of the audit log you should consider what use is being made of that data and how frequently it is updated. Typically systems do not undergo much reconfiguration after the initial setup period. Therefore most of the audit data is likely to be around significant events that occur, such as a restart or failure. If you are making heavy use of the notification or control features of Fledge then these will increase the growth rate of the audit log as these are auditable events.\n\n.. note::\n\n   The audit log is also used by the *FogLAMP Manage* product to determine if changes have been made locally to the instance. Therefore the retention period for audit log data must be greater than the frequency with which that product is collecting this data from the instance.\n\nThe task log is used internally within Fledge to track the state of running tasks as well as to give the history of tasks that have run for support purposes; this data is included in the support bundles. Therefore the retention should be such that there is sufficient history to cover any period that might be needed to diagnose issues within Fledge. Also the period should not be so small that it risks the data for a running task being purged before the task has completed. As a guideline it must never be less than 1 day. It is recommended to keep at least 7 days to allow for some history to be available for diagnostic purposes.\n\nAs well as the configuration of the retention period for the various logs, the other tuning that can be done is the frequency of the execution of the system purge process. 
This is done in the *Schedules* menu item and is the task named *purge_system*. The default is to run it every 23 hours and 50 minutes.\n\n.. note::\n\n   If you run the system purge every 24 hours and you retain 7 days' worth of data for the statistics, you will have 8 days of data stored at the peak of storage use. This is because when the process runs it will reduce the data down to 7 days, but as soon as it has completed new data will accumulate until it is next run a day later. The same is obviously also true for the task and audit data.\n\nPurge Process\n-------------\n\nThe purge process is probably the more important of the two processes to tune. It manages the storage for the reading data, which is the larger and more dynamic of the two data sets controlled by the purge processes. As with the purge system process above, the configuration of how the purge process runs is available in the *Configuration* menu item in the category *Utilities::Purge*.\n\n+---------------+\n| |PurgeConfig| |\n+---------------+\n\nThe details of each of the options are covered elsewhere in the documentation, but the salient points will be repeated here. The operation of the purge process reduces the number of readings that are retained in the readings storage subsystem using two parameters:\n\n   - the age of the reading\n\n   - the number of readings\n\nThe age is set in hours. Any reading older than this age is a candidate to be removed from the readings data. The purge process also looks at the number of readings stored and will remove the oldest readings, even if they are newer than the age to be retained, if the number exceeds the *Max rows of data to retain* value.\n\nThese are the candidates to be removed, but they may not be removed depending upon the sent status of the readings and the configuration item *Retain Unsent Data*.\n\nCandidate data that has already been sent to all the defined north destinations in the system will always be removed regardless of the *Retain Unsent Data* setting. 
Data that has not been marked as a candidate for removal will be retained even after it has been sent to all the north destinations.\n\nIf the *Retain Unsent Data* setting is set to *retain unsent to any destination*, then candidate data will be removed if it has been sent to at least one north destination. Data that has not been sent to any destination will be retained.\n\nAs with the purge system process, the purge process is also run by a schedule that is accessed via the *Schedules* menu item.\n\n+------------------+\n| |PurgeSchedules| |\n+------------------+\n\nThe frequency of running the purge process is very important, since it has the same effect as described for the purge system execution, but the impact is much higher. Consider a system that wants to retain data for 12 hours. If the purge process is set to run every 12 hours the number of readings over time would be as shown in the graph below.\n\n+---------------+\n| |PurgeCycles| |\n+---------------+\n\nThe red line indicates the configured retention point for the readings. Each point where the blue line drops is an execution of the purge process.\n\nThis assumes we started with a system with no readings. We read in data for 12 hours and then run the purge process. This is shown as removing a small number of readings to reduce the retained readings to those less than 12 hours old. The initial run is in fact not likely to find any data to remove, or at most a handful of readings, depending on how long it takes the purge process to start executing.\n\nThe system now continues to ingest data and will accumulate another 12 hours of data before purge is run again and the data reduced to the newest 12 hours of data.\n\n.. note::\n\n   We are assuming that either unsent data is not retained or we are sending all data north immediately as it is received.\n\nThis means that at a peak we are storing 24 hours of data, or twice what we wish to retain. 
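\n\nThe peak storage arithmetic described above can be sketched in a few lines. This is purely an illustrative calculation, not Fledge code; the function name and figures are hypothetical.\n\n.. code-block:: python\n\n    # Peak readings held: the retention window plus whatever accumulates\n    # between two consecutive runs of the purge process.\n    def peak_hours(retention_hours, purge_interval_hours):\n        return retention_hours + purge_interval_hours\n\n    # Retain 12 hours and purge every 12 hours: a peak of 24 hours of data.\n    print(peak_hours(12, 12))\n\nHalving the purge interval in this sketch halves the excess stored above the retention window, at the cost of running the purge task more often.\n\n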
Running the purge process more frequently than the retention period will not remove any more data than defined within the retention period, but will reduce the peaks of data that are stored. The other impact of this, not shown in the graph above, is that purge is **not an instantaneous process**. It takes time to purge the data and with some storage engines the system is blocked from ingesting more data during the purge. In this case the services will buffer the data in memory whilst waiting to gain access to the storage. Purging more often will decrease the number of readings that are removed for each execution and hence reduce the time that the ingest is locked out of the storage system. This reduces the time, and memory resources, that services have to buffer data in memory.\n\n.. note::\n\n   Since all data must go via the storage system from south service to the north services and tasks, the period when services are buffering in memory because the purge process is running, will increase the latency for data to traverse from the south to the north.\n\nThere are many advantages to running the purge process more frequently than the retention period. Running it too frequently, however, can cause an increase in latency for readings. In addition, if one purge process does not complete before another starts, issues can be seen whereby the purge process dominates the usage of the storage subsystem. If this happens, readings build up in the service memory buffers, eventually causing issues with excessive memory usage. 
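\n\nThe buffering behaviour described above can be illustrated with some simple arithmetic. The rates below are hypothetical, chosen only to show the shape of the trade-off; they are not measured Fledge figures.\n\n.. code-block:: python\n\n    # Readings buffered in service memory while ingest is blocked by a purge.\n    def buffered_readings(ingest_rate_per_sec, purge_duration_secs):\n        return ingest_rate_per_sec * purge_duration_secs\n\n    # 1000 readings/second blocked for a 30 second purge leaves\n    # 30000 readings buffered in memory.\n    print(buffered_readings(1000, 30))\n\nRunning the purge more frequently reduces the rows removed per run, and hence the blocking time, shrinking this in-memory buffer roughly proportionally.\n\n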
The execution interval for the purge process must be balanced so as not to create issues with memory and storage utilisation.\n\nThe *Logs::Tasks* menu item can be used to view the execution duration of the *purge* and other tasks and provides useful information for tuning the schedule of the purge process.\n\n+-----------+\n| |TaskLog| |\n+-----------+\n\nIt is recommended that the interval between running the purge task should be no lower than 10 times the duration of the purge task itself.\n\nUsing Performance Counters\n==========================\n\nPerformance counters are a way to look at specific indicators within a service to gain greater insight into the performance of the individual services and the system as a whole. The documentation above describes the usage of these counters for a number of the different services; however to aid in interpreting those counters it is useful to understand in more depth how the data is collected and what it means.\n\nPerformance counters are implemented internally within the services to collect data over a fixed period of time and present a summary of the values collected. Each counter, or monitor, is collected for one minute and then four items of data are stored regarding the counter.\n\n - The number of samples collected during that minute.\n\n - The minimum value observed within the minute.\n\n - The maximum value observed within the minute.\n\n - The average value observed within the minute.\n\nThese values are recorded against the counter name and a timestamp that represents the end of the minute during which the values were collected.\n\nSampling\n--------\n\nSampling is perhaps a slightly misleading term regarding a number of the counters. In the majority of cases a sample is taken when an event occurs, for example in the case of the storage service each sample represents one of the storage APIs receiving a call. 
Therefore, in the case of the storage service the number of samples gives you the number of API calls made within the minute. The counter name tells you which API call it was, and in the case of storage also the table on which that call was made. The values for these API calls tell you something about the parameters passed to the API call.\n\nIn the south and north services the events relate to data ingest, forwarding and reading. Most commonly a sample is taken when a block of data, which consists of one or more readings, is processed by the system. Again the sample quantity is an indication of the number of operations per minute the service is making and the values represent the volume of data processed in most cases.\n\nIdentifying bottlenecks\n-----------------------\n\nLooking at long term trends in performance counters that report queue length is a useful way to determine where a bottleneck might exist within a system. Ideally queue lengths should be proportional to the volume of data being read and should be stable over time if the data volumes are stable. If they are not stable and are growing it is an indication that something north of that queue is unable to handle the sustained data volumes being presented. 
If queue lengths are decreasing it indicates that something south of the queue is not managing to maintain the load offered to it.\n\nProcessing times increasing can also indicate that something north of that location in the pipeline, or the location itself, is unable to obtain sufficient resource to maintain the processing load requested of it.\n\nIncreasing payload sizes or row counts in the case of storage performance counters is an indication that the components south of the counter are presenting data faster than it can be processed and more and more data is being buffered in those services.\n\nRemoving Monitors\n-----------------\n\nThe performance monitors are stored in the configuration database of the Fledge instance in a single table named *monitors*. These will remain in the database until manually removed. This removal may be done using the API or by directly accessing the database table. Monitors are removed via the API using the DELETE method. The URLs used are identical to those when fetching the performance counters. To remove all performance monitors use the URL /fledge/monitors with the DELETE method, to remove just those for a particular service use a URL of the form /fledge/monitors/{service}.\n\n.. code-block:: console\n\n   curl -X DELETE http://localhost:8081/fledge/monitors\n\nCautions\n--------\n\nCare should be taken when using performance counters; as with almost any system, the act of observing the system impacts the behavior of the system. This is certainly true of the performance counters.\n\n  - Collection time. Although internally the performance counters are stored in a memory structure, this is indexed by the counter name and does take a finite amount of time to collect. This will detract from the overall system performance to a small degree.\n\n  - Memory usage. Performance counters are kept in memory, with values recorded for each sample. 
This can take significant memory when working in a system with a large number of events that trigger performance counter sampling. This not only impacts the size of the system, but also the performance, as it requires dynamic memory allocation to take place.\n\n  - Storing counters. The performance counters are stored in the configuration database of the storage layer. The storing of these counters not only puts more load on the storage system, making API calls to insert rows into the monitors table, but also increases contention on the configuration database.\n\n  - Database growth. There is no automatic process for purging performance counters. This must be done manually via the API or directly on the monitors table.\n\n.. note::\n\n  Performance counters can be a very useful tool when tuning or debugging Fledge systems, but should **never** be left on during production use.\n\n\nSupport Bundle Configuration\n============================\n\nThe support bundle is a collection of diagnostic data about the Fledge instance, used to identify and troubleshoot system issues more effectively. The following configuration parameters control the automatic generation and retention of support bundles:\n\n+--------------------------------+\n| |support_bundle_configuration| |\n+--------------------------------+\n\n - **Auto Generate On Failure**: This option controls whether a support bundle is automatically created when a service failure occurs. By default, this is set to **true**. When enabled, a support bundle is generated and saved in the **support** directory within the Fledge data directory. An alert is also triggered to notify the user that the bundle has been created.\n\n - **Bundles To Retain**: This setting defines how many support bundles should be retained. The minimum value is **1**, and the default number of bundles that can be retained is **3**. 
If the number of stored bundles exceeds this limit, the oldest one is automatically deleted to make room for the new bundle. Setting this value too high will increase the storage requirements for the Fledge instance.\n\n"
  },
  {
    "path": "doxy.config",
    "content": "# Doxyfile 1.8.13\n\n# This file describes the settings to be used by the documentation system\n# doxygen (www.doxygen.org) for a project.\n#\n# All text after a double hash (##) is considered a comment and is placed in\n# front of the TAG it is preceding.\n#\n# All text after a single hash (#) is considered a comment and will be ignored.\n# The format is:\n# TAG = value [value, ...]\n# For lists, items can also be appended using:\n# TAG += value [value, ...]\n# Values that contain spaces should be placed between quotes (\\\" \\\").\n\n#---------------------------------------------------------------------------\n# Project related configuration options\n#---------------------------------------------------------------------------\n\n# This tag specifies the encoding used for all characters in the config file\n# that follow. The default is UTF-8 which is also the encoding used for all text\n# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv\n# built into libc) for the transcoding. See http://www.gnu.org/software/libiconv\n# for the list of possible encodings.\n# The default value is: UTF-8.\n\nDOXYFILE_ENCODING      = UTF-8\n\n# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by\n# double-quotes, unless you are using Doxywizard) that should identify the\n# project for which the documentation is generated. This name is used in the\n# title of most generated pages and in a few other places.\n# The default value is: My Project.\n\nPROJECT_NAME           = \"Fledge\"\n\n# The PROJECT_NUMBER tag can be used to enter a project or revision number. This\n# could be handy for archiving the generated documentation or if some version\n# control system is used.\n\nPROJECT_NUMBER         =\n\n# Using the PROJECT_BRIEF tag one can provide an optional one line description\n# for a project that appears at the top of each page and should give viewer a\n# quick idea about the purpose of the project. 
Keep the description short.\n\nPROJECT_BRIEF          = \"An open source edge computing platform for industrial users\"\n\n# With the PROJECT_LOGO tag one can specify a logo or an icon that is included\n# in the documentation. The maximum height of the logo should not exceed 55\n# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy\n# the logo to the output directory.\n\nPROJECT_LOGO           = docs/images/fledge.png\n\n# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path\n# into which the generated documentation will be written. If a relative path is\n# entered, it will be relative to the location where doxygen was started. If\n# left blank the current directory will be used.\n\nOUTPUT_DIRECTORY       = doxy\n\n# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-\n# directories (in 2 levels) under the output directory of each output format and\n# will distribute the generated files over these directories. Enabling this\n# option can be useful when feeding doxygen a huge amount of source files, where\n# putting all generated files in the same directory would otherwise cause\n# performance problems for the file system.\n# The default value is: NO.\n\nCREATE_SUBDIRS         = NO\n\n# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII\n# characters to appear in the names of generated files. If set to NO, non-ASCII\n# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode\n# U+3044.\n# The default value is: NO.\n\nALLOW_UNICODE_NAMES    = NO\n\n# The OUTPUT_LANGUAGE tag is used to specify the language in which all\n# documentation generated by doxygen is written. 
Doxygen will use this\n# information to generate all constant output in the proper language.\n# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,\n# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),\n# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,\n# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),\n# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,\n# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,\n# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,\n# Ukrainian and Vietnamese.\n# The default value is: English.\n\nOUTPUT_LANGUAGE        = English\n\n# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member\n# descriptions after the members that are listed in the file and class\n# documentation (similar to Javadoc). Set to NO to disable this.\n# The default value is: YES.\n\nBRIEF_MEMBER_DESC      = YES\n\n# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief\n# description of a member or function before the detailed description\n#\n# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the\n# brief descriptions will be completely suppressed.\n# The default value is: YES.\n\nREPEAT_BRIEF           = YES\n\n# This tag implements a quasi-intelligent brief description abbreviator that is\n# used to form the text in various listings. Each string in this list, if found\n# as the leading text of the brief description, will be stripped from the text\n# and the result, after processing the whole list, is used as the annotated\n# text. Otherwise, the brief description is used as-is. 
If left blank, the\n# following values are used ($name is automatically replaced with the name of\n# the entity):The $name class, The $name widget, The $name file, is, provides,\n# specifies, contains, represents, a, an and the.\n\nABBREVIATE_BRIEF       = \"The $name class\" \\\n                         \"The $name widget\" \\\n                         \"The $name file\" \\\n                         is \\\n                         provides \\\n                         specifies \\\n                         contains \\\n                         represents \\\n                         a \\\n                         an \\\n                         the\n\n# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then\n# doxygen will generate a detailed section even if there is only a brief\n# description.\n# The default value is: NO.\n\nALWAYS_DETAILED_SEC    = NO\n\n# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all\n# inherited members of a class in the documentation of that class as if those\n# members were ordinary class members. Constructors, destructors and assignment\n# operators of the base classes will not be shown.\n# The default value is: NO.\n\nINLINE_INHERITED_MEMB  = NO\n\n# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path\n# before files name in the file list and in the header files. If set to NO the\n# shortest path that makes the file name unique will be used\n# The default value is: YES.\n\nFULL_PATH_NAMES        = YES\n\n# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.\n# Stripping is only done if one of the specified strings matches the left-hand\n# part of the path. 
The tag can be used to show relative paths in the file list.\n# If left blank the directory from which doxygen is run is used as the path to\n# strip.\n#\n# Note that you can specify absolute paths here, but also relative paths, which\n# will be relative from the directory where doxygen is started.\n# This tag requires that the tag FULL_PATH_NAMES is set to YES.\n\nSTRIP_FROM_PATH        =\n\n# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the\n# path mentioned in the documentation of a class, which tells the reader which\n# header file to include in order to use a class. If left blank only the name of\n# the header file containing the class definition is used. Otherwise one should\n# specify the list of include paths that are normally passed to the compiler\n# using the -I flag.\n\nSTRIP_FROM_INC_PATH    =\n\n# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but\n# less readable) file names. This can be useful if your file system doesn't\n# support long names like on DOS, Mac, or CD-ROM.\n# The default value is: NO.\n\nSHORT_NAMES            = NO\n\n# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the\n# first line (until the first dot) of a Javadoc-style comment as the brief\n# description. If set to NO, the Javadoc-style will behave just like regular Qt-\n# style comments (thus requiring an explicit @brief command for a brief\n# description.)\n# The default value is: NO.\n\nJAVADOC_AUTOBRIEF      = YES\n\n# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first\n# line (until the first dot) of a Qt-style comment as the brief description. If\n# set to NO, the Qt-style will behave just like regular Qt-style comments (thus\n# requiring an explicit \\brief command for a brief description.)\n# The default value is: NO.\n\nQT_AUTOBRIEF           = NO\n\n# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a\n# multi-line C++ special comment block (i.e. 
a block of //! or /// comments) as\n# a brief description. This used to be the default behavior. The new default is\n# to treat a multi-line C++ comment block as a detailed description. Set this\n# tag to YES if you prefer the old behavior instead.\n#\n# Note that setting this tag to YES also means that rational rose comments are\n# not recognized any more.\n# The default value is: NO.\n\nMULTILINE_CPP_IS_BRIEF = NO\n\n# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the\n# documentation from any documented member that it re-implements.\n# The default value is: YES.\n\nINHERIT_DOCS           = YES\n\n# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new\n# page for each member. If set to NO, the documentation of a member will be part\n# of the file/class/namespace that contains it.\n# The default value is: NO.\n\nSEPARATE_MEMBER_PAGES  = NO\n\n# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen\n# uses this value to replace tabs by spaces in code fragments.\n# Minimum value: 1, maximum value: 16, default value: 4.\n\nTAB_SIZE               = 4\n\n# This tag can be used to specify a number of aliases that act as commands in\n# the documentation. An alias has the form:\n# name=value\n# For example adding\n# \"sideeffect=@par Side Effects:\\n\"\n# will allow you to put the command \\sideeffect (or @sideeffect) in the\n# documentation, which will result in a user-defined paragraph with heading\n# \"Side Effects:\". You can put \\n's in the value part of an alias to insert\n# newlines.\n\nALIASES                =\n\n# This tag can be used to specify a number of word-keyword mappings (TCL only).\n# A mapping has the form \"name=value\". For example adding \"class=itcl::class\"\n# will allow you to use the command class in the itcl::class meaning.\n\nTCL_SUBST              =\n\n# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources\n# only. 
Doxygen will then generate output that is more tailored for C. For\n# instance, some of the names that are used will be different. The list of all\n# members will be omitted, etc.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_FOR_C  = NO\n\n# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or\n# Python sources only. Doxygen will then generate output that is more tailored\n# for that language. For instance, namespaces will be presented as packages,\n# qualified scopes will look different, etc.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_JAVA   = NO\n\n# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran\n# sources. Doxygen will then generate output that is tailored for Fortran.\n# The default value is: NO.\n\nOPTIMIZE_FOR_FORTRAN   = NO\n\n# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL\n# sources. Doxygen will then generate output that is tailored for VHDL.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_VHDL   = NO\n\n# Doxygen selects the parser to use depending on the extension of the files it\n# parses. With this tag you can assign which parser to use for a given\n# extension. Doxygen has a built-in mapping, but you can override or extend it\n# using this tag. The format is ext=language, where ext is a file extension, and\n# language is one of the parsers supported by doxygen: IDL, Java, Javascript,\n# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:\n# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:\n# Fortran. In the latter case the parser tries to guess whether the code is fixed\n# or free formatted code, this is the default for Fortran type files), VHDL. 
For\n# instance to make doxygen treat .inc files as Fortran files (default is PHP),\n# and .f files as C (default is Fortran), use: inc=Fortran f=C.\n#\n# Note: For files without extension you can use no_extension as a placeholder.\n#\n# Note that for custom extensions you also need to set FILE_PATTERNS otherwise\n# the files are not read by doxygen.\n\nEXTENSION_MAPPING      =\n\n# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments\n# according to the Markdown format, which allows for more readable\n# documentation. See http://daringfireball.net/projects/markdown/ for details.\n# The output of markdown processing is further processed by doxygen, so you can\n# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in\n# case of backward compatibilities issues.\n# The default value is: YES.\n\nMARKDOWN_SUPPORT       = YES\n\n# When the TOC_INCLUDE_HEADINGS tag is set to a non-zero value, all headings up\n# to that level are automatically included in the table of contents, even if\n# they do not have an id attribute.\n# Note: This feature currently applies only to Markdown headings.\n# Minimum value: 0, maximum value: 99, default value: 0.\n# This tag requires that the tag MARKDOWN_SUPPORT is set to YES.\n\nTOC_INCLUDE_HEADINGS   = 0\n\n# When enabled doxygen tries to link words that correspond to documented\n# classes, or namespaces to their corresponding documentation. Such a link can\n# be prevented in individual cases by putting a % sign in front of the word or\n# globally by setting AUTOLINK_SUPPORT to NO.\n# The default value is: YES.\n\nAUTOLINK_SUPPORT       = YES\n\n# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want\n# to include (a tag file for) the STL sources as input, then you should set this\n# tag to YES in order to let doxygen match functions declarations and\n# definitions whose arguments contain STL classes (e.g. func(std::string);\n# versus func(std::string) {}). 
This also makes the inheritance and collaboration\n# diagrams that involve STL classes more complete and accurate.\n# The default value is: NO.\n\nBUILTIN_STL_SUPPORT    = NO\n\n# If you use Microsoft's C++/CLI language, you should set this option to YES to\n# enable parsing support.\n# The default value is: NO.\n\nCPP_CLI_SUPPORT        = NO\n\n# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:\n# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen\n# will parse them like normal C++ but will assume all classes use public instead\n# of private inheritance when no explicit protection keyword is present.\n# The default value is: NO.\n\nSIP_SUPPORT            = NO\n\n# For Microsoft's IDL there are propget and propput attributes to indicate\n# getter and setter methods for a property. Setting this option to YES will make\n# doxygen replace the get and set methods by a property in the documentation.\n# This will only work if the methods are indeed getting or setting a simple\n# type. If this is not the case, or you want to show the methods anyway, you\n# should set this option to NO.\n# The default value is: YES.\n\nIDL_PROPERTY_SUPPORT   = YES\n\n# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC\n# tag is set to YES then doxygen will reuse the documentation of the first\n# member in the group (if any) for the other members of the group. By default\n# all members of a group must be documented explicitly.\n# The default value is: NO.\n\nDISTRIBUTE_GROUP_DOC   = NO\n\n# If one adds a struct or class to a group and this option is enabled, then also\n# any nested class or struct is added to the same group. 
By default this option\n# is disabled and one has to add nested compounds explicitly via \\ingroup.\n# The default value is: NO.\n\nGROUP_NESTED_COMPOUNDS = NO\n\n# Set the SUBGROUPING tag to YES to allow class member groups of the same type\n# (for instance a group of public functions) to be put as a subgroup of that\n# type (e.g. under the Public Functions section). Set it to NO to prevent\n# subgrouping. Alternatively, this can be done per class using the\n# \\nosubgrouping command.\n# The default value is: YES.\n\nSUBGROUPING            = YES\n\n# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions\n# are shown inside the group in which they are included (e.g. using \\ingroup)\n# instead of on a separate page (for HTML and Man pages) or section (for LaTeX\n# and RTF).\n#\n# Note that this feature does not work in combination with\n# SEPARATE_MEMBER_PAGES.\n# The default value is: NO.\n\nINLINE_GROUPED_CLASSES = NO\n\n# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions\n# with only public data fields or simple typedef fields will be shown inline in\n# the documentation of the scope in which they are defined (i.e. file,\n# namespace, or group documentation), provided this scope is documented. If set\n# to NO, structs, classes, and unions are shown on a separate page (for HTML and\n# Man pages) or section (for LaTeX and RTF).\n# The default value is: NO.\n\nINLINE_SIMPLE_STRUCTS  = NO\n\n# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or\n# enum is documented as struct, union, or enum with the name of the typedef. So\n# typedef struct TypeS {} TypeT, will appear in the documentation as a struct\n# with name TypeT. When disabled the typedef will appear as a member of a file,\n# namespace, or class. And the struct will be named TypeS. 
This can typically be\n# useful for C code in case the coding convention dictates that all compound\n# types are typedef'ed and only the typedef is referenced, never the tag name.\n# The default value is: NO.\n\nTYPEDEF_HIDES_STRUCT   = NO\n\n# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This\n# cache is used to resolve symbols given their name and scope. Since this can be\n# an expensive process and often the same symbol appears multiple times in the\n# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small\n# doxygen will become slower. If the cache is too large, memory is wasted. The\n# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range\n# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536\n# symbols. At the end of a run doxygen will report the cache usage and suggest\n# the optimal cache size from a speed point of view.\n# Minimum value: 0, maximum value: 9, default value: 0.\n\nLOOKUP_CACHE_SIZE      = 0\n\n#---------------------------------------------------------------------------\n# Build related configuration options\n#---------------------------------------------------------------------------\n\n# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in\n# documentation are documented, even if no documentation was available. 
Private\n# class members and static file members will be hidden unless the\n# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.\n# Note: This will also disable the warnings about undocumented members that are\n# normally produced when WARNINGS is set to YES.\n# The default value is: NO.\n\nEXTRACT_ALL            = NO\n\n# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will\n# be included in the documentation.\n# The default value is: NO.\n\nEXTRACT_PRIVATE        = NO\n\n# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal\n# scope will be included in the documentation.\n# The default value is: NO.\n\nEXTRACT_PACKAGE        = NO\n\n# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be\n# included in the documentation.\n# The default value is: NO.\n\nEXTRACT_STATIC         = NO\n\n# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined\n# locally in source files will be included in the documentation. If set to NO,\n# only classes defined in header files are included. Does not have any effect\n# for Java sources.\n# The default value is: YES.\n\nEXTRACT_LOCAL_CLASSES  = YES\n\n# This flag is only useful for Objective-C code. If set to YES, local methods,\n# which are defined in the implementation section but not in the interface are\n# included in the documentation. If set to NO, only methods in the interface are\n# included.\n# The default value is: NO.\n\nEXTRACT_LOCAL_METHODS  = NO\n\n# If this flag is set to YES, the members of anonymous namespaces will be\n# extracted and appear in the documentation as a namespace called\n# 'anonymous_namespace{file}', where file will be replaced with the base name of\n# the file that contains the anonymous namespace. 
By default anonymous namespaces\n# are hidden.\n# The default value is: NO.\n\nEXTRACT_ANON_NSPACES   = NO\n\n# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all\n# undocumented members inside documented classes or files. If set to NO these\n# members will be included in the various overviews, but no documentation\n# section is generated. This option has no effect if EXTRACT_ALL is enabled.\n# The default value is: NO.\n\nHIDE_UNDOC_MEMBERS     = NO\n\n# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all\n# undocumented classes that are normally visible in the class hierarchy. If set\n# to NO, these classes will be included in the various overviews. This option\n# has no effect if EXTRACT_ALL is enabled.\n# The default value is: NO.\n\nHIDE_UNDOC_CLASSES     = NO\n\n# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend\n# (class|struct|union) declarations. If set to NO, these declarations will be\n# included in the documentation.\n# The default value is: NO.\n\nHIDE_FRIEND_COMPOUNDS  = NO\n\n# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any\n# documentation blocks found inside the body of a function. If set to NO, these\n# blocks will be appended to the function's detailed documentation block.\n# The default value is: NO.\n\nHIDE_IN_BODY_DOCS      = NO\n\n# The INTERNAL_DOCS tag determines if documentation that is typed after a\n# \\internal command is included. If the tag is set to NO then the documentation\n# will be excluded. Set it to YES to include the internal documentation.\n# The default value is: NO.\n\nINTERNAL_DOCS          = NO\n\n# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file\n# names in lower-case letters. If set to YES, upper-case letters are also\n# allowed. This is useful if you have classes or files whose names only differ\n# in case and if your file system supports case sensitive file names. 
Windows\n# and Mac users are advised to set this option to NO.\n# The default value is: system dependent.\n\nCASE_SENSE_NAMES       = YES\n\n# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with\n# their full class and namespace scopes in the documentation. If set to YES, the\n# scope will be hidden.\n# The default value is: NO.\n\nHIDE_SCOPE_NAMES       = NO\n\n# If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will\n# append additional text to a page's title, such as Class Reference. If set to\n# YES the compound reference will be hidden.\n# The default value is: NO.\n\nHIDE_COMPOUND_REFERENCE= NO\n\n# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of\n# the files that are included by a file in the documentation of that file.\n# The default value is: YES.\n\nSHOW_INCLUDE_FILES     = YES\n\n# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each\n# grouped member an include statement to the documentation, telling the reader\n# which file to include in order to use the member.\n# The default value is: NO.\n\nSHOW_GROUPED_MEMB_INC  = NO\n\n# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include\n# files with double quotes in the documentation rather than with sharp brackets.\n# The default value is: NO.\n\nFORCE_LOCAL_INCLUDES   = NO\n\n# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the\n# documentation for inline members.\n# The default value is: YES.\n\nINLINE_INFO            = YES\n\n# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the\n# (detailed) documentation of file and class members alphabetically by member\n# name. 
If set to NO, the members will appear in declaration order.\n# The default value is: YES.\n\nSORT_MEMBER_DOCS       = YES\n\n# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief\n# descriptions of file, namespace and class members alphabetically by member\n# name. If set to NO, the members will appear in declaration order. Note that\n# this will also influence the order of the classes in the class list.\n# The default value is: NO.\n\nSORT_BRIEF_DOCS        = NO\n\n# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the\n# (brief and detailed) documentation of class members so that constructors and\n# destructors are listed first. If set to NO the constructors will appear in the\n# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.\n# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief\n# member documentation.\n# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting\n# detailed member documentation.\n# The default value is: NO.\n\nSORT_MEMBERS_CTORS_1ST = NO\n\n# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy\n# of group names into alphabetical order. If set to NO the group names will\n# appear in their defined order.\n# The default value is: NO.\n\nSORT_GROUP_NAMES       = NO\n\n# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by\n# fully-qualified names, including namespaces. 
If set to NO, the class list will\n# be sorted only by class name, not including the namespace part.\n# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.\n# Note: This option applies only to the class list, not to the alphabetical\n# list.\n# The default value is: NO.\n\nSORT_BY_SCOPE_NAME     = NO\n\n# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper\n# type resolution of all parameters of a function it will reject a match between\n# the prototype and the implementation of a member function even if there is\n# only one candidate or it is obvious which candidate to choose by doing a\n# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still\n# accept a match between prototype and implementation in such cases.\n# The default value is: NO.\n\nSTRICT_PROTO_MATCHING  = NO\n\n# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo\n# list. This list is created by putting \\todo commands in the documentation.\n# The default value is: YES.\n\nGENERATE_TODOLIST      = YES\n\n# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test\n# list. This list is created by putting \\test commands in the documentation.\n# The default value is: YES.\n\nGENERATE_TESTLIST      = YES\n\n# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug\n# list. This list is created by putting \\bug commands in the documentation.\n# The default value is: YES.\n\nGENERATE_BUGLIST       = YES\n\n# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)\n# the deprecated list. This list is created by putting \\deprecated commands in\n# the documentation.\n# The default value is: YES.\n\nGENERATE_DEPRECATEDLIST= YES\n\n# The ENABLED_SECTIONS tag can be used to enable conditional documentation\n# sections, marked by \\if <section_label> ... \\endif and \\cond <section_label>\n# ... 
\\endcond blocks.\n\nENABLED_SECTIONS       =\n\n# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the\n# initial value of a variable or macro / define can have for it to appear in the\n# documentation. If the initializer consists of more lines than specified here\n# it will be hidden. Use a value of 0 to hide initializers completely. The\n# appearance of the value of individual variables and macros / defines can be\n# controlled using \\showinitializer or \\hideinitializer command in the\n# documentation regardless of this setting.\n# Minimum value: 0, maximum value: 10000, default value: 30.\n\nMAX_INITIALIZER_LINES  = 30\n\n# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at\n# the bottom of the documentation of classes and structs. If set to YES, the\n# list will mention the files that were used to generate the documentation.\n# The default value is: YES.\n\nSHOW_USED_FILES        = YES\n\n# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This\n# will remove the Files entry from the Quick Index and from the Folder Tree View\n# (if specified).\n# The default value is: YES.\n\nSHOW_FILES             = YES\n\n# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces\n# page. This will remove the Namespaces entry from the Quick Index and from the\n# Folder Tree View (if specified).\n# The default value is: YES.\n\nSHOW_NAMESPACES        = YES\n\n# The FILE_VERSION_FILTER tag can be used to specify a program or script that\n# doxygen should invoke to get the current version for each file (typically from\n# the version control system). Doxygen will invoke the program by executing (via\n# popen()) the command command input-file, where command is the value of the\n# FILE_VERSION_FILTER tag, and input-file is the name of an input file provided\n# by doxygen. Whatever the program writes to standard output is used as the file\n# version. 
For an example see the documentation.\n\nFILE_VERSION_FILTER    =\n\n# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed\n# by doxygen. The layout file controls the global structure of the generated\n# output files in an output format independent way. To create the layout file\n# that represents doxygen's defaults, run doxygen with the -l option. You can\n# optionally specify a file name after the option, if omitted DoxygenLayout.xml\n# will be used as the name of the layout file.\n#\n# Note that if you run doxygen from a directory containing a file called\n# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE\n# tag is left empty.\n\nLAYOUT_FILE            =\n\n# The CITE_BIB_FILES tag can be used to specify one or more bib files containing\n# the reference definitions. This must be a list of .bib files. The .bib\n# extension is automatically appended if omitted. This requires the bibtex tool\n# to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info.\n# For LaTeX the style of the bibliography can be controlled using\n# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the\n# search path. See also \\cite for info how to create references.\n\nCITE_BIB_FILES         =\n\n#---------------------------------------------------------------------------\n# Configuration options related to warning and progress messages\n#---------------------------------------------------------------------------\n\n# The QUIET tag can be used to turn on/off the messages that are generated to\n# standard output by doxygen. If QUIET is set to YES this implies that the\n# messages are off.\n# The default value is: NO.\n\nQUIET                  = NO\n\n# The WARNINGS tag can be used to turn on/off the warning messages that are\n# generated to standard error (stderr) by doxygen. 
If WARNINGS is set to YES\n# this implies that the warnings are on.\n#\n# Tip: Turn warnings on while writing the documentation.\n# The default value is: YES.\n\nWARNINGS               = YES\n\n# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate\n# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag\n# will automatically be disabled.\n# The default value is: YES.\n\nWARN_IF_UNDOCUMENTED   = YES\n\n# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for\n# potential errors in the documentation, such as not documenting some parameters\n# in a documented function, or documenting parameters that don't exist or using\n# markup commands wrongly.\n# The default value is: YES.\n\nWARN_IF_DOC_ERROR      = YES\n\n# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that\n# are documented, but have no documentation for their parameters or return\n# value. If set to NO, doxygen will only warn about wrong or incomplete\n# parameter documentation, but not about the absence of documentation.\n# The default value is: NO.\n\nWARN_NO_PARAMDOC       = NO\n\n# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when\n# a warning is encountered.\n# The default value is: NO.\n\nWARN_AS_ERROR          = NO\n\n# The WARN_FORMAT tag determines the format of the warning messages that doxygen\n# can produce. The string should contain the $file, $line, and $text tags, which\n# will be replaced by the file and line number from which the warning originated\n# and the warning text. Optionally the format may contain $version, which will\n# be replaced by the version of the file (if it could be obtained via\n# FILE_VERSION_FILTER)\n# The default value is: $file:$line: $text.\n\nWARN_FORMAT            = \"$file:$line: $text\"\n\n# The WARN_LOGFILE tag can be used to specify a file to which warning and error\n# messages should be written. 
If left blank the output is written to standard\n# error (stderr).\n\nWARN_LOGFILE           =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the input files\n#---------------------------------------------------------------------------\n\n# The INPUT tag is used to specify the files and/or directories that contain\n# documented source files. You may enter file names like myfile.cpp or\n# directories like /usr/src/myproject. Separate the files or directories with\n# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING\n# Note: If this tag is empty the current directory is searched.\n\nINPUT                  = C CONTRIBUTING.md\n\n# This tag can be used to specify the character encoding of the source files\n# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses\n# libiconv (or the iconv built into libc) for the transcoding. See the libiconv\n# documentation (see: http://www.gnu.org/software/libiconv) for the list of\n# possible encodings.\n# The default value is: UTF-8.\n\nINPUT_ENCODING         = UTF-8\n\n# If the value of the INPUT tag contains directories, you can use the\n# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and\n# *.h) to filter out the source-files in the directories.\n#\n# Note that for custom extensions or not directly supported extensions you also\n# need to set EXTENSION_MAPPING for the extension otherwise the files are not\n# read by doxygen.\n#\n# If left blank the following patterns are tested:*.c, *.cc, *.cxx, *.cpp,\n# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,\n# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,\n# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f95, *.f03, *.f08,\n# *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf and *.qsf.\n\nFILE_PATTERNS          = *.c \\\n                         *.cc \\\n                         *.cxx \\\n                    
     *.cpp \\\n                         *.c++ \\\n                         *.java \\\n                         *.ii \\\n                         *.ixx \\\n                         *.ipp \\\n                         *.i++ \\\n                         *.inl \\\n                         *.idl \\\n                         *.ddl \\\n                         *.odl \\\n                         *.h \\\n                         *.hh \\\n                         *.hxx \\\n                         *.hpp \\\n                         *.h++ \\\n                         *.cs \\\n                         *.d \\\n                         *.php \\\n                         *.php4 \\\n                         *.php5 \\\n                         *.phtml \\\n                         *.inc \\\n                         *.m \\\n                         *.markdown \\\n                         *.md \\\n                         *.mm \\\n                         *.dox \\\n                         *.py \\\n                         *.pyw \\\n                         *.f90 \\\n                         *.f95 \\\n                         *.f03 \\\n                         *.f08 \\\n                         *.f \\\n                         *.for \\\n                         *.tcl \\\n                         *.vhd \\\n                         *.vhdl \\\n                         *.ucf \\\n                         *.qsf\n\n# The RECURSIVE tag can be used to specify whether or not subdirectories should\n# be searched for input files as well.\n# The default value is: NO.\n\nRECURSIVE              = YES\n\n# The EXCLUDE tag can be used to specify files and/or directories that should be\n# excluded from the INPUT source files. 
This way you can easily exclude a\n# subdirectory from a directory tree whose root is specified with the INPUT tag.\n#\n# Note that relative paths are relative to the directory from which doxygen is\n# run.\n\nEXCLUDE                = C/thirdparty C/common/include/exprtk.hpp\n\n# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or\n# directories that are symbolic links (a Unix file system feature) are excluded\n# from the input.\n# The default value is: NO.\n\nEXCLUDE_SYMLINKS       = NO\n\n# If the value of the INPUT tag contains directories, you can use the\n# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude\n# certain files from those directories.\n#\n# Note that the wildcards are matched against the file with absolute path, so to\n# exclude all test directories for example use the pattern */test/*\n\nEXCLUDE_PATTERNS       =\n\n# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names\n# (namespaces, classes, functions, etc.) that should be excluded from the\n# output. The symbol name can be a fully qualified name, a word, or if the\n# wildcard * is used, a substring. Examples: ANamespace, AClass,\n# AClass::ANamespace, ANamespace::*Test\n#\n# Note that the wildcards are matched against the file with absolute path, so to\n# exclude all test directories use the pattern */test/*\n\nEXCLUDE_SYMBOLS        =\n\n# The EXAMPLE_PATH tag can be used to specify one or more files or directories\n# that contain example code fragments that are included (see the \\include\n# command).\n\nEXAMPLE_PATH           =\n\n# If the value of the EXAMPLE_PATH tag contains directories, you can use the\n# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and\n# *.h) to filter out the source-files in the directories. 
If left blank all\n# files are included.\n\nEXAMPLE_PATTERNS       = *\n\n# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be\n# searched for input files to be used with the \\include or \\dontinclude commands\n# irrespective of the value of the RECURSIVE tag.\n# The default value is: NO.\n\nEXAMPLE_RECURSIVE      = NO\n\n# The IMAGE_PATH tag can be used to specify one or more files or directories\n# that contain images that are to be included in the documentation (see the\n# \\image command).\n\nIMAGE_PATH             =\n\n# The INPUT_FILTER tag can be used to specify a program that doxygen should\n# invoke to filter for each input file. Doxygen will invoke the filter program\n# by executing (via popen()) the command:\n#\n# <filter> <input-file>\n#\n# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the\n# name of an input file. Doxygen will then use the output that the filter\n# program writes to standard output. If FILTER_PATTERNS is specified, this tag\n# will be ignored.\n#\n# Note that the filter must not add or remove lines; it is applied before the\n# code is scanned, but not when the output code is generated. If lines are added\n# or removed, the anchors will not be placed correctly.\n#\n# Note that for custom extensions or not directly supported extensions you also\n# need to set EXTENSION_MAPPING for the extension otherwise the files are not\n# properly processed by doxygen.\n\nINPUT_FILTER           =\n\n# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern\n# basis. Doxygen will compare the file name with each pattern and apply the\n# filter if there is a match. The filters are a list of the form: pattern=filter\n# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how\n# filters are used. 
If the FILTER_PATTERNS tag is empty or if none of the\n# patterns match the file name, INPUT_FILTER is applied.\n#\n# Note that for custom extensions or not directly supported extensions you also\n# need to set EXTENSION_MAPPING for the extension otherwise the files are not\n# properly processed by doxygen.\n\nFILTER_PATTERNS        =\n\n# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using\n# INPUT_FILTER) will also be used to filter the input files that are used for\n# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).\n# The default value is: NO.\n\nFILTER_SOURCE_FILES    = NO\n\n# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file\n# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and\n# it is also possible to disable source filtering for a specific pattern using\n# *.ext= (so without naming a filter).\n# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.\n\nFILTER_SOURCE_PATTERNS =\n\n# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that\n# is part of the input, its contents will be placed on the main page\n# (index.html). This can be useful if you have a project on for instance GitHub\n# and want to reuse the introduction page also for the doxygen output.\n\nUSE_MDFILE_AS_MAINPAGE = CONTRIBUTING.md\n\n#---------------------------------------------------------------------------\n# Configuration options related to source browsing\n#---------------------------------------------------------------------------\n\n# If the SOURCE_BROWSER tag is set to YES then a list of source files will be\n# generated. 
Documented entities will be cross-referenced with these sources.\n#\n# Note: To get rid of all source code in the generated output, make sure that\n# also VERBATIM_HEADERS is set to NO.\n# The default value is: NO.\n\nSOURCE_BROWSER         = NO\n\n# Setting the INLINE_SOURCES tag to YES will include the body of functions,\n# classes and enums directly into the documentation.\n# The default value is: NO.\n\nINLINE_SOURCES         = NO\n\n# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any\n# special comment blocks from generated source code fragments. Normal C, C++ and\n# Fortran comments will always remain visible.\n# The default value is: YES.\n\nSTRIP_CODE_COMMENTS    = YES\n\n# If the REFERENCED_BY_RELATION tag is set to YES then for each documented\n# function all documented functions referencing it will be listed.\n# The default value is: NO.\n\nREFERENCED_BY_RELATION = NO\n\n# If the REFERENCES_RELATION tag is set to YES then for each documented function\n# all documented entities called/used by that function will be listed.\n# The default value is: NO.\n\nREFERENCES_RELATION    = NO\n\n# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set\n# to YES then the hyperlinks from functions in REFERENCES_RELATION and\n# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will\n# link to the documentation.\n# The default value is: YES.\n\nREFERENCES_LINK_SOURCE = YES\n\n# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the\n# source code will show a tooltip with additional information such as prototype,\n# brief description and links to the definition and documentation. 
Since this\n# will make the HTML file larger and loading of large files a bit slower, you\n# can opt to disable this feature.\n# The default value is: YES.\n# This tag requires that the tag SOURCE_BROWSER is set to YES.\n\nSOURCE_TOOLTIPS        = YES\n\n# If the USE_HTAGS tag is set to YES then the references to source code will\n# point to the HTML generated by the htags(1) tool instead of doxygen's built-in\n# source browser. The htags tool is part of GNU's global source tagging system\n# (see http://www.gnu.org/software/global/global.html). You will need version\n# 4.8.6 or higher.\n#\n# To use it do the following:\n# - Install the latest version of global\n# - Enable SOURCE_BROWSER and USE_HTAGS in the config file\n# - Make sure the INPUT points to the root of the source tree\n# - Run doxygen as normal\n#\n# Doxygen will invoke htags (and that will in turn invoke gtags), so these\n# tools must be available from the command line (i.e. in the search path).\n#\n# The result: instead of the source browser generated by doxygen, the links to\n# source code will now point to the output of htags.\n# The default value is: NO.\n# This tag requires that the tag SOURCE_BROWSER is set to YES.\n\nUSE_HTAGS              = NO\n\n# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a\n# verbatim copy of the header file for each class for which an include is\n# specified. Set to NO to disable this.\n# See also: Section \class.\n# The default value is: YES.\n\nVERBATIM_HEADERS       = YES\n\n# If the CLANG_ASSISTED_PARSING tag is set to YES then doxygen will use the\n# clang parser (see: http://clang.llvm.org/) for more accurate parsing at the\n# cost of reduced performance. 
This can be particularly helpful with template\n# rich C++ code for which doxygen's built-in parser lacks the necessary type\n# information.\n# Note: The availability of this option depends on whether or not doxygen was\n# generated with the -Duse-libclang=ON option for CMake.\n# The default value is: NO.\n\nCLANG_ASSISTED_PARSING = NO\n\n# If clang assisted parsing is enabled you can provide the compiler with command\n# line options that you would normally use when invoking the compiler. Note that\n# the include paths will already be set by doxygen for the files and directories\n# specified with INPUT and INCLUDE_PATH.\n# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.\n\nCLANG_OPTIONS          =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the alphabetical class index\n#---------------------------------------------------------------------------\n\n# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all\n# compounds will be generated. Enable this if the project contains a lot of\n# classes, structs, unions or interfaces.\n# The default value is: YES.\n\nALPHABETICAL_INDEX     = YES\n\n# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in\n# which the alphabetical index list will be split.\n# Minimum value: 1, maximum value: 20, default value: 5.\n# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.\n\nCOLS_IN_ALPHA_INDEX    = 5\n\n# In case all classes in a project start with a common prefix, all classes will\n# be put under the same header in the alphabetical index. 
The IGNORE_PREFIX tag\n# can be used to specify a prefix (or a list of prefixes) that should be ignored\n# while generating the index headers.\n# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.\n\nIGNORE_PREFIX          =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the HTML output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output.\n# The default value is: YES.\n\nGENERATE_HTML          = YES\n\n# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: html.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_OUTPUT            = html\n\n# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each\n# generated HTML page (for example: .htm, .php, .asp).\n# The default value is: .html.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_FILE_EXTENSION    = .html\n\n# The HTML_HEADER tag can be used to specify a user-defined HTML header file for\n# each generated HTML page. If the tag is left blank doxygen will generate a\n# standard header.\n#\n# To get valid HTML you need a header file that includes any scripts and style\n# sheets that doxygen needs, which depend on the configuration options used\n# (e.g. the setting GENERATE_TREEVIEW). It is highly recommended to start with a\n# default header using\n# doxygen -w html new_header.html new_footer.html new_stylesheet.css\n# YourConfigFile\n# and then modify the file new_header.html. 
See also section \"Doxygen usage\"\n# for information on how to generate the default header that doxygen normally\n# uses.\n# Note: The header is subject to change so you typically have to regenerate the\n# default header when upgrading to a newer version of doxygen. For a description\n# of the possible markers and block names see the documentation.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_HEADER            =\n\n# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each\n# generated HTML page. If the tag is left blank doxygen will generate a standard\n# footer. See HTML_HEADER for more information on how to generate a default\n# footer and what special commands can be used inside the footer. See also\n# section \"Doxygen usage\" for information on how to generate the default footer\n# that doxygen normally uses.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_FOOTER            =\n\n# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style\n# sheet that is used by each HTML page. It can be used to fine-tune the look of\n# the HTML output. If left blank doxygen will generate a default style sheet.\n# See also section \"Doxygen usage\" for information on how to generate the style\n# sheet that doxygen normally uses.\n# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as\n# it is more robust and this tag (HTML_STYLESHEET) will in the future become\n# obsolete.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_STYLESHEET        =\n\n# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined\n# cascading style sheets that are included after the standard style sheets\n# created by doxygen. 
Using this option one can overrule certain style aspects.\n# This is preferred over using HTML_STYLESHEET since it does not replace the\n# standard style sheet and is therefore more robust against future updates.\n# Doxygen will copy the style sheet files to the output directory.\n# Note: The order of the extra style sheet files is of importance (e.g. the last\n# style sheet in the list overrules the setting of the previous ones in the\n# list). For an example see the documentation.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_EXTRA_STYLESHEET  =\n\n# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or\n# other source files which should be copied to the HTML output directory. Note\n# that these files will be copied to the base HTML output directory. Use the\n# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these\n# files. In the HTML_STYLESHEET file, use the file name only. Also note that the\n# files will be copied as-is; there are no commands or markers available.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_EXTRA_FILES       =\n\n# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen\n# will adjust the colors in the style sheet and background images according to\n# this color. Hue is specified as an angle on a colorwheel, see\n# http://en.wikipedia.org/wiki/Hue for more information. For instance the value\n# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300\n# purple, and 360 is red again.\n# Minimum value: 0, maximum value: 359, default value: 220.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_HUE    = 220\n\n# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors\n# in the HTML output. For a value of 0 the output will use grayscales only. 
A\n# value of 255 will produce the most vivid colors.\n# Minimum value: 0, maximum value: 255, default value: 100.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_SAT    = 100\n\n# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the\n# luminance component of the colors in the HTML output. Values below 100\n# gradually make the output lighter, whereas values above 100 make the output\n# darker. The value divided by 100 is the actual gamma applied, so 80 represents\n# a gamma of 0.8. The value 220 represents a gamma of 2.2, and 100 does not\n# change the gamma.\n# Minimum value: 40, maximum value: 240, default value: 80.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_GAMMA  = 80\n\n# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML\n# page will contain the date and time when the page was generated. Setting this\n# to YES can help to show when doxygen was last run and thus if the\n# documentation is up to date.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_TIMESTAMP         = NO\n\n# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML\n# documentation will contain sections that can be hidden and shown after the\n# page has loaded.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_DYNAMIC_SECTIONS  = NO\n\n# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries\n# shown in the various tree structured indices initially; the user can expand\n# and collapse entries dynamically later on. Doxygen will expand the tree to\n# such a level that at most the specified number of entries are visible (unless\n# a fully collapsed tree already exceeds this amount). So setting the number of\n# entries to 1 will produce a fully collapsed tree by default. 
0 is a special value\n# representing an infinite number of entries and will result in a full expanded\n# tree by default.\n# Minimum value: 0, maximum value: 9999, default value: 100.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_INDEX_NUM_ENTRIES = 100\n\n# If the GENERATE_DOCSET tag is set to YES, additional index files will be\n# generated that can be used as input for Apple's Xcode 3 integrated development\n# environment (see: http://developer.apple.com/tools/xcode/), introduced with\n# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a\n# Makefile in the HTML output directory. Running make will produce the docset in\n# that directory and running make install will install the docset in\n# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at\n# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html\n# for more information.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_DOCSET        = NO\n\n# This tag determines the name of the docset feed. A documentation feed provides\n# an umbrella under which multiple documentation sets from a single provider\n# (such as a company or product suite) can be grouped.\n# The default value is: Doxygen generated docs.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_FEEDNAME        = \"Doxygen generated docs\"\n\n# This tag specifies a string that should uniquely identify the documentation\n# set bundle. This should be a reverse domain-name style string, e.g.\n# com.mycompany.MyDocSet. Doxygen will append .docset to the name.\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_BUNDLE_ID       = org.doxygen.Project\n\n# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify\n# the documentation publisher. This should be a reverse domain-name style\n# string, e.g. 
com.mycompany.MyDocSet.documentation.\n# The default value is: org.doxygen.Publisher.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_PUBLISHER_ID    = org.doxygen.Publisher\n\n# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.\n# The default value is: Publisher.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_PUBLISHER_NAME  = Publisher\n\n# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three\n# additional HTML index files: index.hhp, index.hhc, and index.hhk. The\n# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop\n# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on\n# Windows.\n#\n# The HTML Help Workshop contains a compiler that can convert all HTML output\n# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML\n# files are now used as the Windows 98 help format, and will replace the old\n# Windows help format (.hlp) on all Windows platforms in the future. Compressed\n# HTML files also contain an index, a table of contents, and you can search for\n# words in the documentation. The HTML workshop also contains a viewer for\n# compressed HTML files.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_HTMLHELP      = NO\n\n# The CHM_FILE tag can be used to specify the file name of the resulting .chm\n# file. You can add a path in front of the file if the result should not be\n# written to the html output directory.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nCHM_FILE               =\n\n# The HHC_LOCATION tag can be used to specify the location (absolute path\n# including file name) of the HTML help compiler (hhc.exe). 
If non-empty,\n# doxygen will try to run the HTML help compiler on the generated index.hhp.\n# The file has to be specified with full path.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nHHC_LOCATION           =\n\n# The GENERATE_CHI flag controls if a separate .chi index file is generated\n# (YES) or that it should be included in the master .chm file (NO).\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nGENERATE_CHI           = NO\n\n# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)\n# and project file content.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nCHM_INDEX_ENCODING     =\n\n# The BINARY_TOC flag controls whether a binary table of contents is generated\n# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it\n# enables the Previous and Next buttons.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nBINARY_TOC             = NO\n\n# The TOC_EXPAND flag can be set to YES to add extra items for group members to\n# the table of contents of the HTML help documentation and to the tree view.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nTOC_EXPAND             = NO\n\n# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and\n# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that\n# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help\n# (.qch) of the generated HTML documentation.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_QHP           = NO\n\n# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify\n# the file name of the resulting .qch file. 
The path specified is relative to\n# the HTML output folder.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQCH_FILE               =\n\n# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help\n# Project output. For more information please see Qt Help Project / Namespace\n# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_NAMESPACE          = org.doxygen.Project\n\n# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt\n# Help Project output. For more information please see Qt Help Project / Virtual\n# Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-\n# folders).\n# The default value is: doc.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_VIRTUAL_FOLDER     = doc\n\n# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom\n# filter to add. For more information please see Qt Help Project / Custom\n# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-\n# filters).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_CUST_FILTER_NAME   =\n\n# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the\n# custom filter to add. For more information please see Qt Help Project / Custom\n# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-\n# filters).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_CUST_FILTER_ATTRS  =\n\n# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this\n# project's filter section matches. 
Qt Help Project / Filter Attributes (see:\n# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_SECT_FILTER_ATTRS  =\n\n# The QHG_LOCATION tag can be used to specify the location of Qt's\n# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the\n# generated .qhp file.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHG_LOCATION           =\n\n# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be\n# generated, together with the HTML files, they form an Eclipse help plugin. To\n# install this plugin and make it available under the help contents menu in\n# Eclipse, the contents of the directory containing the HTML and XML files needs\n# to be copied into the plugins directory of eclipse. The name of the directory\n# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.\n# After copying Eclipse needs to be restarted before the help appears.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_ECLIPSEHELP   = NO\n\n# A unique identifier for the Eclipse help plugin. When installing the plugin\n# the directory name containing the HTML and XML files should also have this\n# name. Each documentation set should have its own identifier.\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.\n\nECLIPSE_DOC_ID         = org.doxygen.Project\n\n# If you want full control over the layout of the generated HTML pages it might\n# be necessary to disable the index and replace it with your own. The\n# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top\n# of each HTML page. A value of NO enables the index and the value YES disables\n# it. 
Since the tabs in the index contain the same information as the navigation\n# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nDISABLE_INDEX          = NO\n\n# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index\n# structure should be generated to display hierarchical information. If the tag\n# value is set to YES, a side panel will be generated containing a tree-like\n# index structure (just like the one that is generated for HTML Help). For this\n# to work a browser that supports JavaScript, DHTML, CSS and frames is required\n# (i.e. any modern browser). Windows users are probably better off using the\n# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can\n# further fine-tune the look of the index. As an example, the default style\n# sheet generated by doxygen has an example that shows how to put an image at\n# the root of the tree instead of the PROJECT_NAME. 
Since the tree basically has\n# the same information as the tab index, you could consider setting\n# DISABLE_INDEX to YES when enabling this option.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_TREEVIEW      = YES\n\n# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that\n# doxygen will group on one line in the generated HTML documentation.\n#\n# Note that a value of 0 will completely suppress the enum values from appearing\n# in the overview section.\n# Minimum value: 0, maximum value: 20, default value: 4.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nENUM_VALUES_PER_LINE   = 4\n\n# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used\n# to set the initial width (in pixels) of the frame in which the tree is shown.\n# Minimum value: 0, maximum value: 1500, default value: 250.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nTREEVIEW_WIDTH         = 250\n\n# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to\n# external symbols imported via tag files in a separate window.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nEXT_LINKS_IN_WINDOW    = NO\n\n# Use this tag to change the font size of LaTeX formulas included as images in\n# the HTML documentation. When you change the font size after a successful\n# doxygen run you need to manually remove any form_*.png images from the HTML\n# output directory to force them to be regenerated.\n# Minimum value: 8, maximum value: 50, default value: 10.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nFORMULA_FONTSIZE       = 10\n\n# Use the FORMULA_TRANSPARENT tag to determine whether or not the images\n# generated for formulas are transparent PNGs. 
Transparent PNGs are not\n# supported properly for IE 6.0, but are supported on all modern browsers.\n#\n# Note that when changing this option you need to delete any form_*.png files in\n# the HTML output directory before the changes have effect.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nFORMULA_TRANSPARENT    = YES\n\n# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see\n# http://www.mathjax.org) which uses client side Javascript for the rendering\n# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX\n# installed or if you want formulas to look prettier in the HTML output. When\n# enabled you may also need to install MathJax separately and configure the path\n# to it using the MATHJAX_RELPATH option.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nUSE_MATHJAX            = NO\n\n# When MathJax is enabled you can set the default output format to be used for\n# the MathJax output. See the MathJax site (see:\n# http://docs.mathjax.org/en/latest/output.html) for more details.\n# Possible values are: HTML-CSS (which is slower, but has the best\n# compatibility), NativeMML (i.e. MathML) and SVG.\n# The default value is: HTML-CSS.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_FORMAT         = HTML-CSS\n\n# When MathJax is enabled you need to specify the location relative to the HTML\n# output directory using the MATHJAX_RELPATH option. The destination directory\n# should contain the MathJax.js script. For instance, if the mathjax directory\n# is located at the same level as the HTML output directory, then\n# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax\n# Content Delivery Network so you can quickly see the result without installing\n# MathJax. 
However, it is strongly recommended to install a local copy of\n# MathJax from http://www.mathjax.org before deployment.\n# The default value is: http://cdn.mathjax.org/mathjax/latest.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_RELPATH        = http://cdn.mathjax.org/mathjax/latest\n\n# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax\n# extension names that should be enabled during MathJax rendering. For example\n# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_EXTENSIONS     =\n\n# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces\n# of code that will be used on startup of the MathJax code. See the MathJax site\n# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an\n# example see the documentation.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_CODEFILE       =\n\n# When the SEARCHENGINE tag is enabled doxygen will generate a search box for\n# the HTML output. The underlying search engine uses javascript and DHTML and\n# should work on any modern browser. Note that when using HTML help\n# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)\n# there is already a search function so this one should typically be disabled.\n# For large projects the javascript based search engine can be slow, then\n# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to\n# search using the keyboard; to jump to the search box use <access key> + S\n# (what the <access key> is depends on the OS and browser, but it is typically\n# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down\n# key> to jump into the search results window, the results can be navigated\n# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel\n# the search. 
The filter options can be selected when the cursor is inside the\n# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>\n# to select a filter and <Enter> or <escape> to activate or cancel the filter\n# option.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nSEARCHENGINE           = YES\n\n# When the SERVER_BASED_SEARCH tag is enabled the search engine will be\n# implemented using a web server instead of a web client using Javascript. There\n# are two flavors of web server based searching depending on the EXTERNAL_SEARCH\n# setting. When disabled, doxygen will generate a PHP script for searching and\n# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing\n# and searching needs to be provided by external tools. See the section\n# \"External Indexing and Searching\" for details.\n# The default value is: NO.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSERVER_BASED_SEARCH    = NO\n\n# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP\n# script for searching. Instead the search results are written to an XML file\n# which needs to be processed by an external indexer. 
Doxygen will invoke an\n# external search engine pointed to by the SEARCHENGINE_URL option to obtain the\n# search results.\n#\n# Doxygen ships with an example indexer (doxyindexer) and search engine\n# (doxysearch.cgi) which are based on the open source search engine library\n# Xapian (see: http://xapian.org/).\n#\n# See the section \"External Indexing and Searching\" for details.\n# The default value is: NO.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTERNAL_SEARCH        = NO\n\n# The SEARCHENGINE_URL should point to a search engine hosted by a web server\n# which will return the search results when EXTERNAL_SEARCH is enabled.\n#\n# Doxygen ships with an example indexer (doxyindexer) and search engine\n# (doxysearch.cgi) which are based on the open source search engine library\n# Xapian (see: http://xapian.org/). See the section \"External Indexing and\n# Searching\" for details.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSEARCHENGINE_URL       =\n\n# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed\n# search data is written to a file for indexing by an external tool. With the\n# SEARCHDATA_FILE tag the name of this file can be specified.\n# The default file is: searchdata.xml.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSEARCHDATA_FILE        = searchdata.xml\n\n# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the\n# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is\n# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple\n# projects and redirect the results back to the right project.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTERNAL_SEARCH_ID     =\n\n# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen\n# projects other than the one defined by this configuration file, but that are\n# all added to the same external search index. 
Each project needs to have a\n# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id\n# to a relative location where the documentation can be found. The format is:\n# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTRA_SEARCH_MAPPINGS  =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the LaTeX output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.\n# The default value is: YES.\n\nGENERATE_LATEX         = YES\n\n# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: latex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_OUTPUT           = latex\n\n# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be\n# invoked.\n#\n# Note that when enabling USE_PDFLATEX this option is only used for generating\n# bitmaps for formulas in the HTML output, but not in the Makefile that is\n# written to the output directory.\n# The default file is: latex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_CMD_NAME         = latex\n\n# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate\n# index for LaTeX.\n# The default file is: makeindex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nMAKEINDEX_CMD_NAME     = makeindex\n\n# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX\n# documents. 
This may be useful for small projects and may help to save some\n# trees in general.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nCOMPACT_LATEX          = NO\n\n# The PAPER_TYPE tag can be used to set the paper type that is used by the\n# printer.\n# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x\n# 14 inches) and executive (7.25 x 10.5 inches).\n# The default value is: a4.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nPAPER_TYPE             = a4\n\n# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names\n# that should be included in the LaTeX output. The package can be specified just\n# by its name or with the correct syntax as to be used with the LaTeX\n# \\usepackage command. To get the times font for instance you can specify :\n# EXTRA_PACKAGES=times or EXTRA_PACKAGES={times}\n# To use the option intlimits with the amsmath package you can specify:\n# EXTRA_PACKAGES=[intlimits]{amsmath}\n# If left blank no extra packages will be included.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nEXTRA_PACKAGES         =\n\n# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the\n# generated LaTeX document. The header should contain everything until the first\n# chapter. If it is left blank doxygen will generate a standard header. See\n# section \"Doxygen usage\" for information on how to let doxygen write the\n# default header to a separate file.\n#\n# Note: Only use a user-defined header if you know what you are doing! The\n# following commands have a special meaning inside the header: $title,\n# $datetime, $date, $doxygenversion, $projectname, $projectnumber,\n# $projectbrief, $projectlogo. 
Doxygen will replace $title with the empty\n# string, for the replacement values of the other commands the user is referred\n# to HTML_HEADER.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_HEADER           =\n\n# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the\n# generated LaTeX document. The footer should contain everything after the last\n# chapter. If it is left blank doxygen will generate a standard footer. See\n# LATEX_HEADER for more information on how to generate a default footer and what\n# special commands can be used inside the footer.\n#\n# Note: Only use a user-defined footer if you know what you are doing!\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_FOOTER           =\n\n# The LATEX_EXTRA_STYLESHEET tag can be used to specify additional user-defined\n# LaTeX style sheets that are included after the standard style sheets created\n# by doxygen. Using this option one can overrule certain style aspects. Doxygen\n# will copy the style sheet files to the output directory.\n# Note: The order of the extra style sheet files is of importance (e.g. the last\n# style sheet in the list overrules the setting of the previous ones in the\n# list).\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_EXTRA_STYLESHEET =\n\n# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or\n# other source files which should be copied to the LATEX_OUTPUT output\n# directory. Note that the files will be copied as-is; there are no commands or\n# markers available.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_EXTRA_FILES      =\n\n# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is\n# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will\n# contain links (just like the HTML output) instead of page references. 
This\n# makes the output suitable for online browsing using a PDF viewer.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nPDF_HYPERLINKS         = YES\n\n# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate\n# the PDF file directly from the LaTeX files. Set this option to YES, to get a\n# higher quality PDF documentation.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nUSE_PDFLATEX           = YES\n\n# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode\n# command to the generated LaTeX files. This will instruct LaTeX to keep running\n# if errors occur, instead of asking the user for help. This option is also used\n# when generating formulas in HTML.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_BATCHMODE        = NO\n\n# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the\n# index chapters (such as File Index, Compound Index, etc.) in the output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_HIDE_INDICES     = NO\n\n# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source\n# code with syntax highlighting in the LaTeX output.\n#\n# Note that which sources are shown also depends on other settings such as\n# SOURCE_BROWSER.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_SOURCE_CODE      = NO\n\n# The LATEX_BIB_STYLE tag can be used to specify the style to use for the\n# bibliography, e.g. plainnat, or ieeetr. 
See\n# http://en.wikipedia.org/wiki/BibTeX and \\cite for more info.\n# The default value is: plain.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_BIB_STYLE        = plain\n\n# If the LATEX_TIMESTAMP tag is set to YES then the footer of each generated\n# page will contain the date and time when the page was generated. Setting this\n# to NO can help when comparing the output of multiple runs.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_TIMESTAMP        = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the RTF output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The\n# RTF output is optimized for Word 97 and may not look too pretty with other RTF\n# readers/editors.\n# The default value is: NO.\n\nGENERATE_RTF           = NO\n\n# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: rtf.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_OUTPUT             = rtf\n\n# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF\n# documents. This may be useful for small projects and may help to save some\n# trees in general.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nCOMPACT_RTF            = NO\n\n# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will\n# contain hyperlink fields. The RTF file will contain links (just like the HTML\n# output) instead of page references. 
This makes the output suitable for online\n# browsing using Word or some other Word compatible readers that support those\n# fields.\n#\n# Note: WordPad (write) and others do not support links.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_HYPERLINKS         = NO\n\n# Load stylesheet definitions from file. Syntax is similar to doxygen's config\n# file, i.e. a series of assignments. You only have to provide replacements,\n# missing definitions are set to their default value.\n#\n# See also section \"Doxygen usage\" for information on how to generate the\n# default style sheet that doxygen normally uses.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_STYLESHEET_FILE    =\n\n# Set optional variables used in the generation of an RTF document. Syntax is\n# similar to doxygen's config file. A template extensions file can be generated\n# using doxygen -e rtf extensionFile.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_EXTENSIONS_FILE    =\n\n# If the RTF_SOURCE_CODE tag is set to YES then doxygen will include source code\n# with syntax highlighting in the RTF output.\n#\n# Note that which sources are shown also depends on other settings such as\n# SOURCE_BROWSER.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_SOURCE_CODE        = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the man page output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for\n# classes and files.\n# The default value is: NO.\n\nGENERATE_MAN           = NO\n\n# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it. 
A directory man3 will be created inside the directory specified by\n# MAN_OUTPUT.\n# The default directory is: man.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_OUTPUT             = man\n\n# The MAN_EXTENSION tag determines the extension that is added to the generated\n# man pages. In case the manual section does not start with a number, the number\n# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is\n# optional.\n# The default value is: .3.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_EXTENSION          = .3\n\n# The MAN_SUBDIR tag determines the name of the directory created within\n# MAN_OUTPUT in which the man pages are placed. It defaults to man followed by\n# MAN_EXTENSION with the initial . removed.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_SUBDIR             =\n\n# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it\n# will generate one additional man file for each entity documented in the real\n# man page(s). These additional files only source the real man page, but without\n# them the man command would be unable to find the correct page.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_LINKS              = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the XML output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that\n# captures the structure of the code including all documentation.\n# The default value is: NO.\n\nGENERATE_XML           = NO\n\n# The XML_OUTPUT tag is used to specify where the XML pages will be put. 
If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: xml.\n# This tag requires that the tag GENERATE_XML is set to YES.\n\nXML_OUTPUT             = xml\n\n# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program\n# listings (including syntax highlighting and cross-referencing information) to\n# the XML output. Note that enabling this will significantly increase the size\n# of the XML output.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_XML is set to YES.\n\nXML_PROGRAMLISTING     = YES\n\n#---------------------------------------------------------------------------\n# Configuration options related to the DOCBOOK output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files\n# that can be used to generate PDF.\n# The default value is: NO.\n\nGENERATE_DOCBOOK       = NO\n\n# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.\n# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in\n# front of it.\n# The default directory is: docbook.\n# This tag requires that the tag GENERATE_DOCBOOK is set to YES.\n\nDOCBOOK_OUTPUT         = docbook\n\n# If the DOCBOOK_PROGRAMLISTING tag is set to YES, doxygen will include the\n# program listings (including syntax highlighting and cross-referencing\n# information) to the DOCBOOK output. 
Note that enabling this will significantly\n# increase the size of the DOCBOOK output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_DOCBOOK is set to YES.\n\nDOCBOOK_PROGRAMLISTING = NO\n\n#---------------------------------------------------------------------------\n# Configuration options for the AutoGen Definitions output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an\n# AutoGen Definitions (see http://autogen.sf.net) file that captures the\n# structure of the code including all documentation. Note that this feature is\n# still experimental and incomplete at the moment.\n# The default value is: NO.\n\nGENERATE_AUTOGEN_DEF   = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the Perl module output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module\n# file that captures the structure of the code including all documentation.\n#\n# Note that this feature is still experimental and incomplete at the moment.\n# The default value is: NO.\n\nGENERATE_PERLMOD       = NO\n\n# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary\n# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI\n# output from the Perl module output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_LATEX          = NO\n\n# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely\n# formatted so it can be parsed by a human reader. This is useful if you want to\n# understand what is going on. 
On the other hand, if this tag is set to NO, the\n# size of the Perl module output will be much smaller and Perl will parse it\n# just the same.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_PRETTY         = YES\n\n# The names of the make variables in the generated doxyrules.make file are\n# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful\n# so different doxyrules.make files included by the same Makefile don't\n# overwrite each other's variables.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_MAKEVAR_PREFIX =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the preprocessor\n#---------------------------------------------------------------------------\n\n# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all\n# C-preprocessor directives found in the sources and include files.\n# The default value is: YES.\n\nENABLE_PREPROCESSING   = YES\n\n# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names\n# in the source code. If set to NO, only conditional compilation will be\n# performed. 
Macro expansion can be done in a controlled way by setting\n# EXPAND_ONLY_PREDEF to YES.\n# The default value is: NO.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nMACRO_EXPANSION        = NO\n\n# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then\n# the macro expansion is limited to the macros specified with the PREDEFINED and\n# EXPAND_AS_DEFINED tags.\n# The default value is: NO.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nEXPAND_ONLY_PREDEF     = NO\n\n# If the SEARCH_INCLUDES tag is set to YES, the include files in the\n# INCLUDE_PATH will be searched if a #include is found.\n# The default value is: YES.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nSEARCH_INCLUDES        = YES\n\n# The INCLUDE_PATH tag can be used to specify one or more directories that\n# contain include files that are not input files but should be processed by the\n# preprocessor.\n# This tag requires that the tag SEARCH_INCLUDES is set to YES.\n\nINCLUDE_PATH           =\n\n# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard\n# patterns (like *.h and *.hpp) to filter out the header-files in the\n# directories. If left blank, the patterns specified with FILE_PATTERNS will be\n# used.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nINCLUDE_FILE_PATTERNS  =\n\n# The PREDEFINED tag can be used to specify one or more macro names that are\n# defined before the preprocessor is started (similar to the -D option of e.g.\n# gcc). The argument of the tag is a list of macros of the form: name or\n# name=definition (no spaces). If the definition and the \"=\" are omitted, \"=1\"\n# is assumed. 
To prevent a macro definition from being undefined via #undef or\n# recursively expanded use the := operator instead of the = operator.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nPREDEFINED             =\n\n# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this\n# tag can be used to specify a list of macro names that should be expanded. The\n# macro definition that is found in the sources will be used. Use the PREDEFINED\n# tag if you want to use a different macro definition that overrules the\n# definition found in the source code.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nEXPAND_AS_DEFINED      =\n\n# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will\n# remove all references to function-like macros that are alone on a line, have\n# an all uppercase name, and do not end with a semicolon. Such function macros\n# are typically used for boiler-plate code, and will confuse the parser if not\n# removed.\n# The default value is: YES.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nSKIP_FUNCTION_MACROS   = YES\n\n#---------------------------------------------------------------------------\n# Configuration options related to external references\n#---------------------------------------------------------------------------\n\n# The TAGFILES tag can be used to specify one or more tag files. For each tag\n# file the location of the external documentation should be added. The format of\n# a tag file without this location is as follows:\n# TAGFILES = file1 file2 ...\n# Adding location for the tag files is done as follows:\n# TAGFILES = file1=loc1 \"file2 = loc2\" ...\n# where loc1 and loc2 can be relative or absolute paths or URLs. See the\n# section \"Linking to external documentation\" for more information about the use\n# of tag files.\n# Note: Each tag file must have a unique name (where the name does NOT include\n# the path). 
If a tag file is not located in the directory in which doxygen is\n# run, you must also specify the path to the tagfile here.\n\nTAGFILES               =\n\n# When a file name is specified after GENERATE_TAGFILE, doxygen will create a\n# tag file that is based on the input files it reads. See section \"Linking to\n# external documentation\" for more information about the usage of tag files.\n\nGENERATE_TAGFILE       =\n\n# If the ALLEXTERNALS tag is set to YES, all external classes will be listed in\n# the class index. If set to NO, only the inherited external classes will be\n# listed.\n# The default value is: NO.\n\nALLEXTERNALS           = NO\n\n# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed\n# in the modules index. If set to NO, only the current project's groups will be\n# listed.\n# The default value is: YES.\n\nEXTERNAL_GROUPS        = YES\n\n# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in\n# the related pages index. If set to NO, only the current project's pages will\n# be listed.\n# The default value is: YES.\n\nEXTERNAL_PAGES         = YES\n\n# The PERL_PATH should be the absolute path and name of the perl script\n# interpreter (i.e. the result of 'which perl').\n# The default file (with absolute path) is: /usr/bin/perl.\n\nPERL_PATH              = /usr/bin/perl\n\n#---------------------------------------------------------------------------\n# Configuration options related to the dot tool\n#---------------------------------------------------------------------------\n\n# If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a class diagram\n# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to\n# NO turns the diagrams off. 
Note that this option also works with HAVE_DOT\n# disabled, but it is recommended to install and use dot, since it yields more\n# powerful graphs.\n# The default value is: YES.\n\nCLASS_DIAGRAMS         = YES\n\n# You can define message sequence charts within doxygen comments using the \\msc\n# command. Doxygen will then run the mscgen tool (see:\n# http://www.mcternan.me.uk/mscgen/) to produce the chart and insert it in the\n# documentation. The MSCGEN_PATH tag allows you to specify the directory where\n# the mscgen tool resides. If left empty the tool is assumed to be found in the\n# default search path.\n\nMSCGEN_PATH            =\n\n# You can include diagrams made with dia in doxygen documentation. Doxygen will\n# then run dia to produce the diagram and insert it in the documentation. The\n# DIA_PATH tag allows you to specify the directory where the dia binary resides.\n# If left empty dia is assumed to be found in the default search path.\n\nDIA_PATH               =\n\n# If set to YES the inheritance and collaboration graphs will hide inheritance\n# and usage relations if the target is undocumented or is not a class.\n# The default value is: YES.\n\nHIDE_UNDOC_RELATIONS   = YES\n\n# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is\n# available from the path. This tool is part of Graphviz (see:\n# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent\n# Bell Labs. The other options in this section have no effect if this option is\n# set to NO.\n# The default value is: YES.\n\nHAVE_DOT               = YES\n\n# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed\n# to run in parallel. When set to 0 doxygen will base this on the number of\n# processors available in the system. 
You can set it explicitly to a value\n# larger than 0 to get control over the balance between CPU load and processing\n# speed.\n# Minimum value: 0, maximum value: 32, default value: 0.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_NUM_THREADS        = 0\n\n# When you want a differently looking font in the dot files that doxygen\n# generates you can specify the font name using DOT_FONTNAME. You need to make\n# sure dot is able to find the font, which can be done by putting it in a\n# standard location or by setting the DOTFONTPATH environment variable or by\n# setting DOT_FONTPATH to the directory containing the font.\n# The default value is: Helvetica.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTNAME           = Helvetica\n\n# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of\n# dot graphs.\n# Minimum value: 4, maximum value: 24, default value: 10.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTSIZE           = 10\n\n# By default doxygen will tell dot to use the default font as specified with\n# DOT_FONTNAME. 
If you specify a different font using DOT_FONTNAME you can set\n# the path where dot can find it using this tag.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTPATH           =\n\n# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for\n# each documented class showing the direct and indirect inheritance relations.\n# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCLASS_GRAPH            = YES\n\n# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a\n# graph for each documented class showing the direct and indirect implementation\n# dependencies (inheritance, containment, and class references variables) of the\n# class with other documented classes.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCOLLABORATION_GRAPH    = YES\n\n# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for\n# groups, showing the direct groups dependencies.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGROUP_GRAPHS           = YES\n\n# If the UML_LOOK tag is set to YES, doxygen will generate inheritance and\n# collaboration diagrams in a style similar to the OMG's Unified Modeling\n# Language.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nUML_LOOK               = NO\n\n# If the UML_LOOK tag is enabled, the fields and methods are shown inside the\n# class node. If there are many fields or methods and many nodes the graph may\n# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the\n# number of items for each type to make the size more manageable. Set this to 0\n# for no limit. Note that the threshold may be exceeded by 50% before the limit\n# is enforced. 
So when you set the threshold to 10, up to 15 fields may appear,\n# but if the number exceeds 15, the total amount of fields shown is limited to\n# 10.\n# Minimum value: 0, maximum value: 100, default value: 10.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nUML_LIMIT_NUM_FIELDS   = 10\n\n# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and\n# collaboration graphs will show the relations between templates and their\n# instances.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nTEMPLATE_RELATIONS     = NO\n\n# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to\n# YES then doxygen will generate a graph for each documented file showing the\n# direct and indirect include dependencies of the file with other documented\n# files.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINCLUDE_GRAPH          = YES\n\n# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are\n# set to YES then doxygen will generate a graph for each documented file showing\n# the direct and indirect include dependencies of the file with other documented\n# files.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINCLUDED_BY_GRAPH      = YES\n\n# If the CALL_GRAPH tag is set to YES then doxygen will generate a call\n# dependency graph for every global function or class method.\n#\n# Note that enabling this option will significantly increase the time of a run.\n# So in most cases it will be better to enable call graphs for selected\n# functions only using the \\callgraph command. 
Disabling a call graph can be\n# accomplished by means of the command \\hidecallgraph.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCALL_GRAPH             = NO\n\n# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller\n# dependency graph for every global function or class method.\n#\n# Note that enabling this option will significantly increase the time of a run.\n# So in most cases it will be better to enable caller graphs for selected\n# functions only using the \\callergraph command. Disabling a caller graph can be\n# accomplished by means of the command \\hidecallergraph.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCALLER_GRAPH           = NO\n\n# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a\n# graphical hierarchy of all classes instead of a textual one.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGRAPHICAL_HIERARCHY    = YES\n\n# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the\n# dependencies a directory has on other directories in a graphical way. The\n# dependency relations are determined by the #include relations between the\n# files in the directories.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDIRECTORY_GRAPH        = YES\n\n# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images\n# generated by dot. 
For an explanation of the image formats see the section\n# output formats in the documentation of the dot tool (Graphviz (see:\n# http://www.graphviz.org/)).\n# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order\n# to make the SVG files visible in IE 9+ (other browsers do not have this\n# requirement).\n# Possible values are: png, png:cairo, png:cairo:cairo, png:cairo:gd, png:gd,\n# png:gd:gd, jpg, jpg:cairo, jpg:cairo:gd, jpg:gd, jpg:gd:gd, gif, gif:cairo,\n# gif:cairo:gd, gif:gd, gif:gd:gd, svg, png:gd, png:gd:gd, png:cairo,\n# png:cairo:gd, png:cairo:cairo, png:cairo:gdiplus, png:gdiplus and\n# png:gdiplus:gdiplus.\n# The default value is: png.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_IMAGE_FORMAT       = png\n\n# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to\n# enable generation of interactive SVG images that allow zooming and panning.\n#\n# Note that this requires a modern browser other than Internet Explorer. Tested\n# and working are Firefox, Chrome, Safari, and Opera.\n# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make\n# the SVG files visible. Older versions of IE do not have SVG support.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINTERACTIVE_SVG        = NO\n\n# The DOT_PATH tag can be used to specify the path where the dot tool can be\n# found. 
If left blank, it is assumed the dot tool can be found in the path.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_PATH               =\n\n# The DOTFILE_DIRS tag can be used to specify one or more directories that\n# contain dot files that are included in the documentation (see the \\dotfile\n# command).\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOTFILE_DIRS           =\n\n# The MSCFILE_DIRS tag can be used to specify one or more directories that\n# contain msc files that are included in the documentation (see the \\mscfile\n# command).\n\nMSCFILE_DIRS           =\n\n# The DIAFILE_DIRS tag can be used to specify one or more directories that\n# contain dia files that are included in the documentation (see the \\diafile\n# command).\n\nDIAFILE_DIRS           =\n\n# When using plantuml, the PLANTUML_JAR_PATH tag should be used to specify the\n# path where java can find the plantuml.jar file. If left blank, it is assumed\n# PlantUML is not used or called during a preprocessing step. Doxygen will\n# generate a warning when it encounters a \\startuml command in this case and\n# will not generate output for the diagram.\n\nPLANTUML_JAR_PATH      =\n\n# When using plantuml, the PLANTUML_CFG_FILE tag can be used to specify a\n# configuration file for plantuml.\n\nPLANTUML_CFG_FILE      =\n\n# When using plantuml, the specified paths are searched for files specified by\n# the !include statement in a plantuml block.\n\nPLANTUML_INCLUDE_PATH  =\n\n# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes\n# that will be shown in the graph. If the number of nodes in a graph becomes\n# larger than this value, doxygen will truncate the graph, which is visualized\n# by representing a node as a red box. Note that if the number of direct\n# children of the root node in a graph is already larger than\n# DOT_GRAPH_MAX_NODES then the graph will not be shown at all. 
Also note that\n# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.\n# Minimum value: 0, maximum value: 10000, default value: 50.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_GRAPH_MAX_NODES    = 50\n\n# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs\n# generated by dot. A depth value of 3 means that only nodes reachable from the\n# root by following a path via at most 3 edges will be shown. Nodes that lay\n# further from the root node will be omitted. Note that setting this option to 1\n# or 2 may greatly reduce the computation time needed for large code bases. Also\n# note that the size of a graph can be further restricted by\n# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.\n# Minimum value: 0, maximum value: 1000, default value: 0.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nMAX_DOT_GRAPH_DEPTH    = 0\n\n# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent\n# background. This is disabled by default, because dot on Windows does not seem\n# to support this out of the box.\n#\n# Warning: Depending on the platform used, enabling this option may lead to\n# badly anti-aliased labels on the edges of a graph (i.e. they become hard to\n# read).\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_TRANSPARENT        = NO\n\n# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output\n# files in one run (i.e. multiple -o and -T options on the command line). 
This\n# makes dot run faster, but since only newer versions of dot (>1.8.10) support\n# this, this feature is disabled by default.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_MULTI_TARGETS      = NO\n\n# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page\n# explaining the meaning of the various boxes and arrows in the dot generated\n# graphs.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGENERATE_LEGEND        = YES\n\n# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot\n# files that are used to generate the various graphs.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_CLEANUP            = YES\n"
  },
  {
    "path": "extras/python/.gitignore",
    "content": "# TODO: remove out directory\nfogbench/out/*\nfogbench/fledge_running_sample.*"
  },
  {
    "path": "extras/python/fogbench/__init__.py",
    "content": ""
  },
  {
    "path": "extras/python/fogbench/__main__.py",
    "content": "#!/usr/bin/env python3\n\n# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\n\"\"\" fogbench -- a Python script used to test Fledge.\n\nThe objective is to simulate payloads for input, REST and other requests against one or\nmore Fledge instances. This version of fogbench is meant to test the CoAP and HTTP plugins\ninterface of Fledge southbound services.\n\nfogbench\n\n [IN]   -h --help        Print this help\n        -i --interval    The interval in seconds between each iteration (default: 0)\n [IN]   -k --keep        Do not delete (keep) the running sample (default: no)\n [IN]   -o --output      Set the output file for statistics\n [IN]   -p --payload     Type of payload and protocol (default: coap)\n [IN]   -t --template    Set the template to use\n [IN]   -v --version     Display the version and exit\n [IN]   -H --host        The Fledge host (default: localhost)\n        -I --iterations  The number of iterations of the test (default: 1)\n [IN]   -O --occurrences The number of occurrences of the template (default: 1)\n [IN]   -P --port        The Fledge port. Default depends on payload and protocol\n [IN]   -S --statistic   The type of statistics to collect\n\n Example:\n\n     $ cd $FLEDGE_ROOT/bin\n     $ ./fogbench\n\n Help:\n\n     $ ./fogbench -h\n\n   * Create reading objects from given template, as per the json file name specified with -t\n   * Save those objects to the file, as per the file name specified with -o\n   * Read those objects\n   * Send those to CoAP or HTTP south plugin server, on specific host and port\n\n .. 
todo::\n\n   * Try generators\n\n\"\"\"\nimport sys\nimport os\nimport random\nimport json\nfrom datetime import datetime, timezone\nimport argparse\nimport collections\n\nimport asyncio\nimport aiohttp\n\n\nfrom .exceptions import *\n\n__author__ = \"Praveen Garg\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n_FOGBENCH_VERSION = u\"0.1.1\"\n\n_start_time = []\n_end_time = []\n_tot_msgs_transferred = []\n_tot_byte_transferred = []\n_num_iterated = 0\n\"\"\"Statistics to be collected\"\"\"\n\n# _logger = logger.setup(__name__)\n\n\ndef local_timestamp():\n    \"\"\"\n    :return: str - current time stamp with microseconds and machine timezone info\n    :example '2018-05-08 14:06:40.517313+05:30'\n    \"\"\"\n    return str(datetime.now(timezone.utc).astimezone())\n\n\ndef read_templates():\n    templates = []\n\n    return templates\n\n\ndef parse_template_and_prepare_json(_template_file,\n                                    _write_to_file=None, _occurrences=1):\n    # template_file = os.path.join(os.path.dirname(__file__), \"templates/\" + _template_file)\n\n    with open(_template_file) as data_file:\n        data = json.load(data_file)\n\n    supported_format_types = [\"number\", \"enum\"]\n    for _ in range(_occurrences):\n        readings_ = _prepare_sensor_reading(data, supported_format_types)\n        for r in readings_:\n            _write_readings_to_file(_write_to_file, r)\n\n\ndef _write_readings_to_file(to_file, r):\n        with open(to_file, 'a') as the_file:\n            json.dump(r, the_file)\n            the_file.write(os.linesep)\n\n\ndef _prepare_sensor_reading(data, supported_format_types):\n    readings = []\n\n    for d in data:\n        x_sensor_values = dict()\n\n        _sensor_value_object_formats = d[\"sensor_values\"]\n        for fmt in _sensor_value_object_formats:\n            if fmt[\"type\"] not in supported_format_types:\n                raise 
InvalidSensorValueObjectTemplateFormat(u\"Invalid format, \"\n                                                             u\"Can not parse type {}\".format(fmt[\"type\"]))\n            if fmt[\"type\"] == \"number\":\n                # check float precision if any\n                precision = fmt.get(\"precision\", None)\n                min_val = fmt.get(\"min\", None)\n                max_val = fmt.get(\"max\", None)\n                if min_val is None or max_val is None:\n                    raise InvalidSensorValueObjectTemplateFormat(u\"Invalid format, \"\n                                                                 u\"Min and Max values must be defined for type number.\")\n                # print(precision)\n                # print(min_val)\n                # print(max_val)\n                reading = round(random.uniform(min_val, max_val), precision)\n            elif fmt[\"type\"] == \"enum\":\n                reading = random.choice(fmt[\"list\"])\n\n            # print(fmt[\"name\"], reading)\n            x_sensor_values[fmt[\"name\"]] = reading\n\n        # print(d[\"name\"])\n\n        sensor_value_object = dict()\n        sensor_value_object[\"asset\"] = d['name']\n        sensor_value_object[\"readings\"] = x_sensor_values\n        sensor_value_object[\"timestamp\"] = \"{!s}\".format(local_timestamp())\n        # print(json.dumps(sensor_value_object))\n        ord_dict = collections.OrderedDict(sorted(sensor_value_object.items()))\n        readings.append(ord_dict)\n\n    return readings\n\n\ndef read_out_file(_file=None, _keep=False, _iterations=1, _interval=0, send_to='coap'):\n\n    global _start_time\n    global _end_time\n    global _tot_msgs_transferred\n    global _tot_byte_transferred\n    global _num_iterated\n\n    # from pprint import pprint\n    import time\n    # _file = os.path.join(os.path.dirname(__file__), \"out/{}\".format(outfile))\n    with open(_file) as f:\n        readings_list = [json.loads(line) for line in f]\n\n    loop 
= asyncio.get_event_loop()\n\n    while _iterations > 0:\n\n        # Pre-calculate the messages and size\n        msg_transferred_itr = 0  # Messages transferred in every iteration\n        byte_transferred_itr = 0  # Bytes transferred in every iteration\n\n        for r in readings_list:\n            msg_transferred_itr += 1\n            byte_transferred_itr += sys.getsizeof(r)\n\n        if send_to == 'coap':\n            _start_time.append(datetime.now())\n            for r in readings_list:\n                is_sent = loop.run_until_complete(send_to_coap(r))\n                if not is_sent:\n                    break\n        elif send_to == 'http':\n            _start_time.append(datetime.now())\n            loop.run_until_complete(send_to_http(readings_list))\n\n        _end_time.append(datetime.now())  # End time of every iteration\n        _tot_msgs_transferred.append(msg_transferred_itr)\n        _tot_byte_transferred.append(byte_transferred_itr)\n        _iterations -= 1\n        _num_iterated += 1\n        if _iterations != 0:\n            # print(u\"Iteration {} completed, waiting for {} seconds\".format(_iterations, _interval))\n            time.sleep(_interval)\n\n    if not _keep:\n        os.remove(_file)\n\n\nasync def send_to_coap(payload):\n    \"\"\"\n    POST request to:\n     localhost\n     port 5683 (official IANA assigned CoAP port),\n     URI \"/other/sensor-values\".\n\n    \"\"\"\n    from aiocoap import Context, Message\n    from aiocoap.numbers.codes import Code\n    from cbor2 import dumps\n\n    context = await Context.create_client_context()\n\n    uri = \"coap://{}:{}/other/sensor-values\".format(arg_host, arg_port)\n    request = Message(payload=dumps(payload), uri=uri, code=Code.POST)\n    response = await context.request(request).response\n    str_res = str(response.code)\n    status_code = str_res[:4]  # or str_res.split()[0]\n    if status_code == \"4.00\" or status_code == \"5.00\":\n        print(\"Error: \", str_res)\n      
  return False\n\n    return True\n\n\nasync def send_to_http(payload):\n    \"\"\"\n    POST request to:\n     host localhost\n     port 6683 (default HTTP south plugin port),\n     uri  sensor-reading\n    \"\"\"\n    headers = {'content-type': 'application/json'}\n    url = 'http://{}:{}/sensor-reading'.format(arg_host, arg_port)\n    async with aiohttp.ClientSession() as session:\n        async with session.post(url, data=json.dumps(payload), headers=headers) as resp:\n            await resp.text()\n            status_code = resp.status\n            if status_code in range(400, 500):\n                print(\"Bad request error | code:{}, reason: {}\".format(status_code, resp.reason))\n                return False\n            if status_code in range(500, 600):\n                print(\"Server error | code:{}, reason: {}\".format(status_code, resp.reason))\n                return False\n            return True\n\n\ndef get_statistics(_stats_type=None, _out_file=None):\n    stat = ''\n    global _start_time\n    global _end_time\n    global _tot_msgs_transferred\n    global _tot_byte_transferred\n    global _num_iterated\n    if _stats_type == 'total':\n        stat += u\"Total Statistics:\\n\"\n        stat += (u\"\\nStart Time: {}\".format(datetime.strftime(_start_time[0], \"%Y-%m-%d %H:%M:%S.%f\")))\n        stat += (u\"\\nEnd Time:   {}\\n\".format(datetime.strftime(_end_time[-1], \"%Y-%m-%d %H:%M:%S.%f\")))\n        stat += (u\"\\nTotal Messages Transferred: {}\".format(sum(_tot_msgs_transferred)))\n        stat += (u\"\\nTotal Bytes Transferred:    {}\\n\".format(sum(_tot_byte_transferred)))\n        stat += (u\"\\nTotal Iterations: {}\".format(_num_iterated))\n        stat += (u\"\\nTotal Messages per Iteration: {}\".format(sum(_tot_msgs_transferred)/_num_iterated))\n        stat += (u\"\\nTotal Bytes per Iteration:    {}\\n\".format(sum(_tot_byte_transferred)/_num_iterated))\n        _msg_rate = []\n        _byte_rate = []\n        for itr in 
range(_num_iterated):\n            time_taken = _end_time[itr] - _start_time[itr]\n            _msg_rate.append(_tot_msgs_transferred[itr]/(time_taken.seconds+time_taken.microseconds/1E6))\n            _byte_rate.append(_tot_byte_transferred[itr] / (time_taken.seconds+time_taken.microseconds/1E6))\n        stat += (u\"\\nMin messages/second: {}\".format(min(_msg_rate)))\n        stat += (u\"\\nMax messages/second: {}\".format(max(_msg_rate)))\n        stat += (u\"\\nAvg messages/second: {}\\n\".format(sum(_msg_rate)/_num_iterated))\n        stat += (u\"\\nMin Bytes/second: {}\".format(min(_byte_rate)))\n        stat += (u\"\\nMax Bytes/second: {}\".format(max(_byte_rate)))\n        stat += (u\"\\nAvg Bytes/second: {}\".format(sum(_byte_rate)/_num_iterated))\n    if _out_file:\n        with open(_out_file, 'w') as f:\n            f.write(stat)\n    else:\n        print(stat)\n    # should we also show total time diff? end_time - start_time\n\n\ndef check_server(payload_type='coap'):\n    template_str = \">>> Make sure south {} plugin service is running \\n & listening on specified host and port \\n\"\n    if payload_type == 'coap':\n        print(template_str.format(\"CoAP\"))\n    elif payload_type == 'http':\n        print(template_str.format(\"HTTP\"))\n\n\nparser = argparse.ArgumentParser(prog='fogbench')\nparser.description = '%(prog)s -- a Python script used to test Fledge (simulate payloads)'\nparser.epilog = 'The initial version of %(prog)s is meant to test the south plugin interface of ' \\\n                'Fledge using CoAP or HTTP'\nparser.add_argument('-v', '--version', action='version', version='%(prog)s {0!s}'.format(_FOGBENCH_VERSION))\nparser.add_argument('-k', '--keep', default=False, choices=['y', 'yes', 'n', 'no'],\n                    help='Do not delete the running sample (default: no)')\nparser.add_argument('-t', '--template', required=True, help='Set the template file, json extension')\nparser.add_argument('-o', '--output', default=None, 
help='Set the statistics output file')\nparser.add_argument('-p', '--payload', default='coap', choices=['coap', 'http'], help='Type of payload '\n                                                                                      'and protocol (default: coap)')\nparser.add_argument('-I', '--iterations', help='The number of iterations of the test (default: 1)')\nparser.add_argument('-O', '--occurrences', help='The number of occurrences of the template (default: 1)')\n\nparser.add_argument('-H', '--host', help='Server host address (default: localhost)')\nparser.add_argument('-P', '--port', help='The Fledge port. (default: 5683)')\nparser.add_argument('-i', '--interval', default=0, help='The interval in seconds for each iteration (default: 0)')\n\nparser.add_argument('-S', '--statistics', default='total', choices=['total'], help='The type of statistics to collect '\n                                                                                   '(default: total)')\n\nnamespace = parser.parse_args(sys.argv[1:])\ninfile = '{0}'.format(namespace.template if namespace.template else '')\nstatistics_file = os.path.join(os.path.dirname(__file__), \"out/{}\".format(namespace.output)) if namespace.output else None\nkeep_the_file = True if namespace.keep in ['y', 'yes'] else False\n\n# iterations and occurrences\narg_iterations = int(namespace.iterations) if namespace.iterations else 1\narg_occurrences = int(namespace.occurrences) if namespace.occurrences else 1\n\n# interval between each iteration\narg_interval = int(namespace.interval) if namespace.interval else 0\n\narg_stats_type = '{0}'.format(namespace.statistics) if namespace.statistics else 'total'\n\nif namespace.payload:\n    arg_payload_protocol = namespace.payload\n\narg_host = '{0}'.format(namespace.host) if namespace.host else 'localhost'\n\ndefault_port = 6683 if arg_payload_protocol == 'http' else 5683\narg_port = int(namespace.port) if namespace.port else 
default_port\n\ncheck_server(arg_payload_protocol)\nsample_file = os.path.join(\"/tmp\", \"fledge_running_sample.{}\".format(os.getpid()))\nparse_template_and_prepare_json(_template_file=infile, _write_to_file=sample_file, _occurrences=arg_occurrences)\nread_out_file(_file=sample_file, _keep=keep_the_file, _iterations=arg_iterations, _interval=arg_interval,\n              send_to=arg_payload_protocol)\nget_statistics(_stats_type=arg_stats_type, _out_file=statistics_file)\n\n# TODO: Change below per local_timestamp() values\n\"\"\" Expected output from given template\n{ \n  \"timestamp\"     : \"2017-08-04T06:59:57.503Z\",\n  \"asset\"         : \"TI sensorTag/luxometer\",\n  \"sensor_values\" : { \"lux\" : 49 }\n}\n\n{ \n  \"timestamp\"     : \"2017-08-04T06:59:57.863Z\",\n  \"asset\"         : \"TI sensorTag/pressure\",\n  \"sensor_values\" : { \"pressure\" : 1021.2 }\n}\n\n{ \n  \"timestamp\"     : \"2017-08-04T06:59:58.863Z\",\n  \"asset\"         : \"TI sensorTag/humidity\",\n  \"sensor_values\" : { \"humidity\" : 71.2, \"temperature\" : 18.6 }\n}\n\n{ \n  \"timestamp\"     : \"2017-08-04T06:59:59.863Z\",\n  \"asset\"         : \"TI sensorTag/temperature\",\n  \"sensor_values\" : { \"object\" : 18.2, \"ambient\" : 21.6 }\n}\n\n{ \n  \"timestamp\"     : \"2017-08-04T07:00:00.863Z\",\n  \"asset\"         : \"TI sensorTag/accelerometer\",\n  \"sensor_values\" : { \"x\" : 1.2, \"y\" : 0.0, \"z\" : -0.6 }\n}\n\n{ \n  \"timestamp\"     : \"2017-08-04T07:00:01.863Z\",\n  \"asset\"         : \"TI sensorTag/gyroscope\",\n  \"sensor_values\" : { \"x\" : 101.2, \"y\" : 46.2, \"z\" : -12.6 }\n}\n\n{ \n  \"timestamp\"     : \"2017-08-04T07:00:02.863Z\",\n  \"asset\"         : \"TI sensorTag/magnetometer\",\n  \"sensor_values\" : { \"x\" : 101.2, \"y\" : 46.2, \"z\" : -12.6 }\n}\n\n{ \n  \"timestamp\"     : \"2017-08-04T07:00:03.863Z\",\n  \"asset\"         : \"mouse\",\n  \"sensor_values\" : { \"button\" : \"down\" }\n}\n\n{ \n  \"timestamp\"     : 
\"2017-08-04T07:00:04.863Z\",\n  \"asset\"         : \"wall clock\",\n  \"sensor_values\" : { \"tick\" : \"tock\" }\n}\n\n\"\"\"\n"
  },
  {
    "path": "extras/python/fogbench/exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"The fogbench exceptions module contains Exception subclasses\n\n\"\"\"\n\n# nothing to import yet\n\n__author__ = \"Praveen Garg\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass FogbenchError(Exception):\n    \"\"\"\n    All errors specific to fogbench will be\n    subclassed from FogbenchError which is subclassed from Exception.\n    \"\"\"\n    pass\n\n\nclass InvalidTemplateFormat(FogbenchError):\n    pass\n\n\nclass InvalidSensorValueObjectTemplateFormat(InvalidTemplateFormat):\n    def __init__(self, msg):\n        self.msg = msg\n\n    def __str__(self):\n        return \"{!s}\".format(self.msg)\n"
  },
  {
    "path": "extras/scripts/fledge.service",
    "content": "#!/bin/sh\n\nPLATFORM=`(lsb_release -ds 2>/dev/null || cat /etc/*release 2>/dev/null | head -n1 || uname -om)`\nIS_RHEL=`echo $PLATFORM | egrep '(Red Hat|CentOS)' || echo \"\"`\n\nif [ \"$IS_RHEL\" = \"\" ]; then\n\t# Ubuntu/Debian specific\n\n\t# kFreeBSD do not accept scripts as interpreters, using #!/bin/sh and sourcing.\n\tif [ true != \"$INIT_D_SCRIPT_SOURCED\" ] ; then\n\t    set \"$0\" \"$@\"; INIT_D_SCRIPT_SOURCED=true . /lib/init/init-d-script\n\tfi\nfi\n\n### BEGIN INIT INFO\n# Provides:          fledge\n# Required-Start:    $local_fs $remote_fs $syslog $network $time\n# Required-Stop:     $local_fs $remote_fs $syslog $network $time\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: Fledge \n# Description:       Init script for the Fledge daemon\n### END INIT INFO\n\n \n##--------------------------------------------------------------------\n## Copyright (c) 2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n#\n# This wrapper script is used to set Fledge as a service\n# If you have installed Fledge from a package, this script has been\n# automatically added to the /etc/init.d folder and the service has\n# been set with the systemctl utility.\n# If you have installed Fledge from source with sudo make install,\n# you may manually copy this script in /etc/init.d. 
We recommend to\n# change the name to fledge, for example:\n#\n# sudo cp fledge.service /etc/init.d/fledge\n#\n\n\nFLEDGE_ROOT=\"/usr/local/fledge\"\nFLEDGE_DATA=\"${FLEDGE_ROOT}/data\"\nFLEDGE_USER=`ls -ld \"${FLEDGE_DATA}\" | awk '{print $3}'`\nPID_FILE=\"${FLEDGE_DATA}/var/run/fledge.core.pid\"\nPID=0\n \nget_pid() {\n  if [ -f \"$PID_FILE\" ]; then\n    PID=`cat \"${PID_FILE}\" | tr -d ' ' |  grep -o '\"processID\":[0-9]*' | grep -o '[0-9]*'`\n  else\n    PID=0\n  fi\n}\n\nfledge_start() {\n    if [ \"$IS_RHEL\" = \"\" ]; then\n        sudo -u ${FLEDGE_USER} \"${FLEDGE_ROOT}/bin/fledge\" start > /dev/null\n    else\n        \"${FLEDGE_ROOT}/bin/fledge\" start > /dev/null\n    fi\n}\n \nfledge_stop() {\n    if [ \"$IS_RHEL\" = \"\" ]; then\n        sudo -u ${FLEDGE_USER} \"${FLEDGE_ROOT}/bin/fledge\" stop > /dev/null\n    else\n        \"${FLEDGE_ROOT}/bin/fledge\" stop > /dev/null\n    fi\n}\n \ncase \"$1\" in\n\n  start)\n\n    get_pid\n    if [ $PID -eq 0 ]; then\n      fledge_start\n    else\n      ps -p $PID\n      if [ $? -eq 1 ]; then\n        rm -f $PID_FILE\n        fledge_start\n      else\n        echo \"Fledge already running [$PID]\"\n        exit 0\n      fi\n    fi\n \n    get_pid\n    if [ $PID -eq 0 ]; then\n        echo \"Fledge failed starting\"\n        exit 1\n    else\n      echo \"Fledge started [$PID]\"\n      exit 0\n    fi\n    ;;\n\n  status)\n\n    get_pid\n    if [ $PID -eq 0 ]; then\n      echo \"Fledge not running\"\n    else\n      ps -p $PID\n      if [ $? -eq 1 ]; then\n        echo \"Fledge not running (process dead but PID file exists)\"\n        exit 1\n      else\n        echo \"Fledge running [$PID]\"\n      fi\n    fi\n    exit 0\n    ;;\n\n  stop)\n\n    get_pid\n    if [ $PID -eq 0 ]; then\n      echo \"Fledge not running\"\n    else\n      ps -p $PID\n      if [ $? 
-eq 1 ]; then\n        echo \"Fledge not running (process dead but PID file exists)\"\n        rm -f $PID_FILE\n        exit 1\n      else\n        fledge_stop\n        echo \"Fledge stopped [$PID]\"\n      fi\n    fi\n    exit 0\n    ;;\n\n  restart)\n\n    $0 stop\n    $0 start\n    ;;\n\n  *)\n\n    echo \"Usage: $0 {status|start|stop|restart}\"\n    exit 0\n    ;;\n\nesac\n\n"
  },
  {
    "path": "extras/scripts/setenv.sh",
    "content": "#!/bin/sh\n\n##--------------------------------------------------------------------\n## Copyright (c) 2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n#\n# This script sets the user environment to facilitate the administration\n# of Fledge\n#\n# You can execute this script from shell, using for example this command:\n#\n# source /usr/local/fledge/extras/scripts/setenv.sh\n#\n# or you can add the same command at the bottom of your profile script\n# {HOME}/.profile.\n#\n\nexport FLEDGE_ROOT=\"/usr/local/fledge\"\nexport FLEDGE_DATA=\"${FLEDGE_ROOT}/data\"\n\nexport PATH=\"${FLEDGE_ROOT}/bin:${PATH}\"\n\nexport LD_LIBRARY_PATH=\"${FLEDGE_ROOT}/lib:$LD_LIBRARY_PATH\"\n\n"
  },
  {
    "path": "mkversion",
    "content": "#!/bin/sh\noutput=$1/cmake_build/version.h\n. $1/VERSION\nif [ -f $output ]; then\n\tgrep -s \"VERSION[ \t]*\\\"${fledge_version}\\\"\" $output > /dev/null\n\tif [ $? = 0 ]; then\n\t\texit 0\n\tfi\nfi\ncat > $output << END_WARNING\n\n/*\n * WARNING: This is an automatically generated file.\n *          Do not edit this file.\n *          To change the version edit the file VERSION\n */\n\nEND_WARNING\n/bin/echo '#define VERSION\t\"'${fledge_version}'\"' >> $output\n"
  },
  {
    "path": "python/.gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nenv/\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\n.cache/\n\n# virtualenv\n.venv\nvenv/\nENV/\n\n# Config env file\nfledge-env.yaml\n\n# install info file (used to uninstall)\ninstall-info.txt\n\n# lint check\npylint_*.log\n\n# tox\n.tox\n\n# Pytest coverage\n.coverage\nhtmlcov/\ncoverage.xml\n"
  },
  {
    "path": "python/.pylintrc",
    "content": "[MASTER]\n\n# A comma-separated list of package or module names from where C extensions may\n# be loaded. Extensions are loading into the active Python interpreter and may\n# run arbitrary code\nextension-pkg-whitelist=\n\n# Add files or directories to the blacklist. They should be base names, not\n# paths.\nignore=.git,__template__.py\n\n# Add files or directories matching the regex patterns to the blacklist. The\n# regex matches against base names, not paths.\nignore-patterns=\n\n# Python code to execute, usually for sys.path manipulation such as\n# pygtk.require().\n#init-hook=\n\n# Use multiple processes to speed up Pylint.\njobs=1\n\n# List of plugins (as comma separated values of python modules names) to load,\n# usually to register additional checkers.\nload-plugins=\n\n# Pickle collected data for later comparisons.\npersistent=yes\n\n# Specify a configuration file.\nrcfile=.pylintrc\n\n# Allow loading of arbitrary C extensions. Extensions are imported into the\n# active Python interpreter and may run arbitrary code.\nunsafe-load-any-extension=no\n\n\n[MESSAGES CONTROL]\n\n# Only show warnings with the listed confidence levels. Leave empty to show\n# all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED\nconfidence=\n\n# Disable the message, report, category or checker with the given id(s). You\n# can either give multiple identifiers separated by comma (,) or put this\n# option multiple times (only on the command line, not in the configuration\n# file where it should appear only once).You can also use \"--disable=all\" to\n# disable everything first and then reenable specific checks. For example, if\n# you want to run only the similarities checker, you can use \"--disable=all\n# --enable=similarities\". 
If you want to run only the classes checker, but have\n# no Warning level messages displayed, use\"--disable=all --enable=classes\n# --disable=W\"\ndisable=print-statement,parameter-unpacking,unpacking-in-except,old-raise-syntax,backtick,long-suffix,old-ne-operator,old-octal-literal,import-star-module-level,raw-checker-failed,bad-inline-option,locally-disabled,locally-enabled,file-ignored,suppressed-message,useless-suppression,deprecated-pragma,apply-builtin,basestring-builtin,buffer-builtin,cmp-builtin,coerce-builtin,execfile-builtin,file-builtin,long-builtin,raw_input-builtin,reduce-builtin,standarderror-builtin,unicode-builtin,xrange-builtin,coerce-method,delslice-method,getslice-method,setslice-method,no-absolute-import,old-division,dict-iter-method,dict-view-method,next-method-called,metaclass-assignment,indexing-exception,raising-string,reload-builtin,oct-method,hex-method,nonzero-method,cmp-method,input-builtin,round-builtin,intern-builtin,unichr-builtin,map-builtin-not-iterating,zip-builtin-not-iterating,range-builtin-not-iterating,filter-builtin-not-iterating,using-cmp-argument,eq-without-hash,div-method,idiv-method,rdiv-method,exception-message-attribute,invalid-str-codec,sys-max-int,bad-python3-import,deprecated-string-function,deprecated-str-translate-call\n\n# Enable the message, report, category or checker with the given id(s). You can\n# either give multiple identifier separated by comma (,) or put this option\n# multiple time (only on the command line, not in the configuration file where\n# it should appear only once). See also the \"--disable\" option for examples.\nenable=\n\n\n[REPORTS]\n\n# Python expression which should return a note less than 10 (10 is the highest\n# note). You have access to the variables errors warning, statement which\n# respectively contain the number of errors / warnings messages and the total\n# number of statements analyzed. 
This is used by the global evaluation report\n# (RP0004).\nevaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)\n\n# Template used to display messages. This is a python new-style format string\n# used to format the message information. See doc for all details\nmsg-template={path}:{line}: [{msg_id}({symbol}), {obj}] {msg}\n\n# Set the output format. Available formats are text, parseable, colorized, json\n# and msvs (visual studio).You can also give a reporter class, eg\n# mypackage.mymodule.MyReporterClass.\noutput-format=text\n\n# Tells whether to display a full report or only the messages\nreports=no\n\n# Activate the evaluation score.\nscore=yes\n\n\n[REFACTORING]\n\n# Maximum number of nested blocks for function / method body\nmax-nested-blocks=5\n\n\n[BASIC]\n\n# Naming hint for argument names\nargument-name-hint=(([a-z][a-z0-9_]{2,30})|(_[a-z0-9_]*))$\n\n# Regular expression matching correct argument names\nargument-rgx=(([a-z][a-z0-9_]{2,30})|(_[a-z0-9_]*))$\n\n# Naming hint for attribute names\nattr-name-hint=(([a-z][a-z0-9_]{2,30})|(_[a-z0-9_]*))$\n\n# Regular expression matching correct attribute names\nattr-rgx=(([a-z][a-z0-9_]{2,30})|(_[a-z0-9_]*))$\n\n# Bad variable names which should always be refused, separated by a comma\nbad-names=foo,bar,baz,toto,tutu,tata\n\n# Naming hint for class attribute names\nclass-attribute-name-hint=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$\n\n# Regular expression matching correct class attribute names\nclass-attribute-rgx=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$\n\n# Naming hint for class names\nclass-name-hint=[A-Z_][a-zA-Z0-9]+$\n\n# Regular expression matching correct class names\nclass-rgx=[A-Z_][a-zA-Z0-9]+$\n\n# Naming hint for constant names\nconst-name-hint=(([A-Z_][A-Z0-9_]*)|(__.*__))$\n\n# Regular expression matching correct constant names\nconst-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__))$\n\n# Minimum line length for functions/classes that require docstrings, shorter\n# ones are 
exempt.\ndocstring-min-length=-1\n\n# Naming hint for function names\nfunction-name-hint=(([a-z][a-z0-9_]{2,30})|(_[a-z0-9_]*))$\n\n# Regular expression matching correct function names\nfunction-rgx=(([a-z][a-z0-9_]{2,30})|(_[a-z0-9_]*))$\n\n# Good variable names which should always be accepted, separated by a comma\ngood-names=i,j,k,ex,Run,_\n\n# Include a hint for the correct naming format with invalid-name\ninclude-naming-hint=no\n\n# Naming hint for inline iteration names\ninlinevar-name-hint=[A-Za-z_][A-Za-z0-9_]*$\n\n# Regular expression matching correct inline iteration names\ninlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$\n\n# Naming hint for method names\nmethod-name-hint=(([a-z][a-z0-9_]{2,30})|(_[a-z0-9_]*))$\n\n# Regular expression matching correct method names\nmethod-rgx=(([a-z][a-z0-9_]{2,30})|(_[a-z0-9_]*))$\n\n# Naming hint for module names\nmodule-name-hint=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$\n\n# Regular expression matching correct module names\nmodule-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$\n\n# Colon-delimited sets of names that determine each other's naming style when\n# the name regexes allow several styles.\nname-group=\n\n# Regular expression which should only match function or class names that do\n# not require a docstring.\nno-docstring-rgx=^_\n\n# List of decorators that produce properties, such as abc.abstractproperty. Add\n# to this list to register other decorators that produce valid properties.\nproperty-classes=abc.abstractproperty\n\n# Naming hint for variable names\nvariable-name-hint=(([a-z][a-z0-9_]{2,30})|(_[a-z0-9_]*))$\n\n# Regular expression matching correct variable names\nvariable-rgx=(([a-z][a-z0-9_]{2,30})|(_[a-z0-9_]*))$\n\n\n[FORMAT]\n\n# Expected format of line ending, e.g. 
empty (any line ending), LF or CRLF.\nexpected-line-ending-format=\n\n# Regexp for a line that is allowed to be longer than the limit.\nignore-long-lines=^\\s*(# )?<?https?://\\S+>?$\n\n# Number of spaces of indent required inside a hanging  or continued line.\nindent-after-paren=4\n\n# String used as indentation unit. This is usually \"    \" (4 spaces) or \"\\t\" (1\n# tab).\nindent-string='    '\n\n# Maximum number of characters on a single line.\nmax-line-length=100\n\n# Maximum number of lines in a module\nmax-module-lines=1000\n\n# List of optional constructs for which whitespace checking is disabled. `dict-\n# separator` is used to allow tabulation in dicts, etc.: {1  : 1,\\n222: 2}.\n# `trailing-comma` allows a space between comma and closing bracket: (a, ).\n# `empty-line` allows space-only lines.\nno-space-check=trailing-comma,dict-separator\n\n# Allow the body of a class to be on the same line as the declaration if body\n# contains single statement.\nsingle-line-class-stmt=no\n\n# Allow the body of an if to be on the same line as the test if there is no\n# else.\nsingle-line-if-stmt=no\n\n\n[LOGGING]\n\n# Logging modules to check that the string format arguments are in logging\n# function parameter format\nlogging-modules=logging\n\n\n[MISCELLANEOUS]\n\n# List of note tags to take in consideration, separated by a comma.\nnotes=FIXME,XXX,TODO,HACK\n\n\n[SIMILARITIES]\n\n# Ignore comments when computing similarities.\nignore-comments=yes\n\n# Ignore docstrings when computing similarities.\nignore-docstrings=yes\n\n# Ignore imports when computing similarities.\nignore-imports=no\n\n# Minimum lines number of a similarity.\nmin-similarity-lines=4\n\n\n[SPELLING]\n\n# Spelling dictionary name. Available dictionaries: none. 
To make it working\n# install python-enchant package.\nspelling-dict=\n\n# List of comma separated words that should not be checked.\nspelling-ignore-words=\n\n# A path to a file that contains private dictionary; one word per line.\nspelling-private-dict-file=\n\n# Tells whether to store unknown words to indicated private dictionary in\n# --spelling-private-dict-file option instead of raising a message.\nspelling-store-unknown-words=no\n\n\n[TYPECHECK]\n\n# List of decorators that produce context managers, such as\n# contextlib.contextmanager. Add to this list to register other decorators that\n# produce valid context managers.\ncontextmanager-decorators=contextlib.contextmanager\n\n# List of members which are set dynamically and missed by pylint inference\n# system, and so shouldn't trigger E1101 when accessed. Python regular\n# expressions are accepted.\ngenerated-members=\n\n# Tells whether missing members accessed in mixin class should be ignored. A\n# mixin class is detected if its name ends with \"mixin\" (case insensitive).\nignore-mixin-members=yes\n\n# This flag controls whether pylint should warn about no-member and similar\n# checks whenever an opaque object is returned when inferring. The inference\n# can return multiple potential results while evaluating a Python object, but\n# some branches might not be evaluated, which results in partial inference. In\n# that case, it might be useful to still emit no-member and other checks for\n# the rest of the inferred objects.\nignore-on-opaque-inference=yes\n\n# List of class names for which member attributes should not be checked (useful\n# for classes with dynamically set attributes). 
This supports the use of\n# qualified names.\nignored-classes=optparse.Values,thread._local,_thread._local\n\n# List of module names for which member attributes should not be checked\n# (useful for modules/projects where namespaces are manipulated during runtime\n# and thus existing member attributes cannot be deduced by static analysis. It\n# supports qualified module names, as well as Unix pattern matching.\nignored-modules=\n\n# Show a hint with possible names when a member name was not found. The aspect\n# of finding the hint is based on edit distance.\nmissing-member-hint=yes\n\n# The minimum edit distance a name should have in order to be considered a\n# similar match for a missing member name.\nmissing-member-hint-distance=1\n\n# The total number of similar names that should be taken in consideration when\n# showing a hint for a missing member.\nmissing-member-max-choices=1\n\n\n[VARIABLES]\n\n# List of additional names supposed to be defined in builtins. Remember that\n# you should avoid to define new builtins when possible.\nadditional-builtins=\n\n# Tells whether unused global variables should be treated as a violation.\nallow-global-unused-variables=yes\n\n# List of strings which can identify a callback function by name. A callback\n# name must start or end with one of those strings.\ncallbacks=cb_,_cb\n\n# A regular expression matching the name of dummy variables (i.e. expectedly\n# not used).\ndummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_\n\n# Argument names that match this expression will be ignored. Default to name\n# with leading underscore\nignored-argument-names=_.*|^ignored_|^unused_\n\n# Tells whether we should check for unused import in __init__ files.\ninit-import=no\n\n# List of qualified module names which can have objects that can redefine\n# builtins.\nredefining-builtins-modules=six.moves,future.builtins\n\n\n[CLASSES]\n\n# List of method names used to declare (i.e. 
assign) instance attributes.\ndefining-attr-methods=__init__,__new__,setUp\n\n# List of member names, which should be excluded from the protected access\n# warning.\nexclude-protected=_asdict,_fields,_replace,_source,_make\n\n# List of valid names for the first argument in a class method.\nvalid-classmethod-first-arg=cls\n\n# List of valid names for the first argument in a metaclass class method.\nvalid-metaclass-classmethod-first-arg=mcs\n\n\n[DESIGN]\n\n# Maximum number of arguments for function / method\nmax-args=5\n\n# Maximum number of attributes for a class (see R0902).\nmax-attributes=7\n\n# Maximum number of boolean expressions in a if statement\nmax-bool-expr=5\n\n# Maximum number of branch for function / method body\nmax-branches=12\n\n# Maximum number of locals for function / method body\nmax-locals=15\n\n# Maximum number of parents for a class (see R0901).\nmax-parents=7\n\n# Maximum number of public methods for a class (see R0904).\nmax-public-methods=20\n\n# Maximum number of return / yield for function / method body\nmax-returns=6\n\n# Maximum number of statements in function / method body\nmax-statements=50\n\n# Minimum number of public methods for a class (see R0903).\nmin-public-methods=2\n\n\n[IMPORTS]\n\n# Allow wildcard imports from modules that define __all__.\nallow-wildcard-with-all=no\n\n# Analyse import fallback blocks. This can be used to support both Python 2 and\n# 3 compatible code, which means that the block might have code that exists\n# only in one or another interpreter, leading to false positives when analysed.\nanalyse-fallback-blocks=no\n\n# Deprecated modules which should not be used, separated by a comma\ndeprecated-modules=optparse,tkinter.tix\n\n# Create a graph of external dependencies in the given file (report RP0402 must\n# not be disabled)\next-import-graph=\n\n# Create a graph of every (i.e. 
internal and external) dependencies in the\n# given file (report RP0402 must not be disabled)\nimport-graph=\n\n# Create a graph of internal dependencies in the given file (report RP0402 must\n# not be disabled)\nint-import-graph=\n\n# Force import order to recognize a module as part of the standard\n# compatibility libraries.\nknown-standard-library=\n\n# Force import order to recognize a module as part of a third party library.\nknown-third-party=enchant\n\n\n[EXCEPTIONS]\n\n# Exceptions that will emit a warning when being caught. Defaults to\n# \"Exception\"\novergeneral-exceptions=Exception\n"
  },
  {
    "path": "python/__init__.py",
    "content": ""
  },
  {
    "path": "python/__template__.py",
    "content": "#!/usr/bin/env python3\n# TODO: Remove the #! line above if this is not an executable script. Also remove this line.\n\n# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Example Google style docstrings.\n\nThis module demonstrates documentation as specified by the `Google Python\nStyle Guide`_. Docstrings may extend over multiple lines. Sections are created\nwith a section header and a colon followed by a block of indented text.\n\nExample:\n    Examples can be given using either the ``Example`` or ``Examples``\n    sections. Sections support any reStructuredText formatting, including\n    literal blocks::\n\n        $ python __template__.py\n\nSection breaks are created by resuming unindented text. Section breaks\nare also implicitly created anytime a new section starts.\n\nAttributes:\n    module_level_variable1 (int): Module level variables may be documented in\n        either the ``Attributes`` section of the module docstring, or in an\n        inline docstring immediately following the variable.\n\n        Either form is acceptable, but the two should not be mixed. Choose\n        one convention to document module level variables and be consistent\n        with it.\n\n.. todo::\n\n   * For module TODOs in docstring\n   * To show in readthedocs.io, you have to also use ``sphinx.ext.todo`` extension and enable\n     todo_include_todos in conf.py\n   * See also `Sphinx ToDo`_\n\n.. _Google Python Style Guide:\n   http://google.github.io/styleguide/pyguide.html\n\n.. 
_Sphinx ToDo:\n   http://www.sphinx-doc.org/en/1.3.6/ext/todo.html#confval-todo_include_todos\n\n\"\"\"\n\n# This is a TODO Example\n# TODO: JIRA-XXXX Short description (put longer description in the JIRA)\n\nimport sys\n\n\n__author__ = \"${FULL_NAME}\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_PRIVATE_MODULE_LEVEL_CONSTANT = 12345\n\nPUBLIC_MODULE_LEVEL_CONSTANT = 12345\n\"\"\"int: Module level variable documented inline.\n\nThe docstring may span multiple lines. The type may optionally be specified\non the first line, separated by a colon.\n\n\"\"\"\n\n\ndef function_with_types_in_docstring(param1, param2):\n    \"\"\"Example function with types declared in the docstring.\n\n    Args:\n        param1 (int): The first parameter.\n\n        param2 (str):\n            The second parameter.\n            - Bullet point 1\n            - Bullet point 2\n\n    Returns:\n        bool: The return value. True for success, False otherwise.\n\n    .. todo::\n        This is a todo docstring example\n        For def level todo, if we want to expose this publicly via readthedocs\n\n    \"\"\"\n\n\ndef function_with_pep484_type_annotations(param1: int, param2: str) -> bool:\n    \"\"\"Example function with PEP 484 type annotations.\n\n    `PEP 484`_ type annotations are supported. If attribute, parameter, and\n    return types are annotated according to `PEP 484`_, they do not need to be\n    included in the docstring.\n\n    Args:\n        param1: The first parameter.\n        param2: The second parameter.\n\n    Returns:\n        The return value. True for success, False otherwise.\n\n    .. 
_PEP 484:\n        https://www.python.org/dev/peps/pep-0484\n\n    \"\"\"\n\n\ndef module_level_function(param1, param2=None, *args, **kwargs):\n    # The leading r is needed to stop pylint from complaining\n    # about docstrings that contain \\\n\n    r\"\"\"This is an example of a module level function.\n\n    Function parameters should be documented in the ``Args`` section. The name\n    of each parameter is required. The type and description of each parameter\n    is optional, but should be included if not obvious.\n\n    If \\*args or \\*\\*kwargs are accepted,\n    they should be listed as ``*args`` and ``**kwargs``.\n\n    The format for a parameter is::\n\n        name (type): description\n            The description may span multiple lines. Following\n            lines should be indented. The \"(type)\" is optional.\n\n            Multiple paragraphs are supported in parameter\n            descriptions.\n\n    Args:\n        param1 (int): The first parameter.\n        param2 (:obj:`str`, optional): The second parameter. 
Defaults to None.\n            Second line of description should be indented.\n        *args: Variable length argument list.\n        **kwargs: Arbitrary keyword arguments.\n\n    Returns:\n        bool: True if successful, False otherwise.\n\n        The return type is optional and may be specified at the beginning of\n        the ``Returns`` section followed by a colon.\n\n        The ``Returns`` section may span multiple lines and paragraphs.\n        Following lines should be indented to match the first line.\n\n        The ``Returns`` section supports any reStructuredText formatting,\n        including literal blocks::\n\n            {\n                'param1': param1,\n                'param2': param2\n            }\n\n    Raises:\n        AttributeError: The ``Raises`` section is a list of all exceptions\n            that are relevant to the interface.\n        ValueError: If `param2` is equal to `param1`.\n\n    \"\"\"\n    if param1 == param2:\n        raise ValueError('param1 may not be equal to param2')\n    return True\n\n\ndef example_generator(n):\n    \"\"\"Generators have a ``Yields`` section instead of a ``Returns`` section.\n\n    Please see https://stackoverflow.com/questions/37549846/how-to-use-yield-inside-async-function\n    Old answer for Python 3.5, You can't yield inside coroutines. 
Only way is to implement\n    Asynchronous Iterator manually using __aiter__/__anext__ magic methods.\n    In nutshell, go with async-await / coroutine way and consider this example def as how to\n    illustrate working of it in docstring.\n\n    Args:\n        n (int): The upper limit of the range to generate, from 0 to `n` - 1.\n\n    Yields:\n        int: The next number in the range of 0 to `n` - 1.\n\n    Examples:\n        Examples should be written in doctest format, and should illustrate how\n        to use the function.\n\n        >>> print([i for i in example_generator(4)])\n        [0, 1, 2, 3]\n\n    \"\"\"\n    for i in range(n):\n        yield i\n\n\n# Custom exception class example\n# Put shared exception classes in exceptions.py\nclass ExampleError(Exception):\n    \"\"\"Exceptions are documented in the same way as classes.\n\n    The __init__ method may be documented in either the class level\n    docstring, or as a docstring on the __init__ method itself.\n\n    Either form is acceptable, but the two should not be mixed. Choose one\n    convention to document the __init__ method and be consistent with it.\n\n    Note:\n        Do not include the `self` parameter in the ``Args`` section.\n\n    Args:\n        message (str): Human readable string describing the exception.\n        code (:obj:`int`, optional): Error code.\n\n    Attributes:\n        message (str): Human readable string describing the exception.\n        code (int): Exception error code.\n\n    \"\"\"\n\n    def __init__(self, message, code):\n        super().__init__(message)\n        self.code = code\n\n\nclass ExampleClass(object):\n    \"\"\"The summary line for a class docstring should fit on one line.\n\n    If the class has public attributes, they may be documented here\n    in an ``Attributes`` section and follow the same formatting as a\n    function's ``Args`` section. 
Alternatively, attributes may be documented\n    inline with the attribute's declaration (see __init__ method below).\n\n    Properties created with the ``@property`` decorator should be documented\n    in the property's getter method.\n\n    Attributes:\n        attr1 (str): Description of `attr1`.\n        attr2 (:obj:`int`, optional): Description of `attr2`.\n\n    \"\"\"\n\n    def __init__(self, param1, param2, param3):\n        \"\"\"Example of docstring on the __init__ method.\n\n        The __init__ method may be documented in either the class level\n        docstring, or as a docstring on the __init__ method itself.\n\n        Either form is acceptable, but the two should not be mixed. Choose one\n        convention to document the __init__ method and be consistent with it.\n\n        Note:\n            Do not include the `self` parameter in the ``Args`` section.\n\n        Args:\n            param1 (str): Description of `param1`.\n            param2 (:obj:`int`, optional): Description of `param2`. Multiple\n                lines are supported.\n            param3 (:obj:`list` of :obj:`str`): Description of `param3`.\n\n        \"\"\"\n        self.attr1 = param1\n        self.attr2 = param2\n        self.attr3 = param3  #: Doc comment *inline* with attribute\n\n        #: list of str: Doc comment *before* attribute, with type specified\n        self.attr4 = ['attr4']\n\n        self.attr5 = None\n        \"\"\"str: Docstring *after* attribute, with type specified.\"\"\"\n\n    @property\n    def readonly_property(self):\n        \"\"\"str: Properties should be documented in their getter method.\n\n        When a member needs to be protected and cannot be simply exposed as a public member,\n        Use Python’s property decorator to accomplish the functionality of getters and\n        setters (or mutator method). See the `anti-patterns`_ for more details.\n\n        .. 
_anti-patterns:\n            http://docs.quantifiedcode.com/python-anti-patterns\n        \"\"\"\n        return 'readonly_property'\n\n    @property\n    def readwrite_property(self):\n        \"\"\":obj:`list` of :obj:`str`: Properties with both a getter and setter\n        should only be documented in their getter method.\n\n        If the setter method contains notable behavior, it should be\n        mentioned here.\n        \"\"\"\n        return ['readwrite_property']\n\n    @readwrite_property.setter\n    def readwrite_property(self, value):\n        value\n\n    def example_method(self, param1, param2):\n        \"\"\"Class methods are similar to regular functions.\n\n        This docstring contains a hyperlink to another method.\n\n        Note:\n            Do not include the `self` parameter in the ``Args`` section.\n            The same applies to `cls` if this were a class method (@classmethod).\n\n        Args:\n            param1: The first parameter.\n            param2: The second parameter.\n\n        Returns:\n            True if successful, False otherwise.\n\n        Raises:\n            ExampleError: Explain why this happens.\n\n        See also:\n            :meth:`_private`\n\n        \"\"\"\n        return True\n\n    def __special__(self):\n        \"\"\"By default special members with docstrings are not included.\n\n        Special members are any methods or attributes that start with and\n        end with a double underscore.\n\n        This behavior can be changed such that special members *are* included\n        by adding the following line to Sphinx's conf.py:\n\n        autodoc_default_flags = ['members', 'undoc-members', 'private-members',\n        'special-members', 'inherited-members', 'show-inheritance']\n\n        \"\"\"\n        pass\n\n    def __special_without_docstring__(self):\n        pass\n\n    def _private(self):\n        \"\"\"By default private members are not included.\n\n        Private members are any methods or attributes that start with an\n        underscore and are *not* special. By default they are not included\n        in the output.\n\n        This behavior can be changed such that private members *are* included\n        by adding the following line to Sphinx's conf.py:\n\n        autodoc_default_flags = ['members', 'undoc-members', 'private-members',\n        'special-members', 'inherited-members', 'show-inheritance']\n\n        \"\"\"\n        pass\n\n    def _private_without_docstring(self):\n        pass\n\n\n# TODO: Remove these lines if this module is named __main__.py\n#       or if this is not an executable module\nif __name__ == \"__main__\":\n    pass\n\n"
  },
  {
    "path": "python/fledge/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/apps/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/apps/common/README.rst",
    "content": "***********\nCommon Apps\n***********\n\nThis directory contains common code for applications\nthat run on the Fledge platform.\n\n"
  },
  {
    "path": "python/fledge/apps/common/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/common/README.rst",
    "content": "*******************************\nCommon Python Modules & Classes\n*******************************\n\nThis directory and sub-directories of this directory contain code that\nis shared between more than one microservice or scheduled process within\nthe Fledge system.\n"
  },
  {
    "path": "python/fledge/common/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/common/acl_manager.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.exceptions import StorageServerError\n\n\n__author__ = \"Deepanshu Yadav\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nclass ACLManagerSingleton(object):\n    _shared_state = {}\n\n    def __init__(self):\n        self.__dict__ = self._shared_state\n\n\nclass ACLManager(ACLManagerSingleton):\n    _pending_notifications = {}\n    _storage_client = None\n    \n    def __init__(self, storage_client=None):\n        ACLManagerSingleton.__init__(self)\n        if not storage_client:\n            from fledge.services.core import connect\n            self._storage_client = connect.get_storage_async()\n        else:\n            self._storage_client = storage_client\n\n    async def _notify_service_about_acl_change(self, entity_name, acl, reason):\n        \"\"\"Helper function that sends the ACL change to the respective service. \"\"\"\n        # We need to find the address and management host for the required service.\n        from fledge.services.core.service_registry.service_registry import ServiceRegistry\n        from fledge.services.core.service_registry.exceptions import DoesNotExist\n        try:\n            services = ServiceRegistry.get(name=entity_name)\n            service = services[0]\n        except DoesNotExist:\n            _logger.error(\"Cannot notify the service {} \"\n                          \"about {}. 
It does not exist in service registry.\".format(entity_name, reason))\n            return\n        else:\n            try:\n                _logger.info(\"Notifying the service\"\n                             \" {} about {}\".format(entity_name, reason))\n                from fledge.common.service_record import ServiceRecord\n\n                if service._status == ServiceRecord.Status.Shutdown:\n                    _logger.info(\"The service {} has Shut Down. \"\n                                 \"Cannot notify the service about ACL change.\".format(entity_name))\n                    return\n\n                elif service._status == ServiceRecord.Status.Unresponsive:\n                    _logger.warning(\"The service {} is Unresponsive. Skipping notifying \"\n                                 \"the service about {}. But adding to pending items.\".format(entity_name, reason))\n                    self._pending_notifications[entity_name] = acl\n                    _logger.info(\"Adding {} to pending list. \"\n                                 \"And pending items are {}\".format(entity_name,\n                                                                   self._pending_notifications))\n                    return\n\n                elif service._status == ServiceRecord.Status.Failed:\n                    _logger.error(\"The service {} has failed. 
\"\n                                  \"Cannot notify the service about ACL change\".format(entity_name))\n                    # Need to clear pending items if any so that when next time service restarts it picks\n                    # acl from its category.\n                    if entity_name in self._pending_notifications:\n                        self._pending_notifications.pop(entity_name)\n                    return\n\n                from fledge.common.microservice_management_client.microservice_management_client import MicroserviceManagementClient\n                _logger.debug(\"The host and port is {} {} and \"\n                              \"entity is {}\".format(service._address,\n                                                    service._management_port,\n                                                    entity_name))\n                mgt_client = MicroserviceManagementClient(service._address,\n                                                          service._management_port)\n                _logger.debug(\"Connection established with {} at {} and port {}\".format(entity_name,\n                                                                                        service._address,\n                                                                                        service._management_port))\n                await mgt_client.update_service_for_acl_change_security(acl=acl,\n                                                                        reason=reason)\n                _logger.info(\"Notified the {} about {}\".format(entity_name, reason))\n                # Clearing the pending notifications if any.\n                if entity_name in self._pending_notifications:\n                    self._pending_notifications.pop(entity_name)\n\n            except Exception as ex:\n                _logger.error(ex, \"Could not notify {}.\".format(entity_name))\n\n    async def handle_update_for_acl_usage(self, entity_name, acl_name, entity_type,\n      
                                    message=\"updateACL\"):\n        _logger.debug(\"Update acl usage called for {} {} {}\".format(entity_name, acl_name, entity_type))\n\n        if entity_type == \"service\":\n            try:\n                del_payload = PayloadBuilder().WHERE([\"entity_type\", \"=\", \"service\"]). \\\n                    AND_WHERE([\"entity_name\", \"=\", entity_name]).payload()\n                result = await self._storage_client.delete_from_tbl('acl_usage', del_payload)\n\n                payload = PayloadBuilder().INSERT(entity_name=entity_name,\n                                                  entity_type=\"service\",\n                                                  name=acl_name).payload()\n                _logger.debug(\"insert payload is {}\".format(payload))\n                result = await self._storage_client.insert_into_tbl(\"acl_usage\", payload)\n                response = result['response']\n\n                await self._notify_service_about_acl_change(entity_name, acl_name, message)\n            except KeyError:\n                raise ValueError(result['message'])\n            except StorageServerError as ex:\n                err_response = ex.error\n                raise ValueError(err_response)\n        else:\n            try:\n                del_payload = PayloadBuilder().WHERE([\"entity_type\", \"=\", \"script\"]). 
\\\n                    AND_WHERE([\"entity_name\", \"=\", entity_name]).payload()\n                result = await self._storage_client.delete_from_tbl('acl_usage', del_payload)\n\n                payload = PayloadBuilder().INSERT(entity_name=entity_name,\n                                                  entity_type=\"script\",\n                                                  name=acl_name).payload()\n                _logger.debug(\"insert payload is {}\".format(payload))\n                result = await self._storage_client.insert_into_tbl(\"acl_usage\", payload)\n                response = result['response']\n\n            except KeyError:\n                raise ValueError(result['message'])\n            except StorageServerError as ex:\n                err_response = ex.error\n                raise ValueError(err_response)\n\n    async def handle_delete_for_acl_usage(self, entity_name, acl_name, entity_type, notify_service=True):\n        _logger.debug(\"delete acl usage called for {} {} {}\".format(entity_name, acl_name, entity_type))\n\n        if entity_type == \"service\":\n            try:\n                # Note entity_type must be a service since it is a config item of type ACL\n                # in a category.\n                delete_payload = PayloadBuilder().WHERE([\"entity_name\", \"=\", entity_name]). 
\\\n                    AND_WHERE([\"entity_type\", \"=\", \"service\"]).payload()\n                _logger.debug(\"The delete payload is {}\".format(delete_payload))\n\n                result = await self._storage_client.delete_from_tbl(\"acl_usage\", delete_payload)\n                response = result['response']\n                _logger.debug(\"The response payload is {}\".format(response))\n\n                if notify_service:\n                    await self._notify_service_about_acl_change(entity_name,\n                                                                acl_name,\n                                                                \"detachACL\")\n            except KeyError:\n                raise ValueError(result['message'])\n            except StorageServerError as ex:\n                err_response = ex.error\n                raise ValueError(err_response)\n        else:\n            try:\n                # Note entity_type must be a script since ACL is being deleted.\n                delete_payload = PayloadBuilder().WHERE([\"entity_name\", \"=\", entity_name]). 
\\\n                    AND_WHERE([\"entity_type\", \"=\", \"script\"]).payload()\n                result = await self._storage_client.delete_from_tbl(\"acl_usage\", delete_payload)\n                response = result['response']\n            except KeyError:\n                raise ValueError(result['message'])\n            except StorageServerError as ex:\n                err_response = ex.error\n                raise ValueError(err_response)\n\n    async def handle_create_for_acl_usage(self, entity_name, acl_name, entity_type, notify_service=False,\n                                          acl_to_delete=None):\n        _logger.debug(\"Create acl usage called for {} {} {}\".format(entity_name, acl_name, entity_type))\n        if entity_type == \"service\":\n            try:\n                # Note entity_type must be a service since it is a config item of type ACL\n                # in a category.\n                _logger.debug(\"Notify south is {}\".format(notify_service))\n                q_payload = PayloadBuilder().SELECT(\"name\", \"entity_name\", \"entity_type\"). \\\n                    WHERE([\"entity_name\", \"=\", entity_name]). 
\\\n                    AND_WHERE([\"entity_type\", \"=\", entity_type]).\\\n                    AND_WHERE([\"name\", \"=\", acl_name]).payload()\n                results = await self._storage_client.query_tbl_with_payload('acl_usage', q_payload)\n                _logger.debug(\"The result of query is {}\".format(results))\n                # Check if the value to insert already exists.\n                if len(results[\"rows\"]) > 0:\n                    _logger.debug(\"The tuple ({}, {}, {}) already exists in acl usage table.\".format(entity_name,\n                                                                                                    entity_type,\n                                                                                                    acl_name))\n                else:\n                    payload = PayloadBuilder().INSERT(entity_name=entity_name,\n                                                      entity_type=\"service\",\n                                                      name=acl_name).payload()\n                    result = await self._storage_client.insert_into_tbl(\"acl_usage\", payload)\n                    response = result['response']\n                    if acl_to_delete is not None:\n                        delete_payload = PayloadBuilder().WHERE([\"entity_name\", \"=\", entity_name]). 
\\\n                            AND_WHERE([\"entity_type\", \"=\", entity_type]).\\\n                            AND_WHERE([\"name\", \"=\", acl_to_delete]).payload()\n                        _logger.debug(\"The acl to delete is {} and entity name is {}\".format(acl_to_delete,\n                                                                                            entity_name))\n                        result = await self._storage_client.delete_from_tbl(\"acl_usage\", delete_payload)\n                        response = result['response']\n                        _logger.debug(\"the response is {}\".format(response))\n\n                if notify_service:\n                    await self._notify_service_about_acl_change(entity_name, acl_name, \"attachACL\")\n            except KeyError:\n                raise ValueError(result['message'])\n            except StorageServerError as ex:\n                err_response = ex.error\n                raise ValueError(err_response)\n        else:\n            try:\n                # Note entity_type must be a script since handle new acl is called.\n                q_payload = PayloadBuilder().SELECT(\"name\", \"entity_name\", \"entity_type\"). \\\n                    WHERE([\"entity_name\", \"=\", entity_name]). \\\n                    AND_WHERE([\"entity_type\", \"=\", entity_type]). 
\\\n                    AND_WHERE([\"name\", \"=\", acl_name]).payload()\n                results = await self._storage_client.query_tbl_with_payload('acl_usage', q_payload)\n                _logger.debug(\"The result of query is {}\".format(results))\n                # Check if the value to insert already exists.\n                if len(results[\"rows\"]) > 0:\n                    _logger.debug(\"The tuple ({}, {}, {}) already exists in acl usage table.\".format(entity_name,\n                                                                                                     entity_type,\n                                                                                                     acl_name))\n                else:\n                    payload = PayloadBuilder().INSERT(entity_name=entity_name,\n                                                      entity_type=\"script\",\n                                                      name=acl_name).payload()\n                    result = await self._storage_client.insert_into_tbl(\"acl_usage\", payload)\n                    response = result['response']\n            except KeyError:\n                raise ValueError(result['message'])\n            except StorageServerError as ex:\n                err_response = ex.error\n                raise ValueError(err_response)\n\n    async def get_all_entities_for_a_acl(self, acl_name, entity_type):\n        \"\"\"Get all the entities attached to an acl.\"\"\"\n        try:\n            q_payload = PayloadBuilder().SELECT(\"entity_name\"). \\\n                AND_WHERE([\"entity_type\", \"=\", entity_type]). 
\\\n                AND_WHERE([\"name\", \"=\", acl_name]).payload()\n            results = await self._storage_client.query_tbl_with_payload('acl_usage', q_payload)\n\n            if len(results['rows']) > 0:\n                entities = []\n                for row in results['rows']:\n                    entities.append(row['entity_name'])\n                return entities\n            else:\n                return []\n        except KeyError:\n            raise ValueError(results['message'])\n        except StorageServerError as ex:\n            err_response = ex.error\n            raise ValueError(err_response)\n\n    async def get_acl_for_an_entity(self, entity_name, entity_type):\n        \"\"\"Get the acl attached to an entity.\"\"\"\n        try:\n            q_payload = PayloadBuilder().SELECT(\"name\"). \\\n                AND_WHERE([\"entity_type\", \"=\", entity_type]). \\\n                AND_WHERE([\"entity_name\", \"=\", entity_name]).payload()\n            results = await self._storage_client.query_tbl_with_payload('acl_usage', q_payload)\n\n            if len(results['rows']) > 0:\n                for row in results['rows']:\n                    return row['name']\n            else:\n                return \"\"\n        except KeyError:\n            raise ValueError(results['message'])\n        except StorageServerError as ex:\n            err_response = ex.error\n            raise ValueError(err_response)\n\n    async def resolve_pending_notification_for_acl_change(self, svc_name):\n        \"\"\"Methods that handles the pending notification about acl change to the service.\"\"\"\n        _logger.debug(\"svc name {} and pending notifications {}\".format(svc_name,\n                                                                        self._pending_notifications))\n        if svc_name not in self._pending_notifications:\n            return\n\n        new_acl = await self.get_acl_for_an_entity(svc_name, \"service\")\n        old_acl = 
self._pending_notifications[svc_name]\n\n        _logger.debug(\"New acl is {} Old acl is {}\".format(new_acl, old_acl))\n\n        if new_acl == old_acl and new_acl != \"\":\n            await self._notify_service_about_acl_change(entity_name=svc_name, acl=new_acl,\n                                                        reason=\"reloadACL\")\n\n        elif new_acl != old_acl and new_acl != \"\" and old_acl != \"\":\n            await self._notify_service_about_acl_change(entity_name=svc_name, acl=new_acl,\n                                                        reason=\"updateACL\")\n        elif old_acl != \"\" and new_acl == \"\":\n            await self._notify_service_about_acl_change(entity_name=svc_name, acl=new_acl,\n                                                        reason=\"detachACL\")\n\n        elif old_acl == \"\" and new_acl != \"\":\n            await self._notify_service_about_acl_change(entity_name=svc_name, acl=new_acl,\n                                                        reason=\"attachACL\")\n"
  },
  {
    "path": "python/fledge/common/alert_manager.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2024 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\nclass AlertManagerSingleton(object):\n    _shared_state = {}\n\n    def __init__(self):\n        self.__dict__ = self._shared_state\n\n\nclass AlertManager(AlertManagerSingleton):\n    storage_client = None\n    alerts = []\n    urgency = {\"Critical\": 1, \"High\": 2, \"Normal\": 3, \"Low\": 4}\n\n    def __init__(self, storage_client=None):\n        AlertManagerSingleton.__init__(self)\n        if not storage_client:\n            from fledge.services.core import connect\n            self.storage_client = connect.get_storage_async()\n        else:\n            self.storage_client = storage_client\n\n    async def get_all(self):\n        \"\"\" Get all alerts from storage \"\"\"\n        try:\n            q_payload = PayloadBuilder().SELECT(\"key\", \"message\", \"urgency\", \"ts\").ALIAS(\n                \"return\", (\"ts\", 'timestamp')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")).payload()\n            storage_result = await self.storage_client.query_tbl_with_payload('alerts', q_payload)\n            result = []\n            if 'rows' in storage_result:\n                for row in storage_result['rows']:\n                    tmp = {\"key\": row['key'],\n                             \"message\": row['message'],\n                             \"urgency\": self._urgency_name_by_value(row['urgency']),\n                             \"timestamp\": row['timestamp']\n                             }\n                    result.append(tmp)\n            self.alerts = result\n        except Exception as ex:\n            
raise Exception(ex)\n        else:\n            return self.alerts\n\n    async def get_by_key(self, name):\n        \"\"\" Get an alert by key \"\"\"\n        key_found = [a for a in self.alerts if a['key'] == name]\n        if key_found:\n            return key_found[0]\n        try:\n            q_payload = PayloadBuilder().SELECT(\"key\", \"message\", \"urgency\", \"ts\").ALIAS(\n                \"return\", (\"ts\", 'timestamp')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")).WHERE(\n                [\"key\", \"=\", name]).payload()\n            results = await self.storage_client.query_tbl_with_payload('alerts', q_payload)\n            alert = {}\n            if 'rows' in results:\n                if len(results['rows']) > 0:\n                    row = results['rows'][0]\n                    alert = {\"key\": row['key'],\n                           \"message\": row['message'],\n                           \"urgency\": self._urgency_name_by_value(row['urgency']),\n                           \"timestamp\": row['timestamp']\n                           }\n            if not alert:\n                raise KeyError('{} alert not found.'.format(name))\n        except KeyError as err:\n            msg = str(err.args[0])\n            raise KeyError(msg)\n        else:\n            return alert\n\n    async def add(self, params):\n        \"\"\" Add an alert \"\"\"\n        response = None\n        try:\n            payload = PayloadBuilder().INSERT(**params).payload()\n            insert_api_result = await self.storage_client.insert_into_tbl('alerts', payload)\n            if insert_api_result['response'] == 'inserted' and insert_api_result['rows_affected'] == 1:\n                response = {\"alert\": params}\n                self.alerts.append(params)\n        except Exception as ex:\n            raise Exception(ex)\n        else:\n            return response\n\n    async def delete(self, key=None):\n        \"\"\" Delete an entry from storage \"\"\"\n      
  try:\n            payload = {}\n            message = \"Nothing to delete.\"\n            key_exists = {}\n            if key is not None:\n                for index, item in enumerate(self.alerts):\n                    if item['key'] == key:\n                        key_exists = item\n                        break\n                if key_exists:\n                    payload = PayloadBuilder().WHERE([\"key\", \"=\", key]).payload()\n                else:\n                    raise KeyError\n            result = await self.storage_client.delete_from_tbl(\"alerts\", payload)\n            if 'rows_affected' in result:\n                if result['response'] == \"deleted\" and result['rows_affected']:\n                    if key is None:\n                        message = \"Delete all alerts.\"\n                        self.alerts = []\n                    else:\n                        message = \"{} alert is deleted.\".format(key)\n                        if key_exists:\n                            self.alerts.remove(key_exists)\n        except KeyError:\n            raise KeyError\n        except Exception as ex:\n            raise Exception(ex)\n        else:\n            return message\n\n    def _urgency_name_by_value(self, value):\n        try:\n            name = list(self.urgency.keys())[list(self.urgency.values()).index(value)]\n        except:\n            name = \"UNKNOWN\"\n        return name\n\n"
  },
  {
    "path": "python/fledge/common/audit_logger.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.storage_client.exceptions import StorageServerError\n\n__author__ = \"Mark Riddoch\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nclass AuditLoggerSingleton(object):\n    \"\"\" AuditLoggerSingleton\n    \n    Used to make AuditLogger a singleton via shared state\n    \"\"\"\n    _shared_state = {}\n\n    def __init__(self):\n        self.__dict__ = self._shared_state\n\n\nclass AuditLogger(AuditLoggerSingleton):\n    \"\"\" Audit Logger\n\n        Singleton interface to an audit logging class\n    \"\"\"\n\n    _success = 0\n    _failure = 1\n    _warning = 2\n    _information = 4\n    \"\"\" The various log levels as defined in init.sql \"\"\"\n\n    _storage = None\n    \"\"\" The storage client we should use to talk to the storage service \"\"\"\n\n    def __init__(self, storage=None):\n        AuditLoggerSingleton.__init__(self)\n        if self._storage is None:\n            if not isinstance(storage, StorageClientAsync):\n                raise TypeError('Must be a valid Storage object')\n            self._storage = storage\n\n    async def _log(self, level, code, log):\n        try:\n            if log is None:\n                payload = PayloadBuilder().INSERT(code=code, level=level).payload()\n            else:\n                payload = PayloadBuilder().INSERT(code=code, level=level, log=log).payload()\n\n            await self._storage.insert_into_tbl(\"log\", payload)\n\n        except (StorageServerError, Exception) as ex:\n            _logger.error(ex, \"Failed to log audit trail entry 
'{}'.\".format(code))\n            raise ex\n\n    async def success(self, code, log):\n        await self._log(self._success, code, log)\n\n    async def failure(self, code, log):\n        await self._log(self._failure, code, log)\n\n    async def warning(self, code, log):\n        await self._log(self._warning, code, log)\n\n    async def information(self, code, log):\n        await self._log(self._information, code, log)\n"
  },
  {
    "path": "python/fledge/common/common.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Common definitions\"\"\"\n\nimport os\n\n__author__ = \"Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n_FLEDGE_DATA = os.getenv(\"FLEDGE_DATA\", default=None)\n_FLEDGE_ROOT = os.getenv(\"FLEDGE_ROOT\", default='/usr/local/fledge')\n_FLEDGE_PLUGIN_PATH = os.getenv(\"FLEDGE_PLUGIN_PATH\", default=None)\n"
  },
  {
    "path": "python/fledge/common/configuration_manager.py",
    "content": "# -*- coding: utf-8 -*-\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom importlib import import_module\nfrom urllib.parse import urlparse\nimport binascii\nimport copy\nimport json\nimport inspect\nimport ipaddress\nimport datetime\nimport os\nfrom math import *\nimport collections\nimport ast\n\nimport aiohttp.web_request\nfrom fledge.common import utils as common_utils\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.common.storage_client.utils import Utils\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.common import _FLEDGE_ROOT, _FLEDGE_DATA\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.acl_manager import ACLManager\n\n__author__ = \"Ashwin Gopalakrishnan, Ashish Jabble, Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n# MAKE UPPER_CASE\n_valid_type_strings = sorted(['boolean', 'integer', 'float', 'string', 'IPv4', 'IPv6', 'X509 certificate', 'password',\n                              'JSON', 'URL', 'enumeration', 'script', 'code', 'northTask', 'ACL', 'bucket',\n                              'list', 'kvlist'])\n_optional_items = sorted(['readonly', 'order', 'length', 'maximum', 'minimum', 'rule', 'deprecated', 'displayName',\n                          'validity', 'mandatory', 'group', 'listSize', 'listName', 'permissions'])\nRESERVED_CATG = ['South', 'North', 'General', 'Advanced', 'Utilities', 'rest_api', 'Security', 'service', 'SCHEDULER',\n                 'SMNTR', 'PURGE_READ', 'Notifications']\n\n\nclass ConfigurationCache(object):\n    \"\"\"Configuration Cache Manager\"\"\"\n\n    def __init__(self, size=30):\n        \"\"\"\n        cache: value 
stored in dictionary as per category_name\n        max_cache_size: Hold the recently requested categories in the cache. Default cache size is 30\n        hit: number of times an item is read from the cache\n        miss: number of times an item was not found in the cache and a read of the storage layer was required\n        \"\"\"\n        self.cache = {}\n        self.max_cache_size = size\n        self.hit = 0\n        self.miss = 0\n\n    def __contains__(self, category_name):\n        \"\"\"Returns True or False depending on whether or not the key is in the cache\n        and update the hit and data_accessed\"\"\"\n        if category_name in self.cache:\n            try:\n                current_hit = self.cache[category_name]['hit']\n            except KeyError:\n                current_hit = 0\n\n            self.hit += 1\n            self.cache[category_name].update({'date_accessed': datetime.datetime.now(), 'hit': current_hit + 1})\n            return True\n        self.miss += 1\n        return False\n\n    def update(self, category_name, category_description, category_val, display_name=None):\n        \"\"\"Update the cache dictionary and remove the oldest item\"\"\"\n        if category_name not in self.cache and len(self.cache) >= self.max_cache_size:\n            self.remove_oldest()\n        display_name = category_name if display_name is None else display_name\n        self.cache[category_name] = {'date_accessed': datetime.datetime.now(), 'description': category_description,\n                                     'value': category_val, 'displayName': display_name}\n        _logger.debug(\"Updated Configuration Cache %s\", self.cache)\n\n    def remove_oldest(self):\n        \"\"\"Remove the entry that has the oldest accessed date\"\"\"\n        oldest_entry = None\n        for category_name in self.cache:\n            if oldest_entry is None:\n                oldest_entry = category_name\n            elif 
self.cache[category_name].get('date_accessed') and self.cache[oldest_entry].get('date_accessed') \\\n                    and self.cache[category_name]['date_accessed'] < self.cache[oldest_entry]['date_accessed']:\n                oldest_entry = category_name\n        if oldest_entry:\n            self.cache.pop(oldest_entry)\n\n    def remove(self, key):\n        \"\"\"Remove the entry with given key name\"\"\"\n        for category_name in self.cache:\n            if key == category_name:\n                self.cache.pop(key)\n                break\n\n    @property\n    def size(self):\n        \"\"\"Return the size of the cache\"\"\"\n        return len(self.cache)\n\n\nclass ConfigurationManagerSingleton(object):\n    \"\"\" ConfigurationManagerSingleton\n\n    Used to make ConfigurationManager a singleton via shared state\n    \"\"\"\n    _shared_state = {}\n\n    def __init__(self):\n        self.__dict__ = self._shared_state\n\n\nclass ConfigurationManager(ConfigurationManagerSingleton):\n    \"\"\" Configuration Manager\n\n    General naming convention:\n\n    category(s)\n        category_name - string\n        category_description - string\n        category_val - dict\n            item_name - string (dynamic)\n            item_val - dict\n                entry_name - string\n                entry_val - string\n\n        ----------- 4 fixed entry_name/entry_val pairs ----------------\n\n                description_name - string (fixed - 'description')\n                    description_val - string (dynamic)\n                type_name - string (fixed - 'type')\n                    type_val - string (dynamic - ('boolean', 'integer', 'string', 'IPv4', 'IPv6', 'X509 certificate', 'JSON'))\n                default_name - string (fixed - 'default')\n                    default_val - string (dynamic)\n                value_name - string (fixed - 'value')\n                    value_val - string (dynamic)\n    \"\"\"\n\n    _storage = None\n    _registered_interests = 
None\n    _registered_interests_child = None\n    _cacheManager = None\n    _acl_handler = None\n\n    def __init__(self, storage=None):\n        ConfigurationManagerSingleton.__init__(self)\n        if self._storage is None:\n            if not isinstance(storage, StorageClientAsync):\n                raise TypeError('Must be a valid Storage object')\n            self._storage = storage\n        if self._registered_interests is None:\n            self._registered_interests = {}\n\n        if self._registered_interests_child is None:\n            self._registered_interests_child = {}\n\n        if self._cacheManager is None:\n            self._cacheManager = ConfigurationCache()\n\n        if self._acl_handler is None:\n            self._acl_handler = ACLManager(storage)\n\n    async def _run_callbacks(self, category_name):\n        callbacks = self._registered_interests.get(category_name)\n        if callbacks is not None:\n            for callback in callbacks:\n                try:\n                    cb = import_module(callback)\n                except ImportError:\n                    _logger.exception(\n                        'Unable to import callback module %s for category_name %s', callback, category_name)\n                    raise\n                if not hasattr(cb, 'run'):\n                    _logger.exception(\n                        'Callback module %s does not have method run', callback)\n                    raise AttributeError('Callback module {} does not have method run'.format(callback))\n                method = cb.run\n                if not inspect.iscoroutinefunction(method):\n                    _logger.exception(\n                        'Callback module %s run method must be a coroutine function', callback)\n                    raise AttributeError('Callback module {} run method must be a coroutine function'.format(callback))\n                await cb.run(category_name)\n        else:\n            if category_name == \"LOGGING\":\n     
           from fledge.services.core import server\n                logging_level = self._cacheManager.cache[category_name]['value']['logLevel']['value']\n                server.Server._log_level = logging_level\n                FLCoreLogger().set_level(logging_level)\n\n    async def _run_callbacks_child(self, parent_category_name, child_category, operation):\n        callbacks = self._registered_interests_child.get(parent_category_name)\n        if callbacks is not None:\n            for callback in callbacks:\n                try:\n                    cb = import_module(callback)\n                except ImportError:\n                    _logger.exception(\n                        'Unable to import callback module %s for category_name %s', callback, parent_category_name)\n                    raise\n                if not hasattr(cb, 'run_child'):\n                    _logger.exception(\n                        'Callback module %s does not have method run_child', callback)\n                    raise AttributeError('Callback module {} does not have method run_child'.format(callback))\n                method = cb.run_child\n                if not inspect.iscoroutinefunction(method):\n                    _logger.exception(\n                        'Callback module %s run_child method must be a coroutine function', callback)\n                    raise AttributeError(\n                        'Callback module {} run_child method must be a coroutine function'.format(callback))\n                await cb.run_child(parent_category_name, child_category, operation)\n\n    async def _merge_category_vals(self, category_val_new, category_val_storage, keep_original_items,\n                                   category_name=None):\n        def convert_json_to_list_for_category_and_item(config_item_name: str, new_config: dict):\n            old_value_json = json.loads(category_val_storage[config_item_name]['value'])\n            if isinstance(old_value_json, dict):\n                
config_item_list_name = new_config.get('listName')\n                if config_item_list_name is not None:\n                    old_list_value = old_value_json.get(config_item_list_name)\n                    if old_list_value is not None:\n                        _logger.info(\"Upgrading the JSON configuration into a list for category: {} and \"\n                                     \"config item: {}\".format(category_name, config_item_name))\n                        new_config['value'] = json.dumps(old_value_json)\n                    else:\n                        _logger.error(\"The values for the {} category could not be merged \"\n                                      \"because the listName value was missing in the old configuration for the {} \"\n                                      \"config item.\".format(category_name, config_item_name))\n                else:\n                    _logger.error(\"The values for the {} category could not be merged because the listName key-pair was\"\n                                  \" not found in the {} config item.\".format(category_name, config_item_name))\n            return new_config['value']\n        # preserve all value_vals from category_val_storage\n        # use items in category_val_new not in category_val_storage\n        # keep_original_items = FALSE ignore items in category_val_storage not in category_val_new\n        # keep_original_items = TRUE keep items in category_val_storage not in category_val_new\n        category_val_storage_copy = copy.deepcopy(category_val_storage)\n        category_val_new_copy = copy.deepcopy(category_val_new)\n        deprecated_items = []\n        for item_name_new, item_val_new in category_val_new_copy.items():\n            item_val_storage = category_val_storage_copy.get(item_name_new)\n            if item_val_storage is not None:\n                if item_val_new['type'] == item_val_storage.get('type'):\n                    item_val_new['value'] = 
item_val_storage.get('value')\n                else:\n                    if 'value' not in item_val_new:\n                        item_val_new['value'] = item_val_new['default']\n                    \"\"\" Upgrade case: \n                        when the config item is of type JSON, it will be converted into a list while preserving \n                        its value as is.\n                    \"\"\"\n                    if item_val_new['type'] == 'list' and item_val_storage['type'] == 'JSON':\n                        if 'listName' in item_val_new:\n                            convert_json_to_list_for_category_and_item(item_name_new, item_val_new)\n                category_val_storage_copy.pop(item_name_new)\n            if \"deprecated\" in item_val_new and item_val_new['deprecated'] == 'true':\n                audit = AuditLogger(self._storage)\n                # Mask password values for audit trail security\n                masked_old_value = self._mask_password_value(item_val_new.get('type'), item_val_new['value'])\n                audit_details = {'category': category_name, 'item': item_name_new, 'oldValue': masked_old_value,\n                                 'newValue': 'deprecated'}\n                await audit.information('CONCH', audit_details)\n                deprecated_items.append(item_name_new)\n\n        for item in deprecated_items:\n            category_val_new_copy.pop(item)\n        if keep_original_items:\n            for item_name_storage, item_val_storage in category_val_storage_copy.items():\n                category_val_new_copy[item_name_storage] = item_val_storage\n        return category_val_new_copy\n    \n    def _validate_optional_string_attribute(self, category_name, optional_key_name, optional_key_value, config_item_name):\n        \"\"\"Validate optional attributes that must be non-empty strings\"\"\"\n        if not isinstance(optional_key_value, str):\n            raise TypeError('For {} category, {} type must be a string for 
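The merge rules that `_merge_category_vals` applies can be sketched as a small standalone function. This is an illustrative re-statement, not the Fledge API: the function name and the simplified item shape (`{'type': ..., 'value': ...}`) are assumptions made for the example.

```python
import copy

def merge_category_vals(new_vals, stored_vals, keep_original_items):
    # Illustrative sketch of the merge rules (not the Fledge API):
    # - an item present in both keeps its stored value when the types match
    # - items only in new_vals keep their new value
    # - items only in stored_vals survive only when keep_original_items is True
    merged = copy.deepcopy(new_vals)
    leftover = copy.deepcopy(stored_vals)
    for name, item in merged.items():
        stored = leftover.pop(name, None)
        if stored is not None and item.get('type') == stored.get('type'):
            item['value'] = stored.get('value')
    if keep_original_items:
        merged.update(leftover)
    return merged
```

For example, with `keep_original_items=False` a storage-only item is dropped, while the stored value of a shared item always wins over the new one.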
item name {}; got {}'.format(\n                category_name, optional_key_name, config_item_name, type(optional_key_value)))\n        final_optional_key_value = optional_key_value.strip()\n        if not final_optional_key_value:\n            raise ValueError('For {} category, {} cannot be empty for item name {}'.format(\n                category_name, optional_key_name, config_item_name))\n        return final_optional_key_value\n\n    def _validate_permissions_entry(self, category_name, entry_name, item_name, entry_val):\n        \"\"\"Validate permissions entry - must be non-empty list of non-empty strings\"\"\"\n        if not isinstance(entry_val, list):\n            raise ValueError(\n                'For {} category, {} entry value must be a list of string for item name {}; got {}.'\n                ''.format(category_name, entry_name, item_name, type(entry_val)))\n        if not entry_val:\n            raise ValueError(\n                'For {} category, {} entry value must not be empty for item name '\n                '{}.'.format(category_name, entry_name, item_name))\n        if not all(isinstance(ev, str) and ev != '' for ev in entry_val):\n            raise ValueError('For {} category, {} entry values must be a string and non-empty '\n                             'for item name {}.'.format(category_name, entry_name, item_name))\n\n    def _validate_enumeration_type(self, category_name, item_name, item_val, entry_name, entry_val, get_entry_val):\n        \"\"\"Validate enumeration type with options and default value\"\"\"\n        updates = {}\n        if 'options' not in item_val:\n            raise KeyError('For {} category, options required for enumeration type'.format(category_name))\n        if entry_name == 'options':\n            if type(entry_val) is not list:\n                raise TypeError('For {} category, entry value must be a list for item name {} and '\n                                'entry name {}; got {}'.format(category_name, 
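The rule `_validate_permissions_entry` enforces, a non-empty list of non-empty strings, can be restated as a boolean predicate. The helper below is hypothetical and shown only to make the accepted and rejected shapes concrete; the real method raises `ValueError` instead of returning `False`.

```python
def is_valid_permissions(entry_val):
    # Mirrors the checks in _validate_permissions_entry as a predicate
    # (illustrative only; the real method raises ValueError on failure).
    return (isinstance(entry_val, list)
            and len(entry_val) > 0
            and all(isinstance(ev, str) and ev != '' for ev in entry_val))
```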
item_name, entry_name,\n                                                               type(entry_val)))\n            if not entry_val:\n                raise ValueError('For {} category, entry value cannot be empty list for item_name {} and '\n                                 'entry_name {}; got {}'.format(category_name, item_name, entry_name,\n                                                                entry_val))\n            if get_entry_val(\"default\") not in entry_val:\n                raise ValueError('For {} category, entry value does not exist in options list for item name'\n                                 ' {} and entry_name {}; got {}'.format(category_name, item_name,\n                                                                        entry_name,\n                                                                        get_entry_val(\"default\")))\n            updates[entry_name] = entry_val\n        elif entry_name == \"permissions\":\n            self._validate_permissions_entry(category_name, entry_name, item_name, entry_val)\n        else:\n            if type(entry_val) is not str:\n                raise TypeError('For {} category, entry value must be a string for item name {} and '\n                                'entry name {}; got {}'.format(category_name, item_name, entry_name,\n                                                               type(entry_val)))\n        return updates\n\n    def _validate_bucket_type(self, category_name, item_name, item_val, entry_name, entry_val, get_entry_val):\n        \"\"\"Validate bucket type with properties and permissions\"\"\"\n        updates = {}\n        if 'properties' not in item_val:\n            raise KeyError('For {} category, properties KV pair must be required '\n                           'for item name {}.'.format(category_name, item_name))\n        if entry_name == 'properties':\n            prop_val = get_entry_val('properties')\n            if not isinstance(prop_val, dict):\n     
           raise ValueError('For {} category, properties must be JSON object for item name {}; got {}'\n                                 .format(category_name, item_name, type(entry_val)))\n            if not prop_val:\n                raise ValueError('For {} category, properties JSON object cannot be empty for item name {}'.format(category_name, item_name))\n            if 'key' not in prop_val:\n                raise ValueError('For {} category, key KV pair must exist in properties for item name {}'\n                                 ''.format(category_name, item_name))\n            updates[entry_name] = entry_val\n        elif \"permissions\" in item_val:\n            permissions = item_val['permissions']\n            self._validate_permissions_entry(category_name, 'permissions', item_name, permissions)\n            item_val['permissions'] = permissions\n        else:\n            if type(entry_val) is not str:\n                raise TypeError('For {} category, entry value must be a string for item name {} and '\n                                'entry name {}; got {}'.format(category_name, item_name, entry_name,\n                                                               type(entry_val)))\n        return updates\n\n    def _validate_list_items_object(self, category_name, item_name, prop_val):\n        \"\"\"Validate object type items with required properties structure\"\"\"\n        if not isinstance(prop_val, dict):\n            raise ValueError(\n                'For {} category, properties must be JSON object for item name {}; got {}'\n                .format(category_name, item_name, type(prop_val)))\n        if not prop_val:\n            raise ValueError(\n                'For {} category, properties JSON object cannot be empty for item name {}'\n                ''.format(category_name, item_name))\n        for kp, vp in prop_val.items():\n            if isinstance(vp, dict):\n                prop_keys = list(vp.keys())\n                if not 
prop_keys:\n                    raise ValueError('For {} category, {} properties cannot be empty for '\n                                     'item name {}'.format(category_name, kp, item_name))\n                diff = {'description', 'default', 'type'} - set(prop_keys)\n                if diff:\n                    raise ValueError('For {} category, {} properties must have type, description, '\n                                     'default keys for item name {}'.format(category_name,\n                                                                            kp, item_name))\n            else:\n                raise TypeError('For {} category, Properties must be a JSON object for {} key '\n                                'for item name {}'.format(category_name, kp, item_name))\n\n    def _validate_list_items_enumeration(self, category_name, item_name, item_val, entry_name):\n        \"\"\"Validate enumeration type items with options\"\"\"\n        if 'options' not in item_val:\n            raise KeyError('For {} category, options required for item name {}'.format(\n                category_name, item_name))\n        options = item_val['options']\n        if type(options) is not list:\n            raise TypeError('For {} category, entry value must be a list for item name {} and '\n                            'entry name {}; got {}'.format(category_name, item_name,\n                                                           entry_name, type(options)))\n        if not options:\n            raise ValueError(\n                'For {} category, options cannot be empty list for item_name {} and '\n                'entry_name {}'.format(category_name, item_name, entry_name))\n\n    def _validate_list_default_values(self, category_name, item_name, item_val, entry_val, default_val, list_size):\n        \"\"\"Validate default values for list/kvlist types including uniqueness and type checking\"\"\"\n        msg = \"array\" if item_val['type'] == 'list' else \"KV pair\"\n      
  try:\n            eval_default_val = ast.literal_eval(default_val)\n            if item_val['type'] == 'list':\n                if len(eval_default_val) > len(set(eval_default_val)):\n                    raise ArithmeticError(\"For {} category, default value {} elements are not \"\n                                          \"unique for item name {}\".format(category_name, msg,\n                                                                           item_name))\n            else:\n                if isinstance(eval_default_val, dict) and eval_default_val:\n                    nv = default_val.replace(\"{\", \"\")\n                    unique_list = []\n                    for pair in nv.split(','):\n                        if pair:\n                            k, v = pair.split(':')\n                            ks = k.strip()\n                            if ks not in unique_list:\n                                unique_list.append(ks)\n                            else:\n                                raise ArithmeticError(\"For category {}, duplicate KV pair found \"\n                                                      \"for item name {}\".format(\n                                    category_name, item_name))\n                        else:\n                            raise ArithmeticError(\"For {} category, KV pair invalid in default \"\n                                                  \"value for item name {}\".format(\n                                category_name, item_name))\n            if list_size >= 0:\n                if len(eval_default_val) > list_size:\n                    raise ArithmeticError(\"For {} category, default value {} list size limit to \"\n                                          \"{} for item name {}\".format(category_name, msg,\n                                                                       list_size, item_name))\n        except ArithmeticError as err:\n            raise ValueError(err)\n        except Exception:\n            
raise TypeError(\"For {} category, default value should be passed {} list in string \"\n                            \"format for item name {}\".format(category_name, msg, item_name))\n        type_check = str\n        if entry_val == 'integer':\n            type_check = int\n        elif entry_val == 'float':\n            type_check = float\n        type_mismatched_message = (\"For {} category, all elements should be of same {} type \"\n                                   \"in default value for item name {}\").format(category_name,\n                                                                               type_check, item_name)\n        if item_val['type'] == 'kvlist':\n            if not isinstance(eval_default_val, dict):\n                raise TypeError(\"For {} category, KV pair invalid in default value for item name {}\"\n                                \"\".format(category_name, item_name))\n            for k, v in eval_default_val.items():\n                try:\n                    eval_s = v if entry_val == \"string\" else ast.literal_eval(v)\n                except Exception:\n                    raise ValueError(type_mismatched_message)\n                if not isinstance(eval_s, type_check):\n                    raise ValueError(type_mismatched_message)\n        else:\n            for s in eval_default_val:\n                try:\n                    eval_s = s if entry_val == \"string\" else ast.literal_eval(s)\n                except Exception:\n                    raise ValueError(type_mismatched_message)\n                if not isinstance(eval_s, type_check):\n                    raise ValueError(type_mismatched_message)\n\n    def _validate_enumeration_default_values(self, category_name, item_name, item_val, default_val):\n        \"\"\"Validate that enumeration default values exist in options\"\"\"\n        eval_default_val = ast.literal_eval(default_val)\n        ev_options = item_val['options']\n        if item_val['type'] == 'kvlist':\n            for ek, ev in 
eval_default_val.items():\n                if ev not in ev_options:\n                    raise ValueError('For {} category, {} value does not exist in options '\n                                     'for item name {} and entry_name {}'.format(\n                        category_name, ev, item_name, ek))\n        else:\n            for s in eval_default_val:\n                if s not in ev_options:\n                    raise ValueError('For {} category, {} value does not exist in options for item '\n                                     'name {}'.format(category_name, s, item_name))\n\n    def _validate_items_entry(self, category_name, item_name, item_val, entry_name, entry_val, get_entry_val):\n        \"\"\"Validate the items entry including object, enumeration, and default value validation\"\"\"\n        if entry_val not in (\"string\", \"float\", \"integer\", \"object\", \"enumeration\"):\n            raise ValueError(\"For {} category, items value should either be in string, float, \"\n                             \"integer, object or enumeration for item name {}\".format(\n                category_name, item_name))\n        \n        if entry_val == 'object':\n            if 'properties' not in item_val:\n                raise KeyError('For {} category, properties KV pair must be required for item name {}'\n                               ''.format(category_name, item_name))\n            prop_val = get_entry_val('properties')\n            self._validate_list_items_object(category_name, item_name, prop_val)\n        \n        if entry_val == 'enumeration':\n            self._validate_list_items_enumeration(category_name, item_name, item_val, entry_name)\n        \n        default_val = get_entry_val(\"default\")\n        list_size = -1\n        if 'listSize' in item_val:\n            list_size = item_val['listSize']\n            if not isinstance(list_size, str):\n                raise TypeError('For {} category, listSize type must be a string for item name {}; 
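The core check in `_validate_enumeration_default_values`, where the default arrives as a Python-literal string and every selected value must appear in the declared options, can be sketched as follows. The function name and boolean return style are illustrative assumptions; the real method raises `ValueError` on a miss.

```python
import ast

def enum_defaults_in_options(default_val, options, is_kvlist):
    # The default arrives as a Python-literal string, e.g. '["red"]'
    # or '{"bg": "red"}'; parse it and check every selected value
    # (dict values for kvlist, list elements otherwise) against options.
    parsed = ast.literal_eval(default_val)
    values = parsed.values() if is_kvlist else parsed
    return all(v in options for v in values)
```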
'\n                                'got {}'.format(category_name, item_name, type(list_size)))\n            if self._validate_type_value('listSize', list_size) is False:\n                raise ValueError('For {} category, listSize value must be an integer value '\n                                 'for item name {}'.format(category_name, item_name))\n            list_size = int(item_val['listSize'])\n        \n        if entry_val not in (\"object\", \"enumeration\"):\n            self._validate_list_default_values(category_name, item_name, item_val, entry_val, default_val, list_size)\n        elif entry_val == \"enumeration\":\n            self._validate_enumeration_default_values(category_name, item_name, item_val, default_val)\n\n    def _validate_list_type(self, category_name, item_name, item_val, entry_name, entry_val, get_entry_val):\n        \"\"\"Validate list/kvlist types with items, properties, options, and listSize\"\"\"\n        updates = {}\n        if entry_name not in ('properties', 'options', 'permissions') and not isinstance(entry_val, str):\n            raise TypeError('For {} category, entry value must be a string for item name {} and '\n                            'entry name {}; got {}'.format(category_name, item_name, entry_name,\n                                                           type(entry_val)))\n        if 'items' not in item_val:\n            raise KeyError('For {} category, items KV pair must be required '\n                           'for item name {}.'.format(category_name, item_name))\n        if item_val['type'] == 'kvlist' and item_val['items'] == 'object':\n            if 'keyName' in item_val:\n                item_val['keyName'] = self._validate_optional_string_attribute(\n                    category_name, 'keyName', item_val['keyName'], item_name)\n                updates[entry_name] = entry_val\n            if 'keyDescription' in item_val:\n                item_val['keyDescription'] = 
self._validate_optional_string_attribute(\n                    category_name, 'keyDescription', item_val['keyDescription'], item_name)\n                updates[entry_name] = entry_val\n        if 'listName' in item_val:\n            item_val['listName'] = self._validate_optional_string_attribute(\n                category_name, 'listName', item_val['listName'], item_name)\n        elif \"permissions\" in item_val:\n            permissions = item_val['permissions']\n            self._validate_permissions_entry(category_name, 'permissions', item_name, permissions)\n        if entry_name == 'items':\n            self._validate_items_entry(category_name, item_name, item_val, entry_name, entry_val, get_entry_val)\n            updates[entry_name] = entry_val\n        if entry_name in ('properties', 'options'):\n            updates[entry_name] = entry_val\n        return updates\n\n    def _validate_json_type_with_schema(self, category_name, item_name, item_val, entry_name, entry_val):\n        \"\"\"Validate JSON type with optional schema\"\"\"\n        updates = {}\n        if 'schema' in item_val:\n            if type(item_val['schema']) is not dict:\n                raise TypeError('For {} category, {} item name and schema entry value must be an object; '\n                                'got {}'.format(category_name, item_name, type(entry_val)))\n            if not item_val['schema']:\n                raise ValueError('For {} category, {} item name and schema entry value can not be empty.'\n                                 ''.format(category_name, item_name))\n            updates[entry_name] = entry_val\n        return updates\n\n    def _validate_optional_entries(self, category_name, item_name, item_val, entry_name, entry_val):\n        \"\"\"Validate optional configuration entries (readonly, deprecated, mandatory, min, max, etc.)\"\"\"\n        updates = {}\n        if entry_name == 'readonly' or entry_name == 'deprecated' or entry_name == 'mandatory':\n            
if self._validate_type_value('boolean', entry_val) is False:\n                raise ValueError('For {} category, entry value must be boolean for item name {}; got {}'\n                                 .format(category_name, entry_name, type(entry_val)))\n            else:\n                if entry_name == 'mandatory' and entry_val == 'true':\n                    if not len(item_val['default'].strip()):\n                        raise ValueError(\n                            'For {} category, A default value must be given for {}'.format(category_name,\n                                                                                           item_name))\n        elif entry_name == 'minimum' or entry_name == 'maximum':\n            if (self._validate_type_value('integer', entry_val) or\n                self._validate_type_value('float', entry_val)) is False:\n                raise ValueError('For {} category, entry value must be an integer or float for item name '\n                                 '{}; got {}'.format(category_name, entry_name, type(entry_val)))\n        elif entry_name == \"permissions\":\n            self._validate_permissions_entry(category_name, entry_name, item_name, entry_val)\n        elif entry_name in ('displayName', 'group', 'rule', 'validity', 'listName'):\n            if not isinstance(entry_val, str):\n                raise ValueError('For {} category, entry value must be string for item name {}; got {}'\n                                 .format(category_name, entry_name, type(entry_val)))\n        else:\n            if (self._validate_type_value('integer', entry_val) or\n                    self._validate_type_value('listSize', entry_val)) is False:\n                raise ValueError('For {} category, entry value must be an integer for item name {}; got {}'\n                                 .format(category_name, entry_name, type(entry_val)))\n        updates[entry_name] = entry_val\n        return updates\n\n    def 
_check_required_entries_present(self, category_name, item_name, expected_item_entries):\n        \"\"\"Check all expected entries are present\"\"\"\n        for needed_key, needed_value in expected_item_entries.items():\n            if needed_value == 0:\n                raise ValueError('For {} category, missing entry name {} for item name {}'.format(\n                    category_name, needed_key, item_name))\n\n    def _cleanup_and_set_defaults(self, category_name, item_name, item_val, get_entry_val, set_default_val):\n        \"\"\"type validation and value cleanup\"\"\"\n        # validate data type value\n        if self._validate_type_value(get_entry_val(\"type\"), get_entry_val(\"default\")) is False:\n            raise ValueError(\n                'For {} category, unrecognized value for item name {}'.format(category_name, item_name))\n        if 'readonly' in item_val:\n            item_val['readonly'] = self._clean('boolean', item_val['readonly'])\n        if 'deprecated' in item_val:\n            item_val['deprecated'] = self._clean('boolean', item_val['deprecated'])\n        if 'mandatory' in item_val:\n            item_val['mandatory'] = self._clean('boolean', item_val['mandatory'])\n        if set_default_val:\n            item_val['default'] = self._clean(item_val, item_val['default'])\n            item_val['value'] = item_val['default']\n\n    async def _validate_category_val(self, category_name, category_val, set_value_val_from_default_val=True):\n        require_entry_value = not set_value_val_from_default_val\n        if type(category_val) is not dict:\n            raise TypeError('For {} category, category value must be a dictionary; got {}'\n                            .format(category_name, type(category_val)))\n        category_val_copy = copy.deepcopy(category_val)\n        \n        for item_name, item_val in category_val_copy.items():\n            if type(item_name) is not str:\n                raise TypeError('For {} category, item name 
{} must be a string; got {}'\n                                .format(category_name, item_name, type(item_name)))\n            if type(item_val) is not dict:\n                raise TypeError('For {} category, item value must be a dict for item name {}; got {}'\n                                .format(category_name, item_name, type(item_val)))\n\n            optional_item_entries = {'readonly': 0, 'order': 0, 'length': 0, 'maximum': 0, 'minimum': 0,\n                                     'deprecated': 0, 'displayName': 0, 'rule': 0, 'validity': 0, 'mandatory': 0,\n                                     'group': 0, 'listSize': 0, 'listName': 0, 'permissions': 0}\n            expected_item_entries = {'description': 0, 'default': 0, 'type': 0}\n\n            if require_entry_value:\n                expected_item_entries['value'] = 0\n\n            def get_entry_val(k):\n                v = [val for name, val in item_val.items() if name == k]\n                return v[0]\n            \n            for entry_name, entry_val in item_val.copy().items():\n                if type(entry_name) is not str:\n                    raise TypeError('For {} category, entry name {} must be a string for item name {}; got {}'\n                                    .format(category_name, entry_name, item_name, type(entry_name)))\n\n                # Validate different types using extracted helper functions\n                if 'type' in item_val and get_entry_val(\"type\") == 'enumeration':\n                    updates = self._validate_enumeration_type(category_name, item_name, item_val, entry_name, \n                                                              entry_val, get_entry_val)\n                    expected_item_entries.update(updates)\n                elif 'type' in item_val and get_entry_val(\"type\") == 'bucket':\n                    updates = self._validate_bucket_type(category_name, item_name, item_val, entry_name, \n                                                         
entry_val, get_entry_val)\n                    expected_item_entries.update(updates)\n                elif 'type' in item_val and get_entry_val(\"type\") in ('list', 'kvlist'):\n                    updates = self._validate_list_type(category_name, item_name, item_val, entry_name, \n                                                       entry_val, get_entry_val)\n                    expected_item_entries.update(updates)\n                elif entry_name == \"permissions\":\n                    self._validate_permissions_entry(category_name, entry_name, item_name, entry_val)\n                elif 'type' in item_val and get_entry_val(\"type\") == 'JSON':\n                    updates = self._validate_json_type_with_schema(category_name, item_name, item_val, \n                                                                   entry_name, entry_val)\n                    expected_item_entries.update(updates)\n                else:\n                    if type(entry_val) is not str:\n                        raise TypeError('For {} category, entry value must be a string for item name {} and '\n                                        'entry name {}; got {}'.format(category_name, item_name, entry_name,\n                                                                       type(entry_val)))\n\n                # Validate optional entries\n                if entry_name in optional_item_entries:\n                    updates = self._validate_optional_entries(category_name, item_name, item_val, \n                                                             entry_name, entry_val)\n                    expected_item_entries.update(updates)\n                \n                # Handle unrecognized entries and type validation\n                num_entries = expected_item_entries.get(entry_name)\n                if set_value_val_from_default_val and entry_name == 'value':\n                    raise ValueError('Specifying value_name and value_val for item_name {} is not allowed if '\n       
                              'desired behavior is to use default_val as value_val'.format(item_name))\n                if num_entries is None:\n                    _logger.warning('For {} category, DISCARDING unrecognized entry name {} for item name {}'\n                                    .format(category_name, entry_name, item_name))\n                    try:\n                        del category_val_copy[item_name][entry_name]\n                    except Exception:\n                        raise KeyError('For {} category, unable to discard entry name {} for item name {}'.format(\n                            category_name, entry_name, item_name))\n\n                if entry_name == 'type':\n                    if entry_val not in _valid_type_strings:\n                        raise ValueError('For {} category, invalid entry value for entry name \"type\" for item name {}.'\n                                         ' valid type strings are: {}'.format(category_name, item_name,\n                                                                              _valid_type_strings))\n                expected_item_entries[entry_name] = 1\n            \n            # Finalize item validation\n            self._check_required_entries_present(category_name, item_name, expected_item_entries)\n            self._cleanup_and_set_defaults(category_name, item_name, item_val, get_entry_val, \n                                          set_value_val_from_default_val)\n        \n        return category_val_copy\n\n    async def _create_new_category(self, category_name, category_val, category_description, display_name=None):\n        try:\n            if isinstance(category_val, dict):\n                new_category_val = copy.deepcopy(category_val)\n                for i, v in category_val.items():\n                    # Remove \"deprecated\" items from a new category configuration\n                    if 'deprecated' in v and v['deprecated'] == 'true':\n                        new_category_val.pop(i)\n            else:\n                new_category_val = category_val\n            display_name = category_name if display_name is 
None else display_name\n            audit = AuditLogger(self._storage)\n            # Mask password values for audit trail security\n            masked_category_val = self._mask_password_value(new_category_val)\n            await audit.information('CONAD', {'name': category_name, 'category': masked_category_val})\n            payload = PayloadBuilder().INSERT(key=category_name, description=category_description,\n                                              value=new_category_val, display_name=display_name).payload()\n            result = await self._storage.insert_into_tbl(\"configuration\", payload)\n            response = result['response']\n            self._cacheManager.update(category_name, category_description, new_category_val, display_name)\n        except KeyError:\n            raise ValueError(result['message'])\n        except StorageServerError as ex:\n            err_response = ex.error\n            raise ValueError(err_response)\n\n    async def search_for_ACL_single(self, cat_name):\n        payload = PayloadBuilder().SELECT(\"key\", \"value\").WHERE([\"key\", \"=\", cat_name]).payload()\n        results = await self._storage.query_tbl_with_payload('configuration', payload)\n        for row in results[\"rows\"]:\n            for item_name, item_info in row[\"value\"].items():\n                try:\n                    if item_info[\"type\"] == \"ACL\" and \"Security\" in cat_name:\n                        # if item_info[\"type\"] == \"ACL\":\n                        return True, item_name, cat_name.replace(\"Security\", \"\"), item_info['value']\n                        # return True, item_name, cat_name, item_info['value']\n                except KeyError:\n                    continue\n\n        return False, None, None, None\n\n    async def search_for_ACL_recursive_from_cat_name(self, cat_name):\n        \"\"\"\n            Searches for config item ACL recursive in a category and its child categories.\n        \"\"\"\n        payload = 
PayloadBuilder().SELECT(\"key\", \"value\").WHERE([\"key\", \"=\", cat_name]).payload()\n        results = await self._storage.query_tbl_with_payload('configuration', payload)\n        for row in results[\"rows\"]:\n            for item_name, item_info in row[\"value\"].items():\n                try:\n                    if item_info[\"type\"] == \"ACL\" and \"Security\" in cat_name:\n                        # if item_info[\"type\"] == \"ACL\":\n                        return True, item_name, cat_name.replace(\"Security\", \"\"), item_info['value']\n                        # return True, item_name, cat_name, item_info['value']\n                except KeyError:\n                    continue\n\n        category_children_payload = PayloadBuilder().SELECT(\"child\").DISTINCT([\"child\"]).WHERE([\"parent\", \"=\",\n                                                                                                cat_name]).payload()\n        child_results = await self._storage.query_tbl_with_payload('category_children',\n                                                                   category_children_payload)\n        for row in child_results['rows']:\n            res, config_item_name, found_cat_name, found_value = await \\\n                self.search_for_ACL_recursive_from_cat_name(row[\"child\"])\n            if res:\n                return True, config_item_name, found_cat_name, found_value\n\n        # If nothing found then return False\n        return False, None, None, None\n\n    async def _read_all_category_names(self):\n        # SELECT configuration.key, configuration.description, configuration.value, configuration.display_name, configuration.ts FROM configuration\n        payload = PayloadBuilder().SELECT(\"key\", \"description\", \"value\", \"display_name\", \"ts\") \\\n            .ALIAS(\"return\", (\"ts\", 'timestamp')) \\\n            .FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")).payload()\n        results = await 
self._storage.query_tbl_with_payload('configuration', payload)\n\n        category_info = []\n        for row in results['rows']:\n            category_info.append((row['key'], row['description'], row[\"display_name\"]))\n        return category_info\n\n    async def _read_category(self, cat_name):\n        # SELECT configuration.key, configuration.description, configuration.value, configuration.display_name, configuration.ts FROM configuration\n        payload = PayloadBuilder().SELECT(\"key\", \"description\", \"value\", \"display_name\", \"ts\") \\\n            .ALIAS(\"return\", (\"ts\", 'timestamp')) \\\n            .FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")).WHERE([\"key\", \"=\", cat_name]).LIMIT(1).payload()\n        result = await self._storage.query_tbl_with_payload('configuration', payload)\n        return result['rows'][0] if result['rows'] else None\n\n    async def _read_all_groups(self, root, children):\n        async def nested_children(child):\n            # Recursively find children\n            if not child:\n                return\n            next_children = await self.get_category_child(child[\"key\"])\n            if len(next_children) == 0:\n                child.update({\"children\": []})\n            else:\n                child.update({\"children\": next_children})\n                # call for each child\n                for next_child in child[\"children\"]:\n                    await nested_children(next_child)\n\n        # SELECT key, description, display_name FROM configuration\n        payload = PayloadBuilder().SELECT(\"key\", \"description\", \"display_name\").payload()\n        all_categories = await self._storage.query_tbl_with_payload('configuration', payload)\n\n        # SELECT DISTINCT child FROM category_children\n        unique_category_children_payload = PayloadBuilder().SELECT(\"child\").DISTINCT([\"child\"]).payload()\n        unique_category_children = await 
self._storage.query_tbl_with_payload('category_children',\n                                                                              unique_category_children_payload)\n\n        list_child = [row['child'] for row in unique_category_children['rows']]\n        list_root = []\n        list_not_root = []\n\n        for row in all_categories['rows']:\n            if row[\"key\"] in list_child:\n                list_not_root.append((row[\"key\"], row[\"description\"], row[\"display_name\"]))\n            else:\n                list_root.append((row[\"key\"], row[\"description\"], row[\"display_name\"]))\n        if children:\n            tree = []\n            for k, v, d in list_root if root is True else list_not_root:\n                tree.append({\"key\": k, \"description\": v, \"displayName\": d, \"children\": []})\n\n            for branch in tree:\n                await nested_children(branch)\n\n            return tree\n\n        return list_root if root else list_not_root\n\n    async def _read_category_val(self, category_name):\n        # SELECT configuration.key, configuration.description, configuration.value,\n        # configuration.ts FROM configuration WHERE configuration.key = :key_1\n        payload = PayloadBuilder().SELECT(\"value\").WHERE([\"key\", \"=\", category_name]).payload()\n        results = await self._storage.query_tbl_with_payload('configuration', payload)\n        for row in results['rows']:\n            return row['value']\n\n    async def _read_item_val(self, category_name, item_name):\n        # SELECT configuration.value::json->'configuration' as value\n        # FROM fledge.configuration WHERE configuration.key='SENSORS'\n        payload = PayloadBuilder().SELECT((\"key\", \"description\", \"ts\", [\"value\", [item_name]])) \\\n            .ALIAS(\"return\", (\"ts\", \"timestamp\"), (\"value\", \"value\")) \\\n            .FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")) \\\n            .WHERE([\"key\", \"=\", 
category_name]).payload()\n\n        results = await self._storage.query_tbl_with_payload('configuration', payload)\n        if len(results['rows']) == 0:\n            return None\n\n        return results['rows'][0]['value']\n\n    async def _read_value_val(self, category_name, item_name):\n        # SELECT configuration.value::json->'retainUnsent'->'value' as value\n        # FROM fledge.configuration WHERE configuration.key='PURGE_READ'\n        payload = PayloadBuilder().SELECT((\"key\", \"description\", \"ts\", [\"value\", [item_name, \"value\"]])) \\\n            .ALIAS(\"return\", (\"ts\", \"timestamp\"), (\"value\", \"value\")) \\\n            .FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")) \\\n            .WHERE([\"key\", \"=\", category_name]).payload()\n\n        results = await self._storage.query_tbl_with_payload('configuration', payload)\n        if len(results['rows']) == 0:\n            return None\n\n        return results['rows'][0]['value']\n\n    async def _update_value_val(self, category_name, item_name, new_value_val, item_type=None):\n        try:\n            old_value = await self._read_value_val(category_name, item_name)\n            # UPDATE fledge.configuration\n            # SET value = jsonb_set(value, '{retainUnsent,value}', '\"12\"')\n            # WHERE key='PURGE_READ'\n            payload = PayloadBuilder().SELECT(\"key\", \"description\", \"ts\", \"value\") \\\n                .JSON_PROPERTY((\"value\", [item_name, \"value\"], new_value_val)) \\\n                .FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")) \\\n                .WHERE([\"key\", \"=\", category_name]).payload()\n            await self._storage.update_tbl(\"configuration\", payload)\n            cat_value = {item_name: {\"value\": new_value_val}}\n            self._handle_config_items(category_name, cat_value)\n            audit = AuditLogger(self._storage)\n\n            # Mask password values for audit trail security\n            
masked_old_value = self._mask_password_value(item_type, old_value) if item_type else old_value\n            masked_new_value = self._mask_password_value(item_type, new_value_val) if item_type else new_value_val\n\n            audit_details = {'category': category_name, 'item': item_name, 'oldValue': masked_old_value,\n                             'newValue': masked_new_value}\n            await audit.information('CONCH', audit_details)\n        except KeyError as ex:\n            raise ValueError(str(ex))\n        except StorageServerError as ex:\n            err_response = ex.error\n            raise ValueError(err_response)\n\n    async def update_configuration_item_bulk(self, category_name, config_item_list, request=None):\n        \"\"\" Bulk update config items\n\n        Args:\n            category_name: category name\n            config_item_list: dict containing config item values\n            request: request details to identify user info\n\n        Returns:\n            None\n        \"\"\"\n        try:\n            payload = {\"updates\": []}\n            audit_details = {'category': category_name, 'items': {}}\n            cat_info = await self.get_category_all_items(category_name)\n            if cat_info is None:\n                raise NameError(\"No such Category found for {}\".format(category_name))\n            \"\"\" Note: Update reject to the properties with permissions property when the logged in user type is not\n             given in the list of permissions. 
\"\"\"\n            user_role_name = await self._check_updates_by_role(request)\n            for item_name, new_val in config_item_list.items():\n                if item_name not in cat_info:\n                    raise KeyError('{} config item not found'.format(item_name))\n                self._check_permissions(request, cat_info[item_name], user_role_name)\n                # Evaluate new_val as per rule if defined\n                if 'rule' in cat_info[item_name]:\n                    rule = cat_info[item_name]['rule'].replace(\"value\", new_val)\n                    if eval(rule) is False:\n                        raise ValueError('The value of {} is not valid, please supply a valid value'.format(item_name))\n                if cat_info[item_name]['type'] == 'JSON':\n                    if isinstance(new_val, dict):\n                        pass\n                    elif not isinstance(new_val, str):\n                        raise TypeError('new value should be a valid dict Or a string literal, in double quotes')\n                elif not isinstance(new_val, str):\n                    raise TypeError('new value should be of type string')\n                if cat_info[item_name]['type'] == 'enumeration':\n                    if new_val == '':\n                        raise ValueError('entry_val cannot be empty')\n                    if new_val not in cat_info[item_name]['options']:\n                        raise ValueError('new value does not exist in options enum')\n                else:\n                    if self._validate_type_value(cat_info[item_name]['type'], new_val) is False:\n                        raise TypeError('Unrecognized value name for item_name {}'.format(item_name))\n\n                if 'mandatory' in cat_info[item_name]:\n                    if cat_info[item_name]['mandatory'] == 'true':\n                        if cat_info[item_name]['type'] == 'JSON':\n                            if not len(new_val):\n                                raise 
ValueError(\n                                    \"Dict cannot be set as empty. A value must be given for {}\".format(item_name))\n                        elif not len(new_val.strip()):\n                            raise ValueError(\"A value must be given for {}\".format(item_name))\n                if cat_info[item_name]['type'] in ('list', 'kvlist') and cat_info[item_name]['items'] == 'enumeration':\n                    try:\n                        eval_new_val = ast.literal_eval(new_val)\n                    except:\n                        raise TypeError(\"Malformed payload for given {} category\".format(category_name))\n                    ev_options = cat_info[item_name]['options']\n                    if cat_info[item_name]['type'] == 'kvlist':\n                        if not isinstance(eval_new_val, dict):\n                            raise TypeError(\"New value should be in KV pair format\")\n                        for ek, ev in eval_new_val.items():\n                            if ev == '':\n                                raise ValueError('For {}, enum value cannot be empty'.format(ek))\n                            if ev not in ev_options:\n                                raise ValueError('For {}, new value does not exist in options enum'.format(ek))\n                    else:\n                        if not isinstance(eval_new_val, list):\n                            raise TypeError(\"New value should be passed in list\")\n                        if not eval_new_val:\n                            raise ValueError('enum value cannot be empty')\n                        for s in eval_new_val:\n                            if s not in ev_options:\n                                raise ValueError('For {}, new value does not exist in options enum'.format(s))\n                old_value = cat_info[item_name]['value']\n                new_val = self._clean(cat_info[item_name], new_val)\n                # Validations on the basis of optional attributes\n         
       self._validate_value_per_optional_attribute(item_name, cat_info[item_name], new_val)\n\n                old_value_for_check = old_value\n                new_val_for_check = new_val\n                # Special case: If type is list and listName is given then modify the value internally\n                if cat_info[item_name]['type'] == 'list' and 'listName' in cat_info[item_name]:\n                    if cat_info[item_name][\"listName\"] not in new_val:\n                        modify_value = json.dumps({cat_info[item_name]['listName']: json.loads(new_val)})\n                        new_val_for_check = modify_value\n                        new_val = modify_value\n                if type(new_val) == dict:\n                    # it converts .old so both .new and .old are dicts\n                    # it uses OrderedDict to preserve the sequence of the keys\n                    try:\n                        old_value_dict = ast.literal_eval(old_value)\n                        old_value_for_check = collections.OrderedDict(old_value_dict)\n                        new_val_for_check = collections.OrderedDict(new_val)\n                    except:\n                        old_value_for_check = old_value\n                        new_val_for_check = new_val\n\n                if old_value_for_check != new_val_for_check:\n                    payload_item = PayloadBuilder().SELECT(\"key\", \"description\", \"ts\", \"value\") \\\n                        .JSON_PROPERTY((\"value\", [item_name, \"value\"], new_val)) \\\n                        .FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")) \\\n                        .WHERE([\"key\", \"=\", category_name]).payload()\n                    payload['updates'].append(json.loads(payload_item))\n                    # Mask password values for audit trail security\n                    item_type = cat_info[item_name].get('type')\n                    masked_old_value = self._mask_password_value(item_type, old_value)\n         
           masked_new_value = self._mask_password_value(item_type, new_val)\n                    audit_details['items'].update({item_name: {'oldValue': masked_old_value, 'newValue': masked_new_value}})\n\n                    if \"ACL\" in item_name and type(old_value) == str and type(new_val) == str:\n                        await self._handle_update_config_for_acl(category_name, old_value, new_val)\n\n            if not payload['updates']:\n                return\n\n            await self._storage.update_tbl(\"configuration\", json.dumps(payload))\n\n            # read the updated value from storage\n            cat_value = await self._read_category_val(category_name)\n            self._handle_config_items(category_name, cat_value)\n            # Category config items cache updated\n            for item_name, new_val in config_item_list.items():\n                if category_name in self._cacheManager.cache:\n                    if item_name in self._cacheManager.cache[category_name]['value']:\n                        self._cacheManager.cache[category_name]['value'][item_name]['value'] = cat_value[item_name][\n                            'value']\n                    else:\n                        self._cacheManager.cache[category_name]['value'].update(\n                            {item_name: cat_value[item_name]['value']})\n\n            # Configuration Change audit entry\n            audit = AuditLogger(self._storage)\n            await audit.information('CONCH', audit_details)\n\n        except Exception as ex:\n            if 'Forbidden' not in str(ex):\n                _logger.exception(ex, 'Unable to bulk update config items')\n            raise\n\n        try:\n            await self._run_callbacks(category_name)\n        except:\n            _logger.exception(\n                'Unable to run callbacks for category_name %s', category_name)\n            raise\n\n    async def _handle_update_config_for_acl(self, category_name, old_value, new_val):\n        
\"\"\" Handles which function to call for acl usage table on the basis of old_value and\n            new_val.\n        \"\"\"\n        if new_val != old_value:\n            if old_value == \"\" and not new_val == \"\":\n                # Need to attach ACL.\n                await \\\n                 self._acl_handler.handle_create_for_acl_usage(category_name.replace(\"Security\", \"\"),\n                                                               new_val,\n                                                               \"service\", notify_service=True,\n                                                               acl_to_delete=\"\")\n\n            elif not old_value == \"\" and new_val == \"\":\n                # Need to detach ACL\n                await \\\n                 self._acl_handler.handle_delete_for_acl_usage(category_name.replace(\"Security\", \"\"),\n                                                               new_val,\n                                                               \"service\")\n\n            else:\n                # Need to update ACL.\n                await \\\n                 self._acl_handler.handle_update_for_acl_usage(category_name.replace(\"Security\", \"\"),\n                                                               new_val, \"service\")\n\n    async def _update_category(self, category_name, category_val, category_description, display_name=None):\n        try:\n            display_name = category_name if display_name is None else display_name\n            payload = PayloadBuilder().SET(value=category_val, description=category_description,\n                                           display_name=display_name).WHERE([\"key\", \"=\", category_name]).payload()\n            result = await self._storage.update_tbl(\"configuration\", payload)\n            response = result['response']\n            # Re-read category from DB\n            new_category_val_db = await self._read_category_val(category_name)\n            if 
category_name in self._cacheManager.cache:\n                self._cacheManager.cache[category_name]['description'] = category_description\n                self._cacheManager.cache[category_name]['value'] = new_category_val_db\n                self._cacheManager.cache[category_name]['displayName'] = display_name\n            else:\n                self._cacheManager.cache.update({category_name: {\"description\": category_description,\n                                                                 \"value\": new_category_val_db,\n                                                                 \"displayName\": display_name}})\n        except KeyError:\n            raise ValueError(result['message'])\n        except StorageServerError as ex:\n            err_response = ex.error\n            raise ValueError(err_response)\n\n    async def get_all_category_names(self, root=None, children=False):\n        \"\"\"Get all category names in the Fledge system\n\n        Args:\n            root: If true then select all keys from categories table and then filter out\n                  that are children of another category. 
So the root categories are those\n                  entries in configuration table that do not appear in distinct child in category_children\n                  If false then it will return distinct child in category_children\n                  If root is None then it will return all categories\n            children: If true then it will return nested array of children of that category\n                      If false then it will return categories on the basis of root value\n        Return Values:\n                    a list of tuples (string category_name, string category_description)\n        \"\"\"\n        try:\n            info = await self._read_all_groups(root,\n                                               children) if root is not None else await self._read_all_category_names()\n            return info\n        except:\n            _logger.exception('Unable to read all category names')\n            raise\n\n    async def get_category_all_items(self, category_name):\n        \"\"\"Get a specified category's entire configuration (all items).\n\n        Keyword Arguments:\n        category_name -- name of the category (required)\n\n        Return Values:\n        a JSONB dictionary with all items of a category's configuration\n        None\n        \"\"\"\n        try:\n            if category_name in self._cacheManager:\n                # Interim solution; to ensure script type config item file content handling\n                category_value = self._handle_script_type(category_name,\n                                                          self._cacheManager.cache[category_name]['value'])\n                self._cacheManager.update(category_name, self._cacheManager.cache[category_name]['description'],\n                                          category_value, self._cacheManager.cache[category_name]['displayName'])\n                return category_value\n\n            category = await self._read_category(category_name)  # await 
self._read_category_val(category_name)\n            category_value = None\n            if category is not None:\n                category_value = self._handle_script_type(category_name, category[\"value\"])\n                self._cacheManager.update(category_name, category[\"description\"], category_value,\n                                          category[\"display_name\"])\n            return category_value\n        except:\n            _logger.exception('Unable to get all category items of {} category.'.format(category_name))\n            raise\n\n    async def get_category_item(self, category_name, item_name):\n        \"\"\"Get a given item within a given category.\n\n        Keyword Arguments:\n        category_name -- name of the category (required)\n        item_name -- name of the item within the category (required)\n\n        Return Values:\n        a JSONB dictionary with item's content\n        None\n        \"\"\"\n        try:\n            if category_name in self._cacheManager:\n                if item_name not in self._cacheManager.cache[category_name]['value']:\n                    return None\n                return self._cacheManager.cache[category_name]['value'][item_name]\n            else:\n                cat_item = await self._read_item_val(category_name, item_name)\n                if cat_item is not None:\n                    category = await self._read_category(category_name)  # await self._read_category_val(category_name)\n                    if category is not None:\n                        category_value = self._handle_script_type(category_name, category[\"value\"])\n                        self._cacheManager.update(category_name, category[\"description\"], category_value,\n                                                  category[\"display_name\"])\n                        cat_item = category_value[item_name]\n                return cat_item\n        except:\n            _logger.exception(\n                'Unable to get category 
item based on category_name %s and item_name %s', category_name, item_name)\n            raise\n\n    async def get_category_item_value_entry(self, category_name, item_name):\n        \"\"\"Get the \"value\" entry of a given item within a given category.\n\n        Keyword Arguments:\n        category_name -- name of the category (required)\n        item_name -- name of the item within the category (required)\n\n        Return Values:\n        a string of the \"value\" entry\n        None\n        \"\"\"\n        try:\n            return await self._read_value_val(category_name, item_name)\n        except:\n            _logger.exception(\n                'Unable to get the \"value\" entry based on category_name %s and item_name %s', category_name,\n                item_name)\n            raise\n\n    async def set_category_item_value_entry(self, category_name, item_name, new_value_entry, script_file_path=\"\",\n                                            request=None):\n        \"\"\"Set the \"value\" entry of a given item within a given category.\n\n        Keyword Arguments:\n        category_name -- name of the category (required)\n        item_name -- name of item within the category whose \"value\" entry needs to be changed (required)\n        new_value_entry -- new value entry to replace old value entry\n        script_file_path -- Script file path for the config item whose type is script\n        request -- request details to identify user info\n\n        Side Effects:\n        An update to storage will not be issued if a new_value_entry is the same as the new_value_entry from storage.\n        Registered callbacks will be invoked only if an update is issued.\n\n        Exceptions Raised:\n\n        ImportError if callback module does not exist for relevant callbacks\n        AttributeError if callback module does not implement run(category_name) for relevant callbacks\n\n        Return Values:\n        None\n        \"\"\"\n        try:\n            
storage_value_entry = None\n            \"\"\" Note: Update reject to the properties with permissions property when the logged in user type is not\n                                     given in the list of permissions. \"\"\"\n            user_role_name = await self._check_updates_by_role(request)\n            if category_name in self._cacheManager:\n                if item_name not in self._cacheManager.cache[category_name]['value']:\n                    raise ValueError(\"No detail found for the category_name: {} and item_name: {}\"\n                                     .format(category_name, item_name))\n                storage_value_entry = self._cacheManager.cache[category_name]['value'][item_name]\n                if user_role_name:\n                    self._check_permissions(request, storage_value_entry, user_role_name)\n                if storage_value_entry['value'] == new_value_entry:\n                    return\n            else:\n                # get storage_value_entry and compare against new_value_value with its type, update if different\n                storage_value_entry = await self._read_item_val(category_name, item_name)\n                # check for category_name and item_name combination existence in storage\n                if storage_value_entry is None:\n                    raise ValueError(\"No detail found for the category_name: {} and item_name: {}\"\n                                     .format(category_name, item_name))\n                if user_role_name:\n                    self._check_permissions(request, storage_value_entry, user_role_name)\n                if storage_value_entry == new_value_entry:\n                    return\n            # Special case for enumeration field type handling\n            if storage_value_entry['type'] == 'enumeration':\n                if new_value_entry == '':\n                    raise ValueError('entry_val cannot be empty')\n                if new_value_entry not in 
storage_value_entry['options']:\n                    raise ValueError('new value does not exist in options enum')\n            else:\n                if self._validate_type_value(storage_value_entry['type'], new_value_entry) is False:\n                    raise TypeError('Unrecognized value name for item_name {}'.format(item_name))\n            if 'mandatory' in storage_value_entry:\n                if storage_value_entry['mandatory'] == 'true':\n                    if storage_value_entry['type'] != 'JSON' and not len(new_value_entry.strip()):\n                        raise ValueError(\"A value must be given for {}\".format(item_name))\n                    elif storage_value_entry['type'] == 'JSON' and not len(new_value_entry):\n                        raise ValueError(\"Dict cannot be set as empty. A value must be given for {}\".format(item_name))\n            new_value_entry = self._clean(storage_value_entry, new_value_entry)\n            # Evaluate new_value_entry as per rule if defined\n            if 'rule' in storage_value_entry:\n                rule = storage_value_entry['rule'].replace(\"value\", new_value_entry)\n                if eval(rule) is False:\n                    raise ValueError('The value of {} is not valid, please supply a valid value'.format(item_name))\n            # Validations on the basis of optional attributes\n            self._validate_value_per_optional_attribute(item_name, storage_value_entry, new_value_entry)\n\n            if type(storage_value_entry) == dict and 'type' in storage_value_entry \\\n                    and storage_value_entry['type'] == \"ACL\":\n                old_value = storage_value_entry['value']\n                new_val = new_value_entry\n                await self._handle_update_config_for_acl(category_name, old_value, new_val)\n\n            # Special case: If type is list and listName is given then modify the value internally\n            if storage_value_entry['type'] == 'list' and 'listName' in 
storage_value_entry:\n                if storage_value_entry[\"listName\"] not in new_value_entry:\n                    modify_value = json.dumps({storage_value_entry['listName']: json.loads(new_value_entry)})\n                    new_value_entry = modify_value\n\n            await self._update_value_val(category_name, item_name, new_value_entry, storage_value_entry.get('type'))\n            # always get value from storage\n            cat_item = await self._read_item_val(category_name, item_name)\n            # Special case for script type\n            if storage_value_entry['type'] == 'script':\n                if cat_item['value'] is not None and cat_item['value'] != \"\":\n                    cat_item[\"value\"] = binascii.unhexlify(cat_item['value'].encode('utf-8')).decode(\"utf-8\")\n                cat_item[\"file\"] = script_file_path\n\n            if category_name in self._cacheManager.cache:\n                if item_name in self._cacheManager.cache[category_name]['value']:\n                    self._cacheManager.cache[category_name]['value'][item_name]['value'] = cat_item['value']\n                    if storage_value_entry['type'] == 'script':\n                        self._cacheManager.cache[category_name]['value'][item_name][\"file\"] = script_file_path\n                else:\n                    self._cacheManager.cache[category_name]['value'].update({item_name: cat_item['value']})\n        except Exception as ex:\n            if 'Forbidden' not in str(ex):\n                _logger.exception(\n                    'Unable to set item value entry based on category_name %s and item_name %s and value_item_entry %s',\n                    category_name, item_name, new_value_entry)\n            raise\n        try:\n            await self._run_callbacks(category_name)\n        except:\n            _logger.exception(\n                'Unable to run callbacks for category_name %s', category_name)\n            raise\n\n    async def 
set_optional_value_entry(self, category_name, item_name, optional_entry_name, new_value_entry):\n        \"\"\"Set the \"optional_key\" entry of a given item within a given category.\n        The optional value can also be reset by passing new_value_entry=\"\"\n\n        Keyword Arguments:\n        category_name -- name of the category (required)\n        item_name -- name of item within the category whose \"optional_key\" entry needs to be changed (required)\n        optional_entry_name -- name of the optional attribute\n        new_value_entry -- new value entry to replace old value entry\n\n        Return Values:\n        None\n        \"\"\"\n        try:\n            storage_value_entry = None\n            if category_name in self._cacheManager:\n                if item_name not in self._cacheManager.cache[category_name]['value']:\n                    raise ValueError(\"No detail found for the category_name: {} and item_name: {}\"\n                                     .format(category_name, item_name))\n                storage_value_entry = self._cacheManager.cache[category_name]['value'][item_name]\n                if optional_entry_name not in storage_value_entry:\n                    raise KeyError(\"{} does not exist\".format(optional_entry_name))\n                if storage_value_entry[optional_entry_name] == new_value_entry:\n                    return\n            else:\n                # get storage_value_entry and compare against new_value_entry with its type, update if different\n                storage_value_entry = await self._read_item_val(category_name, item_name)\n                # check for category_name and item_name combination existence in storage\n                if storage_value_entry is None:\n                    raise ValueError(\"No detail found for the category_name: {} and item_name: {}\"\n                                     .format(category_name, item_name))\n                if storage_value_entry[optional_entry_name] == 
new_value_entry:\n                    return\n            # Validate optional types only when new_value_entry not empty; otherwise set empty value\n            if new_value_entry:\n                if optional_entry_name == \"properties\":\n                    raise ValueError('For {} category, optional item name properties cannot be updated.'.format(\n                        category_name))\n                elif optional_entry_name in ('readonly', 'deprecated', 'mandatory'):\n                    if self._validate_type_value('boolean', new_value_entry) is False:\n                        raise ValueError(\n                            'For {} category, entry value must be boolean for optional item name {}; got {}'\n                            .format(category_name, optional_entry_name, type(new_value_entry)))\n                elif optional_entry_name in ('minimum', 'maximum'):\n                    if (self._validate_type_value('integer', new_value_entry) or self._validate_type_value(\n                            'float', new_value_entry)) is False:\n                        raise ValueError('For {} category, entry value must be an integer or float for optional item '\n                                         '{}; got {}'.format(category_name, optional_entry_name, type(new_value_entry)))\n                elif optional_entry_name in ('displayName', 'group', 'rule', 'validity'):\n                    if not isinstance(new_value_entry, str):\n                        raise ValueError('For {} category, entry value must be string for optional item {}; got {}'\n                                         .format(category_name, optional_entry_name, type(new_value_entry)))\n                else:\n                    if self._validate_type_value('integer', new_value_entry) is False:\n                        raise ValueError('For {} category, entry value must be an integer for optional item {}; got {}'\n                                         .format(category_name, 
optional_entry_name, type(new_value_entry)))\n\n                # Validation here is minimal: ensure that minimum <= maximum (and vice-versa);\n                # no relationship between minimum, maximum and length is enforced.\n                # Compare with numeric operands (int or float) rather than with string operands.\n                def convert(value, _type):\n                    return int(value) if _type == \"integer\" else float(value) if _type == \"float\" else value\n\n                if optional_entry_name == 'minimum':\n                    new = convert(new_value_entry, storage_value_entry['type'])\n                    old = convert(storage_value_entry['maximum'], storage_value_entry['type'])\n                    if new > old:\n                        raise ValueError('Minimum value should be less than or equal to Maximum value')\n\n                if optional_entry_name == 'maximum':\n                    new = convert(new_value_entry, storage_value_entry['type'])\n                    old = convert(storage_value_entry['minimum'], storage_value_entry['type'])\n                    if new < old:\n                        raise ValueError('Maximum value should be greater than or equal to Minimum value')\n            payload = PayloadBuilder().SELECT(\"key\", \"description\", \"ts\", \"value\") \\\n                .JSON_PROPERTY((\"value\", [item_name, optional_entry_name], new_value_entry)) \\\n                .FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")) \\\n                .WHERE([\"key\", \"=\", category_name]).payload()\n            await self._storage.update_tbl(\"configuration\", payload)\n            # always get value from storage\n            cat_item = await self._read_item_val(category_name, item_name)\n            if category_name in self._cacheManager.cache:\n                if item_name in self._cacheManager.cache[category_name]['value']:\n                    
self._cacheManager.cache[category_name]['value'][item_name][optional_entry_name] = cat_item[\n                        optional_entry_name]\n                else:\n                    self._cacheManager.cache[category_name]['value'].update({item_name: cat_item[optional_entry_name]})\n        except:\n            _logger.exception(\n                'Unable to set optional %s entry based on category_name %s and item_name %s and value_item_entry %s',\n                optional_entry_name, category_name, item_name, new_value_entry)\n            raise\n\n    async def create_category(self, category_name, category_value, category_description='', keep_original_items=False,\n                              display_name=None):\n        \"\"\"Create a new category in the database.\n\n        Keyword Arguments:\n        category_name -- name of the category (required)\n        category_value -- JSONB object in dictionary form representing category's configuration values\n                sample_category_json_schema = {\n                    \"item_name1\": {\n                        \"description\": \"Port to listen on\",\n                        \"default\": \"5432\",\n                        \"type\": \"integer\"\n                    },\n                    \"port\": {\n                        \"description\": \"Port to listen on\",\n                        \"default\": \"5432\",\n                        \"type\": \"integer\"\n                    },\n                    \"url\": {\n                        \"description\": \"URL to accept data on\",\n                        \"default\": \"sensor/reading-values\",\n                        \"type\": \"string\"\n                    },\n                    \"certificate\": {\n                        \"description\": \"X509 certificate used to identify ingress interface\",\n                        \"default\": \"\",\n                        \"type\": \"x509 certificate\"\n                    }\n                }\n\n        
category_description -- description of the category (default='')\n        keep_original_items -- keep items in storage's category_val that are not in the new category_val (removes side effect #3) (default=False)\n        display_name -- name of the configuration category for display in the GUI. If it is None, the value of category_name is used\n\n        Return Values:\n        None\n\n        Side Effects:\n        A \"value\" entry will be created for each item using the \"default\" entry's specified value.\n        If a category of this name already exists within the storage, the new category_val and the storage's category_val will be merged such that:\n            1. preserve all \"value\" entries from the storage's category_val\n            2. use items in the new category_val that are not in the storage's category_val\n            3. ignore items in storage's category_val that are not in the new category_val\n        An update to storage will not be issued if a merged category_value is the same as the category_value from storage.\n        Registered callbacks specific to the category_name will be invoked only if a new category is created or if a category is updated.\n\n        Exceptions Raised:\n        ValueError\n        TypeError\n        ImportError if callback module does not exist for relevant callbacks\n        AttributeError if callback module does not implement run(category_name) for relevant callbacks\n\n        Restrictions and Usage:\n        A Fledge component calls this method to create one or more new configuration categories to store initial configuration.\n        Only default values can be entered for an item's entries.\n        A \"value\" entry specified for an item will raise an exception.\n        \"\"\"\n        if not isinstance(category_name, str):\n            raise TypeError('category_name must be a string')\n\n        if not isinstance(category_description, str):\n            raise TypeError('category_description must be a string')\n\n   
     category_val_prepared = ''\n        try:\n            # Don't allow invalid characters in the category name\n            is_valid_identifier, blocked_character = common_utils.is_valid_identifier(category_name)\n            if not is_valid_identifier:\n                raise ValueError(\"Invalid character {} found in category name {}\".format(blocked_character, category_name))\n            \n            # validate new category_val, set \"value\" from default\n            category_val_prepared = await self._validate_category_val(category_name, category_value, True)\n            # Evaluate value as per rule if defined\n            for item_name in category_val_prepared:\n                if 'rule' in category_val_prepared[item_name]:\n                    rule = category_val_prepared[item_name]['rule'].replace(\"value\",\n                                                                            category_val_prepared[item_name]['value'])\n                    if eval(rule) is False:\n                        raise ValueError(\n                            'For {} category, the value of {} is not valid, please supply a valid value'.format(\n                                category_name, item_name))\n            # check if category_name is already in storage\n            category_val_storage = await self._read_category_val(category_name)\n            if category_val_storage is None:\n                await self._create_new_category(category_name, category_val_prepared, category_description,\n                                                display_name)\n            else:\n                # validate category_val from storage, do not set \"value\" from default, reuse from storage value\n                try:\n                    category_val_storage = await self._validate_category_val(category_name, category_val_storage, False)\n                # if validating category from storage fails, nothing to salvage from storage, use new completely\n                except:\n              
      _logger.exception(\n                        'category_value for category_name %s from storage is corrupted; using category_value without merge',\n                        category_name)\n                # if validating category from storage succeeds, merge new and storage\n                else:\n                    all_categories = await self._read_all_category_names()\n                    for c in all_categories:\n                        if c[0] == category_name:\n                            display_name_storage = c[2]\n                            break\n                    if display_name is None:\n                        display_name = display_name_storage\n\n                    category_val_prepared = await self._merge_category_vals(category_val_prepared, category_val_storage,\n                                                                            keep_original_items, category_name)\n                    if json.dumps(category_val_prepared, sort_keys=True) == json.dumps(category_val_storage,\n                                                                                       sort_keys=True):\n                        if display_name_storage == display_name:\n                            return\n\n                        await self._update_category(category_name, category_val_prepared, category_description,\n                                                    display_name)\n                    else:\n                        await self._update_category(category_name, category_val_prepared, category_description,\n                                                    display_name)\n                        diff = common_utils.dict_difference(category_val_prepared, category_val_storage)\n                        if diff:\n                            audit = AuditLogger(self._storage)\n                            # Mask password values for audit trail security\n                            masked_old_value = self._mask_password_value(category_val_storage)\n      
                      masked_new_value = self._mask_password_value(category_val_prepared)\n                            audit_details = {\n                                'category': category_name,\n                                'item': \"configurationChange\",\n                                'oldValue': masked_old_value,\n                                'newValue': masked_new_value\n                            }\n                            await audit.information('CONCH', audit_details)\n            is_acl, config_item, found_cat_name, found_value = await \\\n                self.search_for_ACL_recursive_from_cat_name(category_name)\n            _logger.debug(\"ACL lookup result {} in create_category for category {}\".format(is_acl,\n                                                                                           category_name))\n            if is_acl and found_value and found_value != \"\":\n                await self._acl_handler.handle_create_for_acl_usage(found_cat_name, found_value, \"service\")\n        except:\n            _logger.exception(\n                'Unable to create new category based on category_name %s and category_description %s and category_json_schema %s',\n                category_name, category_description, category_val_prepared)\n            raise\n        try:\n            await self._run_callbacks(category_name)\n        except:\n            _logger.exception(\n                'Unable to run callbacks for category_name %s', category_name)\n            raise\n        return None\n\n    async def _read_all_child_category_names(self, category_name):\n        _children = []\n        payload = PayloadBuilder().SELECT(\"parent\", \"child\").WHERE([\"parent\", \"=\", category_name]).ORDER_BY(\n            [\"id\"]).payload()\n        results = await self._storage.query_tbl_with_payload('category_children', payload)\n        for row in results['rows']:\n            _children.append(row)\n\n        return 
_children\n\n    async def _read_child_info(self, child_list):\n        info = []\n        for item in child_list:\n            payload = PayloadBuilder().SELECT(\"key\", \"description\", \"display_name\").WHERE(\n                [\"key\", \"=\", item['child']]).payload()\n            results = await self._storage.query_tbl_with_payload('configuration', payload)\n            for row in results['rows']:\n                info.append(row)\n\n        return info\n\n    async def _create_child(self, category_name, child):\n        # FIXME: Handle the case where re-creating the same data throws a UNIQUE constraint failure\n        try:\n            payload = PayloadBuilder().INSERT(parent=category_name, child=child).payload()\n            result = await self._storage.insert_into_tbl(\"category_children\", payload)\n            response = result['response']\n        except KeyError:\n            raise ValueError(result['message'])\n        except StorageServerError as ex:\n            err_response = ex.error\n            raise ValueError(err_response)\n\n        return response\n\n    async def get_category_child(self, category_name):\n        \"\"\"Get the list of categories that are children of a given category.\n\n        Keyword Arguments:\n        category_name -- name of the category (required)\n\n        Return Values:\n        JSON\n        \"\"\"\n        category = await self._read_category_val(category_name)\n        if category is None:\n            raise ValueError('No such {} category exists'.format(category_name))\n\n        try:\n            child_cat_names = await self._read_all_child_category_names(category_name)\n            children = await self._read_child_info(child_cat_names)\n            return [{\"key\": c['key'], \"description\": c['description'], \"displayName\": c['display_name']} for c in\n                    children]\n        except:\n            _logger.exception(\n                'Unable to read all child category names')\n            raise\n\n    
async def create_child_category(self, category_name, children):\n        \"\"\"Create a new child category in the database.\n\n        Keyword Arguments:\n        category_name -- name of the category (required)\n        children -- an array of child categories\n\n        Return Values:\n        JSON\n        \"\"\"\n\n        def diff(lst1, lst2):\n            return [v for v in lst2 if v not in lst1]\n\n        if not isinstance(category_name, str):\n            raise TypeError('category_name must be a string')\n\n        if not isinstance(children, list):\n            raise TypeError('children must be a list')\n\n        try:\n            category = await self._read_category_val(category_name)\n            if category is None:\n                raise ValueError('No such {} category exists'.format(category_name))\n\n            for child in children:\n                category = await self._read_category_val(child)\n                if category is None:\n                    raise ValueError('No such {} child exists'.format(child))\n\n            # Read children from storage\n            _existing_children = await self._read_all_child_category_names(category_name)\n            children_from_storage = [item['child'] for item in _existing_children]\n            # Diff in existing children and requested children\n            new_children = diff(children_from_storage, children)\n            for a_new_child in new_children:\n                result = await self._create_child(category_name, a_new_child)\n                children_from_storage.append(a_new_child)\n\n            try:\n                # If there is a diff then call the create callback\n                if len(new_children):\n                    await self._run_callbacks_child(category_name, children, \"c\")\n            except:\n                _logger.exception(\n                    'Unable to run callbacks for child category_name %s', children)\n                raise\n\n            is_acl_parent, config_item, found_cat_name, found_value = await self.search_for_ACL_recursive_from_cat_name(\n                category_name)\n            _logger.debug(\"ACL lookup result {} for parent category {}\".format(is_acl_parent, category_name))\n            if is_acl_parent and found_value and found_value != \"\":\n                await self._acl_handler.handle_create_for_acl_usage(found_cat_name, found_value, \"service\")\n            for new_child in new_children:\n                is_acl_child, config_item, found_cat_name, found_value = await self.search_for_ACL_recursive_from_cat_name(\n                    new_child)\n                _logger.debug(\"ACL lookup result {} for child category {}\".format(is_acl_child, new_child))\n                if is_acl_child and found_value and found_value != \"\":\n                    await self._acl_handler.handle_create_for_acl_usage(found_cat_name, found_value, \"service\")\n\n            return {\"children\": children_from_storage}\n\n            # TODO: [TO BE DECIDED] - Audit Trail Entry\n        except KeyError:\n            raise ValueError(result['message'])\n\n    async def delete_child_category(self, category_name, child_category):\n        \"\"\"Delete a parent-child relationship\n\n        Keyword Arguments:\n        category_name -- name of the category (required)\n        child_category -- child name\n\n        Return Values:\n        JSON\n        \"\"\"\n\n        if not isinstance(category_name, str):\n            raise TypeError('category_name must be a string')\n\n        if not isinstance(child_category, str):\n            raise TypeError('child_category must be a string')\n\n        category = await self._read_category_val(category_name)\n        if category is None:\n            raise ValueError('No such {} category exists'.format(category_name))\n\n        child = await self._read_category_val(child_category)\n        if child is None:\n            raise ValueError('No such {} child exists'.format(child_category))\n\n        _children = []\n        try:\n            payload = PayloadBuilder().WHERE([\"parent\", \"=\", category_name]).AND_WHERE(\n                [\"child\", \"=\", child_category]).payload()\n            result = await self._storage.delete_from_tbl(\"category_children\", payload)\n\n            if result['response'] == 'deleted':\n                child_dict = await self._read_all_child_category_names(category_name)\n                for item in child_dict:\n                    _children.append(item['child'])\n\n            # TODO: Shall we write audit trail code entry here? log_code?\n\n        except KeyError:\n            raise ValueError(result['message'])\n        except StorageServerError as ex:\n            err_response = ex.error\n            raise ValueError(err_response)\n\n        try:\n            await self._run_callbacks_child(category_name, child_category, \"d\")\n        except:\n            _logger.exception('Unable to run callbacks for child category_name %s', child_category)\n            raise\n\n        return _children\n\n    async def delete_parent_category(self, category_name):\n        \"\"\"Delete a parent-child relationship for a parent\n\n        Keyword Arguments:\n        category_name -- name of the category (required)\n\n        Return Values:\n        JSON\n        \"\"\"\n        if not isinstance(category_name, str):\n            raise TypeError('category_name must be a string')\n\n        category = await self._read_category_val(category_name)\n        if category is None:\n            raise ValueError('No such {} category exists'.format(category_name))\n\n        try:\n            payload = PayloadBuilder().WHERE([\"parent\", \"=\", category_name]).payload()\n            result = await self._storage.delete_from_tbl(\"category_children\", payload)\n            response = result[\"response\"]\n            # TODO: Shall we write audit trail code entry here? 
log_code?\n\n        except KeyError:\n            raise ValueError(result['message'])\n        except StorageServerError as ex:\n            err_response = ex.error\n            raise ValueError(err_response)\n\n        return result\n\n    async def delete_category_and_children_recursively(self, category_name):\n        \"\"\"Recursively delete a category and its children along with their parent-child relationships\n        Keyword Arguments:\n        category_name -- name of the category (required)\n        Return Values:\n        JSON\n        \"\"\"\n        if not isinstance(category_name, str):\n            raise TypeError('category_name must be a string')\n        category = await self._read_category_val(category_name)\n\n        if category is None:\n            raise ValueError('No such {} category exists'.format(category_name))\n        catg_descendents = await self._fetch_descendents(category_name)\n\n        for catg in RESERVED_CATG:\n            if catg in catg_descendents:\n                raise ValueError(\n                    'Reserved category found in descendents of {} - {}'.format(category_name, catg_descendents))\n        try:\n            result = await self._delete_recursively(category_name)\n\n        except ValueError:\n            raise\n        else:\n            return result[category_name]\n\n    async def _fetch_descendents(self, cat):\n        children = await self._read_all_child_category_names(cat)\n        descendents = []\n        for row in children:\n            child = row['child']\n            descendents.append(child)\n            child_descendents = await self._fetch_descendents(child)\n            descendents.extend(child_descendents)\n        return descendents\n\n    async def _delete_recursively(self, cat):\n        try:\n            children = await self._read_all_child_category_names(cat)\n            for row in children:\n                child = row['child']\n                await 
self._delete_recursively(child)\n\n            is_acl, _, found_cat_name, acl_value = await self.search_for_ACL_single(cat)\n            if is_acl:\n                await self._acl_handler.handle_delete_for_acl_usage(found_cat_name,\n                                                                    acl_value,\n                                                                    \"service\",\n                                                                    notify_service=False)\n            # Remove cat as child from parent-child relation.\n            payload = PayloadBuilder().WHERE([\"child\", \"=\", cat]).payload()\n            result = await self._storage.delete_from_tbl(\"category_children\", payload)\n            if result['response'] == 'deleted':\n                _logger.info('Removed {} as child in category_children'.format(cat))\n\n            # Remove category.\n            payload = PayloadBuilder().WHERE([\"key\", \"=\", cat]).payload()\n            result = await self._storage.delete_from_tbl(\"configuration\", payload)\n            if result['response'] == 'deleted':\n                _logger.info('Deleted category from configuration: {}'.format(cat))\n                audit = AuditLogger(self._storage)\n                audit_details = {'categoryDeleted': cat}\n                # FIXME: FOGL-2140\n                await audit.information('CONCH', audit_details)\n\n            # delete_category_script_files would be a more accurate name today, but more category-related\n            # cleanup may be needed in future; extend this method's definition as required\n            self.delete_category_related_things(cat)\n\n            # Remove cat from cache\n            if cat in self._cacheManager.cache:\n                self._cacheManager.remove(cat)\n\n        except KeyError as ex:\n            raise ValueError(ex)\n        except StorageServerError as ex:\n            err_response = ex.error\n            raise ValueError(err_response)\n        else:\n            return {cat: result}\n\n    def delete_category_related_things(self, category_name):\n        \"\"\" On delete category request\n\n        - Delete category related files\n\n        :param category_name:\n        :return:\n        \"\"\"\n        import glob\n        uploaded_scripts_dir = '{}/data/scripts/'.format(_FLEDGE_ROOT)\n        if _FLEDGE_DATA:\n            uploaded_scripts_dir = '{}/scripts/'.format(_FLEDGE_DATA)\n        files = \"{}{}_*\".format(uploaded_scripts_dir, category_name.lower())\n        try:\n            for f in glob.glob(files):\n                _logger.info(\"Removing file %s for category %s\", f, category_name)\n                os.remove(f)\n        except Exception as ex:\n            _logger.error('Failed to delete file(s) for category {}: {}'.format(category_name, str(ex)))\n            # raise ex\n\n    def register_interest_child(self, category_name, callback):\n        \"\"\"Registers an interest in any changes to the category_value associated with category_name\n\n        Keyword Arguments:\n        category_name -- name of the category_name of interest (required)\n        callback -- module with implementation of async method run(category_name) to be called when change is made to category_value\n\n        Return Values:\n        None\n\n        Side Effects:\n        Registers an interest in any changes to the category_value of a given category_name.\n        This interest is maintained in memory only, 
and not persisted in storage.\n\n        Restrictions and Usage:\n        A particular category_name may have multiple registered interests, aka multiple callbacks associated with a single category_name.\n        One or more category_names may use the same callback when a change is made to the corresponding category_value.\n        User must implement the callback code.\n        For example, if a callback is 'fledge.callback', then user must implement fledge/callback.py module with method run(category_name).\n        A callback is only called if the corresponding category_value is created or updated.\n        A callback is not called if the corresponding category_description is updated.\n        A change in configuration is not rolled back if callbacks fail.\n        \"\"\"\n\n        if category_name is None:\n            raise ValueError('Failed to register interest. category_name cannot be None')\n        if callback is None:\n            raise ValueError('Failed to register interest. 
callback cannot be None')\n        if self._registered_interests_child.get(category_name) is None:\n            self._registered_interests_child[category_name] = {callback}\n        else:\n            self._registered_interests_child[category_name].add(callback)\n\n    def register_interest(self, category_name, callback):\n        \"\"\"Registers an interest in any changes to the category_value associated with category_name\n\n        Keyword Arguments:\n        category_name -- name of the category_name of interest (required)\n        callback -- module with implementation of async method run(category_name) to be called when change is made to category_value\n\n        Return Values:\n        None\n\n        Side Effects:\n        Registers an interest in any changes to the category_value of a given category_name.\n        This interest is maintained in memory only, and not persisted in storage.\n\n        Restrictions and Usage:\n        A particular category_name may have multiple registered interests, aka multiple callbacks associated with a single category_name.\n        One or more category_names may use the same callback when a change is made to the corresponding category_value.\n        User must implement the callback code.\n        For example, if a callback is 'fledge.callback', then user must implement fledge/callback.py module with method run(category_name).\n        A callback is only called if the corresponding category_value is created or updated.\n        A callback is not called if the corresponding category_description is updated.\n        A change in configuration is not rolled back if callbacks fail.\n        \"\"\"\n        if category_name is None:\n            raise ValueError('Failed to register interest. category_name cannot be None')\n        if callback is None:\n            raise ValueError('Failed to register interest. 
callback cannot be None')\n        if self._registered_interests.get(category_name) is None:\n            self._registered_interests[category_name] = {callback}\n        else:\n            self._registered_interests[category_name].add(callback)\n\n    def unregister_interest(self, category_name, callback):\n        \"\"\"Unregisters an interest in any changes to the category_value associated with category_name\n\n        Keyword Arguments:\n        category_name -- name of the category_name of interest (required)\n        callback -- module with implementation of async method run(category_name) to be called when change is made to category_value\n\n        Return Values:\n        None\n\n        Side Effects:\n        Unregisters an interest in any changes to the category_value of a given category_name with the associated callback.\n        This interest is maintained in memory only, and not persisted in storage.\n\n        Restrictions and Usage:\n        A particular category_name may have multiple registered interests, aka multiple callbacks associated with a single category_name.\n        One or more category_names may use the same callback when a change is made to the corresponding category_value.\n        \"\"\"\n        if category_name is None:\n            raise ValueError('Failed to unregister interest. category_name cannot be None')\n        if callback is None:\n            raise ValueError('Failed to unregister interest. 
callback cannot be None')\n        if self._registered_interests.get(category_name) is not None:\n            if callback in self._registered_interests[category_name]:\n                self._registered_interests[category_name].discard(callback)\n                if len(self._registered_interests[category_name]) == 0:\n                    del self._registered_interests[category_name]\n\n    def _validate_type_value(self, _type, _value):\n        # TODO: Not implemented for password and X509 certificate type\n        def _str_to_bool(item_val):\n            return item_val.lower() in (\"true\", \"false\")\n\n        def _str_to_int(item_val):\n            try:\n                _value = int(item_val)\n            except ValueError:\n                return False\n            else:\n                return True\n\n        def _str_to_float(item_val):\n            try:\n                _value = float(item_val)\n            except ValueError:\n                return False\n            else:\n                return True\n\n        def _str_to_ipaddress(item_val):\n            try:\n                return ipaddress.ip_address(item_val)\n            except ValueError:\n                return False\n\n        if _type == 'boolean':\n            return _str_to_bool(_value)\n        elif _type in ('integer', 'listSize'):\n            return _str_to_int(_value)\n        elif _type == 'float':\n            return _str_to_float(_value)\n        elif _type == 'JSON':\n            if isinstance(_value, dict):\n                return True\n            return Utils.is_json(_value)\n        elif _type == 'IPv4' or _type == 'IPv6':\n            return _str_to_ipaddress(_value)\n        elif _type == 'URL':\n            try:\n                result = urlparse(_value)\n                return True if all([result.scheme, result.netloc]) else False\n            except:\n                return False\n        elif _type == 'string' or _type == 'northTask':\n            return isinstance(_value, 
str)\n\n    def _mask_password_value(self, item_type_or_category, value_or_none=None):\n        \"\"\"Mask password values for audit trail security\n\n        Args:\n            item_type_or_category: Either a string item type (e.g., 'password') or a category dict\n            value_or_none: The value to mask (when first arg is item type), or None (when first arg is category)\n\n        Returns:\n            Masked value (string) or masked category configuration (dict)\n        \"\"\"\n        # Handle individual item masking (backward compatibility)\n        if isinstance(item_type_or_category, str) and value_or_none is not None:\n            if item_type_or_category == 'password':\n                return '****'\n            return value_or_none\n\n        # Handle category configuration masking\n        category_val = item_type_or_category\n        if not isinstance(category_val, dict):\n            return category_val\n\n        masked_category_val = copy.deepcopy(category_val)\n        for item_name, item_val in masked_category_val.items():\n            if isinstance(item_val, dict) and 'type' in item_val and item_val['type'] == 'password':\n                if 'value' in item_val:\n                    masked_category_val[item_name]['value'] = '****'\n                if 'default' in item_val:\n                    masked_category_val[item_name]['default'] = '****'\n        return masked_category_val\n\n    def _clean(self, storage_val, item_val) -> str:\n        # For optional attributes\n        if isinstance(storage_val, str):\n            return item_val.lower() if storage_val == 'boolean' else item_val\n        # For required attributes\n        if storage_val['type'] == 'boolean':\n            return item_val.lower()\n        elif storage_val['type'] == 'float':\n            return str(float(item_val))\n        elif storage_val.get('items') == 'object':\n            if storage_val.get('type') == 'list':\n                # Convert string to list\n             
   data_list = json.loads(item_val)\n                if isinstance(data_list, list):\n                    # Remove duplicate objects\n                    new_item_val = []\n                    seen = set()\n                    for item in data_list:\n                        item_frozenset = frozenset(item.items())\n                        if item_frozenset not in seen:\n                            new_item_val.append(item)\n                            seen.add(item_frozenset)\n                    return json.dumps(new_item_val)\n            elif storage_val.get('type') == 'kvlist':\n                # Remove duplicate objects\n                new_item_val = json.loads(item_val)\n                return json.dumps(new_item_val)\n\n        return item_val\n\n    def _handle_script_type(self, category_name, category_value):\n        \"\"\"For the given category, check for config item of type script “unhexlify” the value stored in database\n        and add “file” attribute on the fly\n\n        Keyword Arguments:\n        category_name -- name of the category\n        category_value -- category value\n\n        Return Values:\n        JSON\n        \"\"\"\n        import glob\n        cat_value = copy.deepcopy(category_value)\n        for k, v in cat_value.items():\n            if v['type'] == 'script':\n                try:\n                    # cat_value[k][\"file\"] = \"\"\n                    if v['value'] is not None and v['value'] != \"\":\n                        cat_value[k][\"value\"] = binascii.unhexlify(v['value'].encode('utf-8')).decode(\"utf-8\")\n                except binascii.Error:\n                    pass\n                except Exception as e:\n                    _logger.warning(\n                        \"Got an issue while decoding config item: {} | {}\".format(cat_value[k], str(e)))\n                    pass\n\n                script_dir = _FLEDGE_DATA + '/scripts/' if _FLEDGE_DATA else _FLEDGE_ROOT + \"/data/scripts/\"\n                
prefix_file_name = category_name.lower() + \"_\" + k.lower() + \"_\"\n                if not os.path.exists(script_dir):\n                    os.makedirs(script_dir)\n                else:\n                    # find pattern with file_name\n                    list_of_files = glob.glob(script_dir + prefix_file_name + \"*.py\")\n                    if list_of_files:\n                        # get latest modified file\n                        latest_file = max(list_of_files, key=os.path.getmtime)\n                        cat_value[k][\"file\"] = latest_file\n        return cat_value\n\n    def _validate_value_per_optional_attribute(self, item_name, storage_value_entry, new_value_entry):\n        # FIXME: Logically below exception throw as ValueError; TypeError used ONLY to get right HTTP status code returned from API endpoint.\n        # As we used same defs for optional attribute value & config item value save\n        def in_range(n, start, end):\n            return start <= n <= end  # start and end inclusive\n\n        def _validate_length(val):\n            if 'length' in storage_value_entry:\n                if len(val) > int(storage_value_entry['length']):\n                    raise TypeError('For config item {} you cannot set the new value, beyond the length {}'.format(\n                        item_name, storage_value_entry['length']))\n\n        def _validate_min_max(_type, val):\n            if 'minimum' in storage_value_entry and 'maximum' in storage_value_entry:\n                if _type == 'integer':\n                    _new_value = int(val)\n                    _min_value = int(storage_value_entry['minimum'])\n                    _max_value = int(storage_value_entry['maximum'])\n                else:\n                    _new_value = float(val)\n                    _min_value = float(storage_value_entry['minimum'])\n                    _max_value = float(storage_value_entry['maximum'])\n\n                if not in_range(_new_value, _min_value, 
_max_value):\n                    raise TypeError('For config item {} you cannot set the new value, beyond the range ({},{})'.format(\n                        item_name, storage_value_entry['minimum'], storage_value_entry['maximum']))\n            elif 'minimum' in storage_value_entry:\n                if _type == 'integer':\n                    _new_value = int(val)\n                    _min_value = int(storage_value_entry['minimum'])\n                else:\n                    _new_value = float(val)\n                    _min_value = float(storage_value_entry['minimum'])\n                if _new_value < _min_value:\n                    raise TypeError('For config item {} you cannot set the new value, below {}'.format(item_name,\n                                                                                                       _min_value))\n            elif 'maximum' in storage_value_entry:\n                if _type == 'integer':\n                    _new_value = int(val)\n                    _max_value = int(storage_value_entry['maximum'])\n                else:\n                    _new_value = float(val)\n                    _max_value = float(storage_value_entry['maximum'])\n                if _new_value > _max_value:\n                    raise TypeError('For config item {} you cannot set the new value, above {}'.format(item_name,\n                                                                                                       _max_value))\n\n        config_item_type = storage_value_entry['type']\n        if config_item_type == 'string':\n            _validate_length(new_value_entry)\n\n        if config_item_type == 'integer' or config_item_type == 'float':\n            _validate_min_max(config_item_type, new_value_entry)\n\n        if config_item_type in (\"list\", \"kvlist\"):\n            if storage_value_entry['items'] not in ('object', 'enumeration'):\n                msg = \"array\" if config_item_type == 'list' else \"KV pair\"\n              
  try:\n                    eval_new_val = ast.literal_eval(new_value_entry)\n                except:\n                    raise TypeError(\"For config item {} value should be passed {} list in string format\".format(\n                        item_name, msg))\n\n                if config_item_type == 'list':\n                    if len(eval_new_val) > len(set(eval_new_val)):\n                        raise ValueError(\"For config item {} elements are not unique\".format(item_name))\n                else:\n                    if isinstance(eval_new_val, dict) and eval_new_val:\n                        nv = new_value_entry.replace(\"{\", \"\")\n                        unique_list = []\n                        for pair in nv.split(','):\n                            if pair:\n                                k, v = pair.split(':')\n                                ks = k.strip()\n                                if ks not in unique_list:\n                                    unique_list.append(ks)\n                                else:\n                                    raise TypeError(\"For config item {} duplicate KV pair found\".format(item_name))\n                            else:\n                                raise TypeError(\"For config item {} KV pair invalid\".format(item_name))\n                if 'listSize' in storage_value_entry:\n                    list_size = int(storage_value_entry['listSize'])\n                    if list_size >= 0:\n                        if len(eval_new_val) > list_size:\n                            raise TypeError(\"For config item {} value {} list size limit to {}\".format(\n                                item_name, msg, list_size))\n                type_mismatched_message = \"For config item {} all elements should be of same {} type\".format(\n                    item_name, storage_value_entry['items'])\n                type_check = str\n                if storage_value_entry['items'] == 'integer':\n                    type_check 
= int\n                elif storage_value_entry['items'] == 'float':\n                    type_check = float\n\n                if config_item_type == 'kvlist':\n                    if not isinstance(eval_new_val, dict):\n                        raise TypeError(\"For config item {} KV pair invalid\".format(item_name))\n                    for k, v in eval_new_val.items():\n                        try:\n                            eval_s = v\n                            if storage_value_entry['items'] in (\"integer\", \"float\"):\n                                eval_s = ast.literal_eval(v)\n                                _validate_min_max(storage_value_entry['items'], eval_s)\n                            elif storage_value_entry['items'] == 'string':\n                                _validate_length(eval_s)\n                        except TypeError as err:\n                            raise ValueError(err)\n                        except:\n                            raise ValueError(type_mismatched_message)\n                        if not isinstance(eval_s, type_check):\n                            raise ValueError(type_mismatched_message)\n                else:\n                    for s in eval_new_val:\n                        try:\n                            eval_s = s\n                            if storage_value_entry['items'] in (\"integer\", \"float\"):\n                                eval_s = ast.literal_eval(s)\n                                _validate_min_max(storage_value_entry['items'], eval_s)\n                            elif storage_value_entry['items'] == 'string':\n                                _validate_length(eval_s)\n                        except TypeError as err:\n                            raise ValueError(err)\n                        except:\n                            raise ValueError(type_mismatched_message)\n                        if not isinstance(eval_s, type_check):\n                            raise 
ValueError(type_mismatched_message)\n\n    def _handle_config_items(self, cat_name: str, cat_value: dict) -> None:\n        \"\"\" Update value in config items for a category which are required without restart of Fledge \"\"\"\n        if cat_name == 'CONFIGURATION':\n            if 'cacheSize' in cat_value:\n                self._cacheManager.max_cache_size = int(cat_value['cacheSize']['value'])\n        elif cat_name == 'firewall':\n            from fledge.services.core.firewall import Firewall\n            Firewall.IPAddresses.save(data=cat_value)\n\n    async def _check_updates_by_role(self, request: aiohttp.web_request.Request) -> str:\n        async def get_role_name():\n            from fledge.services.core.user_model import User\n            name = await User.Objects.get_role_name_by_id(request.user['role_id'])\n            if name is None:\n                raise ValueError(\"Requesting user's role is not matched with any existing roles.\")\n            return name\n\n        role_name = \"\"\n        if request is not None:\n            if hasattr(request, \"user_is_admin\"):\n                if not request.user_is_admin:\n                    role_name = await get_role_name()\n        return role_name\n\n    def _check_permissions(self, request: aiohttp.web_request.Request, cat_info: str, role_name: str) -> None:\n        if request is not None:\n            if hasattr(request, \"user_is_admin\"):\n                if not request.user_is_admin:\n                    if 'permissions' in cat_info:\n                        if not (role_name in cat_info['permissions']):\n                            raise Exception('Forbidden')\n\n"
  },
  {
    "path": "python/fledge/common/iprpc.py",
"content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" interprocess rpc \"\"\"\n\n\n__author__ = \"Douglas Orr\"\n__copyright__ = \"Copyright (c) 2020 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nimport os\nimport sys\nimport io\nimport subprocess\nimport threading\nimport pickle\nimport tempfile\nimport time\nimport logging\nimport mmap\n\n\nsys.path.append(os.path.dirname(__file__))\nDEBUG_RPC = os.environ.get('DEBUG_RPC', \"false\").lower() == \"true\"\n\ntry:\n    from fledge.common import logger\nexcept ImportError:\n    # if invoked outside of fledge, fake up a logger environment\n    class Logger:\n        def __init__(self):\n            self.CONSOLE = 0\n            self.SYSLOG = 1\n            self.default_destination = self.CONSOLE\n\n        def setup(self, name, level):\n            _logger = logging.getLogger(name)\n            _logger.setLevel(level)\n            return _logger\n    logger = Logger()\n\n_LOGGER = logger.setup(__name__, level=logging.INFO)\n\n\ndef eprint(*args, **kwargs):\n    # kwargs must be double-starred so keyword arguments are forwarded, not unpacked as positional keys\n    print(*args, **kwargs, file=sys.stderr)\n\n\ndef is_server_process():\n    # temp backward compatibility: if we have a default\n    # destination of SYSLOG, we are a server process\n    return not hasattr(logger, 'default_destination') or \\\n        (logger.default_destination == logger.SYSLOG)\n#\n#\n# IPC overview:\n#   The basic structure we have is a client which spawns a server in another process, then\n# does simple ipc to send procedure calls from client to server, then results back to the client\n#\n# InterProcessRPC is spawned with a pipe to the client. Writes and reads on the pipe are used to\n# synchronize work that is done in the server. Servers are required to be single threaded.\n#\n# The basic mechanism is to send a newline-terminated \"length\" to the server which signals\n# new method/arguments are available. 
(In a pure-pipe implementation, the length precedes\n# a \"length\"-sized encoded dict. In mapped file implementation, the \"length\" is merely\n# used for signaling.)\n#\n# Arguments are sent in an encoded dict: {'method': <method-name>, 'args': <list of arguments>}\n#\n# In the mapped file implementation, there is a tmpfile of ARGFILE_SIZE mapped into client and\n# server where the arguments are written using pickle. Results are written back the same way.\n#\n# For the process receiving the results, the length indicates a couple of special things:\n# >0 -> standard dict\n# =0 -> None\n# <0 -> exception, which is re-constituted and re-raised, so the client receives it\n#\n# Argfile is a temp file (mkstemp) which is created by the client. The name is sent\n# to the server as the first pipe message; it is opened and unlinked once opened so\n# that it will disappear on close.\n#\n# The InterProcessRPCClient class will invoke a subprocess to create the server. Special\n# care is taken to read stderr if the client is a server, so that error messages written to\n# stderr (usually before the server starts) aren't lost.\n#\n# The IPCModuleClient derives from InterProcessRPCClient and  specifically\n# invokes a named python module as a server.\n\n\nARGFILE_SIZE = 1024*1024*20\n\n\nclass InterProcessRPC:\n    def __init__(self,\n                 infd=io.BufferedReader(io.FileIO(os.dup(sys.stdin.fileno()))),\n                 outfd=io.BufferedWriter(io.FileIO(os.dup(sys.stdout.fileno()), mode='w')),\n                 errfd=sys.stderr,\n                 name=\"\",\n                 argfile_fd=None):\n        self.infd = infd    # for direct i/o between client/server\n        self.outfd = outfd\n\n        self.errfd = errfd\n        self.name = name\n\n        if argfile_fd is None:\n            # server\n            # close 0/1/2 in case client is trying to do i/o on them. 
use stderr for output\n            os.close(0)\n            os.close(1)\n            os.dup2(2, 1)\n\n            # special protocol for server process: first line read is the name of our mapped arg file\n            _argfile_name = self.infd.readline()[:-1].decode('utf-8')\n            self.argfile_fd = os.open(_argfile_name, os.O_RDWR)\n            os.unlink(_argfile_name)  # delete on close\n        else:\n            # client process opens the file then passes it up to superclass\n            self.argfile_fd = argfile_fd\n\n        self.mfile = mmap.mmap(self.argfile_fd, ARGFILE_SIZE)  # assume input > output\n\n    def call(self, rpcobj):\n        \"\"\" call - local instance of rpc call \"\"\"\n        if not ('method' in rpcobj and 'args' in rpcobj):\n            _LOGGER.error(\"invalid rpc object missing fields {}\".format(str(rpcobj.keys())))\n            raise ValueError\n\n        _method = getattr(self, rpcobj['method'])\n        _args = rpcobj['args']\n        return _method(*_args)\n\n    def rpc_read(self):\n        \"\"\" rpc_read - read len/buf from the remote host\n        protocol:\n        each call is preceded by an ascii - <len>\\n\n          len == '' : EOF from remote side\n          len > 0   : len sized buffer with json method + args follows\n          len == 0  : None\n          len < 0   : len sized buffer with named exception plus arg follows\n        Returns:\n            json dict if method or exception\n            None if len == 0\n        Raises:\n            EOFError if pipe closes\n        \"\"\"\n\n        # protocol: pipe produces a length of next object\n        _len = self.infd.readline()  # assume small enough to not deadlock\n        self.mfile.seek(0)\n\n        if _len == b'':\n            # closed fd on one side or the other of the pipe\n            raise EOFError\n\n        _len = int(_len)\n        if _len > 0:\n            # len > 0 -> json object\n\n            # eprint(\"read >0\")\n            obj = 
pickle.loads(self.mfile)\n            return obj\n\n        elif _len < 0:\n            # _len < 0 -> Exception\n            _ex = pickle.loads(self.mfile)\n\n            # reconstitute the exception, pass server exception through locally\n            _ex_class, _ex_msg = _ex['class'], _ex['message']\n            _builtins = globals()['__builtins__']\n            if _ex_class in _builtins:\n                raise _builtins[_ex_class](_ex_msg)\n\n            # raise unknown exception\n            _LOGGER.warning(\"unknown local exception {}\".format(_ex_class))\n            raise Exception(\"{}: {}\".format(_ex_class, _ex_msg))\n\n        # we fall through here when len == 0 -> None\n        return None\n\n    def rpc_write(self, obj, is_exception=False):\n        \"\"\" rpc_write -- write an rpc return value to the receiver \n        protocol:\n        each call is preceded by an ascii - <len>\\n\n          len > 0   : len sized buffer with json method + args follows\n          len == 0  : None\n          len < 0   : len sized buffer with named exception plus arg follows\n        Returns:\n        Raises:\n        \"\"\"\n\n        _lenmult = 1\n        if is_exception:\n            # replace exception object with something that can be turned into JSON\n            _class = obj.__class__.__name__\n            obj = { 'class': str(_class), 'message': str(obj) }\n            _lenmult = -1\n\n        if obj is not None:\n            # put the dict into shared memory\n            self.mfile.seek(0)\n            pickle.dump(obj, self.mfile)\n\n            # write a leading \"len\", positive (object) or negative (exception)\n            # and signal to the server there's something to do\n            _lenstr = str(_lenmult * 1)+'\\n'\n            self.outfd.write(_lenstr.encode('utf-8', 'ignore'))\n        else:\n            # no payload for None return\n            # eprint(\"rpc write none\")\n            _lenstr = '0\\n'\n            
self.outfd.write(_lenstr.encode('utf-8', 'ignore'))\n\n        self.outfd.flush()\n\n    def rpc_exception(self, ex):\n        \"\"\" exception -- write an exception object to client to represent internal error \"\"\"\n\n        # exception sent so that it will be raised in client\n        self.rpc_write(ex, is_exception=True)\n\n    def serve(self):\n        \"\"\" receive \"methods\" to invoke on infd, return results on outfd\n\n        each method is represented using (string) len followed by \"len\" worth of JSON packet\n        JSON input packet is {method: methodname, args: args}\n\n        return values and exceptions are written back on outfd in the same format;\n\n        . None returns have a zero length and are not JSON\n        . Exceptions are JSON objects with exception type and message, and have a negative length\n        . A length which is null string indicates closed channel\n        \"\"\"\n\n        while True:\n            try:\n                _obj = self.rpc_read()\n            except EOFError as ex:\n                break\n            except Exception as ex:\n                self.rpc_exception(ex)\n                continue  # no request was decoded; don't dispatch a stale or undefined _obj\n\n            try:\n                _ret = self.call(_obj)  # local \"call\" - returns json-able value; may raise\n\n            except Exception as ex:\n                if DEBUG_RPC and type(ex) not in [EOFError, SystemExit]:\n                    # give a traceback\n                    _LOGGER.exception(\"exception in rpc backend\")\n\n                # return the exception\n                self.rpc_exception(ex)\n\n                if type(ex) is SystemExit:\n                    _LOGGER.info(\"EXITING\")\n                    break\n\n            else:\n                # return the result of the call\n                self.rpc_write(_ret)\n\n        sys.exit()\n\n\nclass InterProcessRPCClient(InterProcessRPC):\n    \"\"\"\n    InterProcessRPCClient: companion to InterProcessRPC, client code that calls into a server in a separate process\n    
\"\"\"\n\n    def __init__(self, server_args, env=None):\n\n        # trap and send stderr to syslog if we are in a server process\n        _is_server = is_server_process()\n        _stderr = subprocess.PIPE if _is_server else None\n\n        # set up a big shared memory file for argument transfer\n        (_argfile_fd, _argfile_path) = tempfile.mkstemp()\n        os.pwrite(_argfile_fd, b' ', ARGFILE_SIZE-1)\n\n        p = subprocess.Popen(server_args,\n                             stdin=subprocess.PIPE, stdout=subprocess.PIPE,\n                             stderr=_stderr,\n                             env=env)\n        super().__init__(infd=p.stdout, outfd=p.stdin, errfd=p.stderr, argfile_fd=_argfile_fd)\n\n        if _is_server:\n            def log_errors(fd):\n                \"\"\" log_errors - vacuum up error output that would otherwise be lost \"\"\"\n                while True:\n                    err_str = fd.read().decode('utf-8')  # blocking read for error output\n                    if err_str == '':\n                        break\n                    _LOGGER.error(\"python module error: {}\".format(err_str))\n                return\n\n            # pull off and log errors that would disappear on stderr\n            self.log_err_thread = threading.Thread(target=log_errors, args=(self.errfd,))\n            self.log_err_thread.start()\n\n        # check for module immediate exit -- shouldn't happen\n        for _ in range(2):\n            # Check whether the process has already terminated (without using p.wait())\n            if p.poll() is not None:\n                _LOGGER.error(\"{} module load exited with return code {}\".format(server_args, p.returncode))\n                raise RuntimeError(p.returncode)\n            # Process hasn't exited yet, let's wait some\n            time.sleep(0.5)\n\n        # special protocol, now tell the server the name of the mapped arg file\n        self.outfd.write('{}\\n'.format(_argfile_path).encode('utf-8'))\n\n    def call(self, 
rpcobj):\n        \"\"\" call - rpc client writes rpc request, reads and returns the result\n        Args:\n            rpcobj : dict() with:\n            'method': name of remote method to invoke\n            'args': JSON-able list of arguments to be sent to remote object\n        Returns:\n            de-JSON-ified return result from remote execution\n        Raises:\n            Exception with appropriate message raised in remote execution (xxx -- reinstantiate exception class)\n        \"\"\"\n        self.rpc_write(rpcobj)\n        return self.rpc_read()\n\n\nclass IPCModuleClient(InterProcessRPCClient):\n    \"\"\" IPCModuleClient - specifically invoke python to create a server from a python module \"\"\"\n    def __init__(self, module_name, module_dir):\n\n        env = os.environ.copy()\n        # make sure the new environment can find modules in cwd\n        env['PYTHONPATH'] = env.get('PYTHONPATH', '') + \":\" + module_dir\n\n        _LOGGER.debug(\"STARTING module {} path={}\".format(module_name, env['PYTHONPATH']))\n        super().__init__(['python3', '-m', module_name], env=env)\n\n    def __getattr__(self, method_name):\n        \"\"\" __getattr__ - override getattr so that we can proxy function calls by name \"\"\"\n\n        # NOTE: this is a dangerous game since we are commandeering all function/slot accesses\n        # it's also not the most efficient thing in the world... but we're about to do a pipe rpc\n        # so it's probably also not the weakest link\n\n        _super = super()  # bind super outside of the lambda\n        return lambda *x: _super.call({'method': method_name, 'args': [*x]})\n"
  },
  {
    "path": "python/fledge/common/logger.py",
"content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Fledge Logger \"\"\"\nimport os\nimport subprocess\nimport logging\nimport traceback\nfrom logging.handlers import SysLogHandler\nfrom functools import wraps\nimport socket\n\n__author__ = \"Praveen Garg, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017-2023 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nSYSLOG = 0\nr\"\"\"Send log entries to /var/log/syslog\n\n- View with: ``tail -f /var/log/syslog | sed 's/#012/\\n\\t/g'``\n\n\"\"\"\nCONSOLE = 1\n\"\"\"Send log entries to STDERR\"\"\"\nFLEDGE_LOGS_DESTINATION = 'FLEDGE_LOGS_DESTINATION'\n\"\"\"Log destination environment variable\"\"\"\ndefault_destination = SYSLOG\n\"\"\"Default destination of logger\"\"\"\n\nlog_udp = os.getenv('SYSLOG_UDP_ENABLED', 'False')\n\ndef get_format() -> str:\n    \"\"\"Build the log format string\n\n    Returns:\n        str: format string prefixed with the process name (and the hostname when syslog UDP is enabled)\n    \"\"\"\n    # TODO: Consider using %r with message when using syslog .. 
\\n looks better than #\n    if log_udp.lower() == 'false':\n        fmt = '{}[%(process)d] %(levelname)s: %(module)s: %(name)s: %(message)s'.format(get_process_name())\n    else:\n        fmt = '{} {}[%(process)d] %(levelname)s: %(module)s: %(name)s: %(message)s'.format(socket.gethostname(), get_process_name())\n    return fmt\n\ndef get_syslog_handler():\n    \"\"\"Defines a syslog handler\n\n    Returns:\n         logging handler object : the syslog handler\n    \"\"\"\n    # Retrieve LOG_PORT and LOG_IP from environment variables\n    if log_udp.lower() == 'true':\n        log_port = os.getenv('SYSLOG_PORT', '5140')\n        log_ip = os.getenv('SYSLOG_IP', '127.0.0.1') \n        syslog_address = (log_ip, int(log_port))\n        syslog_handler = SysLogHandler(address=syslog_address)\n    else:\n        syslog_handler = SysLogHandler(address='/dev/log')\n\n    return syslog_handler\n\n\ndef set_default_destination(destination: int):\n    \"\"\" set_default_destination - allow a global default to be set, once, for all fledge modules\n        also, set env variable FLEDGE_LOGS_DESTINATION for communication with related, spawned\n        processes. 
(makes logging consistent for interactive stderr vs server syslog applications.) \"\"\"\n    global default_destination\n    default_destination = destination\n    os.environ[FLEDGE_LOGS_DESTINATION] = str(destination)\n\n\nif (FLEDGE_LOGS_DESTINATION in os.environ) and \\\n   os.environ[FLEDGE_LOGS_DESTINATION] in [str(CONSOLE), str(SYSLOG)]:\n    # inherit (valid) default from the environment\n    set_default_destination(int(os.environ[FLEDGE_LOGS_DESTINATION]))\n\n\ndef get_process_name() -> str:\n    # Example: ps -eaf | grep 5175 | grep -v grep | awk -F '--name=' '{print $2}'\n    pid = os.getpid()\n    cmd = \"ps -eaf | grep {} | grep -v grep | awk -F '--name=' '{{print $2}}'| tr -d '\\n'\".format(pid)\n    read_process_name = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE).stdout.readlines()\n    binary_to_string = [b.decode() for b in read_process_name]\n    pname = 'Fledge ' + binary_to_string[0] if binary_to_string else 'Fledge'\n    return pname\n\n\ndef setup(logger_name: str = None,\n          destination: int = None,\n          level: int = None,\n          propagate: bool = False) -> logging.Logger:\n    \"\"\"Configures a `logging.Logger`_ object\n\n    Once configured, a logger can also be retrieved via\n    `logging.getLogger`_\n\n    It is inefficient to call this function more than once for the same\n    logger name.\n\n    Args:\n        logger_name:\n            The name of the logger to configure. Use None (the default)\n            to configure the root logger.\n\n        level:\n            The `logging level`_ to use when filtering log entries.\n            Use None to maintain the default level\n\n        propagate:\n            Whether to send log entries to ancestor loggers. 
Defaults to False.\n\n        destination:\n            - SYSLOG: (the default) Send messages to syslog.\n                - View with: ``tail -f /var/log/syslog | sed 's/#012/\\n\\t/g'``\n            - CONSOLE: Send message to stderr\n\n    Returns:\n        A `logging.Logger`_ object\n\n    .. _logging.Logger: https://docs.python.org/3/library/logging.html#logging.Logger\n\n    .. _logging level: https://docs.python.org/3/library/logging.html#levels\n\n    .. _logging.getLogger: https://docs.python.org/3/library/logging.html#logging.getLogger\n    \"\"\"\n    logger = logging.getLogger(logger_name)\n\n    # if no destination is set, use the fledge default\n    if destination is None:\n        destination = default_destination\n\n    if destination == SYSLOG:\n        handler = get_syslog_handler()\n    elif destination == CONSOLE:\n        handler = logging.StreamHandler()  # stderr\n    else:\n        raise ValueError(\"Invalid destination {}\".format(destination))\n\n    formatter = logging.Formatter(fmt=get_format())\n    handler.setFormatter(formatter)\n    if level is not None:\n        logger.setLevel(level)\n    logger.addHandler(handler)\n    logger.propagate = propagate\n\n    # Call error override\n    error_override(logger)\n    return logger\n\n\ndef error_override(_logger: logging.Logger) -> None:\n    \"\"\"Override error logger to print multi-line traceback and error string with newline\n    Args:\n        _logger: Logger Object\n\n    Returns:\n        None\n    \"\"\"\n    # save the old logging.error function\n    __logging_error = _logger.error\n\n    @wraps(_logger.error)\n    def error(msg, *args, **kwargs):\n        if isinstance(msg, Exception):\n            \"\"\"For example:\n                a) _logger.error(ex)\n                b) _logger.error(ex, \"Failed to add data.\")\n            \"\"\"\n            trace_msg = traceback.format_exception(msg.__class__, msg, msg.__traceback__)\n            if args:\n                trace_msg[:0] = 
[\"{}\\n\".format(args[0])]\n            [__logging_error(line.strip('\\n')) for line in trace_msg]\n        else:\n            if isinstance(msg, str):\n                \"\"\"For example:\n                    a) _logger.error(str(ex))\n                    b) _logger.error(\"Failed to log audit trail entry\")\n                    c) _logger.error('Failed to log audit trail entry for code: %s', \"CONCH\")\n                    d) _logger.error('Failed to log audit trail entry for code: {log_code}'.format(log_code=\"CONAD\"))\n                    e) _logger.error('Failed to log audit trail entry for code: {0}'.format(\"CONAD\"))\n                    f) _logger.error(\"Failed to log audit trail entry for code '{}' \\n{}\".format(\"CONCH\", \"Next line\"))\n                \"\"\"\n                if args:\n                    msg = msg % args\n                [__logging_error(m) for m in msg.splitlines()]\n            else:\n                # Default logging error\n                __logging_error(msg)\n    # overwrite the default logging.error\n    _logger.error = error\n\n\nclass FLCoreLogger:\n    \"\"\"\n    Singleton FLCoreLogger class. This class is only instantiated ONCE. It is to keep a consistent\n    criteria for the logger throughout the application if need to be called upon.\n    It serves as the criteria for initiating logger for modules. 
It creates child loggers.\n    Note that these are child loggers, so any change made to the root logger\n    is inherited by them.\n    \"\"\"\n    _instance = None\n\n    def __new__(cls):\n        if cls._instance is None:\n            cls._instance = super().__new__(cls)\n            cls.formatter = logging.Formatter(fmt=get_format())\n        return cls._instance\n\n    def get_syslog_handler(self):\n        \"\"\"Defines a syslog handler\n\n        Returns:\n             logging handler object : the syslog handler\n        \"\"\"\n        syslog_handler = get_syslog_handler()\n        syslog_handler.setFormatter(self.formatter)\n        syslog_handler.name = \"syslogHandler\"\n        return syslog_handler\n\n    def get_console_handler(self):\n        \"\"\"Defines a console handler that writes to the console (stderr)\n\n        Returns:\n            logging handler object : the console handler\n        \"\"\"\n        console_handler = logging.StreamHandler()\n        console_handler.setFormatter(self.formatter)\n        console_handler.name = \"consoleHandler\"\n        return console_handler\n\n    def add_handlers(self, logger, handler_list: list):\n        \"\"\"Adds handlers to the logger, checks first if handlers exist to avoid\n        duplication\n\n        Args:\n            logger: Logger to check handlers\n            handler_list: list of handlers to add\n        \"\"\"\n        existing_handler_names = []\n        for existing_handler in logger.handlers:\n            existing_handler_names.append(existing_handler.name)\n\n        for new_handler in handler_list:\n            if new_handler.name not in existing_handler_names:\n                logger.addHandler(new_handler)\n\n    def get_logger(self, logger_name: str):\n        \"\"\"Generates logger for use in the modules.\n        Args:\n            logger_name: name of the logger\n\n        Returns:\n            logger: returns logger for module\n        \"\"\"\n        _logger = 
logging.getLogger(logger_name)\n        console_handler = self.get_console_handler()\n        syslog_handler = self.get_syslog_handler()\n        self.add_handlers(_logger, [syslog_handler, console_handler])\n        _logger.propagate = False\n        # Call error override\n        error_override(_logger)\n        return _logger\n\n    def set_level(self, level_name: str):\n        \"\"\"Sets the root logger level. That means all child loggers will inherit this feature from it.\n        Args:\n            level_name: logging level\n        \"\"\"\n        if level_name == 'debug':\n            log_level = logging.DEBUG\n        elif level_name == 'info':\n            log_level = logging.INFO\n        elif level_name == 'error':\n            log_level = logging.ERROR\n        elif level_name == 'critical':\n            log_level = logging.CRITICAL\n        else:\n            log_level = logging.WARNING\n        logging.root.setLevel(log_level)\n"
  },
  {
    "path": "python/fledge/common/microservice_management_client/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/common/microservice_management_client/exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Microservice Management Client Exceptions module\"\"\"\n\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass MicroserviceManagementClientError(Exception):\n\n        def __init__(self, status=None, reason=None):\n                self.status = status\n                self.reason = reason\n\n"
  },
  {
    "path": "python/fledge/common/microservice_management_client/microservice_management_client.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport http.client\nimport json\nimport urllib.parse\nimport logging\n\nfrom fledge.common import logger\nfrom fledge.common.microservice_management_client import exceptions as client_exceptions\n\n__author__ = \"Ashwin Gopalakrishnan\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = logger.setup(__name__, level=logging.INFO)\n\n\nclass MicroserviceManagementClient(object):\n    _management_client_conn = None\n\n    def __init__(self, microservice_management_host, microservice_management_port):\n        self._management_client_conn = http.client.HTTPConnection(\"{0}:{1}\".format(microservice_management_host, microservice_management_port))\n        self.hostname = microservice_management_host\n        self.port = microservice_management_port\n\n    def register_service(self, service_registration_payload):\n        \"\"\" Registers a newly created microservice with the core service\n\n        The core service will persist this information in memory rather than write it to the storage layer since it will\n        change on every run of Fledge.\n\n\n        :param service_registration_payload: A dict object describing the microservice and giving details of the\n        management interface for that microservice\n        :return: a JSON object containing the UUID of the newly registered service\n        \"\"\"\n        url = '/fledge/service'\n\n        self._management_client_conn.request(method='POST', url=url, body=json.dumps(service_registration_payload))\n        r = self._management_client_conn.getresponse()\n        if r.status in range(400, 500):\n            _logger.error(\"Client error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 
600):\n            _logger.error(\"Server error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        try:\n            response[\"id\"]\n        except (KeyError, Exception) as ex:\n            _logger.exception(ex, \"Could not register the microservice, From request {}\".format(\n                json.dumps(service_registration_payload)))\n            raise\n\n        return response\n\n    def unregister_service(self, microservice_id):\n        \"\"\" Removes the registration record for a microservice\n\n        This is usually called by the microservice itself as part of its shutdown procedure, although this may not be\n        the only time it is called. A service may unregister, do some maintenance type operation and then re-register\n        if it desires.\n\n        :param microservice_id: string UUID of microservice\n        :return: a JSON object containing the UUID of the unregistered service\n        \"\"\"\n        url = '/fledge/service/{}'.format(microservice_id)\n\n        self._management_client_conn.request(method='DELETE', url=url)\n        r = self._management_client_conn.getresponse()\n        if r.status in range(400, 500):\n            _logger.error(\"Client error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"Server error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        try:\n            response[\"id\"]\n        except (KeyError, 
Exception) as ex:\n            _logger.exception(ex, \"Could not unregister the microservice having UUID {}\".format(microservice_id))\n            raise\n\n        return response\n\n    def register_interest(self, category, microservice_id):\n        \"\"\" Register an interest of microservice in a configuration category\n\n        :param category: configuration category\n        :param microservice_id: microservice's UUID string\n        :return: A JSON object containing a registration ID for this registration\n        \"\"\"\n\n        url = '/fledge/interest'\n\n        payload = json.dumps({\"category\": category, \"service\": microservice_id}, sort_keys=True)\n        self._management_client_conn.request(method='POST', url=url, body=payload)\n        r = self._management_client_conn.getresponse()\n        if r.status in range(400, 500):\n            _logger.error(\"Client error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"Server error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        try:\n            response[\"id\"]\n        except (KeyError, Exception) as ex:\n            _logger.exception(ex, \"Could not register interest, for request payload {}\".format(payload))\n            raise\n\n        return response\n\n    def unregister_interest(self, registered_interest_id):\n        \"\"\" Remove a previously registered interest in a configuration category\n\n        :param registered_interest_id: registered interest id for a configuration category\n        :return: A JSON object containing the unregistered interest id\n        \"\"\"\n        url = 
'/fledge/interest/{}'.format(registered_interest_id)\n\n        self._management_client_conn.request(method='DELETE', url=url)\n        r = self._management_client_conn.getresponse()\n        if r.status in range(400, 500):\n            _logger.error(\"Client error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"Server error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        try:\n            response[\"id\"]\n        except (KeyError, Exception) as ex:\n            _logger.exception(ex, \"Could not unregister interest for {}\".format(registered_interest_id))\n            raise\n\n        return response\n\n    def get_services(self, service_name=None, service_type=None):\n        \"\"\" Retrieve the details of one or more services that are registered\n\n        :param service_name: filter the returned services by name\n        :param service_type: filter the returned services by type\n        :return: list of registered microservices, all or based on filter(s) applied\n        \"\"\"\n        url = '/fledge/service'\n        delimiter = '?'\n        if service_name:\n            url = '{}{}name={}'.format(url, delimiter, urllib.parse.quote(service_name))\n            delimiter = '&'\n        if service_type:\n            url = '{}{}type={}'.format(url, delimiter, service_type)\n\n        self._management_client_conn.request(method='GET', url=url)\n        r = self._management_client_conn.getresponse()\n        if r.status in range(400, 500):\n            _logger.error(\"Client error code: %d, Reason: %s\", r.status, r.reason)\n            raise 
client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"Server error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        try:\n            response[\"services\"]\n        except (KeyError, Exception) as ex:\n            _logger.exception(ex, \"Could not find the microservice for requested url {}\".format(url))\n            raise\n\n        return response\n\n    def get_configuration_category(self, category_name=None):\n        \"\"\"\n\n        :param category_name:\n        :return:\n        \"\"\"\n        url = '/fledge/service/category'\n\n        if category_name:\n            url = \"{}/{}\".format(url, urllib.parse.quote(category_name))\n\n        self._management_client_conn.request(method='GET', url=url)\n        r = self._management_client_conn.getresponse()\n        if r.status in range(400, 500):\n            _logger.error(\"Client error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"Server error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        return response\n\n    def get_configuration_item(self, category_name, config_item):\n        \"\"\"\n\n        :param category_name:\n        :param config_item:\n        :return:\n        \"\"\"\n        url = \"/fledge/service/category/{}/{}\".format(urllib.parse.quote(category_name), 
urllib.parse.quote(config_item))\n\n        self._management_client_conn.request(method='GET', url=url)\n        r = self._management_client_conn.getresponse()\n        if r.status in range(400, 500):\n            _logger.error(\"Client error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"Server error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        return response\n\n    def create_configuration_category(self, category_data):\n        \"\"\"\n\n        :param category_data: e.g. '{\"key\": \"TEST\", \"description\": \"description\", \"value\": {\"info\": {\"description\": \"Test\", \"type\": \"boolean\", \"default\": \"true\"}}}'\n        :return:\n        \"\"\"\n        data = json.loads(category_data)\n        if 'keep_original_items' in data:\n            keep_original_item = 'true' if data['keep_original_items'] is True else 'false'\n            url = '/fledge/service/category?keep_original_items={}'.format(keep_original_item)\n            del data['keep_original_items']\n        else:\n            url = '/fledge/service/category'\n\n        self._management_client_conn.request(method='POST', url=url, body=json.dumps(data))\n        r = self._management_client_conn.getresponse()\n        if r.status in range(400, 500):\n            _logger.error(\"Client error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"Server error code: %d, Reason: %s\", r.status, r.reason)\n            raise 
client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        return response\n\n    def create_child_category(self, parent, children):\n        \"\"\"\n        :param parent string\n        :param children list\n        :return:\n        \"\"\"\n        data = {\"children\": children}\n        url = '/fledge/service/category/{}/children'.format(urllib.parse.quote(parent))\n\n        self._management_client_conn.request(method='POST', url=url, body=json.dumps(data))\n        r = self._management_client_conn.getresponse()\n        if r.status in range(400, 500):\n            _logger.error(\"Client error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"Server error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        return response\n\n    def update_configuration_item(self, category_name, config_item, category_data):\n        \"\"\"\n\n        :param category_name:\n        :param config_item:\n        :param category_data: e.g. 
'{\"value\": \"true\"}'\n        :return:\n        \"\"\"\n        url = \"/fledge/service/category/{}/{}\".format(urllib.parse.quote(category_name),\n                                                      urllib.parse.quote(config_item))\n\n        self._management_client_conn.request(method='PUT', url=url, body=category_data)\n        r = self._management_client_conn.getresponse()\n        if r.status in range(400, 500):\n            _logger.error(\"Client error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"Server error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        return response\n\n    async def ping_service(self):\n\n        import aiohttp\n        async with aiohttp.ClientSession() as session:\n            async with session.get('http://{}:{}/fledge/service/ping'.format(self.hostname,\n                                                                             self.port)) as resp:\n                json_response = await resp.json()\n                self._management_client_conn.close()\n                self.port = None\n                self.hostname = None\n        return json_response\n\n    async def update_service_for_acl_change_security(self, acl, reason):\n        assert reason in [\"attachACL\", \"detachACL\", \"reloadACL\", \"updateACL\"]\n        url = \"/fledge/security\"\n        payload = {\n            \"reason\": reason,\n            \"argument\": acl\n        }\n        import aiohttp\n        async with aiohttp.ClientSession() as session:\n            async with session.put('http://{}:{}/fledge/security'.format(self.hostname,\n                            
                                             self.port),\n                                   data=json.dumps(payload)) as resp:\n                json_response = await resp.json()\n                _logger.debug(\"The response is {}\".format(json_response))\n                self._management_client_conn.close()\n                self.port = None\n                self.hostname = None\n        return json_response\n\n    def delete_configuration_item(self, category_name, config_item):\n        \"\"\"\n\n        :param category_name:\n        :param config_item:\n        :return:\n        \"\"\"\n        url = \"/fledge/service/category/{}/{}/value\".format(urllib.parse.quote(category_name), urllib.parse.quote(config_item))\n\n        self._management_client_conn.request(method='DELETE', url=url)\n        r = self._management_client_conn.getresponse()\n        if r.status in range(400, 500):\n            _logger.error(\"Client error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"Server error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        return response\n\n    def get_asset_tracker_events(self):\n        url = '/fledge/track'\n        self._management_client_conn.request(method='GET', url=url)\n        r = self._management_client_conn.getresponse()\n        if r.status in range(400, 500):\n            _logger.error(\"Client error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"Server error code: %d, Reason: %s\", 
r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        return response\n\n    def create_asset_tracker_event(self, asset_event):\n        \"\"\"\n\n        :param asset_event\n               e.g. {\"asset\": \"AirIntake\", \"event\": \"Ingest\", \"service\": \"PT100_In1\", \"plugin\": \"PT100\"}\n        :return:\n        \"\"\"\n        url = '/fledge/track'\n        self._management_client_conn.request(method='POST', url=url, body=json.dumps(asset_event))\n        r = self._management_client_conn.getresponse()\n        if r.status in range(400, 500):\n            _logger.error(\"Client error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"Server error code: %d, Reason: %s\", r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        return response\n\n    def get_alert_by_key(self, key):\n        url = \"/fledge/alert/{}\".format(key)\n        self._management_client_conn.request(method='GET', url=url)\n        r = self._management_client_conn.getresponse()\n        if r.status != 404:\n            if r.status in range(400, 500):\n                _logger.error(\"For URL: %s, Client error code: %d, Reason: %s\", url, r.status, r.reason)\n                raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"For URL: %s, Server error code: %d, Reason: %s\", url, r.status, r.reason)\n            raise 
client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        return response\n\n    def add_alert(self, params):\n        url = '/fledge/alert'\n        self._management_client_conn.request(method='POST', url=url, body=json.dumps(params))\n        r = self._management_client_conn.getresponse()\n        if r.status in range(401, 500):\n            _logger.error(\"For URL: %s, Client error code: %d, Reason: %s\", url, r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        if r.status in range(500, 600):\n            _logger.error(\"For URL: %s, Server error code: %d, Reason: %s\", url, r.status, r.reason)\n            raise client_exceptions.MicroserviceManagementClientError(status=r.status, reason=r.reason)\n        res = r.read().decode()\n        self._management_client_conn.close()\n        response = json.loads(res)\n        return response"
  },
  {
    "path": "python/fledge/common/parser.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport argparse\n__author__ = \"Ashwin Gopalakrishnan\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nclass ArgumentParserError(Exception):\n    pass\n\nclass SilentArgParse(argparse.ArgumentParser):\n    def error(self, message):\n        raise ArgumentParserError(message)\n\nclass Parser(object):\n    \"\"\" Fledge argument parser.\n\n     Used to parse command line arguments of various Fledge processes\n    \"\"\"\n\n    @staticmethod\n    def get(argument_name):\n        \"\"\"Parses command line arguments for a single argument of name argument_name. Returns the value of the argument specified or None if argument was not specified.\n\n        Keyword Arguments:\n        argument_name -- name of command line argument to retrieve value for\n\n        Return Values:\n        Argument value (as a string)\n        None (if argument was not passed)\n\n        Side Effects:\n        None\n\n        Known Exceptions:\n        ArgumentParserError\n        \"\"\"\n\n        parser = SilentArgParse()\n        parser.add_argument(argument_name)\n        try:\n            parser.parse_known_args()\n        except ArgumentParserError:\n            raise\n        else:\n            return list(vars(parser.parse_known_args()[0]).values())[0]\n\n"
  },
  {
    "path": "python/fledge/common/plugin_discovery.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Common Plugin Discovery Class\"\"\"\n\nimport os\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.plugins.common import utils as common_utils\nfrom fledge.services.core.api import utils\nfrom fledge.services.core.api.plugins import common\n\n\n__author__ = \"Amarendra K Sinha, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nclass PluginDiscovery(object):\n    def __init__(self):\n        pass\n\n    @classmethod\n    def get_plugins_installed(cls, plugin_type=None, is_config=False):\n        if plugin_type is None:\n            plugins_list = []\n            plugins_list_north = cls.fetch_plugins_installed(plugin_type=\"north\", installed_dir_name=\"north\",\n                                                             is_config=is_config)\n            plugins_list_south = cls.fetch_plugins_installed(plugin_type=\"south\", installed_dir_name=\"south\",\n                                                             is_config=is_config)\n            plugins_list_filter = cls.fetch_plugins_installed(plugin_type=\"filter\", installed_dir_name=\"filter\",\n                                                              is_config=is_config)\n            plugins_list_notify = cls.fetch_plugins_installed(\n                plugin_type=\"notify\", installed_dir_name=\"notificationDelivery\", is_config=is_config)\n            plugins_list_rule = cls.fetch_plugins_installed(\n                plugin_type=\"rule\", installed_dir_name=\"notificationRule\", is_config=is_config)\n            plugins_list_c_north = cls.fetch_c_plugins_installed(plugin_type=\"north\", is_config=is_config,\n                                                                 installed_dir_name=\"north\")\n            plugins_list_c_south = 
cls.fetch_c_plugins_installed(plugin_type=\"south\", is_config=is_config,\n                                                                 installed_dir_name=\"south\")\n            plugins_list_c_filter = cls.fetch_c_plugins_installed(plugin_type=\"filter\", is_config=is_config,\n                                                                  installed_dir_name=\"filter\")\n            plugins_list_c_notify = cls.fetch_c_plugins_installed(plugin_type=\"notify\", is_config=is_config,\n                                                                  installed_dir_name=\"notificationDelivery\")\n            plugins_list_c_rule = cls.fetch_c_plugins_installed(plugin_type=\"rule\", is_config=is_config,\n                                                                installed_dir_name=\"notificationRule\")\n            plugins_list.extend(plugins_list_north)\n            plugins_list.extend(plugins_list_c_north)\n            plugins_list.extend(plugins_list_south)\n            plugins_list.extend(plugins_list_c_south)\n            plugins_list.extend(plugins_list_filter)\n            plugins_list.extend(plugins_list_c_filter)\n            plugins_list.extend(plugins_list_notify)\n            plugins_list.extend(plugins_list_c_notify)\n            plugins_list.extend(plugins_list_rule)\n            plugins_list.extend(plugins_list_c_rule)\n        else:\n            if plugin_type == 'notify':\n                installed_dir_name = 'notificationDelivery'\n            elif plugin_type == 'rule':\n                installed_dir_name = 'notificationRule'\n            else:\n                installed_dir_name = plugin_type\n            plugins_list = cls.fetch_plugins_installed(plugin_type=plugin_type,\n                                                       installed_dir_name=installed_dir_name, is_config=is_config)\n            plugins_list.extend(cls.fetch_c_plugins_installed(plugin_type=plugin_type, is_config=is_config,\n                                                 
             installed_dir_name=installed_dir_name))\n        return plugins_list\n\n    @classmethod\n    def fetch_plugins_installed(cls, plugin_type, installed_dir_name, is_config):\n        directories = cls.get_plugin_folders(installed_dir_name)\n        # This check is required only for the notificationDelivery & notificationRule python plugins: the\n        # Notification Service is an external service, so empty directories are not pre-created for them\n        # as they are for the south & filter plugins\n        if directories is None:\n            directories = []\n        configs = []\n        for d in directories:\n            plugin_config = cls.get_plugin_config(d, plugin_type, installed_dir_name, is_config)\n            if plugin_config is not None:\n                configs.append(plugin_config)\n        return configs\n\n    @classmethod\n    def get_plugin_folders(cls, plugin_type):\n        directories = []\n        dir_name = utils._FLEDGE_ROOT + \"/python/fledge/plugins/\" + plugin_type\n        dir_path = []\n        l1 = [plugin_type]\n        if utils._FLEDGE_PLUGIN_PATH:\n            my_list = utils._FLEDGE_PLUGIN_PATH.split(\";\")\n            for l in my_list:\n                dir_found = os.path.isdir(l)\n                if dir_found:\n                    subdirs = [dirs for x, dirs, files in os.walk(l)]\n                    if subdirs[0]:\n                        result = any(elem in l1 for elem in subdirs[0])\n                        if result:\n                            dir_path.append(l)\n                    else:\n                        _logger.warning(\"{} subdir type not found\".format(l))\n                else:\n                    _logger.warning(\"{} dir path not found\".format(l))\n        try:\n            directories = [dir_name + '/' + d for d in os.listdir(dir_name) if os.path.isdir(dir_name + \"/\" + d) and\n                           not d.startswith(\"__\") and d != \"empty\" and d != \"common\"]\n            if dir_path:\n                temp_list = []\n                for
 fp in dir_path:\n                    for root, dirs, files in os.walk(fp + \"/\" + plugin_type):\n                        for name in dirs:\n                            if not name.startswith(\"__\"):\n                                p = os.path.join(root, name)\n                                temp_list.append(p)\n                directories = directories + temp_list\n        except FileNotFoundError:\n            # Installed plugins directory does not exist; callers treat None as \"no plugins\"\n            return None\n        return directories\n\n    @classmethod\n    def fetch_c_plugins_installed(cls, plugin_type, is_config, installed_dir_name):\n        libs = utils.find_c_plugin_libs(installed_dir_name)\n        configs = []\n        for name, _type in libs:\n            try:\n                if _type == 'binary':\n                    jdoc = utils.get_plugin_info(name, dir=installed_dir_name)\n                    if jdoc:\n                        if 'flag' in jdoc:\n                            if common_utils.bit_at_given_position_set_or_unset(jdoc['flag'],\n                                                                               common_utils.DEPRECATED_BIT_POSITION):\n                                raise DeprecationWarning\n\n                        pkg_name = ''\n                        # Only OMF is an inbuilt plugin\n                        if name.lower() != 'omf':\n                            pkg_name = 'fledge-{}-{}'.format(plugin_type, name.lower().replace(\"_\", \"-\"))\n\n                        plugin_config = {'name': name,\n                                         'type': plugin_type,\n                                         'description': jdoc['config']['plugin']['description'],\n                                         'version': jdoc['version'],\n                                         'installedDirectory': '{}/{}'.format(installed_dir_name, name),\n                                         'packageName': get_package_name(\n                                    
         \"fledge-{}-\".format(plugin_type),\n                                             \"{}/plugins/{}/{}/.Package\".format(utils._FLEDGE_ROOT, installed_dir_name, name),\n                                             pkg_name)\n                                         }\n                        if is_config:\n                            plugin_config.update({'config': jdoc['config']})\n                        configs.append(plugin_config)\n                else:\n                    # for c-hybrid plugin\n                    hybrid_plugin_config = common.load_and_fetch_c_hybrid_plugin_info(name, is_config)\n                    if hybrid_plugin_config:\n                        configs.append(hybrid_plugin_config)\n            except DeprecationWarning:\n                _logger.warning('\"{}\" plugin is deprecated'.format(name))\n            except Exception as ex:\n                _logger.exception(ex)\n\n        return configs\n\n    @classmethod\n    def get_plugin_config(cls, plugin_dir, plugin_type, installed_dir_name, is_config):\n        plugin_module_path = plugin_dir\n        plugin_config = None\n        # Now load the plugin to fetch its configuration\n        try:\n            plugin_info = common.load_and_fetch_python_plugin_info(\n                plugin_module_path, plugin_module_path.split('/')[-1], installed_dir_name)\n            # Fetch configuration from the configuration defined in the plugin\n            if plugin_info['type'] == installed_dir_name:\n                if 'flag' in plugin_info:\n                    if common_utils.bit_at_given_position_set_or_unset(plugin_info['flag'],\n                                                                       common_utils.DEPRECATED_BIT_POSITION):\n                        raise DeprecationWarning\n                pkg_name = ''\n                name = plugin_info['config']['plugin']['default']\n                # Only OMF is an inbuilt plugin\n                if name.lower() != 'omf':\n                
    pkg_name = 'fledge-{}-{}'.format(plugin_type, name.lower().replace(\"_\", \"-\"))\n                plugin_config = {\n                    'name': plugin_info['config']['plugin']['default'],\n                    'type': plugin_type,\n                    'description': plugin_info['config']['plugin']['description'],\n                    'version': plugin_info['version'],\n                    'installedDirectory': '{}/{}'.format(installed_dir_name, name),\n                    'packageName': get_package_name(\"fledge-{}-\".format(plugin_type),\n                                                    \"{}/.Package\".format(plugin_dir), pkg_name)\n                }\n            else:\n                _logger.warning(\"Plugin {} is discarded due to invalid type\".format(plugin_dir))\n\n            if is_config:\n                plugin_config.update({'config': plugin_info['config']})\n        except DeprecationWarning:\n            _logger.warning('\"{}\" plugin is deprecated'.format(plugin_dir.split('/')[-1]))\n        except FileNotFoundError as ex:\n            _logger.error(ex, 'Import problem from path \"{}\" for {} plugin.'.format(plugin_module_path, plugin_dir))\n        except Exception as ex:\n            _logger.exception(ex, 'Failed to fetch config for {} plugin.'.format(plugin_dir))\n\n        return plugin_config\n\n\ndef get_package_name(prefix: str, filepath: str, internal_name: str) -> str:\n    \"\"\" Get Package name on the basis of .Package file\n    Args:\n        prefix:    package prefix which is used for file content matching\n        filepath:  Check .Package file in given path\n        internal_name:  If .Package file is missing then use old internal way\n    \"\"\"\n    try:\n        # open file in read mode\n        with open(filepath, 'r') as read_obj:\n            line = read_obj.read().strip('\\n')\n    except Exception:\n        # If .Package file not found then return internal package name\n        # which is most likely a case of 
non-package environment setup\n        return internal_name\n    else:\n        # if Package file content is empty then return internal package name Else Package file content\n        return internal_name if prefix not in line else line\n"
  },
  {
    "path": "python/fledge/common/plugin_helpers.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n__author__ = \"Douglas Orr\"\n__copyright__ = \"Copyright (c) 2020 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n### plugin_helpers -- utility classes to facilitate making python plugins -- ###\n\nimport re\nimport copy\n\nimport logging\nfrom fledge.common import iprpc\n\n\nclass HandleMap:\n    # keep a map of json-able handles to internal objects that have the working instance state\n    def __init__(self, name):\n        self.name = name\n        self.uid = 0\n        self.handles = {}\n\n    def new_handle(self, handle, config):\n        \"\"\" new_handle -- make a new entry in our handle map findable by id; stash a copy of current config \"\"\"\n        _handle_id = \"{}-{}\".format(self.name, self.uid)\n        self.uid += 1\n        self.handles[_handle_id] = handle\n        return {\"id\": _handle_id, \"config\": copy.deepcopy(config)}\n\n    def get_handle(self, h):\n        \"\"\" get_handle -- use the unique id to find the real handle \"\"\"\n        return self.handles.get(h[\"id\"], None)\n\n    def del_handle(self, h):\n        \"\"\" del_handle -- done with the handle, delete the id (which deletes the underlying handle) \"\"\"\n        if h[\"id\"] in self.handles:\n            del self.handles[h[\"id\"]]\n\n\nclass PluginHandle(object):\n    \"\"\" PluginHandle -- utility class that makes converting from config to internal handle easier \"\"\"\n\n    typefns = {\n        \"integer\": int,\n        \"float\": float,\n        \"bool\": lambda x: x == \"true\",\n        \"boolean\": lambda x: x == \"true\",\n        \"string\": str,\n        \"enumeration\": str,\n    }\n\n    def __init__(self, service_name, cbinfo=(None, None)):\n        # for filter plugins save the info that will be sent to the filter_ingest callback\n        self.ingest_ref, self.callback = cbinfo\n\n    def config_update(self, 
udict):\n        \"\"\" config_update - store config values in the (derived) handle \"\"\"\n\n        def snake_case(name):\n            # handle member names use snake_case; convert from camelCase\n            return re.sub(r\"([A-Z])\", r\"_\\1\", name).lower()\n\n        def get_typed_value(k):\n            # auto-convert string config entries into their appropriate type\n            _t = udict[k][\"type\"]\n            # \"typefns\" convert to real type; default type fn assumes identity (string, usually)\n            def ident_fn(x): return x\n            _typefn = ident_fn if _t not in PluginHandle.typefns else PluginHandle.typefns[_t]\n            return _typefn(udict[k][\"value\"])\n\n        for k in udict.keys():\n            _v = get_typed_value(k)\n            setattr(self, snake_case(k), _v)\n\n    def rpc_setup(self, config, module_dir=\"\", restart_rpc=False):\n        \"\"\" rpc_setup -- create a new IPC server configuration, optionally (re)starting the IPC server \"\"\"\n        _server_module = getattr(self, \"RPC_SERVER_NAME\", None)\n\n        if _server_module is None:\n            raise ValueError(\n                \"RPC_SERVER_NAME class variable must be set to the name of the RPC server module\"\n            )\n\n        # create the server that does the signal processing on frame data\n        # (guard with getattr: a derived handle may not have set self.rpc yet)\n        if getattr(self, \"rpc\", None) is None or restart_rpc:\n            self.rpc = iprpc.IPCModuleClient(_server_module, module_dir)\n        self.rpc.plugin_init(self._rpc_config())\n\n    def _rpc_config(self):\n        \"\"\" _rpc_config -- return the dict of k,v to be updated in the server when rpc configuration changes \"\"\"\n\n        # BY CONVENTION, RPC_CONFIG_MEMBERS is the slot which has the names of config members to be sent to\n        # the rpc server as config values\n\n        _params = getattr(self, \"RPC_CONFIG_MEMBERS\", [])\n        return {k: getattr(self, k) for k in _params}\n\n    def shutdown(self):\n        if getattr(self, \"rpc\", None) is not None:\n
            self.rpc.plugin_shutdown()\n\n\nclass PluginRPCServer(iprpc.InterProcessRPC):\n    \"\"\" PluginRPCServer - helper class that simplifies writing iprpc server-based plugins \"\"\"\n\n    def __init__(self, service_name):\n        super().__init__()\n\n    def config_update(self, config):\n        # servers receive unpickled dicts with typed values\n        for k, v in config.items():\n            setattr(self, k, v)\n"
  },
  {
    "path": "python/fledge/common/process.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Common FledgeProcess Class\"\"\"\n\nfrom abc import ABC, abstractmethod\nimport argparse\nimport time\nfrom fledge.common.storage_client.storage_client import StorageClientAsync, ReadingsStorageClientAsync\nfrom fledge.common import logger\nfrom fledge.common.microservice_management_client.microservice_management_client import MicroserviceManagementClient\n\n__author__ = \"Ashwin Gopalakrishnan, Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = logger.setup(__name__)\n\n\nclass ArgumentParserError(Exception):\n    \"\"\" Override default exception to not terminate application \"\"\"\n\n    def __init__(self, message):\n        self.message = message\n\n    def __str__(self):\n        fmt = '%(message)s'\n        return fmt % dict(message=self.message)\n\n\nclass SilentArgParse(argparse.ArgumentParser):\n    def error(self, message):\n        \"\"\" Override default error functionality to not terminate application \"\"\"\n        raise ArgumentParserError(message)\n\n\nclass FledgeProcess(ABC):\n    \"\"\" FledgeProcess for all non-core python processes.\n\n    All processes will inherit from FledgeProcess and must implement pure virtual method run()\n    \"\"\"\n\n    _core_management_host = None\n    \"\"\" string containing core's micro-service management host \"\"\"\n\n    _core_management_port = None\n    \"\"\" int containing core's micro-service management port \"\"\"\n\n    _name = None\n    \"\"\" name of process \"\"\"\n\n    _core_microservice_management_client = None\n    \"\"\" MicroserviceManagementClient instance \"\"\"\n\n    _readings_storage_async = None\n    \"\"\" fledge.common.storage_client.storage_client.ReadingsStorageClientAsync \"\"\"\n\n    _storage_async = None\n    \"\"\" async 
fledge.common.storage_client.storage_client.StorageClientAsync \"\"\"\n\n    _start_time = None\n    \"\"\" time at which this python process started \"\"\"\n\n    _dryrun = False\n    \"\"\" this is a dry run invocation of the process used to populate configuration \"\"\"\n\n    def __init__(self):\n        \"\"\" All processes must have these three command line arguments passed:\n\n        --address [core microservice management host]\n        --port [core microservice management port]\n        --name [process name]\n        \"\"\"\n\n        self._start_time = time.time()\n\n        try:\n            parser = SilentArgParse()\n            parser.add_argument(\"--name\", required=True)\n            parser.add_argument(\"--address\", required=True)\n            parser.add_argument(\"--port\", required=True, type=int)\n            namespace, args = parser.parse_known_args()\n            self._name = getattr(namespace, 'name')\n            self._core_management_host = getattr(namespace, 'address')\n            self._core_management_port = getattr(namespace, 'port')\n            r = range(1, 65536)\n            if self._core_management_port not in r:\n                raise ArgumentParserError(\"Invalid Port: {}\".format(self._core_management_port))\n            for item in args:\n                if item == \"--dryrun\":\n                    self._dryrun = True\n                elif item.startswith('--'):\n                    kv = item.split('=')\n                    if len(kv) == 2:\n                        if len(kv[1].strip()) == 0:\n                            raise ArgumentParserError(\"Invalid value {} for optional arg {}\".format(kv[1], kv[0]))\n\n        except ArgumentParserError as ex:\n            _logger.error(\"Arg parser error: %s\", ex)\n            raise\n\n        self._core_microservice_management_client = MicroserviceManagementClient(self._core_management_host,\n                                                                                 
self._core_management_port)\n\n        self._readings_storage_async = ReadingsStorageClientAsync(self._core_management_host,\n                                                                  self._core_management_port)\n        self._storage_async = StorageClientAsync(self._core_management_host, self._core_management_port)\n\n    # pure virtual method run() to be implemented by child class\n    @abstractmethod\n    def run(self):\n        pass\n\n    def get_services_from_core(self, name=None, _type=None):\n        return self._core_microservice_management_client.get_services(name, _type)\n\n    def register_service_with_core(self, service_registration_payload):\n        \"\"\" Register a microservice with core\n\n        Keyword Arguments:\n            service_registration_payload -- json format dictionary\n\n        Return Values:\n            Argument value (as a string)\n            None (if argument was not passed)\n\n            Known Exceptions:\n                HTTPError\n        \"\"\"\n\n        return self._core_microservice_management_client.register_service(service_registration_payload)\n\n    def unregister_service_with_core(self, microservice_id):\n        \"\"\" Unregister a microservice with core\n\n        Keyword Arguments:\n            microservice_id (uuid as a string)\n        \"\"\"\n        return self._core_microservice_management_client.unregister_service(microservice_id)\n\n    def register_interest_with_core(self):\n        # cat name\n        # callback module\n        # self.microservice_id\n        raise NotImplementedError\n\n    def unregister_interest_with_core(self):\n        # cat name\n        # self.microservice_id\n        raise NotImplementedError\n\n    def get_configuration_categories(self):\n        \"\"\"\n\n        :return:\n        \"\"\"\n        return self._core_microservice_management_client.get_configuration_category()\n\n    def get_configuration_category(self, category_name=None):\n        \"\"\"\n\n        
:param category_name:\n        :return:\n        \"\"\"\n        return self._core_microservice_management_client.get_configuration_category(category_name)\n\n    def get_configuration_item(self, category_name, config_item):\n        \"\"\"\n\n        :param category_name:\n        :param config_item:\n        :return:\n        \"\"\"\n        return self._core_microservice_management_client.get_configuration_item(category_name, config_item)\n\n    def create_configuration_category(self, category_data):\n        \"\"\"\n\n        :param category_data:\n        :return:\n        \"\"\"\n        return self._core_microservice_management_client.create_configuration_category(category_data)\n\n    def update_configuration_item(self, category_name, config_item):\n        \"\"\"\n\n        :param category_name:\n        :param config_item:\n        :return:\n        \"\"\"\n        return self._core_microservice_management_client.update_configuration_item(category_name, config_item)\n\n    def delete_configuration_item(self, category_name, config_item):\n        \"\"\"\n\n        :param category_name:\n        :param config_item:\n        :return:\n        \"\"\"\n        return self._core_microservice_management_client.delete_configuration_item(category_name, config_item)\n\n    def is_dry_run(self):\n        \"\"\"\n\n        Return true if this is a dry run of the process. Dry runs are used\n        to allow a task or service to add configuration without that task or\n        service performing whatever operation it normally does. Thus the user\n        can then update the configuration before the task or service is ever\n        started in anger.\n\n        :return: Boolean\n        \"\"\"\n        return self._dryrun\n"
  },
  {
    "path": "python/fledge/common/service_record.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Service Record Class\"\"\"\n\nfrom enum import IntEnum\n\n__author__ = \"Praveen Garg, Amarendra Kumar Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass ServiceRecord(object):\n    \"\"\"Used to information regarding a registered microservice.\n    \"\"\"\n\n    class Type(IntEnum):\n        \"\"\"Enumeration for Service Types\"\"\"\n\n        Storage = 1\n        Core = 2\n        Southbound = 3\n        Notification = 4\n        Management = 5\n        Northbound = 6\n        Dispatcher = 7\n        BucketStorage = 8\n        Pipeline = 9\n\n    class Status(IntEnum):\n        \"\"\"Enumeration for Service Status\"\"\"\n\n        Running = 1\n        Shutdown = 2\n        Failed = 3\n        Unresponsive = 4\n        Restart = 5\n\n    class InvalidServiceType(Exception):\n        # TODO: tell allowed service types?\n        pass\n\n    class InvalidServiceStatus(Exception):\n        # TODO: tell allowed service status?\n        pass\n\n    __slots__ = ['_id', '_name', '_type', '_protocol', '_address', '_port', '_management_port', '_status', '_debug']\n\n    def __init__(self, s_id, s_name, s_type, s_protocol, s_address, s_port, m_port):\n        self._id = s_id\n        self._name = s_name\n        self._type = self.valid_type(s_type)  # check with ServiceRecord.Type, if not a valid type raise error\n        self._protocol = s_protocol\n        self._address = s_address\n        self._port = None\n        if s_port is not None:\n            self._port = int(s_port)\n        self._management_port = int(m_port)\n        self._status = ServiceRecord.Status.Running\n        self._debug = {}\n\n    def __repr__(self):\n        template = 'service instance id={s._id}: <{s._name}, type={s._type}, protocol={s._protocol}, ' \\\n                   'address={s._address}, 
service port={s._port}, management port={s._management_port}, ' \\\n                   'status={s._status}>'\n        return template.format(s=self)\n\n    def __str__(self):\n        return self.__repr__()\n\n    def valid_type(self, s_type):\n        if s_type not in ServiceRecord.Type.__members__:\n            raise ServiceRecord.InvalidServiceType\n        return s_type\n"
  },
  {
    "path": "python/fledge/common/statistics.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\n\n\n__author__ = \"Ashwin Gopalakrishnan, Ashish Jabble, Mark Riddoch, Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nasync def create_statistics(storage=None):\n    stat = Statistics(storage)\n    await stat._init()\n    return stat\n\n\nclass Statistics(object):\n    \"\"\" Statistics interface of the API to gather the available statistics counters,\n        calculate the deltas from the previous run of the process and write the deltas\n        to a statistics record.\n    \"\"\"\n\n    _shared_state = {}\n\n    _storage = None\n\n    _registered_keys = None\n    \"\"\" Set of keys already in the storage tables \"\"\"\n\n    def __init__(self, storage=None):\n        self.__dict__ = self._shared_state\n        if self._storage is None:\n            if not isinstance(storage, StorageClientAsync):\n                raise TypeError('Must be a valid Async Storage object')\n            self._storage = storage\n\n    async def _init(self):\n        if self._registered_keys is None:\n            await self._load_keys()\n\n    async def update_bulk(self, stat_list):\n        \"\"\" Bulk update statistics table keys and their values\n\n        Args:\n            stat_list: dict containing statistics keys and increment values\n\n        Returns:\n            None\n        \"\"\"\n        if not isinstance(stat_list, dict):\n            raise TypeError('stat_list must be a dict')\n\n        try:\n            payload = {\"updates\": []}\n            for k, v in stat_list.items():\n                payload_item = 
PayloadBuilder() \\\n                    .WHERE([\"key\", \"=\", k]) \\\n                    .EXPR([\"value\", \"+\", v]) \\\n                    .payload()\n                payload['updates'].append(json.loads(payload_item))\n            await self._storage.update_tbl(\"statistics\", json.dumps(payload, sort_keys=False))\n        except Exception as ex:\n            _logger.exception(ex, 'Unable to bulk update statistics')\n            raise\n\n    async def update(self, key, value_increment):\n        \"\"\" UPDATE the value column only of a statistics row based on key\n\n        Args:\n            key: statistics key value (required)\n            value_increment: amount to increment the value by\n\n        Returns:\n            None\n        \"\"\"\n        if not isinstance(key, str):\n            raise TypeError('key must be a string')\n\n        if not isinstance(value_increment, int):\n            raise ValueError('value must be an integer')\n\n        try:\n            payload = PayloadBuilder()\\\n                .WHERE([\"key\", \"=\", key])\\\n                .EXPR([\"value\", \"+\", value_increment])\\\n                .payload()\n            await self._storage.update_tbl(\"statistics\", payload)\n        except Exception as ex:\n            msg = 'Unable to update statistics value based on statistics_key {} and value_increment {}'.format(\n                key, value_increment)\n            _logger.exception(ex, msg)\n            raise\n\n    async def add_update(self, sensor_stat_dict):\n        \"\"\"UPDATE the value column of a statistics based on key, if key is not present, ADD the new key\n\n        Args:\n            sensor_stat_dict: Dictionary containing the key value of Asset name and value increment\n\n        Returns:\n            None\n        \"\"\"\n        for key, value_increment in sensor_stat_dict.items():\n            # Try updating the statistics value for given key\n            try:\n                payload = PayloadBuilder() \\\n  
                  .WHERE([\"key\", \"=\", key]) \\\n                    .EXPR([\"value\", \"+\", value_increment]) \\\n                    .payload()\n                result = await self._storage.update_tbl(\"statistics\", payload)\n                if result[\"response\"] != \"updated\":\n                    raise KeyError\n            # If key was not present, add the key and with value = value_increment\n            except KeyError:\n                _logger.exception('Statistics key %s has not been registered', key)\n                raise\n            except Exception as ex:\n                msg = 'Unable to update statistics value based on statistics_key {} and value_increment {}'.format(\n                    key, value_increment)\n                _logger.exception(ex, msg)\n                raise\n\n    async def register(self, key, description):\n        if key in self._registered_keys:\n            return\n        if len(self._registered_keys) == 0:\n            await self._load_keys()\n        try:\n            payload = PayloadBuilder().INSERT(key=key, description=description, value=0, previous_value=0).payload()\n            await self._storage.insert_into_tbl(\"statistics\", payload)\n            self._registered_keys.append(key)\n        except Exception as ex:\n            \"\"\" The error may be because the key has been created in another process, reload keys \"\"\"\n            await self._load_keys()\n            if key not in self._registered_keys:\n                _logger.exception(ex, 'Unable to create new statistic {} key.'.format(key))\n                raise\n\n    async def _load_keys(self):\n        self._registered_keys = []\n        try:\n            payload = PayloadBuilder().SELECT(\"key\").payload()\n            results = await self._storage.query_tbl_with_payload('statistics', payload)\n            for row in results['rows']:\n                self._registered_keys.append(row['key'])\n        except Exception as ex:\n            
_logger.exception(ex, 'Failed to retrieve statistics keys')\n"
  },
  {
    "path": "python/fledge/common/storage_client/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/common/storage_client/exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Storage layer python client exceptions\n\n    If the data layer was unavailable then return “503 Service Unavailable”\n    If any of the query parameters are missing or payload is malformed then return “400 Bad Request”\n    If the data could not be deleted because of a conflict, then return error “409 Conflict”\n\n    In other circumstances a “400 Bad Request” should be returned.\n\"\"\"\n\n__author__ = \"Praveen Garg\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n__all__ = ('BadRequest', 'StorageServiceUnavailable', 'InvalidServiceInstance', 'InvalidReadingsPurgeFlagParameters',\n           'PurgeOneOfAgeAssetAndSize', 'PurgeOnlyOneOfAgeAndSize', 'PurgeOneOfAgeAndAsset', 'PurgeOneOfSizeAndAsset', 'StorageServerError')\n\n\nclass StorageClientException(Exception):\n    \"\"\" The base exception class for all exceptions this module raises.\n    \"\"\"\n    def __init__(self, code, message=None):\n        self.code = code\n        # NOTE: Use getattr on self.__class__.message since\n        # BaseException.message was dropped in python 3, see PEP 0352.\n        self.message = message or getattr(self.__class__, 'message', None)\n\n    def __str__(self):\n        fmt_msg = \"%s\" % self.message\n        return fmt_msg\n\n\nclass BadRequest(StorageClientException):\n    \"\"\" 400 - Bad request: you sent some malformed data.\n    \"\"\"\n    def __init__(self):\n        self.code = 400\n        self.message = \"Bad request\"\n\n\nclass StorageServiceUnavailable(StorageClientException):\n    \"\"\" 503 - Service Unavailable\n    \"\"\"\n    def __init__(self):\n        self.code = 503\n        self.message = \"Storage service is unavailable\"\n\n\nclass InvalidServiceInstance(StorageClientException):\n    \"\"\" 502 - Invalid Storage Service\n    \"\"\"\n\n    def 
__init__(self):\n        self.code = 502\n        self.message = \"Storage client needs a valid *Fledge storage* micro-service instance\"\n\n\nclass InvalidReadingsPurgeFlagParameters(BadRequest):\n    \"\"\" 400 - Invalid params for Purge request\n    \"\"\"\n\n    def __init__(self):\n        self.code = 400\n        self.message = \"Purge flag valid options are retain or purge only\"\n\n\nclass PurgeOnlyOneOfAgeAndSize(BadRequest):\n    \"\"\" 400 - Invalid params for Purge request\n    \"\"\"\n\n    def __init__(self):\n        self.code = 400\n        self.message = \"Purge must specify only one of age or size\"\n\n\nclass PurgeOneOfAgeAssetAndSize(BadRequest):\n    \"\"\" 400 - Invalid params for Purge request\n    \"\"\"\n\n    def __init__(self):\n        self.code = 400\n        self.message = \"Purge must specify one of age, size or asset\"\n\n\nclass PurgeOneOfAgeAndAsset(BadRequest):\n    \"\"\" 400 - Invalid params for Purge request\n    \"\"\"\n\n    def __init__(self):\n        self.code = 400\n        self.message = \"Purge must specify one of age or asset\"\n\n\nclass PurgeOneOfSizeAndAsset(BadRequest):\n    \"\"\" 400 - Invalid params for Purge request\n    \"\"\"\n\n    def __init__(self):\n        self.code = 400\n        self.message = \"Purge must specify one of size or asset\"\n\n\nclass StorageServerError(Exception):\n\n    def __init__(self, code, reason, error):\n        self.code = code\n        self.reason = reason\n        self.error = error\n\n    def __str__(self):\n        fmt_msg = \"code: %d, reason: %s, error: %s\" % (self.code, self.reason, self.error)\n        return fmt_msg\n"
  },
  {
    "path": "python/fledge/common/storage_client/payload_builder.py",
"content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Storage layer python client payload builder\n\"\"\"\n\n__author__ = \"Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nfrom collections import OrderedDict\nimport json\nimport urllib.parse\nimport numbers\n\nfrom fledge.common import logger\n\n\n_LOGGER = logger.setup(__name__)\n\n\nclass PayloadBuilder(object):\n    \"\"\" Payload Builder to be used in the Python client for the Storage Service\n\n    \"\"\"\n\n    # TODO: Add json validator\n    ''' Ref: https://docs.google.com/document/d/1qGIswveF9p2MmAOw_W1oXpo_aFUJd3bXBkW563E16g0/edit#\n        Ref: http://json-schema.org/\n        Ref: http://json-schema.org/implementations.html#validators\n    '''\n    # TODO: Add tests\n\n    query_payload = None\n\n    def __init__(self, initial_payload=OrderedDict()):\n        # TODO: Investigate why simple \"self.__class__.query_payload = initial_payload\" is not working\n        self.__class__.query_payload = initial_payload if len(initial_payload) else OrderedDict()\n\n    @staticmethod\n    def verify_select(arg):\n        retval = False\n        if isinstance(arg, str):\n            retval = True\n        elif isinstance(arg, tuple):\n            retval = True\n        return retval\n\n    @staticmethod\n    def verify_condition(arg):\n        retval = False\n        if isinstance(arg, list):\n            if len(arg) == 3:\n                if arg[1] in ['in', 'not in']:\n                    # 'in' and 'not in' require a list of values as the 3rd arg\n                    if isinstance(arg[2], list):\n                        retval = True\n                elif arg[1] in ['like', '<', '>', '=', '>=', '<=', '!=', 'newer', 'older']:\n                    retval = True\n            elif len(arg) == 2:\n                # There is no 3rd arg required for below operations\n                if arg[1] in ['isnull', 'notnull']:\n                    retval = True\n        return retval\n\n    @staticmethod\n    def verify_aggregation(arg):\n        retval = False\n        if isinstance(arg, list):\n            if len(arg) == 1 and arg[0] == \"all\":\n                retval = True\n            elif len(arg) == 2:\n                if arg[0] in ['min', 'max', 'avg', 'sum', 'count']:\n                    retval = True\n        return retval\n\n    @staticmethod\n    def verify_orderby(arg):\n        retval = False\n        if isinstance(arg, list):\n            if len(arg) == 1:\n                arg.append('asc')\n\n            if len(arg) == 2:\n                if arg[1].upper() in ['ASC', 'DESC']:\n                    retval = True\n        return retval\n\n    @staticmethod\n    def verify_alias(arg):\n        retval = False\n        if isinstance(arg, tuple):\n            if len(arg) == 2:\n                retval = True\n            if len(arg) == 3:\n                if arg[1] in ['min', 'max', 'avg', 'sum', 'count']:\n                    retval = True\n        return retval\n\n    @staticmethod\n    def verify_json_property(arg):\n        retval = False\n        if isinstance(arg, tuple):\n            if len(arg) == 3:\n                if isinstance(arg[1], list):\n                    retval = True\n        return retval\n\n    @staticmethod\n    def is_json(myjson):\n        try:\n            json.loads(myjson)\n        except (ValueError, TypeError):\n            return False\n        return True\n\n    @classmethod\n    def add_clause_to_select(cls, clause, qp_list, col, clause_value):\n        for i, item in enumerate(qp_list):\n            if isinstance(item, str):\n                if item == col:\n                    with_clause = OrderedDict()\n                    with_clause['column'] = item\n                    with_clause[clause] = clause_value\n                    \"\"\"\n                    NOTE:\n                        For Sqlite 
based engines, Temporarily workaround in payload builder to add \"utc\" timezone always \n                        when query with user_ts column and having alias timestamp.\n                        Though for PostgreSQL we already have set a session level time zone to 'UTC' during connection\n                        https://github.com/fledge-iot/fledge/pull/900/files\n                    \"\"\"\n                    if col == \"user_ts\" and clause_value == \"timestamp\":\n                        with_clause[\"timezone\"] = \"utc\"\n                    qp_list[i] = with_clause\n            if isinstance(item, dict):\n                if 'json' in qp_list[i] and qp_list[i]['json']['column'] == col:\n                    qp_list[i][clause] = clause_value\n                elif 'column' in qp_list[i] and qp_list[i]['column'] == col:\n                    qp_list[i][clause] = clause_value\n\n    @classmethod\n    def add_clause_to_aggregate(cls, clause, qp_list, col, opr, clause_value):\n        if isinstance(qp_list, dict):\n            if 'json' in qp_list:\n                if qp_list['json']['column'] == col and qp_list['operation'] == opr:\n                    qp_list[clause] = clause_value\n            elif qp_list['column'] == col and qp_list['operation'] == opr:\n                qp_list[clause] = clause_value\n\n        if isinstance(qp_list, list):\n            for i, item in enumerate(qp_list):\n                if isinstance(item, dict):\n                    if 'json' in qp_list[i]:\n                        if qp_list[i]['json']['column'] == col and qp_list[i]['operation'] == opr:\n                            qp_list[i][clause] = clause_value\n                    elif qp_list[i]['column'] == col and qp_list[i]['operation'] == opr:\n                        qp_list[i][clause] = clause_value\n\n    @classmethod\n    def add_clause_to_group(cls, clause, qp, col, clause_value):\n        item = qp['group']\n        if cls.is_json(item) is False and isinstance(item, 
str):\n            if item == col:\n                with_clause = OrderedDict()\n                with_clause['column'] = item\n                with_clause[clause] = clause_value\n                qp['group'] = with_clause\n        if isinstance(item, dict) or cls.is_json(item) is True:\n            my_item = json.loads(item) if isinstance(item, dict) is False else item\n            if 'column' in my_item and my_item['column'] == col:\n                my_item[clause] = clause_value\n            qp['group'] = my_item\n\n    @classmethod\n    def _add_clause(cls, clause, main_key, args):\n        \"\"\"\n        Adds \"alias\" and \"format\" clauses to columns in payload info. Currently, adding clauses is supported at two\n        actions only - SELECT and AGGREGATE.\n\n        :param clause: either \"alias\" or \"format\"\n        :param main_key: either \"return\" or \"aggregate\"\n        :param args: each arg is a tuple. The len of tuple depends upon main_key. If main_key is \"return\" i.e. SELECT,\n                     then each tuple will contain (col, alias). 
If main_key is \"aggregate\", then each tuple will contain\n                     (col, operation, alias) because col can be repeated in aggregate with different \"operations\".\n        :return:\n        \"\"\"\n        if clause not in ['alias', 'format', 'group']:\n            return cls\n\n        if main_key in ['return', 'aggregate', 'group']:\n            for arg in args:\n                if cls.verify_alias(arg):\n                    if main_key == 'return':\n                        col = arg[0]\n                        alias = arg[1]\n                        cls.add_clause_to_select(clause, cls.query_payload[main_key], col, alias)\n                    if main_key == 'aggregate':\n                        col = arg[0]\n                        opr = arg[1]\n                        alias = arg[2]\n                        cls.add_clause_to_aggregate(clause, cls.query_payload[main_key], col, opr, alias)\n                    if main_key == 'group':\n                        col = arg[0]\n                        alias = arg[1]\n                        cls.add_clause_to_group(clause, cls.query_payload, col, alias)\n\n        return cls\n\n    @classmethod\n    def ALIAS(cls, main_key, *args):\n        \"\"\"\n        Adds \"alias\" to columns in payload info. Currently, adding clauses is supported at two\n        actions only - SELECT and AGGREGATE.\n\n        :param main_key: either \"return\" or \"aggregate\" or \"group\"\n        :param args: each arg is a tuple. The len of tuple depends upon main_key. If main_key is \"return\" i.e. SELECT,\n                     then each tuple will contain (col, alias). 
If main_key is \"aggregate\", then each tuple will contain\n                     (col, operation, alias) because col can be repeated in aggregate with different \"operations\".\n        :return:\n        :example:\n        PayloadBuilder().SELECT((\"name\", \"id\")).ALIAS('return', ('name', 'my_name'), ('id', 'my_id')).payload() returns\n            {\"return\": [{\"column\": \"name\", \"alias\": \"my_name\"},\n                        {\"column\": \"id\", \"alias\": \"my_id\"}]}\n\n        PayloadBuilder().SELECT((\"name\", [\"id\", \"reason\"])).ALIAS('return', ('name', 'my_name'), ('id', 'my_id')).payload() returns\n            {\"return\": [{\"alias\": \"my_name\", \"column\": \"name\"},\n                        {\n                              \"json\" : {\n                                          \"column\"     : \"id\",\n                                          \"properties\" : \"reason\"\n                                       },\n                              \"alias\" : \"my_id\"\n                        }\n                       ]\n            }\n\n        PayloadBuilder().AGGREGATE(([\"min\", \"values\"], [\"max\", \"values\"], [\"avg\", \"values\"])).ALIAS('aggregate',\n                                                           ('values', 'min', 'min_values'),\n                                                           ('values', 'max', 'max_values'),\n                                                           ('values', 'avg', 'avg_values')).payload() returns\n            {\"aggregate\": [{\"operation\": \"min\", \"column\": \"values\", \"alias\": \"min_values\"},\n                           {\"operation\": \"max\", \"column\": \"values\", \"alias\": \"max_values\"},\n                           {\"operation\": \"avg\", \"column\": \"values\", \"alias\": \"avg_values\"}]}\n\n        PayloadBuilder().AGGREGATE(([\"min\", [\"values\", \"rate\"]], [\"max\", [\"values\", \"rate\"]], [\"avg\", [\"values\", \"rate\"]])).\\\n        ALIAS('aggregate', 
('values', 'min', 'Minimum'), ('values', 'max', 'Maximum'), ('values', 'avg', 'Average')).payload() returns\n            {\n              \"aggregate\": [\n                {\n                  \"operation\": \"min\",\n                  \"json\"      : {\n                                    \"column\"     : \"values\",\n                                    \"properties\" : \"rate\"\n                                },\n                  \"alias\": \"Minimum\"\n                },\n                {\n                  \"operation\": \"max\",\n                  \"json\"      : {\n                                    \"column\"     : \"values\",\n                                    \"properties\" : \"rate\"\n                                },\n                  \"alias\": \"Maximum\"\n                },\n                {\n                  \"operation\": \"avg\",\n                  \"json\"      : {\n                                    \"column\"     : \"values\",\n                                    \"properties\" : \"rate\"\n                                },\n                  \"alias\": \"Average\"\n                }\n              ]\n            }\n        \"\"\"\n        return cls._add_clause('alias', main_key, args)\n\n    @classmethod\n    def FORMAT(cls, main_key, *args):\n        \"\"\"\n        Adds \"format\" to columns in payload info. Currently, adding clauses is supported at two\n        actions only - SELECT and AGGREGATE.\n\n        :param main_key: either \"return\" or \"aggregate\"\n        :param args: each arg is a tuple. The len of tuple depends upon main_key. If main_key is \"return\" i.e. SELECT,\n                     then each tuple will contain (col, format). 
If main_key is \"aggregate\", then each tuple will contain\n                     (col, operation, format) because col can be repeated in aggregate with different \"operations\".\n        :return:\n        :example:\n        PayloadBuilder().SELECT((\"reading\", \"user_ts\")).ALIAS('return', ('user_ts', 'timestamp')).\\\n            FORMAT('return', ('user_ts', \"YYYY-MM-DD HH24:MI:SS.MS\")).payload() returns\n            {\"return\": [\"reading\", {\"format\": \"YYYY-MM-DD HH24:MI:SS.MS\", \"column\": \"user_ts\", \"alias\": \"timestamp\"}]}\n        \"\"\"\n        return cls._add_clause('format', main_key, args)\n\n    @classmethod\n    def SELECT(cls, *args):\n        \"\"\"\n        Forms a json to return a list of columns.\n\n        :param args: list of args. Can be a single str, a single list for a json col, a tuple of str and/or json cols\n        :return:\n        \"\"\"\n        for arg in args:\n            if cls.verify_select(arg):\n                if 'return' not in cls.query_payload:\n                    cls.query_payload[\"return\"] = list()\n                if isinstance(arg, tuple):\n                    for a in arg:\n                        if isinstance(a, list):\n                            select = {\"json\": {'column': a[0], 'properties': a[1]}}\n                        elif isinstance(a, str):\n                            select = json.loads(a) if cls.is_json(a) else a\n                        else:\n                            continue\n                        cls.query_payload[\"return\"].append(select)\n                else:\n                    if isinstance(arg, list):\n                        select = {\"json\": {'column': arg[0], 'properties': arg[1]}}\n                    elif isinstance(arg, str):\n                        select = json.loads(arg) if cls.is_json(arg) else arg\n                    else:\n                        continue\n                    cls.query_payload[\"return\"].append(select)\n        return cls\n\n    
@classmethod\n    def FROM(cls, tbl_name):\n        cls.query_payload[\"table\"] = tbl_name\n        return cls\n\n    @classmethod\n    def DISTINCT(cls, cols):\n        if cols is None:\n            return cls\n        if not isinstance(cols, list):\n            return cls\n        if len(cols) == 0:\n            return cls\n        cls.query_payload[\"modifier\"] = \"distinct\"\n        cls.query_payload[\"return\"] = cols\n        return cls\n\n    @classmethod\n    def MODIFIER(cls, arg):\n        if arg is None:\n            return cls\n        if not isinstance(arg, list):\n            return cls\n        if len(arg) == 0:\n            return cls\n        cls.query_payload[\"modifier\"] = arg\n        return cls\n\n    @classmethod\n    def UPDATE_TABLE(cls, tbl_name):\n        return cls.FROM(tbl_name)\n\n    @classmethod\n    def COLS(cls, kwargs):\n        values = OrderedDict()\n        for key, value in kwargs.items():\n            values[key] = value\n        return values\n\n    @classmethod\n    def SET(cls, **kwargs):\n        if 'values' in cls.query_payload:\n            cls.query_payload[\"values\"].update(cls.COLS(kwargs))\n        else:\n            cls.query_payload[\"values\"] = cls.COLS(kwargs)\n        return cls\n\n    @classmethod\n    def INSERT(cls, **kwargs):\n        cls.query_payload.update(cls.COLS(kwargs))\n        return cls\n\n    @classmethod\n    def INSERT_INTO(cls, tbl_name):\n        return cls.FROM(tbl_name)\n\n    @classmethod\n    def DELETE(cls, tbl_name):\n        return cls.FROM(tbl_name)\n\n    @classmethod\n    def add_new_clause(cls, and_or, main, new):\n        \"\"\"\n        Recursively searches for the innermost and/or block, or cls.query_payload[\"where\"] if none, in \"main\" to add\n        the 'new' condition block under \"and_or\" key.\n\n        Args:\n            and_or: one of 'and', 'or'\n            main: Dict (cls.query_payload[\"where\"] or the innermost and/or subset of it) where\n                  
the new condition block is to be added\n            new: condition block to be added\n\n        Returns:\n        \"\"\"\n        if 'and' not in main:\n            if 'or' not in main:\n                main[and_or] = new\n            else:\n                cls.add_new_clause(and_or, main['or'], new)\n        else:\n            cls.add_new_clause(and_or, main['and'], new)\n\n    @classmethod\n    def WHERE(cls, arg, *args):\n        # Pass multiple arguments in a single tuple also. Useful when called from external process i.e. api, test.\n        args = (arg,) + args if not isinstance(arg, tuple) else arg\n        for arg in args:\n            condition = OrderedDict()\n            if cls.verify_condition(arg):\n                condition[\"column\"] = arg[0]\n                condition[\"condition\"] = arg[1]\n                # Note: append value KV pair only if 3 argument supplied\n                if len(arg) == 3:\n                    condition[\"value\"] = arg[2]\n                if 'where' not in cls.query_payload:\n                    cls.query_payload[\"where\"] = condition\n                else:\n                    cls.add_new_clause('and', cls.query_payload['where'], condition)\n        return cls\n\n    @classmethod\n    def AND_WHERE(cls, arg, *args):\n        # Pass multiple arguments in a single tuple also. Useful when called from external process i.e. 
api, test.\n        args = (arg,) + args if not isinstance(arg, tuple) else arg\n        for arg in args:\n            condition = OrderedDict()\n            if cls.verify_condition(arg):\n                condition[\"column\"] = arg[0]\n                condition[\"condition\"] = arg[1]\n                # Note: append value KV pair only if 3 argument supplied\n                if len(arg) == 3:\n                    condition[\"value\"] = arg[2]\n                if 'where' not in cls.query_payload:\n                    cls.query_payload[\"where\"] = condition\n                else:\n                    cls.add_new_clause('and', cls.query_payload['where'], condition)\n        return cls\n\n    @classmethod\n    def OR_WHERE(cls, arg, *args):\n        # Pass multiple arguments in a single tuple also. Useful when called from external process i.e. api, test.\n        args = (arg,) + args if not isinstance(arg, tuple) else arg\n        for arg in args:\n            condition = OrderedDict()\n            if cls.verify_condition(arg):\n                condition[\"column\"] = arg[0]\n                condition[\"condition\"] = arg[1]\n                # Note: append value KV pair only if 3 argument supplied\n                if len(arg) == 3:\n                    condition[\"value\"] = arg[2]\n                if 'where' not in cls.query_payload:\n                    cls.query_payload[\"where\"] = condition\n                else:\n                    cls.add_new_clause('or', cls.query_payload['where'], condition)\n        return cls\n\n    @classmethod\n    def GROUP_BY(cls, *args):\n        # TODO: Add dict format for args\n        cls.query_payload[\"group\"] = ', '.join(args)\n        return cls\n\n    @classmethod\n    def JOIN(cls, *args):\n        \"\"\"\n        Class method for JOIN. Use like this 1. PayloadBuilder().JOIN(\"table_name\", \"column_name\")\n                                        or   2. 
PayloadBuilder().JOIN(\"table_name\").\n        The first example assumes that we have a table_name and a column_name for the JOIN clause.\n        The second example assumes that we only have a table_name and its column matches\n         the column name given in the request payload.\n        Args:\n            *args ():  Tuple/s of table name and column name or only table name.\n\n        Returns:\n            The object of payload builder class.\n        \"\"\"\n        if len(args) == 2:\n            tbl_name = args[0]\n            col_name = args[1]\n\n            table_dict = {\n                \"table\": {\n                    \"name\": tbl_name,\n                    \"column\": col_name\n                }\n            }\n\n        elif len(args) == 1:\n            tbl_name = args[0]\n\n            table_dict = {\n                \"table\": {\n                    \"name\": tbl_name\n                }\n            }\n        else:\n            raise Exception(\"Expected at least table name with JOIN clause.\")\n\n        cls.query_payload[\"join\"] = table_dict\n        return cls\n\n    @classmethod\n    def ON(cls, *args):\n        \"\"\"\n            Class method for ON. Use like this PayloadBuilder().JOIN(\"table_name\", \"column_name\").\\\n                                                                ON(\"column_name\")\n            Used only with JOIN.\n            Args:\n                *args (): column name for ON clause.\n\n            Returns:\n                The object of payload builder class.\n        \"\"\"\n        if \"join\" not in cls.query_payload:\n            raise Exception(\"ON Clause used without using JOIN first.\")\n\n        if len(args) != 1:\n            raise Exception(\"Expected column name with ON clause.\")\n\n        col_name = args[0]\n        cls.query_payload[\"join\"][\"on\"] = col_name\n        return cls\n\n    @classmethod\n    def QUERY(cls, *args):\n        \"\"\"\n             Class method for QUERY. Used only with JOIN and ON.\n             Inserts a query payload inside cls.query_payload['join']['query']\n             Usage:\n              1. First make a query payload like this\n              qp = PayloadBuilder().SELECT((\"name\", \"id\")) \\\n                                   .ALIAS('return', ('name', 'my_name'), ('id', 'my_id'))\\\n                                   .chain_payload()\n              2. Then use PayloadBuilder().JOIN(\"t1\", \"t1_id\").ON(\"t1_id\").QUERY(qp)\n              Result:\n                   {\n                      \"join\": {\n                        \"table\": {\n                          \"name\": \"t1\",\n                          \"column\": \"t1_id\"\n                        },\n                        \"on\": \"t1_id\",\n                        \"query\": {\n                          \"return\": [\n                            {\n                              \"column\": \"name\",\n                              \"alias\": \"my_name\"\n                            },\n                            {\n                              \"column\": \"id\",\n                              \"alias\": \"my_id\"\n                            }\n                          ]\n                        }\n                     }\n                   }\n\n            Args:\n                *args (): the query payload to insert into parent payload.\n\n            Returns:\n                The object of payload builder class.\n        \"\"\"\n\n        if \"join\" not in cls.query_payload:\n            raise Exception(\"Query used without JOIN clause.\")\n\n        if 'on' not in cls.query_payload['join']:\n            raise Exception(\"Query used without ON clause.\")\n\n        if len(args) != 1:\n            raise Exception(\"Either no or invalid query payload parameter given.\")\n\n        payload = args[0]\n        if not isinstance(payload, OrderedDict):\n            raise Exception(\"The query payload parameter must be an OrderedDict.\")\n\n        if 'query' in cls.query_payload['join']:\n            # Used when we have to perform nested join.\n            # This will update the already existent query field.\n            cls.query_payload['join']['query'].update(payload)\n        else:\n            # Used when we have to perform only one join.\n            cls.query_payload['join']['query'] = payload\n        return cls\n\n    @classmethod\n    def AGGREGATE(cls, arg, *args):\n        \"\"\"\n        Forms a json to return a dict (for a single col) or a list of dicts required in an aggregate clause.\n\n        :param args: Can be a single list or a tuple of lists. The list is given in structure [opr, col].\n                     col can be a str or another list for json col. For json col, the structure is [col, properties].\n        :return:\n        :example:\n        PayloadBuilder().AGGREGATE(([\"min\", \"values\"], [\"max\", \"values\"], [\"avg\", \"values\"])).ALIAS('aggregate',\n                                                           ('values', 'min', 'min_values'),\n                                                           ('values', 'max', 'max_values'),\n                                                           ('values', 'avg', 'avg_values')).payload() returns\n            {\"aggregate\": [{\"operation\": \"min\", \"column\": \"values\", \"alias\": \"min_values\"},\n                           {\"operation\": \"max\", \"column\": \"values\", \"alias\": \"max_values\"},\n                           {\"operation\": \"avg\", \"column\": \"values\", \"alias\": \"avg_values\"}]}\n\n        PayloadBuilder().AGGREGATE([\"all\"])\n        \"\"\"\n\n        # Pass multiple arguments in a single tuple also. Useful when called from external process i.e. 
api, test.\n        args = (arg,) + args if not isinstance(arg, tuple) else arg\n        for arg in args:\n            aggregate = OrderedDict()\n            if cls.verify_aggregation(arg):\n                aggregate[\"operation\"] = arg[0]\n                if len(arg) >= 2:\n                    if isinstance(arg[1], list):\n                        aggregate[\"json\"] = {'column': arg[1][0], 'properties': arg[1][1]}\n                    elif isinstance(arg[1], str):\n                        aggregate[\"column\"] = arg[1]\n                    else:\n                        continue\n                if 'aggregate' in cls.query_payload:\n                    if not isinstance(cls.query_payload['aggregate'], list):\n                        cls.query_payload['aggregate'] = [cls.query_payload.get('aggregate')]\n                    cls.query_payload['aggregate'].append(aggregate)\n                else:\n                    cls.query_payload[\"aggregate\"] = aggregate\n        return cls\n\n    @classmethod\n    def HAVING(cls):\n        raise NotImplementedError(\"To be implemented\")\n\n    @classmethod\n    def LIMIT(cls, arg):\n        if isinstance(arg, numbers.Real):\n            cls.query_payload[\"limit\"] = arg\n        return cls\n\n    @classmethod\n    def OFFSET(cls, arg):\n        if isinstance(arg, numbers.Real):\n            cls.query_payload[\"skip\"] = arg\n        return cls\n\n    SKIP = OFFSET\n\n    @classmethod\n    def ORDER_BY(cls, arg, *args):\n        # Pass multiple arguments in a single tuple also. Useful when called from external process i.e. 
api, test.\n        args = (arg,) + args if not isinstance(arg, tuple) else arg\n        for arg in args:\n            sort = OrderedDict()\n            if cls.verify_orderby(arg):\n                sort[\"column\"] = arg[0]\n                sort[\"direction\"] = arg[1]\n                if 'sort' in cls.query_payload:\n                    if not isinstance(cls.query_payload['sort'], list):\n                        cls.query_payload['sort'] = [cls.query_payload.get('sort')]\n                    cls.query_payload['sort'].append(sort)\n                else:\n                    cls.query_payload[\"sort\"] = sort\n        return cls\n\n    @classmethod\n    def EXPR(cls, arg, *args):\n        args = (arg,) + args if not isinstance(arg, tuple) else arg\n\n        for arg in args:\n            expr = OrderedDict()\n            expr[\"column\"] = arg[0]\n            expr[\"operator\"] = arg[1]\n            expr[\"value\"] = arg[2]\n\n            if 'expressions' in cls.query_payload:\n                cls.query_payload['expressions'].append(expr)\n            else:\n                cls.query_payload['expressions'] = [expr]\n        return cls\n\n    @classmethod\n    def JSON_PROPERTY(cls, *args):\n        \"\"\"\n        Forms a json to return a list of dicts required in a json_properties clause.\n\n        :param args: Can be a single tuple or a sequence of tuples. 
Each tuple is given in structure [col, path, value].\n                     col and value should be a str and path should be a list.\n        :return:\n        :example:\n        PayloadBuilder().JSON_PROPERTY((\"data\", [ \"url\", \"value\" ], \"new value\")).payload() returns\n            {\n                \"json_properties\" : [\n                            {\n                                \"column\" : \"data\",\n                                \"path\"   : [ \"url\", \"value\" ],\n                                \"value\"  : \"new value\"\n                            }\n                            ]\n            }\n        \"\"\"\n\n        # Pass multiple arguments in a single tuple also. Useful when called from external process i.e. api, test.\n        for arg in args:\n            json_property = OrderedDict()\n            if cls.verify_json_property(arg):\n                json_property[\"column\"] = arg[0]\n                json_property[\"path\"] = arg[1]\n                json_property[\"value\"] = arg[2]\n                if 'json_properties' in cls.query_payload:\n                    if not isinstance(cls.query_payload['json_properties'], list):\n                        cls.query_payload['json_properties'] = [cls.query_payload.get('json_properties')]\n                    cls.query_payload['json_properties'].append(json_property)\n                else:\n                    cls.query_payload[\"json_properties\"] = [json_property]\n        return cls\n\n    @classmethod\n    def TIMEBUCKET(cls, timestamp, size=\"1\", fmt=None, alias=None):\n        \"\"\"\n        Forms a json to return a dict of timebucket col\n\n        :param timestamp: timestamp col\n        :param size: bucket size in seconds, defaults to \"1\"\n        :param fmt: format string, optional\n        :param alias: alias, optional\n        :return:\n        :example:\n        PayloadBuilder().TIMEBUCKET(\"user_ts\", \"5\").payload() returns\n            \"timebucket\" :  {\n                
               \"timestamp\" : \"user_ts\",\n                               \"size\"      : \"5\"\n                        }\n\n        PayloadBuilder().TIMEBUCKET(\"user_ts\", \"5\", format=\"DD-MM-YYYYY HH24:MI:SS\").payload() returns\n            \"timebucket\" :  {\n                               \"timestamp\" : \"user_ts\",\n                               \"size\"      : \"5\",\n                               \"format\"    : \"DD-MM-YYYYY HH24:MI:SS\"\n                        }\n\n        PayloadBuilder().TIMEBUCKET(\"user_ts\", \"5\", format=\"DD-MM-YYYYY HH24:MI:SS\", alias=\"bucket\").payload() returns\n            \"timebucket\" :  {\n                               \"timestamp\" : \"user_ts\",\n                               \"size\"      : \"5\",\n                               \"format\"    : \"DD-MM-YYYYY HH24:MI:SS\",\n                               \"alias\"     : \"bucket\"\n                        }\n        \"\"\"\n\n        timebucket = OrderedDict()\n        timebucket[\"timestamp\"] = timestamp\n        timebucket[\"size\"] = size\n        if fmt is not None:\n            timebucket[\"format\"] = fmt\n        if alias is not None:\n            timebucket[\"alias\"] = alias\n        cls.query_payload[\"timebucket\"] = timebucket\n\n        return cls\n\n    @classmethod\n    def payload(cls):\n        return json.dumps(cls.query_payload, sort_keys=False)\n\n    @classmethod\n    def chain_payload(cls):\n        \"\"\"\n        Sometimes, we may want to create payload incremently, based upon some conditions, this method will come\n        handy in such Use cases.\n        \"\"\"\n        return cls.query_payload\n\n    @classmethod\n    def query_params(cls):\n        where = cls.query_payload['where']\n        query_params = OrderedDict({where['column']: where['value']})\n        for key, value in where.items():\n            if key == 'and':\n                query_params.update({value['column']: value['value']})\n        return 
urllib.parse.urlencode(query_params)\n"
  },
  {
    "path": "python/fledge/common/storage_client/storage_client.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Storage layer python client\n\"\"\"\n\n__author__ = \"Praveen Garg, Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nimport aiohttp\nimport http.client\nimport json\nimport time\nfrom abc import ABC, abstractmethod\n\nfrom fledge.common import logger\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.common.storage_client.exceptions import *\nfrom fledge.common.storage_client.utils import Utils\n\n_LOGGER = logger.setup(__name__)\n\n\nclass AbstractStorage(ABC):\n    \"\"\" abstract class for storage client \"\"\"\n\n    def __init__(self):\n        super().__init__()\n\n    @abstractmethod\n    def connect(self):\n        pass\n\n    @abstractmethod\n    def disconnect(self):\n        pass\n\n    # Allow with context\n    def __enter__(self):\n        return self.connect()\n\n    def __exit__(self, *args):\n        self.disconnect()\n\n\nclass StorageClientAsync(AbstractStorage):\n    def __init__(self, core_management_host, core_management_port, svc=None):\n        try:\n            if svc:\n                self.service = svc\n            else:\n                self.connect(core_management_host, core_management_port)\n\n            self.base_url = '{}:{}'.format(self.service._address, self.service._port)\n            self.management_api_url = '{}:{}'.format(self.service._address, self.service._management_port)\n        except Exception:\n            raise InvalidServiceInstance\n\n    @property\n    def base_url(self):\n        return self.__base_url\n\n    @base_url.setter\n    def base_url(self, url):\n        self.__base_url = url\n\n    @property\n    def service(self):\n        return self.__service\n\n    @service.setter\n    def service(self, svc):\n        if not isinstance(svc, ServiceRecord):\n            w_msg = 'Storage should 
be a valid Fledge micro-service instance'\n            _LOGGER.warning(w_msg)\n            raise InvalidServiceInstance\n\n        if not getattr(svc, \"_type\") == \"Storage\":\n            w_msg = 'Storage should be a valid *Storage* micro-service instance'\n            _LOGGER.warning(w_msg)\n            raise InvalidServiceInstance\n\n        self.__service = svc\n\n    def _get_storage_service(self, host, port):\n        \"\"\" get Storage service \"\"\"\n\n        conn = http.client.HTTPConnection(\"{0}:{1}\".format(host, port))\n        # TODO: need to set http / https based on service protocol\n\n        conn.request('GET', url='/fledge/service?name=Fledge%20Storage')\n        r = conn.getresponse()\n\n        if r.status in range(400, 500):\n            _LOGGER.error(\"Get Service: Client error code: %d, %s\", r.status, r.reason)\n        if r.status in range(500, 600):\n            _LOGGER.error(\"Get Service: Server error code: %d, %s\", r.status, r.reason)\n\n        res = r.read().decode()\n        conn.close()\n        response = json.loads(res)\n        svc = response[\"services\"][0]\n        return svc\n\n    def connect(self, core_management_host, core_management_port):\n        svc = self._get_storage_service(host=core_management_host, port=core_management_port)\n        if len(svc) == 0:\n            raise InvalidServiceInstance\n        self.service = ServiceRecord(s_id=svc[\"id\"], s_name=svc[\"name\"], s_type=svc[\"type\"], s_port=svc[\"service_port\"],\n                                     m_port=svc[\"management_port\"], s_address=svc[\"address\"],\n                                     s_protocol=svc[\"protocol\"])\n\n        return self\n\n    def disconnect(self):\n        pass\n\n    # FIXME: As per JIRA-615 strict=false at python side (interim solution)\n    # fix is required at storage layer (error message with escape sequence using a single quote)\n    async def insert_into_tbl(self, tbl_name, data):\n        \"\"\" insert json payload 
into given table\n\n        :param tbl_name:\n        :param data: JSON payload\n        :return:\n\n        :Example:\n            curl -X POST http://0.0.0.0:8080/storage/table/statistics_history -d @payload2.json\n            @payload2.json content:\n\n            {\n                \"key\" : \"SENT_test\",\n                \"history_ts\" : \"now()\",\n                \"value\" : 1\n            }\n        \"\"\"\n        if not tbl_name:\n            raise ValueError(\"Table name is missing\")\n\n        if not data:\n            raise ValueError(\"Data to insert is missing\")\n\n        if not Utils.is_json(data):\n            raise TypeError(\"Provided data to insert must be a valid JSON\")\n\n        post_url = '/storage/table/{tbl_name}'.format(tbl_name=tbl_name)\n        url = 'http://' + self.base_url + post_url\n        async with aiohttp.ClientSession() as session:\n            async with session.post(url, data=data) as resp:\n                status_code = resp.status\n                jdoc = await resp.json()\n                if status_code not in range(200, 209):\n                    _LOGGER.info(\"POST %s, with payload: %s\", post_url, data)\n                    _LOGGER.error(\"Error code: %d, reason: %s, details: %s\", resp.status, resp.reason, jdoc)\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n\n        return jdoc\n\n    async def update_tbl(self, tbl_name, data):\n        \"\"\" update json payload for specified condition into given table\n\n        :param tbl_name:\n        :param data: JSON payload\n        :return:\n\n        :Example:\n            curl -X PUT http://0.0.0.0:8080/storage/table/statistics_history -d @payload3.json\n            @payload3.json content:\n            {\n                \"condition\" : {\n                    \"column\" : \"key\",\n                    \"condition\" : \"=\",\n                    \"value\" : \"SENT_test\"\n                },\n                \"values\" 
: {\n                    \"value\" : 44444\n                }\n            }\n        \"\"\"\n        if not tbl_name:\n            raise ValueError(\"Table name is missing\")\n\n        if not data:\n            raise ValueError(\"Data to update is missing\")\n\n        if not Utils.is_json(data):\n            raise TypeError(\"Provided data to update must be a valid JSON\")\n\n        put_url = '/storage/table/{tbl_name}'.format(tbl_name=tbl_name)\n\n        url = 'http://' + self.base_url + put_url\n        async with aiohttp.ClientSession() as session:\n            async with session.put(url, data=data) as resp:\n                status_code = resp.status\n                jdoc = await resp.json()\n                if status_code not in range(200, 209):\n                    _LOGGER.info(\"PUT %s, with payload: %s\", put_url, data)\n                    _LOGGER.error(\"Error code: %d, reason: %s, details: %s\", resp.status, resp.reason, jdoc)\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n\n        return jdoc\n\n    async def delete_from_tbl(self, tbl_name, condition=None):\n        \"\"\" Delete for specified condition from given table\n\n        :param tbl_name:\n        :param condition: JSON payload\n        :return:\n\n        :Example:\n            curl -X DELETE http://0.0.0.0:8080/storage/table/statistics_history -d @payload_del.json\n            @payload_del.json content:\n            \"condition\" : {\n                    \"column\" : \"key\",\n                    \"condition\" : \"=\",\n                    \"value\" : \"SENT_test\"\n            }\n        \"\"\"\n\n        if not tbl_name:\n            raise ValueError(\"Table name is missing\")\n\n        del_url = '/storage/table/{tbl_name}'.format(tbl_name=tbl_name)\n\n        if condition and (not Utils.is_json(condition)):\n            raise TypeError(\"condition payload must be a valid JSON\")\n\n        url = 'http://' + self.base_url + del_url\n   
     async with aiohttp.ClientSession() as session:\n            async with session.delete(url, data=condition) as resp:\n                status_code = resp.status\n                jdoc = await resp.json()\n                if status_code not in range(200, 209):\n                    _LOGGER.info(\"DELETE %s, with payload: %s\", del_url, condition if condition else '')\n                    _LOGGER.error(\"Error code: %d, reason: %s, details: %s\", resp.status, resp.reason, jdoc)\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n\n        return jdoc\n\n    async def query_tbl(self, tbl_name, query=None):\n        \"\"\" Simple SELECT query for the specified table with optional query params\n\n        :param tbl_name:\n        :param query: query params in format k1=v1&k2=v2\n        :return:\n\n        :Example:\n            curl -X GET http://0.0.0.0:8080/storage/table/statistics_history\n            curl -X GET http://0.0.0.0:8080/storage/table/statistics_history?key=PURGE\n        \"\"\"\n        if not tbl_name:\n            raise ValueError(\"Table name is missing\")\n\n        get_url = '/storage/table/{tbl_name}'.format(tbl_name=tbl_name)\n\n        if query:  # else SELECT * FROM <tbl_name>\n            get_url += '?{}'.format(query)\n\n        url = 'http://' + self.base_url + get_url\n        async with aiohttp.ClientSession() as session:\n            async with session.get(url) as resp:\n                status_code = resp.status\n                jdoc = await resp.json()\n                if status_code not in range(200, 209):\n                    _LOGGER.info(\"GET %s\", get_url)\n                    _LOGGER.error(\"Error code: %d, reason: %s, details: %s\", resp.status, resp.reason, jdoc)\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n\n        return jdoc\n\n    async def query_tbl_with_payload(self, tbl_name, query_payload):\n        \"\"\" Complex SELECT 
query for the specified table with a payload\n\n        :param tbl_name:\n        :param query_payload: payload in valid JSON format\n        :return:\n\n        :Example:\n            curl -X PUT http://0.0.0.0:8080/storage/table/statistics_history/query -d @payload.json\n            @payload.json content:\n            \"where\" : {\n                    \"column\" : \"key\",\n                    \"condition\" : \"=\",\n                    \"value\" : \"SENT_test\"\n            }\n        \"\"\"\n        if not tbl_name:\n            raise ValueError(\"Table name is missing\")\n\n        if not query_payload:\n            raise ValueError(\"Query payload is missing\")\n\n        if not Utils.is_json(query_payload):\n            raise TypeError(\"Query payload must be a valid JSON\")\n\n        put_url = '/storage/table/{tbl_name}/query'.format(tbl_name=tbl_name)\n\n        url = 'http://' + self.base_url + put_url\n\n        async with aiohttp.ClientSession() as session:\n            async with session.put(url, data=query_payload) as resp:\n                status_code = resp.status\n                jdoc = await resp.json()\n                if status_code not in range(200, 209):\n                    _LOGGER.info(\"PUT %s, with query payload: %s\", put_url, query_payload)\n                    _LOGGER.error(\"Error code: %d, reason: %s, details: %s\", resp.status, resp.reason, jdoc)\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n\n        return jdoc\n\n    async def post_snapshot(self, tbl_name):\n        \"\"\"Create a table snapshot\n\n        :param tbl_name:\n        :return:\n\n        :Example:\n            curl -X POST http://0.0.0.0:8080/storage/table/configuration/snapshot\n        \"\"\"\n        post_url = '/storage/table/{tbl_name}/snapshot'.format(tbl_name=tbl_name)\n        data = {\"id\": str(int(time.time()))}\n\n        url = 'http://' + self.base_url + post_url\n        async with 
aiohttp.ClientSession() as session:\n            async with session.post(url, data=json.dumps(data)) as resp:\n                status_code = resp.status\n                jdoc = await resp.text()\n                if status_code not in range(200, 209):\n                    _LOGGER.info(\"POST %s\", post_url)\n                    _LOGGER.error(\"Error code: %d, reason: %s, details: %s\", resp.status, resp.reason, jdoc)\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n        return json.loads(jdoc)\n\n    async def put_snapshot(self, tbl_name, snapshot_id):\n        \"\"\"Restore a table snapshot\n\n        :param tbl_name:\n        :param snapshot_id:\n        :return:\n\n        :Example:\n            curl -X PUT http://0.0.0.0:8080/storage/table/configuration/snapshot/cea17db8-6ccc-11e7-907b-a6006ad3dba0\n        \"\"\"\n        put_url = '/storage/table/{tbl_name}/snapshot/{id}'.format(tbl_name=tbl_name, id=snapshot_id)\n\n        url = 'http://' + self.base_url + put_url\n        async with aiohttp.ClientSession() as session:\n            async with session.put(url) as resp:\n                status_code = resp.status\n                jdoc = await resp.text()\n                if status_code not in range(200, 209):\n                    _LOGGER.info(\"PUT %s\", put_url)\n                    _LOGGER.error(\"Error code: %d, reason: %s, details: %s\", resp.status, resp.reason, jdoc)\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n        return json.loads(jdoc)\n\n    async def delete_snapshot(self, tbl_name, snapshot_id):\n        \"\"\"Delete a table snapshot\n\n        :param tbl_name:\n        :param snapshot_id:\n        :return:\n\n        :Example:\n            curl -X DELETE http://0.0.0.0:8080/storage/table/configuration/snapshot/cea17db8-6ccc-11e7-907b-a6006ad3dba0\n        \"\"\"\n        delete_url = 
'/storage/table/{tbl_name}/snapshot/{id}'.format(tbl_name=tbl_name, id=snapshot_id)\n\n        url = 'http://' + self.base_url + delete_url\n        async with aiohttp.ClientSession() as session:\n            async with session.delete(url) as resp:\n                status_code = resp.status\n                jdoc = await resp.text()\n                if status_code not in range(200, 209):\n                    _LOGGER.info(\"DELETE %s\", delete_url)\n                    _LOGGER.error(\"Error code: %d, reason: %s, details: %s\", resp.status, resp.reason, jdoc)\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n        return json.loads(jdoc)\n\n    async def get_snapshot(self, tbl_name):\n        \"\"\"Get a table snapshot\n\n        :param tbl_name:\n        :return:\n\n        :Example:\n            curl -X GET http://0.0.0.0:8080/storage/table/configuration/snapshot\n        \"\"\"\n        get_url = '/storage/table/{tbl_name}/snapshot'.format(tbl_name=tbl_name)\n\n        url = 'http://' + self.base_url + get_url\n        async with aiohttp.ClientSession() as session:\n            async with session.get(url) as resp:\n                status_code = resp.status\n                jdoc = await resp.text()\n                if status_code not in range(200, 209):\n                    _LOGGER.info(\"GET %s\", get_url)\n                    _LOGGER.error(\"Error code: %d, reason: %s, details: %s\", resp.status, resp.reason, jdoc)\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n        return json.loads(jdoc)\n\n\nclass ReadingsStorageClientAsync(StorageClientAsync):\n    \"\"\" Readings table operations \"\"\"\n    _base_url = \"\"\n\n    def __init__(self, core_mgt_host, core_mgt_port, svc=None):\n        super().__init__(core_management_host=core_mgt_host, core_management_port=core_mgt_port, svc=svc)\n        self.__class__._base_url = self.base_url\n\n    async def append(self, 
readings):\n        \"\"\"\n        :param readings:\n        :return:\n\n        :Example:\n            curl -X POST http://0.0.0.0:8080/storage/reading -d @payload.json\n\n            {\n              \"readings\" : [\n                {\n                  \"asset_code\": \"MyAsset\",\n                  \"reading\" : { \"rate\" : 18.4 },\n                  \"user_ts\" : \"2017-09-21 15:00:09.025655\"\n                },\n                {\n                  \"asset_code\": \"MyAsset\",\n                  \"reading\" : { \"rate\" : 45.1 },\n                  \"user_ts\" : \"2017-09-21 15:03:09.025655\"\n                }\n              ]\n            }\n\n        \"\"\"\n\n        if not readings:\n            raise ValueError(\"Readings payload is missing\")\n\n        if not Utils.is_json(readings):\n            raise TypeError(\"Readings payload must be a valid JSON\")\n\n        url = 'http://' + self._base_url + '/storage/reading'\n        async with aiohttp.ClientSession() as session:\n            async with session.post(url, data=readings) as resp:\n                status_code = resp.status\n                jdoc = await resp.json()\n                if status_code not in range(200, 209):\n                    _LOGGER.error(\"POST url %s with payload: %s, Error code: %d, reason: %s, details: %s\",\n                                  '/storage/reading', readings, resp.status, resp.reason, jdoc)\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n\n        return jdoc\n\n    async def fetch(self, reading_id, count):\n        \"\"\"\n\n        :param reading_id: the first reading ID in the block that is retrieved\n        :param count: the number of readings to return, if available\n        :return:\n        :Example:\n            curl -X GET http://0.0.0.0:8080/storage/reading?id=2&count=3\n\n        \"\"\"\n\n        if reading_id is None:\n            raise ValueError(\"first reading id to retrieve the readings block is 
required\")\n\n        if count is None:\n            raise ValueError(\"count is required to retrieve the readings block\")\n\n        try:\n            count = int(count)\n        except ValueError:\n            raise\n\n        get_url = '/storage/reading?id={}&count={}'.format(reading_id, count)\n        url = 'http://' + self._base_url + get_url\n        async with aiohttp.ClientSession() as session:\n            async with session.get(url) as resp:\n                status_code = resp.status\n                jdoc = await resp.json()\n                if status_code not in range(200, 209):\n                    _LOGGER.error(\"GET url: %s, Error code: %d, reason: %s, details: %s\", url, resp.status,\n                                  resp.reason, jdoc)\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n\n        return jdoc\n\n    async def query(self, query_payload):\n        \"\"\"\n\n        :param query_payload:\n        :return:\n        :Example:\n            curl -X PUT http://0.0.0.0:8080/storage/reading/query -d @payload.json\n\n            @payload.json content:\n            {\n              \"where\" : {\n                \"column\" : \"asset_code\",\n                \"condition\" : \"=\",\n                \"value\" : \"MyAsset\"\n                }\n            }\n        \"\"\"\n\n        if not query_payload:\n            raise ValueError(\"Query payload is missing\")\n\n        if not Utils.is_json(query_payload):\n            raise TypeError(\"Query payload must be a valid JSON\")\n\n        url = 'http://' + self._base_url + '/storage/reading/query'\n        async with aiohttp.ClientSession() as session:\n            async with session.put(url, data=query_payload) as resp:\n                status_code = resp.status\n                jdoc = await resp.json()\n                if status_code not in range(200, 209):\n                    _LOGGER.error(\"PUT url %s with query payload: %s, Error code: %d, 
reason: %s, details: %s\",\n                                  '/storage/reading/query', query_payload, resp.status, resp.reason, jdoc)\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n\n        return jdoc\n\n    async def purge(self, age=None, sent_id=0, size=None, flag=None, asset=None):\n        \"\"\" Purge readings based on the age of the readings\n\n        :param age: the maximum age of data to retain, expressed in hours\n        :param sent_id: the id of the last reading to be sent out of Fledge\n        :param size: the maximum size of data to retain, expressed in Kbytes\n        :param flag: define what to do about unsent readings. Valid options are retain or purge\n        :param asset: defien the asset to purge. Currently no other options are valid with this option\n        :return: a JSON with the number of readings removed, the number of unsent readings removed\n            and the number of readings that remain\n        :Example:\n            curl -X PUT \"http://0.0.0.0:<storage_service_port>/storage/reading/purge?age=<age>&sent=<reading id>&flags=<flags>\"\n            curl -X PUT \"http://0.0.0.0:<storage_service_port>/storage/reading/purge?age=24&sent=2&flags=PURGE\"\n            curl -X PUT \"http://0.0.0.0:<storage_service_port>/storage/reading/purge?size=1024&sent=0&flags=PURGE\"\n        \"\"\"\n\n        valid_flags = ['retainany', 'retainall', 'purge']\n\n        if flag and flag.lower() not in valid_flags:\n            raise InvalidReadingsPurgeFlagParameters\n\n        if age and size:\n            raise PurgeOnlyOneOfAgeAndSize\n\n        if not age and not size and asset is None:\n            raise PurgeOneOfAgeAssetAndSize\n\n        if asset is not None and age:\n            raise PurgeOneOfAgeAndAsset\n\n        if asset is not None and size:\n            raise PurgeOneOfSizeAndAsset\n\n        # age should be int\n        # size should be int\n        # sent_id should again be int\n    
     try:\n            if age is not None:\n                _age = int(age)\n\n            if size is not None:\n                _size = int(size)\n\n            _sent_id = int(sent_id)\n        except ValueError:\n            raise\n\n        if age:\n            put_url = '/storage/reading/purge?age={}&sent={}'.format(_age, _sent_id)\n        if size:\n            put_url = '/storage/reading/purge?size={}&sent={}'.format(_size, _sent_id)\n        if flag:\n            put_url += \"&flags={}\".format(flag.lower())\n        if asset is not None:\n            import urllib.parse\n            put_url = '/storage/reading/purge?asset={}'.format(urllib.parse.quote(asset))\n\n        url = 'http://' + self._base_url + put_url\n        async with aiohttp.ClientSession() as session:\n            async with session.put(url, data=None) as resp:\n                status_code = resp.status\n                try:\n                    jdoc = await resp.json()\n                    if status_code not in range(200, 209):\n                        _LOGGER.error(\"PUT url %s, Error code: %d, reason: %s, details: %s\", put_url, resp.status,\n                                      resp.reason, jdoc)\n                        raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n                except StorageServerError:\n                    # Propagate the storage error instead of swallowing it below\n                    raise\n                except ValueError as err:\n                    jdoc = None\n                    _LOGGER.error(\"Failed to parse JSON data returned by purge from the storage readings plugin: %s\", str(err))\n                except Exception as ex:\n                    jdoc = None\n                    _LOGGER.error(\"Purge readings failed: %s\", str(ex))\n        return jdoc\n"
  },
  {
    "path": "python/fledge/common/storage_client/utils.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Storage layer python client utility methods\n\"\"\"\n\n__author__ = \"Praveen Garg\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n# TODO: add utils method here to keep stuff DRY\n\nimport json\n\n\nclass Utils(object):\n\n    @staticmethod\n    def is_json(payload):\n        try:\n            json.loads(payload)\n        except (TypeError, ValueError):  # JSONDecodeError is a subclass of ValueError\n            return False\n        return True\n"
  },
  {
    "path": "python/fledge/common/utils.py",
"content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Common utilities\"\"\"\n\nimport asyncio\nimport functools\nimport datetime\n\n__author__ = \"Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nimport sys\n\n\ndef check_reserved(string):\n    \"\"\"\n    RFC 2396 Uniform Resource Identifiers (URI): Generic Syntax lists\n    the following reserved characters.\n\n    reserved    = \";\" | \"/\" | \"?\" | \":\" | \"@\" | \"&\" | \"=\" | \"+\" |\n                  \"$\" | \",\"\n\n    Hence certain inputs, e.g. service name, configuration key etc., which form part of a URL, should not\n    contain any of the above reserved characters.\n\n    :param string: the string to check\n    :return: True if the string contains none of the reserved characters, False otherwise\n    \"\"\"\n    reserved = \";\" + \"/\" + \"?\" + \":\" + \"@\" + \"&\" + \"=\" + \"+\" + \"$\" + \",\" + \"{\" + \"}\"\n    if string is None or not isinstance(string, str) or string == \"\":\n        return False\n    for s in string:\n        if s in reserved:\n            return False\n    return True\n\n\ndef is_valid_identifier(string):\n    \"\"\"\n    Check if the given string is a valid identifier which doesn't have any disallowed characters.\n\n    :param string: The string to check.\n    :return: (True, \"\") if the string is a valid identifier, (False, offending character) otherwise.\n    \"\"\"\n    disallowed_characters = [\"\\\\\"]\n    if string is None or not isinstance(string, str) or string == \"\":\n        return False, \"\"\n    for ch in disallowed_characters:\n        if ch in string:    # check if disallowed char exists anywhere\n            return False, ch\n    return True, \"\"\n\n\ndef check_fledge_reserved(string):\n    reserved = [\n        'fledge',\n        'general',\n        'advanced',\n        'notifications',\n        'north',\n        'south',\n        'filter',\n        'notify',\n        'rule',\n        
'delivery',\n        'utilities'\n    ]\n    if string is None or not isinstance(string, str) or string == \"\":\n        return False\n    if string.lower() in reserved:\n        return False\n    return True\n\n\ndef local_timestamp():\n    \"\"\"\n    :return: str - current time stamp with microseconds and machine timezone info\n    :example '2018-05-08 14:06:40.517313+05:30'\n    \"\"\"\n    return str(datetime.datetime.now(datetime.timezone.utc).astimezone())\n\n\ndef add_functions_as_methods(functions):\n    \"\"\" add_functions_as_methods - add the given functions to a class (to allow multi-file definition)\n        Type: class decorator\n        Arguments:\n            functions: list of functions (which expect a \"self\" argument) to be added to class namespace\n    \"\"\"\n\n    def decorator(Class):\n        for function in functions:\n            setattr(Class, function.__name__, function)\n        return Class\n    return decorator\n\n\ndef eprint(*args, **kwargs):\n    \"\"\" eprint -- convenience print function that prints to stderr \"\"\"\n    print(*args, **kwargs, file=sys.stderr)\n\n\ndef read_os_release():\n    \"\"\" General information identifying the operating system \"\"\"\n    import ast\n    import re\n    os_details = {}\n    with open('/etc/os-release', encoding=\"utf-8\") as f:\n        for line_number, line in enumerate(f, start=1):\n            line = line.rstrip()\n            if not line or line.startswith('#'):\n                continue\n            m = re.match(r'([A-Z][A-Z_0-9]+)=(.*)', line)\n            if m:\n                name, val = m.groups()\n                if val and val[0] in '\"\\'':\n                    val = ast.literal_eval(val)\n                os_details.update({name: val})\n    return os_details\n\n\ndef is_redhat_based():\n    \"\"\"\n        Check whether the operating system belongs to the Red Hat family.\n        Examples:\n            a) For an operating system with \"ID=centos\", an assignment of 
\"ID_LIKE=\"rhel fedora\"\" is appropriate\n            b) For an operating system with \"ID=ubuntu/raspbian\", an assignment of \"ID_LIKE=debian\" is appropriate.\n    \"\"\"\n    os_release = read_os_release()\n    id_like = os_release.get('ID_LIKE')\n    if id_like is not None and any(x in id_like.lower() for x in ['centos', 'rhel', 'redhat', 'fedora']):\n        return True\n    return False\n\n\ndef get_open_ssl_version(version_string=True):\n    \"\"\" Open SSL version info\n\n    Args:\n        version_string\n\n    Returns:\n        When version_string is True - The version string of the OpenSSL library loaded by the interpreter\n        When version_string is False - A tuple of five integers representing version information about the OpenSSL library\n    \"\"\"\n    import ssl\n    return ssl.OPENSSL_VERSION if version_string else ssl.OPENSSL_VERSION_INFO\n\n\ndef make_async(fn):\n    \"\"\" turns a sync function to async function using threads \"\"\"\n    from concurrent.futures import ThreadPoolExecutor\n    pool = ThreadPoolExecutor()\n\n    @functools.wraps(fn)\n    def wrapper(*args, **kwargs):\n        future = pool.submit(fn, *args, **kwargs)\n        return asyncio.wrap_future(future)  # make it awaitable\n\n    return wrapper\n\n\ndef dict_difference(dict1, dict2):\n    \"\"\" Compare two dictionaries and return their difference \"\"\"\n    diff = {}\n\n    # Check keys in dict1 not in dict2\n    for key in dict1:\n        if key not in dict2:\n            diff[key] = dict1[key]\n        else:\n            # Recursively compare nested dictionaries\n            if isinstance(dict1[key], dict) and isinstance(dict2[key], dict):\n                nested_diff = dict_difference(dict1[key], dict2[key])\n                if nested_diff:\n                    diff[key] = nested_diff\n            elif dict1[key] != dict2[key]:\n                diff[key] = dict1[key]\n\n    # Check keys in dict2 not in dict1\n    for key in dict2:\n        if key not in 
dict1:\n            diff[key] = dict2[key]\n        else:\n            # Recursively compare nested dictionaries\n            if isinstance(dict1[key], dict) and isinstance(dict2[key], dict):\n                nested_diff = dict_difference(dict1[key], dict2[key])\n                if nested_diff:\n                    diff[key] = nested_diff\n            elif dict1[key] != dict2[key]:\n                diff[key] = dict2[key]\n    return diff\n\n\ndef async_sleep(seconds):\n    # Check Python version\n    if sys.version_info < (3, 7):\n        # For older versions, explicitly pass the loop argument\n        loop = asyncio.get_event_loop()\n        return asyncio.sleep(seconds, loop=loop)\n    else:\n        # For Python 3.7+, just use asyncio.sleep as usual\n        return asyncio.sleep(seconds)\n\n"
  },
  {
    "path": "python/fledge/common/web/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/common/web/middleware.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport asyncio\nfrom functools import wraps\nimport json\nimport traceback\nfrom datetime import datetime\n\nfrom aiohttp import web\nimport jwt\n\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.services.core.firewall import Firewall\nfrom fledge.services.core.user_model import User\n\n\n__author__ = \"Praveen Garg\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nasync def error_middleware(app, handler):\n    async def middleware_handler(request):\n        if_trace = request.query.get('trace') if 'trace' in request.query and request.query.get('trace') == '1' else None\n\n        try:\n            response = await handler(request)\n        except (web.HTTPException, asyncio.CancelledError):\n            raise\n        # Below Exception must come last as it is the super class of all exceptions\n        except Exception as ex:\n            return handle_api_exception(ex, ex.__class__.__name__, if_trace)\n        else:\n            return response\n\n    return middleware_handler\n\n\nasync def optional_auth_middleware(app, handler):\n    async def middleware(request):\n        _logger.debug(\"Received %s request for %s\", request.method, request.path)\n        request.is_auth_optional = True\n        request.user = None\n        check_firewall(request)\n        return await handler(request)\n    return middleware\n\n\nasync def auth_middleware(app, handler):\n    async def _disconnect_idle_logins(user_token):\n        timeout, sessions = await User.Sessions.get()\n        fmt = \"%Y-%m-%d %H:%M:%S.%f\"\n        current_time = datetime.now().strftime(fmt)\n        for session in sessions:\n            if session['token'] == user_token:\n                diff = datetime.strptime(current_time, fmt) - 
datetime.strptime(session['last_accessed_ts'], fmt)\n                if diff.seconds > timeout:\n                    raise User.SessionTimeout(\"Session has timed out or been disconnected. Log in again.\")\n                session['last_accessed_ts'] = current_time\n                break\n\n    async def middleware(request):\n        # if `rest_api` config has `authentication` set to mandatory then:\n        #   request must carry auth header,\n        #   actual value will be checked too and if bad then 401: unauthorized will be returned\n        _logger.debug(\"Received %s request for %s\", request.method, request.path)\n        request.is_auth_optional = False\n        request.user = None\n        check_firewall(request)\n        if request.method == 'OPTIONS':\n            return await handler(request)\n\n        # make case insensitive `Authorization` should work\n        try:\n            token = request.headers.get('authorization', None)\n        except:\n            token = request.headers.get('Authorization', None)\n\n        if token:\n            try:\n                # validate the token and get user id\n                uid = await User.Objects.validate_token(token)\n                if not str(handler).startswith(\"<function ping\"):\n                    # disconnect idle user logins\n                    await _disconnect_idle_logins(token)\n                # extend the token expiry, as token is valid\n                # and no bad token exception raised\n                await User.Objects.refresh_token_expiry(token)\n                # set the user to request object\n                request.user = await User.Objects.get(uid=uid)\n                # set the token to request\n                request.token = token\n                # set if user is admin\n                request.user_is_admin = True if int(request.user[\"role_id\"]) == 1 else False\n                # validate request path\n                await validate_requests(request)\n            except 
User.SessionTimeout as e:\n                await User.Objects.delete_token(token)\n                raise web.HTTPUnauthorized(reason=str(e))\n            except (jwt.DecodeError, jwt.ExpiredSignatureError, User.InvalidToken, User.TokenExpired) as e:\n                raise web.HTTPUnauthorized(reason=str(e))\n            except jwt.exceptions.InvalidAlgorithmError:\n                raise web.HTTPUnauthorized(reason=\"The token has expired, login again.\")\n        else:\n            if str(handler).startswith(\"<function ping\"):\n                pass\n            elif str(handler).startswith(\"<function login\"):\n                pass\n            elif str(handler).startswith(\"<function update_password\"):  # when pwd expiration\n                pass\n            else:\n                raise web.HTTPUnauthorized()\n\n        return await handler(request)\n    return middleware\n\n\nasync def certificate_login_middleware(app, handler):\n    async def middleware(request):\n        if request.method == 'OPTIONS':\n            return await handler(request)\n        request.auth_method = 'certificate'\n        return await handler(request)\n    return middleware\n\n\nasync def password_login_middleware(app, handler):\n    async def middleware(request):\n        if request.method == 'OPTIONS':\n            return await handler(request)\n        request.auth_method = 'password'\n        return await handler(request)\n    return middleware\n\n\ndef has_permission(permission):\n    \"\"\"Decorator that restrict access only for authorized users with correct permissions (role_name)\n\n    if user is authorized and does not have permission raises HTTPForbidden.\n    \"\"\"\n    def wrapper(fn):\n        @wraps(fn)\n        async def wrapped(*args, **kwargs):\n            request = args[-1]\n            if not isinstance(request, web.BaseRequest):\n                msg = (\"Incorrect decorator usage. 
\"\n                       \"Expecting `def handler(request)` \"\n                       \"or `def handler(self, request)`.\")\n                raise RuntimeError(msg)\n\n            if request.is_auth_optional is False:  # auth is mandatory\n                roles_id = [int(r[\"id\"]) for r in await User.Objects.get_role_id_by_name(permission)]\n                if int(request.user[\"role_id\"]) not in roles_id:\n                    raise web.HTTPForbidden\n\n            ret = await fn(*args, **kwargs)\n            return ret\n\n        return wrapped\n\n    return wrapper\n\n\ndef handle_api_exception(ex, _class=None, if_trace=0):\n    err_msg = {\"message\": \"[{}] {}\".format(_class,  str(ex))}\n\n    if if_trace:\n        err_msg.update({\"exception\": _class, \"traceback\": traceback.format_exc()})\n\n    return web.Response(status=500, body=json.dumps({'error': err_msg}).encode('utf-8'),\n                        content_type='application/json')\n\n\nasync def validate_requests(request):\n    \"\"\"\n        a) With \"normal\" based user role id=2 only\n           - restrict operations of Control scripts and pipelines except GET\n        b) With \"view\" based user role id=3 only\n           - read access operations (GET calls)\n           - change profile (PUT call)\n           - logout (PUT call)\n           - extension API (PUT match call)\n        c) With \"data-view\" based user role id=4 only\n           - ping (GET call)\n           - browser asset read operation (GET call)\n           - service (GET call)\n           - statistics, statistics history, statistics rate (GET call)\n           - user profile (GET call)\n           - user roles (GET call)\n           - change profile (PUT call)\n           - logout (PUT call)\n        d) With \"control\" based user role id=5 only\n           - same as normal user can do\n           - All CRUD's privileges for control scripts\n           - All CRUD's privileges for control pipelines\n    \"\"\"\n    user_id = 
request.user['id']\n    # Only URL's which are specific meant for Admin user\n    if not request.user_is_admin and request.method == 'GET':\n        # Special case: Allowed GET user for Control user\n        if int(request.user[\"role_id\"]) != 5 and str(request.rel_url) == '/fledge/user':\n            raise web.HTTPForbidden\n    # Normal/Editor user\n    if int(request.user[\"role_id\"]) == 2 and request.method != 'GET':\n        # Special case: Allowed control entrypoint update request and handling of rejection in its handler\n        if str(request.rel_url).startswith('/fledge/control') and not str(request.rel_url).startswith(\n                '/fledge/control/request'):\n            raise web.HTTPForbidden\n    # Viewer user\n    elif int(request.user[\"role_id\"]) == 3:\n        if request.method != 'GET':\n            supported_endpoints = ['/fledge/user', '/fledge/user/{}/password'.format(user_id), '/logout',\n                                   '/fledge/extension/bucket/match', '/fledge/plugin/validate']\n            if not str(request.rel_url).endswith(tuple(supported_endpoints)):\n                raise web.HTTPForbidden\n        else:\n            if '/debug?action=' in str(request.rel_url):\n                raise web.HTTPForbidden\n    # Data Viewer user\n    elif int(request.user[\"role_id\"]) == 4:\n        if request.method == 'GET':\n            supported_endpoints = ['/fledge/asset', '/fledge/ping', '/fledge/statistics',\n                                   '/fledge/user?', '/fledge/user/role']\n            if not (str(request.rel_url).startswith(tuple(supported_endpoints)\n                                                    ) or str(request.rel_url).endswith('/fledge/service')):\n                raise web.HTTPForbidden\n        elif request.method == 'PUT':\n            supported_endpoints = ['/fledge/user', '/fledge/user/{}/password'.format(user_id), '/logout']\n            if not str(request.rel_url).endswith(tuple(supported_endpoints)):\n         
       raise web.HTTPForbidden\n        else:\n            raise web.HTTPForbidden\n\n\ndef check_firewall(req: web.Request) -> None:\n    # FIXME: Need to check on other environments like AWS, docker; May be ideal with socket\n    # import socket\n    # hostname = socket.gethostname()\n    # IPAddr = socket.gethostbyname(hostname)\n\n    source_ip_address = req.transport.get_extra_info('peername')[0]\n    if source_ip_address not in ['localhost', '127.0.0.1']:\n        firewall_ip_addresses = Firewall.IPAddresses.get()\n        if 'allowedIP' in firewall_ip_addresses:\n            allowed = firewall_ip_addresses['allowedIP']\n            if allowed:\n                if source_ip_address not in allowed:\n                    raise web.HTTPForbidden\n            else:\n                if 'deniedIP' in firewall_ip_addresses:\n                    if source_ip_address in firewall_ip_addresses['deniedIP']:\n                        raise web.HTTPForbidden\n\n"
  },
  {
    "path": "python/fledge/common/web/ssl_wrapper.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Python SSL wrapper for openssl\"\"\"\n\nimport time\nimport datetime\nimport subprocess\nfrom fledge.common import logger, utils\n\n__author__ = \"Amarendra Kumar Sinha\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = logger.setup(__name__)\n\n\nclass SSLVerifier(object):\n    class VerificationError(ValueError):\n        pass\n\n    user_cert = None\n    ca_cert = None\n    intermediate_cert = None\n\n    def __init__(self, cert_user=None, cert_ca=None, intermediate_cert=None):\n        self.__class__.user_cert = cert_user\n        self.__class__.ca_cert = cert_ca\n        self.__class__.intermediate_cert = intermediate_cert\n\n    @classmethod\n    def verify(cls):\n        if cls.user_cert is None:\n            raise OSError(\"No user certificate supplied.\")\n        cls.verify_against_revoked()\n        result = cls.verify_against_ca()[0].split(\"stdin:\")[1]\n        return result.strip()\n\n    @classmethod\n    def verify_expired(cls, attime=str(time.time())):\n        if cls.user_cert is None:\n            raise OSError(\"No user certificate supplied.\")\n\n        echo_process = subprocess.Popen(['echo', cls.user_cert], stderr=subprocess.PIPE, stdout=subprocess.PIPE)\n        a = subprocess.Popen([\"openssl\", \"verify\", \"-CAfile\", cls.ca_cert, \"-x509_strict\", \"-attime\", attime], stdin=echo_process.stdout, stderr=subprocess.PIPE, stdout=subprocess.PIPE)\n        outs, errs = a.communicate()\n        if outs is None and errs is None:\n            raise OSError(\n                'Verification error in executing command \"{}\"'.format(\"openssl verify -CAfile {} -x509_strict\".format(cls.ca_cert)))\n        if a.returncode != 0:\n            raise OSError(\n                'Verification error in executing command \"{}\". 
Error: {}, returncode: {}'.format(\"openssl verify -CAfile {} -x509_strict\".format(cls.ca_cert), errs.decode('utf-8').replace('\\n', ''), a.returncode))\n        d = [b for b in outs.decode('utf-8').split('\\n') if b != '']\n        return False if \"OK\" in d[0] else True\n\n    @classmethod\n    def get_revoked_fingerprint(cls):\n        # TODO: Work out a mechanism to populate REVOKED_CERTS like\n        # REVOKED_CERTS = [\n        #     'F8:7F:30:7B:12:15:15:47:07:93:D4:99:8F:7B:2E:DF:'\n        #     '12:5A:2C:0F:C4:BD:5E:56:B8:5C:93:A3:65:CB:63:9B',\n        #     '00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:'\n        #     '12:5A:2C:0F:C4:BD:5E:56:B8:5C:93:A3:65:CB:63:9B',\n        #     '36:9F:36:7F:0C:90:26:A1:AD:A3:79:E9:A9:8B:F5:74:'\n        #     '21:B1:29:4B:67:73:78:B4:DE:CF:FA:C5:A6:42:BA:03',\n        # ]\n        REVOKED_CERTS = []\n        return REVOKED_CERTS\n\n    @classmethod\n    def verify_against_revoked(cls):\n        revoked_fingerprints = cls.get_revoked_fingerprint()\n        fp = cls.get_fingerprint()\n        if fp in revoked_fingerprints:\n            raise SSLVerifier.VerificationError(\n                str(), 'matches revoked fingerprint', fp)\n\n    @classmethod\n    def verify_against_ca(cls):\n        echo_process = subprocess.Popen(['echo', cls.user_cert], stderr=subprocess.PIPE, stdout=subprocess.PIPE)\n        args = \"openssl verify -CAfile {}\".format(cls.ca_cert)\n        if cls.intermediate_cert is not None:\n            args = \"openssl verify -CAfile {} -untrusted {}\".format(cls.ca_cert, cls.intermediate_cert)\n        # TODO: FOGL-7302 to handle -x509_strict check when OpenSSL version >=3.x\n        # Removing the -x509_strict flag as an interim solution; as of now only CentOS Stream9 has OpenSSL version 3.0\n        if utils.get_open_ssl_version(version_string=False)[0] < 3:\n            args += \" -x509_strict\"\n        a = subprocess.Popen(args.split(), stdin=echo_process.stdout, stderr=subprocess.PIPE, 
stdout=subprocess.PIPE)\n        outs, errs = a.communicate()\n        if outs is None and errs is None:\n            raise OSError('Verification error in executing command \"{}\"'.format(args))\n        if a.returncode != 0:\n            raise OSError('Verification error in executing command \"{}\". Error: {}, returncode: {}'.format(\n                args, errs.decode('utf-8').replace('\\n', ''), a.returncode))\n        d = [b for b in outs.decode('utf-8').split('\\n') if b != '']\n        if \"OK\" not in d[0]:\n            raise SSLVerifier.VerificationError(\n                str(), 'failed verification', errs)\n        return d\n\n    \"\"\"\n        Common x509 options:\n         -serial         - print serial number value\n         -subject_hash   - print subject hash value\n         -subject_hash_old   - print old-style (MD5) subject hash value\n         -issuer_hash    - print issuer hash value\n         -issuer_hash_old    - print old-style (MD5) issuer hash value\n         -hash           - synonym for -subject_hash\n         -subject        - print subject DN\n         -issuer         - print issuer DN\n         -email          - print email address(es)\n         -startdate      - notBefore field\n         -enddate        - notAfter field\n         -purpose        - print out certificate purposes\n         -dates          - both Before and After dates\n         -modulus        - print the RSA key modulus\n         -pubkey         - output the public key\n         -fingerprint    - print the certificate fingerprint\n         -alias          - output certificate alias\n    \"\"\"\n\n    @classmethod\n    def get_x509(cls, cmd):\n        if cls.user_cert is None:\n            raise OSError(\"No user certificate supplied.\")\n        echo_process = subprocess.Popen(['echo', cls.user_cert], stderr=subprocess.PIPE, stdout=subprocess.PIPE)\n        a = subprocess.Popen([\"openssl\", \"x509\", \"-noout\", \"-nameopt\", \"sep_comma_plus_space\", cmd], 
stdin=echo_process.stdout, stderr=subprocess.PIPE, stdout=subprocess.PIPE)\n        outs, errs = a.communicate()\n        if outs is None and errs is None:\n            raise OSError(\n                'Error in executing command \"{}\"'.format(\"openssl x509 -noout {}\".format(cmd)))\n        if a.returncode != 0:\n            raise OSError(\n                'Error in executing command \"{}\". Error: {}, return code: {}'.format(cmd, errs.decode('utf-8').replace('\\n', ''), a.returncode))\n        d = [b for b in outs.decode('utf-8').split('\\n') if b != '']\n        return d\n\n    @classmethod\n    def get_serial(cls):\n        cmd = '-serial'\n        serial = cls.get_x509(cmd=cmd)[0].split(\"serial=\")[1].strip()\n        return serial\n\n    @classmethod\n    def get_purposes(cls):\n        cmd = '-purpose'\n        purposes = cls.get_x509(cmd=cmd)\n        return purposes\n\n    @classmethod\n    def get_issuer_common_name(cls):\n        cmd = '-issuer'\n        issuer = cls.get_x509(cmd=cmd)[0].split(\"issuer=\")[1].strip()\n        return issuer\n\n    @classmethod\n    def get_subject(cls):\n        cmd = '-subject'\n        subject_text = cls.get_x509(cmd=cmd)[0].split(\"subject=\")[1].strip()\n        subject = subject_text.strip().split(\", \")\n\n        country = next(filter(lambda x: x.startswith(\"C=\"), subject), None)\n        state = next(filter(lambda x: x.startswith(\"ST=\"), subject), None)\n        organisation = next(filter(lambda x: x.startswith(\"O=\"), subject), None)\n        commonName = next(filter(lambda x: x.startswith(\"CN=\"), subject), None)\n        email = next(filter(lambda x: x.startswith(\"emailAddress=\"), subject), None)\n\n        subject_dict = {\n            \"country\": \"\" if country is None else country.split(\"C=\")[1],\n            \"state\": \"\" if state is None else state.split(\"ST=\")[1],\n            \"organisation\": \"\" if organisation is None else organisation.split(\"O=\")[1],\n            \"commonName\": 
\"\" if commonName is None else commonName.split(\"CN=\")[1],\n            \"email\": \"\" if email is None else email.split(\"emailAddress=\")[1],\n        }\n        return subject_dict\n\n    @classmethod\n    def get_fingerprint(cls):\n        cmd = '-fingerprint'\n        fp = cls.get_x509(cmd=cmd)[0].split(\"SHA1 Fingerprint=\")[1].strip()\n        return fp\n\n    @classmethod\n    def get_pubkey(cls):\n        cmd = '-pubkey'\n        pk = cls.get_x509(cmd=cmd)[0].strip()\n        return pk\n\n    @classmethod\n    def get_startdate(cls):\n        cmd = '-startdate'\n        stdt = cls.get_x509(cmd=cmd)[0].split(\"notBefore=\")[1].strip()\n        return stdt\n\n    @classmethod\n    def get_enddate(cls):\n        cmd = '-enddate'\n        enddt = cls.get_x509(cmd=cmd)[0].split(\"notAfter=\")[1].strip()\n        return enddt\n\n    @classmethod\n    def is_expired(cls):\n        enddt = cls.get_enddate()\n        dt_format = \"%b %d %X %Y %Z\"  # Mar 12 12:31:57 2020 GMT\n        cert_time = time.mktime(datetime.datetime.strptime(enddt, dt_format).timetuple())\n        curr_time = time.time()\n        return True if cert_time < curr_time else False\n\n    @classmethod\n    def set_ca_cert(cls, cert):\n        cls.ca_cert = cert\n\n    @classmethod\n    def set_user_cert(cls, cert):\n        cls.user_cert = cert\n\n    @classmethod\n    def set_intermediate_cert(cls, cert):\n        cls.intermediate_cert = cert\n\n"
  },
  {
    "path": "python/fledge/plugins/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/plugins/common/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/plugins/common/shim/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/plugins/common/utils.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Common Utilities\"\"\"\n\nimport datetime\n\nfrom fledge.services.core.api import utils as api_utils\n\n__author__ = \"Amarendra Kumar Sinha, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nDEPRECATED_BIT_POSITION = 7\nDEPRECATED_BIT_MASK_VALUE = 1 << DEPRECATED_BIT_POSITION\nPERSIST_DATA_BIT_POSITION = 3\n\n\ndef get_diff(old, new):\n    diff = list()\n    for key in new:\n        if key in old:\n            if old[key] != new[key]:\n                diff.append(key)\n        else:\n            diff.append(key)\n    return diff\n\n\ndef local_timestamp():\n    \"\"\"\n    :return: str - current time stamp with microseconds and machine timezone info\n    :example: '2018-05-08 14:06:40.51731305:30'\n    \"\"\"\n    return str(datetime.datetime.now(datetime.timezone.utc).astimezone())\n\n\ndef bit_at_given_position_set_or_unset(n, k):\n    \"\"\" Check whether the bit at given position is set or unset\n\n        :return: if it results to '1' then bit is set, else it results to '0' bit is unset\n        :example:\n              Input : n = 32, k = 5\n              Output : Set\n              (100000)\n              The 5th bit from the right is set.\n\n              Input : n = 32, k = 2\n              Output : Unset\n    \"\"\"\n    new_num = n >> k\n    return new_num & 1\n\n\ndef get_persist_plugins(directory=None):\n    \"\"\" Get a list of south, north, filter types plugins that can persist data for a service\n    :return: list - plugins\n    :example: [\"OMF\"]\n    \"\"\"\n    plugin_list = []\n    supported_persist_dirs = [\"south\", \"north\", \"filter\"]\n    if directory is not None:\n        supported_persist_dirs = [directory, \"filter\"]\n    for plugin_type in supported_persist_dirs:\n        libs = api_utils.find_c_plugin_libs(plugin_type)\n        
for name, _type in libs:\n            if _type == 'binary':\n                jdoc = api_utils.get_plugin_info(name, dir=plugin_type)\n                if jdoc:\n                    if 'flag' in jdoc:\n                        if bit_at_given_position_set_or_unset(jdoc['flag'], PERSIST_DATA_BIT_POSITION):\n                            plugin_list.append(jdoc['name'])\n    return plugin_list\n"
  },
  {
    "path": "python/fledge/plugins/north/README.rst",
    "content": "*********************\nFledge North Plugins\n*********************\n\nFledge North plugins are the code modules that are used by the northbound\nprocess within Fledge to send data upwards to historian, the Cloud\nor the Fog layer. Each plugin implements a specific protocols and/or\nformat conversions required for a particular destination service.\n\nGeneric processing for the northbound processing is implemented in the\nFledge North modules.\n"
  },
  {
    "path": "python/fledge/plugins/north/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/plugins/north/common/README.rst",
    "content": "************************\nCommon North Plugin Code\n************************\n\nThis directory is the location in which code that is common to more than\none North plugin is located. Code here may not be common to all\nNorth plugins, but it should not be specific to an individual\nplugin.\n\nCode may be further organised into sub-directories to provide logical\ngrouping of code with a common purpose.\n"
  },
  {
    "path": "python/fledge/plugins/north/common/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/plugins/north/common/common.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Common code to the north facing plugins\n\n\"\"\"\n\nimport asyncio\n\nfrom fledge.common.configuration_manager import ConfigurationManager\n\n\n# Messages used for Information, Warning and Error notice\nMESSAGES_LIST = {\n\n    # Information messages\n    \"i000000\": \"information.\",\n\n    # Warning / Error messages\n    \"e000000\": \"general error.\",\n\n    \"e000001\": \"ERROR - the plugin cannot be executed directly.\",\n\n    \"e000010\": \"cannot complete the retrieval of the plugin information - error details |{0}|\",\n    \"e000011\": \"cannot initialize the plugin - error details |{0}|\",\n    \"e000012\": \"cannot initialize the logger - error details |{0}|\",\n    \"e000013\": \"cannot complete the termination of the plugin - error details |{0}|\",\n\n    \"e000020\": \"cannot retrieve information about the sensor.\",\n    \"e000021\": \"cannot complete the preparation of the in memory structure.\",\n    \"e000022\": \"unable to extend the memory structure with new data.\",\n    \"e000023\": \"cannot prepare sensor information for the destination - error details |{0}|\",\n    \"e000024\": \"an error occurred during the request to the destination - server address |{0}| - error details |{1}|\",\n\n    \"e000030\": \"cannot update the reached position.\",\n    \"e000031\": \"cannot complete the sending operation - error details |{0}|\",\n    \"e000032\": \"an error occurred during the request to the destination, the error is considered not blocking \"\n               \"- status code |{0}| - error details |{1}|\",\n}\n\n\ndef convert_to_type(value):\n    \"\"\"Evaluates and converts to the type in relation to its actual value, for example \"180.2\" to float 180.2\n\n     Args:\n        value : value to evaluate and convert\n     Returns:\n         value_converted: converted value\n     Raises:\n     \"\"\"\n\n    value_type = 
evaluate_type(value)\n\n    if value_type == \"string\":\n        value_converted = value\n\n    elif value_type == \"number\":\n        value_converted = float(value)\n\n    elif value_type == \"integer\":\n        value_converted = int(value)\n    else:\n        value_converted = value\n\n    return value_converted\n\n\ndef evaluate_type(value):\n    \"\"\"Evaluates the type in relation to its value\n\n     Args:\n        value : value to evaluate\n     Returns:\n         Evaluated type {integer,number,string}\n     Raises:\n     \"\"\"\n\n    if isinstance(value, list):\n        evaluated_type = \"array\"\n    elif isinstance(value, dict):\n        evaluated_type = \"string\"\n    else:\n        try:\n            float(value)\n\n            try:\n                # Evaluates if it is a int or a number\n                if str(int(float(value))) == str(value):\n\n                    # Checks the case having .0 as 967.0\n                    int_str = str(int(float(value)))\n                    value_str = str(value)\n\n                    if int_str == value_str:\n                        evaluated_type = \"integer\"\n                    else:\n                        evaluated_type = \"number\"\n\n                else:\n                    evaluated_type = \"number\"\n\n            except ValueError:\n                evaluated_type = \"string\"\n\n        except ValueError:\n            evaluated_type = \"string\"\n\n    return evaluated_type\n\n\ndef identify_unique_asset_codes(raw_data):\n    \"\"\"Identify unique asset codes in the data block\n\n    Args:\n        raw_data : data block retrieved from the Storage layer that should be evaluated\n    Returns:\n        unique_asset_codes : list of unique codes\n\n    Raises:\n    \"\"\"\n\n    unique_asset_codes = []\n\n    for row in raw_data:\n        asset_code = row['asset_code']\n        asset_data = row['reading']\n\n        # Evaluates if the asset_code is already in the list\n        if not 
any(item[\"asset_code\"] == asset_code for item in unique_asset_codes):\n\n            unique_asset_codes.append(\n                {\n                    \"asset_code\": asset_code,\n                    \"asset_data\": asset_data\n                }\n            )\n\n    return unique_asset_codes\n\n\ndef retrieve_configuration(_storage, _category_name, _default, _category_description):\n    \"\"\"Retrieves the configuration from the Category Manager for a category name\n\n     Args:\n         _storage: Reference to the Storage Client to be used\n         _category_name: Category name to be retrieved\n         _default: default values for the category\n         _category_description: category description\n     Returns:\n         _config_from_manager: Retrieved configuration as a Dictionary\n     Raises:\n     \"\"\"\n\n    _event_loop = asyncio.get_event_loop()\n\n    cfg_manager = ConfigurationManager(_storage)\n\n    _event_loop.run_until_complete(cfg_manager.create_category(_category_name, _default, _category_description))\n\n    _config_from_manager = _event_loop.run_until_complete(cfg_manager.get_category_all_items(_category_name))\n\n    return _config_from_manager\n"
  },
  {
    "path": "python/fledge/plugins/north/common/exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Exceptions module \"\"\"\n\n__author__ = \"Stefano Simonelli\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n__all__ = ('NorthPluginException', 'DataSendError')\n\n\nclass URLFetchError(RuntimeError):\n    \"\"\" Unable to fetch from the HTTP server \"\"\"\n    pass\n\n\nclass PluginInitializeFailed(RuntimeError):\n    \"\"\" Unable to initialize the plugin \"\"\"\n    pass\n\n\nclass NorthPluginException(Exception):\n    def __init__(self, reason):\n        self.reason = reason\n\n\nclass DataSendError(NorthPluginException):\n    \"\"\" Unable to send the data to the destination \"\"\"\n    def __init__(self, reason):\n        super(DataSendError, self).__init__(reason)\n        self.reason = reason\n\n\nclass URLConnectionError(Exception):\n    \"\"\" Unable to connect to the server \"\"\"\n    pass\n"
  },
  {
    "path": "python/fledge/plugins/north/empty/README.rst",
    "content": "**************************\nFledge North Empty plugin\n**************************\n\nTemplate to be used for the implementation of a Fledge North plugin.\n"
  },
  {
    "path": "python/fledge/plugins/north/empty/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/plugins/north/empty/empty.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" North Plugin template \"\"\"\n\nimport fledge.plugins.north.common.common as plugin_common\nimport fledge.plugins.north.common.exceptions as plugin_exceptions\n\nfrom fledge.common import logger\n\n__author__ = \"Stefano Simonelli\"\n__copyright__ = \"Copyright (c) 2017,2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_MODULE_NAME = \"Empty North Plugin\"\n\n_DEFAULT_CONFIG = {}\n\n_logger = logger.setup(__name__)\n\n\ndef plugin_info():\n    \"\"\" Returns information about the plugin.\n\n     Args:\n     Returns:\n         _plugin_info: python dictionary containing the plugin information\n     Raises:\n     \"\"\"\n\n    _plugin_info = {\n        'name': _MODULE_NAME,\n        'version': \"1.0.0\",\n        'type': \"north\",\n        'interface': \"1.0\",\n        'config': _DEFAULT_CONFIG\n    }\n\n    return _plugin_info\n\n\ndef plugin_init(config):\n    \"\"\" Initialise the plugin.\n\n    Args:\n        config: JSON configuration document\n    Returns:\n        handle: JSON object to be used in future calls to the plugin\n    Raises:\n    \"\"\"\n\n    handle = config\n\n    return handle\n\n\nasync def plugin_send(handle, data_to_send, stream_id):\n    \"\"\" Sends the data to the destination implementing the required protocol\n\n    Args:\n        handle: handle returned by the plugin initialisation call\n        data_to_send: Data to send as a python list/dictionary\n        stream_id:\n\n    Returns:\n        is_data_sent: True if the data were sent\n        new_position: Id of the position already sent\n        num_sent: Number of asset codes sent\n\n    Raises:\n        DataSendError\n    \"\"\"\n\n    try:\n        is_data_sent = True\n        new_position = 0\n        num_sent = 0\n\n    except Exception as ex:\n        raise plugin_exceptions.DataSendError(ex)\n\n    return is_data_sent, 
new_position, num_sent\n\n\ndef plugin_reconfigure(handle, new_config):\n    \"\"\" Reconfigures the plugin; it should be called when the configuration of the plugin is changed during the\n        operation of the North task.\n        The new configuration category should be passed.\n\n    Args:\n        handle: handle returned by the plugin initialisation call\n        new_config: JSON object representing the new configuration category\n\n    Returns:\n        new_handle: new handle to be used in the future calls\n\n    Raises:\n    \"\"\"\n\n    new_handle = handle\n\n    return new_handle\n\n\ndef plugin_shutdown(handle):\n    \"\"\" Shuts down the plugin, performing any required cleanup; to be called prior to the North task being shut down.\n\n    Args:\n        handle: handle returned by the plugin initialisation call\n\n    Returns:\n    Raises:\n    \"\"\"\n\n    pass\n\n\nif __name__ == \"__main__\":\n    pass\n"
  },
  {
    "path": "python/fledge/plugins/storage/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/plugins/storage/common/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/plugins/storage/common/backup.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n# Copyright (C) 2017\n\n\"\"\" Common functionalities for the Backup, they are also used for the integration with the API.\n\"\"\"\n\nimport os\nimport uuid\n\nfrom fledge.services.core import server\n\nfrom fledge.common.storage_client import payload_builder\nfrom fledge.common import logger\n\nimport fledge.plugins.storage.common.lib as lib\nimport fledge.plugins.storage.common.exceptions as exceptions\n\n__author__ = \"Stefano Simonelli\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_MESSAGES_LIST = {\n\n    # Information messages\n    \"i000001\": \"Execution started.\",\n    \"i000002\": \"Execution completed.\",\n\n    # Warning / Error messages\n    \"e000001\": \"cannot initialize the logger - error details |{0}|\",\n    \"e000002\": \"an error occurred during the backup operation - error details |{0}|\",\n    \"e000004\": \"cannot complete the initialization - error details |{0}|\",\n}\n\"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n# Log definitions\n_logger = None\n\n_LOG_LEVEL_DEBUG = 10\n_LOG_LEVEL_INFO = 20\n\n_LOGGER_LEVEL = _LOG_LEVEL_INFO\n_LOGGER_DESTINATION = logger.SYSLOG\n\n\nclass Backup(object):\n    \"\"\" Provides external functionality/integration for Backup operations\n\n        the constructor expects to receive a reference to a StorageClient object to being able to access\n        the Storage Layer\n    \"\"\"\n\n    _MODULE_NAME = \"fledge_backup_common\"\n\n    _SCHEDULE_BACKUP_ON_DEMAND = \"fac8dae6-d8d1-11e7-9296-cec278b6b50a\"\n\n    _MESSAGES_LIST = {\n\n        # Information messages\n        \"i000000\": \"general information\",\n        \"i000003\": \"On demand backup successfully launched.\",\n\n        # Warning / Error messages\n        \"e000000\": \"general error\",\n        \"e000001\": \"cannot delete/purge backup file on file system - id |{0}| - file name |{1}| 
error details |{2}|\",\n        \"e000002\": \"cannot delete/purge backup information on the storage layer \"\n                   \"- id |{0}| - file name |{1}| error details |{2}|\",\n        \"e000003\": \"cannot retrieve information for the backup id |{0}|\",\n        \"e000004\": \"cannot launch on demand backup - error details |{0}|\",\n    }\n    \"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n    _logger = None\n    STORAGE_TABLE_BACKUPS = None\n\n    def __init__(self, _storage):\n        self._storage = _storage\n\n        if not Backup._logger:\n            Backup._logger = logger.setup(self._MODULE_NAME,\n                                          destination=_LOGGER_DESTINATION,\n                                          level=_LOGGER_LEVEL)\n\n        self._backup_lib = lib.BackupRestoreLib(self._storage, self._logger)\n        self.STORAGE_TABLE_BACKUPS = self._backup_lib.STORAGE_TABLE_BACKUPS\n\n    async def get_all_backups(\n                        self,\n                        limit: int,\n                        skip: int,\n                        status: [lib.BackupStatus, None],\n                        sort_order: lib.SortOrder = lib.SortOrder.DESC) -> list:\n\n        \"\"\" Returns a list of backups is returned sorted in chronological order with the most recent backup first.\n\n        Args:\n            limit: int - limit the number of backups returned to the number given\n            skip: int - skip the number of backups specified before returning backups-\n                  this, in conjunction with the limit option, allows for a paged interface to be built\n            status: lib.BackupStatus - limit the returned backups to those only with the specified status,\n                    None = retrieves information for all the backups\n            sort_order: lib.SortOrder - Defines the order used to present information, DESC by default\n\n        Returns:\n            backups_information: all the information 
available related to the requested backups\n\n        Raises:\n        \"\"\"\n\n        payload = payload_builder.PayloadBuilder().SELECT(\"id\", \"status\", \"ts\", \"file_name\", \"type\") \\\n            .ALIAS(\"return\", (\"ts\", 'ts')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\"))\n        if status:\n            payload.WHERE(['status', '=', status])\n        payload.ORDER_BY(['id', sort_order])\n        payload.LIMIT(limit)\n        if skip > 0:\n            payload.OFFSET(skip)\n        backups_from_storage = await self._storage.query_tbl_with_payload(self.STORAGE_TABLE_BACKUPS, payload.payload())\n        backups_information = backups_from_storage['rows']\n\n        return backups_information\n\n    async def get_backup_details(self, backup_id: int) -> dict:\n        \"\"\" Returns the details of a backup\n\n        Args:\n            backup_id: int - the id of the backup to return\n\n        Returns:\n            backup_information: all the information available related to the requested backup_id\n\n        Raises:\n            exceptions.DoesNotExist\n            exceptions.NotUniqueBackup\n        \"\"\"\n        payload = payload_builder.PayloadBuilder().SELECT(\"id\", \"status\", \"ts\", \"file_name\", \"type\") \\\n            .ALIAS(\"return\", (\"ts\", 'ts')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")) \\\n            .WHERE(['id', '=', backup_id]).payload()\n\n        backup_from_storage = await self._storage.query_tbl_with_payload(self.STORAGE_TABLE_BACKUPS, payload)\n\n        if backup_from_storage['count'] == 0:\n            raise exceptions.DoesNotExist\n        elif backup_from_storage['count'] == 1:\n            backup_information = backup_from_storage['rows'][0]\n        else:\n            raise exceptions.NotUniqueBackup\n\n        return backup_information\n\n    async def delete_backup(self, backup_id: int):\n        \"\"\" Deletes a backup\n\n        Args:\n            backup_id: int - the id of the backup 
to delete\n\n        Returns:\n        Raises:\n        \"\"\"\n        try:\n            backup_information = await self.get_backup_details(backup_id)\n            file_name = backup_information['file_name']\n\n            # Deletes backup file from the file system\n            if os.path.exists(file_name):\n\n                try:\n                    os.remove(file_name)\n                except Exception as _ex:\n                    _message = self._MESSAGES_LIST[\"e000001\"].format(backup_id, file_name, _ex)\n                    Backup._logger.error(_message)\n                    raise\n\n            # Deletes backup information from the Storage layer\n            # only if it was possible to delete the file from the file system\n            try:\n                await self._delete_backup_information(backup_id)\n            except Exception as _ex:\n                _message = self._MESSAGES_LIST[\"e000002\"].format(backup_id, file_name, _ex)\n                self._logger.error(_message)\n                raise\n        except exceptions.DoesNotExist:\n            _message = self._MESSAGES_LIST[\"e000003\"].format(backup_id)\n            self._logger.warning(_message)\n            raise\n\n    async def _delete_backup_information(self, _id):\n        \"\"\" Deletes backup information from the Storage layer\n\n        Args:\n            _id: Backup id to delete\n        Returns:\n        Raises:\n        \"\"\"\n        payload = payload_builder.PayloadBuilder() \\\n            .WHERE(['id', '=', _id]) \\\n            .payload()\n        await self._storage.delete_from_tbl(self.STORAGE_TABLE_BACKUPS, payload)\n\n    async def create_backup(self):\n        \"\"\" Run a backup task using the scheduler on-demand schedule mechanism to run the script,\n            the backup will proceed asynchronously.\n\n        Args:\n        Returns:\n            status: str - {\"running\"|\"failed\"}\n        Raises:\n        \"\"\"\n        
self._logger.debug(\"{func}\".format(func=\"create_backup\"))\n        try:\n            await server.Server.scheduler.queue_task(uuid.UUID(Backup._SCHEDULE_BACKUP_ON_DEMAND))\n            _message = self._MESSAGES_LIST[\"i000003\"]\n            Backup._logger.debug(\"{0}\".format(_message))\n            status = \"running\"\n        except Exception as _ex:\n            _message = self._MESSAGES_LIST[\"e000004\"].format(_ex)\n            Backup._logger.error(\"{0}\".format(_message))\n            status = \"failed\"\n\n        return status\n"
  },
  {
    "path": "python/fledge/plugins/storage/common/exceptions.py",
"content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Exceptions module \"\"\"\n\n\nclass ConfigRetrievalError(Exception):\n    \"\"\" Unable to retrieve the parameters from the configuration manager \"\"\"\n    pass\n\n\nclass BackupOrRestoreAlreadyRunning(Exception):\n    \"\"\" Backup or restore cannot proceed as another operation is already running \"\"\"\n    pass\n\n\nclass InitializationFailed(Exception):\n    \"\"\" Cannot complete the initialization \"\"\"\n    pass\n\n\nclass BackupFailed(Exception):\n    \"\"\" An error occurred during the backup operation \"\"\"\n    pass\n\n\nclass RestoreFailed(Exception):\n    \"\"\" An error occurred during the restore operation \"\"\"\n    pass\n\n\nclass NotUniqueBackup(Exception):\n    \"\"\" More than one backup exists with the same backup id \"\"\"\n    pass\n\n\nclass BackupsDirDoesNotExist(Exception):\n    \"\"\" Directory used to store backups doesn't exist \"\"\"\n    pass\n\n\nclass SemaphoresDirDoesNotExist(Exception):\n    \"\"\" Directory used to store semaphores for backup/restore synchronization doesn't exist \"\"\"\n    pass\n\n\nclass DoesNotExist(Exception):\n    \"\"\" The requested backup id doesn't exist \"\"\"\n    pass\n\n\nclass CannotCreateConfigurationCacheFile(Exception):\n    \"\"\" It is not possible to create the configuration cache file to store information retrieved from the\n        configuration manager \"\"\"\n    pass\n\n\nclass InvalidBackupsPath(Exception):\n    \"\"\" The identified backups' path is not a valid directory \"\"\"\n    pass\n\n\nclass InvalidPath(Exception):\n    \"\"\" The identified path is not a valid directory \"\"\"\n    pass\n\n\nclass ArgumentParserError(Exception):\n    \"\"\" Invalid command line arguments \"\"\"\n    pass\n\n\nclass FledgeStartError(RuntimeError):\n    \"\"\" Unable to start Fledge \"\"\"\n    pass\n\n\nclass FledgeStopError(RuntimeError):\n    \"\"\" Unable to 
stop Fledge \"\"\"\n\n\nclass PgCommandUnAvailable(Exception):\n    \"\"\" Postgres command is not available using either the managed or the unmanaged approach \"\"\"\n    pass\n\n\nclass PgCommandNotExecutable(Exception):\n    \"\"\" Postgres command is not executable \"\"\"\n    pass\n\n\nclass CannotReadPostgres(Exception):\n    \"\"\" It is not possible to read data from Postgres \"\"\"\n    pass\n\n\nclass NoBackupAvailableError(RuntimeError):\n    \"\"\" No backup in the proper state is available \"\"\"\n    pass\n\n\nclass FileNameError(RuntimeError):\n    \"\"\" Impossible to identify a unique backup to restore \"\"\"\n    pass\n\n\nclass InvalidFledgeEnvironment(RuntimeError):\n    \"\"\" It is not possible to determine the environment in which the code is running,\n    either Deployment or Development \"\"\"\n    pass\n\n\nclass UndefinedStorage(Exception):\n    \"\"\" It is not possible to evaluate if the storage is managed or unmanaged \"\"\"\n    pass\n"
  },
  {
    "path": "python/fledge/plugins/storage/common/lib.py",
"content": "# -*- coding: utf-8 -*-\n# Copyright (C) 2017\n\n\"\"\" Library used for backup and restore operations\n\"\"\"\n\nimport subprocess\nimport time\nimport os\nimport asyncio\nimport json\nimport datetime\nfrom enum import IntEnum\n\nfrom fledge.common import logger\nfrom fledge.common.storage_client import payload_builder\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.storage_client.exceptions import StorageServerError\nimport fledge.plugins.storage.common.exceptions as exceptions\n\n__author__ = \"Stefano Simonelli\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_MODULE_NAME = \"fledge_backup_common_library\"\n\n_MESSAGES_LIST = {\n\n    # Information messages\n    \"i000000\": \"Information\",\n\n    # Warning / Error messages\n    \"e000000\": \"general error\",\n    \"e000001\": \"semaphore file deleted because it was already in existence - file |{0}|\",\n    \"e000002\": \"semaphore file deleted because it existed even if the corresponding process was not running \"\n               \"- file |{0}| - pid |{1}|\",\n    \"e000003\": \"ERROR - the library cannot be executed directly.\",\n}\n\"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n_CMD_TIMEOUT = \" timeout --signal=9  \"\n\"\"\" Every external command is launched using timeout to avoid endless executions \"\"\"\n\n_logger = None\n_storage = None\n\"\"\"\" Objects references assigned by the caller \"\"\"\n\n\ndef exec_wait(_cmd, _output_capture=False, _timeout=0):\n    \"\"\"  Executes an external/shell command\n\n    Args:\n        _cmd: command to execute\n        _output_capture: if the output of the command should be captured or not\n        _timeout: 0 for no timeout, or the timeout in seconds for the execution of the command\n\n    Returns:\n        _exit_code: exit status of 
the command\n        _output: output of the command\n    Raises:\n    \"\"\"\n\n    _output = \"\"\n\n    if _timeout != 0:\n        _cmd = _CMD_TIMEOUT + str(_timeout) + \" \" + _cmd\n        _logger.debug(\"{func} - Executing command using the timeout |{timeout}| \".format(\n                                        func=\"exec_wait\",\n                                        timeout=_timeout))\n\n    _logger.debug(\"{func} - cmd |{cmd}| \".format(func=\"exec_wait\",\n                                                 cmd=_cmd))\n\n    if _output_capture:\n        process = subprocess.Popen(_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)\n    else:\n        process = subprocess.Popen(_cmd, shell=True)\n\n    _exit_code = process.wait()\n\n    if _output_capture:\n        output_step1 = process.stdout.read()\n        _output = output_step1.decode(\"utf-8\")\n\n    _logger.debug(\"{func} - Executed command - cmd |{cmd}| - exit_code |{exit_code}| - output |{output}| \".format(\n                                    func=\"exec_wait\",\n                                    cmd=_cmd,\n                                    exit_code=_exit_code,\n                                    output=_output))\n\n    return _exit_code, _output\n\n\ndef exec_wait_retry(cmd, output_capture=False, exit_code_ok=0, max_retry=3, write_error=True, sleep_time=1, timeout=0):\n    \"\"\" Executes an external command, retrying the operation up to max_retry times until the exit status matches a specific value\n\n    Args:\n        cmd: command to execute\n        output_capture: if the output of the command should be captured or not\n        exit_code_ok: exit status to achieve\n        max_retry: maximum number of retries to achieve the desired exit status\n        write_error: if a message should be generated for each retry\n        sleep_time: seconds to sleep between each retry\n        timeout: 0= no timeout, or the timeout in seconds for the execution of the external command\n\n    Returns:\n        _exit_code: exit status of the command\n        _output: output of the command\n\n    Raises:\n    \"\"\"\n\n    global _logger\n\n    _logger.debug(\"{func} - cmd |{cmd}| \".format(func=\"exec_wait_retry\",\n                                                 cmd=cmd))\n\n    _exit_code = 0\n    _output = \"\"\n\n    # try X times the operation\n    retry = 1\n    loop_continue = True\n\n    while loop_continue:\n\n        _exit_code, _output = exec_wait(cmd, output_capture, timeout)\n\n        if _exit_code == exit_code_ok:\n            loop_continue = False\n\n        elif retry <= max_retry:\n\n            # Prepares for the retry operation\n            if write_error:\n                short_output = _output[0:50]\n                _logger.debug(\"{func} - cmd |{cmd}| - N retry |{retry}| - message |{msg}| \".format(\n                    func=\"exec_wait_retry\",\n                    cmd=cmd,\n                    retry=retry,\n                    msg=short_output)\n                )\n\n            time.sleep(sleep_time)\n            retry += 1\n\n        else:\n            loop_continue = False\n\n    return _exit_code, _output\n\n\ndef cr_strip(text):\n    \"\"\" Strips new line and carriage return characters from the text\n\n    Args:\n        text: text to process\n    Returns:\n        text: processed text\n    Raises:\n    \"\"\"\n\n    text = text.replace(\"\\n\", \"\")\n    text = text.replace(\"\\r\", \"\")\n\n    return text\n\n\nclass BackupType (IntEnum):\n    \"\"\" Supported backup types \"\"\"\n\n    FULL = 1\n    INCREMENTAL = 2\n\n\nclass SortOrder (object):\n    \"\"\" Define the order used to present information \"\"\"\n\n    ASC = 'ASC'\n    DESC = 'DESC'\n\n\nclass BackupStatus (object):\n    \"\"\" Backup status \"\"\"\n\n    UNDEFINED = -1\n    RUNNING = 1\n    COMPLETED = 2\n    CANCELLED = 3\n    INTERRUPTED = 4\n    FAILED = 5\n    RESTORED = 6\n    ALL = 999\n\n    text = {\n        UNDEFINED: \"undefined\",\n        RUNNING: \"running\",\n        COMPLETED: \"completed\",\n        CANCELLED: \"cancelled\",\n        INTERRUPTED: \"interrupted\",\n        
FAILED: \"failed\",\n        RESTORED: \"restored\",\n        ALL: \"all\"\n    }\n\n\nclass BackupRestoreLib(object):\n    \"\"\" Library of functionalities for the backup restore operations that require information/state to be stored \"\"\"\n\n    STORAGE_EXE = \"/services/storage\"\n\n    MAX_NUMBER_OF_BACKUPS_TO_RETRIEVE = 9999\n    \"\"\"\" Maximum number of backup information to retrieve from the storage layer\"\"\"\n\n    STORAGE_TABLE_BACKUPS = \"backups\"\n    \"\"\" Table name containing the backups information\"\"\"\n\n    JOB_SEM_FILE_PATH = \"/tmp\"\n    \"\"\" Updated by the caller to the proper value \"\"\"\n\n    JOB_SEM_FILE_BACKUP = \".backup.sem\"\n    JOB_SEM_FILE_RESTORE = \".restore.sem\"\n    \"\"\"\" Semaphores information for the handling of the backup/restore synchronization \"\"\"\n\n    # SQLite commands\n    SQLITE_SQLITE = \"sqlite3\"\n    SQLITE_BACKUP = \".backup\"\n    SQLITE_RESTORE_COPY = \"cp\"\n    SQLITE_RESTORE_MOVE = \"mv\"\n\n    # Postgres commands\n    PG_COMMAND_DUMP = \"pg_dump\"\n    PG_COMMAND_RESTORE = \"pg_restore\"\n    PG_COMMAND_PSQL = \"psql\"\n\n    PG_COMMANDS = {PG_COMMAND_DUMP: None,\n                   PG_COMMAND_RESTORE: None,\n                   PG_COMMAND_PSQL: None\n                   }\n    \"\"\"List of Postgres commands to check/validate if they are available and usable\n       and the actual Postgres commands to use \"\"\"\n\n    _MESSAGES_LIST = {\n\n        # Information messages\n        \"i000001\": \"Execution started.\",\n        \"i000002\": \"Execution completed.\",\n\n        # Warning / Error messages\n        \"e000000\": \"general error\",\n        \"e000001\": \"cannot initialize the logger - error details |{0}|\",\n        \"e000002\": \"cannot retrieve the configuration from the manager, trying to retrieve it from file \"\n                   \"- error details |{0}|\",\n        \"e000003\": \"cannot retrieve the configuration from file - error details |{0}|\",\n        \"e000004\": 
\"...\",\n        \"e000005\": \"...\",\n        \"e000006\": \"...\",\n        \"e000007\": \"backup failed.\",\n        \"e000008\": \"cannot execute the backup, either a backup or a restore is already running - pid |{0}|\",\n        \"e000009\": \"...\",\n        \"e000010\": \"directory used to store backups doesn't exist - dir |{0}|\",\n        \"e000011\": \"directory used to store semaphores for backup/restore synchronization doesn't exist - dir |{0}|\",\n        \"e000012\": \"cannot create the configuration cache file, neither FLEDGE_DATA nor FLEDGE_ROOT are defined.\",\n        \"e000013\": \"cannot create the configuration cache file, provided path is not a directory - dir |{0}|\",\n        \"e000014\": \"the identified path of backups doesn't exist, creation was attempted \"\n                   \"- dir |{0}| - error details |{1}|\",\n        \"e000015\": \"The command is not available using the unmanaged approach\"\n                   \" - command |{0}|\",\n        \"e000016\": \"Postgres command is not executable - command |{0}|\",\n        \"e000017\": \"The execution of the Postgres command using the -V option produces an error\"\n                   \" - command |{0}| - output |{1}|\",\n        \"e000018\": \"It is not possible to read data from Postgres\"\n                   \" - command |{0}| - exit code |{1}| - output |{2}|\",\n        \"e000019\": \"The command is not available using the managed approach\"\n                   \" - command |{0}| - full command |{1}|\",\n        \"e000020\": \"It is not possible to evaluate if the storage is managed or unmanaged\"\n                   \" - storage plugin |{0}|\",\n\n    }\n    \"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n    _DIR_MANAGED_FLEDGE_PG_COMMANDS = \"plugins/storage/postgres/pgsql/bin\"\n    \"\"\"Directory for Postgres commands in a managed configuration\"\"\"\n\n    _DB_CONNECTION_STRING = \"dbname='{db}'\"\n\n    _DEFAULT_FLEDGE_ROOT = 
\"/usr/local/fledge\"\n    \"\"\" Default value to use for the FLEDGE_ROOT if the environment $FLEDGE_ROOT is not defined \"\"\"\n\n    _BACKUP_FILE_NAME_PREFIX = \"fledge_backup_\"\n    \"\"\" Prefix used to generate a backup file name \"\"\"\n\n    _CONFIG_CACHE_FILE = \"backup_postgres_configuration_cache.json\"\n    \"\"\" Stores a configuration cache in case the configuration Manager is not available\"\"\"\n\n    # Configuration retrieved from the Configuration Manager\n    _CONFIG_CATEGORY_NAME = 'BACK_REST'\n    _CONFIG_CATEGORY_DESCRIPTION = 'Backup and Restore'\n\n    _CONFIG_DEFAULT = {\n        \"host\": {\n            \"description\": \"Server IP or name to backup/restore\",\n            \"type\": \"string\",\n            \"default\": \"localhost\",\n            \"displayName\": \"Host\"\n        },\n        \"port\": {\n            \"description\": \"PostgreSQL port\",\n            \"type\": \"integer\",\n            \"default\": \"5432\",\n            \"displayName\": \"PostgreSQL Port\"\n        },\n        \"database\": {\n            \"description\": \"Database name\",\n            \"type\": \"string\",\n            \"default\": \"fledge\",\n            \"displayName\": \"DB Name\"\n        },\n        \"schema\": {\n            \"description\": \"Schema\",\n            \"type\": \"string\",\n            \"default\": \"fledge\",\n            \"displayName\": \"DB Schema Name\"\n        },\n        \"database-filename\": {\n            \"description\": \"SQLite database file name\",\n            \"type\": \"string\",\n            \"default\": \"fledge.db\",\n            \"displayName\": \"SQLite DB Filename\"\n        },\n        \"backup-dir\": {\n            \"description\": \"Directory where backups will be created. 
\"\n                           \"If not specified, FLEDGE_BACKUP, FLEDGE_DATA or FLEDGE_ROOT will be used.\",\n            \"type\": \"string\",\n            \"default\": \"none\",\n            \"displayName\": \"Backup Directory\"\n        },\n        \"semaphores-dir\": {\n            \"description\": \"Semaphore directory for backup/restore synchronization. \"\n                           \"If not specified, backup-dir is used.\",\n            \"type\": \"string\",\n            \"default\": \"none\",\n            \"displayName\": \"Semaphore Directory\"\n        },\n        \"retention\": {\n            \"description\": \"Number of backups to maintain (old ones will be deleted)\",\n            \"type\": \"integer\",\n            \"default\": \"5\",\n            \"displayName\": \"Max Backups To Retain\"\n        },\n        \"max_retry\": {\n            \"description\": \"Maximum retries\",\n            \"type\": \"integer\",\n            \"default\": \"5\",\n            \"displayName\": \"Max Retries\"\n        },\n        \"timeout\": {\n            \"description\": \"Timeout in seconds for execution of external commands\",\n            \"type\": \"integer\",\n            \"default\": \"1200\",\n            \"displayName\": \"Timeout (In Seconds)\"\n        },\n        \"restart-max-retries\": {\n            \"description\": \"Maximum number of retries to restart Fledge\",\n            \"type\": \"integer\",\n            \"default\": \"10\",\n            \"displayName\": \"Max Retries To Restart Fledge\"\n        },\n        \"restart-sleep\": {\n            \"description\": \"Sleep time between status checks at Fledge restarts \"\n                           \"to ensure it has started successfully.\",\n            \"type\": \"integer\",\n            \"default\": \"5\",\n            \"displayName\": \"Restart Status Check Interval (In Seconds)\"\n        },\n    }\n\n    config = {}\n\n    _storage = None\n    _logger = None\n\n    def __init__(self, _storage, 
_logger):\n\n        self._storage = _storage\n        self._logger = _logger\n\n        self.config = {}\n\n        # Fledge directories\n        self.dir_fledge_root = \"\"\n        self.dir_fledge_data = \"\"\n        self.dir_fledge_data_etc = \"\"\n        self.dir_fledge_backup = \"\"\n        self.dir_backups = \"\"\n        self.dir_semaphores = \"\"\n\n    def sl_backup_status_create(self, _file_name, _type, _status):\n        \"\"\" Logs the creation of the backup in the Storage layer\n\n        Args:\n            _file_name: file_name used for the backup as a full path\n            _type: backup type {BackupType.FULL|BackupType.INCREMENTAL}\n            _status: backup status, usually BackupStatus.RUNNING\n        Returns:\n        Raises:\n        \"\"\"\n        _logger.debug(\"{func} - file name |{file}| \".format(func=\"sl_backup_status_create\", file=_file_name))\n        payload = payload_builder.PayloadBuilder().INSERT(\n            file_name=_file_name, ts=datetime.datetime.utcnow().strftime(\"%Y-%m-%d %H:%M:%S\"), type=_type,\n            status=_status, exit_code=0).payload()\n        asyncio.get_event_loop().run_until_complete(self._storage.insert_into_tbl(self.STORAGE_TABLE_BACKUPS, payload))\n\n    def sl_backup_status_update(self, _id, _status, _exit_code):\n        \"\"\" Updates the status of the backup using the Storage layer\n\n        Args:\n            _id: Backup's Id to update\n            _status: status of the backup {BackupStatus.COMPLETED|BackupStatus.RESTORED}\n            _exit_code: exit status of the backup/restore execution\n        Returns:\n        Raises:\n        \"\"\"\n        _logger.debug(\"{func} - id |{file}| \".format(func=\"sl_backup_status_update\", file=_id))\n        payload = payload_builder.PayloadBuilder().SET(\n            status=_status, ts=datetime.datetime.utcnow().strftime(\"%Y-%m-%d %H:%M:%S\"), exit_code=_exit_code\n        ).WHERE(['id', '=', _id]).payload()\n        
asyncio.get_event_loop().run_until_complete(self._storage.update_tbl(self.STORAGE_TABLE_BACKUPS, payload))\n\n    def sl_get_backup_details_from_file_name(self, _file_name):\n        \"\"\" Retrieves backup information from file name\n\n        Args:\n            _file_name: file name to search in the Storage layer\n\n        Returns:\n            backup_information: Backup information related to the file name\n\n        Raises:\n            exceptions.DoesNotExist\n            exceptions.NotUniqueBackup\n        \"\"\"\n\n        payload = payload_builder.PayloadBuilder() \\\n            .WHERE(['file_name', '=', _file_name]) \\\n            .payload()\n\n        backups_from_storage = asyncio.get_event_loop().run_until_complete(self._storage.query_tbl_with_payload(self.STORAGE_TABLE_BACKUPS, payload))\n        if backups_from_storage['count'] == 1:\n            backup_information = backups_from_storage['rows'][0]\n        elif backups_from_storage['count'] == 0:\n            raise exceptions.DoesNotExist\n        else:\n            raise exceptions.NotUniqueBackup\n\n        return backup_information\n\n    def check_for_execution_restore(self):\n        \"\"\" Executes all the checks to ensure the prerequisites to execute the restore are met\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n        self._check_commands()\n\n    def check_for_execution_backup(self):\n        \"\"\" Executes all the checks to ensure the prerequisites to execute the backup are met\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n        self._check_commands()\n        self._check_db()\n\n    def _check_db(self):\n        \"\"\" Checks if the database is working properly by reading a sample row from the backups table\n\n        Args:\n        Returns:\n        Raises:\n            exceptions.CannotReadPostgres\n        \"\"\"\n        cmd_psql = self.PG_COMMANDS[self.PG_COMMAND_PSQL]\n        cmd = '{psql} -d {db} -t -c \"SELECT id FROM 
{schema}.{table} LIMIT 1;\"'.format(\n                                                                psql=cmd_psql,\n                                                                db=self.config['database'],\n                                                                schema=self.config['schema'],\n                                                                table=self.STORAGE_TABLE_BACKUPS)\n        _exit_code, output = exec_wait(\n                                        _cmd=cmd,\n                                        _output_capture=True,\n                                        _timeout=self.config['timeout']\n                                        )\n        self._logger.debug(\"{func} - cmd |{cmd}| - exit_code |{exit_code}| output |{output}| \".format(\n                            func=\"_check_db\",\n                            cmd=cmd,\n                            exit_code=_exit_code,\n                            output=cr_strip(output)))\n        if _exit_code != 0:\n            _message = self._MESSAGES_LIST[\"e000018\"].format(cmd, _exit_code, output)\n            self._logger.error(\"{0}\".format(_message))\n            raise exceptions.CannotReadPostgres(_message)\n\n    def _check_commands(self):\n        \"\"\" Identify and checks the Postgres commands\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n        for cmd in self.PG_COMMANDS:\n            cmd_identified = self._check_command_identification(cmd)\n            self._check_command_test(cmd_identified)\n\n    def _check_command_identification(self, cmd_to_identify):\n        \"\"\" Identifies the proper Postgres command to use, 2 possible cases :\n\n        Managed    - command is available in $FLEDGE_ROOT/plugins/storage/postgres/pgsql/bin\n        Unmanaged  - checks using the path and it identifies the used command through 'command -v'\n\n        Args:\n            cmd_to_identify: str - command to identify\n\n        Returns:\n            cmd_identified: str 
- actual identified command to use\n\n        Raises:\n            exceptions.PgCommandUnAvailable\n        \"\"\"\n        is_managed = self._is_plugin_managed(\"postgres\")\n        if is_managed:\n            # Checks for Managed\n            cmd_managed = \"{root}/{path}/{cmd}\".format(\n                                                        root=self.dir_fledge_root,\n                                                        path=self._DIR_MANAGED_FLEDGE_PG_COMMANDS,\n                                                        cmd=cmd_to_identify)\n            if os.path.exists(cmd_managed):\n                cmd_identified = cmd_managed\n            else:\n                _message = self._MESSAGES_LIST[\"e000019\"].format(cmd_to_identify, cmd_managed)\n                self._logger.error(\"{0}\".format(_message))\n                raise exceptions.PgCommandUnAvailable(_message)\n        else:\n            # Checks for Unmanaged\n            cmd = \"command -v \" + cmd_to_identify\n\n            # The timeout command can't be used with 'command'\n            # noinspection PyArgumentEqualDefault\n            _exit_code, output = exec_wait(\n                                            _cmd=cmd,\n                                            _output_capture=True,\n                                            _timeout=0\n                                            )\n            self._logger.debug(\"{func} - cmd |{cmd}| - exit_code |{exit_code}| output |{output}| \".format(\n                                                                            func=\"_check_command_identification\",\n                                                                            cmd=cmd,\n                                                                            exit_code=_exit_code,\n                                                                            output=output))\n            if _exit_code == 0:\n                cmd_identified = cr_strip(output)\n            else:\n           
     _message = self._MESSAGES_LIST[\"e000015\"].format(cmd)\n                self._logger.error(\"{0}\".format(_message))\n                raise exceptions.PgCommandUnAvailable(_message)\n\n        self.PG_COMMANDS[cmd_to_identify] = cmd_identified\n\n        return cmd_identified\n\n    def _is_plugin_managed(self, plugin_to_identify):\n        \"\"\" Identifies the type of plugin, Managed or not, inquiring the storage executable\n\n        Args:\n            plugin_to_identify: str - plugin to evaluate if it is managed or not\n        Returns:\n            type: boolean - True if it is a managed plugin\n        Raises:\n        \"\"\"\n        # noinspection PyUnusedLocal\n        plugin_type = False\n\n        # Inquires the storage\n        file_full_path = self.dir_fledge_root + self.STORAGE_EXE\n        cmd = file_full_path + \" --plugin\"\n\n        # noinspection PyArgumentEqualDefault\n        _exit_code, output = exec_wait(\n                                        _cmd=cmd,\n                                        _output_capture=True,\n                                        _timeout=0\n                                        )\n\n        self._logger.debug(\"{func} - cmd |{cmd}| - exit_code |{exit_code}| output |{output}| \".format(\n            func=\"_is_plugin_managed\",\n            cmd=cmd,\n            exit_code=_exit_code,\n            output=output))\n\n        # Evaluates the storage answer\n        if plugin_to_identify in output:\n            if \"false\" in output:\n                plugin_type = False\n            else:\n                plugin_type = True\n        else:\n            _message = self._MESSAGES_LIST[\"e000020\"].format(plugin_to_identify)\n            self._logger.error(\"{0}\".format(_message))\n            raise exceptions.UndefinedStorage(_message)\n\n        return plugin_type\n\n    def _check_command_test(self, cmd_to_test):\n        \"\"\" Tests if the Postgres command could be successfully launched/used\n\n        
Args:\n            cmd_to_test: str -  Command to test\n\n        Returns:\n        Raises:\n            exceptions.PgCommandUnAvailable\n            exceptions.PgCommandNotExecutable\n        \"\"\"\n        if os.access(cmd_to_test, os.X_OK):\n            cmd = cmd_to_test + \" -V\"\n            _exit_code, output = exec_wait(\n                                            _cmd=cmd,\n                                            _output_capture=True,\n                                            _timeout=self.config['timeout']\n                                            )\n            self._logger.debug(\"{func} - cmd |{cmd}| - exit_code |{exit_code}| output |{output}| \".format(\n                                                                            func=\"_check_command_test\",\n                                                                            cmd=cmd,\n                                                                            exit_code=_exit_code,\n                                                                            output=output))\n            if _exit_code != 0:\n                _message = self._MESSAGES_LIST[\"e000017\"].format(cmd, output)\n                self._logger.error(\"{0}\".format(_message))\n                raise exceptions.PgCommandUnAvailable(_message)\n        else:\n            _message = self._MESSAGES_LIST[\"e000016\"].format(cmd_to_test)\n            self._logger.error(\"{0}\".format(_message))\n            raise exceptions.PgCommandNotExecutable(_message)\n\n    def sl_get_all_backups(self) -> list:\n        \"\"\" Retrieves all records of backup table\n        Args:\n\n        Returns: List of backups\n\n        Raises:\n        \"\"\"\n        payload = payload_builder.PayloadBuilder().SELECT(\"id\", \"file_name\", \"ts\", \"type\", \"status\", \"exit_code\") \\\n            .ALIAS(\"return\", (\"ts\", 'ts')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")).payload()\n        
_logger.debug(\"sl_get_all_backups PAYLOAD: {}\".format(payload))\n        backups = asyncio.get_event_loop().run_until_complete(self._storage.query_tbl_with_payload(\n            self.STORAGE_TABLE_BACKUPS, payload))\n        if 'rows' not in backups:\n            raise StorageServerError\n        _logger.debug(\"{func} - backup list {backups}\".format(func=\"sl_get_all_backups\", backups=backups))\n        return backups['rows']\n\n    def sl_get_backup_details(self, backup_id: int) -> dict:\n        \"\"\" Returns the details of a backup\n\n        Args:\n            backup_id: int - the id of the backup to return\n\n        Returns:\n            backup_information: all the information available related to the requested backup_id\n\n        Raises:\n            exceptions.DoesNotExist\n            exceptions.NotUniqueBackup\n        \"\"\"\n        payload = payload_builder.PayloadBuilder().SELECT(\"id\", \"status\", \"ts\", \"file_name\", \"type\")\\\n            .ALIAS(\"return\", (\"ts\", 'ts')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\"))\\\n            .WHERE(['id', '=', backup_id]).payload()\n\n        backup_from_storage = asyncio.get_event_loop().run_until_complete(self._storage.query_tbl_with_payload(self.STORAGE_TABLE_BACKUPS, payload))\n\n        if backup_from_storage['count'] == 0:\n            raise exceptions.DoesNotExist\n        elif backup_from_storage['count'] == 1:\n            backup_information = backup_from_storage['rows'][0]\n        else:\n            raise exceptions.NotUniqueBackup\n\n        return backup_information\n\n    def retrieve_configuration(self):\n        \"\"\"  Retrieves the configuration either from the manager or from a local file.\n        the local configuration file is used if the configuration manager is not available\n        and updated with the values retrieved from the manager when feasible.\n\n        Args:\n        Returns:\n        Raises:\n            exceptions.ConfigRetrievalError\n        
\"\"\"\n\n        global JOB_SEM_FILE_PATH\n\n        try:\n            self._retrieve_configuration_from_manager()\n        except Exception as _ex:\n            _message = self._MESSAGES_LIST[\"e000002\"].format(_ex)\n            self._logger.warning(_message)\n            try:\n                self._retrieve_configuration_from_file()\n            except Exception as _ex:\n                _message = self._MESSAGES_LIST[\"e000003\"].format(_ex)\n                self._logger.error(_message)\n                raise exceptions.ConfigRetrievalError(_message)\n        else:\n            self._update_configuration_file()\n\n        # Identifies the directory of backups and checks its existence\n        if self.config['backup-dir'] != \"none\":\n            self.dir_backups = self.config['backup-dir']\n        else:\n            self.dir_backups = self.dir_fledge_backup\n\n        self._check_create_path(self.dir_backups)\n\n        # Identifies the directory for the semaphores and checks its existence\n        # Stores semaphores in the _backups_dir if semaphores-dir is not defined\n        if self.config['semaphores-dir'] != \"none\":\n            self.dir_semaphores = self.config['semaphores-dir']\n        else:\n            self.dir_semaphores = self.dir_backups\n            JOB_SEM_FILE_PATH = self.dir_semaphores\n\n        self._check_create_path(self.dir_semaphores)\n\n    def evaluate_paths(self):\n        \"\"\"  Evaluates paths in relation to the environment variables\n             FLEDGE_ROOT, FLEDGE_DATA and FLEDGE_BACKUP\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n        # Evaluates FLEDGE_ROOT\n        if \"FLEDGE_ROOT\" in os.environ:\n            self.dir_fledge_root = os.getenv(\"FLEDGE_ROOT\")\n        else:\n            self.dir_fledge_root = self._DEFAULT_FLEDGE_ROOT\n\n        # Evaluates FLEDGE_DATA\n        if \"FLEDGE_DATA\" in os.environ:\n            self.dir_fledge_data = os.getenv(\"FLEDGE_DATA\")\n        else:\n      
      self.dir_fledge_data = self.dir_fledge_root + \"/data\"\n\n        # Evaluates FLEDGE_BACKUP\n        if \"FLEDGE_BACKUP\" in os.environ:\n            self.dir_fledge_backup = os.getenv(\"FLEDGE_BACKUP\")\n        else:\n            self.dir_fledge_backup = self.dir_fledge_data + \"/backup\"\n\n        # Evaluates etc directory\n        self.dir_fledge_data_etc = self.dir_fledge_data + \"/etc\"\n\n        self._check_create_path(self.dir_fledge_backup)\n        self._check_create_path(self.dir_fledge_data_etc)\n\n    def _check_create_path(self, path):\n        \"\"\"  Checks whether the path exists and creates it if needed\n        Args:\n        Returns:\n        Raises:\n            exceptions.InvalidPath\n        \"\"\"\n        # Check the path existence\n        if not os.path.isdir(path):\n            # The path doesn't exist, try to create it\n            try:\n                os.makedirs(path)\n            except OSError as _ex:\n                _message = self._MESSAGES_LIST[\"e000014\"].format(path, _ex)\n                self._logger.error(\"{0}\".format(_message))\n                raise exceptions.InvalidPath(_message)\n\n    def _retrieve_configuration_from_manager(self):\n        \"\"\" Retrieves the configuration from the configuration manager\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n        _event_loop = asyncio.get_event_loop()\n        cfg_manager = ConfigurationManager(self._storage)\n\n        _event_loop.run_until_complete(cfg_manager.create_category(\n            self._CONFIG_CATEGORY_NAME,\n            self._CONFIG_DEFAULT,\n            self._CONFIG_CATEGORY_DESCRIPTION, display_name=\"Backup & Restore\"))\n        _event_loop.run_until_complete(cfg_manager.create_child_category(\n            \"Utilities\", [self._CONFIG_CATEGORY_NAME]))\n        self._config_from_manager = _event_loop.run_until_complete(cfg_manager.get_category_all_items\n                                                                   
(self._CONFIG_CATEGORY_NAME))\n        self._decode_configuration_from_manager(self._config_from_manager)\n\n    def _decode_configuration_from_manager(self, _config_from_manager):\n        \"\"\" Decodes a JSON configuration as generated by the configuration manager\n\n        Args:\n            _config_from_manager: JSON configuration to decode\n        Returns:\n        Raises:\n        \"\"\"\n        self.config['host'] = _config_from_manager['host']['value']\n        self.config['port'] = int(_config_from_manager['port']['value'])\n        self.config['database'] = _config_from_manager['database']['value']\n        self.config['database-filename'] = _config_from_manager['database-filename']['value']\n        self.config['schema'] = _config_from_manager['schema']['value']\n        self.config['backup-dir'] = _config_from_manager['backup-dir']['value']\n        self.config['semaphores-dir'] = _config_from_manager['semaphores-dir']['value']\n        self.config['retention'] = int(_config_from_manager['retention']['value'])\n        self.config['max_retry'] = int(_config_from_manager['max_retry']['value'])\n        self.config['timeout'] = int(_config_from_manager['timeout']['value'])\n        self.config['restart-max-retries'] = int(_config_from_manager['restart-max-retries']['value'])\n        self.config['restart-sleep'] = int(_config_from_manager['restart-sleep']['value'])\n\n    def _retrieve_configuration_from_file(self):\n        \"\"\" Retrieves the configuration from the local file\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n        file_full_path = self._identify_configuration_file_path()\n        with open(file_full_path) as file:\n            self._config_from_manager = json.load(file)\n        self._decode_configuration_from_manager(self._config_from_manager)\n\n    def _update_configuration_file(self):\n        \"\"\" Updates the configuration file with the values retrieved from the manager.\n\n        Args:\n        
Returns:\n        Raises:\n        \"\"\"\n        file_full_path = self._identify_configuration_file_path()\n        with open(file_full_path, 'w') as file:\n            json.dump(self._config_from_manager, file)\n\n    def _identify_configuration_file_path(self):\n        \"\"\" Identifies the path of the configuration cache file\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n        file_full_path = self.dir_fledge_data_etc + \"/\" + self._CONFIG_CACHE_FILE\n        return file_full_path\n\n\nclass Job:\n    \"\"\" Handles backup and restore operations synchronization \"\"\"\n\n    @classmethod\n    def _pid_file_retrieve(cls, file_name):\n        \"\"\" Retrieves the PID from the semaphore file\n\n        Args:\n            file_name: full path of the semaphore file\n        Returns:\n            pid: pid retrieved from the semaphore file\n        Raises:\n        \"\"\"\n        with open(file_name) as f:\n            pid = f.read()\n        pid = int(pid)\n        return pid\n\n    @classmethod\n    def _pid_file_create(cls, file_name, pid):\n        \"\"\" Creates the semaphore file having the PID as content\n\n        Args:\n            file_name: full path of the semaphore file\n            pid: pid to store into the semaphore file\n        Returns:\n        Raises:\n        \"\"\"\n        with open(file_name, \"w\") as file:\n            file.write(str(pid))\n\n    @classmethod\n    def _check_semaphore_file(cls, file_name):\n        \"\"\" Evaluates whether a specific backup or restore operation is in execution\n\n        Args:\n            file_name: semaphore file, full path\n        Returns:\n            pid: 0 if no operation is in execution, otherwise the pid retrieved from the semaphore file\n        Raises:\n        \"\"\"\n        _logger.debug(\"{func}\".format(func=\"check_semaphore_file\"))\n        pid = 0\n        if os.path.exists(file_name):\n            pid = cls._pid_file_retrieve(file_name)\n\n            # 
Check if the process is really running\n            try:\n                os.getpgid(pid)\n            except ProcessLookupError:\n                # Process is not running, removing the semaphore file\n                os.remove(file_name)\n                _message = _MESSAGES_LIST[\"e000002\"].format(file_name, pid)\n                _logger.warning(\"{0}\".format(_message))\n                pid = 0\n\n        return pid\n\n    @classmethod\n    def is_running(cls):\n        \"\"\" Evaluates whether another backup or restore job is already running\n\n        Args:\n        Returns:\n            pid: 0 if no operation is in execution, otherwise the pid retrieved from the semaphore file\n        Raises:\n        \"\"\"\n        _logger.debug(\"{func}\".format(func=\"is_running\"))\n\n        # Checks if a backup process is still running\n        full_path_backup = JOB_SEM_FILE_PATH + \"/\" + BackupRestoreLib.JOB_SEM_FILE_BACKUP\n        pid = cls._check_semaphore_file(full_path_backup)\n\n        # Checks if a restore process is still running\n        if pid == 0:\n            full_path_restore = JOB_SEM_FILE_PATH + \"/\" + BackupRestoreLib.JOB_SEM_FILE_RESTORE\n            pid = cls._check_semaphore_file(full_path_restore)\n\n        return pid\n\n    @classmethod\n    def set_as_running(cls, file_name, pid):\n        \"\"\" Sets a job as running\n\n        Args:\n            file_name: semaphore file, either for backup or restore\n            pid: pid of the process to be stored within the semaphore file\n        Returns:\n        Raises:\n        \"\"\"\n        _logger.debug(\"{func}\".format(func=\"set_as_running\"))\n        full_path = JOB_SEM_FILE_PATH + \"/\" + file_name\n\n        if os.path.exists(full_path):\n            os.remove(full_path)\n            _message = _MESSAGES_LIST[\"e000001\"].format(full_path)\n            _logger.warning(\"{0}\".format(_message))\n\n        cls._pid_file_create(full_path, pid)\n\n    @classmethod\n    def set_as_completed(cls, 
file_name):\n        \"\"\" Sets a job as completed\n\n        Args:\n            file_name: semaphore file either for backup or restore operations\n        Returns:\n        Raises:\n        \"\"\"\n\n        _logger.debug(\"{func}\".format(func=\"set_as_completed\"))\n        full_path = JOB_SEM_FILE_PATH + \"/\" + file_name\n\n        if os.path.exists(full_path):\n            os.remove(full_path)\n\n\nif __name__ == \"__main__\":\n\n    message = _MESSAGES_LIST[\"e000003\"]\n    print(message)\n\n    if False:\n        # Used to assign the proper objects type without actually executing them\n        _storage = StorageClientAsync(\"127.0.0.1\", \"0\")\n        _logger = logger.setup(_MODULE_NAME)\n"
  },
  {
    "path": "python/fledge/plugins/storage/common/restore.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n# Copyright (C) 2017\n\n\"\"\" Common functionalities for the Restore, they are also used for the integration with the API.\n\"\"\"\nimport logging\nimport uuid\n\nfrom fledge.services.core import server\nfrom fledge.common import logger\n\n__author__ = \"Stefano Simonelli\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n_MESSAGES_LIST = {\n\n    # Information messages\n    \"i000001\": \"Execution started.\",\n    \"i000002\": \"Execution completed.\",\n\n    # Warning / Error messages\n    \"e000001\": \"cannot initialize the logger - error details |{0}|\",\n    \"e000002\": \"an error occurred during the restore operation - error details |{0}|\",\n    \"e000003\": \"invalid command line arguments - error details |{0}|\",\n    \"e000004\": \"cannot complete the initialization - error details |{0}|\",\n}\n\"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n_LOGGER_DESTINATION = logger.SYSLOG\n\n\nclass Restore(object):\n    \"\"\" Provides external functionality/integration to Restore a Backup\n    \"\"\"\n\n    _MODULE_NAME = \"fledge_restore_common\"\n\n    SCHEDULE_RESTORE_ON_DEMAND = \"8d4d3ca0-de80-11e7-80c1-9a214cf093ae\"\n\n    _MESSAGES_LIST = {\n\n        # Information messages\n        \"i000000\": \"general information\",\n        \"i000001\": \"On demand restore successfully launched.\",\n\n        # Warning / Error messages\n        \"e000000\": \"general error\",\n        \"e000001\": \"cannot launch on demand restore - error details |{0}|\",\n    }\n    \"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n    _logger = None\n\n    def __init__(self, _storage):\n        self._storage = _storage\n\n        if not Restore._logger:\n            Restore._logger = logger.setup(self._MODULE_NAME, destination=_LOGGER_DESTINATION, level=logging.INFO)\n\n    async def 
restore_backup(self, backup_id: int):\n        \"\"\" Starts an asynchronous restore process to restore the state of Fledge.\n\n        Important Note : The current version restores the latest backup\n\n        Args:\n            backup_id: int - the id of the backup to restore from\n\n        Returns:\n            status: str - {\"running\"|\"failed\"}\n        Raises:\n        \"\"\"\n\n        self._logger.debug(\"{func} - backup id |{backup_id}|\".format(func=\"restore_backup\", backup_id=backup_id))\n\n        try:\n            await server.Server.scheduler.queue_task(uuid.UUID(Restore.SCHEDULE_RESTORE_ON_DEMAND))\n            # Setting scheduler restore backup id\n            server.Server.scheduler._restore_backup_id = backup_id\n            self._logger.debug(\"Scheduler restore_backup_id: {}\".format(server.Server.scheduler._restore_backup_id))\n            _message = self._MESSAGES_LIST[\"i000001\"]\n            Restore._logger.info(\"{0}\".format(_message))\n            status = \"running\"\n        except Exception as _ex:\n            _message = self._MESSAGES_LIST[\"e000001\"].format(_ex)\n            Restore._logger.error(\"{0}\".format(_message))\n            status = \"failed\"\n        return status\n"
  },
  {
    "path": "python/fledge/plugins/storage/postgres/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/plugins/storage/postgres/backup_restore/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/plugins/storage/postgres/backup_restore/backup_postgres.py",
    "content": "# -*- coding: utf-8 -*-\n# Copyright (C) 2017, 2018\n\n\"\"\" Backups the entire Fledge repository into a file in the local filesystem,\nit executes a full warm backup.\n\nThe information about executed backups are stored into the Storage Layer.\n\nThe parameters for the execution are retrieved from the configuration manager.\nIt could work also without the configuration manager,\nretrieving the parameters for the execution from the local file 'backup_configuration_cache.json'.\n\n\"\"\"\n\nimport sys\nimport time\nimport os\nimport uuid\nimport asyncio\nimport json\nimport tarfile\n\nfrom fledge.common import logger\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.process import FledgeProcess\nfrom fledge.common.plugin_discovery import PluginDiscovery\nfrom fledge.common.storage_client import payload_builder\nfrom fledge.services.core import server\nfrom fledge.services.core.api.service import get_service_installed\n\nimport fledge.plugins.storage.postgres.backup_restore.lib as lib\nimport fledge.plugins.storage.postgres.backup_restore.exceptions as exceptions\n\n__author__ = \"Stefano Simonelli\"\n__copyright__ = \"Copyright (c) 2017, 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_MODULE_NAME = \"fledge_backup_postgres_module\"\n\n_MESSAGES_LIST = {\n\n    # Information messages\n    \"i000001\": \"Execution started.\",\n    \"i000002\": \"Execution completed.\",\n\n    # Warning / Error messages\n    \"e000001\": \"cannot initialize the logger - error details |{0}|\",\n    \"e000002\": \"an error occurred during the backup operation - error details |{0}|\",\n    \"e000004\": \"cannot complete the initialization - error details |{0}|\",\n}\n\"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n\n# Log definitions\n_logger = None\n\n_LOG_LEVEL_DEBUG = 10\n_LOG_LEVEL_INFO = 20\n\n_LOGGER_LEVEL = _LOG_LEVEL_INFO\n_LOGGER_DESTINATION = logger.SYSLOG\n\n\nclass 
Backup(object):\n    \"\"\" Provides external functionality/integration for Backup operations\n\n        the constructor expects to receive a reference to a StorageClient object to being able to access\n        the Storage Layer\n    \"\"\"\n\n    _MODULE_NAME = \"fledge_backup_postgres_api\"\n\n    _SCHEDULE_BACKUP_ON_DEMAND = \"fac8dae6-d8d1-11e7-9296-cec278b6b50a\"\n\n    _MESSAGES_LIST = {\n\n        # Information messages\n        \"i000000\": \"general information\",\n        \"i000003\": \"On demand backup successfully launched.\",\n\n        # Warning / Error messages\n        \"e000000\": \"general error\",\n        \"e000001\": \"cannot delete/purge backup file on file system - id |{0}| - file name |{1}| error details |{2}|\",\n        \"e000002\": \"cannot delete/purge backup information on the storage layer \"\n                   \"- id |{0}| - file name |{1}| error details |{2}|\",\n        \"e000003\": \"cannot retrieve information for the backup id |{0}|\",\n        \"e000004\": \"cannot launch on demand backup - error details |{0}|\",\n    }\n    \"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n    _logger = None\n    STORAGE_TABLE_BACKUPS = None\n\n    def __init__(self, _storage):\n        self._storage = _storage\n\n        if not Backup._logger:\n            Backup._logger = logger.setup(self._MODULE_NAME,\n                                          destination=_LOGGER_DESTINATION,\n                                          level=_LOGGER_LEVEL)\n\n        self._backup_lib = lib.BackupRestoreLib(self._storage, self._logger)\n        self.STORAGE_TABLE_BACKUPS = self._backup_lib.STORAGE_TABLE_BACKUPS\n\n    async def get_all_backups(\n                                self,\n                                limit: int,\n                                skip: int,\n                                status: [lib.BackupStatus, None],\n                                sort_order: lib.SortOrder = lib.SortOrder.DESC) -> list:\n\n        
\"\"\" Returns a list of backups sorted in chronological order with the most recent backup first.\n\n        Args:\n            limit: int - limit the number of backups returned to the number given\n            skip: int - skip the number of backups specified before returning backups;\n                  this, in conjunction with the limit option, allows for a paged interface to be built\n            status: lib.BackupStatus - limit the returned backups to only those with the specified status,\n                    None = retrieves information for all the backups\n            sort_order: lib.SortOrder - Defines the order used to present information, DESC by default\n\n        Returns:\n            backups_information: all the information available related to the requested backups\n\n        Raises:\n        \"\"\"\n        payload = payload_builder.PayloadBuilder().SELECT(\"id\", \"status\", \"ts\", \"file_name\", \"type\") \\\n            .ALIAS(\"return\", (\"ts\", 'ts')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\"))\n        if status:\n            payload.WHERE(['status', '=', status])\n\n        backups_from_storage = await self._storage.query_tbl_with_payload(self._backup_lib.STORAGE_TABLE_BACKUPS, payload.payload())\n        backups_information = backups_from_storage['rows']\n\n        return backups_information\n\n    async def get_backup_details(self, backup_id: int) -> dict:\n        \"\"\" Returns the details of a backup\n\n        Args:\n            backup_id: int - the id of the backup to return\n\n        Returns:\n            backup_information: all the information available related to the requested backup_id\n\n        Raises:\n            exceptions.DoesNotExist\n            exceptions.NotUniqueBackup\n        \"\"\"\n        payload = payload_builder.PayloadBuilder().SELECT(\"id\", \"status\", \"ts\", \"file_name\", \"type\") \\\n            .ALIAS(\"return\", (\"ts\", 'ts')).FORMAT(\"return\", (\"ts\", 
\"YYYY-MM-DD HH24:MI:SS.MS\")) \\\n            .WHERE(['id', '=', backup_id]).payload()\n\n        backup_from_storage = await self._storage.query_tbl_with_payload(self.STORAGE_TABLE_BACKUPS, payload)\n\n        if backup_from_storage['count'] == 0:\n            raise exceptions.DoesNotExist\n        elif backup_from_storage['count'] == 1:\n            backup_information = backup_from_storage['rows'][0]\n        else:\n            raise exceptions.NotUniqueBackup\n\n        return backup_information\n\n    async def delete_backup(self, backup_id: int):\n        \"\"\" Deletes a backup\n\n        Args:\n            backup_id: int - the id of the backup to delete\n\n        Returns:\n        Raises:\n        \"\"\"\n\n        try:\n            backup_information = await self.get_backup_details(backup_id)\n            file_name = backup_information['file_name']\n\n            # Deletes backup file from the file system\n            if os.path.exists(file_name):\n                try:\n                    os.remove(file_name)\n                except Exception as _ex:\n                    _message = self._MESSAGES_LIST[\"e000001\"].format(backup_id, file_name, _ex)\n                    Backup._logger.error(_message)\n                    raise\n\n            # Deletes backup information from the Storage layer\n            # only if it was possible to delete the file from the file system\n            try:\n                await self._delete_backup_information(backup_id)\n            except Exception as _ex:\n                _message = self._MESSAGES_LIST[\"e000002\"].format(backup_id, file_name, _ex)\n                self._logger.error(_message)\n                raise\n        except exceptions.DoesNotExist:\n            _message = self._MESSAGES_LIST[\"e000003\"].format(backup_id)\n            self._logger.warning(_message)\n            raise\n\n    async def _delete_backup_information(self, _id):\n        \"\"\" Deletes backup information from the Storage layer\n\n        
Args:\n            _id: Backup id to delete\n        Returns:\n        Raises:\n        \"\"\"\n        payload = payload_builder.PayloadBuilder() \\\n            .WHERE(['id', '=', _id]) \\\n            .payload()\n        await self._storage.delete_from_tbl(self._backup_lib.STORAGE_TABLE_BACKUPS, payload)\n\n    async def create_backup(self):\n        \"\"\" Run a backup task using the scheduler on-demand schedule mechanism to run the script,\n            the backup will proceed asynchronously.\n\n        Args:\n        Returns:\n            status: str - {\"running\"|\"failed\"}\n        Raises:\n        \"\"\"\n        self._logger.debug(\"{func}\".format(func=\"create_backup\"))\n        try:\n            await server.Server.scheduler.queue_task(uuid.UUID(Backup._SCHEDULE_BACKUP_ON_DEMAND))\n            _message = self._MESSAGES_LIST[\"i000003\"]\n            Backup._logger.info(\"{0}\".format(_message))\n            status = \"running\"\n        except Exception as _ex:\n            _message = self._MESSAGES_LIST[\"e000004\"].format(_ex)\n            Backup._logger.error(\"{0}\".format(_message))\n            status = \"failed\"\n\n        return status\n\n\nclass BackupProcess(FledgeProcess):\n    \"\"\" Backups the entire Fledge repository into a file in the local filesystem,\n        it executes a full warm backup\n    \"\"\"\n\n    _MODULE_NAME = \"fledge_backup_postgres_process\"\n\n    _BACKUP_FILE_NAME_PREFIX = \"fledge_backup_\"\n    \"\"\" Prefix used to generate a backup file name \"\"\"\n\n    _MESSAGES_LIST = {\n\n        # Information messages\n        \"i000001\": \"Execution started.\",\n        \"i000002\": \"Execution completed.\",\n\n        # Warning / Error messages\n        \"e000000\": \"general error\",\n        \"e000001\": \"cannot initialize the logger - error details |{0}|\",\n        \"e000002\": \"cannot retrieve the configuration from the manager, trying retrieving from file \"\n                   \"- error details |{0}|\",\n     
   \"e000003\": \"cannot retrieve the configuration from file - error details |{0}|\",\n        \"e000004\": \"...\",\n        \"e000005\": \"...\",\n        \"e000006\": \"...\",\n        \"e000007\": \"backup failed.\",\n        \"e000008\": \"cannot execute the backup, either a backup or a restore is already running - pid |{0}|\",\n        \"e000009\": \"...\",\n        \"e000010\": \"directory used to store backups doesn't exist - dir |{0}|\",\n        \"e000011\": \"directory used to store semaphores for backup/restore synchronization doesn't exist - dir |{0}|\",\n        \"e000012\": \"cannot create the configuration cache file, neither FLEDGE_DATA nor FLEDGE_ROOT are defined.\",\n        \"e000013\": \"cannot create the configuration cache file, provided path is not a directory - dir |{0}|\",\n        \"e000014\": \"the identified path of backups doesn't exists, creation was tried \"\n                   \"- dir |{0}| - error details |{1}|\",\n        \"e000015\": \"The command is not available neither using the unmanaged approach\"\n                   \" - command |{0}|\",\n        \"e000016\": \"Postgres command is not executable - command |{0}|\",\n        \"e000017\": \"The execution of the Postgres command using the -V option produce an error\"\n                   \" - command |{0}| - output |{1}|\",\n        \"e000018\": \"It is not possible to read data from Postgres\"\n                   \" - command |{0}| - exit code |{1}| - output |{2}|\",\n        \"e000019\": \"The command is not available using the managed approach\"\n                   \" - command |{0}|\",\n\n    }\n    \"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n    _logger = None\n\n    def __init__(self):\n\n        super().__init__()\n\n        if not self._logger:\n            self._logger = logger.setup(self._MODULE_NAME,\n                                        destination=_LOGGER_DESTINATION,\n                                        level=_LOGGER_LEVEL)\n\n  
      self._backup = Backup(self._storage_async)\n        self._backup_lib = lib.BackupRestoreLib(self._storage_async, self._logger)\n\n        self._job = lib.Job()\n\n        # Creates the object references used by the library\n        lib._logger = self._logger\n        lib._storage = self._storage_async\n\n    def _generate_file_name(self):\n        \"\"\" Generates the file name for the backup operation; it uses hours/minutes/seconds for the file name generation\n\n        Args:\n        Returns:\n            _backup_file: generated file name\n        Raises:\n        \"\"\"\n        self._logger.debug(\"{func}\".format(func=\"_generate_file_name\"))\n\n        # Evaluates the parameters\n        execution_time = time.strftime(\"%Y_%m_%d_%H_%M_%S\")\n\n        full_file_name = self._backup_lib.dir_backups + \"/\" + self._BACKUP_FILE_NAME_PREFIX + execution_time\n        ext = \"dump\"\n\n        _backup_file = \"{file}.{ext}\".format(file=full_file_name, ext=ext)\n\n        return _backup_file\n\n    def init(self):\n        \"\"\" Sets up the correct state for the execution of the backup\n\n        Args:\n        Returns:\n        Raises:\n            exceptions.BackupOrRestoreAlreadyRunning\n        \"\"\"\n        self._logger.debug(\"{func}\".format(func=\"init\"))\n        self._backup_lib.evaluate_paths()\n        self._backup_lib.retrieve_configuration()\n        self._backup_lib.check_for_execution_backup()\n\n        # Checks for backup/restore synchronization\n        pid = self._job.is_running()\n        if pid == 0:\n            # no job is running\n            pid = os.getpid()\n            self._job.set_as_running(self._backup_lib.JOB_SEM_FILE_BACKUP, pid)\n        else:\n            _message = self._MESSAGES_LIST[\"e000008\"].format(pid)\n            self._logger.warning(\"{0}\".format(_message))\n            raise exceptions.BackupOrRestoreAlreadyRunning\n\n    def execute_backup(self):\n        \"\"\" Executes the backup functionality\n\n      
  Args:\n        Returns:\n        Raises:\n            exceptions.BackupFailed\n        \"\"\"\n\n        self._logger.debug(\"{func}\".format(func=\"execute_backup\"))\n        self._purge_old_backups()\n        backup_file = self._generate_file_name()\n        backup_file_tar_base, dummy = os.path.splitext(backup_file)\n        backup_file_tar = backup_file_tar_base + \".tar.gz\"\n        self._logger.debug(\"execute_backup - backup_file  :{}: backup_file_tar :{}: -\".format(backup_file,\n                                                                                              backup_file_tar))\n        self._backup_lib.sl_backup_status_create(backup_file_tar, lib.BackupType.FULL, lib.BackupStatus.RUNNING)\n        # Run backup\n        status, exit_code = self._run_backup_command(backup_file)\n\n        # Create tar file\n        t = tarfile.open(backup_file_tar, \"w:gz\")\n        t.add(backup_file, arcname=os.path.basename(backup_file))\n        # Add external scripts if any\n        backup_path = self._backup_lib.dir_fledge_data + \"/scripts\"\n        if os.path.isdir(backup_path):\n            t.add(backup_path, arcname=os.path.basename(backup_path))\n        # Add data/etc directory\n        t.add(self._backup_lib.dir_fledge_data_etc, arcname=os.path.basename(self._backup_lib.dir_fledge_data_etc))\n        # Add software both plugins & services\n        data = {\n            \"plugins\": PluginDiscovery.get_plugins_installed(),\n            \"services\": get_service_installed()\n        }\n        temp_software_file = \"{}/software.json\".format(self._backup_lib.dir_backups)\n        with open(temp_software_file, 'w') as outfile:\n            json.dump(data, outfile, indent=4)\n        t.add(temp_software_file, arcname=os.path.basename(temp_software_file))\n        t.close()\n        # Delete the temporary files\n        os.remove(backup_file)\n        os.remove(temp_software_file)\n\n        # Get backup details\n        backup_information = 
self._backup_lib.sl_get_backup_details_from_file_name(backup_file_tar)\n        # Update backup id, status, exit code\n        self._backup_lib.sl_backup_status_update(backup_information['id'], status, exit_code)\n        # Audit trail entry\n        audit = AuditLogger(self._storage_async)\n        loop = asyncio.get_event_loop()\n        if status != lib.BackupStatus.COMPLETED:\n            self._logger.error(self._MESSAGES_LIST[\"e000007\"])\n            loop.run_until_complete(audit.information('BKEXC', {'status': 'failed'}))\n            raise exceptions.BackupFailed\n        else:\n            loop.run_until_complete(audit.information('BKEXC', {'status': 'completed'}))\n\n    def _purge_old_backups(self):\n        \"\"\"  Deletes old backups according to the retention parameter\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        backups_info = asyncio.get_event_loop().run_until_complete(self._backup.get_all_backups(\n                                            self._backup_lib.MAX_NUMBER_OF_BACKUPS_TO_RETRIEVE,\n                                            0,\n                                            None,\n                                            lib.SortOrder.ASC))\n\n        # Evaluates which backups should be deleted\n        backups_n = len(backups_info)\n        # -1 so at the end of the current backup up to 'retention' backups will be available\n        last_to_delete = backups_n - (self._backup_lib.config['retention'] - 1)\n\n        if last_to_delete > 0:\n\n            # Deletes backups\n            backups_to_delete = backups_info[:last_to_delete]\n\n            for row in backups_to_delete:\n                backup_id = row['id']\n                file_name = row['file_name']\n\n                self._logger.debug(\"{func} - id |{id}| - file_name |{file}|\".format(func=\"_purge_old_backups\",\n                                                                                    id=backup_id,\n                            
                                                        file=file_name))\n                asyncio.get_event_loop().run_until_complete(self._backup.delete_backup(backup_id))\n\n    def _run_backup_command(self, _backup_file):\n        \"\"\" Backs up the entire Fledge repository into a file in the local file system\n\n        Args:\n            _backup_file: backup file to create as a full path\n        Returns:\n            _status: status of the backup\n            _exit_code: exit status of the operation, 0=Successful\n        Raises:\n        \"\"\"\n\n        self._logger.debug(\"{func} - file_name |{file}|\".format(func=\"_run_backup_command\",\n                                                                file=_backup_file))\n\n        pg_cmd = self._backup_lib.PG_COMMANDS[self._backup_lib.PG_COMMAND_DUMP]\n\n        # Prepares the backup command\n        cmd = \"{cmd} {options} {db} > {file}\".format(\n                                                cmd=pg_cmd,\n                                                options=\"--serializable-deferrable -Fc\",\n                                                db=self._backup_lib.config['database'],\n                                                file=_backup_file\n        )\n\n        # Executes the backup waiting for the completion and using a retry mechanism\n        # noinspection PyArgumentEqualDefault\n        _exit_code, output = lib.exec_wait_retry(cmd,\n                                                 output_capture=True,\n                                                 exit_code_ok=0,\n                                                 max_retry=self._backup_lib.config['max_retry'],\n                                                 timeout=self._backup_lib.config['timeout']\n                                                 )\n\n        if _exit_code == 0:\n            _status = lib.BackupStatus.COMPLETED\n        else:\n            _status = lib.BackupStatus.FAILED\n\n        self._logger.debug(\"{func} - 
status |{status}| - exit_code |{exit_code}| \"\n                           \"- cmd |{cmd}|  output |{output}| \".format(\n                                                                        func=\"_run_backup_command\",\n                                                                        status=_status,\n                                                                        exit_code=_exit_code,\n                                                                        cmd=cmd,\n                                                                        output=output))\n\n        return _status, _exit_code\n\n    def shutdown(self):\n        \"\"\" Sets the correct state to terminate the execution\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n        self._logger.debug(\"{func}\".format(func=\"shutdown\"))\n        self._job.set_as_completed(self._backup_lib.JOB_SEM_FILE_BACKUP)\n\n    def run(self):\n        \"\"\"  Creates a new backup\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n        self.init()\n\n        try:\n            self.execute_backup()\n        except Exception as _ex:\n            _message = _MESSAGES_LIST[\"e000002\"].format(_ex)\n            _logger.error(_message)\n            self.shutdown()\n            raise exceptions.BackupFailed(_message)\n        else:\n            self.shutdown()\n\n\nif __name__ == \"__main__\":\n    # Initializes the logger\n    try:\n        _logger = logger.setup(_MODULE_NAME,\n                               destination=_LOGGER_DESTINATION,\n                               level=_LOGGER_LEVEL)\n    except Exception as ex:\n        message = _MESSAGES_LIST[\"e000001\"].format(str(ex))\n        current_time = time.strftime(\"%Y-%m-%d %H:%M:%S\")\n        print(\"[FLEDGE] {0} - ERROR - {1}\".format(current_time, message), file=sys.stderr)\n        sys.exit(1)\n\n    # Initializes FledgeProcess and Backup classes - handling the command line parameters\n    try:\n        
backup_process = BackupProcess()\n    except Exception as ex:\n        message = _MESSAGES_LIST[\"e000004\"].format(ex)\n        _logger.exception(message)\n        _logger.info(_MESSAGES_LIST[\"i000002\"])\n        sys.exit(1)\n\n    if not backup_process.is_dry_run():\n        try:\n            # noinspection PyProtectedMember\n            _logger.debug(\"{module} - name |{name}| - address |{addr}| - port |{port}|\".format(\n                module=_MODULE_NAME,\n                name=backup_process._name,\n                addr=backup_process._core_management_host,\n                port=backup_process._core_management_port))\n            _logger.info(_MESSAGES_LIST[\"i000001\"])\n            backup_process.run()\n            _logger.info(_MESSAGES_LIST[\"i000002\"])\n            sys.exit(0)\n        except Exception as ex:\n            message = _MESSAGES_LIST[\"e000002\"].format(ex)\n            _logger.exception(message)\n            backup_process.shutdown()\n            sys.exit(1)\n    else:\n        # Put any configuration here if required for the backup\n        sys.exit()\n"
  },
  {
    "path": "python/fledge/plugins/storage/postgres/backup_restore/exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Exceptions module \"\"\"\n\n\nclass SQLCommandExecutionError(Exception):\n    \"\"\" the SQL command generates an error \"\"\"\n    pass\n\n\nclass ConfigRetrievalError(Exception):\n    \"\"\" Unable to retrieve the parameters from the configuration manager \"\"\"\n    pass\n\n\nclass BackupOrRestoreAlreadyRunning(Exception):\n    \"\"\" Backup or restore cannot proceed as another operation is already running \"\"\"\n    pass\n\n\nclass InitializationFailed(Exception):\n    \"\"\" Cannot complete the initialization \"\"\"\n    pass\n\n\nclass BackupFailed(Exception):\n    \"\"\" An error occurred during the backup operation \"\"\"\n    pass\n\n\nclass RestoreFailed(Exception):\n    \"\"\" An error occurred during the restore operation \"\"\"\n    pass\n\n\nclass NotUniqueBackup(Exception):\n    \"\"\" There are more than one backups having the same backup id \"\"\"\n    pass\n\n\nclass BackupsDirDoesNotExist(Exception):\n    \"\"\" Directory used to store backups doesn't exist \"\"\"\n    pass\n\n\nclass SemaphoresDirDoesNotExist(Exception):\n    \"\"\" Directory used to store semaphores for backup/restore synchronization doesn't exist \"\"\"\n    pass\n\n\nclass DoesNotExist(Exception):\n    \"\"\" The requested backup id doesn't exist \"\"\"\n    pass\n\n\nclass CannotCreateConfigurationCacheFile(Exception):\n    \"\"\" It is not possible to create the configuration cache file to store information retrieved from the\n        configuration manager \"\"\"\n    pass\n\n\nclass InvalidBackupsPath(Exception):\n    \"\"\" The identified backups' path is not a valid directory \"\"\"\n    pass\n\n\nclass InvalidPath(Exception):\n    \"\"\" The identified path is not a valid directory \"\"\"\n    pass\n\n\nclass ArgumentParserError(Exception):\n    \"\"\" Invalid command line arguments \"\"\"\n    pass\n\n\nclass FledgeStartError(RuntimeError):\n    
\"\"\" Unable to start Fledge \"\"\"\n    pass\n\n\nclass FledgeStopError(RuntimeError):\n    \"\"\" Unable to stop Fledge \"\"\"\n\n\nclass PgCommandUnAvailable(Exception):\n    \"\"\" Postgres command is not available neither using the managed nor the unmanaged approach \"\"\"\n    pass\n\n\nclass PgCommandNotExecutable(Exception):\n    \"\"\" Postgres command is not executable \"\"\"\n    pass\n\n\nclass CannotReadPostgres(Exception):\n    \"\"\" It is not possible to read data from Postgres \"\"\"\n    pass\n\n\nclass NoBackupAvailableError(RuntimeError):\n    \"\"\" No backup in the proper state is available \"\"\"\n    pass\n\n\nclass FileNameError(RuntimeError):\n    \"\"\" Impossible to identify an unique backup to restore \"\"\"\n    pass\n\n\nclass InvalidFledgeEnvironment(RuntimeError):\n    \"\"\" It is not possible to determine the environment in which the code is running\n    neither Deployment nor Development \"\"\"\n    pass\n\n\nclass UndefinedStorage(Exception):\n    \"\"\" It is not possible to evaluate if the storage is managed or unmanaged \"\"\"\n    pass\n"
  },
  {
    "path": "python/fledge/plugins/storage/postgres/backup_restore/lib.py",
    "content": "# -*- coding: utf-8 -*-\n# Copyright (C) 2017, 2018\n\n\"\"\" Library used for backup and restore operations\n\"\"\"\n\nimport subprocess\nimport time\nimport os\nimport asyncio\nimport json\nfrom enum import IntEnum\n\nfrom fledge.common import logger\nfrom fledge.common.storage_client import payload_builder\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.configuration_manager import ConfigurationManager\n\nimport fledge.plugins.storage.postgres.backup_restore.exceptions as exceptions\n\n__author__ = \"Stefano Simonelli\"\n__copyright__ = \"Copyright (c) 2017, 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_MODULE_NAME = \"fledge_backup_postgres_library\"\n\n_MESSAGES_LIST = {\n\n    # Information messages\n    \"i000000\": \"Information\",\n\n    # Warning / Error messages\n    \"e000000\": \"general error\",\n    \"e000001\": \"semaphore file deleted because it was already in existence - file |{0}|\",\n    \"e000002\": \"semaphore file deleted because it existed even if the corresponding process was not running \"\n               \"- file |{0}| - pid |{1}|\",\n    \"e000003\": \"ERROR - the library cannot be executed directly.\"\n\n}\n\"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n_CMD_TIMEOUT = \" timeout --signal=9  \"\n\"\"\" Every external commands will be launched using timeout to avoid endless executions \"\"\"\n\n_logger = None\n_storage = None\n\"\"\"\" Objects references assigned by the caller \"\"\"\n\n\ndef exec_wait(_cmd, _output_capture=False, _timeout=0):\n    \"\"\"  Executes an external/shell commands\n\n    Args:\n        _cmd: command to execute\n        _output_capture: if the output of the command should be captured or not\n        _timeout: 0 no timeout or the timeout in seconds for the execution of the command\n\n    Returns:\n        _exit_code: 
exit status of the command\n        _output: output of the command\n    Raises:\n    \"\"\"\n\n    _output = \"\"\n\n    if _timeout != 0:\n        _cmd = _CMD_TIMEOUT + str(_timeout) + \" \" + _cmd\n        _logger.debug(\"{func} - Executing command using the timeout |{timeout}| \".format(\n                                        func=\"exec_wait\",\n                                        timeout=_timeout))\n\n    _logger.debug(\"{func} - cmd |{cmd}| \".format(func=\"exec_wait\",\n                                                 cmd=_cmd))\n\n    if _output_capture:\n        process = subprocess.Popen(_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)\n    else:\n        process = subprocess.Popen(_cmd, shell=True)\n\n    _exit_code = process.wait()\n\n    if _output_capture:\n        output_step1 = process.stdout.read()\n        _output = output_step1.decode(\"utf-8\")\n\n    _logger.debug(\"{func} - Executed command - cmd |{cmd}| - exit_code |{exit_code}| - output |{output}| \".format(\n                                    func=\"exec_wait\",\n                                    cmd=_cmd,\n                                    exit_code=_exit_code,\n                                    output=_output))\n\n    return _exit_code, _output\n\n\ndef exec_wait_retry(cmd, output_capture=False, exit_code_ok=0, max_retry=3, write_error=True, sleep_time=1, timeout=0):\n    \"\"\" Executes an external command, retrying the operation until the exit status matches a specific value or the maximum number of retries is reached\n\n    Args:\n        cmd: command to execute\n        output_capture: if the output of the command should be captured or not\n        exit_code_ok: exit status to achieve\n        max_retry: maximum number of retries to achieve the desired exit status\n        write_error: if a message should be generated for each retry\n        sleep_time: seconds to sleep between each retry\n        timeout: 0= no timeout, or the timeout in seconds for the execution of the external command\n\n  
  Returns:\n        _exit_code: exit status of the command\n        _output: output of the command\n\n    Raises:\n    \"\"\"\n\n    global _logger\n\n    _logger.debug(\"{func} - cmd |{cmd}| \".format(func=\"exec_wait_retry\",\n                                                 cmd=cmd))\n\n    _exit_code = 0\n    _output = \"\"\n\n    # try X times the operation\n    retry = 1\n    loop_continue = True\n\n    while loop_continue:\n\n        _exit_code, _output = exec_wait(cmd, output_capture, timeout)\n\n        if _exit_code == exit_code_ok:\n            loop_continue = False\n\n        elif retry <= max_retry:\n\n            # Prepares for the retry operation\n            if write_error:\n                short_output = _output[0:50]\n                _logger.debug(\"{func} - cmd |{cmd}| - N retry |{retry}| - message |{msg}| \".format(\n                    func=\"exec_wait_retry\",\n                    cmd=cmd,\n                    retry=retry,\n                    msg=short_output)\n                )\n\n            time.sleep(sleep_time)\n            retry += 1\n\n        else:\n            loop_continue = False\n\n    return _exit_code, _output\n\n\ndef cr_strip(text):\n    \"\"\"\n    Args:\n    Returns:\n    Raises:\n    \"\"\"\n\n    text = text.replace(\"\\n\", \"\")\n    text = text.replace(\"\\r\", \"\")\n\n    return text\n\n\nclass BackupType (IntEnum):\n    \"\"\" Supported backup types \"\"\"\n\n    FULL = 1\n    INCREMENTAL = 2\n\n\nclass SortOrder (object):\n    \"\"\" Define the order used to present information \"\"\"\n\n    ASC = 'ASC'\n    DESC = 'DESC'\n\n\nclass BackupStatus (object):\n    \"\"\" Backup status \"\"\"\n\n    UNDEFINED = -1\n    RUNNING = 1\n    COMPLETED = 2\n    CANCELLED = 3\n    INTERRUPTED = 4\n    FAILED = 5\n    RESTORED = 6\n    ALL = 999\n\n    text = {\n        UNDEFINED: \"undefined\",\n        RUNNING: \"running\",\n        COMPLETED: \"completed\",\n        CANCELLED: \"cancelled\",\n        INTERRUPTED: 
\"interrupted\",\n        FAILED: \"failed\",\n        RESTORED: \"restored\",\n        ALL: \"all\"\n    }\n\n\nclass BackupRestoreLib(object):\n    \"\"\" Library of functionalities for the backup restore operations that requires information/state to be stored \"\"\"\n\n    STORAGE_EXE = \"/services/fledge.services.storage\"\n\n    MAX_NUMBER_OF_BACKUPS_TO_RETRIEVE = 9999\n    \"\"\"\" Maximum number of backup information to retrieve from the storage layer\"\"\"\n\n    STORAGE_TABLE_BACKUPS = \"backups\"\n    \"\"\" Table name containing the backups information\"\"\"\n\n    JOB_SEM_FILE_PATH = \"/tmp\"\n    \"\"\" Updated by the caller to the proper value \"\"\"\n\n    JOB_SEM_FILE_BACKUP = \".backup.sem\"\n    JOB_SEM_FILE_RESTORE = \".restore.sem\"\n    \"\"\"\" Semaphores information for the handling of the backup/restore synchronization \"\"\"\n\n    # Postgres commands\n    PG_COMMAND_DUMP = \"pg_dump\"\n    PG_COMMAND_RESTORE = \"pg_restore\"\n    PG_COMMAND_PSQL = \"psql\"\n\n    PG_COMMANDS = {PG_COMMAND_DUMP: None,\n                   PG_COMMAND_RESTORE: None,\n                   PG_COMMAND_PSQL: None\n                   }\n    \"\"\"List of Postgres commands to check/validate if they are available and usable\n       and the actual Postgres commands to use \"\"\"\n\n    _MESSAGES_LIST = {\n\n        # Information messages\n        \"i000001\": \"Execution started.\",\n        \"i000002\": \"Execution completed.\",\n\n        # Warning / Error messages\n        \"e000000\": \"general error\",\n        \"e000001\": \"cannot initialize the logger - error details |{0}|\",\n        \"e000002\": \"cannot retrieve the configuration from the manager, trying retrieving from file \"\n                   \"- error details |{0}|\",\n        \"e000003\": \"cannot retrieve the configuration from file - error details |{0}|\",\n        \"e000004\": \"...\",\n        \"e000005\": \"...\",\n        \"e000006\": \"...\",\n        \"e000007\": \"backup failed.\",\n        
\"e000008\": \"cannot execute the backup, either a backup or a restore is already running - pid |{0}|\",\n        \"e000009\": \"...\",\n        \"e000010\": \"directory used to store backups doesn't exist - dir |{0}|\",\n        \"e000011\": \"directory used to store semaphores for backup/restore synchronization doesn't exist - dir |{0}|\",\n        \"e000012\": \"cannot create the configuration cache file, neither FLEDGE_DATA nor FLEDGE_ROOT are defined.\",\n        \"e000013\": \"cannot create the configuration cache file, provided path is not a directory - dir |{0}|\",\n        \"e000014\": \"the identified path of backups doesn't exists, creation was tried \"\n                   \"- dir |{0}| - error details |{1}|\",\n        \"e000015\": \"The command is not available neither using the unmanaged approach\"\n                   \" - command |{0}|\",\n        \"e000016\": \"Postgres command is not executable - command |{0}|\",\n        \"e000017\": \"The execution of the Postgres command using the -V option produce an error\"\n                   \" - command |{0}| - output |{1}|\",\n        \"e000018\": \"It is not possible to read data from Postgres\"\n                   \" - command |{0}| - exit code |{1}| - output |{2}|\",\n        \"e000019\": \"The command is not available using the managed approach\"\n                   \" - command |{0}| - full command |{1}|\",\n        \"e000020\": \"It is not possible to evaluate if the storage is managed or unmanaged\"\n                   \" - storage plugin |{0}|\",\n        \"e000021\": \"the SQL command generates an error - error details |{0}| - full command |{1}|\",\n\n    }\n    \"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n    _DIR_MANAGED_FLEDGE_PG_COMMANDS = \"plugins/storage/postgres/pgsql/bin\"\n    \"\"\"Directory for Postgres commands in a managed configuration\"\"\"\n\n    _DB_CONNECTION_STRING = \"dbname='{db}'\"\n\n    _DEFAULT_FLEDGE_ROOT = \"/usr/local/fledge\"\n    \"\"\" 
Default value to use for the FLEDGE_ROOT if the environment $FLEDGE_ROOT is not defined \"\"\"\n\n    _BACKUP_FILE_NAME_PREFIX = \"fledge_backup_\"\n    \"\"\" Prefix used to generate a backup file name \"\"\"\n\n    _CONFIG_CACHE_FILE = \"backup_postgres_configuration_cache.json\"\n    \"\"\" Stores a configuration cache in case the configuration Manager is not available\"\"\"\n\n    # Configuration retrieved from the Configuration Manager\n    _CONFIG_CATEGORY_NAME = 'BACK_REST'\n    _CONFIG_CATEGORY_DESCRIPTION = 'Backup and Restore'\n\n    _CONFIG_DEFAULT = {\n        \"host\": {\n            \"description\": \"Host server\",\n            \"type\": \"string\",\n            \"default\": \"localhost\"\n        },\n        \"port\": {\n            \"description\": \"PostgreSQL port\",\n            \"type\": \"integer\",\n            \"default\": \"5432\"\n        },\n        \"database\": {\n            \"description\": \"Database to backup/restore\",\n            \"type\": \"string\",\n            \"default\": \"fledge\"\n        },\n        \"schema\": {\n            \"description\": \"Schema\",\n            \"type\": \"string\",\n            \"default\": \"fledge\"\n        },\n        \"backup-dir\": {\n            \"description\": \"Directory where backups will be created, \"\n                           \"it uses backup-dir if it is specified \"\n                           \"or FLEDGE_BACKUP if defined or FLEDGE_DATA/backup as the last resort\",\n            \"type\": \"string\",\n            \"default\": \"none\"\n        },\n        \"semaphores-dir\": {\n            \"description\": \"Directory for semaphores for backup/restore synchronization, \"\n                           \"if not specified, backup-dir will be used\",\n            \"type\": \"string\",\n            \"default\": \"none\"\n        },\n        \"retention\": {\n            \"description\": \"Number of backups to maintain. 
Old backups will be deleted\",\n            \"type\": \"integer\",\n            \"default\": \"5\"\n        },\n        \"max_retry\": {\n            \"description\": \"Number of retries\",\n            \"type\": \"integer\",\n            \"default\": \"5\"\n        },\n        \"timeout\": {\n            \"description\": \"Timeout in seconds for execution of external commands\",\n            \"type\": \"integer\",\n            \"default\": \"1200\"\n        },\n        \"restart-max-retries\": {\n            \"description\": \"Maximum number of retries at restarting Fledge\",\n            \"type\": \"integer\",\n            \"default\": \"10\"\n        },\n        \"restart-sleep\": {\n            \"description\": \"Sleep time between each check of the status at the restart of Fledge \"\n                           \"to ensure it is started successfully\",\n            \"type\": \"integer\",\n            \"default\": \"5\"\n        },\n    }\n\n    config = {}\n\n    _storage = None\n    _logger = None\n\n    def __init__(self, _storage, _logger):\n\n        self._storage = _storage\n        self._logger = _logger\n\n        self.config = {}\n\n        # Fledge directories\n        self.dir_fledge_root = \"\"\n        self.dir_fledge_data = \"\"\n        self.dir_fledge_data_etc = \"\"\n        self.dir_fledge_backup = \"\"\n        self.dir_backups = \"\"\n        self.dir_semaphores = \"\"\n\n    def sl_get_all_backups(self) -> list:\n        \"\"\" Retrieves all records of backup table\n        Args:\n\n        Returns: List of backups\n\n        Raises:\n        \"\"\"\n        payload = payload_builder.PayloadBuilder().SELECT(\"id\", \"file_name\", \"ts\", \"type\", \"status\", \"exit_code\") \\\n            .ALIAS(\"return\", (\"ts\", 'ts')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")).payload()\n        _logger.debug(\"sl_get_all_backups PAYLOAD: {}\".format(payload))\n        backups = 
asyncio.get_event_loop().run_until_complete(self._storage.query_tbl_with_payload(\n            self.STORAGE_TABLE_BACKUPS, payload))\n        if 'rows' not in backups:\n            raise StorageServerError\n        _logger.debug(\"{func} - backup list {backups}\".format(func=\"sl_get_all_backups\", backups=backups))\n        return backups['rows']\n\n    def sl_backup_status_create(self, _file_name, _type, _status):\n        \"\"\" Logs the creation of the backup in the Storage layer\n\n        Args:\n            _file_name: file_name used for the backup as a full path\n            _type: backup type {BackupType.FULL|BackupType.INCREMENTAL}\n            _status: backup status, usually BackupStatus.RUNNING\n        Returns:\n        Raises:\n        \"\"\"\n\n        _logger.debug(\"{func} - file name |{file}| \".format(func=\"sl_backup_status_create\", file=_file_name))\n\n        payload = payload_builder.PayloadBuilder() \\\n            .INSERT(file_name=_file_name,\n                    ts=\"now()\",\n                    type=_type,\n                    status=_status,\n                    exit_code=0) \\\n            .payload()\n\n        asyncio.get_event_loop().run_until_complete(self._storage.insert_into_tbl(self.STORAGE_TABLE_BACKUPS, payload))\n\n    def sl_backup_status_update(self, _id, _status, _exit_code):\n        \"\"\" Updates the status of the backup using the Storage layer\n\n        Args:\n            _id: Backup's Id to update\n            _status: status of the backup {BackupStatus.COMPLETED|BackupStatus.FAILED|BackupStatus.RESTORED}\n            _exit_code: exit status of the backup/restore execution\n        Returns:\n        Raises:\n        \"\"\"\n\n        _logger.debug(\"{func} - id |{id}| \".format(func=\"sl_backup_status_update\", id=_id))\n\n        payload = payload_builder.PayloadBuilder() \\\n            .SET(status=_status,\n                 ts=\"now()\",\n                 exit_code=_exit_code) \\\n            .WHERE(['id', '=', _id]) \\\n        
    .payload()\n\n        asyncio.get_event_loop().run_until_complete(self._storage.update_tbl(self.STORAGE_TABLE_BACKUPS, payload))\n\n    def sl_get_backup_details_from_file_name(self, _file_name):\n        \"\"\" Retrieves backup information from file name\n\n        Args:\n            _file_name: file name to search in the Storage layer\n\n        Returns:\n            backup_information: Backup information related to the file name\n\n        Raises:\n            exceptions.DoesNotExist\n            exceptions.NotUniqueBackup\n        \"\"\"\n\n        payload = payload_builder.PayloadBuilder() \\\n            .WHERE(['file_name', '=', _file_name]) \\\n            .payload()\n\n        backups_from_storage = asyncio.get_event_loop().run_until_complete(self._storage.query_tbl_with_payload(self.STORAGE_TABLE_BACKUPS, payload))\n\n        if backups_from_storage['count'] == 1:\n\n            backup_information = backups_from_storage['rows'][0]\n\n        elif backups_from_storage['count'] == 0:\n            raise exceptions.DoesNotExist\n\n        else:\n            raise exceptions.NotUniqueBackup\n\n        return backup_information\n\n    def check_for_execution_restore(self):\n        \"\"\" Executes all the checks to ensure the prerequisites to execute the restore are met\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        self._check_commands()\n\n    def check_for_execution_backup(self):\n        \"\"\" Executes all the checks to ensure the prerequisites to execute the backup are met\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        self._check_commands()\n        self._check_db()\n\n    def _check_db(self):\n        \"\"\" Checks if the database is working properly by reading a sample row from the backups table\n\n        Args:\n        Returns:\n        Raises:\n            exceptions.CannotReadPostgres\n        \"\"\"\n\n        cmd_psql = self.PG_COMMANDS[self.PG_COMMAND_PSQL]\n\n        cmd = '{psql} 
-d {db} -t -c \"SELECT id FROM {schema}.{table} LIMIT 1;\"'.format(\n                                                                psql=cmd_psql,\n                                                                db=self.config['database'],\n                                                                schema=self.config['schema'],\n                                                                table=self.STORAGE_TABLE_BACKUPS)\n\n        _exit_code, output = exec_wait(\n                                        _cmd=cmd,\n                                        _output_capture=True,\n                                        _timeout=self.config['timeout']\n                                        )\n\n        self._logger.debug(\"{func} - cmd |{cmd}| - exit_code |{exit_code}| output |{output}| \".format(\n                            func=\"_check_db\",\n                            cmd=cmd,\n                            exit_code=_exit_code,\n                            output=cr_strip(output)))\n\n        if _exit_code != 0:\n            _message = self._MESSAGES_LIST[\"e000018\"].format(cmd, _exit_code, output)\n            self._logger.error(\"{0}\".format(_message))\n\n            raise exceptions.CannotReadPostgres(_message)\n\n    def _check_commands(self):\n        \"\"\" Identifies and checks the Postgres commands\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        for cmd in self.PG_COMMANDS:\n\n            cmd_identified = self._check_command_identification(cmd)\n            self._check_command_test(cmd_identified)\n\n    def _check_command_identification(self, cmd_to_identify):\n        \"\"\" Identifies the proper Postgres command to use, 2 possible cases :\n\n        Managed    - command is available in $FLEDGE_ROOT/plugins/storage/postgres/pgsql/bin\n        Unmanaged  - checks using the PATH and identifies the command to use through 'command -v'\n\n        Args:\n            cmd_to_identify: str - command to identify\n\n       
 Returns:\n            cmd_identified: str - actual identified command to use\n\n        Raises:\n            exceptions.PgCommandUnAvailable\n        \"\"\"\n\n        is_managed = self._is_plugin_managed(\"postgres\")\n\n        if is_managed:\n            # Checks for Managed\n            cmd_managed = \"{root}/{path}/{cmd}\".format(\n                                                        root=self.dir_fledge_root,\n                                                        path=self._DIR_MANAGED_FLEDGE_PG_COMMANDS,\n                                                        cmd=cmd_to_identify)\n\n            if os.path.exists(cmd_managed):\n                cmd_identified = cmd_managed\n            else:\n                _message = self._MESSAGES_LIST[\"e000019\"].format(cmd_to_identify, cmd_managed)\n                self._logger.error(\"{0}\".format(_message))\n\n                raise exceptions.PgCommandUnAvailable(_message)\n        else:\n            # Checks for Unmanaged\n            cmd = \"command -v \" + cmd_to_identify\n\n            # The timeout command can't be used with 'command'\n            # noinspection PyArgumentEqualDefault\n            _exit_code, output = exec_wait(\n                                            _cmd=cmd,\n                                            _output_capture=True,\n                                            _timeout=0\n                                            )\n\n            self._logger.debug(\"{func} - cmd |{cmd}| - exit_code |{exit_code}| output |{output}| \".format(\n                                                                            func=\"_check_command_identification\",\n                                                                            cmd=cmd,\n                                                                            exit_code=_exit_code,\n                                                                            output=output))\n\n            if _exit_code == 0:\n                
cmd_identified = cr_strip(output)\n            else:\n                _message = self._MESSAGES_LIST[\"e000015\"].format(cmd)\n                self._logger.error(\"{0}\".format(_message))\n\n                raise exceptions.PgCommandUnAvailable(_message)\n\n        self.PG_COMMANDS[cmd_to_identify] = cmd_identified\n\n        return cmd_identified\n\n    def _is_plugin_managed(self, plugin_to_identify):\n        \"\"\" Identifies the type of plugin, Managed or not, by querying the storage executable\n\n        Args:\n            plugin_to_identify: str - plugin to evaluate if it is managed or not\n        Returns:\n            type: boolean - True if it is a managed plugin\n        Raises:\n        \"\"\"\n\n        plugin_type = False\n\n        # The storage executable requires the environment variable FLEDGE_DATA, so it ensures it is set\n        cmd = \"export FLEDGE_DATA={data_dir};\".format(data_dir=self.dir_fledge_data)\n\n        # Queries the storage\n        file_full_path = self.dir_fledge_root + self.STORAGE_EXE\n        cmd += file_full_path + \" --plugin\"\n\n        # noinspection PyArgumentEqualDefault\n        _exit_code, output = exec_wait(\n                                        _cmd=cmd,\n                                        _output_capture=True,\n                                        _timeout=0\n                                        )\n\n        self._logger.debug(\"{func} - cmd |{cmd}| - exit_code |{exit_code}| output |{output}| \".format(\n            func=\"_is_plugin_managed\",\n            cmd=cmd,\n            exit_code=_exit_code,\n            output=output))\n\n        # Evaluates the storage answer\n        if plugin_to_identify in output:\n\n            if \"false\" in output:\n                plugin_type = False\n            else:\n                plugin_type = True\n        else:\n            _message = self._MESSAGES_LIST[\"e000020\"].format(plugin_to_identify)\n            self._logger.error(\"{0}\".format(_message))\n\n          
  raise exceptions.UndefinedStorage(_message)\n\n        return plugin_type\n\n    def _check_command_test(self, cmd_to_test):\n        \"\"\" Tests if the Postgres command could be successfully launched/used\n\n        Args:\n            cmd_to_test: str -  Command to test\n\n        Returns:\n        Raises:\n            exceptions.PgCommandUnAvailable\n            exceptions.PgCommandNotExecutable\n        \"\"\"\n\n        if os.access(cmd_to_test, os.X_OK):\n            cmd = cmd_to_test + \" -V\"\n\n            _exit_code, output = exec_wait(\n                                            _cmd=cmd,\n                                            _output_capture=True,\n                                            _timeout=self.config['timeout']\n                                            )\n\n            self._logger.debug(\"{func} - cmd |{cmd}| - exit_code |{exit_code}| output |{output}| \".format(\n                                                                            func=\"_check_command_test\",\n                                                                            cmd=cmd,\n                                                                            exit_code=_exit_code,\n                                                                            output=output))\n\n            if _exit_code != 0:\n                _message = self._MESSAGES_LIST[\"e000017\"].format(cmd, output)\n                self._logger.error(\"{0}\".format(_message))\n\n                raise exceptions.PgCommandUnAvailable(_message)\n\n        else:\n            _message = self._MESSAGES_LIST[\"e000016\"].format(cmd_to_test)\n            self._logger.error(\"{0}\".format(_message))\n\n            raise exceptions.PgCommandNotExecutable(_message)\n\n    def sl_get_backup_details(self, backup_id: int) -> dict:\n        \"\"\" Returns the details of a backup\n\n        Args:\n            backup_id: int - the id of the backup to return\n\n        Returns:\n            
backup_information: all the information available related to the requested backup_id\n\n        Raises:\n            exceptions.DoesNotExist\n            exceptions.NotUniqueBackup\n        \"\"\"\n        payload = payload_builder.PayloadBuilder().SELECT(\"id\", \"status\", \"ts\", \"file_name\", \"type\")\\\n            .ALIAS(\"return\", (\"ts\", 'ts')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\"))\\\n            .WHERE(['id', '=', backup_id]).payload()\n\n        backup_from_storage = asyncio.get_event_loop().run_until_complete(self._storage.query_tbl_with_payload(self.STORAGE_TABLE_BACKUPS, payload))\n\n        if backup_from_storage['count'] == 0:\n            raise exceptions.DoesNotExist\n\n        elif backup_from_storage['count'] == 1:\n\n            backup_information = backup_from_storage['rows'][0]\n        else:\n            raise exceptions.NotUniqueBackup\n\n        return backup_information\n\n    def psql_cmd(self, sql_cmd):\n        \"\"\"  Execute a sql command and return the results using the psql command line tool\n\n        Args:\n            sql_cmd - the SQL Command\n        Returns:\n            result_data - data returned from the execution\n        Raises:\n        \"\"\"\n\n        cmd_psql = self.PG_COMMANDS[self.PG_COMMAND_PSQL]\n\n        cmd = '{psql} -qt -d {db} -c \"{sql}\"'.format(\n                                                        psql=cmd_psql,\n                                                        db=self.config['database'],\n                                                        sql=sql_cmd)\n\n        result_code, output_1 = exec_wait(cmd, True)\n\n        output_2 = output_1.replace(\"\\n\", \"\")\n        result_data = output_2.split('|')\n\n        # Error handling required\n        if result_code != 0:\n            raise exceptions.SQLCommandExecutionError(self._MESSAGES_LIST[\"e000021\"].format(output_1, cmd))\n\n        return result_data\n\n    def backup_status_update(self, backup_id, status):\n 
       \"\"\" Updates the status of the backup in the Storage layer\n\n        Args:\n            backup_id: int -\n            status: BackupStatus -\n        Returns:\n        Raises:\n        \"\"\"\n\n        _logger.debug(\"{func} - backup id |{id}| \".format(func=\"backup_status_update\",\n                                                          id=backup_id))\n\n        sql_cmd = \"\"\"\n            UPDATE fledge.backups SET status={status} WHERE id='{id}';\n            \"\"\".format(status=status,\n                       id=backup_id, )\n\n        self.psql_cmd(sql_cmd)\n\n    def retrieve_configuration(self):\n        \"\"\"  Retrieves the configuration either from the manager or from a local file.\n        the local configuration file is used if the configuration manager is not available\n        and updated with the values retrieved from the manager when feasible.\n\n        Args:\n        Returns:\n        Raises:\n            exceptions.ConfigRetrievalError\n        \"\"\"\n\n        global JOB_SEM_FILE_PATH\n\n        try:\n            self._retrieve_configuration_from_manager()\n\n        except Exception as _ex:\n            _message = self._MESSAGES_LIST[\"e000002\"].format(_ex)\n            self._logger.warning(_message)\n\n            try:\n                self._retrieve_configuration_from_file()\n\n            except Exception as _ex:\n                _message = self._MESSAGES_LIST[\"e000003\"].format(_ex)\n                self._logger.error(_message)\n\n                raise exceptions.ConfigRetrievalError(_message)\n        else:\n            self._update_configuration_file()\n\n        # Identifies the directory of backups and checks its existence\n        if self.config['backup-dir'] != \"none\":\n\n            self.dir_backups = self.config['backup-dir']\n        else:\n            self.dir_backups = self.dir_fledge_backup\n\n        self._check_create_path(self.dir_backups)\n\n        # Identifies the directory for the semaphores and 
checks its existence\n        # Stores semaphores in the _backups_dir if semaphores-dir is not defined\n        if self.config['semaphores-dir'] != \"none\":\n\n            self.dir_semaphores = self.config['semaphores-dir']\n        else:\n            self.dir_semaphores = self.dir_backups\n            JOB_SEM_FILE_PATH = self.dir_semaphores\n\n        self._check_create_path(self.dir_semaphores)\n\n    def evaluate_paths(self):\n        \"\"\"  Evaluates paths in relation to the environment variables\n             FLEDGE_ROOT, FLEDGE_DATA and FLEDGE_BACKUP\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        # Evaluates FLEDGE_ROOT\n        if \"FLEDGE_ROOT\" in os.environ:\n            self.dir_fledge_root = os.getenv(\"FLEDGE_ROOT\")\n        else:\n            self.dir_fledge_root = self._DEFAULT_FLEDGE_ROOT\n\n        # Evaluates FLEDGE_DATA\n        if \"FLEDGE_DATA\" in os.environ:\n            self.dir_fledge_data = os.getenv(\"FLEDGE_DATA\")\n        else:\n            self.dir_fledge_data = self.dir_fledge_root + \"/data\"\n\n        # Evaluates FLEDGE_BACKUP\n        if \"FLEDGE_BACKUP\" in os.environ:\n            self.dir_fledge_backup = os.getenv(\"FLEDGE_BACKUP\")\n        else:\n            self.dir_fledge_backup = self.dir_fledge_data + \"/backup\"\n\n        # Evaluates etc directory\n        self.dir_fledge_data_etc = self.dir_fledge_data + \"/etc\"\n\n        self._check_create_path(self.dir_fledge_backup)\n        self._check_create_path(self.dir_fledge_data_etc)\n\n    def _check_create_path(self, path):\n        \"\"\"  Checks path existence and creates the path if needed\n        Args:\n        Returns:\n        Raises:\n            exceptions.InvalidPath\n        \"\"\"\n\n        # Check the path existence\n        if not os.path.isdir(path):\n\n            # The path doesn't exist, try to create it\n            try:\n                os.makedirs(path)\n\n            except OSError as _ex:\n                
_message = self._MESSAGES_LIST[\"e000014\"].format(path, _ex)\n                self._logger.error(\"{0}\".format(_message))\n\n                raise exceptions.InvalidPath(_message)\n\n    def _retrieve_configuration_from_manager(self):\n        \"\"\" Retrieves the configuration from the configuration manager\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        _event_loop = asyncio.get_event_loop()\n\n        cfg_manager = ConfigurationManager(self._storage)\n\n        _event_loop.run_until_complete(cfg_manager.create_category(\n            self._CONFIG_CATEGORY_NAME,\n            self._CONFIG_DEFAULT,\n            self._CONFIG_CATEGORY_DESCRIPTION, display_name=\"Backup & Restore\"))\n        _event_loop.run_until_complete(cfg_manager.create_child_category(\n            \"Utilities\", [self._CONFIG_CATEGORY_NAME]))\n        self._config_from_manager = _event_loop.run_until_complete(cfg_manager.get_category_all_items\n                                                                   (self._CONFIG_CATEGORY_NAME))\n\n        self._decode_configuration_from_manager(self._config_from_manager)\n\n    def _decode_configuration_from_manager(self, _config_from_manager):\n        \"\"\" Decodes a json configuration as generated by the configuration manager\n\n        Args:\n            _config_from_manager: Json configuration to decode\n        Returns:\n        Raises:\n        \"\"\"\n\n        self.config['host'] = _config_from_manager['host']['value']\n\n        self.config['port'] = int(_config_from_manager['port']['value'])\n        self.config['database'] = _config_from_manager['database']['value']\n        self.config['schema'] = _config_from_manager['schema']['value']\n        self.config['backup-dir'] = _config_from_manager['backup-dir']['value']\n        self.config['semaphores-dir'] = _config_from_manager['semaphores-dir']['value']\n        self.config['retention'] = int(_config_from_manager['retention']['value'])\n        
self.config['max_retry'] = int(_config_from_manager['max_retry']['value'])\n        self.config['timeout'] = int(_config_from_manager['timeout']['value'])\n\n        self.config['restart-max-retries'] = int(_config_from_manager['restart-max-retries']['value'])\n        self.config['restart-sleep'] = int(_config_from_manager['restart-sleep']['value'])\n\n    def _retrieve_configuration_from_file(self):\n        \"\"\" Retrieves the configuration from the local file\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        file_full_path = self._identify_configuration_file_path()\n\n        with open(file_full_path) as file:\n            self._config_from_manager = json.load(file)\n\n        self._decode_configuration_from_manager(self._config_from_manager)\n\n    def _update_configuration_file(self):\n        \"\"\" Updates the configuration file with the values retrieved from the manager.\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        file_full_path = self._identify_configuration_file_path()\n\n        with open(file_full_path, 'w') as file:\n            json.dump(self._config_from_manager, file)\n\n    def _identify_configuration_file_path(self):\n        \"\"\" Identifies the path of the configuration cache file\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        file_full_path = self.dir_fledge_data_etc + \"/\" + self._CONFIG_CACHE_FILE\n\n        return file_full_path\n\n\nclass Job:\n    \"\"\" Handles backup and restore operations synchronization \"\"\"\n\n    @classmethod\n    def _pid_file_retrieve(cls, file_name):\n        \"\"\" Retrieves the PID from the semaphore file\n\n        Args:\n            file_name: full path of the semaphore file\n        Returns:\n            pid: pid retrieved from the semaphore file\n        Raises:\n        \"\"\"\n\n        with open(file_name) as f:\n            pid = f.read()\n\n        pid = int(pid)\n\n        return pid\n\n    
@classmethod\n    def _pid_file_create(cls, file_name, pid):\n        \"\"\" Creates the semaphore file having the PID as content\n\n        Args:\n            file_name: full path of the semaphore file\n            pid: pid to store into the semaphore file\n        Returns:\n        Raises:\n        \"\"\"\n\n        file = open(file_name, \"w\")\n        file.write(str(pid))\n        file.close()\n\n    @classmethod\n    def _check_semaphore_file(cls, file_name):\n        \"\"\" Evaluates if a specific either backup or restore operation is in execution\n\n        Args:\n            file_name: semaphore file, full path\n        Returns:\n            pid: 0= no operation is in execution or the pid retrieved from the semaphore file\n        Raises:\n        \"\"\"\n\n        _logger.debug(\"{func}\".format(func=\"check_semaphore_file\"))\n\n        pid = 0\n\n        if os.path.exists(file_name):\n            pid = cls._pid_file_retrieve(file_name)\n\n            # Check if the process is really running\n            try:\n                os.getpgid(pid)\n            except ProcessLookupError:\n                # Process is not running, removing the semaphore file\n                os.remove(file_name)\n\n                _message = _MESSAGES_LIST[\"e000002\"].format(file_name, pid)\n                _logger.warning(\"{0}\".format(_message))\n\n                pid = 0\n\n        return pid\n\n    @classmethod\n    def is_running(cls):\n        \"\"\" Evaluates if another either backup or restore job is already running\n\n        Args:\n        Returns:\n            pid: 0= no operation is in execution or the pid retrieved from the semaphore file\n        Raises:\n        \"\"\"\n\n        _logger.debug(\"{func}\".format(func=\"is_running\"))\n\n        # Checks if a backup process is still running\n        full_path_backup = JOB_SEM_FILE_PATH + \"/\" + BackupRestoreLib.JOB_SEM_FILE_BACKUP\n        pid = cls._check_semaphore_file(full_path_backup)\n\n        # Checks if a 
restore process is still running\n        if pid == 0:\n            full_path_restore = JOB_SEM_FILE_PATH + \"/\" + BackupRestoreLib.JOB_SEM_FILE_RESTORE\n            pid = cls._check_semaphore_file(full_path_restore)\n\n        return pid\n\n    @classmethod\n    def set_as_running(cls, file_name, pid):\n        \"\"\" Sets a job as running\n\n        Args:\n            file_name: semaphore file either for backup or restore\n            pid: pid of the process to be stored within the semaphore file\n        Returns:\n        Raises:\n        \"\"\"\n\n        _logger.debug(\"{func}\".format(func=\"set_as_running\"))\n\n        full_path = JOB_SEM_FILE_PATH + \"/\" + file_name\n\n        if os.path.exists(full_path):\n\n            os.remove(full_path)\n\n            _message = _MESSAGES_LIST[\"e000001\"].format(full_path)\n            _logger.warning(\"{0}\".format(_message))\n\n        cls._pid_file_create(full_path, pid)\n\n    @classmethod\n    def set_as_completed(cls, file_name):\n        \"\"\" Sets a job as completed\n\n        Args:\n            file_name: semaphore file either for backup or restore operations\n        Returns:\n        Raises:\n        \"\"\"\n\n        _logger.debug(\"{func}\".format(func=\"set_as_completed\"))\n\n        full_path = JOB_SEM_FILE_PATH + \"/\" + file_name\n\n        if os.path.exists(full_path):\n            os.remove(full_path)\n\n\nif __name__ == \"__main__\":\n\n    message = _MESSAGES_LIST[\"e000003\"]\n    print(message)\n\n    if False:\n        # Used to assign the proper object types without actually executing them\n        _storage = StorageClientAsync(\"127.0.0.1\", \"0\")\n        _logger = logger.setup(_MODULE_NAME)\n"
  },
  {
    "path": "python/fledge/plugins/storage/postgres/backup_restore/restore_postgres.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n# Copyright (C) 2017, 2018\n\n\"\"\" Restores the entire Fledge repository from a previous backup.\n\nIt executes a full cold restore:\nFledge will be stopped before the start of the restore and restarted at the end.\n\nIt can also work without the Configuration Manager,\nretrieving the parameters for the execution from the local file 'restore_configuration_cache.json'.\nThe local file is used as a cache of information retrieved from the Configuration Manager.\n\nThe restore operation executes the following macro steps :\n\n    - stops Fledge\n    - executes the restore\n    - starts Fledge again\n\nso it also needs to interact with Postgres directly using psql and executing SQL commands\nbecause at the restart of Fledge the reference to the Storage Layer, previously obtained through\nthe FledgeProcess class, will no longer be valid.\n\n\nUsage:\n    --backup-id                     Restore a specific backup retrieving the related information from the\n                                    Storage Layer.\n         --file                     Restore a backup from a specific file, the full path should be provided\n                                    like for example : --file=/tmp/fledge_2017_09_25_15_10_22.dump\n\n    The latest backup will be restored if no option is used.\n\nExecution samples :\n    restore_postgres --backup-id=29 --port=${adm_port} --address=127.0.0.1 --name=restore\n    restore_postgres --file=/tmp/fledge_backup_2017_12_04_13_57_37.dump \\\n                     --port=${adm_port} --address=127.0.0.1 --name=restore\n    restore_postgres --port=${adm_port} --address=127.0.0.1 --name=restore\n\n    Note : ${adm_port} should correspond to the Management API port of the core.\n\nExit code :\n    0    = OK\n    >=1  = Warning/Error\n\n\"\"\"\n\nimport time\nimport sys\nimport os\nimport signal\nimport uuid\n\nfrom fledge.common.parser import Parser\nfrom fledge.services.core import 
server\nfrom fledge.common.process import FledgeProcess\nfrom fledge.common import logger\n\nimport fledge.plugins.storage.postgres.backup_restore.lib as lib\nimport fledge.plugins.storage.postgres.backup_restore.exceptions as exceptions\n\n__author__ = \"Stefano Simonelli\"\n__copyright__ = \"Copyright (c) 2017, 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_MODULE_NAME = \"fledge_restore_postgres_module\"\n\n_MESSAGES_LIST = {\n\n    # Information messages\n    \"i000001\": \"Execution started.\",\n    \"i000002\": \"Execution completed.\",\n\n    # Warning / Error messages\n    \"e000001\": \"cannot initialize the logger - error details |{0}|\",\n    \"e000002\": \"an error occurred during the restore operation - error details |{0}|\",\n    \"e000003\": \"invalid command line arguments - error details |{0}|\",\n    \"e000004\": \"cannot complete the initialization - error details |{0}|\",\n}\n\"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n\n# Log definitions\n_logger = None\n\n_LOG_LEVEL_DEBUG = 10\n_LOG_LEVEL_INFO = 20\n\n_LOGGER_LEVEL = _LOG_LEVEL_INFO\n_LOGGER_DESTINATION = logger.SYSLOG\n\n\nclass Restore(object):\n    \"\"\" Provides external functionality/integration to Restore a Backup\n    \"\"\"\n\n    _MODULE_NAME = \"fledge_restore_postgres_api\"\n\n    SCHEDULE_RESTORE_ON_DEMAND = \"8d4d3ca0-de80-11e7-80c1-9a214cf093ae\"\n\n    _MESSAGES_LIST = {\n\n        # Information messages\n        \"i000000\": \"general information\",\n        \"i000001\": \"On demand restore successfully launched.\",\n\n        # Warning / Error messages\n        \"e000000\": \"general error\",\n        \"e000001\": \"cannot launch on demand restore - error details |{0}|\",\n    }\n    \"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n    _logger = None\n\n    def __init__(self, _storage):\n        self._storage = _storage\n\n        if not Restore._logger:\n            Restore._logger = 
logger.setup(\n                                            self._MODULE_NAME,\n                                            destination=_LOGGER_DESTINATION,\n                                            level=_LOGGER_LEVEL)\n\n    async def restore_backup(self, backup_id: int):\n        \"\"\" Starts an asynchronous restore process to restore the state of Fledge.\n\n        Important Note : The current version restores the latest backup\n\n        Args:\n            backup_id: int - the id of the backup to restore from\n\n        Returns:\n            status: str - {\"running\"|\"failed\"}\n        Raises:\n        \"\"\"\n\n        self._logger.debug(\"{func} - backup id |{backup_id}|\".format(\n                                                                    func=\"restore_backup\",\n                                                                    backup_id=backup_id))\n\n        try:\n            await server.Server.scheduler.queue_task(uuid.UUID(Restore.SCHEDULE_RESTORE_ON_DEMAND))\n\n            _message = self._MESSAGES_LIST[\"i000001\"]\n            Restore._logger.info(\"{0}\".format(_message))\n            status = \"running\"\n\n        except Exception as _ex:\n            _message = self._MESSAGES_LIST[\"e000001\"].format(_ex)\n            Restore._logger.error(\"{0}\".format(_message))\n\n            status = \"failed\"\n\n        return status\n\n\nclass RestoreProcess(FledgeProcess):\n    \"\"\" Restore the entire Fledge repository.\n    \"\"\"\n\n    _MODULE_NAME = \"fledge_restore_postgres_process\"\n\n    _FLEDGE_ENVIRONMENT_DEV = \"dev\"\n    _FLEDGE_ENVIRONMENT_DEPLOY = \"deploy\"\n\n    _FLEDGE_CMD_PATH_DEV = \"scripts/fledge\"\n    _FLEDGE_CMD_PATH_DEPLOY = \"bin/fledge\"\n\n    # The init method will evaluate the running environment setting the variables accordingly\n    _fledge_environment = _FLEDGE_ENVIRONMENT_DEV\n    _fledge_cmd = _FLEDGE_CMD_PATH_DEV + \" {0}\"\n    \"\"\" Command for managing Fledge, stop/start/status \"\"\"\n\n    
_MESSAGES_LIST = {\n\n        # Information messages\n        \"i000001\": \"Execution started.\",\n        \"i000002\": \"Execution completed.\",\n\n        # Warning / Error messages\n        \"e000000\": \"general error\",\n        \"e000001\": \"Invalid file name\",\n        \"e000002\": \"cannot retrieve the configuration from the manager, trying to retrieve it from file \"\n                   \"- error details |{0}|\",\n        \"e000003\": \"cannot retrieve the configuration from file - error details |{0}|\",\n        \"e000004\": \"cannot restore the backup, file doesn't exist - file name |{0}|\",\n        \"e000006\": \"cannot start Fledge after the restore - error details |{0}|\",\n        \"e000007\": \"cannot restore the backup, restarting Fledge - error details |{0}|\",\n        \"e000008\": \"cannot identify Fledge status, the maximum number of retries has been reached \"\n                   \"- error details |{0}|\",\n        \"e000009\": \"cannot restore the backup, either a backup or a restore is already running - pid |{0}|\",\n        \"e000010\": \"cannot retrieve the Fledge status - error details |{0}|\",\n        \"e000011\": \"cannot restore the backup, the selected backup doesn't exist - backup id |{0}|\",\n        \"e000012\": \"cannot restore the backup, the selected backup doesn't exist - backup file name |{0}|\",\n        \"e000013\": \"cannot proceed with the execution, \"\n                   \"it is not possible to determine the environment in which the code is running,\"\n                   \" neither Deployment nor Development\",\n    }\n    \"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n    _logger = None\n\n    _backup_id = None\n    \"\"\" Used to store the optional command line parameter value \"\"\"\n\n    _file_name = None\n    \"\"\" Used to store the optional command line parameter value \"\"\"\n\n    class FledgeStatus(object):\n        \"\"\" Fledge - possible status \"\"\"\n\n        NOT_DEFINED = 0\n     
   STOPPED = 1\n        RUNNING = 2\n\n    @staticmethod\n    def _signal_handler(_signo, _stack_frame):\n        \"\"\" Handles signals to avoid the restore being terminated while stopping Fledge\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        short_stack_frame = str(_stack_frame)[:100]\n        _logger.debug(\"{func} - signal |{signo}| - info |{ssf}| \".format(\n            func=\"_signal_handler\",\n            signo=_signo,\n            ssf=short_stack_frame))\n\n    def __init__(self):\n\n        super().__init__()\n\n        if not self._logger:\n            self._logger = logger.setup(self._MODULE_NAME,\n                                        destination=_LOGGER_DESTINATION,\n                                        level=_LOGGER_LEVEL)\n\n        # Handles Restore command line parameters\n        try:\n            self._backup_id = Parser.get('--backup-id')\n            self._file_name = Parser.get('--file')\n\n        except Exception as _ex:\n\n            _message = _MESSAGES_LIST[\"e000003\"].format(_ex)\n            _logger.exception(_message)\n\n            raise exceptions.ArgumentParserError(_message)\n\n        self._restore_lib = lib.BackupRestoreLib(self._storage_async, self._logger)\n\n        self._job = lib.Job()\n\n        self._force_restore = True\n        \"\"\" Restore a backup even if it doesn't exist in the backups table \"\"\"\n\n        # Creates the object references used by the library\n        lib._logger = self._logger\n        lib._storage = self._storage_async\n\n    def _identifies_backup_to_restore(self):\n        \"\"\"Identifies the backup to restore either\n        - latest backup\n        - or a specific backup_id\n        - or a specific file_name\n\n        Args:\n        Returns:\n        Raises:\n            FileNotFoundError\n        \"\"\"\n\n        backup_id = None\n        file_name = None\n\n        # Case - last backup\n        if self._backup_id is None and \\\n           self._file_name is None:\n\n    
        backup_id, file_name = self._identify_last_backup()\n\n        # Case - backup-id\n        elif self._backup_id is not None:\n\n            try:\n                backup_info = self._restore_lib.sl_get_backup_details(self._backup_id)\n                backup_id = backup_info[\"id\"]\n                file_name = backup_info[\"file_name\"]\n\n            except exceptions.DoesNotExist:\n                _message = self._MESSAGES_LIST[\"e000011\"].format(self._backup_id)\n                _logger.error(_message)\n\n                raise exceptions.DoesNotExist(_message)\n\n        # Case - file-name\n        elif self._file_name is not None:\n\n            try:\n                backup_info = self._restore_lib.sl_get_backup_details_from_file_name(self._file_name)\n                backup_id = backup_info[\"id\"]\n                file_name = backup_info[\"file_name\"]\n\n            except exceptions.DoesNotExist:\n                if self._force_restore:\n                    file_name = self._file_name\n\n                else:\n                    _message = self._MESSAGES_LIST[\"e000012\"].format(self._file_name)\n                    _logger.error(_message)\n\n                    raise exceptions.DoesNotExist(_message)\n\n        if not os.path.exists(file_name):\n\n            _message = self._MESSAGES_LIST[\"e000004\"].format(file_name)\n            _logger.error(_message)\n\n            raise FileNotFoundError(_message)\n\n        return backup_id, file_name\n\n    def _identify_last_backup(self):\n        \"\"\" Identifies the latest executed backup, either successfully executed (COMPLETED) or already RESTORED\n\n        Args:\n        Returns:\n        Raises:\n            NoBackupAvailableError: no backup either successfully executed or already restored is available\n            FileNameError: it is impossible to identify a unique backup to restore\n        \"\"\"\n\n        self._logger.debug(\"{func} \".format(func=\"_identify_last_backup\"))\n\n        sql_cmd = 
\"\"\"\n            SELECT id, file_name FROM fledge.backups WHERE (ts,id)=\n            (SELECT  max(ts),MAX(id) FROM fledge.backups WHERE status={0} or status={1}) LIMIT 1;\n        \"\"\".format(lib.BackupStatus.COMPLETED,\n                   lib.BackupStatus.RESTORED)\n\n        data = self._restore_lib.psql_cmd(sql_cmd)\n\n        if len(data) <= 1:\n            raise exceptions.NoBackupAvailableError\n\n        elif len(data) == 2:\n            _backup_id = data[0].strip()\n            _file_name = data[1].strip()\n\n        else:\n            raise exceptions.FileNameError\n\n        return _backup_id, _file_name\n\n    def get_backup_id_from_file_name(self, _file_name):\n        \"\"\" Retrieves backup information from the file name\n\n        Args:\n            _file_name: file name to search in the Storage layer\n\n        Returns:\n            backup_id: Backup id related to the file name\n\n        Raises:\n            exceptions.NoBackupAvailableError\n            exceptions.FileNameError\n        \"\"\"\n\n        self._logger.debug(\"{func} \".format(func=\"get_backup_id_from_file_name\"))\n\n        # We retrieve file_name to avoid the false negative of len == 1\n        sql_cmd = \"\"\"\n            SELECT id, file_name FROM fledge.backups WHERE file_name='{file}' LIMIT 1\n        \"\"\".format(file=_file_name)\n\n        data = self._restore_lib.psql_cmd(sql_cmd)\n\n        if len(data) <= 1:\n            raise exceptions.NoBackupAvailableError\n\n        elif len(data) == 2:   # It means exactly one row was retrieved\n            backup_id = data[0].strip()\n\n        else:\n            raise exceptions.FileNameError\n\n        return backup_id\n\n    def _fledge_stop(self):\n        \"\"\" Stops Fledge before the execution of the restore, performing a cold restore\n\n        Args:\n        Returns:\n        Raises:\n            FledgeStopError\n        \"\"\"\n\n        self._logger.debug(\"{func}\".format(func=\"_fledge_stop\"))\n\n        cmd = 
\"{path}/{cmd}\".format(\n            path=self._restore_lib.dir_fledge_root,\n            cmd=self._fledge_cmd.format(\"stop\")\n        )\n\n        # Stops Fledge\n        status, output = lib.exec_wait_retry(cmd, True,\n                                             max_retry=self._restore_lib.config['max_retry'],\n                                             timeout=self._restore_lib.config['timeout'])\n\n        self._logger.debug(\"{func} - status |{status}| - cmd |{cmd}| - output |{output}|   \".format(\n                    func=\"_fledge_stop\",\n                    status=status,\n                    cmd=cmd,\n                    output=output))\n\n        if status == 0:\n\n            # Checks to ensure the Fledge status\n            if self._fledge_status() != self.FledgeStatus.STOPPED:\n                raise exceptions.FledgeStopError(output)\n        else:\n            raise exceptions.FledgeStopError(output)\n\n    def _decode_fledge_status(self, text):\n        \"\"\"\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        text_upper = text.upper()\n\n        if 'FLEDGE UPTIME' in text_upper:\n            status = self.FledgeStatus.RUNNING\n\n        elif 'FLEDGE NOT RUNNING.' 
in text_upper:\n            status = self.FledgeStatus.STOPPED\n\n        else:\n            status = self.FledgeStatus.NOT_DEFINED\n\n        return status\n\n    def _check_wait_fledge_start(self):\n        \"\"\" Checks and waits for Fledge to start\n\n        Args:\n        Returns:\n            status: FledgeStatus - {NOT_DEFINED|STOPPED|RUNNING}\n        Raises:\n        \"\"\"\n\n        self._logger.debug(\"{func}\".format(func=\"_check_wait_fledge_start\"))\n\n        status = self.FledgeStatus.NOT_DEFINED\n\n        n_retry = 0\n        max_retries = self._restore_lib.config['restart-max-retries']\n        sleep_time = self._restore_lib.config['restart-sleep']\n\n        while n_retry < max_retries:\n\n            self._logger.debug(\"{func}\".format(func=\"_check_wait_fledge_start - checks Fledge status\"))\n\n            status = self._fledge_status()\n            if status == self.FledgeStatus.RUNNING:\n                break\n\n            self._logger.debug(\"{func}\".format(func=\"_check_wait_fledge_start - sleep {0}\".format(sleep_time)))\n\n            time.sleep(sleep_time)\n            n_retry += 1\n\n        return status\n\n    def _fledge_status(self):\n        \"\"\" Checks the Fledge status\n\n        To ensure the status is stable and reliable,\n        it executes the Fledge 'status' command until either\n        the same value comes back 3 times in a row or it reaches the maximum number of retries allowed.\n\n        Args:\n        Returns:\n            status: FledgeStatus - {STATUS_NOT_DEFINED|STATUS_STOPPED|STATUS_RUNNING}\n        Raises:\n        \"\"\"\n\n        status = self.FledgeStatus.NOT_DEFINED\n\n        num_exec = 0\n        max_exec = 10\n        same_status = 0\n        same_status_ok = 3\n        sleep_time = 1\n\n        while (same_status < same_status_ok) and (num_exec <= max_exec):\n\n            try:\n\n                cmd = \"{path}/{cmd}\".format(\n                            
path=self._restore_lib.dir_fledge_root,\n                            cmd=self._fledge_cmd.format(\"status\")\n                )\n\n                cmd_status, output = lib.exec_wait(cmd, True, _timeout=self._restore_lib.config['timeout'])\n\n                self._logger.debug(\"{func} - output |{output}| \\r - status |{status}|  \".format(\n                                                                                            func=\"_fledge_status\",\n                                                                                            output=output,\n                                                                                            status=cmd_status))\n\n                num_exec += 1\n\n                new_status = self._decode_fledge_status(output)\n\n            except Exception as _ex:\n                _message = self._MESSAGES_LIST[\"e000010\"].format(_ex)\n                _logger.error(_message)\n\n                raise\n\n            else:\n                if new_status == status:\n                    same_status += 1\n                    time.sleep(sleep_time)\n                else:\n                    status = new_status\n                    same_status = 0\n\n        if num_exec >= max_exec:\n            _message = self._MESSAGES_LIST[\"e000008\"]\n            self._logger.error(_message)\n\n            status = self.FledgeStatus.NOT_DEFINED\n\n        return status\n\n    def _run_restore_command(self, backup_file):\n        \"\"\" Executes the restore of the storage layer from a backup\n\n        Args:\n            backup_file: backup file to restore\n        Returns:\n        Raises:\n            RestoreError\n        \"\"\"\n\n        self._logger.debug(\"{func} - Restore starts - file name |{file}|\".format(\n                                                                    func=\"_run_restore_command\",\n                                                                    file=backup_file))\n\n        # Prepares the restore 
command\n        pg_cmd = self._restore_lib.PG_COMMANDS[self._restore_lib.PG_COMMAND_RESTORE]\n\n        cmd = \"{cmd} {options} {schema} -d {db}  {file}\".format(\n                                                cmd=pg_cmd,\n                                                options=\"--verbose --clean -n \",\n                                                schema=self._restore_lib.config['schema'],\n                                                db=self._restore_lib.config['database'],\n                                                file=backup_file\n        )\n\n        # Restores the backup\n        status, output = lib.exec_wait_retry(cmd, True, timeout=self._restore_lib.config['timeout'])\n\n        # Truncate the output to the first 10 lines to avoid an overly long log entry\n        output_short = \"\\n\".join(output.splitlines()[:10])\n\n        self._logger.debug(\"{func} - Restore ends - status |{status}| - cmd |{cmd}| - output |{output}|\".format(\n                                    func=\"_run_restore_command\",\n                                    status=status,\n                                    cmd=cmd,\n                                    output=output_short))\n\n        if status != 0:\n            raise exceptions.RestoreFailed\n\n    def _fledge_start(self):\n        \"\"\" Starts Fledge after the execution of the restore\n\n        Args:\n        Returns:\n        Raises:\n            FledgeStartError\n        \"\"\"\n\n        cmd = \"{path}/{cmd}\".format(\n                                    path=self._restore_lib.dir_fledge_root,\n                                    cmd=self._fledge_cmd.format(\"start\")\n        )\n\n        exit_code, output = lib.exec_wait_retry(\n                                                cmd,\n                                                True,\n                                                max_retry=self._restore_lib.config['max_retry'],\n                                                timeout=self._restore_lib.config['timeout'])\n\n        self._logger.debug(\"{func} - 
exit_code |{exit_code}| - cmd |{cmd}| - output |{output}|\".format(\n                                    func=\"_fledge_start\",\n                                    exit_code=exit_code,\n                                    cmd=cmd,\n                                    output=output))\n\n        if exit_code == 0:\n            if self._check_wait_fledge_start() != self.FledgeStatus.RUNNING:\n                raise exceptions.FledgeStartError\n\n        else:\n            raise exceptions.FledgeStartError\n\n    def insert_backup_entries(self, old_data: list, new_data: list) -> None:\n        \"\"\" Inserts the backup entries from the old data which are not found in the new data\n\n        Args:\n            old_data: Old backup data before restore\n            new_data: New backup data after restore\n\n        Returns:\n\n        Raises:\n        \"\"\"\n        self._logger.debug(\"Old backup data: {} - New backup data: {}\".format(old_data, new_data))\n        matched_entry_to_delete = []\n        for old_row in old_data:\n            for new_row in new_data:\n                if old_row['file_name'] == new_row.strip():\n                    matched_entry_to_delete.append(old_row['file_name'])\n                    break\n        self._logger.debug(\"Matched entry deletion list from old backup data: {}\".format(matched_entry_to_delete))\n        # Filter duplicate records between old and new backup data list\n        filtered_list = [d for d in old_data if d['file_name'] not in matched_entry_to_delete]\n        self._logger.debug(\"Filtered list: {}\".format(filtered_list))\n        # Prepare multiple backup records for an executemany-style insert\n        new_backup_list = []\n        for row in filtered_list:\n            r_tuple = (row['file_name'], row['ts'], row['type'], row['status'], row['exit_code'])\n            new_backup_list.append(r_tuple)\n        if new_backup_list:\n            # Insert the backup entries from before the restore 
checkpoint - this way we don't lose all the backups\n            # List of tuples to String\n            values = str(new_backup_list).strip('[]')\n            sql_cmd = \"INSERT INTO fledge.backups (file_name, ts, type, status, exit_code) VALUES {};\".format(values)\n            self._logger.debug(\"Insert new entries from backup list: {}\".format(values))\n            self._restore_lib.psql_cmd(sql_cmd)\n\n    def execute_restore(self):\n        \"\"\"Executes the restore operation\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        def tar_extraction(_file, _ext) -> str:\n            \"\"\" Extracts the files from the tar.gz backup file\n\n            Args:\n                _file: filename of the backup\n                _ext:  extension of backup file\n            Returns:\n                Full backup filepath\n            Raises:\n            \"\"\"\n            import json\n            import tarfile\n            import shutil\n            from distutils.dir_util import copy_tree\n\n            # Removes the .tar.gz extension from the base file name\n            filename_base1 = os.path.basename(_file)\n            filename_base2, dummy1 = os.path.splitext(filename_base1)\n            filename_base, dummy2 = os.path.splitext(filename_base2)\n            self._logger.debug(\"tar_extraction - filename: {} file_extension: {}\".format(_file, _ext))\n            extract_path = \"{}/extract\".format(self._restore_lib.dir_fledge_backup)\n            if not os.path.isdir(extract_path):\n                os.mkdir(extract_path)\n            else:\n                shutil.rmtree(extract_path)\n                os.mkdir(extract_path)\n\n            # Extracts the tar\n            backup_tar = tarfile.open(_file)\n            backup_tar.extractall(extract_path)\n            db_file_from_extract = [entry for entry in backup_tar.getnames() if entry.endswith(\".dump\")]\n            backup_tar.close()\n\n            # Moves the dump file to the right position\n            db_file_to_restore = 
db_file_from_extract[0] if db_file_from_extract else \"{}.dump\".format(filename_base)\n            file_source = \"{}/{}\".format(extract_path, db_file_to_restore)\n            file_target = \"{}/{}.dump\".format(self._restore_lib.dir_fledge_backup, filename_base)\n            self._logger.debug(\"tar_extraction 'db' - source :{}: target :{}: \".format(file_source, file_target))\n            os.rename(file_source, file_target)\n\n            # etc\n            source = \"{}/etc\".format(extract_path)\n            target = \"{}/etc\".format(self._restore_lib.dir_fledge_data)\n            self._logger.debug(\"tar_extraction 'etc' - source :{}: target :{}: \".format(source, target))\n            copy_tree(source, target)\n\n            # external scripts\n            dir_scripts = \"{}/scripts\".format(extract_path)\n            if os.path.isdir(dir_scripts):\n                target = \"{}/scripts\".format(self._restore_lib.dir_fledge_data)\n                if not os.path.isdir(target):\n                    os.mkdir(target)\n                source = dir_scripts\n                self._logger.debug(\"tar_extraction 'scripts' - source :{}: target :{}: \".format(source, target))\n                copy_tree(source, target)\n\n            # software\n            is_software = \"{}/software.json\".format(extract_path)\n            if os.path.exists(is_software):\n                # we don't need to install software as a part of restore automatically\n                # It is a user responsibility to install\n                with open(is_software, 'r') as f:\n                    data = json.load(f)\n                self._logger.debug(\"tar_extraction 'data' :{}: \".format(data))\n                software_list = []\n                for p in data['plugins']:\n                    # Exclude inbuilt plugins\n                    if p['packageName'] != '':\n                        software_list.append({p['packageName']: p['version']})\n                for s in data['services']:\n      
              # Exclude inbuilt services\n                    if s not in ('storage', 'south', 'north'):\n                        # No version information is available for services, therefore keeping it empty\n                        software_list.append({\"fledge-service-{}\".format(s): ''})\n                self._logger.info(\n                    \"Please check the list of software to install: {}; if any of this software is not present on your system, \"\n                    \"you need to install it manually.\".format(software_list))\n            # Remove extract directory\n            shutil.rmtree(extract_path)\n            return file_target\n\n        self._logger.debug(\"{func}\".format(func=\"execute_restore\"))\n        # Fetch old backup table entries before restore\n        old_backup_entries = self._restore_lib.sl_get_all_backups()\n        self._logger.debug(\"{func} - old backup entries list - {backups}\".format(func=\"execute_restore\",\n                                                                                 backups=old_backup_entries))\n        backup_id, file_name = self._identifies_backup_to_restore()\n        self._logger.debug(\"{func} - backup to restore |{id}| - |{file}| \".format(\n                                                                                func=\"execute_restore\",\n                                                                                id=backup_id,\n                                                                                file=file_name))\n        # Stops Fledge if it is running\n        if self._fledge_status() == self.FledgeStatus.RUNNING:\n            self._fledge_stop()\n        self._logger.debug(\"{func} - Fledge is down\".format(func=\"execute_restore\"))\n        # Executes the restore and then starts Fledge\n        try:\n            dummy, file_extension = os.path.splitext(file_name)\n            # backward compatibility (<= 1.9.2)\n            file_name_dump = file_name if file_extension == \".dump\" else 
tar_extraction(file_name, file_extension)\n            self._logger.debug(\"Filename dump: {}\".format(file_name_dump))\n            self._run_restore_command(file_name_dump)\n            if self._force_restore and file_extension != \".gz\":\n                # Retrieve the backup-id after the restore operation\n                backup_id = self.get_backup_id_from_file_name(file_name)\n            # Updates the backup status as RESTORED\n            self._restore_lib.backup_status_update(backup_id, lib.BackupStatus.RESTORED)\n            # Fetch new backup entries after DB fully restored\n            sql_cmd = \"\"\"SELECT id, file_name, ts, type, status, exit_code FROM fledge.backups;\"\"\"\n            new_backup_entries = self._restore_lib.psql_cmd(sql_cmd)\n            self._logger.debug(\"{func} - New backup entries list - {backups}\".format(func=\"execute_restore\",\n                                                                                        backups=new_backup_entries))\n            # Insert old backup entries into newly restored Database\n            self.insert_backup_entries(old_backup_entries, new_backup_entries)\n        except Exception as _ex:\n            _message = self._MESSAGES_LIST[\"e000007\"].format(_ex)\n            self._logger.error(_message)\n            raise\n\n        finally:\n            try:\n                self._fledge_start()\n\n            except Exception as _ex:\n                _message = self._MESSAGES_LIST[\"e000006\"].format(_ex)\n\n                self._logger.error(_message)\n                raise\n\n    def check_command(self, cmd_to_identify):\n        \"\"\" Evaluates whether the command is available or not\n\n        Args:\n        Returns:\n            cmd_available: boolean\n        Raises:\n        \"\"\"\n\n        cmd = \"command -v \" + cmd_to_identify\n\n        # The timeout command can't be used with 'command'\n        # noinspection PyArgumentEqualDefault\n        _exit_code, output = 
lib.exec_wait(\n            _cmd=cmd,\n            _output_capture=True,\n            _timeout=0\n        )\n\n        self._logger.debug(\"{func} - cmd |{cmd}| - exit_code |{exit_code}| output |{output}| \".format(\n            func=\"check_command\",\n            cmd=cmd,\n            exit_code=_exit_code,\n            output=output))\n\n        if _exit_code == 0:\n            cmd_available = True\n        else:\n            cmd_available = False\n\n        return cmd_available\n\n    def evaluate_fledge_env(self):\n        \"\"\" Evaluates whether the code is running in a Development or in a Deploy environment\n\n        Args:\n        Returns:\n            env: str - {_FLEDGE_CMD_PATH_DEPLOY|_FLEDGE_CMD_PATH_DEV}\n        Raises:\n            exceptions.InvalidFledgeEnvironment\n        \"\"\"\n\n        cmd = self._restore_lib.dir_fledge_root + \"/\" + self._FLEDGE_CMD_PATH_DEPLOY\n\n        if self.check_command(cmd):\n            env = self._FLEDGE_ENVIRONMENT_DEPLOY\n        else:\n\n            cmd = self._restore_lib.dir_fledge_root + \"/\" + self._FLEDGE_CMD_PATH_DEV\n            if self.check_command(cmd):\n\n                env = self._FLEDGE_ENVIRONMENT_DEV\n            else:\n                _message = self._MESSAGES_LIST[\"e000013\"]\n                self._logger.error(_message)\n\n                raise exceptions.InvalidFledgeEnvironment\n\n        return env\n\n    def set_fledge_env(self):\n        \"\"\" Sets a proper configuration in relation to the environment in which the code is running,\n        either Development or Deploy\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        self._fledge_environment = self.evaluate_fledge_env()\n\n        # Configures in relation to the environment in use\n        if self._fledge_environment == self._FLEDGE_ENVIRONMENT_DEPLOY:\n\n            self._fledge_cmd = self._FLEDGE_CMD_PATH_DEPLOY + \" {0}\"\n        else:\n            self._fledge_cmd = self._FLEDGE_CMD_PATH_DEV + \" 
{0}\"\n\n    def init(self):\n        \"\"\" Sets up the correct state for the execution of the restore\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n        # Sets up signal handlers to avoid the termination of the restore\n        # a) SIGINT: Keyboard interrupt\n        # b) SIGTERM: kill or system shutdown\n        # c) SIGHUP: Controlling shell exiting\n        signal.signal(signal.SIGINT, RestoreProcess._signal_handler)\n        signal.signal(signal.SIGTERM, RestoreProcess._signal_handler)\n        signal.signal(signal.SIGHUP, RestoreProcess._signal_handler)\n\n        self._logger.debug(\"{func}\".format(func=\"init\"))\n\n        self._restore_lib.evaluate_paths()\n\n        self.set_fledge_env()\n\n        self._restore_lib.retrieve_configuration()\n\n        self._restore_lib.check_for_execution_restore()\n\n        # Checks for backup/restore synchronization\n        pid = self._job.is_running()\n        if pid == 0:\n\n            # no job is running\n            pid = os.getpid()\n            self._job.set_as_running(self._restore_lib.JOB_SEM_FILE_RESTORE, pid)\n        else:\n            _message = self._MESSAGES_LIST[\"e000009\"].format(pid)\n            self._logger.warning(\"{0}\".format(_message))\n\n            raise exceptions.BackupOrRestoreAlreadyRunning\n\n    def shutdown(self):\n        \"\"\" Sets the correct state to terminate the execution\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        self._logger.debug(\"{func}\".format(func=\"shutdown\"))\n\n        self._job.set_as_completed(self._restore_lib.JOB_SEM_FILE_RESTORE)\n\n    def run(self):\n        \"\"\" Restores a backup\n\n        Args:\n        Returns:\n        Raises:\n            exceptions.RestoreFailed\n        \"\"\"\n        self.init()\n\n        try:\n            self.execute_restore()\n\n        except Exception as _ex:\n            _message = _MESSAGES_LIST[\"e000002\"].format(_ex)\n            
_logger.error(_message)\n\n            self.shutdown()\n\n            raise exceptions.RestoreFailed(_message)\n        else:\n            self.shutdown()\n\n\nif __name__ == \"__main__\":\n\n    # Initializes the logger\n    try:\n        _logger = logger.setup(_MODULE_NAME,\n                               destination=_LOGGER_DESTINATION,\n                               level=_LOGGER_LEVEL)\n\n    except Exception as ex:\n        message = _MESSAGES_LIST[\"e000001\"].format(str(ex))\n        current_time = time.strftime(\"%Y-%m-%d %H:%M:%S\")\n\n        print(\"[FLEDGE] {0} - ERROR - {1}\".format(current_time, message), file=sys.stderr)\n        sys.exit(1)\n\n    # Initializes FledgeProcess and RestoreProcess classes - handling also the command line parameters\n    try:\n        restore_process = RestoreProcess()\n    except Exception as ex:\n        message = _MESSAGES_LIST[\"e000004\"].format(ex)\n        _logger.exception(message)\n        _logger.info(_MESSAGES_LIST[\"i000002\"])\n        sys.exit(1)\n\n    if not restore_process.is_dry_run():\n        try:\n            # noinspection PyProtectedMember\n            _logger.debug(\"{module} - name |{name}| - address |{addr}| - port |{port}| \"\n                          \"- file |{file}| - backup_id |{backup_id}| \".format(\n                                                                            module=_MODULE_NAME,\n                                                                            name=restore_process._name,\n                                                                            addr=restore_process._core_management_host,\n                                                                            port=restore_process._core_management_port,\n                                                                            file=restore_process._file_name,\n                                                                            backup_id=restore_process._backup_id))\n            
_logger.info(_MESSAGES_LIST[\"i000001\"])\n            restore_process.run()\n            _logger.info(_MESSAGES_LIST[\"i000002\"])\n            sys.exit(0)\n        except Exception as ex:\n            message = _MESSAGES_LIST[\"e000002\"].format(ex)\n            _logger.exception(message)\n            sys.exit(1)\n    else:\n        # Put any configuration here if required for the restore\n        sys.exit()\n"
  },
  {
    "path": "python/fledge/plugins/storage/sqlite/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/plugins/storage/sqlite/backup_restore/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/plugins/storage/sqlite/backup_restore/backup_sqlite.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n# Copyright (C) 2017\n\n\"\"\" Backs up the entire Fledge repository into a file in the local filesystem,\nexecuting a full warm backup.\n\nThe information about executed backups is stored in the Storage Layer.\n\nThe parameters for the execution are retrieved from the configuration manager.\nIt can also work without the configuration manager,\nretrieving the parameters for the execution from the local file 'backup_configuration_cache.json'.\n\n\"\"\"\n\nimport sys\nimport time\nimport os\nimport asyncio\nimport json\nimport tarfile\n\nfrom fledge.common.process import FledgeProcess\nfrom fledge.common import logger\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.plugin_discovery import PluginDiscovery\nfrom fledge.plugins.storage.common.backup import Backup\nfrom fledge.services.core.api.service import get_service_installed\n\nimport fledge.plugins.storage.common.lib as lib\nimport fledge.plugins.storage.common.exceptions as exceptions\n\n\n__author__ = \"Stefano Simonelli\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_MODULE_NAME = \"fledge_backup_sqlite_module\"\n\n_MESSAGES_LIST = {\n\n    # Information messages\n    \"i000001\": \"Execution started.\",\n    \"i000002\": \"Execution completed.\",\n\n    # Warning / Error messages\n    \"e000001\": \"cannot initialize the logger - error details |{0}|\",\n    \"e000002\": \"an error occurred during the backup operation - error details |{0}|\",\n    \"e000004\": \"cannot complete the initialization - error details |{0}|\",\n}\n\"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n\n# Log definitions\n_logger = None\n\n_LOG_LEVEL_DEBUG = 10\n_LOG_LEVEL_INFO = 20\n\n_LOGGER_LEVEL = _LOG_LEVEL_INFO\n_LOGGER_DESTINATION = logger.SYSLOG\n\n\n# noinspection PyAbstractClass\nclass BackupProcess(FledgeProcess):\n    \"\"\" Backs up the 
entire Fledge repository into a file in the local filesystem,\n        executing a full warm backup\n    \"\"\"\n\n    _MODULE_NAME = \"fledge_backup_sqlite_process\"\n\n    _BACKUP_FILE_NAME_PREFIX = \"fledge_backup_\"\n    \"\"\" Prefix used to generate a backup file name \"\"\"\n\n    _MESSAGES_LIST = {\n\n        # Information messages\n        \"i000001\": \"Execution started.\",\n        \"i000002\": \"Execution completed.\",\n\n        # Warning / Error messages\n        \"e000000\": \"general error\",\n        \"e000001\": \"cannot initialize the logger - error details |{0}|\",\n        \"e000002\": \"cannot retrieve the configuration from the manager, trying to retrieve it from file \"\n                   \"- error details |{0}|\",\n        \"e000003\": \"cannot retrieve the configuration from file - error details |{0}|\",\n        \"e000004\": \"...\",\n        \"e000005\": \"...\",\n        \"e000006\": \"...\",\n        \"e000007\": \"backup failed.\",\n        \"e000008\": \"cannot execute the backup, either a backup or a restore is already running - pid |{0}|\",\n        \"e000009\": \"...\",\n        \"e000010\": \"directory used to store backups doesn't exist - dir |{0}|\",\n        \"e000011\": \"directory used to store semaphores for backup/restore synchronization doesn't exist - dir |{0}|\",\n        \"e000012\": \"cannot create the configuration cache file, neither FLEDGE_DATA nor FLEDGE_ROOT are defined.\",\n        \"e000013\": \"cannot create the configuration cache file, provided path is not a directory - dir |{0}|\",\n        \"e000014\": \"the identified path of backups doesn't exist, creation was attempted \"\n                   \"- dir |{0}| - error details |{1}|\",\n        \"e000015\": \"The command is not available even using the unmanaged approach\"\n                   \" - command |{0}|\",\n        \"e000019\": \"The command is not available using the managed approach\"\n                   \" - command |{0}|\",\n\n    }\n    \"\"\" 
Messages used for Information, Warning and Error notice \"\"\"\n\n    _logger = None\n\n    def __init__(self):\n\n        super().__init__()\n\n        if not self._logger:\n            self._logger = logger.setup(self._MODULE_NAME,\n                                        destination=_LOGGER_DESTINATION,\n                                        level=_LOGGER_LEVEL)\n\n        self._backup = Backup(self._storage_async)\n        self._backup_lib = lib.BackupRestoreLib(self._storage_async, self._logger)\n\n        self._job = lib.Job()\n\n        # Creates the object references used by the library\n        lib._logger = self._logger\n        lib._storage = self._storage_async\n\n    def _generate_file_name(self):\n        \"\"\" Generates the file name for the backup operation, using the current date and time for the file name generation\n\n        Args:\n        Returns:\n            _backup_file: generated file name\n        Raises:\n        \"\"\"\n\n        self._logger.debug(\"{func}\".format(func=\"_generate_file_name\"))\n\n        # Evaluates the parameters\n        execution_time = time.strftime(\"%Y_%m_%d_%H_%M_%S\")\n\n        full_file_name = self._backup_lib.dir_backups + \"/\" + self._BACKUP_FILE_NAME_PREFIX + execution_time\n        ext = \"db\"\n\n        _backup_file = \"{file}.{ext}\".format(file=full_file_name, ext=ext)\n\n        return _backup_file\n\n    def check_for_execution_backup(self):\n        \"\"\" Executes all the checks to ensure the prerequisites to execute the backup are met\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n    def init(self):\n        \"\"\" Sets up the correct state for the execution of the backup\n\n        Args:\n        Returns:\n        Raises:\n            exceptions.BackupOrRestoreAlreadyRunning\n        \"\"\"\n        self._logger.debug(\"{func}\".format(func=\"init\"))\n\n        self._backup_lib.evaluate_paths()\n\n        self._backup_lib.retrieve_configuration()\n\n        
self.check_for_execution_backup()\n\n        # Checks for backup/restore synchronization\n        pid = self._job.is_running()\n        if pid == 0:\n\n            # no job is running\n            pid = os.getpid()\n            self._job.set_as_running(self._backup_lib.JOB_SEM_FILE_BACKUP, pid)\n\n        else:\n            _message = self._MESSAGES_LIST[\"e000008\"].format(pid)\n            self._logger.warning(\"{0}\".format(_message))\n\n            raise exceptions.BackupOrRestoreAlreadyRunning\n\n    def execute_backup(self):\n        \"\"\" Executes the backup functionality\n\n        Args:\n        Returns:\n        Raises:\n            exceptions.BackupFailed\n        \"\"\"\n\n        self._logger.debug(\"{func}\".format(func=\"execute_backup\"))\n\n        self._purge_old_backups()\n\n        backup_file = self._generate_file_name()\n        backup_file_tar_base, dummy = os.path.splitext(backup_file)\n        backup_file_tar = backup_file_tar_base + \".tar.gz\"\n        self._logger.debug(\"execute_backup - backup_file  :{}: backup_file_tar :{}: -\".format(backup_file,\n                                                                                              backup_file_tar))\n        self._backup_lib.sl_backup_status_create(backup_file_tar, lib.BackupType.FULL, lib.BackupStatus.RUNNING)\n\n        status, exit_code = self._run_backup_command(backup_file)\n\n        # Create tar file\n        t = tarfile.open(backup_file_tar, \"w:gz\")\n        t.add(backup_file, arcname=os.path.basename(backup_file))\n        # Add external scripts if any\n        backup_path = self._backup_lib.dir_fledge_data + \"/scripts\"\n        if os.path.isdir(backup_path):\n            t.add(backup_path, arcname=os.path.basename(backup_path))\n        # Add data/etc directory\n        t.add(self._backup_lib.dir_fledge_data_etc, arcname=os.path.basename(self._backup_lib.dir_fledge_data_etc))\n        # Add software both plugins & services\n        data = {\n            
\"plugins\": PluginDiscovery.get_plugins_installed(),\n            \"services\": get_service_installed()\n        }\n        temp_software_file = \"{}/software.json\".format(self._backup_lib.dir_backups)\n        with open(temp_software_file, 'w') as outfile:\n            json.dump(data, outfile, indent=4)\n        t.add(temp_software_file, arcname=os.path.basename(temp_software_file))\n        t.close()\n\n        # Delete the temporary files\n        os.remove(backup_file)\n        os.remove(temp_software_file)\n\n        backup_information = self._backup_lib.sl_get_backup_details_from_file_name(backup_file_tar)\n\n        self._backup_lib.sl_backup_status_update(backup_information['id'], status, exit_code)\n\n        audit = AuditLogger(self._storage_async)\n        loop = asyncio.get_event_loop()\n        if status != lib.BackupStatus.COMPLETED:\n\n            self._logger.error(self._MESSAGES_LIST[\"e000007\"])\n            loop.run_until_complete(audit.information('BKEXC', {'status': 'failed'}))\n            raise exceptions.BackupFailed\n        else:\n            loop.run_until_complete(audit.information('BKEXC', {'status': 'completed'}))\n\n    def _purge_old_backups(self):\n        \"\"\"  Deletes old backups in relation at the retention parameter\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        backups_info = asyncio.get_event_loop().run_until_complete(self._backup.get_all_backups(\n                                            self._backup_lib.MAX_NUMBER_OF_BACKUPS_TO_RETRIEVE,\n                                            0,\n                                            None,\n                                            lib.SortOrder.ASC))\n\n        # Evaluates which backup should be deleted\n        backups_n = len(backups_info)\n        # -1 so at the end of the current backup up to 'retention' backups will be available\n        last_to_delete = backups_n - (self._backup_lib.config['retention'] - 1)\n\n        if 
last_to_delete > 0:\n\n            # Deletes backups\n            backups_to_delete = backups_info[:last_to_delete]\n\n            for row in backups_to_delete:\n                backup_id = row['id']\n                file_name = row['file_name']\n\n                self._logger.debug(\"{func} - id |{id}| - file_name |{file}|\".format(func=\"_purge_old_backups\",\n                                                                                    id=backup_id,\n                                                                                    file=file_name))\n                asyncio.get_event_loop().run_until_complete(self._backup.delete_backup(backup_id))\n\n    def _run_backup_command(self, _backup_file):\n        \"\"\" Backups the entire Fledge repository into a file in the local file system\n\n        Args:\n            _backup_file: backup file to create  as a full path\n        Returns:\n            _status: status of the backup\n            _exit_code: exit status of the operation, 0=Successful\n        Raises:\n        \"\"\"\n\n        self._logger.debug(\"{func} - file_name |{file}|\".format(func=\"_run_backup_command\",\n                                                                file=_backup_file))\n\n        # Force the checkpoint - WAL mechanism\n        cmd = \"{sqlite_cmd} {path}/{db} 'PRAGMA wal_checkpoint(PASSIVE);'\".format(\n            sqlite_cmd=self._backup_lib.SQLITE_SQLITE,\n            path=self._backup_lib.dir_fledge_data,\n            db=self._backup_lib.config['database-filename']\n        )\n\n        # noinspection PyArgumentEqualDefault\n        _exit_code, output = lib.exec_wait_retry(cmd,\n                                                 output_capture=True,\n                                                 exit_code_ok=0,\n                                                 max_retry=self._backup_lib.config['max_retry'],\n                                                 timeout=self._backup_lib.config['timeout']\n                
                                 )\n\n        # Prepares the backup command\n        cmd = \"{sqlite_cmd} {path}/{db} '{backup_cmd} {file}'\".format(\n                                                sqlite_cmd=self._backup_lib.SQLITE_SQLITE,\n                                                path=self._backup_lib.dir_fledge_data,\n                                                db=self._backup_lib.config['database-filename'],\n                                                backup_cmd=self._backup_lib.SQLITE_BACKUP,\n                                                file=_backup_file\n        )\n\n        # Executes the backup waiting for the completion and using a retry mechanism\n        # noinspection PyArgumentEqualDefault\n        _exit_code, output = lib.exec_wait_retry(cmd,\n                                                 output_capture=True,\n                                                 exit_code_ok=0,\n                                                 max_retry=self._backup_lib.config['max_retry'],\n                                                 timeout=self._backup_lib.config['timeout']\n                                                 )\n\n        if _exit_code == 0:\n            _status = lib.BackupStatus.COMPLETED\n        else:\n            _status = lib.BackupStatus.FAILED\n\n        self._logger.debug(\"{func} - status |{status}| - exit_code |{exit_code}| \"\n                           \"- cmd |{cmd}|  output |{output}| \".format(\n                                                                        func=\"_run_backup_command\",\n                                                                        status=_status,\n                                                                        exit_code=_exit_code,\n                                                                        cmd=cmd,\n                                                                        output=output))\n\n        return _status, _exit_code\n\n    def shutdown(self):\n   
     \"\"\" Sets the correct state to terminate the execution\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        self._logger.debug(\"{func}\".format(func=\"shutdown\"))\n\n        self._job.set_as_completed(self._backup_lib.JOB_SEM_FILE_BACKUP)\n\n    def run(self):\n        \"\"\"  Creates a new backup\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        self.init()\n\n        try:\n            self.execute_backup()\n\n        except Exception as _ex:\n            _message = _MESSAGES_LIST[\"e000002\"].format(_ex)\n            _logger.error(_message)\n\n            self.shutdown()\n\n            raise exceptions.RestoreFailed(_message)\n        else:\n            self.shutdown()\n\n\nif __name__ == \"__main__\":\n\n    # Initializes the logger\n    try:\n        _logger = logger.setup(_MODULE_NAME,\n                               destination=_LOGGER_DESTINATION,\n                               level=_LOGGER_LEVEL)\n\n    except Exception as ex:\n        message = _MESSAGES_LIST[\"e000001\"].format(str(ex))\n        current_time = time.strftime(\"%Y-%m-%d %H:%M:%S\")\n\n        print(\"[FLEDGE] {0} - ERROR - {1}\".format(current_time, message), file=sys.stderr)\n        sys.exit(1)\n\n    # Initializes FledgeProcess and Backup classes - handling the command line parameters\n    try:\n        backup_process = BackupProcess()\n    except Exception as ex:\n        message = _MESSAGES_LIST[\"e000004\"].format(ex)\n        _logger.exception(message)\n        sys.exit(1)\n\n    if not backup_process.is_dry_run():\n        try:\n            # noinspection PyProtectedMember\n            _logger.debug(\"{module} - name |{name}| - address |{addr}| - port |{port}|\".format(\n                module=_MODULE_NAME,\n                name=backup_process._name,\n                addr=backup_process._core_management_host,\n                port=backup_process._core_management_port))\n            
_logger.info(_MESSAGES_LIST[\"i000001\"])\n            backup_process.run()\n            _logger.info(_MESSAGES_LIST[\"i000002\"])\n            sys.exit(0)\n        except Exception as ex:\n            message = _MESSAGES_LIST[\"e000002\"].format(ex)\n            _logger.exception(message)\n            backup_process.shutdown()\n            sys.exit(1)\n    else:\n        # Put any configuration here if required for the backup\n        sys.exit()\n"
  },
  {
    "path": "python/fledge/plugins/storage/sqlite/backup_restore/restore_sqlite.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n# Copyright (C) 2017\n\n\"\"\" Restores the entire Fledge repository from a previous backup.\n\nIt executes a full cold restore:\nFledge will be stopped before the start of the restore and restarted at the end.\n\nIt can also work without the Configuration Manager,\nretrieving the parameters for the execution from the local file 'restore_configuration_cache.json'.\nThe local file is used as a cache of information retrieved from the Configuration Manager.\n\nThe restore operation executes the following macro steps :\n\n    - stops Fledge\n    - executes the restore\n    - starts Fledge again\n\nso it also needs to interact with SQLite directly, executing SQL commands,\nbecause after the restart of Fledge the reference to the Storage Layer, previously obtained through\nthe FledgeProcess class, will no longer be valid.\n\n\nUsage:\n    --backup-id                     Restore a specific backup retrieving the related information from the\n                                    Storage Layer.\n    --file                          Restore a backup from a specific file; the full path should be provided,\n                                    for example : --file=/tmp/fledge_2017_09_25_15_10_22.dump\n\n    The latest backup will be restored if no option is used.\n\nExecution samples :\n    restore_sqlite --backup-id=29 --port=${adm_port} --address=127.0.0.1 --name=restore\n    restore_sqlite --file=/tmp/fledge_backup_2017_12_04_13_57_37.dump \\\n                     --port=${adm_port} --address=127.0.0.1 --name=restore\n    restore_sqlite --port=${adm_port} --address=127.0.0.1 --name=restore\n\n    Note : ${adm_port} should correspond to the Management API port of the core.\n\nExit code :\n    0    = OK\n    >=1  = Warning/Error\n\n\"\"\"\n\nimport time\nimport sys\nimport os\nimport signal\nimport sqlite3\nimport json\nimport tarfile\nimport shutil\nfrom distutils.dir_util import copy_tree\n\nfrom 
fledge.common.parser import Parser\nfrom fledge.common.process import FledgeProcess\nfrom fledge.common import logger\nimport fledge.plugins.storage.common.lib as lib\nimport fledge.plugins.storage.common.exceptions as exceptions\n\n__author__ = \"Stefano Simonelli\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_MODULE_NAME = \"fledge_restore_sqlite_module\"\n\n_MESSAGES_LIST = {\n\n    # Information messages\n    \"i000001\": \"Execution started.\",\n    \"i000002\": \"Execution completed.\",\n\n    # Warning / Error messages\n    \"e000001\": \"cannot initialize the logger - error details |{0}|\",\n    \"e000002\": \"an error occurred during the restore operation - error details |{0}|\",\n    \"e000003\": \"invalid command line arguments - error details |{0}|\",\n    \"e000004\": \"cannot complete the initialization - error details |{0}|\",\n}\n\"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n\n# Log definitions\n_logger = None\n\n_LOG_LEVEL_DEBUG = 10\n_LOG_LEVEL_INFO = 20\n\n_LOGGER_LEVEL = _LOG_LEVEL_INFO\n_LOGGER_DESTINATION = logger.SYSLOG\n\n\n# noinspection PyAbstractClass\nclass RestoreProcess(FledgeProcess):\n    \"\"\" Restore the entire Fledge repository.\n    \"\"\"\n\n    _MODULE_NAME = \"fledge_restore_sqlite_process\"\n\n    _FLEDGE_ENVIRONMENT_DEV = \"dev\"\n    _FLEDGE_ENVIRONMENT_DEPLOY = \"deploy\"\n\n    _FLEDGE_CMD_PATH_DEV = \"scripts/fledge\"\n    _FLEDGE_CMD_PATH_DEPLOY = \"bin/fledge\"\n\n    # The init method will evaluate the running environment setting the variables accordingly\n    _fledge_environment = _FLEDGE_ENVIRONMENT_DEV\n    _fledge_cmd = _FLEDGE_CMD_PATH_DEV + \" {0}\"\n    \"\"\" Command for managing Fledge, stop/start/status \"\"\"\n\n    _MESSAGES_LIST = {\n\n        # Information messages\n        \"i000001\": \"Execution started.\",\n        \"i000002\": \"Execution completed.\",\n\n        # Warning / Error messages\n        
\"e000000\": \"general error\",\n        \"e000001\": \"Invalid file name\",\n        \"e000002\": \"cannot retrieve the configuration from the manager, trying retrieving from file \"\n                   \"- error details |{0}|\",\n        \"e000003\": \"cannot retrieve the configuration from file - error details |{0}|\",\n        \"e000004\": \"cannot restore the backup, file doesn't exists - file name |{0}|\",\n        \"e000006\": \"cannot start Fledge after the restore - error details |{0}|\",\n        \"e000007\": \"cannot restore the backup, restarting Fledge - error details |{0}|\",\n        \"e000008\": \"cannot identify Fledge status, the maximum number of retries has been reached \"\n                   \"- error details |{0}|\",\n        \"e000009\": \"cannot restore the backup, either a backup or a restore is already running - pid |{0}|\",\n        \"e000010\": \"cannot retrieve the Fledge status - error details |{0}|\",\n        \"e000011\": \"cannot restore the backup, the selected backup doesn't exists - backup id |{0}|\",\n        \"e000012\": \"cannot restore the backup, the selected backup doesn't exists - backup file name |{0}|\",\n        \"e000013\": \"cannot proceed the execution, \"\n                   \"It is not possible to determine the environment in which the code is running\"\n                   \" neither Deployment nor Development\",\n    }\n    \"\"\" Messages used for Information, Warning and Error notice \"\"\"\n\n    _logger = None\n\n    _backup_id = None\n    \"\"\" Used to store the optional command line parameter value \"\"\"\n\n    _file_name = None\n    \"\"\" Used to store the optional command line parameter value \"\"\"\n\n    class FledgeStatus(object):\n        \"\"\" Fledge - possible status \"\"\"\n\n        NOT_DEFINED = 0\n        STOPPED = 1\n        RUNNING = 2\n\n    @staticmethod\n    def _signal_handler(_signo, _stack_frame):\n        \"\"\" Handles signals to avoid restore termination doing Fledge stop\n\n       
 Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        short_stack_frame = str(_stack_frame)[:100]\n        _logger.debug(\"{func} - signal |{signo}| - info |{ssf}| \".format(\n            func=\"_signal_handler\",\n            signo=_signo,\n            ssf=short_stack_frame))\n\n    def __init__(self):\n\n        super().__init__()\n\n        if not self._logger:\n            self._logger = logger.setup(self._MODULE_NAME,\n                                        destination=_LOGGER_DESTINATION,\n                                        level=_LOGGER_LEVEL)\n\n        # Handled Restore command line parameters\n        try:\n            self._backup_id = Parser.get('--backup-id')\n            self._file_name = Parser.get('--file')\n\n        except Exception as _ex:\n\n            _message = _MESSAGES_LIST[\"e000003\"].format(_ex)\n            _logger.exception(_message)\n\n            raise exceptions.ArgumentParserError(_message)\n\n        self._restore_lib = lib.BackupRestoreLib(self._storage_async, self._logger)\n\n        self._job = lib.Job()\n\n        self._force_restore = True\n        \"\"\" Restore a backup doesn't exist in the backups table \"\"\"\n\n        # Creates the objects references used by the library\n        lib._logger = self._logger\n        lib._storage = self._storage_async\n\n    def _identifies_backup_to_restore(self):\n        \"\"\"Identifies the backup to restore either\n        - latest backup\n        - or a specific backup_id\n        - or a specific file_name\n\n        Args:\n        Returns:\n        Raises:\n            FileNotFoundError\n        \"\"\"\n\n        backup_id = None\n        file_name = None\n\n        # Case - last backup\n        if self._backup_id is None and \\\n           self._file_name is None:\n\n            backup_id,  file_name = self._identify_last_backup()\n\n        # Case - backup-id\n        elif self._backup_id is not None:\n\n            try:\n                backup_info = 
self._restore_lib.sl_get_backup_details(self._backup_id)\n                backup_id = backup_info[\"id\"]\n                file_name = backup_info[\"file_name\"]\n\n            except exceptions.DoesNotExist:\n                _message = self._MESSAGES_LIST[\"e000011\"].format(self._backup_id)\n                _logger.error(_message)\n\n                raise exceptions.DoesNotExist(_message)\n\n        # Case - file-name\n        elif self._file_name is not None:\n\n            try:\n                backup_info = self._restore_lib.sl_get_backup_details_from_file_name(self._file_name)\n                backup_id = backup_info[\"id\"]\n                file_name = backup_info[\"file_name\"]\n\n            except exceptions.DoesNotExist:\n                if self._force_restore:\n                    file_name = self._file_name\n\n                else:\n                    _message = self._MESSAGES_LIST[\"e000012\"].format(self._file_name)\n                    _logger.error(_message)\n\n                    raise exceptions.DoesNotExist(_message)\n\n        if not os.path.exists(file_name):\n\n            _message = self._MESSAGES_LIST[\"e000004\"].format(file_name)\n            _logger.error(_message)\n\n            raise FileNotFoundError(_message)\n\n        return backup_id, file_name\n\n    def storage_retrieve(self, sql_cmd):\n        \"\"\"  Executes a sql command against SQLite directly\n\n        Args:\n        Returns:\n            raw_data:list - Python list containing the rows retrieved from the Storage layer\n        Raises:\n        \"\"\"\n\n        _logger.debug(\"{func} - sql cmd |{cmd}| \".format(func=\"storage_retrieve\",\n                                                         cmd=sql_cmd))\n\n        db_connection_string = \"{path}/{db}\".format(\n                                                        path=self._restore_lib.dir_fledge_data,\n                                                        db=self._restore_lib.config['database-filename']\n      
                                              )\n\n        comm = sqlite3.connect(db_connection_string)\n\n        cur = comm.cursor()\n\n        cur.execute(sql_cmd)\n\n        raw_data = cur.fetchall()\n        cur.close()\n\n        return raw_data\n\n    def storage_update(self, sql_cmd, records=None):\n        \"\"\" Executes a sql command against SQLite directly\n\n        Args:\n            sql_cmd: sql command to execute\n            records: to insert multiple records only if records are there otherwise default None and will treat as single execution\n        Returns:\n        Raises:\n        \"\"\"\n\n        _logger.debug(\"{func} - sql cmd |{cmd} | {records}|\".format(func=\"storage_update\", cmd=sql_cmd,\n                                                                    records=records))\n\n        db_connection_string = \"{path}/{db}\".format(\n                                                        path=self._restore_lib.dir_fledge_data,\n                                                        db=self._restore_lib.config['database-filename']\n                                                    )\n\n        comm = sqlite3.connect(db_connection_string)\n\n        cur = comm.cursor()\n        if records is None:\n            cur.execute(sql_cmd)\n        else:\n            cur.executemany(sql_cmd, records)\n        comm.commit()\n        comm.close()\n\n    def _identify_last_backup(self):\n        \"\"\" Identifies latest executed backup either successfully executed (COMPLETED) or already RESTORED\n\n        Args:\n        Returns:\n        Raises:\n            NoBackupAvailableError: No backup either successfully executed or already restored available\n            FileNameError: it is impossible to identify an unique backup to restore\n        \"\"\"\n\n        self._logger.debug(\"{func} \".format(func=\"_identify_last_backup\"))\n\n        sql_cmd = \"\"\"\n            SELECT id, file_name FROM backups WHERE id=\n            (SELECT  MAX(id) FROM 
backups WHERE status={0} or status={1});\n        \"\"\".format(lib.BackupStatus.COMPLETED,\n                   lib.BackupStatus.RESTORED)\n\n        data = self.storage_retrieve(sql_cmd)\n\n        if len(data) == 0:\n            raise exceptions.NoBackupAvailableError\n\n        elif len(data) == 1:\n\n            _backup_id = data[0][0]\n            _file_name = data[0][1]\n\n        else:\n            raise exceptions.FileNameError\n\n        return _backup_id, _file_name\n\n    def get_backup_details_from_file_name(self, _file_name):\n        \"\"\" Retrieves backup information from file name\n\n        Args:\n            _file_name: file name to search in the Storage layer\n\n        Returns:\n            backup_information: Backup information related to the file name\n\n        Raises:\n            exceptions.NoBackupAvailableError\n            exceptions.FileNameError\n        \"\"\"\n\n        self._logger.debug(\"{func} \".format(func=\"get_backup_details_from_file_name\"))\n\n        sql_cmd = \"\"\"\n            SELECT * FROM backups WHERE file_name='{file}'\n        \"\"\".format(file=_file_name)\n        data = self.storage_retrieve(sql_cmd)\n\n        if len(data) == 0:\n            raise exceptions.NoBackupAvailableError\n\n        elif len(data) == 1:\n            backup_information = data[0]\n\n        else:\n            raise exceptions.FileNameError\n\n        return backup_information\n\n    def _fledge_stop(self):\n        \"\"\" Stops Fledge before the execution of the restore, doing a cold restore\n\n        Args:\n        Returns:\n        Raises:\n            FledgeStopError\n        \"\"\"\n\n        self._logger.debug(\"{func}\".format(func=\"_fledge_stop\"))\n\n        cmd = \"{path}/{cmd}\".format(\n            path=self._restore_lib.dir_fledge_root,\n            cmd=self._fledge_cmd.format(\"stop\")\n        )\n\n        # Stops Fledge\n        status, output = lib.exec_wait_retry(cmd, True,\n                                         
    max_retry=self._restore_lib.config['max_retry'],\n                                             timeout=self._restore_lib.config['timeout'])\n\n        self._logger.debug(\"{func} - status |{status}| - cmd |{cmd}| - output |{output}|   \".format(\n                    func=\"_fledge_stop\",\n                    status=status,\n                    cmd=cmd,\n                    output=output))\n\n        if status == 0:\n\n            # Checks to ensure Fledge has actually stopped\n            if self._fledge_status() != self.FledgeStatus.STOPPED:\n                raise exceptions.FledgeStopError(output)\n        else:\n            raise exceptions.FledgeStopError(output)\n\n    def _decode_fledge_status(self, text):\n        \"\"\" Decodes the output of the Fledge 'status' command into a FledgeStatus value\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        text_upper = text.upper()\n\n        if 'FLEDGE UPTIME' in text_upper:\n            status = self.FledgeStatus.RUNNING\n\n        elif 'FLEDGE NOT RUNNING.' in text_upper:\n            status = self.FledgeStatus.STOPPED\n\n        else:\n            status = self.FledgeStatus.NOT_DEFINED\n\n        return status\n\n    def _check_wait_fledge_start(self):\n        \"\"\" Checks and waits for Fledge to start\n\n        Args:\n        Returns:\n            status: FledgeStatus - {NOT_DEFINED|STOPPED|RUNNING}\n        Raises:\n        \"\"\"\n\n        self._logger.debug(\"{func}\".format(func=\"_check_wait_fledge_start\"))\n\n        status = self.FledgeStatus.NOT_DEFINED\n\n        n_retry = 0\n        max_retries = self._restore_lib.config['restart-max-retries']\n        sleep_time = self._restore_lib.config['restart-sleep']\n\n        while n_retry < max_retries:\n\n            self._logger.debug(\"{func}\".format(func=\"_check_wait_fledge_start - checks Fledge status\"))\n\n            status = self._fledge_status()\n            if status == self.FledgeStatus.RUNNING:\n                break\n\n            self._logger.debug(\"{func}\".format(func=\"_check_wait_fledge_start - 
sleep {0}\".format(sleep_time)))\n\n            time.sleep(sleep_time)\n            n_retry += 1\n\n        return status\n\n    def _fledge_status(self):\n        \"\"\" Checks Fledge status\n\n        to ensure the status is stable and reliable,\n        It executes the Fledge 'status' command until either\n        until the same value comes back for 3 times in a row  or it reaches the maximum number of retries allowed.\n\n        Args:\n        Returns:\n            status: FledgeStatus - {STATUS_NOT_DEFINED|STATUS_STOPPED|STATUS_RUNNING}\n        Raises:\n        \"\"\"\n\n        status = self.FledgeStatus.NOT_DEFINED\n\n        num_exec = 0\n        max_exec = 10\n        same_status = 0\n        same_status_ok = 3\n        sleep_time = 1\n\n        while (same_status < same_status_ok) and (num_exec <= max_exec):\n\n            try:\n\n                cmd = \"{path}/{cmd}\".format(\n                            path=self._restore_lib.dir_fledge_root,\n                            cmd=self._fledge_cmd.format(\"status\")\n                )\n\n                cmd_status, output = lib.exec_wait(cmd, True, _timeout=self._restore_lib.config['timeout'])\n\n                self._logger.debug(\"{func} - output |{output}| \\r - status |{status}|  \".format(\n                                                                                            func=\"_fledge_status\",\n                                                                                            output=output,\n                                                                                            status=cmd_status))\n\n                num_exec += 1\n\n                new_status = self._decode_fledge_status(output)\n\n            except Exception as _ex:\n                _message = self._MESSAGES_LIST[\"e000010\"].format(_ex)\n                _logger.error(_message)\n\n                raise\n\n            else:\n                if new_status == status:\n                    same_status += 1\n        
            time.sleep(sleep_time)\n                else:\n                    status = new_status\n                    same_status = 0\n\n        if num_exec >= max_exec:\n            _message = self._MESSAGES_LIST[\"e000008\"]\n            self._logger.error(_message)\n\n            status = self.FledgeStatus.NOT_DEFINED\n\n        return status\n\n    def _run_restore_command(self, backup_file, restore_command):\n        \"\"\" Executes the restore of the storage layer from a backup\n\n        Args:\n            backup_file: backup file to restore\n        Returns:\n        Raises:\n            RestoreError\n        \"\"\"\n\n        self._logger.debug(\"{func} - Restore starts - file name |{file}|\".format(\n                                                                    func=\"_run_restore_command\",\n                                                                    file=backup_file))\n\n        # Prepares the restore command\n        cmd = \"{cmd} {file} {path}/{db} \".format(\n                                                cmd=restore_command,\n                                                file=backup_file,\n                                                path=self._restore_lib.dir_fledge_data,\n                                                db=self._restore_lib.config['database-filename']\n\n        )\n\n        # Restores the backup\n        status, output = lib.exec_wait_retry(cmd, True, timeout=self._restore_lib.config['timeout'])\n\n        self._logger.debug(\"{func} - Restore ends - status |{status}| - cmd |{cmd}| - output |{output}|\".format(\n                                    func=\"_run_restore_command\",\n                                    status=status,\n                                    cmd=cmd,\n                                    output=output))\n\n        if status != 0:\n            raise exceptions.RestoreFailed\n\n        # Delete files related to the WAL mechanism\n        cmd = \"rm  {path}/fledge.db-shm 
\".format(path=self._restore_lib.dir_fledge_data)\n        status, output = lib.exec_wait_retry(cmd, True, timeout=self._restore_lib.config['timeout'])\n\n        cmd = \"rm  {path}/fledge.db-wal \".format(path=self._restore_lib.dir_fledge_data)\n        status, output = lib.exec_wait_retry(cmd, True, timeout=self._restore_lib.config['timeout'])\n\n    def _fledge_start(self):\n        \"\"\" Starts Fledge after the execution of the restore\n\n        Args:\n        Returns:\n        Raises:\n            FledgeStartError\n        \"\"\"\n\n        cmd = \"{path}/{cmd}\".format(\n                                    path=self._restore_lib.dir_fledge_root,\n                                    cmd=self._fledge_cmd.format(\"start\")\n        )\n\n        exit_code, output = lib.exec_wait_retry(\n                                                cmd,\n                                                True,\n                                                max_retry=self._restore_lib.config['max_retry'],\n                                                timeout=self._restore_lib.config['timeout'])\n\n        self._logger.debug(\"{func} - exit_code |{exit_code}| - cmd |{cmd}| - output |{output}|\".format(\n                                    func=\"_fledge_start\",\n                                    exit_code=exit_code,\n                                    cmd=cmd,\n                                    output=output))\n\n        if exit_code == 0:\n            if self._check_wait_fledge_start() != self.FledgeStatus.RUNNING:\n                raise exceptions.FledgeStartError\n\n        else:\n            raise exceptions.FledgeStartError\n\n    def insert_backup_entries(self, old_data: list, new_data: list) -> None:\n        \"\"\" Insert those backup entries from old data which are not found in new data\n\n        Args:\n            old_data: Old backup data before restore\n            new_data: New backup data after restore\n\n        Returns:\n\n        Raises:\n        
\"\"\"\n        self._logger.debug(\"Old backup data: {} - New backup data: {}\".format(old_data, new_data))\n        matched_entry_to_delete = []\n        for old_row in old_data:\n            for new_row in new_data:\n                if old_row['file_name'] == new_row[1]:\n                    matched_entry_to_delete.append(old_row['file_name'])\n                    break\n        self._logger.debug(\"Matched entry deletion list from old backup data: {}\".format(matched_entry_to_delete))\n        # Filter out records duplicated between the old and new backup data lists\n        filtered_list = [d for d in old_data if d['file_name'] not in matched_entry_to_delete]\n        self._logger.debug(\"Filtered list: {}\".format(filtered_list))\n        # Prepare backup records for the SQLite executemany operation\n        backup_list = []\n        for row in filtered_list:\n            r_tuple = (row['file_name'], row['ts'], row['type'], row['status'], row['exit_code'])\n            backup_list.append(r_tuple)\n        # Insert backup entries recorded before the restore checkpoint - this way we do not lose all backups\n        sql_cmd = \"INSERT INTO backups (file_name, ts, type, status, exit_code) VALUES (?, ?, ?, ?, ?)\"\n        self._logger.debug(\"Insert new entries from backup list: {}\".format(backup_list))\n\n        self.storage_update(sql_cmd, backup_list)\n\n    def backup_status_update(self, backup_id, status):\n        \"\"\" Updates the status of the backup in the Storage layer\n\n        Args:\n            backup_id: int - id of the backup to update\n            status: BackupStatus - new status of the backup\n        Returns:\n        Raises:\n        \"\"\"\n\n        _logger.debug(\"{func} - backup id |{id}| \".format(func=\"backup_status_update\",\n                                                          id=backup_id))\n\n        sql_cmd = \"\"\"\n\n            UPDATE backups SET status={status} WHERE id='{id}';\n\n            \"\"\".format(status=status,\n                       
id=backup_id, )\n\n        self.storage_update(sql_cmd)\n\n    def tar_extraction(self, file_name) -> str:\n        \"\"\" Extracts the files from tar.gz backup file\n\n        Args:\n            file_name: filename of the backup\n        Returns:\n            Full backup filepath\n        Raises:\n        \"\"\"\n        dummy, file_extension = os.path.splitext(file_name)\n        self._logger.debug(\"tar_extraction - filename  :{}: file_extension :{}: \".format(file_name, file_extension))\n        # Removes tar.gz\n        filename_base1 = os.path.basename(file_name)\n        filename_base2, dummy = os.path.splitext(filename_base1)\n        filename_base, dummy = os.path.splitext(filename_base2)\n        extract_path = \"{}/extract\".format(self._restore_lib.dir_fledge_backup)\n        if not os.path.isdir(extract_path):\n            os.mkdir(extract_path)\n        else:\n            shutil.rmtree(extract_path)\n            os.mkdir(extract_path)\n\n        # Extracts the tar\n        backup_tar = tarfile.open(file_name)\n        backup_tar.extractall(extract_path)\n        db_file_from_extract = [entry for entry in backup_tar.getnames() if entry.endswith(\".db\")]\n        backup_tar.close()\n\n        # Moves the db file to the right position\n        db_file_to_restore = db_file_from_extract[0] if db_file_from_extract else \"{}.db\".format(filename_base)\n        file_source = \"{}/{}\".format(extract_path, db_file_to_restore)\n        file_target = \"{}/{}.db\".format(self._restore_lib.dir_fledge_backup, filename_base)\n        self._logger.debug(\"tar_extraction 'db' - source :{}: target :{}: \".format(file_source, file_target))\n        os.rename(file_source, file_target)\n\n        # etc\n        source = \"{}/etc\".format(extract_path)\n        target = \"{}/etc\".format(self._restore_lib.dir_fledge_data)\n        self._logger.debug(\"tar_extraction 'etc' - source :{}: target :{}: \".format(source, target))\n        copy_tree(source, target)\n\n        # 
external scripts\n        dir_scripts = \"{}/scripts\".format(extract_path)\n        if os.path.isdir(dir_scripts):\n            target = \"{}/scripts\".format(self._restore_lib.dir_fledge_data)\n            if not os.path.isdir(target):\n                os.mkdir(target)\n            source = dir_scripts\n            self._logger.debug(\"tar_extraction 'scripts' - source :{}: target :{}: \".format(source, target))\n            copy_tree(source, target)\n\n        # software\n        is_software = \"{}/software.json\".format(extract_path)\n        if os.path.exists(is_software):\n            # Software is not installed automatically as part of the restore;\n            # it is the user's responsibility to install it\n            with open(is_software, 'r') as f:\n                data = json.load(f)\n            self._logger.debug(\"tar_extraction 'data' :{}: \".format(data))\n            software_list = []\n            for p in data['plugins']:\n                # Exclude inbuilt plugins\n                if p['packageName'] != '':\n                    software_list.append({p['packageName']: p['version']})\n            for s in data['services']:\n                # Exclude inbuilt services\n                if s not in ('storage', 'south', 'north'):\n                    # No version is available for services, so keep it empty\n                    software_list.append({\"fledge-service-{}\".format(s): ''})\n            self._logger.info(\"Please check the software list: {}; \"\n                              \"if any of this software is not present on your system, you need to install it \"\n                              \"manually.\".format(software_list))\n        # Remove extract directory\n        shutil.rmtree(extract_path)\n        return file_target\n\n    def execute_restore(self) -> None:\n        \"\"\"Executes the restore operation\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        
self._logger.debug(\"{func}\".format(func=\"execute_restore\"))\n\n        # Fetch old backup table entries before restore\n        old_backup_entries = self._restore_lib.sl_get_all_backups()\n        self._logger.debug(\"{func} - old backup entries list - {backups}\".format(func=\"execute_restore\",\n                                                                                 backups=old_backup_entries))\n\n        backup_id, file_name = self._identifies_backup_to_restore()\n\n        self._logger.debug(\"{func} - backup to restore |{id}| - |{file}| \".format(\n                                                                                func=\"execute_restore\",\n                                                                                id=backup_id,\n                                                                                file=file_name))\n        # Stops Fledge if it is running\n        if self._fledge_status() == self.FledgeStatus.RUNNING:\n            self._fledge_stop()\n\n        self._logger.debug(\"{func} - Fledge is down\".format(func=\"execute_restore\"))\n\n        dummy, file_extension = os.path.splitext(file_name)\n        # backward compatibility (<= 1.9.2)\n        if file_extension == \".db\":\n            file_name_db = file_name\n            restore_command = self._restore_lib.SQLITE_RESTORE_COPY\n        elif file_extension == \".gz\":\n            file_name_db = self.tar_extraction(file_name)\n            restore_command = self._restore_lib.SQLITE_RESTORE_MOVE\n        else:\n            raise Exception('Unsupported {} file extension found'.format(file_extension))\n        # Executes the restore and then starts Fledge\n        try:\n            self._run_restore_command(file_name_db, restore_command)\n            if self._force_restore and file_extension != \".gz\":\n                # Retrieve the backup-id after the restore operation\n                backup_info = self.get_backup_details_from_file_name(file_name_db)\n                backup_id = 
backup_info[0]\n            # Updates the backup status as RESTORED\n            self.backup_status_update(backup_id, lib.BackupStatus.RESTORED)\n            # Fetch new backup entries after DB fully restored\n            sql_cmd = \"\"\"SELECT id, file_name, ts, type, status, exit_code FROM backups;\"\"\"\n            new_backup_entries = self.storage_retrieve(sql_cmd)\n            self._logger.debug(\"{func} - New backup entries list - {backups}\".format(func=\"execute_restore\",\n                                                                                        backups=new_backup_entries))\n            # Insert old backup entries into newly restored Database\n            self.insert_backup_entries(old_backup_entries, new_backup_entries)\n        except Exception as _ex:\n            _message = self._MESSAGES_LIST[\"e000007\"].format(_ex)\n            self._logger.error(_message)\n            raise\n        finally:\n            try:\n                self._fledge_start()\n            except Exception as _ex:\n                _message = self._MESSAGES_LIST[\"e000006\"].format(_ex)\n                self._logger.error(_message)\n                raise\n\n    def check_command(self, cmd_to_identify):\n        \"\"\" Evaluates whether the command is available\n\n          Args:\n              cmd_to_identify: command to check for availability\n          Returns:\n              cmd_available: boolean\n          Raises:\n          \"\"\"\n\n        cmd = \"command -v \" + cmd_to_identify\n\n        # The timeout command can't be used with 'command'\n        # noinspection PyArgumentEqualDefault\n        _exit_code, output = lib.exec_wait(\n            _cmd=cmd,\n            _output_capture=True,\n            _timeout=0\n        )\n\n        self._logger.debug(\"{func} - cmd |{cmd}| - exit_code |{exit_code}| output |{output}| \".format(\n            func=\"check_command\",\n            cmd=cmd,\n            exit_code=_exit_code,\n            output=output))\n\n        if _exit_code == 0:\n            cmd_available = True\n    
    else:\n            cmd_available = False\n\n        return cmd_available\n\n    def evaluate_fledge_env(self):\n        \"\"\" Evaluates whether the code is running in a Development or a Deploy environment\n\n        Args:\n        Returns:\n            env: str - {_FLEDGE_ENVIRONMENT_DEPLOY|_FLEDGE_ENVIRONMENT_DEV}\n        Raises:\n            exceptions.InvalidFledgeEnvironment\n        \"\"\"\n\n        cmd = self._restore_lib.dir_fledge_root + \"/\" + self._FLEDGE_CMD_PATH_DEPLOY\n\n        if self.check_command(cmd):\n            env = self._FLEDGE_ENVIRONMENT_DEPLOY\n        else:\n\n            cmd = self._restore_lib.dir_fledge_root + \"/\" + self._FLEDGE_CMD_PATH_DEV\n            if self.check_command(cmd):\n\n                env = self._FLEDGE_ENVIRONMENT_DEV\n            else:\n                _message = self._MESSAGES_LIST[\"e000013\"]\n                self._logger.error(_message)\n\n                raise exceptions.InvalidFledgeEnvironment\n\n        return env\n\n    def set_fledge_env(self):\n        \"\"\" Sets the proper configuration for the environment in which the code is running,\n        either Development or Deploy\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        self._fledge_environment = self.evaluate_fledge_env()\n\n        # Configures in relation to the environment in use\n        if self._fledge_environment == self._FLEDGE_ENVIRONMENT_DEPLOY:\n\n            self._fledge_cmd = self._FLEDGE_CMD_PATH_DEPLOY + \" {0}\"\n        else:\n            self._fledge_cmd = self._FLEDGE_CMD_PATH_DEV + \" {0}\"\n\n    def check_for_execution_restore(self):\n        \"\"\" Executes all the checks to ensure the prerequisites to execute the restore are met\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        pass\n\n    def init(self):\n        \"\"\" Sets up the correct state for the execution of the restore\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n        # 
Sets up signal handlers to avoid termination of the restore\n        # a) SIGINT: Keyboard interrupt\n        # b) SIGTERM: kill or system shutdown\n        # c) SIGHUP: Controlling shell exiting\n        signal.signal(signal.SIGINT, RestoreProcess._signal_handler)\n        signal.signal(signal.SIGTERM, RestoreProcess._signal_handler)\n        signal.signal(signal.SIGHUP, RestoreProcess._signal_handler)\n\n        self._logger.debug(\"{func}\".format(func=\"init\"))\n\n        self._restore_lib.evaluate_paths()\n\n        self.set_fledge_env()\n\n        self._restore_lib.retrieve_configuration()\n\n        self.check_for_execution_restore()\n\n        # Checks for backup/restore synchronization\n        pid = self._job.is_running()\n        if pid == 0:\n\n            # no job is running\n            pid = os.getpid()\n            self._job.set_as_running(self._restore_lib.JOB_SEM_FILE_RESTORE, pid)\n        else:\n            _message = self._MESSAGES_LIST[\"e000009\"].format(pid)\n            self._logger.warning(\"{0}\".format(_message))\n\n            raise exceptions.BackupOrRestoreAlreadyRunning\n\n    def shutdown(self):\n        \"\"\" Sets the correct state to terminate the execution\n\n        Args:\n        Returns:\n        Raises:\n        \"\"\"\n\n        self._logger.debug(\"{func}\".format(func=\"shutdown\"))\n\n        self._job.set_as_completed(self._restore_lib.JOB_SEM_FILE_RESTORE)\n\n    def run(self):\n        \"\"\" Restores a backup\n\n        Args:\n        Returns:\n        Raises:\n            exceptions.RestoreFailed\n        \"\"\"\n\n        self.init()\n\n        try:\n            self.execute_restore()\n\n        except Exception as _ex:\n            _message = _MESSAGES_LIST[\"e000002\"].format(_ex)\n            _logger.error(_message)\n\n            self.shutdown()\n\n            raise exceptions.RestoreFailed(_message)\n        else:\n            self.shutdown()\n\n\nif __name__ == \"__main__\":\n\n    # Initializes the 
logger\n    try:\n        _logger = logger.setup(_MODULE_NAME,\n                               destination=_LOGGER_DESTINATION,\n                               level=_LOGGER_LEVEL)\n\n    except Exception as ex:\n        message = _MESSAGES_LIST[\"e000001\"].format(str(ex))\n        current_time = time.strftime(\"%Y-%m-%d %H:%M:%S\")\n\n        print(\"[FLEDGE] {0} - ERROR - {1}\".format(current_time, message), file=sys.stderr)\n        sys.exit(1)\n\n    # Initializes FledgeProcess and RestoreProcess classes - handling also the command line parameters\n    try:\n        restore_process = RestoreProcess()\n    except Exception as ex:\n        message = _MESSAGES_LIST[\"e000004\"].format(ex)\n        _logger.exception(message)\n        sys.exit(1)\n\n    if not restore_process.is_dry_run():\n        try:\n            # noinspection PyProtectedMember\n            _logger.debug(\"{module} - name |{name}| - address |{addr}| - port |{port}| \"\n                          \"- file |{file}| - backup_id |{backup_id}| \".format(\n                                                                            module=_MODULE_NAME,\n                                                                            name=restore_process._name,\n                                                                            addr=restore_process._core_management_host,\n                                                                            port=restore_process._core_management_port,\n                                                                            file=restore_process._file_name,\n                                                                            backup_id=restore_process._backup_id))\n            _logger.info(_MESSAGES_LIST[\"i000001\"])\n            restore_process.run()\n            _logger.info(_MESSAGES_LIST[\"i000002\"])\n            sys.exit(0)\n        except Exception as ex:\n            message = _MESSAGES_LIST[\"e000002\"].format(ex)\n            
_logger.exception(message)\n            sys.exit(1)\n    else:\n        # Put any configuration here if required for the restore\n        sys.exit()\n"
  },
  {
    "path": "python/fledge/services/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/common/README.rst",
    "content": "************************\nMicroservice Common Code\n************************\n\nThis directory includes code that is common to more than one microservice\nbut is not used outside of the microservices environment.\n"
  },
  {
    "path": "python/fledge/services/common/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/common/microservice.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Common FledgeMicroservice Class\"\"\"\n\nfrom aiohttp import web\nfrom fledge.services.common.microservice_management import routes\nfrom fledge.common import logger\nfrom fledge.common.process import FledgeProcess\nfrom fledge.common.web import middleware\nfrom abc import abstractmethod\nimport time\nimport json\nimport asyncio\n\n__author__ = \"Ashwin Gopalakrishnan\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = logger.setup(__name__)\n\n\nclass FledgeMicroservice(FledgeProcess):\n    \"\"\" FledgeMicroservice class for all non-core python microservices\n        All microservices will inherit from FledgeMicroservice and implement pure virtual method run()\n    \"\"\"\n    _microservice_management_app = None\n    \"\"\" web application for microservice management app \"\"\"\n\n    _microservice_management_handler = None\n    \"\"\" http factory for microservice management app \"\"\"\n\n    _microservice_management_server = None\n    \"\"\" server for microservice management app \"\"\"\n\n    _microservice_management_host = None\n    _microservice_management_port = None\n    \"\"\" address for microservice management app \"\"\"\n\n    _microservice_id = None\n    \"\"\" id for this microservice \"\"\"\n\n    _type = None\n    \"\"\" microservice type \"\"\"\n\n    _protocol = \"http\"\n    \"\"\" communication protocol \"\"\"\n\n    def __init__(self):\n        super().__init__()\n        try:\n            # Configuration handled through the Configuration Manager\n            default_config = {\n                'local_services': {\n                    'description': 'Restrict microservices to localhost',\n                    'type': 'boolean',\n                    'default': 'false',\n                    'displayName': 'Restrict Microservices To Local'\n            
    }\n            }\n\n            loop = asyncio.get_event_loop()\n\n            category = \"Security\"\n            config = default_config\n            config_descr = 'Microservices Security'\n            config_payload = json.dumps({\n                \"key\": category,\n                \"description\": config_descr,\n                \"value\": config,\n                \"keep_original_items\": True\n            })\n            self._core_microservice_management_client.create_configuration_category(config_payload)\n            self._core_microservice_management_client.create_child_category(\"General\", [\"Security\"])\n            config = self._core_microservice_management_client.get_configuration_category(category_name=category)\n            is_local_services = config['local_services']['value'].lower() == 'true'\n            host = '127.0.0.1' if is_local_services else '0.0.0.0'\n\n            self._make_microservice_management_app()\n            self._run_microservice_management_app(loop, host)\n            res = self.register_service_with_core(self._get_service_registration_payload())\n            self._microservice_id = res[\"id\"]\n        except Exception as ex:\n            _logger.exception('Unable to initialize FledgeMicroservice: {}'.format(str(ex)))\n            raise\n\n    def _make_microservice_management_app(self):\n        # create web server application\n        self._microservice_management_app = web.Application(middlewares=[middleware.error_middleware])\n        # register supported urls\n        routes.setup(self._microservice_management_app, self)\n        # create http protocol factory for handling requests\n        self._microservice_management_handler = self._microservice_management_app.make_handler()\n\n    def _run_microservice_management_app(self, loop, host='127.0.0.1'):\n        # run microservice_management_app\n        coro = loop.create_server(self._microservice_management_handler, host, 0)\n        
self._microservice_management_server = loop.run_until_complete(coro)\n        self._microservice_management_host, self._microservice_management_port = \\\n            self._microservice_management_server.sockets[0].getsockname()\n\n    def _get_service_registration_payload(self):\n        service_registration_payload = {\n                \"name\": self._name,\n                \"type\": self._type,\n                \"management_port\": int(self._microservice_management_port),\n                \"service_port\": int(self._microservice_management_port),\n                \"address\": self._microservice_management_host,\n                \"protocol\": self._protocol\n            }\n        return service_registration_payload\n\n    @abstractmethod\n    async def shutdown(self, request):\n        pass\n\n    @abstractmethod\n    async def change(self, request):\n        pass\n\n    async def ping(self, request):\n        \"\"\" health check\n    \n        \"\"\"\n        since_started = time.time() - self._start_time\n        return web.json_response({'uptime': since_started})\n"
  },
  {
    "path": "python/fledge/services/common/microservice_management/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/common/microservice_management/routes.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom fledge.services.core import proxy\n\n__author__ = \"Ashish Jabble, Praveen Garg, Ashwin Gopalakrishnan, Massimiliano Pinto\"\n__copyright__ = \"Copyright (c) 2021 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\ndef setup(app, obj, is_core=False):\n    \"\"\" Common method to setup the microservice management api.\n    Args:\n        app: an Application instance\n        obj: an instance, or a class, with the implementation of the needed methods for each route below\n        is_core: if True, routes are being set for Core Management API only\n    \"\"\"\n    \n    # Basic api common to all microservices\n    app.router.add_route('GET', '/fledge/service/ping', obj.ping)\n    app.router.add_route('POST', '/fledge/service/shutdown', obj.shutdown)\n    app.router.add_route('POST', '/fledge/change', obj.change)\n\n    if is_core:\n        # Configuration\n        app.router.add_route('GET', '/fledge/service/category', obj.get_configuration_categories)\n        app.router.add_route('POST', '/fledge/service/category', obj.create_configuration_category)\n        app.router.add_route('GET', '/fledge/service/category/{category_name}', obj.get_configuration_category)\n        app.router.add_route('DELETE', '/fledge/service/category/{category_name}', obj.delete_configuration_category)\n        app.router.add_route('GET', '/fledge/service/category/{category_name}/children', obj.get_child_category)\n        app.router.add_route('POST', '/fledge/service/category/{category_name}/children', obj.create_child_category)\n        app.router.add_route('GET', '/fledge/service/category/{category_name}/{config_item}',\n                             obj.get_configuration_item)\n        app.router.add_route('PUT', '/fledge/service/category/{category_name}/{config_item}',\n                             obj.update_configuration_item)\n        
app.router.add_route('DELETE', '/fledge/service/category/{category_name}/{config_item}/value',\n                             obj.delete_configuration_item)\n\n        # Service Registration\n        app.router.add_route('POST', '/fledge/service', obj.register)\n        app.router.add_route('DELETE', '/fledge/service/{service_id}', obj.unregister)\n        app.router.add_route('PUT', '/fledge/service/{service_id}/restart', obj.restart_service)\n        app.router.add_route('GET', '/fledge/service', obj.get_service)\n        app.router.add_route('POST', '/fledge/service/login', obj.service_login)\n\n        # Interest Registration\n        app.router.add_route('POST', '/fledge/interest', obj.register_interest)\n        app.router.add_route('DELETE', '/fledge/interest/{interest_id}', obj.unregister_interest)\n        app.router.add_route('GET', '/fledge/interest', obj.get_interest)\n\n        # Asset Tracker\n        app.router.add_route('GET', '/fledge/track', obj.get_track)\n        app.router.add_route('POST', '/fledge/track', obj.add_track)\n\n        # Audit Log\n        app.router.add_route('POST', '/fledge/audit', obj.add_audit)\n\n        # enable/disable schedule\n        app.router.add_route('PUT', '/fledge/schedule/{schedule_id}/enable', obj.enable_disable_schedule)\n\n        # Internal refresh cache\n        app.router.add_route('PUT', '/fledge/cache', obj.refresh_cache)\n\n        # Service token verification\n        app.router.add_route('POST', '/fledge/service/verify_token', obj.verify_token)\n\n        # Service token refresh\n        app.router.add_route('POST', '/fledge/service/refresh_token', obj.refresh_token)\n\n        app.router.add_route('GET', '/fledge/ACL/{acl_name}', obj.get_control_acl)\n\n        # alerts\n        app.router.add_route('GET', '/fledge/alert/{key}', obj.get_alert)\n        app.router.add_route('POST', '/fledge/alert', obj.add_alert)\n        app.router.add_route('DELETE', '/fledge/alert/{key}', obj.delete_alert)\n\n        
# Proxy API setup for a microservice\n        proxy.setup(app)\n\n        # enable cors support\n        enable_cors(app)\n\n\ndef enable_cors(app):\n    \"\"\" implements Cross Origin Resource Sharing (CORS) support \"\"\"\n    import aiohttp_cors\n\n    # Configure default CORS settings.\n    cors = aiohttp_cors.setup(app, defaults={\n        \"*\": aiohttp_cors.ResourceOptions(\n            allow_methods=[\"GET\", \"POST\", \"PUT\", \"DELETE\"],\n            allow_credentials=True,\n            expose_headers=\"*\",\n            allow_headers=\"*\",\n        )\n    })\n\n    # Configure CORS on all routes.\n    for route in list(app.router.routes()):\n        cors.add(route)\n\n\ndef enable_debugger(app):\n    \"\"\" provides a debug toolbar for server requests \"\"\"\n    import aiohttp_debugtoolbar\n\n    # dev mode only\n    # this will be served at API_SERVER_URL/_debugtoolbar\n    aiohttp_debugtoolbar.setup(app)\n"
  },
  {
    "path": "python/fledge/services/common/service_announcer.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Common FledgeMicroservice Class\"\"\"\n\n\nimport socket\nfrom zeroconf import ServiceInfo, ServiceBrowser, ServiceStateChange, Zeroconf\n\nfrom fledge.common import logger\n\n\n__author__ = \"Mark Riddoch, Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_LOGGER = logger.setup(__name__)\n\n\nclass ServiceAnnouncer:\n    def __init__(self, sname, stype, port, txt):\n        host_name = socket.gethostname()\n        host = self.get_ip()\n        service_name = \"{}.{}\".format(sname, stype)\n        desc_txt = 'Fledge Service'\n        if isinstance(txt, list):\n            try:\n                desc_txt = txt[0]\n            except:\n                pass\n        desc = {'description': desc_txt}\n        \"\"\" Create a service description.\n                type_: fully qualified service type name\n                name: fully qualified service name\n                port: port that the service runs on\n                weight: weight of the service\n                addresses: list of IP addresses as unsigned short, network byte order\n                priority: priority of the service\n                properties: dictionary of properties (or a string holding the\n                            bytes for the text field)\n                server: fully qualified name for service host (defaults to name)\n                host_ttl: ttl used for A/SRV records\n                other_ttl: ttl used for PTR/TXT records\"\"\"\n        info = ServiceInfo(\n            stype,\n            service_name,\n            port,\n            addresses=[socket.inet_aton(host)],\n            properties=desc,\n            server=\"{}.local.\".format(host_name)\n        )\n        zeroconf = Zeroconf()\n        # Refresh zeroconf cache\n        # browser = ServiceBrowser(zeroconf, stype, 
handlers=[self.on_service_state_change])\n        zeroconf.register_service(info, allow_name_change=True)\n\n    def get_ip(self):\n        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n        try:\n            # doesn't even have to be reachable\n            s.connect(('10.255.255.255', 1))\n            ip = s.getsockname()[0]\n        except Exception:\n            ip = '127.0.0.1'\n        finally:\n            s.close()\n        return ip\n\n    def on_service_state_change(self, zeroconf: Zeroconf, service_type: str, name: str, state_change: ServiceStateChange) -> None:\n        if state_change is ServiceStateChange.Added:\n            info = zeroconf.get_service_info(service_type, name)\n"
  },
  {
    "path": "python/fledge/services/common/utils.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Common Utilities\"\"\"\n\nimport aiohttp\nimport asyncio\nimport logging\nfrom fledge.common import logger\nfrom fledge.common.utils import async_sleep\n\n__author__ = \"Amarendra Kumar Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n_logger = logger.setup(__name__, level=logging.INFO)\n\n_MAX_ATTEMPTS = 15\n\"\"\"Number of max attempts for finding a heartbeat of service\"\"\"\n\n\nasync def ping_service(service, loop=None):\n    attempt_count = 1\n    \"\"\"Number of current attempt to ping url\"\"\"\n\n    loop = asyncio.get_event_loop() if loop is None else loop\n    url_ping = \"{}://{}:{}/fledge/service/ping\".format(service._protocol,\n                                                        service._address,\n                                                        service._management_port)\n    async with aiohttp.ClientSession(loop=loop) as session:\n        while attempt_count < _MAX_ATTEMPTS + 1:\n            try:\n                async with session.get(url_ping) as resp:\n                    res = await resp.json()\n                    if res[\"uptime\"] is not None:\n                        break\n            except Exception as ex:\n                attempt_count += 1\n                await async_sleep(1.5)\n        if attempt_count <= _MAX_ATTEMPTS:\n            _logger.debug('Ping received for Service %s id %s at url %s', service._name, service._id, url_ping)\n            return True\n    _logger.error('Ping not received for Service %s id %s at url %s attempt_count %s', service._name, service._id,\n                       url_ping, attempt_count)\n    return False\n\n\nasync def shutdown_service(service, loop=None):\n    loop = asyncio.get_event_loop() if loop is None else loop\n\n    try:\n        _logger.info(\"Shutting down the %s service %s ...\", 
service._type, service._name)\n        url_shutdown = \"{}://{}:{}/fledge/service/shutdown\".format(service._protocol, service._address,\n                                                                    service._management_port)\n        async with aiohttp.ClientSession(loop=loop) as session:\n            async with session.post(url_shutdown) as resp:\n                status_code = resp.status\n                text = await resp.text()\n                if status_code != 200:\n                    raise Exception(text)\n    except Exception as ex:\n        _logger.exception('Failed to shutdown {} service: {}'.format(service._name, str(ex)))\n        return False\n    else:\n        _logger.info('Service %s, id %s at url %s successfully shutdown', service._name, service._id, url_shutdown)\n        return True\n"
  },
  {
    "path": "python/fledge/services/core/README.rst",
    "content": "*************************\nFledge Core Microservice\n*************************\n\nThis directory contians the elements of the core microservice within\nFledge. The core is the first microservice started and contians services\nused by other microservices in Fledge, the REST API to the external\nworld and the orchestrtion and central management fucntions of Fledge.\n\nCode in this directory and sub-directories of this directory are used\nexclusively in the core and not shaed with any other microservices.\n\nStarting the Service\n====================\n\nUse ``python -m fledge.services.core`` to start the microservice as\na regular process. Alternatively, use the *fledge* script in the \n*scripts* development directory and in the *bin* deployment directory\nto start the core. The command is ``fledge start``.\n\n"
  },
  {
    "path": "python/fledge/services/core/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/core/__main__.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Core server starter\"\"\"\n\nimport sys\nfrom fledge.services.core.server import Server\n\n__author__ = \"Terris Linenbach\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nis_safe_mode = True if sys.argv[1] == 'safe-mode' else False\nServer().start(is_safe_mode)\n"
  },
  {
    "path": "python/fledge/services/core/api/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/core/api/alerts.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nfrom aiohttp import web\n\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.services.core import server\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2024, Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    ----------------------------------------------------------------\n    | GET    DELETE        | /fledge/alert                         |\n    | DELETE               | /fledge/alert/{key}                   |\n    ----------------------------------------------------------------\n\"\"\"\n_LOGGER = FLCoreLogger().get_logger(__name__)\n\ndef setup(app):\n    app.router.add_route('GET', '/fledge/alert', get_all)\n    app.router.add_route('DELETE', '/fledge/alert', delete)\n    app.router.add_route('DELETE', '/fledge/alert/{key}', delete)\n\n\n\nasync def get_all(request: web.Request) -> web.Response:\n    \"\"\" GET list of alerts\n\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/alert\n    \"\"\"\n    try:\n        alerts = await server.Server._alert_manager.get_all()\n    except Exception as ex:\n        msg = str(ex)\n        _LOGGER.error(ex, \"Failed to get alerts.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({\"alerts\": alerts})\n\nasync def delete(request: web.Request) -> web.Response:\n    \"\"\" DELETE all alerts\n\n    :Example:\n        curl -sX DELETE http://localhost:8081/fledge/alert\n        curl -sX DELETE http://localhost:8081/fledge/alert/{key}\n    \"\"\"\n    key = request.match_info.get('key', None)\n    try:\n        if key:\n            response = await server.Server._alert_manager.delete(key=key)\n        else:\n            response = await server.Server._alert_manager.delete()\n    except KeyError:\n        msg = '{} 
alert not found.'.format(key)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _LOGGER.error(ex, \"Failed to delete alerts.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({\"message\": response})"
  },
  {
    "path": "python/fledge/services/core/api/asset_tracker.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\nimport json\n\nfrom aiohttp import web\nimport urllib.parse\n\nfrom fledge.common import utils as common_utils\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.services.core import connect\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    -----------------------------------------------------------------------------------------\n    | GET                |    /fledge/track                                                 |\n    | PUT                |    /fledge/track/service/{service}/asset/{asset}/event/{event}   |\n    -----------------------------------------------------------------------------------------\n\"\"\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nasync def get_asset_tracker_events(request: web.Request) -> web.Response:\n    \"\"\"\n    Args:\n        request:\n\n    Returns:\n            asset track records\n\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/track\n            curl -sX GET http://localhost:8081/fledge/track?asset=XXX\n            curl -sX GET http://localhost:8081/fledge/track?event=XXX\n            curl -sX GET http://localhost:8081/fledge/track?service=XXX\n            curl -sX GET http://localhost:8081/fledge/track?deprecated=true\n            curl -sX GET http://localhost:8081/fledge/track?event=XXX&asset=XXX&service=XXX\n    \"\"\"\n    payload = PayloadBuilder().SELECT(\"asset\", \"event\", \"service\", \"fledge\", \"plugin\", \"ts\", \"deprecated_ts\", \"data\") \\\n        .ALIAS(\"return\", (\"ts\", 'timestamp')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD 
HH24:MI:SS.MS\")) \\\n        .ALIAS(\"return\", (\"deprecated_ts\", 'deprecatedTimestamp')) \\\n        .WHERE(['1', '=', 1])\n    if 'asset' in request.query and request.query['asset'] != '':\n        asset = urllib.parse.unquote(request.query['asset'])\n        payload.AND_WHERE(['asset', '=', asset])\n    if 'event' in request.query and request.query['event'] != '':\n        event = request.query['event']\n        payload.AND_WHERE(['event', '=', event])\n    if 'service' in request.query and request.query['service'] != '':\n        service = urllib.parse.unquote(request.query['service'])\n        payload.AND_WHERE(['service', '=', service])\n    if 'deprecated' in request.query and request.query['deprecated'] != '':\n        deprecated = request.query['deprecated'].strip().lower()\n        if deprecated == \"true\":\n            payload.AND_WHERE(['deprecated_ts', \"notnull\"])\n\n    storage_client = connect.get_storage_async()\n    payload = PayloadBuilder(payload.chain_payload())\n    try:\n        result = await storage_client.query_tbl_with_payload('asset_tracker', payload.payload())\n        response = result['rows']\n    except KeyError:\n        msg = result['message']\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get asset tracker events.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'track': response})\n\n\nasync def deprecate_asset_track_entry(request: web.Request) -> web.Response:\n    \"\"\"\n    Args:\n        request:\n\n    Returns:\n            message\n\n    :Example:\n            curl -sX PUT http://localhost:8081/fledge/track/service/XXX/asset/XXX/event/XXXX\n    \"\"\"\n    svc_name = request.match_info.get('service', None)\n    asset_name = request.match_info.get('asset', None)\n    event_name = 
request.match_info.get('event', None)\n    try:\n        storage_client = connect.get_storage_async()\n        # TODO: FOGL-6749 Once rows affected with 0 case handled at Storage side then we can remove SELECT call\n        select_payload = PayloadBuilder().SELECT(\"deprecated_ts\").WHERE(\n            ['service', '=', svc_name]).AND_WHERE(['asset', '=', asset_name]).AND_WHERE(\n            ['event', '=', event_name]).payload()\n        get_result = await storage_client.query_tbl_with_payload('asset_tracker', select_payload)\n        if 'rows' in get_result:\n            response = get_result['rows']\n            if response:\n                if response[0]['deprecated_ts'] == \"\":\n                    # Update deprecated_ts column entry\n                    current_time = common_utils.local_timestamp()\n                    if event_name in ('Ingest', 'store'):\n                        audit_event_name = \"Ingest & store\"\n                        and_where_val = ['event', 'in', [\"Ingest\", \"store\"]]\n                    else:\n                        audit_event_name = event_name\n                        and_where_val = ['event', '=', event_name]\n                    update_payload = PayloadBuilder().SET(deprecated_ts=current_time).WHERE(\n                        ['service', '=', svc_name]).AND_WHERE(['asset', '=', asset_name]).AND_WHERE(\n                        and_where_val).AND_WHERE(['deprecated_ts', 'isnull']).payload()\n                    update_result = await storage_client.update_tbl(\"asset_tracker\", update_payload)\n                    if 'response' in update_result:\n                        response = update_result['response']\n                        if response != 'updated':\n                            raise KeyError('Update failure in asset tracker for service: {} asset: {} event: {}'.format(\n                                svc_name, asset_name, event_name))\n                        try:\n                            audit = 
AuditLogger(storage_client)\n                            audit_details = {'asset': asset_name, 'service': svc_name, 'event': audit_event_name}\n                            await audit.information('ASTDP', audit_details)\n                        except Exception:\n                            _logger.warning(\"Failed to log the audit entry for {} deprecation.\".format(asset_name))\n                    else:\n                        raise StorageServerError\n                else:\n                    raise KeyError('{} asset record already deprecated.'.format(asset_name))\n            else:\n                raise ValueError('No record found in asset tracker for given service: {} asset: {} event: {}'.format(\n                    svc_name, asset_name, event_name))\n        else:\n            raise StorageServerError\n    except StorageServerError as err:\n        msg = err.error\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": \"Storage error: {}\".format(msg)}))\n    except KeyError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Deprecate {} asset entry failed for {} service with {} event.\".format(\n            asset_name, svc_name, event_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        msg = \"For {} event, {} asset record entry has been deprecated.\".format(event_name, asset_name)\n        _logger.info(msg)\n        return web.json_response({'success': msg})\n\n\nasync def get_datapoint_usage(request: web.Request) -> web.Response:\n    \"\"\"\n    Args:\n        request: a GET request to the /fledge/track/storage/assets endpoint.\n\n    Returns:\n            
A JSON response. An example would be.\n            {\n              \"count\" : 5,\n               \"assets\" : [\n                            {\n                              \"asset\" : \"sinusoid\",\n                              \"datapoints\" : [ \"sinusoid\" ]\n                            },\n                            {\n                               \"asset\" : \"motor\",\n                               \"datapoints\" : [ \"rpm\", \"current\", \"voltage\", \"temperature\" ]\n                            }\n                          ]\n            }\n\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/track/storage/assets\n\n    \"\"\"\n\n    response = {\"count\": 0,\n                \"assets\": []\n                }\n    try:\n        storage_client = connect.get_storage_async()\n        q_payload = PayloadBuilder().SELECT(). \\\n            DISTINCT([\"asset\", \"data\"]). \\\n            WHERE([\"event\", \"=\", \"store\"]). \\\n            payload()\n\n        results = await storage_client.query_tbl_with_payload('asset_tracker', q_payload)\n\n        total_datapoints = 0\n        asset_info_list = []\n        for row in results[\"rows\"]:\n            # The no of datapoints for this asset.\n            asset_name = row[\"asset\"]\n            # Construct a dict that contains information about a single asset.\n            current_count = int(row[\"data\"][\"count\"])\n            dict_to_add = {\"asset\": row[\"asset\"], \"datapoints\": row[\"data\"][\"datapoints\"], 'count': current_count}\n            # appending information of single asset to the asset information list.\n\n            # now find that this is a new asset or not.\n            # Initialize index_of_found_asset by -1 . 
-1 means not found.\n            index_of_found_asset = -1\n            for (idx, asset_info) in enumerate(asset_info_list):\n                if 'asset' in asset_info and asset_info['asset'] == asset_name:\n                    index_of_found_asset = idx\n\n            if index_of_found_asset != -1:\n                # If current data point count exceed the maximum data point then replace\n                # this new data point information else do nothing\n                if current_count >= asset_info_list[index_of_found_asset]['count']:\n                    asset_info_list.pop(index_of_found_asset)\n                    asset_info_list.append(dict_to_add)\n            else:\n                # This is a new asset simply add to list.\n                asset_info_list.append(dict_to_add)\n\n        # finally calculate the total data points\n        for asset_info in asset_info_list:\n            total_datapoints += int(asset_info['count'])\n\n        # Remove count for each asset_info\n        for ast_info in asset_info_list:\n            ast_info.pop('count')\n\n        # update the required information.\n        response['assets'] = asset_info_list\n        response[\"count\"] = total_datapoints\n\n    except KeyError as msg:\n        raise web.HTTPBadRequest(reason=str(msg), body=json.dumps({\"message\": str(msg)}))\n    except TypeError as ex:\n        raise web.HTTPBadRequest(reason=str(ex), body=json.dumps({\"message\": str(ex)}))\n    except StorageServerError as ex:\n        err_response = ex.error\n        raise web.HTTPBadRequest(reason=err_response, body=json.dumps({\"message\": err_response}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get asset tracker store datapoints.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(response)\n"
  },
  {
    "path": "python/fledge/services/core/api/audit.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport copy\nfrom datetime import datetime\nfrom enum import IntEnum\nfrom aiohttp import web\nimport json\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.services.core import connect\n\n__author__ = \"Amarendra K. Sinha, Ashish Jabble, Massimiliano Pinto\"\n__copyright__ = \"Copyright (c) 2017-2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n__DEFAULT_LIMIT = 20\n__DEFAULT_OFFSET = 0\n\n_help = \"\"\"\n    -------------------------------------------------------------------------------\n    | GET POST        | /fledge/audit                                            |\n    | GET             | /fledge/audit/logcode                                    |\n    | GET             | /fledge/audit/severity                                   |\n    -------------------------------------------------------------------------------\n\"\"\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nclass Severity(IntEnum):\n    \"\"\" Enumeration for log.severity \"\"\"\n    # TODO: FOGL-1100, no info for 3\n    SUCCESS = 0\n    FAILURE = 1\n    WARNING = 2\n    INFORMATION = 4\n\n####################################\n#  Audit Trail\n####################################\n\n\nasync def create_audit_entry(request):\n    \"\"\" Creates a new Audit entry\n\n    Args:\n        request: POST /fledge/audit\n\n        {\n          \"source\"   : \"LMTR\", # 5 char max\n          \"severity\" : \"WARNING\",\n          \"details\"  : {\n                        \"message\" : \"Engine oil pressure low\"\n                      }\n        }\n\n    :Example:\n\n        curl -X POST -d 
'{\"source\":\"LMTR\",\"severity\":\"WARNING\",\"details\":{\"message\":\"Engine oil pressure low\"}}\n        http://localhost:8081/fledge/audit\n\n    Returns:\n        json object representation of created audit entry\n\n        {\n          \"timestamp\" : \"2017-06-21T09:39:51.8949395\",\n          \"source\"    : \"LMTR\",\n          \"severity\"  : \"WARNING\",\n          \"details\"   : {\n                         \"message\" : \"Engine oil pressure low\"\n                        }\n        }\n    \"\"\"\n    return_error = False\n    err_msg = \"Missing required parameter\"\n\n    payload = await request.json()\n\n    severity = payload.get(\"severity\")\n    source = payload.get(\"source\")\n    details = payload.get(\"details\")\n\n    if severity is None or severity == \"\":\n        err_msg += \" severity\"\n        return_error = True\n    if source is None or source == \"\":\n        err_msg += \" source\"\n        return_error = True\n    if details is None:\n        err_msg += \" details\"\n        return_error = True\n\n    if return_error:\n        raise web.HTTPBadRequest(reason=err_msg)\n\n    if not isinstance(details, dict):\n        raise web.HTTPBadRequest(reason=\"Details should be a valid json object\")\n\n    try:\n        audit = AuditLogger()\n        await getattr(audit, str(severity).lower())(source, details)\n\n        # Set timestamp for return message\n        timestamp = datetime.now().strftime(\"%Y-%m-%d %H:%M:%S.%f\")[:-3]\n\n        message = {'timestamp': str(timestamp),\n                   'source': source,\n                   'severity': severity,\n                   'details': details\n                   }\n    except AttributeError as e:\n        # Return error for wrong severity method\n        err_msg = \"severity type {} is not supported\".format(severity)\n        raise web.HTTPNotFound(reason=err_msg, body=json.dumps({\"message\": err_msg}))\n    except StorageServerError as ex:\n        if int(ex.code) in range(400, 
500):\n            err_msg = 'Audit entry cannot be logged. {}'.format(ex.error['message'])\n            raise web.HTTPBadRequest(reason=err_msg, body=json.dumps({\"message\": err_msg}))\n        else:\n            err_msg = 'Failed to log audit entry. {}'.format(ex.error['message'])\n            _logger.warning(err_msg)\n            raise web.HTTPInternalServerError(reason=err_msg, body=json.dumps({\"message\": err_msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to log audit entry.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(message)\n\n\nasync def get_audit_entries(request):\n    \"\"\" Returns a list of audit trail entries sorted with most recent first and total count\n        (including the criteria search if applied)\n\n    :Example:\n\n        curl -X GET http://localhost:8081/fledge/audit\n\n        curl -X GET http://localhost:8081/fledge/audit?limit=5\n\n        curl -X GET \"http://localhost:8081/fledge/audit?limit=5&skip=3\"\n\n        curl -X GET http://localhost:8081/fledge/audit?skip=2\n\n        curl -X GET http://localhost:8081/fledge/audit?source=PURGE\n\n        curl -X GET http://localhost:8081/fledge/audit?source=NTFSD,NTFSN,NTFAD,NTFST,NTFDL,NTFCL\n        curl -X GET http://localhost:8081/fledge/audit?severity=FAILURE\n\n        curl -X GET \"http://localhost:8081/fledge/audit?source=LOGGN&severity=INFORMATION&limit=10\"\n\n        curl -X GET \"http://localhost:8081/fledge/audit?source=CONAD&since=2022-10-10%2009:31:32\"\n    \"\"\"\n\n    limit = __DEFAULT_LIMIT\n    if 'limit' in request.query and request.query['limit'] != '':\n        try:\n            limit = int(request.query['limit'])\n            if limit < 0:\n                raise ValueError\n        except ValueError:\n            raise web.HTTPBadRequest(reason=\"Limit must be a positive integer\")\n\n    offset = __DEFAULT_OFFSET\n    if 
'skip' in request.query and request.query['skip'] != '':\n        try:\n            offset = int(request.query['skip'])\n            if offset < 0:\n                raise ValueError\n        except ValueError:\n            raise web.HTTPBadRequest(reason=\"Skip/Offset must be a positive integer\")\n\n    # If microseconds are required then add .%f to __DATE_FORMAT and remove the split from the datetime string conversion\n    __DATE_FORMAT = \"%Y-%m-%d %H:%M:%S\"\n    if 'since' in request.query and request.query['since'] != '':\n        try:\n            datetime.strptime(request.query['since'], __DATE_FORMAT)\n        except ValueError:\n            msg = \"Incorrect date format, should be {}\".format(__DATE_FORMAT)\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n\n    source = None\n    source_list = []\n    if 'source' in request.query and request.query['source'] != '':\n        try:\n            source = request.query.get('source')\n            source_list = source.split(',')\n            # SELECT * FROM log_codes\n            storage_client = connect.get_storage_async()\n            result = await storage_client.query_tbl(\"log_codes\")\n            log_codes = [key['code'] for key in result['rows']]\n            for code in source_list:\n                if code not in log_codes:\n                    raise ValueError(code)\n        except ValueError as e:\n            raise web.HTTPBadRequest(reason=\"{} is not a valid source\".format(str(e)))\n\n    severity = None\n    if 'severity' in request.query and request.query['severity'] != '':\n        try:\n            severity = Severity[request.query['severity'].upper()].value\n        except KeyError as ex:\n            raise web.HTTPBadRequest(reason=\"{} is not a valid severity\".format(ex))\n\n    try:\n        # HACK: Build the WHERE clause incrementally so that adding more filters\n        # in future does not cause an exponential explosion of if statements\n        payload = PayloadBuilder().SELECT(\"code\", 
\"level\", \"log\", \"ts\")\\\n            .ALIAS(\"return\", (\"ts\", 'timestamp')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\"))\\\n            .WHERE(['1', '=', 1])\n\n        if source is not None:\n            if len(source_list) == 1:\n                payload.AND_WHERE(['code', '=', source])\n            else:\n                payload = PayloadBuilder().SELECT(\"code\", \"level\", \"log\", \"ts\") \\\n                    .ALIAS(\"return\", (\"ts\", 'timestamp')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\"))\n                payload.WHERE(['code', 'in', source_list])\n        if severity is not None:\n            payload.AND_WHERE(['level', '=', severity])\n\n        _and_where_payload = payload.chain_payload()\n        # SELECT *, count(*) OVER() FROM log - No support yet from storage layer\n        # TODO: FOGL-740, FOGL-663 once ^^ resolved we should replace below storage call for getting total rows\n        _and_where_copy = copy.deepcopy(_and_where_payload)\n        total_count_payload = PayloadBuilder(_and_where_copy).AGGREGATE([\"count\", \"*\"])\\\n            .ALIAS(\"aggregate\", (\"*\", \"count\", \"count\")).payload()\n\n        # SELECT count (*) FROM log <_and_where_payload>\n        storage_client = connect.get_storage_async()\n        result = await storage_client.query_tbl_with_payload('log', total_count_payload)\n        total_count = result['rows'][0]['count']\n\n        payload = PayloadBuilder(_and_where_payload)\n        payload.ORDER_BY(['ts', 'desc'])\n        payload.LIMIT(limit)\n\n        if offset > 0:\n            payload.OFFSET(offset)\n\n        # SELECT * FROM log <payload.payload()>\n        results = await storage_client.query_tbl_with_payload('log', payload.payload())\n        rows = results['rows']\n        # If 'since' datetime string param is passed then filter the records internally from the actual storage result\n        if 'since' in request.query:\n            since_dt = 
datetime.strptime(request.query['since'].split('.', 1)[0], __DATE_FORMAT)\n            temp_rows = []\n            for row in results[\"rows\"]:\n                convert_dt = datetime.strptime(row['timestamp'].split('.', 1)[0], __DATE_FORMAT)\n                if since_dt <= convert_dt:\n                    temp_rows.append(row)\n            rows = temp_rows\n            total_count = len(rows)\n        res = []\n        for row in rows:\n            r = dict()\n            r[\"details\"] = row[\"log\"]\n            severity_level = int(row[\"level\"])\n            r[\"severity\"] = Severity(severity_level).name if severity_level in (0, 1, 2, 4) else \"UNKNOWN\"\n            r[\"source\"] = row[\"code\"]\n            r[\"timestamp\"] = row[\"timestamp\"]\n            res.append(r)\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get Audit log entry.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'audit': res, 'totalCount': total_count})\n\n\nasync def get_audit_log_codes(request):\n    \"\"\"\n    Args:\n        request:\n\n    Returns:\n           an array of log codes with description\n\n    :Example:\n\n        curl -X GET http://localhost:8081/fledge/audit/logcode\n    \"\"\"\n    storage_client = connect.get_storage_async()\n    result = await storage_client.query_tbl('log_codes')\n\n    return web.json_response({'logCode': result['rows']})\n\n\nasync def get_audit_log_severity(request):\n    \"\"\"\n    Args:\n        request:\n\n    Returns:\n            an array of audit severity enumeration key index values\n\n    :Example:\n\n        curl -X GET http://localhost:8081/fledge/audit/severity\n    \"\"\"\n    results = []\n    for _severity in Severity:\n        data = {'index': _severity.value, 'name': _severity.name}\n        results.append(data)\n\n    return web.json_response({\"logSeverity\": results})\n"
  },
  {
    "path": "python/fledge/services/core/api/auth.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" auth routes \"\"\"\nimport asyncio\nimport os\nimport subprocess\nimport datetime\nimport re\nimport json\nfrom collections import OrderedDict\nfrom pathlib import Path\nimport jwt\nfrom aiohttp import web\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.common import _FLEDGE_ROOT\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.web.middleware import has_permission\nfrom fledge.common.web.ssl_wrapper import SSLVerifier\nfrom fledge.services.core.api import utils as apiutils\nfrom fledge.services.core.user_model import User\n\n\n__author__ = \"Praveen Garg, Ashish Jabble, Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n_help = \"\"\"\n    ------------------------------------------------------------------------------------\n    | GET                        | /fledge/user                                       |\n    | PUT                        | /fledge/user                                       |\n    | PUT                        | /fledge/user/{user_id}/password                    |     \n\n    | GET                        | /fledge/user/role                                  |\n    \n    | POST                       | /fledge/login                                      |\n    | PUT                        | /fledge/{user_id}/logout                           |\n    \n    | GET                        | /fledge/auth/ott                                   |\n\n    | POST                       | /fledge/admin/user                                 |\n    | PUT                        | /fledge/admin/{user_id}                            |\n    | PUT                        | /fledge/admin/{user_id}/enable                     |\n    | PUT                        | 
/fledge/admin/{user_id}/reset                      |\n    | DELETE                     | /fledge/admin/{user_id}/delete                     |\n    | POST                       | /fledge/admin/{user_id}/authcertificate            |\n    ------------------------------------------------------------------------------------\n\"\"\"\n\nJWT_SECRET = 'f0gl@mp'\nJWT_ALGORITHM = 'HS512'\nJWT_EXP_DELTA_SECONDS = 30*60  # 30 minutes\n\nMIN_USERNAME_LENGTH = 4\nUSERNAME_REGEX_PATTERN = '^[a-zA-Z0-9_.-]+$'\nFORBIDDEN_MSG = 'Resource you were trying to reach is absolutely forbidden for some reason'\nDATE_FORMAT = \"%Y-%m-%d %H:%M:%S.%f\"\n\n# TODO: remove me, use from roles table\nADMIN_ROLE_ID = 1\nDEFAULT_ROLE_ID = 2\nOTT_TOKEN_EXPIRY_MINUTES = 5\n\n\nclass OTT:\n    \"\"\"\n        Manage the One Time Token assigned to log in for single time and within OTT_TOKEN_EXPIRY_MINUTES\n    \"\"\"\n\n    OTT_MAP = {}\n\n    def __init__(self):\n        pass\n\n\ndef __remove_ott_for_user(user_id):\n    \"\"\"Helper function that removes given user_id from OTT_MAP if the user exists in the map.\"\"\"\n    try:\n        _user_id = int(user_id)\n    except ValueError:\n        return\n    for k, v in OTT.OTT_MAP.items():\n        if v[0] == _user_id:\n            OTT.OTT_MAP.pop(k)\n            break\n\n\ndef __remove_ott_for_token(given_token):\n    \"\"\"Helper function that removes given token from OTT_MAP if that token in the map.\"\"\"\n    for k, v in OTT.OTT_MAP.items():\n        if v[1] == given_token:\n            OTT.OTT_MAP.pop(k)\n            break\n\n\nasync def login(request):\n    \"\"\" Validate user with its username and password\n\n    :Example:\n        curl -d '{\"username\": \"user\", \"password\": \"fledge\"}' -X POST http://localhost:8081/fledge/login\n        curl -T data/etc/certs/user.cert -X POST http://localhost:8081/fledge/login --insecure (--insecure or -k)\n        curl -d '{\"ott\": \"ott_token\"}' -skX POST http://localhost:8081/fledge/login\n    \"\"\"\n  
  auth_method = request.auth_method if hasattr(request, 'auth_method') else \"any\"\n    data = await request.text()\n    _data = {}\n    try:\n        # Check ott inside request payload.\n        _data = json.loads(data)\n        if 'ott' in _data:\n            auth_method = \"OTT\"  # This is for local reference and not a configuration value\n    except json.JSONDecodeError:\n        if auth_method == 'password':\n            raise web.HTTPBadRequest(reason=\"Use valid username & password to log in.\")\n        # For auth_method 'any' or 'certificate', we might have certificate data\n        pass\n\n    # Check for appropriate payload per auth_method\n    if auth_method == 'certificate':\n        if not data.startswith(\"-----BEGIN CERTIFICATE-----\"):\n            raise web.HTTPBadRequest(reason=\"Use a valid certificate to login.\")\n\n    if data.startswith(\"-----BEGIN CERTIFICATE-----\"):\n        host = '0.0.0.0'  # default when the peer address is unavailable\n        peername = request.transport.get_extra_info('peername')\n        if peername is not None:\n            host, _ = peername\n\n        try:\n            await User.Objects.verify_certificate(data)\n            username = SSLVerifier.get_subject()['commonName'].lower()\n            uid, token, is_admin = await User.Objects.certificate_login(username, host)\n            # set the user to request object\n            request.user = await User.Objects.get(uid=uid)\n            # set the token to request\n            request.token = token\n        except (SSLVerifier.VerificationError, User.DoesNotExist, OSError):\n            raise web.HTTPUnauthorized(reason=\"Authentication failed: invalid or untrusted certificate.\")\n        except ValueError as ex:\n            raise web.HTTPUnauthorized(reason=\"Authentication failed: {}\".format(str(ex)))\n    elif auth_method == \"OTT\":\n        _ott = _data.get('ott')\n        if _ott not in OTT.OTT_MAP:\n            raise web.HTTPUnauthorized(reason=\"Authentication failed. 
Either the given token has expired or it has already been used.")\n\n        time_now = datetime.datetime.now()\n        user_id, orig_token, is_admin, initial_time = OTT.OTT_MAP[_ott]\n\n        # Remove the ott from the map once used or expired.\n        OTT.OTT_MAP.pop(_ott, None)\n        if time_now - initial_time <= datetime.timedelta(minutes=OTT_TOKEN_EXPIRY_MINUTES):\n            return web.json_response(\n                {"message": "Logged in successfully.", "uid": user_id, "token": orig_token, "admin": is_admin})\n        else:\n            raise web.HTTPUnauthorized(reason="Authentication failed! The given token has expired.")\n    else:\n        # Ensure we have valid JSON data\n        if not _data:\n            raise web.HTTPBadRequest(reason="Invalid or untrusted certificate or missing credentials in payload.")\n\n        username = _data.get('username')\n        password = _data.get('password')\n\n        if not username or not password:\n            raise web.HTTPBadRequest(reason="Username or password is missing.")\n\n        username = str(username).lower()\n\n        peername = request.transport.get_extra_info('peername')\n        host = '0.0.0.0'\n        if peername is not None:\n            host, _ = peername\n        try:\n            uid, token, is_admin = await User.Objects.login(username, password, host)\n        except User.PasswordNotSetError as err:\n            msg = str(err)\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({"message": msg}))\n        except (User.DoesNotExist, User.PasswordDoesNotMatch, ValueError) as err:\n            msg = str(err)\n            raise web.HTTPNotFound(reason=msg, body=json.dumps({"message": msg}))\n        except User.PasswordExpired as ex:\n            # Delete all active tokens for this user\n            await User.Objects.delete_user_tokens(str(ex))\n            msg = 'Your password has expired. 
Please set your password again.'\n            _logger.warning(msg)\n            raise web.HTTPUnauthorized(reason=msg)\n        except Exception as exc:\n            msg = str(exc)\n            _logger.error(exc, "Failed to login.")\n            raise web.HTTPInternalServerError(reason=msg, body=json.dumps({"message": msg}))\n\n    _logger.info("User with username:<{}> logged in successfully.".format(username))\n    return web.json_response({"message": "Logged in successfully.", "uid": uid, "token": token, "admin": is_admin})\n\n\nasync def get_ott(request):\n    """ Get a one-time use token (OTT) for login.\n\n        :Example:\n            curl -H "authorization: <token>" -X GET http://localhost:8081/fledge/auth/ott\n    """\n    if request.is_auth_optional:\n        _logger.warning(FORBIDDEN_MSG)\n        raise web.HTTPForbidden\n\n    try:\n        # Fetch the user_id and role for the given token.\n        original_token = request.token\n        from fledge.services.core import connect\n        from fledge.common.storage_client.payload_builder import PayloadBuilder\n        payload = PayloadBuilder().SELECT("user_id").WHERE(['token', '=', original_token]).payload()\n        storage_client = connect.get_storage_async()\n        result = await storage_client.query_tbl_with_payload("user_logins", payload)\n        if len(result['rows']) == 0:\n            message = "The request token {} does not have a valid user associated with it.".format(original_token)\n            raise web.HTTPBadRequest(reason=message)\n        user_id = result['rows'][0]['user_id']\n        payload_role = PayloadBuilder().SELECT("role_id").WHERE(['id', '=', user_id]).payload()\n        result_role = await storage_client.query_tbl_with_payload("users", payload_role)\n        if len(result_role['rows']) == 0:\n            message = "The request token {} does not have a valid role associated with 
it.\".format(original_token)\n            raise web.HTTPBadRequest(reason=message)\n        # checking if the user is an admin.\n        is_admin = False\n        if int(result_role['rows'][0]['role_id']) == 1:\n            is_admin = True\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"OTT token failed.\")\n        raise web.HTTPBadRequest(reason=\"The request failed due to {}\".format(msg))\n    else:\n        now_time = datetime.datetime.now()\n        p = {'uid': user_id, 'exp': now_time}\n        ott_token = jwt.encode(p, JWT_SECRET, JWT_ALGORITHM)\n        already_existed_token = False\n        key_to_remove = None\n        for k, v in OTT.OTT_MAP.items():\n            if v[1] == original_token:\n                already_existed_token = True\n                key_to_remove = k\n\n        if already_existed_token:\n            OTT.OTT_MAP.pop(key_to_remove, None)\n\n        ott_info = (user_id, original_token, is_admin, now_time)\n        OTT.OTT_MAP[ott_token] = ott_info\n        return web.json_response({\"ott\": ott_token})\n\n\nasync def logout_me(request):\n    \"\"\" log out user\n\n    :Example:\n        curl -H \"authorization: <token>\" -X PUT http://localhost:8081/fledge/logout\n\n    \"\"\"\n\n    if request.is_auth_optional:\n        # no action needed\n        return web.json_response({\"logout\": True})\n\n    result = await User.Objects.delete_token(request.token)\n\n    if not result['rows_affected']:\n        raise web.HTTPNotFound()\n\n    __remove_ott_for_token(request.token)\n    _logger.info(\"User has been logged out successfully.\")\n    return web.json_response({\"logout\": True})\n\n\nasync def logout(request):\n    \"\"\" log out user's all active sessions\n\n    :Example:\n        curl -H \"authorization: <token>\" -X PUT http://localhost:8081/fledge/{user_id}/logout\n\n    \"\"\"\n    if request.is_auth_optional:\n        _logger.warning(FORBIDDEN_MSG)\n        raise web.HTTPForbidden\n\n    user_id = 
request.match_info.get('user_id')\n\n    if int(request.user[\"role_id\"]) == ADMIN_ROLE_ID or int(request.user[\"id\"]) == int(user_id):\n        result = await User.Objects.delete_user_tokens(user_id)\n        if not result['rows_affected']:\n            raise web.HTTPNotFound()\n        # Remove OTT token for this user if there.\n        __remove_ott_for_user(user_id)\n        _logger.info(\"User with ID:<{}> has been logged out successfully.\".format(int(user_id)))\n    else:\n        # requester is not an admin but trying to take action for another user\n        raise web.HTTPUnauthorized(reason=\"admin privileges are required to logout other user\")\n\n    return web.json_response({\"logout\": True})\n\n\n@has_permission(\"admin\")\nasync def get_roles(request):\n    \"\"\" get roles\n\n    :Example:\n        curl -H \"authorization: <token>\" -X GET http://localhost:8081/fledge/user/role\n    \"\"\"\n    result = await User.Objects.get_roles()\n    return web.json_response({'roles': result})\n\n\nasync def get_user(request):\n    \"\"\" get user info\n\n    :Example:\n        curl -H \"authorization: <token>\" -X GET http://localhost:8081/fledge/user\n        curl -H \"authorization: <token>\" -X GET http://localhost:8081/fledge/user?id=2\n        curl -H \"authorization: <token>\" -X GET http://localhost:8081/fledge/user?username=admin\n        curl -H \"authorization: <token>\" -X GET \"http://localhost:8081/fledge/user?id=1&username=admin\"\n    \"\"\"\n    user_id = None\n    user_name = None\n    if 'id' in request.query:\n        try:\n            user_id = int(request.query['id'])\n            if user_id <= 0:\n                raise ValueError\n        except ValueError:\n            raise web.HTTPBadRequest(reason=\"Bad user ID\")\n\n    if 'username' in request.query and request.query['username'] != '':\n        user_name = request.query['username'].lower()\n    if user_id or user_name:\n        try:\n            if not request.is_auth_optional:\n   
             if int(request.user[\"role_id\"]) not in [1, 5]:\n                    if ((user_id is not None and int(request.user[\"id\"]) != user_id)\n                            or (user_name is not None and request.user[\"uname\"] != user_name)):\n                        raise web.HTTPForbidden\n            user = await User.Objects.get(user_id, user_name)\n            u = OrderedDict()\n            u['userId'] = user.pop('id')\n            u['userName'] = user.pop('uname')\n            u['roleId'] = user.pop('role_id')\n            u[\"accessMethod\"] = user.pop('access_method')\n            u[\"realName\"] = user.pop('real_name')\n            u[\"description\"] = user.pop('description')\n            result = u\n        except User.DoesNotExist as ex:\n            msg = str(ex)\n            raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        users = await User.Objects.all()\n        res = []\n        for row in users:\n            if row['enabled'] == 't':\n                u = OrderedDict()\n                u[\"userId\"] = row[\"id\"]\n                u[\"userName\"] = row[\"uname\"]\n                u[\"roleId\"] = row[\"role_id\"]\n                u[\"accessMethod\"] = row[\"access_method\"]\n                u[\"realName\"] = row[\"real_name\"]\n                u[\"description\"] = row[\"description\"]\n                if row[\"block_until\"]:\n                    curr_time = datetime.datetime.now(datetime.timezone.utc).strftime(DATE_FORMAT)\n                    block_time = row[\"block_until\"].split('.')[0] # strip time after HH:MM:SS for display\n                    if datetime.datetime.strptime(row[\"block_until\"], DATE_FORMAT) > datetime.datetime.strptime(curr_time, DATE_FORMAT):\n                        u[\"blockUntil\"] = block_time\n                res.append(u)\n        result = {'users': res}\n\n    return web.json_response(result)\n\n\n@has_permission(\"admin\")\nasync def create_user(request):\n    \"\"\" 
create user\n\n    :Example:\n        curl -H \"authorization: <token>\" -X POST -d '{\"username\": \"any1\", \"password\": \"User@123\"}' http://localhost:8081/fledge/admin/user\n        curl -H \"authorization: <token>\" -X POST -d '{\"username\": \"aj.1988\", \"password\": \"User@123\", \"access_method\": \"any\"}' http://localhost:8081/fledge/admin/user\n        curl -H \"authorization: <token>\" -X POST -d '{\"username\": \"aj-1988\", \"password\": \"User@123\", \"access_method\": \"pwd\"}' http://localhost:8081/fledge/admin/user\n        curl -H \"authorization: <token>\" -X POST -d '{\"username\": \"aj_1988\", \"access_method\": \"any\"}' http://localhost:8081/fledge/admin/user\n        curl -H \"authorization: <token>\" -X POST -d '{\"username\": \"aj1988\", \"access_method\": \"cert\"}' http://localhost:8081/fledge/admin/user\n        curl -H \"authorization: <token>\" -X POST -d '{\"username\": \"ajnerd\", \"password\": \"F0gl@mp!\", \"role_id\": 1, \"real_name\": \"AJ\", \"description\": \"Admin user\"}' http://localhost:8081/fledge/admin/user\n        curl -H \"authorization: <token>\" -X POST -d '{\"username\": \"nerd\", \"access_method\": \"cert\", \"real_name\": \"AJ\", \"description\": \"Admin user\"}' http://localhost:8081/fledge/admin/user\n        curl -H \"authorization: <token>\" -X POST -d '{\"username\": \"nerdapp\", \"password\": \"FL)dG3\", \"access_method\": \"pwd\", \"real_name\": \"AJ\", \"description\": \"Admin user\"}' http://localhost:8081/fledge/admin/user\n    \"\"\"\n    if request.is_auth_optional:\n        _logger.warning(FORBIDDEN_MSG)\n        raise web.HTTPForbidden\n    \n    data = await request.json()\n    username = data.get('username', '')\n    password = data.get('password')\n    role_id = data.get('role_id', DEFAULT_ROLE_ID)\n    access_method = data.get('access_method', 'any')\n    real_name = data.get('real_name', '')\n    description = data.get('description', '')\n\n    if not username:\n        msg = \"Username is 
required to create user."\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({"message": msg}))\n\n    if not isinstance(username, str) or not isinstance(access_method, str) or not isinstance(real_name, str) \\\n            or not isinstance(description, str):\n        msg = "Values should be passed as strings."\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({"message": msg}))\n\n    username = username.lower().strip().replace(" ", "")\n    if len(username) < MIN_USERNAME_LENGTH:\n        msg = "Username should be of minimum {} characters.".format(MIN_USERNAME_LENGTH)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({"message": msg}))\n    if not re.match(USERNAME_REGEX_PATTERN, username):\n        msg = "Only alphanumeric characters and the dot, hyphen and underscore specials are allowed in a username."\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({"message": msg}))\n\n    access_method = access_method.lower()\n    if access_method not in ('any', 'cert', 'pwd'):\n        msg = "Invalid access method. Must be 'any', 'cert' or 'pwd'."\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({"message": msg}))\n\n    if access_method == 'pwd' and not password:\n        msg = "Password should not be empty."\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({"message": msg}))\n\n    if access_method != 'cert':\n        if password is not None:\n            error_msg = await validate_password(password)\n            if error_msg:\n                raise web.HTTPBadRequest(reason=error_msg, body=json.dumps({"message": error_msg}))\n    if not (await is_valid_role(role_id)):\n        msg = "Invalid role ID."\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({"message": msg}))\n    users = await User.Objects.all()\n    unames = [u['uname'] for u in users]\n    if username in unames:\n        msg = "Username already exists."\n        _logger.warning(msg)\n        raise web.HTTPConflict(reason=msg, body=json.dumps({"message": msg}))\n    u = 
dict()\n    try:\n        result = await User.Objects.create(username, password, role_id, access_method, real_name, description)\n        if result['rows_affected']:\n            # FIXME: we should not do get again!\n            # we just need inserted user id; insert call should return that\n            user = await User.Objects.get(username=username)\n            u['userId'] = user.pop('id')\n            u['userName'] = user.pop('uname')\n            u['roleId'] = user.pop('role_id')\n            u[\"accessMethod\"] = user.pop('access_method')\n            u[\"realName\"] = user.pop('real_name')\n            u[\"description\"] = user.pop('description')\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to create user.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    msg = \"{} user has been created successfully.\".format(username)\n    _logger.info(msg)\n    return web.json_response({'message': msg, 'user': u})\n\n# FIXME: Need to fix user id dependency in update_me\n\n\nasync def update_me(request):\n    \"\"\" update user profile\n\n    :Example:\n             curl -H \"authorization: <token>\" -X PUT -d '{\"real_name\": \"AJ\"}' http://localhost:8081/fledge/user\n    \"\"\"\n    if request.is_auth_optional:\n        _logger.warning(FORBIDDEN_MSG)\n        raise web.HTTPForbidden\n    data = await request.json()\n    real_name = data.get('real_name', '')\n    if 'real_name' in data:\n        if len(real_name.strip()) == 0:\n            msg = \"Real Name should not be empty.\"\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n        else:\n            from fledge.services.core import connect\n            from fledge.common.storage_client.payload_builder import PayloadBuilder\n            try:\n      
          payload = PayloadBuilder().SELECT("user_id").WHERE(['token', '=', request.token]).payload()\n                storage_client = connect.get_storage_async()\n                result = await storage_client.query_tbl_with_payload("user_logins", payload)\n                if len(result['rows']) == 0:\n                    raise User.DoesNotExist\n                user_id = result['rows'][0]['user_id']\n                payload = PayloadBuilder().SET(real_name=real_name.strip()).WHERE(['id', '=', user_id]).payload()\n                message = "Something went wrong."\n                result = await storage_client.update_tbl("users", payload)\n                if result['response'] == 'updated':\n                    # TODO: FOGL-1226 At the moment only the real name can be updated\n                    message = "Real name has been updated successfully!"\n            except User.DoesNotExist:\n                msg = "User does not exist."\n                raise web.HTTPNotFound(reason=msg, body=json.dumps({"message": msg}))\n            except ValueError as err:\n                msg = str(err)\n                raise web.HTTPBadRequest(reason=msg, body=json.dumps({"message": msg}))\n            except Exception as exc:\n                msg = str(exc)\n                # Do not reference user_id here; it may be unbound if the token lookup failed\n                _logger.error(exc, "Failed to update the user profile.")\n                raise web.HTTPInternalServerError(reason=msg, body=json.dumps({"message": msg}))\n    else:\n        msg = "Nothing to update."\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({"message": msg}))\n    return web.json_response({"message": message})\n\n\n@has_permission("admin")\nasync def update_user(request):\n    """ Update a user's access_method, description or real_name\n        :Example:\n            curl -H "authorization: <token>" -X PUT -d '{"description": "A new user"}' http://localhost:8081/fledge/admin/{user_id}\n            curl -H "authorization: <token>" -X PUT -d 
'{\"real_name\": \"Admin\"}' http://localhost:8081/fledge/admin/{user_id}\n            curl -H \"authorization: <token>\" -X PUT -d '{\"access_method\": \"pwd\"}' http://localhost:8081/fledge/admin/{user_id}\n            curl -H \"authorization: <token>\" -X PUT -d '{\"description\": \"A new user\", \"real_name\": \"Admin\", \"access_method\": \"pwd\"}' http://localhost:8081/fledge/admin/{user_id}\n    \"\"\"\n    if request.is_auth_optional:\n        _logger.warning(FORBIDDEN_MSG)\n        raise web.HTTPForbidden\n\n    user_id = request.match_info.get('user_id')\n    if int(user_id) == 1:\n        msg = \"Restricted for Super Admin user.\"\n        _logger.warning(msg)\n        raise web.HTTPForbidden(reason=msg, body=json.dumps({\"message\": msg}))\n\n    data = await request.json()\n    access_method = data.get('access_method', '')\n    description = data.get('description', '')\n    real_name = data.get('real_name', '')\n    user_data = {}\n    if 'real_name' in data:\n        if len(real_name.strip()) == 0:\n            msg = \"Real Name should not be empty.\"\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n        else:\n            user_data.update({\"real_name\": real_name.strip()})\n    if 'access_method' in data:\n        if len(access_method.strip()) == 0:\n            msg = \"Access method should not be empty.\"\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n        else:\n            valid_access_method = ('any', 'pwd', 'cert')\n            if access_method not in valid_access_method:\n                msg = \"Accepted access method values are {}.\".format(valid_access_method)\n                raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n            user_data.update({\"access_method\": access_method.strip()})\n    if 'description' in data:\n        user_data.update({\"description\": description.strip()})\n    if not user_data:\n        msg = 
\"Nothing to update.\"\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    try:\n        user = await User.Objects.update(user_id, user_data)\n        if user:\n            user_info = await User.Objects.get(uid=user_id)\n\n        if 'access_method' in data:\n            # Remove OTT token for this user only if access method is updated.\n            __remove_ott_for_user(user_id)\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=str(err), body=json.dumps({\"message\": msg}))\n    except User.DoesNotExist:\n        msg = \"User with ID:<{}> does not exist\".format(int(user_id))\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to update the user ID:<{}>.\".format(user_id))\n        raise web.HTTPInternalServerError(reason=str(exc), body=json.dumps({\"message\": msg}))\n    return web.json_response({'user_info': user_info})\n\n\nasync def update_password(request):\n    \"\"\" update password\n\n        :Example:\n             curl -X PUT -d '{\"current_password\": \"F0gl@mp!\", \"new_password\": \"F0gl@mp1\"}' http://localhost:8081/fledge/user/<user_id>/password\n    \"\"\"\n    if request.is_auth_optional:\n        _logger.warning(FORBIDDEN_MSG)\n        raise web.HTTPForbidden\n\n    user_id = request.match_info.get('user_id')\n    try:\n        int(user_id)\n    except ValueError:\n        msg = \"User ID should be in integer.\"\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n\n    # Restrictions\n    if int(request.user[\"id\"]) != int(user_id):\n        # Super Admin default user\n        if int(user_id) == 1:\n            raise web.HTTPUnauthorized(reason=\"Insufficient privileges to update the password for the given user.\")\n        else:\n            if int(request.user[\"role_id\"]) != ADMIN_ROLE_ID:\n                
raise web.HTTPUnauthorized(\n                    reason=\"Insufficient privileges to update the password for the given user.\")\n\n    data = await request.json()\n    current_password = data.get('current_password')\n    new_password = data.get('new_password')\n    if not current_password or not new_password:\n        msg = \"Current or new password is missing.\"\n        raise web.HTTPBadRequest(reason=msg)\n\n    if new_password and not isinstance(new_password, str):\n        err_msg = \"New password should be a valid string.\"\n        raise web.HTTPBadRequest(reason=err_msg, body=json.dumps({\"message\": err_msg}))\n    error_msg = await validate_password(new_password)\n    if error_msg:\n        raise web.HTTPBadRequest(reason=error_msg, body=json.dumps({\"message\": error_msg}))\n    if current_password == new_password:\n        msg = \"New password should not be the same as current password.\"\n        raise web.HTTPBadRequest(reason=msg)\n\n    user_id = await User.Objects.is_user_exists(user_id, current_password)\n    if not user_id:\n        msg = 'Invalid current password.'\n        raise web.HTTPNotFound(reason=msg)\n\n    try:\n        await User.Objects.update(int(user_id), {'password': new_password})\n        # Remove OTT token for this user if there.\n        __remove_ott_for_user(user_id)\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except User.DoesNotExist:\n        msg = \"User with ID:<{}> does not exist.\".format(int(user_id))\n        raise web.HTTPNotFound(reason=msg)\n    except User.PasswordAlreadyUsed:\n        msg = \"The new password should be different from previous 3 used.\"\n        raise web.HTTPBadRequest(reason=msg)\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to update the user ID:<{}>.\".format(user_id))\n        raise web.HTTPInternalServerError(reason=msg)\n\n    msg = \"Password has been updated successfully for user 
ID:<{}>.\".format(int(user_id))\n    _logger.info(msg)\n    return web.json_response({'message': msg})\n\n\n@has_permission(\"admin\")\nasync def enable_user(request):\n    \"\"\" enable/disable user\n        :Example:\n            curl -H \"authorization: <token>\" -X PUT -d '{\"enabled\": \"true\"}' http://localhost:8081/fledge/admin/{user_id}/enable\n            curl -H \"authorization: <token>\" -X PUT -d '{\"enabled\": \"False\"}' http://localhost:8081/fledge/admin/{user_id}/enable\n    \"\"\"\n    if request.is_auth_optional:\n        _logger.warning(FORBIDDEN_MSG)\n        raise web.HTTPForbidden\n\n    user_id = request.match_info.get('user_id')\n    if int(user_id) == 1:\n        msg = \"Restricted for Super Admin user.\"\n        _logger.warning(msg)\n        raise web.HTTPForbidden(reason=msg, body=json.dumps({\"message\": msg}))\n\n    data = await request.json()\n    enabled = data.get('enabled')\n    try:\n        if enabled is not None:\n            if str(enabled).lower() in ['true', 'false']:\n                from fledge.services.core import connect\n                from fledge.common.storage_client.payload_builder import PayloadBuilder\n                user_data = {'enabled': 't' if str(enabled).lower() == 'true' else 'f'}\n                payload = PayloadBuilder().SELECT(\"id\", \"uname\", \"role_id\", \"enabled\").WHERE(\n                    ['id', '=', user_id]).payload()\n                storage_client = connect.get_storage_async()\n                old_result = await storage_client.query_tbl_with_payload('users', payload)\n                if len(old_result['rows']) == 0:\n                    raise User.DoesNotExist\n                payload = PayloadBuilder().SET(enabled=user_data['enabled']).WHERE(['id', '=', user_id]).payload()\n                result = await storage_client.update_tbl(\"users\", payload)\n                # Remove ott token for this enabled/disabled user.\n                __remove_ott_for_user(user_id)\n                if 
result['response'] == 'updated':\n                    _text = 'enabled' if user_data['enabled'] == 't' else 'disabled'\n                    payload = PayloadBuilder().SELECT(\"id\", \"uname\", \"role_id\", \"enabled\").WHERE(\n                        ['id', '=', user_id]).payload()\n                    new_result = await storage_client.query_tbl_with_payload('users', payload)\n                    if len(new_result['rows']) == 0:\n                        raise User.DoesNotExist\n                    # USRCH audit trail entry\n                    audit = AuditLogger(storage_client)\n                    await audit.information(\n                        'USRCH', {'user_id': int(user_id), 'old_value': {'enabled': old_result['rows'][0]['enabled']},\n                                  'new_value': {'enabled': new_result['rows'][0]['enabled']},\n                                  \"message\": \"'{}' user has been {}.\".format(new_result['rows'][0]['uname'], _text)})\n                else:\n                    raise ValueError('Something went wrong during update. 
Check Syslogs.')\n            else:\n                raise ValueError('Accepted values are True/False only.')\n        else:\n            raise ValueError('Nothing to update. The enabled value is required.')\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({"message": msg}))\n    except User.DoesNotExist:\n        msg = "User with ID:<{}> does not exist.".format(int(user_id))\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({"message": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, "Failed to enable/disable user ID:<{}>.".format(user_id))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({"message": msg}))\n    return web.json_response({'message': 'User with ID:<{}> has been {} successfully.'.format(int(user_id), _text)})\n\n\n@has_permission("admin")\nasync def unblock_user(request):\n    """ Unblock a user who was blocked due to multiple invalid login attempts\n        :Example:\n            curl -H "authorization: <token>" -X PUT http://localhost:8081/fledge/admin/{user_id}/unblock\n    """\n    if request.is_auth_optional:\n        _logger.warning(FORBIDDEN_MSG)\n        raise web.HTTPForbidden\n\n    user_id = request.match_info.get('user_id')\n\n    try:\n        from fledge.services.core import connect\n        storage_client = connect.get_storage_async()\n        result = await _unblock_user(user_id, storage_client)\n        if 'response' in result:\n            if result['response'] == 'updated':\n                # USRUB audit trail entry\n                audit = AuditLogger(storage_client)\n                await audit.information('USRUB', {'user_id': int(user_id),\n                                "message": "User with ID:<{}> has been unblocked.".format(user_id)})\n        else:\n            raise KeyError("Unblock operation for user with ID:<{}> failed.".format(user_id))\n    except (KeyError, 
ValueError) as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=str(err), body=json.dumps({\"message\": msg}))\n    except User.DoesNotExist:\n        msg = \"User with ID:<{}> does not exist.\".format(int(user_id))\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to unblock user ID:<{}>.\".format(user_id))\n        raise web.HTTPInternalServerError(reason=str(exc), body=json.dumps({\"message\": msg}))\n    return web.json_response({'message': 'User with ID:<{}> has been unblocked successfully.'.format(int(user_id))})\n\n\nasync def _unblock_user(user_id, storage_client):\n    \"\"\" implementation for unblock user\n    \"\"\"\n\n    from fledge.common.storage_client.payload_builder import PayloadBuilder\n\n    payload = PayloadBuilder().SELECT(\"id\").WHERE(\n        ['id', '=', user_id]).payload()\n    old_result = await storage_client.query_tbl_with_payload('users', payload)\n    if len(old_result['rows']) == 0:\n        raise User.DoesNotExist('User does not exist')\n\n    # Clear the failed_attempts so that maximum allowed attempts can be used correctly\n    payload = PayloadBuilder().SET(block_until=None, failed_attempts=0).WHERE(['id', '=', user_id]).payload()\n    result = await storage_client.update_tbl(\"users\", payload)\n    return result\n\n@has_permission(\"admin\")\nasync def reset(request):\n    \"\"\" reset user (only role and password)\n        :Example:\n            curl -H \"authorization: <token>\" -X PUT -d '{\"role_id\": \"1\"}' http://localhost:8081/fledge/admin/{user_id}/reset\n            curl -H \"authorization: <token>\" -X PUT -d '{\"password\": \"F0gl@mp!\"}' http://localhost:8081/fledge/admin/{user_id}/reset\n            curl -H \"authorization: <token>\" -X PUT -d '{\"role_id\": 1, \"password\": \"F0gl@mp!\"}' http://localhost:8081/fledge/admin/{user_id}/reset\n    \"\"\"\n    if 
request.is_auth_optional:\n        _logger.warning(FORBIDDEN_MSG)\n        raise web.HTTPForbidden\n\n    user_id = request.match_info.get('user_id')\n    if int(user_id) == 1:\n        msg = "Restricted for Super Admin user."\n        _logger.warning(msg)\n        raise web.HTTPForbidden(reason=msg, body=json.dumps({"message": msg}))\n\n    data = await request.json()\n    password = data.get('password')\n    role_id = data.get('role_id')\n\n    if not role_id and not password:\n        msg = "Nothing to update the user."\n        raise web.HTTPBadRequest(reason=msg)\n\n    if role_id and not (await is_valid_role(role_id)):\n        msg = "Invalid or bad role id."\n        # raise (not return) so the client actually receives a 400 response\n        raise web.HTTPBadRequest(reason=msg)\n\n    if password and not isinstance(password, str):\n        err_msg = "New password should be in string format."\n        raise web.HTTPBadRequest(reason=err_msg, body=json.dumps({"message": err_msg}))\n    if password:\n        error_msg = await validate_password(password)\n        if error_msg:\n            raise web.HTTPBadRequest(reason=error_msg, body=json.dumps({"message": error_msg}))\n    user_data = {}\n    if 'role_id' in data:\n        user_data.update({'role_id': data['role_id']})\n    if 'password' in data:\n        user_data.update({'password': data['password']})\n    if not user_data:\n        msg = "Nothing to update."\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({"message": msg}))\n    try:\n        await User.Objects.update(user_id, user_data)\n        # Remove OTT token for this user if there.\n        __remove_ott_for_user(user_id)\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except User.DoesNotExist:\n        msg = "User with ID:<{}> does not exist.".format(int(user_id))\n        raise web.HTTPNotFound(reason=msg)\n    except User.PasswordAlreadyUsed:\n        msg = "The new password should be different from previous 3 used."\n        _logger.warning(msg)\n 
       raise web.HTTPBadRequest(reason=msg)\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to reset the user ID:<{}>.\".format(user_id))\n        raise web.HTTPInternalServerError(reason=msg)\n\n    msg = \"User with ID:<{}> has been updated successfully.\".format(int(user_id))\n    _logger.info(msg)\n    return web.json_response({'message': msg})\n\n\n@has_permission(\"admin\")\nasync def delete_user(request):\n    \"\"\" Delete a user from users table\n\n    :Example:\n        curl -H \"authorization: <token>\" -X DELETE  http://localhost:8081/fledge/admin/{user_id}/delete\n    \"\"\"\n    if request.is_auth_optional:\n        _logger.warning(FORBIDDEN_MSG)\n        raise web.HTTPForbidden\n\n    # TODO: we should not prevent this, when we have at-least 1 admin (super) user\n    try:\n        user_id = int(request.match_info.get('user_id'))\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n\n    if user_id == 1:\n        msg = \"Super admin user can not be deleted.\"\n        _logger.warning(msg)\n        raise web.HTTPForbidden(reason=msg, body=json.dumps({\"message\": msg}))\n    \n    # Requester should not be able to delete her/himself\n    if user_id == request.user[\"id\"]:\n        msg = \"You can not delete your own account.\"\n        _logger.warning(msg)\n        raise web.HTTPBadRequest(reason=msg)\n\n    try:\n        result = await User.Objects.delete(user_id)\n        if not result['rows_affected']:\n            raise User.DoesNotExist\n        # Remove OTT token for this user if there.\n        __remove_ott_for_user(user_id)\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except User.DoesNotExist:\n        msg = \"User with ID:<{}> does not exist.\".format(int(user_id))\n        raise web.HTTPNotFound(reason=msg)\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to delete the user 
ID:<{}>.\".format(user_id))\n        raise web.HTTPInternalServerError(reason=msg)\n\n    _logger.info(\"User with ID:<{}> has been deleted successfully.\".format(int(user_id)))\n\n    return web.json_response({'message': \"User has been deleted successfully.\"})\n\n\n@has_permission(\"admin\")\nasync def create_certificate(request):\n    \"\"\" Add authentication certificate\n\n    :Example:\n        curl -o username.cert -H \"authorization: <token>\" -sX POST  http://localhost:8081/fledge/admin/{user_id}/authcertificate\n    \"\"\"\n\n    async def delete_cert_after_response(cert_dir, cert_name):\n        await asyncio.sleep(1)\n        import glob\n        files = \"{}/{}*\".format(cert_dir, cert_name)\n        for f in glob.glob(files):\n            os.remove(f)\n\n    if request.is_auth_optional:\n        _logger.warning(FORBIDDEN_MSG)\n        raise web.HTTPForbidden\n\n    user_id = int(request.match_info.get('user_id'))\n    try:\n        data = await request.json()\n        expiration_days = data.get(\"expiration_days\", 365)\n        if not isinstance(expiration_days, int):\n            msg = \"expiration_days must be an integer.\"\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps(msg))\n        if expiration_days < 1 or expiration_days > 365:\n            msg = \"expiration_days must be between 1 and 365.\"\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps(msg))\n    except json.JSONDecodeError:\n        expiration_days = 365\n    try:\n        user = await User.Objects.get(uid=user_id)\n        username = user['uname']\n        os.chdir(_FLEDGE_ROOT)\n        result = subprocess.run(['bash', \"./scripts/auth_certificates\", 'user', username, str(expiration_days)])\n        if result.returncode != 0:\n            raise Exception\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps(msg))\n    except User.DoesNotExist:\n        msg = \"User with ID:<{}> does not 
exist.\".format(int(user_id))\n        raise web.HTTPNotFound(reason=msg, body=json.dumps(msg))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Unable to generate the authentication certificate for user '{}'.\".format(username))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps(msg))\n    else:\n        certs_dir = apiutils.get_fl_dir(\"/etc/certs/\")\n        cert_path = Path(certs_dir) / \"{}.cert\".format(username)\n        if cert_path.exists() and cert_path.is_file():\n            response = web.FileResponse(path=cert_path)\n            asyncio.ensure_future(delete_cert_after_response(certs_dir, username))\n            return response\n        else:\n            msg = \"The certificate for {} could not be found.\".format(username)\n            return web.HTTPNotFound(reason=msg, body=json.dumps(msg))\n\n\nasync def is_valid_role(role_id):\n    roles = [int(r[\"id\"]) for r in await User.Objects.get_roles()]\n    try:\n        role = int(role_id)\n        if role not in roles:\n            raise ValueError\n    except ValueError:\n        return False\n    return True\n\n\ndef has_admin_permissions(request):\n    if request.is_auth_optional is False:  # auth is mandatory\n        # TODO: we may replace with request.user_is_admin\n        if int(request.user[\"role_id\"]) != ADMIN_ROLE_ID:\n            return False\n    return True\n\n\nasync def validate_password(password) -> str:\n    from fledge.common.configuration_manager import ConfigurationManager\n    from fledge.services.core import connect\n    import string\n\n    message = \"\"\n    storage_client = connect.get_storage_async()\n    cfg_mgr = ConfigurationManager(storage_client)\n    category = await cfg_mgr.get_category_all_items('password')\n    policy = category['policy']['value']\n    min_chars = category['length']['value']\n    max_chars = category['length']['maximum']\n    if len(password) < int(min_chars):\n        message = \"Password should have minimum {} characters.\".format(min_chars)\n    if len(password) > int(max_chars):\n        message = \"Password should have maximum {} characters.\".format(max_chars)\n    if not message:\n        has_lower = any(ch.islower() for ch in password)\n        has_upper = any(ch.isupper() for ch in password)\n        has_numeric = any(ch.isdigit() for ch in password)\n        has_special = any(ch in string.punctuation for ch in password)\n        if policy == 'Mixed case Alphabetic':\n            mixed_case = has_lower and has_upper\n            if not mixed_case:\n                message = \"Password must contain upper and lower case letters.\"\n        elif policy == 'Mixed case and numeric':\n            mixed_numeric_case = has_lower and has_upper and has_numeric\n            if not mixed_numeric_case:\n                message = \"Password must contain upper and lower case letters and numeric values.\"\n        elif policy == 'Mixed case, numeric and special characters':\n            mixed_numeric_special_case = has_lower and has_upper and has_numeric and has_special\n            if not mixed_numeric_special_case:\n                message = \"Password must contain at least one upper case letter, one lower case letter, one numeric and one special character.\"\n        else:\n            # Any characters\n            pass\n    return message\n"
  },
  {
    "path": "python/fledge/services/core/api/backup_restore.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Backup and Restore Rest API support\"\"\"\nimport os\nimport sys\nimport tarfile\nimport json\nfrom pathlib import Path\nfrom aiohttp import web\nfrom enum import IntEnum\nfrom collections import OrderedDict\n\nfrom fledge.common.common import _FLEDGE_ROOT, _FLEDGE_DATA\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.logger import FLCoreLogger\n\nfrom fledge.common.storage_client import payload_builder\nfrom fledge.common.web.middleware import has_permission\nfrom fledge.plugins.storage.common import exceptions\nfrom fledge.services.core import connect\n\nif 'fledge.plugins.storage.common.backup' not in sys.modules:\n    from fledge.plugins.storage.common.backup import Backup\n\nif 'fledge.plugins.storage.common.restore' not in sys.modules:\n    from fledge.plugins.storage.common.restore import Restore\n\n__author__ = \"Vaibhav Singhal, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n__DEFAULT_LIMIT = 20\n__DEFAULT_OFFSET = 0\n\n_help = \"\"\"\n    -----------------------------------------------------------------------------------\n    | GET, POST       | /fledge/backup                                                |\n    | POST            | /fledge/backup/upload                                         |\n    | GET, DELETE     | /fledge/backup/{backup-id}                                    |\n    | GET             | /fledge/backup/{backup-id}/download                           |\n    | PUT             | /fledge/backup/{backup-id}/restore                            |\n    | GET             | /fledge/backup/status                                         |\n    -----------------------------------------------------------------------------------\n\"\"\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nclass Status(IntEnum):\n    
\"\"\"Enumeration for backup.status\"\"\"\n    RUNNING = 1\n    COMPLETED = 2\n    CANCELED = 3\n    INTERRUPTED = 4\n    FAILED = 5\n    RESTORED = 6\n\n\ndef _get_status(status_code):\n    if status_code not in range(1, 7):\n        return \"UNKNOWN\"\n    return Status(status_code).name\n\n\nasync def get_backups(request):\n    \"\"\" Returns a list of all backups\n\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/backup\n        curl -sX GET http://localhost:8081/fledge/backup?status=completed\n        curl -sX GET http://localhost:8081/fledge/backup?limit=1\n        curl -sX GET \"http://localhost:8081/fledge/backup?limit=2&status=restored\"\n        curl -sX GET http://localhost:8081/fledge/backup?skip=1\n        curl -sX GET \"http://localhost:8081/fledge/backup?skip=1&limit=1\"\n        curl -sX GET \"http://localhost:8081/fledge/backup?skip=1&status=completed\"\n        curl -sX GET \"http://localhost:8081/fledge/backup?skip=1&status=completed&limit=2\"\n    \"\"\"\n    limit = __DEFAULT_LIMIT\n    if 'limit' in request.query and request.query['limit'] != '':\n        try:\n            limit = int(request.query['limit'])\n            if limit < 0:\n                raise ValueError\n        except ValueError:\n            raise web.HTTPBadRequest(reason=\"Limit must be a positive integer\")\n\n    skip = __DEFAULT_OFFSET\n    if 'skip' in request.query and request.query['skip'] != '':\n        try:\n            skip = int(request.query['skip'])\n            if skip < 0:\n                raise ValueError\n        except ValueError:\n            raise web.HTTPBadRequest(reason=\"Skip/Offset must be a positive integer\")\n\n    status = None\n    if 'status' in request.query and request.query['status'] != '':\n        try:\n            status = Status[request.query['status'].upper()].value\n        except KeyError as ex:\n            raise web.HTTPBadRequest(reason=\"{} is not a valid status\".format(ex))\n    try:\n        backup = 
Backup(connect.get_storage_async())\n        backup_json = await backup.get_all_backups(limit=limit, skip=skip, status=status)\n\n        res = []\n        for row in backup_json:\n            r = OrderedDict()\n            r[\"id\"] = row[\"id\"]\n            r[\"date\"] = row[\"ts\"]\n            r[\"status\"] = _get_status(int(row[\"status\"]))\n            res.append(r)\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get the list of Backup records.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    return web.json_response({\"backups\": res})\n\n\n@has_permission(\"admin\")\nasync def create_backup(request):\n    \"\"\" Creates a backup\n\n    :Example: curl -X POST http://localhost:8081/fledge/backup\n    \"\"\"\n    try:\n        backup = Backup(connect.get_storage_async())\n        status = await backup.create_backup()\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to create Backup.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    return web.json_response({\"status\": status})\n\n\nasync def get_backup_details(request):\n    \"\"\" Returns the details of a backup\n\n    :Example: curl -X GET http://localhost:8081/fledge/backup/1\n    \"\"\"\n    backup_id = request.match_info.get('backup_id', None)\n    try:\n        backup_id = int(backup_id)\n        backup = Backup(connect.get_storage_async())\n        backup_json = await backup.get_backup_details(backup_id)\n\n        resp = {\"status\": _get_status(int(backup_json[\"status\"])),\n                'id': backup_json[\"id\"],\n                'date': backup_json[\"ts\"]\n                }\n\n    except ValueError:\n        raise web.HTTPBadRequest(reason='Invalid backup id')\n    except exceptions.DoesNotExist:\n        raise web.HTTPNotFound(reason='Backup id {} does not exist'.format(backup_id))\n    except Exception as 
ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to fetch backup details for ID: <{}>.\".format(backup_id))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    return web.json_response(resp)\n\n\n@has_permission(\"admin\")\nasync def get_backup_download(request):\n    \"\"\" Download back up file by id\n\n    :Example:\n        wget -O fledge-backup-1.tar.gz http://localhost:8081/fledge/backup/1/download\n\n    \"\"\"\n    backup_id = request.match_info.get('backup_id', None)\n    try:\n        backup_id = int(backup_id)\n        backup = Backup(connect.get_storage_async())\n        backup_json = await backup.get_backup_details(backup_id)\n\n        # Strip filename from backup path\n        file_name_path = str(backup_json[\"file_name\"]).split('data/backup/')\n        file_name = str(file_name_path[1])\n        dir_name = _FLEDGE_DATA + '/backup/' if _FLEDGE_DATA else _FLEDGE_ROOT + \"/data/backup/\"\n        source = dir_name + file_name\n        if not os.path.isfile(source):\n            raise FileNotFoundError('{} backup file does not exist in {} directory'.format(file_name, dir_name))\n        # Find the source extension\n        dummy, file_extension = os.path.splitext(source)\n        # backward compatibility (<= 1.9.2)\n        if file_extension in (\".db\", \".dump\"):\n            # Create tar file\n            t = tarfile.open(source + \".tar.gz\", \"w:gz\")\n            t.add(source, arcname=os.path.basename(source))\n            t.close()\n            gz_path = Path(source + \".tar.gz\")\n        else:\n            gz_path = Path(source)\n        _logger.debug(\"get_backup_download - file_extension :{}: - gz_path :{}:\".format(file_extension, gz_path))\n    except FileNotFoundError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError:\n        msg = \"Invalid backup id\"\n        raise 
web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except exceptions.DoesNotExist:\n        msg = \"Backup id {} does not exist\".format(backup_id)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to download Backup file for ID: <{}>.\".format(backup_id))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.FileResponse(path=gz_path, headers={'Content-Type': 'application/gzip'})\n\n\n@has_permission(\"admin\")\nasync def delete_backup(request):\n    \"\"\" Delete a backup\n\n    :Example: curl -X DELETE http://localhost:8081/fledge/backup/1\n    \"\"\"\n    backup_id = request.match_info.get('backup_id', None)\n    try:\n        backup_id = int(backup_id)\n        backup = Backup(connect.get_storage_async())\n        await backup.delete_backup(backup_id)\n        return web.json_response({'message': \"Backup deleted successfully\"})\n    except ValueError:\n        raise web.HTTPBadRequest(reason='Invalid backup id')\n    except exceptions.DoesNotExist:\n        raise web.HTTPNotFound(reason='Backup id {} does not exist'.format(backup_id))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to delete Backup ID: <{}>.\".format(backup_id))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n\n@has_permission(\"admin\")\nasync def restore_backup(request):\n    \"\"\"\n    Restore from a backup\n    :Example: curl -X PUT http://localhost:8081/fledge/backup/1/restore\n    \"\"\"\n\n    # TODO: FOGL-861\n    backup_id = request.match_info.get('backup_id', None)\n    try:\n        backup_id = int(backup_id)\n        restore = Restore(connect.get_storage_async())\n        status = await restore.restore_backup(backup_id)\n        return web.json_response({'status': status})\n    except 
ValueError:\n        raise web.HTTPBadRequest(reason='Invalid backup id')\n    except exceptions.DoesNotExist:\n        raise web.HTTPNotFound(reason='Backup with {} does not exist'.format(backup_id))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to restore Backup ID: <{}>.\".format(backup_id))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n\nasync def get_backup_status(request):\n    \"\"\"\n\n    Returns:\n           an array of backup status enumeration key index values\n\n    :Example:\n\n        curl -X GET http://localhost:8081/fledge/backup/status\n    \"\"\"\n    results = []\n    for _status in Status:\n        data = {'index': _status.value, 'name': _status.name}\n        results.append(data)\n\n    return web.json_response({\"backupStatus\": results})\n\n\n@has_permission(\"admin\")\nasync def upload_backup(request: web.Request) -> web.Response:\n    \"\"\"\n    Upload a backup file\n\n    :Example:\n        curl -F \"filename=@fledge_backup_2021_08_24_13_27_08.tar.gz\" localhost:8081/fledge/backup/upload\n        curl -F \"filename=@fledge_backup_2021_08_25_16_37_01.dump.tar.gz\" localhost:8081/fledge/backup/upload\n    \"\"\"\n\n    try:\n        fl_data_path = _FLEDGE_DATA if _FLEDGE_DATA else _FLEDGE_ROOT + '/data'\n        backup_prefix = \"fledge_backup_\"\n        backup_path = \"{}/backup\".format(fl_data_path)\n        temp_path = \"{}/upload\".format(fl_data_path)\n        valid_extensions = ('.db', '.dump')\n        reader = await request.multipart()\n        # reader.next() will `yield` the fields of your form\n        field = await reader.next()\n        file_name = field.filename\n        if not str(file_name).endswith(\".tar.gz\"):\n            raise NameError(\"{} file should end with .tar.gz extension\".format(file_name))\n        if not str(file_name).startswith(backup_prefix):\n            raise NameError(\"{} filename is invalid. 
Either check file format from FLEDGE_DATA/backup \"\n                            \"or create a new one from the GUI Backup & Restore option\".format(file_name))\n        # Create temporary directory for tar extraction & backup data directory for Fledge\n        cmd = \"mkdir -p {} {}\".format(temp_path, backup_path)\n        os.system(cmd)\n        # You cannot rely on Content-Length if transfer is chunked\n        size = 0\n        with open(\"{}/{}\".format(temp_path, file_name), 'wb') as temp_file:\n            while True:\n                chunk = await field.read_chunk()  # 8192 bytes by default.\n                if not chunk:\n                    break\n                size += len(chunk)\n                temp_file.write(chunk)\n\n        _logger.debug(\"upload_backup - temp_path :{}: file_name :{}: \".format(temp_path, file_name))\n        # Extract tar inside temporary directory\n        tar_file = tarfile.open(name=\"{}/{}\".format(temp_path, file_name), mode='r:*')\n        tar_file_names = tar_file.getnames()\n        if any((item.startswith(backup_prefix) and item.endswith(valid_extensions)) for item in tar_file_names):\n            if any((item.startswith(\"etc\") and item.endswith(\"etc\")) for item in tar_file_names):\n                source = temp_path + \"/\" + file_name\n                backup_file_name = file_name\n            # backward compatibility (<= 1.9.2)\n            else:\n                tar_file.extractall(temp_path)\n                backup_file_name = tar_file_names[0]\n                source = \"{}/{}\".format(temp_path, backup_file_name)\n            cmd = \"cp {} {}\".format(source, backup_path)\n            _logger.debug(\"upload_backup: source :{}: - cmd :{}: - filename :{}:\".format(source, cmd, backup_file_name))\n            ret_code = os.system(cmd)\n            if ret_code != 0:\n                raise OSError(\"{} upload failed during copy to path:{}\".format(file_name, backup_path))\n            # TODO: FOGL-5876 
ts as per post param if given in payload\n            # insert backup record entry in db\n            full_file_name_path = \"{}/{}\".format(backup_path, backup_file_name)\n            # Get the current UTC date and format as 'YYYY-MM-DD HH:MM:SS'\n            from datetime import datetime\n            now_utc = datetime.utcnow().strftime(\"%Y-%m-%d %H:%M:%S\")\n            payload = payload_builder.PayloadBuilder().INSERT(\n                file_name=full_file_name_path, ts=now_utc, type=1, status=2, exit_code=0).payload()\n            # audit trail entry\n            storage = connect.get_storage_async()\n            await storage.insert_into_tbl(\"backups\", payload)\n            audit = AuditLogger(storage)\n            await audit.information('BKEXC', {'status': 'completed', 'message': 'From upload backup'})\n            # TODO: FOGL-4239 - readings table upload\n        else:\n            raise NameError('Either {} prefix or {} valid extension is not found inside given tar file'.format(\n                backup_prefix, valid_extensions))\n    except tarfile.ReadError:\n        msg = \"DB file is not found inside tarfile and should be with valid {} extensions\".format(valid_extensions)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except tarfile.CompressionError:\n        msg = \"Only gzip compression is supported\"\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except (NameError, OSError, RuntimeError) as err_msg:\n        msg = str(err_msg)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err_msg:\n        msg = str(err_msg)\n        raise web.HTTPNotImplemented(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to upload Backup.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    
else:\n        msg = \"{} backup uploaded successfully.\".format(file_name)\n        return web.json_response({\"message\": msg})\n    finally:\n        # Remove temporary directory\n        cmd = \"rm -rf {}\".format(temp_path)\n        os.system(cmd)\n"
  },
  {
    "path": "python/fledge/services/core/api/browser.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"\nExperimental browser for extracting reading data from the Fledge data buffer\n\nSupports a number of REST API:\n\n  http://<address>/fledge/asset\n     - Return a summary count of all asset readings\n  http://<address>/fledge/asset/{asset_code}\n    - Return a set of asset readings for the given asset\n  http://<address>/fledge/asset/{asset_code}/latest\n    - Return latest reading for the given asset\n  http://<address>/fledge/asset/{asset_code}/summary\n    - Return a set of the summary of all sensors values for the given asset\n  http://<address>/fledge/asset/{asset_code}/{reading}\n    - Return a set of sensor readings for the specified asset and sensor\n  http://<address>/fledge/asset/{asset_code}/{reading}/summary\n    - Return a summary (min, max and average) for the specified asset and sensor\n  http://<address>/fledge/asset/{asset_code}/{reading}/series\n    - Return a time series (min, max and average) for the specified asset and\n      sensor averages over seconds, minutes or hours. The selection of seconds, minutes\n      or hours is done via the group query parameter\n\n  All but the /fledge/asset API call take a set of optional query parameters\n    limit=x     Return the first x rows only\n    skip=x      skip first n entries and used with limit to implemented paged interfaces\n    seconds=x   Limit the data return to be less than x seconds old\n    minutes=x   Limit the data returned to be less than x minutes old\n    hours=x     Limit the data returned to be less than x hours old\n\n  Note: seconds, minutes and hours can not be combined in a URL. 
If they are then only seconds\n  will have an effect.\n  Note: if datetime units are supplied then limit is not respected, i.e. they are mutually exclusive\n\"\"\"\nimport time\nimport datetime\nimport json\n\nfrom aiohttp import web\n\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.services.core import connect\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n__author__ = \"Mark Riddoch, Ashish Jabble, Massimiliano Pinto\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n__DEFAULT_LIMIT = 20\n__DEFAULT_OFFSET = 0\n\nDATAPOINT_TYPES = ['__DPIMAGE', '__DATABUFFER']\nIMAGE_PLACEHOLDER = \"Data removed for brevity\"\n\n\ndef setup(app):\n    \"\"\" Add the routes for the API endpoints supported by the data browser \"\"\"\n    app.router.add_route('GET', '/fledge/asset', asset_counts)\n    app.router.add_route('GET', '/fledge/asset/timespan', asset_timespan)\n    app.router.add_route('GET', '/fledge/asset/{asset_code}', asset)\n    app.router.add_route('GET', '/fledge/asset/{asset_code}/latest', asset_latest)\n    app.router.add_route('GET', '/fledge/asset/{asset_code}/summary', asset_all_readings_summary)\n    app.router.add_route('GET', '/fledge/asset/{asset_code}/timespan', asset_reading_timespan)\n    app.router.add_route('GET', '/fledge/asset/{asset_code}/{reading}', asset_reading)\n    app.router.add_route('GET', '/fledge/asset/{asset_code}/{reading}/summary', asset_summary)\n    app.router.add_route('GET', '/fledge/asset/{asset_code}/{reading}/series', asset_averages)\n    app.router.add_route('GET', '/fledge/asset/{asset_code}/bucket/{bucket_size}', asset_datapoints_with_bucket_size)\n    app.router.add_route('GET', '/fledge/asset/{asset_code}/{reading}/bucket/{bucket_size}',\n                         asset_readings_with_bucket_size)\n    app.router.add_route('GET', '/fledge/structure/asset', asset_structure)\n    # 
The developer Purge by Asset name API entry points\n    app.router.add_route('DELETE', '/fledge/asset', asset_purge_all)\n    app.router.add_route('DELETE', '/fledge/asset/{asset_code}', asset_purge)\n\n\ndef prepare_limit_skip_payload(request, _dict):\n    \"\"\" limit skip clause validation\n\n    Args:\n        request: request query params\n        _dict: main payload dict\n    Returns:\n        chain payload dict\n    \"\"\"\n    limit = __DEFAULT_LIMIT\n    if 'limit' in request.query and request.query['limit'] != '':\n        try:\n            limit = int(request.query['limit'])\n            if limit < 0:\n                raise ValueError\n        except ValueError:\n            raise web.HTTPBadRequest(reason=\"Limit must be a positive integer\")\n\n    offset = __DEFAULT_OFFSET\n    if 'skip' in request.query and request.query['skip'] != '':\n        try:\n            offset = int(request.query['skip'])\n            if offset < 0:\n                raise ValueError\n        except ValueError:\n            raise web.HTTPBadRequest(reason=\"Skip/Offset must be a positive integer\")\n\n    payload = PayloadBuilder(_dict).LIMIT(limit)\n    if offset:\n        payload = PayloadBuilder(_dict).SKIP(offset)\n\n    return payload.chain_payload()\n\n\ndef is_image_excluded(request: web.Request) -> bool:\n    \"\"\" image type datapoints exclusion\n    Args:\n        request: images request query param\n    Returns:\n        Boolean\n    \"\"\"\n    if 'images' in request.query:\n        if request.query['images'] not in ('include', 'exclude'):\n            msg = \"images request query should either be include or exclude.\"\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n        if request.query['images'] == 'include':\n            return False\n    return True\n\n\nasync def asset_counts(request):\n    \"\"\" Browse all the assets for which we have recorded readings and\n    return a readings count.\n\n    Returns:\n           
json result on basis of SELECT asset_code, count(*) FROM readings GROUP BY asset_code;\n\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/asset\n    \"\"\"\n    payload = PayloadBuilder().AGGREGATE([\"count\", \"*\"]).ALIAS(\"aggregate\", (\"*\", \"count\", \"count\")) \\\n        .GROUP_BY(\"asset_code\").payload()\n    _readings = connect.get_readings_async()\n    results = await _readings.query(payload)\n    try:\n        response = results['rows']\n        asset_json = [{\"count\": r['count'], \"assetCode\": r['asset_code']} for r in response]\n    except KeyError:\n        msg = results['message']\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to get all assets.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(asset_json)\n\n\nasync def asset(request):\n    \"\"\" Browse a particular asset for which we have recorded readings and\n    return the readings with timestamps for the asset. The number of readings\n    returned defaults to a small number (20); this may be changed by supplying\n    the query parameters ?limit=xx&skip=xx, which are not respected when datetime units are supplied.\n    The readings can also be output in ascending or descending order via the query parameter\n    ?order=asc or ?order=desc. 
If no order is given then the default is descending.\n    Returns:\n          json result on basis of SELECT user_ts as \"timestamp\", (reading)::json FROM readings WHERE asset_code = 'asset_code' ORDER BY user_ts DESC LIMIT 20 OFFSET 0;\n\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity?limit=1\n            curl -sX GET \"http://localhost:8081/fledge/asset/fogbench_humidity?limit=1&skip=1\"\n            curl -sX GET \"http://localhost:8081/fledge/asset/fogbench_humidity?limit=1&skip=1&order=asc\"\n            curl -sX GET \"http://localhost:8081/fledge/asset/fogbench_humidity?limit=1&skip=1&order=desc\"\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity?seconds=60\n            curl -sX GET \"http://localhost:8081/fledge/asset/fogbench_humidity?seconds=60&previous=600\"\n            curl -sX GET \"http://localhost:8081/fledge/asset/fogbench_humidity?additional=sinusoid,random&seconds=600\"\n            curl -sX GET \"http://localhost:8081/fledge/asset/sinusoid?mostrecent=true&seconds=600\"\n            curl -sX GET \"http://localhost:8081/fledge/asset/sinusoid?mostrecent=true&seconds=60&additional=randomwalk\"\n    \"\"\"\n    asset_code = request.match_info.get('asset_code', '')\n    # A comma separated list of additional assets to generate the readings to display multiple graphs in GUI\n    if 'additional' in request.query:\n        additional_assets = \"{},{}\".format(asset_code, request.query['additional'])\n        additional_asset_codes = additional_assets.split(',')\n        _select = PayloadBuilder().SELECT((\"asset_code\", \"reading\", \"user_ts\")).ALIAS(\n            \"return\", (\"user_ts\", \"timestamp\")).chain_payload()\n        _where = PayloadBuilder(_select).WHERE([\"asset_code\", \"in\", additional_asset_codes]).chain_payload()\n    else:\n        _select = PayloadBuilder().SELECT((\"reading\", 
\"user_ts\")).ALIAS(\n            \"return\", (\"user_ts\", \"timestamp\")).chain_payload()\n        _where = PayloadBuilder(_select).WHERE([\"asset_code\", \"=\", asset_code]).chain_payload()\n    if 'previous' in request.query and (\n            'seconds' in request.query or 'minutes' in request.query or 'hours' in request.query):\n        _and_where = where_window(request, _where)\n    elif 'mostrecent' not in request.query and (\n            'seconds' in request.query or 'minutes' in request.query or 'hours' in request.query):\n        _and_where = where_clause(request, _where)\n    elif 'mostrecent' in request.query and 'seconds' in request.query:\n        if str(request.query['mostrecent']).lower() == 'true':\n            if 'previous_ts' in request.query:\n                _and_where = where_window(request, _where)\n            else:\n                # Get the latest reading timestamp for each requested asset\n                asset_codes = additional_asset_codes if 'additional' in request.query else [asset_code]\n                _readings = connect.get_readings_async()\n                date_times = []\n                dt_format = '%Y-%m-%d %H:%M:%S.%f'\n                for ac in asset_codes:\n                    payload = PayloadBuilder().SELECT(\"user_ts\").ALIAS(\"return\", (\"user_ts\", \"timestamp\")).WHERE(\n                        [\"asset_code\", \"=\", ac]).LIMIT(1).ORDER_BY([\"user_ts\", \"desc\"]).payload()\n                    results = await _readings.query(payload)\n                    response = results['rows']\n                    if response and 'timestamp' in response[0]:\n                        date_times.append(datetime.datetime.strptime(response[0]['timestamp'], dt_format))\n                if date_times:\n                    most_recent_ts = max(date_times)\n                    _logger.debug(\"DTS: {} most_recent_ts: {}\".format(date_times, most_recent_ts))\n                    window = int(request.query['seconds'])\n                    to_ts = 
most_recent_ts - datetime.timedelta(seconds=window)\n                    # The timestamp returned by the above query is UTC,\n                    # so \"+00:00\" is appended to the upper-limit date string for correct query execution\n                    most_recent_str = most_recent_ts.strftime(dt_format) + \"+00:00\"\n                    to_str = to_ts.strftime(dt_format)\n                    _logger.debug(\"user_ts <={} TO user_ts>{}\".format(most_recent_str, to_str))\n                    _and_where = PayloadBuilder(_where).AND_WHERE(['user_ts', '<=', most_recent_str]).AND_WHERE(\n                        ['user_ts', '>', to_str]).chain_payload()\n                else:\n                    _and_where = where_clause(request, _where)\n    elif 'previous' in request.query:\n        msg = \"the parameter previous can only be given if one of seconds, minutes or hours is also given\"\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        # Add the order by and limit, offset clause\n        _and_where = prepare_limit_skip_payload(request, _where)\n\n    # check the order. 
keep the default order desc\n    _order = 'desc'\n    if 'order' in request.query:\n        _order = request.query['order']\n        if _order not in ('asc', 'desc'):\n            msg = \"order must be asc or desc\"\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    payload = PayloadBuilder(_and_where).ORDER_BY([\"user_ts\", _order]).payload()\n    try:\n        _readings = connect.get_readings_async()\n        results = await _readings.query(payload)\n        rows = results['rows']\n        for index, data in enumerate(rows):\n            for item_name, item_val in data.items():\n                if isinstance(item_val, dict):\n                    for item_name2, item_val2 in item_val.items():\n                        if isinstance(item_val2, str) and item_val2.startswith(tuple(DATAPOINT_TYPES)):\n                            data[item_name][item_name2] = IMAGE_PLACEHOLDER if is_image_excluded(request) else item_val2\n        # Group the readings value by asset_code in case of additional multiple assets\n        if 'additional' in request.query:\n            response_by_asset_code = {}\n            for aacl in additional_asset_codes:\n                response_by_asset_code[aacl] = []\n            for r in rows:\n                if r['asset_code'] in additional_asset_codes:\n                    response_by_asset_code[r['asset_code']].extend([r])\n                    r.pop('asset_code')\n            response = response_by_asset_code\n        else:\n            response = rows\n    except KeyError:\n        msg = results['message']\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to get {} asset.\".format(asset_code))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(response)\n\n\nasync def asset_latest(request: 
web.Request) -> web.Response:\n    \"\"\" Browse a particular asset for which we have recorded readings and\n    return the single latest reading, with its timestamp, for an asset.\n    Returns:\n            Latest reading for an asset\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/latest\n    \"\"\"\n    asset_code = request.match_info.get('asset_code', '')\n    payload = PayloadBuilder().SELECT((\"reading\", \"user_ts\")).ALIAS(\"return\", (\"user_ts\", \"timestamp\")).WHERE(\n        [\"asset_code\", \"=\", asset_code]).LIMIT(1).ORDER_BY([\"user_ts\", \"desc\"]).payload()\n    results = {}\n    try:\n        _readings = connect.get_readings_async()\n        results = await _readings.query(payload)\n        response = results['rows']\n    except KeyError:\n        msg = results['message']\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to get latest {} asset.\".format(asset_code))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(response)\n\n\nasync def asset_reading(request):\n    \"\"\" Browse a particular sensor value of a particular asset for which we have recorded readings and\n    return the timestamp and reading value for that sensor. The number of rows returned\n    is limited to a small number; this number may be altered by use of\n    the query parameters limit=xxx&skip=xxx, which are ignored when a datetime unit is supplied\n\n    The readings returned can also be time limited by use of the query\n    parameter seconds=sss. This defines a number of seconds that the reading\n    must have been processed in. Older readings than this will not be returned.\n\n    The readings returned can also be time limited by use of the query\n    parameter minutes=mmm. 
This defines a number of minutes that the reading\n    must have been processed in. Older readings than this will not be returned.\n\n    The readings returned can also be time limited by use of the query\n    parameter hours=hh. This defines a number of hours that the reading\n    must have been processed in. Older readings than this will not be returned.\n\n    Only one of hours, minutes or seconds should be supplied\n\n    Returns:\n           json result on basis of SELECT user_ts as \"timestamp\", reading->>'reading' FROM readings WHERE asset_code = 'asset_code' ORDER BY user_ts DESC LIMIT 20 OFFSET 0;\n\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/temperature\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/temperature?limit=1\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/temperature?skip=10\n            curl -sX GET \"http://localhost:8081/fledge/asset/fogbench_humidity/temperature?limit=1&skip=10\"\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/temperature?minutes=60\n    \"\"\"\n    asset_code = request.match_info.get('asset_code', '')\n    reading = request.match_info.get('reading', '')\n\n    _select = PayloadBuilder().SELECT((\"user_ts\", [\"reading\", reading])) \\\n        .ALIAS(\"return\", (\"user_ts\", \"timestamp\"), (\"reading\", reading)).chain_payload()\n    _where = PayloadBuilder(_select).WHERE([\"asset_code\", \"=\", asset_code]).chain_payload()\n    if 'previous' in request.query and (\n            'seconds' in request.query or 'minutes' in request.query or 'hours' in request.query):\n        _and_where = where_window(request, _where)\n    elif 'seconds' in request.query or 'minutes' in request.query or 'hours' in request.query:\n        _and_where = where_clause(request, _where)\n    elif 'previous' in request.query:\n        msg = \"the parameter previous can only be given if one of seconds, 
minutes or hours is also given\"\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        # Add the order by and limit, offset clause\n        _and_where = prepare_limit_skip_payload(request, _where)\n    payload = PayloadBuilder(_and_where).ORDER_BY([\"user_ts\", \"desc\"]).payload()\n    try:\n        _readings = connect.get_readings_async()\n        results = await _readings.query(payload)\n        rows = results['rows']\n        for index, data in enumerate(rows):\n            for item_name, item_val in data.items():\n                if item_name != 'timestamp':\n                    if isinstance(item_val, str) and item_val.startswith(tuple(DATAPOINT_TYPES)):\n                        data[item_name] = IMAGE_PLACEHOLDER if is_image_excluded(request) else item_val\n        response = rows\n    except KeyError:\n        msg = results['message']\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to get {} asset for {} reading.\".format(asset_code, reading))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(response)\n\n\nasync def asset_all_readings_summary(request):\n    \"\"\" Browse all the assets for which we have recorded readings and\n    return a summary for all sensors values for an asset code. 
The values that are\n    returned are the min, max and average values of the sensor.\n\n    Only one of hours, minutes or seconds should be supplied; if more than one time unit\n    is given then the smallest unit will be picked\n\n    The number of records returned defaults to a small number (20); this may be changed by supplying\n    the query parameters ?limit=xx&skip=xx, which are ignored when a datetime unit is supplied\n\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/summary\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/summary?seconds=60\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/summary?limit=10\n    \"\"\"\n    try:\n        # Get readings from asset_code\n        asset_code = request.match_info.get('asset_code', '')\n        # TODO: Use only the latest asset read to determine the data points to use. This\n        # avoids reading every single reading into memory and creating a very big result set See FOGL-2635\n        payload = PayloadBuilder().SELECT(\"reading\").WHERE(\n            [\"asset_code\", \"=\", asset_code]).LIMIT(1).ORDER_BY([\"user_ts\", \"desc\"]).payload()\n        _readings = connect.get_readings_async()\n        results = await _readings.query(payload)\n        if not results['rows']:\n            raise KeyError(\"{} asset_code not found\".format(asset_code))\n\n        # TODO: FOGL-1768 when support available from storage layer then avoid multiple calls\n        # Find keys in readings\n        reading_keys = list(results['rows'][-1]['reading'].keys())\n        rows = []\n        _where = PayloadBuilder().WHERE([\"asset_code\", \"=\", asset_code]).chain_payload()\n        if 'previous' in request.query and (\n                'seconds' in request.query or 'minutes' in request.query or 'hours' in request.query):\n            _and_where = where_window(request, _where)\n        elif 'seconds' in request.query or 'minutes' in 
request.query or 'hours' in request.query:\n            _and_where = where_clause(request, _where)\n        elif 'previous' in request.query:\n            msg = \"the parameter previous can only be given if one of seconds, minutes or hours is also given\"\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n        else:\n            # Add limit, offset clause\n            _and_where = prepare_limit_skip_payload(request, _where)\n\n        for reading in reading_keys:\n            _aggregate = PayloadBuilder(_and_where).AGGREGATE([\"min\", [\"reading\", reading]],\n                                                              [\"max\", [\"reading\", reading]],\n                                                              [\"avg\", [\"reading\", reading]]) \\\n                .ALIAS('aggregate', ('reading', 'min', 'min'),\n                       ('reading', 'max', 'max'),\n                       ('reading', 'avg', 'average')).chain_payload()\n            payload = PayloadBuilder(_aggregate).payload()\n            results = await _readings.query(payload)\n            rows.append({reading: results['rows'][0]})\n        for index, data in enumerate(rows):\n            for item_name, item_val in data.items():\n                if isinstance(item_val, dict):\n                    for item_name2, item_val2 in item_val.items():\n                        if isinstance(item_val2, str) and item_val2.startswith(tuple(DATAPOINT_TYPES)):\n                            data[item_name][item_name2] = IMAGE_PLACEHOLDER if is_image_excluded(request) else item_val2\n        response = rows\n    except (KeyError, IndexError) as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except (TypeError, ValueError) as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        
_logger.error(exc, \"Failed to get {} asset readings summary.\".format(asset_code))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(response)\n\n\nasync def asset_summary(request):\n    \"\"\" Browse all the assets for which we have recorded readings and\n    return a summary for a particular sensor. The values that are\n    returned are the min, max and average values of the sensor.\n\n    The readings summarised can also be time limited by use of the query\n    parameter seconds=sss. This defines a number of seconds that the reading\n    must have been processed in. Older readings than this will not be summarised.\n\n    The readings summarised can also be time limited by use of the query\n    parameter minutes=mmm. This defines a number of minutes that the reading\n    must have been processed in. Older readings than this will not be summarised.\n\n    The readings summarised can also be time limited by use of the query\n    parameter hours=hh. This defines a number of hours that the reading\n    must have been processed in. 
Older readings than this will not be summarised.\n\n    Only one of hour, minutes or seconds should be supplied\n\n    Returns:\n           json result on basis of SELECT MIN(reading->>'reading'), MAX(reading->>'reading'), AVG((reading->>'reading')::float) FROM readings WHERE asset_code = 'asset_code';\n\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/temperature/summary\n    \"\"\"\n    asset_code = request.match_info.get('asset_code', '')\n    reading = request.match_info.get('reading', '')\n    try:\n        payload = PayloadBuilder().SELECT(\"reading\").WHERE([\"asset_code\", \"=\", asset_code]).LIMIT(1).ORDER_BY(\n            [\"user_ts\", \"desc\"]).payload()\n        _readings = connect.get_readings_async()\n        results = await _readings.query(payload)\n        if not results['rows']:\n            raise ValueError('{} asset code not found'.format(asset_code))\n\n        # TODO: FOGL-1768 when support available from storage layer then avoid multiple calls\n        reading_keys = list(results['rows'][-1]['reading'].keys())\n        if reading not in reading_keys:\n            raise ValueError('{} reading key is not found'.format(reading))\n        _aggregate = PayloadBuilder().AGGREGATE([\"min\", [\"reading\", reading]], [\"max\", [\"reading\", reading]],\n                                                [\"avg\", [\"reading\", reading]]) \\\n            .ALIAS('aggregate', ('reading', 'min', 'min'), ('reading', 'max', 'max'),\n                   ('reading', 'avg', 'average')).chain_payload()\n        _where = PayloadBuilder(_aggregate).WHERE([\"asset_code\", \"=\", asset_code]).chain_payload()\n        _and_where = where_clause(request, _where)\n        payload = PayloadBuilder(_and_where).payload()\n        results = await _readings.query(payload)\n        # for aggregates, so there can only ever be one row\n        response = results['rows'][0]\n        for item_name, item_val in response.items():\n           
 if isinstance(item_val, str) and item_val.startswith(tuple(DATAPOINT_TYPES)):\n                response[item_name] = IMAGE_PLACEHOLDER if is_image_excluded(request) else item_val\n    except KeyError:\n        msg = results['message']\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to get {} asset {} reading summary.\".format(asset_code, reading))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({reading: response})\n\n\nasync def asset_averages(request):\n    \"\"\" Browse all the assets for which we have recorded readings and\n    return a series of averages per second, minute or hour.\n\n    The readings averaged can also be time limited by use of the query\n    parameter seconds=sss. This defines a number of seconds that the reading\n    must have been processed in. Older readings than this will not be summarised.\n\n    The readings averaged can also be time limited by use of the query\n    parameter minutes=mmm. This defines a number of minutes that the reading\n    must have been processed in. Older readings than this will not be summarised.\n\n    The readings averaged can also be time limited by use of the query\n    parameter hours=hh. This defines a number of hours that the reading\n    must have been processed in. Older readings than this will not be summarised.\n\n    Only one of hour, minutes or seconds should be supplied\n\n    The amount of time covered by each returned value is set using the\n    query parameter group. 
This may be set to seconds, minutes or hours\n\n    Returns:\n            on the basis of\n            SELECT min((reading->>'reading')::float) AS \"min\",\n                   max((reading->>'reading')::float) AS \"max\",\n                   avg((reading->>'reading')::float) AS \"average\",\n                   to_char(user_ts, 'YYYY-MM-DD HH24:MI:SS') AS \"timestamp\"\n            FROM fledge.readings\n                   WHERE asset_code = 'asset_code' AND\n                     reading ? 'reading'\n            GROUP BY to_char(user_ts, 'YYYY-MM-DD HH24:MI:SS')\n            ORDER BY timestamp DESC;\n\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/temperature/series\n            curl -sX GET \"http://localhost:8081/fledge/asset/fogbench_humidity/temperature/series?limit=1&skip=1\"\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/temperature/series?hours=1\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/temperature/series?minutes=60\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/temperature/series?seconds=3600\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/temperature/series?group=seconds\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/temperature/series?group=minutes\n            curl -sX GET http://localhost:8081/fledge/asset/fogbench_humidity/temperature/series?group=hours\n    \"\"\"\n    asset_code = request.match_info.get('asset_code', '')\n    reading = request.match_info.get('reading', '')\n\n    ts_restraint = 'YYYY-MM-DD HH24:MI:SS'\n    if 'group' in request.query and request.query['group'] != '':\n        _group = request.query['group']\n        if _group in ('seconds', 'minutes', 'hours'):\n            if _group == 'seconds':\n                ts_restraint = 'YYYY-MM-DD HH24:MI:SS'\n            elif _group == 'minutes':\n                ts_restraint = 
'YYYY-MM-DD HH24:MI'\n            elif _group == 'hours':\n                ts_restraint = 'YYYY-MM-DD HH24'\n        else:\n            raise web.HTTPBadRequest(reason=\"{} is not a valid group\".format(_group))\n\n    _aggregate = PayloadBuilder().AGGREGATE([\"min\", [\"reading\", reading]], [\"max\", [\"reading\", reading]],\n                                            [\"avg\", [\"reading\", reading]]) \\\n        .ALIAS('aggregate', ('reading', 'min', 'min'), ('reading', 'max', 'max'),\n               ('reading', 'avg', 'average')).chain_payload()\n    _where = PayloadBuilder(_aggregate).WHERE([\"asset_code\", \"=\", asset_code]).chain_payload()\n\n    if 'previous' in request.query and (\n            'seconds' in request.query or 'minutes' in request.query or 'hours' in request.query):\n        _and_where = where_window(request, _where)\n    elif 'seconds' in request.query or 'minutes' in request.query or 'hours' in request.query:\n        _and_where = where_clause(request, _where)\n    elif 'previous' in request.query:\n        msg = \"the parameter previous can only be given if one of seconds, minutes or hours is also given\"\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        # Add LIMIT, OFFSET\n        _and_where = prepare_limit_skip_payload(request, _where)\n\n    # Add the GROUP BY and ORDER BY timestamp DESC\n    _group = PayloadBuilder(_and_where).GROUP_BY(\"user_ts\").ALIAS(\"group\", (\"user_ts\", \"timestamp\")) \\\n        .FORMAT(\"group\", (\"user_ts\", ts_restraint)).chain_payload()\n    payload = PayloadBuilder(_group).ORDER_BY([\"user_ts\", \"desc\"]).payload()\n    try:\n        _readings = connect.get_readings_async()\n        results = await _readings.query(payload)\n        rows = results['rows']\n        for index, data in enumerate(rows):\n            for item_name, item_val in data.items():\n                if item_name != 'timestamp':\n                    if isinstance(item_val, str) 
and item_val.startswith(tuple(DATAPOINT_TYPES)):\n                        data[item_name] = IMAGE_PLACEHOLDER if is_image_excluded(request) else item_val\n        response = rows\n    except KeyError:\n        msg = results['message']\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to get average of {} readings for {} asset.\".format(reading, asset_code))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(response)\n\n\ndef where_clause(request, where):\n    val = 0\n    try:\n        if 'seconds' in request.query and request.query['seconds'] != '':\n            val = int(request.query['seconds'])\n        elif 'minutes' in request.query and request.query['minutes'] != '':\n            val = int(request.query['minutes']) * 60\n        elif 'hours' in request.query and request.query['hours'] != '':\n            val = int(request.query['hours']) * 60 * 60\n\n        if val < 0:\n            raise ValueError\n    except ValueError:\n        raise web.HTTPBadRequest(reason=\"Time must be a positive integer\")\n\n    # if no time units then NO AND_WHERE condition applied\n    if val == 0:\n        return where\n\n    payload = PayloadBuilder(where).AND_WHERE(['user_ts', 'newer', val]).chain_payload()\n    return payload\n\n\ndef where_window(request, where):\n    \"\"\" mostrecent case: the newer/older payload conditions only work with a datetime (now - seconds)\n        and there is no support for a BETWEEN operator.\n        For mostrecent functionality with back/forward buttons (a.k.a. the previous payload)\n        a workaround is implemented on the Python side, so no amendments are needed on the C payload side.\n        The client has to pass a UTC datetime string, in the format %Y-%m-%d %H:%M:%S.%f, in \"previous_ts\"\n        For example: 
/fledge/asset/randomwalk?mostrecent=TRUE&seconds=10&previous_ts=2023-08-01 06:32:36.515123\n        Payload:\n        {\"return\": [\"reading\", {\"column\": \"user_ts\", \"alias\": \"timestamp\", \"timezone\": \"utc\"}],\n        \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"randomwalk\",\n        \"and\": {\"column\": \"user_ts\", \"condition\": \"<=\", \"value\": \"2023-08-01 06:32:36.515123\",\n        \"and\": {\"column\": \"user_ts\", \"condition\": \">=\", \"value\": \"2023-08-01 06:32:26.515012\"}}},\n        \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}\n    \"\"\"\n    if 'mostrecent' in request.query and 'seconds' in request.query:\n        val = int(request.query['seconds'])\n        previous_str = request.query['previous_ts']\n        dt_format = '%Y-%m-%d %H:%M:%S.%f'\n        dt_obj = datetime.datetime.strptime(previous_str, dt_format)\n        dt_obj_diff = dt_obj - datetime.timedelta(seconds=val)\n        dt_str = dt_obj_diff.strftime(dt_format)\n        payload = PayloadBuilder(where).AND_WHERE(['user_ts', '<=', previous_str]).chain_payload()\n        return PayloadBuilder(payload).AND_WHERE(['user_ts', '>=', dt_str]).chain_payload()\n\n    val = 0\n    previous = 0\n    try:\n        if 'seconds' in request.query and request.query['seconds'] != '':\n            val = int(request.query['seconds'])\n            previous = int(request.query['previous'])\n        elif 'minutes' in request.query and request.query['minutes'] != '':\n            val = int(request.query['minutes']) * 60\n            previous = int(request.query['previous']) * 60\n        elif 'hours' in request.query and request.query['hours'] != '':\n            val = int(request.query['hours']) * 60 * 60\n            previous = int(request.query['previous']) * 60 * 60\n\n        if val < 0:\n            raise ValueError\n    except ValueError:\n        raise web.HTTPBadRequest(reason=\"Time must be a positive integer\")\n\n    # if no time 
units then NO AND_WHERE condition applied\n    if val == 0:\n        return where\n\n    payload = PayloadBuilder(where).AND_WHERE(['user_ts', 'newer', val + previous]).chain_payload()\n    return PayloadBuilder(payload).AND_WHERE(['user_ts', 'older', previous]).chain_payload()\n\n\nasync def asset_datapoints_with_bucket_size(request: web.Request) -> web.Response:\n    \"\"\" Retrieve datapoints for an asset.\n\n        If bucket_size is not given then the bucket size is 1\n        If start is not given then the start point is now - 60 seconds.\n        If length is not given then length is 60 seconds. The number of buckets returned is length / bucket_size\n        For multiple assets use comma-separated values in the request; this allows data from one or more assets to be returned.\n\n       :Example:\n               curl -sX GET http://localhost:8081/fledge/asset/{asset_code}/bucket/{bucket_size}\n               curl -sX GET http://localhost:8081/fledge/asset/{asset_code_1},{asset_code_2}/bucket/{bucket_size}\n       \"\"\"\n    try:\n        start_found = False\n        length_found = False\n        asset_code = request.match_info.get('asset_code', '')\n        bucket_size = request.match_info.get('bucket_size', 1)\n        length = 60\n\n        ts = datetime.datetime.timestamp(datetime.datetime.now())\n        start = ts - length\n        asset_code_list = asset_code.split(',')\n        _readings = connect.get_readings_async()\n\n        if 'start' in request.query and request.query['start'] != '':\n            try:\n                start = float(request.query['start'])\n                start_found = True\n            except Exception as e:\n                raise ValueError('Invalid value for start. 
Error: {}'.format(str(e)))\n\n        if 'length' in request.query and request.query['length'] != '':\n            length = float(request.query['length'])\n            if length < 0:\n                raise ValueError('length must be a positive integer')\n            length_found = True\n            # No user start parameter: decrease default start by the user provided length\n            if start_found is False:\n                start = ts - length\n\n        use_microseconds = False\n        # Check subsecond request in start\n        start_micros = \"{:.6f}\".format(start).split('.')[1]\n        if start_found is True and start_micros != '000000':\n            use_microseconds = True\n        else:\n            # No decimal part, check subsecond request in length\n            start_micros = \"{:.6f}\".format(length).split('.')[1]\n            if length_found is True and start_micros != '000000':\n                use_microseconds = True\n\n        # Build UTC datetime start/stop from start timestamp with/without microseconds\n        if use_microseconds is False:\n            start_date = datetime.datetime.fromtimestamp(start, datetime.timezone.utc).strftime(\"%Y-%m-%d %H:%M:%S\")\n            stop_date = datetime.datetime.fromtimestamp(start + length, datetime.timezone.utc).strftime(\"%Y-%m-%d %H:%M:%S\")\n        else:\n            start_date = datetime.datetime.fromtimestamp(start, datetime.timezone.utc).strftime(\"%Y-%m-%d %H:%M:%S.%f\")\n            stop_date = datetime.datetime.fromtimestamp(start + length, datetime.timezone.utc).strftime(\"%Y-%m-%d %H:%M:%S.%f\")\n\n        # Prepare payload\n        _aggregate = PayloadBuilder().AGGREGATE([\"all\"]).chain_payload()\n        _and_where = PayloadBuilder(_aggregate).WHERE([\"asset_code\", \"in\", asset_code_list]).AND_WHERE([\n            \"user_ts\", \">=\", str(start_date)], [\"user_ts\", \"<=\", str(stop_date)]).chain_payload()\n\n        _bucket = PayloadBuilder(_and_where).TIMEBUCKET('user_ts', 
bucket_size,\n                                                        'YYYY-MM-DD HH24:MI:SS', 'timestamp').chain_payload()\n\n        payload = PayloadBuilder(_bucket).LIMIT(int(length / float(bucket_size))).payload()\n\n        # Sort & timebucket modifiers cannot be used in the same payload\n        # payload = PayloadBuilder(limit).ORDER_BY([\"user_ts\", \"desc\"]).payload()\n        results = await _readings.query(payload)\n        response = results['rows']\n    except (KeyError, IndexError) as e:\n        raise web.HTTPNotFound(reason=str(e))\n    except (TypeError, ValueError) as e:\n        raise web.HTTPBadRequest(reason=str(e))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get {} asset datapoints with {} bucket size.\".format(asset_code, bucket_size))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(response)\n\n\nasync def asset_readings_with_bucket_size(request: web.Request) -> web.Response:\n    \"\"\" Retrieve readings for a single asset between two points in time.\n        These points are defined as a relative value in seconds back in time from the current time and a number of seconds worth of data.\n        For example: For asset XYZ from (now - 60) for 60 seconds to get a minute's worth of data from a minute in the past.\n        The samples returned are averages grouped over a period of time, known as a bucket size.\n        If 60 seconds worth of data is requested and a bucket size of 10 seconds is given then 6 values will be returned.\n        Each of those 6 readings is an average over a 10 second period.\n\n        If bucket_size is not given then the bucket size is 1\n        If start is not given then the start point is now - 60 seconds.\n        If length is not given then length is 60 seconds. 
The number of values returned is calculated as length / bucket_size\n\n       :Example:\n               curl -sX GET http://localhost:8081/fledge/asset/{asset_code}/{reading}/bucket/{bucket_size}\n               curl -sX GET http://localhost:8081/fledge/asset/{asset_code}/{reading}/bucket/{bucket_size}?start=<start point>\n               curl -sX GET http://localhost:8081/fledge/asset/{asset_code}/{reading}/bucket/{bucket_size}?length=<length>\n               curl -sX GET \"http://localhost:8081/fledge/asset/{asset_code}/{reading}/bucket/{bucket_size}?start=<start point>&length=<length>\"\n       \"\"\"\n    try:\n        start_found = False\n        asset_code = request.match_info.get('asset_code', '')\n        reading = request.match_info.get('reading', '')\n        bucket_size = request.match_info.get('bucket_size', 1)\n        length = 60\n        ts = datetime.datetime.now().timestamp()\n        start = ts - 60\n        _aggregate = PayloadBuilder().AGGREGATE([\"min\", [\"reading\", reading]], [\"max\", [\"reading\", reading]],\n                                                [\"avg\", [\"reading\", reading]]) \\\n            .ALIAS('aggregate', ('reading', 'min', 'min'), ('reading', 'max', 'max'),\n                   ('reading', 'avg', 'average')).chain_payload()\n        _readings = connect.get_readings_async()\n\n        if 'start' in request.query and request.query['start'] != '':\n            try:\n                start = float(request.query['start'])\n                datetime.datetime.fromtimestamp(start)\n                start_found = True\n            except Exception as e:\n                raise ValueError('Invalid value for start. 
Error: {}'.format(str(e)))\n\n        if 'length' in request.query and request.query['length'] != '':\n            length = int(request.query['length'])\n            if length < 0:\n                raise ValueError('length must be a positive integer')\n            # No user start parameter: decrease default start by the user provided length\n            if start_found is False:\n                start = ts - length\n\n        # Build datetime from timestamp\n        start_time = time.gmtime(start)\n        start_date = time.strftime(\"%Y-%m-%d %H:%M:%S\", start_time)\n        stop_time = time.gmtime(start + length)\n        stop_date = time.strftime(\"%Y-%m-%d %H:%M:%S\", stop_time)\n\n        # Prepare payload\n        _where = PayloadBuilder(_aggregate).WHERE([\"asset_code\", \"=\", asset_code]).AND_WHERE([\n            \"user_ts\", \">=\", str(start_date)], [\"user_ts\", \"<=\", str(stop_date)]).chain_payload()\n        _bucket = PayloadBuilder(_where).TIMEBUCKET('user_ts', bucket_size, 'YYYY-MM-DD HH24:MI:SS',\n                                                    'timestamp').chain_payload()\n\n        payload = PayloadBuilder(_bucket).LIMIT(int(length / int(bucket_size))).payload()\n\n        # Sort & timebucket modifiers cannot be used in the same payload\n        # payload = PayloadBuilder(limit).ORDER_BY([\"user_ts\", \"desc\"]).payload()\n        results = await _readings.query(payload)\n        response = results['rows']\n    except (KeyError, IndexError) as e:\n        raise web.HTTPNotFound(reason=e)\n    except (TypeError, ValueError) as e:\n        raise web.HTTPBadRequest(reason=e)\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get {} readings of {} asset with {} bucket size.\".format(\n            reading, asset_code, bucket_size))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return 
web.json_response(response)\n\n\nasync def asset_structure(request):\n    \"\"\" Browse all the assets for which we have recorded readings and\n    return the asset structure\n\n    Returns:\n           json result showing the asset structure\n\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/structure/asset\n            {\n              \"AX8\": {\n                \"datapoint\": {\n                  \"internal\": \"float\",\n                  \"spot1\": \"float\",\n                  \"minRPi\": \"float\",\n                  \"maxRPi\": \"float\",\n                  \"averageRPi\": \"float\",\n                  \"minBackupdisk\": \"float\",\n                  \"maxBackupdisk\": \"float\",\n                  \"averageBackupdisk\": \"float\",\n                  \"minCoral\": \"float\",\n                  \"maxCoral\": \"float\",\n                  \"averageCoral\": \"float\"\n                },\n                \"metadata\": {\n                  \"factory\": \"London\",\n                  \"line\": \"Line 4\",\n                  \"units\": \"Kelvin\"\n                }\n              }\n            }\n    \"\"\"\n    results = {}\n    try:\n        payload = PayloadBuilder().ORDER_BY([\"asset_code\"]).payload()\n        _readings = connect.get_readings_async()\n        results = await _readings.query(payload)\n        rows = results['rows']\n        asset_json = {}\n        for row in rows:\n            code = row['asset_code']\n            datapoint = {}\n            metadata = {}\n            for name, value in row['reading'].items():\n                if type(value) == str:\n                    if value == \"True\" or value == \"False\":\n                        datapoint[name] = \"boolean\"\n                    else:\n                        metadata[name] = value\n                elif type(value) == int:\n                    datapoint[name] = \"integer\"\n                elif type(value) == float:\n                    datapoint[name] = 
\"float\"\n            if len(metadata) > 0:\n                asset_json[code] = {'datapoint': datapoint, 'metadata': metadata}\n            else:\n                asset_json[code] = {'datapoint': datapoint}\n    except KeyError:\n        msg = results['message']\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get assets structure.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(asset_json)\n\n# The following two routines are not really browsing data but this is probably a logical\n# place to put them as they share the same URL stem\n\n\nasync def asset_purge_all(request):\n    \"\"\" Purge all the assets for which we have recorded readings\n\n    Returns:\n           json result with details of assets purge\n\n    :Example:\n            curl -sX DELETE http://localhost:8081/fledge/asset\n    \"\"\"\n    try:\n        _logger.warning(\"Manual purge of all assets has been requested.\")\n        # Call storage service\n        _readings = connect.get_readings_async()\n        # Get AuditLogger\n        from fledge.common.audit_logger import AuditLogger\n        _audit = AuditLogger(_readings)\n\n        start_time = time.strftime('%Y-%m-%d %H:%M:%S.%s', time.localtime(time.time()))\n\n        results = await _readings.purge(asset=\"\")\n\n        if 'purged' in results:\n            end_time = time.strftime('%Y-%m-%d %H:%M:%S.%s', time.localtime(time.time()))\n            await _audit.information('PURGE',\n                                     {\n                                         \"start_time\": start_time,\n                                         \"end_time\": end_time,\n                                         \"rowsRemoved\": results['purged']})\n    except KeyError:\n        msg = results['message']\n        raise 
web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to purge all assets.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(results)\n\n\nasync def asset_purge(request):\n    \"\"\" Purge a particular asset for which we have recorded readings\n    Returns:\n          json result details of purged asset\n\n    :Example:\n            curl -sX DELETE http://localhost:8081/fledge/asset/fogbench_humidity\n    \"\"\"\n    asset_code = request.match_info.get('asset_code', '')\n    _logger.warning(\"Manual purge of '{}' asset has been requested.\".format(asset_code))\n\n    try:\n        # Call storage service\n        _readings = connect.get_readings_async()\n        # Get AuditLogger\n        from fledge.common.audit_logger import AuditLogger\n        _audit = AuditLogger(_readings)\n\n        start_time = time.strftime('%Y-%m-%d %H:%M:%S.%s', time.localtime(time.time()))\n\n        results = await _readings.purge(asset=asset_code)\n\n        if 'purged' in results:\n            end_time = time.strftime('%Y-%m-%d %H:%M:%S.%s', time.localtime(time.time()))\n            await _audit.information('PURGE',\n                                     {\n                                         \"start_time\": start_time,\n                                         \"end_time\": end_time,\n                                         \"rowsRemoved\": results['purged'],\n                                         \"asset\": asset_code})\n    except KeyError:\n        msg = results['message']\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to purge {} asset.\".format(asset_code))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    
else:\n        return web.json_response(results)\n\n\nasync def asset_timespan(request):\n    \"\"\"\n    Return the timespan of the buffered readings for each asset. The returned data includes the timestamp\n    of the oldest and newest reading for each asset that we hold in the buffer.\n\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/asset/timespan\n    \"\"\"\n    try:\n        payload = PayloadBuilder().AGGREGATE([\"min\", \"user_ts\"], [\"max\", \"user_ts\"]).GROUP_BY(\"asset_code\") \\\n                .ALIAS('aggregate', ('user_ts', 'min', 'oldest'), ('user_ts', 'max', 'newest')).payload()\n        # Call storage service\n        _readings = connect.get_readings_async()\n        results = await _readings.query(payload)\n        response = results['rows']\n    except (KeyError, IndexError) as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except (TypeError, ValueError) as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to get timespan of buffered readings for assets.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(response)\n\n\nasync def asset_reading_timespan(request):\n    \"\"\"\n    Return the timespan of the buffered readings for the given asset. 
The returned data includes the timestamp\n    of the oldest and newest reading for the asset that we hold in the buffer\n\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/asset/sinusoid/timespan\n    \"\"\"\n    try:\n        asset_code = request.match_info.get('asset_code', '')\n        payload = PayloadBuilder().WHERE(\n            [\"asset_code\", \"=\", asset_code]).AGGREGATE([\"min\", \"user_ts\"], [\"max\", \"user_ts\"]).ALIAS(\n            'aggregate', ('user_ts', 'min', 'oldest'), ('user_ts', 'max', 'newest')).payload()\n        # Call storage service\n        _readings = connect.get_readings_async()\n        results = await _readings.query(payload)\n        response = results['rows'][0]\n    except (KeyError, IndexError) as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except (TypeError, ValueError) as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _logger.error(exc, \"Failed to get timespan of buffered readings for {} asset.\".format(asset_code))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(response)\n"
  },
  {
    "path": "python/fledge/services/core/api/certificate_store.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport os\nimport json\n\nfrom aiohttp import web\n\nfrom fledge.common.common import _FLEDGE_ROOT, _FLEDGE_DATA\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.web.middleware import has_permission\nfrom fledge.services.core import connect\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n_help = \"\"\"\n    -------------------------------------------------------------------------------\n    | GET POST         | /fledge/certificate                                     |\n    | DELETE           | /fledge/certificate/{name}                              |\n    -------------------------------------------------------------------------------\n\"\"\"\nFORBIDDEN_MSG = 'Resource you were trying to reach is absolutely forbidden for some reason.'\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nasync def get_certs(request):\n    \"\"\" Get the list of certs\n\n    :Example:\n        curl -X GET http://localhost:8081/fledge/certificate\n    \"\"\"\n    certs = []\n    keys = []\n\n    key_valid_extensions = ('.key', '.pem')\n    short_cert_name_valid_extensions = ('.cert', '.cer', '.csr', '.crl', '.crt', '.der', '.p12', '.pfx')\n    certs_root_dir = _get_certs_dir('/etc/certs')\n    for root, dirs, files in os.walk(certs_root_dir):\n        if not root.endswith((\"pem\", \"json\")):\n            for f in files:\n                if f.endswith(short_cert_name_valid_extensions):\n                    certs.append(f)\n                if f.endswith(key_valid_extensions):\n                    keys.append(f)\n\n    json_certs_path = _get_certs_dir('/etc/certs/json')\n    json_cert_files = os.listdir(json_certs_path)\n    json_certs = [f for f in json_cert_files if 
f.endswith('.json')]\n    certs += json_certs\n\n    pem_certs_path = _get_certs_dir('/etc/certs/pem')\n    pem_cert_files = os.listdir(pem_certs_path)\n    pem_certs = [f for f in pem_cert_files if f.endswith('.pem')]\n    certs += pem_certs\n\n    return web.json_response({\"certs\": certs, \"keys\": keys})\n\n\nasync def upload(request):\n    \"\"\" Upload a certificate\n\n    :Example:\n        curl -F \"cert=@filename.pem\" http://localhost:8081/fledge/certificate\n        curl -F \"cert=@filename.json\" http://localhost:8081/fledge/certificate\n        curl -F \"key=@filename.pem\" -F \"cert=@filename.pem\" http://localhost:8081/fledge/certificate\n        curl -F \"key=@filename.key\" -F \"cert=@filename.json\" http://localhost:8081/fledge/certificate\n        curl -F \"key=@filename.key\" -F \"cert=@filename.cert\" http://localhost:8081/fledge/certificate\n        curl -F \"cert=@filename.cert\" http://localhost:8081/fledge/certificate\n        curl -F \"cert=@filename.cer\" http://localhost:8081/fledge/certificate\n        curl -F \"cert=@filename.csr\" http://localhost:8081/fledge/certificate\n        curl -F \"cert=@filename.crl\" http://localhost:8081/fledge/certificate\n        curl -F \"cert=@filename.crt\" http://localhost:8081/fledge/certificate\n        curl -F \"cert=@filename.der\" http://localhost:8081/fledge/certificate\n        curl -F \"cert=@filename.p12\" http://localhost:8081/fledge/certificate\n        curl -F \"cert=@filename.pfx\" http://localhost:8081/fledge/certificate\n        curl -F \"key=@filename.key\" -F \"cert=@filename.cert\" -F \"overwrite=1\" http://localhost:8081/fledge/certificate\n    \"\"\"\n    data = await request.post()\n    # contains the name of the file in string format\n    key_file = data.get('key')\n    cert_file = data.get('cert')\n    allow_overwrite = data.get('overwrite', '0')\n    # accepted values for overwrite are '0 and 1'\n    should_overwrite = False\n    if allow_overwrite in ('0', '1'):\n        
should_overwrite = True if int(allow_overwrite) == 1 else False\n    else:\n        raise web.HTTPBadRequest(reason=\"Accepted value for overwrite is 0 or 1\")\n    \n    if not cert_file:\n        raise web.HTTPBadRequest(reason=\"Cert file is missing\")\n\n    cert_filename = cert_file.filename\n\n    # default installed auth certs/keys: disallow overwrite unless an admin user requests it\n    if cert_filename in ['admin.cert', 'admin.key', 'user.cert', 'user.key', 'fledge.key', 'fledge.cert', 'ca.key',\n                         'ca.cert']:\n        if request.is_auth_optional:\n            _logger.warning(FORBIDDEN_MSG)\n            raise web.HTTPForbidden(reason=FORBIDDEN_MSG, body=json.dumps({\"message\": FORBIDDEN_MSG}))\n        else:\n            if not request.user_is_admin:\n                msg = \"admin role permissions are required to overwrite the default installed auth/TLS certificates.\"\n                _logger.warning(msg)\n                raise web.HTTPForbidden(reason=msg, body=json.dumps({\"message\": msg}))\n    # note.. 
We are not checking whether HTTPS or an auth mechanism is enabled.\n    # Here, in a secured instance, we simply disallow a non-admin user to overwrite/import the configured TLS/CA certificates\n    if request.user and not request.user_is_admin:\n        cf_mgr = ConfigurationManager(connect.get_storage_async())\n        cat = await cf_mgr.get_category_all_items(category_name='rest_api')\n        configured_ca_and_tls_certs = [cat['certificateName']['value'], cat['authCertificateName']['value']]\n        if cert_filename and cert_filename.rpartition('.')[0] in configured_ca_and_tls_certs:  # we better disallow any extension with those names instead of [1]/endswith .cert\n            msg = 'Certificate with name {} is configured to be used; ' \\\n                  '`admin` role permissions are required to add/overwrite it.'.format(cert_filename)\n            _logger.warning(msg)\n            raise web.HTTPForbidden(reason=msg, body=json.dumps({\"message\": msg}))\n\n    key_valid_extensions = ('.key', '.pem')\n    cert_valid_extensions = ('.cert', '.cer', '.csr', '.crl', '.crt', '.der', '.json', '.pem', '.p12', '.pfx')\n\n    key_filename = None\n    if key_file:\n        key_filename = key_file.filename\n        if not key_filename.endswith(key_valid_extensions):\n            raise web.HTTPBadRequest(reason=\"Accepted file extensions are {} for key file\".format(key_valid_extensions))\n\n    if not cert_filename.endswith(cert_valid_extensions):\n        raise web.HTTPBadRequest(reason=\"Accepted file extensions are {} for cert file\".format(cert_valid_extensions))\n\n    certs_dir = _get_certs_dir('/etc/certs/')\n    if cert_filename.endswith('.pem'):\n        certs_dir = _get_certs_dir('/etc/certs/pem')\n    if cert_filename.endswith('.json'):\n        certs_dir = _get_certs_dir('/etc/certs/json')\n\n    is_found = True if len(_find_file(cert_filename, certs_dir)) else False\n    if is_found and should_overwrite is False:\n        raise web.HTTPBadRequest(reason=\"Certificate 
with the same name already exists! \"\n                                        \"To overwrite, set the overwrite flag\")\n    if key_file:\n        key_file_found = True if len(_find_file(key_filename, _get_certs_dir('/etc/certs/'))) else False\n        if key_file_found and should_overwrite is False:\n            raise web.HTTPBadRequest(reason=\"Key cert with the same name already exists. \"\n                                            \"To overwrite, set the overwrite flag\")\n    if cert_file:\n        cert_file_data = data['cert'].file\n        cert_file_content = cert_file_data.read()\n        cert_file_path = str(certs_dir) + '/{}'.format(cert_filename)\n        with open(cert_file_path, 'wb') as f:\n            f.write(cert_file_content)\n    if key_file:\n        key_file_data = data['key'].file\n        key_file_content = key_file_data.read()\n        key_file_path = str(_get_certs_dir('/etc/certs/')) + '/{}'.format(key_filename)\n        with open(key_file_path, 'wb') as f:\n            f.write(key_file_content)\n\n    # in order to bring this new cert usage into effect, make sure to\n    # update config for category rest_api\n    # and restart for TLS\n    msg = \"{} has been uploaded successfully\".format(cert_filename)\n    if key_file:\n        msg = \"{} and {} have been uploaded successfully\".format(key_filename, cert_filename)\n    return web.json_response({\"result\": msg})\n\n\n@has_permission(\"admin\")\nasync def delete_certificate(request):\n    \"\"\" Delete a certificate\n\n    :Example:\n          curl -X DELETE http://localhost:8081/fledge/certificate/user.key\n          curl -X DELETE http://localhost:8081/fledge/certificate/user.cert\n          curl -X DELETE http://localhost:8081/fledge/certificate/filename.cer\n          curl -X DELETE http://localhost:8081/fledge/certificate/filename.csr\n          curl -X DELETE http://localhost:8081/fledge/certificate/filename.crl\n          curl -X DELETE 
http://localhost:8081/fledge/certificate/filename.crt\n          curl -sX DELETE http://localhost:8081/fledge/certificate/filename.der\n          curl -X DELETE http://localhost:8081/fledge/certificate/filename.p12\n          curl -X DELETE http://localhost:8081/fledge/certificate/filename.pfx\n          curl -X DELETE http://localhost:8081/fledge/certificate/fledge.json?type=cert\n          curl -X DELETE http://localhost:8081/fledge/certificate/fledge.pem?type=cert\n          curl -X DELETE http://localhost:8081/fledge/certificate/fledge.pem\n          curl -X DELETE http://localhost:8081/fledge/certificate/fledge.pem?type=key\n    \"\"\"\n    file_name = request.match_info.get('name', None)\n    valid_extensions = ('.cert', '.cer', '.csr', '.crl', '.crt', '.der', '.json', '.key', '.pem', '.p12', '.pfx')\n\n    if not file_name.endswith(valid_extensions):\n        msg = \"Accepted file extensions are {}\".format(valid_extensions)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    if file_name in ['admin.cert', 'user.cert', 'fledge.key', 'fledge.cert', 'ca.key', 'ca.cert']:\n        if request.is_auth_optional:\n            _logger.warning(FORBIDDEN_MSG)\n            raise web.HTTPForbidden(reason=FORBIDDEN_MSG, body=json.dumps({\"message\": FORBIDDEN_MSG}))\n    \n    cf_mgr = ConfigurationManager(connect.get_storage_async())\n    cat = await cf_mgr.get_category_all_items(category_name='rest_api')\n    configured_ca_and_tls_certs = [cat['certificateName']['value'], cat['authCertificateName']['value']]\n    if file_name and file_name.rpartition('.')[0] in configured_ca_and_tls_certs:\n        # check if cert_name is currently set for 'certificateName' or authCertificateName in config for 'rest_api'\n        msg = 'Certificate with name {} is configured for use; it cannot be deleted, but it can be overwritten if required.'.format(\n            file_name)\n        raise web.HTTPConflict(reason=msg, body=json.dumps({\"message\": msg}))\n\n    
_type = None\n    if 'type' in request.query and request.query['type'] != '':\n        _type = request.query['type']\n        if _type not in ['cert', 'key']:\n            msg = \"Only cert and key are allowed for the value of type param\"\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n\n    certs_dir = _get_certs_dir('/etc/certs/')\n    is_found = False\n    cert_path = list()\n\n    if _type and _type == 'cert':\n        short_cert_name_valid_extensions = ('.cert', '.cer', '.csr', '.crl', '.crt', '.der', '.p12', '.pfx')\n        if not file_name.endswith(short_cert_name_valid_extensions):\n            if os.path.isfile(certs_dir + 'pem/' + file_name):\n                is_found = True\n                cert_path = [certs_dir + 'pem/' + file_name]\n            if os.path.isfile(certs_dir + 'json/' + file_name):\n                is_found = True\n                cert_path = [certs_dir + 'json/' + file_name]\n        else:\n            if os.path.isfile(certs_dir + file_name):\n                is_found = True\n                cert_path = [certs_dir + file_name]\n\n    if _type and _type == 'key':\n        if os.path.isfile(certs_dir + file_name):\n            is_found = True\n            cert_path = [certs_dir + file_name]\n\n    if _type is None:\n        for root, dirs, files in os.walk(certs_dir):\n            if root.endswith('json'):\n                for f in files:\n                    if file_name == f:\n                        is_found = True\n                        cert_path.append(certs_dir + 'json/' + file_name)\n                        files.remove(f)\n            if root.endswith('pem'):\n                for f in files:\n                    if file_name == f:\n                        is_found = True\n                        cert_path.append(certs_dir + 'pem/' + file_name)\n                        files.remove(f)\n            for f in files:\n                if file_name == f:\n                    is_found = True\n  
                  cert_path.append(certs_dir + file_name)\n\n    if not is_found:\n        msg = 'Certificate with name {} does not exist'.format(file_name)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n\n    # Remove file\n    for fp in cert_path:\n        os.remove(fp)\n    return web.json_response({'result': \"{} has been deleted successfully\".format(file_name)})\n\n\ndef _get_certs_dir(_path):\n    dir_path = _FLEDGE_DATA + _path if _FLEDGE_DATA else _FLEDGE_ROOT + '/data' + _path\n    if not os.path.exists(dir_path):\n        os.makedirs(dir_path)\n    certs_dir = os.path.expanduser(dir_path)\n    return certs_dir\n\n\ndef _find_file(name, path):\n    fl = list()\n    for root, dirs, files in os.walk(path):\n        if name in files:\n            fl.append(os.path.join(root, name))\n    return fl\n"
  },
  {
    "path": "python/fledge/services/core/api/common.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport asyncio\nimport time\nimport json\nimport logging\nimport socket\nimport subprocess\n\nfrom aiohttp import web\nfrom functools import lru_cache\n\nfrom fledge.common import logger\nfrom fledge.services.core import server\nfrom fledge.services.core.api.statistics import get_statistics\nfrom fledge.services.core import connect\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.common.common import _FLEDGE_ROOT\n\n__author__ = \"Amarendra K. Sinha, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n__start_time = time.time()\n\n_logger = logger.setup(__name__, level=logging.INFO)\n\n_help = \"\"\"\n    -------------------------------------------------------------------------------\n    | GET             | /fledge/ping                                             |\n    | PUT             | /fledge/shutdown                                         |\n    | PUT             | /fledge/restart                                          |\n    -------------------------------------------------------------------------------\n\"\"\"\n\n\n@lru_cache(maxsize=1, typed=True)\ndef get_version() -> str:\n    with open(_FLEDGE_ROOT + '/VERSION') as f:\n        # Read only the first line of a VERSION file and grab the release version number\n        return f.readline().split('=')[1].strip()\n\n\nasync def ping(request):\n    \"\"\"\n    Args:\n       request:\n\n    Returns:\n           basic health information json payload\n\n    :Example:\n           curl -X GET http://localhost:8081/fledge/ping\n    \"\"\"\n\n    try:\n        auth_token = request.token\n    except AttributeError:\n        if request.is_auth_optional 
is False:\n            cfg_mgr = ConfigurationManager(connect.get_storage_async())\n            category_item = await cfg_mgr.get_category_item('rest_api', 'allowPing')\n            allow_ping = True if category_item['value'].lower() == 'true' else False\n            if allow_ping is False:\n                _logger.warning(\"A valid token required to ping; as auth is mandatory & allow ping is set to false.\")\n                raise web.HTTPUnauthorized()\n\n    since_started = time.time() - __start_time\n\n    stats_request = request.clone(rel_url='fledge/statistics')\n    data_read, data_sent, data_purged = await get_stats(stats_request)\n\n    host_name = socket.gethostname()\n    # all addresses for the host\n    all_ip_addresses_cmd_res = subprocess.run(['hostname', '-I'], stdout=subprocess.PIPE)\n    ip_addresses = all_ip_addresses_cmd_res.stdout.decode('utf-8').replace(\"\\n\", \"\").strip().split(\" \")\n\n    svc_name = server.Server._service_name\n\n    def services_health_litmus_test():\n        all_svc_status = [ServiceRecord.Status(int(service_record._status)).name.upper()\n                          for service_record in ServiceRegistry.all()]\n        if 'FAILED' in all_svc_status:\n            return 'red'\n        elif 'UNRESPONSIVE' in all_svc_status:\n            return 'amber'\n        return 'green'\n\n    status_color = services_health_litmus_test()\n    safe_mode = True\n    alert_count = 0\n    if not server.Server.running_in_safe_mode:\n        safe_mode = False\n        alert_count = len(server.Server._alert_manager.alerts)\n    version = get_version()\n    return web.json_response({'uptime': int(since_started),\n                              'dataRead': data_read,\n                              'dataSent': data_sent,\n                              'dataPurged': data_purged,\n                              'authenticationOptional': request.is_auth_optional,\n                              'serviceName': svc_name,\n                              
'hostName': host_name,\n                              'ipAddresses': ip_addresses,\n                              'health': status_color,\n                              'safeMode': safe_mode,\n                              'version': version,\n                              'alerts': alert_count\n                              })\n\n\nasync def get_stats(req):\n    \"\"\"\n    :param req: a clone of 'fledge/statistics' endpoint request\n    :return:  data_read, data_sent, data_purged\n    \"\"\"\n\n    res = await get_statistics(req)\n    stats = json.loads(res.body.decode())\n\n    def filter_stat(k):\n\n        \"\"\"\n        There are no statistics for 'Readings Sent' at the start of Fledge,\n        so the IndexError is caught and 0 is returned to avoid an\n        'index out of range' error when the ping API is called.\n        \"\"\"\n        try:\n            v = [s['value'] for s in stats if s['key'] == k]\n            value = int(v[0])\n        except IndexError:\n            value = 0\n\n        return value\n\n    data_read = filter_stat('READINGS')\n    data_sent = filter_stat('Readings Sent')\n    data_purged = filter_stat('PURGED')\n\n    return data_read, data_sent, data_purged\n\n\nasync def shutdown(request):\n    \"\"\"\n    Args:\n        request:\n\n    Returns:\n\n    :Example:\n            curl -X PUT http://localhost:8081/fledge/shutdown\n    \"\"\"\n\n    try:\n        loop = request.loop\n        loop.call_later(2, do_shutdown, request)\n        return web.json_response({'message': 'Fledge shutdown has been scheduled. 
'\n                                             'Wait for a few seconds for process cleanup.'})\n    except TimeoutError as err:\n        raise web.HTTPRequestTimeout(reason=str(err))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Error while stopping Fledge server.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n\ndef do_shutdown(request):\n    _logger.info(\"Executing controlled shutdown\")\n    try:\n        loop = request.loop\n        asyncio.ensure_future(server.Server.shutdown(request), loop=loop)\n    except RuntimeError as e:\n        _logger.error(e, \"Error while stopping Fledge server.\")\n        raise\n\n\nasync def restart(request):\n    \"\"\"\n    :Example:\n            curl -X PUT http://localhost:8081/fledge/restart\n    \"\"\"\n\n    try:\n        _logger.info(\"Executing controlled shutdown and start\")\n        asyncio.ensure_future(server.Server.restart(request), loop=request.loop)\n        return web.json_response({'message': 'Fledge restart has been scheduled.'})\n    except TimeoutError as err:\n        msg = str(err)\n        raise web.HTTPRequestTimeout(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Error while restarting Fledge server.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n"
  },
  {
    "path": "python/fledge/services/core/api/configuration.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport os\nimport json\nimport copy\nimport binascii\nimport urllib.parse\nfrom typing import Dict\nfrom aiohttp import web\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.common import _FLEDGE_ROOT, _FLEDGE_DATA\nfrom fledge.common.configuration_manager import ConfigurationManager, _optional_items\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.services.core import connect\n\n\n__author__ = \"Amarendra K. Sinha, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    --------------------------------------------------------------------------------\n    | GET POST       | /fledge/category                                           |\n    | GET PUT DELETE | /fledge/category/{category_name}                           |\n    | GET POST PUT   | /fledge/category/{category_name}/{config_item}             |\n    | DELETE         | /fledge/category/{category_name}/{config_item}/value       |\n    | POST           | /fledge/category/{category_name}/{config_item}/upload      |\n    | GET POST       | /fledge/category/{category_name}/children                  |\n    | DELETE         | /fledge/category/{category_name}/children/{child_category} |\n    | DELETE         | /fledge/category/{category_name}/parent                    |\n    --------------------------------------------------------------------------------\n\"\"\"\n\nscript_dir = _FLEDGE_DATA + '/scripts/' if _FLEDGE_DATA else _FLEDGE_ROOT + \"/data/scripts/\"\n_logger = FLCoreLogger().get_logger(__name__)\n\n#################################\n#  Configuration Manager\n#################################\n\n\nasync def get_categories(request):\n    \"\"\"\n    Args:\n         request:\n\n    Returns:\n      
      the list of known categories in the configuration database\n\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/category\n            curl -sX GET http://localhost:8081/fledge/category?root=true\n            curl -sX GET 'http://localhost:8081/fledge/category?root=true&children=true'\n    \"\"\"\n    cf_mgr = ConfigurationManager(connect.get_storage_async())\n\n    if 'root' in request.query and request.query['root'].lower() in ['true', 'false']:\n        is_root = True if request.query['root'].lower() == 'true' else False\n        # to get nested categories, if children is true\n        is_children = True if 'children' in request.query and request.query['children'].lower() == 'true' else False\n        if is_children:\n            categories_json = await cf_mgr.get_all_category_names(root=is_root, children=is_children)\n        else:\n            categories = await cf_mgr.get_all_category_names(root=is_root)\n            categories_json = [{\"key\": c[0], \"description\": c[1], \"displayName\": c[2]} for c in categories]\n    else:\n        categories = await cf_mgr.get_all_category_names()\n        categories_json = [{\"key\": c[0], \"description\": c[1], \"displayName\": c[2]} for c in categories]\n\n    return web.json_response({'categories': categories_json})\n\n\nasync def get_category(request):\n    \"\"\"\n    Args:\n         request: category_name is required\n\n    Returns:\n            the configuration items in the given category.\n\n    :Example:\n            curl -X GET http://localhost:8081/fledge/category/PURGE_READ\n    \"\"\"\n    category_name = request.match_info.get('category_name', None)\n    category_name = urllib.parse.unquote(category_name) if category_name is not None else None\n\n    cf_mgr = ConfigurationManager(connect.get_storage_async())\n    category = await cf_mgr.get_category_all_items(category_name)\n\n    if category is None:\n        raise web.HTTPNotFound(reason=\"No such Category found for 
{}\".format(category_name))\n\n    try:\n        request.is_core_mgt\n    except AttributeError:\n        category = hide_password(category)\n\n    return web.json_response(category)\n\n\nasync def create_category(request):\n    \"\"\"\n    Args:\n         request: A JSON object that defines the category\n\n    Returns:\n            category info\n\n    :Example:\n            curl -d '{\"key\": \"TEST\", \"description\": \"description\", \"value\": {\"info\": {\"description\": \"Test\", \"type\": \"boolean\", \"default\": \"true\"}}}' -X POST http://localhost:8081/fledge/category\n            curl -d '{\"key\": \"TEST\", \"description\": \"description\", \"display_name\": \"Display test\", \"value\": {\"info\": {\"description\": \"Test\", \"type\": \"boolean\", \"default\": \"true\"}}}' -X POST http://localhost:8081/fledge/category\n            curl -d '{\"key\": \"TEST\", \"description\": \"description\", \"value\": {\"info\": {\"description\": \"Test\", \"type\": \"boolean\", \"default\": \"true\"}}, \"children\":[\"child1\", \"child2\"]}' -X POST http://localhost:8081/fledge/category\n    \"\"\"\n    keep_original_items = None\n    if 'keep_original_items' in request.query and request.query['keep_original_items'] != '':\n        keep_original_items = request.query['keep_original_items'].lower()\n        if keep_original_items not in ['true', 'false']:\n            raise ValueError(\"Only 'true' and 'false' are allowed for keep_original_items. 
{} given.\".format(keep_original_items))\n\n    try:\n        cf_mgr = ConfigurationManager(connect.get_storage_async())\n        data = await request.json()\n        if not isinstance(data, dict):\n            raise ValueError('Data payload must be a dictionary')\n\n        valid_post_keys = ['key', 'description', 'value']\n        for k in valid_post_keys:\n            if k not in list(data.keys()):\n                raise KeyError(\"'{}' param required to create a category\".format(k))\n\n        category_name = data.get('key')\n        category_desc = data.get('description')\n        category_value = data.get('value')\n        category_display_name = data.get('display_name')\n        should_keep_original_items = True if keep_original_items == 'true' else False\n        if not len(category_name.strip()):\n            raise ValueError('Key should not be empty')\n        if category_display_name is not None:\n            if not len(category_display_name.strip()):\n                category_display_name = category_name\n\n        await cf_mgr.create_category(category_name=category_name, category_description=category_desc,\n                                     category_value=category_value, display_name=category_display_name, keep_original_items=should_keep_original_items)\n\n        category_info = await cf_mgr.get_category_all_items(category_name=category_name)\n        if category_info is None:\n            raise LookupError('No such %s found' % category_name)\n        result = {\"key\": category_name, \"description\": category_desc, \"value\": category_info, \"displayName\": cf_mgr._cacheManager.cache[category_name]['displayName']}\n        if data.get('children'):\n            r = await cf_mgr.create_child_category(category_name, data.get('children'))\n            result.update(r)\n        try:\n            request.is_core_mgt\n        except AttributeError:\n            result['value'] = hide_password(result['value'])\n    except (KeyError, ValueError, 
TypeError) as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except LookupError as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to create category.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    return web.json_response(result)\n\n\nasync def delete_category(request):\n    \"\"\"\n    Args:\n         request: category_name required\n    Returns:\n        Success message on successful deletion \n    Raises:\n        TypeError/ValueError/Exception on error\n    :Example:\n            curl -X DELETE http://localhost:8081/fledge/category/{category_name}\n    \"\"\"\n    category_name = request.match_info.get('category_name', None)\n    category_name = urllib.parse.unquote(category_name) if category_name is not None else None\n\n    try:\n        cf_mgr = ConfigurationManager(connect.get_storage_async())\n        await cf_mgr.delete_category_and_children_recursively(category_name)\n    except (ValueError, TypeError) as ex:\n        msg = str(ex)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to delete {} category.\".format(category_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'result': 'Category {} deleted successfully.'.format(category_name)})\n\n\nasync def get_category_item(request):\n    \"\"\"\n    Args:\n         request: category_name & config_item are required\n\n    Returns:\n            the configuration item in the given category.\n\n    :Example:\n            curl -X GET http://localhost:8081/fledge/category/PURGE_READ/age\n    \"\"\"\n    category_name = request.match_info.get('category_name', None)\n    config_item = request.match_info.get('config_item', None)\n\n    
category_name = urllib.parse.unquote(category_name) if category_name is not None else None\n    config_item = urllib.parse.unquote(config_item) if config_item is not None else None\n\n    cf_mgr = ConfigurationManager(connect.get_storage_async())\n    category_item = await cf_mgr.get_category_item(category_name, config_item)\n    if category_item is None:\n        raise web.HTTPNotFound(reason=\"No such Category item found for {}\".format(config_item))\n\n    try:\n        request.is_core_mgt\n    except AttributeError:\n        category_item = hide_password(category_item)\n\n    return web.json_response(category_item)\n\n\nasync def set_configuration_item(request):\n    \"\"\"\n    Args:\n         request: category_name, config_item, [{\"value\" : \"<some value>\"} OR {\"optional_key\": \"some value\"}] are required\n\n    Returns:\n            set the configuration item value in the given category.\n\n    :Example:\n        curl -X PUT -H \"Content-Type: application/json\" -d '{\"value\": \"<some value>\" }' http://localhost:8081/fledge/category/{category_name}/{config_item}\n        For {category_name}=>PURGE update value for {config_item}=>age\n\n        curl -X PUT -H \"Content-Type: application/json\" -d '{\"value\": \"24\"}' http://localhost:8081/fledge/category/PURGE_READ/age\n        curl -X PUT -H \"Content-Type: application/json\" -d '{\"displayName\": \"Age\"}' http://localhost:8081/fledge/category/PURGE_READ/age\n    \"\"\"\n    category_name = request.match_info.get('category_name', None)\n    config_item = request.match_info.get('config_item', None)\n    category_name = urllib.parse.unquote(category_name) if category_name is not None else None\n    config_item = urllib.parse.unquote(config_item) if config_item is not None else None\n    data = await request.json()\n    cf_mgr = ConfigurationManager(connect.get_storage_async())\n    found_optional = {}\n    # if multiple param keys in data and if value key is found, then value update for config item 
will be tried first\n    # otherwise it will look for an optional key to update\n    try:\n        value = data['value']\n        if isinstance(value, dict):\n            pass\n        elif not isinstance(value, str):\n            raise web.HTTPBadRequest(reason='{} should be a string literal, in double quotes'.format(value))\n    except KeyError:\n        for k, v in data.items():\n            # if multiple optional keys are found, only the first one encountered is updated\n            if k in _optional_items:\n                found_optional = {k: v}\n                break\n        if not found_optional:\n            raise web.HTTPBadRequest(reason='Missing required value for {}'.format(config_item))\n    try:\n        if not found_optional:\n            try:\n                is_core_mgt = request.is_core_mgt\n            except AttributeError:\n                storage_value_entry = await cf_mgr.get_category_item(category_name, config_item)\n                if storage_value_entry is None:\n                    raise ValueError(\"No detail found for the category_name: {} and item_name: {}\"\n                               .format(category_name, config_item))\n                if 'readonly' in storage_value_entry:\n                    if storage_value_entry['readonly'] == 'true':\n                        raise TypeError(\"Update not allowed for {} item_name as it has readonly attribute set\".format(config_item))\n            request_details = request if hasattr(request, \"user\") else None\n            await cf_mgr.set_category_item_value_entry(category_name, config_item, value, request=request_details)\n        else:\n            await cf_mgr.set_optional_value_entry(category_name, config_item, list(found_optional.keys())[0], list(found_optional.values())[0])\n    except ValueError as err:\n        raise web.HTTPNotFound(reason=str(err)) if not found_optional else web.HTTPBadRequest(reason=str(err))\n    except (TypeError, KeyError) as err:\n        raise 
web.HTTPBadRequest(reason=str(err))\n    except Exception as ex:\n        msg = str(ex)\n        if 'Forbidden' in msg:\n            msg = \"Insufficient access privileges to change the value for '{}' category and '{}' config item.\".format(\n                category_name, config_item)\n            _logger.warning(msg)\n            raise web.HTTPForbidden(reason=msg, body=json.dumps({\"message\": msg}))\n        _logger.error(ex, \"Failed to update the value for '{}' config item of '{}' category.\".format(config_item, category_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    category_item = await cf_mgr.get_category_item(category_name, config_item)\n    if category_item is None:\n        raise web.HTTPNotFound(reason=\"No detail found for the category_name: {} and config_item: {}\".format(category_name, config_item))\n\n    try:\n        request.is_core_mgt\n    except AttributeError:\n        category_item = hide_password(category_item)\n\n    return web.json_response(category_item)\n\n\nasync def update_configuration_item_bulk(request):\n    \"\"\" Bulk update config items\n\n     :Example:\n        curl -X PUT -H \"Content-Type: application/json\" -d '{\"config_item_key\": \"<some value>\", \"config_item2_key\": \"<some value>\" }' http://localhost:8081/fledge/category/{category_name}\n    \"\"\"\n    category_name = request.match_info.get('category_name', None)\n    category_name = urllib.parse.unquote(category_name) if category_name is not None else None\n    try:\n        data = await request.json()\n        if not data:\n            return web.HTTPBadRequest(reason='Nothing to update')\n        request_details = request if hasattr(request, \"user\") else None\n        cf_mgr = ConfigurationManager(connect.get_storage_async())\n        try:\n            is_core_mgt = request.is_core_mgt\n        except AttributeError:\n            for item_name, new_val in data.items():\n                storage_value_entry = await cf_mgr.get_category_item(category_name, item_name)\n                if storage_value_entry is None:\n                    raise KeyError(\"{} config item not found\".format(item_name))\n                else:\n        
            if 'readonly' in storage_value_entry:\n                        if storage_value_entry['readonly'] == 'true':\n                            raise TypeError(\n                                \"Bulk update not allowed for {} item_name as it has readonly attribute set\".format(item_name))\n        await cf_mgr.update_configuration_item_bulk(category_name, data, request_details)\n    except (NameError, KeyError) as err:\n        raise web.HTTPNotFound(reason=str(err))\n    except (ValueError, TypeError) as err:\n        raise web.HTTPBadRequest(reason=str(err))\n    except Exception as ex:\n        msg = str(ex)\n        if 'Forbidden' in msg:\n            msg = \"Insufficient access privileges to change the value for given data for '{}' category.\".format(\n                category_name)\n            _logger.warning(msg)\n            raise web.HTTPForbidden(reason=msg, body=json.dumps({\"message\": msg}))\n        else:\n            _logger.error(ex, \"Failed to bulk update {} category.\".format(category_name))\n            raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        cat = await cf_mgr.get_category_all_items(category_name)\n        try:\n            request.is_core_mgt\n        except AttributeError:\n            cat = hide_password(cat)\n        return web.json_response(cat)\n\n\nasync def add_configuration_item(request):\n    \"\"\"\n    Args:\n        request: A JSON object that defines the config item and has key-pair\n                 (default, type, description, value[optional])\n\n    Returns:\n        Json response with message key\n\n    :Example:\n        curl -d '{\"default\": \"true\", \"description\": \"Test description\", \"type\": \"boolean\"}' -X POST https://localhost:1995/fledge/category/{category_name}/{new_config_item} --insecure\n        curl -d '{\"default\": \"true\", \"description\": \"Test description\", \"type\": 
\"boolean\", \"value\": \"false\"}' -X POST https://localhost:1995/fledge/category/{category_name}/{new_config_item} --insecure\n    \"\"\"\n    category_name = request.match_info.get('category_name', None)\n    new_config_item = request.match_info.get('config_item', None)\n\n    category_name = urllib.parse.unquote(category_name) if category_name is not None else None\n    new_config_item = urllib.parse.unquote(new_config_item) if new_config_item is not None else None\n\n    try:\n        storage_client = connect.get_storage_async()\n        cf_mgr = ConfigurationManager(storage_client)\n\n        data = await request.json()\n        if not isinstance(data, dict):\n            raise ValueError('Data payload must be a dictionary')\n\n        # if value key is in data then go ahead with data payload and validate\n        # else update the data payload with value key and set its value to default value and validate\n        val = data.get('value', None)\n        if val is None:\n            data.update({'value': data.get('default')})\n        config_item_dict = {new_config_item: data}\n\n        # validate configuration category value\n        await cf_mgr._validate_category_val(category_name=category_name, category_val=config_item_dict, set_value_val_from_default_val=False)\n\n        # validate category\n        category = await cf_mgr.get_category_all_items(category_name)\n        if category is None:\n            raise NameError(\"No such Category found for {}\".format(category_name))\n\n        # check if config item is already in use\n        if new_config_item in category.keys():\n            raise KeyError(\"Config item is already in use for {}\".format(category_name))\n\n        # merge category values with keep_original_items True\n        merge_cat_val = await cf_mgr._merge_category_vals(config_item_dict, category, keep_original_items=True)\n\n        # update category value in 
storage\n        payload = PayloadBuilder().SET(value=merge_cat_val).WHERE([\"key\", \"=\", category_name]).payload()\n        result = await storage_client.update_tbl(\"configuration\", payload)\n        response = result['response']\n\n        # update cache with new config item\n        if category_name in cf_mgr._cacheManager.cache:\n            cf_mgr._cacheManager.cache[category_name]['value'].update({new_config_item: data})\n\n        # logged audit new config item for category\n        audit = AuditLogger(storage_client)\n        audit_details = {'category': category_name, 'item': new_config_item, 'value': config_item_dict}\n        await audit.information('CONAD', audit_details)\n\n    except (KeyError, ValueError, TypeError) as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except NameError as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to create {} config item for {} category.\".format(new_config_item, category_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n    return web.json_response({\"message\": \"{} config item has been saved for {} category\".format(new_config_item,\n                                                                                                category_name)})\n\n\nasync def delete_configuration_item_value(request):\n    \"\"\"\n    Args:\n        request: category_name, config_item are required\n\n    Returns:\n        set the configuration item value to empty string in the given category\n\n    :Example:\n        curl -X DELETE http://localhost:8081/fledge/category/{category_name}/{config_item}/value\n\n        For {category_name}=>PURGE delete value for {config_item}=>age\n        curl -X DELETE http://localhost:8081/fledge/category/PURGE_READ/age/value\n\n    \"\"\"\n    category_name = request.match_info.get('category_name', None)\n    config_item = 
request.match_info.get('config_item', None)\n\n    category_name = urllib.parse.unquote(category_name) if category_name is not None else None\n    config_item = urllib.parse.unquote(config_item) if config_item is not None else None\n\n    cf_mgr = ConfigurationManager(connect.get_storage_async())\n    try:\n        category_item = await cf_mgr.get_category_item(category_name, config_item)\n        if category_item is None:\n            raise ValueError\n        try:\n            is_core_mgt = request.is_core_mgt\n        except AttributeError:\n            if 'readonly' in category_item:\n                if category_item['readonly'] == 'true':\n                    raise TypeError(\n                        \"Delete not allowed for {} item_name as it has readonly attribute set\".format(config_item))\n        await cf_mgr.set_category_item_value_entry(category_name, config_item, category_item['default'])\n    except ValueError:\n        raise web.HTTPNotFound(reason=\"No detail found for the category_name: {} and config_item: {}\".format(category_name, config_item))\n    except TypeError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n\n    result = await cf_mgr.get_category_item(category_name, config_item)\n\n    if result is None:\n        raise web.HTTPNotFound(reason=\"No detail found for the category_name: {} and config_item: {}\".format(category_name, config_item))\n\n    try:\n        request.is_core_mgt\n    except AttributeError:\n        result = hide_password(result)\n\n    return web.json_response(result)\n\n\nasync def get_child_category(request):\n    \"\"\"\n    Args:\n         request: category_name is required\n\n    Returns:\n            list of categories that are children of name category\n\n    :Example:\n            curl -X GET http://localhost:8081/fledge/category/south/children\n    \"\"\"\n    category_name = request.match_info.get('category_name', None)\n    category_name = urllib.parse.unquote(category_name) if category_name is not 
None else None\n\n    cf_mgr = ConfigurationManager(connect.get_storage_async())\n\n    try:\n        children = await cf_mgr.get_category_child(category_name)\n    except ValueError as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get the child {} category.\".format(category_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    return web.json_response({\"categories\": children})\n\n\nasync def create_child_category(request):\n    \"\"\"\n    Args:\n         request: category_name is required and JSON object that defines the child category\n\n    Returns:\n        parent of the children being added\n\n    :Example:\n            curl -d '{\"children\": [\"coap\", \"http\", \"sinusoid\"]}' -X POST http://localhost:8081/fledge/category/south/children\n    \"\"\"\n    cf_mgr = ConfigurationManager(connect.get_storage_async())\n    data = await request.json()\n    if not isinstance(data, dict):\n        raise ValueError('Data payload must be a dictionary')\n\n    category_name = request.match_info.get('category_name', None)\n    category_name = urllib.parse.unquote(category_name) if category_name is not None else None\n\n    children = data.get('children')\n\n    try:\n        r = await cf_mgr.create_child_category(category_name, children)\n    except TypeError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except ValueError as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to create the child relationship for {} category.\".format(category_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    return web.json_response(r)\n\n\nasync def delete_child_category(request):\n    \"\"\"\n    Args:\n        request: category_name, child_category are required\n\n    Returns:\n    
    remove the link b/w child category and its parent\n\n    :Example:\n        curl -X DELETE http://localhost:8081/fledge/category/{category_name}/children/{child_category}\n\n    \"\"\"\n    category_name = request.match_info.get('category_name', None)\n    child_category = request.match_info.get('child_category', None)\n    category_name = urllib.parse.unquote(category_name) if category_name is not None else None\n\n    cf_mgr = ConfigurationManager(connect.get_storage_async())\n    try:\n        result = await cf_mgr.delete_child_category(category_name, child_category)\n\n    except TypeError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except ValueError as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to delete the {} child of {} category.\".format(child_category, category_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    return web.json_response({\"children\": result})\n\n\nasync def delete_parent_category(request):\n    \"\"\"\n    Args:\n        request: category_name\n\n    Returns:\n        remove the link b/w parent-child category for the parent\n\n    :Example:\n        curl -X DELETE http://localhost:8081/fledge/category/{category_name}/parent\n\n    \"\"\"\n    category_name = request.match_info.get('category_name', None)\n    category_name = urllib.parse.unquote(category_name) if category_name is not None else None\n\n    cf_mgr = ConfigurationManager(connect.get_storage_async())\n    try:\n        await cf_mgr.delete_parent_category(category_name)\n    except TypeError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except ValueError as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to delete the parent-child relationship of {} category.\".format(category_name))\n        raise 
web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    return web.json_response({\"message\": \"Parent-child relationship for the parent-{} is deleted\".format(category_name)})\n\n\nasync def upload_script(request):\n    \"\"\" Upload script for a given config item\n\n    :Example:\n            curl -F \"script=@filename.py\" http://localhost:8081/fledge/category/{category_name}/{config_item}/upload\n    \"\"\"\n    category_name = request.match_info.get('category_name', None)\n    config_item = request.match_info.get('config_item', None)\n\n    category_name = urllib.parse.unquote(category_name) if category_name is not None else None\n    config_item = urllib.parse.unquote(config_item) if config_item is not None else None\n    cf_mgr = ConfigurationManager(connect.get_storage_async())\n    category_item = await cf_mgr.get_category_item(category_name, config_item)\n    if category_item is None:\n        raise web.HTTPNotFound(reason=\"No such Category item found for {}\".format(config_item))\n\n    config_item_type = category_item['type']\n    if config_item_type != 'script':\n        raise web.HTTPBadRequest(reason=\"Accepted config item type is 'script' but found {}\".format(config_item_type))\n\n    data = await request.post()\n\n    # contains the name of the file in string format\n    script_file = data.get('script')\n    if not script_file:\n        raise web.HTTPBadRequest(reason=\"Script file is missing\")\n\n    # TODO: For the time being accepted extension is '.py'\n    script_filename = script_file.filename\n    if not script_filename.endswith('.py'):\n        raise web.HTTPBadRequest(reason=\"Accepted file extension is .py\")\n\n    script_file_data = data['script'].file\n    script_file_content = script_file_data.read()\n    prefix_file_name = category_name.lower() + \"_\" + config_item.lower() + \"_\"\n    file_name = prefix_file_name + script_filename\n    script_file_path = script_dir + file_name\n    # If 'scripts' dir 
does not exist, create it\n    if not os.path.exists(script_dir):\n        os.makedirs(script_dir)\n    # Write contents to file and save under scripts dir path; it will be overwritten if exists\n    with open(script_file_path, 'wb') as f:\n        f.write(script_file_content)\n\n    # the hexadecimal representation of the binary data\n    hex_data = binascii.hexlify(script_file_content)\n    str_data = hex_data.decode('utf-8')\n\n    try:\n        # Save the value to database\n        await cf_mgr.set_category_item_value_entry(category_name, config_item, str_data, script_file_path)\n        # Remove old files for combination categoryName_configItem_* and retain only the latest one\n        _all_files = os.listdir(script_dir)\n        for name in _all_files:\n            if name.startswith(prefix_file_name):\n                if name != file_name:\n                    os.remove(script_dir + name)\n\n    except Exception as ex:\n        os.remove(script_file_path)\n        msg = str(ex)\n        _logger.error(ex, \"Failed to upload script for {} config item of {} category.\".format(config_item,\n                                                                                              category_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        result = await cf_mgr.get_category_item(category_name, config_item)\n        return web.json_response(result)\n\n\ndef hide_password(config: dict) -> Dict:\n    new_config = copy.deepcopy(config)\n    try:\n        for k, v in new_config.items():\n            if v['type'] == 'password' and len(v['value']):\n                v['value'] = \"****\"\n    except TypeError:\n        if new_config['type'] == 'password' and len(new_config['value']):\n            new_config['value'] = \"****\"\n    return new_config\n"
  },
  {
    "path": "python/fledge/services/core/api/control_service/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/core/api/control_service/acl_management.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nfrom aiohttp import web\n\nfrom fledge.common.acl_manager import ACLManager\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.web.middleware import has_permission\nfrom fledge.services.core import connect\nfrom fledge.services.core.api.control_service.exceptions import *\n\n__author__ = \"Ashish Jabble, Massimiliano Pinto\"\n__copyright__ = \"Copyright (c) 2021 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    --------------------------------------------------------------\n    | GET POST            | /fledge/ACL                          |\n    | GET PUT DELETE      | /fledge/ACL/{acl_name}               |\n    | PUT DELETE          | /fledge/service/{service_name}/ACL   |\n    --------------------------------------------------------------\n\"\"\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nasync def get_all_acls(request: web.Request) -> web.Response:\n    \"\"\" Get list of all access control lists in the system\n\n    :Example:\n        curl -H \"authorization: $AUTH_TOKEN\" -sX GET http://localhost:8081/fledge/ACL\n    \"\"\"\n    storage = connect.get_storage_async()\n    payload = PayloadBuilder().SELECT(\"name\", \"service\", \"url\").payload()\n    result = await storage.query_tbl_with_payload('control_acl', payload)\n    # TODO: FOGL-6258 Add users list in response where they are used\n    return web.json_response({\"acls\": result['rows']})\n\n\nasync def get_acl(request: web.Request) -> web.Response:\n    \"\"\" Get the details of access control list by name\n\n    :Example:\n        
curl -H \"authorization: $AUTH_TOKEN\" -sX GET http://localhost:8081/fledge/ACL/testACL\n    \"\"\"\n    try:\n        name = request.match_info.get('acl_name', None)\n        storage = connect.get_storage_async()\n        payload = PayloadBuilder().SELECT(\"name\", \"service\", \"url\").WHERE(['name', '=', name]).payload()\n        result = await storage.query_tbl_with_payload('control_acl', payload)\n        if 'rows' in result:\n            if result['rows']:\n                acl_info = result['rows'][0]\n            else:\n                raise NameNotFoundError('ACL with name {} is not found.'.format(name))\n        else:\n            raise StorageServerError(result)\n    except StorageServerError as err:\n        msg = \"Storage error: {}\".format(str(err))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except NameNotFoundError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get {} ACL.\".format(name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(acl_info)\n\n\n@has_permission(\"admin\")\nasync def add_acl(request: web.Request) -> web.Response:\n    \"\"\" Create a new access control list\n\n    :Example:\n         curl -H \"authorization: $AUTH_TOKEN\" -sX POST http://localhost:8081/fledge/ACL -d '{\"name\": \"testACL\",\n         \"service\": [{\"name\": \"IEC-104\"}, {\"type\": \"notification\"}], \"url\": [{\"url\": \"/fledge/south/operation\",\n         \"acl\": [{\"type\": \"Northbound\"}]}]}'\n         curl -H \"authorization: $AUTH_TOKEN\" -sX POST http://localhost:8081/fledge/ACL -d '{\"name\": \"testACL-2\",\n         \"service\": [{\"name\": \"IEC-104\"}], \"url\": []}'\n         curl -H \"authorization: $AUTH_TOKEN\" -sX POST 
http://localhost:8081/fledge/ACL -d '{\"name\": \"testACL-3\",\n         \"service\": [{\"type\": \"Notification\"}], \"url\": []}'\n         curl -H \"authorization: $AUTH_TOKEN\" -sX POST http://localhost:8081/fledge/ACL -d '{\"name\": \"testACL-4\",\n         \"service\": [{\"name\": \"IEC-104\"}, {\"type\": \"notification\"}], \"url\": [{\"url\": \"/fledge/south/operation\",\n         \"acl\": [{\"type\": \"Northbound\"}]}, {\"url\": \"/fledge/south/write\",\n         \"acl\": [{\"type\": \"Northbound\"}, {\"type\": \"Southbound\"}]}]}'\n    \"\"\"\n    try:\n        data = await request.json()\n        columns = await _check_params(data, action=\"POST\")\n        name = columns['name']\n        service = columns['service']\n        url = columns['url']\n        result = {}\n        storage = connect.get_storage_async()\n        payload = PayloadBuilder().SELECT(\"name\").WHERE(['name', '=', name]).payload()\n        get_control_acl_name_result = await storage.query_tbl_with_payload('control_acl', payload)\n        if get_control_acl_name_result['count'] == 0:\n            services = json.dumps(service)\n            urls = json.dumps(url)\n            payload = PayloadBuilder().INSERT(name=name, service=services, url=urls).payload()\n            insert_control_acl_result = await storage.insert_into_tbl(\"control_acl\", payload)\n            if 'response' in insert_control_acl_result:\n                if insert_control_acl_result['response'] == \"inserted\":\n                    result = {\"name\": name, \"service\": json.loads(services), \"url\": json.loads(urls)}\n                    # ACLAD audit trail entry\n                    audit = AuditLogger(storage)\n                    await audit.information('ACLAD', result)\n            else:\n                raise StorageServerError(insert_control_acl_result)\n        else:\n            msg = 'ACL with name {} already exists.'.format(name)\n            raise DuplicateNameError(msg)\n    except StorageServerError 
as err:\n        msg = \"Storage error: {}\".format(str(err))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except DuplicateNameError as err:\n        msg = str(err)\n        raise web.HTTPConflict(reason=msg, body=json.dumps({\"message\": msg}))\n    except (TypeError, ValueError) as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to create ACL.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(result)\n\n\n@has_permission(\"admin\")\nasync def update_acl(request: web.Request) -> web.Response:\n    \"\"\" Update an access control list\n    Only the service and URL parameters can be updated.\n\n    :Example:\n        curl -H \"authorization: $AUTH_TOKEN\" -sX PUT http://localhost:8081/fledge/ACL/testACL\n        -d '{\"service\": [{\"name\": \"Sinusoid\"}]}'\n        curl -H \"authorization: $AUTH_TOKEN\" -sX PUT http://localhost:8081/fledge/ACL/testACL\n        -d '{\"url\": [{\"url\": \"/fledge/south/write\", \"acl\": []}]}'\n         curl -H \"authorization: $AUTH_TOKEN\" -sX PUT http://localhost:8081/fledge/ACL/testACL\n         -d '{\"service\": [{\"type\": \"core\"}], \"url\": [{\"url\": \"/fledge/south/write\", \"acl\": [{\"type\": \"Northbound\"}]}]}'\n    \"\"\"\n    try:\n        name = request.match_info.get('acl_name', None)\n\n        data = await request.json()\n        service = data.get('service', None)\n        url = data.get('url', None)\n        await _check_params(data, action=\"PUT\")\n        storage = connect.get_storage_async()\n        payload = PayloadBuilder().SELECT(\"name\", \"service\", \"url\").WHERE(['name', '=', name]).payload()\n        result = await storage.query_tbl_with_payload('control_acl', payload)\n        message = \"\"\n        if 
'rows' in result:\n            if result['rows']:\n                update_query = PayloadBuilder()\n                set_values = {}\n                if service is not None:\n                    set_values[\"service\"] = json.dumps(service)\n                if url is not None:\n                    set_values[\"url\"] = json.dumps(url)\n\n                update_query.SET(**set_values).WHERE(['name', '=', name])\n                update_result = await storage.update_tbl(\"control_acl\", update_query.payload())\n                if 'response' in update_result:\n                    if update_result['response'] == \"updated\":\n                        message = \"ACL {} updated successfully.\".format(name)\n                        # ACLCH audit trail entry\n                        audit = AuditLogger(storage)\n                        values = {'name': name, 'service': service, 'url': url}\n                        await audit.information('ACLCH', {'acl': values, 'old_acl': result['rows'][0]})\n                else:\n                    raise StorageServerError(update_result)\n            else:\n                raise NameNotFoundError('ACL with name {} is not found.'.format(name))\n        else:\n            raise StorageServerError(result)\n    except StorageServerError as err:\n        message = \"Storage error: {}\".format(str(err))\n        raise web.HTTPInternalServerError(reason=message, body=json.dumps({\"message\": message}))\n    except NameNotFoundError as err:\n        message = str(err)\n        raise web.HTTPNotFound(reason=message, body=json.dumps({\"message\": message}))\n    except (TypeError, ValueError) as err:\n        message = str(err)\n        raise web.HTTPBadRequest(reason=message, body=json.dumps({\"message\": message}))\n    except Exception as ex:\n        message = str(ex)\n        _logger.error(ex, \"Failed to update {} ACL.\".format(name))\n        raise web.HTTPInternalServerError(reason=message, body=json.dumps({\"message\": message}))\n    
else:\n        # Fetch service names associated with the ACL\n        acl_handler = ACLManager(storage)\n        services = await acl_handler.get_all_entities_for_a_acl(name, \"service\")\n        for svc in services:\n            await acl_handler._notify_service_about_acl_change(svc, name, \"reloadACL\")\n\n        # No need to handle update for scripts, as the ACL name has not changed.\n        # A script will pick up the updated contents of the ACL the next time it runs.\n\n        return web.json_response({\"message\": message})\n\n\n@has_permission(\"admin\")\nasync def delete_acl(request: web.Request) -> web.Response:\n    \"\"\" Delete an access control list. Only ACLs that have no users can be deleted\n\n    :Example:\n        curl -H \"authorization: $AUTH_TOKEN\" -sX DELETE http://localhost:8081/fledge/ACL/testACL\n    \"\"\"\n    try:\n        name = request.match_info.get('acl_name', None)\n        storage = connect.get_storage_async()\n        payload = PayloadBuilder().SELECT(\"name\").WHERE(['name', '=', name]).payload()\n        result = await storage.query_tbl_with_payload('control_acl', payload)\n        message = \"\"\n        if 'rows' in result:\n            if result['rows']:\n                payload = PayloadBuilder().WHERE(['name', '=', name]).payload()\n                acl_handler = ACLManager(storage)\n                services = await acl_handler.get_all_entities_for_a_acl(name, \"service\")\n                scripts = await acl_handler.get_all_entities_for_a_acl(name, \"script\")\n                if services or scripts:\n                    message = \"{} is associated with an entity. 
So cannot delete.\" \\\n                              \" Make sure to remove all the usages of this ACL.\".format(name)\n                    _logger.warning(message)\n                    return web.HTTPConflict(reason=message, body=json.dumps({\"message\": message}))\n\n                delete_result = await storage.delete_from_tbl(\"control_acl\", payload)\n                if 'response' in delete_result:\n                    if delete_result['response'] == \"deleted\":\n                        message = \"{} ACL deleted successfully.\".format(name)\n                        # ACLDL audit trail entry\n                        audit = AuditLogger(storage)\n                        await audit.information('ACLDL', {\"message\": message, \"name\": name})\n                else:\n                    raise StorageServerError(delete_result)\n            else:\n                raise NameNotFoundError('ACL with name {} is not found.'.format(name))\n        else:\n            raise StorageServerError(result)\n    except StorageServerError as err:\n        msg = \"Storage error: {}\".format(str(err))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except NameNotFoundError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to delete {} ACL.\".format(name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({\"message\": message})\n\n\n@has_permission(\"admin\")\nasync def attach_acl_to_service(request: web.Request) -> web.Response:\n    \"\"\" Attach ACL to a service. 
A service may only have single ACL associated with it\n\n    :Example:\n        curl -H \"authorization: $AUTH_TOKEN\" -sX PUT http://localhost:8081/fledge/service/Sine/ACL -d '{\"acl_name\": \"testACL\"}'\n    \"\"\"\n    svc_name = request.match_info.get('service_name', None)\n    try:\n        storage = connect.get_storage_async()\n        payload = PayloadBuilder().SELECT([\"id\", \"enabled\"]).WHERE(['schedule_name', '=', svc_name]).payload()\n        # check service name existence\n        get_schedules_result = await storage.query_tbl_with_payload('schedules', payload)\n        if 'count' in get_schedules_result:\n            if get_schedules_result['count'] == 0:\n                raise NameNotFoundError('Schedule with name {} is not found.'.format(svc_name))\n        else:\n            raise StorageServerError(get_schedules_result)\n        data = await request.json()\n        acl_name = data.get('acl_name', None)\n        if acl_name is not None:\n            if not isinstance(acl_name, str):\n                raise ValueError('ACL must be a string.')\n            if acl_name.strip() == \"\":\n                raise ValueError('ACL cannot be empty.')\n        else:\n            raise ValueError('acl_name KV pair is missing.')\n        acl_name = acl_name.strip()\n        # check ACL name existence\n        payload = PayloadBuilder().SELECT(\"name\", \"service\", \"url\").WHERE(['name', '=', acl_name]).payload()\n        get_acl_result = await storage.query_tbl_with_payload('control_acl', payload)\n        if 'count' in get_acl_result:\n            if get_acl_result['count'] == 0:\n                raise NameNotFoundError('ACL with name {} is not found.'.format(acl_name))\n        else:\n            raise StorageServerError(get_acl_result)\n        # check ACL existence with service\n        cf_mgr = ConfigurationManager(storage)\n        security_cat_name = \"{}Security\".format(svc_name)\n        category = await 
cf_mgr.get_category_all_items(security_cat_name)\n\n        if category is not None and 'ACL' in category:\n            if category['ACL']['value'] != \"\":\n                raise ValueError('Service {} already has an ACL object.'.format(svc_name))\n\n        # Create the {service_name}Security category with the AuthenticatedCaller global switch &\n        # the ACL info attached (name is excluded from the ACL dict)\n        category_desc = \"Security category for {} service\".format(svc_name)\n        del get_acl_result['rows'][0]['name']\n        category_value = {\n            'AuthenticatedCaller':\n                {\n                    'description': 'Caller authorisation is needed',\n                    'type': 'boolean',\n                    'default': 'false',\n                    'displayName': 'Enable caller authorisation'\n                },\n            'ACL':\n                {\n                    'description': 'Service ACL for {}'.format(svc_name),\n                    'type': 'ACL',\n                    'displayName': 'Service ACL',\n                    'default': ''\n                }\n        }\n        # Create category content with ACL default set to ''\n        await cf_mgr.create_category(category_name=security_cat_name, category_description=category_desc,\n                                     category_value=category_value)\n        add_child_result = await cf_mgr.create_child_category(svc_name, [security_cat_name])\n        if security_cat_name not in add_child_result['children']:\n            raise StorageServerError(add_child_result)\n    except StorageServerError as err:\n        msg = \"Storage error: {}\".format(str(err))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except NameNotFoundError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except (TypeError, ValueError) as err:\n        msg = str(err)\n        
raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Attach ACL to {} service failed.\".format(svc_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        # Call service security endpoint with attachACL = acl_name\n        data = {'ACL': acl_name}\n        await cf_mgr.update_configuration_item_bulk(security_cat_name, data)\n\n        return web.json_response({\"message\": \"ACL with name {} attached to {} service successfully.\".format(\n            acl_name, svc_name)})\n\n\n@has_permission(\"admin\")\nasync def detach_acl_from_service(request: web.Request) -> web.Response:\n    \"\"\" Detach ACL from a service\n\n    :Example:\n        curl -H \"authorization: $AUTH_TOKEN\" -sX DELETE http://localhost:8081/fledge/service/Sine/ACL\n    \"\"\"\n    svc_name = request.match_info.get('service_name', None)\n    try:\n        storage = connect.get_storage_async()\n        payload = PayloadBuilder().SELECT([\"id\", \"enabled\"]).WHERE(['schedule_name', '=', svc_name]).payload()\n        # check service name existence\n        get_schedules_result = await storage.query_tbl_with_payload('schedules', payload)\n        if 'count' in get_schedules_result:\n            if get_schedules_result['count'] == 0:\n                raise NameNotFoundError('Schedule with name {} is not found.'.format(svc_name))\n        else:\n            raise StorageServerError(get_schedules_result)\n        cf_mgr = ConfigurationManager(storage)\n        security_cat_name = \"{}Security\".format(svc_name)\n        # Check {service_name}Security existence\n        category = await cf_mgr.get_category_all_items(security_cat_name)\n        if category is not None:\n            # Delete {service_name}Security category\n            category_desc = \"Security category for {} service\".format(svc_name)\n            category_value = {\n        
        'AuthenticatedCaller':\n                    {\n                        'description': 'Caller authorisation is needed',\n                        'type': 'boolean',\n                        'default': 'false',\n                        'displayName': 'Enable caller authorisation'\n                    }\n                ,\n                'ACL':\n                    {\n                        'description': 'Service ACL for {}'.format(svc_name),\n                        'type': 'ACL',\n                        'displayName': 'Service ACL',\n                        'default': ''\n                    }\n            }\n            # Call service security endpoint with detachACL = ''\n            data = {'ACL': ''}\n            await cf_mgr.update_configuration_item_bulk(security_cat_name, data)\n\n            # Set new content without ACL item\n            await cf_mgr.create_category(category_name=security_cat_name,\n                                         category_description=category_desc,\n                                         category_value=category_value)\n\n            message = \"ACL is detached from {} service successfully.\".format(svc_name)\n        else:\n            raise ValueError(\"Nothing to delete as there is no ACL attached with {} service.\".format(svc_name))\n    except StorageServerError as err:\n        msg = \"Storage error: {}\".format(str(err))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except NameNotFoundError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Detach ACL from {} service failed.\".format(svc_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": 
msg}))\n    else:\n        return web.json_response({\"message\": message})\n\nasync def _check_params(data, action):\n        final = {}\n        name = data.get('name', None)\n        service = data.get('service', None)\n        url = data.get('url', None)\n\n        if action == \"PUT\":\n            if service is None and url is None:\n                raise ValueError(\"Nothing to update for the given payload.\")\n\n        if action == \"POST\":\n            if name is None:\n                raise ValueError('ACL name is required.')\n            else:\n                if not isinstance(name, str):\n                    raise TypeError('ACL name must be a string.')\n                name = name.strip()\n                if name == \"\":\n                    raise ValueError('ACL name cannot be empty.')\n            final['name'] = name\n        if action == \"POST\":\n            if service is None:\n                raise ValueError('service parameter is required.')\n        if action == \"POST\" or (action == \"PUT\" and service is not None):\n            if not isinstance(service, list):\n                raise TypeError('service must be a list.')\n            if not service:\n                raise ValueError('service list cannot be empty.')\n            is_type_seen = False\n            is_name_seen = False\n            for s in service:\n                if not isinstance(s, dict):\n                    raise TypeError(\"service elements must be an object.\")\n                if not s:\n                    raise ValueError('service object cannot be empty.')\n                if 'type' in list(s.keys()) and not is_type_seen:\n                    if not isinstance(s['type'], str):\n                        raise TypeError(\"Value must be a string for service type.\")\n                    s['type'] = s['type'].strip()\n                    if s['type'] == \"\":\n                        raise ValueError('Value cannot be empty for service type.')\n                    
is_type_seen = True\n                if 'name' in list(s.keys()) and not is_name_seen:\n                    if not isinstance(s['name'], str):\n                        raise TypeError(\"Value must be a string for service name.\")\n                    s['name'] = s['name'].strip()\n                    if s['name'] == \"\":\n                        raise ValueError('Value cannot be empty for service name.')\n                    is_name_seen = True\n            if not is_type_seen and not is_name_seen:\n                raise ValueError('Either type or name Key-Value Pair is missing for service.')\n        final['service'] = service\n\n        if action == \"POST\":\n            if url is None:\n                raise ValueError('url parameter is required.')\n        if action == \"POST\" or (action == \"PUT\" and url is not None):\n            if not isinstance(url, list):\n                raise TypeError('url must be a list.')\n            if url:\n                for u in url:\n                    is_url_seen = False\n                    if not isinstance(u, dict):\n                        raise TypeError(\"url elements must be an object.\")\n                    if 'url' in u:\n                        if not isinstance(u['url'], str):\n                            raise TypeError(\"Value must be a string for url object.\")\n                        u['url'] = u['url'].strip()\n                        if u['url'] == \"\":\n                            raise ValueError('Value cannot be empty for url object.')\n                        is_url_seen = True\n                    if 'acl' in u:\n                        if not isinstance(u['acl'], list):\n                            raise TypeError(\"Value must be an array for acl object.\")\n                        if u['acl']:\n                            for uacl in u['acl']:\n                                if not isinstance(uacl, dict):\n                                    raise TypeError(\"acl elements must be an 
object.\")\n                                if not uacl:\n                                    raise ValueError('acl object cannot be empty.')\n                    if not is_url_seen:\n                        raise ValueError('url child Key-Value Pair is missing.')\n            final['url'] = url\n        return final\n"
  },
  {
    "path": "python/fledge/services/core/api/control_service/entrypoint.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\n\nfrom enum import IntEnum\nimport aiohttp\nfrom aiohttp import web\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.services.core import connect, server\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core.service_registry import exceptions as service_registry_exceptions\nfrom fledge.services.core.user_model import User\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2023 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n_help = \"\"\"\n    Two types of users: Control Administrator and Control Requestor\n    Control Administrator:\n                          - has access rights to create a control entrypoint.\n                          - must be a user with role of admin or control\n    Control Requestor:\n                      - can make requests to defined control entrypoint but cannot create new entrypoints.\n                      - any user role can make request to control entrypoint but the username must match one in list\n                       of users given when entrypoint was created.\n    -----------------------------------------------------------------------------------------------------------------\n    | GET POST                       |        /fledge/control/manage                                                 |\n    | GET PUT DELETE                 |        /fledge/control/manage/{name}                                          |\n    | PUT                            |        /fledge/control/request/{name}                                          |\n    
------------------------------------------------------------------------------------------------------------------\n\"\"\"\n\n\ndef setup(app):\n    app.router.add_route('POST', '/fledge/control/manage', create)\n    app.router.add_route('GET', '/fledge/control/manage', get_all)\n    app.router.add_route('GET', '/fledge/control/manage/{name}', get_by_name)\n    app.router.add_route('PUT', '/fledge/control/manage/{name}', update)\n    app.router.add_route('DELETE', '/fledge/control/manage/{name}', delete)\n    app.router.add_route('PUT', '/fledge/control/request/{name}', update_request)\n\n\nclass EntryPointType(IntEnum):\n    WRITE = 0\n    OPERATION = 1\n\n\nclass Destination(IntEnum):\n    BROADCAST = 0\n    SERVICE = 1\n    ASSET = 2\n    SCRIPT = 3\n\n\nasync def _get_type(identifier):\n    if isinstance(identifier, str):\n        type_converted = [ept.value for ept in EntryPointType if ept.name.lower() == identifier]\n    else:\n        type_converted = [ept.name.lower() for ept in EntryPointType if ept.value == identifier]\n    return type_converted[0]\n\n\nasync def _get_destination(identifier):\n    if isinstance(identifier, str):\n        dest_converted = [d.value for d in Destination if d.name.lower() == identifier]\n    else:\n        dest_converted = [d.name.lower() for d in Destination if d.value == identifier]\n    return dest_converted[0]\n\n\nasync def _check_parameters(payload, skip_required=False):\n    if not skip_required:\n        required_keys = {\"name\", \"description\", \"type\", \"destination\"}\n        if not all(k in payload.keys() for k in required_keys):\n            raise KeyError(\"{} required keys are missing in request payload.\".format(required_keys))\n    final = {}\n    name = payload.get('name', None)\n    if name is not None:\n        if not isinstance(name, str):\n            raise ValueError('Control entrypoint name should be in string.')\n        name = name.strip()\n        if len(name) == 0:\n            raise 
ValueError('Control entrypoint name cannot be empty.')\n        final['name'] = name\n    description = payload.get('description', None)\n    if description is not None:\n        if not isinstance(description, str):\n            raise ValueError('Control entrypoint description should be in string.')\n        description = description.strip()\n        if len(description) == 0:\n            raise ValueError('Control entrypoint description cannot be empty.')\n        final['description'] = description\n    _type = payload.get('type', None)\n    if _type is not None:\n        if not isinstance(_type, str):\n            raise ValueError('Control entrypoint type should be in string.')\n        _type = _type.strip()\n        if len(_type) == 0:\n            raise ValueError('Control entrypoint type cannot be empty.')\n        ept_names = [ept.name.lower() for ept in EntryPointType]\n        if _type not in ept_names:\n            raise ValueError('Possible types are: {}.'.format(ept_names))\n        if _type == EntryPointType.OPERATION.name.lower():\n            operation_name = payload.get('operation_name', None)\n            if operation_name is not None:\n                if not isinstance(operation_name, str):\n                    raise ValueError('Control entrypoint operation name should be in string.')\n                operation_name = operation_name.strip()\n                if len(operation_name) == 0:\n                    raise ValueError('Control entrypoint operation name cannot be empty.')\n            else:\n                raise KeyError('operation_name KV pair is missing.')\n            final['operation_name'] = operation_name\n        final['type'] = await _get_type(_type)\n\n    destination = payload.get('destination', None)\n    if destination is not None:\n        if not isinstance(destination, str):\n            raise ValueError('Control entrypoint destination should be in string.')\n        destination = destination.strip()\n        if len(destination) 
== 0:\n            raise ValueError('Control entrypoint destination cannot be empty.')\n        dest_names = [d.name.lower() for d in Destination]\n        if destination not in dest_names:\n            raise ValueError('Possible destination values are: {}.'.format(dest_names))\n\n        destination_idx = await _get_destination(destination)\n        final['destination'] = destination_idx\n\n        # only if non-zero\n        final['destination_arg'] = ''\n        if destination_idx:\n            destination_arg = payload.get(destination, None)\n            if destination_arg is not None:\n                if not isinstance(destination_arg, str):\n                    raise ValueError('Control entrypoint destination argument should be in string.')\n                destination_arg = destination_arg.strip()\n                if len(destination_arg) == 0:\n                    raise ValueError('Control entrypoint destination argument cannot be empty.')\n                final[destination] = destination_arg\n                final['destination_arg'] = destination\n            else:\n                raise KeyError('{} destination argument is missing.'.format(destination))\n    anonymous = payload.get('anonymous', None)\n    if anonymous is not None:\n        if not isinstance(anonymous, bool):\n            raise ValueError('anonymous should be a bool.')\n        anonymous = 't' if anonymous else 'f'\n        final['anonymous'] = anonymous\n    constants = payload.get('constants', None)\n    if constants is not None:\n        if not isinstance(constants, dict):\n            raise ValueError('constants should be a dictionary.')\n        final['constants'] = constants\n\n    variables = payload.get('variables', None)\n    if variables is not None:\n        if not isinstance(variables, dict):\n            raise ValueError('variables should be a dictionary.')\n        final['variables'] = variables\n\n    if _type == EntryPointType.WRITE.name.lower():\n        if not variables 
and not constants:\n            raise ValueError('For write type, either variables or constants must be non-empty.')\n\n    allow = payload.get('allow', None)\n    if allow is not None:\n        if not isinstance(allow, list):\n            raise ValueError('allow should be a list of users.')\n        if allow:\n            users = await User.Objects.all()\n            usernames = [u['uname'] for u in users]\n            invalid_users = list(set(payload['allow']) - set(usernames))\n            if invalid_users:\n                raise ValueError('Invalid user(s) {} found.'.format(invalid_users))\n        final['allow'] = allow\n    return final\n\n\nasync def create(request: web.Request) -> web.Response:\n    \"\"\"Create a control entrypoint\n     :Example:\n         curl -sX POST http://localhost:8081/fledge/control/manage -d '{\"name\": \"SetLatheSpeed\", \"description\": \"Set the speed of the lathe\", \"type\": \"write\", \"destination\": \"asset\", \"asset\": \"lathe\", \"constants\": {\"units\": \"spin\"}, \"variables\": {\"rpm\": \"100\"}, \"allow\":[\"user\"], \"anonymous\": false}'\n     \"\"\"\n    try:\n        data = await request.json()\n        payload = await _check_parameters(data)\n        name = payload['name']\n        storage = connect.get_storage_async()\n        result = await storage.query_tbl(\"control_api\")\n        entrypoints = [r['name'] for r in result['rows']]\n        if name in entrypoints:\n            raise ValueError('{} control entrypoint is already in use.'.format(name))\n        # add common data keys in control_api table\n        control_api_column_name = {\"name\": name,\n                                   \"description\": payload['description'],\n                                   \"type\": payload['type'],\n                                   \"operation_name\": payload['operation_name'] if payload['type'] == 1 else \"\",\n                                   \"destination\": payload['destination'],\n                   
                \"destination_arg\": payload[\n                                       payload['destination_arg']] if payload['destination'] else \"\",\n                                   \"anonymous\": payload['anonymous']\n                                   }\n        api_insert_payload = PayloadBuilder().INSERT(**control_api_column_name).payload()\n        insert_api_result = await storage.insert_into_tbl(\"control_api\", api_insert_payload)\n        if insert_api_result['rows_affected'] == 1:\n            # add if any params data keys in control_api_parameters table\n            if 'constants' in payload:\n                for k, v in payload['constants'].items():\n                    control_api_params_column_name = {\"name\": name, \"parameter\": k, \"value\": v, \"constant\": 't'}\n                    api_params_insert_payload = PayloadBuilder().INSERT(**control_api_params_column_name).payload()\n                    await storage.insert_into_tbl(\"control_api_parameters\", api_params_insert_payload)\n            if 'variables' in payload:\n                for k, v in payload['variables'].items():\n                    control_api_params_column_name = {\"name\": name, \"parameter\": k, \"value\": v, \"constant\": 'f'}\n                    api_params_insert_payload = PayloadBuilder().INSERT(**control_api_params_column_name).payload()\n                    await storage.insert_into_tbl(\"control_api_parameters\", api_params_insert_payload)\n            # add if any users in control_api_acl table\n            if 'allow' in payload:\n                for u in payload['allow']:\n                    control_acl_column_name = {\"name\": name, \"user\": u}\n                    acl_insert_payload = PayloadBuilder().INSERT(**control_acl_column_name).payload()\n                    await storage.insert_into_tbl(\"control_api_acl\", acl_insert_payload)\n    except (KeyError, ValueError) as err:\n        msg = str(err)\n        raise 
web.HTTPBadRequest(body=json.dumps({\"message\": msg}), reason=msg)\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to create control entrypoint.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        # CTEAD audit trail entry\n        audit = AuditLogger(storage)\n        if 'constants' not in data:\n            data['constants'] = {}\n        if 'variables' not in data:\n            data['variables'] = {}\n        await audit.information('CTEAD', data)\n        return web.json_response({\"message\": \"{} control entrypoint has been created successfully.\".format(name)})\n\n\nasync def get_all(request: web.Request) -> web.Response:\n    \"\"\"Get a list of all control entrypoints\n     :Example:\n         curl -sX GET http://localhost:8081/fledge/control/manage\n     \"\"\"\n    storage = connect.get_storage_async()\n    result = await storage.query_tbl(\"control_api\")\n    entrypoint = []\n    for row in result[\"rows\"]:\n        permitted = await _get_permitted(request, storage, row)\n        entrypoint.append({\"name\": row['name'], \"description\": row['description'], \"permitted\": permitted})\n    return web.json_response({\"controls\": entrypoint})\n\n\nasync def get_by_name(request: web.Request) -> web.Response:\n    \"\"\"Get a control entrypoint by name\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/control/manage/SetLatheSpeed\n    \"\"\"\n    ep_name = request.match_info.get('name', None)\n    try:\n        response = await _get_entrypoint(ep_name)\n        response['permitted'] = await _get_permitted(request, None, response)\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except KeyError as err:\n        msg = str(err.args[0])\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n    
    msg = str(ex)\n        _logger.error(ex, \"Failed to fetch details of {} entrypoint.\".format(ep_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(response)\n\n\nasync def delete(request: web.Request) -> web.Response:\n    \"\"\"Delete a control entrypoint\n    :Example:\n        curl -sX DELETE http://localhost:8081/fledge/control/manage/SetLatheSpeed\n    \"\"\"\n    name = request.match_info.get('name', None)\n    try:\n        storage = connect.get_storage_async()\n        payload = PayloadBuilder().WHERE([\"name\", '=', name]).payload()\n        result = await storage.query_tbl_with_payload(\"control_api\", payload)\n        if not result['rows']:\n            raise KeyError('{} control entrypoint not found.'.format(name))\n        await storage.delete_from_tbl(\"control_api_acl\", payload)\n        await storage.delete_from_tbl(\"control_api_parameters\", payload)\n        await storage.delete_from_tbl(\"control_api\", payload)\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except KeyError as err:\n        msg = str(err.args[0])\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to delete {} entrypoint.\".format(name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        message = \"{} control entrypoint has been deleted successfully.\".format(name)\n        # CTEDL audit trail entry\n        audit = AuditLogger(storage)\n        await audit.information('CTEDL', {\"message\": message, \"name\": name})\n        return web.json_response({\"message\": message})\n\n\nasync def update(request: web.Request) -> web.Response:\n    \"\"\"Update a control entrypoint\n    :Example:\n        curl -sX PUT 
\"http://localhost:8081/fledge/control/manage/SetLatheSpeed\" -d '{\"constants\": {\"x\": \"486\"}, \"variables\": {\"rpm\": \"1200\"}, \"description\": \"Perform lathesim\", \"anonymous\": false, \"destination\": \"script\", \"script\": \"S4\", \"allow\": [\"user\"]}'\n        curl -sX PUT http://localhost:8081/fledge/control/manage/SetLatheSpeed -d '{\"description\": \"Updated\", \"anonymous\": false, \"allow\": []}'\n        curl -sX PUT http://localhost:8081/fledge/control/manage/SetLatheSpeed -d '{\"allow\": [\"user\"]}'\n        curl -sX PUT http://localhost:8081/fledge/control/manage/SetLatheSpeed -d '{\"variables\":{\"rpm\":\"800\", \"distance\": \"138\"}, \"constants\": {\"x\": \"640\", \"y\": \"480\"}}'\n    \"\"\"\n    name = request.match_info.get('name', None)\n    try:\n        storage = connect.get_storage_async()\n        payload = PayloadBuilder().WHERE([\"name\", '=', name]).payload()\n        entry_point_result = await storage.query_tbl_with_payload(\"control_api\", payload)\n        if not entry_point_result['rows']:\n            msg = '{} control entrypoint not found.'.format(name)\n            raise KeyError(msg)\n        try:\n            data = await request.json()\n            columns = await _check_parameters(data, skip_required=True)\n        except Exception as ex:\n            msg = str(ex)\n            raise ValueError(msg)\n        old_entrypoint = await _get_entrypoint(name)\n        # TODO: FOGL-8037 rename\n        if 'name' in columns:\n            del columns['name']\n        possible_keys = {\"name\", \"description\", \"type\", \"operation_name\", \"destination\", \"destination_arg\",\n                         \"anonymous\", \"constants\", \"variables\", \"allow\"}\n        if 'type' in columns:\n            columns['operation_name'] = columns['operation_name'] if columns['type'] == 1 else \"\"\n        if 'destination_arg' in columns:\n            dest = await _get_destination(columns['destination'])\n            
columns['destination_arg'] = columns[dest] if columns['destination'] else \"\"\n        entries_to_remove = set(columns) - set(possible_keys)\n        for k in entries_to_remove:\n            del columns[k]\n        control_api_columns = {}\n        if columns:\n            for k, v in columns.items():\n                if k == \"constants\":\n                    await _update_params(\n                        name, old_entrypoint['constants'], columns['constants'], 't', storage)\n                elif k == \"variables\":\n                    await _update_params(\n                        name, old_entrypoint['variables'], columns['variables'], 'f', storage)\n                elif k == \"allow\":\n                    allowed_users = [u for u in v]\n                    db_allow_users = old_entrypoint[\"allow\"]\n                    insert_case = set(allowed_users) - set(db_allow_users)\n                    for _user in insert_case:\n                        acl_cols = {\"name\": name, \"user\": _user}\n                        acl_insert_payload = PayloadBuilder().INSERT(**acl_cols).payload()\n                        await storage.insert_into_tbl(\"control_api_acl\", acl_insert_payload)\n                    delete_case = set(db_allow_users) - set(allowed_users)\n                    for _user in delete_case:\n                        acl_delete_payload = PayloadBuilder().WHERE([\"name\", '=', name]\n                                                                    ).AND_WHERE([\"user\", '=', _user]).payload()\n                        await storage.delete_from_tbl(\"control_api_acl\", acl_delete_payload)\n                else:\n                    control_api_columns[k] = v\n            if control_api_columns:\n                payload = PayloadBuilder().SET(**control_api_columns).WHERE(['name', '=', name]).payload()\n                await storage.update_tbl(\"control_api\", payload)\n        else:\n            msg = \"Nothing to update. 
No valid key value pair found in payload.\"\n            raise ValueError(msg)\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except KeyError as err:\n        msg = str(err.args[0])\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to update the details of {} entrypoint.\".format(name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        # CTECH audit trail entry\n        result = await _get_entrypoint(name)\n        audit = AuditLogger(storage)\n        await audit.information('CTECH', {'entrypoint': result, 'old_entrypoint': old_entrypoint})\n        return web.json_response({\"message\": \"{} control entrypoint has been updated successfully.\".format(name)})\n\n\nasync def update_request(request: web.Request) -> web.Response:\n    \"\"\"API control entrypoints can be called with PUT operation to URL form\n    :Example:\n        curl -sX PUT http://localhost:8081/fledge/control/request/SetLatheSpeed -d '{\"distance\": \"13\"}'\n    \"\"\"\n    name = request.match_info.get('name', None)\n    try:\n        # check the dispatcher service state\n        try:\n            service = ServiceRegistry.get(s_type=\"Dispatcher\")\n            if service[0]._status != ServiceRecord.Status.Running:\n                raise ValueError('The Dispatcher service is not in Running state.')\n        except service_registry_exceptions.DoesNotExist:\n            raise ValueError('Dispatcher service is either not installed or not added.')\n\n        ep_info = await _get_entrypoint(name)\n        username = \"Anonymous\"\n        if request.user is not None:\n            \"\"\" Admin and Control role users can always execute entrypoint.\n                With anonymous - Editor role can execute entrypoint.\n            
    When anonymous is set to False, then\n                a) For the Editor role, the user must be in the list of allowed users.\n                b) For other roles such as viewer and dataviewer, the endpoint is restricted.\n            \"\"\"\n            if request.user[\"role_id\"] not in (1, 5):\n                if ep_info and not ep_info['anonymous']:\n                    allowed_user = [r for r in ep_info['allow']]\n                    if request.user[\"uname\"] not in allowed_user:\n                        raise ValueError(\"Operation is not allowed for the {} user.\".format(request.user['uname']))\n            username = request.user[\"uname\"]\n\n        data = await request.json()\n        dispatch_payload = {\"destination\": ep_info['destination'], \"source\": \"API\", \"source_name\": username}\n        # If destination is broadcast then name KV pair is excluded from dispatch payload\n        if str(ep_info['destination']).lower() != 'broadcast':\n            dispatch_payload[\"name\"] = ep_info[ep_info['destination']]\n        constant_dict = {key: data.get(key, ep_info[\"constants\"][key]) for key in ep_info[\"constants\"]}\n        variables_dict = {key: data.get(key, ep_info[\"variables\"][key]) for key in ep_info[\"variables\"]}\n        params = {**constant_dict, **variables_dict}\n        if ep_info['type'] == 'write':\n            if not params:\n                raise ValueError(\"Nothing to update as the given entrypoint does not have any parameters.\")\n            url = \"dispatch/write\"\n            dispatch_payload[\"write\"] = params\n        else:\n            url = \"dispatch/operation\"\n            dispatch_payload[\"operation\"] = {ep_info[\"operation_name\"]: params if params else {}}\n        _logger.debug(\"DISPATCH PAYLOAD: {}\".format(dispatch_payload))\n        svc, bearer_token = await _get_service_record_info_along_with_bearer_token()\n        await _call_dispatcher_service_api(svc._protocol, svc._address, svc._port, url, bearer_token, 
dispatch_payload)\n    except KeyError as err:\n        msg = str(err.args[0])\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to update the control request details of {} entrypoint.\".format(name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({\"message\": \"{} control entrypoint URL called.\".format(name)})\n\n\nasync def _get_entrypoint(name):\n    storage = connect.get_storage_async()\n    payload = PayloadBuilder().WHERE([\"name\", '=', name]).payload()\n    result = await storage.query_tbl_with_payload(\"control_api\", payload)\n    if not result['rows']:\n        raise KeyError('{} control entrypoint not found.'.format(name))\n    response = result['rows'][0]\n    response['type'] = await _get_type(response['type'])\n    response['destination'] = await _get_destination(response['destination'])\n    if response['destination'] != \"broadcast\":\n        response[response['destination']] = response['destination_arg']\n    del response['destination_arg']\n    response['anonymous'] = True if response['anonymous'] == 't' else False\n    param_result = await storage.query_tbl_with_payload(\"control_api_parameters\", payload)\n    constants = {}\n    variables = {}\n    if param_result['rows']:\n        for r in param_result['rows']:\n            if r['constant'] == 't':\n                constants[r['parameter']] = r['value']\n            else:\n                variables[r['parameter']] = r['value']\n        response['constants'] = constants\n        response['variables'] = variables\n    else:\n        response['constants'] = constants\n        response['variables'] = variables\n    response['allow'] = []\n    acl_result 
= await storage.query_tbl_with_payload(\"control_api_acl\", payload)\n    if acl_result['rows']:\n        users = []\n        for r in acl_result['rows']:\n            users.append(r['user'])\n        response['allow'] = users\n    return response\n\n\nasync def _get_service_record_info_along_with_bearer_token():\n    try:\n        service = ServiceRegistry.get(s_type=\"Dispatcher\")\n        svc_name = service[0]._name\n        token = ServiceRegistry.getBearerToken(svc_name)\n    except service_registry_exceptions.DoesNotExist:\n        msg = \"No service available with type Dispatcher.\"\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return service[0], token\n\n\nasync def _call_dispatcher_service_api(protocol: str, address: str, port: int, uri: str, token: str, payload: dict):\n    # Custom Request header\n    headers = {}\n    if token is not None:\n        headers['Authorization'] = \"Bearer {}\".format(token)\n    url = \"{}://{}:{}/{}\".format(protocol, address, port, uri)\n    try:\n        async with aiohttp.ClientSession() as session:\n            async with session.post(url, data=json.dumps(payload), headers=headers) as resp:\n                message = await resp.text()\n                response = (resp.status, message)\n                if resp.status not in range(200, 209):\n                    _logger.error(\"POST Request Error: Http status code: {}, reason: {}, response: {}\".format(\n                        resp.status, resp.reason, message))\n    except Exception as ex:\n        raise Exception(str(ex))\n    else:\n        # Return Tuple - (http statuscode, message)\n        return response\n\n\nasync def _update_params(ep_name: str, old_param: dict, new_param: dict, is_constant: str, _storage: connect):\n    insert_case = set(new_param) - set(old_param)\n    update_case = set(new_param) & set(old_param)\n    delete_case = set(old_param) - set(new_param)\n\n    for uc in update_case:\n        
update_payload = PayloadBuilder().WHERE([\"name\", '=', ep_name]).AND_WHERE(\n            [\"constant\", '=', is_constant]).AND_WHERE([\"parameter\", '=', uc]).SET(value=new_param[uc]).payload()\n        await _storage.update_tbl(\"control_api_parameters\", update_payload)\n\n    for dc in delete_case:\n        delete_payload = PayloadBuilder().WHERE([\"name\", '=', ep_name]).AND_WHERE(\n            [\"constant\", '=', is_constant]).AND_WHERE([\"parameter\", '=', dc]).payload()\n        await _storage.delete_from_tbl(\"control_api_parameters\", delete_payload)\n\n    for ic in insert_case:\n        column_name = {\"name\": ep_name, \"parameter\": ic, \"value\": new_param[ic], \"constant\": is_constant}\n        api_params_insert_payload = PayloadBuilder().INSERT(**column_name).payload()\n        await _storage.insert_into_tbl(\"control_api_parameters\", api_params_insert_payload)\n\n\nasync def _get_permitted(request: web.Request, _storage: connect, ep: dict):\n    \"\"\"permitted: whether the user is able to make the API call\n          If the anonymous flag is true then permitted is true\n          If the anonymous flag is false then the list of allowed users determines whether the specific user can make the call\n       Note: When authentication is optional, permitted is always true\n    \"\"\"\n    if _storage is None:\n        _storage = connect.get_storage_async()\n\n    if request.is_auth_optional is True:\n        return True\n    if ep['anonymous'] == 't' or ep['anonymous'] is True:\n        return True\n\n    permitted = False\n    if request.user[\"role_id\"] not in (1, 5):  # Admin, Control\n        payload = PayloadBuilder().WHERE([\"name\", '=', ep['name']]).payload()\n        acl_result = await _storage.query_tbl_with_payload(\"control_api_acl\", payload)\n        if acl_result['rows']:\n            users = [r['user'] for r in acl_result['rows']]\n            permitted = request.user[\"uname\"] in users\n    else:\n        
permitted = True\n    return permitted\n"
  },
  {
    "path": "python/fledge/services/core/api/control_service/exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2021 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n__all__ = ('DuplicateNameError', 'NameNotFoundError')\n\n\nclass DuplicateNameError(RuntimeError):\n    pass\n\n\nclass NameNotFoundError(ValueError):\n    pass\n"
  },
  {
    "path": "python/fledge/services/core/api/control_service/pipeline.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport copy\nimport json\nfrom aiohttp import web\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.services.core import connect, server\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2023 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n_help = \"\"\"\n    -----------------------------------------------------------------------------------\n    | GET POST                   |        /fledge/control/pipeline                    |\n    | GET PUT DELETE             |        /fledge/control/pipeline/{id}               |\n    | GET                        |        /fledge/control/lookup                      |\n    -----------------------------------------------------------------------------------\n\"\"\"\n\n\ndef setup(app):\n    app.router.add_route('GET', '/fledge/control/lookup', get_lookup)\n    app.router.add_route('POST', '/fledge/control/pipeline', create)\n    app.router.add_route('GET', '/fledge/control/pipeline', get_all)\n    app.router.add_route('GET', '/fledge/control/pipeline/{id}', get_by_id)\n    app.router.add_route('PUT', '/fledge/control/pipeline/{id}', update)\n    app.router.add_route('DELETE', '/fledge/control/pipeline/{id}', delete)\n\n\nasync def get_lookup(request: web.Request) -> web.Response:\n    \"\"\"List of supported control source and destinations\n\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/control/lookup\n        curl -sX GET http://localhost:8081/fledge/control/lookup?type=source\n        curl -sX GET 
http://localhost:8081/fledge/control/lookup?type=destination\n    \"\"\"\n    try:\n        _type = request.query.get('type')\n        if _type is None or not _type:\n            lookup = await _get_all_lookups()\n            response = {'controlLookup': lookup}\n        else:\n            table_name = None\n            if _type == \"source\":\n                table_name = \"control_source\"\n            elif _type == \"destination\":\n                table_name = \"control_destination\"\n            if table_name:\n                lookup = await _get_all_lookups(table_name)\n                response = lookup\n            else:\n                lookup = await _get_all_lookups()\n                response = {'controlLookup': lookup}\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get all control lookups.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(response)\n\n\nasync def create(request: web.Request) -> web.Response:\n    \"\"\"Create a control pipeline. 
Its name must be unique and there must be no other pipelines with the same\n    source or destination\n\n    :Example:\n        curl -sX POST http://localhost:8081/fledge/control/pipeline -d '{\"name\": \"wildcard\", \"enabled\": true, \"execution\": \"shared\", \"source\": {\"type\": 1}, \"destination\": {\"type\": 1}}'\n        curl -sX POST http://localhost:8081/fledge/control/pipeline -d '{\"name\": \"pump\", \"enabled\": true, \"execution\": \"shared\", \"source\": {\"type\": 2, \"name\": \"pump\"}}'\n        curl -sX POST http://localhost:8081/fledge/control/pipeline -d '{\"name\": \"broadcast\", \"enabled\": true, \"execution\": \"exclusive\", \"destination\": {\"type\": 5}}'\n        curl -sX POST http://localhost:8081/fledge/control/pipeline -d '{\"name\": \"opcua_pump\", \"enabled\": true, \"execution\": \"shared\", \"source\": {\"type\": 2, \"name\": \"opcua\"}, \"destination\": {\"type\": 3, \"name\": \"pump1\"}}'\n        curl -sX POST http://localhost:8081/fledge/control/pipeline -d '{\"name\": \"opcua_pump1\", \"enabled\": true, \"execution\": \"exclusive\", \"source\": {\"type\": 2, \"name\": \"southOpcua\"}, \"destination\": {\"type\": 2, \"name\": \"northOpcua\"}, \"filters\": [\"Filter1\"]}'\n        curl -sX POST http://localhost:8081/fledge/control/pipeline -d '{\"name\": \"Test\", \"enabled\": false, \"filters\": [\"Filter1\", \"Filter2\"]}'\n    \"\"\"\n    try:\n        data = await request.json()\n        # Create entry in control_pipelines table\n        column_names = await _check_parameters(data, request)\n        source_type = column_names.get(\"stype\")\n        if source_type is None:\n            column_names['stype'] = 0\n            column_names['sname'] = ''\n        des_type = column_names.get(\"dtype\")\n        if des_type is None:\n            column_names['dtype'] = 0\n            column_names['dname'] = ''\n        payload = PayloadBuilder().INSERT(**column_names).payload()\n        storage = connect.get_storage_async()\n   
     insert_result = await storage.insert_into_tbl(\"control_pipelines\", payload)\n        pipeline_name = column_names['name']\n        pipeline_filter = data.get('filters', None)\n        if insert_result['response'] == \"inserted\" and insert_result['rows_affected'] == 1:\n            source = {'type': column_names[\"stype\"], 'name': column_names[\"sname\"]}\n            destination = {'type': column_names[\"dtype\"], 'name': column_names[\"dname\"]}\n            final_result = await _pipeline_in_use(pipeline_name, source, destination, info=True)\n            final_result['source'] = {\"type\": await _get_lookup_value('source', final_result[\"stype\"]),\n                                      \"name\": final_result['sname']}\n            final_result['destination'] = {\"type\": await _get_lookup_value('destination', final_result[\"dtype\"]),\n                                           \"name\": final_result['dname']}\n            final_result.pop('stype', None)\n            final_result.pop('sname', None)\n            final_result.pop('dtype', None)\n            final_result.pop('dname', None)\n            final_result['enabled'] = False if final_result['enabled'] == 'f' else True\n            final_result['filters'] = []\n            if pipeline_filter:\n                go_ahead = await _check_filters(storage, pipeline_filter)\n                if go_ahead:\n                    filters = await _update_filters(storage, final_result['id'], pipeline_name, pipeline_filter)\n                    final_result['filters'] = filters\n        else:\n            raise StorageServerError\n    except StorageServerError as serr:\n        msg = serr.error\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": \"Storage error: {}\".format(msg)}))\n    except KeyError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise 
web.HTTPBadRequest(body=json.dumps({\"message\": msg}), reason=msg)\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to create pipeline: {}.\".format(data.get('name')))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        # CTPAD audit trail entry\n        audit = AuditLogger(storage)\n        await audit.information('CTPAD', final_result)\n        return web.json_response(final_result)\n\n\nasync def get_all(request: web.Request) -> web.Response:\n    \"\"\"List of all control pipelines within the system\n\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/control/pipeline\n    \"\"\"\n    try:\n        storage = connect.get_storage_async()\n        result = await storage.query_tbl(\"control_pipelines\")\n        control_pipelines = []\n        source_lookup = await _get_all_lookups(\"control_source\")\n        des_lookup = await _get_all_lookups(\"control_destination\")\n        for r in result[\"rows\"]:\n            source_name = [s['name'] for s in source_lookup if r['stype'] == s['cpsid']]\n            des_name = [s['name'] for s in des_lookup if r['dtype'] == s['cpdid']]\n            temp = {\n                'id': r['cpid'],\n                'name': r['name'],\n                'source': {\n                    'type': ''.join(source_name), 'name': r['sname']} if r['stype'] else {'type': '', 'name': ''},\n                'destination': {\n                    'type': ''.join(des_name), 'name': r['dname']} if r['dtype'] else {'type': '', 'name': ''},\n                'enabled': False if r['enabled'] == 'f' else True,\n                'execution': r['execution']\n            }\n            result = await _get_table_column_by_value(\"control_filters\", \"cpid\", r['cpid'])\n            temp.update({'filters': [r['fname'] for r in result[\"rows\"]]})\n            control_pipelines.append(temp)\n    except Exception as ex:\n        msg = str(ex)\n        
_logger.error(ex, \"Failed to get all pipelines.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'pipelines': control_pipelines})\n\n\nasync def get_by_id(request: web.Request) -> web.Response:\n    \"\"\"Fetch the pipeline within the system\n\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/control/pipeline/2\n    \"\"\"\n    cpid = request.match_info.get('id', None)\n    try:\n        pipeline = await _get_pipeline(cpid)\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except KeyError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to fetch details of pipeline having ID: <{}>.\".format(cpid))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(pipeline)\n\n\nasync def update(request: web.Request) -> web.Response:\n    \"\"\"Update an existing pipeline within the system\n\n    :Example:\n        curl -sX PUT http://localhost:8081/fledge/control/pipeline/1 -d '{\"filters\": [\"F3\", \"F2\"]}'\n        curl -sX PUT http://localhost:8081/fledge/control/pipeline/13 -d '{\"name\": \"Changed\"}'\n        curl -sX PUT http://localhost:8081/fledge/control/pipeline/9 -d '{\"enabled\": false, \"execution\": \"exclusive\", \"filters\": [], \"source\": {\"type\": 1, \"name\": \"Universal\"}, \"destination\": {\"type\": 4, \"name\": \"TestScript\"}}'\n    \"\"\"\n    cpid = request.match_info.get('id', None)\n    try:\n        pipeline = await _get_pipeline(cpid)\n        data = await request.json()\n        data['old_pipeline_name'] = pipeline['name']\n        columns = await _check_parameters(data, request)\n        storage = 
connect.get_storage_async()\n        if columns:\n            payload = PayloadBuilder().SET(**columns).WHERE(['cpid', '=', pipeline['id']]).payload()\n            await storage.update_tbl(\"control_pipelines\", payload)\n        filters = data.get('filters', None)\n        if filters is not None:\n            # Case: When filters payload is empty then remove all filters\n            if not filters:\n                await _remove_filters(storage, pipeline['filters'], pipeline['id'], pipeline['name'])\n            else:\n                go_ahead = await _check_filters(storage, filters) if filters else True\n                if go_ahead:\n                    if filters:\n                        result_filters = await _get_table_column_by_value(\"control_filters\", \"cpid\", cpid)\n                        db_filters = None\n                        if result_filters['rows']:\n                            db_filters = [r['fname'].replace(\"ctrl_{}_\".format(pipeline['name']), ''\n                                                             ) for r in result_filters['rows']]\n                        await _update_filters(storage, pipeline['id'], pipeline['name'], filters, db_filters)\n                else:\n                    raise ValueError('Filters do not exist as per the given list {}'.format(filters))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except KeyError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to update pipeline having ID: <{}>.\".format(cpid))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        # CTPCH audit trail entry\n        audit = AuditLogger(storage)\n        updated_pipeline = await _get_pipeline(cpid)\n        await 
audit.information('CTPCH', {\"pipeline\": updated_pipeline, \"old_pipeline\": pipeline})\n        return web.json_response(\n            {\"message\": \"Control Pipeline with ID:<{}> has been updated successfully.\".format(cpid)})\n\n\nasync def delete(request: web.Request) -> web.Response:\n    \"\"\"Delete an existing pipeline within the system.\n    Also remove the filters along with configuration that are part of pipeline\n\n    :Example:\n        curl -sX DELETE http://localhost:8081/fledge/control/pipeline/1\n    \"\"\"\n    cpid = request.match_info.get('id', None)\n    try:\n        storage = connect.get_storage_async()\n        pipeline = await _get_pipeline(cpid)\n        # Remove filters if exists and also delete the entry from control_filter table\n        await _remove_filters(storage, pipeline['filters'], pipeline['id'], pipeline['name'])\n        # Delete entry from control_pipelines\n        payload = PayloadBuilder().WHERE(['cpid', '=', pipeline['id']]).payload()\n        await storage.delete_from_tbl(\"control_pipelines\", payload)\n    except KeyError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to delete pipeline having ID: <{}>.\".format(cpid))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        message = {\"message\": \"Control Pipeline with ID:<{}> has been deleted successfully.\".format(cpid)}\n        audit_details = message\n        audit_details[\"name\"] = pipeline['name']\n        # CTPDL audit trail entry\n        audit = AuditLogger(storage)\n        await audit.information('CTPDL', audit_details)\n        return web.json_response(message)\n\n\nasync def 
_get_all_lookups(tbl_name=None):\n    storage = connect.get_storage_async()\n    if tbl_name:\n        res = await storage.query_tbl(tbl_name)\n        lookup = res[\"rows\"]\n        return lookup\n    result = await storage.query_tbl(\"control_source\")\n    source_lookup = result[\"rows\"]\n    result = await storage.query_tbl(\"control_destination\")\n    des_lookup = result[\"rows\"]\n    return {\"source\": source_lookup, \"destination\": des_lookup}\n\n\nasync def _get_table_column_by_value(table, column_name, column_value, limit=None):\n    storage = connect.get_storage_async()\n    if table == \"control_filters\":\n        payload = PayloadBuilder().WHERE([column_name, '=', column_value]).ORDER_BY([\"forder\", \"asc\"]).payload()\n    else:\n        payload = PayloadBuilder().WHERE([column_name, '=', column_value]).payload()\n    if limit is not None:\n        payload = PayloadBuilder().WHERE([column_name, '=', column_value]).LIMIT(limit).payload()\n    result = await storage.query_tbl_with_payload(table, payload)\n    return result\n\n\nasync def _get_pipeline(cpid, filters=True):\n    result = await _get_table_column_by_value(\"control_pipelines\", \"cpid\", cpid)\n    rows = result[\"rows\"]\n    if not rows:\n        raise KeyError(\"Pipeline having ID: {} not found.\".format(cpid))\n    r = rows[0]\n    pipeline = {\n        'id': r['cpid'],\n        'name': r['name'],\n        'source': {'type': await _get_lookup_value(\"source\", r['stype']), 'name': r['sname']\n                   } if r['stype'] else {'type': '', 'name': ''},\n        'destination': {'type': await _get_lookup_value(\"destination\", r['dtype']), 'name': r['dname']\n                        } if r['dtype'] else {'type': '', 'name': ''},\n        'enabled': False if r['enabled'] == 'f' else True,\n        'execution': r['execution']\n    }\n    if filters:\n        # update filters in pipeline\n        result = await _get_table_column_by_value(\"control_filters\", \"cpid\", 
pipeline['id'])\n        pipeline['filters'] = [r['fname'] for r in result[\"rows\"]]\n    return pipeline\n\n\nasync def _pipeline_in_use(name, source, destination, info=False):\n    result = await _get_table_column_by_value(\"control_pipelines\", \"name\", name)\n    rows = result[\"rows\"]\n    row = None\n    new_data = {'source': source if source else {'type': 0, 'name': ''},\n                'destination': destination if destination else {'type': 0, 'name': ''}\n                }\n    is_matched = False\n    for r in rows:\n        db_data = {'source': {'type': r['stype'], 'name': r['sname']},\n                   'destination': {'type': r['dtype'], 'name': r['dname']}}\n        if json.dumps(db_data, sort_keys=True) == json.dumps(new_data, sort_keys=True):\n            is_matched = True\n            r[\"id\"] = r['cpid']\n            r.pop('cpid', None)\n            row = r\n            break\n    return row if info else is_matched\n\n\nasync def _get_lookup_value(_type, value):\n    if _type == \"source\":\n        tbl_name = \"control_source\"\n        key_name = 'cpsid'\n    else:\n        tbl_name = \"control_destination\"\n        key_name = 'cpdid'\n    lookup = await _get_all_lookups(tbl_name)\n    name = [lu['name'] for lu in lookup if value == lu[key_name]]\n    return ''.join(name)\n\n\nasync def _check_parameters(payload, request):\n    column_names = {}\n    # name\n    name = payload.get('name', None)\n    if name is not None:\n        if not isinstance(name, str):\n            raise ValueError('Pipeline name should be in string.')\n        name = name.strip()\n        if len(name) == 0:\n            raise ValueError('Pipeline name cannot be empty.')\n        cpid = request.match_info.get('id', None)\n        old_name = payload.get('old_pipeline_name', None)\n        await _check_unique_pipeline(name, old_name, cpid)\n        column_names['name'] = name\n    # enabled\n    enabled = payload.get('enabled', None)\n    if enabled is not None:\n      
  if not isinstance(enabled, bool):\n            raise ValueError('Enabled should be a bool.')\n        column_names['enabled'] = 't' if enabled else 'f'\n    # execution\n    execution = payload.get('execution', None)\n    if execution is not None:\n        if not isinstance(execution, str):\n            raise ValueError('Execution should be a string value.')\n        execution = execution.strip()\n        if len(execution) == 0:\n            raise ValueError('Execution value cannot be empty.')\n        if execution.lower() not in [\"shared\", \"exclusive\"]:\n            raise ValueError('Execution model value must be either shared or exclusive.')\n        column_names['execution'] = execution\n    # source\n    source = payload.get('source', None)\n    if source is not None:\n        if not isinstance(source, dict):\n            raise ValueError('Source should be passed with type and name.')\n        if len(source):\n            source_type = source.get(\"type\")\n            source_name = source.get(\"name\")\n            if source_type is not None:\n                if not isinstance(source_type, int):\n                    raise ValueError(\"Source type should be an integer value.\")\n                stype = await _get_lookup_value(\"source\", source_type)\n                if not stype:\n                    raise ValueError(\"Invalid source type found.\")\n            else:\n                raise ValueError('Source type is missing.')\n            # Note: when source type is Any or API; no name is applied\n            if source_type not in (1, 3):\n                if source_name is not None:\n                    if not isinstance(source_name, str):\n                        raise ValueError(\"Source name should be a string value.\")\n                    source_name = source_name.strip()\n                    if len(source_name) == 0:\n                        raise ValueError('Source name cannot be empty.')\n                    await _validate_lookup_name(\"source\", 
source_type, source_name)\n                    column_names[\"stype\"] = source_type\n                    column_names[\"sname\"] = source_name\n                else:\n                    raise ValueError('Source name is missing.')\n            else:\n                source_name = ''\n                if source_type == 3:\n                    source_name = request.user[\"uname\"] if hasattr(request, \"user\") and request.user else \"anonymous\"\n                source = {'type': source_type, 'name': source_name}\n                column_names[\"stype\"] = source_type\n                column_names[\"sname\"] = source_name\n        else:\n            column_names[\"stype\"] = 0\n            column_names[\"sname\"] = \"\"\n    # destination\n    destination = payload.get('destination', None)\n    if destination is not None:\n        if not isinstance(destination, dict):\n            raise ValueError('Destination should be passed with type and name.')\n        if len(destination):\n            des_type = destination.get(\"type\")\n            des_name = destination.get(\"name\")\n            if des_type is not None:\n                if not isinstance(des_type, int):\n                    raise ValueError(\"Destination type should be an integer value.\")\n                dtype = await _get_lookup_value(\"destination\", des_type)\n                if not dtype:\n                    raise ValueError(\"Invalid destination type found.\")\n            else:\n                raise ValueError('Destination type is missing.')\n            # Note: when destination type is Any or Broadcast; no name is applied\n            if des_type not in (1, 5):\n                if des_name is not None:\n                    if not isinstance(des_name, str):\n                        raise ValueError(\"Destination name should be a string value.\")\n                    des_name = des_name.strip()\n                    if len(des_name) == 0:\n                        raise ValueError('Destination name 
cannot be empty.')\n                    await _validate_lookup_name(\"destination\", des_type, des_name)\n                    column_names[\"dtype\"] = des_type\n                    column_names[\"dname\"] = des_name\n                else:\n                    raise ValueError('Destination name is missing.')\n            else:\n                des_name = ''\n                destination = {'type': des_type, 'name': des_name}\n                column_names[\"dtype\"] = des_type\n                column_names[\"dname\"] = des_name\n        else:\n            column_names[\"dtype\"] = 0\n            column_names[\"dname\"] = \"\"\n    # Cross-check types only when both source and destination are non-empty dicts;\n    # empty dicts never assign source_type/des_type, so checking them would raise NameError\n    if source and destination:\n        error_msg = \"Pipeline is not allowed with the same type of source and destination.\"\n        # Service\n        if source_type == 2 and des_type == 2:\n            schedules = await server.Server.scheduler.get_schedules()\n            south_schedules = [sch.name for sch in schedules if sch.schedule_type == 1 and sch.process_name == \"south_c\"]\n            north_schedules = [sch.name for sch in schedules if\n                               sch.schedule_type == 1 and sch.process_name == \"north_C\"]\n            if (source_name in south_schedules and des_name in south_schedules) or (\n                    source_name in north_schedules and des_name in north_schedules):\n                raise ValueError(error_msg)\n            if source_name in south_schedules:\n                raise ValueError(\"South services can not be the source for control pipelines.\")\n            if des_name in north_schedules:\n                raise ValueError(\"North services can not be the destination for control pipelines.\")\n        # Script\n        if source_type == 6 and des_type == 4:\n            raise ValueError(error_msg)\n    # filters\n    filters = payload.get('filters', None)\n    if filters is not None:\n        if not isinstance(filters, list):\n            raise ValueError('Pipeline filters 
should be passed in list.')\n    return column_names\n\n\nasync def _validate_lookup_name(lookup_name, _type, value):\n    storage = connect.get_storage_async()\n    config_mgr = ConfigurationManager(storage)\n\n    async def get_schedules():\n        schedules = await server.Server.scheduler.get_schedules()\n        if _type == 2:\n            if lookup_name == \"source\":\n                # Source can not be a south service\n                south_north_schedules = [sch.name for sch in schedules if sch.schedule_type == 1 and sch.process_name == \"south_c\"]\n                error_message = \"South services can not be the source for control pipelines.\"\n            else:\n                # Destination can not be a north service\n                south_north_schedules = [sch.name for sch in schedules if sch.schedule_type == 1 and sch.process_name == \"north_C\"]\n                error_message = \"North services can not be the destination for control pipelines.\"\n            if value in south_north_schedules:\n                raise ValueError(error_message)\n            if not any(sch.name == value for sch in schedules\n                       if sch.schedule_type == 1 and sch.process_name in ('south_c', 'north_C')):\n                raise ValueError(\"'{}' not a valid service.\".format(value))\n        elif _type == 5:\n            # Verify against all type of schedules\n            if not any(sch.name == value for sch in schedules):\n                raise ValueError(\"'{}' not a valid schedule name.\".format(value))\n        else:\n            # Verify against STARTUP type schedule and having South, North based service\n            if not any(sch.name == value for sch in schedules\n                       if sch.schedule_type == 1 and sch.process_name in ('south_c', 'north_C')):\n                raise ValueError(\"'{}' not a valid service.\".format(value))\n\n    async def get_control_scripts():\n        script_payload = PayloadBuilder().SELECT(\"name\").payload()\n 
       scripts = await storage.query_tbl_with_payload('control_script', script_payload)\n        if not any(s['name'] == value for s in scripts['rows']):\n            raise ValueError(\"'{}' not a valid script name.\".format(value))\n\n    async def get_assets():\n        asset_payload = PayloadBuilder().DISTINCT([\"asset\"]).payload()\n        assets = await storage.query_tbl_with_payload('asset_tracker', asset_payload)\n        if not any(ac['asset'] == value for ac in assets['rows']):\n            raise ValueError(\"'{}' not a valid asset name.\".format(value))\n\n    async def get_notifications():\n        all_notifications = await config_mgr._read_all_child_category_names(\"Notifications\")\n        if not any(notify['child'] == value for notify in all_notifications):\n            raise ValueError(\"'{}' not a valid notification instance name.\".format(value))\n\n    if (lookup_name == \"source\" and _type == 2) or (lookup_name == 'destination' and _type == 2):\n        # Verify schedule name in startup type and south, north based schedules\n        await get_schedules()\n    elif (lookup_name == \"source\" and _type == 6) or (lookup_name == 'destination' and _type == 4):\n        # Verify control script name\n        await get_control_scripts()\n    elif lookup_name == \"source\" and _type == 4:\n        # Verify notification instance name\n        await get_notifications()\n    elif lookup_name == \"source\" and _type == 5:\n        # Verify schedule name in all type of schedules\n        await get_schedules()\n    elif lookup_name == \"destination\" and _type == 3:\n        # Verify asset name\n        await get_assets()\n    else:\n        \"\"\"No validation required for source id 1(Any) & destination ids 1(Any) and 5(Broadcast)\"\"\"\n        pass\n\n\nasync def _remove_filters(storage, filters, cp_id, cp_name=None):\n    cf_mgr = ConfigurationManager(storage)\n    if filters:\n        for f in filters:\n            # Delete entry from control_filter table\n         
   payload = PayloadBuilder().WHERE(['cpid', '=', cp_id]).AND_WHERE(['fname', '=', f]).payload()\n            await storage.delete_from_tbl(\"control_filters\", payload)\n\n            # Delete filter from filters table\n            filter_name = f.replace(\"ctrl_{}_\".format(cp_name), '')\n            payload = PayloadBuilder().WHERE(['name', '=', filter_name]).payload()\n            await storage.delete_from_tbl(\"filters\", payload)\n\n            # Delete the filters category\n            await cf_mgr.delete_category_and_children_recursively(f)\n            await cf_mgr.delete_category_and_children_recursively(filter_name)\n\n\nasync def _check_filters(storage, cp_filters):\n    is_exist = False\n    filters_result = await storage.query_tbl(\"filters\")\n    if filters_result['rows']:\n        filters_instances_list = [f['name'] for f in filters_result['rows']]\n        check_if = all(f in filters_instances_list for f in cp_filters)\n        if check_if:\n            is_exist = True\n        else:\n            _logger.warning(\"Filters do not exist as per the given {} payload.\".format(cp_filters))\n    else:\n        _logger.warning(\"No filter instances exists in the system.\")\n    return is_exist\n\n\nasync def _update_filters(storage, cp_id, cp_name, cp_filters, db_filters=None):\n    if db_filters is None:\n        db_filters = []\n    cf_mgr = ConfigurationManager(storage)\n    new_filters = []\n    children = []\n    insert_filters = list(filter(lambda x: x not in db_filters, cp_filters))\n    update_filters = list(filter(lambda x: x in cp_filters, db_filters))\n    delete_filters = list(filter(lambda x: x not in cp_filters, db_filters))\n    if insert_filters:\n        for fid, fname in enumerate(insert_filters, start=1):\n            # get plugin config of filter\n            category_value = await cf_mgr.get_category_all_items(category_name=fname)\n            cat_value = copy.deepcopy(category_value)\n            if cat_value is None:\n              
  raise ValueError(\n                    \"{} category does not exist for {} control pipeline filter.\".format(\n                        fname, cp_name))\n            # Copy value in default and remove value KV pair for creating new category\n            for k, v in cat_value.items():\n                v['default'] = v['value']\n                v.pop('value', None)\n            # Create category\n            cat_name = \"ctrl_{}_{}\".format(cp_name, fname)\n            await cf_mgr.create_category(category_name=cat_name,\n                                         category_description=\"Filter of {} control pipeline.\".format(\n                                             cp_name),\n                                         category_value=cat_value,\n                                         keep_original_items=True)\n            new_category = await cf_mgr.get_category_all_items(cat_name)\n            if new_category is None:\n                raise KeyError(\"No such {} category found.\".format(cat_name))\n            # Create entry in control_filters table\n            column_names = {\"cpid\": cp_id, \"forder\": fid, \"fname\": cat_name}\n            payload = PayloadBuilder().INSERT(**column_names).payload()\n            await storage.insert_into_tbl(\"control_filters\", payload)\n            new_filters.append(cat_name)\n            children.append(cat_name)\n            children.append(fname)\n        try:\n            # Create parent-child relation with Dispatcher service\n            await cf_mgr.create_child_category(\"dispatcher\", children)\n        except Exception:\n            pass\n    if update_filters:\n        # only order\n        for fid, fname in enumerate(cp_filters, start=1):\n            payload = PayloadBuilder().SET(forder=fid).WHERE([\"fname\", \"=\", \"ctrl_{}_{}\".format(cp_name, fname)]).AND_WHERE([\"cpid\", \"=\", cp_id]).payload()\n            await storage.update_tbl(\"control_filters\", payload)\n    if delete_filters:\n        
del_filters = [\"ctrl_{}_{}\".format(cp_name, f) for f in list(delete_filters)]\n        await _remove_filters(storage, del_filters, cp_id, cp_name)\n    return new_filters\n\n\nasync def _check_unique_pipeline(name, old_name=None, cpid=None):\n    \"\"\"Disallow pipeline name cases:\n       a) If given pipeline name already exists in DB.\n       b) If given pipeline has already attached filters.\n    \"\"\"\n    if cpid is not None:\n        if name != old_name:\n            pipeline_filter_result = await _get_table_column_by_value(\"control_filters\", \"cpid\", cpid, limit=1)\n            if pipeline_filter_result['rows']:\n                raise ValueError('Filters are attached. Pipeline name cannot be changed.')\n            pipeline_result = await _get_table_column_by_value(\"control_pipelines\", \"name\", name, limit=1)\n            if pipeline_result['rows']:\n                raise ValueError('{} pipeline already exists, name cannot be changed.'.format(name))\n    else:\n        pipeline_result = await _get_table_column_by_value(\"control_pipelines\", \"name\", name, limit=1)\n        if pipeline_result['rows']:\n            raise ValueError('{} pipeline already exists with the same name.'.format(name))\n"
  },
  {
    "path": "python/fledge/services/core/api/control_service/script_management.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nimport datetime\nimport uuid\n\nfrom aiohttp import web\n\nfrom fledge.common.acl_manager import ACLManager\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.services.core import connect, server\nfrom fledge.services.core.scheduler.entities import Schedule, ManualSchedule\nfrom fledge.services.core.api.control_service.exceptions import *\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2021 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    -----------------------------------------------------------------------\n    | GET POST            | /fledge/control/script                        |\n    | GET PUT DELETE      | /fledge/control/script/{script_name}          |\n    | POST                | /fledge/control/script/{script_name}/schedule |\n    -----------------------------------------------------------------------\n\"\"\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\ndef setup(app):\n    # automation schedule task\n    app.router.add_route('POST', '/fledge/control/script/{script_name}/schedule', add_schedule_and_configuration)\n\n    # CRUD's\n    app.router.add_route('POST', '/fledge/control/script', add)\n    app.router.add_route('GET', '/fledge/control/script', get_all)\n    app.router.add_route('GET', '/fledge/control/script/{script_name}', get_by_name)\n    app.router.add_route('PUT', '/fledge/control/script/{script_name}', update)\n    app.router.add_route('DELETE', '/fledge/control/script/{script_name}', delete)\n\n\nasync def add_schedule_and_configuration(request: 
web.Request) -> web.Response:\n    \"\"\" Create a schedule and configuration category for the task\n       :Example:\n           curl -H \"authorization: $AUTH_TOKEN\" -sX POST http://localhost:8081/fledge/control/script/testScript/schedule\n           curl -H \"authorization: $AUTH_TOKEN\" -sX POST http://localhost:8081/fledge/control/script/testScript/schedule -d '{\"parameters\": {\"foobar\": \"0.8\"}}'\n       \"\"\"\n    params = None\n    try:\n        data = await request.json()\n        params = data.get('parameters')\n        if params is None:\n            msg = \"parameters field is required.\"\n            return web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n        if not isinstance(params, dict):\n            msg = \"parameters must be a dictionary.\"\n            return web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n        if not params:\n            msg = \"parameters cannot be empty.\"\n            return web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception:\n        pass\n    try:\n        name = request.match_info.get('script_name', None)\n        storage = connect.get_storage_async()\n        payload = PayloadBuilder().SELECT(\"name\", \"steps\", \"acl\").WHERE(['name', '=', name]).payload()\n        result = await storage.query_tbl_with_payload('control_script', payload)\n        if 'rows' in result:\n            if result['rows']:\n                write_steps, macros_used_in_write_steps = _validate_write_steps(result['rows'][0]['steps'])\n                if not write_steps:\n                    msg = 'write steps KV pair is missing for {} script.'.format(name)\n                    return web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n                if params is not None:\n                    for pk, pv in params.items():\n                        if pk not in macros_used_in_write_steps:\n                            msg = '{} param is 
not found in write steps for {} script.'.format(pk, name)\n                            return web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n                        if not isinstance(pv, str):\n                            msg = 'Value should be in string for {} param.'.format(pk)\n                            return web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n                if params is not None:\n                    for w in write_steps:\n                        for k, v in w['values'].items():\n                            if any(p in v for p in params):\n                                # Amend parameters to the existing dict\n                                w['values'][v[1:-1]] = w['values'].pop(k)\n                                w['values'][v[1:-1]] = params[v[1:-1]]\n                # Check if schedule exists for an automation task\n                schedule_list = await server.Server.scheduler.get_schedules()\n                for sch in schedule_list:\n                    if sch.name == name and sch.process_name == \"automation_script\":\n                        msg = '{} schedule already exists.'.format(name)\n                        return web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n                # Create configuration category for a task\n                cf_mgr = ConfigurationManager(connect.get_storage_async())\n                category_value = {\"write\": {\"default\": json.dumps(write_steps),\n                                            \"description\": \"Dispatcher write operation using automation script\",\n                                            \"type\": \"string\"}}\n                category_desc = \"{} configuration for automation script task\".format(name)\n                cat_name = \"{}-automation-script\".format(name)\n                await cf_mgr.create_category(category_name=cat_name, category_description=category_desc,\n                                             
category_value=category_value, keep_original_items=True, display_name=name)\n                # Create Parent-child relation\n                await cf_mgr.create_child_category(\"dispatcher\", [cat_name])\n                # Create schedule for an automation script\n                manual_schedule = ManualSchedule()\n                manual_schedule.name = name\n                manual_schedule.process_name = 'automation_script'\n                manual_schedule.repeat = datetime.timedelta(seconds=0)\n                manual_schedule.enabled = True\n                manual_schedule.exclusive = True\n                await server.Server.scheduler.save_schedule(manual_schedule)\n                # Set the schedule id\n                schedule_id = manual_schedule.schedule_id\n                # Add schedule_id to the schedule queue\n                await server.Server.scheduler.queue_task(schedule_id)\n            else:\n                raise NameNotFoundError('Script with name {} is not found.'.format(name))\n        else:\n            raise StorageServerError(result)\n    except StorageServerError as err:\n        msg = \"Storage error: {}\".format(str(err))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except (ValueError, NameNotFoundError) as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except (KeyError, RuntimeError) as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to add schedule task for control script {}.\".format(name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        msg = \"Schedule and configuration is created for control script {}\".format(name)\n        return web.json_response({\"message\": msg})\n\n\nasync def get_all(request: 
web.Request) -> web.Response:\n    \"\"\" Get list of all scripts\n\n    :Example:\n        curl -H \"authorization: $AUTH_TOKEN\" -sX GET http://localhost:8081/fledge/control/script\n    \"\"\"\n    try:\n        storage = connect.get_storage_async()\n        cf_mgr = ConfigurationManager(storage)\n        payload = PayloadBuilder().SELECT(\"name\", \"steps\", \"acl\").payload()\n        result = await storage.query_tbl_with_payload('control_script', payload)\n        scripts = []\n        if 'rows' in result:\n            if result['rows']:\n                # Get all schedules\n                schedule_list = await server.Server.scheduler.get_schedules()\n                for row in result['rows']:\n                    # Add configuration to script\n                    cat_name = \"{}-automation-script\".format(row['name'])\n                    get_category = await cf_mgr.get_category_all_items(cat_name)\n                    row['configuration'] = {}\n                    if get_category is not None:\n                        row['configuration'] = {\"categoryName\": cat_name}\n                        row['configuration'].update(get_category)\n                    # Add schedule to script; initialise before the loop so the key\n                    # exists even when no matching schedule is found\n                    row['schedule'] = {}\n                    for sch in schedule_list:\n                        if sch.name == row['name'] and sch.process_name == \"automation_script\":\n                            row['schedule'] = {\n                                'id': str(sch.schedule_id),\n                                'name': sch.name,\n                                'processName': sch.process_name,\n                                'type': Schedule.Type(int(sch.schedule_type)).name,\n                                'repeat': 0,\n                                'time': 0,\n                                'day': sch.day,\n                                'exclusive': sch.exclusive,\n                                'enabled': sch.enabled\n                            }\n  
                          break\n                    scripts.append(row)\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Get Control script failed.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({\"scripts\": scripts})\n\n\nasync def get_by_name(request: web.Request) -> web.Response:\n    \"\"\" Get a named script\n\n    :Example:\n        curl -H \"authorization: $AUTH_TOKEN\" -sX GET http://localhost:8081/fledge/control/script/testScript\n    \"\"\"\n    try:\n        name = request.match_info.get('script_name', None)\n        storage = connect.get_storage_async()\n        cf_mgr = ConfigurationManager(storage)\n        payload = PayloadBuilder().SELECT(\"name\", \"steps\", \"acl\").WHERE(['name', '=', name]).payload()\n        result = await storage.query_tbl_with_payload('control_script', payload)\n        if 'rows' in result:\n            if result['rows']:\n                rows = result['rows'][0]\n                rows['configuration'] = {}\n                rows['schedule'] = {}\n                try:\n                    # Add configuration to script\n                    cat_name = \"{}-automation-script\".format(rows['name'])\n                    get_category = await cf_mgr.get_category_all_items(cat_name)\n                    if get_category is not None:\n                        rows['configuration'] = {\"categoryName\": cat_name}\n                        rows['configuration'].update(get_category)\n                    # Add schedule to script\n                    sch = await server.Server.scheduler.get_schedule_by_name(rows['name'])\n                    rows['schedule'] = {\n                        'id': str(sch.schedule_id),\n                        'name': sch.name,\n                        'processName': sch.process_name,\n                        'type': Schedule.Type(int(sch.schedule_type)).name,\n                        
'repeat': 0,\n                        'time': 0,\n                        'day': sch.day,\n                        'exclusive': sch.exclusive,\n                        'enabled': sch.enabled\n                    }\n                except:\n                    pass\n            else:\n                raise NameNotFoundError('Script with name {} is not found.'.format(name))\n        else:\n            raise StorageServerError(result)\n    except StorageServerError as err:\n        msg = \"Storage error: {}\".format(str(err))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except NameNotFoundError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Get Control script by name failed.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(rows)\n\n\nasync def add(request: web.Request) -> web.Response:\n    \"\"\" Add a script\n\n    :Example:\n        curl -H \"authorization: $AUTH_TOKEN\" -sX POST http://localhost:8081/fledge/control/script -d '{\"name\": \"testScript\", \"steps\": [{\"write\": {\"order\": 0, \"service\": \"modbus1\", \"values\": {\"speed\": \"$requestedSpeed$\", \"fan\": \"1200\"}, \"condition\": {\"key\": \"requestedSpeed\", \"condition\": \"<\", \"value\": \"2000\"}}}, {\"delay\": {\"order\": 1, \"duration\": 1500}}]}'\n        curl -H \"authorization: $AUTH_TOKEN\" -sX POST http://localhost:8081/fledge/control/script -d '{\"name\": \"test\", \"steps\": [], \"acl\": \"testACL\"}'\n    \"\"\"\n    try:\n        data = await request.json()\n        name = data.get('name', None)\n        steps = data.get('steps', None)\n        acl = data.get('acl', None)\n        if name is None:\n            raise ValueError('Script name is required.')\n        else:\n            if not 
isinstance(name, str):\n                raise TypeError('Script name must be a string.')\n            name = name.strip()\n            if name == \"\":\n                raise ValueError('Script name cannot be empty.')\n        if steps is None:\n            raise ValueError('steps parameter is required.')\n        if not isinstance(steps, list):\n            raise ValueError('steps must be a list.')\n        if acl is not None:\n            if not isinstance(acl, str):\n                raise ValueError('ACL name must be a string.')\n            acl = acl.strip()\n        _steps = _validate_steps_and_convert_to_str(steps)\n        result = {}\n        storage = connect.get_storage_async()\n        # Check duplicate script record\n        payload = PayloadBuilder().SELECT(\"name\").WHERE(['name', '=', name]).payload()\n        get_control_script_name_result = await storage.query_tbl_with_payload('control_script', payload)\n        if get_control_script_name_result['count'] == 0:\n            payload = PayloadBuilder().INSERT(name=name, steps=_steps).payload()\n            if acl is not None:\n                # Check the existence of valid ACL record\n                acl_payload = PayloadBuilder().SELECT(\"name\").WHERE(['name', '=', acl]).payload()\n                acl_result = await storage.query_tbl_with_payload('control_acl', acl_payload)\n                if 'rows' in acl_result:\n                    if acl_result['rows']:\n                        payload = PayloadBuilder().INSERT(name=name, steps=_steps, acl=acl).payload()\n                    else:\n                        raise NameNotFoundError('ACL with name {} is not found.'.format(acl))\n                else:\n                    raise StorageServerError(acl_result)\n            # Insert the script record\n            insert_control_script_result = await storage.insert_into_tbl(\"control_script\", payload)\n            if acl is not None:\n                acl_handler = ACLManager(storage)\n                
await acl_handler.handle_create_for_acl_usage(entity_name=name,\n                                                              acl_name=acl, entity_type=\"script\")\n\n            if 'response' in insert_control_script_result:\n                if insert_control_script_result['response'] == \"inserted\":\n                    result = {\"name\": name, \"steps\": json.loads(_steps)}\n                    if acl is not None:\n                        # Append ACL into response if acl exists in payload\n                        result[\"acl\"] = acl\n                    # CTSAD audit trail entry\n                    audit = AuditLogger(storage)\n                    await audit.information('CTSAD', result)\n            else:\n                raise StorageServerError(insert_control_script_result)\n        else:\n            msg = 'Script with name {} already exists.'.format(name)\n            raise DuplicateNameError(msg)\n    except StorageServerError as err:\n        msg = \"Storage error: {}\".format(str(err))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except DuplicateNameError as err:\n        msg = str(err)\n        raise web.HTTPConflict(reason=msg, body=json.dumps({\"message\": msg}))\n    except NameNotFoundError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except (TypeError, ValueError) as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Control script create failed.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(result)\n\n\nasync def update(request: web.Request) -> web.Response:\n    \"\"\" Update a script\n    Only the steps & ACL parameters can be updated\n\n    :Example:\n        curl -H \"authorization: 
$AUTH_TOKEN\" -sX PUT http://localhost:8081/fledge/control/script/testScript -d '{\"steps\": []}'\n        curl -H \"authorization: $AUTH_TOKEN\" -sX PUT http://localhost:8081/fledge/control/script/test -d '{\"steps\": [{\"delay\": {\"order\": 0, \"duration\": 12}}], \"acl\": \"testACL\"}'\n    \"\"\"\n    try:\n        name = request.match_info.get('script_name', None)\n        data = await request.json()\n        steps = data.get('steps', None)\n        acl = data.get('acl', None)\n        if steps is None and acl is None:\n            raise ValueError(\"Nothing to update for the given payload.\")\n        if steps is not None and not isinstance(steps, list):\n            raise ValueError('steps must be a list.')\n        if acl is not None:\n            if not isinstance(acl, str):\n                raise ValueError('ACL must be a string.')\n            acl = acl.strip()\n        set_values = {}\n        values = {'name': name}\n        if steps is not None:\n            values['steps'] = steps\n            set_values[\"steps\"] = _validate_steps_and_convert_to_str(steps)\n        storage = connect.get_storage_async()\n        # Check existence of script record\n        payload = PayloadBuilder().SELECT(\"name\", \"steps\", \"acl\").WHERE(['name', '=', name]).payload()\n        result = await storage.query_tbl_with_payload('control_script', payload)\n        message = \"\"\n        if 'rows' in result:\n            if result['rows']:\n                if acl is not None:\n                    if len(acl):\n                        # Check the existence of valid ACL record\n                        acl_payload = PayloadBuilder().SELECT(\"name\").WHERE(['name', '=', acl]).payload()\n                        acl_result = await storage.query_tbl_with_payload('control_acl', acl_payload)\n                        if 'rows' in acl_result:\n                            if not acl_result['rows']:\n                                raise NameNotFoundError('ACL with name {} is not 
found.'.format(acl))\n                        else:\n                            raise StorageServerError(acl_result)\n                    set_values[\"acl\"] = acl\n                    values[\"acl\"] = acl\n                # Update script record\n                update_query = PayloadBuilder()\n                update_query.SET(**set_values).WHERE(['name', '=', name])\n                update_result = await storage.update_tbl(\"control_script\", update_query.payload())\n\n                acl_handler = ACLManager(storage)\n                old_acl_name = await acl_handler.get_acl_for_an_entity(name, \"script\")\n                if old_acl_name != \"\":\n                    # The acl attached to this script has changed.\n                    if acl is not None:\n                        if acl != \"\":\n                            # New acl has been attached to this script.\n                            await acl_handler.handle_update_for_acl_usage(entity_name=name,\n                                                                          acl_name=acl, entity_type=\"script\")\n                        # The acl has been detached.\n                        else:\n                            await acl_handler.handle_delete_for_acl_usage(entity_name=name,\n                                                                          acl_name=acl, entity_type=\"script\")\n                else:\n                    # New acl is attached to this script which was previously empty.\n                    if acl:\n                        await acl_handler.handle_create_for_acl_usage(entity_name=name,\n                                                                      acl_name=acl, entity_type=\"script\")\n                if 'response' in update_result:\n                    if update_result['response'] == \"updated\":\n                        message = \"Control script {} updated successfully.\".format(name)\n                        # CTSCH audit trail entry\n                        
audit = AuditLogger(storage)\n                        await audit.information('CTSCH', {'script': values, 'old_script': result['rows'][0]})\n                else:\n                    raise StorageServerError(update_result)\n            else:\n                raise NameNotFoundError('No such {} script found.'.format(name))\n        else:\n            raise StorageServerError(result)\n    except StorageServerError as err:\n        msg = \"Storage error: {}\".format(str(err))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except NameNotFoundError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except (TypeError, ValueError) as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Control script update failed.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({\"message\": message})\n\n\nasync def delete(request: web.Request) -> web.Response:\n    \"\"\" Delete a script\n\n    :Example:\n        curl -H \"authorization: $AUTH_TOKEN\" -sX DELETE http://localhost:8081/fledge/control/script/test\n    \"\"\"\n    try:\n        name = request.match_info.get('script_name', None)\n        storage = connect.get_storage_async()\n        payload = PayloadBuilder().SELECT(\"name\").WHERE(['name', '=', name]).payload()\n        result = await storage.query_tbl_with_payload('control_script', payload)\n        message = \"\"\n        if 'rows' in result:\n            if result['rows']:\n                try:\n                    # Delete automation script category and schedule\n                    cf_mgr = ConfigurationManager(connect.get_storage_async())\n                    cat_name = \"{}-automation-script\".format(name)\n                
    await cf_mgr.delete_category_and_children_recursively(cat_name)\n                    schedules_list = await server.Server.scheduler.get_schedules()\n                    for sch in schedules_list:\n                        if sch.name == name and sch.process_name == \"automation_script\":\n                            schedule_id = str(sch.schedule_id)\n                            await server.Server.scheduler.disable_schedule(uuid.UUID(schedule_id))\n                            await server.Server.scheduler.delete_schedule(uuid.UUID(schedule_id))\n                            break\n                except:\n                    pass\n                payload = PayloadBuilder().WHERE(['name', '=', name]).payload()\n                delete_result = await storage.delete_from_tbl(\"control_script\", payload)\n                acl_handler = ACLManager(storage)\n                acl_name = await acl_handler.get_acl_for_an_entity(name, \"script\")\n                if acl_name != \"\":\n                    await acl_handler.handle_delete_for_acl_usage(entity_name=name,\n                                                                  acl_name=acl_name, entity_type=\"script\")\n\n                if 'response' in delete_result:\n                    if delete_result['response'] == \"deleted\":\n                        message = \"{} script deleted successfully.\".format(name)\n                        # CTSDL audit trail entry\n                        audit = AuditLogger(storage)\n                        await audit.information('CTSDL', {'message': message, \"name\": name})\n                else:\n                    raise StorageServerError(delete_result)\n            else:\n                raise NameNotFoundError('No such {} script found.'.format(name))\n        else:\n            raise StorageServerError(result)\n    except StorageServerError as err:\n        msg = \"Storage error: {}\".format(str(err))\n        raise web.HTTPInternalServerError(reason=msg, 
body=json.dumps({\"message\": msg}))\n    except NameNotFoundError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Control script delete failed.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({\"message\": message})\n\n\ndef _validate_write_steps(steps: list) -> tuple:\n    write_step_values = []\n    macro_step_values = []\n    for k in steps:\n        for k1, v1 in k.items():\n            if k1 == 'write':\n                for k2, v2 in v1['values'].items():\n                    if v2.startswith(\"$\") and v2.endswith(\"$\"):\n                        macro_step_values.append(v2[1:-1])\n                write_step_values.append(v1)\n    return write_step_values, macro_step_values\n\n\ndef _validate_steps_and_convert_to_str(payload: list) -> str:\n    \"\"\"\n    NOTE: We cannot really validate the internal KV pairs of a step as they relate to the plugin configuration.\n          We only type-check each step and verify that every item has an 'order' KV pair with a unique value.\n\n          Also, the supported step types are hardcoded at the moment; we may add a new API to expose them\n          so that any client can fetch the list from there.\n          For example:\n          the GUI client has prepared this list on its own to populate its dropdown.\n          Therefore, if a step type is added or changed, both sides currently need to be updated.\n    \"\"\"\n    steps_supported_types = [\"configure\", \"delay\", \"operation\", \"script\", \"write\"]\n    unique_order_items = []\n    if payload:\n        for p in payload:\n            if isinstance(p, dict):\n                for k, v in p.items():\n                    if k not in steps_supported_types:\n                        raise TypeError('{} is an invalid 
step. Supported step types are {} '\n                                        '(case-sensitive).'.format(k, steps_supported_types))\n                    else:\n                        if isinstance(v, dict):\n                            if 'order' not in v:\n                                raise ValueError('order key is missing for {} step.'.format(k))\n                            else:\n                                if isinstance(v['order'], int):\n                                    if v['order'] < 0:\n                                        raise ValueError('order should be a non-negative integer for {} step.'.format(k))\n                                    if v['order'] not in unique_order_items:\n                                        unique_order_items.append(v['order'])\n                                    else:\n                                        raise ValueError('order with value {} is also found in {}. '\n                                                         'It should be unique for each step item.'.format(\n                                            v['order'], k))\n                                else:\n                                    raise TypeError('order should be an integer for {} step.'.format(k))\n                        else:\n                            raise ValueError(\"For {} step, nested elements should be in a dictionary.\".format(k))\n            else:\n                raise ValueError('Steps should be a list of dictionaries.')\n    # Convert steps payload list into a JSON string\n    return json.dumps(payload)\n"
  },
  {
    "path": "python/fledge/services/core/api/exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom aiohttp import web\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2021, Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n__all__ = ('AuthenticationIsOptional', 'VerificationFailed', 'ConflictError')\n\n\nclass AuthenticationIsOptional(web.HTTPPreconditionFailed):\n    pass\n\nclass VerificationFailed(web.HTTPUnauthorized):\n    pass\n\nclass ConflictError(web.HTTPConflict):\n    def __init__(self, message):\n        super().__init__(reason=message)\n        self.message = message\n\n"
  },
  {
    "path": "python/fledge/services/core/api/filters.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nimport copy\nimport aiohttp\nfrom aiohttp import web\nfrom typing import List, Dict, Tuple, Union\n\nfrom fledge.common import utils\nfrom fledge.common.common import _FLEDGE_ROOT\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\n\nfrom fledge.services.core import connect\nfrom fledge.services.core.api import utils as apiutils\nfrom fledge.services.core.api.exceptions import ConflictError\nfrom fledge.services.core.api.plugins import common\n\n__author__ = \"Massimiliano Pinto, Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    ---------------------------------------------------------------------------\n    | POST GET        | /fledge/filter                                       |\n    | PUT GET DELETE  | /fledge/filter/{user_name}/pipeline                  |\n    | GET DELETE      | /fledge/filter/{filter_name}                         |\n    ---------------------------------------------------------------------------\n\"\"\"\n_LOGGER = FLCoreLogger().get_logger(__name__)\n\n\nasync def create_filter(request: web.Request) -> web.Response:\n    \"\"\"\n    Create a new filter with a specific plugin\n    :Example:\n     curl -X POST http://localhost:8081/fledge/filter -d '{\"name\": \"North_Readings_to_PI_scale_stage_1Filter\", \"plugin\": \"scale\"}'\n     curl -X POST http://localhost:8081/fledge/filter -d '{\"name\": \"North_Readings_to_PI_scale_stage_1Filter\", \"plugin\": \"scale\", \"filter_config\": {\"offset\":\"1\",\"enable\":\"true\"}}'\n\n    
'name' is the filter name\n    'plugin' is the filter plugin name\n    'filter_config' is the new configuration of the plugin, part or full, should we desire to modify\n    the config at creation time itself\n\n    The plugin is loaded and default config from 'plugin_info'\n    is fetched.\n\n    A new config category 'name' is created:\n    items are:\n       - 'plugin'\n       - all items from default plugin config\n\n    NOTE: The 'create_category' call is made with keep_original_items = True\n\n    \"\"\"\n    try:\n        data = await request.json()\n        filter_name = data.get('name', None)\n        plugin_name = data.get('plugin', None)\n        filter_config = data.get('filter_config', {})\n        if not filter_name or not plugin_name:\n            raise TypeError('Filter name, plugin name are mandatory.')\n\n        storage = connect.get_storage_async()\n        cf_mgr = ConfigurationManager(storage)\n\n        # Check first whether filter already exists\n        category_info = await cf_mgr.get_category_all_items(category_name=filter_name)\n        if category_info is not None:\n            raise ValueError(\"This '{}' filter already exists\".format(filter_name))\n\n        # Load C/Python filter plugin info\n        try:\n            # Try fetching Python filter\n            plugin_module_path = \"{}/python/fledge/plugins/filter/{}\".format(_FLEDGE_ROOT, plugin_name)\n            loaded_plugin_info = common.load_and_fetch_python_plugin_info(plugin_module_path, plugin_name, \"filter\")\n        except FileNotFoundError as ex:\n            # Load C filter plugin\n            loaded_plugin_info = apiutils.get_plugin_info(plugin_name, dir='filter')\n\n        if not loaded_plugin_info or 'config' not in loaded_plugin_info:\n            message = \"Can not get 'plugin_info' detail from plugin '{}'\".format(plugin_name)\n            raise ValueError(message)\n\n        # Sanity checks\n        plugin_config = loaded_plugin_info['config']\n        
loaded_plugin_type = loaded_plugin_info['type']\n        loaded_plugin_name = plugin_config['plugin']['default']\n        if plugin_name != loaded_plugin_name or loaded_plugin_type != 'filter':\n            raise ValueError(\n                \"Loaded plugin '{}', type '{}', doesn't match the specified one '{}', type 'filter'\".format(\n                    loaded_plugin_name, loaded_plugin_type, plugin_name))\n\n        # Set dict value for 'default' if type is JSON. This is required by the configuration manager\n        for key, value in plugin_config.items():\n            if value['type'] == 'JSON':\n                value['default'] = json.loads(json.dumps(value['default']))\n\n        # Check if filter exists in filters table\n        payload = PayloadBuilder().WHERE(['name', '=', filter_name]).payload()\n        result = await storage.query_tbl_with_payload(\"filters\", payload)\n        if len(result[\"rows\"]) == 0:\n            # Create entry in filters table\n            payload = PayloadBuilder().INSERT(name=filter_name, plugin=plugin_name).payload()\n            await storage.insert_into_tbl(\"filters\", payload)\n\n        # Everything ok, now create filter config\n        filter_desc = \"Configuration of '{}' filter for plugin '{}'\".format(filter_name, plugin_name)\n        await cf_mgr.create_category(category_name=filter_name,\n                                     category_description=filter_desc,\n                                     category_value=plugin_config,\n                                     keep_original_items=True)\n\n        # If custom filter_config is in POST data, then update the value for each config item\n        if filter_config is not None:\n            if not isinstance(filter_config, dict):\n                raise ValueError('filter_config must be a JSON object')\n            await cf_mgr.update_configuration_item_bulk(filter_name, filter_config)\n\n        # Fetch the new created filter: get category items\n        category_info 
= await cf_mgr.get_category_all_items(category_name=filter_name)\n        if category_info is None:\n            raise ValueError(\"No such '{}' filter found.\".format(filter_name))\n        else:\n            return web.json_response({'filter': filter_name, 'description': filter_desc, 'value': category_info})\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg)\n    except TypeError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg)\n    except StorageServerError as ex:\n        msg = ex.error\n        await _delete_configuration_category(storage, filter_name)  # Revert configuration entry\n        _LOGGER.exception(\"Failed to create filter with: {}\".format(msg))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _LOGGER.error(ex, \"Add filter failed.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n\nasync def add_filters_pipeline(request: web.Request) -> web.Response:\n    \"\"\"\n    Add filter names to \"filter\" item in {user_name}\n\n    PUT /fledge/filter/{user_name}/pipeline\n\n    'pipeline' is the array of filter category names to set\n    into 'filter' default/value properties\n\n    :Example: set 'pipeline' for user 'NorthReadings_to_PI'\n    curl -X PUT http://localhost:8081/fledge/filter/NorthReadings_to_PI/pipeline -d '{\"pipeline\": [\"Scale10Filter\", \"Python_assetCodeFilter\"]}'\n\n    Configuration item 'filter' is added to {user_name}\n    or updated with the pipeline list\n\n    Returns the filter pipeline on success:\n    {\"pipeline\": [\"Scale10Filter\", \"Python_assetCodeFilter\"]}\n\n    Query string parameters:\n    - append_filter=true|false       Default false\n    - allow_duplicates=true|false    Default true\n\n    :Example:\n    curl -X PUT 
http://localhost:8081/fledge/filter/NorthReadings_to_PI/pipeline?append_filter=true|false -d\n    '{\n        \"pipeline\": [\"Scale10Filter\", \"Python_assetCodeFilter\"]\n    }'\n    curl -X PUT http://localhost:8081/fledge/filter/NorthReadings_to_PI/pipeline?allow_duplicates=true|false -d\n    '{\n        \"pipeline\": [\"Scale10Filter\", \"Python_assetCodeFilter\"]\n    }'\n    curl -X PUT 'http://localhost:8081/fledge/filter/NorthReadings_to_PI/pipeline?append_filter=true&allow_duplicates=true|false' -d\n    '{\n        \"pipeline\": [\"Scale10Filter\", \"Python_assetCodeFilter\"]\n    }'\n\n    Delete pipeline:\n    curl -X PUT -d '{\"pipeline\": []}' http://localhost:8081/fledge/filter/NorthReadings_to_PI/pipeline\n\n    NOTE: the method also adds the filters category names under\n    parent category {user_name}\n    \"\"\"\n\n    def find_duplicates(filters):\n        \"\"\" Validates the filter input data to ensure no duplicate elements \"\"\"\n        seen = set()\n        for item in filters:\n            # Complex case\n            if isinstance(item, list):\n                for sub_item in item:\n                    if sub_item in seen:\n                        return True, sub_item\n                    seen.add(sub_item)\n            else:\n                # Linear case\n                if item in seen:\n                    return True, item\n                seen.add(item)\n        return False, \"No duplicates found!\"\n\n    try:\n        data = await request.json()\n        filter_list = data.get('pipeline', None)\n        user_name = request.match_info.get('user_name', None)\n\n        if filter_list is None:\n            raise KeyError(\"pipeline key-value pair is required in the payload.\")\n\n        # Empty list [] is allowed as it clears the pipeline\n        if not isinstance(filter_list, list):\n            raise TypeError('pipeline must be either a list of filters or an empty list.')\n\n        # We just need to update the value of 
config_item with the \"pipeline\" property. Check whether\n        # we want to replace or update the list or we allow duplicate entries in the list.\n        # Default: replace the list and allow duplicates\n        append_filter = 'false'\n        allow_duplicates = 'true'\n        if 'append_filter' in request.query and request.query['append_filter'] != '':\n            append_filter = request.query['append_filter'].lower()\n            if append_filter not in ['true', 'false']:\n                raise ValueError(\"Only 'true' and 'false' are allowed for append_filter. {} given.\".format(\n                    append_filter))\n        if 'allow_duplicates' in request.query and request.query['allow_duplicates'] != '':\n            allow_duplicates = request.query['allow_duplicates'].lower()\n            if allow_duplicates not in ['true', 'false']:\n                raise ValueError(\"Only 'true' and 'false' are allowed for allow_duplicates. {} given.\".format(\n                    allow_duplicates))\n\n        # Find duplicates in filters list\n        has_duplicates, duplicate_element = find_duplicates(filter_list)\n        if has_duplicates:\n            raise TypeError(\"The filter name '{}' cannot be duplicated in the pipeline.\".format(duplicate_element))\n\n        storage = connect.get_storage_async()\n        cf_mgr = ConfigurationManager(storage)\n\n        # Check if category_name exists\n        category_info = await cf_mgr.get_category_all_items(category_name=user_name)\n        if category_info is None:\n            raise ValueError(\"No such '{}' category found.\".format(user_name))\n\n        async def _get_filter(f_name):\n            payload = PayloadBuilder().WHERE(['name', '=', f_name]).payload()\n            f_result = await storage.query_tbl_with_payload(\"filters\", payload)\n            if len(f_result[\"rows\"]) == 0:\n                raise ValueError(\"No such '{}' filter found in filters table.\".format(f_name))\n            user_filters = await 
storage.query_tbl_with_payload(\"filter_users\", payload)\n            if \"rows\" in user_filters and len(user_filters[\"rows\"]) != 0:\n                instance_name = user_filters[\"rows\"][0]['user']\n                if instance_name != user_name:\n                    error_msg = (\"The filter '{}' is currently in use. To update the filter pipeline, \"\n                           \"you must first remove it from the '{}' instance.\").format(f_name, instance_name)\n                    raise ConflictError(error_msg)\n\n        # Check and validate if all filters in the list exists in filters table\n        for _filter in filter_list:\n            if isinstance(_filter, list):\n                for f in _filter:\n                    await _get_filter(f)\n            else:\n                await _get_filter(_filter)\n        config_item = \"filter\"\n\n        if config_item in category_info:  # Check if config_item key has already been added to the category config\n            # Fetch existing filters\n            filter_value_from_storage = json.loads(category_info[config_item]['value'])\n            current_filters = filter_value_from_storage['pipeline']\n            # If filter list is empty don't check current list value\n            # Empty list [] clears current pipeline\n            if append_filter == 'true' and filter_list:\n                new_list = copy.deepcopy(current_filters)\n                for _filter in filter_list:\n                    if allow_duplicates == 'true' or _filter not in current_filters:\n                        new_list.append(_filter)\n            else:\n                new_list = filter_list\n            await _delete_child_filters(storage, cf_mgr, user_name, new_list, old_list=current_filters)\n            await _add_child_filters(storage, cf_mgr, user_name, new_list, old_list=current_filters)\n            # Config update for filter pipeline and a change callback after category children creation\n            await 
cf_mgr.set_category_item_value_entry(user_name, config_item, {'pipeline': new_list})\n        else:  # No existing filters, hence create new item 'config_item' and add the \"pipeline\" array as a string\n            new_item = dict(\n                {config_item: {'description': 'Filter pipeline', 'type': 'JSON', 'default': {}, 'readonly': 'true'}})\n            new_item[config_item]['default'] = json.dumps({'pipeline': filter_list})\n            await _add_child_filters(storage, cf_mgr, user_name, filter_list)\n            await cf_mgr.create_category(category_name=user_name, category_value=new_item, keep_original_items=True)\n\n        # Fetch up-to-date category item\n        result = await cf_mgr.get_category_item(user_name, config_item)\n        if result is None:\n            message = \"No detail found for user: {} and filter: {}\".format(user_name, config_item)\n            raise ValueError(message)\n        else:\n            # Create Parent-child relation for standalone filter category with service/username\n            # And that way we have the ability to remove the category when we delete the service\n            f_c = []\n            f_c2 = []\n            for _filter in filter_list:\n                if isinstance(_filter, list):\n                    for f in _filter:\n                        f_c.append(f)\n                else:\n                    f_c2.append(_filter)\n                if f_c:\n                    await cf_mgr.create_child_category(user_name, f_c)\n                if f_c2:\n                    await cf_mgr.create_child_category(user_name, f_c2)\n            return web.json_response(\n                {'result': \"Filter pipeline {} updated successfully\".format(json.loads(result['value']))})\n    except ConflictError as ex:\n        msg = ex.message\n        raise web.HTTPConflict(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, 
body=json.dumps({\"message\": msg}))\n    except (KeyError, TypeError) as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except StorageServerError as e:\n        msg = e.error\n        _LOGGER.exception(\"Add filters pipeline, caught storage error: {}\".format(msg))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _LOGGER.error(ex, \"Add filters pipeline failed.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n\nasync def get_filter(request: web.Request) -> web.Response:\n    \"\"\" GET filter detail\n\n    :Example:\n        curl -X GET http://localhost:8081/fledge/filter/<filter_name>\n    \"\"\"\n    filter_name = request.match_info.get('filter_name', None)\n    try:\n        storage = connect.get_storage_async()\n        cf_mgr = ConfigurationManager(storage)\n        filter_detail = {}\n\n        # Fetch the filter items: get category items\n        category_info = await cf_mgr.get_category_all_items(filter_name)\n        if category_info is None:\n            raise ValueError(\"No such '{}' category found.\".format(filter_name))\n        filter_detail.update({\"config\": category_info})\n\n        # Fetch filter detail\n        payload = PayloadBuilder().WHERE(['name', '=', filter_name]).payload()\n        result = await storage.query_tbl_with_payload(\"filters\", payload)\n        if len(result[\"rows\"]) == 0:\n            raise ValueError(\"No such filter '{}' found.\".format(filter_name))\n        row = result[\"rows\"][0]\n        filter_detail.update({\"name\": row[\"name\"], \"plugin\": row[\"plugin\"]})\n\n        # Fetch users which are using this filter\n        payload = PayloadBuilder().WHERE(['name', '=', filter_name]).payload()\n        result = await storage.query_tbl_with_payload(\"filter_users\", payload)\n        users = 
[]\n        for row in result[\"rows\"]:\n            users.append(row[\"user\"])\n        filter_detail.update({\"users\": users})\n    except StorageServerError as ex:\n        msg = ex.error\n        _LOGGER.exception(\"Failed to get filter name: {}. Storage error occurred: {}\".format(filter_name, msg))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except TypeError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _LOGGER.error(ex, \"Get {} filter failed.\".format(filter_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'filter': filter_detail})\n\n\nasync def get_filters(request: web.Request) -> web.Response:\n    \"\"\" GET list of filters\n\n    :Example:\n        curl -X GET http://localhost:8081/fledge/filter\n    \"\"\"\n    try:\n        storage = connect.get_storage_async()\n        result = await storage.query_tbl(\"filters\")\n        filters = result[\"rows\"]\n    except StorageServerError as ex:\n        msg = ex.error\n        _LOGGER.exception(\"Get all filters, caught storage exception: {}\".format(msg))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _LOGGER.error(ex, \"Get all filters failed.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'filters': filters})\n\n\nasync def get_filter_pipeline(request: web.Request) -> web.Response:\n    \"\"\" GET filter pipeline\n\n    :Example:\n        curl -X GET http://localhost:8081/fledge/filter/<user_name>/pipeline\n    \"\"\"\n    user_name = request.match_info.get('user_name', None)\n    try:\n        storage = 
connect.get_storage_async()\n        cf_mgr = ConfigurationManager(storage)\n\n        # Fetch the filter items: get category items\n        category_info = await cf_mgr.get_category_all_items(category_name=user_name)\n        if category_info is None:\n            raise ValueError(\"No such '{}' category found.\".format(user_name))\n\n        filter_value_from_storage = json.loads(category_info['filter']['value'])\n    except KeyError:\n        msg = \"No filter pipeline exists for {}.\".format(user_name)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except StorageServerError as ex:\n        msg = ex.error\n        _LOGGER.exception(\"Failed to get filter pipeline {}. Storage error occurred: {}\".format(user_name, msg))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _LOGGER.error(ex, \"Get filter pipeline failed.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'result': filter_value_from_storage})\n\n\nasync def delete_filter(request: web.Request) -> web.Response:\n    \"\"\" DELETE filter\n\n    :Example:\n        curl -X DELETE http://localhost:8081/fledge/filter/<filter_name>\n    \"\"\"\n    filter_name = request.match_info.get('filter_name', None)\n    try:\n        storage = connect.get_storage_async()\n\n        # Check if it is a valid filter\n        payload = PayloadBuilder().WHERE(['name', '=', filter_name]).payload()\n        result = await storage.query_tbl_with_payload(\"filters\", payload)\n        if len(result[\"rows\"]) == 0:\n            raise ValueError(\"No such filter '{}' found\".format(filter_name))\n\n        # Check if filter exists in any pipeline\n        payload = PayloadBuilder().WHERE(['name', '=', filter_name]).payload()\n        result = await 
storage.query_tbl_with_payload(\"filter_users\", payload)\n        if len(result[\"rows\"]) != 0:\n            msg = (\"The filter '{}' is currently being used within a pipeline. \"\n                   \"To delete the filter, you must first remove it from the pipeline.\").format(filter_name)\n            raise ConflictError(msg)\n        # Delete filter from filters table\n        payload = PayloadBuilder().WHERE(['name', '=', filter_name]).payload()\n        await storage.delete_from_tbl(\"filters\", payload)\n\n        # Delete configuration for filter\n        await _delete_configuration_category(storage, filter_name)\n\n        # Update deprecated timestamp in asset_tracker\n        \"\"\"\n        TODO: FOGL-6749\n        Once rows affected with 0 case handled at Storage side\n        then we will need to update the query with AND_WHERE(['deprecated_ts', 'isnull'])\n        At the moment deprecated_ts is updated even in notnull case.\n        Also added SELECT query before UPDATE to avoid BadCase when there is no asset track entry exists for the filter.\n        This should also be removed when given JIRA is fixed.\n        \"\"\"\n        select_payload = PayloadBuilder().SELECT(\"deprecated_ts\").WHERE(['plugin', '=', filter_name]).payload()\n        get_result = await storage.query_tbl_with_payload('asset_tracker', select_payload)\n        if 'rows' in get_result:\n            response = get_result['rows']\n            if response:\n                # AND_WHERE(['deprecated_ts', 'isnull']) once FOGL-6749 is done\n                current_time = utils.local_timestamp()\n                update_payload = PayloadBuilder().SET(deprecated_ts=current_time).WHERE(\n                    ['plugin', '=', filter_name]).payload()\n                await storage.update_tbl(\"asset_tracker\", update_payload)\n    except StorageServerError as ex:\n        msg = ex.error\n        _LOGGER.exception(\"Delete {} filter, caught storage exception: {}\".format(filter_name, msg))\n    
    raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except TypeError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except ConflictError as ex:\n        msg = ex.message\n        raise web.HTTPConflict(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _LOGGER.error(ex, \"Delete {} filter failed.\".format(filter_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'result': \"Filter {} deleted successfully.\".format(filter_name)})\n\n\nasync def delete_filter_pipeline(request: web.Request) -> web.Response:\n    \"\"\" DELETE filter pipeline\n\n    :Example:\n        curl -X DELETE http://localhost:8081/fledge/filter/<user_name>/pipeline\n    \"\"\"\n    user_name = request.match_info.get('user_name', None)\n    try:\n        put_url = request.url\n        data = '{\"pipeline\": []}'\n        async with aiohttp.ClientSession() as session:\n            async with session.put(put_url, data=data) as resp:\n                status_code = resp.status\n                jdoc = await resp.text()\n                if status_code not in range(200, 209):\n                    _LOGGER.error(\"Delete {} filter pipeline; Error code: {}, reason: {}, details: {}, url: {}\"\n                                  \"\".format(user_name, resp.status, resp.reason, jdoc, put_url))\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n    except Exception:\n        raise\n    else:\n        return web.json_response({'result': \"Filter pipeline for {} deleted successfully\".format(user_name)})\n\n\nasync def _delete_configuration_category(storage: 
StorageClientAsync, key: str) -> None:\n    payload = PayloadBuilder().WHERE(['key', '=', key]).payload()\n    await storage.delete_from_tbl('configuration', payload)\n\n    # Removed category from configuration cache and other related stuff e.g. script files\n    config_mgr = ConfigurationManager(storage)\n    config_mgr.delete_category_related_things(key)\n    config_mgr._cacheManager.remove(key)\n\n\ndef _diff(list1: Union[List, str], list2: Union[List, str]) -> List:\n    \"\"\" Compute the difference between two lists and also handling of nested lists \"\"\"\n\n    def flatten(lst: Union[List, str]) -> List:\n        \"\"\" Flatten a nested list into a single list \"\"\"\n        flat_list = []\n        for item in lst:\n            if isinstance(item, list):\n                flat_list.extend(flatten(item))\n            else:\n                flat_list.append(item)\n        return flat_list\n\n    # Compute the difference: items in list2 but not in list1\n    diff = [item for item in flatten(list2) if item not in flatten(list1)]\n    return diff\n\n\ndef _delete_keys_from_dict(dict_del: Dict, lst_keys: List[str], deleted_values: Dict = {}, parent: str = None) \\\n        -> Tuple[Dict, Dict]:\n    \"\"\" Delete keys from the dict and add deleted key=value pairs to deleted_values dict\"\"\"\n    for k in lst_keys:\n        try:\n            if parent is not None:\n                if dict_del['type'] == 'JSON':\n                    i_val = json.loads(dict_del[k]) if isinstance(dict_del[k], str) else json.loads(\n                        json.dumps(dict_del[k]))\n                else:\n                    i_val = dict_del[k]\n                deleted_values.update({parent: i_val})\n            del dict_del[k]\n        except KeyError:\n            pass\n    for k, v in dict_del.items():\n        if isinstance(v, dict):\n            parent = k\n            _delete_keys_from_dict(v, lst_keys, deleted_values, parent)\n    return dict_del, deleted_values\n\n\nasync def 
_delete_child_filters(storage: StorageClientAsync, cf_mgr: ConfigurationManager, user_name: str,\n                                new_list: List[str], old_list: List[str] = []) -> None:\n    # Compute the difference between the new pipeline and the stored list, then delete relationships per the diff\n    delete_children = _diff(new_list, old_list)\n\n    async def _delete_relationship(cat_name):\n        try:\n            filter_child_category_name = \"{}_{}\".format(user_name, cat_name)\n            await cf_mgr.delete_child_category(user_name, filter_child_category_name)\n            await cf_mgr.delete_child_category(\"{} Filters\".format(user_name), filter_child_category_name)\n        except Exception:\n            # The child relationship may not exist; ignore and continue cleanup\n            pass\n        await _delete_configuration_category(storage, \"{}_{}\".format(user_name, cat_name))\n        payload = PayloadBuilder().WHERE(['name', '=', cat_name]).AND_WHERE(['user', '=', user_name]).payload()\n        await storage.delete_from_tbl(\"filter_users\", payload)\n\n        # Delete plugin data of filters\n        payload = PayloadBuilder().WHERE(['name', '=', cat_name]).payload()\n        filters_res = await storage.query_tbl_with_payload(\"filters\", payload)\n        if 'rows' in filters_res and len(filters_res['rows']) > 0:\n            key_name = \"{}{}{}\".format(user_name, cat_name, filters_res[\"rows\"][0]['plugin'])\n            payload = PayloadBuilder().WHERE(['key', '=', key_name]).payload()\n            await storage.delete_from_tbl(\"plugin_data\", payload)\n\n    for child in delete_children:\n        if isinstance(child, list):\n            for c in child:\n                await _delete_relationship(c)\n        else:\n            await _delete_relationship(child)\n\n\nasync def _add_child_filters(storage: StorageClientAsync, cf_mgr: ConfigurationManager, user_name: str,\n                             filter_list: List[str], old_list: List[str] = []) -> None:\n    # Create children categories. 
Since create_category() does not expect \"value\" key to be\n    # present in the payload, we need to remove all \"value\" keys BUT need to add back these\n    # \"value\" keys to the new configuration.\n\n    async def _create_filter_category(filter_cat_name):\n        filter_config = await cf_mgr.get_category_all_items(category_name=\"{}_{}\".format(\n            user_name, filter_cat_name))\n        # If \"username_filter\" category does not exist\n        if filter_config is None:\n            filter_config = await cf_mgr.get_category_all_items(category_name=filter_cat_name)\n\n            filter_desc = \"Configuration of {} filter for user {}\".format(filter_cat_name, user_name)\n            new_filter_config, deleted_values = _delete_keys_from_dict(filter_config, ['value'],\n                                                                       deleted_values={}, parent=None)\n            await cf_mgr.create_category(category_name=\"{}_{}\".format(user_name, filter_cat_name),\n                                         category_description=filter_desc,\n                                         category_value=new_filter_config,\n                                         keep_original_items=True)\n            if deleted_values != {}:\n                await cf_mgr.update_configuration_item_bulk(\"{}_{}\".format(\n                    user_name, filter_cat_name), deleted_values)\n\n        # Remove cat from cache\n        if filter_cat_name in cf_mgr._cacheManager.cache:\n            cf_mgr._cacheManager.remove(filter_cat_name)\n\n    # Create filter category\n    for _fn in filter_list:\n        if isinstance(_fn, list):\n            for f in _fn:\n                await _create_filter_category(f)\n        else:\n            await _create_filter_category(_fn)\n\n    # Create children categories in category_children table\n    children = []\n    for _filter in filter_list:\n        if isinstance(_filter, list):\n            for f in _filter:\n                
child_cat_name = \"{}_{}\".format(user_name, f)\n                children.append(child_cat_name)\n        else:\n            child_cat_name = \"{}_{}\".format(user_name, _filter)\n            children.append(child_cat_name)\n    # Register all children in a single call once the list is complete\n    await cf_mgr.create_child_category(category_name=user_name, children=children)\n    # Add entries to filter_users table\n    new_added = _diff(old_list, filter_list)\n    for filter_name in new_added:\n        if isinstance(filter_name, list):\n            for f in filter_name:\n                # Insert every element of a nested list, not just the last one\n                payload = PayloadBuilder().INSERT(name=f, user=user_name).payload()\n                await storage.insert_into_tbl(\"filter_users\", payload)\n        else:\n            payload = PayloadBuilder().INSERT(name=filter_name, user=user_name).payload()\n            await storage.insert_into_tbl(\"filter_users\", payload)\n"
  },
  {
    "path": "python/fledge/services/core/api/health.py",
"content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport asyncio\nimport json\n\nfrom aiohttp import web\nfrom fledge.common.common import _FLEDGE_DATA, _FLEDGE_ROOT\nfrom fledge.common.logger import FLCoreLogger\n\n\n__author__ = \"Deepanshu Yadav\"\n__copyright__ = \"Copyright (c) 2022, Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    ----------------------------------------------------------\n    | GET            | /fledge/health/storage               |\n    | GET            | /fledge/health/logging               |\n    ----------------------------------------------------------\n\"\"\"\n_LOGGER = FLCoreLogger().get_logger(__name__)\n\n\nasync def get_disk_usage(given_dir):\n    \"\"\"\n       Helper function that calculates used, available and usage (in %) for a given directory in the file system.\n       Returns a tuple of used (in KB, as an integer), available (in KB, as an integer) and usage (in %).\n    \"\"\"\n    disk_check_process = await asyncio.create_subprocess_exec(\n        'df', '-k', given_dir, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)\n    stdout, stderr = await disk_check_process.communicate()\n    if disk_check_process.returncode != 0:\n        stderr = stderr.decode(\"utf-8\")\n        msg = \"Failed to get disk stats of {} directory. 
{}\".format(given_dir, str(stderr))\n        _LOGGER.error(msg)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n    # Following output is parsed.\n    \"\"\"\n        Filesystem     1K-blocks     Used Available Use% Mounted on\n        /dev/sda5      122473072 95449760  20755872  83% /\n\n    \"\"\"\n    disk_stats = stdout.decode(\"utf-8\")\n    required_stats = disk_stats.split('\\n')[1].split()\n    used = int(required_stats[2])\n    available = int(required_stats[3])\n    usage = int(required_stats[4].replace(\"%\", ''))\n\n    return used, available, usage\n\n\nasync def get_logging_health(request: web.Request) -> web.Response:\n    \"\"\"\n     Return the health of logging.\n    Args:\n       request: None\n\n    Returns:\n           Return the health of logging.\n           Sample Response :\n\n           {\n              \"disk\": {\n                \"usage\": 63,\n                \"used\": 42936800,\n                \"available\": 25229400\n              },\n              \"levels\": [\n                {\n                    \"name\" : \"Sine\",\n                    \"level\" : \"info\"\n                },\n                {\n                    \"name\" : \"OMF\",\n                    \"level\" : \"debug\"\n                }\n              ]\n           }\n\n    :Example:\n           curl -X GET http://localhost:8081/fledge/health/logging\n    \"\"\"\n    response = {}\n    try:\n        from fledge.common.storage_client.payload_builder import PayloadBuilder\n        from fledge.services.core import connect\n\n        payload = PayloadBuilder().SELECT(\"key\", \"value\").payload()\n        _storage_client = connect.get_storage_async()\n        excluded_log_levels = [\"error\", \"warning\"]\n        results = await _storage_client.query_tbl_with_payload('configuration', payload)\n        log_levels = []\n        for row in results[\"rows\"]:\n            for item_name, item_info in row[\"value\"].items():\n    
            if item_name == \"logLevel\" and item_info['value'] not in excluded_log_levels:\n                    service_name = row[\"key\"].replace(\"Advanced\", \"\").strip()\n                    log_level = item_info['value']\n                    log_levels.append({\"name\": service_name, \"level\": log_level})\n\n        response[\"levels\"] = log_levels\n\n    except Exception as ex:\n        msg = \"Could not fetch service information.\"\n        _LOGGER.error(ex, msg)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": \"{} {}\".format(msg, str(ex))}))\n\n    try:\n        response['disk'] = {}\n        used, available, usage = await get_disk_usage('/var/log')\n        # fill all the fields after values are retrieved\n        response['disk']['used'] = used\n        response['disk']['usage'] = usage\n        response['disk']['available'] = available\n\n    except Exception as ex:\n        msg = \"Failed to get disk stats for /var/log.\"\n        _LOGGER.error(ex, msg)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": \"{} {}\".format(msg, str(ex))}))\n    else:\n        return web.json_response(response)\n\n\nasync def get_storage_health(request: web.Request) -> web.Response:\n    \"\"\"\n     Return the health of Storage service & data directory.\n    Args:\n       request: None\n\n    Returns:\n           Return the health of Storage service & data directory.\n           Sample Response :\n\n           {\n              \"uptime\": 33,\n              \"name\": \"Fledge Storage\",\n              \"statistics\": {\n                \"commonInsert\": 30,\n                \"commonSimpleQuery\": 3,\n                \"commonQuery\": 91,\n                \"commonUpdate\": 2,\n                \"commonDelete\": 1,\n                \"readingAppend\": 0,\n                \"readingFetch\": 0,\n                \"readingQuery\": 1,\n                \"readingPurge\": 0\n              },\n              
\"disk\": {\n                \"used\": 95287524,\n                \"usage\": 82,\n                \"available\": 20918108,\n                \"status\": \"green\"\n              }\n           }\n\n    :Example:\n           curl -X GET http://localhost:8081/fledge/health/storage\n    \"\"\"\n    # Find the address and management host for the Storage service.\n    from fledge.services.core.service_registry.service_registry import ServiceRegistry\n    from fledge.services.core.service_registry.exceptions import DoesNotExist\n    try:\n        services = ServiceRegistry.get(name=\"Fledge Storage\")\n        service = services[0]\n    except DoesNotExist:\n        msg = \"Cannot ping the storage service. It does not exist in service registry.\"\n        _LOGGER.error(msg)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    try:\n        from fledge.common.service_record import ServiceRecord\n        if service._status != ServiceRecord.Status.Running:\n            msg = \"The Storage service is not in Running state.\"\n            raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n        from fledge.common.microservice_management_client.microservice_management_client import \\\n            MicroserviceManagementClient\n\n        mgt_client = MicroserviceManagementClient(service._address,\n                                                  service._management_port)\n        response = await mgt_client.ping_service()\n\n    except Exception as ex:\n        msg = str(ex)\n        _LOGGER.error(ex, \"Could not ping the Storage service.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n    try:\n        response['disk'] = {}\n        data_dir_path = _FLEDGE_DATA if _FLEDGE_DATA else _FLEDGE_ROOT + '/data'\n        used, available, usage = await get_disk_usage(data_dir_path)\n        status = 'green'\n        if usage > 95:\n            status = 'red'\n        
elif 90 < usage <= 95:\n            status = 'yellow'\n\n        # fill all the fields after values are retrieved\n        response['disk']['used'] = used\n        response['disk']['usage'] = usage\n        response['disk']['available'] = available\n        response['disk']['status'] = status\n    except Exception as ex:\n        msg = \"Failed to get disk stats for Storage service.\"\n        _LOGGER.error(ex, msg)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": \"{} {}\".format(msg, str(ex))}))\n    else:\n        return web.json_response(response)\n"
  },
  {
    "path": "python/fledge/services/core/api/north.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nfrom functools import lru_cache\nfrom aiohttp import web\n\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.plugin_discovery import PluginDiscovery\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.services.core import connect, server\nfrom fledge.services.core.scheduler.entities import Task\nfrom fledge.services.core.scheduler.exceptions import NotReadyError\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core.service_registry.exceptions import DoesNotExist\n\n__author__ = \"Praveen Garg\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    -------------------------------------------------------------------------------\n    | GET                 | /fledge/north                                         |\n    -------------------------------------------------------------------------------\n\"\"\"\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nasync def _get_sent_stats(storage_client, north_schedules):\n    stats = []\n    try:\n        payload = PayloadBuilder().SELECT([\"key\", \"value\"]).WHERE([\"key\", \"in\", north_schedules]).payload()\n        result = await storage_client.query_tbl_with_payload('statistics', payload)\n        if int(result['count']):\n            stats = result['rows']\n    except:\n        raise\n    else:\n        return stats\n\n\nasync def _get_tasks_status():\n    payload = PayloadBuilder().SELECT(\"id\", \"schedule_name\", \"process_name\", \"state\", \"start_time\",\n                                      \"end_time\", \"reason\", \"pid\", \"exit_code\")\\\n        
.WHERE([\"process_name\", \"=\", \"north\"])\\\n        .OR_WHERE([\"process_name\", \"=\", \"north_c\"])\\\n        .ALIAS(\"return\", (\"start_time\", 'start_time'), (\"end_time\", 'end_time'))\\\n        .FORMAT(\"return\", (\"start_time\", \"YYYY-MM-DD HH24:MI:SS.MS\"), (\"end_time\", \"YYYY-MM-DD HH24:MI:SS.MS\"))\\\n        .ORDER_BY([\"schedule_name\", \"asc\"], [\"start_time\", \"desc\"])\n\n    tasks = {}\n    try:\n        _storage = connect.get_storage_async()\n        results = await _storage.query_tbl_with_payload('tasks', payload.payload())\n        previous_schedule = None\n        for row in results['rows']:\n            if not row['schedule_name'].strip():\n                continue\n            if previous_schedule != row['schedule_name']:\n                tasks.update({row['schedule_name']: row})\n                previous_schedule = row['schedule_name']\n    except Exception as ex:\n        raise ValueError(str(ex))\n    return tasks\n\n\nasync def _get_north_schedules(cf_mgr):\n    try:\n        north_categories = await cf_mgr.get_category_child(\"North\")\n        north_schedules = [nc[\"key\"] for nc in north_categories]\n    except ValueError:\n        return []\n\n    schedules = []\n    north_sch_dict = {}\n    schedule_list = await server.Server.scheduler.get_schedules()\n    latest_tasks = await _get_tasks_status()\n    try:\n        services_from_registry = ServiceRegistry.get(s_type=\"Northbound\")\n    except DoesNotExist:\n        services_from_registry = []\n    for sch in schedule_list:\n        if sch.name in north_schedules:\n            if sch.process_name != \"north_C\" and sch.schedule_type != 1:\n                task = latest_tasks.get(sch.name, None)\n                north_sch_dict = {\n                    'id': str(sch.schedule_id),\n                    'name': sch.name,\n                    'processName': sch.process_name,\n                    'repeat': sch.repeat.total_seconds() if sch.repeat else 0,\n                    
'day': sch.day,\n                    'enabled': sch.enabled,\n                    'exclusive': sch.exclusive,\n                    'execution': 'task',\n                    'taskStatus': None if task is None else {\n                        'state': [t.name.capitalize() for t in list(Task.State)][int(task['state']) - 1],\n                        'startTime': str(task['start_time']),\n                        'endTime': str(task['end_time']),\n                        'exitCode': task['exit_code'],\n                        'reason': task['reason']\n                    }\n                }\n            else:\n                for s_record in services_from_registry:\n                    if s_record._name == sch.name:\n                        north_sch_dict = {\n                            'id': str(sch.schedule_id),\n                            'name': sch.name,\n                            'processName': sch.process_name,\n                            'enabled': sch.enabled,\n                            'execution': 'service',\n                            'address': s_record._address,\n                            'managementPort': s_record._management_port,\n                            'servicePort': s_record._port,\n                            'protocol': s_record._protocol,\n                            'status': ServiceRecord.Status(int(s_record._status)).name.lower()\n                        }\n                        # Add the 'debug' key only if it's non-empty\n                        if s_record._debug:\n                            north_sch_dict['debug'] = s_record._debug\n                # north-C service case, If not in service registry\n                if sch.enabled is False and sch.name not in [s_record._name for s_record in services_from_registry]:\n                    north_sch_dict = {\n                        'id': str(sch.schedule_id),\n                        'name': sch.name,\n                        'processName': sch.process_name,\n                    
    'enabled': sch.enabled,\n                        'execution': 'service',\n                        'address': '',\n                        'managementPort': '',\n                        'servicePort': '',\n                        'protocol': '',\n                        'status': ''\n                    }\n            if north_sch_dict:\n                schedules.append(north_sch_dict)\n\n    return list({v['name']: v for v in schedules}.values())\n\n\n@lru_cache(maxsize=1024)\ndef _get_installed_plugins():\n    return PluginDiscovery.get_plugins_installed(\"north\", False)\n\n\nasync def get_north_schedules(request):\n    \"\"\"\n    Args:\n        request:\n\n    Returns:\n            list of all north instances with statistics\n\n    :Example:\n            curl -X GET http://localhost:8081/fledge/north\n    \"\"\"\n    try:\n        if 'cached' in request.query and request.query['cached'].lower() == 'false':\n            _get_installed_plugins.cache_clear()\n        storage_client = connect.get_storage_async()\n        cf_mgr = ConfigurationManager(storage_client)\n        north_schedules = await _get_north_schedules(cf_mgr)\n        \n        north_schs = [ns[\"name\"] for ns in north_schedules]\n        stats = []\n        if len(north_schs):\n            stats = await _get_sent_stats(storage_client, north_schs)\n        installed_plugins = _get_installed_plugins()\n        for sch in north_schedules:\n            stat = next((s for s in stats if s[\"key\"] == sch[\"name\"]), None)\n            sch[\"sent\"] = stat[\"value\"] if stat else -1\n            plugin = await cf_mgr.get_category_item(sch[\"name\"], 'plugin')\n            plugin_name = plugin['value'] if plugin is not None else ''\n            plugin_version = ''\n            for p in installed_plugins:\n                if p[\"name\"] == plugin_name:\n                    plugin_version = p[\"version\"]\n                    break\n            sch[\"plugin\"] = {\"name\": plugin_name, \"version\": 
plugin_version}\n\n    except (KeyError, ValueError) as e:  # Handles KeyError of _get_sent_stats\n        msg = str(e)\n        return web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    except NotReadyError:\n        # This case will occur if request is made while Fledge is not fully operational and/or being started.\n        msg = \"Failed to fetch schedules information. Fledge is not fully operational and/or being started. Try again!\"\n        _logger.warning(msg)\n        return web.HTTPServiceUnavailable(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get the north schedules.\")\n        return web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(north_schedules)\n"
  },
  {
    "path": "python/fledge/services/core/api/notification.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nimport urllib.parse\nimport aiohttp\nfrom aiohttp import web\n\nfrom fledge.common import utils\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.services.core import connect\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core.service_registry import exceptions as service_registry_exceptions\n\n__author__ = \"Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2018 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    -----------------------------------------------------------------------------------------------------\n    | GET                            | /fledge/notification/plugin                                      |\n    | GET POST PUT DELETE            | /fledge/notification                                             |\n    | GET POST                       | /fledge/notification/{name}/delivery                             |\n    | GET DELETE                     | /fledge/notification/{notification_name}/delivery/{channel_name} |\n    -----------------------------------------------------------------------------------------------------\n\"\"\"\n_logger = FLCoreLogger().get_logger(__name__)\n\nNOTIFICATION_TYPE = [\"one shot\", \"retriggered\", \"toggled\"]\n\n\nclass PluginFetchError(Exception):\n    def __init__(self, message=\"Failed to fetch notification plugins.\"):\n        super().__init__(message)\n\n\nasync def fetch_plugins():\n    \"\"\" Fetch all rule and delivery plugins from notification service \"\"\"\n    try:\n        notification_service = 
ServiceRegistry.get(s_type=ServiceRecord.Type.Notification.name)\n        _address, _port = notification_service[0]._address, notification_service[0]._port\n    except service_registry_exceptions.DoesNotExist:\n        raise ValueError(\"No Notification service available.\")\n\n    try:\n        url = 'http://{}:{}/notification/rules'.format(_address, _port)\n        rule_plugins = json.loads(await _hit_get_url(url))\n        url = 'http://{}:{}/notification/delivery'.format(_address, _port)\n        delivery_plugins = json.loads(await _hit_get_url(url))\n    except Exception:\n        raise PluginFetchError()\n    else:\n        resp = {}\n        if rule_plugins is not None:\n            if isinstance(rule_plugins, dict) and 'rules' in rule_plugins:\n                resp['rules'] = rule_plugins['rules']\n            else:\n                resp['rules'] = rule_plugins\n\n        if delivery_plugins is not None:\n            if isinstance(delivery_plugins, dict) and 'delivery' in delivery_plugins:\n                resp['delivery'] = delivery_plugins['delivery']\n            else:\n                resp['delivery'] = delivery_plugins\n\n        return resp\n\n\nasync def get_plugin(request):\n    \"\"\" GET lists of rule plugins and delivery plugins\n\n    :Example:\n        curl -X GET http://localhost:8081/fledge/notification/plugin\n    \"\"\"\n    try:\n        list_plugins = await fetch_plugins()\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except (Exception, PluginFetchError) as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get notification plugin list.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(list_plugins)\n\n\nasync def get_type(request):\n    \"\"\" GET the list of available notification types\n\n    :Example:\n        curl -X GET 
http://localhost:8081/fledge/notification/type\n    \"\"\"\n    return web.json_response({'notification_type': NOTIFICATION_TYPE})\n\n\nasync def get_notification(request):\n    \"\"\" GET an existing notification\n\n    :Example:\n        curl -X GET http://localhost:8081/fledge/notification/<notification_name>\n    \"\"\"\n    try:\n        notif = request.match_info.get('notification_name', None)\n        if notif is None:\n            raise ValueError(\"Notification name is required.\")\n\n        notification = {}\n        storage = connect.get_storage_async()\n        config_mgr = ConfigurationManager(storage)\n        notification_config = await config_mgr._read_category_val(notif)\n        if notification_config:\n            rule_config = await config_mgr._read_category_val(\"rule{}\".format(notif))\n            delivery_config = await config_mgr._read_category_val(\"delivery{}\".format(notif))\n            naming_extra = \"{}_channel_\".format(notification_config['name']['value'])\n            list_extra = await _get_channels_type(config_mgr,\n                                          notification_config['name']['value'],\n                                          naming_extra,\n                                          True)\n            notification = {\n                \"name\": notification_config['name']['value'],\n                \"description\": notification_config['description']['value'],\n                \"rule\": notification_config['rule']['value'],\n                \"ruleConfig\": rule_config,\n                \"channel\": notification_config['channel']['value'],\n                \"deliveryConfig\": delivery_config,\n                \"notificationType\": notification_config['notification_type']['value'],\n                \"retriggerTime\": notification_config['retrigger_time']['value'],\n                \"enable\": notification_config['enable']['value'],\n            }\n            if len(list_extra) > 0:\n                
notification[\"additionalChannels\"] = list_extra\n        else:\n            raise ValueError(\"The Notification: {} does not exist.\".format(notif))\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get {} notification instance.\".format(notif))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'notification': notification})\n\n\nasync def get_notifications(request):\n    \"\"\" GET list of notifications\n\n    :Example:\n        curl -X GET http://localhost:8081/fledge/notification\n    \"\"\"\n    try:\n        storage = connect.get_storage_async()\n        config_mgr = ConfigurationManager(storage)\n        all_notifications = await config_mgr._read_all_child_category_names(\"Notifications\")\n        notifications = []\n        for notification in all_notifications:\n            notification_config = await config_mgr._read_category_val(notification['child'])\n            naming_extra = \"{}_channel_\".format(notification_config['name']['value'])\n            list_extra = await _get_channels_type(config_mgr,\n                                          notification_config['name']['value'],\n                                          naming_extra,\n                                          True)\n            notification = {\n                \"name\": notification_config['name']['value'],\n                \"rule\": notification_config['rule']['value'],\n                \"channel\": notification_config['channel']['value'],\n                \"notificationType\": notification_config['notification_type']['value'],\n                \"retriggerTime\": notification_config['retrigger_time']['value'],\n                \"enable\": notification_config['enable']['value'],\n            }\n            if len(list_extra) > 0:\n                
notification[\"additionalChannels\"] = list_extra\n\n            notifications.append(notification)\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get notification instances.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'notifications': notifications})\n\n\nasync def post_notification(request):\n    \"\"\"\n    Create a new notification to run a specific plugin\n\n    :Example:\n             curl -X POST http://localhost:8081/fledge/notification -d '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", \"channel\": \"email\", \"notification_type\": \"one shot\", \"enabled\": false}'\n             curl -X POST http://localhost:8081/fledge/notification -d '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", \"channel\": \"email\", \"notification_type\": \"one shot\", \"enabled\": false, \"rule_config\": {}, \"delivery_config\": {}}'\n    \"\"\"\n    try:\n        notification_service = ServiceRegistry.get(s_type=ServiceRecord.Type.Notification.name)\n        _address, _port = notification_service[0]._address, notification_service[0]._port\n    except service_registry_exceptions.DoesNotExist:\n        raise web.HTTPNotFound(reason=\"No Notification service available.\")\n\n    try:\n        data = await request.json()\n        if not isinstance(data, dict):\n            raise ValueError('Data payload must be a valid JSON')\n\n        name = data.get('name', None)\n        description = data.get('description', None)\n        rule = data.get('rule', None)\n        channel = data.get('channel', None)\n        notification_type = data.get('notification_type', None)\n        enabled = data.get('enabled', None)\n        rule_config = data.get('rule_config', {})\n        delivery_config = data.get('delivery_config', {})\n        retrigger_time = 
data.get('retrigger_time', None)\n        try:\n            if retrigger_time is not None:\n                if not (isinstance(retrigger_time, str) and float(retrigger_time) >= 0 and isinstance(\n                        float(retrigger_time), (int, float))):\n                    raise ValueError\n        except ValueError:\n            raise ValueError('retrigger_time value should be a number 0 or above. '\n                             'Ensure this value is quoted as string e.g. \"2.5\".')\n        if name is None or name.strip() == \"\":\n            raise ValueError('Missing name property in payload.')\n        if description is None:\n            raise ValueError('Missing description property in payload.')\n        if rule is None:\n            raise ValueError('Missing rule property in payload.')\n        if channel is None:\n            raise ValueError('Missing channel property in payload.')\n        if notification_type is None:\n            raise ValueError('Missing notification_type property in payload.')\n\n        if utils.check_reserved(name) is False:\n            raise ValueError('Invalid name property in payload.')\n        if utils.check_reserved(rule) is False:\n            raise ValueError('Invalid rule property in payload.')\n        if utils.check_reserved(channel) is False:\n            raise ValueError('Invalid channel property in payload.')\n        if notification_type not in NOTIFICATION_TYPE:\n            raise ValueError('Invalid notification_type property in payload.')\n\n        if enabled is not None:\n            if enabled not in ['true', 'false', True, False]:\n                raise ValueError('Only \"true\", \"false\", true, false are allowed for value of enabled.')\n        is_enabled = \"true\" if ((type(enabled) is str and enabled.lower() in ['true']) or (\n            (type(enabled) is bool and enabled is True))) else \"false\"\n\n        storage = connect.get_storage_async()\n        config_mgr = 
ConfigurationManager(storage)\n        curr_config = await config_mgr.get_category_all_items(name)\n\n        if curr_config is not None:\n            raise ValueError(\"A Category with name {} already exists.\".format(name))\n\n        try:\n            # Get default config for rule and channel plugins\n            list_plugins_r = await fetch_plugins()\n            r = list(filter(lambda rules: rules['name'] == rule, list_plugins_r['rules']))\n            c = list(filter(lambda channels: channels['name'] == channel, list_plugins_r['delivery']))\n            if len(r) == 0 or len(c) == 0: raise KeyError\n            rule_plugin_config = r[0]['config']\n            delivery_plugin_config = c[0]['config']\n        except KeyError:\n            raise ValueError(\"Invalid rule plugin {} and/or delivery plugin {} supplied.\".format(rule, channel))\n\n        # Verify if rule_config contains valid keys\n        if rule_config != {}:\n            for k, v in rule_config.items():\n                if k not in rule_plugin_config:\n                    raise ValueError(\"Invalid key {} in rule_config {} supplied for plugin {}.\".format(k, rule_config, rule))\n\n        # Verify if delivery_config contains valid keys\n        if delivery_config != {}:\n            for k, v in delivery_config.items():\n                if k not in delivery_plugin_config:\n                    raise ValueError(\n                        \"Invalid key {} in delivery_config {} supplied for plugin {}.\".format(k, delivery_config, channel))\n\n        # First create templates for notification and rule, channel plugins\n        post_url = 'http://{}:{}/notification/{}'.format(_address, _port, urllib.parse.quote(name))\n        await _hit_post_url(post_url)  # Create Notification template\n        post_url = 'http://{}:{}/notification/{}/rule/{}'.format(_address, _port, urllib.parse.quote(name),\n                                                                         urllib.parse.quote(rule))\n        
await _hit_post_url(post_url)  # Create Notification rule template\n        post_url = 'http://{}:{}/notification/{}/delivery/{}'.format(_address, _port, urllib.parse.quote(name),\n                                                                             urllib.parse.quote(channel))\n        await _hit_post_url(post_url)  # Create Notification delivery template\n\n        # Create configurations\n        notification_config = {\n            \"description\": description,\n            \"rule\": rule,\n            \"channel\": channel,\n            \"notification_type\": notification_type,\n            \"enable\": is_enabled,\n        }\n        if retrigger_time is not None:\n            notification_config[\"retrigger_time\"] = retrigger_time\n        await _update_configurations(config_mgr, name, notification_config, rule_config, delivery_config)\n        audit = AuditLogger(storage)\n        await audit.information('NTFAD', {\"name\": name})\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except (Exception, PluginFetchError) as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to create notification instance.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'result': \"Notification {} created successfully\".format(name)})\n\n\nclass NotFoundError(Exception):\n    pass\n\n\nasync def put_notification(request):\n    \"\"\"\n    Update an existing notification\n\n    :Example:\n             curl -X PUT http://localhost:8081/fledge/notification/<notification_name> -d '{\"description\":\"Test Notification modified\"}'\n             curl -X PUT http://localhost:8081/fledge/notification/<notification_name> -d '{\"rule\": \"threshold\", \"channel\": \"email\"}'\n             curl -X PUT http://localhost:8081/fledge/notification/<notification_name> -d '{\"notification_type\": \"one shot\", \"enabled\": false}'\n             curl 
-X PUT http://localhost:8081/fledge/notification/<notification_name> -d '{\"enabled\": false}'\n             curl -X PUT http://localhost:8081/fledge/notification/<notification_name> -d '{\"retrigger_time\":\"30\"}'\n             curl -X PUT http://localhost:8081/fledge/notification/<notification_name> -d '{\"description\":\"Test Notification\", \"rule\": \"threshold\", \"channel\": \"email\", \"notification_type\": \"one shot\", \"enabled\": false, \"rule_config\": {}, \"delivery_config\": {}}'\n    \"\"\"\n    try:\n        notification_service = ServiceRegistry.get(s_type=ServiceRecord.Type.Notification.name)\n        _address, _port = notification_service[0]._address, notification_service[0]._port\n    except service_registry_exceptions.DoesNotExist:\n        raise web.HTTPNotFound(reason=\"No Notification service available.\")\n\n    try:\n        notif = request.match_info.get('notification_name', None)\n        if notif is None:\n            raise ValueError(\"Notification name is required for update.\")\n\n        # TODO: Stop notification before update\n\n        data = await request.json()\n        if not isinstance(data, dict):\n            raise ValueError('Data payload must be a valid JSON')\n\n        description = data.get('description', None)\n        rule = data.get('rule', None)\n        channel = data.get('channel', None)\n        notification_type = data.get('notification_type', None)\n        enabled = data.get('enabled', None)\n        rule_config = data.get('rule_config', {})\n        delivery_config = data.get('delivery_config', {})\n        retrigger_time = data.get('retrigger_time', None)\n        try:\n            if retrigger_time is not None:\n                if not (isinstance(retrigger_time, str) and float(retrigger_time) >= 0 and isinstance(\n                        float(retrigger_time), (int, float))):\n                    raise ValueError\n        except ValueError:\n            raise ValueError('retrigger_time value should be a 
number 0 or above. '\n                             'Ensure this value is quoted as string e.g. \"2.5\".')\n        if utils.check_reserved(notif) is False:\n            raise ValueError('Invalid notification instance name.')\n        if rule is not None and utils.check_reserved(rule) is False:\n            raise ValueError('Invalid rule property in payload.')\n        if channel is not None and utils.check_reserved(channel) is False:\n            raise ValueError('Invalid channel property in payload.')\n        if notification_type is not None and notification_type not in NOTIFICATION_TYPE:\n            raise ValueError('Invalid notification_type property in payload.')\n\n        if enabled is not None:\n            if enabled not in ['true', 'false', True, False]:\n                raise ValueError('Only \"true\", \"false\", true, false are allowed for value of enabled.')\n        is_enabled = \"true\" if ((type(enabled) is str and enabled.lower() in ['true']) or (\n            (type(enabled) is bool and enabled is True))) else \"false\"\n\n        storage = connect.get_storage_async()\n        config_mgr = ConfigurationManager(storage)\n\n        current_config = await config_mgr._read_category_val(notif)\n\n        if current_config is None:\n            raise NotFoundError('No {} notification instance found'.format(notif))\n\n        rule_changed = True if rule is not None and rule != current_config['rule']['value'] else False\n        channel_changed = True if channel is not None and channel != current_config['channel']['value'] else False\n\n        try:\n            # Get default config for rule and channel plugins\n            list_plugins = await fetch_plugins()\n            search_rule = rule if rule_changed else current_config['rule']['value']\n            r = list(filter(lambda rules: rules['name'] == search_rule, list_plugins['rules']))\n            if len(r) == 0:\n                raise KeyError\n            rule_plugin_config = r[0]['config']\n\n      
      search_channel = channel if channel_changed else current_config['channel']['value']\n            c = list(filter(lambda channels: channels['name'] == search_channel, list_plugins['delivery']))\n            if len(c) == 0:\n                raise KeyError\n            delivery_plugin_config = c[0]['config']\n        except KeyError:\n            raise ValueError(\"Invalid rule plugin:{} and/or delivery plugin:{} supplied.\".format(rule, channel))\n\n        # Verify if rule_config contains valid keys\n        if rule_config != {}:\n            for k, v in rule_config.items():\n                if k not in rule_plugin_config:\n                    raise ValueError(\"Invalid key:{} in rule plugin:{}\".format(k, rule_plugin_config))\n\n        # Verify if delivery_config contains valid keys\n        if delivery_config != {}:\n            for k, v in delivery_config.items():\n                if k not in delivery_plugin_config:\n                    raise ValueError(\n                        \"Invalid key:{} in delivery plugin:{}\".format(k, delivery_plugin_config))\n\n        if rule_changed:  # A new rule has been supplied\n            category_desc = rule_plugin_config['plugin']['description']\n            category_name = \"rule{}\".format(notif)\n            await config_mgr.create_category(category_name=category_name,\n                                             category_description=category_desc,\n                                             category_value=rule_plugin_config,\n                                             keep_original_items=False)\n        if channel_changed:  # A new delivery has been supplied\n            category_desc = delivery_plugin_config['plugin']['description']\n            category_name = \"delivery{}\".format(notif)\n            await config_mgr.create_category(category_name=category_name,\n                                             category_description=category_desc,\n                                             
category_value=delivery_plugin_config,\n                                             keep_original_items=False)\n        notification_config = {}\n        if description is not None:\n            notification_config.update({\"description\": description})\n        if rule is not None:\n            notification_config.update({\"rule\": rule})\n        if channel is not None:\n            notification_config.update({\"channel\": channel})\n        if notification_type is not None:\n            notification_config.update({\"notification_type\": notification_type})\n        if enabled is not None:\n            notification_config.update({\"enable\": is_enabled})\n        if retrigger_time is not None:\n            notification_config[\"retrigger_time\"] = retrigger_time\n        await _update_configurations(config_mgr, notif, notification_config, rule_config, delivery_config)\n    except ValueError as e:\n        raise web.HTTPBadRequest(reason=str(e))\n    except NotFoundError as e:\n        raise web.HTTPNotFound(reason=str(e))\n    except (Exception, PluginFetchError) as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to update {} notification instance.\".format(notif))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        # TODO: Start notification after update\n        return web.json_response({'result': \"Notification {} updated successfully\".format(notif)})\n\n\nasync def delete_notification(request):\n    \"\"\" Delete an existing notification\n\n    :Example:\n        curl -X DELETE http://localhost:8081/fledge/notification/<notification_name>\n    \"\"\"\n    try:\n        notification_service = ServiceRegistry.get(s_type=ServiceRecord.Type.Notification.name)\n        _address, _port = notification_service[0]._address, notification_service[0]._port\n    except service_registry_exceptions.DoesNotExist:\n        raise web.HTTPNotFound(reason=\"No Notification service available.\")\n\n    
try:\n        notif = request.match_info.get('notification_name', None)\n        if notif is None:\n            raise ValueError(\"Notification name is required for deletion.\")\n\n        # Stop & remove notification\n        url = 'http://{}:{}/notification/{}'.format(_address, _port, urllib.parse.quote(notif))\n\n        notification = json.loads(await _hit_delete_url(url))\n\n        # Remove the child categories for the rule and delivery plugins, then remove the category for the notification itself\n        storage = connect.get_storage_async()\n        config_mgr = ConfigurationManager(storage)\n\n        await config_mgr.delete_category_and_children_recursively(notif)\n\n        audit = AuditLogger(storage)\n        await audit.information('NTFDL', {\"name\": notif})\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to delete {} notification instance.\".format(notif))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'result': 'Notification {} deleted successfully.'.format(notif)})\n\n\nasync def _hit_get_url(get_url, token=None):\n    headers = {\"Authorization\": token} if token else None\n    try:\n        async with aiohttp.ClientSession(connector=aiohttp.TCPConnector(verify_ssl=False)) as session:\n            async with session.get(get_url, headers=headers) as resp:\n                status_code = resp.status\n                jdoc = await resp.text()\n                if status_code not in range(200, 209):\n                    _logger.error(\"Error code: {}, reason: {}, details: {}, url: {}\".format(\n                        resp.status, resp.reason, jdoc, get_url))\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n    except Exception:\n        raise\n    else:\n        return jdoc\n\n\nasync def 
_hit_post_url(post_url, data=None):\n    try:\n        async with aiohttp.ClientSession() as session:\n            async with session.post(post_url, data=data) as resp:\n                status_code = resp.status\n                jdoc = await resp.text()\n                if status_code not in range(200, 209):\n                    _logger.error(\"Error code: {}, reason: {}, details: {}, url: {}\".format(\n                        resp.status, resp.reason, jdoc, post_url))\n                    raise StorageServerError(code=resp.status, reason=resp.reason, error=jdoc)\n    except Exception:\n        raise\n    else:\n        return jdoc\n\n\nasync def _update_configurations(config_mgr, name, notification_config, rule_config, delivery_config):\n    try:\n        # Update main notification\n        if notification_config != {}:\n            await config_mgr.update_configuration_item_bulk(name, notification_config)\n        # Replace rule configuration\n        if rule_config != {}:\n            category_name = \"rule{}\".format(name)\n            await config_mgr.update_configuration_item_bulk(category_name, rule_config)\n        # Replace delivery configuration\n        if delivery_config != {}:\n            category_name = \"delivery{}\".format(name)\n            await config_mgr.update_configuration_item_bulk(category_name, delivery_config)\n    except Exception as ex:\n        msg = \"Failed to update {} notification configuration.\".format(name)\n        _logger.error(ex, msg)\n        # Raise so callers surface the failure; returning the response object here would be discarded by callers\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n\nasync def _hit_delete_url(delete_url, data=None):\n    try:\n        async with aiohttp.ClientSession() as session:\n            async with session.delete(delete_url, data=data) as resp:\n                status_code = resp.status\n                jdoc = await resp.text()\n                if status_code not in range(200, 209):\n                    _logger.error(\"Error code: {}, reason: {}, details: 
{}, url: {}\".format(\n                        resp.status, resp.reason, jdoc, delete_url))\n                    raise StorageServerError(code=resp.status,\n                                             reason=resp.reason,\n                                             error=jdoc)\n    except Exception:\n        raise\n    else:\n        return jdoc\n\n\nasync def _get_channels(cfg_mgr: ConfigurationManager, notify_instance: str) -> list:\n    \"\"\" Retrieve all channels\n        the first having the naming : delivery + \"NotificationName\"\n        the extras                  : \"NotificationName\" + _channel_ + \"DeliveryName\"\n    :Example:\n        deliveryTooHot1 (mqtt)\n        TooHot1_channel_asset_2\n        TooHot1_channel_mqtt_3\n\n    \"\"\"\n\n    naming_first = \"delivery{}\".format(notify_instance)\n    list_first = await _get_channels_type(cfg_mgr, notify_instance, naming_first, False)\n\n    naming_extra = \"{}_channel_\".format(notify_instance)\n    list_extra = await _get_channels_type(cfg_mgr, notify_instance, naming_extra, True)\n\n    full_list = []\n\n    full_list.extend(list_first)\n    full_list.extend(list_extra)\n\n    return full_list\n\n\nasync def _get_channels_type(cfg_mgr: ConfigurationManager, notify_instance: str, prefix: str, extra: bool) -> list:\n    \"\"\" Retrieve a type of channel\n    \"\"\"\n\n    all_categories = await cfg_mgr.get_all_category_names()\n\n    categories = [c[0] for c in all_categories if c[0].startswith(prefix)]\n    channel_names = []\n\n    if categories:\n        for ch in categories:\n            if ch.startswith(prefix):\n\n                category_info = await cfg_mgr._read_category(ch)\n\n                if extra:\n                    # Strip the \"<notification>_channel_\" prefix to get the channel name\n                    delivery_name = ch[len(prefix):]\n       
         else:\n                    try:\n                        delivery_name = ch + \"/\" + category_info['value']['plugin']['value']\n                    except (KeyError, TypeError):\n                        delivery_name = ch\n\n                channel_names.append(delivery_name)\n\n    return channel_names\n\n\nasync def get_delivery_channels(request: web.Request) -> web.Response:\n    \"\"\" Retrieve a list of all the additional notification channels for the given notification\n\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/notification/overspeed/delivery\n    \"\"\"\n    try:\n        notification_instance_name = request.match_info.get('notification_name', None)\n        storage = connect.get_storage_async()\n        config_mgr = ConfigurationManager(storage)\n        notification_config = await config_mgr._read_category_val(notification_instance_name)\n        if notification_config:\n            channels = await _get_all_delivery_channels(config_mgr, notification_instance_name)\n        else:\n            raise NotFoundError(\"{} notification instance does not exist\".format(notification_instance_name))\n    except NotFoundError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get delivery channels of {} notification.\".format(notification_instance_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({\"channels\": channels})\n\n\nasync def post_delivery_channel(request: web.Request) -> web.Response:\n    \"\"\" Add a new delivery channel to an existing notification\n        :Example:\n            curl -sX POST http://localhost:8081/fledge/notification/overspeed/delivery -d '{\"name\": \"coolant\", \"config\": {\"action\": {\"description\": \"Perform a control action to turn pump\", \"type\": \"boolean\", 
\"default\": \"false\"}}}'\n    \"\"\"\n    try:\n        notification_instance_name = request.match_info.get('notification_name', None)\n\n        data = await request.json()\n        channel_name = data.get('name', None)\n        channel_description = data.get('description', \"{} delivery channel\".format(channel_name))\n        channel_config = data.get('config', None)\n\n        if channel_name is None:\n            raise ValueError('Missing name property in payload')\n        channel_name = channel_name.strip()\n        if channel_name == \"\":\n            raise ValueError('Name should not be empty')\n        if utils.check_reserved(channel_name) is False:\n            raise ValueError('name should not use reserved words')\n        if channel_config is None or not isinstance(channel_config, dict) or len(channel_config) == 0:\n            raise ValueError('config must be a valid JSON')\n\n        pluginData = channel_config.get('plugin', None)\n        if pluginData is None or not isinstance(pluginData, dict) or len(pluginData) == 0:\n            raise ValueError(\"'plugin' property is missing or not a JSON object in 'config' data\")\n\n        storage = connect.get_storage_async()\n        config_mgr = ConfigurationManager(storage)\n        notification_config = await config_mgr._read_category_val(notification_instance_name)\n\n        if notification_config:\n\n            channel_name = \"{}_channel_{}\".format(notification_instance_name, channel_name)\n            # Create category\n            await config_mgr.create_category(category_name=channel_name, category_description=channel_description,\n                                             category_value=channel_config)\n\n            category_info = await config_mgr.get_category_all_items(category_name=channel_name)\n\n            if category_info is None:\n                raise NotFoundError('No such {} category found'.format(channel_name))\n            # Create parent-child relationship\n            
await config_mgr.create_child_category(notification_instance_name, [channel_name])\n\n        else:\n            raise NotFoundError(\"{} notification instance does not exist\".format(notification_instance_name))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except NotFoundError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to create delivery channel of {} notification\".format(notification_instance_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({\"category\": channel_name, \"description\": channel_description,\n                                  \"config\": channel_config})\n\n\nasync def get_delivery_channel_configuration(request: web.Request) -> web.Response:\n    \"\"\" Retrieve the configuration of a delivery channel\n\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/notification/overspeed/delivery/coolant\n    \"\"\"\n    notification_instance_name = request.match_info.get('notification_name', None)\n    channel_name = request.match_info.get('channel_name', None)\n    storage = connect.get_storage_async()\n    config_mgr = ConfigurationManager(storage)\n    try:\n        category_name = \"{}_channel_{}\".format(notification_instance_name, channel_name)\n        notification_config = await config_mgr._read_category_val(notification_instance_name)\n        if notification_config:\n            channels = await _get_channels(config_mgr, notification_instance_name)\n            if channel_name in channels:\n                channel_config = await config_mgr._read_category_val(category_name)\n            else:\n                raise NotFoundError(\"{} channel does not exist\".format(channel_name))\n        
else:\n            raise NotFoundError(\"{} notification instance does not exist\".format(notification_instance_name))\n    except NotFoundError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get delivery channel configuration of {} notification.\".format(\n            notification_instance_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({\"config\": channel_config})\n\n\nasync def delete_delivery_channel(request: web.Request) -> web.Response:\n    \"\"\" Remove a delivery channel\n\n    :Example:\n        curl -sX DELETE http://localhost:8081/fledge/notification/overspeed/delivery/coolant\n    \"\"\"\n\n    try:\n        notification_service = ServiceRegistry.get(s_type=ServiceRecord.Type.Notification.name)\n        _address, _port = notification_service[0]._address, notification_service[0]._port\n\n    except service_registry_exceptions.DoesNotExist:\n        raise web.HTTPNotFound(reason=\"No Notification service available.\")\n\n    notification_instance_name = request.match_info.get('notification_name', None)\n    channel_name = request.match_info.get('channel_name', None)\n    storage = connect.get_storage_async()\n    config_mgr = ConfigurationManager(storage)\n\n    try:\n\n        category_name = \"{}_channel_{}\".format(notification_instance_name, channel_name)\n        notification_config = await config_mgr._read_category_val(notification_instance_name)\n\n        if notification_config:\n            channels = await _get_channels(config_mgr, notification_instance_name)\n\n            if channel_name in channels:\n\n                # This call allows notification for deleted child category\n                # and deletes it from category_children table\n                await 
config_mgr.delete_child_category(notification_instance_name, category_name)\n\n                # Remove child category from configuration table\n                await config_mgr.delete_category_and_children_recursively(category_name)\n\n                # Get channels list again as relation gets deleted above\n                channels = await _get_channels(config_mgr, notification_instance_name)\n            else:\n                raise NotFoundError(\"{} channel does not exist\".format(channel_name))\n\n        else:\n            raise NotFoundError(\"{} notification instance does not exist\".format(notification_instance_name))\n\n    except NotFoundError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to delete delivery channel of {} notification.\".format(notification_instance_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({\"channels\": channels})\n\n\nasync def _get_all_delivery_channels(cfg_mgr: ConfigurationManager, notify_instance: str) -> list:\n    \"\"\" Retrieve all delivery channels\n        as a list of dicts whose keys are 'name' and 'category'\n    \"\"\"\n\n    full_list = []\n    category_first = \"delivery{}\".format(notify_instance)\n    first_name = await _get_channels_type(cfg_mgr, notify_instance, category_first, False)\n    first_channel = {\n            \"name\" : first_name[0].split('/')[1],\n            \"category\" : category_first,\n    }\n    full_list.append(first_channel)\n\n    naming_extra = \"{}_channel_\".format(notify_instance)\n    list_extra = await _get_channels_type(cfg_mgr, notify_instance, naming_extra, True)\n\n    for ch in list_extra:\n        extra_channel = {\n            \"name\" : ch,\n            \"category\" : \"{}_channel_{}\".format(notify_instance, ch)\n 
       }\n        full_list.append(extra_channel)\n\n    return full_list\n\n"
  },
  {
    "path": "python/fledge/services/core/api/package_log.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport os\nimport json\nfrom datetime import datetime\n\nfrom pathlib import Path\nfrom aiohttp import web\n\nfrom fledge.common.common import _FLEDGE_ROOT, _FLEDGE_DATA\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.services.core import connect\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019, Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    ----------------------------------------------------------\n    | GET            | /fledge/package/log                   |\n    | GET            | /fledge/package/log/{name}            |\n    | GET            | /fledge/package/{action}/status       |\n    ----------------------------------------------------------\n\"\"\"\nvalid_extension = '.log'\nvalid_actions = ('list', 'install', 'purge', 'update')\n_LOGGER = FLCoreLogger().get_logger(__name__)\n\n\nasync def get_logs(request: web.Request) -> web.Response:\n    \"\"\" GET list of package logs\n\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/package/log\n    \"\"\"\n    # Get logs directory path\n    logs_root_dir = _get_logs_dir()\n    found_files = []\n\n    for root, dirs, files in os.walk(logs_root_dir, topdown=True):\n        # Only process files in the root directory and filter for .log files\n        if root == logs_root_dir:\n            found_files = [f for f in files if f.endswith(valid_extension)]\n    result = []\n    for f in found_files:\n        if f.endswith(valid_extension):\n            t1 = f.split(\".log\")\n            t2 = t1[0].split(\"-fledge\")\n            t3 = t2[0].split(\"-\")\n            t4 = t1[0].split(\"-list\")\n            if len(t2) >= 2:\n                name = \"fledge{}\".format(t2[1])\n            elif len(t4) >= 2:\n                name 
= \"list\"\n            else:\n                name = t1[0]\n            if len(t3) >= 4:\n                dt = \"{}-{}-{}-{}\".format(t3[0], t3[1], t3[2], t3[3])\n                ts = datetime.strptime(dt, \"%y%m%d-%H-%M-%S\").strftime('%Y-%m-%d %H:%M:%S')\n            else:\n                dt = datetime.utcnow()\n                ts = dt.strftime(\"%Y-%m-%d %H:%M:%S\")\n            result.append({\"timestamp\": ts, \"name\": name, \"filename\": f})\n\n    return web.json_response({\"logs\": result})\n\n\nasync def get_log_by_name(request: web.Request) -> web.FileResponse:\n    \"\"\" GET for a particular log file will return the content of the log file.\n\n    :Example:\n        a) Download file\n        curl -O http://localhost:8081/fledge/package/log/190802-11-45-28-fledge-south-sinusoid.log\n\n        b) See the content of a file\n        curl -sX GET http://localhost:8081/fledge/package/log/190802-11-45-28-fledge-south-sinusoid.log\n    \"\"\"\n    # Get logs directory path\n    file_name = request.match_info.get('name', None)\n    if not file_name.endswith(valid_extension):\n        raise web.HTTPBadRequest(reason=\"Accepted file extension is {}\".format(valid_extension))\n\n    logs_root_dir = _get_logs_dir()\n    for root, dirs, files in os.walk(logs_root_dir, topdown=True):\n        # Only process files in the root directory\n        if root == logs_root_dir:\n            if str(file_name) not in files:\n                raise web.HTTPNotFound(reason='{} file not found'.format(file_name))\n\n    fp = Path(logs_root_dir + \"/\" + str(file_name))\n    return web.FileResponse(path=fp)\n\n\ndef _get_logs_dir(_path: str = '/logs/') -> str:\n    dir_path = _FLEDGE_DATA + _path if _FLEDGE_DATA else _FLEDGE_ROOT + '/data' + _path\n    if not os.path.exists(dir_path):\n        os.makedirs(dir_path)\n    logs_dir = os.path.expanduser(dir_path)\n    return logs_dir\n\n\nasync def get_package_status(request: web.Request) -> web.Response:\n    \"\"\" GET list of 
package status\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/package/list/status\n        curl -sX GET http://localhost:8081/fledge/package/install/status\n        curl -sX GET http://localhost:8081/fledge/package/purge/status\n        curl -sX GET http://localhost:8081/fledge/package/update/status\n        curl -sX GET http://localhost:8081/fledge/package/list/status?id=6560fc22-1e69-416c-9b61-06cd4f3fd8af\n        curl -sX GET http://localhost:8081/fledge/package/install/status?id=9f2f11c6-cbc4-483f-b49d-5eb57cf8001a\n        curl -sX GET http://localhost:8081/fledge/package/purge/status?id=a7ca51b0-35bf-476a-84fe-60e98006875e\n        curl -sX GET http://localhost:8081/fledge/package/update/status?id=f156e5de-3e43-4451-a63b-a933c65754ef\n    \"\"\"\n    try:\n        action = request.match_info.get('action', '').lower()\n        if action not in valid_actions:\n            raise ValueError(\"Accepted package actions are {}\".format(valid_actions))\n        select = PayloadBuilder().SELECT((\"id\", \"name\", \"action\", \"status\", \"log_file_uri\")).WHERE(\n            ['action', '=', action]).chain_payload()\n        if 'id' in request.query and request.query['id'] != '':\n            select = PayloadBuilder(select).AND_WHERE(['id', '=', request.query['id']]).chain_payload()\n        final_payload = PayloadBuilder(select).payload()\n        storage_client = connect.get_storage_async()\n        result = await storage_client.query_tbl_with_payload('packages', final_payload)\n        response = result['rows']\n        if not response:\n            raise KeyError(\"No record found\")\n        result = []\n        for r in response:\n            tmp = r\n            if r['status'] == 0:\n                tmp['status'] = 'success'\n            elif r['status'] == -1:\n                tmp['status'] = 'in-progress'\n            else:\n                tmp['status'] = 'failed'\n            tmp['logFileURI'] = r['log_file_uri']\n            del 
tmp['log_file_uri']\n            result.append(tmp)\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except KeyError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as exc:\n        msg = str(exc)\n        _LOGGER.error(exc, \"Failed to get package log status for {} action.\".format(action))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({\"packageStatus\": result})\n"
  },
  {
    "path": "python/fledge/services/core/api/performance_monitor.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom aiohttp import web\n\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.services.core import connect\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2023, Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    ----------------------------------------------------------------\n    | GET  DELETE          | /fledge/monitors                      |\n    | GET  DELETE          | /fledge/monitors/{service}            |\n    | GET  DELETE          | /fledge/monitors/{service}/{counter}  |\n    ----------------------------------------------------------------\n\"\"\"\n_LOGGER = FLCoreLogger().get_logger(__name__)\n\ndef setup(app):\n    app.router.add_route('GET', '/fledge/monitors', get_all)\n    app.router.add_route('GET', '/fledge/monitors/{service}', get_by_service_name)\n    app.router.add_route('GET', '/fledge/monitors/{service}/{counter}', get_by_service_and_counter_name)\n    app.router.add_route('DELETE', '/fledge/monitors', purge_all)\n    app.router.add_route('DELETE', '/fledge/monitors/{service}', purge_by_service)\n    app.router.add_route('DELETE', '/fledge/monitors/{service}/{counter}', purge_by_service_and_counter)\n\n# TODO: 8167 - Limit and Offset support and other pending queries\n\nasync def get_all(request: web.Request) -> web.Response:\n    \"\"\" GET list of performance monitors\n\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/monitors\n    \"\"\"\n    storage = connect.get_storage_async()\n    monitors = await storage.query_tbl(\"monitors\")\n    counters = monitors[\"rows\"]\n    monitor = {}\n    for c in counters:\n        val = {\"average\": c[\"average\"], \"maximum\": c[\"maximum\"], \"minimum\": c[\"minimum\"], \"samples\": c[\"samples\"],\n               
\"timestamp\": c[\"ts\"], \"service\": c[\"service\"]}\n        monitor.setdefault(c['monitor'], []).append(val)\n    monitors = [{'monitor': k, 'values': v} for k, v in monitor.items()]\n    # Group by service name\n    grouped_data = {}\n    for entry in monitors:\n        monitor_name = entry['monitor']\n        values = entry['values']\n        for value in values:\n            service = value.pop('service', None)\n            if service:\n                if service not in grouped_data:\n                    grouped_data[service] = {}\n                if monitor_name not in grouped_data[service]:\n                    grouped_data[service][monitor_name] = []\n                # Group by monitor\n                grouped_data[service][monitor_name].append(value)\n    return web.json_response({\"monitors\": grouped_data})\n\n\nasync def get_by_service_name(request: web.Request) -> web.Response:\n    \"\"\" GET performance monitors for the given service\n\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/monitors/<SVC_NAME>\n    \"\"\"\n    service = request.match_info.get('service', None)\n    storage = connect.get_storage_async()\n    payload = PayloadBuilder().SELECT(\"average\", \"maximum\", \"minimum\", \"monitor\", \"samples\", \"ts\").ALIAS(\n        \"return\", (\"ts\", 'timestamp')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")).WHERE(\n        [\"service\", '=', service]).payload()\n    response = {\"service\": service}\n    result = await storage.query_tbl_with_payload('monitors', payload)\n    if 'rows' in result:\n        monitor = {}\n        for row in result[\"rows\"]:\n            val = {\"average\": row[\"average\"], \"maximum\": row[\"maximum\"], \"minimum\": row[\"minimum\"],\n                   \"samples\": row[\"samples\"], \"timestamp\": row[\"timestamp\"]}\n            monitor.setdefault(row['monitor'], []).append(val)\n        monitors = [{'monitor': k, 'values': v} for k, v in monitor.items()]\n        
response[\"monitors\"] = monitors\n    return web.json_response(response)\n\nasync def get_by_service_and_counter_name(request: web.Request) -> web.Response:\n    \"\"\" GET values for the single counter for the single service\n\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/monitors/<SVC_NAME>/<COUNTER_NAME>\n    \"\"\"\n    service = request.match_info.get('service', None)\n    counter = request.match_info.get('counter', None)\n\n    storage = connect.get_storage_async()\n    payload = PayloadBuilder().SELECT(\"average\", \"maximum\", \"minimum\", \"samples\", \"ts\").ALIAS(\n        \"return\", (\"ts\", 'timestamp')).FORMAT(\"return\", (\"ts\", \"YYYY-MM-DD HH24:MI:SS.MS\")).WHERE(\n        [\"service\", '=', service]).AND_WHERE([\"monitor\", '=', counter]).payload()\n    result = await storage.query_tbl_with_payload('monitors', payload)\n    response = {}\n    if 'rows' in result:\n        response = {\"service\": service, \"monitors\":{\"monitor\": counter}}\n        response[\"monitors\"][\"values\"] = result[\"rows\"] if result[\"rows\"] else []\n    return web.json_response(response)\n\nasync def purge_all(request: web.Request) -> web.Response:\n    \"\"\" DELETE all performance monitors\n\n    :Example:\n        curl -sX DELETE http://localhost:8081/fledge/monitors\n    \"\"\"\n    storage = connect.get_storage_async()\n    result = await storage.delete_from_tbl(\"monitors\", {})\n    message = \"Nothing to remove for service performance counters.\"\n    if 'rows_affected' in result:\n        if result['response'] == \"deleted\" and result['rows_affected']:\n            message = \"All Performance counters have been removed successfully.\"\n    return web.json_response({\"message\": message})\n\nasync def purge_by_service(request: web.Request) -> web.Response:\n    \"\"\" DELETE performance monitors for the given service\n\n    :Example:\n        curl -sX DELETE http://localhost:8081/fledge/monitors/<SVC_NAME>\n    \"\"\"\n    service = 
request.match_info.get('service', None)\n    storage = connect.get_storage_async()\n    payload = PayloadBuilder().WHERE([\"service\", '=', service]).payload()\n    result = await storage.delete_from_tbl(\"monitors\", payload)\n    message = \"Nothing to remove counters from '{}' service.\".format(service)\n    if 'rows_affected' in result:\n        if result['response'] == \"deleted\" and result['rows_affected']:\n            message = \"Performance counters have been removed from '{}' service.\".format(service)\n    return web.json_response({\"message\": message})\n\nasync def purge_by_service_and_counter(request: web.Request) -> web.Response:\n    \"\"\" DELETE performance monitors for the single counter for the single service\n\n    :Example:\n        curl -sX DELETE http://localhost:8081/fledge/monitors/<SVC_NAME>/<COUNTER_NAME>\n    \"\"\"\n    service = request.match_info.get('service', None)\n    counter = request.match_info.get('counter', None)\n    storage = connect.get_storage_async()\n    payload = PayloadBuilder().WHERE([\"service\", '=', service]).AND_WHERE(\n        [\"monitor\", '=', counter]).payload()\n    result = await storage.delete_from_tbl(\"monitors\", payload)\n    message = \"Nothing to remove '{}' counter from '{}' service.\".format(counter, service)\n    if 'rows_affected' in result:\n        if result['response'] == \"deleted\" and result['rows_affected']:\n            message = \"Performance '{}' counter has been removed from '{}' service.\".format(counter, service)\n    return web.json_response({\"message\": message})\n"
  },
  {
    "path": "python/fledge/services/core/api/pipeline_debugger.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nimport aiohttp\nfrom aiohttp import web\n\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core.service_registry import exceptions as service_registry_exceptions\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2025 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n_help = \"\"\"\n    -----------------------------------------------------------------------------------------------------------------\n    | GET | PUT                      |        /fledge/service/{name}/debug?action=                                  |\n    -----------------------------------------------------------------------------------------------------------------\n\"\"\"\n\nSUPPORTED_ACTIONS = [\"attach\", \"detach\", \"buffer\", \"isolate\", \"suspend\", \"replay\", \"step\"]\nFORBIDDEN_MSG = 'Resource you were trying to reach is absolutely forbidden for some reason'\n\n\ndef setup(app):\n    app.router.add_route('GET', '/fledge/service/{name}/debug', get_action)\n    app.router.add_route('PUT', '/fledge/service/{name}/debug', put_action)\n\n\nasync def get_action(request: web.Request) -> web.Response:\n    if request.is_auth_optional:\n        _logger.warning(FORBIDDEN_MSG)\n        raise web.HTTPForbidden\n    try:\n        action_items = ['buffer', 'state']\n        svc_name = request.match_info.get('name', None)\n        if 'action' in request.query and request.query['action'] != '' and request.query['action'] in action_items:\n            svc, svc_type, bearer_token = await _check_service(svc_name)\n            url = \"fledge/{}/debug/{}\".format(svc_type, request.query['action'])\n            
status_code, response = await _call_service_api(\n                'GET', svc._protocol, svc._address, svc._port, url, bearer_token, {})\n        else:\n            raise ValueError('The action query parameter is either missing, empty, or contains an invalid value. '\n                             'Valid values are: {}'.format(action_items))\n    except service_registry_exceptions.DoesNotExist:\n        msg = \"No '{}' service available.\".format(svc_name)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(status=status_code, body=response)\n\nasync def put_action(request: web.Request) -> web.Response:\n    if request.is_auth_optional:\n        _logger.warning(FORBIDDEN_MSG)\n        raise web.HTTPForbidden\n    try:\n        data = {}\n        svc_name = request.match_info.get('name', None)\n        if 'action' in request.query and request.query['action'] != '' and request.query['action'] in SUPPORTED_ACTIONS:\n            if request.body_exists:\n                data = await request.json()\n            svc, svc_type, bearer_token = await _check_service(svc_name)\n            action_name = request.query['action']\n            url = \"fledge/{}/debug/{}\".format(svc_type, action_name)\n            # For consistency, the buffer size should also be set with PUT from the C pipeline\n            verb = 'PUT' if action_name != 'buffer' else 'POST'\n            status_code, response = await _call_service_api(\n                verb, svc._protocol, svc._address, svc._port, url, bearer_token, data)\n        else:\n            raise ValueError('The action query parameter is either missing, empty, or contains an invalid value. 
'\n                             'Valid values are: {}'.format(SUPPORTED_ACTIONS))\n    except service_registry_exceptions.DoesNotExist:\n        msg = \"No '{}' service available.\".format(svc_name)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(status=status_code, body=response)\n\nasync def _check_service(svc_name):\n    service = ServiceRegistry.get(name=svc_name)\n    token = ServiceRegistry.getBearerToken(svc_name)\n    svc = service[0]\n    if svc._type == 'Northbound':\n        svc_type = \"north\"\n    elif svc._type == 'Southbound':\n        svc_type = \"south\"\n    else:\n        raise ValueError('This is currently valid only for services with a type of \"South\" or \"North\".')\n    if svc._status != ServiceRecord.Status.Running:\n        raise ValueError(\"The '{}' service is not in a Running state.\".format(svc_name))\n    return svc, svc_type, token\n\n\nasync def _call_service_api(verb: str, protocol: str, address: str, port: int, uri: str, token: str, payload: dict):\n    # Custom Request header\n    headers = {}\n    if token is not None:\n        headers['Authorization'] = \"Bearer {}\".format(token)\n    url = \"{}://{}:{}/{}\".format(protocol, address, port, uri)\n    try:\n        if verb == 'GET':\n            async with aiohttp.ClientSession() as session:\n                async with session.get(url, headers=headers) as resp:\n                    status_code = resp.status\n                    jdoc = await resp.text()\n                    response = (resp.status, jdoc)\n                    if status_code not in range(200, 209):\n                        _logger.error(\"GET Request Error 
code: {}, reason: {}, details: {}, url: {}\".format(\n                            resp.status, resp.reason, jdoc, uri))\n        elif verb == 'PUT':\n            async with aiohttp.ClientSession() as session:\n                async with session.put(url, headers=headers, data=json.dumps(payload) if payload else None) as resp:\n                    jdoc = await resp.text()\n                    response = (resp.status, jdoc)\n                    if resp.status not in range(200, 209):\n                        _logger.error(\"PUT Request Error code: {}, reason: {}, details: {}, url: {}\".format(\n                            resp.status, resp.reason, jdoc, uri))\n        else:\n            async with aiohttp.ClientSession() as session:\n                async with session.post(url, data=json.dumps(payload), headers=headers) as resp:\n                    jdoc = await resp.text()\n                    response = (resp.status, jdoc)\n                    if resp.status not in range(200, 209):\n                        _logger.error(\"POST Request Error code: {}, reason: {}, details: {}, url: {}\".format(\n                            resp.status, resp.reason, jdoc, uri))\n    except Exception as ex:\n        raise Exception(str(ex))\n    else:\n        # Return Tuple - (http statuscode, message)\n        return response\n\n"
  },
  {
    "path": "python/fledge/services/core/api/plugins/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/core/api/plugins/common.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Common Definitions\"\"\"\nimport sys\nimport types\nimport os\nimport json\nimport glob\nimport importlib.util\nfrom typing import Dict\nfrom datetime import datetime\nfrom functools import lru_cache\n\nfrom fledge.common import utils as common_utils\nfrom fledge.common.common import _FLEDGE_ROOT, _FLEDGE_DATA, _FLEDGE_PLUGIN_PATH\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.services.core.api import utils\nfrom fledge.services.core.api.plugins.exceptions import *\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019, Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n_logger = FLCoreLogger().get_logger(__name__)\n_NO_OF_FILES_TO_RETAIN = 10\n\n\ndef load_python_plugin(plugin_module_path: str, plugin: str, _type: str) -> types.ModuleType:\n    _plugin = None\n    module_name = \"fledge.plugins.{}.{}.{}\".format(_type, plugin, plugin)\n    try:\n        spec = importlib.util.spec_from_file_location(module_name, \"{}/{}.py\".format(plugin_module_path, plugin))\n        _plugin = importlib.util.module_from_spec(spec)\n        spec.loader.exec_module(_plugin)\n        sys.modules[module_name] = _plugin\n    except FileNotFoundError:\n        if _FLEDGE_PLUGIN_PATH:\n            plugin_paths = _FLEDGE_PLUGIN_PATH.split(\";\")\n            for pp in plugin_paths:\n                if os.path.isdir(pp):\n                    plugin_module_path = \"{}/{}/{}\".format(pp, _type, plugin)\n                    spec = importlib.util.spec_from_file_location(module_name, \"{}/{}.py\".format(\n                        plugin_module_path, plugin))\n                    _plugin = importlib.util.module_from_spec(spec)\n                    spec.loader.exec_module(_plugin)\n                    sys.modules[module_name] = _plugin\n    return _plugin\n\n\ndef 
load_and_fetch_python_plugin_info(plugin_module_path: str, plugin: str, _type: str) -> Dict:\n    module_name = \"fledge.plugins.{}.{}.{}\".format(_type, plugin, plugin)\n    try:\n        if module_name not in sys.modules:\n            _plugin_module = load_python_plugin(plugin_module_path, plugin, _type)\n            # Fetch configuration from the configuration defined in the plugin\n            try:\n                plugin_info = _plugin_module.plugin_info()\n                if plugin_info['type'] != _type:\n                    msg = \"Plugin of type {} is not supported\".format(plugin_info['type'])\n                    raise TypeError(msg)\n            except Exception as ex:\n                _logger.warning(\"Python plugin not found: {}; trying C plugin\".format(ex))\n                raise FileNotFoundError\n        else:\n            _plugin = sys.modules[module_name]\n            plugin_info = _plugin.plugin_info()\n    except Exception as ex:\n        _logger.warning(\"Python plugin not found: {}; trying C plugin\".format(ex))\n        raise FileNotFoundError\n    return plugin_info\n\n\ndef load_and_fetch_c_hybrid_plugin_info(plugin_name: str, is_config: bool, plugin_type='south') -> Dict:\n    plugin_info = None\n    if plugin_type == 'south':\n        config_items = ['default', 'type', 'description']\n        optional_items = ['readonly', 'order', 'length', 'maximum', 'minimum', 'rule', 'deprecated', 'displayName',\n                          'options']\n        config_items.extend(optional_items)\n        plugin_dir = _FLEDGE_ROOT + '/' + 'plugins' + '/' + plugin_type\n        if _FLEDGE_PLUGIN_PATH:\n            plugin_paths = _FLEDGE_PLUGIN_PATH.split(\";\")\n            for pp in plugin_paths:\n                if os.path.isdir(pp):\n                    plugin_dir = pp + '/' + plugin_type\n        if not os.path.isdir(plugin_dir + '/' + plugin_name):\n            plugin_dir = _FLEDGE_ROOT + '/' + 'plugins' + '/' + plugin_type\n\n        file_name = 
plugin_dir + '/' + plugin_name + '/' + plugin_name + '.json'\n        with open(file_name) as f:\n            data = json.load(f)\n            json_file_keys = ('connection', 'name', 'defaults', 'description')\n            if all(k in data for k in json_file_keys):\n                connection_name = data['connection']\n                # The original condition tested a non-empty path string, which is always truthy;\n                # check that the connection plugin directory actually exists\n                if os.path.isdir(_FLEDGE_ROOT + '/' + 'plugins' + '/' + plugin_type + '/' + connection_name) or os.path.isdir(plugin_dir + '/' + connection_name):\n                    jdoc = utils.get_plugin_info(connection_name, dir=plugin_type)\n                    if jdoc:\n                        pkg_name = ''\n                        # Only OMF is an inbuilt plugin\n                        if plugin_name.lower() != 'omf':\n                            pkg_name = 'fledge-{}-{}'.format(plugin_type, plugin_name.lower().replace(\"_\", \"-\"))\n\n                        plugin_info = {'name': plugin_name,\n                                       'type': plugin_type,\n                                       'description': data['description'],\n                                       'version': jdoc['version'],\n                                       'installedDirectory': '{}/{}'.format(plugin_type, plugin_name),\n                                       'packageName': pkg_name\n                                    }\n                        keys_a = set(jdoc['config'].keys())\n                        keys_b = set(data['defaults'].keys())\n                        intersection = keys_a & keys_b\n                        # Merge default and other configuration fields of both connection plugin\n                        # and hybrid plugin with intersection of 'config' keys\n                        # Use Hybrid Plugin name and description defined in json file\n                        temp = jdoc['config']\n                        temp['plugin']['default'] = plugin_name\n                        temp['plugin']['description'] = data['description']\n                        for _key in intersection:\n        
                    config_item_keys = set(data['defaults'][_key].keys())\n                            for _config_key in config_item_keys:\n                                if _config_key in config_items:\n                                    if temp[_key]['type'] == 'JSON' and _config_key == 'default':\n                                        temp[_key][_config_key] = json.dumps(data['defaults'][_key][_config_key])\n                                    elif temp[_key]['type'] == 'enumeration' and _config_key == 'default':\n                                        temp[_key][_config_key] = data['defaults'][_key][_config_key]\n                                    else:\n                                        temp[_key][_config_key] = str(data['defaults'][_key][_config_key])\n                        if is_config:\n                            plugin_info.update({'config': temp})\n                    else:\n                        _logger.warning(\"{} hybrid plugin is not installed which is required for {}\".format(\n                            connection_name, plugin_name))\n                else:\n                    _logger.warning(\"{} hybrid plugin is not installed which is required for {}\".format(\n                        connection_name, plugin_name))\n            else:\n                raise Exception('Required {} keys are missing for json file'.format(json_file_keys))\n    return plugin_info\n\n\n@lru_cache(maxsize=1, typed=True)\ndef _get_available_packages(tmp_log_output_fp: str, pkg_mgt: str, pkg_type: str) -> tuple:\n    available_packages = []\n    open(tmp_log_output_fp, \"w\").close()\n    if pkg_mgt == 'yum':\n        cmd = \"sudo yum list available fledge-{}\\* | grep fledge | cut -d . 
-f1 > {} 2>&1\".format(\n            pkg_type, tmp_log_output_fp)\n    else:\n        cmd = \"sudo apt list | grep fledge-{} | grep -v installed | cut -d / -f1  > {} 2>&1\".format(\n            pkg_type, tmp_log_output_fp)\n    code = os.system(cmd)\n\n    # Below temporary file is for Output of above command which is needed to return in API response\n    with open(\"{}\".format(tmp_log_output_fp), 'r') as fh:\n        for line in fh:\n            line = line.rstrip(\"\\n\")\n            available_packages.append(line)\n\n    return code, available_packages\n\n\nasync def fetch_available_packages(package_type: str = \"\") -> tuple:\n    # Require a local import in order to avoid circular import references\n    from fledge.services.core import server\n\n    stdout_file_path = create_log_file(action=\"list\")\n    tmp_log_output_fp = stdout_file_path.split('logs/')[:1][0] + \"logs/output.txt\"\n    pkg_type = \"\" if package_type is None else package_type\n    pkg_mgt = 'yum' if common_utils.is_redhat_based() else 'apt'\n    category = await server.Server._configuration_manager.get_category_all_items(\"Installation\")\n    max_update_cat_item = category['maxUpdate']\n    pkg_cache_mgr = server.Server._package_cache_manager\n    last_accessed_time = pkg_cache_mgr['update']['last_accessed_time']\n    now = datetime.now()\n    then = last_accessed_time if last_accessed_time else now\n    duration_in_sec = (now - then).total_seconds()\n    # If max update per day is set to 1, then an update can not occurs until 24 hours after the last accessed update.\n    # If set to 2 then this drops to 12 hours between updates, 3 would result in 8 hours between calls and so on.\n    if duration_in_sec > (24 / int(max_update_cat_item['value'])) * 60 * 60 or not last_accessed_time:\n        _logger.info(\"Attempting {} update on {}...\".format(pkg_mgt, now))\n        cmd = \"sudo {} -y update > {} 2>&1\".format(pkg_mgt, stdout_file_path)\n        if pkg_mgt == 'yum':\n            cmd = 
\"sudo {} check-update > {} 2>&1\".format(pkg_mgt, stdout_file_path)\n        # Execute command\n        os.system(cmd)\n        pkg_cache_mgr['update']['last_accessed_time'] = now\n        # fetch available package caching always clear on every update request\n        _get_available_packages.cache_clear()\n    else:\n        _logger.warning(\"Maximum {} update exceeds the limit for the day.\".format(pkg_mgt))\n    ttl_cat_item_val = int(category['listAvailablePackagesCacheTTL']['value'])\n    if ttl_cat_item_val > 0:\n        last_accessed_time = pkg_cache_mgr['list']['last_accessed_time']\n        now = datetime.now()\n        if not last_accessed_time:\n            last_accessed_time = now\n            pkg_cache_mgr['list']['last_accessed_time'] = now\n        duration_in_sec = (now - last_accessed_time).total_seconds()\n        if duration_in_sec > ttl_cat_item_val * 60:\n            _get_available_packages.cache_clear()\n            pkg_cache_mgr['list']['last_accessed_time'] = datetime.now()\n    else:\n        _get_available_packages.cache_clear()\n        pkg_cache_mgr['list']['last_accessed_time'] = \"\"\n    ret_code, available_packages = _get_available_packages(tmp_log_output_fp, pkg_mgt, pkg_type)\n    # combine above output in logs file\n    with open(\"{}\".format(stdout_file_path), 'a') as fh:\n        fh.write(\" \\n\".join(available_packages))\n        if not len(available_packages):\n            fh.write(\"No package available to install\")\n    # Remove tmp_log_output_fp\n    if os.path.isfile(tmp_log_output_fp):\n        os.remove(tmp_log_output_fp)\n    # relative log file link\n    link = \"logs/\" + stdout_file_path.split(\"/\")[-1]\n    if ret_code != 0:\n        raise PackageError(link)\n    return available_packages, link\n\n\ndef create_log_file(action: str = \"\", plugin_name: str = \"\") -> str:\n    logs_dir = '/logs/'\n    _PATH = _FLEDGE_DATA + logs_dir if _FLEDGE_DATA else _FLEDGE_ROOT + '/data{}'.format(logs_dir)\n    # 
YYMMDD-HH-MM-SS-{plugin_name}.log\n    file_spec = datetime.utcnow().strftime('%y%m%d-%H-%M-%S')\n    if not action:\n        log_file_name = \"{}-{}.log\".format(file_spec, plugin_name) if plugin_name else \"{}.log\".format(file_spec)\n    else:\n        log_file_name = \"{}-{}-{}.log\".format(file_spec, plugin_name, action) if plugin_name else \"{}-{}.log\".format(file_spec, action)\n    if not os.path.exists(_PATH):\n        os.makedirs(_PATH)\n\n    # Create an empty log file\n    open(_PATH + log_file_name, \"w\").close()\n\n    # A maximum of _NO_OF_FILES_TO_RETAIN log files will be maintained.\n    # When the limit is exceeded, the oldest log files are removed on the basis of creation time\n    files = glob.glob(\"{}*.log\".format(_PATH))\n    files.sort(key=os.path.getctime)\n    if len(files) > _NO_OF_FILES_TO_RETAIN:\n        for f in files[:-_NO_OF_FILES_TO_RETAIN]:\n            if os.path.isfile(f):\n                os.remove(f)\n\n    return _PATH + log_file_name\n"
  },
  {
    "path": "python/fledge/services/core/api/plugins/config_validator.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport asyncio\nimport json\nimport socket\nimport re\nfrom urllib.parse import urlparse\nfrom aiohttp import web\n\nfrom fledge.common.logger import FLCoreLogger\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2025 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n_help = \"\"\"\n    ------------------------------------------------------------------------------\n    | PUT                 | /fledge/plugin/validate                              |\n    ------------------------------------------------------------------------------\n\"\"\"\n\nclass ConfigurationValidator:\n    \"\"\"\n    Configuration Validation System\n    \n    Provides connectivity validation for plugin configurations through:\n    - ICMP ping tests for host reachability (preferred method)\n    - TCP connection attempts for service availability\n    - Automatic fallback to TCP connectivity testing in restricted container environments\n    \"\"\"\n    \n    # Configuration item names to check (case insensitive)\n    ADDRESS_FIELDS = ['address', 'ip', 'server', 'host', 'hostname']\n    URL_FIELDS = ['url']\n    BROKER_FIELDS = ['broker', 'brokerhost']\n    PORT_FIELDS = ['port', 'brokerport']\n    \n    # Standard protocol ports\n    STANDARD_PORTS = {\n        'http': 80,\n        'https': 443,\n        'ftp': 21,\n        'ssh': 22,\n        'telnet': 23,\n        'smtp': 25,\n        'dns': 53,\n        'dhcp': 67,\n        'tftp': 69,\n        'pop3': 110,\n        'imap': 143,\n        'snmp': 161,\n        'ldap': 389,\n        'ldaps': 636\n    }\n    \n    def __init__(self):\n        self.results = {}\n    \n    def extract_configuration_items(self, config_data):\n        \"\"\"\n        Extract relevant configuration items for validation.\n        \n        Args:\n          
  config_data (dict): Configuration category contents\n            \n        Returns:\n            dict: Categorized configuration items\n            \n        Raises:\n            ValueError: If configuration item has neither 'value' nor 'default' key\n        \"\"\"\n        extracted = {\n            'addresses': [],\n            'urls': [],\n            'brokers': [],\n            'ports': []\n        }\n        \n        if not isinstance(config_data, dict):\n            return extracted\n            \n        for key, value in config_data.items():\n            if not isinstance(value, dict):\n                continue\n            \n            # Check for value first, then default, then error\n            config_value = None\n            if 'value' in value:\n                config_value = str(value['value']).strip()\n            elif 'default' in value:\n                config_value = str(value['default']).strip()\n            else:\n                raise ValueError(f\"Configuration item '{key}' must have either 'value' or 'default' key\")\n            \n            # Skip empty or zero values\n            if not config_value or config_value == '0':\n                continue\n                \n            key_lower = key.lower()\n            field_type = value.get('type', 'string').lower()\n            \n            # Check for port fields first (more specific)\n            if any(field in key_lower for field in self.PORT_FIELDS):\n                try:\n                    port_val = int(config_value)\n                    if port_val > 0:\n                        extracted['ports'].append({\n                            'name': key,\n                            'value': port_val,\n                            'type': value.get('type', 'integer')\n                        })\n                except ValueError:\n                    pass\n            \n            # Check for URL fields\n            elif any(field in key_lower for field in self.URL_FIELDS):\n      
          extracted['urls'].append({\n                    'name': key,\n                    'value': config_value,\n                    'type': value.get('type', 'string')\n                })\n            \n            # Check for broker fields\n            elif any(field in key_lower for field in self.BROKER_FIELDS):\n                extracted['brokers'].append({\n                    'name': key,\n                    'value': config_value,\n                    'type': value.get('type', 'string')\n                })\n            \n            # Check for address fields (less specific, checked last)\n            # Exclude certain types that are not network addresses\n            elif (any(field in key_lower for field in self.ADDRESS_FIELDS) and \n                  field_type not in ['enumeration', 'password', 'boolean']):\n                extracted['addresses'].append({\n                    'name': key,\n                    'value': config_value,\n                    'type': value.get('type', 'string')\n                })\n                    \n        return extracted\n    \n    def parse_url(self, url_string):\n        \"\"\"\n        Parse URL to extract hostname and port.\n        \n        Args:\n            url_string (str): URL to parse\n            \n        Returns:\n            tuple: (hostname, port, protocol) or (None, None, None) if invalid\n        \"\"\"\n        try:\n            # Handle special protocols like opc.tcp and tcp (MQTT)\n            # Remember the original scheme before rewriting, since the rewritten\n            # url_string no longer contains 'tcp://'\n            was_tcp = url_string.startswith('tcp://')\n            if url_string.startswith('opc.tcp://'):\n                url_string = url_string.replace('opc.tcp://', 'opcua://')\n            elif was_tcp:\n                # Convert tcp:// to mqtt:// for standard parsing\n                url_string = url_string.replace('tcp://', 'mqtt://')\n            \n            parsed = urlparse(url_string)\n            \n            if not parsed.hostname:\n                return None, None, None\n                \n            hostname = parsed.hostname\n            port = parsed.port\n            protocol = parsed.scheme.lower()\n            \n            # Map back to original protocol names\n            if protocol == 'opcua':\n                protocol = 'opc.tcp'\n            elif protocol == 'mqtt' and was_tcp:\n                protocol = 'tcp'  # Original scheme was tcp://\n\n            # Use standard port if not specified\n            if port is None and protocol in self.STANDARD_PORTS:\n                port = self.STANDARD_PORTS[protocol]\n            \n                \n            return hostname, port, protocol\n            \n        except Exception as e:\n            _logger.debug(f\"URL parsing error for '{url_string}': {e}\")\n            return None, None, None\n    \n    def is_url(self, value):\n        \"\"\"\n        Check if a value is a valid URL.\n        \n        Args:\n            value (str): Value to check\n            \n        Returns:\n            bool: True if value appears to be a URL\n        \"\"\"\n        url_pattern = re.compile(\n            r'^(https?|ftp|mqtt|mqtts|opcua|opc\\.tcp|tcp|coap|coaps)://'\n            r'[\\w\\-\\.]+(:\\d+)?(/.*)?$',\n            re.IGNORECASE\n        )\n        return bool(url_pattern.match(value))\n    \n\n    async def _icmp_ping(self, hostname, timeout=3):\n        \"\"\"\n        Perform ICMP ping using system ping command.\n        \n        Args:\n            hostname (str): Hostname or IP address to ping\n            timeout (int): Timeout in seconds\n            \n        Returns:\n            tuple: (success, reason) or (None, reason) if ICMP unavailable\n        \"\"\"\n        try:\n            # asyncio is already imported at module level\n            # Use ping command with specific parameters for reliability\n            # -c 1: send only 1 packet\n            # -W timeout: wait timeout seconds for response\n            # -q: quiet output (only summary)\n            cmd = ['ping', '-c', '1', '-W', str(timeout), 
'-q', hostname]\n            \n            _logger.debug(f\"Running ICMP ping: {' '.join(cmd)}\")\n            \n            # Run ping command asynchronously\n            process = await asyncio.create_subprocess_exec(\n                *cmd,\n                stdout=asyncio.subprocess.PIPE,\n                stderr=asyncio.subprocess.PIPE\n            )\n            \n            stdout, stderr = await asyncio.wait_for(\n                process.communicate(), \n                timeout=timeout + 2  # Allow extra time for process cleanup\n            )\n            \n            if process.returncode == 0:\n                _logger.debug(f\"ICMP ping to the host '{hostname}' successful\")\n                return True, f\"Host '{hostname}' is reachable (ICMP ping successful)\"\n            else:\n                # Parse ping output for better error messages\n                stderr_str = stderr.decode('utf-8', errors='ignore').lower()\n                stdout_str = stdout.decode('utf-8', errors='ignore').lower()\n                combined_output = stderr_str + stdout_str\n                \n                if 'name or service not known' in combined_output or 'cannot resolve' in combined_output:\n                    return False, f\"Unable to resolve the hostname '{hostname}' - please check the hostname is correct\"\n                elif 'network is unreachable' in combined_output:\n                    return False, f\"Network unreachable to the hostname '{hostname}' - check network configuration\"\n                elif 'host unreachable' in combined_output or 'no route to host' in combined_output:\n                    return False, f\"Host '{hostname}' is unreachable - check if host is online and network path exists\"\n                elif '100% packet loss' in combined_output or 'no answer' in combined_output:\n                    return False, f\"Host '{hostname}' does not respond to ping - may be down or blocking ICMP\"\n                else:\n                    return 
False, f\"Host '{hostname}' ping failed - host may be unreachable or blocking ICMP\"\n                    \n        except asyncio.TimeoutError:\n            _logger.warning(f\"ICMP ping to the host '{hostname}' timed out\")\n            return False, f\"Ping to '{hostname}' timed out - host may be unreachable\"\n        except FileNotFoundError:\n            _logger.debug(\"Ping command not found - falling back to TCP connectivity test\")\n            return None, \"ICMP ping not available\"\n        except PermissionError:\n            _logger.debug(\"Permission denied for ICMP ping - falling back to TCP connectivity test\")\n            return None, \"ICMP ping not permitted\"\n        except Exception as e:\n            _logger.debug(f\"ICMP ping failed with error: {e}\")\n            return None, f\"ICMP ping unavailable: {e}\"\n\n    async def _tcp_connectivity_test(self, hostname):\n        \"\"\"\n        Test host connectivity using TCP socket connections.\n        \n        This method attempts to resolve the hostname and then tries to\n        create a TCP socket connection to a common port (e.g., port 7 - echo).\n        \n        Args:\n            hostname (str): Hostname or IP address to test\n            \n        Returns:\n            tuple: (success, reason)\n        \"\"\"\n        try:\n            # First try to resolve the hostname\n            _logger.debug(f\"Attempting to resolve hostname: {hostname}\")\n\n            # 1. 
DNS resolution (runs in thread executor to avoid blocking)\n            loop = asyncio.get_event_loop()\n            try:\n                addr_info = await loop.run_in_executor(None, socket.getaddrinfo, hostname, None)\n            except socket.gaierror as e:\n                _logger.error(f\"DNS resolution failed for hostname '{hostname}': {e}\")\n                return False, f\"Cannot resolve hostname '{hostname}'\"\n\n            if not addr_info:\n                return False, f\"Hostname '{hostname}' could not be resolved\"\n\n            # 2. Simple routing reachability test (non-blocking)\n            for family, _, _, _, sockaddr in addr_info:\n                ip_addr = sockaddr[0]\n                _logger.debug(f\"Testing route to {ip_addr}\")\n\n                try:\n                    # Try to create a socket and connect with timeout 0\n                    # to a non-existent port, expecting a fast network error\n                    with socket.socket(family, socket.SOCK_STREAM) as sock:\n                        sock.settimeout(1)\n                        sock.connect_ex((ip_addr, 7))  # Port 7 (echo) usually closed but routable\n                        # If connect_ex returns fast, the route exists\n                        return True, f\"Host '{hostname}' is reachable\"\n                except OSError as e:\n                    _logger.debug(f\"Network error testing {ip_addr}: {e}\")\n                    continue\n\n            return False, f\"Host '{hostname}' appears unreachable - no routable address found\"\n        except Exception as e:\n            _logger.error(f\"Unexpected error during TCP connectivity test for the hostname '{hostname}': {e}\")\n            return False, f\"Cannot test connectivity to the hostname '{hostname}' - network error occurred\"\n\n    async def ping_host(self, hostname):\n        \"\"\"\n        Perform host reachability test using ICMP ping when available,\n        falling back to TCP connectivity testing in 
restricted environments.\n        \n        This method tries ICMP ping first (most reliable and appropriate for host reachability),\n        and only falls back to TCP socket testing when ICMP is not available due to\n        container restrictions or permissions.\n        \n        Args:\n            hostname (str): Hostname or IP address to test\n            \n        Returns:\n            tuple: (success, reason)\n        \"\"\"\n        try:\n            \n            # Try ICMP ping first\n            icmp_result, icmp_reason = await self._icmp_ping(hostname)\n            if icmp_result is not None:  # None means ICMP not available\n                return icmp_result, icmp_reason\n                    \n            # Fall back to TCP connectivity test\n            _logger.debug(f\"Using TCP connectivity test for the hostname '{hostname}'\")\n            return await self._tcp_connectivity_test(hostname)\n            \n        except Exception as e:\n            _logger.error(f\"Unexpected error during host reachability test for the hostname '{hostname}': {e}\")\n            return False, f\"Unable to test reachability of the hostname '{hostname}' - error occurred: {e}\"\n    \n    async def check_port_listening(self, hostname, port):\n        \"\"\"\n        Check if a service is listening on the specified host and port.\n        \n        Args:\n            hostname (str): Hostname or IP address\n            port (int): Port number\n            \n        Returns:\n            tuple: (success, reason)\n        \"\"\"\n        try:\n            # Attempt TCP connection\n            _logger.debug(f\"Testing connection to {hostname}:{port}\")\n            future = asyncio.open_connection(hostname, port)\n            reader, writer = await asyncio.wait_for(future, timeout=5.0)\n            \n            # Close the connection immediately\n            writer.close()\n            await writer.wait_closed()\n            \n      
      _logger.debug(f\"Successfully connected to {hostname}:{port}\")\n            success_msg = f\"Service is listening on port {port}\"\n            return True, success_msg\n        except asyncio.TimeoutError:\n            _logger.warning(f\"Connection timeout to {hostname}:{port}\")\n            error_msg = f\"Connection to {hostname}:{port} timed out after 5 seconds\"\n            return False, error_msg\n        except ConnectionRefusedError:\n            _logger.error(f\"Connection refused by {hostname}:{port}\")\n            error_msg = f\"No service is listening on {hostname}:{port}\"\n            return False, error_msg\n        except socket.gaierror as e:\n            error_msg = str(e).lower()\n            _logger.error(f\"DNS resolution failed for {hostname}: {e}\")\n            \n            if 'name or service not known' in error_msg or 'nodename nor servname provided' in error_msg:\n                return False, f\"Unable to resolve hostname '{hostname}' - please verify the hostname is correct\"\n            elif 'temporary failure' in error_msg:\n                return False, f\"Temporary DNS failure for hostname '{hostname}' - please try again later\"\n            else:\n                return False, f\"DNS lookup failed for hostname '{hostname}'\"\n        except OSError as e:\n            error_code = getattr(e, 'errno', None)\n            error_msg = str(e).lower()\n            _logger.error(f\"Network error connecting to {hostname}:{port}: {e}\")\n            \n            # Handle specific error codes for better user messages\n            \n            if error_code == 113 or 'no route to host' in error_msg:\n                return False, f\"Unable to reach host '{hostname}' - please check your network connectivity\"\n            elif error_code == 110 or 'connection timed out' in error_msg:\n                error_msg = f\"Connection to {hostname}:{port} timed out - the host may be unreachable\"\n                return False, error_msg\n    
        elif 'network is unreachable' in error_msg:\n                return False, f\"Network is unreachable to '{hostname}' - please check your network configuration\"\n            elif 'multiple exceptions' in error_msg:\n                # Handle IPv6/IPv4 dual stack connection failures\n                error_msg = f\"Unable to connect to {hostname}:{port} - no service is available\"\n                return False, error_msg\n            else:\n                error_msg = f\"Network error connecting to {hostname}:{port}\"\n                return False, error_msg\n        except Exception as e:\n            _logger.error(f\"Unexpected error testing {hostname}:{port}: {e}\")\n            error_msg = f\"Connection test failed for {hostname}:{port}\"\n            return False, error_msg\n    \n    async def test_host_reachable(self, config_items):\n        \"\"\"\n        Test host reachability using ICMP ping.\n        \n        Args:\n            config_items (dict): Extracted configuration items\n            \n        Returns:\n            dict: Test results\n        \"\"\"\n        hosts_to_test = set()\n        test_values = []\n        \n        # Process direct address fields\n        for item in config_items['addresses']:\n            hosts_to_test.add(item['value'])\n            test_values.append({item['name']: item['value']})\n        \n        # Process URLs\n        for item in config_items['urls']:\n            hostname, _, _ = self.parse_url(item['value'])\n            if hostname:\n                hosts_to_test.add(hostname)\n                test_values.append({item['name']: item['value']})\n            else:\n                return {\n                    \"description\": \"Host Reachability\",\n                    \"result\": \"fail\",\n                    \"detail\": {\"reason\": f\"Invalid URL format '{item['value']}' - please check the URL is correct\"},\n                    \"values\": [{item['name']: item['value']}]\n                }\n        \n 
       # Process brokers\n        for item in config_items['brokers']:\n            if self.is_url(item['value']):\n                # Broker is a URL\n                hostname, _, _ = self.parse_url(item['value'])\n                if hostname:\n                    hosts_to_test.add(hostname)\n                    test_values.append({item['name']: item['value']})\n                else:\n                    return {\n                        \"description\": \"Host Reachability\",\n                        \"result\": \"fail\",\n                        \"detail\": {\"reason\": f\"Invalid URL format '{item['value']}' - please check the URL is correct\"},\n                        \"values\": [{item['name']: item['value']}]\n                    }\n            else:\n                # Broker is hostname only or brokerHost\n                hostname = item['value']\n                hosts_to_test.add(hostname)\n                test_values.append({item['name']: item['value']})\n        \n        if not hosts_to_test:\n            return None  # No applicable tests\n        \n        # Test all unique hosts\n        isHostReachable = False\n        failure_reason = None\n        \n        for hostname in hosts_to_test:\n            success, reason = await self.ping_host(hostname)\n            if success:\n                isHostReachable = True\n            else:\n                failure_reason = reason\n        \n        result = {\n            \"description\": \"Host Reachability\",\n            \"result\": \"pass\" if isHostReachable else \"fail\",\n            \"values\": test_values\n        }\n        \n        if not isHostReachable:\n            result[\"detail\"] = {\"reason\": failure_reason}\n            \n        return result\n    \n    async def test_listening(self, config_items):\n        \"\"\"\n        Test if services are listening on specified ports.\n        \n        Args:\n            config_items (dict): Extracted configuration items\n            \n        
Returns:\n            dict: Test results\n        \"\"\"\n        connections_to_test = []\n        test_values = []\n        processed_combinations = set()  # Track processed host:port combinations to avoid duplicates\n        \n        # Handle separated broker host/port fields first (most specific)\n        broker_hosts = [item for item in config_items['brokers'] if 'host' in item['name'].lower()]\n        broker_ports = [item for item in config_items['ports'] if 'broker' in item['name'].lower()]\n        is_port_in_config = False # Track if any port is explicitly provided in configuration\n\n        # Pair broker hosts with broker ports\n        for host_item in broker_hosts:\n            hostname = host_item['value']\n            port = None\n            \n            # Find corresponding broker port\n            for port_item in broker_ports:\n                port = port_item['value']\n                combination_key = f\"{hostname}:{port}\"\n                if combination_key not in processed_combinations:\n                    # Combine broker host and port into single object\n                    test_values.append({\n                        host_item['name']: hostname,\n                        port_item['name']: str(port)\n                    })\n                    connections_to_test.append((hostname, port))\n                    processed_combinations.add(combination_key)\n                    is_port_in_config = True\n                break\n        \n        # Process broker URLs\n        for item in config_items['brokers']:\n            if self.is_url(item['value']):\n                # Broker is a URL\n                hostname, port, protocol = self.parse_url(item['value'])\n                if hostname and port:\n                    combination_key = f\"{hostname}:{port}\"\n                    if combination_key not in processed_combinations:\n                        connections_to_test.append((hostname, port))\n                        
test_values.append({item['name']: item['value']})\n                        processed_combinations.add(combination_key)\n                        is_port_in_config = True\n            else:\n                # Broker is hostname only (check if not already processed by broker host/port logic)\n                if not any('host' in broker['name'].lower() for broker in config_items['brokers']):\n                    hostname = item['value']\n                    port = None\n\n                    # Look for a corresponding port field\n                    for port_item in config_items['ports']:\n                        # Skip broker-specific ports as they're handled separately\n                        if 'broker' not in port_item['name'].lower():\n                            port = port_item['value']\n                            combination_key = f\"{hostname}:{port}\"\n                            if combination_key not in processed_combinations:\n                                # Combine broker and port into single object\n                                test_values.append({\n                                    item['name']: item['value'],\n                                    port_item['name']: str(port)\n                                })\n                                connections_to_test.append((hostname, port))\n                                processed_combinations.add(combination_key)\n                                is_port_in_config = True\n                            break\n\n\n        # Process URLs with ports\n        for item in config_items['urls']:\n            hostname, port, protocol = self.parse_url(item['value'])\n            if hostname and port:\n                combination_key = f\"{hostname}:{port}\"\n                is_port_in_config = True\n                if combination_key not in processed_combinations:\n                    connections_to_test.append((hostname, port))\n                    test_values.append({item['name']: item['value']})\n         
           processed_combinations.add(combination_key)\n\n        # Handle standard address + port combinations (skip if broker processing already handled them)\n        if config_items['ports'] and not broker_hosts:\n            port_item = config_items['ports'][0]  # Use first port found\n            port = port_item['value']\n            is_port_in_config = True\n\n            # Look for corresponding address\n            address = None\n            for item in config_items['addresses']:\n                address = item['value']\n                combination_key = f\"{address}:{port}\"\n                if combination_key not in processed_combinations:\n                    # Combine address and port into single object\n                    test_values.append({\n                        item['name']: item['value'],\n                        port_item['name']: str(port)\n                    })\n                    connections_to_test.append((address, port))\n                    processed_combinations.add(combination_key)\n                break\n        \n        # If no ports were explicitly provided, we cannot perform listening tests\n        if not connections_to_test:\n            return None  # No applicable tests\n        \n        # Test all connections\n        isPortConnectivity = False\n        failure_reason = None\n        \n        for hostname, port in connections_to_test:\n            success, reason = await self.check_port_listening(hostname, port)\n            if success:\n                isPortConnectivity = True\n            else:\n                failure_reason = reason\n        \n        result = {\n            \"description\": \"Listening\",\n            \"result\": \"pass\" if isPortConnectivity else \"fail\",\n            \"values\": test_values\n        }\n        \n        if not isPortConnectivity:\n            result[\"detail\"] = {\"reason\": failure_reason}\n            \n        return result\n    \n    async def 
validate_configuration(self, config_data):\n        \"\"\"\n        Validate plugin configuration by running all applicable tests.\n        \n        Args:\n            config_data (dict): Plugin configuration category contents\n            \n        Returns:\n            dict: Validation results\n        \"\"\"\n        self.results = {}\n        \n        # Extract configuration items\n        config_items = self.extract_configuration_items(config_data)\n        \n        # Run host reachability test\n        host_result = await self.test_host_reachable(config_items)\n        if host_result:\n            self.results[\"HostReachable\"] = host_result\n        \n        # Run listening test\n        listening_result = await self.test_listening(config_items)\n        if listening_result:\n            self.results[\"Listening\"] = listening_result\n        \n        return self.results\n\n\nasync def validate_configuration(request):\n    \"\"\"\n    API endpoint for plugin configuration validation.\n    \n    PUT /fledge/plugin/validate\n    \n    Validates connectivity aspects of plugin configurations.\n    \"\"\"\n    try:\n        data = await request.json()\n        \n        if not isinstance(data, dict):\n            raise ValueError(\"Configuration data must be a JSON object\")\n        \n        validator = ConfigurationValidator()\n        results = await validator.validate_configuration(data)\n        if not results:\n            # No validation could be performed\n            return web.Response(status=204)\n        \n        return web.json_response(results)\n    except json.JSONDecodeError:\n        raise web.HTTPBadRequest(reason=\"Invalid JSON payload\")\n    except ValueError as e:\n        # This catches both general validation errors and missing value/default errors\n        raise web.HTTPBadRequest(reason=str(e))\n    except Exception as e:\n        _logger.error(f\"Plugin validation error: {e}\")\n        raise 
web.HTTPInternalServerError(reason=\"Internal server error during validation\")\n\ndef setup(app):\n    \"\"\"Setup plugin validation routes\"\"\"\n    app.router.add_route('PUT', '/fledge/plugin/validate', validate_configuration)\n\n"
  },
  {
    "path": "python/fledge/services/core/api/plugins/data.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nimport urllib.parse\nfrom aiohttp import web\n\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.plugin_discovery import PluginDiscovery\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.plugins.common import utils as common_utils\nfrom fledge.services.core import connect, server\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\n\n\n__author__ = \"Mark Riddoch, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    ---------------------------------------------------------------------------------------\n    | GET                   | /fledge/service/{service_name}/persist                      |\n    | GET POST DELETE       | /fledge/service/{service_name}/plugin/{plugin_name}/data    |\n    ---------------------------------------------------------------------------------------\n\"\"\"\nFORBIDDEN_MSG = \"Resource you were trying to reach is absolutely forbidden!\"\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nasync def get_persist_plugins(request: web.Request) -> web.Response:\n    \"\"\"\n    Args:\n        request:\n    Returns:\n        list of plugins that have SP_PERSIST_DATA flag set in plugin info\n    :Example:\n        curl -sX GET \"http://localhost:8081/fledge/service/{service_name}/persist\"\n    \"\"\"\n    try:\n        service = request.match_info.get('service_name', None)\n        sch_list = await server.Server.scheduler.get_schedules()\n        dir_name = None\n        svc_info = (False, dir_name)\n        for sch in sch_list:\n            if service == sch.name and sch.process_name in (\"south_c\", \"north_C\", \"north_c\"):\n                dir_name = \"south\" if sch.process_name == \"south_c\" else \"north\"\n               
 svc_info = (True, dir_name)\n                break\n        if not svc_info[0]:\n            raise ValueError(\"{} service not found.\".format(service))\n        # Return all persistent plugins on the basis of directory + filters always\n        all_plugins = common_utils.get_persist_plugins(dir_name)\n        plugins = []\n        # Get key names from plugin_data table\n        payload = PayloadBuilder().SELECT(\"key\", \"data\").WHERE(['service_name', '=', service])\n        storage_client = connect.get_storage_async()\n        response = await _get_key(storage_client, payload)\n        # Get plugin name in plugin_data table and then find in persistent plugins lists\n        for r in response:\n            key_name = r['key']\n            # Check if the key starts with the service name\n            if key_name.startswith(service):\n                # Remove the service name (e.g., 'Sine') from the start\n                plugin_name = key_name[len(str(service)):]\n                # Check if the remaining part ends with a plugin name in the list\n                for persist_plugin in all_plugins:\n                    if plugin_name.endswith(persist_plugin):\n                        plugins.append(plugin_name)\n    except KeyError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get persist plugins.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'persistent': plugins})\n\n\nasync def get(request: web.Request) -> web.Response:\n    \"\"\"\n    Args:\n        request:\n    Returns:\n        plugin data\n    :Example:\n        curl -sX GET 
\"http://localhost:8081/fledge/service/{service_name}/plugin/{plugin_name}/data\"\n    \"\"\"\n    service = request.match_info.get('service_name', None)\n    service = urllib.parse.unquote(service) if service is not None else None\n    plugin = request.match_info.get('plugin_name', None)\n    key = \"{}{}\".format(service, plugin)\n    payload = PayloadBuilder().SELECT(\"key\", \"data\").WHERE(['key', '=', key])\n    storage_client = connect.get_storage_async()\n    try:\n        response = await _get_key(storage_client, payload)\n        if response:\n            data = response[0]['data']\n        else:\n            raise ValueError('No matching record found for {} key.'.format(key))\n    except KeyError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get {} plugin data for {} service.\".format(plugin, service))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'data': data})\n\n\nasync def add(request: web.Request) -> web.Response:\n    \"\"\"\n    Args:\n        request:\n    Returns:\n        plugin data\n    :Example:\n        curl -sX POST http://localhost:8081/fledge/service/{service_name}/plugin/{plugin_name}/data -d '{\"data\": {}}'\n    \"\"\"\n    try:\n        service = request.match_info.get('service_name', None)\n        plugin = request.match_info.get('plugin_name', None)\n        svc_records = ServiceRegistry.all()\n        for service_record in svc_records:\n            # 1 means - running\n            # Forbidden case - What if service is in Failed or Unresponsive state?\n            if service_record._name == service and int(service_record._status) == 1:\n               
 raise web.HTTPForbidden(reason=FORBIDDEN_MSG)\n        storage_client = connect.get_storage_async()\n        await _find_svc_and_plugin(storage_client, service, plugin)\n        key = \"{}{}\".format(service, plugin)\n        payload = PayloadBuilder().SELECT(\"key\", \"data\").WHERE(['key', '=', key])\n        response = await _get_key(storage_client, payload)\n        if response:\n            msg = \"{} key already exists.\".format(key)\n            return web.HTTPConflict(reason=msg, body=json.dumps({\"message\": msg}))\n        payload = await request.json()\n        data = payload.get(\"data\")\n        if data is not None:\n            payload = PayloadBuilder().INSERT(key=key, service_name=service, data=data)\n            await storage_client.insert_into_tbl('plugin_data', payload.payload())\n        else:\n            raise KeyError('Malformed data in payload!')\n    except KeyError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to create plugin data.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'result': \"{} key added successfully.\".format(key)})\n\n\nasync def delete(request: web.Request) -> web.Response:\n    \"\"\"\n    Args:\n        request:\n    Returns:\n        remove the entry from plugin_data\n    :Example:\n        curl -sX DELETE \"http://localhost:8081/fledge/service/{service_name}/plugin/{plugin_name}/data\"\n    \"\"\"\n    try:\n        service = request.match_info.get('service_name', None)\n        service = urllib.parse.unquote(service) if service is not None else None\n        plugin = request.match_info.get('plugin_name', None)\n        svc_records =
ServiceRegistry.all()\n        for service_record in svc_records:\n            # 1 means - running\n            # Forbidden case - What if service is in Failed or Unresponsive state?\n            if service_record._name == service and int(service_record._status) == 1:\n                raise web.HTTPForbidden(reason=FORBIDDEN_MSG)\n        storage_client = connect.get_storage_async()\n        key = \"{}{}\".format(service, plugin)\n        payload = PayloadBuilder().WHERE(['key', '=', key])\n        response = await _get_key(storage_client, payload)\n        if not response:\n            raise ValueError('No matching record found for {} key.'.format(key))\n        await storage_client.delete_from_tbl('plugin_data', payload.payload())\n    except KeyError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to delete {} plugin data for {} service.\".format(plugin, service))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'result': \"{} deleted successfully.\".format(key)})\n\n\nasync def _find_svc_and_plugin(storage, sname, pname):\n    payload = PayloadBuilder().SELECT([\"id\", \"enabled\"]).WHERE(['schedule_name', '=', sname]).payload()\n    result = await storage.query_tbl_with_payload('schedules', payload)\n    if result['count'] == 0:\n        raise ValueError('{} service does not exist.'.format(sname))\n    plugins = PluginDiscovery.get_plugins_installed()\n    plugin_names = [name['name'] for name in plugins]\n    if pname not in plugin_names:\n        raise ValueError('{} plugin does not exist.'.format(pname))\n\n\nasync def _get_key(storage, payload):\n    result = await 
storage.query_tbl_with_payload('plugin_data', payload.payload())\n    response = result['rows']\n    return response\n"
  },
  {
    "path": "python/fledge/services/core/api/plugins/discovery.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\n\nfrom aiohttp import web\n\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.plugin_discovery import PluginDiscovery\nfrom fledge.services.core.api.plugins import common\nfrom fledge.services.core.api.plugins.exceptions import *\n\n__author__ = \"Amarendra K Sinha, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n_help = \"\"\"\n    -------------------------------------------------------------------------------\n    | GET             | /fledge/plugins/installed                                |\n    | GET             | /fledge/plugins/available                                |\n    -------------------------------------------------------------------------------\n\"\"\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nasync def get_plugins_installed(request):\n    \"\"\" get list of installed plugins\n\n    :Example:\n        curl -X GET http://localhost:8081/fledge/plugins/installed\n        curl -X GET http://localhost:8081/fledge/plugins/installed?config=true\n        curl -X GET http://localhost:8081/fledge/plugins/installed?type=north|south|filter|notify|rule\n        curl -X 'GET http://localhost:8081/fledge/plugins/installed?type=north&config=true'\n    \"\"\"\n\n    plugin_type = None\n    is_config = False\n    if 'type' in request.query and request.query['type'] != '':\n        plugin_type = request.query['type'].lower()\n\n    if plugin_type is not None and plugin_type not in ['north', 'south', 'filter', 'notify', 'rule']:\n        raise web.HTTPBadRequest(reason=\"Invalid plugin type. 
Must be 'north' or 'south' or 'filter' or 'notify' \"\n                                        \"or 'rule'.\")\n\n    if 'config' in request.query:\n        config = request.query['config']\n        if config not in ['true', 'false', True, False]:\n            raise web.HTTPBadRequest(reason='Only \"true\", \"false\", true, false are allowed for value of config.')\n        is_config = True if ((type(config) is str and config.lower() in ['true']) or (\n            (type(config) is bool and config is True))) else False\n\n    plugins_list = PluginDiscovery.get_plugins_installed(plugin_type, is_config)\n\n    return web.json_response({\"plugins\": plugins_list})\n\n\nasync def get_plugins_available(request: web.Request) -> web.Response:\n    \"\"\" get list of a available plugins via package management i.e apt or yum\n\n        :Example:\n            curl -X GET http://localhost:8081/fledge/plugins/available\n            curl -X GET http://localhost:8081/fledge/plugins/available?type=north | south | filter | notify | rule\n    \"\"\"\n    try:\n        package_type = \"\"\n        if 'type' in request.query and request.query['type'] != '':\n            package_type = request.query['type'].lower()\n\n        if package_type and package_type not in ['north', 'south', 'filter', 'notify', 'rule']:\n            raise ValueError(\"Invalid package type. 
Must be 'north' or 'south' or 'filter' or 'notify' or 'rule'.\")\n        plugins, log_path = await common.fetch_available_packages(package_type)\n        if not package_type:\n            prefix_list = ['fledge-filter-', 'fledge-north-', 'fledge-notify-', 'fledge-rule-', 'fledge-south-']\n            plugins = [p for p in plugins if str(p).startswith(tuple(prefix_list))]\n    except ValueError as err:\n        raise web.HTTPBadRequest(reason=str(err))\n    except PackageError as e:\n        msg = \"Fetch available plugins package request failed.\"\n        raise web.HTTPBadRequest(body=json.dumps({\"message\": msg, \"link\": str(e)}), reason=msg)\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get plugins available list.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n    return web.json_response({\"plugins\": plugins, \"link\": log_path})\n"
  },
  {
    "path": "python/fledge/services/core/api/plugins/exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019, Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n# NOTE: When multiple exceptions then enable below and add new exceptions in same list\n# __all__ = ('PackageError')\n\n\nclass PackageError(RuntimeError):\n    pass\n"
  },
  {
    "path": "python/fledge/services/core/api/plugins/install.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport os\nimport subprocess\nimport asyncio\nimport tarfile\nimport hashlib\nimport json\nimport uuid\nimport multiprocessing\n\nfrom aiohttp import web\nimport aiohttp\nimport async_timeout\nfrom typing import Dict\nfrom datetime import datetime\n\nfrom fledge.common import utils\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.common import _FLEDGE_ROOT, _FLEDGE_DATA\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.plugin_discovery import PluginDiscovery\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.services.core import connect, server\nfrom fledge.services.core.api.plugins import common\nfrom fledge.services.core.api.plugins.exceptions import *\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    -------------------------------------------------------------------------------\n    | POST             | /fledge/plugins                                         |\n    -------------------------------------------------------------------------------\n\"\"\"\n_TIME_OUT = 120\n_CHUNK_SIZE = 1024\n_PATH = _FLEDGE_DATA + '/plugins/' if _FLEDGE_DATA else _FLEDGE_ROOT + '/data/plugins/'\n_LOGGER = FLCoreLogger().get_logger(__name__)\n\n\nasync def add_plugin(request: web.Request) -> web.Response:\n    \"\"\" add plugin\n\n    :Example:\n        curl -X POST http://localhost:8081/fledge/plugins\n        data:\n            format - the format of the file. 
One of tar or package (deb, rpm) or repository\n            name - the plugin package name to pull from repository\n            version - (optional) the plugin version to install from repository\n            url - The url to pull the plugin file from if format is not a repository\n            compressed - (optional) boolean this is used to indicate the package is a compressed gzip image\n            checksum - the checksum of the file, used to verify correct upload\n\n        curl -sX POST http://localhost:8081/fledge/plugins -d '{\"format\":\"repository\", \"name\": \"fledge-south-sinusoid\"}'\n        curl -sX POST http://localhost:8081/fledge/plugins -d '{\"format\":\"repository\", \"name\": \"fledge-notify-slack\", \"version\":\"1.6.0\"}'\n    \"\"\"\n    try:\n        data = await request.json()\n        url = data.get('url', None)\n        file_format = data.get('format', None)\n        compressed = data.get('compressed', None)\n        plugin_type = data.get('type', None)\n        checksum = data.get('checksum', None)\n        if not file_format:\n            raise TypeError('file format param is required')\n        if file_format not in [\"tar\", \"deb\", \"rpm\", \"repository\"]:\n            raise ValueError(\"Invalid format. Must be 'tar' or 'deb' or 'rpm' or 'repository'\")\n        if file_format == 'repository':\n            name = data.get('name', None)\n            if name is None:\n                raise ValueError('name param is required')\n            if not name.startswith(\"fledge-\"):\n                raise ValueError('name should start with \"fledge-\" prefix')\n            version = data.get('version', None)\n            if version:\n                if str(version).count('.') != 2:\n                    raise ValueError('Invalid version; it should be empty or a valid semantic version X.Y.Z '\n                                     'i.e. 
major.minor.patch to install as per the configured repository')\n\n            # Check Pre-conditions from Packages table\n            # if status is -1 (Already in progress) then return as rejected request\n            action = \"install\"\n            storage = connect.get_storage_async()\n            select_payload = PayloadBuilder().SELECT(\"status\").WHERE(['action', '=', action]).AND_WHERE(\n                ['name', '=', name]).payload()\n            result = await storage.query_tbl_with_payload('packages', select_payload)\n            response = result['rows']\n            if response:\n                exit_code = response[0]['status']\n                if exit_code == -1:\n                    msg = \"{} package installation already in progress\".format(name)\n                    return web.HTTPTooManyRequests(reason=msg, body=json.dumps({\"message\": msg}))\n                # Remove old entry from table for other cases\n                delete_payload = PayloadBuilder().WHERE(['action', '=', action]).AND_WHERE(\n                    ['name', '=', name]).payload()\n                await storage.delete_from_tbl(\"packages\", delete_payload)\n\n            # Check If requested plugin is already installed and then return immediately\n            plugin_type = name.split('fledge-')[1].split('-')[0]\n            plugins_list = PluginDiscovery.get_plugins_installed(plugin_type, False)\n            for p in plugins_list:\n                if p['packageName'] == name:\n                    msg = \"{} package is already installed\".format(name)\n                    return web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n            # Check If requested plugin is available for configured APT repository\n            plugins, log_path = await common.fetch_available_packages()\n            if name not in plugins:\n                raise KeyError('{} plugin is not available for the configured repository'.format(name))\n\n            pkg_mgt = 'yum' if 
utils.is_redhat_based() else 'apt'\n            # Insert record into Packages table\n            insert_payload = PayloadBuilder().INSERT(id=str(uuid.uuid4()), name=name, action=action, status=-1,\n                                                     log_file_uri=\"\").payload()\n            result = await storage.insert_into_tbl(\"packages\", insert_payload)\n            response = result['response']\n            if response:\n                # GET id from Packages table to track the installation response\n                select_payload = PayloadBuilder().SELECT(\"id\").WHERE(['action', '=', action]).AND_WHERE(\n                    ['name', '=', name]).payload()\n                result = await storage.query_tbl_with_payload('packages', select_payload)\n                response = result['rows']\n                if response:\n                    pn = \"{}-{}\".format(action, name)\n                    uid = response[0]['id']\n                    # process based parallelism\n                    p = multiprocessing.Process(name=pn, target=install_package_from_repo,\n                                                args=(name, pkg_mgt, version, uid, storage))\n                    p.daemon = True\n                    p.start()\n                    _LOGGER.info(\"{} plugin {} started...\".format(name, action))\n                    msg = \"Plugin installation started.\"\n                    status_link = \"fledge/package/{}/status?id={}\".format(action, uid)\n                    result_payload = {\"message\": msg, \"id\": uid, \"statusLink\": status_link}\n            else:\n                raise StorageServerError\n        else:\n            if not url or not checksum:\n                raise TypeError('URL, checksum params are required')\n            if file_format == \"tar\" and not plugin_type:\n                raise ValueError(\"Plugin type param is required\")\n            if file_format == \"tar\" and plugin_type not in ['south', 'north', 'filter', 'notify', 
'rule']:\n                raise ValueError(\"Invalid plugin type. Must be 'north' or 'south' or 'filter' or 'notify' or 'rule'\")\n            if compressed:\n                if compressed not in ['true', 'false', True, False]:\n                    raise ValueError('Only \"true\", \"false\", true, false are allowed for value of compressed.')\n            is_compressed = ((isinstance(compressed, str) and compressed.lower() in ['true']) or (\n                (isinstance(compressed, bool) and compressed is True)))\n\n            # All stuff goes into _PATH\n            if not os.path.exists(_PATH):\n                os.makedirs(_PATH)\n\n            result = await download([url])\n            file_name = result[0]\n\n            # validate checksum with MD5sum\n            if validate_checksum(checksum, file_name) is False:\n                raise ValueError(\"Checksum validation failed.\")\n\n            _LOGGER.debug(\"Found {} format with compressed {}\".format(file_format, is_compressed))\n            if file_format == 'tar':\n                files = extract_file(file_name, is_compressed)\n                _LOGGER.debug(\"Files {} {}\".format(files, type(files)))\n                code, msg = copy_file_install_requirement(files, plugin_type, file_name)\n                if code != 0:\n                    raise ValueError(msg)\n            else:\n                pkg_mgt = 'yum' if file_format == 'rpm' else 'apt'\n                code, msg = install_package(file_name, pkg_mgt)\n                if code != 0:\n                    raise ValueError(msg)\n\n            result_payload = {\"message\": \"{} is successfully downloaded and installed\".format(file_name)}\n    except StorageServerError as err:\n        msg = str(err)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": \"Storage error: {}\".format(msg)}))\n    except (FileNotFoundError, KeyError) as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n    except (TypeError, ValueError) as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except Exception as ex:\n        msg = str(ex)\n        _LOGGER.error(ex, \"Failed to install plugin.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({'message': msg}))\n    else:\n        return web.json_response(result_payload)\n\n\nasync def get_url(url: str, session: aiohttp.ClientSession) -> str:\n    file_name = str(url.split(\"/\")[-1])\n    async with async_timeout.timeout(_TIME_OUT):\n        async with session.get(url) as response:\n            with open(_PATH + file_name, 'wb') as fd:\n                async for data in response.content.iter_chunked(_CHUNK_SIZE):\n                    fd.write(data)\n    return file_name\n\n\nasync def download(urls: list) -> asyncio.gather:\n    async with aiohttp.ClientSession() as session:\n        tasks = [get_url(url, session) for url in urls]\n        return await asyncio.gather(*tasks)\n\n\ndef validate_checksum(checksum: str, file_name: str) -> bool:\n    # Read the downloaded file and compare its MD5 digest with the expected checksum\n    with open(_PATH + file_name, 'rb') as f:\n        original = hashlib.md5(f.read()).hexdigest()\n    return original == checksum\n\n\ndef extract_file(file_name: str, is_compressed: bool) -> list:\n    mode = \"r:gz\" if is_compressed else \"r\"\n    with tarfile.open(_PATH + file_name, mode) as tar:\n        tar.extractall(_PATH)\n        _LOGGER.debug(\"Extracted to {}\".format(_PATH))\n        return tar.getnames()\n\n\ndef install_package(file_name: str, pkg_mgt: str) -> tuple:\n    pkg_file_path = \"/data/plugins/{}\".format(file_name)\n    stdout_file_path = \"/data/plugins/output.txt\"\n    cmd = \"sudo {} -y install {} > {} 2>&1\".format(pkg_mgt, _FLEDGE_ROOT + pkg_file_path, _FLEDGE_ROOT + stdout_file_path)\n    _LOGGER.debug(\"Install Package with command: {}\".format(cmd))\n    ret_code = os.system(cmd)\n    _LOGGER.debug(\"Package install return code: {}\".format(ret_code))\n    msg = \"\"\n    with open(\"{}\".format(_FLEDGE_ROOT + stdout_file_path), 'r') as fh:\n        for line in fh:\n            line = line.rstrip(\"\\n\")\n            msg += line\n    _LOGGER.debug(\"Package install message: {}\".format(msg))\n    # Remove stdout file\n    cmd = \"{}/extras/C/cmdutil rm {}\".format(_FLEDGE_ROOT, stdout_file_path)\n    subprocess.run([cmd], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)\n\n    # Remove downloaded package file\n    cmd = \"{}/extras/C/cmdutil rm {}\".format(_FLEDGE_ROOT, pkg_file_path)\n    subprocess.run([cmd], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)\n    return ret_code, msg\n\n\ndef copy_file_install_requirement(dir_files: list, plugin_type: str, file_name: str) -> tuple:\n    py_file = any(f.endswith(\".py\") for f in dir_files)\n    so_1_file = any(f.endswith(\".so.1\") for f in dir_files)  # regular file\n    so_file = any(f.endswith(\".so\") for f in dir_files)  # symlink file\n\n    if not py_file and not so_file:\n        raise FileNotFoundError(\"Invalid plugin directory structure found, please check the contents of your tar file.\")\n\n    if so_1_file:\n        if not so_file:\n            err_msg = \"Symlink file is missing.\"\n            _LOGGER.debug(err_msg)\n            raise FileNotFoundError(err_msg)\n    _dir = []\n    for s in dir_files:\n        _dir.append(s.split(\"/\")[-1])\n\n    assert len(_dir), \"No data found\"\n    plugin_name = _dir[0]\n    _LOGGER.debug(\"Plugin name {} and Dir {} \".format(plugin_name, _dir))\n    plugin_path = \"python/fledge/plugins\" if py_file else \"plugins\"\n    full_path = \"{}/{}/{}/\".format(_FLEDGE_ROOT, plugin_path, plugin_type)\n    dest_path = \"{}/{}/\".format(plugin_path, plugin_type)\n\n    # Check if plugin dir exists then remove (for cleanup ONLY) otherwise create dir\n    if os.path.exists(full_path + plugin_name) and os.path.isdir(full_path + plugin_name):\n        cmd = \"{}/extras/C/cmdutil rm {}\".format(_FLEDGE_ROOT, dest_path + plugin_name)\n        subprocess.run([cmd], stdout=subprocess.PIPE, stderr=subprocess.PIPE, 
shell=True)\n    else:\n        cmd = \"{}/extras/C/cmdutil mkdir {}\".format(_FLEDGE_ROOT, dest_path + plugin_name)\n        subprocess.run([cmd], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)\n\n    # copy plugin files to the relative plugins directory.\n    cmd = \"{}/extras/C/cmdutil cp {} {}\".format(_FLEDGE_ROOT, _PATH + plugin_name, dest_path)\n    subprocess.run([cmd], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)\n    _LOGGER.debug(\"{} File copied to {}\".format(cmd, full_path))\n\n    # TODO: FOGL-2760 Handle external dependency for plugins which can be installed via tar file\n    # Use case: plugins like opcua, usb4704 (external dep)\n    # dht11- For pip packages we have requirements.txt file, as this plugin needs wiringpi apt package to install\n    py_req = filter(lambda x: x.startswith('requirement') and x.endswith('.txt'), _dir)\n    requirement = list(py_req)\n    code = 0\n    msg = \"\"\n    if requirement:\n        cmd = \"{}/extras/C/cmdutil pip3-req {}{}/{}\".format(_FLEDGE_ROOT, _PATH, plugin_name, requirement[0])\n        s = subprocess.run([cmd], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)\n        code = s.returncode\n        msg = s.stderr.decode(\"utf-8\") if code != 0 else s.stdout.decode(\"utf-8\")\n        msg = msg.replace(\"\\n\", \"\").strip()\n        _LOGGER.debug(\"Return code {} and msg {}\".format(code, msg))\n\n    # Also removed downloaded and extracted tar file\n    cmd = \"{}/extras/C/cmdutil rm /data/plugins/{}\".format(_FLEDGE_ROOT, file_name)\n    subprocess.run([cmd], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)\n    cmd = \"{}/extras/C/cmdutil rm /data/plugins/{}\".format(_FLEDGE_ROOT, plugin_name)\n    subprocess.run([cmd], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)\n\n    return code, msg\n\n\ndef install_package_from_repo(name: str, pkg_mgt: str, version: str, uid: uuid, storage: connect) -> None:\n    stdout_file_path = 
common.create_log_file(action=\"install\", plugin_name=name)\n    link = \"log/\" + stdout_file_path.split(\"/\")[-1]\n    msg = \"installed\"\n    loop = asyncio.new_event_loop()\n    cat = loop.run_until_complete(check_upgrade_on_install())\n    upgrade_install_cat_item = cat[\"upgradeOnInstall\"]\n    max_upgrade_cat_item = cat['maxUpdate']\n    if 'value' in upgrade_install_cat_item:\n        if upgrade_install_cat_item['value'] == \"true\":\n            pkg_cache_mgr = server.Server._package_cache_manager\n            last_accessed_time = pkg_cache_mgr['upgrade']['last_accessed_time']\n            now = datetime.now()\n            then = last_accessed_time if last_accessed_time else now\n            duration_in_sec = (now - then).total_seconds()\n            # If the max upgrade per day is set to 1, an upgrade cannot occur until 24 hours after the last upgrade.\n            # If set to 2 this drops to 12 hours between upgrades, 3 would result in 8 hours between calls, and so on.\n            if duration_in_sec > (24 / int(max_upgrade_cat_item['value'])) * 60 * 60 or not last_accessed_time:\n                _LOGGER.info(\"Attempting {} upgrade on {}...\".format(pkg_mgt, now))\n                cmd = \"sudo {} -y upgrade\".format(pkg_mgt) if pkg_mgt == 'apt' else \"sudo {} -y update\".format(pkg_mgt)\n                ret_code = os.system(cmd + \" > {} 2>&1\".format(stdout_file_path))\n                if ret_code != 0:\n                    # Update the record in the Packages table for the given uid only when the upgrade fails\n                    payload = PayloadBuilder().SET(status=ret_code, log_file_uri=link).WHERE(['id', '=', uid]).payload()\n                    loop.run_until_complete(storage.update_tbl(\"packages\", payload))\n                    return\n                else:\n                    pkg_cache_mgr['upgrade']['last_accessed_time'] = now\n            else:\n                _LOGGER.warning(\"Maximum {} upgrade limit exceeded for the 
day.\".format(pkg_mgt))\n            msg = \"updated\"\n    cmd = \"sudo {} -y install {}\".format(pkg_mgt, name)\n    if version:\n        cmd = \"sudo {} -y install {}={}\".format(pkg_mgt, name, version)\n\n    ret_code = os.system(cmd + \" >> {} 2>&1\".format(stdout_file_path))\n    # Update record in Packages table for given uid\n    payload = PayloadBuilder().SET(status=ret_code, log_file_uri=link).WHERE(['id', '=', uid]).payload()\n    loop.run_until_complete(storage.update_tbl(\"packages\", payload))\n    if ret_code == 0:\n        # Audit info\n        audit = AuditLogger(storage)\n        audit_detail = {'packageName': name}\n        log_code = 'PKGUP' if msg == 'updated' else 'PKGIN'\n        loop.run_until_complete(audit.information(log_code, audit_detail))\n        _LOGGER.info('{} plugin {} successfully.'.format(name, msg))\n\n\nasync def check_upgrade_on_install() -> Dict:\n    cf_mgr = ConfigurationManager(connect.get_storage_async())\n    category = await cf_mgr.get_category_all_items(\"Installation\")\n    return category\n"
  },
  {
    "path": "python/fledge/services/core/api/plugins/remove.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport aiohttp\nimport os\nimport json\nimport asyncio\nimport uuid\nimport multiprocessing\n\nfrom aiohttp import web\nfrom fledge.common import utils\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.common import _FLEDGE_ROOT\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.plugin_discovery import PluginDiscovery\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.services.core import connect\nfrom fledge.services.core.api.plugins import common\nfrom fledge.services.core.api.plugins.exceptions import *\n\n\n__author__ = \"Rajesh Kumar, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2020-2023, Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    --------------------------------------------------------------------\n    | DELETE             | /fledge/plugins/{package_name}              |\n    --------------------------------------------------------------------\n\"\"\"\n\n_logger = FLCoreLogger().get_logger(__name__)\nvalid_plugin_types = ['north', 'south', 'filter', 'notify', 'rule']\nPYTHON_PLUGIN_PATH = _FLEDGE_ROOT+'/python/fledge/plugins/'\nC_PLUGINS_PATH = _FLEDGE_ROOT+'/plugins/'\n\n\n# only work with core 2.1.0 onwards version\nasync def remove_package(request: web.Request) -> web.Response:\n    \"\"\"Remove installed Package\n\n    package_name: package name of plugin\n\n    Example:\n        curl -sX DELETE http://localhost:8081/fledge/plugins/fledge-south-modbus\n        curl -sX DELETE http://localhost:8081/fledge/plugins/fledge-north-http-north\n        curl -sX DELETE http://localhost:8081/fledge/plugins/fledge-filter-scale\n        curl -sX DELETE 
http://localhost:8081/fledge/plugins/fledge-notify-alexa\n        curl -sX DELETE http://localhost:8081/fledge/plugins/fledge-rule-watchdog\n    \"\"\"\n    try:\n        package_name = request.match_info.get('package_name', \"fledge-\")\n        package_name = package_name.replace(\" \", \"\")\n        final_response = {}\n        if not package_name.startswith(\"fledge-\"):\n            raise ValueError(\"Package name should start with 'fledge-' prefix.\")\n        plugin_type = package_name.split(\"-\", 2)[1]\n        if not plugin_type:\n            raise ValueError('Invalid package name. Verify the package name against the list of installed plugins.')\n        if plugin_type not in valid_plugin_types:\n            raise ValueError(\"Invalid plugin type. Please provide valid type: {}\".format(valid_plugin_types))\n        installed_plugins = PluginDiscovery.get_plugins_installed(plugin_type, False)\n        plugin_info = [(_plugin[\"name\"], _plugin[\"version\"]) for _plugin in installed_plugins\n                       if _plugin[\"packageName\"] == package_name]\n        if not plugin_info:\n            raise KeyError(\"{} package not found. Either the package is not installed or it is missing from the list\"\n                           \" of installed plugins.\".format(package_name))\n        plugin_name = plugin_info[0][0]\n        plugin_version = plugin_info[0][1]\n        if plugin_type in ['notify', 'rule']:\n            notification_instances_plugin_used_in = await _check_plugin_usage_in_notification_instances(plugin_name)\n            if notification_instances_plugin_used_in:\n                err_msg = \"{} cannot be removed. 
This is being used by {} instances.\".format(\n                    plugin_name, notification_instances_plugin_used_in)\n                _logger.warning(err_msg)\n                raise RuntimeError(err_msg)\n        else:\n            get_tracked_plugins = await _check_plugin_usage(plugin_type, plugin_name)\n            if get_tracked_plugins:\n                e = \"{} cannot be removed. This is being used by {} instances.\". \\\n                    format(plugin_name, get_tracked_plugins[0]['service_list'])\n                _logger.warning(e)\n                raise RuntimeError(e)\n            else:\n                _logger.info(\"No entry found for {name} plugin in asset tracker; \"\n                             \"or {name} plugin may have been added in disabled state & never used.\"\n                             \"\".format(name=plugin_name))\n        # Check Pre-conditions from Packages table\n        # if status is -1 (Already in progress) then return as rejected request\n        action = 'purge'\n        storage = connect.get_storage_async()\n        select_payload = PayloadBuilder().SELECT(\"status\").WHERE(['action', '=', action]).AND_WHERE(\n            ['name', '=', package_name]).payload()\n        result = await storage.query_tbl_with_payload('packages', select_payload)\n        response = result['rows']\n        if response:\n            exit_code = response[0]['status']\n            if exit_code == -1:\n                msg = \"{} package purge already in progress.\".format(package_name)\n                return web.HTTPTooManyRequests(reason=msg, body=json.dumps({\"message\": msg}))\n            # Remove old entry from table for other cases\n            delete_payload = PayloadBuilder().WHERE(['action', '=', action]).AND_WHERE(\n                ['name', '=', package_name]).payload()\n            await storage.delete_from_tbl(\"packages\", delete_payload)\n\n        # Insert record into Packages table\n        insert_payload = 
PayloadBuilder().INSERT(id=str(uuid.uuid4()), name=package_name, action=action,\n                                                 status=-1, log_file_uri=\"\").payload()\n        result = await storage.insert_into_tbl(\"packages\", insert_payload)\n        response = result['response']\n        if response:\n            select_payload = PayloadBuilder().SELECT(\"id\").WHERE(['action', '=', action]).AND_WHERE(\n                ['name', '=', package_name]).payload()\n            result = await storage.query_tbl_with_payload('packages', select_payload)\n            response = result['rows']\n            if response:\n                pn = \"{}-{}\".format(action, plugin_name)\n                uid = response[0]['id']\n                p = multiprocessing.Process(name=pn,\n                                            target=_uninstall,\n                                            args=(package_name, plugin_version, uid, storage)\n                                            )\n                p.daemon = True\n                p.start()\n                msg = \"{} plugin remove started.\".format(plugin_name)\n                status_link = \"fledge/package/{}/status?id={}\".format(action, uid)\n                final_response = {\"message\": msg, \"id\": uid, \"statusLink\": status_link}\n        else:\n            raise StorageServerError\n    except (ValueError, RuntimeError) as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({'message': msg}))\n    except KeyError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({'message': msg}))\n    except StorageServerError as e:\n        msg = e.error\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": \"Storage error: {}\".format(msg)}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to delete {} package.\".format(package_name))\n        raise 
web.HTTPInternalServerError(reason=msg, body=json.dumps({'message': msg}))\n    else:\n        return web.json_response(final_response)\n\n\ndef _uninstall(pkg_name: str, version: str, uid: uuid, storage: connect) -> tuple:\n    from fledge.services.core.server import Server\n    _logger.info(\"{} package removal started...\".format(pkg_name))\n    stdout_file_path = ''\n    try:\n        stdout_file_path = common.create_log_file(action='remove', plugin_name=pkg_name)\n        link = \"log/\" + stdout_file_path.split(\"/\")[-1]\n        if utils.is_redhat_based():\n            cmd = \"sudo yum -y remove {} > {} 2>&1\".format(pkg_name, stdout_file_path)\n        else:\n            cmd = \"sudo apt -y purge {} > {} 2>&1\".format(pkg_name, stdout_file_path)\n        code = os.system(cmd)\n        # Update record in Packages table\n        payload = PayloadBuilder().SET(status=code, log_file_uri=link).WHERE(['id', '=', uid]).payload()\n        loop = asyncio.new_event_loop()\n        loop.run_until_complete(storage.update_tbl(\"packages\", payload))\n        if code == 0:\n            # Clear internal cache\n            loop.run_until_complete(_put_refresh_cache(\"http\", Server._host, Server.core_management_port))\n            # Audit logger\n            audit = AuditLogger(storage)\n            audit_detail = {'package_name': pkg_name, 'version': version}\n            loop.run_until_complete(audit.information('PKGRM', audit_detail))\n            _logger.info('{} removed successfully.'.format(pkg_name))\n    except Exception:\n        # Non-Zero integer - Case of fail\n        code = 1\n    return code, stdout_file_path\n\n\n# only work with lesser or equal to version of core 2.1.0 version\nasync def remove_plugin(request: web.Request) -> web.Response:\n    \"\"\" Remove installed plugin from fledge\n\n    type: installed plugin type\n    name: installed plugin name\n\n    Example:\n        curl -X DELETE http://localhost:8081/fledge/plugins/south/sinusoid\n        
curl -X DELETE http://localhost:8081/fledge/plugins/north/http_north\n        curl -X DELETE http://localhost:8081/fledge/plugins/filter/expression\n        curl -X DELETE http://localhost:8081/fledge/plugins/notify/alexa\n        curl -X DELETE http://localhost:8081/fledge/plugins/rule/Average\n    \"\"\"\n    plugin_type = request.match_info.get('type', None)\n    name = request.match_info.get('name', None)\n    try:\n        plugin_type = str(plugin_type).lower()\n        if plugin_type not in valid_plugin_types:\n            raise ValueError(\"Invalid plugin type. Please provide valid type: {}\".format(valid_plugin_types))\n        # only OMF is an inbuilt plugin\n        if name.lower() == 'omf':\n            raise ValueError(\"Cannot delete an inbuilt {} plugin.\".format(name.upper()))\n        result_payload = {}\n        installed_plugins = PluginDiscovery.get_plugins_installed(plugin_type, False)\n        plugin_info = [(_plugin[\"name\"], _plugin[\"packageName\"], _plugin[\"version\"]) for _plugin in installed_plugins]\n        package_name = \"fledge-{}-{}\".format(plugin_type, name.lower().replace(\"_\", \"-\"))\n        plugin_found = False\n        plugin_version = None\n        for p in plugin_info:\n            if p[0] == name:\n                package_name = p[1]\n                plugin_version = p[2]\n                plugin_found = True\n                break\n        if not plugin_found:\n            raise KeyError(\"Invalid plugin name {} or plugin is not installed.\".format(name))\n        if plugin_type in ['notify', 'rule']:\n            notification_instances_plugin_used_in = await _check_plugin_usage_in_notification_instances(name)\n            if notification_instances_plugin_used_in:\n                err_msg = \"{} cannot be removed. 
This is being used by {} instances.\".format(\n                    name, notification_instances_plugin_used_in)\n                _logger.warning(err_msg)\n                raise RuntimeError(err_msg)\n        else:\n            get_tracked_plugins = await _check_plugin_usage(plugin_type, name)\n            if get_tracked_plugins:\n                e = \"{} cannot be removed. This is being used by {} instances.\".\\\n                    format(name, get_tracked_plugins[0]['service_list'])\n                _logger.warning(e)\n                raise RuntimeError(e)\n            else:\n                _logger.info(\"No entry found for {name} plugin in asset tracker; or \"\n                             \"{name} plugin may have been added in disabled state & never used.\".format(name=name))\n        # Check Pre-conditions from Packages table\n        # if status is -1 (Already in progress) then return as rejected request\n        action = 'purge'\n        storage = connect.get_storage_async()\n        select_payload = PayloadBuilder().SELECT(\"status\").WHERE(['action', '=', action]).AND_WHERE(\n            ['name', '=', package_name]).payload()\n        result = await storage.query_tbl_with_payload('packages', select_payload)\n        response = result['rows']\n        if response:\n            exit_code = response[0]['status']\n            if exit_code == -1:\n                msg = \"{} package purge already in progress.\".format(package_name)\n                return web.HTTPTooManyRequests(reason=msg, body=json.dumps({\"message\": msg}))\n            # Remove old entry from table for other cases\n            delete_payload = PayloadBuilder().WHERE(['action', '=', action]).AND_WHERE(\n                ['name', '=', package_name]).payload()\n            await storage.delete_from_tbl(\"packages\", delete_payload)\n\n        # Insert record into Packages table\n        insert_payload = PayloadBuilder().INSERT(id=str(uuid.uuid4()), name=package_name, action=action, 
status=-1,\n                                                 log_file_uri=\"\").payload()\n        result = await storage.insert_into_tbl(\"packages\", insert_payload)\n        response = result['response']\n        if response:\n            select_payload = PayloadBuilder().SELECT(\"id\").WHERE(['action', '=', action]).AND_WHERE(\n                ['name', '=', package_name]).payload()\n            result = await storage.query_tbl_with_payload('packages', select_payload)\n            response = result['rows']\n            if response:\n                pn = \"{}-{}\".format(action, name)\n                uid = response[0]['id']\n                p = multiprocessing.Process(name=pn,\n                                            target=purge_plugin,\n                                            args=(plugin_type, name, package_name, plugin_version, uid, storage)\n                                            )\n                p.daemon = True\n                p.start()\n                msg = \"{} plugin remove started.\".format(name)\n                status_link = \"fledge/package/{}/status?id={}\".format(action, uid)\n                result_payload = {\"message\": msg, \"id\": uid, \"statusLink\": status_link}\n        else:\n            raise StorageServerError\n    except (ValueError, RuntimeError) as err:\n        raise web.HTTPBadRequest(reason=str(err), body=json.dumps({'message': str(err)}))\n    except KeyError as err:\n        raise web.HTTPNotFound(reason=str(err), body=json.dumps({'message': str(err)}))\n    except StorageServerError as e:\n        msg = e.error\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": \"Storage error: {}\".format(msg)}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to remove {} plugin.\".format(name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({'message': msg}))\n    else:\n        return 
web.json_response(result_payload)\n\n\nasync def _check_plugin_usage(plugin_type: str, plugin_name: str) -> list:\n    \"\"\" Check usage of plugin and return a list of services / tasks or other instances with reference\n    \"\"\"\n    plugin_users = []\n    filter_used = []\n    service_list = []\n    storage_client = connect.get_storage_async()\n    if plugin_type == 'south':\n        event = 'Ingest'\n    elif plugin_type == 'filter':\n        event = 'Filter'\n    else:\n        event = 'Egress'\n    payload_data = PayloadBuilder().SELECT('plugin', 'service').WHERE(['event', '=', event]).payload()\n    list_of_tracked_plugin = await storage_client.query_tbl_with_payload('asset_tracker', payload_data)\n\n    if plugin_type == 'filter':\n        filter_payload = PayloadBuilder().SELECT('name').WHERE(['plugin', '=', plugin_name]).payload()\n        filter_res = await storage_client.query_tbl_with_payload(\"filters\", filter_payload)\n        filter_used = [f['name'] for f in filter_res['rows']]\n        for r in range(0, len(list_of_tracked_plugin['rows'])):\n            service_in_schedules_list = await _check_service_in_schedules(list_of_tracked_plugin['rows'][r]['service'])\n            for p in filter_used:\n                if p in list_of_tracked_plugin['rows'][r]['plugin'] and service_in_schedules_list:\n                    service_list.append(list_of_tracked_plugin['rows'][r]['service'])\n                    break\n    if list_of_tracked_plugin['rows']:\n        for e in list_of_tracked_plugin['rows']:\n            if (plugin_name == e['plugin'] and plugin_type != 'filter') or (e['plugin'] in filter_used and\n                                                                            plugin_type == 'filter'):\n                service_in_list = await _check_service_in_schedules(e['service'])\n                if (plugin_name in [x['plugin'] for x in list_of_tracked_plugin['rows']] and service_in_list) \\\n                        or (e['plugin'] in 
filter_used and service_in_list):\n                    if service_list:\n                        plugin_users.append({'e': e, 'service_list': service_list})\n                    else:\n                        service_list.append(e['service'])\n                        plugin_users.append({'e': e, 'service_list': service_list})\n    return plugin_users\n\n\nasync def _check_service_in_schedules(service_name: str) -> bool:\n    storage_client = connect.get_storage_async()\n    payload_data = PayloadBuilder().SELECT('id', 'enabled').WHERE(['schedule_name', '=', service_name]).payload()\n    enabled_service_list = await storage_client.query_tbl_with_payload('schedules', payload_data)\n    is_service_list = True if enabled_service_list['rows'] else False\n    return is_service_list\n\n\nasync def _check_plugin_usage_in_notification_instances(plugin_name: str) -> list:\n    \"\"\" Check notification instance state using the given rule or delivery plugin\n    \"\"\"\n    notification_instances = []\n    storage_client = connect.get_storage_async()\n    configuration_mgr = ConfigurationManager(storage_client)\n    notifications = await configuration_mgr.get_category_child(\"Notifications\")\n    if notifications:\n        for notification in notifications:\n            notification_config = await configuration_mgr._read_category_val(notification['key'])\n            name = notification_config['name']['value']\n            channel = notification_config['channel']['value']\n            rule = notification_config['rule']['value']\n            enabled = True if notification_config['enable']['value'] == 'true' else False\n            if (channel == plugin_name and enabled) or (rule == plugin_name and enabled):\n                notification_instances.append(name)\n    return notification_instances\n\n\nasync def _put_refresh_cache(protocol: str, host: str, port: int) -> None:\n    # Scheme is always http:// on core_management_port\n    management_api_url = 
'{}://{}:{}/fledge/cache'.format(protocol, host, port)\n    headers = {'content-type': 'application/json'}\n    verify_ssl = False\n    connector = aiohttp.TCPConnector(verify_ssl=verify_ssl)\n    async with aiohttp.ClientSession(connector=connector) as session:\n        async with session.put(management_api_url, data=json.dumps({}), headers=headers) as resp:\n            result = await resp.text()\n            status_code = resp.status\n            if status_code in range(400, 500):\n                _logger.error(\"Bad request error code: {}, reason: {} while refreshing cache\".format(status_code, resp.reason))\n            if status_code in range(500, 600):\n                _logger.error(\"Server error code: {}, reason: {} while refreshing cache\".format(status_code, resp.reason))\n            response = json.loads(result)\n            _logger.debug(\"PUT Refresh Cache response: {}\".format(response))\n\n\ndef purge_plugin(plugin_type: str, plugin_name: str, pkg_name: str, version: str, uid: uuid, storage: connect) -> tuple:\n    from fledge.services.core.server import Server\n    _logger.info(\"{} plugin remove started...\".format(pkg_name))\n    is_package = True\n    stdout_file_path = ''\n    try:\n        if utils.is_redhat_based():\n            rpm_list = os.popen('rpm -qa | grep fledge*').read()\n            _logger.debug(\"rpm list : {}\".format(rpm_list))\n            if len(rpm_list):\n                f = rpm_list.find(pkg_name)\n                if f == -1:\n                    raise KeyError\n            else:\n                raise KeyError\n            stdout_file_path = common.create_log_file(action='remove', plugin_name=pkg_name)\n            link = \"log/\" + stdout_file_path.split(\"/\")[-1]\n            cmd = \"sudo yum -y remove {} > {} 2>&1\".format(pkg_name, stdout_file_path)\n        else:\n            dpkg_list = os.popen('dpkg --list \"fledge*\" 2>/dev/null')\n            ls_output = dpkg_list.read()\n            _logger.debug(\"dpkg list output: 
{}\".format(ls_output))\n            if len(ls_output):\n                f = ls_output.find(pkg_name)\n                if f == -1:\n                    raise KeyError\n            else:\n                raise KeyError\n            stdout_file_path = common.create_log_file(action='remove', plugin_name=pkg_name)\n            link = \"log/\" + stdout_file_path.split(\"/\")[-1]\n            cmd = \"sudo apt -y purge {} > {} 2>&1\".format(pkg_name, stdout_file_path)\n\n        code = os.system(cmd)\n        # Update record in Packages table\n        payload = PayloadBuilder().SET(status=code, log_file_uri=link).WHERE(['id', '=', uid]).payload()\n        loop = asyncio.new_event_loop()\n        loop.run_until_complete(storage.update_tbl(\"packages\", payload))\n\n        if code == 0:\n            # Clear internal cache\n            loop.run_until_complete(_put_refresh_cache(\"http\", Server._host, Server.core_management_port))\n            # Audit info\n            audit = AuditLogger(storage)\n            audit_detail = {'package_name': pkg_name, 'version': version}\n            loop.run_until_complete(audit.information('PKGRM', audit_detail))\n            _logger.info('{} plugin removed successfully.'.format(pkg_name))\n    except KeyError:\n        # This case is for non-package installation - python plugin path will be tried first and then C\n        _logger.info(\"Trying removal of manually installed plugin...\")\n        is_package = False\n        if plugin_type in ['notify', 'rule']:\n            plugin_type = 'notificationDelivery' if plugin_type == 'notify' else 'notificationRule'\n        try:\n            path = PYTHON_PLUGIN_PATH+'{}/{}'.format(plugin_type, plugin_name)\n            if not os.path.isdir(path):\n                path = C_PLUGINS_PATH + '{}/{}'.format(plugin_type, plugin_name)\n            rm_cmd = 'rm -rv {}'.format(path)\n            if os.path.exists(\"{}/bin\".format(_FLEDGE_ROOT)) and 
os.path.exists(\"{}/bin/fledge\".format(_FLEDGE_ROOT)):\n                rm_cmd = 'sudo rm -rv {}'.format(path)\n            code = os.system(rm_cmd)\n            if code != 0:\n                raise OSError(\"While deleting, invalid plugin path found for {}\".format(plugin_name))\n        except Exception as ex:\n            code = 1\n            _logger.error(ex, \"Error in removing plugin.\")\n        if code == 0:\n            _logger.info('{} plugin removed successfully.'.format(plugin_name))\n    return code, stdout_file_path, is_package\n"
  },
  {
    "path": "python/fledge/services/core/api/plugins/update.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport aiohttp\nimport asyncio\nimport os\nimport uuid\nimport multiprocessing\nimport json\n\nfrom aiohttp import web\nfrom fledge.common import utils\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.plugin_discovery import PluginDiscovery\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.services.core import connect, server\nfrom fledge.services.core.api.plugins import common\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019-2023, Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    ------------------------------------------------------------------------\n    | PUT             | /fledge/plugins/{package_name}                     |\n    ------------------------------------------------------------------------\n\"\"\"\n_logger = FLCoreLogger().get_logger(__name__)\n\n\n# only work with core 2.1.0 onwards version\nasync def update_package(request: web.Request) -> web.Response:\n    \"\"\" Update Package\n\n    package_name: package name of plugin\n\n    Example:\n        curl -sX PUT http://localhost:8081/fledge/plugins/fledge-south-modbus\n        curl -sX PUT http://localhost:8081/fledge/plugins/fledge-north-http-north\n        curl -sX PUT http://localhost:8081/fledge/plugins/fledge-filter-scale\n        curl -sX PUT http://localhost:8081/fledge/plugins/fledge-notify-alexa\n        curl -sX PUT http://localhost:8081/fledge/plugins/fledge-rule-watchdog\n    \"\"\"\n\n    try:\n        valid_plugin_types = ['north', 'south', 'filter', 'notify', 'rule']\n        package_name = request.match_info.get('package_name', \"fledge-\")\n    
    package_name = package_name.replace(\" \", \"\")\n        final_response = {}\n        if not package_name.startswith(\"fledge-\"):\n            raise ValueError(\"Package name should start with 'fledge-' prefix.\")\n        plugin_type = package_name.split(\"-\", 2)[1]\n        if not plugin_type:\n            raise ValueError('Invalid Package name. Check and verify the package name in plugins installed.')\n        if plugin_type not in valid_plugin_types:\n            raise ValueError(\"Invalid plugin type. Please provide valid type: {}\".format(valid_plugin_types))\n        installed_plugins = PluginDiscovery.get_plugins_installed(plugin_type, False)\n        plugin_info = [_plugin[\"name\"] for _plugin in installed_plugins if _plugin[\"packageName\"] == package_name]\n        if not plugin_info:\n            raise KeyError(\"{} package not found. Either package is not installed or missing in plugins installed.\"\n                           \"\".format(package_name))\n        plugin_name = plugin_info[0]\n        # Check Pre-conditions from Packages table\n        # if status is -1 (Already in progress) then return as rejected request\n        action = 'update'\n        storage_client = connect.get_storage_async()\n        select_payload = PayloadBuilder().SELECT(\"status\").WHERE(['action', '=', action]).AND_WHERE(\n            ['name', '=', package_name]).payload()\n        result = await storage_client.query_tbl_with_payload('packages', select_payload)\n        response = result['rows']\n        if response:\n            exit_code = response[0]['status']\n            if exit_code == -1:\n                msg = \"{} package {} already in progress.\".format(package_name, action)\n                return web.HTTPTooManyRequests(reason=msg, body=json.dumps({\"message\": msg}))\n            # Remove old entry from table for other cases\n            delete_payload = PayloadBuilder().WHERE(['action', '=', action]).AND_WHERE(\n                ['name', '=', 
package_name]).payload()\n            await storage_client.delete_from_tbl(\"packages\", delete_payload)\n\n        schedules = []\n        notifications = []\n        if plugin_type in ['notify', 'rule']:\n            # Check Notification service is enabled or not\n            payload = PayloadBuilder().SELECT(\"id\", \"enabled\", \"schedule_name\").WHERE(['process_name', '=',\n                                                                                       'notification_c']).payload()\n            result = await storage_client.query_tbl_with_payload('schedules', payload)\n            sch_info = result['rows']\n            if sch_info and sch_info[0]['enabled'] == 't':\n                # Find notification instances which are used by requested plugin name\n                # If its config item 'enable' is true then update to false\n                config_mgr = ConfigurationManager(storage_client)\n                all_notifications = await config_mgr._read_all_child_category_names(\"Notifications\")\n                for notification in all_notifications:\n                    notification_config = await config_mgr._read_category_val(notification['child'])\n                    notification_name = notification_config['name']['value']\n                    channel = notification_config['channel']['value']\n                    rule = notification_config['rule']['value']\n                    is_enabled = True if notification_config['enable']['value'] == 'true' else False\n                    if (channel == plugin_name and is_enabled) or (rule == plugin_name and is_enabled):\n                        _logger.warning(\n                            \"Disabling {} notification instance, as {} {} plugin is being updated...\".format(\n                                notification_name, plugin_name, plugin_type))\n                        await config_mgr.set_category_item_value_entry(notification_name, \"enable\", \"false\")\n                        
notifications.append(notification_name)\n        else:\n            # FIXME: if any south/north service or task doesnot have tracked by Fledge;\n            #  then we need to handle the case to disable the service or task if enabled\n            # Tracked plugins from asset tracker\n            tracked_plugins = await _get_plugin_and_sch_name_from_asset_tracker(plugin_type)\n            filters_used_by = []\n            if plugin_type == 'filter':\n                # In case of filter, for asset_tracker table we are inserting filter category_name in plugin column\n                # instead of filter plugin name by Design\n                # Hence below query is required to get actual plugin name from filters table\n                storage_client = connect.get_storage_async()\n                payload = PayloadBuilder().SELECT(\"name\").WHERE(['plugin', '=', plugin_name]).payload()\n                result = await storage_client.query_tbl_with_payload('filters', payload)\n                filters_used_by = [r['name'] for r in result['rows']]\n            for p in tracked_plugins:\n                if (plugin_name == p['plugin'] and not plugin_type == 'filter') or (\n                        p['plugin'] in filters_used_by and plugin_type == 'filter'):\n                    sch_info = await _get_sch_id_and_enabled_by_name(p['service'])\n                    if sch_info and sch_info[0]['enabled'] == 't':\n                        status, reason = await server.Server.scheduler.disable_schedule(uuid.UUID(sch_info[0]['id']))\n                        if status:\n                            _logger.warning(\"Disabling {} {} instance, as {} plugin is being updated...\".format(\n                                p['service'], plugin_type, plugin_name))\n                            schedules.append(sch_info[0]['id'])\n        # Insert record into Packages table\n        insert_payload = PayloadBuilder().INSERT(id=str(uuid.uuid4()), name=package_name, action=action, status=-1,\n           
                                      log_file_uri=\"\").payload()\n        result = await storage_client.insert_into_tbl(\"packages\", insert_payload)\n        response = result['response']\n        if response:\n            select_payload = PayloadBuilder().SELECT(\"id\").WHERE(['action', '=', action]).AND_WHERE(\n                ['name', '=', package_name]).payload()\n            result = await storage_client.query_tbl_with_payload('packages', select_payload)\n            response = result['rows']\n            if response:\n                pn = \"{}-{}\".format(action, package_name)\n                uid = response[0]['id']\n                p = multiprocessing.Process(name=pn,\n                                            target=do_update,\n                                            args=(\"http\", server.Server._host,\n                                                  server.Server.core_management_port, storage_client, plugin_type,\n                                                  plugin_name, package_name, uid, schedules, notifications))\n                p.daemon = True\n                p.start()\n                msg = \"{} {} started.\".format(package_name, action)\n                status_link = \"fledge/package/{}/status?id={}\".format(action, uid)\n                final_response = {\"message\": msg, \"id\": uid, \"statusLink\": status_link}\n        else:\n            raise StorageServerError\n    except KeyError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({'message': msg}))\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({'message': msg}))\n    except StorageServerError as e:\n        msg = e.error\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": \"Storage error: {}\".format(msg)}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to update {} 
package.\".format(package_name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({'message': msg}))\n    else:\n        return web.json_response(final_response)\n\n\n# Only works with core versions 2.1.0 and earlier\nasync def update_plugin(request: web.Request) -> web.Response:\n    \"\"\" update plugin\n\n    :Example:\n        curl -sX PUT http://localhost:8081/fledge/plugins/south/sinusoid/update\n        curl -sX PUT http://localhost:8081/fledge/plugins/north/http_north/update\n        curl -sX PUT http://localhost:8081/fledge/plugins/filter/metadata/update\n        curl -sX PUT http://localhost:8081/fledge/plugins/notify/asset/update\n        curl -sX PUT http://localhost:8081/fledge/plugins/rule/OutOfBound/update\n    \"\"\"\n    _type = request.match_info.get('type', None)\n    name = request.match_info.get('name', None)\n    try:\n        _type = _type.lower()\n        if _type not in ['north', 'south', 'filter', 'notify', 'rule']:\n            raise ValueError(\"Invalid plugin type. Must be one of 'south', 'north', 'filter', 'notify' or 'rule'\")\n        # only OMF is an inbuilt plugin\n        if name.lower() == 'omf':\n            raise ValueError(\"Cannot update an inbuilt {} plugin.\".format(name.upper()))\n        # Check whether the requested plugin is installed or not\n        installed_plugins = PluginDiscovery.get_plugins_installed(_type, False)\n        plugin_info = [(_plugin[\"name\"], _plugin[\"packageName\"]) for _plugin in installed_plugins]\n        package_name = \"fledge-{}-{}\".format(_type, name.lower().replace('_', '-'))\n        plugin_found = False\n        for p in plugin_info:\n            if p[0] == name:\n                package_name = p[1]\n                plugin_found = True\n                break\n        if not plugin_found:\n            raise KeyError(\"{} plugin is not yet installed. 
So update is not possible.\".format(name))\n\n        # Check Pre-conditions from Packages table\n        # if status is -1 (Already in progress) then return as rejected request\n        result_payload = {}\n        action = 'update'\n        storage_client = connect.get_storage_async()\n        select_payload = PayloadBuilder().SELECT(\"status\").WHERE(['action', '=', action]).AND_WHERE(\n            ['name', '=', package_name]).payload()\n        result = await storage_client.query_tbl_with_payload('packages', select_payload)\n        response = result['rows']\n        if response:\n            exit_code = response[0]['status']\n            if exit_code == -1:\n                msg = \"{} package {} already in progress.\".format(package_name, action)\n                return web.HTTPTooManyRequests(reason=msg, body=json.dumps({\"message\": msg}))\n            # Remove old entry from table for other cases\n            delete_payload = PayloadBuilder().WHERE(['action', '=', action]).AND_WHERE(\n                ['name', '=', package_name]).payload()\n            await storage_client.delete_from_tbl(\"packages\", delete_payload)\n\n        schedules = []\n        notifications = []\n        if _type in ['notify', 'rule']:\n            # Check Notification service is enabled or not\n            payload = PayloadBuilder().SELECT(\"id\", \"enabled\", \"schedule_name\").WHERE(['process_name', '=',\n                                                                                       'notification_c']).payload()\n            result = await storage_client.query_tbl_with_payload('schedules', payload)\n            sch_info = result['rows']\n            if sch_info and sch_info[0]['enabled'] == 't':\n                # Find notification instances which are used by requested plugin name\n                # If its config item 'enable' is true then update to false\n                config_mgr = ConfigurationManager(storage_client)\n                all_notifications = await 
config_mgr._read_all_child_category_names(\"Notifications\")\n                for notification in all_notifications:\n                    notification_config = await config_mgr._read_category_val(notification['child'])\n                    notification_name = notification_config['name']['value']\n                    channel = notification_config['channel']['value']\n                    rule = notification_config['rule']['value']\n                    is_enabled = True if notification_config['enable']['value'] == 'true' else False\n                    if (channel == name and is_enabled) or (rule == name and is_enabled):\n                        _logger.warning(\"Disabling {} notification instance, as {} {} plugin is being updated...\".format(\n                            notification_name, name, _type))\n                        await config_mgr.set_category_item_value_entry(notification_name, \"enable\", \"false\")\n                        notifications.append(notification_name)\n        else:\n            # FIXME: if any south/north service or task doesnot have tracked by Fledge;\n            #  then we need to handle the case to disable the service or task if enabled\n            # Tracked plugins from asset tracker\n            tracked_plugins = await _get_plugin_and_sch_name_from_asset_tracker(_type)\n            filters_used_by = []\n            if _type == 'filter':\n                # In case of filter, for asset_tracker table we are inserting filter category_name in plugin column\n                # instead of filter plugin name by Design\n                # Hence below query is required to get actual plugin name from filters table\n                storage_client = connect.get_storage_async()\n                payload = PayloadBuilder().SELECT(\"name\").WHERE(['plugin', '=', name]).payload()\n                result = await storage_client.query_tbl_with_payload('filters', payload)\n                filters_used_by = [r['name'] for r in result['rows']]\n            
for p in tracked_plugins:\n                if (name == p['plugin'] and not _type == 'filter') or (\n                        p['plugin'] in filters_used_by and _type == 'filter'):\n                    sch_info = await _get_sch_id_and_enabled_by_name(p['service'])\n                    if sch_info and sch_info[0]['enabled'] == 't':\n                        status, reason = await server.Server.scheduler.disable_schedule(uuid.UUID(sch_info[0]['id']))\n                        if status:\n                            _logger.warning(\"Disabling {} {} instance, as {} plugin is being updated...\".format(\n                                p['service'], _type, name))\n                            schedules.append(sch_info[0]['id'])\n        # Insert record into Packages table\n        insert_payload = PayloadBuilder().INSERT(id=str(uuid.uuid4()), name=package_name, action=action, status=-1,\n                                                 log_file_uri=\"\").payload()\n        result = await storage_client.insert_into_tbl(\"packages\", insert_payload)\n        response = result['response']\n        if response:\n            select_payload = PayloadBuilder().SELECT(\"id\").WHERE(['action', '=', action]).AND_WHERE(\n                ['name', '=', package_name]).payload()\n            result = await storage_client.query_tbl_with_payload('packages', select_payload)\n            response = result['rows']\n            if response:\n                pn = \"{}-{}\".format(action, name)\n                uid = response[0]['id']\n                p = multiprocessing.Process(name=pn,\n                                            target=do_update,\n                                            args=(\"http\",\n                                                  server.Server._host, server.Server.core_management_port,\n                                                  storage_client, _type, name, package_name, uid,\n                                                  schedules, notifications))\n        
        p.daemon = True\n                p.start()\n                msg = \"{} {} started.\".format(package_name, action)\n                status_link = \"fledge/package/{}/status?id={}\".format(action, uid)\n                result_payload = {\"message\": msg, \"id\": uid, \"statusLink\": status_link}\n        else:\n            raise StorageServerError\n    except KeyError as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except StorageServerError as e:\n        msg = e.error\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": \"Storage error: {}\".format(msg)}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to update {} plugin.\".format(name))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({'message': msg}))\n    else:\n        return web.json_response(result_payload)\n\n\nasync def _get_plugin_and_sch_name_from_asset_tracker(_type: str) -> list:\n    if _type == \"south\":\n        event_name = \"Ingest\"\n    elif _type == \"filter\":\n        event_name = \"Filter\"\n    elif _type == \"north\":\n        event_name = \"Egress\"\n    else:\n        # Return empty if _type is different\n        return []\n    storage_client = connect.get_storage_async()\n    payload = PayloadBuilder().SELECT(\"plugin\", \"service\").WHERE(['event', '=', event_name]).payload()\n    result = await storage_client.query_tbl_with_payload('asset_tracker', payload)\n    return result['rows']\n\n\nasync def _get_sch_id_and_enabled_by_name(name: str) -> list:\n    storage_client = connect.get_storage_async()\n    payload = PayloadBuilder().SELECT(\"id\", \"enabled\").WHERE(['schedule_name', '=', name]).payload()\n    result = await storage_client.query_tbl_with_payload('schedules', payload)\n    return result['rows']\n\n\nasync def _put_schedule(protocol: str, host: str, port: int, sch_id: uuid, 
is_enabled: bool) -> None:\n    # Scheme is always http:// on core_management_port\n    management_api_url = '{}://{}:{}/fledge/schedule/{}/enable'.format(protocol, host, port, sch_id)\n    headers = {'content-type': 'application/json'}\n    verify_ssl = False\n    connector = aiohttp.TCPConnector(verify_ssl=verify_ssl)\n    async with aiohttp.ClientSession(connector=connector) as session:\n        async with session.put(management_api_url, data=json.dumps({\"value\": is_enabled}), headers=headers) as resp:\n            result = await resp.text()\n            status_code = resp.status\n            if status_code in range(400, 500):\n                _logger.error(\"Bad request error code: {}, reason: {} when PUT schedule\".format(status_code, resp.reason))\n            if status_code in range(500, 600):\n                _logger.error(\"Server error code: {}, reason: {} when PUT schedule\".format(status_code, resp.reason))\n            response = json.loads(result)\n            _logger.debug(\"PUT Schedule response: {}\".format(response))\n\n\ndef _update_repo_sources_and_plugin(pkg_name: str) -> tuple:\n    stdout_file_path = common.create_log_file(action=\"update\", plugin_name=pkg_name)\n    pkg_mgt = 'apt'\n    cmd = \"sudo {} -y update > {} 2>&1\".format(pkg_mgt, stdout_file_path)\n    if utils.is_redhat_based():\n        pkg_mgt = 'yum'\n        cmd = \"sudo {} check-update > {} 2>&1\".format(pkg_mgt, stdout_file_path)\n    ret_code = os.system(cmd)\n    # sudo apt/yum -y install only happens when update is without any error\n    if ret_code == 0:\n        cmd = \"sudo {} -y install {} >> {} 2>&1\".format(pkg_mgt, pkg_name, stdout_file_path)\n        ret_code = os.system(cmd)\n\n    # relative log file link\n    link = \"log/\" + stdout_file_path.split(\"/\")[-1]\n    return ret_code, link\n\n\ndef do_update(protocol: str, host: str, port: int, storage: connect, _type: str, plugin_name: str,\n              pkg_name: str, uid: str, schedules: list, 
notifications: list) -> None:\n    _logger.info(\"{} package update started...\".format(pkg_name))\n\n    code, link = _update_repo_sources_and_plugin(pkg_name)\n\n    # Update record in Packages table\n    payload = PayloadBuilder().SET(status=code, log_file_uri=link).WHERE(['id', '=', uid]).payload()\n    loop = asyncio.new_event_loop()\n    loop.run_until_complete(storage.update_tbl(\"packages\", payload))\n\n    if code == 0:\n        # Audit info\n        audit = AuditLogger(storage)\n        installed_plugins = PluginDiscovery.get_plugins_installed(_type, False)\n        version = [p[\"version\"] for p in installed_plugins if p['name'] == plugin_name]\n        audit_detail = {'packageName': pkg_name}\n        if version:\n            audit_detail['version'] = version[0]\n        loop.run_until_complete(audit.information('PKGUP', audit_detail))\n        _logger.info('{} package updated successfully.'.format(pkg_name))\n\n    # Restart the services which were disabled before the plugin update\n    for sch in schedules:\n        loop.run_until_complete(_put_schedule(protocol, host, port, uuid.UUID(sch), True))\n\n    # Below case is applicable for the notification plugins ONLY\n    # Re-enable configuration categories which were disabled during the update process\n    if _type in ['notify', 'rule']:\n        config_mgr = ConfigurationManager(storage)\n        for notify in notifications:\n            loop.run_until_complete(config_mgr.set_category_item_value_entry(notify, \"enable\", \"true\"))\n"
  },
  {
    "path": "python/fledge/services/core/api/python_packages.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport asyncio\nimport json\nfrom typing import List\nimport pkg_resources\nfrom aiohttp import web\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.services.core import connect\n\n__author__ = \"Himanshu Vimal\"\n__copyright__ = \"Copyright (c) 2022, Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    ----------------------------------------------------------\n    | GET            | /fledge/python/packages               |\n    | POST           | /fledge/python/package                |\n    ----------------------------------------------------------\n\"\"\"\n_LOGGER = FLCoreLogger().get_logger(__name__)\n\n\ndef get_packages_installed() -> List:\n    package_ws = pkg_resources.WorkingSet()\n    installed_pkgs = [{'package': dist.project_name, 'version': dist.version} for dist in package_ws]\n    return installed_pkgs\n\n\nasync def get_packages(request: web.Request) -> web.Response:\n    \"\"\"\n    Args:\n       request:\n\n    Returns:\n           List of python distributions installed.\n\n    :Example:\n           curl -X GET http://localhost:8081/fledge/python/packages\n    \"\"\"\n    return web.json_response({'packages': get_packages_installed()})\n\n\nasync def install_package(request: web.Request) -> web.Response:\n    \"\"\"\n    Args:\n        request: '{ \"package\"   :   \"numpy\",\n                    \"version\"   :   \"1.2\"   #optional\n                  }'\n\n    Returns:\n        Json response with message key  \n\n    :Example:\n           curl -X POST http://localhost:8081/fledge/python/package -d '{\"package\":\"numpy\", \"version\":\"1.23\"}'\n    \"\"\"\n    data = await request.json()\n    input_package_name = data.get('package', \"\").strip()\n    input_package_version = data.get('version', \"\").strip()\n    
\n    if len(input_package_name) == 0:\n        return web.HTTPBadRequest(reason=\"Package name empty.\")\n\n    def get_installed_package_info(input_package):\n        packages = pkg_resources.WorkingSet()\n        for package in packages:\n            if package.project_name.lower() == input_package.lower():\n                return package.project_name, package.version\n        return None, None\n\n    install_args = input_package_name\n    if input_package_version:\n        install_args = input_package_name + \"==\" + input_package_version\n\n    installed_package, installed_version = get_installed_package_info(input_package_name)\n\n    if installed_package:\n        # Package already exists\n        _LOGGER.warning(\"Package: {} Version: {} already installed.\".format(installed_package, installed_version))\n        return web.HTTPConflict(reason=\"Package already installed.\",\n                                body=json.dumps({\"message\": \"Package {} version {} already installed.\"\n                                                .format(installed_package, installed_version)}))\n\n    # Package not found, install package via pip\n    pip_process = await asyncio.create_subprocess_exec('python3', '-m', 'pip', 'install', install_args,\n                                                       stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)\n    stdout, stderr = await pip_process.communicate()\n    if pip_process.returncode == 0:\n        _LOGGER.info(\"Package: {} successfully installed.\".format(input_package_name))\n        try:\n            # Audit log entry: PIPIN\n            storage_client = connect.get_storage_async()\n            pip_audit_log = AuditLogger(storage_client)\n            audit_message = {\"package\": input_package_name, \"status\": \"Success\"}\n            if input_package_version:\n                audit_message[\"version\"] = input_package_version\n            await pip_audit_log.information('PIPIN', 
audit_message)\n        except Exception:\n            _LOGGER.error(\"Failed to log the audit entry for PIPIN for package {} install.\".format(input_package_name))\n\n        response = \"Package {} version {} installed successfully.\".format(input_package_name, input_package_version)\n        if not input_package_version:\n            response = \"Package {} installed successfully.\".format(input_package_name)\n        return web.json_response({\"message\": response})\n    else:\n        response = \"Error while installing package {} version {}.\".format(input_package_name, input_package_version)\n        if not input_package_version:\n            response = \"Error while installing package {}.\".format(input_package_name)\n        _LOGGER.error(response)\n        return web.HTTPNotFound(reason=response, body=json.dumps({\"message\": stderr.decode(encoding='utf-8')}))\n"
  },
  {
    "path": "python/fledge/services/core/api/repos/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/core/api/repos/configure.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport os\nimport platform\nimport json\n\nfrom aiohttp import web\n\nfrom fledge.common import utils\nfrom fledge.common.common import _FLEDGE_ROOT\nfrom fledge.common.logger import FLCoreLogger\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2020 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    -------------------------------------------------------------------------------\n    | POST             | /fledge/repository                                       |\n    -------------------------------------------------------------------------------\n\"\"\"\n_LOGGER = FLCoreLogger().get_logger(__name__)\n\n\nasync def add_package_repo(request: web.Request) -> web.Response:\n    \"\"\" configure package repo\n\n    :Example:\n        curl -X POST http://localhost:8081/fledge/repository\n        data:\n            url - Set a repository URL that used for installing packages via apt or yum\n            version - Points to the package release version or any custom branch fixes\n\n        curl -sX POST http://localhost:8081/fledge/repository -d '{\"url\":\"http://archives.fledge-iot.org\"}'\n        curl -sX POST http://localhost:8081/fledge/repository -d '{\"url\":\"http://archives.fledge-iot.org\", \"version\":\"latest\"}'\n    \"\"\"\n    try:\n        data = await request.json()\n        url = data.get('url', None)\n        # By default version is latest\n        version = data.get('version', 'latest')\n        if not url:\n            raise ValueError('url param is required')\n\n        _platform = platform.platform()\n        v_list = ['nightly', 'latest']\n        if not (version in v_list or version.startswith('fixes/')):\n            if str(version).count('.') != 2:\n                raise ValueError('Invalid version; it should be latest, '\n                                 
'nightly or a valid semantic version X.Y.Z i.e. major.minor.patch')\n\n        if utils.is_redhat_based():\n            pkg_mgt = 'yum'\n            if 'x86_64-with-redhat' in _platform:\n                os_name = \"rhel7\"\n                architecture = \"x86_64\"\n                extra_commands = \"sudo yum-config-manager --enable 'Red Hat Enterprise Linux Server 7 RHSCL (RPMs)'\"\n            elif 'x86_64-with-centos' in _platform:\n                os_name = \"centos7\"\n                architecture = \"x86_64\"\n                extra_commands = \"sudo yum install -y centos-release-scl-rh epel-release\"\n            elif 'x86_64-with-glibc' in _platform:\n                os_name = \"centos-stream-9\"\n                architecture = \"x86_64\"\n                extra_commands = \"\"\n            else:\n                raise ValueError(\"{} is not supported\".format(_platform))\n        else:\n            pkg_mgt = 'apt'\n            if 'x86_64-with-glib' in _platform:\n                os_name = \"ubuntu2004\"\n                architecture = \"x86_64\"\n                extra_commands = \"\"\n            elif 'armv7l-with-glibc' in _platform:\n                os_name = \"bullseye\"\n                architecture = \"armv7l\"\n                extra_commands = \"\"\n            else:\n                raise ValueError(\"{} is not supported\".format(_platform))\n\n        stdout_file_path = _FLEDGE_ROOT + \"/data/configure_repo_output.txt\"\n        if pkg_mgt == 'yum':\n            cmd = \"sudo rpm --import {}/RPM-GPG-KEY-fledge > {} 2>&1\".format(url, stdout_file_path)\n        else:\n            cmd = \"wget -q -O - {}/KEY.gpg | sudo apt-key add - > {} 2>&1\".format(url, stdout_file_path)\n        _LOGGER.debug(\"Add the key that is used to verify the package with command: {}\".format(cmd))\n        ret_code = os.system(cmd)\n        if ret_code != 0:\n            raise RuntimeError(\"See logs in {}\".format(stdout_file_path))\n        full_url = 
\"{}/{}/{}/{}\".format(url, version, os_name, architecture)\n        if pkg_mgt == 'yum':\n            cmd = \"echo -e \\\"[fledge]\\nname=fledge Repository\\nbaseurl={}\\nenabled=1\\ngpgkey={}/RPM-GPG-KEY-fledge\\ngpgcheck=1\\\" | sudo tee /etc/yum.repos.d/fledge.repo >> {} 2>&1\".format(full_url, url, stdout_file_path)\n        else:\n            cmd = \"echo \\\"deb {}/ /\\\" | sudo tee /etc/apt/sources.list.d/fledge.list >> {} 2>&1\".format(\n                full_url, stdout_file_path)\n        _LOGGER.debug(\"Edit the sources list with command: {}\".format(cmd))\n        ret_code = os.system(cmd)\n        if ret_code != 0:\n            raise RuntimeError(\"See logs in {}\".format(stdout_file_path))\n        if pkg_mgt == 'yum':\n            cmd = \"{} >> {} 2>&1\".format(extra_commands, stdout_file_path)\n        else:\n            cmd = \"sudo {} -y update >> {} 2>&1\".format(pkg_mgt, stdout_file_path)\n        _LOGGER.debug(\"Fetch the list of packages with command: {}\".format(cmd))\n        ret_code = os.system(cmd)\n        if ret_code != 0:\n            raise RuntimeError(\"See logs in {}\".format(stdout_file_path))\n        # TODO: audit log entry?\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(body=json.dumps({\"message\": msg}), reason=msg)\n    except RuntimeError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(body=json.dumps({\"message\": \"Failed to configure package repository\",\n                                                  \"output_log\": msg}), reason=msg)\n    except Exception as ex:\n        msg = \"Failed to configure archive package repository setup.\"\n        _LOGGER.error(ex, msg)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": \"{} {}\".format(msg, str(ex))}))\n    else:\n        return web.json_response({\"message\": \"Package repository configured successfully.\",\n                                  \"output_log\": 
stdout_file_path})\n"
  },
  {
    "path": "python/fledge/services/core/api/scheduler.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nimport datetime\nimport uuid\nfrom aiohttp import web\nfrom fledge.services.core import server\nfrom fledge.services.core.scheduler.entities import Schedule, StartUpSchedule, TimedSchedule, IntervalSchedule, \\\n    ManualSchedule, Task\nfrom fledge.services.core.scheduler.exceptions import *\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.services.core import connect\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\n\n__author__ = \"Amarendra K. Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n__DEFAULT_LIMIT = 20\n\n_help = \"\"\"\n    -------------------------------------------------------------------------------\n    | GET  POST       | /fledge/schedule/process                                 |\n    | GET             | /fledge/schedule/process/{scheduled_process_name}        |\n\n    | GET POST        | /fledge/schedule                                         |\n    | GET PUT DELETE  | /fledge/schedule/{schedule_id}                           |\n    | PUT             | /fledge/schedule/{schedule_id}/enable                    |\n    | PUT             | /fledge/schedule/{schedule_id}/disable                   |\n    | PUT             | /fledge/schedule/enable                                  |\n    | PUT             | /fledge/schedule/disable                                 |\n    | POST            | /fledge/schedule/start/{schedule_id}                     |\n    | GET             | /fledge/schedule/type                                    |\n\n    | GET             | /fledge/task                                             |\n    | GET             | /fledge/task/latest                                      |\n    | GET             | /fledge/task/{task_id}                                   |\n    | GET    
         | /fledge/task/state                                       |\n    | PUT             | /fledge/task/{task_id}/cancel                            |\n    -------------------------------------------------------------------------------\n\"\"\"\n\n\n#################################\n# Scheduled_processes\n#################################\n\n\nasync def get_scheduled_processes(request):\n    \"\"\"\n    Returns:\n            a list of all the defined scheduled_processes from scheduled_processes table\n\n    :Example:\n             curl -X GET http://localhost:8081/fledge/schedule/process\n    \"\"\"\n\n    processes_list = await server.Server.scheduler.get_scheduled_processes()\n\n    processes = []\n    for proc in processes_list:\n        processes.append(proc.name)\n\n    return web.json_response({'processes': processes})\n\n\nasync def get_scheduled_process(request):\n    \"\"\"\n    Returns:\n            the scheduled process name(s) from scheduled_processes table that match the given comma-separated names\n\n    :Example:\n        curl -X GET http://localhost:8081/fledge/schedule/process/purge\n        curl -X GET http://localhost:8081/fledge/schedule/process/purge%2Cbackup%2Crestore\n        curl -X GET http://localhost:8081/fledge/schedule/process/purge%2Cbackup%2Cstats%20collector\n    \"\"\"\n\n    scheduled_process_names = request.match_info.get('scheduled_process_name', None)\n    scheduled_process_name = scheduled_process_names.split(',')\n    payload = PayloadBuilder().SELECT(\"name\").WHERE([\"name\", \"in\", scheduled_process_name]).payload()\n    _storage = connect.get_storage_async()\n    scheduled_process = await _storage.query_tbl_with_payload('scheduled_processes', payload)\n\n    if len(scheduled_process['rows']) == 0:\n        raise web.HTTPNotFound(reason='No such Scheduled Process: {}.'.format(scheduled_process_name))\n\n    if len(scheduled_process['rows']) == 1:\n        retval = scheduled_process['rows'][0].get(\"name\")\n    else:\n        retval = 
scheduled_process['rows']\n    return web.json_response(retval)\n\n\nasync def post_scheduled_process(request: web.Request) -> web.Response:\n    \"\"\"\n    Create a new process name in scheduled_processes table\n\n    data:\n            process_name - Name of the scheduled process\n            script - path for the script\n\n    :Example:\n             curl -d '{\"process_name\": \"sleep30\", \"script\": \"[\\\"services/test\\\"]\"}' -sX POST  http://localhost:8081/fledge/schedule/process\n    \"\"\"\n    data = await request.json()\n    process_name = data.get('process_name', None)\n    script = data.get('script', None)\n    if process_name is None:\n        msg = \"Missing process_name property in payload.\"\n        raise web.HTTPBadRequest(body=json.dumps({\"message\": msg}), reason=msg)\n    if script is None:\n        msg = \"Missing script property in payload.\"\n        raise web.HTTPBadRequest(body=json.dumps({\"message\": msg}), reason=msg)\n    if len(process_name.strip()) == 0:\n        msg = \"Process name cannot be empty.\"\n        raise web.HTTPBadRequest(body=json.dumps({\"message\": msg}), reason=msg)\n    if len(script.strip()) == 0:\n        msg = \"Script cannot be empty.\"\n        raise web.HTTPBadRequest(body=json.dumps({\"message\": msg}), reason=msg)\n\n    # Check that the process name is not already registered\n    payload = PayloadBuilder().SELECT(\"name\").WHERE(['name', '=', process_name]).payload()\n    storage = connect.get_storage_async()\n    result = await storage.query_tbl_with_payload('scheduled_processes', payload)\n    if result['count'] == 0:\n        # Now first create the scheduled process entry for the new service\n        payload = PayloadBuilder().INSERT(name=process_name, script=script).payload()\n        try:\n            await storage.insert_into_tbl(\"scheduled_processes\", payload)\n            # Update _process_scripts dict of scheduler\n            await server.Server.scheduler._get_process_scripts()\n        
except StorageServerError as err:\n            msg = str(err)\n            raise web.HTTPInternalServerError(body=json.dumps(\n                {\"message\": \"Storage error: {}\".format(msg)}), reason=msg)\n        except Exception as ex:\n            msg = str(ex)\n            raise web.HTTPInternalServerError(body=json.dumps(\n                {\"message\": \"Failed to create scheduled process. {}\".format(msg)}), reason=msg)\n        else:\n            return web.json_response({\"message\": \"{} process name created successfully.\".format(process_name)})\n    else:\n        msg = '{} process name already exists.'.format(process_name)\n        raise web.HTTPBadRequest(body=json.dumps({\"message\": msg}), reason=msg)\n\n\n#################################\n# Schedules\n#################################\n\n\ndef _extract_args(data, curr_value):\n    if 'type' in data and (not isinstance(data['type'], int) and not data['type'].isdigit()):\n        raise ValueError('Error in type: {}'.format(data['type']))\n\n    if 'day' in data:\n        if isinstance(data['day'], float) or (isinstance(data['day'], str) and (data['day'].strip() != \"\" and not data['day'].isdigit())):\n            raise ValueError('Error in day: {}'.format(data['day']))\n\n    if 'time' in data and (not isinstance(data['time'], int) and not data['time'].isdigit()):\n        raise ValueError('Error in time: {}'.format(data['time']))\n\n    if 'repeat' in data and (not isinstance(data['repeat'], int) and not data['repeat'].isdigit()):\n        raise ValueError('Error in repeat: {}'.format(data['repeat']))\n\n    _schedule = dict()\n\n    _schedule['schedule_id'] = curr_value['schedule_id'] if curr_value else None\n\n    s_type = data.get('type') if 'type' in data else curr_value['schedule_type'] if curr_value else 0\n    _schedule['schedule_type'] = int(s_type)\n\n    s_day = data.get('day') if 'day' in data else curr_value['schedule_day'] if curr_value and curr_value[\n        'schedule_day'] else 
None\n    _schedule['schedule_day'] = int(s_day) if s_day is not None and (\n        isinstance(s_day, int) or (not isinstance(s_day, int) and s_day.isdigit())) else None\n\n    s_time = data.get('time') if 'time' in data else curr_value['schedule_time'] if curr_value and curr_value[\n        'schedule_time'] else 0\n    _schedule['schedule_time'] = int(s_time)\n\n    s_repeat = data.get('repeat') if 'repeat' in data else curr_value['schedule_repeat'] if curr_value and \\\n                                                                                            curr_value[\n                                                                                                'schedule_repeat'] else 0\n    _schedule['schedule_repeat'] = int(s_repeat)\n\n    _schedule['schedule_name'] = data.get('name') if 'name' in data else curr_value[\n        'schedule_name'] if curr_value else None\n\n    _schedule['schedule_process_name'] = data.get('process_name') if 'process_name' in data else curr_value[\n        'schedule_process_name'] if curr_value else None\n\n    _schedule['schedule_exclusive'] = data.get('exclusive') if 'exclusive' in data else curr_value[\n        'schedule_exclusive'] if curr_value else 'True'\n    _schedule['schedule_exclusive'] = 'True' if (\n        (type(_schedule['schedule_exclusive']) is str and _schedule['schedule_exclusive'].lower() in ['t', 'true']) or (\n        (type(_schedule['schedule_exclusive']) is bool and _schedule['schedule_exclusive'] is True))) else 'False'\n\n    _schedule['schedule_enabled'] = data.get('enabled') if 'enabled' in data else curr_value[\n        'schedule_enabled'] if curr_value else 'True'\n    _schedule['schedule_enabled'] = 'True' if (\n        (type(_schedule['schedule_enabled']) is str and _schedule['schedule_enabled'].lower() in ['t', 'true']) or (\n        (type(_schedule['schedule_enabled']) is bool and _schedule['schedule_enabled'] is True))) else 'False'\n\n    _schedule['is_enabled_modified'] = None\n    if 
'enabled' in data:\n        _schedule['is_enabled_modified'] = True if _schedule['schedule_enabled'] == 'True' else False\n\n    return _schedule\n\n\nasync def _check_schedule_post_parameters(data, curr_value=None):\n    \"\"\"\n    Private method to validate post data for creating a new schedule or updating an existing schedule\n\n    Args:\n         data:\n\n    Returns:\n            list errors\n    \"\"\"\n\n    _schedule = _extract_args(data, curr_value)\n\n    _errors = list()\n\n    # Raise error if schedule_type is missing for a new schedule\n    if 'schedule_id' not in _schedule and not _schedule.get('schedule_type'):\n        _errors.append('Schedule type cannot be empty.')\n\n    # Raise error if schedule_type is wrong\n    if _schedule.get('schedule_type') not in list(Schedule.Type):\n        _errors.append('Schedule type error: {}'.format(_schedule.get('schedule_type')))\n\n    # Raise error if day and time are missing for schedule_type = TIMED\n    if _schedule.get('schedule_type') == Schedule.Type.TIMED:\n        if not _schedule.get('schedule_time'):\n            _errors.append('Schedule time cannot be empty for TIMED schedule.')\n        if _schedule.get('schedule_day') is not None and (not isinstance(_schedule.get('schedule_day'), int) or (\n                _schedule.get('schedule_day') < 1 or _schedule.get('schedule_day') > 7)):\n            _errors.append('Day must either be None or must be an integer and in range 1-7.')\n        if not isinstance(_schedule.get('schedule_time'), int) or (\n                _schedule.get('schedule_time') < 0 or _schedule.get('schedule_time') > 86399):\n            _errors.append('Time must be an integer and in range 0-86399.')\n\n    # Raise error if repeat is missing or is non integers\n    if _schedule.get('schedule_type') == Schedule.Type.INTERVAL:\n        if 'schedule_repeat' not in _schedule:\n            _errors.append('Repeat is required for INTERVAL Schedule type.')\n        elif not 
isinstance(_schedule.get('schedule_repeat'), int):\n            _errors.append('Repeat must be an integer.')\n\n    # Raise error if day is non integer\n    if _schedule.get('schedule_day') is not None and not isinstance(_schedule.get('schedule_day'), int):\n        _errors.append('Day must either be None or must be an integer.')\n\n    # Raise error if time is non integer\n    if not isinstance(_schedule.get('schedule_time'), int):\n        _errors.append('Time must be an integer.')\n\n    # Raise error if repeat is non integer\n    if not isinstance(_schedule.get('schedule_repeat'), int):\n        _errors.append('Repeat must be an integer.')\n\n    # Raise error if name and process_name are missing for a new schedule\n    if not _schedule.get('schedule_name') or not _schedule.get('schedule_process_name'):\n        _errors.append('Schedule name and Process name cannot be empty.')\n\n    # Raise error if scheduled_process name is wrong\n    payload = PayloadBuilder().SELECT(\"name\").WHERE([\"name\", \"=\", _schedule.get('schedule_process_name')]).payload()\n    _storage = connect.get_storage_async()\n    scheduled_process = await _storage.query_tbl_with_payload('scheduled_processes', payload)\n\n    if len(scheduled_process['rows']) == 0:\n        raise ScheduleProcessNameNotFoundError('No such Scheduled Process name: {}'.format(_schedule.get('schedule_process_name')))\n\n    # Raise error if exclusive is wrong\n    if _schedule.get('schedule_exclusive') not in ['True', 'False']:\n        _errors.append('Schedule exclusive error: {}'.format(_schedule.get('schedule_exclusive')))\n\n    # Raise error if enabled is wrong\n    if _schedule.get('schedule_enabled') not in ['True', 'False']:\n        _errors.append('Schedule enabled error: {}'.format(_schedule.get('schedule_enabled')))\n\n    return _errors\n\n\nasync def _execute_add_update_schedule(data, curr_value=None):\n    \"\"\"\n    Private method common to create a new schedule and update an existing 
schedule\n\n    Args:\n         data:\n\n    Returns:\n            schedule_id (new for created, existing for updated)\n    \"\"\"\n\n    _schedule = _extract_args(data, curr_value)\n\n    # Create schedule object as Scheduler.save_schedule requires an object\n    if _schedule.get('schedule_type') == Schedule.Type.STARTUP:\n        schedule = StartUpSchedule()\n    elif _schedule.get('schedule_type') == Schedule.Type.TIMED:\n        schedule = TimedSchedule()\n        schedule.day = _schedule.get('schedule_day')\n        m, s = divmod(_schedule.get('schedule_time'), 60)\n        h, m = divmod(m, 60)\n        schedule.time = datetime.time().replace(hour=h, minute=m, second=s)\n    elif _schedule.get('schedule_type') == Schedule.Type.INTERVAL:\n        schedule = IntervalSchedule()\n    elif _schedule.get('schedule_type') == Schedule.Type.MANUAL:\n        schedule = ManualSchedule()\n\n    # Populate scheduler object\n    schedule.schedule_id = _schedule.get('schedule_id')\n    schedule.name = _schedule.get('schedule_name')\n    schedule.process_name = _schedule.get('schedule_process_name')\n    schedule.repeat = datetime.timedelta(seconds=_schedule['schedule_repeat'])\n\n    schedule.exclusive = True if _schedule.get('schedule_exclusive') == 'True' else False\n    schedule.enabled = True if _schedule.get('schedule_enabled') == 'True' else False\n\n    # Save schedule\n    await server.Server.scheduler.save_schedule(schedule, _schedule['is_enabled_modified'])\n\n    updated_schedule_id = schedule.schedule_id\n\n    return updated_schedule_id\n\n\nasync def get_schedules(request):\n    \"\"\"\n    Returns:\n            a list of all the defined schedules from schedules table\n\n    :Example:\n             curl -X GET http://localhost:8081/fledge/schedule\n    \"\"\"\n\n    schedule_list = await server.Server.scheduler.get_schedules()\n\n    schedules = []\n    for sch in schedule_list:\n        schedules.append({\n            'id': str(sch.schedule_id),\n            
'name': sch.name,\n            'processName': sch.process_name,\n            'type': Schedule.Type(int(sch.schedule_type)).name,\n            'repeat': sch.repeat.total_seconds() if sch.repeat else 0,\n            'time': (sch.time.hour * 60 * 60 + sch.time.minute * 60 + sch.time.second) if sch.time else 0,\n            'day': sch.day,\n            'exclusive': sch.exclusive,\n            'enabled': sch.enabled\n        })\n\n    return web.json_response({'schedules': schedules})\n\n\nasync def get_schedule(request):\n    \"\"\"\n    Returns:\n          the information for the given schedule from schedules table\n\n    :Example:\n            curl -X GET  http://localhost:8081/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0\n    \"\"\"\n\n    try:\n        schedule_id = request.match_info.get('schedule_id', None)\n\n        try:\n            assert uuid.UUID(schedule_id)\n        except ValueError as ex:\n            raise web.HTTPNotFound(reason=\"Invalid Schedule ID {}\".format(schedule_id))\n\n        sch = await server.Server.scheduler.get_schedule(uuid.UUID(schedule_id))\n\n        schedule = {\n            'id': str(sch.schedule_id),\n            'name': sch.name,\n            \"processName\": sch.process_name,\n            'type': Schedule.Type(int(sch.schedule_type)).name,\n            'repeat': sch.repeat.total_seconds() if sch.repeat else 0,\n            'time': (sch.time.hour * 60 * 60 + sch.time.minute * 60 + sch.time.second) if sch.time else 0,\n            'day': sch.day,\n            'exclusive': sch.exclusive,\n            'enabled': sch.enabled\n        }\n\n        return web.json_response(schedule)\n    except (ValueError, ScheduleNotFoundError) as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n\n\nasync def enable_schedule_with_name(request):\n    \"\"\" Enables the schedule for given schedule_name or schedule_id in request payload\n\n    curl -X PUT http://localhost:8081/fledge/schedule/enable  -d '{\"schedule_name\": \"a schedule 
name\"}'\n\n    :param request: {\"schedule_name\": \"sinusoid\"} or {\"schedule_id\": \"uuid of schedule\"}\n    :return:\n    \"\"\"\n    try:\n        data = await request.json()\n\n        sch_name = data.get('schedule_name', None)\n        sch_id = data.get('schedule_id', None)\n\n        if not sch_name and not sch_id:\n            raise web.HTTPBadRequest(reason='Schedule name or ID is required')\n\n        if sch_name and not sch_id:\n            storage_client = connect.get_storage_async()\n            payload = PayloadBuilder().SELECT(\"id\").WHERE(['schedule_name', '=', sch_name]).payload()\n            result = await storage_client.query_tbl_with_payload('schedules', payload)\n\n            if int(result['count']):\n                sch_id = result['rows'][0]['id']\n            else:\n                raise web.HTTPNotFound(reason=\"No Schedule with name {}\".format(sch_name))\n\n        if sch_id:\n            try:\n                assert uuid.UUID(sch_id)\n            except (TypeError, ValueError):\n                raise web.HTTPNotFound(reason=\"No Schedule with ID {}\".format(sch_id))\n\n        # Reset startup priority order\n        server.Server.scheduler.reset_process_script_priority()\n        status, reason = await server.Server.scheduler.enable_schedule(uuid.UUID(sch_id))\n\n        schedule = {\n            'scheduleId': sch_id,\n            'status': status,\n            'message': reason\n        }\n\n    except (KeyError, ValueError, ScheduleNotFoundError) as e:\n        raise web.HTTPNotFound(reason=str(e))\n    else:\n        return web.json_response(schedule)\n\n\nasync def disable_schedule_with_name(request):\n    \"\"\" Disable the schedule for given schedule_name or schedule_id in request payload\n\n    curl -X PUT http://localhost:8081/fledge/schedule/disable -d '{\"schedule_name\": \"a schedule name\"}'\n\n    :param request: {\"schedule_name\": \"sinusoid\"} or {\"schedule_id\": \"uuid of schedule\"}\n    :return:\n    \"\"\"\n    
try:\n        data = await request.json()\n\n        sch_name = data.get('schedule_name', None)\n        sch_id = data.get('schedule_id', None)\n\n        if not sch_name and not sch_id:\n            raise web.HTTPBadRequest(reason='Schedule name or ID is required')\n\n        if sch_name and not sch_id:\n            storage_client = connect.get_storage_async()\n            payload = PayloadBuilder().SELECT(\"id\").WHERE(['schedule_name', '=', sch_name]).payload()\n            result = await storage_client.query_tbl_with_payload('schedules', payload)\n\n            if int(result['count']):\n                sch_id = result['rows'][0]['id']\n            else:\n                raise web.HTTPNotFound(reason=\"No Schedule with name {}\".format(sch_name))\n\n        if sch_id:\n            try:\n                assert uuid.UUID(sch_id)\n            except (TypeError, ValueError):\n                raise web.HTTPNotFound(reason=\"No Schedule with ID {}\".format(sch_id))\n\n        status, reason = await server.Server.scheduler.disable_schedule(uuid.UUID(sch_id))\n\n        schedule = {\n            'scheduleId': sch_id,\n            'status': status,\n            'message': reason\n        }\n\n    except (KeyError, ValueError, ScheduleNotFoundError) as e:\n        raise web.HTTPNotFound(reason=str(e))\n    else:\n        return web.json_response(schedule)\n\n\nasync def enable_schedule(request):\n    \"\"\"\n    Enable the given schedule from schedules table\n\n    :Example:\n             curl -X PUT  http://localhost:8081/fledge/schedule/ac6dd55d-f55d-44f7-8741-984604bf2384/enable\n    \"\"\"\n\n    try:\n        schedule_id = request.match_info.get('schedule_id', None)\n\n        try:\n            assert uuid.UUID(schedule_id)\n        except ValueError as ex:\n            raise web.HTTPNotFound(reason=\"Invalid Schedule ID {}\".format(schedule_id))\n\n        # Reset startup priority order\n        server.Server.scheduler.reset_process_script_priority()\n        
status, reason = await server.Server.scheduler.enable_schedule(uuid.UUID(schedule_id))\n\n        schedule = {\n            'scheduleId': schedule_id,\n            'status': status,\n            'message': reason\n        }\n        return web.json_response(schedule)\n    except (ValueError, ScheduleNotFoundError) as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n\n\nasync def disable_schedule(request):\n    \"\"\"\n    Disable the given schedule from schedules table\n\n    :Example:\n             curl -X PUT  http://localhost:8081/fledge/schedule/ac6dd55d-f55d-44f7-8741-984604bf2384/disable\n    \"\"\"\n\n    try:\n        schedule_id = request.match_info.get('schedule_id', None)\n\n        try:\n            assert uuid.UUID(schedule_id)\n        except ValueError as ex:\n            raise web.HTTPNotFound(reason=\"Invalid Schedule ID {}\".format(schedule_id))\n\n        status, reason = await server.Server.scheduler.disable_schedule(uuid.UUID(schedule_id))\n\n        schedule = {\n            'scheduleId': schedule_id,\n            'status': status,\n            'message': reason\n        }\n\n        return web.json_response(schedule)\n    except (ValueError, ScheduleNotFoundError) as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n\n\nasync def start_schedule(request):\n    \"\"\"\n    Starts a given schedule\n\n    :Example:\n             curl -X POST  http://localhost:8081/fledge/schedule/start/fd439e5b-86ba-499a-86d3-34a6e5754b5a\n    \"\"\"\n\n    try:\n        schedule_id = request.match_info.get('schedule_id', None)\n\n        try:\n            assert uuid.UUID(schedule_id)\n        except ValueError as ex:\n            raise web.HTTPNotFound(reason=\"Invalid Schedule ID {}\".format(schedule_id))\n\n        await server.Server.scheduler.get_schedule(uuid.UUID(schedule_id))\n\n        # Start schedule\n        resp = await server.Server.scheduler.queue_task(uuid.UUID(schedule_id))\n\n        if resp is True:\n            return 
web.json_response({'id': schedule_id, 'message': 'Schedule started successfully'})\n        else:\n            return web.json_response({'id': schedule_id, 'message': 'Schedule could not be started'})\n\n    except (ValueError, ScheduleNotFoundError, NotReadyError) as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n\n\nasync def post_schedule(request):\n    \"\"\"\n    Create a new schedule in schedules table\n\n    :Example:\n             curl -d '{\"type\": 3, \"name\": \"sleep30test\", \"process_name\": \"sleep30\", \"repeat\": \"45\"}'  -X POST  http://localhost:8081/fledge/schedule\n    \"\"\"\n\n    try:\n        data = await request.json()\n        schedule_id = data.get('schedule_id', None)\n        if schedule_id:\n            raise ValueError(\"Schedule ID not needed for new Schedule.\")\n\n        go_no_go = await _check_schedule_post_parameters(data)\n        if len(go_no_go) != 0:\n            raise ValueError(\"Errors in request: {} {}\".format(','.join(go_no_go), len(go_no_go)))\n\n        schedule_name = data.get('name', None)\n        sch_list = await server.Server.scheduler.get_schedules()\n        if any(schedule_name == schedule.name for schedule in sch_list):\n            raise DuplicateRequestError(\"Duplicate schedule name entry found\")\n\n        updated_schedule_id = await _execute_add_update_schedule(data)\n        sch = await server.Server.scheduler.get_schedule(updated_schedule_id)\n        schedule = {\n            'id': str(sch.schedule_id),\n            'name': sch.name,\n            \"processName\": sch.process_name,\n            'type': Schedule.Type(int(sch.schedule_type)).name,\n            'repeat': sch.repeat.total_seconds() if sch.repeat else 0,\n            'time': (sch.time.hour * 60 * 60 + sch.time.minute * 60 + sch.time.second) if sch.time else 0,\n            'day': sch.day,\n            'exclusive': sch.exclusive,\n            'enabled': sch.enabled\n        }\n    except (ScheduleNotFoundError, 
ScheduleProcessNameNotFoundError) as ex:\n        raise web.HTTPNotFound(reason=str(ex), body=json.dumps({\"message\": str(ex)}))\n    except DuplicateRequestError as err_msg:\n        raise web.HTTPConflict(reason=str(err_msg), body=json.dumps({\"message\": str(err_msg)}))\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex), body=json.dumps({\"message\": str(ex)}))\n    except Exception as ex:\n        raise web.HTTPInternalServerError(reason=str(ex), body=json.dumps({\"message\": str(ex)}))\n    else:\n        return web.json_response({'schedule': schedule})\n\n\nasync def update_schedule(request):\n    \"\"\"\n    Update a schedule in schedules table\n\n    :Example:\n             curl -d '{\"type\": 4, \"name\": \"sleep30 updated\", \"process_name\": \"sleep30\", \"repeat\": \"15\"}'  -X PUT  http://localhost:8081/fledge/schedule/84fe4ea1-df9c-4c87-bb78-cab2e7d5d2cc\n    \"\"\"\n\n    try:\n        data = await request.json()\n        schedule_id = request.match_info.get('schedule_id', None)\n        try:\n            assert uuid.UUID(schedule_id)\n        except Exception:\n            raise ValueError(\"Invalid Schedule ID {}\".format(schedule_id))\n\n        sch = await server.Server.scheduler.get_schedule(uuid.UUID(schedule_id))\n        if not sch:\n            raise ScheduleNotFoundError(schedule_id)\n\n        # Restrict name and type properties for STARTUP type schedules\n        if 'name' in data and Schedule.Type(int(sch.schedule_type)).name == \"STARTUP\":\n            raise ValueError(\"{} is a STARTUP schedule type and cannot be renamed.\".format(sch.name))\n        if 'type' in data and Schedule.Type(int(sch.schedule_type)).name == \"STARTUP\":\n            raise ValueError(\"{} is a STARTUP schedule type and its type cannot be changed.\".format(sch.name))\n\n        curr_value = dict()\n        curr_value['schedule_id'] = sch.schedule_id\n        curr_value['schedule_process_name'] = sch.process_name\n        
curr_value['schedule_name'] = sch.name\n        curr_value['schedule_type'] = sch.schedule_type\n        curr_value['schedule_repeat'] = sch.repeat.total_seconds() if sch.repeat else 0\n        curr_value['schedule_time'] = (sch.time.hour * 60 * 60 + sch.time.minute * 60 + sch.time.second) if sch.time else 0\n        curr_value['schedule_day'] = sch.day\n        curr_value['schedule_exclusive'] = sch.exclusive\n        curr_value['schedule_enabled'] = sch.enabled\n        schedule_name = data.get('name', None)\n        go_no_go = await _check_schedule_post_parameters(data, curr_value)\n        if len(go_no_go) != 0:\n            raise ValueError(\"Errors in request: {}\".format(','.join(go_no_go)))\n\n        sch_list = await server.Server.scheduler.get_schedules()\n        for s in sch_list:\n            if schedule_id != str(s.schedule_id) and schedule_name == s.name:\n                raise DuplicateRequestError(\"Duplicate schedule name entry found\")\n\n        updated_schedule_id = await _execute_add_update_schedule(data, curr_value)\n\n        sch = await server.Server.scheduler.get_schedule(updated_schedule_id)\n\n        schedule = {\n            'id': str(sch.schedule_id),\n            'name': sch.name,\n            \"processName\": sch.process_name,\n            'type': Schedule.Type(int(sch.schedule_type)).name,\n            'repeat': sch.repeat.total_seconds() if sch.repeat else 0,\n            'time': (sch.time.hour * 60 * 60 + sch.time.minute * 60 + sch.time.second) if sch.time else 0,\n            'day': sch.day,\n            'exclusive': sch.exclusive,\n            'enabled': sch.enabled\n        }\n    except (ScheduleNotFoundError, ScheduleProcessNameNotFoundError) as ex:\n        raise web.HTTPNotFound(reason=str(ex), body=json.dumps({\"message\": str(ex)}))\n    except DuplicateRequestError as err_msg:\n        raise web.HTTPConflict(reason=str(err_msg), body=json.dumps({\"message\": str(err_msg)}))\n    except ValueError as ex:\n        raise 
web.HTTPBadRequest(reason=str(ex), body=json.dumps({\"message\": str(ex)}))\n    except Exception as ex:\n        raise web.HTTPInternalServerError(reason=str(ex), body=json.dumps({\"message\": str(ex)}))\n    else:\n        return web.json_response({'schedule': schedule})\n\n\nasync def delete_schedule(request):\n    \"\"\"\n    Delete a schedule from schedules table\n\n    :Example:\n             curl -X DELETE  http://localhost:8081/fledge/schedule/dc9bfc01-066a-4cc0-b068-9c35486db87f\n    \"\"\"\n\n    try:\n        schedule_id = request.match_info.get('schedule_id', None)\n\n        try:\n            assert uuid.UUID(schedule_id)\n        except ValueError as ex:\n            raise web.HTTPNotFound(reason=\"Invalid Schedule ID {}\".format(schedule_id))\n\n        retval, message = await server.Server.scheduler.delete_schedule(uuid.UUID(schedule_id))\n\n        return web.json_response({'message': message, 'id': schedule_id})\n    except RuntimeWarning:\n        raise web.HTTPConflict(reason=\"Enabled Schedule {} cannot be deleted.\".format(schedule_id))\n    except (ValueError, ScheduleNotFoundError, NotReadyError) as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n\n\nasync def get_schedule_type(request):\n    \"\"\"\n    Args:\n        request:\n\n    Returns:\n         an array of Schedule type enumeration key index values\n\n    :Example:\n             curl -X GET  http://localhost:8081/fledge/schedule/type\n    \"\"\"\n\n    results = []\n    for _type in Schedule.Type:\n        data = {'index': _type.value, 'name': _type.name}\n        results.append(data)\n\n    return web.json_response({'scheduleType': results})\n\n\n#################################\n# Tasks\n#################################\n\n\nasync def get_task(request):\n    \"\"\"\n    Returns:\n            a task list\n\n    :Example:\n             curl -X GET  http://localhost:8081/fledge/task/{task_id}\n    \"\"\"\n\n    try:\n        task_id = request.match_info.get('task_id', 
None)\n\n        try:\n            assert uuid.UUID(task_id)\n        except ValueError as ex:\n            raise web.HTTPNotFound(reason=\"Invalid Task ID {}\".format(task_id))\n\n        tsk = await server.Server.scheduler.get_task(task_id)\n\n        task = {\n            'id': str(tsk.task_id),\n            'name': tsk.schedule_name,\n            'processName': tsk.process_name,\n            'state': Task.State(int(tsk.state)).name.capitalize(),\n            'startTime': str(tsk.start_time),\n            'endTime': str(tsk.end_time),\n            'exitCode': tsk.exit_code,\n            'reason': tsk.reason\n        }\n\n        return web.json_response(task)\n    except (ValueError, TaskNotFoundError) as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n\n\nasync def get_tasks(request):\n    \"\"\"\n    Returns:\n            the list of tasks\n\n    :Example:\n             curl -X GET  http://localhost:8081/fledge/task\n\n             curl -X GET  http://localhost:8081/fledge/task?limit=2\n\n             curl -X GET  http://localhost:8081/fledge/task?name=xxx\n\n             curl -X GET  http://localhost:8081/fledge/task?state=xxx\n\n             curl -X GET  http://localhost:8081/fledge/task?name=xxx&state=xxx\n    \"\"\"\n    try:\n        limit = __DEFAULT_LIMIT\n        if 'limit' in request.query and request.query['limit'] != '':\n            try:\n                limit = int(request.query['limit'])\n                if limit < 0:\n                    raise ValueError\n            except ValueError:\n                raise web.HTTPBadRequest(reason=\"Limit must be a positive integer\")\n\n        name = None\n        if 'name' in request.query and request.query['name'] != '':\n            name = request.query['name']\n\n        state = None\n        if 'state' in request.query and request.query['state'] != '':\n            try:\n                state = Task.State[request.query['state'].upper()].value\n            except KeyError as ex:\n                
raise web.HTTPBadRequest(reason=\"State value {} is not permitted.\".format(ex))\n\n        where_clause = None\n        if name and state:\n            where_clause = ([\"schedule_name\", \"=\", name], [\"state\", \"=\", state])\n        elif name:\n            where_clause = [\"schedule_name\", \"=\", name]\n        elif state:\n            where_clause = [\"state\", \"=\", state]\n\n        tasks = await server.Server.scheduler.get_tasks(where=where_clause, limit=limit)\n\n        if len(tasks) == 0:\n            raise web.HTTPNotFound(reason=\"No Tasks found\")\n\n        new_tasks = []\n        for task in tasks:\n            new_tasks.append(\n                {'id': str(task.task_id),\n                 'name': task.schedule_name,\n                 'processName': task.process_name,\n                 'state': Task.State(int(task.state)).name.capitalize(),\n                 'startTime': str(task.start_time),\n                 'endTime': str(task.end_time),\n                 'exitCode': task.exit_code,\n                 'reason': task.reason\n                 }\n            )\n\n        return web.json_response({'tasks': new_tasks})\n    except (ValueError, TaskNotFoundError) as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n\n\nasync def get_tasks_latest(request):\n    \"\"\"\n    Returns:\n            the list of the most recent task execution for each name from tasks table\n\n    :Example:\n              curl -X GET  http://localhost:8081/fledge/task/latest\n\n              curl -X GET  http://localhost:8081/fledge/task/latest?name=xxx\n    \"\"\"\n    payload = PayloadBuilder().SELECT(\"id\", \"schedule_name\", \"process_name\", \"state\", \"start_time\", \"end_time\", \"reason\", \"pid\", \"exit_code\")\\\n        .ALIAS(\"return\", (\"start_time\", 'start_time'), (\"end_time\", 'end_time'))\\\n        .FORMAT(\"return\", (\"start_time\", \"YYYY-MM-DD HH24:MI:SS.MS\"), (\"end_time\", \"YYYY-MM-DD HH24:MI:SS.MS\"))\\\n        
.ORDER_BY([\"schedule_name\", \"asc\"], [\"start_time\", \"desc\"])\n\n    if 'name' in request.query and request.query['name'] != '':\n        name = request.query['name']\n        payload.WHERE([\"schedule_name\", \"=\", name])\n\n    try:\n        _storage = connect.get_storage_async()\n        results = await _storage.query_tbl_with_payload('tasks', payload.payload())\n\n        if len(results['rows']) == 0:\n            raise web.HTTPNotFound(reason=\"No Tasks found\")\n\n        tasks = []\n        previous_schedule = None\n        for row in results['rows']:\n            if not row['schedule_name'].strip():\n                continue\n            if previous_schedule != row['schedule_name']:\n                tasks.append(row)\n                previous_schedule = row['schedule_name']\n\n        new_tasks = []\n        for task in tasks:\n            new_tasks.append(\n                {'id': str(task['id']),\n                 'name': task['schedule_name'],\n                 'processName': task['process_name'],\n                 'state': [t.name.capitalize() for t in list(Task.State)][int(task['state']) - 1],\n                 'startTime': str(task['start_time']),\n                 'endTime': str(task['end_time']),\n                 'exitCode': task['exit_code'],\n                 'reason': task['reason'],\n                 'pid': task['pid']\n                 }\n            )\n        return web.json_response({'tasks': new_tasks})\n    except (ValueError, TaskNotFoundError) as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n\n\nasync def cancel_task(request):\n    \"\"\"Cancel a running task from tasks table\n\n    :Example:\n             curl -X PUT  http://localhost:8081/fledge/task/{task_id}/cancel\n    \"\"\"\n    try:\n        task_id = request.match_info.get('task_id', None)\n\n        try:\n            assert uuid.UUID(task_id)\n        except ValueError as ex:\n            raise web.HTTPNotFound(reason=\"Invalid Task ID {}\".format(task_id))\n\n    
    await server.Server.scheduler.get_task(task_id)\n\n        # Cancel Task\n        await server.Server.scheduler.cancel_task(uuid.UUID(task_id))\n\n        return web.json_response({'id': task_id, 'message': 'Task cancelled successfully'})\n    except (ValueError, TaskNotFoundError, TaskNotRunningError) as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n\n\nasync def get_task_state(request):\n    \"\"\"\n    Returns:\n            an array of Task State enumeration key index values\n\n    :Example:\n             curl -X GET  http://localhost:8081/fledge/task/state\n    \"\"\"\n\n    results = []\n    for _state in Task.State:\n        data = {'index': _state.value, 'name': _state.name.capitalize()}\n        results.append(data)\n\n    return web.json_response({'taskState': results})\n"
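The `get_tasks_latest` handler above orders rows by `schedule_name` ascending and `start_time` descending, then keeps only the first row seen for each schedule name. A minimal standalone sketch of that de-duplication step (the plain-dict rows and the function name are illustrative, not part of the Fledge API):

```python
def latest_per_schedule(rows):
    """Keep the first (i.e. most recent) row per schedule_name.

    Assumes rows are already sorted by schedule_name ASC, start_time DESC,
    exactly as the storage query in get_tasks_latest returns them.
    """
    latest = []
    previous_schedule = None
    for row in rows:
        if not row['schedule_name'].strip():
            continue  # skip rows whose schedule name is blank
        if previous_schedule != row['schedule_name']:
            latest.append(row)  # first row for this schedule = newest task
            previous_schedule = row['schedule_name']
    return latest
```

Because the input is pre-sorted, a single pass suffices; an unsorted input would instead need grouping by name and taking the maximum `start_time` within each group.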
  },
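The `ServiceInfoCache` class in `service.py` below bounds its size by evicting the entry with the oldest `date_accessed`, while `__contains__` doubles as the hit/miss bookkeeping. A self-contained sketch of that policy (the class name, size, and payloads are illustrative; a monotonic counter stands in for the real class's `date_accessed` timestamps so that two rapid accesses can never tie, and the real class also tracks per-entry hit counts):

```python
import itertools

class BoundedInfoCache:
    """Evict-oldest-access cache sketch, mirroring ServiceInfoCache's policy."""

    def __init__(self, size=2):
        self.cache = {}
        self.max_cache_size = size
        self.hit = 0
        self.miss = 0
        self._clock = itertools.count()  # stand-in for date_accessed

    def __contains__(self, name):
        # Membership test doubles as hit/miss bookkeeping and refreshes
        # the access stamp, as in ServiceInfoCache.__contains__.
        if name in self.cache:
            self.hit += 1
            self.cache[name]['accessed'] = next(self._clock)
            return True
        self.miss += 1
        return False

    def update(self, name, info):
        # Evict the least recently accessed entry when inserting a new key
        # into a full cache (ServiceInfoCache.remove_oldest does the same scan).
        if name not in self.cache and len(self.cache) >= self.max_cache_size:
            oldest = min(self.cache, key=lambda k: self.cache[k]['accessed'])
            self.cache.pop(oldest)
        self.cache[name] = {'accessed': next(self._clock), 'service_info': info}

    def get(self, name):
        if name in self:  # refreshes the access stamp via __contains__
            return self.cache[name]['service_info']
        return None
```

A `get` makes its entry the newest, so a subsequent eviction removes a colder entry; this is what lets repeatedly queried services stay cached in `_fetch_service_info`.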
  {
    "path": "python/fledge/services/core/api/service.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport aiohttp\nimport asyncio\nimport os\nimport datetime\nimport uuid\nimport json\nimport multiprocessing\nimport subprocess\nfrom aiohttp import web\n\nfrom typing import Dict, List\nfrom fledge.common import utils\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.services.core import server\nfrom fledge.services.core import connect\nfrom fledge.services.core.api import utils as apiutils\nfrom fledge.services.core.scheduler.entities import StartUpSchedule\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core.service_registry import exceptions as service_registry_exceptions\nfrom fledge.common.common import _FLEDGE_ROOT\nfrom fledge.services.core.api.plugins import common\nfrom fledge.services.core.api.plugins import install\nfrom fledge.services.core.api.plugins.exceptions import *\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.web.middleware import has_permission\n\n\n__author__ = \"Mark Riddoch, Ashwin Gopalakrishnan, Amarendra K Sinha, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    ------------------------------------------------------------------------------\n    | GET POST            | /fledge/service                                      |\n    | GET                 | /fledge/service/available                            |\n    | GET                 | /fledge/service/installed                            |\n    | GET                 | /fledge/service/info                               
  |\n    | GET                 | /fledge/service/info/{service_name}                  |\n    | PUT                 | /fledge/service/{type}/{name}/update                 |\n    | DELETE              | /fledge/service/{service_name}                       |\n    | POST                | /fledge/service/{service_name}/otp                   |\n    ------------------------------------------------------------------------------\n\"\"\"\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nclass ServiceInfoCache(object):\n    \"\"\"Service Information Cache Manager\"\"\"\n\n    def __init__(self, size=20):\n        \"\"\"\n        cache: value stored in dictionary as per service_name\n        max_cache_size: Hold the recently requested service info in the cache. Default cache size is 20\n        hit: number of times an item is read from the cache\n        miss: number of times an item was not found in the cache and a read of the service was required\n        \"\"\"\n        self.cache = {}\n        self.max_cache_size = size\n        self.hit = 0\n        self.miss = 0\n\n    def __contains__(self, service_name):\n        \"\"\"Returns True or False depending on whether or not the service is in the cache\n        and update the hit and date_accessed\"\"\"\n        if service_name in self.cache:\n            try:\n                current_hit = self.cache[service_name]['hit']\n            except KeyError:\n                current_hit = 0\n\n            self.hit += 1\n            self.cache[service_name].update({'date_accessed': datetime.datetime.now(), 'hit': current_hit + 1})\n            return True\n        self.miss += 1\n        return False\n\n    def update(self, service_name, service_info):\n        \"\"\"Update the cache dictionary and remove the oldest item if needed\"\"\"\n        if service_name not in self.cache and len(self.cache) >= self.max_cache_size:\n            self.remove_oldest()\n        self.cache[service_name] = {\n            'date_accessed': 
datetime.datetime.now(),\n            'service_info': service_info,\n            'hit': 0\n        }\n        _logger.debug(\"Updated Service Info Cache for service: %s\", service_name)\n\n    def remove_oldest(self):\n        \"\"\"Remove the entry that has the oldest accessed date\"\"\"\n        oldest_entry = None\n        for service_name in self.cache:\n            if oldest_entry is None:\n                oldest_entry = service_name\n            elif (self.cache[service_name].get('date_accessed') and\n                  self.cache[oldest_entry].get('date_accessed') and\n                  self.cache[service_name]['date_accessed'] < self.cache[oldest_entry]['date_accessed']):\n                oldest_entry = service_name\n        if oldest_entry:\n            self.cache.pop(oldest_entry)\n\n    def get(self, service_name):\n        \"\"\"Get service info from cache\"\"\"\n        if service_name in self:  # This will update hit count and date_accessed\n            return self.cache[service_name]['service_info']\n        return None\n\n    def remove(self, service_name):\n        \"\"\"Remove the entry with given service name\"\"\"\n        if service_name in self.cache:\n            self.cache.pop(service_name)\n\n    def clear(self):\n        \"\"\"Clear all cached service info\"\"\"\n        self.cache.clear()\n        self.hit = 0\n        self.miss = 0\n\n    @property\n    def size(self):\n        \"\"\"Return the size of the cache\"\"\"\n        return len(self.cache)\n\n\n# service info cache instance\n_service_info_cache = ServiceInfoCache()\n\n# TODO: FOGL-1022 - This is a temporary solution to get the service info for the prebuilt services.\n# Prebuilt services configuration\n_PREBUILT_SERVICES = {\n    \"south\": {\n        \"name\": \"south\",\n        \"description\": \"Service used to interact with device, API and generic sources of data\",\n        \"type\": \"south\",\n        \"process\": \"south\",\n        \"process_script\": \"south_c\"\n    
},\n    \"north\": {\n        \"name\": \"north\",\n        \"description\": \"Service used to interact with device, API and generic sources of data\",\n        \"type\": \"north\",\n        \"process\": \"north\",\n        \"process_script\": \"north_C\"\n    },\n    \"storage\": {\n        \"name\": \"storage\",\n        \"description\": \"The storage service buffers data within a single Fledge instance\",\n        \"type\": \"storage\",\n        \"process\": \"storage\",\n        \"process_script\": \"storage\"\n    }\n}\n\n#################################\n#  Service\n#################################\n\n\ndef get_service_records(_type=None):\n    sr_list = list()\n    svc_records = ServiceRegistry.all() if _type is None else ServiceRegistry.get(s_type=_type)\n    for service_record in svc_records:\n        service_data = {\n            'name': service_record._name,\n            'type': service_record._type,\n            'address': service_record._address,\n            'management_port': service_record._management_port,\n            'service_port': service_record._port,\n            'protocol': service_record._protocol,\n            'status': ServiceRecord.Status(int(service_record._status)).name.lower()\n        }\n        # Add the 'debug' key only if it's non-empty\n        if service_record._debug:\n            service_data['debug'] = service_record._debug\n        sr_list.append(service_data)\n    recs = {'services': sr_list}\n    return recs\n\n\nasync def _fetch_service_info(service_name: str) -> dict:\n    \"\"\"\n    Fetch service information by attempting multiple service discovery methods.\n\n    Implements caching for non-prebuilt services to improve performance.\n\n    Tries to get service info in the following order:\n    1. Check cache for non-prebuilt services\n    2. C service executable with --info argument\n    3. 
Python service module executed with --info argument\n\n    Args:\n        service_name: Name of the service to fetch info for\n\n    Returns:\n        Service info dictionary with service details, or empty response if all methods fail\n    \"\"\"\n    # Check cache first for non-prebuilt services\n    if service_name not in _PREBUILT_SERVICES:\n        cached_info = _service_info_cache.get(service_name)\n        if cached_info is not None:\n            _logger.debug(f\"Retrieved service info for {service_name} from cache\")\n            return cached_info\n\n    def _create_empty_service_response(name: str) -> dict:\n        return {\n            \"name\": name,\n            \"description\": \"\",\n            \"type\": \"\",\n            \"process\": \"\",\n            \"process_script\": \"\"\n        }\n\n    def _get_service_info_from_path(service_path: str, is_python: bool = False, service_name: str = None) -> dict:\n        \"\"\"\n        Get service information from the specified path.\n\n        Args:\n            service_path: Path to the service (executable or Python service package)\n            is_python: True if this is a Python service, False for C service\n            service_name: Name of the service (required for Python services, optional for C services)\n\n        Returns:\n            Service information dictionary or None if failed\n        \"\"\"\n        if is_python:\n            import sys\n            # service_name is required for Python services\n            if service_name is None:\n                _logger.error(\"service_name is required for Python services\")\n                return None\n            # Execute Python service with --info argument\n            cmd_with_args = [sys.executable, \"-m\", f\"fledge.services.{service_name}\", \"--info\"]\n            try:\n                p = subprocess.Popen(cmd_with_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n                out, err = p.communicate(timeout=10)\n                
return_code = p.returncode\n                if return_code == 0 and out:\n                    res = out.decode(\"utf-8\")\n                    return json.loads(res)\n                else:\n                    error_msg = err.decode(\"utf-8\") if err else \"Unknown error\"\n                    _logger.error(f\"Python service execution failed for {service_name}: {error_msg}\")\n                    return None\n            except subprocess.TimeoutExpired:\n                p.kill()\n                p.communicate()\n                _logger.error(f\"Python service execution timeout for {service_name}\")\n                return None\n            except json.JSONDecodeError as ex:\n                _logger.error(f\"Invalid JSON response from Python service {service_name}: {ex}\")\n                return None\n            except Exception as ex:\n                _logger.error(f\"Error executing Python service {service_name}: {ex}\")\n                return None\n        else:\n            # For C services, execute with --info argument\n            cmd_with_args = [service_path, \"--info\"]\n            try:\n                p = subprocess.Popen(cmd_with_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n                out, err = p.communicate(timeout=10)\n                return_code = p.returncode\n                if return_code == 0 and out:\n                    res = out.decode(\"utf-8\")\n                    return json.loads(res)\n                else:\n                    error_msg = err.decode(\"utf-8\") if err else \"Unknown error\"\n                    _logger.error(f\"Service execution failed for {service_name}: {error_msg}.\")\n                    return None\n            except subprocess.TimeoutExpired:\n                p.kill()\n                p.communicate()\n                _logger.error(f\"Service execution timeout for {service_name}.\")\n                return None\n            except json.JSONDecodeError as ex:\n                
_logger.error(f\"Invalid JSON response from service {service_name}: {ex}\")\n                return None\n\n    # First check C service path\n    service_path = f\"{_FLEDGE_ROOT}/services/fledge.services.{service_name}\"\n    if os.path.exists(service_path):\n        try:\n            service_info = _get_service_info_from_path(service_path, service_name=service_name)\n            if service_info:\n                # Cache successful result for non-prebuilt services\n                if service_name not in _PREBUILT_SERVICES:\n                    _service_info_cache.update(service_name, service_info)\n                    _logger.debug(f\"Cached service info for {service_name}\")\n                return service_info\n            else:\n                # File found but unable to get info - return early\n                _logger.error(f\"C service {service_name} found but unable to retrieve service info.\")\n                return _create_empty_service_response(service_name)\n        except Exception as ex:\n            _logger.error(f\"Error executing service {service_name}: {ex}\")\n            return _create_empty_service_response(service_name)\n\n    # Fallback to Python service path (only if C service not found)\n    python_service_path = f\"{_FLEDGE_ROOT}/python/fledge/services/{service_name}\"\n    if os.path.exists(python_service_path):\n        try:\n            service_info = _get_service_info_from_path(python_service_path, is_python=True, service_name=service_name)\n            if service_info:\n                # Cache successful result for non-prebuilt services\n                if service_name not in _PREBUILT_SERVICES:\n                    _service_info_cache.update(service_name, service_info)\n                    _logger.debug(f\"Cached service info for {service_name}\")\n                return service_info\n            else:\n                # File found but unable to get info - return early\n                _logger.error(f\"Python service {service_name} 
found but unable to retrieve service info.\")\n                return _create_empty_service_response(service_name)\n        except Exception as ex:\n            _logger.error(f\"Error loading Python service {service_name}: {ex}\")\n            return _create_empty_service_response(service_name)\n\n    # Both paths failed - log once and return empty response\n    _logger.error(f\"Service {service_name} not found in both ({service_path}) and ({python_service_path}) paths.\")\n    return _create_empty_service_response(service_name)\n\n\ndef get_service_installed() -> List:\n    paths = [_FLEDGE_ROOT + \"/services\", _FLEDGE_ROOT + \"/python/fledge/services/management\"]\n    services = []\n    svc_prefix = 'fledge.services.'\n    for _path in paths:\n        for root, dirs, files in os.walk(_path):\n            for _file in files:\n                if _file.startswith(svc_prefix):\n                    services.append(_file.split(svc_prefix)[-1])\n                elif _file == '__main__.py':\n                    services.append('management')\n    return services\n\n\nasync def get_service_info(request):\n    \"\"\"\n    Get information about all the services by calling each service\n\n    Args:\n        request: HTTP request object\n\n    Returns:\n        JSON response with information about all services including their configuration\n\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/service/info\n    \"\"\"\n    try:\n        # Get list of installed services\n        installed_services = get_service_installed()\n        services = []\n        for service_name in installed_services:\n            # Check if it's a prebuilt service\n            if service_name in _PREBUILT_SERVICES:\n                services.append(_PREBUILT_SERVICES[service_name])\n                continue\n            service_info = await _fetch_service_info(service_name)\n            services.append(service_info)\n        response = {\"services\": services}\n        return 
web.json_response(response)\n    except Exception as ex:\n        msg = str(ex)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n\nasync def get_service_info_by_name(request):\n    \"\"\"\n    Get information about a specific service by name\n\n    Args:\n        request: HTTP request object with service name in URL path\n\n    Returns:\n        JSON response with information about the specified service\n\n    :Example:\n        curl -sX GET http://localhost:8081/fledge/service/info/MyService\n    \"\"\"\n    try:\n        service_name = request.match_info.get('service_name', None)\n        if service_name is None:\n            msg = \"Service name is required in the URL path.\"\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n\n        # Get list of installed services to validate the service exists\n        installed_services = get_service_installed()\n        if service_name not in installed_services:\n            msg = f\"Service '{service_name}' not found in installed services.\"\n            raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n\n        # Check if it's a prebuilt service\n        if service_name in _PREBUILT_SERVICES:\n            service_info = _PREBUILT_SERVICES[service_name]\n        else:\n            # Try to get service info from C or Python service\n            service_info = await _fetch_service_info(service_name)\n        return web.json_response(service_info)\n    except web.HTTPException:\n        # Re-raise the 400/404 responses raised above instead of wrapping them as 500\n        raise\n    except Exception as ex:\n        msg = str(ex)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n\nasync def get_health(request):\n    \"\"\"\n    Args:\n        request:\n\n    Returns:\n            health of all registered services\n\n    :Example:\n            curl -sX GET http://localhost:8081/fledge/service\n            curl -sX GET http://localhost:8081/fledge/service?type=Southbound\n    \"\"\"\n    try:\n        if 
'type' in request.query and request.query['type'] != '':\n            _type = request.query['type']\n            svc_type_members = ServiceRecord.Type._member_names_\n            is_type_exists = _type in svc_type_members\n            if not is_type_exists:\n                raise ValueError('{} is not a valid service type. Supported types are {}'.format(_type,\n                                                                                                 svc_type_members))\n            response = get_service_records(_type)\n        else:\n            response = get_service_records()\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except service_registry_exceptions.DoesNotExist:\n        msg = \"No record found for {} service type\".format(_type)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(response)\n\n\nasync def delete_service(request):\n    \"\"\" Delete an existing service\n\n    :Example:\n        curl -X DELETE http://localhost:8081/fledge/service/<svc name>\n    \"\"\"\n    try:\n        svc = request.match_info.get('service_name', None)\n        storage = connect.get_storage_async()\n\n        result = await get_schedule(storage, svc)\n        if result['count'] == 0:\n            return web.HTTPNotFound(reason='{} service does not exist.'.format(svc))\n\n        config_mgr = ConfigurationManager(storage)\n\n        # TODO: 5141 - once done we need to fix for dispatcher type as well\n        # In case of notification service, if notifications exists, then deletion is not allowed\n        if 'notification' in result['rows'][0]['process_name']:\n            notf_children = await config_mgr.get_category_child(category_name=\"Notifications\")\n  
          children = [x['key'] for x in notf_children]\n            if len(notf_children) > 0:\n                return web.HTTPBadRequest(reason='Notification service `{}` can not be deleted, as {} notification instances exist.'.format(svc, children))\n\n        # First disable the schedule\n        svc_schedule = result['rows'][0]\n        sch_id = uuid.UUID(svc_schedule['id'])\n        if svc_schedule['enabled'].lower() == 't':\n            await server.Server.scheduler.disable_schedule(sch_id)\n            # return control to event loop\n            await asyncio.sleep(2)\n\n        # Delete all configuration for the service name\n        await config_mgr.delete_category_and_children_recursively(svc)\n\n        # Remove from registry as it has been already shutdown via disable_schedule() and since\n        # we intend to delete the schedule also, there is no use of its Service registry entry\n        try:\n            services = ServiceRegistry.get(name=svc)\n            ServiceRegistry.remove_from_registry(services[0]._id)\n        except service_registry_exceptions.DoesNotExist:\n            pass\n\n        # Delete streams and plugin data\n        await delete_streams(storage, svc)\n        await delete_plugin_data(storage, svc)\n        await delete_filters(storage, svc)\n\n        # Delete schedule\n        await server.Server.scheduler.delete_schedule(sch_id)\n\n        # Update deprecated timestamp in asset_tracker\n        await update_deprecated_ts_in_asset_tracker(storage, svc)\n\n        # Delete user alerts\n        try:\n            await server.Server._alert_manager.delete(svc)\n        except:\n            pass\n    except Exception as ex:\n        raise web.HTTPInternalServerError(reason=str(ex))\n    else:\n        return web.json_response({'result': 'Service {} deleted successfully.'.format(svc)})\n\n\nasync def delete_streams(storage, svc):\n    payload = PayloadBuilder().WHERE([\"description\", \"=\", svc]).payload()\n    await 
storage.delete_from_tbl(\"streams\", payload)\n\n\nasync def delete_plugin_data(storage, svc):\n    payload = PayloadBuilder().WHERE([\"service_name\", \"=\", svc]).payload()\n    await storage.delete_from_tbl(\"plugin_data\", payload)\n\n\nasync def delete_filters(storage, service_name):\n    # First, get the filter names related to the user\n    select_payload = PayloadBuilder().SELECT(\"name\").WHERE(['user', '=', service_name]).payload()\n    get_result = await storage.query_tbl_with_payload('filter_users', select_payload)\n    if 'rows' in get_result and get_result['rows']:\n        filter_names = [row['name'] for row in get_result['rows']]\n        # Delete each corresponding filter\n        for name in filter_names:\n            del_payload = PayloadBuilder().WHERE(['name', '=', name]).payload()\n            await storage.delete_from_tbl(\"filters\", del_payload)\n    # Delete filters\n    payload = PayloadBuilder().WHERE(['user', '=', service_name]).payload()\n    await storage.delete_from_tbl(\"filter_users\", payload)\n\n\nasync def update_deprecated_ts_in_asset_tracker(storage, svc):\n    \"\"\"\n    TODO: FOGL-6749\n    Once rows affected with 0 case handled at Storage side\n    then we will need to update the query with AND_WHERE(['deprecated_ts', 'isnull'])\n    At the moment deprecated_ts is updated even in notnull case.\n    Also added SELECT query before UPDATE to avoid BadCase when there is no asset track entry exists for the instance.\n    This should also be removed when given JIRA is fixed.\n    \"\"\"\n    select_payload = PayloadBuilder().SELECT(\"deprecated_ts\").WHERE(['service', '=', svc]).payload()\n    get_result = await storage.query_tbl_with_payload('asset_tracker', select_payload)\n    if 'rows' in get_result:\n        response = get_result['rows']\n        if response:\n            # AND_WHERE(['deprecated_ts', 'isnull']) once FOGL-6749 is done\n            current_time = utils.local_timestamp()\n            update_payload = 
PayloadBuilder().SET(deprecated_ts=current_time).WHERE(\n                ['service', '=', svc]).payload()\n            await storage.update_tbl(\"asset_tracker\", update_payload)\n\n\nasync def add_service(request):\n    \"\"\"\n    Create a new service to run a specific plugin\n\n    :Example:\n             curl -X POST http://localhost:8081/fledge/service -d '{\"name\": \"DHT 11\", \"plugin\": \"dht11\", \"type\": \"south\", \"enabled\": true}'\n             curl -sX POST http://localhost:8081/fledge/service -d '{\"name\": \"Sine\", \"plugin\": \"sinusoid\", \"type\": \"south\", \"enabled\": true, \"config\": {\"dataPointsPerSec\": {\"value\": \"10\"}}}' | jq\n             curl -X POST http://localhost:8081/fledge/service -d '{\"name\": \"NotificationServer\", \"type\": \"notification\", \"enabled\": true}' | jq\n             curl -sX POST http://localhost:8081/fledge/service -d '{\"name\": \"DispatcherServer\", \"type\": \"dispatcher\", \"enabled\": true}' | jq\n             curl -sX POST http://localhost:8081/fledge/service -d '{\"name\": \"BucketServer\", \"type\": \"bucketstorage\", \"enabled\": true}' | jq\n             curl -X POST http://localhost:8081/fledge/service -d '{\"name\": \"HTC\", \"plugin\": \"httpc\", \"type\": \"north\", \"enabled\": true}' | jq\n             curl -sX POST http://localhost:8081/fledge/service -d '{\"name\": \"HT\", \"plugin\": \"http_north\", \"type\": \"north\", \"enabled\": true, \"config\": {\"verifySSL\": {\"value\": \"false\"}}}' | jq\n             curl -sX POST http://localhost:8081/fledge/service -d '{\"name\": \"Pipeline-Ingest\", \"type\": \"pipeline\", \"enabled\": true}' | jq\n\n             curl -sX POST http://localhost:8081/fledge/service?action=install -d '{\"format\":\"repository\", \"name\": \"fledge-service-notification\"}'\n             curl -sX POST http://localhost:8081/fledge/service?action=install -d '{\"format\":\"repository\", \"name\": \"fledge-service-dispatcher\"}'\n             curl -sX POST 
http://localhost:8081/fledge/service?action=install -d '{\"format\":\"repository\", \"name\": \"fledge-service-bucketStorage\"}'\n             curl -sX POST http://localhost:8081/fledge/service?action=install -d '{\"format\":\"repository\", \"name\": \"fledge-service-notification\", \"version\":\"1.6.0\"}'\n             curl -sX POST http://localhost:8081/fledge/service?action=install -d '{\"format\":\"repository\", \"name\": \"fledge-service-dispatcher\", \"version\":\"1.9.1\"}'\n             curl -sX POST http://localhost:8081/fledge/service?action=install -d '{\"format\":\"repository\", \"name\": \"fledge-service-bucketStorage\", \"version\":\"1.9.2\"}'\n    \"\"\"\n\n    try:\n        data = await request.json()\n        if not isinstance(data, dict):\n            raise ValueError('Data payload must be a valid JSON')\n\n        name = data.get('name', None)\n        plugin = data.get('plugin', None)\n        service_type = data.get('type', None)\n        enabled = data.get('enabled', None)\n        config = data.get('config', None)\n\n        if name is None:\n            raise web.HTTPBadRequest(reason='Missing name property in payload.')\n        if 'action' in request.query and request.query['action'] != '':\n            if request.query['action'] == 'install':\n                file_format = data.get('format', None)\n                if file_format is None:\n                    raise ValueError(\"format param is required\")\n                if file_format not in [\"repository\"]:\n                    raise ValueError(\"Invalid format. 
Must be 'repository'\")\n                if not name.startswith(\"fledge-service-\"):\n                    raise ValueError('name should start with \"fledge-service-\" prefix')\n                version = data.get('version', None)\n                if version:\n                    delimiter = '.'\n                    if str(version).count(delimiter) != 2:\n                        raise ValueError('Service semantic version is incorrect; it should be like X.Y.Z')\n                pkg_mgt = 'yum' if utils.is_redhat_based() else 'apt'\n                # Check Pre-conditions from Packages table\n                # if status is -1 (Already in progress) then return as rejected request\n                storage = connect.get_storage_async()\n                action = \"install\"\n                select_payload = PayloadBuilder().SELECT(\"status\").WHERE(['action', '=', action]).AND_WHERE(\n                    ['name', '=', name]).payload()\n                result = await storage.query_tbl_with_payload('packages', select_payload)\n                response = result['rows']\n                if response:\n                    exit_code = response[0]['status']\n                    if exit_code == -1:\n                        msg = \"{} package installation already in progress\".format(name)\n                        return web.HTTPTooManyRequests(reason=msg, body=json.dumps({\"message\": msg}))\n                    # Remove old entry from table for other cases\n                    delete_payload = PayloadBuilder().WHERE(['action', '=', action]).AND_WHERE(\n                        ['name', '=', name]).payload()\n                    await storage.delete_from_tbl(\"packages\", delete_payload)\n\n                # Check If requested service is already installed and then return immediately\n                services = get_service_installed()\n                svc_name = name.split('fledge-')[1].split('-')[1]\n                for s in services:\n                    if s == svc_name:\n         
               msg = \"{} package is already installed\".format(name)\n                        return web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n\n                # Check If requested service is available for configured repository\n                services, log_path = await common.fetch_available_packages()\n                if name not in services:\n                    raise KeyError('{} service is not available for the given repository'.format(name))\n                \n                # Insert record into Packages table\n                uid = str(uuid.uuid4())\n                insert_payload = PayloadBuilder().INSERT(id=uid, name=name, action=action, status=-1,\n                                                         log_file_uri=\"\").payload()\n                result = await storage.insert_into_tbl(\"packages\", insert_payload)\n                if result['response'] == \"inserted\" and result['rows_affected'] == 1:\n                    pn = \"{}-{}\".format(action, name)\n                    p = multiprocessing.Process(name=pn, target=install.install_package_from_repo,\n                                                args=(name, pkg_mgt, version, uid, storage))\n                    p.daemon = True\n                    p.start()\n                    msg = \"{} service installation started\".format(name)\n                    _logger.info(\"{}...\".format(msg))\n                    status_link = \"fledge/package/install/status?id={}\".format(uid)\n                    return web.json_response({\"message\": msg, \"id\": uid, \"statusLink\": status_link})\n                else:\n                    raise StorageServerError\n            else:\n                raise web.HTTPBadRequest(reason='{} is not a valid action'.format(request.query['action']))\n        if utils.check_reserved(name) is False:\n            raise web.HTTPBadRequest(reason='Invalid name property in payload.')\n        if utils.check_fledge_reserved(name) is False:\n         
   raise web.HTTPBadRequest(reason=\"'{}' is reserved for Fledge and cannot be used as a service name!\".format(name))\n        if service_type is None:\n            raise web.HTTPBadRequest(reason='Missing type property in payload.')\n\n        service_type = str(service_type).lower()\n        if service_type not in ['south', 'north', 'notification', 'management', 'dispatcher',\n                                'bucketstorage', 'pipeline']:\n            raise web.HTTPBadRequest(reason='Only south, north, notification, management, dispatcher, bucketstorage '\n                                            'and pipeline types are supported.')\n        if plugin is None and service_type in ('south', 'north'):\n            raise web.HTTPBadRequest(reason='Missing plugin property for type {} in payload.'.format(service_type))\n        if plugin and utils.check_reserved(plugin) is False:\n            raise web.HTTPBadRequest(reason='Invalid plugin property in payload.')\n\n        if enabled is not None:\n            if enabled not in ['true', 'false', True, False]:\n                raise web.HTTPBadRequest(reason='Only \"true\", \"false\", true, false'\n                                                ' are allowed for value of enabled.')\n        # Normalise enabled (string or boolean) to a boolean; any other value means disabled\n        is_enabled = str(enabled).lower() == 'true'\n\n        dryrun = not is_enabled\n\n        # Check if a valid plugin has been provided\n        plugin_module_path, plugin_config, process_name, script = \"\", {}, \"\", \"\"\n        if service_type == 'south' or service_type == 'north':\n            # \"plugin_module_path\" is fixed by design. 
It is MANDATORY to keep the plugin in a folder\n            # of exactly the same name, within the plugin_module_path.\n            # if multiple plugins with the same name are found, then the Python plugin import is tried first\n            plugin_module_path = \"{}/python/fledge/plugins/{}/{}\".format(_FLEDGE_ROOT, service_type, plugin)\n            process_name = 'south_c' if service_type == 'south' else 'north_C'\n            script = '[\"services/south_c\"]' if service_type == 'south' else '[\"services/north_C\"]'\n            try:\n                plugin_info = common.load_and_fetch_python_plugin_info(plugin_module_path, plugin, service_type)\n                plugin_config = plugin_info['config']\n                if not plugin_config:\n                    msg = \"Plugin '{}' import problem from path '{}'.\".format(plugin, plugin_module_path)\n                    _logger.exception(msg)\n                    raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n            except FileNotFoundError as ex:\n                # Checking for C-type plugins\n                plugin_config = load_c_plugin(plugin, service_type)\n                plugin_module_path = \"{}/plugins/{}/{}\".format(_FLEDGE_ROOT, service_type, plugin)\n                if not plugin_config:\n                    msg = \"Plugin '{}' not found in path '{}'.\".format(plugin, plugin_module_path)\n                    _logger.exception(ex, msg)\n                    raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n            except TypeError as ex:\n                raise web.HTTPBadRequest(reason=str(ex))\n            except Exception as ex:\n                _logger.error(ex, \"Failed to fetch plugin info config item.\")\n                raise web.HTTPInternalServerError(reason='Failed to fetch plugin configuration')\n        elif service_type == 'notification':\n            if not os.path.exists(_FLEDGE_ROOT + 
\"/services/fledge.services.{}\".format(service_type)):\n                msg = \"{} service is not installed correctly.\".format(service_type.capitalize())\n                raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n            process_name = 'notification_c'\n            script = '[\"services/notification_c\"]'\n        elif service_type == 'management':\n            file_names_list = ['{}/python/fledge/services/management/__main__.py'.format(_FLEDGE_ROOT),\n                               '{}/scripts/services/management'.format(_FLEDGE_ROOT),\n                               '{}/scripts/tasks/manage'.format(_FLEDGE_ROOT)]\n            if not all(list(map(os.path.exists, file_names_list))):\n                msg = \"{} service is not installed correctly.\".format(service_type.capitalize())\n                raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n            process_name = 'management'\n            script = '[\"services/management\"]'\n        elif service_type == 'dispatcher':\n            if not os.path.exists(_FLEDGE_ROOT + \"/services/fledge.services.{}\".format(service_type)):\n                msg = \"{} service is not installed correctly.\".format(service_type.capitalize())\n                raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n            process_name = 'dispatcher_c'\n            script = '[\"services/dispatcher_c\"]'\n        elif service_type == 'bucketstorage':\n            if not os.path.exists(_FLEDGE_ROOT + \"/services/fledge.services.bucket\"):\n                msg = \"{} service is not installed correctly.\".format(service_type.capitalize())\n                raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n            process_name = 'bucket_storage_c'\n            script = '[\"services/bucket_storage_c\"]'\n        elif service_type == 'pipeline':\n            process_name = 'pipeline_c'\n            script = 
'[\"services/pipeline_c\"]'\n        storage = connect.get_storage_async()\n        config_mgr = ConfigurationManager(storage)\n\n        # Check whether the category name already exists\n        category_info = await config_mgr.get_category_all_items(category_name=name)\n        if category_info is not None:\n            raise web.HTTPBadRequest(reason=\"The '{}' category already exists\".format(name))\n\n        # Check that the schedule name is not already registered\n        count = await check_schedules(storage, name)\n        if count != 0:\n            msg = \"A service with the name {} already exists.\".format(name)\n            raise ValueError(msg)\n\n        # Check that the process name is not already registered\n        count = await check_scheduled_processes(storage, process_name, script)\n        if count == 0:\n            # Now first create the scheduled process entry for the new service\n            column_name = {\"name\": process_name, \"script\": script}\n            if service_type == 'management':\n                column_name[\"priority\"] = 300\n            payload = PayloadBuilder().INSERT(**column_name).payload()\n            try:\n                res = await storage.insert_into_tbl(\"scheduled_processes\", payload)\n            except StorageServerError as ex:\n                _logger.exception(\"Failed to create scheduled process. 
%s\", ex.error)\n                raise web.HTTPInternalServerError(reason='Failed to create service.')\n            except Exception as ex:\n                _logger.error(ex, \"Failed to create scheduled process.\")\n                raise web.HTTPInternalServerError(reason='Failed to create service.')\n\n        # check that a notification service is not already registered; currently limited to one instance\n        if service_type == 'notification':\n            res = await check_schedule_entry(storage)\n            for ps in res['rows']:\n                if 'notification_c' in ps['process_name']:\n                    msg = \"A Notification service type schedule already exists.\"\n                    raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n        # check that a dispatcher service is not already registered; currently limited to one instance\n        elif service_type == 'dispatcher':\n            res = await check_schedule_entry(storage)\n            for ps in res['rows']:\n                if 'dispatcher_c' in ps['process_name']:\n                    msg = \"A Dispatcher service type schedule already exists.\"\n                    raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n        # check that a BucketStorage service is not already registered; currently limited to one instance\n        elif service_type == 'bucketstorage':\n            res = await check_schedule_entry(storage)\n            for ps in res['rows']:\n                if 'bucket_storage_c' in ps['process_name']:\n                    msg = \"A BucketStorage service type schedule already exists.\"\n                    raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n        # check that a management service is not already registered; currently limited to one instance\n        elif service_type == 'management':\n            res = await check_schedule_entry(storage)\n            for ps in res['rows']:\n                if 
'management' in ps['process_name']:\n                    msg = \"A Management service type schedule already exists.\"\n                    raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n        elif service_type == 'south' or service_type == 'north':\n            try:\n                # Create a configuration category from the configuration defined in the plugin\n                category_desc = plugin_config['plugin']['description']\n                await config_mgr.create_category(category_name=name,\n                                                 category_description=category_desc,\n                                                 category_value=plugin_config,\n                                                 keep_original_items=True)\n                # Create the parent category for all services of this type\n                parent_cat_name = service_type.capitalize()\n                await config_mgr.create_category(parent_cat_name, {}, \"{} microservices\".format(parent_cat_name), True)\n                await config_mgr.create_child_category(parent_cat_name, [name])\n\n                # If config is in POST data, then update the value for each config item\n                if config is not None:\n                    if not isinstance(config, dict):\n                        raise ValueError('Config must be a JSON object')\n                    for k, v in config.items():\n                        await config_mgr.set_category_item_value_entry(name, k, v['value'])\n            except Exception as ex:\n                if \"Invalid character\" in ex.args[0]:\n                    # ex.args[0] contains the full error message explaining why it failed.\n                    msg = \"Failed to create service. 
{}\".format(ex.args[0])\n                    _logger.error(ex, msg)\n                    raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n                \n                # Cleanup the category created for the service\n                await config_mgr.delete_category_and_children_recursively(name)\n                msg = \"Failed to create plugin configuration while adding service.\"\n                _logger.error(ex, msg)\n                raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n        # If all successful then lastly add a schedule to run the new service at startup\n        try:\n            schedule = StartUpSchedule()\n            schedule.name = name\n            schedule.process_name = process_name\n            schedule.repeat = datetime.timedelta(0)\n            schedule.exclusive = True\n            #  if \"enabled\" is supplied, it gets activated in save_schedule() via is_enabled flag\n            schedule.enabled = False\n\n            # Reset startup priority order\n            server.Server.scheduler.reset_process_script_priority()\n\n            # Save schedule\n            await server.Server.scheduler.save_schedule(schedule, is_enabled, dryrun=dryrun)\n            schedule = await server.Server.scheduler.get_schedule_by_name(name)\n        except StorageServerError as ex:\n            await config_mgr.delete_category_and_children_recursively(name)\n            _logger.exception(\"Failed to create schedule. %s\", ex.error)\n            raise web.HTTPInternalServerError(reason='Failed to create service.')\n        except Exception as ex:\n            if \"Invalid character\" in ex.args[0]:\n                # ex.args[0] contains the full error message to tell why it failed.\n                msg = \"Failed to create service. 
{}\".format(ex.args[0])\n                _logger.error(ex, msg)\n                raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n            # Cleanup the category created for the service\n            await config_mgr.delete_category_and_children_recursively(name)\n            _logger.error(ex, \"Failed to create service.\")\n            raise web.HTTPInternalServerError(reason='Failed to create service.')\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except KeyError as err:\n        msg = str(err)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except StorageServerError as err:\n        msg = str(err)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": \"Storage error: {}\".format(msg)}))\n    else:\n        return web.json_response({'name': name, 'id': str(schedule.schedule_id)})\n\n\ndef load_c_plugin(plugin: str, service_type: str) -> Dict:\n    plugin_config = {}\n    try:\n        plugin_info = apiutils.get_plugin_info(plugin, dir=service_type)\n        if plugin_info['type'] != service_type:\n            msg = \"Plugin of {} type is not supported\".format(plugin_info['type'])\n            raise TypeError(msg)\n        plugin_config = plugin_info['config']\n    except Exception:\n        # Now look for a hybrid plugin, if one exists\n        try:\n            plugin_info = common.load_and_fetch_c_hybrid_plugin_info(plugin, True)\n            if plugin_info:\n                plugin_config = plugin_info['config']\n        except Exception:\n            # The C plugin was not found in the hybrid path either. 
Hence it is treated as a bad plugin\n            _logger.error(\"No {} plugin found\".format(plugin))\n            plugin_config = {}\n    return plugin_config\n\n\nasync def check_scheduled_processes(storage, process_name, script):\n    payload = PayloadBuilder().SELECT(\"name\").WHERE(['name', '=', process_name]).AND_WHERE(['script', '=', script]).payload()\n    result = await storage.query_tbl_with_payload('scheduled_processes', payload)\n    return result['count']\n\n\nasync def check_schedules(storage, schedule_name):\n    payload = PayloadBuilder().SELECT(\"schedule_name\").WHERE(['schedule_name', '=', schedule_name]).payload()\n    result = await storage.query_tbl_with_payload('schedules', payload)\n    return result['count']\n\n\nasync def get_schedule(storage, schedule_name):\n    payload = PayloadBuilder().SELECT([\"id\", \"enabled\"]).WHERE(['schedule_name', '=', schedule_name]).payload()\n    result = await storage.query_tbl_with_payload('schedules', payload)\n    return result\n\n\nasync def check_schedule_entry(storage):\n    payload = PayloadBuilder().SELECT(\"process_name\").payload()\n    result = await storage.query_tbl_with_payload('schedules', payload)\n    return result\n\n\nasync def get_available(request: web.Request) -> web.Response:\n    \"\"\" get list of available services via package management, i.e. apt or yum\n\n        :Example:\n            curl -X GET http://localhost:8081/fledge/service/available\n    \"\"\"\n    try:\n        services, log_path = await common.fetch_available_packages(\"service\")\n    except PackageError as e:\n        msg = \"Fetch available service package request failed\"\n        raise web.HTTPBadRequest(body=json.dumps({\"message\": msg, \"link\": str(e)}), reason=msg)\n    except Exception as ex:\n        raise web.HTTPInternalServerError(reason=str(ex))\n\n    return web.json_response({\"services\": services, \"link\": log_path})\n\n\nasync def get_installed(request: web.Request) -> web.Response:\n    \"\"\" get 
list of installed services\n\n        :Example:\n            curl -X GET http://localhost:8081/fledge/service/installed\n    \"\"\"\n    # Clear service info cache when checking installed services\n    # to ensure cache stays synchronized with actual installed services\n    _service_info_cache.clear()\n    services = get_service_installed()\n    return web.json_response({\"services\": services})\n\n\nasync def update_service(request: web.Request) -> web.Response:\n    \"\"\" update service\n\n    :Example:\n        curl -sX PUT http://localhost:8081/fledge/service/notification/notification/update\n    \"\"\"\n    _type = request.match_info.get('type', None)\n    name = request.match_info.get('name', None)\n    try:\n        _type = _type.lower()\n        if _type not in ('notification', 'dispatcher', 'bucket_storage', 'management'):\n            raise ValueError(\"Invalid service type.\")\n\n        # NOTE: the `bucketstorage` repository name maps to the `BucketStorage` type in the service registry, and its package name ends in `-bucket`.\n        # URL: /fledge/service/bucket_storage/bucket/update\n\n        # Check whether the requested service is installed\n        installed_services = get_service_installed()\n        if name not in installed_services:\n            raise KeyError(\"{} service is not installed yet. 
Hence update is not possible.\".format(name))\n\n        # Check Pre-conditions from Packages table\n        # if status is -1 (Already in progress) then return as rejected request\n        action = 'update'\n        package_name = \"fledge-service-{}\".format(name)\n        storage_client = connect.get_storage_async()\n        select_payload = PayloadBuilder().SELECT(\"status\").WHERE(['action', '=', action]).AND_WHERE(\n            ['name', '=', package_name]).payload()\n        result = await storage_client.query_tbl_with_payload('packages', select_payload)\n        response = result['rows']\n        if response:\n            exit_code = response[0]['status']\n            if exit_code == -1:\n                msg = \"{} package {} already in progress\".format(package_name, action)\n                return web.HTTPTooManyRequests(reason=msg, body=json.dumps({\"message\": msg}))\n            # Remove old entry from table for other cases\n            delete_payload = PayloadBuilder().WHERE(['action', '=', action]).AND_WHERE(\n                ['name', '=', package_name]).payload()\n            await storage_client.delete_from_tbl(\"packages\", delete_payload)\n\n        _where_clause = ['process_name', '=', '{}_c'.format(_type)]\n        if _type == 'management':\n            _where_clause = ['process_name', '=', '{}'.format(_type)]\n\n        payload = PayloadBuilder().SELECT(\"id\", \"enabled\", \"schedule_name\").WHERE(_where_clause).payload()\n        result = await storage_client.query_tbl_with_payload('schedules', payload)\n        sch_info = result['rows']\n        sch_list = []\n        if sch_info and sch_info[0]['enabled'] == 't':\n            status, reason = await server.Server.scheduler.disable_schedule(uuid.UUID(sch_info[0]['id']))\n            if status:\n                _logger.warning(\"Schedule is disabled for {}, as {} service of type {} is being updated...\".format(\n                    sch_info[0]['schedule_name'], name, _type))\n                
sch_list.append(sch_info[0]['id'])\n\n        # Insert record into Packages table\n        uid = str(uuid.uuid4())\n        insert_payload = PayloadBuilder().INSERT(id=uid, name=package_name, action=action, status=-1,\n                                                 log_file_uri=\"\").payload()\n        result = await storage_client.insert_into_tbl(\"packages\", insert_payload)\n        if result['response'] == \"inserted\" and result['rows_affected'] == 1:\n            pn = \"{}-{}\".format(action, name)\n            # Scheme is always http:// on core_management_port\n            p = multiprocessing.Process(name=pn,\n                                        target=do_update,\n                                        args=(\"http\", server.Server._host,\n                                              server.Server.core_management_port, storage_client, package_name,\n                                              uid, sch_list))\n            p.daemon = True\n            p.start()\n            msg = \"{} {} started\".format(package_name, action)\n            status_link = \"fledge/package/{}/status?id={}\".format(action, uid)\n            result_payload = {\"message\": msg, \"id\": uid, \"statusLink\": status_link}\n        else:\n            raise StorageServerError\n    except KeyError as ex:\n        raise web.HTTPNotFound(reason=str(ex))\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except Exception as ex:\n        raise web.HTTPInternalServerError(reason=str(ex))\n    else:\n        return web.json_response(result_payload)\n\n\nasync def _put_schedule(protocol: str, host: str, port: int, sch_id: uuid.UUID, is_enabled: bool) -> None:\n    management_api_url = '{}://{}:{}/fledge/schedule/{}/enable'.format(protocol, host, port, sch_id)\n    headers = {'content-type': 'application/json'}\n    verify_ssl = False\n    connector = aiohttp.TCPConnector(verify_ssl=verify_ssl)\n    async with aiohttp.ClientSession(connector=connector) as 
session:\n        async with session.put(management_api_url, data=json.dumps({\"value\": is_enabled}), headers=headers) as resp:\n            result = await resp.text()\n            status_code = resp.status\n            if status_code in range(400, 500):\n                _logger.error(\"Bad request error code: %d, reason: %s when PUT schedule\", status_code, resp.reason)\n            if status_code in range(500, 600):\n                _logger.error(\"Server error code: %d, reason: %s when PUT schedule\", status_code, resp.reason)\n            response = json.loads(result)\n            _logger.debug(\"PUT Schedule response: %s\", response)\n\n\ndef do_update(protocol: str, host: str, port: int, storage: connect, pkg_name: str, uid: str, schedules: list) -> None:\n    _logger.info(\"{} service update started...\".format(pkg_name))\n    stdout_file_path = common.create_log_file(\"update\", pkg_name)\n    pkg_mgt = 'yum' if utils.is_redhat_based() else 'apt'\n    cmd = \"sudo {} -y update > {} 2>&1\".format(pkg_mgt, stdout_file_path)\n    if pkg_mgt == 'yum':\n        cmd = \"sudo {} check-update > {} 2>&1\".format(pkg_mgt, stdout_file_path)\n    ret_code = os.system(cmd)\n    # sudo apt/yum -y install only happens when update is without any error\n    if ret_code == 0:\n        cmd = \"sudo {} -y install {} >> {} 2>&1\".format(pkg_mgt, pkg_name, stdout_file_path)\n        ret_code = os.system(cmd)\n\n    # relative log file link\n    link = \"log/\" + stdout_file_path.split(\"/\")[-1]\n\n    # Update record in Packages table\n    payload = PayloadBuilder().SET(status=ret_code, log_file_uri=link).WHERE(['id', '=', uid]).payload()\n    loop = asyncio.new_event_loop()\n    loop.run_until_complete(storage.update_tbl(\"packages\", payload))\n\n    if ret_code != 0:\n        _logger.error(\"{} service update failed. 
Logs available at {}\".format(pkg_name, link))\n    else:\n        # Audit info\n        audit = AuditLogger(storage)\n        audit_detail = {'packageName': pkg_name}\n        loop.run_until_complete(audit.information('PKGUP', audit_detail))\n        _logger.info('{} service updated successfully. Logs available at {}'.format(pkg_name, link))\n\n    # Restart the service which was disabled before service update\n    for sch in schedules:\n        loop.run_until_complete(_put_schedule(protocol, host, port, uuid.UUID(sch), True))\n\n\n@has_permission(\"admin\")\nasync def issueOTPToken(request):\n    \"\"\" Request an OTP startup token for service\n        being manually started\n\n        The request requires Core API authentication\n        and admin user\n\n    :Example:\n        curl -sX POST http://localhost:8081/fledge/service/SIN2/otp\n    \"\"\"\n    if request.is_auth_optional:\n        _logger.warning('Resource you were trying to reach is forbidden')\n        raise web.HTTPForbidden\n\n    # Get service name\n    service_name = request.match_info.get('service_name')\n\n    # Create a startup token for the service\n    startToken = ServiceRegistry.issueStartupToken(service_name)\n\n    return web.json_response({\"startupToken\": startToken})\n"
  },
  {
    "path": "python/fledge/services/core/api/snapshot/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/core/api/snapshot/plugins.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\nimport os\nfrom aiohttp import web\nfrom fledge.services.core.snapshot import SnapshotPluginBuilder\nfrom fledge.common.common import _FLEDGE_ROOT, _FLEDGE_DATA\nfrom fledge.common.web.middleware import has_permission\n\n__author__ = \"Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    -------------------------------------------------------------------------\n    | GET POST        | /fledge/snapshot/plugins                            |\n    | PUT DELETE      | /fledge/snapshot/plugins/{id}                       |\n    -------------------------------------------------------------------------\n\"\"\"\n\n\n@has_permission(\"admin\")\nasync def get_snapshot(request):\n    \"\"\" get list of available snapshots\n\n    :Example:\n        curl -X GET http://localhost:8081/fledge/snapshot/plugins\n\n        When auth is mandatory:\n        curl -X GET http://localhost:8081/fledge/snapshot/plugins -H \"authorization: <token>\" \n    \"\"\"\n    # Get snapshot directory path\n    snapshot_dir = _get_snapshot_dir()\n    valid_extension = '.tar.gz'\n    sorted_list = []\n    if os.path.isdir(snapshot_dir):\n        for root, dirs, files in os.walk(snapshot_dir):\n            valid_files = list(\n                filter(lambda f: f.endswith(valid_extension), files))\n            list_files = list(map(\n                lambda x: {\"id\": x.split(\"snapshot-plugin-\")[1].split(\".tar.gz\")[0],\n                           \"name\": x}, valid_files))\n            sorted_list = sorted(list_files, key=lambda k: k['id'], reverse=True)\n\n    return web.json_response({\"snapshots\": sorted_list})\n\n\n@has_permission(\"admin\")\nasync def post_snapshot(request):\n    \"\"\" Create a snapshot  by name\n\n    :Example:\n        curl -X POST 
http://localhost:8081/fledge/snapshot/plugins\n\n        When auth is mandatory:\n        curl -X POST http://localhost:8081/fledge/snapshot/plugins -H \"authorization: <token>\" \n    \"\"\"\n    try:\n        snapshot_dir = _get_snapshot_dir()\n        snapshot_id, snapshot_name = await SnapshotPluginBuilder(\n            snapshot_dir).build()\n    except Exception as ex:\n        raise web.HTTPInternalServerError(\n            reason='Snapshot could not be created. {}'.format(str(ex)))\n    else:\n        return web.json_response({\n            \"message\": \"snapshot id={}, file={} created successfully.\".format(\n                snapshot_id, snapshot_name)})\n\n\n@has_permission(\"admin\")\nasync def put_snapshot(request):\n    \"\"\"extract a snapshot\n\n    :Example:\n        curl -X PUT http://localhost:8081/fledge/snapshot/plugins/1554204238\n\n        When auth is mandatory:\n        curl -X PUT http://localhost:8081/fledge/snapshot/plugins/1554204238 -H \"authorization: <token>\" \n    \"\"\"\n    try:\n        snapshot_id = request.match_info.get('id', None)\n        snapshot_name = \"snapshot-plugin-{}.tar.gz\".format(snapshot_id)\n\n        try:\n            snapshot_id = int(snapshot_id)\n        except:\n            raise ValueError('Invalid snapshot id: {}'.format(snapshot_id))\n\n        if not os.path.isdir(_get_snapshot_dir()):\n            raise web.HTTPNotFound(reason=\"No snapshot found.\")\n\n        snapshot_dir = _get_snapshot_dir()\n        for root, dirs, files in os.walk(snapshot_dir):\n            if str(snapshot_name) not in files:\n                raise web.HTTPNotFound(reason='{} not found'.format(snapshot_name))\n\n        p = \"{}/{}\".format(snapshot_dir, snapshot_name)\n        SnapshotPluginBuilder(snapshot_dir).extract_files(p)\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except Exception as ex:\n        raise web.HTTPInternalServerError(\n            reason='Snapshot {} could not be 
restored. {}'.format(snapshot_name,\n                                                                  str(ex)))\n    else:\n        return web.json_response(\n            {\"message\": \"snapshot {} restored successfully.\".format(\n                snapshot_name)})\n\n\n@has_permission(\"admin\")\nasync def delete_snapshot(request):\n    \"\"\"delete a snapshot\n\n    :Example:\n        curl -X DELETE http://localhost:8081/fledge/snapshot/plugins/1554204238\n\n        When auth is mandatory:\n        curl -X DELETE http://localhost:8081/fledge/snapshot/plugins/1554204238 -H \"authorization: <token>\" \n    \"\"\"\n    try:\n        snapshot_id = request.match_info.get('id', None)\n        snapshot_name = \"snapshot-plugin-{}.tar.gz\".format(snapshot_id)\n\n        try:\n            snapshot_id = int(snapshot_id)\n        except:\n            raise ValueError('Invalid snapshot id: {}'.format(snapshot_id))\n\n        if not os.path.isdir(_get_snapshot_dir()):\n            raise web.HTTPNotFound(reason=\"No snapshot found.\")\n\n        snapshot_dir = _get_snapshot_dir()\n        for root, dirs, files in os.walk(_get_snapshot_dir()):\n            if str(snapshot_name) not in files:\n                raise web.HTTPNotFound(reason='{} not found'.format(snapshot_name))\n\n        p = \"{}/{}\".format(snapshot_dir, snapshot_name)\n        os.remove(p)\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except Exception as ex:\n        raise web.HTTPInternalServerError(\n            reason='Snapshot {} could not be deleted. 
{}'.format(snapshot_name,\n                                                                 str(ex)))\n    else:\n        return web.json_response(\n            {\"message\": \"snapshot {} deleted successfully.\".format(\n                snapshot_name)})\n\n\ndef _get_snapshot_dir():\n    if _FLEDGE_DATA:\n        snapshot_dir = os.path.expanduser(_FLEDGE_DATA + '/snapshots/plugins')\n    else:\n        snapshot_dir = os.path.expanduser(\n            _FLEDGE_ROOT + '/data/snapshots/plugins')\n    return snapshot_dir\n"
  },
  {
    "path": "python/fledge/services/core/api/snapshot/table.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\nimport json\nfrom aiohttp import web\n\nfrom fledge.services.core.connect import *\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.common.web.middleware import has_permission\n\n\n__author__ = \"Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n_help = \"\"\"\n    -------------------------------------------------------------------------------\n    | GET POST        | /fledge/snapshot/category                                |\n    | PUT DELETE      | /fledge/snapshot/category/{id}                           |\n    | GET POST        | /fledge/snapshot/schedule                                |\n    | PUT DELETE      | /fledge/snapshot/schedule/{id}                           |\n    -------------------------------------------------------------------------------\n\"\"\"\n\n_tables = {\n    \"category\": \"configuration\",\n    \"schedule\": \"schedules\"\n}\n\n\n@has_permission(\"admin\")\nasync def get_snapshot(request):\n    \"\"\" get list of available snapshots\n\n    :Example:\n        curl -X GET http://localhost:8081/fledge/snapshot/category\n        curl -X GET http://localhost:8081/fledge/snapshot/schedule\n        \n        When auth is mandatory:\n        curl -X GET http://localhost:8081/fledge/snapshot/category -H \"authorization: <token>\"\n        curl -X GET http://localhost:8081/fledge/snapshot/schedule -H \"authorization: <token>\"\n    \"\"\"\n    try:\n        r_path = request.path.split('/fledge/snapshot/')\n        table = _tables[r_path[1]]\n\n        _storage = get_storage_async()  # from fledge.services.core.connect\n        retval = await _storage.get_snapshot(table)\n        newlist = sorted(retval[\"rows\"], key=lambda k: k['id'], reverse=True)\n    except (StorageServerError, Exception) as ex:\n      
  raise web.HTTPInternalServerError(reason='{} table snapshots could not be fetched. {}'.format(table, str(ex)))\n    else:\n        return web.json_response({\"snapshots\": newlist})\n\n\n@has_permission(\"admin\")\nasync def post_snapshot(request):\n    \"\"\" Create a snapshot\n\n    :Example:\n        curl -X POST http://localhost:8081/fledge/snapshot/category\n        curl -X POST http://localhost:8081/fledge/snapshot/schedule\n        \n        When auth is mandatory:\n        curl -X POST http://localhost:8081/fledge/snapshot/category -H \"authorization: <token>\"\n        curl -X POST http://localhost:8081/fledge/snapshot/schedule -H \"authorization: <token>\"\n    \"\"\"\n    try:\n        r_path = request.path.split('/fledge/snapshot/')\n        table = _tables[r_path[1]]\n\n        _storage = get_storage_async()  # from fledge.services.core.connect\n        retval = await _storage.post_snapshot(table)\n    except (StorageServerError, Exception) as ex:\n        raise web.HTTPInternalServerError(reason='{} table snapshot could not be created. 
{}'.format(table, str(ex)))\n    else:\n        return web.json_response(retval[\"created\"])\n\n\n@has_permission(\"admin\")\nasync def put_snapshot(request):\n    \"\"\"restore a snapshot\n\n    :Example:\n        curl -X PUT http://localhost:8081/fledge/snapshot/category/1554202741\n        curl -X PUT http://localhost:8081/fledge/snapshot/schedule/1554202742\n        \n        When auth is mandatory:\n        curl -X PUT http://localhost:8081/fledge/snapshot/category/1554202741 -H \"authorization: <token>\"\n        curl -X PUT http://localhost:8081/fledge/snapshot/schedule/1554202742 -H \"authorization: <token>\"\n    \"\"\"\n    try:\n        r_path = request.path.split('/fledge/snapshot/')\n        table = _tables[r_path[1].split('/')[0]]\n\n        snapshot_id = request.match_info.get('id', None)\n\n        try:\n            snapshot_id = int(snapshot_id)\n        except:\n            raise ValueError('Invalid snapshot id: {}'.format(snapshot_id))\n\n        _storage = get_storage_async()  # from fledge.services.core.connect\n        retval = await _storage.put_snapshot(table, snapshot_id)\n    except StorageServerError as ex:\n        if int(ex.code) in range(400, 500):\n            raise web.HTTPBadRequest(\n                reason='{} table snapshot could not be restored. {}'.format(table, json.loads(ex.error)['message']))\n        else:\n            raise web.HTTPInternalServerError(\n                reason='{} table snapshot could not be restored. {}'.format(table, json.loads(ex.error)['message']))\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except Exception as ex:\n        raise web.HTTPInternalServerError(reason='{} table snapshot could not be restored. 
{}'.format(table, str(ex)))\n    else:\n        return web.json_response(retval[\"loaded\"])\n\n\n@has_permission(\"admin\")\nasync def delete_snapshot(request):\n    \"\"\"delete a snapshot\n\n    :Example:\n        curl -X DELETE http://localhost:8081/fledge/snapshot/category/1554202741\n        curl -X DELETE http://localhost:8081/fledge/snapshot/schedule/1554202742\n        \n        When auth is mandatory:\n        curl -X DELETE http://localhost:8081/fledge/snapshot/category/1554202741 -H \"authorization: <token>\"\n        curl -X DELETE http://localhost:8081/fledge/snapshot/schedule/1554202742 -H \"authorization: <token>\"\n    \"\"\"\n    try:\n        r_path = request.path.split('/fledge/snapshot/')\n        table = _tables[r_path[1].split('/')[0]]\n\n        snapshot_id = request.match_info.get('id', None)\n        try:\n            snapshot_id = int(snapshot_id)\n        except:\n            raise ValueError('Invalid snapshot id: {}'.format(snapshot_id))\n\n        _storage = get_storage_async()  # from fledge.services.core.connect\n        retval = await _storage.delete_snapshot(table, snapshot_id)\n    except StorageServerError as ex:\n        if int(ex.code) in range(400, 500):\n            raise web.HTTPBadRequest(\n                reason='{} table snapshot could not be deleted. {}'.format(table, json.loads(ex.error)['message']))\n        else:\n            raise web.HTTPInternalServerError(\n                reason='{} table snapshot could not be deleted. {}'.format(table, json.loads(ex.error)['message']))\n    except ValueError as ex:\n        raise web.HTTPBadRequest(reason=str(ex))\n    except Exception as ex:\n        raise web.HTTPInternalServerError(reason='{} table snapshot could not be deleted. {}'.format(table, str(ex)))\n    else:\n        return web.json_response(retval[\"deleted\"])\n"
  },
  {
    "path": "python/fledge/services/core/api/south.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom functools import lru_cache\n\nfrom aiohttp import web\n\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core.service_registry.exceptions import DoesNotExist\nfrom fledge.services.core import connect\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.plugin_discovery import PluginDiscovery\n\n__author__ = \"Praveen Garg\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    -------------------------------------------------------------------------------\n    | GET                 | /fledge/south                                         |\n    -------------------------------------------------------------------------------\n\"\"\"\n\n\nasync def _get_schedule_status(storage_client, svc_name):\n    payload = PayloadBuilder().SELECT(\"enabled\").WHERE(['schedule_name', '=', svc_name]).payload()\n    result = await storage_client.query_tbl_with_payload('schedules', payload)\n    return True if result['rows'][0]['enabled'] == 't' else False\n\n\n@lru_cache(maxsize=1024)\ndef _get_installed_plugins():\n    return PluginDiscovery.get_plugins_installed(\"south\", False)\n\n\nasync def _services_with_assets(storage_client, cf_mgr, south_services):\n    sr_list = list()\n    try:\n        try:\n            services_from_registry = ServiceRegistry.get(s_type=\"Southbound\")\n        except DoesNotExist:\n            services_from_registry = []\n\n        def is_svc_in_service_registry(name):\n            return next((svc for svc in services_from_registry if svc._name == name), None)\n\n        installed_plugins = _get_installed_plugins()\n\n        for 
s_record in services_from_registry:\n            plugin, assets = await _get_tracked_plugin_assets_and_readings(storage_client, cf_mgr, s_record._name)\n            plugin_version = ''\n            for p in installed_plugins:\n                if p[\"name\"] == plugin:\n                    plugin_version = p[\"version\"]\n                    break\n\n            # Service running on another machine have no scheduler entry\n            sched_enable = 'unknown'\n            try:\n                sched_enable = await _get_schedule_status(storage_client, s_record._name)\n            except:\n                pass\n            service_data = {\n                'name': s_record._name,\n                'address': s_record._address,\n                'management_port': s_record._management_port,\n                'service_port': s_record._port,\n                'protocol': s_record._protocol,\n                'status': ServiceRecord.Status(int(s_record._status)).name.lower(),\n                'assets': assets,\n                'plugin': {'name': plugin, 'version': plugin_version},\n                'schedule_enabled': sched_enable\n            }\n            # Add the 'debug' key only if it's non-empty\n            if s_record._debug:\n                service_data['debug'] = s_record._debug\n            sr_list.append(service_data)\n        for s_name in south_services:\n            south_svc = is_svc_in_service_registry(s_name)\n            if not south_svc:\n                plugin, assets = await _get_tracked_plugin_assets_and_readings(storage_client, cf_mgr, s_name)\n                plugin_version = ''\n                for p in installed_plugins:\n                    if p[\"name\"] == plugin:\n                        plugin_version = p[\"version\"]\n                        break\n                # Handle schedule status when there is no schedule entry matching a South child category name\n                sch_status = 'unknown'\n                try:\n                    
sch_status = await _get_schedule_status(storage_client, s_name)\n                except:\n                    pass\n                sr_list.append(\n                    {\n                        'name': s_name,\n                        'address': '',\n                        'management_port': '',\n                        'service_port': '',\n                        'protocol': '',\n                        'status': '',\n                        'assets': assets,\n                        'plugin': {'name': plugin, 'version': plugin_version},\n                        'schedule_enabled': sch_status\n                    })\n    except:\n        raise\n    else:\n        return sr_list\n\n\nasync def _get_tracked_plugin_assets_and_readings(storage_client, cf_mgr, svc_name):\n    asset_json = []\n    plugin_value = await cf_mgr.get_category_item(svc_name, 'plugin')\n    plugin = plugin_value['value'] if plugin_value is not None else ''\n    payload = PayloadBuilder().SELECT(\"asset\", \"plugin\").WHERE(['service', '=', svc_name]).AND_WHERE(\n        ['event', '=', 'Ingest']).AND_WHERE(['plugin', '=', plugin]).AND_WHERE(['deprecated_ts', 'isnull']).payload()\n    try:\n        result = await storage_client.query_tbl_with_payload('asset_tracker', payload)\n        # TODO: FOGL-2549\n        # old asset track entry still appears with combination of service name + plugin name + event name if exists\n        asset_records = result['rows']\n        assets = [ar[\"asset\"].upper() for ar in asset_records]\n        if len(assets):\n            def map_original_asset_name(asset_stats_key):\n                # asset names are recorded in uppercase as keys in the statistics table\n                for ar in asset_records:\n                    if ar[\"asset\"].upper() == asset_stats_key:\n                        return ar[\"asset\"]\n                return None\n\n            payload = PayloadBuilder().SELECT([\"key\", \"value\"]).WHERE([\"key\", \"in\", assets]).payload()\n
            results = await storage_client.query_tbl_with_payload(\"statistics\", payload)\n            for _r in results['rows']:\n                asset_json.append({\"count\": _r['value'], \"asset\": map_original_asset_name(_r['key'])})\n    except:\n        raise\n    else:\n        return plugin, asset_json\n\n\nasync def get_south_services(request):\n    \"\"\"\n    Args:\n        request:\n\n    Returns:\n            list of all south services with tracked assets and readings count\n\n    :Example:\n            curl -X GET http://localhost:8081/fledge/south\n    \"\"\"\n    if 'cached' in request.query and request.query['cached'].lower() == 'false':\n        _get_installed_plugins.cache_clear()\n\n    storage_client = connect.get_storage_async()\n    cf_mgr = ConfigurationManager(storage_client)\n    try:\n        south_cat = await cf_mgr.get_category_child(\"South\")\n        south_categories = [nc[\"key\"] for nc in south_cat]\n    except:\n        return web.json_response({'services': []})\n\n    response = await _services_with_assets(storage_client, cf_mgr, south_categories)\n    return web.json_response({'services': response})\n"
  },
  {
    "path": "python/fledge/services/core/api/statistics.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\nimport datetime\nfrom aiohttp import web\n\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.services.core import connect\nfrom fledge.services.core.scheduler.scheduler import Scheduler\nfrom fledge.common.logger import FLCoreLogger\n\n__author__ = \"Amarendra K. Sinha, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    ------------------------------------------------------------------------------\n    | GET             | /fledge/statistics                                       |\n    | GET             | /fledge/statistics/history                               |\n    | GET             | /fledge/statistics/rate                                  |\n    ------------------------------------------------------------------------------\n\"\"\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\n#################################\n#  Statistics\n#################################\n\n\nasync def get_statistics(request):\n    \"\"\"\n    Args:\n        request:\n\n    Returns:\n            a general set of statistics\n\n    :Example:\n            curl -X GET http://localhost:8081/fledge/statistics\n    \"\"\"\n    payload = PayloadBuilder().SELECT((\"key\", \"description\", \"value\")).ORDER_BY([\"key\"]).payload()\n    storage_client = connect.get_storage_async()\n    result = await storage_client.query_tbl_with_payload('statistics', payload)\n    return web.json_response(result['rows'])\n\n\nasync def get_statistics_history(request):\n    \"\"\"\n    Args:\n        request:\n\n    Returns:\n            a list of general set of statistics\n\n    :Example:\n            curl -X GET http://localhost:8081/fledge/statistics/history?limit=1\n            curl -X GET http://localhost:8081/fledge/statistics/history?key=READINGS\n         
   curl -X GET http://localhost:8081/fledge/statistics/history?key=READINGS,PURGED,UNSENT&minutes=60\n    \"\"\"\n    storage_client = connect.get_storage_async()\n    # To find the interval in secs from stats collector schedule\n    scheduler_payload = PayloadBuilder().SELECT(\"schedule_interval\").WHERE(\n        ['process_name', '=', 'stats collector']).payload()\n    result = await storage_client.query_tbl_with_payload('schedules', scheduler_payload)\n    if len(result['rows']) > 0:\n        scheduler = Scheduler()\n        interval_days, interval_dt = scheduler.extract_day_time_from_interval(result['rows'][0]['schedule_interval'])\n        interval = datetime.timedelta(days=interval_days, hours=interval_dt.hour, minutes=interval_dt.minute, seconds=interval_dt.second)\n        interval_in_secs = interval.total_seconds()\n    else:\n        raise web.HTTPNotFound(reason=\"No stats collector schedule found\")\n    stats_history_chain_payload = PayloadBuilder().SELECT((\"history_ts\", \"key\", \"value\"))\\\n        .ALIAS(\"return\", (\"history_ts\", 'history_ts')).FORMAT(\"return\", (\"history_ts\", \"YYYY-MM-DD HH24:MI:SS.MS\"))\\\n        .ORDER_BY(['history_ts', 'desc']).WHERE(['1', '=', 1]).chain_payload()\n\n    if 'key' in request.query:\n        key = request.query['key']\n        split_list = key.split(',')\n        stats_history_chain_payload = PayloadBuilder(stats_history_chain_payload).AND_WHERE(\n            ['key', '=', split_list[0]]).chain_payload()\n        del split_list[0]\n        for i in split_list:\n            stats_history_chain_payload = PayloadBuilder(stats_history_chain_payload).OR_WHERE(\n                ['key', '=', i]).chain_payload()\n    try:\n        # get time based graphs for statistics history\n        val = 0\n        if 'minutes' in request.query and request.query['minutes'] != '':\n            val = int(request.query['minutes']) * 60\n        elif 'hours' in request.query and request.query['hours'] != '':\n            val = 
int(request.query['hours']) * 60 * 60\n        elif 'days' in request.query and request.query['days'] != '':\n            val = int(request.query['days']) * 24 * 60 * 60\n\n        if val < 0:\n            raise ValueError\n        elif val > 0:\n            stats_history_chain_payload = PayloadBuilder(stats_history_chain_payload).AND_WHERE(['history_ts', 'newer', val]).chain_payload()\n    except ValueError:\n        raise web.HTTPBadRequest(reason=\"Time unit must be a positive integer\")\n\n    if 'limit' in request.query and request.query['limit'] != '':\n        try:\n            limit = int(request.query['limit'])\n            if limit < 0:\n                raise ValueError\n            if 'key' in request.query:\n                limit_count = limit\n            else:\n                # FIXME: Hack straight away multiply the LIMIT by the group count\n                # i.e. if there are 8 records per distinct (stats_key), and limit supplied is 2\n                # then internally, actual LIMIT = 2*8\n                # TODO: FOGL-663 Need support for \"subquery\" from storage service\n                # Remove python side handling date_trunc and use\n                # SELECT date_trunc('second', history_ts::timestamptz)::varchar as history_ts\n\n                count_payload = PayloadBuilder().AGGREGATE([\"count\", \"*\"]).payload()\n                result = await storage_client.query_tbl_with_payload(\"statistics\", count_payload)\n                key_count = result['rows'][0]['count_*']\n                limit_count = limit * key_count\n            stats_history_chain_payload = PayloadBuilder(stats_history_chain_payload).LIMIT(limit_count).chain_payload()\n        except ValueError:\n            raise web.HTTPBadRequest(reason=\"Limit must be a positive integer\")\n\n    stats_history_payload = PayloadBuilder(stats_history_chain_payload).payload()\n    result_from_storage = await storage_client.query_tbl_with_payload('statistics_history', 
stats_history_payload)\n    group_dict = []\n    for row in result_from_storage['rows']:\n        new_dict = {'history_ts': row['history_ts'], row['key']: row['value']}\n        group_dict.append(new_dict)\n\n    results = []\n    temp_dict = {}\n    previous_ts = None\n    for row in group_dict:\n        # first time or when history_ts changes\n        if previous_ts is None or previous_ts != row['history_ts']:\n            if previous_ts is not None:\n                results.append(temp_dict)\n            previous_ts = row['history_ts']\n            temp_dict = {'history_ts': previous_ts}\n\n        # Append statistics key to temp dict\n        for key, value in row.items():\n            temp_dict.update({key: value})\n\n    # Append the last set of records which do not get appended above\n    results.append(temp_dict)\n    return web.json_response({\"interval\": interval_in_secs, 'statistics': results})\n\n\nasync def get_statistics_rate(request: web.Request) -> web.Response:\n    \"\"\"To retrieve the statistics rates and will be calculated by formula:\n        (sum(value) / ((60 * period) / stats_collector_interval))\n        For example:\n            If stats_collector_interval set to 15 seconds then\n            a) For a 1 minute period should take 4 statistics history values, sum those and then divide by period\n            b) For a 5 minute period should take 20 statistics history values, sum those and then divide by period\n      Args:\n          request:\n      Returns:\n          A JSON document with the rates for each of the statistics\n      :Example:\n          curl -sX GET \"http://localhost:8081/fledge/statistics/rate?periods=5&statistics=READINGS\"\n          curl -sX GET \"http://localhost:8081/fledge/statistics/rate?periods=1,5,15&statistics=SINUSOID,FASTSINUSOID\"\n      \"\"\"\n    params = request.query\n    if 'periods' not in params:\n        raise web.HTTPBadRequest(reason=\"periods request parameter is required\")\n    if 'statistics' not 
in params:\n        raise web.HTTPBadRequest(reason=\"statistics request parameter is required\")\n\n    if params['periods'] == '':\n        raise web.HTTPBadRequest(reason=\"periods cannot be empty. A comma separated list of values is required \"\n                                        \"in case of multiple periods of time\")\n    if params['statistics'] == '':\n        raise web.HTTPBadRequest(reason=\"statistics cannot be empty. A comma separated list of statistics values is \"\n                                        \"required in case of multiple assets\")\n\n    periods = params['periods']\n    period_split_list = list(filter(None, periods.split(',')))\n    if not all(p.isdigit() for p in period_split_list):\n        raise web.HTTPBadRequest(reason=\"periods should contain numbers\")\n    # 1 week = 10080 mins\n    if any(int(p) > 10080 for p in period_split_list):\n        raise web.HTTPBadRequest(reason=\"The maximum allowed value for a period is 10080 minutes\")\n\n    stats = params['statistics']\n    stat_split_list = list(filter(None, stats.split(',')))\n    storage_client = connect.get_storage_async()\n    # To find the interval in secs from stats collector schedule\n    scheduler_payload = PayloadBuilder().SELECT(\"schedule_interval\").WHERE(\n        ['process_name', '=', 'stats collector']).payload()\n    result = await storage_client.query_tbl_with_payload('schedules', scheduler_payload)\n    if len(result['rows']) > 0:\n        scheduler = Scheduler()\n        interval_days, interval_dt = scheduler.extract_day_time_from_interval(result['rows'][0]['schedule_interval'])\n        interval_in_secs = datetime.timedelta(days=interval_days, hours=interval_dt.hour, minutes=interval_dt.minute,\n                                              seconds=interval_dt.second).total_seconds()\n    else:\n        raise web.HTTPNotFound(reason=\"No stats collector schedule found\")\n    resp = []\n
    for x, y in [(x, y) for x in period_split_list for y in stat_split_list]:\n        # Get value column as per given key along with history_ts column order by\n        _payload = PayloadBuilder().SELECT(\"value\").WHERE(['key', '=', y]).ORDER_BY([\"history_ts\", \"desc\"]\n                                                                                    ).chain_payload()\n        # LIMIT set to ((60 * period) / stats_collector_interval)\n        calculated_formula = int((60 * int(x) / int(interval_in_secs)))\n        stats_rate_payload = PayloadBuilder(_payload).LIMIT(calculated_formula).payload()\n        result = await storage_client.query_tbl_with_payload(\"statistics_history\", stats_rate_payload)\n        temp_dict = {y: {x: 0}}\n        if result['rows']:\n            row_sum = sum(r['value'] for r in result['rows'])\n            temp_dict = {y: {x: row_sum / int(x)}}\n        resp.append(temp_dict)\n    rate_dict = {}\n    for d in resp:\n        for k, v in d.items():\n            rate_dict[k] = {**rate_dict[k], **v} if k in rate_dict else v\n    return web.json_response({\"rates\": rate_dict})\n"
  },
  {
    "path": "python/fledge/services/core/api/support.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport os\nimport subprocess\nimport json\nimport datetime\n\nimport urllib.parse\nfrom pathlib import Path\nfrom aiohttp import web\n\nfrom fledge.common import utils\nfrom fledge.common.common import _FLEDGE_ROOT, _FLEDGE_DATA\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.web.middleware import has_permission\nfrom fledge.services.core.support import SupportBuilder\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n_SYSLOG_FILE = '/var/log/messages' if utils.is_redhat_based() else '/var/log/syslog'\n_SCRIPTS_DIR = \"{}/scripts\".format(_FLEDGE_ROOT)\n__DEFAULT_LIMIT = 20\n__DEFAULT_OFFSET = 0\n__DEFAULT_LOG_SOURCE = 'Fledge'\n\n# Debug and above\n__GET_SYSLOG_CMD_TEMPLATE = \"grep -a -E '({})\\[' {} | head -n {} | tail -n {}\"\n__GET_SYSLOG_TOTAL_MATCHED_LINES = \"grep -a -c -E '({})\\[' {}\"\n__GET_SYSLOG_TEMPLATE_WITH_NON_TOTALS = \"grep -a -E '({})\\[' {} | head -n -{} | tail -n {}\"\n# Info and above\n__GET_SYSLOG_CMD_WITH_INFO_TEMPLATE = \"grep -a -E '({})\\[.*].* (INFO|WARNING|ERROR|FATAL)' {} | head -n {} | tail -n {}\"\n__GET_SYSLOG_INFO_MATCHED_LINES = \"grep -a -c -E '({})\\[.*].* (INFO|WARNING|ERROR|FATAL)' {}\"\n__GET_SYSLOG_INFO_TEMPLATE_WITH_NON_TOTALS = \"grep -a -E '({})\\[.*].* (INFO|WARNING|ERROR|FATAL)' {} | head -n -{} | tail -n {}\"\n# Error and above\n__GET_SYSLOG_CMD_WITH_ERROR_TEMPLATE = \"grep -a -E '({})\\[.*].* (ERROR|FATAL)' {} | head -n {} | tail -n {}\"\n__GET_SYSLOG_ERROR_MATCHED_LINES = \"grep -a -c -E '({})\\[.*].* (ERROR|FATAL)' {}\"\n__GET_SYSLOG_ERROR_TEMPLATE_WITH_NON_TOTALS = \"grep -a -E '({})\\[.*].* (ERROR|FATAL)' {} | head -n -{} | tail -n {}\"\n# Warning and above\n__GET_SYSLOG_CMD_WITH_WARNING_TEMPLATE = \"grep -a -E '({})\\[.*].* 
(WARNING|ERROR|FATAL)' {} | head -n {} | tail -n {}\"\n__GET_SYSLOG_WARNING_MATCHED_LINES = \"grep -a -c -E '({})\\[.*].* (WARNING|ERROR|FATAL)' {}\"\n__GET_SYSLOG_WARNING_TEMPLATE_WITH_NON_TOTALS = \"grep -a -E '({})\\[.*].* (WARNING|ERROR|FATAL)' {} | head -n -{} | tail -n {}\"\n\n_help = \"\"\"\n    ------------------------------------------------------------------------------\n    | GET POST        | /fledge/support                                          |\n    | GET             | /fledge/support/{bundle}                                 |\n    | GET             | /fledge/syslog                                           |\n    ------------------------------------------------------------------------------\n\"\"\"\n\n\nasync def fetch_support_bundle(request):\n    \"\"\" get list of available support bundles\n\n    :Example:\n        curl -X GET http://localhost:8081/fledge/support\n    \"\"\"\n    # Get support directory path\n    support_dir = _get_support_dir()\n    valid_extension = '.tar.gz'\n    found_files = []\n    for root, dirs, files in os.walk(support_dir):\n        # Accumulate matches across directories instead of overwriting on each iteration\n        found_files.extend(f for f in files if f.endswith(valid_extension))\n\n    return web.json_response({\"bundles\": found_files})\n\n\n@has_permission(\"admin\")\nasync def fetch_support_bundle_item(request):\n    \"\"\" check existence of a support bundle by name\n\n    :Example:\n        curl -O http://localhost:8081/fledge/support/support-180301-13-35-23.tar.gz\n\n        curl -X GET http://localhost:8081/fledge/support/support-180311-18-03-36.tar.gz\n        -H \"Accept-Encoding: gzip\" --write-out \"size_download=%{size_download}\\n\" --compressed\n    \"\"\"\n    bundle_name = request.match_info.get('bundle', None)\n\n    if not str(bundle_name).endswith('.tar.gz'):\n        return web.HTTPBadRequest(reason=\"Bundle file extension is invalid\")\n\n    if not os.path.isdir(_get_support_dir()):\n        raise web.HTTPNotFound(reason=\"Support bundle directory does not exist\")\n\n    for 
root, dirs, files in os.walk(_get_support_dir()):\n        if str(bundle_name) in files:\n            break\n    else:\n        # for/else: raise only when no directory in the walk contained the bundle\n        raise web.HTTPNotFound(reason='{} not found'.format(bundle_name))\n\n    p = Path(_get_support_dir()) / str(bundle_name)\n    return web.FileResponse(path=p)\n\n\n@has_permission(\"admin\")\nasync def create_support_bundle(request):\n    \"\"\" Create a support bundle by name\n\n    :Example:\n        curl -X POST http://localhost:8081/fledge/support\n    \"\"\"\n    support_dir = _get_support_dir()\n    try:\n        support_bundle_config = await get_support_bundle_config()\n        bundle_name = await SupportBuilder(support_dir, support_bundle_config).build()\n    except Exception as ex:\n        msg = 'Failed to create support bundle.'\n        _logger.error(ex, msg)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n    return web.json_response({\"bundle created\": bundle_name})\n\n\nasync def get_syslog_entries(request):\n    \"\"\" Returns a list of syslog trail entries sorted with most recent first and total count\n        (including the criteria search if applied)\n\n    :Example:\n        curl -X GET http://localhost:8081/fledge/syslog\n        curl -X GET \"http://localhost:8081/fledge/syslog?limit=5\"\n        curl -X GET \"http://localhost:8081/fledge/syslog?offset=5\"\n        curl -X GET \"http://localhost:8081/fledge/syslog?source=storage\"\n        curl -X GET \"http://localhost:8081/fledge/syslog?source=<svc_name>|<task_name>\"\n        curl -X GET \"http://localhost:8081/fledge/syslog?level=error\"\n        curl -X GET \"http://localhost:8081/fledge/syslog?limit=5&source=storage\"\n        curl -X GET \"http://localhost:8081/fledge/syslog?limit=5&offset=5&source=storage\"\n        curl -sX GET \"http://localhost:8081/fledge/syslog?nontotals=true\"\n        curl -sX GET \"http://localhost:8081/fledge/syslog?nontotals=true&keyword=Storage%20error\"\n        curl -sX GET 
\"http://localhost:8081/fledge/syslog?nontotals=true&source=<svc_name>|<task_name>\"\n        curl -sX GET \"http://localhost:8081/fledge/syslog?nontotals=true&limit=5\"\n        curl -sX GET \"http://localhost:8081/fledge/syslog?nontotals=true&limit=100&offset=50\"\n        curl -sX GET \"http://localhost:8081/fledge/syslog?nontotals=true&limit=100&offset=50&keyword=fledge.services\"\n        curl -sX GET \"http://localhost:8081/fledge/syslog?nontotals=true&source=<svc_name>|<task_name>&limit=10&offset=50\"\n        curl -sX GET \"http://localhost:8081/fledge/syslog?nontotals=true&source=<svc_name>|<task_name>\"\n    \"\"\"\n    try:\n        # limit\n        limit = int(request.query['limit']) if 'limit' in request.query and request.query[\n            'limit'] != '' else __DEFAULT_LIMIT\n        if limit < 0:\n            raise ValueError('Limit must be a positive integer.')\n\n        # offset\n        offset = int(request.query['offset']) if 'offset' in request.query and request.query[\n            'offset'] != '' else __DEFAULT_OFFSET\n        if offset < 0:\n            raise ValueError('Offset must be a positive integer OR Zero.')\n\n        # source\n        source = urllib.parse.unquote(request.query['source']) if 'source' in request.query and request.query[\n            'source'] != '' else __DEFAULT_LOG_SOURCE\n        if source.lower() in ['fledge', 'storage']:\n            source = source.lower()\n            valid_source = {'fledge': \"Fledge.*\", 'storage': 'Fledge Storage'}\n        else:\n            valid_source = {source: \"Fledge {}\".format(source)}\n\n        # Get filtered lines\n        template = __GET_SYSLOG_CMD_TEMPLATE\n        lines = __GET_SYSLOG_TOTAL_MATCHED_LINES\n\n        levels = \"\"\n        level = \"debug\"\n        if 'level' in request.query and request.query['level'] != '':\n            level = request.query['level'].lower()\n            supported_level = ['info', 'warning', 'error', 'debug']\n            if level not in 
supported_level:\n                raise ValueError('{} is an invalid level. Supported levels are {}'.format(level, supported_level))\n            if level == 'info':\n                template = __GET_SYSLOG_CMD_WITH_INFO_TEMPLATE\n                lines = __GET_SYSLOG_INFO_MATCHED_LINES\n                levels = \"(INFO|WARNING|ERROR|FATAL)\"\n            elif level == 'warning':\n                template = __GET_SYSLOG_CMD_WITH_WARNING_TEMPLATE\n                lines = __GET_SYSLOG_WARNING_MATCHED_LINES\n                levels = \"(WARNING|ERROR|FATAL)\"\n            elif level == 'error':\n                template = __GET_SYSLOG_CMD_WITH_ERROR_TEMPLATE\n                lines = __GET_SYSLOG_ERROR_MATCHED_LINES\n                levels = \"(ERROR|FATAL)\"\n        # keyword\n        keyword = ''\n        if 'keyword' in request.query and request.query['keyword'] != '':\n            keyword = request.query['keyword']\n        response = {}\n        # nontotals\n        non_totals = request.query['nontotals'].lower() if 'nontotals' in request.query and request.query[\n            'nontotals'] != '' else \"false\"\n        if non_totals not in (\"true\", \"false\"):\n            raise ValueError('nontotals must either be true or false.')\n        if non_totals != \"true\":\n            # Get total lines\n            cmd = lines.format(valid_source[source], _SYSLOG_FILE)\n            t = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE).stdout.readlines()\n            total_lines = int(t[0].decode())\n            response['count'] = total_lines\n            cmd = template.format(valid_source[source], _SYSLOG_FILE, total_lines - offset, limit)\n        else:\n            script_path = os.path.join(_SCRIPTS_DIR, \"common\", \"get_logs.sh\")\n            pattern = '({})\\[.*\\].*{}:'.format(valid_source[source], levels)\n            cmd = '{} -offset {} -limit {} 
-pattern \\'{}\\' -logfile {} -source \\'{}\\' -level {}'.format(\n                script_path, offset, limit, pattern, _SYSLOG_FILE, source, level)\n            if len(keyword):\n                cmd += ' -keyword \\'{}\\''.format(keyword)\n            _logger.debug('********* non_totals={}: new shell command: {}'.format(non_totals, cmd))\n\n        t1 = datetime.datetime.now()\n        rv = subprocess.Popen([cmd], shell=True, stdout=subprocess.PIPE).stdout.readlines()\n        t2 = datetime.datetime.now()\n        rv_str = [b.decode() for b in rv]  # Since \"rv\" contains return value in bytes, convert it to string\n        _logger.debug('********* Time taken for grep/tail/head subprocess: {} msec'.format((t2 - t1).total_seconds()*1000))\n        response['logs'] = rv_str\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(body=json.dumps({\"message\": msg}), reason=msg)\n    except (OSError, Exception) as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to get syslog entries.\")\n        raise web.HTTPInternalServerError(body=json.dumps({\"message\": msg}), reason=msg)\n\n    return web.json_response(response)\n\n\ndef _get_support_dir():\n    if _FLEDGE_DATA:\n        support_dir = os.path.expanduser(_FLEDGE_DATA + '/support')\n    else:\n        support_dir = os.path.expanduser(_FLEDGE_ROOT + '/data/support')\n\n    return support_dir\n\nasync def get_support_bundle_config():\n    \"\"\" Get the support bundle configuration from the configuration manager \"\"\"\n    from fledge.common.configuration_manager import ConfigurationManager\n    from fledge.services.core import connect\n    storage_client = connect.get_storage_async()\n    cfg_manager = ConfigurationManager(storage_client)\n    support_bundle_config = await cfg_manager.get_category_all_items('SUPPORT_BUNDLE')\n    return support_bundle_config\n"
  },
  {
    "path": "python/fledge/services/core/api/task.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport datetime\nimport json\nimport uuid\nfrom aiohttp import web\n\nfrom fledge.common import utils\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.exceptions import StorageServerError\n\nfrom fledge.services.core import server\nfrom fledge.services.core import connect\nfrom fledge.services.core.scheduler.entities import Schedule, TimedSchedule, IntervalSchedule, ManualSchedule\nfrom fledge.services.core.api import utils as apiutils\nfrom fledge.common.common import _FLEDGE_ROOT\nfrom fledge.services.core.api.plugins import common\n\n__author__ = \"Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    -------------------------------------------------------------------------------\n    | POST                 | /fledge/scheduled/task                              |\n    | DELETE               | /fledge/scheduled/task/{task_name}                  |\n    -------------------------------------------------------------------------------\n\"\"\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nasync def add_task(request):\n    \"\"\" Create a new task to run a specific plugin\n\n    :Example:\n     curl -X POST http://localhost:8081/fledge/scheduled/task \\\n     -d @- << EOF\n{\n    \"name\": \"North Readings to PI-Web-API\",\n    \"plugin\": \"OMF\",\n    \"type\": \"north\",\n    \"schedule_type\": 3,\n    \"schedule_repeat\": 30,\n    \"schedule_enabled\": true\n}\nEOF\n\n     curl -sX POST http://localhost:8081/fledge/scheduled/task \\\n     -d @- << EOF\n{\n    \"name\":\"North Readings to PI\",\n    \"plugin\":\"OMF\",\n    \"type\":\"north\",\n    
\"schedule_repeat\":30,\n    \"schedule_type\":\"3\",\n    \"schedule_enabled\":false,\n    \"config\":{\n        \"PIServerEndpoint\":{\"value\":\"Connector Relay\"},\n        \"producerToken\":{\"value\":\"XXXX\"}\n        }\n}\nEOF\n    \"\"\"\n\n    try:\n        data = await request.json()\n        if not isinstance(data, dict):\n            raise ValueError('Data payload must be a valid JSON')\n\n        name = data.get('name', None)\n        plugin = data.get('plugin', None)\n        task_type = data.get('type', None)\n\n        schedule_type = data.get('schedule_type', None)\n        schedule_day = data.get('schedule_day', None)\n        schedule_time = data.get('schedule_time', None)\n        schedule_repeat = data.get('schedule_repeat', None)\n        enabled = data.get('schedule_enabled', None)\n        config = data.get('config', None)\n\n        if name is None:\n            raise web.HTTPBadRequest(reason='Missing name property in payload.')\n        if plugin is None:\n            raise web.HTTPBadRequest(reason='Missing plugin property in payload.')\n        if task_type is None:\n            raise web.HTTPBadRequest(reason='Missing type property in payload.')\n        if utils.check_reserved(name) is False:\n            raise web.HTTPBadRequest(reason='Invalid name property in payload.')\n        if utils.check_fledge_reserved(name) is False:\n            raise web.HTTPBadRequest(reason=\"'{}' is reserved for Fledge and can not be used as task name!\".format(name))\n        if utils.check_reserved(plugin) is False:\n            raise web.HTTPBadRequest(reason='Invalid plugin property in payload.')\n        if task_type not in ['north']:\n            raise web.HTTPBadRequest(reason='Only north type is supported.')\n\n        if schedule_type is None:\n            raise web.HTTPBadRequest(reason='schedule_type is mandatory')\n        if not isinstance(schedule_type, int) and not schedule_type.isdigit():\n            raise 
web.HTTPBadRequest(reason='Error in schedule_type: {}'.format(schedule_type))\n        if int(schedule_type) not in list(Schedule.Type):\n            raise web.HTTPBadRequest(reason='schedule_type error: {}'.format(schedule_type))\n        if int(schedule_type) == Schedule.Type.STARTUP:\n            raise web.HTTPBadRequest(reason='schedule_type cannot be STARTUP: {}'.format(schedule_type))\n\n        schedule_type = int(schedule_type)\n\n        if schedule_day is not None:\n            if isinstance(schedule_day, float) or (isinstance(schedule_day, str) and (schedule_day.strip() != \"\" and not schedule_day.isdigit())):\n                raise web.HTTPBadRequest(reason='Error in schedule_day: {}'.format(schedule_day))\n            # Convert after validation so the later 1-7 range check compares integers, not strings\n            schedule_day = int(schedule_day)\n\n        if schedule_time is not None and (not isinstance(schedule_time, int) and not schedule_time.isdigit()):\n            raise web.HTTPBadRequest(reason='Error in schedule_time: {}'.format(schedule_time))\n        else:\n            schedule_time = int(schedule_time) if schedule_time is not None else None\n\n        if schedule_repeat is not None and (not isinstance(schedule_repeat, int) and not schedule_repeat.isdigit()):\n            raise web.HTTPBadRequest(reason='Error in schedule_repeat: {}'.format(schedule_repeat))\n        else:\n            schedule_repeat = int(schedule_repeat) if schedule_repeat is not None else None\n\n        if schedule_type == Schedule.Type.TIMED:\n            if not schedule_time:\n                raise web.HTTPBadRequest(reason='schedule_time cannot be empty/None for TIMED schedule.')\n            if schedule_day is not None and (schedule_day < 1 or schedule_day > 7):\n                raise web.HTTPBadRequest(reason='schedule_day {} must either be None or an integer, 1 (Monday) '\n                                                'to 7 (Sunday).'.format(schedule_day))\n            if schedule_time < 0 or 
schedule_time > 86399:\n                raise web.HTTPBadRequest(reason='schedule_time {} must be an integer and in range 0-86399.'.format(schedule_time))\n\n        if schedule_type == Schedule.Type.INTERVAL:\n            if schedule_repeat is None:\n                raise web.HTTPBadRequest(reason='schedule_repeat {} is required for INTERVAL schedule_type.'.format(schedule_repeat))\n            elif not isinstance(schedule_repeat, int):\n                raise web.HTTPBadRequest(reason='schedule_repeat {} must be an integer.'.format(schedule_repeat))\n\n        if enabled is not None:\n            if enabled not in ['true', 'false', True, False]:\n                raise web.HTTPBadRequest(reason='Only \"true\", \"false\", true, false are allowed for value of enabled.')\n        is_enabled = True if ((type(enabled) is str and enabled.lower() in ['true']) or (\n            (type(enabled) is bool and enabled is True))) else False\n\n        dryrun = not is_enabled\n\n        # Check if a valid plugin has been provided\n        try:\n            # \"plugin_module_path\" is fixed by design. 
It is MANDATORY to keep the plugin in the identically named\n            # folder within the plugin_module_path.\n            # If multiple plugins with the same name are found, the Python plugin import is tried first\n            plugin_module_path = \"{}/python/fledge/plugins/{}/{}\".format(_FLEDGE_ROOT, task_type, plugin)\n            plugin_info = common.load_and_fetch_python_plugin_info(plugin_module_path, plugin, task_type)\n            plugin_config = plugin_info['config']\n        except FileNotFoundError as ex:\n            # Checking for C-type plugins\n            plugin_info = apiutils.get_plugin_info(plugin, dir=task_type)\n            if not plugin_info:\n                msg = \"Plugin {} does not appear to be a valid plugin.\".format(plugin)\n                return web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n            valid_c_plugin_info_keys = ['name', 'version', 'type', 'interface', 'flag', 'config']\n            for k in valid_c_plugin_info_keys:\n                if k not in list(plugin_info.keys()):\n                    msg = \"Plugin info does not appear to be valid for {} plugin. 
'{}' item not found.\".format(\n                        plugin, k)\n                    return web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n            if plugin_info['type'] != task_type:\n                msg = \"Plugin of {} type is not supported.\".format(plugin_info['type'])\n                return web.HTTPBadRequest(reason=msg)\n            plugin_config = plugin_info['config']\n            if not plugin_config:\n                plugin_module_path = \"{}/plugins/{}/{}\".format(_FLEDGE_ROOT, task_type, plugin)\n                raise web.HTTPNotFound(reason='Plugin \"{}\" import problem from path \"{}\"'.format(\n                    plugin, plugin_module_path))\n        except TypeError as ex:\n            raise web.HTTPBadRequest(reason=str(ex))\n        except Exception as ex:\n            msg = \"Failed to fetch plugin configuration.\"\n            _logger.error(ex, msg)\n            raise web.HTTPInternalServerError(reason=msg)\n\n        storage = connect.get_storage_async()\n        config_mgr = ConfigurationManager(storage)\n\n        # Abort the operation if there are already executed tasks\n        payload = PayloadBuilder() \\\n            .SELECT([\"id\", \"schedule_name\"]) \\\n            .WHERE(['schedule_name', '=', name]) \\\n            .LIMIT(1) \\\n            .payload()\n\n        result = await storage.query_tbl_with_payload('tasks', payload)\n\n        if result['count'] >= 1:            \n            msg = 'Unable to reuse name {0}, already used by a previous task.'.format(name)\n            _logger.warning(msg)\n            raise web.HTTPBadRequest(reason=msg)\n\n        # Check whether category name already exists\n        category_info = await config_mgr.get_category_all_items(category_name=name)\n        if category_info is not None:\n            raise web.HTTPBadRequest(reason=\"The '{}' category already exists\".format(name))\n\n        # Check that the schedule name is not already registered\n        count = 
await check_schedules(storage, name)\n        if count != 0:\n            raise web.HTTPBadRequest(reason='A north instance with this name already exists')\n\n        # Always run with C based sending process task\n        process_name = 'north_c'\n        script = '[\"tasks/north_c\"]'\n        # Check that the process name is not already registered\n        count = await check_scheduled_processes(storage, process_name)\n        if count == 0:  # Create the scheduled process entry for the new task\n            payload = PayloadBuilder().INSERT(name=process_name, script=script).payload()\n            try:\n                res = await storage.insert_into_tbl(\"scheduled_processes\", payload)\n            except StorageServerError as ex:\n                _logger.exception(\"Failed to create scheduled process due to {}\".format(ex.error))\n                raise web.HTTPInternalServerError(reason='Failed to create north instance.')\n            except Exception as ex:\n                _logger.error(ex, \"Failed to create scheduled process.\")\n                raise web.HTTPInternalServerError(reason='Failed to create north instance.')\n\n        # If successful then create a configuration entry from plugin configuration\n        try:\n            # Create a configuration category from the configuration defined in the plugin\n            category_desc = plugin_config['plugin']['description']\n            await config_mgr.create_category(category_name=name,\n                                             category_description=category_desc,\n                                             category_value=plugin_config,\n                                             keep_original_items=True)\n            # Create the parent category for all North tasks\n            await config_mgr.create_category(\"North\", {}, 'North tasks', True)\n            await config_mgr.create_child_category(\"North\", [name])\n\n            # If config is in POST data, then update the value for each 
config item\n            if config is not None:\n                if not isinstance(config, dict):\n                    raise ValueError('Config must be a JSON object')\n                for k, v in config.items():\n                    await config_mgr.set_category_item_value_entry(name, k, v['value'])\n        except Exception as ex:\n            await config_mgr.delete_category_and_children_recursively(name)\n            _logger.error(ex, \"Failed to create plugin configuration.\")\n            raise web.HTTPInternalServerError(reason='Failed to create plugin configuration. {}'.format(ex))\n\n        # If all successful then lastly add a schedule to run the new task at startup\n        try:\n            schedule = TimedSchedule() if schedule_type == Schedule.Type.TIMED else \\\n                       IntervalSchedule() if schedule_type == Schedule.Type.INTERVAL else \\\n                       ManualSchedule()\n            schedule.name = name\n            schedule.process_name = process_name\n            schedule.day = schedule_day\n            m, s = divmod(schedule_time if schedule_time is not None else 0, 60)\n            h, m = divmod(m, 60)\n            schedule.time = datetime.time().replace(hour=h, minute=m, second=s)\n            schedule.repeat = datetime.timedelta(seconds=schedule_repeat if schedule_repeat is not None else 0)\n            schedule.exclusive = True\n            schedule.enabled = False  # if \"enabled\" is supplied, it gets activated in save_schedule() via is_enabled flag\n\n            # Note: For Python based sending process dryrun option support is not available;\n            # Therefore the runtime configuration will appear only when enabled & task executed once\n            # Save schedule\n            await server.Server.scheduler.save_schedule(schedule, is_enabled, dryrun=dryrun)\n            schedule = await server.Server.scheduler.get_schedule_by_name(name)\n        except StorageServerError as ex:\n            await 
config_mgr.delete_category_and_children_recursively(name)\n            _logger.exception(\"Failed to create north instance due to {}\".format(ex.error))\n            raise web.HTTPInternalServerError(reason='Failed to create north instance.')\n        except Exception as ex:\n            await config_mgr.delete_category_and_children_recursively(name)\n            _logger.error(ex, \"Failed to create north instance.\")\n            raise web.HTTPInternalServerError(reason='Failed to create north instance.')\n\n    except ValueError as e:\n        raise web.HTTPBadRequest(reason=str(e))\n    else:\n        return web.json_response({'name': name, 'id': str(schedule.schedule_id)})\n\n\nasync def delete_task(request):\n    \"\"\" Delete a north plugin instance task\n\n        :Example:\n            curl -X DELETE http://localhost:8081/fledge/scheduled/task/<task name>\n    \"\"\"\n    north_instance = request.match_info.get('task_name', None)\n    try:\n        storage = connect.get_storage_async()\n\n        result = await get_schedule(storage, north_instance)\n        if result['count'] == 0:\n            return web.HTTPNotFound(reason='{} north instance does not exist.'.format(north_instance))\n\n        north_instance_schedule = result['rows'][0]\n        sch_id = uuid.UUID(north_instance_schedule['id'])\n        if north_instance_schedule['enabled'].lower() == 't':\n            # disable it\n            await server.Server.scheduler.disable_schedule(sch_id)\n        # delete it\n        await server.Server.scheduler.delete_schedule(sch_id)\n\n        # delete tasks\n        await delete_task_entry_with_schedule_id(storage, sch_id)\n\n        # delete all configuration for the north task instance name\n        config_mgr = ConfigurationManager(storage)\n        await config_mgr.delete_category_and_children_recursively(north_instance)\n\n        # delete statistics key, streams, plugin data\n        await delete_statistics_key(storage, north_instance)\n        await 
delete_streams(storage, north_instance)\n        await delete_plugin_data(storage, north_instance)\n        # update deprecated timestamp in asset_tracker\n        await update_deprecated_ts_in_asset_tracker(storage, north_instance)\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to delete {} north task.\".format(north_instance))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response({'result': 'North instance {} deleted successfully.'.format(north_instance)})\n\n\nasync def get_schedule(storage, schedule_name):\n    payload = PayloadBuilder().SELECT([\"id\", \"enabled\"]).WHERE(['schedule_name', '=', schedule_name]).payload()\n    result = await storage.query_tbl_with_payload('schedules', payload)\n    return result\n\n\nasync def check_scheduled_processes(storage, process_name):\n    payload = PayloadBuilder().SELECT(\"name\").WHERE(['name', '=', process_name]).payload()\n    result = await storage.query_tbl_with_payload('scheduled_processes', payload)\n    return result['count']\n\n\nasync def check_schedules(storage, schedule_name):\n    payload = PayloadBuilder().SELECT(\"schedule_name\").WHERE(['schedule_name', '=', schedule_name]).payload()\n    result = await storage.query_tbl_with_payload('schedules', payload)\n    return result['count']\n\n\nasync def delete_statistics_key(storage, key):\n    payload = PayloadBuilder().WHERE(['key', '=', key]).payload()\n    await storage.delete_from_tbl('statistics', payload)\n\n\nasync def delete_task_entry_with_schedule_id(storage, sch_id):\n    payload = PayloadBuilder().WHERE([\"schedule_id\", \"=\", str(sch_id)]).payload()\n    await storage.delete_from_tbl(\"tasks\", payload)\n\n\nasync def delete_streams(storage, north_instance):\n    payload = PayloadBuilder().WHERE([\"description\", \"=\", north_instance]).payload()\n    await storage.delete_from_tbl(\"streams\", payload)\n\n\nasync def 
delete_plugin_data(storage, north_instance):\n    payload = PayloadBuilder().WHERE([\"key\", \"like\", north_instance + \"%\"]).payload()\n    await storage.delete_from_tbl(\"plugin_data\", payload)\n\n\nasync def update_deprecated_ts_in_asset_tracker(storage, north_instance):\n    \"\"\"\n    TODO: FOGL-6749\n    Once rows affected with 0 case handled at Storage side\n    then we will need to update the query with AND_WHERE(['deprecated_ts', 'isnull'])\n    At the moment deprecated_ts is updated even in notnull case.\n    Also added SELECT query before UPDATE to avoid BadCase when there is no asset track entry exists for the instance.\n    This should also be removed when given JIRA is fixed.\n    \"\"\"\n    select_payload = PayloadBuilder().SELECT(\"deprecated_ts\").WHERE(['service', '=', north_instance]).payload()\n    get_result = await storage.query_tbl_with_payload('asset_tracker', select_payload)\n    if 'rows' in get_result:\n        response = get_result['rows']\n        if response:\n            # AND_WHERE(['deprecated_ts', 'isnull']) once FOGL-6749 is done\n            current_time = utils.local_timestamp()\n            update_payload = PayloadBuilder().SET(deprecated_ts=current_time).WHERE(\n                ['service', '=', north_instance]).payload()\n            await storage.update_tbl(\"asset_tracker\", update_payload)\n"
  },
  {
    "path": "python/fledge/services/core/api/update.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Fledge package updater API support\"\"\"\n\n\nimport json\nfrom aiohttp import web\nimport datetime\nimport os\nimport asyncio\nimport re\n\nfrom fledge.common import utils\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.services.core import server\nfrom fledge.services.core.scheduler.entities import ManualSchedule\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n__author__ = \"Massimiliano Pinto\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_help = \"\"\"\n    -----------------------------------------\n    | PUT, GET            | /fledge/update   |\n    -----------------------------------------\n\"\"\"\n\n###\n# The update.py is part of the Fledge REST API:\n# when called via PUT /fledge/update it will add/fetch the \"manual update task\"\n# to scheduler queue for execution.\n#\n#\n\n_FLEDGE_UPDATE_TASK = \"FledgeUpdater\"\n_FLEDGE_MANUAL_UPDATE_SCHEDULE = 'Fledge updater on demand'\n\n\nasync def update_package(request):\n    \"\"\" Queues the execution of Fledge package update task\n\n    :Example: curl -X PUT http://localhost:8081/fledge/update\n    \"\"\"\n\n    create_message = \"{} : a new schedule has been created\".format(_FLEDGE_MANUAL_UPDATE_SCHEDULE)\n    status_message = \"{}  has been queued for execution\".format(_FLEDGE_MANUAL_UPDATE_SCHEDULE)\n    error_message = \"Failed to create the schedule {}\".format(_FLEDGE_MANUAL_UPDATE_SCHEDULE)\n    schedule_disable_error_message = \"{} schedule is disabled\".format(_FLEDGE_MANUAL_UPDATE_SCHEDULE)\n\n    try:\n        task_found = False\n        # Get all the 'Scheduled Tasks'\n        schedule_list = await server.Server.scheduler.get_schedules()\n\n        # Find the manual updater schedule\n        for schedule_info in schedule_list:\n            if schedule_info.name == 
_FLEDGE_MANUAL_UPDATE_SCHEDULE:\n                task_found = True\n                # Set the schedule id\n                schedule_id = schedule_info.schedule_id\n                if schedule_info.enabled is False:\n                    _logger.warning(schedule_disable_error_message)\n                    raise ValueError(schedule_disable_error_message)\n                break\n\n        # If no schedule then create it\n        if task_found is False:\n            # Create a manual schedule for update\n            manual_schedule = ManualSchedule()\n\n            if not manual_schedule:\n                raise ValueError(error_message)\n            # Set schedule fields\n            manual_schedule.name = _FLEDGE_MANUAL_UPDATE_SCHEDULE\n            manual_schedule.process_name = _FLEDGE_UPDATE_TASK\n            manual_schedule.repeat = datetime.timedelta(seconds=0)\n            manual_schedule.enabled = True\n            manual_schedule.exclusive = True\n\n            await server.Server.scheduler.save_schedule(manual_schedule)\n\n            # Set the schedule id\n            schedule_id = manual_schedule.schedule_id\n\n            # Log new schedule creation\n            _logger.info(\"%s, ID [ %s ]\", create_message, str(schedule_id))\n\n        # Save current logged user token\n        token = request.headers.get('authorization', None)\n        if token is not None:\n            with open(os.path.expanduser('~') + '/.fledge_token', 'w') as f:\n                f.write(token)\n\n        # Add schedule_id to the schedule queue\n        await server.Server.scheduler.queue_task(schedule_id)\n    except ValueError as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        _logger.error(ex, \"Failed to update Fledge package.\")\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return 
web.json_response({\"status\": \"Running\", \"message\": status_message})\n\n\nasync def get_updates(request: web.Request) -> web.Response:\n    \"\"\"\n        Gives list of packages for which updates are available.\n\n        Returns JSON Response\n         Sample Response\n         {\n            \"updates\": [\n            \"fledge-south-sinusoid\",\"fledge\"\n            ]\n        }\n        Example\n         curl -sX GET http://localhost:8081/fledge/update |jq\n    \"\"\"\n    update_cmd = \"sudo apt update\"\n    upgradable_pkgs_check_cmd = \"apt list --upgradable | grep \\^fledge | grep -v \\^fledge-manage\"\n    if utils.is_redhat_based():\n        update_cmd = \"sudo yum check-update\"\n        upgradable_pkgs_check_cmd = \"yum list updates | grep \\^fledge | grep -v \\^fledge-manage\"\n\n    update_args = update_cmd.split()\n    update_process = await asyncio.create_subprocess_exec(\n        *update_args, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)\n    _, stderr = await update_process.communicate()\n    if update_process.returncode != 0:\n        msg = \"Could not run {} due to {}\".format(update_cmd, stderr.decode('utf-8'))\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n    upgrade_args = upgradable_pkgs_check_cmd.split()\n    upgradable_pkgs_check_process = await asyncio.create_subprocess_exec(\n        *upgrade_args, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)\n    stdout, stderr = await upgradable_pkgs_check_process.communicate()\n    if upgradable_pkgs_check_process.returncode != 0:\n        _logger.info(\"Nothing to upgrade at the moment. 
{}\".format(stderr.decode(\"utf-8\")))\n        upgradable_packages = []\n        return web.json_response({'updates': upgradable_packages})\n    try:\n        process_output = stdout.decode(\"utf-8\")\n        _logger.debug(process_output)\n        # split on new-line\n        word_list = re.split(r\"\\n+\", process_output)\n\n        # remove '' from the list\n        word_list = [w for w in word_list if w != '']\n        packages = []\n\n        # For APT the output of apt list is as follows:\n        \"\"\"\n        $ apt list --upgradable\n        Listing... Done\n        fledge-gui/unknown 1.9.2-440-gf849eed5 all [upgradable from: 1.9.2]\n        fledge-south-sinusoid/unknown 1.9.2-1-g38a138f amd64 [upgradable from: 1.9.2]\n        \"\"\"\n        # Now match the character / . The string before / is the actual package name we want.\n        for word in word_list:\n            # TODO find regex for yum as well.\n            word_match = re.findall(r\".*[/]\", word)\n            if len(word_match) > 0:\n                packages.append(word_match[0].replace('/', '').strip())\n\n        # Make a set to avoid duplicates.\n        upgradable_packages = list(set(packages))\n    except Exception as ex:\n        msg = \"Failed to fetch upgradable packages list for the configured repository!\"\n        _logger.error(ex, msg)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": \"{} {}\".format(msg, str(ex))}))\n    else:\n        return web.json_response({'updates': upgradable_packages})\n"
  },
  {
    "path": "python/fledge/services/core/api/utils.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport subprocess\nimport os\nimport json\nfrom fledge.common.common import _FLEDGE_ROOT, _FLEDGE_PLUGIN_PATH, _FLEDGE_DATA\nfrom fledge.common.logger import FLCoreLogger\n\n_logger = FLCoreLogger().get_logger(__name__)\n_lib_path = _FLEDGE_ROOT + \"/\" + \"plugins\"\n\nC_PLUGIN_UTIL_PATH = _FLEDGE_ROOT + \"/extras/C/get_plugin_info\" if os.path.isdir(_FLEDGE_ROOT + \"/extras/C\") \\\n        else _FLEDGE_ROOT + \"/cmake_build/C/plugins/utils/get_plugin_info\"\n\n\ndef get_plugin_info(name, dir):\n    try:\n        arg2 = _find_c_lib(name, dir)\n        if arg2 is None:\n            raise ValueError('The plugin {} does not exist'.format(name))\n        cmd_with_args = [C_PLUGIN_UTIL_PATH, arg2, \"plugin_info\"]\n        p = subprocess.Popen(cmd_with_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        out, err = p.communicate()\n        res = out.decode(\"utf-8\")\n        jdoc = json.loads(res)\n    except json.decoder.JSONDecodeError as err:\n        _logger.error(\"Failed to parse JSON data returned from the plugin information of {}, {} line {} column {}\".format(name, err.msg, err.lineno, err.colno))\n        return {}\n    except (OSError, ValueError) as err:\n        _logger.error(err, \"{} C plugin get info failed.\".format(name))\n        return {}\n    except subprocess.CalledProcessError as err:\n        if err.output is not None:\n            _logger.error(err, \"{} C plugin get info failed '{}'.\".format(name, err.output))\n        else:\n            _logger.error(err, \"{} C plugin get info failed.\".format(name))\n        return {}\n    except Exception as ex:\n        _logger.error(ex, \"{} C plugin get info failed.\".format(name))\n        return {}\n    else:\n        return jdoc\n\n\ndef _find_c_lib(name, installed_dir):\n    _path = [_lib_path + \"/\" + installed_dir]\n    _path = _find_plugins_from_env(_path)\n    
lib_path = None\n\n    for fp in _path:\n        for path, subdirs, files in os.walk(fp):\n            for fname in files:\n                # C-binary file\n                if fname.endswith(\"lib{}.so\".format(name)):\n                    lib_path = os.path.join(path, fname)\n                    break\n            else:\n                continue\n            break\n    return lib_path\n\n\ndef find_c_plugin_libs(direction):\n    libraries = []\n    _path = [_lib_path]\n    _path = _find_plugins_from_env(_path)\n    for fp in _path:\n        if os.path.isdir(fp + \"/\" + direction):\n            for name in os.listdir(fp + \"/\" + direction):\n                p = fp + \"/\" + direction + \"/\" + name\n                for fname in os.listdir(p):\n                    if fname.endswith('.so'):\n                        # Replace lib and .so from fname\n                        libraries.append((fname.replace(\"lib\", \"\").replace(\".so\", \"\"), 'binary'))\n                    # For Hybrid plugins\n                    if direction == 'south' and fname.endswith('.json'):\n                        libraries.append((fname.replace(\".json\", \"\"), 'json'))\n    return libraries\n\n\ndef _find_plugins_from_env(_plugin_path: list) -> list:\n    if _FLEDGE_PLUGIN_PATH:\n        my_list = _FLEDGE_PLUGIN_PATH.split(\";\")\n        for ml in my_list:\n            dir_found = os.path.isdir(ml)\n            if dir_found:\n                subdirs = [dirs for x, dirs, files in os.walk(ml)]\n                if subdirs[0]:\n                    _plugin_path.append(ml)\n                else:\n                    _logger.warning(\"{} subdir type not found.\".format(ml))\n            else:\n                _logger.warning(\"{} dir path not found.\".format(ml))\n    return _plugin_path\n\n\ndef get_fl_dir(_path):\n    dir_path = _FLEDGE_DATA + _path if _FLEDGE_DATA else _FLEDGE_ROOT + '/data' + _path\n    if not os.path.exists(dir_path):\n        os.makedirs(dir_path)\n    return 
os.path.expanduser(dir_path)\n\n"
  },
  {
    "path": "python/fledge/services/core/asset_tracker/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/core/asset_tracker/asset_tracker.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.storage_client.exceptions import StorageServerError\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nclass AssetTracker(object):\n\n    _storage = None\n    \"\"\"Storage client async\"\"\"\n\n    fledge_svc_name = None\n    \"\"\"Fledge service name\"\"\"\n\n    _registered_asset_records = None\n    \"\"\"Set of rows for asset_tracker already in the storage tables\"\"\"\n\n    def __init__(self, storage=None):\n        if self._storage is None:\n            if not isinstance(storage, StorageClientAsync):\n                raise TypeError('Must be a valid Async Storage object')\n            self._storage = storage\n            self.fledge_svc_name = ''\n\n    async def load_asset_records(self):\n        \"\"\" Fetch all asset_tracker records from database \"\"\"\n\n        self._registered_asset_records = []\n        try:\n            payload = PayloadBuilder().SELECT(\"asset\", \"event\", \"service\", \"plugin\", \"data\").payload()\n            results = await self._storage.query_tbl_with_payload('asset_tracker', payload)\n            for row in results['rows']:\n                self._registered_asset_records.append(row)\n        except Exception as ex:\n            _logger.exception(ex, 'Failed to retrieve asset records')\n\n    async def add_asset_record(self, *,  asset, event, service, plugin, jsondata = {}):\n        \"\"\"\n        Args:\n             asset: asset code of the record\n             event: event the 
record is recording, one of a set of possible events including Ingest, Egress, Filter\n             service: The name of the service that made the entry\n             plugin: The name of the plugin, that has been loaded by the service.\n        \"\"\"\n        # If (asset + event + service + plugin) row combination exists in _find_registered_asset_record then return\n        d = {\"asset\": asset, \"event\": event, \"service\": service, \"plugin\": plugin, \"data\":jsondata}\n        if d in self._registered_asset_records:\n            return {}\n\n        # The name of the Fledge this entry has come from.\n        # This is defined as the service name and configured as part of the general configuration of Fledge.\n        # it will only change on restart! Later we may want to fix it via callback mechanism\n        if len(self.fledge_svc_name) == 0:\n            cfg_manager = ConfigurationManager(self._storage)\n            svc_config = await cfg_manager.get_category_item(category_name='service', item_name='name')\n            self.fledge_svc_name = svc_config['value']\n\n        try:\n            payload = PayloadBuilder().INSERT(asset=asset, event=event, service=service, plugin=plugin, fledge=self.fledge_svc_name, data=jsondata).payload()\n\n            result = await self._storage.insert_into_tbl('asset_tracker', payload)\n            response = result['response']\n            self._registered_asset_records.append(d)\n        except KeyError:\n            raise ValueError(result['message'])\n        except StorageServerError as ex:\n            err_response = ex.error\n            raise ValueError(err_response)\n        else:\n            import copy\n            result = copy.deepcopy(d)\n            result.update({\"fledge\": self.fledge_svc_name})\n            return result\n"
  },
  {
    "path": "python/fledge/services/core/connect.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.storage_client import StorageClientAsync, ReadingsStorageClientAsync\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\n# TODO: Needs refactoring or better way to allow global discovery in core process\ndef get_storage_async():\n    \"\"\" Storage Object \"\"\"\n    try:\n        services = ServiceRegistry.get(name=\"Fledge Storage\")\n        storage_svc = services[0]\n        _storage = StorageClientAsync(core_management_host=None, core_management_port=None, svc=storage_svc)\n    except Exception as ex:\n        _logger.error(ex)\n        raise\n    return _storage\n\n\n# TODO: Needs refactoring or better way to allow global discovery in core process\ndef get_readings_async():\n    \"\"\" Storage Object \"\"\"\n    try:\n        services = ServiceRegistry.get(name=\"Fledge Storage\")\n        storage_svc = services[0]\n        _readings = ReadingsStorageClientAsync(core_mgt_host=None, core_mgt_port=None, svc=storage_svc)\n    except Exception as ex:\n        _logger.error(ex)\n        raise\n    return _readings\n"
  },
  {
    "path": "python/fledge/services/core/firewall.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nfrom fledge.common.logger import FLCoreLogger\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2024, Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nclass Singleton(object):\n    _shared_state = {}\n\n    def __init__(self):\n        self.__dict__ = self._shared_state\n\n\nclass Firewall(Singleton):\n    \"\"\" Monitor and Control HTTP Network Traffic \"\"\"\n\n    __slots__ = ['category', 'display_name', 'description', 'config', 'ip_addresses']\n\n    DEFAULT_CONFIG = {\n        'allowedIP': {\n            'description': 'A list of allowed IP addresses',\n            'type': 'list',\n            'items': 'string',\n            'default': \"[]\",\n            'displayName': 'Allowed IP Addresses',\n            'order': '1',\n            'permissions': ['admin']\n        },\n        'deniedIP': {\n            'description': 'A list of denied IP addresses',\n            'type': 'list',\n            'items': 'string',\n            'default': \"[]\",\n            'displayName': 'Denied IP Addresses',\n            'order': '2',\n            'permissions': ['admin']\n        }\n    }\n\n    def __init__(self):\n        super().__init__()\n\n        self.category = 'firewall'\n        self.display_name = 'Firewall'\n        self.description = 'Monitor and Control HTTP Network Traffic'\n        self.config = self.DEFAULT_CONFIG\n        self.ip_addresses = {}\n\n    def __repr__(self):\n        template = ('Firewall settings: <category={s.category}, display_name={s.display_name}, '\n                    'description={s.description}, config={s.config}, ip_addresses={s.ip_addresses}>')\n        return template.format(s=self)\n\n    def __str__(self):\n        return self.__repr__()\n\n    @classmethod\n    def get_instance(cls):\n        if 
not hasattr(cls, '_instance'):\n            cls._instance = cls()\n        return cls._instance\n\n    class IPAddresses:\n\n        @classmethod\n        def get(cls) -> dict:\n            f = Firewall.get_instance()\n            return f.ip_addresses\n\n        @classmethod\n        def save(cls, data: dict) -> None:\n            f = Firewall.get_instance()\n            if 'allowedIP' in data:\n                f.ip_addresses.update({'allowedIP': json.loads(data['allowedIP']['value'])})\n            if 'deniedIP' in data:\n                f.ip_addresses.update({'deniedIP': json.loads(data['deniedIP']['value'])})\n\n        @classmethod\n        def clear(cls) -> None:\n            f = Firewall.get_instance()\n            f.ip_addresses = {}\n\n"
  },
  {
    "path": "python/fledge/services/core/interest_registry/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/core/interest_registry/change_callback.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Module to hold the callback to notify microservices of config changes. \"\"\"\n\nimport json\nimport asyncio\nimport aiohttp\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core.service_registry import exceptions as service_registry_exceptions\nfrom fledge.services.core.interest_registry.interest_registry import InterestRegistry\nfrom fledge.services.core.interest_registry import exceptions as interest_registry_exceptions\nfrom fledge.common import logger\n\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_LOGGER = logger.setup(__name__)\n\n\nasync def run(category_name):\n    \"\"\" Callback run by configuration category to notify changes to interested microservices\n\n    Note: this method is async as needed\n\n    Args:\n        configuration_name (str): name of category that was changed\n    \"\"\"\n\n    # get all interest records regarding category_name\n    cfg_mgr = ConfigurationManager()\n    interest_registry = InterestRegistry(cfg_mgr)\n    try:\n        interest_records = interest_registry.get(category_name=category_name)\n    except interest_registry_exceptions.DoesNotExist:\n        return\n\n    category_value = await cfg_mgr.get_category_all_items(category_name)\n    payload = {\"category\" : category_name, \"items\" : category_value}\n    headers = {'content-type': 'application/json'}\n\n    # for each microservice interested in category_name, notify change\n    for i in interest_records:\n        # get microservice management server info of microservice through service registry\n        try: \n            service_record = ServiceRegistry.get(idx=i._microservice_uuid)[0]\n        except service_registry_exceptions.DoesNotExist:\n       
     _LOGGER.exception(\"Unable to notify microservice with uuid %s as it is not found in the service registry\", i._microservice_uuid)\n            continue\n        url = \"{}://{}:{}/fledge/change\".format(service_record._protocol, service_record._address, service_record._management_port)\n\n        async with aiohttp.ClientSession() as session:\n            try:\n                async with session.post(url, data=json.dumps(payload, sort_keys=True), headers=headers) as resp:\n                    result = await resp.text()\n                    status_code = resp.status\n                    if status_code in range(400, 500):\n                        _LOGGER.error(\"Bad request error code: %d, reason: %s\", status_code, resp.reason)\n                    if status_code in range(500, 600):\n                        _LOGGER.error(\"Server error code: %d, reason: %s\", status_code, resp.reason)\n            except Exception as ex:\n                _LOGGER.exception(ex, \"Unable to notify microservice with uuid {}\".format(i._microservice_uuid))\n                continue\n\n\nasync def run_child_create(parent_category_name, child_category_list):\n    \"\"\" Call the child_create Management API\n\n    Args:\n        parent_category_name (str): parent category of the children one\n        child_category_list (str): list of the children category changed\n    \"\"\"\n\n    # get all interest records regarding category_name\n    cfg_mgr = ConfigurationManager()\n    interest_registry = InterestRegistry(cfg_mgr)\n    try:\n        interest_records = interest_registry.get(category_name=parent_category_name)\n    except interest_registry_exceptions.DoesNotExist:\n        return\n\n    for child_category in child_category_list:\n\n        category_value = await cfg_mgr.get_category_all_items(child_category)\n        payload = {\"parent_category\" : parent_category_name, \"category\" : child_category, \"items\" : category_value}\n        headers = {'content-type': 
'application/json'}\n\n        # for each microservice interested in category_name, notify change\n        for i in interest_records:\n            # get microservice management server info of microservice through service registry\n            try:\n                service_record = ServiceRegistry.get(idx=i._microservice_uuid)[0]\n            except service_registry_exceptions.DoesNotExist:\n                _LOGGER.exception(\"Unable to notify microservice with uuid {} as it is not \"\n                                  \"found in the service registry\".format(i._microservice_uuid))\n                continue\n            url = \"{}://{}:{}/fledge/child_create\".format(service_record._protocol, service_record._address, service_record._management_port)\n\n            async with aiohttp.ClientSession() as session:\n                try:\n                    async with session.post(url, data=json.dumps(payload, sort_keys=True), headers=headers) as resp:\n                        result = await resp.text()\n                        status_code = resp.status\n                        if status_code in range(400, 500):\n                            _LOGGER.error(\"Bad request error code: %d, reason: %s\", status_code, resp.reason)\n                        if status_code in range(500, 600):\n                            _LOGGER.error(\"Server error code: %d, reason: %s\", status_code, resp.reason)\n                except Exception as ex:\n                    _LOGGER.exception(ex, \"Unable to notify microservice with uuid {}\".format(i._microservice_uuid))\n                    continue\n\n\nasync def run_child_delete(parent_category_name, child_category):\n    \"\"\" Call the child_delete Management API\n\n    Args:\n        parent_category_name (str): parent category of the children one\n        child_category (str): name of the child category that was deleted\n    \"\"\"\n\n    # get all interest records regarding category_name\n    cfg_mgr = ConfigurationManager()\n    
interest_registry = InterestRegistry(cfg_mgr)\n    try:\n        interest_records = interest_registry.get(category_name=parent_category_name)\n    except interest_registry_exceptions.DoesNotExist:\n        return\n\n    category_value = await cfg_mgr.get_category_all_items(child_category)\n    payload = {\"parent_category\" : parent_category_name, \"category\" : child_category, \"items\" : category_value}\n    headers = {'content-type': 'application/json'}\n\n    # for each microservice interested in category_name, notify change\n    for i in interest_records:\n        # get microservice management server info of microservice through service registry\n        try:\n            service_record = ServiceRegistry.get(idx=i._microservice_uuid)[0]\n        except service_registry_exceptions.DoesNotExist:\n            _LOGGER.exception(\"Unable to notify microservice with uuid %s as it is not found in the service registry\", i._microservice_uuid)\n            continue\n\n        url = \"{}://{}:{}/fledge/child_delete\".format(service_record._protocol, service_record._address, service_record._management_port)\n\n        async with aiohttp.ClientSession() as session:\n            try:\n                async with session.delete(url, data=json.dumps(payload, sort_keys=True), headers=headers) as resp:\n                    result = await resp.text()\n                    status_code = resp.status\n                    if status_code in range(400, 500):\n                        _LOGGER.error(\"Bad request error code: %d, reason: %s\", status_code, resp.reason)\n                    if status_code in range(500, 600):\n                        _LOGGER.error(\"Server error code: %d, reason: %s\", status_code, resp.reason)\n            except Exception as ex:\n                _LOGGER.exception(ex, \"Unable to notify microservice with uuid {}\".format(i._microservice_uuid))\n                continue\n\n\nasync def run_child(parent_category_name, child_category_list, operation):\n    
\"\"\" Callback run by configuration category to notify changes to interested microservices\n\n    Note: this method is async as needed\n\n    Args:\n        parent_category_name (str): parent category of the children one\n        child_category_list (str): list of the children category changed\n        or child category anem if operatuon is 'd'\n        operation (str): \"c\" = created | \"d\" = delete\n    \"\"\"\n\n    if operation == \"c\":\n        await run_child_create (parent_category_name, child_category_list)\n    elif operation == \"d\":\n        await run_child_delete (parent_category_name, child_category_list)\n    else:\n        _LOGGER.error(\"Requested operation not supported only create/delete are supported, quested operation %s\", operation)\n\n"
  },
  {
    "path": "python/fledge/services/core/interest_registry/exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Interests Registry Exceptions module\"\"\"\n\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass DoesNotExist(Exception):\n    pass\n\nclass ErrorInterestRegistrationAlreadyExists(Exception):\n    pass\n\n"
  },
  {
    "path": "python/fledge/services/core/interest_registry/interest_record.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Interest Record Class\"\"\"\n\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nclass InterestRecord(object):\n    \"\"\"Stores a single interest registration for notification changes \n    \"\"\"\n\n    def __init__(self, registration_id, microservice_uuid, category_name):\n        self._registration_id = registration_id\n        \"\"\" interest registration id \"\"\"\n        \n        self._microservice_uuid = microservice_uuid\n        \"\"\" microservice interested in the change \"\"\"\n        \n        self._category_name = category_name\n        \"\"\" configuration category name of interest \"\"\"\n\n    def __repr__(self):\n        template = 'interest registration id={s._registration_id}: <microservice uuid={s._microservice_uuid}, category_name={s._category_name}>'\n        return template.format(s=self)\n\n    def __str__(self):\n        return self.__repr__()\n\n\n"
  },
  {
    "path": "python/fledge/services/core/interest_registry/interest_registry.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Interest Registry Class\"\"\"\n\nimport uuid\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common import logger\nfrom fledge.services.core.interest_registry.interest_record import InterestRecord\nfrom fledge.services.core.interest_registry import exceptions as interest_registry_exceptions\n\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_LOGGER = logger.setup(__name__)\nNOTIFY_CHANGE_CALLBACK = \"fledge.services.core.interest_registry.change_callback\"\n\n\nclass InterestRegistrySingleton(object):\n    \"\"\"This class is used to provide singlton functionality to InterestRegistry\n    \"\"\"\n\n    _shared_state = {}\n\n    def __init__(self):\n        self.__dict__ = self._shared_state\n\n\nclass InterestRegistry(InterestRegistrySingleton):\n    \"\"\"Used by core to manage microservices' interests in configuration changes.\n\n    Inherits from InterestRegistrySingleton to make it a singleton\n\n    \"\"\"\n\n    _registered_interests = None\n    \"\"\" maintains the list of InterestRecord objects \"\"\"\n\n    _configuration_manager = None\n    \"\"\" ConfigurationManager used by InterestRegistry \"\"\"\n\n    def __init__(self, configuration_manager=None):\n        \"\"\" Used to create InterestRegistry object\n\n        Args:\n            configuration_manager (ConfigurationManager): configuration_manager instance to use\n\n        \"\"\"\n\n        InterestRegistrySingleton.__init__(self)\n        if self._configuration_manager is None:\n            if not isinstance(configuration_manager, ConfigurationManager):\n                raise TypeError('Must be a valid ConfigurationManager object')\n            self._configuration_manager = configuration_manager\n        if self._registered_interests is None:\n            
self._registered_interests = list()\n    \n    def and_filter(self, **kwargs):\n        \"\"\" Used to filter InterestRecord objects based on attribute values.\n        \"\"\"\n        interest_records = None\n        interest_records = [s for s in self._registered_interests if all(getattr(s, k, None) == v for k, v in kwargs.items() if v is not None)]\n        return interest_records\n\n    def get(self, registration_id=None, category_name=None, microservice_uuid=None):\n        \"\"\" Used to filter InterestRecord objects based on attribute values.\n        Args:\n            registration_id (str): registration_id uuid as a string (optional)\n            category_name (str): category of interest (optional)\n            microservice_uuid (str): interested party - microservice uuid as a string (optional)\n        \"\"\"\n        interest_records = self.and_filter(_registration_id=registration_id, _category_name=category_name, _microservice_uuid=microservice_uuid)\n        if len(interest_records) == 0:\n            raise interest_registry_exceptions.DoesNotExist\n        return interest_records\n\n    def register_child(self, microservice_uuid, category_name):\n        \"\"\" Used to add an entry to the InterestRegistry\n        Args:\n            category_name (str): category of interest (required)\n            microservice_uuid (str): interested party - microservice_uuid as a string (required)\n        Note:\n            category_name, microservice_uuid pair must be unique\n        Returns:\n            registration id of new InterestRegistration entry\n        Raises:\n            fledge.services.core.interest_registry.exceptions.ErrorInterestRegistrationAlreadyExists\n                in the event that the microservice_uuid, category_name pair is already registered\n        \"\"\"\n\n        if microservice_uuid is None:\n            raise ValueError('Failed to register interest. 
microservice_uuid cannot be None')\n        if category_name is None:\n            raise ValueError('Failed to register interest. category_name cannot be None')\n\n        #// FIXME_I:\n\n        # try:\n        #     self.get(microservice_uuid=microservice_uuid, category_name=category_name)\n        # except interest_registry_exceptions.DoesNotExist:\n        #\n        #     #// FIXME_I:\n        #     _LOGGER.setLevel(logging.DEBUG)\n        #     _LOGGER.debug(\"xxx9 register_child OK microservice_uuid:{}: category_name :{}:\".format(microservice_uuid, category_name) )\n        #     _LOGGER.setLevel(logging.WARNING)\n        #\n        #     pass\n        # else:\n        #\n        #     #// FIXME_I:\n        #     _LOGGER.setLevel(logging.DEBUG)\n        #     _LOGGER.debug(\"xxx9 register_child ERROR microservice_uuid:{}: category_name :{}:\".format(microservice_uuid, category_name) )\n        #     _LOGGER.setLevel(logging.WARNING)\n        #\n        #     raise interest_registry_exceptions.ErrorInterestRegistrationAlreadyExists\n\n        # register callback with configuration manager\n        #// FIXME_I:\n        self._configuration_manager.register_interest_child(category_name, NOTIFY_CHANGE_CALLBACK)\n\n        # get registration_id\n        registration_id = str(uuid.uuid4())\n        # create new InterestRecord\n        #// FIXME_I:\n        #registered_interest = InterestRecord(registration_id, microservice_uuid, category_name)\n        # add interest record to list of registered interests\n        #// FIXME_I:\n        #self._registered_interests.append(registered_interest)\n\n        return registration_id\n\n\n    def register(self, microservice_uuid, category_name):\n        \"\"\" Used to add an entry to the InterestRegistry\n        Args:\n            category_name (str): category of interest (required)\n            microservice_uuid (str): interested party - microservice_uuid as a string (required)\n        Note:\n            category_name, 
microservice_uuid pair must be unique\n        Returns:\n            registration id of new InterestRegistration entry\n        Raises:\n            fledge.services.core.interest_registry.exceptions.ErrorInterestRegistrationAlreadyExists\n                in the event that the microservice_uuid, category_name pair is already registered\n        \"\"\"\n        if microservice_uuid is None:\n            raise ValueError('Failed to register interest. microservice_uuid cannot be None')\n        if category_name is None:\n            raise ValueError('Failed to register interest. category_name cannot be None')\n\n        try:\n            self.get(microservice_uuid=microservice_uuid, category_name=category_name)\n        except interest_registry_exceptions.DoesNotExist:\n            pass\n        else:\n            raise interest_registry_exceptions.ErrorInterestRegistrationAlreadyExists\n\n        # register callback with configuration manager\n        self._configuration_manager.register_interest(category_name, NOTIFY_CHANGE_CALLBACK)\n        # get registration_id\n        registration_id = str(uuid.uuid4())\n        # create new InterestRecord\n        registered_interest = InterestRecord(registration_id, microservice_uuid, category_name)\n        # add interest record to list of registered interests\n        self._registered_interests.append(registered_interest)\n\n        return registration_id\n\n    def unregister(self, registration_id):\n        \"\"\" Used to remove an entry from the InterestRegistry\n        Args:\n            registration_id (str): id (uuid as a string) of InterestRegistration entry to remove\n        Returns:\n            registration id of removed interest record\n        Raises:\n            fledge.services.core.interest_registry.exceptions.DoesNotExist\n                in the event that the registration id does not have a corresponding entry in the registry\n        \"\"\"\n        # remove entry from list in InterestRegistry\n        
registered_interests = self.get(registration_id=registration_id)\n        interest_record = registered_interests[0]\n        self._registered_interests.remove(interest_record)\n        # remove entry from configuration manager if no registered interests exist for this category_name\n        try:\n            self.get(category_name=interest_record._category_name)\n        except interest_registry_exceptions.DoesNotExist:\n            self._configuration_manager.unregister_interest(interest_record._category_name, NOTIFY_CHANGE_CALLBACK)\n        _LOGGER.info(\"Unregistered interest with id {}\".format(registration_id))\n        return registration_id\n\n\n"
  },
  {
    "path": "python/fledge/services/core/proxy.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nimport urllib.parse\nimport aiohttp\n\nfrom aiohttp import web\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.utils import make_async\nfrom fledge.services.core import server\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core.service_registry import exceptions as service_registry_exceptions\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = FLCoreLogger().get_logger(__name__)\n\n\ndef setup(app):\n    app.router.add_route('POST', '/fledge/proxy', add)\n    app.router.add_route('DELETE', '/fledge/proxy/{service_name}', delete)\n\n\ndef admin_api_setup(app):\n    # Note: /extension is only for to catch Proxy endpoints\n    # Below code is not working due to aiohttp-cors lib issue https://github.com/aio-libs/aiohttp-cors/issues/241\n\n    # app.router.add_route('*', r'/fledge/extension/{tail:.*}', handler)\n\n    # Once above resolved we will remove below routes and replaced with * handler\n    app.router.add_route('GET', r'/fledge/extension/{tail:.*}', handler)\n    app.router.add_route('POST', r'/fledge/extension/{tail:.*}', handler)\n    app.router.add_route('PUT', r'/fledge/extension/{tail:.*}', handler)\n    app.router.add_route('DELETE', r'/fledge/extension/{tail:.*}', handler)\n\n\nasync def add(request: web.Request) -> web.Response:\n    \"\"\" Add API proxy for a service\n\n    :Example:\n        curl -sX POST http://localhost:<CORE_MGT_PORT>/fledge/proxy -d '{\"service_name\": \"SVC #1\", \"DELETE\": {\"/fledge/svc/([0-9][0-9]*)$\": \"/svc/([0-9][0-9]*)$\"}, \"GET\": {\"/fledge/svc/([0-9][0-9]*)$\": \"/svc/([0-9][0-9]*)$\"}, \"POST\": {\"/fledge/svc\": \"/svc\"}, \"PUT\": {\"/fledge/svc/([0-9][0-9]*)$\": \"/svc/([0-9][0-9]*)$\", 
\"/fledge/svc/match\": \"/svc/match\"}}'\n   \"\"\"\n    data = await request.json()\n    svc_name = data.get('service_name', None)\n    try:\n        if svc_name is None:\n            raise ValueError(\"service_name KV pair is required.\")\n        if svc_name is not None:\n            if not isinstance(svc_name, str):\n                raise TypeError(\"service_name must be in string.\")\n            svc_name = svc_name.strip()\n            if not len(svc_name):\n                raise ValueError(\"service_name cannot be empty.\")\n            del data['service_name']\n            valid_verbs = [\"GET\", \"POST\", \"PUT\", \"DELETE\"]\n            intersection = [i for i in valid_verbs if i in data]\n            if not intersection:\n                raise ValueError(\"Nothing to add in proxy for {} service. \"\n                                 \"Pass atleast one {} verb in the given payload.\".format(svc_name, valid_verbs))\n            if not all(data.values()):\n                raise ValueError(\"Value cannot be empty for a verb in the given payload.\")\n            for k, v in data.items():\n                if not isinstance(v, dict):\n                    raise TypeError(\"Value should be a dictionary object for {} key.\".format(k))\n                for k1, v1 in v.items():\n                    if '/fledge/' not in k1:\n                        raise ValueError(\"Public URL must start with /fledge prefix for {} key.\".format(k))\n            if svc_name in server.Server._API_PROXIES:\n                raise ValueError(\"Proxy is already configured for {} service. 
\"\n                                 \"Delete it first and then re-create.\".format(svc_name))\n    except (TypeError, ValueError, KeyError) as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({'message': msg}))\n    else:\n        try:\n            for method, path in data.items():\n                for admin_route, micro_svc_route in path.items():\n                    prefix_url = '/{}/{}'.format(admin_route.split('/')[1], admin_route.split('/')[2])\n                    break\n                break\n            # NOTE: There will be no same Public URL for different Proxies\n            # Add service name KV pair in-memory structure\n            server.Server._API_PROXIES.update({svc_name: {\"endpoints\": data, \"prefix_url\": prefix_url}})\n        except Exception as ex:\n            msg = str(ex)\n            raise web.HTTPInternalServerError(reason=msg, body=json.dumps({'message': msg}))\n        return web.json_response({\"result\": \"Proxy has been configured for {} service.\".format(svc_name)})\n\n\nasync def delete(request: web.Request) -> web.Response:\n    \"\"\" Stop API proxy for a service\n\n    :Example:\n             curl -sX DELETE http://localhost:<CORE_MGT_PORT>/fledge/proxy/{service}\n   \"\"\"\n    svc_name = request.match_info.get('service_name', None)\n    svc_name = urllib.parse.unquote(svc_name) if svc_name is not None else None\n    try:\n        ServiceRegistry.get(name=svc_name)\n        if svc_name not in server.Server._API_PROXIES:\n            raise ValueError(\"For {} service, no proxy operation is configured.\".format(svc_name))\n    except service_registry_exceptions.DoesNotExist:\n        msg = \"{} service not found.\".format(svc_name)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except (TypeError, 
ValueError, KeyError) as err:\n        msg = str(err)\n        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        raise web.HTTPInternalServerError(reason=msg, body=json.dumps({'message': msg}))\n    else:\n        # Remove service name KV pair from in-memory structure\n        del server.Server._API_PROXIES[svc_name]\n        return web.json_response({\"result\": \"Configured proxy for {} service has been removed.\".format(svc_name)})\n\n\nasync def handler(request: web.Request) -> web.Response:\n    \"\"\" Wildcard handler for proxied extension routes \"\"\"\n    allow_methods = [\"GET\", \"POST\", \"PUT\", \"DELETE\"]\n    if request.method not in allow_methods:\n        raise web.HTTPMethodNotAllowed(method=request.method, allowed_methods=allow_methods)\n    try:\n        # Find the service name that matches request.rel_url in the in-memory proxy dict\n        is_proxy_svc_found = False\n        proxy_svc_name = None\n        for svc_name, svc_info in server.Server._API_PROXIES.items():\n            # The /extension identifier is handled internally so that the external service does not need to change its routes\n            if svc_info['prefix_url'] in str(request.rel_url).replace('/extension', ''):\n                is_proxy_svc_found = True\n                proxy_svc_name = svc_name\n        if is_proxy_svc_found and proxy_svc_name is not None:\n            svc, token = await _get_service_record_info_along_with_bearer_token(proxy_svc_name)\n            url = str(request.url).split('fledge/extension/')[1]\n            status_code, response, content_type = await _call_microservice_service_api(\n                request, svc._protocol, svc._address, svc._port, url, token)\n        else:\n            msg = \"{} route not found.\".format(request.rel_url)\n            return web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    except Exception as ex:\n        msg = str(ex)\n        raise web.HTTPInternalServerError(reason=msg, 
body=json.dumps({\"message\": msg}))\n    else:\n        return web.json_response(status=status_code, body=response, content_type=content_type)\n\nasync def _get_service_record_info_along_with_bearer_token(svc_name):\n    try:\n        service = ServiceRegistry.get(name=svc_name)\n        svc_name = service[0]._name\n        token = ServiceRegistry.getBearerToken(svc_name)\n    except service_registry_exceptions.DoesNotExist:\n        msg = \"No service available with {} name.\".format(svc_name)\n        raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n    else:\n        return service[0], token\n\n\n@make_async\ndef _post_multipart(url, headers, payload):\n    import requests\n    from requests_toolbelt.multipart.encoder import MultipartEncoder\n    from aiohttp.web_request import FileField\n    multipart_payload = {}\n    for k, v in payload.items():\n        multipart_payload[k] = (v.filename, v.file.read(), 'text/plain') if isinstance(v, FileField) else v\n    m = MultipartEncoder(fields=multipart_payload)\n    headers['Content-Type'] = m.content_type\n    result = requests.post(url, data=m, headers=headers)\n    return result\n\n\nasync def _call_microservice_service_api(\n        request: web.Request, protocol: str, address: str, port: int, uri: str, token: str):\n    # Custom Request header\n    headers = {}\n    if token is not None:\n        headers['Authorization'] = \"Bearer {}\".format(token)\n    url = \"{}://{}:{}/{}\".format(protocol, address, port, uri)\n    try:\n        if request.method == 'GET':\n            async with aiohttp.ClientSession() as session:\n                async with session.get(url, headers=headers) as resp:\n                    message = await resp.read() if resp.content_type == 'application/octet-stream' else await resp.text()\n                    response = (resp.status, message, resp.content_type)\n                    if resp.status not in range(200, 209):\n                        _logger.error(\"GET 
Request Error: Http status code: {}, reason: {}, response: {}\".format(\n                            resp.status, resp.reason, message))\n        elif request.method == 'POST':\n            payload = await request.post()\n            if 'multipart/form-data' in request.headers['Content-Type']:\n                r = await _post_multipart(url, headers, payload)\n                response = (r.status_code, r.text)\n                if r.status_code not in range(200, 209):\n                    _logger.error(\"POST Request Error: Http status code: {}, reason: {}, response: {}\".format(\n                        r.status_code, r.reason, r.text))\n            else:\n                payload = await request.json()\n                async with aiohttp.ClientSession() as session:\n                    async with session.post(url, data=json.dumps(payload), headers=headers) as resp:\n                        message = await resp.text()\n                        response = (resp.status, message)\n                        if resp.status not in range(200, 209):\n                            _logger.error(\"POST Request Error: Http status code: {}, reason: {}, response: {}\".format(\n                                resp.status, resp.reason, message))\n        elif request.method == 'PUT':\n            payload = await request.json()\n            async with aiohttp.ClientSession() as session:\n                async with session.put(url, data=json.dumps(payload), headers=headers) as resp:\n                    message = await resp.text()\n                    response = (resp.status, message)\n                    if resp.status not in range(200, 209):\n                        _logger.error(\"PUT Request Error: Http status code: {}, reason: {}, response: {}\".format(\n                            resp.status, resp.reason, message))\n        elif request.method == 'DELETE':\n            async with aiohttp.ClientSession() as session:\n                async with session.delete(url, headers=headers) as 
resp:\n                    message = await resp.text()\n                    response = (resp.status, message)\n                    if resp.status not in range(200, 209):\n                        _logger.error(\"DELETE Request Error: Http status code: {}, reason: {}, response: {}\".format(\n                            resp.status, resp.reason, message))\n    except Exception:\n        # re-raise the original exception with its traceback intact\n        raise\n    else:\n        response_tuples = response\n        # Default content-type is 'application/json'\n        if len(response) == 2:\n            response_tuples = response + ('application/json',)\n        # Return tuple - (http status code, message, content-type)\n        return response_tuples\n"
  },
  {
    "path": "python/fledge/services/core/routes.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom fledge.services.core import proxy\nfrom fledge.services.core.api import alerts, asset_tracker, auth, backup_restore, browser, certificate_store, filters, health, notification, north, package_log, performance_monitor, python_packages, south, support, service, task, update\nfrom fledge.services.core.api import audit as api_audit\nfrom fledge.services.core.api import common as api_common\nfrom fledge.services.core.api import configuration as api_configuration\nfrom fledge.services.core.api import scheduler as api_scheduler\nfrom fledge.services.core.api import statistics as api_statistics\nfrom fledge.services.core.api.control_service import script_management, acl_management, pipeline, entrypoint\nfrom fledge.services.core.api import pipeline_debugger\nfrom fledge.services.core.api.plugins import data as plugin_data\nfrom fledge.services.core.api.plugins import install as plugins_install, discovery as plugins_discovery\nfrom fledge.services.core.api.plugins import update as plugins_update\nfrom fledge.services.core.api.plugins import remove as plugins_remove\nfrom fledge.services.core.api.plugins import config_validator\nfrom fledge.services.core.api.repos import configure as configure_repo\nfrom fledge.services.core.api.snapshot import plugins as snapshot_plugins\nfrom fledge.services.core.api.snapshot import table as snapshot_table\n\n\n__author__ = \"Ashish Jabble, Praveen Garg, Massimiliano Pinto, Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017-2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\ndef setup(app):\n    app.router.add_route('GET', '/fledge/ping', api_common.ping)\n    app.router.add_route('PUT', '/fledge/shutdown', api_common.shutdown)\n    app.router.add_route('PUT', '/fledge/restart', api_common.restart)\n\n    # user\n    app.router.add_route('GET', '/fledge/user', auth.get_user)\n    
app.router.add_route('PUT', '/fledge/user', auth.update_me)\n    app.router.add_route('PUT', '/fledge/user/{user_id}/password', auth.update_password)\n\n    # role\n    app.router.add_route('GET', '/fledge/user/role', auth.get_roles)\n\n    # auth\n    app.router.add_route('POST', '/fledge/login', auth.login)\n    app.router.add_route('PUT', '/fledge/logout', auth.logout_me)\n\n    # auth ott\n    app.router.add_route('GET', '/fledge/auth/ott', auth.get_ott)\n\n    # logout all active sessions\n    app.router.add_route('PUT', '/fledge/{user_id}/logout', auth.logout)\n\n    # admin\n    app.router.add_route('POST', '/fledge/admin/user', auth.create_user)\n    app.router.add_route('DELETE', '/fledge/admin/{user_id}/delete', auth.delete_user)\n    app.router.add_route('PUT', '/fledge/admin/{user_id}', auth.update_user)\n    app.router.add_route('PUT', '/fledge/admin/{user_id}/enable', auth.enable_user)\n    app.router.add_route('PUT', '/fledge/admin/{user_id}/unblock', auth.unblock_user)\n    app.router.add_route('PUT', '/fledge/admin/{user_id}/reset', auth.reset)\n    app.router.add_route('POST', '/fledge/admin/{user_id}/authcertificate', auth.create_certificate)\n\n    # Configuration\n    app.router.add_route('GET', '/fledge/category', api_configuration.get_categories)\n    app.router.add_route('POST', '/fledge/category', api_configuration.create_category)\n    app.router.add_route('GET', '/fledge/category/{category_name}', api_configuration.get_category)\n    app.router.add_route('PUT', '/fledge/category/{category_name}', api_configuration.update_configuration_item_bulk)\n    app.router.add_route('DELETE', '/fledge/category/{category_name}', api_configuration.delete_category)\n    app.router.add_route('POST', '/fledge/category/{category_name}/children', api_configuration.create_child_category)\n    app.router.add_route('GET', '/fledge/category/{category_name}/children', api_configuration.get_child_category)\n    app.router.add_route('DELETE', 
'/fledge/category/{category_name}/children/{child_category}', api_configuration.delete_child_category)\n    app.router.add_route('DELETE', '/fledge/category/{category_name}/parent', api_configuration.delete_parent_category)\n    app.router.add_route('GET', '/fledge/category/{category_name}/{config_item}', api_configuration.get_category_item)\n    app.router.add_route('PUT', '/fledge/category/{category_name}/{config_item}', api_configuration.set_configuration_item)\n    app.router.add_route('POST', '/fledge/category/{category_name}/{config_item}', api_configuration.add_configuration_item)\n    app.router.add_route('DELETE', '/fledge/category/{category_name}/{config_item}/value', api_configuration.delete_configuration_item_value)\n    app.router.add_route('POST', '/fledge/category/{category_name}/{config_item}/upload', api_configuration.upload_script)\n    # Scheduler\n    # Scheduled_processes - As per doc\n    app.router.add_route('GET', '/fledge/schedule/process', api_scheduler.get_scheduled_processes)\n    app.router.add_route('POST', '/fledge/schedule/process', api_scheduler.post_scheduled_process)\n    app.router.add_route('GET', '/fledge/schedule/process/{scheduled_process_name}', api_scheduler.get_scheduled_process)\n\n    # Schedules - As per doc\n    app.router.add_route('GET', '/fledge/schedule', api_scheduler.get_schedules)\n    app.router.add_route('POST', '/fledge/schedule', api_scheduler.post_schedule)\n    app.router.add_route('GET', '/fledge/schedule/type', api_scheduler.get_schedule_type)\n    app.router.add_route('GET', '/fledge/schedule/{schedule_id}', api_scheduler.get_schedule)\n    app.router.add_route('PUT', '/fledge/schedule/{schedule_id}/enable', api_scheduler.enable_schedule)\n    app.router.add_route('PUT', '/fledge/schedule/{schedule_id}/disable', api_scheduler.disable_schedule)\n\n    app.router.add_route('PUT', '/fledge/schedule/enable', api_scheduler.enable_schedule_with_name)\n    app.router.add_route('PUT', 
'/fledge/schedule/disable', api_scheduler.disable_schedule_with_name)\n\n    app.router.add_route('POST', '/fledge/schedule/start/{schedule_id}', api_scheduler.start_schedule)\n    app.router.add_route('PUT', '/fledge/schedule/{schedule_id}', api_scheduler.update_schedule)\n    app.router.add_route('DELETE', '/fledge/schedule/{schedule_id}', api_scheduler.delete_schedule)\n\n    # Tasks - As per doc\n    app.router.add_route('GET', '/fledge/task', api_scheduler.get_tasks)\n    app.router.add_route('GET', '/fledge/task/state', api_scheduler.get_task_state)\n    app.router.add_route('GET', '/fledge/task/latest', api_scheduler.get_tasks_latest)\n    app.router.add_route('GET', '/fledge/task/{task_id}', api_scheduler.get_task)\n    app.router.add_route('PUT', '/fledge/task/{task_id}/cancel', api_scheduler.cancel_task)\n\n    # Service\n    app.router.add_route('POST', '/fledge/service', service.add_service)\n    app.router.add_route('GET', '/fledge/service', service.get_health)\n    app.router.add_route('DELETE', '/fledge/service/{service_name}', service.delete_service)\n    app.router.add_route('GET', '/fledge/service/available', service.get_available)\n    app.router.add_route('GET', '/fledge/service/installed', service.get_installed)\n    app.router.add_route('GET', '/fledge/service/info', service.get_service_info)\n    app.router.add_route('GET', '/fledge/service/info/{service_name}', service.get_service_info_by_name)\n    app.router.add_route('PUT', '/fledge/service/{type}/{name}/update', service.update_service)\n    app.router.add_route('POST', '/fledge/service/{service_name}/otp', service.issueOTPToken)\n\n    # Task\n    app.router.add_route('POST', '/fledge/scheduled/task', task.add_task)\n    app.router.add_route('DELETE', '/fledge/scheduled/task/{task_name}', task.delete_task)\n\n    # South\n    app.router.add_route('GET', '/fledge/south', south.get_south_services)\n\n    # North\n    app.router.add_route('GET', '/fledge/north', 
north.get_north_schedules)\n\n    # assets\n    browser.setup(app)\n\n    # asset tracker\n    app.router.add_route('GET',    '/fledge/track', asset_tracker.get_asset_tracker_events)\n    app.router.add_route('PUT',    '/fledge/track/service/{service}/asset/{asset}/event/{event}',\n                         asset_tracker.deprecate_asset_track_entry)\n    app.router.add_route('GET', '/fledge/track/storage/assets', asset_tracker.get_datapoint_usage)\n\n    # Statistics - As per doc\n    app.router.add_route('GET', '/fledge/statistics', api_statistics.get_statistics)\n    app.router.add_route('GET', '/fledge/statistics/history', api_statistics.get_statistics_history)\n    app.router.add_route('GET', '/fledge/statistics/rate', api_statistics.get_statistics_rate)\n\n    # Audit trail - As per doc\n    app.router.add_route('POST', '/fledge/audit', api_audit.create_audit_entry)\n    app.router.add_route('GET', '/fledge/audit', api_audit.get_audit_entries)\n    app.router.add_route('GET', '/fledge/audit/logcode', api_audit.get_audit_log_codes)\n    app.router.add_route('GET', '/fledge/audit/severity', api_audit.get_audit_log_severity)\n\n    # Backup & Restore - As per doc\n    app.router.add_route('GET', '/fledge/backup', backup_restore.get_backups)\n    app.router.add_route('POST', '/fledge/backup', backup_restore.create_backup)\n    app.router.add_route('POST', '/fledge/backup/upload', backup_restore.upload_backup)\n    app.router.add_route('GET', '/fledge/backup/status', backup_restore.get_backup_status)\n    app.router.add_route('GET', '/fledge/backup/{backup_id}', backup_restore.get_backup_details)\n    app.router.add_route('DELETE', '/fledge/backup/{backup_id}', backup_restore.delete_backup)\n    app.router.add_route('GET', '/fledge/backup/{backup_id}/download', backup_restore.get_backup_download)\n    app.router.add_route('PUT', '/fledge/backup/{backup_id}/restore', backup_restore.restore_backup)\n\n    # Package Update on demand\n    app.router.add_route('PUT', 
'/fledge/update', update.update_package)\n    app.router.add_route('GET', '/fledge/update', update.get_updates)\n\n    # certs store\n    app.router.add_route('GET', '/fledge/certificate', certificate_store.get_certs)\n    app.router.add_route('POST', '/fledge/certificate', certificate_store.upload)\n    app.router.add_route('DELETE', '/fledge/certificate/{name}', certificate_store.delete_certificate)\n\n    # Support bundle\n    app.router.add_route('GET', '/fledge/support', support.fetch_support_bundle)\n    app.router.add_route('GET', '/fledge/support/{bundle}', support.fetch_support_bundle_item)\n    app.router.add_route('POST', '/fledge/support', support.create_support_bundle)\n\n    # Get Syslog\n    app.router.add_route('GET', '/fledge/syslog', support.get_syslog_entries)\n\n    # Package logs\n    app.router.add_route('GET', '/fledge/package/log', package_log.get_logs)\n    app.router.add_route('GET', '/fledge/package/log/{name}', package_log.get_log_by_name)\n    app.router.add_route('GET', '/fledge/package/{action}/status', package_log.get_package_status)\n\n    # Plugins (install, discovery, update, delete)\n    app.router.add_route('GET', '/fledge/plugins/installed', plugins_discovery.get_plugins_installed)\n    app.router.add_route('GET', '/fledge/plugins/available', plugins_discovery.get_plugins_available)\n    app.router.add_route('POST', '/fledge/plugins', plugins_install.add_plugin)\n    if api_common.get_version() <= \"2.1.0\":\n        \"\"\"Note: This is only to maintain backward compatibility 
(having core version<=2.1.0)\n         Plugin Update & Delete routes on the basis of type & installed name\"\"\"\n        app.router.add_route('PUT', '/fledge/plugins/{type}/{name}/update', plugins_update.update_plugin)\n        app.router.add_route('DELETE', '/fledge/plugins/{type}/{name}', plugins_remove.remove_plugin)\n    else:\n        # routes available 2.1.0 onwards\n        app.router.add_route('PUT', '/fledge/plugins/{package_name}', plugins_update.update_package)\n        app.router.add_route('DELETE', '/fledge/plugins/{package_name}', plugins_remove.remove_package)\n    # plugin data\n    app.router.add_route('GET', '/fledge/service/{service_name}/persist', plugin_data.get_persist_plugins)\n    app.router.add_route('GET', '/fledge/service/{service_name}/plugin/{plugin_name}/data', plugin_data.get)\n    app.router.add_route('POST', '/fledge/service/{service_name}/plugin/{plugin_name}/data', plugin_data.add)\n    app.router.add_route('DELETE', '/fledge/service/{service_name}/plugin/{plugin_name}/data', plugin_data.delete)\n    # Plugin validation\n    config_validator.setup(app)\n\n    # Filters \n    app.router.add_route('POST', '/fledge/filter', filters.create_filter)\n    app.router.add_route('PUT', '/fledge/filter/{user_name}/pipeline', filters.add_filters_pipeline)\n    app.router.add_route('GET', '/fledge/filter/{user_name}/pipeline', filters.get_filter_pipeline)\n    app.router.add_route('GET', '/fledge/filter/{filter_name}', filters.get_filter)\n    app.router.add_route('GET', '/fledge/filter', filters.get_filters)\n    app.router.add_route('DELETE', '/fledge/filter/{user_name}/pipeline', filters.delete_filter_pipeline)\n    app.router.add_route('DELETE', '/fledge/filter/{filter_name}', filters.delete_filter)\n\n    # Notification\n    app.router.add_route('GET', '/fledge/notification', notification.get_notifications)\n    app.router.add_route('GET', '/fledge/notification/plugin', notification.get_plugin)\n    app.router.add_route('GET', 
'/fledge/notification/type', notification.get_type)\n    app.router.add_route('GET', '/fledge/notification/{notification_name}', notification.get_notification)\n    app.router.add_route('POST', '/fledge/notification', notification.post_notification)\n    app.router.add_route('PUT', '/fledge/notification/{notification_name}', notification.put_notification)\n    app.router.add_route('DELETE', '/fledge/notification/{notification_name}', notification.delete_notification)\n    app.router.add_route('GET', '/fledge/notification/{notification_name}/delivery', notification.get_delivery_channels)\n    app.router.add_route('POST', '/fledge/notification/{notification_name}/delivery',\n                         notification.post_delivery_channel)\n    app.router.add_route('GET', '/fledge/notification/{notification_name}/delivery/{channel_name}',\n                         notification.get_delivery_channel_configuration)\n    app.router.add_route('DELETE', '/fledge/notification/{notification_name}/delivery/{channel_name}',\n                         notification.delete_delivery_channel)\n\n    # Snapshot plugins\n    app.router.add_route('GET', '/fledge/snapshot/plugins', snapshot_plugins.get_snapshot)\n    app.router.add_route('POST', '/fledge/snapshot/plugins', snapshot_plugins.post_snapshot)\n    app.router.add_route('PUT', '/fledge/snapshot/plugins/{id}', snapshot_plugins.put_snapshot)\n    app.router.add_route('DELETE', '/fledge/snapshot/plugins/{id}', snapshot_plugins.delete_snapshot)\n\n    # Snapshot config\n    app.router.add_route('GET', '/fledge/snapshot/category', snapshot_table.get_snapshot)\n    app.router.add_route('POST', '/fledge/snapshot/category', snapshot_table.post_snapshot)\n    app.router.add_route('PUT', '/fledge/snapshot/category/{id}', snapshot_table.put_snapshot)\n    app.router.add_route('DELETE', '/fledge/snapshot/category/{id}', snapshot_table.delete_snapshot)\n    app.router.add_route('GET', '/fledge/snapshot/schedule', snapshot_table.get_snapshot)\n  
  app.router.add_route('POST', '/fledge/snapshot/schedule', snapshot_table.post_snapshot)\n    app.router.add_route('PUT', '/fledge/snapshot/schedule/{id}', snapshot_table.put_snapshot)\n    app.router.add_route('DELETE', '/fledge/snapshot/schedule/{id}', snapshot_table.delete_snapshot)\n\n    # Repo configure\n    app.router.add_route('POST', '/fledge/repository', configure_repo.add_package_repo)\n\n    # Control Service Support\n    # script management\n    script_management.setup(app)\n\n    # Access Control List Management\n    app.router.add_route('POST', '/fledge/ACL', acl_management.add_acl)\n    app.router.add_route('GET', '/fledge/ACL', acl_management.get_all_acls)\n    app.router.add_route('GET', '/fledge/ACL/{acl_name}', acl_management.get_acl)\n    app.router.add_route('PUT', '/fledge/ACL/{acl_name}', acl_management.update_acl)\n    app.router.add_route('DELETE', '/fledge/ACL/{acl_name}', acl_management.delete_acl)\n    app.router.add_route('PUT', '/fledge/service/{service_name}/ACL', acl_management.attach_acl_to_service)\n    app.router.add_route('DELETE', '/fledge/service/{service_name}/ACL', acl_management.detach_acl_from_service)\n\n    # Control Pipelines\n    pipeline.setup(app)\n\n    # Control Entrypoint\n    entrypoint.setup(app)\n\n    # Python packages\n    app.router.add_route('GET', '/fledge/python/packages', python_packages.get_packages)\n    app.router.add_route('POST', '/fledge/python/package', python_packages.install_package)\n\n    # Health related calls\n    app.router.add_route('GET', '/fledge/health/storage', health.get_storage_health)\n    app.router.add_route('GET', '/fledge/health/logging', health.get_logging_health)\n\n    # Proxy Admin API setup with regex\n    proxy.admin_api_setup(app)\n\n    # Performance Monitor\n    performance_monitor.setup(app)\n\n    # Alerts\n    alerts.setup(app)\n\n    # Pipeline Debugger\n    pipeline_debugger.setup(app)\n\n    # enable cors support\n    enable_cors(app)\n\n    # enable a live 
debugger (watcher) for requests, see https://github.com/aio-libs/aiohttp-debugtoolbar\n    # this will neutralize error middleware\n    # Note: pip install aiohttp_debugtoolbar\n\n    # enable_debugger(app)\n\n\ndef enable_cors(app):\n    \"\"\" implements Cross Origin Resource Sharing (CORS) support \"\"\"\n    import aiohttp_cors\n\n    # Configure default CORS settings.\n    cors = aiohttp_cors.setup(app, defaults={\n        \"*\": aiohttp_cors.ResourceOptions(\n            allow_methods=[\"GET\", \"POST\", \"PUT\", \"DELETE\"],\n            allow_credentials=True,\n            expose_headers=\"*\",\n            allow_headers=\"*\",\n        )\n    })\n\n    # Configure CORS on all routes.\n    for route in list(app.router.routes()):\n        cors.add(route)\n\n\ndef enable_debugger(app):\n    \"\"\" provides a debug toolbar for server requests \"\"\"\n    import aiohttp_debugtoolbar\n\n    # dev mode only\n    # this will be served at API_SERVER_URL/_debugtoolbar\n    aiohttp_debugtoolbar.setup(app)\n"
  },
  {
    "path": "python/fledge/services/core/scheduler/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/core/scheduler/entities.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Fledge Scheduler module\"\"\"\n\nimport collections\nimport datetime\nimport uuid\nfrom enum import IntEnum\nfrom typing import List\n\n__author__ = \"Terris Linenbach, Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n__all__ = ('ScheduledProcess', 'Schedule', 'IntervalSchedule', 'TimedSchedule', 'ManualSchedule', 'StartUpSchedule', 'Task')\n\n\nclass ScheduledProcess(object):\n    \"\"\"Represents a program that a Task can run\"\"\"\n\n    __slots__ = ['name', 'script']\n\n    def __init__(self):\n        self.name = None  # type: str\n        \"\"\"Unique identifier\"\"\"\n        self.script = None  # type: List[str]\n\n\nclass Schedule(object):\n    \"\"\"Schedule base class\"\"\"\n\n    class Type(IntEnum):\n        \"\"\"Enumeration for schedules.schedule_type\"\"\"\n        STARTUP = 1\n        TIMED = 2\n        INTERVAL = 3\n        MANUAL = 4\n\n    __slots__ = ['schedule_id', 'name', 'process_name', 'exclusive', 'enabled', 'repeat', 'schedule_type']\n\n    def __init__(self, schedule_type: Type):\n        self.schedule_id = None  # type: uuid.UUID\n        self.name = None  # type: str\n        self.exclusive = True\n        self.enabled = False\n        self.repeat = None  # type: datetime.timedelta\n        self.process_name = None  # type: str\n        self.schedule_type = schedule_type  # type: Schedule.Type\n\n    def toDict(self):\n        return {'name': self.name,\n                'type': self.schedule_type,\n                'processName': self.process_name,\n                'repeat': self.repeat.total_seconds() if self.repeat else 0,\n                'enabled': self.enabled,\n                'exclusive': self.exclusive\n                }\n\n\nclass IntervalSchedule(Schedule):\n    \"\"\"Interval schedule\"\"\"\n\n    def __init__(self):\n  
      super().__init__(self.Type.INTERVAL)\n\n\nclass TimedSchedule(Schedule):\n    \"\"\"Timed schedule\"\"\"\n    __slots__ = ['time', 'day']\n\n    def __init__(self):\n        super().__init__(self.Type.TIMED)\n        self.time = None  # type: datetime.time\n        self.day = None  # type: int\n        \"\"\"1 (Monday) to 7 (Sunday)\"\"\"\n\n    def toDict(self):\n        my_dict = super().toDict()\n        my_dict['time'] = str(self.time.hour) + ':' + str(self.time.minute) + ':' + str(self.time.second) \\\n            if self.time else '00:00:00'\n        my_dict['day'] = self.day if self.day else 0\n        return my_dict\n\n\nclass ManualSchedule(Schedule):\n    \"\"\"A schedule that is run manually\"\"\"\n\n    def __init__(self):\n        super().__init__(self.Type.MANUAL)\n\n\nclass StartUpSchedule(Schedule):\n    \"\"\"A schedule that is run when the _scheduler starts\"\"\"\n\n    def __init__(self):\n        super().__init__(self.Type.STARTUP)\n\n\nclass Task(object):\n    \"\"\"A task represents an operating system process\"\"\"\n\n    class State(IntEnum):\n        \"\"\"Enumeration for tasks.task_state\"\"\"\n        RUNNING = 1\n        COMPLETE = 2\n        CANCELED = 3\n        INTERRUPTED = 4\n\n    # Class attributes\n    attr = collections.namedtuple('TaskAttributes', ['state', 'process_name', 'schedule_name', 'start_time',\n                                                     'end_time', 'exit_code'])\n\n    __slots__ = ['task_id', 'process_name', 'schedule_name', 'state', 'cancel_requested', 'start_time',\n                 'end_time', 'exit_code', 'reason', 'schedule_id']\n\n    def __init__(self):\n        # Instance attributes\n        self.task_id = None  # type: uuid.UUID\n        \"\"\"Unique identifier\"\"\"\n        self.process_name = None  # type: str\n        self.schedule_name = None  # type: str\n        self.schedule_id = None  # type: uuid.UUID\n        self.reason = None  # type: str\n        self.state = None  # type: Task.State\n        self.cancel_requested = None  # type: datetime.datetime\n        self.start_time = None  # type: datetime.datetime\n        self.end_time = None  # type: datetime.datetime\n        self.exit_code = None  # type: int\n"
  },
  {
    "path": "python/fledge/services/core/scheduler/exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Fledge Scheduler module\"\"\"\n\nimport uuid\n\n__author__ = \"Terris Linenbach, Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n__all__ = ('NotReadyError', 'DuplicateRequestError', 'TaskNotRunningError', 'TaskNotFoundError', 'ScheduleNotFoundError',\n           'ScheduleProcessNameNotFoundError')\n\n\nclass NotReadyError(RuntimeError):\n    pass\n\n\nclass DuplicateRequestError(RuntimeError):\n    pass\n\n\nclass TaskNotRunningError(RuntimeError):\n    \"\"\"Raised when canceling a task and the task isn't running\"\"\"\n\n    def __init__(self, task_id: uuid.UUID, *args):\n        self.task_id = task_id\n        super().__init__(\n            \"Task is not running: {}\".format(task_id), *args)\n\n\nclass TaskNotFoundError(ValueError):\n    def __init__(self, task_id: uuid.UUID, *args):\n        self.task_id = task_id\n        super().__init__(\n            \"Task not found: {}\".format(task_id), *args)\n\n\nclass ScheduleNotFoundError(ValueError):\n    def __init__(self, schedule_id: uuid.UUID, *args):\n        self.schedule_id = schedule_id\n        super().__init__(\n            \"Schedule not found: {}\".format(schedule_id), *args)\n\n\nclass ScheduleProcessNameNotFoundError(ValueError):\n    pass\n"
  },
  {
    "path": "python/fledge/services/core/scheduler/scheduler.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Fledge Scheduler module\"\"\"\n\nimport asyncio\nimport collections\nimport datetime\nimport logging\nimport math\nimport time\nimport uuid\nimport os\nimport subprocess\nimport signal\nfrom typing import List\n\nfrom fledge.common import utils as common_utils\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.exceptions import *\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.services.core.scheduler.entities import *\nfrom fledge.services.core.scheduler.exceptions import *\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core.service_registry import exceptions as service_registry_exceptions\nfrom fledge.services.common import utils\n\n__author__ = \"Terris Linenbach, Amarendra K Sinha, Massimiliano Pinto\"\n__copyright__ = \"Copyright (c) 2017-2021 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n# FLEDGE_ROOT env variable\n_FLEDGE_ROOT = os.getenv(\"FLEDGE_ROOT\", default='/usr/local/fledge')\n_SCRIPTS_DIR = os.path.expanduser(_FLEDGE_ROOT + '/scripts')\n\n\nclass Scheduler(object):\n    \"\"\"Fledge Task Scheduler\n\n    Starts and tracks 'tasks' that run periodically,\n    start-up, and/or manually.\n\n    Schedules specify when to start and restart Tasks. A Task\n    is an operating system process. 
ScheduleProcesses\n    specify process/command name and parameters.\n\n    Most methods are coroutines and use the default\n    event loop to create tasks.\n\n    Usage:\n        - Call :meth:`start`\n        - Wait\n        - Call :meth:`stop`\n    \"\"\"\n\n    # TODO: Document the fields\n    _ScheduleRow = collections.namedtuple('ScheduleRow', ['id', 'name', 'type', 'time', 'day',\n                                                          'repeat', 'repeat_seconds', 'exclusive',\n                                                          'enabled', 'process_name'])\n    \"\"\"Represents a row in the schedules table\"\"\"\n\n    class _TaskProcess(object):\n        \"\"\"Tracks a running task with some flags\"\"\"\n        __slots__ = ['task_id', 'process', 'cancel_requested', 'schedule', 'start_time', 'future']\n\n        def __init__(self):\n            self.task_id = None  # type: uuid.UUID\n            self.process = None  # type: asyncio.subprocess.Process\n            self.cancel_requested = None  # type: int\n            \"\"\"Epoch time when cancel was requested\"\"\"\n            self.schedule = None  # Schedule._ScheduleRow\n            self.start_time = None  # type: int\n            \"\"\"Epoch time when the task was started\"\"\"\n            self.future = None\n\n    # TODO: Methods that accept a schedule and look in _schedule_executions\n    # should accept schedule_execution instead. 
Add reference to schedule\n    # in _ScheduleExecution.\n\n    class _ScheduleExecution(object):\n        \"\"\"Tracks information about schedules\"\"\"\n\n        __slots__ = ['next_start_time', 'task_processes', 'start_now']\n\n        def __init__(self):\n            self.next_start_time = None\n            \"\"\"When to next start a task for the schedule\"\"\"\n            self.task_processes = dict()\n            \"\"\"dict of task id to _TaskProcess\"\"\"\n            self.start_now = False\n            \"\"\"True when a task is queued to start via :meth:`start_task`\"\"\"\n\n    # Constant class attributes\n    _DEFAULT_MAX_RUNNING_TASKS = 50\n    \"\"\"Maximum number of running tasks allowed at any given time\"\"\"\n    _DEFAULT_MAX_COMPLETED_TASK_AGE_DAYS = 30\n    \"\"\"Maximum age of rows in the task table that have finished, in days\"\"\"\n    _DELETE_TASKS_LIMIT = 500\n    \"\"\"The maximum number of rows to delete in the tasks table in a single transaction\"\"\"\n    _DEFAULT_PROCESS_SCRIPT_PRIORITY = 999\n    \"\"\"Priority order for process scripts\"\"\"\n\n    _HOUR_SECONDS = 3600\n    _DAY_SECONDS = 3600 * 24\n    _WEEK_SECONDS = 3600 * 24 * 7\n    _ONE_HOUR = datetime.timedelta(hours=1)\n    _ONE_DAY = datetime.timedelta(days=1)\n\n    _MAX_SLEEP = 9999999\n    \"\"\"When there is nothing to do, sleep for this number of seconds (forever)\"\"\"\n\n    _STOP_WAIT_SECONDS = 5\n    \"\"\"Wait this number of seconds in :meth:`stop` for tasks to stop\"\"\"\n\n    _PURGE_TASKS_FREQUENCY_SECONDS = _DAY_SECONDS\n    \"\"\"How frequently to purge the tasks table\"\"\"\n\n    # Mostly constant class attributes\n    _logger = None  # type: logging.Logger\n\n    _core_management_host = None\n    _core_management_port = None\n    _storage = None\n    _storage_async = None\n\n    def __init__(self, core_management_host=None, core_management_port=None, is_safe_mode=False):\n        \"\"\"Constructor\"\"\"\n\n        cls = Scheduler\n\n        # Initialize class 
attributes\n        if not cls._logger:\n            cls._logger = FLCoreLogger().get_logger(__name__)\n        if not cls._core_management_port:\n            cls._core_management_port = core_management_port\n        if not cls._core_management_host:\n            cls._core_management_host = core_management_host\n\n        # Instance attributes\n\n        self._is_safe_mode = is_safe_mode  # type: bool\n        \"\"\"When True the scheduler runs in restricted mode:\n        only API operations and current state are accessible; no tasks/processes will be triggered\"\"\"\n        self._storage_async = None\n        self._ready = False\n        \"\"\"True when the scheduler is ready to accept API calls\"\"\"\n        self._start_time = None  # type: int\n        \"\"\"When the scheduler started\"\"\"\n        self._max_running_tasks = None  # type: int\n        \"\"\"Maximum number of tasks that can execute at any given time\"\"\"\n        self._paused = False\n        \"\"\"When True, the scheduler will not start any new tasks\"\"\"\n        self._process_scripts = dict()\n        \"\"\"Dictionary of scheduled_processes.name to script\"\"\"\n        self._schedules = dict()\n        \"\"\"Dictionary of schedules.id to _ScheduleRow\"\"\"\n        self._schedule_executions = dict()\n        \"\"\"Dictionary of schedules.id to _ScheduleExecution\"\"\"\n        self._task_processes = dict()\n        \"\"\"Dictionary of tasks.id to _TaskProcess\"\"\"\n        self._check_processes_pending = False\n        \"\"\"bool: True when request to run check_processes\"\"\"\n        self._scheduler_loop_task = None  # type: asyncio.Task\n        \"\"\"Task for :meth:`_scheduler_loop`, to ensure it has finished\"\"\"\n        self._scheduler_loop_sleep_task = None  # type: asyncio.Task\n        \"\"\"Task for asyncio.sleep used by :meth:`_scheduler_loop`\"\"\"\n        self.current_time = None  # type: int\n        \"\"\"Time to use when determining when to start tasks, for 
testing\"\"\"\n        self._last_task_purge_time = None  # type: int\n        \"\"\"When the tasks table was last purged\"\"\"\n        self._max_completed_task_age = None  # type: datetime.timedelta\n        \"\"\"Delete finished task rows when they become this old\"\"\"\n        self._purge_tasks_task = None  # type: asyncio.Task\n        \"\"\"asyncio task for :meth:`purge_tasks`, if scheduled to run\"\"\"\n        self._restore_backup_id = None  # type: int\n        \"\"\"Backup id to restore; used when SCHEDULE_RESTORE_ON_DEMAND runs\"\"\"\n\n    @property\n    def max_completed_task_age(self) -> datetime.timedelta:\n        return self._max_completed_task_age\n\n    @max_completed_task_age.setter\n    def max_completed_task_age(self, value: datetime.timedelta) -> None:\n        if not isinstance(value, datetime.timedelta):\n            raise TypeError(\"value must be a datetime.timedelta\")\n        self._max_completed_task_age = value\n\n    @property\n    def max_running_tasks(self) -> int:\n        \"\"\"Returns the maximum number of tasks that can run at any given time\n        \"\"\"\n        return self._max_running_tasks\n\n    @max_running_tasks.setter\n    def max_running_tasks(self, value: int) -> None:\n        \"\"\"Alters the maximum number of tasks that can run at any given time\n\n        Use 0 or a negative value to suspend task creation\n        \"\"\"\n        self._max_running_tasks = value\n        self._resume_check_schedules()\n\n    def _resume_check_schedules(self):\n        \"\"\"Wakes up :meth:`_scheduler_loop` so that\n        :meth:`_check_schedules` will be called the next time 'await'\n        is invoked.\n\n        \"\"\"\n        if self._scheduler_loop_sleep_task:\n            try:\n                self._scheduler_loop_sleep_task.cancel()\n                self._scheduler_loop_sleep_task = None\n            except RuntimeError:\n                self._check_processes_pending = True\n        else:\n            self._check_processes_pending = True\n\n    async def _wait_for_task_completion(self, task_process: _TaskProcess) -> None:\n        exit_code = await task_process.process.wait()\n        schedule = task_process.schedule\n\n        self._logger.info(\n            \"Process terminated: Schedule '%s' process '%s' task %s pid %s exit %s,\"\n            \" %s running tasks\\n%s\",\n            schedule.name,\n            schedule.process_name,\n            task_process.task_id,\n            task_process.process.pid,\n            exit_code,\n            len(self._task_processes) - 1,\n            self._process_scripts[schedule.process_name][0])\n\n        schedule_execution = self._schedule_executions[schedule.id]\n        del schedule_execution.task_processes[task_process.task_id]\n\n        schedule_deleted = False\n\n        # Pick up modifications to the schedule\n        # Or maybe it's been deleted\n        try:\n            schedule = self._schedules[schedule.id]\n        except KeyError:\n            schedule_deleted = True\n\n        if self._paused or schedule_deleted or (\n                        schedule.repeat is None and not schedule_execution.start_now):\n            if schedule_execution.next_start_time:\n                schedule_execution.next_start_time = None\n                self._logger.info(\n                    \"Tasks will no longer execute for schedule '%s'\", schedule.name)\n        elif schedule.exclusive:\n            self._schedule_next_task(schedule)\n\n        if schedule.type != Schedule.Type.STARTUP:\n            if exit_code < 0 and task_process.cancel_requested:\n                state = Task.State.CANCELED\n            else:\n                state = Task.State.COMPLETE\n            # Update the task's status\n            update_payload = PayloadBuilder() \\\n                .SET(exit_code=exit_code,\n                     state=int(state),\n                     end_time=str(common_utils.local_timestamp())) \\\n                
.WHERE(['id', '=', str(task_process.task_id)]) \\\n                .payload()\n            try:\n                self._logger.debug('Database command: %s', update_payload)\n                res = await self._storage_async.update_tbl(\"tasks\", update_payload)\n            except Exception:\n                self._logger.exception('Update failed: %s', update_payload)\n                # Must keep going!\n\n        # Due to maximum running tasks reached, it is necessary to\n        # look for schedules that are ready to run even if there\n        # are only manual tasks waiting\n        # TODO Do this only if len(_task_processes) >= max_processes or\n        # an exclusive task finished and ( start_now or schedule.repeats )\n        self._resume_check_schedules()\n\n        # This must occur after all awaiting. The size of _task_processes\n        # is used by stop() to determine whether the scheduler can stop.\n        del self._task_processes[task_process.task_id]\n\n    async def _start_task(self, schedule: _ScheduleRow, dryrun=False) -> None:\n        \"\"\"Starts a task process\n\n        Raises:\n            EnvironmentError: If the process could not start\n        \"\"\"\n        def _get_delay_in_sec(pname):\n            if pname == 'dispatcher_c':\n                val = 3\n            elif pname == 'notification_c':\n                val = 5\n            elif pname == 'pipeline_c':\n                val = 7\n            elif pname == 'south_c':\n                val = 9\n            elif pname == 'north_C':\n                val = 11\n            else:\n                val = 14\n            return val\n\n        # This check is necessary only if significant time can elapse between \"await\" and\n        # the start of the awaited coroutine.\n        args = self._process_scripts[schedule.process_name][0]\n\n        # add core management host and port to process script args\n        args_to_exec = args.copy()\n        
args_to_exec.append(\"--port={}\".format(self._core_management_port))\n        args_to_exec.append(\"--address=127.0.0.1\")\n        \n        if schedule.process_name == \"restore\" and self._restore_backup_id is not None:\n            # Adding backup id argument\n            args_to_exec.append(\"--backup-id={}\".format(self._restore_backup_id))\n            # Reset restore backup id\n            self._restore_backup_id = None\n\n        # Pass startup token to args for services\n        if schedule.type == Schedule.Type.STARTUP:\n            # This should be appended as an arg and passed to process\n            # and also kept as name | (single use) token pair for verification and assigning\n            # jwt token for cross communication\n\n            # Ask ServiceRegistry to issue a startup token and store\n            startToken = ServiceRegistry.issueStartupToken(schedule.name)\n            # Add startup token to args for services\n            args_to_exec.append(\"--token={}\".format(startToken))\n\n            if self._process_scripts[schedule.process_name][1] != self._DEFAULT_PROCESS_SCRIPT_PRIORITY:\n                # With startup Delay\n                res = _get_delay_in_sec(self._process_scripts[schedule.process_name][0][0].split(\"/\")[1])\n                args_to_exec.append(\"--delay={}\".format(res))\n\n        args_to_exec.append(\"--name={}\".format(schedule.name))\n        if dryrun:\n            args_to_exec.append(\"--dryrun\")\n\n        # avoid printing/logging tokens\n        args_to_exec_printable = [a for a in args_to_exec if not a.startswith(\"--token\")]\n        \n        task_process = self._TaskProcess()\n        task_process.start_time = time.time()\n\n        try:\n            process = await asyncio.create_subprocess_exec(*args_to_exec, cwd=_SCRIPTS_DIR)\n        except EnvironmentError:\n            self._logger.exception(\n                \"Unable to start schedule '%s' process '%s'\\n%s\",\n                schedule.name, 
schedule.process_name, args_to_exec_printable)\n            raise\n\n        if dryrun:\n            return\n\n        task_id = uuid.uuid4()\n        task_process.process = process\n        task_process.schedule = schedule\n        task_process.task_id = task_id\n\n        # All tasks including STARTUP tasks go into both self._task_processes and self._schedule_executions\n        self._task_processes[task_id] = task_process\n        self._schedule_executions[schedule.id].task_processes[task_id] = task_process\n \n        self._logger.info(\n            \"Process started: Schedule '%s' process '%s' task %s pid %s, %s running tasks\\n%s\",\n            schedule.name, schedule.process_name, task_id, process.pid,\n            len(self._task_processes), args_to_exec_printable)\n\n        # Startup tasks are not tracked in the tasks table and do not have any future associated with them.\n        if schedule.type != Schedule.Type.STARTUP:\n            # The task row needs to exist before the completion handler runs\n            insert_payload = PayloadBuilder() \\\n                .INSERT(id=str(task_id),\n                        pid=(self._schedule_executions[schedule.id].\n                             task_processes[task_id].process.pid),\n                        schedule_name=schedule.name,\n                        schedule_id=str(schedule.id),\n                        process_name=schedule.process_name,\n                        state=int(Task.State.RUNNING),\n                        start_time=str(common_utils.local_timestamp())) \\\n                .payload()\n            try:\n                self._logger.debug('Database command: %s', insert_payload)\n                res = await self._storage_async.insert_into_tbl(\"tasks\", insert_payload)\n            except Exception:\n                self._logger.exception('Insert failed: %s', insert_payload)\n                # The process has started. 
Regardless of this error it must be waited on.\n            self._task_processes[task_id].future = asyncio.ensure_future(self._wait_for_task_completion(task_process))\n\n    async def purge_tasks(self):\n        \"\"\"Deletes rows from the tasks table\"\"\"\n        if self._paused:\n            return\n\n        if not self._ready:\n            raise NotReadyError()\n\n        delete_payload = PayloadBuilder() \\\n            .WHERE([\"state\", \"!=\", int(Task.State.RUNNING)]) \\\n            .AND_WHERE([\"start_time\", \"<\", str(datetime.datetime.now() - self._max_completed_task_age)]) \\\n            .LIMIT(self._DELETE_TASKS_LIMIT) \\\n            .payload()\n        try:\n            self._logger.debug('Database command: %s', delete_payload)\n            while not self._paused:\n                res = await self._storage_async.delete_from_tbl(\"tasks\", delete_payload)\n                # TODO: Uncomment below when delete count becomes available in storage layer\n                # if res.get(\"count\") < self._DELETE_TASKS_LIMIT:\n                break\n        except Exception:\n            self._logger.exception('Delete failed: %s', delete_payload)\n            raise\n        finally:\n            self._purge_tasks_task = None\n\n        self._last_task_purge_time = time.time()\n\n    def _check_purge_tasks(self):\n        \"\"\"Schedules :meth:`_purge_tasks` to run if sufficient time has elapsed\n        since it last ran\n        \"\"\"\n\n        if self._purge_tasks_task is None and (self._last_task_purge_time is None or (\n                    time.time() - self._last_task_purge_time) >= self._PURGE_TASKS_FREQUENCY_SECONDS):\n            self._purge_tasks_task = asyncio.ensure_future(self.purge_tasks())\n\n    async def _check_schedules(self):\n        \"\"\"Starts tasks according to schedules based on the current time\"\"\"\n        earliest_start_time = None\n\n        # Can not iterate over _schedule_executions - it can change mid-iteration\n        
for schedule_id in list(self._schedule_executions.keys()):\n            if self._paused or len(self._task_processes) >= self._max_running_tasks:\n                return None\n\n            schedule_execution = self._schedule_executions[schedule_id]\n\n            try:\n                schedule = self._schedules[schedule_id]\n            except KeyError:\n                # The schedule has been deleted\n                if not schedule_execution.task_processes:\n                    del self._schedule_executions[schedule_id]\n                continue\n\n            if schedule.enabled is False:\n                continue\n\n            if schedule.exclusive and schedule_execution.task_processes:\n                continue\n\n            # next_start_time is None when repeat is None until the\n            # task completes, at which time schedule_execution is removed\n            next_start_time = schedule_execution.next_start_time\n            if not next_start_time and not schedule_execution.start_now:\n                if not schedule_execution.task_processes:\n                    del self._schedule_executions[schedule_id]\n                continue\n\n            if next_start_time and not schedule_execution.start_now:\n                now = self.current_time if self.current_time else time.time()\n                right_time = now >= next_start_time\n            else:\n                right_time = False\n\n            if right_time or schedule_execution.start_now:\n                # Start a task\n\n                if not right_time:\n                    # Manual start - don't change next_start_time\n                    pass\n                elif schedule.exclusive:\n                    # Exclusive tasks won't start again until they terminate\n                    # Or the schedule doesn't repeat\n                    next_start_time = None\n                else:\n                    # _schedule_next_task alters next_start_time\n                    
self._schedule_next_task(schedule)\n                    next_start_time = schedule_execution.next_start_time\n\n                await self._start_task(schedule)\n\n                # Queued manual execution is ignored when it was\n                # already time to run the task. The task doesn't\n                # start twice even when nonexclusive.\n                # The choice to put this after \"await\" above was\n                # deliberate. The above \"await\" could have allowed\n                # queue_task() to run. The following line\n                # will undo that because, after all, the task started.\n                schedule_execution.start_now = False\n\n            # Keep track of the earliest next_start_time\n            if next_start_time and (earliest_start_time is None or\n                                            earliest_start_time > next_start_time):\n                earliest_start_time = next_start_time\n\n        return earliest_start_time\n\n    async def _scheduler_loop(self):\n        \"\"\"Main loop for the scheduler\"\"\"\n        # TODO: log exception here or add an exception handler in asyncio\n\n        while True:\n            next_start_time = await self._check_schedules()\n\n            if self._paused:\n                break\n\n            self._check_purge_tasks()\n\n            # Determine how long to sleep\n            if self._check_processes_pending:\n                self._check_processes_pending = False\n                sleep_seconds = 0\n            elif next_start_time:\n                sleep_seconds = next_start_time - time.time()\n            else:\n                sleep_seconds = self._MAX_SLEEP\n\n            if sleep_seconds > 0:\n                self._logger.debug(\"Sleeping for %s seconds\", sleep_seconds)\n                self._scheduler_loop_sleep_task = (\n                    asyncio.ensure_future(asyncio.sleep(sleep_seconds)))\n\n                try:\n                    await self._scheduler_loop_sleep_task\n 
                   self._scheduler_loop_sleep_task = None\n                except asyncio.CancelledError:\n                    self._logger.debug(\"Main loop awakened\")\n            else:\n                # Relinquish control for each loop iteration to avoid starving\n                # other coroutines\n                await asyncio.sleep(0)\n\n    def _schedule_next_timed_task(self, schedule, schedule_execution, current_dt):\n        \"\"\"Handle daylight savings time transitions.\n           Assume 'repeat' is not null.\n\n        \"\"\"\n        if schedule.repeat_seconds is not None and schedule.repeat_seconds < self._DAY_SECONDS:\n            # If repeat is less than a day, use the current hour.\n            # Ignore the hour specified in the schedule's time.\n            dt = datetime.datetime(\n                year=current_dt.year,\n                month=current_dt.month,\n                day=current_dt.day,\n                hour=current_dt.hour,\n                minute=schedule.time.minute,\n                second=schedule.time.second)\n\n            if current_dt.time() > schedule.time:\n                # It's already too late. Try for an hour later.\n                dt += self._ONE_HOUR\n        else:\n            dt = datetime.datetime(\n                year=current_dt.year,\n                month=current_dt.month,\n                day=current_dt.day,\n                hour=schedule.time.hour,\n                minute=schedule.time.minute,\n                second=schedule.time.second)\n\n            if current_dt.time() > schedule.time:\n                # It's already too late. 
Try for tomorrow\n                dt += self._ONE_DAY\n\n        # Advance to the correct day if specified\n        if schedule.day:\n            while dt.isoweekday() != schedule.day:\n                dt += self._ONE_DAY\n\n        schedule_execution.next_start_time = time.mktime(dt.timetuple())\n\n    def _schedule_next_task(self, schedule) -> None:\n        \"\"\"Computes the next time to start a task for a schedule.\n\n        For nonexclusive schedules, this method is called after starting\n        a task automatically (it is not called when a task is started\n        manually).\n\n        For exclusive schedules, this method is called after the task\n        has completed.\n        \"\"\"\n        if schedule.enabled is False:\n            return\n\n        schedule_execution = self._schedule_executions[schedule.id]\n        advance_seconds = schedule.repeat_seconds\n\n        if self._paused or advance_seconds is None:\n            schedule_execution.next_start_time = None\n            self._logger.info(\n                \"Tasks will no longer execute for schedule '%s'\", schedule.name)\n            return\n\n        now = time.time()\n\n        if (schedule.exclusive and schedule_execution.next_start_time and\n                    now < schedule_execution.next_start_time):\n            # The task was started manually\n            # Or the schedule was modified after the task started (AVOID_ALTER_NEXT_START)\n            return\n\n        if advance_seconds:\n            advance_seconds *= max([1, math.ceil(\n                (now - schedule_execution.next_start_time) / advance_seconds)])\n\n            if schedule.type == Schedule.Type.TIMED:\n                # Handle daylight savings time transitions\n                next_dt = datetime.datetime.fromtimestamp(schedule_execution.next_start_time)\n                next_dt += datetime.timedelta(seconds=advance_seconds)\n\n                if schedule.day is not None and next_dt.isoweekday() != schedule.day:\n      
              # Advance to the next matching day\n                    next_dt = datetime.datetime(year=next_dt.year,\n                                                month=next_dt.month,\n                                                day=next_dt.day)\n                    self._schedule_next_timed_task(schedule, schedule_execution, next_dt)\n                else:\n                    schedule_execution.next_start_time = time.mktime(next_dt.timetuple())\n            else:\n                if schedule.type == Schedule.Type.MANUAL:\n                    schedule_execution.next_start_time = time.time()\n                schedule_execution.next_start_time += advance_seconds\n\n            self._logger.info(\n                \"Scheduled task for schedule '%s' to start at %s\", schedule.name,\n                datetime.datetime.fromtimestamp(schedule_execution.next_start_time))\n\n    def _schedule_first_task(self, schedule, current_time):\n        \"\"\"Determines the time when a task for a schedule will start.\n\n        Args:\n            schedule: The schedule to consider\n\n            current_time:\n                Epoch time to use as the current time when determining\n                when to schedule tasks\n\n        \"\"\"\n        if schedule.enabled is False:\n            return\n\n        if schedule.type == Schedule.Type.MANUAL:\n            return\n\n        try:\n            schedule_execution = self._schedule_executions[schedule.id]\n        except KeyError:\n            schedule_execution = self._ScheduleExecution()\n            self._schedule_executions[schedule.id] = schedule_execution\n\n        if schedule.type == Schedule.Type.INTERVAL:\n            advance_seconds = schedule.repeat_seconds\n\n            # When modifying a schedule, this is imprecise if the\n            # schedule is exclusive and a task is running. 
When the\n            # task finishes, next_start_time will be incremented\n            # by at least schedule.repeat, thus missing the interval at\n            # start_time + advance_seconds. Fixing this required an if statement\n            # in _schedule_next_task. Search for AVOID_ALTER_NEXT_START\n\n            if advance_seconds:\n                advance_seconds *= max([1, math.ceil(\n                    (current_time - self._start_time) / advance_seconds)])\n            else:\n                advance_seconds = 0\n\n            schedule_execution.next_start_time = self._start_time + advance_seconds\n        elif schedule.type == Schedule.Type.TIMED:\n            self._schedule_next_timed_task(\n                schedule,\n                schedule_execution,\n                datetime.datetime.fromtimestamp(current_time))\n        elif schedule.type == Schedule.Type.STARTUP:\n            schedule_execution.next_start_time = current_time\n\n        if self._logger.isEnabledFor(logging.INFO):\n            self._logger.info(\n                \"Scheduled task for schedule '%s' to start at %s\", schedule.name,\n                datetime.datetime.fromtimestamp(schedule_execution.next_start_time))\n\n    async def _get_process_scripts(self):\n        try:\n            self._logger.debug('Database command: %s', \"scheduled_processes\")\n            res = await self._storage_async.query_tbl(\"scheduled_processes\")\n            for row in res['rows']:\n                self._process_scripts[row.get('name')] = (row.get('script'), row.get('priority'))\n        except Exception:\n            self._logger.exception('Query failed: %s', \"scheduled_processes\")\n            raise\n\n    async def _get_schedules(self):\n        def _get_schedule_by_priority(sch_list):\n            schedules_in_order = []\n            for sch in sch_list:\n                sch['priority'] = self._DEFAULT_PROCESS_SCRIPT_PRIORITY\n                for name, priority in self._process_scripts.items():\n 
                   if name == sch['process_name']:\n                        sch['priority'] = priority[1]\n                schedules_in_order.append(sch)\n            sort_sch = sorted(schedules_in_order, key=lambda k: (\"priority\" not in k, k.get(\"priority\", None)))\n            self._logger.debug(sort_sch)\n            return sort_sch\n\n        # TODO: Get processes first, then add to Schedule\n        try:\n            self._logger.debug('Database command: %s', 'schedules')\n            res = await self._storage_async.query_tbl(\"schedules\")\n            for row in _get_schedule_by_priority(res['rows']):\n                interval_days, interval_dt = self.extract_day_time_from_interval(row.get('schedule_interval'))\n                interval = datetime.timedelta(days=interval_days, hours=interval_dt.hour, minutes=interval_dt.minute, seconds=interval_dt.second)\n\n                repeat_seconds = None\n                if interval is not None and interval != datetime.timedelta(0):\n                    repeat_seconds = interval.total_seconds()\n\n                s_ti = row.get('schedule_time') if row.get('schedule_time') else '00:00:00'\n                s_tim = datetime.datetime.strptime(s_ti, \"%H:%M:%S\")\n                schedule_time = datetime.time().replace(hour=s_tim.hour, minute=s_tim.minute, second=s_tim.second)\n\n                schedule_id = uuid.UUID(row.get('id'))\n\n                schedule = self._ScheduleRow(\n                    id=schedule_id,\n                    name=row.get('schedule_name'),\n                    type=row.get('schedule_type'),\n                    day=row.get('schedule_day') if row.get('schedule_day') else None,\n                    time=schedule_time,\n                    repeat=interval,\n                    repeat_seconds=repeat_seconds,\n                    exclusive=True if row.get('exclusive') == 't' else False,\n                    enabled=True if row.get('enabled') == 't' else False,\n                    
process_name=row.get('process_name'))\n\n                self._schedules[schedule_id] = schedule\n                if not self._is_safe_mode:\n                    self._schedule_first_task(schedule, self._start_time)\n        except Exception:\n            self._logger.exception('Query failed: %s', 'schedules')\n            raise\n\n    async def _read_storage(self):\n        \"\"\"Reads schedule information from the storage server\"\"\"\n        await self._get_process_scripts()\n        await self._get_schedules()\n\n    async def _mark_tasks_interrupted(self):\n        \"\"\"The state for any task with a NULL end_time is set to interrupted\"\"\"\n        # TODO FOGL-722 NULL can not be passed like this\n\n        \"\"\" # Update the task's status\n        update_payload = PayloadBuilder() \\\n            .SET(state=int(Task.State.INTERRUPTED),\n                 end_time=str(datetime.datetime.now())) \\\n            .WHERE(['end_time', '=', \"NULL\"]) \\\n            .payload()\n        try:\n            self._logger.debug('Database command: %s', update_payload)\n            res = await self._storage_async.update_tbl(\"tasks\", update_payload)\n        except Exception:\n            self._logger.exception('Update failed: %s', update_payload)\n            raise\n        \"\"\"\n        pass\n\n    async def _read_config(self):\n        \"\"\"Reads configuration\"\"\"\n        default_config = {\n            \"max_running_tasks\": {\n                \"description\": \"Maximum number of tasks that can be running at any given time\",\n                \"type\": \"integer\",\n                \"default\": str(self._DEFAULT_MAX_RUNNING_TASKS),\n                \"displayName\": \"Max Running Tasks\"\n            },\n            \"max_completed_task_age_days\": {\n                \"description\": \"Maximum age in days (based on the start time) for a row \"\n                               \"in the tasks table that does not have a status of running\",\n                
\"type\": \"integer\",\n                \"default\": str(self._DEFAULT_MAX_COMPLETED_TASK_AGE_DAYS),\n                \"displayName\": \"Max Age Of Task (In days)\"\n            },\n        }\n\n        cfg_manager = ConfigurationManager(self._storage_async)\n        await cfg_manager.create_category('SCHEDULER', default_config, 'Scheduler configuration', display_name='Scheduler')\n\n        config = await cfg_manager.get_category_all_items('SCHEDULER')\n        self._max_running_tasks = int(config['max_running_tasks']['value'])\n        self._max_completed_task_age = datetime.timedelta(\n            seconds=int(config['max_completed_task_age_days']['value']) * self._DAY_SECONDS)\n\n    async def start(self):\n        \"\"\"Starts the scheduler\n\n        When this method returns, an asyncio task is\n        scheduled that starts tasks and monitors their subprocesses. This class\n        does not use threads (tasks run as subprocesses).\n\n        Raises:\n            NotReadyError: Scheduler was stopped\n        \"\"\"\n        if not self._is_safe_mode:\n            if self._paused or self._schedule_executions is None:\n                raise NotReadyError(\"The scheduler was stopped and can not be restarted\")\n\n            if self._ready:\n                return\n\n            if self._start_time:\n                raise NotReadyError(\"The scheduler is starting\")\n\n            self._logger.info(\"Starting\")\n\n        self._start_time = self.current_time if self.current_time else time.time()\n\n        # FIXME: Move below part code to server.py->_start_core(), line 123, after start of storage and before start\n        #        of scheduler. 
May need to either pass the storage object or create a storage object here itself.\n        #        Also provide a timeout option.\n        # ************ make sure that we go forward only when the storage service is ready\n        storage_service = None\n\n        while storage_service is None and self._storage_async is None:\n            try:\n                found_services = ServiceRegistry.get(name=\"Fledge Storage\")\n                storage_service = found_services[0]\n                self._storage_async = StorageClientAsync(self._core_management_host, self._core_management_port,\n                                                         svc=storage_service)\n\n            except (service_registry_exceptions.DoesNotExist, InvalidServiceInstance, StorageServiceUnavailable,\n                    Exception):\n                # traceback.print_exc()\n                await asyncio.sleep(5)\n        # **************\n\n        # Everything OK, so now start the Scheduler and create the Storage instance\n        self._logger.info(\"Starting Scheduler: Management port received is %d\", self._core_management_port)\n\n        await self._read_config()\n        await self._mark_tasks_interrupted()\n        await self._read_storage()\n\n        self._ready = True\n        if not self._is_safe_mode:\n            self._scheduler_loop_task = asyncio.ensure_future(self._scheduler_loop())\n\n    async def stop(self):\n        \"\"\"Attempts to stop the scheduler\n\n        Sends a TERM signal to all running tasks. Does not wait for tasks to stop.\n\n        Prevents any new tasks from starting. This can be undone by setting the\n        _paused attribute to False.\n\n        Raises:\n            TimeoutError: A task is still running. 
Wait and try again.\n        \"\"\"\n        if not self._start_time:\n            return\n\n        self._logger.info(\"Processing stop request\")\n\n        # This method is designed to be called multiple times\n\n        if not self._paused:\n            # Wait for the tasks-purge task to finish\n            self._paused = True\n            if self._purge_tasks_task is not None:\n                try:\n                    await self._purge_tasks_task\n                except Exception:\n                    # logger.exception() records the active traceback; the message is the first argument\n                    self._logger.exception('An exception was raised by Scheduler._purge_tasks')\n\n            self._resume_check_schedules()\n\n            # Stop the main loop\n            try:\n                await self._scheduler_loop_task\n            except Exception:\n                self._logger.exception('An exception was raised by Scheduler._scheduler_loop')\n            self._scheduler_loop_task = None\n\n        # Can not iterate over _task_processes directly - it can change mid-iteration\n        for task_id in list(self._task_processes.keys()):\n            try:\n                task_process = self._task_processes[task_id]\n            except KeyError:\n                continue\n\n            # TODO: FOGL-356 track the last time TERM was sent to each task\n            task_process.cancel_requested = time.time()\n\n            schedule = task_process.schedule\n\n            # Stopping of STARTUP tasks aka microservices will be taken up separately by Core\n            if schedule.type != Schedule.Type.STARTUP:\n                self._logger.info(\n                    \"Stopping process: Schedule '%s' process '%s' task %s pid %s\\n%s\",\n                    schedule.name,\n                    schedule.process_name,\n                    task_id,\n                    task_process.process.pid,\n                    self._process_scripts[schedule.process_name][0])\n                try:\n                    # We need to terminate the child processes because now all 
tasks are started via a script and\n                    # this creates two unix processes. The Scheduler can store the pid of the parent shell script process only,\n                    # and on termination of the task, both the shell script process and the actual task process need to\n                    # be stopped.\n                    self._terminate_child_processes(task_process.process.pid)\n                    task_process.process.terminate()\n                except ProcessLookupError:\n                    pass  # Process has terminated\n\n        # Wait for all processes to stop\n        for _ in range(self._STOP_WAIT_SECONDS):\n            if not self._task_processes:\n                break\n            await asyncio.sleep(1)\n\n        if self._task_processes:\n            # Before raising a timeout error, check whether any tasks are still pending cancellation\n            task_count = 0\n            for task_id in list(self._task_processes.keys()):\n                try:\n                    task_process = self._task_processes[task_id]\n                    schedule = task_process.schedule\n                    if schedule.process_name != \"FledgeUpdater\" and schedule.type != Schedule.Type.STARTUP:\n                        task_count += 1\n                except KeyError:\n                    continue\n            if task_count != 0:\n                raise TimeoutError(\"Timeout Error: Could not stop scheduler as {} tasks are pending\".format(task_count))\n\n        self._schedule_executions = None\n        self._task_processes = None\n        self._schedules = None\n        self._process_scripts = None\n\n        self._ready = False\n        self._paused = False\n        self._start_time = None\n\n        self._logger.info(\"Stopped\")\n        return True\n\n    # CRUD methods for scheduled_processes, schedules, tasks\n\n    async def get_scheduled_processes(self) -> List[ScheduledProcess]:\n        \"\"\"Retrieves all rows from 
the scheduled_processes table\n        \"\"\"\n        if not self._ready:\n            raise NotReadyError()\n\n        processes = []\n\n        for (name, script) in self._process_scripts.items():\n            process = ScheduledProcess()\n            process.name = name\n            process.script = script[0]\n            processes.append(process)\n\n        return processes\n\n    @classmethod\n    def _schedule_row_to_schedule(cls,\n                                  schedule_id: uuid.UUID,\n                                  schedule_row: _ScheduleRow) -> Schedule:\n        schedule_type = schedule_row.type\n\n        if schedule_type == Schedule.Type.STARTUP:\n            schedule = StartUpSchedule()\n        elif schedule_type == Schedule.Type.TIMED:\n            schedule = TimedSchedule()\n        elif schedule_type == Schedule.Type.INTERVAL:\n            schedule = IntervalSchedule()\n        elif schedule_type == Schedule.Type.MANUAL:\n            schedule = ManualSchedule()\n        else:\n            raise ValueError(\"Unknown schedule type {}\".format(schedule_type))\n\n        schedule.schedule_id = schedule_id\n        schedule.exclusive = schedule_row.exclusive\n        schedule.enabled = schedule_row.enabled\n        schedule.name = schedule_row.name\n        schedule.process_name = schedule_row.process_name\n        schedule.repeat = schedule_row.repeat\n\n        if schedule_type == Schedule.Type.TIMED:\n            schedule.day = schedule_row.day\n            schedule.time = schedule_row.time\n        else:\n            schedule.day = None\n            schedule.time = None\n\n        return schedule\n\n    async def get_schedules(self) -> List[Schedule]:\n        \"\"\"Retrieves all schedules\n        \"\"\"\n        if not self._ready:\n            raise NotReadyError()\n\n        schedules = []\n\n        for (schedule_id, schedule_row) in self._schedules.items():\n            schedules.append(self._schedule_row_to_schedule(schedule_id, 
schedule_row))\n\n        return schedules\n\n    async def get_schedule(self, schedule_id: uuid.UUID) -> Schedule:\n        \"\"\"Retrieves a schedule by its id\n\n        Raises:\n            ScheduleNotFoundError\n        \"\"\"\n        if not self._ready:\n            raise NotReadyError()\n\n        try:\n            schedule_row = self._schedules[schedule_id]\n        except KeyError:\n            raise ScheduleNotFoundError(schedule_id)\n\n        return self._schedule_row_to_schedule(schedule_id, schedule_row)\n\n    async def get_schedule_by_name(self, name) -> Schedule:\n        \"\"\"Retrieves a schedule by its name\n\n        Raises:\n            ScheduleNotFoundError\n        \"\"\"\n        if not self._ready:\n            raise NotReadyError()\n\n        found_id = None\n        for (schedule_id, schedule_row) in self._schedules.items():\n            if schedule_row.name == name:\n                found_id = schedule_id\n                break  # Stop at the first match so schedule_row still refers to the matching row\n        if found_id is None:\n            raise ScheduleNotFoundError(name)\n\n        return self._schedule_row_to_schedule(found_id, schedule_row)\n\n    async def save_schedule(self, schedule: Schedule, is_enabled_modified=None, dryrun=False):\n        \"\"\"Creates or updates a schedule\n\n        Args:\n            schedule:\n                The id can be None, in which case a new id will be generated\n\n        Raises:\n            NotReadyError: The scheduler is not ready for requests\n        \"\"\"\n        if self._paused or not self._ready:\n            raise NotReadyError()\n\n        # TODO should these checks be moved to the storage layer?\n        if schedule.name is None or len(schedule.name) == 0:\n            raise ValueError(\"name can not be empty\")\n        # Don't allow invalid characters in the schedule name\n        is_valid_identifier, blocked_character = common_utils.is_valid_identifier(schedule.name)\n        if not is_valid_identifier:\n            raise ValueError(\"Invalid 
character {} found in schedule name {}\".format(blocked_character, schedule.name))\n\n        if schedule.repeat is not None and not isinstance(schedule.repeat, datetime.timedelta):\n            raise ValueError('repeat must be of type datetime.timedelta')\n\n        if not isinstance(schedule.exclusive, bool):\n            raise ValueError('exclusive must be a bool and can not be None')\n\n        if isinstance(schedule, TimedSchedule):\n            schedule_time = schedule.time\n\n            if schedule_time is not None and not isinstance(schedule_time, datetime.time):\n                raise ValueError('time must be of type datetime.time')\n\n            day = schedule.day\n\n            # TODO Remove this check when the database has a constraint\n            if day is not None and (day < 1 or day > 7):\n                raise ValueError('day must be between 1 and 7')\n        else:\n            day = None\n            schedule_time = None\n\n        prev_schedule_row = None\n\n        if schedule.schedule_id is None:\n            is_new_schedule = True\n            schedule.schedule_id = uuid.uuid4()\n        else:\n            try:\n                prev_schedule_row = self._schedules[schedule.schedule_id]\n                is_new_schedule = False\n            except KeyError:\n                is_new_schedule = True\n\n        if not is_new_schedule:\n            update_payload = PayloadBuilder() \\\n                .SET(schedule_name=schedule.name,\n                     schedule_type=schedule.schedule_type,\n                     schedule_interval=str(schedule.repeat),\n                     schedule_day=day if day else 0,\n                     schedule_time=str(schedule_time) if schedule_time else '00:00:00',\n                     exclusive='t' if schedule.exclusive else 'f',\n                     enabled='t' if schedule.enabled else 'f',\n                     process_name=schedule.process_name) \\\n                .WHERE(['id', 
'=', str(schedule.schedule_id)]) \\\n                .payload()\n            try:\n                self._logger.debug('Database command: %s', update_payload)\n                res = await self._storage_async.update_tbl(\"schedules\", update_payload)\n                if res.get('count') == 0:\n                    is_new_schedule = True\n            except Exception:\n                self._logger.exception('Update failed: %s', update_payload)\n                raise\n            await self.audit_trail_entry(prev_schedule_row, schedule)\n        else:\n            insert_payload = PayloadBuilder() \\\n                .INSERT(id=str(schedule.schedule_id),\n                        schedule_type=schedule.schedule_type,\n                        schedule_name=schedule.name,\n                        schedule_interval=str(schedule.repeat),\n                        schedule_day=day if day else 0,\n                        schedule_time=str(schedule_time) if schedule_time else '00:00:00',\n                        exclusive='t' if schedule.exclusive else 'f',\n                        enabled='t' if schedule.enabled else 'f',\n                        process_name=schedule.process_name) \\\n                .payload()\n            try:\n                self._logger.debug('Database command: %s', insert_payload)\n                res = await self._storage_async.insert_into_tbl(\"schedules\", insert_payload)\n            except Exception:\n                self._logger.exception('Insert failed: %s', insert_payload)\n                raise\n            audit = AuditLogger(self._storage_async)\n            await audit.information('SCHAD', {'schedule': schedule.toDict()})\n\n        repeat_seconds = None\n        if schedule.repeat is not None and schedule.repeat != datetime.timedelta(0):\n            repeat_seconds = schedule.repeat.total_seconds()\n        previous_enabled = self._schedules[schedule.schedule_id].enabled if not is_new_schedule else None\n        schedule_row = 
self._ScheduleRow(\n            id=schedule.schedule_id,\n            name=schedule.name,\n            type=schedule.schedule_type,\n            time=schedule_time,\n            day=day,\n            repeat=schedule.repeat,\n            repeat_seconds=repeat_seconds,\n            exclusive=schedule.exclusive,\n            enabled=schedule.enabled,\n            process_name=schedule.process_name)\n\n        self._schedules[schedule.schedule_id] = schedule_row\n\n        # Add the process to self._process_scripts if it is not present.\n        if schedule.process_name not in self._process_scripts:\n            select_payload = PayloadBuilder().WHERE(['name', '=', schedule.process_name]).payload()\n            try:\n                self._logger.debug('Database command: %s', select_payload)\n                res = await self._storage_async.query_tbl_with_payload(\"scheduled_processes\", select_payload)\n                for row in res['rows']:\n                    self._process_scripts[row.get('name')] = (row.get('script'), row.get('priority'))\n            except Exception:\n                self._logger.exception('Select failed: %s', select_payload)\n\n        # Did the schedule change in a way that will affect task scheduling?\n        if schedule.schedule_type in [Schedule.Type.INTERVAL, Schedule.Type.TIMED] and (\n                is_new_schedule or\n                prev_schedule_row.time != schedule_row.time or\n                prev_schedule_row.day != schedule_row.day or\n                prev_schedule_row.repeat_seconds != schedule_row.repeat_seconds or\n                prev_schedule_row.exclusive != schedule_row.exclusive):\n            now = self.current_time if self.current_time else time.time()\n            self._schedule_first_task(schedule_row, now)\n            self._resume_check_schedules()\n\n        if dryrun and is_new_schedule:\n            await self._start_task(schedule_row, dryrun=True)\n            return\n\n        if is_enabled_modified is not None:\n            if previous_enabled is None:  # New schedule\n                # For a new schedule, if enabled is set to True, the schedule will be enabled.\n                bypass_check = True if schedule.enabled is True else None\n            else:  # Existing schedule\n                # During an update, if the schedule's enabled attribute is unchanged the call returns unconditionally;\n                # otherwise the suitable action is invoked.\n                bypass_check = True if previous_enabled != schedule.enabled else None\n\n            if is_enabled_modified is True:\n                await self.enable_schedule(schedule.schedule_id, bypass_check=bypass_check, record_audit_trail=is_new_schedule)\n            else:\n                await self.disable_schedule(schedule.schedule_id, bypass_check=bypass_check, record_audit_trail=is_new_schedule)\n\n    async def remove_service_from_task_processes(self, service_name):\n        \"\"\"\n        This method caters to the use case when a microservice, e.g. a South service, which has been started by the\n        Scheduler, is shut down via the API and then needs to be restarted. 
It removes the Scheduler's record\n        of the task related to the STARTUP schedule (which is not removed when the shutdown action is taken via the\n        microservice API, as the microservice runs in a separate process, and this hinders starting the schedule via the\n        Scheduler's queue_task() method).\n\n        Args:\n            service_name: name of the service whose task records should be removed\n        Returns:\n            True if the records were removed, False otherwise\n        \"\"\"\n        if not self._ready:\n            return False\n\n        # Find task_id for the service\n        task_id = None\n        task_process = None\n        schedule_type = None\n        try:\n            task_id = next(\n                (key for key in self._task_processes.keys() if self._task_processes[key].schedule.name == service_name),\n                None)\n            if task_id is None:\n                raise KeyError\n\n            task_process = self._task_processes[task_id]\n\n            if task_id is not None:\n                schedule = task_process.schedule\n                schedule_type = schedule.type\n                if schedule_type == Schedule.Type.STARTUP:  # If the schedule is a service, e.g. 
South services\n                    del self._schedule_executions[schedule.id]\n                    del self._task_processes[task_process.task_id]\n                    self._logger.info(\"Service {} records successfully removed\".format(service_name))\n                    return True\n        except KeyError:\n            pass\n\n        # Not inside an exception handler, so use error() rather than exception()\n        self._logger.error(\n            \"Service {} records could not be removed with task id {} type {}\".format(service_name, str(task_id),\n                                                                                     schedule_type))\n        return False\n\n    async def disable_schedule(self, schedule_id: uuid.UUID, bypass_check=None, record_audit_trail=True):\n        \"\"\"\n        Find running Schedule, Terminate running process, Disable Schedule, Update database\n\n        Args:\n            schedule_id: id of the schedule to disable\n        Returns:\n            A (success, message) tuple\n        \"\"\"\n        if self._paused or not self._ready:\n            raise NotReadyError()\n\n        # Find running task for the schedule.\n        # self._task_processes contains ALL tasks including STARTUP tasks.\n        try:\n            schedule = await self.get_schedule(schedule_id)\n        except ScheduleNotFoundError:\n            self._logger.exception(\"No such Schedule %s\", str(schedule_id))\n            return False, \"No such Schedule\"\n\n        if bypass_check is None and schedule.enabled is False:\n            self._logger.info(\"Schedule %s already disabled\", str(schedule_id))\n            return True, \"Schedule {} already disabled\".format(str(schedule_id))\n\n        # Disable Schedule - update the schedule in memory\n        prev_schedule_row = self._schedules[schedule_id]\n        self._schedules[schedule_id] = self._schedules[schedule_id]._replace(enabled=False)\n\n        # Update database\n        update_payload = PayloadBuilder().SET(enabled='f').WHERE(['id', '=', str(schedule_id)]).payload()\n        try:\n            self._logger.debug('Database command: %s', 
update_payload)\n            res = await self._storage_async.update_tbl(\"schedules\", update_payload)\n        except Exception:\n            self._logger.exception('Update failed: %s', update_payload)\n            raise RuntimeError('Update failed: {}'.format(update_payload))\n        await asyncio.sleep(1)\n\n        # If a task is running for the schedule, then terminate the process\n        task_id = None\n        task_process = None\n        for key in list(self._task_processes.keys()):\n            if self._task_processes[key].schedule.id == schedule_id:\n                task_id = key\n                break\n        if task_id is None:\n            self._logger.info(\"No Task running for Schedule %s\", str(schedule_id))\n        else:\n            task_process = self._task_processes[task_id]\n\n        if task_id is not None:\n            schedule = task_process.schedule\n            if schedule.type == Schedule.Type.STARTUP:  # If schedule is a service e.g. 
South services\n                try:\n                    found_services = ServiceRegistry.get(name=schedule.name)\n                    service = found_services[0]\n                    if await utils.ping_service(service) is True:\n                        # Shutdown will take care of unregistering the service from core\n                        await utils.shutdown_service(service)\n                except Exception:\n                    # Service registry entry does not exist but Scheduler records for the service exist, hence remove records\n                    try:\n                        schedule_execution = self._schedule_executions[schedule_id]\n                    except KeyError:\n                        pass\n                    else:\n                        del schedule_execution.task_processes[task_process.task_id]\n                        self._logger.info(\"Removed orphaned scheduler entries for schedule %s\", str(schedule.name))\n                try:\n                    # As of now, the script starts the process and therefore we need to explicitly stop this script process,\n                    # as shutdown caters to stopping the actual service only.\n                    task_process.process.terminate()\n                except ProcessLookupError:\n                    pass  # Process has terminated\n            else:  # It is a Task, e.g. North tasks\n                # Terminate process\n                try:\n                    # We need to terminate the child processes because now all tasks are started via a script and\n                    # this creates two unix processes. 
Scheduler can store pid of the parent shell script process only\n                    # and on termination of the task, both the script shell process and actual task process need to\n                    # be stopped.\n                    self._terminate_child_processes(task_process.process.pid)\n                    task_process.process.terminate()\n                except ProcessLookupError:\n                    pass  # Process has terminated\n                self._logger.info(\n                    \"Terminated Task '%s/%s' process '%s' task %s pid %s\\n%s\",\n                    schedule.name,\n                    str(schedule.id),\n                    schedule.process_name,\n                    task_id,\n                    task_process.process.pid,\n                    self._process_scripts[schedule.process_name][0])\n                # TODO: FOGL-356 track the last time TERM was sent to each task\n                task_process.cancel_requested = time.time()\n                task_future = task_process.future\n                if task_future.cancel() is True:\n                    await self._wait_for_task_completion(task_process)\n\n        self._logger.info(\n            \"Disabled Schedule '%s/%s' process '%s'\\n\",\n            schedule.name,\n            str(schedule_id),\n            schedule.process_name)\n        if record_audit_trail:\n            new_schedule_row = await self.get_schedule(schedule_id)\n            await self.audit_trail_entry(prev_schedule_row, new_schedule_row)\n        return True, \"Schedule successfully disabled\"\n\n    async def enable_schedule(self, schedule_id: uuid.UUID, bypass_check=None, record_audit_trail=True):\n        \"\"\"\n        Get Schedule, Enable Schedule, Update database, Start Schedule\n\n        Args: schedule_id:\n        Returns:\n        \"\"\"\n        if self._paused or not self._ready:\n            raise NotReadyError()\n\n        try:\n            schedule = await self.get_schedule(schedule_id)\n        except 
ScheduleNotFoundError:\n            self._logger.exception(\"No such Schedule %s\", str(schedule_id))\n            return False, \"No such Schedule\"\n\n        if bypass_check is None and schedule.enabled is True:\n            self._logger.info(\"Schedule %s already enabled\", str(schedule_id))\n            return True, \"Schedule is already enabled\"\n\n        # Enable Schedule\n        prev_schedule_row = self._schedules[schedule_id]\n        self._schedules[schedule_id] = self._schedules[schedule_id]._replace(enabled=True)\n\n        # Update database\n        update_payload = PayloadBuilder().SET(enabled='t').WHERE(['id', '=', str(schedule_id)]).payload()\n        try:\n            self._logger.debug('Database command: %s', update_payload)\n            res = await self._storage_async.update_tbl(\"schedules\", update_payload)\n        except Exception:\n            self._logger.exception('Update failed: %s', update_payload)\n            raise RuntimeError('Update failed: %s' % update_payload)\n        await asyncio.sleep(1)\n\n        # Reset schedule_execution.next_start_time\n        schedule_row = self._schedules[schedule_id]\n        now = self.current_time if self.current_time else time.time()\n        self._schedule_first_task(schedule_row, now)\n\n        # Start schedule\n        await self.queue_task(schedule_id, start_now=False)\n\n        self._logger.info(\n            \"Enabled Schedule '%s/%s' process '%s'\\n\",\n            schedule.name,\n            str(schedule_id),\n            schedule.process_name)\n        if record_audit_trail:\n            new_schedule_row = await self.get_schedule(schedule_id)\n            await self.audit_trail_entry(prev_schedule_row, new_schedule_row)\n        return True, \"Schedule successfully enabled\"\n\n    async def queue_task(self, schedule_id: uuid.UUID, start_now=True) -> bool:\n        \"\"\"Requests a task to be started for a schedule\n\n        Args:\n            schedule_id: Specifies the schedule\n\n   
     Raises:\n            NotReadyError:\n                The scheduler is stopping or not ready\n\n            ScheduleNotFoundError\n        \"\"\"\n        if self._paused or not self._ready:\n            raise NotReadyError()\n\n        try:\n            schedule_row = self._schedules[schedule_id]\n        except KeyError:\n            raise ScheduleNotFoundError(schedule_id)\n\n        if schedule_row.enabled is False:\n            self._logger.info(\"Schedule '%s' is not enabled\", schedule_row.name)\n            return False\n\n        try:\n            schedule_execution = self._schedule_executions[schedule_id]\n        except KeyError:\n            schedule_execution = self._ScheduleExecution()\n            self._schedule_executions[schedule_row.id] = schedule_execution\n\n        if start_now:\n            schedule_execution.start_now = True\n\n        self._logger.debug(\"Queued schedule '%s' for execution\", schedule_row.name)\n        self._resume_check_schedules()\n        return True\n\n    async def delete_schedule(self, schedule_id: uuid.UUID):\n        \"\"\"Deletes a schedule\n\n        Args:\n            schedule_id: Specifies the schedule to delete\n\n        Raises:\n            ScheduleNotFoundError\n\n            NotReadyError\n        \"\"\"\n        if not self._ready:\n            raise NotReadyError()\n\n        try:\n            schedule = self._schedules[schedule_id]\n            if schedule.enabled is True:\n                self._logger.error('Attempt to delete an enabled Schedule %s. 
Not deleted.', str(schedule_id))\n                raise RuntimeWarning(\"Enabled Schedule {} cannot be deleted.\".format(str(schedule_id)))\n        except KeyError:\n            raise ScheduleNotFoundError(schedule_id)\n\n        del self._schedules[schedule_id]\n\n        # TODO: Inspect race conditions with _set_first\n        delete_payload = PayloadBuilder() \\\n            .WHERE(['id', '=', str(schedule_id)]) \\\n            .payload()\n        try:\n            self._logger.debug('Database command: %s', delete_payload)\n            res = await self._storage_async.delete_from_tbl(\"schedules\", delete_payload)\n        except Exception:\n            self._logger.exception('Delete failed: %s', delete_payload)\n            raise\n\n        return True, \"Schedule deleted successfully.\"\n\n    async def get_running_tasks(self) -> List[Task]:\n        \"\"\"Retrieves a list of all tasks that are currently running\n\n        Returns:\n            An empty list if no tasks are running\n\n            A list of Task objects\n        \"\"\"\n        if not self._ready:\n            raise NotReadyError()\n\n        tasks = []\n\n        for (task_id, task_process) in self._task_processes.items():\n            task = Task()\n            task.task_id = task_id\n            task.schedule_name = task_process.schedule.name\n            task.process_name = task_process.schedule.process_name\n            task.state = Task.State.RUNNING\n            if task_process.cancel_requested is not None:\n                task.cancel_requested = (\n                    datetime.datetime.fromtimestamp(task_process.cancel_requested))\n            task.start_time = datetime.datetime.fromtimestamp(task_process.start_time)\n            tasks.append(task)\n\n        return tasks\n\n    async def get_task(self, task_id: uuid.UUID) -> Task:\n        \"\"\"Retrieves a task given its id\"\"\"\n        query_payload = PayloadBuilder().SELECT(\"id\", \"process_name\", \"schedule_name\", \"state\", 
\"start_time\", \"end_time\", \"reason\", \"exit_code\")\\\n            .ALIAS(\"return\", (\"start_time\", 'start_time'), (\"end_time\", 'end_time'))\\\n            .FORMAT(\"return\", (\"start_time\", \"YYYY-MM-DD HH24:MI:SS.MS\"), (\"end_time\", \"YYYY-MM-DD HH24:MI:SS.MS\"))\\\n            .WHERE([\"id\", \"=\", str(task_id)]).payload()\n\n        try:\n            self._logger.debug('Database command: %s', query_payload)\n            res = await self._storage_async.query_tbl_with_payload(\"tasks\", query_payload)\n            for row in res['rows']:\n                task = Task()\n                task.task_id = row.get('id')\n                task.state = Task.State(int(row.get('state')))\n                task.start_time = row.get('start_time')\n                task.schedule_name = row.get('schedule_name')\n                task.schedule_id = row.get('schedule_id')\n                task.process_name = row.get('process_name')\n                task.end_time = row.get('end_time')\n                task.exit_code = row.get('exit_code')\n                task.reason = row.get('reason')\n                return task\n        except Exception:\n            self._logger.exception('Query failed: %s', query_payload)\n            raise\n\n        raise TaskNotFoundError(task_id)\n\n    async def get_tasks(self, limit=100, offset=0, where=None, and_where=None, or_where=None, sort=None) -> List[Task]:\n        \"\"\"Retrieves tasks\n        The result set is ordered by start_time descending\n        Args:\n            offset:\n                Ignore this number of rows at the beginning of the result set.\n                Results are unpredictable unless order_by is used.\n            limit: Return at most this number of rows\n            where: A query\n            sort:\n                A tuple of Task attributes to sort by.\n                Defaults to (\"start_time\", \"desc\")\n        \"\"\"\n        chain_payload = PayloadBuilder().SELECT(\"id\", \"process_name\", 
\"schedule_name\", \"state\", \"start_time\", \"end_time\", \"reason\", \"exit_code\") \\\n            .ALIAS(\"return\", (\"start_time\", 'start_time'), (\"end_time\", 'end_time'))\\\n            .FORMAT(\"return\", (\"start_time\", \"YYYY-MM-DD HH24:MI:SS.MS\"), (\"end_time\", \"YYYY-MM-DD HH24:MI:SS.MS\"))\\\n            .LIMIT(limit).chain_payload()\n        if offset:\n            chain_payload = PayloadBuilder(chain_payload).OFFSET(offset).chain_payload()\n        if where:\n            chain_payload = PayloadBuilder(chain_payload).WHERE(where).chain_payload()\n        if and_where:\n            chain_payload = PayloadBuilder(chain_payload).AND_WHERE(and_where).chain_payload()\n        if or_where:\n            chain_payload = PayloadBuilder(chain_payload).OR_WHERE(or_where).chain_payload()\n        if sort:\n            chain_payload = PayloadBuilder(chain_payload).ORDER_BY(sort).chain_payload()\n\n        query_payload = PayloadBuilder(chain_payload).payload()\n        tasks = []\n\n        try:\n            self._logger.debug('Database command: %s', query_payload)\n            res = await self._storage_async.query_tbl_with_payload(\"tasks\", query_payload)\n            for row in res['rows']:\n                task = Task()\n                task.task_id = row.get('id')\n                task.state = Task.State(int(row.get('state')))\n                task.start_time = row.get('start_time')\n                task.schedule_name = row.get('schedule_name')\n                task.schedule_id = row.get('schedule_id')\n                task.process_name = row.get('process_name')\n                task.end_time = row.get('end_time')\n                task.exit_code = row.get('exit_code')\n                task.reason = row.get('reason')\n                tasks.append(task)\n        except Exception:\n            self._logger.exception('Query failed: %s', query_payload)\n            raise\n\n        return tasks\n\n    async def cancel_task(self, task_id: uuid.UUID) -> 
None:\n        \"\"\"Cancels a running task\n\n        Args:\n            task_id: Specifies the task to cancel\n\n        Raises:\n            NotReadyError\n\n            TaskNotRunningError\n        \"\"\"\n        if self._paused or not self._ready:\n            raise NotReadyError()\n\n        try:\n            task_process = self._task_processes[task_id]  # _TaskProcess\n        except KeyError:\n            raise TaskNotRunningError(task_id)\n\n        if task_process.cancel_requested:\n            # TODO: Allow after some period of time has elapsed\n            raise DuplicateRequestError()\n\n        # TODO: FOGL-356 track the last time TERM was sent to each task\n        task_process.cancel_requested = time.time()\n\n        schedule = task_process.schedule\n\n        self._logger.info(\n            \"Stopping process: Schedule '%s' process '%s' task %s pid %s\\n%s\",\n            schedule.name,\n            schedule.process_name,\n            task_id,\n            task_process.process.pid,\n            self._process_scripts[schedule.process_name][0])\n\n        try:\n            # We need to terminate the child processes because now all tasks are started via a script and\n            # this creates two unix processes. 
Scheduler can store pid of the parent shell script process only\n            # and on termination of the task, both the script shell process and actual task process need to\n            # be stopped.\n            self._terminate_child_processes(task_process.process.pid)\n            task_process.process.terminate()\n        except ProcessLookupError:\n            pass  # Process has terminated\n\n        if task_process.future.cancel() is True:\n            await self._wait_for_task_completion(task_process)\n\n    def _terminate_child_processes(self, parent_id):\n        ps_command = subprocess.Popen(\"ps -o pid --ppid {} --noheaders\".format(parent_id), shell=True,\n                                      stdout=subprocess.PIPE)\n        ps_output, err = ps_command.communicate()\n        pids = ps_output.decode().strip().split(\"\\n\")\n        for pid_str in pids:\n            if pid_str.strip():\n                os.kill(int(pid_str.strip()), signal.SIGTERM)\n\n    def extract_day_time_from_interval(self, str_interval):\n        if 'days' in str_interval:\n            interval_split = str_interval.split('days')\n            interval_days = interval_split[0].strip()\n            interval_time = interval_split[1]\n        elif 'day' in str_interval:\n            interval_split = str_interval.split('day')\n            interval_days = interval_split[0].strip()\n            interval_time = interval_split[1]\n        else:\n            interval_days = 0\n            interval_time = str_interval\n\n        if not interval_time:\n            interval_time = \"00:00:00\"\n        interval_time = interval_time.replace(\",\", \"\").strip()\n        interval_time = datetime.datetime.strptime(interval_time, \"%H:%M:%S\")\n\n        return int(interval_days), interval_time\n\n    async def audit_trail_entry(self, old_row, new_row):\n        audit = AuditLogger(self._storage_async)\n        old_schedule = {\"name\": old_row.name,\n                        'type': old_row.type,\n   
                     \"processName\": old_row.process_name,\n                        \"repeat\": old_row.repeat.total_seconds() if old_row.repeat else 0,\n                        \"enabled\": True if old_row.enabled else False,\n                        \"exclusive\": True if old_row.exclusive else False\n                        }\n        # Timed schedule KV pairs\n        if old_row.type == 2:\n            old_schedule[\"time\"] = \"{}:{}:{}\".format(old_row.time.hour, old_row.time.minute, old_row.time.second\n                                                     ) if old_row.time else '00:00:00'\n            old_schedule[\"day\"] = old_row.day if old_row.day else 0\n        await audit.information('SCHCH', {'schedule': new_row.toDict(), 'old_schedule': old_schedule})\n\n    def reset_process_script_priority(self):\n        for k,v in self._process_scripts.items():\n            if isinstance(v, tuple):\n                self._process_scripts[k] = (v[0], self._DEFAULT_PROCESS_SCRIPT_PRIORITY)\n"
  },
  {
    "path": "python/fledge/services/core/server.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Core server module\"\"\"\n\nimport asyncio\nimport os\nimport logging\nimport subprocess\nimport sys\nimport ssl\nimport time\nimport uuid\nimport hmac\nfrom aiohttp import web\nimport aiohttp\nimport json\nimport signal\nfrom datetime import datetime, timedelta\nimport jwt\n\nfrom fledge.common import logger\nfrom fledge.common.utils import async_sleep\nfrom fledge.common.alert_manager import AlertManager\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.storage_client.exceptions import *\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.storage_client.storage_client import ReadingsStorageClientAsync\nfrom fledge.common.web import middleware\n\nfrom fledge.services.core import routes as admin_routes\nfrom fledge.services.core.api import configuration as conf_api\nfrom fledge.services.common.microservice_management import routes as management_routes\n\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core.service_registry import exceptions as service_registry_exceptions\nfrom fledge.services.core.interest_registry.interest_registry import InterestRegistry\nfrom fledge.services.core.interest_registry import exceptions as interest_registry_exceptions\nfrom fledge.services.core.scheduler.scheduler import Scheduler\nfrom fledge.services.core.service_registry.monitor import Monitor\nfrom fledge.services.common.service_announcer import ServiceAnnouncer\nfrom fledge.services.core.user_model import User\nfrom fledge.common.storage_client import payload_builder\nfrom fledge.services.core.asset_tracker.asset_tracker import AssetTracker\nfrom fledge.services.core.api import 
asset_tracker as asset_tracker_api\nfrom fledge.common.web.ssl_wrapper import SSLVerifier\nfrom fledge.services.core.api import exceptions as api_exception\nfrom fledge.services.core.api.control_service import acl_management as acl_management\nfrom fledge.services.core.firewall import Firewall\n\n\n__author__ = \"Amarendra K. Sinha, Praveen Garg, Terris Linenbach, Massimiliano Pinto, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017-2021 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_logger = logger.setup(__name__, level=logging.INFO)\n\n# FLEDGE_ROOT env variable\n_FLEDGE_DATA = os.getenv(\"FLEDGE_DATA\", default=None)\n_FLEDGE_ROOT = os.getenv(\"FLEDGE_ROOT\", default='/usr/local/fledge')\n_SCRIPTS_DIR = os.path.expanduser(_FLEDGE_ROOT + '/scripts')\n\n# PID dir and filename\n_FLEDGE_PID_DIR = \"/var/run\"\n_FLEDGE_PID_FILE = \"fledge.core.pid\"\n\n\nSSL_PROTOCOLS = (asyncio.sslproto.SSLProtocol,)\n\n# TODO generate secret at build time\nSERVICE_JWT_SECRET = 'f0gl@mp+Fl3dG3'\nSERVICE_JWT_ALGORITHM = 'HS512'\nSERVICE_JWT_EXP_DELTA_SECONDS = 30*60  # 30 minutes\nSERVICE_JWT_AUDIENCE = 'Fledge'\n\n# aiohttp client’s maximum size in a request, in bytes.\n# If a POST request exceeds this value, it raises an HTTPRequestEntityTooLarge exception.\nAIOHTTP_CLIENT_MAX_SIZE = 4*1024**3  # allowed up to 4GB\n\n\ndef ignore_aiohttp_ssl_eror(loop):\n    \"\"\"Ignore aiohttp #3535 / cpython #13548 issue with SSL data after close\n\n    There is an issue in Python 3.7 up to 3.7.3 that over-reports a\n    ssl.SSLError fatal error. 
See GitHub issues aio-libs/aiohttp#3535 and\n    python/cpython#13548.\n\n    Given a loop, this sets up an exception handler that ignores this specific\n    exception, but passes everything else on to the previous exception handler\n    this one replaces.\n\n    Checks for fixed Python versions, disabling itself when running on 3.7.4+\n    or 3.8.\n\n    \"\"\"\n    if sys.version_info >= (3, 7, 4):\n        return\n\n    orig_handler = loop.get_exception_handler()\n\n    def ignore_ssl_error(loop, context):\n        if context.get(\"message\") in {\n            \"SSL error in data received\",\n            \"SSL handshake failed\"\n        }:\n            # validate we have the right exception, transport and protocol\n            exception = context.get('exception')\n            protocol = context.get('protocol')\n            if (\n                isinstance(exception, ssl.SSLError)\n                and exception.reason == 'SSLV3_ALERT_CERTIFICATE_UNKNOWN'\n                and isinstance(protocol, SSL_PROTOCOLS)\n            ):\n                if loop.get_debug():\n                    asyncio.log.logger.debug('Ignoring asyncio SSL SSLV3_ALERT_CERTIFICATE_UNKNOWN error')\n                return\n        if orig_handler is not None:\n            orig_handler(loop, context)\n        else:\n            loop.default_exception_handler(context)\n\n    loop.set_exception_handler(ignore_ssl_error)\n\n\nclass Server:\n    \"\"\" Fledge core server.\n\n     Starts the Fledge REST server, storage and scheduler\n    \"\"\"\n\n    scheduler = None\n    \"\"\" fledge.core.Scheduler \"\"\"\n\n    service_monitor = None\n    \"\"\" fledge.microservice_management.service_registry.Monitor \"\"\"\n\n    _service_name = 'Fledge'\n    \"\"\" The name of this Fledge service \"\"\"\n\n    _service_description = 'Fledge REST Services'\n    \"\"\" The description of this Fledge service \"\"\"\n\n    _SERVICE_DEFAULT_CONFIG = {\n        'name': {\n            'description': 'Name of this 
Fledge service',\n            'type': 'string',\n            'default': 'Fledge',\n            'displayName': 'Name',\n            'order': '1',\n            'mandatory': \"true\"\n        },\n        'description': {\n            'description': 'Description of this Fledge service',\n            'type': 'string',\n            'default': 'Fledge administrative API',\n            'displayName': 'Description',\n            'order': '2'\n        }\n    }\n\n    _FEATURES_DEFAULT_CONFIG = {\n        'control': {\n            'description': 'Allow the use of control features within the Fledge instance',\n            'type': 'boolean',\n            'default': 'true',\n            'displayName': 'Control',\n            'order': '1',\n            'mandatory': \"true\",\n            'permissions': ['admin']\n        },\n        'debugging': {\n            'description': 'Allow the use of pipeline debugging features within the Fledge instance',\n            'type': 'boolean',\n            'default': 'true',\n            'displayName': 'Pipeline Debugging',\n            'order': '1',\n            'mandatory': \"true\",\n            'permissions': ['admin']\n        }\n    }\n\n    _MANAGEMENT_SERVICE = '_fledge-manage._tcp.local.'\n    \"\"\" The management service we advertise \"\"\"\n\n    _ADMIN_API_SERVICE = '_fledge-admin._tcp.local.'\n    \"\"\" The admin REST service we advertise \"\"\"\n\n    _USER_API_SERVICE = '_fledge-user._tcp.local.'\n    \"\"\" The user REST service we advertise \"\"\"\n\n    _API_PROXIES = {}\n    \"\"\" Proxy map for interfacing admin/user's REST API endpoints to Micro-services' service API endpoints \"\"\"\n\n    admin_announcer = None\n    \"\"\" The Announcer for the Admin API \"\"\"\n\n    user_announcer = None\n    \"\"\" The Announcer for the User API \"\"\"\n\n    management_announcer = None\n    \"\"\" The Announcer for the management API \"\"\"\n\n    _host = '0.0.0.0'\n    \"\"\" Host IP of core \"\"\"\n\n    core_management_port = 
0\n    \"\"\" Microservice management port of core \"\"\"\n\n    rest_server_port = 0\n    \"\"\" Fledge REST API port \"\"\"\n\n    is_rest_server_http_enabled = True\n    \"\"\" Flag that decides whether the Fledge REST API is enabled over HTTP on restart \"\"\"\n\n    is_auth_required = False\n    \"\"\" Flag that decides whether authentication is mandatory or optional for the Fledge Admin/User REST API \"\"\"\n\n    auth_method = 'any'\n\n    cert_file_name = ''\n    \"\"\" cert file name \"\"\"\n\n    _REST_API_DEFAULT_CONFIG = {\n        'enableHttp': {\n            'description': 'Enable HTTP (disable to use HTTPS)',\n            'type': 'boolean',\n            'default': 'true',\n            'displayName': 'Enable HTTP',\n            'order': '1',\n            'permissions': ['admin']\n        },\n        'httpPort': {\n            'description': 'Port to accept HTTP connections on',\n            'type': 'integer',\n            'default': '8081',\n            'displayName': 'HTTP Port',\n            'order': '2',\n            'permissions': ['admin']\n        },\n        'httpsPort': {\n            'description': 'Port to accept HTTPS connections on',\n            'type': 'integer',\n            'default': '1995',\n            'displayName': 'HTTPS Port',\n            'order': '3',\n            'validity': 'enableHttp==\"false\"',\n            'permissions': ['admin']\n        },\n        'certificateName': {\n            'description': 'Certificate file name',\n            'type': 'string',\n            'default': 'fledge',\n            'displayName': 'Certificate Name',\n            'order': '4',\n            'validity': 'enableHttp==\"false\"',\n            'permissions': ['admin']\n        },\n        'authentication': {\n            'description': 'API Call Authentication',\n            'type': 'enumeration',\n            'options': ['mandatory', 'optional'],\n            'default': 'mandatory',\n            'displayName': 'Authentication',\n            'order': '5',\n          
  'permissions': ['admin']\n        },\n        'authMethod': {\n            'description': 'Authentication method',\n            'type': 'enumeration',\n            'options': [\"any\", \"password\", \"certificate\"],\n            'default': 'any',\n            'displayName': 'Authentication method',\n            'order': '6',\n            'permissions': ['admin']\n        },\n        'authCertificateName': {\n            'description': 'Auth Certificate name',\n            'type': 'string',\n            'default': 'ca',\n            'displayName': 'Auth Certificate',\n            'order': '7',\n            'permissions': ['admin']\n        },\n        'allowPing': {\n            'description': 'Allow access to ping, regardless of the authentication required and'\n                           ' authentication header',\n            'type': 'boolean',\n            'default': 'true',\n            'displayName': 'Allow Ping',\n            'order': '8',\n            'permissions': ['admin']\n        },\n        'authProviders': {\n            'description': 'Authentication providers to use for the interface (JSON array object)',\n            'type': 'JSON',\n            'default': '{\"providers\": [\"username\", \"ldap\"] }',\n            'displayName': 'Auth Providers',\n            'order': '9',\n            'permissions': ['admin']\n        },\n        'disconnectIdleUserSession': {\n            'description': 'Disconnect idle user session after certain period of inactivity',\n            'type': 'integer',\n            'default': '15',\n            'displayName': 'Idle User Session Disconnection (In Minutes)',\n            'order': '10',\n            'minimum': '1',\n            'maximum': '1440',\n            'permissions': ['admin']\n        }\n    }\n\n    _LOGGING_DEFAULT_CONFIG = {\n        'logLevel': {\n            'description': 'Minimum logging level reported for Core server',\n            'type': 'enumeration',\n            'displayName': 'Minimum Log 
Level',\n            'options': ['debug', 'info', 'warning', 'error'],\n            'default': 'warning',\n            'order': '1'\n        }\n    }\n\n    _CONFIGURATION_DEFAULT_CONFIG = {\n        'cacheSize': {\n            'description': 'To control the caching size of Core Configuration Manager',\n            'type': 'integer',\n            'displayName': 'Cache Size',\n            'default': '30',\n            'order': '1',\n            'minimum': '1',\n            'maximum': '1000'\n        }\n    }\n\n    _RESOURCE_LIMIT_DEFAULT_CONFIG = {\n        'serviceBuffering': {\n            'description': 'Buffering level for the South Service',\n            'type': 'enumeration',\n            'displayName': 'South Service Buffering',\n            'options': ['Unlimited', 'Limited'],\n            'default': 'Unlimited',\n            'order': '1',\n            'permissions': ['admin']\n        },\n        'serviceBufferSize': {\n            'description': 'Buffer size for the South Service',\n            'type': 'integer',\n            'displayName': 'South Service Buffer Size',\n            'minimum': '1000',\n            'default': '1000',\n            'order': '2',\n            \"validity\" : \"serviceBuffering == \\\"Limited\\\"\",\n            'permissions': ['admin']\n        },\n        'discardPolicy': {\n            'description': 'The different discard policies for the South Service',\n            'type': 'enumeration',\n            'displayName': 'Discard Policy',\n            'options': ['Discard Oldest', 'Reduce Fidelity', 'Discard Newest'],\n            'default': 'Discard Oldest',\n            'order': '3',\n            \"validity\" : \"serviceBuffering == \\\"Limited\\\"\",\n            'permissions': ['admin']\n        }\n    }\n\n    _log_level = _LOGGING_DEFAULT_CONFIG['logLevel']['default']\n    \"\"\" Common logging level for Core \"\"\"\n\n    _start_time = time.time()\n    \"\"\" Start time of core process \"\"\"\n\n    _storage_client = 
None\n    \"\"\" Storage client to storage service \"\"\"\n\n    _storage_client_async = None\n    \"\"\" Async Storage client to storage service \"\"\"\n\n    _readings_client_async = None\n    \"\"\" Async Readings client to storage service \"\"\"\n\n    _configuration_manager = None\n    \"\"\" Instance of configuration manager (singleton) \"\"\"\n\n    _interest_registry = None\n    \"\"\" Instance of interest registry (singleton) \"\"\"\n\n    _audit = None\n    \"\"\" Instance of audit logger (singleton) \"\"\"\n\n    _pidfile = None\n    \"\"\" The PID file name \"\"\"\n\n    _asset_tracker = None\n    \"\"\" Asset tracker \"\"\"\n\n    _alert_manager = None\n    \"\"\" Alert Manager \"\"\"\n\n    running_in_safe_mode = False\n    \"\"\" Fledge running in Safe mode \"\"\"\n\n    _package_cache_manager = None\n    \"\"\" Package Cache Manager \"\"\"\n\n    _user_sessions = []\n    \"\"\" User sessions information to disconnect when idle for a certain period \"\"\"\n\n    _user_idle_session_timeout = 15 * 60\n    \"\"\" User idle session timeout (in seconds) \"\"\"\n\n    _INSTALLATION_DEFAULT_CONFIG = {\n        'maxUpdate': {\n            'description': 'Maximum updates per day',\n            'type': 'integer',\n            'default': '1',\n            'displayName': 'Maximum Update',\n            'order': '1',\n            'minimum': '1',\n            'maximum': '8'\n        },\n        'maxUpgrade': {\n            'description': 'Maximum upgrades per day',\n            'type': 'integer',\n            'default': '1',\n            'displayName': 'Maximum Upgrade',\n            'order': '3',\n            'minimum': '1',\n            'maximum': '8',\n            'validity': 'upgradeOnInstall == \"true\"'\n        },\n        'upgradeOnInstall': {\n            'description': 'Run upgrade prior to installing new software',\n            'type': 'boolean',\n            'default': 'false',\n            'displayName': 'Upgrade on Install',\n            'order': '2'\n 
       },\n        'listAvailablePackagesCacheTTL': {\n            'description': 'Time to live (in minutes) for the cached list of available packages',\n            'type': 'integer',\n            'default': '15',\n            'displayName': 'Available Packages Cache',\n            'order': '4',\n            'minimum': '0'\n        }\n    }\n\n    service_app, service_server, service_server_handler = None, None, None\n    core_app, core_server, core_server_handler = None, None, None\n\n    @classmethod\n    def get_certificates(cls):\n        # TODO: FOGL-780\n        if _FLEDGE_DATA:\n            certs_dir = os.path.expanduser(_FLEDGE_DATA + '/etc/certs')\n        else:\n            certs_dir = os.path.expanduser(_FLEDGE_ROOT + '/data/etc/certs')\n\n        \"\"\" Generated using\n                $ openssl version\n                OpenSSL 1.0.2g  1 Mar 2016\n\n        The openssl library is required to generate your own certificate. Run the following command in your local environment to see if you already have openssl installed.\n\n        $ which openssl\n        /usr/bin/openssl\n\n        If the which command does not return a path then you will need to install openssl yourself:\n\n        $ apt-get install openssl\n\n        Generate private key and certificate signing request:\n\n        A private key and certificate signing request are required to create an SSL certificate.\n        When the openssl req command asks for a “challenge password”, just press return, leaving the password empty.\n        This password is used by Certificate Authorities to authenticate the certificate owner when they want to revoke\n        their certificate. 
Since this is a self-signed certificate, there’s no way to revoke it via CRL (Certificate Revocation List).\n\n        $ openssl genrsa -des3 -passout pass:x -out server.pass.key 2048\n        ...\n        $ openssl rsa -passin pass:x -in server.pass.key -out server.key\n        writing RSA key\n        $ rm server.pass.key\n        $ openssl req -new -key server.key -out server.csr\n        ...\n        Country Name (2 letter code) [AU]:US\n        State or Province Name (full name) [Some-State]:California\n        ...\n        A challenge password []:\n        ...\n\n        Generate SSL certificate:\n\n        The self-signed SSL certificate is generated from the server.key private key and server.csr files.\n\n        $ openssl x509 -req -sha256 -days 365 -in server.csr -signkey server.key -out server.cert\n\n        The server.cert file is the certificate suitable for use along with the server.key private key.\n\n        Put these in $FLEDGE_DATA/etc/certs, $FLEDGE_ROOT/data/etc/certs or /usr/local/fledge/data/etc/certs\n\n        \"\"\"\n        cert = certs_dir + '/{}.cert'.format(cls.cert_file_name)\n        key = certs_dir + '/{}.key'.format(cls.cert_file_name)\n\n        if not os.path.isfile(cert) or not os.path.isfile(key):\n            _logger.warning(\"%s certificate files are missing. 
Hence using default certificate.\", cls.cert_file_name)\n            cert = certs_dir + '/fledge.cert'\n            key = certs_dir + '/fledge.key'\n            if not os.path.isfile(cert) or not os.path.isfile(key):\n                _logger.error(\"Certificates are missing\")\n                raise RuntimeError(\"Certificates are missing\")\n\n        return cert, key\n\n    @classmethod\n    async def rest_api_config(cls):\n        \"\"\" Get the REST API level configuration\n\n        :return: port and TLS enabled info\n        \"\"\"\n        try:\n            config = cls._REST_API_DEFAULT_CONFIG\n            category = 'rest_api'\n\n            await cls._configuration_manager.create_category(category, config, 'Fledge Admin and User REST API', True, display_name=\"Admin API\")\n            config = await cls._configuration_manager.get_category_all_items(category)\n\n            try:\n                cls.is_auth_required = config['authentication']['value'] == \"mandatory\"\n            except KeyError:\n                _logger.error(\"error in retrieving authentication info\")\n                raise\n\n            try:\n                cls.auth_method = config['authMethod']['value']\n            except KeyError:\n                _logger.error(\"error in retrieving authentication method info\")\n                raise\n\n            try:\n                cls.cert_file_name = config['certificateName']['value']\n            except KeyError:\n                _logger.error(\"error in retrieving certificateName info\")\n                raise\n\n            try:\n                cls.is_rest_server_http_enabled = config['enableHttp']['value'] != 'false'\n            except KeyError:\n                cls.is_rest_server_http_enabled = False\n\n            try:\n                port_from_config = config['httpPort']['value'] if cls.is_rest_server_http_enabled \\\n                    else config['httpsPort']['value']\n                cls.rest_server_port = int(port_from_config)\n            except 
KeyError:\n                _logger.error(\"error in retrieving port info\")\n                raise\n            except ValueError:\n                _logger.error(\"error in parsing port value, received %s with type %s\",\n                              port_from_config, type(port_from_config))\n                raise\n            try:\n                cls._user_idle_session_timeout = int(config['disconnectIdleUserSession']['value']) * 60\n            except (KeyError, ValueError):\n                cls._user_idle_session_timeout = 15 * 60\n        except Exception as ex:\n            _logger.exception(ex)\n            raise\n\n    @classmethod\n    async def password_config(cls):\n        try:\n            config = {\n                'policy': {\n                    'description': 'Password policy',\n                    'type': 'enumeration',\n                    'options': ['Any characters', 'Mixed case Alphabetic', 'Mixed case and numeric', 'Mixed case, numeric and special characters'],\n                    'default': 'Any characters',\n                    'displayName': 'Policy',\n                    'order': '1',\n                    'permissions': ['admin']\n                },\n                'length': {\n                    'description': 'Minimum password length',\n                    'type': 'integer',\n                    'default': '6',\n                    'displayName': 'Minimum Length',\n                    'minimum': '6',\n                    'maximum': '80',\n                    'order': '2',\n                    'permissions': ['admin']\n                },\n                'expiration': {\n                    'description': 'Number of days after which passwords must be changed',\n                    'type': 'integer',\n                    'default': '0',\n                    'displayName': 'Expiry (in Days)',\n                    'order': '3',\n                    'permissions': ['admin']\n                }\n            }\n            category = 'password'\n            
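The repeated `try`/`except KeyError` ladder in `rest_api_config` (read `config[item]['value']`, fall back on missing or malformed data) can be factored into one small helper. A minimal sketch, assuming the nested item shape returned by `get_category_all_items`; `read_int_item` is a hypothetical name, not a Fledge API:

```python
def read_int_item(config, item, default):
    """Return config[item]['value'] coerced to int, or default when the
    item is missing or its value is not a valid integer (hypothetical helper)."""
    try:
        return int(config[item]['value'])
    except (KeyError, ValueError, TypeError):
        return default

# Category items arrive as nested dicts, e.g. {'<item>': {'value': '<str>'}}
config = {'disconnectIdleUserSession': {'value': '15'}}
# The timeout is configured in minutes but kept internally in seconds
timeout_seconds = read_int_item(config, 'disconnectIdleUserSession', 15) * 60
```

The helper keeps the fallback in one place, so adding a new integer item does not require another four-line `try` block.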
await cls._configuration_manager.create_category(category, config, 'To control the password policy', True,\n                                                             display_name=\"Password Policy\")\n            await cls._configuration_manager.create_child_category(\"rest_api\", [category])\n        except Exception as ex:\n            _logger.exception(ex)\n            raise\n    \n    @classmethod\n    async def support_bundle_config(cls):\n        try:\n            config = {\n                \"auto_support_bundle\": {\n                    \"description\": \"Automatically create support bundle when service fails\",\n                    \"type\": \"boolean\",\n                    \"default\": \"true\",\n                    \"displayName\": \"Auto Generate On Failure\"\n                },\n                \"support_bundle_retain_count\": { \n                    \"description\": \"Number of support bundles to retain (minimum 1)\",\n                    \"type\": \"integer\",\n                    \"default\": \"3\",\n                    \"minimum\": \"1\",\n                    \"displayName\": \"Bundles To Retain\"\n                }\n            }\n\n            category = 'SUPPORT_BUNDLE'\n            await cls._configuration_manager.create_category(category, config, 'Support Bundle Configuration', True,\n                                                             display_name=\"Support Bundle\")\n            await cls._configuration_manager.create_child_category(\"Advanced\",[\"SUPPORT_BUNDLE\"])\n        except Exception as ex:\n            _logger.exception(ex)\n            raise\n\n    @classmethod\n    async def firewall_config(cls):\n        try:\n            _firewall = Firewall()\n            category = _firewall.category\n            await cls._configuration_manager.create_category(category, _firewall.config, _firewall.description, True,\n                                                             display_name=_firewall.display_name)\n            
config = await cls._configuration_manager.get_category_all_items(category)\n            Firewall.IPAddresses.save(data=config)\n            await cls._configuration_manager.create_child_category(\"rest_api\", [category])\n        except Exception as ex:\n            _logger.exception(ex)\n            raise\n\n    @classmethod\n    async def features_config(cls):\n        \"\"\" Get the features inclusion configuration \"\"\"\n        try:\n            config = cls._FEATURES_DEFAULT_CONFIG\n            category = 'FEATURES'\n            description = \"Control the inclusion of system features\"\n            if cls._configuration_manager is None:\n                cls._configuration_manager = ConfigurationManager(cls._storage_client_async)\n            await cls._configuration_manager.create_category(category, config, description, True,\n                                                             display_name='Features')\n            config = await cls._configuration_manager.get_category_all_items(category)\n        except Exception as ex:\n            _logger.exception(ex)\n            raise\n\n    @classmethod\n    async def service_config(cls):\n        \"\"\"\n        Get the service level configuration\n        \"\"\"\n        try:\n            config = cls._SERVICE_DEFAULT_CONFIG\n            category = 'service'\n\n            if cls._configuration_manager is None:\n                _logger.error(\"No configuration manager available\")\n            await cls._configuration_manager.create_category(category, config, 'Fledge Service', True, display_name='Fledge Service')\n            config = await cls._configuration_manager.get_category_all_items(category)\n\n            try:\n                cls._service_name = config['name']['value']\n            except KeyError:\n                cls._service_name = 'Fledge'\n            try:\n                cls._service_description = config['description']['value']\n            except KeyError:\n                
cls._service_description = 'Fledge REST Services'\n        except Exception as ex:\n            _logger.exception(ex)\n            raise\n\n    @classmethod\n    async def installation_config(cls):\n        \"\"\"\n        Get the installation level configuration\n        \"\"\"\n        try:\n            config = cls._INSTALLATION_DEFAULT_CONFIG\n            category = 'Installation'\n\n            if cls._configuration_manager is None:\n                _logger.error(\"No configuration manager available\")\n            await cls._configuration_manager.create_category(category, config, 'Installation', True,\n                                                             display_name='Installation')\n            await cls._configuration_manager.get_category_all_items(category)\n\n            cls._package_cache_manager = {\"update\": {\"last_accessed_time\": \"\"},\n                                          \"upgrade\": {\"last_accessed_time\": \"\"}, \"list\": {\"last_accessed_time\": \"\"}}\n        except Exception as ex:\n            _logger.exception(ex)\n            raise\n\n    @classmethod\n    async def core_logger_setup(cls):\n        \"\"\" Get the logging level configuration \"\"\"\n        try:\n            config = cls._LOGGING_DEFAULT_CONFIG\n            category = 'LOGGING'\n            description = \"Logging Level of Core Server\"\n            if cls._configuration_manager is None:\n                cls._configuration_manager = ConfigurationManager(cls._storage_client_async)\n            await cls._configuration_manager.create_category(category, config, description, True,\n                                                             display_name='Logging')\n            config = await cls._configuration_manager.get_category_all_items(category)\n            cls._log_level = config['logLevel']['value']\n            from fledge.common.logger import FLCoreLogger\n            FLCoreLogger().set_level(cls._log_level)\n        except Exception as ex:\n         
   _logger.exception(ex)\n            raise\n\n    @classmethod\n    async def core_south_service_resource_limit_setup(cls):\n        \"\"\" Get the south service resource limit configuration \"\"\"\n        try:\n            config = cls._RESOURCE_LIMIT_DEFAULT_CONFIG\n            category = 'RESOURCE_LIMIT'\n            description = \"Resource Limit of South Service\"\n            if cls._configuration_manager is None:\n                cls._configuration_manager = ConfigurationManager(cls._storage_client_async)\n            await cls._configuration_manager.create_category(category, config, description, True,\n                                                             display_name='Resource Limit')\n        except Exception as ex:\n            _logger.exception(ex)\n            raise\n\n    @classmethod\n    async def setup_config_manager(cls):\n        \"\"\" Configuration manager category \"\"\"\n        try:\n            if cls._configuration_manager is None:\n                cls._configuration_manager = ConfigurationManager(cls._storage_client_async)\n\n            config = cls._CONFIGURATION_DEFAULT_CONFIG\n            category = 'CONFIGURATION'\n            description = \"Core Configuration Manager\"\n            await cls._configuration_manager.create_category(category, config, description, True,\n                                                             display_name='Configuration Manager')\n            config = await cls._configuration_manager.get_category_all_items(category)\n            cache_size = int(config['cacheSize']['value'])\n            # Internal handling of cache size\n            if cache_size == 0:\n                default_cache_size = cls._configuration_manager._cacheManager.max_cache_size\n                _logger.warning(\"Configuration Manager Cache Size is being set to the default size {}.\".format(\n                    default_cache_size))\n                cache_size = default_cache_size\n            
cls._configuration_manager._cacheManager.max_cache_size = cache_size\n        except Exception as ex:\n            _logger.exception(ex)\n            raise\n\n    @staticmethod\n    def _make_app(auth_required=True, auth_method='any'):\n        \"\"\"Creates the REST server\n\n        :rtype: web.Application\n        \"\"\"\n        mwares = [middleware.error_middleware]\n\n        # Maintain this order. Middlewares are executed in reverse order.\n        if auth_method != \"any\":\n            if auth_method == \"certificate\":\n                mwares.append(middleware.certificate_login_middleware)\n            else:  # password\n                mwares.append(middleware.password_login_middleware)\n\n        if not auth_required:\n            mwares.append(middleware.optional_auth_middleware)\n        else:\n            mwares.append(middleware.auth_middleware)\n\n        app = web.Application(middlewares=mwares, client_max_size=AIOHTTP_CLIENT_MAX_SIZE)\n        # aiohttp web server logging level always set to warning\n        web.access_logger.setLevel(logging.WARNING)\n        admin_routes.setup(app)\n        return app\n\n    @classmethod\n    def _make_core_app(cls):\n        \"\"\"Creates the Service management REST server Core a.k.a. 
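The ordering comment in `_make_app` ("Middlewares are executed in reverse order") can be illustrated with plain function wrappers: each append wraps the previous handler, so the middleware appended last is entered first. This is a conceptual sketch with illustrative names, not aiohttp's actual machinery:

```python
def wrap(name, handler, trace):
    # Each wrapper records its name on entry, then delegates inward
    def wrapped(request):
        trace.append(name)
        return handler(request)
    return wrapped

def base_handler(request):
    return 'response'

trace = []
handler = base_handler
# Same append order as _make_app: error middleware first, auth middleware last
for name in ['error', 'auth']:
    handler = wrap(name, handler, trace)

result = handler('GET /')
# Entry order is the reverse of the append order: 'auth' runs before 'error'
```

This is why `_make_app` appends `error_middleware` first yet relies on the auth middleware seeing the request before the handler does.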
service registry\n\n        :rtype: web.Application\n        \"\"\"\n        app = web.Application(middlewares=[middleware.error_middleware], client_max_size=AIOHTTP_CLIENT_MAX_SIZE)\n        # aiohttp web server logging level always set to warning\n        web.access_logger.setLevel(logging.WARNING)\n        management_routes.setup(app, cls, True)\n        return app\n\n    @classmethod\n    async def _start_service_monitor(cls):\n        \"\"\"Starts the micro-service monitor\"\"\"\n        cls.service_monitor = Monitor()\n        await cls.service_monitor.start()\n        _logger.info(\"Services monitoring started ...\")\n\n    @classmethod\n    async def stop_service_monitor(cls):\n        \"\"\"Stops the micro-service monitor\"\"\"\n        await cls.service_monitor.stop()\n        _logger.info(\"Services monitoring stopped.\")\n\n    @classmethod\n    async def _start_scheduler(cls):\n        \"\"\"Starts the scheduler\"\"\"\n        _logger.info(\"Starting scheduler ...\")\n        cls.scheduler = Scheduler(cls._host, cls.core_management_port, cls.running_in_safe_mode)\n        await cls.scheduler.start()\n\n    @staticmethod\n    def __start_storage(host, m_port):\n        _logger.info(\"Start storage, from directory {}\".format(_SCRIPTS_DIR))\n        try:\n            cmd_with_args = ['./services/storage', '--address={}'.format(host),\n                             '--port={}'.format(m_port)]\n            subprocess.call(cmd_with_args, cwd=_SCRIPTS_DIR)\n        except Exception as ex:\n            _logger.exception(ex)\n\n    @classmethod\n    async def _start_storage(cls, loop):\n        if loop is None:\n            loop = asyncio.get_event_loop()\n        # callback with args\n        loop.call_soon(cls.__start_storage, cls._host, cls.core_management_port)\n\n    @classmethod\n    async def _get_storage_client(cls):\n        storage_service = None\n        while storage_service is None and cls._storage_client_async is None:\n            try:\n          
      found_services = ServiceRegistry.get(name=\"Fledge Storage\")\n                storage_service = found_services[0]\n                cls._storage_client_async = StorageClientAsync(cls._host, cls.core_management_port, svc=storage_service)\n            except (service_registry_exceptions.DoesNotExist, InvalidServiceInstance, StorageServiceUnavailable, Exception) as ex:\n                await asyncio.sleep(5)\n        while cls._readings_client_async is None:\n            try:\n                cls._readings_client_async = ReadingsStorageClientAsync(cls._host, cls.core_management_port, svc=storage_service)\n            except (service_registry_exceptions.DoesNotExist, InvalidServiceInstance, StorageServiceUnavailable, Exception) as ex:\n                await asyncio.sleep(5)\n\n    @classmethod\n    def _start_app(cls, loop, app, host, port, ssl_ctx=None):\n        if loop is None:\n            loop = asyncio.get_event_loop()\n\n        handler = app.make_handler()\n        coro = loop.create_server(handler, host, port, ssl=ssl_ctx)\n        # added coroutine\n        server = loop.run_until_complete(coro)\n        return server, handler\n\n    @staticmethod\n    def pid_filename():\n        \"\"\" Get the full path of Fledge PID file \"\"\"\n        if _FLEDGE_DATA is None:\n            path = _FLEDGE_ROOT + \"/data\"\n        else:\n            path = _FLEDGE_DATA\n        return path + _FLEDGE_PID_DIR + \"/\" + _FLEDGE_PID_FILE\n\n    @classmethod\n    def _pidfile_exists(cls):\n        \"\"\" Check whether the PID file exists \"\"\"\n        try:\n            fh = open(cls._pidfile, 'r')\n            fh.close()\n            return True\n        except (FileNotFoundError, IOError, TypeError):\n            return False\n\n    @classmethod\n    def _remove_pid(cls):\n        \"\"\" Remove PID file \"\"\"\n        try:\n            os.remove(cls._pidfile)\n            _logger.info(\"Fledge PID file [\" + cls._pidfile + \"] removed.\")\n        except Exception as 
ex:\n            _logger.error(\"Fledge PID file remove error: [\" + ex.__class__.__name__ + \"], (\" + format(str(ex)) + \")\")\n\n    @classmethod\n    def _write_pid(cls, api_address, api_port):\n        \"\"\" Write data into PID file \"\"\"\n        try:\n            # Get PID file path\n            cls._pidfile = cls.pid_filename()\n\n            # Check for an existing PID file and log a message\n            if cls._pidfile_exists():\n                _logger.warning(\"A Fledge PID file has been found: [\" + \\\n                             cls._pidfile + \"], ignoring it.\")\n\n            # Get the running script PID\n            pid = os.getpid()\n\n            # Open for writing and truncate existing file\n            fh = None\n            try:\n                fh = open(cls._pidfile, 'w+')\n            except FileNotFoundError:\n                try:\n                    os.makedirs(os.path.dirname(cls._pidfile))\n                    _logger.info(\"The PID directory [\" + os.path.dirname(cls._pidfile) + \"] has been created\")\n                    fh = open(cls._pidfile, 'w+')\n                except Exception as ex:\n                    errmsg = \"PID dir create error: [\" + ex.__class__.__name__ + \"], (\" + format(str(ex)) + \")\"\n                    _logger.error(errmsg)\n                    raise\n            except Exception as ex:\n                errmsg = \"Fledge PID file create error: [\" + ex.__class__.__name__ + \"], (\" + format(str(ex)) + \")\"\n                _logger.error(errmsg)\n                raise\n\n            # Build the JSON object to write into PID file\n            info_data = {'processID': pid,\\\n                         'adminAPI': {\\\n                             \"protocol\": \"HTTP\" if cls.is_rest_server_http_enabled else \"HTTPS\",\\\n                             \"addresses\": [api_address],\\\n                             \"port\": api_port}\\\n                        }\n\n            # Write data 
into PID file\n            fh.write(json.dumps(info_data))\n\n            # Close the PID file\n            fh.close()\n            _logger.info(\"PID [\" + str(pid) + \"] written in [\" + cls._pidfile + \"]\")\n        except Exception as e:\n            sys.stderr.write('Error: ' + format(str(e)) + \"\\n\")\n            sys.exit(1)\n\n    @classmethod\n    def _reposition_streams_table(cls, loop):\n        _logger.info(\"'fledge.readings' is stored in memory and a restart has occurred, \"\n                     \"force reset of last_object column in 'fledge.streams'\")\n\n        def _reset_last_object_in_streams(_stream_id):\n            payload = payload_builder.PayloadBuilder().SET(last_object=0, ts='now()').WHERE(\n                ['id', '=', _stream_id]).payload()\n            loop.run_until_complete(cls._storage_client_async.update_tbl(\"streams\", payload))\n        try:\n            # Find the child categories of parent North\n            query_payload = payload_builder.PayloadBuilder().SELECT(\"child\").WHERE([\"parent\", \"=\", \"North\"]).payload()\n            north_children = loop.run_until_complete(cls._storage_client_async.query_tbl_with_payload(\n                'category_children', query_payload))\n            rows = north_children['rows']\n            if len(rows) > 0:\n                configuration = loop.run_until_complete(cls._storage_client_async.query_tbl('configuration'))\n                for nc in rows:\n                    for cat in configuration['rows']:\n                        if nc['child'] == cat['key']:\n                            cat_value = cat['value']\n                            stream_id = cat_value['streamId']['value']\n                            # reset last_object in the streams table for this streamId when:\n                            # a) the source KV pair is present with value 'readings'\n                            # b) the source KV pair is not present\n                            if 'source' in 
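The PID file written by `_write_pid` above is a small JSON document holding the process ID and the admin API connection details, which other tooling can read back. A minimal sketch of that round trip; the path and values here are illustrative, not the real Fledge defaults:

```python
import json
import os
import tempfile

# Same shape as the info_data object built in _write_pid (values illustrative)
info_data = {
    'processID': 12345,
    'adminAPI': {'protocol': 'HTTP', 'addresses': ['0.0.0.0'], 'port': 8081},
}
pidfile = os.path.join(tempfile.mkdtemp(), 'fledge.core.pid')
with open(pidfile, 'w') as fh:   # opening for write truncates any stale file
    fh.write(json.dumps(info_data))

# A management script can recover the admin API port from the file
with open(pidfile) as fh:
    loaded = json.load(fh)
```

Because the file is plain JSON, a shell one-liner (e.g. with `python -m json.tool`) can inspect it just as easily.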
cat_value:\n                                source_val = cat_value['source']['value']\n                                if source_val == 'readings':\n                                    _reset_last_object_in_streams(stream_id)\n                            else:\n                                _reset_last_object_in_streams(stream_id)\n                            break\n        except Exception as ex:\n            _logger.error(\"Failed to reset last_object of 'fledge.streams': %s\", str(ex))\n\n    @classmethod\n    def _check_readings_table(cls, loop):\n        # check readings table has any row\n        select_query_payload = payload_builder.PayloadBuilder().SELECT(\"id\").LIMIT(1).payload()\n        result = loop.run_until_complete(cls._readings_client_async.query(select_query_payload))\n        readings_row_exists = len(result['rows'])\n        if readings_row_exists == 0:\n            # check streams table has any row\n            s_result = loop.run_until_complete(cls._storage_client_async.query_tbl_with_payload('streams',\n                                                                                                select_query_payload))\n            streams_row_exists = len(s_result['rows'])\n            if streams_row_exists:\n                cls._reposition_streams_table(loop)\n        else:\n            _logger.info(\"'fledge.readings' is not empty; 'fledge.streams' last_objects reset is not required\")\n\n    @classmethod\n    async def _config_parents(cls):\n        # Create the parent category for all general configuration categories\n        try:\n            await cls._configuration_manager.create_category(\"General\", {}, 'General', True)\n            await cls._configuration_manager.create_child_category(\"General\", [\"service\", \"rest_api\", \"Installation\"])\n        except KeyError:\n            _logger.error('Failed to create General parent configuration category for service')\n            raise\n\n        # Create the parent category for all 
advanced configuration categories\n        try:\n            await cls._configuration_manager.create_category(\"Advanced\", {}, 'Advanced', True)\n            await cls._configuration_manager.create_child_category(\"Advanced\", [\"SMNTR\", \"SCHEDULER\", \"LOGGING\", \"RESOURCE_LIMIT\",\n                                                                                \"CONFIGURATION\",\"FEATURES\"])\n        except KeyError:\n            _logger.error('Failed to create Advanced parent configuration category for service')\n            raise\n\n        # Create the parent category for all Utilities configuration categories\n        try:\n            await cls._configuration_manager.create_category(\"Utilities\", {}, \"Utilities\", True)\n        except KeyError:\n            _logger.error('Failed to create Utilities parent configuration category for task')\n            raise\n\n    @classmethod\n    async def _start_asset_tracker(cls):\n        cls._asset_tracker = AssetTracker(cls._storage_client_async)\n        await cls._asset_tracker.load_asset_records()\n\n    @classmethod\n    async def _get_alerts(cls):\n        cls._alert_manager = AlertManager(cls._storage_client_async)\n        await cls._alert_manager.get_all()\n\n    @classmethod\n    def _start_core(cls, loop=None):\n        if cls.running_in_safe_mode:\n            _logger.info(\"Starting in SAFE MODE ...\")\n        else:\n            _logger.info(\"Starting ...\")\n        try:\n            host = cls._host\n\n            cls.core_app = cls._make_core_app()\n            cls.core_server, cls.core_server_handler = cls._start_app(loop, cls.core_app, host, 0)\n            address, cls.core_management_port = cls.core_server.sockets[0].getsockname()\n            _logger.info('Management API started on http://%s:%s', address, cls.core_management_port)\n            # see http://<core_mgt_host>:<core_mgt_port>/fledge/service for registered services\n            # start storage\n            
loop.run_until_complete(cls._start_storage(loop))\n\n            # get storage client\n            loop.run_until_complete(cls._get_storage_client())\n\n            if not cls.running_in_safe_mode:\n                # If readings table is empty, set last_object of all streams to 0\n                cls._check_readings_table(loop)\n\n            # obtain configuration manager and interest registry\n            cls._configuration_manager = ConfigurationManager(cls._storage_client_async)\n            cls._interest_registry = InterestRegistry(cls._configuration_manager)\n\n            # Configuration Manager setup\n            loop.run_until_complete(cls.setup_config_manager())\n\n            # Logging category\n            loop.run_until_complete(cls.core_logger_setup())\n            loop.run_until_complete(cls.core_south_service_resource_limit_setup())\n\n            # start scheduler\n            # see scheduler.py start def FIXME\n            # scheduler on start will wait for storage service registration\n            #\n            # NOTE: In safe mode, the scheduler will be in restricted mode,\n            # and only API operations and current state will be accessible (No jobs / processes will be triggered)\n            #\n            loop.run_until_complete(cls._start_scheduler())\n\n            # Support bundle configuration\n            loop.run_until_complete(cls.support_bundle_config())\n\n            # start monitor\n            loop.run_until_complete(cls._start_service_monitor())\n\n            # REST API\n            loop.run_until_complete(cls.rest_api_config())\n            loop.run_until_complete(cls.password_config())\n            loop.run_until_complete(cls.firewall_config())\n            loop.run_until_complete(cls.features_config())\n\n            cls.service_app = cls._make_app(auth_required=cls.is_auth_required, auth_method=cls.auth_method)\n\n            # ssl context\n            ssl_ctx = None\n            if not 
cls.is_rest_server_http_enabled:\n                cert, key = cls.get_certificates()\n                _logger.info('Loading certificates %s and key %s', cert, key)\n\n                # Verification handling of a tls cert\n                with open(cert, 'r') as tls_cert_content:\n                    tls_cert = tls_cert_content.read()\n                SSLVerifier.set_user_cert(tls_cert)\n                if SSLVerifier.is_expired():\n                    msg = 'Certificate `{}` expired on {}'.format(cls.cert_file_name, SSLVerifier.get_enddate())\n                    _logger.error(msg)\n\n                    if cls.running_in_safe_mode:\n                        cls.is_rest_server_http_enabled = True\n                        # TODO: Should cls.rest_server_port be set to configured http port, as is_rest_server_http_enabled has been set to True?\n                        msg = \"Running in safe mode withOUT https on port {}\".format(cls.rest_server_port)\n                        _logger.info(msg)\n                    else:\n                        msg = 'Start in safe-mode to fix this problem!'\n                        _logger.warning(msg)\n                        raise SSLVerifier.VerificationError(msg)\n                else:\n                    ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)\n                    ssl_ctx.load_cert_chain(cert, key)\n\n            # Get the service data and advertise the management port of the core\n            # to allow other microservices to find Fledge\n            loop.run_until_complete(cls.service_config())\n            _logger.info('Announce management API service')\n            cls.management_announcer = ServiceAnnouncer(\"core-{}\".format(cls._service_name), cls._MANAGEMENT_SERVICE, cls.core_management_port,\n                                                        ['The Fledge Core REST API'])\n\n            cls.service_server, cls.service_server_handler = cls._start_app(loop, cls.service_app, host, 
cls.rest_server_port, ssl_ctx=ssl_ctx)\n            address, service_server_port = cls.service_server.sockets[0].getsockname()\n\n            # Write PID file with REST API details\n            cls._write_pid(address, service_server_port)\n\n            _logger.info('REST API Server started on %s://%s:%s', 'http' if cls.is_rest_server_http_enabled else 'https',\n                         address, service_server_port)\n\n            # All services are up so now we can advertise the Admin and User REST APIs\n            cls.admin_announcer = ServiceAnnouncer(cls._service_name, cls._ADMIN_API_SERVICE, service_server_port,\n                                                   [cls._service_description])\n            cls.user_announcer = ServiceAnnouncer(cls._service_name, cls._USER_API_SERVICE, service_server_port,\n                                                  [cls._service_description])\n\n            # register core\n            # a service with 2 web server instances,\n            # registering only now, when service_port is ready to listen for requests\n            # TODO: if ssl then register with protocol https\n            cls._register_core(host, cls.core_management_port, service_server_port)\n\n            # Installation category\n            loop.run_until_complete(cls.installation_config())\n\n            # Create the configuration category parents\n            loop.run_until_complete(cls._config_parents())\n\n            if not cls.running_in_safe_mode:\n                # Start asset tracker\n                loop.run_until_complete(cls._start_asset_tracker())\n\n                # Start Alert Manager\n                loop.run_until_complete(cls._get_alerts())\n\n                # If dispatcher installation:\n                # a) not found then add it as a StartUp service\n                # b) found then check the status of its schedule and take action\n                is_dispatcher = 
loop.run_until_complete(cls.is_dispatcher_running(cls._storage_client_async))\n                if not is_dispatcher:\n                    _logger.info(\"Dispatcher service installation found on the system, but not in running state. \"\n                                 \"Therefore, starting the service...\")\n                    loop.run_until_complete(cls.add_and_enable_dispatcher())\n                    _logger.info(\"Dispatcher service started.\")\n                # dryrun execution of all the tasks that are installed but have schedule type other than STARTUP\n                schedule_list = loop.run_until_complete(cls.scheduler.get_schedules())\n                for sch in schedule_list:\n                    # STARTUP type schedules and special FledgeUpdater schedule process name exclusion to avoid dryrun\n                    if int(sch.schedule_type) != 1 and sch.process_name != \"FledgeUpdater\":\n                        schedule_row = cls.scheduler._ScheduleRow(\n                            id=sch.schedule_id,\n                            name=sch.name,\n                            type=sch.schedule_type,\n                            time=(sch.time.hour * 60 * 60 + sch.time.minute * 60 + sch.time.second) if sch.time else 0,\n                            day=sch.day,\n                            repeat=sch.repeat,\n                            repeat_seconds=sch.repeat.total_seconds() if sch.repeat else 0,\n                            exclusive=sch.exclusive,\n                            enabled=sch.enabled,\n                            process_name=sch.process_name)\n                        loop.run_until_complete(cls.scheduler._start_task(schedule_row, dryrun=True))\n            # Everything is complete in the startup sequence, write the audit log entry\n            cls._audit = AuditLogger(cls._storage_client_async)\n            audit_msg = {\"message\": \"Running in safe mode\"} if cls.running_in_safe_mode else None\n            
loop.run_until_complete(cls._audit.information('START', audit_msg))\n            if sys.version_info >= (3, 7, 1):\n                ignore_aiohttp_ssl_eror(loop)\n            loop.run_forever()\n        except SSLVerifier.VerificationError as e:\n            sys.stderr.write('Error: ' + str(e) + \"\\n\")\n            loop.run_until_complete(cls.stop_storage())\n            sys.exit(1)\n        except (OSError, RuntimeError, TimeoutError) as e:\n            sys.stderr.write('Error: ' + str(e) + \"\\n\")\n            sys.exit(1)\n        except Exception as e:\n            sys.stderr.write('Error: ' + str(e) + \"\\n\")\n            sys.exit(1)\n\n    @classmethod\n    def _register_core(cls, host, mgt_port, service_port):\n        core_service_id = ServiceRegistry.register(name=\"Fledge Core\", s_type=\"Core\", address=host,\n                                                     port=service_port, management_port=mgt_port)\n\n        return core_service_id\n\n    @classmethod\n    def start(cls, is_safe_mode=False):\n        \"\"\"Starts Fledge\"\"\"\n        #\n        # is_safe_mode: When True, it prevents the start of any services or tasks other than the storage layer.\n        # Starting Fledge in this way means only the core and storage services are running,\n        # and the scheduler runs in restricted mode.\n        #\n        cls.running_in_safe_mode = is_safe_mode\n        loop = asyncio.get_event_loop()\n        cls._start_core(loop=loop)\n\n    @classmethod\n    async def _stop(cls):\n        \"\"\"Stops Fledge\"\"\"\n        try:\n            # stop monitor\n            await cls.stop_service_monitor()\n\n            # stop the scheduler\n            await cls._stop_scheduler()\n\n            await cls.stop_microservices()\n\n            # poll microservices for unregister\n            await cls.poll_microservices_unregister()\n\n            # stop the REST API (exposed on the service port)\n            await 
cls.stop_rest_server()\n\n            # Must write the audit log entry before we stop the storage service\n            cls._audit = AuditLogger(cls._storage_client_async)\n            audit_msg = {\"message\": \"Exited from safe mode\"} if cls.running_in_safe_mode else None\n            await cls._audit.information('FSTOP', audit_msg)\n\n            # stop storage\n            await cls.stop_storage()\n\n            # stop core management api\n            # loop.stop does it all\n\n            # Remove PID file\n            cls._remove_pid()\n        except Exception:\n            raise\n\n    @classmethod\n    async def stop_rest_server(cls):\n        # Delete all user tokens\n        await User.Objects.delete_all_user_tokens()\n        cls.service_server.close()\n\n        # Python 3.12 + aiohttp 3.10.11 specific fix: wait_closed() can hang indefinitely\n        # if there are active connections, so we add a timeout\n        if sys.version_info >= (3, 12):\n            try:\n                await asyncio.wait_for(cls.service_server.wait_closed(), timeout=5.0)\n            except asyncio.TimeoutError:\n                _logger.debug(\"REST server wait_closed() timeout - continuing with shutdown\")\n        else:\n            await cls.service_server.wait_closed()\n\n        await cls.service_app.shutdown()\n        await cls.service_server_handler.shutdown(60.0)\n        await cls.service_app.cleanup()\n        _logger.info(\"Rest server stopped.\")\n\n    @classmethod\n    async def stop_storage(cls):\n        \"\"\"Stops Storage service \"\"\"\n\n        try:\n            found_services = ServiceRegistry.get(name=\"Fledge Storage\")\n        except service_registry_exceptions.DoesNotExist:\n            raise\n\n        svc = found_services[0]\n        if svc is None:\n            _logger.info(\"Fledge Storage shut down requested, but could not be found.\")\n            return\n        await cls._request_microservice_shutdown(svc)\n\n    @classmethod\n    async 
def stop_microservices(cls):\n        \"\"\" Call the shutdown endpoint of each non-core microservice\n\n        There are three types of services:\n           - Core\n           - Storage\n           - Southbound\n        \"\"\"\n        try:\n            found_services = ServiceRegistry.get()\n            services_to_stop = list()\n\n            for fs in found_services:\n                if fs._name in (\"Fledge Storage\", \"Fledge Core\"):\n                    continue\n                if fs._status not in [ServiceRecord.Status.Running, ServiceRecord.Status.Unresponsive]:\n                    continue\n                services_to_stop.append(fs)\n\n            if len(services_to_stop) == 0:\n                _logger.info(\"No services found other than core and/or storage.\")\n                return\n\n            tasks = [cls._request_microservice_shutdown(svc) for svc in services_to_stop]\n            await asyncio.gather(*tasks)\n        except service_registry_exceptions.DoesNotExist:\n            pass\n        except Exception as ex:\n            _logger.exception(ex)\n\n    @classmethod\n    async def _request_microservice_shutdown(cls, svc):\n        \"\"\" Request the service's shutdown \"\"\"\n        management_api_url = 'http://{}:{}/fledge/service/shutdown'.format(svc._address, svc._management_port)\n        # TODO: need to set http / https based on service protocol\n        headers = {'content-type': 'application/json'}\n        async with aiohttp.ClientSession() as session:\n            async with session.post(management_api_url, data=None, headers=headers) as resp:\n                result = await resp.text()\n                status_code = resp.status\n                if status_code in range(400, 500):\n                    _logger.error(\"Bad request error code: %d, reason: %s\", status_code, resp.reason)\n                    raise web.HTTPBadRequest(reason=resp.reason)\n                if status_code in range(500, 600):\n                    _logger.error(\"Server 
error code: %d, reason: %s\", status_code, resp.reason)\n                    raise web.HTTPInternalServerError(reason=resp.reason)\n                try:\n                    response = json.loads(result)\n                    _logger.info(\"Shutdown scheduled for %s service %s. %s\", svc._type, svc._name, response['message'])\n                except KeyError:\n                    raise\n\n    @classmethod\n    async def poll_microservices_unregister(cls):\n        \"\"\" Poll the microservice shutdown endpoint for non-core micro-services\"\"\"\n\n        def get_process_id(name):\n            \"\"\"Return process ids found by (partial) name or regex.\"\"\"\n            child = subprocess.Popen(['pgrep', '-f', 'name={}'.format(name)], stdout=subprocess.PIPE, shell=False)\n            response = child.communicate()[0]\n            return [int(_pid) for _pid in response.split()]\n\n        try:\n            shutdown_threshold = 0\n            found_services = ServiceRegistry.get()\n            _service_shutdown_threshold = 5 * (len(found_services) - 2)\n            while True:\n                services_to_stop = list()\n                for fs in found_services:\n                    if fs._name in (\"Fledge Storage\", \"Fledge Core\"):\n                        continue\n                    if fs._status not in [ServiceRecord.Status.Running, ServiceRecord.Status.Unresponsive]:\n                        continue\n                    services_to_stop.append(fs)\n                if len(services_to_stop) == 0:\n                    _logger.info(\"All microservices, except Core and Storage, have been shut down.\")\n                    return\n                if shutdown_threshold > _service_shutdown_threshold:\n                    for fs in services_to_stop:\n                        pids = get_process_id(fs._name)\n                        for pid in pids:\n                            _logger.error(\"Microservice:%s status: %s has NOT been 
shut down. Killing it...\", fs._name, fs._status)\n                            os.kill(pid, signal.SIGKILL)\n                            _logger.info(\"KILLED Microservice:%s...\", fs._name)\n                    return\n                await asyncio.sleep(2)\n                shutdown_threshold += 2\n                found_services = ServiceRegistry.get()\n\n        except service_registry_exceptions.DoesNotExist:\n            pass\n        except Exception as ex:\n            _logger.exception(ex)\n\n    @classmethod\n    async def _stop_scheduler(cls):\n        try:\n            await cls.scheduler.stop()\n        except TimeoutError as e:\n            _logger.exception('Unable to stop the scheduler')\n            raise e\n\n    @classmethod\n    async def ping(cls, request):\n        \"\"\" health check\n        \"\"\"\n        since_started = time.time() - cls._start_time\n        return web.json_response({'uptime': int(since_started)})\n\n    @classmethod\n    async def register(cls, request):\n        \"\"\" Register a service\n\n        :Example:\n            curl -d '{\"type\": \"Storage\", \"name\": \"Storage Services\", \"address\": \"127.0.0.1\", \"service_port\": 8090,\n                \"management_port\": 1090, \"protocol\": \"https\"}' -X POST http://localhost:<core mgt port>/fledge/service\n            curl -d '{\"type\": \"N1\", \"name\": \"Micro Service\", \"address\": \"127.0.0.1\", \"service_port\": 9091,\n                \"management_port\": 1090, \"protocol\": \"https\", \"token\": \"SVCNAME_ABCDE\"}' -X POST\n                http://localhost:<core mgt port>/fledge/service\n            service_port in payload is optional\n        \"\"\"\n        try:\n            data = await request.json()\n            service_name = data.get('name', None)\n            service_type = data.get('type', None)\n            service_address = data.get('address', None)\n            service_port = data.get('service_port', None)\n            service_management_port = 
data.get('management_port', None)\n            service_protocol = data.get('protocol', 'http')\n            token = data.get('token', None)\n\n            # Reject the request if any of the required values is missing or blank\n            if (not service_name or not service_name.strip() or not service_type or not service_type.strip()\n                    or not service_address or not service_address.strip() or service_management_port is None):\n                raise web.HTTPBadRequest(reason='One or more values for type/name/address/management port missing')\n\n            if service_port is not None:\n                if not isinstance(service_port, int):\n                    raise web.HTTPBadRequest(reason=\"Service's service port can be a positive integer only\")\n\n            if not isinstance(service_management_port, int):\n                raise web.HTTPBadRequest(reason='Service management port can be a positive integer only')\n\n            if token is None and ServiceRegistry.getStartupToken(service_name) is not None:\n                raise web.HTTPBadRequest(body=json.dumps({\"message\": 'Required registration token is missing.'}))\n\n            # If a token is supplied, verify the single-use startup token; if it is bad, return a 4XX\n            if token is not None:\n                if not isinstance(token, str):\n                    msg = 'Token can be a string only'\n                    raise web.HTTPBadRequest(reason=msg)\n\n                # Check startup token exists\n                if ServiceRegistry.checkStartupToken(service_name, token) is False:\n                    msg = 'Token for the service was not found'\n                    raise web.HTTPBadRequest(reason=msg)\n\n            try:\n                registered_service_id = ServiceRegistry.register(service_name, service_type, service_address,\n                                                                 service_port, service_management_port,\n                                                                 service_protocol, token)\n                try:\n                    if not cls._storage_client_async is 
None:\n                        cls._audit = AuditLogger(cls._storage_client_async)\n                        await cls._audit.information('SRVRG', {'name': service_name})\n                except Exception as ex:\n                    _logger.info(\"Failed to audit registration: %s\", str(ex))\n            except service_registry_exceptions.AlreadyExistsWithTheSameName:\n                raise web.HTTPBadRequest(reason='A Service with the same name already exists')\n            except service_registry_exceptions.AlreadyExistsWithTheSameAddressAndPort:\n                raise web.HTTPBadRequest(reason='A Service is already registered on the same address: {} and '\n                                                'service port: {}'.format(service_address, service_port))\n            except service_registry_exceptions.AlreadyExistsWithTheSameAddressAndManagementPort:\n                raise web.HTTPBadRequest(reason='A Service is already registered on the same address: {} and '\n                                                'management port: {}'.format(service_address, service_management_port))\n\n            if not registered_service_id:\n                raise web.HTTPBadRequest(reason='Service {} could not be registered'.format(service_name))\n\n            bearer_token = ''\n            # Create a JWT token if startup token exists\n            if token is not None:\n                # Set JWT bearer token\n                # Set expiration now + delta seconds\n                exp = int(time.time()) + SERVICE_JWT_EXP_DELTA_SECONDS\n                # Add public token claims\n                claims = {\n                             'aud': service_type,\n                             'sub': service_name,\n                             'iss': SERVICE_JWT_AUDIENCE,\n                             'exp': exp\n                         }\n\n                # Create JWT token\n                bearer_token = jwt.encode(claims, SERVICE_JWT_SECRET, SERVICE_JWT_ALGORITHM) if token is not 
None else \"\"\n\n                # Add the bearer token for that service being registered\n                ServiceRegistry.addBearerToken(service_name, bearer_token)\n\n            # Prepare response JSON\n            _response = {\n                'id': registered_service_id,\n                'message': \"Service registered successfully\",\n                'bearer_token': bearer_token\n            }\n\n            _logger.debug(\"For service: {} SERVER RESPONSE: {}\".format(service_name, _response))\n\n            return web.json_response(_response)\n\n        except ValueError as err:\n            msg = str(err)\n            raise web.HTTPNotFound(reason=msg, body=json.dumps(msg))\n\n    @classmethod\n    async def unregister(cls, request):\n        \"\"\" Unregister a service\n\n        :Example:\n            curl -X DELETE  http://localhost:<core mgt port>/fledge/service/dc9bfc01-066a-4cc0-b068-9c35486db87f\n        \"\"\"\n\n        try:\n            service_id = request.match_info.get('service_id', None)\n\n            try:\n                services = ServiceRegistry.get(idx=service_id)\n            except service_registry_exceptions.DoesNotExist:\n                raise ValueError('Service with {} does not exist'.format(service_id))\n\n            ServiceRegistry.unregister(service_id)\n\n            if cls._storage_client_async is not None and services[0]._name not in (\"Fledge Storage\", \"Fledge Core\"):\n                try:\n                    cls._audit = AuditLogger(cls._storage_client_async)\n                    await cls._audit.information('SRVUN', {'name': services[0]._name})\n                except Exception as ex:\n                    _logger.exception(ex)\n\n            _resp = {'id': str(service_id), 'message': 'Service unregistered'}\n\n            return web.json_response(_resp)\n        except ValueError as ex:\n            raise web.HTTPNotFound(reason=str(ex))\n\n    @classmethod\n    async def restart_service(cls, request):\n        
\"\"\" Restart a service\n\n        :Example:\n            curl -X PUT  http://localhost:<core mgt port>/fledge/service/dc9bfc01-066a-4cc0-b068-9c35486db87f/restart\n        \"\"\"\n        try:\n            service_id = request.match_info.get('service_id', None)\n            try:\n                services = ServiceRegistry.get(idx=service_id)\n            except service_registry_exceptions.DoesNotExist:\n                raise ValueError('Service with {} does not exist'.format(service_id))\n\n            ServiceRegistry.restart(service_id)\n            if cls._storage_client_async is not None and services[0]._name not in (\"Fledge Storage\", \"Fledge Core\"):\n                try:\n                    cls._audit = AuditLogger(cls._storage_client_async)\n                    await cls._audit.information('SRVRS', {'name': services[0]._name})\n                except Exception as ex:\n                    _logger.exception(ex)\n                \"\"\" Special Case:\n                    For BucketStorage type we have used proxy map for interfacing REST API endpoints \n                    to Microservice service API endpoints. 
Therefore we need to clear the proxy map on restart.\n                \"\"\"\n                if services[0]._type == \"BucketStorage\":\n                    cls._API_PROXIES = {}\n        except ValueError as err:\n            msg = str(err)\n            raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n        except Exception as ex:\n            msg = str(ex)\n            raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n        else:\n            _resp = {'id': str(service_id), 'message': 'Service restart requested'}\n            return web.json_response(_resp)\n\n    @classmethod\n    async def get_service(cls, request):\n        \"\"\" Returns a list of all services or as per name &|| type filter\n\n        :Example:\n            curl -X GET  http://localhost:<core mgt port>/fledge/service\n            curl -X GET  http://localhost:<core mgt port>/fledge/service?name=X&type=Storage\n        \"\"\"\n        service_name = request.query['name'] if 'name' in request.query else None\n        service_type = request.query['type'] if 'type' in request.query else None\n\n        try:\n            if not service_name and not service_type:\n                services_list = ServiceRegistry.all()\n            elif service_name and not service_type:\n                services_list = ServiceRegistry.get(name=service_name)\n            elif not service_name and service_type:\n                services_list = ServiceRegistry.get(s_type=service_type)\n            else:\n                services_list = ServiceRegistry.filter_by_name_and_type(\n                        name=service_name, s_type=service_type\n                    )\n        except service_registry_exceptions.DoesNotExist as ex:\n            if not service_name and not service_type:\n                msg = 'No service found'\n            elif service_name and not service_type:\n                msg = 'Service with name {} does not exist'.format(service_name)\n     
       elif not service_name and service_type:\n                msg = 'Service with type {} does not exist'.format(service_type)\n            else:\n                msg = 'Service with name {} and type {} does not exist'.format(service_name, service_type)\n\n            raise web.HTTPNotFound(reason=msg)\n\n        services = []\n        for service in services_list:\n            svc = dict()\n            svc[\"id\"] = service._id\n            svc[\"name\"] = service._name\n            svc[\"type\"] = service._type\n            svc[\"address\"] = service._address\n            svc[\"management_port\"] = service._management_port\n            svc[\"protocol\"] = service._protocol\n            svc[\"status\"] = ServiceRecord.Status(int(service._status)).name.lower()\n            if service._port:\n                svc[\"service_port\"] = service._port\n            services.append(svc)\n\n        return web.json_response({\"services\": services})\n\n    @classmethod\n    async def service_login(cls, request: web.Request) -> web.Response:\n        \"\"\"Service login endpoint for authenticated services to obtain user tokens.\n\n        Allows registered services to authenticate users using bearer tokens.\n        The service must be registered and provide a valid bearer token.\n\n        Args:\n            request: HTTP request containing JSON body with username and Authorization header\n\n        Returns:\n            web.Response: JSON response with user token, uid, admin status, and success message\n\n        Raises:\n            web.HTTPBadRequest: Invalid JSON, missing username, or validation errors\n            web.HTTPUnauthorized: Invalid bearer token or authentication failures\n            web.HTTPForbidden: User not authorized for service access\n            web.HTTPNotFound: User or service not found\n            web.HTTPInternalServerError: Unexpected server errors\n\n        Example:\n            curl -sX POST -H \"{'Authorization': 'Bearer ..'}\" -d 
'{\"username\": \"Manager\"}' \\\\\n                http://localhost:<core mgt port>/fledge/service/login\n        \"\"\"\n        peername = None\n        client_host = '0.0.0.0'\n        try:\n            # extraction of client info for logging purposes\n            peername = request.transport.get_extra_info('peername')\n            if peername is not None:\n                client_host, _ = peername\n            try:\n                data = await request.json()\n                if not isinstance(data, dict):\n                    msg = 'Request body must be a valid JSON object.'\n                    raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n            except json.JSONDecodeError as ex:\n                msg = 'Request body must be valid JSON. Please provide a JSON object with username field.'\n                raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n            except Exception as ex:\n                msg = 'Failed to parse request body. 
Please provide a valid JSON object with username field.'\n                raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n\n            # username validation\n            username = data.get('username', None)\n            if username is None:\n                msg = 'Username field is required in request body.'\n                raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n\n            if not isinstance(username, str):\n                msg = 'Username must be a string.'\n                raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n\n            if not username.strip():\n                msg = 'Username cannot be empty or contain only whitespace.'\n                raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n\n            username = username.strip()\n            if len(username) < 1:\n                msg = 'Username must be at least 1 character long.'\n                raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n\n            # bearer token validation\n            auth_header = request.headers.get('Authorization', None)\n            if auth_header is None:\n                msg = \"Authorization header is missing.\"\n                raise web.HTTPUnauthorized(reason=msg, body=json.dumps({\"message\": msg}))\n            if not isinstance(auth_header, str):\n                msg = \"Authorization header must be a string.\"\n                raise web.HTTPUnauthorized(reason=msg, body=json.dumps({\"message\": msg}))\n            auth_header = auth_header.strip()\n            if not auth_header.startswith(\"Bearer \"):\n                msg = \"Authorization header must start with 'Bearer ' followed by a token.\"\n                raise web.HTTPUnauthorized(reason=msg, body=json.dumps({\"message\": msg}))\n\n            bearer_token = auth_header[7:]\n            if not bearer_token:\n                msg = \"Bearer token cannot be 
empty.\"\n                raise web.HTTPUnauthorized(reason=msg, body=json.dumps({\"message\": msg}))\n\n            # Token length validation (JWT tokens have reasonable length bounds)\n            if len(bearer_token) > 2048:\n                msg = \"Bearer token exceeds maximum length.\"\n                raise web.HTTPUnauthorized(reason=msg, body=json.dumps({\"message\": msg}))\n\n            # Validate bearer token format and claims\n            claims = cls.validate_token(bearer_token)\n            if claims.get('error'):\n                error_detail = claims.get('error', 'Unknown token validation error')\n                if 'expired' in error_detail.lower():\n                    msg = \"Bearer token has expired.\"\n                elif 'signature' in error_detail.lower():\n                    msg = \"Bearer token signature is invalid.\"\n                else:\n                    msg = \"Bearer token is invalid or malformed.\"\n                raise web.HTTPUnauthorized(reason=msg, body=json.dumps({\"message\": msg}))\n\n            # Verify required claims\n            service_name = claims.get('sub')\n            if not service_name:\n                msg = \"Bearer token missing service name claim.\"\n                raise web.HTTPUnauthorized(reason=msg, body=json.dumps({\"message\": msg}))\n\n            # Verify service is registered and token matches\n            registered_token = ServiceRegistry.getBearerToken(service_name)\n            if registered_token is None:\n                msg = f\"Service '{service_name}' is not registered.\"\n                raise web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n            if not hmac.compare_digest(bearer_token, registered_token):\n                msg = \"Bearer token does not match registered service token.\"\n                raise web.HTTPUnauthorized(reason=msg, body=json.dumps({\"message\": msg}))\n\n            # User lookup\n            user_data = None\n            users = await 
User.Objects.all()\n            for user in users:\n                if user.get('uname') and hmac.compare_digest(user['uname'], username):\n                    if user.get('enabled') == 't':\n                        # Check role authorization\n                        role_id = user.get('role_id')\n                        if role_id in [3, 4]:  # Viewer and Data View roles\n                            msg = \"User is not authorized to access services.\"\n                            raise web.HTTPForbidden(reason=msg, body=json.dumps({\"message\": msg}))\n                        user_data = user\n                        break\n\n            if user_data is None:\n                # Don't reveal whether user exists or is disabled for security\n                msg = \"Authentication failed. User not found or not enabled.\"\n                raise web.HTTPUnauthorized(reason=msg, body=json.dumps({\"message\": msg}))\n\n            # Clean up existing user tokens before creating new one\n            try:\n                await User.Objects.delete_user_tokens(user_data['id'])\n            except Exception as ex:\n                # Continue with login attempt as this is not critical\n                _logger.warning(f\"Failed to delete existing tokens for user '{username}': {str(ex)}\")\n\n            # Handle different authentication methods\n            access_method = user_data.get('access_method', 'any')\n            try:\n                if access_method == 'cert':\n                    # Use certificate-based login\n                    uid, token, is_admin = await User.Objects.certificate_login(user_data['uname'], client_host)\n                    _logger.info(f\"Successful certificate login for user '{username}' via service '{service_name}' from {client_host}\")\n                else:\n                    # Use password-based login (covers 'password' and 'any' methods)\n                    password = user_data.get('pwd')\n                    if not password:\n           
             msg = \"User password is not configured.\"\n                        raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n                    uid, token, is_admin = await User.Objects.login(user_data['uname'], password, client_host)\n                    _logger.info(f\"Successful password login for user '{username}' via service '{service_name}' from {client_host}\")\n            except User.PasswordNotSetError:\n                msg = \"Password is not set for this user.\"\n                raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n            except User.DoesNotExist:\n                msg = \"User authentication failed.\"\n                raise web.HTTPUnauthorized(reason=msg, body=json.dumps({\"message\": msg}))\n            except Exception as ex:\n                msg = f\"Authentication failed: {str(ex)}\"\n                raise web.HTTPUnauthorized(reason=msg, body=json.dumps({\"message\": msg}))\n\n            # Return successful login response\n            response_data = {\n                \"token\": token,\n                \"uid\": uid,\n                \"admin\": is_admin,\n                \"message\": f\"User '{user_data['uname']}' logged in successfully via service.\"\n            }\n            return web.json_response(response_data)\n        except web.HTTPException:\n            # Re-raise HTTP exceptions as-is\n            raise\n        except ValueError as ex:\n            msg = f\"Value error: {str(ex)}\"\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n        except Exception as ex:\n            msg = f\"Internal server error during service login: {str(ex)}\"\n            _logger.error(f\"Service login internal error from {client_host}: {str(ex)}\")\n            raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n\n    @classmethod\n    async def shutdown(cls, request):\n        \"\"\" Shutdown the core 
microservice and its components\n\n        :return: JSON payload with message key\n        :Example:\n            curl -X POST http://localhost:<core mgt port>/fledge/service/shutdown\n        \"\"\"\n        async def _stop_event_loop(loop):\n            await async_sleep(2.0)\n            _logger.info(\"Stopping the Fledge Core event loop. Good Bye!\")\n            loop.stop()\n\n        try:\n            await cls._stop()\n            await _stop_event_loop(request.loop)\n        except asyncio.TimeoutError:\n            await _stop_event_loop(request.loop)\n        except TimeoutError as err:\n            raise web.HTTPInternalServerError(reason=str(err))\n        except Exception as ex:\n            raise web.HTTPInternalServerError(reason=str(ex))\n        else:\n            return web.json_response({'message': 'Fledge stopped successfully. Wait for a few seconds for process cleanup.'})\n\n    @classmethod\n    async def restart(cls, request):\n        \"\"\" Restart the core microservice and its components \"\"\"\n\n        async def _restart_process():\n            loop = request.loop\n            # allow some time\n            await async_sleep(2.0)\n\n            _logger.info(\"Stopping the Fledge Core event loop. 
Good Bye!\")\n            loop.stop()\n\n            if 'safe-mode' in sys.argv:\n                sys.argv.remove('safe-mode')\n                sys.argv.append('')\n\n            python3 = sys.executable\n            os.execl(python3, python3, *sys.argv)  # Replaces current process, no return\n\n        try:\n            await cls._stop()\n            await _restart_process()\n        except asyncio.TimeoutError:\n            await _restart_process()\n        except TimeoutError as err:\n            raise web.HTTPInternalServerError(reason=str(err))\n        except Exception as ex:\n            raise web.HTTPInternalServerError(reason=str(ex))\n\n    @classmethod\n    async def register_interest(cls, request):\n        \"\"\" Register an interest in a configuration category\n\n        :Example:\n            curl -d '{\"category\": \"COAP\", \"service\": \"x43978x8798x\"}' -X POST http://localhost:<core mgt port>/fledge/interest\n        \"\"\"\n\n        try:\n            data = await request.json()\n            category_name = data.get('category', None)\n            microservice_uuid = data.get('service', None)\n\n            # 'child' is passed as the string \"True\" when a child-category subscription is requested\n            child_subscribe = data.get('child', None) == \"True\"\n\n            if microservice_uuid is not None:\n                try:\n                    uuid.UUID(microservice_uuid)\n                except Exception:\n                    raise ValueError('Invalid microservice id {}'.format(microservice_uuid))\n\n            if child_subscribe:\n                try:\n                    registered_interest_id = cls._interest_registry.register_child(microservice_uuid, category_name)\n                except 
interest_registry_exceptions.ErrorInterestRegistrationAlreadyExists:\n                    raise web.HTTPBadRequest(reason='An InterestRecord already exists by microservice_uuid {} for category_name {}'.format(microservice_uuid, category_name))\n\n                if not registered_interest_id:\n                    raise web.HTTPBadRequest(reason='Interest by microservice_uuid {} for category_name {} could not be registered'.format(microservice_uuid, category_name))\n\n                _response = {\n                    'id': registered_interest_id,\n                    'message': \"Interest registered successfully\"\n                }\n\n\n            else:\n                try:\n                    registered_interest_id = cls._interest_registry.register(microservice_uuid, category_name)\n                except interest_registry_exceptions.ErrorInterestRegistrationAlreadyExists:\n                    raise web.HTTPBadRequest(reason='An InterestRecord already exists by microservice_uuid {} for category_name {}'.format(microservice_uuid, category_name))\n\n                if not registered_interest_id:\n                    raise web.HTTPBadRequest(reason='Interest by microservice_uuid {} for category_name {} could not be registered'.format(microservice_uuid, category_name))\n\n                _response = {\n                    'id': registered_interest_id,\n                    'message': \"Interest registered successfully\"\n                }\n\n        except ValueError as ex:\n            raise web.HTTPBadRequest(reason=str(ex))\n\n        return web.json_response(_response)\n\n    @classmethod\n    async def unregister_interest(cls, request):\n        \"\"\" Unregister an interest\n\n        :Example:\n            curl -X DELETE  http://localhost:<core mgt port>/fledge/interest/dc9bfc01-066a-4cc0-b068-9c35486db87f\n        \"\"\"\n\n        try:\n            interest_registration_id = request.match_info.get('interest_id', None)\n\n            try:\n                
uuid.UUID(interest_registration_id)\n            except Exception:\n                raise web.HTTPBadRequest(reason=\"Invalid registration id {}\".format(interest_registration_id))\n\n            try:\n                cls._interest_registry.get(registration_id=interest_registration_id)\n            except interest_registry_exceptions.DoesNotExist:\n                raise ValueError('InterestRecord with registration_id {} does not exist'.format(interest_registration_id))\n\n            cls._interest_registry.unregister(interest_registration_id)\n\n            _resp = {'id': str(interest_registration_id), 'message': 'Interest unregistered'}\n\n        except ValueError as ex:\n            raise web.HTTPNotFound(reason=str(ex))\n\n        return web.json_response(_resp)\n\n    @classmethod\n    async def get_interest(cls, request):\n        \"\"\" Returns a list of all interests or of the selected interest\n\n        :Example:\n                curl -X GET  http://localhost:{core_mgt_port}/fledge/interest\n                curl -X GET  http://localhost:{core_mgt_port}/fledge/interest?microserviceid=X&category=Y\n        \"\"\"\n        category = request.query.get('category')\n        microservice_id = request.query.get('microserviceid')\n        if microservice_id is not None:\n            try:\n                uuid.UUID(microservice_id)\n            except Exception:\n                raise web.HTTPBadRequest(reason=\"Invalid microservice id {}\".format(microservice_id))\n\n        try:\n            if not category and not microservice_id:\n                interest_list = cls._interest_registry.get()\n            elif category and not microservice_id:\n                interest_list = cls._interest_registry.get(category_name=category)\n            elif not category and microservice_id:\n                interest_list = cls._interest_registry.get(microservice_uuid=microservice_id)\n            
else:\n                interest_list = cls._interest_registry.get(category_name=category, microservice_uuid=microservice_id)\n        except interest_registry_exceptions.DoesNotExist as ex:\n            if not category and not microservice_id:\n                msg = 'No interest registered'\n            elif category and not microservice_id:\n                msg = 'No interest registered for category {}'.format(category)\n            elif not category and microservice_id:\n                msg = 'No interest registered for microservice id {}'.format(microservice_id)\n            else:\n                msg = 'No interest registered for category {} and microservice id {}'.format(category, microservice_id)\n\n            raise web.HTTPNotFound(reason=msg)\n\n        interests = []\n        for interest in interest_list:\n            d = dict()\n            d[\"registrationId\"] = interest._registration_id\n            d[\"category\"] = interest._category_name\n            d[\"microserviceId\"] = interest._microservice_uuid\n            interests.append(d)\n\n        return web.json_response({\"interests\": interests})\n\n    @classmethod\n    async def change(cls, request):\n        pass\n\n    @classmethod\n    async def get_track(cls, request):\n        res = await asset_tracker_api.get_asset_tracker_events(request)\n        return res\n\n    @classmethod\n    async def add_track(cls, request):\n\n        data = await request.json()\n\n        if not isinstance(data, dict):\n            raise ValueError('Data payload must be a dictionary')\n\n        jsondata = data.get(\"data\")\n\n        try:\n            if jsondata is None:\n                result = await cls._asset_tracker.add_asset_record(asset=data.get(\"asset\"),\n                                                                   plugin=data.get(\"plugin\"),\n                                                                   service=data.get(\"service\"),\n                                                                   event=data.get(\"event\"))\n            else:\n                result = await cls._asset_tracker.add_asset_record(asset=data.get(\"asset\"),\n                                                                   plugin=data.get(\"plugin\"),\n                                                                   service=data.get(\"service\"),\n                                                                   event=data.get(\"event\"),\n                                                                   jsondata=data.get(\"data\"))\n\n        except (TypeError, StorageServerError) as ex:\n            raise web.HTTPBadRequest(reason=str(ex))\n        except ValueError as ex:\n            raise web.HTTPNotFound(reason=str(ex))\n        except Exception as ex:\n            raise web.HTTPInternalServerError(reason=str(ex))\n\n        return web.json_response(result)\n\n    @classmethod\n    async def enable_disable_schedule(cls, request: web.Request) -> web.Response:\n        data = await request.json()\n        try:\n            schedule_id = request.match_info.get('schedule_id', None)\n            is_enabled = data.get('value', False)\n            if is_enabled:\n                status, reason = await cls.scheduler.enable_schedule(uuid.UUID(schedule_id))\n            else:\n                status, reason = await cls.scheduler.disable_schedule(uuid.UUID(schedule_id))\n        except (TypeError, ValueError, KeyError) as err:\n            raise web.HTTPBadRequest(reason=str(err), body=json.dumps({'message': str(err)}))\n        except Exception as ex:\n            raise web.HTTPInternalServerError(reason=str(ex), body=json.dumps({'message': str(ex)}))\n        else:\n            schedule = {\n                'scheduleId': schedule_id,\n                'status': status,\n                'message': reason\n            }\n            return web.json_response(schedule)\n\n    @classmethod\n    async def refresh_cache(cls, request: web.Request) -> web.Response:\n        from 
fledge.services.core.api.plugins import common\n\n        data = await request.json()\n        try:\n            # At the moment only case to clear cache for available plugins\n            # We may add with action & key combination basis later on\n            common._get_available_packages.cache_clear()\n            cls._package_cache_manager['list']['last_accessed_time'] = \"\"\n        except (TypeError, ValueError, KeyError) as err:\n            raise web.HTTPBadRequest(reason=str(err), body=json.dumps({'message': str(err)}))\n        except Exception as ex:\n            raise web.HTTPInternalServerError(reason=str(ex), body=json.dumps({'message': str(ex)}))\n        else:\n            return web.json_response({\"message\": \"Refresh cache is completed\"})\n\n    @classmethod\n    async def get_configuration_categories(cls, request):\n        res = await conf_api.get_categories(request)\n        return res\n\n    @classmethod\n    async def create_configuration_category(cls, request):\n        request.is_core_mgt = True\n        res = await conf_api.create_category(request)\n        return res\n\n    @classmethod\n    async def delete_configuration_category(cls, request):\n        res = await conf_api.delete_category(request)\n        return res\n\n    @classmethod\n    async def create_child_category(cls, request):\n        res = await conf_api.create_child_category(request)\n        return res\n\n    @classmethod\n    async def get_child_category(cls, request):\n        res = await conf_api.get_child_category(request)\n        return res\n\n    @classmethod\n    async def get_configuration_category(cls, request):\n        request.is_core_mgt = True\n        res = await conf_api.get_category(request)\n        return res\n\n    @classmethod\n    async def get_configuration_item(cls, request):\n        request.is_core_mgt = True\n        res = await conf_api.get_category_item(request)\n        return res\n\n    @classmethod\n    async def 
update_configuration_item(cls, request):\n        request.is_core_mgt = True\n        res = await conf_api.set_configuration_item(request)\n        return res\n\n    @classmethod\n    async def delete_configuration_item(cls, request):\n        request.is_core_mgt = True\n        res = await conf_api.delete_configuration_item_value(request)\n        return res\n\n    @classmethod\n    async def add_audit(cls, request):\n        data = await request.json()\n        if not isinstance(data, dict):\n            raise ValueError('Data payload must be a dictionary')\n\n        try:\n            code = data.get(\"source\")\n            level = data.get(\"severity\")\n            message = data.get(\"details\")\n\n            # Add audit entry code and message for the given level\n            await getattr(cls._audit, str(level).lower())(code, message)\n\n            # Set timestamp for return message\n            timestamp = datetime.now().strftime(\"%Y-%m-%d %H:%M:%S.%f\")[:-3]\n\n            # Return JSON message\n            message = {'timestamp': str(timestamp),\n                       'source': code,\n                       'severity': level,\n                       'details': message\n                      }\n\n        except (TypeError, StorageServerError) as ex:\n            raise web.HTTPBadRequest(reason=str(ex))\n        except ValueError as ex:\n            raise web.HTTPNotFound(reason=str(ex))\n        except Exception as ex:\n            raise web.HTTPInternalServerError(reason=str(ex))\n\n        return web.json_response(message)\n\n    @classmethod\n    async def verify_token(cls, request):\n        \"\"\" Endpoint for verification of service bearer token received at registration time\n\n        :Example:\n            curl -H 'Authorization: Bearer evZGdrdmV.4dWFsY2dsaHVyZ.mFxdmdybXB5dXduaXJvc3g='\n            -X POST http://localhost:<core mgmt port>/fledge/service/verify_token\n\n        Authorization header must contain the Bearer token to verify\n        
No post data\n\n        Note: token will be verified for the service name in token claim 'sub'\n        \"\"\"\n\n        try:\n            return web.json_response(cls.get_token_common(request))\n        except Exception as e:\n            msg = str(e)\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"error\": msg}))\n\n    @classmethod\n    def validate_token(cls, token):\n        \"\"\" Validate service bearer token\n        \"\"\"\n        try:\n            ret = jwt.decode(token, SERVICE_JWT_SECRET, algorithms=[SERVICE_JWT_ALGORITHM],\n                             options={\"verify_signature\": True, \"verify_aud\": False, \"verify_exp\": True})\n            return ret\n        except Exception as e:\n            return {'error': str(e)}\n\n    @classmethod\n    async def refresh_token(cls, request):\n        \"\"\" Endpoint for refresh of service bearer token received at registration time\n\n        :Example:\n            curl -X POST\n             -H 'Authorization: Bearer evZGdrdmV.4dWFsY2dsaHVyZ.mFxdmdybXB5dXduaXJvc3g='\n             http://localhost:<core mgmt port>/fledge/service/refresh_token\n\n        Authorization header must contain the Bearer token\n        No post data\n\n        Note: token will be refreshed for the service it belongs to\n        \"\"\"\n\n        try:\n            claims = cls.get_token_common(request)\n            # Expiration set to now + delta\n            claims['exp'] = int(time.time()) + SERVICE_JWT_EXP_DELTA_SECONDS\n            bearer_token = jwt.encode(claims, SERVICE_JWT_SECRET, SERVICE_JWT_ALGORITHM)\n\n            # Replace bearer_token for the service\n            ServiceRegistry.addBearerToken(claims['sub'], bearer_token)\n            ret = {'bearer_token': bearer_token}\n\n            return web.json_response(ret)\n\n        except Exception as e:\n            msg = str(e)\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"error\": msg}))\n\n    @classmethod\n    async def 
is_dispatcher_running(cls, storage):\n        from fledge.services.core.api import service as service_api\n        from fledge.common.storage_client.payload_builder import PayloadBuilder\n\n        # Find the dispatcher service installation\n        get_svc = service_api.get_service_installed()\n        # If the dispatcher service is installed:\n        if 'dispatcher' in get_svc:\n            payload = PayloadBuilder().SELECT(\"id\", \"schedule_name\", \"process_name\", \"enabled\").payload()\n            res = await storage.query_tbl_with_payload('schedules', payload)\n            for sch in res['rows']:\n                if sch['process_name'] == 'dispatcher_c' and sch['enabled'] == 'f':\n                    _logger.info(\"Dispatcher service found but not in enabled state. \"\n                                 \"Therefore, enabling schedule '{}'\".format(sch['schedule_name']))\n                    # reset process_script priority for the service\n                    cls.scheduler._process_scripts['dispatcher_c'] = (\n                        cls.scheduler._process_scripts['dispatcher_c'][0], 999)\n                    await cls.scheduler.enable_schedule(uuid.UUID(sch[\"id\"]))\n                    return True\n                elif sch['process_name'] == 'dispatcher_c' and sch['enabled'] == 't':\n                    # As such no action required for the case\n                    return True\n            # No dispatcher schedule found:\n            return False\n        return True\n\n    @classmethod\n    async def add_and_enable_dispatcher(cls):\n        import datetime as dt\n        from fledge.services.core.scheduler.entities import StartUpSchedule\n\n        name = \"dispatcher\"\n        process_name = 'dispatcher_c'\n        is_enabled = True\n        schedule = StartUpSchedule()\n        schedule.name = name\n        schedule.process_name = process_name\n        schedule.repeat = dt.timedelta(0)\n        schedule.exclusive = True\n        schedule.enabled = False\n        # 
Save schedule\n        await cls.scheduler.save_schedule(schedule, is_enabled)\n\n    @classmethod\n    def get_token_common(cls, request):\n        \"\"\" Get Bearer Token from request\n            validate it and return token claims\n        \"\"\"\n        auth_header = request.headers.get('Authorization', None)\n        if auth_header is None:\n            msg = \"Authorization header is missing\"\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"error\": msg}))\n\n        if \"Bearer \" not in auth_header:\n            msg = \"Invalid Authorization token\"\n            # FIXME: raise UNAUTHORISED here and among other places\n            #   and JSON body to have message key\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"error\": msg}))\n\n        parts = auth_header.split(\"Bearer \")\n        if len(parts) != 2:\n            msg = \"bearer token is missing\"\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"error\": msg}))\n\n        bearer_token = parts[1]\n\n        try:\n            # Validate token and get public claims\n            claims = cls.validate_token(bearer_token)\n            if claims.get('error') is None:\n                # Check input token exists in system for the service name given in claims['sub']\n                foundToken = ServiceRegistry.getBearerToken(claims['sub'])\n                if foundToken is None:\n                    msg = \"service '\" + str(claims['sub']) + \"' not registered\"\n                    raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"error\": msg}))\n\n                # Check input token is associated with service in claims['sub']\n                if foundToken != bearer_token:\n                    msg = \"bearer token does not belong to service '\" + str(claims['sub']) + \"'\"\n                    raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"error\": msg}))\n\n                # Success\n                return claims\n\n            
else:\n                msg = claims.get('error')\n                raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"error\": msg}))\n\n        except Exception as e:\n            msg = str(e)\n            raise web.HTTPBadRequest(reason=msg, body=json.dumps({\"error\": msg}))\n\n    @classmethod\n    async def get_control_acl(cls, request):\n        request.is_core_mgt = True\n        res = await acl_management.get_acl(request)\n        return res\n\n    @classmethod\n    async def get_alert(cls, request):\n        name = request.match_info.get('key', None)\n        try:\n            alert = await cls._alert_manager.get_by_key(name)\n        except KeyError as err:\n            msg = str(err.args[0])\n            return web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n        except Exception as ex:\n            msg = str(ex)\n            _logger.error(ex, \"Failed to get an alert.\")\n            raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n        else:\n            return web.json_response({\"alert\": alert})\n\n    @classmethod\n    async def delete_alert(cls, request):\n        name = request.match_info.get('key', None)\n        try:\n            alert = await cls._alert_manager.delete(name)\n        except KeyError as err:\n            msg = str(err.args[0])\n            return web.HTTPNotFound(reason=msg, body=json.dumps({\"message\": msg}))\n        except Exception as ex:\n            msg = str(ex)\n            _logger.error(ex, \"Failed to delete an alert.\")\n            raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n        else:\n            return web.json_response({\"alert\": alert})\n\n    @classmethod\n    async def add_alert(cls, request):\n        try:\n            data = await request.json()\n            key = data.get(\"key\")\n            message = data.get(\"message\")\n            urgency = data.get(\"urgency\")\n            if any(elem is 
None for elem in [key, message, urgency]):\n                msg = 'key, message, urgency post params are required to raise an alert.'\n                return web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n            if not all(isinstance(i, str) for i in [key, message, urgency]):\n                msg = 'key, message, urgency KV pair must be passed as string.'\n                return web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n            urgency = urgency.capitalize()\n            if urgency not in cls._alert_manager.urgency:\n                msg = 'Urgency value should be from list {}'.format(list(cls._alert_manager.urgency.keys()))\n                return web.HTTPBadRequest(reason=msg, body=json.dumps({\"message\": msg}))\n            key_exists = [a for a in cls._alert_manager.alerts if a['key'] == key]\n            if key_exists:\n                # Delete existing key\n                await cls._alert_manager.delete(key)\n            param = {\"key\": key, \"message\": message, \"urgency\": cls._alert_manager.urgency[urgency]}\n            response = await cls._alert_manager.add(param)\n            if response is None:\n                raise Exception('Failed to add an alert.')\n        except Exception as ex:\n            msg = str(ex)\n            _logger.error(ex, \"Failed to add an alert.\")\n            raise web.HTTPInternalServerError(reason=msg, body=json.dumps({\"message\": msg}))\n        else:\n            response['alert']['urgency'] = cls._alert_manager._urgency_name_by_value(response['alert']['urgency'])\n            return web.json_response(response)\n
  },
  {
    "path": "python/fledge/services/core/service_registry/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/core/service_registry/exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Services Registry Exceptions module\"\"\"\n\n__author__ = \"Praveen Garg, Amarendra Kumar Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass DoesNotExist(Exception):\n    pass\n\n\nclass AlreadyExistsWithTheSameName(Exception):\n    pass\n\n\nclass AlreadyExistsWithTheSameAddressAndPort(Exception):\n    pass\n\n\nclass AlreadyExistsWithTheSameAddressAndManagementPort(Exception):\n    pass\n\n\nclass NonNumericPortError(TypeError):\n    pass\n"
  },
  {
    "path": "python/fledge/services/core/service_registry/monitor.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Fledge Monitor module\"\"\"\n\nimport asyncio\nimport aiohttp\nimport json\nfrom fledge.common import logger\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.services.core import connect\nfrom fledge.common.acl_manager import ACLManager\nfrom fledge.common.alert_manager import AlertManager\n\n__author__ = \"Ashwin Gopalakrishnan, Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass MonitorRegistry:\n    \"\"\"Registry to manage monitor instances\"\"\"\n    _monitors = {}\n    \n    @classmethod\n    def register(cls, monitor_id, monitor_instance):\n        \"\"\"Register a monitor instance with the given ID\n        \n        Args:\n            monitor_id (str): Identifier for the monitor instance\n            monitor_instance (Monitor): The monitor instance to register\n        \"\"\"\n        cls._monitors[monitor_id] = monitor_instance\n    \n    @classmethod\n    def get(cls, monitor_id='default'):\n        \"\"\"Get a monitor instance by ID\n        \n        Args:\n            monitor_id (str): Identifier for the monitor instance\n            \n        Returns:\n            Monitor: The monitor instance, or None if not found\n        \"\"\"\n        return cls._monitors.get(monitor_id)\n    \n    @classmethod\n    def unregister(cls, monitor_id='default'):\n        \"\"\"Unregister a monitor instance by ID\n        \n        Args:\n            monitor_id (str): Identifier for the monitor instance to remove\n        \"\"\"\n        return cls._monitors.pop(monitor_id, None)\n    \n    @classmethod\n    def get_all(cls):\n        
\"\"\"Get all registered monitor instances\n        \n        Returns:\n            dict: Dictionary of all registered monitors\n        \"\"\"\n        return cls._monitors.copy()\n\n\nclass Monitor(object):\n\n    _DEFAULT_SLEEP_INTERVAL = 5\n    \"\"\"The time (in seconds) to sleep between health checks\"\"\"\n\n    _DEFAULT_PING_TIMEOUT = 1\n    \"\"\"Timeout for a response from any given micro-service\"\"\"\n\n    _DEFAULT_MAX_ATTEMPTS = 15\n\n    _DEFAULT_RESTART_FAILED = \"auto\"\n    \"\"\"Restart failed microservice - manual/auto\"\"\"\n\n    _logger = None\n\n    def __init__(self):\n        self._logger = logger.setup(__name__)\n\n        self._monitor_loop_task = None  # type: asyncio.Task\n        \"\"\"Task for :meth:`_monitor_loop`, to ensure it has finished\"\"\"\n        self._sleep_interval = None  # type: int\n        \"\"\"The time (in seconds) to sleep between health checks\"\"\"\n        self._ping_timeout = None  # type: int\n        \"\"\"Timeout for a response from any given micro-service\"\"\"\n        self._max_attempts = None  # type: int\n        \"\"\"Number of max attempts for finding a heartbeat of service\"\"\"\n        self._restart_failed = None  # type: str\n        \"\"\"Restart failed microservice - manual/auto\"\"\"\n\n        self.restarted_services = []\n        self._acl_handler = None\n        # Support bundle config\n        self._support_bundle_config = None\n        # Alert manager instance to raise alerts\n        self._alert_manager = None\n        # Configuration manager instance\n        self._cfg_manager = None\n\n    async def _sleep(self, sleep_time):\n        await asyncio.sleep(sleep_time)\n\n    async def _monitor_loop(self):\n        \"\"\"async Monitor loop to monitor registered services\"\"\"\n        # check health of all micro-services every N seconds\n        round_cnt = 0\n        check_count = {}  # dict to hold current count of current status.\n                          # In case of ok and running 
status, count will always be 1.\n                          # In case of non-running statuses, count shows since when this status is set.\n        while True:\n            round_cnt += 1\n            self._logger.debug(\"Starting next round#{} of service monitoring, sleep/i:{} ping/t:{} max/a:{}\".format(\n                round_cnt, self._sleep_interval, self._ping_timeout, self._max_attempts))\n            for service_record in ServiceRegistry.all():\n                if service_record._id not in check_count:\n                    check_count.update({service_record._id: 1})\n\n                # Try ping if service status is either running or doubtful (i.e. give service a chance to recover)\n                if service_record._status not in [ServiceRecord.Status.Running,\n                                                  ServiceRecord.Status.Unresponsive,\n                                                  ServiceRecord.Status.Failed,\n                                                  ServiceRecord.Status.Restart]:\n                    continue\n\n                self._logger.debug(\"Service: {} Status: {}\".format(service_record._name, service_record._status))\n\n                if service_record._status == ServiceRecord.Status.Failed:\n                    if self._restart_failed == \"auto\":\n                        if service_record._id not in self.restarted_services:\n                            self.restarted_services.append(service_record._id)\n                            asyncio.ensure_future(self.restart_service(service_record))\n                    continue\n\n                if service_record._status == ServiceRecord.Status.Restart:\n                    if service_record._id not in self.restarted_services:\n                        self.restarted_services.append(service_record._id)\n                        asyncio.ensure_future(self.restart_service(service_record))\n                    continue\n\n                try:\n                    url = 
\"{}://{}:{}/fledge/service/ping\".format(\n                        service_record._protocol, service_record._address, service_record._management_port)\n                    async with aiohttp.ClientSession() as session:\n                        async with session.get(url, timeout=self._ping_timeout) as resp:\n                            text = await resp.text()\n                            res = json.loads(text)\n                            if res[\"uptime\"] is None:\n                                raise ValueError('res.uptime is None')\n                            # Set the 'debug' status for non-empty values, applicable only to\n                            # Southbound and Northbound services\n                            if service_record._type in ('Southbound', 'Northbound'):\n                                debugger_value = res.get(\"debug\", {})\n                                if not isinstance(debugger_value, dict):\n                                    self._logger.warning(\"Invalid debug value '{}' in service '{}': \"\n                                                       \"Expected a dictionary, but received a {}.\".format(\n                                        debugger_value, service_record._name, type(debugger_value).__name__))\n                                    debugger_value = {}\n                                service_record._debug = debugger_value\n                except (asyncio.TimeoutError, aiohttp.client_exceptions.ServerTimeoutError) as ex:\n                    service_record._status = ServiceRecord.Status.Unresponsive\n                    check_count[service_record._id] += 1\n                    self._logger.info(\"ServerTimeoutError: %s, %s\", str(ex), service_record.__repr__())\n                except aiohttp.client_exceptions.ClientConnectorError as ex:\n                    service_record._status = ServiceRecord.Status.Unresponsive\n                    check_count[service_record._id] += 1\n                    
self._logger.info(\"ClientConnectorError: %s, %s\", str(ex), service_record.__repr__())\n                except ValueError as ex:\n                    service_record._status = ServiceRecord.Status.Unresponsive\n                    check_count[service_record._id] += 1\n                    self._logger.info(\"Invalid response: %s, %s\", str(ex), service_record.__repr__())\n                except Exception as ex:\n                    service_record._status = ServiceRecord.Status.Unresponsive\n                    check_count[service_record._id] += 1\n                    self._logger.info(\"Exception occurred: %s, %s\", str(ex), service_record.__repr__())\n                else:\n                    service_record._status = ServiceRecord.Status.Running\n\n                    self._logger.debug(\"Resolving pending notification for ACL change \"\n                                       \"for service {} \".format(service_record._name))\n                    if not self._acl_handler:\n                        self._acl_handler = ACLManager(connect.get_storage_async())\n                    await self._acl_handler.\\\n                        resolve_pending_notification_for_acl_change(service_record._name)\n\n                    check_count[service_record._id] = 1\n\n                if check_count[service_record._id] > self._max_attempts:\n                    ServiceRegistry.mark_as_failed(service_record._id)\n                    check_count[service_record._id] = 0\n                    if self._support_bundle_config['auto_support_bundle']['value'] == 'true':\n                        self._logger.info(\"Service %s failed, creating automated support bundle\",\n                                          service_record._name)\n                        asyncio.create_task(self.create_automated_support_bundle(service_record._name))\n                    try:\n                        audit = AuditLogger(connect.get_storage_async())\n                        await audit.failure('SRVFL', 
{'name':service_record._name})\n                    except Exception as ex:\n                        self._logger.info(\"Failed to audit service failure %s\", str(ex))\n            await self._sleep(self._sleep_interval)\n\n    async def create_automated_support_bundle(self, service_name):\n        \"\"\"Create support bundle asynchronously when service fails\"\"\"\n        try:\n            from fledge.services.core.support import SupportBuilder\n            from fledge.common.common import _FLEDGE_DATA, _FLEDGE_ROOT\n            support_dir = _FLEDGE_DATA + \"/support\" if _FLEDGE_DATA else _FLEDGE_ROOT + \"/data/support\"\n            builder = SupportBuilder(support_dir, self._support_bundle_config)\n            bundle_name = await builder.build(service_name)\n            # Raise alert about support bundle creation\n            await self.raise_support_bundle_alert(service_name, bundle_name)\n            self._logger.info(\"Support bundle created: {} for failed service: {}\".format(bundle_name, service_name))\n        except Exception as ex:\n            self._logger.error(ex, \"Failed to create support bundle for {}\".format(service_name))\n    \n    async def raise_support_bundle_alert(self, service_name, bundle_name):\n        \"\"\"Raise alert for automated support bundle creation\"\"\"\n        if not self._alert_manager:\n            self._alert_manager = AlertManager(connect.get_storage_async())\n        # Don't create alert if already exists\n        key = f\"{service_name}-support-bundle\"\n        alert = None\n        try:\n            alert = await self._alert_manager.get_by_key(key)\n        except KeyError:\n            pass\n\n        if alert is not None:\n            self._logger.debug(\"Alert for support bundle already exists for service: {}\".format(service_name))\n            return\n\n        try:\n            param = {\n                \"key\": key,\n                \"message\": f\"Support bundle created for failed service 
'{service_name}'\",\n                \"urgency\": \"2\"  # High urgency\n            }\n            await self._alert_manager.add(param)\n        except Exception as ex:\n            self._logger.error(ex, \"Failed to raise an alert on support bundle creation for {} service.\".format(service_name))\n\n    async def _handle_config_change(self, category_name):\n        \"\"\"Handle configuration changes for monitored categories\"\"\"\n        self._logger.info(\"Processing configuration change for category: {}\".format(category_name))\n        \n        try:\n            if category_name == 'SMNTR':\n                await self._reload_monitor_config()\n            elif category_name == 'SUPPORT_BUNDLE':\n                await self._reload_support_bundle_config()\n        except Exception as ex:\n            self._logger.error(\"Failed to handle configuration change for {}: {}\".format(category_name, str(ex)))\n\n    async def _reload_monitor_config(self):\n        \"\"\"Reload SMNTR configuration and update monitor parameters\"\"\"\n        self._logger.info(\"Reloading SMNTR configuration...\")\n        \n        config = await self._cfg_manager.get_category_all_items('SMNTR')\n        \n        # Store old values for logging\n        old_values = {\n            'sleep_interval': self._sleep_interval,\n            'ping_timeout': self._ping_timeout,\n            'max_attempts': self._max_attempts,\n            'restart_failed': self._restart_failed\n        }\n        \n        # Update with new values\n        self._sleep_interval = int(config['sleep_interval']['value'])\n        self._ping_timeout = int(config['ping_timeout']['value'])\n        self._max_attempts = int(config['max_attempts']['value'])\n        self._restart_failed = config['restart_failed']['value']\n        \n        # Log changes\n        changes = []\n        if old_values['sleep_interval'] != self._sleep_interval:\n            changes.append(\"Sleep interval: {} -> 
{}\".format(old_values['sleep_interval'], self._sleep_interval))\n        if old_values['ping_timeout'] != self._ping_timeout:\n            changes.append(\"Ping timeout: {} -> {}\".format(old_values['ping_timeout'], self._ping_timeout))\n        if old_values['max_attempts'] != self._max_attempts:\n            changes.append(\"Max attempts: {} -> {}\".format(old_values['max_attempts'], self._max_attempts))\n        if old_values['restart_failed'] != self._restart_failed:\n            changes.append(\"Restart failed: {} -> {}\".format(old_values['restart_failed'], self._restart_failed))\n            \n        if changes:\n            self._logger.info(\"SMNTR configuration changes applied: {}\".format(\", \".join(changes)))\n        else:\n            self._logger.debug(\"SMNTR configuration reloaded with no changes\")\n\n    async def _reload_support_bundle_config(self):\n        \"\"\"Reload SUPPORT_BUNDLE configuration\"\"\"\n        self._logger.info(\"Reloading SUPPORT_BUNDLE configuration...\")\n        \n        old_auto_bundle = self._support_bundle_config.get('auto_support_bundle', {}).get('value', 'false') if self._support_bundle_config else 'false'\n        \n        self._support_bundle_config = await self._cfg_manager.get_category_all_items('SUPPORT_BUNDLE')\n        \n        new_auto_bundle = self._support_bundle_config.get('auto_support_bundle', {}).get('value', 'false')\n        \n        if old_auto_bundle != new_auto_bundle:\n            self._logger.info(\"SUPPORT_BUNDLE configuration changed - Auto support bundle: {} -> {}\".format(old_auto_bundle, new_auto_bundle))\n        else:\n            self._logger.debug(\"SUPPORT_BUNDLE configuration reloaded with no changes\")\n\n    async def _register_config_interests(self):\n        \"\"\"Register interest for configuration changes\"\"\"\n        try:\n            # Register for SMNTR configuration changes\n            self._cfg_manager.register_interest('SMNTR', 
'fledge.services.core.service_registry.monitor')\n            self._logger.info(\"Registered interest for SMNTR configuration changes\")\n            \n            # Register for SUPPORT_BUNDLE configuration changes\n            self._cfg_manager.register_interest('SUPPORT_BUNDLE', 'fledge.services.core.service_registry.monitor')\n            self._logger.info(\"Registered interest for SUPPORT_BUNDLE configuration changes\")\n            \n        except Exception as ex:\n            self._logger.error(\"Failed to register configuration interests: {}\".format(str(ex)))\n\n    \n    async def _read_config(self):\n        \"\"\"Reads configuration\"\"\"\n        # Register this instance with the registry\n        MonitorRegistry.register('default', self)\n        \n        default_config = {\n            \"sleep_interval\": {\n                \"description\": \"Time in seconds to sleep between health checks. (must be greater than 5)\",\n                \"type\": \"integer\",\n                \"default\": str(self._DEFAULT_SLEEP_INTERVAL),\n                \"displayName\": \"Health Check Interval (In seconds)\",\n                \"minimum\": \"5\"\n            },\n            \"ping_timeout\": {\n                \"description\": \"Timeout for a response from any given micro-service. 
(must be greater than 0)\",\n                \"type\": \"integer\",\n                \"default\": str(self._DEFAULT_PING_TIMEOUT),\n                \"displayName\": \"Ping Timeout\",\n                \"minimum\": \"1\",\n                \"maximum\": \"5\"\n            },\n            \"max_attempts\": {\n                \"description\": \"Maximum number of attempts for finding a heartbeat of service\",\n                \"type\": \"integer\",\n                \"default\": str(self._DEFAULT_MAX_ATTEMPTS),\n                \"displayName\": \"Max Attempts To Check Heartbeat\",\n                \"minimum\": \"1\"\n            },\n            \"restart_failed\": {\n                \"description\": \"Restart failed microservice - manual/auto\",\n                \"type\": \"enumeration\",\n                'options': ['auto', 'manual'],\n                \"default\": self._DEFAULT_RESTART_FAILED,\n                \"displayName\": \"Restart Failed\"\n            }\n        }\n\n        storage_client = connect.get_storage_async()\n        self._cfg_manager = ConfigurationManager(storage_client)\n        await self._cfg_manager.create_category('SMNTR', default_config, 'Service Monitor', display_name='Service Monitor')\n\n        config = await self._cfg_manager.get_category_all_items('SMNTR')\n        self._support_bundle_config = await self._cfg_manager.get_category_all_items('SUPPORT_BUNDLE')\n\n        self._sleep_interval = int(config['sleep_interval']['value'])\n        self._ping_timeout = int(config['ping_timeout']['value'])\n        self._max_attempts = int(config['max_attempts']['value'])\n        self._restart_failed = config['restart_failed']['value']\n\n        # Register for configuration change notifications\n        await self._register_config_interests()\n\n    async def restart_service(self, service_record):\n        from fledge.services.core import server  # To avoid cyclic import as server also imports monitor\n        schedule = await 
server.Server.scheduler.get_schedule_by_name(service_record._name)\n        await server.Server.scheduler.queue_task(schedule.schedule_id)\n        self.restarted_services.remove(service_record._id)\n        # Raise an alert for the service restart\n        await self.raise_an_alert(server.Server, service_record._name)\n\n    async def raise_an_alert(self, obj, svc_name):\n        async def _new_alert_entry(restart_count=1):\n            param = {\"key\": svc_name, \"message\": 'The Service {} restarted {} times'.format(\n                svc_name, restart_count), \"urgency\": \"3\"}\n            await obj._alert_manager.add(param)\n\n        try:\n            alert = await obj._alert_manager.get_by_key(svc_name)\n            message = alert['message'].strip()\n            key = alert['key']\n            if message.startswith('The Service {} restarted'.format(key)) and message.endswith(\"times\"):\n                result = [int(s) for s in message.split() if s.isdigit()]\n                if result:\n                    await obj._alert_manager.delete(key)\n                    await _new_alert_entry(result[-1] + 1)\n                else:\n                    await _new_alert_entry()\n            else:\n                await _new_alert_entry()\n        except KeyError:\n            await _new_alert_entry()\n        except Exception as ex:\n            self._logger.error(ex, \"Failed to raise an alert on restarting {} service.\".format(svc_name))\n\n    async def start(self):\n        await self._read_config()\n        self._monitor_loop_task = asyncio.ensure_future(self._monitor_loop())\n\n    async def stop(self):\n        \"\"\"Clean up when stopping the monitor\"\"\"\n        # Unregister from registry\n        MonitorRegistry.unregister('default')\n        \n        try:\n            # Unregister configuration interests\n            if self._cfg_manager:\n                self._cfg_manager.unregister_interest('SMNTR', 
'fledge.services.core.service_registry.monitor')\n                self._cfg_manager.unregister_interest('SUPPORT_BUNDLE', 'fledge.services.core.service_registry.monitor')\n                self._logger.info(\"Unregistered configuration interests\")\n        except Exception as ex:\n            self._logger.error(\"Error unregistering config interests: {}\".format(str(ex)))\n            \n        try:\n            self._monitor_loop_task.cancel()\n        except asyncio.CancelledError:\n            pass\n\n\n# Module-level callback function required by ConfigurationManager\nasync def run(category_name):\n    \"\"\"Module-level callback function for configuration changes\n    \n    This function is called by ConfigurationManager when registered categories change.\n    It delegates to the monitor instance to handle the actual configuration update.\n    \n    Args:\n        category_name (str): The name of the category that changed\n    \"\"\"\n    monitor = MonitorRegistry.get('default')\n    \n    if monitor is None:\n        # Monitor instance not available - this could happen during startup/shutdown\n        # We can't use the monitor's logger since we don't have the instance\n        # Using the module-level logger setup instead\n        _logger = logger.setup(__name__)\n        _logger.warning(\"Monitor instance not available for config change callback\")\n        return\n        \n    try:\n        await monitor._handle_config_change(category_name)\n    except Exception as ex:\n        monitor._logger.error(\"Error in configuration change callback for {}: {}\".format(category_name, str(ex)))\n"
  },
  {
    "path": "python/fledge/services/core/service_registry/service_registry.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Services Registry class\"\"\"\n\nimport uuid\nimport asyncio\nimport string\nimport random\nfrom fledge.common import logger\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.services.core.service_registry import exceptions as service_registry_exceptions\nfrom fledge.services.core.interest_registry.interest_registry import InterestRegistry\n\n__author__ = \"Praveen Garg, Amarendra Kumar Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass ServiceRegistry:\n\n    _registry = list()\n\n    # Startup tokens to pass to service or tasks being started\n    _startupTokens = dict()\n\n    # Bearer token for the registered services\n    _bearerTokens = dict()\n\n    # INFO - level 20\n    _logger = logger.setup(__name__, level=20)\n\n    @classmethod\n    def getStartupToken(cls, name):\n        \"\"\" fetch a token from issued startup tokens\n        \"\"\" \n        return cls._startupTokens.get(name, None)\n    \n    @classmethod\n    def issueStartupToken(cls, name):\n        \"\"\" Create a startup token upon request and store it\n        \"\"\" \n        startToken = ''.join((random.choice(string.ascii_letters) for _ in range(32)))\n        cls._startupTokens[name] = startToken\n        return startToken\n\n    @classmethod\n    def checkStartupToken(cls, name, token):\n        \"\"\" Check startup token exists for given service name\n        \"\"\" \n        foundToken = cls._startupTokens.get(name, None)\n        if foundToken is None or foundToken != token:\n            return False\n\n        return True\n\n    @classmethod\n    def addBearerToken(cls, service_name, bearer_token):\n        cls._bearerTokens[service_name] = bearer_token\n\n    @classmethod\n    def getBearerToken(cls, service_name):\n        return 
cls._bearerTokens.get(service_name, None)\n\n    @classmethod\n    def register(cls, name, s_type, address, port, management_port,  protocol='http', token=None):\n        \"\"\" registers the service instance\n       \n        :param name: name of the service\n        :param s_type: a valid service type; e.g. Storage, Core, Southbound\n        :param address: any IP or host address\n        :param port: a valid positive integer\n        :param management_port: a valid positive integer for management operations e.g. ping, shutdown\n        :param protocol: defaults to http\n        :param token: single use token\n\n        :return: registered services' uuid\n        \"\"\"\n\n        new_service = True\n        try:\n            current_service = cls.get(name=name)\n        except service_registry_exceptions.DoesNotExist:\n            pass\n        else:\n            # Re: FOGL-1123\n            if current_service[0]._status in [ServiceRecord.Status.Running, ServiceRecord.Status.Unresponsive]:\n                raise service_registry_exceptions.AlreadyExistsWithTheSameName\n            else:\n                new_service = False\n                current_service_id = current_service[0]._id\n\n        if port is not None and cls.check_address_and_port(address, port):\n            raise service_registry_exceptions.AlreadyExistsWithTheSameAddressAndPort\n\n        if cls.check_address_and_mgt_port(address, management_port):\n            raise service_registry_exceptions.AlreadyExistsWithTheSameAddressAndManagementPort\n\n        if port is not None and (not isinstance(port, int)):\n            raise service_registry_exceptions.NonNumericPortError\n\n        if not isinstance(management_port, int):\n            raise service_registry_exceptions.NonNumericPortError\n\n        if new_service is False:\n            # Remove current service to enable the service to register with new management port etc\n            cls.remove_from_registry(current_service_id)\n\n        
service_id = str(uuid.uuid4()) if new_service is True else current_service_id\n        registered_service = ServiceRecord(service_id, name, s_type, protocol, address, port, management_port)\n        cls._registry.append(registered_service)\n        cls._logger.info(\"Registered {}\".format(str(registered_service)))\n\n        # Remove startup token\n        if token is not None:\n            cls._startupTokens.pop(name, None)\n\n        # Success\n        return service_id\n\n    @classmethod\n    def _expunge(cls, service_id, service_status):\n        \"\"\" removes the service instance from action\n\n        :param service_id: a uuid of registered service\n        :param service_status: service status to be marked\n        :return: service_id on successful deregistration\n        \"\"\"\n        services = cls.get(idx=service_id)\n        service_name = services[0]._name\n        services[0]._status = service_status\n        # Clear debug information for service record\n        services[0]._debug = {}\n\n        cls._remove_from_scheduler_records(service_name)\n\n        cls._bearerTokens.pop(service_name, None)\n\n        # Remove interest registry records, if any\n        interest_recs = InterestRegistry().get(microservice_uuid=service_id)\n        for interest_rec in interest_recs:\n            InterestRegistry().unregister(interest_rec._registration_id)\n\n        return services[0]\n\n    @classmethod\n    def unregister(cls, service_id):\n        \"\"\" deregisters the service instance\n\n        :param service_id: a uuid of registered service\n        :return: service_id on successful deregistration\n        \"\"\"\n        expunged_service = cls._expunge(service_id, ServiceRecord.Status.Shutdown)\n        cls._logger.info(\"Stopped {}\".format(str(expunged_service)))\n        return service_id\n\n    @classmethod\n    def restart(cls, service_id):\n        \"\"\" restart the service instance\n\n        :param service_id: a uuid of registered service\n     
   :return: service_id on successful restart request\n        \"\"\"\n        expunged_service = cls._expunge(service_id, ServiceRecord.Status.Restart)\n        cls._logger.info(\"Restart requested {}\".format(str(expunged_service)))\n        return service_id\n\n    @classmethod\n    def mark_as_failed(cls, service_id):\n        \"\"\" marks the service instance as failed\n\n        :param service_id: a uuid of registered service\n        :return: service_id on marking the service as failed\n        \"\"\"\n        expunged_service = cls._expunge(service_id, ServiceRecord.Status.Failed)\n        cls._logger.info(\"Mark as failed {}\".format(str(expunged_service)))\n        return service_id\n\n    @classmethod\n    def remove_from_registry(cls, service_id):\n        \"\"\" remove service_id from service_registry.\n\n        :param service_id: a uuid of registered service\n        \"\"\"\n        services = cls.get(idx=service_id)\n        cls._registry.remove(services[0])\n\n    @classmethod\n    def _remove_from_scheduler_records(cls, service_name):\n        \"\"\" removes service aka STARTUP from Scheduler internal records\n\n        :param service_name\n        :return:\n        \"\"\"\n        if service_name in (\"Fledge Storage\", \"Fledge Core\"): return\n\n        # Require a local import in order to avoid circular import references\n        from fledge.services.core import server\n        if server.Server.scheduler is None: return\n        asyncio.ensure_future(server.Server.scheduler.remove_service_from_task_processes(service_name))\n\n    @classmethod\n    def all(cls):\n        return cls._registry\n\n    @classmethod\n    def filter(cls, **kwargs):\n        # OR based filter\n        services = cls._registry\n        for k, v in kwargs.items():\n            if v:\n                services = [s for s in cls._registry if getattr(s, k, None) == v]\n        return services\n\n    @classmethod\n    def get(cls, idx=None, name=None, s_type=None):\n        
services = cls.filter(_id=idx, _name=name, _type=s_type)\n        if len(services) == 0:\n            raise service_registry_exceptions.DoesNotExist\n        return services\n\n    @classmethod\n    def check_address_and_port(cls, address, port):\n        # AND based check\n        # ugly hack! <Make filter to support AND | OR>\n        services = [s for s in cls._registry if getattr(s, \"_address\") == address and getattr(s, \"_port\") == port and getattr(s, \"_status\") != ServiceRecord.Status.Failed]\n        if len(services) == 0:\n            return False\n        return True\n\n    @classmethod\n    def check_address_and_mgt_port(cls, address, m_port):\n        # AND based check\n        # ugly hack! <Make filter to support AND | OR>\n        services = [s for s in cls._registry if getattr(s, \"_address\") == address\n                    and getattr(s, \"_management_port\") == m_port and getattr(s, \"_status\") != ServiceRecord.Status.Failed]\n        if len(services) == 0:\n            return False\n        return True\n\n    @classmethod\n    def filter_by_name_and_type(cls, name, s_type):\n        # AND based check\n        # ugly hack! <Make filter to support AND | OR>\n        services = [s for s in cls._registry if getattr(s, \"_name\") == name and getattr(s, \"_type\") == s_type]\n        if len(services) == 0:\n            raise service_registry_exceptions.DoesNotExist\n        return services\n"
  },
  {
    "path": "python/fledge/services/core/snapshot.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Provides utility functions to take snapshot of plugins\"\"\"\n\nimport os\nfrom os import path\nfrom os.path import basename\nimport json\nimport tarfile\nimport fnmatch\nimport time\nfrom collections import OrderedDict\n\nfrom fledge.common.common import _FLEDGE_ROOT\nfrom fledge.common.logger import FLCoreLogger\n\n\n__author__ = \"Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_NO_OF_FILES_TO_RETAIN = 3\nSNAPSHOT_PREFIX = \"snapshot-plugin\"\n_LOGGER = FLCoreLogger().get_logger(__name__)\n\n\nclass SnapshotPluginBuilder:\n\n    _out_file_path = None\n    _interim_file_path = None\n\n    def __init__(self, snapshot_plugin_dir):\n        try:\n            if not os.path.exists(snapshot_plugin_dir):\n                os.makedirs(snapshot_plugin_dir)\n            else:\n                self.check_and_delete_plugins_tar_files(snapshot_plugin_dir)\n\n            self._out_file_path = snapshot_plugin_dir\n            self._interim_file_path = snapshot_plugin_dir\n        except (OSError, Exception) as ex:\n            _LOGGER.error(ex, \"Error in initializing SnapshotPluginBuilder class.\")\n            raise RuntimeError(str(ex))\n\n    async def build(self):\n        def reset(tarinfo):\n            tarinfo.uid = tarinfo.gid = 0\n            tarinfo.uname = tarinfo.gname = \"root\"\n            return tarinfo\n\n        tar_file_name = \"\"\n        try:\n            snapshot_id = str(int(time.time()))\n            snapshot_filename = \"{}-{}.tar.gz\".format(SNAPSHOT_PREFIX, snapshot_id)\n            tar_file_name = \"{}/{}\".format(self._out_file_path, snapshot_filename)\n            pyz = tarfile.open(tar_file_name, \"w:gz\")\n            try:\n                # files are being added to tarfile with relative path and NOT with absolute path.\n            
    pyz.add(\"{}/python/fledge/plugins\".format(_FLEDGE_ROOT),\n                        arcname=\"python/fledge/plugins\", recursive=True)\n                # C plugins location is different with \"make install\" and \"make\"\n                if path.exists(\"{}/bin\".format(_FLEDGE_ROOT)) and path.exists(\"{}/bin/fledge\".format(_FLEDGE_ROOT)):\n                    pyz.add(\"{}/plugins\".format(_FLEDGE_ROOT), arcname=\"plugins\", recursive=True, filter=reset)\n                else:\n                    pyz.add(\"{}/C/plugins\".format(_FLEDGE_ROOT), arcname=\"C/plugins\", recursive=True)\n                    pyz.add(\"{}/plugins\".format(_FLEDGE_ROOT), arcname=\"plugins\", recursive=True)\n                    pyz.add(\"{}/cmake_build/C/plugins\".format(_FLEDGE_ROOT), arcname=\"cmake_build/C/plugins\",\n                            recursive=True)\n            finally:\n                pyz.close()\n        except Exception as ex:\n            if os.path.isfile(tar_file_name):\n                os.remove(tar_file_name)\n            _LOGGER.error(ex, \"Error in creating Snapshot .tar.gz file.\")\n            raise RuntimeError(str(ex))\n\n        self.check_and_delete_temp_files(self._interim_file_path)\n        self.check_and_delete_plugins_tar_files(self._out_file_path)\n        _LOGGER.info(\"Snapshot %s successfully created.\", tar_file_name)\n        return snapshot_id, snapshot_filename\n\n    def check_and_delete_plugins_tar_files(self, snapshot_plugin_dir):\n        valid_extension = '.tar.gz'\n        valid_files_to_delete = dict()\n        try:\n            for root, dirs, files in os.walk(snapshot_plugin_dir):\n                for _file in files:\n                    if _file.endswith(valid_extension):\n                        valid_files_to_delete[_file.split(\".\")[0]] = os.path.join(root, _file)\n            valid_files_to_delete_sorted = OrderedDict(sorted(valid_files_to_delete.items(), reverse=True))\n            while len(valid_files_to_delete_sorted) > 
_NO_OF_FILES_TO_RETAIN:\n                _file, _path = valid_files_to_delete_sorted.popitem()\n                _LOGGER.warning(\"Removing plugin snapshot file %s.\", _path)\n                os.remove(_path)\n        except OSError as ex:\n            _LOGGER.error(ex, \"ERROR while deleting plugin file.\")\n\n    def check_and_delete_temp_files(self, snapshot_plugin_dir):\n        # Delete all non *.tar.gz files\n        for f in os.listdir(snapshot_plugin_dir):\n            if not fnmatch.fnmatch(f, '{}*.tar.gz'.format(SNAPSHOT_PREFIX)):\n                os.remove(os.path.join(snapshot_plugin_dir, f))\n\n    def write_to_tar(self, pyz, temp_file, data):\n        with open(temp_file, 'w') as outfile:\n            json.dump(data, outfile, indent=4)\n        pyz.add(temp_file, arcname=basename(temp_file))\n\n    def extract_files(self, pyz):\n        # Extraction methods are different for production env and dev env\n        if path.exists(\"{}/bin\".format(_FLEDGE_ROOT)) and path.exists(\"{}/bin/fledge\".format(_FLEDGE_ROOT)):\n            cmd = \"{}/extras/C/cmdutil tar-extract {}\".format(_FLEDGE_ROOT, pyz)\n            retcode = os.system(cmd)\n            if retcode != 0:\n                raise OSError('Error {}: {}'.format(retcode, cmd))\n            return True\n        else:\n            try:\n                with tarfile.open(pyz, \"r:gz\") as tar:\n                    # Members were added with paths relative to _FLEDGE_ROOT, so extract them back under _FLEDGE_ROOT\n                    tar.extractall(path=_FLEDGE_ROOT, members=tar.getmembers())\n            except Exception as ex:\n                raise RuntimeError(\"Extraction error for snapshot {}. {}\".format(pyz, str(ex)))\n            else:\n                return True\n"
  },
  {
    "path": "python/fledge/services/core/support.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Provides utility functions to build a Fledge Support bundle.\n\"\"\"\nimport datetime\nimport os\nfrom os.path import basename\nimport glob\nimport sys\nimport shutil\nimport json\nimport tarfile\nimport fnmatch\nimport subprocess\n\nfrom fledge.common import utils\nfrom fledge.common.common import _FLEDGE_ROOT, _FLEDGE_DATA\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.plugin_discovery import PluginDiscovery\nfrom fledge.common.storage_client import payload_builder\nfrom fledge.services.core import server\nfrom fledge.services.core.connect import *\nfrom fledge.services.core.api.python_packages import get_packages_installed\nfrom fledge.services.core.api.service import get_service_records, get_service_installed\n\n__author__ = \"Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_LOGGER = FLCoreLogger().get_logger(__name__)\n\n_SYSLOG_FILE = '/var/log/messages' if utils.is_redhat_based() else '/var/log/syslog'\n_PATH = _FLEDGE_DATA if _FLEDGE_DATA else _FLEDGE_ROOT + '/data'\n\n\nclass SupportBuilder:\n\n    _out_file_path = None\n    _interim_file_path = None\n    _storage = None\n    _num_of_files_to_retain = 1\n\n    def __init__(self, support_dir, support_bundle_config=None):\n        try:\n            if support_bundle_config:\n                self._num_of_files_to_retain = int(support_bundle_config['support_bundle_retain_count']['value'])\n\n            if not os.path.exists(support_dir):\n                os.makedirs(support_dir)\n            else:\n                self.check_and_delete_bundles(support_dir)\n\n            self._out_file_path = support_dir\n            self._interim_file_path = support_dir\n            self._storage = get_storage_async()  # from 
fledge.services.core.connect\n        except Exception as ex:\n            _LOGGER.error(ex, \"Error in initializing SupportBuilder class.\")\n            raise RuntimeError(str(ex))\n\n    async def build(self, name=None):\n        try:\n            today = datetime.datetime.utcnow()\n            file_spec = today.strftime('%y%m%d-%H-%M-%S')\n            support_file_name = f\"support-{name}-{file_spec}\" if name else f\"support-{file_spec}\"\n            tar_file_name = self._out_file_path + \"/\" + support_file_name + \".tar.gz\"\n            pyz = tarfile.open(tar_file_name, \"w:gz\")\n            try:\n                # fledge version and schema info\n                await self.add_fledge_version_and_schema(pyz)\n                # Details of machine resources\n                self.add_machine_resources(pyz, file_spec)\n                # Process status of services or tasks\n                self.add_psinfo(pyz, file_spec)\n                # software installed list\n                self.add_software_list(pyz, file_spec)\n                # package logs\n                self.add_package_log_dir_content(pyz)\n                # pip packages list\n                self.add_python_packages_list(pyz, file_spec)\n                # all logs\n                self.add_syslog_fledge(pyz, file_spec)\n                # storage service logs\n                self.add_syslog_storage(pyz, file_spec)\n                # service registry\n                self.add_service_registry(pyz, file_spec)\n                # debug trace logs\n                self.add_debug_trace_log_dir_content(pyz)\n                # configuration related scripts\n                self.add_script_dir_content(pyz)\n                # utility computation files\n                self.add_syslog_utility(pyz)\n                cf_mgr = ConfigurationManager(self._storage)\n                try:\n                    # South services logs\n                    south_cat = await cf_mgr.get_category_child(\"South\")\n    
                south_categories = [sc[\"key\"] for sc in south_cat]\n                    for service in south_categories:\n                        self.add_syslog_service(pyz, file_spec, service)\n                except Exception:\n                    pass\n                try:\n                    # North services and tasks logs\n                    north_cat = await cf_mgr.get_category_child(\"North\")\n                    north_categories = [nc[\"key\"] for nc in north_cat]\n                    for task in north_categories:\n                        if task != \"OMF_TYPES\":\n                            self.add_syslog_service(pyz, file_spec, task)\n                except Exception:\n                    pass\n                schedule_list = []  # ensure defined even if the scheduler query below fails\n                try:\n                    # external services logs\n                    schedule_list = await server.Server.scheduler.get_schedules()\n                    external_svc_processes = ('bucket_storage_c', 'dispatcher_c', 'management', 'notification_c')\n                    for sch in filter(lambda obj: obj.process_name in external_svc_processes, schedule_list):\n                        self.add_syslog_service(pyz, file_spec, sch.name)\n                except Exception:\n                    pass\n                # Tables related info\n                db_tables = {\"configuration\": \"category\", \"log\": \"audit\", \"schedules\": \"schedule\",\n                             \"scheduled_processes\": \"schedule-process\", \"monitors\": \"service-monitoring\",\n                             \"statistics\": \"statistics\", \"alerts\": \"alerts\"}\n                for tbl_name, file_name in sorted(db_tables.items()):\n                    await self.add_db_content(pyz, file_spec, tbl_name, file_name)\n                # Control info only if dispatcher schedule available\n                for sch in filter(lambda obj: obj.process_name == 'dispatcher_c', schedule_list):\n                    await self.add_control_info(pyz)\n                # Last 1000 rows of Statistics history\n         
       await self.add_table_statistics_history(pyz, file_spec)\n                # First 1000 rows of Plugin data\n                await self.add_table_plugin_data(pyz, file_spec)\n                # First 1000 rows of Streams\n                await self.add_table_streams(pyz, file_spec)\n            finally:\n                pyz.close()\n        except Exception as ex:\n            _LOGGER.error(ex, \"Error in creating Support .tar.gz file.\")\n            raise RuntimeError(str(ex))\n\n        self.check_and_delete_temp_files(self._interim_file_path)\n        _LOGGER.info(\"Support bundle %s successfully created.\", tar_file_name)\n        return tar_file_name\n\n    def check_and_delete_bundles(self, support_dir):\n        files = glob.glob(support_dir + \"/\" + \"support*.tar.gz\")\n        files.sort(key=os.path.getmtime)\n        if len(files) >= self._num_of_files_to_retain:\n            num_of_files_to_remove = len(files) - self._num_of_files_to_retain + 1\n            for file in files[:num_of_files_to_remove]:\n                file_path = os.path.join(support_dir, file)\n                if os.path.isfile(file_path):\n                    try:\n                        os.remove(file_path)\n                    except PermissionError:\n                        _LOGGER.error(\"Permission denied to delete file %s\", file_path)\n                    except Exception as ex:\n                        _LOGGER.error(ex, \"Error in deleting file %s\", file_path)\n\n    def check_and_delete_temp_files(self, support_dir):\n        # Delete all non *.tar.gz files\n        for f in os.listdir(support_dir):\n            if not fnmatch.fnmatch(f, 'support*.tar.gz'):\n                os.remove(os.path.join(support_dir, f))\n\n    def write_to_tar(self, pyz, temp_file, data):\n        with open(temp_file, 'w') as outfile:\n            json.dump(data, outfile, indent=4)\n        pyz.add(temp_file, arcname=basename(temp_file))\n\n    async def add_fledge_version_and_schema(self, 
pyz):\n        temp_file = self._interim_file_path + \"/\" + \"fledge-info\"\n        with open('{}/VERSION'.format(_FLEDGE_ROOT)) as f:\n            lines = [line.rstrip() for line in f]\n        self.write_to_tar(pyz, temp_file, lines)\n\n    def add_syslog_fledge(self, pyz, file_spec):\n        # The fledge entries from the syslog file\n        temp_file = self._interim_file_path + \"/\" + \"syslog-{}\".format(file_spec)\n        try:\n            subprocess.call(\"grep -a '{}' {} > {}\".format(\"Fledge\", _SYSLOG_FILE, temp_file), shell=True)\n        except OSError as ex:\n            raise RuntimeError(\"Error in creating {}. Error-{}\".format(temp_file, str(ex)))\n        pyz.add(temp_file, arcname='logs/sys/{}'.format(basename(temp_file)))\n\n    def add_syslog_storage(self, pyz, file_spec):\n        # The contents of the syslog file that relate to the database layer (postgres)\n        temp_file = self._interim_file_path + \"/\" + \"syslogStorage-{}\".format(file_spec)\n        try:\n            subprocess.call(\"grep -a '{}' {} > {}\".format(\"Fledge Storage\", _SYSLOG_FILE, temp_file), shell=True)\n        except OSError as ex:\n            raise RuntimeError(\"Error in creating {}. 
Error-{}\".format(temp_file, str(ex)))\n        pyz.add(temp_file, arcname='logs/sys/{}'.format(basename(temp_file)))\n\n    def add_syslog_service(self, pyz, file_spec, service):\n        # The fledge entries from the syslog file for a service or task\n        # Replace space occurrences with hyphen for service or task - so that file is created\n        tmp_svc = service.replace(' ', '-')\n        temp_file = self._interim_file_path + \"/\" + \"syslog-{}-{}\".format(tmp_svc, file_spec)\n        try:\n            subprocess.call(\"grep -a -E '(Fledge {})\\[' {} > {}\".format(service, _SYSLOG_FILE, temp_file), shell=True)\n            pyz.add(temp_file, arcname='logs/sys/{}'.format(basename(temp_file)))\n        except Exception as ex:\n            raise RuntimeError(\"Error in creating {}. Error-{}\".format(temp_file, str(ex)))\n\n    def add_syslog_utility(self, pyz):\n        # syslog utility files\n        for filename in os.listdir(\"/tmp\"):\n            if filename.startswith(\"fl_syslog\"):\n                temp_file = \"/tmp/{}\".format(filename)\n                pyz.add(temp_file, arcname='logs/sys/{}'.format(filename))\n\n    async def add_db_content(self, pyz, file_spec, tbl_name, file_name):\n        def mask_passwords(data):\n            def sanitize_dict(d):\n                for key, val in d.items():\n                    if isinstance(val, dict):\n                        if val.get(\"type\") == \"password\" and \"value\" in val:\n                            val[\"default\"] = \"****\"\n                            val[\"value\"] = \"****\"\n                        sanitize_dict(val)\n\n            if \"rows\" in data:\n                for row in data[\"rows\"]:\n                    if isinstance(row.get(\"value\"), dict):\n                        sanitize_dict(row[\"value\"])\n            return data\n        temp_file = \"{}/{}-{}\".format(self._interim_file_path, file_name, file_spec)\n        raw_data = await self._storage.query_tbl(tbl_name)\n     
   sanitized_data = mask_passwords(raw_data) if tbl_name == 'configuration' else raw_data\n        self.write_to_tar(pyz, temp_file, sanitized_data)\n\n    async def add_table_statistics_history(self, pyz, file_spec):\n        # The contents of the statistics history from the storage layer\n        temp_file = self._interim_file_path + \"/\" + \"statistics-history-{}\".format(file_spec)\n        payload = payload_builder.PayloadBuilder() \\\n            .LIMIT(1000) \\\n            .ORDER_BY(['history_ts', 'DESC']) \\\n            .payload()\n        data = await self._storage.query_tbl_with_payload(\"statistics_history\", payload)\n        self.write_to_tar(pyz, temp_file, data)\n\n    async def add_table_plugin_data(self, pyz, file_spec):\n        # The contents of the plugin_data from the storage layer\n        temp_file = self._interim_file_path + \"/\" + \"plugin-data-{}\".format(file_spec)\n        payload = payload_builder.PayloadBuilder() \\\n            .LIMIT(1000) \\\n            .ORDER_BY(['key', 'ASC']) \\\n            .payload()\n        data = await self._storage.query_tbl_with_payload(\"plugin_data\", payload)\n        self.write_to_tar(pyz, temp_file, data)\n\n    async def add_table_streams(self, pyz, file_spec):\n        # The contents of the streams from the storage layer\n        temp_file = self._interim_file_path + \"/\" + \"streams-{}\".format(file_spec)\n        payload = payload_builder.PayloadBuilder() \\\n            .LIMIT(1000) \\\n            .ORDER_BY(['id', 'ASC']) \\\n            .payload()\n        data = await self._storage.query_tbl_with_payload(\"streams\", payload)\n        self.write_to_tar(pyz, temp_file, data)\n\n    def add_service_registry(self, pyz, file_spec):\n        # The contents of the service registry\n        temp_file = self._interim_file_path + \"/\" + \"service_registry-{}\".format(file_spec)\n        data = {\n            \"about\": \"Service Registry\",\n            \"serviceRegistry\": 
get_service_records()\n        }\n        self.write_to_tar(pyz, temp_file, data)\n\n    def add_machine_resources(self, pyz, file_spec):\n        def _execute_command(cmd):\n            sub = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE).stdout.readlines()\n            convert_bytes_to_string = [s.decode() for s in sub]\n            remove_whitespaces_from_list = [bts.strip().replace(' ', '') for bts in convert_bytes_to_string]\n            result = {}\n            for rw in remove_whitespaces_from_list:\n                kv = rw.split(\":\", 1)  # split on the first ':' only; skip lines without a key-value pair\n                if len(kv) == 2:\n                    result[kv[0]] = kv[1]\n            return result\n\n        # Details of machine resources, memory size, amount of available memory, storage size and amount of free storage\n        temp_file = self._interim_file_path + \"/\" + \"machine-{}\".format(file_spec)\n        total, used, free = shutil.disk_usage(\"/\")\n        memory = subprocess.Popen('free -h', shell=True, stdout=subprocess.PIPE).stdout.readlines()[1].split()[1:]\n        hostname_info = _execute_command('hostnamectl status')\n        cpu_architecture_info = _execute_command('lscpu')\n        data = {\n            \"about\": \"Machine resources\",\n            \"platform\": sys.platform,\n            \"totalMemory\": memory[0].decode(),\n            \"usedMemory\": memory[1].decode(),\n            \"freeMemory\": memory[2].decode(),\n            \"totalDiskSpace_MB\": int(total / (1024 * 1024)),\n            \"usedDiskSpace_MB\": int(used / (1024 * 1024)),\n            \"freeDiskSpace_MB\": int(free / (1024 * 1024)),\n            \"hostnameInfo\": hostname_info,\n            \"cpuArchitectureInfo\": cpu_architecture_info\n        }\n        self.write_to_tar(pyz, temp_file, data)\n\n    def add_psinfo(self, pyz, file_spec):\n        # A PS listing of all the python applications running on the machine\n        temp_file = self._interim_file_path + \"/\" + \"psinfo-{}\".format(file_spec)\n        a = subprocess.Popen('ps -aufx | egrep 
\"(%MEM|fledge\\.)\" | grep -v grep', shell=True,\n                             stdout=subprocess.PIPE).stdout.readlines()\n        c = [b.decode() for b in a]  # Since \"a\" contains return value in bytes, convert it to string\n\n        c_tasks = subprocess.Popen('ps -aufx | grep \"./tasks\" | grep -v grep', shell=True,\n                                   stdout=subprocess.PIPE).stdout.readlines()\n        c_tasks_decode = [t.decode() for t in c_tasks]\n        if c_tasks_decode:\n            c.extend(c_tasks_decode)\n        # Remove \"/n\" from the c list output\n        data = {\n            \"runningProcesses\": list(map(str.strip, c))\n        }\n        self.write_to_tar(pyz, temp_file, data)\n\n    def add_script_dir_content(self, pyz):\n        script_file_path = _PATH + '/scripts'\n        if os.path.exists(script_file_path):\n            # recursively 'true' by default and __pycache__ dir excluded\n            pyz.add(script_file_path, arcname='scripts', filter=self.exclude_pycache)\n\n    def add_package_log_dir_content(self, pyz) -> None:\n        package_logs_path = _PATH + '/logs'\n        if os.path.exists(package_logs_path):\n            for filename in os.listdir(package_logs_path):\n                if filename.endswith('.log'):\n                    file_path = os.path.join(package_logs_path, filename)\n                    # recursively 'true' by default and __pycache__ dir excluded\n                    pyz.add(file_path, arcname='logs/package/{}'.format(basename(file_path)),\n                            filter=self.exclude_pycache)\n\n    def add_debug_trace_log_dir_content(self, pyz) -> None:\n        debug_trace_logs_path = _PATH + '/logs/debug-trace'\n        if os.path.exists(debug_trace_logs_path):\n            for filename in os.listdir(debug_trace_logs_path):\n                # Check if the file has a .log extension\n                if filename.endswith('.log'):\n                    file_path = os.path.join(debug_trace_logs_path, 
filename)\n                    # recursively 'true' by default and __pycache__ dir excluded\n                    pyz.add(file_path, arcname='logs/debug-trace/{}'.format(basename(file_path)),\n                            filter=self.exclude_pycache)\n                    # Open the file in write mode ('w'), which will truncate it to zero length\n                    with open(file_path, 'w') as file:\n                        file.truncate(0)\n\n    async def add_control_info(self, pyz) -> None:\n        today = datetime.datetime.utcnow()\n        file_spec = today.strftime('%y%m%d-%H-%M-%S')\n        control_tables = ['control_acl', 'control_api_acl', 'control_api', 'control_api_parameters',\n                          'control_pipelines', 'control_filters', 'control_script']\n        for tbl in sorted(control_tables):\n            temp_file = \"{}/{}-{}\".format(self._interim_file_path, tbl.replace(\"_\", \"-\"), file_spec)\n            data = await self._storage.query_tbl(tbl)\n            with open(temp_file, 'w') as outfile:\n                json.dump(data, outfile, indent=4)\n            pyz.add(temp_file, arcname='control/{}'.format(basename(temp_file)))\n\n    def add_software_list(self, pyz, file_spec) -> None:\n        data = {\n            \"plugins\": PluginDiscovery.get_plugins_installed(),\n            \"services\": get_service_installed()\n        }\n        temp_file = self._interim_file_path + \"/\" + \"software-{}\".format(file_spec)\n        self.write_to_tar(pyz, temp_file, data)\n\n    def add_python_packages_list(self, pyz, file_spec) -> None:\n        data = {'packages': get_packages_installed()}\n        temp_file = self._interim_file_path + \"/\" + \"python-packages-{}\".format(file_spec)\n        self.write_to_tar(pyz, temp_file, data)\n\n    def exclude_pycache(self, tar_info):\n        return None if '__pycache__' in tar_info.name else tar_info\n"
  },
  {
    "path": "python/fledge/services/core/user_model.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Fledge user entity class with CRUD operations to Storage layer\"\"\"\nimport os\nimport json\nimport uuid\nimport hashlib\nfrom datetime import datetime, timedelta, timezone\nimport jwt\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.common import _FLEDGE_ROOT, _FLEDGE_DATA\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.common.web.ssl_wrapper import SSLVerifier\nfrom fledge.services.core import connect\n__author__ = \"Praveen Garg, Ashish Jabble, Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n# TODO: move to common  / config\nJWT_SECRET = 'f0gl@mp'\nJWT_ALGORITHM = 'HS512'\nJWT_EXP_DELTA_SECONDS = 30*60  # 30 minutes\nERROR_MSG = 'Something went wrong'\nUSED_PASSWORD_HISTORY_COUNT = 3\nHASH_PWD_ALGORITHM = 'SHA512'\nDATE_FORMAT = \"%Y-%m-%d %H:%M:%S.%f\"\n_logger = FLCoreLogger().get_logger(__name__)\n\n\nclass User:\n\n    __slots__ = ['uid', 'username', 'password', 'is_admin']\n\n    def __init__(self, uid, username, password, is_admin=False):\n        self.uid = uid\n        self.username = username\n        self.password = password\n        self.is_admin = is_admin\n\n    def __repr__(self):\n        template = 'User id={s.uid}: <{s.username}, is_admin={s.is_admin}>'\n        return template.format(s=self)\n\n    def __str__(self):\n        return self.__repr__()\n\n    class DoesNotExist(Exception):\n        pass\n\n    class UserAlreadyExists(Exception):\n        pass\n\n    class PasswordNotSetError(Exception):\n        pass\n\n    class PasswordDoesNotMatch(Exception):\n        pass\n\n    class 
PasswordAlreadyUsed(Exception):\n        pass\n\n    class PasswordExpired(Exception):\n        pass\n\n    class InvalidToken(Exception):\n        pass\n\n    class TokenExpired(Exception):\n        pass\n\n    class SessionTimeout(Exception):\n        pass\n\n    class Objects:\n\n        @classmethod\n        async def get_roles(cls):\n            storage_client = connect.get_storage_async()\n            result = await storage_client.query_tbl('roles')\n            return result[\"rows\"]\n\n        @classmethod\n        async def get_role_id_by_name(cls, name):\n            storage_client = connect.get_storage_async()\n            payload = PayloadBuilder().SELECT(\"id\").WHERE(['name', '=', name]).payload()\n            result = await storage_client.query_tbl_with_payload('roles', payload)\n            return result[\"rows\"]\n\n        @classmethod\n        async def get_role_name_by_id(cls, rid):\n            storage_client = connect.get_storage_async()\n            payload = PayloadBuilder().SELECT(\"name\").WHERE(['id', '=', rid]).LIMIT(1).payload()\n            result = await storage_client.query_tbl_with_payload('roles', payload)\n            name = None\n            if result[\"rows\"]:\n                rows = result['rows'][0]\n                if 'name' in rows:\n                    name = rows['name']\n            return name\n\n        @classmethod\n        async def create(cls, username, password, role_id, access_method='any', real_name='', description=''):\n            \"\"\"\n            Args:\n                username: user name\n                password: Password must contain at least one digit, one lowercase, one uppercase &\n                          one special character and length of minimum 6 characters\n                role_id: Role (by default normal 'user' role whose id is 2)\n                access_method: User access and can be of any, pwd, cert\n                real_name: full name of user\n                description: Description for 
user\n\n            Returns:\n                   user json info\n            \"\"\"\n\n            storage_client = connect.get_storage_async()\n            payload = PayloadBuilder().INSERT(uname=username,\n                                              pwd=cls.hash_password(password, HASH_PWD_ALGORITHM) if password else '',\n                                              access_method=access_method, role_id=role_id, real_name=real_name,\n                                              description=description).payload()\n            try:\n                result = await storage_client.insert_into_tbl(\"users\", payload)\n                # USRAD audit trail entry\n                audit = AuditLogger(storage_client)\n                audit_details = json.loads(payload)\n                audit_details.pop('pwd', None)\n                audit_details['message'] = \"'{}' username created for '{}' user.\".format(username, real_name)\n                await audit.information('USRAD', audit_details)\n            except StorageServerError as ex:\n                if ex.error[\"retryable\"]:\n                    pass  # retry INSERT\n                raise ValueError(ERROR_MSG)\n            return result\n\n        @classmethod\n        async def delete(cls, user_id):\n            \"\"\"\n            Args:\n                user_id: user id to delete\n\n            Returns:\n                  json response\n            \"\"\"\n\n            # either keep 1 admin user or just reserve id:1 for superuser\n            if int(user_id) == 1:\n                raise ValueError(\"Super admin user can not be deleted\")\n\n            storage_client = connect.get_storage_async()\n            try:\n                # first delete the active login references\n                await cls.delete_user_tokens(user_id)\n\n                payload = PayloadBuilder().SET(enabled=\"f\").WHERE(['id', '=', user_id]).AND_WHERE(\n                    ['enabled', '=', 't']).payload()\n                result = await 
storage_client.update_tbl(\"users\", payload)\n                # USRDL audit trail entry\n                audit = AuditLogger(storage_client)\n                await audit.information(\n                    'USRDL', {\"user_id\": user_id, \"message\": \"User ID: <{}> has been disabled.\".format(user_id)})\n            except StorageServerError as ex:\n                if ex.error[\"retryable\"]:\n                    pass  # retry INSERT\n                raise ValueError(ERROR_MSG)\n            return result\n\n        @classmethod\n        async def update(cls, user_id, user_data):\n            \"\"\"\n            Args:\n                 user_id: logged user id\n                 user_data: user dict\n\n            Returns:\n                  updated user info dict\n            \"\"\"\n            if not user_data:\n                return False\n            old_data = await cls.get(uid=user_id)\n            storage_client = connect.get_storage_async()\n\n            new_kwargs = {}\n            old_kwargs = {}\n            if 'access_method' in user_data:\n                if user_data['access_method'] == 'pwd':\n                    # Validate that the password is not empty; if it is, reject the client request with a bad request\n                    payload = PayloadBuilder().SELECT(\"pwd\").WHERE(['id', '=', user_id]).AND_WHERE(\n                        ['enabled', '=', 't']).payload()\n                    result = await storage_client.query_tbl_with_payload(\"users\", payload)\n                    if not result['rows'][0]['pwd']:\n                        msg = ('No password has been set for this user. 
Please create one before switching '\n                               'the authentication method to \"Password\".')\n                        raise ValueError(msg)\n                old_kwargs[\"access_method\"] = old_data['access_method']\n                new_kwargs.update({\"access_method\": user_data['access_method']})\n            if 'real_name' in user_data:\n                old_kwargs[\"real_name\"] = old_data['real_name']\n                new_kwargs.update({\"real_name\": user_data['real_name']})\n            if 'description' in user_data:\n                old_kwargs[\"description\"] = old_data['description']\n                new_kwargs.update({\"description\": user_data['description']})\n            if 'role_id' in user_data:\n                old_kwargs[\"role_id\"] = old_data['role_id']\n                new_kwargs.update({\"role_id\": user_data['role_id']})\n            if 'failed_attempts' in user_data:\n                old_kwargs[\"failed_attempts\"] = old_data['failed_attempts']\n                new_kwargs.update({\"failed_attempts\": user_data['failed_attempts']})\n            if 'block_until' in user_data:\n                old_kwargs[\"block_until\"] = old_data['block_until']\n                new_kwargs.update({\"block_until\": str(user_data['block_until'])})\n\n            hashed_pwd = None\n            pwd_history_list = []\n            if 'password' in user_data:\n                if len(user_data['password']):\n                    hashed_pwd = cls.hash_password(user_data['password'], old_data[\"hash_algorithm\"])\n                    current_datetime = datetime.now()\n                    old_kwargs[\"pwd\"] = \"****\"\n                    new_kwargs.update({\"pwd\": hashed_pwd, \"pwd_last_changed\": str(current_datetime)})\n\n                    # get password history list\n                    pwd_history_list = await cls._get_password_history(storage_client, user_id, user_data,\n                                                                       
old_data[\"hash_algorithm\"])\n            try:\n                payload = PayloadBuilder().SET(**new_kwargs).WHERE(['id', '=', user_id]).AND_WHERE(\n                    ['enabled', '=', 't']).payload()\n                result = await storage_client.update_tbl(\"users\", payload)\n                if result['rows_affected']:\n                    # FIXME: FOGL-1226 active session delete only in case of role_id and password updation\n                    if 'password' in user_data or 'role_id' in user_data:\n                        # delete all active sessions\n                        await cls.delete_user_tokens(user_id)\n\n                    if 'password' in user_data:\n                        # insert pwd history and delete oldest pwd if USED_PASSWORD_HISTORY_COUNT exceeds\n                        await cls._insert_pwd_history_with_oldest_pwd_deletion_if_count_exceeds(\n                            storage_client, user_id, hashed_pwd, pwd_history_list)\n\n                    # USRCH audit trail entry\n                    audit = AuditLogger(storage_client)\n                    if 'pwd' in new_kwargs:\n                        new_kwargs['pwd'] = \"Password has been updated.\"\n                        new_kwargs.pop('pwd_last_changed', None)\n                    await audit.information(\n                        'USRCH', {'user_id': user_id, 'old_value': old_kwargs, 'new_value': new_kwargs,\n                                  \"message\": \"'{}' user has been changed.\".format(old_data['uname'])})\n                    return True\n            except StorageServerError as ex:\n                if ex.error[\"retryable\"]:\n                    pass  # retry UPDATE\n                raise ValueError(ERROR_MSG)\n            except Exception:\n                raise\n\n        @classmethod\n        async def is_user_exists(cls, uid, password):\n            payload = PayloadBuilder().SELECT(\"uname\", \"pwd\", \"hash_algorithm\").WHERE(['id', '=', uid]).AND_WHERE(\n               
 ['enabled', '=', 't']).payload()\n            storage_client = connect.get_storage_async()\n            result = await storage_client.query_tbl_with_payload('users', payload)\n            if len(result['rows']) == 0:\n                return None\n\n            found_user = result['rows'][0]\n            is_valid_pwd = cls.check_password(found_user['pwd'], str(password), found_user['hash_algorithm'])\n            return uid if is_valid_pwd else None\n\n        # utility\n        @classmethod\n        async def all(cls):\n            storage_client = connect.get_storage_async()\n            result = await storage_client.query_tbl('users')\n            return result['rows']\n\n        @classmethod\n        async def filter(cls, **kwargs):\n            user_id = kwargs['uid']\n            user_name = kwargs['username']\n\n            q = PayloadBuilder().SELECT(\"id\", \"uname\", \"role_id\", \"access_method\", \"real_name\", \"description\",\n                                        \"hash_algorithm\", \"block_until\", \"failed_attempts\").WHERE(['enabled', '=', 't'])\n\n            if user_id is not None:\n                q = q.AND_WHERE(['id', '=', user_id])\n\n            if user_name is not None:\n                q = q.AND_WHERE(['uname', '=', user_name])\n\n            storage_client = connect.get_storage_async()\n            q_payload = PayloadBuilder(q.chain_payload()).payload()\n            result = await storage_client.query_tbl_with_payload('users', q_payload)\n            return result['rows']\n\n        @classmethod\n        async def get(cls, uid=None, username=None):\n            users = await cls.filter(uid=uid, username=username)\n            if len(users) == 0:\n                msg = ''\n                if uid:\n                    msg = \"User with id:<{}> does not exist\".format(uid)\n                if username:\n                    msg = \"User with name:<{}> does not exist\".format(username)\n                if uid and username:\n                 
   msg = \"User with id:<{}> and name:<{}> does not exist\".format(uid, username)\n\n                raise User.DoesNotExist(msg)\n            return users[0]\n\n        @classmethod\n        async def refresh_token_expiry(cls, token):\n            storage_client = connect.get_storage_async()\n            exp = datetime.now() + timedelta(seconds=JWT_EXP_DELTA_SECONDS)\n            \"\"\" MODIFIER with allowzero is passed in payload so that storage returns rows_affected 0 in any case \"\"\"\n            payload = PayloadBuilder().SET(token_expiration=str(exp)).WHERE(['token', '=', token]\n                                                                            ).MODIFIER([\"allowzero\"]).payload()\n            await storage_client.update_tbl(\"user_logins\", payload)\n\n        @classmethod\n        async def validate_token(cls, token):\n            \"\"\" check existence and validity of token\n                    * exists in user_logins table\n                    * its not expired\n            :param token:\n            :return:\n            \"\"\"\n            storage_client = connect.get_storage_async()\n            payload = PayloadBuilder().SELECT(\"token_expiration\") \\\n                .ALIAS(\"return\", (\"token_expiration\", 'token_expiration')) \\\n                .FORMAT(\"return\", (\"token_expiration\", \"YYYY-MM-DD HH24:MI:SS.MS\")) \\\n                .WHERE(['token', '=', token]).payload()\n            result = await storage_client.query_tbl_with_payload('user_logins', payload)\n\n            if len(result['rows']) == 0:\n                raise User.InvalidToken(\"Token appears to be invalid\")\n\n            r = result['rows'][0]\n            token_expiry = r[\"token_expiration\"]\n            curr_time = datetime.now().strftime(\"%Y-%m-%d %H:%M:%S.%f\")\n            diff = datetime.strptime(token_expiry, DATE_FORMAT) - datetime.strptime(curr_time, DATE_FORMAT)\n            if diff.seconds < 0:\n                raise User.TokenExpired(\"The token 
has expired, login again\")\n\n            # verification of expiry set to false,\n            # as we want to refresh token on each successful request\n            # and extend it to keep session alive\n            user_payload = jwt.decode(token, JWT_SECRET, algorithms=[JWT_ALGORITHM], options={'verify_exp': False})\n            return user_payload[\"uid\"]\n\n        @classmethod\n        async def login(cls, username, password, host):\n            \"\"\"\n            Args:\n                username: username\n                password: password\n                host:     IP address\n            Returns:\n                  return token\n\n            \"\"\"\n            # check password change configuration\n            storage_client = connect.get_storage_async()\n            cfg_mgr = ConfigurationManager(storage_client)\n            category_item = await cfg_mgr.get_category_item('password', 'expiration')\n            age = int(category_item['value'])\n\n            # get user info on the basis of username\n            payload = PayloadBuilder().SELECT(\"pwd\", \"id\", \"role_id\", \"access_method\", \"pwd_last_changed\",\n                                              \"real_name\", \"description\", \"hash_algorithm\", \"block_until\", \"failed_attempts\")\\\n                .WHERE(['uname', '=', username])\\\n                .ALIAS(\"return\", (\"pwd_last_changed\", 'pwd_last_changed'))\\\n                .FORMAT(\"return\", (\"pwd_last_changed\", \"YYYY-MM-DD HH24:MI:SS.MS\"))\\\n                .AND_WHERE(['enabled', '=', 't']).payload()\n            result = await storage_client.query_tbl_with_payload('users', payload)\n            if len(result['rows']) == 0:\n                raise User.DoesNotExist('User does not exist')\n\n            found_user = result['rows'][0]\n            if not found_user.get('pwd'):\n                raise User.PasswordNotSetError(\"Password is not set for this user.\")\n            # check age of password\n            t1 = 
datetime.now()\n            t2 = datetime.strptime(found_user['pwd_last_changed'], \"%Y-%m-%d %H:%M:%S.%f\")\n            delta = t1 - t2\n            if age == 0:\n                # user will not be forced to change their password.\n                pass\n            elif age <= delta.days:\n                # user will be forced to change their password.\n                raise User.PasswordExpired(found_user['id'])\n\n            failed_attempts = found_user['failed_attempts']\n            block_until = found_user['block_until']\n\n            # Do not block already blocked account further\n            if block_until:\n                curr_time = datetime.now(timezone.utc).strftime(DATE_FORMAT)\n                if datetime.strptime(block_until, DATE_FORMAT) > datetime.strptime(curr_time, DATE_FORMAT):\n                    diff = datetime.strptime(block_until, DATE_FORMAT) -  datetime.strptime(curr_time, DATE_FORMAT)\n                    hours = diff.seconds // 3600\n                    hours_left = \"\"\n                    if hours == 1 :\n                        hours_left = \"{} hour \".format(hours)\n                    elif hours > 1:\n                        hours_left = \"{} hours \".format(hours)\n\n                    minutes = (diff.seconds % 3600) // 60\n                    minutes_left = \" 1 minute\" #Show minutes 1 or less than 1 as \"1 minute\" \n                    if minutes > 1:\n                        minutes_left = \" {} minutes \".format(minutes)\n\n                    blocked_message = \"Account is blocked for {}{}\".format(hours_left,minutes_left)\n                    raise User.PasswordDoesNotMatch(blocked_message)\n\n            # validate password\n            is_valid_pwd = cls.check_password(found_user['pwd'], str(password), algorithm=found_user['hash_algorithm'])\n            if not is_valid_pwd:\n                # Another condition to check password is ONLY for the case:\n                # when we have requested password with hashed 
value and this comes only with microservice to get token\n                if found_user['pwd'] != str(password):\n                    # Do not block admin user\n                    if int(found_user['role_id']) == 1:\n                        raise User.PasswordDoesNotMatch('Username or Password do not match')\n\n                    MAX_LOGIN_ATTEMPTS = 5\n                    failed_attempts += 1\n                    audit_log_message = \"\"\n                    blocked_message = \"\"\n\n                    # Do not block users for first failed attempt\n                    if failed_attempts < MAX_LOGIN_ATTEMPTS - 3:\n                        await cls.update(found_user['id'],{'failed_attempts': failed_attempts})\n                        raise User.PasswordDoesNotMatch('Username or Password do not match')\n\n                    # Check for other users\n                    if failed_attempts == MAX_LOGIN_ATTEMPTS - 3: # Block for 1 minute after 2 failed attempts \n                        block_until = datetime.now(timezone.utc) + timedelta(seconds=60)\n                        audit_log_message = \"'{}' user blocked for 1 minute.\".format(username)\n                        blocked_message = \"Invalid username/password attempted multiple times. Account blocked for 1 minute.\"\n\n                    elif failed_attempts == MAX_LOGIN_ATTEMPTS - 2: # Block for 15 minutes after 3 failed attempts \n                        block_until = datetime.now(timezone.utc) + timedelta(minutes=15)\n                        audit_log_message = \"'{}' user blocked for 15 minutes.\".format(username)\n                        blocked_message = \"Invalid username/password attempted multiple times. 
Account blocked for 15 minutes.\"\n\n                    elif failed_attempts == MAX_LOGIN_ATTEMPTS - 1: # Block for 1 hour after 4 failed attempts \n                        block_until = datetime.now(timezone.utc) + timedelta(hours=1)\n                        audit_log_message = \"'{}' user blocked for 1 hour.\".format(username)\n                        blocked_message = \"Invalid username/password attempted multiple times. Account blocked for 1 hour.\"\n\n                    elif failed_attempts == MAX_LOGIN_ATTEMPTS: # Block for 24 hours after 5 failed attempts \n                        block_until = datetime.now(timezone.utc) + timedelta(hours=24)\n                        audit_log_message = \"'{}' user blocked for 24 hours.\".format(username)\n                        blocked_message = \"Invalid username/password attempted multiple times. Account blocked for 24 hours.\"\n\n                        # Raise Alert if user is blocked for 24 hours\n                        from fledge.common.alert_manager import AlertManager\n                        alert_manager = AlertManager(storage_client)\n                        param = {\"key\": \"USRBK\", \"message\": audit_log_message, \"urgency\": 2}\n                        await alert_manager.add(param)\n\n                    # USRBK audit trail entry\n                    if failed_attempts >= MAX_LOGIN_ATTEMPTS - 3:\n                        await cls.update(found_user['id'],{'failed_attempts': failed_attempts, 'block_until':block_until})\n                        audit = AuditLogger(storage_client)\n                        await audit.information('USRBK', {'user_id': found_user['id'], 'user_name': username, 'failed_attempts':failed_attempts,\n                            \"message\": audit_log_message})\n                        raise User.PasswordDoesNotMatch(blocked_message)\n\n            # Clear failed_attempts on successful login\n            if int(found_user['failed_attempts']) > 0:\n                await 
cls.update(found_user['id'],{'failed_attempts': 0})\n            uid, jwt_token, is_admin = await cls._get_new_token(storage_client, found_user, host)\n            return uid, jwt_token, is_admin\n\n        @classmethod\n        async def _get_new_token(cls, storage_client, found_user, host):\n            # fetch user info\n            exp = datetime.now() + timedelta(seconds=JWT_EXP_DELTA_SECONDS)\n            uid = found_user['id']\n            p = {'uid': uid, 'exp': exp}\n            jwt_token = jwt.encode(p, JWT_SECRET, JWT_ALGORITHM)\n\n            payload = PayloadBuilder().INSERT(user_id=p['uid'], token=jwt_token,\n                                              token_expiration=str(exp), ip=host).payload()\n\n            # Insert token, uid, expiration into user_login table\n            try:\n                await storage_client.insert_into_tbl(\"user_logins\", payload)\n            except StorageServerError as ex:\n                if ex.error[\"retryable\"]:\n                    pass  # retry INSERT\n                raise ValueError(ERROR_MSG)\n\n            # Save session in memory for idle disconnection\n            current_time = datetime.now().strftime(DATE_FORMAT)\n            await User.Sessions.save(data={\"uid\": uid, \"token\": jwt_token, \"last_accessed_ts\": current_time})\n            # TODO remove hard code role id to return is_admin info\n            if int(found_user['role_id']) == 1:\n                return uid, jwt_token, True\n            return uid, jwt_token, False\n\n        @classmethod\n        async def certificate_login(cls, username, host):\n            \"\"\"\n            Args:\n                username: username\n                host:     IP address\n            Returns:\n                  uid: User id\n                  token: jwt token\n                  is_admin: boolean flag\n\n            \"\"\"\n            storage_client = connect.get_storage_async()\n\n            # get user info on the basis of username\n            
payload = PayloadBuilder().SELECT(\"id\", \"role_id\").WHERE(['uname', '=', username])\\\n                .AND_WHERE(['enabled', '=', 't']).payload()\n            result = await storage_client.query_tbl_with_payload('users', payload)\n            if len(result['rows']) == 0:\n                raise User.DoesNotExist('User does not exist')\n\n            found_user = result['rows'][0]\n\n            uid, jwt_token, is_admin = await cls._get_new_token(storage_client, found_user, host)\n            return uid, jwt_token, is_admin\n\n        @classmethod\n        async def delete_user_tokens(cls, user_id):\n            storage_client = connect.get_storage_async()\n            payload = PayloadBuilder().WHERE(['user_id', '=', user_id]).payload()\n            try:\n                res = await storage_client.delete_from_tbl(\"user_logins\", payload)\n            except StorageServerError as ex:\n                if not ex.error[\"retryable\"]:\n                    pass\n                raise ValueError(ERROR_MSG)\n            # Remove user session on basis of user id\n            await User.Sessions.remove(data={\"uid\": user_id})\n            return res\n\n        @classmethod\n        async def delete_token(cls, token):\n            storage_client = connect.get_storage_async()\n            payload = PayloadBuilder().WHERE(['token', '=', token]).payload()\n            try:\n                res = await storage_client.delete_from_tbl(\"user_logins\", payload)\n            except StorageServerError as ex:\n                if not ex.error[\"retryable\"]:\n                    pass\n                raise ValueError(ERROR_MSG)\n            # Remove user session on basis of token\n            await User.Sessions.remove(data={\"token\": token})\n            return res\n\n        @classmethod\n        async def delete_all_user_tokens(cls):\n            storage_client = connect.get_storage_async()\n            await storage_client.delete_from_tbl(\"user_logins\")\n            # Clear 
all user sessions\n            await User.Sessions.clear()\n\n        @classmethod\n        def hash_password(cls, password, algorithm):\n            # uuid is used to generate a random number\n            salt = uuid.uuid4().hex\n            return hashlib.sha256(salt.encode() + password.encode()).hexdigest() + ':' + salt \\\n                if algorithm == \"SHA256\" else hashlib.sha512(salt.encode() + password.encode()).hexdigest() + ':' + salt\n\n        @classmethod\n        def check_password(cls, hashed_password, user_password, algorithm):\n            password, salt = hashed_password.split(':')\n            return password == (hashlib.sha256(salt.encode() + user_password.encode()).hexdigest() \\\n                if algorithm == \"SHA256\" else hashlib.sha512(salt.encode() + user_password.encode()).hexdigest())\n\n        @classmethod\n        async def _get_password_history(cls, storage_client, user_id, user_data, algorithm):\n            pwd_history_list = []\n            payload = PayloadBuilder().WHERE(['user_id', '=', user_id]).payload()\n            result = await storage_client.query_tbl_with_payload(\"user_pwd_history\", payload)\n            for row in result['rows']:\n                if cls.check_password(row['pwd'], user_data['password'], algorithm):\n                    raise User.PasswordAlreadyUsed\n                pwd_history_list.append(row['pwd'])\n            return pwd_history_list\n\n        @classmethod\n        async def _insert_pwd_history_with_oldest_pwd_deletion_if_count_exceeds(cls, storage_client, user_id, hashed_pwd, pwd_history_list):\n            # delete oldest password for user, as storage result in sorted order so its safe to delete its last index from pwd_history_list\n            if len(pwd_history_list) >= USED_PASSWORD_HISTORY_COUNT:\n                payload = PayloadBuilder().WHERE(['user_id', '=', user_id]).AND_WHERE(\n                    ['pwd', '=', pwd_history_list[-1]]).payload()\n                await 
storage_client.delete_from_tbl(\"user_pwd_history\", payload)\n\n            # insert into password history table\n            payload = PayloadBuilder().INSERT(user_id=user_id, pwd=hashed_pwd).payload()\n            await storage_client.insert_into_tbl(\"user_pwd_history\", payload)\n\n        @classmethod\n        async def verify_certificate(cls, cert):\n            certs_dir = _FLEDGE_DATA + '/etc/certs' if _FLEDGE_DATA else _FLEDGE_ROOT + \"/data/etc/certs\"\n\n            # TODO: we may require and additional configuration item in the REST API for user input regarding\n            #  intermediates or chain certs.\n\n            # Define possible filenames\n            possible_files = [\"intermediate.cert\", \"intermediate.pem\"]\n            intermediate_cert_file = \"\"\n            for filename in possible_files:\n                full_path = os.path.join(certs_dir, filename)\n                if os.path.isfile(full_path):\n                    intermediate_cert_file = full_path\n                    break\n\n            storage_client = connect.get_storage_async()\n            cfg_mgr = ConfigurationManager(storage_client)\n            ca_cert_item = await cfg_mgr.get_category_item('rest_api', 'authCertificateName')\n            ca_cert_file = \"{}/{}.cert\".format(certs_dir, ca_cert_item['value'])\n\n            SSLVerifier.set_ca_cert(ca_cert_file)\n            SSLVerifier.set_user_cert(cert)\n            SSLVerifier.set_intermediate_cert(None)\n            if intermediate_cert_file:\n                SSLVerifier.set_intermediate_cert(intermediate_cert_file)\n            SSLVerifier.verify()  # raises OSError, SSLVerifier.VerificationError\n\n    class Sessions:\n\n        @classmethod\n        async def get(cls):\n            # To avoid cyclic import\n            from fledge.services.core import server\n            return (server.Server._user_idle_session_timeout, server.Server._user_sessions)\n\n\n        @classmethod\n        async def save(cls, data):\n  
          # To avoid cyclic import\n            from fledge.services.core import server\n            server.Server._user_sessions.append(data)\n\n        @classmethod\n        async def remove(cls, data):\n            # To avoid cyclic import\n            from fledge.services.core import server\n            session = server.Server._user_sessions\n            if 'token' in data:\n                for s in session:\n                    if s['token'] == data['token']:\n                        server.Server._user_sessions.remove(s)\n            else:\n                for s in session:\n                    if s['uid'] == int(data['uid']):\n                        server.Server._user_sessions.remove(s)\n\n        @classmethod\n        async def clear(cls):\n            # To avoid cyclic import\n            from fledge.services.core import server\n            server.Server._user_sessions = []\n\n"
  },
  {
    "path": "python/fledge/services/south/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/services/south/exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" South Microservice exceptions module \"\"\"\n\nimport sys\n\n__author__ = \"Stefano Simonelli\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass InvalidCommandLineParametersError(Exception):\n    \"\"\" Command line parameters are invalid \"\"\"\n    pass\n\n\nclass InvalidMicroserviceNameError(Exception):\n    \"\"\" Invalid microservice name\"\"\"\n    pass\n\n\nclass InvalidPortError(Exception):\n    \"\"\" Invalid port \"\"\"\n    pass\n\n\nclass InvalidAddressError(Exception):\n    \"\"\" Invalid address \"\"\"\n    pass\n\n\nclass InvalidPluginTypeError(Exception):\n    \"\"\" Invalid plugin type, only the type -south- is allowed \"\"\"\n    pass\n\n\nclass DataRetrievalError(Exception):\n    \"\"\" Unable to retrieve data from the South plugin \"\"\"\n    pass\n\n\nclass QuietError(Exception):\n    # All who inherit me shall not traceback, but be spoken of cleanly\n    pass\n\n\ndef quiet_hook(kind, message, traceback):\n    if QuietError in kind.__bases__:\n        print('{0}: {1}'.format(kind.__name__, message))  # Only print Error Type and Message\n    else:\n        sys.__excepthook__(kind, message, traceback)  # Print Error Type, Message and Traceback\n\nsys.excepthook = quiet_hook\n"
  },
  {
    "path": "python/fledge/tasks/README.rst",
    "content": "***************\nScheduled Tasks\n***************\n\nThis directory contains the code related to the tasks that are\nexecuted by the Fledge scheduler using either on-demand or timed\nschedules. This does not include microservices that are run as a startup\ntask by the scheduler code.\n"
  },
  {
    "path": "python/fledge/tasks/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/tasks/automation_script/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/tasks/automation_script/__main__.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Automation script starter\"\"\"\n\nimport sys\nimport json\nimport http.client\nimport argparse\n\nfrom fledge.common.logger import FLCoreLogger\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nif __name__ == '__main__':\n    _logger = FLCoreLogger().get_logger(\"Control Script\")\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--name\", required=True)\n    parser.add_argument(\"--address\", required=True)\n    parser.add_argument(\"--port\", required=True, type=int)\n    namespace, args = parser.parse_known_args()\n    script_name = getattr(namespace, 'name')\n    core_management_host = getattr(namespace, 'address')\n    core_management_port = getattr(namespace, 'port')\n\n    if '--dryrun' in args:\n        # Put any configuration setting here\n        # if --dryrun option exists in args then simply exit. 
You can't return because you're not in a function\n        sys.exit()\n\n    # Get services list\n    get_svc_conn = http.client.HTTPConnection(\"{}:{}\".format(core_management_host, core_management_port))\n    get_svc_conn.request(\"GET\", '/fledge/service')\n    r = get_svc_conn.getresponse()\n    res = r.read().decode()\n    svc_jdoc = json.loads(res)\n    write_payload = {}\n    for svc in svc_jdoc['services']:\n        if svc['type'] == \"Core\":\n            # find the content of script category for write operation\n            get_script_cat_conn = http.client.HTTPConnection(\"{}:{}\".format(svc['address'], svc['service_port']))\n            get_script_cat_conn.request(\"GET\", '/fledge/category/{}-automation-script'.format(script_name))\n            r = get_script_cat_conn.getresponse()\n            res = r.read().decode()\n            script_cat_jdoc = json.loads(res)\n            write_payloads = json.loads(script_cat_jdoc['write']['value'])\n            for wp in write_payloads:\n                write_payload.update(wp['values'])\n            break\n\n    for svc in svc_jdoc['services']:\n        if svc['type'] == \"Dispatcher\":\n            # Call dispatcher write API with payload\n            post_dispatch_conn = http.client.HTTPConnection(\"{}:{}\".format(svc['address'], svc['service_port']))\n            data = {\"destination\": \"script\", \"name\": script_name, \"write\": write_payload}\n            post_dispatch_conn.request('POST', '/dispatch/write', json.dumps(data))\n            r = post_dispatch_conn.getresponse()\n            res = r.read().decode()\n            write_dispatch_jdoc = json.loads(res)\n            _logger.info(\"For script category with name: {}, dispatcher write API response: {}\".format(\n                script_name, write_dispatch_jdoc))\n            break\n"
  },
  {
    "path": "python/fledge/tasks/common/README.rst",
    "content": "*******************************************\nModules and Classes Common to Python Tasks\n*******************************************\n\nThis directory and its sub-directories contain code that\nis shared among the scheduled processes within the Fledge system.\n"
  },
  {
    "path": "python/fledge/tasks/common/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/tasks/purge/README.rst",
    "content": "**********************\nReadings Purge Process\n**********************\n\nThe scheduled task that purges data within the readings that are buffered in Fledge\n\n"
  },
  {
    "path": "python/fledge/tasks/purge/__init__.py",
    "content": "\n"
  },
  {
    "path": "python/fledge/tasks/purge/__main__.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Purge process starter\"\"\"\n\nimport asyncio\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.tasks.purge.purge import Purge\n\n\n__author__ = \"Terris Linenbach, Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nif __name__ == '__main__':\n    _logger = FLCoreLogger().get_logger(\"Purge\")\n    loop = asyncio.get_event_loop()\n    purge_process = Purge()\n    loop.run_until_complete(purge_process.run())\n"
  },
  {
    "path": "python/fledge/tasks/purge/purge.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\"\"\"\nPurge readings based on the age of the readings.\n\nReadings data that is older than the specified age will be removed from the readings store.\n\nConditions:\n    1. purge unsent - all readings older than the configured age | size, regardless of the minimum(last_object) of streams table will be removed.\n\n    2. retain unsent to any destination / retain unsent to all destinations\n        Allow the user the option to retain readings if they have not been sent to at least one destination or\n        retain readings if they have not been sent to all destinations.\n\n        retain unsent to any destination\n            Readings with an id value that is greater than the maximum(last_object) of streams table will not be removed.\n\n        retain unsent to all destinations\n            Readings with an id value that is greater than the minimum(last_object) of streams table will not be removed.\n\nStatistics reported by Purge process are:\n    -> Readings removed\n    -> Unsent readings removed\n    -> Readings retained (based on retainUnsent configuration)\n    -> Remaining readings\n    All these statistics are inserted into the log table\n\"\"\"\nimport time\n\nfrom datetime import datetime, timedelta, timezone\n\nfrom fledge.common import statistics\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.process import FledgeProcess\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.exceptions import *\n\n\n__author__ = \"Ori Shadmon, Vaibhav Singhal, Mark Riddoch, Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSI Soft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass Purge(FledgeProcess):\n\n    _DEFAULT_PURGE_CONFIG 
= {\n        \"age\": {\n            \"description\": \"Age of data to be retained (in hours). All data older than this value will be removed, \" +\n                           \"unless retained.\",\n            \"type\": \"integer\",\n            \"default\": \"72\",\n            \"displayName\": \"Age Of Data To Be Retained (In Hours)\",\n            \"order\": \"1\"\n        },\n        \"size\": {\n            \"description\": \"Maximum number of rows of data to be retained. Oldest data will be removed to keep \"\n                           \"below this row count, unless retained.\",\n            \"type\": \"integer\",\n            \"default\": \"1000000\",\n            \"displayName\": \"Max rows of data to retain\",\n            \"order\": \"2\"\n        },\n        \"retainUnsent\": {\n            \"description\": \"Retain data that has not been sent yet.\",\n            \"type\": \"enumeration\",\n            \"options\": [\"purge unsent\", \"retain unsent to any destination\", \"retain unsent to all destinations\"],\n            \"default\": \"purge unsent\",\n            \"displayName\": \"Retain Unsent Data\",\n            \"order\": \"3\"\n        },\n        \"retainStatsHistory\": {\n            \"description\": \"This is the measure of how long to retain statistics history data for and should be measured in days.\",\n            \"type\": \"integer\",\n            \"default\": \"30\",\n            \"displayName\": \"Retain Stats History Data (In Days)\",\n            \"order\": \"4\",\n            \"minimum\": \"1\"\n        },\n        \"retainAuditLog\": {\n            \"description\": \"This is the measure of how long to retain audit trail information for and should be measured in days.\",\n            \"type\": \"integer\",\n            \"default\": \"60\",\n            \"displayName\": \"Retain Audit Trail Data (In Days)\",\n            \"order\": \"5\",\n            \"minimum\": \"1\"\n        }\n    }\n    _CONFIG_CATEGORY_NAME = 'PURGE_READ'\n  
  _CONFIG_CATEGORY_DESCRIPTION = 'Purge the readings, log, statistics history table'\n\n    def __init__(self):\n        super().__init__()\n        self._logger = FLCoreLogger().get_logger(\"Data Purge\")\n        self._audit = AuditLogger(self._storage_async)\n\n    async def write_statistics(self, total_purged, unsent_purged):\n        stats = await statistics.create_statistics(self._storage_async)\n        await stats.update('PURGED', total_purged)\n        await stats.update('UNSNPURGED', unsent_purged)\n\n    async def set_configuration(self):\n        \"\"\"\" set the default configuration for purge\n        :return:\n            Configuration information that was set for purge process\n        \"\"\"\n        cfg_manager = ConfigurationManager(self._readings_storage_async)\n        await cfg_manager.create_category(self._CONFIG_CATEGORY_NAME,\n                                          self._DEFAULT_PURGE_CONFIG,\n                                          self._CONFIG_CATEGORY_DESCRIPTION, True, display_name=\"Purge\")\n\n        # Create the child category for purge\n        try:\n            await cfg_manager.create_child_category(\"Utilities\", [self._CONFIG_CATEGORY_NAME])\n        except KeyError:\n            self._logger.error(\"Failed to create child category for purge process\")\n            raise\n\n        return await cfg_manager.get_category_all_items(self._CONFIG_CATEGORY_NAME)\n\n    async def purge_data(self, config):\n        \"\"\"\" Purge readings table based on the set configuration\n        :return:\n            total rows removed\n            rows removed that were not sent to any historian\n        \"\"\"\n        total_rows_removed = 0\n        unsent_rows_removed = 0\n        unsent_retained = 0\n        duration = 0\n        method = None\n\n        start_time = datetime.now(timezone.utc).isoformat(' ')\n\n        if config['retainUnsent']['value'].lower() == \"purge unsent\":\n            flag = \"purge\"\n            
operation_type = \"min\"\n\n        elif config['retainUnsent']['value'].lower() == \"retain unsent to any destination\":\n            flag = \"retainany\"\n            operation_type = \"max\"\n\n        else:\n            flag = \"retainall\"\n            operation_type = \"min\"\n\n        # Identifies the row id to use\n        north_instance = []\n        north_list = []\n        payload_north_streams = PayloadBuilder().SELECT(\"description\").WHERE(['active', '=', 't']).payload()\n        north_streams = await self._storage_async.query_tbl_with_payload(\"streams\", payload_north_streams)\n        for item in north_streams[\"rows\"]:\n            if \"description\" in item:\n                north_list.append(item[\"description\"])\n\n        self._logger.debug(\"purge_data - north configured :{}: north active :{}: \".format(north_streams, north_list))\n\n        if north_list:\n            payload_sched = PayloadBuilder().SELECT(\"schedule_name\").WHERE(\n                ['schedule_name', 'in', north_list]).AND_WHERE(['enabled', '=', 't']).payload()\n            north_instance = await self._storage_async.query_tbl_with_payload(\"schedules\", payload_sched)\n\n            north_list = []\n            for item in north_instance[\"rows\"]:\n                if \"schedule_name\" in item:\n                    north_list.append(item[\"schedule_name\"])\n\n        self._logger.debug(\"purge_data - north schedules - schedules - :{}: north enabled :{}: \".format(north_instance,\n                                                                                                        north_list))\n\n        if north_list:\n            payload = PayloadBuilder().AGGREGATE([operation_type, \"last_object\"]).\\\n                        WHERE(['description', 'in', north_list])\\\n                        .payload()\n        else:\n            payload = PayloadBuilder().AGGREGATE([operation_type, \"last_object\"]).payload()\n\n        result = await 
self._storage_async.query_tbl_with_payload(\"streams\", payload)\n\n        if operation_type == \"min\":\n            last_object = result[\"rows\"][0][\"min_last_object\"]\n        else:\n            last_object = result[\"rows\"][0][\"max_last_object\"]\n\n        self._logger.debug(\"purge_data - last_object :{}: \".format(last_object))\n\n        if result[\"count\"] == 1:\n            # FIXME: Remove below check when fix from storage layer\n            # Below check is required as If no streams entry exists in DB storage layer returns response as below:\n            # {'rows': [{'min_last_object': ''}], 'count': 1}\n            # BTW it should return integer i.e 0 not in string\n            last_id = 0 if last_object == '' else last_object\n        else:\n            last_id = 0\n\n        self._logger.debug(\"purge_data - flag :{}: last_id :{}: count :{}: operation_type :{}:\".format(\n            flag, last_id, result[\"count\"], operation_type))\n\n        # Do the purge by rows first as it is cheaper than doing the purge by age and\n        # may result in less rows for purge by age to operate on.\n        try:\n            if int(config['size']['value']) != 0:\n                result = await self._readings_storage_async.purge(size=config['size']['value'], sent_id=last_id,\n                                                                  flag=flag)\n                if result is not None:\n                    total_rows_removed = result['removed']\n                    unsent_rows_removed = result['unsentPurged']\n                    unsent_retained = result['unsentRetained']\n                    duration = result['duration']\n                    if method is None:\n                        method = result['method']\n                    else:\n                        method += \" and \"\n                        method += result['method']\n        except ValueError:\n            self._logger.error(\"purge_data - Configuration item size {} should be 
integer!\".format(\n                config['size']['value']))\n        except StorageServerError:\n            # skip logging as its already done in details for this operation in case of error\n            # FIXME: check if ex.error jdoc has retryable True then retry the operation else move on\n            pass\n        try:\n            if int(config['age']['value']) != 0:\n                result = await self._readings_storage_async.purge(age=config['age']['value'], sent_id=last_id,\n                                                                  flag=flag)\n                if result is not None:\n                    total_rows_removed += result['removed']\n                    unsent_rows_removed += result['unsentPurged']\n                    unsent_retained = result['unsentRetained']\n                    duration += result['duration']\n                    method = result['method']\n        except ValueError:\n            self._logger.error(\"purge_data - Configuration item age {} should be integer!\".format(\n                config['age']['value']))\n        except StorageServerError:\n            # skip logging as its already done in details for this operation in case of error\n            # FIXME: check if ex.error jdoc has retryable True then retry the operation else move on\n            pass\n        end_time = datetime.now(timezone.utc).isoformat(' ')\n\n        if total_rows_removed > 0:\n            \"\"\" Only write an audit log entry when rows are removed \"\"\"\n            await self._audit.information('PURGE', {\"start_time\": start_time,\n                                                    \"end_time\": end_time,\n                                                    \"rowsRemoved\": total_rows_removed,\n                                                    \"unsentRowsRemoved\": unsent_rows_removed,\n                                                    \"rowsRetained\": unsent_retained,\n                                                    \"duration\": 
duration,\n                                                    \"method\": method\n                                                    })\n        else:\n            self._logger.info(\"No rows purged\")\n\n        return total_rows_removed, unsent_rows_removed\n\n    async def purge_stats_history(self, config):\n        \"\"\" Purge the statistics history table based on the age defined in the retainStatsHistory config item\n        \"\"\"\n        ts = datetime.now() - timedelta(days=int(config['retainStatsHistory']['value']))\n        payload = PayloadBuilder().WHERE(['history_ts', '<=', str(ts)]).payload()\n        await self._storage_async.delete_from_tbl(\"statistics_history\", payload)\n\n    async def purge_audit_trail_log(self, config):\n        \"\"\" Purge the log table based on the age defined in the retainAuditLog config item\n        \"\"\"\n        ts = datetime.now() - timedelta(days=int(config['retainAuditLog']['value']))\n        payload = PayloadBuilder().WHERE(['ts', '<=', str(ts)]).payload()\n        await self._storage_async.delete_from_tbl(\"log\", payload)\n\n    async def run(self):\n        \"\"\" Starts the purge task\n\n            1. Write and read Purge task configuration\n            2. Purge as per the configuration\n            3. Collect statistics\n            4. Write statistics to statistics table\n        \"\"\"\n        try:\n            config = await self.set_configuration()\n            if self.is_dry_run():\n                return\n            total_purged, unsent_purged = await self.purge_data(config)\n            await self.write_statistics(total_purged, unsent_purged)\n            await self.purge_stats_history(config)\n            await self.purge_audit_trail_log(config)\n        except Exception as ex:\n            self._logger.exception(ex)\n"
  },
  {
    "path": "python/fledge/tasks/statistics/README.rst",
    "content": "**************************\nFledge Statistics Process\n**************************\n\nCode responsible for creating time based statistics data for Fledge. This\ntakes the cumulative statistics and creates time bucketed copies in a\ndifferent table to allow statistical trends to be materialised.\n"
  },
  {
    "path": "python/fledge/tasks/statistics/__init__.py",
    "content": ""
  },
  {
    "path": "python/fledge/tasks/statistics/__main__.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Statistics history process starter\"\"\"\n\nimport asyncio\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.tasks.statistics.statistics_history import StatisticsHistory\n\n\n__author__ = \"Terris Linenbach, Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nif __name__ == '__main__':\n    _logger = FLCoreLogger().get_logger(\"StatisticsHistory\")\n    statistics_history_process = StatisticsHistory()\n    loop = asyncio.get_event_loop()\n    loop.run_until_complete(statistics_history_process.run())\n"
  },
  {
    "path": "python/fledge/tasks/statistics/statistics_history.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Statistics history task\n\nFetch information from the statistics table, compute the delta and\nstore the delta value (statistics.value - statistics.previous_value) in the statistics_history table\n\"\"\"\nimport json\n\nfrom fledge.common import utils as common_utils\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.process import FledgeProcess\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\n\n__author__ = \"Ori Shadmon, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass StatisticsHistory(FledgeProcess):\n\n    _logger = None\n\n    def __init__(self):\n        super().__init__()\n        self._logger = FLCoreLogger().get_logger(\"StatisticsHistory\")\n\n    async def _bulk_update_previous_value(self, payload):\n        \"\"\" UPDATE the previous_value column to match the value captured in the snapshot\n\n        Query:\n            UPDATE statistics SET previous_value = value WHERE key = key\n        Args:\n           payload: dict containing statistics keys and previous values\n        \"\"\"\n        await self._storage_async.update_tbl(\"statistics\", json.dumps(payload, sort_keys=False))\n\n    async def run(self):\n        \"\"\" SELECT against the statistics table, to get a snapshot of the data at that moment.\n\n        Based on the snapshot:\n            1. INSERT the delta between `value` and `previous_value` into statistics_history\n            2. 
UPDATE the previous_value in statistics table to be equal to statistics.value at snapshot \n        \"\"\"\n        if self.is_dry_run():\n            return\n        current_time = common_utils.local_timestamp()\n        results = await self._storage_async.query_tbl(\"statistics\")\n        # Bulk updates payload\n        payload = {\"updates\": []}\n        # Bulk inserts payload\n        insert_payload = {\"inserts\": []}\n        for r in results['rows']:\n            key = r['key']\n            value = int(r[\"value\"])\n            previous_value = int(r[\"previous_value\"])\n            delta = value - previous_value\n            payload_item = PayloadBuilder().SET(previous_value=value).WHERE([\"key\", \"=\", key]).payload()\n            # Add element to bulk updates\n            payload['updates'].append(json.loads(payload_item))\n            # Add element to bulk inserts\n            insert_payload['inserts'].append({'key': key, 'value': delta, 'history_ts': current_time})\n        # Bulk inserts\n        await self._storage_async.insert_into_tbl(\"statistics_history\", json.dumps(insert_payload))\n        # Bulk updates\n        await self._bulk_update_previous_value(payload)\n"
  },
  {
    "path": "python/requirements-dev.txt",
    "content": "-r requirements.txt\n-r requirements-test.txt\n"
  },
  {
    "path": "python/requirements-test.txt",
    "content": "pylint==2.13.9\npytest==7.0.1\npytest-asyncio==0.16.0\npytest-mock==1.10.3\npytest-cov==2.9.0\npytest-aiohttp==0.3.0\npytz==2024.1\n\n# Common - REST interface\nrequests==2.20.0\n\n# For RTU serial test\npyserial==3.4\n\n# keep this in sync with requirements.txt, as otherwise pytest-aiohttp always pulls the latest\naiohttp==3.8.6;python_version<\"3.12\"\naiohttp==3.10.11;python_version>=\"3.12\"\nyarl==1.7.2;python_version<=\"3.10\"\nyarl==1.9.4;python_version>=\"3.11\" and python_version<\"3.12\"\nyarl==1.18.3;python_version>=\"3.12\"\n\n# specific version for setuptools as per pytest version\nsetuptools==70.3.0;python_version>=\"3.8\"\n\n"
  },
  {
    "path": "python/requirements.txt",
    "content": "# Common - REST interface\naiohttp==3.8.6;python_version<\"3.12\"\naiohttp==3.10.11;python_version>=\"3.12\"\naiohttp_cors==0.7.0\nasync-timeout==5.0.1;python_version>=\"3.12\"\ncchardet==2.1.4;python_version<\"3.9\"\ncchardet==2.1.7; python_version>=\"3.9\" and python_version<\"3.11\"\ncchardet==2.2.0a2; python_version>=\"3.11\"\nyarl==1.7.2;python_version<=\"3.10\"\nyarl==1.9.4;python_version>=\"3.11\" and python_version<\"3.12\"\nyarl==1.18.3;python_version>=\"3.12\"\npyjwt==2.4.0\n\n# only required for Public Proxy multipart payload\nrequests-toolbelt==1.0.0\n\n# Fledge discovery\nzeroconf==0.27.0\n"
  },
  {
    "path": "python/setup.py",
    "content": "from setuptools import setup, find_packages\n\nsetup(\n    name='Fledge',\n    python_requires='>=3.8.10',\n    version='3.1.0',\n    description='Fledge, the open source platform for the Internet of Things',\n    url='https://github.com/fledge-iot/fledge',\n    author='OSIsoft, LLC; Dianomic Systems Inc.',\n    author_email='info@dianomic.com',\n    license='Apache 2.0',\n    # TODO: list of excludes (tests)\n    packages=find_packages(),\n    entry_points={\n        'console_scripts': [],\n    },\n    zip_safe=False\n)\n"
  },
  {
    "path": "python/thirdparty/README.rst",
    "content": "Third party Python code\n=======================\n\nThis is the directory tree into which any third-party Python code will be\nadded to the repository. Each third-party element will be added in\nits own directory and will clearly show the origin of the code and preserve\nthe copyright and licensing of that code.\n"
  },
  {
    "path": "requirements.sh",
    "content": "#!/usr/bin/env bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2019 Dianomic Systems\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n##\n## Author: Ashish Jabble, Massimiliano Pinto, Vaibhav Singhal\n##\n\nset -e\n\n# Upgrades curl to the version related to Fledge\ncurl_upgrade(){\n\n\n    if [ -d \"${curl_tmp_path}\" ]; then\n        rm -rf \"${curl_tmp_path}\"\n    fi\n\n    echo \"Pulling curl from the Fledge curl repository ...\"\n    cd /tmp/\n\n    curl -s -L -O \"${curl_url}\" && \\\n    unzip -q \"${curl_filename}\"\n\n    cd \"${curl_tmp_path}\"\n\n    # curl in RHEL/CentOS is installed in /bin/curl\n    # but curl installs by default in /usr/local,\n    # so we select the proper target directories\n    echo \"Building curl ...\"\n    ./buildconf && \\\n    ./configure --with-ssl --with-gssapi  --includedir=/usr/include --libdir=/usr/lib64 --bindir=/usr/bin && \\\n    make && \\\n    make install\n}\n\n# Check if the curl version related to Fledge has been installed\ncurl_version_check () {\n\n    set +e\n\n    curl_version=$(curl -V | head -n 1)\n    curl_version_check=$(echo \"${curl_version}\" | grep -c \"${curl_fledge_version}\")\n\n    if (( $curl_version_check >= 1 )); then\n        echo \"curl version ${curl_fledge_version} installed.\"\n    else\n        echo \"WARNING: curl version 
${curl_fledge_version} not installed, current version :${curl_version}:\"\n    fi\n\n    set -e\n}\n\n# Evaluates the current version of curl and upgrades it if needed\ncurl_upgrade_evaluates(){\n\n    set +e\n    curl_version=$(curl -V | head -n 1)\n    curl_version_check=$(echo \"${curl_version}\" | grep -c \"${curl_rhel_version}\")\n    set -e\n\n    # Evaluates if the curl is the default one and so it needs to be upgraded\n    if (( $curl_version_check >= 1 )); then\n\n        echo \"curl version ${curl_rhel_version} detected, the standard RHEL/CentOS, upgrading to ${curl_fledge_version}\"\n        curl_upgrade\n\n        curl_version_check\n    else\n        echo \"A curl version different from the default ${curl_rhel_version} detected, upgrade to a newer one if Fledge make fails.\"\n        echo \"version detected :${curl_version}:\"\n\n        # Evaluates if the installed version supports Kerberos and GSS-API\n        curl_kerberos=$(curl -V | grep -ic \"Kerberos\")\n        curl_gssapi=$(curl -V | grep -ic \"GSS-API\")\n\n        if [[ $curl_kerberos == 0 || $curl_gssapi == 0 ]]; then\n\n            echo \"WARNING : the curl version detected doesn't support Kerberos.\"\n        fi\n    fi\n\n}\n\n# Builds sqlite3 from sources\nsqlite3_build_prepare(){\n\n\t# SQLite3 build - start\n\tSQLITE_PKG_REPO_NAME=\"sqlite3-pkg\"\n\tif [ -d /tmp/${SQLITE_PKG_REPO_NAME} ]; then\n\t\trm -rf /tmp/${SQLITE_PKG_REPO_NAME}\n\tfi\n\techo \"Pulling SQLite3 from Dianomic ${SQLITE_PKG_REPO_NAME} repository ...\"\n\tcd /tmp/\n\tgit clone https://github.com/dianomic/${SQLITE_PKG_REPO_NAME}.git ${SQLITE_PKG_REPO_NAME}\n\tcd ${SQLITE_PKG_REPO_NAME}\n\tcd src\n\techo \"Compiling SQLite3 static library for Fledge ...\"\n\t./configure --enable-shared=false --enable-static=true --enable-static-shell CFLAGS=\"-DSQLITE_MAX_COMPOUND_SELECT=900 -DSQLITE_MAX_ATTACHED=62 -DSQLITE_ENABLE_JSON1 -DSQLITE_ENABLE_LOAD_EXTENSION -DSQLITE_ENABLE_COLUMN_METADATA -fno-common -fPIC\"\n\tautoreconf -f 
-i\n}\n\n\nget_pip_break_system_flag() {\n    # Get Python version from python3 --version and parse it\n    PYTHON_VERSION=$(python3 --version 2>&1 | awk '{print $2}')\n    PYTHON_MAJOR=$(echo \"$PYTHON_VERSION\" | cut -d. -f1)\n    PYTHON_MINOR=$(echo \"$PYTHON_VERSION\" | cut -d. -f2)\n\n    # Set the FLAG only for Python versions 3.11 or higher\n    if [ \"$PYTHON_MAJOR\" -eq 3 ] && [ \"$PYTHON_MINOR\" -ge 11 ] && [ \"$PYTHON_MINOR\" -lt 12 ]; then\n        FLAG=\"--break-system-packages\"\n    elif [ \"$PYTHON_MAJOR\" -eq 3 ] && [ \"$PYTHON_MINOR\" -ge 12 ]; then\n        FLAG=\"--user --break-system-packages\"\n    else\n        # Default to empty flag\n        FLAG=\"\"\n    fi\n    # Return the FLAG (via echo)\n    echo \"$FLAG\"\n}\n\n# Variables for curl upgrade, if needed\ncurl_filename=\"curl-7.65.3\"\ncurl_url=\"https://github.com/curl/curl/releases/download/curl-7_65_3/${curl_filename}.zip\"\ncurl_tmp_path=\"/tmp/${curl_filename}\"\ncurl_fledge_version=\"7.65.3\"\ncurl_rhel_version=\"7.29\"\n\nfledge_location=$(pwd)\nos_name=$(grep -o '^NAME=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')\nos_version=$(grep -o '^VERSION_ID=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')\necho \"Platform is ${os_name}, Version: ${os_version}\"\n\nUSE_SCL=false\nYUM_PLATFORM=false\n\nif [[ $os_name == *\"Red Hat\"* || $os_name == *\"CentOS\"* ]]; then\n\tYUM_PLATFORM=true\n\tif [[ $os_version == *\"7\"* ]]; then\n\t\tUSE_SCL=true\n\tfi\nfi\n\nif [[ $YUM_PLATFORM = true ]]; then\n\techo YUM Platform\n\tif [[ $os_name == *\"Red Hat\"* ]]; then\n\t\tyum-config-manager --enable 'Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server from RHUI'\n\t\tyum install -y @development\n\telse\n\t\tyum groupinstall \"Development tools\" -y\n\t\tif [[ $USE_SCL  = true ]]; then\n\t\t\techo Use SCL $USE_SCL\n\t\t\tyum install -y centos-release-scl\n\t\tfi\n\tfi\n\tyum install -y boost-devel\n\tyum install -y glib2-devel\n\tyum install -y rsyslog\n\tyum 
install -y openssl-devel\n\tif [[ $os_version == *\"7\"* ]]; then\n\t\tyum install -y rh-python36\n\t\tyum install -y rh-postgresql13\n\t\tyum install -y rh-postgresql13-postgresql-devel\n\telse\n\t\tyum install -y python3\n\t\tyum install -y python3-devel\n\t\tyum install -y postgresql\n\t\tyum install -y postgresql-devel\n\tfi\n\n\t# Numpy libraries\t\n\tyum install -y numpy\n\n\tyum install -y wget\n\tyum install -y zlib-devel\n\tyum install -y git\n\tyum install -y libuuid-devel\n\t# for Kerberos authentication\n\tyum install -y krb5-workstation\n\tyum install -y curl-devel\n\t# for Cmake3 installation\n\tif [[ $os_name == *\"CentOS\"* ]]; then\n\t\tyum install -y epel-release\n\telif [[ $os_name == *\"Red Hat\"* ]]; then\n\t\tset +e\n\t\trpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm\n\t\tset -e\n\tfi\n\tyum update -y\n\tyum install -y cmake3\n\t# create symlink so that cmake points to cmake3\n\tset +e\n\t# Make a copy of existing cmake\n\tcp -p /usr/bin/cmake /usr/bin/cmake.old\n\t# Use -f to remove existing cmake binary file\n\tln -f -s /usr/bin/cmake3 /usr/bin/cmake\n\tset -e\n\n\tif [[ $USE_SCL = true ]]; then\n\t\techo \"source scl_source enable rh-python36\" >> /home/${SUDO_USER}/.bashrc\n\tfi\n\tservice rsyslog start\n\n\tsqlite3_build_prepare\n\t# Attempts a second execution of make if the first fails\n\tset +e\n\tmake\n\texit_code=$?\n\tif [[ $exit_code != 0 ]]; then\n\n\t\tset -e\n\t\tmake\n\t\tmake install\n\telse\n\t    make install\n\tfi\n\tcd $fledge_location\n\tset -e\n\t\n\t# Upgrade curl if needed\n\tcurl_upgrade_evaluates\n\n\tcd $fledge_location\n\n\tif [[ $USE_SCL = true ]]; then\n\t\t# To avoid to stop the execution for any internal error of scl_source\n\t\tset +e\n\t\tsource scl_source enable rh-python36\n\t\tBREAK_PKG_FLAG=$(get_pip_break_system_flag)\n\t\tpython3 -m pip install --upgrade pip ${BREAK_PKG_FLAG:+$BREAK_PKG_FLAG}\n\t\tpython3 -m pip install numpy ${BREAK_PKG_FLAG:+$BREAK_PKG_FLAG}\n\t\tset 
-e\n\tfi\n\n\t#\n\t# A gcc version newer than 4.9.0 is needed to properly use <regex>\n\t# the installation of these packages will not overwrite the previous compiler\n\t# the new one will be available using the command 'source scl_source enable devtoolset-7'\n\t# the previous gcc will be enabled again after a log-off/log-in.\n\t#\n\tyum install -y yum-utils\n\tif [[ $USE_SCL = true ]]; then\n\t\tyum-config-manager --enable rhel-server-rhscl-7-rpms\n\t\tyum install -y devtoolset-7\n\n\t\t# To avoid to stop the execution for any internal error of scl_source\n\t\tset +e\n\t\tsource scl_source enable devtoolset-7\n\t\tset -e\n\tfi\nelif apt --version 2>/dev/null; then\n\t# avoid interactive questions\n\tDEBIAN_FRONTEND=noninteractive apt install -yq libssl-dev\n\n\tapt install -y avahi-daemon ca-certificates curl\n\tapt install -y cmake g++ make build-essential autoconf automake uuid-dev\n\tapt install -y libtool libboost-dev libboost-system-dev libboost-thread-dev libpq-dev libz-dev\n\tPYTHON_DEV_PKG=\"python-dev\"\n\tif apt search python-dev-is-python3 | grep -i \"^python-dev-is-python3\" > /dev/null; then\n\t  PYTHON_DEV_PKG=\"python-dev-is-python3\"\n\tfi\n\tapt install -y ${PYTHON_DEV_PKG} python3-dev python3-pip python3-numpy\n\tBREAK_PKG_FLAG=$(get_pip_break_system_flag)\n\tpython3 -m pip install --upgrade pip ${BREAK_PKG_FLAG:+$BREAK_PKG_FLAG}\n\n\tsqlite3_build_prepare\n\tmake\n\tmake install\n        \n\tapt install -y sqlite3 # make install after sqlite3_build_prepare should be enough to install sqlite3 as a command \n\tapt install -y pkg-config\n\n\t# for Kerberos authentication, avoid interactive questions\n\tDEBIAN_FRONTEND=noninteractive apt install -yq krb5-user\n\tDEBIAN_FRONTEND=noninteractive apt install -yq libcurl4-openssl-dev\n\n\tapt install -y cpulimit \n\t\nelse\n\techo \"Requirements cannot be automatically installed, please refer README.rst to install requirements manually.\"\nfi\n"
  },
  {
    "path": "scripts/__template__.sh",
    "content": "#!/usr/bin/env bash\n\n############################################################\n# Run with --help for description.\n#\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n############################################################\n\n__author__=\"${FULL_NAME}\"\n__copyright__=\"Copyright (c) 2017 OSIsoft, LLC\"\n__license__=\"Apache 2.0\"\n__version__=\"${VERSION}\"\n\n############################################################\n# Sourcing?\n############################################################\nif [[ \"$0\" != \"$BASH_SOURCE\" ]]\nthen\n  # See https://stackoverflow.com/questions/2683279/how-to-detect-if-a-script-is-being-sourced/23009039#23009039\n  # This only works reliably with 'bash'. Other shells probably\n  # can not 'source' this script.\n  SOURCING=1\n  SCRIPT=${BASH_SOURCE[@]}\n  if [ \"$SCRIPT\" == \"\" ]\n  then\n    SCRIPT=$_\n  fi\nelse\n  SOURCING=0\n  SCRIPT=$0\nfi\n\n############################################################\n# Change the current directory to the directory where this\n# script is located\n############################################################\npushd `dirname \"$SCRIPT\"` > /dev/null\n\nSCRIPTNAME=$(basename \"$SCRIPT\")\nSCRIPT_AND_VERSION=\"$SCRIPTNAME $__version__\"\n\n############################################################\n# Usage text for this script\n############################################################\nUSAGE=\"$SCRIPT_AND_VERSION\n\nDESCRIPTION\n  This is a script template.\n\nOPTIONS\n  Multiple commands can be specified but they all must be\n  specified separately (-hv is not supported).\n\n  -h, --help      Display this help text\n  -v, --version   Display this script's version information\n\nEXIT STATUS\n  This script exits with status code 1 when errors occur (e.g., \n  tests fail) except when it is sourced.\n  \nEXAMPLES\n  1) $SCRIPTNAME --version\"\n\n############################################################\n# Execute the command specified in 
$OPTION\n############################################################\nexecute_command() {\n  if [ \"$OPTION\" == \"HELP\" ]\n  then\n    echo \"${USAGE}\"\n\n  elif [ \"$OPTION\" == \"VERSION\" ]\n  then\n    echo $SCRIPT_AND_VERSION\n\n    # Example: check last shell command for failure\n    if [ $? -gt 0 ] && [ $SOURCING -lt 1 ]\n    then\n      exit 1\n    fi\n\n  fi\n}\n\n############################################################\n# Process arguments\n############################################################\nif [ $# -gt 0 ]\nthen\n  for i in \"$@\"\n  do\n    case $i in\n      -h|--help)\n        OPTION=\"HELP\"\n      ;;\n\n      -v|--version)\n        OPTION=\"VERSION\"\n      ;;\n\n      *)\n        echo \"Unrecognized option: $i\"\n\n        if [ $SOURCING -lt 1 ]\n        then\n          exit 1\n        fi\n\n        break\n      ;;\n    esac\n\n    execute_command\n  done\nelse\n  echo \"${USAGE}\"\n\n  if [ $SOURCING -lt 1 ]\n  then\n    exit 1\n  fi\nfi\n\n############################################################\n# Unset all temporary variables created by this script\n# and revert to the previous current directory\n# when this script has been sourced\n############################################################\nif [ $SOURCING -gt 0 ]\nthen\n  popd > /dev/null\n\n  unset OPTION\n  unset SCRIPT\n  unset SCRIPTNAME\n  unset SCRIPT_AND_VERSION\n  unset SOURCING\n  unset USAGE\nfi\n\n"
  },
  {
    "path": "scripts/auth_certificates",
    "content": "#!/usr/bin/env bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2019 Dianomic Systems\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n\n# @date: 2019-03-07\n#\n# Bash Script to manage auth certificates\n# The first implementation is to create a new self-signed SSL Certificate\n#\n# USAGE: this command will ask for the certificate name and number in days it will expire\n# $ ./certificates ca ca 365\n#\n# Create a certificate key.\n# Create the certificate signing request (CSR) which contains details such as the domain name and address details.\n# Sign the certificate\n# Install the certificate and key in the application.\n\n#\n# USAGE\n# The script is meant to be called via make, i.e. it is not meant to be used interactively\n# Arguments:\n#   $1 - ca or user\n#   $2 - Certificate name\n#   $3 - Expiration days\n#   $4 - Organization\n#   $5 - State\n#   $6 - Country\n#   $7 - Email\n#   $8 - Key Size\n#\n# The script is executed from the project directory\n#\n\nset -e\n#set -x\n\n#\n# Includes\n#\n\n# logger script\n. 
scripts/common/write_log.sh\n\n\n# Logger wrapper\ncertificate_log() {\n  write_log \"Utilities\" \"script.make.certificate\" \"$1\" \"$2\" \"$3\" \"$4\" \"$5\"\n}\n\n#\n# Initial settings\nSSL_LOCATION=\"data/etc/certs\"\nAUTH_NAME=\"ca\"\n\n#\n# Initial checks\n#\n\n# Auth type - ca or user\nif [ -z ${1+x} ]; then\n  certificate_log \"err\" \"Invalid/Missing auth type - ca or user\" \"all\" \"pretty\"\n  exit 1\nelse\n  if [[ \"$1\" != \"ca\" ]]; then\n    if [[ \"$1\" != \"user\" ]]; then\n      certificate_log \"err\" \"Invalid/Missing auth type - ca or user\" \"all\" \"pretty\"\n      exit 1\n    else\n      AUTH_TYPE=\"$1\"\n    fi\n  else\n    AUTH_TYPE=\"$1\"\n  fi\nfi\n\n# Certificate Name\nif [ -z ${2+x} ]; then\n  certificate_log \"err\" \"Missing certificate name\" \"all\" \"pretty\"\n  exit 1\nelse\n  SSL_NAME=\"$2\"\nfi\n\n# Expiration days\nif [ -z ${3+x} ]; then\n  certificate_log \"err\" \"Missing expiration days\" \"all\" \"pretty\"\n  exit 1\nelse\n  SSL_EXPIRATION_DAYS=\"$3\"\nfi\n\n# Organization\nSSL_ORGANIZATION=\"${4:-OSIsoft}\"\n\n# State\nSSL_STATE=\"${5:-California}\"\n\n# Country\nSSL_COUNTRY=\"${6:-US}\"\n\n# Email\nSSL_EMAIL=\"${7:-fledge@googlegroups.com}\"\n\n# Key size\nif [ -z ${8+x} ]; then\n  if [[ \"$AUTH_TYPE\" = \"ca\" ]]; then\n    KEY_SIZE=\"4096\"\n  else\n    KEY_SIZE=\"2048\"\n  fi\nelse\n  KEY_SIZE=\"$8\"\nfi\n\n# OpenSSL command\nif ! [ -x \"$(command -v openssl)\" ]; then\n  certificate_log \"err\" \"Missing openssl command or package\" \"all\" \"pretty\"\n  exit 1\nfi\n\nif [ ! 
-d \"${SSL_LOCATION}\" ]; then\n  mkdir -p \"${SSL_LOCATION}\"\nfi\n\ncertificate_log \"info\"  \"Creating ${AUTH_TYPE} SSL certificate ...\" \"outonly\" \"pretty\"\n\nif [[ \"$AUTH_TYPE\" = \"ca\" ]]; then\n  # Add more info /C=$country/ST=$state/L=$locality/O=$organization/OU=$organizational_unit/CN=$common_name/emailAddress=$email\n  SUBJ=\"/C=${SSL_COUNTRY}/ST=${SSL_STATE}/O=${SSL_ORGANIZATION}/CN=${AUTH_NAME}/emailAddress=${SSL_EMAIL}\"\n\n  openssl genrsa -out \"${SSL_LOCATION}/${AUTH_NAME}.key\" \"${KEY_SIZE}\" 2> /dev/null\n  openssl req -new -x509 -days \"${SSL_EXPIRATION_DAYS}\" -key \"${SSL_LOCATION}/${AUTH_NAME}.key\" -out \"${SSL_LOCATION}/${AUTH_NAME}.cert\" -subj \"${SUBJ}\" 2> /dev/null\n\n  certificate_log \"info\" \"${AUTH_TYPE} certificate created successfully, and placed in ${SSL_LOCATION}\" \"outonly\" \"pretty\"\nelse\n  # Add more info /C=$country/ST=$state/L=$locality/O=$organization/OU=$organizational_unit/CN=$common_name/emailAddress=$email\n  SUBJ=\"/C=${SSL_COUNTRY}/ST=${SSL_STATE}/O=${SSL_ORGANIZATION}/CN=${SSL_NAME}/emailAddress=${SSL_EMAIL}\"\n\n  openssl genrsa -out \"${SSL_LOCATION}/${SSL_NAME}.key\" \"${KEY_SIZE}\" 2> /dev/null\n  openssl req -new -key \"${SSL_LOCATION}/${SSL_NAME}.key\" -subj \"${SUBJ}\" -out \"${SSL_LOCATION}/${SSL_NAME}.csr\" 2> /dev/null\n  openssl x509 -req -days \"${SSL_EXPIRATION_DAYS}\" -in \"${SSL_LOCATION}/${SSL_NAME}.csr\" -CA \"${SSL_LOCATION}/${AUTH_NAME}.cert\" -CAkey \"${SSL_LOCATION}/${AUTH_NAME}.key\" -set_serial 01 -out \"${SSL_LOCATION}/${SSL_NAME}.cert\" 2> /dev/null\n\n  # Check key and certificate files\n  if [ ! -f \"${SSL_LOCATION}/${SSL_NAME}.key\" ]; then\n    certificate_log \"err\" \"Could not create SSL certificate ${SSL_NAME} key at ${SSL_LOCATION}\" \"all\" \"pretty\"\n    exit 1\n  fi\n\n  if [ ! 
-f \"${SSL_LOCATION}/${SSL_NAME}.cert\" ]; then\n    certificate_log \"err\" \"Could not create SSL certificate ${SSL_NAME} at ${SSL_LOCATION}\" \"all\" \"pretty\"\n    exit 1\n  fi\n\n  certificate_log \"info\" \"${AUTH_TYPE} certificate created successfully for ${SSL_NAME}, and placed in ${SSL_LOCATION}\" \"outonly\" \"pretty\"\nfi\n\nexit $?\n"
  },
  {
    "path": "scripts/certificates",
    "content": "#!/usr/bin/env bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n\n# @date: 2018-01-02\n#\n# Bash Script to manage certificates\n# The first implementation is to create a new self-signed SSL Certificate\n#\n# USAGE: this command will ask for the certificate name and number in days it will expire\n# $ ./certificates fledge 365\n#\n# Create a certificate key.\n# Create the certificate signing request (CSR) which contains details such as the domain name and address details.\n# Sign the certificate\n# Install the certificate and key in the application.\n\n#\n# USAGE\n# The script is meant to be called via make, i.e. it is not meant to be used interactively\n# Arguments:\n#   $1 - Certificate name\n#   $2 - Expiration days\n#   $3 - Organization\n#   $4 - State\n#   $5 - Country\n#   $6 - Email\n#   $7 - Key Size\n#\n# The script is executed from the project directory\n#\n\nset -e\n#set -x\n\n#\n# Includes\n#\n\n# logger script\n. 
scripts/common/write_log.sh\n\n\n# Logger wrapper\ncertificate_log() {\n  write_log \"Utilities\" \"script.make.certificate\" \"$1\" \"$2\" \"$3\" \"$4\"\n}\n\n#\n# Initial settings\nSSL_LOCATION=\"data/etc/certs\"\n\n#\n# Initial checks\n#\n\n# Certificate Name\nif [ -z ${1+x} ]; then\n  certificate_log \"err\" \"Missing certificate name\" \"all\" \"pretty\"\n  exit 1\nelse\n  SSL_NAME=\"$1\"\nfi\n\n# Expiration days\nif [ -z ${2+x} ]; then\n  certificate_log \"err\" \"Missing expiration days\" \"all\" \"pretty\"\n  exit 1\nelse\n  SSL_EXPIRATION_DAYS=\"$2\"\nfi\n\n# Organization\nSSL_ORGANIZATION=\"${3:-OSIsoft}\"\n\n# State\nSSL_STATE=\"${4:-California}\"\n\n# Country\nSSL_COUNTRY=\"${5:-US}\"\n\n# Email\nSSL_EMAIL=\"${6:-fledge@googlegroups.com}\"\n\n# Key Size\nKEY_SIZE=\"${7:-2048}\"\n\n# OpenSSL command\nif ! [ -x \"$(command -v openssl)\" ]; then\n  certificate_log \"err\" \"Missing openssl command or package\" \"all\" \"pretty\"\n  exit 1\nfi\n\nif [ ! -d \"${SSL_LOCATION}\" ]; then\n  mkdir -p \"${SSL_LOCATION}\"\nfi\n\ncertificate_log \"info\"  \"Creating a self signed SSL certificate ...\" \"outonly\" \"pretty\"\n\n# Add more info /C=$country/ST=$state/L=$locality/O=$organization/OU=$organizational_unit/CN=$common_name/emailAddress=$email\nSUBJ=\"/C=${SSL_COUNTRY}/ST=${SSL_STATE}/O=${SSL_ORGANIZATION}/CN=${SSL_NAME}/emailAddress=${SSL_EMAIL}\"\n\nopenssl genpkey -out \"${SSL_LOCATION}/fledge.pass.key\" -pass pass:fledge -algorithm RSA -pkeyopt rsa_keygen_bits:${KEY_SIZE} 2> /dev/null\nopenssl rsa -passin pass:fledge -in \"${SSL_LOCATION}/fledge.pass.key\" -out \"${SSL_LOCATION}/${SSL_NAME}.key\" 2> /dev/null\nrm \"${SSL_LOCATION}/fledge.pass.key\"\nopenssl req -new -key \"${SSL_LOCATION}/${SSL_NAME}.key\" -subj \"${SUBJ}\" -out \"${SSL_LOCATION}/${SSL_NAME}.csr\" 2> /dev/null\nopenssl x509 -req -sha256 -days \"${SSL_EXPIRATION_DAYS}\" -in \"${SSL_LOCATION}/${SSL_NAME}.csr\" -signkey \"${SSL_LOCATION}/${SSL_NAME}.key\" -out 
\"${SSL_LOCATION}/${SSL_NAME}.cert\" 2> /dev/null\n\n# Check key and certificate files\nif [ ! -f \"${SSL_LOCATION}/${SSL_NAME}.key\" ]; then\n  certificate_log \"err\" \"Could not create SSL certificate ${SSL_NAME} key at ${SSL_LOCATION}\" \"all\" \"pretty\"\n  exit 1\nfi\n\nif [ ! -f \"${SSL_LOCATION}/${SSL_NAME}.cert\" ]; then\n  certificate_log \"err\" \"Could not create SSL certificate ${SSL_NAME} at ${SSL_LOCATION}\" \"all\" \"pretty\"\n  exit 1\nfi\n\ncertificate_log \"info\" \"Certificates created successfully and placed in ${SSL_LOCATION}\" \"outonly\" \"pretty\"\n\nexit 0\n"
  },
  {
    "path": "scripts/common/README.rst",
    "content": "**************\nCommon Scripts\n**************\n\nThis directory contains those scripts that are not related to any\nindividual microservice within Fledge.\n"
  },
  {
    "path": "scripts/common/audittime.py",
    "content": "\"\"\"\nExtract the audit entries whose timestamp starts with\nthe timestamp string passed in as an argument. The output is\nthe details/name tag of each matching audit entry.\n\nThe script takes as input the output of the GET /fledge/audit API call\n\"\"\"\n\nimport json\nimport sys\n\nif __name__ == '__main__':\n    data = json.loads(sys.stdin.readline())\n    ts = sys.argv[1]\n    for audit in data[\"audit\"]:\n        if audit[\"timestamp\"].startswith(ts):\n            print(audit[\"details\"][\"name\"])\n"
  },
  {
    "path": "scripts/common/check_schema_update.sh",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n__author__=\"Massimiliano Pinto\"\n__version__=\"1.0\"\n\nCURRENT_VERSION_FILE=$1\nTHIS_VERSION_FILE=$2\n\n# Get installation path as dirname of CURRENT_VERSION_FILE var\nINSTALLED_DIR=$(dirname ${CURRENT_VERSION_FILE})\n# Get new version path as dirname of THIS_VERSION_FILE var\nNEW_VERSION_DIR=$(dirname ${THIS_VERSION_FILE})\n\n# Include common code\nif [ ! 
\"${FLEDGE_ROOT}\" ]; then\n    # Use source path\n    FILE_TO_SOURCE=\"scripts/common/get_storage_plugin.sh\"\nelse\n    FILE_TO_SOURCE=\"${FLEDGE_ROOT}/scripts/common/get_storage_plugin.sh\"\nfi\n\nsource ${FILE_TO_SOURCE} 2> /dev/null\nRET_CODE=$?\nif [ \"${RET_CODE}\" -ne 0 ]; then\n    echo \"Error: Missing get_storage_plugin.sh\"\n    exit 1\nfi\n\n#\n# Check required upgrade files\n#\n# param_1    the storage plugin name\n# param_2    current installed Fledge DB schema\n# param_3    new DB schema version\n# param_4    the new version path\n# param_5    the installed Fledge path\n# return     0 on success, 1 on failure\n\ncheck_upgrade() {\n    PLUGIN_NAME=$1\n    FLEDGE_DB_VERSION=$2\n    NEW_VERSION=$3\n    NEW_VERSION_PATH=$4\n    INSTALLATION_PATH=$5\n    UPDATE_SCRIPTS_DIR=\"${NEW_VERSION_PATH}/scripts/plugins/storage/${PLUGIN_NAME}/upgrade\"\n    # Start from the next schema revision\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} + 1` || return 1\n    while [ \"${CHECK_VER}\" -le ${NEW_VERSION} ]\n    do\n        UPGRADE_SCRIPT=\"${UPDATE_SCRIPTS_DIR}/${CHECK_VER}.sql\"\n        if [ ! -e \"${UPGRADE_SCRIPT}\" ]; then\n            echo \"Error in schema Upgrade: cannot find file ${UPGRADE_SCRIPT} \"\\\n\"required for DB Upgrade of the existing installation from schema \"\\\n\"[${FLEDGE_DB_VERSION}] to [${NEW_VERSION}] in [${INSTALLATION_PATH}]. Exiting\"\n            return 1\n        fi\n        CHECK_VER=`expr $CHECK_VER + 1`\n    done\n    echo \"DB schema upgrade from ${FLEDGE_DB_VERSION} to ${NEW_VERSION} can be done. 
Ok\"\n    return 0\n}\n\n#\n# Check required downgrade files\n#\n# param_1    the storage plugin name\n# param_2    current installed Fledge DB schema\n# param_3    new DB schema version\n# param_4    the new version path\n# param_5    the installed Fledge path\n# return     0 on success, 1 on failure\n\ncheck_downgrade() {\n    PLUGIN_NAME=$1\n    FLEDGE_DB_VERSION=$2\n    NEW_VERSION=$3\n    NEW_VERSION_PATH=$4\n    INSTALLATION_PATH=$5\n    DOWNGRADE_SCRIPTS_DIR=\"${NEW_VERSION_PATH}/scripts/plugins/storage/${PLUGIN_NAME}/downgrade\"\n    # Start from the previous schema revision\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} - 1`\n    while [ \"${CHECK_VER}\" -ge ${NEW_VERSION} ]\n    do\n        DOWNGRADE_SCRIPT=\"${DOWNGRADE_SCRIPTS_DIR}/${CHECK_VER}.sql\"\n        if [ ! -e \"${DOWNGRADE_SCRIPT}\" ]; then\n            echo \"Error in schema Downgrade: cannot find file ${DOWNGRADE_SCRIPT} \"\\\n\"required for DB Downgrade of the existing installation from schema \"\\\n\"[${FLEDGE_DB_VERSION}] to [${NEW_VERSION}] in [${INSTALLATION_PATH}]. Exiting\"\n            return 1\n        fi\n        CHECK_VER=`expr $CHECK_VER - 1`\n    done\n    echo \"DB schema downgrade from ${FLEDGE_DB_VERSION} to ${NEW_VERSION} can be done. Ok\"\n    return 0\n}\n\n# Set current installation path\nINSTALLED_FLEDGE=${INSTALLED_DIR}\n\n# Check whether the installed VERSION file exists and is valid\nCURRENT_FLEDGE_VERSION_FILE=${CURRENT_VERSION_FILE}\n\n# Skip the DB schema check if the INSTALLED_FLEDGE path doesn't exist\nif [ ! -d \"${INSTALLED_FLEDGE}\" ]; then\n    echo \"Info: Fledge is not installed in [${INSTALLED_FLEDGE}]. Skipping DB schema check\"\n    exit 0\nelse\n    DIR_EMPTY=`ls -1A ${INSTALLED_FLEDGE} | wc -l`\n    if [ \"${DIR_EMPTY}\" -eq 0 ]; then\n        echo \"Info: ${INSTALLED_FLEDGE} is empty right now. Skipping DB schema check.\"\n        exit 0\n    fi\n    if [ ! 
-f \"${CURRENT_FLEDGE_VERSION_FILE}\" ]; then\n        # Set WARNING if VERSION file is not present in install path\n        echo \"Warning: Fledge version file [${CURRENT_FLEDGE_VERSION_FILE}] \"\\\n             \"not found in ${INSTALLED_FLEDGE}. It may be an old Fledge setup. Skipping DB schema check.\"\n        exit 0\n    fi\n    if [ ! -s \"${CURRENT_FLEDGE_VERSION_FILE}\" ]; then\n        # Abort if VERSION file is empty\n        echo \"Error: Fledge version file [${CURRENT_FLEDGE_VERSION_FILE}] is empty. \"\\\n             \"DB schema check cannot be performed. Exiting.\"\n        exit 1\n    fi\nfi\n\n# Check whether the VERSION file in the new installation path exists and is valid\nFLEDGE_VERSION_FILE=${THIS_VERSION_FILE}\nif [ ! -f \"${FLEDGE_VERSION_FILE}\" ]; then\n    # Abort on missing VERSION file\n    echo \"Error: Fledge version file [${FLEDGE_VERSION_FILE}] not found \"\\\n         \"in the new installation path. Exiting\"\n    exit 1\nelse\n    if [ ! -s \"${FLEDGE_VERSION_FILE}\" ]; then\n        # Abort on empty VERSION file\n        echo \"Error: Fledge version file [${FLEDGE_VERSION_FILE}] in source tree is empty. \"\\\n             \"DB schema check cannot be performed. Exiting.\"\n        exit 1\n    fi\nfi\n\nPLUGIN_TO_USE=`get_storage_plugin`\nRET_CODE=$?\nif [ \"${RET_CODE}\" -ne 0 ]; then\n    echo \"Error: get_storage_plugin call failed.\"\n    exit 1\nfi\n\n###\n# Check for required files done\n# Now getting Fledge version and DB schema version\n#\n\n###\n# - 1 - Get Fledge version from installed VERSION file\n# abort on missing variables\n#\nCURRENT_FLEDGE_VERSION=`cat ${CURRENT_FLEDGE_VERSION_FILE} | tr -d ' ' | grep -i \"FLEDGE_VERSION=\" | sed -e 's/\\(.*\\)=\\(.*\\)/\\2/g'`\nCURRENT_FLEDGE_SCHEMA=`cat ${CURRENT_FLEDGE_VERSION_FILE} | tr -d ' ' | grep -i \"FLEDGE_SCHEMA=\" | sed -e 's/\\(.*\\)=\\(.*\\)/\\2/g'`\nif [ ! 
\"${CURRENT_FLEDGE_VERSION}\" ]; then\n    echo \"Error: FLEDGE_VERSION is not set, check [${CURRENT_FLEDGE_VERSION_FILE}] file. Exiting.\"\n    exit 1\nfi\nif [ ! \"${CURRENT_FLEDGE_SCHEMA}\" ]; then\n    echo \"Error: FLEDGE_SCHEMA is not set, check [${CURRENT_FLEDGE_VERSION_FILE}] file. Exiting.\"\n    exit 1\nfi\n\n###\n# - 2 - Get Fledge version from source tree VERSION file\n# abort on missing variables\n#\nFLEDGE_VERSION=`cat ${FLEDGE_VERSION_FILE} | tr -d ' ' | grep -i \"FLEDGE_VERSION=\" | sed -e 's/\\(.*\\)=\\(.*\\)/\\2/g'`\nFLEDGE_SCHEMA=`cat ${FLEDGE_VERSION_FILE} | tr -d ' ' | grep -i \"FLEDGE_SCHEMA=\" | sed -e 's/\\(.*\\)=\\(.*\\)/\\2/g'`\nif [ ! \"${FLEDGE_VERSION}\" ]; then\n    echo \"Error: FLEDGE_VERSION is not set, check [${FLEDGE_VERSION_FILE}] file. Exiting.\"\n    exit 1\nfi\nif [ ! \"${FLEDGE_SCHEMA}\" ]; then\n    echo \"Error: FLEDGE_SCHEMA is not set, check [${FLEDGE_VERSION_FILE}] file. Exiting.\"\n    exit 1\nfi\n\n###\n# The DB schema versions are the same, nothing to do\n#\nif [ \"${CURRENT_FLEDGE_SCHEMA}\" == \"${FLEDGE_SCHEMA}\" ]; then\n    echo \"Info: DB schema is up to date for version ${FLEDGE_VERSION}\"\n    exit 0\nfi\n\n##\n# Check whether we need to Upgrade or Downgrade the DB schema\n#\nCHECK_OPERATION=`printf '%s\\n' \"${FLEDGE_SCHEMA}\" \"${CURRENT_FLEDGE_SCHEMA}\" | sort -V | head -n 1`\nif [ \"${CHECK_OPERATION}\" == \"${FLEDGE_SCHEMA}\" ]; then\n    SCHEMA_OPT=\"DOWNGRADE\"\nelse\n    SCHEMA_OPT=\"UPGRADE\"\nfi\n\n###\n# Call the schema operation with params:\n# plugin_name\n# current installed db schema\n# new db schema\n# new Fledge version path\n# installed Fledge path\n#\nif [ \"${SCHEMA_OPT}\" == \"UPGRADE\" ]; then\n    check_upgrade ${PLUGIN_TO_USE} ${CURRENT_FLEDGE_SCHEMA} ${FLEDGE_SCHEMA} ${NEW_VERSION_DIR} ${INSTALLED_FLEDGE} || exit 1\nelse\n    check_downgrade ${PLUGIN_TO_USE} ${CURRENT_FLEDGE_SCHEMA} ${FLEDGE_SCHEMA} ${NEW_VERSION_DIR} ${INSTALLED_FLEDGE} || exit 1\nfi\n\nexit 0\n"
  },
  {
    "path": "scripts/common/disk_usage.py",
    "content": "\"\"\"\nExtract the disk usage percentage and print it if the percentage\nis greater than or equal to a value passed in.\nThe script takes as input the output of the GET /fledge/health/logging\nor GET /fledge/health/storage API call\n\"\"\"\nimport json\nimport sys\n\nif __name__ == '__main__':\n    data = json.loads(sys.stdin.readline())\n    item = data[\"disk\"]\n    usage = item[\"usage\"]\n    if usage >= int(sys.argv[1]):\n        print(usage)\n"
  },
  {
    "path": "scripts/common/get_engine_management.sh",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2017 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n##\n## This script is used to get information regarding the Storage Plugin\n\n\nget_engine_management() {\n\n  storage_info=( $($FLEDGE_ROOT/scripts/services/storage --plugin) )\n\n  if [ \"${storage_info[0]}\" != \"$1\" ]; then\n    # Not the storage plugin, maybe being used for readings\n    storage_info=( $($FLEDGE_ROOT/scripts/services/storage --readingplugin) )\n    if [ \"${storage_info[0]}\" != \"$1\" ]; then\n      echo \"\"\n    else\n      echo \"${storage_info[1]}\"\n    fi\n  else\n    echo \"${storage_info[1]}\"\n  fi\n\n}\n\n"
  },
  {
    "path": "scripts/common/get_logs.sh",
    "content": "#!/bin/bash\n\n__author__=\"Amandeep Singh Arora\"\n__version__=\"1.0\"\n\n# open a log file at FD 2 for debugging purposes\n> /tmp/fl_syslog.log\nexec 2<&-\nexec 2<>/tmp/fl_syslog.log\n\nRECALC_AFTER_N_SCRIPT_RUNS=100\nNUM_LOGFILE_LINES_TO_CHECK_INITIALLY=2000\n\noffset=100\nlimit=5\npattern=\"\"\nkeyword=\"\"\nlevel=\"debug\"\nlogfile=\"/var/log/syslog\"\nsourceApp=\"fledge\"\n\nwhile [ \"$#\" -gt 0 ]; do\n  case \"$1\" in\n    -pattern) pattern=\"$2\"; shift 2;;\n    -keyword) keyword=\"$2\"; shift 2;;\n    -offset) offset=\"$2\"; shift 2;;\n    -limit) limit=\"$2\"; shift 2;;\n    -level) level=\"$2\"; shift 2;;\n    -source) sourceApp=\"$2\"; shift 2;;\n    -logfile) logfile=\"$2\"; shift 2;;\n  esac\ndone\n\n\nsum=$(($offset + $limit))\nkeyword_len=${#keyword}\nif [[ $keyword_len -gt 0 ]]; then\n    factor_keyword=\"$sourceApp:$level:$keyword:\"\n    search_pattern=\"grep -a -E '${pattern}' | grep -F '$keyword'\"\nelse\n    factor_keyword=\"$sourceApp:$level:\"\n    search_pattern=\"grep -a -E '${pattern}'\"\nfi\n\necho \"\" >&2\necho \"****************************************************************************************\" >&2\necho \"************************************* START ********************************************\" >&2\n\n# keep script run count in /tmp/fl_syslog_script_runs; when it reaches RECALC_AFTER_N_SCRIPT_RUNS, recalculate factor\nif [ -f /tmp/fl_syslog_script_runs ]; then\n\tscript_runs=$(cat /tmp/fl_syslog_script_runs)\n\tscript_runs=$(($script_runs + 1))\nelse\n\tscript_runs=0\nfi\n\necho \"script_runs=$script_runs\" >&2\nif [[ $script_runs -gt ${RECALC_AFTER_N_SCRIPT_RUNS} ]]; then\n\techo \"Resetting script_runs\" >&2\n\tscript_runs=0\n\techo -n \"$script_runs\" > /tmp/fl_syslog_script_runs\nfi\necho -n \"$script_runs\" > /tmp/fl_syslog_script_runs\necho \"offset=$offset, limit=$limit, sum=$sum, pattern=$search_pattern, sourceApp=$sourceApp, level=$level, script_runs=$script_runs\" >&2\n\n# calculate how many log lines 
are to be checked to get 'n' result lines for a given service and log level\n# if for getting 100 lines of interest, 6400 last syslog lines need to be checked, then factor would be 64\nfactor=2\ninitial_factor=$((${NUM_LOGFILE_LINES_TO_CHECK_INITIALLY} / $sum))\nif [[ $script_runs -eq 0 ]]; then\n    factor=$initial_factor\n    [[ $factor -lt 2 ]] && factor=2\nelse\n    if [ -f /tmp/fl_syslog_factor ]; then\n        echo \"Reading factor value from /tmp/fl_syslog_factor\" >&2\n        if [[ $keyword_len -gt 0 ]]; then\n            cmd=\"grep '$factor_keyword' /tmp/fl_syslog_factor | rev | cut -d: -f1 | rev\"\n        else\n            cmd=\"grep '$factor_keyword[0-9][0-9]*$' /tmp/fl_syslog_factor | rev | cut -d: -f1 | rev\"\n        fi\n        echo \"Read factor cmd='$cmd'\" >&2\n        factor=$(eval $cmd)\n        echo \"Read factor value of '$factor' from /tmp/fl_syslog_factor\" >&2\n        [ -z $factor ] && factor=$initial_factor && echo \"Using factor value of $factor\" >&2\n    else\n        [ -z $factor ] && factor=$initial_factor && echo \"Using factor value of $factor; file '/tmp/fl_syslog_factor' is missing\" >&2\n        echo \"Starting with factor=$factor\" >&2\n    fi\n\nfi\n\ntmpfile=$(mktemp)\nloop_iters=0\nlogfile_line_count=$(wc -l < $logfile)\n\n# check the last 'n' lines of syslog for log lines of interest, else keep doubling 'n' till syslog file size\nwhile [ 1 ];\ndo\n\tt1=$(date +%s%N)\n\tlines_to_check=$(($factor * $sum))\n\techo >&2\n\techo \"loop_iters=$loop_iters: factor=$factor, lines=$lines_to_check, tmpfile=$tmpfile\" >&2\n\tcmd=\"tail -n $lines_to_check $logfile | ${search_pattern} > $tmpfile\"\n\techo \"cmd=$cmd, logfile line count=$logfile_line_count\" >&2\n\teval \"$cmd\"\n\tt2=$(date +%s%N)\n\tt_diff=$(((t2 - t1)/1000000))\n\tcount=$(wc -l < $tmpfile)\n\techo \"Got $count matching log lines in last $lines_to_check lines of syslog file; processing time=${t_diff}ms\" >&2\n\n\tif [[ $count -ge $sum ]]; then\n\t\techo \"Got 
sufficient number of matching log lines, current factor value of $factor is good\" >&2\n\t\tcat $tmpfile | tail -n $sum | head -n $limit\n\t\trm $tmpfile\n\n\t\ttouch /tmp/fl_syslog_factor\n\t\tgrep -v \"$factor_keyword\" /tmp/fl_syslog_factor > /tmp/fl_syslog_factor.out; mv /tmp/fl_syslog_factor.out /tmp/fl_syslog_factor\n\t\techo \"$factor_keyword$factor\" >> /tmp/fl_syslog_factor\n\t\tbreak\n\tfi\n\n\tif [[ $lines_to_check -gt $logfile_line_count ]]; then\n\t\techo \"Cannot increase factor value any further; logfile line count=$logfile_line_count, lines=$lines_to_check\" >&2\n\n\t\tcat $tmpfile | tail -n $sum | head -n $limit\n\t\techo \"Log results START:\" >&2\n\t\tcat $tmpfile | tail -n $sum | head -n $limit >&2\n\t\techo \"Log results END:\" >&2\n\t\trm $tmpfile\n\t\ttouch /tmp/fl_syslog_factor\n\t\tgrep -v \"$factor_keyword\" /tmp/fl_syslog_factor > /tmp/fl_syslog_factor.out; mv /tmp/fl_syslog_factor.out /tmp/fl_syslog_factor\n\t\techo \"$factor_keyword$factor\" >> /tmp/fl_syslog_factor\n\t\tbreak\n\tfi\n\n    factor=$(($factor * 2))\n    echo \"Didn't get sufficient number of matching log lines, trying factor=$factor\" >&2\n\n\tloop_iters=$(($loop_iters + 1))\ndone\n\necho \"******************************** END ***************************************************\" >&2\necho \"\" >&2"
  },
  {
    "path": "scripts/common/get_platform.sh",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2019 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n__author__=\"Stefano Simonelli\"\n__version__=\"1.0\"\n\n# Identifies the platform on which Fledge resides\n# output :\n#          not empty - CentOS or Red Hat\n#          empty     - Debian/Ubuntu\nget_platform() {\n\n\t(lsb_release -ds 2>/dev/null || cat /etc/*release 2>/dev/null | head -n1 || uname -om)\n\n}\n"
  },
  {
    "path": "scripts/common/get_readings_plugin.sh",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2022 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n__author__=\"Massimiliano Pinto\"\n__version__=\"1.0\"\n\n# Get the readings database plugin from the Storage microservice\nget_readings_plugin() {\n    # Capture the output to a variable\n    if [ \"${FLEDGE_ROOT}\" ]; then\n        res=$($FLEDGE_ROOT/scripts/services/storage --readingsPlugin 2>/dev/null)\n    elif [ -x scripts/services/storage ]; then\n        res=$(scripts/services/storage --readingsPlugin 2>/dev/null)\n    else\n        logger \"Unable to find Fledge storage script.\"\n        exit 1\n    fi\n\n    # Extract the first word from the result\n    plugin=$(echo \"$res\" | cut -d' ' -f1)\n\n    # Check whether the result contains \"Use main plugin\"\n    if [[ \"$res\" =~ \"Use main plugin\" ]]; then\n        if [ \"${FLEDGE_ROOT}\" ]; then\n            # Run the --plugin command to get the actual plugin name\n            plugin=$($FLEDGE_ROOT/scripts/services/storage --plugin 2>/dev/null | cut -d' ' -f1)\n        elif [ -x scripts/services/storage ]; then\n            # Run the local --plugin command\n            plugin=$(scripts/services/storage --plugin 2>/dev/null | cut -d' ' -f1)\n        else\n            # Log an error and exit if the storage script is 
not found\n            logger \"Failed to retrieve information from the readings plugin.\"\n            exit 1\n        fi\n    fi\n    \n    # Return the plugin name\n    echo \"$plugin\"\n}\n"
  },
  {
    "path": "scripts/common/get_storage_plugin.sh",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n__author__=\"Massimiliano Pinto\"\n__version__=\"1.0\"\n\n# Get the storage database plugin from the Storage microservice cache file\nget_storage_plugin() {\n    if [ \"${FLEDGE_ROOT}\" ]; then\n        $FLEDGE_ROOT/scripts/services/storage --plugin | cut -d' ' -f1\n    elif [ -x scripts/services/storage ]; then\n        scripts/services/storage --plugin | cut -d' ' -f1\n    else\n        logger \"Unable to find Fledge storage script.\"\n        exit 1\n    fi\n}\n"
  },
  {
    "path": "scripts/common/json_parse.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Command line JSON parsing utility Class\"\"\"\n\nimport sys\nimport json\n\n\n\"\"\"\nmodule name: json_parse.py\nThis module reads JSON data from STDIN and parses it with the argv[1] method,\nusing the optional match argument argv[2]\n\nIt prints the requested JSON value.\nIn case of errors it prints the exception and returns 1 to the caller\n\nCurrent implemented methods:\n- get_rest_api_url() return the REST API url of Fledge\n- get_category_item_default() returns the default value of a Fledge category name\n- get_category_item_value() returns the value of a Fledge category name\n- get_category_key() returns the match for a given category name\n- get_config_item_value() returns the configuration item value of a Fledge category name\n- get_schedule_id() returns the schedule id of a given schedule name\n- get_current_schedule_id() returns the schedule id of a newly created schedule\nUsage:\n\n$ echo $JSON_DATA | python3 -m json_parse $method_name $name\n\"\"\"\n\n__author__ = \"Massimiliano Pinto, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n# ExtractJson Class\nclass ExtractJson(object):\n    def __init__(self, json_input, method):\n        self._json = json_input\n        self._method = method\n\n    # Set error message for raising exceptions class methods\n    def set_error_message(self, name, err_exc):\n        return \"Error parsing JSON in method: {} for '{}' with exception {}:{}\".format(\n            self._method, name, err_exc.__class__.__name__, str(err_exc))\n\n    # Return REST API URL from 'Fledge' PID file in JSON input\n    def get_rest_api_url_from_pid(self, unused=None):\n        try:\n            json_data = self._json['adminAPI']\n            scheme = json_data['protocol'].lower()\n            port = str(json_data['port'])\n            address = 
\"127.0.0.1\" if json_data['addresses'][0] == \"0.0.0.0\" else json_data['addresses'][0]\n            url = \"{}://{}:{}\".format(scheme, address, port)\n            return url\n        except Exception as ex:\n            raise Exception(self.set_error_message(\"Fledge PID\", ex))\n\n    # Return REST API URL from 'Fledge Core' service in JSON input\n    def get_rest_api_url(self, unused=None):\n        try:\n            scheme = self._json['services'][0]['protocol']\n            port = str(self._json['services'][0]['service_port'])\n            address = \"127.0.0.1\" if self._json['services'][0]['address'] == \"0.0.0.0\" else \\\n                self._json['services'][0]['address']\n            url = \"{}://{}:{}\".format(scheme, address, port)\n            return url\n        except Exception as ex:\n            raise Exception(self.set_error_message(\"Fledge Core\", ex))\n\n    # Return the default value of a Fledge category item from JSON input\n    def get_category_item_default(self, item):\n        try:\n            # Get the specified category item name\n            cat_json = self._json\n            return str(cat_json['value'][item]['default']).replace('\"', '')\n        except Exception as ex:\n            raise Exception(self.set_error_message(item, ex))\n\n    # Return the current value of a Fledge category item from JSON input\n    def get_category_item_value(self, item):\n        try:\n            # Get the specified category item name\n            cat_json = self._json\n            return str(cat_json['value'][item]['value']).replace('\"', '')\n        except Exception as ex:\n            raise Exception(self.set_error_message(item, ex))\n\n    # Return the category key from JSON input if it matches the given name\n    def get_category_key(self, key):\n        try:\n            # Get the specified category name\n            cat_json = self._json\n            # If no match return empty string\n            if cat_json['key'] == key:\n                return 
str(cat_json['key']).replace('\"', '')\n            else:\n                return str(\"\")\n        except KeyError as er:\n            raise Exception(self.set_error_message(key, er))\n        except Exception as ex:\n            raise Exception(self.set_error_message(key, ex))\n\n    # Return the value of a configuration item of a Fledge category\n    def get_config_item_value(self, item):\n        try:\n            # Get the specified JSON\n            cat_json = self._json\n            return str(cat_json[item]['value']).replace('\"', '')\n        except Exception as ex:\n            raise Exception(self.set_error_message(item, ex))\n\n    # Return the ID of a Fledge schedule name just created\n    def get_current_schedule_id(self, name):\n        try:\n            # Get the specified schedule name\n            schedule_json = self._json['schedule']\n            if schedule_json['name'] == name:\n                # Schedule found, return the id\n                return str(schedule_json['id'].replace('\"', ''))\n            else:\n                # Name not found, return empty string\n                return str(\"\")\n\n        except Exception as ex:\n            raise Exception(self.set_error_message(name, ex))\n\n    # Return the ID of a Fledge schedule name from JSON input with all schedules\n    def get_schedule_id(self, name):\n        try:\n            # Get the specified schedule name\n            schedules_json = self._json\n\n            # Look for the matching schedule name\n            for schedule in schedules_json['schedules']:\n                if schedule['name'] == name:\n                    # Schedule found, return the id\n                    return str(schedule['id'].replace('\"', ''))\n\n            # Nothing has been found, return empty string\n            return str(\"\")\n\n        except Exception as ex:\n            raise Exception(self.set_error_message(name, ex))\n\n\n# Main 
body\nif __name__ == '__main__':\n    try:\n        # Read from STDIN\n        read_data = sys.stdin.readline()\n\n        method_name = str(sys.argv[1])\n\n        # Instantiate the class with a JSON object from input data\n        json_parse = ExtractJson(json.loads(read_data), method_name)\n\n        # Build the class method to call using argv[1]\n        if len(sys.argv) > 2:\n            call_method = \"json_parse.\" + method_name + \"('\" + str(sys.argv[2]) + \"')\"\n        else:\n            call_method = \"json_parse.\" + method_name + \"()\"\n\n        try:\n            # Return the output\n            output = eval(call_method)\n            print(output)\n        except Exception as err:\n            print(\"ERROR: \" + str(err))\n            exit(1)\n\n        # Return success\n        exit(0)\n    except AttributeError:\n        print(\"ERROR: method '\" + method_name + \"' not implemented yet\")\n        # Return failure\n        exit(1)\n    except Exception as exc:\n        if len(sys.argv) == 1:\n            print(\"ERROR: \" + str(exc))\n        else:\n            print(\"ERROR: '\" + str(sys.argv[1]) + \"', \" + str(exc))\n        # Return failure\n        exit(1)\n"
  },
  {
    "path": "scripts/common/loglevel.py",
    "content": "\"\"\"\nExtract the names of services that have the current log level set\nto a level passed in as an argument. The script takes as input the\noutput of the GET /fledge/health/logging API call\n\"\"\"\n\nimport json\nimport sys\n\nif __name__ == '__main__':\n    level = sys.argv[1]\n    data = json.loads(sys.stdin.readline())\n    for service in data[\"levels\"]:\n        if service[\"level\"] == level:\n            print(service[\"name\"])\n"
  },
  {
    "path": "scripts/common/service_status.py",
    "content": "\"\"\"\nUtility to extract the names of services in a particular state, given the output of\nthe API call GET /fledge/services\n\"\"\"\n\nimport json\nimport sys\n\nif __name__ == '__main__':\n    data = json.loads(sys.stdin.readline())\n    state = sys.argv[1]\n    for service in data[\"services\"]:\n        if service[\"status\"] == state:\n            print(service[\"name\"])\n"
  },
  {
    "path": "scripts/common/try_catch.sh",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n##\n## try_catch\n## This script is used to execute a command and control the output\n\n\n# Parameters: $1 - The command to execute\n#             $2 - write_log function\n#             $3 - The message to show in case of error\n#\ntry_catch() {\n\n  set +e\n  command_output=$( $1 2>&1 )\n  command_rescode=$?\n\n  if [[ $command_rescode -ne 0 ]]; then\n    $2 \"err\" \"$3\" \"all\" \"pretty\"\n    $2 \"err\" \"Command Output: $command_output\" \"all\" \"pretty\"\n    exit $command_rescode\n  fi\n  set -e\n\n}\n\n\n"
  },
  {
    "path": "scripts/common/write_log.sh",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2017 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n##\n## write_log\n## This script is used to write to the syslog and to echo output to stderr\n## and stdout\n\n#set -x\n\n# Fetch environment variables for remote syslog configuration\nSYSLOG_UDP_ENABLED=${SYSLOG_UDP_ENABLED:-false} # Enable/disable remote syslog (default: false)\nLOG_IP=${LOG_IP:-\"127.0.0.1\"}                   # Remote syslog server IP (default: localhost)\nLOG_PORT=${LOG_PORT:-5140}                       # Remote syslog server port (default: 5140)\n## The Logging Handler\n#\n# Parameters: $1 - Module\n#             $2 - Function\n#             $3 - Severity:\n#                  - debug\n#                  - info\n#                  - notice\n#                  - err\n#                  - crit\n#                  - alert\n#                  - emerg\n#             $4 - Message\n#             $5 - Output:\n#                  - logonly : send the message only to syslog\n#                  - all     : send the message to syslog and stdout\n#                  - outonly : send the message only to stdout\n#             $6 - Format\n#                - pretty : Do not show the date and priority\n#\nwrite_log() {\n\n  # Check log severity\n  case \"$3\" in\n    \"debug\")\n      
severity=\"DEBUG\"\n      ;;\n    \"info\")\n      severity=\"INFO\"\n      ;;\n    \"notice\")\n      severity=\"WARNING\"\n      ;;\n    \"err\")\n      severity=\"ERROR\"\n      ;;\n    \"crit\")\n      severity=\"CRITICAL ERROR\"\n      ;;\n    \"alert\")\n      severity=\"ALERT\"\n      ;;\n    \"emerg\")\n      severity=\"EMERGENCY\"\n      ;;\n    \"*\")\n      write_log $1 \"err\" \"Internal error: unrecognized priority: $3\" $4\n      exit 1\n      ;;\n  esac\n\n  # Log to syslog\n  if [[ \"$5\" =~ ^(logonly|all)$ ]]; then\n      tag=\"Fledge ${1}[${BASHPID}] ${severity}: ${2}\"\n      if [[ \"${SYSLOG_UDP_ENABLED,,}\" == \"true\" ]]; then\n          # Send log to remote syslog server using logger\n          logger --server \"${LOG_IP}\" --port \"${LOG_PORT}\" -t ${tag} \"${4}\"\n      else\n          # Log locally\n          logger -t \"${tag}\" \"${4}\"\n      fi\n  fi\n\n  # Log to Stdout\n  if [[ \"${5}\" =~ ^(outonly|all)$ ]]; then\n      if [[ \"${6}\" == \"pretty\" ]]; then\n          echo \"${4}\" >&2\n      else\n          echo \"[$(date +'%Y-%m-%d %H:%M:%S')]: $@\" >&2\n      fi\n  fi\n\n}\n\n\n"
  },
  {
    "path": "scripts/debug/.debugrc",
    "content": "    PS1='\\[\\033[01;32m\\]Debug\\[\\033[00m\\]:\\[\\033[01;34m\\] ${SERVICE}\\[\\033[00m\\]\\$ '\n"
  },
  {
    "path": "scripts/debug/README.rst",
    "content": "Debugger Command Line Test Programs\n===================================\n\nThis directory contains a set of utilities used to manually test the pipeline debugger features. These tests should be run against a live system built from this branch.\n\nTo start testing;\n\n  - Change directory to the debugger directory\n\n  - Type the command\n\n    .. code-block:: console\n\n        ./debug <ServiceName>\n\n    Where *<ServiceName>* is the name of a south or north service. You will need to quote the name if it contains whitespace or wildcard characters that have meaning to the shell.\n\n   - Attach the debugger to the pipeline by using the attach command\n\n     .. code-block:: console\n\n         attach\n\nThe debugger is now attached to the pipeline and collecting one reading at each point in the pipeline.\n\nTo see the list of commands available type the *commands* command\n\n.. code-block:: console\n\n    % commands\n    attach:\t\tAttach the pipeline debugger\n    buffer:\t\tReturn the contents of the buffers at every pipeline element\n    detach:\t\tDetach the debugger from the pipeline\n    isolate:\t        Isolate the pipeline from the destination\n    replay:\t\tReplay the buffered data through the pipeline\n    resumeIngest:\tResume the flow of data into the pipeline\n    setBuffer:\t        Set the number of readings to hold in each buffer, passing an integer argument\n    state:\t\tReturn the state of the debugger\n    step:\t\tAllow readings to flow into the pipeline. Pass an optional number of readings to ingest; default to 1 if omitted\n    store:\t\tAllow data to flow out of the pipeline into storage\n    suspendIngest:\tSuspend the ingestion of data into the pipeline\n\nThe data output from the *buffer* command shows the readings at the input to the named filter in the pipeline. The item name *Writer* is the end of the pipeline that writes data to storage, in south plugins. 
The item named *Branch* is a branch point in the pipeline.\n\nTo exit the debugger, simply type *exit* or press <Control>-D\n"
  },
  {
    "path": "scripts/debug/attach",
    "content": "#!/bin/bash\n#help \t\tAttach the pipeline debugger\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -t \"$fd\" ]]; then\n\t# We have an interactive shell\n\t    read -p \"Username: \" USERNAME\n\t    read -s -p \"Password: \" PASSWORD\n\t    /bin/echo >/dev/tty\n    fi\n\n    # Get/Updates the rest API URL\n    payload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    result=`curl -X POST -k -s ${REST_API_URL}/fledge/login -d\"$payload\" || true`\n    if [[ ! \"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\ntoken=`fledge_authenticate`\ncurl -s -H \"authorization: $token\" -X PUT \"${REST_API_URL}/fledge/service/${SERVICE}/debug?action=attach\"|jq\n"
  },
  {
    "path": "scripts/debug/buffer",
    "content": "#!/bin/bash\n#help \t\tReturn the contents of the buffers at every pipeline element\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -t \"$fd\" ]]; then\n\t# We have an interactive shell\n\t    read -p \"Username: \" USERNAME\n\t    read -s -p \"Password: \" PASSWORD\n\t    /bin/echo >/dev/tty\n    fi\n\n    # Get/Updates the rest API URL\n    payload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    result=`curl -X POST -k -s ${REST_API_URL}/fledge/login -d\"$payload\" || true`\n    if [[ ! \"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\ntoken=`fledge_authenticate`\ncurl -s -H \"authorization: $token\" \"${REST_API_URL}/fledge/service/${SERVICE}/debug?action=buffer\"|jq\n"
  },
  {
    "path": "scripts/debug/commands",
    "content": "#!/bin/bash\n#help \tList the commands that can be run\ncmds=`ls |grep -v debug |grep -v commands` \ngrep '#help' $cmds | sed -e 's/#help //'\n"
  },
  {
    "path": "scripts/debug/debug",
    "content": "#!/bin/bash\nif [ $# -gt 0 ]; then\n\tservice=$1\nelse\n\techo You must pass a service name\n\texit 1\nfi\n\n\n##\n## Get Fledge rest API URL\n##\nget_rest_api_url() {\n    pid_file=${FLEDGE_DATA}/var/run/fledge.core.pid\n    export PYTHONPATH=${FLEDGE_ROOT}\n\n    # Check whether pid_file exists and its contents are not empty\n    if [[ -s ${pid_file} ]]; then\n        export REST_API_URL=$(cat ${pid_file} | python3 -m scripts.common.json_parse  get_rest_api_url_from_pid)\n    fi\n\n    # Sets a default value if it not possible to determine the proper value using the pid file\n    if [ ! \"${REST_API_URL}\" ]; then\n        export REST_API_URL=\"http://localhost:8081\"\n    fi\n}\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -t \"$fd\" ]]; then\n\t# We have an interactive shell\n\t    read -p \"Username: \" USERNAME\n\t    read -s -p \"Password: \" PASSWORD\n\t    /bin/echo >/dev/tty\n    fi\n\n    # Get/Updates the rest API URL\n    payload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    result=`curl -X POST -k -s ${REST_API_URL}/fledge/login -d\"$payload\" || true`\n    if [[ ! 
\"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\nget_rest_api_url\n\nresult=`curl -k -s ${REST_API_URL}/fledge/service || true`\nif [[ \"${result}\" == \"401\"* ]]; then\n\ttoken=`fledge_authenticate`\n\tif [[ \"${token}\" =~ \"failed\" ]]; then\n\t\techo \"Authentication failed.\"\n\t\texit -1\n\tfi\n\tresult=`curl -H \"authorization: $token\" -k -s ${REST_API_URL}/fledge/ping || true`\n\texport DEBUG_SERVICE=`curl -H \"authorization: $token\" -k -s ${REST_API_URL}/fledge/service|jq '.services[] | select (.name == \"'$service'\") | .service_port'`\n\ttype=`curl -H \"authorization: $token\" -k -s ${REST_API_URL}/fledge/service|jq '.services[] | select (.name == \"'$service'\") | .type' | sed -e 's/\"//g'`\nelse\n\texport DEBUG_SERVICE=`curl -s ${REST_API_URL}/fledge/service|jq '.services[] | select (.name == \"'$service'\") | .service_port'`\n\ttype=`curl -s ${REST_API_URL}/fledge/service|jq '.services[] | select (.name == \"'$service'\") | .type' | sed -e 's/\"//g'`\nfi\nif [ \"$type\" = \"Southbound\" ] ; then\n\texport DEBUG_TYPE=\"south\"\nelif [ \"$type\" = \"Northbound\" ]; then\n\texport DEBUG_TYPE=\"north\"\nelse\n\techo Only South or North services are currently supported. The named service must also be running\n\texit 1\nfi\nexport PATH=.:$PATH\nexport SERVICE=${service}\nif [ \"$DEBUG_SERVICE\" = \"\" ]; then\n\techo $service is not the name of a running south or north service\nelse\n\tcd ${FLEDGE_ROOT}/scripts/debug\n\tbash --rcfile .debugrc\nfi\n"
  },
  {
    "path": "scripts/debug/detach",
    "content": "#!/bin/bash\n#help \t\tDetach the debugger from the pipeline\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -t \"$fd\" ]]; then\n\t# We have an interactive shell\n\t    read -p \"Username: \" USERNAME\n\t    read -s -p \"Password: \" PASSWORD\n\t    /bin/echo >/dev/tty\n    fi\n\n    # Get/Updates the rest API URL\n    payload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    result=`curl -X POST -k -s ${REST_API_URL}/fledge/login -d\"$payload\" || true`\n    if [[ ! \"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\ntoken=`fledge_authenticate`\ncurl -s -H \"authorization: $token\" -X PUT \"${REST_API_URL}/fledge/service/${SERVICE}/debug?action=detach\"|jq\n"
  },
  {
    "path": "scripts/debug/isolate",
    "content": "#!/bin/bash\n#help \tIsolate the pipeline from the destination\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -t \"$fd\" ]]; then\n\t# We have an interactive shell\n\t    read -p \"Username: \" USERNAME\n\t    read -s -p \"Password: \" PASSWORD\n\t    /bin/echo >/dev/tty\n    fi\n\n    # Get/Updates the rest API URL\n    payload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    result=`curl -X POST -k -s ${REST_API_URL}/fledge/login -d\"$payload\" || true`\n    if [[ ! \"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\ntoken=`fledge_authenticate`\ncurl -s -H \"authorization: $token\" -X PUT -d'{\"state\":\"discard\"}' \"${REST_API_URL}/fledge/service/${SERVICE}/debug?action=isolate\"|jq\n"
  },
  {
    "path": "scripts/debug/replay",
    "content": "#!/bin/bash\n#help \t\tReplay the buffered data through the pipeline\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -t \"$fd\" ]]; then\n\t# We have an interactive shell\n\t    read -p \"Username: \" USERNAME\n\t    read -s -p \"Password: \" PASSWORD\n\t    /bin/echo >/dev/tty\n    fi\n\n    # Get/Updates the rest API URL\n    payload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    result=`curl -X POST -k -s ${REST_API_URL}/fledge/login -d\"$payload\" || true`\n    if [[ ! \"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\ntoken=`fledge_authenticate`\ncurl -s -H \"authorization: $token\" -X PUT \"${REST_API_URL}/fledge/service/${SERVICE}/debug?action=replay\"|jq\n"
  },
  {
    "path": "scripts/debug/resumeIngest",
    "content": "#!/bin/bash\n#help \tResume the flow of data into the pipeline\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -t \"$fd\" ]]; then\n\t# We have an interactive shell\n\t    read -p \"Username: \" USERNAME\n\t    read -s -p \"Password: \" PASSWORD\n\t    /bin/echo >/dev/tty\n    fi\n\n    # Get/Updates the rest API URL\n    payload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    result=`curl -X POST -k -s ${REST_API_URL}/fledge/login -d\"$payload\" || true`\n    if [[ ! \"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\ntoken=`fledge_authenticate`\ncurl -s -H \"authorization: $token\" -X PUT -d'{\"state\":\"resume\"}' \"${REST_API_URL}/fledge/service/${SERVICE}/debug?action=suspend\"|jq\n"
  },
  {
    "path": "scripts/debug/setBuffer",
    "content": "#!/bin/bash\n#help \tSet the number of readings to hold in each buffer, passing an integer argument\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -t \"$fd\" ]]; then\n\t# We have an interactive shell\n\t    read -p \"Username: \" USERNAME\n\t    read -s -p \"Password: \" PASSWORD\n\t    /bin/echo >/dev/tty\n    fi\n\n    # Get/Updates the rest API URL\n    payload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    result=`curl -X POST -k -s ${REST_API_URL}/fledge/login -d\"$payload\" || true`\n    if [[ ! \"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\nsize=1\nif [ $# -gt 0 ]; then\n\tsize=$1\nfi\npayload='{\"size\":'$size'}'\ntoken=`fledge_authenticate`\ncurl -s -H \"authorization: $token\" -d$payload -X PUT \"${REST_API_URL}/fledge/service/${SERVICE}/debug?action=buffer\"|jq\n"
  },
  {
    "path": "scripts/debug/state",
    "content": "#!/bin/bash\n#help \t\tReturn the state of the debugger\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -t \"$fd\" ]]; then\n\t# We have an interactive shell\n\t    read -p \"Username: \" USERNAME\n\t    read -s -p \"Password: \" PASSWORD\n\t    /bin/echo >/dev/tty\n    fi\n\n    # Get/Updates the rest API URL\n    payload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    result=`curl -X POST -k -s ${REST_API_URL}/fledge/login -d\"$payload\" || true`\n    if [[ ! \"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\ntoken=`fledge_authenticate`\ncurl -s -H \"authorization: $token\" \"${REST_API_URL}/fledge/service/${SERVICE}/debug?action=state\"|jq\n"
  },
  {
    "path": "scripts/debug/step",
    "content": "#!/bin/bash\n#help \t\tAllow readings to flow into the pipeline. Passed an optional number of readings to ingest; default to 1 if omitted\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -t \"$fd\" ]]; then\n\t# We have an interactive shell\n\t    read -p \"Username: \" USERNAME\n\t    read -s -p \"Password: \" PASSWORD\n\t    /bin/echo >/dev/tty\n    fi\n\n    # Get/Updates the rest API URL\n    payload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    result=`curl -X POST -k -s ${REST_API_URL}/fledge/login -d\"$payload\" || true`\n    if [[ ! \"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\nsteps=1\nif [ $# -gt 0 ]; then\n\tsteps=$1\nfi\npayload='{\"steps\":'$steps'}'\ntoken=`fledge_authenticate`\ncurl -s -H \"authorization: $token\" -d$payload -X PUT \"${REST_API_URL}/fledge/service/${SERVICE}/debug?action=step\"|jq\n"
  },
  {
    "path": "scripts/debug/store",
    "content": "#!/bin/bash\n#help \t\tAllow data to flow out of the pipeline into storage\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -t \"$fd\" ]]; then\n\t# We have an interactive shell\n\t    read -p \"Username: \" USERNAME\n\t    read -s -p \"Password: \" PASSWORD\n\t    /bin/echo >/dev/tty\n    fi\n\n    # Get/Updates the rest API URL\n    payload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    result=`curl -X POST -k -s ${REST_API_URL}/fledge/login -d\"$payload\" || true`\n    if [[ ! \"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\ntoken=`fledge_authenticate`\ncurl -s -H \"authorization: $token\" -X PUT -d'{\"state\":\"store\"}' \"${REST_API_URL}/fledge/service/${SERVICE}/debug?action=isolate\"|jq\n"
  },
  {
    "path": "scripts/debug/suspendIngest",
    "content": "#!/bin/bash\n#help \tSuspend the ingestion of data into the pipeline\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -t \"$fd\" ]]; then\n\t# We have an interactive shell\n\t    read -p \"Username: \" USERNAME\n\t    read -s -p \"Password: \" PASSWORD\n\t    /bin/echo >/dev/tty\n    fi\n\n    # Get/Updates the rest API URL\n    payload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    result=`curl -X POST -k -s ${REST_API_URL}/fledge/login -d\"$payload\" || true`\n    if [[ ! \"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\ntoken=`fledge_authenticate`\ncurl -s -H \"authorization: $token\" -X PUT -d'{\"state\":\"suspend\"}' \"${REST_API_URL}/fledge/service/${SERVICE}/debug?action=suspend\"|jq\n"
  },
  {
    "path": "scripts/extras/fledge.sudoers",
    "content": "%sudo ALL=(ALL) NOPASSWD:SETENV: /usr/bin/apt -y update, /usr/bin/apt-get -y install fledge, /usr/bin/apt -y install /usr/local/fledge/data/plugins/fledge*.deb, /usr/bin/apt list, /usr/bin/apt -y install fledge*, /usr/bin/apt -y upgrade, /usr/bin/apt -y purge fledge*\n"
  },
  {
    "path": "scripts/extras/fledge.sudoers_rh",
    "content": "%sudo ALL=(ALL) NOPASSWD: /usr/bin/yum -y update, /usr/bin/yum -y install fledge, /usr/bin/yum -y install /usr/local/fledge/data/plugins/fledge*.rpm, /usr/bin/yum list available fledge-*, /usr/bin/yum -y install fledge*, /usr/bin/yum check-update, /usr/bin/yum -y remove fledge*\n"
  },
  {
    "path": "scripts/extras/fledge_update",
    "content": "#!/bin/bash\n\n##\n# This script is copied into Fledge 'bin' directory by the Fledge install process\n#\n# This script let the user to update Fledge either manually or in auto mode via a schedule\n# enable/disable/remove and also to set the time interval of the schedule for apt package update.\n#\n##\n\n#\n# Note:\n# current implementation only supports the scheduling interval setting:\n# it's not possible to specify to run at particular time not a specific week day\n#\n\n__author__=\"Massimiliano Pinto, Amarendra K Sinha\"\n__copyright__=\"Copyright (c) 2018 OSIsoft, LLC\"\n__license__=\"Apache 2.0\"\n__version__=\"1.0\"\n\nFLEDGE_AUTO_UPDATER_VER=${__version__}\n\nREST_API_SCHEME=\"http://\"\nSCHEDULE_PROCESS_NAME=\"FledgeUpdater\"\nSCHEDULE_NAME=\"Fledge updater\"\n\n# Set FLEDGE_ROOT to default location if not set\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n        FLEDGE_ROOT=/usr/local/fledge\nfi\n\n# Check FLEDGE_ROOT is a directory\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n        echo \"Fledge home directory missing or incorrectly set environment\"\n        exit 1\nfi\n\n# Add FLEDGE_ROOT/python to PYTHONPATH\nexport PYTHONPATH=\"${PYTHONPATH}:${FLEDGE_ROOT}/scripts/common\"\n\n# Print usage and credits\nusage()\n{\n\techo \"Fledge auto update enable/disable v${FLEDGE_AUTO_UPDATER_VER} Copyright (c) 2018 OSIsoft, LLC\"\n\techo\n\techo \"usage: $(basename $0) --manual --auto --enable --disable --remove-update [--host --port --use-https --update-interval=seconds]\"\n\techo\n\techo  mandatory options:\n\techo \"  --manual           Run update script manually.\"\n\techo \"  --auto             Create update task schedule for auto update.\"\n\techo \"  --enable           Enables the auto update  or creates the enabled schedule if not set.\"\n\techo \"  --disable          Disables the auto update if set\"\n\techo \"  --remove-update    Removes the auto update\"\n\techo\n\techo \"optional parameters:\"\n\techo \"  --host             Sets the Fledge REST 
API host, default is 127.0.0.1\"\n\techo \"  --port             Sets the Fledge REST API port, default is 8081\"\n\techo \"  --use-https        Sets HTTPS for Fledge REST API, default is http\"\n\techo \"  --update-interval  Sets the auto update interval in seconds, default is 10800 (3 hours)\"\n\texit 0\n}\n\n# Handle '--use-https' option\necho \"$@\" | grep -q -- --use-https && REST_API_SCHEME=\"https://\"\n\n# Handle input parameters\nwhile [ \"$1\" != \"\" ]; do\n    PARAM=`echo $1 | awk -F= '{print $1}'`\n    VALUE=`echo $1 | awk -F= '{print $2}'`\n    case $PARAM in\n        --port)\n            API_PORT=$VALUE\n            ;;\n        --host)\n            API_ADDRESS=$VALUE\n            ;;\n        --manual)\n            MANUAL_UPDATE=\"Y\"\n            ;;\n        --auto)\n            AUTO_UPDATE=\"Y\"\n            ;;\n        --enable)\n            ENABLE_UPDATE=\"Y\"\n            ;;\n        --disable)\n            DISABLE_UPDATE=\"Y\"\n            ;;\n        --remove-update)\n            REMOVE_UPDATE=\"Y\"\n            ;;\n        --update-interval)\n            UPDATE_INTERVAL=$VALUE\n            ;;\n        -h | --help)\n            usage\n            ;;\n        *)\n            usage\n        ;;\n    esac\n    shift\ndone\n\n# Check for mandatory options first\nif [ ! \"${MANUAL_UPDATE}\" ] && [ ! \"${AUTO_UPDATE}\" ] && [ ! \"${ENABLE_UPDATE}\" ] && [ ! \"${DISABLE_UPDATE}\" ] && [ ! \"${REMOVE_UPDATE}\" ]; then\n\tusage\n\texit 1\nfi\n\n# Set API default port\nif [ ! \"${API_PORT}\" ]; then\n\tAPI_PORT=8081\nfi\n\n# Set 'localhost' if API_ADDRESS is not set\nif [ ! \"${API_ADDRESS}\" ]; then\n\tAPI_ADDRESS=\"localhost\"\nfi\n\n# Set API URL\nREST_API_URL=\"${REST_API_SCHEME}${API_ADDRESS}:${API_PORT}\"\n\n# Check Fledge API is running at API_ADDRESS, API_PORT via 'ping'\nCHECK_SERVICE=`curl -s -k --max-time 30 \"${REST_API_URL}/fledge/ping\" | grep -i uptime`\n
if [ ! \"${CHECK_SERVICE}\" ]; then\n\techo \"$(basename $0): Error: cannot connect to Fledge API at [${REST_API_URL}]\"\n\texit 1\nfi\n\n# Check whether SCHEDULE_NAME exists\n# Abort on JSON errors\nCMD_SCHEDULE_EXISTS=\"curl -s -k --max-time 30 '${REST_API_URL}/fledge/schedule' | python3 -m json_parse get_schedule_id '${SCHEDULE_NAME}'\"\nSCHEDULE_EXISTS=`eval ${CMD_SCHEDULE_EXISTS}`\nret_code=$?\nif [ \"${ret_code}\" -ne 0 ]; then\n\techo \"$(basename $0): Error: checking schedule ${SCHEDULE_NAME}, [${SCHEDULE_EXISTS}]. Check Fledge configuration.\"\n\texit 3\nfi\n\n# Check SCHEDULE_NAME details from JSON data\n# Abort if more than one schedule is found\n# Note:\n# If the schedule doesn't exist it will be created with --enable\nif [ \"${SCHEDULE_EXISTS}\" ]; then\n\tNUM_SCHEDULES=`echo ${SCHEDULE_EXISTS} | tr ' ' '\\\\n' | wc -l`\n\tif [ \"${NUM_SCHEDULES}\" -gt 1 ]; then\n\t\techo \"$(basename $0): Error: found more than one 'schedule_id' for schedule ${SCHEDULE_PROCESS_NAME}. Check Fledge configuration.\"\n\t\texit 3\n\tfi\n\n\t# Set the schedule id\n\tSCHEDULE_ID=${SCHEDULE_EXISTS}\nfi\n\n# Set default interval\nif [ ! 
\"${UPDATE_INTERVAL}\" ]; then\n\tUPDATE_INTERVAL=10800\nfi\n\n# Prepare JSON paylod for the new schedule creation\n#\n# - task type is INTERVAL\n# - repeat set to default or specified value\n# - enabled set to true\n# \nSCHEDULE_SET_PAYLOAD=\"{\\\"type\\\": 3, \\\n\t\t\t\\\"name\\\": \\\"${SCHEDULE_NAME}\\\",\n\t\t\t\\\"process_name\\\": \\\"${SCHEDULE_PROCESS_NAME}\\\",\n\t\t\t\\\"repeat\\\": ${UPDATE_INTERVAL},\n\t\t\t\\\"enabled\\\": \\\"t\\\",\n\t\t\t\\\"exclusive\\\": \\\"t\\\"}\"\n\n###\n# Commands handling\n###\n\n# If manual mode has been choosen, then simply run the update task script and exit\nif [ \"${MANUAL_UPDATE}\" = \"Y\" ]; then\n\t# CREATE API call\n\tMANUAL_OUTPUT=`curl -s -k --max-time 30 -X PUT \"${REST_API_URL}/fledge/update\"`\n\n\t# Check 'deleted' in JSON output\n\tCHECK_MANUAL=`echo ${MANUAL_OUTPUT} | grep Running`\n\tif [ ! \"${CHECK_MANUAL}\" ]; then\n\t\techo \"$(basename $0): error: failed to run manual update: ${MANUAL_OUTPUT}\"\n\t\texit 3\n\telse\n\t\techo \"The Fledge update process has been successfully scheduled.\"\n\t\texit 0\n\tfi\nfi\n\n# Create Update schedule vide create task\nif [ \"${AUTO_UPDATE}\" = \"Y\" ]; then\n\tif [ \"${SCHEDULE_ID}\" ]; then\n\t\techo \"$(basename $0): warning: the schedule '${SCHEDULE_NAME}' with id ${SCHEDULE_ID} already exists.\"\n\t\texit 2\n\tfi\n\n\t# CREATE API call\n\tAUTO_OUTPUT=`curl -s -k --max-time 30 -X POST -d \"${SCHEDULE_SET_PAYLOAD}\" \"${REST_API_URL}/fledge/schedule\"`\n\n\t# Check 'deleted' in JSON output\n\tAUTO_CREATE=`echo ${AUTO_OUTPUT} | grep ${SCHEDULE_PROCESS_NAME}`\n\tif [ ! \"${AUTO_CREATE}\" ]; then\n\t\techo \"$(basename $0): error: failed to create schedule: ${AUTO_OUTPUT}\"\n\t\texit 3\n\telse\n\t\techo \"The Fledge update has been successfully auto scheduled.\"\n\t\texit 0\n\tfi\nfi\n\n#\n# --remove-update\n# Remove the schedule from Fledge\n#\nif [ \"${REMOVE_UPDATE}\" = \"Y\" ]; then\n\tif [ ! 
\"${SCHEDULE_ID}\" ]; then\n\t\techo \"$(basename $0): warning: the schedule '${SCHEDULE_NAME}' is not active.\"\n\t\texit 2\n\tfi\n\n\t# DELETE API call\n\tREMOVE_OUTPUT=`curl -s -k --max-time 30 -X DELETE \"${REST_API_URL}/fledge/schedule/${SCHEDULE_ID}\"`\n\n\t# Check 'deleted' in JSON output\n\tCHECK_REMOVE=`echo ${REMOVE_OUTPUT} | grep -i message | grep -i deleted`\n\tif [ ! \"${CHECK_REMOVE}\" ]; then\n\t\techo \"$(basename $0): error: failed to remove schedule: ${REMOVE_OUTPUT}\"\n\t\texit 3\n\telse\n\t\techo \"The schedule '${SCHEDULE_NAME}', ID [${SCHEDULE_ID}] has been removed.\"\n\t\texit 0\n\tfi\nfi\n\n#\n# --enable\n# Enable the update schedule or activating it if not set\n#\nif [ \"${ENABLE_UPDATE}\" = \"Y\" ]; then\n\tif [ ! \"${SCHEDULE_ID}\" ]; then\n\t\techo \"The schedule '${SCHEDULE_NAME}' is not active. Activating and enabling it\"\n\n\t\t# Create the schedule\n\t\t# POST API call for 'enable' and 'update interval'\n\t\tSCHEDULE_SET=`curl -s -k --max-time 30 -X POST -d \"${SCHEDULE_SET_PAYLOAD}\" \"${REST_API_URL}/fledge/schedule\"`\n\n\t\t# Check \"id\" in JSON output\n\t\tCMD_NEW_SCHEDULE_EXISTS=\"echo '${SCHEDULE_SET}' | python3 -m json_parse get_current_schedule_id '${SCHEDULE_NAME}'\"\n\t\tSCHEDULE_ID=`eval ${CMD_NEW_SCHEDULE_EXISTS}`\n       \t\tif [ ! 
\"${SCHEDULE_ID}\" ]; then\n\t\t\techo \"$(basename $0): error: cannot get 'schedule_id' for new created schedule '${SCHEDULE_NAME}': [${SCHEDULE_SET}]\"\n\t\t\texit 3\n\t\tfi\n\t\techo \"Schedule '${SCHEDULE_NAME}' successfully added, ID [${SCHEDULE_ID}], interval ${UPDATE_INTERVAL} seconds\"\n\t\texit 0\n\telse\n\t\t# Update the schedule, using SCHEDULE_ID\n\t\t# PUT API call for 'enable'and 'update interval'\n\t\tENABLE_OUTPUT=`curl -s -k --max-time 30 -X PUT -d \"{\\\"repeat\\\": ${UPDATE_INTERVAL}, \\\"enabled\\\": true}\" \"${REST_API_URL}/fledge/schedule/${SCHEDULE_ID}\"`\n\n\t\t# Check \"id\":\"...\" in JSON output\n\t\tCMD_NEW_SCHEDULE_EXISTS=\"echo '${ENABLE_OUTPUT}' | python3 -m json_parse get_current_schedule_id '${SCHEDULE_NAME}'\"\n\t\tSCHEDULE_ID=`eval ${CMD_NEW_SCHEDULE_EXISTS}`\n\t\tif [ ! \"${SCHEDULE_ID}\" ]; then\n\t\t\techo \"$(basename $0): error: failed to enable schedule: ${ENABLE_OUTPUT}\"\n\t\t\texit 3\n\t\telse\n\t\t\techo \"The schedule '${SCHEDULE_NAME}', ID [${SCHEDULE_ID}] has been enabled, interval ${UPDATE_INTERVAL} seconds\"\n\t\tfi\n\t\texit 0\n\tfi\nfi\n\n#\n# --disable\n# Disable the update schedule (just set 'false')\n#\nif [ \"${DISABLE_UPDATE}\" = \"Y\" ]; then\n\tif [ ! \"${SCHEDULE_ID}\" ]; then\n\t\techo \"$(basename $0): info: the schedule '${SCHEDULE_NAME}' is not active. Try with --enable to install/active it\"\n\t\texit 2\n\tfi\n\n\t# PUT API call for 'disable' only using SCHEDULE_ID\n\tDISABLE_OUTPUT=`curl -s -k --max-time 30 -X PUT \"${REST_API_URL}/fledge/schedule/${SCHEDULE_ID}/disable\"`\n\n\t# Check \"scheduleId\":\"...\" in JSON output\n\tCHECK_DISABLE=`echo ${DISABLE_OUTPUT} | grep -i '\\\"scheduleId\\\"'`\n\tif [ ! \"${CHECK_DISABLE}\" ]; then\n\t\techo \"$(basename $0): error: failed to disable schedule: ${DISABLE_OUTPUT}\"\n\t\texit 3\n\telse\n\t\techo \"The schedule '${SCHEDULE_NAME}', ID [${SCHEDULE_ID}] has been disabled.\"\n\t\texit 0\n\tfi\nfi\n"
  },
  {
    "path": "scripts/extras/fogbench",
    "content": "#!/bin/sh\n# Run Fledge fogbench\n# execution sample :\n# \"${FLEDGE_ROOT}/scripts/extras/fledge.fogbench\" -t \"${FLEDGE_ROOT}/extras/python/fogbench/templates/fogbench_sensor_coap.template.json\" -H localhost -P 5683  -O 100\n\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n\tFLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tlogger \"Fledge home directory missing or incorrectly set environment\"\n\texit 1\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}/extras/python\" ]; then\n\tlogger \"Fledge home directory is missing the Extras installation\"\n\texit 1\nfi\n\n# Add fogbench code to the PYTHONPATH\nexport PYTHONPATH=${FLEDGE_ROOT}/extras/python:${PYTHONPATH}\n\npython3 -m fogbench $@\n\n"
  },
  {
    "path": "scripts/extras/update_task.apt",
    "content": "#!/bin/bash\n\n##\n# Installation process creates a link file, named \"scripts/tasks/update\".\n#\n# It may either be called by Fledge scheduler for updating Fledge package and it may also be called\n# manually via /usr/local/fledge/bin/fledge_update script.\n#\n# Pre-requisites:\n# 1. Add the repository key to your apt key list:\n#        wget -q -O - http://archives.fledge-iot.org/KEY.gpg | sudo apt-key add -\n# 2. Add the repository location to your sources list.\n#    Add the following lines to your \"/etc/apt/sources.list\" or separate /etc/apt/sources.list.d/fledge.list file.\n#        Below example for ubuntu20.04 64bit machine\n#        echo \"deb http://archives.fledge-iot.org/latest/ubuntu2004/x86_64/ / \" > /etc/apt/sources.list.d/fledge.list\n##\n\n__author__=\"Amarendra K Sinha, Ashish Jabble\"\n__copyright__=\"Copyright (c) 2018 OSIsoft, LLC\"\n__license__=\"Apache 2.0\"\n__version__=\"1.1\"\n\n\n# Set the default value for FLEDGE_ROOT if not set\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n    export FLEDGE_ROOT='/usr/local/fledge'\nfi\n\n# Set the default value for FLEDGE_DATA if not set\nif [ \"${FLEDGE_DATA}\" = \"\" ]; then\n    export FLEDGE_DATA=${FLEDGE_ROOT}/data\nfi\n\n# Include logging: it works only with bash\n. \"${FLEDGE_ROOT}/scripts/common/write_log.sh\" || exit 1\n\n# Ignore signals: 1-SIGHUP, 2-SIGINT, 3-SIGQUIT, 6-SIGABRT, 15-SIGTERM\ntrap \"\" 1 2 3 6 15\n\n# Check availability of FLEDGE_ROOT directory\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n    write_log \"\" \"$0\" \"err\" \"home directory missing or incorrectly set environment.\" \"logonly\"\n    exit 1\nfi\n\n# Check availability of FLEDGE_DATA directory\nif [ ! 
-d \"${FLEDGE_DATA}\" ]; then\n    write_log \"\" \"$0\" \"err\" \"Data directory is missing or incorrectly set environment.\" \"logonly\"\n    exit 1\nfi\n\n# Set the PYTHONPATH\nexport PYTHONPATH=$FLEDGE_ROOT/python\n\nUPGRADE_DONE=\"N\"\n\n# Fledge STOP\nfledge_stop() {\n    STOP_FLEDGE_CMD=\"${FLEDGE_ROOT}/bin/fledge stop\"\n    STOP_FLEDGE_CMD_STATUS=`$STOP_FLEDGE_CMD`\n    sleep 15\n    if [ \"${STOP_FLEDGE_CMD_STATUS}\" = \"\" ]; then\n        write_log \"\" \"$0\" \"err\" \"cannot run \\\"${STOP_FLEDGE_CMD}\\\" command.\" \"logonly\"\n        exit 1\n    fi\n}\n\n# Commands for Packages to update\nrun_update() {\n    # Download and update the package information from all of the configured sources\n    UPDATE_CMD=\"sudo apt -y update\"\n    UPDATE_CMD_OUT=`$UPDATE_CMD`\n    UPDATE_CMD_STATUS=\"$?\"\n    if [ \"$UPDATE_CMD_STATUS\" != \"0\" ]; then\n        write_log \"\" \"$0\" \"err\" \"Failed on $UPDATE_CMD. Exit: $UPDATE_CMD_STATUS. Out: $UPDATE_CMD_OUT\" \"all\" \"pretty\"\n        exit 1\n    fi\n}\n\nrun_upgrade() {\n    # Upgrade Packages\n    PACKAGES_LIST=$(cat ${FLEDGE_DATA}/.upgradable)\n    UPGRADE_CMD=\"sudo apt -y upgrade $PACKAGES_LIST\"\n    UPGRADE_CMD_OUT=$($UPGRADE_CMD)\n    UPGRADE_CMD_STATUS=\"$?\"\n    if [ \"$UPGRADE_CMD_STATUS\" != \"0\" ]; then\n        $(rm -rf ${FLEDGE_DATA}/.upgradable)\n        write_log \"\" \"$0\" \"err\" \"Failed on $UPGRADE_CMD. Exit: $UPGRADE_CMD_STATUS. Out: $UPGRADE_CMD_OUT\" \"all\" \"pretty\"\n        exit 1\n    fi\n    msg=\"'$PACKAGES_LIST' packages upgraded successfully!\"\n    write_log \"\" \"$0\" \"info\" \"$msg\" \"all\" \"pretty\"\n    UPGRADE_DONE=\"Y\"\n}\n\n# Fledge START\nfledge_start() {\n    START_FLEDGE_CMD=\"${FLEDGE_ROOT}/bin/fledge start\"\n    START_FLEDGE_CMD_OUT=`$START_FLEDGE_CMD`\n    START_FLEDGE_CMD_STATUS=\"$?\"\n    if [ \"$START_FLEDGE_CMD_OUT\" = \"\" ]; then\n        write_log \"\" \"$0\" \"err\" \"Failed on $START_FLEDGE_CMD. Exit: $START_FLEDGE_CMD_STATUS. 
Out: $START_FLEDGE_CMD_OUT\" \"all\" \"pretty\"\n        exit 1\n    fi\n}\n\n# Find the local timestamp\nfunction local_timestamp\n{\npython3 - <<END\nimport datetime\nvarDateTime = str(datetime.datetime.now(datetime.timezone.utc).astimezone())\nprint (varDateTime)\nEND\n}\n\n# CREATE Audit trail entry for update package\naudit_trail_entry () {\n    AUDIT_PACKAGES_LIST=$(echo $PACKAGES_LIST | sed -e 's/ /, /g')\n    SQL_DATA=\"log(code, level, log) VALUES('PKGUP', 4, '{\\\"packageName\\\": \\\"${AUDIT_PACKAGES_LIST}\\\"}');\"\n    # Find storage engine value\n    STORAGE=`${FLEDGE_ROOT}/services/fledge.services.storage --plugin | awk '{print $1}'`\n    if [ \"${STORAGE}\" = \"postgres\" ]; then\n        INSERT_SQL=\"INSERT INTO fledge.${SQL_DATA}\"\n        SQL_CMD=`psql -d fledge -t -c \"${INSERT_SQL}\"`\n    elif [ \"${STORAGE}\" = \"sqlite\" ] || [ \"${STORAGE}\" = \"sqlitelb\" ]; then\n        INSERT_SQL=\"INSERT INTO ${SQL_DATA}\"\n        SQL_CMD=`sqlite3 ${FLEDGE_DATA}/fledge.db \"${INSERT_SQL}\"`\n    else\n        write_log \"\" \"$0\" \"err\" \"Bad storage engine found: ${STORAGE}\" \"all\" \"pretty\"\n        exit 1\n    fi\n\n    ADD_AUDIT_LOG_STATUS=\"$?\"\n    if [ \"$ADD_AUDIT_LOG_STATUS\" != \"0\" ]; then\n        $(rm -rf ${FLEDGE_DATA}/.upgradable)\n        write_log \"\" \"$0\" \"err\" \"Failed on execution of ${INSERT_SQL}. 
Exit: ${ADD_AUDIT_LOG_STATUS}.\" \"all\" \"pretty\"\n        exit 1\n    else\n        $(rm -rf ${FLEDGE_DATA}/.upgradable)\n        msg=\"Audit trail entry created for '${AUDIT_PACKAGES_LIST}' packages upgrade!\"\n        write_log \"\" \"$0\" \"info\" \"$msg\" \"all\" \"pretty\"\n    fi\n}\n\n# UPDATE task record entry on completion for given schedule name\nupdate_task() {\n    SCHEDULE_NAME=\"Fledge updater on demand\"\n    TASK_STATE_COMPLETE=\"2\"\n    EXIT_CODE=\"0\"\n    TIMESTAMP=$(local_timestamp)\n    SQL_QUERY=\"SET state='$TASK_STATE_COMPLETE',exit_code='$EXIT_CODE',end_time='$TIMESTAMP' WHERE schedule_name='$SCHEDULE_NAME';\"\n\n    # Find storage engine value\n    STORAGE=`${FLEDGE_ROOT}/services/fledge.services.storage --plugin | awk '{print $1}'`\n    if [ \"${STORAGE}\" = \"postgres\" ]; then\n        UPDATE_SQL_QUERY=\"UPDATE fledge.tasks ${SQL_QUERY}\"\n        SQL_CMD=`psql -d fledge -t -c \"${UPDATE_SQL_QUERY}\"`\n    elif [ \"${STORAGE}\" = \"sqlite\" ] || [ \"${STORAGE}\" = \"sqlitelb\" ]; then\n        UPDATE_SQL_QUERY=\"UPDATE tasks ${SQL_QUERY}\"\n        SQL_CMD=`sqlite3 ${FLEDGE_DATA}/fledge.db \"${UPDATE_SQL_QUERY}\"`\n    else\n        write_log \"\" \"$0\" \"err\" \"Bad storage engine found: ${STORAGE}\" \"all\" \"pretty\"\n        exit 1\n    fi\n\n    UPDATE_TASK_STATUS=\"$?\"\n    if [ \"$UPDATE_TASK_STATUS\" != \"0\" ]; then\n        write_log \"\" \"$0\" \"err\" \"Failed on execution of ${UPDATE_SQL_QUERY} in engine '${STORAGE}'. 
Exit: $UPDATE_TASK_STATUS.\" \"all\" \"pretty\"\n        exit 1\n    else\n        msg=\"'$SCHEDULE_NAME' task state updated successfully.\"\n        write_log \"\" \"$0\" \"info\" \"$msg\" \"all\" \"pretty\"\n    fi\n}\n\n# Upgrade check\nupgrade_check() {\n    # Find the upgradable list of fledge packages\n    UPGRADABLE_LIST=\"sudo apt list --upgradable | grep ^fledge\"\n    UPGRADE_CMD_OUT=$(eval $UPGRADABLE_LIST)\n    UPGRADE_CMD_STATUS=\"$?\"\n    if [ \"$UPGRADE_CMD_STATUS\" != \"0\" ]; then\n        write_log \"\" \"$0\" \"info\" \"No new Fledge packages to upgrade.\" \"all\" \"pretty\"\n        echo 0\n    else\n        while IFS= read -r line\n        do\n            if [[ \"$line\" == fledge* ]]; then\n                pkg=$(echo $line | cut -d \"/\" -f 1)\n                PACKAGES_LIST+=\" $pkg\"\n            fi\n        done < <(printf '%s\\n' \"$UPGRADE_CMD_OUT\")\n        echo $PACKAGES_LIST > ${FLEDGE_DATA}/.upgradable\n        echo 1\n    fi\n}\n\n# Main\n\nDO_UPDATE=$(run_update)\n\nDO_UPGRADE=$(upgrade_check)\n\nif [ \"$DO_UPGRADE\" = \"1\" ]; then\n\t# Stop Fledge\n\tfledge_stop\n\n\t# Now run Package upgrade\n\trun_upgrade\n\n\t# Start Fledge\n\tfledge_start\nfi\n\nif [ \"$UPGRADE_DONE\" = \"Y\" ]; then\n    # Audit log entry\n    audit_trail_entry\nfi\n\n# Update Task Record\nupdate_task\n"
  },
  {
    "path": "scripts/extras/update_task.snappy",
    "content": "#!/bin/bash\n\n##\n# This script has been copied into Fledge 'scripts/tasks' by the Snap build process\n#\n# It has to be called by Fledge scheduler for updating Fledge installed as Snap package\n##\n\n__author__=\"Massimiliano Pinto\"\n__copyright__=\"Copyright (c) 2018 OSIsoft, LLC\"\n__license__=\"Apache 2.0\"\n__version__=\"1.1\"\n\n# Set the default value for FLEDGE_ROOT if not set\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n\tFLEDGE_ROOT=/usr/local/fledge\nfi\n\n# Check availability of FLEDGE_ROOT directory\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n        logger -p local0.err -t \"Fledge[${$}]\" \"${TASK_NAME} $0 home directory missing or incorrectly set environment\"\n        exit 1\nfi\n\n# Handle input parameters\nwhile [ \"$1\" != \"\" ]; do\n    PARAM=`echo $1 | awk -F= '{print $1}'`\n    VALUE=`echo $1 | awk -F= '{print $2}'`\n    case $PARAM in\n        --port)\n            CORE_PORT=$VALUE\n            ;;\n        --address)\n            CORE_ADDRESS=$VALUE\n            ;;\n        --name)\n            TASK_NAME=$VALUE\n            ;;\n        *)\n        ;;\n    esac\n    shift\ndone\n\n# Include logging: it works only with bash\n. \"${FLEDGE_ROOT}/scripts/common/write_log.sh\" || exit 1\n\n# Abort on missing CORE_PORT \nif [ ! \"${CORE_PORT}\" ]; then\n\twrite_log \"Fledge[${$}]\" \"err\" \"${TASK_NAME} $0: missing '--port' option. Exiting\" \"all\" \"pretty\"\n\texit 1\nfi\n\n# Set 'localhost' if CORE_ADDRESS is not set\nif [ ! \"${CORE_ADDRESS}\" ]; then\n\tCORE_ADDRESS=\"localhost\"\nfi\n\nREST_API_SCHEME=\"https://\"\nCORE_BASE_URL=\"${REST_API_SCHEME}${CORE_ADDRESS}:${CORE_PORT}/fledge/service\"\n\n\n# Check Fledge Core is running at CORE_ADDRESS, CORE_PORT with HTTPS first\nCHECK_SERVICE=`curl -s -k --max-time 30 \"${CORE_BASE_URL}/ping\" | grep -i uptime`\n\nif [ ! 
\"${CHECK_SERVICE}\" ]; then\n\t# No reply using HTTPS, so try HTTP now\n\tREST_API_SCHEME=\"http://\"\n\tCORE_BASE_URL=\"${REST_API_SCHEME}${CORE_ADDRESS}:${CORE_PORT}/fledge/service\"\n\t# Check Fledge Core is running at CORE_ADDRESS, CORE_PORT with HTTPP\n\tCHECK_SERVICE=`curl -s -k --max-time 30 \"${CORE_BASE_URL}/ping\" | grep -i uptime`\n\t# Exit on error\n\tif [ ! \"${CHECK_SERVICE}\" ]; then\n\t\twrite_log \"Fledge[${$}]\" \"err\" \"${TASK_NAME} $0: cannot connect to Fledge Core [${REST_API_SCHEME}${CORE_ADDRESS}:${CORE_PORT}]. Exiting\" \"all\" \"pretty\"\n\t\texit 1\n\tfi\nfi\n\n# Get the Fledge Core details\nCORE_SERVICE_URL=\"${CORE_BASE_URL}?name=Fledge%20Core\"\nSERVICE_INFO=`curl -s -k --max-time 30 \"${CORE_SERVICE_URL}\"`\nCHECK_SERVICE=`echo \"${SERVICE_INFO}\" | grep -i \"Fledge Core\"`\n\n# Check for errors\nif [ ! \"${CHECK_SERVICE}\" ]; then\n\twrite_log \"Fledge[${$}]\" \"err\" \"${TASK_NAME} $0: cannot get Fledge Core details from [${CORE_SERVICE_URL}]. Exiting.\"\n\texit 2\nfi\n\n# Add fledge python path to PYTHONPATH\nexport PYTHONPATH=\"${PYTHONPATH}:${FLEDGE_ROOT}/scripts/common\"\n\n# On succes 'REST_API_URL' var holds the REST API server URL\nREST_API_URL=`echo ${SERVICE_INFO} | python3 -m json_parse get_rest_api_url`\nfind_api_server=$?\n\n# Check ret code\nif [ \"${find_api_server}\" -ne 0 ]; then\n    write_log \"Fledge[${$}]\" \"err\" \"${TASK_NAME} $0: Fledge API URL [${REST_API_URL}]\" \"all\" \"pretty\"\n    exit 2\nfi\n\n# This task is responsible for updating Fledge installed as Snap package\nPACKAGE_UPDATE_KEY=\"SNAP_UPD\"\n\n# Check/create SNAP configuration in Fledge with a JSON payload\nJSON_CATEGORY_PAYLOAD=\"{\\\"key\\\": \\\"${PACKAGE_UPDATE_KEY}\\\", \\\n                        \\\"description\\\":\\\"Snap Update process\\\", \\\n                        \\\"value\\\": {\\\n                            \\\"repository\\\": {\\\n                                \\\"description\\\": \\\"Remote repository for package 
manager\\\", \\\n                                \\\"type\\\": \\\"string\\\", \\\n                                \\\"default\\\": \\\"https://s3.amazonaws.com/fledge\\\"}, \\\n                            \\\"package\\\": {\\\n                                \\\"description\\\": \\\"Package manager type\\\", \\\n                                \\\"type\\\": \\\"string\\\", \\\n                                \\\"default\\\": \\\"Snap\\\"} \\\n                            }\\\n                       }\"\n\nwrite_log \"Fledge[${$}]\" \"info\" \"${TASK_NAME} $0: Fledge API URL is [${REST_API_URL}]\" \"all\" \"pretty\"\n\n# Execute check/create categoria via REST API\nAPI_OUTPUT=`curl -s -k --max-time 30 -d \"${JSON_CATEGORY_PAYLOAD}\" -X POST \"${REST_API_URL}/fledge/category\"`\n\n# Verify we have created or the category exists\nRET=`echo $API_OUTPUT | python3 -m json_parse get_category_key ${PACKAGE_UPDATE_KEY}`\nret_code=$?\nif [ \"${ret_code}\" -ne 0 ] || [ \"${RET}\" != \"${PACKAGE_UPDATE_KEY}\" ]; then\n\twrite_log \"Fledge[${$}]\" \"err\" \"${TASK_NAME} $0: error checking/creating configuration via REST API [${API_OUTPUT}], [${RET}]\" \"all\" \"pretty\"\n\texit 1\nfi\n\n# Get 'repository' value from JSON\nREPO_URL=`echo ${API_OUTPUT} | python3 -m json_parse get_category_item_value repository`\nret_code=$?\nif [ ! \"${REPO_URL}\" ] || [ \"${REPO_URL}\" = \"null\" ] || [ \"${ret_code}\" -ne 0 ]; then\n\tREPO_URL=`echo ${API_OUTPUT} | python3 -m json_parse get_category_item_default repository`\n\tret_code=$?\n\tif [ ! 
\"${REPO_URL}\" ] || [ \"${REPO_URL}\" = \"null\" ] || [ \"${ret_code}\" -ne 0 ]; then\n    \t\twrite_log \"Fledge[${$}]\" \"err\" \"${TASK_NAME} $0: cannot get package 'repository' info: [${REPO_URL}]\" \"all\" \"pretty\"\n\t\texit 3\n\tfi\nfi\nwrite_log \"Fledge[${$}]\" \"info\" \"${TASK_NAME} $0: Package update repository is [${REPO_URL}], type [Snap]\" \"all\" \"pretty\"\n\n#\n# We run the updater script\n#\n# Note: the called script must ignore some signals; TERM, INT, HUP, QUIT \n#\n\ncommand=\"${FLEDGE_ROOT}/scripts/common/snap-get.sh upgrade fledge --devmode --manage --repo=${REPO_URL}\"\n\n# We log options with -- using ''. '--manage'\nwrite_log \"Fledge[${$}]\" \"info\" \"${TASK_NAME} $0: Calling '${FLEDGE_ROOT}/scripts/common/snap-get.sh'\"\\\n\" 'upgrade' 'fledge' '--devmode' '--manage' '--repo=${REPO_URL}'\" \"all\" \"pretty\"\n\n# Disconnect script execution from shell\nnohup $command </dev/null >/dev/null 2>&1 &\n"
  },
  {
    "path": "scripts/extras/update_task.yum",
    "content": "#!/bin/bash\n\n##\n# Installation process creates a link file, named \"scripts/tasks/update\".\n#\n# It may either be called by Fledge scheduler for updating Fledge package and it may also be called\n# manually via /usr/local/fledge/bin/fledge_update script.\n#\n# Pre-requisites:\n# 1. To add the fledge repository to the yum package manager run the command:\n#        sudo rpm --import http://archives.fledge-iot.org/RPM-GPG-KEY-fledge\n# 2. Add the repository location to your sources list.\n#   a) Create a file called fledge.repo in the directory /etc/yum.repos.d and add the following content:\n#        Below example for CentOS Stream 9 64bit machine\n#        [fledge]\n#        name=fledge Repository\n#        baseurl=http://archives.fledge-iot.org/latest/centos-stream-9/x86_64/\n#        enabled=1\n#        gpgkey=http://archives.fledge-iot.org/RPM-GPG-KEY-fledge\n#        gpgcheck=1\n#   b) There are a few more pre-requisites that need to be installed\n#        sudo yum install -y epel-release\n#\n##\n\n__author__=\"Ashish Jabble, Massimiliano Pinto\"\n__copyright__=\"Copyright (c) 2024, Dianomic Systems Inc.\"\n__license__=\"Apache 2.0\"\n__version__=\"1.1\"\n\n\n# Set the default value for FLEDGE_ROOT if not set\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n    export FLEDGE_ROOT='/usr/local/fledge'\nfi\n\n# Set the default value for FLEDGE_DATA if not set\nif [ \"${FLEDGE_DATA}\" = \"\" ]; then\n    export FLEDGE_DATA=${FLEDGE_ROOT}/data\nfi\n\n# Include logging: it works only with bash\n. \"${FLEDGE_ROOT}/scripts/common/write_log.sh\" || exit 1\n\n# Ignore signals: 1-SIGHUP, 2-SIGINT, 3-SIGQUIT, 6-SIGABRT, 15-SIGTERM\ntrap \"\" 1 2 3 6 15\n\n# Check availability of FLEDGE_ROOT directory\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n    write_log \"\" \"$0\" \"err\" \"home directory missing or incorrectly set environment.\" \"logonly\"\n    exit 1\nfi\n\n# Check availability of FLEDGE_DATA directory\nif [ ! 
-d \"${FLEDGE_DATA}\" ]; then\n    write_log \"\" \"$0\" \"err\" \"Data directory is missing or incorrectly set environment.\" \"logonly\"\n    exit 1\nfi\n\n# Set the PYTHONPATH\nexport PYTHONPATH=$FLEDGE_ROOT/python\n\nUPGRADE_DONE=\"N\"\n\n# Fledge STOP\nfledge_stop() {\n    STOP_FLEDGE_CMD=\"${FLEDGE_ROOT}/bin/fledge stop\"\n    STOP_FLEDGE_CMD_STATUS=$($STOP_FLEDGE_CMD)\n    sleep 15\n    if [ \"${STOP_FLEDGE_CMD_STATUS}\" = \"\" ]; then\n        write_log \"\" \"$0\" \"err\" \"cannot run \\\"${STOP_FLEDGE_CMD}\\\" command.\" \"logonly\"\n        exit 1\n    fi\n}\n\n# Commands for Packages to update\nrun_update() {\n    # Download and update the package information from all of the configured sources\n    UPDATE_CMD=\"sudo yum -y check-update\"\n    write_log \"\" \"$0\" \"info\" \"Executing ${UPDATE_CMD} command...\" \"logonly\"\n    UPDATE_CMD_OUT=$($UPDATE_CMD)\n    UPDATE_CMD_STATUS=\"$?\"\n    if [ \"$UPDATE_CMD_STATUS\" -ne \"0\" ]; then\n        # check-update can return exit code 100 when updates are available\n        if [ \"$UPDATE_CMD_STATUS\" -ne \"100\" ]; then\n            write_log \"\" \"$0\" \"err\" \"Failed on $UPDATE_CMD. Exit: $UPDATE_CMD_STATUS. Out: $UPDATE_CMD_OUT\" \"all\" \"pretty\"\n            exit 1\n        fi\n    fi\n}\n\nrun_upgrade() {\n    # Upgrade Packages\n    PACKAGES_LIST=$(cat ${FLEDGE_DATA}/.upgradable)\n    UPGRADE_CMD=\"sudo yum -y upgrade $PACKAGES_LIST\"\n    write_log \"\" \"$0\" \"info\" \"Executing upgrade...\" \"logonly\"\n    UPGRADE_CMD_OUT=$($UPGRADE_CMD)\n    UPGRADE_CMD_STATUS=\"$?\"\n    if [ \"$UPGRADE_CMD_STATUS\" -ne \"0\" ]; then\n        $(rm -rf ${FLEDGE_DATA}/.upgradable)\n        write_log \"\" \"$0\" \"err\" \"Failed on $UPGRADE_CMD. Exit: $UPGRADE_CMD_STATUS. 
Out: $UPGRADE_CMD_OUT\" \"all\" \"pretty\"\n        exit 1\n    fi\n    msg=\"'$PACKAGES_LIST' packages upgraded successfully!\"\n    write_log \"\" \"$0\" \"info\" \"$msg\" \"all\" \"pretty\"\n    UPGRADE_DONE=\"Y\"\n}\n\n# Fledge START\nfledge_start() {\n    START_FLEDGE_CMD=\"${FLEDGE_ROOT}/bin/fledge start\"\n    START_FLEDGE_CMD_OUT=$($START_FLEDGE_CMD)\n    START_FLEDGE_CMD_STATUS=\"$?\"\n    if [ \"$START_FLEDGE_CMD_OUT\" = \"\" ]; then\n        write_log \"\" \"$0\" \"err\" \"Failed on $START_FLEDGE_CMD. Exit: $START_FLEDGE_CMD_STATUS. Out: $START_FLEDGE_CMD_OUT\" \"all\" \"pretty\"\n        exit 1\n    fi\n}\n\n# Find the local timestamp\nfunction local_timestamp\n{\npython3 - <<END\nimport datetime\nvarDateTime = str(datetime.datetime.now(datetime.timezone.utc).astimezone())\nprint (varDateTime)\nEND\n}\n\n# CREATE Audit trail entry for update package\naudit_trail_entry () {\n    AUDIT_PACKAGES_LIST=$(echo $PACKAGES_LIST | sed -e 's/ /, /g')\n    SQL_DATA=\"log(code, level, log) VALUES('PKGUP', 4, '{\\\"packageName\\\": \\\"${AUDIT_PACKAGES_LIST}\\\"}');\"\n    # Find storage engine value\n    STORAGE=$(${FLEDGE_ROOT}/services/fledge.services.storage --plugin | awk '{print $1}')\n    if [ \"${STORAGE}\" = \"postgres\" ]; then\n        INSERT_SQL=\"INSERT INTO fledge.${SQL_DATA}\"\n        SQL_CMD=$(psql -d fledge -t -c \"${INSERT_SQL}\")\n    elif [ \"${STORAGE}\" = \"sqlite\" ] || [ \"${STORAGE}\" = \"sqlitelb\" ]; then\n        INSERT_SQL=\"INSERT INTO ${SQL_DATA}\"\n        SQL_CMD=$(sqlite3 ${FLEDGE_DATA}/fledge.db \"${INSERT_SQL}\")\n    else\n        write_log \"\" \"$0\" \"err\" \"Bad storage engine found: ${STORAGE}\" \"all\" \"pretty\"\n        exit 1\n    fi\n\n    ADD_AUDIT_LOG_STATUS=\"$?\"\n    if [ \"$ADD_AUDIT_LOG_STATUS\" -ne \"0\" ]; then\n        $(rm -rf ${FLEDGE_DATA}/.upgradable)\n        write_log \"\" \"$0\" \"err\" \"Failed on execution of ${INSERT_SQL}. 
Exit: ${ADD_AUDIT_LOG_STATUS}.\" \"all\" \"pretty\"\n        exit 1\n    else\n        $(rm -rf ${FLEDGE_DATA}/.upgradable)\n        msg=\"Audit trail entry created for '${AUDIT_PACKAGES_LIST}' packages upgrade!\"\n        write_log \"\" \"$0\" \"info\" \"$msg\" \"all\" \"pretty\"\n    fi\n}\n\n# UPDATE task record entry on completion for given schedule name\nupdate_task() {\n    SCHEDULE_NAME=\"Fledge updater on demand\"\n    TASK_STATE_COMPLETE=\"2\"\n    EXIT_CODE=\"0\"\n    TIMESTAMP=$(local_timestamp)\n    SQL_QUERY=\"SET state='$TASK_STATE_COMPLETE',exit_code='$EXIT_CODE',end_time='$TIMESTAMP' WHERE schedule_name='$SCHEDULE_NAME';\"\n\n    # Find storage engine value\n    STORAGE=$(${FLEDGE_ROOT}/services/fledge.services.storage --plugin | awk '{print $1}')\n    if [ \"${STORAGE}\" = \"postgres\" ]; then\n        UPDATE_SQL_QUERY=\"UPDATE fledge.tasks ${SQL_QUERY}\"\n        SQL_CMD=$(psql -d fledge -t -c \"${UPDATE_SQL_QUERY}\")\n    elif [ \"${STORAGE}\" = \"sqlite\" ] || [ \"${STORAGE}\" = \"sqlitelb\" ]; then\n        UPDATE_SQL_QUERY=\"UPDATE tasks ${SQL_QUERY}\"\n        SQL_CMD=$(sqlite3 ${FLEDGE_DATA}/fledge.db \"${UPDATE_SQL_QUERY}\")\n    else\n        write_log \"\" \"$0\" \"err\" \"Bad storage engine found: ${STORAGE}\" \"all\" \"pretty\"\n        exit 1\n    fi\n\n    UPDATE_TASK_STATUS=\"$?\"\n    if [ \"$UPDATE_TASK_STATUS\" -ne \"0\" ]; then\n        write_log \"\" \"$0\" \"err\" \"Failed on execution of ${UPDATE_SQL_QUERY}. 
Exit: $UPDATE_TASK_STATUS.\" \"all\" \"pretty\"\n        exit 1\n    else\n        msg=\"'$SCHEDULE_NAME' task state updated successfully.\"\n        write_log \"\" \"$0\" \"info\" \"$msg\" \"all\" \"pretty\"\n    fi\n}\n\n# Upgrade check\nupgrade_check() {\n    # Find the upgradable list of fledge packages\n    UPGRADABLE_LIST=\"sudo yum list updates  | grep ^fledge\"\n    UPGRADE_CMD_OUT=$(eval $UPGRADABLE_LIST)\n    UPGRADE_CMD_STATUS=\"$?\"\n    if [ \"$UPGRADE_CMD_STATUS\" -ne \"0\" ]; then\n        write_log \"\" \"$0\" \"info\" \"No new Fledge packages to upgrade.\" \"all\" \"pretty\"\n        echo 0\n    else\n        while IFS= read -r line\n        do\n            if [[ \"$line\" == fledge* ]]; then\n                pkg=$(echo $line | cut -d \" \" -f 1 | cut -d '.' -f 1)\n                PACKAGES_LIST+=\" $pkg\"\n            fi\n        done < <(printf '%s\\n' \"$UPGRADE_CMD_OUT\")\n        echo $PACKAGES_LIST > ${FLEDGE_DATA}/.upgradable\n        echo 1\n    fi\n}\n\n# Main\n\nrun_update\n\nDO_UPGRADE=$(upgrade_check)\n\nif [ \"$DO_UPGRADE\" = \"1\" ]; then\n        # Stop Fledge\n        fledge_stop\n\n        # Now run Package upgrade\n        run_upgrade\n\n        # Start Fledge\n        fledge_start\nfi\n\nif [ \"$UPGRADE_DONE\" = \"Y\" ]; then\n    # Audit log entry\n    audit_trail_entry\nfi\n\n# Update Task Record\nupdate_task\n"
  },
  {
    "path": "scripts/fledge",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2017-2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\nset -e\n#set -x\n\n#\n# This is the startup script for fledge\n#\nUSAGE=\"Usage: `basename ${0}` <-h> <-u username> <-p password> <-c certificate> {start|start --safe-mode|stop|status|reset|purge|kill|healthcheck|help|version}\"\n\n# Check FLEDGE_ROOT\nif [ -z ${FLEDGE_ROOT+x} ]; then\n    # Set FLEDGE_ROOT as the default directory\n    FLEDGE_ROOT=\"/usr/local/fledge\"\n    export FLEDGE_ROOT\nfi\n\n# Check if the default directory exists\nif [[ ! -d \"${FLEDGE_ROOT}\" ]]; then\n    logger -p local0.err -t \"fledge.script.fledge\" \"Fledge cannot be executed: ${FLEDGE_ROOT} is not a valid directory.\"\n    echo \"Fledge cannot be executed: ${FLEDGE_ROOT} is not a valid directory.\"\n    echo \"Create the enviroment variable FLEDGE_ROOT before using Fledge.\"\n    echo \"Specify the base directory for Fledge and set the variable with:\"\n    echo \"export FLEDGE_ROOT=<basedir>\"\n    exit 1\nfi\n\nif [[ ! 
-e  \"${FLEDGE_ROOT}/scripts/common/get_platform.sh\" ]]; then\n\n\tmsg_text=\"ERROR: Fledge not properly installed in the dir :${FLEDGE_ROOT}:\"\n\techo $msg_text\n\tlogger -p local0.err $msg_text\n\n\texit 1\nfi\n\n# Include common code\nsource \"${FLEDGE_ROOT}/scripts/common/get_platform.sh\"\n\nPLATFORM=`get_platform`\nIS_RHEL=`echo $PLATFORM | egrep '(Red Hat|CentOS)' || echo \"\"`\nos_version=`(grep -o '^VERSION_ID=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')`\n\nif [[ \"$IS_RHEL\" != \"\" ]]\nthen\n\t# platform RedHat/Centos\n\tif [[ \"$os_version\" == *\"7\"* ]]\n\tthen\n\t\t# To avoid to stop the execution for any internal error of scl_source\n\t\tset +e\n\n\t\tsource scl_source enable rh-python36\n\t\tstatus=$?\n\n\t\tif [[ \"$status\" != \"0\" ]]\n\t\tthen\n\t\t\tmsg_text=\"ERROR: Fledge cannot enable the rh-python36 environment in RedHat/CentOS platform.\"\n\t\t\tlogger -p local0.err $msg_text\n\t\t\techo $msg_text\n\t\t\texit 1\n\t\tfi\n\n\t\t#\n\t\t# Enables the RedHat Postgres environment if available\n\t\t#\n\t\trhpg_package=\"rh-postgresql13\"\n\t\tsource scl_source enable ${rhpg_package} > /dev/null\n\t\tstatus=$?\n\n\t\tif [[ \"$status\" == \"0\" ]]\n\t\tthen\n\t\t\tpg_isready_check=$(command -v pg_isready)\n\t\t\tstatus=$?\n\n\t\t\tif [[ \"$status\" == \"0\" ]]\n\t\t\tthen\n\t\t\t\trhpg_path=${pg_isready_check/\\/bin\\/pg_isready/}\n\n\t\t\t\texport LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${rhpg_path}/lib64\n\n\t\t\t\tmsg_text=\"INFO: Fledge enabled ${rhpg_package} using the path ${rhpg_path}.\"\n\t\t\t\tlogger -p local0.info $msg_text\n\t\t\telse\n\t\t\t\tmsg_text=\"ERROR: Fledge cannot use the ${rhpg_package} environment, the package is installed/available but not the pg_isready command.\"\n\t\t\t\tlogger -p local0.err $msg_text\n\t\t\t\techo $msg_text\n\t\t\tfi\n\t\tfi\n\n\t\tset -e\n\tfi\nelse\n\t# platform Debian/Ubuntu\n\t:\nfi\n\n# Check/set LD_LIBRARY_PATH\nlibPathSet=0\nlibdir=${FLEDGE_ROOT}/lib; [ -d ${libdir} ] && 
LD_LIBRARY_PATH=$(echo $LD_LIBRARY_PATH | sed \"s|${libdir}||g\") && export LD_LIBRARY_PATH=${libdir}:${LD_LIBRARY_PATH} && libPathSet=1\nlibdir=${FLEDGE_ROOT}/cmake_build/C/lib; [ -d ${libdir} ] && LD_LIBRARY_PATH=$(echo $LD_LIBRARY_PATH | sed \"s|${libdir}||g\") && export LD_LIBRARY_PATH=${libdir}:${LD_LIBRARY_PATH} && libPathSet=1\n[ \"$libPathSet\" -eq \"0\" ] && echo \"Unable to set/update LD_LIBRARY_PATH to include path of Fledge shared libraries: check whether ${FLEDGE_ROOT}/lib or ${FLEDGE_ROOT}/cmake_build/C/lib exists\" && exit 1\n# RHEL stores some libraries under /usr/local/lib64\nexport LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib:/usr/local/lib64\n\n##########\n## INCLUDE SECTION\n##########\n. $FLEDGE_ROOT/scripts/common/write_log.sh\n\n\n## Check the Storage management script\ncheck_storage_management_script() {\n\n  if [[ ! -e \"$FLEDGE_ROOT/scripts/storage\" ]]; then\n      fledge_log \"info\" \"Fledge cannot ${1}.\" \"all\" \"pretty\"\n      fledge_log \"err\" \"Fledge Storage Plugin script not found.\" \"all\" \"pretty\"\n      exit 1\n  fi\n\n}\n\n## Logger wrapper\nfledge_log() {\n    write_log \"\" \"script.fledge\" \"$1\" \"$2\" \"$3\" \"$4\"\n}\n\n\n## Fledge Reset\n## Reset means that the database is removed and all the data will be lost!\nfledge_reset() {\n\n  # Check the storage management script\n  check_storage_management_script \"be reset\"\n\n  # We could have made it easier here, we will improve it later.\n  # For now, check the status of faoglamp, since the server must be down\n  result=`fledge_status \"silent\"`\n\n  if [[ $result != \"2\" ]]; then\n      fledge_log \"info\" \"Fledge appears to be running and it cannot be reset. 
Stop Fledge first.\" \"all\" \"pretty\"\n    exit 0\n  fi\n\n  # Execute the Storage Plugin Script\n  # NOTE: this script prepares the storage,\n  #       but it does not start the microservice\n  source \"$FLEDGE_ROOT/scripts/storage\" reset\n\n  # Remove user data: scripts\n  rm -rf \"$FLEDGE_DATA/scripts\"\n  echo \"Removed user data from $FLEDGE_DATA/scripts\"\n  # Remove user data: logs\n  rm -rf ${FLEDGE_DATA}/logs/*\n  echo \"Removed user data from $FLEDGE_DATA/logs\"\n  # Remove user data: var\n  if [[ -d \"$FLEDGE_DATA/var\" ]]; then\n    find \"$FLEDGE_DATA/var\" -depth -type f -exec rm {} \\;\n    echo \"Removed user data from $FLEDGE_DATA/var\"\n  fi\n  # Remove user data: extras, with one exclusion\n  find \"$FLEDGE_DATA/extras\" -depth -type f -not -name fogbench_sensor_coap.template.json -exec rm {} \\;\n  echo \"Removed user data from $FLEDGE_DATA/extras\"\n  # Remove user etc/kerberos: extras, with one exclusion\n  if [[ -d \"$FLEDGE_DATA/etc/kerberos\" ]]; then\n    find \"$FLEDGE_DATA/etc/kerberos\" -depth -type f -not -name \"README.rst\" -exec rm {} \\;\n  fi\n  # Remove user etc/certs: extras, with exclusions\n  find \"$FLEDGE_DATA/etc/certs\" -depth -type f -not -name user.* -not -name fledge.* -not -name ca.* -not -name admin.* -exec rm {} \\;\n  # Remove user etc/ files: extras, with one exclusion\n  find \"$FLEDGE_DATA/etc/\" -maxdepth 1 -type f -not -name storage.json -exec rm {} \\;\n  echo \"Removed user data from $FLEDGE_DATA/etc\"\n  # Remove core.err file\n  rm -f \"$FLEDGE_DATA/core.err\"\n  echo \"Removed core.err from $FLEDGE_DATA\"\n  # Remove backup, snapshots, support and CSV files\n  rm -rf \"$FLEDGE_DATA/snapshots\"\n  echo \"Removed user data from $FLEDGE_DATA/snapshots\"\n  rm -rf \"$FLEDGE_DATA/backup\"\n  echo \"Removed user data from $FLEDGE_DATA/backup\"\n  rm -rf \"$FLEDGE_DATA/support\"\n  echo \"Removed user data from $FLEDGE_DATA/support\"\n  find \"$FLEDGE_DATA\" -depth -type f -name *.csv -exec rm {} \\;\n  echo 
\"Removed user CSV files from $FLEDGE_DATA\"\n}\n\n\n## Fledge Start\nfledge_start() {\n\n    # Remove any token cache left over from a previous execution\n    rm -f ~/.fledge_token\n\n    # Check the storage management script\n    check_storage_management_script \"start\"\n\n    # Check the Python environment\n    if ! [[ -x \"$(command -v python3)\" ]]; then\n        fledge_log \"err\" \"Python interpreter not found, Fledge cannot start.\" \"all\" \"pretty\"\n        exit 1\n    fi\n\n    # Execute the Storage Plugin Script\n    # NOTE: this script prepares the storage,\n    #       but it does not start the microservice\n    # Pass FLEDGE_SCHEMA to 'storage' script\n    source \"$FLEDGE_ROOT/scripts/storage\" start ${FLEDGE_SCHEMA}\n\n    result=`fledge_status \"silent\"`\n    case \"$result\" in\n\n        \"0\")\n            #Fledge already running\n            fledge_log \"info\" \"Fledge is already running.\" \"all\" \"pretty\"\n            ;;\n\n        \"1\")\n            #Fledge already running - starting\n            fledge_log \"info\" \"Fledge is already starting.\" \"all\" \"pretty\"\n            ;;\n\n        \"2\")\n            #Fledge not running\n            if [[ \"$SAFE_MODE\" == \"safe-mode\" ]]; then\n                echo -n \"Starting Fledge v${FLEDGE_VERSION} in safe mode.\"\n            else\n                echo -n \"Starting Fledge v${FLEDGE_VERSION}.\"\n            fi\n            PYTHONPATH=${FLEDGE_ROOT}/python\n            export PYTHONPATH\n            if [[ ! 
-e \"$PYTHONPATH/fledge/services/core/__main__.py\" ]]; then\n                fledge_log \"err\" \"Fledge core not found.\" \"all\" \"pretty\"\n                exit 1\n            fi\n\n            python3 -m fledge.services.core \"$SAFE_MODE\" > /dev/null 2> \"$FLEDGE_DATA/core.err\" & disown\n\n            attempts=60\n            while [[ $attempts -gt 0 ]]; do\n                sleep 1\n                new_attempt=`fledge_status \"silent\"`\n                case \"$new_attempt\" in\n\n                  \"0\")  # Started\n                    echo\n                    fledge_log \"info\" \"Fledge started.\" \"all\" \"pretty\"\n                    attempts=0\n                    break\n                    ;;\n\n                  \"1\")  # Starting\n                    attempts=$((attempts - 1))\n                    # Check the status of the attempts - is the time over?\n                    if [[ attempts -gt 0 ]]; then\n\n                      # Print an extra dot\n                      echo -n \".\"\n\n                    else\n\n                      # Time is over - exit with error\n                      fledge_log \"err\" \"Fledge cannot start.\" \"all\" \"pretty\"\n                      fledge_log \"err\" \"Number of attempts exceeded: Fledge may be in an inconsistent state.\" \"all\" \"pretty\"\n                      exit 1\n\n                    fi\n                    ;;\n\n                  \"2\")  # Not running\n                    fledge_log \"err\" \"Fledge cannot start.\" \"all\" \"pretty\"\n                    fledge_log \"err\" \"Check ${FLEDGE_DATA}/core.err for more information.\" \"outonly\" \"pretty\"\n                    exit 1\n                    ;;\n\n                  *)\n                    echo \"Result X${new_attempt}X\"\n                    ;;\n                esac\n            done\n            ;;\n\n        *)\n            fledge_log \"err\" \"Unknown return status, $result.\" \"all\"\n            exit 1\n            ;;\n    
esac\n\n}\n\n## Fledge Stop\n#\nfledge_stop() {\n\n  result=`fledge_status \"silent\"`\n\n  if [[ $result = \"2\" ]]; then\n      fledge_log \"info\" \"It looks like Fledge is not running.\" \"all\" \"pretty\"\n      exit 0\n  fi\n\n  result=`curl -k -s -X PUT ${REST_API_URL}/fledge/shutdown`\n\n  if [[ \"${result}\" == \"401\"* ]]; then\n    token=`fledge_authenticate`\n    if [[ \"${token}\" =~ \"failed\" ]]; then\n        fledge_log \"info\" \"Failed authentication when attempting to stop fledge.\" \"all\" \"pretty\"\n        echo \"Authentication failed.\"\n        exit 0\n    fi\n    result=`curl -k -s -H \"authorization: $token\" -X PUT ${REST_API_URL}/fledge/shutdown`\n  fi\n\n  if [[ \"${result}\" =~ \"Fledge shutdown has been scheduled\" ]]; then\n    echo -n \"Stopping Fledge.\"\n  fi\n\n  # Remove any token cache left over from a previous execution\n  rm -f ~/.fledge_token\n\n  attempts=60\n\n  while [[ $attempts -gt 0 ]]; do\n    sleep 1\n    new_attempt=`fledge_status \"silent\"`\n    case \"$new_attempt\" in\n\n      0|1 )  # Still running\n\n        attempts=$((attempts - 1))\n\n        # Check the status of the attempts - is the time over?\n        if [[ $attempts -gt 0 ]]; then\n\n          # Print an extra dot\n          echo -n \".\"\n\n        else\n\n          # Time is over - exit with error\n          fledge_log \"err\" \"Fledge cannot be stopped.\" \"all\" \"pretty\"\n          fledge_log \"err\" \"Number of attempts exceeded: Fledge may be in an inconsistent state.\" \"all\" \"pretty\"\n          exit 1\n\n        fi\n        ;;\n\n      2 )  # Not running\n\n        echo\n        fledge_log \"info\" \"Fledge stopped.\" \"all\" \"pretty\"\n        attempts=0\n        break\n        ;;\n\n      * ) # Unknown status\n        fledge_log \"err\" \"Unknown status, $new_attempt.\" \"all\"\n        exit 1\n        ;;\n\n    esac\n  done\n\n}\n\n## Fledge Kill\n#\n# We know this is not the best way to stop Fledge, but for the moment this is all we have 
got\n#\nfledge_kill() {\n\n    # Check the storage management script\n    if [[ ! -e \"$FLEDGE_ROOT/scripts/storage\" ]]; then\n        fledge_log \"info\" \"Fledge cannot be killed.\" \"all\" \"pretty\"\n        fledge_log \"err\" \"Fledge Storage Plugin script not found.\" \"all\" \"pretty\"\n        exit 1\n    fi\n\n    # Kills the python processes\n    list_to_kill=`ps -ef | grep 'python3 -m fledge' | grep -v 'grep' | grep -v 'backup_restore' | awk '{print $2}'`\n    if [[ \"${list_to_kill}\" != \"\" ]]\n    then\n        echo ${list_to_kill} | xargs kill -9\n    fi\n\n    # Kill the services processes\n    list_to_kill=`ps -ef | grep 'fledge.services' | grep -v 'grep' | awk '{print $2}'`\n    if [[ \"${list_to_kill}\" != \"\" ]]\n    then\n        echo ${list_to_kill} | xargs kill -9\n    fi\n\n    # Kill Fledge tasks - parent tasks\n    list_to_kill=`ps -ef | grep '/bin/sh tasks' | grep -v 'grep' | awk '{print $2}'`\n    if [[ \"${list_to_kill}\" != \"\" ]]\n    then\n        echo ${list_to_kill} | xargs kill -9\n    fi\n\n    # Kill Fledge tasks - child tasks\n    # TODO: improve the mechanism for the recognition of the C tasks\n    list_to_kill=`ps -ef | grep './tasks' | grep -v 'grep' | awk '{print $2}'`\n    if [[ \"${list_to_kill}\" != \"\" ]]\n    then\n        echo ${list_to_kill} | xargs kill -9\n    fi\n\n    # Kill the shell script processes\n    list_to_kill=`ps -ef | grep '/bin/sh services' | grep -v 'grep' | awk '{print $2}'`\n    if [[ \"${list_to_kill}\" != \"\" ]]\n    then\n        echo ${list_to_kill} | xargs kill -9\n    fi\n\n    # Execute the Storage Plugin script\n    # NOTE: This script does not stop the microservice,\n    #       it deals with the database engine.\n    source \"$FLEDGE_ROOT/scripts/storage\" stop\n\n    fledge_log \"info\" \"Fledge killed.\" \"all\" \"pretty\"\n\n}\n\nfledge_debug() {\n\tservice=\"$1\"\n\tresult=`fledge_status \"silent\"`\n\tif [[ $result != \"0\" ]]; then\n\t\tfledge_log \"info\" \"Fledge appears not 
to be running. Debug should be performed on a running system. Start Fledge first.\" \"all\" \"pretty\"\n\t\texit 0\n\tfi\n\t\"${FLEDGE_ROOT}/scripts/debug/debug\" \"$service\"\n}\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -z ${CERT+x} ]]; then\n\tif [[ -t \"$fd\" ]]; then\n\t    # We have an interactive shell\n\t    if [ -z ${USERNAME+x} ]; then\n\t\tread -p \"Username: \" USERNAME\n\t    fi\n\t    if [ -z ${PASSWORD+x} ]; then\n\t\tread -s -p \"Password: \" PASSWORD\n\t\t/bin/echo > /dev/tty\n\t    fi\n\tfi\n    fi\n\n    # Get/update the REST API URL\n    get_rest_api_url\n    if [[ -f ${CERT} ]]; then\n    \tresult=`curl -T ${CERT} -X POST -k -s ${REST_API_URL}/fledge/login --insecure`\n    else\n    \tpayload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    \tresult=`curl -X POST -k -s ${REST_API_URL}/fledge/login -d\"$payload\" || true`\n    fi\n    if [[ ! \"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\n\n\n## Fledge status\n#  NOTE: this command works only with the default REST API Port\n#\n# Echo Output:\n# 0 - Fledge core is running\n# 1 - Fledge core is starting\n# 2 - Fledge core is not running\n#\nfledge_status() {\n\n    # Get/update the REST API URL\n    get_rest_api_url\n    result=`curl -k -s ${REST_API_URL}/fledge/ping || true`\n\n    if [[ \"${result}\" == \"401\"* ]]; then\n      token=`fledge_authenticate`\n      if [[ \"${token}\" =~ \"failed\" ]]; then\n          fledge_log \"info\" \"Failed authentication when attempting to get fledge status.\" \"all\" \"pretty\"\n          echo \"Authentication failed.\"\n          exit 1\n      fi\n      result=`curl -H \"authorization: $token\" -k -s 
${REST_API_URL}/fledge/ping || true`\n    fi\n\n    case \"$result\" in\n\n        *uptime*)\n            if [[ \"$1\" == \"silent\" ]]; then\n                echo \"0\"\n            else\n\n                uptime_sec=`echo ${result} | tr -d ' ' | grep -o '\"uptime\".*' | cut -d\":\" -f2 | cut -d\",\" -f1`\n                record_read=`echo ${result} | tr -d ' ' | grep -o '\"dataRead\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/}//g'`\n                record_sent=`echo ${result} | tr -d ' ' | grep -o '\"dataSent\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/}//g'`\n                record_purged=`echo ${result} | tr -d ' ' | grep -o '\"dataPurged\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/}//g'`\n                auth_opt=`echo ${result} | tr -d ' ' | grep -o '\"authenticationOptional\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/}//g'`\n                safe_mode=`echo ${result} | tr -d ' ' | grep -o '\"safeMode\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/}//g'`\n                if [[ \"${auth_opt}\" == \"true\" ]]; then\n                  req_auth=\"does not require\"\n                else\n                  req_auth=\"requires\"\n                fi\n\n                if [[ \"${safe_mode}\" == \"true\" ]]; then\n                  safe_mode=\" in safe mode\"\n                else\n                  safe_mode=\"\"\n                fi\n\n                fledge_log \"info\" \"Fledge v${FLEDGE_VERSION} running${safe_mode}.\" \"outonly\" \"pretty\"\n                fledge_log \"info\" \"Fledge Uptime:  ${uptime_sec} seconds.\" \"outonly\" \"pretty\"\n                fledge_log \"info\" \"Fledge records: ${record_read} read, ${record_sent} sent, ${record_purged} purged.\" \"outonly\" \"pretty\"\n\n                fledge_log \"info\" \"Fledge ${req_auth} authentication.\" \"outonly\" \"pretty\"\n\n                # Show Services\n                fledge_log \"info\" \"=== Fledge services:\" \"outonly\" \"pretty\"\n                
fledge_log \"info\" \"fledge.services.core\" \"outonly\" \"pretty\"\n                ps -ef | grep \"fledge.services.storage\" | grep -v 'grep' | grep -v awk | awk '{print \"fledge.services.storage \" $9 \" \" $10}' || true\n                ps -ef | grep \"fledge.services.south \" |grep -v python3| grep -v 'grep' | grep -v awk | awk '{printf \"fledge.services.south \"; for(i=9;i<=NF;++i) printf $i FS; printf \"\\n\"}' | sed -e 's/--token.*--name/--name/g' || true\n                ps -ef | grep \"fledge.services.north \" |grep -v python3| grep -v 'grep' | grep -v awk | awk '{printf \"fledge.services.north \"; for(i=9;i<=NF;++i) printf $i FS; printf \"\\n\"}' | sed -e 's/--token.*--name/--name/g' || true\n                ps -ef | grep \"fledge.services.notification\" |grep -v python3| grep -v 'grep' | grep -v awk | awk '{printf \"fledge.services.notification \"; for(i=9;i<=NF;++i) printf $i FS; printf \"\\n\"}' | sed -e 's/--token.*--name/--name/g' || true\n                ps -ef | grep \"fledge.services.dispatcher\" |grep -v python3| grep -v 'grep' | grep -v awk | awk '{printf \"fledge.services.dispatcher \"; for(i=9;i<=NF;++i) printf $i FS; printf \"\\n\"}' | sed -e 's/--token.*--name/--name/g' || true\n                ps -ef | grep \"fledge.services.bucket\" |grep -v python3| grep -v 'grep' | grep -v awk | awk '{printf \"fledge.services.bucket \"; for(i=9;i<=NF;++i) printf $i FS; printf \"\\n\"}' | sed -e 's/--token.*--name/--name/g' || true\n                ps -ef | grep \"fledge.services.pipeline \" |grep -v python3| grep -v 'grep' | grep -v awk | awk '{printf \"fledge.services.pipeline \"; for(i=9;i<=NF;++i) printf $i FS; printf \"\\n\"}' | sed -e 's/--token.*--name/--name/g' || true\n                # Show Python services (except core)\n                ps -ef | grep -o 'python3 -m fledge.services.*' | grep -o 'fledge.services.*' | grep -v 'fledge.services.core' | grep -v 'fledge.services\\.\\*' | sed -e 's/--token.*--name/--name/g' || true\n\n                # 
Show Tasks\n                fledge_log \"info\" \"=== Fledge tasks:\" \"outonly\" \"pretty\"\n                ps -ef | grep -v 'cpulimit*' | grep -o 'python3 -m fledge.tasks.*' | grep -o 'fledge.tasks.*' | grep -v 'fledge.tasks\\.\\*' || true\n\n                # Show Tasks in C code\n                for task_name in `ls ${FLEDGE_ROOT}/tasks`\n                do\n                    ps -ef | grep \"./tasks/$task_name\" | grep -v python3 | grep -v grep | grep -v awk | awk  '{printf \"tasks/'$task_name' \" ; for(i=9;i<=NF;++i) printf $i FS; printf \"\\n\"}' || true\n                done\n            fi\n            ;;\n        *)\n            if [[ `pgrep -c -f 'python3.*-m.*fledge.services.core'` -ne 0 ]]; then\n                if [[ \"$1\" == \"silent\" ]]; then\n                    echo \"1\"\n                else\n                    fledge_log \"info\" \"Fledge starting.\" \"outonly\" \"pretty\"\n                fi\n            else\n                if [[ \"$1\" == \"silent\" ]]; then\n                    echo \"2\"\n                else\n                    fledge_log \"info\" \"Fledge not running.\" \"outonly\" \"pretty\"\n                fi\n            fi\n            ;;\n    esac\n}\n\n\n##\n## Print Fledge Version\n##\nfledge_print_version() {\n    echo \"Fledge version ${FLEDGE_VERSION}, DB schema version ${FLEDGE_SCHEMA}\"\n}\n\n\n##\n## Get Fledge version from VERSION file\n##\nget_fledge_version() {\n    FLEDGE_VERSION_FILE=\"${FLEDGE_ROOT}/VERSION\"\n    FLEDGE_VERSION=`cat ${FLEDGE_VERSION_FILE} | tr -d ' ' | grep -i \"FLEDGE_VERSION=\" | sed -e 's/\\(.*\\)=\\(.*\\)/\\2/g'`\n    FLEDGE_SCHEMA=`cat ${FLEDGE_VERSION_FILE} | tr -d ' ' | grep -i \"FLEDGE_SCHEMA=\" | sed -e 's/\\(.*\\)=\\(.*\\)/\\2/g'`\n\n    if [ ! \"${FLEDGE_VERSION}\" ]; then\n        echo \"Error FLEDGE_VERSION is not set, check [${FLEDGE_VERSION_FILE}]. Exiting.\"\n        return  1\n    fi\n    if [ ! 
\"${FLEDGE_SCHEMA}\" ]; then\n        echo \"Error FLEDGE_SCHEMA is not set, check [${FLEDGE_VERSION_FILE}]. Exiting.\"\n        return 1\n    fi\n}\n\n\n##\n## Get Fledge REST API URL\n##\nget_rest_api_url() {\n    pid_file=${FLEDGE_DATA}/var/run/fledge.core.pid\n    export PYTHONPATH=${FLEDGE_ROOT}\n\n    # Check whether pid_file exists and its contents are not empty\n    if [[ -s ${pid_file} ]]; then\n        REST_API_URL=$(cat ${pid_file} | python3 -m scripts.common.json_parse  get_rest_api_url_from_pid)\n    fi\n\n    # Set a default value if it is not possible to determine the proper value using the pid file\n    if [ ! \"${REST_API_URL}\" ]; then\n        export REST_API_URL=\"http://localhost:8081\"\n    fi\n}\n\n\n##\n## Fledge Script Help\n##\nfledge_help() {\n\n    echo \"${USAGE}\nFledge v${FLEDGE_VERSION} admin script\nThe script is used to manage Fledge\nFlags:\n -u username       - The username to use for authentication\n -p password       - The password to use for authentication\n -c certificate    - The certificate file to use for authentication\n -h                - Print this help text\nArguments:\n start             - Start Fledge core (core will start other services)\n start --safe-mode - Start in safe mode (only core and storage services will be started)\n stop              - Stop all Fledge services and processes\n kill              - Kill all Fledge services and processes\n status            - Show the status for the Fledge services\n debug <service>   - Debug the pipeline of a south or north service\n healthcheck       - Perform a number of checks on the health of the system, Fledge must be running\n reset             - Restore Fledge factory settings\n                     WARNING! This command will destroy all your data!\n purge             - Purge all readings data and non-configuration data.\n                     WARNING! 
This command will destroy all data in affected tables!\n version           - Print Fledge version\n help              - This text\"\n}\n\n# Purge readings data and non-configuration data\nfledge_purge() {\n    result=`fledge_status \"silent\"`\n    if [[ $result != \"2\" ]]; then\n        fledge_log \"info\" \"Fledge appears to be running; data cannot be purged. Stop Fledge first.\" \"all\" \"pretty\"\n        exit 0\n    fi\n\n    # Purge data\n    source \"$FLEDGE_ROOT/scripts/storage\" purge\n}\n\n# Perform a healthcheck on the Fledge instance\nfledge_healthcheck() {\n\n\tresult=`fledge_status \"silent\"`\n\tif [[ $result != \"0\" ]]; then\n\t\tfledge_log \"info\" \"Fledge appears not to be running, healthcheck should be performed on a running system. Start Fledge first.\" \"all\" \"pretty\"\n\t\texit 0\n\tfi\n\n\techo Fledge Healthcheck\n\techo ==================\n\techo\n\n\tauth=\"optional\"\n\tresult=`curl -k -s ${REST_API_URL}/fledge/health/logging`\n\n\tif [[ \"${result}\" == \"401\"* ]]; then\n\t\ttoken=`fledge_authenticate`\n\t\tif [[ \"${token}\" =~ \"failed\" ]]; then\n\t\t\tfledge_log \"info\" \"Failed authentication when attempting to run fledge healthcheck.\" \"all\" \"pretty\"\n\t\t\techo \"Authentication failed.\"\n\t\t\texit 0\n\t\tfi\n\t\tresult=`curl -k -s -H \"authorization: $token\" ${REST_API_URL}/fledge/health/logging`\n\t\tauth=\"required\"\n\tfi\n\n\ta=`echo $result | python3 -m scripts.common.loglevel info`\n\tif [[ \"$a\" != \"\" ]]; then\n\t\techo The following services have a logging level set to info:\n\t\tfor i in $a; do\n\t\t\techo $i | sed -e s/\\\"//g -e 's/^/    /'\n\t\tdone\n\t\techo This is probably too high and may result in large log files\n\t\techo\n\tfi\n\n\ta=`echo $result | python3 -m scripts.common.loglevel debug`\n\tif [[ \"$a\" != \"\" ]]; then\n\t\techo The following services have a logging level set to debug:\n\t\tfor i in $a; do\n\t\t\techo $i | sed -e s/\\\"//g -e 's/^/    
/'\n\t\tdone\n\t\techo This is probably too high and may result in large log files\n\t\techo\n\tfi\n\n\ta=`echo $result | python3 -m scripts.common.disk_usage 80`\n\tif [[ \"$a\" != \"\" ]]; then\n\t\techo The disk space in the logging directory is low, $a%\n\t\techo\n\tfi\n\n\tif [[ \"${auth}\" == \"required\" ]]; then\n\t\tresult=`curl -k -s -H \"authorization: $token\" ${REST_API_URL}/fledge/health/storage`\n\telse\n\t\tresult=`curl -s ${REST_API_URL}/fledge/health/storage`\n\tfi\n\ta=`echo $result | python3 -m scripts.common.disk_usage 80`\n\tif [[ \"$a\" != \"\" ]]; then\n\t\techo The disk space in the storage directory is low, $a%\n\t\techo\n\tfi\n\n\t# the '&& touch' is a trick to defeat the earlier 'set -e'\n\tservice rsyslog status >/dev/null && touch /dev/null\n\tstatus=$?\n\tif [ $status -ne 0 ]; then\n\t\techo The rsyslog service does not appear to be running on the machine\n\t\techo No log entries will be seen on this machine, please restart the Linux service.\n\t\techo\n\tfi\n\n\tmonth=`date +%Y-%m-%d`\n\tif [[ \"${auth}\" == \"required\" ]]; then\n\t\tresult=`curl -k -s -H \"authorization: $token\" ${REST_API_URL}/fledge/audit?source=SRVFL`\n\telse\n\t\tresult=`curl -s ${REST_API_URL}/fledge/audit?source=SRVFL`\n\tfi\n\ttmpname=/tmp/fledge.$$\n\techo $result | python3 -m scripts.common.audittime $month | sort | uniq -c >$tmpname\n\tif [[ -s $tmpname ]]; then\n\t\techo Some services have failed at least once today\n\t\techo \"No. 
Failures  |  Service\"\n\t\techo \"--------------+--------------------\"\n\t\tsed -E 's/([0-9]) /      \\1 | /' $tmpname\n\t\techo\n\tfi\n\trm $tmpname\n\n\techo Service State\n\tissue=none\n\tif [[ \"${auth}\" == \"required\" ]]; then\n\t\tresult=`curl -k -s -H \"authorization: $token\" ${REST_API_URL}/fledge/service`\n\telse\n\t\tresult=`curl -s ${REST_API_URL}/fledge/service`\n\tfi\n\ta=`echo $result | python3 -m scripts.common.service_status unresponsive`\n\tif [[ \"$a\" != \"\" ]]; then\n\t\techo The following services are unresponsive:\n\t        for i in $a; do\n\t\t\techo $i | sed -e s/\\\"//g -e 's/^/    /'\n\t\tdone\n\t\tissue=\"some\"\n\tfi\n\ta=`echo $result | python3 -m scripts.common.service_status failed`\n\tif [[ \"$a\" != \"\" ]]; then\n\t\techo The following services have failed:\n\t        for i in $a; do\n\t\t\techo $i | sed -e s/\\\"//g -e 's/^/    /'\n\t\tdone\n\t\tissue=\"some\"\n\tfi\n\ta=`echo $result | python3 -m scripts.common.service_status shutdown`\n\tif [[ \"$a\" != \"\" ]]; then\n\t\techo The following services are shutdown:\n\t        for i in $a; do\n\t\t\techo $i | sed -e s/\\\"//g -e 's/^/    /'\n\t\tdone\n\t\tissue=\"some\"\n\tfi\n\tif [[ \"$issue\" == \"none\" ]]; then\n\t\techo All services are running\n\tfi\n\techo\n\n\techo Healthcheck completed\n}\n\n### Main Logic ###\n\n# Set FLEDGE_DATA if it does not exist\nif [ -z ${FLEDGE_DATA+x} ]; then\n    FLEDGE_DATA=\"${FLEDGE_ROOT}/data\"\n    export FLEDGE_DATA\nfi\n\n# Check if $FLEDGE_DATA exists and is a directory\nif [[ ! -d ${FLEDGE_DATA} ]]; then\n    fledge_log \"err\" \"Fledge cannot be executed: ${FLEDGE_DATA} is not a valid directory.\" \"all\" \"pretty\"\n    exit 1\nfi\n\n# Check if curl is present\nif [[ ! 
`command -v curl` ]]; then\n    fledge_log \"err\" \"Missing dependency: curl.\" \"all\" \"pretty\"\n    fledge_log \"info\" \"Install curl and run Fledge again.\" \"outonly\" \"pretty\"\n    exit 1\nfi\n\n# Get Fledge version\nget_fledge_version\n\n# Get Fledge rest API URL\nget_rest_api_url\n\n# Call getopt to get any command line options for username and password\nwhile getopts \"u:p:c:h\" option; do\n    case \"$option\" in\n    u)\n        USERNAME=${OPTARG}\n        ;;\n    p)\n        PASSWORD=${OPTARG}\n        ;;\n    c)\n        CERT=${OPTARG}\n        ;;\n    h)\n\tfledge_help\n\texit 0\n\t;;\n    ?)\n        echo Invalid option -${OPTARG}\n\texit 1\n\t;;\n    esac\ndone\nshift $((OPTIND-1))\n\nif [ -z ${USERNAME+x} ]; then\n\t# If username is not set on the command line use environment variable if set\n\tif [ ! -z ${FLEDGE_USER+x} ]; then\n\t\tUSERNAME=$FLEDGE_USER\n\tfi\nfi\n\nif [ -z ${PASSWORD+x} ]; then\n\t# If password is not set on the command line use environment variable if set\n\tif [ ! -z ${FLEDGE_PASSWORD+x} ]; then\n\t\tPASSWORD=$FLEDGE_PASSWORD\n\tfi\nfi\n\nif [ -f ~/.fledge ] ; then\n\t# if ~/.fledge is mode 0600 then fetch username and password\n\t# from it if they are not already set\n\tperm=`stat -c %A ~/.fledge`\n\tif [ \"$perm\" == \"-rw-------\" ]; then\n\t\tif [ -z ${USERNAME+x} ]; then\n\t\t\tUSERNAME=`awk -F: 'NR==1{ print $1 }' < ~/.fledge | cut -d\"=\" -f2`\n\t\tfi\n\t\tif [ -z ${PASSWORD+x} ]; then\n\t\t\tPASSWORD=`awk -F: 'NR==2{ print $1 }' < ~/.fledge | cut -d\"=\" -f2`\n\t\tfi\n\tfi\nfi\n\nSAFE_MODE=''\n# Handle commands\ncase \"$1\" in\n    reset)\n        fledge_reset\n        ;;\n    start)\n        if [ ! -z \"$2\" ]; then\n            if [ $2 = \"--safe-mode\" ]; then\n               SAFE_MODE='safe-mode'\n            else\n               echo \"An invalid option has been entered: $2. 
Use --safe-mode\"\n               exit 1\n            fi\n        fi\n        fledge_start\n        ;;\n    stop)\n        fledge_stop\n        ;;\n    kill)\n        fledge_kill\n        ;;\n    status)\n        fledge_status\n        ;;\n    version)\n        fledge_print_version\n        ;;\n    purge)\n        fledge_purge\n        ;;\n    healthcheck)\n        fledge_healthcheck\n        ;;\n    debug)\n        if [ ! -z \"$2\" ]; then\n            fledge_debug \"$2\"\n        else\n            echo \"Debug must be passed the name of a service\"\n            exit 1\n        fi\n        ;;\n    help)\n        fledge_help\n        ;;\n    *)\n        echo \"${USAGE}\"\n        exit 1\nesac\n\nrm -f ~/.fledge_token\nexit 0\n"
  },
  {
    "path": "scripts/fledge_mnt",
"content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Fledge management tool - to check/recover Fledge SQLite databases\n##\n## Copyright (c) 2021 Dianomic Systems\n##\n## Released under the Apache 2.0 Licence\n##\n## Author: Stefano Simonelli\n##\n##\n## 2.0.04 - improved execution of min/max on tables\n## 2.0.03 - added --perf\n##          avoided the check of the db before/after the shrink\n##          improve shrink  --table\n## 2.0.02 - handle PLUGIN_DATA_KEY having spaces\n## 2.0.01 - improves \"recover --custom\" handling a dynamic list of tables\n## 1.9.10 - adds Centos/Redhat handling for the syslog\n##\n##--------------------------------------------------------------------\n\n#set -e\n#set -x\n\nFLEDGE_MNT_VERSION=2.0.04\nFLEDGE_READINGS_DB_MAX=64           # Maximum number of readings databases handled\n\n#\n# Used for the command : clean --plugin_data\n#\nPLUGIN_DATA_KEY='xxx'               # Used to delete a specific row in the plugin_data table\n                                    # change this value to the proper one, use the info command\n                                    # to retrieve the current values\n\n\n\n#\nwrite_log() {\n\n  # Check log severity\n  case \"$3\" in\n    \"debug\")\n      severity=\"DEBUG\"\n      ;;\n    \"info\")\n      severity=\"INFO\"\n      ;;\n    \"warn\"|\"notice\")\n      severity=\"WARNING\"\n      ;;\n    \"err\")\n      severity=\"ERROR\"\n      ;;\n    \"crit\")\n      severity=\"CRITICAL ERROR\"\n      ;;\n    \"alert\")\n      severity=\"ALERT\"\n      ;;\n    \"emerg\")\n      severity=\"EMERGENCY\"\n      ;;\n    *)\n      echo \"write_log: unrecognized severity: $3\" >&2\n      exit 1\n      ;;\n  esac\n\n  # Log to syslog\n  if [[ \"$5\" =~ ^(logonly|all)$ ]]; then\n      tag=\"Fledge ${1}[${BASHPID}] ${severity}: ${2}\"\n      logger -t \"${tag}\" \"${4}\"\n  fi\n\n  # Log to Stdout\n  if [[ \"${5}\" =~ ^(outonly|all)$ ]]; then\n      if [[ 
\"${6}\" == \"pretty\" ]]; then\n          echo \"${4}\" >&2\n      else\n          echo \"[$(date +'%Y-%m-%d %H:%M:%S')]: $@\" >&2\n      fi\n  fi\n\n}\n\n## fledge_log wrapper\nfledge_log() {\n    write_log \"\" \"script.fledge_mnt\" \"$1\" \"$2\" \"$3\" \"$4\"\n}\n\n#\n# FLEDGE_ROOT evaluation\n#\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n\tFLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tfledge_log \"err\" \"Fledge home directory missing or incorrectly set environment !${FLEDGE_ROOT}!\"  \"outonly\" \"pretty\"\n\texit 1\nfi\n\n#\n# FLEDGE_DATA evaluation\n#\nif [ \"${FLEDGE_DATA}\" = \"\" ]; then\n\tFLEDGE_DATA=\"${FLEDGE_ROOT}/data\"\nfi\n\n#\n# fledge command evaluation\n#\nFLEDGE_SCRIPT=\"${FLEDGE_ROOT}/scripts/fledge\"\nif [ ! -x \"${FLEDGE_SCRIPT}\" ]; then\n\n    FLEDGE_SCRIPT=\"${FLEDGE_ROOT}/bin/fledge\"\n    if [ ! -x \"${FLEDGE_SCRIPT}\" ]; then\n\n\t    fledge_log \"warn\" \"WARNING: Fledge command unavailable both in deployment and development environments\"  \"outonly\" \"pretty\"\n\t    fledge_log \"warn\" \"WARNING: if needed, fledge should be stopped manually before the execution\"  \"outonly\" \"pretty\"\n\t    FLEDGE_SCRIPT=\"\"\n    else\n        FLEDGE_ENV=\"deployment\"\n    fi\nelse\n    FLEDGE_ENV=\"development\"\nfi\n\n#\n# sqlite3 command selection - identify the newer one for the availability of the recover command\n#\nbase_dir=$(dirname \"$0\")\n#SQLITE_SQL=\"${HOME}/bin/sqlite3\"\nSQLITE_SQL=\"${base_dir}/sqlite3\"\nif ! [[ -x \"${SQLITE_SQL}\" ]]; then\n\n    SQLITE_SQL=\"$FLEDGE_ROOT/plugins/storage/sqlite/sqlite3\"\n    if ! [[ -x \"${SQLITE_SQL}\" ]]; then\n\n        # Check system default SQLite 3 command line is available\n        if ! [[ -x \"$(command -v sqlite3)\" ]]; then\n            fledge_log \"info\" \"The sqlite3 command cannot be found. 
Is SQLite3 installed?\" \"outonly\" \"pretty\"\n            fledge_log \"info\" \"If SQLite3 is installed, check if the bin dir is in the PATH.\" \"outonly\" \"pretty\"\n            exit 1\n        else\n            SQLITE_SQL=\"$(command -v sqlite3)\"\n        fi\n    fi\nfi\n\nexport SQLITE_SQL_ANALYZER=\"${base_dir}/sqlite3_analyzer\"\nif ! [[ -x \"${SQLITE_SQL_ANALYZER}\" ]]; then\n\n    fledge_log \"info\" \"sqlite3_analyzer not available in !${SQLITE_SQL_ANALYZER}!\" \"outonly\" \"pretty\"\n    SQLITE_SQL_ANALYZER=\"\"\nfi\n\n#\n# Configurations\n#\nif [[ \"${FLEDGE_DB_NAME}\" == \"\" ]]; then\n\n    FLEDGE_DB=\"${FLEDGE_ROOT}/data/fledge.db\"\n\nelse\n    FLEDGE_DB=\"${FLEDGE_DB_NAME}\"\nfi\n\n#\n# TODO: handle low bandwidth storage engine\n#\nREADINGS_DB=\"${FLEDGE_DATA}/readings_1.db\"\nREADINGS_2_DB=\"${FLEDGE_DATA}/readings_2.db\"\nREADINGS_3_DB=\"${FLEDGE_DATA}/readings_3.db\"\n\n#\n# Functions\n#\nfledge_stop() {\n\n    if [[ \"${FLEDGE_SCRIPT}\" == \"\" ]]; then\n\n\t    fledge_log \"warn\" \"WARNING: Fledge command unavailable both in deployment and development environments\"  \"outonly\" \"pretty\"\n\t    fledge_log \"warn\" \"WARNING: if needed, fledge should be stopped manually before the execution\"  \"outonly\" \"pretty\"\n    else\n\n        seconds=5\n        ${FLEDGE_SCRIPT} stop\n\n        echo \"sleeping ${seconds} seconds before fledge kill, to ensure nothing is still running\"\n        sleep ${seconds}\n        ${FLEDGE_SCRIPT} kill\n    fi\n}\n\noperation_start() {\n\n    operation=\"${1}\"\n\n    echo \"\"\n    echo \"${operation}\"\n    echo \"Operation started : $(date)\"\n\n    time_start=$(date +%s)\n\n}\n\noperation_end() {\n\n    time_end=$(date +%s)\n\n    elapsed_min=0\n    elapsed_hour=0\n    elapsed_sec=$(($time_end - $time_start))\n    if [[ $elapsed_sec -ge 60 ]]; then\n\n        elapsed_min=$(($elapsed_sec / 60))\n    fi\n    if [[ $elapsed_min -ge 60 ]]; then\n\n        elapsed_hour=$(($elapsed_min / 60))\n    fi\n\n    echo \"Operation completed : 
$(date)\"\n    echo \"Elapsed - seconds/minutes/hours - ${elapsed_sec} / ${elapsed_min} / ${elapsed_hour} \"\n}\n\nfledge_info_table() {\n\n    fledge_table=$1\n    option=$2\n    timestamp_field=$3\n    execute_count=no\n\n    echo \"Information on the table : ${fledge_table}\"\n\n    if [[ \"${option}\" == \"count\" ]]; then\n\n        execute_count=yes\n    fi\n\n    if [[ \"${option}\" == \"count_only\" ]]; then\n\n        execute_count=yes\n    else\n        echo -en \"\\tMIN Id : \"\n        \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"SELECT min(id) FROM ${fledge_table} \"\n\n        echo -en \"\\tMAX Id : \"\n        \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"SELECT max(id) FROM ${fledge_table} \"\n\n\n        if [[ \"${timestamp_field}\" != \"-\" ]]; then\n\n            echo -en \"\\tTimestamp MIN: \"\n            \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"SELECT min(${timestamp_field}) FROM ${fledge_table} \"\n\n            echo -en \"\\tTimestamp MAX: \"\n            \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"SELECT max(${timestamp_field}) FROM ${fledge_table} \"\n        fi\n    fi\n\n    if [[ \"${execute_count}\" == \"yes\" ]]; then\n\n        echo -en \"\\tCOUNT  : \"\n        \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"SELECT count(*) FROM ${fledge_table} \"\n    fi\n\n    if [[ \"${fledge_table}\" == \"PLUGIN_DATA\" ]]; then\n\n        echo -e \"\\tTable - ${fledge_table}  : \"\n        \"${SQLITE_SQL}\" -header \"${FLEDGE_DB}\" \"SELECT * FROM ${fledge_table} \"\n\n        echo \"\"\n    fi\n    if [[ \"${fledge_table}\" == \"STREAMS\" ]]; then\n\n        echo -e \"\\tNow - \"$(date \"+%Y-%m-%d %H:%M:%S.%3N\")\" - $(date) \"\n\n        echo -e \"\\tTable - ${fledge_table}  : \"\n        \"${SQLITE_SQL}\" -header \"${FLEDGE_DB}\" \"SELECT * FROM ${fledge_table} \"\n\n        echo \"\"\n        echo \"Information on the READINGS tables\"\n\n        min=$(\"${SQLITE_SQL}\" \"${READINGS_DB}\" \"SELECT min(id) FROM readings_1_1\")\n        max=$(\"${SQLITE_SQL}\" \"${READINGS_DB}\" 
\"SELECT max(id) FROM readings_1_1\")\n        echo -en \"\\tTable - readings_1_1 - min/max : ${min}/${max} \"\n        echo \"\"\n\n        min=$(\"${SQLITE_SQL}\" \"${READINGS_DB}\" \"SELECT min(id) FROM readings_1_2\")\n        max=$(\"${SQLITE_SQL}\" \"${READINGS_DB}\" \"SELECT max(id) FROM readings_1_2\")\n        echo -en \"\\tTable - readings_1_2 - min/max : ${min}/${max} \"\n        echo \"\"\n\n        min=$(\"${SQLITE_SQL}\" \"${READINGS_DB}\" \"SELECT min(id) FROM readings_1_3\")\n        max=$(\"${SQLITE_SQL}\" \"${READINGS_DB}\" \"SELECT max(id) FROM readings_1_3\")\n        echo -en \"\\tTable - readings_1_3 - min/max : ${min}/${max} \"\n        echo \"\"\n\n        echo \"\"\n        echo -e \"\\tTable - configuration_readings : \"\n        \"${SQLITE_SQL}\" -header \"${READINGS_DB}\" \"SELECT * FROM configuration_readings\"\n\n        echo \"\"\n        echo -e \"\\tTable - asset_reading_catalogue : \"\n        \"${SQLITE_SQL}\" -header \"${READINGS_DB}\" \"SELECT * FROM asset_reading_catalogue\"\n\n        echo \"\"\n        echo -e \"\\tDB - readings_1 - tables :\"\n        \"${SQLITE_SQL}\" \"${READINGS_DB}\" \".tables\"\n\n    fi\n\n    if [[ \"${fledge_table}\" == \"VERSION\" ]]; then\n\n        echo -en \"\\tTable - VERSION  : \"\n        \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"SELECT * FROM version\"\n    fi\n\n    echo \"\"\n}\n\nfledge_info_err() {\n\n    text='WARNING|warning|ERROR|error|FATAL|fatal'\n    echo \"\"\n    echo \"---- syslog - FULL - WARNING|warning|ERROR|error|FATAL|fatal -----------------------------------------------\"\n\n    cat \"${syslog_file}\" | grep  -E  \"${text}\"\n\n    echo \"---- syslog - Filtered  - WARNING|warning|ERROR|error|FATAL|fatal -----------------------------------------------\"\n\n    cat  \"${syslog_file}\" | grep  -E  \"${text}\" | grep -v \"Not all updates within transaction succeeded\" | grep -v 'ERROR: Failed to register configuration category' | grep -v 'try C-plugin'\n\n    echo \"---- syslog 
-----------------------------------------------------------------------------------------\"\n}\n\nfledge_info_full() {\n    operation_start \"Information on the database - full\"\n\n    fledge_info\n\n    echo \"\"\n    fledge_info_table \"STATISTICS_HISTORY\" \"count\"          \"history_ts\"\n\n    operation_end\n}\n\nfledge_info_perf() {\n\n    operation_start \"Information - performance\"\n\n    for i in $(seq 3); do\n        fledge_info_perf_measurement\n    done\n    echo \"\"\n\n    operation_end\n}\n\nfledge_info_perf_measurement() {\n\n    tmp_file=/tmp/$$\n    cfg_cpu=10024\n    cfg_disk=1024    # 1GB file\n\n    echo \"\"\n    echo -n \"CPU  : \"\n    sync;sync;sync\n    dd if=/dev/zero bs=1M count=${cfg_cpu} 2> ${tmp_file} | md5sum > /dev/null\n    cat ${tmp_file} | tail -1\n\n    echo -n \"DISK : \"\n    test_file=\"${FLEDGE_ROOT}/fledge_test_perf\"\n\n    if ! [[ -e \"${test_file}\" ]]; then\n\n        sync;sync;sync\n        dd  bs=1M count=${cfg_disk} if=/dev/zero of=\"${test_file}\"  2> ${tmp_file}\n        cat ${tmp_file} | tail -1\n        rm \"${test_file}\"\n    else\n\n        echo \"ERROR : file ${test_file} already exists, delete it if not needed\"\n    fi\n}\n\nfledge_info() {\n\n    operation_start \"Information on the database\"\n\n    echo \"Tables from .tables :\"\n    \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \".tables\"\n    echo \"\"\n\n    echo \"Tables from sqlite_master:\"\n    \"${SQLITE_SQL}\" \"${FLEDGE_DB}\"  \"SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;\"\n\n    fledge_info_table \"STATISTICS_HISTORY\" \"-\"          \"history_ts\"\n    fledge_info_table \"TASKS\"              \"count\"      \"start_time\"\n    fledge_info_table \"LOG\"                \"count\"      \"ts\"\n    fledge_info_table \"PLUGIN_DATA\"        \"count_only\" \"-\"\n    fledge_info_table \"STREAMS\"            \"count_only\" \"-\"\n    fledge_info_table \"VERSION\"            \"count_only\" \"-\"\n\n    echo \"Data directory:\"\n    ls 
-lha \"${FLEDGE_DATA}\"\n    echo \"\"\n\n    operation_end\n}\n\nfledge_clean_plugin_data() {\n    echo \"Clean operation - plugin_data : \"\n\n    echo -e \"\\tTable - PLUGIN_DATA  : \"\n    \"${SQLITE_SQL}\" -header \"${FLEDGE_DB}\" \"SELECT * FROM PLUGIN_DATA \"\n\n    sql_cmd=\"DELETE FROM PLUGIN_DATA WHERE key='\"${PLUGIN_DATA_KEY}\"'\"\n    \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"${sql_cmd}\"\n\n    echo -e \"\\tTable - PLUGIN_DATA  : \"\n    \"${SQLITE_SQL}\" -header \"${FLEDGE_DB}\" \"SELECT * FROM PLUGIN_DATA \"\n\n}\n\nfledge_analyze() {\n\n    if [[ \"${SQLITE_SQL_ANALYZER}\" == \"\" ]]; then\n\n        fledge_log \"info\" \"sqlite3_analyzer not available in !${SQLITE_SQL_ANALYZER}!\" \"outonly\" \"pretty\"\n        exit 1\n    else\n        export_analyze=\"${FLEDGE_DB}_analyze.txt\"\n        operation_start \"Executing sqlite3_analyzer, output !${export_analyze}! \"\n        \"${SQLITE_SQL_ANALYZER}\" \"${FLEDGE_DB}\"      > \"${export_analyze}\"\n\n        for ((loop=1; loop <= ${FLEDGE_READINGS_DB_MAX} ; loop++))\n        do\n            READINGS_DB=\"${FLEDGE_ROOT}/data/readings_${loop}.db\"\n\n            if [[ -e \"${READINGS_DB}\" ]]; then\n\n                export_analyze_r=\"${READINGS_DB}_analyze.txt\"\n                \"${SQLITE_SQL_ANALYZER}\" \"${READINGS_DB}\"  > \"${export_analyze_r}\"\n            fi\n        done\n\n        operation_end\n    fi\n}\n\nfledge_check() {\n\n    operation_start \"Executing database check\"\n    fledge_stop\n\n    echo -n \"Checking database consistency : \"\n    \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"pragma integrity_check;\"\n\n    operation_end\n}\n\n#\n# Comment code for debug\n#\nfledge_shrink_table_shrink() {\n    fledge_table=$1\n\n    echo \"Before:\"\n    echo -n \"Table ${fledge_table} - min date: \"\n    \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"SELECT min(history_ts) FROM ${fledge_table}\"\n\n    echo -n \"Table ${fledge_table} - max date: \"\n    \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"SELECT 
max(history_ts) FROM ${fledge_table}\"\n\n    ###   #########################################################################################:\n\n    echo \"shrinking table ${fledge_table}\"\n\n    max_date=$(\"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"SELECT max(history_ts) FROM ${fledge_table}\")\n\n    sql_cmd=\"DELETE FROM ${fledge_table} WHERE  history_ts <= date('\"${max_date}\"','-3 day')\"\n    \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"${sql_cmd}\"\n\n    ###   #########################################################################################:\n\n    echo \"After:\"\n    echo -n \"Table ${fledge_table} - number of records : \"\n    \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"SELECT COUNT(*) FROM ${fledge_table}\"\n\n    echo -n \"Table ${fledge_table} - min date: \"\n    \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"SELECT min(history_ts) FROM ${fledge_table}\"\n\n    echo -n \"Table ${fledge_table} - max date: \"\n    \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"SELECT max(history_ts) FROM ${fledge_table}\"\n\n}\n\nfledge_shrink() {\n\n    operation_start \"Executing database shrink\"\n\n    fledge_stop\n\n    if [[ \"${SHRINK_MODE}\" == \"table\" ]]; then\n\n        fledge_shrink_table_shrink \"STATISTICS_HISTORY\"\n    fi\n\n    echo -n \"Shrinking database\"\n    \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \"VACUUM\"\n    echo \"\"\n\n    operation_end\n}\n\nexport_file_check() {\n\n    file_export=$1\n\n    if [[ -f \"${file_export}\" ]]; then\n\n        echo \"\"\n        echo \"WARNING: the export file ${file_export} is already present \"\n        echo \"\"\n        echo -n \"Enter YES if you want to delete the file : \"\n        read user_answer\n\n        if [[ \"$user_answer\" == 'YES' ]]; then\n\n            export_execute=\"yes\"\n        fi\n    else\n        export_execute=\"yes\"\n    fi\n}\n\nfledge_recover_sqlite3() {\n\n    export_execute=\"\"\n    export_sql=\"${FLEDGE_DB}_sql\"\n    new_db=\"${FLEDGE_DB}_rec\"\n\n    operation_start \"Executing SQLITE3 database recovery\"\n  
  fledge_stop\n\n    time_start=$(date +%s)\n\n    ###   #########################################################################################:\n    export_execute=\"\"\n    file_export=\"${export_sql}\"\n    export_file_check \"${file_export}\"\n\n    if [[ \"$export_execute\" == \"yes\" ]]; then\n\n        echo -n \"exporting database to a sql file : ${file_export}\"\n        \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \".recover\" > \"${file_export}\"\n        echo \"\"\n    fi\n    ###   #########################################################################################:\n\n    if [[ -f \"${new_db}\" ]]; then\n\n        echo \"\"\n        echo \"WARNING: the database ${new_db} is already present, deleting it \"\n        echo \"\"\n\n        rm ${new_db}\n    fi\n\n    echo -n \"Creating the new database !${new_db}!\"\n    ${SQLITE_SQL} \"${new_db}\" < \"${export_sql}\"\n    echo \"\"\n\n\n    operation_end\n}\n\nfledge_recover_custom() {\n\n    export_sql_schema=\"${FLEDGE_DB}_sql_schema\"\n    export_sql_data=\"${FLEDGE_DB}_sql_data\"\n    new_db=\"${FLEDGE_DB}_rec_custom\"\n\n    operation_start \"Executing CUSTOM database recovery\"\n    fledge_stop\n\n    time_start=$(date +%s)\n\n\n    ###   #########################################################################################:\n    export_execute=\"\"\n    file_export=\"${export_sql_schema}\"\n    export_file_check \"${file_export}\"\n\n    if [[ \"$export_execute\" == \"yes\" ]]; then\n\n        echo -n \"exporting database to a sql file ${file_export}\"\n        \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" << EOF\n\n.output ${file_export}\n.schema STATISTICS_HISTORY\n\nEOF\n\n        echo \"\"\n    fi\n\n    ###   #########################################################################################:\n\n    export_execute=\"\"\n    file_export=\"${export_sql_data}\"\n    export_file_check \"${file_export}\"\n\n    if [[ \"$export_execute\" == \"yes\" ]]; then\n\n        echo \"exporting database to the sql 
file ${file_export}\"\n\n        tables=`\"${SQLITE_SQL}\" \"${FLEDGE_DB}\"  \"SELECT name FROM sqlite_master WHERE type='table' ORDER BY name\";`\n\n        echo \"\" > ${file_export}\n\n        for table in $tables; do\n\n            table=\"${table^^}\"                                  # to Uppercase\n            if [[ \"${table}\" == \"STATISTICS_HISTORY\" ]]; then\n\n                echo \"skipped :${table}:\"\n            else\n                echo \"exported :${table}:\"\n                \"${SQLITE_SQL}\" \"${FLEDGE_DB}\" \".dump ${table}\" >> ${file_export}\n\n            fi\n        done\n\n        echo \"\"\n    fi\n\n    ###   #########################################################################################:\n\n    if [[ -f \"${new_db}\" ]]; then\n\n        echo \"\"\n        echo \"WARNING: the database ${new_db} is already present, deleting it \"\n        echo \"\"\n\n        rm ${new_db}\n    fi\n\n    echo -n \"Creating the new database !${new_db}!\"\n    ${SQLITE_SQL} \"${new_db}\" < \"${export_sql_schema}\"\n    ${SQLITE_SQL} \"${new_db}\" < \"${export_sql_data}\"\n    echo \"\"\n\n    echo -n \"Checking database consistency !${new_db}! 
-  \"\n    \"${SQLITE_SQL}\" \"${new_db}\" \"pragma integrity_check;\"\n\n    operation_end\n}\n\nfledge_recover() {\n\n    if [[ \"${RECOVER_MODE}\" == \"custom\" ]]; then\n\n        fledge_recover_custom\n    else\n        fledge_recover_sqlite3\n    fi\n}\n\nfledge_help() {\n\n    echo \"\"\n    echo \"${USAGE}\nThe script is used to check/recovery fledge\n\nArguments:\n\n info              - show information on the database/tables\n info    --full    - show a complete set of information, performance measurement also\n info    --perf    - performance measurement\n info    --err     - shows errors in the syslog\n analyze           - generate a text file with information about the database\n check             - check the fledge database, stops Fledge\n shrink            - shrink the database, stops Fledge\n shrink  --table   - delete the content of the statistics_history older than 3 days and shrink the database\n recover           - attempt to recover the fledge database using the SQLITE .recover command\n                   - NOTE: it creates the database fledge.db_rec\n recover --custom  - a custom attempt to recover the fledge database, useful if the SQLITE .recover command fails\n                   - NOTE: it creates the database fledge.db_rec_custom\n                   - the new database will contain an empty STATISTICS_HISTORY and all other tables\n\n clean --plugin_data - WARNING: delete the plugin_data content\n\n help              - This text\n\nEnvironments:\n\n FLEDGE_ROOT        - fledge root directory\n FLEDGE_DATA        - fledge data directory\n FLEDGE_DB_NAME     - overrides the default name of the database\n\"\n\n\n}\n\n#\n# Main code\n#\n\n### Platform evaluation #######################################################################:\n\nos_name=`(grep -o '^NAME=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')`\nos_version=`(grep -o '^VERSION_ID=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')`\n\nUSE_SCL=false\nYUM_PLATFORM=false\n\nif 
[[ $os_name == *\"Red Hat\"* || $os_name == *\"CentOS\"* ]]; then\n\tYUM_PLATFORM=true\n\tif [[ $os_version == *\"7\"* ]]; then\n\t\tUSE_SCL=true\n\tfi\nfi\n\nif [[ $YUM_PLATFORM == true ]]; then\n\n    syslog_file=\"/var/log/messages\"\nelse\n    syslog_file=\"/var/log/syslog\"\nfi\n\n###   #########################################################################################:\n\necho \"\"\necho \"Fledge management tool v${FLEDGE_MNT_VERSION}\"\necho \"Node            : $(hostname) - $(hostnamectl status | grep 'Operating')\"\necho \"Platform is     : ${os_name}, Version: ${os_version}\"\necho \"Environment     : ${FLEDGE_ENV}\"\necho \"FLEDGE_ROOT     : ${FLEDGE_ROOT}\"\necho \"FLEDGE_DATA     : ${FLEDGE_DATA}\"\necho \"Fledge Db       : ${FLEDGE_DB}\"\necho \"Readings Db     : ${READINGS_DB}\"\necho \"sqlite3 command : ${SQLITE_SQL}\"\necho \"sqlite3 version : $(${SQLITE_SQL} --version)\"\necho \"sqlite3_analyzer: ${SQLITE_SQL_ANALYZER}\"\n\necho \"fledge version  : $(head -n 1 ${FLEDGE_ROOT}/VERSION)\"\necho \"fledge db ver.  : $(tail -1   ${FLEDGE_ROOT}/VERSION)\"\n\nUSAGE=\"Usage: `basename ${0}` {info|analyze|check|shrink --table|recover --custom}\"\n\nRECOVER_MODE=''\nSHRINK_MODE=''\n\n\n# Handle commands\ncase \"$1\" in\n    info)\n\n        if [ ! -z \"$2\" ]; then\n            if [[ \"$2\" == \"--full\" ]]; then\n\n                fledge_info_full\n\n            elif [[ \"$2\" == \"--perf\" ]]; then\n\n                fledge_info_perf\n\n            elif [[ \"$2\" == \"--err\" ]]; then\n\n                fledge_info_err\n\n            fi\n        else\n            fledge_info\n        fi\n        ;;\n\n    clean)\n        if [ ! 
-z \"$2\" ]; then\n            if [[ \"$2\" == \"--plugin_data\" ]]; then\n\n                echo \"\"\n                echo \"WARNING: This operation will clean fledge table\"\n                echo \"'${FLEDGE_DB}'\"\n                echo -n \"Enter YES if you want to continue: \"\n                read continue_reset\n\n                if [ \"$continue_reset\" != 'YES' ]; then\n\n                    echo \"Operation aborted.\"\n                    exit 0\n                fi\n                fledge_clean_plugin_data\n            fi\n        else\n            echo \"ERROR: An invalid option\"\n            echo \"\"\n            fledge_help\n            exit 1\n        fi\n        ;;\n\n    analyze)\n        fledge_analyze\n        ;;\n    check)\n        fledge_check\n        ;;\n    shrink)\n        if [ ! -z \"$2\" ]; then\n            if [[ \"$2\" == \"--table\" ]]; then\n\n                SHRINK_MODE='table'\n\n                echo \"\"\n                echo \"WARNING: This operation will remove the rows older than 3 days in the table STATISTICS_HISTORY\"\n                echo \"'${FLEDGE_DB}'\"\n                echo -n \"Enter YES if you want to continue: \"\n                read continue_reset\n\n                if [[ \"$continue_reset\" != 'YES' ]]; then\n\n                    echo \"Operation aborted.\"\n                    exit 0\n                fi\n            else\n               echo \"An invalid option has been entered: $2. Use --table\"\n               exit 1\n            fi\n        fi\n        fledge_shrink\n        ;;\n    recover)\n        if [ ! -z \"$2\" ]; then\n            if [[ \"${2}\" == \"--custom\" ]]; then\n                RECOVER_MODE='custom'\n            else\n               echo \"An invalid option has been entered: $2. 
Use --custom\"\n               exit 1\n            fi\n        fi\n        fledge_recover\n        ;;\n\n    help)\n        fledge_help\n        ;;\n    *)\n        echo \"${USAGE}\"\n        exit 1\nesac\n\nexit $?\n\n"
  },
  {
    "path": "scripts/package/debian/package_update.sh",
    "content": "#!/bin/sh\n\n##--------------------------------------------------------------------\n## Copyright (c) 2018 Dianomic Systems\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n\n__author__=\"Massimiliano Pinto\"\n__version__=\"1.0\"\n\n# This script is called by Debian package \"postint\" script\n# only if package has been update.\n\nPKG_NAME=\"fledge\"\n\n# $1 is previous version passed by 'postinst\" script\nif [ ! 
\"$1\" ]; then\n\texit 0\nfi\n\nprevious_version=$1\n# Get current new installed package version\nthis_version=`dpkg -s ${PKG_NAME} | grep '^Version:' | awk '{print $2}'`\n# Location of upgrade scripts\nUPGRADE_SCRIPTS_DIR=\"/usr/local/fledge/scripts/package/debian/upgrade\"\n\n# We use dpkg --compare-versions for all version checks\n# Check first 'previous_version' is less than 'this_version':\n# if same version we take no actions\ndiscard_out=`dpkg --compare-versions ${previous_version} ne ${this_version}`\nret_code=$?\n# Check whether we can call upgrade scripts\nif [ \"${ret_code}\" -eq \"0\" ]; then \n\t# List all *.sh files in upgrade dir, ascending order\n\t# 1.3.sh, 1.4.sh, 1.5.sh etc \n\tSTOP_UPGRADE=\"\"\n\tfor upgrade_file in `ls -1 ${UPGRADE_SCRIPTS_DIR}/*.sh | sort -V`\n\t\tdo\n\t\t\t# Extract script version file from name\n\t\t\tupdate_file_ver=`basename -s '.sh' $upgrade_file`\n\t\t\t# Check update_file_ver is less than previous_version\n\t\t\tdiscard_out=`dpkg --compare-versions ${update_file_ver} le ${previous_version}`\n\t\t\tfile_check=$?\n\t\t\t# If update_file_ver is equal or greater than previous_version\n\t\t\t# we skip previous upgrade scripts\n\t\t\tif [ \"${file_check}\" -eq \"1\" ]; then\n\t\t\t\t#\n\t\t\t\t# We can call upgrade scripts from:\n\t\t\t\t# previous_version up to this_version\n\t\t\t\t#\n\t\t\t\tdiscard_out=`dpkg --compare-versions ${update_file_ver} gt ${this_version}`\n\t\t\t\tfile_check=$?\n\t\t\t\tif [ \"${file_check}\" -eq \"0\" ]; then\n\t\t\t\t\t# Stop here: update_file_ver is greater than this package version\n\t\t\t\t\tSTOP_UPGRADE=\"Y\"\n\t\t\t\t\tbreak\n\t\t\t\telse\n\t\t\t\t\t# We can call the current update script\n\t\t\t\t\tif [ -x \"${upgrade_file}\" ] && [ -s \"${upgrade_file}\" ] && [ -O \"${upgrade_file}\" ]; then\n\t\t\t\t\t\techo \"Executing Fledge package upgrade from ${previous_version} to ${update_file_ver}, script ${upgrade_file} ...\"\n\t\t\t\t\t\t# Call upgrade 
script\n\t\t\t\t\t\t${upgrade_file}\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\tfi\n\t\t\tif [ \"${STOP_UPGRADE}\" ]; then\n\t\t\t\tbreak\n\t\t\tfi\n\t\tdone\nfi\n"
  },
  {
    "path": "scripts/package/debian/upgrade/1.4.sh",
    "content": "#!/bin/sh\n\nremove_directory=\"/usr/local/fledge/python/fledge/plugins/north/omf/\"\n\n# Remove dir if exists\nif [ -d \"${remove_directory}\" ]; then\n    echo \"Fledge package update: removing 'omf' Python north plugin ...\"\n    rm -rf \"${remove_directory}\"\n\n    # Check\n    if [ -d \"${remove_directory}\" ]; then\n        echo \"ERROR: Fledge plugin 'omf' not removed in '${remove_directory}'\"\n        exit 1\n    else\n        echo \"Fledge plugin 'omf' removed in '${remove_directory}'\"\n    fi\nfi\n\n# The dummy C plugin has been renamed to random, remove the old plugin\ndummy_directory=\"/usr/local/fledge/plugins/south/dummy\"\n\nif [ -d $dummy_directory ]; then\n\techo \"Fledge package update: removing 'dummy' South plugin\"\n\trm -rf $dummy_directory\nfi\n\n# The omf C plugin has been renamed to PI_Server, remove the old plugin\nomf_directory=\"/usr/local/fledge/plugins/north/omf\"\nif [ -d $omf_directory ]; then\n\techo \"Fledge package update: removing 'omf' North plugin\"\n\trm -rf $omf_directory\nfi\n"
  },
  {
    "path": "scripts/package/debian/upgrade/1.5.sh",
    "content": "#!/bin/sh\n\n# The PI_Server C plugin has been renamed to PI_Server_V2, remove the old plugin\nomf_directory=\"/usr/local/fledge/plugins/north/PI_Server\"\nif [ -d $omf_directory ]; then\n\techo \"Fledge package update: removing 'PI_Server' North plugin\"\n\trm -rf $omf_directory\nfi\n"
  },
  {
    "path": "scripts/package/debian/upgrade/1.8.sh",
    "content": "#!/usr/bin/env bash\n\npi_server_v2_directory=\"/usr/local/foglamp/plugins/north/PI_Server_V2\"\nif [[ -d ${pi_server_v2_directory} ]]; then\n\techo \"Removing 'PI_Server_V2' North plugin\"\n\trm -rf ${pi_server_v2_directory}\nfi\n\nocs_v2_directory=\"/usr/local/foglamp/plugins/north/ocs_V2\"\nif [[ -d ${ocs_v2_directory} ]]; then\n\techo \"Removing 'ocs_V2' North plugin\"\n\trm -rf ${ocs_v2_directory}\nfi\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/1.sql",
    "content": "drop index if exists fki_readings_fk1;\nCREATE INDEX fki_readings_fk1\n    ON fledge.readings USING btree (asset_code);\n\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/10.sql",
    "content": "INSERT INTO fledge.statistics ( key, description, value, previous_value )\n     VALUES ( 'NORTH_READINGS_TO_PI', 'Readings sent to historian', 0, 0 ),\n            ( 'NORTH_STATISTICS_TO_PI', 'Statistics sent to historian', 0, 0 ),\n            ( 'NORTH_READINGS_TO_HTTP', 'Readings sent to HTTP', 0, 0 ),\n            ( 'North Readings to PI', 'Readings sent to the historian', 0, 0 ),\n            ( 'North Statistics to PI','Statistics data sent to the historian', 0, 0 ),\n            ( 'North Readings to OCS','Readings sent to OCS', 0, 0 );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/11.sql",
    "content": "CREATE SEQUENCE fledge.destinations_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE TABLE fledge.destinations (\n       id            integer                     NOT NULL DEFAULT nextval('fledge.destinations_id_seq'::regclass),   -- Sequence ID\n       type          smallint                    NOT NULL DEFAULT 1,                                                  -- Enum : 1: OMF, 2: Elasticsearch\n       description   character varying(255)      NOT NULL DEFAULT ''::character varying COLLATE pg_catalog.\"default\", -- A brief description of the destination entry\n       properties    jsonb                       NOT NULL DEFAULT '{ \"streaming\" : \"all\" }'::jsonb,                   -- A generic set of properties\n       active_window jsonb                       NOT NULL DEFAULT '[ \"always\" ]'::jsonb,                              -- The window of operations\n       active        boolean                     NOT NULL DEFAULT true,                                               -- When false, all streams to this destination stop and are inactive\n       ts            timestamp(6) with time zone NOT NULL DEFAULT now(),                                              -- Creation or last update\n       CONSTRAINT destination_pkey PRIMARY KEY (id) );\n\nINSERT INTO fledge.destinations ( id, description )\n       VALUES (0, 'none' );\n\n-- Add the constraint to the the table\nBEGIN TRANSACTION;\nDROP TABLE IF EXISTS fledge.streams_old;\nALTER TABLE fledge.streams RENAME TO streams_old;\n\nALTER TABLE fledge.streams_old RENAME CONSTRAINT strerams_pkey TO strerams_pkey_old;\n\nCREATE TABLE fledge.streams (\n       id             integer                     NOT NULL DEFAULT nextval('fledge.streams_id_seq'::regclass),         -- Sequence ID\n       destination_id integer                     NOT NULL ,                                                            -- FK to fledge.destinations\n       
description    character varying(255)      NOT NULL DEFAULT ''::character varying COLLATE pg_catalog.\"default\",  -- A brief description of the stream entry\n       properties     jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- A generic set of properties\n       object_stream  jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- Definition of what must be streamed\n       object_block   jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- Definition of how the stream must be organised\n       object_filter  jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- Any filter involved in selecting the data to stream\n       active_window  jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- The window of operations\n       active         boolean                     NOT NULL DEFAULT true,                                                -- When false, all data to this stream stop and are inactive\n       last_object    bigint                      NOT NULL DEFAULT 0,                                                   -- The ID of the last object streamed (asset or reading, depending on the object_stream)\n       ts             timestamp(6) with time zone NOT NULL DEFAULT now(),                                               -- Creation or last update\n       CONSTRAINT strerams_pkey PRIMARY KEY (id),\n       CONSTRAINT streams_fk1 FOREIGN KEY (destination_id)\n       REFERENCES fledge.destinations (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nINSERT INTO fledge.streams\n        SELECT\n            id,\n            0,\n            description,\n            properties,\n            object_stream,\n            object_block,\n            object_filter,\n            active_window,\n         
   active,\n            last_object,\n            ts\n        FROM fledge.streams_old;\n\nDROP TABLE fledge.streams_old;\nCOMMIT;\n\nCREATE INDEX fki_streams_fk1 ON fledge.streams USING btree (destination_id);"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/12.sql",
    "content": "DROP INDEX statistics_history_ix3;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/13.sql",
    "content": "-- Use plugin name omf\nUPDATE fledge.configuration SET value = jsonb_set(value, '{plugin, value}', '\"omf\"') WHERE value->'plugin'->>'value' = 'pi_server';\nUPDATE fledge.configuration SET value = jsonb_set(value, '{plugin, default}', '\"omf\"') WHERE value->'plugin'->>'default' = 'pi_server';\n\n-- Remove PURGE_READ from Utilities parent category\nDELETE FROM fledge.category_children WHERE EXISTS(SELECT 1 FROM fledge.category_children WHERE parent = 'Utilities' AND child = 'PURGE_READ');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/14.sql",
    "content": "DROP INDEX IF EXISTS log_ix2;\nDROP INDEX IF EXISTS tasks_ix1;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/15.sql",
    "content": "ALTER TABLE fledge.configuration DROP COLUMN display_name;"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/16.sql",
    "content": "-- Remove plugin_data table\nDROP TABLE IF EXISTS fledge.plugin_data;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/17.sql",
    "content": "ALTER TABLE fledge.tasks DROP COLUMN schedule_name;\nCREATE INDEX tasks_ix1\n   ON fledge.tasks(process_name, start_time);"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/18.sql",
    "content": "DELETE from fledge.log_codes WHERE code = 'NTFDL';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/19.sql",
    "content": "DROP TABLE fledge.filters;\nDROP TABLE fledge.filter_users;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/2.sql",
    "content": "UPDATE fledge.configuration SET key = 'SEND_PR_1' WHERE key = 'North Readings to PI';\nUPDATE fledge.configuration SET key = 'SEND_PR_2' WHERE key = 'North Statistics to PI';\nUPDATE fledge.configuration SET key = 'SEND_PR_4' WHERE key = 'North Readings to OCS';\n\n-- Remove DHT11 C++ south plugin entries\nDELETE FROM fledge.configuration WHERE key = 'dht11';\nDELETE FROM fledge.scheduled_processes WHERE name='dht11';\nDELETE FROM fledge.schedules WHERE process_name = 'dht11';\n\nDELETE FROM fledge.configuration WHERE key = 'North_Readings_to_PI';\nDELETE FROM fledge.configuration WHERE key = 'North_Statistics_to_PI';\nDELETE FROM fledge.statistics WHERE key = 'NORTH_READINGS_TO_PI';\nDELETE FROM fledge.statistics WHERE key = 'NORTH_STATISTICS_TO_PI';\nDELETE FROM fledge.scheduled_processes WHERE name = 'North_Readings_to_PI';\nDELETE FROM fledge.scheduled_processes WHERE name = 'North_Statistics_to_PI';\nDELETE FROM fledge.schedules WHERE schedule_name = 'OMF_to_PI_north_C';\nDELETE FROM fledge.schedules WHERE schedule_name = 'Stats_OMF_to_PI_north_C';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/20.sql",
    "content": "-- Notification log codes\nDELETE from fledge.log_codes WHERE code = 'NTFAD';\nDELETE from fledge.log_codes WHERE code = 'NTFSN';\nDELETE from fledge.log_codes WHERE code = 'NTFST';\nDELETE from fledge.log_codes WHERE code = 'NTFSD';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/21.sql",
    "content": "DROP INDEX readings_ix3;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/22.sql",
    "content": "-- Nothing required, here to keep numbering with SQLite\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/23.sql",
    "content": "-- Nothing required, empty file to keep numbering same as SQLite\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/24.sql",
    "content": "-- No actions"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/25.sql",
    "content": "-- No actions\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/26.sql",
    "content": "-- No actions\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/27.sql",
    "content": "-- Notification log code NTFCL\nDELETE from fledge.log_codes WHERE code = 'NTFCL';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/28.sql",
    "content": "DELETE from log_codes WHERE code LIKE 'PKG%';"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/29.sql",
    "content": "-- updates \"PIServerEndpoint\" option :\n--\n--  \"Auto Discovery\"  -> \"discovery\"\n--  \"PI Web API\"      -> \"piwebapi\"\n--  \"Connector Relay\" -> \"cr\"\n--\nUPDATE configuration\nSET value = jsonb_set(value, '{PIServerEndpoint, options}',  $$[\"discovery\",\"piwebapi\",\"cr\"]$$)\nWHERE value->'plugin'->>'value' = 'PI_Server_V2';\n\nUPDATE configuration SET value = jsonb_set(value, '{PIServerEndpoint, default}', '\"cr\"')\nWHERE value->'plugin'->>'value' = 'PI_Server_V2';\n\n--\n--\nUPDATE configuration SET value = jsonb_set(value, '{PIServerEndpoint, value}', '\"discovery\"')\nWHERE value->'plugin'->>'value' = 'PI_Server_V2' AND\n      value->'PIServerEndpoint'->>'value' = 'Auto Discovery';\n\nUPDATE configuration SET value = jsonb_set(value,  '{PIServerEndpoint, value}', '\"piwebapi\"')\nWHERE value->'plugin'->>'value' = 'PI_Server_V2' AND\n      value->'PIServerEndpoint'->>'value' = 'PI Web API';\n\nUPDATE configuration SET value = jsonb_set(value,  '{PIServerEndpoint, value}', '\"cr\"')\nWHERE value->'plugin'->>'value' = 'PI_Server_V2' AND\n      value->'PIServerEndpoint'->>'value' = 'Connector Relay';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/3.sql",
    "content": "-- Remove configuration category_children table\nDROP TABLE IF EXISTS fledge.category_children;"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/30.sql",
    "content": "ALTER TABLE fledge.tasks DROP COLUMN schedule_id;"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/31.sql",
    "content": "--\n-- ADD  the column: read_key   uuid\n--\nALTER TABLE fledge.readings ADD COLUMN read_key uuid UNIQUE;\nUPDATE fledge.readings SET read_key = uuid_in(md5(random()::text || clock_timestamp()::text)::cstring);\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/32.sql",
    "content": "DELETE from fledge.log_codes WHERE code = 'PKGRM';"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/33.sql",
    "content": "-- NOTE -once you are up with fledge_schema=34, Downgrade is NOT possible as we removed C-source binaries.\n--       There is no way to get these back with current support.\n-- See scripts/package/debian/upgrade/1.8.sh\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/34.sql",
    "content": ""
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/35.sql",
    "content": "-- No action is required"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/36.sql",
    "content": "-- No action is required\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/37.sql",
    "content": "-- No action is required"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/38.sql",
    "content": "-- Remove packages table\nDROP TABLE IF EXISTS fledge.packages;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/39.sql",
    "content": "ALTER TABLE readings ALTER COLUMN asset_code TYPE varchar(50);\nALTER TABLE fledge.omf_created_objects ALTER COLUMN asset_code TYPE varchar(50);\nALTER TABLE fledge.statistics ALTER COLUMN key TYPE varchar(56);\nALTER TABLE fledge.statistics_history ALTER COLUMN key TYPE varchar(56);\nALTER TABLE fledge.asset_tracker ALTER COLUMN asset TYPE varchar(50);\n\n\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/4.sql",
    "content": "UPDATE fledge.configuration SET value = '{\"plugin\": {\"description\": \"OMF North Plugin\", \"type\": \"string\", \"default\": \"omf\", \"value\": \"omf\"}}'\n        WHERE key = 'North Readings to PI';\nUPDATE fledge.configuration SET value = '{\"plugin\": {\"description\": \"OMF North Plugin\", \"type\": \"string\", \"default\": \"omf\", \"value\": \"omf\"}}'\n        WHERE key = 'North Statistics to PI';\nUPDATE fledge.configuration SET value = '{\"plugin\": {\"description\": \"OCS North Plugin\", \"type\": \"string\", \"default\": \"ocs\", \"value\": \"ocs\"}}'\n        WHERE key = 'North Readings to OCS';\n\nUPDATE statistics SET key = 'SENT_1' WHERE key = 'North Readings to PI';\nUPDATE statistics SET key = 'SENT_2' WHERE key = 'North Statistics to PI';\nUPDATE statistics SET key = 'SENT_4' WHERE key = 'North Readings to OCS';\n\nUPDATE fledge.scheduled_processes SET name = 'SEND_PR_1', script = '[\"tasks/north\", \"--stream_id\", \"1\", \"--debug_level\", \"1\"]' ) WHERE name = 'North Readings to PI';\nUPDATE fledge.scheduled_processes SET name = 'SEND_PR_2', script = '[\"tasks/north\", \"--stream_id\", \"1\", \"--debug_level\", \"1\"]' ) WHERE name = 'North Statistics to PI';\nUPDATE fledge.scheduled_processes SET name = 'SEND_PR_4', script = '[\"tasks/north\", \"--stream_id\", \"1\", \"--debug_level\", \"1\"]' ) WHERE name = 'North Readings to OCS';\n\nUPDATE fledge.schedules SET process_name = 'SEND_PR_1' WHERE process_name = 'North Readings to PI';\nUPDATE fledge.schedules SET process_name = 'SEND_PR_2' WHERE process_name = 'North Statistics to PI';\nUPDATE fledge.schedules SET process_name = 'SEND_PR_4' WHERE process_name = 'North Readings to OCS';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/40.sql",
    "content": "-- No action is required\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/41.sql",
    "content": "ALTER TABLE fledge.users DROP COLUMN real_name;\nALTER TABLE fledge.users DROP CONSTRAINT access_method_check;\nALTER TABLE fledge.users ALTER COLUMN access_method TYPE smallint USING access_method::smallint, ALTER COLUMN access_method SET NOT NULL, ALTER COLUMN access_method SET DEFAULT 0;\nUPDATE fledge.users SET access_method=0;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/42.sql",
    "content": "DROP INDEX IF EXISTS fledge.statistics_history_daily_ix1;\nDROP TABLE IF EXISTS fledge.statistics_history_daily;\n\nDELETE FROM fledge.schedules WHERE process_name = 'purge_system';\nDELETE FROM fledge.scheduled_processes WHERE name = 'purge_system';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/43.sql",
    "content": "\nUPDATE fledge.configuration SET value = jsonb_set(value, '{retainUnsent}', '{\"description\": \"Retain data that has not been sent yet.\", \"type\": \"enumeration\", \"options\":[\"purge unsent\", \"retain unsent to any destination\", \"retain unsent to all destinations\"], \"default\": \"purge unsent\", \"displayName\": \"Retain Unsent Data\",\"value\": \"false\"}')\n        WHERE key = 'PURGE_READ' AND\n              value->'retainUnsent'->>'value' = 'purge unsent';\n\nUPDATE fledge.configuration SET value = jsonb_set(value, '{retainUnsent}', '{\"description\": \"Retain data that has not been sent yet.\", \"type\": \"enumeration\", \"options\":[\"purge unsent\", \"retain unsent to any destination\", \"retain unsent to all destinations\"], \"default\": \"purge unsent\", \"displayName\": \"Retain Unsent Data\",\"value\": \"true\"}')\n        WHERE  key = 'PURGE_READ' AND\n               value->'retainUnsent'->>'value' = 'retain unsent to all destinations';\n\nUPDATE fledge.configuration SET value = jsonb_set(value, '{retainUnsent}', '{\"description\": \"Retain data that has not been sent yet.\", \"type\": \"enumeration\", \"options\":[\"purge unsent\", \"retain unsent to any destination\", \"retain unsent to all destinations\"], \"default\": \"purge unsent\", \"displayName\": \"Retain Unsent Data\",\"value\": \"true\"}')\n        WHERE  key = 'PURGE_READ' AND\n               value->'retainUnsent'->>'value' = 'retain unsent to any destination';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/44.sql",
    "content": "-- Remove Control service support tables\nDROP TABLE IF EXISTS fledge.control_script;\nDROP TABLE IF EXISTS fledge.control_acl;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/45.sql",
    "content": "-- Dispatcher log codes\nDELETE FROM fledge.log WHERE code = 'DSPST';\nDELETE FROM fledge.log_codes WHERE code = 'DSPST';\nDELETE FROM fledge.log WHERE code = 'DSPSD';\nDELETE FROM fledge.log_codes WHERE code = 'DSPSD';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/46.sql",
    "content": "-- No action is required\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/47.sql",
    "content": "DELETE FROM fledge.schedules WHERE process_name = 'bucket_storage_c';\nDELETE FROM fledge.scheduled_processes WHERE name = 'bucket_storage_c';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/48.sql",
    "content": "-- Remove id column in fledge.category_children\nALTER TABLE fledge.category_children DROP COLUMN IF EXISTS id;\n\n-- Remove sequence\nDROP SEQUENCE IF EXISTS fledge.category_children_id_seq;\n\n-- Remove unique index\nDROP INDEX IF EXISTS fledge.config_children_idx1;\n\n-- Remove primary key\nALTER TABLE fledge.category_children DROP CONSTRAINT IF EXISTS config_children_pkey;\n\n-- Add parent, child primary key\nALTER TABLE fledge.category_children ADD CONSTRAINT config_children_pkey PRIMARY KEY (parent, child);\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/49.sql",
    "content": "-- The Schema Service table used to hold information about extension schemas\nDROP TABLE IF EXISTS fledge.service_schema;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/5.sql",
    "content": "-- Remove HTTP North C++ plugin entries\nDELETE FROM fledge.configuration WHERE key = 'North_Readings_to_HTTP';\nDELETE FROM fledge.scheduled_processes WHERE name='North_Readings_to_HTTP';\nDELETE FROM fledge.schedules WHERE process_name = 'North_Readings_to_HTTP';\nDELETE FROM fledge.statistics WHERE key = 'NORTH_READINGS_TO_HTTP';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/50.sql",
    "content": "DELETE FROM fledge.log_codes WHERE code IN ('ESSRT', 'ESSTP');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/51.sql",
    "content": "-- Access Control List usage relation\nDROP TABLE IF EXISTS fledge.acl_usage;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/52.sql",
    "content": "-- Remove deprecated_ts column from asset_tracker table\nALTER TABLE fledge.asset_tracker DROP COLUMN IF EXISTS deprecated_ts;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/53.sql",
    "content": "DELETE FROM fledge.log_codes WHERE code IN ('ASTDP', 'ASTUN');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/54.sql",
    "content": "DELETE FROM fledge.log_codes WHERE code = 'PIPIN';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/55.sql",
    "content": "DELETE FROM fledge.log_codes WHERE code = 'SRVRS';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/56.sql",
    "content": "-- Remove data column from asset_tracker table\nALTER TABLE fledge.asset_tracker DROP COLUMN IF EXISTS data;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/57.sql",
    "content": "-- Delete Audit marker log and log_codes entries\nDELETE FROM fledge.log WHERE code = 'AUMRK';\nDELETE FROM fledge.log_codes WHERE code = 'AUMRK';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/58.sql",
    "content": "-- Delete roles\nDELETE FROM fledge.roles WHERE name IN ('view','data-view');\n-- Reset auto increment\nALTER SEQUENCE fledge.roles_id_seq RESTART WITH 3;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/59.sql",
    "content": "-- Drop control pipeline sequences & tables\nDROP TABLE IF EXISTS fledge.control_source;\nDROP SEQUENCE IF EXISTS fledge.control_source_id_seq;\n\nDROP TABLE IF EXISTS fledge.control_destination;\nDROP SEQUENCE IF EXISTS fledge.control_destination_id_seq;\n\nDROP TABLE IF EXISTS fledge.control_filters;\nDROP SEQUENCE IF EXISTS fledge.control_filters_id_seq;\n\nDROP TABLE IF EXISTS fledge.control_pipelines;\nDROP SEQUENCE IF EXISTS fledge.control_pipelines_id_seq;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/6.sql",
    "content": "DROP INDEX fledge.statistics_history_ix2;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/60.sql",
    "content": "DELETE FROM fledge.log_codes WHERE code IN ('USRAD', 'USRDL', 'USRCH', 'USRRS');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/61.sql",
    "content": "DROP TABLE IF EXISTS fledge.monitors;\nDROP INDEX IF EXISTS fledge.monitors_ix1;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/62.sql",
    "content": "-- Delete role\nDELETE FROM fledge.roles WHERE name='control';\n-- Reset auto increment\nALTER SEQUENCE fledge.roles_id_seq RESTART WITH 5;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/63.sql",
    "content": "-- Drop control flow tables\nDROP TABLE IF EXISTS fledge.control_api_acl;\nDROP TABLE IF EXISTS fledge.control_api_parameters;\nDROP TABLE IF EXISTS fledge.control_api;"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/64.sql",
    "content": "DELETE FROM fledge.log_codes WHERE code IN ('ACLAD', 'ACLCH', 'ACLDL', 'CTSAD', 'CTSCH', 'CTSDL', 'CTPAD', 'CTPCH', 'CTPDL');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/65.sql",
    "content": "DELETE FROM fledge.log_codes WHERE code IN ('CTEAD', 'CTECH', 'CTEDL');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/66.sql",
    "content": "-- Remove priority column from scheduled_processes table\nALTER TABLE fledge.scheduled_processes DROP COLUMN IF EXISTS priority;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/67.sql",
    "content": "DROP TABLE IF EXISTS fledge.alerts;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/68.sql",
    "content": "DELETE FROM fledge.schedules WHERE process_name = 'update checker';\nDELETE FROM fledge.scheduled_processes WHERE name = 'update checker';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/69.sql",
    "content": "DELETE FROM fledge.log_codes WHERE code IN ('BUCAD', 'BUCCH', 'BUCDL');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/7.sql",
    "content": "-- Remove asset_tracker indexes, table and sequence\nDROP INDEX IF EXISTS fledge.asset_tracker_ix1;\nDROP INDEX IF EXISTS fledge.asset_tracker_ix2;\nDROP TABLE IF EXISTS fledge.asset_tracker;\nDROP SEQUENCE IF EXISTS fledge.asset_tracker_id_seq;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/70.sql",
    "content": "-- Remove 'hash_algorithm' column from users table\n\nALTER TABLE fledge.users DROP CONSTRAINT IF EXISTS users_hash_algorithm_check;\nALTER TABLE fledge.users DROP COLUMN hash_algorithm;\nUPDATE fledge.users SET pwd='39b16499c9311734c595e735cffb5d76ddffb2ebf8cf4313ee869525a9fa2c20:f400c843413d4c81abcba8f571e6ddb6' WHERE pwd ='495f7f5b17c534dbeabab3da2287a934b32ed6876568563b04c312be49e8773299243abd3881d13112ccfb67c4fb3ec8231406474810e1f6eb347d61c63785d4:672169c60df24b76b6b94e78cad800f8';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/71.sql",
    "content": "ALTER TABLE fledge.users DROP COLUMN failed_attempts;\nALTER TABLE fledge.users DROP COLUMN block_until;\nDELETE FROM fledge.log_codes WHERE code IN ('USRBK', 'USRUB');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/72.sql",
    "content": "-- No action required\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/73.sql",
    "content": "DELETE FROM fledge.scheduled_processes WHERE name = 'pipeline_c';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/74.sql",
    "content": "ALTER TABLE fledge.plugin_data DROP COLUMN service_name;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/75.sql",
    "content": "-- No downgrade is necessary for this version: the Python-based North task became obsolete long ago, so restoring its scheduled_processes entry is not worthwhile.\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/8.sql",
    "content": "\n-- North plugins\n\n-- North_Readings_to_PI - OMF Translator for readings\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North_Readings_to_PI',\n              'OMF North Plugin - C Code',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"omf\", \"default\" : \"omf\", \"description\" : \"Module that OMF North Plugin will load\" } } '\n            );\n\n-- North_Readings_to_HTTP - for readings\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North_Readings_to_HTTP',\n              'HTTP North Plugin - C Code',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"http-north\", \"default\" : \"http-north\", \"description\" : \"Module that HTTP North Plugin will load\" } } '\n            );\n\n-- dht11 - South plugin for DHT11 - C\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'dht11',\n              'DHT11 South C Plugin',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"dht11\", \"default\" : \"dht11\", \"description\" : \"Module that DHT11 South Plugin will load\" } } '\n            );\n\n-- North_Statistics_to_PI - OMF Translator for statistics\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North_Statistics_to_PI',\n              'OMF North Plugin - C Code',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"omf\", \"default\" : \"omf\", \"description\" : \"Module that OMF North Plugin will load\" } } '\n            );\n\n-- North Readings to PI - OMF Translator for readings\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North Readings to PI',\n              'OMF North Plugin',\n              '{\"plugin\": {\"description\": \"OMF North Plugin\", \"type\": \"string\", \"default\": \"omf\", \"value\": \"omf\"}, \"source\": {\"description\": \"Source of data to be sent on the stream. 
May be either readings, statistics or audit.\", \"type\": \"string\", \"default\": \"readings\", \"value\": \"readings\"}}'\n            );\n\n-- North Statistics to PI - OMF Translator for statistics\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North Statistics to PI',\n              'OMF North Statistics Plugin',\n              '{\"plugin\": {\"description\": \"OMF North Plugin\", \"type\": \"string\", \"default\": \"omf\", \"value\": \"omf\"}, \"source\": {\"description\": \"Source of data to be sent on the stream. May be either readings, statistics or audit.\", \"type\": \"string\", \"default\": \"statistics\", \"value\": \"statistics\"}}'\n            );\n\n-- North Readings to OCS - OSIsoft Cloud Services plugin for readings\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North Readings to OCS',\n              'OCS North Plugin',\n              '{\"plugin\": {\"description\": \"OCS North Plugin\", \"type\": \"string\", \"default\": \"ocs\", \"value\": \"ocs\"}, \"source\": {\"description\": \"Source of data to be sent on the stream. 
May be either readings, statistics or audit.\", \"type\": \"string\", \"default\": \"readings\", \"value\": \"readings\"}}'\n            );\n\n-- North Tasks\n--\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'North Readings to PI',   '[\"tasks/north\"]' );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'North Statistics to PI', '[\"tasks/north\"]' );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'North Readings to OCS',  '[\"tasks/north\"]' );\n\n-- North Tasks - C code\n--\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'North_Readings_to_PI',   '[\"tasks/north_c\"]' );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'North_Statistics_to_PI', '[\"tasks/north_c\"]' );\n\n-- North Tasks - C code\n--\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'North_Readings_to_HTTP',   '[\"tasks/north_c\"]' );\n\n-- South Tasks - C code\n--\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'dht11',   '[\"services/south_c\"]' );\n\n\n--\n-- North Tasks\n--\n\n-- Readings OMF to PI - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '1cdf1ef8-7e02-11e8-adc0-fa7ae01bbebc', -- id\n                'OMF_to_PI_north_C',                    -- schedule_name\n                'North_Readings_to_PI',                 -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                true,                                    -- exclusive\n                false                                     -- disabled\n              );\n\n-- Readings to HTTP - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, 
schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'ccdf1ef8-7e02-11e8-adc0-fa7ae01bb3bc', -- id\n                'HTTP_North_C',                         -- schedule_name\n                'North_Readings_to_HTTP',               -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                true,                                   -- exclusive\n                false                                   -- disabled\n              );\n\n-- DHT11 sensor south plugin - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '6b25f4d9-c7f3-4fc8-bd4a-4cf79f7055ca', -- id\n                'dht11',                                -- schedule_name\n                'dht11',                                -- process_name\n                1,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '01:00:00',                             -- schedule_interval (evey hour)\n                true,                                   -- exclusive\n                false                                   -- disabled\n              );\n\n-- Statistics OMF to PI - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'f1e3b377-5acb-4bde-93d5-b6a792f76e07', -- id\n                'Stats_OMF_to_PI_north_C',              -- schedule_name\n                'North_Statistics_to_PI',               -- process_name\n                3,                                   
   -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                true,                                    -- exclusive\n                false                                     -- disabled\n              );\n\n-- Readings OMF to PI\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '2b614d26-760f-11e7-b5a5-be2e44b06b34', -- id\n                'OMF to PI north',                      -- schedule_name\n                'North Readings to PI',                 -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                true,                                   -- exclusive\n                false                                   -- disabled\n              );\n\n-- Statistics OMF to PI\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '1d7c327e-7dae-11e7-bb31-be2e44b06b34', -- id\n                'Stats OMF to PI north',                -- schedule_name\n                'North Statistics to PI',               -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                true,                                   -- exclusive\n                false                                   -- disabled\n              );\n\n-- Readings OMF to OCS\nINSERT INTO fledge.schedules ( 
id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '5d7fed92-fb9a-11e7-8c3f-9a214cf093ae', -- id\n                'OMF to OCS north',                     -- schedule_name\n                'North Readings to OCS',                -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                true,                                   -- exclusive\n                false                                   -- disabled\n              );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/9.sql",
    "content": "DROP INDEX fledge.readings_ix2;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/downgrade/README",
    "content": "Place Postgres downgrade SQL files here.\n\nFile name:\n\nX.sql, where X is the Postgres schema id\n\nExample:\n\n'9.sql' is read by a Fledge app whose Postgres schema version is set to 10\n'8.sql' is read either by a Fledge app whose Postgres schema version is set to 9,\nor by a Fledge app downgrading the schema from 10 to 8\n\nNote:\n- whenever the VERSION file in $FLEDGE_ROOT has a new schema in 'fledge_schema',\n  the corresponding SQL file must be placed here for downgrade.\n- the file for each id must exist, even if empty\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/init.sql",
    "content": "----------------------------------------------------------------------\n-- Copyright (c) 2017 OSIsoft, LLC\n--\n-- Licensed under the Apache License, Version 2.0 (the \"License\");\n-- you may not use this file except in compliance with the License.\n-- You may obtain a copy of the License at\n--\n--     http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing, software\n-- distributed under the License is distributed on an \"AS IS\" BASIS,\n-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-- See the License for the specific language governing permissions and\n-- limitations under the License.\n----------------------------------------------------------------------\n\n--\n-- init.sql\n--\n-- PostgreSQL script to create the Fledge persistent Layer\n--\n\n-- NOTE:\n-- This script must be launched with:\n-- psql -U postgres -d postgres -f init.sql\n\n\n----------------------------------------------------------------------\n-- DDL CONVENTIONS\n--\n-- Tables:\n-- * Names are in plural, terms are separated by _\n-- * Columns are, when possible, not null and have a default value.\n--   For example, jsonb columns are '{}' by default.\n--\n-- Columns:\n-- id      : It is commonly the PK of the table, a smallint, integer or bigint.\n-- xxx_id  : It usually refers to a FK, where \"xxx\" is name of the table.\n-- code    : Usually an AK, based on fixed lenght characters.\n-- ts      : The timestamp with microsec precision and tz. 
It is updated at\n--           every change.\n\n\n-- This first part of the script must be executed by the postgres user\n\n-- Disable the NOTICE notes\nSET client_min_messages TO WARNING;\n\n-- Dropping objects\nDROP SCHEMA IF EXISTS fledge CASCADE;\nDROP DATABASE IF EXISTS fledge;\n\n\n-- Create the fledge database\nCREATE DATABASE fledge WITH\n    ENCODING = 'UTF8';\n\nGRANT CONNECT ON DATABASE fledge TO PUBLIC;\n\n\n-- Connect to fledge database\n\\connect fledge\n\n----------------------------------------------------------------------\n-- SCHEMA CREATION\n----------------------------------------------------------------------\n\n-- Create the fledge schema\nCREATE SCHEMA fledge;\nGRANT USAGE ON SCHEMA fledge TO PUBLIC;\n\n------ SEQUENCES\nCREATE SEQUENCE fledge.asset_messages_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.asset_status_changes_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.asset_status_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.asset_types_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.assets_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.links_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.resources_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.roles_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.streams_id_seq\n    INCREMENT 1\n    START 5\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 
1;\n\nCREATE SEQUENCE fledge.user_pwd_history_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.user_logins_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.users_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.backups_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.asset_tracker_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.control_source_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.control_destination_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.control_pipelines_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE SEQUENCE fledge.control_filters_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\n\n----- TABLES & SEQUENCES\n\n-- Log Codes Table\n-- List of tasks that log info into fledge.log.\nCREATE TABLE fledge.log_codes (\n       code        character(5)          NOT NULL,   -- The process that logs actions\n       description character varying(80) NOT NULL,\n       CONSTRAINT log_codes_pkey PRIMARY KEY (code) );\n\n\n-- Generic Log Table\n-- General log table for Fledge.\nCREATE SEQUENCE fledge.log_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE TABLE fledge.log (\n       id    bigint                      NOT NULL DEFAULT nextval('fledge.log_id_seq'::regclass),\n       code  character(5)                NOT NULL,                               
                 -- The process that logged the action\n       level smallint                    NOT NULL DEFAULT 0,                                      -- 0 Success - 1 Failure - 2 Warning - 4 Info\n       log   jsonb                       NOT NULL DEFAULT '{}'::jsonb,                            -- Generic log structure\n       ts    timestamp(6) with time zone NOT NULL DEFAULT now(),\n       CONSTRAINT log_pkey PRIMARY KEY (id),\n       CONSTRAINT log_fk1 FOREIGN KEY (code)\n       REFERENCES fledge.log_codes (code) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nALTER SEQUENCE fledge.log_id_seq OWNED BY fledge.log.id;\n\n-- Index: log_ix1 - For queries by code\nCREATE INDEX log_ix1\n    ON fledge.log USING btree (code, ts, level);\n\nCREATE INDEX log_ix2\n    ON fledge.log(ts);\n\n\n-- Asset status\n-- List of status an asset can have.\nCREATE TABLE fledge.asset_status (\n       id          integer                NOT NULL DEFAULT nextval('fledge.asset_status_id_seq'::regclass),\n       descriprion character varying(255) NOT NULL DEFAULT ''::character varying COLLATE pg_catalog.\"default\",\n        CONSTRAINT asset_status_pkey PRIMARY KEY (id) );\n\n\n-- Asset Types\n-- Type of asset (for example south, sensor etc.)\nCREATE TABLE fledge.asset_types (\n       id          integer                NOT NULL DEFAULT nextval('fledge.asset_types_id_seq'::regclass),\n       description character varying(255) NOT NULL DEFAULT ''::character varying COLLATE pg_catalog.\"default\",\n        CONSTRAINT asset_types_pkey PRIMARY KEY (id) );\n\n\n-- Assets table\n-- This table is used to list the assets used in Fledge\n-- Reading do not necessarily have an asset, but whenever possible this\n-- table provides information regarding the data collected.\nCREATE TABLE fledge.assets (\n       id           integer                     NOT NULL DEFAULT nextval('fledge.assets_id_seq'::regclass),         -- The internal PK for assets\n       code  
       character varying(50),                                                                           -- A unique code  (AK) used to match readings and assets. It can be anything.\n       description  character varying(255)      NOT NULL DEFAULT ''::character varying COLLATE pg_catalog.\"default\", -- A brief description of the asset\n       type_id      integer                     NOT NULL,                                                            -- FK for the type of asset\n       address      inet                        NOT NULL DEFAULT '0.0.0.0'::inet,                                    -- An IPv4 or IPv6 address, if needed. Default means \"any address\"\n       status_id    integer                     NOT NULL,                                                            -- Status of the asset, FK to the asset_status table\n       properties   jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                        -- A generic JSON structure. Some elements (for example \"labels\") may be used in the rule to send messages to the south devices or data to the cloud\n       has_readings boolean                     NOT NULL DEFAULT false,                                              -- A boolean column, when TRUE, it means that the asset may have rows in the readings table\n       ts           timestamp(6) with time zone NOT NULL DEFAULT now(),\n         CONSTRAINT assets_pkey PRIMARY KEY (id),\n         CONSTRAINT assets_fk1 FOREIGN KEY (status_id)\n         REFERENCES fledge.asset_status (id) MATCH SIMPLE\n                 ON UPDATE NO ACTION\n                 ON DELETE NO ACTION,\n         CONSTRAINT assets_fk2 FOREIGN KEY (type_id)\n         REFERENCES fledge.asset_types (id) MATCH SIMPLE\n                 ON UPDATE NO ACTION\n                 ON DELETE NO ACTION );\n\n-- Index: fki_assets_fk1\nCREATE INDEX fki_assets_fk1\n    ON fledge.assets USING btree (status_id);\n\n-- Index: fki_assets_fk2\nCREATE INDEX fki_assets_fk2\n    ON 
fledge.assets USING btree (type_id);\n\n-- Index: assets_ix1\nCREATE UNIQUE INDEX assets_ix1\n    ON fledge.assets USING btree (code);\n\n\n-- Asset Status Changes\n-- When an asset changes its status, the previous status is added here.\n-- tart_ts contains the value of ts of the row in the asset table.\nCREATE TABLE fledge.asset_status_changes (\n       id         bigint                      NOT NULL DEFAULT nextval('fledge.asset_status_changes_id_seq'::regclass),\n       asset_id   integer                     NOT NULL,\n       status_id  integer                     NOT NULL,\n       log        jsonb                       NOT NULL DEFAULT '{}'::jsonb,\n       start_ts   timestamp(6) with time zone NOT NULL,\n       ts         timestamp(6) with time zone NOT NULL DEFAULT now(),\n       CONSTRAINT asset_status_changes_pkey PRIMARY KEY (id),\n       CONSTRAINT asset_status_changes_fk1 FOREIGN KEY (asset_id)\n       REFERENCES fledge.assets (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION,\n       CONSTRAINT asset_status_changes_fk2 FOREIGN KEY (status_id)\n       REFERENCES fledge.asset_status (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_asset_status_changes_fk1\n    ON fledge.asset_status_changes USING btree (asset_id);\n\nCREATE INDEX fki_asset_status_changes_fk2\n    ON fledge.asset_status_changes USING btree (status_id);\n\n\n-- Links table\n-- Links among assets in 1:M relationships.\nCREATE TABLE fledge.links (\n       id         integer                     NOT NULL DEFAULT nextval('fledge.links_id_seq'::regclass),\n       asset_id   integer                     NOT NULL,\n       properties jsonb                       NOT NULL DEFAULT '{}'::jsonb,\n       ts         timestamp(6) with time zone NOT NULL DEFAULT now(),\n       CONSTRAINT links_pkey PRIMARY KEY (id),\n       CONSTRAINT links_fk1 FOREIGN KEY (asset_id)\n       REFERENCES fledge.assets (id) 
MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_links_fk1\n    ON fledge.links USING btree (asset_id);\n\n\n-- Assets Linked table\n-- Associates each link with the assets involved in the relationship.\nCREATE TABLE fledge.asset_links (\n       link_id  integer                     NOT NULL,\n       asset_id integer                     NOT NULL,\n       ts       timestamp(6) with time zone NOT NULL DEFAULT now(),\n       CONSTRAINT asset_links_pkey PRIMARY KEY (link_id, asset_id) );\n\nCREATE INDEX fki_asset_links_fk1\n    ON fledge.asset_links USING btree (link_id);\n\nCREATE INDEX fki_asset_link_fk2\n    ON fledge.asset_links USING btree (asset_id);\n\n\n-- Asset Message Status table\n-- Status of the messages to send South\nCREATE SEQUENCE fledge.asset_message_status_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE TABLE fledge.asset_message_status (\n       id          integer                NOT NULL DEFAULT nextval('fledge.asset_message_status_id_seq'::regclass),\n       description character varying(255) NOT NULL DEFAULT ''::character varying COLLATE pg_catalog.\"default\",\n        CONSTRAINT asset_message_status_pkey PRIMARY KEY (id) );\n\n\n-- Asset Messages table\n-- Messages directed to the south devices.\nCREATE TABLE fledge.asset_messages (\n       id        bigint                      NOT NULL DEFAULT nextval('fledge.asset_messages_id_seq'::regclass),\n       asset_id  integer                     NOT NULL,\n       status_id integer                     NOT NULL,\n       message   jsonb                       NOT NULL DEFAULT '{}'::jsonb,\n       ts        timestamp(6) with time zone NOT NULL DEFAULT now(),\n       CONSTRAINT asset_messages_pkey PRIMARY KEY (id),\n       CONSTRAINT asset_messages_fk1 FOREIGN KEY (asset_id)\n       REFERENCES fledge.assets (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO 
ACTION,\n       CONSTRAINT asset_messages_fk2 FOREIGN KEY (status_id)\n       REFERENCES fledge.asset_message_status (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_asset_messages_fk1\n    ON fledge.asset_messages USING btree (asset_id);\n\nCREATE INDEX fki_asset_messages_fk2\n    ON fledge.asset_messages USING btree (status_id);\n\n\n-- Readings table\n-- This table contains the readings for assets.\n-- An asset can be a south service with multiple sensors, a single sensor,\n-- a piece of software or anything that generates data that is sent to Fledge\nCREATE SEQUENCE fledge.readings_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE TABLE fledge.readings (\n    id         bigint                      NOT NULL DEFAULT nextval('fledge.readings_id_seq'::regclass),\n    asset_code character varying(255)      NOT NULL,                      -- The provided asset code. Not necessarily located in the\n                                                                          -- assets table.\n    reading    jsonb                       NOT NULL DEFAULT '{}'::jsonb,  -- The json object received\n    user_ts    timestamp(6) with time zone NOT NULL DEFAULT now(),        -- The user timestamp extracted from the received message\n    ts         timestamp(6) with time zone NOT NULL DEFAULT now(),\n    CONSTRAINT readings_pkey PRIMARY KEY (id) );\n\nCREATE INDEX fki_readings_fk1\n    ON fledge.readings USING btree (asset_code, user_ts desc);\n\nCREATE INDEX readings_ix2\n    ON fledge.readings USING btree (asset_code);\n\nCREATE INDEX readings_ix3\n    ON fledge.readings USING btree (user_ts);\n\n\n-- Streams table\n-- List of the streams to the Cloud.\nCREATE TABLE fledge.streams (\n       id             integer                     NOT NULL DEFAULT nextval('fledge.streams_id_seq'::regclass),         -- Sequence ID\n       description    character varying(255)      NOT NULL 
DEFAULT ''::character varying COLLATE pg_catalog.\"default\",  -- A brief description of the stream entry\n       properties     jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- A generic set of properties\n       object_stream  jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- Definition of what must be streamed\n       object_block   jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- Definition of how the stream must be organised\n       object_filter  jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- Any filter involved in selecting the data to stream\n       active_window  jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- The window of operations\n       active         boolean                     NOT NULL DEFAULT true,                                                -- When false, the stream is inactive and no data is sent to it\n       last_object    bigint                      NOT NULL DEFAULT 0,                                                   -- The ID of the last object streamed (asset or reading, depending on the object_stream)\n       ts             timestamp(6) with time zone NOT NULL DEFAULT now(),                                               -- Creation or last update\n       CONSTRAINT strerams_pkey PRIMARY KEY (id));\n\n\n-- Configuration table\n-- The configuration in JSON format.\n-- The PK is also used in the REST API\n-- value is a jsonb column\n-- ts is set by default with now().\nCREATE TABLE fledge.configuration (\n       key         character varying(255)      NOT NULL COLLATE pg_catalog.\"default\", -- Primary key\n       display_name character varying(255)     NOT NULL,                              -- Display Name\n       description character varying(255)      NOT NULL,                     
         -- Description, in plain text\n       value       jsonb                       NOT NULL DEFAULT '{}'::jsonb,          -- JSON object containing the configuration values\n       ts          timestamp(6) with time zone NOT NULL DEFAULT now(),                -- Timestamp, updated at every change\n       CONSTRAINT configuration_pkey PRIMARY KEY (key) );\n\n\n-- Configuration changes\n-- This table has the same structure as fledge.configuration, plus the timestamp that identifies the time it changed\n-- The table is used to keep track of the changes in the \"value\" column\nCREATE TABLE fledge.configuration_changes (\n       key                 character varying(255)      NOT NULL COLLATE pg_catalog.\"default\",\n       configuration_ts    timestamp(6) with time zone NOT NULL,\n       configuration_value jsonb                       NOT NULL,\n       ts                  timestamp(6) with time zone NOT NULL DEFAULT now(),\n       CONSTRAINT configuration_changes_pkey PRIMARY KEY (key, configuration_ts) );\n\n\n-- Statistics table\n-- The table is used to keep track of the statistics for Fledge\nCREATE TABLE fledge.statistics (\n       key         character varying(255)      NOT NULL COLLATE pg_catalog.\"default\", -- Primary key, all uppercase\n       description character varying(255)      NOT NULL,                              -- Description, in plain text\n       value       bigint                      NOT NULL DEFAULT 0,                    -- Integer value, the statistics\n       previous_value       bigint             NOT NULL DEFAULT 0,                    -- Integer value, the previous statistic, updated by the metrics collector\n       ts          timestamp(6) with time zone NOT NULL DEFAULT now(),                -- Timestamp, updated at every change\n       CONSTRAINT statistics_pkey PRIMARY KEY (key) );\n\n\n-- Statistics history\n-- Keeps history of the statistics in fledge.statistics\n-- The table is updated at startup\nCREATE SEQUENCE 
fledge.statistics_history_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE TABLE fledge.statistics_history (\n       id          bigint                      NOT NULL DEFAULT nextval('fledge.statistics_history_id_seq'::regclass),\n       key         character varying(255)      NOT NULL COLLATE pg_catalog.\"default\",                         -- Compound primary key, all uppercase\n       history_ts  timestamp(6) with time zone NOT NULL,                                                      -- Compound primary key, the highest value of statistics.ts when statistics are copied here.\n       value       bigint                      NOT NULL DEFAULT 0,                                            -- Integer value, the statistics\n       ts          timestamp(6) with time zone NOT NULL DEFAULT now(),                                        -- Timestamp, updated at every change\n       CONSTRAINT statistics_history_pkey PRIMARY KEY (key, history_ts) );\n\nCREATE INDEX statistics_history_ix2\n    ON fledge.statistics_history(key);\n\nCREATE INDEX statistics_history_ix3\n    ON fledge.statistics_history (history_ts);\n\n-- Contains history of the statistics_history table\n-- Data are historicized daily\n--\nCREATE TABLE fledge.statistics_history_daily (\n        year        integer NOT NULL,\n        day         timestamp(6) with time zone NOT NULL,\n        key         character varying(255)      NOT NULL,\n        value       bigint                      NOT NULL DEFAULT 0\n);\n\nCREATE INDEX statistics_history_daily_ix1\n    ON fledge.statistics_history_daily (year);\n\n-- Resources table\n-- A resource can be anything that is available or can be done in Fledge. 
Examples:\n-- - Access to assets\n-- - Access to readings\n-- - Access to streams\nCREATE TABLE fledge.resources (\n    id          bigint                 NOT NULL DEFAULT nextval('fledge.resources_id_seq'::regclass),\n    code        character(10)          NOT NULL COLLATE pg_catalog.\"default\",\n    description character varying(255) NOT NULL DEFAULT ''::character varying COLLATE pg_catalog.\"default\",\n    CONSTRAINT  resources_pkey PRIMARY KEY (id) );\n\nCREATE UNIQUE INDEX resource_ix1\n    ON fledge.resources USING btree (code COLLATE pg_catalog.\"default\");\n\n\n-- Roles table\nCREATE TABLE fledge.roles (\n    id          integer                NOT NULL DEFAULT nextval('fledge.roles_id_seq'::regclass),\n    name        character varying(25)  NOT NULL COLLATE pg_catalog.\"default\",\n    description character varying(255) NOT NULL DEFAULT ''::character varying COLLATE pg_catalog.\"default\",\n    CONSTRAINT  roles_pkey PRIMARY KEY (id) );\n\nCREATE UNIQUE INDEX roles_ix1\n    ON fledge.roles USING btree (name COLLATE pg_catalog.\"default\");\n\n\n-- Roles, Resources and Permissions table\n-- For each role there are resources associated, with a given permission.\nCREATE TABLE fledge.role_resource_permission (\n       role_id     integer NOT NULL,\n       resource_id integer NOT NULL,\n       access      jsonb   NOT NULL DEFAULT '{}'::jsonb,\n       CONSTRAINT role_resource_permission_pkey PRIMARY KEY (role_id, resource_id),\n       CONSTRAINT role_resource_permissions_fk1 FOREIGN KEY (role_id)\n       REFERENCES fledge.roles (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION,\n       CONSTRAINT role_resource_permissions_fk2 FOREIGN KEY (resource_id)\n       REFERENCES fledge.resources (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_role_resource_permissions_fk1\n    ON fledge.role_resource_permission USING btree (role_id);\n\nCREATE INDEX 
fki_role_resource_permissions_fk2\n    ON fledge.role_resource_permission USING btree (resource_id);\n\n\n-- Roles Assets Permissions table\n-- Combination of roles, assets and access\nCREATE TABLE fledge.role_asset_permissions (\n    role_id    integer NOT NULL,\n    asset_id   integer NOT NULL,\n    access     jsonb   NOT NULL DEFAULT '{}'::jsonb,\n    CONSTRAINT role_asset_permissions_pkey PRIMARY KEY (role_id, asset_id),\n    CONSTRAINT role_asset_permissions_fk1 FOREIGN KEY (role_id)\n    REFERENCES fledge.roles (id) MATCH SIMPLE\n            ON UPDATE NO ACTION\n            ON DELETE NO ACTION,\n    CONSTRAINT role_asset_permissions_fk2 FOREIGN KEY (asset_id)\n    REFERENCES fledge.assets (id) MATCH SIMPLE\n            ON UPDATE NO ACTION\n            ON DELETE NO ACTION );\n\nCREATE INDEX fki_role_asset_permissions_fk1\n    ON fledge.role_asset_permissions USING btree (role_id);\n\nCREATE INDEX fki_role_asset_permissions_fk2\n    ON fledge.role_asset_permissions USING btree (asset_id);\n\n\n-- Users table\n-- Fledge users table.\n-- Access method (see the access_method column):\n-- any  - password or certificate\n-- pwd  - password only\n-- cert - certificate only\nCREATE TABLE fledge.users (\n       id                integer                     NOT NULL DEFAULT nextval('fledge.users_id_seq'::regclass),\n       uname             character varying(80)       NOT NULL COLLATE pg_catalog.\"default\",\n       real_name         character varying(255) NOT NULL,\n       role_id           integer                     NOT NULL,\n       description       character varying(255)      NOT NULL DEFAULT ''::character varying COLLATE pg_catalog.\"default\",\n       pwd               character varying(255)      COLLATE pg_catalog.\"default\",\n       public_key        character varying(255)      COLLATE pg_catalog.\"default\",\n       enabled           boolean    
                 NOT NULL DEFAULT TRUE,\n       pwd_last_changed  timestamp(6) with time zone NOT NULL DEFAULT now(),\n       access_method     character varying(5) CHECK( access_method IN ('any','pwd','cert') )  NOT NULL DEFAULT 'any',\n       hash_algorithm    character varying(6) CHECK( hash_algorithm IN ('SHA256', 'SHA512') )  NOT NULL DEFAULT 'SHA512',\n       failed_attempts   integer                     DEFAULT 0,\n       block_until       timestamp(6)                DEFAULT NULL,\n          CONSTRAINT users_pkey PRIMARY KEY (id),\n          CONSTRAINT users_fk1 FOREIGN KEY (role_id)\n          REFERENCES fledge.roles (id) MATCH SIMPLE\n                  ON UPDATE NO ACTION\n                  ON DELETE NO ACTION );\n\nCREATE INDEX fki_users_fk1\n    ON fledge.users USING btree (role_id);\n\nCREATE UNIQUE INDEX users_ix1\n    ON fledge.users USING btree (uname COLLATE pg_catalog.\"default\");\n\n-- User Login table\n-- List of logins executed by the users.\nCREATE TABLE fledge.user_logins (\n       id               integer                     NOT NULL DEFAULT nextval('fledge.user_logins_id_seq'::regclass),\n       user_id          integer                     NOT NULL,\n       ip               inet                        NOT NULL DEFAULT '0.0.0.0'::inet,\n       ts               timestamp(6) with time zone NOT NULL DEFAULT now(),\n       token            character varying(255)      NOT NULL,\n       token_expiration timestamp(6) with time zone NOT NULL,\n       CONSTRAINT user_logins_pkey PRIMARY KEY (id),\n       CONSTRAINT user_logins_fk1 FOREIGN KEY (user_id)\n       REFERENCES fledge.users (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_user_logins_fk1\n    ON fledge.user_logins USING btree (user_id);\n\n\n-- User Password History table\n-- Maintains a history of passwords\nCREATE TABLE fledge.user_pwd_history (\n       id               integer                     NOT NULL DEFAULT 
nextval('fledge.user_pwd_history_id_seq'::regclass),\n       user_id          integer                     NOT NULL,\n       pwd              character varying(255)      COLLATE pg_catalog.\"default\",\n       CONSTRAINT user_pwd_history_pkey PRIMARY KEY (id),\n       CONSTRAINT user_pwd_history_fk1 FOREIGN KEY (user_id)\n       REFERENCES fledge.users (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_user_pwd_history_fk1\n    ON fledge.user_pwd_history USING btree (user_id);\n\n\n-- User Resource Permissions table\n-- Association of users with resources and given permissions for each resource.\nCREATE TABLE fledge.user_resource_permissions (\n       user_id     integer NOT NULL,\n       resource_id integer NOT NULL,\n       access      jsonb NOT NULL DEFAULT '{}'::jsonb,\n        CONSTRAINT user_resource_permissions_pkey PRIMARY KEY (user_id, resource_id),\n        CONSTRAINT user_resource_permissions_fk1 FOREIGN KEY (user_id)\n        REFERENCES fledge.users (id) MATCH SIMPLE\n                ON UPDATE NO ACTION\n                ON DELETE NO ACTION,\n        CONSTRAINT user_resource_permissions_fk2 FOREIGN KEY (resource_id)\n        REFERENCES fledge.resources (id) MATCH SIMPLE\n                ON UPDATE NO ACTION\n                ON DELETE NO ACTION );\n\nCREATE INDEX fki_user_resource_permissions_fk1\n    ON fledge.user_resource_permissions USING btree (user_id);\n\nCREATE INDEX fki_user_resource_permissions_fk2\n    ON fledge.user_resource_permissions USING btree (resource_id);\n\n\n-- User Asset Permissions table\n-- Association of users with assets\nCREATE TABLE fledge.user_asset_permissions (\n       user_id    integer NOT NULL,\n       asset_id   integer NOT NULL,\n       access     jsonb NOT NULL DEFAULT '{}'::jsonb,\n       CONSTRAINT user_asset_permissions_pkey PRIMARY KEY (user_id, asset_id),\n       CONSTRAINT user_asset_permissions_fk1 FOREIGN KEY (user_id)\n       REFERENCES fledge.users 
(id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION,\n       CONSTRAINT user_asset_permissions_fk2 FOREIGN KEY (asset_id)\n       REFERENCES fledge.assets (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_user_asset_permissions_fk1\n    ON fledge.user_asset_permissions USING btree (user_id);\n\nCREATE INDEX fki_user_asset_permissions_fk2\n    ON fledge.user_asset_permissions USING btree (asset_id);\n\n\n-- List of scheduled Processes\nCREATE TABLE fledge.scheduled_processes (\n             name   character varying(255)  NOT NULL, -- Name of the process\n             script jsonb,                            -- Full path of the process\n             priority INTEGER NOT NULL DEFAULT 999,   -- priority to run for STARTUP\n  CONSTRAINT scheduled_processes_pkey PRIMARY KEY ( name ) );\n\n\n-- List of schedules\nCREATE TABLE fledge.schedules (\n             id                uuid                   NOT NULL, -- PK\n             process_name      character varying(255) NOT NULL, -- FK process name\n             schedule_name     character varying(255) NOT NULL, -- schedule name\n             schedule_type     smallint               NOT NULL, -- 1 = startup,  2 = timed\n                                                                -- 3 = interval, 4 = manual\n             schedule_interval interval,                        -- Repeat interval\n             schedule_time     time,                            -- Start time\n             schedule_day      smallint,                        -- ISO day 1 = Monday, 7 = Sunday\n             exclusive         boolean not null default true,   -- true = Only one task can run\n                                                                -- at any given time\n             enabled           boolean not null default false,  -- false = A given schedule is disabled by default\n  CONSTRAINT schedules_pkey PRIMARY KEY  ( id ),\n  CONSTRAINT 
schedules_fk1  FOREIGN KEY  ( process_name )\n  REFERENCES fledge.scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\n\n\n-- List of tasks\nCREATE TABLE fledge.tasks (\n             id           uuid                        NOT NULL,               -- PK\n             schedule_name character varying(255),                            -- Name of the task\n             schedule_id  uuid                        NOT NULL,               -- Link between schedule & task table\n             process_name character varying(255)      NOT NULL,               -- Name of the task's process\n             state        smallint                    NOT NULL,               -- 1-Running, 2-Complete, 3-Cancelled, 4-Interrupted\n             start_time   timestamp(6) with time zone NOT NULL DEFAULT now(), -- The date and time the task started\n             end_time     timestamp(6) with time zone,                        -- The date and time the task ended\n             reason       character varying(255),                             -- The reason why the task ended\n             pid          integer                     NOT NULL,               -- Linux process id\n             exit_code    integer,                                            -- Process exit status code (negative means exited via signal)\n  CONSTRAINT tasks_pkey PRIMARY KEY ( id ),\n  CONSTRAINT tasks_fk1 FOREIGN KEY  ( process_name )\n  REFERENCES fledge.scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\n\nCREATE INDEX tasks_ix1\n   ON fledge.tasks(schedule_name, start_time);\n\n\n-- Tracks the types already created in the PI Server\nCREATE TABLE fledge.omf_created_objects (\n    configuration_key character varying(255)\tNOT NULL,            -- FK to fledge.configuration\n    type_id           integer           \t    NOT NULL,            -- Identifies the specific PI Server type\n    asset_code        character 
varying(255)   NOT NULL,\n    CONSTRAINT omf_created_objects_pkey PRIMARY KEY (configuration_key,type_id, asset_code),\n    CONSTRAINT omf_created_objects_fk1 FOREIGN KEY (configuration_key)\n    REFERENCES fledge.configuration (key) MATCH SIMPLE\n            ON UPDATE NO ACTION\n            ON DELETE NO ACTION );\n\n\n-- Backups information\n-- Stores information about executed backups\nCREATE TABLE fledge.backups (\n    id         bigint                      NOT NULL DEFAULT nextval('fledge.backups_id_seq'::regclass),\n    file_name  character varying(255)      NOT NULL DEFAULT ''::character varying COLLATE pg_catalog.\"default\", -- Backup file name, expressed as absolute path\n    ts         timestamp(6) with time zone NOT NULL DEFAULT now(),                                              -- Backup creation timestamp\n    type       integer           \t         NOT NULL,                                                            -- Backup type : 1-Full, 2-Incremental\n    status     integer           \t         NOT NULL,                                                            -- Backup status :\n                                                                                                                --   1-Running\n                                                                                                                --   2-Completed\n                                                                                                                --   3-Cancelled\n                                                                                                                --   4-Interrupted\n                                                                                                                --   5-Failed\n                                                                                                                --   6-Restored backup\n    exit_code  integer,                                                                            
             -- Process exit status code\n    CONSTRAINT backups_pkey PRIMARY KEY (id) );\n\n-- Fledge DB version\nCREATE TABLE fledge.version (id CHAR(10));\n\n-- Create the configuration category_children table\nCREATE SEQUENCE fledge.category_children_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\nCREATE TABLE fledge.category_children (\n       id integer DEFAULT nextval('fledge.category_children_id_seq'::regclass) NOT NULL,\n       parent character varying(255) NOT NULL,\n       child character varying(255) NOT NULL,\n       CONSTRAINT config_children_pkey PRIMARY KEY(id)\n);\n\nCREATE UNIQUE INDEX config_children_ix1 ON fledge.category_children(parent, child);\n\n-- Create the asset_tracker table\nCREATE TABLE fledge.asset_tracker (\n       id              integer                         NOT NULL DEFAULT nextval('fledge.asset_tracker_id_seq'::regclass),\n       asset           character(255)                  NOT NULL, -- asset name\n       event           character varying(50)           NOT NULL, -- event name\n       service         character varying(255)          NOT NULL, -- service name\n       fledge          character varying(50)           NOT NULL, -- FL service name\n       plugin          character varying(50)           NOT NULL, -- Plugin name\n       deprecated_ts   timestamp(6) with time zone             , -- Set when the asset record is deprecated; when NULL the entry has not been deprecated\n       ts              timestamp(6) with time zone     NOT NULL DEFAULT now(),\n       data            jsonb                           DEFAULT '{}'::jsonb\n);\n\nCREATE INDEX asset_tracker_ix1 ON fledge.asset_tracker USING btree (asset);\nCREATE INDEX asset_tracker_ix2 ON fledge.asset_tracker USING btree (service);\n\n-- Create plugin_data table\n-- Persist plugin data in the storage\nCREATE TABLE fledge.plugin_data (\n\tkey             character varying(255)    NOT NULL,\n\tdata 
           jsonb                     NOT NULL DEFAULT '{}'::jsonb,\n\tservice_name    character varying(255),\n\tCONSTRAINT plugin_data_pkey PRIMARY KEY (key) );\n\n-- Create packages table\nCREATE TABLE fledge.packages (\n             id                uuid                   NOT NULL, -- PK\n             name              character varying(255) NOT NULL, -- Package name\n             action            character varying(10)  NOT NULL, -- APT actions:\n                                                                -- list\n                                                                -- install\n                                                                -- purge\n                                                                -- update\n             status            INTEGER                NOT NULL, -- exit code\n                                                                -- -1       - in-progress\n                                                                --  0       - success\n                                                                -- Non-Zero - failed\n             log_file_uri      character varying(255) NOT NULL, -- Package Log file relative path\n  CONSTRAINT packages_pkey PRIMARY KEY  ( id ) );\n\n-- Create filters table\nCREATE TABLE fledge.filters (\n             name        character varying(255)        NOT NULL,\n             plugin      character varying(255)        NOT NULL,\n       CONSTRAINT filter_pkey PRIMARY KEY( name ) );\n\n-- Create filter_users table\nCREATE TABLE fledge.filter_users (\n             name        character varying(255)        NOT NULL,\n             \"user\"      character varying(255)        NOT NULL);\n\n-- Create control_script table\n-- Script management for control dispatch service\nCREATE TABLE fledge.control_script (\n             name          character varying(255)        NOT NULL,\n             steps         jsonb                         NOT NULL DEFAULT '{}'::jsonb,\n             acl           
character varying(255),\n             CONSTRAINT    control_script_pkey           PRIMARY KEY (name) );\n\n-- Create control_acl table\n-- Access Control List Management for control dispatch service\nCREATE TABLE fledge.control_acl (\n             name          character varying(255)        NOT NULL,\n             service       jsonb                         NOT NULL DEFAULT '{}'::jsonb,\n             url           jsonb                         NOT NULL DEFAULT '{}'::jsonb,\n             CONSTRAINT    control_acl_pkey              PRIMARY KEY (name) );\n\n-- Access Control List usage relation\nCREATE TABLE fledge.acl_usage (\n             name            character varying(255)  NOT NULL,  -- ACL name\n             entity_type     character varying(80)   NOT NULL,  -- associated entity type: service or script \n             entity_name     character varying(255)  NOT NULL,  -- associated entity name\n             CONSTRAINT      usage_acl_pkey          PRIMARY KEY (name, entity_type, entity_name) );\n\n-- Create control_source table\nCREATE TABLE fledge.control_source (\n             cpsid            integer                     NOT NULL DEFAULT nextval('fledge.control_source_id_seq'::regclass),     -- auto source id\n             name             character  varying(40)      NOT NULL                                                          ,     -- source name\n             description      character  varying(120)     NOT NULL                                                          ,     -- source description\n             CONSTRAINT       control_source_pkey         PRIMARY KEY (cpsid)\n            );\n\n-- Create control_destination table\nCREATE TABLE fledge.control_destination (\n             cpdid            integer                     NOT NULL DEFAULT nextval('fledge.control_destination_id_seq'::regclass),  -- auto destination id\n             name             character  varying(40)      NOT NULL                                                               ,  
-- destination name\n             description      character  varying(120)     NOT NULL                                                               ,  -- destination description\n             CONSTRAINT       control_destination_pkey    PRIMARY KEY (cpdid)\n            );\n\n-- Create control_pipelines table\nCREATE TABLE fledge.control_pipelines (\n             cpid             integer                     NOT NULL DEFAULT nextval('fledge.control_pipelines_id_seq'::regclass),  -- control pipeline id\n             name             character  varying(255)     NOT NULL                                                             ,  -- control pipeline name\n             stype            integer                                                                                          ,  -- source type id from control_source table\n             sname            character  varying(80)                                                                           ,  -- source name from control_source table\n             dtype            integer                                                                                          ,  -- destination type id from control_destination table\n             dname            character  varying(80)                                                                           ,  -- destination name from control_destination table\n             enabled          boolean                     NOT NULL DEFAULT FALSE                                               ,  -- false = A given pipeline is disabled by default\n             execution        character  varying(20)      NOT NULL DEFAULT  'shared'                                           ,  -- pipeline is executed with the shared execution model by default\n             CONSTRAINT       control_pipelines_pkey      PRIMARY KEY (cpid)\n             );\n\n-- Create control_filters table\nCREATE TABLE fledge.control_filters (\n             fid              integer                          NOT 
NULL DEFAULT nextval('fledge.control_filters_id_seq'::regclass),  -- auto filter id\n             cpid             integer                          NOT NULL                                                           ,  -- control pipeline id\n             forder           integer                          NOT NULL                                                           ,  -- filter order\n             fname            character  varying(255)          NOT NULL                                                           ,  -- Name of the filter instance\n             CONSTRAINT       control_filters_pkey             PRIMARY KEY (fid)                                                  ,\n             CONSTRAINT       control_filters_fk1              FOREIGN KEY (cpid) REFERENCES fledge.control_pipelines (cpid)  MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n             );\n\n-- Create control_api table\nCREATE TABLE fledge.control_api (\n             name             character  varying(255)     NOT NULL                 ,       -- control API name\n             description      character  varying(255)     NOT NULL                 ,       -- description of control API\n             type             integer                     NOT NULL                 ,       -- 0 for write and 1 for operation\n             operation_name   character  varying(255)                              ,       -- name of the operation, only valid if type is operation\n             destination      integer                     NOT NULL                 ,       -- destination of request; 0-broadcast, 1-service, 2-asset, 3-script\n             destination_arg  character  varying(255)                              ,       -- name of the destination, only used if destination is non-zero\n             anonymous        boolean                     NOT NULL DEFAULT  'f'    ,       -- allow anonymous callers to make requests to the control API; false by default\n             CONSTRAINT       control_api_pname   
        PRIMARY KEY (name)\n             );\n\n-- Create control_api_parameters table\nCREATE TABLE fledge.control_api_parameters (\n             name             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.control_api\n             parameter        character  varying(255)     NOT NULL                 ,       -- name of parameter\n             value            character  varying(255)                              ,       -- value of parameter if constant otherwise default\n             constant         boolean                     NOT NULL                 ,       -- parameter is either a constant or variable\n             CONSTRAINT       control_api_parameters_fk1  FOREIGN KEY (name) REFERENCES fledge.control_api (name)  MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n             );\n\n-- Create control_api_acl table\nCREATE TABLE fledge.control_api_acl (\n             name             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.control_api\n             \"user\"           character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.users\n             CONSTRAINT       control_api_acl_fk1         FOREIGN KEY (name) REFERENCES fledge.control_api (name)  MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION,\n             CONSTRAINT       control_api_acl_fk2         FOREIGN KEY (\"user\") REFERENCES fledge.users (uname)  MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n             );\n\nCREATE TABLE fledge.monitors (\n             service        character varying(255) NOT NULL,\n             monitor        character varying(80) NOT NULL,\n             minimum        bigint,\n             maximum        bigint,\n             average        bigint,\n             samples        bigint,\n             ts             timestamp(6) with time zone NOT NULL DEFAULT now()\n             );\n\nCREATE INDEX monitors_ix1\n    ON fledge.monitors(service, 
monitor);\n\n-- Create alerts table\n\nCREATE TABLE fledge.alerts (\n       key         character varying(80)       NOT NULL,                                 -- Primary key\n       message     character varying(255)      NOT NULL,                                 -- Alert Message\n       urgency     smallint                    NOT NULL,                                 -- 1 Critical - 2 High - 3 Normal - 4 Low\n       ts          timestamp(6) with time zone NOT NULL DEFAULT now(),                   -- Timestamp, updated at every change\n       CONSTRAINT  alerts_pkey                 PRIMARY KEY (key) );\n\n\n-- Grants to fledge schema\nGRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA fledge TO PUBLIC;\n\n----------------------------------------------------------------------\n-- Initialization phase - DML\n----------------------------------------------------------------------\n\n-- Roles\nDELETE FROM fledge.roles;\nINSERT INTO fledge.roles ( name, description )\n     VALUES ('admin', 'All CRUD privileges'),\n            ('user', 'All CRUD operations and self profile management'),\n            ('view', 'Only to view the configuration'),\n            ('data-view', 'Only read the data in buffer'),\n            ('control', 'Same as editor can do and also have access for control scripts and pipelines');\n\n\n-- Users\nDELETE FROM fledge.users;\nINSERT INTO fledge.users ( uname, real_name, pwd, role_id, description )\n     VALUES ('admin', 'Admin user', '495f7f5b17c534dbeabab3da2287a934b32ed6876568563b04c312be49e8773299243abd3881d13112ccfb67c4fb3ec8231406474810e1f6eb347d61c63785d4:672169c60df24b76b6b94e78cad800f8', 1, 'admin user'),\n            ('user', 'Normal user', '495f7f5b17c534dbeabab3da2287a934b32ed6876568563b04c312be49e8773299243abd3881d13112ccfb67c4fb3ec8231406474810e1f6eb347d61c63785d4:672169c60df24b76b6b94e78cad800f8', 2, 'normal user');\n\n-- User password history\nDELETE FROM fledge.user_pwd_history;\n\n\n-- User logins\nDELETE FROM 
fledge.user_logins;\n\n\n-- Log Codes\nDELETE FROM fledge.log_codes;\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'PURGE', 'Data Purging Process' ),\n            ( 'LOGGN', 'Logging Process' ),\n            ( 'STRMN', 'Streaming Process' ),\n            ( 'SYPRG', 'System Purge' ),\n            ( 'START', 'System Startup' ),\n            ( 'FSTOP', 'System Shutdown' ),\n            ( 'CONCH', 'Configuration Change' ),\n            ( 'CONAD', 'Configuration Addition' ),\n            ( 'SCHCH', 'Schedule Change' ),\n            ( 'SCHAD', 'Schedule Addition' ),\n            ( 'SRVRG', 'Service Registered' ),\n            ( 'SRVUN', 'Service Unregistered' ),\n            ( 'SRVFL', 'Service Fail' ),\n            ( 'SRVRS', 'Service Restart' ),\n            ( 'NHCOM', 'North Process Complete' ),\n            ( 'NHDWN', 'North Destination Unavailable' ),\n            ( 'NHAVL', 'North Destination Available' ),\n            ( 'UPEXC', 'Update Complete' ),\n            ( 'BKEXC', 'Backup Complete' ),\n            ( 'NTFDL', 'Notification Deleted' ),\n            ( 'NTFAD', 'Notification Added' ),\n            ( 'NTFSN', 'Notification Sent' ),\n            ( 'NTFCL', 'Notification Cleared' ),\n            ( 'NTFST', 'Notification Server Startup' ),\n            ( 'NTFSD', 'Notification Server Shutdown' ),\n            ( 'PKGIN', 'Package installation' ),\n            ( 'PKGUP', 'Package updated' ),\n            ( 'PKGRM', 'Package purged' ),\n            ( 'DSPST', 'Dispatcher Startup' ),\n            ( 'DSPSD', 'Dispatcher Shutdown' ),\n            ( 'ESSRT', 'External Service Startup' ),\n            ( 'ESSTP', 'External Service Shutdown' ),\n            ( 'ASTDP', 'Asset deprecated' ),\n            ( 'ASTUN', 'Asset un-deprecated' ),\n            ( 'PIPIN', 'Pip installation' ),\n            ( 'AUMRK', 'Audit Log Marker' ),\n            ( 'USRAD', 'User Added' ),\n            ( 'USRDL', 'User Deleted' ),\n            ( 'USRCH', 'User Changed' ),\n  
          ( 'USRRS', 'User Restored' ),\n            ( 'ACLAD', 'ACL Added' ),( 'ACLCH', 'ACL Changed' ),( 'ACLDL', 'ACL Deleted' ),\n            ( 'CTSAD', 'Control Script Added' ),( 'CTSCH', 'Control Script Changed' ),('CTSDL', 'Control Script Deleted' ),\n            ( 'CTPAD', 'Control Pipeline Added' ),( 'CTPCH', 'Control Pipeline Changed' ),('CTPDL', 'Control Pipeline Deleted' ),\n            ( 'CTEAD', 'Control Entrypoint Added' ),( 'CTECH', 'Control Entrypoint Changed' ),('CTEDL', 'Control Entrypoint Deleted' ),\n            ( 'BUCAD', 'Bucket Added' ), ( 'BUCCH', 'Bucket Changed' ), ( 'BUCDL', 'Bucket Deleted' ),\n            ( 'USRBK', 'User Blocked' ), ( 'USRUB', 'User Unblocked' )\n            ;\n\n--\n-- Configuration parameters\n--\nDELETE FROM fledge.configuration;\n\n-- Statistics\nINSERT INTO fledge.statistics ( key, description, value, previous_value )\n     VALUES ( 'READINGS',             'Readings received by Fledge', 0, 0 ),\n            ( 'BUFFERED',             'Readings currently in the Fledge buffer', 0, 0 ),\n            ( 'UNSENT',               'Readings filtered out in the send process', 0, 0 ),\n            ( 'PURGED',               'Readings removed from the buffer by the purge process', 0, 0 ),\n            ( 'UNSNPURGED',           'Readings that were purged from the buffer before being sent', 0, 0 ),\n            ( 'DISCARDED',            'Readings discarded by the South Service before being  placed in the buffer. 
This may be due to an error in the readings themselves.', 0, 0 );\n\n--\n-- Scheduled processes\n--\n-- Use this to create guids: https://www.uuidgenerator.net/version1 */\n-- Weekly repeat for timed schedules: set schedule_interval to 168:00:00\n\n-- Core Tasks\n--\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'purge',               '[\"tasks/purge\"]'       );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'purge_system',        '[\"tasks/purge_system\"]');\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'stats collector',     '[\"tasks/statistics\"]'  );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'FledgeUpdater',       '[\"tasks/update\"]'      );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'certificate checker', '[\"tasks/check_certs\"]' );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'update checker',      '[\"tasks/check_updates\"]');\n\n-- Storage Tasks\n--\nINSERT INTO fledge.scheduled_processes (name, script) VALUES ('backup',  '[\"tasks/backup\"]'  );\nINSERT INTO fledge.scheduled_processes (name, script) VALUES ('restore', '[\"tasks/restore\"]' );\n\n-- South, Notification, North Tasks\n--\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'south_c',           '[\"services/south_c\"]',          100 );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'notification_c',    '[\"services/notification_c\"]',   30  );\nINSERT INTO fledge.scheduled_processes (name, script)             VALUES ( 'north_c',           '[\"tasks/north_c\"]'                  );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'north_C',           '[\"services/north_C\"]',          200 );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'dispatcher_c',      '[\"services/dispatcher_c\"]',     20  );\nINSERT INTO fledge.scheduled_processes (name, script, 
priority)   VALUES ( 'bucket_storage_c',  '[\"services/bucket_storage_c\"]', 10  );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'pipeline_c',        '[\"services/pipeline_c\"]',       90  );\n\n-- Automation script tasks\n--\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'automation_script', '[\"tasks/automation_script\"]' );\n\n--\n-- Schedules\n--\n-- Use this to create guids: https://www.uuidgenerator.net/version1 */\n-- Weekly repeat for timed schedules: set schedule_interval to 168:00:00\n--\n\n\n-- Core Tasks\n--\n\n-- Purge\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'cea17db8-6ccc-11e7-907b-a6006ad3dba0', -- id\n                'purge',                                -- schedule_name\n                'purge',                                -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:10:00',                             -- schedule_interval (every 10 minutes)\n                true,                                   -- exclusive\n                true                                    -- enabled\n              );\n\n--\n-- Purge System\n--\n-- Purge old information from the fledge database\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                               schedule_time, schedule_interval, exclusive, enabled )\nVALUES ( 'd37265f0-c83a-11eb-b8bc-0242ac130003', -- id\n         'purge_system',                         -- schedule_name\n         'purge_system',                         -- process_name\n         3,                                      -- schedule_type (interval)\n         NULL,                                   -- schedule_time\n         '23:50:00',                             
-- schedule_interval (approximately every 24 hours)\n         'true',                                 -- exclusive\n         'true'                                  -- enabled\n       );\n\n\n-- Statistics collection\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '2176eb68-7303-11e7-8cf7-a6006ad3dba0', -- id\n                'stats collection',                     -- schedule_name\n                'stats collector',                      -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:15',                             -- schedule_interval\n                true,                                   -- exclusive\n                true                                    -- enabled\n              );\n\n-- Update checker\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '852cd8e4-3c29-440b-89ca-2c7691b0450d', -- id\n                'update checker',                       -- schedule_name\n                'update checker',                       -- process_name\n                2,                                      -- schedule_type (timed)\n                '00:05:00',                             -- schedule_time\n                '00:00:00',                             -- schedule_interval\n                true,                                   -- exclusive\n                true                                    -- enabled\n              );\n\n-- Check for expired certificates\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 
'2176eb68-7303-11e7-8cf7-a6107ad3db21', -- id\n                'certificate checker',                  -- schedule_name\n                'certificate checker',                  -- process_name\n                2,                                      -- schedule_type (timed)\n                '00:05:00',                             -- schedule_time\n                '12:00:00',                             -- schedule_interval\n                true,                                   -- exclusive\n                true                                    -- enabled\n              );\n\n-- Storage Tasks\n--\n\n-- Execute a Backup every 1 hour\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'd1631422-9ec6-11e7-abc4-cec278b6b50a', -- id\n                'backup hourly',                        -- schedule_name\n                'backup',                               -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '01:00:00',                             -- schedule_interval\n                true,                                   -- exclusive\n                false                                   -- disabled\n              );\n\n-- On demand Backup\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'fac8dae6-d8d1-11e7-9296-cec278b6b50a', -- id\n                'backup on demand',                     -- schedule_name\n                'backup',                               -- process_name\n                4,                                      -- schedule_type (manual)\n                NULL,                                   -- schedule_time\n                
'00:00:00',                             -- schedule_interval\n                true,                                   -- exclusive\n                true                                    -- enabled\n              );\n\n-- On demand Restore\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '8d4d3ca0-de80-11e7-80c1-9a214cf093ae', -- id\n                'restore on demand',                    -- schedule_name\n                'restore',                              -- process_name\n                4,                                      -- schedule_type (manual)\n                NULL,                                   -- schedule_time\n                '00:00:00',                             -- schedule_interval\n                true,                                   -- exclusive\n                true                                    -- enabled\n              );\n\n-- The Schema Service table, used to hold information about extension schemas\nCREATE TABLE fledge.service_schema (\n             name          character varying(255)        NOT NULL,\n             service       character varying(255)        NOT NULL,\n             version       integer                       NOT NULL,\n             definition    JSON);\n\n-- Insert predefined entries for Control Source\nDELETE FROM fledge.control_source;\nINSERT INTO fledge.control_source ( name, description )\n     VALUES ('Any', 'Any source.'),\n            ('Service', 'A named service that is the source of the control pipeline.'),\n            ('API', 'The control pipeline source is the REST API.'),\n            ('Notification', 'The control pipeline originated from a notification.'),\n            ('Schedule', 'The control request was triggered by a schedule.'),\n            ('Script', 'The control request has come from the named script.');\n\n-- Insert predefined entries for Control 
Destination\nDELETE FROM fledge.control_destination;\nINSERT INTO fledge.control_destination ( name, description )\n     VALUES ('Any', 'Any destination.'),\n            ('Service', 'The name of the service that is being controlled.'),\n            ('Asset', 'The name of the asset that is being controlled.'),\n            ('Script', 'The name of the script that will be executed.'),\n            ('Broadcast', 'No name is applied; the pipeline is considered for any control writes or operations sent to broadcast destinations.');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/schema_update.sh",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n__author__=\"Massimiliano Pinto\"\n__version__=\"1.0\"\n\nFLEDGE_DB_VERSION=$1\nNEW_VERSION=$2\nPG_SQL=$3\n\necho \"$@\" | grep -q -- --verbose && VERBOSE=\"Y\"\n\n# Include logging\n. $FLEDGE_ROOT/scripts/common/write_log.sh\n\n# Logger wrapper\nschema_update_log() {\n    write_log \"Storage\" \"scripts.plugins.storage.postgres.schema_update\" \"$1\" \"$2\" \"$3\" \"$4\"\n}\n\n# Parameters passed by the caller\nif [ ! \"$1\" ] || [ ! \"$2\" ] || [ ! \"$3\" ]; then\n   schema_update_log \"err\" \"Error: missing required parameters for upgrade/downgrade. Fledge cannot start.\"\n   exit 1\nfi\n\n# Same version check: do nothing\nif [ \"${FLEDGE_DB_VERSION}\" == \"${NEW_VERSION}\" ]; then\n    schema_update_log \"info\" \"Fledge DB schema is already up to date at version ${FLEDGE_DB_VERSION}\" \"logonly\" \"pretty\"\n    exit 0\nfi\n\n# Perform DB Upgrade\ndb_upgrade()\n{\n    UPDATE_SCRIPTS_DIR=\"$FLEDGE_ROOT/scripts/plugins/storage/postgres/upgrade\"\n    # Start from next schema revision\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} + 1`\n    while [ \"${CHECK_VER}\" -le ${NEW_VERSION} ]\n    do\n        UPGRADE_SCRIPT=\"${UPDATE_SCRIPTS_DIR}/${CHECK_VER}.sql\"\n        if [ ! 
-e \"${UPGRADE_SCRIPT}\" ]; then\n            schema_update_log \"err\" \"Error in schema Upgrade: cannot find file ${UPGRADE_SCRIPT} \"\\\n\"required for [${FLEDGE_DB_VERSION}] to [${NEW_VERSION}] upgrade. Exiting\" \"all\" \"pretty\"\n            return 1\n        fi\n        CHECK_VER=`expr $CHECK_VER + 1`\n    done\n\n    START_UPGRADE=\"\"\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} + 1`\n    # sort in ascending order\n    for sql_file in `ls -1 ${UPDATE_SCRIPTS_DIR}/*.sql | sort -V`\n        do \n            # Get start_ver from filename START_VER-to-END_VER.sql\n            START_VER=`echo $(basename -s '.sql' $sql_file)`\n\n            # Skip current file ?\n            # Logic is: if sql_file name has START_VER != FLEDGE_DB_VERSION skip it\n            # else mark the START_UPGRADE\n            if [ ! \"${START_UPGRADE}\" ] && [ \"${START_VER}\" != \"${CHECK_VER}\" ]; then\n                if [ \"${VERBOSE}\" ]; then\n                    schema_update_log \"info\" \"Skipping upgrade $(basename ${sql_file}) \"\\\n\"for Fledge upgrade from ${FLEDGE_DB_VERSION} to ${NEW_VERSION}\" \"logonly\" \"pretty\"\n                fi\n\n                # Get next file in the list\n                continue\n            else\n                START_UPGRADE=\"Y\"\n            fi\n\n            # Perform Upgrade\n            if [ \"${START_UPGRADE}\" ]; then\n                # Apply current update\n                SQL_COMMAND=\"$PG_SQL -v ON_ERROR_STOP=1 -d fledge -q -f ${sql_file} > /dev/null 2>&1\"\n                if [ \"${VERBOSE}\" ]; then\n                    schema_update_log \"info\" \"Applying upgrade $(basename ${sql_file}) ...\" \"logonly\" \"pretty\"\n                    schema_update_log \"info\" \"Calling [${SQL_COMMAND}]\" \"logonly\" \"pretty\"\n                fi\n\n                # Call the DB script\n                eval \"${SQL_COMMAND}\"\n\n                if [ $? 
-ne 0 ]; then\n                    schema_update_log \"err\" \"Upgrade failed while applying the SQL file ${sql_file}. Stopping the upgrade.\" \"all\" \"pretty\"\n                    return 1\n                fi\n\n                # Update the DB version\n                UPDATE_VER=`basename -s .sql ${sql_file}`\n                SQL_COMMAND=\"$PG_SQL -v ON_ERROR_STOP=1 -d fledge -q -c \\\"UPDATE fledge.version SET id = '${UPDATE_VER}'\\\"\"\n                eval \"${SQL_COMMAND}\"\n\n                if [ $? -ne 0 ]; then\n                    schema_update_log \"err\" \"Failed to update the version to ${UPDATE_VER}. Aborting/Stopping the upgrade.\" \"all\" \"pretty\"\n                    return 1\n                fi\n\n                # If \"ver\" in current filename is NEW_VERSION we are done\n                if [ \"${START_VER}\" == \"${NEW_VERSION}\" ]; then\n                    if [ \"${VERBOSE}\" ]; then\n                        schema_update_log \"info\" \"Found last upgrade file $(basename ${sql_file}) for \"\\\n\"${FLEDGE_DB_VERSION} to ${NEW_VERSION} version upgrade\" \"logonly\" \"pretty\"\n                    fi\n                    # Report success\n                    schema_update_log \"info\" \"Fledge DB schema has been upgraded to version [${NEW_VERSION}]\" \"all\" \"pretty\"\n                    return 0\n                fi\n            fi\n        done\n        # Report error\n        if [ \"${START_UPGRADE}\" ]; then\n             schema_update_log \"err\" \"Error: the Fledge DB schema has not been upgraded \"\\\n\"to version [${NEW_VERSION}], this sql file is [${sql_file}]\" \"all\" \"pretty\"\n            return 1\n        fi\n}\n\n# Perform DB Downgrade\ndb_downgrade()\n{\n    DOWNGRADE_SCRIPTS_DIR=\"$FLEDGE_ROOT/scripts/plugins/storage/postgres/downgrade\"\n    # Start from next schema revision\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} - 1`\n    while [ \"${CHECK_VER}\" -ge ${NEW_VERSION} ]\n    do\n        
DOWNGRADE_SCRIPT=\"${DOWNGRADE_SCRIPTS_DIR}/${CHECK_VER}.sql\"\n        if [ ! -e \"${DOWNGRADE_SCRIPT}\" ]; then\n            schema_update_log \"err\" \"Error in schema Downgrade: cannot find file ${DOWNGRADE_SCRIPT} \"\\\n\"required for [${FLEDGE_DB_VERSION}] to [${NEW_VERSION}] downgrade. Exiting\" \"all\" \"pretty\"\n            return 1\n        fi\n        CHECK_VER=`expr $CHECK_VER - 1`\n    done\n\n    START_DOWNGRADE=\"\"\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} - 1`\n    # sort in descending order\n    for sql_file in `ls -1 ${DOWNGRADE_SCRIPTS_DIR}/*.sql | sort -rV`\n        do \n            # Get start_ver from filename START_VER-to-END_VER.sql\n            START_VER=`echo $(basename -s '.sql' $sql_file)`\n\n            # Skip current file?\n            # Logic is: sql_file name has START_VER != FLEDGE_DB_VERSION skip it\n            # else mark START_DOWNGRADE\n            if [ ! \"${START_DOWNGRADE}\" ] && [ \"${START_VER}\" != \"${CHECK_VER}\" ]; then\n                if [ \"${VERBOSE}\" ]; then\n                    schema_update_log \"info\" \"Skipping downgrade $(basename ${sql_file}) \"\\\n\"for Fledge downgrade from ${FLEDGE_DB_VERSION} to ${NEW_VERSION}\" \"logonly\" \"pretty\"\n                fi\n\n                # Get next file in the list\n                continue\n            else\n                START_DOWNGRADE=\"Y\"\n            fi\n\n            # Perform Downgrade\n            if [ \"${START_DOWNGRADE}\" ]; then\n                SQL_COMMAND=\"$PG_SQL -v ON_ERROR_STOP=1 -d fledge -q -f ${sql_file} > /dev/null 2>&1\"\n                if [ \"${VERBOSE}\" ]; then\n                    schema_update_log \"info\" \"Applying downgrade $(basename ${sql_file}) ...\" \"logonly\" \"pretty\"\n                    schema_update_log \"info\" \"Calling [${SQL_COMMAND}]\" \"logonly\" \"pretty\"\n                fi\n\n                # Call DB script\n                eval \"${SQL_COMMAND}\"\n                if [ $? 
-ne 0 ]; then\n                    schema_update_log \"err\" \"Downgrade failed while applying the SQL file ${sql_file}. Stopping the downgrade.\" \"all\" \"pretty\"\n                    return 1\n                fi\n\n                # Update DB version\n                SQL_COMMAND=\"$PG_SQL -v ON_ERROR_STOP=1 -d fledge -q -c \\\"UPDATE fledge.version SET id = '${START_VER}'\\\"\"\n                eval \"${SQL_COMMAND}\"\n                if [ $? -ne 0 ]; then\n                    schema_update_log \"err\" \"Failed to update the version to ${START_VER}. Aborting/Stopping the downgrade.\" \"all\" \"pretty\"\n                    return 1\n                fi\n\n                # If \"ver\" in current filename is NEW_VERSION we are done\n                if [ \"${START_VER}\" == \"${NEW_VERSION}\" ]; then\n                    if [ \"${VERBOSE}\" ]; then\n                        schema_update_log \"info\" \"Found last downgrade file $(basename ${sql_file}) for \"\\\n\"${FLEDGE_DB_VERSION} to ${NEW_VERSION} version downgrade\" \"logonly\" \"pretty\"\n                    fi\n                    # Report success\n                    schema_update_log \"info\" \"Fledge DB schema has been downgraded to version [${NEW_VERSION}]\" \"all\" \"pretty\"\n                    return 0\n                fi\n            fi\n        done\n        # Report error\n        if [ \"${START_DOWNGRADE}\" ]; then\n            schema_update_log \"err\" \"Error: the Fledge DB schema has not been downgraded \"\\\n\"to version [${NEW_VERSION}], this sql file is [${sql_file}]\" \"all\" \"pretty\"\n            return 1\n        fi\n}\n\n# Check whether we need to Upgrade or Downgrade\nCHECK_OPERATION=`printf '%s\\n' \"${NEW_VERSION}\" \"${FLEDGE_DB_VERSION}\" | sort -V | head -n 1`\nif [ \"${CHECK_OPERATION}\" == \"${NEW_VERSION}\" ]; then\n    SCHEMA_OPT=\"DOWNGRADE\"\nelse\n    SCHEMA_OPT=\"UPGRADE\"\nfi\n\n# Call 
the schema operation\nif [ \"${SCHEMA_OPT}\" == \"UPGRADE\" ]; then\n    db_upgrade || exit 1\nelse\n    db_downgrade || exit 1\nfi\n\nexit 0\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/10.sql",
    "content": "CREATE INDEX readings_ix2\n    ON readings USING btree (asset_code);\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/11.sql",
    "content": "DELETE FROM fledge.statistics WHERE key IN (\n    'NORTH_READINGS_TO_PI',\n    'NORTH_STATISTICS_TO_PI',\n    'NORTH_READINGS_TO_HTTP',\n    'North Readings to PI',\n    'North Statistics to PI',\n    'North Readings to OCS'\n    ) AND value = 0;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/12.sql",
    "content": "ALTER TABLE fledge.streams DROP CONSTRAINT streams_fk1;\nDROP TABLE IF EXISTS fledge.destinations;\nDROP INDEX IF EXISTS fledge.fki_streams_fk1;\n\n-- Drops destination_id field from the table\nDROP TABLE IF EXISTS fledge.streams_old;\nALTER TABLE fledge.streams RENAME TO streams_old;\n\nALTER TABLE fledge.streams_old RENAME CONSTRAINT strerams_pkey TO strerams_pkey_old;\n\nCREATE TABLE fledge.streams (\n       id             integer                     NOT NULL DEFAULT nextval('fledge.streams_id_seq'::regclass),         -- Sequence ID\n       description    character varying(255)      NOT NULL DEFAULT ''::character varying COLLATE pg_catalog.\"default\",  -- A brief description of the stream entry\n       properties     jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- A generic set of properties\n       object_stream  jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- Definition of what must be streamed\n       object_block   jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- Definition of how the stream must be organised\n       object_filter  jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- Any filter involved in selecting the data to stream\n       active_window  jsonb                       NOT NULL DEFAULT '{}'::jsonb,                                         -- The window of operations\n       active         boolean                     NOT NULL DEFAULT true,                                                -- When false, all data to this stream stop and are inactive\n       last_object    bigint                      NOT NULL DEFAULT 0,                                                   -- The ID of the last object streamed (asset or reading, depending on the object_stream)\n       ts             timestamp(6) with time zone NOT NULL DEFAULT now(),  
                                             -- Creation or last update\n       CONSTRAINT strerams_pkey PRIMARY KEY (id));\n\nINSERT INTO fledge.streams\n        SELECT\n            id,\n            description,\n            properties,\n            object_stream,\n            object_block,\n            object_filter,\n            active_window,\n            active,\n            last_object,\n            ts\n        FROM fledge.streams_old;\n\nDROP TABLE fledge.streams_old;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/13.sql",
    "content": "CREATE INDEX statistics_history_ix3\n    ON fledge.statistics_history (history_ts);\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/14.sql",
    "content": "-- Use plugin name pi_server instead of former omf\nUPDATE fledge.configuration SET value = jsonb_set(value, '{plugin, value}', '\"pi_server\"') WHERE value->'plugin'->>'value' = 'omf';\nUPDATE fledge.configuration SET value = jsonb_set(value, '{plugin, default}', '\"pi_server\"') WHERE value->'plugin'->>'default' = 'omf';\n\n-- Insert PURGE_READ under Utilities parent category\nINSERT INTO fledge.category_children SELECT 'Utilities', 'PURGE_READ' WHERE NOT EXISTS(SELECT 1 FROM fledge.category_children WHERE parent = 'Utilities' AND child = 'PURGE_READ');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/15.sql",
    "content": "CREATE INDEX IF NOT EXISTS log_ix2 on fledge.log(ts);\nCREATE INDEX IF NOT EXISTS tasks_ix1 on fledge.tasks(process_name, start_time);\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/16.sql",
    "content": "ALTER TABLE fledge.configuration ADD COLUMN display_name character varying(255);\nUPDATE fledge.configuration SET display_name = key;"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/17.sql",
    "content": "-- Create plugin_data table\n-- Persist plugin data in the storage\nCREATE TABLE IF NOT EXISTS fledge.plugin_data (\n        key     character varying(255)    NOT NULL,\n        data    JSON                      NOT NULL DEFAULT '{}',\n        CONSTRAINT plugin_data_pkey PRIMARY KEY (key) );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/18.sql",
    "content": "ALTER TABLE fledge.tasks ADD COLUMN schedule_name character varying(255);\nDROP INDEX IF EXISTS fledge.tasks_ix1;\nCREATE INDEX tasks_ix1\n   ON fledge.tasks(schedule_name, start_time);"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/19.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'NTFDL', 'Notification Deleted' );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/2.sql",
    "content": "drop index IF EXISTS fki_readings_fk1;\ncreate index fki_readings_fk1 on fledge.readings USING btree(asset_code, user_ts desc);\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/20.sql",
    "content": "-- Create filters table\nCREATE TABLE fledge.filters (\n             name        character varying(255)        NOT NULL,\n             plugin      character varying(255)        NOT NULL,\n       CONSTRAINT filter_pkey PRIMARY KEY( name ) );\n\n-- Create filter_users table\nCREATE TABLE fledge.filter_users (\n             name        character varying(255)        NOT NULL,\n             \"user\"      character varying(255)        NOT NULL);\n\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/21.sql",
    "content": "-- Notification log codes\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'NTFAD', 'Notification Added' );\n\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'NTFSN', 'Notification Sent' );\n\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'NTFST', 'Notification Server Startup' );\n\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'NTFSD', 'Notification Server Shutdown' );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/22.sql",
    "content": "CREATE INDEX readings_ix3\n    ON fledge.readings USING btree (user_ts);\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/23.sql",
    "content": "-- Nothing required, empty file to keep numbering same as SQLite\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/24.sql",
    "content": "-- Nothing required, empty file to keep numbering same as SQLite\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/25.sql",
    "content": "-- No actions"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/26.sql",
    "content": "INSERT INTO scheduled_processes ( name, script ) VALUES ( 'south_c',   '[\"services/south_c\"]' ) ON CONFLICT DO NOTHING;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/27.sql",
    "content": "-- Remove index on fledge.readings read_key\nDROP INDEX IF EXISTS fledge.readings_ix1;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/28.sql",
    "content": "-- Add audit log key NTFCL\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'NTFCL', 'Notification Cleared' );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/29.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'PKGIN', 'Package installation' ), ( 'PKGUP', 'Package updated' );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/3.sql",
    "content": "ALTER TABLE fledge.omf_created_objects DROP CONSTRAINT omf_created_objects_fk1;\n\nUPDATE fledge.omf_created_objects  SET configuration_key = 'North Readings to PI'   WHERE configuration_key = 'SEND_PR_1';\nUPDATE fledge.omf_created_objects  SET configuration_key = 'North Statistics to PI' WHERE configuration_key = 'SEND_PR_2';\nUPDATE fledge.omf_created_objects  SET configuration_key = 'North Readings to OCS'  WHERE configuration_key = 'SEND_PR_4';\n\nALTER TABLE fledge.omf_created_objects\n    ADD CONSTRAINT  omf_created_objects_fk1 FOREIGN KEY (configuration_key)\n    REFERENCES fledge.configuration (key) MATCH SIMPLE\n            ON UPDATE NO ACTION\n            ON DELETE NO ACTION;\n\nUPDATE fledge.configuration SET key = 'North Readings to PI' WHERE key = 'SEND_PR_1';\nUPDATE fledge.configuration SET key = 'North Statistics to PI' WHERE key = 'SEND_PR_2';\nUPDATE fledge.configuration SET key = 'North Readings to OCS' WHERE key = 'SEND_PR_4';\n\n-- Insert entries for DHT11 C++ south plugin\n\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'dht11',\n              'DHT11 South C Plugin',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"dht11\", \"default\" : \"dht11\", \"description\" : \"Module that DHT11 South Plugin will load\" } } '\n            );\n\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'dht11',   '[\"services/south_c\"]' );\n\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '6b25f4d9-c7f3-4fc8-bd4a-4cf79f7055ca', -- id\n                'dht11',                                -- schedule_name\n                'dht11',                                -- process_name\n                1,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n          
      '01:00:00',                             -- schedule_interval (every hour)\n                true,                                   -- exclusive\n                false                                   -- enabled\n              );\n\n-- North_Readings_to_PI - OMF Translator for readings - C Code\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North_Readings_to_PI',\n              'OMF North Plugin - C Code',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"omf\", \"default\" : \"omf\", \"description\" : \"Module that OMF North Plugin will load\" } } '\n            );\n\n-- North_Statistics_to_PI - OMF Translator for statistics - C Code\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North_Statistics_to_PI',\n              'OMF North Plugin - C Code',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"omf\", \"default\" : \"omf\", \"description\" : \"Module that OMF North Plugin will load\" } } '\n            );\n\n-- North Tasks - C code\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'North_Readings_to_PI',   '[\"tasks/north_c\"]' );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'North_Statistics_to_PI', '[\"tasks/north_c\"]' );\n\n-- Readings OMF to PI - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '1cdf1ef8-7e02-11e8-adc0-fa7ae01bbebc', -- id\n                'OMF_to_PI_north_C',                    -- schedule_name\n                'North_Readings_to_PI',                 -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                't',                                    
-- exclusive\n                'f'                                     -- disabled\n              );\n\n-- Statistics OMF to PI - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'f1e3b377-5acb-4bde-93d5-b6a792f76e07', -- id\n                'Stats_OMF_to_PI_north_C',              -- schedule_name\n                'North_Statistics_to_PI',               -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                't',                                    -- exclusive\n                'f'                                     -- disabled\n              );\n-- Statistics\nINSERT INTO fledge.statistics ( key, description, value, previous_value ) VALUES ( 'NORTH_READINGS_TO_PI', 'Statistics sent to historian', 0, 0 );\nINSERT INTO fledge.statistics ( key, description, value, previous_value ) VALUES ( 'NORTH_STATISTICS_TO_PI', 'Statistics sent to historian', 0, 0 );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/30.sql",
    "content": "-- updates \"PIServerEndpoint\" option :\n--\n--  \"discovery\" -> \"Auto Discovery\"\n--  \"piwebapi\"  -> \"PI Web API\"\n--  \"cr\"        -> \"Connector Relay\"\n--\n\nUPDATE configuration\nSET value = jsonb_set(value, '{PIServerEndpoint, options}',  $$[\"Auto Discovery\",\"PI Web API\",\"Connector Relay\"]$$)\nWHERE value->'plugin'->>'value' = 'PI_Server_V2';\n\nUPDATE configuration SET value = jsonb_set(value, '{PIServerEndpoint, default}', '\"Connector Relay\"')\nWHERE value->'plugin'->>'value' = 'PI_Server_V2';\n\n--\n--\nUPDATE configuration SET value = jsonb_set(value, '{PIServerEndpoint, value}', '\"Auto Discovery\"')\nWHERE value->'plugin'->>'value' = 'PI_Server_V2' AND\n      value->'PIServerEndpoint'->>'value' = 'discovery';\n\nUPDATE configuration SET value = jsonb_set(value,  '{PIServerEndpoint, value}', '\"PI Web API\"')\nWHERE value->'plugin'->>'value' = 'PI_Server_V2' AND\n      value->'PIServerEndpoint'->>'value' = 'piwebapi';\n\nUPDATE configuration SET value = jsonb_set(value,  '{PIServerEndpoint, value}', '\"Connector Relay\"')\nWHERE value->'plugin'->>'value' = 'PI_Server_V2' AND\n      value->'PIServerEndpoint'->>'value' = 'cr';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/31.sql",
    "content": "ALTER TABLE fledge.tasks ADD COLUMN schedule_id uuid;\nDELETE FROM fledge.tasks WHERE fledge.tasks.schedule_name NOT IN (SELECT schedule_name FROM fledge.schedules);\nUPDATE fledge.tasks SET schedule_id = (SELECT id FROM fledge.schedules WHERE fledge.tasks.schedule_name = fledge.schedules.schedule_name AND fledge.tasks.process_name = fledge.schedules.process_name);\nALTER TABLE fledge.tasks ALTER COLUMN schedule_id SET NOT NULL;"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/32.sql",
    "content": "--\n-- DROP the column: read_key   uuid\n--\nALTER TABLE fledge.readings DROP COLUMN read_key;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/33.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'PKGRM', 'Package purged' );"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/34.sql",
    "content": "\n--- configuration -------------------------------------------------------------------------------------------------------\n\n-- plugin\n--\nUPDATE configuration SET value = jsonb_set(value, '{plugin, default}', '\"OMF\"') WHERE value->'plugin'->>'value'='PI_Server_V2';\nUPDATE configuration SET value = jsonb_set(value, '{plugin, value}', '\"OMF\"') WHERE value->'plugin'->>'value'='PI_Server_V2';\n\n-- PIServerEndpoint\n--\nUPDATE configuration SET value = jsonb_set(value, '{PIServerEndpoint, options}', $$[\"PI Web API\",\"Connector Relay\",\"OSIsoft Cloud Services\",\"Edge Data Store\"]$$) WHERE value->'plugin'->>'value'='OMF';\nUPDATE configuration SET value = jsonb_set(value, '{PIServerEndpoint, description}', '\"Select the endpoint among PI Web API, Connector Relay, OSIsoft Cloud Services or Edge Data Store\"') WHERE value->'plugin'->>'value'='OMF';\nUPDATE configuration SET value = jsonb_set(value, '{PIServerEndpoint, order}', '\"1\"') WHERE value->'plugin'->>'value'='OMF';\nUPDATE configuration SET value = jsonb_set(value, '{PIServerEndpoint, displayName}', '\"Endpoint\"') WHERE value->'plugin'->>'value'='OMF';\nUPDATE configuration SET value = jsonb_set(value, '{PIServerEndpoint, default}', '\"Connector Relay\"') WHERE value->'plugin'->>'value'='OMF';\nUPDATE configuration SET value = jsonb_set(value, '{PIServerEndpoint, value}', '\"Connector Relay\"') WHERE value->'plugin'->>'value'='OMF' AND value->'PIServerEndpoint'->>'value'='Auto Discovery';\nUPDATE configuration SET value = jsonb_set(value, '{PIServerEndpoint, value}', '\"Connector Relay\"') WHERE value->'plugin'->>'value'='OMF' AND value->'PIServerEndpoint'->>'value'='cr';\nUPDATE configuration SET value = jsonb_set(value, '{PIServerEndpoint, value}', '\"PI Web API\"') WHERE value->'plugin'->>'value'='OMF' AND value->'PIServerEndpoint'->>'value'='piwebapi';\n\n-- ServerHostname\n-- Note: This is a new config item and its value extract from old URL config item\n--\nUPDATE configuration 
SET value = value || '{\"ServerHostname\": {\"default\": \"localhost\", \"validity\": \"PIServerEndpoint != \\\"OSIsoft Cloud Services\\\"\", \"description\": \"Hostname of the server running the endpoint either PI Web API or Connector Relay or Edge Data Store\", \"displayName\": \"Server hostname\", \"type\": \"string\", \"order\": \"2\"}}'::jsonb WHERE value->'plugin'->>'value'='OMF';\nWITH K AS (\nSELECT key from configuration where (value->>'plugin')::jsonb->'value'='\"OMF\"'\n)\nUPDATE configuration AS t_cfg SET value=jsonb_set(value, '{ServerHostname, value}',\n(SELECT to_json((string_to_array(split_part(((value->>'URL')::jsonb->'value')::text, '/', 3), ':'))[1])::jsonb\nFROM configuration c where (value->>'plugin')::jsonb->'value'='\"OMF\"' AND t_cfg.key = c.key)\n) WHERE key in (select key from K);\n\n-- ServerPort\n-- Note: This is a new config item and its value extract from old URL config item\n--\nUPDATE configuration SET value = value || '{\"ServerPort\": {\"default\": \"0\", \"validity\": \"PIServerEndpoint != \\\"OSIsoft Cloud Services\\\"\", \"description\": \"Port on which the endpoint either PI Web API or Connector Relay or Edge Data Store is listening, 0 will use the default one\", \"displayName\": \"Server port, 0=use the default\", \"type\": \"integer\", \"order\": \"3\"}}'::jsonb WHERE value->'plugin'->>'value'='OMF';\nWITH K AS (\nSELECT key from configuration where (value->>'plugin')::jsonb->'value'='\"OMF\"'\n)\nUPDATE configuration AS t_cfg SET value=jsonb_set(value, '{ServerPort, value}',\n(SELECT to_json((string_to_array(split_part(((value->>'URL')::jsonb->'value')::text, '/', 3), ':'))[2])::jsonb\nFROM configuration c where (value->>'plugin')::jsonb->'value'='\"OMF\"' AND t_cfg.key = c.key)\n) WHERE key in (select key from K);\n\n-- URL\n-- Note: Removed URL config item as it is replaced by ServerHostname & ServerPort\n--\nUPDATE configuration SET value = value - 'URL' WHERE value ->'plugin'->>'value'='OMF';\n\n-- 
producerToken\n--\nUPDATE configuration SET value = jsonb_set(value, '{producerToken, order}', '\"4\"') WHERE value->'plugin'->>'value'='OMF';\nUPDATE configuration SET value = jsonb_set(value, '{producerToken, validity}', '\"PIServerEndpoint == \\\"Connector Relay\\\"\"') WHERE value->'plugin'->>'value'='OMF';\n\n-- source\n--\nUPDATE configuration SET value = jsonb_set(value, '{source, order}', '\"5\"') WHERE value->'plugin'->>'value'='OMF';\n\n-- StaticData\n--\nUPDATE configuration SET value = jsonb_set(value, '{StaticData, order}', '\"6\"') WHERE value->'plugin'->>'value'='OMF';\n\n-- OMFRetrySleepTime\n--\nUPDATE configuration SET value = jsonb_set(value, '{OMFRetrySleepTime, order}', '\"7\"') WHERE value->'plugin'->>'value'='OMF';\n\n-- OMFMaxRetry\n--\nUPDATE configuration SET value = jsonb_set(value, '{OMFMaxRetry, order}', '\"8\"') WHERE value->'plugin'->>'value'='OMF';\n\n-- OMFHttpTimeout\n--\nUPDATE configuration SET value = jsonb_set(value, '{OMFHttpTimeout, order}', '\"9\"') WHERE value->'plugin'->>'value'='OMF';\n\n-- formatInteger\n--\nUPDATE configuration SET value = jsonb_set(value, '{formatInteger, order}', '\"10\"') WHERE value->'plugin'->>'value'='OMF';\n\n-- formatNumber\n--\nUPDATE configuration SET value = jsonb_set(value, '{formatNumber, order}', '\"11\"') WHERE value->'plugin'->>'value'='OMF';\n\n-- compression\n--\nUPDATE configuration SET value = jsonb_set(value, '{compression, order}', '\"12\"') WHERE value->'plugin'->>'value'='OMF';\n\n--  DefaultAFLocation\n-- Note: This is a new config item and its default & value extract from old AFHierarchy1Level config item\n--\nUPDATE configuration SET value = value || '{\"DefaultAFLocation\": {\"validity\": \"PIServerEndpoint != \\\"PI Web API\\\"\", \"description\": \"Defines the hierarchies tree in Asset Framework in which the assets will be created, each level is separated by /, PI Web API only.\", \"displayName\": \"Asset Framework hierarchies tree\", \"type\": \"string\", \"order\": 
\"13\"}}'::jsonb WHERE value->'plugin'->>'value'='OMF';\nWITH K AS (\nSELECT key from configuration where (value->>'plugin')::jsonb->'value'='\"OMF\"'\n)\nUPDATE configuration AS t_cfg SET value=jsonb_set(value, '{DefaultAFLocation, default}',\n(SELECT value->'AFHierarchy1Level'->'default'\nFROM configuration c where (value->>'plugin')::jsonb->'value'='\"OMF\"' AND t_cfg.key = c.key)\n) WHERE key in (select key from K);\n\nWITH K AS (\nSELECT key from configuration where (value->>'plugin')::jsonb->'value'='\"OMF\"'\n)\nUPDATE configuration AS t_cfg SET value=jsonb_set(value, '{DefaultAFLocation, value}',\n(SELECT value->'AFHierarchy1Level'->'value'\nFROM configuration c where (value->>'plugin')::jsonb->'value'='\"OMF\"' AND t_cfg.key = c.key)\n) WHERE key in (select key from K);\n\n-- AFHierarchy1Level\n-- Note: Removed AFHierarchy1Level config item as it is replaced by new config item DefaultAFLocation\n--\nUPDATE configuration SET value = value - 'AFHierarchy1Level' WHERE value ->'plugin'->>'value'='OMF';\n\n-- AFMap\n-- Note: This is a new config item\n--\nUPDATE configuration SET value = value || '{\"AFMap\": {\"default\": \"{}\", \"value\": \"{}\", \"validity\": \"PIServerEndpoint != \\\"PI Web API\\\"\", \"description\": \"Defines a set of rules to address where assets should be placed in the AF hierarchy.\", \"displayName\": \"Asset Framework hierarchies rules\", \"type\": \"JSON\", \"order\": \"14\"}}'::jsonb WHERE value->'plugin'->>'value'='OMF';\n\n-- notBlockingErrors\n--\nUPDATE configuration SET value = jsonb_set(value, '{notBlockingErrors, order}', '\"15\"') WHERE value->'plugin'->>'value'='OMF';\n\n-- streamId\n--\nUPDATE configuration SET value = jsonb_set(value, '{streamId, order}', '\"16\"') WHERE value->'plugin'->>'value'='OMF';\n\n-- PIWebAPIAuthenticationMethod\n--\nUPDATE configuration SET value = jsonb_set(value, '{PIWebAPIAuthenticationMethod, order}', '\"17\"') WHERE value->'plugin'->>'value'='OMF';\nUPDATE configuration SET value = 
jsonb_set(value, '{PIWebAPIAuthenticationMethod, validity}', '\"PIServerEndpoint != \\\"PI Web API\\\"\"') WHERE value->'plugin'->>'value'='OMF';\n\n-- PIWebAPIUserId\n--\nUPDATE configuration SET value = jsonb_set(value, '{PIWebAPIUserId, order}', '\"18\"') WHERE value->'plugin'->>'value'='OMF';\nUPDATE configuration SET value = jsonb_set(value, '{PIWebAPIUserId, validity}', '\"PIServerEndpoint != \\\"PI Web API\\\"\"') WHERE value->'plugin'->>'value'='OMF';\n\n-- PIWebAPIPassword\n--\nUPDATE configuration SET value = jsonb_set(value, '{PIWebAPIPassword, order}', '\"19\"') WHERE value->'plugin'->>'value'='OMF';\nUPDATE configuration SET value = jsonb_set(value, '{PIWebAPIPassword, validity}', '\"PIServerEndpoint != \\\"PI Web API\\\"\"') WHERE value->'plugin'->>'value'='OMF';\n\n-- PIWebAPIKerberosKeytabFileName\n-- Note: This is a new config item\n--\nUPDATE configuration SET value = value || '{\"PIWebAPIKerberosKeytabFileName\": {\"default\": \"piwebapi_kerberos_https.keytab\", \"value\": \"piwebapi_kerberos_https.keytab\", \"validity\": \"PIWebAPIAuthenticationMethod == \\\"kerberos\\\"\", \"description\": \"Keytab file name used for Kerberos authentication in PI Web API.\", \"displayName\": \"PI Web API Kerberos keytab file\", \"type\": \"string\", \"order\": \"20\"}}'::jsonb WHERE value->'plugin'->>'value'='OMF';\n\n---\n--- OCS configuration\n--- NOTE - All config items are new ones for OCS\n-- OCSNamespace\n--\nUPDATE configuration SET value = value || '{\"OCSNamespace\": {\"default\": \"name_space\", \"value\": \"name_space\", \"validity\": \"PIServerEndpoint == \\\"OSIsoft Cloud Services\\\"\", \"description\": \"Specifies the OCS namespace where the information is stored and it is used for the interaction with the OCS API\", \"displayName\": \"OCS Namespace\", \"type\": \"string\", \"order\": \"21\"}}'::jsonb WHERE value->'plugin'->>'value'='OMF';\n\n-- OCSTenantId\n--\nUPDATE configuration SET value = value || '{\"OCSTenantId\": {\"default\": 
\"ocs_tenant_id\", \"value\": \"ocs_tenant_id\", \"validity\": \"PIServerEndpoint == \\\"OSIsoft Cloud Services\\\"\", \"description\": \"Tenant id associated to the specific OCS account\", \"displayName\": \"OCS Tenant ID\", \"type\": \"string\", \"order\": \"22\"}}'::jsonb WHERE value->'plugin'->>'value'='OMF';\n\n-- OCSClientId\n--\nUPDATE configuration SET value = value || '{\"OCSClientId\": {\"default\": \"ocs_client_id\", \"value\": \"ocs_client_id\", \"validity\": \"PIServerEndpoint == \\\"OSIsoft Cloud Services\\\"\", \"description\": \"Client id associated to the specific OCS account, it is used to authenticate the source for using the OCS API\", \"displayName\": \"OCS Client ID\", \"type\": \"string\", \"order\": \"23\"}}'::jsonb WHERE value->'plugin'->>'value'='OMF';\n\n-- OCSClientSecret\n--\nUPDATE configuration SET value = value || '{\"OCSClientSecret\": {\"default\": \"ocs_client_secret\", \"value\": \"ocs_client_secret\", \"validity\": \"PIServerEndpoint == \\\"OSIsoft Cloud Services\\\"\", \"description\": \"Client secret associated to the specific OCS account, it is used to authenticate the source for using the OCS API\", \"displayName\": \"OCS Client Secret\", \"type\": \"password\", \"order\": \"24\"}}'::jsonb WHERE value->'plugin'->>'value'='OMF';\n\n--- plugin_data -------------------------------------------------------------------------------------------------------\n-- plugin_data\n--\nUPDATE plugin_data SET key = REPLACE(key,'PI_Server_V2','OMF') WHERE POSITION('PI_Server_V2' in key) > 0;\n\n--- asset_tracker -------------------------------------------------------------------------------------------------------\nUPDATE asset_tracker SET plugin = 'OMF' WHERE plugin in ('PI_Server_V2', 'ocs_V2');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/35.sql",
    "content": ""
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/36.sql",
    "content": "-- Scheduled process entries for south, notification, north tasks\n\nINSERT INTO fledge.scheduled_processes SELECT 'south_c', '[\"services/south_c\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'south_c');\nINSERT INTO fledge.scheduled_processes SELECT 'notification_c', '[\"services/notification_c\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'notification_c');\nINSERT INTO fledge.scheduled_processes SELECT 'north_c', '[\"tasks/north_c\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'north_c');\nINSERT INTO fledge.scheduled_processes SELECT 'north', '[\"tasks/north\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'north');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/37.sql",
    "content": "-- Scheduled process entry north microservice\n\nINSERT INTO fledge.scheduled_processes SELECT 'north_C', '[\"services/north_C\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'north_C');\n\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/38.sql",
    "content": "-- No action is required"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/39.sql",
    "content": "-- Create packages table\n\nDROP TABLE IF EXISTS fledge.packages;\n\nCREATE TABLE fledge.packages (\n             id                uuid                   NOT NULL, -- PK\n             name              character varying(255) NOT NULL, -- Package name\n             action            character varying(10)  NOT NULL, -- APT actions:\n                                                                -- list\n                                                                -- install\n                                                                -- purge\n                                                                -- update\n             status            INTEGER                NOT NULL, -- exit code\n                                                                -- -1       - in-progress\n                                                                --  0       - success\n                                                                -- Non-Zero - failed\n             log_file_uri      character varying(255) NOT NULL, -- Package Log file relative path\n  CONSTRAINT packages_pkey PRIMARY KEY  ( id ) );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/4.sql",
    "content": "-- Create the configuration category_children table\nCREATE TABLE fledge.category_children (\n       parent\tcharacter varying(255)\tNOT NULL,\n       child\tcharacter varying(255)\tNOT NULL,\n       CONSTRAINT config_children_pkey PRIMARY KEY (parent, child) );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/40.sql",
    "content": "ALTER TABLE fledge.readings ALTER COLUMN asset_code TYPE varchar(255);\nALTER TABLE fledge.omf_created_objects ALTER COLUMN asset_code TYPE varchar(255);\nALTER TABLE fledge.statistics ALTER COLUMN key TYPE varchar(255);\nALTER TABLE fledge.statistics_history ALTER COLUMN key TYPE varchar(255);\nALTER TABLE fledge.asset_tracker ALTER COLUMN asset TYPE varchar(255);\n\n\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/41.sql",
    "content": "-- Scheduled process entry for dispatcher microservice\n\nINSERT INTO fledge.scheduled_processes SELECT 'dispatcher_c', '[\"services/dispatcher_c\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'dispatcher_c');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/42.sql",
    "content": "ALTER TABLE fledge.users ADD COLUMN real_name character varying(255);\nUPDATE fledge.users SET real_name='Admin user' where uname='admin';\nUPDATE fledge.users SET real_name='Normal user' where uname='user';\nALTER TABLE fledge.users ADD COLUMN access_method VARCHAR(5) NOT NULL DEFAULT 'any';\nALTER TABLE fledge.users ADD CONSTRAINT access_method_check CHECK (access_method IN ('any', 'pwd', 'cert'));\nUPDATE fledge.users SET access_method='any';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/43.sql",
    "content": "-- Contains history of the statistics_history table\n-- Data are historicized daily\n--\n\nBEGIN TRANSACTION;\n\nDROP INDEX IF EXISTS fledge.statistics_history_daily_ix1;\nDROP TABLE IF EXISTS fledge.statistics_history_daily;\n\nCREATE TABLE fledge.statistics_history_daily (\n                                                 year        integer NOT NULL,\n                                                 day         timestamp(6) with time zone NOT NULL,\n                                                 key         character varying(255)      NOT NULL,\n                                                 value       bigint                      NOT NULL DEFAULT 0\n);\n\nCREATE INDEX statistics_history_daily_ix1\n    ON fledge.statistics_history_daily (year);\n\n--- statistics_history_daily ------------------------------------------------------------------:\n\nINSERT INTO fledge.statistics_history_daily\n(year, day, key, value)\nSELECT\n    EXTRACT(YEAR FROM  date(history_ts)),\n    date(history_ts),\n    key,\n    sum(\"value\") AS \"value\"\nFROM fledge.statistics_history\nWHERE \"history_ts\" < now() - INTERVAL '7 days'\nGROUP BY date(history_ts), key;\n\nDELETE FROM fledge.statistics_history WHERE \"history_ts\" < now() - INTERVAL '7 days';\n\n---  -----------------------------------------------------------------------------------------:\n\nDELETE FROM fledge.tasks WHERE start_time < now() - INTERVAL '30 days';\nDELETE FROM fledge.log   WHERE ts         < now() - INTERVAL '30 days';\n\n--- Insert purge system schedule and process entry\nDELETE FROM fledge.schedules           WHERE id   = 'd37265f0-c83a-11eb-b8bc-0242ac130003';\nDELETE FROM fledge.scheduled_processes WHERE name = 'purge_system';\n\nINSERT INTO fledge.scheduled_processes (name, script) VALUES ('purge_system', '[\"tasks/purge_system\"]');\nINSERT INTO fledge.schedules (id, schedule_name, process_name, schedule_type, schedule_time, schedule_interval, exclusive, enabled)\nVALUES 
('d37265f0-c83a-11eb-b8bc-0242ac130003', -- id\n        'purge_system',                         -- schedule_name\n        'purge_system',                         -- process_name\n        3,                                      -- schedule_type (interval)\n        NULL,                                   -- schedule_time\n        '23:50:00',                             -- schedule_interval (every 24 hours)\n        'true',                                 -- exclusive\n        'true'                                  -- enabled\n    );\n\nCOMMIT;"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/44.sql",
    "content": "UPDATE fledge.configuration SET value = jsonb_set(value, '{retainUnsent}', '{\"description\": \"Retain data that has not been sent yet.\", \"type\": \"enumeration\", \"options\":[\"purge unsent\", \"retain unsent to any destination\", \"retain unsent to all destinations\"], \"default\": \"purge unsent\", \"displayName\": \"Retain Unsent Data\",\"value\": \"purge unsent\"}')\n        WHERE key = 'PURGE_READ' AND\n              value->'retainUnsent'->>'value' = 'false';\n\n\nUPDATE fledge.configuration SET value = jsonb_set(value, '{retainUnsent}', '{\"description\": \"Retain data that has not been sent yet.\", \"type\": \"enumeration\", \"options\":[\"purge unsent\", \"retain unsent to any destination\", \"retain unsent to all destinations\"], \"default\": \"purge unsent\", \"displayName\": \"Retain Unsent Data\",\"value\": \"retain unsent to all destinations\"}')\n        WHERE  key = 'PURGE_READ' AND\n               value->'retainUnsent'->>'value' = 'true';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/45.sql",
    "content": "-- Create Control service support table\n\nDROP TABLE IF EXISTS fledge.control_script;\nDROP TABLE IF EXISTS fledge.control_acl;\n\n-- Script management for control dispatch service\nCREATE TABLE fledge.control_script (\n             name          character varying(255)        NOT NULL,\n             steps         jsonb                         NOT NULL DEFAULT '{}'::jsonb,\n             acl           character varying(255),\n             CONSTRAINT    control_script_pkey           PRIMARY KEY (name) );\n\n-- Access Control List Management for control dispatch service\nCREATE TABLE fledge.control_acl (\n             name          character varying(255)        NOT NULL,\n             service       jsonb                         NOT NULL DEFAULT '{}'::jsonb,\n             url           jsonb                         NOT NULL DEFAULT '{}'::jsonb,\n             CONSTRAINT    control_acl_pkey              PRIMARY KEY (name) );\n\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/46.sql",
    "content": "-- Dispatcher log codes\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'DSPST', 'Dispatcher Startup' ),\n            ( 'DSPSD', 'Dispatcher Shutdown' );"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/47.sql",
    "content": "-- Scheduled process entry for automation script task\n\nINSERT INTO fledge.scheduled_processes SELECT 'automation_script', '[\"tasks/automation_script\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'automation_script');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/48.sql",
    "content": "-- Scheduled process entry for BucketStorage microservice\n\nINSERT INTO fledge.scheduled_processes SELECT 'bucket_storage_c', '[\"services/bucket_storage_c\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'bucket_storage_c');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/49.sql",
    "content": "-- Addition of id autoincrement column\n\n-- Add sequence\nCREATE SEQUENCE IF NOT EXISTS fledge.category_children_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\n-- Remove existing primary key\nALTER TABLE fledge.category_children DROP CONSTRAINT IF EXISTS config_children_pkey;\n\n-- Add new column as primary key\nALTER TABLE fledge.category_children ADD COLUMN id INTEGER NOT NULL DEFAULT nextval('fledge.category_children_id_seq'::regclass);\n\n-- Add unique index for parent, child\nCREATE UNIQUE INDEX IF NOT EXISTS config_children_idx1 ON fledge.category_children(parent, child);\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/5.sql",
    "content": "UPDATE fledge.configuration SET value = jsonb_set(value, '{source}', '{\"description\": \"Source of data to be sent on the stream. May be either readings, statistics or audit.\", \"type\": \"string\", \"default\": \"readings\", \"value\": \"readings\"}')\n        WHERE key = 'North Readings to PI';\n\nUPDATE fledge.configuration SET value = jsonb_set(value, '{source}', '{\"description\": \"Source of data to be sent on the stream. May be either readings, statistics or audit.\", \"type\": \"string\", \"default\": \"statistics\", \"value\": \"statistics\"}')\n        WHERE key = 'North Statistics to PI';\n\nUPDATE fledge.configuration SET value = jsonb_set(value, '{source}', '{\"description\": \"Source of data to be sent on the stream. May be either readings, statistics or audit.\", \"type\": \"string\", \"default\": \"readings\", \"value\": \"readings\"}')\n        WHERE key = 'North Readings to OCS';\n\nUPDATE statistics SET key = 'North Readings to PI' WHERE key = 'SENT_1';\nUPDATE statistics SET key = 'North Statistics to PI' WHERE key = 'SENT_2';\nUPDATE statistics SET key = 'North Readings to OCS' WHERE key = 'SENT_4';\n\n---\nINSERT INTO fledge.statistics ( key , description ) VALUES ( 'Readings Sent',   'Readings Sent North' );\nINSERT INTO fledge.statistics ( key , description ) VALUES ( 'Statistics Sent',   'Statistics Sent North' );\n\nINSERT INTO fledge.configuration (key, description, value) VALUES ( 'North',   'North tasks' , '{}' );\n\nUPDATE fledge.schedules SET schedule_name=process_name WHERE process_name  in (SELECT name FROM  fledge.scheduled_processes  WHERE script ? 'tasks/north');\n\nINSERT INTO fledge.category_children (parent, child)\nSELECT 'North', name FROM  fledge.scheduled_processes  WHERE script ? 
'tasks/north';\n\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'north',   '[\"tasks/north\"]' );\n\nUPDATE fledge.schedules SET process_name='north' WHERE schedule_name in (SELECT name FROM  fledge.scheduled_processes  WHERE script ? 'tasks/north');\n\nINSERT INTO fledge.category_children (parent, child) VALUES ( 'North',   'OMF_TYPES' );\n\n--- Disables North pending tasks created before the upgrade process\nUPDATE tasks SET end_time=start_time, exit_code=0, state=2  WHERE end_time is null AND process_name in (SELECT name FROM  fledge.scheduled_processes  WHERE script ? 'tasks/north');"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/50.sql",
    "content": "-- The Schema Service table used to hold information about extension schemas\nDROP TABLE IF EXISTS fledge.service_schema;\n\nCREATE TABLE fledge.service_schema (\n             name          character varying(255)        NOT NULL,\n             service       character varying(255)        NOT NULL,\n             version       integer                       NOT NULL,\n             definition    JSON);\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/51.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES\n\t( 'ESSRT', 'External Service Startup' ),\n\t( 'ESSTP', 'External Service Shutdown' );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/52.sql",
    "content": "-- Access Control List usage relation\nDROP TABLE IF EXISTS fledge.acl_usage;\n\nCREATE TABLE fledge.acl_usage (\n             name            character varying(255)  NOT NULL,  -- ACL name\n             entity_type     character varying(80)   NOT NULL,  -- associated entity type: service or script \n             entity_name     character varying(255)  NOT NULL,  -- associated entity name\n             CONSTRAINT      usage_acl_pkey          PRIMARY KEY (name, entity_type, entity_name) );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/53.sql",
    "content": "-- Add new column 'deprecated_ts' to asset_tracker\n\nALTER TABLE fledge.asset_tracker ADD COLUMN deprecated_ts timestamp(6) with time zone;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/54.sql",
    "content": "-- Addition of new log codes\n\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES\n        ( 'ASTDP', 'Asset deprecated' ),\n        ( 'ASTUN', 'Asset un-deprecated' );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/55.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES\n        ( 'PIPIN', 'Pip installation' );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/56.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES\n        ( 'SRVRS', 'Service Restart' );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/57.sql",
    "content": "-- Add new column 'data' to asset_tracker\n\nALTER TABLE fledge.asset_tracker ADD COLUMN data JSONB DEFAULT '{}'::jsonb;\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/58.sql",
    "content": "-- Audit Log Marker log code\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'AUMRK', 'Audit Log Marker' );"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/59.sql",
    "content": "-- Roles\nINSERT INTO fledge.roles ( name, description )\n     VALUES ('view', 'Only to view the configuration'),\n            ('data-view', 'Only read the data in buffer');"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/6.sql",
    "content": "-- North_Readings_to_HTTP - for readings\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North_Readings_to_HTTP',\n              'HTTP North Plugin - C Code',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"http-north\", \"default\" : \"http-north\", \"description\" : \"Module that HTTP North Plugin will load\" } } '\n            );\n\n-- North Tasks - C code\n--\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'North_Readings_to_HTTP',   '[\"tasks/north_c\"]' );\n\n-- Readings to HTTP - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'ccdf1ef8-7e02-11e8-adc0-fa7ae01bb3bc', -- id\n                'HTTP_North_C',                         -- schedule_name\n                'North_Readings_to_HTTP',               -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                true,                                   -- exclusive\n                false                                   -- enabled (disabled by default)\n              );\n \n-- Statistics\nINSERT INTO fledge.statistics ( key, description, value, previous_value )\n     VALUES ( 'NORTH_READINGS_TO_HTTP', 'Readings sent to HTTP', 0, 0 );\n\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/60.sql",
    "content": "-- Create SEQUENCE for control_source\nCREATE SEQUENCE fledge.control_source_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\n-- Create control_source table\nCREATE TABLE fledge.control_source (\n             cpsid            integer                     NOT NULL DEFAULT nextval('fledge.control_source_id_seq'::regclass),     -- auto source id\n             name             character  varying(40)      NOT NULL                                                          ,     -- source name\n             description      character  varying(120)     NOT NULL                                                          ,     -- source description\n             CONSTRAINT       control_source_pkey         PRIMARY KEY (cpsid)\n            );\n\n-- Create SEQUENCE for control_destination\nCREATE SEQUENCE fledge.control_destination_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\n-- Create control_destination table\nCREATE TABLE fledge.control_destination (\n             cpdid            integer                     NOT NULL DEFAULT nextval('fledge.control_destination_id_seq'::regclass),  -- auto destination id\n             name             character  varying(40)      NOT NULL                                                               ,  -- destination name\n             description      character  varying(120)     NOT NULL                                                               ,  -- destination description\n             CONSTRAINT       control_destination_pkey    PRIMARY KEY (cpdid)\n            );\n\n-- Create SEQUENCE for control_pipelines\nCREATE SEQUENCE fledge.control_pipelines_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\n-- Create control_pipelines table\nCREATE TABLE fledge.control_pipelines (\n             cpid             integer                     NOT NULL DEFAULT 
nextval('fledge.control_pipelines_id_seq'::regclass),  -- control pipeline id\n             name             character  varying(255)     NOT NULL                                                             ,  -- control pipeline name\n             stype            integer                                                                                          ,  -- source type id from control_source table\n             sname            character  varying(80)                                                                           ,  -- source name from control_source table\n             dtype            integer                                                                                          ,  -- destination type id from control_destination table\n             dname            character  varying(80)                                                                           ,  -- destination name from control_destination table\n             enabled          boolean                     NOT NULL DEFAULT FALSE                                               ,  -- false = A given pipeline is disabled by default\n             execution        character  varying(20)      NOT NULL DEFAULT  'shared'                                           ,  -- pipeline will be executed as with shared execution model by default\n             CONSTRAINT       control_pipelines_pkey      PRIMARY KEY (cpid)\n             );\n\n-- Create SEQUENCE for control_filters\nCREATE SEQUENCE fledge.control_filters_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\n-- Create control_filters table\nCREATE TABLE fledge.control_filters (\n             fid              integer                          NOT NULL DEFAULT nextval('fledge.control_filters_id_seq'::regclass),  -- auto filter id\n             cpid             integer                          NOT NULL                                                           ,  -- control pipeline id\n      
       forder           integer                          NOT NULL                                                           ,  -- filter order\n             fname            character  varying(255)          NOT NULL                                                           ,  -- Name of the filter instance\n             CONSTRAINT       control_filters_pkey             PRIMARY KEY (fid)                                                  ,\n             CONSTRAINT       control_filters_fk1              FOREIGN KEY (cpid) REFERENCES fledge.control_pipelines (cpid)  MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n             );\n\n-- Insert predefined entries for Control Source\nDELETE FROM fledge.control_source;\nINSERT INTO fledge.control_source ( name, description )\n     VALUES ('Any', 'Any source.'),\n            ('Service', 'A named service in source of the control pipeline.'),\n            ('API', 'The control pipeline source is the REST API.'),\n            ('Notification', 'The control pipeline originated from a notification.'),\n            ('Schedule', 'The control request was triggered by a schedule.'),\n            ('Script', 'The control request has come from the named script.');\n\n-- Insert predefined entries for Control Destination\nDELETE FROM fledge.control_destination;\nINSERT INTO fledge.control_destination ( name, description )\n     VALUES ('Any', 'Any destination.'),\n            ('Service', 'A name of service that is being controlled.'),\n            ('Asset', 'A name of asset that is being controlled.'),\n            ('Script', 'A name of script that will be executed.'),\n            ('Broadcast', 'No name is applied and pipeline will be considered for any control writes or operations to broadcast destinations.');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/61.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES\n        ( 'USRAD', 'User Added' ),\n        ( 'USRDL', 'User Deleted' ),\n        ( 'USRCH', 'User Changed' ),\n        ( 'USRRS', 'User Restored' );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/62.sql",
    "content": "\nCREATE TABLE fledge.monitors (\n             service        character varying(255) NOT NULL,\n             monitor        character varying(80) NOT NULL,\n             minimum        bigint,\n             maximum        bigint,\n             average        bigint,\n             samples        bigint,\n             ts             timestamp(6) with time zone NOT NULL DEFAULT now()\n);\n\nCREATE INDEX monitors_ix1\n    ON fledge.monitors(service, monitor);\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/63.sql",
    "content": "-- Roles\nINSERT INTO fledge.roles ( name, description )\n     VALUES ('control', 'Same as editor can do and also have access for control scripts and pipelines');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/64.sql",
    "content": "-- Create control_api table\nCREATE TABLE fledge.control_api (\n             name             character  varying(255)     NOT NULL                 ,       -- control API name\n             description      character  varying(255)     NOT NULL                 ,       -- description of control API\n             type             integer                     NOT NULL                 ,       -- 0 for write and 1 for operation\n             operation_name   character  varying(255)                              ,       -- name of the operation and only valid if type is operation\n             destination      integer                     NOT NULL                 ,       -- destination of request; 0-broadcast, 1-service, 2-asset, 3-script\n             destination_arg  character  varying(255)                              ,       -- name of the destination and only used if destination is non-zero\n             anonymous        boolean                     NOT NULL DEFAULT  'f'    ,       -- anonymous callers to make request to control API; by default false\n             CONSTRAINT       control_api_pname           PRIMARY KEY (name)\n             );\n\n-- Create control_api_parameters table\nCREATE TABLE fledge.control_api_parameters (\n             name             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.control_api\n             parameter        character  varying(255)     NOT NULL                 ,       -- name of parameter\n             value            character  varying(255)                              ,       -- value of parameter if constant otherwise default\n             constant         boolean                     NOT NULL                 ,       -- parameter is either a constant or variable\n             CONSTRAINT       control_api_parameters_fk1  FOREIGN KEY (name) REFERENCES fledge.control_api (name)  MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n             );\n\n-- Create control_api_acl 
table\nCREATE TABLE fledge.control_api_acl (\n             name             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.control_api\n             \"user\"           character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.users\n             CONSTRAINT       control_api_acl_fk1         FOREIGN KEY (name) REFERENCES fledge.control_api (name)  MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION,\n             CONSTRAINT       control_api_acl_fk2         FOREIGN KEY (\"user\") REFERENCES fledge.users (uname)  MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n             );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/65.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'ACLAD', 'ACL Added' ),( 'ACLCH', 'ACL Changed' ),( 'ACLDL', 'ACL Deleted' ),\n            ( 'CTSAD', 'Control Script Added' ),( 'CTSCH', 'Control Script Changed' ),('CTSDL', 'Control Script Deleted' ),\n            ( 'CTPAD', 'Control Pipeline Added' ),( 'CTPCH', 'Control Pipeline Changed' ),('CTPDL', 'Control Pipeline Deleted' );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/66.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'CTEAD', 'Control Entrypoint Added' ),\n            ( 'CTECH', 'Control Entrypoint Changed' ),\n            ('CTEDL', 'Control Entrypoint Deleted' );"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/67.sql",
    "content": "-- Add new column name 'priority' in scheduled_processes\n\nALTER TABLE fledge.scheduled_processes ADD COLUMN priority INTEGER NOT NULL DEFAULT 999;\nUPDATE fledge.scheduled_processes SET priority = '10' WHERE name = 'bucket_storage_c';\nUPDATE fledge.scheduled_processes SET priority = '20' WHERE name = 'dispatcher_c';\nUPDATE fledge.scheduled_processes SET priority = '30' WHERE name = 'notification_c';\nUPDATE fledge.scheduled_processes SET priority = '100' WHERE name = 'south_c';\nUPDATE fledge.scheduled_processes SET priority = '200' WHERE name = 'north_C';\nUPDATE fledge.scheduled_processes SET priority = '300' WHERE name = 'management';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/68.sql",
    "content": "-- Create alerts table\n\nCREATE TABLE IF NOT EXISTS fledge.alerts (\n       key         character varying(80)       NOT NULL,                        -- Primary key\n       message     character varying(255)      NOT NULL,                        -- Alert Message\n       urgency     smallint                    NOT NULL,                        -- 1 Critical - 2 High - 3 Normal - 4 Low\n       ts          timestamp(6) with time zone NOT NULL DEFAULT now(),          -- Timestamp, updated at every change\n       CONSTRAINT  alerts_pkey                 PRIMARY KEY (key) );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/69.sql",
    "content": "--- Insert update checker schedule and process entry\n\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'update checker', '[\"tasks/check_updates\"]' );\nINSERT INTO fledge.schedules (id, schedule_name, process_name, schedule_type, schedule_time, schedule_interval, exclusive, enabled)\nVALUES ('852cd8e4-3c29-440b-89ca-2c7691b0450d', -- id\n        'update checker',                       -- schedule_name\n        'update checker',                       -- process_name\n        2,                                      -- schedule_type (timed)\n        '00:05:00',                             -- schedule_time\n        '00:00:00',                             -- schedule_interval (every 24 hours)\n        'true',                                 -- exclusive\n        'true'                                  -- enabled\n    );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/7.sql",
    "content": "CREATE INDEX statistics_history_ix2\n    ON fledge.statistics_history(key);\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/70.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'BUCAD', 'Bucket Added' ), ( 'BUCCH', 'Bucket Changed' ), ( 'BUCDL', 'Bucket Deleted' );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/71.sql",
    "content": "-- Add new column name 'hash_algorithm' in users table\n\nALTER TABLE fledge.users ADD COLUMN hash_algorithm character varying(6) DEFAULT 'SHA512';\nALTER TABLE fledge.users ADD CONSTRAINT users_hash_algorithm_check CHECK (hash_algorithm IN ('SHA256', 'SHA512'));\n\nUPDATE fledge.users SET hash_algorithm='SHA256';\nUPDATE fledge.users SET pwd='495f7f5b17c534dbeabab3da2287a934b32ed6876568563b04c312be49e8773299243abd3881d13112ccfb67c4fb3ec8231406474810e1f6eb347d61c63785d4:672169c60df24b76b6b94e78cad800f8', hash_algorithm='SHA512' WHERE pwd ='39b16499c9311734c595e735cffb5d76ddffb2ebf8cf4313ee869525a9fa2c20:f400c843413d4c81abcba8f571e6ddb6';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/72.sql",
    "content": "ALTER TABLE fledge.users ADD COLUMN failed_attempts integer DEFAULT 0;\nALTER TABLE fledge.users ADD COLUMN block_until timestamp(6)  DEFAULT NULL;\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'USRBK', 'User Blocked' ), ( 'USRUB', 'User Unblocked' );\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/73.sql",
    "content": "update statistics set description = 'Readings Sent North' where description = 'Readings Sent Noth';\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/74.sql",
    "content": "-- Scheduled process entry for pipeline service\nINSERT INTO fledge.scheduled_processes SELECT 'pipeline_c', '[\"services/pipeline_c\"]', 90 WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'pipeline_c');\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/75.sql",
    "content": "ALTER TABLE fledge.plugin_data ADD COLUMN service_name character varying(255);\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/76.sql",
    "content": "DELETE FROM fledge.scheduled_processes WHERE name = 'north';"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/8.sql",
    "content": "-- Create SEQUENCE for asset_tracker\nCREATE SEQUENCE fledge.asset_tracker_id_seq\n    INCREMENT 1\n    START 1\n    MINVALUE 1\n    MAXVALUE 9223372036854775807\n    CACHE 1;\n\n-- Create TABLE for asset_tracker\nCREATE TABLE fledge.asset_tracker (\n       id            integer                NOT NULL DEFAULT nextval('fledge.asset_tracker_id_seq'::regclass),\n       asset         character(50)          NOT NULL,\n       event         character varying(50)  NOT NULL,\n       service       character varying(255) NOT NULL,\n       fledge       character varying(50)  NOT NULL,\n       plugin        character varying(50)  NOT NULL,\n       ts            timestamp(6) with time zone NOT NULL DEFAULT now() );\n\n-- Create INDEX for asset_tracker\nCREATE INDEX asset_tracker_ix1 ON fledge.asset_tracker USING btree (asset);\nCREATE INDEX asset_tracker_ix2 ON fledge.asset_tracker USING btree (service);\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/9.sql",
    "content": "delete from fledge.configuration where key in (\n\t'North Readings to OCS',\n\t'North Statistics to PI',\n\t'North Readings to PI',\n\t'North_Statistics_to_PI',\n\t'dht11',\n\t'DHT11 South C Plugin',\n\t'North_Readings_to_HTTP',\n\t'North_Readings_to_PI') and key not in (\n\t\tselect distinct process_name from fledge.tasks);\n\ndelete from fledge.scheduled_processes where name in (\n\t'North Readings to OCS',\n\t'North Statistics to PI',\n\t'North Readings to PI',\n\t'North_Statistics_to_PI',\n\t'dht11',\n\t'DHT11 South C Plugin',\n\t'North_Readings_to_HTTP',\n\t'North_Readings_to_PI') and name not in (\n\t\tselect distinct process_name from fledge.tasks);\n\ndelete from fledge.schedules where schedule_name in (\n\t'North Readings to OCS',\n\t'North Statistics to PI',\n\t'North Readings to PI',\n\t'North_Statistics_to_PI',\n\t'dht11',\n\t'DHT11 South C Plugin',\n\t'North_Readings_to_HTTP',\n\t'North_Readings_to_PI') and schedule_name not in (\n\t\tselect distinct process_name from fledge.tasks);\n\n"
  },
  {
    "path": "scripts/plugins/storage/postgres/upgrade/README",
    "content": "Place Postgres upgrade schema sql files here.\n\nFile name:\n\nX.sql, where X is the Postgres schema id\n\nExample:\n\n'9.sql' is read by a Fledge app whose Postgres schema version is set to 8\n'10.sql' is read either by a Fledge app whose Postgres schema version is set to 9\nor by a Fledge app upgrading the schema from 8 to 10\n\nNote:\n- whenever the VERSION file in $FLEDGE_ROOT has a new schema in 'fledge_schema',\n  the corresponding sql file must be placed here\n- the file id must exist even if the file is empty\n\n"
  },
  {
    "path": "scripts/plugins/storage/postgres.sh",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2017-2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n# Script input parameters\n# $1 is action (start|stop|status|init|reset|purge|help)\n# $2 is db schema (i.e. 35)\n# $3 (optional) is the engine management flag (true or false)\n# if $3 is empty a call to get_engine_management will be done\n\nset -e\n#set -x\n\nPLUGIN=\"postgres\"\nUSAGE=\"Usage: `basename ${0}` {start|stop|status|init|reset|purge|help}\"\n\n# Check FLEDGE_ROOT\nif [ -z ${FLEDGE_ROOT+x} ]; then\n    # Set FLEDGE_ROOT as the default directory\n    FLEDGE_ROOT=\"/usr/local/fledge\"\nfi\n\n# Check if the default directory exists\nif [[ ! -d \"${FLEDGE_ROOT}\" ]]; then\n\n    # Here we cannot use the logger because we cannot find the write_log script.\n    # But it is ok, because the script is called with source and if it is called\n    # as standalone script the echo will be captured.\n    echo \"Fledge cannot be executed: ${FLEDGE_ROOT} is not a valid directory.\"\n    echo \"Create the environment variable FLEDGE_ROOT before using Fledge.\"\n    echo \"Specify the base directory for Fledge and set the variable with:\"\n    echo \"export FLEDGE_ROOT=<basedir>\"\n    exit 1\n\nfi\n\n\n##########\n## INCLUDE SECTION\n##########\n. 
$FLEDGE_ROOT/scripts/common/get_engine_management.sh\n. $FLEDGE_ROOT/scripts/common/write_log.sh\n\n\n# Logger wrapper\npostgres_log() {\n    write_log \"Storage\" \"script.plugin.storage.postgres\" \"$1\" \"$2\" \"$3\" \"$4\"\n}\n\n\n## PostgreSQL Start\npg_start() {\n\n    # Check the status of the server\n    result=`pg_status \"silent\"`\n    case \"$result\" in\n        \"0\")\n            # PostgreSQL already running.\n            if [[ \"$1\" == \"noisy\" ]]; then\n                postgres_log \"info\" \"PostgreSQL is already running.\" \"all\" \"pretty\"\n            else\n\t\tif [[ \"$1\" != \"skip\" ]]; then\n                    postgres_log \"info\" \"PostgreSQL is already running.\" \"logonly\" \"pretty\"\n\t\tfi\n            fi\n            ;;\n\n        \"2\")\n            if [[ \"$MANAGED\" = false ]]; then\n                if [[ \"$1\" == \"noisy\" ]]; then\n                    postgres_log \"info\" \"Unable to start PostgreSQL. The server is not managed by Fledge.\" \"all\" \"pretty\"\n                else\n                    postgres_log \"info\" \"Unable to start PostgreSQL. 
The server is not managed by Fledge.\" \"logonly\" \"pretty\"\n                fi\n                exit 2\n            fi\n\n            # PostgreSQL not running - starting\n            if [[ \"$1\" == \"noisy\" ]]; then\n                postgres_log \"info\" \"Starting PostgreSQL...\" \"all\" \"pretty\"\n            else\n                postgres_log \"info\" \"Starting PostgreSQL...\" \"logonly\" \"pretty\"\n            fi\n            eval $PG_CTL_COMMAND start > /dev/null 2>&1\n\n            check_again=`pg_status \"silent\"`\n            if [[ \"$check_again\" -eq \"0\" ]]; then\n                if [[ \"$1\" == \"noisy\" ]]; then\n                    postgres_log \"info\" \"PostgreSQL started.\" \"all\" \"pretty\"\n                else\n                    postgres_log \"info\" \"PostgreSQL started.\" \"logonly\" \"pretty\"\n                fi\n            else\n                postgres_log \"err\" \"Unable to start PostgreSQL.\" \"all\"\n                exit 1\n            fi\n            ;;\n\n        *)\n            postgres_log \"err\" \"Unknown database return status.\" \"all\"\n            exit 1\n            ;;\n    esac\n\n    # Check if the fledge database has been created\n    if [[ `$PG_SQL -l | grep -c '^ fledge'` -ne 1 ]]; then\n        # Create the Fledge database\n        pg_reset \"$1\" \"immediate\" \n    fi\n\n    # Fledge DB schema update: Fledge version is $2, $1 is log verbosity\n    pg_schema_update $2 $1\n}\n\n\n## PostgreSQL Stop\npg_stop() {\n\n    # Since the script may be called with \"source\", this condition must be set\n    # and the else must be maintained because exit can't be used\n    if [[ \"$MANAGED\" = false ]]; then\n\n        # UNMANAGED part\n\n        if [[ \"$1\" == \"noisy\" ]]; then\n            postgres_log \"info\" \"Unable to stop PostgreSQL. The server is not managed by Fledge.\" \"all\" \"pretty\"\n        else\n            postgres_log \"info\" \"Unable to stop PostgreSQL. 
The server is not managed by Fledge.\" \"logonly\" \"pretty\"\n        fi\n\n    else\n\n        # MANAGED part\n\n        # Check the status of the server\n        result=`pg_status \"silent\"`\n        case \"$result\" in\n            \"0\")\n                if [[ \"$1\" == \"noisy\" ]]; then\n                    postgres_log \"info\" \"Stopping PostgreSQL...\" \"all\" \"pretty\"\n                else\n                    postgres_log \"info\" \"Stopping PostgreSQL...\" \"logonly\" \"pretty\"\n                fi\n                eval $PG_CTL_COMMAND stop > /dev/null 2>&1\n\n                check_again=`pg_status \"silent\"`\n                if [[ \"$check_again\" -eq \"2\" ]]; then\n                    if [[ \"$1\" == \"noisy\" ]]; then\n                        postgres_log \"info\" \"PostgreSQL stopped.\" \"all\" \"pretty\"\n                    else\n                        postgres_log \"info\" \"PostgreSQL stopped.\" \"logonly\" \"pretty\"\n                    fi\n                else\n                    postgres_log \"err\" \"Unable to stop PostgreSQL.\" \"all\"\n                    exit 1\n                fi\n                ;;\n\n            \"2\")\n                if [[ \"$1\" == \"noisy\" ]]; then\n                    postgres_log \"info\" \"PostgreSQL is not running.\" \"all\" \"pretty\"\n                else\n                    postgres_log \"info\" \"PostgreSQL is not running.\" \"logonly\" \"pretty\"\n                fi\n                ;;\n\n            *)\n                postgres_log \"err\" \"Unknown database return status.\" \"all\"\n                exit 1\n                ;;\n        esac\n\n    fi  # MANAGED true/false\n\n}\n\n\n## PostgreSQL Reset\npg_reset() {\n\n    if [[ `pg_status silent` -eq 2 ]]; then\n       pg_start \"$1\" \"$2\"\n    fi\n\n    if [[ $2 != \"immediate\" ]]; then\n        echo \"This script will remove all data stored in the server.\"\n        echo -n \"Enter YES if you want to continue: \"\n        read 
continue_reset\n\n        if [ \"$continue_reset\" != 'YES' ]; then\n\t    echo \"The system will NOT be reset and current content remains\"\n            echo \"Goodbye.\"\n            # This is ok because it means that the script is called from command line\n            exit 0\n        fi\n    fi\n\n    if [[ \"$1\" == \"noisy\" ]]; then\n        postgres_log \"info\" \"Building the metadata for the Fledge Plugin 'postgres'\" \"all\" \"pretty\"\n    else\n        postgres_log \"info\" \"Building the metadata for the Fledge Plugin 'postgres'\" \"logonly\" \"pretty\"\n    fi\n\n    output=$(eval $PG_SQL -d postgres -q -f $INIT_SQL 2>&1)\n    outputUC=${output^^}\n\n    if [[ \"$outputUC\" =~ \"ERROR\" || \"$outputUC\" =~ \"WARNING\" || \"$outputUC\" =~ \"FATAL\" ]]; then\n\n        postgres_log \"err\" \"${output}\" \"all\"\n        exit 1\n    else\n        if [[ \"$1\" == \"noisy\" ]]; then\n            postgres_log \"info\" \"Build complete.\" \"all\" \"pretty\"\n        else\n            postgres_log \"info\" \"Build complete.\" \"logonly\" \"pretty\"\n        fi\n    fi\n\n    if [[ $2 != \"immediate\" && -d \"${FLEDGE_DATA}/buckets\" ]]; then\n        echo \"Removing user data from ${FLEDGE_DATA}/buckets\"\n        rm -rf ${FLEDGE_DATA}/buckets\n        echo \"Removed user data from ${FLEDGE_DATA}/buckets\"\n    fi\n\n}\n\n\n## PostgreSQL Status\n#\n# NOTE: You can call this script with $1 = silent to avoid non output errors\n#\n# Returns:\n#   0 - Server running\n#   1 - Error\n#   2 - Server not running\npg_status() {\n\n    # Check if the database server is managed by Fledge\n    if [[ \"$MANAGED\" = true ]]; then\n\n        # Check if the PostgreSQL directory in $FLEDGE_DATA exists and create it\n        # This is necessary to avoid an error in the PG_CTL command,\n        # when the log is set\n        if ! [[ -d \"$PG_DIR\" ]]; then\n            mkdir -p \"$PG_DIR\"\n        fi\n\n        # Check if the Data directory exists and create it\n        if ! 
[[ -d \"$PG_DATA\" ]]; then\n            mkdir -p \"$PG_DATA\"\n        fi\n\n        # Check if the cluster files exist and create them\n        if ! [[ -d \"$PG_DATA/base\" ]]; then\n\n            if [[ \"$1\" == \"noisy\" ]]; then\n                postgres_log \"info\" \"Initializing PostgreSQL.\" \"all\" \"pretty\"\n            else\n                postgres_log \"info\" \"Initializing PostgreSQL.\" \"logonly\" \"pretty\"\n            fi\n            eval $PG_CTL_COMMAND init > /dev/null 2>&1\n\n        fi\n\n        # Check the status command\n        cmd_to_exec=\"$PG_CTL_COMMAND status\"\n        case \"$($cmd_to_exec)\" in\n            \"pg_ctl: no server running\")\n                if [[ \"$1\" == \"noisy\" ]]; then\n                    postgres_log \"info\" \"PostgreSQL not running.\" \"outonly\" \"pretty\"\n                else\n                    echo \"2\"\n                fi\n                ;;\n            \"pg_ctl: server is running\"*)\n                if [[ \"$1\" == \"noisy\" ]]; then\n                    postgres_log \"info\" \"PostgreSQL running.\" \"outonly\" \"pretty\"\n                else\n                    echo \"0\"\n                fi\n                ;;\n            *)\n                postgres_log \"err\" \"Unrecognized PostgreSQL status.\" \"all\"\n                exit 1\n                ;;\n        esac\n\n    else\n\n        # The unmanaged part\n        if ! [[ -x \"$(command -v pg_isready)\" ]]; then\n            postgres_log \"info\" \"The pg_isready command cannot be found. 
Is PostgreSQL installed?\" \"outonly\" \"pretty\"\n            postgres_log \"info\" \"If PostgreSQL is installed, check if the bin dir is in the PATH.\" \"outonly\" \"pretty\"\n            exit 1\n        fi\n\n        ret_message=`pg_isready`\n        case \"${ret_message}\" in\n            *\"no response\")\n                if [[ \"$1\" == \"noisy\" ]]; then\n                    postgres_log \"info\" \"PostgreSQL not running.\" \"outonly\" \"pretty\"\n                else\n                    echo \"2\"\n                fi\n                ;;\n            *\"accepting connections\")\n                if [[ \"$1\" == \"noisy\" ]]; then\n                    postgres_log \"info\" \"PostgreSQL running.\" \"outonly\" \"pretty\"\n                else\n                    echo \"0\"\n                fi\n                ;;\n            *)\n                postgres_log \"err\" \"Unknown status returned by the PostgreSQL database server.\" \"all\"\n                exit 1\n        esac\n\n    fi\n\n}\n\n## PostgreSQL schema update entry point\n#\npg_schema_update() {\n    # Current starting Fledge version\n    NEW_VERSION=$1\n    # DB table\n    VERSION_TABLE=\"fledge.version\"\n    # Check first if the version table exists\n    CURR_VERR=`${PG_SQL} -d fledge -q -A -t -c \"SELECT to_regclass('${VERSION_TABLE}')\"`\n    ret_code=$?\n    if [ ! \"${CURR_VERR}\" ] || [ \"${ret_code}\" -ne 0 ]; then\n        postgres_log \"error\" \"Error checking Fledge DB schema version: \"\\\n\"the table '${VERSION_TABLE}' doesn't exist. Exiting\" \"all\" \"pretty\"\n        return 1\n    fi\n\n    # Fetch Fledge DB version\n    CURR_VERR=`${PG_SQL} -d fledge -q -A -t -c \"SELECT id FROM ${VERSION_TABLE}\" | tr -d ' '`\n    if [ ! 
\"${CURR_VERR}\" ]; then\n        # No version found; set the DB version now\n        CURR_VERR=`${PG_SQL} -d fledge -q -A -t -c \"INSERT INTO ${VERSION_TABLE} (id) VALUES('${NEW_VERSION}')\"`\n        SET_VERSION_MSG=\"Fledge DB version not found in '${VERSION_TABLE}', setting version [${NEW_VERSION}]\"\n        if [[ \"$2\" == \"noisy\" ]]; then\n            postgres_log \"info\" \"${SET_VERSION_MSG}\" \"all\" \"pretty\"\n        else\n            postgres_log \"info\" \"${SET_VERSION_MSG}\" \"logonly\" \"pretty\"\n        fi\n    else\n        # Attempt a schema update only if the DB version differs from the starting Fledge version\n        if [ \"${CURR_VERR}\" != \"${NEW_VERSION}\" ]; then\n            postgres_log \"info\" \"Detected '${PLUGIN}' Fledge DB schema change from version [${CURR_VERR}]\"\\\n\" to [${NEW_VERSION}], applying Upgrade/Downgrade ...\" \"all\" \"pretty\"\n            # Call the schema update script\n            $FLEDGE_ROOT/scripts/plugins/storage/postgres/schema_update.sh \"${CURR_VERR}\" \"${NEW_VERSION}\" \"${PG_SQL}\"\n            update_code=$?\n            return ${update_code}\n        else\n            # Just log up-to-date\n            if [[ \"$2\" != \"skip\" ]]; then\n                postgres_log \"info\" \"Fledge DB schema is up to date at version [${CURR_VERR}]\" \"logonly\" \"pretty\"\n            fi\n            return 0\n        fi\n    fi\n}\n\n## PostgreSQL Help\npg_help() {\n    echo \"${USAGE}\nPostgreSQL Storage Layer plugin init script. 
\nThe script controls the PostgreSQL plugin used as the database for Fledge\nArguments:\n start   - Start the database server (when managed)\n           If the server has not been initialized, it also initializes it\n stop    - Stop the database server (when managed)\n status  - Check the status of the database server\n reset   - Bring the database server to the original installation.\n           WARNING: all the data stored in the server will be lost!\n init    - Database check: if Fledge database does not exist\n           it will be created.\n purge   - Purge all readings data and non-configuration data stored in the database.\n           WARNING: all the data stored in the affected tables will be lost!\n help    - This text\n\n managed   - The database server is embedded in Fledge\n unmanaged - The database server is not embedded in Fledge\"\n\n}\n\n## PostgreSQL purge all readings and non-configuration data\n#\npg_purge() {\n    echo \"This script will remove all readings data and non-configuration data stored in the server.\"\n    echo -n \"Enter YES if you want to continue: \"\n    read continue_purge\n\n    if [ \"$continue_purge\" != 'YES' ]; then\n        echo \"The system will NOT be purged and the current content remains\"\n        echo \"Goodbye.\"\n        # This is ok because it means that the script is called from the command line\n        exit 0\n    fi\n\n    if [[ \"$1\" == \"noisy\" ]]; then\n        postgres_log \"info\" \"Purging data for the Fledge Plugin '${PLUGIN}' ...\" \"all\" \"pretty\"\n    else\n        postgres_log \"info\" \"Purging data for the Fledge Plugin '${PLUGIN}' ...\" \"logonly\" \"pretty\"\n    fi\n\n    SQL_COMMAND=`${PG_SQL} -d fledge -b -q <<EOF\nUPDATE fledge.statistics SET value = 0, previous_value = 0, ts = now();\nTRUNCATE TABLE fledge.asset_tracker;\nTRUNCATE TABLE fledge.tasks;\nTRUNCATE TABLE fledge.statistics_history;\nTRUNCATE TABLE fledge.log;\nTRUNCATE TABLE fledge.readings;\nUPDATE 
fledge.streams SET last_object = 0, ts = now();\nTRUNCATE TABLE fledge.plugin_data;\nTRUNCATE TABLE fledge.omf_created_objects;\nTRUNCATE TABLE fledge.user_logins;\nDO \\\\$alter_seq\\\\$\nDECLARE i TEXT;\nBEGIN\n FOR i IN (SELECT UNNEST(REGEXP_MATCHES(column_default, 'nextval\\(''(.*)''::regclass\\)')) AS seq_name\n           FROM information_schema.columns\n           WHERE column_default LIKE 'nextval%'\n           AND table_schema = 'fledge'\n           AND table_name IN ('asset_tracker', 'log'))\n  LOOP\n      EXECUTE 'ALTER SEQUENCE' || ' ' || i || ' '||' RESTART 1;';\n  END LOOP;\nEND\\\\$alter_seq\\\\$;\nEOF`\n\n    RET_CODE=$?\n    if [ \"${RET_CODE}\" -ne 0 ]; then\n        postgres_log \"err\" \"Failure in purge command. Exiting\" \"all\" \"pretty\"\n        return 1\n    fi\n\n    # Log success\n    if [[ \"$1\" == \"noisy\" ]]; then\n        postgres_log \"info\" \"Purge complete for Fledge Plugin '${PLUGIN}'\" \"all\" \"pretty\"\n    else\n        postgres_log \"info\" \"Purge complete for Fledge Plugin '${PLUGIN}'\" \"logonly\" \"pretty\"\n    fi\n}\n\n##################\n### Main Logic ###\n##################\n\n# Set FLEDGE_DATA if it does not exist\nif [ -z ${FLEDGE_DATA+x} ]; then\n    FLEDGE_DATA=\"${FLEDGE_ROOT}/data\"\nfi\n\n# Check if $FLEDGE_DATA exists\nif [[ ! -d ${FLEDGE_DATA} ]]; then\n    postgres_log \"err\" \"Fledge cannot be executed: ${FLEDGE_DATA} is not a valid directory.\" \"all\" \"pretty\"\n    exit 1\nfi\n\n# Extract the plugin engine management status if the $3 parameter is not set\nif [ ! 
\"${3}\" ]; then\n\tengine_management=`get_engine_management $PLUGIN`\nelse\n\tengine_management=$3\nfi\n\n# Settings if the database is managed by Fledge\ncase \"$engine_management\" in\n    \"true\")\n\n        MANAGED=true\n\n        # Set PGHOST if it does not exist\n        if [ -z ${PGHOST+x} ]; then\n            PGHOST=\"/tmp\"\n            export PGHOST\n        fi\n\n        PG_DIR=\"${FLEDGE_DATA}/storage/postgres/pgsql\"\n        PG_DATA=\"${PG_DIR}/data\"\n        PG_LOG=\"${PG_DIR}/logger\"\n\n        # Check if pg_ctl is present in the expected path\n        PG_CTL=\"$FLEDGE_ROOT/plugins/storage/postgres/pgsql/bin/pg_ctl\"\n        PG_CTL_COMMAND=\"$PG_CTL -w -D $PG_DATA -l $PG_LOG\"\n        if ! [[ -x \"${PG_CTL}\" ]]; then\n            postgres_log \"err\" \"PostgreSQL program pg_ctl not found: the database server cannot be managed.\" \"all\" \"pretty\"\n            exit 1\n        fi\n\n        # Check if psql is present in the expected path\n        PG_SQL=\"$FLEDGE_ROOT/plugins/storage/postgres/pgsql/bin/psql\"\n        if ! [[ -x \"${PG_SQL}\" ]]; then\n            postgres_log \"err\" \"PostgreSQL program psql not found: the database server cannot be managed.\" \"all\" \"pretty\"\n            exit 1\n        fi\n\n        print_output=\"noisy\"\n        ;;\n    \n    \"false\")\n\n        # This case includes UNMANAGED\n\n        if ! [[ -x \"$(command -v psql)\" ]]; then\n            postgres_log \"info\" \"The psql command cannot be found. 
Is PostgreSQL installed?\" \"outonly\" \"pretty\"\n            postgres_log \"info\" \"If PostgreSQL is installed, check if the bin dir is in the PATH.\" \"outonly\" \"pretty\"\n            exit 1\n        else\n            PG_SQL=\"$(command -v psql)\"\n        fi\n\n        # This is an explicit input, which means that we do not want to send\n        # messages when we start or stop the server\n        print_output=\"silent\"\n        MANAGED=false\n        ;;\n\n    *)\n\n        # Unexpected value from the configuration file\n        postgres_log \"err\" \"Fledge cannot start.\" \"all\" \"pretty\"\n        postgres_log \"err\" \"Missing plugin information from the storage microservice\" \"all\" \"pretty\"\n        exit 1\n        ;;\n\nesac\n\n# Check if the init.sql file exists\n# Attempt 1: deployment path\nif [[ -e \"$FLEDGE_ROOT/plugins/storage/postgres/init.sql\" ]]; then\n    INIT_SQL=\"$FLEDGE_ROOT/plugins/storage/postgres/init.sql\"\nelse\n    # Attempt 2: development path\n    if [[ -e \"$FLEDGE_ROOT/scripts/plugins/storage/postgres/init.sql\" ]]; then\n        INIT_SQL=\"$FLEDGE_ROOT/scripts/plugins/storage/postgres/init.sql\"\n    else\n        postgres_log \"err\" \"Missing initialization file init.sql.\" \"all\" \"pretty\"\n        exit 1\n    fi\nfi\n\n# Main case\ncase \"$1\" in\n    start)\n        pg_start \"$print_output\" \"$2\"\n        ;;\n    init)\n        pg_start \"skip\" \"$2\"\n        ;;\n    stop)\n        pg_stop \"$print_output\"\n        ;;\n    reset)\n        pg_reset \"$print_output\" \"$2\"\n        ;;\n    status)\n        pg_status \"$print_output\"\n        ;;\n    purge)\n        pg_purge \"$print_output\"\n        ;;\n    help)\n        pg_help\n        ;;\n    *)\n        echo \"${USAGE}\"\n        exit 1\nesac\n\n# Exit cannot be used because the script may be \"sourced\"\n#exit $?\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/1.sql",
    "content": "-- No actions\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/10.sql",
    "content": "-- Statistics\nINSERT INTO fledge.statistics ( key, description, value, previous_value )\n     VALUES ( 'NORTH_READINGS_TO_PI', 'Readings sent to historian', 0, 0 ),\n            ( 'NORTH_STATISTICS_TO_PI', 'Statistics sent to historian', 0, 0 ),\n            ( 'NORTH_READINGS_TO_HTTP', 'Readings sent to HTTP', 0, 0 ),\n            ( 'North Readings to PI', 'Readings sent to the historian', 0, 0 ),\n            ( 'North Statistics to PI','Statistics data sent to the historian', 0, 0 ),\n            ( 'North Readings to OCS','Readings sent to OCS', 0, 0 );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/11.sql",
    "content": "CREATE TABLE fledge.destinations (\n    id            INTEGER                     PRIMARY KEY AUTOINCREMENT,                  -- Sequence ID\n    type          smallint                    NOT NULL DEFAULT 1,                         -- Enum : 1: OMF, 2: Elasticsearch\n    description   character varying(255)      NOT NULL DEFAULT '',                        -- A brief description of the destination entry\n    properties    JSON                        NOT NULL DEFAULT '{ \"streaming\" : \"all\" }', -- A generic set of properties\n    active_window JSON                        NOT NULL DEFAULT '[ \"always\" ]',            -- The window of operations\n    active        boolean                     NOT NULL DEFAULT 't',                       -- When false, all streams to this destination stop and are inactive\n    ts            DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')));         -- Creation or last update\n\nINSERT INTO fledge.destinations ( id, description )\n       VALUES (0, 'none' );\n\n-- Add the constraint to the table\nBEGIN TRANSACTION;\nDROP TABLE IF EXISTS fledge.streams_old;\nALTER TABLE fledge.streams RENAME TO streams_old;\n\nCREATE TABLE fledge.streams (\n    id            INTEGER                      PRIMARY KEY AUTOINCREMENT,         -- Sequence ID\n    destination_id integer                     NOT NULL,                          -- FK to fledge.destinations\n    description    character varying(255)      NOT NULL DEFAULT '',               -- A brief description of the stream entry\n    properties     JSON                        NOT NULL DEFAULT '{}',             -- A generic set of properties\n    object_stream  JSON                        NOT NULL DEFAULT '{}',             -- Definition of what must be streamed\n    object_block   JSON                        NOT NULL DEFAULT '{}',             -- Definition of how the stream must be organised\n    object_filter  JSON                        NOT NULL DEFAULT 
'{}',             -- Any filter involved in selecting the data to stream\n    active_window  JSON                        NOT NULL DEFAULT '{}',             -- The window of operations\n    active         boolean                     NOT NULL DEFAULT 't',              -- When false, all data to this stream stop and are inactive\n    last_object    bigint                      NOT NULL DEFAULT 0,                -- The ID of the last object streamed (asset or reading, depending on the object_stream)\n    ts             DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')), -- Creation or last update\n    CONSTRAINT streams_fk1 FOREIGN KEY (destination_id)\n    REFERENCES destinations (id) MATCH SIMPLE\n            ON UPDATE NO ACTION\n            ON DELETE NO ACTION );\n\nINSERT INTO fledge.streams\n        SELECT\n            id,\n            0,\n            description,\n            properties,\n            object_stream,\n            object_block,\n            object_filter,\n            active_window,\n            active,\n            last_object,\n            ts\n        FROM fledge.streams_old;\n\nDROP TABLE fledge.streams_old;\nCOMMIT;\n\nCREATE INDEX fki_streams_fk1 ON streams (destination_id);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/12.sql",
    "content": "DROP INDEX statistics_history_ix3;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/13.sql",
    "content": "-- Use plugin name omf\nUPDATE fledge.configuration SET value = json_set(value, '$.plugin.value', 'omf') WHERE json_extract(value, '$.plugin.value') = 'pi_server';\nUPDATE fledge.configuration SET value = json_set(value, '$.plugin.default', 'omf') WHERE json_extract(value, '$.plugin.default') = 'pi_server';\n\n-- Remove PURGE_READ from Utilities parent category\nDELETE FROM fledge.category_children WHERE EXISTS(SELECT 1 FROM fledge.category_children WHERE parent = 'Utilities' AND child = 'PURGE_READ');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/14.sql",
    "content": "DROP INDEX IF EXISTS log_ix2;\nDROP INDEX IF EXISTS tasks_ix1;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/15.sql",
    "content": "CREATE TABLE fledge.configuration_temp (\n       key         character varying(255)      NOT NULL,                          -- Primary key\n       description character varying(255)      NOT NULL,                          -- Description, in plain text\n       value       JSON                        NOT NULL DEFAULT '{}',             -- JSON object containing the configuration values\n       ts          DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')), -- Timestamp, updated at every change\n       CONSTRAINT configuration_pkey PRIMARY KEY (key) );\n\n\nINSERT INTO fledge.configuration_temp (key, description, value, ts) SELECT key, description, value, ts FROM fledge.configuration;\n\nDROP TABLE fledge.configuration;\n\nALTER TABLE fledge.configuration_temp RENAME TO configuration;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/16.sql",
    "content": "-- Remove plugin_data table\nDROP TABLE IF EXISTS fledge.plugin_data;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/17.sql",
    "content": "CREATE TABLE fledge.tasks_temporary (\n             id           uuid                        NOT NULL,                          -- PK\n             process_name character varying(255)      NOT NULL,                          -- Name of the task's process\n             state        smallint                    NOT NULL,                          -- 1-Running, 2-Complete, 3-Cancelled, 4-Interrupted\n             start_time   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')), -- The date and time the task started\n             end_time     DATETIME,                                                      -- The date and time the task ended\n             reason       character varying(255),                                        -- The reason why the task ended\n             pid          integer                     NOT NULL,                          -- Linux process id\n             exit_code    integer,                                                       -- Process exit status code (negative means exited via signal)\n  CONSTRAINT tasks_temporary_pkey PRIMARY KEY ( id ),\n  CONSTRAINT tasks_temporary_fk1 FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\nINSERT INTO fledge.tasks_temporary SELECT id, process_name, state, start_time, end_time, reason, pid, exit_code FROM fledge.tasks;\nDROP TABLE fledge.tasks;\n\nCREATE TABLE fledge.tasks (\n             id           uuid                        NOT NULL,                          -- PK\n             process_name character varying(255)      NOT NULL,                          -- Name of the task's process\n             state        smallint                    NOT NULL,                          -- 1-Running, 2-Complete, 3-Cancelled, 4-Interrupted\n             start_time   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')), -- The date and time the task started\n          
   end_time     DATETIME,                                                      -- The date and time the task ended\n             reason       character varying(255),                                        -- The reason why the task ended\n             pid          integer                     NOT NULL,                          -- Linux process id\n             exit_code    integer,                                                       -- Process exit status code (negative means exited via signal)\n  CONSTRAINT tasks_pkey PRIMARY KEY ( id ),\n  CONSTRAINT tasks_fk1 FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\nINSERT INTO fledge.tasks SELECT id, process_name, state, start_time, end_time, reason, pid, exit_code FROM fledge.tasks_temporary;\nDROP TABLE fledge.tasks_temporary;\n\nDROP INDEX IF EXISTS tasks_ix1;\nCREATE INDEX tasks_ix1\n    ON tasks(process_name, start_time);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/18.sql",
    "content": "DELETE from fledge.log_codes WHERE code = 'NTFDL';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/19.sql",
    "content": "DROP TABLE fledge.filters;\nDROP TABLE fledge.filter_users;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/2.sql",
    "content": "UPDATE fledge.configuration SET key = 'SEND_PR_1' WHERE key = 'North Readings to PI';\nUPDATE fledge.configuration SET key = 'SEND_PR_2' WHERE key = 'North Statistics to PI';\nUPDATE fledge.configuration SET key = 'SEND_PR_4' WHERE key = 'North Readings to OCS';\n\n-- Remove DHT11 C++ south plugin entries\nDELETE FROM fledge.configuration WHERE key = 'dht11';\nDELETE FROM fledge.scheduled_processes WHERE name='dht11';\nDELETE FROM fledge.schedules WHERE process_name = 'dht11';\n\nDELETE FROM fledge.configuration WHERE key = 'North_Readings_to_PI';\nDELETE FROM fledge.configuration WHERE key = 'North_Statistics_to_PI';\nDELETE FROM fledge.statistics WHERE key = 'NORTH_READINGS_TO_PI';\nDELETE FROM fledge.statistics WHERE key = 'NORTH_STATISTICS_TO_PI';\nDELETE FROM fledge.scheduled_processes WHERE name = 'North_Readings_to_PI';\nDELETE FROM fledge.scheduled_processes WHERE name = 'North_Statistics_to_PI';\nDELETE FROM fledge.schedules WHERE schedule_name = 'OMF_to_PI_north_C';\nDELETE FROM fledge.schedules WHERE schedule_name = 'Stats_OMF_to_PI_north_C';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/20.sql",
    "content": "-- Notification log codes\nDELETE from fledge.log_codes WHERE code = 'NTFAD';\nDELETE from fledge.log_codes WHERE code = 'NTFSN';\nDELETE from fledge.log_codes WHERE code = 'NTFST';\nDELETE from fledge.log_codes WHERE code = 'NTFSD';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/21.sql",
    "content": "drop index readings_ix3;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/22.sql",
    "content": "CREATE TABLE new_readings (\n    id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n    asset_code character varying(50)       NOT NULL,                         -- The provided asset code.  Not necessarily located in the\n                                                                             -- assets table.\n    read_key   uuid                        UNIQUE,                           -- An optional unique key used to avoid double-loading.\n    reading    JSON                        NOT NULL DEFAULT '{}',            -- The json object received\n    user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),      -- UTC time\n    ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))       -- UTC time\n);\n\ninsert into new_readings select * from readings;\n\ndrop index fki_readings_fk1;\ndrop index readings_ix1;\ndrop index readings_ix2;\ndrop index readings_ix3;\ndrop table readings;\n\nalter table new_readings rename to readings;\n\nCREATE INDEX fki_readings_fk1\n    ON readings (asset_code, user_ts desc);\n\nCREATE INDEX readings_ix1\n    ON readings (read_key);\n\nCREATE INDEX readings_ix2\n    ON readings (asset_code);\n\nCREATE INDEX readings_ix3\n    ON readings (user_ts);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/23.sql",
    "content": "-- List of tasks\nCREATE TABLE fledge.new_tasks (\n             id           uuid                        NOT NULL,                          -- PK\n             schedule_name character varying(255),                                       -- Name of the task\n             process_name character varying(255)      NOT NULL,                          -- Name of the task's process\n             state        smallint                    NOT NULL,                          -- 1-Running, 2-Complete, 3-Cancelled, 4-Interrupted\n             start_time   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\t -- The date and time the task started\n             end_time     DATETIME,                                                      -- The date and time the task ended\n             reason       character varying(255),                                        -- The reason why the task ended\n             pid          integer                     NOT NULL,                          -- Linux process id\n             exit_code    integer,                                                       -- Process exit status code (negative means exited via signal)\n  CONSTRAINT tasks_pkey PRIMARY KEY ( id ),\n  CONSTRAINT tasks_fk1 FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\n\nDROP INDEX tasks_ix1;\n\nDROP TABLE tasks;\n\nALTER TABLE new_tasks RENAME TO tasks;\n\n-- Create index\nCREATE INDEX tasks_ix1\n    ON tasks(schedule_name, start_time);\n\n\n-- General log table for Fledge.\nCREATE TABLE fledge.new_log (\n       id    INTEGER                PRIMARY KEY AUTOINCREMENT,\n       code  CHARACTER(5)           NOT NULL,                  -- The process that logged the action\n       level SMALLINT               NOT NULL,                  -- 0 Success - 1 Failure - 2 Warning - 4 Info\n       log   JSON                   NOT NULL DEFAULT '{}',     -- 
Generic log structure\n       ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       CONSTRAINT log_fk1 FOREIGN KEY (code)\n       REFERENCES log_codes (code) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nDROP INDEX log_ix1;\n\nDROP INDEX log_ix2;\n\nDROP TABLE log;\n\nALTER TABLE new_log RENAME TO log;\n\n-- Index: log_ix1 - For queries by code\nCREATE INDEX log_ix1 ON log(code, ts, level);\n\n-- Index to make GUI response faster\nCREATE INDEX log_ix2 ON log(ts);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/24.sql",
    "content": "-- No actions"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/25.sql",
    "content": "-- No actions\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/26.sql",
    "content": "-- No actions\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/27.sql",
    "content": "-- Notification log code NTFCL\nDELETE from fledge.log_codes WHERE code = 'NTFCL';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/28.sql",
    "content": "DELETE from log_codes WHERE code LIKE 'PKG%';"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/29.sql",
    "content": "-- updates \"PIServerEndpoint\" option :\n--\n--  \"Auto Discovery\"  -> \"discovery\"\n--  \"PI Web API\"      -> \"piwebapi\"\n--  \"Connector Relay\" -> \"cr\"\n--\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.options', json_array('discovery','piwebapi','cr') )\nWHERE json_extract(value, '$.plugin.value') = 'PI_Server_V2';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.default', 'cr')\nWHERE json_extract(value, '$.plugin.value') = 'PI_Server_V2';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.value', 'discovery')\nWHERE json_extract(value, '$.plugin.value') = 'PI_Server_V2' AND\n      json_extract(value, '$.PIServerEndpoint.value') = 'Auto Discovery';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.value', 'piwebapi')\nWHERE json_extract(value, '$.plugin.value') = 'PI_Server_V2' AND\n      json_extract(value, '$.PIServerEndpoint.value') = 'PI Web API';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.value', 'cr')\n    WHERE json_extract(value, '$.plugin.value') = 'PI_Server_V2' AND\n          json_extract(value, '$.PIServerEndpoint.value') = 'Connector Relay';"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/3.sql",
    "content": "-- Remove configuration category_children table\nDROP TABLE IF EXISTS fledge.category_children;"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/30.sql",
    "content": "DROP TABLE IF EXISTS fledge.tasks_old;\n\nCREATE TABLE fledge.tasks_old (\n             id           uuid                        NOT NULL,                               -- PK\n             schedule_name character varying(255),                                            -- Name of the task\n             process_name character varying(255)      NOT NULL,                               -- Name of the task's process\n             state        smallint                    NOT NULL,                               -- 1-Running, 2-Complete, 3-Cancelled, 4-Interrupted\n             start_time   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),            -- The date and time the task started UTC\n             end_time     DATETIME,                                                           -- The date and time the task ended\n             reason       character varying(255),                                             -- The reason why the task ended\n             pid          integer                     NOT NULL,                               -- Linux process id\n             exit_code    integer,                                                            -- Process exit status code (negative means exited via signal)\n  CONSTRAINT tasks_old_pkey PRIMARY KEY ( id ),\n  CONSTRAINT tasks_old_fk1 FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\n\nINSERT INTO fledge.tasks_old (id, schedule_name, process_name, state, start_time, end_time, reason, pid, exit_code) SELECT id, schedule_name, process_name, state, start_time, end_time, reason, pid, exit_code FROM fledge.tasks;\nDROP TABLE IF EXISTS fledge.tasks;\n\nCREATE TABLE fledge.tasks (\n             id           uuid                        NOT NULL,                               -- PK\n             schedule_name character varying(255),                                            -- Name of the task\n    
         process_name character varying(255)      NOT NULL,                               -- Name of the task's process\n             state        smallint                    NOT NULL,                               -- 1-Running, 2-Complete, 3-Cancelled, 4-Interrupted\n             start_time   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),            -- The date and time the task started UTC\n             end_time     DATETIME,                                                           -- The date and time the task ended\n             reason       character varying(255),                                             -- The reason why the task ended\n             pid          integer                     NOT NULL,                               -- Linux process id\n             exit_code    integer,                                                            -- Process exit status code (negative means exited via signal)\n  CONSTRAINT tasks_pkey PRIMARY KEY ( id ),\n  CONSTRAINT tasks_fk1 FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\n\nINSERT INTO fledge.tasks SELECT id, schedule_name, process_name, state, start_time, end_time, reason, pid, exit_code FROM fledge.tasks_old;\nDROP TABLE IF EXISTS fledge.tasks_old;\n\nDROP INDEX IF EXISTS tasks_ix1;\nCREATE INDEX tasks_ix1\n    ON tasks(schedule_name, start_time);"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/31.sql",
    "content": "--\n-- ADD  the column: read_key   uuid\n--\n\nCREATE TABLE new_readings (\n    id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n    asset_code character varying(50)       NOT NULL,                         -- The provided asset code. Not necessarily located in the\n                                                                             -- assets table.\n    read_key   uuid                        UNIQUE,                           -- An optional unique key used to avoid double-loading.\n    reading    JSON                        NOT NULL DEFAULT '{}',            -- The json object received\n    user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),      -- UTC time\n    ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))       -- UTC time\n);\n\nINSERT INTO new_readings\n    SELECT\n        id,\n        asset_code,\n        -- UUID Generation\n        (lower(hex( randomblob(4))) || '-' || lower(hex( randomblob(2))) || '-' || '4' || substr( lower(hex( randomblob(2))), 2) || '-' || substr('89ab', 1 + (abs(random()) % 4) , 1)  || substr(lower(hex(randomblob(2))), 2) || '-' || lower(hex(randomblob(6)) )) as read_key,\n        reading,\n        user_ts,\n        ts\n    FROM readings;\n\nDROP INDEX fki_readings_fk1;\nDROP INDEX readings_ix2;\nDROP INDEX readings_ix3;\n\nDROP TABLE readings;\n\nALTER TABLE new_readings rename to readings;\n\nCREATE INDEX fki_readings_fk1\n    ON readings (asset_code, user_ts desc);\n\nCREATE INDEX readings_ix2\n    ON readings (asset_code);\n\nCREATE INDEX readings_ix3\n    ON readings (user_ts);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/32.sql",
    "content": "DELETE from fledge.log_codes WHERE code = 'PKGRM';"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/33.sql",
    "content": "-- NOTE -once you are up with fledge_schema=34, Downgrade is NOT possible as we removed C-source binaries.\n--       There is no way to get these back with current support.\n-- See scripts/package/debian/upgrade/1.8.sh\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/34.sh",
    "content": "#!/bin/bash\n\ndeclare SQL_COMMAND\ndeclare COMMAND_OUTPUT\n\n# Include logging\n. $FLEDGE_ROOT/scripts/common/write_log.sh\n\n# Logger wrapper\nschema_update_log() {\n    write_log \"Downgrade\" \"scripts.plugins.storage.${PLUGIN_NAME}.schema_update\" \"$1\" \"$2\" \"$3\" \"$4\"\n}\n\nschema_update_log \"debug\" \"$0 - SQLITE_SQL :$SQLITE_SQL: sql_file :$sql_file: DEFAULT_SQLITE_DB_FILE :$DEFAULT_SQLITE_DB_FILE: DEFAULT_SQLITE_DB_FILE_READINGS :$DEFAULT_SQLITE_DB_FILE_READINGS:\" \"logonly\" \"pretty\"\n\nSQL_COMMAND=\"${SQLITE_SQL} \\\"${DEFAULT_SQLITE_DB_FILE}\\\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'                 AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS_SINGLE}' AS 'readings';\n\n.read '${sql_file}'\n.quit\nEOF\"\n\n\nCOMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'                 AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS_SINGLE}' AS 'readings';\n\n.read '${sql_file}'\n.quit\nEOF`\n\nret_code=$?\n\nif [ \"${ret_code}\" -ne 0 ]; then\n    schema_update_log \"err\" \"Failure in downgrade command [${SQL_COMMAND}] result [${COMMAND_OUTPUT}]. Exiting\" \"all\" \"pretty\"\n    exit 1\nfi\n\n#\n# Clean up - file system\n#\nfile_path=$(dirname ${DEFAULT_SQLITE_DB_FILE_READINGS_SINGLE})\nfile_name_path=\"${file_path}/readings*\"\n\nschema_update_log \"debug\" \"cleanup - deleting ${file_name_path}\" \"logonly\" \"pretty\"\n\nrm ${file_name_path}\nret_code=$?\n\nif [ \"${ret_code}\" -ne 0 ]; then\n    schema_update_log \"notice\" \"cleanup_db - Failure in downgrade, files [${file_name_path}] can't be deleted. Proceeding\" \"logonly\" \"pretty\"\nfi\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/34.sql",
    "content": "-- Downgrade - copy all the content of the readings.readings table into fledge.readings\n\nCREATE TABLE fledge.readings (\n    id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n    asset_code character varying(50)       NOT NULL,                         -- The provided asset code. Not necessarily located in the\n                                                                             -- assets table.\n    reading    JSON                        NOT NULL DEFAULT '{}',            -- The json object received\n    user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),      -- UTC time\n    ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))       -- UTC time\n);\n\nINSERT INTO fledge.readings SELECT * FROM readings.readings;\n\nCREATE INDEX fledge.fki_readings_fk1\n    ON readings  (asset_code, user_ts desc);\n\nCREATE INDEX fledge.readings_ix2\n    ON readings (asset_code);\n\nCREATE INDEX fledge.readings_ix3\n    ON readings (user_ts);\n\nDROP TABLE readings.readings;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/35.sql",
    "content": "-- No action is required"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/36.sql",
    "content": "-- No action is required\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/37.sh",
    "content": "#!/bin/bash\n\n# Include logging\n. $FLEDGE_ROOT/scripts/common/write_log.sh\n\n# Logger wrapper\nschema_update_log() {\n    write_log \"Downgrade\" \"scripts.plugins.storage.${PLUGIN_NAME}.schema_update\" \"$1\" \"$2\" \"$3\" \"$4\"\n}\n\nschema_update_log \"err\" \"Downgrade not supported. Exiting\" \"all\" \"pretty\"\nexit 1\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/37.sql",
    "content": ""
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/38.sql",
    "content": "-- Remove packages table\nDROP TABLE IF EXISTS fledge.packages;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/39.sql",
    "content": "-- No action required\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/4.sql",
    "content": "UPDATE fledge.configuration SET value = '{\"plugin\": {\"description\": \"OMF North Plugin\", \"type\": \"string\", \"default\": \"omf\", \"value\": \"omf\"}}'\n        WHERE key = 'North Readings to PI';\nUPDATE fledge.configuration SET value = '{\"plugin\": {\"description\": \"OMF North Plugin\", \"type\": \"string\", \"default\": \"omf\", \"value\": \"omf\"}}'\n        WHERE key = 'North Statistics to PI';\nUPDATE fledge.configuration SET value = '{\"plugin\": {\"description\": \"OCS North Plugin\", \"type\": \"string\", \"default\": \"ocs\", \"value\": \"ocs\"}}'\n        WHERE key = 'North Readings to OCS';\n\nUPDATE statistics SET key = 'SENT_1' WHERE key = 'North Readings to PI';\nUPDATE statistics SET key = 'SENT_2' WHERE key = 'North Statistics to PI';\nUPDATE statistics SET key = 'SENT_4' WHERE key = 'North Readings to OCS';\n\nUPDATE fledge.scheduled_processes SET name = 'SEND_PR_1', script = '[\"tasks/north\", \"--stream_id\", \"1\", \"--debug_level\", \"1\"]'  WHERE name = 'North Readings to PI';\nUPDATE fledge.scheduled_processes SET name = 'SEND_PR_2', script = '[\"tasks/north\", \"--stream_id\", \"1\", \"--debug_level\", \"1\"]'  WHERE name = 'North Statistics to PI';\nUPDATE fledge.scheduled_processes SET name = 'SEND_PR_4', script = '[\"tasks/north\", \"--stream_id\", \"1\", \"--debug_level\", \"1\"]'  WHERE name = 'North Readings to OCS';\n\nUPDATE fledge.schedules SET process_name = 'SEND_PR_1' WHERE process_name = 'North Readings to PI';\nUPDATE fledge.schedules SET process_name = 'SEND_PR_2' WHERE process_name = 'North Statistics to PI';\nUPDATE fledge.schedules SET process_name = 'SEND_PR_4' WHERE process_name = 'North Readings to OCS';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/40.sql",
    "content": "-- No action is required\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/41.sql",
    "content": "-- From: http://www.sqlite.org/faq.html:\n--    SQLite has limited ALTER TABLE support that you can use to change type of column.\n--    If you want to change the type of any column you will have to recreate the table.\n--    You can save existing data to a temporary table and then drop the old table\n--    Now, create the new table, then copy the data back in from the temporary table\n\n-- Create temporary table with new changes and then copy the data from old table\nDROP TABLE IF EXISTS fledge.users_temp;\nCREATE TABLE fledge.users_temp (\n       id                INTEGER   PRIMARY KEY AUTOINCREMENT,\n       uname             character varying(80)  NOT NULL,\n       role_id           integer                NOT NULL,\n       description       character varying(255) NOT NULL DEFAULT '',\n       pwd               character varying(255) ,\n       public_key        character varying(255) ,\n       enabled           boolean                NOT NULL DEFAULT 't',\n       pwd_last_changed  DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       access_method     smallint               NOT NULL DEFAULT 0,\n          CONSTRAINT users_temp_fk1 FOREIGN KEY (role_id)\n          REFERENCES roles (id) MATCH SIMPLE\n                  ON UPDATE NO ACTION\n                  ON DELETE NO ACTION );\nINSERT INTO fledge.users_temp ( id, uname, pwd, enabled, pwd_last_changed, role_id, description, access_method ) SELECT id, uname, pwd, enabled, pwd_last_changed, role_id, description, 0 FROM fledge.users;\n\n-- Recreate it again and copy the data from temp table\nDROP TABLE IF EXISTS fledge.users;\nCREATE TABLE fledge.users (\n       id                INTEGER   PRIMARY KEY AUTOINCREMENT,\n       uname             character varying(80)  NOT NULL,\n       role_id           integer                NOT NULL,\n       description       character varying(255) NOT NULL DEFAULT '',\n       pwd               character varying(255) ,\n       public_key        
character varying(255) ,\n       enabled           boolean                NOT NULL DEFAULT 't',\n       pwd_last_changed  DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       access_method     smallint               NOT NULL DEFAULT 0,\n          CONSTRAINT users_fk1 FOREIGN KEY (role_id)\n          REFERENCES roles (id) MATCH SIMPLE\n                  ON UPDATE NO ACTION\n                  ON DELETE NO ACTION );\nINSERT INTO fledge.users ( id, uname, pwd, enabled, pwd_last_changed, role_id, description ) SELECT id, uname, pwd, enabled, pwd_last_changed, role_id, description FROM fledge.users_temp;\nDROP TABLE IF EXISTS fledge.users_temp;\n\n-- Recreate INDEX\nDROP INDEX IF EXISTS fki_users_fk1;\nCREATE INDEX fki_users_fk1\n    ON users (role_id);\n\nDROP INDEX IF EXISTS users_ix1;\nCREATE UNIQUE INDEX users_ix1\n    ON users (uname);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/42.sql",
    "content": "DROP INDEX IF EXISTS fledge.statistics_history_daily_ix1;\nDROP TABLE IF EXISTS fledge.statistics_history_daily;\n\nDELETE FROM fledge.schedules WHERE process_name = 'purge_system';\nDELETE FROM fledge.scheduled_processes WHERE name = 'purge_system';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/43.sql",
    "content": "UPDATE fledge.configuration SET value = json_set(value, '$.retainUnsent', json('{\"description\": \"Retain data that has not been sent to any historian yet.\", \"type\": \"boolean\",  \"default\": \"false\", \"displayName\": \"Retain Unsent Data\", \"value\": \"false\"}'))\n        WHERE key = 'PURGE_READ' AND\n               json_extract(value, '$.retainUnsent.value')  = \"purge unsent\";\n\n\nUPDATE fledge.configuration SET value = json_set(value, '$.retainUnsent', json('{\"description\": \"Retain data that has not been sent to any historian yet.\", \"type\": \"boolean\",  \"default\": \"false\", \"displayName\": \"Retain Unsent Data\", \"value\": \"true\"}'))\n        WHERE key = 'PURGE_READ' AND\n               json_extract(value, '$.retainUnsent.value')  = \"retain unsent to all destinations\";\n\nUPDATE fledge.configuration SET value = json_set(value, '$.retainUnsent', json('{\"description\": \"Retain data that has not been sent to any historian yet.\", \"type\": \"boolean\",  \"default\": \"false\", \"displayName\": \"Retain Unsent Data\", \"value\": \"true\"}'))\n        WHERE key = 'PURGE_READ' AND\n               json_extract(value, '$.retainUnsent.value')  = \"retain unsent to any destination\";\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/44.sql",
    "content": "-- Remove Control service support table\nDROP TABLE IF EXISTS fledge.control_script;\nDROP TABLE IF EXISTS fledge.control_acl;"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/45.sql",
    "content": "-- Delete dispatcher log and log codes\nDELETE from fledge.log WHERE code = 'DSPST';\nDELETE from fledge.log_codes WHERE code = 'DSPST';\nDELETE from fledge.log WHERE code = 'DSPSD';\nDELETE from fledge.log_codes WHERE code = 'DSPSD';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/46.sql",
    "content": "-- No action is required\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/47.sql",
    "content": "DELETE FROM fledge.schedules WHERE process_name = 'bucket_storage_c';\nDELETE FROM fledge.scheduled_processes WHERE name = 'bucket_storage_c';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/48.sql",
    "content": "-- Remove id column in fledge.category_children\n\nBEGIN TRANSACTION;\n\n-- Drop existing index\nDROP INDEX IF EXISTS fledge.config_children_idx1;\n\n-- Rename existing table into a temp one\nALTER TABLE fledge.category_children RENAME TO category_children_old;\n\n-- Create new table\nCREATE TABLE fledge.category_children (\n       parent   character varying(255)  NOT NULL,\n       child    character varying(255)  NOT NULL,\n       CONSTRAINT config_children_pkey PRIMARY KEY(parent, child)\n);\n\n-- Copy data\nINSERT INTO fledge.category_children(parent, child) SELECT parent, child FROM fledge.category_children_old;\n\n-- Remote temp table\nDROP TABLE IF EXISTS fledge.category_children_old;\n\nCOMMIT;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/49.sql",
    "content": "-- The Schema Service table used to hold information about extension schemas\nDROP TABLE IF EXISTS fledge.service_schema;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/5.sql",
    "content": "-- Remove HTTP North C++ plugin entries\nDELETE FROM fledge.configuration WHERE key = 'North_Readings_to_HTTP';\nDELETE FROM fledge.scheduled_processes WHERE name='North_Readings_to_HTTP';\nDELETE FROM fledge.schedules WHERE process_name = 'North_Readings_to_HTTP';\nDELETE FROM fledge.statistics WHERE key = 'NORTH_READINGS_TO_HTTP';\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/50.sql",
    "content": "DELETE FROM fledge.log_codes where code IN ('ESSRT', 'ESSTP' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/51.sql",
    "content": "-- Access Control List usage relation\nDROP TABLE IF EXISTS fledge.acl_usage;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/52.sql",
    "content": "-- From: http://www.sqlite.org/faq.html:\n--    SQLite has limited ALTER TABLE support that you can use to change type of column.\n--    If you want to change the type of any column you will have to recreate the table.\n--    You can save existing data to a temporary table and then drop the old table\n--    Now, create the new table, then copy the data back in from the temporary table\n\n\n-- Remove deprecated_ts column in fledge.asset_tracker\n\n-- Drop existing index\nDROP INDEX IF EXISTS asset_tracker_ix1;\nDROP INDEX IF EXISTS asset_tracker_ix2;\n\n-- Rename existing table into a temp one\nALTER TABLE fledge.asset_tracker RENAME TO asset_tracker_old;\n\n-- Create new table\nCREATE TABLE IF NOT EXISTS fledge.asset_tracker (\n       id              integer                  PRIMARY KEY AUTOINCREMENT,\n       asset           character(50)            NOT NULL, -- asset name\n       event           character varying(50)    NOT NULL, -- event name\n       service         character varying(255)   NOT NULL, -- service name\n       fledge          character varying(50)    NOT NULL, -- FL service name\n       plugin          character varying(50)    NOT NULL, -- Plugin name\n       ts              DATETIME                 DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime'))\n);\n\n-- Copy data\nINSERT INTO fledge.asset_tracker ( id, asset, event, service, fledge, plugin, ts ) SELECT  id, asset, event, service, fledge, plugin, ts FROM fledge.asset_tracker_old;\n\n-- Create Index\nCREATE INDEX asset_tracker_ix1 ON asset_tracker (asset);\nCREATE INDEX asset_tracker_ix2 ON asset_tracker (service);\n\n-- Remote old table\nDROP TABLE IF EXISTS fledge.asset_tracker_old;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/53.sql",
    "content": "DELETE FROM fledge.log_codes where code IN ('ASTDP', 'ASTUN' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/54.sql",
    "content": "DELETE FROM fledge.log_codes where code = 'PIPIN';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/55.sql",
    "content": "DELETE FROM fledge.log_codes where code = 'SRVRS';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/56.sql",
    "content": "-- From: http://www.sqlite.org/faq.html:\n--    SQLite has limited ALTER TABLE support that you can use to change type of column.\n--    If you want to change the type of any column you will have to recreate the table.\n--    You can save existing data to a temporary table and then drop the old table\n--    Now, create the new table, then copy the data back in from the temporary table\n\n\n\n-- Drop existing index\nDROP INDEX IF EXISTS asset_tracker_ix1;\nDROP INDEX IF EXISTS asset_tracker_ix2;\n\n-- Rename existing table into a temp one\nALTER TABLE fledge.asset_tracker RENAME TO asset_tracker_old;\n\n-- Create new table\nCREATE TABLE IF NOT EXISTS fledge.asset_tracker (\n       id              integer                  PRIMARY KEY AUTOINCREMENT,\n       asset           character(50)            NOT NULL, -- asset name\n       event           character varying(50)    NOT NULL, -- event name\n       service         character varying(255)   NOT NULL, -- service name\n       fledge          character varying(50)    NOT NULL, -- FL service name\n       plugin          character varying(50)    NOT NULL, -- Plugin name\n       deprecated_ts\t\t\t\tDATETIME,\n       ts              DATETIME                 DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime'))\n);\n\n-- Copy data\nINSERT INTO fledge.asset_tracker ( id, asset, event, service, fledge, plugin, deprecated_ts, ts ) SELECT  id, asset, event, service, fledge, plugin, deprecated_ts, ts FROM fledge.asset_tracker_old;\n\n-- Create Index\nCREATE INDEX asset_tracker_ix1 ON asset_tracker (asset);\nCREATE INDEX asset_tracker_ix2 ON asset_tracker (service);\n\n-- Remove old table\nDROP TABLE IF EXISTS fledge.asset_tracker_old;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/57.sql",
    "content": "-- Delete Audit marker log and log codes entry\nDELETE from fledge.log WHERE code = 'AUMRK';\nDELETE from fledge.log_codes WHERE code = 'AUMRK';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/58.sql",
    "content": "-- Delete roles\nDELETE FROM fledge.roles WHERE name IN ('view','data-view');\n-- Reset auto increment\n-- You cannot use ALTER TABLE for that. The autoincrement counter is stored in a separate table named \"sqlite_sequence\". You can modify the value there\nUPDATE sqlite_sequence SET seq=1 WHERE name=\"roles\";\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/59.sql",
    "content": "-- Drop control pipeline tables\nDROP TABLE IF EXISTS fledge.control_source;\nDROP TABLE IF EXISTS fledge.control_destination;\nDROP TABLE IF EXISTS fledge.control_pipelines;\nDROP TABLE IF EXISTS fledge.control_filters;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/6.sql",
    "content": "DROP INDEX statistics_history_ix2;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/60.sql",
    "content": "DELETE FROM fledge.log_codes where code IN ('USRAD', 'USRDL', 'USRCH', 'USRRS' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/61.sql",
    "content": "DROP TABLE IF EXISTS fledge.monitors;\nDROP INDEX IF EXISTS fledge.monitors_ix1;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/62.sql",
    "content": "-- Delete roles\nDELETE FROM fledge.roles WHERE name IN ('view','control');\n-- Reset auto increment\n-- You cannot use ALTER TABLE for that. The autoincrement counter is stored in a separate table named \"sqlite_sequence\". You can modify the value there\nUPDATE sqlite_sequence SET seq=1 WHERE name=\"roles\";\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/63.sql",
    "content": "-- Drop control flow tables\nDROP TABLE IF EXISTS fledge.control_api_acl;\nDROP TABLE IF EXISTS fledge.control_api_parameters;\nDROP TABLE IF EXISTS fledge.control_api;"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/64.sql",
    "content": "DELETE FROM fledge.log_codes where code IN ('ACLAD', 'ACLCH', 'ACLDL', 'CTSAD', 'CTSCH', 'CTSDL', 'CTPAD', 'CTPCH', 'CTPDL');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/65.sql",
    "content": "DELETE FROM fledge.log_codes where code IN ('CTEAD', 'CTECH', 'CTEDL');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/66.sql",
    "content": "-- From: http://www.sqlite.org/faq.html:\n--    SQLite has limited ALTER TABLE support that you can use to change type of column.\n--    If you want to change the type of any column you will have to recreate the table.\n--    You can save existing data to a temporary table and then drop the old table\n--    Now, create the new table, then copy the data back in from the temporary table\n\n\n-- Remove priority column in fledge.scheduled_processes\n\n-- Rename existing table into a temp one\nALTER TABLE fledge.scheduled_processes RENAME TO scheduled_processes_old;\n\n-- Create new table\nCREATE TABLE fledge.scheduled_processes (\n             name        character varying(255)  NOT NULL,             -- Name of the process\n             script      JSON,                                         -- Full path of the process\n             CONSTRAINT scheduled_processes_pkey PRIMARY KEY ( name ) );\n\n-- Copy data\nINSERT INTO fledge.scheduled_processes ( name, script) SELECT name, script FROM fledge.scheduled_processes_old;\n\n-- Remote old table\nDROP TABLE IF EXISTS fledge.scheduled_processes_old;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/67.sql",
    "content": "DROP TABLE IF EXISTS fledge.alerts;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/68.sql",
    "content": "DELETE FROM fledge.schedules WHERE process_name = 'update checker';\nDELETE FROM fledge.scheduled_processes WHERE name = 'update checker';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/69.sql",
    "content": "DELETE FROM fledge.log_codes where code IN ('BUCAD', 'BUCCH', 'BUCDL');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/7.sql",
    "content": "-- Remove asset_tracker table and index\nDROP TABLE IF EXISTS fledge.asset_tracker;\nDROP INDEX IF EXISTS asset_tracker_ix1;\nDROP INDEX IF EXISTS asset_tracker_ix2;"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/70.sql",
    "content": "-- Remove 'hash_algorithm' column from users table\n\nALTER TABLE fledge.users DROP COLUMN hash_algorithm;\nUPDATE fledge.users SET pwd='39b16499c9311734c595e735cffb5d76ddffb2ebf8cf4313ee869525a9fa2c20:f400c843413d4c81abcba8f571e6ddb6' WHERE pwd ='495f7f5b17c534dbeabab3da2287a934b32ed6876568563b04c312be49e8773299243abd3881d13112ccfb67c4fb3ec8231406474810e1f6eb347d61c63785d4:672169c60df24b76b6b94e78cad800f8';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/71.sql",
    "content": "ALTER TABLE fledge.users DROP COLUMN failed_attempts;\nALTER TABLE fledge.users DROP COLUMN block_until;\nDELETE FROM fledge.log_codes where code IN ('USRBK', 'USRUB');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/72.sql",
    "content": "-- No action required\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/73.sql",
    "content": "DELETE FROM fledge.scheduled_processes WHERE name = 'pipeline_c';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/74.sql",
    "content": "ALTER TABLE fledge.plugin_data DROP COLUMN service_name;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/75.sql",
    "content": "-- no downgrade is necessary for the version, as the Python-based North task became obsolete a considerable time ago; therefore, it is not worthwhile to include the entry for its scheduled process"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/8.sql",
    "content": "-- North_Readings_to_PI - OMF Translator for readings\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North_Readings_to_PI',\n              'OMF North Plugin - C Code',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"omf\", \"default\" : \"omf\", \"description\" : \"Module that OMF North Plugin will load\" } } '\n            );\n\n-- North_Readings_to_HTTP - for readings\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North_Readings_to_HTTP',\n              'HTTP North Plugin - C Code',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"http-north\", \"default\" : \"http-north\", \"description\" : \"Module that HTTP North Plugin will load\" } } '\n            );\n\n-- dht11 - South plugin for DHT11 - C\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'dht11',\n              'DHT11 South C Plugin',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"dht11\", \"default\" : \"dht11\", \"description\" : \"Module that DHT11 South Plugin will load\" } } '\n            );\n\n-- North_Statistics_to_PI - OMF Translator for statistics\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North_Statistics_to_PI',\n              'OMF North Plugin - C Code',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"omf\", \"default\" : \"omf\", \"description\" : \"Module that OMF North Plugin will load\" } } '\n            );\n\n-- North Readings to PI - OMF Translator for readings\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North Readings to PI',\n              'OMF North Plugin',\n              '{\"plugin\": {\"description\": \"OMF North Plugin\", \"type\": \"string\", \"default\": \"omf\", \"value\": \"omf\"}, \"source\": {\"description\": \"Source of data to be sent on the stream. 
May be either readings, statistics or audit.\", \"type\": \"string\", \"default\": \"readings\", \"value\": \"readings\"}}'\n            );\n\n-- North Statistics to PI - OMF Translator for statistics\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North Statistics to PI',\n              'OMF North Statistics Plugin',\n              '{\"plugin\": {\"description\": \"OMF North Plugin\", \"type\": \"string\", \"default\": \"omf\", \"value\": \"omf\"}, \"source\": {\"description\": \"Source of data to be sent on the stream. May be either readings, statistics or audit.\", \"type\": \"string\", \"default\": \"statistics\", \"value\": \"statistics\"}}'\n            );\n\n-- North Readings to OCS - OSIsoft Cloud Services plugin for readings\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North Readings to OCS',\n              'OCS North Plugin',\n              '{\"plugin\": {\"description\": \"OCS North Plugin\", \"type\": \"string\", \"default\": \"ocs\", \"value\": \"ocs\"}, \"source\": {\"description\": \"Source of data to be sent on the stream. 
May be either readings, statistics or audit.\", \"type\": \"string\", \"default\": \"readings\", \"value\": \"readings\"}}'\n            );\n\n\n-- Readings OMF to PI - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '1cdf1ef8-7e02-11e8-adc0-fa7ae01bbebc', -- id\n                'OMF_to_PI_north_C',                    -- schedule_name\n                'North_Readings_to_PI',                 -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                't',                                    -- exclusive\n                'f'                                     -- disabled\n              );\n\n-- Statistics OMF to PI - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'f1e3b377-5acb-4bde-93d5-b6a792f76e07', -- id\n                'Stats_OMF_to_PI_north_C',              -- schedule_name\n                'North_Statistics_to_PI',               -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                't',                                    -- exclusive\n                'f'                                     -- disabled\n              );\n\n-- Readings to HTTP - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 
'ccdf1ef8-7e02-11e8-adc0-fa7ae01bb3bc', -- id\n                'HTTP_North_C',                         -- schedule_name\n                'North_Readings_to_HTTP',               -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                't',                                    -- exclusive\n                'f'                                     -- disabled\n              );\n\n\n-- DHT11 sensor south plugin - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '6b25f4d9-c7f3-4fc8-bd4a-4cf79f7055ca', -- id\n                'dht11',                                -- schedule_name\n                'dht11',                                -- process_name\n                1,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '01:00:00',                             -- schedule_interval (every hour)\n                't',                                    -- exclusive\n                'f'                                     -- disabled\n              );\n\n-- Readings OMF to PI\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '2b614d26-760f-11e7-b5a5-be2e44b06b34', -- id\n                'OMF to PI north',                      -- schedule_name\n                'North Readings to PI',                 -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                
'00:00:30',                             -- schedule_interval\n                't',                                   -- exclusive\n                'f'                                   -- disabled\n              );\n\n-- Statistics OMF to PI\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '1d7c327e-7dae-11e7-bb31-be2e44b06b34', -- id\n                'Stats OMF to PI north',                -- schedule_name\n                'North Statistics to PI',               -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                't',                                   -- exclusive\n                'f'                                   -- disabled\n              );\n\n-- Readings OMF to OCS\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '5d7fed92-fb9a-11e7-8c3f-9a214cf093ae', -- id\n                'OMF to OCS north',                     -- schedule_name\n                'North Readings to OCS',                -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                't',                                   -- exclusive\n                'f'                                   -- disabled\n              );\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/9.sql",
    "content": "DROP INDEX readings_ix2;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/downgrade/README",
    "content": "Place SQLite3 downgrade sql files here.\n\nFile name:\n\nX.sql, where X is the SQLite3 schema id\n\nExample:\n\nThe '9.sql' file is read by a Fledge app whose SQLite3 schema version is set to 10.\nThe '8.sql' file is read either by a Fledge app whose SQLite3 schema version is set to 9,\nor by a Fledge app downgrading the schema from 10 to 8.\n\nNote:\n- whenever the VERSION file in $FLEDGE_ROOT has a new schema in 'fledge_schema',\n  the corresponding sql file must be placed here for downgrade.\n- the sql file must exist even if it is empty\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/init.sql",
    "content": "----------------------------------------------------------------------\n-- Copyright (c) 2018 OSIsoft, LLC\n--\n-- Licensed under the Apache License, Version 2.0 (the \"License\");\n-- you may not use this file except in compliance with the License.\n-- You may obtain a copy of the License at\n--\n--     http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing, software\n-- distributed under the License is distributed on an \"AS IS\" BASIS,\n-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-- See the License for the specific language governing permissions and\n-- limitations under the License.\n----------------------------------------------------------------------\n\n--\n-- init.sql\n--\n-- SQLite script to create the Fledge persistent layer\n--\n\n-- NOTE:\n--\n-- This schema has to be used with the SQLite3 JSON1 extension\n--\n-- This script must be launched with the sqlite3 command line tool:\n--  sqlite3 /path/fledge.db\n--   > ATTACH DATABASE '/path/fledge.db' AS 'fledge'\n--   > .read init.sql\n--   > .quit\n\n----------------------------------------------------------------------\n-- DDL CONVENTIONS\n--\n-- Tables:\n-- * Names are in plural, terms are separated by _\n-- * Columns are, when possible, not null and have a default value.\n--\n-- Columns:\n-- id      : It is commonly the PK of the table, a smallint, integer or bigint.\n-- xxx_id  : It usually refers to a FK, where \"xxx\" is the name of the table.\n-- code    : Usually an AK, based on fixed length characters.\n-- ts      : The timestamp with microsec precision and tz. 
It is updated at\n--           every change.\n\n----------------------------------------------------------------------\n-- SCHEMA CREATION\n----------------------------------------------------------------------\n\n----- TABLES\n\n-- Log Codes Table\n-- List of tasks that log info into fledge.log.\nCREATE TABLE fledge.log_codes (\n       code        character(5)          NOT NULL,   -- The process that logs actions\n       description character varying(80) NOT NULL,\n       CONSTRAINT log_codes_pkey PRIMARY KEY (code) );\n\n-- Generic Log Table\n-- General log table for Fledge.\nCREATE TABLE fledge.log (\n       id    INTEGER                PRIMARY KEY AUTOINCREMENT,\n       code  CHARACTER(5)           NOT NULL,                  -- The process that logged the action\n       level SMALLINT               NOT NULL,                  -- 0 Success - 1 Failure - 2 Warning - 4 Info\n       log   JSON                   NOT NULL DEFAULT '{}',     -- Generic log structure\n       ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')), -- UTC\n       CONSTRAINT log_fk1 FOREIGN KEY (code)\n       REFERENCES log_codes (code) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\n-- Index: log_ix1 - For queries by code\nCREATE INDEX log_ix1\n    ON log(code, ts, level);\n\n-- Index to make GUI response faster\nCREATE INDEX log_ix2\n    ON log(ts);\n\n-- Asset status\n-- List of statuses an asset can have.\nCREATE TABLE fledge.asset_status (\n       id          INTEGER                PRIMARY KEY AUTOINCREMENT,\n       description character varying(255) NOT NULL DEFAULT '' );\n\n-- Asset Types\n-- Type of asset (for example south, sensor etc.)\nCREATE TABLE fledge.asset_types (\n       id          INTEGER                PRIMARY KEY AUTOINCREMENT,\n       description character varying(255) NOT NULL DEFAULT '' );\n\n-- Assets table\n-- This table is used to list the assets used in Fledge\n-- Readings do not necessarily have an asset, but 
whenever possible this\n-- table provides information regarding the data collected.\nCREATE TABLE fledge.assets (\n       id           INTEGER                     PRIMARY KEY AUTOINCREMENT,\n       code         character varying(50),                                  -- A unique code  (AK) used to match readings and assets. It can be anything.\n       description  character varying(255)      NOT NULL DEFAULT '',        -- A brief description of the asset\n       type_id      integer                     NOT NULL,                   -- FK for the type of asset\n       address      inet                        NOT NULL DEFAULT '0.0.0.0', -- An IPv4 or IPv6 address, if needed. Default means \"any address\"\n       status_id    integer                     NOT NULL,                   -- Status of the asset, FK to the asset_status table\n       properties   JSON                        NOT NULL DEFAULT '{}',      -- A generic JSON structure. Some elements (for example \"labels\") may be used in the rule to send messages to the south devices or data to the cloud\n       has_readings boolean                     NOT NULL DEFAULT 'f',       -- A boolean column, when TRUE, it means that the asset may have rows in the readings table\n       ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       CONSTRAINT assets_fk1 FOREIGN KEY (status_id)\n       REFERENCES asset_status (id) MATCH SIMPLE\n                 ON UPDATE NO ACTION\n                 ON DELETE NO ACTION,\n       CONSTRAINT assets_fk2 FOREIGN KEY (type_id)\n       REFERENCES asset_types (id) MATCH SIMPLE\n                 ON UPDATE NO ACTION\n                 ON DELETE NO ACTION );\n\n-- Index: fki_assets_fk1\nCREATE INDEX fki_assets_fk1\n    ON assets (status_id);\n\n-- Index: fki_assets_fk2\nCREATE INDEX fki_assets_fk2\n    ON assets (type_id);\n\n-- Index: assets_ix1\nCREATE UNIQUE INDEX assets_ix1\n    ON assets (code);\n\n-- Asset Status Changes\n-- When an asset changes its status, the 
previous status is added here.\n-- start_ts contains the value of ts of the row in the asset table.\nCREATE TABLE fledge.asset_status_changes (\n       id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n       asset_id   integer                     NOT NULL,\n       status_id  integer                     NOT NULL,\n       log        JSON                        NOT NULL DEFAULT '{}',\n       start_ts   DATETIME NOT NULL,\n       ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       CONSTRAINT asset_status_changes_fk1 FOREIGN KEY (asset_id)\n       REFERENCES assets (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION,\n       CONSTRAINT asset_status_changes_fk2 FOREIGN KEY (status_id)\n       REFERENCES asset_status (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_asset_status_changes_fk1\n    ON asset_status_changes (asset_id);\n\nCREATE INDEX fki_asset_status_changes_fk2\n    ON asset_status_changes (status_id);\n\n\n-- Links table\n-- Links among assets in 1:M relationships.\nCREATE TABLE fledge.links (\n       id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n       asset_id   integer                     NOT NULL,\n       properties JSON                        NOT NULL DEFAULT '{}',\n       ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       CONSTRAINT links_fk1 FOREIGN KEY (asset_id)\n       REFERENCES assets (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_links_fk1\n    ON links (asset_id);\n\n-- Assets Linked table\n-- In links, relationship between an asset and other assets.\nCREATE TABLE fledge.asset_links (\n       link_id  integer                     NOT NULL,\n       asset_id integer                     NOT NULL,\n       ts      DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 
'localtime')),\n       CONSTRAINT asset_links_pkey PRIMARY KEY (link_id, asset_id) );\n\nCREATE INDEX fki_asset_links_fk1\n    ON asset_links (link_id);\n\nCREATE INDEX fki_asset_link_fk2\n    ON asset_links (asset_id);\n\n-- Asset Message Status table\n-- Status of the messages to send South\nCREATE TABLE fledge.asset_message_status (\n       id          INTEGER                PRIMARY KEY AUTOINCREMENT,\n       description character varying(255) NOT NULL DEFAULT '' );\n\n-- Asset Messages table\n-- Messages directed to the south devices.\nCREATE TABLE fledge.asset_messages (\n       id        INTEGER                     PRIMARY KEY AUTOINCREMENT,\n       asset_id  integer                     NOT NULL,\n       status_id integer                     NOT NULL,\n       message   JSON                        NOT NULL DEFAULT '{}',\n       ts        DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       CONSTRAINT asset_messages_fk1 FOREIGN KEY (asset_id)\n       REFERENCES assets (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION,\n       CONSTRAINT asset_messages_fk2 FOREIGN KEY (status_id)\n       REFERENCES asset_message_status (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_asset_messages_fk1\n    ON asset_messages (asset_id);\n\nCREATE INDEX fki_asset_messages_fk2\n    ON asset_messages (status_id);\n\n-- Streams table\n-- List of the streams to the Cloud.\nCREATE TABLE fledge.streams (\n    id            INTEGER                      PRIMARY KEY AUTOINCREMENT,         -- Sequence ID\n    description    character varying(255)      NOT NULL DEFAULT '',               -- A brief description of the stream entry\n    properties     JSON                        NOT NULL DEFAULT '{}',             -- A generic set of properties\n    object_stream  JSON                        NOT NULL DEFAULT '{}',             -- Definition of what must be streamed\n   
 object_block   JSON                        NOT NULL DEFAULT '{}',             -- Definition of how the stream must be organised\n    object_filter  JSON                        NOT NULL DEFAULT '{}',             -- Any filter involved in selecting the data to stream\n    active_window  JSON                        NOT NULL DEFAULT '{}',             -- The window of operations\n    active         boolean                     NOT NULL DEFAULT 't',              -- When false, the stream is inactive and no data is sent to it\n    last_object    bigint                      NOT NULL DEFAULT 0,                -- The ID of the last object streamed (asset or reading, depending on the object_stream)\n    ts             DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime'))); -- Creation or last update\n\n\n-- Configuration table\n-- The configuration in JSON format.\n-- The PK is also used in the REST API\n-- The value column is JSON\n-- ts is set by default with now().\nCREATE TABLE fledge.configuration (\n       key         character varying(255)      NOT NULL,                          -- Primary key\n       display_name character varying(255)     NOT NULL,                          -- Display Name\n       description character varying(255)      NOT NULL,                          -- Description, in plain text\n       value       JSON                        NOT NULL DEFAULT '{}',             -- JSON object containing the configuration values\n       ts          DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')), -- Timestamp, updated at every change\n       CONSTRAINT configuration_pkey PRIMARY KEY (key) );\n\n\n-- Configuration changes\n-- This table has the same structure as fledge.configuration, plus the timestamp that identifies the time it has changed\n-- The table is used to keep track of the changes in the \"value\" column\nCREATE TABLE fledge.configuration_changes (\n       key                 character varying(255)      NOT NULL,\n       
configuration_ts    DATETIME                    NOT NULL,\n       configuration_value JSON                        NOT NULL DEFAULT '{}',\n       ts                  DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       CONSTRAINT configuration_changes_pkey PRIMARY KEY (key, configuration_ts) );\n\n-- Statistics table\n-- The table is used to keep track of the statistics for Fledge\nCREATE TABLE fledge.statistics (\n       key                 character varying(56)       NOT NULL,                           -- Primary key, all uppercase\n       description         character varying(255)      NOT NULL,                           -- Description, in plain text\n       value               bigint                      NOT NULL DEFAULT 0,                 -- Integer value, the statistics\n       previous_value      bigint                      NOT NULL DEFAULT 0,                 -- Integer value, the previous statistic, updated by the metrics collector\n       ts                  DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime'))); -- Timestamp, updated at every change\nCREATE UNIQUE INDEX statistics_ix1\n    ON statistics(key);\n\n-- Statistics history\n-- Keeps history of the statistics in fledge.statistics\n-- The table is updated at startup\nCREATE TABLE fledge.statistics_history (\n       id          INTEGER                     PRIMARY KEY AUTOINCREMENT,          -- Sequence ID\n       key         character varying(56)       NOT NULL,                           -- Compound primary key, all uppercase\n       history_ts  DATETIME NOT NULL,                                              -- Compound primary key, the highest value of statistics.ts when statistics are copied here.\n       value       bigint                      NOT NULL DEFAULT 0,                 -- Integer value, the statistics\n       ts          DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')));       -- Timestamp, updated at every change, UTC time\n\nCREATE UNIQUE INDEX 
statistics_history_ix1\n    ON statistics_history (key, history_ts);\n\nCREATE INDEX statistics_history_ix2\n    ON statistics_history (key);\n\nCREATE INDEX statistics_history_ix3\n    ON statistics_history (history_ts);\n\n-- Contains history of the statistics_history table\n-- Data are historicized daily\n--\nCREATE TABLE fledge.statistics_history_daily (\n    year        DATE DEFAULT (STRFTIME('%Y', 'NOW')),\n    day         DATE DEFAULT (STRFTIME('%Y-%m-%d', 'NOW')),\n    key         character varying(56)       NOT NULL,\n    value       bigint                      NOT NULL DEFAULT 0\n);\n\nCREATE INDEX statistics_history_daily_ix1\n    ON statistics_history_daily (year);\n\n-- Resources table\n-- A resource can be anything that is available or can be done in Fledge. Examples:\n-- - Access to assets\n-- - Access to readings\n-- - Access to streams\nCREATE TABLE fledge.resources (\n    id          INTEGER                PRIMARY KEY AUTOINCREMENT,  -- Sequence ID\n    code        character(10)          NOT NULL,\n    description character varying(255) NOT NULL DEFAULT '' );\n\nCREATE UNIQUE INDEX resource_ix1\n    ON resources (code);\n\n-- Roles table\nCREATE TABLE fledge.roles (\n    id          INTEGER   PRIMARY KEY AUTOINCREMENT,\n    name        character varying(25)  NOT NULL,\n    description character varying(255) NOT NULL DEFAULT '' );\n\n\nCREATE UNIQUE INDEX roles_ix1\n    ON roles (name);\n\n-- Roles, Resources and Permissions table\n-- For each role there are resources associated, with a given permission.\nCREATE TABLE fledge.role_resource_permission (\n       role_id     integer NOT NULL,\n       resource_id integer NOT NULL,\n       access      JSON    NOT NULL DEFAULT '{}',\n       CONSTRAINT role_resource_permission_pkey PRIMARY KEY (role_id, resource_id),\n       CONSTRAINT role_resource_permissions_fk1 FOREIGN KEY (role_id)\n       REFERENCES roles (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION,\n      
 CONSTRAINT role_resource_permissions_fk2 FOREIGN KEY (resource_id)\n       REFERENCES resources (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_role_resource_permissions_fk1\n    ON role_resource_permission (role_id);\n\nCREATE INDEX fki_role_resource_permissions_fk2\n    ON role_resource_permission (resource_id);\n\n\n-- Roles Assets Permissions table\n-- Combination of roles, assets and access\nCREATE TABLE fledge.role_asset_permissions (\n    role_id    integer NOT NULL,\n    asset_id   integer NOT NULL,\n    access     JSON    NOT NULL DEFAULT '{}',\n    CONSTRAINT role_asset_permissions_pkey PRIMARY KEY (role_id, asset_id),\n    CONSTRAINT role_asset_permissions_fk1 FOREIGN KEY (role_id)\n    REFERENCES roles (id) MATCH SIMPLE\n            ON UPDATE NO ACTION\n            ON DELETE NO ACTION,\n    CONSTRAINT role_asset_permissions_fk2 FOREIGN KEY (asset_id)\n    REFERENCES assets (id) MATCH SIMPLE\n            ON UPDATE NO ACTION\n            ON DELETE NO ACTION,\n    CONSTRAINT user_asset_permissions_fk1 FOREIGN KEY (role_id)\n    REFERENCES roles (id) MATCH SIMPLE\n            ON UPDATE NO ACTION\n            ON DELETE NO ACTION );\n\nCREATE INDEX fki_role_asset_permissions_fk1\n    ON role_asset_permissions (role_id);\n\nCREATE INDEX fki_role_asset_permissions_fk2\n    ON role_asset_permissions (asset_id);\n\n-- Users table\n-- Fledge users table.\n-- Authentication Method:\n-- 0 - Disabled\n-- 1 - PWD\n-- 2 - Public Key\nCREATE TABLE fledge.users (\n       id                INTEGER   PRIMARY KEY AUTOINCREMENT,\n       uname             character varying(80)  NOT NULL,\n       real_name         character varying(255) NOT NULL,\n       role_id           integer                NOT NULL,\n       description       character varying(255) NOT NULL DEFAULT '',\n       pwd               character varying(255) ,\n       public_key        character varying(255) ,\n       enabled           boolean       
         NOT NULL DEFAULT 't',\n       pwd_last_changed  DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       access_method     TEXT CHECK( access_method IN ('any','pwd','cert') )  NOT NULL DEFAULT 'any',\n       hash_algorithm    TEXT CHECK( hash_algorithm IN ('SHA256', 'SHA512') )  NOT NULL DEFAULT 'SHA512',\n       failed_attempts   INTEGER    DEFAULT 0,\n       block_until  DATETIME DEFAULT NULL,\n          CONSTRAINT users_fk1 FOREIGN KEY (role_id)\n          REFERENCES roles (id) MATCH SIMPLE\n                  ON UPDATE NO ACTION\n                  ON DELETE NO ACTION );\n\nCREATE INDEX fki_users_fk1\n    ON users (role_id);\n\nCREATE UNIQUE INDEX users_ix1\n    ON users (uname);\n\n-- User Login table\n-- List of logins executed by the users.\nCREATE TABLE fledge.user_logins (\n       id               INTEGER   PRIMARY KEY AUTOINCREMENT,\n       user_id          integer   NOT NULL,\n       ip               inet      NOT NULL DEFAULT '0.0.0.0',\n       ts               DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       token            character varying(255)      NOT NULL,\n       token_expiration DATETIME NOT NULL,\n       CONSTRAINT user_logins_fk1 FOREIGN KEY (user_id)\n       REFERENCES users (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\n CREATE INDEX fki_user_logins_fk1\n     ON user_logins (user_id);\n\n-- User Password History table\n-- Maintains a history of passwords\nCREATE TABLE fledge.user_pwd_history (\n       id               INTEGER   PRIMARY KEY AUTOINCREMENT,\n       user_id          integer   NOT NULL,\n       pwd              character varying(255),\n       CONSTRAINT user_pwd_history_fk1 FOREIGN KEY (user_id)\n       REFERENCES users (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_user_pwd_history_fk1\n    ON user_pwd_history (user_id);\n\n\n-- User Resource Permissions 
table\n-- Association of users with resources and given permissions for each resource.\nCREATE TABLE fledge.user_resource_permissions (\n       user_id     integer NOT NULL,\n       resource_id integer NOT NULL,\n       access      JSON NOT NULL DEFAULT '{}',\n       CONSTRAINT user_resource_permissions_pkey PRIMARY KEY (user_id, resource_id),\n       CONSTRAINT user_resource_permissions_fk1 FOREIGN KEY (user_id)\n       REFERENCES users (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION,\n       CONSTRAINT user_resource_permissions_fk2 FOREIGN KEY (resource_id)\n       REFERENCES resources (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_user_resource_permissions_fk1\n    ON user_resource_permissions (user_id);\n\nCREATE INDEX fki_user_resource_permissions_fk2\n    ON user_resource_permissions (resource_id);\n\n-- User Asset Permissions table\n-- Association of users with assets\nCREATE TABLE fledge.user_asset_permissions (\n       user_id    integer NOT NULL,\n       asset_id   integer NOT NULL,\n       access     JSON NOT NULL DEFAULT '{}',\n       CONSTRAINT user_asset_permissions_pkey PRIMARY KEY (user_id, asset_id),\n       CONSTRAINT user_asset_permissions_fk1 FOREIGN KEY (user_id)\n       REFERENCES users (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION,\n       CONSTRAINT user_asset_permissions_fk2 FOREIGN KEY (asset_id)\n       REFERENCES assets (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_user_asset_permissions_fk1\n    ON user_asset_permissions (user_id);\n\nCREATE INDEX fki_user_asset_permissions_fk2\n    ON user_asset_permissions (asset_id);\n\n\n-- List of scheduled Processes\nCREATE TABLE fledge.scheduled_processes (\n             name        character varying(255)  NOT NULL,             -- Name of the process\n             script      JSON, 
                                        -- Full path of the process\n             priority    INTEGER                 NOT NULL DEFAULT 999, -- priority to run for STARTUP\n             CONSTRAINT scheduled_processes_pkey PRIMARY KEY ( name ) );\n\n-- List of schedules\nCREATE TABLE fledge.schedules (\n             id                uuid                   NOT NULL, -- PK\n             process_name      character varying(255) NOT NULL, -- FK process name\n             schedule_name     character varying(255) NOT NULL, -- schedule name\n             schedule_type     INTEGER                NOT NULL, -- 1 = startup,  2 = timed\n                                                                -- 3 = interval, 4 = manual\n             schedule_interval INTEGER,                         -- Repeat interval\n             schedule_time     INTEGER,                         -- Start time\n             schedule_day      INTEGER,                         -- ISO day 1 = Monday, 7 = Sunday\n             exclusive         boolean NOT NULL DEFAULT 't',    -- true = Only one task can run\n                                                                -- at any given time\n             enabled           boolean NOT NULL DEFAULT 'f',    -- false = A given schedule is disabled by default\n  CONSTRAINT schedules_pkey PRIMARY KEY  ( id ),\n  CONSTRAINT schedules_fk1  FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\n\n-- List of tasks\nCREATE TABLE fledge.tasks (\n             id           uuid                        NOT NULL,                          -- PK\n             schedule_name character varying(255),                                       -- Name of the task\n             schedule_id  uuid                        NOT NULL,                          -- Link between schedule & task table\n             process_name character varying(255)      NOT NULL,                          -- 
Name of the task's process\n             state        smallint                    NOT NULL,                          -- 1-Running, 2-Complete, 3-Cancelled, 4-Interrupted\n             start_time   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),       -- The date and time the task started UTC\n             end_time     DATETIME,                                                      -- The date and time the task ended\n             reason       character varying(255),                                        -- The reason why the task ended\n             pid          integer                     NOT NULL,                          -- Linux process id\n             exit_code    integer,                                                       -- Process exit status code (negative means exited via signal)\n  CONSTRAINT tasks_pkey PRIMARY KEY ( id ),\n  CONSTRAINT tasks_fk1 FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\n\nCREATE INDEX tasks_ix1\n    ON tasks(schedule_name, start_time);\n\n\n-- Tracks types already created into PI Server\nCREATE TABLE fledge.omf_created_objects (\n    configuration_key character varying(255)    NOT NULL,            -- FK to fledge.configuration\n    type_id           integer                   NOT NULL,            -- Identifies the specific PI Server type\n    asset_code        character varying(255)    NOT NULL,\n    CONSTRAINT omf_created_objects_pkey PRIMARY KEY (configuration_key,type_id, asset_code),\n    CONSTRAINT omf_created_objects_fk1 FOREIGN KEY (configuration_key)\n    REFERENCES configuration (key) MATCH SIMPLE\n            ON UPDATE NO ACTION\n            ON DELETE NO ACTION );\n\n\n-- Backups information\n-- Stores information about executed backups\nCREATE TABLE fledge.backups (\n    id         INTEGER                 PRIMARY KEY AUTOINCREMENT,\n    file_name  character varying(255)  NOT NULL DEFAULT '',          
         -- Backup file name, expressed as absolute path\n    ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')), -- Backup creation timestamp\n    type       integer                 NOT NULL,                              -- Backup type : 1-Full, 2-Incremental\n    status     integer                 NOT NULL,                              -- Backup status :\n                                                                              --   1-Running\n                                                                              --   2-Completed\n                                                                              --   3-Cancelled\n                                                                              --   4-Interrupted\n                                                                              --   5-Failed\n                                                                              --   6-Restored backup\n    exit_code  integer );                                                     -- Process exit status code\n\n\n-- Fledge DB version: keeps the schema version id\nCREATE TABLE fledge.version (id CHAR(10));\n\n-- Create the configuration category_children table\nCREATE TABLE fledge.category_children (\n       id       integer                 PRIMARY KEY AUTOINCREMENT,\n       parent   character varying(255)  NOT NULL,\n       child    character varying(255)  NOT NULL\n);\n\nCREATE UNIQUE INDEX config_children_idx1\n    ON category_children (parent, child);\n\n-- Create the asset_tracker table\nCREATE TABLE fledge.asset_tracker (\n       id              integer                  PRIMARY KEY AUTOINCREMENT,\n       asset           character(50)            NOT NULL, -- asset name\n       event           character varying(50)    NOT NULL, -- event name\n       service         character varying(255)   NOT NULL, -- service name\n       fledge          character varying(50)    NOT NULL, -- FL service name\n       plugin        
  character varying(50)    NOT NULL, -- Plugin name\n       deprecated_ts   DATETIME                         , -- Set when the asset record is deprecated; empty means the entry has not been deprecated\n       ts              DATETIME                 DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       data            JSON                     DEFAULT '{}'\n);\n\nCREATE INDEX asset_tracker_ix1 ON asset_tracker (asset);\nCREATE INDEX asset_tracker_ix2 ON asset_tracker (service);\n\n-- Create plugin_data table\n-- Persist plugin data in the storage\nCREATE TABLE fledge.plugin_data (\n\tkey             character varying(255)    NOT NULL,\n\tdata            JSON                      NOT NULL DEFAULT '{}',\n\tservice_name    character varying(255),\n\tCONSTRAINT plugin_data_pkey PRIMARY KEY (key) );\n\n-- Create packages table\nCREATE TABLE fledge.packages (\n             id                uuid                   NOT NULL, -- PK\n             name              character varying(255) NOT NULL, -- Package name\n             action            character varying(10) NOT NULL, -- APT actions:\n                                                                -- list\n                                                                -- install\n                                                                -- purge\n                                                                -- update\n             status            INTEGER                NOT NULL, -- exit code\n                                                                -- -1       - in-progress\n                                                                --  0       - success\n                                                                -- Non-Zero - failed\n             log_file_uri      character varying(255) NOT NULL, -- Package Log file relative path\n  CONSTRAINT packages_pkey PRIMARY KEY  ( id ) );\n\n-- Create filters table\nCREATE TABLE fledge.filters (\n             
name        character varying(255)        NOT NULL,\n             plugin      character varying(255)        NOT NULL,\n       CONSTRAINT filter_pkey PRIMARY KEY( name ) );\n\n-- Create filter_users table\nCREATE TABLE fledge.filter_users (\n             name        character varying(255)        NOT NULL,\n             user        character varying(255)        NOT NULL);\n\n-- Create control_script table\n-- Script management for control dispatch service\nCREATE TABLE fledge.control_script (\n             name          character varying(255)        NOT NULL,\n             steps         JSON                          NOT NULL DEFAULT '{}',\n             acl           character varying(255),\n             CONSTRAINT    control_script_pkey           PRIMARY KEY (name) );\n\n-- Create control_acl table\n-- Access Control List Management for control dispatch service\nCREATE TABLE fledge.control_acl (\n             name          character varying(255)        NOT NULL,\n             service       JSON                          NOT NULL DEFAULT '{}',\n             url           JSON                          NOT NULL DEFAULT '{}',\n             CONSTRAINT    control_acl_pkey              PRIMARY KEY (name) );\n\n-- Access Control List usage relation\nCREATE TABLE fledge.acl_usage (\n             name            character varying(255)  NOT NULL,  -- ACL name\n             entity_type     character varying(80)   NOT NULL,  -- associated entity type: service or script \n             entity_name     character varying(255)  NOT NULL,  -- associated entity name\n             CONSTRAINT      usage_acl_pkey          PRIMARY KEY (name, entity_type, entity_name) );\n\n-- Create control_source table\nCREATE TABLE fledge.control_source (\n             cpsid            integer                     PRIMARY KEY AUTOINCREMENT,       -- auto source id\n             name             character  varying(40)      NOT NULL,                        -- source name\n             description      
character  varying(120)     NOT NULL                         -- source description\n            );\n\n-- Create control_destination table\nCREATE TABLE fledge.control_destination (\n             cpdid            integer                     PRIMARY KEY AUTOINCREMENT,       -- auto destination id\n             name             character  varying(40)      NOT NULL,                        -- destination name\n             description      character  varying(120)     NOT NULL                         -- destination description\n            );\n\n-- Create control_pipelines table\nCREATE TABLE fledge.control_pipelines (\n             cpid             integer                     PRIMARY KEY AUTOINCREMENT,       -- control pipeline id\n             name             character  varying(255)     NOT NULL                 ,       -- control pipeline name\n             stype            integer                                              ,       -- source type id from control_source table\n             sname            character  varying(80)                               ,       -- source name from control_source table\n             dtype            integer                                              ,       -- destination type id from control_destination table\n             dname            character  varying(80)                               ,       -- destination name from control_destination table\n             enabled          boolean                     NOT NULL DEFAULT  'f'    ,       -- false = A given pipeline is disabled by default\n             execution        character  varying(20)      NOT NULL DEFAULT  'shared'       -- pipeline will be executed as with shared execution model by default\n             );\n\n-- Create control_filters table\nCREATE TABLE fledge.control_filters (\n             fid              integer                     PRIMARY KEY AUTOINCREMENT,       -- auto filter id\n             cpid             integer                     NOT NULL               
  ,       -- control pipeline id\n             forder           integer                     NOT NULL                 ,       -- filter order\n             fname            character  varying(255)     NOT NULL                 ,       -- Name of the filter instance\n             CONSTRAINT       control_filters_fk1         FOREIGN KEY (cpid)\n             REFERENCES       control_pipelines (cpid)    MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n             );\n\n-- Create control_api table\nCREATE TABLE fledge.control_api (\n             name             character  varying(255)     NOT NULL                 ,       -- control API name\n             description      character  varying(255)     NOT NULL                 ,       -- description of control API\n             type             integer                     NOT NULL                 ,       -- 0 for write and 1 for operation\n             operation_name   character  varying(255)                              ,       -- name of the operation and only valid if type is operation\n             destination      integer                     NOT NULL                 ,       -- destination of request; 0-broadcast, 1-service, 2-asset, 3-script\n             destination_arg  character  varying(255)                              ,       -- name of the destination and only used if destination is non-zero\n             anonymous        boolean                     NOT NULL DEFAULT  'f'    ,       -- anonymous callers to make request to control API; by default false\n             CONSTRAINT       control_api_pname           PRIMARY KEY (name)\n             );\n\n-- Create control_api_parameters table\nCREATE TABLE fledge.control_api_parameters (\n             name             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.control_api\n             parameter        character  varying(255)     NOT NULL                 ,       -- name of parameter\n             value            character 
 varying(255)                              ,       -- value of parameter if constant otherwise default\n             constant         boolean                     NOT NULL                 ,       -- parameter is either a constant or variable\n             FOREIGN KEY (name) REFERENCES control_api (name)\n             );\n\n-- Create control_api_acl table\nCREATE TABLE fledge.control_api_acl (\n             name             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.control_api\n             user             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.users\n             FOREIGN KEY (name) REFERENCES control_api (name)                      ,\n             FOREIGN KEY (user) REFERENCES users (uname)\n             );\n\n-- Create monitors table\nCREATE TABLE fledge.monitors (\n             service       character varying(255) NOT NULL,\n             monitor       character varying(80) NOT NULL,\n             minimum       integer,\n             maximum       integer,\n             average       integer,\n             samples       integer,\n             ts            DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))\n             );\n\nCREATE INDEX monitors_ix1\n    ON monitors(service, monitor);\n\n-- Create alerts table\n\nCREATE TABLE fledge.alerts (\n       key         character varying(80)       NOT NULL,                                  -- Primary key\n       message     character varying(255)      NOT NULL,                                 -- Alert Message\n       urgency     SMALLINT                    NOT NULL,                                 -- 1 Critical - 2 High - 3 Normal - 4 Low\n       ts          DATETIME    DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),     -- Timestamp, updated at every change\n       CONSTRAINT  alerts_pkey PRIMARY KEY (key) );\n\n----------------------------------------------------------------------\n-- Initialization phase - 
DML\n----------------------------------------------------------------------\n\n-- Roles\nDELETE FROM fledge.roles;\nINSERT INTO fledge.roles ( name, description )\n     VALUES ('admin', 'All CRUD privileges'),\n            ('user', 'All CRUD operations and self profile management'),\n            ('view', 'Only to view the configuration'),\n            ('data-view', 'Only read the data in buffer'),\n            ('control', 'Same as editor can do and also have access for control scripts and pipelines');\n\n-- Users\nDELETE FROM fledge.users;\nINSERT INTO fledge.users ( uname, real_name, pwd, role_id, description )\n     VALUES ('admin', 'Admin user', '495f7f5b17c534dbeabab3da2287a934b32ed6876568563b04c312be49e8773299243abd3881d13112ccfb67c4fb3ec8231406474810e1f6eb347d61c63785d4:672169c60df24b76b6b94e78cad800f8', 1, 'admin user'),\n            ('user', 'Normal user', '495f7f5b17c534dbeabab3da2287a934b32ed6876568563b04c312be49e8773299243abd3881d13112ccfb67c4fb3ec8231406474810e1f6eb347d61c63785d4:672169c60df24b76b6b94e78cad800f8', 2, 'normal user');\n\n-- User password history\nDELETE FROM fledge.user_pwd_history;\n\n-- User logins\nDELETE FROM fledge.user_logins;\n\n-- Log Codes\nDELETE FROM fledge.log_codes;\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'PURGE', 'Data Purging Process' ),\n            ( 'LOGGN', 'Logging Process' ),\n            ( 'STRMN', 'Streaming Process' ),\n            ( 'SYPRG', 'System Purge' ),\n            ( 'START', 'System Startup' ),\n            ( 'FSTOP', 'System Shutdown' ),\n            ( 'CONCH', 'Configuration Change' ),\n            ( 'CONAD', 'Configuration Addition' ),\n            ( 'SCHCH', 'Schedule Change' ),\n            ( 'SCHAD', 'Schedule Addition' ),\n            ( 'SRVRG', 'Service Registered' ),\n            ( 'SRVUN', 'Service Unregistered' ),\n            ( 'SRVFL', 'Service Fail' ),\n            ( 'SRVRS', 'Service Restart' ),\n            ( 'NHCOM', 'North Process Complete' ),\n            ( 
'NHDWN', 'North Destination Unavailable' ),\n            ( 'NHAVL', 'North Destination Available' ),\n            ( 'UPEXC', 'Update Complete' ),\n            ( 'BKEXC', 'Backup Complete' ),\n            ( 'NTFDL', 'Notification Deleted' ),\n            ( 'NTFAD', 'Notification Added' ),\n            ( 'NTFSN', 'Notification Sent' ),\n            ( 'NTFCL', 'Notification Cleared' ),\n            ( 'NTFST', 'Notification Server Startup' ),\n            ( 'NTFSD', 'Notification Server Shutdown' ),\n            ( 'PKGIN', 'Package installation' ),\n            ( 'PKGUP', 'Package updated' ),\n            ( 'PKGRM', 'Package purged' ),\n            ( 'DSPST', 'Dispatcher Startup' ),\n            ( 'DSPSD', 'Dispatcher Shutdown' ),\n            ( 'ESSRT', 'External Service Startup' ),\n            ( 'ESSTP', 'External Service Shutdown' ),\n            ( 'ASTDP', 'Asset deprecated' ),\n            ( 'ASTUN', 'Asset un-deprecated' ),\n            ( 'PIPIN', 'Pip installation' ),\n            ( 'AUMRK', 'Audit Log Marker' ),\n            ( 'USRAD', 'User Added' ),\n            ( 'USRDL', 'User Deleted' ),\n            ( 'USRCH', 'User Changed' ),\n            ( 'USRRS', 'User Restored' ),\n            ( 'ACLAD', 'ACL Added' ),( 'ACLCH', 'ACL Changed' ),( 'ACLDL', 'ACL Deleted' ),\n            ( 'CTSAD', 'Control Script Added' ),( 'CTSCH', 'Control Script Changed' ),('CTSDL', 'Control Script Deleted' ),\n            ( 'CTPAD', 'Control Pipeline Added' ),( 'CTPCH', 'Control Pipeline Changed' ),('CTPDL', 'Control Pipeline Deleted' ),\n            ( 'CTEAD', 'Control Entrypoint Added' ),( 'CTECH', 'Control Entrypoint Changed' ),('CTEDL', 'Control Entrypoint Deleted' ),\n            ( 'BUCAD', 'Bucket Added' ), ( 'BUCCH', 'Bucket Changed' ), ( 'BUCDL', 'Bucket Deleted' ),\n            ( 'USRBK', 'User Blocked' ), ( 'USRUB', 'User Unblocked' )\n            ;\n\n--\n-- Configuration parameters\n--\nDELETE FROM fledge.configuration;\n\n\n-- Statistics\nINSERT INTO 
fledge.statistics ( key, description, value, previous_value )\n     VALUES ( 'READINGS',             'Readings received by Fledge', 0, 0 ),\n            ( 'BUFFERED',             'Readings currently in the Fledge buffer', 0, 0 ),\n            ( 'UNSENT',               'Readings filtered out in the send process', 0, 0 ),\n            ( 'PURGED',               'Readings removed from the buffer by the purge process', 0, 0 ),\n            ( 'UNSNPURGED',           'Readings that were purged from the buffer before being sent', 0, 0 ),\n            ( 'DISCARDED',            'Readings discarded by the South Service before being  placed in the buffer. This may be due to an error in the readings themselves.', 0, 0 );\n\n--\n-- Scheduled processes\n--\n-- Use this to create guids: https://www.uuidgenerator.net/version1 */\n-- Weekly repeat for timed schedules: set schedule_interval to 168:00:00\n\n-- Core Tasks\n--\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'purge',               '[\"tasks/purge\"]'       );\nINSERT INTO fledge.scheduled_processes (name, script)   VALUES ( 'purge_system',        '[\"tasks/purge_system\"]');\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'stats collector',     '[\"tasks/statistics\"]'  );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'FledgeUpdater',       '[\"tasks/update\"]'      );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'certificate checker', '[\"tasks/check_certs\"]' );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'update checker',      '[\"tasks/check_updates\"]');\n\n-- Storage Tasks\n--\nINSERT INTO fledge.scheduled_processes (name, script) VALUES ('backup',  '[\"tasks/backup\"]'  );\nINSERT INTO fledge.scheduled_processes (name, script) VALUES ('restore', '[\"tasks/restore\"]' );\n\n-- South, Notification, North Tasks\n--\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'south_c',           
'[\"services/south_c\"]',         100  );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'notification_c',    '[\"services/notification_c\"]',    30 );\nINSERT INTO fledge.scheduled_processes (name, script)             VALUES ( 'north_c',           '[\"tasks/north_c\"]'                  );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'north_C',           '[\"services/north_C\"]',          200 );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'dispatcher_c',      '[\"services/dispatcher_c\"]',      20 );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'bucket_storage_c',  '[\"services/bucket_storage_c\"]',  10 );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'pipeline_c',        '[\"services/pipeline_c\"]',        90 );\n\n-- Automation script tasks\n--\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'automation_script', '[\"tasks/automation_script\"]' );\n\n--\n-- Schedules\n--\n-- Use this to create guids: https://www.uuidgenerator.net/version1 */\n-- Weekly repeat for timed schedules: set schedule_interval to 168:00:00\n--\n\n\n-- Core Tasks\n--\n\n-- Purge\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'cea17db8-6ccc-11e7-907b-a6006ad3dba0', -- id\n                'purge',                                -- schedule_name\n                'purge',                                -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:10:00',                             -- schedule_interval (every 10 minutes)\n                't',                                   -- exclusive\n                't'                                    -- 
enabled\n              );\n\n--\n-- Purge System\n--\n-- Purge old information from the fledge database\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                               schedule_time, schedule_interval, exclusive, enabled )\nVALUES ( 'd37265f0-c83a-11eb-b8bc-0242ac130003', -- id\n         'purge_system',                         -- schedule_name\n         'purge_system',                         -- process_name\n         3,                                      -- schedule_type (interval)\n         NULL,                                   -- schedule_time\n         '23:50:00',                             -- schedule_interval (roughly every 24 hours)\n         't',                                    -- exclusive\n         't'                                     -- enabled\n       );\n\n\n-- Statistics collection\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '2176eb68-7303-11e7-8cf7-a6006ad3dba0', -- id\n                'stats collection',                     -- schedule_name\n                'stats collector',                      -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:15',                             -- schedule_interval\n                't',                                   -- exclusive\n                't'                                    -- enabled\n              );\n\n-- Check Updates\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '852cd8e4-3c29-440b-89ca-2c7691b0450d', -- id\n                'update checker',                       -- schedule_name\n                'update checker',               
        -- process_name\n                2,                                      -- schedule_type (timed)\n                '00:05:00',                             -- schedule_time\n                '00:00:00',                             -- schedule_interval\n                't',                                   -- exclusive\n                't'                                    -- enabled\n              );\n\n\n-- Check for expired certificates\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '2176eb68-7303-11e7-8cf7-a6107ad3db21', -- id\n                'certificate checker',                  -- schedule_name\n                'certificate checker',                  -- process_name\n                2,                                      -- schedule_type (timed)\n                '00:05:00',                             -- schedule_time\n                '12:00:00',                             -- schedule_interval\n                't',                                   -- exclusive\n                't'                                    -- enabled\n              );\n\n-- Storage Tasks\n--\n\n-- Execute a Backup every 1 hour\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'd1631422-9ec6-11e7-abc4-cec278b6b50a', -- id\n                'backup hourly',                        -- schedule_name\n                'backup',                               -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '01:00:00',                             -- schedule_interval\n                't',                                   -- exclusive\n                'f'                
                   -- disabled\n              );\n\n-- On demand Backup\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'fac8dae6-d8d1-11e7-9296-cec278b6b50a', -- id\n                'backup on demand',                     -- schedule_name\n                'backup',                               -- process_name\n                4,                                      -- schedule_type (manual)\n                NULL,                                   -- schedule_time\n                '00:00:00',                             -- schedule_interval\n                't',                                   -- exclusive\n                't'                                    -- enabled\n              );\n\n-- On demand Restore\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '8d4d3ca0-de80-11e7-80c1-9a214cf093ae', -- id\n                'restore on demand',                    -- schedule_name\n                'restore',                              -- process_name\n                4,                                      -- schedule_type (manual)\n                NULL,                                   -- schedule_time\n                '00:00:00',                             -- schedule_interval\n                't',                                   -- exclusive\n                't'                                    -- enabled\n              );\n\n\n-- The Schema Service table used to hold information about extension schemas\nCREATE TABLE fledge.service_schema (\n             name          character varying(255)        NOT NULL,\n             service       character varying(255)        NOT NULL,\n             version       integer                       NOT NULL,\n             definition    JSON);\n\n-- 
Control Source\nDELETE FROM fledge.control_source;\nINSERT INTO fledge.control_source ( name, description )\n     VALUES ('Any', 'Any source.'),\n            ('Service', 'A named service that is the source of the control pipeline.'),\n            ('API', 'The control pipeline source is the REST API.'),\n            ('Notification', 'The control pipeline originated from a notification.'),\n            ('Schedule', 'The control request was triggered by a schedule.'),\n            ('Script', 'The control request has come from the named script.');\n\n-- Control Destination\nDELETE FROM fledge.control_destination;\nINSERT INTO fledge.control_destination ( name, description )\n     VALUES ('Any', 'Any destination.'),\n            ('Service', 'The name of the service that is being controlled.'),\n            ('Asset', 'The name of the asset that is being controlled.'),\n            ('Script', 'The name of the script that will be executed.'),\n            ('Broadcast', 'No name is applied and the pipeline will be considered for any control writes or operations to broadcast destinations.');\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/init_readings.sql",
    "content": "----------------------------------------------------------------------\n-- Copyright (c) 2020 OSIsoft, LLC\n--\n-- Licensed under the Apache License, Version 2.0 (the \"License\");\n-- you may not use this file except in compliance with the License.\n-- You may obtain a copy of the License at\n--\n--     http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing, software\n-- distributed under the License is distributed on an \"AS IS\" BASIS,\n-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-- See the License for the specific language governing permissions and\n-- limitations under the License.\n----------------------------------------------------------------------\n\n--\n-- init_readings.sql\n--\n-- SQLite script to create the Fledge persistent Layer\n--\n\n-- NOTE:\n--\n-- This schema has to be used with Sqlite3 JSON1 extension\n--\n-- This script must be launched with sqlite3 command line tool:\n--  sqlite3 /path/readings.db\n--   > ATTACH DATABASE '/path/readings_1.db' AS 'readings_1'\n--   > .read init_readings.sql\n--   > .quit\n\n----------------------------------------------------------------------\n-- DDL CONVENTIONS\n--\n-- Tables:\n-- * Names are in plural, terms are separated by _\n-- * Columns are, when possible, not null and have a default value.\n--\n-- Columns:\n-- id      : It is commonly the PK of the table, a smallint, integer or bigint.\n-- xxx_id  : It usually refers to a FK, where \"xxx\" is name of the table.\n-- code    : Usually an AK, based on fixed length characters.\n-- ts      : The timestamp with microsec precision and tz. 
It is updated at\n--           every change.\n\n----------------------------------------------------------------------\n-- SCHEMA CREATION\n----------------------------------------------------------------------\n\n--\n-- Stores in which database/readings table the specific asset_code is stored\n--\nCREATE TABLE readings_1.asset_reading_catalogue (\n    table_id     INTEGER               NOT NULL,\n    db_id        INTEGER               NOT NULL,\n    asset_code   character varying(50) NOT NULL\n);\n\n--\n-- Store information about the multi database/readings handling\n--\nCREATE TABLE readings_1.configuration_readings (\n    global_id         INTEGER,                                                  -- Stores the last global Id used +1\n                                                                                -- Set to -1 when Fledge starts\n                                                                                -- Updated to the proper value when Fledge stops\n    db_id_Last        INTEGER,                                                  -- Latest database available\n\n\tn_readings_per_db INTEGER,                                                  -- Number of readings tables per database\n\tn_db_preallocate  INTEGER                                                   -- Number of databases to allocate in advance\n);\n\n-- Readings table\n-- This table contains the readings for assets.\n-- An asset can be a south service with multiple sensors, a single sensor,\n-- software or anything else that generates data that is sent to Fledge\nCREATE TABLE readings_1.readings_1_1 (\n    id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n    reading    JSON                        NOT NULL DEFAULT '{}',            -- The json object received\n    user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),      -- UTC time\n    ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))       -- UTC time\n);\n\nCREATE INDEX 
readings_1.readings_1_1_ix3\n    ON readings_1_1 (user_ts desc);\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/schema_update.sh",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n__author__=\"Massimiliano Pinto\"\n__version__=\"1.0\"\n\nFLEDGE_DB_VERSION=$1\nNEW_VERSION=$2\nexport SQLITE_SQL=$3\nexport sql_file=\"\"\n\nPLUGIN_NAME=\"sqlite\"\n\necho \"$@\" | grep -q -- --verbose && VERBOSE=\"Y\"\n\n# Include logging\n. $FLEDGE_ROOT/scripts/common/write_log.sh\n\n# Logger wrapper\nschema_update_log() {\n    write_log \"Storage\" \"scripts.plugins.storage.${PLUGIN_NAME}.schema_update\" \"$1\" \"$2\" \"$3\" \"$4\"\n}\n\n# Parameters passed by the caller\nif [ ! \"$1\" ]; then\n   schema_update_log \"err\" \"Error: missing required parameters for upgrade/downgrade. 
Fledge cannot start.\" \"all\" \"pretty\"\n   exit 1\nfi\n\n# Same version check: do nothing\nif [ \"${FLEDGE_DB_VERSION}\" == \"${NEW_VERSION}\" ]; then\n    schema_update_log \"info\" \"Fledge DB schema is up to date to version ${FLEDGE_DB_VERSION}\" \"logonly\" \"pretty\"\n    return  0\nfi\n\n# Perform DB Upgrade\ndb_upgrade()\n{\n    UPDATE_SCRIPTS_DIR=\"$FLEDGE_ROOT/scripts/plugins/storage/${PLUGIN_NAME}/upgrade\"\n    # Start from next schema revision\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} + 1`\n    while [ \"${CHECK_VER}\" -le ${NEW_VERSION} ]\n    do\n        UPGRADE_SCRIPT=\"${UPDATE_SCRIPTS_DIR}/${CHECK_VER}.sql\"\n        if [ ! -e \"${UPGRADE_SCRIPT}\" ]; then\n            schema_update_log \"err\" \"Error in schema Upgrade: cannot find file ${UPGRADE_SCRIPT} \"\\\n\"required for [${FLEDGE_DB_VERSION}] to [${NEW_VERSION}] upgrade. Exiting\" \"all\" \"pretty\"\n            return 1\n        fi\n        CHECK_VER=`expr $CHECK_VER + 1`\n    done\n\n    START_UPGRADE=\"\"\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} + 1`\n    # sort in ascending order\n    for sql_file in `ls -1 ${UPDATE_SCRIPTS_DIR}/*.sql | sort -V`\n        do\n            # Get start_ver from filename START_VER-to-END_VER.sql\n            START_VER=`echo $(basename -s '.sql' $sql_file)`\n\n            # Skip current file ?\n            # Logic is: if sql_file name has START_VER != FLEDGE_DB_VERSION skip it\n            # else mark the START_UPGRADE\n            if [ ! 
\"${START_UPGRADE}\" ] && [ \"${START_VER}\" != \"${CHECK_VER}\" ]; then\n                if [ \"${VERBOSE}\" ]; then\n                    schema_update_log \"info\" \"Skipping upgrade $(basename ${sql_file}) \"\\\n\"for Fledge upgrade from ${FLEDGE_DB_VERSION} to ${NEW_VERSION}\" \"logonly\" \"pretty\"\n                fi\n\n                # Get next file in the list\n                continue\n            else\n                START_UPGRADE=\"Y\"\n            fi\n\n            # Perform Upgrade\n            if [ \"${START_UPGRADE}\" ]; then\n\n                # Evaluates if a shell script is available, in case it is executed instead of the .sql file\n                file_name=$(basename ${sql_file})\n                file_name_shell=${file_name%.*}.sh\n                file_name_path=\"${UPDATE_SCRIPTS_DIR}/${file_name_shell}\"\n\n                if [ -f \"${file_name_path}\" ]; then\n\n                    schema_update_log \"debug\" \"upgrade shell calling [${file_name_path}]\" \"logonly\" \"pretty\"\n                    ${file_name_path}\n\n                    RET_CODE=$?\n                    if [ \"${RET_CODE}\" -ne 0 ]; then\n                        schema_update_log \"err\" \"Failure in upgrade, executing ${file_name_path}. 
Exiting\" \"all\" \"pretty\"\n                        return 1\n                    fi\n                else\n                    schema_update_log \"debug\" \"upgrade sql calling [${sql_file}]\" \"logonly\" \"pretty\"\n\n                    # Prepare command string for error reporting\n                    SQL_COMMAND=\"${SQLITE_SQL} '${DEFAULT_SQLITE_DB_FILE}' \\\"ATTACH DATABASE \"\\\n\"'${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; .read '${sql_file}' .quit\\\"\"\n\n                    if [ \"${VERBOSE}\" ]; then\n                        schema_update_log \"info\" \"Applying upgrade $(basename ${sql_file}) ...\" \"logonly\" \"pretty\"\n                        schema_update_log \"info\" \"Calling [${SQL_COMMAND}]\" \"logonly\" \"pretty\"\n                    fi\n\n                    # Call the DB script\n                    COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'              AS 'fledge';\n.read '${sql_file}'\n.quit\nEOF`\n                    RET_CODE=$?\n\n                    if [ \"${RET_CODE}\" -ne 0 ]; then\n                        schema_update_log \"err\" \"Failure in upgrade command [${SQL_COMMAND}]: ${COMMAND_OUTPUT}. Exiting\" \"all\" \"pretty\"\n                        return 1\n                    fi\n\n                fi\n\n                # Update the DB version\n                UPDATE_VER=`basename -s .sql ${sql_file}`\n                UPDATE_COMMAND=\"${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" \\\"ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; UPDATE fledge.version SET id = '${UPDATE_VER}';\\\" 2>&1\"\n                UPDATE_OUTPUT=`eval \"${UPDATE_COMMAND}\"`\n                RET_CODE=$?\n                if [ \"${RET_CODE}\" -ne 0 ]; then\n                    schema_update_log \"err\" \"Failure in upgrade command [${UPDATE_COMMAND}]: ${UPDATE_OUTPUT}. 
Exiting\" \"all\" \"pretty\"\n                    return 1\n                fi\n\n                # If \"ver\" in current filename is NEW_VERSION we are done\n                if [ \"${START_VER}\" == \"${NEW_VERSION}\" ]; then\n                    if [ \"${VERBOSE}\" ]; then\n                        schema_update_log \"info\" \"Found last upgrade file $(basename ${sql_file}) for \"\\\n\"${FLEDGE_DB_VERSION} to ${NEW_VERSION} version upgrade\" \"logonly\" \"pretty\"\n                    fi\n                    # Report success\n                    schema_update_log \"info\" \"Fledge DB schema has been upgraded to version [${NEW_VERSION}]\" \"all\" \"pretty\"\n                    return 0\n                fi\n            fi\n        done\n        # Report error\n        if [ \"${START_UPGRADE}\" ]; then\n            schema_update_log \"err\" \"Error: the Fledge DB schema has not been upgraded \"\\\n\"to version [${NEW_VERSION}], last sql file processed is [${sql_file}]\" \"all\" \"pretty\"\n            return 1\n        fi\n}\n\n# Perform DB Downgrade\ndb_downgrade()\n{\n    DOWNGRADE_SCRIPTS_DIR=\"$FLEDGE_ROOT/scripts/plugins/storage/${PLUGIN_NAME}/downgrade\"\n    # Start from next schema revision\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} - 1`\n    while [ \"${CHECK_VER}\" -ge ${NEW_VERSION} ]\n    do\n        DOWNGRADE_SCRIPT=\"${DOWNGRADE_SCRIPTS_DIR}/${CHECK_VER}.sql\"\n        if [ ! -e \"${DOWNGRADE_SCRIPT}\" ]; then\n            schema_update_log \"err\" \"Error in schema Downgrade: cannot find file ${DOWNGRADE_SCRIPT} \"\\\n\"required for [${FLEDGE_DB_VERSION}] to [${NEW_VERSION}] downgrade. 
Exiting\" \"all\" \"pretty\"\n            return 1\n        fi\n        CHECK_VER=`expr $CHECK_VER - 1`\n    done\n\n    START_DOWNGRADE=\"\"\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} - 1`\n    # sort in descending order\n    for sql_file in `ls -1 ${DOWNGRADE_SCRIPTS_DIR}/*.sql | sort -rV`\n        do\n            # Get START_VER from the filename VER.sql\n            START_VER=$(basename -s '.sql' ${sql_file})\n\n            # Skip current file?\n            # Logic is: if the file's START_VER != CHECK_VER, skip it\n            # else mark START_DOWNGRADE\n            if [ ! \"${START_DOWNGRADE}\" ] && [ \"${START_VER}\" != \"${CHECK_VER}\" ]; then\n                if [ \"${VERBOSE}\" ]; then\n                    schema_update_log \"info\" \"Skipping downgrade $(basename ${sql_file}) \"\\\n\"for Fledge downgrade from ${FLEDGE_DB_VERSION} to ${NEW_VERSION}\" \"logonly\" \"pretty\"\n                fi\n\n                # Get next file in the list\n                continue\n            else\n                START_DOWNGRADE=\"Y\"\n            fi\n\n            # Perform Downgrade\n            if [ \"${START_DOWNGRADE}\" ]; then\n\n                # If a shell script with the same base name exists, execute it instead of the .sql file\n                file_name=$(basename ${sql_file})\n                file_name_shell=${file_name%.*}.sh\n                file_name_path=\"${DOWNGRADE_SCRIPTS_DIR}/${file_name_shell}\"\n\n                if [ -f \"${file_name_path}\" ]; then\n\n                    schema_update_log \"debug\" \"downgrade shell calling [${file_name_path}]\" \"logonly\" \"pretty\"\n\n                    ${file_name_path}\n\n                    RET_CODE=$?\n                    if [ \"${RET_CODE}\" -ne 0 ]; then\n                        schema_update_log \"err\" \"Failure in downgrade while executing ${file_name_path}. 
Exiting\" \"all\" \"pretty\"\n                        return 1\n                    fi\n                else\n                    schema_update_log \"debug\" \"downgrade sql calling [${sql_file}]\" \"logonly\" \"pretty\"\n\n                    # Prepare command string for message reporting\n                    SQL_COMMAND=\"${SQLITE_SQL} '${DEFAULT_SQLITE_DB_FILE}' \\\"ATTACH DATABASE \"\\\n\"'${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; .read '${sql_file}' .quit\\\"\"\n                    if [ \"${VERBOSE}\" ]; then\n                        schema_update_log \"info\" \"Applying downgrade $(basename ${sql_file}) ...\" \"logonly\" \"pretty\"\n                        schema_update_log \"info\" \"Calling [${SQL_COMMAND}]\" \"logonly\" \"pretty\"\n                    fi\n\n                    # Call the DB script\n                    COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge';\n.read '${sql_file}'\n.quit\nEOF`\n                    RET_CODE=$?\n                    if [ \"${RET_CODE}\" -ne 0 ]; then\n                        schema_update_log \"err\" \"Failure in downgrade command [${SQL_COMMAND}]: ${COMMAND_OUTPUT}. Exiting\" \"all\" \"pretty\"\n                        return 1\n                    fi\n\n                fi\n\n                # Update DB version\n                UPDATE_COMMAND=\"${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" \\\"ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; UPDATE fledge.version SET id = '${START_VER}';\\\" 2>&1\"\n                UPDATE_OUTPUT=`eval \"${UPDATE_COMMAND}\"`\n                RET_CODE=$?\n                if [ \"${RET_CODE}\" -ne 0 ]; then\n                    schema_update_log \"err\" \"Failure in downgrade command [${UPDATE_COMMAND}]: ${UPDATE_OUTPUT}. 
Exiting\" \"all\" \"pretty\"\n                    return 1\n                fi\n\n                # If \"ver\" in current filename is NEW_VERSION we are done\n                if [ \"${START_VER}\" == \"${NEW_VERSION}\" ]; then\n                    if [ \"${VERBOSE}\" ]; then\n                        schema_update_log \"info\" \"Found last downgrade file $(basename ${sql_file}) for \"\\\n\"${FLEDGE_DB_VERSION} to ${NEW_VERSION} version downgrade\" \"logonly\" \"pretty\"\n                    fi\n                    # Report success\n                    schema_update_log \"info\" \"Fledge DB schema has been downgraded to version [${NEW_VERSION}]\" \"all\" \"pretty\"\n                    return 0\n                fi\n            fi\n        done\n        # Report error\n        if [ \"${START_DOWNGRADE}\" ]; then\n            schema_update_log \"err\" \"Error: the Fledge DB schema has not been downgraded \"\\\n\"to version [${NEW_VERSION}], last sql file processed is [${sql_file}]\" \"all\" \"pretty\"\n            return 1\n        fi\n}\n\n# Check whether we need to Upgrade or Downgrade\nCHECK_OPERATION=`printf '%s\\n' \"${NEW_VERSION}\" \"${FLEDGE_DB_VERSION}\" | sort -V | head -n 1`\nif [ \"${CHECK_OPERATION}\" == \"${NEW_VERSION}\" ]; then\n    SCHEMA_OPT=\"DOWNGRADE\"\nelse\n    SCHEMA_OPT=\"UPGRADE\"\nfi\n\n# Call the schema operation\nif [ \"${SCHEMA_OPT}\" == \"UPGRADE\" ]; 
then\n    db_upgrade || exit 1\nelse\n    db_downgrade || exit 1\nfi\n\nexit 0"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/10.sql",
    "content": "CREATE INDEX readings_ix2\n    ON readings (asset_code);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/11.sql",
    "content": "DELETE FROM fledge.statistics WHERE key IN (\n    'NORTH_READINGS_TO_PI',\n    'NORTH_STATISTICS_TO_PI',\n    'NORTH_READINGS_TO_HTTP',\n    'North Readings to PI',\n    'North Statistics to PI',\n    'North Readings to OCS'\n    ) and value = 0;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/12.sql",
    "content": "DROP TABLE IF EXISTS fledge.destinations;\nDROP INDEX IF EXISTS fledge.fki_streams_fk1;\n\n-- Drops destination_id field from the table\nBEGIN TRANSACTION;\nDROP TABLE IF EXISTS fledge.streams_old;\nALTER TABLE fledge.streams RENAME TO streams_old;\n\nCREATE TABLE fledge.streams (\n    id            INTEGER                      PRIMARY KEY AUTOINCREMENT,         -- Sequence ID\n    description    character varying(255)      NOT NULL DEFAULT '',               -- A brief description of the stream entry\n    properties     JSON                        NOT NULL DEFAULT '{}',             -- A generic set of properties\n    object_stream  JSON                        NOT NULL DEFAULT '{}',             -- Definition of what must be streamed\n    object_block   JSON                        NOT NULL DEFAULT '{}',             -- Definition of how the stream must be organised\n    object_filter  JSON                        NOT NULL DEFAULT '{}',             -- Any filter involved in selecting the data to stream\n    active_window  JSON                        NOT NULL DEFAULT '{}',             -- The window of operations\n    active         boolean                     NOT NULL DEFAULT 't',              -- When false, all data to this stream stop and are inactive\n    last_object    bigint                      NOT NULL DEFAULT 0,                -- The ID of the last object streamed (asset or reading, depending on the object_stream)\n    ts             DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime'))); -- Creation or last update\n\nINSERT INTO fledge.streams\n        SELECT\n            id,\n            description,\n            properties,\n            object_stream,\n            object_block,\n            object_filter,\n            active_window,\n            active,\n            last_object,\n            ts\n        FROM fledge.streams_old;\n\nDROP TABLE fledge.streams_old;\nCOMMIT;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/13.sql",
    "content": "CREATE INDEX statistics_history_ix3\n    ON statistics_history (history_ts);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/14.sql",
    "content": "-- Use plugin name pi_server instead of former omf\nUPDATE fledge.configuration SET value = json_set(value, '$.plugin.value', 'pi_server') WHERE json_extract(value, '$.plugin.value') = 'omf';\nUPDATE fledge.configuration SET value = json_set(value, '$.plugin.default', 'pi_server') WHERE json_extract(value, '$.plugin.default') = 'omf';\n\n-- Insert PURGE_READ under Utilities parent category\nINSERT INTO fledge.category_children SELECT 'Utilities', 'PURGE_READ' WHERE NOT EXISTS(SELECT 1 FROM fledge.category_children WHERE parent = 'Utilities' AND child = 'PURGE_READ');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/15.sql",
    "content": "CREATE INDEX IF NOT EXISTS log_ix2 ON log(ts);\nCREATE INDEX IF NOT EXISTS tasks_ix1 ON tasks(process_name, start_time);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/16.sql",
    "content": "ALTER TABLE fledge.configuration ADD COLUMN display_name character varying(255);\nUPDATE fledge.configuration SET display_name = key;"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/17.sql",
    "content": "-- Create plugin_data table\n-- Persist plugin data in the storage\nCREATE TABLE IF NOT EXISTS fledge.plugin_data (\n        key     character varying(255)    NOT NULL,\n        data    JSON                      NOT NULL DEFAULT '{}',\n        CONSTRAINT plugin_data_pkey PRIMARY KEY (key) );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/18.sql",
    "content": "-- Such elaborate steps are needed because\n-- From: http://www.sqlite.org/faq.html:\n--    (11) How do I add or delete columns from an existing table in SQLite.\n--    SQLite has limited ALTER TABLE support that you can use to add a column to the end of a table or\n--    to change the name of a table. If you want to make more complex changes in the structure of a\n--    table, you will have to recreate the table. You can save existing data to a temporary table, drop\n--    the old table, create the new table, then copy the data back in from the temporary table.\n\nCREATE TABLE fledge.tasks_temporary (\n             id           uuid                        NOT NULL,                          -- PK\n             schedule_name character varying(255),                                       -- Name of the task\n             process_name character varying(255)      NOT NULL,                          -- Name of the task's process\n             state        smallint                    NOT NULL,                          -- 1-Running, 2-Complete, 3-Cancelled, 4-Interrupted\n             start_time   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')), -- The date and time the task started\n             end_time     DATETIME,                                                      -- The date and time the task ended\n             reason       character varying(255),                                        -- The reason why the task ended\n             pid          integer                     NOT NULL,                          -- Linux process id\n             exit_code    integer,                                                       -- Process exit status code (negative means exited via signal)\n  CONSTRAINT tasks_temporary_pkey PRIMARY KEY ( id ),\n  CONSTRAINT tasks_temporary_fk1 FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\nINSERT 
INTO fledge.tasks_temporary (id, process_name, state, start_time, end_time, reason, pid, exit_code) SELECT id, process_name, state, start_time, end_time, reason, pid, exit_code FROM fledge.tasks;\nDROP TABLE fledge.tasks;\n\nCREATE TABLE fledge.tasks (\n             id           uuid                        NOT NULL,                          -- PK\n             schedule_name character varying(255),                                       -- Name of the task\n             process_name character varying(255)      NOT NULL,                          -- Name of the task's process\n             state        smallint                    NOT NULL,                          -- 1-Running, 2-Complete, 3-Cancelled, 4-Interrupted\n             start_time   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')), -- The date and time the task started\n             end_time     DATETIME,                                                      -- The date and time the task ended\n             reason       character varying(255),                                        -- The reason why the task ended\n             pid          integer                     NOT NULL,                          -- Linux process id\n             exit_code    integer,                                                       -- Process exit status code (negative means exited via signal)\n  CONSTRAINT tasks_pkey PRIMARY KEY ( id ),\n  CONSTRAINT tasks_fk1 FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\nINSERT INTO fledge.tasks SELECT id, schedule_name, process_name, state, start_time, end_time, reason, pid, exit_code FROM fledge.tasks_temporary;\nDROP TABLE fledge.tasks_temporary;\n\nDROP INDEX IF EXISTS tasks_ix1;\nCREATE INDEX tasks_ix1\n    ON tasks(schedule_name, start_time);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/19.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'NTFDL', 'Notification Deleted' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/2.sql",
    "content": "drop index IF EXISTS fki_readings_fk1;\ncreate index fki_readings_fk1 on readings(asset_code, user_ts desc);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/20.sql",
    "content": "-- Create filters table\nCREATE TABLE fledge.filters (\n             name        character varying(255)        NOT NULL,\n             plugin      character varying(255)        NOT NULL,\n       CONSTRAINT filter_pkey PRIMARY KEY( name ) );\n\n-- Create filter_users table\nCREATE TABLE fledge.filter_users (\n             name        character varying(255)        NOT NULL,\n             user        character varying(255)        NOT NULL);\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/21.sql",
    "content": "-- Notification log codes\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'NTFAD', 'Notification Added' );\n\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'NTFSN', 'Notification Sent' );\n\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'NTFST', 'Notification Server Startup' );\n\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'NTFSD', 'Notification Server Shutdown' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/22.sql",
    "content": "create index readings_ix3 on readings(user_ts);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/23.sql",
"content": "CREATE TABLE new_readings (\n    id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n    asset_code character varying(50)       NOT NULL,                         -- The provided asset code.  Not necessarily located in the\n                                                                             -- assets table.\n    read_key   uuid                        UNIQUE,                           -- An optional unique key used to avoid double-loading.\n    reading    JSON                        NOT NULL DEFAULT '{}',            -- The json object received\n    user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),      -- UTC time\n    ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))       -- UTC time\n);\n\ninsert into new_readings select * from readings;\n\ndrop index fki_readings_fk1;\ndrop index readings_ix1;\ndrop index readings_ix2;\ndrop index readings_ix3;\ndrop table readings;\n\nalter table new_readings rename to readings;\n\nCREATE INDEX fki_readings_fk1\n    ON readings (asset_code, user_ts desc);\n\nCREATE INDEX readings_ix1\n    ON readings (read_key);\n\nCREATE INDEX readings_ix2\n    ON readings (asset_code);\n\nCREATE INDEX readings_ix3\n    ON readings (user_ts);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/24.sql",
    "content": "-- List of tasks\nCREATE TABLE fledge.new_tasks (\n             id           uuid                        NOT NULL,                          -- PK\n             schedule_name character varying(255),                                       -- Name of the task\n             process_name character varying(255)      NOT NULL,                          -- Name of the task's process\n             state        smallint                    NOT NULL,                          -- 1-Running, 2-Complete, 3-Cancelled, 4-Interrupted\n             start_time   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')), \t -- The UTC date and time the task started\n             end_time     DATETIME,                                                      -- The date and time the task ended\n             reason       character varying(255),                                        -- The reason why the task ended\n             pid          integer                     NOT NULL,                          -- Linux process id\n             exit_code    integer,                                                       -- Process exit status code (negative means exited via signal)\n  CONSTRAINT tasks_pkey PRIMARY KEY ( id ),\n  CONSTRAINT tasks_fk1 FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\n\nDROP INDEX tasks_ix1;\n\nDROP TABLE tasks;\n\nALTER TABLE new_tasks RENAME TO tasks;\n\n-- Create index\nCREATE INDEX tasks_ix1\n    ON tasks(schedule_name, start_time);\n\n\n-- General log table for Fledge.\nCREATE TABLE fledge.new_log (\n       id    INTEGER                PRIMARY KEY AUTOINCREMENT,\n       code  CHARACTER(5)           NOT NULL,                  -- The process that logged the action\n       level SMALLINT               NOT NULL,                  -- 0 Success - 1 Failure - 2 Warning - 4 Info\n       log   JSON                   NOT NULL DEFAULT '{}',     -- Generic log 
structure\n       ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')), -- UTC time\n       CONSTRAINT log_fk1 FOREIGN KEY (code)\n       REFERENCES log_codes (code) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nDROP INDEX log_ix1;\n\nDROP INDEX log_ix2;\n\nDROP TABLE log;\n\nALTER TABLE new_log RENAME TO log;\n\n-- Index: log_ix1 - For queries by code\nCREATE INDEX log_ix1 ON log(code, ts, level);\n\n-- Index to make GUI response faster\nCREATE INDEX log_ix2 ON log(ts);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/25.sql",
    "content": "-- Fledge expects to have fledge.readings.user_ts in a precise fixed format\n-- the upgrade updates the contents of the fledge.readings table\n-- to the handled format\n\n-- Backup readings data\nDROP TABLE IF EXISTS fledge.readings_backup;\n\nCREATE TABLE fledge.readings_backup (\n    id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n    asset_code character varying(50)       NOT NULL,                         -- The provided asset code. Not necessarily located in the\n                                                                             -- assets table.\n    read_key   uuid                        UNIQUE,                           -- An optional unique key used to avoid double-loading.\n    reading    JSON                        NOT NULL DEFAULT '{}',            -- The json object received\n    user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),      -- UTC time\n    ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))       -- UTC time\n);\n\nINSERT INTO fledge.readings_backup\n          (id, asset_code, read_key, reading, user_ts, ts)\n    SELECT id, asset_code, read_key, reading, user_ts, ts FROM fledge.readings;\n\n-- Apply user_ts changes\nUPDATE fledge.readings\n  SET user_ts =\n    CASE instr(user_ts,'.') -- Checks for the presence of sub-seconds\n        WHEN 0  THEN\n            user_ts || \".000000+00:00\"\n        ELSE\n            -- Handles microseconds\n            substr(user_ts,1, instr(user_ts,'.') -1 ) ||\n            CASE -- Check for the presence of the timezone\n                max (\n                    instr(substr(user_ts,instr(user_ts,'.')+1,99),'+') -1,\n                    instr(substr(user_ts,instr(user_ts,'.')+1,99),'-') -1\n                )\n                WHEN -1 THEN\n                    -- No timezone\n                    substr (\n                        substr(user_ts,instr(user_ts,'.'),99) || \"000000\", 1, 7)\n                    || \"+00:00\"\n              
  ELSE\n                    -- yes timezone - extract up to the timezone\n                    substr (\n                        \".\" || substr(user_ts, instr(user_ts, '.') + 1,\n                                    max (\n                                          instr(substr(user_ts,instr(user_ts,'.')+1,99),'+') -1,\n                                          instr(substr(user_ts,instr(user_ts,'.')+1,99),'-') -1\n                                      )\n                        )\n                        || \"000000\", 1, 7)\n                    -- Handles timezone\n                    ||  substr( substr(user_ts, instr(user_ts, '.') + 1),\n                                max (\n                                   instr(substr(user_ts,instr(user_ts,'.')+1,99),'+'),\n                                   instr(substr(user_ts,instr(user_ts,'.')+1,99),'-')\n                                )\n                            ,99)\n        END\n    END;\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/26.sql",
    "content": "INSERT OR IGNORE INTO scheduled_processes ( name, script ) VALUES ( 'south_c',   '[\"services/south_c\"]' );\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/27.sql",
    "content": "-- Remove index on fledge.readings read_key\nDROP INDEX IF EXISTS fledge.readings_ix1;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/28.sql",
    "content": "-- Add audit log key NTFCL\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'NTFCL', 'Notification Cleared' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/29.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'PKGIN', 'Package installation' ), ( 'PKGUP', 'Package updated' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/3.sql",
    "content": "--- Start :  updates omf_created_objects avoiding the problem of the constraint\nCREATE TABLE fledge.tmp_omf_created_objects (\n    configuration_key character varying(255)    NOT NULL,\n    type_id           integer                   NOT NULL,\n    asset_code        character varying(50)     NOT NULL,\n    CONSTRAINT tmp_omf_created_objects_pkey PRIMARY KEY (configuration_key,type_id, asset_code)\n  );\n\nINSERT INTO fledge.tmp_omf_created_objects\nSELECT * FROM fledge.omf_created_objects;\n\nUPDATE fledge.tmp_omf_created_objects  SET configuration_key = 'North Readings to PI'   WHERE configuration_key = 'SEND_PR_1';\nUPDATE fledge.tmp_omf_created_objects  SET configuration_key = 'North Statistics to PI' WHERE configuration_key = 'SEND_PR_2';\nUPDATE fledge.tmp_omf_created_objects  SET configuration_key = 'North Readings to OCS'  WHERE configuration_key = 'SEND_PR_4';\n\nDELETE FROM fledge.omf_created_objects;\n\nINSERT INTO fledge.omf_created_objects\nSELECT * FROM fledge.tmp_omf_created_objects;\n\nDROP TABLE fledge.tmp_omf_created_objects;\n--- End\n\nUPDATE fledge.configuration SET key = 'North Readings to PI' WHERE key = 'SEND_PR_1';\nUPDATE fledge.configuration SET key = 'North Statistics to PI' WHERE key = 'SEND_PR_2';\nUPDATE fledge.configuration SET key = 'North Readings to OCS' WHERE key = 'SEND_PR_4';\n\n-- Insert entries for DHT11 C++ south plugin\n\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'dht11',\n              'DHT11 South C Plugin',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"dht11\", \"default\" : \"dht11\", \"description\" : \"Module that DHT11 South Plugin will load\" } } '\n            );\n\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'dht11',   '[\"services/south_c\"]' );\n\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n      
 VALUES ( '6b25f4d9-c7f3-4fc8-bd4a-4cf79f7055ca', -- id\n                'dht11',                                -- schedule_name\n                'dht11',                                -- process_name\n                1,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '01:00:00',                             -- schedule_interval (every hour)\n                't',                                    -- exclusive\n                'f'                                     -- enabled\n              );\n\n-- North_Readings_to_PI - OMF Translator for readings - C Code\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North_Readings_to_PI',\n              'OMF North Plugin - C Code',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"omf\", \"default\" : \"omf\", \"description\" : \"Module that OMF North Plugin will load\" } } '\n            );\n\n-- North_Statistics_to_PI - OMF Translator for statistics - C Code\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North_Statistics_to_PI',\n              'OMF North Plugin - C Code',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"omf\", \"default\" : \"omf\", \"description\" : \"Module that OMF North Plugin will load\" } } '\n            );\n\n-- North Tasks - C code\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'North_Readings_to_PI',   '[\"tasks/north_c\"]' );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'North_Statistics_to_PI', '[\"tasks/north_c\"]' );\n\n-- Readings OMF to PI - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '1cdf1ef8-7e02-11e8-adc0-fa7ae01bbebc', -- id\n                'OMF_to_PI_north_C',                    -- 
schedule_name\n                'North_Readings_to_PI',                 -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                't',                                    -- exclusive\n                'f'                                     -- disabled\n              );\n\n-- Statistics OMF to PI - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'f1e3b377-5acb-4bde-93d5-b6a792f76e07', -- id\n                'Stats_OMF_to_PI_north_C',              -- schedule_name\n                'North_Statistics_to_PI',               -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                't',                                    -- exclusive\n                'f'                                     -- disabled\n              );\n-- Statistics\nINSERT INTO fledge.statistics ( key, description, value, previous_value ) VALUES ( 'NORTH_READINGS_TO_PI', 'Statistics sent to historian', 0, 0 );\nINSERT INTO fledge.statistics ( key, description, value, previous_value ) VALUES ( 'NORTH_STATISTICS_TO_PI', 'Statistics sent to historian', 0, 0 );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/30.sql",
    "content": "-- updates \"PIServerEndpoint\" option :\n--\n--  \"discovery\" -> \"Auto Discovery\"\n--  \"piwebapi\"  -> \"PI Web API\"\n--  \"cr\"        -> \"Connector Relay\"\n--\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.options', json_array('Auto Discovery','PI Web API','Connector Relay') )\nWHERE json_extract(value, '$.plugin.value') = 'PI_Server_V2';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.default', 'Connector Relay')\nWHERE json_extract(value, '$.plugin.value') = 'PI_Server_V2';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.value', 'Auto Discovery')\nWHERE json_extract(value, '$.plugin.value') = 'PI_Server_V2' AND\n      json_extract(value, '$.PIServerEndpoint.value') = 'discovery';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.value', 'PI Web API')\nWHERE json_extract(value, '$.plugin.value') = 'PI_Server_V2' AND\n      json_extract(value, '$.PIServerEndpoint.value') = 'piwebapi';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.value', 'Connector Relay')\n    WHERE json_extract(value, '$.plugin.value') = 'PI_Server_V2' AND\n          json_extract(value, '$.PIServerEndpoint.value') = 'cr';"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/31.sql",
    "content": "-- Such elaborate steps are needed because\n-- From: http://www.sqlite.org/faq.html:\n--    (11) How do I add or delete columns from an existing table in SQLite.\n--    SQLite has limited ALTER TABLE support that you can use to add a column to the end of a table or\n--    to change the name of a table. If you want to make more complex changes in the structure of a\n--    table, you will have to recreate the table. You can save existing data to a temporary table, drop\n--    the old table, create the new table, then copy the data back in from the temporary table\n\nDROP TABLE IF EXISTS fledge.tasks_new;\n\nCREATE TABLE fledge.tasks_new (\n             id           uuid                        NOT NULL,                               -- PK\n             schedule_id  uuid                        NOT NULL,                               -- Link between schedule & task table\n             schedule_name character varying(255),                                            -- Name of the task\n             process_name character varying(255)      NOT NULL,                               -- Name of the task's process\n             state        smallint                    NOT NULL,                               -- 1-Running, 2-Complete, 3-Cancelled, 4-Interrupted\n             start_time   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),            -- The date and time the task started UTC\n             end_time     DATETIME,                                                           -- The date and time the task ended\n             reason       character varying(255),                                             -- The reason why the task ended\n             pid          integer                     NOT NULL,                               -- Linux process id\n             exit_code    integer,                                                            -- Process exit status code (negative means exited via signal)\n  CONSTRAINT tasks_new_pkey PRIMARY KEY ( id 
),\n  CONSTRAINT tasks_new_fk1 FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\n\n\nINSERT INTO fledge.tasks_new (id, schedule_id, schedule_name, process_name, state, start_time, end_time, reason, pid, exit_code) SELECT id, '', schedule_name, process_name, state, start_time, end_time, reason, pid, exit_code FROM fledge.tasks;\nDELETE FROM fledge.tasks_new WHERE fledge.tasks_new.schedule_name NOT IN (SELECT schedule_name FROM fledge.schedules);\nUPDATE fledge.tasks_new SET schedule_id = (SELECT id FROM fledge.schedules WHERE fledge.tasks_new.schedule_name = fledge.schedules.schedule_name AND fledge.tasks_new.process_name = fledge.schedules.process_name);\nDROP TABLE IF EXISTS fledge.tasks;\n\nCREATE TABLE fledge.tasks (\n             id           uuid                        NOT NULL,                               -- PK\n             schedule_id  uuid                        NOT NULL,                               -- Link between schedule & task table\n             schedule_name character varying(255),                                            -- Name of the task\n             process_name character varying(255)      NOT NULL,                               -- Name of the task's process\n             state        smallint                    NOT NULL,                               -- 1-Running, 2-Complete, 3-Cancelled, 4-Interrupted\n             start_time   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),            -- The date and time the task started UTC\n             end_time     DATETIME,                                                           -- The date and time the task ended\n             reason       character varying(255),                                             -- The reason why the task ended\n             pid          integer                     NOT NULL,                               -- Linux process id\n             exit_code    
integer,                                                            -- Process exit status code (negative means exited via signal)\n  CONSTRAINT tasks_pkey PRIMARY KEY ( id ),\n  CONSTRAINT tasks_fk1 FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\n\nINSERT INTO fledge.tasks SELECT id, schedule_id, schedule_name, process_name, state, start_time, end_time, reason, pid, exit_code FROM fledge.tasks_new;\nDROP TABLE IF EXISTS fledge.tasks_new;\n\nDROP INDEX IF EXISTS tasks_ix1;\nCREATE INDEX tasks_ix1\n    ON tasks(schedule_name, start_time);"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/32.sql",
    "content": "--\n-- DROP the column: read_key   uuid\n--\n\nCREATE TABLE new_readings(\n    id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n    asset_code character varying(50)       NOT NULL,                         -- The provided asset code. Not necessarily located in the\n                                                                             -- assets TABLE.\n    reading    JSON                        NOT NULL DEFAULT '{}',            -- The json object received\n    user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),      -- UTC time\n    ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))       -- UTC time\n);\n\nINSERT INTO new_readings\n    SELECT\n        id,\n        asset_code,\n        reading,\n        user_ts,\n        ts\n    FROM readings;\n\nDROP INDEX fki_readings_fk1;\nDROP INDEX readings_ix2;\nDROP INDEX readings_ix3;\n\nDROP TABLE readings;\n\nALTER TABLE new_readings rename to readings;\n\nCREATE INDEX fki_readings_fk1\n    ON readings (asset_code, user_ts desc);\n\nCREATE INDEX readings_ix2\n    ON readings (asset_code);\n\nCREATE INDEX readings_ix3\n    ON readings (user_ts);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/33.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'PKGRM', 'Package purged' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/34.sql",
    "content": "\n--- configuration -------------------------------------------------------------------------------------------------------\n\n-- plugin\n--\nUPDATE configuration SET value = json_set(value, '$.plugin.default', 'OMF')\nWHERE json_extract(value, '$.plugin.value') = 'PI_Server_V2';\n\nUPDATE configuration SET value = json_set(value, '$.plugin.value', 'OMF')\nWHERE json_extract(value, '$.plugin.value') = 'PI_Server_V2';\n\n-- PIServerEndpoint\n--\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.options', json_array('PI Web API','Connector Relay','OSIsoft Cloud Services','Edge Data Store') )\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.description', 'Select the endpoint among PI Web API, Connector Relay, OSIsoft Cloud Services or Edge Data Store')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.order', '1')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.displayName', 'Endpoint')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.default', 'Connector Relay')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.value', 'Connector Relay')\nWHERE json_extract(value, '$.plugin.value') = 'OMF' AND json_extract(value, '$.PIServerEndpoint.value') = 'Auto Discovery';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.value', 'Connector Relay')\nWHERE json_extract(value, '$.plugin.value') = 'OMF' AND json_extract(value, '$.PIServerEndpoint.value') = 'cr';\n\nUPDATE configuration SET value = json_set(value, '$.PIServerEndpoint.value', 'PI Web API')\nWHERE json_extract(value, '$.plugin.value') = 'OMF' AND json_extract(value, 
'$.PIServerEndpoint.value') = 'piwebapi';\n\n-- ServerHostname\n-- Note: This is a new config item; its value is extracted from the old URL config item\n--\nUPDATE configuration SET value = json_set(value, '$.ServerHostname.description', 'Hostname of the server running the endpoint either PI Web API or Connector Relay or Edge Data Store')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.ServerHostname.type', 'string')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.ServerHostname.default', 'localhost')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.ServerHostname.order', '2')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.ServerHostname.displayName', 'Server hostname')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.ServerHostname.validity', 'PIServerEndpoint != \"OSIsoft Cloud Services\"')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value =\n    json_set(\n            value,\n            '$.ServerHostname.value',\n            substr(json_extract(value, '$.URL.value'),\n                   instr(json_extract(value, '$.URL.value'), '://') + 3,\n                   instr(REPLACE(json_extract(value, '$.URL.value'), '://', 'xxx'), ':') -\n                   instr(json_extract(value, '$.URL.value'), '://') - 3)\n    )\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- ServerPort\n-- Note: This is a new config item; its value is extracted from the old URL config item\n--\nUPDATE configuration SET value = json_set(value, '$.ServerPort.description', 'Port on which the endpoint either PI Web API or Connector Relay or Edge Data Store is listening, 0 will use the default one')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE 
configuration SET value = json_set(value, '$.ServerPort.type', 'integer')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.ServerPort.default', '0')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.ServerPort.order', '3')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.ServerPort.displayName', 'Server port, 0=use the default')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.ServerPort.validity', 'PIServerEndpoint != \"OSIsoft Cloud Services\"')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value =\n     json_set(\n             value,\n             '$.ServerPort.value',\n             substr(json_extract(value, '$.URL.value'),\n                    instr(REPLACE(json_extract(value, '$.URL.value'), '://', 'xxx'), ':') + 1,\n                    instr(REPLACE(json_extract(value, '$.URL.value'), '://', 'xxx'), '/') -\n                    instr(REPLACE(json_extract(value, '$.URL.value'), '://', 'xxx'), ':') - 1\n                 )\n    )\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- URL\n-- Note: Removed URL config item as it is replaced by ServerHostname & ServerPort\n--\nUPDATE configuration SET value = json_remove(value, '$.URL')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- producerToken\n--\nUPDATE configuration SET value = json_set(value, '$.producerToken.order', '4')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.producerToken.validity', 'PIServerEndpoint == \"Connector Relay\"')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- source\n--\nUPDATE configuration SET value = json_set(value, '$.source.order', '5')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- StaticData\n--\nUPDATE 
configuration SET value = json_set(value, '$.StaticData.order', '6')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- OMFRetrySleepTime\n--\nUPDATE configuration SET value = json_set(value, '$.OMFRetrySleepTime.order', '7')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- OMFMaxRetry\n--\nUPDATE configuration SET value = json_set(value, '$.OMFMaxRetry.order', '8')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- OMFHttpTimeout\n--\nUPDATE configuration SET value = json_set(value, '$.OMFHttpTimeout.order', '9')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- formatInteger\n--\nUPDATE configuration SET value = json_set(value, '$.formatInteger.order', '10')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- formatNumber\n--\nUPDATE configuration SET value = json_set(value, '$.formatNumber.order', '11')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- compression\n--\nUPDATE configuration SET value = json_set(value, '$.compression.order', '12')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- DefaultAFLocation\n-- Note: This is a new config item; its default & value are extracted from the old AFHierarchy1Level config item\n--\nUPDATE configuration SET value = json_set(value, '$.DefaultAFLocation.description', 'Defines the hierarchies tree in Asset Framework in which the assets will be created, each level is separated by /, PI Web API only.')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.DefaultAFLocation.type', 'string')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.DefaultAFLocation.default', json_extract(value, '$.AFHierarchy1Level.default'))\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.DefaultAFLocation.value', json_extract(value, '$.AFHierarchy1Level.value'))\nWHERE json_extract(value, '$.plugin.value') = 
'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.DefaultAFLocation.order', '13')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.DefaultAFLocation.displayName', 'Asset Framework hierarchies tree')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.DefaultAFLocation.validity', 'PIServerEndpoint == \"PI Web API\"')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- AFHierarchy1Level\n-- Note: Removed AFHierarchy1Level config item as it is replaced by the new config item DefaultAFLocation\n--\nUPDATE configuration SET value = json_remove(value, '$.AFHierarchy1Level')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- AFMap\n-- Note: This is a new config item\n--\nUPDATE configuration SET value = json_set(value, '$.AFMap.description', 'Defines a set of rules to address where assets should be placed in the AF hierarchy.')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.AFMap.type', 'JSON')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.AFMap.default', '{}')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.AFMap.value', '{}')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.AFMap.order', '14')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.AFMap.displayName', 'Asset Framework hierarchies rules')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.AFMap.validity', 'PIServerEndpoint == \"PI Web API\"')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- notBlockingErrors\n--\nUPDATE configuration SET value = json_set(value, '$.notBlockingErrors.order', '15')\nWHERE 
json_extract(value, '$.plugin.value') = 'OMF';\n\n-- streamId\n--\nUPDATE configuration SET value = json_set(value, '$.streamId.order', '16')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- PIWebAPIAuthenticationMethod\n--\nUPDATE configuration SET value = json_set(value, '$.PIWebAPIAuthenticationMethod.order', '17')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIWebAPIAuthenticationMethod.validity', 'PIServerEndpoint == \"PI Web API\"')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- PIWebAPIUserId\n--\nUPDATE configuration SET value = json_set(value, '$.PIWebAPIUserId.order', '18')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIWebAPIUserId.validity', 'PIServerEndpoint == \"PI Web API\"')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- PIWebAPIPassword\n--\nUPDATE configuration SET value = json_set(value, '$.PIWebAPIPassword.order', '19')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIWebAPIPassword.validity', 'PIServerEndpoint == \"PI Web API\"')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- PIWebAPIKerberosKeytabFileName\n-- Note: This is a new config item\n--\nUPDATE configuration SET value = json_set(value, '$.PIWebAPIKerberosKeytabFileName.description', 'Keytab file name used for Kerberos authentication in PI Web API.')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIWebAPIKerberosKeytabFileName.type', 'string')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIWebAPIKerberosKeytabFileName.default', 'piwebapi_kerberos_https.keytab')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIWebAPIKerberosKeytabFileName.value', 
'piwebapi_kerberos_https.keytab')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIWebAPIKerberosKeytabFileName.displayName', 'PI Web API Kerberos keytab file')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIWebAPIKerberosKeytabFileName.order', '20')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.PIWebAPIKerberosKeytabFileName.validity', 'PIWebAPIAuthenticationMethod == \"kerberos\"')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n---\n--- OCS configuration\n--- NOTE: All config items are new ones for OCS\n-- OCSNamespace\n--\nUPDATE configuration SET value = json_set(value, '$.OCSNamespace.description', 'Specifies the OCS namespace where the information is stored; it is used for the interaction with the OCS API')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSNamespace.type', 'string')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSNamespace.default', 'name_space')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSNamespace.order', '21')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSNamespace.displayName', 'OCS Namespace')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSNamespace.validity', 'PIServerEndpoint == \"OSIsoft Cloud Services\"')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSNamespace.value', 'name_space')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- OCSTenantId\n--\nUPDATE configuration SET value = json_set(value, '$.OCSTenantId.description', 'Tenant id 
associated to the specific OCS account')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSTenantId.type', 'string')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSTenantId.default', 'ocs_tenant_id')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSTenantId.order', '22')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSTenantId.displayName', 'OCS Tenant ID')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSTenantId.validity', 'PIServerEndpoint == \"OSIsoft Cloud Services\"')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSTenantId.value', 'ocs_tenant_id')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n\n-- OCSClientId\n--\nUPDATE configuration SET value = json_set(value, '$.OCSClientId.description', 'Client id associated to the specific OCS account, it is used to authenticate the source for using the OCS API')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSClientId.type', 'string')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSClientId.default', 'ocs_client_id')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSClientId.order', '23')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSClientId.displayName', 'OCS Client ID')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSClientId.validity', 'PIServerEndpoint == \"OSIsoft Cloud Services\"')\nWHERE json_extract(value, 
'$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSClientId.value', 'ocs_client_id')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n-- OCSClientSecret\n--\nUPDATE configuration SET value = json_set(value, '$.OCSClientSecret.description', 'Client secret associated to the specific OCS account, it is used to authenticate the source for using the OCS API')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSClientSecret.type', 'password')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSClientSecret.default', 'ocs_client_secret')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSClientSecret.order', '24')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSClientSecret.displayName', 'OCS Client Secret')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSClientSecret.validity', 'PIServerEndpoint == \"OSIsoft Cloud Services\"')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\nUPDATE configuration SET value = json_set(value, '$.OCSClientSecret.value', 'ocs_client_secret')\nWHERE json_extract(value, '$.plugin.value') = 'OMF';\n\n\n--- plugin_data -------------------------------------------------------------------------------------------------------\n-- plugin_data\n--\nUPDATE plugin_data SET key = REPLACE(key,'PI_Server_V2','OMF')\nWHERE instr(key, 'PI_Server_V2') > 0;\n\n--- asset_tracker -------------------------------------------------------------------------------------------------------\nUPDATE asset_tracker SET plugin = 'OMF' WHERE plugin in ('PI_Server_V2', 'ocs_V2');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/35.sh",
    "content": "#!/bin/bash\n\ndeclare SQL_COMMAND\ndeclare COMMAND_OUTPUT\n\n# Include logging\n. $FLEDGE_ROOT/scripts/common/write_log.sh\n\n# Logger wrapper\nschema_update_log() {\n\n    if [ \"$1\" != \"debug\" ]; then\n        write_log \"Upgrade\" \"scripts.plugins.storage.${PLUGIN_NAME}.schema_update\" \"$1\" \"$2\" \"$3\" \"$4\"\n    fi\n\n}\n\nschema_update_log \"info\" \"$0 - SQLITE_SQL :$SQLITE_SQL: sql_file :$sql_file: DEFAULT_SQLITE_DB_FILE :$DEFAULT_SQLITE_DB_FILE: DEFAULT_SQLITE_DB_FILE_READINGS :$DEFAULT_SQLITE_DB_FILE_READINGS:\" \"logonly\" \"pretty\"\n\nSQL_COMMAND=\"${SQLITE_SQL} \\\"${DEFAULT_SQLITE_DB_FILE}\\\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'                 AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS_SINGLE}' AS 'readings';\n\n.read '${sql_file}'\n.quit\nEOF\"\n\n\nCOMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'                 AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS_SINGLE}' AS 'readings';\n\n.read '${sql_file}'\n.quit\nEOF`\n\nret_code=$?\n\nif [ \"${ret_code}\" -ne 0 ]; then\n    schema_update_log \"err\" \"Failure in upgrade command [${SQL_COMMAND}] result [${COMMAND_OUTPUT}]. Exiting\" \"all\" \"pretty\"\n    exit 1\nfi"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/35.sql",
    "content": "\n-- Upgrade - copy all the content of the fledge.readings table into readings.readings\n\nCREATE TABLE readings.readings (\n    id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n    asset_code character varying(50)       NOT NULL,                         -- The provided asset code. Not necessarily located in the\n                                                                             -- assets table.\n    reading    JSON                        NOT NULL DEFAULT '{}',            -- The json object received\n    user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),      -- UTC time\n    ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))       -- UTC time\n);\n\nINSERT INTO readings.readings SELECT * FROM fledge.readings;\n\nCREATE INDEX readings.fki_readings_fk1\n    ON readings  (asset_code, user_ts desc);\n\nCREATE INDEX readings.readings_ix2\n    ON readings (asset_code);\n\nCREATE INDEX readings.readings_ix3\n    ON readings (user_ts);\n\n\nDROP TABLE fledge.readings;\n\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/36.sql",
    "content": "-- Scheduled process entries for south, notification, north tasks\n\nINSERT INTO fledge.scheduled_processes SELECT 'south_c', '[\"services/south_c\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'south_c');\nINSERT INTO fledge.scheduled_processes SELECT 'notification_c', '[\"services/notification_c\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'notification_c');\nINSERT INTO fledge.scheduled_processes SELECT 'north_c', '[\"tasks/north_c\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'north_c');\nINSERT INTO fledge.scheduled_processes SELECT 'north', '[\"tasks/north\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'north');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/37.sql",
    "content": "-- Scheduled process entry north microservice\n\nINSERT INTO fledge.scheduled_processes SELECT 'north_C', '[\"services/north_C\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'north_C');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/38.sh",
    "content": "#!/bin/bash\n\n# Include logging\n. $FLEDGE_ROOT/scripts/common/write_log.sh\n\n# Logger wrapper\nschema_update_log() {\n\n    if [ \"$1\" != \"debug\" ]; then\n        write_log \"Upgrade\" \"scripts.plugins.storage.${PLUGIN_NAME}.schema_update\" \"$1\" \"$2\" \"$3\" \"$4\"\n    fi\n\n}\n\ncalculate_table_id() {\n\n    declare _n_readings_allocate=$1\n\n    schema_update_log \"info\" \"calculate_table_id: SQLITE_SQL :$SQLITE_SQL: DEFAULT_SQLITE_DB_FILE :$DEFAULT_SQLITE_DB_FILE: DEFAULT_SQLITE_DB_FILE_READINGS :$DEFAULT_SQLITE_DB_FILE_READINGS:\" \"logonly\" \"pretty\"\n\n    SQL_COMMAND=\"${SQLITE_SQL} \\\"${DEFAULT_SQLITE_DB_FILE}\\\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'              AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}'     AS 'readings_1';\n\nINSERT INTO readings_1.asset_reading_catalogue\nSELECT\n    (tb.table_id - ((db_id - 1) * ${_n_readings_allocate})) AS table_id,\n    db_id,\n    asset_code\nFROM readings_1.asset_reading_catalogue_tmp tb;\n\n.quit\nEOF\"\n\n\n    COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'              AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}'     AS 'readings_1';\n\nINSERT INTO readings_1.asset_reading_catalogue\nSELECT\n    (tb.table_id - ((db_id - 1) * ${_n_readings_allocate})) AS table_id,\n    db_id,\n    asset_code\nFROM readings_1.asset_reading_catalogue_tmp tb;\n\n.quit\nEOF`\n\n    ret_code=$?\n\n    if [ \"${ret_code}\" -ne 0 ]; then\n        schema_update_log \"err\" \"calculate_table_id - Failure in upgrade command [${SQL_COMMAND}] result [${COMMAND_OUTPUT}]. 
Exiting\" \"all\" \"pretty\"\n        exit 1\n    fi\n}\n\n#\n# Updates asset_reading_catalogue_tmp, setting the proper db id in relation to how many tables per db\n# should be managed\n#\ncalculate_db_id() {\n\n    declare _n_readings_allocate=$1\n\n    schema_update_log \"info\" \"calculate_db_id: SQLITE_SQL :$SQLITE_SQL: DEFAULT_SQLITE_DB_FILE :$DEFAULT_SQLITE_DB_FILE: DEFAULT_SQLITE_DB_FILE_READINGS :$DEFAULT_SQLITE_DB_FILE_READINGS:\" \"logonly\" \"pretty\"\n\n    SQL_COMMAND=\"${SQLITE_SQL} \\\"${DEFAULT_SQLITE_DB_FILE}\\\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'              AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}'     AS 'readings_1';\n\nUPDATE readings_1.asset_reading_catalogue_tmp SET db_id=(((table_id - 1) / ${_n_readings_allocate}) +1);\n.quit\nEOF\"\n\n\n    COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'              AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}'     AS 'readings_1';\n\nUPDATE readings_1.asset_reading_catalogue_tmp SET db_id=(((table_id - 1) / ${_n_readings_allocate}) +1);\n\n.quit\nEOF`\n\n    ret_code=$?\n\n    if [ \"${ret_code}\" -ne 0 ]; then\n        schema_update_log \"err\" \"calculate_db_id - Failure in upgrade command [${SQL_COMMAND}] result [${COMMAND_OUTPUT}]. 
Exiting\" \"all\" \"pretty\"\n        exit 1\n    fi\n}\n\n#\n# Executes the .sql file associated to this shell script\n#\nexecute_sql_file() {\n\n    schema_update_log \"info\" \"execute_sql_file: SQLITE_SQL :$SQLITE_SQL: sql_file :$sql_file: DEFAULT_SQLITE_DB_FILE :$DEFAULT_SQLITE_DB_FILE: DEFAULT_SQLITE_DB_FILE_READINGS :$DEFAULT_SQLITE_DB_FILE_READINGS:\" \"logonly\" \"pretty\"\n\n    SQL_COMMAND=\"${SQLITE_SQL} \\\"${DEFAULT_SQLITE_DB_FILE}\\\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'              AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}'     AS 'readings_1';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS_SINGLE}' AS 'readings';\n\n.read '${sql_file}'\n\n.quit\nEOF\"\n\n    COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'              AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}'     AS 'readings_1';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS_SINGLE}' AS 'readings';\n\n.read '${sql_file}'\n.quit\nEOF`\n\n    ret_code=$?\n\n    if [ \"${ret_code}\" -ne 0 ]; then\n        schema_update_log \"err\" \"execute_sql_file - Failure in upgrade command [${SQL_COMMAND}] result [${COMMAND_OUTPUT}]. Exiting\" \"all\" \"pretty\"\n        exit 1\n    fi\n}\n\n#\n# Creates a database file given the file name\n#\ncreate_database_file() {\n\n    readings_file=$1\n\n    file_path=$(dirname \"${DEFAULT_SQLITE_DB_FILE_READINGS}\")\n    file_name_path=\"${file_path}/${readings_file}.db\"\n\n    # Creates the file if it was not already created\n    if [ ! 
-f \"${file_name_path}\" ]; then\n\n        schema_update_log \"info\" \"create_database_file - file path :$file_name_path:\" \"logonly\" \"pretty\"\n\n        # Create the database file\n        COMMAND_OUTPUT=$(${SQLITE_SQL} ${file_name_path} .databases 2>&1)\n\n        ret_code=$?\n        if [ \"${ret_code}\" -ne 0 ]; then\n            schema_update_log \"err\" \"create_database_file - Failure in creating database file [${file_name_path}] result [${COMMAND_OUTPUT}]. Exiting\" \"all\" \"pretty\"\n            exit 1\n        fi\n    fi\n}\n\n#\n# Updates the db with the max used database id\n#\nupdate_max_db_id() {\n\n    declare _db_id_max=$1\n\n    schema_update_log \"info\" \"update_max_db_id: SQLITE_SQL :$SQLITE_SQL: DEFAULT_SQLITE_DB_FILE :$DEFAULT_SQLITE_DB_FILE: DEFAULT_SQLITE_DB_FILE_READINGS :$DEFAULT_SQLITE_DB_FILE_READINGS:\" \"logonly\" \"pretty\"\n\n    SQL_COMMAND=\"${SQLITE_SQL} \\\"${DEFAULT_SQLITE_DB_FILE}\\\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'              AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}'     AS 'readings_1';\n\nUPDATE readings_1.configuration_readings SET db_id_Last=${_db_id_max};\n\n.quit\nEOF\"\n\n\n    COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'              AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}'     AS 'readings_1';\n\nUPDATE readings_1.configuration_readings SET db_id_Last=${_db_id_max};\n\n.quit\nEOF`\n\n    ret_code=$?\n\n    if [ \"${ret_code}\" -ne 0 ]; then\n        schema_update_log \"err\" \"update_max_db_id - Failure in upgrade command [${SQL_COMMAND}] result [${COMMAND_OUTPUT}]. 
Exiting\" \"all\" \"pretty\"\n        exit 1\n    fi\n}\n\n#\n# Creates all the required database files in relation to the asset_reading_catalogue content\n#\ncreate_all_database_files() {\n\n    declare _n_db_allocate=$1\n    declare db_name\n    declare db_id_start\n\n    db_id_max=0\n\n    while read -r table_id db_id asset_code; do\n\n        # The first database is created by the upgrade process\n        if [ \"$db_id\" != \"1\" ]; then\n\n            if [[ $db_id -gt $db_id_max ]]\n            then\n                db_id_max=${db_id}\n            fi\n            db_name=\"readings_$db_id\"\n\n            schema_update_log \"info\" \"create_all_database_files - db name :$db_name: db id :$db_id: \" \"logonly\" \"pretty\"\n            create_database_file \"$db_name\"\n        fi\n    done < \"$tmp_file\"\n\n    # Create 1 extra db\n    db_id_max=$((${db_id_max} +1))\n    db_name=\"readings_$db_id_max\"\n    schema_update_log \"info\" \"create_all_database_files - db name :$db_name: db id :$db_id_max: \" \"logonly\" \"pretty\"\n    create_database_file \"$db_name\"\n\n    # Creates all the required databases if not already created\n    if [[ $db_id_max -lt $_n_db_allocate ]]\n    then\n        db_id_start=$((${db_id_max} +1))\n\n        # The first database is created by the upgrade process\n        for ((db_id=${db_id_start}; db_id<=${_n_db_allocate}; db_id++)); do\n\n            db_name=\"readings_$db_id\"\n\n            schema_update_log \"info\" \"create_all_database_files - db name :$db_name: db id :$db_id: \" \"logonly\" \"pretty\"\n            create_database_file \"$db_name\"\n        done\n        db_id_max=$_n_db_allocate\n    fi\n}\n\n#\n# Creates a reading table given the database and the table name that should be used\n#\ncreate_readings() {\n\n    READINGS_DB=\"$1\"\n    READINGS_TABLE=\"$2\"\n\n    file_path=$(dirname ${DEFAULT_SQLITE_DB_FILE_READINGS})\n    readings_file=\"${file_path}/${READINGS_DB}.db\"\n\n    schema_update_log \"info\" \"create_readings - db 
:$READINGS_DB: table :$READINGS_TABLE: asset code :$ASSET_CODE:\" \"logonly\" \"pretty\"\n\n    SQL_COMMAND=\"${SQLITE_SQL} \\\"${DEFAULT_SQLITE_DB_FILE}\\\" 2>&1 <<EOF\n\n    ATTACH DATABASE '${readings_file}'                          AS '${READINGS_DB}';\n\n    CREATE TABLE  '${READINGS_DB}'.'${READINGS_TABLE}' (\n        id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n        reading    JSON                        NOT NULL DEFAULT '{}',            -- The json object received\n        user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),      -- UTC time\n        ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))       -- UTC time\n    );\n\n    CREATE INDEX  '${READINGS_DB}'.'${READINGS_TABLE}_ix3' ON '${READINGS_TABLE}' (user_ts desc);\n\n.quit\nEOF\"\n\n    COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\n\n    ATTACH DATABASE '${readings_file}'                          AS '${READINGS_DB}';\n\n    CREATE TABLE  '${READINGS_DB}'.'${READINGS_TABLE}' (\n        id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n        reading    JSON                        NOT NULL DEFAULT '{}',            -- The json object received\n        user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),      -- UTC time\n        ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))       -- UTC time\n    );\n\n    CREATE INDEX  '${READINGS_DB}'.'${READINGS_TABLE}_ix3' ON '${READINGS_TABLE}' (user_ts desc);\n\n.quit\nEOF`\n\n    ret_code=$?\n\n    if [ \"${ret_code}\" -ne 0 ]; then\n\n        schema_update_log \"err\" \"create_readings - Failure in upgrade command [${SQL_COMMAND}] result [${COMMAND_OUTPUT}]. 
Exiting\" \"all\" \"pretty\"\n        exit 1\n    fi\n\n\n}\n\n#\n# Creates all the required reading tables in relation to the asset_reading_catalogue content\n#\ncreate_all_readings() {\n\n    declare _n_db_allocate=$1\n    declare _n_readings_allocate=$2\n\n    for ((db_id=1; db_id<=${_n_db_allocate}; db_id++)); do\n\n        for ((table_id=1; table_id<=${_n_readings_allocate}; table_id++)); do\n\n            schema_update_log \"info\" \"create_all_readings - dbid :$db_id: table id :${db_id}_${table_id}: \" \"logonly\" \"pretty\"\n\n            # The first reading table is created by the sql script\n            if [[  \"$db_id\" != \"1\" || \"$table_id\" != \"1\" ]]\n            then\n\n                create_readings \"readings_$db_id\" \"readings_${db_id}_${table_id}\"\n            fi\n        done\n    done\n}\n\n#\n#\n# Populate the proper readings table\n#\n# $1 = READINGS_DB\n# $2 = Readings table\n# $3 = Asset code\n#\npopulate_readings() {\n\n    READINGS_DB=\"$1\"\n    READINGS_TABLE=\"$2\"\n    ASSET_CODE=\"$3\"\n\n    file_path=$(dirname ${DEFAULT_SQLITE_DB_FILE_READINGS})\n    readings_file=\"${file_path}/${READINGS_DB}.db\"\n\n    schema_update_log \"info\" \"populate_readings - file :$readings_file: db :$READINGS_DB: table :$READINGS_TABLE: asset code :$ASSET_CODE:\" \"logonly\" \"pretty\"\n\n    SQL_COMMAND=\"${SQLITE_SQL} \\\"${DEFAULT_SQLITE_DB_FILE}\\\" 2>&1 <<EOF\n\n    ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'                 AS 'fledge';\n    ATTACH DATABASE '${readings_file}'                          AS '${READINGS_DB}';\n    ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS_SINGLE}' AS 'readings';\n\n    INSERT INTO '${READINGS_DB}'.'${READINGS_TABLE}'\n        SELECT\n            id,\n            reading,\n            user_ts,\n            ts\n        FROM readings.readings\n        WHERE asset_code = '${ASSET_CODE}';\n\n.quit\nEOF\"\n\n    COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\n\n    ATTACH DATABASE 
'${DEFAULT_SQLITE_DB_FILE}'                 AS 'fledge';\n    ATTACH DATABASE '${readings_file}'                          AS '${READINGS_DB}';\n    ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS_SINGLE}' AS 'readings';\n\n    INSERT INTO '${READINGS_DB}'.'${READINGS_TABLE}'\n        SELECT\n            id,\n            reading,\n            user_ts,\n            ts\n        FROM readings.readings\n        WHERE asset_code = '${ASSET_CODE}';\n\n.quit\nEOF`\n\n    ret_code=$?\n\n    if [ \"${ret_code}\" -ne 0 ]; then\n\n        schema_update_log \"err\" \"populate_readings - Failure in upgrade command [${SQL_COMMAND}]: ${COMMAND_OUTPUT}. Exiting\" \"all\" \"pretty\"\n        exit 1\n    fi\n\n}\n\n#\n# Populates all the required reading tables in relation to the asset_reading_catalogue content\n#\npopulate_all_readings() {\n\n    cat \"$tmp_file\"  | while read table_id db_id asset_code; do\n\n        schema_update_log \"info\" \"populate_all_readings - db id :$db_id: table id :$table_id: asset code :$asset_code: \" \"logonly\" \"pretty\"\n\n        populate_readings \"readings_$db_id\" \"readings_${db_id}_${table_id}\" \"$asset_code\"\n    done\n}\n\n#\n# Export the asset_reading_catalogue content into a temporary file\n#\nexport_readings_list() {\n\n    schema_update_log \"info\" \"export_readings_list - tmp_file :$tmp_file: SQLITE_SQL :$SQLITE_SQL: sql_file :$sql_file: DEFAULT_SQLITE_DB_FILE :$DEFAULT_SQLITE_DB_FILE: DEFAULT_SQLITE_DB_FILE_READINGS :$DEFAULT_SQLITE_DB_FILE_READINGS:\" \"logonly\" \"pretty\"\n\n    SQL_COMMAND=\"${SQLITE_SQL} \\\"${DEFAULT_SQLITE_DB_FILE}\\\" 2>&1 <<EOF\n\n        ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}'     AS 'readings_1';\n\n        SELECT\n            table_id,\n            db_id,\n            asset_code\n        FROM readings_1.asset_reading_catalogue;\n\n.quit\nEOF\"\n    COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" > $tmp_file <<EOF\n\n        ATTACH DATABASE 
'${DEFAULT_SQLITE_DB_FILE_READINGS}'     AS 'readings_1';\n\n        SELECT\n            table_id,\n            db_id,\n            asset_code\n        FROM readings_1.asset_reading_catalogue\n        ORDER BY db_id, table_id;\n\n.quit\nEOF`\n\n\n    ret_code=$?\n\n    if [ \"${ret_code}\" -ne 0 ]; then\n\n        schema_update_log \"err\" \"export_readings_list - Failure in upgrade command [${SQL_COMMAND}]: ${COMMAND_OUTPUT}. Exiting\" \"all\" \"pretty\"\n        exit 1\n    fi\n\n}\n\n#\n# Cleans up the database and file system\n#\ncleanup_db() {\n\n    schema_update_log \"info\" \"cleanup - SQLITE_SQL :$SQLITE_SQL: sql_file :$sql_file: DEFAULT_SQLITE_DB_FILE :$DEFAULT_SQLITE_DB_FILE: DEFAULT_SQLITE_DB_FILE_READINGS :$DEFAULT_SQLITE_DB_FILE_READINGS:\" \"logonly\" \"pretty\"\n\n    #\n    # Clean up - database\n    #\n    SQL_COMMAND=\"${SQLITE_SQL} \\\"${DEFAULT_SQLITE_DB_FILE}\\\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'                 AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}'        AS 'readings_1';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS_SINGLE}' AS 'readings';\n\nDROP TABLE readings.readings;\nDROP TABLE readings_1.asset_reading_catalogue_tmp;\n\n.quit\nEOF\"\n\n    COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\n\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'                 AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}'        AS 'readings_1';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS_SINGLE}' AS 'readings';\n\nDROP TABLE readings.readings;\nDROP TABLE readings_1.asset_reading_catalogue_tmp;\n\n.quit\nEOF`\n\n    ret_code=$?\n\n    if [ \"${ret_code}\" -ne 0 ]; then\n        schema_update_log \"err\" \"cleanup_db - Failure in upgrade command [${SQL_COMMAND}]: result [${COMMAND_OUTPUT}]. 
Proceeding\" \"all\" \"pretty\"\n    fi\n\n    #\n    # Clean up - file system\n    #\n    file_path=$(dirname ${DEFAULT_SQLITE_DB_FILE_READINGS_SINGLE})\n    file_name_path=\"${file_path}/readings.db*\"\n\n    schema_update_log \"info\" \"cleanup - deleting ${file_name_path}\" \"logonly\" \"pretty\"\n\n    rm ${file_name_path}\n    ret_code=$?\n\n    if [ \"${ret_code}\" -ne 0 ]; then\n        schema_update_log \"notice\" \"cleanup_db - Failure in upgrade, files [${file_name_path}] can't be deleted. Proceeding\" \"logonly\" \"pretty\"\n    fi\n}\n\n#\n# Main\n#\nexport n_db_allocate=3\nexport n_readings_allocate=15\nexport tmp_file=/tmp/$$\nexport IFS=\"|\"\n\nschema_update_log \"info\" \"$0 - SQLITE_SQL :$SQLITE_SQL: sql_file :$sql_file: DEFAULT_SQLITE_DB_FILE :$DEFAULT_SQLITE_DB_FILE: DEFAULT_SQLITE_DB_FILE_READINGS :$DEFAULT_SQLITE_DB_FILE_READINGS:\" \"logonly\" \"pretty\"\n\nexecute_sql_file\n\ncalculate_db_id ${n_readings_allocate}\n\ncalculate_table_id ${n_readings_allocate}\n\nexport_readings_list\n\ndb_id_max=0\ncreate_all_database_files ${n_db_allocate}   # updates db_id_max\nupdate_max_db_id          ${db_id_max}   # stores db_id_max in the db\ncreate_all_readings       ${db_id_max} ${n_readings_allocate}\n\npopulate_all_readings\n\ncleanup_db\n\nunset IFS\n\nexit 0\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/38.sql",
    "content": "--\n-- Stores in which database/readings table the specific asset_code is stored\n--\nCREATE TABLE readings_1.asset_reading_catalogue (\n    table_id     INTEGER               NOT NULL,\n    db_id        INTEGER               NOT NULL,\n    asset_code   character varying(50) NOT NULL\n);\n\nCREATE TABLE readings_1.asset_reading_catalogue_tmp (\n    table_id     INTEGER               PRIMARY KEY AUTOINCREMENT,\n    db_id        INTEGER               NOT NULL,\n    asset_code   character varying(50) NOT NULL\n);\n\n-- Store information about the multi database/readings handling\n--\nCREATE TABLE readings_1.configuration_readings (\n    global_id         INTEGER,                                                  -- Stores the last global Id used +1\n                                                                                -- Updated to -1 when Fledge starts\n                                                                                -- Updated to the proper value when Fledge stops\n    db_id_Last        INTEGER,                                                  -- Latest database available\n\n\tn_readings_per_db INTEGER,                                                  -- Number of readings tables per database\n\tn_db_preallocate  INTEGER                                                   -- Number of databases to allocate in advance\n);\n\n-- Readings table\n-- This table contains the readings for assets.\n-- An asset can be a south service with multiple sensors, a single sensor,\n-- software or anything that generates data that is sent to Fledge\nCREATE TABLE readings_1.readings_1_1 (\n    id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n    reading    JSON                        NOT NULL DEFAULT '{}',            -- The json object received\n    user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),      -- UTC time\n    ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))       -- UTC 
time\n);\n\nCREATE INDEX readings_1.readings_1_1_ix3\n    ON readings_1_1 (user_ts desc);\n\n\n--\n-- global_id = -1 forces a calculation of the global id when Fledge starts\n--\nINSERT INTO readings_1.configuration_readings VALUES (-1, 0, 15, 3);\n\n--\n-- NULL is used to force the auto generation of the value starting from 1\n-- db_id will be properly valued by the shell script\n--\nINSERT INTO readings_1.asset_reading_catalogue_tmp\n    SELECT\n        NULL,\n        0,\n        asset_code\n    FROM readings.readings\n    GROUP BY asset_code\n    ORDER BY asset_code;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/39.sql",
    "content": "-- Create packages table\n\nDROP TABLE IF EXISTS fledge.packages;\n\nCREATE TABLE fledge.packages (\n             id                uuid                   NOT NULL, -- PK\n             name              character varying(255) NOT NULL, -- Package name\n             action            character varying(10)  NOT NULL, -- APT actions:\n                                                                -- list\n                                                                -- install\n                                                                -- purge\n                                                                -- update\n             status            INTEGER                NOT NULL, -- exit code\n                                                                -- -1       - in-progress\n                                                                --  0       - success\n                                                                -- Non-Zero - failed\n             log_file_uri      character varying(255) NOT NULL, -- Package Log file relative path\n  CONSTRAINT packages_pkey PRIMARY KEY  ( id ) );"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/4.sql",
    "content": "-- Create the configuration category_children table\nCREATE TABLE fledge.category_children (\n       parent\tcharacter varying(255)\tNOT NULL,\n       child\tcharacter varying(255)\tNOT NULL,\n       CONSTRAINT config_children_pkey PRIMARY KEY (parent, child) );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/40.sql",
    "content": "-- No action required\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/41.sql",
    "content": "-- Scheduled process entry for dispatcher microservice\n\nINSERT INTO fledge.scheduled_processes SELECT 'dispatcher_c', '[\"services/dispatcher_c\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'dispatcher_c');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/42.sql",
    "content": "-- From: http://www.sqlite.org/faq.html:\n--    SQLite has limited ALTER TABLE support that you can use to change type of column.\n--    If you want to change the type of any column you will have to recreate the table.\n--    You can save existing data to a temporary table and then drop the old table\n--    Now, create the new table, then copy the data back in from the temporary table\n\n-- Create temporary table with new changes and then copy the data from old table\nDROP TABLE IF EXISTS fledge.users_temp;\nCREATE TABLE fledge.users_temp (\n       id                INTEGER   PRIMARY KEY AUTOINCREMENT,\n       uname             character varying(80)  NOT NULL,\n       real_name         character varying(255) NOT NULL,\n       role_id           integer                NOT NULL,\n       description       character varying(255) NOT NULL DEFAULT '',\n       pwd               character varying(255) ,\n       public_key        character varying(255) ,\n       enabled           boolean                NOT NULL DEFAULT 't',\n       pwd_last_changed  DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       access_method     TEXT CHECK( access_method IN ('any','pwd','cert') )  NOT NULL DEFAULT 'any',\n          CONSTRAINT users_temp_fk1 FOREIGN KEY (role_id)\n          REFERENCES roles (id) MATCH SIMPLE\n                  ON UPDATE NO ACTION\n                  ON DELETE NO ACTION );\nINSERT INTO fledge.users_temp ( id, uname, real_name, pwd, enabled, pwd_last_changed, role_id, description ) SELECT id, uname, \"\", pwd, enabled, pwd_last_changed, role_id, description FROM fledge.users;\nUPDATE fledge.users_temp SET real_name='Admin user' where uname='admin';\nUPDATE fledge.users_temp SET real_name='Normal user' where uname='user';\n\n-- Recreate it again and copy the data from temp table\nDROP TABLE IF EXISTS fledge.users;\nCREATE TABLE fledge.users (\n       id                INTEGER   PRIMARY KEY AUTOINCREMENT,\n       uname             
character varying(80)  NOT NULL,\n       real_name         character varying(255) NOT NULL,\n       role_id           integer                NOT NULL,\n       description       character varying(255) NOT NULL DEFAULT '',\n       pwd               character varying(255) ,\n       public_key        character varying(255) ,\n       enabled           boolean                NOT NULL DEFAULT 't',\n       pwd_last_changed  DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       access_method     TEXT CHECK( access_method IN ('any','pwd','cert') )  NOT NULL DEFAULT 'any',\n          CONSTRAINT users_fk1 FOREIGN KEY (role_id)\n          REFERENCES roles (id) MATCH SIMPLE\n                  ON UPDATE NO ACTION\n                  ON DELETE NO ACTION );\nINSERT INTO fledge.users ( id, uname, real_name, pwd, enabled, pwd_last_changed, role_id, description ) SELECT id, uname, real_name, pwd, enabled, pwd_last_changed, role_id, description FROM fledge.users_temp;\nDROP TABLE IF EXISTS fledge.users_temp;\n\n-- Recreate INDEX\nDROP INDEX IF EXISTS fki_users_fk1;\nCREATE INDEX fki_users_fk1\n    ON users (role_id);\n\nDROP INDEX IF EXISTS users_ix1;\nCREATE UNIQUE INDEX users_ix1\n    ON users (uname);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/43.sql",
    "content": "-- Contains history of the statistics_history table\n-- Data are historicized daily\n--\nBEGIN TRANSACTION;\n\nDROP INDEX IF EXISTS fledge.statistics_history_daily_ix1;\nDROP TABLE IF EXISTS fledge.statistics_history_daily;\n\nCREATE TABLE fledge.statistics_history_daily (\n                                                 year        DATE DEFAULT (STRFTIME('%Y', 'NOW')),\n                                                 day         DATE DEFAULT (STRFTIME('%Y-%m-%d', 'NOW')),\n                                                 key         character varying(56)       NOT NULL,\n                                                 value       bigint                      NOT NULL DEFAULT 0\n);\n\nCREATE INDEX fledge.statistics_history_daily_ix1\n    ON statistics_history_daily (year);\n\n--- statistics_history_daily ------------------------------------------------------------------:\n\nINSERT INTO fledge.statistics_history_daily\n(year, day, key, value)\nSELECT\n    STRFTIME('%Y', date(history_ts)),\n    date(history_ts),\n    key,\n    sum(\"value\") AS \"value\"\nFROM fledge.statistics_history\nWHERE history_ts < datetime('now', '-7 days')\nGROUP BY date(history_ts), key;\n\nDELETE FROM fledge.statistics_history WHERE history_ts < datetime('now', '-7 days');\n\n---  -----------------------------------------------------------------------------------------:\n\nDELETE FROM fledge.tasks WHERE start_time < datetime('now', '-30 days');\nDELETE FROM fledge.log   WHERE ts         < datetime('now', '-30 days');\n\n--- Insert purge system schedule and process entry\nDELETE FROM fledge.schedules           WHERE id   = 'd37265f0-c83a-11eb-b8bc-0242ac130003';\nDELETE FROM fledge.scheduled_processes WHERE name = 'purge_system';\n\nINSERT INTO fledge.scheduled_processes (name, script) VALUES ('purge_system', '[\"tasks/purge_system\"]');\nINSERT INTO fledge.schedules (id, schedule_name, process_name, schedule_type, schedule_time, schedule_interval, exclusive, 
enabled)\nVALUES ('d37265f0-c83a-11eb-b8bc-0242ac130003', -- id\n        'purge_system',                         -- schedule_name\n        'purge_system',                         -- process_name\n        3,                                      -- schedule_type (interval)\n        NULL,                                   -- schedule_time\n        '23:50:00',                             -- schedule_interval (every 24 hours)\n        't',                                    -- exclusive\n        't'                                     -- enabled\n    );\n\nCOMMIT;"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/44.sql",
    "content": "UPDATE fledge.configuration SET value = json_set(value, '$.retainUnsent', json('{\"description\": \"Retain data that has not been sent yet.\", \"type\": \"enumeration\", \"options\":[\"purge unsent\", \"retain unsent to any destination\", \"retain unsent to all destinations\"], \"default\": \"purge unsent\", \"displayName\": \"Retain Unsent Data\",\"value\": \"purge unsent\"}'))\n        WHERE key = 'PURGE_READ' AND\n               json_extract(value, '$.retainUnsent.value')  = \"false\";\n\nUPDATE fledge.configuration SET value = json_set(value, '$.retainUnsent', json('{\"description\": \"Retain data that has not been sent yet.\", \"type\": \"enumeration\", \"options\":[\"purge unsent\", \"retain unsent to any destination\", \"retain unsent to all destinations\"], \"default\": \"purge unsent\", \"displayName\": \"Retain Unsent Data\",\"value\": \"retain unsent to all destinations\"}'))\n        WHERE  key = 'PURGE_READ' AND\n               json_extract(value, '$.retainUnsent.value')  = \"true\";\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/45.sql",
    "content": "-- Create Control service support table\n\nDROP TABLE IF EXISTS fledge.control_script;\nDROP TABLE IF EXISTS fledge.control_acl;\n\n-- Script management for control dispatch service\nCREATE TABLE fledge.control_script (\n             name          character varying(255)        NOT NULL,\n             steps         JSON                          NOT NULL DEFAULT '{}',\n             acl           character varying(255),\n             CONSTRAINT    control_script_pkey           PRIMARY KEY (name) );\n\n-- Access Control List Management for control dispatch service\nCREATE TABLE fledge.control_acl (\n             name          character varying(255)        NOT NULL,\n             service       JSON                          NOT NULL DEFAULT '{}',\n             url           JSON                          NOT NULL DEFAULT '{}',\n             CONSTRAINT    control_acl_pkey              PRIMARY KEY (name) );\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/46.sql",
    "content": "-- Dispatcher log codes\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'DSPST', 'Dispatcher Startup' ),\n            ( 'DSPSD', 'Dispatcher Shutdown' );"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/47.sql",
    "content": "-- Scheduled process entry for automation script task\n\nINSERT INTO fledge.scheduled_processes SELECT 'automation_script', '[\"tasks/automation_script\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'automation_script');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/48.sql",
    "content": "-- Scheduled process entry for BucketStorage microservice\n\nINSERT INTO fledge.scheduled_processes SELECT 'bucket_storage_c', '[\"services/bucket_storage_c\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'bucket_storage_c');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/49.sql",
    "content": "-- Add id column in category_children table\n\nBEGIN TRANSACTION;\n\n-- Remove existing index\nDROP INDEX IF EXISTS fledge.config_children_idx1;\n\n-- Rename existing table into a temp one\nALTER TABLE fledge.category_children RENAME TO category_children_old;\n\n-- Create new table\nCREATE TABLE fledge.category_children (\n    id       integer                 PRIMARY KEY AUTOINCREMENT,\n    parent   character varying(255)  NOT NULL,\n    child    character varying(255)  NOT NULL\n);\n\n-- Add unique index for parent, child\nCREATE UNIQUE INDEX fledge.config_children_idx1 ON category_children (parent, child);\n\n-- Copy data\nINSERT INTO fledge.category_children(parent, child) SELECT parent, child FROM fledge.category_children_old;\n\n-- Remove temp table\nDROP TABLE fledge.category_children_old;\n\nCOMMIT;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/5.sql",
    "content": "UPDATE fledge.configuration SET value = json_set(value, '$.source', json('{\"description\": \"Source of data to be sent on the stream. May be either readings, statistics or audit.\", \"type\": \"string\", \"default\": \"readings\", \"value\": \"readings\"}'))\n        WHERE key = 'North Readings to PI';\n\nUPDATE fledge.configuration SET value = json_set(value, '$.source', json('{\"description\": \"Source of data to be sent on the stream. May be either readings, statistics or audit.\", \"type\": \"string\", \"default\": \"statistics\", \"value\": \"statistics\"}'))\n        WHERE key = 'North Statistics to PI';\n\nUPDATE fledge.configuration SET value = json_set(value, '$.source', json('{\"description\": \"Source of data to be sent on the stream. May be either readings, statistics or audit.\", \"type\": \"string\", \"default\": \"readings\", \"value\": \"readings\"}'))\n        WHERE key = 'North Readings to OCS';\n\nUPDATE statistics SET key = 'North Readings to PI' WHERE key = 'SENT_1';\nUPDATE statistics SET key = 'North Statistics to PI' WHERE key = 'SENT_2';\nUPDATE statistics SET key = 'North Readings to OCS' WHERE key = 'SENT_4';\n\n---\nINSERT INTO fledge.statistics ( key , description ) VALUES ( 'Readings Sent',   'Readings Sent North' );\nINSERT INTO fledge.statistics ( key , description ) VALUES ( 'Statistics Sent',   'Statistics Sent North' );\n\nINSERT INTO fledge.configuration (key, description, value) VALUES ( 'North',   'North tasks' , '{}' );\n\n\nUPDATE fledge.schedules SET schedule_name=process_name WHERE process_name  in (SELECT name FROM  fledge.scheduled_processes  WHERE script like '%\"tasks/north\"%');\n\nINSERT INTO fledge.category_children (parent, child)\nSELECT 'North', name FROM  fledge.scheduled_processes  WHERE script like '%\"tasks/north\"%';\n\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'north',   '[\"tasks/north\"]' );\n\nUPDATE fledge.schedules SET process_name='north' WHERE schedule_name in 
(SELECT name FROM  fledge.scheduled_processes  WHERE script like '%\"tasks/north\"%');\n\nINSERT INTO fledge.category_children (parent, child) VALUES ( 'North',   'OMF_TYPES' );\n\n--- Disables North pending tasks created before the upgrade process\nUPDATE fledge.tasks SET end_time=start_time, exit_code=0, state=2  WHERE end_time is null AND process_name in (SELECT name FROM  fledge.scheduled_processes  WHERE script like '%\"tasks/north\"%');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/50.sql",
    "content": "-- The Schema Service table used to hold information about extension schemas\nCREATE TABLE fledge.service_schema (\n             name          character varying(255)        NOT NULL,\n             service       character varying(255)        NOT NULL,\n             version       integer                       NOT NULL,\n             definition    JSON);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/51.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES\n\t\t( 'ESSRT', 'External Service Startup' ),\n\t\t( 'ESSTP', 'External Service Shutdown' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/52.sql",
    "content": "-- Access Control List usage relation\nCREATE TABLE fledge.acl_usage (\n             name            character varying(255)  NOT NULL,  -- ACL name\n             entity_type     character varying(80)   NOT NULL,  -- associated entity type: service or script \n             entity_name     character varying(255)  NOT NULL,  -- associated entity name\n             CONSTRAINT      usage_acl_pkey          PRIMARY KEY (name, entity_type, entity_name) );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/53.sql",
    "content": "-- Add new column name 'deprecated_ts' for asset_tracker\n\nALTER TABLE fledge.asset_tracker ADD COLUMN deprecated_ts DATETIME;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/54.sql",
    "content": "-- Addition of new log codes\n\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES\n        ( 'ASTDP', 'Asset deprecated' ),\n        ( 'ASTUN', 'Asset un-deprecated' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/55.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES\n        ( 'PIPIN', 'Pip installation' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/56.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\nVALUES ( 'SRVRS', 'Service Restart' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/57.sql",
    "content": "-- From: http://www.sqlite.org/faq.html:\n--    SQLite has limited ALTER TABLE support that you can use to change type of column.\n--    If you want to change the type of any column you will have to recreate the table.\n--    You can save existing data to a temporary table and then drop the old table\n--    Now, create the new table, then copy the data back in from the temporary table\n\n\n\n-- Drop existing index\nDROP INDEX IF EXISTS asset_tracker_ix1;\nDROP INDEX IF EXISTS asset_tracker_ix2;\n\n-- Rename existing table into a temp one\nALTER TABLE fledge.asset_tracker RENAME TO asset_tracker_old;\n\n-- Create new table\nCREATE TABLE IF NOT EXISTS fledge.asset_tracker (\n       id              integer                  PRIMARY KEY AUTOINCREMENT,\n       asset           character(50)            NOT NULL, -- asset name\n       event           character varying(50)    NOT NULL, -- event name\n       service         character varying(255)   NOT NULL, -- service name\n       fledge          character varying(50)    NOT NULL, -- FL service name\n       plugin          character varying(50)    NOT NULL, -- Plugin name\n       deprecated_ts \t\t\t\tDATETIME,\n       ts              DATETIME                 DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       data\t       JSON\t\t\tDEFAULT '{}'\n);\n\n-- Copy data\nINSERT INTO fledge.asset_tracker ( id, asset, event, service, fledge, plugin, deprecated_ts, ts ) SELECT  id, asset, event, service, fledge, plugin, deprecated_ts, ts FROM fledge.asset_tracker_old;\n\n-- Create Index\nCREATE INDEX asset_tracker_ix1 ON asset_tracker (asset);\nCREATE INDEX asset_tracker_ix2 ON asset_tracker (service);\n\n-- Remove old table\nDROP TABLE IF EXISTS fledge.asset_tracker_old;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/58.sql",
    "content": "-- Audit Log Marker log code\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'AUMRK', 'Audit Log Marker' );"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/59.sql",
    "content": "-- Roles\nINSERT INTO fledge.roles ( name, description )\n     VALUES ('view', 'Only to view the configuration'),\n            ('data-view', 'Only read the data in buffer');"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/6.sql",
    "content": "-- North_Readings_to_HTTP - for readings\nINSERT INTO fledge.configuration ( key, description, value )\n     VALUES ( 'North_Readings_to_HTTP',\n              'HTTP North Plugin - C Code',\n              ' { \"plugin\" : { \"type\" : \"string\", \"value\" : \"http-north\", \"default\" : \"http-north\", \"description\" : \"Module that HTTP North Plugin will load\" } } '\n            );\n\n-- North Tasks - C code\n--\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'North_Readings_to_HTTP',   '[\"tasks/north_c\"]' );\n\n-- Readings to HTTP - C Code\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'ccdf1ef8-7e02-11e8-adc0-fa7ae01bb3bc', -- id\n                'HTTP_North_C',                         -- schedule_name\n                'North_Readings_to_HTTP',               -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:30',                             -- schedule_interval\n                't',                                    -- exclusive\n                'f'                                     -- disabled\n              );\n\n-- Statistics\nINSERT INTO fledge.statistics ( key, description, value, previous_value )\n     VALUES ( 'NORTH_READINGS_TO_HTTP', 'Readings sent to HTTP', 0, 0 );\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/60.sql",
    "content": "-- Create control_source table\nCREATE TABLE fledge.control_source (\n             cpsid            integer                     PRIMARY KEY AUTOINCREMENT,       -- auto source id\n             name             character  varying(40)      NOT NULL,                        -- source name\n             description      character  varying(120)     NOT NULL                         -- source description\n            );\n\n-- Create control_destination table\nCREATE TABLE fledge.control_destination (\n             cpdid            integer                     PRIMARY KEY AUTOINCREMENT,       -- auto destination id\n             name             character  varying(40)      NOT NULL,                        -- destination name\n             description      character  varying(120)     NOT NULL                         -- destination description\n            );\n\n-- Create control_pipelines table\nCREATE TABLE fledge.control_pipelines (\n             cpid             integer                     PRIMARY KEY AUTOINCREMENT,       -- control pipeline id\n             name             character  varying(255)     NOT NULL                 ,       -- control pipeline name\n             stype            integer                                              ,       -- source type id from control_source table\n             sname            character  varying(80)                               ,       -- source name from control_source table\n             dtype            integer                                              ,       -- destination type id from control_destination table\n             dname            character  varying(80)                               ,       -- destination name from control_destination table\n             enabled          boolean                     NOT NULL DEFAULT  'f'    ,       -- false = A given pipeline is disabled by default\n             execution        character  varying(20)      NOT NULL DEFAULT  'shared'       -- pipeline will be 
executed with the shared execution model by default\n             );\n\n-- Create control_filters table\nCREATE TABLE fledge.control_filters (\n             fid              integer                     PRIMARY KEY AUTOINCREMENT,       -- auto filter id\n             cpid             integer                     NOT NULL                 ,       -- control pipeline id\n             forder           integer                     NOT NULL                 ,       -- filter order\n             fname            character  varying(255)     NOT NULL                 ,       -- Name of the filter instance\n             CONSTRAINT       control_filters_fk1         FOREIGN KEY (cpid)\n             REFERENCES       control_pipelines (cpid)    MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n             );\n\n-- Insert predefined entries for Control Source\nDELETE FROM fledge.control_source;\nINSERT INTO fledge.control_source ( name, description )\n     VALUES ('Any', 'Any source.'),\n            ('Service', 'A named service that is the source of the control pipeline.'),\n            ('API', 'The control pipeline source is the REST API.'),\n            ('Notification', 'The control pipeline originated from a notification.'),\n            ('Schedule', 'The control request was triggered by a schedule.'),\n            ('Script', 'The control request has come from the named script.');\n\n-- Insert predefined entries for Control Destination\nDELETE FROM fledge.control_destination;\nINSERT INTO fledge.control_destination ( name, description )\n     VALUES ('Any', 'Any destination.'),\n            ('Service', 'A name of service that is being controlled.'),\n            ('Asset', 'A name of asset that is being controlled.'),\n            ('Script', 'A name of script that will be executed.'),\n            ('Broadcast', 'No name is applied and pipeline will be considered for any control writes or operations to broadcast destinations.');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/61.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES\n        ( 'USRAD', 'User Added' ),\n        ( 'USRDL', 'User Deleted' ),\n        ( 'USRCH', 'User Changed' ),\n        ( 'USRRS', 'User Restored' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/62.sql",
    "content": "\nCREATE TABLE fledge.monitors (\n             service       character varying(255) NOT NULL,\n             monitor       character varying(80) NOT NULL,\n             minimum       integer,\n             maximum       integer,\n             average       integer,\n             samples       integer,\n             ts            DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))\n);\n\n\nCREATE INDEX monitors_ix1\n    ON monitors(service, monitor);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/63.sql",
    "content": "-- Roles\nINSERT INTO fledge.roles ( name, description )\n     VALUES ('control', 'Same as editor can do and also have access for control scripts and pipelines');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/64.sql",
    "content": "-- Create control_api table\nCREATE TABLE fledge.control_api (\n             name             character  varying(255)     NOT NULL                 ,       -- control API name\n             description      character  varying(255)     NOT NULL                 ,       -- description of control API\n             type             integer                     NOT NULL                 ,       -- 0 for write and 1 for operation\n             operation_name   character  varying(255)                              ,       -- name of the operation and only valid if type is operation\n             destination      integer                     NOT NULL                 ,       -- destination of request; 0-broadcast, 1-service, 2-asset, 3-script\n             destination_arg  character  varying(255)                              ,       -- name of the destination and only used if destination is non-zero\n             anonymous        boolean                     NOT NULL DEFAULT  'f'    ,       -- anonymous callers to make request to control API; by default false\n             CONSTRAINT       control_api_pname           PRIMARY KEY (name)\n             );\n\n-- Create control_api_parameters table\nCREATE TABLE fledge.control_api_parameters (\n             name             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.control_api\n             parameter        character  varying(255)     NOT NULL                 ,       -- name of parameter\n             value            character  varying(255)                              ,       -- value of parameter if constant otherwise default\n             constant         boolean                     NOT NULL                 ,       -- parameter is either a constant or variable\n             FOREIGN KEY (name) REFERENCES control_api (name)\n             );\n\n-- Create control_api_acl table\nCREATE TABLE fledge.control_api_acl (\n             name             character  varying(255)     NOT 
NULL                 ,       -- foreign key to fledge.control_api\n             user             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.users\n             FOREIGN KEY (name) REFERENCES control_api (name)                      ,\n             FOREIGN KEY (user) REFERENCES users (uname)\n             );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/65.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'ACLAD', 'ACL Added' ),( 'ACLCH', 'ACL Changed' ),( 'ACLDL', 'ACL Deleted' ),\n            ( 'CTSAD', 'Control Script Added' ),( 'CTSCH', 'Control Script Changed' ),('CTSDL', 'Control Script Deleted' ),\n            ( 'CTPAD', 'Control Pipeline Added' ),( 'CTPCH', 'Control Pipeline Changed' ),('CTPDL', 'Control Pipeline Deleted' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/66.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'CTEAD', 'Control Entrypoint Added' ),\n            ( 'CTECH', 'Control Entrypoint Changed' ),\n            ('CTEDL', 'Control Entrypoint Deleted' );"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/67.sql",
    "content": "-- Add new column name 'priority' in scheduled_processes\n\nALTER TABLE fledge.scheduled_processes ADD COLUMN priority INTEGER NOT NULL DEFAULT 999;\nUPDATE scheduled_processes SET priority = '10' WHERE name = 'bucket_storage_c';\nUPDATE scheduled_processes SET priority = '20' WHERE name = 'dispatcher_c';\nUPDATE scheduled_processes SET priority = '30' WHERE name = 'notification_c';\nUPDATE scheduled_processes SET priority = '100' WHERE name = 'south_c';\nUPDATE scheduled_processes SET priority = '200' WHERE name = 'north_C';\nUPDATE scheduled_processes SET priority = '300' WHERE name = 'management';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/68.sql",
    "content": "-- Create alerts table\n\nCREATE TABLE IF NOT EXISTS fledge.alerts (\n       key         character varying(80)       NOT NULL,                                  -- Primary key\n       message     character varying(255)      NOT NULL,                                 -- Alert Message\n       urgency     SMALLINT                    NOT NULL,                                 -- 1 Critical - 2 High - 3 Normal - 4 Low\n       ts          DATETIME    DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),     -- Timestamp, updated at every change\n       CONSTRAINT  alerts_pkey PRIMARY KEY (key) );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/69.sql",
    "content": "--- Insert check updates schedule and process entry\n\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'update checker', '[\"tasks/check_updates\"]' );\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '852cd8e4-3c29-440b-89ca-2c7691b0450d', -- id\n                'update checker',                       -- schedule_name\n                'update checker',                       -- process_name\n                2,                                      -- schedule_type (timed)\n                '00:05:00',                             -- schedule_time\n                '00:00:00',                             -- schedule_interval\n                't',                                    -- exclusive\n                't'                                     -- enabled\n              );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/7.sql",
    "content": "CREATE INDEX statistics_history_ix2\n    ON statistics_history (key);\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/70.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'BUCAD', 'Bucket Added' ), ( 'BUCCH', 'Bucket Changed' ), ( 'BUCDL', 'Bucket Deleted' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/71.sql",
    "content": "-- Add new column name 'hash_algorithm' in users table\n\nALTER TABLE fledge.users ADD COLUMN hash_algorithm TEXT CHECK( hash_algorithm IN ('SHA256', 'SHA512') )  NOT NULL DEFAULT 'SHA512';\nUPDATE fledge.users SET hash_algorithm='SHA256';\nUPDATE fledge.users SET pwd='495f7f5b17c534dbeabab3da2287a934b32ed6876568563b04c312be49e8773299243abd3881d13112ccfb67c4fb3ec8231406474810e1f6eb347d61c63785d4:672169c60df24b76b6b94e78cad800f8', hash_algorithm='SHA512' WHERE pwd ='39b16499c9311734c595e735cffb5d76ddffb2ebf8cf4313ee869525a9fa2c20:f400c843413d4c81abcba8f571e6ddb6';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/72.sql",
    "content": "ALTER TABLE fledge.users ADD COLUMN failed_attempts INTEGER DEFAULT 0;\nALTER TABLE fledge.users ADD COLUMN block_until DATETIME DEFAULT NULL;\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'USRBK', 'User Blocked' ), ( 'USRUB', 'User Unblocked' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/73.sql",
    "content": "update statistics set description = 'Readings Sent North' where description = 'Readings Sent Noth';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/74.sql",
    "content": "-- Scheduled process entry for pipeline service\nINSERT INTO fledge.scheduled_processes SELECT 'pipeline_c', '[\"services/pipeline_c\"]', 90 WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'pipeline_c');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/75.sql",
    "content": "ALTER TABLE fledge.plugin_data ADD COLUMN service_name character varying(255);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/76.sql",
    "content": "DELETE FROM fledge.scheduled_processes WHERE name = 'north';"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/8.sql",
    "content": "-- Create TABLE for asset_tracker\nCREATE TABLE IF NOT EXISTS fledge.asset_tracker (\n       id            integer          PRIMARY KEY AUTOINCREMENT,\n       asset         character(50)    NOT NULL,\n       event         character varying(50) NOT NULL,\n       service       character varying(255) NOT NULL,\n       fledge       character varying(50) NOT NULL,\n       plugin        character varying(50) NOT NULL,\n       ts            DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')) );\n\n-- Create INDEX for asset_tracker\nCREATE INDEX IF NOT EXISTS asset_tracker_ix1 ON asset_tracker (asset);\nCREATE INDEX IF NOT EXISTS asset_tracker_ix2 ON asset_tracker (service);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/9.sql",
    "content": "delete from fledge.configuration where key in (\n\t'North Readings to OCS',\n\t'North Statistics to PI',\n\t'North Readings to PI',\n\t'North_Statistics_to_PI',\n\t'dht11',\n\t'DHT11 South C Plugin',\n\t'North_Readings_to_HTTP',\n\t'North_Readings_to_PI') and key not in (\n\t\tselect distinct process_name from fledge.tasks);\n\ndelete from fledge.scheduled_processes where name in (\n\t'North Readings to OCS',\n\t'North Statistics to PI',\n\t'North Readings to PI',\n\t'North_Statistics_to_PI',\n\t'dht11',\n\t'DHT11 South C Plugin',\n\t'North_Readings_to_HTTP',\n\t'North_Readings_to_PI') and name not in (\n\t\tselect distinct process_name from fledge.tasks);\n\ndelete from fledge.schedules where schedule_name in (\n\t'North Readings to OCS',\n\t'North Statistics to PI',\n\t'North Readings to PI',\n\t'North_Statistics_to_PI',\n\t'dht11',\n\t'DHT11 South C Plugin',\n\t'North_Readings_to_HTTP',\n\t'North_Readings_to_PI') and schedule_name not in (\n\t\tselect distinct process_name from fledge.tasks);\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite/upgrade/README",
    "content": "Place SQLite3 upgrade schema sql files here.\n\nFile name:\n\nX.sql, where X is the SQLite3 schema id\n\nExample:\n\n'9.sql' file is read by Fledge app which has SQLite3 schema version set to 8\n'10.sql' file is read either by Fledge app which has SQLite3 schema version set to 9\neither by Fledge app upgrading schema from 8 to 10\n\nNote:\n- whenever VERSION file in $FLEDGE_ROOT has a new schema in 'fledge_schema',\n  the corresponding sql file must be placed here\n- file id must exist even if empty\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlite.sh",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2017-2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n# Script input parameters\n# $1 is action (start|stop|status|init|reset|purge|help)\n# $2 is db schema (i.e 35)\n\n__author__=\"Massimiliano Pinto\"\n__version__=\"1.0\"\n\n# Avoid to stop immediately to report/show the error/reason\nset +e\n\nPLUGIN=\"sqlite\"\n\n# Set default DB file\nif [ ! \"${DEFAULT_SQLITE_DB_FILE}\" ]; then\n    export DEFAULT_SQLITE_DB_FILE=\"${FLEDGE_DATA}/fledge.db\"\nfi\n\n# if the script changes the value it forces the overwrite of the value every times\n# it is needed when the storage plugin is changed\nif [ ! \"${DEFAULT_SQLITE_DB_FILE_READINGS_FLAG}\" ]; then\n\n    if [ ! 
\"${DEFAULT_SQLITE_DB_FILE_READINGS}\" ]; then\n\n        export DEFAULT_SQLITE_DB_FILE_READINGS_FLAG=1\n    fi\nfi\n\nif [ \"${DEFAULT_SQLITE_DB_FILE_READINGS_FLAG}\" ]; then\n\n    export DEFAULT_SQLITE_DB_FILE_READINGS_BASE=\"${FLEDGE_DATA}/readings\"\n    export DEFAULT_SQLITE_DB_FILE_READINGS=\"${DEFAULT_SQLITE_DB_FILE_READINGS_BASE}_1.db\"\n    export DEFAULT_SQLITE_DB_FILE_READINGS_SINGLE=\"${DEFAULT_SQLITE_DB_FILE_READINGS_BASE}.db\"\nfi\n\nUSAGE=\"Usage: `basename ${0}` {start|stop|status|init|reset|purge|help}\"\n\n# Check FLEDGE_ROOT\nif [ -z ${FLEDGE_ROOT+x} ]; then\n    # Set FLEDGE_ROOT as the default directory\n    FLEDGE_ROOT=\"/usr/local/fledge\"\nfi\n\n# Check if the default directory exists\nif [[ ! -d \"${FLEDGE_ROOT}\" ]]; then\n\n    # Here we cannot use the logger because we cannot find the write_log script.\n    # But it is ok, because the script is called with source and if it is called\n    # as standalone script the echo will be captured.\n    echo \"Fledge cannot be executed: ${FLEDGE_ROOT} is not a valid directory.\"\n    echo \"Create the enviroment variable FLEDGE_ROOT before using Fledge.\"\n    echo \"Specify the base directory for Fledge and set the variable with:\"\n    echo \"export FLEDGE_ROOT=<basedir>\"\n    exit 1\n\nfi\n\n##########\n## INCLUDE SECTION\n##########\n. $FLEDGE_ROOT/scripts/common/get_engine_management.sh\n. $FLEDGE_ROOT/scripts/common/write_log.sh\n\n\n# Logger wrapper\nsqlite_log() {\n    write_log \"Storage\" \"script.plugin.storage.sqlite\" \"$1\" \"$2\" \"$3\" \"$4\"\n}\n\n# Check first SQLite 3 with static library command line is available\nSQLITE_SQL=\"$FLEDGE_ROOT/plugins/storage/sqlite/sqlite3\"\nif ! [[ -x \"${SQLITE_SQL}\" ]]; then\n# Check system default SQLite 3 command line is available\n    if ! [[ -x \"$(command -v sqlite3)\" ]]; then\n        sqlite_log \"info\" \"The sqlite3 command cannot be found. 
Is SQLite3 installed?\" \"outonly\" \"pretty\"\n        sqlite_log \"info\" \"If SQLite3 is installed, check if the bin dir is in the PATH.\" \"outonly\" \"pretty\"\n        exit 1\n    else\n        SQLITE_SQL=\"$(command -v sqlite3)\"\n    fi\nfi\n\n## SQLite3 Start\nsqlite_start() {\n\n    # Check the status of the server\n    if [[ \"$1\" != \"skip\" ]]; then\n        result=`sqlite_status \"silent\"`\n    else\n        result=`sqlite_status \"skip\"`\n    fi\n    case \"$result\" in\n        \"0\")\n            # SQLilte3 DB found already running.\n            if [[ \"$1\" == \"noisy\" ]]; then\n                sqlite_log \"info\" \"SQLite3 database is ready.\" \"all\" \"pretty\"\n            else\n                if [[ \"$1\" != \"skip\" ]]; then\n                            sqlite_log \"info\" \"SQLite3 database is ready.\" \"logonly\" \"pretty\"\n                fi\n            fi\n            ;;\n\n        \"1\")\n            # Database not found, created datafile\n            COMMAND_OUTPUT=`${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE} .databases 2>&1`\n            RET_CODE=$?\n            if [ \"${RET_CODE}\" -ne 0 ]; then\n                sqlite_log \"err\" \"Error creating SQLite3 database ${DEFAULT_SQLITE_DB_FILE}: ${COMMAND_OUTPUT}\" \"all\" \"pretty\"\n                exit 1\n            fi\n\n            # File created\n            if [[ \"$1\" == \"noisy\" ]]; then\n                sqlite_log \"info\" \"SQLite3 database ${DEFAULT_SQLITE_DB_FILE} has been created.\" \"all\" \"pretty\"\n            else\n                sqlite_log \"info\" \"SQLite3 database ${DEFAULT_SQLITE_DB_FILE} has been created.\" \"logonly\" \"pretty\"\n            fi\n           ;;\n\n        *)\n            sqlite_log \"err\" \"Unknown SQLite database return status.\" \"all\"\n            exit 1\n            ;;\n    esac\n\n    # Check the presence of the readingds.db datafile\n    if [[ \"$1\" != \"skip\" ]]; then\n        result=`sqlite_status_readings \"silent\"`\n    else\n  
      result=`sqlite_status_readings \"skip\"`\n    fi\n\n    case \"$result\" in\n        \"0\")\n            # SQLite3 readings DB datafile already exists.\n            if [[ \"$1\" == \"noisy\" ]]; then\n                sqlite_log \"info\" \"SQLite3 readings database is ready.\" \"all\" \"pretty\"\n            else\n                if [[ \"$1\" != \"skip\" ]]; then\n                    sqlite_log \"info\" \"SQLite3 readings database is ready.\" \"logonly\" \"pretty\"\n                fi\n            fi\n            ;;\n\n        \"1\")\n            # Database not found, create the datafile\n            COMMAND_OUTPUT=`${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE_READINGS} .databases 2>&1`\n            RET_CODE=$?\n            if [ \"${RET_CODE}\" -ne 0 ]; then\n                sqlite_log \"err\" \"Error creating SQLite3 database ${DEFAULT_SQLITE_DB_FILE_READINGS}: ${COMMAND_OUTPUT}\" \"all\" \"pretty\"\n                exit 1\n            fi\n\n            # File created\n            if [[ \"$1\" == \"noisy\" ]]; then\n                sqlite_log \"info\" \"SQLite3 database ${DEFAULT_SQLITE_DB_FILE_READINGS} has been created.\" \"all\" \"pretty\"\n            else\n                sqlite_log \"info\" \"SQLite3 database ${DEFAULT_SQLITE_DB_FILE_READINGS} has been created.\" \"logonly\" \"pretty\"\n            fi\n            ;;\n\n        *)\n            sqlite_log \"err\" \"Unknown SQLite database return status.\" \"all\"\n            exit 1\n            ;;\n    esac\n\n    # Check if the fledge database has been created\n    FOUND_SCHEMAS=`${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE} \"ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; SELECT name FROM sqlite_master WHERE type='table'\"`\n\n    if [ ! \"${FOUND_SCHEMAS}\" ]; then\n        # Create the Fledge database\n        sqlite_reset \"$1\" \"immediate\"\n    else\n        # Check if the readings database has been created\n        FOUND_SCHEMAS=`${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE_READINGS} \"ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}' AS 'readings'; SELECT name FROM sqlite_master WHERE type='table'\"`\n\n        if [ ! \"${FOUND_SCHEMAS}\" ]; then\n            # Create the readings database\n            sqlite_reset_db_readings \"$1\" \"immediate\"\n        fi\n    fi\n\n    # Fledge DB schema update: Fledge version is $2, $1 is log verbosity\n    sqlite_schema_update $2 $1\n}\n\n\n## SQLite3 Stop\nsqlite_stop() {\n\n    # Since the script may be called with \"source\", this condition must be set\n    # and the else must be maintained because exit can't be used\n\n    if [[ \"$1\" == \"noisy\" ]]; then\n        sqlite_log \"info\" \"Fledge database is SQLite3. No stop/start actions available\" \"all\" \"pretty\"\n    else\n        sqlite_log \"info\" \"Fledge database is SQLite3. 
No stop/start actions available\" \"logonly\" \"pretty\"\n    fi\n\n    return 0\n}\n\n\n## SQLite3 Reset\nsqlite_reset() {\n\n    if [[ $2 != \"immediate\" ]]; then\n        echo \"This script will remove all data stored in the SQLite3 datafiles:\"\n        echo \"'${DEFAULT_SQLITE_DB_FILE}'\"\n        echo \"'${DEFAULT_SQLITE_DB_FILE_READINGS}'\"\n        echo -n \"Enter YES if you want to continue: \"\n        read continue_reset\n\n        if [ \"$continue_reset\" != 'YES' ]; then\n            echo \"The system will NOT be reset and current content remains\"\n            echo \"Goodbye.\"\n            # This is ok because it means that the script is called from command line\n            exit 0\n        fi\n    fi\n\n    sqlite_reset_db_fledge   \"$1\" \"$2\"\n    sqlite_reset_db_readings \"$1\" \"$2\"\n}\n\nsqlite_reset_db_fledge() {\n    if [[ \"$1\" == \"noisy\" ]]; then\n        sqlite_log \"info\" \"Building the metadata for the Fledge Plugin '${PLUGIN}' ...\" \"all\" \"pretty\"\n    else\n        sqlite_log \"info\" \"Building the metadata for the Fledge Plugin '${PLUGIN}' ...\" \"logonly\" \"pretty\"\n    fi\n\n    if [[ -f $DEFAULT_SQLITE_DB_FILE && $2 != \"immediate\" ]]; then\n        # Remove service schema files as per name\n        schema=$(${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE} 'select name from service_schema;')\n        for f in $schema; do\n            echo \"Removing $f service schema...\"\n            echo \"'${FLEDGE_DATA}/${f}.db'\"\n            rm -f ${FLEDGE_DATA}/${f}.db*\n            echo \"Removal of $f service schema Done!\"\n            if [ -d \"${FLEDGE_DATA}/buckets\" ]; then\n                echo \"Removed user data from ${FLEDGE_DATA}/buckets\"\n                rm -rf ${FLEDGE_DATA}/buckets\n            fi\n        done\n    fi\n\n    # 1- Drop all databases in DEFAULT_SQLITE_DB_FILE\n    if [ -f \"${DEFAULT_SQLITE_DB_FILE}\" ]; then\n        rm ${DEFAULT_SQLITE_DB_FILE} ||\n        sqlite_log \"err\" \"Cannot drop database '${DEFAULT_SQLITE_DB_FILE}' for the Fledge Plugin '${PLUGIN}'\" \"all\" \"pretty\"\n    fi\n    rm -f ${DEFAULT_SQLITE_DB_FILE}-journal ${DEFAULT_SQLITE_DB_FILE}-wal ${DEFAULT_SQLITE_DB_FILE}-shm\n    # 2- Create new datafile and apply init file\n    INIT_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\nPRAGMA page_size = 4096;\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge';\n.read '${INIT_SQL}'\n.quit\nEOF`\n\n    RET_CODE=$?\n    # Exit on failure\n    if [ \"${RET_CODE}\" -ne 0 ]; then\n        sqlite_log \"err\" \"Cannot initialize '${DEFAULT_SQLITE_DB_FILE}' for the Fledge Plugin '${PLUGIN}': ${INIT_OUTPUT}. Exiting\" \"all\" \"pretty\"\n        exit 2\n    fi\n\n    # Log success\n    if [[ \"$1\" == \"noisy\" ]]; then\n        sqlite_log \"info\" \"Build complete for Fledge Plugin '${PLUGIN}' in database '${DEFAULT_SQLITE_DB_FILE}'.\" \"all\" \"pretty\"\n    else\n        sqlite_log \"info\" \"Build complete for Fledge Plugin '${PLUGIN}' in database '${DEFAULT_SQLITE_DB_FILE}'.\" \"logonly\" \"pretty\"\n    fi\n\n}\n\nsqlite_create_db_readings() {\n\n    # 2- Create new datafile and apply init file\n    INIT_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE_READINGS}\" 2>&1 <<EOF\nPRAGMA page_size = 4096;\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}' AS 'readings_1';\n.read '${INIT_READINGS_SQL}'\n.quit\nEOF`\n\n    RET_CODE=$?\n    # Exit on failure\n    if [ \"${RET_CODE}\" -ne 0 ]; then\n        sqlite_log \"err\" \"Cannot initialize '${DEFAULT_SQLITE_DB_FILE_READINGS}' for the Fledge Plugin '${PLUGIN}': ${INIT_OUTPUT}. 
Exiting\" \"all\" \"pretty\"\n        exit 2\n    fi\n\n    # Log success\n    if [[ \"$1\" == \"noisy\" ]]; then\n        sqlite_log \"info\" \"Build complete for Fledge Plugin '${PLUGIN} in database '${DEFAULT_SQLITE_DB_FILE_READINGS}'.\" \"all\" \"pretty\"\n    else\n        sqlite_log \"info\" \"Build complete for Fledge Plugin '${PLUGIN} in database '${DEFAULT_SQLITE_DB_FILE_READINGS}'.\" \"logonly\" \"pretty\"\n    fi\n}\n\nsqlite_reset_db_readings() {\n\n    # 1- Drop all databases in DEFAULT_SQLITE_DB_FILE_READINGS\n    if [ -f \"${DEFAULT_SQLITE_DB_FILE_READINGS}\" ]; then\n        rm ${DEFAULT_SQLITE_DB_FILE_READINGS} ||\n        sqlite_log \"err\" \"Cannot drop database '${DEFAULT_SQLITE_DB_FILE_READINGS}' for the Fledge Plugin '${PLUGIN}'\" \"all\" \"pretty\"\n    fi\n    rm -f ${DEFAULT_SQLITE_DB_FILE_READINGS}-journal ${DEFAULT_SQLITE_DB_FILE_READINGS}-wal ${DEFAULT_SQLITE_DB_FILE_READINGS}-shm\n    # Delete all the readings databases if any\n    rm -f ${DEFAULT_SQLITE_DB_FILE_READINGS_BASE}*.db\n    rm -f ${DEFAULT_SQLITE_DB_FILE_READINGS_BASE}*.db-journal\n    rm -f ${DEFAULT_SQLITE_DB_FILE_READINGS_BASE}*.db-wal\n    rm -f ${DEFAULT_SQLITE_DB_FILE_READINGS_BASE}*.db-shm\n\n    sqlite_create_db_readings\n}\n\n\n\n## SQLite 3 Database Status\n#\n# NOTE: You can call this script with $1 = silent to avoid non output errors\n#\n# Returns:\n#   0 - SQLite3 datafile found\n#   1 - SQLite3 datafile NOT found\nsqlite_status() {\n    if [ -f \"${DEFAULT_SQLITE_DB_FILE}\" ]; then\n        if [[ \"$1\" == \"noisy\" ]]; then\n            sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE}' ready.\" \"all\" \"pretty\"\n        else\n            if [[ \"$1\" != \"skip\" ]]; then\n                sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE}' ready.\" \"logonly\" \"pretty\"\n            fi\n        fi\n        echo \"0\"\n    else\n        if [[ \"$1\" == \"noisy\" ]]; then\n            sqlite_log \"info\" \"SQLite 3 database 
'${DEFAULT_SQLITE_DB_FILE}' not found.\" \"all\" \"pretty\"\n        else\n            sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE}' not found.\" \"logonly\" \"pretty\"\n        fi\n        echo \"1\"\n    fi\n}\n\n## SQLite 3 Database Status - checks the presence of the readingds.db datafile\n#\n# NOTE: You can call this script with $1 = silent to avoid non output errors\n#\n# Returns:\n#   0 - SQLite3 readingds datafile found\n#   1 - SQLite3 readingds datafile NOT found\nsqlite_status_readings() {\n    if [ -f \"${DEFAULT_SQLITE_DB_FILE_READINGS}\" ]; then\n        if [[ \"$1\" == \"noisy\" ]]; then\n            sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE_READINGS}' ready.\" \"all\" \"pretty\"\n        else\n            if [[ \"$1\" != \"skip\" ]]; then\n                sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE_READINGS}' ready.\" \"logonly\" \"pretty\"\n            fi\n        fi\n        echo \"0\"\n    else\n        if [[ \"$1\" == \"noisy\" ]]; then\n            sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE_READINGS}' not found.\" \"all\" \"pretty\"\n        else\n            sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE_READINGS}' not found.\" \"logonly\" \"pretty\"\n        fi\n        echo \"1\"\n    fi\n}\n\n\n## SQLite schema update entry point\n#\nsqlite_schema_update() {\n\n    # Current starting Fledge version\n    NEW_VERSION=$1\n    # DB table\n    VERSION_TABLE=\"version\"\n    # Check first if the version table exists\n    VERSION_QUERY=\"SELECT name FROM sqlite_master WHERE type='table' and name = '${VERSION_TABLE}'\"\n    COMMAND_VERSION=\"${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE} \\\"${VERSION_QUERY}\\\"\"\n    CURR_VER=`eval ${COMMAND_VERSION}`\n    ret_code=$?\n    if [ ! 
\"${CURR_VER}\" ] || [ \"${ret_code}\" -ne 0 ]; then\n        sqlite_log \"error\" \"Error checking Fledge DB schema version: \"\\\n\"the table '${VERSION_TABLE}' doesn't exist. Exiting\" \"all\" \"pretty\"\n        return 1\n    fi\n\n    # Fetch Fledge DB version\n    CURR_VER=`${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE} \"ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; SELECT id FROM fledge.${VERSION_TABLE}\" | tr -d ' '`\n    if [ ! \"${CURR_VER}\" ]; then\n        # No version found set DB version now\n        INSERT_QUERY=\"ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; BEGIN; INSERT INTO fledge.${VERSION_TABLE} (id) VALUES('${NEW_VERSION}'); COMMIT;\"\n        COMMAND_INSERT=\"${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE} \\\"${INSERT_QUERY}\\\"\"\n        CURR_VER=`eval \"${COMMAND_INSERT}\"`\n        ret_code=$?\n\n        SET_VERSION_MSG=\"Fledge DB version not found in fledge.'${VERSION_TABLE}', setting version [${NEW_VERSION}]\"\n        if [[ \"$2\" == \"noisy\" ]]; then\n            sqlite_log \"info\" \"${SET_VERSION_MSG}\" \"all\" \"pretty\"\n        else \n            sqlite_log \"info\" \"${SET_VERSION_MSG}\" \"logonly\" \"pretty\"\n        fi\n    else\n        # Only if DB version is not equal to starting Fledge version we try schema update\n        if [ \"${CURR_VER}\" != \"${NEW_VERSION}\" ]; then\n            sqlite_log \"info\" \"Detected '${PLUGIN}' Fledge DB schema change from version [${CURR_VER}]\"\\\n\" to [${NEW_VERSION}], applying Upgrade/Downgrade ...\" \"all\" \"pretty\"\n            SCHEMA_UPDATE_SCRIPT=\"$FLEDGE_ROOT/scripts/plugins/storage/${PLUGIN}/schema_update.sh\"\n            if [ -s \"${SCHEMA_UPDATE_SCRIPT}\" ] && [ -x \"${SCHEMA_UPDATE_SCRIPT}\" ]; then\n                # Call the schema update script\n                ${SCHEMA_UPDATE_SCRIPT} \"${CURR_VER}\" \"${NEW_VERSION}\" \"${SQLITE_SQL}\"\n                update_code=$?\n                return ${update_code}\n            else\n                sqlite_log 
\"err\" \"Cannot find schema update script '${SCHEMA_UPDATE_SCRIPT}'. Exiting\" \"all\" \"pretty\"\n                exit 2\n            fi\n        else\n            # Just log up-to-date\n            if [[ \"$2\" != \"skip\" ]]; then\n                sqlite_log \"info\" \"Fledge DB schema is up to date to version [${CURR_VER}]\" \"logonly\" \"pretty\"\n            fi\n            return 0\n        fi\n    fi\n}\n\n\n## SQLite Help\nsqlite_help() {\n\n    echo \"${USAGE}\nSQLite3 Storage Layer plugin init script. \nThe script is used to control the SQLite3 plugin as database for Fledge\nArguments:\n start   - Start the database server (when managed)\n           If the server has not been initialized, it also initialize it\n stop    - Stop the database server (when managed)\n status  - Check the status of the database server\n reset   - Bring the database server to the original installation.\n           WARNING: all the data stored in the server will be lost!\n init    - Database check: if Fledge database does not exist\n           it will be created.\n purge   - Purge all readings data and non-configuration data stored in the database.\n           WARNING: all the data stored in the affected tables will be lost!\n help    - This text\n\n managed   - The database server is embedded in Fledge\n unmanaged - The database server is not embedded in Fledge\"\n\n}\n\n## SQLite3 purge all readings and non-configuration data\nsqlite_purge() {\n    echo \"This script will remove all readings data and non-configuration data stored in the SQLite3 datafiles:\"\n    echo \"'${DEFAULT_SQLITE_DB_FILE}'\"\n    echo \"'${DEFAULT_SQLITE_DB_FILE_READINGS}'\"\n    echo -n \"Enter YES if you want to continue: \"\n    read continue_purge\n\n    if [ \"$continue_purge\" != 'YES' ]; then\n\techo \"The system will NOT be purged of data and current content remains\"\n        echo \"Goodbye.\"\n        # This is ok because it means that the script is called from command line\n        exit 0\n  
  fi\n\n    if [[ \"$1\" == \"noisy\" ]]; then\n        sqlite_log \"info\" \"Purging data for the Fledge Plugin '${PLUGIN}' ...\" \"all\" \"pretty\"\n    else\n        sqlite_log \"info\" \"Purging data for the Fledge Plugin '${PLUGIN}' ...\" \"logonly\" \"pretty\"\n    fi\n\n    # Purge database content\n    COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge';\nUPDATE fledge.statistics SET value = 0, previous_value = 0, ts = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime');\nDELETE FROM fledge.asset_tracker;\nDELETE FROM fledge.sqlite_sequence WHERE name='asset_tracker';\nDELETE FROM fledge.tasks;\nDELETE FROM fledge.statistics_history;\nDELETE FROM fledge.sqlite_sequence WHERE name='statistics_history';\nDELETE FROM fledge.log;\nDELETE FROM fledge.sqlite_sequence WHERE name='log';\nDELETE FROM fledge.plugin_data;\nDELETE FROM fledge.omf_created_objects;\nDELETE FROM fledge.user_logins;\nUPDATE fledge.streams SET last_object = 0, ts = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime');\nVACUUM;\n.quit\nEOF`\n\n    RET_CODE=$?\n    if [ \"${RET_CODE}\" -ne 0 ]; then\n        sqlite_log \"err\" \"Failure in purge command: ${COMMAND_OUTPUT}. Exiting\" \"all\" \"pretty\"\n        return 1\n    fi\n\n    # Log success\n    if [[ \"$1\" == \"noisy\" ]]; then\n        sqlite_log \"info\" \"Purge complete for Fledge Plugin '${PLUGIN}' in database '${DEFAULT_SQLITE_DB_FILE}'.\" \"all\" \"pretty\"\n    else\n        sqlite_log \"info\" \"Purge complete for Fledge Plugin '${PLUGIN}' in database '${DEFAULT_SQLITE_DB_FILE}'.\" \"logonly\" \"pretty\"\n    fi\n\n    # Remove all readings\n    sqlite_reset_db_readings\n}\n\n##################\n### Main Logic ###\n##################\n\n# Set FLEDGE_DATA if it does not exist\nif [ -z \"${FLEDGE_DATA+x}\" ]; then\n    FLEDGE_DATA=\"${FLEDGE_ROOT}/data\"\nfi\n\n# Check if $FLEDGE_DATA exists\nif [[ ! -d \"${FLEDGE_DATA}\" ]]; then\n    sqlite_log \"err\" \"Fledge cannot be executed: ${FLEDGE_DATA} is not a valid directory.\" \"all\" \"pretty\"\n    exit 1\nfi\n\nengine_management=\"false\"\n\n# Settings when the database is managed by Fledge\ncase \"$engine_management\" in\n    \"true\")\n\n        # SQLite does not support managed storage. Ignore this option\n        print_output=\"silent\"\n        MANAGED=false\n        ;;\n\n    \"false\")\n\n        # This is an explicit input, which means that we do not want to send\n        # messages when we start or stop the server\n        print_output=\"silent\"\n        MANAGED=false\n        ;;\n\n    *)\n\n        # Unexpected value from the configuration file\n        sqlite_log \"err\" \"Fledge cannot start.\" \"all\" \"pretty\"\n        sqlite_log \"err\" \"Missing plugin information from the storage microservice\" \"all\" \"pretty\"\n        exit 1\n        ;;\n\nesac\n\n# Check if the init.sql file exists\n# Attempt 1: deployment path\nif [[ -e \"$FLEDGE_ROOT/plugins/storage/sqlite/init.sql\" ]]; then\n    INIT_SQL=\"$FLEDGE_ROOT/plugins/storage/sqlite/init.sql\"\n    INIT_READINGS_SQL=\"$FLEDGE_ROOT/plugins/storage/sqlite/init_readings.sql\"\nelse\n    # Attempt 2: development path\n    if [[ -e \"$FLEDGE_ROOT/scripts/plugins/storage/sqlite/init.sql\" ]]; then\n        INIT_SQL=\"$FLEDGE_ROOT/scripts/plugins/storage/sqlite/init.sql\"\n        INIT_READINGS_SQL=\"$FLEDGE_ROOT/scripts/plugins/storage/sqlite/init_readings.sql\"\n    else\n        sqlite_log \"err\" \"Missing plugin '${PLUGIN}' initialization file init.sql.\" \"all\" \"pretty\"\n        exit 1\n    fi\nfi\n\n# Main case\ncase \"$1\" in\n    start)\n        sqlite_start \"$print_output\" \"$2\"\n        ;;\n    init)\n        sqlite_start \"skip\" \"$2\"\n        ;;\n    stop)\n        sqlite_stop \"$print_output\"\n        ;;\n    reset)\n        sqlite_reset \"$print_output\" \"$2\"\n        ;;\n    status)\n        sqlite_status \"$print_output\"\n    
    ;;\n    purge)\n        sqlite_purge \"$print_output\"\n        ;;\n    help)\n        sqlite_help\n        ;;\n    *)\n        echo \"${USAGE}\"\n        exit 1\nesac\n\n# Exit cannot be used because the script may be \"sourced\"\n#exit $?\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/42.sql",
    "content": "DROP INDEX IF EXISTS fledge.statistics_history_daily_ix1;\nDROP TABLE IF EXISTS fledge.statistics_history_daily;\n\nDELETE FROM fledge.schedules WHERE process_name = 'purge_system';\nDELETE FROM fledge.scheduled_processes WHERE name = 'purge_system';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/43.sql",
    "content": "UPDATE fledge.configuration SET value = json_set(value, '$.retainUnsent', json('{\"description\": \"Retain data that has not been sent to any historian yet.\", \"type\": \"boolean\",  \"default\": \"false\", \"displayName\": \"Retain Unsent Data\", \"value\": \"false\"}'))\n        WHERE key = 'PURGE_READ' AND\n               json_extract(value, '$.retainUnsent.value') = 'purge unsent';\n\nUPDATE fledge.configuration SET value = json_set(value, '$.retainUnsent', json('{\"description\": \"Retain data that has not been sent to any historian yet.\", \"type\": \"boolean\",  \"default\": \"false\", \"displayName\": \"Retain Unsent Data\", \"value\": \"true\"}'))\n        WHERE key = 'PURGE_READ' AND\n               json_extract(value, '$.retainUnsent.value') = 'retain unsent to all destinations';\n\nUPDATE fledge.configuration SET value = json_set(value, '$.retainUnsent', json('{\"description\": \"Retain data that has not been sent to any historian yet.\", \"type\": \"boolean\",  \"default\": \"false\", \"displayName\": \"Retain Unsent Data\", \"value\": \"true\"}'))\n        WHERE key = 'PURGE_READ' AND\n               json_extract(value, '$.retainUnsent.value') = 'retain unsent to any destination';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/44.sql",
    "content": "-- Remove Control service support table\nDROP TABLE IF EXISTS fledge.control_script;\nDROP TABLE IF EXISTS fledge.control_acl;"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/45.sql",
    "content": "-- Dispatcher scheduled process entry\nDELETE from fledge.scheduled_processes WHERE name = 'dispatcher_c';\n\n-- Delete dispatcher log and log codes\nDELETE from fledge.log WHERE code = 'DSPST';\nDELETE from fledge.log_codes WHERE code = 'DSPST';\nDELETE from fledge.log WHERE code = 'DSPSD';\nDELETE from fledge.log_codes WHERE code = 'DSPSD';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/46.sql",
    "content": "-- No action is required\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/47.sql",
    "content": "DELETE FROM fledge.schedules WHERE process_name = 'bucket_storage_c';\nDELETE FROM fledge.scheduled_processes WHERE name = 'bucket_storage_c';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/48.sql",
    "content": "-- Remove id column in fledge.category_children\n\nBEGIN TRANSACTION;\n\n-- Drop existing index\nDROP INDEX IF EXISTS fledge.config_children_idx1;\n\n-- Rename existing table into a temp one\nALTER TABLE fledge.category_children RENAME TO category_children_old;\n\n-- Create new table\nCREATE TABLE fledge.category_children (\n       parent   character varying(255)  NOT NULL,\n       child    character varying(255)  NOT NULL,\n       CONSTRAINT config_children_pkey PRIMARY KEY(parent, child)\n);\n\n-- Copy data\nINSERT INTO fledge.category_children(parent, child) SELECT parent, child FROM fledge.category_children_old;\n\n-- Remove temp table\nDROP TABLE IF EXISTS fledge.category_children_old;\n\nCOMMIT;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/49.sql",
    "content": "-- The Schema Service table used to hold information about extension schemas\nDROP TABLE IF EXISTS fledge.service_schema;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/50.sql",
    "content": "DELETE FROM fledge.log_codes where code IN ('ESSRT', 'ESSTP' );\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/51.sql",
    "content": "-- Access Control List usage relation\nDROP TABLE IF EXISTS fledge.acl_usage;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/52.sql",
    "content": "-- From: http://www.sqlite.org/faq.html:\n--    SQLite has limited ALTER TABLE support that you can use to change type of column.\n--    If you want to change the type of any column you will have to recreate the table.\n--    You can save existing data to a temporary table and then drop the old table\n--    Now, create the new table, then copy the data back in from the temporary table\n\n\n-- Remove deprecated_ts column in fledge.asset_tracker\n\n-- Drop existing index\nDROP INDEX IF EXISTS asset_tracker_ix1;\nDROP INDEX IF EXISTS asset_tracker_ix2;\n\n-- Rename existing table into a temp one\nALTER TABLE fledge.asset_tracker RENAME TO asset_tracker_old;\n\n-- Create new table\nCREATE TABLE IF NOT EXISTS fledge.asset_tracker (\n       id              integer                  PRIMARY KEY AUTOINCREMENT,\n       asset           character(50)            NOT NULL, -- asset name\n       event           character varying(50)    NOT NULL, -- event name\n       service         character varying(255)   NOT NULL, -- service name\n       fledge          character varying(50)    NOT NULL, -- FL service name\n       plugin          character varying(50)    NOT NULL, -- Plugin name\n       ts              DATETIME                 DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime'))\n);\n\n-- Copy data\nINSERT INTO fledge.asset_tracker ( id, asset, event, service, fledge, plugin, ts ) SELECT  id, asset, event, service, fledge, plugin, ts FROM fledge.asset_tracker_old;\n\n-- Create Index\nCREATE INDEX asset_tracker_ix1 ON asset_tracker (asset);\nCREATE INDEX asset_tracker_ix2 ON asset_tracker (service);\n\n-- Remove old table\nDROP TABLE IF EXISTS fledge.asset_tracker_old;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/53.sql",
    "content": "DELETE FROM fledge.log_codes where code IN ('ASTDP', 'ASTUN' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/54.sql",
    "content": "DELETE FROM fledge.log_codes where code = 'PIPIN';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/55.sql",
    "content": "DELETE FROM fledge.log_codes where code = 'SRVRS';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/56.sql",
    "content": "-- From: http://www.sqlite.org/faq.html:\n--    SQLite has limited ALTER TABLE support that you can use to change type of column.\n--    If you want to change the type of any column you will have to recreate the table.\n--    You can save existing data to a temporary table and then drop the old table\n--    Now, create the new table, then copy the data back in from the temporary table\n\n\n-- Drop existing index\nDROP INDEX IF EXISTS asset_tracker_ix1;\nDROP INDEX IF EXISTS asset_tracker_ix2;\n\n-- Rename existing table into a temp one\nALTER TABLE fledge.asset_tracker RENAME TO asset_tracker_old;\n\n-- Create new table\nCREATE TABLE IF NOT EXISTS fledge.asset_tracker (\n       id              integer                  PRIMARY KEY AUTOINCREMENT,\n       asset           character(50)            NOT NULL, -- asset name\n       event           character varying(50)    NOT NULL, -- event name\n       service         character varying(255)   NOT NULL, -- service name\n       fledge          character varying(50)    NOT NULL, -- FL service name\n       plugin          character varying(50)    NOT NULL, -- Plugin name\n       deprecated_ts   DATETIME,\n       ts              DATETIME                 DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime'))\n);\n\n-- Copy data\nINSERT INTO fledge.asset_tracker ( id, asset, event, service, fledge, plugin, deprecated_ts, ts ) SELECT  id, asset, event, service, fledge, plugin, deprecated_ts, ts FROM fledge.asset_tracker_old;\n\n-- Create Index\nCREATE INDEX asset_tracker_ix1 ON asset_tracker (asset);\nCREATE INDEX asset_tracker_ix2 ON asset_tracker (service);\n\n-- Remove old table\nDROP TABLE IF EXISTS fledge.asset_tracker_old;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/57.sql",
    "content": "-- Delete Audit marker log and log codes entry\nDELETE from fledge.log WHERE code = 'AUMRK';\nDELETE from fledge.log_codes WHERE code = 'AUMRK';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/58.sql",
    "content": "-- Delete roles\nDELETE FROM fledge.roles WHERE name IN ('view','data-view');\n-- Reset auto increment\n-- ALTER TABLE cannot be used for that: the autoincrement counter is stored in a separate table named \"sqlite_sequence\", where the value can be modified\nUPDATE sqlite_sequence SET seq=1 WHERE name='roles';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/59.sql",
    "content": "-- Drop control pipeline tables\nDROP TABLE IF EXISTS fledge.control_source;\nDROP TABLE IF EXISTS fledge.control_destination;\nDROP TABLE IF EXISTS fledge.control_pipelines;\nDROP TABLE IF EXISTS fledge.control_filters;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/60.sql",
    "content": "DELETE FROM fledge.log_codes where code IN ('USRAD', 'USRDL', 'USRCH', 'USRRS' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/61.sql",
    "content": "DROP INDEX IF EXISTS fledge.monitors_ix1;\nDROP TABLE IF EXISTS fledge.monitors;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/62.sql",
    "content": "-- Delete roles\nDELETE FROM fledge.roles WHERE name IN ('view','control');\n-- Reset auto increment\n-- ALTER TABLE cannot be used for that: the autoincrement counter is stored in a separate table named \"sqlite_sequence\", where the value can be modified\nUPDATE sqlite_sequence SET seq=1 WHERE name='roles';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/63.sql",
    "content": "-- Drop control flow tables\nDROP TABLE IF EXISTS fledge.control_api_acl;\nDROP TABLE IF EXISTS fledge.control_api_parameters;\nDROP TABLE IF EXISTS fledge.control_api;"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/64.sql",
    "content": "DELETE FROM fledge.log_codes where code IN ('ACLAD', 'ACLCH', 'ACLDL', 'CTSAD', 'CTSCH', 'CTSDL', 'CTPAD', 'CTPCH', 'CTPDL');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/65.sql",
    "content": "DELETE FROM fledge.log_codes where code IN ('CTEAD', 'CTECH', 'CTEDL');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/66.sql",
    "content": "-- From: http://www.sqlite.org/faq.html:\n--    SQLite has limited ALTER TABLE support that you can use to change type of column.\n--    If you want to change the type of any column you will have to recreate the table.\n--    You can save existing data to a temporary table and then drop the old table\n--    Now, create the new table, then copy the data back in from the temporary table\n\n\n-- Remove priority column in fledge.scheduled_processes\n\n-- Rename existing table into a temp one\nALTER TABLE fledge.scheduled_processes RENAME TO scheduled_processes_old;\n\n-- Create new table\nCREATE TABLE fledge.scheduled_processes (\n             name        character varying(255)  NOT NULL,             -- Name of the process\n             script      JSON,                                         -- Full path of the process\n             CONSTRAINT scheduled_processes_pkey PRIMARY KEY ( name ) );\n\n-- Copy data\nINSERT INTO fledge.scheduled_processes ( name, script) SELECT name, script FROM fledge.scheduled_processes_old;\n\n-- Remove old table\nDROP TABLE IF EXISTS fledge.scheduled_processes_old;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/67.sql",
    "content": "DROP TABLE IF EXISTS fledge.alerts;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/68.sql",
    "content": "DELETE FROM fledge.schedules WHERE process_name = 'update checker';\nDELETE FROM fledge.scheduled_processes WHERE name = 'update checker';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/69.sql",
    "content": "DELETE FROM fledge.log_codes where code IN ('BUCAD', 'BUCCH', 'BUCDL');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/70.sql",
    "content": "-- Remove 'hash_algorithm' column from users table\n\nALTER TABLE fledge.users DROP COLUMN hash_algorithm;\nUPDATE fledge.users SET pwd='39b16499c9311734c595e735cffb5d76ddffb2ebf8cf4313ee869525a9fa2c20:f400c843413d4c81abcba8f571e6ddb6' WHERE pwd ='495f7f5b17c534dbeabab3da2287a934b32ed6876568563b04c312be49e8773299243abd3881d13112ccfb67c4fb3ec8231406474810e1f6eb347d61c63785d4:672169c60df24b76b6b94e78cad800f8';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/71.sql",
    "content": "ALTER TABLE fledge.users DROP COLUMN failed_attempts;\nALTER TABLE fledge.users DROP COLUMN block_until;\nDELETE FROM fledge.log_codes where code IN ('USRBK', 'USRUB');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/72.sql",
    "content": "-- No action required\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/73.sql",
    "content": "DELETE FROM fledge.scheduled_processes WHERE name = 'pipeline_c';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/74.sql",
    "content": "ALTER TABLE fledge.plugin_data DROP COLUMN service_name;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/75.sql",
    "content": "-- No downgrade is required for this version: the Python-based North task became obsolete long ago, so the entry for its scheduled process is not restored\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/downgrade/README",
    "content": "Place SQLite3 downgrade sql files here.\n\nFile name:\n\nX.sql, where X is the SQLite3 schema id\n\nExample:\n\n'9.sql' file is read by a Fledge app which has its SQLite3 schema version set to 10\n'8.sql' file is read either by a Fledge app which has its SQLite3 schema version set to 9\nor by a Fledge app downgrading the schema from 10 to 8\n\nNote:\n- whenever the VERSION file in $FLEDGE_ROOT has a new schema in 'fledge_schema',\n  the corresponding sql file must be placed here for downgrade.\n- the file id must exist even if the file is empty\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/init.sql",
    "content": "----------------------------------------------------------------------\n-- Copyright (c) 2022 OSIsoft, LLC\n--\n-- Licensed under the Apache License, Version 2.0 (the \"License\");\n-- you may not use this file except in compliance with the License.\n-- You may obtain a copy of the License at\n--\n--     http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing, software\n-- distributed under the License is distributed on an \"AS IS\" BASIS,\n-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-- See the License for the specific language governing permissions and\n-- limitations under the License.\n----------------------------------------------------------------------\n\n--\n-- init.sql\n--\n-- SQLite script to create the Fledge persistent layer\n--\n\n-- NOTE:\n--\n-- This schema has to be used with the SQLite3 JSON1 extension\n--\n-- This script must be launched with the sqlite3 command line tool:\n--  sqlite3 /path/fledge.db\n--   > ATTACH DATABASE '/path/fledge.db' AS 'fledge'\n--   > .read init.sql\n--   > .quit\n\n----------------------------------------------------------------------\n-- DDL CONVENTIONS\n--\n-- Tables:\n-- * Names are in plural, terms are separated by _\n-- * Columns are, when possible, not null and have a default value.\n--\n-- Columns:\n-- id      : It is commonly the PK of the table, a smallint, integer or bigint.\n-- xxx_id  : It usually refers to a FK, where \"xxx\" is the name of the table.\n-- code    : Usually an AK, based on fixed length characters.\n-- ts      : The timestamp with microsec precision and tz. It is updated at\n--           every change.\n\n----------------------------------------------------------------------\n-- SCHEMA CREATION\n----------------------------------------------------------------------\n\n----- TABLES\n\n-- Log Codes Table\n-- List of tasks that log info into fledge.log.\nCREATE TABLE fledge.log_codes (\n       code        character(5)          NOT NULL,   -- The process that logs actions\n       description character varying(80) NOT NULL,\n       CONSTRAINT log_codes_pkey PRIMARY KEY (code) );\n\n-- Generic Log Table\n-- General log table for Fledge.\nCREATE TABLE fledge.log (\n       id    INTEGER                PRIMARY KEY AUTOINCREMENT,\n       code  CHARACTER(5)           NOT NULL,                  -- The process that logged the action\n       level SMALLINT               NOT NULL,                  -- 0 Success - 1 Failure - 2 Warning - 4 Info\n       log   JSON                   NOT NULL DEFAULT '{}',     -- Generic log structure\n       ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')), -- UTC\n       CONSTRAINT log_fk1 FOREIGN KEY (code)\n       REFERENCES log_codes (code) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\n-- Index: log_ix1 - For queries by code\nCREATE INDEX log_ix1\n    ON log(code, ts, level);\n\n-- Index to make GUI response faster\nCREATE INDEX log_ix2\n    ON log(ts);\n\n-- Asset status\n-- List of statuses an asset can have.\nCREATE TABLE fledge.asset_status (\n       id          INTEGER                PRIMARY KEY AUTOINCREMENT,\n       description character varying(255) NOT NULL DEFAULT '' );\n\n-- Asset Types\n-- Type of asset (for example south, sensor etc.)\nCREATE TABLE fledge.asset_types (\n       id          INTEGER                PRIMARY KEY AUTOINCREMENT,\n       description character varying(255) NOT NULL DEFAULT '' );\n\n-- Assets table\n-- This table is used to list the assets used in Fledge\n-- Readings do not necessarily have an asset, but 
whenever possible this\n-- table provides information regarding the data collected.\nCREATE TABLE fledge.assets (\n       id           INTEGER                     PRIMARY KEY AUTOINCREMENT,\n       code         character varying(50),                                  -- A unique code  (AK) used to match readings and assets. It can be anything.\n       description  character varying(255)      NOT NULL DEFAULT '',        -- A brief description of the asset\n       type_id      integer                     NOT NULL,                   -- FK for the type of asset\n       address      inet                        NOT NULL DEFAULT '0.0.0.0', -- An IPv4 or IPv6 address, if needed. Default means \"any address\"\n       status_id    integer                     NOT NULL,                   -- Status of the asset, FK to the asset_status table\n       properties   JSON                        NOT NULL DEFAULT '{}',      -- A generic JSON structure. Some elements (for example \"labels\") may be used in the rule to send messages to the south devices or data to the cloud\n       has_readings boolean                     NOT NULL DEFAULT 'f',       -- A boolean column, when TRUE, it means that the asset may have rows in the readings table\n       ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       CONSTRAINT assets_fk1 FOREIGN KEY (status_id)\n       REFERENCES asset_status (id) MATCH SIMPLE\n                 ON UPDATE NO ACTION\n                 ON DELETE NO ACTION,\n       CONSTRAINT assets_fk2 FOREIGN KEY (type_id)\n       REFERENCES asset_types (id) MATCH SIMPLE\n                 ON UPDATE NO ACTION\n                 ON DELETE NO ACTION );\n\n-- Index: fki_assets_fk1\nCREATE INDEX fki_assets_fk1\n    ON assets (status_id);\n\n-- Index: fki_assets_fk2\nCREATE INDEX fki_assets_fk2\n    ON assets (type_id);\n\n-- Index: assets_ix1\nCREATE UNIQUE INDEX assets_ix1\n    ON assets (code);\n\n-- Asset Status Changes\n-- When an asset changes its status, the 
previous status is added here.\n-- start_ts contains the value of ts of the row in the asset table.\nCREATE TABLE fledge.asset_status_changes (\n       id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n       asset_id   integer                     NOT NULL,\n       status_id  integer                     NOT NULL,\n       log        JSON                        NOT NULL DEFAULT '{}',\n       start_ts   DATETIME NOT NULL,\n       ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       CONSTRAINT asset_status_changes_fk1 FOREIGN KEY (asset_id)\n       REFERENCES assets (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION,\n       CONSTRAINT asset_status_changes_fk2 FOREIGN KEY (status_id)\n       REFERENCES asset_status (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_asset_status_changes_fk1\n    ON asset_status_changes (asset_id);\n\nCREATE INDEX fki_asset_status_changes_fk2\n    ON asset_status_changes (status_id);\n\n\n-- Links table\n-- Links among assets in 1:M relationships.\nCREATE TABLE fledge.links (\n       id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n       asset_id   integer                     NOT NULL,\n       properties JSON                        NOT NULL DEFAULT '{}',\n       ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       CONSTRAINT links_fk1 FOREIGN KEY (asset_id)\n       REFERENCES assets (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_links_fk1\n    ON links (asset_id);\n\n-- Assets Linked table\n-- In links, relationship between an asset and other assets.\nCREATE TABLE fledge.asset_links (\n       link_id  integer                     NOT NULL,\n       asset_id integer                     NOT NULL,\n       ts      DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 
'localtime')),\n       CONSTRAINT asset_links_pkey PRIMARY KEY (link_id, asset_id) );\n\nCREATE INDEX fki_asset_links_fk1\n    ON asset_links (link_id);\n\nCREATE INDEX fki_asset_link_fk2\n    ON asset_links (asset_id);\n\n-- Asset Message Status table\n-- Status of the messages to send South\nCREATE TABLE fledge.asset_message_status (\n       id          INTEGER                PRIMARY KEY AUTOINCREMENT,\n       description character varying(255) NOT NULL DEFAULT '' );\n\n-- Asset Messages table\n-- Messages directed to the south devices.\nCREATE TABLE fledge.asset_messages (\n       id        INTEGER                     PRIMARY KEY AUTOINCREMENT,\n       asset_id  integer                     NOT NULL,\n       status_id integer                     NOT NULL,\n       message   JSON                        NOT NULL DEFAULT '{}',\n       ts        DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       CONSTRAINT asset_messages_fk1 FOREIGN KEY (asset_id)\n       REFERENCES assets (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION,\n       CONSTRAINT asset_messages_fk2 FOREIGN KEY (status_id)\n       REFERENCES asset_message_status (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_asset_messages_fk1\n    ON asset_messages (asset_id);\n\nCREATE INDEX fki_asset_messages_fk2\n    ON asset_messages (status_id);\n\n-- Streams table\n-- List of the streams to the Cloud.\nCREATE TABLE fledge.streams (\n    id            INTEGER                      PRIMARY KEY AUTOINCREMENT,         -- Sequence ID\n    description    character varying(255)      NOT NULL DEFAULT '',               -- A brief description of the stream entry\n    properties     JSON                        NOT NULL DEFAULT '{}',             -- A generic set of properties\n    object_stream  JSON                        NOT NULL DEFAULT '{}',             -- Definition of what must be streamed\n   
 object_block   JSON                        NOT NULL DEFAULT '{}',             -- Definition of how the stream must be organised\n    object_filter  JSON                        NOT NULL DEFAULT '{}',             -- Any filter involved in selecting the data to stream\n    active_window  JSON                        NOT NULL DEFAULT '{}',             -- The window of operations\n    active         boolean                     NOT NULL DEFAULT 't',              -- When false, all data to this stream stop and are inactive\n    last_object    bigint                      NOT NULL DEFAULT 0,                -- The ID of the last object streamed (asset or reading, depending on the object_stream)\n    ts             DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime'))); -- Creation or last update\n\n\n-- Configuration table\n-- The configuration in JSON format.\n-- The PK is also used in the REST API\n-- Values is a JSON column\n-- ts is set by default with now().\nCREATE TABLE fledge.configuration (\n       key         character varying(255)      NOT NULL,                          -- Primary key\n       display_name character varying(255)     NOT NULL,                          -- Display Name\n       description character varying(255)      NOT NULL,                          -- Description, in plain text\n       value       JSON                        NOT NULL DEFAULT '{}',             -- JSON object containing the configuration values\n       ts          DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')), -- Timestamp, updated at every change\n       CONSTRAINT configuration_pkey PRIMARY KEY (key) );\n\n\n-- Configuration changes\n-- This table has the same structure of fledge.configuration, plus the timestamp that identifies the time it has changed\n-- The table is used to keep track of the changes in the \"value\" column\nCREATE TABLE fledge.configuration_changes (\n       key                 character varying(255)      NOT NULL,\n       
configuration_ts    DATETIME                    NOT NULL,\n       configuration_value JSON                        NOT NULL DEFAULT '{}',\n       ts                  DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       CONSTRAINT configuration_changes_pkey PRIMARY KEY (key, configuration_ts) );\n\n-- Statistics table\n-- The table is used to keep track of the statistics for Fledge\nCREATE TABLE fledge.statistics (\n       key                 character varying(56)       NOT NULL,                           -- Primary key, all uppercase\n       description         character varying(255)      NOT NULL,                           -- Description, in plan text\n       value               bigint                      NOT NULL DEFAULT 0,                 -- Integer value, the statistics\n       previous_value      bigint                      NOT NULL DEFAULT 0,                 -- Integer value, the prev stat to be updated by metrics collector\n       ts                  DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime'))); -- Timestamp, updated at every change\nCREATE UNIQUE INDEX statistics_ix1\n    ON statistics(key);\n\n-- Statistics history\n-- Keeps history of the statistics in fledge.statistics\n-- The table is updated at startup\nCREATE TABLE fledge.statistics_history (\n       id          INTEGER                     PRIMARY KEY AUTOINCREMENT,          -- Sequence ID\n       key         character varying(56)       NOT NULL,                           -- Coumpund primary key, all uppercase\n       history_ts  DATETIME NOT NULL,                                              -- Compound primary key, the highest value of statistics.ts when statistics are copied here.\n       value       bigint                      NOT NULL DEFAULT 0,                 -- Integer value, the statistics\n       ts          DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')));       -- Timestamp, updated at every change, UTC time\n\nCREATE UNIQUE INDEX 
statistics_history_ix1\n    ON statistics_history (key, history_ts);\n\nCREATE INDEX statistics_history_ix2\n    ON statistics_history (key);\n\nCREATE INDEX statistics_history_ix3\n    ON statistics_history (history_ts);\n\n-- Contains history of the statistics_history table\n-- Data are historicized daily\n--\nCREATE TABLE fledge.statistics_history_daily (\n    year        DATE DEFAULT (STRFTIME('%Y', 'NOW')),\n    day         DATE DEFAULT (STRFTIME('%Y-%m-%d', 'NOW')),\n    key         character varying(56)       NOT NULL,\n    value       bigint                      NOT NULL DEFAULT 0\n);\n\nCREATE INDEX statistics_history_daily_ix1\n    ON statistics_history_daily (year);\n\n-- Resources table\n-- A resource can be anything that is available or can be done in Fledge. Examples:\n-- - Access to assets\n-- - Access to readings\n-- - Access to streams\nCREATE TABLE fledge.resources (\n    id          INTEGER                PRIMARY KEY AUTOINCREMENT,  -- Sequence ID\n    code        character(10)          NOT NULL,\n    description character varying(255) NOT NULL DEFAULT '' );\n\nCREATE UNIQUE INDEX resource_ix1\n    ON resources (code);\n\n-- Roles table\nCREATE TABLE fledge.roles (\n    id          INTEGER   PRIMARY KEY AUTOINCREMENT,\n    name        character varying(25)  NOT NULL,\n    description character varying(255) NOT NULL DEFAULT '' );\n\n\nCREATE UNIQUE INDEX roles_ix1\n    ON roles (name);\n\n-- Roles, Resources and Permissions table\n-- Each role has associated resources, each with a given permission.\nCREATE TABLE fledge.role_resource_permission (\n       role_id     integer NOT NULL,\n       resource_id integer NOT NULL,\n       access      JSON    NOT NULL DEFAULT '{}',\n       CONSTRAINT role_resource_permission_pkey PRIMARY KEY (role_id, resource_id),\n       CONSTRAINT role_resource_permissions_fk1 FOREIGN KEY (role_id)\n       REFERENCES roles (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION,\n      
 CONSTRAINT role_resource_permissions_fk2 FOREIGN KEY (resource_id)\n       REFERENCES resources (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_role_resource_permissions_fk1\n    ON role_resource_permission (role_id);\n\nCREATE INDEX fki_role_resource_permissions_fk2\n    ON role_resource_permission (resource_id);\n\n\n-- Roles Assets Permissions table\n-- Combination of roles, assets and access\nCREATE TABLE fledge.role_asset_permissions (\n    role_id    integer NOT NULL,\n    asset_id   integer NOT NULL,\n    access     JSON    NOT NULL DEFAULT '{}',\n    CONSTRAINT role_asset_permissions_pkey PRIMARY KEY (role_id, asset_id),\n    CONSTRAINT role_asset_permissions_fk1 FOREIGN KEY (role_id)\n    REFERENCES roles (id) MATCH SIMPLE\n            ON UPDATE NO ACTION\n            ON DELETE NO ACTION,\n    CONSTRAINT role_asset_permissions_fk2 FOREIGN KEY (asset_id)\n    REFERENCES assets (id) MATCH SIMPLE\n            ON UPDATE NO ACTION\n            ON DELETE NO ACTION );\n\nCREATE INDEX fki_role_asset_permissions_fk1\n    ON role_asset_permissions (role_id);\n\nCREATE INDEX fki_role_asset_permissions_fk2\n    ON role_asset_permissions (asset_id);\n\n-- Users table\n-- Fledge users table.\n-- Authentication Method:\n-- 0 - Disabled\n-- 1 - PWD\n-- 2 - Public Key\nCREATE TABLE fledge.users (\n       id                INTEGER   PRIMARY KEY AUTOINCREMENT,\n       uname             character varying(80)  NOT NULL,\n       real_name         character varying(255) NOT NULL,\n       role_id           integer                NOT NULL,\n       description       character varying(255) NOT NULL DEFAULT '',\n       pwd               character varying(255) ,\n       public_key        character varying(255) ,\n       enabled           boolean       
         NOT NULL DEFAULT 't',\n       pwd_last_changed  DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       access_method     TEXT CHECK( access_method IN ('any','pwd','cert') )  NOT NULL DEFAULT 'any',\n       hash_algorithm    TEXT CHECK( hash_algorithm IN ('SHA256', 'SHA512') )  NOT NULL DEFAULT 'SHA512',\n       failed_attempts   INTEGER    DEFAULT 0,\n       block_until  DATETIME DEFAULT NULL,\n          CONSTRAINT users_fk1 FOREIGN KEY (role_id)\n          REFERENCES roles (id) MATCH SIMPLE\n                  ON UPDATE NO ACTION\n                  ON DELETE NO ACTION );\n\nCREATE INDEX fki_users_fk1\n    ON users (role_id);\n\nCREATE UNIQUE INDEX users_ix1\n    ON users (uname);\n\n-- User Login table\n-- List of logins executed by the users.\nCREATE TABLE fledge.user_logins (\n       id               INTEGER   PRIMARY KEY AUTOINCREMENT,\n       user_id          integer   NOT NULL,\n       ip               inet      NOT NULL DEFAULT '0.0.0.0',\n       ts               DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       token            character varying(255)      NOT NULL,\n       token_expiration DATETIME NOT NULL,\n       CONSTRAINT user_logins_fk1 FOREIGN KEY (user_id)\n       REFERENCES users (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\n CREATE INDEX fki_user_logins_fk1\n     ON user_logins (user_id);\n\n-- User Password History table\n-- Maintains a history of passwords\nCREATE TABLE fledge.user_pwd_history (\n       id               INTEGER   PRIMARY KEY AUTOINCREMENT,\n       user_id          integer   NOT NULL,\n       pwd              character varying(255),\n       CONSTRAINT user_pwd_history_fk1 FOREIGN KEY (user_id)\n       REFERENCES users (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_user_pwd_history_fk1\n    ON user_pwd_history (user_id);\n\n\n-- User Resource Permissions 
table\n-- Association of users with resources and given permissions for each resource.\nCREATE TABLE fledge.user_resource_permissions (\n       user_id     integer NOT NULL,\n       resource_id integer NOT NULL,\n       access      JSON NOT NULL DEFAULT '{}',\n       CONSTRAINT user_resource_permissions_pkey PRIMARY KEY (user_id, resource_id),\n       CONSTRAINT user_resource_permissions_fk1 FOREIGN KEY (user_id)\n       REFERENCES users (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION,\n       CONSTRAINT user_resource_permissions_fk2 FOREIGN KEY (resource_id)\n       REFERENCES resources (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_user_resource_permissions_fk1\n    ON user_resource_permissions (user_id);\n\nCREATE INDEX fki_user_resource_permissions_fk2\n    ON user_resource_permissions (resource_id);\n\n-- User Asset Permissions table\n-- Association of users with assets\nCREATE TABLE fledge.user_asset_permissions (\n       user_id    integer NOT NULL,\n       asset_id   integer NOT NULL,\n       access     JSON NOT NULL DEFAULT '{}',\n       CONSTRAINT user_asset_permissions_pkey PRIMARY KEY (user_id, asset_id),\n       CONSTRAINT user_asset_permissions_fk1 FOREIGN KEY (user_id)\n       REFERENCES users (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION,\n       CONSTRAINT user_asset_permissions_fk2 FOREIGN KEY (asset_id)\n       REFERENCES assets (id) MATCH SIMPLE\n               ON UPDATE NO ACTION\n               ON DELETE NO ACTION );\n\nCREATE INDEX fki_user_asset_permissions_fk1\n    ON user_asset_permissions (user_id);\n\nCREATE INDEX fki_user_asset_permissions_fk2\n    ON user_asset_permissions (asset_id);\n\n\n-- List of scheduled Processes\nCREATE TABLE fledge.scheduled_processes (\n             name        character varying(255)   NOT NULL,                  -- Name of the process\n             script      
JSON,                                              -- Full path of the process\n             priority    INTEGER                  NOT NULL DEFAULT 999,      -- priority to run for STARTUP\n             CONSTRAINT  scheduled_processes_pkey PRIMARY KEY ( name ) );\n\n-- List of schedules\nCREATE TABLE fledge.schedules (\n             id                uuid                   NOT NULL, -- PK\n             process_name      character varying(255) NOT NULL, -- FK process name\n             schedule_name     character varying(255) NOT NULL, -- schedule name\n             schedule_type     INTEGER                NOT NULL, -- 1 = startup,  2 = timed\n                                                                -- 3 = interval, 4 = manual\n             schedule_interval INTEGER,                         -- Repeat interval\n             schedule_time     INTEGER,                         -- Start time\n             schedule_day      INTEGER,                         -- ISO day 1 = Monday, 7 = Sunday\n             exclusive         boolean NOT NULL DEFAULT 't',    -- true = Only one task can run\n                                                                -- at any given time\n             enabled           boolean NOT NULL DEFAULT 'f',    -- false = A given schedule is disabled by default\n  CONSTRAINT schedules_pkey PRIMARY KEY  ( id ),\n  CONSTRAINT schedules_fk1  FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\n\n-- List of tasks\nCREATE TABLE fledge.tasks (\n             id           uuid                        NOT NULL,                          -- PK\n             schedule_name character varying(255),                                       -- Name of the task\n             schedule_id  uuid                        NOT NULL,                          -- Link between schedule & task table\n             process_name character varying(255)      NOT NULL,              
            -- Name of the task's process\n             state        smallint                    NOT NULL,                          -- 1-Running, 2-Complete, 3-Cancelled, 4-Interrupted\n             start_time   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),       -- The date and time the task started UTC\n             end_time     DATETIME,                                                      -- The date and time the task ended\n             reason       character varying(255),                                        -- The reason why the task ended\n             pid          integer                     NOT NULL,                          -- Linux process id\n             exit_code    integer,                                                       -- Process exit status code (negative means exited via signal)\n  CONSTRAINT tasks_pkey PRIMARY KEY ( id ),\n  CONSTRAINT tasks_fk1 FOREIGN KEY  ( process_name )\n  REFERENCES scheduled_processes ( name ) MATCH SIMPLE\n             ON UPDATE NO ACTION\n             ON DELETE NO ACTION );\n\nCREATE INDEX tasks_ix1\n    ON tasks(schedule_name, start_time);\n\n\n-- Tracks types already created into PI Server\nCREATE TABLE fledge.omf_created_objects (\n    configuration_key character varying(255)    NOT NULL,            -- FK to fledge.configuration\n    type_id           integer                   NOT NULL,            -- Identifies the specific PI Server type\n    asset_code        character varying(255)    NOT NULL,\n    CONSTRAINT omf_created_objects_pkey PRIMARY KEY (configuration_key,type_id, asset_code),\n    CONSTRAINT omf_created_objects_fk1 FOREIGN KEY (configuration_key)\n    REFERENCES configuration (key) MATCH SIMPLE\n            ON UPDATE NO ACTION\n            ON DELETE NO ACTION );\n\n\n-- Backups information\n-- Stores information about executed backups\nCREATE TABLE fledge.backups (\n    id         INTEGER                 PRIMARY KEY AUTOINCREMENT,\n    file_name  character varying(255)  NOT NULL 
DEFAULT '',                   -- Backup file name, expressed as absolute path\n    ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')), -- Backup creation timestamp\n    type       integer                 NOT NULL,                              -- Backup type : 1-Full, 2-Incremental\n    status     integer                 NOT NULL,                              -- Backup status :\n                                                                              --   1-Running\n                                                                              --   2-Completed\n                                                                              --   3-Cancelled\n                                                                              --   4-Interrupted\n                                                                              --   5-Failed\n                                                                              --   6-Restored backup\n    exit_code  integer );                                                     -- Process exit status code\n\n\n-- Fledge DB version: keeps the schema version id\nCREATE TABLE fledge.version (id CHAR(10));\n\n-- Create the configuration category_children table\nCREATE TABLE fledge.category_children (\n       id       integer                 PRIMARY KEY AUTOINCREMENT,\n       parent   character varying(255)  NOT NULL,\n       child    character varying(255)  NOT NULL\n);\n\nCREATE UNIQUE INDEX config_children_idx1\n    ON category_children (parent, child);\n\n-- Create the asset_tracker table\nCREATE TABLE fledge.asset_tracker (\n       id              integer                  PRIMARY KEY AUTOINCREMENT,\n       asset           character(50)            NOT NULL, -- asset name\n       event           character varying(50)    NOT NULL, -- event name\n       service         character varying(255)   NOT NULL, -- service name\n       fledge          character varying(50)    NOT NULL, -- FL service 
name\n       plugin          character varying(50)    NOT NULL, -- Plugin name\n       deprecated_ts   DATETIME                         , -- Set when the asset record is deprecated; NULL means the entry has not been deprecated\n       ts              DATETIME                 DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       data            JSON                     DEFAULT '{}'\n);\n\nCREATE INDEX asset_tracker_ix1 ON asset_tracker (asset);\nCREATE INDEX asset_tracker_ix2 ON asset_tracker (service);\n\n-- Create plugin_data table\n-- Persist plugin data in the storage\nCREATE TABLE fledge.plugin_data (\n\tkey     character varying(255)    NOT NULL,\n\tdata    JSON                      NOT NULL DEFAULT '{}',\n\tservice_name    character varying(255),\n\tCONSTRAINT plugin_data_pkey PRIMARY KEY (key) );\n\n-- Create packages table\nCREATE TABLE fledge.packages (\n             id                uuid                   NOT NULL, -- PK\n             name              character varying(255) NOT NULL, -- Package name\n             action            character varying(10) NOT NULL, -- APT actions:\n                                                                -- list\n                                                                -- install\n                                                                -- purge\n                                                                -- update\n             status            INTEGER                NOT NULL, -- exit code\n                                                                -- -1       - in-progress\n                                                                --  0       - success\n                                                                -- Non-Zero - failed\n             log_file_uri      character varying(255) NOT NULL, -- Package Log file relative path\n  CONSTRAINT packages_pkey PRIMARY KEY  ( id ) );\n\n-- Create filters table\nCREATE TABLE fledge.filters (\n      
       name        character varying(255)        NOT NULL,\n             plugin      character varying(255)        NOT NULL,\n       CONSTRAINT filter_pkey PRIMARY KEY( name ) );\n\n-- Create filter_users table\nCREATE TABLE fledge.filter_users (\n             name        character varying(255)        NOT NULL,\n             user        character varying(255)        NOT NULL);\n\n-- Create control_script table\n-- Script management for control dispatch service\nCREATE TABLE fledge.control_script (\n             name          character varying(255)        NOT NULL,\n             steps         JSON                          NOT NULL DEFAULT '{}',\n             acl           character varying(255),\n             CONSTRAINT    control_script_pkey           PRIMARY KEY (name) );\n\n-- Create control_acl table\n-- Access Control List Management for control dispatch service\nCREATE TABLE fledge.control_acl (\n             name          character varying(255)        NOT NULL,\n             service       JSON                          NOT NULL DEFAULT '{}',\n             url           JSON                          NOT NULL DEFAULT '{}',\n             CONSTRAINT    control_acl_pkey              PRIMARY KEY (name) );\n\n-- Access Control List usage relation\nCREATE TABLE fledge.acl_usage (\n             name            character varying(255)  NOT NULL,  -- ACL name\n             entity_type     character varying(80)   NOT NULL,  -- associated entity type: service or script\n             entity_name     character varying(255)  NOT NULL,  -- associated entity name\n             CONSTRAINT      usage_acl_pkey          PRIMARY KEY (name, entity_type, entity_name) );\n\n-- Create control_source table\nCREATE TABLE fledge.control_source (\n             cpsid            integer                     PRIMARY KEY AUTOINCREMENT,       -- auto source id\n             name             character  varying(40)      NOT NULL,                        -- source name\n             description      
character  varying(120)     NOT NULL                         -- source description\n            );\n\n-- Create control_destination table\nCREATE TABLE fledge.control_destination (\n             cpdid            integer                     PRIMARY KEY AUTOINCREMENT,       -- auto destination id\n             name             character  varying(40)      NOT NULL,                        -- destination name\n             description      character  varying(120)     NOT NULL                         -- destination description\n            );\n\n-- Create control_pipelines table\nCREATE TABLE fledge.control_pipelines (\n             cpid             integer                     PRIMARY KEY AUTOINCREMENT,       -- control pipeline id\n             name             character  varying(255)     NOT NULL                 ,       -- control pipeline name\n             stype            integer                                              ,       -- source type id from control_source table\n             sname            character  varying(80)                               ,       -- source name from control_source table\n             dtype            integer                                              ,       -- destination type id from control_destination table\n             dname            character  varying(80)                               ,       -- destination name from control_destination table\n             enabled          boolean                     NOT NULL DEFAULT  'f'    ,       -- false = A given pipeline is disabled by default\n             execution        character  varying(20)      NOT NULL DEFAULT  'shared'       -- pipeline is executed with the shared execution model by default\n             );\n\n-- Create control_filters table\nCREATE TABLE fledge.control_filters (\n             fid              integer                     PRIMARY KEY AUTOINCREMENT,       -- auto filter id\n             cpid             integer                     NOT NULL               
,       -- control pipeline id\n             forder           integer                     NOT NULL                 ,       -- filter order\n             fname            character  varying(255)     NOT NULL                 ,       -- Name of the filter instance\n             CONSTRAINT       control_filters_fk1         FOREIGN KEY (cpid)\n             REFERENCES       control_pipelines (cpid)    MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n             );\n\n-- Create control_api table\nCREATE TABLE fledge.control_api (\n             name             character  varying(255)     NOT NULL                 ,       -- control API name\n             description      character  varying(255)     NOT NULL                 ,       -- description of control API\n             type             integer                     NOT NULL                 ,       -- 0 for write and 1 for operation\n             operation_name   character  varying(255)                              ,       -- name of the operation; only valid if type is operation\n             destination      integer                     NOT NULL                 ,       -- destination of request; 0-broadcast, 1-service, 2-asset, 3-script\n             destination_arg  character  varying(255)                              ,       -- name of the destination; only used if destination is non-zero\n             anonymous        boolean                     NOT NULL DEFAULT  'f'    ,       -- allow anonymous callers to make requests to the control API; false by default\n             CONSTRAINT       control_api_pname           PRIMARY KEY (name)\n             );\n\n-- Create control_api_parameters table\nCREATE TABLE fledge.control_api_parameters (\n             name             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.control_api\n             parameter        character  varying(255)     NOT NULL                 ,       -- name of parameter\n             value            character 
 varying(255)                              ,       -- value of parameter if constant otherwise default\n             constant         boolean                     NOT NULL                 ,       -- parameter is either a constant or variable\n             FOREIGN KEY (name) REFERENCES control_api (name)\n             );\n\n-- Create control_api_acl table\nCREATE TABLE fledge.control_api_acl (\n             name             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.control_api\n             user             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.users\n             FOREIGN KEY (name) REFERENCES control_api (name)                      ,\n             FOREIGN KEY (user) REFERENCES users (uname)\n             );\n\n-- Create monitors table\nCREATE TABLE fledge.monitors (\n             service       character varying(255) NOT NULL,\n             monitor       character varying(80) NOT NULL,\n             minimum       integer,\n             maximum       integer,\n             average       integer,\n             samples       integer,\n             ts            DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))\n             );\n\nCREATE INDEX monitors_ix1\n    ON monitors(service, monitor);\n\n-- Create alerts table\n\nCREATE TABLE fledge.alerts (\n       key         character varying(80)       NOT NULL,                                  -- Primary key\n       message     character varying(255)      NOT NULL,                                 -- Alert Message\n       urgency     SMALLINT                    NOT NULL,                                 -- 1 Critical - 2 High - 3 Normal - 4 Low\n       ts          DATETIME    DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),     -- Timestamp, updated at every change\n       CONSTRAINT  alerts_pkey PRIMARY KEY (key) );\n\n----------------------------------------------------------------------\n-- Initialization phase - 
DML\n----------------------------------------------------------------------\n\n-- Roles\nDELETE FROM fledge.roles;\nINSERT INTO fledge.roles ( name, description )\n     VALUES ('admin', 'All CRUD privileges'),\n            ('user', 'All CRUD operations and self profile management'),\n            ('view', 'Only to view the configuration'),\n            ('data-view', 'Only read the data in buffer'),\n            ('control', 'Same privileges as an editor, plus access to control scripts and pipelines');\n\n-- Users\nDELETE FROM fledge.users;\nINSERT INTO fledge.users ( uname, real_name, pwd, role_id, description )\n     VALUES ('admin', 'Admin user', '495f7f5b17c534dbeabab3da2287a934b32ed6876568563b04c312be49e8773299243abd3881d13112ccfb67c4fb3ec8231406474810e1f6eb347d61c63785d4:672169c60df24b76b6b94e78cad800f8', 1, 'admin user'),\n            ('user', 'Normal user', '495f7f5b17c534dbeabab3da2287a934b32ed6876568563b04c312be49e8773299243abd3881d13112ccfb67c4fb3ec8231406474810e1f6eb347d61c63785d4:672169c60df24b76b6b94e78cad800f8', 2, 'normal user');\n\n-- User password history\nDELETE FROM fledge.user_pwd_history;\n\n-- User logins\nDELETE FROM fledge.user_logins;\n\n-- Log Codes\nDELETE FROM fledge.log_codes;\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'PURGE', 'Data Purging Process' ),\n            ( 'LOGGN', 'Logging Process' ),\n            ( 'STRMN', 'Streaming Process' ),\n            ( 'SYPRG', 'System Purge' ),\n            ( 'START', 'System Startup' ),\n            ( 'FSTOP', 'System Shutdown' ),\n            ( 'CONCH', 'Configuration Change' ),\n            ( 'CONAD', 'Configuration Addition' ),\n            ( 'SCHCH', 'Schedule Change' ),\n            ( 'SCHAD', 'Schedule Addition' ),\n            ( 'SRVRG', 'Service Registered' ),\n            ( 'SRVUN', 'Service Unregistered' ),\n            ( 'SRVFL', 'Service Fail' ),\n            ( 'SRVRS', 'Service Restart' ),\n            ( 'NHCOM', 'North Process Complete' ),\n            ( 
'NHDWN', 'North Destination Unavailable' ),\n            ( 'NHAVL', 'North Destination Available' ),\n            ( 'UPEXC', 'Update Complete' ),\n            ( 'BKEXC', 'Backup Complete' ),\n            ( 'NTFDL', 'Notification Deleted' ),\n            ( 'NTFAD', 'Notification Added' ),\n            ( 'NTFSN', 'Notification Sent' ),\n            ( 'NTFCL', 'Notification Cleared' ),\n            ( 'NTFST', 'Notification Server Startup' ),\n            ( 'NTFSD', 'Notification Server Shutdown' ),\n            ( 'PKGIN', 'Package installation' ),\n            ( 'PKGUP', 'Package updated' ),\n            ( 'PKGRM', 'Package purged' ),\n            ( 'DSPST', 'Dispatcher Startup' ),\n            ( 'DSPSD', 'Dispatcher Shutdown' ),\n            ( 'ESSRT', 'External Service Startup' ),\n            ( 'ESSTP', 'External Service Shutdown' ),\n            ( 'ASTDP', 'Asset deprecated' ),\n            ( 'ASTUN', 'Asset un-deprecated' ),\n            ( 'PIPIN', 'Pip installation' ),\n            ( 'AUMRK', 'Audit Log Marker' ),\n            ( 'USRAD', 'User Added' ),\n            ( 'USRDL', 'User Deleted' ),\n            ( 'USRCH', 'User Changed' ),\n            ( 'USRRS', 'User Restored' ),\n            ( 'ACLAD', 'ACL Added' ),( 'ACLCH', 'ACL Changed' ),( 'ACLDL', 'ACL Deleted' ),\n            ( 'CTSAD', 'Control Script Added' ),( 'CTSCH', 'Control Script Changed' ),('CTSDL', 'Control Script Deleted' ),\n            ( 'CTPAD', 'Control Pipeline Added' ),( 'CTPCH', 'Control Pipeline Changed' ),('CTPDL', 'Control Pipeline Deleted' ),\n            ( 'CTEAD', 'Control Entrypoint Added' ),( 'CTECH', 'Control Entrypoint Changed' ),('CTEDL', 'Control Entrypoint Deleted' ),\n            ( 'BUCAD', 'Bucket Added' ), ( 'BUCCH', 'Bucket Changed' ), ( 'BUCDL', 'Bucket Deleted' ),\n            ( 'USRBK', 'User Blocked' ), ( 'USRUB', 'User Unblocked' )\n            ;\n\n--\n-- Configuration parameters\n--\nDELETE FROM fledge.configuration;\n\n\n-- Statistics\nINSERT INTO 
fledge.statistics ( key, description, value, previous_value )\n     VALUES ( 'READINGS',             'Readings received by Fledge', 0, 0 ),\n            ( 'BUFFERED',             'Readings currently in the Fledge buffer', 0, 0 ),\n            ( 'UNSENT',               'Readings filtered out in the send process', 0, 0 ),\n            ( 'PURGED',               'Readings removed from the buffer by the purge process', 0, 0 ),\n            ( 'UNSNPURGED',           'Readings that were purged from the buffer before being sent', 0, 0 ),\n            ( 'DISCARDED',            'Readings discarded by the South Service before being placed in the buffer. This may be due to an error in the readings themselves.', 0, 0 );\n\n--\n-- Scheduled processes\n--\n-- Use this to create guids: https://www.uuidgenerator.net/version1\n-- Weekly repeat for timed schedules: set schedule_interval to 168:00:00\n\n-- Core Tasks\n--\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'purge',               '[\"tasks/purge\"]'       );\nINSERT INTO fledge.scheduled_processes (name, script)   VALUES ( 'purge_system',        '[\"tasks/purge_system\"]');\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'stats collector',     '[\"tasks/statistics\"]'  );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'FledgeUpdater',       '[\"tasks/update\"]'      );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'certificate checker', '[\"tasks/check_certs\"]' );\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'update checker',      '[\"tasks/check_updates\"]');\n\n-- Storage Tasks\n--\nINSERT INTO fledge.scheduled_processes (name, script) VALUES ('backup',  '[\"tasks/backup\"]'  );\nINSERT INTO fledge.scheduled_processes (name, script) VALUES ('restore', '[\"tasks/restore\"]' );\n\n-- South, Notification, North Tasks\n--\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'south_c',           
'[\"services/south_c\"]',         100 );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'notification_c',    '[\"services/notification_c\"]',   30 );\nINSERT INTO fledge.scheduled_processes (name, script)             VALUES ( 'north_c',           '[\"tasks/north_c\"]'                 );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'north_C',           '[\"services/north_C\"]',         200 );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'dispatcher_c',      '[\"services/dispatcher_c\"]',     20 );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'bucket_storage_c',  '[\"services/bucket_storage_c\"]', 10 );\nINSERT INTO fledge.scheduled_processes (name, script, priority)   VALUES ( 'pipeline_c',        '[\"services/pipeline_c\"]',        90 );\n\n\n-- Automation script tasks\n--\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'automation_script', '[\"tasks/automation_script\"]' );\n\n--\n-- Schedules\n--\n-- Use this to create guids: https://www.uuidgenerator.net/version1\n-- Weekly repeat for timed schedules: set schedule_interval to 168:00:00\n--\n\n\n-- Core Tasks\n--\n\n-- Purge\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'cea17db8-6ccc-11e7-907b-a6006ad3dba0', -- id\n                'purge',                                -- schedule_name\n                'purge',                                -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:10:00',                             -- schedule_interval (every 10 minutes)\n                't',                                   -- exclusive\n                't'                                    -- 
enabled\n              );\n\n--\n-- Purge System\n--\n-- Purge old information from the fledge database\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                               schedule_time, schedule_interval, exclusive, enabled )\nVALUES ( 'd37265f0-c83a-11eb-b8bc-0242ac130003', -- id\n         'purge_system',                         -- schedule_name\n         'purge_system',                         -- process_name\n         3,                                      -- schedule_type (interval)\n         NULL,                                   -- schedule_time\n         '23:50:00',                             -- schedule_interval (every 23 hours 50 minutes)\n         't',                                    -- exclusive\n         't'                                     -- enabled\n       );\n\n\n-- Statistics collection\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '2176eb68-7303-11e7-8cf7-a6006ad3dba0', -- id\n                'stats collection',                     -- schedule_name\n                'stats collector',                      -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '00:00:15',                             -- schedule_interval\n                't',                                   -- exclusive\n                't'                                    -- enabled\n              );\n\n-- Update checker\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '852cd8e4-3c29-440b-89ca-2c7691b0450d', -- id\n                'update checker',                       -- schedule_name\n                'update checker',              
         -- process_name\n                2,                                      -- schedule_type (timed)\n                '00:05:00',                             -- schedule_time\n                '00:00:00',                             -- schedule_interval\n                't',                                   -- exclusive\n                't'                                    -- enabled\n              );\n\n-- Check for expired certificates\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '2176eb68-7303-11e7-8cf7-a6107ad3db21', -- id\n                'certificate checker',                  -- schedule_name\n                'certificate checker',                  -- process_name\n                2,                                      -- schedule_type (timed)\n                '00:05:00',                             -- schedule_time\n                '12:00:00',                             -- schedule_interval\n                't',                                   -- exclusive\n                't'                                    -- enabled\n              );\n\n-- Storage Tasks\n--\n\n-- Execute a Backup every 1 hour\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'd1631422-9ec6-11e7-abc4-cec278b6b50a', -- id\n                'backup hourly',                        -- schedule_name\n                'backup',                               -- process_name\n                3,                                      -- schedule_type (interval)\n                NULL,                                   -- schedule_time\n                '01:00:00',                             -- schedule_interval\n                't',                                   -- exclusive\n                'f'                 
                  -- disabled\n              );\n\n-- On demand Backup\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( 'fac8dae6-d8d1-11e7-9296-cec278b6b50a', -- id\n                'backup on demand',                     -- schedule_name\n                'backup',                               -- process_name\n                4,                                      -- schedule_type (manual)\n                NULL,                                   -- schedule_time\n                '00:00:00',                             -- schedule_interval\n                't',                                   -- exclusive\n                't'                                    -- enabled\n              );\n\n-- On demand Restore\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '8d4d3ca0-de80-11e7-80c1-9a214cf093ae', -- id\n                'restore on demand',                    -- schedule_name\n                'restore',                              -- process_name\n                4,                                      -- schedule_type (manual)\n                NULL,                                   -- schedule_time\n                '00:00:00',                             -- schedule_interval\n                't',                                   -- exclusive\n                't'                                    -- enabled\n              );\n\n-- The Schema Service table used to hold information about extension schemas\nCREATE TABLE fledge.service_schema (\n             name          character varying(255)        NOT NULL,\n             service       character varying(255)        NOT NULL,\n             version       integer                       NOT NULL,\n             definition    JSON);\n\n-- Control 
Source\nDELETE FROM fledge.control_source;\nINSERT INTO fledge.control_source ( name, description )\n     VALUES ('Any', 'Any source.'),\n            ('Service', 'A named service in source of the control pipeline.'),\n            ('API', 'The control pipeline source is the REST API.'),\n            ('Notification', 'The control pipeline originated from a notification.'),\n            ('Schedule', 'The control request was triggered by a schedule.'),\n            ('Script', 'The control request has come from the named script.');\n\n-- Control Destination\nDELETE FROM fledge.control_destination;\nINSERT INTO fledge.control_destination ( name, description )\n     VALUES ('Any', 'Any destination.'),\n            ('Service', 'A name of service that is being controlled.'),\n            ('Asset', 'A name of asset that is being controlled.'),\n            ('Script', 'A name of script that will be executed.'),\n            ('Broadcast', 'No name is applied and pipeline will be considered for any control writes or operations to broadcast destinations.');\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/init_readings.sql",
    "content": "----------------------------------------------------------------------\n-- Copyright (c) 2020 OSIsoft, LLC\n--\n-- Licensed under the Apache License, Version 2.0 (the \"License\");\n-- you may not use this file except in compliance with the License.\n-- You may obtain a copy of the License at\n--\n--     http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing, software\n-- distributed under the License is distributed on an \"AS IS\" BASIS,\n-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-- See the License for the specific language governing permissions and\n-- limitations under the License.\n----------------------------------------------------------------------\n\n--\n-- init_readings.sql\n--\n-- SQLite script to create the Fledge persistent Layer\n--\n\n-- NOTE:\n--\n-- This schema has to be used with Sqlite3 JSON1 extension\n--\n-- This script must be launched with sqlite3 command line tool:\n--  sqlite3 /path/readings.db\n--   > ATTACH DATABASE '/path/readings.db' AS 'readings'\n--   > .read init_readings.sql\n--   > .quit\n\n----------------------------------------------------------------------\n-- DDL CONVENTIONS\n--\n-- Tables:\n-- * Names are in plural, terms are separated by _\n-- * Columns are, when possible, not null and have a default value.\n--\n-- Columns:\n-- id      : It is commonly the PK of the table, a smallint, integer or bigint.\n-- xxx_id  : It usually refers to a FK, where \"xxx\" is name of the table.\n-- code    : Usually an AK, based on fixed length characters.\n-- ts      : The timestamp with microsec precision and tz. 
It is updated at\n--           every change.\n\n----------------------------------------------------------------------\n-- SCHEMA CREATION\n----------------------------------------------------------------------\n\n-- Readings table\n-- This table contains the readings for assets.\n-- An asset can be a south service with multiple sensors, a single sensor,\n-- a software component or anything that generates data that is sent to Fledge\nCREATE TABLE readings.readings (\n    id         INTEGER                     PRIMARY KEY AUTOINCREMENT,\n    asset_code character varying(50)       NOT NULL,                         -- The provided asset code. Not necessarily located in the\n                                                                             -- assets table.\n    reading    JSON                        NOT NULL DEFAULT '{}',            -- The json object received\n    user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),      -- UTC time\n    ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))       -- UTC time\n);\n\nCREATE INDEX fki_readings_fk1\n    ON readings (asset_code, user_ts desc);\n\nCREATE INDEX readings_ix2\n    ON readings (asset_code);\n\nCREATE INDEX readings_ix3\n    ON readings (user_ts);\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/schema_update.sh",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n__author__=\"Massimiliano Pinto\"\n__version__=\"1.0\"\n\nFLEDGE_DB_VERSION=$1\nNEW_VERSION=$2\nSQLITE_SQL=$3\n\nPLUGIN_NAME=\"sqlitelb\"\n\necho \"$@\" | grep -q -- --verbose && VERBOSE=\"Y\"\n\n# Include logging\n. $FLEDGE_ROOT/scripts/common/write_log.sh\n\n# Logger wrapper\nschema_update_log() {\n    write_log \"Storage\" \"scripts.plugins.storage.${PLUGIN_NAME}.schema_update\" \"$1\" \"$2\" \"$3\" \"$4\"\n}\n\n# Parameters passed by the caller\nif [ ! \"$1\" ]; then\n   schema_update_log \"err\" \"Error: missing required parameters for upgrade/downgrade. 
Fledge cannot start.\" \"all\" \"pretty\"\n   exit 1\nfi\n\n# Same version check: do nothing\nif [ \"${FLEDGE_DB_VERSION}\" == \"${NEW_VERSION}\" ]; then\n    schema_update_log \"info\" \"Fledge DB schema is up to date at version ${FLEDGE_DB_VERSION}\" \"logonly\" \"pretty\"\n    exit 0\nfi\n\n# Perform DB Upgrade\ndb_upgrade()\n{\n    UPDATE_SCRIPTS_DIR=\"$FLEDGE_ROOT/scripts/plugins/storage/${PLUGIN_NAME}/upgrade\"\n    # Start from next schema revision\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} + 1`\n    while [ \"${CHECK_VER}\" -le ${NEW_VERSION} ]\n    do\n        UPGRADE_SCRIPT=\"${UPDATE_SCRIPTS_DIR}/${CHECK_VER}.sql\"\n        if [ ! -e \"${UPGRADE_SCRIPT}\" ]; then\n            schema_update_log \"err\" \"Error in schema Upgrade: cannot find file ${UPGRADE_SCRIPT} \"\\\n\"required for [${FLEDGE_DB_VERSION}] to [${NEW_VERSION}] upgrade. Exiting\" \"all\" \"pretty\"\n            return 1\n        fi\n        CHECK_VER=`expr $CHECK_VER + 1`\n    done\n\n    START_UPGRADE=\"\"\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} + 1`\n    # sort in ascending order\n    for sql_file in `ls -1 ${UPDATE_SCRIPTS_DIR}/*.sql | sort -V`\n        do \n            # Get START_VER from the filename (START_VER.sql)\n            START_VER=`echo $(basename -s '.sql' $sql_file)`\n\n            # Skip current file?\n            # Logic: if the sql_file name has START_VER != CHECK_VER, skip it\n            # else mark the START_UPGRADE\n            if [ ! 
\"${START_UPGRADE}\" ] && [ \"${START_VER}\" != \"${CHECK_VER}\" ]; then\n                if [ \"${VERBOSE}\" ]; then\n                    schema_update_log \"info\" \"Skipping upgrade $(basename ${sql_file}) \"\\\n\"for Fledge upgrade from ${FLEDGE_DB_VERSION} to ${NEW_VERSION}\" \"logonly\" \"pretty\"\n                fi\n\n                # Get next file in the list\n                continue\n            else\n                START_UPGRADE=\"Y\"\n            fi\n\n            # Perform Upgrade\n            if [ \"${START_UPGRADE}\" ]; then\n                # Prepare command string for error reporting\n                SQL_COMMAND=\"${SQLITE_SQL} '${DEFAULT_SQLITE_DB_FILE}' \\\"ATTACH DATABASE \"\\\n\"'${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; .read '${sql_file}' .quit\\\"\"\n                if [ \"${VERBOSE}\" ]; then\n                    schema_update_log \"info\" \"Applying upgrade $(basename ${sql_file}) ...\" \"logonly\" \"pretty\"\n                    schema_update_log \"info\" \"Calling [${SQL_COMMAND}]\" \"logonly\" \"pretty\"\n                fi\n\n                # Call the DB script\n                COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}'          AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}' AS 'readings';\n.read '${sql_file}'\n.quit\nEOF`\n                RET_CODE=$?\n                if [ \"${RET_CODE}\" -ne 0 ]; then\n                    schema_update_log \"err\" \"Failure in upgrade command [${SQL_COMMAND}]: ${COMMAND_OUTPUT}. 
Exiting\" \"all\" \"pretty\"\n                    return 1\n                fi\n\n                # Update the DB version\n                UPDATE_VER=`basename -s .sql ${sql_file}`\n                UPDATE_COMMAND=\"${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" \\\"ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; UPDATE fledge.version SET id = '${UPDATE_VER}';\\\" 2>&1\"\n                UPDATE_OUTPUT=`eval \"${UPDATE_COMMAND}\"`\n                RET_CODE=$?\n                if [ \"${RET_CODE}\" -ne 0 ]; then\n                    schema_update_log \"err\" \"Failure in upgrade command [${UPDATE_COMMAND}]: ${UPDATE_OUTPUT}. Exiting\" \"all\" \"pretty\"\n                    return 1\n                fi\n\n                # If \"ver\" in current filename is NEW_VERSION we are done\n                if [ \"${START_VER}\" == \"${NEW_VERSION}\" ]; then\n                    if [ \"${VERBOSE}\" ]; then\n                        schema_update_log \"info\" \"Found last upgrade file $(basename ${sql_file}) for \"\\\n\"${FLEDGE_DB_VERSION} to ${NEW_VERSION} version upgrade\" \"logonly\" \"pretty\"\n                    fi\n                    # Report success\n                    schema_update_log \"info\" \"Fledge DB schema has been upgraded to version [${NEW_VERSION}]\" \"all\" \"pretty\"\n                    return 0\n                fi\n            fi\n        done\n        # Report error\n        if [ \"${START_UPGRADE}\" ]; then\n            schema_update_log \"err\" \"Error: the Fledge DB schema has not been upgraded \"\\\n\"to version [${NEW_VERSION}], the last sql file processed was [${sql_file}]\" \"all\" \"pretty\"\n            return 1\n        fi\n}\n\n# Perform DB Downgrade\ndb_downgrade()\n{\n    DOWNGRADE_SCRIPTS_DIR=\"$FLEDGE_ROOT/scripts/plugins/storage/${PLUGIN_NAME}/downgrade\"\n    # Start from the previous schema revision\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} - 1`\n    while [ \"${CHECK_VER}\" -ge ${NEW_VERSION} ]\n    do\n        
DOWNGRADE_SCRIPT=\"${DOWNGRADE_SCRIPTS_DIR}/${CHECK_VER}.sql\"\n        if [ ! -e \"${DOWNGRADE_SCRIPT}\" ]; then\n            schema_update_log \"err\" \"Error in schema Downgrade: cannot find file ${DOWNGRADE_SCRIPT} \"\\\n\"required for [${FLEDGE_DB_VERSION}] to [${NEW_VERSION}] downgrade. Exiting\" \"all\" \"pretty\"\n            return 1\n        fi\n        CHECK_VER=`expr $CHECK_VER - 1`\n    done\n\n    START_DOWNGRADE=\"\"\n    CHECK_VER=`expr ${FLEDGE_DB_VERSION} - 1`\n    # sort in descending order\n    for sql_file in `ls -1 ${DOWNGRADE_SCRIPTS_DIR}/*.sql | sort -rV`\n        do \n            # Get START_VER from the filename (START_VER.sql)\n            START_VER=`echo $(basename -s '.sql' $sql_file)`\n\n            # Skip current file?\n            # Logic: if the sql_file name has START_VER != CHECK_VER, skip it\n            # else mark START_DOWNGRADE\n            if [ ! \"${START_DOWNGRADE}\" ] && [ \"${START_VER}\" != \"${CHECK_VER}\" ]; then\n                if [ \"${VERBOSE}\" ]; then\n                    schema_update_log \"info\" \"Skipping downgrade $(basename ${sql_file}) \"\\\n\"for Fledge downgrade from ${FLEDGE_DB_VERSION} to ${NEW_VERSION}\" \"logonly\" \"pretty\"\n                fi\n\n                # Get next file in the list\n                continue\n            else\n                START_DOWNGRADE=\"Y\"\n            fi\n\n            # Perform Downgrade\n            if [ \"${START_DOWNGRADE}\" ]; then\n                # Prepare command string for message reporting\n                SQL_COMMAND=\"${SQLITE_SQL} '${DEFAULT_SQLITE_DB_FILE}' \\\"ATTACH DATABASE \"\\\n\"'${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; .read '${sql_file}' .quit\\\"\"\n                if [ \"${VERBOSE}\" ]; then\n                    schema_update_log \"info\" \"Applying downgrade $(basename ${sql_file}) ...\" \"logonly\" \"pretty\"\n                    schema_update_log \"info\" \"Calling [${SQL_COMMAND}]\" \"logonly\" \"pretty\"\n                fi\n\n   
             # Call the DB script\n                COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge';\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}' AS 'readings';\n.read '${sql_file}'\n.quit\nEOF`\n                RET_CODE=$?\n                if [ \"${RET_CODE}\" -ne 0 ]; then\n                    schema_update_log \"err\" \"Failure in downgrade command [${SQL_COMMAND}]: ${COMMAND_OUTPUT}. Exiting\" \"all\" \"pretty\"\n                    return 1\n                fi\n\n                # Update DB version\n                UPDATE_COMMAND=\"${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" \\\"ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; UPDATE fledge.version SET id = '${START_VER}';\\\" 2>&1\"\n                UPDATE_OUTPUT=`eval \"${UPDATE_COMMAND}\"`\n                RET_CODE=$?\n                if [ \"${RET_CODE}\" -ne 0 ]; then\n                    schema_update_log \"err\" \"Failure in downgrade command [${UPDATE_COMMAND}]: ${UPDATE_OUTPUT}. 
Exiting\" \"all\" \"pretty\"\n                    return 1\n                fi\n\n                # If \"ver\" in current filename is NEW_VERSION we are done\n                if [ \"${START_VER}\" == \"${NEW_VERSION}\" ]; then\n                    if [ \"${VERBOSE}\" ]; then\n                        schema_update_log \"info\" \"Found last downgrade file $(basename ${sql_file}) for \"\\\n\"${FLEDGE_DB_VERSION} to ${NEW_VERSION} version downgrade\" \"logonly\" \"pretty\"\n                    fi\n                    # Report success\n                    schema_update_log \"info\" \"Fledge DB schema has been downgraded to version [${NEW_VERSION}]\" \"all\" \"pretty\"\n                    return 0\n                fi\n            fi\n        done\n        # Report error\n        if [ \"${START_DOWNGRADE}\" ]; then\n            schema_update_log \"err\" \"Error: the Fledge DB schema has not been downgraded \"\\\n\"to version [${NEW_VERSION}], the last sql file processed was [${sql_file}]\" \"all\" \"pretty\"\n            return 1\n        fi\n}\n\n# Check whether we need to Upgrade or Downgrade\nCHECK_OPERATION=`printf '%s\\n' \"${NEW_VERSION}\" \"${FLEDGE_DB_VERSION}\" | sort -V | head -n 1`\nif [ \"${CHECK_OPERATION}\" == \"${NEW_VERSION}\" ]; then\n    SCHEMA_OPT=\"DOWNGRADE\"\nelse\n    SCHEMA_OPT=\"UPGRADE\"\nfi\n\n# Call the schema operation\nif [ \"${SCHEMA_OPT}\" == \"UPGRADE\" ]; then\n    db_upgrade || exit 1\nelse\n    db_downgrade || exit 1\nfi\n\nexit 0\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/43.sql",
    "content": "-- Contains history of the statistics_history table\n-- Data are historicized daily\n--\nBEGIN TRANSACTION;\n\nDROP INDEX IF EXISTS fledge.statistics_history_daily_ix1;\nDROP TABLE IF EXISTS fledge.statistics_history_daily;\n\nCREATE TABLE fledge.statistics_history_daily (\n                                                 year        DATE DEFAULT (STRFTIME('%Y', 'NOW')),\n                                                 day         DATE DEFAULT (STRFTIME('%Y-%m-%d', 'NOW')),\n                                                 key         character varying(56)       NOT NULL,\n                                                 value       bigint                      NOT NULL DEFAULT 0\n);\n\nCREATE INDEX fledge.statistics_history_daily_ix1\n    ON statistics_history_daily (year);\n\n--- statistics_history_daily ------------------------------------------------------------------:\n\nINSERT INTO fledge.statistics_history_daily\n(year, day, key, value)\nSELECT\n    STRFTIME('%Y', date(history_ts)),\n    date(history_ts),\n    key,\n    sum(\"value\") AS \"value\"\nFROM fledge.statistics_history\nWHERE history_ts < datetime('now', '-7 days')\nGROUP BY date(history_ts), key;\n\nDELETE FROM fledge.statistics_history WHERE history_ts < datetime('now', '-7 days');\n\n---  -----------------------------------------------------------------------------------------:\n\nDELETE FROM fledge.tasks WHERE start_time < datetime('now', '-30 days');\nDELETE FROM fledge.log   WHERE ts         < datetime('now', '-30 days');\n\n--- Insert purge system schedule and process entry\nDELETE FROM fledge.schedules           WHERE id   = 'd37265f0-c83a-11eb-b8bc-0242ac130003';\nDELETE FROM fledge.scheduled_processes WHERE name = 'purge_system';\n\nINSERT INTO fledge.scheduled_processes (name, script) VALUES ('purge_system', '[\"tasks/purge_system\"]');\nINSERT INTO fledge.schedules (id, schedule_name, process_name, schedule_type, schedule_time, schedule_interval, exclusive, 
enabled)\nVALUES ('d37265f0-c83a-11eb-b8bc-0242ac130003', -- id\n        'purge_system',                         -- schedule_name\n        'purge_system',                         -- process_name\n        3,                                      -- schedule_type (interval)\n        NULL,                                   -- schedule_time\n        '23:50:00',                             -- schedule_interval (every 23 hours and 50 minutes)\n        't',                                    -- exclusive\n        't'                                     -- enabled\n    );\n\nCOMMIT;"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/44.sql",
    "content": "UPDATE fledge.configuration SET value = json_set(value, '$.retainUnsent', json('{\"description\": \"Retain data that has not been sent yet.\", \"type\": \"enumeration\", \"options\":[\"purge unsent\", \"retain unsent to any destination\", \"retain unsent to all destinations\"], \"default\": \"purge unsent\", \"displayName\": \"Retain Unsent Data\",\"value\": \"purge unsent\"}'))\n        WHERE key = 'PURGE_READ' AND\n               json_extract(value, '$.retainUnsent.value')  = \"false\";\n\nUPDATE fledge.configuration SET value = json_set(value, '$.retainUnsent', json('{\"description\": \"Retain data that has not been sent yet.\", \"type\": \"enumeration\", \"options\":[\"purge unsent\", \"retain unsent to any destination\", \"retain unsent to all destinations\"], \"default\": \"purge unsent\", \"displayName\": \"Retain Unsent Data\",\"value\": \"retain unsent to all destinations\"}'))\n        WHERE  key = 'PURGE_READ' AND\n               json_extract(value, '$.retainUnsent.value')  = \"true\";\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/45.sql",
    "content": "-- Create Control service support table\n\nDROP TABLE IF EXISTS fledge.control_script;\nDROP TABLE IF EXISTS fledge.control_acl;\n\n-- Script management for control dispatch service\nCREATE TABLE fledge.control_script (\n             name          character varying(255)        NOT NULL,\n             steps         JSON                          NOT NULL DEFAULT '{}',\n             acl           character varying(255),\n             CONSTRAINT    control_script_pkey           PRIMARY KEY (name) );\n\n-- Access Control List Management for control dispatch service\nCREATE TABLE fledge.control_acl (\n             name          character varying(255)        NOT NULL,\n             service       JSON                          NOT NULL DEFAULT '{}',\n             url           JSON                          NOT NULL DEFAULT '{}',\n             CONSTRAINT    control_acl_pkey              PRIMARY KEY (name) );\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/46.sql",
    "content": "-- Scheduled process entry for dispatcher microservice\nINSERT INTO fledge.scheduled_processes SELECT 'dispatcher_c', '[\"services/dispatcher_c\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'dispatcher_c');\n\n-- Dispatcher log codes\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'DSPST', 'Dispatcher Startup' ),\n            ( 'DSPSD', 'Dispatcher Shutdown' );"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/47.sql",
    "content": "-- Scheduled process entry for automation script task\n\nINSERT INTO fledge.scheduled_processes SELECT 'automation_script', '[\"tasks/automation_script\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'automation_script');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/48.sql",
    "content": "-- Scheduled process entry for BucketStorage microservice\n\nINSERT INTO fledge.scheduled_processes SELECT 'bucket_storage_c', '[\"services/bucket_storage_c\"]' WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'bucket_storage_c');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/49.sql",
    "content": "-- Add id column in category_children table\n\nBEGIN TRANSACTION;\n\n-- Remove existing index\nDROP INDEX IF EXISTS fledge.config_children_idx1;\n\n-- Rename existing table into a temp one\nALTER TABLE fledge.category_children RENAME TO category_children_old;\n\n-- Create new table\nCREATE TABLE fledge.category_children (\n    id       integer                 PRIMARY KEY AUTOINCREMENT,\n    parent   character varying(255)  NOT NULL,\n    child    character varying(255)  NOT NULL\n);\n\n-- Add unique index for parent, child\nCREATE UNIQUE INDEX fledge.config_children_idx1 ON category_children (parent, child);\n\n-- Copy data\nINSERT INTO fledge.category_children(parent, child) SELECT parent, child FROM fledge.category_children_old;\n\n-- Remove temp table\nDROP TABLE fledge.category_children_old;\n\nCOMMIT;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/50.sql",
    "content": "-- The Schema Service table used to hold information about extension schemas\nCREATE TABLE fledge.service_schema (\n             name          character varying(255)        NOT NULL,\n             service       character varying(255)        NOT NULL,\n             version       integer                       NOT NULL,\n             definition    JSON);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/51.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES\n\t( 'ESSRT', 'External Service Startup' ),\n\t( 'ESSTP', 'External Service Shutdown' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/52.sql",
    "content": "-- Access Control List usage relation\nCREATE TABLE fledge.acl_usage (\n             name            character varying(255)  NOT NULL,  -- ACL name\n             entity_type     character varying(80)   NOT NULL,  -- associated entity type: service or script \n             entity_name     character varying(255)  NOT NULL,  -- associated entity name\n             CONSTRAINT      usage_acl_pkey          PRIMARY KEY (name, entity_type, entity_name) );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/53.sql",
    "content": "-- Add new column name 'deprecated_ts' for asset_tracker\n\nALTER TABLE fledge.asset_tracker ADD COLUMN deprecated_ts DATETIME;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/54.sql",
    "content": "-- Addition of new log codes\n\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES\n        ( 'ASTDP', 'Asset deprecated' ),\n        ( 'ASTUN', 'Asset un-deprecated' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/55.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES\n        ( 'PIPIN', 'Pip installation' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/56.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\nVALUES ( 'SRVRS' , 'Service Restart' );\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/57.sql",
    "content": "-- From: http://www.sqlite.org/faq.html:\n--    SQLite has limited ALTER TABLE support; you cannot change the type of an existing column directly.\n--    If you want to change the type of any column you will have to recreate the table:\n--    save the existing data to a temporary table, drop the old table,\n--    create the new table, then copy the data back in from the temporary table\n\n\n\n-- Drop existing index\nDROP INDEX IF EXISTS asset_tracker_ix1;\nDROP INDEX IF EXISTS asset_tracker_ix2;\n\n-- Rename existing table into a temp one\nALTER TABLE fledge.asset_tracker RENAME TO asset_tracker_old;\n\n-- Create new table\nCREATE TABLE IF NOT EXISTS fledge.asset_tracker (\n       id              integer                  PRIMARY KEY AUTOINCREMENT,\n       asset           character(50)            NOT NULL, -- asset name\n       event           character varying(50)    NOT NULL, -- event name\n       service         character varying(255)   NOT NULL, -- service name\n       fledge          character varying(50)    NOT NULL, -- FL service name\n       plugin          character varying(50)    NOT NULL, -- Plugin name\n       deprecated_ts   DATETIME,\n       ts              DATETIME                 DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),\n       data            JSON                     DEFAULT '{}'\n);\n\n-- Copy data\nINSERT INTO fledge.asset_tracker ( id, asset, event, service, fledge, plugin, deprecated_ts, ts ) SELECT id, asset, event, service, fledge, plugin, deprecated_ts, ts FROM fledge.asset_tracker_old;\n\n-- Create Index\nCREATE INDEX asset_tracker_ix1 ON asset_tracker (asset);\nCREATE INDEX asset_tracker_ix2 ON asset_tracker (service);\n\n-- Remove old table\nDROP TABLE IF EXISTS fledge.asset_tracker_old;\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/58.sql",
    "content": "-- Audit Log Marker log code\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'AUMRK', 'Audit Log Marker' );"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/59.sql",
    "content": "-- Roles\nINSERT INTO fledge.roles ( name, description )\n     VALUES ('view', 'Only to view the configuration'),\n            ('data-view', 'Only read the data in buffer');"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/60.sql",
    "content": "-- Create control_source table\nCREATE TABLE fledge.control_source (\n             cpsid            integer                     PRIMARY KEY AUTOINCREMENT,       -- auto source id\n             name             character  varying(40)      NOT NULL,                        -- source name\n             description      character  varying(120)     NOT NULL                         -- source description\n            );\n\n-- Create control_destination table\nCREATE TABLE fledge.control_destination (\n             cpdid            integer                     PRIMARY KEY AUTOINCREMENT,       -- auto destination id\n             name             character  varying(40)      NOT NULL,                        -- destination name\n             description      character  varying(120)     NOT NULL                         -- destination description\n            );\n\n-- Create control_pipelines table\nCREATE TABLE fledge.control_pipelines (\n             cpid             integer                     PRIMARY KEY AUTOINCREMENT,       -- control pipeline id\n             name             character  varying(255)     NOT NULL                 ,       -- control pipeline name\n             stype            integer                                              ,       -- source type id from control_source table\n             sname            character  varying(80)                               ,       -- source name from control_source table\n             dtype            integer                                              ,       -- destination type id from control_destination table\n             dname            character  varying(80)                               ,       -- destination name from control_destination table\n             enabled          boolean                     NOT NULL DEFAULT  'f'    ,       -- false = A given pipeline is disabled by default\n             execution        character  varying(20)      NOT NULL DEFAULT  'shared'       -- pipeline will be 
executed as with shared execution model by default\n             );\n\n-- Create control_filters table\nCREATE TABLE fledge.control_filters (\n             fid              integer                     PRIMARY KEY AUTOINCREMENT,       -- auto filter id\n             cpid             integer                     NOT NULL                 ,       -- control pipeline id\n             forder           integer                     NOT NULL                 ,       -- filter order\n             fname            character  varying(255)     NOT NULL                 ,       -- Name of the filter instance\n             CONSTRAINT       control_filters_fk1         FOREIGN KEY (cpid)\n             REFERENCES       control_pipelines (cpid)    MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n             );\n\n-- Insert predefined entries for Control Source\nDELETE FROM fledge.control_source;\nINSERT INTO fledge.control_source ( name, description )\n     VALUES ('Any', 'Any source.'),\n            ('Service', 'A named service in source of the control pipeline.'),\n            ('API', 'The control pipeline source is the REST API.'),\n            ('Notification', 'The control pipeline originated from a notification.'),\n            ('Schedule', 'The control request was triggered by a schedule.'),\n            ('Script', 'The control request has come from the named script.');\n\n-- Insert predefined entries for Control Destination\nDELETE FROM fledge.control_destination;\nINSERT INTO fledge.control_destination ( name, description )\n     VALUES ('Any', 'Any destination.'),\n            ('Service', 'A name of service that is being controlled.'),\n            ('Asset', 'A name of asset that is being controlled.'),\n            ('Script', 'A name of script that will be executed.'),\n            ('Broadcast', 'No name is applied and pipeline will be considered for any control writes or operations to broadcast destinations.');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/61.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES\n        ( 'USRAD', 'User Added' ),\n        ( 'USRDL', 'User Deleted' ),\n        ( 'USRCH', 'User Changed' ),\n        ( 'USRRS', 'User Restored' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/62.sql",
    "content": "\nCREATE TABLE fledge.monitors (\n             service       character varying(255) NOT NULL,\n             monitor       character varying(80) NOT NULL,\n             minimum       integer,\n             maximum       integer,\n             average       integer,\n             samples       integer,\n             ts            DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW'))\n);\n\nCREATE INDEX monitors_ix1\n    ON monitors(service, monitor);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/63.sql",
    "content": "-- Roles\nINSERT INTO fledge.roles ( name, description )\n     VALUES ('control', 'Same as editor can do and also have access for control scripts and pipelines');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/64.sql",
    "content": "-- Create control_api table\nCREATE TABLE fledge.control_api (\n             name             character  varying(255)     NOT NULL                 ,       -- control API name\n             description      character  varying(255)     NOT NULL                 ,       -- description of control API\n             type             integer                     NOT NULL                 ,       -- 0 for write and 1 for operation\n             operation_name   character  varying(255)                              ,       -- name of the operation and only valid if type is operation\n             destination      integer                     NOT NULL                 ,       -- destination of request; 0-broadcast, 1-service, 2-asset, 3-script\n             destination_arg  character  varying(255)                              ,       -- name of the destination and only used if destination is non-zero\n             anonymous        boolean                     NOT NULL DEFAULT  'f'    ,       -- anonymous callers to make request to control API; by default false\n             CONSTRAINT       control_api_pname           PRIMARY KEY (name)\n             );\n\n-- Create control_api_parameters table\nCREATE TABLE fledge.control_api_parameters (\n             name             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.control_api\n             parameter        character  varying(255)     NOT NULL                 ,       -- name of parameter\n             value            character  varying(255)                              ,       -- value of parameter if constant otherwise default\n             constant         boolean                     NOT NULL                 ,       -- parameter is either a constant or variable\n             FOREIGN KEY (name) REFERENCES control_api (name)\n             );\n\n-- Create control_api_acl table\nCREATE TABLE fledge.control_api_acl (\n             name             character  varying(255)     NOT 
NULL                 ,       -- foreign key to fledge.control_api\n             user             character  varying(255)     NOT NULL                 ,       -- foreign key to fledge.users\n             FOREIGN KEY (name) REFERENCES control_api (name)                      ,\n             FOREIGN KEY (user) REFERENCES users (uname)\n             );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/65.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'ACLAD', 'ACL Added' ),( 'ACLCH', 'ACL Changed' ),( 'ACLDL', 'ACL Deleted' ),\n            ( 'CTSAD', 'Control Script Added' ),( 'CTSCH', 'Control Script Changed' ),('CTSDL', 'Control Script Deleted' ),\n            ( 'CTPAD', 'Control Pipeline Added' ),( 'CTPCH', 'Control Pipeline Changed' ),('CTPDL', 'Control Pipeline Deleted' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/66.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'CTEAD', 'Control Entrypoint Added' ),\n            ( 'CTECH', 'Control Entrypoint Changed' ),\n            ('CTEDL', 'Control Entrypoint Deleted' );"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/67.sql",
    "content": "-- Add new column name 'priority' in scheduled_processes\n\nALTER TABLE fledge.scheduled_processes ADD COLUMN priority INTEGER NOT NULL DEFAULT 999;\nUPDATE scheduled_processes SET priority = '10' WHERE name = 'bucket_storage_c';\nUPDATE scheduled_processes SET priority = '20' WHERE name = 'dispatcher_c';\nUPDATE scheduled_processes SET priority = '30' WHERE name = 'notification_c';\nUPDATE scheduled_processes SET priority = '100' WHERE name = 'south_c';\nUPDATE scheduled_processes SET priority = '200' WHERE name = 'north_C';\nUPDATE scheduled_processes SET priority = '300' WHERE name = 'management';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/68.sql",
    "content": "-- Create alerts table\n\nCREATE TABLE IF NOT EXISTS fledge.alerts (\n       key         character varying(80)       NOT NULL,                                  -- Primary key\n       message     character varying(255)      NOT NULL,                                 -- Alert Message\n       urgency     SMALLINT                    NOT NULL,                                 -- 1 Critical - 2 High - 3 Normal - 4 Low\n       ts          DATETIME    DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f+00:00', 'NOW')),     -- Timestamp, updated at every change\n       CONSTRAINT  alerts_pkey PRIMARY KEY (key) );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/69.sql",
    "content": "--- Insert update checker schedule and process entry\n\nINSERT INTO fledge.scheduled_processes ( name, script ) VALUES ( 'update checker', '[\"tasks/check_updates\"]' );\nINSERT INTO fledge.schedules ( id, schedule_name, process_name, schedule_type,\n                                schedule_time, schedule_interval, exclusive, enabled )\n       VALUES ( '852cd8e4-3c29-440b-89ca-2c7691b0450d', -- id\n                'update checker',                       -- schedule_name\n                'update checker',                       -- process_name\n                2,                                      -- schedule_type (timed)\n                '00:05:00',                             -- schedule_time\n                '00:00:00',                             -- schedule_interval\n                't',                                    -- exclusive\n                't'                                     -- enabled\n              );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/70.sql",
    "content": "INSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'BUCAD', 'Bucket Added' ), ( 'BUCCH', 'Bucket Changed' ), ( 'BUCDL', 'Bucket Deleted' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/71.sql",
    "content": "-- Add new column name 'hash_algorithm' in users table\n\nALTER TABLE fledge.users ADD COLUMN hash_algorithm TEXT CHECK( hash_algorithm IN ('SHA256', 'SHA512') )  NOT NULL DEFAULT 'SHA512';\nUPDATE fledge.users SET hash_algorithm='SHA256';\nUPDATE fledge.users SET pwd='495f7f5b17c534dbeabab3da2287a934b32ed6876568563b04c312be49e8773299243abd3881d13112ccfb67c4fb3ec8231406474810e1f6eb347d61c63785d4:672169c60df24b76b6b94e78cad800f8', hash_algorithm='SHA512' WHERE pwd ='39b16499c9311734c595e735cffb5d76ddffb2ebf8cf4313ee869525a9fa2c20:f400c843413d4c81abcba8f571e6ddb6';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/72.sql",
    "content": "ALTER TABLE fledge.users ADD COLUMN failed_attempts INTEGER DEFAULT 0;\nALTER TABLE fledge.users ADD COLUMN block_until DATETIME DEFAULT NULL;\nINSERT INTO fledge.log_codes ( code, description )\n     VALUES ( 'USRBK', 'User Blocked' ), ( 'USRUB', 'User Unblocked' );\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/73.sql",
    "content": "update statistics set description = 'Readings Sent North' where description = 'Readings Sent Noth';\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/74.sql",
    "content": "-- Scheduled process entry for pipeline service\nINSERT INTO fledge.scheduled_processes SELECT 'pipeline_c', '[\"services/pipeline_c\"]', 90 WHERE NOT EXISTS (SELECT 1 FROM fledge.scheduled_processes WHERE name = 'pipeline_c');\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/75.sql",
    "content": "ALTER TABLE fledge.plugin_data ADD COLUMN service_name character varying(255);\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/76.sql",
    "content": "DELETE FROM fledge.scheduled_processes WHERE name = 'north';"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb/upgrade/README",
    "content": "Place SQLite3 upgrade schema sql files here.\n\nFile name:\n\nX.sql, where X is the SQLite3 schema id\n\nExample:\n\n'9.sql' file is read by Fledge app which has SQLite3 schema version set to 8\n'10.sql' file is read either by Fledge app which has SQLite3 schema version set to 9\neither by Fledge app upgrading schema from 8 to 10\n\nNote:\n- whenever VERSION file in $FLEDGE_ROOT has a new schema in 'fledge_schema',\n  the corresponding sql file must be placed here\n- file id must exist even if empty\n\n"
  },
  {
    "path": "scripts/plugins/storage/sqlitelb.sh",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2017-2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n__author__=\"Massimiliano Pinto\"\n__version__=\"1.0\"\n\n# Avoid to stop immediately to report/show the error/reason\nset +e\n\nPLUGIN=\"sqlitelb\"\n\n# Set default DB file\nif [ ! \"${DEFAULT_SQLITE_DB_FILE}\" ]; then\n    export DEFAULT_SQLITE_DB_FILE=\"${FLEDGE_DATA}/fledge.db\"\nfi\n\n# if the script changes the value it forces the overwrite of the value every times\n# it is needed when the storage plugin is changed\nif [ ! \"${DEFAULT_SQLITE_DB_FILE_READINGS_FLAG}\" ]; then\n\n    if [ ! \"${DEFAULT_SQLITE_DB_FILE_READINGS}\" ]; then\n\n        export DEFAULT_SQLITE_DB_FILE_READINGS_FLAG=1\n    fi\nfi\n\nif [ \"${DEFAULT_SQLITE_DB_FILE_READINGS_FLAG}\" ]; then\n\n    export DEFAULT_SQLITE_DB_FILE_READINGS_BASE=\"${FLEDGE_DATA}/readings\"\n    export DEFAULT_SQLITE_DB_FILE_READINGS=\"${DEFAULT_SQLITE_DB_FILE_READINGS_BASE}.db\"\nfi\n\nUSAGE=\"Usage: `basename ${0}` {start|stop|status|init|reset|purge|help}\"\n\n# Check FLEDGE_ROOT\nif [ -z ${FLEDGE_ROOT+x} ]; then\n    # Set FLEDGE_ROOT as the default directory\n    FLEDGE_ROOT=\"/usr/local/fledge\"\nfi\n\n# Check if the default directory exists\nif [[ ! 
-d \"${FLEDGE_ROOT}\" ]]; then\n\n    # Here we cannot use the logger because we cannot find the write_log script.\n    # But it is ok, because the script is called with source and if it is called\n    # as standalone script the echo will be captured.\n    echo \"Fledge cannot be executed: ${FLEDGE_ROOT} is not a valid directory.\"\n    echo \"Create the enviroment variable FLEDGE_ROOT before using Fledge.\"\n    echo \"Specify the base directory for Fledge and set the variable with:\"\n    echo \"export FLEDGE_ROOT=<basedir>\"\n    exit 1\n\nfi\n\n##########\n## INCLUDE SECTION\n##########\n. $FLEDGE_ROOT/scripts/common/get_engine_management.sh\n. $FLEDGE_ROOT/scripts/common/write_log.sh\n\n\n# Logger wrapper\nsqlite_log() {\n    write_log \"Storage\" \"script.plugin.storage.sqlitelb\" \"$1\" \"$2\" \"$3\" \"$4\"\n}\n\n# Check first SQLite 3 with static library command line is available\nSQLITE_SQL=\"$FLEDGE_ROOT/plugins/storage/sqlite/sqlite3\"\nif ! [[ -x \"${SQLITE_SQL}\" ]]; then\n# Check system default SQLite 3 command line is available\n    if ! [[ -x \"$(command -v sqlite3)\" ]]; then\n        sqlite_log \"info\" \"The sqlite3 command cannot be found. 
Is SQLite3 installed?\" \"outonly\" \"pretty\"\n        sqlite_log \"info\" \"If SQLite3 is installed, check if the bin dir is in the PATH.\" \"outonly\" \"pretty\"\n        exit 1\n    else\n        SQLITE_SQL=\"$(command -v sqlite3)\"\n    fi\nfi\n\n## SQLite3 Start\nsqlite_start() {\n\n    # Check the status of the server\n    if [[ \"$1\" != \"skip\" ]]; then\n        result=`sqlite_status \"silent\"`\n    else\n        result=`sqlite_status \"skip\"`\n    fi\n    case \"$result\" in\n        \"0\")\n            # SQLilte3 DB found already running.\n            if [[ \"$1\" == \"noisy\" ]]; then\n                sqlite_log \"info\" \"SQLite3 database is ready.\" \"all\" \"pretty\"\n            else\n                if [[ \"$1\" != \"skip\" ]]; then\n                    sqlite_log \"info\" \"SQLite3 database is ready.\" \"logonly\" \"pretty\"\n                fi\n            fi\n            ;;\n\n        \"1\")\n            # Database not found, created datafile\n            COMMAND_OUTPUT=`${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE} .databases 2>&1`\n            RET_CODE=$?\n            if [ \"${RET_CODE}\" -ne 0 ]; then\n                sqlite_log \"err\" \"Error creating SQLite3 database ${DEFAULT_SQLITE_DB_FILE}: ${COMMAND_OUTPUT}\" \"all\" \"pretty\"\n                exit 1\n            fi\n\n            # File created\n            if [[ \"$1\" == \"noisy\" ]]; then\n                sqlite_log \"info\" \"SQLite3 database ${DEFAULT_SQLITE_DB_FILE} has been created.\" \"all\" \"pretty\"\n            else\n                sqlite_log \"info\" \"SQLite3 database ${DEFAULT_SQLITE_DB_FILE} has been created.\" \"logonly\" \"pretty\"\n            fi\n           ;;\n\n        *)\n            sqlite_log \"err\" \"Unknown SQLite database return status.\" \"all\"\n            exit 1\n            ;;\n    esac\n\n    # Check the presence of the readingds.db datafile\n    if [[ \"$1\" != \"skip\" ]]; then\n        result=`sqlite_status_readings \"silent\"`\n    else\n        
result=`sqlite_status_readings \"skip\"`\n    fi\n\n    case \"$result\" in\n        \"0\")\n            # SQLilte3 DB found already running.\n            if [[ \"$1\" == \"noisy\" ]]; then\n                sqlite_log \"info\" \"SQLite3 readings database is ready.\" \"all\" \"pretty\"\n            else\n                if [[ \"$1\" != \"skip\" ]]; then\n                    sqlite_log \"info\" \"SQLite3 readings database is ready.\" \"logonly\" \"pretty\"\n                fi\n            fi\n            ;;\n\n        \"1\")\n            # Database not found, created datafile\n            COMMAND_OUTPUT=`${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE_READINGS} .databases 2>&1`\n            RET_CODE=$?\n            if [ \"${RET_CODE}\" -ne 0 ]; then\n                sqlite_log \"err\" \"Error creating SQLite3 database ${DEFAULT_SQLITE_DB_FILE_READINGS}: ${COMMAND_OUTPUT}\" \"all\" \"pretty\"\n                exit 1\n            fi\n\n            # File created\n            if [[ \"$1\" == \"noisy\" ]]; then\n                sqlite_log \"info\" \"SQLite3 database ${DEFAULT_SQLITE_DB_FILE_READINGS} has been created.\" \"all\" \"pretty\"\n            else\n                sqlite_log \"info\" \"SQLite3 database ${DEFAULT_SQLITE_DB_FILE_READINGS} has been created.\" \"logonly\" \"pretty\"\n            fi\n           ;;\n\n        *)\n            sqlite_log \"err\" \"Unknown SQLite database return status.\" \"all\"\n            exit 1\n            ;;\n    esac\n\n    # Check if the fledge database has been created\n    FOUND_SCHEMAS=`${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE} \"ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; SELECT name FROM sqlite_master WHERE type='table'\"`\n\n    if [ ! 
\"${FOUND_SCHEMAS}\" ]; then\n        # Create the Fledge database\n         sqlite_reset \"$1\" \"immediate\"\n    else\n        # Check if the readings database has been created\n        FOUND_SCHEMAS=`${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE_READINGS} \"ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}' AS 'readings'; SELECT name FROM sqlite_master WHERE type='table'\"`\n\n        if [ ! \"${FOUND_SCHEMAS}\" ]; then\n            #  Reset if not found.\n            sqlite_reset_db_readings \"$1\" \"immediate\"\n        fi\n    fi\n\n    # Fledge DB schema update: Fledge version is $2\n    sqlite_schema_update $2\n}\n\n\n## SQLite  Stop\nsqlite_stop() {\n\n    # Since the script may be called with \"source\", this condition must be set\n    # and the else must be maintained because exit can't be used\n\n    if [[ \"$1\" == \"noisy\" ]]; then\n        sqlite_log \"info\" \"Fledge database is SQLite3. No stop/start actions available\" \"all\" \"pretty\"\n    else\n        sqlite_log \"info\" \"Fledge database is SQLite3. 
No stop/start actions available\" \"logonly\" \"pretty\"\n    fi\n\n    return 0\n}\n\n\n## SQLite3 Reset\nsqlite_reset() {\n\n    if [[ $2 != \"immediate\" ]]; then\n        echo \"This script will remove all data stored in the SQLite3 datafiles:\"\n        echo \"'${DEFAULT_SQLITE_DB_FILE}'\"\n        echo \"'${DEFAULT_SQLITE_DB_FILE_READINGS}'\"\n        echo -n \"Enter YES if you want to continue: \"\n        read continue_reset\n\n        if [ \"$continue_reset\" != 'YES' ]; then\n\t    echo \"The system will NOT be reset and current content remains\"\n            echo \"Goodbye.\"\n            # This is ok because it means that the script is called from command line\n            exit 0\n        fi\n    fi\n\n    sqlite_reset_db_fledge   \"$1\" \"$2\"\n    sqlite_reset_db_readings \"$1\" \"$2\"\n}\n\nsqlite_reset_db_fledge() {\n    if [[ \"$1\" == \"noisy\" ]]; then\n        sqlite_log \"info\" \"Building the metadata for the Fledge Plugin '${PLUGIN}' ...\" \"all\" \"pretty\"\n    else\n        sqlite_log \"info\" \"Building the metadata for the Fledge Plugin '${PLUGIN}' ...\" \"logonly\" \"pretty\"\n    fi\n\n    # 0- Remove service schema files\n    schema=`sqlite3 ${DEFAULT_SQLITE_DB_FILE} 'select name from service_schema;'`\n    for f in $schema; do\n         rm ${FLEDGE_DATA}/${f}.db*\n          if [ -d \"${FLEDGE_DATA}/buckets\" ]; then\n            echo \"Removed user data from ${FLEDGE_DATA}/buckets\"\n            rm -rf ${FLEDGE_DATA}/buckets\n          fi\n    done\n\n    # 1- Drop all databases in DEFAULT_SQLITE_DB_FILE\n    if [ -f \"${DEFAULT_SQLITE_DB_FILE}\" ]; then\n        rm ${DEFAULT_SQLITE_DB_FILE} ||\n        sqlite_log \"err\" \"Cannot drop database '${DEFAULT_SQLITE_DB_FILE}' for the Fledge Plugin '${PLUGIN}'\" \"all\" \"pretty\"\n    fi\n    rm -f ${DEFAULT_SQLITE_DB_FILE}-journal ${DEFAULT_SQLITE_DB_FILE}-wal ${DEFAULT_SQLITE_DB_FILE}-shm\n    # 2- Create new datafile an apply init file\n    INIT_OUTPUT=`${SQLITE_SQL} 
\"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\nPRAGMA page_size = 4096;\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge';\n.read '${INIT_SQL}'\n.quit\nEOF`\n\n    RET_CODE=$?\n    # Exit on failure\n    if [ \"${RET_CODE}\" -ne 0 ]; then\n        sqlite_log \"err\" \"Cannot initialize '${DEFAULT_SQLITE_DB_FILE}' for the Fledge Plugin '${PLUGIN}': ${INIT_OUTPUT}. Exiting\" \"all\" \"pretty\"\n        exit 2\n    fi\n\n    # Log success\n    if [[ \"$1\" == \"noisy\" ]]; then\n        sqlite_log \"info\" \"Build complete for Fledge Plugin '${PLUGIN} in database '${DEFAULT_SQLITE_DB_FILE}'.\" \"all\" \"pretty\"\n    else\n        sqlite_log \"info\" \"Build complete for Fledge Plugin '${PLUGIN} in database '${DEFAULT_SQLITE_DB_FILE}'.\" \"logonly\" \"pretty\"\n    fi\n\n}\n\nsqlite_create_db_readings() {\n\n    # 2- Create new datafile an apply init file\n    INIT_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE_READINGS}\" 2>&1 <<EOF\nPRAGMA page_size = 4096;\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE_READINGS}' AS 'readings';\n.read '${INIT_READINGS_SQL}'\n.quit\nEOF`\n\n    RET_CODE=$?\n    # Exit on failure\n    if [ \"${RET_CODE}\" -ne 0 ]; then\n        sqlite_log \"err\" \"Cannot initialize '${DEFAULT_SQLITE_DB_FILE_READINGS}' for the Fledge Plugin '${PLUGIN}': ${INIT_OUTPUT}. 
Exiting\" \"all\" \"pretty\"\n        exit 2\n    fi\n\n    # Log success\n    if [[ \"$1\" == \"noisy\" ]]; then\n        sqlite_log \"info\" \"Build complete for Fledge Plugin '${PLUGIN} in database '${DEFAULT_SQLITE_DB_FILE_READINGS}'.\" \"all\" \"pretty\"\n    else\n        sqlite_log \"info\" \"Build complete for Fledge Plugin '${PLUGIN} in database '${DEFAULT_SQLITE_DB_FILE_READINGS}'.\" \"logonly\" \"pretty\"\n    fi\n\n}\n\nsqlite_reset_db_readings() {\n\n    # 1- Drop all databases in DEFAULT_SQLITE_DB_FILE_READINGS\n    if [ -f \"${DEFAULT_SQLITE_DB_FILE_READINGS}\" ]; then\n        rm ${DEFAULT_SQLITE_DB_FILE_READINGS} ||\n        sqlite_log \"err\" \"Cannot drop database '${DEFAULT_SQLITE_DB_FILE_READINGS}' for the Fledge Plugin '${PLUGIN}'\" \"all\" \"pretty\"\n    fi\n    rm -f ${DEFAULT_SQLITE_DB_FILE_READINGS}-journal ${DEFAULT_SQLITE_DB_FILE_READINGS}-wal ${DEFAULT_SQLITE_DB_FILE_READINGS}-shm\n\n    # Delete all the readings databases if any\n    rm -f ${DEFAULT_SQLITE_DB_FILE_READINGS_BASE}*.db\n    rm -f ${DEFAULT_SQLITE_DB_FILE_READINGS_BASE}*.db-journal\n    rm -f ${DEFAULT_SQLITE_DB_FILE_READINGS_BASE}*.db-wal\n    rm -f ${DEFAULT_SQLITE_DB_FILE_READINGS_BASE}*.db-shm\n\n    sqlite_create_db_readings\n}\n\n\n\n## SQLite 3 Database Status\n#\n# NOTE: You can call this script with $1 = silent to avoid non output errors\n#\n# Returns:\n#   0 - SQLite3 datafile found\n#   1 - SQLite3 datafile NOT found\nsqlite_status() {\n    if [ -f \"${DEFAULT_SQLITE_DB_FILE}\" ]; then\n        if [[ \"$1\" == \"noisy\" ]]; then\n            sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE}' ready.\" \"all\" \"pretty\"\n        else\n            if [[ \"$1\" != \"skip\" ]]; then\n                sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE}' ready.\" \"logonly\" \"pretty\"\n            fi\n        fi\n        echo \"0\"\n    else\n        if [[ \"$1\" == \"noisy\" ]]; then\n            sqlite_log \"info\" \"SQLite 3 
database '${DEFAULT_SQLITE_DB_FILE}' not found.\" \"all\" \"pretty\"\n        else\n            sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE}' not found.\" \"logonly\" \"pretty\"\n        fi\n        echo \"1\"\n    fi\n}\n\n## SQLite 3 Database Status - checks the presence of the readingds.db datafile\n#\n# NOTE: You can call this script with $1 = silent to avoid non output errors\n#\n# Returns:\n#   0 - SQLite3 readingds datafile found\n#   1 - SQLite3 readingds datafile NOT found\nsqlite_status_readings() {\n    if [ -f \"${DEFAULT_SQLITE_DB_FILE_READINGS}\" ]; then\n        if [[ \"$1\" == \"noisy\" ]]; then\n            sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE_READINGS}' ready.\" \"all\" \"pretty\"\n        else\n            if [[ \"$1\" != \"skip\" ]]; then\n                sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE_READINGS}' ready.\" \"logonly\" \"pretty\"\n            fi\n        fi\n        echo \"0\"\n    else\n        if [[ \"$1\" == \"noisy\" ]]; then\n            sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE_READINGS}' not found.\" \"all\" \"pretty\"\n        else\n            sqlite_log \"info\" \"SQLite 3 database '${DEFAULT_SQLITE_DB_FILE_READINGS}' not found.\" \"logonly\" \"pretty\"\n        fi\n        echo \"1\"\n    fi\n}\n\n\n## SQLite schema update entry point\n#\nsqlite_schema_update() {\n\n    # Current starting Fledge version\n    NEW_VERSION=$1\n    # DB table\n    VERSION_TABLE=\"version\"\n    # Check first if the version table exists\n    VERSION_QUERY=\"SELECT name FROM sqlite_master WHERE type='table' and name = '${VERSION_TABLE}'\"\n    COMMAND_VERSION=\"${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE} \\\"${VERSION_QUERY}\\\"\"\n    CURR_VER=`eval ${COMMAND_VERSION}`\n    ret_code=$?\n    if [ ! 
\"${CURR_VER}\" ] || [ \"${ret_code}\" -ne 0 ]; then\n        sqlite_log \"error\" \"Error checking Fledge DB schema version: \"\\\n\"the table '${VERSION_TABLE}' doesn't exist. Exiting\" \"all\" \"pretty\"\n        return 1\n    fi\n\n    # Fetch Fledge DB version\n    CURR_VER=`${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE} \"ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; SELECT id FROM fledge.${VERSION_TABLE}\" | tr -d ' '`\n    if [ ! \"${CURR_VER}\" ]; then\n        # No version found set DB version now\n        INSERT_QUERY=\"ATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge'; BEGIN; INSERT INTO fledge.${VERSION_TABLE} (id) VALUES('${NEW_VERSION}'); COMMIT;\"\n        COMMAND_INSERT=\"${SQLITE_SQL} ${DEFAULT_SQLITE_DB_FILE} \\\"${INSERT_QUERY}\\\"\"\n        CURR_VER=`eval \"${COMMAND_INSERT}\"`\n        ret_code=$?\n\n        SET_VERSION_MSG=\"Fledge DB version not found in fledge.'${VERSION_TABLE}', setting version [${NEW_VERSION}]\"\n        if [[ \"$2\" == \"noisy\" ]]; then\n            sqlite_log \"info\" \"${SET_VERSION_MSG}\" \"all\" \"pretty\"\n        else \n            sqlite_log \"info\" \"${SET_VERSION_MSG}\" \"logonly\" \"pretty\"\n        fi\n    else\n        # Only if DB version is not equal to starting Fledge version we try schema update\n        if [ \"${CURR_VER}\" != \"${NEW_VERSION}\" ]; then\n            sqlite_log \"info\" \"Detected '${PLUGIN}' Fledge DB schema change from version [${CURR_VER}]\"\\\n\" to [${NEW_VERSION}], applying Upgrade/Downgrade ...\" \"all\" \"pretty\"\n            SCHEMA_UPDATE_SCRIPT=\"$FLEDGE_ROOT/scripts/plugins/storage/${PLUGIN}/schema_update.sh\"\n            if [ -s \"${SCHEMA_UPDATE_SCRIPT}\" ] && [ -x \"${SCHEMA_UPDATE_SCRIPT}\" ]; then\n                # Call the schema update script\n                ${SCHEMA_UPDATE_SCRIPT} \"${CURR_VER}\" \"${NEW_VERSION}\" \"${SQLITE_SQL}\"\n                update_code=$?\n                return ${update_code}\n            else\n                sqlite_log 
\"err\" \"Cannot find schema update script '${SCHEMA_UPDATE_SCRIPT}'. Exiting\" \"all\" \"pretty\"\n                exit 2\n            fi\n        else\n            # Just log up-to-date\n            if [[ \"$2\" != \"skip\" ]]; then\n                sqlite_log \"info\" \"Fledge DB schema is up to date to version [${CURR_VER}]\" \"logonly\" \"pretty\"\n            fi\n            return 0\n        fi\n    fi\n}\n\n\n## SQLite Help\nsqlite_help() {\n\n    echo \"${USAGE}\nSQLite3lb (Low bandwidth) Storage Layer plugin init script.\nThe script is used to control the SQLite3lb plugin as database for Fledge\nArguments:\n start   - Start the database server (when managed)\n           If the server has not been initialized, it also initialize it\n stop    - Stop the database server (when managed)\n status  - Check the status of the database server\n reset   - Bring the database server to the original installation.\n           This is a synonym of init.\n           WARNING: all the data stored in the server will be lost!\n init    - Database check: if Fledge database does not exist\n           it will be created.\n purge   - Purge all readings data and non-configuration data stored in the database.\n           WARNING: all the data stored in the affected tables will be lost!\n help    - This text\n\n managed   - The database server is embedded in Fledge\n unmanaged - The database server is not embedded in Fledge\"\n\n}\n\n\n## SQLite3 purge all readings and non-configuration data\nsqlite_purge() {\n    echo \"This script will remove all readings data and non-configuration data stored in the SQLite3 datafiles:\"\n    echo \"'${DEFAULT_SQLITE_DB_FILE}'\"\n    echo \"'${DEFAULT_SQLITE_DB_FILE_READINGS}'\"\n    echo -n \"Enter YES if you want to continue: \"\n    read continue_purge\n\n    if [ \"$continue_purge\" != 'YES' ]; then\n\techo \"The system will NOT be purged of data and current content remains\"\n        echo \"Goodbye.\"\n        # This is ok because it means 
that the script is called from command line\n        exit 0\n    fi\n\n    if [[ \"$1\" == \"noisy\" ]]; then\n        sqlite_log \"info\" \"Purging data for the Fledge Plugin '${PLUGIN}' ...\" \"all\" \"pretty\"\n    else\n        sqlite_log \"info\" \"Purging data for the Fledge Plugin '${PLUGIN}' ...\" \"logonly\" \"pretty\"\n    fi\n\n    # Purge database content\n    COMMAND_OUTPUT=`${SQLITE_SQL} \"${DEFAULT_SQLITE_DB_FILE}\" 2>&1 <<EOF\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge';\nUPDATE fledge.statistics SET value = 0, previous_value = 0, ts = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime');\nDELETE FROM fledge.asset_tracker;\nDELETE FROM fledge.sqlite_sequence WHERE name='asset_tracker';\nDELETE FROM fledge.tasks;\nDELETE FROM fledge.statistics_history;\nDELETE FROM sqlite_sequence WHERE name='statistics_history';\nDELETE FROM fledge.log;\nDELETE FROM sqlite_sequence WHERE name='log';\nDELETE FROM fledge.plugin_data;\nDELETE FROM fledge.omf_created_objects;\nDELETE FROM fledge.user_logins;\nUPDATE fledge.streams SET last_object = 0, ts = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime');\nVACUUM;\n.quit\nEOF`\n\n    RET_CODE=$?\n    if [ \"${RET_CODE}\" -ne 0 ]; then\n        sqlite_log \"err\" \"Failure in purge command [${SQL_COMMAND}]: ${COMMAND_OUTPUT}. 
Exiting\" \"all\" \"pretty\"\n        return 1\n    fi\n\n    # Log success\n    if [[ \"$1\" == \"noisy\" ]]; then\n        sqlite_log \"info\" \"Purge complete for Fledge Plugin '${PLUGIN} in database '${DEFAULT_SQLITE_DB_FILE}'.\" \"all\" \"pretty\"\n    else\n        sqlite_log \"info\" \"Purge complete for Fledge Plugin '${PLUGIN} in database '${DEFAULT_SQLITE_DB_FILE}'.\" \"logonly\" \"pretty\"\n    fi\n\n    # Remove all readings\n    sqlite_reset_db_readings\n}\n\n\n##################\n### Main Logic ###\n##################\n\n# Set FLEDGE_DATA if it does not exist\nif [ -z ${FLEDGE_DATA+x} ]; then\n    FLEDGE_DATA=\"${FLEDGE_ROOT}/data\"\nfi\n\n# Check if $FLEDGE_DATA exists\nif [[ ! -d ${FLEDGE_DATA} ]]; then\n    sqlite_log \"err\" \"Fledge cannot be executed: ${FLEDGE_DATA} is not a valid directory.\" \"all\" \"pretty\"\n    exit 1\nfi\n\n# Extract plugin\nengine_management=\"false\"\n# Settings if the database is managed by Fledge\ncase \"$engine_management\" in\n    \"true\")\n\n       \t# SQLite does not support managed storage. 
Ignore this option\n        print_output=\"silent\"\n        MANAGED=false\n        ;;\n    \n    \"false\")\n\n        # This is an explicit input, which means that we do not want to send\n        # messages when we start or stop the server\n        print_output=\"silent\"\n        MANAGED=false\n        ;;\n\n    *)\n\n        # Unexpected value from the configuration file\n        sqlite_log \"err\" \"Fledge cannot start.\" \"all\" \"pretty\"\n        sqlite_log \"err\" \"Missing plugin information from the storage microservice\" \"all\" \"pretty\"\n        exit 1\n        ;;\n\nesac\n\n# Check if the init.sql file exists\n# Attempt 1: deployment path\nif [[ -e \"$FLEDGE_ROOT/plugins/storage/sqlitelb/init.sql\" ]]; then\n    INIT_SQL=\"$FLEDGE_ROOT/plugins/storage/sqlitelb/init.sql\"\n    INIT_READINGS_SQL=\"$FLEDGE_ROOT/plugins/storage/sqlitelb/init_readings.sql\"\nelse\n    # Attempt 2: development path\n    if [[ -e \"$FLEDGE_ROOT/scripts/plugins/storage/sqlitelb/init.sql\" ]]; then\n        INIT_SQL=\"$FLEDGE_ROOT/scripts/plugins/storage/sqlitelb/init.sql\"\n        INIT_READINGS_SQL=\"$FLEDGE_ROOT/scripts/plugins/storage/sqlitelb/init_readings.sql\"\n    else\n        sqlite_log \"err\" \"Missing plugin '${PLUGIN}' initialization file init.sql.\" \"all\" \"pretty\"\n        exit 1\n    fi\nfi\n\n# Main case\ncase \"$1\" in\n    start)\n        sqlite_start \"$print_output\" \"$2\"\n        ;;\n    init)\n        sqlite_start \"skip\" \"$2\"\n        ;;\n    stop)\n        sqlite_stop \"$print_output\"\n        ;;\n    reset)\n        sqlite_reset \"$print_output\" \"$2\"\n        ;;\n    status)\n        sqlite_status \"$print_output\"\n        ;;\n    purge)\n        sqlite_purge \"$print_output\"\n        ;;\n    help)\n        sqlite_help\n        ;;\n    *)\n        echo \"${USAGE}\"\n        exit 1\nesac\n\n# Exit cannot be used because the script may be \"sourced\"\n#exit $?\n"
  },
  {
    "path": "scripts/services/README.rst",
    "content": "***************\nService Scripts\n***************\n\nThese scripts are used by the core microservice of Fledge to start\nother microservices that Fledge requires in order to implement the full\nfunctionality of the platform.\n"
  },
  {
    "path": "scripts/services/bucket_storage_c",
    "content": "#!/bin/sh\n# Run a Fledge Bucket Storage service written in C/C++\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n  FLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tlogger \"Fledge home directory missing or incorrectly set environment\"\n\texit 1\nfi\n\ncd \"${FLEDGE_ROOT}/services\"\n\n./fledge.services.bucket \"$@\"\n"
  },
  {
    "path": "scripts/services/dispatcher_c",
    "content": "#!/bin/bash\n# Run a Fledge Dispatcher service written in C/C++\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n  FLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tlogger \"Fledge home directory missing or incorrectly set environment\"\n\texit 1\nfi\n\n# startup with delay\ndelay() {\n    for ARG in \"$@\";\n      do\n        PARAM=$(echo $ARG | cut -f1 -d=)\n        if [ $PARAM = '--delay' ]; then\n          PARAM_LENGTH=${#PARAM}\n          VALUE=\"${ARG:$PARAM_LENGTH+1}\"\n          sleep $VALUE\n          break\n        fi\n    done\n}\n\ncd \"${FLEDGE_ROOT}/services\"\ndelay \"$@\"\n./fledge.services.dispatcher \"$@\"\n"
  },
  {
    "path": "scripts/services/north_C",
    "content": "#!/bin/bash\n# Run a Fledge north service written in C/C++\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n\tFLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tlogger \"Fledge home directory missing or incorrectly set environment\"\n\texit 1\nfi\n\nrunvalgrind=n\nif [ \"$VALGRIND_NORTH\" != \"\" ]; then\n\tfor i in \"$@\"; do\n\t\tcase $i in\n\t\t\t--name=*)\n\t\t\t\tname=\"`echo $i | sed -e s/--name=//`\"\n\t\t\t\t;;\n\t\tesac\n\tdone\n\tservices=$(echo $VALGRIND_NORTH | tr \";\" \"\\n\")\n\tfor service in $services; do\n\t\tif [ \"$service\" = \"$name\" ]; then\n\t\t\trunvalgrind=y\n\t\tfi\n\tdone\nfi\n\nrunstrace=n\nif [ \"$STRACE_NORTH\" != \"\" ]; then\n\tfor i in \"$@\"; do\n\t\tcase $i in\n\t\t\t--name=*)\n\t\t\t\tname=\"`echo $i | sed -e s/--name=//`\"\n\t\t\t\t;;\n\t\tesac\n\tdone\n\tservices=$(echo $STRACE_NORTH | tr \";\" \"\\n\")\n\tfor service in $services; do\n\t\tif [ \"$service\" = \"$name\" ]; then\n\t\t\trunstrace=y\n\t\tfi\n\tdone\nfi\n\n# startup with delay\ndelay() {\n    for ARG in \"$@\";\n      do\n        PARAM=$(echo $ARG | cut -f1 -d=)\n        if [ $PARAM = '--delay' ]; then\n          PARAM_LENGTH=${#PARAM}\n          VALUE=\"${ARG:$PARAM_LENGTH+1}\"\n          sleep $VALUE\n          break\n        fi\n    done\n}\n\ncd \"${FLEDGE_ROOT}/services\"\nif [ \"$runvalgrind\" = \"y\" ]; then\n\tfile=${HOME}/north.${name}.valgrind.out\n\trm -f $file\n\tlogger \"Running north service $name under valgrind\"\n\tif [ \"$VALGRIND_MASSIF\" != \"\" ]; then\n\t\tvalgrind  --tool=massif  --detailed-freq=1 --pages-as-heap=yes  ./fledge.services.north \"$@\"\n\telse\n\t\tvalgrind  --leak-check=full --trace-children=yes --show-leak-kinds=all --track-origins=yes --log-file=$file ./fledge.services.north \"$@\"\n\tfi\nelif [ \"$runstrace\" = \"y\" ]; then\n\tfile=${HOME}/north.${name}.strace.out\n\tlogger \"Running north service $name under strace\"\n\trm -f $file\n\tstrace -e 'trace=%memory,%process,%file' -f -o $file 
./fledge.services.north \"$@\"\nelif [ \"$INTERPOSE_NORTH\" != \"\" ]; then\n\tLD_PRELOAD=${INTERPOSE_NORTH}\n\tlogger \"Running north service with interpose library $INTERPOSE_NORTH\"\n\texport LD_PRELOAD\n\t./fledge.services.north \"$@\"\n \tunset LD_PRELOAD\nelse\n  delay \"$@\"\n  ./fledge.services.north \"$@\"\nfi\n\n"
  },
  {
    "path": "scripts/services/notification_c",
    "content": "#!/bin/bash\n# Run a Fledge notification service written in C/C++\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n\tFLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tlogger \"Fledge home directory missing or incorrectly set environment\"\n\texit 1\nfi\n\n# startup with delay\ndelay() {\n    for ARG in \"$@\";\n      do\n        PARAM=$(echo $ARG | cut -f1 -d=)\n        if [ $PARAM = '--delay' ]; then\n          PARAM_LENGTH=${#PARAM}\n          VALUE=\"${ARG:$PARAM_LENGTH+1}\"\n          sleep $VALUE\n          break\n        fi\n    done\n}\n\ncd \"${FLEDGE_ROOT}/services\"\ndelay \"$@\"\n./fledge.services.notification \"$@\"\n"
  },
  {
    "path": "scripts/services/pipeline_c",
    "content": "#!/bin/bash\n# Run a Fledge pipeline service written in C/C++\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n    FLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n    logger \"Fledge home directory missing or incorrectly set environment.\"\n    exit 1\nfi\n\n\n# startup with delay\ndelay() {\n    for ARG in \"$@\";\n      do\n        PARAM=$(echo $ARG | cut -f1 -d=)\n        if [ $PARAM = '--delay' ]; then\n          PARAM_LENGTH=${#PARAM}\n          VALUE=\"${ARG:$PARAM_LENGTH+1}\"\n          sleep $VALUE\n          break\n        fi\n    done\n}\n\ncd \"${FLEDGE_ROOT}/services\"\ndelay \"$@\"\n./fledge.services.pipeline \"$@\"\n"
  },
  {
    "path": "scripts/services/south_c",
    "content": "#!/bin/bash\n# Run a Fledge south service written in C/C++\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n\tFLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tlogger \"Fledge home directory missing or incorrectly set environment\"\n\texit 1\nfi\n\n\n# startup with delay\ndelay() {\n    for ARG in \"$@\";\n      do\n        PARAM=$(echo $ARG | cut -f1 -d=)\n        if [ $PARAM = '--delay' ]; then\n          PARAM_LENGTH=${#PARAM}\n          VALUE=\"${ARG:$PARAM_LENGTH+1}\"\n          sleep $VALUE\n          break\n        fi\n    done\n}\n\ncd \"${FLEDGE_ROOT}/services\"\n\nrunvalgrind=n\nif [ \"$VALGRIND_SOUTH\" != \"\" ]; then\n\tfor i in \"$@\"; do\n\t\tcase $i in\n\t\t\t--name=*)\n\t\t\t\tname=`echo $i | sed -e s/--name=//`\n\t\t\t\t;;\n\t\tesac\n\tdone\n\tservices=$(echo $VALGRIND_SOUTH | tr \";\" \"\\n\")\n\tfor service in $services; do\n\t\tif [ \"$service\" = \"$name\" ]; then\n\t\t\trunvalgrind=y\n\t\tfi\n\tdone\nfi\n\nif [ \"$runvalgrind\" = \"y\" ]; then\n\tfile=\"$HOME/south.${name}.valgrind.out\"\n\trm -f \"$file\"\n\tvalgrind  --leak-check=full --trace-children=yes --log-file=\"$file\" ./fledge.services.south \"$@\"\nelse\n  delay \"$@\"\n  ./fledge.services.south \"$@\"\nfi\n\n"
  },
  {
    "path": "scripts/services/storage",
    "content": "#!/bin/bash\n# Run a Fledge Storage service written in C/C++\nset -eo pipefail\nstorageExec=\"\"\npluginScriptPath=\"\"\n\nif [[ \"${FLEDGE_ROOT}\" = \"\" ]]; then\n    if [[ ! -x /usr/local/fledge/services/fledge.services.storage ]] && [[ ! -x /usr/local/fledge/services/storage ]]; then\n        logger \"Unable to find Fledge storage microservice in the default location\"\n        exit 1\n    else\n        # Set plugin script path\n\tpluginScriptPath=/usr/local/fledge/scripts/plugins/storage/\n        # Set storage service exec\n        if [[ -x /usr/local/fledge/services/fledge.services.storage ]]; then\n            export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/fledge/lib\n            storageExec=/usr/local/fledge/services/fledge.services.storage\n        else\n            storageExec=/usr/local/fledge/services/storage\n        fi\n\tif [[ \"$1\" != \"--plugin\" ]]; then\n            logger \"Fledge storage microservice in the default location: /usr/local/fledge\"\n        fi\n    fi\nelse\n    # Include common logger script code\n    . $FLEDGE_ROOT/scripts/common/write_log.sh\n    if [[ ! -x ${FLEDGE_ROOT}/services/fledge.services.storage ]] && [[ ! 
-x ${FLEDGE_ROOT}/services/storage ]]; then\n        write_log \"\" \"scripts.services.storage\" \"err\" \"Unable to find Fledge storage microservice in ${FLEDGE_ROOT}/services/storage\" \"logonly\" \"\"\n        exit 1\n    else\n        # Set plugin script path\n\tpluginScriptPath=${FLEDGE_ROOT}/scripts/plugins/storage/\n        # Set storage service exec\n        if [[ -x ${FLEDGE_ROOT}/services/fledge.services.storage ]]; then\n            export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${FLEDGE_ROOT}/lib:/usr/local/fledge/lib\n            storageExec=${FLEDGE_ROOT}/services/fledge.services.storage\n        else\n            storageExec=${FLEDGE_ROOT}/services/storage\n        fi\n\tif [[ \"$1\" != \"--plugin\" ]] && [[ \"$1\" != \"--readingsPlugin\" ]]; then\n            write_log \"\" \"scripts.services.storage\" \"info\" \"Fledge storage microservice found in FLEDGE_ROOT location: ${FLEDGE_ROOT}\" \"logonly\" \"\"\n\tfi\n    fi\nfi\n\nif [[ \"$1\" == \"--plugin\" ]]; then\n    # Get db schema\n    FLEDGE_VERSION_FILE=\"${FLEDGE_ROOT}/VERSION\"\n    FLEDGE_SCHEMA=`cat ${FLEDGE_VERSION_FILE} | tr -d ' ' | grep -i \"FLEDGE_SCHEMA=\" | sed -e 's/\\(.*\\)=\\(.*\\)/\\2/g'`\n    # Get storage engine\n    res=(`${storageExec} --plugin`)\n    storagePlugin=${res[0]}\n    managedEngine=${res[1]}\n    # Call plugin check: this will create database if not set yet\n    ${pluginScriptPath}/${storagePlugin}.sh init ${FLEDGE_SCHEMA} ${managedEngine}\n    if [[ \"$VALGRIND_STORAGE\" = \"y\" ]]; then\n\t\twrite_log \"\" \"scripts.services.storage\" \"warn\" \"Running storage service under valgrind\" \"logonly\" \"\"\n\t\tif [[ -f \"$HOME/storage.valgrind.out\" ]]; then\n\t\t\trm $HOME/storage.valgrind.out\n\t\tfi\n\t\tvalgrind --leak-check=full --show-leak-kinds=all --trace-children=yes 
--log-file=$HOME/storage.valgrind.out ${storageExec} \"$@\" -d &\n    else\n\t\t${storageExec} \"$@\"\n    fi\nelif [[ \"$1\" == \"--readingsPlugin\" ]]; then\n    # Get db schema\n    FLEDGE_VERSION_FILE=\"${FLEDGE_ROOT}/VERSION\"\n    FLEDGE_SCHEMA=`cat ${FLEDGE_VERSION_FILE} | tr -d ' ' | grep -i \"FLEDGE_SCHEMA=\" | sed -e 's/\\(.*\\)=\\(.*\\)/\\2/g'`\n    # Get storage engine\n    res=(`${storageExec} --readingsPlugin`)\n    # Check if the first three words are \"Use main plugin\"\n    if [[ \"$res\" =~ \"Use main plugin\" ]]; then\n        res=(`${storageExec} --plugin`)\n    fi\n    storagePlugin=${res[0]}\n    managedEngine=${res[1]}\n    # Call plugin check: this will create database if not set yet\n    if [[ -x ${pluginScriptPath}/${storagePlugin}.sh ]]; then\n       ${pluginScriptPath}/${storagePlugin}.sh init ${FLEDGE_SCHEMA} ${managedEngine}\n    fi\n    ${storageExec} \"$@\"\nelif [[ \"$VALGRIND_STORAGE\" = \"y\" ]]; then\n        write_log \"\" \"scripts.services.storage\" \"warn\" \"Running storage service under valgrind\" \"logonly\" \"\"\n\tif [[ -f \"$HOME/storage.valgrind.out\" ]]; then\n\t\trm $HOME/storage.valgrind.out\n\tfi\n\tvalgrind --leak-check=full --show-leak-kinds=all --trace-children=yes --log-file=$HOME/storage.valgrind.out ${storageExec} \"$@\" -d &\nelse\n\t${storageExec} \"$@\"\nfi\nexit 0\n"
  },
  {
    "path": "scripts/storage",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2017-2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n#\n# This script is used to call the PosegreSQL storage plugin script\n# to store and retrieve the sensor data when the database\n# is embedded in Fledge\n#\n\n#set -x\n\n# Include common code\nsource \"${FLEDGE_ROOT}/scripts/common/get_storage_plugin.sh\"\nsource \"${FLEDGE_ROOT}/scripts/common/get_readings_plugin.sh\"\n\nPLUGIN_TO_USE=\"\"\n\n# Logger wrapper\nstorage_log() {\n    write_log \"Storage\" \"script.storage\" \"$1\" \"$2\" \"$3\" \"$4\"\n}\n\n#############\n## MAIN LOGIC\n#############\n\n\nPLUGIN_TO_USE=`get_storage_plugin`\nREADINGS_PLUGIN_TO_USE=`get_readings_plugin`\nif [[ \"${#PLUGIN_TO_USE}\" -eq 0 ]]; then\n    storage_log \"err\" \"Missing plugin from Fledge storage service\" \"all\" \"pretty\"\n    exit 1\nfi\n\nPLUGIN_SCRIPT=\"$FLEDGE_ROOT/scripts/plugins/storage/$PLUGIN_TO_USE.sh\"\nif [[ ! -x \"$PLUGIN_SCRIPT\" ]]; then\n\n    # Missing storage plugin script\n    storage_log \"err\" \"Fledge cannot start.\" \"all\" \"pretty\"\n    storage_log \"err\" \"Missing Storage Plugin script $PLUGIN_SCRIPT.\" \"all\" \"pretty\"\n    exit 1\n\nfi\n\n# The reset must be executed on both the storage and readings plugins, if the\n# readings are stored in a different plugin. 
On the readings plugin this becomes\n# a purge operation.\n#\n# The purge action is executed via the separate readings plugin if one is\n# defined, otherwise via the main storage plugin.\n\nif [[ \"$1\" == \"reset\" ]] ; then\n\t# Pass action in $1 and FLEDGE_VERSION in $2\n\tsource \"$PLUGIN_SCRIPT\" $1 $2\n\n\tif [[ \"$PLUGIN_TO_USE\" != \"$READINGS_PLUGIN_TO_USE\" ]]; then\n\t\tREADINGS_SCRIPT=\"$FLEDGE_ROOT/scripts/plugins/storage/$READINGS_PLUGIN_TO_USE.sh\"\n\t\tif [[ -x \"$READINGS_SCRIPT\" ]]; then\n\t\t\tsource \"$READINGS_SCRIPT\" purge $2\n\t\tfi\n\tfi\nelif [[ \"$1\" == \"purge\" ]]; then\n\t# Pass action in $1 and FLEDGE_VERSION in $2\n\n\tif [[ \"$PLUGIN_TO_USE\" != \"$READINGS_PLUGIN_TO_USE\" ]]; then\n\t\tREADINGS_SCRIPT=\"$FLEDGE_ROOT/scripts/plugins/storage/$READINGS_PLUGIN_TO_USE.sh\"\n\t\t# Some readings plugins, notably sqlitememory, do not have a script\n\t\tif [[ -x \"$READINGS_SCRIPT\" ]]; then\n\t\t    source \"$READINGS_SCRIPT\" $1 $2\n\t\tfi\n\telse\n\t\tsource \"$PLUGIN_SCRIPT\" $1 $2\n\tfi\nelse\n\t# Pass any other operation to the storage plugin\n\tsource \"$PLUGIN_SCRIPT\" $1 $2\n\t# Also start the readings plugin if it is different from the configuration plugin\n\t# The reason to do this is to create the schema in the readings database if required\n\tif [[ \"$PLUGIN_TO_USE\" != \"$READINGS_PLUGIN_TO_USE\" ]]; then\n\t\tREADINGS_SCRIPT=\"$FLEDGE_ROOT/scripts/plugins/storage/$READINGS_PLUGIN_TO_USE.sh\"\n\t\t# Some readings plugins, notably sqlitememory, do not have a script\n\t\tif [[ -x \"$READINGS_SCRIPT\" ]]; then\n\t\t    source \"$READINGS_SCRIPT\" $1 $2\n\t\tfi\n\tfi\nfi\n\n# exit cannot be used because the script is sourced.\n#exit $?\n\n"
  },
  {
    "path": "scripts/tasks/README.rst",
    "content": "************\nTask Scripts\n************\n\nAll scheduled processes taht are started by Fledge will be started\nusing a script in the tasks directory. This enables Fledge's core to\nrun tasks without having to kow the details of the environment those\ntasks require or the runtime support and implementation of the tasks.\n"
  },
  {
    "path": "scripts/tasks/automation_script",
    "content": "#!/bin/sh\n# Run a Fledge task written in Python\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n\tFLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tlogger \"Fledge home directory missing or incorrectly set environment\"\n\texit 1\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}/python\" ]; then\n\tlogger \"Fledge home directory is missing the Python installation\"\n\texit 1\nfi\n\n# We run the Python code from the python directory\ncd \"${FLEDGE_ROOT}/python\"\n\n# TODO: Do we need to check dispatcher installation and is running state; if not running or not installed exit early?\n\npython3 -m fledge.tasks.automation_script \"$@\"\n"
  },
  {
    "path": "scripts/tasks/backup",
    "content": "#!/bin/bash\n\ndeclare FLEDGE_ROOT\ndeclare FLEDGE_DATA\ndeclare PYTHONPATH\n\n# Run a Fledge task written in Python\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n\tFLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tlogger \"Fledge home directory missing or incorrectly set environment\"\n\texit 1\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}/python\" ]; then\n\tlogger \"Fledge home directory is missing the Python installation\"\n\texit 1\nfi\n\n# Adds required paths for the execution of the python module if not already defined\nif [ \"${PYTHONPATH}\" = \"\" ]; then\n\n\texport PYTHONPATH=\"${FLEDGE_ROOT}/python\"\nfi\n\nexport FLEDGE_DATA=$FLEDGE_ROOT/data\n\n# Include common code\nsource \"${FLEDGE_ROOT}/scripts/common/get_storage_plugin.sh\"\n\n# Evaluates which storage engine is enabled and it uses the proper command\nstorage=`get_storage_plugin`\n\nif [ \"${storage}\" == \"sqlite\" ]; then\n\n    python3 -m fledge.plugins.storage.sqlite.backup_restore.backup_sqlite \"$@\"\n\nelif [ \"${storage}\" == \"postgres\" ]; then\n\n    python3 -m fledge.plugins.storage.postgres.backup_restore.backup_postgres \"$@\"\nelse\n    logger \"ERROR: the backup functionality for the storage engine :${storage}: is not implemented.\"\n    exit 1\nfi\n\n\n\n\n"
  },
  {
    "path": "scripts/tasks/check_certs",
    "content": "#!/bin/bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2018 OSIsoft, LLC\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n\n# @date: 2018-01-02\n#\n# Bash Script to check for expired certificates and repalce with temporary certificates\n\nif [ -z ${FLEDGE_ROOT+x} ]; then\n    # Set FLEDGE_ROOT as the default directory\n    FLEDGE_ROOT=\"/usr/local/fledge\"\n    export FLEDGE_ROOT\nfi\n\n# Check if the default directory exists\nif [[ ! -d \"${FLEDGE_ROOT}\" ]]; then\n    logger -p local0.err -t \"fledge.script.tasks.cjeck_certs\" \"Check_certs cannot be executed: ${FLEDGE_ROOT} is not a valid directory.\"\n    echo \"Fledge cannot be executed: ${FLEDGE_ROOT} is not a valid directory.\"\n    echo \"Create the enviroment variable FLEDGE_ROOT before using check_certs.\"\n    echo \"Specify the base directory for Fledge and set the variable with:\"\n    echo \"export FLEDGE_ROOT=<basedir>\"\n    exit 1\nfi\n\n. $FLEDGE_ROOT/scripts/common/write_log.sh\n\nif [ -z ${FLEDGE_DATA+x} ]; then\n\texport dir=$FLEDGE_ROOT/data/etc/certs\nelse\n\texport dir=$FLEDGE_DATA/etc/certs\nfi\n\nexport day=`expr 60 \\* 60 \\* 24`\nexport week=`expr $day \\* 7`\ncerts=`echo $dir/*.cert`\nfor cert in $certs\ndo\n\topenssl x509 -noout -checkend $day -in $cert >/dev/null\n\tif [ $? 
-eq 1 ]; then\n\t\t\twrite_log \"\" \"fledge.check_certs\" \"err\" \"Certificate $cert will expire in less than a day\" \"all\" \"pretty\"\n\t\t\tcd $FLEDGE_ROOT\n\t\t\tscripts/certificates `basename $cert .cert` 7\n\telse\n\t\topenssl x509 -noout -checkend $week -in $cert >/dev/null\n\t\tif [ $? -eq 1 ]; then\n\t\t\twrite_log \"\" \"fledge.check_certs\" \"warn\" \"Certificate $cert will expire in less than a week\" \"all\" \"pretty\"\n\t\tfi\n\tfi\ndone\n"
  },
  {
    "path": "scripts/tasks/check_updates",
    "content": "#!/bin/sh\n# Run a Fledge task written in C\n\n# Bash Script to invoke Installed packages available upgrade checks binary and raise alerts\n\nif [ -z ${FLEDGE_ROOT+x} ]; then\n    # Set FLEDGE_ROOT as the default directory\n    FLEDGE_ROOT=\"/usr/local/fledge\"\n    export FLEDGE_ROOT\nfi\n\n# Check if the default directory exists\nif [[ ! -d \"${FLEDGE_ROOT}\" ]]; then\n    echo \"Fledge cannot be executed: ${FLEDGE_ROOT} is not a valid directory.\"\n    echo \"Create the enviroment variable FLEDGE_ROOT before using check_updates.\"\n    echo \"Specify the base directory for Fledge and set the variable with:\"\n    echo \"export FLEDGE_ROOT=<basedir>\"\n    exit 1\nfi\n\n\ncd \"${FLEDGE_ROOT}\"\n\n./tasks/check_updates \"$@\"\n\n"
  },
  {
    "path": "scripts/tasks/north_c",
    "content": "#!/bin/sh\n# Run a Fledge task written in C\n\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n\tFLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tlogger \"Fledge home directory missing or incorrectly set environment\"\n\texit 1\nfi\n\n\n# TODO: define the proper path\ncd \"${FLEDGE_ROOT}\"\n\n./tasks/sending_process \"$@\"\n\n"
  },
  {
    "path": "scripts/tasks/purge",
    "content": "#!/bin/sh\n# Run a Fledge task written in Python\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n\tFLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tlogger \"Fledge home directory missing or incorrectly set environment\"\n\texit 1\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}/python\" ]; then\n\tlogger \"Fledge home directory is missing the Python installation\"\n\texit 1\nfi\n\n# We run the Python code from the python directory\ncd \"${FLEDGE_ROOT}/python\"\n\npython3 -m fledge.tasks.purge \"$@\"\n"
  },
  {
    "path": "scripts/tasks/purge_system",
    "content": "#!/bin/sh\n# Run a Fledge task written in C\n\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n\tFLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tlogger \"Fledge home directory missing or incorrectly set environment\"\n\texit 1\nfi\n\n\n# TODO: define the proper path\ncd \"${FLEDGE_ROOT}\"\n\n./tasks/purge_system \"$@\"\n\n"
  },
  {
    "path": "scripts/tasks/restore",
    "content": "#!/bin/bash\n\ndeclare FLEDGE_ROOT\ndeclare FLEDGE_DATA\ndeclare PYTHONPATH\n\n# Run a Fledge task written in Python\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n\tFLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tlogger \"Fledge home directory missing or incorrectly set environment\"\n\texit 1\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}/python\" ]; then\n\tlogger \"Fledge home directory is missing the Python installation\"\n\texit 1\nfi\n\n# Adds required paths for the execution of the python module if not already defined\nif [ \"${PYTHONPATH}\" = \"\" ]; then\n\n\texport PYTHONPATH=\"${FLEDGE_ROOT}/python\"\nfi\n\nexport FLEDGE_DATA=$FLEDGE_ROOT/data\n\n# Include common code\nsource \"${FLEDGE_ROOT}/scripts/common/get_storage_plugin.sh\"\n\n# Evaluates which storage engine is enabled and it uses the proper command\nstorage=`get_storage_plugin`\n\nif [ \"${storage}\" == \"sqlite\" ]; then\n\n    command=\"python3 -m fledge.plugins.storage.sqlite.backup_restore.restore_sqlite $@\"\n    # Avoid Fledge termination at the Fledge stop\n    nohup $command </dev/null >/dev/null 2>&1 &\n\nelif [ \"${storage}\" == \"postgres\" ]; then\n\n    command=\"python3 -m fledge.plugins.storage.postgres.backup_restore.restore_postgres $@\"\n    # Avoid Fledge termination at the Fledge stop\n    nohup $command </dev/null >/dev/null 2>&1 &\n\nelse\n    logger \"ERROR: the restore functionality for the storage engine :${storage}: is not implemented.\"\n    exit 1\nfi\n"
  },
  {
    "path": "scripts/tasks/statistics",
    "content": "#!/usr/bin/env bash\n# Run a Fledge task written in C\nif [ \"${FLEDGE_ROOT}\" = \"\" ]; then\n\tFLEDGE_ROOT=/usr/local/fledge\nfi\n\nif [ ! -d \"${FLEDGE_ROOT}\" ]; then\n\tlogger \"Fledge home directory missing or incorrectly set environment\"\n\texit 1\nfi\n\nos_name=`(grep -o '^NAME=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')`\n\nif [[ $os_name == *\"Raspbian\"*  ]]; then\n\n\tcpulimit -l 40 -- $FLEDGE_ROOT/tasks/statistics_history \"$@\"\nelse\n\t# Standard execution on other platforms\n\t$FLEDGE_ROOT/tasks/statistics_history \"$@\"\nfi\n"
  },
  {
    "path": "tests/.gitignore",
    "content": "# Cache\n.pytest_cache\n\n# Coverage\n.coverage\nhtmlcov/\n\n# Test certificates\n!/unit/python/fledge/services/core/api/certs/*\n"
  },
  {
    "path": "tests/README.rst",
    "content": ".. Fledge test scripts describes how to Fledge scripted tests are organised and how to write the scripted tests\n\n.. |br| raw:: html\n\n   <br />\n\n.. Links\n\n.. Links in new tabs\n\n.. |pytest docs| raw:: html\n\n   <a href=\"https://docs.pytest.org/en/latest/contents.html\" target=\"_blank\">pytest</a>\n\n.. |pytest decorators| raw:: html\n\n   <a href=\"https://docs.pytest.org/en/latest/mark.html\" target=\"_blank\">pytest</a>\n\n.. |pytest-cov docs| raw:: html\n\n   <a href=\"https://pytest-cov.readthedocs.io/en/v2.9.0/\" target=\"_blank\">pytest-cov</a>\n\n.. _Unit: unit\\\\python\\\\\n.. _System: system\\\\\n.. _here: ..\\\\README.rst\n\n.. =============================================\n\n********************\nFledge Test Scripts\n********************\n\nFledge scripted tests are classified into two categories:\n\n- `Unit`_ - Tests that checks the expected output of a code block.\n- `System`_ - Tests that checks the end to end and integration flows in Fledge\n\n\nRunning Fledge scripted tests\n==============================\n\nTest Prerequisites\n------------------\n\nFollow the instructions mentioned `here`_  to install and run Fledge on your machine.\nYou can test Fledge from your development environment or after installing Fledge.\n\nTo install the dependencies required to run python tests, run the following command from FLEDGE_ROOT\n::\n   python3 -m pip install -r python/requirements-test.txt --user\n   sudo apt install jq libxslt-dev\n\n\nTest Execution\n--------------\n\nPython Tests\n++++++++++++\n\nFledge uses pytest as the test runner for testing python based code. 
For more information on pytest, please refer to\n|pytest docs|.\nRunning the python tests:\n\n- ``pytest`` - This will execute all the python test files in the given directory and sub-directories.\n- ``pytest test_filename.py`` - This will execute all tests in the file named test_filename.py\n- ``pytest test_filename.py::TestClass`` - This will execute all test methods in a single class TestClass in file test_filename.py\n- ``pytest test_filename.py::TestClass::test_case`` - This will execute test method test_case in class TestClass in file test_filename.py\n\n**NOTE:** *Information to run the different categories of tests can be found in their respective documentation*\n\n\nC Tests\n+++++++\n\nTO-DO\n\nTest addition\n-------------\n\nIf you want to contribute towards adding new tests in Fledge, make sure you follow these rules:\n\n- Test file name should begin with the word ``test_`` to enable pytest auto test discovery.\n- Make sure you are placing your test file in the correct test directory. For example, if you are writing a unit test, it should be located under ``$FLEDGE_ROOT/tests/unit/python/fledge/<component>`` where component is the name of the component for which you are writing the unit tests. For more information on the types of tests, refer to the test categories.\n\nCode Coverage\n-------------\n\nPython Tests\n++++++++++++\n\nFledge uses the pytest-cov framework as the code coverage measurement tool for python tests. For more information on pytest-cov, please refer to |pytest-cov docs|.\n\nTo install the pytest-cov framework along with pytest, use the following command:\n::\n   python3 -m pip install pytest==7.0.1 pytest-cov==2.9.0\n\nRunning the python tests:\n\n- ``pytest --cov=. 
--cov-report xml:xml_filepath --cov-report html:html_directorypath`` - This will execute all the python test files in the given directory and sub-directories and generate the code coverage report in XML as well as the HTML format at the specified path in the command.\n- ``pytest test_filename.py --cov=. --cov-report xml:xml_filepath --cov-report html:html_directorypath`` - This will execute all tests in the file named test_filename.py and generate the code coverage report in XML as well as the HTML format at the specified path in the command.\n- ``pytest test_filename.py::TestClass --cov=. --cov-report xml:xml_filepath --cov-report html:html_directorypath`` -  This will execute all test methods in a single class TestClass in file test_filename.py and generate the code coverage report in XML as well as the HTML format at the specified path in the command.\n- ``pytest test_filename.py::TestClass::test_case --cov=. --cov-report xml:xml_filepath --cov-report html:html_directorypath`` - This will execute test method test_case in class TestClass in file test_filename.py and generate the code coverage report in XML as well as the HTML format at the specified path in the command.\n- ``pytest -s -vv tests/unit/python/fledge/ --cov=. --cov-report=html --cov-config $FLEDGE_ROOT/tests/unit/python/.coveragerc`` - This will execute all the python tests and generate the code coverage report in the HTML format on the basis of settings in the configuration file.\n\n\nC Tests\n+++++++\n\nTODO: FOGL-8497 Add documentation of Code Coverage of C Based tests\n"
  },
  {
    "path": "tests/__init__.py",
    "content": ""
  },
  {
    "path": "tests/system/__init__.py",
    "content": ""
  },
  {
    "path": "tests/system/common/clean_pi_system.py",
    "content": "\nimport http.client\nimport json\nimport base64\nimport ssl\n\n\ndef delete_pi_point(host, admin, password, asset_name, data_point_name):\n    \"\"\"Deletes a given pi point fromPI.\"\"\"\n    username_password = \"{}:{}\".format(admin, password)\n\n    username_password_b64 = base64.b64encode(username_password.encode('ascii')).decode(\"ascii\")\n    headers = {'Authorization': 'Basic %s' % username_password_b64, 'Content-Type': 'application/json'}\n\n    try:\n        web_id, pi_point_name = search_for_pi_point(host, admin, password, asset_name, data_point_name)\n        if not web_id:\n            print(\"Could not search PI Point {}. \".format(data_point_name))\n            return\n\n        conn = http.client.HTTPSConnection(host, context=ssl._create_unverified_context())\n        conn.request(\"DELETE\", \"/piwebapi/points/{}\".format(web_id), headers=headers)\n        r = conn.getresponse()\n        assert r.status == 204, \"Could not delete\" \\\n                                \" the pi point {}.\".format(pi_point_name)\n\n        conn.close()\n\n    except Exception as er:\n        print(\"Could not delete pi point {} due to {}\".format(data_point_name, er))\n        assert False, \"Could not delete pi point {} due to {}\".format(data_point_name, er)\n\n\ndef search_for_pi_point(host, admin, password, asset_name, data_point_name):\n    \"\"\"Searches for a pi point in PI return its web_id and its full name in PI.\"\"\"\n    username_password = \"{}:{}\".format(admin, password)\n\n    username_password_b64 = base64.b64encode(username_password.encode('ascii')).decode(\"ascii\")\n    headers = {'Authorization': 'Basic %s' % username_password_b64, 'Content-Type': 'application/json'}\n    try:\n        conn = http.client.HTTPSConnection(host, context=ssl._create_unverified_context())\n        conn.request(\"GET\", '/piwebapi/dataservers', headers=headers)\n        res = conn.getresponse()\n        r = json.loads(res.read().decode())\n        
points_url = r[\"Items\"][0][\"Links\"][\"Points\"]\n    except Exception:\n        print(\"Could not request data server of PI\")\n        return False, None\n\n    try:\n        conn.request(\"GET\", points_url, headers=headers)\n        res = conn.getresponse()\n        points = json.loads(res.read().decode())\n    except Exception:\n        print(\"Could not get Points data.\")\n        return False, None\n\n    # if datapoint name is given then will search with asset name.\n    if data_point_name != '':\n        name_to_search = asset_name + '.' + data_point_name\n    else:\n        # if no datapoint is given will search for asset name.\n        # This is expected behaviour when pi points are created for assets with single data point.\n        # See FOGL-6804 for details.\n        name_to_search = asset_name\n\n    for single_point in points['Items']:\n\n        if name_to_search in single_point['Name']:\n            web_id = single_point['WebId']\n            pi_point_name = single_point[\"Name\"]\n            conn.close()\n            return web_id, pi_point_name\n\n    return None, None\n\n\ndef search_for_element_template(host, admin, password, pi_database, search_string):\n    \"\"\"Searches for an element template using a search string. 
If found returns its web_id.\n       If multiple templates found then returns an array of web_ids.\n    \"\"\"\n    username_password = \"{}:{}\".format(admin, password)\n    username_password_b64 = base64.b64encode(username_password.encode('ascii')).decode(\"ascii\")\n    headers = {'Authorization': 'Basic %s' % username_password_b64}\n\n    try:\n        conn = http.client.HTTPSConnection(host, context=ssl._create_unverified_context())\n        conn.request(\"GET\", '/piwebapi/assetservers', headers=headers)\n        res = conn.getresponse()\n        r = json.loads(res.read().decode())\n        dbs = r[\"Items\"][0][\"Links\"][\"Databases\"]\n\n        if dbs is not None:\n            conn.request(\"GET\", dbs, headers=headers)\n            res = conn.getresponse()\n            r = json.loads(res.read().decode())\n            for el in r[\"Items\"]:\n                if el[\"Name\"] == pi_database:\n                    element_template_list = el[\"Links\"][\"ElementTemplates\"]\n\n        web_ids = []\n        if element_template_list is not None:\n            conn.request(\"GET\", element_template_list, headers=headers)\n            res = conn.getresponse()\n            r = json.loads(res.read().decode())\n            for template_info in r['Items']:\n                if search_string in template_info['Name']:\n                    web_ids.append(template_info['WebId'])\n\n        if not web_ids:\n            print(\"Could not find asset template with name {}\".format(search_string))\n            return []\n        return web_ids\n\n    except Exception as er:\n        print(\"Could not find asset element template with name\"\n              \"  {} due to {}\".format(search_string, er))\n        return []\n\n\ndef delete_element_template(host, admin, password, web_id):\n    \"\"\"Deletes an element template through its web_id.\"\"\"\n    username_password = \"{}:{}\".format(admin, password)\n\n    username_password_b64 = 
base64.b64encode(username_password.encode('ascii')).decode(\"ascii\")\n    headers = {'Authorization': 'Basic %s' % username_password_b64, 'Content-Type': 'application/json'}\n\n    try:\n\n        conn = http.client.HTTPSConnection(host, context=ssl._create_unverified_context())\n        conn.request(\"DELETE\", \"/piwebapi/elementtemplates/{}\".format(web_id), headers=headers)\n        r = conn.getresponse()\n        assert r.status == 204, \"Could not delete\" \\\n                                \" element template for web_id {}.\".format(web_id)\n\n        conn.close()\n\n    except Exception as er:\n        print(\"Could not delete element template {} due to {}\".format(web_id, er))\n        assert False, \"Could not delete element template {} due to {}\".format(web_id, er)\n\n\ndef delete_element_hierarchy(host, admin, password, pi_database, af_hierarchy_list):\n    \"\"\" This method deletes the given hierarchy list from PI.\"\"\"\n    url_elements_list = None\n    web_id_root = None\n\n    username_password = \"{}:{}\".format(admin, password)\n    username_password_b64 = base64.b64encode(username_password.encode('ascii')).decode(\"ascii\")\n    headers = {'Authorization': 'Basic %s' % username_password_b64}\n\n    try:\n        conn = http.client.HTTPSConnection(host, context=ssl._create_unverified_context())\n        conn.request(\"GET\", '/piwebapi/assetservers', headers=headers)\n        res = conn.getresponse()\n        r = json.loads(res.read().decode())\n        dbs = r[\"Items\"][0][\"Links\"][\"Databases\"]\n\n        if dbs is not None:\n            conn.request(\"GET\", dbs, headers=headers)\n            res = conn.getresponse()\n            r = json.loads(res.read().decode())\n            for el in r[\"Items\"]:\n                if el[\"Name\"] == pi_database:\n                    url_elements_list = el[\"Links\"][\"Elements\"]\n\n        af_level_count = 0\n        for level in af_hierarchy_list[:-1]:\n            if url_elements_list is not None:\n                
conn.request(\"GET\", url_elements_list, headers=headers)\n                res = conn.getresponse()\n                r = json.loads(res.read().decode())\n                for el in r[\"Items\"]:\n                    if el[\"Name\"] == af_hierarchy_list[af_level_count]:\n                        url_elements_list = el[\"Links\"][\"Elements\"]\n                        if af_level_count == 0:\n                            web_id_root = el[\"WebId\"]\n                        af_level_count = af_level_count + 1\n\n        if web_id_root:\n            conn.request(\"DELETE\", '/piwebapi/elements/{}'.format(web_id_root), headers=headers)\n            r = conn.getresponse()\n            assert r.status == 204, \"Could not delete element hierarchy of {}\".format(af_hierarchy_list)\n            conn.close()\n\n    except Exception as er:\n        print(\"Could not delete hierarchy of {} due to {}\".format(af_hierarchy_list, er))\n        print(\"Most probably it does not exist.\")\n\n\ndef clear_cache(host, admin, password, pi_database):\n    \"\"\"Method that deletes cache by supplying 'Cache-Control': 'no-cache' in header of GET request for\n        element list.\n    \"\"\"\n    username_password = \"{}:{}\".format(admin, password)\n    username_password_b64 = base64.b64encode(username_password.encode('ascii')).decode(\"ascii\")\n    headers = {'Authorization': 'Basic %s' % username_password_b64, 'Cache-Control': 'no-cache'}\n    normal_header = {'Authorization': 'Basic %s' % username_password_b64}\n    try:\n        conn = http.client.HTTPSConnection(host, context=ssl._create_unverified_context())\n        conn.request(\"GET\", '/piwebapi/assetservers', headers=normal_header)\n        res = conn.getresponse()\n        assert res.status == 200, \"Could not request asset server of Pi Web API.\"\n        r = json.loads(res.read().decode())\n        dbs = r[\"Items\"][0][\"Links\"][\"Databases\"]\n\n        if dbs is not None:\n            conn.request(\"GET\", dbs, 
headers=normal_header)\n            res = conn.getresponse()\n            assert res.status == 200, \"Databases is not accessible.\"\n            r = json.loads(res.read().decode())\n            for el in r[\"Items\"]:\n                if el[\"Name\"] == pi_database:\n                    url_elements_list = el[\"Links\"][\"Elements\"]\n\n        print(\"Getting old cache\")\n        conn = http.client.HTTPSConnection(host, context=ssl._create_unverified_context())\n        conn.request(\"GET\", '/piwebapi/system/cacheinstances', headers=normal_header)\n        res = conn.getresponse()\n        r = json.loads(res.read().decode())\n        try:\n            # assuming we have single user Administrator\n            old_cache_refresh_time = r[\"Items\"][0]['LastRefreshTime']\n        except (IndexError, KeyError):\n            print(\"The cache does not exist. \")\n            return\n\n        print(\"Old cache refresh time {} \".format(old_cache_refresh_time))\n        conn.close()\n\n        print(\"Going to request element list with cache control no cache.\")\n        if url_elements_list is not None:\n            conn.request(\"GET\", url_elements_list, headers=headers)\n            res = conn.getresponse()\n            r = json.loads(res.read().decode())\n\n        conn.close()\n\n        # for verification whether we are able to clear cache.\n        print(\"Now verifying whether cache cleared.\")\n        conn = http.client.HTTPSConnection(host, context=ssl._create_unverified_context())\n        conn.request(\"GET\", '/piwebapi/system/cacheinstances', headers=normal_header)\n        res = conn.getresponse()\n        r = json.loads(res.read().decode())\n        try:\n            # assuming we have only one user named Administrator.\n            new_cache_refresh_time = r[\"Items\"][0]['LastRefreshTime']\n        except (KeyError, IndexError):\n            print(\"The cache does not exist.\")\n            return\n\n        print(\"New cache refresh time {} 
\".format(new_cache_refresh_time))\n        conn.close()\n\n        assert new_cache_refresh_time != old_cache_refresh_time, \"The cache has not been refreshed.\"\n\n    except Exception as er:\n        print(\"Could not clear cache due to {}\".format(er))\n        return\n\n\ndef clear_pi_system_pi_web_api(host, admin, password, pi_database, af_hierarchy_list, asset_dict):\n    \"\"\"\n       Clears the pi system through pi web API.\n       1. Deletes the elements.\n       2. Deletes element templates.\n       3. Deletes the PI Points.\n       4. Clears the cache.\n    Args:\n        host (str): The address of the pi server.\n        admin (str): The user name inside pi server.\n        password (str): The passowrd for the username used above.\n        pi_database (str): The database inside pi server.\n        af_hierarchy_list (list): The asset heirarchy list to delete.\n        asset_dict (dict): It's a dict where keys are asset names, and it's value is list where each\n                           element of list is a datapoint associated with that asset.\n\n    Returns:\n        None\n    \"\"\"\n    print(\"Going to delete the element hierarchy list {}.\".format(af_hierarchy_list))\n    delete_element_hierarchy(host, admin, password, pi_database, af_hierarchy_list)\n    print(\"Deleted the element hierarchy list {}.\".format(af_hierarchy_list))\n\n    for asset_name in asset_dict.keys():\n        for dp_name in asset_dict[asset_name]:\n            print(\"Going to delete the PI point. with name {}.{}\".format(asset_name, dp_name))\n            delete_pi_point(host, admin, password, asset_name, dp_name)\n            print(\"Deleted the PI point. 
with name {}.{}\".format(asset_name, dp_name))\n\n    for h_level in af_hierarchy_list:\n        web_ids = search_for_element_template(host, admin, password, pi_database, h_level)\n        print(\"Going to delete the element template with name {} and web ids {}\".format(h_level, web_ids))\n        for web_id in web_ids:\n            delete_element_template(host, admin, password, web_id)\n        print(\"Deleted the element template with name {} and web ids {}\".format(h_level, web_ids))\n\n    clear_cache(host, admin, password, pi_database)\n    print(\"Cleared the cache of Pi system.\")\n"
  },
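[Developer] Every helper in `clean_pi_system.py` rebuilds the same Basic-auth header inline. The pattern could be factored into a single helper; a minimal sketch (the name `basic_auth_headers` is our own, not part of the test suite):

```python
import base64

def basic_auth_headers(user, password, content_type=None):
    """Build the Basic-auth header dict repeated in each PI Web API helper."""
    creds = "{}:{}".format(user, password)
    token = base64.b64encode(creds.encode("ascii")).decode("ascii")
    headers = {"Authorization": "Basic {}".format(token)}
    if content_type:
        # only the JSON-bearing requests set a Content-Type
        headers["Content-Type"] = content_type
    return headers
```

Each function could then start with `headers = basic_auth_headers(admin, password, 'application/json')` instead of three lines of duplicated encoding.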
  {
    "path": "tests/system/common/scripts/reset_user_authentication",
    "content": "#!/usr/bin/env bash\n\n# Check if FLEDGE_ROOT (the first argument) is provided\nFLEDGE_ROOT=\"$1\"\nif [ -z \"$FLEDGE_ROOT\" ]; then\n    # If $1 (FLEDGE_ROOT) is not provided, exit with status 1\n    echo \"Error: FLEDGE_ROOT is required.\"\n    exit 1\nfi\n\n# Check if the second argument (authentication) is passed\nif [ -z \"$2\" ]; then\n    # If no second argument, set authentication to \"optional\"\n    authentication=\"optional\"\nelse\n    # If second argument is \"authentication\", set authentication to \"mandatory\"\n    if [ \"$2\" == \"authentication\" ]; then\n        authentication=\"mandatory\"\n    else\n        # If the second argument is something else, set authentication to \"optional\"\n        authentication=\"optional\"\n    fi\nfi\n\n# Use the authentication value\necho \"Authentication is set to: $authentication\"\n\nif [ \"$authentication\" == \"mandatory\" ]; then\n    sudo sed -i \"s/'default': 'optional'/'default': 'mandatory'/g\" \"$FLEDGE_ROOT/python/fledge/services/core/server.py\"\nelse\n    sudo sed -i \"s/'default': 'mandatory'/'default': 'optional'/g\" \"$FLEDGE_ROOT/python/fledge/services/core/server.py\"\nfi\n"
  },
  {
    "path": "tests/system/lab/README.rst",
    "content": "\nScript to automate Fledge Lab\n------------------------------\n\n1. Install git i.e. `sudo apt install git`\n\n2. Clone Fledge repo and `cd tests/system/lab/`\n\n3. Check and set the configuration in `test.config`\n\n4. Make sure to enable I2C Interface for enviro-pHAT and reboot.\n\nFor CI or individual's setup, `test.config` should be replaced (altered) per the parameters.\n\nExecute `./run` to run test once. Default version it will use is nightly, you can pass an argument e.g. `./run 1.7.0RC`\nTo run the test for required (say 10) iterations or until it fails - execute `./run_until_fails 10 1.7.0RC`\n\n\n**`run` and `run_until_fails` use the following scripts in its execution:**\n\n- **remove**: apt removes all fledge packages; deletes /usr/local/fledge;\n\n- **install**: apt update; install fledge; install gui; install other fledge packages\n\n- **test**: curl commands to simulate all gui actions in the lab (except game)\n\n- **reset**: Reset script is to stop fledge; reset the db and delete any python scripts.\n\n\n**`test.config` contains following variables that are used by `test` scripts in its execution:**\n\n- **FLEDGE_IP**: IP Address of the system on which fledge is running.\n\n- **PI_IP**: IP Address of PI Web API.\n\n- **PI_USER**: Username used for accessing PI Web API.\n\n- **PI_PASSWORD**: Password used for PI Web API.\n\n- **PI_PORT**: Port number of PI Web API on which fledge will connect.\n\n- **PI_DB**: Database in wihch PI Point is to be stored.\n\n- **MAX_RETRIES**: Retries to check data and info via API before declaring it failed to see the expected.\n\n- **SLEEP_FIX**: Time to sleep to fix bugs. 
This should be zero.\n\n- **EXIT_EARLY**: A Boolean variable; if it contains the value '1', the test stops execution as soon as any error occurs.\n\n- **ADD_NORTH_AS_SERVICE**: Defines whether North (OMF) is created as a task or a service.\n\n- **VERIFY_EGRESS_TO_PI**: A Boolean variable; if it contains the value '1', North (OMF) is created and the data sent to PI Web API is verified.\n\n- **STORAGE**: Defines the storage plugin used by Fledge, i.e. sqlite, sqlitelb or postgres.\n\n- **READING_PLUGIN_DB**: By default this contains \"Use main plugin\", which means the reading plugin is the same as that set in the `STORAGE` variable. Apart from \"Use main plugin\", it may also contain the values sqlite, sqlitelb, sqlite-in-memory or postgres.\n"
  },
  {
    "path": "tests/system/lab/check_env",
    "content": "#!/usr/bin/env bash\n\nID=$(cat /etc/os-release | grep -w ID | cut -f2 -d\"=\" | tr -d '\"')\n\nif [[ \"${ID}\" != \"raspbian\" && \"${ID}\" != \"debian\" ]]; then\n  echo \"Please test with Raspberry Pi OS.\"\n  exit 1\nfi\n\nVERSION_CODENAME=$(cat /etc/os-release | grep VERSION_CODENAME | cut -f2 -d\"=\" | tr -d '\"')\nif [[ \"${VERSION_CODENAME}\" = \"bullseye\" || \"${VERSION_CODENAME}\" = \"bookworm\" ]]; then\n  echo \"Running test on ${VERSION_CODENAME}\"\nelse\n  echo \"This test is specific to Raspberry Pi OS bullseye & bookworm only!\"\n  exit 1\nfi\n"
  },
  {
    "path": "tests/system/lab/install",
    "content": "#!/usr/bin/env bash\n\n./check_env\n[[ $? -eq 0 ]]  || exit 1\n\ncat /etc/os-release | grep PRETTY_NAME | cut -f2 -d\"=\"\nuname -a\n\nsudo cp /etc/apt/sources.list /etc/apt/sources.list.bak\nsudo sed -i \"/\\b\\(archives.fledge-iot.org\\)\\b/d\" /etc/apt/sources.list\nsudo rm -f /etc/apt/sources.list.d/fledge.list;\n\n\nexport DEBIAN_FRONTEND=noninteractive\n\nsudo apt update && sudo apt upgrade -y && sudo apt update\necho \"==================== DONE update, upgrade, update ============================\"\n\necho \"==================== INSTALLING jq ==================\"\nsudo apt install -y jq\necho \"==================== DONE ==================\"\n\nBUILD_VERSION=\"nightly\"\nif [[ $# -gt 0 ]]\n then\n BUILD_VERSION=$1\nfi\n\nVERSION_CODENAME=$(cat /etc/os-release | grep VERSION_CODENAME | cut -f2 -d\"=\")\n\nwget -q -O - http://archives.fledge-iot.org/KEY.gpg | sudo apt-key add -\necho \"deb http://archives.fledge-iot.org/${BUILD_VERSION}/${VERSION_CODENAME}/$(arch)/ /\" | sudo tee -a /etc/apt/sources.list\nsudo apt update\n\ntime sudo -E apt install -yq fledge\necho \"==================== DONE INSTALLING Fledge ==================\"\n\ntime sudo apt install -y fledge-gui\necho \"==================== DONE INSTALLING Fledge GUI ======================\"\n\ntime sudo apt install -y fledge-service-notification fledge-filter-expression fledge-filter-python35 fledge-filter-rms \\\nfledge-filter-fft fledge-filter-delta fledge-filter-metadata fledge-filter-change \\\nfledge-notify-asset fledge-notify-python35 fledge-notify-email \\\nfledge-rule-simple-expression fledge-rule-average \\\nfledge-north-httpc \\\nfledge-south-sinusoid fledge-south-rpienviro fledge-south-randomwalk fledge-south-game fledge-south-modbus fledge-south-http-south\necho \"==================== DONE INSTALLING PLUGINS ==================\"\n"
  },
  {
    "path": "tests/system/lab/remove",
    "content": "#!/usr/bin/env bash\n\ntime sudo apt purge -y fledge fledge-gui\nsudo rm -rf /usr/local/fledge\n"
  },
  {
    "path": "tests/system/lab/reset",
    "content": "#!/usr/bin/env bash\n\nFLEDGE_ROOT=\"/usr/local/fledge\"\nsource ../common/scripts/reset_user_authentication \"$FLEDGE_ROOT\"\n\ninstall_postgres() {\n  sudo apt install -y postgresql\n  sudo -u postgres createuser -d \"$(whoami)\"\n}\n\n_config_reading_db () {\n  if [[ \"postgres\" == @($1|$2) ]]\n  then\n    install_postgres\n  fi\n  [[ -f $FLEDGE_ROOT/data/etc/storage.json ]] && echo $(jq -c --arg STORAGE_PLUGIN_VAL \"${1}\" '.plugin.value=$STORAGE_PLUGIN_VAL' $FLEDGE_ROOT/data/etc/storage.json) > $FLEDGE_ROOT/data/etc/storage.json || true\n  [[ -f $FLEDGE_ROOT/data/etc/storage.json ]] && echo $(jq -c --arg READING_PLUGIN_VAL \"${2}\" '.readingPlugin.value=$READING_PLUGIN_VAL' $FLEDGE_ROOT/data/etc/storage.json) > $FLEDGE_ROOT/data/etc/storage.json || true\n}\n\n# check for storage plugin\n. ./test.config\n\nif [[  ${STORAGE} == @(sqlite|postgres|sqlitelb) && ${READING_PLUGIN_DB} == @(Use main plugin|sqlitememory|sqlite|postgres|sqlitelb) ]]\nthen\n   _config_reading_db \"${STORAGE}\" \"${READING_PLUGIN_DB}\"\nelse\n  echo \"Invalid Storage Configuration\"\n  exit 1\nfi\n\necho \"Stopping Fledge using systemctl ...\"\n# FIXME: FOGL-1499 After the issue is resolved, remove the explicit 'kill' command and use 'systemctl stop' instead\n# sudo systemctl stop fledge\n/usr/local/fledge/bin/fledge kill\necho -e \"YES\\nYES\" | $FLEDGE_ROOT/bin/fledge reset || exit 1\necho\necho \"Starting Fledge using systemctl ...\"\n# FIXME: FOGL-1499 Once the issue is resolved, replace 'restart' with 'start\nsudo systemctl restart fledge\necho \"Fledge Status\"\nsystemctl status fledge | grep \"Active\"\n"
  },
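[Developer] The `_config_reading_db` function in the reset script edits `storage.json` in place with two `jq` calls. For reviewers unfamiliar with the jq filters, a hedged Python equivalent of the same edit (the function name and the assumption that `storage.json` has `plugin.value` and `readingPlugin.value` keys are taken from the script itself):

```python
import json

def set_storage_plugins(storage_json_path, storage_plugin, reading_plugin):
    """Set .plugin.value and .readingPlugin.value in storage.json,
    mirroring the two jq edits made by _config_reading_db."""
    with open(storage_json_path) as f:
        cfg = json.load(f)
    cfg["plugin"]["value"] = storage_plugin
    cfg["readingPlugin"]["value"] = reading_plugin
    with open(storage_json_path, "w") as f:
        # jq -c writes compact JSON; do the same here
        json.dump(cfg, f, separators=(",", ":"))
```

Unlike the shell pipeline, this version fails loudly on malformed JSON instead of silently truncating the file.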
  {
    "path": "tests/system/lab/run",
    "content": "#!/usr/bin/env bash\n\n./check_env\n[[ $? -eq 0 ]]  || exit 1\n\nVERSION=\"nightly\"\nif [[ $# -gt 0 ]]\n then\n VERSION=$1\nfi\n\n./remove\n./install ${VERSION}\n./reset || exit 1\n./test\n"
  },
  {
    "path": "tests/system/lab/run_until_fails",
    "content": "#!/usr/bin/env bash\n\n./check_env\n[[ $? -eq 0 ]]  || exit 1\n\nITERATIONS=10\nif [[ $# -gt 0 ]]\n then\n ITERATIONS=$1\nfi\n\nVERSION=\"nightly\"\nif [[ $# -gt 1 ]]\n then\n VERSION=$2\nfi\n\n\n./remove\n./install ${VERSION}\n\nfor i in $(seq ${ITERATIONS}); do\n  echo \"***************\"\n  echo \"***************\"\n  echo \"Run $i\"\n  echo \"***************\"\n  echo \"***************\"\n  ./reset || exit 1\n  ./test || exit 1\ndone\n\n"
  },
  {
    "path": "tests/system/lab/scripts/ema.py",
    "content": "# -*- coding: utf-8 -*-\n\n\n\"\"\" Generate exponential moving average\n\"\"\"\nimport json\n\nrate = 0.07  # rate default value: include 7% of current value (and 93% of history)\nlatest = None  # latest ema value\n\n\ndef set_filter_config(configuration):\n    \"\"\" Set configuration if provided\n\n    :param configuration:\n    :return:\n    \"\"\"\n    global rate\n    config = json.loads(configuration['config'])\n    if'rate' in config:\n        rate = config['rate']\n    return True\n\n\ndef doit(reading):\n    \"\"\" Process a reading\n\n    :param reading:\n    :return:\n    \"\"\"\n    global rate, latest\n    for attribute in list(reading):\n        if not latest:\n            latest = reading[attribute]\n        latest = reading[attribute] * rate + latest * (1 - rate)\n        reading[b'ema'] = latest\n\n\ndef ema(readings):\n    \"\"\" Process one or more readings\n\n    :param readings:\n    :return:\n    \"\"\"\n    for elem in list(readings):\n        doit(elem['reading'])\n    return readings\n"
  },
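[QA] The recurrence in `ema.py` keeps `rate` of the current value and `1 - rate` of history, so expected filter output can be computed by hand. A standalone sketch of the same recurrence for checking test expectations (`ema_series` is our own name, not part of the lab scripts):

```python
def ema_series(values, rate=0.07):
    """Apply the ema.py recurrence: latest = value * rate + latest * (1 - rate).
    The first value seeds the average, so it passes through unchanged."""
    latest = None
    out = []
    for value in values:
        latest = value if latest is None else value * rate + latest * (1 - rate)
        out.append(latest)
    return out
```

A constant input therefore stays constant, while a step change decays toward the new level at a speed set by `rate`.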
  {
    "path": "tests/system/lab/scripts/flash_leds.py",
    "content": "# -*- coding: utf-8 -*-\n\nfrom time import sleep\nfrom envirophat import leds\n\n\ndef flash_leds(message):\n    for count in range(4):\n        leds.on()\n        sleep(0.5)\n        leds.off()\n        sleep(0.5)\n"
  },
  {
    "path": "tests/system/lab/scripts/trendc.py",
    "content": "# -*- coding: utf-8 -*-\n\n\"\"\" Predict up/down trend in data which has momentum\n\n\"\"\"\nimport json\n\n# exponential moving average rate default values\n# short-term: include 15% of current value in ongoing average (and 85% of history)\nrate_short = 0.15\n# long-term: include 7% of current value\nrate_long = 0.07\n\n# short-term and long-term averages.\nema_short = ema_long = None\n\n# trend of data: 5: down / 10: up. Start with up.\ntrend = 10\n\n\n# get configuration if provided.\n# set this JSON string in configuration:\n#      {\"rate_short\":0.15, \"rate_long\":0.07}\ndef set_filter_config(configuration):\n    global rate_short, rate_long\n    filter_config = json.loads(configuration['config'])\n    if 'rate_short' in filter_config:\n        rate_short = filter_config['rate_short']\n    if 'rate_long' in filter_config:\n        rate_long = filter_config['rate_long']\n    return True\n\n\n# Process a reading\ndef doit(reading):\n    global rate_short, rate_long        # config\n    global ema_short, ema_long, trend   # internal variables\n\n    for attribute in list(reading):\n        if not ema_long:\n            ema_long = ema_short = reading[attribute]\n        else:\n            ema_long = reading[attribute] * rate_long + ema_long * (1 - rate_long)\n            reading[b'ema_long'] = ema_long\n            ema_short = reading[attribute] * rate_short + ema_short * (1 - rate_short)\n            reading[b'ema_short'] = ema_short\n            if(trend == 10) != (ema_short > ema_long):\n                trend = 5 if trend == 10 else 10\n            reading[b'trend'] = trend\n\n\n# process one or more readings\ndef trendc(readings):\n    for elem in list(readings):\n        doit(elem['reading'])\n    return readings\n"
  },
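[QA] `trendc.py` flips the trend flag whenever the short EMA crosses the long EMA, but the XOR-style check `(trend == 10) != (ema_short > ema_long)` is compact enough to misread. The same check isolated for unit testing (the function name `update_trend` is ours):

```python
def update_trend(trend, ema_short, ema_long):
    """Return 10 (up) while the short EMA is above the long EMA and 5 (down)
    otherwise, flipping only on a crossover -- the check used in trendc.py."""
    # trend disagrees with the current EMA ordering -> a crossover happened
    if (trend == 10) != (ema_short > ema_long):
        trend = 5 if trend == 10 else 10
    return trend
```

Boundary cases worth covering: equal EMAs count as "not above", so an up trend flips to down when the short EMA merely touches the long one.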
  {
    "path": "tests/system/lab/scripts/write_out.py",
    "content": "# -*- coding: utf-8 -*-\n\n\ndef write_out(message):\n    with open('/tmp/out', 'w') as f:\n        f.write(\"We have triggered.\")\n        f.write(message)\n"
  },
  {
    "path": "tests/system/lab/test",
    "content": "#!/usr/bin/env bash\n\n./check_env\n[[ $? -eq 0 ]]  || exit 1\n\nCPFX=\"\\033[\"\nCINFO=\"${CPFX}1;32m\"\nCERR=\"${CPFX}1;31m\"\nCRESET=\"${CPFX}0m\"\n\n# Read config file\n. ./test.config\n\nLAB_ASSET_NAME=\"PILAB-sinusoid\"\nAF_HIERARCHY_LEVEL=\"/$(date +%F | tr - _)_PIlabSinelvl1/PIlabSinelvl2/PIlabSinelvl3\"\n\nrm -f err.txt\ntouch err.txt\n\ndisplay_and_collect_err () {\n   echo -e \"${CERR} $1 ${CRESET}\"\n   echo $1 >> err.txt\n}\n\n\nURL=\"http://$FLEDGE_IP:8081/fledge\"\nPROJECT_ROOT=$(git rev-parse --show-toplevel)\n\nsinusoid_config=$(cat <<EOF\n{\n   \"name\": \"Sine\",\n   \"type\": \"south\",\n   \"plugin\": \"sinusoid\",\n   \"enabled\": true,\n   \"config\": {\"assetName\":{\"value\": \"$LAB_ASSET_NAME\" }}\n}\nEOF\n)\n\necho -e INFO: \"${CINFO} Add Sinusoid South ${CRESET}\"\ncurl -sX POST \"$URL/service\" -d \"$sinusoid_config\"\necho\n\nfor LOOP in $(seq ${MAX_RETRIES}); do\n    RESULT=`curl -sX GET \"$URL/south\"`\n    # echo ${RESULT}\n    COUNT=`echo ${RESULT} | jq '.services[].assets[]|select(.asset == \"'${LAB_ASSET_NAME}'\").count // empty'`\n    if [[ -n \"${COUNT}\" ]] && [[ ${COUNT} -gt 0 ]]; then break; fi\ndone\nif [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n then\n   display_and_collect_err \"TIMEOUT! sinusoid data not seen in South tab. $URL/south\"\n   if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n else\n   echo \"---- sinusoid data seen in South tab ----\"\nfi\n\nfor LOOP in $(seq ${MAX_RETRIES}); do\n    RESULT=`curl -sX GET \"$URL/asset\"`\n    # echo ${RESULT}\n    COUNT=`echo ${RESULT} | jq '.[]|select(.assetCode == \"'${LAB_ASSET_NAME}'\")|.count // empty'`\n    if [[ -n \"$COUNT\" ]] && [[ ${COUNT} -gt 0 ]]; then break; fi\ndone\nif [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n then\n   display_and_collect_err \"TIMEOUT! sinusoid data not seen in Asset tab. 
$URL/asset\"\n   if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n else\n  echo \"---- sinusoid data seen in Asset tab ----\"\nfi\n\nfor LOOP in $(seq ${MAX_RETRIES}); do\n    RESULT=`curl -sX GET \"$URL/ping\"`\n    # echo ${RESULT}\n    READ=`echo ${RESULT} | jq '.dataRead // empty'`\n    if [[ -n \"$READ\" ]] && [[ \"$READ\" -gt 0 ]]; then break; fi\ndone\nif [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n then\n   display_and_collect_err \"TIMEOUT! sinusoid data not seen in ping header. $URL/ping\"\n   if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n else\n   echo \"---- sinusoid data seen in ping header ----\"\nfi\n\nfor LOOP in $(seq ${MAX_RETRIES}); do\n    RESULT=`curl -sX GET \"$URL/asset/${LAB_ASSET_NAME}?seconds=600\"`\n    # echo ${RESULT}\n    POINT=`echo ${RESULT} | jq '.[0].reading.sinusoid // empty'`\n    if [[ -n \"$POINT\" ]]; then break; fi\ndone\nif [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n  then\n    display_and_collect_err \"TIMEOUT! sinusoid data not seen in sinusoid graph. $URL/asset/${LAB_ASSET_NAME}?seconds=600\"\n    if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n  else\n    echo \"---- sinusoid data seen in sinusoid graph ----\"\nfi\n\necho \"======================= SINUSOID SETUP COMPLETE =======================\"\n\n\nsetup_north_pi_egress () {\n    if [[  ${ADD_NORTH_AS_SERVICE} == true ]]\n    then\n        # Add PI North as service\n        curl -sX POST \"$URL/service\" -d \\\n        '{\n           \"name\": \"PI Server\",\n           \"plugin\": \"OMF\",\n           \"type\": \"north\",\n           \"enabled\": true,\n           \"config\": {\n              \"PIServerEndpoint\": {\n                 \"value\": \"PI Web API\"\n              },\n              \"ServerHostname\": {\n                 \"value\": \"'${PI_IP}'\"\n              },\n              \"ServerPort\": {\n                 \"value\": \"443\"\n              },\n              \"PIWebAPIUserId\": {\n                 \"value\": \"'${PI_USER}'\"\n              },\n              
\"PIWebAPIPassword\": {\n                 \"value\": \"'${PI_PASSWORD}'\"\n              },\n              \"NamingScheme\": {\n                 \"value\": \"Backward compatibility\"\n              },\n              \"PIWebAPIAuthenticationMethod\": {\n                 \"value\": \"basic\"\n              },\n              \"compression\": {\n                 \"value\": \"true\"\n              },\n              \"DefaultAFLocation\": {\n                 \"value\": \"'\"${AF_HIERARCHY_LEVEL}\"'\"\n              },\n              \"Legacy\": {\n                 \"value\": \"false\"\n              }\n           }\n        }'\n      else\n        # Add PI North as task\n        curl -sX POST \"$URL/scheduled/task\" -d \\\n        '{\n           \"name\": \"PI Server\",\n           \"plugin\": \"OMF\",\n           \"type\": \"north\",\n           \"schedule_repeat\": 30,\n           \"schedule_type\": \"3\",\n           \"schedule_enabled\": true,\n           \"config\": {\n              \"PIServerEndpoint\": {\n                 \"value\": \"PI Web API\"\n              },\n              \"ServerHostname\": {\n                 \"value\": \"'${PI_IP}'\"\n              },\n              \"ServerPort\": {\n                 \"value\": \"443\"\n              },\n              \"PIWebAPIUserId\": {\n                 \"value\": \"'${PI_USER}'\"\n              },\n              \"PIWebAPIPassword\": {\n                 \"value\": \"'${PI_PASSWORD}'\"\n              },\n              \"NamingScheme\": {\n                 \"value\": \"Backward compatibility\"\n              },\n              \"PIWebAPIAuthenticationMethod\": {\n                 \"value\": \"basic\"\n              },\n              \"compression\": {\n                 \"value\": \"true\"\n              },\n              \"DefaultAFLocation\": {\n                 \"value\": \"'\"${AF_HIERARCHY_LEVEL}\"'\"\n              },\n              \"Legacy\": {\n                 \"value\": \"false\"\n              }\n          
 }\n        }'\n    fi\n\n    echo\n    \n    # Wait for OMF to start up properly\n    for LOOP in $(seq ${MAX_RETRIES}); do\n      RESULT=$(curl -sX GET \"$URL/service\" | jq '.services[] | select (.name == \"PI Server\")| .status' | tr -d \\\")\n      if [[ \"${RESULT}\" == \"running\" ]]; then\n         break\n      fi\n    done\n    if [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n    then\n       display_and_collect_err \"TIMEOUT! Unable to start North Service\"\n       if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n    else\n       echo \"---- North Service is started properly ----\"\n    fi\n   \n    for LOOP in $(seq ${MAX_RETRIES}); do\n        RESULT=`curl -sX GET \"$URL/north\"`\n        # echo ${RESULT}\n        SENT=`echo ${RESULT} | jq '.[0].sent // empty'`\n        if [[ -n \"$SENT\" ]] && [[ \"$SENT\" -gt 0 ]]; then break; fi\n    done\n    if [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n     then\n       display_and_collect_err \"TIMEOUT! PI data sent not seen in North tab. $URL/north\"\n       if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n     else\n       echo \"---- PI data sent seen in North tab ----\"\n    fi\n\n    for LOOP in $(seq ${MAX_RETRIES}); do\n        RESULT=`curl -sX GET \"$URL/ping\"`\n        # echo ${RESULT}\n        SENT=`echo ${RESULT} | jq '.dataSent // empty'`\n        if [[ -n \"$SENT\" ]] && [[ \"$SENT\" -gt 0 ]]; then break; fi\n    done\n    if [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n     then\n       display_and_collect_err \"TIMEOUT! PI data sent not seen in ping header. 
$URL/ping\"\n       if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n     else\n       echo \"---- PI data sent seen in ping header ----\"\n    fi\n\n    for LOOP in $(seq ${MAX_RETRIES}); do\n        RESULT=`curl -sX GET \"$URL/statistics/history?minutes=10\"`\n        # echo ${RESULT}\n        POINT=`echo ${RESULT} | jq '.statistics[0].\"PI Server\" // empty'`\n        if [[ -n \"$POINT\" ]] && [[ \"$POINT\" -gt 0 ]]; then break; fi\n    done\n    if [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n     then\n       display_and_collect_err \"TIMEOUT! PI data sent not seen in sent graph. $URL/statistics/history?minutes=10\"\n       if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n     else\n       echo \"---- PI data sent seen in sent graph ----\"\n    fi\n\n   echo \"======================= PI SETUP COMPLETE =======================\"\n\n}\n\n\nsquare_filter_config=$(cat <<EOF\n{\n   \"name\": \"Square\",\n   \"plugin\": \"expression\",\n   \"filter_config\": {\n      \"name\": \"square\",\n      \"expression\": \"if(sinusoid>0,0.5,-0.5)\",\n      \"enable\": \"true\"\n   }\n}\nEOF\n)\n\n# Add Expression Filter (Square)\ncurl -sX POST \"$URL/filter\" -d \"$square_filter_config\"\necho\n# Apply Square to Sine\ncurl -sX PUT \"$URL/filter/Sine/pipeline?allow_duplicates=true&append_filter=true\" -d \\\n'{\n   \"pipeline\": [\n      \"Square\"\n   ]\n}'\necho\necho \"======================= SINUSOID SQUARE FILTER COMPLETE =======================\"\n# Add Expression Filter (Max)\nmax_filter_config=$(cat <<EOF\n{\n   \"name\": \"Max\",\n   \"plugin\": \"expression\",\n   \"filter_config\": {\n      \"name\": \"max\",\n      \"expression\": \"max(sinusoid, square)\",\n      \"enable\": \"true\"\n   }\n}\nEOF\n)\ncurl -sX POST \"$URL/filter\" -d \"$max_filter_config\"\necho\n# Apply Max to Sine\ncurl -sX PUT \"$URL/filter/Sine/pipeline?allow_duplicates=true&append_filter=true\" -d \\\n'{\n   \"pipeline\": [\n      \"Max\"\n   ]\n}'\necho\n\nfor LOOP in $(seq ${MAX_RETRIES}); do\n    
RESULT=`curl -sX GET \"$URL/asset/${LAB_ASSET_NAME}?seconds=600\"`\n    # echo ${RESULT}\n    SQUARE=`echo ${RESULT} | jq '.[0].reading.square // empty'`\n    MAX=`echo ${RESULT} | jq '.[0].reading.max // empty'`\n    if [[ -n \"$SQUARE\" ]] && [[ -n \"$MAX\" ]]; then break; fi\ndone\nif [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n then\n   display_and_collect_err \"TIMEOUT! square and max data not seen in sinusoid graph. $URL/asset/${LAB_ASSET_NAME}?seconds=600\"\n   if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n else\n   echo \"---- square and max data seen in sinusoid graph ----\"\nfi\n\necho \"======================= SINUSOID MAX FILTER COMPLETE =======================\"\n\n\n# Add Randomwalk South\ncurl -sX POST \"$URL/service\" -d \\\n'{\n   \"name\": \"Random\",\n   \"type\": \"south\",\n   \"plugin\": \"randomwalk\",\n   \"enabled\": true,\n   \"config\": {}\n}'\necho\n# need to wait for Fledge to be ready to accept python file\nsleep ${SLEEP_FIX}\n# Add Python35 Filter (ema)\ncurl -sX POST \"$URL/filter\" -d \\\n'{\n   \"name\": \"Ema\",\n   \"plugin\": \"python35\",\n   \"filter_config\": {\n      \"config\": {\n         \"rate\": 0.07\n      },\n      \"enable\": \"true\"\n   }\n}'\necho\n# Apply Ema to Random\ncurl -sX PUT \"$URL/filter/Random/pipeline?allow_duplicates=true&append_filter=true\" -d \\\n'{\n   \"pipeline\": [\n      \"Ema\"\n   ]\n}'\necho\n# Upload Ema python script\ncurl -sX POST \"$URL/category/Random_Ema/script/upload\" -F \"script=@scripts/ema.py\"\necho\n\nfor LOOP in $(seq ${MAX_RETRIES}); do\n    RESULT=`curl -sX GET \"$URL/asset/randomwalk?seconds=600\"`\n    # echo ${RESULT}\n    RANDOM_RESULT=`echo ${RESULT} | jq '.[0].reading.randomwalk // empty'`\n    EMA=`echo ${RESULT} | jq '.[0].reading.ema // empty'`\n    if [[ -n \"$RANDOM_RESULT\" ]] && [[ -n \"$EMA\" ]]; then break; fi\ndone\nif [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n then\n   display_and_collect_err \"TIMEOUT! randomwalk and ema data not seen in randomwalk graph. 
$URL/asset/randomwalk?seconds=600\"\n   if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n else\n   echo \"---- randomwalk and ema data seen in randomwalk graph ----\"\nfi\n\n# DELETE Randomwalk South\nDEL_RAND=`curl -sX DELETE \"$URL/service/Random\"`\n#echo \"$DEL_RAND\"\nRESULT_DEL_RAND=`echo ${DEL_RAND} | jq '.result // empty'`\nif [[ -n \"$RESULT_DEL_RAND\" ]];\n  then\n    echo \"$RESULT_DEL_RAND\"\n else\n   display_and_collect_err \"ERROR! Failed to delete randomwalk service\"\n   if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\nfi\necho \"======================= RANDOMWALK SETUP COMPLETE =======================\"\n\n\necho \"Add Randomwalk south service again ...\"\ncurl -sX POST \"$URL/service\" -d \\\n'{\n   \"name\": \"Random1\",\n   \"type\": \"south\",\n   \"plugin\": \"randomwalk\",\n   \"enabled\": true,\n   \"config\": {\"assetName\": {\"value\": \"randomwalk1\"}}\n}'\necho\n\n# need to wait for Fledge to be ready to accept python file\nsleep ${SLEEP_FIX}\n\n# Add Python35 Filter (PF)\ncurl -sX POST \"$URL/filter\" -d \\\n'{\n   \"name\": \"PF\",\n   \"plugin\": \"python35\",\n   \"filter_config\": {\n      \"config\": {\n         \"rate\": 0.07\n      },\n      \"enable\": \"true\"\n   }\n}'\necho\n\n# Apply PF to Random\ncurl -sX PUT \"$URL/filter/Random1/pipeline?allow_duplicates=true&append_filter=true\" -d \\\n'{\n   \"pipeline\": [\n      \"PF\"\n   ]\n}'\necho\n\necho \"upload trendc script...\"\ncurl -sX POST \"$URL/category/Random1_PF/script/upload\" -F \"script=@scripts/trendc.py\"\necho\n\nfor LOOP in $(seq ${MAX_RETRIES}); do\n    RESULT=`curl -sX GET \"$URL/asset/randomwalk1?seconds=60\"`\n    # echo ${RESULT}\n    RANDOM_RESULT=`echo ${RESULT} | jq '.[0].reading.randomwalk // empty'`\n    TRENDC=`echo ${RESULT} | jq '.[0].reading.ema_long // empty'`\n    if [[ -n \"$RANDOM_RESULT\" ]] && [[ -n \"$TRENDC\" ]]; then break; fi\ndone\nif [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n then\n   display_and_collect_err \"TIMEOUT! 
randomwalk1 and ema_long data not seen in randomwalk1 graph. $URL/asset/randomwalk1?seconds=60\"\n   if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n else\n   echo \"---- randomwalk and ema_long data seen in randomwalk1 graph ----\"\nfi\n\necho \"upload trendc script with modified content...\"\n\ncp scripts/trendc.py scripts/trendc.py.bak\nsed -i \"s/reading\\[b'ema_long/reading\\[b'ema_longX/g\" scripts/trendc.py\n\ncurl -sX POST \"$URL/category/Random1_PF/script/upload\" -F \"script=@scripts/trendc.py\"\necho\n\nfor LOOP in $(seq ${MAX_RETRIES}); do\n    RESULT=`curl -sX GET \"$URL/asset/randomwalk1?seconds=60\"`\n    # echo ${RESULT}\n    RANDOM_RESULT=`echo ${RESULT} | jq '.[0].reading.randomwalk // empty'`\n    TRENDCX=`echo ${RESULT} | jq '.[0].reading.ema_longX // empty'`\n    if [[ -n \"$RANDOM_RESULT\" ]] && [[ -n \"$TRENDCX\" ]]; then break; fi\ndone\nif [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n then\n   display_and_collect_err \"TIMEOUT! randomwalk1 and ema_longX data not seen in randomwalk1 graph. $URL/asset/randomwalk1?seconds=60\"\n   if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n else\n   echo \"---- randomwalk and ema_longX data seen in randomwalk1 graph ----\"\nfi\n\n\nmv scripts/trendc.py.bak scripts/trendc.py\n\n\necho \"upload ema script...\"\ncurl -sX POST \"$URL/category/Random1_PF/script/upload\" -F \"script=@scripts/ema.py\"\necho\n\nfor LOOP in $(seq ${MAX_RETRIES}); do\n    RESULT=`curl -sX GET \"$URL/asset/randomwalk1?seconds=60\"`\n    # echo ${RESULT}\n    RANDOM_RESULT=`echo ${RESULT} | jq '.[0].reading.randomwalk // empty'`\n    EMA=`echo ${RESULT} | jq '.[0].reading.ema // empty'`\n    if [[ -n \"$RANDOM_RESULT\" ]] && [[ -n \"$EMA\" ]]; then break; fi\ndone\nif [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n then\n   display_and_collect_err \"TIMEOUT! randomwalk1 and ema data not seen in randomwalk1 graph. 
$URL/asset/randomwalk1?seconds=60\"\n   if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n else\n   echo \"---- randomwalk1 and ema data seen in randomwalk1 graph ----\"\nfi\n\necho \"======================= RANDOMWALK SETUP 2 COMPLETE =======================\"\n\n# TODO: Remove the  conditional statement once FOGL-9521 is fixed\n\nVERSION_CODENAME=$(cat /etc/os-release | grep VERSION_CODENAME | cut -f2 -d\"=\" | tr -d '\"')\nif [[ \"${VERSION_CODENAME}\" != \"bookworm\" ]]; then\n# Add Enviro-pHAT South\ncurl -sX POST \"$URL/service\" -d \\\n'{\n   \"name\": \"Enviro\",\n   \"type\": \"south\",\n   \"plugin\": \"rpienviro\",\n   \"enabled\": true,\n   \"config\": {\n      \"assetNamePrefix\": {\n         \"value\": \"e_\"\n      }\n   }\n}'\necho\n# Add Expression Filter (Fahrenheit)\ncurl -sX POST \"$URL/filter\" -d \\\n'{\n   \"name\": \"Fahrenheit\",\n   \"plugin\": \"expression\",\n   \"filter_config\": {\n      \"name\": \"temp_fahr\",\n      \"expression\": \"temperature*1.8+32\",\n      \"enable\": \"true\"\n   }\n}'\necho\n# Apply Fahrenheit to Enviro\ncurl -sX PUT \"$URL/filter/Enviro/pipeline?allow_duplicates=true&append_filter=true\" -d \\\n'{\n   \"pipeline\": [\n      \"Fahrenheit\"\n   ]\n}'\necho\n\nfor LOOP in $(seq ${MAX_RETRIES}); do\n    RESULT=`curl -sX GET \"$URL/asset/e_weather?seconds=600\"`\n    echo ${RESULT}\n    TEMP=`echo ${RESULT} | jq '.[0].reading.temperature // empty'`\n    FAHR=`echo ${RESULT} | jq '.[0].reading.temp_fahr // empty'`\n    if [[ -n \"$TEMP\" ]] && [[ -n \"$FAHR\" ]]; then break; fi\ndone\n\nif [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n then\n   display_and_collect_err \"TIMEOUT! temperature and fahrenheit data not seen in e_weather graph. 
$URL/asset/e_weather?seconds=600\"\n   if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n else\n   echo \"---- temperature and fahrenheit data seen in e_weather graph ----\"\nfi\n\necho \"======================= enviro-pHAT SETUP COMPLETE =======================\"\nfi\n\n# Enable Event Engine\ncurl -sX POST \"$URL/service\" -d \\\n'{\n   \"name\": \"Fledge Notifications\",\n   \"type\": \"notification\",\n   \"enabled\": true\n}'\necho\n# Need to wait for event engine to come up\ncurl -sX GET \"$URL/service\" | jq '.services[]|select(.name==\"Fledge Notifications\").status'\nfor LOOP in $(seq ${MAX_RETRIES}); do\n    RESULT=`curl -sX GET \"$URL/service\"`\n    echo ${RESULT}\n    STATUS=`echo ${RESULT} | jq '.services[]|select(.name==\"Fledge Notifications\").status // empty'`\n    if [[ -n \"$STATUS\" ]] && [[ ${STATUS} == \"\\\"running\\\"\" ]]; then break; fi\ndone\n\nif [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n then\n   display_and_collect_err \"TIMEOUT! event engine is not running. $URL/service\"\n   if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n else\n   echo \"---- service reports event engine is running ----\"\nfi\n# sleep ${SLEEP_FIX}\n\necho \"======================= EVENT ENGINE ENABLED =======================\"\n\n\n# Add Notification with Threshold Rule and Asset Notification (Positive Sine)\ncurl -sX POST \"$URL/notification\" -d \\\n'{\n   \"name\": \"Positive Sine\",\n   \"description\": \"Positive Sine notification instance\",\n   \"rule\": \"Threshold\",\n   \"channel\": \"asset\",\n   \"notification_type\": \"retriggered\",\n   \"enabled\": true\n}'\necho\n# Set Positive Sine Rule Config (sinusoid.sinusoid > 0)\n\nsine_rule_config_positive=$(cat <<EOF\n{\n   \"asset\": \"${LAB_ASSET_NAME}\",\n   \"datapoint\": \"sinusoid\",\n   \"condition\": \">\"\n}\nEOF\n)\n\ncurl -sX PUT \"$URL/category/rulePositive%20Sine\" -d \"$sine_rule_config_positive\"\necho\n# Set Positive Sine Delivery Config (positive_sine: \"positive\")\ncurl -sX PUT 
\"$URL/category/deliveryPositive%20Sine\" -d \\\n'{\n   \"asset\": \"positive_sine\",\n   \"description\": \"positive\",\n   \"enable\": \"true\"\n}'\necho\n\nfor LOOP in $(seq ${MAX_RETRIES}); do\n    RESULT=`curl -sX GET \"$URL/asset/positive_sine?seconds=600\"`\n    echo ${RESULT}\n    EVENT=`echo ${RESULT} | jq '.[0].reading.event // empty'`\n    RULE=`echo ${RESULT} | jq '.[0].reading.rule // empty'`\n    if [[ -n \"$EVENT\" ]] && [[ \"$EVENT\" == \"\\\"triggered\\\"\" ]] && \\\n        [[ -n \"$RULE\" ]] && [[ \"$RULE\" == \"\\\"Positive Sine\\\"\" ]]; then break; fi\n    sleep 1\ndone\n\nif [[ ${LOOP} -eq ${MAX_RETRIES} ]];\n  then\n    display_and_collect_err \"TIMEOUT! positive_sine event not fired. $URL/asset/positive_sine?seconds=600\";\n    if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n  else\n    echo \"---- positive_sine event fired ----\"\nfi\n\n# [{\"reading\": {\"description\": \"Notification alert\", \"event\": \"triggered\", \"rule\": \"Positive Sign\"}, \"timestamp\": \"2019-08-28 17:42:42.421744\"}, {\"reading\": {\"description\": \"Notification alert\", \"event\": \"triggered\", \"rule\": \"Positive Sign\"}, \"timestamp\": \"2019-08-28 17:41:37.002845\"}, {\"reading\": {\"description\": \"Notification alert\", \"event\": \"triggered\", \"rule\": \"Positive Sign\"}, \"timestamp\": \"2019-08-28 17:40:33.489106\"}]\n\necho \"======================= EVENT POSITIVE SINE COMPLETE =======================\"\n\nrm -f /tmp/out\ncurl -sX POST \"$URL/notification\" -d \\\n'{\n   \"name\": \"Negative Sine\",\n   \"description\": \"Negative Sine notification instance\",\n   \"rule\": \"Threshold\",\n   \"channel\": \"python35\",\n   \"notification_type\": \"retriggered\",\n   \"enabled\": true\n}'\n\n# Upload Python Script (write_out.py)\ncurl  -sX POST \"$URL/category/deliveryNegative%20Sine/script/upload\" -F \"script=@scripts/write_out.py\"\necho\n\n\nsine_rule_config_negative=$(cat <<EOF\n{\n   \"asset\": \"${LAB_ASSET_NAME}\",\n   \"datapoint\": 
\"sinusoid\",\n   \"condition\": \"<\"\n}\nEOF\n)\n# Set Negative Sine Rule Config (sinusoid.sinusoid < 0)\ncurl -sX PUT \"$URL/category/ruleNegative%20Sine\" -d \"$sine_rule_config_negative\"\necho\n\n\n# Set Negative Sine Delivery Config (enabled)\ncurl -sX PUT \"$URL/category/deliveryNegative%20Sine\" -d \\\n'{\n   \"enable\": \"true\"\n}'\necho\n\nfor LOOP in $(seq ${MAX_RETRIES}); do\n    if [[ -f \"/tmp/out\" ]]; then break; fi\n    sleep 1\ndone\nif [[ ${LOOP} -eq ${MAX_RETRIES} ]];\n  then\n    display_and_collect_err \"TIMEOUT! negative_sine event not fired. No /tmp/out file.\";\n    if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n  else\n    echo \"---- negative_sine event fired ----\"\nfi\n\necho \"======================= EVENT NEGATIVE SINE COMPLETE =======================\"\n\nsinusoid1_config=$(cat <<EOF\n{\n   \"name\": \"sin #1\",\n   \"type\": \"south\",\n   \"plugin\": \"sinusoid\",\n   \"enabled\": true,\n   \"config\": {\"assetName\":{\"value\": \"$LAB_ASSET_NAME\" }}\n}\nEOF\n)\n\nrule_config=$(cat <<EOF\n{ \"asset\":\"${LAB_ASSET_NAME}\",\n  \"datapoint\":\"sinusoid\",\n   \"trigger_value\": \"0.8\"\n}\nEOF\n)\n\nevent_toggled_sent_clear () {\n    echo \"Add sinusoid\"\n\n    ADD_SIN_RES=`curl -sX POST \"$URL/service\" -d \"$sinusoid1_config\"`\n    ID=`echo ${ADD_SIN_RES} | jq '.id // empty'`\n    NAME=`echo ${ADD_SIN_RES} | jq '.name // empty'`\n    echo \"$ADD_SIN_RES\"\n    if [[ -n \"$ID\" ]]  || [[ -n \"$NAME\" ]];\n      then\n        echo\n      else\n        display_and_collect_err \"ERROR! 
Failed to add sin #1\"\n        if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n    fi\n\n#    echo \"Add and enable Notification\"\n#    curl -sX POST \"$URL/service\" -d '{\"name\":\"Fledge Notifications\",\"type\":\"notification\",\"enabled\":true}'\n\n    echo \"Create event instance with threshold and asset; with notification trigger type toggled\"\n    CREATE_EVENT_INSTANCE=`curl -sX POST \"$URL/notification\" -d '{\"name\":\"test\",\"description\":\"test notification instance\",\"rule\":\"Threshold\",\"channel\":\"asset\",\"notification_type\":\"toggled\",\"enabled\":true}' | jq .`\n    echo \"$CREATE_EVENT_INSTANCE\"\n\n    for LOOP in $(seq ${MAX_RETRIES}); do\n        RESULT=`curl -sX GET \"$URL/notification\"`\n        echo ${RESULT}\n        EVENT_NAME=`echo ${RESULT} | jq '.notifications[]|select(.name==\"test\").name'`\n        RULE=`echo ${RESULT} | jq '.notifications[]|select(.name==\"test\").rule'`\n        TYPE=`echo ${RESULT} | jq '.notifications[]|select(.name==\"test\").notificationType'`\n        if [[ -n \"$EVENT_NAME\" ]] && [[ \"$EVENT_NAME\" == \"\\\"test\\\"\" ]] && \\\n        [[ -n \"$RULE\" ]] && [[ \"$RULE\" == \"\\\"Threshold\\\"\" ]] && \\\n        [[ -n \"$TYPE\" ]] && [[ \"$TYPE\" == \"\\\"toggled\\\"\" ]]; then break; fi\n    done\n\n    if [[ ${LOOP} -eq ${MAX_RETRIES} ]];\n      then\n        display_and_collect_err \"TIMEOUT! event instance not created successfully . 
$URL/notification\";\n        if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n      else\n        echo \"---- event instance created successfully ----\"\n    fi\n\n    echo \"Set rule\"\n    SET_RULE=`curl -sX PUT \"$URL/category/ruletest\" -d \"$rule_config\" | jq .`\n    echo \"$SET_RULE\"\n\n    for LOOP in $(seq ${MAX_RETRIES}); do\n      RESULT=`curl -sX GET \"$URL/category/ruletest\"`\n      echo ${RESULT}\n      ASSET=`echo ${RESULT} | jq .asset.value`\n      DATAPOINT=`echo ${RESULT} | jq .datapoint.value`\n      TRIGGER_VALUE=` echo ${RESULT} | jq .trigger_value.value`\n      if [[ -n \"$ASSET\" ]] && [[ \"$ASSET\" == \"\\\"${LAB_ASSET_NAME}\\\"\" ]] &&\\\n      [[ -n \"$DATAPOINT\" ]] && [[ \"$DATAPOINT\" == \"\\\"sinusoid\\\"\" ]] &&\\\n      [[ -n \"$TRIGGER_VALUE\" ]] && [[ \"$TRIGGER_VALUE\" == \"\\\"0.8\\\"\" ]]; then break; fi\n    done\n\n    if [[ ${LOOP} -eq ${MAX_RETRIES} ]];\n      then\n        display_and_collect_err \"TIMEOUT! Rule not  set successfully . $URL/category/ruletest\";\n        if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n      else\n        echo \"---- Rule set successfully ----\"\n    fi\n\n    echo \"Set delivery\"\n    SET_DELIVERY_CHN=`curl -sX PUT \"$URL/category/deliverytest\" -d '{\"asset\": \"sin0.8\", \"description\":\"asset notification\", \"enable\":\"true\"}' | jq .`\n    echo \"$SET_DELIVERY_CHN\"\n\n    for LOOP in $(seq ${MAX_RETRIES}); do\n      RESULT=`curl -sX GET \"$URL/category/deliverytest\"`\n      echo ${RESULT}\n      ASSET=`echo ${RESULT} | jq .asset.value`\n      ENABLE_VALUE=`echo ${RESULT} | jq .enable.value`\n      if [[ -n \"$ASSET\" ]] && [[ \"$ASSET\" == \"\\\"sin0.8\\\"\" ]] &&\\\n      [[ -n \"$ENABLE_VALUE\" ]] && [[ \"$ENABLE_VALUE\" == \"\\\"true\\\"\" ]]; then break; fi\n    done\n\n    if [[ ${LOOP} -eq ${MAX_RETRIES} ]];\n      then\n        display_and_collect_err \"TIMEOUT! Delivery channel not set successfully . 
$URL/category/deliverytest\";\n        if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n      else\n        echo \"---- Delivery channel set successfully ----\"\n    fi\n\n    echo \"Sleeping for 20 seconds until sin0.8 is added in asset table\"\n    sleep 20\n\n    echo \"Verify sin0.8 has been created\"\n    GET_ASSET=`curl -sX GET \"$URL/asset/sin0.8?seconds=600\" | jq .`\n    echo \"$GET_ASSET\"\n    for LOOP in $(seq ${MAX_RETRIES}); do\n      RESULT=`curl -sX GET \"$URL/asset/sin0.8?seconds=600\"`\n      echo ${RESULT}\n      RULE=`echo ${RESULT} | jq '.[0].reading.rule // empty'`\n      if [[ -n \"$RESULT\" ]] &&  [[ -n \"$RULE\" ]] && \\\n      [[ \"$RULE\" == \"\\\"test\\\"\" ]]; then break; fi\n    done\n\n    if [[ ${LOOP} -eq ${MAX_RETRIES} ]];\n      then\n        display_and_collect_err \"TIMEOUT! sin0.8 has not been created. $URL/asset/sin0.8?seconds=600\";\n        if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n      else\n        echo \"---- sin0.8 has been created ----\"\n    fi\n\n\n    echo \"TODO: FOGL-3285 Verify sin0.8 asset tracker entry\"\n#   GET_ASSET_TRACKER=`curl -sX GET \"$URL/track\" | jq .` # May be ?asset=sin0.8\n#   echo \"$GET_ASSET_TRACKER\"\n#   for LOOP in $(seq ${MAX_RETRIES}); do\n#        RESULT=`curl -sX GET \"$URL/track\"`\n#        echo ${RESULT}\n#        ASSET_NAME=`echo ${RESULT} | jq '.track[]|select(.asset==\"sin0.8\").asset'`\n#        ASSET_CREATION_EVENT=`echo ${RESULT} | jq '.track[]|select(.asset==\"sin0.8\").event'`\n#        ASSET_CREATED_BY=`echo ${RESULT} | jq '.track[]|select(.asset==\"sin0.8\").service'`\n#        # verify ASSET_CREATION_EVENT event & ASSET_CREATED_BY service\n#        if [[ -n \"$ASSET_NAME\" ]] && [[ \"$ASSET_NAME\" == \"\\\"sin0.8\\\"\" ]]; then break; fi\n#    done\n#\n#    if [[ ${LOOP} -eq ${MAX_RETRIES} ]];\n#      then\n#        display_and_collect_err \"TIMEOUT! sin0.8 entry not found in asset tracker. 
$URL/track\";\n#        if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n#      else\n#        echo \"---- sin0.8 entry found in asset tracker ----\"\n#    fi\n\n    echo \"When rule is triggred, There should be audit entries for NTFSN & NTFCL\"\n\n    for LOOP in $(seq ${MAX_RETRIES}); do\n      RESULT=`curl -sX GET \"$URL/audit?limit=1&source=NTFSN&severity=INFORMATION\"`\n      echo ${RESULT}\n      AUDIT_NAME=`echo ${RESULT} | jq '.audit[].details.name'`\n      SOURCE=`echo ${RESULT} | jq '.audit[].source'`\n      if [[ -n \"$AUDIT_NAME\" ]] && [[ \"$AUDIT_NAME\" == \"\\\"test\\\"\" ]] && \\\n      [[ -n \"$SOURCE\" ]] && [[ \"$SOURCE\" == \"\\\"NTFSN\\\"\" ]]; then break; fi\n      # added sleep of 1s as next event fot sin0.8 will trigger when sinusoid datapoint value will be 0.8 again\n      # and with LIMIT 1 there will be entries for postive and negatives sine events from last test setup\n      sleep 1\n    done\n\n    if [[ ${LOOP} -eq ${MAX_RETRIES} ]];\n      then\n        display_and_collect_err \"TIMEOUT! entry for test not found in NTFSN. $URL/audit?limit=1&source=NTFSN&severity=INFORMATION\";\n        if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n      else\n        echo \"---- Found NTFSN entry for test ----\"\n    fi\n\n\n    for LOOP in $(seq ${MAX_RETRIES}); do\n        RESULT=`curl -sX GET \"$URL/audit?limit=1&source=NTFCL&severity=INFORMATION\"`\n        echo ${RESULT}\n        AUDIT_NAME=`echo ${RESULT} | jq '.audit[].details.name'`\n        SOURCE=`echo ${RESULT} | jq '.audit[].source'`\n        if [[ -n \"$AUDIT_NAME\" ]] && [[ \"$AUDIT_NAME\" == \"\\\"test\\\"\" ]] && \\\n        [[ -n \"$SOURCE\" ]] && [[ \"$SOURCE\" == \"\\\"NTFCL\\\"\" ]]; then break; fi\n    done\n\n    if [[ ${LOOP} -eq ${MAX_RETRIES} ]];\n      then\n        display_and_collect_err \"TIMEOUT! entry for test not found in NTFCL. 
$URL/audit?limit=1&source=NTFCL&severity=INFORMATION\";\n        if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n      else\n        echo \"---- Found NTFCL entry for test ----\"\n    fi\n\n}\n\nevent_toggled_sent_clear\n\necho \"======================= TOGGLE EVENT NTFSN and NTFCL TEST COMPLETE =======================\"\n\nif [[ ${VERIFY_EGRESS_TO_PI} == 1 ]]\n  then\n     setup_north_pi_egress\n     # Disabling North Service\n     curl -sX PUT \"$URL/schedule/disable\" -d '{\"schedule_name\": \"PI Server\"}'\n     for LOOP in $(seq ${MAX_RETRIES}); do\n        RESULT=$(curl -sX GET \"$URL/service\" | jq '.services[] | select (.name == \"PI Server\")| .status' | tr -d \\\")\n        if [[ \"${RESULT}\" == \"shutdown\" ]]; then\n           break\n        fi\n     done\n     \n     if [[ ${LOOP} -eq ${MAX_RETRIES} ]]\n     then\n       display_and_collect_err \"TIMEOUT! Unable to disable North Service, AF Hierarchy of PI Server may get disturbed\"\n       if [[ ${EXIT_EARLY} -eq 1 ]]; then exit 1; fi\n     else\n       echo\n       echo \"---- North Service is disabled properly ----\"\n     fi\n     echo \"---- Verify and clean the data sent to PI server ----\"\n     python3 verify_clean_pi.py --pi-admin=\"${PI_USER}\" --pi-passwd=\"${PI_PASSWORD}\" --pi-host=\"${PI_IP}\" --pi-port=\"${PI_PORT}\" --pi-db=\"${PI_DB}\" --asset-name=\"${LAB_ASSET_NAME}\"\nelse\n     echo \"======================= SKIPPED PI EGRESS =======================\"\nfi\n\necho \"===================== COLLECTING SUPPORT BUNDLE ============================\"\nSUPPORT_BUNDLE_DIR=\"$PROJECT_ROOT/support_bundle\"\nBUNDLE=$(curl -sX POST \"$URL/support\")\nif jq -e 'has(\"bundle created\")' <<< \"$BUNDLE\" > /dev/null; then\n    echo \"Support Bundle Created\"\n    rm -rf \"$SUPPORT_BUNDLE_DIR\" && mkdir -p \"$SUPPORT_BUNDLE_DIR\" && \\\n    cp -r /usr/local/fledge/data/support/* \"$SUPPORT_BUNDLE_DIR\"/. 
&& \\\n    echo \"Support bundle has been saved to path: $SUPPORT_BUNDLE_DIR\"\nelse\n    echo \"Failed to Create support bundle\"\n    rm -rf \"$SUPPORT_BUNDLE_DIR\" && mkdir -p \"$SUPPORT_BUNDLE_DIR\" && \\\n    cp /var/log/syslog \"$SUPPORT_BUNDLE_DIR\"/. && \\\n    echo \"Syslog Saved to $SUPPORT_BUNDLE_DIR\"\nfi\necho \"===================== COLLECTED SUPPORT BUNDLE ============================\"\n\nERRORS=\"$(wc -c <\"err.txt\")\"\nif [[ ${ERRORS} -ne 0 ]]\nthen\n    echo \"============================= TESTS FAILED! =============================\"\n    cat err.txt\n    exit 1\nelse\n    echo \"======================================================\\\n          =================== S U C C E S S ====================\\\n          ======================================================\"\nfi\necho\n\nexit 0\n\n\n#####\n##### The Remainder are the actual rule instances used in the lab\n##### These aren't included because they can't be automated easily\n#####\n\nGREEN_TRIGGER=130\nTEMPERATURE_TRIGGER=31\n\n# Add Notification with Threshold Rule and Asset Notification (Temperature Monitor)\ncurl -sX POST \"$URL/notification\" -d \\\n'{\n   \"name\": \"Temperature Monitor\",\n   \"description\": \"Temperature Monitor notification instance\",\n   \"rule\": \"Threshold\",\n   \"channel\": \"asset\",\n   \"notification_type\": \"toggled\",\n   \"enabled\": true\n}'\necho\n# Set Temperature Monitor Rule Config (e_weather.temperature > 31)\ncurl -sX PUT \"$URL/category/ruleTemperature%20Monitor\" -d \\\n'{\n   \"asset\": \"e_weather\",\n   \"datapoint\": \"temperature\",\n   \"trigger_value\": \"'${TEMPERATURE_TRIGGER}'\"\n}'\necho\n# Set Temperature Monitor Delivery Config (temperature_monitor: \"Too Hot!\")\ncurl -sX PUT \"$URL/category/deliveryTemperature%20Monitor\" -d \\\n'{\n   \"asset\": \"temperature_monitor\",\n   \"description\": \"Too Hot!\",\n   \"enable\": \"true\"\n}'\necho\n# Set Temperature Monitor config (retrigger_time: 5)\ncurl -sX PUT 
\"$URL/category/Temperature%20Monitor\" -d \\\n'{\n   \"retrigger_time\": \"5\"\n}'\necho\necho \"======================= TEMPERATURE MONITOR SETUP COMPLETE =======================\"\n\n# Add Notification with Threshold Rule and Python35 Delivery (Flash on Green)\ncurl -sX POST \"$URL/notification\" -d \\\n'{\n   \"name\": \"Flash on Green\",\n   \"description\": \"Flash on Green notification instance\",\n   \"rule\": \"Threshold\",\n   \"channel\": \"python35\",\n   \"notification_type\": \"retriggered\",\n   \"enabled\": true\n}'\necho\n# Set Flash on Green Rule Config (e_rgb.g > 130)\ncurl -sX PUT \"$URL/category/ruleFlash%20on%20Green\" -d \\\n'{\n   \"asset\": \"e_rgb\",\n   \"datapoint\": \"g\",\n   \"trigger_value\": \"'${GREEN_TRIGGER}'\"\n}'\necho\n# Set Flash on Green Delivery Config (enabled)\ncurl -sX PUT \"$URL/category/deliveryFlash%20on%20Green\" -d \\\n'{\n   \"enable\": \"true\"\n}'\necho\n# Upload Flash on Green Python Script (flash_leds.py)\ncurl -sX POST \"$URL/category/deliveryFlash%20on%20Green/script/upload\" -F \"script=@scripts/flash_leds.py\"\necho\n# Set Flash on Green config (retrigger_time: 5)\ncurl -sX PUT \"$URL/category/Flash%20on%20Green\" -d \\\n'{\n   \"retrigger_time\": \"5\"\n}'\necho\necho \"======================= FLASH ON GREEN SETUP COMPLETE =======================\"\n"
  },
  {
    "path": "tests/system/lab/test.config",
    "content": "FLEDGE_IP=localhost\nPI_IP=192.168.4.41\nPI_USER=Administrator\nPI_PASSWORD='xxx'\nPI_PORT=\"443\"\nPI_DB=\"foglamp\"\nMAX_RETRIES=100\nSLEEP_FIX=10 # Time to sleep to fix bugs. This should be zero.\nEXIT_EARLY=0\nADD_NORTH_AS_SERVICE=true\nVERIFY_EGRESS_TO_PI=1\nSTORAGE=sqlite # postgres, sqlite-in-memory, sqlitelb\nREADING_PLUGIN_DB='Use main plugin'\n"
  },
  {
    "path": "tests/system/lab/verify_clean_pi.py",
    "content": "import argparse\nfrom pathlib import Path\nimport sys\nfrom datetime import datetime\n\nPROJECT_ROOT = Path(__file__).absolute().parent.parent.parent.parent\nsys.path.append('{}/tests/system/common'.format(PROJECT_ROOT))\n\nfrom clean_pi_system import clear_pi_system_pi_web_api\n\nretry_count = 0\ndata_from_pi = None\nretries = 6\nwait_time = 10\ntoday = datetime.now().strftime(\"%Y_%m_%d\")\n\nparser = argparse.ArgumentParser(description=\"PI server\",\n                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)\nparser.add_argument(\"--pi-host\", action=\"store\", default=\"pi-server\",\n                    help=\"PI Server Host Name/IP\")\nparser.add_argument(\"--pi-port\", action=\"store\", default=\"5460\", type=int,\n                    help=\"PI Server Port\")\nparser.add_argument(\"--pi-db\", action=\"store\", default=\"pi-server-db\",\n                    help=\"PI Server database\")\nparser.add_argument(\"--pi-admin\", action=\"store\", default=\"pi-server-uid\",\n                    help=\"PI Server user login\")\nparser.add_argument(\"--pi-passwd\", action=\"store\", default=\"pi-server-pwd\",\n                    help=\"PI Server user login password\")\nparser.add_argument(\"--asset-name\", action=\"store\", default=\"asset-name\",\n                    help=\"Asset name\")\nargs = vars(parser.parse_args())\n\npi_host = args[\"pi_host\"]\npi_admin = args[\"pi_admin\"]\npi_passwd = args[\"pi_passwd\"]\npi_db = args[\"pi_db\"]\nasset_name = args[\"asset_name\"]\n\naf_hierarchy_level = \"{}_PIlabSinelvl1/PIlabSinelvl2/PIlabSinelvl3\".format(today)\naf_hierarchy_level_list = af_hierarchy_level.split(\"/\")\n\nclear_pi_system_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db, af_hierarchy_level_list,\n                           {asset_name: ['sinusoid', '', 'max', 'square'],\n                            'e_accelerometer': ['', 'x', 'y', 'z'],\n                            'e_magnetometer': ['', 'x', 'y', 'z'],\n       
                     'e_rgb': ['', 'b', 'g', 'r'],\n                            'e_weather': ['', 'altitude', 'temperature', 'pressure', 'temp_fahr'],\n                            'randomwalk': ['', 'randomwalk', 'ema'],\n                            'randomwalk1': ['', 'randomwalk', 'ema'],\n                            'positive_sine': ['', 'description', 'event', 'rule'],\n                            'negative_sine': ['', 'description', 'event', 'rule'],\n                            'sin0.8': ['', 'description', 'event', 'rule']})\n"
  },
  {
    "path": "tests/system/memory_leak/config.sh",
    "content": "FLEDGE_URL=\"http://localhost:8081/fledge\"\nTEST_RUN_TIME=3600\nPI_IP=\"localhost\"\nPI_USER=\"Administrator\"\nPI_PASSWORD=\"password\"\nREADINGS_RATE=\"100\"   # It is the readings rate per second per service\nPURGE_INTERVAL_SECONDS=\"180\"\nSTORAGE='sqlite' # postgres, sqlite-in-memory, sqlitelb\nREADING_PLUGIN_DB='Use main plugin'\nMEMORY_THRESHOLD=20    # If system memory falls below the specified memory threshold percentage, Fledge halts and generates a support bundle with a Valgrind report."
  },
  {
    "path": "tests/system/memory_leak/scripts/log_analyzer",
    "content": "#!/bin/bash\n\nlog_directory=\"${1}\"\nerror_tolerance=$(printf '%d' \"${2}\" 2>/dev/null)\nleak_tolerance=$(printf '%d' \"${3}\" 2>/dev/null)\n\nfor log_file in \"$log_directory\"/*.log; do\n    echo \"Analyzing $log_file...\"\n\n    error_summary=$(grep -o \"ERROR SUMMARY: [0-9]* errors\" \"$log_file\" | tail -n 1 | cut -d ' ' -f 3)\n    leak_summary=$(sed -n '/LEAK SUMMARY:/,/ERROR SUMMARY:/p' \"$log_file\" | grep -E \"definitely lost|indirectly lost|possibly lost|still reachable\" | tail -n 4)\n\n    if [ -n \"$error_summary\" ]; then\n        if [ \"$error_summary\" -gt \"${error_tolerance}\" ]; then\n            echo \"Valgrind detected $error_summary error(s) in the log file: $log_file\"\n            exit 1\n        else\n            echo \"Valgrind did not detect any errors in the log file: $log_file\"\n        fi\n    else\n        echo \"No error summary found in the log file.\"\n    fi\n\n    if [ -n \"$leak_summary\" ]; then\n        echo \"Valgrind detected memory leaks in the log file.\"\n        definitely_lost=$(echo \"$leak_summary\" | grep -o \"definitely lost: [0-9,]* bytes\" | awk '{print $3}' | tr -d ',')\n        indirectly_lost=$(echo \"$leak_summary\" | grep -o \"indirectly lost: [0-9,]* bytes\" | awk '{print $3}' | tr -d ',')\n        possibly_lost=$(echo \"$leak_summary\" | grep -o \"possibly lost: [0-9,]* bytes\" | awk '{print $3}' | tr -d ',')\n        still_reachable=$(echo \"$leak_summary\" | grep -o \"still reachable: [0-9,]* bytes\" | awk '{print $3}' | tr -d ',')\n\n        echo \"Definitely Lost: $definitely_lost\"\n        echo \"Indirectly Lost: $indirectly_lost\"\n        echo \"Possibly Lost: $possibly_lost\"\n        echo \"Still Reachable: $still_reachable\"\n        \n        if [ \"$definitely_lost\" -gt \"$leak_tolerance\" ] || [ \"$indirectly_lost\" -gt \"$leak_tolerance\" ] || [ \"$possibly_lost\" -gt \"$leak_tolerance\" ] || [ \"$still_reachable\" -gt \"$leak_tolerance\" ]; then\n            echo 
\"Memory leak is higher than the tolerable value: $log_file\"\n            exit 1\n        else\n            echo \"Memory leaks are within the tolerable limit in the log file: $log_file\"\n        fi\n\n    else\n        echo \"No memory leaks detected by Valgrind in the log file.\"\n    fi\n\n    echo \"==============================\"\ndone\n"
  },
  {
    "path": "tests/system/memory_leak/scripts/reset",
    "content": "#!/usr/bin/env bash\n\nexport FLEDGE_ROOT=$1\necho \"${FLEDGE_ROOT}\"\n\nsource \"${1}/tests/system/common/scripts/reset_user_authentication\" \"$FLEDGE_ROOT\"\n\ninstall_postgres() {\n  sudo apt install -y postgresql\n  sudo -u postgres createuser -d \"$(whoami)\"\n}\n\n_config_reading_db () {\n  if [[ \"postgres\" == @($1|$2) ]]\n  then\n    install_postgres\n  fi\n  [[ -f $FLEDGE_ROOT/data/etc/storage.json ]] && echo $(jq -c --arg STORAGE_PLUGIN_VAL \"${1}\" '.plugin.value=$STORAGE_PLUGIN_VAL' $FLEDGE_ROOT/data/etc/storage.json) > $FLEDGE_ROOT/data/etc/storage.json || true\n  [[ -f $FLEDGE_ROOT/data/etc/storage.json ]] && echo $(jq -c --arg READING_PLUGIN_VAL \"${2}\" '.readingPlugin.value=$READING_PLUGIN_VAL' $FLEDGE_ROOT/data/etc/storage.json) > $FLEDGE_ROOT/data/etc/storage.json || true\n}\n\n# check for storage plugin\n. ./config.sh\n\nif [[  ${STORAGE} == @(sqlite|postgres|sqlitelb) && ${READING_PLUGIN_DB} == @(Use main plugin|sqlitememory|sqlite|postgres|sqlitelb) ]]\nthen\n   _config_reading_db \"${STORAGE}\" \"${READING_PLUGIN_DB}\"\nelse\n  echo \"Invalid Storage Configuration\"\n  exit 1\nfi\n\necho \"Stopping Fledge\"\ncd ${1}/scripts/ && ./fledge stop\necho 'Resetting Fledge'\necho -e \"YES\\nYES\" | ./fledge reset || exit 1\necho\necho \"Starting Fledge\"\n./fledge start\necho \"Fledge Status\"\n./fledge status\n"
  },
  {
    "path": "tests/system/memory_leak/scripts/setup",
    "content": "#!/usr/bin/env bash\n\nset -e\n\nFLEDGE_PLUGINS_LIST=${1}\nBRANCH=${2:-develop}   # Branch of the Fledge repository to scan with Valgrind; defaults to develop\nCOLLECT_FILES=${3}\nPROJECT_ROOT=$(pwd)\nSUPPRESSION_FILE=\"${PROJECT_ROOT}/valgrind-python.supp\"\n\nget_pip_break_system_flag() {\n    # Get Python version from python3 --version and parse it\n    PYTHON_VERSION=$(python3 --version 2>&1 | awk '{print $2}')\n    PYTHON_MAJOR=$(echo \"$PYTHON_VERSION\" | cut -d. -f1)\n    PYTHON_MINOR=$(echo \"$PYTHON_VERSION\" | cut -d. -f2)\n\n    # Default to empty flag\n    FLAG=\"\"\n\n    # Set the FLAG only for Python versions 3.11 or higher\n    if [ \"$PYTHON_MAJOR\" -gt 3 ] || { [ \"$PYTHON_MAJOR\" -eq 3 ] && [ \"$PYTHON_MINOR\" -ge 11 ]; }; then\n        FLAG=\"--break-system-packages\"\n    fi\n\n    # Return the FLAG (via echo)\n    echo \"$FLAG\"\n}\n\n# Function to fetch OS information\nfetch_os_info() {\n    OS_NAME=$(grep -o '^NAME=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')\n    ID=$(awk -F= '/^ID=/{print $2}' /etc/os-release | tr -d '\"')\n    UNAME=$(uname -m)\n    VERSION_ID=$(awk -F= '/^VERSION_ID=/{print $2}' /etc/os-release | tr -d '\"')\n    echo \"OS Name is ${OS_NAME}\"\n    echo \"VERSION ID is ${VERSION_ID}\"\n    echo \"ID is ${ID}\"\n    echo \"UNAME is ${UNAME}\"\n}\n\nclone_fledge(){\n\t# Install the prerequisite package (git) needed to clone Fledge from source\n\tsudo apt -y install git\n\n\t# Clone Fledge\n\techo \"Cloning Fledge branch $BRANCH\"\n\tgit clone -b $BRANCH https://github.com/fledge-iot/fledge.git && cd fledge && chmod +x requirements.sh && sh -x requirements.sh ;\n\n}\n\n# Function to modify scripts for Valgrind\nmodify_scripts_for_valgrind() {\n\techo \"Fledge root path is set to ${FLEDGE_ROOT}\"\n\tvalgrind_conf=\" --tool=memcheck --leak-check=full --show-leak-kinds=all 
--suppressions=${SUPPRESSION_FILE}\"\n\n\tpsouth_c=${FLEDGE_ROOT}/scripts/services/south_c\n\techo $psouth_c\n\tsudo sed -i 's#/usr/local/fledge#'\"$FLEDGE_ROOT\"'#g' ${psouth_c}\n\tif [[ \"${COLLECT_FILES}\" == \"LOGS\" ]]; then\n\t\tsudo sed -i '/^  \\.\\/fledge\\.services\\.south \"\\$@\"$/s#^#PYTHONMALLOC=malloc valgrind --log-file=/tmp/south_valgrind.log '\"$valgrind_conf\"' #' ${psouth_c}\n\telse\n\t\tsudo sed -i '/^  \\.\\/fledge\\.services\\.south \"\\$@\"$/s#^#PYTHONMALLOC=malloc valgrind --xml=yes --xml-file=/tmp/south_valgrind_%p.xml --track-origins=yes '\"$valgrind_conf\"' #' ${psouth_c}\n\tfi\n\n\tpnorth_C=${FLEDGE_ROOT}/scripts/services/north_C\n\techo $pnorth_C\n\tsudo sed -i 's#/usr/local/fledge#'\"$FLEDGE_ROOT\"'#g' ${pnorth_C}\n\tif [[ \"${COLLECT_FILES}\" == \"LOGS\" ]]; then\n\t\tsudo sed -i '/^  \\.\\/fledge\\.services\\.north \"\\$@\"$/s#^#PYTHONMALLOC=malloc valgrind --log-file=/tmp/north_valgrind.log '\"$valgrind_conf\"' #' ${pnorth_C}\n\telse\n\t\tsudo sed -i '/^  \\.\\/fledge\\.services\\.north \"\\$@\"$/s#^#PYTHONMALLOC=malloc valgrind --xml=yes --xml-file=/tmp/north_valgrind_%p.xml --track-origins=yes '\"$valgrind_conf\"' #' ${pnorth_C}\n\tfi\n\n\tpstorage=${FLEDGE_ROOT}/scripts/services/storage\n\techo $pstorage\n\tsudo sed -i 's#/usr/local/fledge#'\"$FLEDGE_ROOT\"'#g' ${pstorage}\n\tif [[ \"${COLLECT_FILES}\" == \"LOGS\" ]]; then\n\t\tsudo sed -i '/^ \\${storageExec} \"\\$@\"$/s#^#PYTHONMALLOC=malloc valgrind --log-file=/tmp/storage_valgrind.log '\"$valgrind_conf\"' #' ${pstorage}\n\telse\n\t\tsudo sed -i '/^ \\${storageExec} \"\\$@\"$/s#^#PYTHONMALLOC=malloc valgrind --xml=yes --xml-file=/tmp/storage_valgrind_%p.xml --track-origins=yes '\"$valgrind_conf\"' #' ${pstorage}\n\tfi\n}\n\n# Function to install C based plugin\ninstall_c_plugin() {\n    local plugin=\"${1}\"\n    echo \"Installing C based plugin: ${plugin}\"\n    sed -i 's|c++11 -O3|c++11 -O0 -ggdb|g' \"${plugin}/CMakeLists.txt\"\n    cd \"${plugin}\" && mkdir -p build && cd 
build && \\\n    cmake -DFLEDGE_INSTALL=${FLEDGE_ROOT} -DFLEDGE_ROOT=${FLEDGE_ROOT} .. && make  && make install && cd \"${PROJECT_ROOT}\"\n    echo \"Done installation of C Based Plugin\"\n}\n\n# Function to install Python based plugin\ninstall_python_plugin() {\n    local plugin_dir=\"${1}\"\n    BREAK_PKG_FLAG=$(get_pip_break_system_flag)\n    # Install dependencies if requirements.txt exists\n    [[ -f ${plugin_dir}/requirements.txt ]] && python3 -m pip install -r \"${plugin_dir}/requirements.txt\" ${BREAK_PKG_FLAG:+$BREAK_PKG_FLAG}\n    # Copy plugin\n    echo 'Copying Plugin'\n    sudo cp -r \"${plugin_dir}/python\" \"${FLEDGE_ROOT}/\"\n    echo 'Copied.'\n}\n\n# Function to install plugins\ninstall_plugins() {\n    local plugin_dir=\"${1}\"\n    echo \"Installing Plugin: ${plugin_dir}\"\n\n    # Install dependencies if requirements.sh exists\n    [[ -f ${plugin_dir}/requirements.sh ]] && ${plugin_dir}/requirements.sh\n\n    # Install plugin based on type\n    if [[ -f ${plugin_dir}/CMakeLists.txt ]]; then \n        install_c_plugin \"${plugin_dir}\"\n    else\n        install_python_plugin \"${plugin_dir}\"\n    fi\n}\n\n# Main \n\n# Fetch OS information\nfetch_os_info\n\n# Clone Fledge\ncd \"${PROJECT_ROOT}\"\nclone_fledge\n\n# Change CMakelists to build with debug options\necho 'Changing CMakelists'\nsed -i 's|c++11 -O3|c++11 -O0 -ggdb|g' CMakeLists.txt && make\n\n\n# Export fledge path and change directory to the location where plugin repositories will be cloned\nexport FLEDGE_ROOT=$(pwd)\ncd \"${PROJECT_ROOT}\"\n\n# Install Fledge Based Plugins\nIFS=' ' read -ra fledge_plugin_list <<< \"${FLEDGE_PLUGINS_LIST}\"\nfor i in \"${fledge_plugin_list[@]}\"; do\n    echo \"Plugin: ${i}\"\n    # tar -xzf sources.tar.gz --wildcards \"*/${i}\" --strip-components=1\n\tgit clone  https://github.com/fledge-iot/${i}.git\n    install_plugins \"${PROJECT_ROOT}/${i}\"\ndone\n\n# Modify scripts for Valgrind\nmodify_scripts_for_valgrind\n\necho \"Current location - 
$(pwd)\"\necho 'End of setup'"
  },
  {
    "path": "tests/system/memory_leak/test_memcheck.sh",
    "content": "#!/bin/bash\n\n__author__=\"Mohit Singh Tomar\"\n__copyright__=\"Copyright (c) 2024 Dianomic Systems Inc.\"\n__license__=\"Apache 2.0\"\n__version__=\"1.0.0\"\n\n#######################################################################################################################\n# Script Name: test_memcheck.sh\n# Description: Tests for checking memory leaks in Fledge.\n# Usage: ./test_memcheck.sh FLEDGE_TEST_BRANCH COLLECT_FILES [OPTIONS]\n#\n# Parameters:\n#   FLEDGE_TEST_BRANCH (str): Branch of the Fledge repository on which the Valgrind test will run.\n#   COLLECT_FILES (str): Type of report file to collect from the Valgrind test; LOGS (default) or XML.\n#\n# Options:\n#   --use-filters: If passed, add filters to South Services.\n#\n# Example:\n#   ./test_memcheck.sh develop LOGS\n#   ./test_memcheck.sh develop LOGS --use-filters\n#\n#######################################################################################################################\n\nset -e\nsource config.sh\n\nexport FLEDGE_ROOT=$(pwd)/fledge\n\nFLEDGE_TEST_BRANCH=\"$1\"    # Branch of the Fledge repository to be scanned; defaults to develop\nCOLLECT_FILES=\"${2:-LOGS}\"\nUSE_FILTER=\"False\"\nSCRIPT_DIR=$(dirname \"$(readlink -f \"$0\")\")\n\nif [ \"$3\" = \"--use-filters\" ]; then\n   USE_FILTER=\"True\"\nfi\n\nif [[  ${COLLECT_FILES} != @(LOGS|XML|) ]]\nthen\n   echo \"Invalid argument ${COLLECT_FILES}. 
Please provide valid arguments: XML or LOGS.\"\n   exit 1\nfi\n\ncleanup(){\n  # Removing temporary files, fledge and its plugin repository cloned by previous build of the Job \n  echo \"Removing Cloned repository and log files\"\n  rm -rf fledge* reports && echo 'Done.'\n}\n\n# Setting up Fledge and installing its plugin\nsetup(){\n   ./scripts/setup \"fledge-south-sinusoid fledge-south-random fledge-filter-asset fledge-filter-rename\"  \"${FLEDGE_TEST_BRANCH}\" \"${COLLECT_FILES}\"\n}\n\nreset_fledge(){\n  ./scripts/reset ${FLEDGE_ROOT} ;\n}\n\nconfigure_purge(){\n   # This function is for updating purge configuration and schedule of python based purge.\n   echo -e \"Updating Purge Configuration \\n\"\n   row_count=\"$(printf \"%.0f\" \"$(echo \"${READINGS_RATE} * 2 * ${PURGE_INTERVAL_SECONDS}\"| bc)\")\"\n   curl -X PUT \"$FLEDGE_URL/category/PURGE_READ\" -d \"{\\\"size\\\":\\\"${row_count}\\\"}\"\n   echo \n   echo -e \"Updated Purge Configuration \\n\"\n   echo -e \"Updating Purge Schedule \\n\"\n   curl -X PUT \"$FLEDGE_URL/schedule/cea17db8-6ccc-11e7-907b-a6006ad3dba0\" -d \\\n   '{\n      \"name\": \"purge\",\n      \"type\": 3,\n      \"repeat\": '\"${PURGE_INTERVAL_SECONDS}\"',\n      \"exclusive\": true,\n      \"enabled\": true\n   }'\n   echo -e \"Updated Purge Schedule \\n\"\n}\n\nadd_sinusoid(){ \n  echo -e INFO: \"Add South Sinusoid\"\n  curl -sX POST \"$FLEDGE_URL/service\" -d \\\n  '{\n     \"name\": \"Sine\",\n     \"type\": \"south\",\n     \"plugin\": \"sinusoid\",\n     \"enabled\": \"false\",\n     \"config\": {}\n  }'  \n  echo\n  echo 'Updating Readings per second'\n\n  sleep 60\n  \n  curl -sX PUT \"$FLEDGE_URL/category/SineAdvanced\" -d '{ \"readingsPerSec\": \"'${READINGS_RATE}'\"}'\n  echo\n}\n\nadd_asset_filter_to_sine(){\n   echo 'Adding Asset Filter to Sinusoid Service'\n   curl -sX POST \"$FLEDGE_URL/filter\" -d \\\n   '{\n      \"name\":\"asset #1\",\n      \"plugin\":\"asset\",\n      \"filter_config\":{\n         
\"enable\":\"true\",\n         \"config\":{\n            \"rules\":[\n               {\"asset_name\":\"sinusoid\",\"action\":\"rename\",\"new_asset_name\":\"sinner\"}\n            ]\n         }\n      }\n   }'\n\n   curl -sX PUT \"$FLEDGE_URL/filter/Sine/pipeline?allow_duplicates=true&append_filter=true\" -d \\\n   '{\n      \"pipeline\":[\"asset #1\"],\n      \"files\":[]\n   }'\n}\n\nadd_random(){\n  echo -e INFO: \"Add South Random\"\n  curl -sX POST \"$FLEDGE_URL/service\" -d \\\n  '{\n     \"name\": \"Random\",\n     \"type\": \"south\",\n     \"plugin\": \"Random\",\n     \"enabled\": \"false\",\n     \"config\": {}\n  }'\n  echo\n  echo 'Updating Readings per second'\n\n  sleep 60\n\n  curl -sX PUT \"$FLEDGE_URL/category/RandomAdvanced\" -d '{ \"readingsPerSec\": \"'${READINGS_RATE}'\"}'\n  echo\n\n}\n\nadd_rename_filter_to_random(){\n   echo -e \"\\nAdding Rename Filter to Random Service\"\n   curl -sX POST \"$FLEDGE_URL/filter\" -d \\\n   '{\n      \"name\":\"rename #1\",\n      \"plugin\":\"rename\",\n      \"filter_config\":{\n         \"find\":\"Random\",\n         \"replaceWith\":\"Randomizer\",\n         \"enable\":\"true\"\n      }\n   }'\n\n   curl -sX PUT \"$FLEDGE_URL/filter/Random/pipeline?allow_duplicates=true&append_filter=true\" -d \\\n   '{\n      \"pipeline\":[\"rename #1\"],\n      \"files\":[]\n   }'\n}\n\nenable_services(){\n   echo -e \"\\nEnable Services\"\n   curl -sX PUT \"$FLEDGE_URL/schedule/enable\" -d '{\"schedule_name\":\"Sine\"}'\n   sleep 20\n   curl -sX PUT \"$FLEDGE_URL/schedule/enable\" -d '{\"schedule_name\": \"Random\"}'\n   sleep 20\n}\n\nsetup_north_pi_egress () {\n  # Add PI North as service\n  echo 'Setting up North'\n  curl -sX POST \"$FLEDGE_URL/service\" -d \\\n  '{\n     \"name\": \"PI Server\",\n     \"plugin\": \"OMF\",\n     \"type\": \"north\",\n     \"enabled\": true,\n     \"config\": {\n        \"PIServerEndpoint\": {\n           \"value\": \"PI Web API\"\n        },\n        \"ServerHostname\": {\n          
 \"value\": \"'${PI_IP}'\"\n        },\n        \"ServerPort\": {\n           \"value\": \"443\"\n        },\n        \"PIWebAPIUserId\": {\n           \"value\": \"'${PI_USER}'\"\n        },\n        \"PIWebAPIPassword\": {\n           \"value\": \"'${PI_PASSWORD}'\"\n        },\n        \"NamingScheme\": {\n           \"value\": \"Backward compatibility\"\n        },\n        \"PIWebAPIAuthenticationMethod\": {\n           \"value\": \"basic\"\n        },\n        \"compression\": {\n           \"value\": \"true\"\n        }\n     }\n  }'\n  echo\n  echo 'North setup done'\n}\n\nmonitor_memory() {\n  local duration=$1\n  local threshold=$2\n  local interval=5  # Check memory every 5 seconds\n  \n  echo \"Monitoring system memory for ${duration} seconds...\"\n  \n  # Calculate threshold memory value\n  local total_mem=$(free | awk '/^Mem:/{print $2}')\n  local threshold_mem=$((total_mem * threshold / 100))\n  \n  local remaining=$duration\n  \n  while [ $remaining -gt 0 ]; do\n      # Check available memory\n      local avail_mem=$(free | awk '/^Mem:/{print $7}')\n      \n      if [ $avail_mem -lt $threshold_mem ]; then\n          echo \"Available memory is below threshold. 
Stopping monitoring.\"\n          break\n      fi\n      \n      # Sleep for interval seconds\n      sleep $interval\n      remaining=$((remaining - interval))\n      echo \"${remaining} seconds remaining\"\n  done\n}\n\ncollect_data() {\n  echo \"Collecting Data and Generating reports\"\n  set +e\n\n  echo \"===================== COLLECTING SUPPORT BUNDLE / SYSLOG ============================\"\n  mkdir -p reports/ && ls -lrth\n  BUNDLE=$(curl -sX POST \"$FLEDGE_URL/support\")\n  # Check if the bundle is created using jq\n  if jq -e 'has(\"bundle created\")' <<< \"$BUNDLE\" > /dev/null; then\n    echo \"Support Bundle Created\"\n    # Use proper quoting for variable expansion\n    cp -r \"$FLEDGE_ROOT/data/support/\"* reports/ && \\\n    echo \"Support bundle has been saved to path: $SCRIPT_DIR/reports\"\n  else\n    echo \"Failed to Create support bundle\"\n    # Use proper quoting for variable expansion\n    cp /var/log/syslog reports/ && \\\n    echo \"Syslog Saved to path: $SCRIPT_DIR/reports\"\n  fi\n  echo \"===================== COLLECTED SUPPORT BUNDLE / SYSLOG ============================\"\n  # Use proper quoting for variable expansion\n  \"${FLEDGE_ROOT}/scripts/fledge\" stop && echo $?\n  set -e\n}\n\n\ngenerate_valgrind_logs(){\n  echo 'Creating reports directory';\n  mkdir -p reports/ ; ls -lrth\n  echo 'copying reports'\n  extension=\"xml\"\n  if [[ \"${COLLECT_FILES}\" == \"LOGS\" ]]; then extension=\"log\"; fi\n  cp -rf /tmp/*valgrind*.${extension} reports/. && echo 'copied'\n}\n\ncleanup\nsetup\nreset_fledge\nconfigure_purge\nadd_sinusoid\nadd_random\nif [ \"${USE_FILTER}\" = \"True\" ]; then\n   add_asset_filter_to_sine\n   add_rename_filter_to_random\nfi\nenable_services\nsetup_north_pi_egress\nmonitor_memory ${TEST_RUN_TIME} ${MEMORY_THRESHOLD}\ncollect_data\ngenerate_valgrind_logs \n\n"
  },
  {
    "path": "tests/system/memory_leak/valgrind-python.supp",
    "content": "#\n# This is a valgrind suppression file that should be used when using valgrind.\n# This file has been updated to support multiple platforms:\n# - Ubuntu 20.04, 22.04, and 24.04\n# - x86_64 and aarch64 architectures\n#\n#  Here's an example of running valgrind:\n#\n#\tcd python/dist/src\n#\tvalgrind --tool=memcheck --suppressions=Misc/valgrind-python.supp \\\n#\t\t./python -E ./Lib/test/regrtest.py -u gui,network\n#\n# You must edit Objects/obmalloc.c and uncomment Py_USING_MEMORY_DEBUGGER\n# to use the preferred suppressions with address_in_range.\n#\n# If you do not want to recompile Python, you can uncomment\n# suppressions for _PyObject_Free and _PyObject_Realloc.\n#\n# See Misc/README.valgrind for more information.\n#\n# Platform-specific suppressions:\n# - x86_64: /lib/x86_64-linux-gnu/ paths\n# - aarch64: /lib/aarch64-linux-gnu/ paths  \n# - Legacy: /lib/ and /usr/lib/ paths for older systems\n# - Modern: OpenSSL 1.1+, glibc 2.31+, systemd suppressions\n\n# all tool names: Addrcheck,Memcheck,cachegrind,helgrind,massif\n{\n   ADDRESS_IN_RANGE/Invalid read of size 4\n   Memcheck:Addr4\n   fun:address_in_range\n}\n\n{\n   ADDRESS_IN_RANGE/Invalid read of size 4\n   Memcheck:Value4\n   fun:address_in_range\n}\n\n{\n   ADDRESS_IN_RANGE/Invalid read of size 8 (x86_64 aka amd64)\n   Memcheck:Value8\n   fun:address_in_range\n}\n\n{\n   ADDRESS_IN_RANGE/Conditional jump or move depends on uninitialised value\n   Memcheck:Cond\n   fun:address_in_range\n}\n\n#\n# Leaks (including possible leaks)\n#    Hmmm, I wonder if this masks some real leaks.  I think it does.\n#    Will need to fix that.\n#\n\n{\n   Suppress leaking the GIL after a fork.\n   Memcheck:Leak\n   fun:malloc\n   fun:PyThread_allocate_lock\n   fun:PyEval_ReInitThreads\n}\n\n{\n   Suppress leaking the autoTLSkey.  
This looks like it shouldn't leak though.\n   Memcheck:Leak\n   fun:malloc\n   fun:PyThread_create_key\n   fun:_PyGILState_Init\n   fun:Py_InitializeEx\n   fun:Py_Main\n}\n\n{\n   Hmmm, is this a real leak or like the GIL?\n   Memcheck:Leak\n   fun:malloc\n   fun:PyThread_ReInitTLS\n}\n\n{\n   Handle PyMalloc confusing valgrind (possibly leaked)\n   Memcheck:Leak\n   fun:realloc\n   fun:_PyObject_GC_Resize\n   fun:COMMENT_THIS_LINE_TO_DISABLE_LEAK_WARNING\n}\n\n{\n   Handle PyMalloc confusing valgrind (possibly leaked)\n   Memcheck:Leak\n   fun:malloc\n   fun:_PyObject_GC_New\n   fun:COMMENT_THIS_LINE_TO_DISABLE_LEAK_WARNING\n}\n\n{\n   Handle PyMalloc confusing valgrind (possibly leaked)\n   Memcheck:Leak\n   fun:malloc\n   fun:_PyObject_GC_NewVar\n   fun:COMMENT_THIS_LINE_TO_DISABLE_LEAK_WARNING\n}\n\n#\n# Leaks: dlopen() called without dlclose()\n#\n\n{\n   dlopen() called without dlclose()\n   Memcheck:Leak\n   fun:malloc\n   fun:malloc\n   fun:strdup\n   fun:_dl_load_cache_lookup\n}\n{\n   dlopen() called without dlclose()\n   Memcheck:Leak\n   fun:malloc\n   fun:malloc\n   fun:strdup\n   fun:_dl_map_object\n}\n{\n   dlopen() called without dlclose()\n   Memcheck:Leak\n   fun:malloc\n   fun:*\n   fun:_dl_new_object\n}\n{\n   dlopen() called without dlclose()\n   Memcheck:Leak\n   fun:calloc\n   fun:*\n   fun:_dl_new_object\n}\n{\n   dlopen() called without dlclose()\n   Memcheck:Leak\n   fun:calloc\n   fun:*\n   fun:_dl_check_map_versions\n}\n\n\n#\n# Non-python specific leaks\n#\n\n{\n   Handle pthread issue (possibly leaked)\n   Memcheck:Leak\n   fun:calloc\n   fun:allocate_dtv\n   fun:_dl_allocate_tls_storage\n   fun:_dl_allocate_tls\n}\n\n{\n   Handle pthread issue (possibly leaked)\n   Memcheck:Leak\n   fun:memalign\n   fun:_dl_allocate_tls_storage\n   fun:_dl_allocate_tls\n}\n\n{\n   ADDRESS_IN_RANGE/Invalid read of size 4\n   Memcheck:Addr4\n   fun:_PyObject_Free\n}\n\n{\n   ADDRESS_IN_RANGE/Invalid read of size 4\n   Memcheck:Value4\n   
fun:_PyObject_Free\n}\n\n{\n   ADDRESS_IN_RANGE/Use of uninitialised value of size 8\n   Memcheck:Addr8\n   fun:_PyObject_Free\n}\n\n{\n   ADDRESS_IN_RANGE/Use of uninitialised value of size 8\n   Memcheck:Value8\n   fun:_PyObject_Free\n}\n\n{\n   ADDRESS_IN_RANGE/Conditional jump or move depends on uninitialised value\n   Memcheck:Cond\n   fun:_PyObject_Free\n}\n\n{\n   ADDRESS_IN_RANGE/Invalid read of size 4\n   Memcheck:Addr4\n   fun:_PyObject_Realloc\n}\n\n{\n   ADDRESS_IN_RANGE/Invalid read of size 4\n   Memcheck:Value4\n   fun:_PyObject_Realloc\n}\n\n{\n   ADDRESS_IN_RANGE/Use of uninitialised value of size 8\n   Memcheck:Addr8\n   fun:_PyObject_Realloc\n}\n\n{\n   ADDRESS_IN_RANGE/Use of uninitialised value of size 8\n   Memcheck:Value8\n   fun:_PyObject_Realloc\n}\n\n{\n   ADDRESS_IN_RANGE/Conditional jump or move depends on uninitialised value\n   Memcheck:Cond\n   fun:_PyObject_Realloc\n}\n\n###\n### All the suppressions below are for errors that occur within libraries\n### that Python uses.  
The problems to not appear to be related to Python's\n### use of the libraries.\n###\n\n{\n   Generic ubuntu ld problems (x86_64)\n   Memcheck:Addr8\n   obj:/lib/x86_64-linux-gnu/ld-*.so\n   obj:/lib/x86_64-linux-gnu/ld-*.so\n   obj:/lib/x86_64-linux-gnu/ld-*.so\n   obj:/lib/x86_64-linux-gnu/ld-*.so\n}\n\n{\n   Generic ubuntu ld problems (aarch64)\n   Memcheck:Addr8\n   obj:/lib/aarch64-linux-gnu/ld-*.so\n   obj:/lib/aarch64-linux-gnu/ld-*.so\n   obj:/lib/aarch64-linux-gnu/ld-*.so\n   obj:/lib/aarch64-linux-gnu/ld-*.so\n}\n\n{\n   Generic gentoo ld problems (x86_64)\n   Memcheck:Cond\n   obj:/lib/x86_64-linux-gnu/ld-*.so\n   obj:/lib/x86_64-linux-gnu/ld-*.so\n   obj:/lib/x86_64-linux-gnu/ld-*.so\n   obj:/lib/x86_64-linux-gnu/ld-*.so\n}\n\n{\n   Generic gentoo ld problems (aarch64)\n   Memcheck:Cond\n   obj:/lib/aarch64-linux-gnu/ld-*.so\n   obj:/lib/aarch64-linux-gnu/ld-*.so\n   obj:/lib/aarch64-linux-gnu/ld-*.so\n   obj:/lib/aarch64-linux-gnu/ld-*.so\n}\n\n{\n   Generic ubuntu ld problems (legacy path)\n   Memcheck:Addr8\n   obj:/lib/ld-*.so\n   obj:/lib/ld-*.so\n   obj:/lib/ld-*.so\n   obj:/lib/ld-*.so\n}\n\n{\n   Generic gentoo ld problems (legacy path)\n   Memcheck:Cond\n   obj:/lib/ld-*.so\n   obj:/lib/ld-*.so\n   obj:/lib/ld-*.so\n   obj:/lib/ld-*.so\n}\n\n{\n   DBM problems, see test_dbm\n   Memcheck:Param\n   write(buf)\n   fun:write\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   fun:dbm_close\n}\n\n{\n   DBM problems, see test_dbm\n   Memcheck:Value8\n   fun:memmove\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   fun:dbm_store\n   fun:dbm_ass_sub\n}\n\n{\n   DBM problems, see test_dbm\n   Memcheck:Cond\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   fun:dbm_store\n   fun:dbm_ass_sub\n}\n\n{\n   DBM problems, see test_dbm\n   Memcheck:Cond\n   fun:memmove\n   
obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   fun:dbm_store\n   fun:dbm_ass_sub\n}\n\n{\n   GDBM problems, see test_gdbm\n   Memcheck:Param\n   write(buf)\n   fun:write\n   fun:gdbm_open\n\n}\n\n{\n   Uninitialised byte(s) false alarm, see bpo-35561\n   Memcheck:Param\n   epoll_ctl(event)\n   fun:epoll_ctl\n   fun:pyepoll_internal_ctl\n}\n\n{\n   ZLIB problems, see test_gzip (x86_64)\n   Memcheck:Cond\n   obj:/lib/x86_64-linux-gnu/libz.so*\n   obj:/lib/x86_64-linux-gnu/libz.so*\n   fun:deflate\n}\n\n{\n   ZLIB problems, see test_gzip (aarch64)\n   Memcheck:Cond\n   obj:/lib/aarch64-linux-gnu/libz.so*\n   obj:/lib/aarch64-linux-gnu/libz.so*\n   fun:deflate\n}\n\n{\n   ZLIB problems, see test_gzip (legacy path)\n   Memcheck:Cond\n   obj:/lib/libz.so*\n   obj:/lib/libz.so*\n   fun:deflate\n}\n\n{\n   Avoid problems w/readline doing a putenv and leaking on exit (x86_64)\n   Memcheck:Leak\n   fun:malloc\n   fun:xmalloc\n   fun:sh_set_lines_and_columns\n   fun:_rl_get_screen_size\n   fun:_rl_init_terminal_io\n   obj:/lib/x86_64-linux-gnu/libreadline.so*\n   fun:rl_initialize\n}\n\n{\n   Avoid problems w/readline doing a putenv and leaking on exit (aarch64)\n   Memcheck:Leak\n   fun:malloc\n   fun:xmalloc\n   fun:sh_set_lines_and_columns\n   fun:_rl_get_screen_size\n   fun:_rl_init_terminal_io\n   obj:/lib/aarch64-linux-gnu/libreadline.so*\n   fun:rl_initialize\n}\n\n{\n   Avoid problems w/readline doing a putenv and leaking on exit (legacy path)\n   Memcheck:Leak\n   fun:malloc\n   fun:xmalloc\n   fun:sh_set_lines_and_columns\n   fun:_rl_get_screen_size\n   fun:_rl_init_terminal_io\n   obj:/lib/libreadline.so*\n   fun:rl_initialize\n}\n\n# Valgrind emits \"Conditional jump or move depends on uninitialised value(s)\"\n# false alarms on GCC builtin strcmp() function. 
The GCC code is correct.\n#\n# Valgrind bug: https://bugs.kde.org/show_bug.cgi?id=264936\n{\n   bpo-38118: Valgrind emits false alarm on GCC builtin strcmp()\n   Memcheck:Cond\n   fun:PyUnicode_Decode\n}\n\n\n###\n### These occur from somewhere within the SSL, when running\n###  test_socket_sll.  They are too general to leave on by default.\n###\n###{\n###   somewhere in SSL stuff\n###   Memcheck:Cond\n###   fun:memset\n###}\n###{\n###   somewhere in SSL stuff\n###   Memcheck:Value4\n###   fun:memset\n###}\n###\n###{\n###   somewhere in SSL stuff\n###   Memcheck:Cond\n###   fun:MD5_Update\n###}\n###\n###{\n###   somewhere in SSL stuff\n###   Memcheck:Value4\n###   fun:MD5_Update\n###}\n\n# Fedora's package \"openssl-1.0.1-0.1.beta2.fc17.x86_64\" on x86_64\n# See http://bugs.python.org/issue14171\n{\n   openssl 1.0.1 prng 1\n   Memcheck:Cond\n   fun:bcmp\n   fun:fips_get_entropy\n   fun:FIPS_drbg_instantiate\n   fun:RAND_init_fips\n   fun:OPENSSL_init_library\n   fun:SSL_library_init\n   fun:init_hashlib\n}\n\n{\n   openssl 1.0.1 prng 2\n   Memcheck:Cond\n   fun:fips_get_entropy\n   fun:FIPS_drbg_instantiate\n   fun:RAND_init_fips\n   fun:OPENSSL_init_library\n   fun:SSL_library_init\n   fun:init_hashlib\n}\n\n{\n   openssl 1.0.1 prng 3\n   Memcheck:Value8\n   fun:_x86_64_AES_encrypt_compact\n   fun:AES_encrypt\n}\n\n#\n# All of these problems come from using test_socket_ssl\n#\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:BN_bin2bn\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:BN_num_bits_word\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Value4\n   fun:BN_num_bits_word\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:BN_mod_exp_mont_word\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:BN_mod_exp_mont\n}\n\n{\n   from test_socket_ssl (legacy crypto)\n   Memcheck:Param\n   write(buf)\n   fun:write\n   obj:/usr/lib/libcrypto.so.0.9.7\n}\n\n{\n   from test_socket_ssl (modern crypto x86_64)\n   Memcheck:Param\n   write(buf)\n   
fun:write\n   obj:/lib/x86_64-linux-gnu/libcrypto.so*\n}\n\n{\n   from test_socket_ssl (modern crypto aarch64)\n   Memcheck:Param\n   write(buf)\n   fun:write\n   obj:/lib/aarch64-linux-gnu/libcrypto.so*\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:RSA_verify\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Value4\n   fun:RSA_verify\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Value4\n   fun:DES_set_key_unchecked\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Value4\n   fun:DES_encrypt2\n}\n\n{\n   from test_socket_ssl (legacy ssl)\n   Memcheck:Cond\n   obj:/usr/lib/libssl.so.0.9.7\n}\n\n{\n   from test_socket_ssl (modern ssl x86_64)\n   Memcheck:Cond\n   obj:/lib/x86_64-linux-gnu/libssl.so*\n}\n\n{\n   from test_socket_ssl (modern ssl aarch64)\n   Memcheck:Cond\n   obj:/lib/aarch64-linux-gnu/libssl.so*\n}\n\n{\n   from test_socket_ssl (legacy ssl value)\n   Memcheck:Value4\n   obj:/usr/lib/libssl.so.0.9.7\n}\n\n{\n   from test_socket_ssl (modern ssl value x86_64)\n   Memcheck:Value4\n   obj:/lib/x86_64-linux-gnu/libssl.so*\n}\n\n{\n   from test_socket_ssl (modern ssl value aarch64)\n   Memcheck:Value4\n   obj:/lib/aarch64-linux-gnu/libssl.so*\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:BUF_MEM_grow_clean\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:memcpy\n   fun:ssl3_read_bytes\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:SHA1_Update\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Value4\n   fun:SHA1_Update\n}\n\n{\n   test_buffer_non_debug\n   Memcheck:Addr4\n   fun:PyUnicodeUCS2_FSConverter\n}\n\n{\n   test_buffer_non_debug\n   Memcheck:Addr4\n   fun:PyUnicode_FSConverter\n}\n\n{\n   wcscmp_false_positive\n   Memcheck:Addr8\n   fun:wcscmp\n   fun:_PyOS_GetOpt\n   fun:Py_Main\n   fun:main\n}\n\n# Additional suppressions for the unified decimal tests:\n{\n   test_decimal\n   Memcheck:Addr4\n   fun:PyUnicodeUCS2_FSConverter\n}\n\n{\n   test_decimal2\n   Memcheck:Addr4\n   fun:PyUnicode_FSConverter\n}\n\n# General 
Python suppressions for multiple versions\n{\n   Python _PyObject_New possibly lost\n   Memcheck:Leak\n   match-leak-kinds: possible\n   fun:malloc\n   fun:_PyObject_New\n}\n\n{\n   Python initialization still reachable\n   Memcheck:Leak\n   match-leak-kinds: reachable\n   fun:malloc\n   ...\n   fun:Py_InitializeFromConfig\n}\n\n{\n   Python wide string copy still reachable\n   Memcheck:Leak\n   match-leak-kinds: reachable\n   fun:malloc\n   fun:_PyWideStringList_Copy\n}\n\n{\n   Python PyMem_Malloc possibly lost\n   Memcheck:Leak\n   match-leak-kinds: possible\n   fun:malloc\n   fun:PyMem_Malloc\n}\n\n{\n   Python PyMem_RawMalloc possibly lost  \n   Memcheck:Leak\n   match-leak-kinds: possible\n   fun:malloc\n   fun:PyMem_RawMalloc\n}\n\n###\n### Additional suppressions for Ubuntu 20.04, 22.04, and 24.04 compatibility\n###\n\n{\n   Ubuntu 20.04+ glibc malloc issues\n   Memcheck:Leak\n   fun:malloc\n   fun:__libc_malloc\n}\n\n{\n   Ubuntu 20.04+ pthread TLS issues\n   Memcheck:Leak\n   fun:calloc\n   fun:allocate_dtv\n   fun:_dl_allocate_tls_storage\n   fun:_dl_allocate_tls\n}\n\n{\n   Ubuntu 20.04+ pthread TLS issues (alternative)\n   Memcheck:Leak\n   fun:memalign\n   fun:_dl_allocate_tls_storage\n   fun:_dl_allocate_tls\n}\n\n{\n   Modern OpenSSL 1.1+ issues (x86_64)\n   Memcheck:Cond\n   obj:/lib/x86_64-linux-gnu/libssl.so*\n   obj:/lib/x86_64-linux-gnu/libcrypto.so*\n}\n\n{\n   Modern OpenSSL 1.1+ issues (aarch64)\n   Memcheck:Cond\n   obj:/lib/aarch64-linux-gnu/libssl.so*\n   obj:/lib/aarch64-linux-gnu/libcrypto.so*\n}\n\n{\n   Ubuntu 24.04+ systemd issues\n   Memcheck:Leak\n   fun:malloc\n   fun:sd_bus_new\n}\n\n{\n   Modern glibc issues (x86_64)\n   Memcheck:Cond\n   obj:/lib/x86_64-linux-gnu/libc.so*\n}\n\n{\n   Modern glibc issues (aarch64)\n   Memcheck:Cond\n   obj:/lib/aarch64-linux-gnu/libc.so*\n}\n\n"
  },
  {
    "path": "tests/system/plugins/README.rst",
    "content": "System Test Plugins\n===================\n\nA set of plugins written explicitly for system tests and not intended for general use.\n"
  },
  {
    "path": "tests/system/plugins/south/fledge-south-testcard/.gitignore",
    "content": "build\n"
  },
  {
    "path": "tests/system/plugins/south/fledge-south-testcard/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.4.0)\n\n# Set the plugin name to build\nproject(testcard)\n\n# Supported options:\n# -DFLEDGE_INCLUDE\n# -DFLEDGE_LIB\n# -DFLEDGE_SRC\n# -DFLEDGE_INSTALL\n#\n# If no -D options are given and FLEDGE_ROOT environment variable is set\n# then Fledge libraries and header files are pulled from FLEDGE_ROOT path.\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O3\")\n\n# Generate version header file\nset_source_files_properties(version.h PROPERTIES GENERATED TRUE)\nadd_custom_command(\n  OUTPUT version.h\n  DEPENDS ${CMAKE_SOURCE_DIR}/VERSION\n  COMMAND ${CMAKE_SOURCE_DIR}/mkversion ${CMAKE_SOURCE_DIR}\n  COMMENT \"Generating version header\"\n  VERBATIM\n)\ninclude_directories(${CMAKE_BINARY_DIR} /usr/include/libxml2)\n\n\n# Set plugin type (south, north, filter)\nset(PLUGIN_TYPE \"south\")\n# Add here all needed Fledge libraries as list\nset(NEEDED_FLEDGE_LIBS common-lib)\n# Add here additional needed libraries\nset(ADD_LIBS -lssl -lcrypto -lxml2)\n\n# Find source files\nfile(GLOB SOURCES *.cpp)\n\n# Find Fledge includes and libs by including the FindFledge.cmake file\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR})\nfind_package(Fledge)\n# If errors: make clean and remove Makefile\nif (NOT FLEDGE_FOUND)\n\tif (EXISTS \"${CMAKE_BINARY_DIR}/Makefile\")\n\t\texecute_process(COMMAND make clean WORKING_DIRECTORY ${CMAKE_BINARY_DIR})\n\t\tfile(REMOVE \"${CMAKE_BINARY_DIR}/Makefile\")\n\tendif()\n\t# Stop the build process\n\tmessage(FATAL_ERROR \"Fledge plugin '${PROJECT_NAME}' build error.\")\nendif()\n# On success, FLEDGE_INCLUDE_DIRS and FLEDGE_LIB_DIRS variables are set\n\n# Add ./include\ninclude_directories(include)\n# Add Fledge include dir(s)\ninclude_directories(${FLEDGE_INCLUDE_DIRS})\n# Add other include paths\nif (FLEDGE_SRC)\n\tmessage(STATUS \"Using third-party includes \" 
${FLEDGE_SRC}/C/thirdparty)\n\tinclude_directories(${FLEDGE_SRC}/C/thirdparty/rapidjson/include)\n\tinclude_directories(${FLEDGE_SRC}/C/thirdparty/Simple-Web-Server)\nendif()\n\n# Add Fledge lib path\nlink_directories(${FLEDGE_LIB_DIRS})\n\n# Create shared library\nadd_library(${PROJECT_NAME} SHARED ${SOURCES} version.h)\n# Add Fledge library names\ntarget_link_libraries(${PROJECT_NAME} ${NEEDED_FLEDGE_LIBS})\n# Add additional libraries\ntarget_link_libraries(${PROJECT_NAME} ${ADD_LIBS})\n# Set the build version \nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n\nset(FLEDGE_INSTALL \"\" CACHE INTERNAL \"\")\n# Install library\nif (FLEDGE_INSTALL)\n\tmessage(STATUS \"Installing ${PROJECT_NAME} in ${FLEDGE_INSTALL}/plugins/${PLUGIN_TYPE}/${PROJECT_NAME}\")\n\tinstall(TARGETS ${PROJECT_NAME} DESTINATION ${FLEDGE_INSTALL}/plugins/${PLUGIN_TYPE}/${PROJECT_NAME})\nendif()\n"
  },
  {
    "path": "tests/system/plugins/south/fledge-south-testcard/Description",
    "content": "A south plugin that creates DPImage data points\n"
  },
  {
    "path": "tests/system/plugins/south/fledge-south-testcard/FindFledge.cmake",
    "content": "# This CMake file locates the Fledge header files and libraries\n#\n# The following variables are set:\n# FLEDGE_INCLUDE_DIRS - Path(s) to Fledge headers files found\n# FLEDGE_LIB_DIRS - Path to Fledge shared libraries\n# FLEDGE_SUCCESS - Set on succes\n#\n# In case of error use SEND_ERROR and return()\n#\n\n# Set defaults paths of installed Fledge SDK package\nset(FLEDGE_DEFAULT_INCLUDE_DIR \"/usr/include/fledge\" CACHE INTERNAL \"\")\nset(FLEDGE_DEFAULT_LIB_DIR \"/usr/lib/fledge\" CACHE INTERNAL \"\")\n\n# CMakeLists.txt options\nset(FLEDGE_SRC \"\" CACHE INTERNAL \"\")\nset(FLEDGE_INCLUDE \"\" CACHE INTERNAL \"\")\nset(FLEDGE_LIB \"\" CACHE INTERNAL \"\")\n\n# Return variables\nset(FLEDGE_INCLUDE_DIRS \"\" CACHE INTERNAL \"\")\nset(FLEDGE_LIB_DIRS \"\" CACHE INTERNAL \"\")\nset(FLEDGE_FOUND \"\" CACHE INTERNAL \"\")\n\n# No options set\n# If FLEDGE_ROOT env var is set, use it\nif (NOT FLEDGE_SRC AND NOT FLEDGE_INCLUDE AND NOT FLEDGE_LIB)\n\tif (DEFINED ENV{FLEDGE_ROOT})\n\t\tmessage(STATUS \"No options set.\\n\" \n\t\t\t\"   +Using found FLEDGE_ROOT $ENV{FLEDGE_ROOT}\")\n\t\tset(FLEDGE_SRC $ENV{FLEDGE_ROOT})\n\tendif()\nendif()\n\n# -DFLEDGE_SRC=/some_path or FLEDGE_ROOT path\n# Set return variable FLEDGE_INCLUDE_DIRS\nif (FLEDGE_SRC)\n\tunset(_INCLUDE_LIST CACHE)\n\tfile(GLOB_RECURSE _INCLUDE_COMMON \"${FLEDGE_SRC}/C/common/*.h\")\n\tfile(GLOB_RECURSE _INCLUDE_SERVICES \"${FLEDGE_SRC}/C/services/common/*.h\")\n\tfile(GLOB_RECURSE _INCLUDE_PLUGINS_FILTER_COMMON \"${FLEDGE_SRC}/C/plugins/filter/common/*.h\")\n\tlist(APPEND _INCLUDE_LIST ${_INCLUDE_COMMON} ${_INCLUDE_SERVICES})\n\tforeach(_ITEM ${_INCLUDE_LIST})\n\t\tget_filename_component(_ITEM_PATH ${_ITEM} DIRECTORY)\n\t\tlist(APPEND FLEDGE_INCLUDE_DIRS ${_ITEM_PATH})\n\tendforeach()\n\tlist(APPEND FLEDGE_INCLUDE_DIRS \"${FLEDGE_SRC}/C/thirdparty/rapidjson/include\")\n\tunset(INCLUDE_LIST CACHE)\n\n\tlist(REMOVE_DUPLICATES FLEDGE_INCLUDE_DIRS)\n\n\tstring (REPLACE \";\" \"\\n   +\" DISPLAY_PATHS 
\"${FLEDGE_INCLUDE_DIRS}\")\n\tif (NOT DEFINED ENV{FLEDGE_ROOT})\n\t\tmessage(STATUS \"Using -DFLEDGE_SRC option for includes\\n   +\" \"${DISPLAY_PATHS}\")\n\telse()\n\t\tmessage(STATUS \"Using FLEDGE_ROOT for includes\\n   +\" \"${DISPLAY_PATHS}\")\n\tendif()\n\n\tif (NOT FLEDGE_INCLUDE_DIRS)\n\t\tmessage(SEND_ERROR \"Needed Fledge header files not found in path ${FLEDGE_SRC}/C\")\n\t\treturn()\n\tendif()\nelse()\n\t# -DFLEDGE_INCLUDE=/some_path\n\tif (NOT FLEDGE_INCLUDE)\n\t\tset(FLEDGE_INCLUDE ${FLEDGE_DEFAULT_INCLUDE_DIR})\n\t\tmessage(STATUS \"Using Fledge dev package includes \" ${FLEDGE_INCLUDE})\n\telse()\n\t\tmessage(STATUS \"Using -DFLEDGE_INCLUDE option \" ${FLEDGE_INCLUDE})\n\tendif()\n\t# Remove current value from cache\n\tunset(_FIND_INCLUDES CACHE)\n\t# Get up to date var from find_path\n\tfind_path(_FIND_INCLUDES NAMES plugin_api.h PATHS ${FLEDGE_INCLUDE})\n\tif (_FIND_INCLUDES)\n\t\tlist(APPEND FLEDGE_INCLUDE_DIRS ${_FIND_INCLUDES})\n\tendif()\n\t# Remove current value from cache\n\tunset(_FIND_INCLUDES CACHE)\n\n\tif (NOT FLEDGE_INCLUDE_DIRS)\n\t\tmessage(SEND_ERROR \"Needed Fledge header files not found in path ${FLEDGE_INCLUDE}\")\n\t\treturn()\n\tendif()\nendif()\n\n#\n# Fledge Libraries\n#\n# Check -DFLEDGE_LIB=/some path is valid\n# or use FLEDGE_SRC/cmake_build/C/lib\n# FLEDGE_SRC might have been set to FLEDGE_ROOT above\n#\nif (FLEDGE_SRC)\n\t# Set return variable FLEDGE_LIB_DIRS\n        set(FLEDGE_LIB \"${FLEDGE_SRC}/cmake_build/C/lib\")\n\n\tif (NOT DEFINED ENV{FLEDGE_ROOT})\n\t\tmessage(STATUS \"Using -DFLEDGE_SRC option for libs \\n   +\" \"${FLEDGE_SRC}/cmake_build/C/lib\")\n\telse()\n\t\tmessage(STATUS \"Using FLEDGE_ROOT for libs \\n   +\" \"${FLEDGE_SRC}/cmake_build/C/lib\")\n\tendif()\n\n\tif (NOT EXISTS \"${FLEDGE_SRC}/cmake_build\")\n\t\tmessage(SEND_ERROR \"Fledge has not been built yet in ${FLEDGE_SRC}  Compile it first.\")\n\t\treturn()\n\tendif()\n\n\t# Set return variable FLEDGE_LIB_DIRS\n\tset(FLEDGE_LIB_DIRS 
\"${FLEDGE_SRC}/cmake_build/C/lib\")\nelse()\n\tif (NOT FLEDGE_LIB)\n\t\tset(FLEDGE_LIB ${FLEDGE_DEFAULT_LIB_DIR})\n\t\tmessage(STATUS \"Using Fledge dev package libs \" ${FLEDGE_LIB})\n\telse()\n\t\tmessage(STATUS \"Using -DFLEDGE_LIB option \" ${FLEDGE_LIB})\n\tendif()\n\t# Set return variable FLEDGE_LIB_DIRS\n\tset(FLEDGE_LIB_DIRS ${FLEDGE_LIB})\nendif()\n\n# Check NEEDED_FLEDGE_LIBS in libraries in FLEDGE_LIB_DIRS\n# NEEDED_FLEDGE_LIBS variables comes from CMakeLists.txt\nforeach(_LIB ${NEEDED_FLEDGE_LIBS})\n\t# Remove current value from cache\n\tunset(_FOUND_LIB CACHE)\n\t# Get up to date var from find_library\n\tfind_library(_FOUND_LIB NAME ${_LIB} PATHS ${FLEDGE_LIB_DIRS})\n\tif (_FOUND_LIB)\n\t\t# Extract path form founf library file\n\t\tget_filename_component(_DIR_LIB ${_FOUND_LIB} DIRECTORY)\n\telse()\n\t\tmessage(SEND_ERROR \"Needed Fledge library ${_LIB} not found in ${FLEDGE_LIB_DIRS}\")\n\t\treturn()\n\tendif()\n\t# Remove current value from cache\n\tunset(_FOUND_LIB CACHE)\nendforeach()\n\n# Set return variable FLEDGE_FOUND\nset(FLEDGE_FOUND \"true\")\n"
  },
  {
    "path": "tests/system/plugins/south/fledge-south-testcard/LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright 2021 Dianomic Systems Inc\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "tests/system/plugins/south/fledge-south-testcard/Package",
    "content": "# A set of variables that define how we package this repository\n#\nplugin_name=testcard\nplugin_type=south\nplugin_install_dirname=testcard\nadditional_libs=\n\n# Now build up the runtime requirements list. This has 3 components\n#   1. Generic packages we depend on in all architectures and package managers\n#   2. Architecture specific packages we depend on\n#   3. Package manager specific packages we depend on\nrequirements=\"fledge\"\n\ncase \"$arch\" in\n\tx86_64)\n\t\t;;\n\tarmv7l)\n\t\t;;\n\taarch64)\n\t\t;;\nesac\ncase \"$package_manager\" in\n\tdeb)\n\t\t;;\n\trpm)\n\t\t;;\nesac\n"
  },
  {
    "path": "tests/system/plugins/south/fledge-south-testcard/README.rst",
    "content": "Testcard plugin for creating DPImage data\n=========================================\n\nA plugin to create image data for test purposes\n"
  },
  {
    "path": "tests/system/plugins/south/fledge-south-testcard/VERSION",
    "content": "1.9.2\n"
  },
  {
    "path": "tests/system/plugins/south/fledge-south-testcard/fledge.version",
    "content": "fledge_version>=1.9\n"
  },
  {
    "path": "tests/system/plugins/south/fledge-south-testcard/mkversion",
    "content": "#!/bin/sh\ncat > version.h << END_WARNING\n\n/*\n * WARNING: This is an automatically generated file.\n *          Do not edit this file.\n *          To change the version edit the file VERSION\n */\n\nEND_WARNING\n/bin/echo '#define VERSION\t\"'`cat $1/VERSION`'\"' >> version.h\n"
  },
  {
    "path": "tests/system/plugins/south/fledge-south-testcard/plugin.cpp",
    "content": "/*\n * Fledge south plugin.\n *\n * Copyright (c) 2022 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Mark Riddoch\n */\n#include <plugin_api.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <strings.h>\n#include <string>\n#include <logger.h>\n#include <plugin_exception.h>\n#include <config_category.h>\n#include <version.h>\n#include <reading.h>\n#include <dpimage.h>\n#include<cmath>\n\ntypedef void (*INGEST_CB)(void *, Reading);\n\n#define PLUGIN_NAME \"testcard\"\n\nusing namespace std;\n\n/**\n * The default configuration for the Flir plugin.\n */\nstatic const char *default_config = QUOTE({\n\t\"plugin\" : { \n\t\t\"description\" :  \"Plugin for image testcard production\",\n\t\t\"type\" : \"string\",\n\t\t\"default\" : PLUGIN_NAME, \n\t\t\"readonly\" : \"true\"\n\t\t}, \n\t\"asset\" : { \n\t\t\"description\" : \"Asset name to use\",\n\t\t\"type\" : \"string\",\n\t\t\"default\" : \"testcard\",\n\t\t\"displayName\": \"Asset Name\",\n\t\t\"mandatory\": \"true\",\n\t\t\"order\" : \"1\"\n\t       \t},\n\t\"imageHeight\" : { \n\t\t\"description\" : \"The height of test card image to create.\",\n\t\t\"type\" : \"integer\",\n\t\t\"displayName\": \"Image Height\",\n\t\t\"default\" : \"480\",\n\t\t\"mandatory\": \"true\",\n\t\t\"order\" : \"2\"\n\t       \t},\n\t\"imageWidth\" : { \n\t\t\"description\" : \"The Width of test card image to create.\",\n\t\t\"type\" : \"integer\",\n\t\t\"default\" : \"640\",\n\t\t\"displayName\": \"Image Width\",\n\t\t\"mandatory\": \"true\",\n\t\t\"order\" : \"3\"\n\t       \t},\n\t\"depth\" : {\n\t\t\"description\" : \"Depth of the testcard to create\",\n\t\t\"type\" : \"enumeration\",\n\t\t\"options\" : [ \"8\", \"16\", \"24\" ],\n\t\t\"default\" : \"8\",\n\t\t\"displayName\": \"Depth\",\n\t\t\"mandatory\": \"true\",\n\t\t\"order\" : \"4\"\n\t\t}\n\t});\n\t\t  \n/**\n * The Flir plugin interface\n */\nextern \"C\" {\n\n/**\n * The plugin information structure\n */\nstatic 
PLUGIN_INFORMATION info = {\n\tPLUGIN_NAME,              // Name\n\tVERSION,                  // Version\n\tSP_CONTROL,  \t  \t  // Flags\n\tPLUGIN_TYPE_SOUTH,        // Type\n\t\"1.0.0\",                  // Interface version\n\tdefault_config            // Default configuration\n};\n\n/**\n * Return the information about this plugin\n */\nPLUGIN_INFORMATION *plugin_info()\n{\n\treturn &info;\n}\n\n/**\n * Initialise the plugin, called to get the plugin handle\n */\nPLUGIN_HANDLE plugin_init(ConfigCategory *config)\n{\nConfigCategory *newconfig;\n\n\tnewconfig = new ConfigCategory(*config);\n\treturn (PLUGIN_HANDLE)newconfig;\n}\n\n/**\n * Start the Async handling for the plugin\n */\nvoid plugin_start(PLUGIN_HANDLE *handle)\n{\nConfigCategory *conf = (ConfigCategory *)handle;\n\n\tif (!handle)\n\t\treturn;\n}\n\n\n/**\n * Poll for a plugin reading\n */\nReading plugin_poll(PLUGIN_HANDLE *handle)\n{\nConfigCategory *conf = (ConfigCategory *)handle;\n\n\tstring d = conf->getValue(\"depth\");\n\tint depth = strtol(d.c_str(), NULL, 10);\n\n\tstring imageHeightStr = conf->getValue(\"imageHeight\");\n\tint imageHeight = strtol(imageHeightStr.c_str(), NULL, 10);\n\n\tstring imageWidthStr = conf->getValue(\"imageWidth\");\n\tint imageWidth = strtol(imageWidthStr.c_str(), NULL, 10);\n\tconst int maxIntensity8Bit = 256;\n\tconst int maxIntensity16Bit = 65536;\n\n\tswitch (depth)\n\t{\n\t\tcase 8:\n\t\t\t{\n\t\t\t\tvoid *data = malloc(imageHeight * imageWidth);\n\t\t\t\tuint8_t *ptr = (uint8_t *)data;\n\t\t\t\tfloat reductionFactor = (float) maxIntensity8Bit / imageHeight;\n\n\t\t\t\tfor (int i = 0; i < imageHeight; i++)\n\t\t\t\t{\n\t\t\t\t\tfor (int j = 0; j < imageWidth; j++)\n\t\t\t\t\t{\n\t\t\t\t\t\t*ptr++ = round(i * reductionFactor);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tDPImage *image = new DPImage(imageWidth, imageHeight, 8, data);\n\t\t\t\tfree(data);\n\t\t\t\tDatapointValue img(image);\n\t\t\t\treturn Reading(conf->getValue(\"asset\"), new Datapoint(\"testcard\", 
img));\n\t\t\t}\n\t\tcase 16:\n\t\t\t{\n\t\t\t\tvoid *data = malloc(imageHeight * imageWidth * 2);\n\t\t\t\tuint16_t *ptr = (uint16_t *)data;\n\t\t\t\tfloat reductionFactor = (float) maxIntensity16Bit / (imageHeight * imageHeight);\n\n\t\t\t\tfor (int i = 0; i < imageHeight; i++)\n\t\t\t\t{\n\t\t\t\t\tfor (int j = 0; j < imageWidth; j++)\n\t\t\t\t\t{\n\t\t\t\t\t\t*ptr++ = round(i*i  * reductionFactor);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tDPImage *image = new DPImage(imageWidth, imageHeight, 16, data);\n\t\t\t\tfree(data);\n\t\t\t\tDatapointValue img(image);\n\t\t\t\treturn Reading(conf->getValue(\"asset\"), new Datapoint(\"testcard\", img));\n\t\t\t}\n\t\tcase 24:\n\t\t\t{\n\t\t\t\tvoid *data = malloc(imageHeight * imageWidth * 3);\n\t\t\t\tuint8_t *ptr = (uint8_t *)data;\n\t\t\t\tint rowLimitFirstHalf, rowLimitSecondHalf;\n\t\t\t\t// We divide the image into 2 equal parts. \n\t\t\t\t// In the first half we display four component namely:\n\t\t\t\t// 1. A red line 2. A Green Line 3. A Blue Line 4. 
A White Line\n\t\t\t\t// In the second half we create a random pattern of RGB colours.\n\n\t\t\t\trowLimitFirstHalf = imageHeight / 8;\n\t\t\t\trowLimitSecondHalf = imageHeight / 2;\n\t\t\t\tuint8_t stepSize = 0;\n\n\t\t\t\tfloat reductionFactorFirstHalf = (float) maxIntensity8Bit / (rowLimitFirstHalf * 8);\n\t\t\t\tfloat reductionFactorSecondHalf = (float) maxIntensity8Bit / (rowLimitSecondHalf * 2);\n\t\t\t\t\n\t\t\t\t// This will create a red horizontal line.\n\t\t\t\tfor (int i = 0; i < rowLimitFirstHalf; i++)\n\t\t\t\t{\n\t\t\t\t\tfor (int j = 0; j < imageWidth; j++)\n\t\t\t\t\t{\n\t\t\t\t\t\t*ptr++ =  round(i * 8 * reductionFactorFirstHalf); // R\n\t\t\t\t\t\t*ptr++ = 0;\t// G\n\t\t\t\t\t\t*ptr++ = 0;\t// B\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t// This will create a green horizontal line.\n\t\t\t\tfor (int i = 0; i < rowLimitFirstHalf; i++)\n\t\t\t\t{\n\t\t\t\t\tfor (int j = 0; j < imageWidth; j++)\n\t\t\t\t\t{\n\t\t\t\t\t\t*ptr++ = 0;\t// R\n\t\t\t\t\t\t*ptr++ = round(i * 8 * reductionFactorFirstHalf);\t// G\n\t\t\t\t\t\t*ptr++ = 0;\t// B\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t// This will create a blue horizontal line.\n\t\t\t\tfor (int i = 0; i < rowLimitFirstHalf; i++)\n\t\t\t\t{\n\t\t\t\t\tfor (int j = 0; j < imageWidth; j++)\n\t\t\t\t\t{\n\t\t\t\t\t\t*ptr++ = 0;\t// R\n\t\t\t\t\t\t*ptr++ = 0;\t// G\n\t\t\t\t\t\t*ptr++ = round(i * 8 * reductionFactorFirstHalf); // B\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t// This will create a white horizontal line.\n\t\t\t\tfor (int i = 0; i < rowLimitFirstHalf; i++)\n\t\t\t\t{\n\t\t\t\t\tfor (int j = 0; j < imageWidth; j++)\n\t\t\t\t\t{\n\t\t\t\t\t\t*ptr++ = round(i * 8 * reductionFactorFirstHalf); // R\n\t\t\t\t\t\t*ptr++ = round(i * 8 * reductionFactorFirstHalf); // G\n\t\t\t\t\t\t*ptr++ = round(i * 8 * reductionFactorFirstHalf); // B\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t// We are in second half now.\n\t\t\t\t// This will create a colorful pattern in second half.  
\n\t\t\t\tfor (int i = 0; i < rowLimitSecondHalf; i++)\n\t\t\t\t{\n\t\t\t\t\tfor (int j = 0; j < imageWidth; j++)\n\t\t\t\t\t{\n\t\t\t\t\t\t*ptr++ = round(i * 4  * reductionFactorSecondHalf);\t// R\n\t\t\t\t\t\t*ptr++ = round((255 - (i * 4)) * reductionFactorSecondHalf);\t// G\n\t\t\t\t\t\t*ptr++ = j;\t// B\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tDPImage *image = new DPImage(imageWidth, imageHeight, 24, data);\n\t\t\t\tfree(data);\n\t\t\t\tDatapointValue img(image);\n\t\t\t\treturn Reading(conf->getValue(\"asset\"), new Datapoint(\"testcard\", img));\n\t\t\t}\n\t\tdefault:\n\t\t\tLogger::getLogger()->error(\"Unsupported depth %d\", depth);\n\t}\n}\n\n/**\n * Reconfigure the plugin\n */\nvoid plugin_reconfigure(PLUGIN_HANDLE *handle, string& newConfig)\n{\nConfigCategory\t*config = new ConfigCategory(\"testcard\", newConfig);\n\n\t*handle = config;\n}\n\n/**\n * Shutdown the plugin\n */\nvoid plugin_shutdown(PLUGIN_HANDLE *handle)\n{\n}\n\n/**\n * Control entry point for a write operation.\n *\n * No write operations are currently supported by the camera\n */\nbool plugin_write(PLUGIN_HANDLE *handle, string& name, string& value)\n{\n\n\treturn false;\n}\n\n/**\n * Control operation entry point. Currently only one operation\n * is supported by the camera, the trigger operation.\n */\nbool plugin_operation(PLUGIN_HANDLE *handle, string& operation, int count, PLUGIN_PARAMETER **params)\n{\n\treturn false;\n}\n};\n\n\n"
  },
  {
    "path": "tests/system/plugins/south/fledge-south-testcard/requirements.sh",
    "content": "#!/bin/sh\n\nsudo apt install -y libxml2-dev\n"
  },
  {
    "path": "tests/system/python/README.rst",
    "content": "\n.. |pytest| raw:: html\n\n   <a href=\"https://docs.pytest.org/en/latest/#\" target=\"_blank\">pytest</a>\n\n.. |installed| raw:: html\n\n   <a href=\"https://github.com/fledge-iot/fledge#build-prerequisites\" target=\"_blank\">installed</a>\n\n.. |build| raw:: html\n\n   <a href=\"https://github.com/fledge-iot/fledge#build\" target=\"_blank\">build</a>\n\n.. |set| raw:: html\n\n   <a href=\"https://github.com/fledge-iot/fledge#testing-fledge-from-your-development-environment\" target=\"_blank\">set</a>\n\n.. |kafka-build| raw:: html\n\n   <a href=\"https://github.com/fledge-iot/fledge-north-kafka#build\" target=\"_blank\">kafka-build</a>\n\n.. |confluent| raw:: html\n\n   <a href=\"https://www.confluent.io/download/\" target=\"_blank\">confluent</a>\n\n.. |Confluent CLI| raw:: html\n\n   <a href=\"https://docs.confluent.io/current/cli/command-reference/index.html\" target=\"_blank\">Confluent CLI</a>\n\n.. |REST Proxy| raw:: html\n\n   <a href=\"https://docs.confluent.io/current/kafka-rest/docs/quickstart.html\" target=\"_blank\">REST Proxy QuickStart</a>\n\n.. =============================================\n\n*******************************************\nFledge System Tests using pytest framework\n*******************************************\n\nSystem tests are the third category of test in Fledge. These test ensures that end to end flow of a Fledge system is\nworking as expected.\n\nA typical example can be ingesting asset data in Fledge database, and sending to a cloud system with different set of\nconfiguration rules.\n\nSince these kinds of tests interacts between two or more heterogeneous systems, these are often slow in nature.\n\nFledge uses python |pytest| framework to execute the system tests. 
To contribute to system test, a developer should\nbe comfortable in writing tests in pytest.\n\nRunning Fledge System tests\n============================\n\nTest Prerequisites\n------------------\n\nInstall the following prerequisites to run a System test ::\n\n   python3 -m pip install pytest\n\nAlso, Fledge must have:\n\n   1. All dependencies |installed|\n   2. |build|\n   3. and Fledge_ROOT must be |set|\n\n\nTest Execution\n--------------\n\nSome tests, like ``test_e2e_coap_PI.py`` , requires some information to be provided\nfor example the PI-Server or the OCS account that should be used. This information can be passed though command\nlike during test execution. For e.g., ::\n\n    /Fledge/tests/system/python/e2e/ $ pytest test_e2e_coap_PI.py\n    --pi-db=<PI DB name>\n    --pi-host=<Hostname/IP of PI Server>\n    --pi-admin=<Login of PI Machine>\n    --pi-passwd=<Password of PI Machine>\n    --pi-token=\"<PI Producer token>\"\n\nThese command line arguments and their help can be seen typing ``pytest --help`` from console, refer section\ncustom options ::\n\n    $ pytest --help\n    ...\n    custom options:\n    --storage-plugin=STORAGE_PLUGIN\n                        Database plugin to use for tests\n    --south-branch=SOUTH_BRANCH\n                        south branch name\n    --north-branch=NORTH_BRANCH\n                        north branch name\n    --fledge-url=FLEDGE_URL\n                        Fledge client api url\n    --use-pip-cache=USE_PIP_CACHE\n                        use pip cache is requirement is available\n\n    --pi-host=PI_HOST\n                        PI Server Host Name/IP\n    --pi-port=PI_PORT\n                        PI Server Port\n    --pi-db=PI_DB\n                        PI Server database\n    --pi-admin=PI_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_PASSWD\n                        PI Server user login password\n    --pi-token=PI_TOKEN\n                        OMF Producer Token\n\n    
--ocs-tenant=OCS_TENANT\n                        Tenant id of OCS\n    --ocs-client-id=OCS_CLIENT_ID\n                        Client id of OCS account\n    --ocs-client-secret=OCS_CLIENT_SECRET\n                        Client Secret of OCS account\n    --ocs-namespace=OCS_NAMESPACE\n                        OCS namespace where the information are stored\n    --ocs-token=OCS_TOKEN\n                        Token of OCS account\n\n\n    --south-service-name=SOUTH_SERVICE_NAME\n                        Name of the South Service\n    --asset-name=ASSET_NAME\n                        Name of asset\n\n    --wait-time=WAIT_TIME\n                        Generic wait time between processes to run\n    --retries=RETRIES\n                        Number of tries for polling\n\n    --kafka-host=KAFKA_HOST\n                        Kafka Server Host Name/IP\n    --kafka-port=KAFKA_PORT\n                        Kafka Server Port\n    --kafka-topic=KAFKA_TOPIC\n                        Kafka topic\n    --kafka-rest-port=KAFKA_REST_PORT\n                        Kafka REST Proxy Port\n\nUsing different storage engine\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nBy default system tests runs with sqlite database. 
If you wish, you can use the postgres storage plugin, and the tests will be\nexecuted using the Postgres database and storage engine::\n\n    $ pytest test_smoke.py --storage-plugin=postgres\n\nTest test_e2e_coap_PI and test_e2e_csv_PI\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe minimum required parameters to run these tests are ::\n\n    --pi-db=<PI DB name>\n    --pi-host=<Hostname/IP of PI Server>\n    --pi-port=<PI Server Port>\n    --pi-admin=<Login of PI Machine>\n    --pi-passwd=<Password of PI Machine>\n    --pi-token=\"<PI Producer token>\"\n\n\nTest test_e2e_coap_OCS\n~~~~~~~~~~~~~~~~~~~~~~\n\nThe minimum required parameters to run this test are ::\n\n    --ocs-tenant=<Tenant id of OCS>\n    --ocs-client-id=<Client id of OCS account>\n    --ocs-client-secret=<Client Secret of OCS account>\n    --ocs-namespace=<OCS namespace where the information is stored>\n    --ocs-token=<Token of OCS account>\n\n\nTest test_e2e_kafka\n~~~~~~~~~~~~~~~~~~~\n\nPrerequisite\n++++++++++++\n\nInstall the following prerequisites to run the test:\n\n  1. Kafka is built from |kafka-build|\n  2. Download Confluent Community Edition from |confluent|. You can use the |Confluent CLI| installation methods to quickly get a single-node Confluent Platform development environment up and running. Start by running the |REST Proxy| and the services it depends on: ZooKeeper and Kafka.\n\n  Below are the minimal services required for the test ::\n\n    $ /opt/confluent-5.1.0/bin/confluent start zookeeper\n    $ /opt/confluent-5.1.0/bin/confluent start kafka\n    $ /opt/confluent-5.1.0/bin/confluent start kafka-rest\n\n  NOTE: By default the listen ports are 2181, 9092 and 8082; any of these may conflict with your environment setup. 
You may change the port properties in ::\n\n          /opt/confluent-5.1.0/etc\n          kafka/server.properties\n          kafka/zookeeper.properties\n          kafka-rest/kafka-rest.properties\n\nThe minimum required parameters to run this test are ::\n\n    --kafka-host=<Hostname/IP of Kafka Server>\n    --kafka-port=<Kafka Server Port>\n    --kafka-topic=<Kafka topic>\n    --kafka-rest-port=<Kafka REST Proxy Port>\n\nTest test_e2e_fledge_pair.py\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe minimum required parameters to run this test are ::\n\n    --remote-user=<Username of remote machine>\n    --remote-ip=<IP of remote machine>\n    --key-path=<Absolute path of key used for authentication>\n    --remote-fledge-path=<Absolute path on remote machine where Fledge is cloned>\n    --pi-db=<PI DB name>\n    --pi-host=<Hostname/IP of PI Server>\n    --pi-port=<PI Server Port>\n    --pi-admin=<Login of PI Machine>\n    --pi-passwd=<Password of PI Machine>\n    --pi-token=\"<PI Producer token>\"\n\nTest test_north_pi_webapi_nw_throttle.py\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nAn example invocation ::\n\n    python3 -m pytest -s -v test_north_pi_webapi_nw_throttle.py --pi-db=<db_name> \\\n    --pi-host=<host_ip> --pi-port=<port> --pi-admin=<user> \\\n    --pi-passwd=<password> \\\n    --throttled-network-config='{\"rate_limit\": 100, \"packet_delay\": 25, \"interface\": \"eth0\"}' \\\n    --south-service-wait-time=20 --north-catch-up-time=180\n\n\nExecute all the System tests\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIt is possible to execute all the system tests in one go by navigating to the python system test directory\nand running pytest without the test name::\n\n    /Fledge/tests/system/python $ pytest\n    --pi-db=<PI DB name>\n    --pi-host=<Hostname/IP of PI Server>\n    --pi-admin=<Login of PI Machine>\n    --pi-passwd=<Password of PI Machine>\n    --pi-token=<PI Producer token>\n\n    --ocs-tenant=<Tenant id of OCS>\n    --ocs-client-id=<Client id of OCS account>\n    --ocs-client-secret=<Client Secret of OCS account>\n  
  --ocs-namespace=<OCS namespace where the information is stored>\n    --ocs-token=<Token of OCS account>\n\n    --kafka-host=<Hostname/IP of Kafka Server>\n    --kafka-port=<Kafka Server Port>\n    --kafka-topic=<Kafka topic>\n    --kafka-rest-port=<Kafka REST Proxy Port>\n\n    --remote-user=REMOTE_USER\n                        Username on remote machine where Fledge will run\n    --remote-ip=REMOTE_IP\n                        IP of remote machine where Fledge will run\n    --key-path=KEY_PATH   Path of key file used for authentication to remote\n                        machine\n    --remote-fledge-path=REMOTE_FLEDGE_PATH\n                        Path on the remote machine where Fledge is cloned and\n                        built\n\n\nConsole output\n++++++++++++++\n\nThe console displays the docstring of the test, which tells the user what test is running and what the assertion points are. For example ::\n\n    $ pytest test_smoke.py\n    ================= test session starts =================\n    platform linux -- Python 3.5.3+, pytest-3.6.0, py-1.6.0, pluggy-0.6.0\n    rootdir: /Fledge/tests/system/python, inifile: pytest.ini\n    plugins:\n    collected 1 item\n\n    Test system/python/smoke/test_smoke.py Test that data is inserted in Fledge\n        start_south_coap: Fixture that starts Fledge with south coap plugin\n        Assertions:\n            on endpoint GET /fledge/asset\n            on endpoint GET /fledge/asset/<asset_name>\n\n\nRunning tests on Raspberry Pi\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe system tests can also be executed on a Raspberry Pi (Raspbian OS). The test prerequisites remain the same as above.\nThe only difference is that you run the tests using ``python3 -m pytest`` instead of ``pytest``.\n"
  },
  {
    "path": "tests/system/python/api/control_service/test_entrypoint.py",
    "content": "import http.client\nimport json\nimport pytest\nfrom urllib.parse import quote\nfrom collections import OrderedDict\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2023 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n\"\"\" Control Flow Entrypoint API tests \"\"\"\n\nEP_1 = \"EP #1\"\nEP_2 = \"EP-1\"\nEP_3 = \"EP #_2\"\npayload1 = {\"name\": EP_1, \"type\": \"write\", \"description\": \"Entry Point 1\", \"operation_name\": \"\",\n            \"destination\": \"broadcast\", \"constants\": {\"c1\": \"100\"}, \"variables\": {\"v1\": \"100\"}, \"anonymous\": False,\n            \"allow\": []}\npayload2 = {\"name\": EP_2, \"type\": \"operation\", \"description\": \"Operation 1\", \"operation_name\": \"distance\",\n            \"destination\": \"broadcast\", \"anonymous\": False, \"allow\": []}\npayload3 = {\"name\": EP_3, \"type\": \"operation\", \"description\": \"Operation 2\", \"operation_name\": \"distance\",\n            \"destination\": \"broadcast\", \"constants\": {\"c1\": \"100\"}, \"variables\": {\"v1\": \"1200\"}, \"anonymous\": True,\n            \"allow\": [\"admin\", \"user\"]}\n\n# TODO: add more tests\n\"\"\"\n    a) authentication based\n    b) update request by installing external service\n\"\"\"\n\n\nclass TestEntrypoint:\n    def test_empty_get_all(self, fledge_url, reset_and_start_fledge):\n        jdoc = self._get_all(fledge_url)\n        assert [] == jdoc\n\n    @pytest.mark.parametrize(\"payload\", [payload1, payload2, payload3])\n    def test_create(self, fledge_url, payload):\n        ep_name = payload['name']\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request('POST', '/fledge/control/manage', body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"Failed to create {} entrypoint!\".format(ep_name)\n        assert 'message' 
in jdoc\n        assert '{} control entrypoint has been created successfully.'.format(ep_name) == jdoc['message']\n        self.verify_details(conn, payload)\n        self.verify_audit_details(conn, ep_name, 'CTEAD')\n\n    def test_get_all(self, fledge_url):\n        jdoc = self._get_all(fledge_url)\n        assert 3 == len(jdoc)\n        assert ['name', 'description', 'permitted'] == list(jdoc[0].keys())\n\n    def test_get_by_name(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/control/manage/{}'.format(quote(EP_1)))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"{} entrypoint not found!\".format(EP_1)\n        assert payload1 == jdoc\n        assert 'permitted' in jdoc\n\n    @pytest.mark.parametrize(\"name, payload, old_info\", [\n        (EP_1, {\"anonymous\": True}, {\"anonymous\": False}),\n        (EP_2, {\"description\": \"Updated\", \"type\": \"operation\", \"operation_name\": \"focus\", \"allow\": [\"user\"]},\n         {\"description\": \"Operation 1\", \"type\": \"operation\", \"operation_name\": \"distance\", \"allow\": []}),\n        (EP_3, {\"constants\": {\"c1\": \"123\", \"c2\": \"100\"}, \"variables\": {\"v1\": \"900\"}},\n         {\"constants\": {\"c1\": \"100\"}, \"variables\": {\"v1\": \"1200\"}})\n    ])\n    def test_update(self, fledge_url, name, payload, old_info):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request('PUT', '/fledge/control/manage/{}'.format(quote(name)), body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'message' in jdoc\n        assert '{} control entrypoint has been updated successfully.'.format(name) == jdoc['message']\n\n        source = 'CTECH'\n        conn.request(\"GET\", 
'/fledge/audit?source={}'.format(source))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'audit' in jdoc\n        assert len(jdoc['audit'])\n        audit = jdoc['audit'][0]\n        assert 'INFORMATION' == audit['severity']\n        assert source == audit['source']\n        assert 'details' in audit\n        assert 'entrypoint' in audit['details']\n        assert 'old_entrypoint' in audit['details']\n        audit_old = audit['details']['old_entrypoint']\n        audit_new = audit['details']['entrypoint']\n        assert name == audit_new['name']\n        assert name == audit_old['name']\n\n        conn.request(\"GET\", '/fledge/control/manage/{}'.format(quote(name)))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"{} entrypoint not found!\".format(name)\n        assert name == jdoc['name']\n        if name == EP_1:\n            assert old_info['anonymous'] == audit_old['anonymous']\n            assert payload['anonymous'] == audit_new['anonymous']\n            assert payload['anonymous'] == jdoc['anonymous']\n        elif name == EP_2:\n            assert old_info['description'] == audit_old['description']\n            assert payload['description'] == audit_new['description']\n            assert old_info['type'] == audit_old['type']\n            assert payload['type'] == audit_new['type']\n            assert old_info['operation_name'] == audit_old['operation_name']\n            assert payload['operation_name'] == audit_new['operation_name']\n            assert old_info['allow'] == audit_old['allow']\n            assert payload['allow'] == audit_new['allow']\n            assert payload['description'] == jdoc['description']\n            assert payload['type'] == jdoc['type']\n            assert payload['operation_name'] == jdoc['operation_name']\n            assert 
payload['allow'] == jdoc['allow']\n        elif name == EP_3:\n            assert old_info['constants']['c1'] == audit_old['constants']['c1']\n            assert 'c2' not in audit_old['constants']\n            assert payload['constants']['c1'] == audit_new['constants']['c1']\n            assert payload['constants']['c2'] == audit_new['constants']['c2']\n            assert old_info['variables']['v1'] == audit_old['variables']['v1']\n            assert payload['variables']['v1'] == audit_new['variables']['v1']\n            assert payload['constants']['c1'] == jdoc['constants']['c1']\n            assert payload['constants']['c2'] == jdoc['constants']['c2']\n            assert payload['variables']['v1'] == jdoc['variables']['v1']\n        else:\n            # Add more scenarios\n            pass\n\n    @pytest.mark.parametrize(\"name, count\", [(EP_1, 2), (EP_2, 1), (EP_3, 0)])\n    def test_delete(self, fledge_url, name, count):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/control/manage/{}'.format(quote(name)))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"Failed to delete {} entrypoint!\".format(name)\n        assert 'message' in jdoc\n        assert '{} control entrypoint has been deleted successfully.'.format(name) == jdoc['message']\n        self.verify_audit_details(conn, name, 'CTEDL')\n        jdoc = self._get_all(fledge_url)\n        assert count == len(jdoc)\n\n    def verify_audit_details(self, conn, ep_name, source):\n        conn.request(\"GET\", '/fledge/audit?source={}'.format(source))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No audit record entry found!\"\n        assert 'audit' in jdoc\n        assert ep_name == jdoc['audit'][0]['details']['name']\n        assert 'INFORMATION' == 
jdoc['audit'][0]['severity']\n        assert source == jdoc['audit'][0]['source']\n\n    def verify_details(self, conn, data):\n        name = data['name']\n        conn.request(\"GET\", '/fledge/control/manage/{}'.format(quote(name)))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"{} entrypoint not found!\".format(name)\n        data['permitted'] = True\n        if 'constants' not in data:\n            data['constants'] = {}\n        if 'variables' not in data:\n            data['variables'] = {}\n\n        d1 = OrderedDict(sorted(data.items()))\n        d2 = OrderedDict(sorted(jdoc.items()))\n        assert d1 == d2\n\n    def _get_all(self, url):\n        conn = http.client.HTTPConnection(url)\n        conn.request(\"GET\", '/fledge/control/manage')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No entrypoint found!\"\n        assert 'controls' in jdoc\n        return jdoc['controls']\n"
  },
  {
    "path": "tests/system/python/api/control_service/test_pipeline.py",
    "content": "import http.client\nimport json\nimport time\n\nimport pytest\n\nimport plugin_and_service\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2024 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nCP_1 = \"CP #1\"\nCP_2 = \"CP-mine\"\nCP_3 = \"CP@!23\"\nCP_4 = \"1234\"\n\nSINUSOID_SVC_NAME = \"Sine #1\"\nOMF_SVC_NAME = \"OMF #1\"\nRENAME_FILTER_NAME = \"DP_R1\"\nMETA_FILTER_NAME = \"Mi @com\"\n\npayload1 = {\"name\": CP_1, \"enabled\": True, \"execution\": \"shared\", \"source\": {\"type\": 1},\n            \"destination\": {\"type\": 1}}\npayload2 = {\"name\": CP_2, \"enabled\": False, \"execution\": \"shared\", \"source\": {\"type\": 3},\n            \"destination\": {\"type\": 5}}\npayload3 = {\"execution\": \"Shared\", \"source\": {\"type\": 2, \"name\": OMF_SVC_NAME},\n            \"destination\": {\"type\": 5, \"name\": \"\"}, \"filters\": [RENAME_FILTER_NAME], \"enabled\": True, \"name\": CP_3}\npayload4 = {\"execution\": \"Shared\", \"source\": {\"type\": 2, \"name\": OMF_SVC_NAME},\n            \"destination\": {\"type\": 3, \"name\": \"sinusoid\"}, \"filters\": [META_FILTER_NAME, RENAME_FILTER_NAME],\n            \"enabled\": True, \"name\": CP_4}\n\n\ndef verify_audit_details(conn, name, source):\n    conn.request(\"GET\", '/fledge/audit?source={}'.format(source))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert len(jdoc), \"No audit record entry found!\"\n    assert 'audit' in jdoc\n    if source.endswith(\"CH\"):\n        assert 'pipeline' in jdoc['audit'][0]['details']\n        assert name == jdoc['audit'][0]['details']['pipeline']['name']\n        assert 'old_pipeline' in jdoc['audit'][0]['details']\n        assert name == jdoc['audit'][0]['details']['old_pipeline']['name']\n    else:\n        assert name == jdoc['audit'][0]['details']['name']\n    assert 'INFORMATION' == jdoc['audit'][0]['severity']\n    
assert source == jdoc['audit'][0]['source']\n\n\ndef verify_details(conn, data, cpid):\n    conn.request(\"GET\", '/fledge/control/pipeline/{}'.format(cpid))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc_pipeline = json.loads(r)\n    assert len(jdoc_pipeline), \"Failed to fetch details of pipeline having ID: {}\".format(cpid)\n\n    if jdoc_pipeline['name'] == CP_1:\n        assert 1 == jdoc_pipeline['id']\n        assert data['name'] == jdoc_pipeline['name']\n        assert \"Any\" == jdoc_pipeline['source']['type']\n        assert \"\" == jdoc_pipeline['source']['name']\n        assert \"Any\" == jdoc_pipeline['destination']['type']\n        assert \"\" == jdoc_pipeline['destination']['name']\n        assert data['enabled'] == jdoc_pipeline['enabled']\n        assert data['execution'] == jdoc_pipeline['execution']\n        assert [] == jdoc_pipeline['filters']\n    elif jdoc_pipeline['name'] == CP_2:\n        assert 2 == jdoc_pipeline['id']\n        assert data['name'] == jdoc_pipeline['name']\n        assert \"API\" == jdoc_pipeline['source']['type']\n        assert \"anonymous\" == jdoc_pipeline['source']['name']\n        assert \"Broadcast\" == jdoc_pipeline['destination']['type']\n        assert \"\" == jdoc_pipeline['destination']['name']\n        assert data['enabled'] == jdoc_pipeline['enabled']\n        assert data['execution'] == jdoc_pipeline['execution']\n        assert [] == jdoc_pipeline['filters']\n    elif jdoc_pipeline['name'] == CP_3:\n        assert 1 == jdoc_pipeline['id']\n        assert data['name'] == jdoc_pipeline['name']\n        assert \"Service\" == jdoc_pipeline['source']['type']\n        assert OMF_SVC_NAME == jdoc_pipeline['source']['name']\n        assert \"Broadcast\" == jdoc_pipeline['destination']['type']\n        assert \"\" == jdoc_pipeline['destination']['name']\n        assert data['enabled'] == jdoc_pipeline['enabled']\n        assert data['execution'] == 
jdoc_pipeline['execution']\n        assert data['filters'] == jdoc_pipeline['filters']\n    elif jdoc_pipeline['name'] == CP_4:\n        assert 2 == jdoc_pipeline['id']\n        assert data['name'] == jdoc_pipeline['name']\n        assert \"Service\" == jdoc_pipeline['source']['type']\n        assert OMF_SVC_NAME == jdoc_pipeline['source']['name']\n        assert \"Asset\" == jdoc_pipeline['destination']['type']\n        assert \"sinusoid\" == jdoc_pipeline['destination']['name']\n        assert data['enabled'] == jdoc_pipeline['enabled']\n        assert data['execution'] == jdoc_pipeline['execution']\n        assert data['filters'] == jdoc_pipeline['filters']\n\n\ndef get_all(url):\n    conn = http.client.HTTPConnection(url)\n    conn.request(\"GET\", '/fledge/control/pipeline')\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert len(jdoc), \"No pipelines found!\"\n    assert 'pipelines' in jdoc\n    return jdoc['pipelines']\n\n\nclass TestPipeline:\n    \"\"\" Control Pipeline API tests \"\"\"\n    def test_empty_get_all(self, fledge_url, reset_and_start_fledge):\n        pipelines = get_all(fledge_url)\n        assert [] == pipelines\n\n    @pytest.mark.parametrize(\"payload\", [payload1, payload2])\n    def test_create(self, fledge_url, payload):\n        name = payload['name']\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request('POST', '/fledge/control/pipeline', body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"Failed to create {} pipeline!\".format(name)\n        verify_details(conn, payload, jdoc['id'])\n        verify_audit_details(conn, name, 'CTPAD')\n\n    def test_get_all(self, fledge_url):\n        pipelines = get_all(fledge_url)\n        assert 2 == len(pipelines)\n\n    def test_get_pipeline(self, fledge_url):\n        conn = 
http.client.HTTPConnection(fledge_url)\n        verify_details(conn, payload1, 1)\n\n    @pytest.mark.parametrize(\"cpid, name, data, payload\", [\n        (1, CP_1, payload1, {\"enabled\": False}),\n        (2, CP_2, payload2, {\"execution\": \"exclusive\", \"enabled\": True})\n    ])\n    def test_update(self, fledge_url, cpid, name, data, payload):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request('PUT', '/fledge/control/pipeline/{}'.format(cpid), body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'message' in jdoc\n        assert 'Control Pipeline with ID:<{}> has been updated successfully.'.format(cpid) == jdoc['message']\n        modified_data = data.copy()\n        for k, v in payload.items():\n            modified_data[k] = v\n        verify_details(conn, modified_data, cpid)\n        verify_audit_details(conn, name, 'CTPCH')\n\n    @pytest.mark.parametrize(\"cpid, name, count\", [(1, CP_1, 1), (2, CP_2, 0)])\n    def test_delete(self, fledge_url, cpid, name, count):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/control/pipeline/{}'.format(cpid))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"Pipeline with ID:<{}> not found!\".format(cpid)\n        assert 'message' in jdoc\n        assert 'Control Pipeline with ID:<{}> has been deleted successfully.'.format(cpid) == jdoc['message']\n        verify_audit_details(conn, name, 'CTPDL')\n        jdoc = get_all(fledge_url)\n        assert count == len(jdoc)\n\n\ndef filter_categories(info):\n    modified_filters = info.copy()\n    for k, v in info.items():\n        if k == \"filters\":\n            filters_with_prefix = [\"ctrl_{}_{}\".format(info['name'], f) for f in v]\n            modified_filters[k] = 
filters_with_prefix\n    return modified_filters\n\n\nclass TestPipelineFilters:\n    \"\"\" Control Pipeline Filters API tests \"\"\"\n    @pytest.fixture\n    def reset_plugins(self):\n        plugin_and_service.reset()\n\n    def test_setup(self, fledge_url, reset_plugins, reset_and_start_fledge):\n        # Install Plugins\n        plugin_and_service.install(\"south\", plugin='sinusoid', plugin_lang='python')\n        plugin_and_service.install(\"filter\", plugin='metadata', plugin_lang='C')\n        plugin_and_service.install(\"filter\", plugin='rename', plugin_lang='python')\n        # Add south service\n        plugin_and_service.add_south_service(\"sinusoid\", fledge_url, SINUSOID_SVC_NAME, None, True)\n        # Sleep is required to get asset readings\n        time.sleep(20)\n        # Add north service\n        plugin_and_service.add_north_service(\"OMF\", fledge_url, OMF_SVC_NAME, None, True)\n        # Create filters\n        plugin_and_service.create_filter(fledge_url, RENAME_FILTER_NAME, \"rename\", config=None)\n        plugin_and_service.create_filter(fledge_url, META_FILTER_NAME, \"metadata\", config=None)\n\n    @pytest.mark.parametrize(\"payload\", [payload3, payload4])\n    def test_create(self, fledge_url, payload):\n        name = payload['name']\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request('POST', '/fledge/control/pipeline', body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"Failed to create {} pipeline!\".format(name)\n        modified_data = filter_categories(payload)\n        verify_details(conn, modified_data, jdoc['id'])\n        verify_audit_details(conn, name, 'CTPAD')\n\n    def test_get_pipeline(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        modified_data = filter_categories(payload4)\n        verify_details(conn, modified_data, 2)\n\n    
def test_update_insert_case(self, fledge_url, cpid=1, name=CP_3):\n        payload = {\"filters\": [RENAME_FILTER_NAME, META_FILTER_NAME]}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request('PUT', '/fledge/control/pipeline/{}'.format(cpid), body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'message' in jdoc\n        assert 'Control Pipeline with ID:<{}> has been updated successfully.'.format(cpid) == jdoc['message']\n        new_filters = payload3.copy()\n        new_filters['filters'] = payload['filters']\n        modified_data = filter_categories(new_filters)\n        verify_details(conn, modified_data, cpid)\n        verify_audit_details(conn, name, 'CTPCH')\n\n    def test_update_ordering(self, fledge_url, cpid=2, name=CP_4):\n        payload = {\"filters\": [RENAME_FILTER_NAME, META_FILTER_NAME]}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request('PUT', '/fledge/control/pipeline/{}'.format(cpid), body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'message' in jdoc\n        assert 'Control Pipeline with ID:<{}> has been updated successfully.'.format(cpid) == jdoc['message']\n        new_filters = payload4.copy()\n        new_filters['filters'] = payload['filters']\n        modified_data = filter_categories(new_filters)\n        verify_details(conn, modified_data, cpid)\n        verify_audit_details(conn, name, 'CTPCH')\n\n    def test_update_remove_case(self, fledge_url, cpid=2, name=CP_4):\n        payload = {\"filters\": []}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request('PUT', '/fledge/control/pipeline/{}'.format(cpid), body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        
jdoc = json.loads(r)\n        assert 'message' in jdoc\n        assert 'Control Pipeline with ID:<{}> has been updated successfully.'.format(cpid) == jdoc['message']\n        new_filters = payload4.copy()\n        new_filters['filters'] = payload['filters']\n        modified_data = filter_categories(new_filters)\n        verify_details(conn, modified_data, cpid)\n        verify_audit_details(conn, name, 'CTPCH')\n\n    def test_delete(self, fledge_url, cpid=2, name=CP_4, count=1):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/control/pipeline/{}'.format(cpid))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"Pipeline with ID:<{}> not found!\".format(cpid)\n        assert 'message' in jdoc\n        assert 'Control Pipeline with ID:<{}> has been deleted successfully.'.format(cpid) == jdoc['message']\n        verify_audit_details(conn, name, 'CTPDL')\n        jdoc = get_all(fledge_url)\n        assert count == len(jdoc)\n\n"
  },
  {
    "path": "tests/system/python/api/test_alerts.py",
    "content": "import http.client\nimport json\nimport pytest\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2024 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\"\"\" User Alerts API tests \"\"\"\n\n\ndef verify_alert_in_ping(url, alert_count):\n    conn = http.client.HTTPConnection(url)\n    conn.request(\"GET\", '/fledge/ping')\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert len(jdoc), \"No Ping data found.\"\n    assert jdoc['alerts'] == alert_count\n\n\ndef create_alert(url, payload):\n    svc_conn = http.client.HTTPConnection(url)\n    svc_conn.request(\"GET\", '/fledge/service?type=Core')\n    resp = svc_conn.getresponse()\n    assert 200 == resp.status\n    resp = resp.read().decode()\n    svc_jdoc = json.loads(resp)\n\n    svc_details = svc_jdoc[\"services\"][0]\n    url = \"{}:{}\".format(svc_details['address'], svc_details['management_port'])\n    conn = http.client.HTTPConnection(url)\n    conn.request('POST', '/fledge/alert', body=json.dumps(payload))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert len(jdoc), \"Failed to create alert!\"\n    return jdoc\n\n\nclass TestAlerts:\n\n    def test_get_default_alerts(self, fledge_url, reset_and_start_fledge):\n        verify_alert_in_ping(fledge_url, alert_count=0)\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/alert')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No alerts found.\"\n        assert 'alerts' in jdoc\n        assert jdoc['alerts'] == []\n\n    def test_no_delete_alert(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/alert')\n        r = conn.getresponse()\n        
assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'message' in jdoc\n        assert {\"message\": \"Nothing to delete.\"} == jdoc\n\n    def test_bad_delete_alert_by_key(self, fledge_url):\n        key = \"blah\"\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/alert/{}'.format(key))\n        r = conn.getresponse()\n        assert 404 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'message' in jdoc\n        assert {\"message\": \"{} alert not found.\".format(key)} == jdoc\n\n    @pytest.mark.parametrize(\"payload, count\", [\n        ({\"key\": \"updates\", \"urgency\": \"normal\", \"message\": \"Fledge new version is available.\"}, 1),\n        ({\"key\": \"updates\", \"urgency\": \"normal\", \"message\": \"Fledge new version is available.\"}, 1)\n    ])\n    def test_create_alert(self, fledge_url, payload, count):\n        jdoc = create_alert(fledge_url, payload)\n        assert 'alert' in jdoc\n        alert_jdoc = jdoc['alert']\n        payload['urgency'] = 'Normal'\n        assert payload == alert_jdoc\n\n        verify_alert_in_ping(fledge_url, alert_count=count)\n\n    def test_get_all_alerts(self, fledge_url):\n        verify_alert_in_ping(fledge_url, alert_count=1)\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/alert')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No alerts found.\"\n        assert 'alerts' in jdoc\n        assert 1 == len(jdoc['alerts'])\n        alert_jdoc = jdoc['alerts'][0]\n        assert 'key' in alert_jdoc\n        assert 'updates' == alert_jdoc['key']\n        assert 'message' in alert_jdoc\n        assert 'Fledge new version is available.' 
== alert_jdoc['message']\n        assert 'urgency' in alert_jdoc\n        assert 'Normal' == alert_jdoc['urgency']\n        assert 'timestamp' in alert_jdoc\n        import utils\n        assert utils.validate_date_format(alert_jdoc['timestamp']) is True, \"Timestamp format mismatched.\"\n\n    def test_delete_alert_by_key(self, fledge_url):\n        payload = {\"key\": \"Sine\", \"message\": \"The service has restarted 4 times\", \"urgency\": \"critical\"}\n        jdoc = create_alert(fledge_url, payload)\n        assert 'alert' in jdoc\n        alert_jdoc = jdoc['alert']\n        payload['urgency'] = 'Critical'\n        assert payload == alert_jdoc\n\n        verify_alert_in_ping(fledge_url, alert_count=2)\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/alert/{}'.format(payload['key']))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'message' in jdoc\n        assert {'message': '{} alert is deleted.'.format(payload['key'])} == jdoc\n\n        verify_alert_in_ping(fledge_url, alert_count=1)\n\n    def test_delete_alert(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/alert')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'message' in jdoc\n        assert {'message': 'Delete all alerts.'} == jdoc\n\n        verify_alert_in_ping(fledge_url, alert_count=0)\n"
  },
  {
    "path": "tests/system/python/api/test_audit.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test Audit REST API \"\"\"\n\n\nimport http.client\nimport json\nimport time\nfrom collections import Counter\nimport pytest\nimport utils\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nDEFAULT_AUDIT_COUNT = 20\n\n\nclass TestAudit:\n    def test_get_log_codes(self, fledge_url, reset_and_start_fledge):\n        expected_code_list = ['PURGE', 'LOGGN', 'STRMN', 'SYPRG', 'START', 'FSTOP',\n                              'CONCH', 'CONAD', 'SCHCH', 'SCHAD', 'SRVRG', 'SRVUN',\n                              'SRVFL', 'SRVRS', 'NHCOM', 'NHDWN', 'NHAVL', 'UPEXC',\n                              'BKEXC', 'NTFDL', 'NTFAD', 'NTFSN', 'NTFCL', 'NTFST',\n                              'NTFSD', 'PKGIN', 'PKGUP', 'PKGRM', 'DSPST', 'DSPSD',\n                              'ESSRT', 'ESSTP', 'ASTDP', 'ASTUN', 'PIPIN', 'AUMRK',\n                              'USRAD', 'USRDL', 'USRCH', 'USRRS',\n                              'ACLAD', 'ACLCH', 'ACLDL',\n                              'CTSAD', 'CTSCH', 'CTSDL',\n                              'CTPAD', 'CTPCH', 'CTPDL',\n                              'CTEAD', 'CTECH', 'CTEDL',\n                              'BUCAD', 'BUCCH', 'BUCDL',\n                              'USRBK','USRUB'\n                              ]\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/audit/logcode')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        assert len(expected_code_list) == len(jdoc['logCode'])\n        codes = [key['code'] for key in jdoc['logCode']]\n        assert Counter(expected_code_list) == Counter(codes)\n\n    def test_get_severity(self, fledge_url):\n    
    conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/audit/severity')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        log_severity = jdoc['logSeverity']\n        assert 4 == len(log_severity)\n        name = [name['name'] for name in log_severity]\n        assert Counter(['SUCCESS', 'FAILURE', 'WARNING', 'INFORMATION']) == Counter(name)\n        index = [idx['index'] for idx in log_severity]\n        assert Counter([0, 1, 2, 4]) == Counter(index)\n\n    @pytest.mark.parametrize(\"request_params, total_count, audit_count\", [\n        ('', DEFAULT_AUDIT_COUNT, DEFAULT_AUDIT_COUNT),\n        ('?limit=1', DEFAULT_AUDIT_COUNT, 1),\n        ('?skip=4', DEFAULT_AUDIT_COUNT, 16),\n        ('?limit=1&skip=8', DEFAULT_AUDIT_COUNT, 1),\n        ('?source=START', 1, 1),\n        ('?source=CONAD', 19, 19),\n        ('?source=CONAD&limit=1', 19, 1),\n        ('?source=CONAD&skip=1', 19, 18),\n        ('?source=CONAD&skip=6&limit=1', 19, 1),\n        ('?severity=INFORMATION', DEFAULT_AUDIT_COUNT, DEFAULT_AUDIT_COUNT),\n        ('?severity=failure', 0, 0),\n        ('?source=CONAD&severity=failure', 0, 0),\n        ('?source=START&severity=INFORMATION', 1, 1),\n        ('?source=START&severity=information&limit=1', 1, 1),\n        ('?source=START&severity=information&limit=1&skip=1', 1, 0)\n    ])\n    def test_default_get_audit(self, fledge_url, wait_time, request_params, total_count, audit_count, storage_plugin):\n        # wait for Fledge start, first test only, once in the iteration before start\n        if request_params == '':\n            time.sleep(wait_time)\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/audit{}'.format(request_params))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = 
json.loads(r)\n        elems = jdoc['audit']\n        assert len(jdoc), \"No data found\"\n        assert total_count == jdoc['totalCount']\n        assert audit_count == len(elems)\n        if len(elems):\n            assert utils.validate_date_format(elems[0]['timestamp']) is True, \"Timestamp format mismatched.\"\n\n    @pytest.mark.parametrize(\"payload, total_count\", [\n        ({\"source\": \"LOGGN\", \"severity\": \"warning\", \"details\": {\"message\": \"Engine oil pressure low\"}}, 1),\n        ({\"source\": \"NHCOM\", \"severity\": \"success\", \"details\": {}}, 2),\n        ({\"source\": \"START\", \"severity\": \"information\", \"details\": {\"message\": \"fledge started\"}}, 3),\n        ({\"source\": \"CONCH\", \"severity\": \"failure\", \"details\": {\"message\": \"Scheduler configuration failed\"}}, 4)\n    ])\n    def test_create_audit_entry(self, fledge_url, payload, total_count):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request('POST', '/fledge/audit', body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        assert payload['source'] == jdoc['source']\n        assert payload['severity'] == jdoc['severity']\n        assert payload['details'] == jdoc['details']\n        assert utils.validate_date_format(jdoc['timestamp']) is True, \"Timestamp format mismatched.\"\n\n        # Verify new audit log entries\n        conn.request(\"GET\", '/fledge/audit')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        assert DEFAULT_AUDIT_COUNT + total_count == jdoc['totalCount']\n\n    @pytest.mark.parametrize(\"payload\", [\n        ({\"source\": \"LOGGN_X\", \"severity\": \"warning\", \"details\": {\"message\": \"Engine oil pressure low\"}}),\n        
({\"source\": \"LOG_X\", \"severity\": \"information\", \"details\": {\"message\": \"Engine oil pressure is okay.\"}})\n    ])\n    def test_create_nonexistent_log_code_audit_entry(self, fledge_url, payload, storage_plugin):\n        if storage_plugin == 'sqlite':\n            pytest.skip('TODO: FOGL-2124 Enable foreign key constraint in SQLite')\n\n        msg = \"Audit entry cannot be logged.\"\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request('POST', '/fledge/audit', body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 400 == r.status\n        assert msg in r.reason\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert msg in jdoc['message']\n"
  },
  {
    "path": "tests/system/python/api/test_authentication.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test authentication REST API \"\"\"\n\nimport os\nimport subprocess\nimport http.client\nimport json\nimport time\nimport pytest\n\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nTOKEN = None\n\n# TODO: Cover scenario when auth is optional and negative scenarios\n\n\n@pytest.fixture\ndef authentication():\n    return \"mandatory\"\n\n\nclass TestAuthenticationAPI:\n    def test_login_username_regular_user(self, fledge_url, wait_time,  authentication, reset_and_start_fledge):\n        time.sleep(wait_time * 3)\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"Logged in successfully.\" == jdoc['message']\n        assert \"token\" in jdoc\n        assert not jdoc['admin']\n        global TOKEN\n        TOKEN = jdoc[\"token\"]\n\n    def test_logout_me(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n    def test_login_username_admin(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"Logged in successfully.\" == jdoc['message']\n        assert 
\"token\" in jdoc\n        assert jdoc['admin']\n        global TOKEN\n        TOKEN = jdoc[\"token\"]\n\n    @pytest.mark.parametrize((\"query\", \"expected_values\"), [\n        ('', {'users': [{'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any', 'realName': 'Admin user',\n                         'description': 'admin user'},\n                        {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                         'description': 'normal user'}]}),\n        ('?id=2', {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                   'description': 'normal user'}),\n        ('?username=admin',\n         {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any', 'realName': 'Admin user',\n          'description': 'admin user'}),\n        ('?id=1&username=admin',\n         {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any', 'realName': 'Admin user',\n          'description': 'admin user'})\n    ])\n    def test_get_users(self, fledge_url, query, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/user{}\".format(query), headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    def test_get_roles(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/user/role\", headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'roles': [{'description': 'All CRUD privileges', 'id': 1, 'name': 'admin'},\n                          {'description': 'All CRUD operations and self profile management', 'id': 2, 'name': 'user'},\n                
          {'id': 3, 'name': 'view', 'description': 'Only to view the configuration'},\n                          {'id': 4, 'name': 'data-view', 'description': 'Only read the data in buffer'},\n                          {'id': 5, 'name': 'control', 'description':\n                              'Same as editor can do and also have access for control scripts and pipelines'}\n                          ]} == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"User@123\", \"real_name\": \"AJ\", \"description\": \"Nerd user\"},\n         {'user': {'userName': 'any1', 'userId': 3, 'roleId': 2, 'accessMethod': 'any', 'realName': 'AJ',\n                   'description': 'Nerd user'}, 'message': 'any1 user has been created successfully.'}),\n        ({\"username\": \"admin1\", \"password\": \"F0gl@mp!\", \"role_id\": 1},\n         {'user': {'userName': 'admin1', 'userId': 4, 'roleId': 1, 'accessMethod': 'any', 'realName': '',\n                   'description': ''}, 'message': 'admin1 user has been created successfully.'}),\n        ({\"username\": \"bogus\", \"password\": \"Fl3dG$\", \"role_id\": 2},\n         {'user': {'userName': 'bogus', 'userId': 5, 'roleId': 2, 'accessMethod': 'any', 'realName': '',\n                   'description': ''}, 'message': 'bogus user has been created successfully.'}),\n        ({\"username\": \"view\", \"password\": \"V!3w@1\", \"role_id\": 3, \"real_name\": \"View\",\n          \"description\": \"Only to view the configuration\"},\n         {'user': {\n             'userName': 'view', 'userId': 6, 'roleId': 3, 'accessMethod': 'any', 'realName': 'View',\n             'description': 'Only to view the configuration'}, 'message': 'view user has been created successfully.'}),\n        ({\"username\": \"dataView\", \"password\": \"DV!3w@1\", \"role_id\": 4, \"real_name\": \"DataView\",\n          \"description\": \"Only read the data in buffer\"},\n         {'user': {\n     
        'userName': 'dataview', 'userId': 7, 'roleId': 4, 'accessMethod': 'any', 'realName': 'DataView',\n             'description': 'Only read the data in buffer'}, 'message': 'dataview user has been created successfully.'}\n         ),\n        ({\"username\": \"control\", \"password\": \"C0ntrol!\", \"role_id\": 5, \"real_name\": \"Control\",\n          \"description\": \"Same as editor can do and also have access for control scripts and pipelines\"},\n         {'user': {\n             'userName': 'control', 'userId': 8, 'roleId': 5, 'accessMethod': 'any', 'realName': 'Control',\n             'description': 'Same as editor can do and also have access for control scripts and pipelines'},\n             'message': 'control user has been created successfully.'})\n    ])\n    def test_create_user(self, fledge_url, form_data, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(form_data), headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    def test_update_password(self, fledge_url):\n        uid = 3\n        payload = {\"current_password\": \"User@123\", \"new_password\": \"F0gl@mp1\"}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/user/{}/password\".format(uid), body=json.dumps(payload),\n                     headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'Password has been updated successfully for user ID:<{}>.'.format(uid)} == jdoc\n\n    def test_update_user(self, fledge_url):\n        uid = 5\n        conn = http.client.HTTPConnection(fledge_url)\n        payload = {\"real_name\": \"Test Real\", \"description\": \"Test Desc\", 
\"access_method\": \"pwd\"}\n        conn.request(\"PUT\", \"/fledge/admin/{}\".format(uid), body=json.dumps(payload),\n                     headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'user_info' in jdoc\n        assert uid == jdoc[\"user_info\"][\"id\"]\n        assert payload[\"real_name\"] == jdoc[\"user_info\"][\"real_name\"]\n        assert payload[\"description\"] == jdoc[\"user_info\"][\"description\"]\n        assert payload[\"access_method\"] == jdoc[\"user_info\"][\"access_method\"]\n\n    def test_update_me(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        payload = {\"real_name\": \"Admin\"}\n        conn.request(\"PUT\", \"/fledge/user\", body=json.dumps(payload), headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'message' in jdoc\n        assert {\"message\": \"Real name has been updated successfully!\"} == jdoc\n\n        conn.request(\"GET\", \"/fledge/user?id=1\", headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert payload['real_name'] == jdoc['realName']\n\n    def test_enable_user(self, fledge_url):\n        uid = 5\n        # Fetch users list\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/user\", headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        user_list = [u['userId'] for u in jdoc['users']]\n        assert uid in user_list\n\n        # Disable user\n        payload = {\"enabled\": \"false\"}\n        conn.request(\"PUT\", \"/fledge/admin/{}/enable\".format(uid), 
body=json.dumps(payload),\n                     headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'User with ID:<{}> has been disabled successfully.'.format(uid)} == jdoc\n\n        # Fetch users list again and check disabled user does not exist in the response\n        conn.request(\"GET\", \"/fledge/user\", headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        user_list = [u['userId'] for u in jdoc['users']]\n        assert uid not in user_list\n\n    def test_reset_user(self, fledge_url):\n        uid = 3\n        payload = {\"role_id\": 1, \"password\": \"F0gl@mp!\"}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/admin/{}/reset\".format(uid), body=json.dumps(payload),\n                     headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'User with ID:<{}> has been updated successfully.'.format(uid)} == jdoc\n\n    def test_create_user_cert(self, fledge_url, storage_plugin):\n        conn = http.client.HTTPConnection(fledge_url)\n        # Get users\n        conn.request(\"GET\", \"/fledge/user\", headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        user = jdoc[\"users\"][6]\n        if storage_plugin == 'postgres':\n            user = jdoc[\"users\"][4]\n        assert 8 == user[\"userId\"]\n        assert \"control\" == user[\"userName\"]\n\n        # Generate an Authentication Certificate for the control user.\n        conn.request(\"POST\", \"/fledge/admin/{}/authcertificate\".format(user[\"userId\"]),\n    
                 headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        assert \"OK\" == r.reason\n        cert = r.read().decode()\n        assert cert.startswith(\"-----BEGIN CERTIFICATE-----\")\n        assert cert.endswith(\"\\n-----END CERTIFICATE-----\\n\")\n\n        # Get users\n        conn.request(\"GET\", \"/fledge/user\", headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        user = jdoc[\"users\"][6]\n        if storage_plugin == 'postgres':\n            user = jdoc[\"users\"][4]\n        assert 8 == user[\"userId\"]\n        assert \"control\" == user[\"userName\"]\n\n        # Log in using the newly created certificate above\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/{}.cert'.format(\n            user[\"userName\"]))\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert \"Logged in successfully.\" == jdoc['message']\n            assert \"token\" in jdoc\n            assert not jdoc['admin']\n            assert user[\"userId\"] == jdoc['uid']\n\n    def test_delete_user(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", \"/fledge/admin/4/delete\", headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': \"User has been deleted successfully.\"} == jdoc\n\n    def test_logout_all(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", 
'/fledge/1/logout', headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n    def test_admin_actions_reg_user(self, fledge_url):\n        \"\"\"Test that regular user is not able to perform any actions that only an admin can\"\"\"\n        # Login with regular user\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert not jdoc['admin']\n        _token = jdoc[\"token\"]\n\n        # Create User\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps({\"username\": \"other\", \"password\": \"User@123\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Update User\n        conn.request(\"PUT\", \"/fledge/admin/2\", body=json.dumps({\"access_method\": 'cert'}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Reset User\n        conn.request(\"PUT\", \"/fledge/admin/2/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@p!\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Delete User\n        conn.request(\"DELETE\", \"/fledge/admin/2/delete\", headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        
assert \"403: Forbidden\" == r\n\n        # Enable/Disable User\n        conn.request(\"PUT\", \"/fledge/admin/2/enable\", body=json.dumps({\"enabled\": \"false\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Create a user authentication certificate.\n        conn.request(\"POST\", \"/fledge/admin/2/authcertificate\", headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n    def test_login_with_user_certificate(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/user.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert \"Logged in successfully.\" == jdoc['message']\n            assert \"token\" in jdoc\n            assert not jdoc['admin']\n\n    @pytest.mark.parametrize(\"cert_path, username, is_admin, role_id\", [\n        ('data/etc/certs/admin.cert', 'Admin', True, 1),\n        ('data/etc/certs/admin.cert', 'admin', True, 1),\n        ('data/etc/certs/admin.cert', 'ADMIN', True, 1),\n        ('data/etc/certs/user.cert', 'USER', False, 2),\n        ('data/etc/certs/user.cert', 'User', False, 2),\n        ('data/etc/certs/user.cert', 'user', False, 2)\n    ])\n    def test_login_with_admin_certificate(self, fledge_url, cert_path, username, is_admin, role_id):\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), cert_path)\n        with open(cert_file_path, 'r') as f:\n      
      conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert \"Logged in successfully.\" == jdoc['message']\n            assert \"token\" in jdoc\n            assert is_admin == jdoc['admin']\n            # Verify user after login\n            conn.request(\"GET\", \"/fledge/user?username={}\".format(username),\n                         headers={\"authorization\": jdoc['token']})\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            user_jdoc = json.loads(r)\n            assert len(user_jdoc) > 0, \"No record found for {} username.\".format(username)\n            assert role_id == user_jdoc[\"roleId\"]\n            assert username.lower() == user_jdoc['userName']\n\n    def test_login_with_custom_certificate(self, fledge_url, remove_data_file):\n        # Create a custom certificate and sign\n        subprocess.run([\"openssl genrsa -out custom.key 2048 2> /dev/null\"], shell=True)\n        subprocess.run([\"openssl req -new -key custom.key -out custom.csr -subj '/C=IN/CN=user' 2> /dev/null\"],\n                       shell=True)\n        subprocess.run([\"openssl x509 -req -days 1 -in custom.csr \"\n                        \"-CA $FLEDGE_ROOT/data/etc/certs/ca.cert -CAkey $FLEDGE_ROOT/data/etc/certs/ca.key \"\n                        \"-set_serial 01 -out custom.cert 2> /dev/null\"], shell=True)\n\n        # Login with custom certificate\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = 'custom.cert'\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert \"Logged in successfully.\" == jdoc['message']\n     
       assert \"token\" in jdoc\n            assert not jdoc['admin']\n\n        # Delete Certificates and keys created\n        remove_data_file('custom.key')\n        remove_data_file('custom.csr')\n        remove_data_file('custom.cert')\n"
  },
  {
    "path": "tests/system/python/api/test_browser_assets.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test Browser Assets REST API \"\"\"\n\n\nimport os\nimport shutil\nimport http.client\nimport time\nimport json\nimport pytest\nimport utils\n\n__author__ = \"Ashish Jabble, Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nASSET_NAME = 'test-loudness'\nSENSOR = 'loudness'\nSENSOR_VALUES = [1, 2, 3, 4, 5, 6]\nSOUTH_PLUGIN_NAME = 'dummyplugin'\nSERVICE_NAME = 'TestBrowserAPI'\nDT_FORMAT = '%Y-%m-%d %H:%M:%S.%f'\n\n\ndef verify_utc_timestamp_with_different_timezones(timestamp_str):\n\n    def _convert_utc_to_other_timezone(tz, hours, minutes):\n        from datetime import datetime, timedelta\n        import pytz\n\n        # Parse the timestamp string into a naive datetime object\n        utc_dt = datetime.strptime(timestamp_str, DT_FORMAT)\n        # Add timezone info to the naive datetime object\n        utc_dt = pytz.utc.localize(utc_dt)\n        # Convert UTC to tz\n        converted_dt = utc_dt.astimezone(pytz.timezone(tz))\n        # Check if the converted datetime is in DST\n        if converted_dt.dst():\n            print(converted_dt, \"is in DST for\", tz)\n        else:\n            print(converted_dt, \"is not in DST for\", tz)\n            if tz == \"America/Los_Angeles\":\n                hours = -8  # No DST (UTC-8)\n                minutes = 0\n            elif tz == \"America/New_York\":\n                hours = -5  # No DST (UTC-5)\n                minutes = 0\n            elif tz == \"Europe/London\":\n                hours = 0  # No DST (UTC+0)\n                minutes = 0\n            elif tz == \"Europe/Rome\":\n                hours = 1  # No DST (UTC+1)\n                minutes = 0\n        # Define the expected offset\n        expected_offset = timedelta(hours=hours, minutes=minutes)\n        # Calculate the difference between the 
two offsets\n        actual_offset = converted_dt.utcoffset() - utc_dt.utcoffset()\n        assert actual_offset == expected_offset, \"Expected offset {}, but got {}\".format(\n            expected_offset, actual_offset)\n\n    assert utils.validate_date_format(timestamp_str) is True, \"Timestamp format mismatched.\"\n\n    _convert_utc_to_other_timezone(\"Asia/Kolkata\", 5, 30)\n    _convert_utc_to_other_timezone(\"America/Los_Angeles\", -7, 0)  # Adjust for DST if needed\n    _convert_utc_to_other_timezone(\"America/New_York\", -4, 0)     # Adjust for DST if needed\n    _convert_utc_to_other_timezone(\"Europe/London\", 1, 0)   # Adjust for DST if needed\n    _convert_utc_to_other_timezone(\"Europe/Rome\", 2, 0)     # Adjust for DST if needed\n    _convert_utc_to_other_timezone(\"Etc/UTC\", 0, 0)\n\n    # TODO: Add more mappings if required\n\n\nclass TestBrowserAssets:\n\n    @pytest.fixture\n    def start_south(self, reset_and_start_fledge, remove_directories, fledge_url, south_plugin=SOUTH_PLUGIN_NAME):\n        \"\"\" This fixture creates a south plugin and starts a south service instance\n            reset_and_start_fledge: Fixture that resets and starts fledge, no explicit invocation, called at start\n            remove_directories: Fixture that removes directories created during the tests\"\"\"\n\n        # Create a south plugin\n        plugin_dir = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'python/fledge/plugins/south/dummyplugin')\n        plugin_file = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'tests/system/python/data/dummyplugin.py')\n        try:\n            os.mkdir(plugin_dir)\n        except FileExistsError:\n            print(\"Directory \", plugin_dir, \" already exists\")\n\n        shutil.copy2(plugin_file, plugin_dir)\n        # Create south service\n        conn = http.client.HTTPConnection(fledge_url)\n        data = {\"name\": \"{}\".format(SERVICE_NAME), \"type\": \"South\", \"plugin\": \"{}\".format(south_plugin),\n                
\"enabled\": \"true\", \"config\": {}}\n        conn.request(\"POST\", '/fledge/service', json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        retval = json.loads(r)\n        assert SERVICE_NAME == retval[\"name\"]\n\n        yield self.start_south\n\n        # Cleanup code that runs after the caller test is over\n        remove_directories(plugin_dir)\n\n    def test_get_asset_counts(self, start_south, fledge_url, wait_time):\n        \"\"\"Test that browsing an asset gives correct asset name and asset count\"\"\"\n        time.sleep(wait_time * 4)\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        assert ASSET_NAME == jdoc[0][\"assetCode\"]\n        assert 6 == jdoc[0][\"count\"]\n\n    def test_get_asset(self, fledge_url):\n        \"\"\"Test that browsing an asset gives correct asset values\"\"\"\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset/{}'.format(ASSET_NAME))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        i = 0\n        for val in SENSOR_VALUES:\n            assert {SENSOR: val} == jdoc[i]['reading']\n            verify_utc_timestamp_with_different_timezones(jdoc[i]['timestamp'])\n            i += 1\n\n    @pytest.mark.parametrize((\"query\", \"expected_count\", \"expected_values\"), [\n        ('?limit=1', 1, [SENSOR_VALUES[0]]),\n        ('?limit=1&skip=1', 1, [SENSOR_VALUES[1]]),\n        ('?seconds=59', 2, SENSOR_VALUES[0:2]),\n        ('?minutes=15', 4, SENSOR_VALUES[0:4]),\n        ('?hours=4', 5, SENSOR_VALUES[0:5]),\n        # Verify that if a combination of hrs, min, 
sec is used, shortest period will apply\n        ('?hours=20&minutes=20&seconds=59&limit=20', 2, SENSOR_VALUES[0:2]),\n        ('?limit=&hours=&minutes=&seconds=', 6, SENSOR_VALUES)\n        # In case of empty params, all values are returned\n    ])\n    def test_get_asset_query(self, fledge_url, query, expected_count, expected_values):\n        \"\"\"Test that browsing an asset with query parameters gives correct asset values\"\"\"\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset/{}{}'.format(ASSET_NAME, query))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc) == expected_count\n        i = 0\n        for item in expected_values:\n            assert {SENSOR: item} == jdoc[i]['reading']\n            verify_utc_timestamp_with_different_timezones(jdoc[i]['timestamp'])\n            i += 1\n\n    @pytest.mark.parametrize(\"request_params, response_code, response_message\", [\n        ('?limit=invalid', 400, \"Limit must be a positive integer\"),\n        ('?limit=-1', 400, \"Limit must be a positive integer\"),\n        ('?skip=invalid', 400, \"Skip/Offset must be a positive integer\"),\n        ('?skip=-1', 400, \"Skip/Offset must be a positive integer\"),\n        ('?minutes=-1', 400, \"Time must be a positive integer\"),\n        ('?minutes=blah', 400, \"Time must be a positive integer\"),\n        ('?seconds=-1', 400, \"Time must be a positive integer\"),\n        ('?seconds=blah', 400, \"Time must be a positive integer\"),\n        ('?hours=-1', 400, \"Time must be a positive integer\"),\n        ('?hours=blah', 400, \"Time must be a positive integer\")\n    ])\n    def test_get_asset_query_bad_data(self, fledge_url, request_params, response_code, response_message):\n        \"\"\"Test that browsing an asset with invalid query parameters generates http errors\"\"\"\n        conn = 
http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset/{}{}'.format(ASSET_NAME, request_params))\n        r = conn.getresponse()\n        conn.close()\n        assert response_code == r.status\n        assert response_message == r.reason\n\n    def test_get_asset_reading(self, fledge_url):\n        \"\"\"Test that browsing an asset's data point gives correct asset data point values\"\"\"\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset/{}/{}'.format(ASSET_NAME, SENSOR))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        i = 0\n        for val in SENSOR_VALUES:\n            assert val == jdoc[i][SENSOR]\n            verify_utc_timestamp_with_different_timezones(jdoc[i]['timestamp'])\n            i += 1\n\n    @pytest.mark.parametrize((\"query\", \"expected_count\", \"expected_values\"), [\n        ('?limit=1', 1, [SENSOR_VALUES[0]]),\n        ('?limit=1&skip=1', 1, [SENSOR_VALUES[1]]),\n        ('?seconds=59', 2, SENSOR_VALUES[0:2]),\n        ('?minutes=15', 4, SENSOR_VALUES[0:4]),\n        ('?hours=4', 5, SENSOR_VALUES[0:5]),\n        # Verify that if a combination of hrs, min, sec is used, shortest period will apply\n        ('?hours=20&minutes=20&seconds=59&limit=20', 2, SENSOR_VALUES[0:2]),\n        ('?limit=&hours=&minutes=&seconds=', 6, SENSOR_VALUES)\n        # In case of empty params, all values are returned\n    ])\n    def test_get_asset_readings_query(self, fledge_url, query, expected_count, expected_values):\n        \"\"\"Test that browsing an asset's data point with query parameters gives correct asset data point values\"\"\"\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset/{}/{}{}'.format(ASSET_NAME, SENSOR, query))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = 
r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc) == expected_count\n        i = 0\n        for item in expected_values:\n            assert item == jdoc[i][SENSOR]\n            verify_utc_timestamp_with_different_timezones(jdoc[i]['timestamp'])\n            i += 1\n\n    def test_get_asset_summary(self, fledge_url):\n        \"\"\"Test that browsing an asset's summary gives correct min, max and average values\"\"\"\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset/{}/summary'.format(ASSET_NAME))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        summary = jdoc[0][SENSOR]\n        avg = sum(SENSOR_VALUES) / len(SENSOR_VALUES)\n        assert avg == summary['average']\n        assert max(SENSOR_VALUES) == summary['max']\n        assert min(SENSOR_VALUES) == summary['min']\n\n    def test_get_asset_readings_summary_invalid_sensor(self, fledge_url):\n        \"\"\"Test that browsing a non-existing asset's data point summary gives blank min, max and average values\"\"\"\n        conn = http.client.HTTPConnection(fledge_url)\n        invalid_sensor = \"invalid\"\n        conn.request(\"GET\", '/fledge/asset/{}/{}/summary'.format(ASSET_NAME, invalid_sensor))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert '{} reading key is not found'.format(invalid_sensor) == r.reason\n\n    def test_get_asset_readings_summary(self, fledge_url):\n        \"\"\"Test that browsing an asset's data point summary gives correct min, max and average values\"\"\"\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset/{}/{}/summary'.format(ASSET_NAME, SENSOR))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data 
found\"\n        summary = jdoc[SENSOR]\n        avg = sum(SENSOR_VALUES) / len(SENSOR_VALUES)\n        assert avg == summary['average']\n        assert max(SENSOR_VALUES) == summary['max']\n        assert min(SENSOR_VALUES) == summary['min']\n\n    def test_get_asset_series(self, fledge_url):\n        \"\"\"Test that browsing an asset's data point time series gives correct min, max and average values\n         for all timestamps\"\"\"\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset/{}/{}/series'.format(ASSET_NAME, SENSOR))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        i = 0\n        # Min, average and max values of a time series data is noting but the value itself if readings were ingested at\n        # different timestamps\n        for val in SENSOR_VALUES:\n            assert val == jdoc[i]['min']\n            assert val == jdoc[i]['average']\n            assert val == jdoc[i]['max']\n            i += 1\n\n    @pytest.mark.parametrize((\"query\", \"expected_count\", \"expected_values\"), [\n        ('?limit=1', 1, [SENSOR_VALUES[0]]),\n        ('?limit=1&skip=1', 1, [SENSOR_VALUES[1]]),\n        ('?seconds=59', 2, SENSOR_VALUES[0:2]),\n        ('?minutes=15', 4, SENSOR_VALUES[0:4]),\n        ('?hours=4', 5, SENSOR_VALUES[0:5]),\n        # Verify that if a combination of hrs, min, sec is used, shortest period will apply\n        ('?hours=20&minutes=20&seconds=59&limit=20', 2, SENSOR_VALUES[0:2]),\n        ('?limit=&hours=&minutes=&seconds=', 6, SENSOR_VALUES)\n        # In case of empty params, all values are returned\n    ])\n    def test_get_asset_series_query_time_limit(self, fledge_url, query, expected_count, expected_values):\n        \"\"\"Test that browsing an asset's data point time series with query parameter\n         gives correct min, max and average values for all timestamps\"\"\"\n        conn = 
http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset/{}/{}/series{}'.format(ASSET_NAME, SENSOR, query))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc) == expected_count\n        i = 0\n        for item in expected_values:\n            assert item == jdoc[i]['min']\n            assert item == jdoc[i]['average']\n            assert item == jdoc[i]['max']\n            i += 1\n\n    def test_get_asset_series_query_group_sec(self, fledge_url):\n        \"\"\"Test that browsing an asset's data point time series with seconds grouping\n                 gives correct min, max and average values\"\"\"\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset/{}/{}/series{}'.format(ASSET_NAME, SENSOR, '?group=seconds'))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc) == 6\n        for i in range(0, len(jdoc)):\n            assert SENSOR_VALUES[i] == jdoc[i]['min']\n            assert SENSOR_VALUES[i] == jdoc[i]['average']\n            assert SENSOR_VALUES[i] == jdoc[i]['max']\n            assert utils.validate_date_format(jdoc[i]['timestamp'], '%Y-%m-%d %H:%M:%S') is True, \\\n                \"Timestamp format mismatched.\"\n\n    def test_get_asset_series_query_group_min(self, fledge_url, wait_time):\n        \"\"\"Test that browsing an asset's data point time series with minutes grouping\n                         gives correct min, max and average values\"\"\"\n        time.sleep(wait_time)\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset/{}/{}/series{}'.format(ASSET_NAME, SENSOR, '?group=minutes'))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 
len(jdoc) == 5\n\n        assert (sum(SENSOR_VALUES[0:2]) / len(SENSOR_VALUES[0:2])) == jdoc[0]['average']\n        assert min(SENSOR_VALUES[0:2]) == jdoc[0]['min']\n        assert max(SENSOR_VALUES[0:2]) == jdoc[0]['max']\n        assert utils.validate_date_format(jdoc[0]['timestamp'], '%Y-%m-%d %H:%M') is True, \\\n            \"Timestamp format mismatched.\"\n\n        # Remaining minute groups hold a single reading each; check value and timestamp of the same entry\n        for i in range(1, len(jdoc)):\n            assert SENSOR_VALUES[i + 1] == jdoc[i]['min']\n            assert SENSOR_VALUES[i + 1] == jdoc[i]['average']\n            assert SENSOR_VALUES[i + 1] == jdoc[i]['max']\n            assert utils.validate_date_format(jdoc[i]['timestamp'], '%Y-%m-%d %H:%M') is True, \\\n                \"Timestamp format mismatched.\"\n\n    def test_get_asset_series_query_group_hrs(self, fledge_url, wait_time):\n        \"\"\"Test that browsing an asset's data point time series with hour grouping\n                                 gives correct min, max and average values\"\"\"\n        time.sleep(wait_time)\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset/{}/{}/series{}'.format(ASSET_NAME, SENSOR, '?group=hours'))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc) == 3\n\n        assert (sum(SENSOR_VALUES[0:4]) / len(SENSOR_VALUES[0:4])) == jdoc[0]['average']\n        assert min(SENSOR_VALUES[0:4]) == jdoc[0]['min']\n        assert max(SENSOR_VALUES[0:4]) == jdoc[0]['max']\n        assert utils.validate_date_format(jdoc[0]['timestamp'], '%Y-%m-%d %H') is True, \"Timestamp format mismatched.\"\n\n        for i in range(4, 6):\n            assert SENSOR_VALUES[i] == jdoc[i - 3]['min']\n            assert SENSOR_VALUES[i] == jdoc[i - 3]['average']\n            assert SENSOR_VALUES[i] == jdoc[i - 3]['max']\n            assert utils.validate_date_format(jdoc[i - 3]['timestamp'], '%Y-%m-%d %H') is True, \\\n                
\"Timestamp format mismatched.\"\n\n    def test_get_asset_sensor_readings_invalid_group(self, fledge_url):\n        \"\"\"Test that browsing an asset's data point time series with invalid grouping\n                                 gives http error\"\"\"\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset/{}/{}/series?group=blah'.format(ASSET_NAME, SENSOR))\n        r = conn.getresponse()\n        conn.close()\n        assert r.status == 400\n        assert r.reason == \"blah is not a valid group\"\n"
  },
  {
    "path": "tests/system/python/api/test_common.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test Common (ping, shutdown, restart) REST API \"\"\"\n\nimport re\nimport socket\nimport subprocess\nimport http.client\nimport time\nimport json\nfrom conftest import restart_and_wait_for_fledge\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nSEMANTIC_VERSIONING_REGEX = \"^(?P<major>0|[1-9]\\d*)\\.(?P<minor>0|[1-9]\\d*)\\.(?P<patch>0|[1-9]\\d*)$\"\n\n\ndef get_machine_detail():\n    host_name = socket.gethostname()\n    # all addresses for the host\n    all_ip_addresses_cmd_res = subprocess.run(['hostname', '-I'], stdout=subprocess.PIPE)\n    ip_addresses = all_ip_addresses_cmd_res.stdout.decode('utf-8').replace(\"\\n\", \"\").strip().split(\" \")\n    return host_name, ip_addresses\n\n\nclass TestCommon:\n\n    def test_ping_default(self, reset_and_start_fledge, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/ping')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        host_name, ip_addresses = get_machine_detail()\n        assert 1 < jdoc['uptime']\n        assert isinstance(jdoc['uptime'], int)\n        assert 0 == jdoc['dataRead']\n        assert 0 == jdoc['dataSent']\n        assert 0 == jdoc['dataPurged']\n        assert 'Fledge' == jdoc['serviceName']\n        assert host_name == jdoc['hostName']\n        assert ip_addresses == jdoc['ipAddresses']\n        assert 'green' == jdoc['health']\n        assert jdoc['authenticationOptional'] is True\n        assert jdoc['safeMode'] is False\n        assert re.match(SEMANTIC_VERSIONING_REGEX, jdoc['version']) is not None\n        assert jdoc['alerts'] == 0\n\n    def 
test_ping_when_auth_mandatory_allow_ping_true(self, fledge_url, wait_time, retries):\n        conn = http.client.HTTPConnection(fledge_url)\n        payload = {\"allowPing\": \"true\", \"authentication\": \"mandatory\"}\n        conn.request(\"PUT\", '/fledge/category/rest_api', body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n\n        jdoc = restart_and_wait_for_fledge(fledge_url, wait_time)\n        assert len(jdoc), \"No data found\"\n        assert 1 < jdoc['uptime']\n        assert isinstance(jdoc['uptime'], int)\n\n    def test_ping_when_auth_mandatory_allow_ping_false(self, reset_and_start_fledge, fledge_url, wait_time, retries):\n        # reset_and_start_fledge fixture needed to get default settings back\n        conn = http.client.HTTPConnection(fledge_url)\n        payload = {\"allowPing\": \"false\", \"authentication\": \"mandatory\"}\n        conn.request(\"PUT\", '/fledge/category/rest_api', body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n\n        jdoc = restart_and_wait_for_fledge(fledge_url, wait_time)\n        assert 'Unauthorized' == jdoc['message']\n\n    def test_restart(self):\n        assert True, \"Already verified in test_ping_when_auth_mandatory_allow_ping_true\"\n\n    def test_shutdown(self, reset_and_start_fledge, fledge_url, wait_time):\n        # reset_and_start_fledge fixture needed to get default settings back\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/shutdown')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        assert 'Fledge shutdown has been scheduled. 
Wait for few seconds for process cleanup.' == jdoc['message']\n\n        from contextlib import closing\n        max_retries = 5\n        for attempt in range(max_retries):\n            try:\n                with closing(http.client.HTTPConnection(fledge_url)) as connection:\n                    connection.request(\"GET\", \"/fledge/ping\")\n            except (ConnectionRefusedError, socket.error) as ex:\n                break\n            finally:\n                time.sleep(wait_time * 8)\n        else:\n            raise AssertionError(\"Fledge did not shut down after maximum retries.\")\n\n        stat = subprocess.run([\"$FLEDGE_ROOT/scripts/fledge status\"], shell=True, stdout=subprocess.PIPE,\n                              stderr=subprocess.PIPE)\n        assert \"Fledge not running.\" == stat.stderr.decode(\"utf-8\").replace(\"\\n\", \"\").strip()\n"
  },
  {
    "path": "tests/system/python/api/test_configuration.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test Configuration REST API \"\"\"\n\nimport os\nimport http.client\nimport json\nfrom urllib.parse import quote\nimport time\nfrom collections import Counter\nimport pytest\nfrom fledge.common.common import _FLEDGE_ROOT, _FLEDGE_DATA\n\n__author__ = \"Praveen Garg, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2025 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\ncat_name = \"Pub #1\"\nscript_file_path = _FLEDGE_DATA + '/scripts/pub #1_item5_notify35.py' if _FLEDGE_DATA else _FLEDGE_ROOT + '/data/scripts/pub #1_item5_notify35.py'\n\n\nclass TestConfiguration:\n\n    def test_default(self, fledge_url, reset_and_start_fledge, wait_time, storage_plugin):\n        # TODO: FOGL-2349, once resolved below remove file check will be deleted\n        if os.path.exists(script_file_path):\n            os.remove(script_file_path)\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/category')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc)\n\n        # Utilities parent key creation\n        time.sleep(wait_time)\n\n        conn.request(\"GET\", '/fledge/category?root=true')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        cats = jdoc[\"categories\"]\n        assert 3 == len(cats)\n        assert {'key': 'Advanced', 'displayName': 'Advanced', 'description': 'Advanced'} == cats[0]\n        assert {'key': 'General', 'displayName': 'General', 'description': 'General'} == cats[1]\n        assert {'key': 'Utilities', 'displayName': 'Utilities', 'description': 'Utilities'} == cats[2]\n\n        conn.request(\"GET\", '/fledge/category?root=true&children=true')\n        r = conn.getresponse()\n   
     assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 3 == len(jdoc[\"categories\"])\n\n        expected_with_utilities = [\n          {\n              'key': 'Advanced',\n              'description': 'Advanced',\n              'displayName': 'Advanced',\n              'children': [\n                  {\n                            'key': 'Storage',\n                            'description': 'Storage configuration',\n                            'displayName': 'Storage',\n                            'children': [\n                                {\n                                  'key': storage_plugin,\n                                  'description': 'Storage Plugin',\n                                  'displayName': storage_plugin,\n                                  'children': []\n                                }\n                            ]\n                  },\n                  {\n                     'key': 'SUPPORT_BUNDLE',\n                     'description': 'Support Bundle Configuration',\n                     'displayName': 'Support Bundle',\n                     'children': []\n                  },\n                  {\n                     'key': 'SMNTR',\n                     'description': 'Service Monitor',\n                     'displayName': 'Service Monitor',\n                     'children': []\n                  },\n                  {\n                     'key': 'SCHEDULER',\n                     'description': 'Scheduler configuration',\n                     'displayName': 'Scheduler',\n                     'children': []\n                  },\n                  {\n                      \"key\": \"LOGGING\",\n                      \"description\": \"Logging Level of Core Server\",\n                      \"displayName\": \"Logging\",\n                      'children': []\n                  },\n                  {\n                      \"key\": \"RESOURCE_LIMIT\",\n                     
 \"description\": \"Resource Limit of South Service\",\n                      \"displayName\": \"Resource Limit\",\n                      \"children\": []\n                  },\n                  {\n                      'key': 'CONFIGURATION',\n                      'description': 'Core Configuration Manager',\n                      'displayName': 'Configuration Manager',\n                      'children': []\n                  },\n                  {\n                      'key': 'FEATURES',\n                      'description': 'Control the inclusion of system features',\n                      'displayName': 'Features',\n                      'children': []\n                  }\n              ]\n          },\n          {\n                     'key': 'General',\n                     'description': 'General',\n                     'displayName': 'General',\n                     'children': [\n                         {\n                             'key': 'service',\n                             'description': 'Fledge Service',\n                             'displayName': 'Fledge Service',\n                             'children': []\n                         },\n                         {\n                             'key': 'rest_api',\n                             'description': 'Fledge Admin and User REST API',\n                             'displayName': 'Admin API',\n                             'children': [\n                                 {\n                                     \"key\": \"password\",\n                                     \"description\": \"To control the password policy\",\n                                     \"displayName\": \"Password Policy\",\n                                     \"children\": [\n\n                                     ]\n                                 },\n                                 {\n                                     \"key\": \"firewall\",\n                                     \"description\": \"Monitor 
and Control HTTP Network Traffic\",\n                                     \"displayName\": \"Firewall\",\n                                     \"children\": [\n\n                                     ]\n                                 }\n                             ]\n                         },\n                         {\n                              'key': 'Installation',\n                              'description': 'Installation',\n                              'displayName': 'Installation',\n                              'children': []\n                         }\n                     ]\n            },\n            {\n                          'key': 'Utilities',\n                          'description': 'Utilities',\n                          'displayName': 'Utilities',\n                          'children': [\n                              {\n                                  'key': 'purge_system',\n                                  'description': 'Configuration of the Purge System',\n                                  'displayName': 'Purge System',\n                                  'children': []\n                              },\n                              {\n                                  'key': 'PURGE_READ',\n                                  'description': 'Purge the readings, log, statistics history table',\n                                  'displayName': 'Purge',\n                                  'children': []\n                              }\n                          ]\n            }\n        ]\n\n        assert expected_with_utilities == jdoc[\"categories\"]\n\n    def test_get_category(self, fledge_url):\n        expected = {'httpsPort': {'displayName': 'HTTPS Port', 'description': 'Port to accept HTTPS connections on', 'type': 'integer', 'order': '3', 'value': '1995', 'default': '1995', 'validity': 'enableHttp==\"false\"', 'permissions': ['admin']},\n                    'authCertificateName': {'displayName': 'Auth Certificate', 
'description': 'Auth Certificate name', 'type': 'string', 'order': '7', 'value': 'ca', 'default': 'ca', 'permissions': ['admin']},\n                    'certificateName': {'displayName': 'Certificate Name', 'description': 'Certificate file name', 'type': 'string', 'order': '4', 'value': 'fledge', 'default': 'fledge', 'validity': 'enableHttp==\"false\"', 'permissions': ['admin']},\n                    'authProviders': {'displayName': 'Auth Providers', 'description': 'Authentication providers to use for the interface (JSON array object)', 'type': 'JSON', 'order': '9', 'value': '{\"providers\": [\"username\", \"ldap\"] }', 'default': '{\"providers\": [\"username\", \"ldap\"] }', 'permissions': ['admin']},\n                    'authentication': {'displayName': 'Authentication', 'description': 'API Call Authentication', 'type': 'enumeration', 'options': ['mandatory', 'optional'], 'order': '5', 'value': 'optional', 'default': 'optional', 'permissions': ['admin']},\n                    'authMethod': {'displayName': 'Authentication method', 'description': 'Authentication method', 'type': 'enumeration', 'options': ['any', 'password', 'certificate'], 'order': '6', 'value': 'any', 'default': 'any', 'permissions': ['admin']},\n                    'httpPort': {'displayName': 'HTTP Port', 'description': 'Port to accept HTTP connections on', 'type': 'integer', 'order': '2', 'value': '8081', 'default': '8081', 'permissions': ['admin']},\n                    'allowPing': {'displayName': 'Allow Ping', 'description': 'Allow access to ping, regardless of the authentication required and authentication header', 'type': 'boolean', 'order': '8', 'value': 'true', 'default': 'true', 'permissions': ['admin']},\n                    'enableHttp': {'displayName': 'Enable HTTP', 'description': 'Enable HTTP (disable to use HTTPS)', 'type': 'boolean', 'order': '1', 'value': 'true', 'default': 'true', 'permissions': ['admin']},\n                    'disconnectIdleUserSession': {'description': 
'Disconnect idle user session after certain period of inactivity', 'type': 'integer', 'default': '15', 'displayName': 'Idle User Session Disconnection (In Minutes)', 'order': '10', 'minimum': '1', 'maximum': '1440', 'value': '15', 'permissions': ['admin']}}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/category/rest_api')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc)\n        assert Counter(expected) == Counter(jdoc)\n\n    def test_create_category(self, fledge_url):\n        payload = {'key': cat_name, 'description': 'a publisher', 'display_name': 'Publisher'}\n        conf = {'item1': {'type': 'boolean', 'description': 'A Boolean check', 'default': 'False'},\n                'item2': {'type': 'integer', 'description': 'An Integer check', 'default': '2'},\n                'item3': {'type': 'password', 'description': 'A password check', 'default': 'Fledge'},\n                'item4': {'type': 'string', 'description': 'A string check', 'default': 'fledge'},\n                'item5': {'type': 'script', 'description': 'A script check', 'default': ''},\n                'item6': {'type': 'string', 'description': 'A string check', 'default': 'test', 'group': 'Advanced'}\n                }\n        payload.update({'value': conf})\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request('POST', '/fledge/category', body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert cat_name == jdoc['key']\n        assert \"a publisher\" == jdoc['description']\n        assert \"Publisher\" == jdoc['displayName']\n        expected_value = {\n            'item1': {'type': 'boolean', 'default': 'false', 'value': 'false', 'description': 'A Boolean check'},\n            'item2': {'type': 'integer', 
'description': 'An Integer check', 'default': '2', 'value': '2'},\n            'item3': {'type': 'password', 'description': 'A password check', 'default': 'Fledge', 'value': \"****\"},\n            'item4': {'type': 'string', 'description': 'A string check', 'default': 'fledge', 'value': 'fledge'},\n            'item5': {'type': 'script', 'description': 'A script check', 'default': '', 'value': ''},\n            'item6': {'type': 'string', 'description': 'A string check', 'default': 'test', 'value': 'test',\n                      'group': 'Advanced'}\n        }\n        assert Counter(expected_value) == Counter(jdoc['value'])\n\n    def test_get_category_item(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        encoded_url = '/fledge/category/{}/item1'.format(quote(cat_name))\n        conn.request(\"GET\", encoded_url)\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'boolean' == jdoc['type']\n        assert 'A Boolean check' == jdoc['description']\n        assert 'false' == jdoc['value']\n\n        # Get optional attribute\n        encoded_url = '/fledge/category/{}/item6'.format(quote(cat_name))\n        conn.request(\"GET\", encoded_url)\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'string' == jdoc['type']\n        assert 'Advanced' == jdoc['group']\n\n    def test_set_configuration_item(self, fledge_url):\n        new_value = \"true\"\n        conn = http.client.HTTPConnection(fledge_url)\n        encoded_url = '/fledge/category/{}/item1'.format(quote(cat_name))\n        conn.request(\"PUT\", encoded_url, body=json.dumps({\"value\": new_value}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'boolean' == jdoc['type']\n        assert new_value == 
jdoc['value']\n        assert 'false' == jdoc['default']\n\n        # Verify new value in GET endpoint\n        conn.request(\"GET\", encoded_url)\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'boolean' == jdoc['type']\n        assert new_value == jdoc['value']\n        assert 'false' == jdoc['default']\n\n        # set optional attribute\n        new_val = \"Security\"\n        encoded_url = '/fledge/category/{}/item6'.format(quote(cat_name))\n        conn.request(\"PUT\", encoded_url, body=json.dumps({\"group\": new_val}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'test' == jdoc['default']\n        assert 'test' == jdoc['value']\n        assert new_val == jdoc['group']\n\n        # Verify new value in GET endpoint\n        conn.request(\"GET\", encoded_url)\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'test' == jdoc['default']\n        assert 'test' == jdoc['value']\n        assert new_val == jdoc['group']\n\n    def test_update_configuration_item_bulk(self, fledge_url):\n        expected = {\n            'item1': {'type': 'boolean', 'default': 'false', 'value': 'false', 'description': 'A Boolean check'},\n            'item2': {'type': 'integer', 'description': 'An Integer check', 'default': '2', 'value': '1'},\n            'item3': {'type': 'password', 'description': 'A password check', 'default': 'Fledge', 'value': \"****\"},\n            'item4': {'type': 'string', 'description': 'A string check', 'default': 'fledge', 'value': 'new'},\n            'item5': {'type': 'script', 'description': 'A script check', 'default': '', 'value': ''},\n            'item6': {'type': 'string', 'description': 'A string check', 'default': 'test', 'value': 'test',\n                      'group': 
'Security'}\n        }\n        conn = http.client.HTTPConnection(fledge_url)\n        encoded_url = '/fledge/category/{}'.format(quote(cat_name))\n        conn.request(\"PUT\", encoded_url, body=json.dumps({\"item1\": \"false\", \"item2\": \"1\", \"item4\": \"new\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert Counter(expected) == Counter(jdoc)\n\n    @pytest.mark.skip(reason=\"Not in use\")\n    def test_add_configuration_item(self, fledge_url):\n        pass\n\n    def test_delete_configuration_item_value(self, fledge_url):\n        expected = {'description': 'A string check', 'type': 'string', 'default': 'fledge', 'value': 'fledge'}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/category/{}/item4/value'.format(quote(cat_name)))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert Counter(expected) == Counter(jdoc)\n\n    def test_get_child_category(self, fledge_url):\n        expected = [{'displayName': 'Installation', 'key': 'Installation', 'description': 'Installation'},\n                    {'displayName': 'Admin API', 'key': 'rest_api', 'description': 'Fledge Admin and User REST API'},\n                    {'displayName': 'Fledge Service', 'key': 'service', 'description': 'Fledge Service'}]\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/category/General/children')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        actual = jdoc[\"categories\"]\n        assert 3 == len(actual)\n        result = sorted(expected, key=lambda ex_element: sorted(ex_element.items())\n                        ) == sorted(actual, key=lambda ac_element: sorted(ac_element.items()))\n        assert result\n\n    def 
test_create_child_category(self, fledge_url):\n        payload = {'children': ['rest_api', 'service']}\n        conn = http.client.HTTPConnection(fledge_url)\n        encoded_url = '/fledge/category/{}/children'.format(quote(cat_name))\n        conn.request('POST', encoded_url, body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert Counter(payload) == Counter(jdoc)\n\n        expected_children = [{'key': 'rest_api', 'displayName': 'Admin API', 'description': 'Fledge Admin and User REST API'},\n                             {'key': 'service', 'displayName': 'Fledge Service', 'description': 'Fledge Service'}]\n        conn.request(\"GET\", encoded_url)\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 2 == len(jdoc['categories'])\n        assert Counter({'categories': expected_children}) == Counter(jdoc)\n\n    def test_delete_child_category(self, fledge_url):\n        encoded_url = '/fledge/category/{}/children'.format(quote(cat_name))\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", encoded_url + '/rest_api')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert ['service'] == jdoc['children']\n\n        expected_children = [{'key': 'service', 'displayName': 'Fledge Service', 'description': 'Fledge Service'}]\n        conn.request(\"GET\", encoded_url)\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_children == jdoc['categories']\n\n    def test_delete_parent_category(self, fledge_url):\n        encoded_url = '/fledge/category/{}'.format(quote(cat_name))\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", 
encoded_url + '/parent')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'Parent-child relationship for the parent-{} is deleted'.format(cat_name) == jdoc['message']\n\n        conn.request(\"GET\", encoded_url + '/children')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert [] == jdoc['categories']\n\n    def test_upload_script(self, fledge_url):\n        encoded_url = '/fledge/category/{}'.format(quote(cat_name))\n        script_path = _FLEDGE_ROOT + '/tests/system/python/data/notify35.py'\n        url = 'http://' + fledge_url + encoded_url + '/item5/upload'\n        # Verify the API response keys\n        upload_script = 'curl -s  -F \"script=@{}\" {}  | jq --raw-output \".value,.file,.default,.description,.type\"'.format(script_path, url)\n        exit_code = os.system(upload_script)\n        assert 0 == exit_code\n        expected = {\n            'item1': {'type': 'boolean', 'default': 'false', 'value': 'false', 'description': 'A Boolean check'},\n            'item2': {'type': 'integer', 'description': 'An Integer check', 'default': '2', 'value': '1'},\n            'item3': {'type': 'password', 'description': 'A password check', 'default': 'Fledge', 'value': \"****\"},\n            'item4': {'type': 'string', 'description': 'A string check', 'default': 'fledge', 'value': 'fledge'},\n            'item5': {'default': '', 'value': 'import logging\\nfrom logging.handlers import SysLogHandler\\n\\n\\ndef notify35(message):\\n    logger = logging.getLogger(__name__)\\n    logger.setLevel(level=logging.INFO)\\n    handler = SysLogHandler(address=\\'/dev/log\\')\\n    logger.addHandler(handler)\\n\\n    logger.info(\"notify35 called with {}\".format(message))\\n    print(\"Notification alert: \" + str(message))\\n', 'file': script_file_path, 'type': 'script', 'description': 'A script 
check'},\n            'item6': {'type': 'string', 'description': 'A string check', 'default': 'test', 'value': 'test',\n                      'group': 'Security'}\n        }\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", encoded_url)\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert Counter(expected) == Counter(jdoc)\n        assert os.path.exists(script_file_path) is True\n\n    def test_delete_category(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/category/{}'.format(quote(cat_name)))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'Category {} deleted successfully.'.format(cat_name) == jdoc['result']\n        assert os.path.exists(script_file_path) is False\n\n    def test_list_configuration_with_list_name(self, fledge_url):\n        category = \"TEST #123\"\n        config_item1 = \"info\"\n        config_item2 = \"list-1\"\n        config_item3 = \"list-2\"\n        config_item4 = \"list-3\"\n        payload = {'key': category, 'description': 'Test description'}\n        conf = {\n            config_item1: {'type': 'boolean', 'description': 'A Boolean check', 'default': 'False', 'order': '1'},\n            config_item2: {'type': 'list', 'description': 'A list of variables', 'listName': 'items',\n                           'items': 'string', 'default': '{\"items\": [\"a\", \"b\"]}', 'displayName': 'ListName',\n                           'order': '2'},\n            config_item3: {'type': 'list', 'description': 'A list of variables', 'items': 'string',\n                           'default': '[\"foo\", \"bar\"]', 'displayName': 'Simple List', 'order': '3'},\n            config_item4: {'type': 'list', 'description': 'A list of datapoints to read PLC registers definitions',\n   
                        'items': 'object', 'listName': 'map-items', 'displayName': 'PLC Map',\n                           'default': '{\"map-items\": [{\"datapoint\": \"voltage\", \"register\": \"10\", \"type\": \"integer\"}]}',\n                           'properties': {\n                               'datapoint': {'description': 'The datapoint name to create', 'displayName': 'Datapoint',\n                                             'type': 'string', 'default': ''},\n                               'register': {'description': 'The register number to read', 'displayName': 'Register',\n                                            'type': 'integer', 'default': '0'},\n                               'type': {'description': 'The data type to read', 'displayName': 'Data Type',\n                                        'type': 'enumeration', 'options': ['integer', 'float'], 'default': 'integer'}}}\n        }\n        payload.update({'value': conf})\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request('POST', '/fledge/category', body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert category == jdoc['key']\n        # Verify default and value KV pair for a config item\n        assert conf[config_item2]['default'] == jdoc['value'][config_item2]['default']\n        assert conf[config_item2]['default'] == jdoc['value'][config_item2]['value']\n\n        assert conf[config_item3]['default'] == jdoc['value'][config_item3]['default']\n        assert conf[config_item3]['default'] == jdoc['value'][config_item3]['value']\n\n        assert conf[config_item4]['default'] == jdoc['value'][config_item4]['default']\n        assert conf[config_item4]['default'] == jdoc['value'][config_item4]['value']\n\n        # Merge category test scenario\n        payload.update({'value': conf})\n        conn = http.client.HTTPConnection(fledge_url)\n        
conn.request('POST', '/fledge/category', body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert category == jdoc['key']\n        # Verify No change in default and value KV pair for a config item\n        assert conf[config_item2]['default'] == jdoc['value'][config_item2]['default']\n        assert conf[config_item2]['default'] == jdoc['value'][config_item2]['value']\n        assert conf[config_item3]['default'] == jdoc['value'][config_item3]['default']\n        assert conf[config_item3]['default'] == jdoc['value'][config_item3]['value']\n        assert conf[config_item4]['default'] == jdoc['value'][config_item4]['default']\n        assert conf[config_item4]['default'] == jdoc['value'][config_item4]['value']\n\n        # Bulk update\n        encoded_url = '/fledge/category/{}'.format(quote(category))\n        new_value_for_config_item2 = '[\"a\", \"b\", \"c\"]'\n        # with single config item\n        conn.request(\"PUT\", encoded_url, body=json.dumps({config_item2: new_value_for_config_item2}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert json.dumps({conf[config_item2]['listName']: json.loads(new_value_for_config_item2)}\n                          ) == jdoc[config_item2]['value']\n        # with multiple config items\n        new_value_for_config_item4 = ('[{\"datapoint\": \"voltage\", \"register\": \"10\", \"type\": \"integer\"}, '\n                                      '{\"datapoint\": \"pressure\", \"register\": \"75.4\", \"type\": \"float\"}]')\n        conn.request(\"PUT\", encoded_url, body=json.dumps({config_item2: new_value_for_config_item2,\n                                                          config_item4: new_value_for_config_item4}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc 
= json.loads(r)\n        assert json.dumps({conf[config_item2]['listName']: json.loads(new_value_for_config_item2)}) == \\\n               jdoc[config_item2]['value']\n        assert json.dumps({conf[config_item4]['listName']: json.loads(new_value_for_config_item4)}) == \\\n               jdoc[config_item4]['value']\n\n        # Single update call\n        new_value_for_config_item2 = '[\"a\", \"b\", \"c\", \"d\"]'\n        encoded_url = '/fledge/category/{}/{}'.format(quote(category), config_item2)\n        conn.request(\"PUT\", encoded_url, body=json.dumps({\"value\": new_value_for_config_item2}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert json.dumps({conf[config_item2]['listName']: json.loads(new_value_for_config_item2)}) == jdoc['value']\n\n        new_value_for_config_item4 = '[{\"datapoint\": \"pressure\", \"register\": \"75.4\", \"type\": \"float\"}]'\n        encoded_url = '/fledge/category/{}/{}'.format(quote(category), config_item4)\n        conn.request(\"PUT\", encoded_url, body=json.dumps({\"value\": new_value_for_config_item4}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert json.dumps({conf[config_item4]['listName']: json.loads(new_value_for_config_item4)}) == jdoc['value']\n\n"
  },
  {
    "path": "tests/system/python/api/test_endpoints_with_different_user_types.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test REST API endpoints with different user types \"\"\"\n\nimport http.client\nimport json\nimport time\nimport pytest\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nTOKEN = None\nVIEW_USERNAME = \"view\"\nVIEW_PWD = \"V!3w@1\"\nDATA_VIEW_USERNAME = \"dataview\"\nDATA_VIEW_PWD = \"DV!3w$\"\nCONTROL_USERNAME = \"control\"\nCONTROL_PWD = \"C0ntrol!\"\n\n\n@pytest.fixture\ndef change_to_auth_mandatory(fledge_url, wait_time):\n    # Wait for fledge server to start\n    time.sleep(wait_time)\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"PUT\", '/fledge/category/rest_api', json.dumps({\"authentication\": \"mandatory\"}))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert \"mandatory\" == jdoc['authentication']['value']\n\n    from conftest import restart_and_wait_for_fledge\n    restart_and_wait_for_fledge(fledge_url, wait_time)\n\n\ndef test_setup(reset_and_start_fledge, change_to_auth_mandatory, fledge_url):\n    conn = http.client.HTTPConnection(fledge_url)\n    # Admin login\n    conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert \"Logged in successfully.\" == jdoc['message']\n    assert \"token\" in jdoc\n    assert jdoc['admin']\n    admin_token = jdoc[\"token\"]\n    # Create view user\n    view_payload = {\"username\": VIEW_USERNAME, \"password\": VIEW_PWD, \"role_id\": 3, \"real_name\": \"View\",\n                    \"description\": \"Only to view the configuration\"}\n    conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(view_payload), 
headers={\"authorization\": admin_token})\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert \"{} user has been created successfully.\".format(VIEW_USERNAME) == jdoc[\"message\"]\n    # Create Data view user\n    data_view_payload = {\"username\": DATA_VIEW_USERNAME, \"password\": DATA_VIEW_PWD, \"role_id\": 4,\n                         \"real_name\": \"DataView\", \"description\": \"Only read the data in buffer\"}\n    conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(data_view_payload),\n                 headers={\"authorization\": admin_token})\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert \"{} user has been created successfully.\".format(DATA_VIEW_USERNAME) == jdoc[\"message\"]\n    # Create Control user\n    control_payload = {\"username\": CONTROL_USERNAME, \"password\": CONTROL_PWD, \"role_id\": 5, \"real_name\": \"Control\",\n                       \"description\": \"Same as editor can do and also have access for control scripts and pipelines\"}\n    conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(control_payload), headers={\"authorization\": admin_token})\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert \"{} user has been created successfully.\".format(CONTROL_USERNAME) == jdoc[\"message\"]\n\n\nclass TestAPIEndpointsWithViewUserType:\n    def test_login(self, fledge_url, wait_time):\n        time.sleep(wait_time * 2)\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": VIEW_USERNAME, \"password\": VIEW_PWD}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"Logged in successfully.\" == jdoc['message']\n        assert \"token\" in jdoc\n        
assert not jdoc['admin']\n        global TOKEN\n        TOKEN = jdoc[\"token\"]\n\n    @pytest.mark.parametrize((\"method\", \"route_path\", \"http_status_code\"), [\n        # common\n        (\"GET\", \"/fledge/ping\", 200), (\"PUT\", \"/fledge/shutdown\", 403), (\"PUT\", \"/fledge/restart\", 403),\n        # health\n        (\"GET\", \"/fledge/health/storage\", 200), (\"GET\", \"/fledge/health/logging\", 200),\n        # user & roles\n        (\"GET\", \"/fledge/user\", 403), (\"GET\", \"/fledge/user?id=3\", 200), (\"GET\", \"/fledge/user?id=2\", 403),\n        (\"GET\", \"/fledge/user?username={}\".format(VIEW_USERNAME), 200),\n        (\"GET\", \"/fledge/user?username={}\".format(CONTROL_USERNAME), 403),\n        (\"GET\", \"/fledge/user?id={}&username={}\".format(3, VIEW_USERNAME), 200),\n        (\"GET\", \"/fledge/user?username={}&id={}\".format(VIEW_USERNAME, 3), 200),\n        (\"GET\", \"/fledge/user?username=admin&id=1\", 403),\n        (\"PUT\", \"/fledge/user\", 500), (\"PUT\", \"/fledge/user/1/password\", 403), (\"PUT\", \"/fledge/user/3/password\", 500),\n        (\"GET\", \"/fledge/user/role\", 403),\n        # auth\n        (\"POST\", \"/fledge/login\", 403), (\"PUT\", \"/fledge/31/logout\", 401),\n        (\"GET\", \"/fledge/auth/ott\", 200),\n        # admin\n        (\"POST\", \"/fledge/admin/user\", 403), (\"DELETE\", \"/fledge/admin/3/delete\", 403), (\"PUT\", \"/fledge/admin/3\", 403),\n        (\"PUT\", \"/fledge/admin/3/enable\", 403), (\"PUT\", \"/fledge/admin/3/reset\", 403),\n        (\"POST\", \"/fledge/admin/3/authcertificate\", 403),\n        # category\n        (\"GET\", \"/fledge/category\", 200), (\"POST\", \"/fledge/category\", 403), (\"GET\", \"/fledge/category/General\", 200),\n        (\"PUT\", \"/fledge/category/General\", 403), (\"DELETE\", \"/fledge/category/General\", 403),\n        (\"POST\", \"/fledge/category/General/children\", 403), (\"GET\", \"/fledge/category/General/children\", 200),\n        (\"DELETE\", 
\"/fledge/category/General/children/Advanced\", 403),\n        (\"DELETE\", \"/fledge/category/General/parent\", 403),\n        (\"GET\", \"/fledge/category/rest_api/allowPing\", 200), (\"PUT\", \"/fledge/category/rest_api/allowPing\", 403),\n        (\"DELETE\", \"/fledge/category/rest_api/allowPing/value\", 403),\n        (\"POST\", \"/fledge/category/rest_api/allowPing/upload\", 403),\n        # schedule processes & schedules\n        (\"GET\", \"/fledge/schedule/process\", 200), (\"POST\", \"/fledge/schedule/process\", 403),\n        (\"GET\", \"/fledge/schedule/process/purge\", 200),\n        (\"GET\", \"/fledge/schedule\", 200), (\"POST\", \"/fledge/schedule\", 403), (\"GET\", \"/fledge/schedule/type\", 200),\n        (\"GET\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0\", 200),\n        (\"PUT\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0/enable\", 403),\n        (\"PUT\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0/disable\", 403),\n        (\"PUT\", \"/fledge/schedule/enable\", 403), (\"PUT\", \"/fledge/schedule/disable\", 403),\n        (\"POST\", \"/fledge/schedule/start/2176eb68-7303-11e7-8cf7-a6006ad3dba0\", 403),\n        (\"PUT\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0\", 403),\n        (\"DELETE\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0\", 403),\n        # tasks\n        (\"GET\", \"/fledge/task\", 200), (\"GET\", \"/fledge/task/state\", 200), (\"GET\", \"/fledge/task/latest\", 200),\n        (\"GET\", \"/fledge/task/123\", 404), (\"PUT\", \"/fledge/task/123/cancel\", 403),\n        (\"POST\", \"/fledge/scheduled/task\", 403), (\"DELETE\", \"/fledge/scheduled/task/blah\", 403),\n        # service\n        (\"POST\", \"/fledge/service\", 403), (\"GET\", \"/fledge/service\", 200), (\"DELETE\", \"/fledge/service/blah\", 403),\n        # (\"GET\", \"/fledge/service/available\", 200), -- checked manually and commented out only to avoid apt-update\n        (\"GET\", 
\"/fledge/service/installed\", 200),\n        (\"PUT\", \"/fledge/service/Southbound/blah/update\", 403), (\"POST\", \"/fledge/service/blah/otp\", 403),\n        # south & north\n        (\"GET\", \"/fledge/south\", 200), (\"GET\", \"/fledge/north\", 200),\n        # asset browse\n        (\"GET\", \"/fledge/asset\", 200), (\"GET\", \"/fledge/asset/sinusoid\", 200),\n        (\"GET\", \"/fledge/asset/sinusoid/latest\", 200),\n        (\"GET\", \"/fledge/asset/sinusoid/summary\", 404), (\"GET\", \"/fledge/asset/sinusoid/sinusoid\", 200),\n        (\"GET\", \"/fledge/asset/sinusoid/sinusoid/summary\", 404), (\"GET\", \"/fledge/asset/sinusoid/sinusoid/series\", 200),\n        (\"GET\", \"/fledge/asset/sinusoid/bucket/1\", 200), (\"GET\", \"/fledge/asset/sinusoid/sinusoid/bucket/1\", 200),\n        (\"GET\", \"/fledge/structure/asset\", 200), (\"DELETE\", \"/fledge/asset\", 403),\n        (\"DELETE\", \"/fledge/asset/sinusoid\", 403),\n        # asset tracker\n        (\"GET\", \"/fledge/track\", 200), (\"GET\", \"/fledge/track/storage/assets\", 200),\n        (\"PUT\", \"/fledge/track/service/foo/asset/bar/event/Ingest\", 403),\n        # statistics\n        (\"GET\", \"/fledge/statistics\", 200), (\"GET\", \"/fledge/statistics/history\", 200),\n        (\"GET\", \"/fledge/statistics/rate?periods=1&statistics=FOO\", 200),\n        # audit trail\n        (\"POST\", \"/fledge/audit\", 403), (\"GET\", \"/fledge/audit\", 200), (\"GET\", \"/fledge/audit/logcode\", 200),\n        (\"GET\", \"/fledge/audit/severity\", 200),\n        # backup & restore\n        (\"GET\", \"/fledge/backup\", 200), (\"POST\", \"/fledge/backup\", 403), (\"POST\", \"/fledge/backup/upload\", 403),\n        (\"GET\", \"/fledge/backup/status\", 200), (\"GET\", \"/fledge/backup/123\", 404),\n        (\"DELETE\", \"/fledge/backup/123\", 403), (\"GET\", \"/fledge/backup/123/download\", 403),\n        (\"PUT\", \"/fledge/backup/123/restore\", 403),\n        # package update\n        # (\"GET\", 
\"/fledge/update\", 200), -- checked manually and commented out only to avoid apt-update run\n        (\"PUT\", \"/fledge/update\", 403),\n        # certs store\n        (\"GET\", \"/fledge/certificate\", 200), (\"POST\", \"/fledge/certificate\", 403),\n        (\"DELETE\", \"/fledge/certificate/user\", 403),\n        # support bundle\n        (\"GET\", \"/fledge/support\", 200), (\"GET\", \"/fledge/support/foo\", 403), (\"POST\", \"/fledge/support\", 403),\n        # syslogs & package logs\n        (\"GET\", \"/fledge/syslog\", 200), (\"GET\", \"/fledge/package/log\", 200), (\"GET\", \"/fledge/package/log/foo\", 400),\n        (\"GET\", \"/fledge/package/install/status\", 404),\n        # plugins\n        (\"GET\", \"/fledge/plugins/installed\", 200),\n        # (\"GET\", \"/fledge/plugins/available\", 200), -- checked manually and commented out only to avoid apt-update\n        (\"POST\", \"/fledge/plugins\", 403), (\"PUT\", \"/fledge/plugins/south/sinusoid/update\", 403),\n        (\"DELETE\", \"/fledge/plugins/south/sinusoid\", 403), (\"PUT\", \"/fledge/plugin/validate\", 400), (\"GET\", \"/fledge/service/foo/persist\", 404),\n        (\"GET\", \"/fledge/service/foo/plugin/omf/data\", 404), (\"POST\", \"/fledge/service/foo/plugin/omf/data\", 403),\n        (\"DELETE\", \"/fledge/service/foo/plugin/omf/data\", 403),\n        # filters\n        (\"POST\", \"/fledge/filter\", 403), (\"PUT\", \"/fledge/filter/foo/pipeline\", 403),\n        (\"GET\", \"/fledge/filter/foo/pipeline\", 404), (\"GET\", \"/fledge/filter/bar\", 404), (\"GET\", \"/fledge/filter\", 200),\n        (\"DELETE\", \"/fledge/filter/foo/pipeline\", 403), (\"DELETE\", \"/fledge/filter/bar\", 403),\n        # snapshots\n        (\"GET\", \"/fledge/snapshot/plugins\", 403), (\"POST\", \"/fledge/snapshot/plugins\", 403),\n        (\"PUT\", \"/fledge/snapshot/plugins/1\", 403), (\"DELETE\", \"/fledge/snapshot/plugins/1\", 403),\n        (\"GET\", \"/fledge/snapshot/category\", 403), (\"POST\", 
\"/fledge/snapshot/category\", 403),\n        (\"PUT\", \"/fledge/snapshot/category/1\", 403), (\"DELETE\", \"/fledge/snapshot/category/1\", 403),\n        (\"GET\", \"/fledge/snapshot/schedule\", 403), (\"POST\", \"/fledge/snapshot/schedule\", 403),\n        (\"PUT\", \"/fledge/snapshot/schedule/1\", 403), (\"DELETE\", \"/fledge/snapshot/schedule/1\", 403),\n        # repository\n        (\"POST\", \"/fledge/repository\", 403),\n        # ACL\n        (\"POST\", \"/fledge/ACL\", 403), (\"GET\", \"/fledge/ACL\", 200), (\"GET\", \"/fledge/ACL/foo\", 404),\n        (\"PUT\", \"/fledge/ACL/foo\", 403), (\"DELETE\", \"/fledge/ACL/foo\", 403), (\"PUT\", \"/fledge/service/foo/ACL\", 403),\n        (\"DELETE\", \"/fledge/service/foo/ACL\", 403),\n        # control script\n        (\"POST\", \"/fledge/control/script\", 403), (\"GET\", \"/fledge/control/script\", 200),\n        (\"GET\", \"/fledge/control/script/foo\", 404), (\"PUT\", \"/fledge/control/script/foo\", 403),\n        (\"DELETE\", \"/fledge/control/script/foo\", 403), (\"POST\", \"/fledge/control/script/foo/schedule\", 403),\n        # control pipeline\n        (\"POST\", \"/fledge/control/pipeline\", 403), (\"GET\", \"/fledge/control/lookup\", 200),\n        (\"GET\", \"/fledge/control/pipeline\", 200), (\"GET\", \"/fledge/control/pipeline/1\", 404),\n        (\"PUT\", \"/fledge/control/pipeline/1\", 403), (\"DELETE\", \"/fledge/control/pipeline/1\", 403),\n        # python packages\n        (\"GET\", \"/fledge/python/packages\", 200), (\"POST\", \"/fledge/python/package\", 403),\n        # notification\n        (\"GET\", \"/fledge/notification\", 200), (\"GET\", \"/fledge/notification/plugin\", 404),\n        (\"GET\", \"/fledge/notification/type\", 200), (\"GET\", \"/fledge/notification/N1\", 400),\n        (\"POST\", \"/fledge/notification\", 403), (\"PUT\", \"/fledge/notification/N1\", 403),\n        (\"DELETE\", \"/fledge/notification/N1\", 403), (\"GET\", \"/fledge/notification/N1/delivery\", 404),\n     
   (\"POST\", \"/fledge/notification/N1/delivery\", 403), (\"GET\", \"/fledge/notification/N1/delivery/C1\", 404),\n        (\"DELETE\", \"/fledge/notification/N1/delivery/C1\", 403),\n        # performance monitors\n        (\"GET\", \"/fledge/monitors\", 200), (\"GET\", \"/fledge/monitors/SVC\", 200),\n        (\"GET\", \"/fledge/monitors/Svc/Counter\", 200), (\"DELETE\", \"/fledge/monitors\", 403),\n        (\"DELETE\", \"/fledge/monitors/SVC\", 403), (\"DELETE\", \"/fledge/monitors/Svc/Counter\", 403),\n        # alerts\n        (\"GET\", \"/fledge/alert\", 200), (\"DELETE\", \"/fledge/alert\", 403), (\"DELETE\", \"/fledge/alert/blah\", 403),\n        # pipeline debugger\n        (\"GET\", \"/fledge/service/name/debug?action=state\", 403),\n        (\"GET\", \"/fledge/service/name/debug?action=buffer\", 403),\n        (\"PUT\", \"/fledge/service/name/debug?action=buffer\", 403),\n        (\"PUT\", \"/fledge/service/{name}/debug?action=attach\", 403)\n    ])\n    def test_endpoints(self, fledge_url, method, route_path, http_status_code, storage_plugin):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(method, route_path, headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert http_status_code == r.status\n        r.read().decode()\n\n    def test_logout_me(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n\nclass TestAPIEndpointsWithDataViewUserType:\n    def test_login(self, fledge_url, wait_time):\n        time.sleep(wait_time * 2)\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": DATA_VIEW_USERNAME, \"password\": DATA_VIEW_PWD}))\n        r = 
conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"Logged in successfully.\" == jdoc['message']\n        assert \"token\" in jdoc\n        assert not jdoc['admin']\n        global TOKEN\n        TOKEN = jdoc[\"token\"]\n\n    @pytest.mark.parametrize((\"method\", \"route_path\", \"http_status_code\"), [\n        # common\n        (\"GET\", \"/fledge/ping\", 200), (\"PUT\", \"/fledge/shutdown\", 403), (\"PUT\", \"/fledge/restart\", 403),\n        # health\n        (\"GET\", \"/fledge/health/storage\", 403), (\"GET\", \"/fledge/health/logging\", 403),\n        # user & roles\n        (\"GET\", \"/fledge/user\", 403), (\"GET\", \"/fledge/user?id=4\", 200), (\"GET\", \"/fledge/user?id=1\", 403),\n        (\"GET\", \"/fledge/user?username={}\".format(DATA_VIEW_USERNAME), 200), (\"GET\", \"/fledge/user?username=user\", 403),\n        (\"GET\", \"/fledge/user?id={}&username={}\".format(4, DATA_VIEW_USERNAME), 200),\n        (\"GET\", \"/fledge/user?id=1&username=admin\", 403),\n        (\"GET\", \"/fledge/user?username={}&id={}\".format(DATA_VIEW_USERNAME, 4), 200),\n        (\"PUT\", \"/fledge/user\", 500), (\"PUT\", \"/fledge/user/1/password\", 403), (\"PUT\", \"/fledge/user/4/password\", 500),\n        (\"GET\", \"/fledge/user/role\", 403),\n        # auth\n        (\"POST\", \"/fledge/login\", 403), (\"PUT\", \"/fledge/31/logout\", 401),\n        (\"GET\", \"/fledge/auth/ott\", 403),\n        # admin\n        (\"POST\", \"/fledge/admin/user\", 403), (\"DELETE\", \"/fledge/admin/3/delete\", 403), (\"PUT\", \"/fledge/admin/3\", 403),\n        (\"PUT\", \"/fledge/admin/3/enable\", 403), (\"PUT\", \"/fledge/admin/3/reset\", 403),\n        (\"POST\", \"/fledge/admin/3/authcertificate\", 403),\n        # category\n        (\"GET\", \"/fledge/category\", 403), (\"POST\", \"/fledge/category\", 403), (\"GET\", \"/fledge/category/General\", 403),\n        (\"PUT\", \"/fledge/category/General\", 403), 
(\"DELETE\", \"/fledge/category/General\", 403),\n        (\"POST\", \"/fledge/category/General/children\", 403), (\"GET\", \"/fledge/category/General/children\", 403),\n        (\"DELETE\", \"/fledge/category/General/children/Advanced\", 403),\n        (\"DELETE\", \"/fledge/category/General/parent\", 403),\n        (\"GET\", \"/fledge/category/rest_api/allowPing\", 403), (\"PUT\", \"/fledge/category/rest_api/allowPing\", 403),\n        (\"DELETE\", \"/fledge/category/rest_api/allowPing/value\", 403),\n        (\"POST\", \"/fledge/category/rest_api/allowPing/upload\", 403),\n        # schedule processes & schedules\n        (\"GET\", \"/fledge/schedule/process\", 403), (\"POST\", \"/fledge/schedule/process\", 403),\n        (\"GET\", \"/fledge/schedule/process/purge\", 403),\n        (\"GET\", \"/fledge/schedule\", 403), (\"POST\", \"/fledge/schedule\", 403), (\"GET\", \"/fledge/schedule/type\", 403),\n        (\"GET\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0\", 403),\n        (\"PUT\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0/enable\", 403),\n        (\"PUT\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0/disable\", 403),\n        (\"PUT\", \"/fledge/schedule/enable\", 403), (\"PUT\", \"/fledge/schedule/disable\", 403),\n        (\"POST\", \"/fledge/schedule/start/2176eb68-7303-11e7-8cf7-a6006ad3dba0\", 403),\n        (\"PUT\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0\", 403),\n        (\"DELETE\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0\", 403),\n        # tasks\n        (\"GET\", \"/fledge/task\", 403), (\"GET\", \"/fledge/task/state\", 403), (\"GET\", \"/fledge/task/latest\", 403),\n        (\"GET\", \"/fledge/task/123\", 403), (\"PUT\", \"/fledge/task/123/cancel\", 403),\n        (\"POST\", \"/fledge/scheduled/task\", 403), (\"DELETE\", \"/fledge/scheduled/task/blah\", 403),\n        # service\n        (\"POST\", \"/fledge/service\", 403), (\"GET\", \"/fledge/service\", 200), (\"DELETE\", 
\"/fledge/service/blah\", 403),\n        (\"GET\", \"/fledge/service/available\", 403), (\"GET\", \"/fledge/service/installed\", 403),\n        (\"PUT\", \"/fledge/service/Southbound/blah/update\", 403), (\"POST\", \"/fledge/service/blah/otp\", 403),\n        # south & north\n        (\"GET\", \"/fledge/south\", 403), (\"GET\", \"/fledge/north\", 403),\n        # asset browse\n        (\"GET\", \"/fledge/asset\", 200), (\"GET\", \"/fledge/asset/sinusoid\", 200),\n        (\"GET\", \"/fledge/asset/sinusoid/latest\", 200),\n        (\"GET\", \"/fledge/asset/sinusoid/summary\", 404), (\"GET\", \"/fledge/asset/sinusoid/sinusoid\", 200),\n        (\"GET\", \"/fledge/asset/sinusoid/sinusoid/summary\", 404), (\"GET\", \"/fledge/asset/sinusoid/sinusoid/series\", 200),\n        (\"GET\", \"/fledge/asset/sinusoid/bucket/1\", 200), (\"GET\", \"/fledge/asset/sinusoid/sinusoid/bucket/1\", 200),\n        (\"GET\", \"/fledge/structure/asset\", 403), (\"DELETE\", \"/fledge/asset\", 403),\n        (\"DELETE\", \"/fledge/asset/sinusoid\", 403),\n        # asset tracker\n        (\"GET\", \"/fledge/track\", 403), (\"GET\", \"/fledge/track/storage/assets\", 403),\n        (\"PUT\", \"/fledge/track/service/foo/asset/bar/event/Ingest\", 403),\n        # statistics\n        (\"GET\", \"/fledge/statistics\", 200), (\"GET\", \"/fledge/statistics/history\", 200),\n        (\"GET\", \"/fledge/statistics/rate?periods=1&statistics=FOO\", 200),\n        # audit trail\n        (\"POST\", \"/fledge/audit\", 403), (\"GET\", \"/fledge/audit\", 403), (\"GET\", \"/fledge/audit/logcode\", 403),\n        (\"GET\", \"/fledge/audit/severity\", 403),\n        # backup & restore\n        (\"GET\", \"/fledge/backup\", 403), (\"POST\", \"/fledge/backup\", 403), (\"POST\", \"/fledge/backup/upload\", 403),\n        (\"GET\", \"/fledge/backup/status\", 403), (\"GET\", \"/fledge/backup/123\", 403),\n        (\"DELETE\", \"/fledge/backup/123\", 403), (\"GET\", \"/fledge/backup/123/download\", 403),\n        
(\"PUT\", \"/fledge/backup/123/restore\", 403),\n        # package update\n        # (\"GET\", \"/fledge/update\", 200), -- checked manually and commented out only to avoid apt-update\n        (\"PUT\", \"/fledge/update\", 403),\n        # certs store\n        (\"GET\", \"/fledge/certificate\", 403), (\"POST\", \"/fledge/certificate\", 403),\n        (\"DELETE\", \"/fledge/certificate/user\", 403),\n        # support bundle\n        (\"GET\", \"/fledge/support\", 403), (\"GET\", \"/fledge/support/foo\", 403), (\"POST\", \"/fledge/support\", 403),\n        # syslogs & package logs\n        (\"GET\", \"/fledge/syslog\", 403), (\"GET\", \"/fledge/package/log\", 403), (\"GET\", \"/fledge/package/log/foo\", 403),\n        (\"GET\", \"/fledge/package/install/status\", 403),\n        # plugins\n        (\"GET\", \"/fledge/plugins/installed\", 403), (\"GET\", \"/fledge/plugins/available\", 403),\n        (\"POST\", \"/fledge/plugins\", 403), (\"PUT\", \"/fledge/plugins/south/sinusoid/update\", 403),\n        (\"DELETE\", \"/fledge/plugins/south/sinusoid\", 403), (\"PUT\", \"/fledge/plugin/validate\", 403), (\"GET\", \"/fledge/service/foo/persist\", 403),\n        (\"GET\", \"/fledge/service/foo/plugin/omf/data\", 403), (\"POST\", \"/fledge/service/foo/plugin/omf/data\", 403),\n        (\"DELETE\", \"/fledge/service/foo/plugin/omf/data\", 403),\n        # filters\n        (\"POST\", \"/fledge/filter\", 403), (\"PUT\", \"/fledge/filter/foo/pipeline\", 403),\n        (\"GET\", \"/fledge/filter/foo/pipeline\", 403), (\"GET\", \"/fledge/filter/bar\", 403), (\"GET\", \"/fledge/filter\", 403),\n        (\"DELETE\", \"/fledge/filter/foo/pipeline\", 403), (\"DELETE\", \"/fledge/filter/bar\", 403),\n        # snapshots\n        (\"GET\", \"/fledge/snapshot/plugins\", 403), (\"POST\", \"/fledge/snapshot/plugins\", 403),\n        (\"PUT\", \"/fledge/snapshot/plugins/1\", 403), (\"DELETE\", \"/fledge/snapshot/plugins/1\", 403),\n        (\"GET\", \"/fledge/snapshot/category\", 403), 
(\"POST\", \"/fledge/snapshot/category\", 403),\n        (\"PUT\", \"/fledge/snapshot/category/1\", 403), (\"DELETE\", \"/fledge/snapshot/category/1\", 403),\n        (\"GET\", \"/fledge/snapshot/schedule\", 403), (\"POST\", \"/fledge/snapshot/schedule\", 403),\n        (\"PUT\", \"/fledge/snapshot/schedule/1\", 403), (\"DELETE\", \"/fledge/snapshot/schedule/1\", 403),\n        # repository\n        (\"POST\", \"/fledge/repository\", 403),\n        # ACL\n        (\"POST\", \"/fledge/ACL\", 403), (\"GET\", \"/fledge/ACL\", 403), (\"GET\", \"/fledge/ACL/foo\", 403),\n        (\"PUT\", \"/fledge/ACL/foo\", 403), (\"DELETE\", \"/fledge/ACL/foo\", 403), (\"PUT\", \"/fledge/service/foo/ACL\", 403),\n        (\"DELETE\", \"/fledge/service/foo/ACL\", 403),\n        # control script\n        (\"POST\", \"/fledge/control/script\", 403), (\"GET\", \"/fledge/control/script\", 403),\n        (\"GET\", \"/fledge/control/script/foo\", 403), (\"PUT\", \"/fledge/control/script/foo\", 403),\n        (\"DELETE\", \"/fledge/control/script/foo\", 403), (\"POST\", \"/fledge/control/script/foo/schedule\", 403),\n        # control pipeline\n        (\"POST\", \"/fledge/control/pipeline\", 403), (\"GET\", \"/fledge/control/lookup\", 403),\n        (\"GET\", \"/fledge/control/pipeline\", 403), (\"GET\", \"/fledge/control/pipeline/1\", 403),\n        (\"PUT\", \"/fledge/control/pipeline/1\", 403), (\"DELETE\", \"/fledge/control/pipeline/1\", 403),\n        # python packages\n        (\"GET\", \"/fledge/python/packages\", 403), (\"POST\", \"/fledge/python/package\", 403),\n        # notification\n        (\"GET\", \"/fledge/notification\", 403), (\"GET\", \"/fledge/notification/plugin\", 403),\n        (\"GET\", \"/fledge/notification/type\", 403), (\"GET\", \"/fledge/notification/N1\", 403),\n        (\"POST\", \"/fledge/notification\", 403), (\"PUT\", \"/fledge/notification/N1\", 403),\n        (\"DELETE\", \"/fledge/notification/N1\", 403), (\"GET\", \"/fledge/notification/N1/delivery\", 
403),\n        (\"POST\", \"/fledge/notification/N1/delivery\", 403), (\"GET\", \"/fledge/notification/N1/delivery/C1\", 403),\n        (\"DELETE\", \"/fledge/notification/N1/delivery/C1\", 403),\n        # performance monitors\n        (\"GET\", \"/fledge/monitors\", 403), (\"GET\", \"/fledge/monitors/SVC\", 403),\n        (\"GET\", \"/fledge/monitors/Svc/Counter\", 403), (\"DELETE\", \"/fledge/monitors\", 403),\n        (\"DELETE\", \"/fledge/monitors/SVC\", 403), (\"DELETE\", \"/fledge/monitors/Svc/Counter\", 403),\n        # alerts\n        (\"GET\", \"/fledge/alert\", 403), (\"DELETE\", \"/fledge/alert\", 403), (\"DELETE\", \"/fledge/alert/blah\", 403),\n        # pipeline debugger\n        (\"GET\", \"/fledge/service/name/debug?action=state\", 403),\n        (\"GET\", \"/fledge/service/name/debug?action=buffer\", 403),\n        (\"PUT\", \"/fledge/service/name/debug?action=buffer\", 403),\n        (\"PUT\", \"/fledge/service/name/debug?action=attach\", 403)\n    ])\n    def test_endpoints(self, fledge_url, method, route_path, http_status_code, storage_plugin):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(method, route_path, headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert http_status_code == r.status\n        r.read().decode()\n\n    def test_logout_me(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n\nclass TestAPIEndpointsWithControlUserType:\n    def test_login(self, fledge_url, wait_time):\n        time.sleep(wait_time * 2)\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": CONTROL_USERNAME,\n                                                       
   \"password\": CONTROL_PWD}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"Logged in successfully.\" == jdoc['message']\n        assert \"token\" in jdoc\n        assert not jdoc['admin']\n        global TOKEN\n        TOKEN = jdoc[\"token\"]\n\n    @pytest.mark.parametrize((\"method\", \"route_path\", \"http_status_code\"), [\n        # common\n        (\"GET\", \"/fledge/ping\", 200),  # (\"PUT\", \"/fledge/shutdown\", 200), (\"PUT\", \"/fledge/restart\", 200),\n        # health\n        (\"GET\", \"/fledge/health/storage\", 200), (\"GET\", \"/fledge/health/logging\", 200),\n        # user & roles\n        (\"GET\", \"/fledge/user\", 200), (\"GET\", \"/fledge/user?id=5\", 200), (\"GET\", \"/fledge/user?id=1\", 200),\n        (\"GET\", \"/fledge/user?username={}\".format(CONTROL_USERNAME), 200), (\"GET\", \"/fledge/user?username=admin\", 200),\n        (\"GET\", \"/fledge/user?id={}&username={}\".format(5, CONTROL_USERNAME), 200),\n        (\"GET\", \"/fledge/user?username={}&id={}\".format(CONTROL_USERNAME, 5), 200),\n        (\"GET\", \"/fledge/user?username={}&id={}\".format(VIEW_USERNAME, 3), 200),\n        (\"GET\", \"/fledge/user?id={}&username={}\".format(4, DATA_VIEW_USERNAME), 200),\n        (\"PUT\", \"/fledge/user\", 500), (\"PUT\", \"/fledge/user/1/password\", 401), (\"PUT\", \"/fledge/user/5/password\", 500),\n        (\"GET\", \"/fledge/user/role\", 403),\n        # auth\n        (\"POST\", \"/fledge/login\", 400), (\"PUT\", \"/fledge/31/logout\", 401),\n        (\"GET\", \"/fledge/auth/ott\", 200),\n        # admin\n        (\"POST\", \"/fledge/admin/user\", 403), (\"DELETE\", \"/fledge/admin/3/delete\", 403), (\"PUT\", \"/fledge/admin/3\", 403),\n        (\"PUT\", \"/fledge/admin/3/enable\", 403), (\"PUT\", \"/fledge/admin/3/reset\", 403),\n        (\"POST\", \"/fledge/admin/3/authcertificate\", 403),\n        # category\n        (\"GET\", 
\"/fledge/category\", 200), (\"POST\", \"/fledge/category\", 400), (\"GET\", \"/fledge/category/General\", 200),\n        (\"PUT\", \"/fledge/category/General\", 400), (\"DELETE\", \"/fledge/category/General\", 400),\n        (\"POST\", \"/fledge/category/General/children\", 500), (\"GET\", \"/fledge/category/General/children\", 200),\n        (\"DELETE\", \"/fledge/category/General/children/Advanced\", 200),\n        (\"DELETE\", \"/fledge/category/General/parent\", 200),\n        (\"GET\", \"/fledge/category/rest_api/allowPing\", 200), (\"PUT\", \"/fledge/category/rest_api/allowPing\", 500),\n        (\"DELETE\", \"/fledge/category/rest_api/allowPing/value\", 200),\n        (\"POST\", \"/fledge/category/rest_api/allowPing/upload\", 400),\n        # schedule processes & schedules\n        (\"GET\", \"/fledge/schedule/process\", 200), (\"POST\", \"/fledge/schedule/process\", 500),\n        (\"GET\", \"/fledge/schedule/process/purge\", 200),\n        (\"GET\", \"/fledge/schedule\", 200), (\"POST\", \"/fledge/schedule\", 400), (\"GET\", \"/fledge/schedule/type\", 200),\n        (\"GET\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0\", 200),\n        (\"PUT\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0/enable\", 200),\n        (\"PUT\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0/disable\", 200),\n        (\"PUT\", \"/fledge/schedule/enable\", 404), (\"PUT\", \"/fledge/schedule/disable\", 404),\n        (\"POST\", \"/fledge/schedule/start/2176eb68-7303-11e7-8cf7-a6006ad3dba0\", 200),\n        (\"PUT\", \"/fledge/schedule/2176eb68-7303-11e7-8cf7-a6006ad3dba0\", 400),\n        (\"DELETE\", \"/fledge/schedule/d1631422-9ec6-11e7-abc4-cec278b6b50a\", 200),\n        # tasks\n        (\"GET\", \"/fledge/task\", 200), (\"GET\", \"/fledge/task/state\", 200), (\"GET\", \"/fledge/task/latest\", 200),\n        (\"GET\", \"/fledge/task/123\", 404), (\"PUT\", \"/fledge/task/123/cancel\", 404),\n        (\"POST\", \"/fledge/scheduled/task\", 400), 
(\"DELETE\", \"/fledge/scheduled/task/blah\", 404),\n        # service\n        (\"POST\", \"/fledge/service\", 400), (\"GET\", \"/fledge/service\", 200), (\"DELETE\", \"/fledge/service/blah\", 404),\n        # (\"GET\", \"/fledge/service/available\", 200), -- checked manually and commented out only to avoid apt-update\n        (\"GET\", \"/fledge/service/installed\", 200),\n        (\"PUT\", \"/fledge/service/Southbound/blah/update\", 400), (\"POST\", \"/fledge/service/blah/otp\", 403),\n        # south & north\n        (\"GET\", \"/fledge/south\", 200), (\"GET\", \"/fledge/north\", 200),\n        # asset browse\n        (\"GET\", \"/fledge/asset\", 200), (\"GET\", \"/fledge/asset/sinusoid\", 200),\n        (\"GET\", \"/fledge/asset/sinusoid/latest\", 200),\n        (\"GET\", \"/fledge/asset/sinusoid/summary\", 404), (\"GET\", \"/fledge/asset/sinusoid/sinusoid\", 200),\n        (\"GET\", \"/fledge/asset/sinusoid/sinusoid/summary\", 404), (\"GET\", \"/fledge/asset/sinusoid/sinusoid/series\", 200),\n        (\"GET\", \"/fledge/asset/sinusoid/bucket/1\", 200), (\"GET\", \"/fledge/asset/sinusoid/sinusoid/bucket/1\", 200),\n        (\"GET\", \"/fledge/structure/asset\", 200), (\"DELETE\", \"/fledge/asset\", 200),\n        (\"DELETE\", \"/fledge/asset/sinusoid\", 200),\n        # asset tracker\n        (\"GET\", \"/fledge/track\", 200), (\"GET\", \"/fledge/track/storage/assets\", 200),\n        (\"PUT\", \"/fledge/track/service/foo/asset/bar/event/Ingest\", 404),\n        # statistics\n        (\"GET\", \"/fledge/statistics\", 200), (\"GET\", \"/fledge/statistics/history\", 200),\n        (\"GET\", \"/fledge/statistics/rate?periods=1&statistics=FOO\", 200),\n        # audit trail\n        (\"POST\", \"/fledge/audit\", 500), (\"GET\", \"/fledge/audit\", 200), (\"GET\", \"/fledge/audit/logcode\", 200),\n        (\"GET\", \"/fledge/audit/severity\", 200),\n        # backup & restore\n        (\"GET\", \"/fledge/backup\", 200), (\"POST\", \"/fledge/backup\", 403),\n        
(\"POST\", \"/fledge/backup/upload\", 403),\n        (\"GET\", \"/fledge/backup/status\", 200), (\"GET\", \"/fledge/backup/123\", 404),\n        (\"DELETE\", \"/fledge/backup/123\", 403), (\"GET\", \"/fledge/backup/123/download\", 403),\n        (\"PUT\", \"/fledge/backup/123/restore\", 403),\n        # package update\n        # (\"GET\", \"/fledge/update\", 200), -- checked manually and commented out only to avoid apt-update run\n        # (\"PUT\", \"/fledge/update\", 200), -- checked manually\n        # certs store\n        (\"GET\", \"/fledge/certificate\", 200), (\"POST\", \"/fledge/certificate\", 400),\n        (\"DELETE\", \"/fledge/certificate/user\", 403),\n        # support bundle\n        (\"GET\", \"/fledge/support\", 200), (\"GET\", \"/fledge/support/foo\", 403), (\"POST\", \"/fledge/support\", 403),\n        # syslogs & package logs\n        (\"GET\", \"/fledge/syslog\", 200), (\"GET\", \"/fledge/package/log\", 200), (\"GET\", \"/fledge/package/log/foo\", 400),\n        (\"GET\", \"/fledge/package/install/status\", 404),\n        # plugins\n        (\"GET\", \"/fledge/plugins/installed\", 200),\n        # (\"GET\", \"/fledge/plugins/available\", 200), -- checked manually and commented out only to avoid apt operations\n        # (\"PUT\", \"/fledge/plugins/south/sinusoid/update\", 200),\n        # (\"DELETE\", \"/fledge/plugins/south/sinusoid\", 404),\n        (\"POST\", \"/fledge/plugins\", 400), (\"PUT\", \"/fledge/plugin/validate\", 400), (\"GET\", \"/fledge/service/foo/persist\", 404),\n        (\"GET\", \"/fledge/service/foo/plugin/omf/data\", 404), (\"POST\", \"/fledge/service/foo/plugin/omf/data\", 404),\n        (\"DELETE\", \"/fledge/service/foo/plugin/omf/data\", 404),\n        # filters\n        (\"POST\", \"/fledge/filter\", 404), (\"PUT\", \"/fledge/filter/foo/pipeline\", 404),\n        (\"GET\", \"/fledge/filter/foo/pipeline\", 404), (\"GET\", \"/fledge/filter/bar\", 404), (\"GET\", \"/fledge/filter\", 200),\n        (\"DELETE\", 
\"/fledge/filter/foo/pipeline\", 500), (\"DELETE\", \"/fledge/filter/bar\", 404),\n        # snapshots\n        (\"GET\", \"/fledge/snapshot/plugins\", 403), (\"POST\", \"/fledge/snapshot/plugins\", 403),\n        (\"PUT\", \"/fledge/snapshot/plugins/1\", 403), (\"DELETE\", \"/fledge/snapshot/plugins/1\", 403),\n        (\"GET\", \"/fledge/snapshot/category\", 403), (\"POST\", \"/fledge/snapshot/category\", 403),\n        (\"PUT\", \"/fledge/snapshot/category/1\", 403), (\"DELETE\", \"/fledge/snapshot/category/1\", 403),\n        (\"GET\", \"/fledge/snapshot/schedule\", 403), (\"POST\", \"/fledge/snapshot/schedule\", 403),\n        (\"PUT\", \"/fledge/snapshot/schedule/1\", 403), (\"DELETE\", \"/fledge/snapshot/schedule/1\", 403),\n        # repository\n        (\"POST\", \"/fledge/repository\", 400),\n        # ACL\n        (\"POST\", \"/fledge/ACL\", 403), (\"GET\", \"/fledge/ACL\", 200), (\"GET\", \"/fledge/ACL/foo\", 404),\n        (\"PUT\", \"/fledge/ACL/foo\", 403), (\"DELETE\", \"/fledge/ACL/foo\", 403), (\"PUT\", \"/fledge/service/foo/ACL\", 403),\n        (\"DELETE\", \"/fledge/service/foo/ACL\", 403),\n        # control script\n        (\"POST\", \"/fledge/control/script\", 400), (\"GET\", \"/fledge/control/script\", 200),\n        (\"GET\", \"/fledge/control/script/foo\", 404), (\"PUT\", \"/fledge/control/script/foo\", 400),\n        (\"DELETE\", \"/fledge/control/script/foo\", 404), (\"POST\", \"/fledge/control/script/foo/schedule\", 404),\n        # control pipeline\n        (\"POST\", \"/fledge/control/pipeline\", 400), (\"GET\", \"/fledge/control/lookup\", 200),\n        (\"GET\", \"/fledge/control/pipeline\", 200), (\"GET\", \"/fledge/control/pipeline/1\", 404),\n        (\"PUT\", \"/fledge/control/pipeline/1\", 404), (\"DELETE\", \"/fledge/control/pipeline/1\", 404),\n        # python packages\n        (\"GET\", \"/fledge/python/packages\", 200), (\"POST\", \"/fledge/python/package\", 500),\n        # notification\n        (\"GET\", 
\"/fledge/notification\", 200), (\"GET\", \"/fledge/notification/plugin\", 404),\n        (\"GET\", \"/fledge/notification/type\", 200), (\"GET\", \"/fledge/notification/N1\", 400),\n        (\"POST\", \"/fledge/notification\", 404), (\"PUT\", \"/fledge/notification/N1\", 404),\n        (\"DELETE\", \"/fledge/notification/N1\", 404), (\"GET\", \"/fledge/notification/N1/delivery\", 404),\n        (\"POST\", \"/fledge/notification/N1/delivery\", 400), (\"GET\", \"/fledge/notification/N1/delivery/C1\", 404),\n        (\"DELETE\", \"/fledge/notification/N1/delivery/C1\", 404),\n        # performance monitors\n        (\"GET\", \"/fledge/monitors\", 200), (\"GET\", \"/fledge/monitors/SVC\", 200),\n        (\"GET\", \"/fledge/monitors/Svc/Counter\", 200), (\"DELETE\", \"/fledge/monitors\", 200),\n        (\"DELETE\", \"/fledge/monitors/SVC\", 200), (\"DELETE\", \"/fledge/monitors/Svc/Counter\", 200),\n        # alerts\n        (\"GET\", \"/fledge/alert\", 200), (\"DELETE\", \"/fledge/alert\", 200), (\"DELETE\", \"/fledge/alert/blah\", 404),\n        # pipeline debugger\n        (\"GET\", \"/fledge/service/name/debug?action=state\", 404),\n        (\"GET\", \"/fledge/service/name/debug?action=buffer\", 404),\n        (\"PUT\", \"/fledge/service/name/debug?action=buffer\", 404),\n        (\"PUT\", \"/fledge/service/{name}/debug?action=attach\", 404)\n    ])\n    def test_endpoints(self, fledge_url, method, route_path, http_status_code, storage_plugin):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(method, route_path, headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert http_status_code == r.status\n        r.read().decode()\n\n    def test_logout_me(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc 
= json.loads(r)\n        assert jdoc['logout']\n"
  },
  {
    "path": "tests/system/python/api/test_notification.py",
"content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test notification REST API \"\"\"\n\nimport os\nimport subprocess\nimport http.client\nimport json\nimport time\nimport urllib.parse\nimport pytest\n\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nSERVICE = \"notification\"\nSERVICE_NAME = \"Notification Server #1\"\nNOTIFY_PLUGIN = \"slack\"\nNOTIFY_INBUILT_RULES = [\"Threshold\", \"DataAvailability\"]\nDATA = {\"name\": \"Test - 1\",\n        \"description\": \"Test4_Notification\",\n        \"rule\": NOTIFY_INBUILT_RULES[0],\n        \"channel\": NOTIFY_PLUGIN,\n        \"enabled\": True,\n        \"notification_type\": \"one shot\"\n        }\n\n\nclass TestNotificationServiceAPI:\n    def test_notification_without_install(self, reset_and_start_fledge, fledge_url, wait_time):\n        # Wait for fledge server to start\n        time.sleep(wait_time)\n        conn = http.client.HTTPConnection(fledge_url)\n\n        conn.request(\"GET\", '/fledge/notification/plugin')\n        r = conn.getresponse()\n        assert 404 == r.status\n        msg = \"No Notification service available.\"\n        assert msg == r.reason\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'message' in jdoc\n        assert {\"message\": msg} == jdoc\n\n        conn.request(\"GET\", '/fledge/notification/type')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {\"notification_type\": [\"one shot\", \"retriggered\", \"toggled\"]} == jdoc\n\n        conn.request(\"POST\", '/fledge/notification', json.dumps({}))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert msg == r.reason\n        r = r.read().decode()\n        assert \"404: {}\".format(msg) == r\n\n        
pytest.xfail(\"FOGL-2748\")\n        conn.request(\"GET\", '/fledge/notification')\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert msg == r.reason\n        r = r.read().decode()\n        assert \"404: {}\".format(msg) == r\n\n    def test_notification_service_add(self, service_branch, fledge_url, wait_time, remove_directories):\n        try:\n            subprocess.run([\"$FLEDGE_ROOT/tests/system/python/scripts/install_c_service {} {}\"\n                           .format(service_branch, SERVICE)], shell=True, check=True)\n        except subprocess.CalledProcessError:\n            assert False, \"{} installation failed\".format(SERVICE)\n        finally:\n            remove_directories(\"/tmp/fledge-service-{}\".format(SERVICE))\n\n        # Start service\n        conn = http.client.HTTPConnection(fledge_url)\n        data = {\"name\": SERVICE_NAME,\n                \"type\": \"notification\",\n                \"enabled\": \"true\"\n                }\n        conn.request(\"POST\", '/fledge/service', json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 2 == len(jdoc)\n        assert SERVICE_NAME == jdoc['name']\n\n        # Wait for service to get created\n        time.sleep(wait_time)\n        conn.request(\"GET\", '/fledge/service')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert SERVICE_NAME == jdoc['services'][2]['name']\n\n    def test_install_delivery_plugin(self, notify_branch, remove_directories):\n        # Remove any external plugins if installed\n        remove_directories(os.path.expandvars('$FLEDGE_ROOT/plugins/notificationDelivery'))\n        remove_directories(os.path.expandvars('$FLEDGE_ROOT/plugins/notificationRule'))\n        try:\n            subprocess.run([\"$FLEDGE_ROOT/tests/system/python/scripts/install_c_plugin 
{} notify {}\".format(\n                notify_branch, NOTIFY_PLUGIN)], shell=True, check=True)\n        except subprocess.CalledProcessError:\n            assert False, \"{} installation failed\".format(NOTIFY_PLUGIN)\n        finally:\n            remove_directories(\"/tmp/fledge-notify-{}\".format(NOTIFY_PLUGIN))\n\n    @pytest.mark.parametrize(\"test_input, expected_error\", [\n        ({\"description\": \"Test4_Notification\", \"rule\": NOTIFY_INBUILT_RULES[0], \"channel\": NOTIFY_PLUGIN,\n          \"enabled\": True, \"notification_type\": \"one shot\"}, '400: Missing name property in payload.'),\n        ({\"name\": \"Test4\", \"rule\": NOTIFY_INBUILT_RULES[0], \"channel\": NOTIFY_PLUGIN, \"enabled\": True,\n          \"notification_type\": \"one shot\"}, '400: Missing description property in payload.'),\n        ({\"name\": \"Test4\", \"description\": \"Test4_Notification\", \"channel\": NOTIFY_PLUGIN, \"enabled\": True,\n          \"notification_type\": \"one shot\"}, '400: Missing rule property in payload.'),\n        ({\"name\": \"Test4\", \"description\": \"Test4_Notification\", \"rule\": NOTIFY_INBUILT_RULES[0], \"enabled\": True,\n          \"notification_type\": \"one shot\"}, '400: Missing channel property in payload.'),\n        ({\"name\": \"Test4\", \"description\": \"Test4_Notification\", \"rule\": NOTIFY_INBUILT_RULES[0],\n          \"channel\": NOTIFY_PLUGIN, \"enabled\": True}, '400: Missing notification_type property in payload.'),\n        ({\"name\": \"=\", \"description\": \"Test4_Notification\", \"rule\": NOTIFY_INBUILT_RULES[0], \"channel\": NOTIFY_PLUGIN,\n          \"enabled\": True, \"notification_type\": \"one shot\"}, '400: Invalid name property in payload.'),\n        ({\"name\": \"Test4\", \"description\": \"Test4_Notification\", \"rule\": \"+\", \"channel\": NOTIFY_PLUGIN, \"enabled\": True,\n          \"notification_type\": \"one shot\"}, '400: Invalid rule property in payload.'),\n        ({\"name\": \"Test4\", 
\"description\": \"Test4_Notification\", \"rule\": NOTIFY_INBUILT_RULES[0], \"channel\": \":\",\n          \"enabled\": True, \"notification_type\": \"one shot\"}, '400: Invalid channel property in payload.'),\n        ({\"name\": \"Test4\", \"description\": \"Test4_Notification\", \"rule\": NOTIFY_INBUILT_RULES[0],\n          \"channel\": NOTIFY_PLUGIN, \"enabled\": \"bla\", \"notification_type\": \"one shot\"},\n         '400: Only \"true\", \"false\", true, false are allowed for value of enabled.'),\n        ({\"name\": \"Test4\", \"description\": \"Test4_Notification\", \"rule\": \"InvalidRulePlugin\",\n          \"channel\": \"InvalidChannelPlugin\", \"enabled\": True, \"notification_type\": \"one shot\"},\n         '400: Invalid rule plugin InvalidRulePlugin and/or delivery plugin InvalidChannelPlugin supplied.')\n    ])\n    def test_invalid_create_notification_instance(self, fledge_url, test_input, expected_error):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", '/fledge/notification', json.dumps(test_input))\n        r = conn.getresponse()\n        assert 400 == r.status\n        r = r.read().decode()\n        assert expected_error == r\n\n    def test_create_valid_notification_instance(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", '/fledge/notification', json.dumps(DATA))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"Notification {} created successfully\".format(DATA['name']) == jdoc['result']\n\n        conn.request(\"GET\", '/fledge/notification')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        actual_data = jdoc['notifications'][0]\n        assert DATA['name'] == actual_data['name']\n        assert DATA['channel'] == actual_data['channel']\n        assert 'true' == 
actual_data['enable']\n        assert DATA['notification_type'] == actual_data['notificationType']\n        assert DATA['rule'] == actual_data['rule']\n\n        conn.request(\"GET\", '/fledge/notification/plugin')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 2 == len(jdoc)\n        assert NOTIFY_PLUGIN == jdoc['delivery'][0]['name']\n        assert \"notify\" == jdoc['delivery'][0]['type']\n        assert 2 == len(jdoc['rules'])\n        assert NOTIFY_INBUILT_RULES[0] == jdoc['rules'][1]['name']\n        assert NOTIFY_INBUILT_RULES[1] == jdoc['rules'][0]['name']\n\n    @pytest.mark.parametrize(\"test_input, expected_error\", [\n        ({\"rule\": \"+\"}, '400: Invalid rule property in payload.'),\n        ({\"channel\": \":\"}, '400: Invalid channel property in payload.'),\n        ({\"enabled\": \"bla\"}, '400: Only \"true\", \"false\", true, false are allowed for value of enabled.'),\n        ({\"rule\": \"InvalidRulePlugin\"},\n         '400: Invalid rule plugin:InvalidRulePlugin and/or delivery plugin:None supplied.'),\n        ({\"channel\": \"InvalidChannelPlugin\"},\n         '400: Invalid rule plugin:None and/or delivery plugin:InvalidChannelPlugin supplied.'),\n        ({\"rule\": \"InvalidRulePlugin\", \"channel\": \"InvalidChannelPlugin\"},\n         '400: Invalid rule plugin:InvalidRulePlugin and/or delivery plugin:InvalidChannelPlugin supplied.')\n    ])\n    def test_invalid_update_notification_instance(self, fledge_url, test_input, expected_error):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/notification/{}'.format(urllib.parse.quote(DATA['name'])), json.dumps(test_input))\n        r = conn.getresponse()\n        assert 400 == r.status\n        r = r.read().decode()\n        assert expected_error == r\n\n    def test_invalid_name_update_notification_instance(self, fledge_url):\n        conn = 
http.client.HTTPConnection(fledge_url)\n        changed_data = {\"description\": \"changed_desc\"}\n        conn.request(\"PUT\", '/fledge/notification/{}'.format('nonExistent'), json.dumps(changed_data))\n        r = conn.getresponse()\n        assert 404 == r.status\n        r = r.read().decode()\n        assert '404: No nonExistent notification instance found' == r\n\n    def test_update_valid_notification_instance(self, fledge_url):\n        changed_data = {\"description\": \"changed_desc\"}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/notification/{}'.format(urllib.parse.quote(DATA['name'])), json.dumps(changed_data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"Notification {} updated successfully\".format(DATA[\"name\"]) == jdoc['result']\n\n    def test_delete_service_without_notification_delete(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/service/{}'.format(urllib.parse.quote(SERVICE_NAME)))\n        r = conn.getresponse()\n        assert 400 == r.status\n        r = r.read().decode()\n        assert \"400: Notification service `{}` can not be deleted, as ['{}'] \" \\\n               \"notification instances exist.\".format(SERVICE_NAME, DATA['name']) == r\n\n    def test_delete_notification_and_service(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/notification/{}'.format(urllib.parse.quote(DATA['name'])))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"Notification {} deleted successfully.\".format(DATA['name']) == jdoc['result']\n\n        conn.request(\"DELETE\", '/fledge/service/{}'.format(urllib.parse.quote(SERVICE_NAME)))\n        r = conn.getresponse()\n        assert 
200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"Service {} deleted successfully.\".format(SERVICE_NAME) == jdoc['result']\n"
  },
  {
    "path": "tests/system/python/api/test_passwords.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport http.client\nimport json\nimport time\nimport pytest\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2024 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nTOKEN = None\n\n\ndef update_policy(fledge_url, policy):\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"PUT\", '/fledge/category/password', json.dumps({\"policy\": policy}),\n                 headers={\"authorization\": TOKEN})\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert policy == jdoc['policy']['value']\n\n\ndef test_setup(reset_and_start_fledge, fledge_url, wait_time):\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"PUT\", '/fledge/category/rest_api', json.dumps({\"authentication\": \"mandatory\"}))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert \"mandatory\" == jdoc['authentication']['value']\n\n    from conftest import restart_and_wait_for_fledge\n    restart_and_wait_for_fledge(fledge_url, wait_time)\n\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert \"Logged in successfully.\" == jdoc['message']\n    assert \"token\" in jdoc\n    global TOKEN\n    TOKEN = jdoc[\"token\"]\n\n\nclass TestAnyCharPolicy:\n\n    def test_default_policy(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/category/password\", headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        
jdoc = json.loads(r)\n        assert 'Any characters' == jdoc['policy']['value']\n\n    @pytest.mark.parametrize(\"payload\", [\n        {\"username\": \"any1\", \"password\": \"User@123\", \"real_name\": \"AJ\", \"description\": \"Test user\"},\n        {\"username\": \"dianomic\", \"password\": \"password\", \"real_name\": \"Dianomic\", \"description\": \"Dianomic user\"},\n        {\"username\": \"nerd\", \"password\": \"PASSWORD\", \"real_name\": \"Nerd\", \"description\": \"Nerd user\"}\n    ])\n    def test_create_user(self, fledge_url, payload):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(payload),\n                     headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert '{} user has been created successfully.'.format(payload['username']) == jdoc['message']\n\n    def test_update_password(self, fledge_url):\n        uid = 4\n        payload = {\"current_password\": \"password\", \"new_password\": \"0123456\"}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/user/{}/password\".format(uid), body=json.dumps(payload),\n                     headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'Password has been updated successfully for user ID:<{}>.'.format(uid)} == jdoc\n\n\nclass TestMixedCasePolicy:\n\n    def test_setup(self, fledge_url):\n        update_policy(fledge_url, \"Mixed case Alphabetic\")\n\n    @pytest.mark.parametrize(\"payload\", [\n        {\"username\": \"any2\", \"password\": \"Passw0rd\", \"real_name\": \"AJ\", \"description\": \"Any user\"},\n        {\"username\": \"dianomic2\", \"password\": \"Password\", \"real_name\": \"Dianomic\", \"description\": 
\"Dianomic-2 user\"},\n        {\"username\": \"nerd2\", \"password\": \"Pass!23\", \"real_name\": \"Nerd\", \"description\": \"Nerd-2 user\"},\n        {\"username\": \"test2\", \"password\": \"Pass123\", \"real_name\": \"Nerd\", \"description\": \"Test-2 user\"}\n    ])\n    def test_create_user(self, fledge_url, payload):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(payload),\n                     headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert '{} user has been created successfully.'.format(payload['username']) == jdoc['message']\n\n    def test_update_password(self, fledge_url):\n        uid = 6\n        payload = {\"current_password\": \"Passw0rd\", \"new_password\": \"13pAss1\"}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/user/{}/password\".format(uid), body=json.dumps(payload),\n                     headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'Password has been updated successfully for user ID:<{}>.'.format(uid)} == jdoc\n\n\nclass TestMixedAndNumericCasePolicy:\n\n    def test_setup(self, fledge_url):\n        update_policy(fledge_url, \"Mixed case and numeric\")\n\n    @pytest.mark.parametrize(\"payload\", [\n        {\"username\": \"any3\", \"password\": \"Passw0rd\", \"real_name\": \"AJ\", \"description\": \"Any User\"},\n        {\"username\": \"dianomic3\", \"password\": \"paSSw0rd\", \"real_name\": \"Dianomic\", \"description\": \"Dianomic-3 user\"},\n        {\"username\": \"nerd3\", \"password\": \"1ass0Rd\", \"real_name\": \"Nerd\", \"description\": \"Nerd-3 user\"},\n        {\"username\": \"test3\", \"password\": \"PASSw0rD\", \"real_name\": 
\"Nerd\", \"description\": \"Test-3 user\"}\n    ])\n    def test_create_user(self, fledge_url, payload):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(payload),\n                     headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert '{} user has been created successfully.'.format(payload['username']) == jdoc['message']\n\n    def test_update_password(self, fledge_url):\n        uid = 11\n        payload = {\"current_password\": \"paSSw0rd\", \"new_password\": \"13pAss1\"}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/user/{}/password\".format(uid), body=json.dumps(payload),\n                     headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'Password has been updated successfully for user ID:<{}>.'.format(uid)} == jdoc\n\n\nclass TestMixedAndNumericAndSpecialCasePolicy:\n\n    def test_setup(self, fledge_url):\n        update_policy(fledge_url, \"Mixed case, numeric and special characters\")\n\n    @pytest.mark.parametrize(\"payload\", [\n        {\"username\": \"any4\", \"password\": \"pAss@!1\", \"real_name\": \"AJ\", \"description\": \"user\"},\n        {\"username\": \"dianomic4\", \"password\": \"s!@#$%G2\", \"real_name\": \"Dianomic\", \"description\": \"Dianomic-4 user\"},\n        {\"username\": \"nerd4\", \"password\": \"A(swe1)\", \"real_name\": \"Nerd\", \"description\": \"Nerd-4 user\"},\n        {\"username\": \"test4\", \"password\": \"Fl@3737\", \"real_name\": \"Nerd\", \"description\": \"Test-4 user\"}\n    ])\n    def test_create_user(self, fledge_url, payload):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", 
\"/fledge/admin/user\", body=json.dumps(payload),\n                     headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert '{} user has been created successfully.'.format(payload['username']) == jdoc['message']\n\n    def test_update_password(self, fledge_url):\n        uid = 17\n        payload = {\"current_password\": \"Fl@3737\", \"new_password\": \"pAss@!1\"}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/user/{}/password\".format(uid), body=json.dumps(payload),\n                     headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'Password has been updated successfully for user ID:<{}>.'.format(uid)} == jdoc\n\n    def test_reset_user(self, fledge_url):\n        uid = 17\n        payload = {\"password\": \"F0gl@mp!\"}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/admin/{}/reset\".format(uid), body=json.dumps(payload),\n                     headers={\"authorization\": TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'User with ID:<{}> has been updated successfully.'.format(uid)} == jdoc\n\n"
  },
  {
    "path": "tests/system/python/api/test_plugin_discovery.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test plugin discovery (north, south, filter, notify, rule) REST API \"\"\"\n\nimport subprocess\nimport http.client\nimport json\nfrom collections import Counter\nimport pytest\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\ndef install_plugin(_type, plugin, branch=\"develop\", plugin_lang=\"python\", use_pip_cache=True):\n    if plugin_lang == \"python\":\n        path = \"$FLEDGE_ROOT/tests/system/python/scripts/install_python_plugin {} {} {} {}\".format(\n            branch, _type, plugin, use_pip_cache)\n    else:\n        path = \"$FLEDGE_ROOT/tests/system/python/scripts/install_c_plugin {} {} {}\".format(\n            branch, _type, plugin)\n    try:\n        subprocess.run([path], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"{} plugin installation failed\".format(plugin)\n\n    # Cleanup /tmp repos\n    if _type == \"rule\":\n        subprocess.run([\"rm -rf /tmp/fledge-service-notification\"], shell=True, check=True)\n    subprocess.run([\"rm -rf /tmp/fledge-{}-{}\".format(_type, plugin)], shell=True, check=True)\n\n\n@pytest.fixture\ndef reset_plugins():\n    try:\n        subprocess.run([\"$FLEDGE_ROOT/tests/system/python/scripts/reset_plugins\"], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset plugin script failed\"\n\n\nclass TestPluginDiscovery:\n\n    def test_cleanup(self, reset_plugins, reset_and_start_fledge):\n        # TODO: Remove this workaround\n        # Use better setup & teardown methods\n        pass\n\n    @pytest.mark.parametrize(\"param, config\", [\n        (\"\", False),\n        (\"?config=false\", False),\n        (\"?config=true\", True)\n    ])\n    def test_default_all_plugins_installed(self, fledge_url, 
param, config):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/plugins/installed{}'.format(param))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        # Only OMF north C-based plugin is expected by default\n        assert 1 == len(jdoc['plugins'])\n        plugin = jdoc['plugins'][0]\n        assert 'OMF' == plugin['name']\n        assert 'north' == plugin['type']\n        assert plugin['type'] not in ['south', 'filter', 'notify', 'rule']\n        # config is not expected by default\n        assert 'config' in plugin if config else 'config' not in plugin\n\n    @pytest.mark.parametrize(\"method, count, config\", [\n        (\"/fledge/plugins/installed?type=south\", 0, None),\n        (\"/fledge/plugins/installed?type=filter\", 0, None),\n        (\"/fledge/plugins/installed?type=notify\", 0, None),\n        (\"/fledge/plugins/installed?type=rule\", 0, None),\n        (\"/fledge/plugins/installed?type=north&config=false\", 1, False),\n        (\"/fledge/plugins/installed?type=north&config=true\", 1, True)\n    ])\n    def test_default_plugins_installed_by_type(self, fledge_url, method, count, config):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", method)\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert count == len(jdoc['plugins'])\n        name = []\n        for plugin in jdoc['plugins']:\n            assert 'config' in plugin if config else 'config' not in plugin\n            name.append(plugin['name'])\n        # Verify only OMF north plugin when type is north\n        if count == 1:\n            assert Counter(['OMF']) == Counter(name)\n\n    def test_south_plugins_installed(self, fledge_url, _type='south'):\n        # install south plugin (Python version)\n        install_plugin(_type, plugin='sinusoid', 
plugin_lang='python')\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/plugins/installed?type={}'.format(_type))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        plugins = jdoc['plugins']\n        assert 1 == len(plugins)\n        assert 'sinusoid' == plugins[0]['name']\n        assert 'south' == plugins[0]['type']\n        assert 'south/sinusoid' == plugins[0]['installedDirectory']\n        assert 'fledge-south-sinusoid' == plugins[0]['packageName']\n\n        # install one more south plugin (C version)\n        install_plugin(_type, plugin='random', plugin_lang='C')\n        conn.request(\"GET\", '/fledge/plugins/installed?type={}'.format(_type))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        plugins = jdoc['plugins']\n        assert 2 == len(plugins)\n        assert 'sinusoid' == plugins[0]['name']\n        assert 'south' == plugins[0]['type']\n        assert 'south/sinusoid' == plugins[0]['installedDirectory']\n        assert 'fledge-south-sinusoid' == plugins[0]['packageName']\n        assert 'Random' == plugins[1]['name']\n        assert 'south' == plugins[1]['type']\n        assert 'south/Random' == plugins[1]['installedDirectory']\n        assert 'fledge-south-random' == plugins[1]['packageName']\n\n    def test_north_plugins_installed(self, fledge_url, _type='north'):\n        # install north plugin (Python version)\n        install_plugin(_type, plugin='http', plugin_lang='python')\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/plugins/installed?type={}'.format(_type))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        
assert len(jdoc), \"No data found\"\n        plugins = jdoc['plugins']\n        plugin_names = [name['name'] for name in plugins]\n        # verify north plugins: the default OMF plugin plus the newly installed http plugin\n        assert 2 == len(plugins)\n        assert Counter(['http_north', 'OMF']) == Counter(plugin_names)\n\n        # install one more north plugin (C version)\n        install_plugin(_type, plugin='thingspeak', plugin_lang='C')\n        conn.request(\"GET\", '/fledge/plugins/installed?type={}'.format(_type))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        plugins = jdoc['plugins']\n        plugin_names = [name['name'] for name in plugins]\n        # verify north plugins: the default OMF plugin plus the two newly installed ones (Python & C versions)\n        assert 3 == len(plugins)\n        assert Counter(['http_north', 'OMF', 'ThingSpeak']) == Counter(plugin_names)\n\n    def test_filter_plugins_installed(self, fledge_url, _type='filter'):\n        # install rms filter plugin\n        install_plugin(_type, plugin='rms', plugin_lang='C')\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/plugins/installed?type={}'.format(_type))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        plugins = jdoc['plugins']\n        assert 1 == len(plugins)\n        assert 'rms' == plugins[0]['name']\n\n    def test_delivery_plugins_installed(self, fledge_url, _type='notify'):\n        # install slack delivery plugin\n        install_plugin(_type, plugin='slack', plugin_lang='C')\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/plugins/installed?type={}'.format(_type))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = 
r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        plugins = jdoc['plugins']\n        assert 1 == len(plugins)\n        assert 'slack' == plugins[0]['name']\n        assert 'notify' == plugins[0]['type']\n        assert 'notificationDelivery/slack' == plugins[0]['installedDirectory']\n        assert 'fledge-notify-slack' == plugins[0]['packageName']\n\n    def test_rule_plugins_installed(self, fledge_url, _type='rule'):\n        # install OutOfBound rule plugin\n        install_plugin(_type, plugin='outofbound', plugin_lang='C')\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/plugins/installed?type={}'.format(_type))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        plugins = jdoc['plugins']\n        assert 1 == len(plugins)\n        assert 'OutOfBound' == plugins[0]['name']\n        assert 'rule' == plugins[0]['type']\n        assert 'notificationRule/OutOfBound' == plugins[0]['installedDirectory']\n        assert 'fledge-rule-outofbound' == plugins[0]['packageName']\n"
  },
  {
    "path": "tests/system/python/api/test_service.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test add service using poll and async plugins for both python & C version REST API \"\"\"\n\nimport os\nimport http.client\nimport json\nimport time\nfrom uuid import UUID\nfrom collections import Counter\nfrom urllib.parse import quote\nimport pytest\n\nimport plugin_and_service\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nSVC_NAME_1 = 'Random Walk #1'\nSVC_NAME_2 = 'HTTP-SOUTH'\nSVC_NAME_3 = '1 Bench'\nSVC_NAME_4 = 'Rand 1 #3'\n\nSVC_NAME_5 = SVC_NAME_C_ASYNC = \"Async 1\"\nSVC_NAME_6 = 'randomwalk'\n\n\nPLUGIN_FILTER = 'metadata'\nFILTER_NAME = 'meta'\n\n\n@pytest.fixture\ndef install_plugins():\n    plugin_and_service.install('south', plugin='randomwalk')\n    plugin_and_service.install('south', plugin='http')\n    plugin_and_service.install('south', plugin='benchmark', plugin_lang='C')\n    plugin_and_service.install('south', plugin='random', plugin_lang='C')\n    plugin_and_service.install('south', plugin='csv-async', plugin_lang='C')\n\n\ndef get_service(fledge_url, path):\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"GET\", path)\n    res = conn.getresponse()\n    r = res.read().decode()\n    assert 200 == res.status\n    jdoc = json.loads(r)\n    return jdoc\n\n\nclass TestService:\n\n    def test_cleanup_and_setup(self, reset_and_start_fledge, install_plugins):\n        # TODO: FOGL-2669 Better setup & teardown fixtures\n        pass\n\n    def test_default_service(self, fledge_url):\n        jdoc = get_service(fledge_url, '/fledge/service')\n        assert len(jdoc), \"No data found\"\n\n        # Only storage and core service is expected by default\n        assert 2 == len(jdoc['services'])\n        keys = {'address', 'service_port', 'type', 'status', 'name', 'management_port', 'protocol'}\n        
assert Counter(keys) == Counter(jdoc['services'][0].keys())\n\n        storage_svc = jdoc['services'][0]\n        assert isinstance(storage_svc['service_port'], int)\n        assert isinstance(storage_svc['management_port'], int)\n        assert 'running' == storage_svc['status']\n        assert 'Storage' == storage_svc['type']\n        assert 'localhost' == storage_svc['address']\n        assert 'Fledge Storage' == storage_svc['name']\n        assert 'http' == storage_svc['protocol']\n\n        core_svc = jdoc['services'][1]\n        assert isinstance(core_svc['management_port'], int)\n        assert 8081 == core_svc['service_port']\n        assert 'running' == core_svc['status']\n        assert 'Core' == core_svc['type']\n        assert '0.0.0.0' == core_svc['address']\n        assert 'Fledge Core' == core_svc['name']\n        assert 'http' == core_svc['protocol']\n\n        # filter with Storage type\n        jdoc = get_service(fledge_url, '/fledge/service?type=Storage')\n        assert len(jdoc), \"No data found\"\n        assert 1 == len(jdoc['services'])\n        assert 'Storage' == jdoc['services'][0]['type']\n\n    C_ASYNC_CONFIG = {\"file\": {\"value\": os.getenv(\"FLEDGE_ROOT\", \"\") + '/tests/system/python/data/vibration.csv'}}\n\n    @pytest.mark.parametrize(\"plugin, svc_name, display_svc_name, config, enabled, svc_count\", [\n        (\"randomwalk\", SVC_NAME_1, SVC_NAME_1, None, True, 3),\n        (\"http_south\", SVC_NAME_2, SVC_NAME_1, None, False, 3),\n        (\"Benchmark\", SVC_NAME_3, SVC_NAME_3, None, True, 4),\n        (\"Random\", SVC_NAME_4, SVC_NAME_3, None, False, 4),\n        (\"CSV-Async\", SVC_NAME_C_ASYNC, SVC_NAME_C_ASYNC, C_ASYNC_CONFIG, True, 5)\n    ])\n    def test_add_service(self, fledge_url, wait_time, plugin, svc_name, display_svc_name, config, enabled, svc_count):\n\n        jdoc = plugin_and_service.add_south_service(plugin, fledge_url, svc_name, config, enabled)\n        assert svc_name == jdoc['name']\n        assert 
UUID(jdoc['id'], version=4)\n\n        time.sleep(wait_time)\n        jdoc = get_service(fledge_url, '/fledge/service')\n        assert len(jdoc), \"No data found\"\n        assert svc_count == len(jdoc['services'])\n\n        southbound_svc = jdoc['services'][svc_count - 1]\n        assert isinstance(southbound_svc['management_port'], int)\n        assert isinstance(southbound_svc['service_port'], int)\n        assert display_svc_name == southbound_svc['name']\n        assert 'running' == southbound_svc['status']\n        assert 'Southbound' == southbound_svc['type']\n        assert 'localhost' == southbound_svc['address']\n        assert 'http' == southbound_svc['protocol']\n\n    def test_add_service_with_config(self, fledge_url, wait_time):\n        # add service with config param\n        data = {\"name\": SVC_NAME_6,\n                \"type\": \"South\",\n                \"plugin\": 'randomwalk',\n                \"config\": {\"maxValue\": {\"value\": \"20\"}, \"assetName\": {\"value\": \"Random\"}},\n                \"enabled\": True\n                }\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", '/fledge/service', json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert SVC_NAME_6 == jdoc['name']\n        assert UUID(jdoc['id'], version=4)\n\n        # verify config is correctly saved\n        conn.request(\"GET\", '/fledge/category/{}'.format(SVC_NAME_6))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        assert data['config']['assetName']['value'] == jdoc['assetName']['value']\n        assert data['config']['maxValue']['value'] == jdoc['maxValue']['value']\n\n        time.sleep(wait_time)\n        jdoc = get_service(fledge_url, '/fledge/service')\n        assert len(jdoc), \"No data 
found\"\n        assert 6 == len(jdoc['services'])\n        assert SVC_NAME_6 == jdoc['services'][5]['name']\n\n    @pytest.mark.parametrize(\"svc_name, status, svc_count\", [\n        (\"Fledge Storage\", 404, 2),\n        (\"Fledge Core\", 404, 2),\n        (SVC_NAME_1, 200, 5),\n        (SVC_NAME_2, 200, 5),\n        (SVC_NAME_3, 200, 4)\n    ])\n    def test_delete_service(self, svc_name, status, svc_count, fledge_url, wait_time):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/service/{}'.format(quote(svc_name)))\n        res = conn.getresponse()\n        assert status == res.status\n\n        if status == 404:\n            # FIXME: FOGL-2668 expected 403 for Core and Storage\n            assert '{} service does not exist.'.format(svc_name) == res.reason\n        else:\n            r = res.read().decode()\n            jdoc = json.loads(r)\n            assert 'Service {} deleted successfully.'.format(svc_name) == jdoc['result']\n\n            time.sleep(wait_time)\n\n            jdoc = get_service(fledge_url, '/fledge/service')\n            assert len(jdoc), \"No data found\"\n            assert svc_count == len(jdoc['services'])\n            services = [s['name'] for s in jdoc['services']]\n            assert svc_name not in services\n\n            # no category (including its children) exists anymore for serviceName\n            conn = http.client.HTTPConnection(fledge_url)\n            conn.request(\"GET\", '/fledge/category/{}'.format(quote(svc_name)))\n            res = conn.getresponse()\n            r = res.read().decode()\n            assert 404 == res.status\n\n            conn.request(\"GET\", '/fledge/category/{}/children'.format(quote(svc_name)))\n            res = conn.getresponse()\n            r = res.read().decode()\n            assert 404 == res.status\n\n            # no schedule exists anymore for serviceName\n            conn.request(\"GET\", '/fledge/schedule')\n            res = 
conn.getresponse()\n            r = res.read().decode()\n            jdoc = json.loads(r)\n            assert svc_name not in [s['name'] for s in jdoc[\"schedules\"]]\n\n            # TODO: verify FOGL-2718 no category interest exists anymore for serviceId in InterestRegistry\n\n    def test_service_with_enable_schedule(self, fledge_url, wait_time, enable_schedule):\n        enable_schedule(fledge_url, SVC_NAME_4)\n\n        time.sleep(wait_time)\n        jdoc = get_service(fledge_url, '/fledge/service')\n        assert len(jdoc), \"No data found\"\n        assert 5 == len(jdoc['services'])\n        assert SVC_NAME_4 in [s['name'] for s in jdoc['services']]\n\n    def test_service_with_disable_schedule(self, fledge_url, wait_time, disable_schedule):\n        disable_schedule(fledge_url, SVC_NAME_4)\n\n        time.sleep(wait_time)\n        jdoc = get_service(fledge_url, '/fledge/service')\n        assert len(jdoc), \"No data found\"\n        assert 5 == len(jdoc['services'])\n        assert (SVC_NAME_4, 'shutdown') in [(s['name'], s['status']) for s in jdoc['services']]\n\n    def test_service_on_restart(self, fledge_url, wait_time):\n        from conftest import restart_and_wait_for_fledge\n        restart_and_wait_for_fledge(fledge_url, wait_time)\n        jdoc = get_service(fledge_url, '/fledge/service')\n        assert len(jdoc), \"No data found\"\n        assert 4 == len(jdoc['services'])\n        services = [name['name'] for name in jdoc['services']]\n        assert SVC_NAME_4 not in services\n\n    def test_delete_service_with_filters(self, fledge_url, wait_time, add_filter, filter_branch, enable_schedule):\n        # add filter\n        add_filter(PLUGIN_FILTER, filter_branch, FILTER_NAME, {\"enable\": \"true\"}, fledge_url, SVC_NAME_6)\n\n        # delete service\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/service/{}'.format(SVC_NAME_6))\n        r = conn.getresponse()\n        r = r.read().decode()\n   
     jdoc = json.loads(r)\n        assert 'Service {} deleted successfully.'.format(SVC_NAME_6) == jdoc['result']\n\n        # verify service does not exist\n        time.sleep(wait_time * 2)\n        jdoc = get_service(fledge_url, '/fledge/service')\n        assert len(jdoc), \"No data found\"\n        assert 3 == len(jdoc['services'])\n        services = [name['name'] for name in jdoc['services']]\n        assert SVC_NAME_6 not in services\n        # Orphan filter categories are now removed when their service is deleted,\n        # so create a new filter category to use with another service\n        filter_data = {\"name\": FILTER_NAME, \"plugin\": PLUGIN_FILTER}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", '/fledge/filter', json.dumps(filter_data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert FILTER_NAME == jdoc['filter']\n        assert PLUGIN_FILTER == jdoc['value']['plugin']['value']\n        # link the filter with SVC_NAME_4\n        data = {\"pipeline\": [FILTER_NAME]}\n        conn.request(\"PUT\", '/fledge/filter/{}/pipeline?allow_duplicates=true&append_filter=true'\n                     .format(quote(SVC_NAME_4)), json.dumps(data))\n        r = conn.getresponse()\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"Filter pipeline {{'pipeline': ['{}']}} updated successfully\".format(FILTER_NAME) == jdoc['result']\n\n        # enable SVC_NAME_4 schedule\n        enable_schedule(fledge_url, SVC_NAME_4)\n\n        # verify SVC_NAME_4 exists\n        time.sleep(wait_time)\n        jdoc = get_service(fledge_url, '/fledge/service')\n        assert len(jdoc), \"No data found\"\n        assert 4 == len(jdoc['services'])\n        services = [s['name'] for s in jdoc['services']]\n        assert SVC_NAME_4 in services\n\n        # delete SVC_NAME_4\n        conn.request(\"DELETE\", 
'/fledge/service/{}'.format(quote(SVC_NAME_4)))\n        r = conn.getresponse()\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'Service {} deleted successfully.'.format(SVC_NAME_4) == jdoc['result']\n\n        # verify SVC_NAME_4 does not exist anymore\n        time.sleep(wait_time)\n        jdoc = get_service(fledge_url, '/fledge/service')\n        assert len(jdoc), \"No data found\"\n        assert 3 == len(jdoc['services'])\n        services = [s['name'] for s in jdoc['services']]\n        assert SVC_NAME_4 not in services\n\n        # Verify filters\n        jdoc = get_service(fledge_url, '/fledge/filter')\n        assert len(jdoc), \"No data found for filters.\"\n        assert ('filters' in jdoc and\n                not jdoc['filters']), \"Unexpected filters found after service deletion. Filters should be empty.\"\n\n    def test_notification_service(self):\n        assert True, \"Already verified in test_e2e_notification_service_with_plugins.py\"\n"
  },
  {
    "path": "tests/system/python/api/test_statistics.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test Statistics & Statistics history REST API \"\"\"\n\nimport os\nimport subprocess\nimport http.client\nimport json\nimport time\nfrom collections import Counter\nimport pytest\nimport utils\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nSTATS_KEYS = {'DISCARDED', 'PURGED', 'READINGS', 'UNSENT', 'UNSNPURGED', 'BUFFERED'}\nSTATS_HISTORY_KEYS = {'history_ts'}\nSTATS_HISTORY_KEYS.update(STATS_KEYS)\nTEMPLATE_NAME = \"template.json\"\nSENSOR_VALUE = 10\nPLUGIN_NAME = \"coap\"\nASSET_NAME = \"COAP\"\n\n\n@pytest.fixture\ndef start_south_coap(add_south, remove_data_file, remove_directories, south_branch,\n                     fledge_url, south_plugin=PLUGIN_NAME, asset_name=ASSET_NAME):\n    # Define the template file for fogbench\n    fogbench_template_path = os.path.join(\n        os.path.expandvars('${FLEDGE_ROOT}'), 'data/{}'.format(TEMPLATE_NAME))\n    with open(fogbench_template_path, \"w\") as f:\n        f.write(\n            '[{\"name\": \"%s\", \"sensor_values\": '\n            '[{\"name\": \"sensor\", \"type\": \"number\", \"min\": %d, \"max\": %d, \"precision\": 0}]}]' % (\n                asset_name, SENSOR_VALUE, SENSOR_VALUE))\n\n    add_south(south_plugin, south_branch, fledge_url, service_name=PLUGIN_NAME)\n\n    yield start_south_coap\n\n    # Cleanup code that runs after the caller test is over\n    remove_data_file(fogbench_template_path)\n    remove_directories(\"/tmp/fledge-south-{}\".format(south_plugin))\n\n\nclass TestStatistics:\n\n    def test_cleanup(self, reset_and_start_fledge):\n        # TODO: Remove this workaround\n        # Use better setup & teardown methods\n        pass\n\n    def test_default_statistics(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        
conn.request(\"GET\", '/fledge/statistics')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        assert len(STATS_KEYS) == len(jdoc)\n        keys = [key['key'] for key in jdoc]\n        assert Counter(STATS_KEYS) == Counter(keys)\n\n    def test_default_statistics_history(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/statistics/history')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        assert 15 == jdoc['interval']\n        assert {} == jdoc['statistics'][0]\n\n    def test_statistics_history_with_stats_collector_schedule(self, fledge_url, wait_time, retries):\n        conn = http.client.HTTPConnection(fledge_url)\n        for i in range(retries):\n            # wait for some time for the stats collector schedule to run\n            time.sleep(wait_time)\n\n            conn.request(\"GET\", '/fledge/statistics/history')\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert len(jdoc), \"No data found\"\n            assert 15 == jdoc['interval']\n            assert 1 == len(jdoc['statistics'])\n            statistics = jdoc['statistics'][0]\n            # retry on mismatch, but assert on the final attempt instead of skipping it\n            if i < retries - 1 and Counter(STATS_HISTORY_KEYS) != Counter(statistics.keys()):\n                continue\n            assert Counter(STATS_HISTORY_KEYS) == Counter(statistics.keys())\n            assert utils.validate_date_format(statistics['history_ts']) is True, \"History Timestamp format mismatched.\"\n\n            del statistics['history_ts']\n            assert Counter([0, 0, 0, 0, 0, 0]) == Counter(statistics.values())\n            break\n\n    @pytest.mark.parametrize(\"request_params, keys\", [\n       
 ('', STATS_HISTORY_KEYS),\n        ('?limit=1', STATS_HISTORY_KEYS),\n        ('?key=READINGS', {'history_ts', 'READINGS'}),\n        ('?key=READINGS&limit=1', {'history_ts', 'READINGS'}),\n        ('?key=READINGS&limit=0', {}),\n    ])\n    def test_statistics_history_with_params(self, fledge_url, request_params, keys):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/statistics/history{}'.format(request_params))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        assert 15 == jdoc['interval']\n        assert 1 == len(jdoc['statistics'])\n        assert Counter(keys) == Counter(jdoc['statistics'][0].keys())\n        if keys:\n            assert utils.validate_date_format(jdoc['statistics'][0]['history_ts']) is True, \\\n                \"History Timestamp format mismatched.\"\n\n    def test_statistics_history_with_service_enabled(self, start_south_coap, fledge_url, wait_time):\n        # Allow CoAP listener to start\n        time.sleep(wait_time)\n\n        # ingest one reading via fogbench\n        subprocess.run([\"cd $FLEDGE_ROOT/extras/python; python3 -m fogbench -t ../../data/{}; cd -\"\n                       .format(TEMPLATE_NAME)], shell=True, check=True)\n        # Let the readings be ingested\n        time.sleep(wait_time)\n\n        # verify stats\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/statistics')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        stats = utils.serialize_stats_map(jdoc)\n        assert 1 == stats[ASSET_NAME.upper()]\n        assert 1 == stats['READINGS']\n\n        # Allow stats collector schedule to run i.e. 
by default 15s\n        time.sleep(wait_time * 3)\n\n        # check stats history\n        conn.request(\"GET\", '/fledge/statistics/history')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        stats_history = jdoc['statistics']\n\n        # Check the READINGS & ASSET_NAME keys and verify there is exactly one entry with value 1\n        read = [r['READINGS'] for r in stats_history]\n        assert 1 in read\n        assert 1 == read.count(1)\n        asset_stats_history = [a for a in stats_history if ASSET_NAME.upper() in a.keys()]\n        assert any(ash[ASSET_NAME.upper()] == 1 for ash in asset_stats_history), \"Failed to find statistics history \" \\\n                                                                                 \"record for \" + ASSET_NAME.upper()\n\n        # verify stats history by READINGS key only\n        conn.request(\"GET\", '/fledge/statistics/history?key={}'.format('READINGS'))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        for stat in jdoc['statistics']:\n            assert utils.validate_date_format(stat['history_ts']) is True, \"History Timestamp format mismatched.\"\n\n        read = [r['READINGS'] for r in jdoc['statistics']]\n        assert 1 in read\n        assert 1 == read.count(1)\n\n        # verify stats history by ASSET_NAME key only\n        conn.request(\"GET\", '/fledge/statistics/history?key={}'.format(ASSET_NAME.upper()))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        asset = [a[ASSET_NAME.upper()] for a in jdoc['statistics']]\n        assert 1 in asset\n        assert 1 == asset.count(1)\n\n"
  },
  {
    "path": "tests/system/python/conftest.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Configuration system/python/conftest.py\n\"\"\"\nimport subprocess\nimport os\nimport sys\nimport fnmatch\nimport http.client\nimport json\nimport base64\nimport ssl\nimport shutil\nfrom urllib.parse import quote\nfrom pathlib import Path\nimport time\nimport pytest\nfrom helpers import utils\n\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nsys.path.append(os.path.join(os.path.dirname(__file__), 'helpers'))\nsys.path.append(os.path.join(os.path.dirname(__file__)))\n\n\n@pytest.fixture\ndef clean_setup_fledge_packages(package_build_version):\n    # This gives the path of the directory where fledge is cloned. conftest_file < python < system < tests < ROOT\n    PROJECT_ROOT = Path(__file__).parent.parent.parent.parent\n    SCRIPTS_DIR_ROOT = \"{}/tests/system/python/scripts/package/\".format(PROJECT_ROOT)\n\n    try:\n        subprocess.run([\"cd {} && ./remove\"\n                       .format(SCRIPTS_DIR_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"remove package script failed!\"\n\n    try:\n        subprocess.run([\"cd {} && ./setup {}\"\n                       .format(SCRIPTS_DIR_ROOT, package_build_version)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"setup package script failed\"\n\n\n@pytest.fixture\ndef reset_and_start_fledge(storage_plugin, readings_plugin, authentication):\n    \"\"\"Fixture that kills Fledge, resets the database and starts Fledge again\n        storage_plugin: A fixture that specifies the storage plugin to be used in the tests.\n        readings_plugin: A fixture that specifies the readings plugin to be used in the tests.\n        authentication: A fixture that defines the authentication method to be used for the 
tests. By default 'optional'\n    \"\"\"\n    assert os.environ.get('FLEDGE_ROOT') is not None\n\n    subprocess.run([\"$FLEDGE_ROOT/scripts/fledge kill\"], shell=True, check=True)\n    assert storage_plugin in [\"sqlite\", \"postgres\", \"sqlitelb\"]\n    assert readings_plugin in [\"Use main plugin\", \"sqlitememory\", \"sqlite\", \"postgres\", \"sqlitelb\"]\n    subprocess.run(\n        [\"echo $(jq -c --arg STORAGE_PLUGIN_VAL {} '.plugin.value=$STORAGE_PLUGIN_VAL' \"\n         \"$FLEDGE_ROOT/data/etc/storage.json) > $FLEDGE_ROOT/data/etc/storage.json\".format(storage_plugin)],\n        shell=True, check=True)\n    subprocess.run(\n        [\"echo $(jq -c --arg READINGS_PLUGIN_VAL \\\"{}\\\" '.readingPlugin.value=$READINGS_PLUGIN_VAL' \"\n         \"$FLEDGE_ROOT/data/etc/storage.json) > $FLEDGE_ROOT/data/etc/storage.json\".format(readings_plugin)],\n        shell=True, check=True)\n    if authentication == 'optional':\n        subprocess.run([\"sed -i \\\"s/'default': 'mandatory'/'default': 'optional'/g\\\" \"\n                        \"$FLEDGE_ROOT/python/fledge/services/core/server.py\"], shell=True, check=True)\n    else:\n        subprocess.run([\"sed -i \\\"s/'default': 'optional'/'default': 'mandatory'/g\\\" \"\n                        \"$FLEDGE_ROOT/python/fledge/services/core/server.py\"], shell=True, check=True)\n    subprocess.run([\"echo 'YES\\nYES' | $FLEDGE_ROOT/scripts/fledge reset\"], shell=True, check=True)\n    subprocess.run([\"$FLEDGE_ROOT/scripts/fledge start\"], shell=True)\n    stat = subprocess.run([\"$FLEDGE_ROOT/scripts/fledge status\"], shell=True, stdout=subprocess.PIPE,\n                          stderr=subprocess.PIPE)\n    assert \"Fledge not running.\" not in stat.stderr.decode(\"utf-8\")\n\n\ndef find(pattern, path):\n    result = None\n    for root, dirs, files in os.walk(path):\n        for name in files:\n            if fnmatch.fnmatch(name, pattern):\n                result = os.path.join(root, name)\n    return 
result\n\n\n@pytest.fixture\ndef remove_data_file():\n    \"\"\"Fixture that removes any file from a given path\"\"\"\n\n    def _remove_data_file(file_path=None):\n        if os.path.exists(file_path):\n            os.remove(file_path)\n\n    return _remove_data_file\n\n\n@pytest.fixture\ndef remove_directories():\n    \"\"\"Fixture that recursively removes any file and directories from a given path\"\"\"\n\n    def _remove_directories(dir_path=None):\n        if os.path.exists(dir_path):\n            shutil.rmtree(dir_path, ignore_errors=True)\n\n    return _remove_directories\n\n\n@pytest.fixture\ndef add_south():\n    def _add_fledge_south(south_plugin, south_branch, fledge_url, service_name=\"play\", config=None,\n                          plugin_lang=\"python\", use_pip_cache=True, start_service=True, plugin_discovery_name=None,\n                          installation_type='make'):\n        \"\"\"Add south plugin and start the service by default\"\"\"\n\n        plugin_discovery_name = south_plugin if plugin_discovery_name is None else plugin_discovery_name\n        _config = config if config is not None else {}\n        _enabled = \"true\" if start_service else \"false\"\n        data = {\"name\": \"{}\".format(service_name), \"type\": \"South\", \"plugin\": \"{}\".format(plugin_discovery_name),\n                \"enabled\": _enabled, \"config\": _config}\n\n        conn = http.client.HTTPConnection(fledge_url)\n\n        def clone_make_install():\n            try:\n                if plugin_lang == \"python\":\n                    subprocess.run(\n                        [\"$FLEDGE_ROOT/tests/system/python/scripts/install_python_plugin {} south {} {}\".format(\n                            south_branch, south_plugin, use_pip_cache)], shell=True, check=True)\n                else:\n                    subprocess.run([\"$FLEDGE_ROOT/tests/system/python/scripts/install_c_plugin {} south {}\".format(\n                        south_branch, south_plugin)], 
shell=True, check=True)\n            except subprocess.CalledProcessError:\n                assert False, \"{} plugin installation failed\".format(south_plugin)\n\n        if installation_type == 'make':\n            clone_make_install()\n        elif installation_type == 'package':\n            try:\n                subprocess.run([\"sudo {} install -y fledge-south-{}\".format(pytest.PKG_MGR, south_plugin)], shell=True,\n                               check=True)\n            except subprocess.CalledProcessError:\n                assert False, \"{} package installation failed!\".format(south_plugin)\n        else:\n            print(\"Skipped {} plugin installation. Installation mechanism is set to {}.\".format(south_plugin,\n                                                                                                installation_type))\n\n        # Create south service\n        conn.request(\"POST\", '/fledge/service', json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        retval = json.loads(r)\n        assert service_name == retval[\"name\"]\n        return retval\n\n    return _add_fledge_south\n\n\n@pytest.fixture\ndef add_north():\n    def _add_fledge_north(fledge_url, north_plugin, north_branch, installation_type='make', north_instance_name=\"play\",\n                          config=None, schedule_repeat_time=30, \n                          plugin_lang=\"python\", use_pip_cache=True, enabled=True, plugin_discovery_name=None,\n                          is_task=True):\n        \"\"\"Add north plugin and start the service/task by default\"\"\"\n\n        if plugin_discovery_name is None:\n            plugin_discovery_name = north_plugin\n        _config = config if config is not None else {}\n        _enabled = \"true\" if enabled else \"false\"\n\n        conn = http.client.HTTPConnection(fledge_url)\n\n        def clone_make_install():\n            try:\n                if plugin_lang == 
\"python\":\n                    subprocess.run(\n                        [\"$FLEDGE_ROOT/tests/system/python/scripts/install_python_plugin {} north {} {}\".format(\n                            north_branch, north_plugin, use_pip_cache)], shell=True, check=True)\n                else:\n                    subprocess.run([\"$FLEDGE_ROOT/tests/system/python/scripts/install_c_plugin {} north {}\".format(\n                        north_branch, north_plugin)], shell=True, check=True)\n            except subprocess.CalledProcessError:\n                assert False, \"{} plugin installation failed\".format(north_plugin)\n\n        if installation_type == 'make':\n            clone_make_install()\n        elif installation_type == 'package':\n            try:\n                subprocess.run([\"sudo {} install -y fledge-north-{}\".format(pytest.PKG_MGR, north_plugin)], shell=True,\n                               check=True)\n            except subprocess.CalledProcessError:\n                assert False, \"{} package installation failed!\".format(north_plugin)\n        else:\n            print(\"Skipped {} plugin installation. 
Installation mechanism is set to {}.\".format(north_plugin,\n                                                                                                installation_type))\n\n        if is_task:\n            # Create north task\n            data = {\"name\": \"{}\".format(north_instance_name), \"type\": \"north\",\n                    \"plugin\": \"{}\".format(plugin_discovery_name),\n                    \"schedule_enabled\": _enabled, \"schedule_repeat\": \"{}\".format(schedule_repeat_time), \"schedule_type\": \"3\", \"config\": _config}\n            print(data)\n            conn.request(\"POST\", '/fledge/scheduled/task', json.dumps(data))\n        else:\n            # Create north service\n            data = {\"name\": \"{}\".format(north_instance_name), \"type\": \"North\",\n                    \"plugin\": \"{}\".format(plugin_discovery_name),\n                    \"enabled\": _enabled, \"config\": _config}\n            print(data)\n            conn.request(\"POST\", '/fledge/service', json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        retval = json.loads(r)\n        assert north_instance_name == retval[\"name\"]\n        return retval\n\n    return _add_fledge_north\n\n@pytest.fixture\ndef add_service():\n    def _add_service(fledge_url, service, service_branch, retries, installation_type=\"make\", service_name=\"svc@123\",\n                     enabled=True):\n        \n        \"\"\"\n            Fixture to add Service and start the service by default\n            fledge_url: IP address or domain to access fledge\n            service: Service to be installed\n            service_branch: Branch of service to be installed\n            retries: Number of tries for polling\n            installation_type: Type of installation for service i.e. 
make or package\n            service_name: Name that will be given to service to be installed\n            enabled: Flag to enable or disable the service\n        \"\"\"\n        \n        # Check if the service is already installed\n        retval = utils.get_request(fledge_url, \"/fledge/service\")\n        for ele in retval[\"services\"]:\n            if ele[\"type\"].lower() == service:\n                return ele\n        \n        PROJECT_ROOT = Path(__file__).parent.parent.parent.parent\n        \n        # Install Service\n        def clone_make_install():\n            try:\n                subprocess.run([\"{}/tests/system/python/scripts/install_c_service {} {}\".format(\n                    PROJECT_ROOT, service_branch, service)], shell=True, check=True)\n            except subprocess.CalledProcessError:\n                assert False, \"{} service installation failed\".format(service)\n                \n        if installation_type == 'make':\n            clone_make_install()\n        elif installation_type == 'package':\n            try:\n                subprocess.run([\"sudo {} install -y fledge-service-{}\".format(pytest.PKG_MGR, service)], shell=True,\n                            check=True)\n            except subprocess.CalledProcessError:\n                assert False, \"{} package installation failed!\".format(service)\n        else:\n            return(\"Skipped {} service installation. 
Installation mechanism is set to {}.\".format(service, installation_type))\n        \n        # Add Service\n        data = {\"name\": \"{}\".format(service_name), \"type\": \"{}\".format(service), \"enabled\": enabled}\n        retval = utils.post_request(fledge_url, \"/fledge/service\", data)\n        assert service_name == retval[\"name\"]\n        return retval\n        \n    return _add_service\n\n@pytest.fixture\ndef add_notification_instance():\n    def _add_notification_instance(fledge_url, delivery_plugin, delivery_branch, rule_config={}, delivery_config={},\n                                   rule_plugin=\"Threshold\", rule_branch=None, rule_plugin_discovery_name=None,\n                                   delivery_plugin_discovery_name=None, installation_type='make', notification_type=\"one shot\",\n                                   notification_instance_name=\"noti@123\", retrigger_time=30, enabled=True):\n        \"\"\"\n            Fixture to add Notification instance and start the instance by default\n            fledge_url: IP address or domain to access fledge\n            delivery_plugin: Notify or Delivery plugin to be installed\n            delivery_branch: Branch of Notify or Delivery plugin to be installed\n            rule_config: Configuration of Rule plugin\n            delivery_config: Configuration of Delivery plugin\n            rule_plugin: Rule plugin to be installed; the Threshold and DataAvailability plugins are installed by default\n            rule_branch: Branch of Rule plugin to be installed\n            rule_plugin_discovery_name: Name to identify the Rule Plugin after installation\n            delivery_plugin_discovery_name: Name to identify the Delivery Plugin after installation\n            installation_type: Type of installation for plugins i.e. make or package\n            notification_type: Type of notification to be triggered i.e. 
one_shot, retriggered, toggle\n            notification_instance_name: Name that will be given to notification instance to be created\n            retrigger_time: Interval between retriggered notifications\n            enabled: Flag to enable or disable notification instance\n        \"\"\"\n        PROJECT_ROOT = Path(__file__).parent.parent.parent.parent\n\n        if rule_plugin_discovery_name is None:\n            rule_plugin_discovery_name = rule_plugin\n\n        if delivery_plugin_discovery_name is None:\n            delivery_plugin_discovery_name = delivery_plugin\n\n        def clone_make_install(plugin_branch, plugin_type, plugin):\n            try:\n                subprocess.run([\"{}/tests/system/python/scripts/install_c_plugin {} {} {}\".format(\n                    PROJECT_ROOT, plugin_branch, plugin_type, plugin)], shell=True, check=True)\n            except subprocess.CalledProcessError:\n                assert False, \"{} plugin installation failed\".format(plugin)\n\n        if installation_type == 'make':\n            # Install Rule Plugin if it is not Threshold or DataAvailability\n            if rule_plugin not in (\"Threshold\", \"DataAvailability\"):\n                clone_make_install(rule_branch, \"rule\", rule_plugin)\n\n            clone_make_install(delivery_branch, \"notify\", delivery_plugin)\n\n        elif installation_type == 'package':\n            try:\n                if rule_plugin not in [\"Threshold\", \"DataAvailability\"]:\n                    subprocess.run([\"sudo {} install -y fledge-rule-{}\".format(pytest.PKG_MGR, rule_plugin)], shell=True,\n                               check=True)\n\n            except subprocess.CalledProcessError:\n                assert False, \"Package installation of {} failed!\".format(rule_plugin)\n\n            try:\n                subprocess.run([\"sudo {} install -y fledge-notify-{}\".format(pytest.PKG_MGR, 
delivery_plugin)], shell=True,\n                               check=True)\n                \n            except subprocess.CalledProcessError:\n                assert False, \"Package installation of {} failed!\".format(delivery_plugin)\n        else:\n            return(\"Skipped {} and {} plugin installation. Installation mechanism is set to {}.\".format(rule_plugin, delivery_plugin,\n                                                                                                installation_type))\n\n        data = {\n                \"name\": notification_instance_name,\n                \"description\": \"{} notification instance\".format(notification_instance_name),\n                \"rule_config\": rule_config,\n                \"rule\": rule_plugin_discovery_name,\n                \"delivery_config\": delivery_config,\n                \"channel\": delivery_plugin_discovery_name,\n                \"notification_type\": notification_type,\n                \"enabled\": enabled, \n                \"retrigger_time\": \"{}\".format(retrigger_time),\n                }\n        \n        retval = utils.post_request(fledge_url, \"/fledge/notification\", data)\n        assert \"Notification {} created successfully\".format(notification_instance_name) == retval[\"result\"]\n        return retval\n    \n    return _add_notification_instance\n\n@pytest.fixture\ndef start_north_pi_v2():\n    def _start_north_pi_server_c(fledge_url, pi_host, pi_port, pi_token, north_plugin=\"OMF\",\n                                 taskname=\"NorthReadingsToPI\", start_task=True, naming_scheme=\"Backward compatibility\",\n                                 pi_use_legacy=\"true\"):\n        \"\"\"Start north task\"\"\"\n\n        _enabled = \"true\" if start_task else \"false\"\n        conn = http.client.HTTPConnection(fledge_url)\n        data = {\"name\": taskname,\n                \"plugin\": \"{}\".format(north_plugin),\n                \"type\": \"north\",\n                
\"schedule_type\": 3,\n                \"schedule_day\": 0,\n                \"schedule_time\": 0,\n                \"schedule_repeat\": 30,\n                \"schedule_enabled\": _enabled,\n                \"config\": {\"PIServerEndpoint\": {\"value\": \"Connector Relay\"},\n                           \"producerToken\": {\"value\": pi_token},\n                           \"ServerHostname\": {\"value\": pi_host},\n                           \"ServerPort\": {\"value\": str(pi_port)},\n                           \"NamingScheme\": {\"value\": naming_scheme},\n                           \"Legacy\": {\"value\": pi_use_legacy}\n                           }\n                }\n        conn.request(\"POST\", '/fledge/scheduled/task', json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        retval = r.read().decode()\n        return retval\n\n    return _start_north_pi_server_c\n\n\n@pytest.fixture\ndef start_north_task_omf_web_api():\n    def _start_north_task_omf_web_api(fledge_url, pi_host, pi_port, pi_db=\"Dianomic\", auth_method='basic',\n                                      pi_user=None, pi_pwd=None, north_plugin=\"OMF\",\n                                      taskname=\"NorthReadingsToPI_WebAPI\", start_task=True,\n                                      naming_scheme=\"Backward compatibility\",\n                                      default_af_location=\"fledge/room1/machine1\",\n                                      pi_use_legacy=\"true\"):\n        \"\"\"Start north task\"\"\"\n\n        _enabled = True if start_task else False\n        conn = http.client.HTTPConnection(fledge_url)\n        data = {\"name\": taskname,\n                \"plugin\": \"{}\".format(north_plugin),\n                \"type\": \"north\",\n                \"schedule_type\": 3,\n                \"schedule_day\": 0,\n                \"schedule_time\": 0,\n                \"schedule_repeat\": 10,\n                \"schedule_enabled\": _enabled,\n                
\"config\": {\"PIServerEndpoint\": {\"value\": \"PI Web API\"},\n                           \"PIWebAPIAuthenticationMethod\": {\"value\": auth_method},\n                           \"PIWebAPIUserId\": {\"value\": pi_user},\n                           \"PIWebAPIPassword\": {\"value\": pi_pwd},\n                           \"ServerHostname\": {\"value\": pi_host},\n                           \"ServerPort\": {\"value\": str(pi_port)},\n                           \"compression\": {\"value\": \"true\"},\n                           \"DefaultAFLocation\": {\"value\": default_af_location},\n                           \"NamingScheme\": {\"value\": naming_scheme},\n                           \"Legacy\": {\"value\": pi_use_legacy}\n                           }\n                }\n\n        conn.request(\"POST\", '/fledge/scheduled/task', json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        retval = r.read().decode()\n        return retval\n\n    return _start_north_task_omf_web_api\n\n\n@pytest.fixture\ndef start_north_omf_as_a_service():\n    def _start_north_omf_as_a_service(fledge_url, pi_host, pi_port, pi_db=\"Dianomic\", auth_method='basic',\n                                      pi_user=None, pi_pwd=None, north_plugin=\"OMF\",\n                                      service_name=\"NorthReadingsToPI_WebAPI\", start=True,\n                                      naming_scheme=\"Backward compatibility\",\n                                      default_af_location=\"fledge/room1/machine1\",\n                                      pi_use_legacy=\"true\"):\n        \"\"\"Start north service\"\"\"\n\n        _enabled = True if start else False\n        conn = http.client.HTTPConnection(fledge_url)\n        data = {\"name\": service_name,\n                \"plugin\": \"{}\".format(north_plugin),\n                \"enabled\": _enabled,\n                \"type\": \"north\",\n                \"config\": {\"PIServerEndpoint\": {\"value\": \"PI Web 
API\"},\n                           \"PIWebAPIAuthenticationMethod\": {\"value\": auth_method},\n                           \"PIWebAPIUserId\": {\"value\": pi_user},\n                           \"PIWebAPIPassword\": {\"value\": pi_pwd},\n                           \"ServerHostname\": {\"value\": pi_host},\n                           \"ServerPort\": {\"value\": str(pi_port)},\n                           \"compression\": {\"value\": \"true\"},\n                           \"DefaultAFLocation\": {\"value\": default_af_location},\n                           \"NamingScheme\": {\"value\": naming_scheme},\n                           \"Legacy\": {\"value\": pi_use_legacy}\n                           }\n                }\n\n        conn.request(\"POST\", '/fledge/service', json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        retval = json.loads(r.read().decode())\n        return retval\n\n    return _start_north_omf_as_a_service\n\n\nstart_north_pi_server_c = start_north_pi_v2\nstart_north_pi_server_c_web_api = start_north_pi_v2_web_api = start_north_task_omf_web_api\n\n\n@pytest.fixture\ndef read_data_from_pi():\n    def _read_data_from_pi(host, admin, password, pi_database, asset, sensor):\n        \"\"\" This method reads data from pi web api \"\"\"\n\n        # List of pi databases\n        dbs = None\n        # PI logical grouping of attributes and child elements\n        elements = None\n        # List of elements\n        url_elements_list = None\n        # Element's recorded data url\n        url_recorded_data = None\n        # Resources in the PI Web API are addressed by WebID, parameter used for deletion of element\n        web_id = None\n\n        username_password = \"{}:{}\".format(admin, password)\n        username_password_b64 = base64.b64encode(username_password.encode('ascii')).decode(\"ascii\")\n        headers = {'Authorization': 'Basic %s' % username_password_b64}\n\n        try:\n            ctx = 
ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)\n            # With ssl.CERT_NONE as verify_mode, validation errors such as untrusted or expired cert\n            # are ignored and do not abort the TLS/SSL handshake.\n            ctx.verify_mode = ssl.CERT_NONE\n            conn = http.client.HTTPSConnection(host, context=ctx)\n            conn.request(\"GET\", '/piwebapi/assetservers', headers=headers)\n            res = conn.getresponse()\n            r = json.loads(res.read().decode())\n            dbs = r[\"Items\"][0][\"Links\"][\"Databases\"]\n\n            if dbs is not None:\n                conn.request(\"GET\", dbs, headers=headers)\n                res = conn.getresponse()\n                r = json.loads(res.read().decode())\n                for el in r[\"Items\"]:\n                    if el[\"Name\"] == pi_database:\n                        elements = el[\"Links\"][\"Elements\"]\n\n            if elements is not None:\n                conn.request(\"GET\", elements, headers=headers)\n                res = conn.getresponse()\n                r = json.loads(res.read().decode())\n                url_elements_list = r[\"Items\"][0][\"Links\"][\"Elements\"]\n\n            if url_elements_list is not None:\n                conn.request(\"GET\", url_elements_list, headers=headers)\n                res = conn.getresponse()\n                r = json.loads(res.read().decode())\n                items = r[\"Items\"]\n                for el in items:\n                    if el[\"Name\"] == asset:\n                        url_recorded_data = el[\"Links\"][\"RecordedData\"]\n                        web_id = el[\"WebId\"]\n\n            _data_pi = {}\n            if url_recorded_data is not None:\n                conn.request(\"GET\", url_recorded_data, headers=headers)\n                res = conn.getresponse()\n                r = json.loads(res.read().decode())\n                _items = r[\"Items\"]\n                for el in 
_items:\n                    _recorded_value_list = []\n                    for _head in sensor:\n                        if el[\"Name\"] == _head:\n                            elx = el[\"Items\"]\n                            for _el in elx:\n                                _recorded_value_list.append(_el[\"Value\"])\n                            _data_pi[_head] = _recorded_value_list\n\n                # Delete recorded elements\n                conn.request(\"DELETE\", '/piwebapi/elements/{}'.format(web_id), headers=headers)\n                res = conn.getresponse()\n                res.read()\n\n                return _data_pi\n        except Exception:\n            return None\n\n    return _read_data_from_pi\n\n\n@pytest.fixture\ndef clear_pi_system_through_pi_web_api():\n    PROJECT_ROOT = Path(__file__).absolute().parent.parent.parent.parent\n    sys.path.append('{}/tests/system/common'.format(PROJECT_ROOT))\n\n    from clean_pi_system import clear_pi_system_pi_web_api\n\n    return clear_pi_system_pi_web_api\n\n\n@pytest.fixture\ndef verify_hierarchy_and_get_datapoints_from_pi_web_api():\n    def _verify_hierarchy_and_get_datapoints_from_pi_web_api(host, admin, password, pi_database, af_hierarchy_list, asset, sensor):\n        \"\"\" This method verifies that the hierarchy created in PI Web API is correct \"\"\"\n\n        username_password = \"{}:{}\".format(admin, password)\n        username_password_b64 = base64.b64encode(username_password.encode('ascii')).decode(\"ascii\")\n        headers = {'Authorization': 'Basic %s' % username_password_b64}\n        AF_HIERARCHY_LIST = af_hierarchy_list.split('/')[1:]\n        AF_HIERARCHY_COUNT = len(AF_HIERARCHY_LIST)\n\n        try:\n            ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)\n            # With ssl.CERT_NONE as verify_mode, validation errors such as untrusted or expired cert\n            # are ignored and do not abort the TLS/SSL 
handshake.\n            ctx.verify_mode = ssl.CERT_NONE\n            conn = http.client.HTTPSConnection(host, context=ctx)\n            conn.request(\"GET\", '/piwebapi/assetservers', headers=headers)\n            res = conn.getresponse()\n            r = json.loads(res.read().decode())\n            dbs_url = r['Items'][0]['Links']['Databases']\n            print(dbs_url)\n            if dbs_url is not None:\n                conn.request(\"GET\", dbs_url, headers=headers)\n                res = conn.getresponse()\n                r = json.loads(res.read().decode())\n                items = r['Items']\n                CHECK_DATABASE_EXISTS = next(filter(lambda item: item['Name'] == pi_database, items), None)\n\n                if CHECK_DATABASE_EXISTS is not None:\n                    elements_url = CHECK_DATABASE_EXISTS['Links']['Elements']\n                else:\n                    raise Exception('Database does not exist')\n\n                if elements_url is not None:\n                    conn.request(\"GET\", elements_url, headers=headers)\n                    res = conn.getresponse()\n                    r = json.loads(res.read().decode())\n                    items = r['Items']\n\n                    CHECK_AF_ELEMENT_EXISTS = next(filter(lambda item: item['Name'] == AF_HIERARCHY_LIST[0], items), None)\n                    if CHECK_AF_ELEMENT_EXISTS is not None:\n\n                        counter = 0\n                        while counter < AF_HIERARCHY_COUNT:\n                            if CHECK_AF_ELEMENT_EXISTS['Name'] == AF_HIERARCHY_LIST[counter]:\n                                counter += 1\n                                elements_url = CHECK_AF_ELEMENT_EXISTS['Links']['Elements']\n                                conn.request(\"GET\", elements_url, headers=headers)\n                                res = 
conn.getresponse()\n                                CHECK_AF_ELEMENT_EXISTS = json.loads(res.read().decode())['Items'][0]\n                            else:\n                                raise Exception(\"AF hierarchy is incorrect\")\n\n                        record = dict()\n                        if CHECK_AF_ELEMENT_EXISTS['Name'] == asset:\n                            record_url = CHECK_AF_ELEMENT_EXISTS['Links']['RecordedData']\n                            get_record_url = quote(\"{}?limit=10000\".format(record_url), safe='?,=&/.:')\n                            print(get_record_url)\n                            conn.request(\"GET\", get_record_url, headers=headers)\n                            res = conn.getresponse()\n                            items = json.loads(res.read().decode())['Items']\n                            no_of_datapoint_in_pi_server = len(items)\n                            Item_matched = False\n                            count = 0\n                            if no_of_datapoint_in_pi_server == 0:\n                                raise Exception(\"Data points are not created in PI Server\")\n                            else:\n                                for item in items:\n                                    count += 1\n                                    if item['Name'] in sensor:\n                                        print(item['Name'])\n                                        record[item['Name']] = list(map(lambda val: val['Value'], filter(lambda ele: isinstance(ele['Value'], (int, float)), item['Items'])))\n                                        Item_matched = True\n                                    elif count == no_of_datapoint_in_pi_server and not Item_matched:\n                                        raise Exception(\"Required data points are not present --> {}\".format(sensor))\n                        else:\n                            raise Exception(\"Asset does not exist, although hierarchy is correct\")\n\n                        return record\n\n                    else:\n                        raise Exception(\"AF root does not exist\")\n                else:\n                    raise Exception(\"Elements URL not found\")\n            else:\n                raise Exception(\"Database URL not found\")\n\n        except Exception as ex:\n            print(\"Failed to read data due to {}\".format(ex))\n            return None\n\n    return _verify_hierarchy_and_get_datapoints_from_pi_web_api\n\n\n@pytest.fixture\ndef read_data_from_pi_web_api():\n    def _read_data_from_pi_web_api(host, admin, password, pi_database, af_hierarchy_list, asset, sensor):\n        \"\"\" This method reads data from PI Web API \"\"\"\n\n        username_password = \"{}:{}\".format(admin, password)\n        username_password_b64 = base64.b64encode(username_password.encode('ascii')).decode(\"ascii\")\n        headers = {'Authorization': 'Basic %s' % username_password_b64}\n\n        try:\n            ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)\n            # With ssl.CERT_NONE as verify_mode, validation errors such as untrusted or expired cert\n            # are ignored and do not abort the TLS/SSL handshake.\n            ctx.verify_mode = ssl.CERT_NONE\n            conn = http.client.HTTPSConnection(host, context=ctx)\n            conn.request(\"GET\", '/piwebapi/dataservers', headers=headers)\n            res = conn.getresponse()\n            r = json.loads(res.read().decode())\n            points = r[\"Items\"][0]['Links'][\"Points\"]\n\n            if points is not None:\n                conn.request(\"GET\", points, headers=headers)\n                res = conn.getresponse()\n                r = json.loads(res.read().decode())\n                data = r[\"Items\"]\n                if data is not 
None:\n                    value = None\n                    if sensor == '':\n                        search_string = asset\n                    else:\n                        search_string = \"{}.{}\".format(asset, sensor)\n                    for el in data:\n                        if search_string in el[\"Name\"]:\n                            value_url = el[\"Links\"][\"Value\"]\n                            if value_url is not None:\n                                conn.request(\"GET\", value_url, headers=headers)\n                                res = conn.getresponse()\n                                r = json.loads(res.read().decode())\n                                value = r[\"Value\"]\n                    if not value:\n                        print(\"Could not find the latest reading of asset -> {}, sensor -> {}\".format(asset, sensor))\n                        return value\n                    else:\n                        print(\"The latest value of asset -> {}, sensor -> {} is {}\".format(asset, sensor, value))\n                        return value\n                else:\n                    print(\"Data inside points not found.\")\n                    return None\n            else:\n                print(\"Could not find the points.\")\n                return None\n\n        except Exception as ex:\n            print(\"Failed to read data due to {}\".format(ex))\n            return None\n\n    return _read_data_from_pi_web_api\n\n\n@pytest.fixture\ndef read_data_from_pi_asset_server():\n    def _read_data_from_pi_asset_server(host, admin, password, pi_database, asset, sensor, af_hierarchy_list=('fledge', 'room1', 'machine1')):\n        \"\"\"This method reads data from PI Web API asset server\"\"\"\n\n        # Initialize variables\n        dbs = None\n        # PI logical grouping of attributes and child elements\n        elements = None\n        # List of elements\n        
url_elements_list = None\n        # Element's recorded data url\n        url_recorded_data = None\n        # Resources in the PI Web API are addressed by WebID, parameter used for deletion of element\n        web_id = None\n        # URL of source datapoint\n        source_link = None\n\n        # Create the basic authorization header\n        username_password = \"{}:{}\".format(admin, password)\n        username_password_b64 = base64.b64encode(username_password.encode('ascii')).decode(\"ascii\")\n        headers = {'Authorization': 'Basic %s' % username_password_b64}\n\n        try:\n            # Establish a connection to the PI Asset Server\n            conn = http.client.HTTPSConnection(host, context=ssl._create_unverified_context())\n            conn.request(\"GET\", '/piwebapi/assetservers', headers=headers)\n            res = conn.getresponse()\n            r = json.loads(res.read().decode())\n            dbs = r[\"Items\"][0][\"Links\"][\"Databases\"]\n\n            if dbs is not None:\n                conn.request(\"GET\", dbs, headers=headers)\n                res = conn.getresponse()\n                r = json.loads(res.read().decode())\n                for el in r[\"Items\"]:\n                    if el[\"Name\"] == pi_database:\n                        url_elements_list = el[\"Links\"][\"Elements\"]\n            else:\n                print(\"Unable to find Databases\")\n                return None\n            \n            # This block is for iteration when we have multi-level hierarchy.\n            # For example, if we have DefaultAFLocation as \"foglamp/room1/machine1\" then\n            # it will recursively find elements of \"foglamp\" and then \"room1\".\n            # And next block is for finding element of \"machine1\".\n            \n            # Iterate through AF hierarchy list\n            af_level_count = 0\n            for level in af_hierarchy_list:\n                if url_elements_list is not None:\n                    
conn.request(\"GET\", url_elements_list, headers=headers)\n                    res = conn.getresponse()\n                    r = json.loads(res.read().decode())\n                    for el in r[\"Items\"]:\n                        if el[\"Name\"] == af_hierarchy_list[af_level_count]:\n                            url_elements_list = el[\"Links\"][\"Elements\"]\n                            if af_level_count == 0:\n                                web_id_root = el[\"WebId\"]\n                            af_level_count += 1\n\n            if url_elements_list is not None:\n                conn.request(\"GET\", url_elements_list, headers=headers)\n                res = conn.getresponse()\n                r = json.loads(res.read().decode())\n                items = r[\"Items\"]\n                for el in items:\n                    if asset in el[\"Name\"]:\n                        url_recorded_data = el[\"Links\"][\"RecordedData\"]\n                        web_id = el[\"WebId\"]\n\n            _data_pi = {}\n            if url_recorded_data is not None:\n                conn.request(\"GET\", url_recorded_data, headers=headers)\n                res = conn.getresponse()\n                r = json.loads(res.read().decode())\n                _items = r[\"Items\"]\n\n                for el in _items:\n                    _recorded_value_list = list()\n                    if len(sensor) == 1 and el[\"Name\"].endswith(asset):\n                        for itm in el[\"Items\"]:\n                            _recorded_value_list.append(itm[\"Value\"])\n                        # Index rather than pop, to avoid mutating the caller's sensor list\n                        _data_pi[sensor[0]] = _recorded_value_list\n                    else:\n                        for _head in sensor:\n                            if el[\"Name\"].endswith(f\"{asset}.{_head}\"):\n                                for itm in el[\"Items\"]:\n                                    _recorded_value_list.append(itm[\"Value\"])\n                                _data_pi[_head] = _recorded_value_list\n\n            return _data_pi\n\n        except Exception as ex:\n            print(\"Failed to read data due to {}\".format(ex))\n            print(\"Error at line {}\".format(ex.__traceback__.tb_lineno))\n            return None\n\n    return _read_data_from_pi_asset_server\n\n\n@pytest.fixture\ndef add_filter():\n    def _add_filter(filter_plugin, filter_plugin_branch, filter_name, filter_config, fledge_url, filter_user_svc_task,\n                    installation_type='make', only_installation=False):\n        \"\"\"\n\n        :param filter_plugin: filter plugin `fledge-filter-?`\n        :param filter_plugin_branch: branch of the filter plugin repository\n        :param filter_name: name of the filter with which it will be added to pipeline\n        :param filter_config: configuration for the filter plugin\n        :param fledge_url: Fledge API URL\n        :param filter_user_svc_task: south service or north task instance name\n        \"\"\"\n\n        if installation_type == 'make':\n            try:\n                subprocess.run([\"$FLEDGE_ROOT/tests/system/python/scripts/install_c_plugin {} filter {}\".format(\n                    filter_plugin_branch, filter_plugin)], shell=True, check=True)\n            except subprocess.CalledProcessError:\n                assert False, \"{} filter plugin installation failed\".format(filter_plugin)\n        elif installation_type == 'package':\n            try:\n                subprocess.run([\"sudo {} install -y fledge-filter-{}\".format(pytest.PKG_MGR, filter_plugin)],\n                               shell=True, check=True)\n            except subprocess.CalledProcessError:\n                assert False, \"{} package installation failed!\".format(filter_plugin)\n        else:\n            print(\"Skipped {} plugin installation. 
Installation mechanism is set to {}.\".format(filter_plugin,\n                                                                                                installation_type))\n\n        if only_installation:\n            return\n\n        data = {\"name\": \"{}\".format(filter_name), \"plugin\": \"{}\".format(filter_plugin), \"filter_config\": filter_config}\n        conn = http.client.HTTPConnection(fledge_url)\n\n        conn.request(\"POST\", '/fledge/filter', json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert filter_name == jdoc[\"filter\"]\n\n        uri = \"{}/pipeline?allow_duplicates=true&append_filter=true\".format(quote(filter_user_svc_task))\n        filters_in_pipeline = [filter_name]\n        conn.request(\"PUT\", '/fledge/filter/' + uri, json.dumps({\"pipeline\": filters_in_pipeline}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        res = r.read().decode()\n        jdoc = json.loads(res)\n        # Assert that the newly added filter exists in the request's response\n        assert filter_name in jdoc[\"result\"]\n        return jdoc\n\n    return _add_filter\n\n\n@pytest.fixture\ndef enable_schedule():\n    def _enable_sch(fledge_url, sch_name):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/schedule/enable', json.dumps({\"schedule_name\": sch_name}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"scheduleId\" in jdoc\n        return jdoc\n\n    return _enable_sch\n\n\n@pytest.fixture\ndef disable_schedule():\n    def _disable_sch(fledge_url, sch_name):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/schedule/disable', json.dumps({\"schedule_name\": sch_name}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = 
r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc[\"status\"]\n        return jdoc\n\n    return _disable_sch\n\n\ndef pytest_addoption(parser):\n    parser.addoption(\"--storage-plugin\", action=\"store\", default=\"sqlite\",\n                     help=\"Database plugin to use for tests\")\n    parser.addoption(\"--readings-plugin\", action=\"store\", default=\"Use main plugin\",\n                     help=\"Readings plugin to use for tests\")\n    parser.addoption(\"--fledge-url\", action=\"store\", default=\"localhost:8081\",\n                     help=\"Fledge client API URL\")\n    parser.addoption(\"--use-pip-cache\", action=\"store\", default=False,\n                     help=\"use pip cache if requirement is available\")\n    parser.addoption(\"--wait-time\", action=\"store\", default=5, type=int,\n                     help=\"Generic wait time between processes to run\")\n    parser.addoption(\"--wait-fix\", action=\"store\", default=0, type=int,\n                     help=\"Extra wait time required for process to run\")\n    parser.addoption(\"--retries\", action=\"store\", default=3, type=int,\n                     help=\"Number of tries for polling\")\n    # TODO: Temporary fixture, to be used with value False for environments where PI Web API is not stable\n    parser.addoption(\"--skip-verify-north-interface\", action=\"store_false\",\n                     help=\"Verify data from external north system api\")\n\n    parser.addoption(\"--remote-user\", action=\"store\", default=\"ubuntu\",\n                     help=\"Username on remote machine where Fledge will run\")\n    parser.addoption(\"--remote-ip\", action=\"store\", default=\"127.0.0.1\",\n                     help=\"IP of remote machine where Fledge will run\")\n    parser.addoption(\"--key-path\", action=\"store\", default=\"~/.ssh/id_rsa.pub\",\n                     help=\"Path of key file used for authentication to remote machine\")\n    
parser.addoption(\"--remote-fledge-path\", action=\"store\",\n                     help=\"Path on the remote machine where Fledge is cloned and built\")\n\n    # South/North Args\n    parser.addoption(\"--south-branch\", action=\"store\", default=\"develop\",\n                     help=\"south branch name\")\n    parser.addoption(\"--north-branch\", action=\"store\", default=\"develop\",\n                     help=\"north branch name\")\n    parser.addoption(\"--south-service-name\", action=\"store\", default=\"southSvc #1\",\n                     help=\"Name of the South Service\")\n    parser.addoption(\"--asset-name\", action=\"store\", default=\"SystemTest\",\n                     help=\"Name of asset\")\n    parser.addoption(\"--num-assets\", action=\"store\", default=300, type=int,\n                     help=\"Total number of assets that need to be created\")\n    parser.addoption(\"--north-historian\", action=\"store\", default=\"EdgeDataStore\",\n                     help=\"Name of the North Historian to which the data will be sent\")\n\n    # Filter Args\n    parser.addoption(\"--filter-branch\", action=\"store\", default=\"develop\", help=\"Filter plugin repo branch\")\n    parser.addoption(\"--filter-name\", action=\"store\", default=\"Meta #1\", help=\"Filter name to be added to pipeline\")\n\n    # External Services Arg fledge-service-* e.g. 
fledge-service-notification\n    parser.addoption(\"--service-branch\", action=\"store\", default=\"develop\",\n                     help=\"service branch name\")\n    # Notify Arg\n    parser.addoption(\"--notify-branch\", action=\"store\", default=\"develop\", help=\"Notify plugin repo branch\")\n\n    # PI Config\n    parser.addoption(\"--pi-host\", action=\"store\", default=\"pi-server\",\n                     help=\"PI Server Host Name/IP\")\n    parser.addoption(\"--pi-port\", action=\"store\", default=\"5460\", type=int,\n                     help=\"PI Server Port\")\n    parser.addoption(\"--pi-db\", action=\"store\", default=\"pi-server-db\",\n                     help=\"PI Server database\")\n    parser.addoption(\"--pi-admin\", action=\"store\", default=\"pi-server-uid\",\n                     help=\"PI Server user login\")\n    parser.addoption(\"--pi-passwd\", action=\"store\", default=\"pi-server-pwd\",\n                     help=\"PI Server user login password\")\n    parser.addoption(\"--pi-token\", action=\"store\", default=\"omf_north_0001\",\n                     help=\"OMF Producer Token\")\n    parser.addoption(\"--pi-use-legacy\", action=\"store\", default=\"true\",\n                     help=\"Set false to override the default plugin behaviour i.e. 
for OMF version >=1.2.x to send linked data types.\")\n\n    # OCS Config\n    parser.addoption(\"--ocs-tenant\", action=\"store\", default=\"ocs_tenant_id\",\n                     help=\"Tenant id of OCS\")\n    parser.addoption(\"--ocs-client-id\", action=\"store\", default=\"ocs_client_id\",\n                     help=\"Client id of OCS account\")\n    parser.addoption(\"--ocs-client-secret\", action=\"store\", default=\"ocs_client_secret\",\n                     help=\"Client Secret of OCS account\")\n    parser.addoption(\"--ocs-namespace\", action=\"store\", default=\"ocs_namespace_0001\",\n                     help=\"OCS namespace where the information is stored\")\n    parser.addoption(\"--ocs-token\", action=\"store\", default=\"ocs_north_0001\",\n                     help=\"Token of OCS account\")\n\n    # Kafka Config\n    parser.addoption(\"--kafka-host\", action=\"store\", default=\"localhost\",\n                     help=\"Kafka Server Host Name/IP\")\n    parser.addoption(\"--kafka-port\", action=\"store\", default=\"9092\", type=int,\n                     help=\"Kafka Server Port\")\n    parser.addoption(\"--kafka-topic\", action=\"store\", default=\"Fledge\", help=\"Kafka topic\")\n    parser.addoption(\"--kafka-rest-port\", action=\"store\", default=\"8082\", help=\"Kafka Rest Proxy Port\")\n\n    # Modbus Config\n    parser.addoption(\"--modbus-host\", action=\"store\", default=\"localhost\", help=\"Modbus simulator host\")\n    parser.addoption(\"--modbus-port\", action=\"store\", default=\"502\", type=int, help=\"Modbus simulator port\")\n    parser.addoption(\"--modbus-serial-port\", action=\"store\", default=\"/dev/ttyS1\", help=\"Modbus serial port\")\n    parser.addoption(\"--modbus-baudrate\", action=\"store\", default=\"9600\", type=int, help=\"Serial port baudrate\")\n\n    # Packages\n    parser.addoption(\"--package-build-version\", action=\"store\", default=\"nightly\",\n                     help=\"Package build version for 
http://archives.fledge-iot.org\")\n    parser.addoption(\"--package-build-list\", action=\"store\", default=\"p0\",\n                     help=\"Package to build as per key defined in tests/system/python/packages/data/package_list.json and comma separated values are accepted if more than one to build with\")\n    parser.addoption(\"--package-build-source-list\", action=\"store\", default=\"false\",\n                     help=\"Package to build from apt/yum sources list\")\n    parser.addoption(\"--exclude-packages-list\", action=\"store\", default=\"None\",\n                     help=\"Packages to be excluded from test e.g. --exclude-packages-list=fledge-south-sinusoid,fledge-filter-log\")\n\n    # Config required for testing Fledge under an impaired network.\n\n    parser.addoption(\"--south-service-wait-time\", action=\"store\", type=int, default=20,\n                     help=\"The time in seconds for which the south service should keep on \"\n                          \"sending data. After this time the south service will shut down.\")\n\n    parser.addoption(\"--north-catch-up-time\", action=\"store\", type=int, default=30,\n                     help=\"The time in seconds we will allow the north task/service\"\n                          \" to keep on running \"\n                          \"after switching off the south service.\")\n\n    parser.addoption('--throttled-network-config', action='store', type=json.loads,\n                     help=\"Give config '{'rate_limit': '100', \"\n                          \"'packet_delay': '50', \"\n                          \"'interface': 'eth0'}' \"\n                          \"for causing a delay of 50 milliseconds \"\n                          \"and rate restriction of 100 kbps on interface eth0.\")\n\n    parser.addoption(\"--start-north-as-service\", action=\"store\", type=bool, default=True,\n                     help=\"Whether to start the north as a service.\")\n\n    # Fogbench Config\n    parser.addoption(\"--fogbench-host\", action=\"store\", default=\"localhost\",\n                     help=\"FogBench Destination Host Address\")\n\n    parser.addoption(\"--fogbench-port\", action=\"store\", default=\"5683\", type=int,\n                     help=\"FogBench Destination Port\")\n\n    # Azure-IoT Config\n    parser.addoption(\"--azure-host\", action=\"store\", default=\"azure-server\",\n                     help=\"Azure-IoT Host Name\")\n\n    parser.addoption(\"--azure-device\", action=\"store\", default=\"azure-iot-device\",\n                     help=\"Azure-IoT Device ID\")\n\n    parser.addoption(\"--azure-key\", action=\"store\", default=\"azure-iot-key\",\n                     help=\"Azure-IoT SharedAccess key\")\n\n    parser.addoption(\"--azure-storage-account-url\", action=\"store\", default=\"azure-storage-account-url\",\n                     help=\"Azure Storage Account URL\")\n\n    
parser.addoption(\"--azure-storage-account-key\", action=\"store\", default=\"azure-storage-account-key\",\n                     help=\"Azure Storage Account Access Key\")\n\n    parser.addoption(\"--azure-storage-container\", action=\"store\", default=\"azure_storage_container\",\n                     help=\"Container Name in Azure where data is stored\")\n\n    parser.addoption(\"--run-time\", action=\"store\", default=\"60\",\n                     help=\"The number of minutes for which a test should run\")\n\n    parser.addoption(\"--plugin-name\", action=\"store\", default=\"sinusoid\",\n                     help=\"The name of the plugin\")\n\n    parser.addoption(\"--plugin-language\", action=\"store\", default=\"python\",\n                     help=\"Language of the plugin (python/c)\")\n\n\n@pytest.fixture\ndef plugin_name(request):\n    return request.config.getoption(\"--plugin-name\")\n\n\n@pytest.fixture\ndef plugin_language(request):\n    return request.config.getoption(\"--plugin-language\")\n\n\n@pytest.fixture\ndef num_assets(request):\n    return request.config.getoption(\"--num-assets\")\n\n\n@pytest.fixture\ndef storage_plugin(request):\n    return request.config.getoption(\"--storage-plugin\")\n\n\n@pytest.fixture\ndef readings_plugin(request):\n    return request.config.getoption(\"--readings-plugin\")\n\n\n@pytest.fixture\ndef remote_user(request):\n    return request.config.getoption(\"--remote-user\")\n\n\n@pytest.fixture\ndef remote_ip(request):\n    return request.config.getoption(\"--remote-ip\")\n\n\n@pytest.fixture\ndef key_path(request):\n    return request.config.getoption(\"--key-path\")\n\n\n@pytest.fixture\ndef remote_fledge_path(request):\n    return request.config.getoption(\"--remote-fledge-path\")\n\n\n@pytest.fixture\ndef skip_verify_north_interface(request):\n    return not request.config.getoption(\"--skip-verify-north-interface\")\n\n\n@pytest.fixture\ndef south_branch(request):\n    return 
request.config.getoption(\"--south-branch\")\n\n\n@pytest.fixture\ndef north_branch(request):\n    return request.config.getoption(\"--north-branch\")\n\n\n@pytest.fixture\ndef service_branch(request):\n    return request.config.getoption(\"--service-branch\")\n\n\n@pytest.fixture\ndef filter_branch(request):\n    return request.config.getoption(\"--filter-branch\")\n\n\n@pytest.fixture\ndef notify_branch(request):\n    return request.config.getoption(\"--notify-branch\")\n\n\n@pytest.fixture\ndef use_pip_cache(request):\n    return request.config.getoption(\"--use-pip-cache\")\n\n\n@pytest.fixture\ndef filter_name(request):\n    return request.config.getoption(\"--filter-name\")\n\n\n@pytest.fixture\ndef south_service_name(request):\n    return request.config.getoption(\"--south-service-name\")\n\n\n@pytest.fixture\ndef asset_name(request):\n    return request.config.getoption(\"--asset-name\")\n\n\n@pytest.fixture\ndef fledge_url(request):\n    return request.config.getoption(\"--fledge-url\")\n\n@pytest.fixture\ndef authentication(request):\n    return \"optional\"\n\n@pytest.fixture\ndef wait_time(request):\n    return request.config.getoption(\"--wait-time\")\n\n@pytest.fixture\ndef wait_fix(request):\n    return request.config.getoption(\"--wait-fix\")\n    \n@pytest.fixture\ndef retries(request):\n    return request.config.getoption(\"--retries\")\n\n@pytest.fixture\ndef north_historian(request):\n    return request.config.getoption(\"--north-historian\")\n\n@pytest.fixture\ndef pi_host(request):\n    return request.config.getoption(\"--pi-host\")\n\n\n@pytest.fixture\ndef pi_port(request):\n    return request.config.getoption(\"--pi-port\")\n\n\n@pytest.fixture\ndef pi_db(request):\n    return request.config.getoption(\"--pi-db\")\n\n\n@pytest.fixture\ndef pi_admin(request):\n    return request.config.getoption(\"--pi-admin\")\n\n\n@pytest.fixture\ndef pi_passwd(request):\n    return request.config.getoption(\"--pi-passwd\")\n\n\n@pytest.fixture\ndef 
pi_token(request):\n    return request.config.getoption(\"--pi-token\")\n\n\n@pytest.fixture\ndef pi_use_legacy(request):\n    return request.config.getoption(\"--pi-use-legacy\")\n\n\n@pytest.fixture\ndef ocs_tenant(request):\n    return request.config.getoption(\"--ocs-tenant\")\n\n\n@pytest.fixture\ndef ocs_client_id(request):\n    return request.config.getoption(\"--ocs-client-id\")\n\n\n@pytest.fixture\ndef ocs_client_secret(request):\n    return request.config.getoption(\"--ocs-client-secret\")\n\n\n@pytest.fixture\ndef ocs_namespace(request):\n    return request.config.getoption(\"--ocs-namespace\")\n\n\n@pytest.fixture\ndef ocs_token(request):\n    return request.config.getoption(\"--ocs-token\")\n\n\n@pytest.fixture\ndef kafka_host(request):\n    return request.config.getoption(\"--kafka-host\")\n\n\n@pytest.fixture\ndef kafka_port(request):\n    return request.config.getoption(\"--kafka-port\")\n\n\n@pytest.fixture\ndef kafka_topic(request):\n    return request.config.getoption(\"--kafka-topic\")\n\n\n@pytest.fixture\ndef kafka_rest_port(request):\n    return request.config.getoption(\"--kafka-rest-port\")\n\n\n@pytest.fixture\ndef modbus_host(request):\n    return request.config.getoption(\"--modbus-host\")\n\n\n@pytest.fixture\ndef modbus_port(request):\n    return request.config.getoption(\"--modbus-port\")\n\n\n@pytest.fixture\ndef modbus_serial_port(request):\n    return request.config.getoption(\"--modbus-serial-port\")\n\n\n@pytest.fixture\ndef modbus_baudrate(request):\n    return request.config.getoption(\"--modbus-baudrate\")\n\n\n@pytest.fixture\ndef package_build_version(request):\n    return request.config.getoption(\"--package-build-version\")\n\n\n@pytest.fixture\ndef package_build_list(request):\n    return request.config.getoption(\"--package-build-list\")\n\n\n@pytest.fixture\ndef package_build_source_list(request):\n    return request.config.getoption(\"--package-build-source-list\")\n\n\n@pytest.fixture\ndef 
exclude_packages_list(request):\n    return request.config.getoption(\"--exclude-packages-list\")\n\n\ndef pytest_itemcollected(item):\n    par = item.parent.obj\n    node = item.obj\n    pref = par.__doc__.strip() if par.__doc__ else par.__class__.__name__\n    suf = node.__doc__.strip() if node.__doc__ else node.__name__\n    if pref or suf:\n        item._nodeid = ' '.join((pref, suf))\n\n\n# Parameters required for testing Fledge under an impaired or noisy network.\n@pytest.fixture\ndef south_service_wait_time(request):\n    return request.config.getoption(\"--south-service-wait-time\")\n\n\n@pytest.fixture\ndef north_catch_up_time(request):\n    return request.config.getoption(\"--north-catch-up-time\")\n\n\n@pytest.fixture\ndef throttled_network_config(request):\n    return request.config.getoption(\"--throttled-network-config\")\n\n\n@pytest.fixture\ndef start_north_as_service(request):\n    return request.config.getoption(\"--start-north-as-service\")\n\n\ndef read_os_release():\n    \"\"\" Read general information identifying the operating system from /etc/os-release \"\"\"\n    import ast\n    import re\n    os_details = {}\n    with open('/etc/os-release', encoding=\"utf-8\") as f:\n        for line in f:\n            line = line.rstrip()\n            if not line or line.startswith('#'):\n                continue\n            m = re.match(r'([A-Z][A-Z_0-9]+)=(.*)', line)\n            if m:\n                name, val = m.groups()\n                if val and val[0] in '\"\\'':\n                    val = ast.literal_eval(val)\n                os_details.update({name: val})\n    return os_details\n\n\ndef is_redhat_based():\n    \"\"\"\n        Check whether the operating system belongs to the Red Hat family.\n        Examples:\n            a) For an operating system with \"ID=centos\", an assignment of \"ID_LIKE=\"rhel fedora\"\" is appropriate\n            b) For an operating system with \"ID=ubuntu/raspbian\", an assignment of \"ID_LIKE=debian\" is 
appropriate.\n    \"\"\"\n    os_release = read_os_release()\n    id_like = os_release.get('ID_LIKE')\n    if id_like is not None and any(x in id_like.lower() for x in ['centos', 'rhel', 'redhat', 'fedora']):\n        return True\n    return False\n\n\ndef pytest_configure():\n    pytest.OS_PLATFORM_DETAILS = read_os_release()\n    pytest.IS_REDHAT = is_redhat_based()\n    pytest.PKG_MGR = 'yum' if pytest.IS_REDHAT else 'apt'\n\n\ndef restart_and_wait_for_fledge(fledge_url, wait_time, auth_token=None, custom_port=None, https_enabled=False):\n    \"\"\" Restarts the Fledge service and waits until it becomes responsive\n\n    Args:\n        fledge_url (str): base fledge url\n        wait_time (int): Seconds between retries\n        auth_token (str): Authorization Token (Optional)\n        custom_port (int, optional): Custom port. Defaults to None.\n        https_enabled (bool, optional): Whether to use HTTPS instead of HTTP. Defaults to False.\n    Raises:\n        AssertionError: If Fledge failed to restart\n\n    Returns:\n        JSON Document\n    \"\"\"\n    import ssl\n    from contextlib import closing\n    headers = {\"authorization\": auth_token} if auth_token else {}\n        \n    with closing(http.client.HTTPConnection(fledge_url)) as connection:\n        connection.request(\"PUT\", '/fledge/restart', headers=headers, body=json.dumps({}))\n        response = connection.getresponse()\n        assert response.status == 200\n        response_data = response.read().decode()\n        jdoc = json.loads(response_data)\n        assert \"Fledge restart has been scheduled.\" == jdoc['message']\n\n    print(f\"Waiting for Fledge to restart... 
(Initial wait: {wait_time}s)\")\n    time.sleep(wait_time)\n    start_time = time.time()\n    max_retries = 5\n    \n    # After restart, Fledge runs on default ports, not the original URL port\n    host = fledge_url.split(':')[0]\n    port = custom_port if custom_port is not None else (1995 if https_enabled else 8081)\n    \n    for attempt in range(max_retries):\n        try:\n            # Open a fresh connection for every attempt; closing() shuts the socket\n            # down after each try, so a single shared connection object would be stale.\n            if https_enabled:\n                connection = http.client.HTTPSConnection(host, port, context=ssl._create_unverified_context())\n            else:\n                connection = http.client.HTTPConnection(host, port)\n            with closing(connection) as conn:\n                conn.request(\"GET\", \"/fledge/ping\", headers=headers)\n                response = conn.getresponse()\n                if response.status == 200:\n                    response_data = response.read().decode()\n                    jdoc = json.loads(response_data)\n                    break\n                elif response.status == 401:\n                    jdoc = {\"message\": \"Unauthorized\"}\n                    break\n                else:\n                    print(f\"Attempt {attempt + 1}: Got HTTP status {response.status}\")\n        except Exception as e:\n            print(f\"Attempt {attempt + 1}: Connection failed - {type(e).__name__}: {e}\")\n        \n        if attempt < max_retries - 1:\n            sleep_time = wait_time * 5\n            print(f\"Waiting {sleep_time}s before next attempt...\")\n            time.sleep(sleep_time)\n    else:\n        elapsed = round(time.time() - start_time, 2)\n        raise AssertionError(f\"Failed to restart Fledge after {elapsed} seconds.\")\n    \n    time.sleep(wait_time * 5)\n    return jdoc\n\n\n@pytest.fixture\ndef fogbench_host(request):\n    return request.config.getoption(\"--fogbench-host\")\n\n\n@pytest.fixture\ndef fogbench_port(request):\n    return 
request.config.getoption(\"--fogbench-port\")\n\n\n@pytest.fixture\ndef azure_host(request):\n    return request.config.getoption(\"--azure-host\")\n\n\n@pytest.fixture\ndef azure_device(request):\n    return request.config.getoption(\"--azure-device\")\n\n\n@pytest.fixture\ndef azure_key(request):\n    return request.config.getoption(\"--azure-key\")\n\n\n@pytest.fixture\ndef azure_storage_account_url(request):\n    return request.config.getoption(\"--azure-storage-account-url\")\n\n\n@pytest.fixture\ndef azure_storage_account_key(request):\n    return request.config.getoption(\"--azure-storage-account-key\")\n\n\n@pytest.fixture\ndef azure_storage_container(request):\n    return request.config.getoption(\"--azure-storage-container\")\n\n\n@pytest.fixture\ndef run_time(request):\n    return request.config.getoption(\"--run-time\")\n"
  },
  {
    "path": "tests/system/python/data/dummyplugin.py",
    "content": "# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Async Plugin used for testing purpose \"\"\"\nimport asyncio\nimport copy\nimport uuid\nimport logging\nimport async_ingest\n\nfrom fledge.common import logger\nfrom fledge.services.south import exceptions\nfrom threading import Thread\nfrom datetime import datetime, timezone, timedelta\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nc_callback = None\nc_ingest_ref = None\nloop = None\n_task = None\nt = None\n\n_DEFAULT_CONFIG = {\n    'plugin': {\n        'description': 'Test Async Plugin',\n        'type': 'string',\n        'default': 'dummyplugin',\n        'readonly': 'true'\n    },\n    'assetPrefix': {\n        'description': 'Prefix of asset name',\n        'type': 'string',\n        'default': 'test-',\n        'order': '1',\n        'displayName': 'Asset Name Prefix'\n    },\n    'loudnessAssetName': {\n        'description': 'Loudness sensor asset name',\n        'type': 'string',\n        'default': 'loudness',\n        'order': '3',\n        'displayName': 'Loudness Sensor Asset Name'\n    }\n}\n\n_LOGGER = logger.setup(__name__, level=logging.INFO)\n\n\ndef plugin_info():\n    \"\"\" Returns information about the plugin.\n    Args:\n    Returns:\n        dict: plugin information\n    Raises:\n    \"\"\"\n    return {\n        'name': 'TEST Async Plugin',\n        'version': '2.0.0',\n        'mode': 'async',\n        'type': 'south',\n        'interface': '1.0',\n        'config': _DEFAULT_CONFIG\n    }\n\n\ndef plugin_init(config):\n    \"\"\" Initialise the plugin.\n    Args:\n        config: JSON configuration document for the South plugin configuration category\n    Returns:\n        data: JSON object to be used in future calls to the plugin\n    Raises:\n    \"\"\"\n    handle = copy.deepcopy(config)\n    return handle\n\n\ndef 
plugin_start(handle):\n    \"\"\" Starts data ingestion for the plugin.\n    Spawns a thread running an asyncio event loop that feeds readings to Fledge.\n    Available for async mode only.\n    Args:\n        handle: handle returned by the plugin initialisation call\n    Returns:\n        None\n    Raises:\n    \"\"\"\n    global _task, loop, t\n    loop = asyncio.new_event_loop()\n    _task = asyncio.ensure_future(_start_aiotest(handle), loop=loop)\n\n    def run():\n        global loop\n        loop.run_forever()\n\n    t = Thread(target=run)\n    t.start()\n\n\nasync def _start_aiotest(handle):\n    # This plugin adds 6 data points: 2 within the same minute, 2 within the same hour and 2 within the same day.\n    # This data is useful when testing asset browsing based on timestamps.\n    ts_lst = list()\n    ts_lst.append(str(datetime.now(timezone.utc).astimezone()))\n    ts_lst.append(str(datetime.now(timezone.utc).astimezone() - timedelta(seconds=3)))\n    ts_lst.append(str(datetime.now(timezone.utc).astimezone() - timedelta(minutes=5)))\n    ts_lst.append(str(datetime.now(timezone.utc).astimezone() - timedelta(minutes=6)))\n    ts_lst.append(str(datetime.now(timezone.utc).astimezone() - timedelta(hours=3)))\n    ts_lst.append(str(datetime.now(timezone.utc).astimezone() - timedelta(hours=5)))\n\n    i = 1\n    for user_ts in ts_lst:\n        try:\n            data = list()\n            data.append({\n                'asset': '{}{}'.format(handle['assetPrefix']['value'], handle['loudnessAssetName']['value']),\n                'timestamp': user_ts,\n                'key': str(uuid.uuid4()),\n                'readings': {\"loudness\": i}\n            })\n            async_ingest.ingest_callback(c_callback, c_ingest_ref, data)\n            await asyncio.sleep(0.1)\n\n        except (Exception, RuntimeError) as ex:\n            _LOGGER.exception(\"TEST exception: 
{}\".format(str(ex)))\n            raise exceptions.DataRetrievalError(ex)\n        else:\n            i += 1\n\n\ndef plugin_register_ingest(handle, callback, ingest_ref):\n    \"\"\"Required plugin interface component to communicate to South C server\n\n    Args:\n        handle: handle returned by the plugin initialisation call\n        callback: C opaque object required to be passed back to the C->ingest method\n        ingest_ref: C opaque object required to be passed back to the C->ingest method\n    \"\"\"\n    global c_callback, c_ingest_ref\n    c_callback = callback\n    c_ingest_ref = ingest_ref\n\n\ndef plugin_reconfigure(handle, new_config):\n    \"\"\" Reconfigures the plugin\n\n    Args:\n        handle: handle returned by the plugin initialisation call\n        new_config: JSON object representing the new configuration for the category\n    Returns:\n        new_handle: new handle to be used in future calls\n    \"\"\"\n    _LOGGER.info(\"Old config for TEST plugin {} \\n new config {}\".format(handle, new_config))\n    new_handle = copy.deepcopy(new_config)\n    return new_handle\n\n\ndef plugin_shutdown(handle):\n    \"\"\" Shuts down the plugin, doing required cleanup; to be called prior to the South plugin service being shut down.\n\n    Args:\n        handle: handle returned by the plugin initialisation call\n    Returns:\n        plugin shutdown\n    \"\"\"\n    _LOGGER.info('TEST plugin shut down.')\n\n"
  },
  {
    "path": "tests/system/python/data/notify35.py",
    "content": "import logging\nfrom logging.handlers import SysLogHandler\n\n\ndef notify35(message):\n    logger = logging.getLogger(__name__)\n    logger.setLevel(level=logging.INFO)\n    handler = SysLogHandler(address='/dev/log')\n    logger.addHandler(handler)\n\n    logger.info(\"notify35 called with {}\".format(message))\n    print(\"Notification alert: \" + str(message))\n"
  },
  {
    "path": "tests/system/python/data/vibration.csv",
    "content": "2,3,4,5,6\n"
  },
  {
    "path": "tests/system/python/data/wind-data.csv",
    "content": "10 Min Std Dev,Time,10 Min Sampled Avg\n2.73,2001-06-11 11:00,22.3\n1.98,2001-06-11 11:10,23\n1.87,2001-06-11 11:20,23.3\n2.03,2001-06-11 11:30,22\n3.1,2001-06-11 11:40,20.5\n2.3,2001-06-11 11:50,25.2\n2.46,2001-06-11 12:00,24.8\n1.87,2001-06-11 12:10,24\n1.71,2001-06-11 12:20,22.9\n1.76,2001-06-11 12:30,17.9\n1.87,2001-06-11 12:40,20.3\n1.66,2001-06-11 12:50,21.5\n2.09,2001-06-11 13:00,21.3\n1.6,2001-06-11 13:10,21.3\n1.44,2001-06-11 13:20,21.3\n3.69,2001-06-11 13:30,22.4\n2.78,2001-06-11 13:40,25.4\n2.14,2001-06-11 13:50,25.9\n2.67,2001-06-11 14:00,25.5\n2.57,2001-06-11 14:10,25.7\n2.35,2001-06-11 14:20,23.9\n2.67,2001-06-11 14:30,24.7\n2.46,2001-06-11 14:40,26.3\n2.73,2001-06-11 14:50,26.9\n2.19,2001-06-11 15:00,29\n3.32,2001-06-11 15:10,24.4\n1.98,2001-06-11 15:20,21.4\n2.46,2001-06-11 15:30,20\n1.76,2001-06-11 15:40,18.3\n1.82,2001-06-11 15:50,17.1\n2.3,2001-06-11 16:00,17.6\n1.98,2001-06-11 16:10,16.7\n2.3,2001-06-11 16:20,11.4\n2.62,2001-06-11 16:30,16.6\n2.3,2001-06-11 16:40,16.1\n2.09,2001-06-11 16:50,16\n2.51,2001-06-11 17:00,17\n2.19,2001-06-11 17:10,17.2\n2.62,2001-06-11 17:20,17.1\n2.94,2001-06-11 17:30,16.8\n3.05,2001-06-11 17:40,16.6\n2.62,2001-06-11 17:50,17.5\n2.09,2001-06-11 18:00,17.1\n2.03,2001-06-11 18:10,18.6\n2.03,2001-06-11 18:20,18.4\n2.73,2001-06-11 18:30,15\n2.19,2001-06-11 18:40,16\n2.14,2001-06-11 18:50,16.5\n2.41,2001-06-11 19:00,18.7\n2.62,2001-06-11 19:10,16.7\n2.3,2001-06-11 19:20,14.7\n2.09,2001-06-11 19:30,13.6\n2.89,2001-06-11 19:40,16\n2.73,2001-06-11 19:50,15.9\n2.14,2001-06-11 20:00,16.2\n2.46,2001-06-11 20:10,17.7\n2.14,2001-06-11 20:20,16.2\n1.87,2001-06-11 20:30,15.9\n2.19,2001-06-11 20:40,15.4\n2.35,2001-06-11 20:50,18.3\n2.41,2001-06-11 21:00,18.1\n2.62,2001-06-11 21:10,19.2\n2.3,2001-06-11 21:20,17.5\n1.92,2001-06-11 21:30,16\n2.14,2001-06-11 21:40,14.9\n1.98,2001-06-11 21:50,15.4\n1.55,2001-06-11 22:00,14.7\n1.87,2001-06-11 22:10,15.5\n2.41,2001-06-11 22:20,17.4\n2.41,2001-06-11 
22:30,14.1\n1.82,2001-06-11 22:40,14\n2.25,2001-06-11 22:50,13\n2.3,2001-06-11 23:00,14.7\n1.98,2001-06-11 23:10,14.2\n2.25,2001-06-11 23:20,15\n2.19,2001-06-11 23:30,14.3\n1.98,2001-06-11 23:40,14.7\n2.09,2001-06-11 23:50,15.9\n2.09,2001-06-12 00:00,17.7\n2.03,2001-06-12 00:10,16.1\n2.03,2001-06-12 00:20,15.3\n1.76,2001-06-12 00:30,15.2\n1.98,2001-06-12 00:40,14.7\n2.14,2001-06-12 00:50,15.9\n2.09,2001-06-12 01:00,14.8\n2.03,2001-06-12 01:10,15.2\n1.92,2001-06-12 01:20,14.5\n2.09,2001-06-12 01:30,15.1\n1.92,2001-06-12 01:40,15.2\n2.19,2001-06-12 01:50,15.3\n2.03,2001-06-12 02:00,15.2\n2.09,2001-06-12 02:10,14.9\n2.35,2001-06-12 02:20,15.4\n1.98,2001-06-12 02:30,15.3\n1.55,2001-06-12 02:40,14.5\n2.25,2001-06-12 02:50,15.7\n1.82,2001-06-12 03:00,15\n1.92,2001-06-12 03:10,14\n1.92,2001-06-12 03:20,11.3\n1.66,2001-06-12 03:30,11.8\n1.6,2001-06-12 03:40,12.1\n1.82,2001-06-12 03:50,12.3\n1.6,2001-06-12 04:00,12.3\n1.76,2001-06-12 04:10,12.2\n1.66,2001-06-12 04:20,11.8\n1.71,2001-06-12 04:30,9.3\n1.28,2001-06-12 04:40,8.7\n1.02,2001-06-12 04:50,9.5\n1.66,2001-06-12 05:00,8.9\n0.91,2001-06-12 05:10,7.2\n1.34,2001-06-12 05:20,8\n1.02,2001-06-12 05:30,7.9\n1.28,2001-06-12 05:40,5.5\n1.07,2001-06-12 05:50,4.9\n1.23,2001-06-12 06:00,8.9\n1.23,2001-06-12 06:10,8.6\n1.39,2001-06-12 06:20,12.1\n1.87,2001-06-12 06:30,15\n2.03,2001-06-12 06:40,13.8\n1.98,2001-06-12 06:50,14.9\n2.03,2001-06-12 07:00,15.9\n3.26,2001-06-12 07:10,19.6\n3.26,2001-06-12 07:20,24.9\n3.69,2001-06-12 07:30,19.2\n4.06,2001-06-12 07:40,21.7\n2.94,2001-06-12 07:50,13.5\n6.1,2001-06-12 08:00,17.2\n2.62,2001-06-12 08:10,28.5\n3.26,2001-06-12 08:20,29.9\n3.32,2001-06-12 08:30,30.9\n3.96,2001-06-12 08:40,27.8\n3.32,2001-06-12 08:50,26.1\n3.74,2001-06-12 09:00,23.6\n4.81,2001-06-12 09:10,17.4\n2.73,2001-06-12 09:20,18.6\n1.92,2001-06-12 09:30,12.1\n2.78,2001-06-12 09:40,14.9\n2.89,2001-06-12 09:50,13.8\n2.41,2001-06-12 10:00,13.3\n2.83,2001-06-12 10:10,10.7\n1.07,2001-06-12 10:20,12.2\n4.17,2001-06-12 
10:30,17.8\n1.55,2001-06-12 10:40,17.8\n2.14,2001-06-12 10:50,18.3\n1.82,2001-06-12 11:00,11.3\n2.35,2001-06-12 11:10,16.8\n2.62,2001-06-12 11:20,8.7\n13.63,2001-06-12 11:30,14.4\n1.71,2001-06-12 11:40,12.3\n2.99,2001-06-12 11:50,15.4\n1.5,2001-06-12 12:00,10\n2.83,2001-06-12 12:10,15.1\n2.78,2001-06-12 12:20,14.8\n2.51,2001-06-12 12:30,12.8\n1.44,2001-06-12 12:40,9.6\n1.76,2001-06-12 12:50,12.5\n2.03,2001-06-12 13:00,15.5\n1.98,2001-06-12 13:10,18.9\n1.76,2001-06-12 13:20,19.8\n2.35,2001-06-12 13:30,20.4\n2.46,2001-06-12 13:40,19.3\n3.53,2001-06-12 13:50,20.6\n2.67,2001-06-12 14:00,22.3\n2.99,2001-06-12 14:10,22.6\n3.15,2001-06-12 14:20,21\n3.9,2001-06-12 14:30,23.3\n2.99,2001-06-12 14:40,21.9\n3.15,2001-06-12 14:50,22.8\n3.26,2001-06-12 15:00,24.6\n3.05,2001-06-12 15:10,24\n3.64,2001-06-12 15:20,25\n3.69,2001-06-12 15:30,26.9\n3.69,2001-06-12 15:40,28.4\n3.74,2001-06-12 15:50,27.1\n3.21,2001-06-12 16:00,24.4\n4.49,2001-06-12 16:10,24.4\n3.85,2001-06-12 16:20,28.8\n3.42,2001-06-12 16:30,28.9\n2.62,2001-06-12 16:40,28.5\n3.53,2001-06-12 16:50,27.5\n2.94,2001-06-12 17:00,24.9\n3.64,2001-06-12 17:10,27\n3.37,2001-06-12 17:20,25.7\n3.96,2001-06-12 17:30,26.6\n3.53,2001-06-12 17:40,25.2\n3.96,2001-06-12 17:50,28.6\n4.97,2001-06-12 18:00,32.5\n4.28,2001-06-12 18:10,32.7\n2.57,2001-06-12 18:20,30.2\n3.9,2001-06-12 18:30,30\n2.99,2001-06-12 18:40,24.9\n2.94,2001-06-12 18:50,25.7\n2.83,2001-06-12 19:00,24.5\n3.15,2001-06-12 19:10,24.5\n3.74,2001-06-12 19:20,26.7\n3.15,2001-06-12 19:30,25.9\n3.05,2001-06-12 19:40,27.5\n2.89,2001-06-12 19:50,28.2\n2.46,2001-06-12 20:00,26.2\n2.62,2001-06-12 20:10,26.8\n2.78,2001-06-12 20:20,26.7\n2.51,2001-06-12 20:30,26.7\n2.35,2001-06-12 20:40,25.5\n2.46,2001-06-12 20:50,25.4\n2.73,2001-06-12 21:00,24.9\n2.51,2001-06-12 21:10,25.2\n3.26,2001-06-12 21:20,27.8\n2.41,2001-06-12 21:30,27.1\n2.46,2001-06-12 21:40,26.8\n2.62,2001-06-12 21:50,27.6\n2.94,2001-06-12 22:00,26.6\n2.73,2001-06-12 22:10,26.4\n2.46,2001-06-12 22:20,27.1\n2.94,2001-06-12 
22:30,24.7\n2.09,2001-06-12 22:40,23.1\n2.09,2001-06-12 22:50,22.1\n2.03,2001-06-12 23:00,22\n2.14,2001-06-12 23:10,21.2\n2.35,2001-06-12 23:20,22.9\n1.98,2001-06-12 23:30,21.6\n2.19,2001-06-12 23:40,22.2"
  },
  {
    "path": "tests/system/python/e2e/docs/test_e2e_coap_PI.rst",
    "content": "E2E CoAP to PI Server Test\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test performs end-to-end validation of Fledge by ingesting data via the `fledge-south-coap` plugin and forwarding it to the PI Server using the `fledge-north-OMF` plugin.\n\nThis test consists of the *TestE2E_CoAP_PI* class, which contains a single test case function:\n\n1. **test_end_to_end**: Verifies that data is ingested into Fledge and sent to the PI Server. It also checks the data sent and received counts, confirms the creation of the required asset, and ensures that the data sent from Fledge via the OMF plugin reaches the PI Server.\n\n\nPrerequisite\n++++++++++++\n\n1. Fledge must be installed using the `make` command\n2. The FLEDGE_ROOT environment variable should be exported to the directory where Fledge is installed.\n3. Install the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\nThe minimum required parameters to run,\n\n.. code-block:: console\n\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --retries=RETRIES\n                        Number of tries for polling\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. 
code-block:: console\n\n  $ cd fledge/tests/system/python/ ; \n  $ export FLEDGE_ROOT=<path_to_fledge_installation> \n  $ export PYTHONPATH=$FLEDGE_ROOT/python\n  $ python3 -m pytest -s -vv e2e/test_e2e_coap_PI.py --pi-host=\"<PI_SYSTEM_HOST>\" --pi-port=\"<PI_SYSTEM_PORT>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" \\\n      --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-db=\"<PI_SYSTEM_DB>\" --wait-time=\"<WAIT_TIME>\" --retries=\"<RETRIES>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/e2e/docs/test_e2e_csv_PI.rst",
    "content": "E2E CSV to PI Server Test\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test performs end-to-end validation of Fledge by ingesting data from a CSV file into Fledge using the `fledge-south-playback` plugin, and then sending the data to the PI Server via the `fledge-north-OMF` plugin.\n\nThis test consists of the *TestE2E_CSV_PI* class, which contains a single test case function:\n\n1. **test_e2e_csv_pi**: Verifies that data is inserted into Fledge and sent to PI. The test also checks the data sent and received counts, confirms whether the required asset is created, and ensures that the data sent from Fledge via the OMF plugin reaches the PI Server.\n\n\nPrerequisite\n++++++++++++\n\n1. Fledge must be installed using the `make` command\n2. The FLEDGE_ROOT environment variable should be exported to the directory where Fledge is installed.\n3. Install the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run,\n\n.. code-block:: console\n\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --retries=RETRIES\n                        Number of tries for polling\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. 
code-block:: console\n\n  $ cd fledge/tests/system/python/ ; \n  $ export FLEDGE_ROOT=<path_to_fledge_installation> \n  $ export PYTHONPATH=$FLEDGE_ROOT/python\n  $ python3 -m pytest -s -vv e2e/test_e2e_csv_pi.py --pi-host=\"<PI_SYSTEM_HOST>\" --pi-port=\"<PI_SYSTEM_PORT>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" \\\n      --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-db=\"<PI_SYSTEM_DB>\" --wait-time=\"<WAIT_TIME>\" --retries=\"<RETRIES>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/e2e/docs/test_e2e_csv_multi_filter_pi.rst",
    "content": "E2E CSV Multi-Filter to PI Test\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed to perform end-to-end testing of Fledge by ingesting data from a CSV file into Fledge using the `fledge-south-playback` plugin, with the `fledge-filter-scale`, `fledge-filter-asset`, `fledge-filter-rate`, `fledge-filter-delta`, and `fledge-filter-rms` filter plugins attached. The data is then sent to the PI Server using the `fledge-north-OMF` plugin service, which has the `fledge-filter-threshold` plugin attached.\n\nThis test consists of the *TestE2eCsvMultiFltrPi* class, which contains a single test case function:\n\n1. **test_end_to_end**: Verifies that data is inserted into Fledge using the playback south plugin with filters for Delta, RMS, Rate, Scale, Asset, and Metadata, and then sent to PI. It also checks the data sent and received counts, verifies the creation of the required asset, and ensures that the data sent from Fledge via the OMF plugin reaches the PI Server.\n\n\nPrerequisite\n++++++++++++\n\n1. Fledge must be installed using the `make` command\n2. The FLEDGE_ROOT environment variable should be exported to the directory where Fledge is installed.\n3. Install the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run,\n\n.. 
code-block:: console\n\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --retries=RETRIES\n                        Number of tries for polling\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python/ ; \n  $ export FLEDGE_ROOT=<path_to_fledge_installation> \n  $ export PYTHONPATH=$FLEDGE_ROOT/python\n  $ python3 -m pytest -s -vv e2e/test_e2e_csv_multi_filter_pi.py --pi-host=\"<PI_SYSTEM_HOST>\" --pi-port=\"<PI_SYSTEM_PORT>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" \\\n      --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-db=\"<PI_SYSTEM_DB>\" --wait-time=\"<WAIT_TIME>\" --retries=\"<RETRIES>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/e2e/docs/test_e2e_expr_pi.rst",
    "content": "E2E Expression to PI Server Test\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed to perform end-to-end testing of Fledge by ingesting data into Fledge using the `fledge-south-expression` plugin and sending it to the PI Server using the `fledge-north-OMF` plugin.\n\nThis test consists of the *TestE2eExprPi* class, which contains a single test case function:\n\n1. **test_end_to_end**: Verifies that data is ingested into Fledge and sent to the PI Server. It also checks the data sent and received counts, ensures the required asset is created, and confirms that the data sent from Fledge via the OMF plugin reaches the PI Server.\n\n\nPrerequisite\n++++++++++++\n\n1. Fledge must be installed using the `make` command\n2. The FLEDGE_ROOT environment variable should be exported to the directory where Fledge is installed.\n3. Install the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\nThe minimum required parameters to run,\n\n.. code-block:: console\n\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --retries=RETRIES\n                        Number of tries for polling\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. 
code-block:: console\n\n  $ cd fledge/tests/system/python/ ; \n  $ export FLEDGE_ROOT=<path_to_fledge_installation> \n  $ export PYTHONPATH=$FLEDGE_ROOT/python\n  $ python3 -m pytest -s -vv e2e/test_e2e_expr_pi.py --pi-host=\"<PI_SYSTEM_HOST>\" --pi-port=\"<PI_SYSTEM_PORT>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" \\\n      --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-db=\"<PI_SYSTEM_DB>\" --wait-time=\"<WAIT_TIME>\" --retries=\"<RETRIES>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/e2e/docs/test_e2e_filter_fft_threshold.rst",
    "content": "E2E Filter with FFT Threshold Test\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed to perform end-to-end testing of Fledge by ingesting data from a CSV file into Fledge using the `fledge-south-playback` plugin, with the `fledge-filter-fft` plugin attached. The data is then sent to the PI Server using the `fledge-north-OMF` plugin, with the `fledge-filter-threshold` plugin applied.\n\nThis test consists of the *TestE2eFilterFFTThreshold* class, which contains a single test case function:\n\n1. **test_e2e_csv_pi**: Verifies that data is ingested into Fledge using the playback south plugin with an FFT filter, then sent to PI after passing through the threshold filter. It checks the data sent and received counts, ensures the required asset is created, and confirms that the data sent from Fledge via the OMF plugin reaches the PI Server.\n\n\nPrerequisite\n++++++++++++\n\n1. Fledge must be installed using the `make` command\n2. The FLEDGE_ROOT environment variable should be exported to the directory where Fledge is installed.\n3. Install the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run,\n\n.. 
code-block:: console\n\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --retries=RETRIES\n                        Number of tries for polling\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python/ ; \n  $ export FLEDGE_ROOT=<path_to_fledge_installation> \n  $ export PYTHONPATH=$FLEDGE_ROOT/python\n  $ python3 -m pytest -s -vv e2e/test_e2e_filter_fft_threshold.py --pi-host=\"<PI_SYSTEM_HOST>\" --pi-port=\"<PI_SYSTEM_PORT>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" \\\n      --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-db=\"<PI_SYSTEM_DB>\" --wait-time=\"<WAIT_TIME>\" --retries=\"<RETRIES>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/e2e/docs/test_e2e_kafka.rst",
    "content": "Test Kafka\n~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed to perform end-to-end testing of Fledge by ingesting data into Fledge using the `fledge-south-coap` plugin and sending it to the Kafka Server using the `fledge-north-kafka` plugin.\n\n\nThis test consists of *TestE2EKafka* class, which contains a single test case function:\n\n1. **test_end_to_end**: Verifies that data is ingested into Fledge through the south service of the fledge-south-coap plugin and sent to the Kafka Server via the fledge-north-kafka plugin. It also checks the data sent and received counts, ensures the required asset is created, and confirms that the data sent from Fledge via the fledge-north-kafka plugin reaches the Kafka Server.\n\n\nPrerequisite\n++++++++++++\n\n1. Fledge must be installed by `make` command\n2. The FLEDGE_ROOT environment variable should be exported to the directory where Fledge is installed.\n3. Install the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\nThe minimum required parameters to run,\n\n.. code-block:: console\n\n    --kafka-host=KAFKA_HOST\n                        IP Address of Kafka Server\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --retries=RETIRES\n                        Number of tries for polling\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python/ ; \n  $ export FLEDGE_ROOT=<path_to_fledge_installation> \n  $ export PYTHONPATH=$FLEDGE_ROOT/python\n  $ python3 -m pytest -s -vv e2e/test_e2e_kafka.py --kafka-host=\"<KAFKA_HOST>\" --wait-time=\"<WAIT_TIME>\" --retries=\"<RETIRES>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/e2e/docs/test_e2e_modbus_c_pi.rst",
    "content": "E2E Modbus to PI Server Test\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed to perform end-to-end testing of Fledge by ingesting data into Fledge using the `fledge-south-modbus-c` plugin and sending it to the PI Server using the `fledge-north-OMF` plugin.\n\nThis test consists of *TestE2EModbusCPI* class, which contains a single test case function:\n\n1. **test_end_to_end**: Verifies that data is ingested into Fledge and successfully sent to PI. It checks the data sent and received counts, ensures the required asset is created, and confirms that the data sent from Fledge through the OMF plugin reaches the PI Server.\n\n\nPrerequisite\n++++++++++++\n\n1. Fledge must be installed by `make` command\n2. The FLEDGE_ROOT environment variable should be exported to the directory where Fledge is installed.\n3. Install the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\nThe minimum required parameters to run,\n\n.. code-block:: console\n\n  --modbus-host=MODBUS_HOST\n                      IP Address of Modbus\n  --modbus-port=MODBUS_PORT\n                      Port of Modbus\n  --pi-host=PI_SYSTEM_HOST\n                      PI Server HostName/IP\n  --pi-port=PI_SYSTEM_PORT\n                      PI Server port\n  --pi-admin=PI_SYSTEM_ADMIN\n                      PI Server user login\n  --pi-passwd=PI_SYSTEM_PWD\n                      PI Server user login password\n  --pi-db=PI_SYSTEM_DB\n                      PI Server Database\n  --wait-time=WAIT_TIME\n                      Generic wait time (in seconds) between processes\n  --retries=RETIRES\n                      Number of tries for polling\n  --junit-xml=JUNIT_XML\n                      Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. 
code-block:: console\n\n  $ cd fledge/tests/system/python/ ; \n  $ export FLEDGE_ROOT=<path_to_fledge_installation> \n  $ export PYTHONPATH=$FLEDGE_ROOT/python\n  $ python3 -m pytest -s -vv e2e/test_e2e_modbus_c_pi.py --modbus-host=\"<MODBUS_HOST>\" --modbus-port=\"<MODBUS_PORT>\" --pi-host=\"<PI_SYSTEM_HOST>\" \\\n    --pi-port=\"<PI_SYSTEM_PORT>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-db=\"<PI_SYSTEM_DB>\" --wait-time=\"<WAIT_TIME>\" \\\n    --retries=\"<RETIRES>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/e2e/docs/test_e2e_notification_service_with_plugins.rst",
    "content": "Notification Service E2E Test with Plugins\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed to perform end-to-end testing of the `fledge-service-notification` by using the south service of `fledge-south-coap` plugin, the built-in rule plugin `fledge-rule-threshold`, and the delivery plugin `fledge-notify-python35`.\n\nThis test consists of four classes, each consists of multiple test cases functions:\n\n1. **TestNotificationService**:\n    a. **test_service**: Verifies that the notification service is correctly installed in Fledge and properly configured.\n    b. **test_get_default_notification_plugins**: Verifies whether the built-in rule plugins are available after the notification service is installed.\n\n2. **TestNotificationCRUD**:\n    a. **test_create_notification_instances_with_default_rule_and_channel_python35**: Verifies whether Fledge can install the fledge-notify-python35 delivery plugin and create three notification instances using the threshold rule with the python35 delivery plugin, each having a different notification type.\n    b. **test_inbuilt_rule_plugin_and_notify_python35_delivery**: Verifies whether the required rule and delivery plugins are available in Fledge.\n    c. **test_get_notifications_and_audit_entry**: Verifies that Fledge logs an NTFAD audit entry after adding a notification instance.\n    d. **test_update_notification**: Verifies whether Fledge can reconfigure the notification type in an existing notification instance.\n    e. **test_delete_notification**: Verifies whether Fledge can delete a notification instance without any issues.\n\n3. **TestSentAndReceiveNotification**:\n    a. **test_sent_and_receive_notification**: Creates a notification instance using the threshold rule with the python35 delivery plugin and adds the fledge-south-coap service. 
Verifies whether the notification service is working properly and creating the corresponding audit logs.\n\n\nPrerequisite\n++++++++++++\n\n1. Fledge must be installed by `make` command\n2. The FLEDGE_ROOT environment variable should be exported to the directory where Fledge is installed.\n3. Install the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\nThe minimum required parameters to run,\n\n.. code-block:: console\n\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --retries=RETIRES\n                        Number of tries for polling\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python/ ; \n  $ export FLEDGE_ROOT=<path_to_fledge_installation> \n  $ export PYTHONPATH=$FLEDGE_ROOT/python\n  $ python3 -m pytest -s -vv e2e/test_e2e_notification_service_with_plugins.py --wait-time=\"<WAIT_TIME>\" --retries=\"<RETIRES>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/e2e/docs/test_south_service_tuning.rst",
    "content": "Test South Service Tuning\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed to check the effect of south service tuning parameters on the performance of Fledge. By default, it uses the `fledge-south-sinusoid` plugin to ingest data into Fledge, Other plugin can also be used by changing the command line parameter `--south-plugin`. The test measures ingested number of readings with different tuning parameters and compares the results.\n\nThis test contains *TestSouthServiceTuning* class, which contains multiple test case functions:\n\n1. **test_south_service_tuning_buffer_threshold**: \n2. **test_south_service_comprehensive_tuning**: \n3. **test_buffer_threshold_impact_on_send_frequency**: \n4. **test_max_send_latency_impact**: \n\n\nPrerequisite\n++++++++++++\n\n1. Fledge must be installed by `make` command\n2. The FLEDGE_ROOT environment variable should be exported to the directory where Fledge is installed.\n3. Install the prerequisites to run a test:\n\n.. code-block:: console\n\n   $ cd fledge/python\n   $ python3 -m pip install -r requirements-test.txt --user\n\n\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ export FLEDGE_ROOT=<path_to_fledge_installation>\n  $ export PYTHONPATH=$FLEDGE_ROOT/python\n  $ cd fledge/tests/system/python/\n  $ python3 -m pytest -s -vv e2e/test_south_service_tuning.py --south-plugin sinusoid --plugin-language python\n"
  },
  {
    "path": "tests/system/python/e2e/test_e2e_coap_OCS.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test system/python/test_e2e_coap_OCS.py\n\"\"\"\nimport os\nimport subprocess\nimport http.client\nimport json\nimport time\nimport pytest\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nTEMPLATE_NAME = \"template.json\"\nSENSOR_VALUE = 20\n\n\n@pytest.fixture\ndef prepare_template_reading_from_fogbench():\n    def _prepare_template_reading_from_fogbench(FOGBENCH_TEMPLATE, ASSET_NAME):\n        \"\"\" Define the template file for fogbench readings \"\"\"\n        fogbench_template_path = os.path.join(\n            os.path.expandvars('${FLEDGE_ROOT}'), 'data/{}'.format(FOGBENCH_TEMPLATE))\n        with open(fogbench_template_path, \"w\") as f:\n            f.write(\n                '[{\"name\": \"%s\", \"sensor_values\": '\n                '[{\"name\": \"sensor\", \"type\": \"number\", \"min\": %d, \"max\": %d, \"precision\": 0}]}]' % (\n                    ASSET_NAME, SENSOR_VALUE, SENSOR_VALUE))\n        return fogbench_template_path\n\n    return _prepare_template_reading_from_fogbench\n\n\n@pytest.fixture\ndef start_south_north(reset_and_start_fledge, add_south, start_north_ocs_server_c,\n                      prepare_template_reading_from_fogbench, remove_data_file,\n                      remove_directories, south_branch, fledge_url,\n                      ocs_tenant, ocs_client_id, ocs_client_secret, ocs_namespace, ocs_token,\n                      asset_name=\"endToEndCoAP\"):\n    \"\"\" This fixture clone a south repo and starts both south and north instance\n        reset_and_start_fledge: Fixture that resets and starts fledge, no explicit invocation, called at start\n        add_south: Fixture that add a south service with given configuration\n        start_north_ocs_server_c: Fixture that starts OCS north task\n        
remove_data_file: Fixture that remove data file created during the tests\n        remove_directories: Fixture that remove directories created during the tests\"\"\"\n\n    # fogbench template path for readings\n    fogbench_template_path = prepare_template_reading_from_fogbench(TEMPLATE_NAME, asset_name)\n\n    south_plugin = \"coap\"\n    add_south(south_plugin, south_branch, fledge_url, service_name=\"CoAP #1\")\n    start_north_ocs_server_c(fledge_url, ocs_tenant, ocs_client_id, ocs_client_secret,\n                             ocs_namespace, ocs_token)\n\n    yield start_south_north\n\n    # Cleanup code that runs after the caller test is over\n    remove_data_file(fogbench_template_path)\n    remove_directories(\"/tmp/fledge-south-{}\".format(south_plugin))\n\n\n@pytest.fixture\ndef start_north_ocs_v2():\n    def _start_north_ocs_server_c(fledge_url, ocs_tenant, ocs_client_id, ocs_client_secret,\n                                  ocs_namespace, ocs_token, taskname=\"NorthReadingsToOCS\"):\n        \"\"\"Start north task\"\"\"\n        conn = http.client.HTTPConnection(fledge_url)\n        data = {\"name\": taskname,\n                \"plugin\": \"{}\".format(\"OMF\"),\n                \"type\": \"north\",\n                \"schedule_type\": 3,\n                \"schedule_day\": 0,\n                \"schedule_time\": 0,\n                \"schedule_repeat\": 30,\n                \"schedule_enabled\": \"true\",\n                \"config\": {\n                           \"PIServerEndpoint\": {\"value\": \"OSIsoft Cloud Services\"},\n                           \"OCSTenantId\": {\"value\": ocs_tenant},\n                           \"OCSClientId\": {\"value\": ocs_client_id},\n                           \"OCSClientSecret\": {\"value\": ocs_client_secret},\n                           \"OCSNamespace\": {\"value\": ocs_namespace}\n                           }\n                }\n        conn.request(\"POST\", '/fledge/scheduled/task', json.dumps(data))\n        r = 
conn.getresponse()\n        assert 200 == r.status\n        retval = r.read().decode()\n        return retval\n\n    return _start_north_ocs_server_c\n\n\nstart_north_ocs_server_c = start_north_ocs_v2\n\n\n@pytest.fixture\ndef read_data_from_ocs():\n    def _read_data_from_ocs(ocs_client_id, ocs_client_secret, ocs_tenant, ocs_namespace, sensor):\n        \"\"\" This method reads data from OCS web api \"\"\"\n\n        # TODO: use http.client instead of requests library\n        import requests\n        ocs_type_id = 1\n        ocs_stream = \"{}measurement_{}\".format(ocs_type_id, sensor)\n        start_timestamp = \"2019-01-01T00:00:00.000000Z\"\n        values_count = 1\n\n        url = 'https://login.windows.net/{0}/oauth2/token'.format(ocs_tenant)\n\n        # Get the access token first\n        authorization = requests.post(\n            url,\n            data={\n                'grant_type': 'client_credentials',\n                'client_id': ocs_client_id,\n                'client_secret': ocs_client_secret,\n                'resource': 'https://qihomeprod.onmicrosoft.com/ocsapi'\n            }\n        )\n\n        # Generate the header using access token\n        header = {\n            'Authorization': 'bearer %s' % authorization.json()['access_token'],\n            'Content-type': 'application/json',\n            'Accept': 'text/plain'\n        }\n\n        # OCS Cleanup, Delete streams and types if they exist\n        streams_url = \"https://dat-a.osisoft.com/api/Tenants/{}/Namespaces/{}/{}\" \\\n            .format(ocs_tenant, ocs_namespace, \"Streams\")\n        requests.delete(streams_url, headers=header)\n        types_url = \"https://dat-a.osisoft.com/api/Tenants/{}/Namespaces/{}/{}\" \\\n            .format(ocs_tenant, ocs_namespace, \"Types\")\n        requests.delete(types_url, headers=header)\n\n        # Get data for stream\n        stream_url = \"https://dat-a.osisoft.com/api/Tenants/{}/Namespaces/{}/\" \\\n                     
\"Streams/{}/Data/GetRangeValues?startIndex={}&count={}\" \\\n            .format(ocs_tenant, ocs_namespace, ocs_stream, start_timestamp, values_count)\n\n        response = requests.get(stream_url, headers=header)\n        api_output = response.json()\n        return api_output\n\n    return _read_data_from_ocs\n\n\n@pytest.mark.skip(reason=\"OCS is currently disabled!\")\nclass TestE2EOCS:\n    def test_end_to_end(self, start_south_north, read_data_from_ocs, fledge_url, wait_time, retries,\n                        ocs_client_id, ocs_client_secret, ocs_tenant, ocs_namespace, asset_name=\"endToEndCoAP\"):\n        \"\"\" Test that data is inserted in Fledge and sent to OCS\n            start_south_north: Fixture that starts Fledge with south and north instance\n            read_data_from_ocs: Fixture to read data from OCS\n            Assertions:\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/asset/<asset_name>\n                data received from OCS is same as data sent\"\"\"\n\n        conn = http.client.HTTPConnection(fledge_url)\n        time.sleep(wait_time)\n        subprocess.run([\"cd $FLEDGE_ROOT/extras/python; python3 -m fogbench -t ../../data/{}; cd -\".format(TEMPLATE_NAME)],\n                       shell=True, check=True)\n        time.sleep(wait_time)\n        conn.request(\"GET\", '/fledge/asset')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        retval = json.loads(r)\n        assert len(retval) == 1\n        assert asset_name == retval[0][\"assetCode\"]\n        assert 1 == retval[0][\"count\"]\n\n        conn.request(\"GET\", '/fledge/asset/{}'.format(asset_name))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        retval = json.loads(r)\n        assert {'sensor': SENSOR_VALUE} == retval[0][\"reading\"]\n\n        retry_count = 0\n        data_from_ocs = None\n        while (data_from_ocs is None or 
data_from_ocs == []) and retry_count < retries:\n            data_from_ocs = read_data_from_ocs(ocs_client_id, ocs_client_secret, ocs_tenant, ocs_namespace, asset_name)\n            retry_count += 1\n            time.sleep(wait_time * 2)\n\n        if data_from_ocs is None or retry_count == retries:\n            assert False, \"Failed to read data from OCS\"\n\n        assert data_from_ocs[-1]['sensor'] == SENSOR_VALUE"
  },
  {
    "path": "tests/system/python/e2e/test_e2e_coap_PI.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test system/python/test_e2e_coap_PI.py\n\n\"\"\"\nimport os\nimport subprocess\nimport http.client\nimport json\nimport time\nimport pytest\nimport utils\nimport math\n\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nTEMPLATE_NAME = \"template.json\"\nSENSOR_VALUE = 20\nASSET_NAME = \"e2e_COAP_PI\"\n\n\ndef get_ping_status(fledge_url):\n    _connection = http.client.HTTPConnection(fledge_url)\n    _connection.request(\"GET\", '/fledge/ping')\n    r = _connection.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    return jdoc\n\n\ndef get_statistics_map(fledge_url):\n    _connection = http.client.HTTPConnection(fledge_url)\n    _connection.request(\"GET\", '/fledge/statistics')\n    r = _connection.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    return utils.serialize_stats_map(jdoc)\n\n\ndef _verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries, asset_name):\n    retry_count = 0\n    data_from_pi = None\n    while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n        data_from_pi = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db, asset_name, {\"sensor\"})\n        retry_count += 1\n        time.sleep(wait_time*2)\n\n    if data_from_pi is None or retry_count == retries:\n        assert False, \"Failed to read data from PI\"\n\n    assert data_from_pi[\"sensor\"][-1] == SENSOR_VALUE\n\n\n@pytest.fixture\ndef start_south_north(reset_and_start_fledge, add_south, start_north_pi_server_c_web_api, remove_data_file,\n                      remove_directories, south_branch, fledge_url, pi_host, pi_port, pi_passwd,\n                      pi_db, pi_admin, 
clear_pi_system_through_pi_web_api, asset_name=ASSET_NAME):\n    \"\"\" This fixture clones a south repo and starts both south and north instances\n        reset_and_start_fledge: Fixture that resets and starts fledge, no explicit invocation, called at start\n        add_south: Fixture that adds a south service with the given configuration\n        start_north_pi_server_c_web_api: Fixture that starts PI north task\n        remove_data_file: Fixture that removes data files created during the tests\n        remove_directories: Fixture that removes directories created during the tests\"\"\"\n\n    # No need to give asset hierarchy in case of connector relay.\n    # There are two data points here. 1. sensor 2. no data point (Asset name will be used in this case.)\n    dp_list = ['sensor', '']\n    asset_dict = {}\n    asset_dict[asset_name] = dp_list\n    # For connector relay we should not delete PI Point because\n    # when the PI point is created again (after deletion) the compressing attribute for it\n    # is always true. 
That means all the data is not stored in PI data archive.\n    # We lose a large proportion of the data because of compressing attribute.\n    # This is problematic for the fixture that verifies the data stored in PI.\n    # clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n    #                                    [], asset_dict)\n\n    # Define the template file for fogbench\n    fogbench_template_path = os.path.join(\n        os.path.expandvars('${FLEDGE_ROOT}'), 'data/{}'.format(TEMPLATE_NAME))\n    with open(fogbench_template_path, \"w\") as f:\n        f.write(\n            '[{\"name\": \"%s\", \"sensor_values\": '\n            '[{\"name\": \"sensor\", \"type\": \"number\", \"min\": %d, \"max\": %d, \"precision\": 0}]}]' % (\n                asset_name, SENSOR_VALUE, SENSOR_VALUE))\n\n    south_plugin = \"coap\"\n    add_south(south_plugin, south_branch, fledge_url, service_name=\"coap\")\n    start_north_pi_server_c_web_api(fledge_url, pi_host, pi_port, pi_db=pi_db, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                    taskname=\"NorthReadingsToPI\")\n\n    yield start_south_north\n\n    # Cleanup code that runs after the caller test is over\n    remove_data_file(fogbench_template_path)\n    remove_directories(\"/tmp/fledge-south-{}\".format(south_plugin))\n\n\nclass TestE2E_CoAP_PI:\n    def test_end_to_end(self, start_south_north, read_data_from_pi_asset_server, fledge_url, pi_host, pi_admin, pi_passwd, pi_db,\n                        wait_time, retries, skip_verify_north_interface, asset_name=ASSET_NAME):\n        \"\"\" Test that data is inserted in Fledge and sent to PI\n            start_south_north: Fixture that starts Fledge with south and north instance\n            read_data_from_pi_asset_server: Fixture to read data from PI\n            skip_verify_north_interface: Flag for assertion of data from Pi web API\n            Assertions:\n                on endpoint GET /fledge/asset\n                on endpoint 
GET /fledge/asset/<asset_name>\n                data received from PI is same as data sent\"\"\"\n\n        conn = http.client.HTTPConnection(fledge_url)\n        time.sleep(wait_time)\n        subprocess.run([\"cd $FLEDGE_ROOT/extras/python; python3 -m fogbench -t ../../data/{}; cd -\".format(TEMPLATE_NAME)],\n                       shell=True, check=True)\n        # Time to wait until north schedule runs\n        time.sleep(wait_time * math.ceil(15/wait_time) + 15)\n\n        ping_response = get_ping_status(fledge_url)\n        assert 1 == ping_response[\"dataRead\"]\n        if not skip_verify_north_interface:\n            assert 1 == ping_response[\"dataSent\"]\n\n        actual_stats_map = get_statistics_map(fledge_url)\n        assert 1 == actual_stats_map[asset_name.upper()]\n        assert 1 == actual_stats_map['READINGS']\n        if not skip_verify_north_interface:\n            assert 1 == actual_stats_map['Readings Sent']\n            assert 1 == actual_stats_map['NorthReadingsToPI']\n\n        conn.request(\"GET\", '/fledge/asset')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        retval = json.loads(r)\n        assert len(retval) == 1\n        assert asset_name == retval[0][\"assetCode\"]\n        assert 1 == retval[0][\"count\"]\n\n        conn.request(\"GET\", '/fledge/asset/{}'.format(asset_name))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        retval = json.loads(r)\n        assert {'sensor': SENSOR_VALUE} == retval[0][\"reading\"]\n\n        if not skip_verify_north_interface:\n            _verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries, asset_name)\n\n        tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n        assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n        tracked_item = tracking_details[\"track\"][0]\n        assert 
\"coap\" == tracked_item[\"service\"]\n        assert ASSET_NAME == tracked_item[\"asset\"]\n        assert \"coap\" == tracked_item[\"plugin\"]\n\n        if not skip_verify_north_interface:\n            egress_tracking_details = utils.get_asset_tracking_details(fledge_url,\"Egress\")\n            assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n            tracked_item = egress_tracking_details[\"track\"][0]\n            assert \"NorthReadingsToPI\" == tracked_item[\"service\"]\n            assert ASSET_NAME == tracked_item[\"asset\"]\n            assert \"OMF\" == tracked_item[\"plugin\"]\n"
  },
  {
    "path": "tests/system/python/e2e/test_e2e_csv_PI.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test system/python/test_e2e_csv_PI.py\n\n\"\"\"\nimport os\nimport http.client\nimport json\nimport time\nimport pytest\nfrom collections import Counter\nimport utils\nimport math\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nCSV_NAME = \"sample.csv\"\nCSV_HEADERS = \"ivalue,fvalue,svalue\"\nCSV_DATA = [{'ivalue': 1, 'fvalue': 1.1, 'svalue': 'abc'},\n            {'ivalue': 0, 'fvalue': 0.0, 'svalue': 'def'},\n            {'ivalue': -1, 'fvalue': -1.1, 'svalue': 'ghi'}]\n\nNORTH_TASK_NAME = \"NorthReadingsTo_PI\"\n\n_data_str = {}\n\n\ndef get_ping_status(fledge_url):\n    _connection = http.client.HTTPConnection(fledge_url)\n    _connection.request(\"GET\", '/fledge/ping')\n    r = _connection.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    return jdoc\n\n\ndef get_statistics_map(fledge_url):\n    _connection = http.client.HTTPConnection(fledge_url)\n    _connection.request(\"GET\", '/fledge/statistics')\n    r = _connection.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    return utils.serialize_stats_map(jdoc)\n\n\n@pytest.fixture\ndef start_south_north(reset_and_start_fledge, add_south, start_north_pi_server_c_web_api, remove_data_file,\n                      remove_directories, south_branch, fledge_url, pi_host, pi_port,\n                      clear_pi_system_through_pi_web_api, pi_admin, pi_passwd, pi_db,\n                      asset_name=\"end_to_end_csv\"):\n    \"\"\" This fixture clone a south repo and starts both south and north instance\n        reset_and_start_fledge: Fixture that resets and starts fledge, no explicit invocation, called at start\n        add_south: Fixture that starts any south service with given 
configuration\n        start_north_pi_server_c_web_api: Fixture that starts PI north task\n        remove_data_file: Fixture that remove data file created during the tests\n        remove_directories: Fixture that remove directories created during the tests\"\"\"\n\n    # No need to give asset hierarchy in case of connector relay.\n    # There are four data points here. 1. ivalue  2. fvalue\n    # 3. svalue    4. no data point (Asset name be used in this case.)\n    dp_list = ['ivalue', 'fvalue', 'svalue', '']\n    asset_dict = {}\n    asset_dict[asset_name] = dp_list\n    # For connector relay we should not delete PI Point because\n    # when the PI point is created again (after deletion) the compressing attribute for it\n    # is always true. That means all the data is not stored in PI data archive.\n    # We lose a large proportion of the data because of compressing attribute.\n    # This is problematic for the fixture that verifies the data stored in PI.\n    # clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n    #                                    [], asset_dict)\n\n    # Define configuration of fledge south playback service\n    south_config = {\"assetName\": {\"value\": \"{}\".format(asset_name)}, \"csvFilename\": {\"value\": \"{}\".format(CSV_NAME)},\n                    \"ingestMode\": {\"value\": \"batch\"}}\n\n    # Define the CSV data and create expected lists to be verified later\n    csv_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/{}'.format(CSV_NAME))\n    f = open(csv_file_path, \"w\")\n    f.write(CSV_HEADERS)\n    _heads = CSV_HEADERS.split(\",\")\n    for c_data in CSV_DATA:\n        temp_data = []\n        for _head in _heads:\n            temp_data.append(str(c_data[_head]))\n        row = ','.join(temp_data)\n        f.write(\"\\n{}\".format(row))\n    f.close()\n\n    # Prepare list of values for each header\n    for _head in _heads:\n        tmp_list = []\n        for c_data in CSV_DATA:\n    
        tmp_list.append(c_data[_head])\n        _data_str[_head] = tmp_list\n\n    south_plugin = \"playback\"\n    add_south(south_plugin, south_branch, fledge_url, config=south_config)\n    start_north_pi_server_c_web_api(fledge_url, pi_host, pi_port, pi_db=pi_db, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                    taskname=\"NorthReadingsToPI\")\n\n\n    yield start_south_north\n\n    # Cleanup code that runs after the caller test is over\n    remove_data_file(csv_file_path)\n    remove_directories(\"/tmp/fledge-south-{}\".format(south_plugin))\n\n\ndef _verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries, asset_name):\n    retry_count = 0\n    data_from_pi = None\n    while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n        data_from_pi = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db, asset_name,\n                                         CSV_HEADERS.split(\",\"))\n        retry_count += 1\n        time.sleep(wait_time*2)\n\n    if data_from_pi is None or retry_count == retries:\n        assert False, \"Failed to read data from PI\"\n\n    for _head in CSV_HEADERS.split(\",\"):\n        assert Counter(data_from_pi[_head][-len(CSV_DATA):]) == Counter(_data_str[_head])\n\n\nclass TestE2E_CSV_PI:\n    def test_e2e_csv_pi(self, start_south_north, read_data_from_pi_asset_server, fledge_url, pi_host, pi_admin, pi_passwd, pi_db,\n                        wait_time, retries, skip_verify_north_interface, asset_name=\"end_to_end_csv\"):\n        \"\"\" Test that data is inserted in Fledge and sent to PI\n            start_south_north: Fixture that starts Fledge with south and north instance\n            read_data_from_pi_asset_server: Fixture to read data from PI\n            skip_verify_north_interface: Flag for assertion of data from Pi web API\n            Assertions:\n                on endpoint GET /fledge/asset\n                on endpoint GET 
/fledge/asset/<asset_name>\n                data received from PI is same as data sent\"\"\"\n\n        conn = http.client.HTTPConnection(fledge_url)\n        # Time to wait until north schedule runs\n        time.sleep(wait_time * math.ceil(15/wait_time) + 15)\n\n        ping_response = get_ping_status(fledge_url)\n        assert len(CSV_DATA) == ping_response[\"dataRead\"]\n        if not skip_verify_north_interface:\n            assert len(CSV_DATA) == ping_response[\"dataSent\"]\n\n        actual_stats_map = get_statistics_map(fledge_url)\n        assert len(CSV_DATA) == actual_stats_map[asset_name.upper()]\n        assert len(CSV_DATA) == actual_stats_map['READINGS']\n        if not skip_verify_north_interface:\n            assert len(CSV_DATA) == actual_stats_map['Readings Sent']\n            assert len(CSV_DATA) == actual_stats_map['NorthReadingsToPI']\n\n        conn.request(\"GET\", '/fledge/asset')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        retval = json.loads(r)\n        assert len(retval) == 1\n        assert asset_name == retval[0][\"assetCode\"]\n        assert len(CSV_DATA) == retval[0][\"count\"]\n\n        for _head in CSV_HEADERS.split(\",\"):\n            conn.request(\"GET\", '/fledge/asset/{}/{}'.format(asset_name, _head))\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            retval = json.loads(r)\n            _actual_read_list = []\n            for _el in retval:\n                _actual_read_list.append(_el[_head])\n            assert Counter(_actual_read_list) == Counter(_data_str[_head])\n\n        if not skip_verify_north_interface:\n            _verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries, asset_name)\n\n        tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n        assert len(tracking_details[\"track\"]), \"Failed to track 
Ingest event\"\n        tracked_item = tracking_details[\"track\"][0]\n        assert \"play\" == tracked_item[\"service\"]\n        assert asset_name == tracked_item[\"asset\"]\n        assert \"playback\" == tracked_item[\"plugin\"]\n\n        if not skip_verify_north_interface:\n            egress_tracking_details = utils.get_asset_tracking_details(fledge_url,\"Egress\")\n            assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n            tracked_item = egress_tracking_details[\"track\"][0]\n            assert \"NorthReadingsToPI\" == tracked_item[\"service\"]\n            assert asset_name == tracked_item[\"asset\"]\n            assert \"OMF\" == tracked_item[\"plugin\"]\n"
  },
  {
    "path": "tests/system/python/e2e/test_e2e_csv_multi_filter_pi.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test end to end flow with:\n        Playback south plugin\n        Delta, RMS, Rate, Scale, Asset & Metadata filter plugins\n        PI Server (C) plugin\n\"\"\"\n\n\nimport http.client\nimport os\nimport json\nimport time\nimport pytest\nimport utils\nimport math\n\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nSVC_NAME = \"playfilter\"\nCSV_NAME = \"sample.csv\"\nCSV_HEADERS = \"ivalue\"\nCSV_DATA = \"10,20,21,40\"\n\nNORTH_TASK_NAME = \"NorthReadingsTo_PI\"\n\n\nclass TestE2eCsvMultiFltrPi:\n    def get_ping_status(self, fledge_url):\n        _connection = http.client.HTTPConnection(fledge_url)\n        _connection.request(\"GET\", '/fledge/ping')\n        r = _connection.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return jdoc\n\n    def get_statistics_map(self, fledge_url):\n        _connection = http.client.HTTPConnection(fledge_url)\n        _connection.request(\"GET\", '/fledge/statistics')\n        r = _connection.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return utils.serialize_stats_map(jdoc)\n\n\n    @pytest.fixture\n    def start_south_north(self, reset_and_start_fledge, add_south, enable_schedule, remove_directories,\n                          remove_data_file, south_branch, fledge_url, add_filter, filter_branch,\n                          start_north_pi_server_c_web_api, pi_host, pi_port,\n                          clear_pi_system_through_pi_web_api, pi_admin, pi_passwd, pi_db,\n                          asset_name=\"e2e_csv_filter_pi\"):\n        \"\"\" This fixture clones a south and a north repo and starts both the south and north instances\n\n            reset_and_start_fledge: 
Fixture that resets and starts fledge, no explicit invocation, called at start\n            add_south: Fixture that adds a south service with a given configuration in enabled or disabled mode\n            remove_directories: Fixture that removes directories created during the tests\n            remove_data_file: Fixture that removes the data file created during the tests\n        \"\"\"\n        dp_list = ['ivalue', 'name', 'ivaluecrest', 'ivaluepeak']\n        # There are four data points here. 1. ivalue  2. name as metadata filter is used.\n        # 3. ivaluecrest    4. ivaluepeak\n        asset_dict = {}\n        asset_dict['e2e_filters_RMS'] = dp_list\n        # For connector relay we should not delete PI Point because\n        # when the PI point is created again (after deletion) the compressing attribute for it\n        # is always true. That means all the data is not stored in PI data archive.\n        # We lose a large proportion of the data because of compressing attribute.\n        # This is problematic for the fixture that verifies the data stored in PI.\n        # clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n        #                                    [], asset_dict)\n\n        # Define configuration of fledge south playback service\n        south_config = {\"assetName\": {\"value\": \"{}\".format(asset_name)},\n                        \"csvFilename\": {\"value\": \"{}\".format(CSV_NAME)},\n                        \"ingestMode\": {\"value\": \"batch\"}}\n\n        # Define the CSV data and create expected lists to be verified later\n        csv_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/{}'.format(CSV_NAME))\n        with open(csv_file_path, 'w') as f:\n            f.write(CSV_HEADERS)\n            for _items in CSV_DATA.split(\",\"):\n                f.write(\"\\n{}\".format(_items))\n\n        south_plugin = \"playback\"\n        add_south(south_plugin, south_branch, fledge_url, 
service_name=SVC_NAME,\n                  config=south_config, start_service=False)\n\n        filter_cfg_scale = {\"enable\": \"true\"}\n        # I/P 10, 20, 21, 40 -> O/P 1000, 2000, 2100, 4000\n        add_filter(\"scale\", filter_branch, \"fscale\", filter_cfg_scale, fledge_url, SVC_NAME)\n\n        # I/P asset_name : e2e_csv_filter_pi > O/P e2e_filters\n        filter_cfg_asset = {\"config\": {\"rules\": [{\"new_asset_name\": \"e2e_filters\",\n                                                  \"action\": \"rename\",\n                                                  \"asset_name\": asset_name}]},\n                            \"enable\": \"true\"}\n        add_filter(\"asset\", filter_branch, \"fasset\", filter_cfg_asset, fledge_url, SVC_NAME)\n\n        # I/P 1000, 2000, 2100, 4000 -> O/P 2000, 2100, 4000\n        filter_cfg_rate = {\"trigger\": \"ivalue > 1200\", \"untrigger\": \"ivalue < 1100\", \"preTrigger\": \"0\", \"enable\": \"true\"}\n        add_filter(\"rate\", filter_branch, \"frate\", filter_cfg_rate, fledge_url, SVC_NAME)\n\n        # I/P 1000, 2000, 2100, 4000 -> O/P 2000, 4000\n        # Delta in 1st pair (2000-1000) = 1000 (> 20% of 1000) so 2000 is output\n        # Delta in second pair (2100-2000) = 100 (<20% of 2000) so 2100 not in output\n        # Delta in third pair (4000-2100) = 1900 (>20% of 2100) so 4000 in output\n        filter_cfg_delta = {\"tolerance\": \"20\", \"enable\": \"true\"}\n        add_filter(\"delta\", filter_branch, \"fdelta\", filter_cfg_delta , fledge_url, SVC_NAME)\n\n        # I/P 2000, 4000 -> O/P rms=3162.2776601684, rms_peak=2000\n        filter_cfg_rms = {\"assetName\": \"%a_RMS\", \"samples\": \"2\", \"peak\": \"true\", \"enable\": \"true\"}\n        add_filter(\"rms\", filter_branch, \"frms\", filter_cfg_rms, fledge_url, SVC_NAME)\n\n        filter_cfg_meta = {\"enable\": \"true\"}\n        add_filter(\"metadata\", filter_branch, \"fmeta\", filter_cfg_meta, fledge_url, SVC_NAME)\n\n        # Since playback 
plugin reads all csv data at once, we can't keep it in enabled mode before the filters are added\n        # enable the service once all filters are applied\n        enable_schedule(fledge_url, SVC_NAME)\n\n        start_north_pi_server_c_web_api(fledge_url, pi_host, pi_port, pi_db=pi_db, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                        taskname=\"NorthReadingsToPI\")\n\n        yield self.start_south_north\n\n        remove_directories(\"/tmp/fledge-south-{}\".format(south_plugin))\n        filters = [\"scale\", \"asset\", \"rate\", \"delta\", \"rms\", \"metadata\"]\n        for fltr in filters:\n            remove_directories(\"/tmp/fledge-filter-{}\".format(fltr))\n\n        remove_data_file(csv_file_path)\n\n    def test_end_to_end(self, start_south_north, disable_schedule, fledge_url, read_data_from_pi_asset_server, pi_host, pi_admin,\n                        pi_passwd, pi_db, wait_time, retries, skip_verify_north_interface):\n        \"\"\" Test that data is inserted in Fledge using playback south plugin &\n            Delta, RMS, Rate, Scale, Asset & Metadata filters, and sent to PI\n            start_south_north: Fixture that starts Fledge with south service, add filter and north instance\n            skip_verify_north_interface: Flag for assertion of data from Pi web API\n            Assertions:\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/asset/<asset_name> with applied data processing filter value\n                data received from PI is same as data sent\"\"\"\n\n        # Time to wait until north schedule runs\n        time.sleep(wait_time * math.ceil(15/wait_time) + 15)\n        conn = http.client.HTTPConnection(fledge_url)\n        self._verify_ingest(conn)\n\n        # disable schedule to stop the service and sending data\n        disable_schedule(fledge_url, SVC_NAME)\n\n        ping_response = self.get_ping_status(fledge_url)\n        assert 1 == ping_response[\"dataRead\"]\n        if not 
skip_verify_north_interface:\n            assert 1 == ping_response[\"dataSent\"]\n\n        actual_stats_map = self.get_statistics_map(fledge_url)\n        assert 1 == actual_stats_map[\"e2e_filters_RMS\".upper()]\n        assert 1 == actual_stats_map['READINGS']\n        if not skip_verify_north_interface:\n            assert 1 == actual_stats_map['Readings Sent']\n            assert 1 == actual_stats_map['NorthReadingsToPI']\n\n        if not skip_verify_north_interface:\n            self._verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries)\n\n        tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n        assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n        tracked_item = tracking_details[\"track\"][0]\n        assert SVC_NAME == tracked_item[\"service\"]\n        assert \"e2e_filters_RMS\" == tracked_item[\"asset\"]\n        assert \"playback\" == tracked_item[\"plugin\"]\n\n        tracking_details = utils.get_asset_tracking_details(fledge_url, \"Filter\")\n        assert len(tracking_details[\"track\"]), \"Failed to track Filter event\"\n        tracked_item = tracking_details[\"track\"][0]\n        assert SVC_NAME == tracked_item[\"service\"]\n        assert \"e2e_csv_filter_pi\" == tracked_item[\"asset\"]\n        assert \"fscale\" == tracked_item[\"plugin\"]\n\n        tracked_item = tracking_details[\"track\"][1]\n        assert SVC_NAME == tracked_item[\"service\"]\n        assert \"e2e_csv_filter_pi\" == tracked_item[\"asset\"]\n        assert \"fasset\" == tracked_item[\"plugin\"]\n\n        tracked_item = tracking_details[\"track\"][2]\n        assert SVC_NAME == tracked_item[\"service\"]\n        assert \"e2e_filters\" == tracked_item[\"asset\"]\n        assert \"fasset\" == tracked_item[\"plugin\"]\n\n        tracked_item = tracking_details[\"track\"][3]\n        assert SVC_NAME == tracked_item[\"service\"]\n        assert \"e2e_filters\" 
== tracked_item[\"asset\"]\n        assert \"frate\" == tracked_item[\"plugin\"]\n\n        tracked_item = tracking_details[\"track\"][4]\n        assert SVC_NAME == tracked_item[\"service\"]\n        assert \"e2e_filters\" == tracked_item[\"asset\"]\n        assert \"fdelta\" == tracked_item[\"plugin\"]\n\n        tracked_item = tracking_details[\"track\"][5]\n        assert SVC_NAME == tracked_item[\"service\"]\n        assert \"e2e_filters_RMS\" == tracked_item[\"asset\"]\n        assert \"frms\" == tracked_item[\"plugin\"]\n\n        tracked_item = tracking_details[\"track\"][6]\n        assert SVC_NAME == tracked_item[\"service\"]\n        assert \"e2e_filters_RMS\" == tracked_item[\"asset\"]\n        assert \"fmeta\" == tracked_item[\"plugin\"]\n\n        if not skip_verify_north_interface:\n            egress_tracking_details = utils.get_asset_tracking_details(fledge_url,\"Egress\")\n            assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n            tracked_item = egress_tracking_details[\"track\"][0]\n            assert \"NorthReadingsToPI\" == tracked_item[\"service\"]\n            assert \"e2e_filters_RMS\" == tracked_item[\"asset\"]\n            assert \"OMF\" == tracked_item[\"plugin\"]\n\n    def _verify_ingest(self, conn):\n\n        conn.request(\"GET\", '/fledge/asset')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 1 == len(jdoc)\n        assert \"e2e_filters_RMS\" == jdoc[0][\"assetCode\"]\n        assert 0 < jdoc[0][\"count\"]\n\n        conn.request(\"GET\", '/fledge/asset/{}'.format(\"e2e_filters_RMS\"))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 0 < len(jdoc)\n\n        read = jdoc[0][\"reading\"]\n        assert 2000.0 == read[\"ivaluepeak\"]\n        assert 3162.2776601684 == read[\"ivalue\"]\n        assert 
\"value\" == read[\"name\"]\n\n    def _verify_egress(self, read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries):\n\n        # Wait until the full data is received in the PI server\n        time.sleep(wait_time * 2)\n        retry_count = 0\n        data_from_pi = None\n        while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n            data_from_pi = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db,\n                                             \"e2e_filters_RMS\", {\"ivalue\", \"ivaluepeak\", \"name\"})\n            retry_count += 1\n            time.sleep(wait_time * 2)\n\n        if data_from_pi is None or retry_count == retries:\n            assert False, \"Failed to read data from PI\"\n\n        assert 3162.2776601684 == data_from_pi[\"ivalue\"][-1]\n        assert 2000 == data_from_pi[\"ivaluepeak\"][-1]\n        assert \"value\" == data_from_pi[\"name\"][-1]\n"
  },
  {
    "path": "tests/system/python/e2e/test_e2e_expr_pi.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test end to end flow with:\n        Expression south plugin\n        Metadata filter plugin\n        PI Server (C) plugin\n\"\"\"\n\n\nimport http.client\nimport json\nimport time\nimport pytest\nimport utils\nimport math\n\n\n__author__ = \"Praveen Garg\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nSOUTH_PLUGIN = \"Expression\"\nSOUTH_PLUGIN_LANGUAGE = \"C\"\n\nSVC_NAME = \"Expr #1\"\nASSET_NAME = \"Expression\"\n\n\nclass TestE2eExprPi:\n    def get_ping_status(self, fledge_url):\n        _connection = http.client.HTTPConnection(fledge_url)\n        _connection.request(\"GET\", '/fledge/ping')\n        r = _connection.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return jdoc\n\n    def get_statistics_map(self, fledge_url):\n        _connection = http.client.HTTPConnection(fledge_url)\n        _connection.request(\"GET\", '/fledge/statistics')\n        r = _connection.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return utils.serialize_stats_map(jdoc)\n\n\n    @pytest.fixture\n    def start_south_north(self, reset_and_start_fledge, add_south, enable_schedule, remove_directories,\n                          south_branch, fledge_url, add_filter, filter_branch, filter_name,\n                          start_north_pi_server_c_web_api, pi_host, pi_port,\n                          clear_pi_system_through_pi_web_api, pi_admin, pi_passwd, pi_db):\n        \"\"\" This fixture clones a south and a north repo and starts both the south and north instances\n\n            reset_and_start_fledge: Fixture that resets and starts fledge, no explicit invocation, called at start\n            add_south: Fixture that adds a south service with given 
configuration in enabled or disabled mode\n            remove_directories: Fixture that removes directories created during the tests\n        \"\"\"\n\n        # No need to give asset hierarchy in case of connector relay.\n        dp_list = [ASSET_NAME, 'name', '']\n        # There are three data points here. 1. ASSET_NAME  2. name as metadata filter is used.\n        # 3. no data point (The asset name will be used in this case.)\n        asset_dict = {}\n        asset_dict[ASSET_NAME] = dp_list\n        # For connector relay we should not delete PI Point because\n        # when the PI point is created again (after deletion) the compressing attribute for it\n        # is always true. That means all the data is not stored in PI data archive.\n        # We lose a large proportion of the data because of compressing attribute.\n        # This is problematic for the fixture that verifies the data stored in PI.\n        # clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n        #                                    [], asset_dict)\n\n        cfg = {\"expression\": {\"value\": \"tan(x)\"}, \"minimumX\": {\"value\": \"45\"}, \"maximumX\": {\"value\": \"45\"},\n               \"stepX\": {\"value\": \"0\"}}\n\n        add_south(SOUTH_PLUGIN, south_branch, fledge_url, service_name=SVC_NAME, config=cfg,\n                  plugin_lang=SOUTH_PLUGIN_LANGUAGE, start_service=True)\n\n        filter_cfg = {\"enable\": \"true\"}\n        filter_plugin = \"metadata\"\n        add_filter(filter_plugin, filter_branch, filter_name, filter_cfg, fledge_url, SVC_NAME)\n\n        # enable_schedule(fledge_url, SVC_NAME)\n\n        start_north_pi_server_c_web_api(fledge_url, pi_host, pi_port, pi_db=pi_db, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                    taskname=\"NorthReadingsToPI\")\n\n        yield self.start_south_north\n\n        remove_directories(\"/tmp/fledge-south-{}\".format(SOUTH_PLUGIN.lower()))\n        
remove_directories(\"/tmp/fledge-filter-{}\".format(filter_plugin))\n\n    def test_end_to_end(self, start_south_north, disable_schedule, fledge_url, read_data_from_pi_asset_server, pi_host, pi_admin,\n                        pi_passwd, pi_db, wait_time, retries, skip_verify_north_interface):\n        \"\"\" Test that data is inserted in Fledge using expression south plugin & metadata filter, and sent to PI\n            start_south_north: Fixture that starts Fledge with south service, add filter and north instance\n            skip_verify_north_interface: Flag for assertion of data from Pi web API\n            Assertions:\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/asset/<asset_name> with applied data processing filter value\n                data received from PI is same as data sent\"\"\"\n\n        # Time to wait until north schedule runs\n        time.sleep(wait_time * math.ceil(15/wait_time) + 15)\n\n        ping_response = self.get_ping_status(fledge_url)\n        assert 0 < ping_response[\"dataRead\"]\n        if not skip_verify_north_interface:\n            assert 0 < ping_response[\"dataSent\"]\n\n        actual_stats_map = self.get_statistics_map(fledge_url)\n        assert 0 < actual_stats_map[ASSET_NAME.upper()]\n        assert 0 < actual_stats_map['READINGS']\n        if not skip_verify_north_interface:\n            assert 0 < actual_stats_map['Readings Sent']\n            assert 0 < actual_stats_map['NorthReadingsToPI']\n\n        conn = http.client.HTTPConnection(fledge_url)\n        self._verify_ingest(conn)\n\n        # disable schedule to stop the service and sending data\n        disable_schedule(fledge_url, SVC_NAME)\n        if not skip_verify_north_interface:\n            self._verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries)\n\n        tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n        assert 
len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n        tracked_item = tracking_details[\"track\"][0]\n        assert SVC_NAME == tracked_item[\"service\"]\n        assert ASSET_NAME == tracked_item[\"asset\"]\n        assert \"Expression\" == tracked_item[\"plugin\"]\n\n        tracking_details = utils.get_asset_tracking_details(fledge_url, \"Filter\")\n        assert len(tracking_details[\"track\"]), \"Failed to track Filter event\"\n        tracked_item = tracking_details[\"track\"][0]\n        assert SVC_NAME == tracked_item[\"service\"]\n        assert ASSET_NAME == tracked_item[\"asset\"]\n        assert \"Meta #1\" == tracked_item[\"plugin\"]\n\n        if not skip_verify_north_interface:\n            egress_tracking_details = utils.get_asset_tracking_details(fledge_url,\"Egress\")\n            assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n            tracked_item = egress_tracking_details[\"track\"][0]\n            assert \"NorthReadingsToPI\" == tracked_item[\"service\"]\n            assert ASSET_NAME == tracked_item[\"asset\"]\n            assert \"OMF\" == tracked_item[\"plugin\"]\n\n    def _verify_ingest(self, conn):\n\n        conn.request(\"GET\", '/fledge/asset')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 1 == len(jdoc)\n        assert ASSET_NAME == jdoc[0][\"assetCode\"]\n        assert 0 < jdoc[0][\"count\"]\n\n        conn.request(\"GET\", '/fledge/asset/{}'.format(ASSET_NAME))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 0 < len(jdoc)\n\n        read = jdoc[0][\"reading\"]\n        # FOGL-2438 values like tan(45) = 1.61977519054386 gets truncated to 1.6197751905 with ingest\n        assert 1.6197751905 == read[\"Expression\"]\n        # verify filter is applied and we have {name: value} pair added by 
metadata filter\n        assert \"value\" == read[\"name\"]\n\n    def _verify_egress(self, read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries):\n        retry_count = 0\n        data_from_pi = None\n        while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n            data_from_pi = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db, ASSET_NAME, {\"Expression\", \"name\"})\n            retry_count += 1\n            time.sleep(wait_time * 2)\n\n        if data_from_pi is None or retry_count == retries:\n            assert False, \"Failed to read data from PI\"\n\n        assert len(data_from_pi)\n        assert \"name\" in data_from_pi\n        assert \"Expression\" in data_from_pi\n        assert isinstance(data_from_pi[\"name\"], list)\n        assert isinstance(data_from_pi[\"Expression\"], list)\n        # TODO: FOGL-2883: Test fails randomly in below assertion needs to be fixed\n        # assert \"value\" in data_from_pi[\"name\"]\n        # FOGL-2438 values like tan(45) = 1.61977519054386 gets truncated to 1.6197751905 with ingest\n        # assert 1.6197751905 in data_from_pi[\"Expression\"]\n"
  },
  {
    "path": "tests/system/python/e2e/test_e2e_filter_fft_threshold.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test end to end flow with:\n        Playback south plugin\n        FFT Filter on playback south plugin and Threshold on PI north\n        PI Server (C) plugin\n\"\"\"\n\n\nimport http.client\nimport os\nimport json\nimport time\nimport pytest\nimport utils\nimport subprocess\nimport math\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nSVC_NAME = \"playfilter\"\nCSV_NAME = \"wind-data.csv\"\nCSV_HEADERS = \"10 Min Std Dev,10 Min Sampled Avg\"\n\nNORTH_TASK_NAME = \"NorthReadingsTo_PI\"\n\nASSET = \"e2e_fft_threshold\"\n\n\nclass TestE2eFilterFFTThreshold:\n    def get_ping_status(self, fledge_url):\n        _connection = http.client.HTTPConnection(fledge_url)\n        _connection.request(\"GET\", '/fledge/ping')\n        r = _connection.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return jdoc\n\n    def get_statistics_map(self, fledge_url):\n        _connection = http.client.HTTPConnection(fledge_url)\n        _connection.request(\"GET\", '/fledge/statistics')\n        r = _connection.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return utils.serialize_stats_map(jdoc)\n\n\n    @pytest.fixture\n    def start_south_north(self, reset_and_start_fledge, add_south, enable_schedule, remove_directories,\n                          remove_data_file, south_branch, fledge_url, add_filter, filter_branch,\n                          start_north_pi_server_c_web_api, pi_host, pi_port,\n                          clear_pi_system_through_pi_web_api, pi_admin, pi_passwd, pi_db, asset_name=ASSET):\n        \"\"\" This fixture clones a south and a north repo and starts both the south and north instances\n\n            
reset_and_start_fledge: Fixture that resets and starts fledge, no explicit invocation, called at start\n            add_south: Fixture that adds a south service with a given configuration in enabled or disabled mode\n            remove_directories: Fixture that removes directories created during the tests\n            remove_data_file: Fixture that removes the data file created during the tests\n        \"\"\"\n\n        # No need to give asset hierarchy in case of connector relay.\n        dp_list = ['Band00', 'Band01', 'Band02']\n        # There are three data points here. 1. Band00  2. Band01\n        # 3. Band02\n        asset_dict = {}\n        asset_dict[ASSET + \" \" + 'FFT'] = dp_list\n        # For connector relay we should not delete PI Point because\n        # when the PI point is created again (after deletion) the compressing attribute for it\n        # is always true. That means all the data is not stored in PI data archive.\n        # We lose a large proportion of the data because of compressing attribute.\n        # This is problematic for the fixture that verifies the data stored in PI.\n        # clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n        #                                    [], asset_dict)\n\n        # Define configuration of Fledge playback service\n        south_config = {\"assetName\": {\"value\": \"{}\".format(asset_name)},\n                        \"csvFilename\": {\"value\": \"{}\".format(CSV_NAME)},\n                        \"fieldNames\": {\"value\": \"10 Min Std Dev\"},\n                        \"ingestMode\": {\"value\": \"batch\"}}\n\n        csv_dest = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/{}'.format(CSV_NAME))\n        csv_src_file = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'tests/system/python/data/{}'.format(CSV_NAME))\n\n        cmd = 'cp {} {}'.format(csv_src_file, csv_dest)\n        status = subprocess.call(cmd, shell=True)\n        if status != 0:\n            if 
status < 0:\n                print(\"Killed by signal\", status)\n            else:\n                print(\"copy command failed with return code - \", status)\n\n        south_plugin = \"playback\"\n        add_south(south_plugin, south_branch, fledge_url, service_name=SVC_NAME,\n                  config=south_config, start_service=False)\n\n        filter_cfg_fft = {\"asset\": ASSET, \"lowPass\": \"10\", \"highPass\": \"30\", \"enable\": \"true\"}\n        add_filter(\"fft\", filter_branch, \"FFT Filter\", filter_cfg_fft, fledge_url, SVC_NAME)\n\n        # Since playback plugin reads all csv data at once, we can't keep it in enabled mode before the filters are added\n        # enable the service once all filters are applied\n        enable_schedule(fledge_url, SVC_NAME)\n\n        start_north_pi_server_c_web_api(fledge_url, pi_host, pi_port, pi_db=pi_db, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                        taskname=NORTH_TASK_NAME, start_task=False)\n\n        # Add threshold filter at north side\n        filter_cfg_threshold = {\"expression\": \"Band00 > 30\", \"enable\": \"true\"}\n        # TODO: Apply a better expression with AND / OR with data points e.g. 
OR Band01 > 19\n        add_filter(\"threshold\", filter_branch, \"fltr_threshold\", filter_cfg_threshold, fledge_url, NORTH_TASK_NAME)\n        enable_schedule(fledge_url, NORTH_TASK_NAME)\n\n        yield self.start_south_north\n\n        remove_directories(\"/tmp/fledge-south-{}\".format(south_plugin))\n        filters = [\"fft\", \"threshold\"]\n        for fltr in filters:\n            remove_directories(\"/tmp/fledge-filter-{}\".format(fltr))\n\n        remove_data_file(csv_dest)\n\n    def test_end_to_end(self, start_south_north, disable_schedule, fledge_url, read_data_from_pi_asset_server, pi_host, pi_admin,\n                        pi_passwd, pi_db, wait_time, retries, skip_verify_north_interface):\n        \"\"\" Test that data is inserted in Fledge using playback south plugin &\n            FFT filter, and sent to PI after passing through threshold filter\n            start_south_north: Fixture that starts Fledge with south service, add filter and north instance\n            skip_verify_north_interface: Flag for assertion of data from Pi web API\n            Assertions:\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/asset/<asset_name> with applied data processing filter value\n                data received from PI is same as data sent\"\"\"\n\n        # Time to wait until north schedule runs\n        time.sleep(wait_time * math.ceil(15/wait_time) + 15)\n        conn = http.client.HTTPConnection(fledge_url)\n\n        self._verify_ingest(conn)\n\n        # disable schedule to stop the service and sending data\n        disable_schedule(fledge_url, SVC_NAME)\n\n        ping_response = self.get_ping_status(fledge_url)\n        assert 6 == ping_response[\"dataRead\"]\n        if not skip_verify_north_interface:\n            assert 1 == ping_response[\"dataSent\"]\n\n        actual_stats_map = self.get_statistics_map(fledge_url)\n        assert 6 == actual_stats_map[ASSET.upper() + \" FFT\"]\n        assert 6 == 
actual_stats_map['READINGS']\n        if not skip_verify_north_interface:\n            assert 1 == actual_stats_map['Readings Sent']\n            assert 1 == actual_stats_map[NORTH_TASK_NAME]\n\n        if not skip_verify_north_interface:\n            self._verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries)\n\n        tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n        assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n        tracked_item = tracking_details[\"track\"][0]\n        assert SVC_NAME == tracked_item[\"service\"]\n        assert \"e2e_fft_threshold FFT\" == tracked_item[\"asset\"]\n        assert \"playback\" == tracked_item[\"plugin\"]\n\n        # TODO: FOGL:3440,FOGL:3441 Add asset tracker entry for fft and threshold filters\n        # tracking_details = self.get_asset_tracking_details(fledge_url, \"Filter\")\n        # assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n        # tracked_item = tracking_details[\"track\"][0]\n        # assert \"playfilter\" == tracked_item[\"service\"]\n        # assert \"e2e_fft_threshold FFT\" == tracked_item[\"asset\"]\n        # assert \"FFT Filter\" == tracked_item[\"plugin\"]\n\n        if not skip_verify_north_interface:\n            egress_tracking_details = utils.get_asset_tracking_details(fledge_url,\"Egress\")\n            assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n            tracked_item = egress_tracking_details[\"track\"][0]\n            assert NORTH_TASK_NAME == tracked_item[\"service\"]\n            assert \"e2e_fft_threshold FFT\" == tracked_item[\"asset\"]\n            assert \"OMF\" == tracked_item[\"plugin\"]\n\n\n    def _verify_ingest(self, conn):\n        conn.request(\"GET\", '/fledge/asset')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        
assert len(jdoc)\n\n        assert ASSET + \" FFT\" == jdoc[0][\"assetCode\"]\n        assert 0 < jdoc[0][\"count\"]\n\n        conn.request(\"GET\", '/fledge/asset/{}'.format(ASSET + \"%20FFT\"))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 0 < len(jdoc)\n        # print(jdoc)\n        read = jdoc[0][\"reading\"]\n        assert read[\"Band00\"]\n        assert read[\"Band01\"]\n        assert read[\"Band02\"]\n\n    def _verify_egress(self, read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries):\n        retry_count = 0\n        data_from_pi = None\n        while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n            data_from_pi = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db,\n                                             ASSET + \" FFT\", {\"Band00\"})\n            retry_count += 1\n            time.sleep(wait_time * 2)\n\n        if data_from_pi is None or retry_count == retries:\n            assert False, \"Failed to read data from PI\"\n\n        assert 30 < data_from_pi[\"Band00\"][-1]\n"
  },
  {
    "path": "tests/system/python/e2e/test_e2e_kafka.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test system/python/test_e2e_kafka.py\n\n\"\"\"\nimport os\nimport subprocess\nimport http.client\nimport json\nimport time\nimport pytest\nimport utils\nimport math\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nFOGBENCH_TEMPLATE = \"fogbench-template.json\"\nSENSOR_VALUE = 20\nSOUTH_PLUGIN_NAME = \"coap\"\nNORTH_PLUGIN_NAME = \"Kafka\"\nASSET_NAME = \"{}_to_{}\".format(SOUTH_PLUGIN_NAME, NORTH_PLUGIN_NAME.lower())\nCONSUMER_GROUP = \"fledge_consumer\"\nCONSUMER_INSTANCE = \"fledge_instance\"\nHEADER = {'Content-Type': 'application/vnd.kafka.v2+json', 'Accept': 'application/vnd.kafka.json.v2+json'}\n\n\nclass TestE2EKafka:\n    def get_ping_status(self, fledge_url):\n        _connection = http.client.HTTPConnection(fledge_url)\n        _connection.request(\"GET\", '/fledge/ping')\n        r = _connection.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return jdoc\n\n    def get_statistics_map(self, fledge_url):\n        _connection = http.client.HTTPConnection(fledge_url)\n        _connection.request(\"GET\", '/fledge/statistics')\n        r = _connection.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return utils.serialize_stats_map(jdoc)\n\n\n    def _prepare_template_reading_from_fogbench(self):\n        \"\"\" Define the template file for fogbench readings \"\"\"\n\n        fogbench_template_path = os.path.join(\n            os.path.expandvars('${FLEDGE_ROOT}'), 'data/{}'.format(FOGBENCH_TEMPLATE))\n        with open(fogbench_template_path, \"w\") as f:\n            f.write(\n                '[{\"name\": \"%s\", \"sensor_values\": '\n                '[{\"name\": \"sensor\", \"type\": \"number\", 
\"min\": %d, \"max\": %d, \"precision\": 0}]}]' % (\n                    ASSET_NAME, SENSOR_VALUE, SENSOR_VALUE))\n\n        return fogbench_template_path\n\n    def _configure_and_start_north_kafka(self, north_branch, fledge_url, host, port, topic, task_name=\"NorthReadingsTo{}\"\n                                         .format(NORTH_PLUGIN_NAME)):\n        \"\"\" Configure and Start north kafka task \"\"\"\n\n        try:\n            subprocess.run([\"$FLEDGE_ROOT/tests/system/python/scripts/install_c_plugin {} north {}\"\n                           .format(north_branch, NORTH_PLUGIN_NAME)], shell=True, check=True)\n        except subprocess.CalledProcessError:\n            assert False, \"kafka plugin installation failed\"\n\n        conn = http.client.HTTPConnection(fledge_url)\n        data = {\"name\": task_name,\n                \"plugin\": \"{}\".format(NORTH_PLUGIN_NAME),\n                \"type\": \"north\",\n                \"schedule_type\": 3,\n                \"schedule_day\": 0,\n                \"schedule_time\": 0,\n                \"schedule_repeat\": 0,\n                \"schedule_enabled\": \"true\",\n                \"config\": {\"topic\": {\"value\": topic},\n                           \"brokers\": {\"value\": \"{}:{}\".format(host, port)}}\n                }\n        conn.request(\"POST\", '/fledge/scheduled/task', json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        val = json.loads(r)\n        assert 2 == len(val)\n        assert task_name == val['name']\n\n    @pytest.fixture\n    def start_south_north(self, reset_and_start_fledge, add_south, remove_data_file,\n                          remove_directories, south_branch, fledge_url, north_branch, kafka_host, kafka_port, kafka_topic):\n        \"\"\" This fixture clone a south and north repo and starts both south and north instance\n            reset_and_start_fledge: Fixture that resets and starts fledge, no explicit 
invocation, called at start\n            add_south: Fixture that starts any south service with given configuration\n            remove_data_file: Fixture that remove data file created during the tests\n            remove_directories: Fixture that remove directories created during the tests \"\"\"\n\n        fogbench_template_path = self._prepare_template_reading_from_fogbench()\n\n        add_south(SOUTH_PLUGIN_NAME, south_branch, fledge_url, service_name=SOUTH_PLUGIN_NAME)\n\n        self._configure_and_start_north_kafka(north_branch, fledge_url, kafka_host, kafka_port, kafka_topic)\n\n        yield self.start_south_north\n\n        # Cleanup code that runs after the test is over\n        remove_data_file(fogbench_template_path)\n        remove_directories(\"/tmp/fledge-south-{}\".format(SOUTH_PLUGIN_NAME))\n        remove_directories(\"/tmp/fledge-north-{}\".format(NORTH_PLUGIN_NAME.lower()))\n\n    def test_end_to_end(self, start_south_north, fledge_url, wait_time, kafka_host, kafka_rest_port, kafka_topic,\n                        skip_verify_north_interface):\n        \"\"\" Test that data is inserted in Fledge and sent to Kafka\n            start_south_north: Fixture that starts Fledge with south and north instance\n            skip_verify_north_interface: Flag for assertion of data from kafka rest\n            Assertions:\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/asset/<asset_name>\n                data received from Kafka is same as data sent\"\"\"\n\n        conn = http.client.HTTPConnection(fledge_url)\n        time.sleep(wait_time)\n        subprocess.run([\"cd $FLEDGE_ROOT/extras/python; python3 -m fogbench -t ../../data/{}; cd -\"\n                       .format(FOGBENCH_TEMPLATE)], shell=True, check=True)\n        # Time to wait until north schedule runs\n        time.sleep(wait_time * math.ceil(15/wait_time) + 15)\n\n        ping_response = self.get_ping_status(fledge_url)\n        assert 1 == 
ping_response[\"dataRead\"]\n        if not skip_verify_north_interface:\n            assert 1 == ping_response[\"dataSent\"]\n\n        actual_stats_map = self.get_statistics_map(fledge_url)\n        assert 1 == actual_stats_map[ASSET_NAME.upper()]\n        assert 1 == actual_stats_map['READINGS']\n        if not skip_verify_north_interface:\n            assert 1 == actual_stats_map['Readings Sent']\n            assert 1 == actual_stats_map[\"NorthReadingsTo{}\".format(NORTH_PLUGIN_NAME)]\n\n        conn.request(\"GET\", '/fledge/asset')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        val = json.loads(r)\n        assert 1 == len(val)\n        assert ASSET_NAME == val[0][\"assetCode\"]\n        assert 1 == val[0][\"count\"]\n\n        conn.request(\"GET\", '/fledge/asset/{}'.format(ASSET_NAME))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        val = json.loads(r)\n        assert 1 == len(val)\n        assert {'sensor': SENSOR_VALUE} == val[0][\"reading\"]\n\n        if not skip_verify_north_interface:\n            self._read_from_kafka(kafka_host, kafka_rest_port, kafka_topic)\n\n        tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n        assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n        tracked_item = tracking_details[\"track\"][0]\n        assert \"coap\" == tracked_item[\"service\"]\n        assert ASSET_NAME == tracked_item[\"asset\"]\n        assert SOUTH_PLUGIN_NAME == tracked_item[\"plugin\"]\n\n        if not skip_verify_north_interface:\n            egress_tracking_details = utils.get_asset_tracking_details(fledge_url,\"Egress\")\n            assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n            tracked_item = egress_tracking_details[\"track\"][0]\n            assert \"NorthReadingsTo{}\".format(NORTH_PLUGIN_NAME) == tracked_item[\"service\"]\n 
           assert ASSET_NAME == tracked_item[\"asset\"]\n            assert NORTH_PLUGIN_NAME == tracked_item[\"plugin\"]\n\n\n    def _read_from_kafka(self, host, rest_port, topic):\n        conn = http.client.HTTPConnection(\"{}:{}\".format(host, rest_port))\n\n        # Close the consumer (DELETE) to make it leave the group and clean up its resources\n        self._close_consumer(host, rest_port)\n\n        # Assertions on Kafka topic, consumer and subscription\n        self._verify_kafka_topic(conn, topic)\n\n        self._verify_kafka_topic_by_name(conn, topic)\n\n        # Create a consumer group and instance\n        self._verify_kafka_consumer_group_and_instance(conn)\n\n        self._verify_consumer_subscription_to_topic(conn, topic)\n\n        # FIXME: FOGL-2573 local / AWS confluent setup results in no data\n        self._verify_consumer_data_from_topic(conn)\n\n    def _verify_kafka_topic(self, conn, topic):\n        conn.request(\"GET\", '/topics')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert topic in jdoc\n\n    def _verify_kafka_topic_by_name(self, conn, topic):\n        conn.request(\"GET\", '/topics/{}'.format(topic))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert topic == jdoc[\"name\"]\n\n    def _verify_kafka_consumer_group_and_instance(self, conn):\n        data = {\"name\": CONSUMER_INSTANCE, \"format\": \"json\", \"auto.offset.reset\": \"earliest\"}\n        conn.request(\"POST\", '/consumers/{}'.format(CONSUMER_GROUP), json.dumps(data), headers=HEADER)\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert CONSUMER_INSTANCE == jdoc[\"instance_id\"]\n\n    def _verify_consumer_subscription_to_topic(self, conn, topic):\n        data = {\"topics\": [topic]}\n        
conn.request(\"POST\", '/consumers/{}/instances/{}/subscription'.format(CONSUMER_GROUP, CONSUMER_INSTANCE),\n                     json.dumps(data), headers=HEADER)\n        r = conn.getresponse()\n        r.read()\n        # No content response\n        assert 204 == r.status\n\n    def _verify_consumer_data_from_topic(self, conn):\n        conn.request(\"GET\", '/consumers/{}/instances/{}/records'.format(CONSUMER_GROUP, CONSUMER_INSTANCE), headers=HEADER)\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc)\n        assert ASSET_NAME == jdoc[0]['value']['asset']\n        assert SENSOR_VALUE == float(jdoc[0]['value']['sensor'])\n\n    def _close_consumer(self, kafka_host, kafka_rest_port):\n        conn = http.client.HTTPConnection(\"{}:{}\".format(kafka_host, kafka_rest_port))\n        conn.request(\"DELETE\", '/consumers/{}/instances/{}'.format(CONSUMER_GROUP, CONSUMER_INSTANCE),\n                     headers=HEADER)\n        r = conn.getresponse()\n        # No content response\n        r.read()\n"
  },
  {
    "path": "tests/system/python/e2e/test_e2e_modbus_c_pi.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test system/python/e2e/test_e2e_modbus_c_pi.py\n\n\"\"\"\n\nimport http.client\nimport json\nimport time\nimport socket\nimport pytest\nimport utils\n\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nSOUTH_PLUGIN = \"modbus-c\"\nPLUGIN_NAME = \"ModbusC\"\nSVC_NAME = \"modbus-c\"\nASSET_NAME = \"A15\"\n\n\nclass TestE2EModbusCPI:\n    def check_connect(self, modbus_host, modbus_port):\n        s = socket.socket()\n        print(\"Connecting... Modbus simulator on {}:{}\".format(modbus_host, modbus_port))\n        result = s.connect_ex((modbus_host, modbus_port))\n        if result != 0:\n            print(\"Socket connection failed!!\")\n            pytest.skip(\"Test requires a running simulator, please run before starting the test!!\")\n\n    def get_ping_status(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/ping')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return jdoc\n\n    def get_statistics_map(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/statistics')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return utils.serialize_stats_map(jdoc)\n\n    @pytest.fixture\n    def start_south_north(self, reset_and_start_fledge, add_south, remove_directories, south_branch, fledge_url,\n                          start_north_pi_server_c_web_api, pi_host, pi_port, modbus_host, modbus_port,\n                          clear_pi_system_through_pi_web_api, pi_admin, pi_passwd, pi_db):\n        \"\"\" This fixture clone a south and north repo and 
starts both south and north instances\n\n            reset_and_start_fledge: Fixture that resets and starts fledge, no explicit invocation, called at start\n            add_south: Fixture that adds a south service with given configuration with enabled or disabled mode\n            remove_directories: Fixture that removes directories created during the tests\n        \"\"\"\n\n        cfg = {\"protocol\": {\"value\": \"TCP\"}, \"address\": {\"value\": modbus_host},\n               \"port\": {\"value\": \"{}\".format(modbus_port)},\n               \"map\": {\"value\": {\"values\": [\n                   {\"slave\": 1, \"scale\": 1, \"offset\": 0, \"register\": 1, \"assetName\": ASSET_NAME, \"name\": \"front right\"},\n                   {\"slave\": 1, \"scale\": 1, \"offset\": 0, \"register\": 2, \"assetName\": ASSET_NAME, \"name\": \"rear right\"},\n                   {\"slave\": 1, \"scale\": 1, \"offset\": 0, \"register\": 3, \"assetName\": ASSET_NAME, \"name\": \"front left\"},\n                   {\"slave\": 1, \"scale\": 1, \"offset\": 0, \"register\": 4, \"assetName\": ASSET_NAME, \"name\": \"rear left\"}\n               ]}}\n               }\n\n        # No need to give asset hierarchy in case of connector relay.\n        # There are five data points here: 1. front right 2. rear right\n        # 3. front left 4. rear left\n        # 5. no data point (Asset name will be used in this case.)\n        dp_list = ['front right', 'rear right', 'front left', 'rear left', '']\n        asset_dict = {}\n        asset_dict[ASSET_NAME] = dp_list\n        # For connector relay we should not delete PI Point because\n        # when the PI point is created again (after deletion) the compressing attribute for it\n        # is always true. 
That means all the data is not stored in PI data archive.\n        # We lose a large proportion of the data because of compressing attribute.\n        # This is problematic for the fixture that verifies the data stored in PI.\n        # clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n        #                                    [], asset_dict)\n\n        add_south(SOUTH_PLUGIN, south_branch, fledge_url, service_name=SVC_NAME, config=cfg,\n                  plugin_lang=\"C\", start_service=False, plugin_discovery_name=PLUGIN_NAME)\n\n        start_north_pi_server_c_web_api(fledge_url, pi_host, pi_port, pi_db=pi_db, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                        taskname=\"NorthReadingsToPI\")\n\n        yield self.start_south_north\n\n        remove_directories(\"/tmp/fledge-south-{}\".format(SOUTH_PLUGIN.lower()))\n\n    def test_end_to_end(self, start_south_north, enable_schedule, disable_schedule, fledge_url, read_data_from_pi_asset_server,\n                        pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries, skip_verify_north_interface,\n                        modbus_host, modbus_port):\n        \"\"\" Test that data is inserted in Fledge using modbus-c south plugin and sent to PI\n            start_south_north: Fixture that starts Fledge with south service and north instance\n            skip_verify_north_interface: Flag for assertion of data from Pi web API\n            Assertions:\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/asset/<asset_name> with applied data processing filter value\n                data received from PI is same as data sent\"\"\"\n\n        # self.check_connect(modbus_host, modbus_port)\n        \"\"\" $ docker logs --follow modbus-device\n            start listening at: port:502\n            start listening at: port:503\n            Quit the loop: Connection reset by peer // on check_connect\n        \"\"\"\n        
enable_schedule(fledge_url, SVC_NAME)\n        time.sleep(wait_time * 2)\n\n        ping_response = self.get_ping_status(fledge_url)\n        assert 0 < ping_response[\"dataRead\"]\n        if not skip_verify_north_interface:\n            assert 0 < ping_response[\"dataSent\"]\n\n        actual_stats_map = self.get_statistics_map(fledge_url)\n        assert 0 < actual_stats_map[ASSET_NAME.upper()]\n        assert 0 < actual_stats_map['READINGS']\n        if not skip_verify_north_interface:\n            assert 0 < actual_stats_map['Readings Sent']\n            assert 0 < actual_stats_map['NorthReadingsToPI']\n\n        conn = http.client.HTTPConnection(fledge_url)\n        self._verify_ingest(conn)\n\n        # disable schedule to stop the service and sending data\n        disable_schedule(fledge_url, SVC_NAME)\n        if not skip_verify_north_interface:\n            self._verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries)\n\n        tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n        assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n        tracked_item = tracking_details[\"track\"][0]\n        assert SVC_NAME == tracked_item[\"service\"]\n        assert ASSET_NAME == tracked_item[\"asset\"]\n        assert PLUGIN_NAME == tracked_item[\"plugin\"]\n\n        if not skip_verify_north_interface:\n            egress_tracking_details = utils.get_asset_tracking_details(fledge_url, \"Egress\")\n            assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n            tracked_item = egress_tracking_details[\"track\"][0]\n            assert \"NorthReadingsToPI\" == tracked_item[\"service\"]\n            assert ASSET_NAME == tracked_item[\"asset\"]\n            assert \"OMF\" == tracked_item[\"plugin\"]\n\n    def _verify_ingest(self, conn):\n        conn.request(\"GET\", '/fledge/asset')\n        r = conn.getresponse()\n        
assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 1 == len(jdoc)\n        assert ASSET_NAME == jdoc[0][\"assetCode\"]\n        assert 0 < jdoc[0][\"count\"]\n\n        conn.request(\"GET\", '/fledge/asset/{}'.format(ASSET_NAME))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 0 < len(jdoc)\n\n        read = jdoc[0][\"reading\"]\n        assert 11 == read[\"front right\"]\n        assert 12 == read[\"rear right\"]\n        assert 13 == read[\"front left\"]\n        assert 14 == read[\"rear left\"]\n\n    def _verify_egress(self, read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries):\n        retry_count = 0\n        data_from_pi = None\n        while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n            data_from_pi = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db, ASSET_NAME,\n                                             {\"front right\", \"rear right\", \"front left\", \"rear left\"})\n            retry_count += 1\n            time.sleep(wait_time * 2)\n\n        if data_from_pi is None or retry_count == retries:\n            assert False, \"Failed to read data from PI\"\n\n        assert len(data_from_pi)\n        assert \"front right\" in data_from_pi\n        assert \"rear right\" in data_from_pi\n        assert \"front left\" in data_from_pi\n        assert \"rear left\" in data_from_pi\n\n        assert isinstance(data_from_pi[\"front right\"], list)\n        assert isinstance(data_from_pi[\"rear right\"], list)\n        assert isinstance(data_from_pi[\"front left\"], list)\n        assert isinstance(data_from_pi[\"rear left\"], list)\n\n        assert 11 == data_from_pi[\"front right\"][-1]\n        assert 12 == data_from_pi[\"rear right\"][-1]\n        assert 13 == data_from_pi[\"front left\"][-1]\n        assert 14 == 
data_from_pi[\"rear left\"][-1]\n"
  },
  {
    "path": "tests/system/python/e2e/test_e2e_modbus_c_rtu_pi.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test system/python/e2e/test_e2e_modbus_c_rtu_pi.py\n\n\"\"\"\n\nimport http.client\nimport json\nimport time\nimport pytest\nimport serial\nimport utils\n\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nSOUTH_PLUGIN = \"modbus-c\"\nPLUGIN_NAME = \"ModbusC\"\nSVC_NAME = \"modbus-c\"\nASSET_NAME_1 = \"adam4015\"\nASSET_NAME_2 = \"adam4017\"\n\n\nclass TestE2EModbusC_RTU_PI:\n    def check_connect(self, modbus_serial_port, modbus_baudrate):\n        print(\"Checking serial port {} at baudrate {}.....\".format(modbus_serial_port, modbus_baudrate))\n        try:\n            ser = serial.Serial(modbus_serial_port, modbus_baudrate, timeout=1)\n        except:\n            print(\"Socket connection failed!!\")\n            pytest.skip(\"Test requires a connected serial port!!\")\n\n    def get_ping_status(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/ping')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return jdoc\n\n    def get_statistics_map(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/statistics')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return utils.serialize_stats_map(jdoc)\n\n    @pytest.fixture\n    def start_south_north(self, reset_and_start_fledge, add_south, skip_verify_north_interface, remove_directories,\n                          south_branch, fledge_url, start_north_pi_server_c_web_api, pi_host, pi_port, pi_db, pi_admin, pi_passwd,\n                          modbus_serial_port, modbus_baudrate):\n        \"\"\" This 
fixture clone a south and north repo and starts both south and north instance\n\n            reset_and_start_fledge: Fixture that resets and starts fledge, no explicit invocation, called at start\n            add_south: Fixture that adds a south service with given configuration with enabled or disabled mode\n            remove_directories: Fixture that remove directories created during the tests\n        \"\"\"\n        self.check_connect(modbus_serial_port, modbus_baudrate)\n        cfg = {\"protocol\": {\"value\": \"RTU\"}, \"asset\": {\"value\": \"modbus\"},\n               \"device\": {\"value\": modbus_serial_port}, \"baud\": {\"value\": modbus_baudrate},\n               \"map\": {\"value\": {\"values\": [\n                   {\"offset\": -1.1, \"assetName\": \"adam4017\", \"slave\": 2, \"name\": \"dwyer_temperature\", \"register\": 0,\n                    \"scale\": 0.00178},\n                   {\"offset\": 0, \"assetName\": \"adam4017\", \"slave\": 2, \"name\": \"dwyer_humidity\", \"register\": 1,\n                    \"scale\": 0.00152},\n                   {\"offset\": -50, \"assetName\": \"adam4015\", \"slave\": 3, \"name\": \"pt100_0\", \"register\": 0,\n                    \"scale\": 0.00305},\n                   {\"offset\": -50, \"assetName\": \"adam4015\", \"slave\": 3, \"name\": \"pt100_1\", \"register\": 1,\n                    \"scale\": 0.00305}\n               ]}}\n               }\n\n        add_south(SOUTH_PLUGIN, south_branch, fledge_url, service_name=SVC_NAME, config=cfg,\n                  plugin_lang=\"C\", start_service=False, plugin_discovery_name=PLUGIN_NAME)\n\n        if not skip_verify_north_interface:\n            start_north_pi_server_c_web_api(fledge_url, pi_host, pi_port, pi_db=pi_db, pi_user=pi_admin,\n                                            pi_pwd=pi_passwd, taskname=\"NorthReadingsToPI\")\n\n        yield self.start_south_north\n\n        remove_directories(\"/tmp/fledge-south-{}\".format(SOUTH_PLUGIN.lower()))\n\n    def 
test_end_to_end(self, start_south_north, enable_schedule, disable_schedule, fledge_url, read_data_from_pi_asset_server, pi_host, pi_admin,\n                        pi_passwd, pi_db, wait_time, retries, skip_verify_north_interface):\n        \"\"\" Test that data is inserted in Fledge using modbus-c south plugin and sent to PI\n            start_south_north: Fixture that starts Fledge with south service and north instance\n            skip_verify_north_interface: Flag for assertion of data from Pi web API\n            Assertions:\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/asset/<asset_name> with applied data processing filter value\n                data received from PI is same as data sent\"\"\"\n\n        enable_schedule(fledge_url, SVC_NAME)\n        time.sleep(wait_time * 2)\n\n        ping_response = self.get_ping_status(fledge_url)\n        assert 0 < ping_response[\"dataRead\"]\n\n        actual_stats_map = self.get_statistics_map(fledge_url)\n        assert 0 < actual_stats_map[ASSET_NAME_1.upper()]\n        assert 0 < actual_stats_map[ASSET_NAME_2.upper()]\n\n        assert 0 < actual_stats_map['READINGS']\n\n        conn = http.client.HTTPConnection(fledge_url)\n        self._verify_ingest(conn)\n\n        # disable schedule to stop the service and sending data\n        disable_schedule(fledge_url, SVC_NAME)\n        if not skip_verify_north_interface:\n            assert 0 < ping_response[\"dataSent\"]\n            assert 0 < actual_stats_map['NorthReadingsToPI']\n            assert 0 < actual_stats_map['Readings Sent']\n            self._verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                                ASSET_NAME_1, {\"pt100_1\", \"pt100_0\"})\n            self._verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                                ASSET_NAME_2, {\"dwyer_temperature\", 
\"dwyer_humidity\"})\n\n    def _verify_ingest(self, conn):\n        conn.request(\"GET\", '/fledge/asset')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 2 == len(jdoc)\n        assert [ASSET_NAME_1, ASSET_NAME_2] == [jdoc[0][\"assetCode\"], jdoc[1][\"assetCode\"]]\n        assert 0 < jdoc[0][\"count\"]\n        assert 0 < jdoc[1][\"count\"]\n\n        conn.request(\"GET\", '/fledge/asset/{}'.format(ASSET_NAME_1))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 0 < len(jdoc)\n\n        read = jdoc[0][\"reading\"]\n        assert read[\"pt100_1\"] is not None\n        assert read[\"pt100_0\"] is not None\n\n        conn.request(\"GET\", '/fledge/asset/{}'.format(ASSET_NAME_2))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 0 < len(jdoc)\n\n        read = jdoc[0][\"reading\"]\n        assert read[\"dwyer_temperature\"] is not None\n        assert read[\"dwyer_humidity\"] is not None\n\n    def _verify_egress(self, read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries):\n        \n        retry_count = 0\n        data_from_pi = None\n        while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n            data_from_pi = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db, asset, datapoints)\n            retry_count += 1\n            time.sleep(wait_time * 2)\n\n        if data_from_pi is None or retry_count == retries:\n            assert False, \"Failed to read data from PI\"\n\n        assert len(data_from_pi)\n        for itm in datapoints:\n            assert itm in data_from_pi\n            assert isinstance(data_from_pi[itm], list)\n"
  },
  {
    "path": "tests/system/python/e2e/test_e2e_notification_service_with_plugins.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test end to end flow with:\n        Notification service with\n        Threshold in-built rule plugin\n        notify-python35 delivery channel plugin\n\"\"\"\n\nimport os\nimport time\nimport subprocess\nimport http.client\nimport json\nfrom threading import Event\nimport urllib.parse\n\nimport pytest\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nSERVICE = \"notification\"\nSERVICE_NAME = \"NotificationServer #1\"\nNOTIFY_PLUGIN = \"python35\"\nNOTIFY_INBUILT_RULES = [\"Threshold\", \"DataAvailability\"]\n\n\ndef _configure_and_start_service(service_branch, fledge_url, remove_directories):\n    try:\n        subprocess.run([\"$FLEDGE_ROOT/tests/system/python/scripts/install_c_service {} {}\"\n                       .format(service_branch, SERVICE)], shell=True, check=True, stdout=subprocess.DEVNULL)\n    except subprocess.CalledProcessError:\n        assert False, \"{} installation failed\".format(SERVICE)\n    finally:\n        remove_directories(\"/tmp/fledge-service-{}\".format(SERVICE))\n\n    # Start service\n    conn = http.client.HTTPConnection(fledge_url)\n    data = {\"name\": SERVICE_NAME,\n            \"type\": \"notification\",\n            \"enabled\": \"true\"\n            }\n    conn.request(\"POST\", '/fledge/service', json.dumps(data))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert 2 == len(jdoc)\n    assert SERVICE_NAME == jdoc['name']\n\n\ndef _install_notify_plugin(notify_branch, plugin_name, remove_directories):\n    try:\n        subprocess.run([\"$FLEDGE_ROOT/tests/system/python/scripts/install_c_plugin {} notify {}\".format(\n            notify_branch, plugin_name)], shell=True, check=True, stdout=subprocess.DEVNULL)\n    except 
subprocess.CalledProcessError:\n        assert False, \"{} installation failed\".format(plugin_name)\n    finally:\n        remove_directories(\"/tmp/fledge-notify-{}\".format(plugin_name))\n\n\ndef _get_result(fledge_url, path):\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"GET\", path)\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    return jdoc\n\n\ndef _verify_service(fledge_url, status):\n    jdoc = _get_result(fledge_url, '/fledge/service')\n    srvc = [s for s in jdoc['services'] if s['name'] == SERVICE_NAME]\n    assert 1 == len(srvc)\n    svc = srvc[0]\n    assert SERVICE.capitalize() == svc['type']\n    assert status == svc['status']\n\n\ndef _verify_audit_log_entry(fledge_url, path, name, severity='INFORMATION', count=1):\n    jdoc = _get_result(fledge_url, path)\n    assert len(jdoc['audit'])\n    assert count == jdoc['totalCount']\n    audit_detail = jdoc['audit'][0]\n    assert severity == audit_detail['severity']\n    assert name == audit_detail['details']['name']\n\n\ndef _add_notification_instance(fledge_url, payload):\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"POST\", '/fledge/notification', json.dumps(payload))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert \"Notification {} created successfully\".format(payload['name']) == jdoc['result']\n\n\ndef pause_for_x_seconds(x=1):\n    wait_e = Event()\n    wait_e.clear()\n    wait_e.wait(timeout=x)\n\n\nclass TestNotificationService:\n    def test_service(self, reset_and_start_fledge, service_branch, fledge_url, wait_time, retries, remove_directories):\n        _configure_and_start_service(service_branch, fledge_url, remove_directories)\n\n        retry_count = 0\n        # only 2 services is being up by default i.e core and storage\n        default_registry_count = 2\n        service_registry = 
default_registry_count\n        while service_registry != 3 and retry_count < retries:\n            svc = _get_result(fledge_url, '/fledge/service')\n            # Track the count of registered services, not the raw list\n            service_registry = len(svc['services'])\n            retry_count += 1\n\n            pause_for_x_seconds(x=wait_time * 2)\n\n        if service_registry == default_registry_count:\n            assert False, \"Failed to start the {} service\".format(SERVICE)\n\n        _verify_service(fledge_url, status='running')\n        _verify_audit_log_entry(fledge_url, '/fledge/audit?source=NTFST', name=SERVICE_NAME)\n\n    def test_get_default_notification_plugins(self, fledge_url, remove_directories):\n        remove_directories(os.environ['FLEDGE_ROOT'] + '/plugins/notificationDelivery')\n        remove_directories(os.environ['FLEDGE_ROOT'] + '/plugins/notificationRule')\n        remove_directories(os.environ['FLEDGE_ROOT'] + '/cmake_build/C/plugins/notificationDelivery')\n        remove_directories(os.environ['FLEDGE_ROOT'] + '/cmake_build/C/plugins/notificationRule')\n        jdoc = _get_result(fledge_url, '/fledge/notification/plugin')\n        assert [] == jdoc['delivery']\n        assert 2 == len(jdoc['rules'])\n        assert NOTIFY_INBUILT_RULES[0] == jdoc['rules'][1]['name']\n        assert NOTIFY_INBUILT_RULES[1] == jdoc['rules'][0]['name']\n\n\nclass TestNotificationCRUD:\n\n    @pytest.mark.parametrize(\"data\", [\n        {\"name\": \"Test 1\", \"description\": \"Test 1 notification\", \"rule\": NOTIFY_INBUILT_RULES[0],\n         \"channel\": NOTIFY_PLUGIN, \"enabled\": \"false\", \"notification_type\": \"retriggered\"},\n        {\"name\": \"Test2\", \"description\": \"Test 2 notification\", \"rule\": NOTIFY_INBUILT_RULES[0],\n         \"channel\": NOTIFY_PLUGIN, \"enabled\": \"false\", \"notification_type\": \"toggled\"},\n        {\"name\": \"Test #3\", \"description\": \"Test 3 notification\", \"rule\": NOTIFY_INBUILT_RULES[0],\n         \"channel\": NOTIFY_PLUGIN, \"enabled\": \"false\", 
\"notification_type\": \"one shot\"}\n    ])\n    def test_create_notification_instances_with_default_rule_and_channel_python35(self, fledge_url, notify_branch,\n                                                                                  data,\n                                                                                  remove_directories):\n        if data['name'] == 'Test 1':\n            _install_notify_plugin(notify_branch, NOTIFY_PLUGIN, remove_directories)\n        _add_notification_instance(fledge_url, data)\n\n    def test_inbuilt_rule_plugin_and_notify_python35_delivery(self, fledge_url):\n        jdoc = _get_result(fledge_url, '/fledge/notification/plugin')\n        assert 1 == len(jdoc['delivery'])\n        assert NOTIFY_PLUGIN == jdoc['delivery'][0]['name']\n        assert 2 == len(jdoc['rules'])\n        assert NOTIFY_INBUILT_RULES[0] == jdoc['rules'][1]['name']\n        assert NOTIFY_INBUILT_RULES[1] == jdoc['rules'][0]['name']\n\n    def test_get_notifications_and_audit_entry(self, fledge_url):\n        jdoc = _get_result(fledge_url, '/fledge/notification')\n        assert 3 == len(jdoc['notifications'])\n\n        # Test 1, Test2 and Test #3\n        jdoc = _get_result(fledge_url, '/fledge/audit?source=NTFAD')\n        assert 3 == jdoc['totalCount']\n\n    def test_update_notification(self, fledge_url, name=\"Test 1\"):\n        conn = http.client.HTTPConnection(fledge_url)\n        data = {\"notification_type\": \"toggled\"}\n        conn.request(\"PUT\", '/fledge/notification/{}'.format(urllib.parse.quote(name))\n                     , json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"Notification {} updated successfully\".format(name) == jdoc[\"result\"]\n\n        # Verify updated notification info\n        jdoc = _get_result(fledge_url, '/fledge/notification/{}'.format(urllib.parse.quote(name)))\n        assert \"toggled\" 
== jdoc['notification']['notificationType']\n\n    def test_delete_notification(self, fledge_url, name=\"Test #3\"):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/notification/{}'.format(urllib.parse.quote(name)))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"Notification {} deleted successfully.\".format(name) == jdoc[\"result\"]\n\n        # Verify that only two notifications exist now, not three\n        jdoc = _get_result(fledge_url, '/fledge/notification')\n        notifications = jdoc['notifications']\n        assert 2 == len(notifications)\n        assert \"Test 1\" == notifications[0]['name']\n        assert \"Test2\" == notifications[1]['name']\n\n        jdoc = _get_result(fledge_url, '/fledge/audit?source=NTFDL')\n        assert 1 == jdoc['totalCount']\n\n\nclass TestSentAndReceiveNotification:\n    FOGBENCH_TEMPLATE = \"fogbench-template.json\"\n    SENSOR_VALUE = 20\n    SOUTH_PLUGIN_NAME = \"coap\"\n    ASSET_NAME = SOUTH_PLUGIN_NAME\n\n    @pytest.fixture\n    def start_south(self, add_south, remove_data_file, remove_directories, south_branch, fledge_url):\n        \"\"\" This fixture clones a south repo and starts a south instance\n            add_south: Fixture that starts any south service with the given configuration\n            remove_data_file: Fixture that removes the data file created during the tests\n            remove_directories: Fixture that removes directories created during the tests \"\"\"\n\n        fogbench_template_path = self.prepare_template_reading_from_fogbench()\n\n        add_south(self.SOUTH_PLUGIN_NAME, south_branch, fledge_url, service_name=self.SOUTH_PLUGIN_NAME)\n\n        yield self.start_south\n\n        # Cleanup code that runs after the test is over\n        remove_data_file(fogbench_template_path)\n        
remove_directories(\"/tmp/fledge-south-{}\".format(self.SOUTH_PLUGIN_NAME))\n\n    def prepare_template_reading_from_fogbench(self):\n        \"\"\" Define the template file for fogbench readings \"\"\"\n\n        fogbench_template_path = os.path.join(\n            os.path.expandvars('${FLEDGE_ROOT}'), 'data/{}'.format(self.FOGBENCH_TEMPLATE))\n        with open(fogbench_template_path, \"w\") as f:\n            f.write(\n                '[{\"name\": \"%s\", \"sensor_values\": '\n                '[{\"name\": \"sensor\", \"type\": \"number\", \"min\": %d, \"max\": %d, \"precision\": 0}]}]' % (\n                    self.ASSET_NAME, self.SENSOR_VALUE, self.SENSOR_VALUE))\n\n        return fogbench_template_path\n\n    def ingest_readings_from_fogbench(self, fledge_url, wait_time):\n        pause_for_x_seconds(x=wait_time*3)\n        conn = http.client.HTTPConnection(fledge_url)\n        subprocess.run([\"cd $FLEDGE_ROOT/extras/python; python3 -m fogbench -t ../../data/{}; cd -\"\n                       .format(self.FOGBENCH_TEMPLATE)], shell=True, check=True, stdout=subprocess.DEVNULL)\n\n        pause_for_x_seconds(x=wait_time)\n\n        conn.request(\"GET\", '/fledge/asset')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        val = json.loads(r)\n        assert 1 == len(val)\n        assert self.ASSET_NAME == val[0][\"assetCode\"]\n        assert 1 == val[0][\"count\"]\n\n        conn.request(\"GET\", '/fledge/asset/{}'.format(self.ASSET_NAME))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        val = json.loads(r)\n        assert 1 == len(val)\n        assert {'sensor': self.SENSOR_VALUE} == val[0][\"reading\"]\n\n    def configure_rule_with_single_item_eval_type(self, fledge_url, cat_name):\n        conn = http.client.HTTPConnection(fledge_url)\n        data = {\"asset\": self.ASSET_NAME,\n                \"datapoint\": \"sensor\",\n                
\"evaluation_data\": \"Single Item\",\n                \"condition\": \">\",\n                \"trigger_value\": str(self.SENSOR_VALUE - 10),\n                }\n        conn.request(\"PUT\", '/fledge/category/rule{}'.format(cat_name), json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n\n    def enable_notification(self, fledge_url, cat_name, is_enabled=True):\n        _enabled = \"true\" if is_enabled else \"false\"\n        data = {\"value\": _enabled}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/category/{}/enable'.format(cat_name), json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n\n    def test_sent_and_receive_notification(self, fledge_url, start_south, wait_time):\n        data = {\"name\": \"Test4\",\n                \"description\": \"Test4_Notification\",\n                \"rule\": NOTIFY_INBUILT_RULES[0],\n                \"channel\": NOTIFY_PLUGIN,\n                \"enabled\": True,\n                \"notification_type\": \"one shot\"\n                }\n        name = data['name']\n        _add_notification_instance(fledge_url, data)\n        self.configure_rule_with_single_item_eval_type(fledge_url, name)\n\n        # upload script NotifyPython35::configure() -> lowercase(categoryName) + _script_ + method_name + \".py\"\n        cat_name = \"delivery{}\".format(name)\n        script_path = '$FLEDGE_ROOT/tests/system/python/data/notify35.py'\n        url = 'http://' + fledge_url + '/fledge/category/' + cat_name + '/script/upload'\n        upload_script = 'curl -F \"script=@{}\" {}'.format(script_path, url)\n        subprocess.run(upload_script, shell=True, check=True, stdout=subprocess.DEVNULL)\n\n        # enable notification delivery (it was getting disabled, as no script file was available)\n        self.enable_notification(fledge_url, \"delivery\" + name)\n\n        self.ingest_readings_from_fogbench(fledge_url, wait_time)\n        
time.sleep(wait_time)\n\n        _verify_audit_log_entry(fledge_url, '/fledge/audit?source=NTFSN', name=name)\n\n\nclass TestStartStopNotificationService:\n    def test_shutdown_service_with_schedule_disable(self, fledge_url, disable_schedule, wait_time):\n        disable_schedule(fledge_url, SERVICE_NAME)\n        pause_for_x_seconds(x=wait_time)\n        _verify_service(fledge_url, status='shutdown')\n        pause_for_x_seconds(x=wait_time)\n        # After shutdown there should be 1 entry for NTFSD (shutdown)\n        _verify_audit_log_entry(fledge_url, '/fledge/audit?source=NTFSD', name=SERVICE_NAME, count=1)\n\n    def test_restart_notification_service(self, fledge_url, enable_schedule, wait_time):\n        enable_schedule(fledge_url, SERVICE_NAME)\n        pause_for_x_seconds(x=wait_time)\n        _verify_service(fledge_url, status='running')\n        # After restart there should be 2 entries for NTFST (start)\n        _verify_audit_log_entry(fledge_url, '/fledge/audit?source=NTFST', name=SERVICE_NAME, count=2)\n"
  },
  {
    "path": "tests/system/python/e2e/test_e2e_pi_scaleset.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test end to end flow with:\n        Ingress: HTTP south plugin\n        Egress: PI Server (C) plugin & scale-set filter plugin\n\"\"\"\n\nimport os\nimport subprocess\nimport http.client\nimport json\nimport time\nimport pytest\nimport utils\nimport math\n\n\n__author__ = \"Praveen Garg\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nSOUTH_PLUGIN = \"http_south\"\nSVC_NAME = \"Room #1\"\nASSET_PREFIX = \"http-\"  # default for HTTP South plugin\nASSET_NAME = \"e1\"\n\nTASK_NAME = \"North v2 PI\"\n\nFILTER_PLUGIN = \"scale-set\"\nEGRESS_FILTER_NAME = \"SS #1\"\n\nREAD_KEY = \"temperature\"\nSENSOR_VALUE = 21\n\n# scale(set) factor\nSCALE = \"1.8\"\nOFFSET = \"32\"\nOUTPUT = (SENSOR_VALUE * float(SCALE)) + int(OFFSET)\n\n\nclass TestE2ePiEgressWithScalesetFilter:\n\n    def get_ping_status(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/ping')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return jdoc\n\n    def get_statistics_map(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/statistics')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return utils.serialize_stats_map(jdoc)\n\n    @pytest.fixture\n    def start_south_north_with_filter(self, reset_and_start_fledge, add_south, south_branch,\n                                      remove_data_file, remove_directories, enable_schedule,\n                                      fledge_url, add_filter, filter_branch, filter_name,\n                                      clear_pi_system_through_pi_web_api, pi_admin, pi_passwd, 
pi_db,\n                                      start_north_pi_server_c_web_api, pi_host, pi_port):\n        \"\"\" This fixture clones given south & filter plugin repo, and starts south and PI north C instance with filter\n\n        \"\"\"\n\n        # No need to give asset hierarchy in case of connector relay.\n        # There are two data points here. 1. READ_KEY 2. no data point (Asset name be used in this case.)\n        dp_list = [READ_KEY, '']\n        asset_dict = {}\n        asset_dict[ASSET_NAME] = dp_list\n        asset_dict[\"{}{}\".format(ASSET_PREFIX, ASSET_NAME)] = dp_list\n\n        # For connector relay we should not delete PI Point because\n        # when the PI point is created again (after deletion) the compressing attribute for it\n        # is always true. That means all the data is not stored in PI data archive.\n        # We lose a large proportion of the data because of compressing attribute.\n        # This is problematic for the fixture that verifies the data stored in PI.\n        # clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n        #                                   [], asset_dict)\n\n        fogbench_template_path = os.path.join(\n            os.path.expandvars('${FLEDGE_ROOT}'), 'data/template.json')\n        with open(fogbench_template_path, \"w\") as f:\n            f.write(\n                '[{\"name\": \"%s\", \"sensor_values\": '\n                '[{\"name\": \"%s\", \"type\": \"number\", \"min\": %d, \"max\": %d, \"precision\": 0}]}]' % (\n                    ASSET_NAME, READ_KEY, SENSOR_VALUE, SENSOR_VALUE))\n\n        add_south(SOUTH_PLUGIN, south_branch, fledge_url, service_name=SVC_NAME)\n\n        start_north_pi_server_c_web_api(fledge_url, pi_host, pi_port, pi_db=pi_db, pi_user=pi_admin, pi_pwd=pi_passwd, \n                                        taskname=TASK_NAME, start_task=False)\n\n        filter_cfg = {\"enable\": \"true\",\n                      \"factors\": json.dumps([\n                
          {\n                              \"asset\": \"{}{}\".format(ASSET_PREFIX, ASSET_NAME),\n                              \"datapoint\": READ_KEY,\n                              \"scale\": SCALE,\n                              \"offset\": OFFSET\n                          }])\n                      }\n        add_filter(FILTER_PLUGIN, filter_branch, EGRESS_FILTER_NAME, filter_cfg, fledge_url, TASK_NAME)\n        enable_schedule(fledge_url, TASK_NAME)\n\n        yield self.start_south_north_with_filter\n\n        remove_data_file(fogbench_template_path)\n        remove_directories(\"/tmp/fledge-south-{}\".format(ASSET_NAME.lower()))\n        remove_directories(\"/tmp/fledge-filter-{}\".format(FILTER_PLUGIN))\n\n    def test_end_to_end(self, start_south_north_with_filter, read_data_from_pi_asset_server, fledge_url, pi_host, pi_admin,\n                        pi_passwd, pi_db, wait_time, retries, skip_verify_north_interface):\n\n        subprocess.run([\"cd $FLEDGE_ROOT/extras/python; python3 -m fogbench -t ../../data/template.json -p http; cd -\"]\n                       , shell=True, check=True)\n        # Time to wait until north schedule runs\n        time.sleep(wait_time * math.ceil(15/wait_time) + 15)\n\n        self._verify_ping_and_statistics(fledge_url, count=1, skip_verify_north_interface=skip_verify_north_interface)\n\n        self._verify_ingest(fledge_url, SENSOR_VALUE, read_count=1)        \n\n        tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n        assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n        tracked_item = tracking_details[\"track\"][0]\n        assert SVC_NAME == tracked_item[\"service\"]\n        assert \"http-e1\" == tracked_item[\"asset\"]\n        assert \"http_south\" == tracked_item[\"plugin\"]\n\n        # wait for egress scheduled task to run and let filter make asset tracker entry\n        time.sleep(wait_time * 3)\n\n        if not skip_verify_north_interface:\n  
          self._verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries)\n        \n        tracking_details = utils.get_asset_tracking_details(fledge_url, \"Filter\")\n        assert len(tracking_details[\"track\"]), \"Failed to track Filter event\"\n        tracked_item = tracking_details[\"track\"][0]\n        assert TASK_NAME == tracked_item[\"service\"]\n        assert \"http-e1\" == tracked_item[\"asset\"]\n        assert \"SS #1\" == tracked_item[\"plugin\"]\n\n        if not skip_verify_north_interface:\n            egress_tracking_details = utils.get_asset_tracking_details(fledge_url,\"Egress\")\n            assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n            tracked_item = egress_tracking_details[\"track\"][0]\n            assert TASK_NAME == tracked_item[\"service\"]\n            assert \"http-e1\" == tracked_item[\"asset\"]\n            assert \"OMF\" == tracked_item[\"plugin\"]\n\n    def _verify_ping_and_statistics(self, fledge_url, count, skip_verify_north_interface=False):\n        ping_response = self.get_ping_status(fledge_url)\n        assert count == ping_response[\"dataRead\"]\n        if not skip_verify_north_interface:\n            assert count == ping_response[\"dataSent\"]\n\n        actual_stats_map = self.get_statistics_map(fledge_url)\n        key_asset_name_with_prefix = \"{}{}\".format(ASSET_PREFIX.upper(), ASSET_NAME.upper())\n        assert count == actual_stats_map[key_asset_name_with_prefix]\n        assert count == actual_stats_map['READINGS']\n\n        if not skip_verify_north_interface:\n            assert count == actual_stats_map['Readings Sent']\n            assert count == actual_stats_map[TASK_NAME]\n\n    def _verify_ingest(self, fledge_url, value, read_count):\n        asset_name_with_prefix = \"{}{}\".format(ASSET_PREFIX, ASSET_NAME)\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", 
'/fledge/asset')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No asset found\"\n        assert asset_name_with_prefix == jdoc[0][\"assetCode\"]\n        assert read_count == jdoc[0][\"count\"]\n\n        conn.request(\"GET\", '/fledge/asset/{}'.format(asset_name_with_prefix))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No asset found\"\n        assert value == jdoc[0][\"reading\"][READ_KEY]\n\n    def _verify_egress(self, read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries):\n        retry_count = 0\n        data_from_pi = None\n        while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n            asset_name_with_prefix = \"{}{}\".format(ASSET_PREFIX, ASSET_NAME)\n            # data_from_pi = read_data_from_pi(pi_host, pi_admin, pi_passwd, pi_db, asset_name_with_prefix, {READ_KEY})\n            data_from_pi = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db, asset_name_with_prefix, {READ_KEY})\n            retry_count += 1\n            time.sleep(wait_time * 2)\n\n        if data_from_pi is None or retry_count == retries:\n            assert False, \"Failed to read data from PI\"\n\n        assert READ_KEY in data_from_pi\n        assert isinstance(data_from_pi[READ_KEY], list)\n        assert round(OUTPUT, 1) in [round(n, 1) for n in data_from_pi[READ_KEY]]\n"
  },
  {
    "path": "tests/system/python/e2e/test_e2e_vary_asset_http_pi.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test system/python/e2e/test_e2e_vary_asset_http_pi.py\n\n\"\"\"\n\nimport http.client\nimport json\nimport time\nfrom datetime import datetime, timezone\nimport uuid\nimport pytest\nimport utils\nimport math\n\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestE2EAssetHttpPI:\n    def get_ping_status(self, fledge_url):\n        _connection = http.client.HTTPConnection(fledge_url)\n        _connection.request(\"GET\", '/fledge/ping')\n        r = _connection.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return jdoc\n\n    def get_statistics_map(self, fledge_url):\n        _connection = http.client.HTTPConnection(fledge_url)\n        _connection.request(\"GET\", '/fledge/statistics')\n        r = _connection.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return utils.serialize_stats_map(jdoc)\n\n\n    def _verify_egress(self, read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries, asset_name,\n                       sensor_data, sensor_data_2):\n        retry_count = 0\n        data_from_pi = None\n        while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n            data_from_pi = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db, asset_name, {\"a\", \"b\", \"a2\", \"b2\"})\n            retry_count += 1\n            time.sleep(wait_time * 2)\n\n        if data_from_pi is None or retry_count == retries:\n            assert False, \"Failed to read data from PI\"\n\n        assert data_from_pi[\"b\"][-1] == 0.0\n        assert data_from_pi[\"b\"][-2] == 0.0\n        assert data_from_pi[\"b\"][-3] == 
sensor_data_2[0][\"b\"]\n        assert data_from_pi[\"b\"][-4] == sensor_data[2][\"b\"]\n        assert data_from_pi[\"b\"][-5] == sensor_data[1][\"b\"]\n        assert data_from_pi[\"b\"][-6] == 0.0\n\n        assert data_from_pi[\"a\"][-1] == sensor_data_2[2][\"a\"]\n        assert data_from_pi[\"a\"][-2] == 0.0\n        assert data_from_pi[\"a\"][-3] == 0.0\n        assert data_from_pi[\"a\"][-4] == 0.0\n        assert data_from_pi[\"a\"][-5] == sensor_data[1][\"a\"]\n        assert data_from_pi[\"a\"][-6] == sensor_data[0][\"a\"]\n\n        assert data_from_pi[\"b2\"][-1] == 0.0\n        assert data_from_pi[\"b2\"][-2] == sensor_data_2[1][\"b2\"]\n        assert data_from_pi[\"b2\"][-3] == 0.0\n        assert data_from_pi[\"b2\"][-4] == 0.0\n        assert data_from_pi[\"b2\"][-5] == 0.0\n        assert data_from_pi[\"b2\"][-6] == 0.0\n\n        assert data_from_pi[\"a2\"][-1] == 0.0\n        assert data_from_pi[\"a2\"][-2] == sensor_data_2[1][\"a2\"]\n        assert data_from_pi[\"a2\"][-3] == 0.0\n        assert data_from_pi[\"a2\"][-4] == 0.0\n        assert data_from_pi[\"a2\"][-5] == 0.0\n        assert data_from_pi[\"a2\"][-6] == 0.0\n\n    @pytest.fixture\n    def start_south_north(self, reset_and_start_fledge, add_south, start_north_pi_server_c_web_api, remove_directories,\n                          south_branch, fledge_url, pi_host, pi_port,\n                          clear_pi_system_through_pi_web_api, pi_admin, pi_passwd, pi_db,\n                          asset_name=\"e2e_varying\"):\n        \"\"\" This fixture clones a south repo and starts both south and north instances\n            reset_and_start_fledge: Fixture that resets and starts fledge, no explicit invocation, called at start\n            add_south: Fixture that adds a south service with the given configuration\n            start_north_pi_server_c_web_api: Fixture that starts the PI north task\n            remove_directories: Fixture that removes directories created during the tests\"\"\"\n\n      
  # No need to give asset hierarchy in case of connector relay.\n        dp_list = ['a', 'b', 'a2', 'b2', '']\n        # There are five data points here. 1. a 2. b\n        # 3. a2          4. b2\n        # 5. no data point (Asset name be used in this case.)\n        asset_dict = {}\n        asset_dict[asset_name] = dp_list\n        # For connector relay we should not delete PI Point because\n        # when the PI point is created again (after deletion) the compressing attribute for it\n        # is always true. That means all the data is not stored in PI data archive.\n        # We lose a large proportion of the data because of compressing attribute.\n        # This is problematic for the fixture that verifies the data stored in PI.\n        # clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n        #                                    [], asset_dict)\n\n        south_plugin = \"http\"\n        add_south(\"http_south\", south_branch, fledge_url, config={\"assetNamePrefix\": {\"value\": \"\"}},\n                  service_name=\"http_south\")\n        start_north_pi_server_c_web_api(fledge_url, pi_host, pi_port, pi_db=pi_db, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                    taskname=\"NorthReadingsToPI\")\n        \n        yield self.start_south_north\n\n        # Cleanup code that runs after the caller test is over\n        remove_directories(\"/tmp/fledge-south-{}\".format(south_plugin))\n\n    def test_end_to_end(self, start_south_north, read_data_from_pi_asset_server, fledge_url, pi_host, pi_admin, pi_passwd, pi_db,\n                        wait_time, retries, skip_verify_north_interface):\n        \"\"\" Test that data is inserted in Fledge and sent to PI\n            start_south_north: Fixture that starts Fledge with south and north instance\n            read_data_from_pi_asset_server: Fixture to read data from PI\n            skip_verify_north_interface: Flag for assertion of data from Pi web API\n            
Assertions:\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/asset/<asset_name>\n                data received from PI is same as data sent\"\"\"\n\n        # Allow http_south service to come up and register before sending data\n        time.sleep(wait_time)\n        conn = http.client.HTTPConnection(fledge_url)\n\n        # Send data to fledge-south-http\n        conn_http_south = http.client.HTTPConnection(\"localhost:6683\")\n\n        asset_name = \"e2e_varying\"\n        # 2 list having mixed data simulating different sensors\n        # (sensors coming up and down, sensors throwing int and float data)\n        sensor_data = [{\"a\": 1}, {\"a\": 2, \"b\": 3}, {\"b\": 4}]\n        sensor_data_2 = [{\"b\": 1.1}, {\"a2\": 2, \"b2\": 3}, {\"a\": 4.0}]\n        for d in sensor_data + sensor_data_2:\n            tm = str(datetime.now(timezone.utc).astimezone())\n            data = [{\"asset\": \"{}\".format(asset_name), \"timestamp\": \"{}\".format(tm), \"key\": str(uuid.uuid4()),\n                     \"readings\": d}]\n            conn_http_south.request(\"POST\", '/sensor-reading', json.dumps(data))\n            r = conn_http_south.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert {'result': 'success'} == jdoc\n\n        # Time to wait until north schedule runs\n        time.sleep(wait_time * math.ceil(15/wait_time) + 15)\n\n        ping_response = self.get_ping_status(fledge_url)\n        assert 6 == ping_response[\"dataRead\"]\n        if not skip_verify_north_interface:\n            assert 6 == ping_response[\"dataSent\"]\n\n        actual_stats_map = self.get_statistics_map(fledge_url)\n        assert 6 == actual_stats_map[asset_name.upper()]\n        assert 6 == actual_stats_map['READINGS']\n        if not skip_verify_north_interface:\n            assert 6 == actual_stats_map['Readings Sent']\n            assert 6 == 
actual_stats_map['NorthReadingsToPI']\n\n        conn.request(\"GET\", '/fledge/asset')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        retval = json.loads(r)\n        assert len(retval) == 1\n        assert asset_name == retval[0][\"assetCode\"]\n        assert 6 == retval[0][\"count\"]\n\n        conn.request(\"GET\", '/fledge/asset/{}'.format(asset_name))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        retval = json.loads(r)\n\n        assert sensor_data_2[2] == retval[0][\"reading\"]\n        assert sensor_data_2[1] == retval[1][\"reading\"]\n        assert sensor_data_2[0] == retval[2][\"reading\"]\n        assert sensor_data[2] == retval[3][\"reading\"]\n        assert sensor_data[1] == retval[4][\"reading\"]\n        assert sensor_data[0] == retval[5][\"reading\"]\n\n        if not skip_verify_north_interface:\n            # Allow some buffer so that data is ingested in PI before fetching using PI Web API\n            time.sleep(wait_time)\n            self._verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries, asset_name,\n                                sensor_data, sensor_data_2)\n\n        tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n        assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n        tracked_item = tracking_details[\"track\"][0]\n        assert \"http_south\" == tracked_item[\"service\"]\n        assert asset_name == tracked_item[\"asset\"]\n        assert \"http_south\" == tracked_item[\"plugin\"]\n\n        if not skip_verify_north_interface:\n            egress_tracking_details = utils.get_asset_tracking_details(fledge_url,\"Egress\")\n            assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n            tracked_item = egress_tracking_details[\"track\"][0]\n            assert 
\"NorthReadingsToPI\" == tracked_item[\"service\"]\n            assert asset_name == tracked_item[\"asset\"]\n            assert \"OMF\" == tracked_item[\"plugin\"]\n\n\n\n\n"
  },
  {
    "path": "tests/system/python/e2e/test_south_service_tuning.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test south service tuning parameters for bufferThreshold and maxSendLatency \"\"\"\n\nimport time\nimport urllib.parse\nimport pytest\n\nfrom helpers import utils\n\n__author__ = \"Devki Nandan Ghildiyal\"\n__copyright__ = \"Copyright (c) 2025 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nSERVICE_NAME = \"TuningSouth\"\n\nclass TestSouthServiceTuning:\n    \n    def test_south_service_tuning_buffer_threshold(self, reset_and_start_fledge, fledge_url, \n                                                   wait_time, retries, add_south, south_branch, plugin_language, enable_schedule, disable_schedule, plugin_name):\n        \"\"\" Test south service tuning parameters - bufferThreshold and maxSendLatency\n            \n            This test:\n            1. Sets up a south service using sinusoid plugin\n            2. Configures polling interval to 2 seconds (30 readingsPerSec)\n            3. Sets maxSendLatency to 60000ms (1 minute)\n            4. Sets bufferThreshold to 200\n            5. Verifies 30 readings after 90 seconds (1 minute worth)\n            6. 
Tests dynamic parameter changes\n        \"\"\"\n        \n        print(\"\\n=== Running South Service Tuning Test ===\")\n\n        # Step 1: Create south service with sinusoid plugin and initial configuration\n        # Create the service\n        self._add_south_service(SERVICE_NAME, fledge_url, plugin_name, add_south, south_branch, plugin_language)\n        # Configure advanced parameters after service creation\n        advanced_config = {\n            \"units\": \"minute\",             # Polling interval unit\n            \"readingsPerSec\": \"30\",        # 30 readings per minute, i.e. one every 2 seconds\n            \"maxSendLatency\": \"60000\",     # 1 minute\n            \"bufferThreshold\": \"200\"       # Buffer 200 readings\n        }\n        resp = self._set_advance_config(fledge_url, SERVICE_NAME, advanced_config)\n\n        # Step 2: Enable the south service\n        response = enable_schedule(fledge_url, SERVICE_NAME)\n        assert \"Schedule successfully enabled\" == response[\"message\"]\n        print(f\"Scheduled south service: {response}\")\n        print(f\"Enabled south service: {SERVICE_NAME}\")\n\n        # Step 3: Wait 90 seconds and verify ~30 readings (1 minute's worth of data)\n        print(\"Waiting 90 seconds for data collection...\")\n        time.sleep(90)\n        \n        # Get ping statistics to check dataRead count\n        ping_result = utils.get_request(fledge_url, \"/fledge/ping\")\n        initial_data_read = ping_result[\"dataRead\"]\n        print(f\"Initial dataRead count: {initial_data_read}\")\n        \n        # Verify approximately 30 readings (allow some tolerance)\n        assert 20 <= initial_data_read <= 40, f\"Expected ~30 readings, got {initial_data_read}\"\n        \n        # Step 4: Disable the service\n        response = disable_schedule(fledge_url, SERVICE_NAME)\n        assert \"Schedule successfully disabled\" == response[\"message\"]\n        print(\"Disabled south service\")\n        time.sleep(2)  # Allow disable to take 
effect\n\n        # Verify buffered readings are sent to storage after stopping the service\n        ping_result = utils.get_request(fledge_url, \"/fledge/ping\")\n        buffered_data_read = ping_result[\"dataRead\"]\n        buffered_data_read = buffered_data_read - initial_data_read\n        print(f\"Buffered dataRead count: {buffered_data_read}\")\n\n        # Step 5: Change bufferThreshold to 10\n        config_data = {\"bufferThreshold\": \"10\"}\n        resp = self._set_advance_config(fledge_url, SERVICE_NAME, config_data)\n        assert \"10\" == resp[\"bufferThreshold\"][\"value\"]\n        print(\"Updated bufferThreshold to 10\")\n        \n        # Step 6: Re-enable the service \n        response = enable_schedule(fledge_url, SERVICE_NAME)\n        assert \"Schedule successfully enabled\" == response[\"message\"]\n        print(\"Re-enabled south service\")\n        \n        # Step 7: Wait 25 seconds and verify 10 additional readings\n        print(\"Waiting 25 seconds for additional data...\")\n        time.sleep(25)\n        \n        new_ping_result = utils.get_request(fledge_url, \"/fledge/ping\")\n        new_data_read = new_ping_result[\"dataRead\"]\n        additional_readings = new_data_read - buffered_data_read - initial_data_read\n        print(f\"Additional readings: {additional_readings}\")\n        \n        # Verify approximately 10 additional readings (with small buffer time, should be ~10-12)\n        assert 8 <= additional_readings <= 15, f\"Expected ~10 additional readings, got {additional_readings}\"\n        \n        # Step 8: Test dynamic parameter changes without disabling service\n        self._test_dynamic_buffer_threshold_changes(fledge_url, wait_time, SERVICE_NAME)\n        self._test_dynamic_latency_changes(fledge_url, wait_time, SERVICE_NAME)\n        \n        # Cleanup: Delete the service\n        response = utils.delete_request(fledge_url, f\"/fledge/service/{SERVICE_NAME}\")\n        assert f\"Service {SERVICE_NAME} 
deleted successfully.\" == response[\"result\"]\n        print(f\"Deleted south service: {SERVICE_NAME}\")\n\n    def _add_south_service(self, service_name, fledge_url, plugin_name, add_south, south_branch, plugin_language):\n        response = add_south(plugin_name, south_branch, fledge_url, service_name=service_name, plugin_lang=plugin_language, start_service=False)\n        # Verify the service was created with the requested name\n        assert service_name == response[\"name\"]\n        print(f\"Created south service: {service_name}\")\n        time.sleep(2)  # Allow time for Advanced category creation\n\n    def _set_advance_config(self, fledge_url, service_name, config):\n        \"\"\" Helper to set advanced configuration \"\"\"\n        put_url = f\"/fledge/category/{service_name}Advanced\"\n        print(f\"Configuring advanced parameters for {service_name}: {config}\", put_url)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), config)\n        return resp\n    \n    def _test_dynamic_buffer_threshold_changes(self, fledge_url, wait_time, service_name):\n        \"\"\" Test dynamic changes to bufferThreshold without disabling service \"\"\"\n        \n        print(\"\\n=== Testing Dynamic Buffer Threshold Changes ===\")\n        \n        # Get baseline reading count\n        baseline_ping = utils.get_request(fledge_url, \"/fledge/ping\")\n        baseline_count = baseline_ping[\"dataRead\"]\n        \n        # Test 1: Increase bufferThreshold to 300 (should delay sending)\n        config_data = {\"bufferThreshold\": \"300\"}\n        resp = self._set_advance_config(fledge_url, service_name, config_data)\n        assert \"300\" == resp[\"bufferThreshold\"][\"value\"]\n        print(\"Increased bufferThreshold to 300\")\n        \n        # Wait and verify fewer sends due to higher threshold\n        time.sleep(wait_time * 4)  # 20 seconds\n        ping_result = utils.get_request(fledge_url, \"/fledge/ping\")\n        readings_with_high_threshold = 
ping_result[\"dataRead\"] - baseline_count\n        print(f\"Readings with high threshold (300): {readings_with_high_threshold}\")\n        \n        # Test 2: Decrease bufferThreshold to 5 (should send more frequently)\n        config_data = {\"bufferThreshold\": \"5\"}\n        resp = self._set_advance_config(fledge_url, service_name, config_data)\n        assert \"5\" == resp[\"bufferThreshold\"][\"value\"]\n        print(\"Decreased bufferThreshold to 5\")\n        \n        baseline_after_change = ping_result[\"dataRead\"]\n        time.sleep(wait_time * 4)  # 20 seconds\n        ping_result = utils.get_request(fledge_url, \"/fledge/ping\")\n        readings_with_low_threshold = ping_result[\"dataRead\"] - baseline_after_change\n        print(f\"Readings with low threshold (5): {readings_with_low_threshold}\")\n        \n        # With lower threshold, we should see similar or more frequent sends\n        assert readings_with_low_threshold >= readings_with_high_threshold, \"Lower threshold should allow more frequent sending\"\n\n    def _test_dynamic_latency_changes(self, fledge_url, wait_time, service_name):\n        \"\"\" Test dynamic changes to maxSendLatency without disabling service \"\"\"\n        \n        print(\"\\n=== Testing Dynamic Max Send Latency Changes ===\")\n        \n        # Get baseline reading count\n        baseline_ping = utils.get_request(fledge_url, \"/fledge/ping\")\n        baseline_count = baseline_ping[\"dataRead\"]\n        \n        # Test 1: Set very high maxSendLatency (30 seconds)\n        config_data = {\"maxSendLatency\": \"30000\"}\n        resp = self._set_advance_config(fledge_url, service_name, config_data)\n        assert \"30000\" == resp[\"maxSendLatency\"][\"value\"]\n        print(\"Set maxSendLatency to 30000ms (30 seconds)\")\n        \n        time.sleep(wait_time * 6)  # 30 seconds\n        ping_result = utils.get_request(fledge_url, \"/fledge/ping\")\n        readings_with_high_latency = 
ping_result[\"dataRead\"] - baseline_count\n        print(f\"Readings with high latency (30s): {readings_with_high_latency}\")\n        \n        # Test 2: Set lower maxSendLatency (5 seconds)\n        config_data = {\"maxSendLatency\": \"5000\"}\n        resp = self._set_advance_config(fledge_url, service_name, config_data)\n        assert \"5000\" == resp[\"maxSendLatency\"][\"value\"]\n        print(\"Set maxSendLatency to 5000ms (5 seconds)\")\n        \n        baseline_after_change = ping_result[\"dataRead\"]\n        time.sleep(wait_time * 6)  # 30 seconds\n        ping_result = utils.get_request(fledge_url, \"/fledge/ping\")\n        readings_with_low_latency = ping_result[\"dataRead\"] - baseline_after_change\n        print(f\"Readings with low latency (5s): {readings_with_low_latency}\")\n        \n    def test_south_service_comprehensive_tuning(self, reset_and_start_fledge, fledge_url, \n                                               wait_time, retries, add_south, south_branch, plugin_language,  enable_schedule, disable_schedule,plugin_name):\n        \"\"\" Comprehensive test of south service tuning - tests all parameter combinations \"\"\"\n        \n        service_name = f\"{SERVICE_NAME}_Comprehensive\"\n    \n        # Create and enable service\n        self._add_south_service(service_name, fledge_url, plugin_name, add_south, south_branch, plugin_language)\n\n        # Configure advanced parameters after service creation\n        advanced_config = {\n            \"units\": \"minute\" ,            # Polling interval unit\n            \"readingsPerSec\": \"30\",        # 2 seconds interval\n            \"maxSendLatency\": \"10000\",     # 10 seconds\n            \"bufferThreshold\": \"20\"        # Buffer 20 readings\n        }\n        resp = self._set_advance_config(fledge_url, service_name, advanced_config)\n        enable_schedule(fledge_url, service_name)\n        \n        try:\n            # Test matrix of different configurations\n            
test_configs = [\n                {\"bufferThreshold\": \"50\", \"maxSendLatency\": \"15000\", \"expected_behavior\": \"delayed_sends\"},\n                {\"bufferThreshold\": \"5\", \"maxSendLatency\": \"5000\", \"expected_behavior\": \"frequent_sends\"},\n                {\"bufferThreshold\": \"100\", \"maxSendLatency\": \"2000\", \"expected_behavior\": \"latency_driven\"},\n                {\"bufferThreshold\": \"10\", \"maxSendLatency\": \"30000\", \"expected_behavior\": \"threshold_driven\"}\n            ]\n            \n            for i, test_config in enumerate(test_configs):\n                print(f\"\\n--- Test Configuration {i+1}: {test_config['expected_behavior']} ---\")\n                \n                # Get baseline\n                baseline_ping = utils.get_request(fledge_url, \"/fledge/ping\")\n                baseline_count = baseline_ping[\"dataRead\"]\n                \n                # Update configuration\n                config_update = {\n                    \"bufferThreshold\": test_config[\"bufferThreshold\"],\n                    \"maxSendLatency\": test_config[\"maxSendLatency\"]\n                }\n                resp = self._set_advance_config(fledge_url, service_name, config_update)\n                \n                # Verify configuration was applied\n                assert test_config[\"bufferThreshold\"] == resp[\"bufferThreshold\"][\"value\"]\n                assert test_config[\"maxSendLatency\"] == resp[\"maxSendLatency\"][\"value\"]\n                \n                # Wait and measure results\n                time.sleep(wait_time * 8)  # 40 seconds\n                \n                new_ping = utils.get_request(fledge_url, \"/fledge/ping\")\n                new_count = new_ping[\"dataRead\"]\n                readings_collected = new_count - baseline_count\n                \n                print(f\"Configuration: bufferThreshold={test_config['bufferThreshold']}, \"\n                      
f\"maxSendLatency={test_config['maxSendLatency']}\")\n                print(f\"Readings collected in 40s: {readings_collected}\")\n                \n                # Basic validation - should always collect some readings\n                assert readings_collected > 0, \"Should have collected some readings in 40 seconds\"\n                \n                # Store results for comparison\n                test_config[\"actual_readings\"] = readings_collected\n            \n            # Verify that different configurations produce different behaviors\n            frequent_sends_config = next(c for c in test_configs if c[\"expected_behavior\"] == \"frequent_sends\")\n            delayed_sends_config = next(c for c in test_configs if c[\"expected_behavior\"] == \"delayed_sends\")\n            \n            print(f\"\\nComparison: Frequent sends={frequent_sends_config['actual_readings']}, \"\n                  f\"Delayed sends={delayed_sends_config['actual_readings']}\")\n            \n        finally:\n            # Cleanup\n            utils.delete_request(fledge_url, f\"/fledge/service/{service_name}\")\n            print(f\"Deleted service: {service_name}\")\n  \n    def test_buffer_threshold_impact_on_send_frequency(self, reset_and_start_fledge, fledge_url, add_south, south_branch, plugin_language,\n                                                      enable_schedule, disable_schedule, plugin_name):\n        \"\"\" Test how bufferThreshold impacts send frequency \"\"\"\n        \n        service_name = f\"{SERVICE_NAME}_BufferTest\"\n        \n        # # Create and enable service\n        self._add_south_service(service_name, fledge_url, plugin_name, add_south, south_branch, plugin_language)\n        \n        # Configure advanced parameters after service creation\n        advanced_config = {\n            \"units\": \"second\" ,            # Polling interval unit\n            \"readingsPerSec\": \"10\",        # 10 reading per second\n            
\"maxSendLatency\": \"60000\",     # High latency so threshold dominates\n            \"bufferThreshold\": \"10\"        # Buffer 10 readings\n        }\n        resp = self._set_advance_config(fledge_url, service_name, advanced_config)\n        enable_schedule(fledge_url, service_name)\n\n        try:\n            # Test with small buffer threshold\n            time.sleep(15)  # 15 seconds should trigger ~1-2 sends with 10-reading buffer\n            ping1 = utils.get_request(fledge_url, \"/fledge/ping\")\n            readings_small_buffer = ping1[\"dataRead\"]\n            print(f\"Readings with bufferThreshold=10: {readings_small_buffer}\")\n            \n            # Change to large buffer threshold\n            config_update = {\"bufferThreshold\": \"100\"}\n            resp = self._set_advance_config(fledge_url, service_name, config_update)\n            \n            time.sleep(15)  # Another 15 seconds\n            ping2 = utils.get_request(fledge_url, \"/fledge/ping\")\n            total_readings = ping2[\"dataRead\"]\n            additional_readings = total_readings - readings_small_buffer\n            print(f\"Additional readings with bufferThreshold=100: {additional_readings}\")\n            \n            # Should have collected more readings but sent less frequently due to higher threshold\n            assert additional_readings >= readings_small_buffer, \"Should continue collecting readings with high threshold\"\n            \n        finally:\n            utils.delete_request(fledge_url, f\"/fledge/service/{service_name}\")\n\n    def test_max_send_latency_impact(self, reset_and_start_fledge, fledge_url, add_south, south_branch, plugin_language,\n                                   enable_schedule, disable_schedule, plugin_name):\n        \"\"\" Test how maxSendLatency impacts send timing \"\"\"\n        \n        service_name = f\"{SERVICE_NAME}_LatencyTest\"\n        \n        # Create and enable service\n        
self._add_south_service(service_name, fledge_url, plugin_name, add_south, south_branch, plugin_language)\n\n        # Configure advanced parameters after service creation\n        advanced_config = {\n            \"units\": \"second\" ,            # Polling interval unit\n            \"readingsPerSec\": \"2\",         # 2 reading per second\n            \"maxSendLatency\": \"5000\",      # 5 seconds\n            \"bufferThreshold\": \"1000\"      # High threshold so latency dominate\n        }\n        resp = self._set_advance_config(fledge_url, service_name, advanced_config)\n        enable_schedule(fledge_url, service_name)\n        try:\n            # Test with short latency\n            time.sleep(20)  # Should trigger multiple sends\n            ping1 = utils.get_request(fledge_url, \"/fledge/ping\")\n            readings_short_latency = ping1[\"dataRead\"]\n            print(f\"Readings with maxSendLatency=5000ms: {readings_short_latency}\")\n            \n            # Change to long latency\n            config_update = {\"maxSendLatency\": \"30000\"}  # 30 seconds\n            resp = self._set_advance_config(fledge_url, service_name, config_update)\n            \n            time.sleep(20)  # Another 20 seconds\n            ping2 = utils.get_request(fledge_url, \"/fledge/ping\")\n            total_readings = ping2[\"dataRead\"]\n            additional_readings = total_readings - readings_short_latency\n            print(f\"Additional readings with maxSendLatency=30000ms: {additional_readings}\")\n            \n            # Should continue collecting but behavior may differ based on latency setting\n            assert additional_readings > 0, ( f\"No additional readings collected. Before: {readings_short_latency}, After: {total_readings}\" )\n            \n        finally:\n            utils.delete_request(fledge_url, f\"/fledge/service/{service_name}\") \n\n\n"
  },
  {
    "path": "tests/system/python/fledge/plugins/filter/imageblock/README.rst",
    "content": "======================================\nFledge imageblock Python filter plugin\n======================================\n\nThe plugin overlays a square block of a configurable grayscale shade on the input image.\n\nRuntime configuration\n---------------------\n\nThe filter supports the following configuration items:\n\nenable\n  A switch that controls whether the filter is applied\n\nblock_color\n  Grayscale value of the block to overlay (0-255)\n\n"
  },
  {
    "path": "tests/system/python/fledge/plugins/filter/imageblock/__init__.py",
    "content": ""
  },
  {
    "path": "tests/system/python/fledge/plugins/filter/imageblock/imageblock.py",
    "content": "# -*- coding: utf-8 -*-\n# FOGLAMP_BEGIN\n# See: https://foglamp-foglamp-documentation.readthedocs-hosted.com\n# FOGLAMP_END\n\n\"\"\" Plugin module which adds a square block of specific monochrome shade on images \"\"\"\n\nimport os\nimport logging\nimport datetime\nimport filter_ingest\nimport traceback\nimport copy\nimport json \nimport numpy as np\n\nfrom fledge.common import logger\n\n# local logger\n_LOGGER = logger.setup(__name__, level=logging.DEBUG)\n\n\n_DEFAULT_CONFIG = {\n    'plugin': {        # mandatory  filter\n        'description': 'Filter that overlays a square block on image',\n        'type': 'string',\n        'default': 'imageblock',\n        'readonly': 'true'\n    },\n    'enable': {    # recommended filter\n        'description': 'Enable imageblock filter plugin',\n        'type': 'boolean',\n        'default': 'false',\n        'displayName': 'Enabled',\n        'order': \"1\"\n    },\n    'block_color': {\n        'description': 'Block color (0-255)',\n        'type': 'integer',\n        'default': '255',\n        'displayName': 'Block color',\n        'order': '2'\n    }\n}\n\n\ndef plugin_info():\n    \"\"\" Returns information about the plugin.\n    Args:\n    Returns:\n        dict: plugin information\n    Raises:\n    \"\"\"\n    _LOGGER.info(\"imageblock - plugin_info called\")\n    return {\n        'name': 'imageblock',\n        'version': '1.9.2',\n        'mode': 'none',\n        'type': 'filter',\n        'interface': '1.0',\n        'config': _DEFAULT_CONFIG\n    }\n\n\ndef plugin_init(config, ingest_ref, callback):\n    \"\"\" Initialise the plugin.\n    Args:\n        config: JSON configuration document for the plugin configuration category\n    Returns:\n        data: JSON object to be used in future calls to the plugin\n    Raises:\n    \"\"\"\n\n    _LOGGER.info(\"imageblock - plugin_init called\")\n    try:\n        _config = copy.deepcopy(config)\n        _config['ingest_ref'] = ingest_ref\n        
_config['callback'] = callback\n    except Exception:\n        _LOGGER.error(\"could not create configuration\")\n        raise\n    return _config\n\n\ndef plugin_ingest(handle, data):\n    \"\"\" plugin_ingest -- overlay a block on the image readings and forward them \"\"\"\n\n    if handle['enable']['value'] == 'false':\n        _LOGGER.debug(\"imageblock - plugin_ingest: enable=FALSE, not processing data, forwarding received data\")\n        filter_ingest.filter_ingest_callback(handle['callback'], handle['ingest_ref'], data)\n        return\n\n    _LOGGER.debug(\"imageblock - plugin_ingest: INPUT: type(data)={}, data={}\".format(type(data), data))\n    color = int(handle['block_color']['value'])\n    import random\n\n    try:\n        if isinstance(data, dict):\n            data = [data]\n\n        for entry in data:\n            _LOGGER.debug(\"np.pi={}, type(entry) = {}\".format(np.pi, type(entry)))\n\n            for k in entry['readings'].keys():\n                v = entry['readings'][k]\n                _LOGGER.debug(\"k={}, type(v)={}, v.shape={}, v={}\".format(k, type(v), v.shape, v))\n\n                center = random.randint(v.shape[0]//4, v.shape[0]//4*3+1)\n                sz = random.randint(10, v.shape[0]//4-10)\n                _LOGGER.debug(\"imageblock - plugin_ingest: center={}, sz={}, color={}\".format(center, sz, color))\n                v[center-sz:center+sz, center-sz:center+sz] = color\n                entry['readings'][k] = v\n\n        _LOGGER.debug(\"After adding a small block, pixel values: OUTPUT: data={}\".format(data))\n\n        filter_ingest.filter_ingest_callback(handle['callback'], handle['ingest_ref'], data)\n\n    except Exception:\n        _LOGGER.error(\"imageblock filter exception {}\".format(traceback.format_exc()))\n        raise\n\n\ndef plugin_reconfigure(handle, new_config):\n    \"\"\" Reconfigures the plugin\n    Args:\n        handle: handle returned by the plugin initialisation call\n        new_config: JSON 
object representing the new configuration category for the category\n    Returns:\n        new_handle: new handle to be used in the future calls\n    \"\"\"\n\n    _LOGGER.info(\"imageblock - Old config for  plugin {} \\n new config {}\".format(handle, new_config))\n    plugin_shutdown(handle)\n    # plugin_init\n    new_handle = plugin_init(new_config, handle['ingest_ref'], handle['callback'])\n    return new_handle\n\n\ndef plugin_shutdown(handle):\n    \"\"\" Shut down the plugin.\n    Args:\n        handle: JSON configuration document for the plugin configuration category\n    Returns:\n        data: JSON object to be used in future calls to the plugin\n    Raises:\n    \"\"\"\n    _LOGGER.info(\"imageblock Shutdown\")\n\n"
  },
  {
    "path": "tests/system/python/fledge/plugins/south/imagetest/__init__.py",
    "content": ""
  },
  {
    "path": "tests/system/python/fledge/plugins/south/imagetest/imagetest.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Module for ImageTest poll mode plugin \"\"\"\n\nimport copy\nimport logging\nimport numpy as np\n\nfrom fledge.common import logger\nfrom fledge.plugins.common import utils\n\n__author__ = \"Mark Riddoch\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n_DEFAULT_CONFIG = {\n    'plugin': {\n        'description': 'ImageTest Poll Plugin which implements a test',\n        'type': 'string',\n        'default': 'imagetest',\n        'readonly': 'true'\n    },\n    'assetName': {\n        'description': 'Name of Asset',\n        'type': 'string',\n        'default': 'image',\n        'displayName': 'Asset name',\n        'mandatory': 'true'\n    },\n    'depth': {\n        'description': 'Bits per pixel',\n        'type': 'enumeration',\n        'options' : [ '8', '16', '24' ],\n        'default': '8',\n        'displayName': 'Depth',\n        'mandatory': 'true'\n    }\n\n}\n\n_LOGGER = logger.setup(__name__, level=logging.INFO)\n\n\ndef plugin_info():\n    \"\"\" Returns information about the plugin.\n    Args:\n    Returns:\n        dict: plugin information\n    Raises:\n    \"\"\"\n    return {\n        'name': 'ImageTest Poll plugin',\n        'version': '1.9.2',\n        'mode': 'poll',\n        'type': 'south',\n        'interface': '1.0',\n        'config': _DEFAULT_CONFIG\n    }\n\n\ndef plugin_init(config):\n    \"\"\" Initialise the plugin.\n    Args:\n        config: JSON configuration document for the South plugin configuration category\n    Returns:\n        data: JSON object to be used in future calls to the plugin\n    Raises:\n    \"\"\"\n    data = copy.deepcopy(config)\n    return data\n\n\ndef plugin_poll(handle):\n    \"\"\" Extracts data from the sensor and returns it in a JSON document as a Python dict.\n    Available for poll mode only.\n    Args:\n    
    handle: handle returned by the plugin initialisation call\n    Returns:\n        returns a sensor reading in a JSON document, as a Python dict, if it is available\n        None - If no reading is available\n    Raises:\n        Exception\n    \"\"\"\n    try:\n        time_stamp = utils.local_timestamp()\n        depth = int(handle['depth']['value'])\n        _LOGGER.info(\" depth from config {} \\n\".format(str(depth)))\n\n        if depth == 8:\n\n            image = np.full((256,256), 0 , dtype=np.uint8)\n\n            for i in range(0, 256):\n                for j in range(0,256):\n                        image[i][j] = i \n        \n            data = {'asset':  handle['assetName']['value'], 'timestamp': time_stamp, 'readings': {\"image\": image}}\n        elif depth == 16:\n            image = np.full((256, 256), 0, dtype=np.uint16)\n            for i in range(0, 256):\n                for j in range(0, 256):\n                    image[i][j] = i*i\n\n            data = {'asset':  handle['assetName']['value'], 'timestamp': time_stamp, 'readings': {\"image\": image}}\n        elif depth == 24:\n            image = np.full((256, 256, 3), 0, dtype=np.uint8)\n            for i in range(0, 32):\n                for j in range(0, 256):\n                    for k in range(0,3):\n                        if k%3 == 0:\n                            image[i][j][k] =  i*8\n                        else:\n                            image[i][j][k] = 0\n\n            for i in range (32,64):\n                for j in range(0,256):\n                    for k in range(0,3):\n                        if  k%3 == 0:\n                            image[i][j][k] = 0         #R\n                        elif k%3 == 1:\n                            image[i][j][k] = (i%32)*8  #G\n                        else:\n                            image[i][j][k] = 0         #B\n\n            for i in range(64,96):\n                for j in range(0,256):    \n                    for k in 
range(0,3):\n                        if k%3 == 0:\n                            image[i][j][k] = 0            #R\n                        elif k%3 == 1:\n                            image[i][j][k] = 0            #G\n                        else:\n                            image[i][j][k] = (i%32)*8     #B\n\n\n            for i in range(96,128):\n                for  j in range(0,256):\n                    for k in range(0,3):\n                        image[i][j][k] = (i%32)*8\n\n            for i in range(128,256):\n                for j in range(0,256):\n                    for k in range(0,3):\n                        if k%3 == 0:\n                            image[i][j][k] = (i%128) * 4                  #R\n                        elif k%3 == 1:\n                            image[i][j][k] = 255 - ( ( i%128) * 4 )      #G\n                        else:\n                            image[i][j][k] = j                           #B\n\n            data = {'asset':  handle['assetName']['value'], 'timestamp': time_stamp, 'readings': {\"image\": image}}\n        else:\n            pass\n    except (Exception, RuntimeError) as ex:\n        _LOGGER.exception(\"Imagetest exception: {}\".format(str(ex)))\n        raise ex\n    else:\n        return data\n\n\ndef plugin_reconfigure(handle, new_config):\n    \"\"\" Reconfigures the plugin\n\n    Args:\n        handle: handle returned by the plugin initialisation call\n        new_config: JSON object representing the new configuration category for the category\n    Returns:\n        new_handle: new handle to be used in the future calls\n    \"\"\"\n    _LOGGER.info(\"Old config for imagetest plugin {} \\n new config {}\".format(handle, new_config))\n    new_handle = copy.deepcopy(new_config)\n    return new_handle\n\n\ndef plugin_shutdown(handle):\n    \"\"\" Shutdowns the plugin doing required cleanup, to be called prior to the South plugin service being shut down.\n\n    Args:\n        handle: handle returned by the plugin 
initialisation call\n    Returns:\n        plugin shutdown\n    \"\"\"\n    _LOGGER.info('imagetest plugin shut down.')\n"
  },
  {
    "path": "tests/system/python/fledge/plugins/south/imagetest/readme.rst",
    "content": "***********************\nFledge South ImageTest\n***********************\n\nThis directory contains a South plugin that generates an 8, 16 or 24 bits-per-pixel test image.\n"
  },
  {
    "path": "tests/system/python/helpers/utils.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom datetime import datetime\nimport os\n\nimport http.client\nimport json\n\n\"\"\"Utility methods\"\"\"\n\n\ndef serialize_stats_map(jdoc):\n    actual_stats_map = {}\n    for itm in jdoc:\n        actual_stats_map[itm['key']] = itm['value']\n    return actual_stats_map\n\n\ndef get_asset_tracking_details(fledge_url, event=None):\n    _connection = http.client.HTTPConnection(fledge_url)\n    uri = '/fledge/track'\n    if event:\n        uri += '?event={}'.format(event)\n    _connection.request(\"GET\", uri)\n    r = _connection.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    return jdoc\n\n\ndef post_request(fledge_url, post_url, payload):\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"POST\", post_url, json.dumps(payload))\n    res = conn.getresponse()\n    assert 200 == res.status, \"ERROR! POST {} request failed\".format(post_url)\n    res = res.read().decode()\n    r = json.loads(res)\n    return r\n\n\ndef get_request(fledge_url, get_url):\n    con = http.client.HTTPConnection(fledge_url)\n    con.request(\"GET\", get_url)\n    res = con.getresponse()\n    assert 200 == res.status, \"ERROR! GET {} request failed\".format(get_url)\n    r = json.loads(res.read().decode())\n    return r\n\n\ndef put_request(fledge_url, put_url, payload = None):\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"PUT\", put_url, json.dumps(payload))\n    res = conn.getresponse()\n    assert 200 == res.status, \"ERROR! PUT {} request failed\".format(put_url)\n    r = json.loads(res.read().decode())\n    return r\n\n\ndef delete_request(fledge_url, delete_url):\n    con = http.client.HTTPConnection(fledge_url)\n    con.request(\"DELETE\", delete_url)\n    res = con.getresponse()\n    assert 200 == res.status, \"ERROR! 
DELETE {} request failed\".format(delete_url)\n    r = json.loads(res.read().decode())\n    return r\n\n\ndef validate_date_format(datetime_str, format_str=None):\n    \"\"\"\n    Check if the given datetime string matches the specified format\n\n    Parameters:\n    - datetime_str: The datetime string to check.\n    - format_str: The format string that the datetime string should match. By default, format '%Y-%m-%d %H:%M:%S.%f'\n\n    Returns:\n    - True if the string matches the format, False otherwise.\n    \"\"\"\n    try:\n        if format_str is None:\n            format_str = \"%Y-%m-%d %H:%M:%S.%f\"\n        datetime.strptime(datetime_str, format_str)\n        return True\n    except ValueError:\n        return False\n\n\ndef detect_os():\n    \"\"\"\n    Detect Ubuntu or Raspberry Pi OS version\n    \n    Returns:\n    - str: 'Ubuntu X' where X is the major version (e.g., 'Ubuntu 20', 'Ubuntu 22')\n    - str: Codename for Raspberry Pi OS (e.g., 'Bookworm', 'Bullseye')\n    - str: 'Unknown' if neither Ubuntu nor Raspberry Pi OS is detected\n    \"\"\"\n    try:\n        if os.path.exists('/etc/os-release'):\n            with open('/etc/os-release', 'r') as f:\n                content = f.read()\n                \n                # Parse the content into a dictionary\n                os_info = {}\n                for line in content.split('\\n'):\n                    if '=' in line and not line.startswith('#'):\n                        key, value = line.split('=', 1)\n                        os_info[key] = value.strip().strip('\"')\n                \n                # Check for Ubuntu\n                if 'Ubuntu' in content:\n                    version_id = os_info.get('VERSION_ID', '')\n                    major_version = version_id.split('.')[0]\n                    return f'Ubuntu {major_version}'\n                \n                # Check for Raspberry Pi OS (Debian-based)\n                elif os_info.get('ID') == 'debian' and 
os_info.get('VERSION_CODENAME') in ['bullseye', 'bookworm']:\n                    codename = os_info.get('VERSION_CODENAME', '')\n                    return codename.capitalize()\n        \n        return 'Unknown'\n    \n    except Exception:\n        return 'Unknown'\n\n"
  },
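For quick reference, the default timestamp format accepted by `validate_date_format` in `utils.py` above can be exercised standalone. This is a minimal sketch mirroring that helper (not a substitute for importing `utils.py`):

```python
from datetime import datetime


def validate_date_format(datetime_str, format_str=None):
    """Return True if datetime_str matches format_str.

    Defaults to the microsecond-precision timestamp format used in utils.py.
    """
    if format_str is None:
        format_str = "%Y-%m-%d %H:%M:%S.%f"
    try:
        datetime.strptime(datetime_str, format_str)
        return True
    except ValueError:
        return False


print(validate_date_format("2024-01-15 10:30:00.123456"))  # True
print(validate_date_format("2024-01-15"))                  # False (no time component)
print(validate_date_format("2024-01-15", "%Y-%m-%d"))      # True with an explicit format
```

Note that the default format requires microseconds, so a timestamp such as `2024-01-15 10:30:00` (no `.%f` part) does not match unless an explicit format is passed.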
  {
    "path": "tests/system/python/iprpc/README.rst",
    "content": ".. FLedge system tests involving iprpc module.\n\n.. |br| raw:: html\n\n   <br />\n\n.. Links\n\n.. Links in new tabs\n\n.. |pytest docs| raw:: html\n\n   <a href=\"https://docs.pytest.org/en/latest/contents.html\" target=\"_blank\">pytest</a>\n\n.. |pytest decorators| raw:: html\n\n   <a href=\"https://docs.pytest.org/en/latest/mark.html\" target=\"_blank\">pytest</a>\n\n.. _iprpc: ..\\\\..\\\\..\\\\..\\\\python\\\\fledge\\\\common\\\\iprpc.py\n.. _numpy_south: ..\\\\plugins\\\\dummy\\\\iprpc\\\\south\\\\numpy_south\\\\numpy_south.py\n.. _numpy_filter: ..\\\\plugins\\\\dummy\\\\iprpc\\\\filter\\\\numpy_filter\\\\numpy_filter.py\n.. _numpy_iprpc_filter: ..\\\\plugins\\\\dummy\\\\iprpc\\\\filter\\\\numpy_iprpc_filter\\\\numpy_iprpc_filter.py\n\n.. =============================================\n\n*************************\nFledge IPRPC System Tests\n*************************\n\nFledge system tests involving the `iprpc`_ module are given below:\n\n1.  A test that uses numpy module in both south plugin and filter plugin. But fails to use it simultaneously because numpy cannot be re initialized in sub interpreter when already initialized in parent interpreter.\n2.  The same plugins are used except this time the filter plugin uses the iprpc facility to perform an operation specific to numpy.\n\nThere are three dummy plugins used in the tests.\n\n1. `numpy_south`_ - A plugin that ingests random values in Fledge using numpy 's random function.\n2. `numpy_filter`_ - A plugin which calculates root mean square on the values it gets from south service and creates an extra asset for rms_values. However it does not uses iprpc facility.\n3. `numpy_iprpc_filter`_ - Similar to numpy_filter but uses iprpc facility.\n\n**NOTE**\n\nThe south service may or may not use iprpc facility. However it is mandatory to use iprpc facility when\nboth south and filter plugins use modules like numpy.\n\nScenarios\n=========\n\nWhile testing following settings can be present.\n\n.. 
list-table:: Results\n   :widths: 25 25 50\n   :header-rows: 1\n\n   * - South Plugin\n     - Filter Plugin\n     - Expected Result\n   * - The plugin uses the numpy module\n     - The plugin uses numpy without the iprpc module\n     - The south service crashes\n   * - The plugin does not use the numpy module\n     - The plugin uses numpy without the iprpc module\n     - The service does not crash; however, a runtime error related to the numpy module is observed.\n   * - The plugin uses the numpy module\n     - The plugin does not use numpy.\n     - Works fine.\n   * - The plugin uses the numpy module.\n     - The plugin uses numpy with the iprpc module.\n     - Works fine.\n\n\nRunning Fledge System tests involving iprpc\n===========================================\n\n\nTest Execution\n--------------\n\n\nAfter installing the prerequisites\n::\n\n    python3 -m pytest -s -v test_iprpc.py --fledge-url localhost:8081\n"
  },
  {
    "path": "tests/system/python/iprpc/test_iprpc.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Tests for iprpc facility \"\"\"\n\n\n__author__ = \"Deepanshu Yadav\"\n__copyright__ = \"Copyright (c) 2020 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nimport os\nimport json\nimport http.client\nfrom urllib.parse import quote\nimport subprocess\nimport time\n\n\ndef start_south_service_for_filter(config, fledge_url='localhost:8081',\n                                   service_name='numpy_ingest', plugin_name='numpy_south', enabled='true'):\n    \"\"\"\n        Only starts the south service with configuation provided.\n       Args:\n           config: The configuation of south service. (json string)\n           fledge_url: The url of the fledge api.\n           service_name: The name of south service to be executed.\n           plugin_name: The name of the installed south plugin.\n           enabled : Switch to enable south service\n       Returns: Response of addiing south service request in json.\n    \"\"\"\n    # Create south service\n\n    data = {\"name\": service_name, \"type\": \"south\", \"plugin\": plugin_name,\n            \"enabled\": enabled, \"config\": config}\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"POST\", '/fledge/service', json.dumps(data))\n    r = conn.getresponse()\n    assert 200 == r.status, 'Could not start south service'\n    r = r.read().decode()\n    conn.close()\n    retval = json.loads(r)\n    print(retval)\n\n\ndef add_filter(fledge_url, filter_plugin, filter_name, filter_config, plugin_to_filter):\n    \"\"\"\n        Adds the filter with configuation provided.\n        Args:\n            fledge_url: The url of the fledge api.\n            filter_plugin: The name of the installed filter plugin.\n            filter_name: The name that user wants to give to the filter pipeline.\n            filter_config: The configuration of the filter (json 
string)\n            plugin_to_filter: The south service name to which the filter is to be applied.\n        Returns:\n            Response of filter addition request in json\n    \"\"\"\n    data = {\"name\": \"{}\".format(filter_name), \"plugin\": \"{}\".format(filter_plugin), \"filter_config\": filter_config}\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"POST\", '/fledge/filter', json.dumps(data))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert filter_name == jdoc[\"filter\"]\n\n    uri = \"{}/pipeline?allow_duplicates=true&append_filter=true\".format(quote(plugin_to_filter))\n    filters_in_pipeline = [filter_name]\n    conn.request(\"PUT\", '/fledge/filter/' + uri, json.dumps({\"pipeline\": filters_in_pipeline}))\n    r = conn.getresponse()\n    assert 200 == r.status\n    res = r.read().decode()\n    jdoc = json.loads(res)\n    # Assert the newly added filter exists in the request's response\n    assert filter_name in jdoc[\"result\"]\n    return jdoc\n\n\ndef get_service_status(fledge_url):\n    \"\"\"\n    Return the status of all services from Fledge.\n    Args:\n        fledge_url: The URL of Fledge.\n    Returns:\n        A JSON document that contains the status of services.\n    \"\"\"\n    _connection = http.client.HTTPConnection(fledge_url)\n    _connection.request(\"GET\", '/fledge/service')\n    r = _connection.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    return jdoc\n\n\ndef change_category(fledge_url, cat_name, config_item, value):\n    \"\"\"\n    Changes the value of a configuration item in the given category.\n    Args:\n        fledge_url: The url of the fledge api.\n        cat_name: The category name.\n        config_item: The configuration item to be changed.\n        value: The new value of the configuration item.\n    Returns: None. Prints the response of the category update request.\n    \"\"\"\n    conn = 
http.client.HTTPConnection(fledge_url)\n    body = {\"value\": str(value)}\n    json_data = json.dumps(body)\n    conn.request(\"PUT\", '/fledge/category/{}/{}'.format(cat_name, config_item), json_data)\n    r = conn.getresponse()\n    assert 200 == r.status, 'Could not change config item'\n    r = r.read().decode()\n    conn.close()\n    retval = json.loads(r)\n    print(retval)\n\n\ndef enable_schedule(fledge_url, sch_name):\n    \"\"\"\n    Enables a schedule.\n    Args:\n        fledge_url: The url of the fledge api.\n        sch_name: The name of the schedule to be enabled.\n    Returns: Response of enabling the schedule in json.\n    \"\"\"\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"PUT\", '/fledge/schedule/enable', json.dumps({\"schedule_name\": sch_name}))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    assert \"scheduleId\" in jdoc\n    return jdoc\n\n\ndef get_fledge_root():\n    \"\"\"\n        Args: None\n        Returns: The path stored in the FLEDGE_ROOT environment variable.\n    \"\"\"\n    FLEDGE_ROOT = os.getenv(\"FLEDGE_ROOT\")\n    # checking that FLEDGE_ROOT exists\n    assert os.path.exists(FLEDGE_ROOT) is True, \"The FLEDGE_ROOT does not exist\"\n    return FLEDGE_ROOT\n\n\ndef install_python_plugin(plugin_path, plugin_type):\n    \"\"\"\n        Install the python plugin into Fledge.\n        Args:\n            plugin_path: The source directory of the plugin to be copied.\n            plugin_type: The type of the plugin ('south' or 'filter').\n        Returns: None\n    \"\"\"\n    fledge_root_dir = get_fledge_root()\n    dest_dir = os.path.join(fledge_root_dir, 'python', 'fledge', 'plugins', plugin_type)\n    subprocess.run([\"cp -r {} {}\".format(plugin_path, dest_dir)], shell=True, check=True)\n\n\ndef test_reinitialization_of_numpy_without_iprpc(reset_and_start_fledge, fledge_url):\n    \"\"\"\n        Test that re-initializes numpy inside the south plugin as well as inside the filter 
plugin.\n        It verifies that the service becomes unresponsive, as numpy cannot be used inside\n        sub-interpreters when it is already in use in the parent interpreter.\n        Args:\n            fledge_url: The url of the fledge api.\n            reset_and_start_fledge: A fixture that resets and starts Fledge again.\n        Returns: None\n    \"\"\"\n\n    # installing required plugins\n    # These are dummy plugins written for reproducing the issue.\n    source_directory_for_south_plugin = os.path.join(get_fledge_root(), \"tests\", \"system\", \"python\", \"plugins\",\n                                                     \"dummy\", \"iprpc\", \"south\", \"numpy_south\")\n    source_directory_for_filter_plugin = os.path.join(get_fledge_root(), \"tests\", \"system\", \"python\", \"plugins\",\n                                                      \"dummy\", \"iprpc\", \"filter\", \"numpy_filter\")\n    install_python_plugin(source_directory_for_south_plugin, \"south\")\n    install_python_plugin(source_directory_for_filter_plugin, \"filter\")\n\n    # Start the south service\n    config = {\"assetName\": {\"value\": \"np_random\"},\n              \"totalValuesArray\": {\"value\": \"100\"}}\n    start_south_service_for_filter(config, service_name=\"numpy_ingest\", plugin_name='numpy_south',\n                                   enabled='true')\n\n    # start the filter\n    filter_cfg_numpy_filter = {\"enable\": \"true\", \"assetName\": \"np_random\",\n                               \"dataPointName\": \"random\", \"numSamples\": \"100\"}\n\n    add_filter(fledge_url, \"numpy_filter\", \"numpy_filter_ingest\", filter_cfg_numpy_filter, \"numpy_ingest\")\n\n    # enable schedule\n    enable_schedule(fledge_url, \"numpy_ingest\")\n\n    time.sleep(5)\n    cat_name = \"numpy_ingest\" + \"Advanced\"\n    config_item = \"readingsPerSec\"\n    change_category(fledge_url, cat_name, config_item, \"100\")\n\n    time.sleep(5)\n\n    # the service will become unresponsive\n    
status = get_service_status(fledge_url)\n    for service in status['services']:\n        if service['name'] == \"numpy_ingest\":\n            assert service['status'] == \"unresponsive\", \"Libraries like numpy cannot be re-initialized in sub-interpreters\"\n\n\ndef test_reinitialization_of_numpy_with_iprpc(reset_and_start_fledge, fledge_url):\n    \"\"\"\n        Test that uses the iprpc facility to run numpy functions inside the south plugin as well\n        as the filter plugin.\n        It verifies that the service is up and running with the required operations being performed.\n        Args:\n            fledge_url: The url of the fledge api.\n            reset_and_start_fledge: A fixture that resets and starts Fledge again.\n        Returns: None\n    \"\"\"\n\n    # installing required plugins\n    # These are dummy plugins written for reproducing the issue.\n    source_directory_for_south_plugin = os.path.join(get_fledge_root(), \"tests\", \"system\", \"python\", \"plugins\",\n                                                     \"dummy\", \"iprpc\", \"south\", \"numpy_south\")\n    source_directory_for_filter_plugin = os.path.join(get_fledge_root(), \"tests\", \"system\", \"python\", \"plugins\",\n                                                      \"dummy\", \"iprpc\", \"filter\", \"numpy_iprpc_filter\")\n    install_python_plugin(source_directory_for_south_plugin, \"south\")\n    install_python_plugin(source_directory_for_filter_plugin, \"filter\")\n\n    # Start the south service\n    config = {\"assetName\": {\"value\": \"np_random\"},\n              \"totalValuesArray\": {\"value\": \"100\"}}\n    start_south_service_for_filter(config, service_name=\"numpy_ingest\", plugin_name='numpy_south',\n                                   enabled='true')\n\n    # start the filter\n    filter_cfg_numpy_filter = {\"enable\": \"true\", \"assetName\": \"np_random\",\n                               
\"dataPointName\": \"random\", \"numSamples\": \"100\"}\n\n    add_filter(fledge_url, \"numpy_iprpc_filter\", \"numpy_filter_ingest\", filter_cfg_numpy_filter, \"numpy_ingest\")\n\n    # enable schedule\n    enable_schedule(fledge_url, \"numpy_ingest\")\n\n    time.sleep(5)\n    cat_name = \"numpy_ingest\" + \"Advanced\"\n    config_item = \"readingsPerSec\"\n    change_category(fledge_url, cat_name, config_item, \"100\")\n\n    time.sleep(5)\n\n    # the service will be running\n    status = get_service_status(fledge_url)\n    for index, service in enumerate(status['services']):\n        if status['services'][index]['name'] == \"numpy_ingest\":\n            service_to_verify = status['services'][index]\n            assert service_to_verify['status'] == \"running\", \"Libraries like numpy cannot be re initialized\"\n"
  },
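The tests above use fixed `time.sleep(5)` waits before checking service status, which can be brittle on slow machines. A small poll-until helper is a common alternative; this is an illustrative sketch only (the helper name and the commented usage with `get_service_status` are hypothetical, not part of the test suite):

```python
import time


def wait_for(predicate, timeout=30.0, interval=0.5):
    """Poll predicate() until it returns truthy or the timeout expires.

    Returns True on success, False on timeout; avoids fixed-length sleeps.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False


# Hypothetical usage against the helpers above:
# ok = wait_for(lambda: any(s['name'] == 'numpy_ingest' and s['status'] == 'running'
#                           for s in get_service_status(fledge_url)['services']))

# Self-contained demo: a condition that becomes true on the third poll
calls = {"n": 0}


def flaky():
    calls["n"] += 1
    return calls["n"] >= 3


print(wait_for(flaky, timeout=5.0, interval=0.01))  # True
```

Polling with a deadline keeps the happy path fast while still tolerating slow startups, at the cost of a slightly less deterministic test duration.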
  {
    "path": "tests/system/python/packages/README.rst",
    "content": "Script to automate Fledge Packages\n-----------------------------------\n\n1. Install git i.e. `sudo apt install git`\n\n2. Clone Fledge repo and `cd tests/system/python/packages/`\n\n3. Make sure FLEDGE_ROOT to be set where you have cloned Fledge\n\nExecute `python3 -m pytest -s -vv test_available_and_install_api.py` to run test once.\nDefault build-version is nightly, build-list set to p0, build-source-list set to false.\nSo you can pass an argument to override these values e.g. `--package-build-version=1.7.0RC` || (`--package-build-list=p0,p1` or `--package-build-list=all`) || `--package-build-source-list=true`\n\nTODO Items\n----------\n\n1. Script should run on CentOS 7 & RHEL 7 platforms\n2. We need to think for those plugins their discovery totally depend upon on the sensor attached like fledge-south-sensehat etc\n3. Coral specific plugins handling for other platforms in script\n4. Better reporting probably in csv format for the status of plugin\n"
  },
  {
    "path": "tests/system/python/packages/data/package_list.json",
    "content": "{\n\t\"p0\": [{\n\t\t\"south\": [\"modbus\", \"mqtt-readings\", \"sinusoid\"],\n\t\t\"north\": [\"httpc\"],\n\t\t\"filter\": [\"expression\", \"python35\"],\n\t\t\"notify\": [\"asset\", \"python35\"],\n\t\t\"rule\": [\"average\", \"simple-expression\"]\n\t}],\n\t\"p1\": [{\n\t\t\"south\": [\"dnp3\", \"rpienviro\", \"flirax8\", \"game\", \"mqtt-sparkplug\", \"opcua\", \"playback\", \"randomwalk\"],\n\t\t\"north\": [\"kafka\", \"opcua\"],\n\t\t\"filter\": [\"change\", \"delta\", \"fft\", \"flirvalidity\", \"metadata\", \"rms\"],\n\t\t\"notify\": [\"email\"],\n\t\t\"rule\": [\"outofbound\"]\n\t}],\n\t\"p2\": [{\n\t\t\"south\": [\"benchmark\", \"cc2650\", \"coap\", \"csv\", \"dht11V2\", \"expression\", \"http-south\", \"modbustcp\", \"openweathermap\", \"random\", \"systeminfo\", \"wind-turbine\"],\n\t\t\"north\": [\"http-north\", \"thingspeak\"],\n\t\t\"filter\": [\"asset\", \"rate\", \"scale\", \"threshold\"],\n\t\t\"notify\": [\"slack\", \"telegram\"],\n\t\t\"rule\": []\n\t}],\n\t\"p3\": [{\n\t\t\"south\": [\"am2315\", \"csv-async\", \"dht11\", \"ina219\", \"pt100\", \"roxtec\", \"sensehat\", \"sensorphone\"],\n\t\t\"north\": [],\n\t\t\"filter\": [\"python27\", \"scale-set\"],\n\t\t\"notify\": [\"alexa\", \"blynk\", \"hangouts\", \"ifttt\"],\n\t\t\"rule\": []\n\t}]\n}"
  },
  {
    "path": "tests/system/python/packages/data/readings35.py",
    "content": "\"\"\"\nFledge filtering for readings data\nusing Python 3.5\n\"\"\"\n\n__author__ = \"Yash Tatkondawar\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nimport sys\nimport json\n\n\"\"\"\nFilter configuration set by set_filter_config(config)\n\"\"\"\n\n\"\"\"\nfilter_config, global variable\n\"\"\"\nfilter_config = dict()\n\n\"\"\"\nSet the Filter configuration into filter_config (global variable)\n\nInput data is a dict with 'config' key and JSON string version wit data\n\nJSON string is loaded into a dict, set to global variable filter_config\n\nReturn True\n\"\"\"\ndef set_filter_config(configuration):\n    print(configuration)\n    global filter_config\n    filter_config = json.loads(configuration['config'])\n\n    return True\n\n\"\"\"\nMethod for filtering readings data\n\nInput is array of dicts\n[\n    {'reading': {'power_set1': '5980'}, 'asset_code': 'lab1'},\n    {'reading': {'power_set1': '211'}, 'asset_code': 'lab1'}\n]\n\nInput data:\n   readings: can be modified, dropped etc\nOutput is array of dict\n\"\"\"\ndef readings35(readings):\n    # Get list of asset code to filter\n    if ('asset_code' in filter_config):\n        asset_codes = filter_config['asset_code']\n    else:\n        asset_codes = []\n\n    for elem in readings:\n            print(\"IN=\" + str(elem))\n            reading = elem['reading']\n            # Apply some changes: multiply by 2 and divide by 10 to all datapoint values\n            for key in reading:\n                newVal = (reading[key] * 2)/10\n                reading[key] = newVal\n\n            print(\"OUT=\" + str(elem))\n    return readings\n"
  },
  {
    "path": "tests/system/python/packages/data/set_id.py",
    "content": "# add reading id inside datapoint\n\nimport json\n\n\ndef set_filter_config(configuration):\n    config = json.loads(configuration['config'])\n    return True\n\n\n# process one or more readings\ndef set_id(readings):\n    for elem in list(readings):\n        id = elem['id']\n        reading = elem['reading']\n        reading[b'id_datapoint'] = id\n    return readings\n\n\n"
  },
  {
    "path": "tests/system/python/packages/docs/test_authentication.rst",
    "content": "Test Authentication\n~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is specifically designed for basic testing of the `fledge-filter-python35` plugin. It incorporates the use of `fledge-south-http-south` for ingesting data into Fledge via Fogbench and `fledge-filter-expression` to verify Fledge's ability to handle multiple filters in a pipeline alongside `fledge-filter-python35`.\n\nThis test consists of eight classes, each containing multiple test case functions:\n\n1. **TestTLSDisabled**:\n    Following test case function check functionality of Fledge, when TLS is disabled and auth is not mandatory:\n    i. **test_on_default_port**: Verify if Fledge is properly running on the default port.\n    ii. **test_on_custom_port**: Verify that Fledge's default HTTP port is successfully changed to a custom port, the service is restarted, and that Fledge is functioning correctly by confirming it is running on the new custom port.\n    iii. **test_reset_to_default_port**: Verify that Fledge's HTTP port is changed back to the default port (8081), the service is restarted, and that Fledge is functioning correctly by confirming it is running on the default port.\n\n2. **TestAuthAnyWithoutTLS**: \n    Following test case function check functionality of Fledge, when TLS is disabled but auth is mandatory with any authentication method only:\n    i. **test_login_regular_user_using_password**: Checks if Fledge is allowing login of regular user (not admin user) via username and password.\n    ii. **test_logout_me_password_token**: Checks if Fledge is allowing logout of regular user (not admin user) via password token.\n    iii. **test_login_with_invalid_credentials_regular_user_using_password**: Checks if regular user is able to login into Fledge using invalid credentials.\n    iv. **test_login_username_admin_using_password**: Checks if Fledge is allowing admin u using username and password.\n    v. 
**test_login_with_invalid_credentials_admin_using_password**: Checks if admin user is able to login into Fledge using invalid credentials.\n    vi. **test_login_with_user_certificate**: Checks if regular user (not admin user) is able to login into Fledge using certificates.\n    vii. **test_login_with_admin_certificate**: Checks if admin user is able to login into Fledge using certificates.\n    viii. **test_login_with_custom_certificate**: Creates custom certificates for a regular user and verifies whether the user can log in to Fledge using those custom certificates.\n    ix. **test_ping_with_allow_ping_true**: Checks if `/fledge/ping` is giving response, when ping is allowed by Fledge.\n    x. **test_ingest_with_password_token**: Verify that the `http-south` plugin is successfully added as a south service using a password token, and check whether Fledge can ingest data via Fogbench into Fledge.\n    xi. **test_ingest_with_certificate_token**: Verify that the `http-south` plugin is successfully added as a south service using a certificate token, and check whether Fledge can ingest data via Fogbench into Fledge.\n    xii. **test_ping_with_allow_ping_false_with_password_token**: Checks if `/fledge/ping` is giving response, when ping is not allowed by Fledge and tried with regular user's credentials.\n    xiii. **test_ping_with_allow_ping_false_with_certificate_token**: Checks if `/fledge/ping` is giving response, when ping is not allowed by Fledge and tried with regular user's certificates.\n    xiv. **test_get_users_with_password_token**: Checks if different users (admin and regular users) are able to list the users of Fledge, using password token.\n    xv. **test_get_users_with_certificate_token**: Checks if different users (admin and regular users) are able to list the users of Fledge, using certificate token.\n    xvi. **test_get_roles_with_password_token**: Checks if the admin user is able to list the roles of Fledge, using password token.\n    xvii. 
**test_get_roles_with_certificate_token**: Checks if the admin user is able to list the roles of Fledge, using certificate token.\n    xviii. **test_create_user_with_password_token**: Checks if the admin user is able to create users of Fledge, using password token.\n    xix. **test_create_user_with_certificate_token**: Checks if the admin user is able to create users of Fledge, using certificate token.\n    xx. **test_login_of_newly_created_user**: Checks if the newly created user is able to login into Fledge using username and password.\n    xxi. **test_update_password_with_password_token**: Checks if Fledge is allowing regular user to update password using password token.\n    xxii. **test_update_password_with_certificate_token**: Checks if Fledge is allowing regular user to update password using certificate token.\n    xxiii. **test_login_with_updated_password**: Checks if regular user is able to login into Fledge using updated password.\n    xxiv. **test_reset_user_with_password_token**: Checks if admin user is able to reset/update password of regular user using password token.\n    xxv. **test_reset_user_with_certificate_token**: Checks if admin user is able to reset/update password of regular user using certificate token.\n    xxvi. **test_login_with_resetted_password**: Checks if regular user is able to login into Fledge using the reset password, i.e. the password updated by the admin user.\n    xxvii. **test_delete_user_with_password_token**: Checks if admin is able to delete any specific user from Fledge using the password token.\n    xxviii. **test_delete_user_with_certificate_token**: Checks if admin is able to delete any specific user from Fledge using the certificate token.\n    xxix. **test_login_of_deleted_user**: Checks if the deleted user is able to login into Fledge.\n    xxx. **test_logout_all_with_password_token**: Checks if admin is able to log out all the sessions of specific user of Fledge, using password token.\n    xxxi. 
**test_verify_logout**: Checks if specific user is logged out.\n    xxxii. **test_admin_actions_forbidden_for_regular_user_with_pwd_token**: Checks if regular user is not able to perform any actions that only an admin can, using password token.\n    xxxiii. **test_admin_actions_forbidden_for_regular_user_with_cert_token**: Checks if regular user is not able to perform any actions that only an admin can, using certificate token.\n\n3. **TestAuthPasswordWithoutTLS**:\n    The following test case functions check the functionality of Fledge when TLS is disabled but auth is mandatory with the password authentication method:\n    i. **test_login_username_regular_user**: Checks if Fledge is allowing login of regular user (not admin user) via username and password.\n    ii. **test_logout_me**: Checks if Fledge is allowing logout of regular user (not admin user) via password token.\n    iii. **test_login_with_invalid_credentials_regular_user**: Checks if regular user is able to login into Fledge using invalid credentials.\n    iv. **test_login_username_admin**: Checks if Fledge is allowing login of admin user using username and password.\n    v. **test_login_with_invalid_credentials_admin**: Checks if admin user is able to login into Fledge using invalid credentials.\n    vi. **test_login_with_admin_certificate**: Checks that the admin user should not be able to login into Fledge using certificates.\n    vii. **test_ping_with_allow_ping_true**: Checks if `/fledge/ping` is giving response, when ping is allowed by Fledge.\n    viii. **test_ingest**: Verify that the `http-south` plugin is successfully added as a south service using a password token, and confirm whether Fledge is able to ingest data via Fogbench into the system.\n    ix. **test_ping_with_allow_ping_false**: Checks if `/fledge/ping` is giving response, when ping is not allowed by Fledge and tried with regular user's credentials.\n    x. 
**test_get_users**: Checks if different users (admin and regular users) are able to list the users of Fledge, using password token.\n    xi. **test_get_roles**: Checks if the admin user is able to list the roles of Fledge, using password token.\n    xii. **test_create_user**: Checks if the admin user is able to create users of Fledge, using password token.\n    xiii. **test_login_of_newly_created_user**: Checks if the newly created user is able to login into Fledge using username and password.\n    xiv. **test_update_password**: Checks if Fledge is allowing regular user to update password using password token.\n    xv. **test_login_with_updated_password**: Checks if regular user is able to login into Fledge using updated password.\n    xvi. **test_reset_user**: Checks if admin user is able to reset/update password of regular user using password token.\n    xvii. **test_login_with_resetted_password**: Checks if regular user is able to login into Fledge using the reset password, i.e. the password updated by the admin user.\n    xviii. **test_delete_user**: Checks if admin is able to delete any specific user from Fledge using the password token.\n    xix. **test_login_of_deleted_user**: Checks if the deleted user is able to login into Fledge.\n    xx. **test_logout_all**: Checks if admin is able to log out all the sessions of specific user of Fledge, using password token.\n    xxi. **test_verify_logout**: Checks if specific user is logged out.\n    xxii. **test_admin_actions_forbidden_for_regular_user**: Checks if regular user is not able to perform any actions that only an admin can, using password token.\n\n4. **TestAuthCertificateWithoutTLS**:\n    The following test case functions check the functionality of Fledge when TLS is disabled but auth is mandatory with the certificate authentication method only:\n    i. **test_login_with_user_certificate**: Checks if regular user (not admin user) is able to login into Fledge using certificates.\n    ii. 
**test_login_with_admin_certificate**: Checks if admin user is able to login into Fledge using certificates.\n    iii. **test_login_with_custom_certificate**: Creates custom certificates for a regular user and verifies whether the user can log in to Fledge using those custom certificates.\n    iv. **test_login_with_invalid_credentials**: Checks if regular user is able to login into Fledge using invalid certificate.\n    v. **test_login_username_admin**: Checks that Fledge should not allow the admin user to login using username and password.\n    vi. **test_ping_with_allow_ping_true**: Checks if `/fledge/ping` is giving response, when ping is allowed by Fledge.\n    vii. **test_ingest**: Verify that the `http-south` plugin is successfully added as a south service using a certificate token, and confirm whether Fledge is able to ingest data via Fogbench into the system.\n    viii. **test_ping_with_allow_ping_false**: Checks if `/fledge/ping` is giving response, when ping is not allowed by Fledge and tried with admin user's certificates.\n    ix. **test_get_users**: Checks if different users (admin and regular users) are able to list the users of Fledge, using certificate token.\n    x. **test_get_roles**: Checks if the admin user is able to list the roles of Fledge, using certificate token.\n    xi. **test_create_user**: Checks if the admin user is able to create users of Fledge, using certificate token.\n    xii. **test_update_password**: Checks if Fledge is allowing regular user to update password using certificate token.\n    xiii. **test_reset_user**: Checks if admin user is able to reset/update password of regular user using certificate token.\n    xiv. **test_delete_user**: Checks if admin is able to delete any specific user from Fledge using the certificate token.\n    xv. **test_logout_all**: Checks if admin is able to log out all the sessions of specific user of Fledge, using certificate token.\n    xvi. 
**test_verify_logout**: Checks if the specific user is logged out.\n    xvii. **test_admin_actions_forbidden_for_regular_user**: Checks that a regular user is not able to perform any actions that only an admin can, using a certificate token.\n\n5. **TestTLSEnabled**:\n    The following test case functions check the functionality of Fledge when TLS is enabled and authentication is not mandatory:\n    i. **test_on_default_port**: Verifies that Fledge is running properly on the default port.\n    ii. **test_on_custom_port**: Verifies that Fledge's default HTTP port can be changed to a custom port, restarts the service, and checks if Fledge is running correctly on the custom port.\n\n6. **TestAuthAnyWithTLS**:\n    The following test case functions check the functionality of Fledge when TLS is enabled and authentication is mandatory, with any authentication method:\n    i. **test_login_regular_user_using_password**: Checks if Fledge allows the login of a regular user (not the admin user) via a username and password.\n    ii. **test_logout_me_password_token**: Checks if Fledge allows the logout of a regular user (not the admin user) via a password token.\n    iii. **test_login_with_invalid_credentials_regular_user_using_password**: Checks that a regular user is not able to log in to Fledge using invalid credentials.\n    iv. **test_login_username_admin_using_password**: Checks if Fledge allows the admin user to log in using a username and password.\n    v. **test_login_with_invalid_credentials_admin_using_password**: Checks that the admin user is not able to log in to Fledge using invalid credentials.\n    vi. **test_login_with_user_certificate**: Checks if a regular user (not the admin user) is able to log in to Fledge using certificates.\n    vii. **test_login_with_admin_certificate**: Checks if the admin user is able to log in to Fledge using certificates.\n    viii. **test_ping_with_allow_ping_false**: Checks if `/fledge/ping` gives a response when ping is not allowed by Fledge and is tried with a regular user's credentials.\n    ix. 
**test_login_with_custom_certificate**: Creates custom certificates for a regular user and verifies whether the user can log in to Fledge using those custom certificates.\n    x. **test_ping_with_allow_ping_true**: Checks if `/fledge/ping` gives a response when ping is allowed by Fledge.\n    xi. **test_ingest_with_password_token**: Verifies that the `http-south` plugin is successfully added as a south service using a password token, and checks whether Fledge can ingest data via Fogbench into Fledge.\n    xii. **test_ingest_with_certificate_token**: Verifies that the `http-south` plugin is successfully added as a south service using a certificate token, and checks whether Fledge can ingest data via Fogbench into Fledge.\n    xiii. **test_ping_with_allow_ping_false_with_password_token**: Checks if `/fledge/ping` gives a response when ping is not allowed by Fledge and is tried with a regular user's credentials.\n    xiv. **test_ping_with_allow_ping_false_with_certificate_token**: Checks if `/fledge/ping` gives a response when ping is not allowed by Fledge and is tried with a regular user's certificates.\n    xv. **test_get_users_with_password_token**: Checks if different users (admin and regular users) are able to list the users of Fledge, using a password token.\n    xvi. **test_get_users_with_certificate_token**: Checks if different users (admin and regular users) are able to list the users of Fledge, using a certificate token.\n    xvii. **test_get_roles_with_certificate_token**: Checks if the admin user is able to list the roles of Fledge, using a certificate token.\n    xviii. **test_create_user_with_password_token**: Checks if the admin user is able to create users of Fledge, using a password token.\n    xix. **test_create_user_with_certificate_token**: Checks if the admin user is able to create users of Fledge, using a certificate token.\n    xx. **test_login_of_newly_created_user**: Checks if the newly created user is able to log in to Fledge using a username and password.\n    xxi. 
**test_update_password_with_password_token**: Checks if Fledge allows a regular user to update their password using a password token.\n    xxii. **test_update_password_with_certificate_token**: Checks if Fledge allows a regular user to update their password using a certificate token.\n    xxiii. **test_login_with_updated_password**: Checks if the regular user is able to log in to Fledge using the updated password.\n    xxiv. **test_reset_user_with_password_token**: Checks if the admin user is able to reset/update the password of a regular user using a password token.\n    xxv. **test_reset_user_with_certificate_token**: Checks if the admin user is able to reset/update the password of a regular user using a certificate token.\n    xxvi. **test_login_with_resetted_password**: Checks if the regular user is able to log in to Fledge using the reset password, i.e. the password updated by the admin user.\n    xxvii. **test_delete_user_with_password_token**: Checks if the admin is able to delete any specific user from Fledge using the password token.\n    xxviii. **test_delete_user_with_certificate_token**: Checks if the admin is able to delete any specific user from Fledge using the certificate token.\n    xxix. **test_login_of_deleted_user**: Checks if the deleted user is able to log in to Fledge.\n    xxx. **test_logout_all_with_password_token**: Checks if the admin is able to log out all the sessions of a specific user of Fledge, using a password token.\n    xxxi. **test_verify_logout**: Checks if the specific user is logged out.\n    xxxii. **test_admin_actions_forbidden_for_regular_user_with_pwd_token**: Checks that a regular user is not able to perform any actions that only an admin can, using a password token.\n    xxxiii. **test_admin_actions_forbidden_for_regular_user_with_cert_token**: Checks that a regular user is not able to perform any actions that only an admin can, using a certificate token.\n\n7. 
**TestAuthPasswordWithTLS**:\n    The following test case functions check the functionality of Fledge when TLS is enabled and authentication is mandatory, with the password authentication method:\n    i. **test_login_username_regular_user**: Checks if Fledge allows the login of a regular user (not the admin user) via a username and password.\n    ii. **test_logout_me**: Checks if Fledge allows the logout of a regular user (not the admin user) via a password token.\n    iii. **test_login_with_invalid_credentials_regular_user**: Checks that a regular user is not able to log in to Fledge using invalid credentials.\n    iv. **test_login_username_admin**: Checks if Fledge allows the admin user to log in using a username and password.\n    v. **test_login_with_invalid_credentials_admin**: Checks that the admin user is not able to log in to Fledge using invalid credentials.\n    vi. **test_login_with_admin_certificate**: Checks that the admin user is not able to log in to Fledge using certificates.\n    vii. **test_ping_with_allow_ping_true**: Checks if `/fledge/ping` gives a response when ping is allowed by Fledge.\n    viii. **test_ingest**: Verifies that the `http-south` plugin is added as a south service using a password token, and checks if Fledge is able to ingest data via Fogbench into the system.\n    ix. **test_ping_with_allow_ping_false**: Checks if `/fledge/ping` gives a response when ping is not allowed by Fledge and is tried with a regular user's credentials.\n    x. **test_get_users**: Checks if different users (admin and regular users) are able to list the users of Fledge, using a password token.\n    xi. **test_get_roles**: Checks if the admin user is able to list the roles of Fledge, using a password token.\n    xii. **test_create_user**: Checks if the admin user is able to create users of Fledge, using a password token.\n    xiii. **test_login_of_newly_created_user**: Checks if the newly created user is able to log in to Fledge using a username and password.\n    xiv. 
**test_update_password**: Checks if Fledge allows a regular user to update their password using a password token.\n    xv. **test_login_with_updated_password**: Checks if the regular user is able to log in to Fledge using the updated password.\n    xvi. **test_reset_user**: Checks if the admin user is able to reset/update the password of a regular user using a password token.\n    xvii. **test_login_with_resetted_password**: Checks if the regular user is able to log in to Fledge using the reset password, i.e. the password updated by the admin user.\n    xviii. **test_delete_user**: Checks if the admin is able to delete any specific user from Fledge using the password token.\n    xix. **test_login_of_deleted_user**: Checks if the deleted user is able to log in to Fledge.\n    xx. **test_logout_all**: Checks if the admin is able to log out all the sessions of a specific user of Fledge, using a password token.\n    xxi. **test_verify_logout**: Checks if the specific user is logged out.\n    xxii. **test_admin_actions_forbidden_for_regular_user**: Checks that a regular user is not able to perform any actions that only an admin can, using a password token.\n\n8. **TestAuthCertificateWithTLS**:\n    The following test case functions check the functionality of Fledge when TLS is enabled and authentication is mandatory, with the certificate authentication method only:\n    i. **test_login_with_user_certificate**: Checks if a regular user (not the admin user) is able to log in to Fledge using certificates.\n    ii. **test_login_with_admin_certificate**: Checks if the admin user is able to log in to Fledge using certificates.\n    iii. **test_login_with_custom_certificate**: Creates custom certificates for a regular user and verifies whether the user can log in to Fledge using those custom certificates.\n    iv. **test_login_with_invalid_credentials**: Checks that a regular user is not able to log in to Fledge using an invalid certificate.\n    v. **test_login_username_admin**: Checks that Fledge does not allow the admin user to log in using a username and password.\n    vi. 
**test_ping_with_allow_ping_true**: Checks if `/fledge/ping` gives a response when ping is allowed by Fledge.\n    vii. **test_ingest**: Verifies that the `http-south` plugin is added as a south service using a certificate token, and checks if Fledge is able to ingest data via Fogbench into the system.\n    viii. **test_ping_with_allow_ping_false**: Checks if `/fledge/ping` gives a response when ping is not allowed by Fledge and is tried with the admin user's certificates.\n    ix. **test_get_users**: Checks if different users (admin and regular users) are able to list the users of Fledge, using a certificate token.\n    x. **test_get_roles**: Checks if the admin user is able to list the roles of Fledge, using a certificate token.\n    xi. **test_create_user**: Checks if the admin user is able to create users of Fledge, using a certificate token.\n    xii. **test_update_password**: Checks if Fledge allows a regular user to update their password using a certificate token.\n    xiii. **test_reset_user**: Checks if the admin user is able to reset/update the password of a regular user using a certificate token.\n    xiv. **test_delete_user**: Checks if the admin is able to delete any specific user from Fledge using the certificate token.\n    xv. **test_logout_all**: Checks if the admin is able to log out all the sessions of a specific user of Fledge, using a certificate token.\n    xvi. **test_verify_logout**: Checks if the specific user is logged out.\n    xvii. **test_admin_actions_forbidden_for_regular_user**: Checks that a regular user is not able to perform any actions that only an admin can, using a certificate token.\n\n\nPrerequisite\n++++++++++++\n\nInstall the prerequisites to run a test:\n\n.. code-block:: console\n\n   $ cd fledge/python\n   $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run the test are:\n\n.. 
code-block:: console\n\n    --package-build-version=PACKAGE_BUILD_VERSION\n                        Package build version for http://archives.fledge-iot.org/\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python/\n  $ python3 -m pytest -s -vv packages/test_authentication.py --package-build-version=\"<PACKAGE_BUILD_VERSION>\" --wait-time=\"<WAIT_TIME>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/packages/docs/test_filters.rst",
"content": "Test Filters\n~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is specifically designed for basic testing of the `fledge-filter-python35` plugin. It incorporates the use of `fledge-south-http-south` for ingesting data into Fledge via Fogbench and `fledge-filter-expression` to verify Fledge's ability to handle multiple filters in a pipeline alongside `fledge-filter-python35`.\n\nThis test contains the *TestPython35* class, which contains multiple test case functions:\n\n1. **test_filter_python35_with_uploaded_script**: Verifies whether the Python35 filter creates the required assets and datapoints after a script is uploaded in enabled mode. This test also checks the stability of Fledge.\n2. **test_filter_python35_with_updated_content**: Verifies whether the reconfiguration of the Python35 filter works correctly by updating the script content and ensuring it creates the required assets and datapoints.\n3. **test_filter_python35_disable_enable**: Checks whether Fledge remains stable after disabling and enabling the Python35 filter, and verifies that the required assets and datapoints are created.\n4. **test_filter_python35_expression**: Checks whether Fledge can handle and remain stable when the expression filter is added to the http-south plugin's south service, followed by the Python35 filter. This includes testing the behavior when the south service is disabled and then re-enabled.\n5. **test_delete_filter_python35**: Verifies whether Fledge can successfully delete the Python35 filter from the http-south plugin's south service.\n6. **test_filter_python35_by_enabling_disabling_south**: Checks whether Fledge can disable the south service of the fledge-south-http-south plugin, then add the Python35 filter, and re-enable the south service, ensuring that Fledge creates the required assets and datapoints.\n7. 
**test_delete_south_service**: Verifies the deletion of the `http-south` plugin's south service, which includes the Python35 filter, and checks whether the Python35 script is also removed from the `$FLEDGE_DATA` directory.\n\n\nPrerequisite\n++++++++++++\n\nInstall the prerequisites to run a test:\n\n.. code-block:: console\n\n   $ cd fledge/python\n   $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run the test are:\n\n.. code-block:: console\n\n    --package-build-version=PACKAGE_BUILD_VERSION\n                        Package build version for http://archives.fledge-iot.org/\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python/\n  $ python3 -m pytest -s -vv packages/test_filters.py --package-build-version=\"<PACKAGE_BUILD_VERSION>\" --wait-time=\"<WAIT_TIME>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/packages/docs/test_multiple_assets.rst",
"content": "Test Multiple Assets\n~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed to validate Fledge's ability to handle the creation of a large number of assets using the fledge-south-benchmark plugin while ensuring the stability of Fledge and its components.\n\nThis test consists of the *TestMultiAssets* class, which contains multiple test case functions:\n\n1. **test_multiple_assets_with_restart**: Verifies that Fledge can create multiple fledge-south-benchmark services with a large number of assets, ensures the assets are correctly created, and checks the stability of Fledge after a restart.\n2. **test_add_multiple_assets_before_after_restart**: Ensures Fledge's ability to create a large number of assets both before and after restarting, using multiple fledge-south-benchmark services.\n3. **test_multiple_assets_with_reconfig**: Tests the creation of a large number of assets through the reconfiguration of fledge-south-benchmark services and confirms Fledge's stability during and after the reconfiguration.\n\n\nPrerequisite\n++++++++++++\n\nInstall the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run the test are:\n\n.. code-block:: console\n\n    --package-build-version=PACKAGE_BUILD_VERSION\n                        Package build version for http://archives.fledge-iot.org/\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --num-assets=NUM_OF_ASSETS\n                        Total No. 
of Assets to be created\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python/\n  $ python3 -m pytest -s -vv packages/test_multiple_assets.py --package-build-version=\"<PACKAGE_BUILD_VERSION>\" --pi-host=\"<PI_SYSTEM_HOST>\" \\\n      --pi-port=\"<PI_SYSTEM_PORT>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" --pi-passwd=\"<PI_SYSTEM_PWD>\"  --pi-db=\"<PI_SYSTEM_DB>\" --num-assets=\"<NUM_OF_ASSETS>\" \\\n      --wait-time=\"<WAIT_TIME>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/packages/docs/test_north_azure.rst",
"content": "Test North Azure\n~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is specifically designed to verify the functionality and stability of the `fledge-north-azure` plugin. It uses the `fledge-south-systeminfo` plugin for ingesting data into Fledge and applies the `fledge-filter-expression` filter on the north side to validate that the data is sent to Azure IoT Hub successfully, while ensuring Fledge remains stable when the service or task of fledge-north-azure sends data.\n\nThis test consists of four classes, each of which contains multiple test case functions:\n\n1. **TestNorthAzureIoTHubDevicePlugin**: \n    a. **test_send**: Verifies that data is successfully ingested into Fledge and sent to Azure IoT Hub.\n    b. **test_mqtt_over_websocket_reconfig**: Enables MQTT over WebSocket, then verifies whether data ingested into Fledge is sent to Azure IoT Hub.\n    c. **test_disable_enable**: Verifies that enabling and disabling the south and north services periodically does not affect data transmission to Azure IoT Hub.\n    d. **test_send_with_filter**: Verifies the impact of enabling and disabling fledge-filter-expression on the north service while ensuring data is still sent to Azure IoT Hub.\n\n2. **TestNorthAzureIoTHubDevicePluginTask**:\n    a. **test_send_as_a_task**: Creates the south and north sides as tasks and checks if data is ingested into Fledge and sent to Azure IoT Hub.\n    b. **test_mqtt_over_websocket_reconfig_task**: Verifies that data is sent to Azure IoT Hub with MQTT over WebSocket enabled, when the south and north sides are configured as tasks.\n    c. **test_disable_enable_task**: Verifies that enabling and disabling south and north services configured as tasks does not impact data transmission to Azure IoT Hub.\n    d. **test_send_with_filter_task**: Ensures that applying and toggling filters on the north task does not affect data being sent to Azure IoT Hub.\n\n3. **TestNorthAzureIoTHubDevicePluginInvalidConfig**:\n    a. 
**test_invalid_connstr**: Verifies Fledge's behavior when an invalid connection string is configured for the north Azure plugin.\n    b. **test_invalid_connstr_sharedkey**: Verifies Fledge's behavior when the shared key in the connection string for the north Azure plugin is invalid.\n\n4. **TestNorthAzureIoTHubDevicePluginLongRun**:\n    a. **test_send_long_run**: Verifies that data is continuously sent to Azure IoT Hub over a long period, based on the parameters passed.\n\n\nPrerequisite\n++++++++++++\n\nInstall the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run the test are:\n\n.. code-block:: console\n\n    --package-build-version=PACKAGE_BUILD_VERSION\n                        Package build version for http://archives.fledge-iot.org/\n    --azure-host=AZURE_HOST\n                        Azure-IoT Host Name\n    --azure-device=AZURE_DEVICE\n                        Azure-IoT Device ID\n    --azure-key=AZURE_KEY\n                        Azure-IoT SharedAccess key\n    --azure-storage-account-url=AZURE_STORAGE_URL\n                        Azure Storage Account URL\n    --azure-storage-account-key=AZURE_STORAGE_KEY\n                        Azure Storage Account Access Key\n    --azure-storage-container=AZURE_STORAGE_CONTAINER\n                        Container Name in Azure where data is stored\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. 
code-block:: console\n\n  $ cd fledge/tests/system/python\n  $ python3 -m pytest -s -vv packages/test_north_azure.py --package-build-version=\"<PACKAGE_BUILD_VERSION>\" --azure-host=\"<AZURE_HOST>\" \\\n        --azure-device=\"<AZURE_DEVICE>\" --azure-key=\"<AZURE_KEY>\" --azure-storage-account-url=\"<AZURE_STORAGE_URL>\" --azure-storage-account-key=\"<AZURE_STORAGE_KEY>\" \\\n        --azure-storage-container=\"<AZURE_STORAGE_CONTAINER>\" --wait-time=\"<WAIT_TIME>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/packages/docs/test_north_pi_webapi_nw_throttle.rst",
"content": "PI Web API Network Throttle Test\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test verifies the functionality of Fledge when ingesting data using the fledge-south-sinusoid plugin and sending it to the PI Server via the fledge-north-OMF plugin under degraded network conditions.\n\nThis test consists of the *TestPackagesSinusoid_PI_WebAPI* class, which contains a single test case function:\n\n1. **test_omf_task**: Verifies that data is ingested into Fledge using the fledge-south-sinusoid plugin and sent to the PI Server using the fledge-north-OMF plugin, while simulating a degraded network scenario.\n\n\nPrerequisite\n++++++++++++\n\nInstall the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run the test are:\n\n.. code-block:: console\n\n    --package-build-version=PACKAGE_BUILD_VERSION\n                        Package build version for http://archives.fledge-iot.org/\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --throttled-network-config=THROTTLED_NETWORK_CFG\n                        Give a config such as '{'rate_limit': '100','packet_delay': '50','interface': 'eth0'}' \n                        for causing a delay of 50 milliseconds and a rate restriction of 100 kbps on interface eth0.\n    --south-service-wait-time=SOUTH_SVC_WAIT_TIME\n                        The time in seconds before which the south service should keep on\n                        sending data. 
After this time, the south service will shut down.\n    --north-catch-up-time=NORTH_CATCHUP_TIME\n                        Time (in seconds) for which the north task/service will keep on running\n                        after switching off the south service.\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python\n  $ export FLEDGE_ROOT=/usr/local/fledge\n  $ export PYTHONPATH=$FLEDGE_ROOT/python \n  $ python3 -m pytest -s -v test_north_pi_webapi_nw_throttle.py --package-build-version=\"<PACKAGE_BUILD_VERSION>\" --pi-host=\"<PI_SYSTEM_HOST>\" \\\n      --pi-port=\"<PI_SYSTEM_PORT>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-db=\"<PI_SYSTEM_DB>\" \\\n      --throttled-network-config=\"<THROTTLED_NETWORK_CFG>\" --south-service-wait-time=\"<SOUTH_SVC_WAIT_TIME>\" --north-catch-up-time=\"<NORTH_CATCHUP_TIME>\" \\\n      --junit-xml=\"<JUNIT_XML>\" \n"
  },
  {
    "path": "tests/system/python/packages/docs/test_omf_naming_scheme.rst",
"content": "Test OMF Naming Scheme\n~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed to verify the naming functionality of the north task of the `fledge-north-OMF` plugin. It incorporates the use of the `fledge-south-coap` plugin to ingest data into Fledge and sends it to the PI Server using the north task of `fledge-north-OMF`.\n\nThis test consists of the `TestOMFNamingScheme` class, which contains multiple test case functions:\n\n1. **test_omf_with_concise_naming**: Ingests data into Fledge via the fledge-south-coap plugin and checks if the north task can send it to the PI Server when the naming scheme is \"concise\".\n2. **test_omf_with_type_suffix_naming**: Ingests data into Fledge via the fledge-south-coap plugin and checks if the north task can send it to the PI Server when the naming scheme is \"Use Type Suffix\".\n3. **test_omf_with_attribute_hash_naming**: Ingests data into Fledge via the fledge-south-coap plugin and checks if the north task can send it to the PI Server when the naming scheme is \"Use Attribute Hash\".\n4. **test_omf_with_backward_compatibility_naming**: Ingests data into Fledge via the fledge-south-coap plugin and checks if the north task can send it to the PI Server when the naming scheme is \"backward compatible\".\n\n\nPrerequisite\n++++++++++++\n\nInstall the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run the test are:\n\n.. 
code-block:: console\n\n    --package-build-version=PACKAGE_BUILD_VERSION\n                        Package build version for http://archives.fledge-iot.org/\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python\n  $ python3 -m pytest -s -vv packages/test_omf_naming_scheme.py --package-build-version=\"<PACKAGE_BUILD_VERSION>\" --pi-host=\"<PI_SYSTEM_HOST>\" \\\n      --pi-port=\"<PI_SYSTEM_PORT>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-db=\"<PI_SYSTEM_DB>\"  --wait-time=\"<WAIT_TIME>\" \\\n      --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/packages/docs/test_omf_north_service.rst",
"content": "Test OMF North Service\n~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed to verify the basic functionality of the `fledge-north-OMF` plugin's north service, incorporating the use of `fledge-south-sinusoid` for ingesting data into Fledge, and the `fledge-filter-scale` & `fledge-filter-metadata` filters to validate the handling of multiple filters at the north service.\n\nThis test consists of two classes, each of which contains multiple test case functions:\n\n1. **TestOMFNorthService**: \n    a. **test_omf_service_with_restart**: Verify that the north service of OMF is able to send data to PI, ingested by the fledge-south-sinusoid service in Fledge, both before and after restarting Fledge.\n    b. **test_omf_service_with_enable_disable**: Verify that the north service of OMF is able to send data to PI, ingested by the fledge-south-sinusoid service in Fledge, when the service is disabled and then re-enabled.\n    c. **test_omf_service_with_delete_add**: Verify that the north service of OMF is able to send data to PI, ingested by the fledge-south-sinusoid service in Fledge, when its service is deleted and then re-added.\n    d. **test_omf_service_with_reconfig**: Verify that the north service of OMF is able to send data to PI, ingested by the fledge-south-sinusoid service in Fledge, after the service is reconfigured with invalid credentials.\n\n2. **TestOMFNorthServicewithFilters**:\n    a. **test_omf_service_with_filter**: Verify that the north service of OMF is able to send data to PI, ingested by the fledge-south-sinusoid service in Fledge, when a fledge-filter-scale filter is added.\n    b. **test_omf_service_with_disable_enable_filter**: Verify that the north service of OMF is able to send data to PI, ingested by the fledge-south-sinusoid service in Fledge, when the fledge-filter-scale filter is disabled and then re-enabled.\n    c. 
**test_omf_service_with_filter_reconfig**: Verify that the north service of OMF is able to send data to PI, ingested by the fledge-south-sinusoid service in Fledge, when the fledge-filter-scale filter is reconfigured.\n    d. **test_omf_service_with_delete_add**: Verify that the north service of OMF, with the fledge-filter-scale filter, is able to send data to PI, ingested by the fledge-south-sinusoid service in Fledge, when its service is deleted and then re-added.\n    e. **test_omf_service_with_delete_add_filter**: Verify that the north service of OMF, with the fledge-filter-scale filter, is able to send data to PI, ingested by the fledge-south-sinusoid service in Fledge, when the filter is deleted and then re-added.\n    f. **test_omf_service_with_filter_reorder**: Verify that the north service of OMF, with the fledge-filter-scale and fledge-filter-metadata filters, is able to send data to PI, ingested by the fledge-south-sinusoid service in Fledge, when the filters are reordered in the pipeline.\n\n\nPrerequisite\n++++++++++++\n\nInstall the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run the test are:\n\n.. 
code-block:: console\n\n    --package-build-version=PACKAGE_BUILD_VERSION\n                        Package build version for http://archives.fledge-iot.org/\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --wait-fix=WAIT_FIX\n                        Extra wait time (in seconds) required for process to run\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python\n  $ python3 -m pytest -s -vv packages/test_omf_north_service.py --package-build-version=\"<PACKAGE_BUILD_VERSION>\" --pi-host=\"<PI_SYSTEM_HOST>\" \\\n      --pi-port=\"<PI_SYSTEM_PORT>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-db=\"<PI_SYSTEM_DB>\"  --wait-time=\"<WAIT_TIME>\" \\\n      --wait-fix=\"<WAIT_FIX>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/packages/docs/test_pi_webapi.rst",
    "content": "Test PIWebAPI\n~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed to verify that data ingested into Fledge using the `fledge-south-coap` plugin is successfully sent to the PI Server in complex (legacy) data type format using the `fledge-north-OMF` plugin.\n\nThis test consists of the *TestPackagesCoAP_PI_WebAPI* class, which contains the following test case functions:\n\n1. **test_omf_task**: Verifies that data is ingested into Fledge using the fledge-south-coap service and sent to the PI Server via the fledge-north-OMF task. It checks that the data sent and received counts match, the required asset is created, and that the data sent from Fledge through the OMF plugin successfully reaches the PI Server.\n2. **test_omf_task_with_reconfig**: Verifies that data is ingested into Fledge using the fledge-south-coap service and sent to the PI Server via the fledge-north-OMF task. Then, it reconfigures the OMF task with an invalid user ID to verify whether data transmission to the PI Server stops.\n\n\nPrerequisite\n++++++++++++\n\nInstall the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run,\n\n.. 
code-block:: console\n\n    --package-build-version=PACKAGE_BUILD_VERSION\n                        Package build version for http://archives.fledge-iot.org/\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python\n  $ python3 -m pytest -s -vv packages/test_pi_webapi.py --package-build-version=\"<PACKAGE_BUILD_VERSION>\" --pi-host=\"<PI_SYSTEM_HOST>\" --pi-port=\"<PI_SYSTEM_PORT>\" \\\n     --pi-admin=\"<PI_SYSTEM_ADMIN>\" --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-db=\"<PI_SYSTEM_DB>\"  --wait-time=\"<WAIT_TIME>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/packages/docs/test_pi_webapi_linked_data_type.rst",
    "content": "Test PIWebAPI Linked Data Type\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed to verify that data ingested into Fledge using the `fledge-south-sinusoid` and `fledge-south-randomwalk` plugins is successfully sent to the PI Server in linked data type format using the `fledge-north-OMF` plugin.\n\nThis test consists of the *Test_linked_data_PIWebAPI* class, which contains multiple test case functions:\n\n1. **test_linked_data**: Verifies that data is ingested into Fledge using fledge-south-sinusoid and fledge-south-randomwalk plugins and then sent to the PI Server in linked data format using the fledge-north-OMF plugin. It also checks that the data sent and received counts match, the required asset is created, and the data successfully reaches the PI Server.  \n2. **test_linked_data_with_filter**: Verifies that data is ingested into Fledge using fledge-south-sinusoid and fledge-south-randomwalk plugins, with fledge-filter-expression applied, and sent to PI via the fledge-north-OMF plugin. It ensures that data sent and received counts match, the required asset is created, and the data successfully reaches the PI Server.\n3. **test_linked_data_with_onoff_filter**: Verifies that data is ingested into Fledge using fledge-south-sinusoid and fledge-south-randomwalk plugins, with fledge-filter-expression applied. The test ensures data is sent to PI via the fledge-north-OMF plugin in linked data format when filters are disabled and enabled multiple times. It confirms data sent and received counts match, the required asset is created, and the data successfully reaches the PI Server.\n\n\nPrerequisite\n++++++++++++\n\nInstall the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run,\n\n.. 
code-block:: console\n\n    --package-build-version=PACKAGE_BUILD_VERSION\n                        Package build version for http://archives.fledge-iot.org/\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python\n  $ python3 -m pytest -s -vv packages/test_pi_webapi_linked_data_type.py --package-build-version=\"<PACKAGE_BUILD_VERSION>\" --pi-host=\"<PI_SYSTEM_HOST>\" \\\n      --pi-port=\"<PI_SYSTEM_PORT>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-db=\"<PI_SYSTEM_DB>\"  --wait-time=\"<WAIT_TIME>\" \\\n      --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/packages/docs/test_rule_data_availability.rst",
    "content": "Data Availability Rule Test\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test verifies the basic functionality of the notification rule for the `fledge-rule-DataAvailability` (inbuilt) plugin. It involves using `fledge-south-sinusoid` for data ingestion into Fledge and `fledge-north-OMF` for sending data to the PI server.\n\nThis test consists of three classes, each containing multiple test case functions:\n\n1. **TestDataAvailabilityAuditBasedNotificationRuleOnIngress**: \n    a. **test_data_availability_multiple_audit**: Verifies if NTFSN is triggered with CONAD and SCHAD in Fledge.\n    b. **test_data_availability_single_audit**: Verifies if NTFSN is triggered with CONCH in the sinusoid plugin.\n    c. **test_data_availability_all_audit**: Verifies if NTFSN is triggered with all audit changes, referring to JIRA FOGL-7712.\n\n2. **TestDataAvailabilityAssetBasedNotificationRuleOnIngress**:\n    a. **test_data_availability_asset**: Verifies if the north service of OMF can send data to PI, ingested by the south service of sinusoid into Fledge, when a fledge-filter-scale filter is added to the north service.\n\n3. **TestDataAvailabilityBasedNotificationRuleOnEgress**:\n    a. **test_data_availability_north**: Verifies if NTFSN is triggered with configuration changes in the north EDS plugin. Please check FOGL-9355.\n\n\nPrerequisite\n++++++++++++\n\nInstall the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run,\n\n.. 
code-block:: console\n\n    --package-build-version=PACKAGE_BUILD_VERSION\n                        Package build version for http://archives.fledge-iot.org/\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python\n  $ python3 -m pytest -s -vv packages/test_rule_data_availability.py --package-build-version=\"<PACKAGE_BUILD_VERSION>\" --wait-time=\"<WAIT_TIME>\" \\\n        --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/packages/docs/test_statistics_history_notification_rule.rst",
    "content": "Statistics History Notification Rule Test\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test verifies the basic functionality of the statistics history notification using the `fledge-rule-Threshold` plugin, which utilizes `fledge-south-sinusoid` for data ingestion into Fledge and `fledge-north-OMF` for sending data to the PI server.\n\nThis test consists of two classes, each containing multiple test case functions:\n\n1. **TestStatisticsHistoryBasedNotificationRuleOnIngress**: \n    a. **test_stats_readings_south**: Verifies if NTFSN is triggered with source as statistics history and name as READINGS in the threshold rule.\n    b. **test_stats_south_asset_ingest**: Verifies if NTFSN is triggered with source as statistics history and name as \"ingested south asset\" in the threshold rule.\n    c. **test_stats_south_asset**: Verifies if NTFSN is triggered with source as statistics history and name as the south asset name in the threshold rule.\n\n2. **TestStatisticsHistoryBasedNotificationRuleOnEgress**:\n    a. **test_stats_readings_north**: Verifies if NTFSN is triggered with source as statistics history and name as READINGS in the threshold rule.\n\n\nPrerequisite\n++++++++++++\n\nInstall the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run,\n\n.. 
code-block:: console\n\n    --package-build-version=PACKAGE_BUILD_VERSION\n                        Package build version for http://archives.fledge-iot.org/\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python\n  $ python3 -m pytest -s -vv packages/test_statistics_history_notification_rule.py --package-build-version=\"<PACKAGE_BUILD_VERSION>\" --pi-host=\"<PI_SYSTEM_HOST>\" \\\n        --pi-port=\"<PI_SYSTEM_PORT>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-db=\"<PI_SYSTEM_DB>\" --wait-time=\"<WAIT_TIME>\" \\\n        --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/packages/network_impairment.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Module for applying network impairments.\"\"\"\n\n\n__author__ = \"Deepanshu Yadav\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n# References\n# 1. http://myconfigure.blogspot.com/2012/03/traffic-shaping.html\n# 2. https://lartc.org/howto/lartc.qdisc.classful.html\n# 3. https://lartc.org/howto/lartc.qdisc.filters.html\n# 4. https://serverfault.com/a/841865\n# 5. https://serverfault.com/a/906499\n# 6. https://wiki.linuxfoundation.org/networking/netem\n# 7. https://srtlab.github.io/srt-cookbook/how-to-articles/using-netem-to-emulate-networks/\n\nimport subprocess\nimport multiprocessing\nimport datetime\nimport time\nimport socket\n\n\ndef check_for_interface(interface):\n    \"\"\"Check whether the given interface is present among this machine's network interfaces.\"\"\"\n    for tup in socket.if_nameindex():\n        if tup[1] == interface:\n            return True\n\n    return False\n\n\nclass Distortion(multiprocessing.Process):\n    def __init__(self, run_cmd_list, clear_cmd, duration):\n        super(Distortion, self).__init__()\n        self.run_cmd_list = run_cmd_list\n        self.duration = duration\n        self.clear_cmd = clear_cmd\n\n    @staticmethod\n    def run_command(command):\n        \"\"\"Executes a shell command using subprocess module.\"\"\"\n        try:\n            process = subprocess.Popen(command, cwd=None, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)\n        except Exception as inst:\n            print(\"Problem running command : \\n   \", str(command), \" \", str(inst))\n            return False\n\n        [stdoutdata, stderrdata] = process.communicate(None)\n        if process.returncode:\n            print(stderrdata)\n            print(\"Problem running command : \\n   \", str(command), \" \", 
process.returncode)\n            return False\n\n        return True\n\n    def run(self) -> None:\n\n        # Make sure we start from a clean state. Ignore any error here.\n        _ = Distortion.run_command(self.clear_cmd)\n\n        for run_cmd in self.run_cmd_list:\n            print(\"Executing {}\".format(run_cmd), flush=True)\n            ret_val = Distortion.run_command(run_cmd)\n            if not ret_val:\n                print(\"Failed to execute command {}\".format(run_cmd), flush=True)\n                return\n\n        end_time = datetime.datetime.now() + datetime.timedelta(seconds=self.duration)\n        while datetime.datetime.now() < end_time:\n            time.sleep(0.5)\n\n        print(\"Executing {}\".format(self.clear_cmd), flush=True)\n        ret_val = Distortion.run_command(self.clear_cmd)\n        if not ret_val:\n            print(\"Failed to execute command {}\".format(self.clear_cmd), flush=True)\n            return\n\n        print(\"Network Impairment complete.\", flush=True)\n\n\ndef reset_network(interface):\n    \"\"\"\n    Reset the network in the middle of impairment.\n    :param interface: The interface of the network.\n    :type interface: string\n    :return: None\n    :rtype: None\n    \"\"\"\n    if not check_for_interface(interface):\n        raise Exception(\"Could not find interface {} among present interfaces.\".format(interface))\n\n    clear_cmd = \"sudo tc qdisc del dev {} root\".format(interface)\n    ret_val = Distortion.run_command(command=clear_cmd)\n\n    if ret_val:\n        print(\"Network has been reset.\")\n    else:\n        print(\"Could not reset the network.\")\n\n\ndef distort_network(interface, duration, rate_limit, latency, ip=None, port=None,\n                    traffic=''):\n    \"\"\"\n    Apply a network impairment (rate limit and/or latency) on the given interface.\n\n    :param interface: Interface on which network impairment will be applied. 
See ifconfig on\n                       your Linux machine to decide.\n    :type interface: string\n    :param duration: The duration (in seconds) for which the impairment will be applied. Note that it\n                     is cleared automatically once this duration elapses.\n    :type duration: integer\n    :param traffic:  If 'inbound', the given ip and port identify the remote source of packets\n                     arriving at this machine, and the impairment is applied only to those packets.\n                     If 'outbound', the impairment is applied to packets leaving this machine\n                     for that destination.\n    :type traffic: string ('inbound' or 'outbound')\n    :param ip: The IP address of the remote machine used to filter packets. Keep None\n               if no filter is required.\n    :type ip: string\n    :param port: The port of the remote machine used to filter packets. Keep None\n               if no filter is required.\n    :type port: integer\n    :param rate_limit: The rate restriction in kbps, e.g. 20 for 20 kbps.\n    :type rate_limit: integer\n    :param latency: The delay applied to every packet leaving/arriving on the machine, in\n                    milliseconds. 
Use 300 to delay each packet by 300 milliseconds.\n    :type latency: integer\n    :return: None\n    :rtype: None\n    \"\"\"\n\n    if not check_for_interface(interface):\n        raise Exception(\"Could not find interface {} among present interfaces.\".format(interface))\n\n    if not latency and not rate_limit:\n        raise Exception(\"Either latency or rate_limit must be provided.\")\n\n    if latency:\n        latency_converted = str(latency) + 'ms'\n    else:\n        latency_converted = None\n\n    if rate_limit:\n        rate_limit_converted = str(rate_limit) + 'Kbit'\n    else:\n        rate_limit_converted = None\n\n    if not (ip and port):\n        if rate_limit_converted and latency_converted:\n            run_cmd = \"sudo tc qdisc add dev {} root netem\" \\\n                      \" delay {} rate {}\".format(interface, latency_converted,\n                                                 rate_limit_converted)\n        elif rate_limit_converted and not latency_converted:\n            run_cmd = \"sudo tc qdisc add dev {} root netem\" \\\n                      \" rate {}\".format(interface, rate_limit_converted)\n        elif not rate_limit_converted and latency_converted:\n            run_cmd = \"sudo tc qdisc add dev {} root netem\" \\\n                      \" delay {}\".format(interface, latency_converted)\n\n        clear_cmd = \"sudo tc qdisc del dev {} root\".format(interface)\n        p = Distortion([run_cmd], clear_cmd, duration)\n        p.daemon = True\n        p.start()\n\n    else:\n        r1 = \"sudo tc qdisc add dev {} root handle 1: prio\" \\\n             \" priomap 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2\".format(interface)\n        if latency_converted and rate_limit_converted:\n            r2 = \"sudo tc qdisc add dev {} parent 1:1 \" \\\n                 \"handle 10: netem delay {} rate {}\".format(interface,\n                                                            latency_converted,\n                                                 
           rate_limit_converted)\n        elif not latency_converted and rate_limit_converted:\n            r2 = \"sudo tc qdisc add dev {} parent 1:1 \" \\\n                 \"handle 10: netem rate {}\".format(interface,\n                                                    rate_limit_converted)\n        elif latency_converted and not rate_limit_converted:\n            r2 = \"sudo tc qdisc add dev {} parent 1:1 \" \\\n                 \"handle 10: netem delay {}\".format(interface,\n                                                     latency_converted)\n\n        if traffic.lower() == 'outbound':\n            ip_param = 'dst'\n            port_param = 'dport'\n\n        elif traffic.lower() == \"inbound\":\n            ip_param = 'src'\n            port_param = 'sport'\n        else:\n            raise Exception(\"If ip and port are given, traffic must be either 'inbound' or 'outbound',\"\n                            \" but got '{}'.\".format(traffic))\n\n        r3 = \"sudo tc filter add dev {} protocol ip parent 1:0 prio 1 u32 \" \\\n             \"match ip {} {}/32 match ip {} {} 0xffff flowid 1:1\".format(interface, ip_param,\n                                                                          ip, port_param, port)\n        clear_cmd = \"sudo tc qdisc del dev {} root\".format(interface)\n        run_cmd_list = [r1, r2, r3]\n        p = Distortion(run_cmd_list, clear_cmd, duration)\n        p.daemon = True\n        p.start()\n\n\n\"\"\" -------------------------Usage -------------------------------------\"\"\"\n\n# from network_impairment import distort_network, reset_network\n# distort_network(interface=\"wlp2s0\", duration=40, rate_limit=20, latency=300,\n#                 ip=\"192.168.1.80\", port=8081, traffic=\"inbound\")\n#\n# distort_network(interface=\"wlp2s0\", duration=40, rate_limit=20, latency=300,\n#                 ip=\"192.168.1.80\", port=8081, traffic=\"outbound\")\n#\n# distort_network(interface=\"wlp2s0\", duration=40, 
rate_limit=20, latency=300)\n\n# reset_network(interface=\"wlp2s0\")\n"
  },
  {
    "path": "tests/system/python/packages/test_authentication.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test authentication REST API \"\"\"\n\nimport os\nimport subprocess\nimport http.client\nimport json\nimport time\nimport ssl\nfrom pathlib import Path\nfrom contextlib import closing\nimport pytest\nfrom pytest import PKG_MGR\nfrom conftest import restart_and_wait_for_fledge\n\n__author__ = \"Yash Tatkondawar, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nTEMPLATE_NAME = \"template.json\"\nSENSOR_VALUE = 12.25\nHTTP_SOUTH_SVC_NAME = \"SOUTH_HTTP\"\nHTTP_SOUTH_SVC_NAME_1 = \"SOUTH_HTTP_1\"\nASSET_NAME = \"auth\"\nPASSWORD_TOKEN = None\nCERT_TOKEN = None\nKEY_SIZE = 2048\n\n# This  gives the path of directory where fledge is cloned. test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/packages/data/\".format(PROJECT_ROOT)\ncontext = ssl._create_unverified_context()\n\nLOGIN_SUCCESS_MSG = \"Logged in successfully.\"\nROLES = {'roles': [\n    {'description': 'All CRUD privileges', 'id': 1, 'name': 'admin'},\n    {'description': 'All CRUD operations and self profile management', 'id': 2, 'name': 'user'},\n    {'id': 3, 'name': 'view', 'description': 'Only to view the configuration'},\n    {'id': 4, 'name': 'data-view', 'description': 'Only read the data in buffer'},\n    {'id': 5, 'name': 'control',\n     'description': 'Same as editor can do and also have access for control scripts and pipelines'}\n]}\n\n\ndef send_data_using_fogbench(wait_time):\n    execute_fogbench = 'cd {}/extras/python ;python3 -m fogbench -t $FLEDGE_ROOT/data/tests/{} ' \\\n                       '-p http -O 10'.format(PROJECT_ROOT, TEMPLATE_NAME)\n    exit_code = os.system(execute_fogbench)\n    assert 0 == exit_code\n    # Wait until data gets ingested\n    
time.sleep(wait_time)\n\n\ndef add_south_http(fledge_url, name, token, wait_time, tls_enabled):\n    payload = {\"name\": name, \"type\": \"south\", \"plugin\": \"http_south\", \"enabled\": True}\n    post_url = \"/fledge/service\"\n    if tls_enabled:\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n    else:\n        conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"POST\", post_url, json.dumps(payload), headers={\"authorization\": token})\n    res = conn.getresponse()\n    assert 200 == res.status, \"ERROR! POST {} request failed\".format(post_url)\n    res = res.read().decode()\n    r = json.loads(res)\n    # Wait for service to get added\n    time.sleep(wait_time * 2)\n    return r\n\n\ndef generate_json_for_fogbench(asset_name):\n    subprocess.run([\"cd $FLEDGE_ROOT/data && mkdir -p tests\"], shell=True, check=True)\n\n    fogbench_template_path = os.path.join(\n        os.path.expandvars('${FLEDGE_ROOT}'), 'data/tests/{}'.format(TEMPLATE_NAME))\n    with open(fogbench_template_path, \"w\") as f:\n        f.write(\n            '[{\"name\": \"%s\", \"sensor_values\": '\n            '[{\"name\": \"sensor\", \"type\": \"number\", \"min\": %f, \"max\": %f, \"precision\": 2}]}]' % (\n                asset_name, SENSOR_VALUE, SENSOR_VALUE))\n\n\n@pytest.fixture\ndef change_auth_method(fledge_url, wait_time):\n    def _change_auth_method(auth_method, restart, enable_tls):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n        assert jdoc['admin']\n        token = jdoc['token']\n\n        payload = {\"authMethod\": auth_method}\n        if enable_tls:\n            payload[\"enableHttp\"] = \"false\"\n        
conn.request(\"PUT\", '/fledge/category/rest_api', headers={\"authorization\": token},\n                     body=json.dumps(payload))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert auth_method == jdoc['authMethod']['value']\n        if restart:\n            restart_and_wait_for_fledge(fledge_url, wait_time, token, https_enabled=enable_tls)\n        if not restart:\n            conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": token})\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert jdoc['logout']\n    return _change_auth_method\n\n@pytest.fixture\ndef reset_fledge(wait_time):\n    def _reset_fledge(authentication):\n        # TODO: Remove kill after resolution of FOGL-1499\n        try:\n            subprocess.run([\"$FLEDGE_ROOT/bin/fledge kill\"], shell=True, check=True)\n        except subprocess.CalledProcessError:\n            assert False, \"kill command failed!\"\n\n        try:\n            if authentication:\n                cmd = \"cd {}/tests/system/python/scripts/package && ./reset $FLEDGE_ROOT authentication\".format(\n                    PROJECT_ROOT)\n            else:\n                cmd = \"cd {}/tests/system/python/scripts/package && ./reset\".format(PROJECT_ROOT)\n            subprocess.run([cmd], shell=True, check=True)\n        except subprocess.CalledProcessError:\n            assert False, \"reset package script failed!\"\n\n        # Wait for fledge server to start\n        time.sleep(wait_time)\n    return _reset_fledge\n\n\n@pytest.fixture\ndef remove_and_add_fledge_pkgs(package_build_version):\n    try:\n        subprocess.run([\"cd {}/tests/system/python/scripts/package && ./remove\"\n                       .format(PROJECT_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert 
False, \"remove package script failed!\"\n\n    try:\n        subprocess.run([\"cd {}/tests/system/python/scripts/package/ && ./setup {}\"\n                       .format(PROJECT_ROOT, package_build_version)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"setup package script failed\"\n\n    try:\n        subprocess.run([\"sudo {} install -y fledge-south-http-south\".format(PKG_MGR)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"installation of http-south package failed\"\n\n\nclass TestTLSDisabled:\n    def test_on_default_port(self, remove_and_add_fledge_pkgs, fledge_url, reset_fledge):\n        reset_fledge(authentication=False)\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"uptime\" in jdoc\n        assert 0 < jdoc['uptime'], \"Fledge not up.\"\n\n    def test_on_custom_port(self, fledge_url, wait_time):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/category/rest_api', json.dumps({\"httpPort\": \"8005\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"httpPort\" in jdoc\n        assert '8005' == jdoc['httpPort']['value']\n\n        # FIXME: Remove this wait time\n        time.sleep(wait_time)\n\n        restart_and_wait_for_fledge(fledge_url, wait_time, custom_port=8005)\n\n    def test_reset_to_default_port(self, fledge_url, wait_time):\n        conn = http.client.HTTPConnection(\"localhost\", 8005)\n        conn.request(\"PUT\", '/fledge/category/rest_api', json.dumps({\"httpPort\": \"8081\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"httpPort\" in jdoc\n        assert 
'8081' == jdoc['httpPort']['value']\n\n        # FIXME: Remove this wait time\n        time.sleep(wait_time)\n\n        restart_and_wait_for_fledge(\"localhost:8005\", wait_time, custom_port=8081)\n\n\nclass TestAuthAnyWithoutTLS:\n    def test_login_regular_user_using_password(self, fledge_url, reset_fledge, change_auth_method):\n        reset_fledge(authentication=True)\n        change_auth_method(auth_method=\"any\", restart=False, enable_tls=False)\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n        assert not jdoc['admin']\n        global PASSWORD_TOKEN\n        PASSWORD_TOKEN = jdoc[\"token\"]\n\n    def test_logout_me_password_token(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n    def test_login_with_invalid_credentials_regular_user_using_password(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"Fledge\"}))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert \"Username or Password do not match\" == r.reason\n\n    def test_login_username_admin_using_password(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = 
r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n        assert \"token\" in jdoc\n        assert jdoc['admin']\n        global PASSWORD_TOKEN\n        PASSWORD_TOKEN = jdoc[\"token\"]\n\n    def test_login_with_invalid_credentials_admin_using_password(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"FLEDGE\"}))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert \"Username or Password do not match\" == r.reason\n\n    def test_login_with_user_certificate(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/user.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            assert \"token\" in jdoc\n            assert not jdoc['admin']\n\n    def test_login_with_admin_certificate(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/admin.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            assert \"token\" in jdoc\n            assert jdoc['admin']\n            global CERT_TOKEN\n            CERT_TOKEN = jdoc[\"token\"]\n\n    def test_login_with_custom_certificate(self, fledge_url, remove_data_file):\n       
 # Create a custom certificate and sign it with the Fledge CA\n        try:\n            subprocess.run([\"openssl genrsa -out custom.key {} 2> /dev/null\".format(KEY_SIZE)], shell=True, check=True)\n            subprocess.run([\"openssl req -new -key custom.key -out custom.csr -subj '/C=IN/CN=user' 2> /dev/null\"],\n                           shell=True, check=True)\n            subprocess.run([\"openssl x509 -req -days 1 -in custom.csr \"\n                            \"-CA $FLEDGE_ROOT/data/etc/certs/ca.cert -CAkey $FLEDGE_ROOT/data/etc/certs/ca.key \"\n                            \"-set_serial 01 -out custom.cert 2> /dev/null\"], shell=True, check=True)\n        except subprocess.CalledProcessError:\n            assert False, \"Certificate creation failed!\"\n\n        # Login with custom certificate\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = 'custom.cert'\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            assert \"token\" in jdoc\n            assert not jdoc['admin']\n\n        # Delete Certificates and keys created\n        remove_data_file('custom.key')\n        remove_data_file('custom.csr')\n        remove_data_file('custom.cert')\n\n    def test_ping_with_allow_ping_true(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 0 == jdoc['dataRead'], \"dataRead expected to be 0 before ingest\"\n\n    def test_ingest_with_password_token(self, fledge_url, wait_time):\n        add_south_http(fledge_url, HTTP_SOUTH_SVC_NAME, PASSWORD_TOKEN, wait_time, tls_enabled=False)\n\n        generate_json_for_fogbench(ASSET_NAME)\n\n        
send_data_using_fogbench(wait_time)\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 10 == jdoc['dataRead'], \"data NOT seen in ping header\"\n\n    def test_ingest_with_certificate_token(self, fledge_url, wait_time):\n        add_south_http(fledge_url, HTTP_SOUTH_SVC_NAME_1, CERT_TOKEN, wait_time, tls_enabled=False)\n\n        generate_json_for_fogbench(ASSET_NAME)\n\n        send_data_using_fogbench(wait_time)\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 20 == jdoc['dataRead'], \"data NOT seen in ping header\"\n\n    def test_ping_with_allow_ping_false_with_password_token(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n        _token = jdoc[\"token\"]\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/category/rest_api', body=json.dumps({\"allowPing\": \"false\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n      
  assert 401 == r.status\n        assert \"Unauthorized\" == r.reason\n\n    def test_ping_with_allow_ping_false_with_certificate_token(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/admin.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            _token = jdoc[\"token\"]\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/category/rest_api', json.dumps({\"allowPing\": \"false\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        assert 401 == r.status\n        assert \"Unauthorized\" == r.reason\n\n    @pytest.mark.parametrize((\"query\", \"expected_values\"), [\n        ('', {'users': [{'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any', 'realName': 'Admin user',\n                         'description': 'admin user'},\n                        {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                         'description': 'normal user'}]}),\n        ('?id=2', {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                   'description': 'normal user'}),\n        ('?username=admin', {'userId': 
1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                             'realName': 'Admin user', 'description': 'admin user'}),\n        ('?id=1&username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                                  'realName': 'Admin user', 'description': 'admin user'})\n    ])\n    def test_get_users_with_password_token(self, fledge_url, query, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/user{}\".format(query), headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    @pytest.mark.parametrize((\"query\", \"expected_values\"), [\n        ('', {'users': [{'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any', 'realName': 'Admin user',\n                         'description': 'admin user'},\n                        {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                         'description': 'normal user'}]}),\n        ('?id=2', {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                   'description': 'normal user'}),\n        ('?username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                             'realName': 'Admin user', 'description': 'admin user'}),\n        ('?id=1&username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                                  'realName': 'Admin user', 'description': 'admin user'})\n    ])\n    def test_get_users_with_certificate_token(self, fledge_url, query, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/user{}\".format(query), headers={\"authorization\": 
CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    def test_get_roles_with_password_token(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/user/role\", headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert ROLES == jdoc\n\n    def test_get_roles_with_certificate_token(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/user/role\", headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert ROLES == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"User@123\", \"real_name\": \"AJ\", \"description\": \"Nerd user\"},\n         {'user': {'userName': 'any1', 'userId': 3, 'roleId': 2, 'accessMethod': 'any', 'realName': 'AJ',\n                   'description': 'Nerd user'}, 'message': 'any1 user has been created successfully.'}),\n        ({\"username\": \"admin1\", \"password\": \"F0gl@mp!\", \"role_id\": 1},\n         {'user': {'userName': 'admin1', 'userId': 4, 'roleId': 1, 'accessMethod': 'any', 'realName': '',\n                   'description': ''}, 'message': 'admin1 user has been created successfully.'})\n    ])\n    def test_create_user_with_password_token(self, fledge_url, form_data, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(form_data),\n                     headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = 
r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any2\", \"password\": \"User@123\", \"real_name\": \"PG\", \"description\": \"Nerd user\"},\n         {'user': {'userName': 'any2', 'userId': 5, 'roleId': 2, 'accessMethod': 'any', 'realName': 'PG',\n                   'description': 'Nerd user'}, 'message': 'any2 user has been created successfully.'}),\n        ({\"username\": \"admin2\", \"password\": \"F0gl@mp!\", \"role_id\": 1},\n         {'user': {'userName': 'admin2', 'userId': 6, 'roleId': 1, 'accessMethod': 'any', 'realName': '',\n                   'description': ''}, 'message': 'admin2 user has been created successfully.'})\n    ])\n    def test_create_user_with_certificate_token(self, fledge_url, form_data, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(form_data),\n                     headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"User@123\"}, LOGIN_SUCCESS_MSG),\n        ({\"username\": \"admin1\", \"password\": \"F0gl@mp!\"}, LOGIN_SUCCESS_MSG),\n        ({\"username\": \"any2\", \"password\": \"User@123\"}, LOGIN_SUCCESS_MSG),\n        ({\"username\": \"admin2\", \"password\": \"F0gl@mp!\"}, LOGIN_SUCCESS_MSG)\n    ])\n    def test_login_of_newly_created_user(self, fledge_url, form_data, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps(form_data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        
jdoc = json.loads(r)\n        assert expected_values == jdoc['message']\n\n    def test_update_password_with_password_token(self, fledge_url):\n        uid = 3\n        data = {\"current_password\": \"User@123\", \"new_password\": \"F0gl@mp1\"}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/user/{}/password\".format(uid), body=json.dumps(data),\n                     headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'Password has been updated successfully for user ID:<{}>.'.format(uid)} == jdoc\n\n    def test_update_password_with_certificate_token(self, fledge_url):\n        uid = 5\n        data = {\"current_password\": \"User@123\", \"new_password\": \"F0gl@mp2\"}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/user/{}/password\".format(uid), body=json.dumps(data),\n                     headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'Password has been updated successfully for user ID:<{}>.'.format(uid)} == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"F0gl@mp1\"}, LOGIN_SUCCESS_MSG),\n        ({\"username\": \"any2\", \"password\": \"F0gl@mp2\"}, LOGIN_SUCCESS_MSG)\n    ])\n    def test_login_with_updated_password(self, fledge_url, form_data, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps(form_data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc['message']\n\n    def 
test_reset_user_with_password_token(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/admin/3/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@mp!#1\"}),\n                     headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'User with ID:<3> has been updated successfully.'} == jdoc\n\n    def test_reset_user_with_certificate_token(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/admin/5/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@mp!#2\"}),\n                     headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'User with ID:<5> has been updated successfully.'} == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"F0gl@mp!#1\"}, LOGIN_SUCCESS_MSG),\n        ({\"username\": \"any2\", \"password\": \"F0gl@mp!#2\"}, LOGIN_SUCCESS_MSG)\n    ])\n    def test_login_with_resetted_password(self, fledge_url, form_data, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps(form_data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc['message']\n\n    def test_delete_user_with_password_token(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", \"/fledge/admin/4/delete\", headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = 
r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': \"User has been deleted successfully.\"} == jdoc\n\n    def test_delete_user_with_certificate_token(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", \"/fledge/admin/6/delete\", headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': \"User has been deleted successfully.\"} == jdoc\n\n    @pytest.mark.parametrize(\"form_data\", [\n        {\"username\": \"admin1\", \"password\": \"F0gl@mp!\"},\n        {\"username\": \"admin2\", \"password\": \"F0gl@mp!\"}\n    ])\n    def test_login_of_deleted_user(self, fledge_url, form_data):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps(form_data))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert \"User does not exist\" == r.reason\n\n    def test_logout_all_with_password_token(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/1/logout', headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n    def test_verify_logout(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset', headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 401 == r.status\n\n    def test_admin_actions_forbidden_for_regular_user_with_pwd_token(self, fledge_url):\n        \"\"\"Test that regular user is not able to perform any actions that only an admin can\"\"\"\n        # Login with regular user\n        conn = 
http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert not jdoc['admin']\n        _token = jdoc[\"token\"]\n\n        # Create User\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps({\"username\": \"other\", \"password\": \"User@123\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Reset User\n        conn.request(\"PUT\", \"/fledge/admin/2/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@p!\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Delete User\n        conn.request(\"DELETE\", \"/fledge/admin/2/delete\", headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n    def test_admin_actions_forbidden_for_regular_user_with_cert_token(self, fledge_url):\n        \"\"\"Test that regular user is not able to perform any actions that only an admin can\"\"\"\n        # Login with regular user\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/user.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert not jdoc['admin']\n            _token = 
jdoc[\"token\"]\n\n        # Create User\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps({\"username\": \"other\", \"password\": \"User@123\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Reset User\n        conn.request(\"PUT\", \"/fledge/admin/2/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@p!\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Delete User\n        conn.request(\"DELETE\", \"/fledge/admin/2/delete\", headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n    @pytest.mark.skip(reason=\"Currently this feature is not implemented.\")\n    def test_regular_user_access_to_admin_api_config(self, fledge_url):\n        pass\n\n\nclass TestAuthPasswordWithoutTLS:\n    def test_login_username_regular_user(self, fledge_url, reset_fledge, change_auth_method):\n        reset_fledge(authentication=True)\n        change_auth_method(auth_method=\"password\", restart=True, enable_tls=False)\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n        assert not jdoc['admin']\n        global PASSWORD_TOKEN\n        PASSWORD_TOKEN = jdoc[\"token\"]\n\n    def test_logout_me(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/logout', 
headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n    def test_login_with_invalid_credentials_regular_user(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"Fledge\"}))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert \"Username or Password do not match\" == r.reason\n\n    def test_login_username_admin(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n        assert \"token\" in jdoc\n        assert jdoc['admin']\n        global PASSWORD_TOKEN\n        PASSWORD_TOKEN = jdoc[\"token\"]\n\n    def test_login_with_invalid_credentials_admin(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"FLEDGE\"}))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert \"Username or Password do not match\" == r.reason\n\n    def test_login_with_admin_certificate(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/admin.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 400 == r.status\n            assert \"Use valid username & password to log in.\" == r.reason\n\n    def 
test_ping_with_allow_ping_true(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 0 == jdoc['dataRead'], \"dataRead expected to be 0 before ingest\"\n\n    def test_ingest(self, fledge_url, wait_time):\n        add_south_http(fledge_url, HTTP_SOUTH_SVC_NAME, PASSWORD_TOKEN, wait_time, tls_enabled=False)\n\n        generate_json_for_fogbench(ASSET_NAME)\n\n        send_data_using_fogbench(wait_time)\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 10 == jdoc['dataRead'], \"data NOT seen in ping header\"\n\n    def test_ping_with_allow_ping_false(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n        _token = jdoc[\"token\"]\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/category/rest_api', body=json.dumps({\"allowPing\": \"false\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        assert 401 == r.status\n    
    assert \"Unauthorized\" == r.reason\n\n    @pytest.mark.parametrize((\"query\", \"expected_values\"), [\n        ('', {'users': [{'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any', 'realName': 'Admin user',\n                         'description': 'admin user'},\n                        {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                         'description': 'normal user'}]}),\n        ('?id=2', {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                   'description': 'normal user'}),\n        ('?username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                             'realName': 'Admin user', 'description': 'admin user'}),\n        ('?id=1&username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                                  'realName': 'Admin user', 'description': 'admin user'})\n    ])\n    def test_get_users(self, fledge_url, query, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/user{}\".format(query), headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    def test_get_roles(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/user/role\", headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert ROLES == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"User@123\", \"real_name\": \"AJ\", \"description\": \"Nerd user\"},\n         {'user': 
{'userName': 'any1', 'userId': 3, 'roleId': 2, 'accessMethod': 'any', 'realName': 'AJ',\n                   'description': 'Nerd user'}, 'message': 'any1 user has been created successfully.'}),\n        ({\"username\": \"admin1\", \"password\": \"F0gl@mp!\", \"role_id\": 1},\n         {'user': {'userName': 'admin1', 'userId': 4, 'roleId': 1, 'accessMethod': 'any', 'realName': '',\n                   'description': ''}, 'message': 'admin1 user has been created successfully.'})\n    ])\n    def test_create_user(self, fledge_url, form_data, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(form_data),\n                     headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"User@123\"}, LOGIN_SUCCESS_MSG),\n        ({\"username\": \"admin1\", \"password\": \"F0gl@mp!\"}, LOGIN_SUCCESS_MSG)\n    ])\n    def test_login_of_newly_created_user(self, fledge_url, form_data, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps(form_data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc['message']\n\n    def test_update_password(self, fledge_url):\n        uid = 3\n        data = {\"current_password\": \"User@123\", \"new_password\": \"F0gl@mp1\"}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/user/{}/password\".format(uid), body=json.dumps(data),\n                     headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 
200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'Password has been updated successfully for user ID:<{}>.'.format(uid)} == jdoc\n\n    def test_login_with_updated_password(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps({\"username\": \"any1\", \"password\": \"F0gl@mp1\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n\n    def test_reset_user(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/admin/3/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@mp!\"}),\n                     headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'User with ID:<3> has been updated successfully.'} == jdoc\n\n    def test_login_with_resetted_password(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps({\"username\": \"any1\", \"password\": \"F0gl@mp!\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n\n    def test_delete_user(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", \"/fledge/admin/4/delete\", headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': \"User has been deleted successfully.\"} == jdoc\n\n    def test_login_of_deleted_user(self, fledge_url):\n        
conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps({\"username\": \"admin1\", \"password\": \"F0gl@mp!\"}))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert \"User does not exist\" == r.reason\n\n    def test_logout_all(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/1/logout', headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n    def test_verify_logout(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset', headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 401 == r.status\n\n    def test_admin_actions_forbidden_for_regular_user(self, fledge_url):\n        \"\"\"Test that regular user is not able to perform any actions that only an admin can\"\"\"\n        # Login with regular user\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert not jdoc['admin']\n        _token = jdoc[\"token\"]\n\n        # Create User\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps({\"username\": \"other\", \"password\": \"User@123\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Reset User\n        conn.request(\"PUT\", \"/fledge/admin/2/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@p!\"}),\n                     
headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Delete User\n        conn.request(\"DELETE\", \"/fledge/admin/2/delete\", headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n    @pytest.mark.skip(reason=\"Currently this feature is not implemented.\")\n    def test_regular_user_access_to_admin_api_config(self, fledge_url):\n        pass\n\n\nclass TestAuthCertificateWithoutTLS:\n    def test_login_with_user_certificate(self, fledge_url, reset_fledge, change_auth_method):\n        reset_fledge(authentication=True)\n        change_auth_method(auth_method=\"certificate\", restart=True, enable_tls=False)\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/user.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            assert \"token\" in jdoc\n            assert not jdoc['admin']\n\n    def test_login_with_admin_certificate(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/admin.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            assert \"token\" in jdoc\n            assert 
jdoc['admin']\n            global CERT_TOKEN\n            CERT_TOKEN = jdoc[\"token\"]\n\n    def test_login_with_custom_certificate(self, fledge_url, remove_data_file):\n        # Create a custom certificate and sign it\n        try:\n            subprocess.run([\"openssl genrsa -out custom.key {} 2> /dev/null\".format(KEY_SIZE)], shell=True, check=True)\n            subprocess.run([\"openssl req -new -key custom.key -out custom.csr -subj '/C=IN/CN=user' 2> /dev/null\"],\n                           shell=True, check=True)\n            subprocess.run([\"openssl x509 -req -days 1 -in custom.csr \"\n                            \"-CA $FLEDGE_ROOT/data/etc/certs/ca.cert -CAkey $FLEDGE_ROOT/data/etc/certs/ca.key \"\n                            \"-set_serial 01 -out custom.cert 2> /dev/null\"], shell=True, check=True)\n        except subprocess.CalledProcessError:\n            assert False, \"Certificate creation failed!\"\n\n        # Login with custom certificate\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = 'custom.cert'\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            assert \"token\" in jdoc\n            assert not jdoc['admin']\n\n        # Delete Certificates and keys created\n        remove_data_file('custom.key')\n        remove_data_file('custom.csr')\n        remove_data_file('custom.cert')\n\n    def test_login_with_invalid_credentials(self, fledge_url, remove_data_file):\n        try:\n            subprocess.run([\"echo 'Fledge certificate' > template.cert\"], shell=True, check=True)\n        except subprocess.CalledProcessError:\n            assert False, \"Certificate creation failed!\"\n\n        # Login with invalid certificate\n        conn = http.client.HTTPConnection(fledge_url)\n        
cert_file_path = 'template.cert'\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 400 == r.status\n            assert 'Use a valid certificate to login.' == r.reason\n\n        # Delete Certificates and keys created\n        remove_data_file('template.cert')\n\n    def test_login_username_admin(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 400 == r.status\n        assert \"Use a valid certificate to login.\" == r.reason\n\n    def test_ping_with_allow_ping_true(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 0 == jdoc['dataRead'], \"data NOT seen in ping header\"\n\n    def test_ingest(self, fledge_url, wait_time):\n        add_south_http(fledge_url, HTTP_SOUTH_SVC_NAME, CERT_TOKEN, wait_time, tls_enabled=False)\n\n        generate_json_for_fogbench(ASSET_NAME)\n\n        send_data_using_fogbench(wait_time)\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 10 == jdoc['dataRead'], \"data NOT seen in ping header\"\n\n    def test_ping_with_allow_ping_false(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/admin.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n    
        assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            _token = jdoc[\"token\"]\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/category/rest_api', json.dumps({\"allowPing\": \"false\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        assert 401 == r.status\n        assert \"Unauthorized\" == r.reason\n\n    @pytest.mark.parametrize((\"query\", \"expected_values\"), [\n        ('?id=2', {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                   'description': 'normal user'}),\n        ('?username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                             'realName': 'Admin user', 'description': 'admin user'}),\n        ('?id=1&username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                                  'realName': 'Admin user', 'description': 'admin user'})\n    ])\n    def test_get_users(self, fledge_url, query, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/user{}\".format(query), headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    def test_get_roles(self, fledge_url):\n        conn = 
http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", \"/fledge/user/role\", headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert ROLES == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"User@123\", \"real_name\": \"AJ\", \"description\": \"Nerd user\"},\n         {'user': {'userName': 'any1', 'userId': 3, 'roleId': 2, 'accessMethod': 'any', 'realName': 'AJ',\n                   'description': 'Nerd user'}, 'message': 'any1 user has been created successfully.'}),\n        ({\"username\": \"admin1\", \"password\": \"F0gl@mp!\", \"role_id\": 1},\n         {'user': {'userName': 'admin1', 'userId': 4, 'roleId': 1, 'accessMethod': 'any', 'realName': '',\n                   'description': ''}, 'message': 'admin1 user has been created successfully.'})\n    ])\n    def test_create_user(self, fledge_url, form_data, expected_values):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(form_data),\n                     headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    def test_update_password(self, fledge_url):\n        uid = 3\n        data = {\"current_password\": \"User@123\", \"new_password\": \"F0gl@mp1\"}\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/user/{}/password\".format(uid), body=json.dumps(data),\n                     headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'Password has been updated successfully for user 
ID:<{}>.'.format(uid)} == jdoc\n\n    def test_reset_user(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", \"/fledge/admin/3/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@mp!\"}),\n                     headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'User with ID:<3> has been updated successfully.'} == jdoc\n\n    def test_delete_user(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", \"/fledge/admin/4/delete\", headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': \"User has been deleted successfully.\"} == jdoc\n\n    def test_logout_all(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"PUT\", '/fledge/1/logout', headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n    def test_verify_logout(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/asset', headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 401 == r.status\n\n    def test_admin_actions_forbidden_for_regular_user(self, fledge_url):\n        \"\"\"Test that regular user is not able to perform any actions that only an admin can\"\"\"\n        # Login with regular user\n        conn = http.client.HTTPConnection(fledge_url)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/user.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", 
\"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert not jdoc['admin']\n            _token = jdoc[\"token\"]\n\n        # Create User\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps({\"username\": \"other\", \"password\": \"User@123\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Reset User\n        conn.request(\"PUT\", \"/fledge/admin/2/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@p!\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Delete User\n        conn.request(\"DELETE\", \"/fledge/admin/2/delete\", headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n    @pytest.mark.skip(reason=\"Currently this feature is not implemented.\")\n    def test_regular_user_access_to_admin_api_config(self, fledge_url):\n        pass\n\n\nclass TestTLSEnabled:\n    def test_on_default_port(self, reset_fledge, change_auth_method):\n        reset_fledge(authentication=False)\n        change_auth_method(auth_method=\"any\", restart=True, enable_tls=True)\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"uptime\" in jdoc\n        assert 0 < jdoc['uptime'], \"Fledge not up.\"\n\n    def test_on_custom_port(self, wait_time):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, 
context=context)\n        conn.request(\"PUT\", '/fledge/category/rest_api', json.dumps({\"httpsPort\": \"2005\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"httpsPort\" in jdoc\n        assert '2005' == jdoc['httpsPort']['value']\n\n        # FIXME: Remove this wait time\n        time.sleep(wait_time)\n\n        # Restart Fledge\n        restart_headers = {}  # No auth headers needed for restart endpoint\n        conn.request(\"PUT\", '/fledge/restart', headers=restart_headers, body=json.dumps({}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert \"Fledge restart has been scheduled.\" == jdoc['message']\n        \n        # Note: Can't use restart_and_wait_for_fledge() from conftest because it only supports\n        # default ports and HTTP connections, but we need to verify HTTPS on custom port 2005\n        # Wait for Fledge to restart and verify it's running on the new port (2005)\n        print(f\"Waiting for Fledge to restart on port 2005... 
(Initial wait: {wait_time}s)\")\n        time.sleep(wait_time)\n\n        start_time = time.time()\n        max_retries = 5\n        ping_headers = {}  # No auth headers needed for ping endpoint\n        last_error = None\n\n        for attempt in range(max_retries):\n            try:\n                # Open a fresh connection on the new port (2005) for each attempt;\n                # closing() shuts the connection down, so a single shared one cannot be reused\n                with closing(http.client.HTTPSConnection(\"localhost\", 2005, context=context)) as connection:\n                    connection.request(\"GET\", \"/fledge/ping\", headers=ping_headers)\n                    response = connection.getresponse()\n                    if response.status == 200:\n                        jdoc = json.loads(response.read().decode())\n                        assert \"uptime\" in jdoc, \"Fledge ping response missing uptime field\"\n                        assert jdoc['uptime'] > 0, \"Fledge uptime should be greater than 0\"\n                        break\n                    last_error = \"HTTP status {}\".format(response.status)\n                    print(f\"Attempt {attempt + 1}: Got {last_error}\")\n            except Exception as e:\n                last_error = \"{}: {}\".format(type(e).__name__, e)\n                print(f\"Attempt {attempt + 1}: Connection failed - {last_error}\")\n\n            if attempt < max_retries - 1:\n                sleep_time = wait_time * 2\n                print(f\"Waiting {sleep_time}s before next attempt...\")\n                time.sleep(sleep_time)\n        else:\n            elapsed = round(time.time() - start_time, 2)\n            raise AssertionError(f\"Failed to restart Fledge on port 2005 after {elapsed} seconds: {last_error}\")\n\n\nclass TestAuthAnyWithTLS:\n    def test_login_regular_user_using_password(self, 
reset_fledge, change_auth_method):\n        reset_fledge(authentication=True)\n        change_auth_method(auth_method=\"any\", restart=True, enable_tls=True)\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n        assert not jdoc['admin']\n        global PASSWORD_TOKEN\n        PASSWORD_TOKEN = jdoc[\"token\"]\n\n    def test_logout_me_password_token(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n    def test_login_with_invalid_credentials_regular_user_using_password(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"Fledge\"}))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert \"Username or Password do not match\" == r.reason\n\n    def test_login_username_admin_using_password(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n        assert \"token\" in jdoc\n        assert jdoc['admin']\n        global PASSWORD_TOKEN\n        PASSWORD_TOKEN = 
jdoc[\"token\"]\n\n    def test_login_with_invalid_credentials_admin_using_password(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"FLEDGE\"}))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert \"Username or Password do not match\" == r.reason\n\n    def test_login_with_user_certificate(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/user.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            assert \"token\" in jdoc\n            assert not jdoc['admin']\n\n    def test_login_with_admin_certificate(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/admin.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            assert \"token\" in jdoc\n            assert jdoc['admin']\n            global CERT_TOKEN\n            CERT_TOKEN = jdoc[\"token\"]\n\n    def test_login_with_custom_certificate(self, remove_data_file):\n        # Create a custom certificate and sign it\n        try:\n            subprocess.run([\"openssl genrsa -out custom.key {} 2> /dev/null\".format(KEY_SIZE)], shell=True, check=True)\n            subprocess.run([\"openssl req -new -key custom.key -out custom.csr -subj '/C=IN/CN=user' 2> /dev/null\"],\n                           shell=True, check=True)\n            subprocess.run([\"openssl x509 -req -days 1 -in custom.csr \"\n                            \"-CA $FLEDGE_ROOT/data/etc/certs/ca.cert -CAkey $FLEDGE_ROOT/data/etc/certs/ca.key \"\n                            \"-set_serial 01 -out custom.cert 2> /dev/null\"], shell=True, check=True)\n        except subprocess.CalledProcessError:\n            assert False, \"Certificate creation failed!\"\n\n        # Login with custom certificate\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        cert_file_path = 'custom.cert'\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            assert \"token\" in jdoc\n            assert not jdoc['admin']\n\n        # Delete Certificates and keys created\n        remove_data_file('custom.key')\n        remove_data_file('custom.csr')\n        remove_data_file('custom.cert')\n\n    def test_ping_with_allow_ping_true(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 0 == jdoc['dataRead'], \"data NOT seen in ping header\"\n\n    def test_ingest_with_password_token(self, fledge_url, wait_time):\n        add_south_http(fledge_url, HTTP_SOUTH_SVC_NAME, PASSWORD_TOKEN, wait_time, tls_enabled=True)\n\n        generate_json_for_fogbench(ASSET_NAME)\n\n        send_data_using_fogbench(wait_time)\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        
conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 10 == jdoc['dataRead'], \"data NOT seen in ping header\"\n\n    def test_ingest_with_certificate_token(self, fledge_url, wait_time):\n        add_south_http(fledge_url, HTTP_SOUTH_SVC_NAME_1, CERT_TOKEN, wait_time, tls_enabled=True)\n\n        generate_json_for_fogbench(ASSET_NAME)\n\n        send_data_using_fogbench(wait_time)\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 20 == jdoc['dataRead'], \"data NOT seen in ping header\"\n\n    def test_ping_with_allow_ping_false_with_password_token(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n        _token = jdoc[\"token\"]\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", '/fledge/category/rest_api', body=json.dumps({\"allowPing\": \"false\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = 
conn.getresponse()\n        assert 401 == r.status\n        assert \"Unauthorized\" == r.reason\n\n    def test_ping_with_allow_ping_false_with_certificate_token(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/admin.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            _token = jdoc[\"token\"]\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", '/fledge/category/rest_api', json.dumps({\"allowPing\": \"false\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        assert 401 == r.status\n        assert \"Unauthorized\" == r.reason\n\n    @pytest.mark.parametrize((\"query\", \"expected_values\"), [\n        ('', {'users': [{'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any', 'realName': 'Admin user',\n                         'description': 'admin user'},\n                        {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                         'description': 'normal user'}]}),\n        ('?id=2', {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 
'any', 'realName': 'Normal user',\n                   'description': 'normal user'}),\n        ('?username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                             'realName': 'Admin user', 'description': 'admin user'}),\n        ('?id=1&username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                                  'realName': 'Admin user', 'description': 'admin user'})\n    ])\n    def test_get_users_with_password_token(self, query, expected_values):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/user{}\".format(query), headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    @pytest.mark.parametrize((\"query\", \"expected_values\"), [\n        ('', {'users': [{'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any', 'realName': 'Admin user',\n                         'description': 'admin user'},\n                        {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                         'description': 'normal user'}]}),\n        ('?id=2', {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                   'description': 'normal user'}),\n        ('?username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                             'realName': 'Admin user', 'description': 'admin user'}),\n        ('?id=1&username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                                  'realName': 'Admin user', 'description': 'admin user'})\n    ])\n    def test_get_users_with_certificate_token(self, query, expected_values):\n        conn = 
http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/user{}\".format(query), headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    def test_get_roles_with_password_token(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/user/role\", headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert ROLES == jdoc\n\n    def test_get_roles_with_certificate_token(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/user/role\", headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert ROLES == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"User@123\", \"real_name\": \"AJ\", \"description\": \"Nerd user\"},\n         {'user': {'userName': 'any1', 'userId': 3, 'roleId': 2, 'accessMethod': 'any', 'realName': 'AJ',\n                   'description': 'Nerd user'}, 'message': 'any1 user has been created successfully.'}),\n        ({\"username\": \"admin1\", \"password\": \"F0gl@mp!\", \"role_id\": 1},\n         {'user': {'userName': 'admin1', 'userId': 4, 'roleId': 1, 'accessMethod': 'any', 'realName': '',\n                   'description': ''}, 'message': 'admin1 user has been created successfully.'})\n    ])\n    def test_create_user_with_password_token(self, form_data, expected_values):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        
conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(form_data),\n                     headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any2\", \"password\": \"User@123\", \"real_name\": \"PG\", \"description\": \"Nerd user\"},\n         {'user': {'userName': 'any2', 'userId': 5, 'roleId': 2, 'accessMethod': 'any', 'realName': 'PG',\n                   'description': 'Nerd user'}, 'message': 'any2 user has been created successfully.'}),\n        ({\"username\": \"admin2\", \"password\": \"F0gl@mp!\", \"role_id\": 1},\n         {'user': {'userName': 'admin2', 'userId': 6, 'roleId': 1, 'accessMethod': 'any', 'realName': '',\n                   'description': ''}, 'message': 'admin2 user has been created successfully.'})\n    ])\n    def test_create_user_with_certificate_token(self, form_data, expected_values):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(form_data),\n                     headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"User@123\"}, LOGIN_SUCCESS_MSG),\n        ({\"username\": \"admin1\", \"password\": \"F0gl@mp!\"}, LOGIN_SUCCESS_MSG),\n        ({\"username\": \"any2\", \"password\": \"User@123\"}, LOGIN_SUCCESS_MSG),\n        ({\"username\": \"admin2\", \"password\": \"F0gl@mp!\"}, LOGIN_SUCCESS_MSG)\n    ])\n    def test_login_of_newly_created_user(self, form_data, expected_values):\n        conn = 
http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps(form_data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc['message']\n\n    def test_update_password_with_password_token(self):\n        uid = 3\n        data = {\"current_password\": \"User@123\", \"new_password\": \"F0gl@mp1\"}\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", \"/fledge/user/{}/password\".format(uid), body=json.dumps(data),\n                     headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'Password has been updated successfully for user ID:<{}>.'.format(uid)} == jdoc\n\n    def test_update_password_with_certificate_token(self):\n        uid = 5\n        data = {\"current_password\": \"User@123\", \"new_password\": \"F0gl@mp2\"}\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", \"/fledge/user/{}/password\".format(uid), body=json.dumps(data),\n                     headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'Password has been updated successfully for user ID:<{}>.'.format(uid)} == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"F0gl@mp1\"}, LOGIN_SUCCESS_MSG),\n        ({\"username\": \"any2\", \"password\": \"F0gl@mp2\"}, LOGIN_SUCCESS_MSG)\n    ])\n    def test_login_with_updated_password(self, form_data, expected_values):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, 
context=context)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps(form_data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc['message']\n\n    def test_reset_user_with_password_token(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", \"/fledge/admin/3/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@mp!#1\"}),\n                     headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'User with ID:<3> has been updated successfully.'} == jdoc\n\n    def test_reset_user_with_certificate_token(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", \"/fledge/admin/5/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@mp!#2\"}),\n                     headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'User with ID:<5> has been updated successfully.'} == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"F0gl@mp!#1\"}, LOGIN_SUCCESS_MSG),\n        ({\"username\": \"any2\", \"password\": \"F0gl@mp!#2\"}, LOGIN_SUCCESS_MSG)\n    ])\n    def test_login_with_reset_password(self, form_data, expected_values):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps(form_data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == 
jdoc['message']\n\n    def test_delete_user_with_password_token(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"DELETE\", \"/fledge/admin/4/delete\", headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': \"User has been deleted successfully.\"} == jdoc\n\n    def test_delete_user_with_certificate_token(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"DELETE\", \"/fledge/admin/6/delete\", headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': \"User has been deleted successfully.\"} == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"admin1\", \"password\": \"F0gl@mp!\"}, \"\"),\n        ({\"username\": \"admin2\", \"password\": \"F0gl@mp!\"}, \"\")\n    ])\n    def test_login_of_deleted_user(self, form_data, expected_values):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps(form_data))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert \"User does not exist\" == r.reason\n\n    def test_logout_all_with_password_token(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", '/fledge/1/logout', headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n    def test_verify_logout(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        
conn.request(\"GET\", '/fledge/asset', headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 401 == r.status\n\n    def test_admin_actions_forbidden_for_regular_user_with_pwd_token(self):\n        \"\"\"Test that regular user is not able to perform any actions that only an admin can\"\"\"\n        # Login with regular user\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert not jdoc['admin']\n        _token = jdoc[\"token\"]\n\n        # Create User\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps({\"username\": \"other\", \"password\": \"User@123\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Reset User\n        conn.request(\"PUT\", \"/fledge/admin/2/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@p!\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Delete User\n        conn.request(\"DELETE\", \"/fledge/admin/2/delete\", headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n    def test_admin_actions_forbidden_for_regular_user_with_cert_token(self):\n        \"\"\"Test that regular user is not able to perform any actions that only an admin can\"\"\"\n        # Login with regular user\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n  
      cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/user.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert not jdoc['admin']\n            _token = jdoc[\"token\"]\n\n        # Create User\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps({\"username\": \"other\", \"password\": \"User@123\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Reset User\n        conn.request(\"PUT\", \"/fledge/admin/2/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@p!\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Delete User\n        conn.request(\"DELETE\", \"/fledge/admin/2/delete\", headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n    @pytest.mark.skip(reason=\"Currently this function is not implemented.\")\n    def test_regular_user_access_to_admin_api_config(self, fledge_url):\n        pass\n\n\nclass TestAuthPasswordWithTLS:\n    def test_login_username_regular_user(self, reset_fledge, change_auth_method):\n        reset_fledge(authentication=True)\n        change_auth_method(auth_method=\"password\", restart=True, enable_tls=True)\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"fledge\"}))\n        r = 
conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n        assert not jdoc['admin']\n        global PASSWORD_TOKEN\n        PASSWORD_TOKEN = jdoc[\"token\"]\n\n    def test_logout_me(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n    def test_login_with_invalid_credentials_regular_user(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"Fledge\"}))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert \"Username or Password do not match\" == r.reason\n\n    def test_login_username_admin(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n        assert \"token\" in jdoc\n        assert jdoc['admin']\n        global PASSWORD_TOKEN\n        PASSWORD_TOKEN = jdoc[\"token\"]\n\n    def test_login_with_invalid_credentials_admin(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"FLEDGE\"}))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert \"Username or Password do not match\" == r.reason\n\n    def 
test_login_with_admin_certificate(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/admin.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 400 == r.status\n            assert \"Use valid username & password to log in.\" == r.reason\n\n    def test_ping_with_allow_ping_true(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 0 == jdoc['dataRead'], \"data NOT seen in ping header\"\n\n    def test_ingest(self, fledge_url, wait_time):\n        add_south_http(fledge_url, HTTP_SOUTH_SVC_NAME, PASSWORD_TOKEN, wait_time, tls_enabled=True)\n\n        generate_json_for_fogbench(ASSET_NAME)\n\n        send_data_using_fogbench(wait_time)\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 10 == jdoc['dataRead'], \"data NOT seen in ping header\"\n\n    def test_ping_with_allow_ping_false(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n        _token = jdoc[\"token\"]\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        
conn.request(\"PUT\", '/fledge/category/rest_api', body=json.dumps({\"allowPing\": \"false\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        assert 401 == r.status\n        assert \"Unauthorized\" == r.reason\n\n    @pytest.mark.parametrize((\"query\", \"expected_values\"), [\n        ('', {'users': [{'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any', 'realName': 'Admin user',\n                         'description': 'admin user'},\n                        {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                         'description': 'normal user'}]}),\n        ('?id=2', {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                   'description': 'normal user'}),\n        ('?username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                             'realName': 'Admin user', 'description': 'admin user'}),\n        ('?id=1&username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                                  'realName': 'Admin user', 'description': 'admin user'})\n    ])\n    def test_get_users(self, query, expected_values):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/user{}\".format(query), headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n      
  r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    def test_get_roles(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/user/role\", headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert ROLES == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"User@123\", \"real_name\": \"AJ\", \"description\": \"Nerd user\"},\n         {'user': {'userName': 'any1', 'userId': 3, 'roleId': 2, 'accessMethod': 'any', 'realName': 'AJ',\n                   'description': 'Nerd user'}, 'message': 'any1 user has been created successfully.'}),\n        ({\"username\": \"admin1\", \"password\": \"F0gl@mp!\", \"role_id\": 1},\n         {'user': {'userName': 'admin1', 'userId': 4, 'roleId': 1, 'accessMethod': 'any', 'realName': '',\n                   'description': ''}, 'message': 'admin1 user has been created successfully.'})\n    ])\n    def test_create_user(self, form_data, expected_values):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(form_data),\n                     headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"User@123\"}, LOGIN_SUCCESS_MSG),\n        ({\"username\": \"admin1\", \"password\": \"F0gl@mp!\"}, LOGIN_SUCCESS_MSG)\n    ])\n    def test_login_of_newly_created_user(self, form_data, expected_values):\n        conn = 
http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps(form_data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc['message']\n\n    def test_update_password(self):\n        uid = 3\n        data = {\"current_password\": \"User@123\", \"new_password\": \"F0gl@mp1\"}\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", \"/fledge/user/{}/password\".format(uid), body=json.dumps(data),\n                     headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'Password has been updated successfully for user ID:<{}>.'.format(uid)} == jdoc\n\n    def test_login_with_updated_password(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps({\"username\": \"any1\", \"password\": \"F0gl@mp1\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n\n    def test_reset_user(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", \"/fledge/admin/3/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@mp!\"}),\n                     headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'User with ID:<3> has been updated successfully.'} == jdoc\n\n    def test_login_with_reset_password(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 
1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps({\"username\": \"any1\", \"password\": \"F0gl@mp!\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert LOGIN_SUCCESS_MSG == jdoc['message']\n\n    def test_delete_user(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"DELETE\", \"/fledge/admin/4/delete\", headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': \"User has been deleted successfully.\"} == jdoc\n\n    def test_login_of_deleted_user(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", body=json.dumps({\"username\": \"admin1\", \"password\": \"F0gl@mp!\"}))\n        r = conn.getresponse()\n        assert 404 == r.status\n        assert \"User does not exist\" == r.reason\n\n    def test_logout_all(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", '/fledge/1/logout', headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n    def test_verify_logout(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", '/fledge/asset', headers={\"authorization\": PASSWORD_TOKEN})\n        r = conn.getresponse()\n        assert 401 == r.status\n\n    def test_admin_actions_forbidden_for_regular_user(self):\n        \"\"\"Test that regular user is not able to perform any actions that only an admin can\"\"\"\n        # Login with regular user\n        conn = 
http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"user\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert not jdoc['admin']\n        _token = jdoc[\"token\"]\n\n        # Create User\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps({\"username\": \"other\", \"password\": \"User@123\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Reset User\n        conn.request(\"PUT\", \"/fledge/admin/2/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@p!\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Delete User\n        conn.request(\"DELETE\", \"/fledge/admin/2/delete\", headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n    @pytest.mark.skip(reason=\"Currently this feature is not implemented.\")\n    def test_regular_user_access_to_admin_api_config(self, fledge_url):\n        pass\n\n\nclass TestAuthCertificateWithTLS:\n    def test_login_with_user_certificate(self, reset_fledge, change_auth_method):\n        reset_fledge(authentication=True)\n        change_auth_method(auth_method=\"certificate\", restart=True, enable_tls=True)\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/user.cert')\n        with open(cert_file_path, 'r') as f:\n            
conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            assert \"token\" in jdoc\n            assert not jdoc['admin']\n\n    def test_login_with_admin_certificate(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/admin.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            assert \"token\" in jdoc\n            assert jdoc['admin']\n            global CERT_TOKEN\n            CERT_TOKEN = jdoc[\"token\"]\n\n    def test_login_with_custom_certificate(self, remove_data_file):\n        # Create and sign a custom certificate; check=True makes a non-zero\n        # exit status raise CalledProcessError so the except clause can fire\n        try:\n            subprocess.run([\"openssl genrsa -out custom.key {} 2> /dev/null\".format(KEY_SIZE)], shell=True,\n                           check=True)\n            subprocess.run([\"openssl req -new -key custom.key -out custom.csr -subj '/C=IN/CN=user' 2> /dev/null\"],\n                           shell=True, check=True)\n            subprocess.run([\"openssl x509 -req -days 1 -in custom.csr \"\n                            \"-CA $FLEDGE_ROOT/data/etc/certs/ca.cert -CAkey $FLEDGE_ROOT/data/etc/certs/ca.key \"\n                            \"-set_serial 01 -out custom.cert 2> /dev/null\"], shell=True, check=True)\n        except subprocess.CalledProcessError:\n            assert False, \"Certificate creation failed!\"\n\n        # Login with custom certificate\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        cert_file_path = 'custom.cert'\n        with 
open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            assert \"token\" in jdoc\n            assert not jdoc['admin']\n\n        # Delete certificates and keys created\n        remove_data_file('custom.key')\n        remove_data_file('custom.csr')\n        remove_data_file('custom.cert')\n\n    def test_login_with_invalid_credentials(self, remove_data_file):\n        # check=True is required for a failed command to raise CalledProcessError\n        try:\n            subprocess.run([\"echo 'Fledge certificate' > template.cert\"], shell=True, check=True)\n        except subprocess.CalledProcessError:\n            assert False, \"Certificate creation failed!\"\n\n        # Login with invalid (non-certificate) file content\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        cert_file_path = 'template.cert'\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 400 == r.status\n            assert 'Use a valid certificate to login.' 
== r.reason\n\n        # Delete Certificates and keys created\n        remove_data_file('template.cert')\n\n    def test_login_username_admin(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/login\", json.dumps({\"username\": \"admin\", \"password\": \"fledge\"}))\n        r = conn.getresponse()\n        assert 400 == r.status\n        assert \"Use a valid certificate to login.\" == r.reason\n\n    def test_ping_with_allow_ping_true(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 0 == jdoc['dataRead'], \"data NOT seen in ping header\"\n\n    def test_ingest(self, fledge_url, wait_time):\n        add_south_http(fledge_url, HTTP_SOUTH_SVC_NAME, CERT_TOKEN, wait_time, tls_enabled=True)\n\n        generate_json_for_fogbench(ASSET_NAME)\n\n        send_data_using_fogbench(wait_time)\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        jdoc = json.loads(r.read().decode())\n        assert \"dataRead\" in jdoc\n        assert 10 == jdoc['dataRead'], \"data NOT seen in ping header\"\n\n    def test_ping_with_allow_ping_false(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/admin.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert LOGIN_SUCCESS_MSG == jdoc['message']\n            _token = jdoc[\"token\"]\n\n 
       conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", '/fledge/category/rest_api', json.dumps({\"allowPing\": \"false\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", '/fledge/logout', headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 200 == r.status\n\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/ping\")\n        r = conn.getresponse()\n        assert 401 == r.status\n        assert \"Unauthorized\" == r.reason\n\n    @pytest.mark.parametrize((\"query\", \"expected_values\"), [\n        ('', {'users': [{'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any', 'realName': 'Admin user',\n                         'description': 'admin user'},\n                        {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                         'description': 'normal user'}]}),\n        ('?id=2', {'userId': 2, 'roleId': 2, 'userName': 'user', 'accessMethod': 'any', 'realName': 'Normal user',\n                   'description': 'normal user'}),\n        ('?username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                             'realName': 'Admin user', 'description': 'admin user'}),\n        ('?id=1&username=admin', {'userId': 1, 'roleId': 1, 'userName': 'admin', 'accessMethod': 'any',\n                                  'realName': 'Admin user', 'description': 'admin user'})\n    ])\n    def test_get_users(self, query, expected_values):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/user{}\".format(query), headers={\"authorization\": 
CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    def test_get_roles(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", \"/fledge/user/role\", headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert ROLES == jdoc\n\n    @pytest.mark.parametrize((\"form_data\", \"expected_values\"), [\n        ({\"username\": \"any1\", \"password\": \"User@123\", \"real_name\": \"AJ\", \"description\": \"Nerd user\"},\n         {'user': {'userName': 'any1', 'userId': 3, 'roleId': 2, 'accessMethod': 'any', 'realName': 'AJ',\n                   'description': 'Nerd user'}, 'message': 'any1 user has been created successfully.'}),\n        ({\"username\": \"admin1\", \"password\": \"F0gl@mp!\", \"role_id\": 1},\n         {'user': {'userName': 'admin1', 'userId': 4, 'roleId': 1, 'accessMethod': 'any', 'realName': '',\n                   'description': ''}, 'message': 'admin1 user has been created successfully.'})\n    ])\n    def test_create_user(self, form_data, expected_values):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps(form_data),\n                     headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert expected_values == jdoc\n\n    def test_update_password(self):\n        uid = 3\n        data = {\"current_password\": \"User@123\", \"new_password\": \"F0gl@mp1\"}\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", \"/fledge/user/{}/password\".format(uid), 
body=json.dumps(data),\n                     headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'Password has been updated successfully for user ID:<{}>.'.format(uid)} == jdoc\n\n    def test_reset_user(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", \"/fledge/admin/3/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@mp!\"}),\n                     headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': 'User with ID:<3> has been updated successfully.'} == jdoc\n\n    def test_delete_user(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"DELETE\", \"/fledge/admin/4/delete\", headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert {'message': \"User has been deleted successfully.\"} == jdoc\n\n    def test_logout_all(self):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"PUT\", '/fledge/1/logout', headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert jdoc['logout']\n\n    def test_verify_logout(self, fledge_url):\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        conn.request(\"GET\", '/fledge/asset', headers={\"authorization\": CERT_TOKEN})\n        r = conn.getresponse()\n        assert 401 == r.status\n\n    def test_admin_actions_forbidden_for_regular_user(self):\n        \"\"\"Test that regular 
user is not able to perform any actions that only an admin can\"\"\"\n        # Login with regular user\n        conn = http.client.HTTPSConnection(\"localhost\", 1995, context=context)\n        cert_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/etc/certs/user.cert')\n        with open(cert_file_path, 'r') as f:\n            conn.request(\"POST\", \"/fledge/login\", body=f)\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert not jdoc['admin']\n            _token = jdoc[\"token\"]\n\n        # Create User\n        conn.request(\"POST\", \"/fledge/admin/user\", body=json.dumps({\"username\": \"other\", \"password\": \"User@123\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Reset User\n        conn.request(\"PUT\", \"/fledge/admin/2/reset\", body=json.dumps({\"role_id\": 1, \"password\": \"F0gl@p!\"}),\n                     headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n        # Delete User\n        conn.request(\"DELETE\", \"/fledge/admin/2/delete\", headers={\"authorization\": _token})\n        r = conn.getresponse()\n        assert 403 == r.status\n        r = r.read().decode()\n        assert \"403: Forbidden\" == r\n\n    @pytest.mark.skip(reason=\"Currently this feature is not implemented.\")\n    def test_regular_user_access_to_admin_api_config(self, fledge_url):\n        pass\n"
  },
  {
    "path": "tests/system/python/packages/test_available_and_install_api.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" System tests that obtain the set of available packages for the current platform.\n    Each of those plugin packages is then installed via REST API endpoints.\n\"\"\"\nimport os\nimport subprocess\nimport http.client\nimport json\nimport pytest\nimport py\nimport uuid\nimport time\nfrom pathlib import Path\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\navailable_pkg = []\n\"\"\" Holds all the available packages \"\"\"\ncounter = 1\n\"\"\" Starts at 1 because one OMF north plugin is pre-installed by default \"\"\"\nerrors = []\n\"\"\" Holds all the package discovery errors \"\"\"\n\n\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/scripts/package\".format(PROJECT_ROOT)\n\n\n@pytest.fixture\ndef reset_packages():\n    try:\n        subprocess.run([\"cd {} && ./remove\".format(SCRIPTS_DIR_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"remove package script failed\"\n\n\n@pytest.fixture\ndef setup_package(package_build_version):\n    try:\n        subprocess.run([\"cd {} && ./setup {}\".format(SCRIPTS_DIR_ROOT, package_build_version)],\n                       shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"setup package script failed\"\n\n\n@pytest.fixture\ndef reset_fledge(wait_time):\n    try:\n        subprocess.run([\"cd {} && ./reset\".format(SCRIPTS_DIR_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n\n    # Wait for the fledge server to start\n    time.sleep(wait_time)\n\n\ndef load_data_from_json():\n    _dir = os.path.dirname(os.path.realpath(__file__))\n    file_path = 
py.path.local(_dir).join('/').join('data/package_list.json')\n    with open(str(file_path)) as data_file:\n        json_data = json.load(data_file)\n    return json_data\n\n\nclass TestPackages:\n\n    def test_reset_and_setup(self, reset_packages, setup_package, reset_fledge):\n        # TODO: Remove this workaround\n        #  Use better setup & teardown methods\n        pass\n\n    def test_ping(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/ping')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        assert 1 < jdoc['uptime']\n        assert isinstance(jdoc['uptime'], int)\n        assert 0 == jdoc['dataRead']\n        assert 0 == jdoc['dataSent']\n        assert 0 == jdoc['dataPurged']\n        assert 'Fledge' == jdoc['serviceName']\n        assert 'green' == jdoc['health']\n        assert jdoc['authenticationOptional'] is True\n        assert jdoc['safeMode'] is False\n        assert jdoc['alerts'] == 0\n\n    def test_available_plugin_packages(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/plugins/available')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        global available_pkg\n        plugins = available_pkg = jdoc['plugins']\n        assert len(plugins), \"No plugin found\"\n        assert 'link' in jdoc\n        assert 'fledge-filter-python35' in plugins\n        assert 'fledge-north-http-north' in plugins\n        assert 'fledge-north-kafka' in plugins\n        assert 'fledge-notify-python35' in plugins\n        assert 'fledge-rule-outofbound' in plugins\n        assert 'fledge-south-modbus' in plugins\n        assert 'fledge-south-playback' in plugins\n\n    def 
test_available_service_packages(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/service/available')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        assert len(jdoc['services']), \"No services found\"\n        assert 'fledge-service-notification' in jdoc['services']\n        assert 'link' in jdoc\n\n    def test_install_service_package(self, fledge_url, wait_time, retries):\n        pkg_name = \"fledge-service-notification\"\n        conn = http.client.HTTPConnection(fledge_url)\n        data = {\"format\": \"repository\", \"name\": pkg_name}\n        conn.request(\"POST\", '/fledge/service?action=install', json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert 'id' in jdoc\n        assert '{} service installation started'.format(pkg_name) == jdoc['message']\n        assert jdoc['statusLink'].startswith('fledge/package/install/status?id=')\n\n        # Maximum retry count to check whether the service package has been installed\n        max_retry_count = retries * 3\n        while max_retry_count:\n            # GET Service Package Status\n            conn.request(\"GET\", \"/{}\".format(jdoc['statusLink']))\n            r = conn.getresponse()\n            if r.status != 200:\n                msg = \"GET Service package status failed due to {} while attempting {}\".format(\n                    r.reason, jdoc['statusLink'])\n                print(msg)\n                errors.append(msg)\n                return\n            r = r.read().decode()\n            get_package_status_jdoc = json.loads(r)\n            if get_package_status_jdoc['packageStatus'][0]['status'] == \"success\":\n                # Exit if SUCCESS\n                break\n            elif
 get_package_status_jdoc['packageStatus'][0]['status'] == \"failed\":\n                msg = \"GET Service package status response failed while attempting {}\".format(jdoc['statusLink'])\n                print(msg)\n                errors.append(msg)\n                break\n            # sleep between retries\n            time.sleep(wait_time * 3)\n            max_retry_count -= 1\n\n        # verify the service is installed\n        conn.request(\"GET\", '/fledge/service/installed')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No data found\"\n        assert 4 == len(jdoc['services'])\n        assert 'notification' in jdoc['services']\n\n    def test_install_plugin_package(self, fledge_url, package_build_source_list, package_build_list,\n                                    exclude_packages_list, wait_time, retries):\n        # Packages in \"exclude_packages_list\" are excluded from the tests\n        exclude_packages_list = exclude_packages_list.split(\",\")\n        for pkg in exclude_packages_list:\n            if pkg.strip() in available_pkg:\n                available_pkg.remove(pkg.strip())\n\n        # When \"package_build_source_list\" is true, install all available packages;\n        # otherwise install only the packages listed in the JSON file\n        if package_build_source_list.lower() == 'true':\n            for pkg_name in available_pkg:\n                self._verify_and_install_package(fledge_url, pkg_name, wait_time, retries)\n            assert not errors, \"Package errors have occurred: \\n {}\".format(\"\\n\".join(errors))\n        else:\n            json_data = load_data_from_json()\n            # If 'all' is in 'package_build_list', iterate over every key in the JSON file\n            if 'all' in package_build_list:\n                package_build_list = \",\".join(json_data.keys())\n            my_list = 
package_build_list.split(\",\")\n\n            for pkg_list_cat in my_list:\n                for k, pkg_list_name in json_data[pkg_list_cat][0].items():\n                    for pkg_name in pkg_list_name:\n                        full_pkg_name = 'fledge-{}-{}'.format(k, pkg_name)\n                        if full_pkg_name in available_pkg:\n                            self._verify_and_install_package(fledge_url, full_pkg_name, wait_time, retries)\n                        else:\n                            print(\"{} not found in available package list\".format(full_pkg_name))\n            assert not errors, \"Package errors have occurred: \\n {}\".format(\"\\n\".join(errors))\n\n    def _verify_and_install_package(self, fledge_url, pkg_name, wait_time, retries):\n        global counter\n        global errors\n        print(\"Installing %s package with counter value %s\" % (pkg_name, counter))\n        conn = http.client.HTTPConnection(fledge_url)\n        data = {\"format\": \"repository\", \"name\": pkg_name}\n        # POST Plugin\n        conn.request(\"POST\", '/fledge/plugins', json.dumps(data))\n        r = conn.getresponse()\n        if r.status != 200:\n            msg = \"POST Install plugin failed due to {} while attempting {}\".format(r.reason, pkg_name)\n            print(msg)\n            errors.append(msg)\n            return\n        r = r.read().decode()\n        post_install_jdoc = json.loads(r)\n        assert \"Plugin installation started.\" == post_install_jdoc['message']\n        assert post_install_jdoc['statusLink'].startswith('fledge/package/install/status?id=')\n        assert uuid.UUID(post_install_jdoc['id'])\n\n        # Maximum retry count to check whether the package has been installed\n        max_retry_count = retries * 3\n        while max_retry_count:\n            # GET Package Status\n            conn.request(\"GET\", \"/{}\".format(post_install_jdoc['statusLink']))\n            r = conn.getresponse()\n            if r.status != 200:
\n                msg = \"GET Package status failed due to {} while attempting {}\".format(r.reason, pkg_name)\n                print(msg)\n                errors.append(msg)\n                counter -= 1\n                return\n            r = r.read().decode()\n            get_package_status_jdoc = json.loads(r)\n            if get_package_status_jdoc['packageStatus'][0]['status'] == \"success\":\n                # Special case: installing the flirax8 package also installs the modbus package, since modbus\n                # is a dependency; the available package list is always sorted alphabetically\n                if pkg_name == 'fledge-south-flirax8':\n                    available_pkg.remove('fledge-south-modbus')\n                    counter += 1\n                counter += 1\n                break\n            elif get_package_status_jdoc['packageStatus'][0]['status'] == \"failed\":\n                msg = \"GET Package status response failed while attempting {}\".format(pkg_name)\n                print(msg)\n                errors.append(msg)\n                return\n            # sleep between retries\n            time.sleep(wait_time * 3)\n            max_retry_count -= 1\n\n        # GET Plugins Installed\n        conn.request(\"GET\", '/fledge/plugins/installed')\n        r = conn.getresponse()\n        if r.status != 200:\n            msg = \"GET Plugins installed request failed due to {} while attempting {}\".format(r.reason, pkg_name)\n            print(msg)\n            errors.append(msg)\n            counter -= 1\n            return\n        r = r.read().decode()\n        get_plugins_installed_jdoc = json.loads(r)\n        assert len(get_plugins_installed_jdoc), \"No data found\"\n        if counter != len(get_plugins_installed_jdoc['plugins']):\n            print(\"Error in discovery of %s package\" % pkg_name)\n            errors.append(\"{} package discovery failed\".format(pkg_name))\n            counter -= 1\n"
  },
  {
    "path": "tests/system/python/packages/test_eds.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test EDS\nNote: Please make sure that the EdgeDataStore package is installed before running the test.\n      You may refer to the following documentation for more details on installation:\n      https://osisoft.github.io/Edge-Data-Store-Docs/V1/Installation/Install%20Edge%20Data%20Store_1-0.html\n      For more details on EDS, refer to https://osisoft.github.io/Edge-Data-Store-Docs/V1/index.html\n\"\"\"\n\nimport os\nimport subprocess\nimport http.client\nimport json\nimport time\nfrom pathlib import Path\nfrom datetime import datetime\nimport pytest\nimport utils\nfrom pytest import PKG_MGR\n\n__author__ = \"Yash Tatkondawar\"\n__copyright__ = \"Copyright (c) 2020 Dianomic Systems, Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nasset_name = \"sinusoid\"\nnorth_plugin = \"OMF\"\ntask_name = \"eds\"\n# This gives the path of the directory where fledge is cloned: 
test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/packages/data/\".format(PROJECT_ROOT)\nFLEDGE_ROOT = os.environ.get('FLEDGE_ROOT')\n\n\n@pytest.fixture\ndef reset_fledge(wait_time):\n    try:\n        subprocess.run([\"cd {}/tests/system/python/scripts/package && ./reset\"\n                       .format(PROJECT_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n\n    # Wait for fledge server to start\n    time.sleep(wait_time)\n\n\n@pytest.fixture\ndef reset_eds():\n    eds_reset_url = \"/api/v1/Administration/Storage/Reset\"\n    con = http.client.HTTPConnection(\"localhost\", 5590)\n    con.request(\"POST\", eds_reset_url, \"\")\n    resp = con.getresponse()\n    assert 204 == resp.status\n\n\n@pytest.fixture\ndef remove_and_add_pkgs(package_build_version):\n    try:\n        subprocess.run([\"cd {}/tests/system/python/scripts/package && ./remove\"\n                       .format(PROJECT_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"remove package script failed!\"\n\n    try:\n        subprocess.run([\"cd {}/tests/system/python/scripts/package/ && ./setup {}\"\n                       .format(PROJECT_ROOT, package_build_version)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"setup package script failed\"\n\n    try:\n        subprocess.run([\"sudo {} install -y fledge-south-sinusoid\".format(PKG_MGR)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"installation of sinusoid package failed\"\n\n\ndef get_ping_status(fledge_url):\n    _connection = http.client.HTTPConnection(fledge_url)\n    _connection.request(\"GET\", '/fledge/ping')\n    r = _connection.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    
jdoc = json.loads(r)\n    return jdoc\n\n\ndef get_statistics_map(fledge_url):\n    _connection = http.client.HTTPConnection(fledge_url)\n    _connection.request(\"GET\", '/fledge/statistics')\n    r = _connection.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    return utils.serialize_stats_map(jdoc)\n\n\n@pytest.fixture\ndef check_eds_installed():\n    dpkg_list = os.popen('dpkg --list osisoft.edgedatastore >/dev/null; echo $?')\n    ls_output = dpkg_list.read()\n    assert ls_output == \"0\\n\", \"EDS not installed. Please install it first!\"\n    eds_data_url = \"/api/v1/diagnostics/productinformation\"\n    con = http.client.HTTPConnection(\"localhost\", 5590)\n    con.request(\"GET\", eds_data_url)\n    resp = con.getresponse()\n    r = json.loads(resp.read().decode())\n    assert len(r) != 0, \"EDS not installed. Please install it first!\"\n\n\n@pytest.fixture\ndef start_south_north(fledge_url):\n    payload = {\"name\": \"Sine\", \"type\": \"south\", \"plugin\": \"sinusoid\", \"enabled\": True, \"config\": {}}\n    post_url = \"/fledge/service\"\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"POST\", post_url, json.dumps(payload))\n    res = conn.getresponse()\n    assert 200 == res.status, \"ERROR! 
POST {} request failed\".format(post_url)\n\n    conn = http.client.HTTPConnection(fledge_url)\n    data = {\"name\": task_name,\n            \"plugin\": \"{}\".format(north_plugin),\n            \"type\": \"north\",\n            \"schedule_type\": 3,\n            \"schedule_day\": 0,\n            \"schedule_time\": 0,\n            \"schedule_repeat\": 30,\n            \"schedule_enabled\": \"true\",\n            \"config\": {\"PIServerEndpoint\": {\"value\": \"Edge Data Store\"},\n                       \"NamingScheme\": {\"value\": \"Backward compatibility\"}}\n            }\n    conn.request(\"POST\", '/fledge/scheduled/task', json.dumps(data))\n    r = conn.getresponse()\n    assert 200 == r.status\n\n\ndef verify_eds_data():\n    eds_data_url = \"/api/v1/tenants/default/namespaces/default/streams/1measurement_sinusoid/Data/Last\"\n    con = http.client.HTTPConnection(\"localhost\", 5590)\n    con.request(\"GET\", eds_data_url)\n    resp = con.getresponse()\n    r = json.loads(resp.read().decode())\n    return r\n\n\ndef verify_eds_egress_asset_tracking(fledge_url):\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"GET\", '/fledge/asset')\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    retval = json.loads(r)\n    assert len(retval) == 1\n    assert asset_name == retval[0][\"assetCode\"]\n\n    tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n    assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n    tracked_item = tracking_details[\"track\"][0]\n    assert \"Sine\" == tracked_item[\"service\"]\n    assert asset_name == tracked_item[\"asset\"]\n    assert asset_name == tracked_item[\"plugin\"]\n\n    egress_tracking_details = utils.get_asset_tracking_details(fledge_url, \"Egress\")\n    assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n    tracked_item = egress_tracking_details[\"track\"][0]\n    assert task_name == 
tracked_item[\"service\"]\n    assert asset_name == tracked_item[\"asset\"]\n    assert north_plugin == tracked_item[\"plugin\"]\n\n\nclass TestEDS:\n    def test_eds(self, check_eds_installed, reset_eds, remove_and_add_pkgs, reset_fledge, start_south_north, fledge_url,\n                 wait_time):\n        time.sleep(wait_time * 4)\n\n        ping_response = get_ping_status(fledge_url)\n        assert 0 < ping_response[\"dataRead\"]\n        assert 0 < ping_response[\"dataSent\"]\n\n        actual_stats_map = get_statistics_map(fledge_url)\n        assert 0 < actual_stats_map['SINUSOID']\n        assert 0 < actual_stats_map['READINGS']\n        assert 0 < actual_stats_map['Readings Sent']\n        assert 0 < actual_stats_map[task_name]\n\n        verify_eds_egress_asset_tracking(fledge_url)\n\n        r = verify_eds_data()\n        assert asset_name in r, \"Data in EDS not found!\"\n        ts = r.get(\"Time\")\n        assert ts.find(datetime.now().strftime(\"%Y-%m-%d\")) != -1, \"Latest data not found in EDS!\"\n"
  },
  {
    "path": "tests/system/python/packages/test_filters.py",
    "content": "# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test Filters Package System tests:\n        fledge-south-http-south south plugin\n        fledge-filter-expression, fledge-filter-python35 filter plugins\n\"\"\"\n\n# FIXME: This test requires aiocoap,cbor pip packages installed explicitly due to FOGL-3500\n\n__author__ = \"Yash Tatkondawar\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nimport subprocess\nimport http.client\nimport json\nimport os\nimport time\nimport urllib.parse\nfrom pathlib import Path\nimport utils\nimport pytest\n\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/scripts/package/\".format(PROJECT_ROOT)\nDATA_DIR_ROOT = \"{}/tests/system/python/packages/data/\".format(PROJECT_ROOT)\n\nHTTP_SOUTH_SVC_NAME = \"South_http #1\"\nASSET_NAME_PY35 = \"end_to_end_py35\"\nASSET_NAME_SP = \"end_to_end_sp\"\nASSET_NAME_EMA = \"end_to_end_ema\"\nTEMPLATE_NAME = \"template.json\"\nSENSOR_VALUE = 12.25\n\n\n@pytest.fixture\ndef reset_fledge(wait_time):\n    try:\n        subprocess.run([\"cd {} && ./reset\"\n                       .format(SCRIPTS_DIR_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n\n\ndef call_fogbench(wait_time):\n    execute_fogbench = 'cd {}/extras/python ;python3 -m fogbench -t $FLEDGE_ROOT/data/tests/{} ' \\\n                       '-p http -O 10'.format(PROJECT_ROOT, TEMPLATE_NAME)\n    exit_code = os.system(execute_fogbench)\n    assert 0 == exit_code\n    time.sleep(wait_time)\n\n\n@pytest.fixture\ndef add_south_http_with_filter(add_south, add_filter, fledge_url):\n    south_plugin = \"http-south\"\n    add_south(south_plugin, None, fledge_url, service_name=\"{}\".format(HTTP_SOUTH_SVC_NAME),\n              plugin_discovery_name=\"http_south\", 
installation_type='package')\n\n    filter_cfg_scale = {\"enable\": \"true\"}\n    add_filter(\"python35\", None, \"py35\", filter_cfg_scale, fledge_url, HTTP_SOUTH_SVC_NAME, installation_type='package')\n\n    yield add_south_http_with_filter\n\n\n@pytest.fixture\ndef add_filter_expression(add_filter, fledge_url):\n    filter_cfg_exp = {\"name\": \"double\", \"expression\": \"sensor*2\", \"enable\": \"true\"}\n    add_filter(\"expression\", None, \"expr\", filter_cfg_exp, fledge_url, HTTP_SOUTH_SVC_NAME, installation_type='package')\n\n\ndef generate_json(asset_name):\n    subprocess.run([\"cd $FLEDGE_ROOT/data && mkdir -p tests\"], shell=True, check=True)\n\n    fogbench_template_path = os.path.join(\n        os.path.expandvars('${FLEDGE_ROOT}'), 'data/tests/{}'.format(TEMPLATE_NAME))\n    with open(fogbench_template_path, \"w\") as f:\n        f.write(\n            '[{\"name\": \"%s\", \"sensor_values\": '\n            '[{\"name\": \"sensor\", \"type\": \"number\", \"min\": %f, \"max\": %f, \"precision\": 2}]}]' % (\n                asset_name, SENSOR_VALUE, SENSOR_VALUE))\n\n\ndef verify_south_added(fledge_url, name):\n    get_url = \"/fledge/south\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"services\"])\n    assert \"name\" in result[\"services\"][0]\n    assert name == result[\"services\"][0][\"name\"]\n\n\ndef verify_ping(fledge_url):\n    get_url = \"/fledge/ping\"\n    ping_result = utils.get_request(fledge_url, get_url)\n    assert \"dataRead\" in ping_result\n    assert 0 < ping_result['dataRead'], \"data NOT seen in ping header\"\n    return ping_result\n\n\ndef verify_asset(fledge_url, asset_name):\n    asset_name = \"http-\" + asset_name\n    get_url = \"/fledge/asset\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result), \"No asset found\"\n    assert \"assetCode\" in result[0]\n    assert asset_name == result[0][\"assetCode\"]\n    return result[0]\n\n\ndef verify_readings(fledge_url, 
asset_name):\n    asset_name = \"http-\" + asset_name\n    get_url = \"/fledge/asset/{}\".format(asset_name)\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result), \"No readings found\"\n    assert \"reading\" in result[0]\n    return result[0]\n\n\nclass TestPython35:\n    def test_filter_python35_with_uploaded_script(self, clean_setup_fledge_packages, reset_fledge,\n                                                  add_south_http_with_filter, fledge_url, wait_time):\n        # Wait until south service is created\n        time.sleep(wait_time * 2)\n        verify_south_added(fledge_url, HTTP_SOUTH_SVC_NAME)\n\n        generate_json(ASSET_NAME_PY35)\n\n        url = fledge_url + urllib.parse.quote('/fledge/category/{}_py35/script/upload'\n                                              .format(HTTP_SOUTH_SVC_NAME))\n        script_path = 'script=@{}/readings35.py'.format(DATA_DIR_ROOT)\n        upload_script = \"curl -sX POST '{}' -F '{}'\".format(url, script_path)\n        exit_code = os.system(upload_script)\n        assert 0 == exit_code\n\n        call_fogbench(wait_time)\n\n        ping_response = verify_ping(fledge_url)\n        assert 10 == ping_response[\"dataRead\"]\n\n        asset_response = verify_asset(fledge_url, ASSET_NAME_PY35)\n        assert 10 == asset_response[\"count\"]\n\n        reading_resp = verify_readings(fledge_url, ASSET_NAME_PY35)\n        assert 2.45 == reading_resp[\"reading\"][\"sensor\"]\n\n    def test_filter_python35_with_updated_content(self, fledge_url, retries, wait_time):\n        copy_file = \"cp {}/readings35.py {}/readings35.py.bak\".format(DATA_DIR_ROOT, DATA_DIR_ROOT)\n        exit_code = os.system(copy_file)\n        assert 0 == exit_code\n\n        sed_cmd = \"sed -i \\\"s+newVal .*+newVal = reading[key] * 10+\\\" {}/readings35.py\".format(DATA_DIR_ROOT)\n        exit_code = os.system(sed_cmd)\n        assert 0 == exit_code\n\n        url = fledge_url + 
urllib.parse.quote('/fledge/category/{}_py35/script/upload'\n                                              .format(HTTP_SOUTH_SVC_NAME))\n        script_path = 'script=@{}/readings35.py'.format(DATA_DIR_ROOT)\n        upload_script = \"curl -sX POST '{}' -F '{}'\".format(url, script_path)\n        exit_code = os.system(upload_script)\n        assert 0 == exit_code\n\n        time.sleep(wait_time)\n\n        call_fogbench(wait_time)\n\n        ping_response = verify_ping(fledge_url)\n        assert 20 == ping_response[\"dataRead\"]\n\n        asset_response = verify_asset(fledge_url, ASSET_NAME_PY35)\n        assert 20 == asset_response[\"count\"]\n\n        reading_resp = verify_readings(fledge_url, ASSET_NAME_PY35)\n        assert 122.5 == reading_resp[\"reading\"][\"sensor\"]\n\n        move_file = \"mv {}/readings35.py.bak {}/readings35.py\".format(DATA_DIR_ROOT, DATA_DIR_ROOT)\n        exit_code = os.system(move_file)\n        assert 0 == exit_code, \"{} cmd failed!\".format(move_file)\n\n    def test_filter_python35_disable_enable(self, fledge_url, retries, wait_time):\n        data = {\"enable\": \"false\"}\n        utils.put_request(fledge_url, urllib.parse.quote(\"/fledge/category/{}_py35\".format(HTTP_SOUTH_SVC_NAME),\n                                                         safe='?,=,&,/'), data)\n\n        call_fogbench(wait_time)\n\n        ping_response = verify_ping(fledge_url)\n        assert 30 == ping_response[\"dataRead\"]\n\n        asset_response = verify_asset(fledge_url, ASSET_NAME_PY35)\n        assert 30 == asset_response[\"count\"]\n\n        reading_resp = verify_readings(fledge_url, ASSET_NAME_PY35)\n        assert 12.25 == reading_resp[\"reading\"][\"sensor\"]\n\n        data = {\"enable\": \"true\"}\n        utils.put_request(fledge_url, urllib.parse.quote(\"/fledge/category/{}_py35\"\n                                                         .format(HTTP_SOUTH_SVC_NAME), safe='?,=,&,/'), data)\n\n        call_fogbench(wait_time)\n\n     
   ping_response = verify_ping(fledge_url)\n        assert 40 == ping_response[\"dataRead\"]\n\n        asset_response = verify_asset(fledge_url, ASSET_NAME_PY35)\n        assert 40 == asset_response[\"count\"]\n\n        reading_resp = verify_readings(fledge_url, ASSET_NAME_PY35)\n        assert 122.5 == reading_resp[\"reading\"][\"sensor\"]\n\n    def test_filter_python35_expression(self, add_filter_expression, fledge_url, wait_time):\n        data = {\"schedule_name\": \"{}\".format(HTTP_SOUTH_SVC_NAME)}\n        put_url = \"/fledge/schedule/disable\"\n        utils.put_request(fledge_url, put_url, data)\n\n        data = {\"schedule_name\": \"{}\".format(HTTP_SOUTH_SVC_NAME)}\n        put_url = \"/fledge/schedule/enable\"\n        utils.put_request(fledge_url, put_url, data)\n\n        time.sleep(wait_time)\n\n        call_fogbench(wait_time)\n\n        ping_response = verify_ping(fledge_url)\n        assert 50 == ping_response[\"dataRead\"]\n\n        asset_response = verify_asset(fledge_url, ASSET_NAME_PY35)\n        assert 50 == asset_response[\"count\"]\n\n        reading_resp = verify_readings(fledge_url, ASSET_NAME_PY35)\n        assert 122.5 == reading_resp[\"reading\"][\"sensor\"]\n        assert 245.0 == reading_resp[\"reading\"][\"double\"]\n\n    def test_delete_filter_python35(self, fledge_url, wait_time):\n        data = {\"pipeline\": [\"expr\"]}\n        put_url = \"/fledge/filter/{}/pipeline?allow_duplicates=true&append_filter=false\" \\\n            .format(HTTP_SOUTH_SVC_NAME)\n        utils.put_request(fledge_url, urllib.parse.quote(put_url, safe='?,=,&,/'), data)\n\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/filter/py35')\n        res = conn.getresponse()\n        assert 200 == res.status, \"ERROR! 
Failed to delete filter\"\n\n        call_fogbench(wait_time)\n\n        ping_response = verify_ping(fledge_url)\n        assert 60 == ping_response[\"dataRead\"]\n\n        asset_response = verify_asset(fledge_url, ASSET_NAME_PY35)\n        assert 60 == asset_response[\"count\"]\n\n        reading_resp = verify_readings(fledge_url, ASSET_NAME_PY35)\n        assert 12.25 == reading_resp[\"reading\"][\"sensor\"]\n        assert 24.5 == reading_resp[\"reading\"][\"double\"]\n\n    def test_filter_python35_by_enabling_disabling_south(self, fledge_url, wait_time):\n        data = {\"schedule_name\": \"{}\".format(HTTP_SOUTH_SVC_NAME)}\n        put_url = \"/fledge/schedule/disable\"\n        utils.put_request(fledge_url, put_url, data)\n\n        time.sleep(wait_time)\n\n        get_url = \"/fledge/south\"\n        result = utils.get_request(fledge_url, get_url)\n        assert HTTP_SOUTH_SVC_NAME == result[\"services\"][0][\"name\"]\n        assert \"shutdown\" == result[\"services\"][0][\"status\"]\n\n        data = {\"name\": \"py35\", \"plugin\": \"python35\", \"filter_config\": {\"enable\": \"true\"}}\n        utils.post_request(fledge_url, \"/fledge/filter\", data)\n\n        data = {\"pipeline\": [\"py35\"]}\n        put_url = \"/fledge/filter/{}/pipeline?allow_duplicates=true&append_filter=true\" \\\n            .format(HTTP_SOUTH_SVC_NAME)\n        utils.put_request(fledge_url, urllib.parse.quote(put_url, safe='?,=,&,/'), data)\n\n        url = fledge_url + urllib.parse.quote('/fledge/category/{}_py35/script/upload'\n                                              .format(HTTP_SOUTH_SVC_NAME))\n        script_path = 'script=@{}/readings35.py'.format(DATA_DIR_ROOT)\n        upload_script = \"curl -sX POST '{}' -F '{}'\".format(url, script_path)\n        exit_code = os.system(upload_script)\n        assert 0 == exit_code\n\n        data = {\"schedule_name\": HTTP_SOUTH_SVC_NAME}\n        put_url = \"/fledge/schedule/enable\"\n        utils.put_request(fledge_url, 
put_url, data)\n\n        time.sleep(wait_time)\n\n        call_fogbench(wait_time)\n\n        ping_response = verify_ping(fledge_url)\n        assert 70 == ping_response[\"dataRead\"]\n\n        asset_response = verify_asset(fledge_url, ASSET_NAME_PY35)\n        assert 70 == asset_response[\"count\"]\n\n        reading_resp = verify_readings(fledge_url, ASSET_NAME_PY35)\n        assert 2.45 == reading_resp[\"reading\"][\"sensor\"]\n        assert 4.9 == reading_resp[\"reading\"][\"double\"]\n\n    def test_delete_south_service(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", urllib.parse.quote('/fledge/service/{}'\n                                                  .format(HTTP_SOUTH_SVC_NAME), safe='?,=,&,/'))\n        res = conn.getresponse()\n        assert 200 == res.status, \"ERROR! Failed to delete service\"\n\n        get_url = \"/fledge/south\"\n        result = utils.get_request(fledge_url, get_url)\n        assert 0 == len(result[\"services\"])\n\n        filename = \"{}_py35_script_readings35.py\".format(HTTP_SOUTH_SVC_NAME).lower()\n        filepath = \"$FLEDGE_ROOT/data/scripts/{}\".format(filename)\n        # os.path.isfile() does not expand environment variables, so expand $FLEDGE_ROOT first\n        assert not os.path.isfile(os.path.expandvars(filepath))\n\n"
  },
  {
    "path": "tests/system/python/packages/test_lab.py",
    "content": "# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Lab packages System tests\n\"\"\"\n\n__author__ = \"Yash Tatkondawar\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nimport subprocess\nimport http.client\nimport json\nimport pytest\nimport os\nimport time\nimport urllib.parse\nimport utils\nfrom pathlib import Path\n\n# This  gives the path of directory where fledge is cloned. test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/lab/scripts/\".format(PROJECT_ROOT)\n\n@pytest.fixture\ndef setup_fledge_packages():\n    try:\n        subprocess.run([\"cd {}/tests/system/lab && ./remove\"\n                       .format(PROJECT_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"remove package script failed!\"\n\n    try:\n        subprocess.run([\"cd {}/tests/system/lab && ./install\"\n                       .format(PROJECT_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"install package script failed\"\n\n\nclass TestSouth:\n    def test_south_sinusoid(self, setup_fledge_packages, fledge_url, retries, wait_time):\n        data = {\"name\": \"Sine\", \"type\": \"south\", \"plugin\": \"sinusoid\", \"enabled\": True, \"config\": {}}\n        post_url = \"/fledge/service\"\n        utils.post_request(fledge_url, post_url, data)\n\n        time.sleep(wait_time * 2)\n\n        while retries:\n            get_url = \"/fledge/south\"\n            result = utils.get_request(fledge_url, get_url)\n\n            assert len(result[\"services\"])\n            assert \"name\" in result[\"services\"][0]\n            assert \"Sine\" == result[\"services\"][0][\"name\"]\n\n            assert \"assets\" in result[\"services\"][0]\n            if \"asset\" in 
result[\"services\"][0][\"assets\"][0]:\n                assert \"sinusoid\" == result[\"services\"][0][\"assets\"][0][\"asset\"]\n                break\n            retries -= 1\n\n        if retries == 0:\n            assert False, \"TIMEOUT! sinusoid data NOT seen in South tab.\" + fledge_url + \"/fledge/south\"\n\n    def test_sinusoid_in_asset(self, fledge_url):\n        get_url = \"/fledge/asset\"\n        result = utils.get_request(fledge_url, get_url)\n        assert len(result)\n        assert \"assetCode\" in result[0]\n        assert \"sinusoid\" == result[0][\"assetCode\"], \"sinusoid data NOT seen in Asset tab\"\n\n    def test_sinusoid_ping(self, fledge_url):\n        get_url = \"/fledge/ping\"\n        ping_result = utils.get_request(fledge_url, get_url)\n        assert \"dataRead\" in ping_result\n        assert 0 < ping_result['dataRead'], \"sinusoid data NOT seen in ping header\"\n\n    def test_sinusoid_graph(self, fledge_url):\n        get_url = \"/fledge/asset/sinusoid?seconds=600\"\n        result = utils.get_request(fledge_url, get_url)\n        assert len(result)\n        assert \"reading\" in result[0]\n        assert \"sinusoid\" in result[0][\"reading\"]\n        assert 0 < result[0][\"reading\"][\"sinusoid\"]\n\n    def test_delete_south_sinusoid(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/service/Sine')\n        res = conn.getresponse()\n        assert 200 == res.status\n        \n        get_url = \"/fledge/south\"\n        result = utils.get_request(fledge_url, get_url)\n        assert [] == result[\"services\"]\n        \n        get_url = \"/fledge/service\"\n        result = utils.get_request(fledge_url, get_url)\n        assert \"Sine\" not in [s[\"name\"] for s in result[\"services\"]]\n                  \n        get_url = \"/fledge/category\"\n        result = utils.get_request(fledge_url, get_url)\n        assert \"Sine\" not in [s[\"key\"] for s in 
result[\"categories\"]]\n        \n        get_url = \"/fledge/schedule\"\n        result = utils.get_request(fledge_url, get_url)\n        assert \"Sine\" not in [s[\"name\"] for s in result[\"schedules\"]]\n\n\n@pytest.mark.skip(reason=\"FIXME: To enable North verification on the basis of \"\n                         \"--skip-verify-north-interface fixture value\")\nclass TestNorth:\n\n    def test_north_pi_egress(self, fledge_url, pi_host, pi_port, pi_token, retries):\n        payload = {\"name\": \"PI Server\", \"plugin\": \"OMF\", \"type\": \"north\", \"schedule_repeat\": 30,\n                   \"schedule_type\": \"3\", \"schedule_enabled\": True,\n                   \"config\": {\n                              \"ServerHostname\": {\"value\": pi_host},\n                              \"ServerPort\": {\"value\": pi_port},\n                              \"producerToken\": {\"value\": pi_token},\n                              \"compression\": {\"value\": \"false\"},\n                              \"NamingScheme\": {\"value\": \"Backward compatibility\"}\n                              }\n                    }\n        utils.post_request(fledge_url, \"/fledge/scheduled/task\", payload)\n\n        while retries:\n            r = utils.get_request(fledge_url, \"/fledge/north\")\n            if \"sent\" in r[0]:\n                assert 0 < r[0][\"sent\"]\n                break\n            retries -= 1\n\n        if retries == 0:\n            assert False, \"TIMEOUT! 
PI data sent not seen in North tab.\" + fledge_url + \"/fledge/north\"\n\n    def test_north_ping(self, fledge_url):\n        r = utils.get_request(fledge_url, \"/fledge/ping\")\n        assert \"dataSent\" in r\n        assert 0 < r['dataSent']\n\n    def test_north_stats_history(self, fledge_url, retries, wait_time):\n        while retries:\n            # time.sleep(wait_time)\n            get_url = \"/fledge/statistics/history?minutes=10\"\n            r = utils.get_request(fledge_url, get_url)\n            if \"PI Server\" in r[\"statistics\"][0]:\n                assert 0 < r[\"statistics\"][0][\"PI Server\"]\n                break\n            retries -= 1\n\n        if retries == 0:\n            assert False, \"TIMEOUT! PI data sent not seen in sent graph. \" + fledge_url + \"/fledge/statistics/history?minutes=10\"\n\n       \nclass TestSinusoidMaxSquare:\n\n    def add_sinusoid_with_square_and_max_filter(self, fledge_url):\n        data = {\"name\": \"Sine\", \"type\": \"south\", \"plugin\": \"sinusoid\", \"enabled\": True, \"config\": {}}\n        post_url = \"/fledge/service\"\n        utils.post_request(fledge_url, post_url, data)\n\n        # Square expression filter\n        data = {\"name\": \"Square\", \"plugin\": \"expression\",\n                \"filter_config\": {\"name\": \"square\", \"expression\": \"if(sinusoid>0,0.5,-0.5)\", \"enable\": \"true\"}}\n        utils.post_request(fledge_url, \"/fledge/filter\", data)\n\n        data = {\"pipeline\": [\"Square\"]}\n        put_url = \"/fledge/filter/Sine/pipeline?allow_duplicates=true&append_filter=true\"\n        utils.put_request(fledge_url, put_url, data)\n\n        # Max expression filter\n        data = {\"name\": \"Max2\", \"plugin\": \"expression\",\n                \"filter_config\": {\"name\": \"max\", \"expression\": \"max(sinusoid, square)\", \"enable\": \"true\"}}\n        post_url = \"/fledge/filter\"\n        utils.post_request(fledge_url, post_url, data)\n\n        data = 
{\"pipeline\": [\"Max2\"]}\n        put_url = \"/fledge/filter/Sine/pipeline?allow_duplicates=true&append_filter=true\"\n        utils.put_request(fledge_url, put_url, data)\n\n    def test_sinusoid_max_square(self, fledge_url, retries, wait_time):\n        self.add_sinusoid_with_square_and_max_filter(fledge_url)\n        time.sleep(wait_time * 2)\n        while retries:\n            get_url = \"/fledge/asset/sinusoid?seconds=600\"\n            r = utils.get_request(fledge_url, get_url)\n            if \"square\" in r[0][\"reading\"] and \"max\" in r[0][\"reading\"]:\n                assert 0 < r[0][\"reading\"][\"square\"]\n                assert 0 < r[0][\"reading\"][\"max\"]\n                break\n            retries -= 1\n\n        if retries == 0:\n            assert False, \"TIMEOUT! square and max data not seen in sinusoid graph.\" + fledge_url + \"/fledge/asset/sinusoid?seconds=600\"\n\n\nclass TestRandomwalk:\n\n    def test_add_randomwalk_south(self, fledge_url, wait_time):\n        payload = {\"name\": \"Random\", \"type\": \"south\", \"plugin\": \"randomwalk\", \"enabled\": True, \"config\": {}}\n        post_url = \"/fledge/service\"\n        utils.post_request(fledge_url, post_url, payload)\n\n        time.sleep(wait_time*2)\n\n        # verify Random service\n        get_url = \"/fledge/service\"\n        result = utils.get_request(fledge_url, get_url)\n        assert \"Random\" in [s[\"name\"] for s in result[\"services\"]]\n\n    def test_randomwalk_with_filter_python35(self, fledge_url, wait_time, retries):\n        data = {\"name\": \"Ema\", \"plugin\": \"python35\", \"filter_config\": {\"config\": {\"rate\": 0.07}, \"enable\": \"true\"}}\n        utils.post_request(fledge_url, \"/fledge/filter\", data)\n\n        data = {\"pipeline\": [\"Ema\"]}\n        put_url = \"/fledge/filter/Random/pipeline?allow_duplicates=true&append_filter=true\"\n        utils.put_request(fledge_url, put_url, data)\n\n        url = fledge_url + 
'/fledge/category/Random_Ema/script/upload'\n        script_path = 'script=@{}/ema.py'.format(SCRIPTS_DIR_ROOT)\n        upload_script = \"curl -sX POST '{}' -F '{}'\".format(url, script_path)\n        exit_code = os.system(upload_script)\n        assert exit_code == 0\n\n        time.sleep(wait_time)\n\n        while retries:\n            get_url = \"/fledge/asset/randomwalk?seconds=600\"\n            data = utils.get_request(fledge_url, get_url)\n            if len(data) and \"randomwalk\" in data[0][\"reading\"] and \"ema\" in data[0][\"reading\"]:\n                assert 0 < data[0][\"reading\"][\"randomwalk\"]\n                assert 0 < data[0][\"reading\"][\"ema\"]\n                # TODO: verify asset tracker entry\n                break\n            retries -= 1\n\n        if retries == 0:\n            assert False, \"TIMEOUT! randomwalk and ema data not seen in randomwalk graph.\" + fledge_url + \"/fledge/asset/randomwalk?seconds=600\"\n\n    def test_delete_randomwalk_south(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"DELETE\", '/fledge/service/Random')\n        res = conn.getresponse()\n        assert 200 == res.status, \"ERROR! 
Failed to delete randomwalk service\"\n\n        get_url = \"/fledge/service\"\n        result = utils.get_request(fledge_url, get_url)\n        assert \"Random\" not in [s[\"name\"] for s in result[\"services\"]]\n\n        get_url = \"/fledge/category\"\n        result = utils.get_request(fledge_url, get_url)\n        assert \"Random\" not in [c[\"key\"] for c in result[\"categories\"]]\n\n        get_url = \"/fledge/schedule\"\n        result = utils.get_request(fledge_url, get_url)\n        assert \"Random\" not in [sch[\"name\"] for sch in result[\"schedules\"]]\n\n\nclass TestRandomwalk1:\n\n    def test_randomwalk1_south_with_python35_filter(self, fledge_url, wait_time, retries):\n        print(\"Add Randomwalk south service again ...\")\n        data = {\"name\": \"Random1\", \"type\": \"south\", \"plugin\": \"randomwalk\", \"enabled\": True,\n                \"config\": {\"assetName\": {\"value\": \"randomwalk1\"}}}\n        utils.post_request(fledge_url, \"/fledge/service\", data)\n\n        # need to wait for Fledge to be ready to accept python file\n        time.sleep(wait_time)\n\n        data = {\"name\": \"PF\", \"plugin\": \"python35\", \"filter_config\": {\"config\": {\"rate\": 0.07}, \"enable\": \"true\"}}\n        utils.post_request(fledge_url, \"/fledge/filter\", data)\n\n        # Apply PF to Random\n        data = {\"pipeline\": [\"PF\"]}\n        put_url = \"/fledge/filter/Random1/pipeline?allow_duplicates=true&append_filter=true\"\n        utils.put_request(fledge_url, put_url, data)\n\n        print(\"upload trendc script...\")\n        url = fledge_url + '/fledge/category/Random1_PF/script/upload'\n\n        script_path = 'script=@{}/trendc.py'.format(SCRIPTS_DIR_ROOT)\n        upload_script = \"curl -sX POST '{}' -F '{}'\".format(url, script_path)\n        exit_code = os.system(upload_script)\n        assert exit_code == 0\n\n        time.sleep(wait_time)\n\n        while retries:\n            get_url = 
\"/fledge/asset/randomwalk1?seconds=600\"\n            data = utils.get_request(fledge_url, get_url)\n            if len(data) and \"randomwalk\" in data[0][\"reading\"] and \"ema_long\" in data[0][\"reading\"]:\n                assert 0 < data[0][\"reading\"][\"randomwalk\"]\n                assert 0 < data[0][\"reading\"][\"ema_long\"]\n                # TODO: verify ema_short and trend in reading\n                # TODO: verify asset tracker entry\n                break\n            retries -= 1\n\n        if retries == 0:\n            assert False, \"TIMEOUT! randomwalk and ema_long data not seen in randomwalk1 graph.\" + fledge_url + \"/fledge/asset/randomwalk1?seconds=600\"\n\n    def test_randomwalk1_python35_filter_script_content_reconfig(self, fledge_url, retries, wait_time):\n        copy_file = \"cp {}/trendc.py {}/trendc.py.bak\".format(SCRIPTS_DIR_ROOT, SCRIPTS_DIR_ROOT)\n        exit_code = os.system(copy_file)\n        assert exit_code == 0\n\n        sed_cmd = \"sed -i \\\"s/reading\\[b'ema_long/reading\\[b'ema_longX/g\\\" {}/trendc.py\".format(SCRIPTS_DIR_ROOT)\n        exit_code = os.system(sed_cmd)\n        assert exit_code == 0\n\n        print(\"upload modified trendc script...\")\n        url = fledge_url + '/fledge/category/Random1_PF/script/upload'\n        script_path = 'script=@{}/trendc.py'.format(SCRIPTS_DIR_ROOT)\n        upload_script = \"curl -sX POST '{}' -F '{}'\".format(url, script_path)\n        exit_code = os.system(upload_script)\n        assert exit_code == 0\n\n        time.sleep(wait_time)\n\n        while retries:\n            get_url = \"/fledge/asset/randomwalk1?seconds=600\"\n            data = utils.get_request(fledge_url, get_url)\n            if len(data) and \"randomwalk\" in data[0][\"reading\"] and \"ema_longX\" in data[0][\"reading\"]:\n                assert 0 < data[0][\"reading\"][\"randomwalk\"]\n                assert 0 < data[0][\"reading\"][\"ema_longX\"]\n                break\n                # TODO: 
verify asset tracker entry\n            retries -= 1\n\n        move_file = \"mv {}/trendc.py.bak {}/trendc.py\".format(SCRIPTS_DIR_ROOT, SCRIPTS_DIR_ROOT)\n        exit_code = os.system(move_file)\n        assert exit_code == 0, \"{} cmd failed!\".format(move_file)\n\n        if retries == 0:\n            assert False, \"TIMEOUT! randomwalk and ema_longX data not seen in randomwalk1 graph.\" + fledge_url + \"/fledge/asset/randomwalk1?seconds=600\"\n\n    def test_randomwalk1_python35_filter_script_reconfig(self, fledge_url, retries, wait_time):\n        url = fledge_url + '/fledge/category/Random1_PF/script/upload'\n        script_path = 'script=@{}/ema.py'.format(SCRIPTS_DIR_ROOT)\n        upload_script = \"curl -sX POST '{}' -F '{}'\".format(url, script_path)\n        exit_code = os.system(upload_script)\n        assert exit_code == 0\n\n        time.sleep(wait_time)\n\n        while retries:\n            get_url = \"/fledge/asset/randomwalk1?seconds=600\"\n            data = utils.get_request(fledge_url, get_url)\n            if len(data) and \"randomwalk\" in data[0][\"reading\"] and \"ema\" in data[0][\"reading\"]:\n                assert 0 < data[0][\"reading\"][\"randomwalk\"]\n                assert 0 < data[0][\"reading\"][\"ema\"]\n                # TODO: verify asset tracker entry\n                break\n            retries -= 1\n\n        if retries == 0:\n            assert False, \"TIMEOUT! 
randomwalk and ema data not seen in randomwalk1 graph.\" + fledge_url + \"/fledge/asset/randomwalk1?seconds=600\"\n\n\n@pytest.mark.skipif(os.uname()[4][:3] != 'arm', reason=\"only compatible with arm architecture\")\nclass TestRpiEnviro:\n    def test_enviro_phat(self, fledge_url, retries, wait_time):\n        data = {\"name\": \"Enviro\", \"type\": \"south\", \"plugin\": \"rpienviro\", \"enabled\": True,\n                \"config\": {\"assetNamePrefix\": {\"value\": \"e_\"}}}\n        utils.post_request(fledge_url, \"/fledge/service\", data)\n\n        data = {\"name\": \"Fahrenheit\", \"plugin\": \"expression\",\n                \"filter_config\": {\"name\": \"temp_fahr\", \"expression\": \"temperature*1.8+32\", \"enable\": \"true\"}}\n        utils.post_request(fledge_url, \"/fledge/filter\", data)\n\n        data = {\"pipeline\": [\"Fahrenheit\"]}\n        put_url = \"/fledge/filter/Enviro/pipeline?allow_duplicates=true&append_filter=true\"\n        utils.put_request(fledge_url, put_url, data)\n\n        time.sleep(wait_time*2)\n\n        while retries:\n            get_url = \"/fledge/asset/e_weather?seconds=600\"\n            data = utils.get_request(fledge_url, get_url)\n            if len(data) and \"temperature\" in data[0][\"reading\"] and \"temp_fahr\" in data[0][\"reading\"]:\n                assert data[0][\"reading\"][\"temperature\"] != \"\"\n                assert data[0][\"reading\"][\"temp_fahr\"] != \"\"\n                break\n            retries -= 1\n\n        if retries == 0:\n            assert False, \"TIMEOUT! 
temperature and temp_fahr data not seen in e_weather graph.\" + fledge_url + \"/fledge/asset/e_weather?seconds=600\"\n\n\nclass TestEventEngine:\n    \"\"\"This will test the Notification service in Fledge.\"\"\"\n\n    def test_event_engine(self, fledge_url, retries, wait_time):\n        payload = {\"name\": \"Fledge Notifications\", \"type\": \"notification\", \"enabled\": True}\n        post_url = \"/fledge/service\"\n        utils.post_request(fledge_url, post_url, payload)\n\n        time.sleep(wait_time)\n\n        svc_found = False\n        while retries:\n            get_url = \"/fledge/service\"\n            resp = utils.get_request(fledge_url, get_url)\n            for item in resp[\"services\"]:\n                if item['name'] == \"Fledge Notifications\":\n                    svc_found = True\n                    assert item['status'] == \"running\"\n                    break  # break for loop\n            if svc_found:\n                break  # break while loop\n            retries -= 1\n\n        if retries == 0:\n            assert False, \"TIMEOUT! 
event engine is not running.\" + fledge_url + \"/fledge/service\"\n\n    def test_positive_sine_notification(self, fledge_url, retries, wait_time):\n        \"\"\"Add Notification with Threshold Rule and Asset Notification (Positive Sine)\"\"\"\n\n        payload = {\"name\": \"Positive Sine\", \"description\": \"Positive Sine notification instance\", \"rule\": \"Threshold\",\n                   \"channel\": \"asset\", \"notification_type\": \"retriggered\", \"enabled\": True}\n        post_url = \"/fledge/notification\"\n        utils.post_request(fledge_url, post_url, payload)\n\n        payload = {\"asset\": \"sinusoid\", \"datapoint\": \"sinusoid\"}\n        put_url = \"/fledge/category/rulePositive Sine\"\n        utils.put_request(fledge_url, urllib.parse.quote(put_url), payload)\n\n        payload = {\"asset\": \"positive_sine\", \"description\": \"positive\", \"enable\": \"true\"}\n        put_url = \"/fledge/category/deliveryPositive Sine\"\n        utils.put_request(fledge_url, urllib.parse.quote(put_url), payload)\n\n        time.sleep(wait_time)\n\n        while retries:\n            get_url = \"/fledge/asset/positive_sine?seconds=600\"\n            resp = utils.get_request(fledge_url, get_url)\n            if len(resp) and \"event\" in resp[0][\"reading\"] and \"rule\" in resp[0][\"reading\"]:\n                assert resp[0][\"reading\"][\"event\"] == \"triggered\"\n                assert resp[0][\"reading\"][\"rule\"] == \"Positive Sine\"\n                break\n            retries -= 1\n\n        if retries == 0:\n            assert False, \"TIMEOUT! 
positive_sine event not fired.\" + fledge_url + \"/fledge/asset/positive_sine?seconds=600\"\n\n    def test_negative_sine_notification(self, fledge_url, remove_data_file, retries, wait_time):\n        \"\"\"Add Notification with Threshold Rule and Asset Notification (Negative Sine)\"\"\"\n        remove_data_file(\"/tmp/out\")\n\n        payload = {\"name\": \"Negative Sine\", \"description\": \"Negative Sine notification instance\", \"rule\": \"Threshold\",\n                   \"channel\": \"python35\", \"notification_type\": \"retriggered\", \"enabled\": True}\n        utils.post_request(fledge_url, \"/fledge/notification\", payload)\n\n        # Upload Python Script (write_out.py)\n        url = fledge_url + urllib.parse.quote('/fledge/category/deliveryNegative Sine/script/upload')\n        script_path = 'script=@{}/write_out.py'.format(SCRIPTS_DIR_ROOT)\n        upload_script = \"curl -sX POST '{}' -F '{}'\".format(url, script_path)\n        exit_code = os.system(upload_script)\n        assert exit_code == 0\n\n        payload = {\"asset\": \"sinusoid\", \"datapoint\": \"sinusoid\", \"condition\": \"<\"}\n        utils.put_request(fledge_url, urllib.parse.quote(\"/fledge/category/ruleNegative Sine\"), payload)\n\n        payload = {\"enable\": \"true\"}\n        utils.put_request(fledge_url, urllib.parse.quote(\"/fledge/category/deliveryNegative Sine\"), payload)\n\n        s = wait_time * 2\n        for _ in range(retries):\n            if os.path.exists(\"/tmp/out\"):\n                break\n            time.sleep(s)\n            print(\"sleeping for {}s before retrying the existence check of '/tmp/out'\".format(s))\n\n        if not os.path.exists(\"/tmp/out\"):\n            assert False, \"TIMEOUT! negative_sine event not fired. 
No /tmp/out file.\"\n\n    def test_event_toggled_sent_clear(self, fledge_url, wait_time, retries):\n        print(\"Add sinusoid\")\n        payload = {\"name\": \"sin #1\", \"plugin\": \"sinusoid\", \"type\": \"south\", \"enabled\": True}\n        post_url = \"/fledge/service\"\n        resp = utils.post_request(fledge_url, post_url, payload)\n        assert resp[\"id\"] != \"\", \"Failed to add sin #1\"\n        assert resp[\"name\"] == \"sin #1\", \"Failed to add sin #1\"\n\n        print(\"Create event instance with threshold and asset; with notification trigger type toggled\")\n        payload = {\"name\": \"test #1\", \"description\": \"test notification instance\", \"rule\": \"Threshold\",\n                   \"channel\": \"asset\", \"notification_type\": \"toggled\", \"enabled\": True}\n        post_url = \"/fledge/notification\"\n        utils.post_request(fledge_url, post_url, payload)\n        \n        get_url = \"/fledge/notification\"\n        resp = utils.get_request(fledge_url, get_url)\n        assert \"test #1\" in [s[\"name\"] for s in resp[\"notifications\"]]\n\n        print(\"Set rule\")\n        payload = {\"asset\": \"sinusoid\", \"datapoint\": \"sinusoid\", \"trigger_value\": \"0.8\"}\n        put_url = \"/fledge/category/ruletest #1\"\n        utils.put_request(fledge_url, urllib.parse.quote(put_url), payload)\n        \n        get_url = \"/fledge/category/ruletest #1\"\n        resp = utils.get_request(fledge_url, urllib.parse.quote(get_url))\n        assert resp[\"asset\"][\"value\"] == \"sinusoid\"\n        assert resp[\"datapoint\"][\"value\"] == \"sinusoid\"\n        assert resp[\"trigger_value\"][\"value\"] == \"0.8\"\n\n        print(\"Set delivery\")\n        payload = {\"asset\": \"sin 0.8\", \"description\": \"asset notification\", \"enable\": \"true\"}\n        put_url = \"/fledge/category/deliverytest #1\"\n        utils.put_request(fledge_url, urllib.parse.quote(put_url), payload)\n\n        get_url = 
\"/fledge/category/deliverytest #1\"\n        resp = utils.get_request(fledge_url, urllib.parse.quote(get_url))\n        assert resp[\"asset\"][\"value\"] == \"sin 0.8\"\n        assert resp[\"enable\"][\"value\"] == \"true\"\n\n        s = wait_time*2\n\n        print(\"Verify sin 0.8 has been created\")\n        while retries:\n            time.sleep(s)\n            get_url = \"/fledge/asset/sin 0.8\"\n            resp = utils.get_request(fledge_url, urllib.parse.quote(get_url))\n            if len(resp) > 0:\n                break\n            retries -= 1\n\n        # TODO: FOGL-3115 verify asset tracker entry\n\n        if retries == 0:\n            assert False, \"TIMEOUT! sin 0.8 event not fired.\" + fledge_url + \"/fledge/asset/sin 0.8\"\n\n        print(\"When rule is triggered, there should be audit entries for NTFSN & NTFCL\")\n        get_url = \"/fledge/audit?limit=1&source=NTFSN&severity=INFORMATION\"\n        resp = utils.get_request(fledge_url, get_url)\n        assert len(resp['audit'])\n        assert \"test #1\" in [s[\"details\"][\"name\"] for s in resp[\"audit\"]]\n        for audit_detail in resp['audit']:\n            if \"test #1\" == audit_detail['details']['name']:\n                assert \"INFORMATION\" == audit_detail['severity']\n                assert \"NTFSN\" == audit_detail['source']\n\n        time.sleep(2)  # let the clear event trigger\n\n        get_url = \"/fledge/audit?limit=1&source=NTFCL&severity=INFORMATION\"\n        resp = utils.get_request(fledge_url, get_url)\n        assert len(resp['audit'])\n        assert \"test #1\" in [s[\"details\"][\"name\"] for s in resp[\"audit\"]]\n        for audit_detail in resp['audit']:\n            if \"test #1\" == audit_detail['details']['name']:\n                assert \"INFORMATION\" == audit_detail['severity']\n                assert \"NTFCL\" == audit_detail['source']\n"
  },
  {
    "path": "tests/system/python/packages/test_multiple_assets.py",
    "content": "# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test Multiple Assets System tests:\n        Creates large number of assets and verifies them.\n\"\"\"\n\n__author__ = \"Yash Tatkondawar\"\n__copyright__ = \"Copyright (c) 2021 Dianomic Systems, Inc.\"\n\nimport base64\nimport http.client\nimport json\nimport os\nimport ssl\nimport subprocess\nimport time\nimport urllib.parse\nfrom pathlib import Path\n\nimport pytest\nimport utils\nfrom pytest import PKG_MGR\n\n\n# This  gives the path of directory where fledge is cloned. test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/packages/data/\".format(PROJECT_ROOT)\nFLEDGE_ROOT = os.environ.get('FLEDGE_ROOT')\nBENCHMARK_SOUTH_SVC_NAME = \"BenchMark #\"\nASSET_NAME = \"{}_random_multiple_assets\".format(time.strftime(\"%Y%m%d\"))\nAF_HIERARCHY_LEVEL = \"{0}_multipleassets/{0}_multipleassetslvl2/{0}_multipleassetslvl3\".format(time.strftime(\"%Y%m%d\"))\n\n@pytest.fixture\ndef reset_fledge(wait_time):\n    try:\n        subprocess.run([\"cd {}/tests/system/python/scripts/package && ./reset\"\n                       .format(PROJECT_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n\n    # Wait for fledge server to start\n    time.sleep(wait_time)\n\n\n@pytest.fixture\ndef remove_and_add_pkgs(package_build_version):\n    try:\n        subprocess.run([\"cd {}/tests/system/python/scripts/package && ./remove\"\n                       .format(PROJECT_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"remove package script failed!\"\n\n    try:\n        subprocess.run([\"cd {}/tests/system/python/scripts/package/ && ./setup {}\"\n                       .format(PROJECT_ROOT, package_build_version)], shell=True, check=True)\n    except 
subprocess.CalledProcessError:\n        assert False, \"setup package script failed\"\n\n    try:\n        subprocess.run([\"sudo {} install -y fledge-south-benchmark\".format(PKG_MGR)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"installation of benchmark package failed\"\n\n\n@pytest.fixture\ndef start_north(start_north_omf_as_a_service, fledge_url, num_assets,\n                      pi_host, pi_port, pi_admin, pi_passwd, clear_pi_system_through_pi_web_api, pi_db):\n    global north_schedule_id\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    # There is one data point here.\n    # 1. no data point (Asset name will be used in this case.)\n    dp_list = ['']\n    asset_dict = {}\n\n    no_of_services = 6\n    num_assets_per_service = num_assets // no_of_services\n    # Creates assets dictionary for PI server cleanup\n    for service_count in range(no_of_services):\n        for asset_count in range(num_assets_per_service):\n            asset_name = ASSET_NAME + \"-{}{}\".format(service_count + 1, asset_count + 1)\n            asset_dict[asset_name] = dp_list\n\n    clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n                                       af_hierarchy_level_list, asset_dict)\n\n    response = start_north_omf_as_a_service(fledge_url, pi_host, pi_port, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                            default_af_location=AF_HIERARCHY_LEVEL)\n    north_schedule_id = response[\"id\"]\n\n    yield start_north\n\n\ndef add_benchmark(fledge_url, name, count, num_assets_per_service):\n    data = {\n        \"name\": name,\n        \"type\": \"south\",\n        \"plugin\": \"Benchmark\",\n        \"enabled\": True,\n        \"config\": {\n            \"asset\": {\n                \"value\": \"{}-{}\".format(ASSET_NAME, count)\n            },\n            \"numAssets\": {\n                \"value\": \"{}\".format(num_assets_per_service)\n            
}\n        }\n    }\n    post_url = \"/fledge/service\"\n    utils.post_request(fledge_url, post_url, data)\n\ndef verify_restart(fledge_url, retries):\n    for i in range(retries):\n        time.sleep(30)\n        get_url = '/fledge/ping'\n        ping_result = utils.get_request(fledge_url, get_url)\n        if ping_result['uptime'] > 0:\n            return\n    assert False, \"Fledge did not come up within the given number of retries\"\n\ndef verify_service_added(fledge_url, name):\n    get_url = \"/fledge/south\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"services\"])\n    assert name in [s[\"name\"] for s in result[\"services\"]]\n\ndef verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries):\n    get_url = \"/fledge/ping\"\n    ping_result = utils.get_request(fledge_url, get_url)\n    assert \"dataRead\" in ping_result\n    assert \"dataSent\" in ping_result\n    assert 0 < ping_result['dataRead'], \"South data NOT seen in ping header\"\n\n    retry_count = 1\n    sent = 0\n    if not skip_verify_north_interface:\n        while retries > retry_count:\n            sent = ping_result[\"dataSent\"]\n            if sent >= 1:\n                break\n            else:\n                time.sleep(wait_time)\n\n            retry_count += 1\n            ping_result = utils.get_request(fledge_url, get_url)\n\n        assert 1 <= sent, \"Failed to send data via PI Web API using Basic auth\"\n    return ping_result\n\n\ndef verify_asset(fledge_url, total_assets, count, wait_time):\n    # Check whether \"total_assets\" are created or not by calling \"/fledge/asset\" endpoint for \"count\" number of iterations\n    # In each iteration sleep for wait_time * 6, i.e., 60 seconds.\n    for i in range(count):\n        get_url = \"/fledge/asset\"\n        result = utils.get_request(fledge_url, get_url)\n        asset_created = len(result)\n        if total_assets == asset_created:\n            print(\"Total {} assets created\".format(asset_created))\n            
return\n        # Fledge takes 60 seconds to create 100 assets.\n        # Added sleep for \"wait_time * 6\" so that we can change the sleep time by changing the value of wait_time from the Jenkins job in future if required.\n        time.sleep(wait_time * 6)\n    assert total_assets == len(result)\n\n\ndef verify_asset_tracking_details(fledge_url, total_assets, total_benchmark_services, num_assets):\n    tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n    assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n    assert total_assets == len(tracking_details[\"track\"])\n    for service_count in range(total_benchmark_services):\n        for asset_count in range(num_assets):\n            service_name = BENCHMARK_SOUTH_SVC_NAME + \"{}\".format(service_count + 1)\n            asset_name = ASSET_NAME + \"-{}{}\".format(service_count + 1, asset_count + 1)\n            assert service_name in [s[\"service\"] for s in tracking_details[\"track\"]]\n            assert asset_name in [s[\"asset\"] for s in tracking_details[\"track\"]]\n            assert \"Benchmark\" in [s[\"plugin\"] for s in tracking_details[\"track\"]]\n\n\ndef _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries, total_benchmark_services, num_assets_per_service):\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    type_id = 1\n    \n    for s in range(1, total_benchmark_services+1):\n      for a in range(1, num_assets_per_service+1):\n        retry_count = 0\n        data_from_pi = None\n        asset_name = \"random_multiple_assets-\" + str(s) + str(a)\n        print(asset_name)\n        recorded_datapoint = \"{}\".format(asset_name)\n        # Name of asset in the PI server\n        pi_asset_name = \"{}\".format(asset_name)\n    \n        while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n            data_from_pi = read_data_from_pi_web_api(pi_host, pi_admin, pi_passwd, 
pi_db, af_hierarchy_level_list,\n                                                     pi_asset_name, '')\n            if data_from_pi is None:\n                retry_count += 1\n                time.sleep(wait_time)\n    \n        if data_from_pi is None or retry_count == retries:\n            assert False, \"Failed to read data from PI\"\n\n\nclass TestMultiAssets:\n    def test_multiple_assets_with_restart(self, remove_and_add_pkgs, reset_fledge, start_north, read_data_from_pi_web_api,\n                                          skip_verify_north_interface, fledge_url, num_assets, wait_time, retries, pi_host, \n                                          pi_port, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test multiple benchmark services with multiple assets are created in fledge, also verifies assets after\n            restarting fledge.\n            remove_and_add_pkgs: Fixture to remove and install latest fledge packages\n            reset_fledge: Fixture to reset fledge\n            Assertions:\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/asset\"\"\"\n\n        total_benchmark_services = 6\n        num_assets_per_service = (num_assets//total_benchmark_services)\n        # Total number of assets that would be created, total_assets variable is used instead of num_assets to handle case where num_assets is not divisible by 3 or 6. 
Since we are creating 3 or 6 services and each service should create an equal number of assets.\n        total_assets = num_assets_per_service * total_benchmark_services\n\n        for count in range(total_benchmark_services):\n            service_name = BENCHMARK_SOUTH_SVC_NAME + \"{}\".format(count + 1)\n            add_benchmark(fledge_url, service_name, count + 1, num_assets_per_service)\n            verify_service_added(fledge_url, service_name)\n        \n        # Sleep for a few seconds so that data from the south services can be ingested into Fledge\n        time.sleep(wait_time * 3)\n        \n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # num_assets//100 yields the integer iteration count passed to verify_asset\n        verify_asset(fledge_url, total_assets, num_assets//100, wait_time)\n\n        put_url = \"/fledge/restart\"\n        utils.put_request(fledge_url, urllib.parse.quote(put_url))\n        # Wait for fledge to restart\n        verify_restart(fledge_url, retries)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # num_assets//100 yields the integer iteration count passed to verify_asset\n        verify_asset(fledge_url, total_assets, num_assets//100, wait_time)\n        verify_asset_tracking_details(fledge_url, total_assets, total_benchmark_services, num_assets_per_service)\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        # Sleep for a few seconds to verify that data ingestion into Fledge keeps increasing after restart.\n        time.sleep(wait_time * 3)\n        \n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after restart\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n    
        _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           total_benchmark_services, num_assets_per_service)\n\n        # FIXME: If sleep is removed then the next test fails\n        time.sleep(wait_time * 2)\n\n    def test_add_multiple_assets_before_after_restart(self, reset_fledge, start_north, read_data_from_pi_web_api,\n                                                      skip_verify_north_interface, fledge_url, num_assets,\n                                                      wait_time, retries, pi_host, pi_port, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test addition of multiple assets before and after restarting fledge.\n            reset_fledge: Fixture to reset fledge\n            Assertions:\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/asset\"\"\"\n\n        total_benchmark_services = 3\n        num_assets_per_service = (num_assets//(total_benchmark_services*2))\n        # Total number of assets that would be created, total_assets variable is used instead of num_assets to handle case where num_assets is not divisible by 3 or 6. 
Since we are creating 3 or 6 services and each service should create an equal number of assets.\n        total_assets = num_assets_per_service * total_benchmark_services\n        \n        for count in range(total_benchmark_services):\n            service_name = BENCHMARK_SOUTH_SVC_NAME + \"{}\".format(count + 1)\n            add_benchmark(fledge_url, service_name, count + 1, num_assets_per_service)\n            verify_service_added(fledge_url, service_name)\n\n        \n        # Sleep for a few seconds so that data from the south services can be ingested into Fledge.\n        time.sleep(wait_time * 3)\n        \n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # num_assets//100 yields the integer iteration count passed to verify_asset\n        verify_asset(fledge_url, total_assets, num_assets//100, wait_time)\n\n        verify_asset_tracking_details(fledge_url, total_assets, total_benchmark_services, num_assets_per_service)\n\n        put_url = \"/fledge/restart\"\n        utils.put_request(fledge_url, urllib.parse.quote(put_url))\n        # Wait for fledge to restart\n        verify_restart(fledge_url, retries)\n\n        # We are adding total_assets more assets\n        total_assets = total_assets * 2\n\n        for count in range(total_benchmark_services):\n            service_name = BENCHMARK_SOUTH_SVC_NAME + \"{}\".format(count + 4)\n            add_benchmark(fledge_url, service_name, count + 4, num_assets_per_service)\n            verify_service_added(fledge_url, service_name)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # num_assets//100 yields the integer iteration count passed to verify_asset\n        verify_asset(fledge_url, total_assets, num_assets//100, wait_time)\n        verify_asset_tracking_details(fledge_url, total_assets, total_benchmark_services * 2, num_assets_per_service)\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        # 
Sleep for a few seconds to verify whether data ingestion into Fledge is increasing after adding more services.\n        time.sleep(wait_time * 3)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after restart\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            # Initially total_benchmark_services is 3, but after the restart 3 more south services are added. So, total_benchmark_services * 2 is 6\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           total_benchmark_services * 2, num_assets_per_service)\n    \n    def test_multiple_assets_with_reconfig(self, reset_fledge, start_north, read_data_from_pi_web_api, \n                                           skip_verify_north_interface, fledge_url, num_assets,\n                                           wait_time, retries, pi_host, pi_port, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test addition of multiple assets with reconfiguration of south service.\n            reset_fledge: Fixture to reset fledge\n            Assertions:\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/asset\"\"\"\n\n        total_benchmark_services = 3\n        num_assets_per_service = (num_assets//(total_benchmark_services*2))\n        # total_assets variable is used instead of num_assets to handle case where num_assets is not divisible by 3 or 6. 
Since we are creating 3 or 6 services and each service should create an equal number of assets.\n        # Number of assets that would be created initially\n        total_assets = num_assets_per_service * total_benchmark_services\n\n        for count in range(total_benchmark_services):\n            service_name = BENCHMARK_SOUTH_SVC_NAME + \"{}\".format(count + 1)\n            add_benchmark(fledge_url, service_name, count + 1, num_assets_per_service)\n            verify_service_added(fledge_url, service_name)\n\n        # Sleep for a few seconds so that data from the south services can be ingested into Fledge\n        time.sleep(wait_time * 3)\n        \n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # num_assets//100 yields the integer iteration count passed to verify_asset\n        verify_asset(fledge_url, total_assets, num_assets//100, wait_time)\n        verify_asset_tracking_details(fledge_url, total_assets, total_benchmark_services, num_assets_per_service)\n\n        # With reconfig, the number of assets is doubled in each south service\n        num_assets_per_service = 2 * num_assets_per_service\n        payload = {\"numAssets\": \"{}\".format(num_assets_per_service)}\n        for count in range(total_benchmark_services):\n            service_name = BENCHMARK_SOUTH_SVC_NAME + \"{}\".format(count + 1)\n            put_url = \"/fledge/category/{}\".format(service_name)\n            utils.put_request(fledge_url, urllib.parse.quote(put_url), payload)\n\n        # In reconfig the number of assets is doubled\n        total_assets = total_assets * 2\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        \n        verify_asset(fledge_url, total_assets, num_assets//100, wait_time)\n        verify_asset_tracking_details(fledge_url, total_assets, total_benchmark_services, num_assets_per_service)\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        \n        # 
Sleep for a few seconds to verify whether data ingestion into Fledge is increasing after reconfig of the south services.\n        time.sleep(wait_time * 3)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after reconfig\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           total_benchmark_services, num_assets_per_service)\n"
  },
  {
    "path": "tests/system/python/packages/test_north_azure.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test sending data to Azure-IoT-Hub using fledge-north-azure plugin\n\n\"\"\"\n\n__author__ = \"Mohit Singh Tomar\"\n__copyright__ = \"Copyright (c) 2023 Dianomic Systems Inc\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nimport subprocess\nimport http.client\nimport pytest\nimport os\nimport time\nimport utils\nfrom pathlib import Path\nimport urllib.parse\nimport json\nimport sys\nimport datetime\n\ntry:\n    cmd = \"python3 -m pip install azure-storage-blob==12.13.1\"\n    if utils.detect_os() == \"Ubuntu 24\":\n        cmd = cmd + \" --upgrade urllib3==2.5.0 --upgrade requests==2.32.5 --break-system-packages\"\n    subprocess.run([cmd], shell=True, check=True)\nexcept subprocess.CalledProcessError:\n    assert False, \"Failed to install azure-storage-blob module\"\n\nfrom azure.storage.blob import BlobServiceClient\n\n# This  gives the path of directory where fledge is cloned. 
test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = subprocess.getoutput(\"git rev-parse --show-toplevel\")\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/scripts/package/\".format(PROJECT_ROOT)\nSOUTH_SERVICE_NAME = \"FOGL-7352_sysinfo\"\nSOUTH_PLUGIN = \"systeminfo\"\nNORTH_SERVICE_NAME = \"FOGL-7352_azure\"\nNORTH_PLUGIN_NAME = \"azure-iot\"\nNORTH_PLUGIN_DISCOVERY_NAME = \"azure_iot\"\nLOCALJSONFILE = \"azure.json\"\nFILTER = \"expression\"\n\n@pytest.fixture\ndef reset_fledge(wait_time):\n    try:\n        subprocess.run([\"cd {} && ./reset\"\n                       .format(SCRIPTS_DIR_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n\n\ndef read_data_from_azure_storage_container(azure_storage_account_url,azure_storage_account_key, azure_storage_container):\n    \n    try:\n        t1=time.time()\n        blob_service_client_instance = BlobServiceClient(account_url=azure_storage_account_url, credential=azure_storage_account_key)\n\n        container_client = blob_service_client_instance.get_container_client(container=azure_storage_container)\n\n        blob_list = container_client.list_blobs()\n\n        for blob in blob_list:\n            BLOBNAME = blob.name\n            print(f\"Name: {blob.name}\")\n\n\n        blob_client_instance = blob_service_client_instance.get_blob_client(azure_storage_container, BLOBNAME, snapshot=None)\n        with open(LOCALJSONFILE, \"wb\") as my_blob:\n            blob_data = blob_client_instance.download_blob()\n            blob_data.readinto(my_blob)\n        t2=time.time()\n        print((\"It takes %s seconds to download \"+BLOBNAME) % (t2 - t1))\n        \n        with open(LOCALJSONFILE) as handler:\n            data = handler.readlines()\n            \n        return data\n        \n    except (Exception) as ex:\n        print(\"Failed to read data due to {}\".format(ex))\n        return None\n\ndef 
verify_north_stats_on_invalid_config(fledge_url):\n    get_url = \"/fledge/ping\"\n    ping_result = utils.get_request(fledge_url, get_url)\n    assert \"dataRead\" in ping_result\n    assert ping_result['dataRead'] > 0, \"South data NOT seen in ping header\"\n    assert \"dataSent\" in ping_result\n    assert ping_result['dataSent'] < 1, \"Data sent to Azure Iot Hub\"\n\ndef verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries):\n    get_url = \"/fledge/ping\"\n    ping_result = utils.get_request(fledge_url, get_url)\n    assert \"dataRead\" in ping_result\n    assert \"dataSent\" in ping_result\n    assert 0 < ping_result['dataRead'], \"South data NOT seen in ping header\"\n\n    retry_count = 1\n    sent = 0\n    if not skip_verify_north_interface:\n        while retries > retry_count:\n            sent = ping_result[\"dataSent\"]\n            if sent >= 1:\n                break\n            else:\n                time.sleep(wait_time)\n\n            retry_count += 1\n            ping_result = utils.get_request(fledge_url, get_url)\n\n        assert 1 <= sent, \"Failed to send data to Azure-IoT-Hub\"\n    return ping_result\n\n\ndef verify_statistics_map(fledge_url, skip_verify_north_interface):\n    get_url = \"/fledge/statistics\"\n    jdoc = utils.get_request(fledge_url, get_url)\n    actual_stats_map = utils.serialize_stats_map(jdoc)\n    assert 1 <= actual_stats_map[\"{}-Ingest\".format(SOUTH_SERVICE_NAME)]\n    assert 1 <= actual_stats_map['READINGS']\n    if not skip_verify_north_interface:\n        assert 1 <= actual_stats_map['Readings Sent']\n        assert 1 <= actual_stats_map[NORTH_SERVICE_NAME]\n\n\ndef verify_asset(fledge_url, ASSET):\n    get_url = \"/fledge/asset\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result), \"No asset found\"\n    assert any(filter(lambda x: ASSET in x, [s[\"assetCode\"] for s in result]))\n\n\ndef verify_asset_tracking_details(fledge_url, skip_verify_north_interface, 
ASSET):\n    tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n    assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n    tracked_item = tracking_details[\"track\"][0]\n    assert ASSET in tracked_item[\"asset\"]\n    assert \"systeminfo\" == tracked_item[\"plugin\"]\n\n    egress_tracking_details = utils.get_asset_tracking_details(fledge_url, \"Egress\")\n    assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n    tracked_item = egress_tracking_details[\"track\"][0]\n    assert ASSET in tracked_item[\"asset\"]\n    assert NORTH_PLUGIN_DISCOVERY_NAME == tracked_item[\"plugin\"]\n\n\ndef _verify_egress(azure_storage_account_url, azure_storage_account_key, azure_storage_container, wait_time, retries, ASSET):\n    retry_count = 0\n    data_from_azure = None\n    \n    while (data_from_azure is None or len(data_from_azure) == 0) and retry_count < retries:\n        data_from_azure = read_data_from_azure_storage_container(azure_storage_account_url,azure_storage_account_key, azure_storage_container)\n\n        if data_from_azure is None:\n            retry_count += 1\n            time.sleep(wait_time)\n        \n    if data_from_azure is None or retry_count == retries:\n        assert False, \"Failed to read data from Azure IoT Hub\"\n    \n    asset_collected  = list()\n    for ele in data_from_azure:\n      asset_collected.extend(list(map(lambda d: d['asset'], json.loads(ele)[\"Body\"])))\n    \n    assert any(filter(lambda x: ASSET in x, asset_collected))\n\n@pytest.fixture\ndef add_south_north_task(add_south, add_north, fledge_url, azure_host, azure_device, azure_key):\n    \"\"\" This fixture\n        add_south: Fixture that adds a south service with given configuration\n        add_north: Fixture that adds a north service with given configuration\n\n    \"\"\"\n    \n    # south_branch does not matter as these are archives.fledge-iot.org version install\n    add_south(SOUTH_PLUGIN, 
None, fledge_url, service_name=SOUTH_SERVICE_NAME, start_service=False, installation_type='package')\n    \n    _config = {\n        \"primaryConnectionString\": {\"value\":\"HostName={};DeviceId={};SharedAccessKey={}\".format(azure_host, azure_device, azure_key)}\n        }\n    # north_branch does not matter as these are archives.fledge-iot.org version install\n    add_north(fledge_url, NORTH_PLUGIN_NAME, None, installation_type='package', north_instance_name=NORTH_SERVICE_NAME, \n              config=_config, schedule_repeat_time=10, enabled=False, plugin_discovery_name=NORTH_PLUGIN_DISCOVERY_NAME, is_task=True)\n    \n@pytest.fixture\ndef add_south_north_service(add_south, add_north, fledge_url, azure_host, azure_device, azure_key):\n    \"\"\" This fixture\n        add_south: Fixture that adds a south service with given configuration\n        add_north: Fixture that adds a north service with given configuration\n\n    \"\"\"\n    # south_branch does not matter as these are archives.fledge-iot.org version install\n    add_south(SOUTH_PLUGIN, None, fledge_url, service_name=SOUTH_SERVICE_NAME, start_service=False, installation_type='package')\n    \n    _config = {\n        \"primaryConnectionString\": {\"value\":\"HostName={};DeviceId={};SharedAccessKey={}\".format(azure_host, azure_device, azure_key)}\n        }\n    # north_branch does not matter as these are archives.fledge-iot.org version install\n    add_north(fledge_url, NORTH_PLUGIN_NAME, None, installation_type='package', north_instance_name=NORTH_SERVICE_NAME, \n              config=_config, enabled=False, plugin_discovery_name=NORTH_PLUGIN_DISCOVERY_NAME, is_task=False)\n    \ndef config_south(fledge_url, ASSET):\n    payload = {\"assetNamePrefix\": \"{}\".format(ASSET)}\n    put_url = \"/fledge/category/{}\".format(SOUTH_SERVICE_NAME)\n    utils.put_request(fledge_url, urllib.parse.quote(put_url), payload)\n\ndef update_filter_config(fledge_url, plugin, mode):\n    data = {\"enable\": 
\"{}\".format(mode)}\n    put_url = \"/fledge/category/{}_{}_exp\".format(NORTH_SERVICE_NAME, plugin)\n    utils.put_request(fledge_url, urllib.parse.quote(put_url, safe='?,=,&,/'), data)\n\ndef add_expression_filter(add_filter, fledge_url, NORTH_PLUGIN_NAME):\n    filter_cfg = {\"enable\": \"true\", \"expression\": \"log(1K-blocks)\", \"name\": \"{}_exp\".format(NORTH_PLUGIN_NAME)}\n    add_filter(\"{}\".format(FILTER), None, \"{}_exp\".format(NORTH_PLUGIN_NAME), filter_cfg, fledge_url, \"{}\".format(NORTH_SERVICE_NAME), installation_type='package')\n\nclass TestNorthAzureIoTHubDevicePlugin:\n    \n    def test_send(self, clean_setup_fledge_packages, reset_fledge, add_south_north_service, fledge_url, enable_schedule, \n                  disable_schedule, azure_host, azure_device, azure_key, wait_time, retries, skip_verify_north_interface,\n                  azure_storage_account_url, azure_storage_account_key, azure_storage_container):\n        \n        \"\"\" Test that checks whether data is inserted in Fledge and sent to Azure-IoT Hub.\n            clean_setup_fledge_packages: Fixture for removing fledge from the system completely if it is already present \n                                         and reinstalling it based on commandline arguments.\n            reset_fledge: Fixture that resets and cleans up fledge \n            add_south_north_service: Fixture that adds south and north instances in disable mode\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n            azure_host: Fixture that provides Hostname of Azure IoT Hub\n            azure_device: Fixture that provides ID of Device deployed in Azure IoT Hub\n            azure_key: Fixture that provides access key of Azure IoT Hub\n            azure_storage_account_url: Fixture that provides URL for accessing Storage Blob of Azure\n            azure_storage_account_key: Fixture that provides access key for 
accessing Storage Blob\n            azure_storage_container: Fixture that provides name of container deployed in Azure\n        \"\"\"\n        # Update Asset name\n        ASSET = \"test1_FOGL-7352_system\"\n        config_south(fledge_url, ASSET)\n        \n        # Enable South Service for 10 seconds\n        enable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        time.sleep(wait_time)\n        disable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        \n        # Enable North Service for sending data to Azure-IOT-Hub\n        enable_schedule(fledge_url, NORTH_SERVICE_NAME)\n        \n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url, ASSET)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface, ASSET)\n        \n        # Storage blob JSON will be created every 2 minutes\n        time.sleep(150)\n        \n        \n        _verify_egress(azure_storage_account_url, azure_storage_account_key, azure_storage_container, wait_time, retries, ASSET)\n\n    \n    def test_mqtt_over_websocket_reconfig(self, reset_fledge, add_south_north_service, fledge_url, enable_schedule, disable_schedule,\n                                          azure_host, azure_device, azure_key, azure_storage_account_url, azure_storage_account_key, \n                                          azure_storage_container, wait_time, retries, skip_verify_north_interface):\n        \n        \"\"\" Test that enables MQTT over websocket, then checks whether data is inserted into Fledge and sent to Azure-IoT Hub.\n            \n            reset_fledge: Fixture that resets and cleans up fledge \n            add_south_north_service: Fixture that adds south and north instances in disable mode\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n            
azure_host: Fixture that provides Hostname of Azure IoT Hub\n            azure_device: Fixture that provides ID of Device deployed in Azure IoT Hub\n            azure_key: Fixture that provides access key of Azure IoT Hub\n            azure_storage_account_url: Fixture that provides URL for accessing Storage Blob of Azure\n            azure_storage_account_key: Fixture that provides access key for accessing Storage Blob\n            azure_storage_container: Fixture that provides name of container deployed in Azure\n        \"\"\"\n        # Update Asset name\n        ASSET = \"test2_FOGL-7352_system\"\n        config_south(fledge_url, ASSET)\n        \n        # Enable South Service for 10 seconds\n        enable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        time.sleep(wait_time)\n        disable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        \n        # Enable MQTT over websocket\n        payload = {\"websockets\": \"true\"}\n        put_url = \"/fledge/category/{}\".format(NORTH_SERVICE_NAME)\n        utils.put_request(fledge_url, urllib.parse.quote(put_url), payload)\n        \n        # Enable North Service for sending data to Azure-IOT-Hub\n        enable_schedule(fledge_url, NORTH_SERVICE_NAME)\n        \n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url, ASSET)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface, ASSET)\n        \n        # Storage blob JSON will be created every 2 minutes\n        time.sleep(150)\n        \n        \n        _verify_egress(azure_storage_account_url, azure_storage_account_key, azure_storage_container, wait_time, retries, ASSET)\n\n    \n    def test_disable_enable(self, reset_fledge, add_south_north_service, fledge_url, enable_schedule, disable_schedule,\n                            azure_host, azure_device, azure_key, azure_storage_account_url, 
azure_storage_account_key, \n                            azure_storage_container, wait_time, retries, skip_verify_north_interface):\n        \n        \"\"\" Test that enables and disables south and north services periodically, then \n            checks whether data is inserted into Fledge and sent to Azure-IoT Hub.\n            \n            reset_fledge: Fixture that resets and cleans up fledge \n            add_south_north_service: Fixture that adds south and north instances in disable mode\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n            azure_host: Fixture that provides Hostname of Azure IoT Hub\n            azure_device: Fixture that provides ID of Device deployed in Azure IoT Hub\n            azure_key: Fixture that provides access key of Azure IoT Hub\n            azure_storage_account_url: Fixture that provides URL for accessing Storage Blob of Azure\n            azure_storage_account_key: Fixture that provides access key for accessing Storage Blob\n            azure_storage_container: Fixture that provides name of container deployed in Azure\n        \"\"\"\n        \n        for i in range(2):\n            # Update Asset name\n            ASSET = \"test3.{}_FOGL-7352_system\".format(i)\n            config_south(fledge_url, ASSET)\n            \n            # Enable South Service for 10 seconds\n            enable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n            time.sleep(wait_time)\n            disable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n            \n            # Enable North Service for sending data to Azure-IOT-Hub\n            enable_schedule(fledge_url, NORTH_SERVICE_NAME)\n            \n            verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n            verify_asset(fledge_url, ASSET)\n            verify_statistics_map(fledge_url, skip_verify_north_interface)\n            \n            # Storage blob JSON will be 
created every 2 minutes\n            time.sleep(150)\n            disable_schedule(fledge_url, NORTH_SERVICE_NAME)\n            \n            _verify_egress(azure_storage_account_url, azure_storage_account_key, azure_storage_container, wait_time, retries, ASSET)\n    \n       \n    def test_send_with_filter(self, reset_fledge, add_south_north_service, fledge_url, enable_schedule, disable_schedule,\n                              azure_host, azure_device, azure_key, azure_storage_account_url, azure_storage_account_key, \n                              azure_storage_container, wait_time, retries, skip_verify_north_interface, add_filter):\n        \n        \"\"\" Test that attaches a filter to the North service, enables and disables the filter periodically,\n            then verifies that data is inserted into Fledge and sent to Azure IoT Hub.\n            \n            reset_fledge: Fixture that resets and cleans up Fledge\n            add_south_north_service: Fixture that adds south and north instances in disabled mode\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n            azure_host: Fixture that provides the hostname of Azure IoT Hub\n            azure_device: Fixture that provides the ID of the device deployed in Azure IoT Hub\n            azure_key: Fixture that provides the access key of Azure IoT Hub\n            azure_storage_account_url: Fixture that provides the URL for accessing the Azure Storage Blob\n            azure_storage_account_key: Fixture that provides the access key for accessing the Storage Blob\n            azure_storage_container: Fixture that provides the name of the container deployed in Azure\n            add_filter: Fixture that adds a filter to south and north instances\n        \"\"\"\n        # Update Asset name\n        ASSET = \"test4_FOGL-7352_system\"\n        config_south(fledge_url, ASSET)\n        \n        # Add Expression filter to North Service\n        
add_expression_filter(add_filter, fledge_url, NORTH_PLUGIN_NAME)\n        \n        # Enable South Service\n        enable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        \n        # Enable North Service for sending data to Azure IoT Hub\n        enable_schedule(fledge_url, NORTH_SERVICE_NAME)\n        \n        print(\"On/Off of filter starts\")\n        count = 0\n        while count < 3:\n            # Disable the filter\n            update_filter_config(fledge_url, NORTH_PLUGIN_NAME, 'false')\n            time.sleep(wait_time * 2)\n            \n            # Enable the filter\n            update_filter_config(fledge_url, NORTH_PLUGIN_NAME, 'true')\n            time.sleep(wait_time * 2)\n            count += 1\n        \n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url, ASSET)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface, ASSET)\n        \n        # Storage blob JSON will be created every 2 minutes\n        time.sleep(150)\n        \n        \n        _verify_egress(azure_storage_account_url, azure_storage_account_key, azure_storage_container, wait_time, retries, ASSET)\n\n\nclass TestNorthAzureIoTHubDevicePluginTask:\n        \n    def test_send_as_a_task(self, reset_fledge, add_south_north_task, fledge_url, enable_schedule, disable_schedule, \n                            azure_host, azure_device, azure_key, azure_storage_account_url, azure_storage_account_key, \n                            azure_storage_container, wait_time, retries, skip_verify_north_interface):\n        \n        \"\"\" Test that creates south and north bound as tasks and verifies that data is inserted in Fledge and sent to Azure IoT Hub.\n            \n            reset_fledge: Fixture that resets and cleans up Fledge\n            add_south_north_task: Fixture that adds south and north instances in disabled 
mode\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n            azure_host: Fixture that provides the hostname of Azure IoT Hub\n            azure_device: Fixture that provides the ID of the device deployed in Azure IoT Hub\n            azure_key: Fixture that provides the access key of Azure IoT Hub\n            azure_storage_account_url: Fixture that provides the URL for accessing the Azure Storage Blob\n            azure_storage_account_key: Fixture that provides the access key for accessing the Storage Blob\n            azure_storage_container: Fixture that provides the name of the container deployed in Azure\n        \"\"\"\n        # Update Asset name\n        ASSET = \"test5_FOGL-7352_system\"\n        config_south(fledge_url, ASSET)\n        \n        # Enable South Service for 10 seconds\n        enable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        time.sleep(wait_time)\n        disable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        \n        # Enable North Service for sending data to Azure IoT Hub\n        enable_schedule(fledge_url, NORTH_SERVICE_NAME)\n        \n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url, ASSET)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface, ASSET)\n        \n        # Storage blob JSON will be created every 2 minutes\n        time.sleep(150)\n        \n        \n        _verify_egress(azure_storage_account_url, azure_storage_account_key, azure_storage_container, wait_time, retries, ASSET)\n    \n    \n    def test_mqtt_over_websocket_reconfig_task(self, reset_fledge, add_south_north_task, fledge_url, enable_schedule, disable_schedule,\n                                               azure_host, azure_device, azure_key, azure_storage_account_url, azure_storage_account_key, \n                          
                     azure_storage_container, wait_time, retries, skip_verify_north_interface):\n        \n        \"\"\" Test that creates south and north bound as tasks, enables MQTT over WebSocket, then\n            verifies that data is inserted in Fledge and sent to Azure IoT Hub.\n            \n            reset_fledge: Fixture that resets and cleans up Fledge\n            add_south_north_task: Fixture that adds south and north instances in disabled mode\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n            azure_host: Fixture that provides the hostname of Azure IoT Hub\n            azure_device: Fixture that provides the ID of the device deployed in Azure IoT Hub\n            azure_key: Fixture that provides the access key of Azure IoT Hub\n            azure_storage_account_url: Fixture that provides the URL for accessing the Azure Storage Blob\n            azure_storage_account_key: Fixture that provides the access key for accessing the Storage Blob\n            azure_storage_container: Fixture that provides the name of the container deployed in Azure\n        \"\"\"\n        # Update Asset name\n        ASSET = \"test6_FOGL-7352_system\"\n        config_south(fledge_url, ASSET)\n        \n        # Enable South Service for 10 seconds\n        enable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        time.sleep(wait_time)\n        disable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        \n        # Enable MQTT over WebSocket\n        payload = {\"websockets\": \"true\"}\n        put_url = \"/fledge/category/{}\".format(NORTH_SERVICE_NAME)\n        utils.put_request(fledge_url, urllib.parse.quote(put_url), payload)\n        \n        # Enable North Service for sending data to Azure IoT Hub\n        enable_schedule(fledge_url, NORTH_SERVICE_NAME)\n        \n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url, ASSET)\n        
verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface, ASSET)\n        \n        # Storage blob JSON will be created every 2 minutes\n        time.sleep(150)\n        \n        \n        _verify_egress(azure_storage_account_url, azure_storage_account_key, azure_storage_container, wait_time, retries, ASSET)\n\n    \n    def test_disable_enable_task(self, reset_fledge, add_south_north_task, fledge_url, enable_schedule, disable_schedule,\n                                 azure_host, azure_device, azure_key, azure_storage_account_url, azure_storage_account_key, \n                                 azure_storage_container, wait_time, retries, skip_verify_north_interface):\n        \n        \"\"\" Test that creates south and north bound as tasks, enables and disables them periodically, then\n            verifies that data is inserted in Fledge and sent to Azure IoT Hub.\n            \n            reset_fledge: Fixture that resets and cleans up Fledge\n            add_south_north_task: Fixture that adds south and north instances in disabled mode\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n            azure_host: Fixture that provides the hostname of Azure IoT Hub\n            azure_device: Fixture that provides the ID of the device deployed in Azure IoT Hub\n            azure_key: Fixture that provides the access key of Azure IoT Hub\n            azure_storage_account_url: Fixture that provides the URL for accessing the Azure Storage Blob\n            azure_storage_account_key: Fixture that provides the access key for accessing the Storage Blob\n            azure_storage_container: Fixture that provides the name of the container deployed in Azure\n        \"\"\"\n        for i in range(2):\n            # Update Asset name\n            ASSET = \"test7.{}_FOGL-7352_system\".format(i)\n            config_south(fledge_url, ASSET)\n    
        \n            # Enable South Service for 10 seconds\n            enable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n            time.sleep(wait_time)\n            disable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n            \n            # Enable North Service for sending data to Azure IoT Hub\n            enable_schedule(fledge_url, NORTH_SERVICE_NAME)\n            \n            verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n            verify_asset(fledge_url, ASSET)\n            verify_statistics_map(fledge_url, skip_verify_north_interface)\n            \n            # Storage blob JSON will be created every 2 minutes\n            time.sleep(150)\n            disable_schedule(fledge_url, NORTH_SERVICE_NAME)\n            \n            _verify_egress(azure_storage_account_url, azure_storage_account_key, azure_storage_container, wait_time, retries, ASSET)\n                \n    def test_send_with_filter_task(self, reset_fledge, add_south_north_task, fledge_url, enable_schedule, disable_schedule,\n                                   azure_host, azure_device, azure_key, azure_storage_account_url, azure_storage_account_key, \n                                   azure_storage_container, wait_time, retries, skip_verify_north_interface, add_filter):\n        \n        \"\"\" Test that creates south and north bound as tasks, attaches a filter to the north bound, enables and\n            disables the filter periodically, then verifies that data is inserted in Fledge and sent to Azure IoT Hub.\n            \n            reset_fledge: Fixture that resets and cleans up Fledge\n            add_south_north_task: Fixture that adds south and north instances in disabled mode\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n            azure_host: Fixture that provides the hostname of Azure IoT Hub\n            azure_device: Fixture that provides the ID of the device 
deployed in Azure IoT Hub\n            azure_key: Fixture that provides the access key of Azure IoT Hub\n            azure_storage_account_url: Fixture that provides the URL for accessing the Azure Storage Blob\n            azure_storage_account_key: Fixture that provides the access key for accessing the Storage Blob\n            azure_storage_container: Fixture that provides the name of the container deployed in Azure\n            add_filter: Fixture that adds a filter to south or north instances.\n        \"\"\"\n        # Update Asset name\n        ASSET = \"test8_FOGL-7352_system\"\n        config_south(fledge_url, ASSET)\n        \n        # Add Expression filter to North Service\n        add_expression_filter(add_filter, fledge_url, NORTH_PLUGIN_NAME)\n        \n        # Enable South Service\n        enable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        \n        # Enable North Service for sending data to Azure IoT Hub\n        enable_schedule(fledge_url, NORTH_SERVICE_NAME)\n        \n        print(\"On/Off of filter starts\")\n        count = 0\n        while count < 3:\n            # Disable the filter\n            update_filter_config(fledge_url, NORTH_PLUGIN_NAME, 'false')\n            time.sleep(wait_time * 2)\n            \n            # Enable the filter\n            update_filter_config(fledge_url, NORTH_PLUGIN_NAME, 'true')\n            time.sleep(wait_time * 2)\n            count += 1\n        \n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url, ASSET)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface, ASSET)\n        \n        # Storage blob JSON will be created every 2 minutes\n        time.sleep(150)\n        \n        \n        _verify_egress(azure_storage_account_url, azure_storage_account_key, azure_storage_container, wait_time, retries, ASSET)\n\n\n\nclass 
TestNorthAzureIoTHubDevicePluginInvalidConfig:\n\n    def test_invalid_connstr(self, reset_fledge, add_south, add_north, fledge_url, enable_schedule, disable_schedule, wait_time, retries):\n        \n        \"\"\" Test the behaviour of the north Azure plugin when its connection string is invalid.\n        \n            reset_fledge: Fixture that resets and cleans up Fledge\n            add_south: Fixture that adds a south instance in disabled mode\n            add_north: Fixture that adds a north instance in disabled mode\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n        \"\"\"\n        # Add South and North\n        add_south(SOUTH_PLUGIN, None, fledge_url, service_name=SOUTH_SERVICE_NAME, start_service=False, installation_type='package')\n        \n        _config = {\n            \"primaryConnectionString\": {\"value\": \"InvalidConfig\"}\n        }\n        # north_branch does not matter as these are archives.fledge-iot.org version install\n        add_north(fledge_url, NORTH_PLUGIN_NAME, None, installation_type='package', north_instance_name=NORTH_SERVICE_NAME, \n                  config=_config, enabled=False, plugin_discovery_name=NORTH_PLUGIN_DISCOVERY_NAME, is_task=False)\n        \n        # Update Asset name\n        ASSET = \"test9_FOGL-7352_system\"\n        config_south(fledge_url, ASSET)\n        \n        # Enable South Service for 10 seconds\n        enable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        time.sleep(wait_time)\n        disable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        \n        # Enable North Service for sending data to Azure IoT Hub\n        enable_schedule(fledge_url, NORTH_SERVICE_NAME)\n        \n        verify_north_stats_on_invalid_config(fledge_url)\n\n    \n    def test_invalid_connstr_sharedkey(self, reset_fledge, add_south, add_north, fledge_url, enable_schedule, disable_schedule, \n                                       
wait_time, retries, azure_host, azure_device, azure_key):\n        \n        \"\"\" Test the behaviour of the north Azure plugin when the shared access key in its connection string is invalid.\n        \n            reset_fledge: Fixture that resets and cleans up Fledge\n            add_south: Fixture that adds a south instance in disabled mode\n            add_north: Fixture that adds a north instance in disabled mode\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n            azure_host: Fixture that provides the hostname of Azure IoT Hub\n            azure_device: Fixture that provides the ID of the device deployed in Azure IoT Hub\n            azure_key: Fixture that provides the access key of Azure IoT Hub\n        \"\"\"\n        # Add South and North\n        add_south(SOUTH_PLUGIN, None, fledge_url, service_name=SOUTH_SERVICE_NAME, start_service=False, installation_type='package')\n        \n        _config = {\n            \"primaryConnectionString\": {\"value\": \"HostName={};DeviceId={};SharedAccessKey={}\".format(azure_host, azure_device, azure_key[:-5])}\n        }\n        # north_branch does not matter as these are archives.fledge-iot.org version install\n        add_north(fledge_url, NORTH_PLUGIN_NAME, None, installation_type='package', north_instance_name=NORTH_SERVICE_NAME, \n                  config=_config, enabled=False, plugin_discovery_name=NORTH_PLUGIN_DISCOVERY_NAME, is_task=False)\n        \n        # Update Asset name\n        ASSET = \"test10_FOGL-7352_system\"\n        config_south(fledge_url, ASSET)\n        \n        # Enable South Service for 10 seconds\n        enable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        time.sleep(wait_time)\n        disable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        \n        # Enable North Service for sending data to Azure IoT Hub\n        enable_schedule(fledge_url, NORTH_SERVICE_NAME)\n        \n        
verify_north_stats_on_invalid_config(fledge_url)\n\n\nclass TestNorthAzureIoTHubDevicePluginLongRun:\n    \n    def test_send_long_run(self, clean_setup_fledge_packages, reset_fledge, add_south_north_service, fledge_url, enable_schedule, \n                           disable_schedule, azure_host, azure_device, azure_key, wait_time, retries, skip_verify_north_interface,\n                           azure_storage_account_url, azure_storage_account_key, azure_storage_container, run_time):\n        \n        \"\"\" Test that data is inserted in Fledge and sent to Azure IoT Hub for a long duration, based on the parameter passed.\n        \n            clean_setup_fledge_packages: Fixture that removes Fledge from the system completely if it is already present \n                                         and reinstalls it based on command-line arguments.\n            reset_fledge: Fixture that resets and cleans up Fledge\n            add_south_north_service: Fixture that adds south and north instances in disabled mode\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n            azure_host: Fixture that provides the hostname of Azure IoT Hub\n            azure_device: Fixture that provides the ID of the device deployed in Azure IoT Hub\n            azure_key: Fixture that provides the access key of Azure IoT Hub\n            azure_storage_account_url: Fixture that provides the URL for accessing the Azure Storage Blob\n            azure_storage_account_key: Fixture that provides the access key for accessing the Storage Blob\n            azure_storage_container: Fixture that provides the name of the container deployed in Azure\n            run_time: Fixture that defines the duration for which this test will be executed.\n        \"\"\"        \n        START_TIME = datetime.datetime.now()\n        current_iteration = 1\n        # Update Asset name\n        ASSET = \"test11_FOGL-7352_system\"\n        config_south(fledge_url, ASSET)\n  
      \n        \n        # Enable South Service for ingesting data into fledge\n        enable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        \n        time.sleep(wait_time)\n        # Enable North Service for sending data to Azure-IOT-Hub\n        enable_schedule(fledge_url, NORTH_SERVICE_NAME)\n        \n        while (datetime.datetime.now() - START_TIME).seconds <= (int(run_time) * 60):\n            verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n            verify_asset(fledge_url, ASSET)\n            verify_statistics_map(fledge_url, skip_verify_north_interface)\n            verify_asset_tracking_details(fledge_url, skip_verify_north_interface, ASSET)\n            \n            # Storage blob JSON will be created every 2 minutes\n            time.sleep(150)\n            \n            \n            _verify_egress(azure_storage_account_url, azure_storage_account_key, azure_storage_container, wait_time, retries, ASSET)\n                \n            print('Successfully ran {} iterations'.format(current_iteration), datetime.datetime.now())\n            current_iteration += 1\n            current_duration = (datetime.datetime.now() - START_TIME).seconds\n        \n        # Disable South Service  \n        disable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        \n        # Disable North Service \n        disable_schedule(fledge_url, NORTH_SERVICE_NAME)\n        "
  },
  {
    "path": "tests/system/python/packages/test_north_pi_webapi_nw_throttle.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test sending data to PI using Web API under a distorted network.\n\n\"\"\"\n\n__author__ = \"Deepanshu Yadav\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems Inc\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nimport subprocess\nimport http.client\nimport pytest\nimport os\nimport time\nimport utils\nimport json\nfrom pathlib import Path\nimport urllib.parse\nimport base64\nimport ssl\nimport csv\n\nfrom network_impairment import distort_network, reset_network\n\nASSET = \"Sine-FOGL-6333\"\nDATAPOINT = \"sinusoid\"\nNORTH_INSTANCE_NAME = \"NorthReadingsToPI_WebAPI\"\nSOUTH_SERVICE_NAME = \"Sinusoid-FOGL-6333\"\n# This  gives the path of directory where fledge is cloned. test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/scripts/package/\".format(PROJECT_ROOT)\nDATA_DIR_ROOT = \"{}/tests/system/python/packages/data/\".format(PROJECT_ROOT)\n\nAF_HIERARCHY_LEVEL = \"throttlednetworktest/throttlednetworktestlvl2/throttlednetworktestlvl3\"\n\n\n@pytest.fixture\ndef reset_fledge(wait_time):\n    try:\n        subprocess.run([\"cd {} && ./reset\"\n                       .format(SCRIPTS_DIR_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n\n\n@pytest.fixture\ndef install_netem(wait_time):\n    try:\n        subprocess.run([\"sudo apt install net-tools iproute2\"], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"Could not install netem \"\n\n\ndef verify_ping(fledge_url, north_catch_up_time):\n    get_url = \"/fledge/ping\"\n    ping_result = utils.get_request(fledge_url, get_url)\n    assert \"dataRead\" in ping_result\n    assert \"dataSent\" in ping_result\n    assert 0 < 
ping_result['dataRead'], \"South data NOT seen in ping header\"\n\n    assert ping_result['dataRead'] == ping_result['dataSent'], \"Could not send all\" \\\n                                                               \" the data even after \" \\\n                                                               \"waiting {} \" \\\n                                                               \"seconds.\".format(north_catch_up_time)\n\n\ndef change_category(fledge_url, cat_name, config_item, value):\n    \"\"\"\n    Changes the value of a configuration item in the given category.\n    Args:\n        fledge_url: The URL of the Fledge API.\n        cat_name: The category name.\n        config_item: The configuration item to be changed.\n        value: The new value of the configuration item.\n    Returns: None. Prints the HTTP status and the decoded API response.\n    \"\"\"\n    conn = http.client.HTTPConnection(fledge_url)\n    body = {\"value\": str(value)}\n    json_data = json.dumps(body)\n    conn.request(\"PUT\", '/fledge/category/{}/{}'.format(cat_name, config_item), json_data)\n    r = conn.getresponse()\n    # assert 200 == r.status, 'Could not change config item'\n    print(r.status)\n    r = r.read().decode()\n    conn.close()\n    retval = json.loads(r)\n    print(retval)\n\n\n@pytest.fixture\ndef start_south_north(add_south, start_north_task_omf_web_api, add_filter, remove_data_file,\n                      fledge_url, pi_host, pi_port, pi_admin, pi_passwd, pi_db,\n                      start_north_omf_as_a_service, start_north_as_service,\n                      enable_schedule, clear_pi_system_through_pi_web_api, asset_name=ASSET):\n    \"\"\" This fixture starts the sinusoid plugin and north pi web api plugin. 
Also puts a filter\n        to insert reading id as a datapoint when we send the data to north.\n        clean_setup_fledge_packages: purge the fledge* packages and install latest for given repo url\n        add_south: Fixture that adds a south service with given configuration\n        start_north_task_omf_web_api: Fixture that starts PI north task\n        remove_data_file: Fixture that remove data file created during the tests \"\"\"\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    # There are two data points here. 1. sinusoid\n    # 2. id_datapoint\n    dp_list = ['sinusoid', 'id_datapoint']\n    asset_dict = {}\n    asset_dict[ASSET] = dp_list\n    clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n                                       af_hierarchy_level_list, asset_dict)\n\n    south_plugin = \"sinusoid\"\n    # south_branch does not matter as these are archives.fledge-iot.org version install\n    _config = {\"assetName\": {\"value\": ASSET}}\n    add_south(south_plugin, None, fledge_url, config=_config,\n              service_name=SOUTH_SERVICE_NAME, installation_type='package', start_service=False)\n    if not start_north_as_service:\n        start_north_task_omf_web_api(fledge_url, pi_host, pi_port, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                     start_task=False, taskname=NORTH_INSTANCE_NAME,\n                                     default_af_location=AF_HIERARCHY_LEVEL)\n    else:\n        start_north_omf_as_a_service(fledge_url, pi_host, pi_port, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                     start=False, service_name=NORTH_INSTANCE_NAME,\n                                     default_af_location=AF_HIERARCHY_LEVEL)\n\n    add_filter(\"python35\", None, \"py35\", {}, fledge_url, None, installation_type='package',\n               only_installation=True)\n\n    data = {\"name\": \"py35\", \"plugin\": \"python35\", \"filter_config\": {\"enable\": \"true\"}}\n    
utils.post_request(fledge_url, \"/fledge/filter\", data)\n\n    data = {\"pipeline\": [\"py35\"]}\n    put_url = \"/fledge/filter/{}/pipeline?allow_duplicates=true&append_filter=true\" \\\n        .format(NORTH_INSTANCE_NAME)\n    utils.put_request(fledge_url, urllib.parse.quote(put_url, safe='?,=,&,/'), data)\n\n    url = fledge_url + urllib.parse.quote('/fledge/category/{}_py35/script/upload'\n                                          .format(NORTH_INSTANCE_NAME))\n    script_path = 'script=@{}/set_id.py'.format(DATA_DIR_ROOT)\n    upload_script = \"curl -sX POST '{}' -F '{}'\".format(url, script_path)\n    exit_code = os.system(upload_script)\n    assert 0 == exit_code\n\n    enable_schedule(fledge_url, NORTH_INSTANCE_NAME)\n    time.sleep(3)\n    enable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n    time.sleep(1)\n    yield start_south_north\n\n\ndef get_total_readings(fledge_url):\n    \"\"\"\n    Fetches the reading for an asset\n    Args:\n        fledge_url: The url of fledge . By default localhost:8081\n    Returns: The first element in the list of json strings. 
(A dictionary)\n    \"\"\"\n\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"GET\", '/fledge/asset')\n    r = conn.getresponse()\n    assert 200 == r.status, \"Could not get total readings from fledge\"\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    return jdoc[0]['count']\n\n\ndef get_bulk_data_from_pi(host, admin, password, asset_name, data_point_name):\n    \"\"\"Used for getting bulk data < 100000 from PI.\"\"\"\n    username_password = \"{}:{}\".format(admin, password)\n    username_password_b64 = base64.b64encode(username_password.encode('ascii')).decode(\"ascii\")\n    headers = {'Authorization': 'Basic %s' % username_password_b64}\n    try:\n        conn = http.client.HTTPSConnection(host, context=ssl._create_unverified_context())\n        conn.request(\"GET\", '/piwebapi/dataservers', headers=headers)\n        res = conn.getresponse()\n        r = json.loads(res.read().decode())\n        points_url = r[\"Items\"][0][\"Links\"][\"Points\"]\n    except Exception:\n        assert False, \"Could not request data server of PI\"\n\n    try:\n        conn.request(\"GET\", points_url, headers=headers)\n        res = conn.getresponse()\n        points = json.loads(res.read().decode())\n    except Exception:\n        assert False, \"Could not get Points data.\"\n\n    name_to_search = asset_name + '.' 
+ data_point_name\n    for single_point in points['Items']:\n\n        if name_to_search in single_point[\"Name\"]:\n            web_id = single_point[\"WebId\"]\n            pi_point_name = single_point[\"Name\"]\n            url = single_point[\"Links\"][\"RecordedData\"]\n            full_url = url + '?startTime=*-1d&endTime=*&maxCount=100000'\n            try:\n                conn.request(\"GET\", full_url, headers=headers)\n                res = conn.getresponse()\n                r = json.loads(res.read().decode())\n            except Exception:\n                assert False, \"Could not get Required data from PI\"\n\n            required_values = []\n            # ignoring first value as it is not needed.\n            for full_value in r[\"Items\"][1:]:\n                required_values.append(full_value['Value'])\n\n            assert required_values != [], \"Could not get required values for PI point.\"\n\n            # The last reading will come from API if we wait for a few moments.\n            # So not required to insert the last reading.\n            # url_for_last_value = single_point[\"Links\"][\"EndValue\"]\n            # conn.request(\"GET\", url_for_last_value, headers=headers)\n            # res = conn.getresponse()\n            # r = json.loads(res.read().decode())\n            # assert \"Value\" in r, \"Could not fetch the last reading from PI.\"\n            # required_values.append(r[\"Value\"])\n\n            conn.close()\n            return required_values\n\n    assert False, \"Could not find {} in all PI points\".format(name_to_search)\n\n\ndef search_for_pi_point(host, admin, password, asset_name, data_point_name):\n    \"\"\"Searches for a pi point in PI return its web_id and its full name in PI.\"\"\"\n    username_password = \"{}:{}\".format(admin, password)\n\n    username_password_b64 = base64.b64encode(username_password.encode('ascii')).decode(\"ascii\")\n    headers = {'Authorization': 'Basic %s' % username_password_b64, 
'Content-Type': 'application/json'}\n    try:\n        conn = http.client.HTTPSConnection(host, context=ssl._create_unverified_context())\n        conn.request(\"GET\", '/piwebapi/dataservers', headers=headers)\n        res = conn.getresponse()\n        r = json.loads(res.read().decode())\n        points_url = r[\"Items\"][0][\"Links\"][\"Points\"]\n    except Exception:\n        assert False, \"Could not request data server of PI\"\n\n    try:\n        conn.request(\"GET\", points_url, headers=headers)\n        res = conn.getresponse()\n        points = json.loads(res.read().decode())\n    except Exception:\n        assert False, \"Could not get Points data.\"\n\n    name_to_search = asset_name + '.' + data_point_name\n    for single_point in points['Items']:\n\n        if name_to_search in single_point['Name']:\n            web_id = single_point['WebId']\n            pi_point_name = single_point[\"Name\"]\n            conn.close()\n            return web_id, pi_point_name\n\n    return None, None\n\n\ndef turn_off_compression_for_pi_point(host, admin, password, asset_name, data_point_name):\n    \"\"\"Turns off compression for a given point in PI.\"\"\"\n    username_password = \"{}:{}\".format(admin, password)\n\n    username_password_b64 = base64.b64encode(username_password.encode('ascii')).decode(\"ascii\")\n    headers = {'Authorization': 'Basic %s' % username_password_b64, 'Content-Type': 'application/json'}\n    try:\n        web_id, pi_point_name = search_for_pi_point(host, admin, password, asset_name, data_point_name)\n        if not web_id:\n            assert False, \"Could not search PI Point {}\".format(data_point_name)\n\n        conn = http.client.HTTPSConnection(host, context=ssl._create_unverified_context())\n        attr_name = 'compressing'\n        conn.request(\"PUT\", '/piwebapi/points/{}/attributes/{}'.format(web_id, attr_name),\n                     body=\"0\", headers=headers)\n        r = conn.getresponse()\n        conn.close()\n        
assert r.status == 204, \"Could not update the compression\" \\\n                                \" for the PI Point {}.\".format(pi_point_name)\n    except Exception as er:\n        print(\"Could not turn off compression for pi point {} due to {}\".format(data_point_name, er))\n        assert False, \"Could not turn off compression for pi point {} due to {}\".format(data_point_name, er)\n\n    print(\"Turned off compression for the PI Point {} \".format(pi_point_name))\n    return\n\n\nclass TestPackagesSinusoid_PI_WebAPI:\n\n    def test_omf_in_impaired_network(self, clean_setup_fledge_packages, reset_fledge,\n                                     install_netem, start_south_north, read_data_from_pi_web_api,\n                                     fledge_url, pi_host, pi_admin, pi_passwd, pi_db,\n                                     wait_time, retries, skip_verify_north_interface,\n                                     south_service_wait_time, north_catch_up_time, pi_port,\n                                     throttled_network_config, disable_schedule,\n                                     enable_schedule, clear_pi_system_through_pi_web_api,\n                                     asset_name=ASSET):\n        \"\"\" Test that checks data is inserted in Fledge and sent to PI under an impaired network.\n            start_south_north: Fixture that add south and north instance\n            read_data_from_pi: Fixture to read data from PI\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/asset/<asset_name>\n                data received from PI is same as data sent\n        \"\"\"\n\n        duration = south_service_wait_time + north_catch_up_time\n\n        try:\n            interface_for_impairment = 
throttled_network_config['interface']\n        except KeyError:\n            raise Exception(\"Interface not given for network impairment.\")\n        try:\n            packet_delay = int(throttled_network_config['packet_delay'])\n        except KeyError:\n            packet_delay = None\n        try:\n            rate_limit = int(throttled_network_config['rate_limit'])\n        except KeyError:\n            rate_limit = None\n        if not rate_limit and not packet_delay:\n            raise Exception(\"Neither packet delay nor rate limit given, \"\n                            \"cannot apply network impairment.\")\n        # Insert some readings before turning off compression.\n        time.sleep(3)\n        # Turn off south service\n        disable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n        time.sleep(5)\n        # switch off Compression\n        verify_ping(fledge_url, 5)\n        dp_name = 'id_datapoint'\n        turn_off_compression_for_pi_point(pi_host, pi_admin, pi_passwd, ASSET, dp_name)\n\n        # allow the newly applied compression setting to be saved.\n        time.sleep(2)\n\n        # Restart the south service\n        enable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n\n        # Wait for the south service to start and ingest a few readings.\n        time.sleep(10)\n        # Increase the ingest rate.\n        # Note down the total readings ingested\n        initial_readings = int(get_total_readings(fledge_url))\n\n        print(\"Initial readings ingested {} \\n\".format(initial_readings))\n        change_category(fledge_url, SOUTH_SERVICE_NAME + \"Advanced\", \"readingsPerSec\", 3000)\n\n        # Now we can distort the network.\n        distort_network(interface=interface_for_impairment, traffic=\"outbound\",\n                        latency=packet_delay,\n                        rate_limit=rate_limit, ip=pi_host, port=pi_port, duration=duration)\n\n        # Wait for south service to accumulate some readings\n        
time.sleep(south_service_wait_time)\n\n        # now shutdown the south service\n        disable_schedule(fledge_url, SOUTH_SERVICE_NAME)\n\n        # Wait for north task or service to send these accumulated readings.\n        time.sleep(north_catch_up_time)\n\n        # clear up all the distortions on this network.\n        reset_network(interface=interface_for_impairment)\n        verify_ping(fledge_url, north_catch_up_time)\n\n        # verify the bulk data from PI.\n        data_from_pi = get_bulk_data_from_pi(pi_host, pi_admin, pi_passwd, ASSET, dp_name)\n\n        af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n        dp_list = ['sinusoid', dp_name]\n        asset_dict = {}\n        asset_dict[ASSET] = dp_list\n        clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n                                           af_hierarchy_level_list, asset_dict)\n\n        assert len(data_from_pi) > 0, \"Could not fetch data from PI.\"\n\n        data_from_pi = [int(d) for d in data_from_pi]\n        # opening the csv file in 'w+' mode\n        file_csv = open('readings_from_PI.csv', 'w+', newline='')\n\n        # writing the data into the file\n        with file_csv:\n            write = csv.writer(file_csv)\n            for d in data_from_pi:\n                write.writerow([d])\n\n        total_readings = int(get_total_readings(fledge_url))\n        print(\"Total readings from Fledge {}\\n\".format(total_readings))\n        discontinuities = [data_from_pi[i] for i in range(len(data_from_pi)-1) if data_from_pi[i+1] != data_from_pi[i]+1]\n        discontinuities = sorted(discontinuities)\n        print(discontinuities)\n        assert total_readings == data_from_pi[-1], \"The last reading from Fledge {} \" \\\n                                                   \"is not the same as PI {}\".format(total_readings, data_from_pi[-1])\n        for val in discontinuities:\n            assert val < initial_readings, \"There is a gap at reading 
{} \" \\\n                                           \"after permissible value {}\".format(val, initial_readings)\n"
  },
  {
    "path": "tests/system/python/packages/test_omf_naming_scheme.py",
    "content": "# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test OMF North Service System tests:\n        Tests OMF as a north service along with reconfiguration.\n\"\"\"\n\n__author__ = \"Yash Tatkondawar\"\n__copyright__ = \"Copyright (c) 2021 Dianomic Systems, Inc.\"\n\nimport subprocess\nimport time\nfrom pathlib import Path\nimport pytest\nimport utils\nimport os\n\nsouth_plugin = \"coap\"\nsouth_asset_name = \"coap-omf-naming\"\nsouth_service_name = \"CoAP #1\"\nnorth_plugin = \"OMF\"\nnorth_task_name = \"NorthReadingsToPI_WebAPI\"\nTEMPLATE_NAME = \"template.json\"\nDATAPOINT = \"sensor\"\nDATAPOINT_VALUE = 20\n# This  gives the path of directory where fledge is cloned. test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/scripts/package/\".format(PROJECT_ROOT)\nAF_HIERARCHY_LEVEL = \"namingscheme/namingschemelvl2/namingschemelvl3\"\n\n\n@pytest.fixture\ndef reset_fledge(wait_time):\n    try:\n        subprocess.run([\"cd {} && ./reset\"\n                       .format(SCRIPTS_DIR_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n\n\n@pytest.fixture\ndef start_south(add_south, remove_data_file, fledge_url, clear_pi_system_through_pi_web_api,\n                pi_host, pi_port, pi_admin, pi_passwd, pi_db):\n    \"\"\" This fixture\n        clean_setup_fledge_packages: purge the fledge* packages and install latest for given repo url\n        add_south: Fixture that adds a south service with given configuration\n        start_north_task_omf_web_api: Fixture that starts PI north task\n        remove_data_file: Fixture that remove data file created during the tests \"\"\"\n\n    # Define the template file for fogbench\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    # There are two data points here. 1. 
DATAPOINT\n    # 2. no data point (Asset name be used in this case.)\n    dp_list = [DATAPOINT, '']\n    asset_dict = {}\n    asset_dict[south_asset_name] = dp_list\n    clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n                                       af_hierarchy_level_list, asset_dict)\n\n    fogbench_template_path = os.path.join(\n        os.path.expandvars('{}'.format(PROJECT_ROOT)), 'data/{}'.format(TEMPLATE_NAME))\n    with open(fogbench_template_path, \"w\") as f:\n        f.write(\n            '[{\"name\": \"%s\", \"sensor_values\": '\n            '[{\"name\": \"%s\", \"type\": \"number\", \"min\": %d, \"max\": %d, \"precision\": 0}]}]' % (\n                south_asset_name, DATAPOINT, DATAPOINT_VALUE, DATAPOINT_VALUE))\n\n    add_south(south_plugin, None, fledge_url, service_name=\"{}\".format(south_service_name), installation_type='package')\n\n    yield start_south\n\n    # Cleanup code that runs after the caller test is over\n    remove_data_file(fogbench_template_path)\n\n\ndef verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries):\n    get_url = \"/fledge/ping\"\n    ping_result = utils.get_request(fledge_url, get_url)\n    assert \"dataRead\" in ping_result\n    assert \"dataSent\" in ping_result\n    assert 0 < ping_result['dataRead'], \"South data NOT seen in ping header\"\n\n    retry_count = 1\n    sent = 0\n    if not skip_verify_north_interface:\n        while retries > retry_count:\n            sent = ping_result[\"dataSent\"]\n            if sent >= 1:\n                break\n            else:\n                time.sleep(wait_time)\n\n            retry_count += 1\n            ping_result = utils.get_request(fledge_url, get_url)\n\n        assert 1 <= sent, \"Failed to send data via PI Web API using Basic auth\"\n    return ping_result\n\n\ndef verify_statistics_map(fledge_url, skip_verify_north_interface):\n    get_url = \"/fledge/statistics\"\n    jdoc = utils.get_request(fledge_url, 
get_url)\n    actual_stats_map = utils.serialize_stats_map(jdoc)\n    assert 1 <= actual_stats_map[south_asset_name.upper()]\n    assert 1 <= actual_stats_map['READINGS']\n    if not skip_verify_north_interface:\n        assert 1 <= actual_stats_map['Readings Sent']\n        assert 1 <= actual_stats_map[north_task_name]\n\n\ndef verify_service_added(fledge_url):\n    get_url = \"/fledge/south\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"services\"])\n    assert south_service_name in [s[\"name\"] for s in result[\"services\"]]\n\n    get_url = \"/fledge/service\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"services\"])\n    assert south_service_name in [s[\"name\"] for s in result[\"services\"]]\n\n\ndef verify_asset(fledge_url):\n    get_url = \"/fledge/asset\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result), \"No asset found\"\n    assert south_asset_name in [s[\"assetCode\"] for s in result]\n\n\ndef verify_asset_tracking_details(fledge_url, skip_verify_north_interface):\n    tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n    assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n    tracked_item = tracking_details[\"track\"][0]\n    assert south_service_name == tracked_item[\"service\"]\n    assert south_asset_name == tracked_item[\"asset\"]\n    assert south_plugin == tracked_item[\"plugin\"]\n\n    if not skip_verify_north_interface:\n        egress_tracking_details = utils.get_asset_tracking_details(fledge_url, \"Egress\")\n        assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n        tracked_item = egress_tracking_details[\"track\"][0]\n        assert north_task_name == tracked_item[\"service\"]\n        assert south_asset_name == tracked_item[\"asset\"]\n        assert north_plugin == tracked_item[\"plugin\"]\n\n\ndef _verify_egress(read_data_from_pi_web_api, pi_host, 
pi_admin, pi_passwd, pi_db, wait_time, retries,\n                   recorded_datapoint, pi_asset_name):\n    retry_count = 0\n    data_from_pi = None\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n\n    while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n        data_from_pi = read_data_from_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db, af_hierarchy_level_list,\n                                                 pi_asset_name, '')\n        retry_count += 1\n        time.sleep(wait_time * 2)\n\n    if data_from_pi is None or retry_count == retries:\n        assert False, \"Failed to read data from PI\"\n\n    assert int(data_from_pi) == DATAPOINT_VALUE\n\n\nclass TestOMFNamingScheme:\n    def test_omf_with_concise_naming(self, clean_setup_fledge_packages, reset_fledge, start_south,\n                                     start_north_task_omf_web_api,\n                                     read_data_from_pi_web_api, skip_verify_north_interface, fledge_url,\n                                     wait_time, retries, pi_host, pi_port, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test OMF with concise naming scheme.\n            clean_setup_fledge_packages: Fixture to remove and install latest fledge packages\n            reset_fledge: Fixture to reset fledge\n            start_south: Adds and configures south service\n            start_north_task_omf_web_api: Adds and configures north service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        
start_north_task_omf_web_api(fledge_url, pi_host, pi_port, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                        naming_scheme=\"Concise\", default_af_location=AF_HIERARCHY_LEVEL)\n        subprocess.run(\n            [\"cd {}/extras/python; python3 -m fogbench -t ../../data/{}; cd -\".format(PROJECT_ROOT, TEMPLATE_NAME)],\n            shell=True, check=True)\n\n        # Wait until south and north services/tasks are created and some data is loaded\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        if not skip_verify_north_interface:\n            recorded_datapoint = \"{}\".format(south_asset_name)\n            # Name of asset in the PI server\n            pi_asset_name = \"{}\".format(south_asset_name)\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           recorded_datapoint, pi_asset_name)\n\n    def test_omf_with_type_suffix_naming(self, reset_fledge, start_south, start_north_task_omf_web_api,\n                                         read_data_from_pi_web_api, skip_verify_north_interface, fledge_url,\n                                         wait_time, retries, pi_host, pi_port, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test OMF with the Use Type Suffix naming scheme.\n            clean_setup_fledge_packages: Fixture to remove and install latest fledge packages\n            reset_fledge: Fixture to reset fledge\n            start_south: Adds and configures south service\n            start_north_task_omf_web_api: Adds and configures north service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: 
Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        start_north_task_omf_web_api(fledge_url, pi_host, pi_port, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                        naming_scheme=\"Use Type Suffix\", default_af_location=AF_HIERARCHY_LEVEL)\n        subprocess.run(\n            [\"cd {}/extras/python; python3 -m fogbench -t ../../data/{}; cd -\".format(PROJECT_ROOT, TEMPLATE_NAME)],\n            shell=True, check=True)\n\n        # Wait until south and north services/tasks are created and some data is loaded\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        if not skip_verify_north_interface:\n            type_id = 1\n            recorded_datapoint = \"{}\".format(south_asset_name)\n            # Name of asset in the PI server\n            pi_asset_name = \"{}\".format(south_asset_name)\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           recorded_datapoint, pi_asset_name)\n\n    def test_omf_with_attribute_hash_naming(self, reset_fledge, start_south, start_north_task_omf_web_api,\n                                            read_data_from_pi_web_api, skip_verify_north_interface, fledge_url,\n                                            wait_time, retries, pi_host, pi_port, pi_admin, pi_passwd, 
pi_db):\n        \"\"\" Test OMF with the Use Attribute Hash naming scheme.\n            clean_setup_fledge_packages: Fixture to remove and install latest fledge packages\n            reset_fledge: Fixture to reset fledge\n            start_south: Adds and configures south service\n            start_north_task_omf_web_api: Adds and configures north service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        start_north_task_omf_web_api(fledge_url, pi_host, pi_port, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                        naming_scheme=\"Use Attribute Hash\", default_af_location=AF_HIERARCHY_LEVEL)\n        subprocess.run(\n            [\"cd {}/extras/python; python3 -m fogbench -t ../../data/{}; cd -\".format(PROJECT_ROOT, TEMPLATE_NAME)],\n            shell=True, check=True)\n\n        # Wait until south and north services/tasks are created and some data is loaded\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        if not skip_verify_north_interface:\n            type_id = 1\n            recorded_datapoint = \"{}\".format(south_asset_name)\n            # Name of asset in the PI server\n            pi_asset_name = \"{}\".format(south_asset_name)\n            
_verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           recorded_datapoint, pi_asset_name)\n\n    def test_omf_with_backward_compatibility_naming(self, reset_fledge, start_south, start_north_task_omf_web_api,\n                                                    read_data_from_pi_web_api, skip_verify_north_interface, fledge_url,\n                                                    wait_time, retries, pi_host, pi_port, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test OMF with the default naming scheme for backward compatibility.\n            clean_setup_fledge_packages: Fixture to remove and install latest fledge packages\n            reset_fledge: Fixture to reset fledge\n            start_south: Adds and configures south service\n            start_north_task_omf_web_api: Adds and configures north service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        start_north_task_omf_web_api(fledge_url, pi_host, pi_port, pi_user=pi_admin,\n                                     pi_pwd=pi_passwd, default_af_location=AF_HIERARCHY_LEVEL)\n        subprocess.run(\n            [\"cd {}/extras/python; python3 -m fogbench -t ../../data/{}; cd -\".format(PROJECT_ROOT, TEMPLATE_NAME)],\n            shell=True, check=True)\n\n        # Wait until south and north services/tasks are created and some data is loaded\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n       
 verify_service_added(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        if not skip_verify_north_interface:\n            type_id = 1\n            recorded_datapoint = \"{}\".format(south_asset_name)\n            # Name of asset in the PI server\n            pi_asset_name = \"{}\".format(south_asset_name)\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           recorded_datapoint, pi_asset_name)\n"
  },
  {
    "path": "tests/system/python/packages/test_omf_north_service.py",
    "content": "# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test OMF North Service System tests:\n        Tests OMF as a north service along with reconfiguration.\n\"\"\"\n\n__author__ = \"Yash Tatkondawar\"\n__copyright__ = \"Copyright (c) 2021 Dianomic Systems, Inc.\"\n\nimport subprocess\nimport time\nimport urllib.parse\nfrom pathlib import Path\nimport pytest\nimport utils\n\nsouth_plugin = \"sinusoid\"\nsouth_asset_name = \"omf_svc_sinusoid\"\nsouth_service_name = \"Sine #1\"\nnorth_plugin = \"OMF\"\nnorth_service_name = \"NorthReadingsToPI_WebAPI\"\nnorth_schedule_id = \"\"\nfilter1_name = \"SF1\"\nfilter2_name = \"SF2\"\nfilter3_name = \"MD1\"\n# This  gives the path of directory where fledge is cloned. test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/scripts/package/\".format(PROJECT_ROOT)\nAF_HIERARCHY_LEVEL = 'omfsvc/omfsvclvl2/omfsvclvl3'\n\n\n@pytest.fixture\ndef reset_fledge(wait_time):\n    try:\n        subprocess.run([\"cd {} && ./reset\"\n                       .format(SCRIPTS_DIR_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n\n\n@pytest.fixture\ndef start_south_north(add_south, start_north_omf_as_a_service, fledge_url,\n                      pi_host, pi_port, pi_admin, pi_passwd, clear_pi_system_through_pi_web_api, pi_db):\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    # There are three data points here. 1. sinusoid   2. name for meta data\n    # 3. 
no data point (Asset name be used in this case.)\n    dp_list = ['sinusoid', 'name', '']\n    asset_dict = {}\n    asset_dict[south_asset_name] = dp_list\n    clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n                                       af_hierarchy_level_list, asset_dict)\n\n    global north_schedule_id\n    south_config = {\"assetName\": {\"value\": south_asset_name}}\n    add_south(south_plugin, None, fledge_url, service_name=\"{}\".format(south_service_name),\n              config=south_config, installation_type='package')\n\n    response = start_north_omf_as_a_service(fledge_url, pi_host, pi_port,\n                                            pi_user=pi_admin, pi_pwd=pi_passwd,\n                                            default_af_location=AF_HIERARCHY_LEVEL)\n    north_schedule_id = response[\"id\"]\n\n    yield start_south_north\n\n\n@pytest.fixture\ndef add_configure_filter(add_filter, fledge_url):\n    filter_cfg_scale = {\"enable\": \"true\"}\n    add_filter(\"scale\", None, filter1_name, filter_cfg_scale, fledge_url, north_service_name,\n               installation_type='package')\n\n    yield add_configure_filter\n\n\ndef verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries):\n    get_url = \"/fledge/ping\"\n    ping_result = utils.get_request(fledge_url, get_url)\n    assert \"dataRead\" in ping_result\n    assert \"dataSent\" in ping_result\n    assert 0 < ping_result['dataRead'], \"South data NOT seen in ping header\"\n\n    retry_count = 1\n    sent = 0\n    if not skip_verify_north_interface:\n        while retries > retry_count:\n            sent = ping_result[\"dataSent\"]\n            if sent >= 1:\n                break\n            else:\n                time.sleep(wait_time)\n\n            retry_count += 1\n            ping_result = utils.get_request(fledge_url, get_url)\n\n        assert 1 <= sent, \"Failed to send data via PI Web API using Basic auth\"\n    return ping_result\n\n\ndef 
verify_statistics_map(fledge_url, skip_verify_north_interface):\n    get_url = \"/fledge/statistics\"\n    jdoc = utils.get_request(fledge_url, get_url)\n    actual_stats_map = utils.serialize_stats_map(jdoc)\n    assert 1 <= actual_stats_map[south_asset_name.upper()]\n    assert 1 <= actual_stats_map['READINGS']\n    if not skip_verify_north_interface:\n        assert 1 <= actual_stats_map['Readings Sent']\n        assert 1 <= actual_stats_map[north_service_name]\n\n\ndef verify_service_added(fledge_url):\n    get_url = \"/fledge/south\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"services\"])\n    assert south_service_name in [s[\"name\"] for s in result[\"services\"]]\n\n    get_url = \"/fledge/north\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result)\n    assert north_service_name in [s[\"name\"] for s in result]\n\n    get_url = \"/fledge/service\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"services\"])\n    assert south_service_name in [s[\"name\"] for s in result[\"services\"]]\n    assert north_service_name in [s[\"name\"] for s in result[\"services\"]]\n\n\ndef get_filters(fledge_url):\n    get_url = \"/fledge/filter\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"filters\"])\n    filters = [s[\"name\"] for s in result[\"filters\"]]\n    return filters\n\n\ndef verify_asset(fledge_url):\n    get_url = \"/fledge/asset\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result), \"No asset found\"\n    assert south_asset_name in [s[\"assetCode\"] for s in result]\n\n\ndef verify_asset_tracking_details(fledge_url, skip_verify_north_interface):\n    tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n    assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n    tracked_item = tracking_details[\"track\"][0]\n    assert south_service_name == tracked_item[\"service\"]\n    
assert south_asset_name == tracked_item[\"asset\"]\n    assert south_plugin == tracked_item[\"plugin\"]\n\n    if not skip_verify_north_interface:\n        egress_tracking_details = utils.get_asset_tracking_details(fledge_url, \"Egress\")\n        assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n        tracked_item = egress_tracking_details[\"track\"][0]\n        assert north_service_name == tracked_item[\"service\"]\n        assert south_asset_name == tracked_item[\"asset\"]\n        assert north_plugin == tracked_item[\"plugin\"]\n\n\ndef _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries, asset_name):\n    retry_count = 0\n    data_from_pi = None\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    type_id = 1\n    recorded_datapoint = asset_name\n    # Name of asset in the PI server\n    pi_asset_name = asset_name\n\n    while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n        data_from_pi = read_data_from_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db, af_hierarchy_level_list,\n                                                 pi_asset_name, '')\n        retry_count += 1\n        time.sleep(wait_time * 2)\n\n    if data_from_pi is None or retry_count == retries:\n        assert False, \"Failed to read data from PI\"\n\n\nclass TestOMFNorthService:\n    def test_omf_service_with_restart(self, clean_setup_fledge_packages, reset_fledge, start_south_north,\n                                      read_data_from_pi_web_api, skip_verify_north_interface, fledge_url,\n                                      wait_time, retries, pi_host, pi_port, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test OMF as a North service before and after restarting fledge.\n            clean_setup_fledge_packages: Fixture to remove and install latest fledge packages\n            reset_fledge: Fixture to reset fledge\n            start_south_north: Adds and configures 
south(sinusoid) and north(OMF) service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/filter\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        put_url = \"/fledge/restart\"\n        utils.put_request(fledge_url, urllib.parse.quote(put_url))\n\n        # Wait for fledge to restart\n        time.sleep(wait_time * 2)\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after restart\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           south_asset_name)\n\n    def 
test_omf_service_with_enable_disable(self, reset_fledge, start_south_north, read_data_from_pi_web_api,\n                                             skip_verify_north_interface, wait_fix,\n                                             fledge_url, wait_time, retries, pi_host, pi_port, pi_admin, pi_passwd,\n                                             pi_db):\n        \"\"\" Test OMF as a North service by enabling and disabling it.\n            reset_fledge: Fixture to reset fledge\n            start_south_north: Adds and configures south(sinusoid) and north(OMF) service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/filter\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        data = {\"enabled\": \"false\"}\n        put_url = \"/fledge/schedule/{}\".format(north_schedule_id)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert False == resp['schedule']['enabled']\n        print(f\"Waiting for {wait_fix} seconds for delay caused by FOGL-8813 - tune pre-fetch buffers...\")\n        time.sleep(wait_fix)\n        data = {\"enabled\": 
\"true\"}\n        put_url = \"/fledge/schedule/{}\".format(north_schedule_id)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert True == resp['schedule']['enabled']\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after disable/enable\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           south_asset_name)\n\n    def test_omf_service_with_delete_add(self, reset_fledge, start_south_north, read_data_from_pi_web_api,\n                                         start_north_omf_as_a_service,\n                                         skip_verify_north_interface, fledge_url, wait_time, retries, pi_host, pi_port,\n                                         pi_admin, pi_passwd, pi_db):\n        \"\"\" Test OMF as a North service by deleting and adding north service.\n            reset_fledge: Fixture to reset fledge\n            start_south_north: Adds and configures south(sinusoid) and north(OMF) service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/filter\n                on endpoint GET /fledge/service\n                on endpoint 
GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        global north_schedule_id\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        delete_url = \"/fledge/service/{}\".format(north_service_name)\n        resp = utils.delete_request(fledge_url, delete_url)\n        assert \"Service {} deleted successfully.\".format(north_service_name) == resp['result']\n\n        response = start_north_omf_as_a_service(fledge_url, pi_host, pi_port, pi_user=pi_admin,\n                                                           pi_pwd=pi_passwd)\n        north_schedule_id = response[\"id\"]\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after delete/add of north service\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           south_asset_name)\n\n    def test_omf_service_with_reconfig(self, reset_fledge, start_south_north, read_data_from_pi_web_api,\n                                       skip_verify_north_interface, fledge_url, wait_fix,\n        
                               wait_time, retries, pi_host, pi_port, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test OMF as a North service by reconfiguring it.\n            reset_fledge: Fixture to reset fledge\n            start_south_north: Adds and configures south(sinusoid) and north(OMF) service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/filter\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        # Good reconfiguration to check data is sent\n        data = {\"SendFullStructure\": \"false\"}\n        put_url = \"/fledge/category/{}\".format(north_service_name)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert \"false\" == resp[\"SendFullStructure\"][\"value\"]\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after 
good reconfiguration\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n\n        # Bad reconfiguration to check data is not sent\n        data = {\"PIWebAPIUserId\": \"Admin\"}\n        put_url = \"/fledge/category/{}\".format(north_service_name)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert \"Admin\" == resp[\"PIWebAPIUserId\"][\"value\"]\n        print(f\"Waiting for {wait_fix} seconds for delay caused by FOGL-8738 - north statistics update thread de-prioritised....\")\n        time.sleep(wait_fix)\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies that Read readings are increasing while Sent readings remain unchanged after bad reconfiguration\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] == new_ping_result['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           south_asset_name)\n\n\nclass TestOMFNorthServiceWithFilters:\n    def test_omf_service_with_filter(self, reset_fledge, start_south_north, add_configure_filter,\n                                     read_data_from_pi_web_api,\n                                     skip_verify_north_interface, fledge_url, wait_time, retries, pi_host, pi_port,\n                                     pi_admin, pi_passwd, pi_db):\n        \"\"\" Test OMF as a North service with filters.\n            reset_fledge: Fixture to reset fledge\n            start_south_north: Adds 
and configures south(sinusoid) and north(OMF) service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/filter\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        # Wait until south, north services and filters are created and some data is loaded\n        time.sleep(wait_time)\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        assert filter1_name in get_filters(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing with the filter in the pipeline\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           south_asset_name)\n\n    def test_omf_service_with_disable_enable_filter(self, reset_fledge, start_south_north, add_configure_filter,\n                                                    read_data_from_pi_web_api,\n              
                                      skip_verify_north_interface, fledge_url, wait_time, retries,\n                                                    pi_host, pi_port, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test OMF as a North service with enable/disable of filters.\n            reset_fledge: Fixture to reset fledge\n            start_south_north: Adds and configures south(sinusoid) and north(OMF) service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/filter\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        # Wait until south, north services and filters are created and some data is loaded\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        assert filter1_name in get_filters(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        data = {\"enable\": \"false\"}\n        put_url = \"/fledge/category/{}_SF1\".format(north_service_name)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert \"false\" == resp['enable']['value']\n\n        data = {\"enable\": \"true\"}\n        put_url = \"/fledge/category/{}_SF1\".format(north_service_name)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert \"true\" == resp['enable']['value']\n\n        
old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after disable/enable\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           south_asset_name)\n\n    def test_omf_service_with_filter_reconfig(self, reset_fledge, start_south_north, add_configure_filter,\n                                              read_data_from_pi_web_api,\n                                              skip_verify_north_interface, fledge_url, wait_time, retries, pi_host,\n                                              pi_port, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test OMF as a North service with reconfiguration of filters.\n            reset_fledge: Fixture to reset fledge\n            start_south_north: Adds and configures south(sinusoid) and north(OMF) service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/filter\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        # Wait until south, north services and filters are created and some data is 
loaded\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        assert filter1_name in get_filters(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        data = {\"factor\": \"50\"}\n        put_url = \"/fledge/category/{}_SF1\".format(north_service_name)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert \"50.0\" == resp['factor']['value']\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after filter reconfiguration\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           south_asset_name)\n\n    def test_omf_service_with_delete_add(self, reset_fledge, start_south_north, add_configure_filter, add_filter,\n                                         read_data_from_pi_web_api,\n                                         start_north_omf_as_a_service, skip_verify_north_interface,\n                                         fledge_url, wait_time, retries, pi_host, pi_port, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test OMF as a North service by deleting and adding north service.\n            reset_fledge: Fixture to reset fledge\n            start_south_north: Adds and configures south(sinusoid) 
and north(OMF) service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/filter\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        global north_schedule_id\n\n        # Wait until south, north services and filters are created and some data is loaded\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        assert filter1_name in get_filters(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        delete_url = \"/fledge/service/{}\".format(north_service_name)\n        resp = utils.delete_request(fledge_url, delete_url)\n        assert \"Service {} deleted successfully.\".format(north_service_name) == resp['result']\n\n        response = start_north_omf_as_a_service(fledge_url, pi_host, pi_port, pi_user=pi_admin,\n                                                           pi_pwd=pi_passwd)\n        north_schedule_id = response[\"id\"]\n\n        filter_cfg_scale = {\"enable\": \"true\"}\n        add_filter(\"scale\", None, filter2_name, filter_cfg_scale, fledge_url, north_service_name,\n                   installation_type='package')\n\n        # Wait for some time for north service to come up.\n        time.sleep(wait_time)\n        verify_service_added(fledge_url)\n        _filters = get_filters(fledge_url)\n        assert filter1_name not 
in _filters\n        assert filter2_name in _filters\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after delete/add of north service\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           south_asset_name)\n\n    def test_omf_service_with_delete_readd_filter(self, reset_fledge, start_south_north, add_configure_filter, add_filter,\n                                                read_data_from_pi_web_api,\n                                                skip_verify_north_interface, fledge_url, wait_time, retries, pi_host,\n                                                pi_port, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test OMF as a North service by deleting filter attached to north service.\n            reset_fledge: Fixture to reset fledge\n            start_south_north: Adds and configures south(sinusoid) and north(OMF) service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/filter\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET 
/fledge/track\"\"\"\n\n        # Wait until south, north services and filters are created and some data is loaded\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        assert filter1_name in get_filters(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        data = {\"pipeline\": []}\n        put_url = \"/fledge/filter/{}/pipeline?allow_duplicates=true&append_filter=false\" \\\n            .format(north_service_name)\n        utils.put_request(fledge_url, urllib.parse.quote(put_url, safe='?,=,&,/'), data)\n\n        delete_url = \"/fledge/filter/{}\".format(filter1_name)\n        resp = utils.delete_request(fledge_url, delete_url)\n        assert \"Filter {} deleted successfully.\".format(filter1_name) == resp['result']\n\n        filter_cfg_scale = {\"enable\": \"true\"}\n        add_filter(\"scale\", None, filter1_name, filter_cfg_scale, fledge_url, north_service_name,\n                   installation_type='package')\n        assert filter1_name in get_filters(fledge_url)\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after delete/re-add of filter\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           
south_asset_name)\n\n    def test_omf_service_with_filter_reorder(self, reset_fledge, start_south_north, add_configure_filter, add_filter,\n                                             read_data_from_pi_web_api,\n                                             skip_verify_north_interface, fledge_url, wait_time, retries, pi_host,\n                                             pi_port, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test OMF as a North service by reordering filters attached to north service.\n            reset_fledge: Fixture to reset fledge\n            start_south_north: Adds and configures south(sinusoid) and north(OMF) service\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/filter\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        # Wait until south, north services and filters are created and some data is loaded\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        assert filter1_name in get_filters(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        # Adding second filter\n        filter_cfg_meta = {\"enable\": \"true\"}\n        add_filter(\"metadata\", None, filter3_name, filter_cfg_meta, fledge_url, north_service_name,\n                   installation_type='package')\n        assert [filter1_name, filter3_name] 
== get_filters(fledge_url)\n        # Verify the filter pipeline order\n        get_url = \"/fledge/filter/{}/pipeline\".format(north_service_name)\n        resp = utils.get_request(fledge_url, get_url)\n        assert filter1_name == resp['result']['pipeline'][0]\n        assert filter3_name == resp['result']['pipeline'][1]\n\n        data = {\"pipeline\": [\"{}\".format(filter3_name), \"{}\".format(filter1_name)]}\n        put_url = \"/fledge/filter/{}/pipeline?allow_duplicates=true&append_filter=false\" \\\n            .format(north_service_name)\n        utils.put_request(fledge_url, urllib.parse.quote(put_url, safe='?,=,&,/'), data)\n\n        # Verify the filter pipeline order after reordering\n        get_url = \"/fledge/filter/{}/pipeline\".format(north_service_name)\n        resp = utils.get_request(fledge_url, get_url)\n        assert filter3_name == resp['result']['pipeline'][0]\n        assert filter1_name == resp['result']['pipeline'][1]\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after reordering of filters\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           south_asset_name)\n"
  },
  {
    "path": "tests/system/python/packages/test_opcua.py",
    "content": "# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test OPCUA System tests:\n        * Prerequisite:\n        a) On First instance\n        - Install fledge-south-opcua and fledge-south-s2opcua\n        - Install fledge-north-opcua\n        - Use Prosys OPCUA simulator with a set of simulated data with all supported data types\n        - Use Prosys OPCUA client to connect to the north opcua server and then browse around the objects\n        that Fledge is creating, and those subscriptions to the second Fledge instance\n        Download:\n         Prosys OPCUA client from https://downloads.prosysopc.com/opc-ua-client-downloads.php\n         Prosys OPCUA server from https://downloads.prosysopc.com/opc-ua-simulation-server-downloads.php\n\n        b) On Second instance (manual process)\n        - Install fledge, fledge-south-opcua packages, and make sure Fledge is running with reset data\n\n        * Test:\n        - Add south service with opcua/s2opcua plugin\n        - Create data points with supported types from simulator\n        - Verify the readings data is correct via the asset API\n        - Add north service with opcua plugin\n        - Publish data to north-opcua and use another Fledge instance to read the data and compare the data\n        between two instances\n\"\"\"\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2021 Dianomic Systems, Inc.\"\n\nimport subprocess\nimport time\nimport urllib.parse\nfrom typing import Tuple\nimport utils\nimport pytest\nfrom pytest import PKG_MGR\n\n\n\"\"\" First FL instance IP Address \"\"\"\nFL1_INSTANCE_IP = \"192.168.1.8\"\n\"\"\" Second FL instance IP Address \"\"\"\nFL2_INSTANCE_IP = \"192.168.1.7\"\n\"\"\" Packages list for FL instances \"\"\"\nPKG_LIST = \"fledge-south-opcua fledge-south-s2opcua fledge-north-opcua\"\n\"\"\" opcua south plugin name \"\"\"\nOPCUA_SOUTH_PLUGIN_NAME = \"opcua\"\n\"\"\" s2opcua south plugin name 
\"\"\"\nS2OPCUA_SOUTH_PLUGIN_NAME = \"s2opcua\"\n\"\"\" Service name with opcua south plugin \"\"\"\nOPCUA_SOUTH_SVC_NAME = \"OPCUA #1\"\n\"\"\" Service name with s2opcua south plugin \"\"\"\nS2OPCUA_SOUTH_SVC_NAME = \"S2 OPC-UA\"\n\"\"\" opcua north plugin name \"\"\"\nOPCUA_NORTH_PLUGIN_NAME = \"opcua\"\n\"\"\" Service name with opcua north plugin \"\"\"\nOPCUA_NORTH_SVC_NAME = \"OPCUA\"\n\"\"\" opcua readings asset count as configured in Prosys simulation server \"\"\"\nOPCUA_ASSET_COUNT = 12\n\"\"\" s2opcua readings asset count as configured in Prosys simulation server \"\"\"\nS2OPCUA_ASSET_COUNT = 12\n\"\"\" Asset prefix for opcua south plugin \"\"\"\nOPCUA_ASSET_NAME = \"opcua\"\n\"\"\" Asset prefix for s2opcua south plugin \"\"\"\nS2OPCUA_ASSET_NAME = \"s2opcua\"\n\"\"\" Server URL used in south opcua and s2opcua plugin configuration to get readings data \"\"\"\nOPCUA_SERVER_URL = \"opc.tcp://{}:53530/OPCUA/SimulationServer\".format(FL1_INSTANCE_IP)\n\"\"\" Server URL used in north opcua plugin configuration to pull the data \"\"\"\nOPCUA_NORTH_SERVER_URL = \"opc.tcp://{}:4840/fledge/server\".format(FL1_INSTANCE_IP)\n\"\"\" Supported data types list in tuple format (data type, node identifier, data value) \nas given in Prosys Simulation settings. 
NOTE: Create in an order and node display name as is \"\"\"\nSUPPORTED_DATA_TYPES = [(\"Boolean\", 1008, 0), (\"SByte\", 1009, -128), (\"Byte\", 1010, 128), (\"Int16\", 1011, -32768),\n                        (\"UInt16\", 1012, 65535), (\"Int32\", 1013, -2147483648), (\"UInt32\", 1014, 4294967295),\n                        (\"Int64\", 1015, -9223372036854775808), (\"UInt64\", 1016, 18446744073709551615),\n                        (\"Float\", 1017, -3.4E38), (\"Double\", 1018, 1.7E308), (\"String\", 1019, \"0.0\")]\n\"\"\" Subscription plugin configuration used for both opcua and s2opcua south plugins \"\"\"\nSUBSCRIPTION = [\"ns=3;i={}\".format(SUPPORTED_DATA_TYPES[0][1]), \"ns=3;i={}\".format(SUPPORTED_DATA_TYPES[1][1]),\n                \"ns=3;i={}\".format(SUPPORTED_DATA_TYPES[2][1]), \"ns=3;i={}\".format(SUPPORTED_DATA_TYPES[3][1]),\n                \"ns=3;i={}\".format(SUPPORTED_DATA_TYPES[4][1]), \"ns=3;i={}\".format(SUPPORTED_DATA_TYPES[5][1]),\n                \"ns=3;i={}\".format(SUPPORTED_DATA_TYPES[6][1]), \"ns=3;i={}\".format(SUPPORTED_DATA_TYPES[7][1]),\n                \"ns=3;i={}\".format(SUPPORTED_DATA_TYPES[8][1]), \"ns=3;i={}\".format(SUPPORTED_DATA_TYPES[9][1]),\n                \"ns=3;i={}\".format(SUPPORTED_DATA_TYPES[10][1]), \"ns=3;i={}\".format(SUPPORTED_DATA_TYPES[11][1])\n                ]\n\"\"\" OPCUA objects which will be used when we pull the data from north opcua to second FL instance (FL2_INSTANCE_IP) \"\"\"\nOPCUA_OBJECTS = [(\"{}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2001, \"false\"),\n                 (\"{}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2002, -128),\n                 (\"{}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2003, 128),\n                 (\"{}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2004, -32768),\n                 (\"{}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2005, 65535),\n                 (\"{}{}\".format(OPCUA_ASSET_NAME, 
SUPPORTED_DATA_TYPES[0][1]), 2006, -2147483648),\n                 (\"{}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2007, 4294967295),\n                 (\"{}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2008, -9223372036854775808),\n                 (\"{}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2009, 18446744073709551615),\n                 (\"{}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2010, -3.4E38),\n                 (\"{}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2011, 1.7E308),\n                 (\"{}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2012, \"0.0\"),\n                 (\"{}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2013, 0),\n                 (\"{}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2014, -128),\n                 (\"{}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2015, 128),\n                 (\"{}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2016, -32768),\n                 (\"{}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2017, 65535),\n                 (\"{}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2018, -2147483648),\n                 (\"{}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2019, 4294967295),\n                 (\"{}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2020, -9223372036854775808),\n                 (\"{}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2021, 18446744073709551615),\n                 (\"{}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2022, -3.4E38),\n                 (\"{}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2023, 1.7E308),\n                 (\"{}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]), 2024, \"0.0\")\n                 ]\n\n\"\"\" Subscription plugin configuration used for north opcua plugin \"\"\"\nSUBSCRIPTION2 = 
[\"ns=2;s={}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][1]),\n                 \"ns=2;s={}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[1][1]),\n                 \"ns=2;s={}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[2][1]),\n                 \"ns=2;s={}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[3][1]),\n                 \"ns=2;s={}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[4][1]),\n                 \"ns=2;s={}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[5][1]),\n                 \"ns=2;s={}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[6][1]),\n                 \"ns=2;s={}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[7][1]),\n                 \"ns=2;s={}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[8][1]),\n                 \"ns=2;s={}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[9][1]),\n                 \"ns=2;s={}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[10][1]),\n                 \"ns=2;s={}{}\".format(OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[11][1]),\n                 \"ns=2;s={}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[0][0]),\n                 \"ns=2;s={}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[1][0]),\n                 \"ns=2;s={}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[2][0]),\n                 \"ns=2;s={}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[3][0]),\n                 \"ns=2;s={}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[4][0]),\n                 \"ns=2;s={}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[5][0]),\n                 \"ns=2;s={}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[6][0]),\n                 \"ns=2;s={}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[7][0]),\n                 \"ns=2;s={}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[8][0]),\n                 \"ns=2;s={}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[9][0]),\n                 
\"ns=2;s={}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[10][0]),\n                 \"ns=2;s={}{}\".format(S2OPCUA_ASSET_NAME, SUPPORTED_DATA_TYPES[11][0])\n                 ]\n\n\n@pytest.fixture\ndef install_pkg():\n    \"\"\" Fixture to install the required packages; used only on the first FL instance \"\"\"\n    try:\n        subprocess.run([\"sudo {} install -y {}\".format(PKG_MGR, PKG_LIST)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"Installation failed for one of these packages: {}\".format(PKG_LIST)\n\n\ndef add_service(fledge_url: str, wait_time: int, name: str, _type: str, plugin: str, config: dict) -> None:\n    \"\"\" Used to add any service \"\"\"\n    data = {\n        \"name\": name,\n        \"type\": _type,\n        \"plugin\": plugin,\n        \"enabled\": \"true\",\n        \"config\": config\n    }\n    utils.post_request(fledge_url, \"/fledge/service\", data)\n    # Extra wait time needed for the service to start up\n    time.sleep(2 * wait_time)\n\n\ndef get_ping_data(fledge_url: str, key: str, asset_count: int) -> Tuple[int, int]:\n    \"\"\" Used to get ping info \"\"\"\n    ping_info = utils.get_request(fledge_url, \"/fledge/ping\")\n    assert key in ping_info\n    total_read = asset_count\n    # Special handling is required when both plugins run\n    if ping_info[key] > asset_count:\n        total_read = OPCUA_ASSET_COUNT + S2OPCUA_ASSET_COUNT\n    return total_read, ping_info[key]\n\n\ndef get_asset_readings(fledge_url: str, asset_prefix: str, plugin_name: str, data: list) -> None:\n    \"\"\" Used to get asset readings for an asset code \"\"\"\n    for obj in data:\n        asset_suffix = str(obj[1]) if plugin_name == OPCUA_SOUTH_PLUGIN_NAME else obj[0]\n        asset_name = \"{}{}\".format(asset_prefix, asset_suffix)\n        jdoc = utils.get_request(fledge_url, \"/fledge/asset/{}\".format(asset_name))\n        if jdoc:\n            result = jdoc[0]['reading'][str(asset_suffix)]\n            print(\"Asset Reading Jdoc: {} 
\\nExpected:{} == Actual:{}\".format(jdoc[0]['reading'], obj[2], result))\n            # TODO: FOGL-6076 - readings are mismatched for some data types\n            if asset_suffix not in (\"SByte\", \"Byte\", \"Int16\", \"Int32\", \"UInt64\", \"1016\", \"Float\", \"1017\",\n                                    \"Double\", \"1018\", \"2007\", \"2008\", \"2009\", \"2010\", \"2011\", \"2014\",\n                                    \"2016\", \"2019\", \"2020\", \"2021\", \"2022\", \"2023\"):\n                # For the opcua plugin a Boolean is reported as \"false\", not 0\n                if asset_suffix == \"1008\":\n                    assert \"false\" == result\n                else:\n                    assert obj[2] == result\n            else:\n                print(\"Verification skipped for asset: {} due to a known bug. See FOGL-6076\".format(asset_name))\n        else:\n            print(\"Reading not found for asset code: {}\".format(asset_name))\n\n\ndef verify_service(fledge_url: str, retries: int, svc_name: str, plugin_name: str, _type: str,\n                   asset_count: int) -> None:\n    \"\"\" Used for verification of any service \"\"\"\n    get_url = \"/fledge/south\" if _type == \"Southbound\" else \"/fledge/north\"\n    while retries:\n        result = utils.get_request(fledge_url, get_url)\n        if _type == \"Southbound\":\n            if len(result[\"services\"]):\n                svc_info = [s for s in result[\"services\"] if s['name'] == svc_name]\n                if svc_info and 'status' in svc_info[0] and svc_info[0]['status'] != \"\":\n                    assert svc_name == svc_info[0]['name']\n                    assert 'running' == svc_info[0]['status']\n                    assert plugin_name == svc_info[0]['plugin']['name']\n                    assert asset_count == len(svc_info[0]['assets'])\n                    break\n        else:\n            if len(result):\n                svc_info = [s for s in result if s['name'] == svc_name]\n                if svc_info and 'status' in 
svc_info[0] and svc_info[0]['status'] != \"\":\n                    assert svc_name == svc_info[0]['name']\n                    assert 'north_C' == svc_info[0]['processName']\n                    assert 'running' == svc_info[0]['status']\n                    # assert total_read == svc_info[0]['sent']\n                    assert OPCUA_NORTH_PLUGIN_NAME == svc_info[0]['plugin']['name']\n                    break\n        retries -= 1\n    if retries == 0:\n        assert False, \"TIMEOUT! Data NOT seen for {} with endpoint {}\".format(svc_name, get_url)\n\n\ndef verify_asset_and_readings(fledge_url: str, total_assets: int, asset_name: str, plugin_name: str,\n                              data: list) -> None:\n    \"\"\" Used for verification of assets and readings \"\"\"\n    result = utils.get_request(fledge_url, \"/fledge/asset\")\n    assert len(result), \"No asset found\"\n    assert total_assets == len(result)\n    get_asset_readings(fledge_url, asset_name, plugin_name, data)\n\n\ndef verify_asset_tracking_details(fledge_url: str, total_assets: int, svc_name: str, asset_name_prefix: str,\n                                  plugin_name: str, event: str, data: list) -> None:\n    \"\"\" Used for verification of asset tracker details \"\"\"\n    tracking_details = utils.get_request(fledge_url,  urllib.parse.quote(\"/fledge/track?service={}&event={}\".format(\n        svc_name, event), safe='?,=,&,/'))\n    assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n    assert total_assets == len(tracking_details[\"track\"])\n    for record in tracking_details['track']:\n        for idx, val in enumerate(data):\n            asset = \"{}{}\".format(asset_name_prefix, val[0]) if plugin_name == S2OPCUA_SOUTH_PLUGIN_NAME else \\\n                \"{}{}\".format(asset_name_prefix, str(val[1]))\n            if asset in record['asset']:\n                print(\"Asset Tracking JDoc: {} \\nExpected:{} == Actual:{}\".format(record, asset, record['asset']))\n     
           assert asset == record['asset']\n                assert event == record['event']\n                assert plugin_name == record['plugin']\n                assert svc_name == record['service']\n                break\n\n\nclass TestSouthOPCUA:\n    \"\"\" To test south opcua plugins \"\"\"\n\n    # NOTE: The test below can be marked as skip if already executed, as it only needs to run once on an instance\n    # @pytest.mark.skip(reason=\"Already installed the packages\")\n    def test_clean_install(self, clean_setup_fledge_packages, install_pkg):\n        pass\n\n    def test_setup(self, reset_and_start_fledge):\n        pass\n\n    @pytest.mark.parametrize(\"plugin_name, svc_name, asset_name, asset_count\", [\n        (OPCUA_SOUTH_PLUGIN_NAME, OPCUA_SOUTH_SVC_NAME, OPCUA_ASSET_NAME, OPCUA_ASSET_COUNT),\n        (S2OPCUA_SOUTH_PLUGIN_NAME, S2OPCUA_SOUTH_SVC_NAME, S2OPCUA_ASSET_NAME, S2OPCUA_ASSET_COUNT)\n    ])\n    def test_asset_readings_and_tracker_entry(self, fledge_url, retries, wait_time, plugin_name, svc_name, asset_name,\n                                              asset_count):\n        print(\"a) Adding {} south service...\".format(svc_name))\n        config = {\n            \"asset\": {\n                \"value\": asset_name\n            },\n            \"url\": {\n                \"value\": OPCUA_SERVER_URL\n            },\n            \"subscription\": {\n                \"value\": {\n                    \"subscriptions\": SUBSCRIPTION\n                }\n            }\n        }\n        add_service(fledge_url, wait_time, svc_name, \"south\", plugin_name, config)\n        print(\"b) Verifying {} south service and its details...\".format(svc_name))\n        verify_service(fledge_url, retries, svc_name, plugin_name, \"Southbound\", asset_count)\n        print(\"c) Verifying data read in ping...\")\n        total_read_count, data_read = get_ping_data(fledge_url, \"dataRead\", asset_count)\n        assert total_read_count == data_read\n        
print(\"d) Verifying assets and readings...\")\n        verify_asset_and_readings(fledge_url, total_read_count, asset_name, plugin_name, SUPPORTED_DATA_TYPES)\n        print(\"e) Verifying Ingest asset tracker entry...\")\n        verify_asset_tracking_details(fledge_url, asset_count, svc_name, asset_name, plugin_name, \"Ingest\",\n                                      SUPPORTED_DATA_TYPES)\n\n\nclass TestNorthOPCUA:\n    \"\"\" To test north opcua plugin \"\"\"\n    @pytest.mark.parametrize(\"asset_name, asset_count,\", [\n        (OPCUA_ASSET_NAME, OPCUA_ASSET_COUNT),\n        (S2OPCUA_ASSET_NAME, S2OPCUA_ASSET_COUNT)\n    ])\n    def test_service_and_sent_readings(self, fledge_url, retries, wait_time, asset_name, asset_count):\n        get_north_svc = utils.get_request(fledge_url, \"/fledge/north\")\n        if not get_north_svc:\n            print(\"a) Adding {} north service...\".format(OPCUA_NORTH_PLUGIN_NAME))\n            config = {\n                \"url\":\n                    {\n                        \"value\": OPCUA_NORTH_SERVER_URL\n                    }\n            }\n            add_service(fledge_url, wait_time, OPCUA_NORTH_SVC_NAME, \"north\", OPCUA_NORTH_PLUGIN_NAME, config)\n        print(\"b) Verifying {} north service and its details...\".format(OPCUA_NORTH_SVC_NAME))\n        verify_service(fledge_url, retries, OPCUA_NORTH_SVC_NAME, OPCUA_NORTH_PLUGIN_NAME, \"Northbound\", asset_count)\n        print(\"c) Verifying data sent in ping...\")\n        total_read_count, data_read = get_ping_data(fledge_url, \"dataSent\", asset_count)\n        assert total_read_count == data_read\n        print(\"d) Verifying Egress asset tracker entry...\")\n        verify_asset_tracking_details(fledge_url, total_read_count, OPCUA_NORTH_SVC_NAME, asset_name,\n                                      OPCUA_NORTH_PLUGIN_NAME, \"Egress\", SUPPORTED_DATA_TYPES)\n\n\nclass TestPublishNorthOPCUA:\n    \"\"\" Publish the readings data to north using the fledge-north-opcua 
and use another FL instance to read the data.\n        Comparison of readings data between two FL instances to confirm data is correctly transmitted.\n    \"\"\"\n\n    def test_data_to_another_fl_instance(self, wait_time, retries):\n        rest_api_url = \"{}:8081\".format(FL2_INSTANCE_IP)\n        asset_count = OPCUA_ASSET_COUNT + S2OPCUA_ASSET_COUNT\n        print(\"Verifying publishing of data to another {} FL instance\".format(FL2_INSTANCE_IP))\n        print(\"a) Adding {} south service...\".format(OPCUA_SOUTH_SVC_NAME))\n        config = {\n            \"asset\": {\n                \"value\": OPCUA_ASSET_NAME\n            },\n            \"url\": {\n                \"value\": OPCUA_NORTH_SERVER_URL\n            },\n            \"subscription\": {\n                \"value\": {\n                    \"subscriptions\": SUBSCRIPTION2\n                }\n            }\n        }\n        add_service(rest_api_url, wait_time, OPCUA_SOUTH_SVC_NAME, \"south\",\n                    OPCUA_SOUTH_PLUGIN_NAME, config)\n        print(\"b) Verifying {} south service and its details...\".format(OPCUA_SOUTH_SVC_NAME))\n        verify_service(rest_api_url, retries, OPCUA_SOUTH_SVC_NAME, OPCUA_SOUTH_PLUGIN_NAME, \"Southbound\", asset_count)\n        print(\"c) Verifying data read in ping...\")\n        total_read_count, data_read = get_ping_data(rest_api_url, \"dataRead\", asset_count)\n        assert total_read_count == data_read\n        print(\"d) Verifying assets and readings...\")\n        verify_asset_and_readings(rest_api_url, total_read_count, OPCUA_ASSET_NAME, OPCUA_SOUTH_PLUGIN_NAME,\n                                  OPCUA_OBJECTS)\n        print(\"e) Verifying Ingest asset tracker entry...\")\n        verify_asset_tracking_details(rest_api_url, asset_count, OPCUA_SOUTH_SVC_NAME, OPCUA_ASSET_NAME,\n                                      OPCUA_SOUTH_PLUGIN_NAME, \"Ingest\", OPCUA_OBJECTS)\n"
  },
  {
    "path": "tests/system/python/packages/test_pi_webapi.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test sending data to PI using Web API\n\n\"\"\"\n\n__author__ = \"Praveen Garg\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems Inc\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nimport subprocess\nimport http.client\nimport pytest\nimport os\nimport time\nimport utils\nfrom pathlib import Path\nimport urllib.parse\n\nTEMPLATE_NAME = \"template.json\"\nASSET = \"FOGL-2964-e2e-CoAP-PIWebAPI\"\nDATAPOINT = \"sensor\"\nDATAPOINT_VALUE = 20\nNORTH_TASK_NAME = \"NorthReadingsToPI_WebAPI\"\nSOUTH_SERVICE_NAME = \"CoAP FOGL-2964\"\n# This  gives the path of directory where fledge is cloned. test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/scripts/package/\".format(PROJECT_ROOT)\nAF_HIERARCHY_LEVEL = 'testpiwebapi/testpiwebapilvl2/testpiwebapilvl3'\n\n\n@pytest.fixture\ndef reset_fledge(wait_time):\n    try:\n        subprocess.run([\"cd {} && ./reset\"\n                       .format(SCRIPTS_DIR_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n\n\ndef verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries):\n    get_url = \"/fledge/ping\"\n    ping_result = utils.get_request(fledge_url, get_url)\n    assert \"dataRead\" in ping_result\n    assert \"dataSent\" in ping_result\n    assert 0 < ping_result['dataRead'], \"South data NOT seen in ping header\"\n\n    retry_count = 1\n    sent = 0\n    if not skip_verify_north_interface:\n        while retries > retry_count:\n            sent = ping_result[\"dataSent\"]\n            if sent >= 1:\n                break\n            else:\n                time.sleep(wait_time)\n\n            retry_count += 1\n            ping_result = utils.get_request(fledge_url, 
get_url)\n\n        assert 1 <= sent, \"Failed to send data via PI Web API using Basic auth\"\n    return ping_result\n\n\ndef verify_statistics_map(fledge_url, skip_verify_north_interface):\n    get_url = \"/fledge/statistics\"\n    jdoc = utils.get_request(fledge_url, get_url)\n    actual_stats_map = utils.serialize_stats_map(jdoc)\n    assert 1 <= actual_stats_map[ASSET.upper()]\n    assert 1 <= actual_stats_map['READINGS']\n    if not skip_verify_north_interface:\n        assert 1 <= actual_stats_map['Readings Sent']\n        assert 1 <= actual_stats_map[NORTH_TASK_NAME]\n\n\ndef verify_asset(fledge_url):\n    get_url = \"/fledge/asset\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result), \"No asset found\"\n    assert ASSET in [s[\"assetCode\"] for s in result]\n\n\ndef verify_asset_tracking_details(fledge_url, skip_verify_north_interface):\n    tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n    assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n    tracked_item = tracking_details[\"track\"][0]\n    assert ASSET == tracked_item[\"asset\"]\n    assert \"coap\" == tracked_item[\"plugin\"]\n\n    if not skip_verify_north_interface:\n        egress_tracking_details = utils.get_asset_tracking_details(fledge_url, \"Egress\")\n        assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n        tracked_item = egress_tracking_details[\"track\"][0]\n        assert ASSET == tracked_item[\"asset\"]\n        assert \"OMF\" == tracked_item[\"plugin\"]\n\n\ndef _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries, asset_name):\n    retry_count = 0\n    data_from_pi = None\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    type_id = 1\n    recorded_datapoint = asset_name\n    # Name of asset in the PI server\n    PI_ASSET_NAME = asset_name\n\n    while (data_from_pi is None or data_from_pi == []) and 
retry_count < retries:\n        data_from_pi = read_data_from_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db, af_hierarchy_level_list,\n                                                 ASSET, '')\n        retry_count += 1\n        time.sleep(wait_time * 2)\n\n    if data_from_pi is None or retry_count == retries:\n        assert False, \"Failed to read data from PI\"\n\n    assert int(data_from_pi) == DATAPOINT_VALUE\n\n\n@pytest.fixture\ndef start_south_north(add_south, start_north_task_omf_web_api, remove_data_file,\n                      fledge_url, pi_host, pi_port, pi_admin, pi_passwd,\n                      clear_pi_system_through_pi_web_api, pi_db, asset_name=ASSET):\n    \"\"\" This fixture\n        clean_setup_fledge_packages: purge the fledge* packages and install latest for given repo url\n        add_south: Fixture that adds a south service with given configuration\n        start_north_task_omf_web_api: Fixture that starts PI north task\n        remove_data_file: Fixture that remove data file created during the tests \"\"\"\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    # There are two data points here. 1. DATAPOINT\n    # 2. 
no data point (the asset name will be used in this case)\n    dp_list = [DATAPOINT, '']\n    asset_dict = {}\n    asset_dict[ASSET] = dp_list\n    clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n                                       af_hierarchy_level_list, asset_dict)\n\n    # Define the template file for fogbench\n    fogbench_template_path = os.path.join(\n        os.path.expandvars('${FLEDGE_ROOT}'), 'data/{}'.format(TEMPLATE_NAME))\n    with open(fogbench_template_path, \"w\") as f:\n        f.write(\n            '[{\"name\": \"%s\", \"sensor_values\": '\n            '[{\"name\": \"%s\", \"type\": \"number\", \"min\": %d, \"max\": %d, \"precision\": 0}]}]' % (\n                asset_name, DATAPOINT, DATAPOINT_VALUE, DATAPOINT_VALUE))\n\n    south_plugin = \"coap\"\n    # south_branch does not matter as these are installed as packages from archives.fledge-iot.org\n    add_south(south_plugin, None, fledge_url, service_name=SOUTH_SERVICE_NAME, installation_type='package')\n    start_north_task_omf_web_api(fledge_url, pi_host, pi_port, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                 default_af_location=AF_HIERARCHY_LEVEL)\n\n    yield start_south_north\n\n    # Cleanup code that runs after the caller test is over\n    remove_data_file(fogbench_template_path)\n\n\nclass TestPackagesCoAP_PI_WebAPI:\n\n    def test_omf_task(self, clean_setup_fledge_packages, reset_fledge, start_south_north, read_data_from_pi_web_api,\n                        fledge_url, pi_host, pi_admin, pi_passwd, pi_db, fogbench_host, fogbench_port,\n                        wait_time, retries, skip_verify_north_interface, asset_name=ASSET):\n        \"\"\" Test that data is inserted in Fledge and sent to PI\n            start_south_north: Fixture that adds south and north instances\n            read_data_from_pi: Fixture to read data from PI\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on 
endpoint GET /fledge/ping\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/asset/<asset_name>\n                data received from PI is same as data sent\"\"\"\n\n        conn = http.client.HTTPConnection(fledge_url)\n        # Time to get CoAP service started\n        time.sleep(2)\n        subprocess.run(\n            [\"cd $FLEDGE_ROOT/extras/python; python3 -m fogbench -t ../../data/{} --host {} --port {}; cd -\".format(TEMPLATE_NAME, fogbench_host, fogbench_port)],\n            shell=True, check=True)\n\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        if not skip_verify_north_interface:\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           asset_name)\n\n    def test_omf_task_with_reconfig(self, reset_fledge, start_south_north, read_data_from_pi_web_api,\n                                       skip_verify_north_interface, fledge_url, fogbench_host, fogbench_port,\n                                       wait_time, retries, pi_host, pi_port, pi_admin, pi_passwd, pi_db,\n                                       asset_name=ASSET):\n        \"\"\" Test OMF as a North task by reconfiguring it.\n            reset_fledge: Fixture to reset fledge\n            start_south_north: Adds and configures south and north (OMF)\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/statistics\n                on endpoint GET 
/fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        conn = http.client.HTTPConnection(fledge_url)\n        # Time to get CoAP service started\n        time.sleep(2)\n        subprocess.run(\n            [\"cd $FLEDGE_ROOT/extras/python; python3 -m fogbench -t ../../data/{} --host {} --port {}; cd -\".format(TEMPLATE_NAME, fogbench_host, fogbench_port)],\n            shell=True, check=True)\n\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        # Good reconfiguration to check data is sent\n        data = {\"SendFullStructure\": \"false\"}\n        put_url = \"/fledge/category/{}\".format(NORTH_TASK_NAME)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert \"false\" == resp[\"SendFullStructure\"][\"value\"]\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        conn = http.client.HTTPConnection(fledge_url)\n        subprocess.run(\n            [\"cd $FLEDGE_ROOT/extras/python; python3 -m fogbench -t ../../data/{} --host {} --port {}; cd -\".format(TEMPLATE_NAME, fogbench_host, fogbench_port)],\n            shell=True, check=True)\n\n        # Wait for the OMF schedule to run.\n        time.sleep(wait_time * 2)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verify that Read and Sent readings keep increasing after the good reconfiguration\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n\n        # Bad reconfiguration to check data is not sent\n        data = 
{\"PIWebAPIUserId\": \"Inv@lidRandomUserID\"}\n        put_url = \"/fledge/category/{}\".format(NORTH_TASK_NAME)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert \"Inv@lidRandomUserID\" == resp[\"PIWebAPIUserId\"][\"value\"]\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n\n        # Wait for the OMF schedule to run and then send the new data.\n        # Otherwise, it sends the data with the old config.\n        time.sleep(wait_time * 2)\n\n        conn = http.client.HTTPConnection(fledge_url)\n        subprocess.run(\n            [\"cd $FLEDGE_ROOT/extras/python; python3 -m fogbench -t ../../data/{} --host {} --port {}; cd -\".format(TEMPLATE_NAME, fogbench_host, fogbench_port)],\n            shell=True, check=True)\n\n        # Wait for the OMF schedule to run.\n        time.sleep(wait_time * 2)\n\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        # Verify that Read readings keep increasing but no new data is sent after the bad reconfiguration\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] == new_ping_result['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           asset_name)\n"
  },
  {
    "path": "tests/system/python/packages/test_pi_webapi_linked_data_type.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test sending data to PI using Web API\n\n\"\"\"\n\n__author__ = \"Mohit Singh Tomar\"\n__copyright__ = \"Copyright (c) 2023 Dianomic Systems Inc\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nimport subprocess\nimport http.client\nimport pytest\nimport os\nimport time\nimport utils\nimport json\nfrom pathlib import Path\nimport urllib.parse\nimport base64\nimport ssl\nimport csv\n\nASSET = \"FOGL-7303\"\nASSET_DICT = {ASSET: ['sinusoid', 'randomwalk', 'sinusoid_exp', 'randomwalk_exp']}\nSOUTH_PLUGINS_LIST = [\"sinusoid\", \"randomwalk\"]\nNORTH_INSTANCE_NAME = \"NorthReadingsToPI_WebAPI\"\nFILTER = \"expression\"\nprint(\"Asset Dict -->\", ASSET_DICT) \n# This  gives the path of directory where fledge is cloned. test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/scripts/package/\".format(PROJECT_ROOT)\nDATA_DIR_ROOT = \"{}/tests/system/python/packages/data/\".format(PROJECT_ROOT)\nAF_HIERARCHY_LEVEL = '/testpilinkeddata/testpilinkeddatalvl2/testpilinkeddatalvl3'\nAF_HIERARCHY_LEVEL_LIST = AF_HIERARCHY_LEVEL.split(\"/\")[1:]\nprint('AF HEIR -->', AF_HIERARCHY_LEVEL_LIST)\n\n@pytest.fixture\ndef reset_fledge(wait_time):\n    try:\n        subprocess.run([\"cd {} && ./reset\".format(SCRIPTS_DIR_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n        \ndef verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries):\n    get_url = \"/fledge/ping\"\n    ping_result = utils.get_request(fledge_url, get_url)\n    assert \"dataRead\" in ping_result\n    assert \"dataSent\" in ping_result\n    assert 0 < ping_result['dataRead'], \"South data NOT seen in ping header\"\n    \n    retry_count = 1\n    sent = 0\n    if not 
skip_verify_north_interface:\n        while retries > retry_count:\n            sent = ping_result[\"dataSent\"]\n            if sent >= 1:\n                break\n            else:\n                time.sleep(wait_time)\n\n            retry_count += 1\n            ping_result = utils.get_request(fledge_url, get_url)\n\n        assert 1 <= sent, \"Failed to send data via PI Web API using Basic auth\"\n    return ping_result\n\ndef verify_asset(fledge_url):\n    get_url = \"/fledge/asset\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result), \"No asset found\"\n    assert ASSET in [s[\"assetCode\"] for s in result]\n    \ndef verify_statistics_map(fledge_url, skip_verify_north_interface):\n    get_url = \"/fledge/statistics\"\n    jdoc = utils.get_request(fledge_url, get_url)\n    actual_stats_map = utils.serialize_stats_map(jdoc)\n    assert 1 <= actual_stats_map[ASSET.upper()]\n    assert 1 <= actual_stats_map['READINGS']\n    if not skip_verify_north_interface:\n        assert 1 <= actual_stats_map['Readings Sent']\n        assert 1 <= actual_stats_map[NORTH_INSTANCE_NAME]\n        \ndef verify_asset_tracking_details(fledge_url, skip_verify_north_interface):\n    tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n    assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n    for item in tracking_details[\"track\"]:\n        tracked_item= item\n        assert ASSET == tracked_item[\"asset\"]\n        assert tracked_item[\"plugin\"].lower() in SOUTH_PLUGINS_LIST\n\n    if not skip_verify_north_interface:\n        egress_tracking_details = utils.get_asset_tracking_details(fledge_url, \"Egress\")\n        assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n        tracked_item = egress_tracking_details[\"track\"][0]\n        assert ASSET == tracked_item[\"asset\"]\n        assert \"OMF\" == tracked_item[\"plugin\"]\n\ndef get_data_from_fledge(fledge_url, 
PLUGINS_LIST):\n    record = dict()\n    get_url = \"/fledge/asset/{}?limit=10000\".format(ASSET)\n    jdoc = utils.get_request(fledge_url, urllib.parse.quote(get_url, safe='?,=,&,/'))\n    for plugin in PLUGINS_LIST:\n        record[plugin] = list(map(lambda val: val['reading'][plugin], filter(lambda item: (len(item['reading'].keys()) == 1 and list(item['reading'].keys())[0] == plugin) or (len(item['reading'].keys()) == 2 and (plugin in list(item['reading'].keys()))), jdoc)))\n    return record\n\ndef verify_equality_between_fledge_and_pi(data_from_fledge, data_from_pi, PLUGINS_LIST):\n    matched = \"\"\n    for plugin in PLUGINS_LIST:\n        listA = sorted(data_from_fledge[plugin])\n        listB = sorted(data_from_pi[plugin])\n        if listA == listB:\n            matched = \"Equal\"\n        else:\n            matched = \"Data of {} is unequal\".format(plugin)\n            break\n    return matched\n\ndef verify_filter_added(fledge_url):\n    get_url = \"/fledge/filter\"\n    result = utils.get_request(fledge_url, get_url)[\"filters\"]\n    assert len(result)\n    list_of_filters = list(map(lambda val: val['name'], result))\n    for plugin in SOUTH_PLUGINS_LIST:\n        assert \"{}_exp\".format(plugin) in list_of_filters\n\ndef verify_service_added(fledge_url):\n    get_url = \"/fledge/south\"\n    result = utils.get_request(fledge_url, urllib.parse.quote(get_url, safe='?,=,&,/'))['services']\n    assert len(result)\n    list_of_southbounds = list(map(lambda val: val['name'], result))\n    for plugin in SOUTH_PLUGINS_LIST:\n        assert \"{}_{}\".format(ASSET, plugin) in list_of_southbounds\n\n    get_url = \"/fledge/north\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result)\n    list_of_northbounds = list(map(lambda val: val['name'], result))\n    assert NORTH_INSTANCE_NAME in list_of_northbounds\n\n    get_url = \"/fledge/service\"\n    result = utils.get_request(fledge_url, urllib.parse.quote(get_url, 
safe='?,=,&,/'))['services']\n    assert len(result)\n    list_of_services = list(map(lambda val: val['name'], result))\n    for plugin in SOUTH_PLUGINS_LIST:\n        assert \"{}_{}\".format(ASSET, plugin) in list_of_services\n    assert NORTH_INSTANCE_NAME in list_of_services\n\ndef verify_data_between_fledge_and_piwebapi(fledge_url, pi_host, pi_admin, pi_passwd, pi_db, AF_HIERARCHY_LEVEL, ASSET, PLUGINS_LIST, verify_hierarchy_and_get_datapoints_from_pi_web_api, wait_time):\n    # Wait until all data is loaded to the PI Server properly\n    time.sleep(wait_time)\n    # Check whether the hierarchy was created properly, and retrieve data from the PI Server\n    data_from_pi = verify_hierarchy_and_get_datapoints_from_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db, AF_HIERARCHY_LEVEL, ASSET, ASSET_DICT[ASSET])\n    assert len(data_from_pi) > 0, \"Datapoint does not exist\"\n    print('Data from PI Web API')\n    print(data_from_pi)\n    # Get data from Fledge\n    data_from_fledge = get_data_from_fledge(fledge_url, PLUGINS_LIST)\n    print('data fledge retrieved')\n    print(data_from_fledge)\n\n    # Verify that the data sent from Fledge to the PI Server is the same\n    check_data = verify_equality_between_fledge_and_pi(data_from_fledge, data_from_pi, PLUGINS_LIST)\n    assert check_data == 'Equal', \"Data is not equal\"\n\ndef update_filter_config(fledge_url, plugin, mode):\n    data = {\"enable\": \"{}\".format(mode)}\n    put_url = \"/fledge/category/{0}_{1}_{1}_exp\".format(ASSET, plugin)\n    utils.put_request(fledge_url, urllib.parse.quote(put_url, safe='?,=,&,/'), data)\n\ndef add_configure_filter(add_filter, fledge_url, south_plugin):\n    filter_cfg = {\"enable\": \"true\", \"expression\": \"log({})\".format(south_plugin), \"name\": \"{}_exp\".format(south_plugin)}\n    add_filter(\"expression\", None, \"{}_exp\".format(south_plugin), filter_cfg, fledge_url, \"{}_{}\".format(ASSET, south_plugin), 
installation_type='package')\n\n@pytest.fixture\ndef start_south_north(add_south, start_north_task_omf_web_api, add_filter, remove_data_file,\n                      fledge_url, pi_host, pi_port, pi_admin, pi_passwd, pi_db,\n                      start_north_omf_as_a_service, asset_name=ASSET):\n    \"\"\" This fixture starts two south plugins, i.e. sinusoid and randomwalk, and one north OMF plugin for PI Web API. It also adds a filter\n        to insert the reading id as a datapoint when the data is sent north.\n        clean_setup_fledge_packages: purge the fledge* packages and install the latest for the given repo url\n        add_south: Fixture that adds a south service with the given configuration\n        start_north_task_omf_web_api: Fixture that starts the PI north task\n        remove_data_file: Fixture that removes data files created during the tests \"\"\"\n\n    poll_rate = 1\n\n    _config = {\"assetName\": {\"value\": \"{}\".format(ASSET)}}\n    for south_plugin in SOUTH_PLUGINS_LIST:\n        add_south(south_plugin, None, fledge_url, config=_config,\n                  service_name=\"{0}_{1}\".format(ASSET, south_plugin), installation_type='package', start_service=False)\n        # Wait for 10 seconds so that the services can be added\n        time.sleep(10)\n\n        data = {\"readingsPerSec\": \"{}\".format(poll_rate)}\n        put_url = \"/fledge/category/{0}_{1}Advanced\".format(ASSET, south_plugin)\n        utils.put_request(fledge_url, urllib.parse.quote(put_url, safe='?,=,&,/'), data)\n\n        poll_rate += 5\n\n    start_north_omf_as_a_service(fledge_url, pi_host, pi_port, pi_user=pi_admin, pi_pwd=pi_passwd, pi_use_legacy=\"false\",\n                                 service_name=NORTH_INSTANCE_NAME, default_af_location=AF_HIERARCHY_LEVEL)\n\nclass Test_linked_data_PIWebAPI:\n    # @pytest.mark.skip(reason=\"no way of currently testing this\")\n    def test_linked_data(self, clean_setup_fledge_packages, reset_fledge, start_south_north, 
fledge_url, \n                         pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries, pi_port, enable_schedule, disable_schedule, \n                         verify_hierarchy_and_get_datapoints_from_pi_web_api, clear_pi_system_through_pi_web_api, \n                         skip_verify_north_interface, asset_name=ASSET):\n\n        \"\"\" Test that checks the data inserted in Fledge and the data sent to PI are equal.\n            clean_setup_fledge_packages: Fixture for removing fledge from the system completely if it is already present\n                                         and reinstalling it based on command-line arguments.\n            reset_fledge: Fixture that resets and cleans up fledge\n            start_south_north: Fixture that adds south and north instances\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n            verify_hierarchy_and_get_datapoints_from_pi_web_api: Fixture to read data from PI and verify the hierarchy\n            clear_pi_system_through_pi_web_api: Fixture for cleaning up the PI Server\n            skip_verify_north_interface: Flag for assertion of data using PI Web API\n\n            Assertions:\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/asset/<asset_name>\n                data received from PI is same as data sent\"\"\"\n\n        clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db, AF_HIERARCHY_LEVEL_LIST, ASSET_DICT)\n\n        for south_plugin in SOUTH_PLUGINS_LIST:\n            enable_schedule(fledge_url, \"{0}_{1}\".format(ASSET, south_plugin))\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        for south_plugin in SOUTH_PLUGINS_LIST:\n            disable_schedule(fledge_url, \"{}_{}\".format(ASSET, 
south_plugin))\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_service_added(fledge_url)\n        verify_asset(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n\n        # Verify that the data sent from Fledge to the PI Server is the same.\n        verify_data_between_fledge_and_piwebapi(fledge_url, pi_host, pi_admin, pi_passwd, pi_db, AF_HIERARCHY_LEVEL, ASSET, SOUTH_PLUGINS_LIST, verify_hierarchy_and_get_datapoints_from_pi_web_api, wait_time)\n\n    # @pytest.mark.skip(reason=\"no way of currently testing this\")\n    def test_linked_data_with_filter(self, reset_fledge, start_south_north, fledge_url, pi_host, pi_admin, pi_passwd, add_filter, pi_db, wait_time, \n                                       retries, pi_port, enable_schedule, disable_schedule, verify_hierarchy_and_get_datapoints_from_pi_web_api, \n                                       clear_pi_system_through_pi_web_api, skip_verify_north_interface, asset_name=ASSET):\n\n        \"\"\" Test that applies a filter and checks the data inserted in Fledge and the data sent to PI are equal.\n            reset_fledge: Fixture that resets and cleans up fledge\n            start_south_north: Fixture that adds south and north instances\n            add_filter: Fixture that adds a filter to the services\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n            verify_hierarchy_and_get_datapoints_from_pi_web_api: Fixture to read data from PI and verify the hierarchy\n            clear_pi_system_through_pi_web_api: Fixture for cleaning up the PI Server\n            skip_verify_north_interface: Flag for assertion of data using PI Web API\n\n            Assertions:\n                on endpoint GET /fledge/statistics\n                on endpoint 
GET /fledge/service\n                on endpoint GET /fledge/asset/<asset_name>\n                on endpoint GET /fledge/filter\n                data received from PI is same as data sent\"\"\"\n        \n        clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db, AF_HIERARCHY_LEVEL_LIST, ASSET_DICT)\n        \n        for south_plugin in SOUTH_PLUGINS_LIST:\n            add_configure_filter(add_filter, fledge_url, south_plugin)\n            enable_schedule(fledge_url, \"{0}_{1}\".format(ASSET, south_plugin))\n            \n        # Wait until south, north services and filters are created and some data is loaded\n        time.sleep(wait_time)\n        \n        for south_plugin in SOUTH_PLUGINS_LIST:\n            disable_schedule(fledge_url,\"{}_{}\".format(ASSET, south_plugin))\n        \n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n        verify_filter_added(fledge_url)\n\n        # Verify Data from fledge sent to PI Server is same.\n        verify_data_between_fledge_and_piwebapi(fledge_url, pi_host, pi_admin, pi_passwd, pi_db, AF_HIERARCHY_LEVEL, ASSET, ASSET_DICT[ASSET], verify_hierarchy_and_get_datapoints_from_pi_web_api, wait_time)\n        \n    # @pytest.mark.skip(reason=\"no way of currently testing this\")\n    def test_linked_data_with_onoff_filter(self, reset_fledge, start_south_north, fledge_url, pi_host, pi_admin, pi_passwd, add_filter, pi_db, wait_time, \n                                           retries, pi_port, enable_schedule, disable_schedule, verify_hierarchy_and_get_datapoints_from_pi_web_api, \n                                           clear_pi_system_through_pi_web_api, skip_verify_north_interface, asset_name=ASSET):\n        \n        \"\"\" Test that apply filter and check data is inserted in Fledge and sent to PI are equal.\n          
  reset_fledge: Fixture that reset and cleanup the fledge \n            start_south_north: Fixture that add south and north instance\n            add_filter: Fixture that adds filter to the Services\n            enable_schedule: Fixture for enabling schedules or services\n            disable_schedule: Fixture for disabling schedules or services\n            verify_hierarchy_and_get_datapoints_from_pi_web_api: Fixture to read data from PI and Verify hierarchy\n            clear_pi_system_through_pi_web_api: Fixture for cleaning up PI Server\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            \n            Assertions:\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/asset/<asset_name>\n                on endpoint GET /fledge/filter\n                data received from PI is same as data sent\"\"\"\n        \n        clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db, AF_HIERARCHY_LEVEL_LIST, ASSET_DICT)\n        \n        for south_plugin in SOUTH_PLUGINS_LIST:\n            add_configure_filter(add_filter, fledge_url, south_plugin)\n            enable_schedule(fledge_url, \"{0}_{1}\".format(ASSET, south_plugin))\n            \n        # Wait until south, north services and filters are created and some data is loaded\n        time.sleep(wait_time)\n        \n        for south_plugin in SOUTH_PLUGINS_LIST:\n            disable_schedule(fledge_url,\"{}_{}\".format(ASSET, south_plugin))\n        \n        verify_asset(fledge_url)\n        verify_service_added(fledge_url)\n        verify_statistics_map(fledge_url, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, skip_verify_north_interface)\n        verify_filter_added(fledge_url)\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        \n        for south_plugin in 
SOUTH_PLUGINS_LIST:\n            enable_schedule(fledge_url, \"{0}_{1}\".format(ASSET, south_plugin))\n        time.sleep(wait_time)\n        \n        print(\"On/Off of filter starts\")\n        count = 0\n        while count<3:\n            for south_plugin in SOUTH_PLUGINS_LIST:\n                # For Disabling filter\n                update_filter_config(fledge_url, south_plugin, 'false')\n            time.sleep(wait_time*2)\n            for south_plugin in SOUTH_PLUGINS_LIST:\n                # For enabling filter\n                update_filter_config(fledge_url, south_plugin, 'true')\n            time.sleep(wait_time*2)\n            count+=1\n            \n        for south_plugin in SOUTH_PLUGINS_LIST:\n            disable_schedule(fledge_url,\"{}_{}\".format(ASSET, south_plugin))\n  \n        # Verify Data from fledge sent to PI Server is same.\n        verify_data_between_fledge_and_piwebapi(fledge_url, pi_host, pi_admin, pi_passwd, pi_db, AF_HIERARCHY_LEVEL, ASSET, ASSET_DICT[ASSET], verify_hierarchy_and_get_datapoints_from_pi_web_api, wait_time)\n        "
  },
  {
    "path": "tests/system/python/packages/test_rule_data_availability.py",
    "content": "# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test data availability notification rule system tests:\n        Creates notification instance with data availability rule\n        and notify asset plugin for triggering the notifications based on CONCH.\n\"\"\"\n\n__author__ = \"Yash Tatkondawar\"\n__copyright__ = \"Copyright (c) 2023 Dianomic Systems, Inc.\"\n\n\nimport os\nimport subprocess\nimport time\nimport urllib.parse\nimport json\nfrom pathlib import Path\nimport http\nfrom datetime import datetime\nimport pytest\nimport utils\nfrom pytest import PKG_MGR\n\n# This  gives the path of directory where fledge is cloned. test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/packages/data/\".format(PROJECT_ROOT)\nFLEDGE_ROOT = os.environ.get('FLEDGE_ROOT')\nSOUTH_SERVICE_NAME = \"Sine #1\"\nSOUTH_DP_NAME=\"sinusoid\"\nSOUTH_ASSET_NAME = \"{}_sinusoid_assets\".format(time.strftime(\"%Y%m%d\"))\nNORTH_PLUGIN = \"OMF\"\nNOTIF_SERVICE_NAME = \"notification\"\nNOTIF_INSTANCE_NAME = \"notify #1\"\nAF_HIERARCHY_LEVEL = \"{0}_teststatslvl1/{0}_teststatslvl2/{0}_teststatslvl3\".format(time.strftime(\"%Y%m%d\"))\n\n@pytest.fixture\ndef reset_fledge(wait_time):\n    try:\n        subprocess.run([\"cd {}/tests/system/python/scripts/package && ./reset\"\n                       .format(PROJECT_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n\n    # Wait for fledge server to start\n    time.sleep(wait_time)\n\n@pytest.fixture\ndef reset_eds(north_historian):\n    if north_historian != \"EdgeDataStore\":\n        print(\"Skipping EDS reset check as north_historian is not EdgeDataStore\")\n        return\n    eds_reset_url = \"/api/v1/Administration/Storage/Reset\"\n    con = http.client.HTTPConnection(\"localhost\", 5590)\n    
con.request(\"POST\", eds_reset_url, \"\")\n    resp = con.getresponse()\n    assert 204 == resp.status\n\n@pytest.fixture\ndef check_eds_installed(north_historian):\n    \n    if north_historian != \"EdgeDataStore\":\n        print(\"Skipping EDS installation check as north_historian is not EdgeDataStore\")\n        return\n\n    dpkg_list = os.popen('dpkg --list osisoft.edgedatastore >/dev/null; echo $?')\n    ls_output = dpkg_list.read()\n    assert ls_output == \"0\\n\", \"EDS not installed. Please install it first!\"\n    eds_data_url = \"/api/v1/diagnostics/productinformation\"\n    con = http.client.HTTPConnection(\"localhost\", 5590)\n    con.request(\"GET\", eds_data_url)\n    resp = con.getresponse()\n    r = json.loads(resp.read().decode())\n    assert len(r) != 0, \"EDS not installed. Please install it first!\"\n\n@pytest.fixture\ndef start_south(add_south, fledge_url):\n    south_plugin = \"sinusoid\"\n    config = {\"assetName\": {\"value\": SOUTH_ASSET_NAME}}\n    # south_branch does not matter as these are archives.fledge-iot.org version install\n    add_south(south_plugin, None, fledge_url, service_name=SOUTH_SERVICE_NAME, installation_type='package', config=config)\n\n\ndef _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries):\n    retry_count = 0\n    data_from_pi = None\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n\n    while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n        data_from_pi = read_data_from_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db, af_hierarchy_level_list,\n                                                 SOUTH_ASSET_NAME, '')\n        retry_count += 1\n        time.sleep(wait_time * 2)\n\n    if data_from_pi is None or retry_count == retries:\n        assert False, \"Unable to read data from PI\"\n\n@pytest.fixture\ndef start_north(fledge_url, north_historian, start_north_omf_as_a_service, \n                pi_host, pi_port, 
pi_admin, pi_passwd, enabled=True):\n    \n    if north_historian == \"EdgeDataStore\":\n        data = {\"name\": \"EDS #1\",\n                \"plugin\": NORTH_PLUGIN,\n                \"type\": \"north\",\n                \"enabled\": enabled,\n                \"config\": {\"PIServerEndpoint\": {\"value\": \"Edge Data Store\"},\n                        \"NamingScheme\": {\"value\": \"Backward compatibility\"}}\n                }\n        post_url = \"/fledge/service\"\n        utils.post_request(fledge_url, post_url, data)\n    else:\n        start_north_omf_as_a_service(fledge_url, pi_host, pi_port, pi_user=pi_admin, pi_pwd=pi_passwd, pi_use_legacy=\"false\",\n                                     service_name=\"OMF #1\", default_af_location=AF_HIERARCHY_LEVEL)\n\n@pytest.fixture\ndef start_notification(fledge_url, add_service, add_notification_instance,wait_time, retries):\n    \n    # Install and Add Notification Service\n    add_service(fledge_url, \"notification\", None, retries, installation_type='package', service_name=NOTIF_SERVICE_NAME)\n    \n    # Wait and verify service created or not\n    time.sleep(wait_time)\n    verify_service_added(fledge_url, NOTIF_SERVICE_NAME)\n    \n    # Add Notification Instance\n    rule_config = {\"auditCode\": \"CONAD,SCHAD\"}\n    delivery_config = {\"enable\": \"true\"}\n    add_notification_instance(fledge_url, \"asset\", None, rule_config=rule_config, delivery_config=delivery_config, \n                              rule_plugin=\"DataAvailability\", installation_type='package', notification_type=\"retriggered\",\n                              notification_instance_name=\"test #1\", retrigger_time=5)\n    \n    # Verify Notification Instance created or not\n    notification_url = \"/fledge/notification\"\n    resp = utils.get_request(fledge_url, notification_url)\n    assert \"test #1\" in [s[\"name\"] for s in resp[\"notifications\"]]\n\ndef verify_service_added(fledge_url, name):\n    get_url = \"/fledge/service\"\n  
  result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"services\"])\n    assert name in [s[\"name\"] for s in result[\"services\"]]\n\ndef verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries):\n    get_url = \"/fledge/ping\"\n    ping_result = utils.get_request(fledge_url, get_url)\n    assert \"dataRead\" in ping_result\n    assert \"dataSent\" in ping_result\n    assert 0 < ping_result['dataRead'], \"South data NOT seen in ping header\"\n\n    retry_count = 1\n    sent = 0\n    if not skip_verify_north_interface:\n        while retries > retry_count:\n            sent = ping_result[\"dataSent\"]\n            if sent >= 1:\n                break\n            else:\n                time.sleep(wait_time)\n\n            retry_count += 1\n            ping_result = utils.get_request(fledge_url, get_url)\n\n        assert 1 <= sent, \"Failed to send data to Edge Data Store\"\n    return ping_result\n\ndef verify_eds_data():\n    eds_data_url = \"/api/v1/tenants/default/namespaces/default/streams/1measurement_{}/Data/Last\".format(SOUTH_ASSET_NAME)\n    print (eds_data_url)\n    con = http.client.HTTPConnection(\"localhost\", 5590)\n    con.request(\"GET\", eds_data_url)\n    resp = con.getresponse()\n    r = json.loads(resp.read().decode())\n    return r\n\nclass TestDataAvailabilityAuditBasedNotificationRuleOnIngress:\n    def test_data_availability_multiple_audit(self, clean_setup_fledge_packages, reset_fledge, start_notification, \n                                              start_south, fledge_url, skip_verify_north_interface, wait_time, retries):\n        \"\"\" Test NTFSN triggered or not with CONAD, SCHAD.\n            clean_setup_fledge_packages: Fixture to remove and install latest fledge packages\n            reset_fledge: Fixture to reset fledge\n            start_south: Fixtures to add and start south services\n            start_notification: Fixture to add and start notification service with rule and delivery 
plugins\n            Assertions:\n                on endpoint GET /fledge/audit\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/category \"\"\"\n        time.sleep(wait_time)\n\n        verify_ping(fledge_url, True, wait_time, retries)\n\n        get_url = \"/fledge/audit?source=NTFSN\"\n        resp1 = utils.get_request(fledge_url, get_url)\n        print (len(resp1['audit']))\n        assert len(resp1['audit'])\n        \n        assert \"test #1\" in [s[\"details\"][\"name\"] for s in resp1[\"audit\"]]\n        for audit_detail in resp1['audit']:\n            if \"test #1\" == audit_detail['details']['name']:\n                assert \"NTFSN\" == audit_detail['source'], \"ERROR: NTFSN not triggered properly on CONAD or SCHAD\"\n\n    def test_data_availability_single_audit(self, fledge_url, skip_verify_north_interface, wait_time, retries):\n        \"\"\" Test NTFSN triggered or not with CONCH in sinusoid plugin.\n                on endpoint GET /fledge/audit\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/category \"\"\"\n        get_url = \"/fledge/audit?source=NTFSN\"\n        resp1 = utils.get_request(fledge_url, get_url)\n\n        # Change the configuration of rule plugin\n        put_url = \"/fledge/category/ruletest #1\"\n        data = {\"auditCode\": \"CONCH\"}\n        utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        \n        # Change the configuration of sinusoid plugin\n        put_url = \"/fledge/category/Sine #1Advanced\"\n        data = {\"readingsPerSec\": \"10\"}\n        utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n\n        time.sleep(wait_time)\n        get_url = \"/fledge/audit?source=NTFSN\"\n        resp2 = utils.get_request(fledge_url, get_url)\n        assert len(resp2['audit']) - len(resp1['audit']) == 1, \"ERROR: NTFSN not triggered properly on CONCH\"\n\n    @pytest.mark.xfail(reason=\"FOGL-7712\")\n    def 
test_data_availability_all_audit(self, fledge_url, add_south, skip_verify_north_interface, wait_time, retries):\n        \"\"\" Test NTFSN triggered or not with all audit changes.\n                on endpoint GET /fledge/audit\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/category \"\"\"\n        get_url = \"/fledge/audit?source=NTFSN\"\n        resp1 = utils.get_request(fledge_url, get_url)\n\n        # Change the configuration of rule plugin\n        put_url = \"/fledge/category/ruletest #1\"\n        data = {\"auditCode\": \"*\"}\n        utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        \n        # Add new service\n        south_plugin = \"sinusoid\"\n        config = {\"assetName\": {\"value\": \"sine-test\"}}\n        # south_branch does not matter as these are archives.fledge-iot.org version install\n        add_south(south_plugin, None, fledge_url, service_name=\"sine-test\", installation_type='package', config=config)\n\n        time.sleep(wait_time)\n        get_url = \"/fledge/audit?source=NTFSN\"\n        resp2 = utils.get_request(fledge_url, get_url)\n        assert len(resp2['audit']) > len(resp1['audit']), \"ERROR: NTFSN not triggered properly with * audit code\"\n\nclass TestDataAvailabilityAssetBasedNotificationRuleOnIngress:\n    def test_data_availability_asset(self, fledge_url, add_south, skip_verify_north_interface, wait_time, retries):\n        \"\"\" Test NTFSN triggered or not with all audit changes.\n                on endpoint GET /fledge/audit\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/category \"\"\"\n        get_url = \"/fledge/audit?source=NTFSN\"\n        resp1 = utils.get_request(fledge_url, get_url)\n\n        # Change the configuration of rule plugin\n        put_url = \"/fledge/category/ruletest #1\"\n        data = {\"auditCode\": \"\", \"assetCode\": SOUTH_ASSET_NAME}\n        utils.put_request(fledge_url, 
urllib.parse.quote(put_url), data)\n\n        time.sleep(wait_time)\n        get_url = \"/fledge/audit?source=NTFSN\"\n        resp2 = utils.get_request(fledge_url, get_url)\n        assert len(resp2['audit']) > len(resp1['audit']), \"ERROR: NTFSN not triggered properly with asset code\"\n\nclass TestDataAvailabilityBasedNotificationRuleOnEgress:\n    def test_data_availability_north(self, check_eds_installed, reset_fledge, start_notification, reset_eds, \n                                     start_north, fledge_url, wait_time, skip_verify_north_interface, add_south, \n                                     retries, north_historian, read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, \n                                     pi_db):\n        \"\"\" Test NTFSN triggered or not with configuration change in north EDS plugin.\n            start_north: Fixtures to add and start south services\n            Assertions:\n                on endpoint GET /fledge/audit\n                on endpoint GET /fledge/ping \"\"\"\n        \n        # Change the configuration of rule plugin\n        put_url = \"/fledge/category/ruletest #1\"\n        data = {\"auditCode\": \"\", \"assetCode\": SOUTH_ASSET_NAME}\n        utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        \n        # Add new service\n        south_plugin = \"sinusoid\"\n        config = {\"assetName\": {\"value\": SOUTH_ASSET_NAME}}\n        # south_branch does not matter as these are archives.fledge-iot.org version install\n        add_south(south_plugin, None, fledge_url, service_name=\"sine-test\", installation_type='package', config=config)\n\n        get_url = \"/fledge/audit?source=NTFSN\"\n        resp1 = utils.get_request(fledge_url, get_url)\n        \n        time.sleep(wait_time)\n        get_url = \"/fledge/audit?source=NTFSN\"\n        resp2 = utils.get_request(fledge_url, get_url)\n        assert len(resp2['audit']) > len(resp1['audit']), \"ERROR: NTFSN not triggered properly with 
asset code\"\n\n        time.sleep(wait_time)\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        if north_historian == \"EdgeDataStore\":\n            r = verify_eds_data()\n            assert SOUTH_DP_NAME in r, \"Data not found in EDS!\"\n            ts = r.get(\"Time\")\n            assert ts.find(datetime.now().strftime(\"%Y-%m-%d\")) != -1, \"Latest data not found in EDS!\"\n        else:\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries)\n"
  },
  {
    "path": "tests/system/python/packages/test_statistics_history_notification_rule.py",
    "content": "# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test statistics history notification rule system tests:\n        Creates notification instance with source as statistics history in threshold rule\n        and notify asset plugin for triggering the notifications based on rules.\n\"\"\"\n\n__author__ = \"Yash Tatkondawar\"\n__copyright__ = \"Copyright (c) 2023 Dianomic Systems, Inc.\"\n\nimport base64\nimport http.client\nimport json\nimport os\nimport ssl\nimport subprocess\nimport time\nimport urllib.parse\nfrom pathlib import Path\n\nimport pytest\nimport utils\nfrom pytest import PKG_MGR\n\n# This  gives the path of directory where fledge is cloned. test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/packages/data/\".format(PROJECT_ROOT)\nFLEDGE_ROOT = os.environ.get('FLEDGE_ROOT')\nSOUTH_SERVICE_NAME = \"Sine #1\"\nSOUTH_ASSET_NAME = \"{}_sinusoid_assets\".format(time.strftime(\"%Y%m%d\"))\nNOTIF_SERVICE_NAME = \"notification\"\nNOTIF_INSTANCE_NAME = \"notify #1\"\nAF_HIERARCHY_LEVEL = \"{0}_teststatslvl1/{0}_teststatslvl2/{0}_teststatslvl3\".format(time.strftime(\"%Y%m%d\"))\n\n@pytest.fixture\ndef reset_fledge(wait_time):\n    try:\n        subprocess.run([\"cd {}/tests/system/python/scripts/package && ./reset\"\n                       .format(PROJECT_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n\n    # Wait for fledge server to start\n    time.sleep(wait_time)\n\n\n@pytest.fixture\ndef start_south(add_south, fledge_url):\n    south_plugin = \"sinusoid\"\n    config = {\"assetName\": {\"value\": SOUTH_ASSET_NAME}}\n    # south_branch does not matter as these are archives.fledge-iot.org version install\n    add_south(south_plugin, None, fledge_url, service_name=SOUTH_SERVICE_NAME, installation_type='package', 
config=config)\n\n\n@pytest.fixture\ndef start_north(start_north_omf_as_a_service, fledge_url,\n                pi_host, pi_port, pi_admin, pi_passwd, clear_pi_system_through_pi_web_api, pi_db):\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    dp_list = ['sinusoid']\n    asset_dict = {}\n    asset_dict[SOUTH_ASSET_NAME] = dp_list\n    clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n                                       af_hierarchy_level_list, asset_dict)\n\n    response = start_north_omf_as_a_service(fledge_url, pi_host, pi_port, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                            default_af_location=AF_HIERARCHY_LEVEL)\n\n    yield response\n\n@pytest.fixture\ndef start_notification(fledge_url, add_service, add_notification_instance, wait_time, retries):\n\n    # Install and add the notification service\n    add_service(fledge_url, \"notification\", None, retries, installation_type='package', service_name=NOTIF_SERVICE_NAME)\n\n    # Wait and verify whether the service was created\n    time.sleep(wait_time)\n    verify_service_added(fledge_url, NOTIF_SERVICE_NAME)\n\n    # Add the notification instance\n    rule_config = {\n            \"source\": \"Statistics History\",\n            \"asset\": \"READINGS\",\n            \"trigger_value\": \"10.0\",\n        }\n    delivery_config = {\"enable\": \"true\"}\n    add_notification_instance(fledge_url, \"asset\", None, rule_config=rule_config, delivery_config=delivery_config, installation_type='package', \n                              notification_type=\"retriggered\", notification_instance_name=\"test #1\", retrigger_time=30)\n\n    # Verify whether the notification instance was created\n    notification_url = \"/fledge/notification\"\n    resp = utils.get_request(fledge_url, notification_url)\n    assert \"test #1\" in [s[\"name\"] for s in resp[\"notifications\"]]\n\ndef verify_service_added(fledge_url, name):\n    get_url = 
\"/fledge/service\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"services\"])\n    assert name in [s[\"name\"] for s in result[\"services\"]]\n\ndef verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries):\n    get_url = \"/fledge/ping\"\n    ping_result = utils.get_request(fledge_url, get_url)\n    assert \"dataRead\" in ping_result\n    assert \"dataSent\" in ping_result\n    assert 0 < ping_result['dataRead'], \"South data NOT seen in ping header\"\n\n    retry_count = 1\n    sent = 0\n    if not skip_verify_north_interface:\n        while retries > retry_count:\n            sent = ping_result[\"dataSent\"]\n            if sent >= 1:\n                break\n            else:\n                time.sleep(wait_time)\n\n            retry_count += 1\n            ping_result = utils.get_request(fledge_url, get_url)\n\n        assert 1 <= sent, \"Failed to send data via PI Web API using Basic auth\"\n    return ping_result\n\ndef _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries):\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    retry_count = 0\n    data_from_pi = None\n    # Name of asset in the PI server\n    pi_asset_name = \"{}\".format(SOUTH_ASSET_NAME)\n\n    while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n        data_from_pi = read_data_from_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db, af_hierarchy_level_list,\n                                                    pi_asset_name, '')\n        # Count a retry on an empty response as well, otherwise an empty list would loop forever\n        if data_from_pi is None or data_from_pi == []:\n            retry_count += 1\n            time.sleep(wait_time)\n\n    if data_from_pi is None or data_from_pi == []:\n        assert False, \"Failed to read data from PI\"\n\n\nclass TestStatisticsHistoryBasedNotificationRuleOnIngress:\n    def test_stats_readings_south(self, clean_setup_fledge_packages, reset_fledge, start_south, start_notification, fledge_url,\n                 
skip_verify_north_interface, wait_time, retries):\n        \"\"\" Test NTFSN triggered or not with source as statistics history and name as READINGS in threshold rule.\n            clean_setup_fledge_packages: Fixture to remove and install latest fledge packages\n            reset_fledge: Fixture to reset fledge\n            start_south: Fixtures to add and start south services\n            start_notification: Fixture to add and start notification service with rule and delivery plugins\n            Assertions:\n                on endpoint GET /fledge/audit\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/statistics/history \"\"\"\n        time.sleep(wait_time * 4)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        \n        # When rule is triggered, there should be audit entries for NTFSN\n        get_url = \"/fledge/audit?source=NTFSN\"\n        resp1 = utils.get_request(fledge_url, get_url)\n        assert len(resp1['audit'])\n        assert \"test #1\" in [s[\"details\"][\"name\"] for s in resp1[\"audit\"]]\n        for audit_detail in resp1['audit']:\n            if \"test #1\" == audit_detail['details']['name']:\n                assert \"NTFSN\" == audit_detail['source']\n        # Waiting for 90 sec to get 2 more NTFSN entries if rule is triggered properly\n        time.sleep(90)\n        resp2 = utils.get_request(fledge_url, get_url)\n        assert len(resp2['audit']) - len(resp1['audit']) >= 2, \"ERROR: NTFSN not triggered properly\"\n        \n        get_url = \"/fledge/statistics/history?minutes=10\"\n        r = utils.get_request(fledge_url, get_url)\n        if \"READINGS\" in r[\"statistics\"][0]:\n            assert 0 < r[\"statistics\"][0][\"READINGS\"]\n\n    def test_stats_south_asset_ingest(self, fledge_url, wait_time, skip_verify_north_interface, retries):\n        \"\"\" Test NTFSN triggered or not with source as statistics history and name as ingested south asset 
in threshold rule.\n            Assertions:\n                on endpoint GET /fledge/audit\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/statistics/history \"\"\"\n        # Change the config of threshold, name of statistics - READINGS replaced with statistics key name - Sine #1-Ingest\n        put_url = \"/fledge/category/ruletest #1\"\n        data = {\"asset\": \"Sine #1-Ingest\"}\n        utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        \n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        \n        # When rule is triggered, there should be audit entries for NTFSN\n        get_url = \"/fledge/audit?source=NTFSN\"\n        resp1 = utils.get_request(fledge_url, get_url)\n        assert len(resp1['audit'])\n        assert \"test #1\" in [s[\"details\"][\"name\"] for s in resp1[\"audit\"]]\n        for audit_detail in resp1['audit']:\n            if \"test #1\" == audit_detail['details']['name']:\n                assert \"NTFSN\" == audit_detail['source']\n        # Waiting for 90 sec to get more NTFSN entries\n        time.sleep(90)\n        resp2 = utils.get_request(fledge_url, get_url)\n        assert len(resp2['audit']) - len(resp1['audit']) >= 2, \"ERROR: NTFSN not triggered properly\"\n        \n        get_url = \"/fledge/statistics/history?minutes=10\"\n        r = utils.get_request(fledge_url, get_url)\n        if \"Sine #1-Ingest\" in r[\"statistics\"][0]:\n            assert 0 < r[\"statistics\"][0][\"Sine #1-Ingest\"]\n        \n    def test_stats_south_asset(self, fledge_url, wait_time, skip_verify_north_interface, retries):\n        \"\"\" Test NTFSN triggered or not with source as statistics history and name as south asset name in threshold rule.\n            Assertions:\n                on endpoint GET /fledge/audit\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/statistics/history \"\"\"\n        # Change 
the config of the threshold rule, replacing the statistics name Sine #1-Ingest with the asset statistics key name, e.g. 20230420_SINUSOID_ASSETS\n        put_url = \"/fledge/category/ruletest #1\"\n        data = {\"asset\": SOUTH_ASSET_NAME.upper()}\n        utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        \n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        \n        # When rule is triggered, there should be audit entries for NTFSN\n        get_url = \"/fledge/audit?source=NTFSN\"\n        resp1 = utils.get_request(fledge_url, get_url)\n        assert len(resp1['audit'])\n        assert \"test #1\" in [s[\"details\"][\"name\"] for s in resp1[\"audit\"]]\n        for audit_detail in resp1['audit']:\n            if \"test #1\" == audit_detail['details']['name']:\n                assert \"NTFSN\" == audit_detail['source']\n        # Waiting for 90 sec to get more NTFSN entries\n        time.sleep(90)\n        resp2 = utils.get_request(fledge_url, get_url)\n        assert len(resp2['audit']) - len(resp1['audit']) >= 2, \"ERROR: NTFSN not triggered properly\"\n        \n        get_url = \"/fledge/statistics/history?minutes=10\"\n        r = utils.get_request(fledge_url, get_url)\n        if SOUTH_ASSET_NAME.upper() in r[\"statistics\"][0]:\n            assert 0 < r[\"statistics\"][0][SOUTH_ASSET_NAME.upper()]\n\n\nclass TestStatisticsHistoryBasedNotificationRuleOnEgress:\n    def test_stats_readings_north(self, start_north, fledge_url, wait_time, skip_verify_north_interface, retries, \n                                  read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test NTFSN triggered or not with source as statistics history and name as Readings Sent in threshold rule.\n            start_north: Fixture to add and start the north service\n   
         Assertions:\n                on endpoint GET /fledge/audit\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/statistics/history \"\"\"\n        # Change the config of the threshold rule, replacing the statistics name with the north statistics key name - Readings Sent\n        put_url = \"/fledge/category/ruletest #1\"\n        data = {\"asset\": \"Readings Sent\"}\n        utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        \n        # When rule is triggered, there should be audit entries for NTFSN\n        get_url = \"/fledge/audit?source=NTFSN\"\n        resp1 = utils.get_request(fledge_url, get_url)\n        assert len(resp1['audit'])\n        assert \"test #1\" in [s[\"details\"][\"name\"] for s in resp1[\"audit\"]]\n        for audit_detail in resp1['audit']:\n            if \"test #1\" == audit_detail['details']['name']:\n                assert \"NTFSN\" == audit_detail['source']\n        # Waiting for 90 sec to get more NTFSN entries\n        time.sleep(90)\n        resp2 = utils.get_request(fledge_url, get_url)\n        assert len(resp2['audit']) - len(resp1['audit']) >= 2, \"ERROR: NTFSN for north not triggered properly\"\n        \n        get_url = \"/fledge/statistics/history?minutes=10\"\n        r = utils.get_request(fledge_url, get_url)\n        if \"Readings Sent\" in r[\"statistics\"][0]:\n            assert 0 < r[\"statistics\"][0][\"Readings Sent\"]\n\n        _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries)\n"
  },
  {
    "path": "tests/system/python/pair/docs/test_c_north_service_pair.rst",
    "content": "C Based North Service Pair Test\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is specifically designed to validate data ingestion in Fledge(A) using the fledge-south-sinusoid plugin and its subsequent transfer to Fledge(B) via the fledge-north-httpc plugin. Fledge(B) processes this data through the fledge-south-http (Python) plugin and forwards it to the PI Server using the fledge-north-OMF plugin.\nThis test verifies the basic functionality and reliability of the fledge-north-httpc plugin, focusing on scenarios such as restarts, reconfigurations, and filter manipulations.\n\nThis test consists of the *TestCNorthService* class, which contains multiple test case functions:\n\n1. **test_north_C_service_with_restart**: Verifies that data ingested into Fledge(A) using fledge-south-sinusoid is sent to Fledge(B) using the fledge-north-httpc plugin, and that Fledge(B) forwards the data to the PI Server correctly, even after restarting Fledge(A).\n2. **test_north_C_service_with_enable_disable**: Ensures that the fledge-north-httpc plugin sends data to Fledge(B) and the PI Server correctly after being disabled and then re-enabled in Fledge(A).\n3. **test_north_C_service_with_delete_add**: Confirms that the fledge-north-httpc plugin continues to function correctly after being deleted and re-added in Fledge(A), with data successfully flowing to Fledge(B) and the PI Server.\n4. **test_north_C_service_with_reconfig**: Validates that reconfiguring the fledge-north-httpc plugin in Fledge(A) does not disrupt the data flow to Fledge(B) and the PI Server.\n5. **test_north_C_service_with_filter**: Verifies that adding the fledge-filter-scale to the fledge-north-httpc plugin in Fledge(A) applies the filter correctly before forwarding the data to Fledge(B) and the PI Server.\n6. 
**test_north_C_service_with_filter_enable_disable**: Ensures that disabling and re-enabling the fledge-filter-scale filter on the fledge-north-httpc plugin in Fledge(A) does not disrupt data transformation or flow to Fledge(B) and the PI Server.\n7. **test_north_C_service_with_filter_reconfig**: Confirms that reconfiguring the fledge-filter-scale filter on the fledge-north-httpc plugin applies the updated configuration correctly while maintaining data flow to Fledge(B) and the PI Server.\n8. **test_north_C_service_with_delete_add_filter**: Ensures that deleting and re-adding the fledge-filter-scale filter to the fledge-north-httpc plugin in Fledge(A) does not affect the data flow or transformation, with data reaching Fledge(B) and the PI Server as expected.\n9. **test_north_C_service_with_filter_reorder**: Verifies that reordering filters (e.g., fledge-filter-scale and fledge-filter-metadata) applied to the fledge-north-httpc plugin updates the processing sequence correctly, with data accurately processed and forwarded to Fledge(B) and the PI Server.\n\n\nPrerequisite\n++++++++++++\n\nInstall the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run,\n\n.. 
code-block:: console\n\n    --package-build-version=PACKAGE_BUILD_VERSION\n                        Package build version for http://archives.fledge-iot.org/\n    --remote-user=FLEDGE(B)_USER\n                        Username of remote machine on which Fledge(B) is running\n    --remote-ip=FLEDGE(B)_IP\n                        IP of remote machine on which Fledge(B) is running\n    --key-path=KEY_PATH\n                        Path of the key required to access remote machine on which Fledge(B) is running\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --retries=RETRIES\n                        Number of tries for polling\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n    --wait-fix=WAIT_FIX\n                        Extra wait time (in seconds) required for process to run\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python\n  $ python3 -m pytest -s -vv pair/test_c_north_service_pair.py --package-build-version=\"<PACKAGE_BUILD_VERSION>\" --remote-user=\"<FLEDGE(B)_USER>\" \\\n      --remote-ip=\"<FLEDGE(B)_IP>\" --key-path=\"<KEY_PATH>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-host=\"<PI_SYSTEM_HOST>\" \\\n      --pi-port=\"<PI_SYSTEM_PORT>\" --pi-db=\"<PI_SYSTEM_DB>\" --wait-time=\"<WAIT_TIME>\" --retries=\"<RETRIES>\" --junit-xml=\"<JUNIT_XML>\" --wait-fix=\"<WAIT_FIX>\"\n"
  },
  {
    "path": "tests/system/python/pair/docs/test_e2e_fledge_pair.rst",
    "content": "E2E Fledge Pair Test\n~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed for end-to-end validation between two Fledge instances, Fledge(A) and Fledge(B). Fledge(A) uses the fledge-south-sinusoid, fledge-south-expression, and fledge-south-playback south services to ingest data. This data is then transferred to Fledge(B) using the fledge-north-http-north plugin. Fledge(B) processes the data through the fledge-south-http plugin and forwards it to the PI Server via the fledge-north-OMF plugin.\n\nThis test consists of the *TestE2eFogPairPi* class, which contains a single test case function:\n\n1. **test_end_to_end**: Verifies that data is ingested into Fledge(A) using the mentioned south plugins and sent to Fledge(B) via the north http plugin, and that Fledge(B) then receives this data via the http south plugin and sends it to the PI Server. Checks data integrity, asset creation, and that data reaches the PI Server.\n\nPrerequisite\n++++++++++++\n\n1. Fledge must be installed by the `make` command\n2. The FLEDGE_ROOT environment variable should be exported to the directory where Fledge is installed.\n3. Install the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\nThe minimum required parameters to run,\n\n.. 
code-block:: console\n\n    --remote-user=FLEDGE(B)_USER\n                        Username of remote machine on which Fledge(B) is running\n    --remote-ip=FLEDGE(B)_IP\n                        IP of remote machine on which Fledge(B) is running\n    --key-path=KEY_PATH\n                        Path of the key required to access remote machine on which Fledge(B) is running\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --retries=RETRIES\n                        Number of tries for polling\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python\n  $ export FLEDGE_ROOT=<path_to_fledge_installation>\n  $ export PYTHONPATH=$FLEDGE_ROOT/python\n  $ python3 -m pytest -s -vv pair/test_e2e_fledge_pair.py --remote-user=\"<FLEDGE(B)_USER>\" --remote-ip=\"<FLEDGE(B)_IP>\" --key-path=\"<KEY_PATH>\" \\\n        --pi-host=\"<PI_SYSTEM_HOST>\" --pi-port=\"<PI_SYSTEM_PORT>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-db=\"<PI_SYSTEM_DB>\" \\\n        --wait-time=\"<WAIT_TIME>\" --retries=\"<RETRIES>\" --junit-xml=\"<JUNIT_XML>\"\n"
  },
  {
    "path": "tests/system/python/pair/docs/test_python_north_service_pair.rst",
    "content": "Python Based North Service Pair Test\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nObjective\n+++++++++\nThis test is designed to validate the data ingestion and transfer process between Fledge(A) and Fledge(B). Specifically, it verifies the functionality of the fledge-south-sinusoid plugin for data ingestion into Fledge(A) and its subsequent transfer to Fledge(B) via the fledge-north-http-north plugin (Python). Fledge(B) then processes the data through the fledge-south-http plugin and forwards it to the PI Server using the fledge-north-OMF plugin.\nThe goal of this test is to ensure the basic functionality and reliability of the fledge-north-http-north plugin, with a focus on handling scenarios such as restarts, reconfigurations, and filter manipulations.\n\nThis test consists of the *TestPythonNorthService* class, which contains multiple test case functions:\n\n1. **test_north_python_service_with_restart**: Verifies that data ingested into Fledge(A) using fledge-south-sinusoid is sent to Fledge(B) using the fledge-north-http-north plugin, and that Fledge(B) forwards the data to the PI Server correctly, even after restarting Fledge(A).\n2. **test_north_python_service_with_enable_disable**: Ensures that the fledge-north-http-north plugin sends data to Fledge(B) and the PI Server correctly after being disabled and then re-enabled in Fledge(A).\n3. **test_north_python_service_with_delete_add**: Confirms that the fledge-north-http-north plugin continues to function correctly after being deleted and re-added in Fledge(A), with data successfully flowing to Fledge(B) and the PI Server.\n4. **test_north_python_service_with_reconfig**: Validates that reconfiguring the fledge-north-http-north plugin in Fledge(A) does not disrupt the data flow to Fledge(B) and the PI Server.\n5. 
**test_north_python_service_with_filter**: Verifies that adding the fledge-filter-scale to the fledge-north-http-north plugin in Fledge(A) applies the filter correctly before forwarding the data to Fledge(B) and the PI Server.\n6. **test_north_python_service_with_filter_enable_disable**: Ensures that disabling and re-enabling the fledge-filter-scale filter on the fledge-north-http-north plugin in Fledge(A) does not disrupt data transformation or flow to Fledge(B) and the PI Server.\n7. **test_north_python_service_with_filter_reconfig**: Confirms that reconfiguring the fledge-filter-scale filter on the fledge-north-http-north plugin applies the updated configuration correctly while maintaining data flow to Fledge(B) and the PI Server.\n8. **test_north_python_service_with_delete_add_filter**: Ensures that deleting and re-adding the fledge-filter-scale filter to the fledge-north-http-north plugin in Fledge(A) does not affect the data flow or transformation, with data reaching Fledge(B) and the PI Server as expected.\n9. **test_north_python_service_with_filter_reorder**: Verifies that reordering filters (e.g., fledge-filter-scale and fledge-filter-metadata) applied to the fledge-north-http-north plugin updates the processing sequence correctly, with data accurately processed and forwarded to Fledge(B) and the PI Server.\n\n\nPrerequisite\n++++++++++++\n\nInstall the prerequisites to run a test:\n\n.. code-block:: console\n\n  $ cd fledge/python\n  $ python3 -m pip install -r requirements-test.txt --user\n\n\nThe minimum required parameters to run,\n\n.. 
code-block:: console\n\n    --package-build-version=PACKAGE_BUILD_VERSION\n                        Package build version for http://archives.fledge-iot.org/\n    --remote-user=FLEDGE(B)_USER\n                        Username of remote machine on which Fledge(B) is running\n    --remote-ip=FLEDGE(B)_IP\n                        IP of remote machine on which Fledge(B) is running\n    --key-path=KEY_PATH\n                        Path of the key required to access remote machine on which Fledge(B) is running\n    --pi-host=PI_SYSTEM_HOST\n                        PI Server HostName/IP\n    --pi-port=PI_SYSTEM_PORT\n                        PI Server port\n    --pi-admin=PI_SYSTEM_ADMIN\n                        PI Server user login\n    --pi-passwd=PI_SYSTEM_PWD\n                        PI Server user login password\n    --pi-db=PI_SYSTEM_DB\n                        PI Server Database\n    --wait-time=WAIT_TIME\n                        Generic wait time (in seconds) between processes\n    --retries=RETRIES\n                        Number of tries for polling\n    --junit-xml=JUNIT_XML\n                        Specifies the file path or directory where the JUnit XML test results should be saved.\n    --wait-fix=WAIT_FIX\n                        Extra wait time (in seconds) required for process to run\n\nExecution of Test\n+++++++++++++++++\n\n.. code-block:: console\n\n  $ cd fledge/tests/system/python\n  $ python3 -m pytest -s -vv pair/test_python_north_service_pair.py --package-build-version=\"<PACKAGE_BUILD_VERSION>\" --remote-user=\"<FLEDGE(B)_USER>\" \\\n      --remote-ip=\"<FLEDGE(B)_IP>\" --key-path=\"<KEY_PATH>\" --pi-admin=\"<PI_SYSTEM_ADMIN>\" --pi-passwd=\"<PI_SYSTEM_PWD>\" --pi-host=\"<PI_SYSTEM_HOST>\" \\\n      --pi-port=\"<PI_SYSTEM_PORT>\" --pi-db=\"<PI_SYSTEM_DB>\" --wait-time=\"<WAIT_TIME>\" --retries=\"<RETRIES>\" --junit-xml=\"<JUNIT_XML>\" --wait-fix=\"<WAIT_FIX>\"\n"
  },
  {
    "path": "tests/system/python/pair/test_c_north_service_pair.py",
    "content": "# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"\n       A pair system test to verify a C north service with Python south plugins\n\"\"\"\n\n__author__ = \"Yash Tatkondawar\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems, Inc.\"\n\nimport http.client\nimport json\nimport subprocess\nimport time\nimport urllib.parse\nfrom pathlib import Path\n\nimport pytest\nimport utils\n\n# Local machine\nlocal_south_plugin = \"sinusoid\"\nlocal_south_asset_name = \"north_svc_pair_C_sinusoid\"\nlocal_south_service_name = \"Sine #1\"\nlocal_north_plugin = \"httpc\"\nlocal_north_service_name = \"HN #1\"\n\n# Remote machine\nremote_south_plugin = \"http_south\"\nremote_south_service_name = \"HS #1\"\nremote_south_asset_name = \"north_svc_pair_C_sinusoid\"\nremote_north_plugin = \"OMF\"\nremote_north_service_name = \"NorthReadingsToPI_WebAPI\"\n\nnorth_schedule_id = \"\"\nfilter_name = \"ScaleFilter #1\"\n\n# This gives the path of the directory where fledge is cloned. 
test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/scripts/package/\".format(PROJECT_ROOT)\n# SSH command to make connection with the remote machine\nssh_cmd = \"ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i\"\nAF_HIERARCHY_LEVEL = \"Cservicepair/Cservicepairlvl1/Cservicepairlvl2\"\n\n\n@pytest.fixture\ndef reset_fledge_local(wait_time):\n    try:\n        subprocess.run([\"cd {} && ./reset\"\n                       .format(SCRIPTS_DIR_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n\n\n@pytest.fixture\ndef setup_local(reset_fledge_local, add_south, add_north, fledge_url, remote_ip):\n    local_south_config = {\"assetName\": {\"value\": remote_south_asset_name}}\n    add_south(local_south_plugin, None, fledge_url, config=local_south_config,\n              service_name=\"{}\".format(local_south_service_name),\n              installation_type='package')\n    # Change name of variables such as service_name, plugin_type\n    global north_schedule_id\n    local_north_config = {\"URL\": {\"value\": \"http://{}:6683/sensor-reading\".format(remote_ip)}}\n    response = add_north(fledge_url, local_north_plugin, None, installation_type='package',\n                         north_instance_name=\"{}\".format(local_north_service_name),\n                         is_task=False, config=local_north_config, enabled=True)\n    north_schedule_id = response[\"id\"]\n\n    yield setup_local\n\n\n@pytest.fixture\ndef reset_fledge_remote(remote_user, remote_ip, key_path, remote_fledge_path):\n    \"\"\"Fixture that kills fledge, reset database and starts fledge again on a remote machine\n            remote_user: User of remote machine\n            remote_ip: IP of remote machine\n            key_path: Path of key file used for authentication to remote machine\n            
remote_fledge_path: Path where Fledge is cloned and built\n        \"\"\"\n    if remote_fledge_path is None:\n        remote_fledge_path = '/home/{}/fledge'.format(remote_user)\n    # Reset fledge on remote machine\n    subprocess.run([\n        \"{} {} {}@{} 'cd {}/tests/system/python/scripts/package/ && ./reset'\".format(\n            ssh_cmd, key_path, remote_user,\n            remote_ip, remote_fledge_path)], shell=True, check=True)\n\n\n@pytest.fixture\ndef clean_install_fledge_packages_remote(remote_user, remote_ip, key_path, remote_fledge_path, package_build_version):\n    \"\"\"Fixture that removes all installed Fledge packages and installs the given package build version on a remote machine\n            remote_user: User of remote machine\n            remote_ip: IP of remote machine\n            key_path: Path of key file used for authentication to remote machine\n            remote_fledge_path: Path where Fledge is cloned and built\n        \"\"\"\n    if remote_fledge_path is None:\n        remote_fledge_path = '/home/{}/fledge'.format(remote_user)\n    # Remove all already installed packages from remote machine\n    subprocess.run([\n        \"{} {} {}@{} 'export FLEDGE_ROOT={};cd \"\n        \"$FLEDGE_ROOT/tests/system/python/scripts/package/ && ./remove'\".format(\n            ssh_cmd, key_path, remote_user,\n            remote_ip, remote_fledge_path)], shell=True, check=True)\n    # Installs packages on remote machine based on packages version passed\n    subprocess.run([\n        \"{} {} {}@{} 'export FLEDGE_ROOT={};cd \"\n        \"$FLEDGE_ROOT/tests/system/python/scripts/package/ && ./setup {}'\".format(\n            ssh_cmd, key_path, remote_user,\n            remote_ip, remote_fledge_path, package_build_version)], shell=True, check=True)\n    # Installs http_south python plugin on remote machine\n    try:\n        subprocess.run([\n            \"{} {} {}@{} 'sudo apt install -y fledge-south-http-south'\".format(\n                ssh_cmd, key_path, remote_user,\n             
   remote_ip)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"{} plugin installation failed\".format(remote_south_plugin)\n\n\n@pytest.fixture\ndef setup_remote(reset_fledge_remote, remote_user, remote_ip, start_north_omf_as_a_service,\n                 pi_host, pi_port, pi_admin, pi_passwd,\n                 clear_pi_system_through_pi_web_api, pi_db):\n    \"\"\"Fixture that setups remote machine\n            reset_fledge_remote: Fixture that kills fledge, reset database and starts fledge again on a remote\n                                           machine.\n            remote_user: User of remote machine\n            remote_ip: IP of remote machine\n            pi_host: Host IP of PI machine\n            pi_port: Host port of PI machine\n            pi_admin: Username of PI machine\n            pi_passwd: Password of PI machine\n        \"\"\"\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    # There are two data points here. 1. sinusoid\n    # 2. 
no data point (Asset name be used in this case.)\n    dp_list = ['sinusoid', '']\n    asset_dict = {}\n    asset_dict[remote_south_asset_name] = dp_list\n    clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n                                       af_hierarchy_level_list, asset_dict)\n\n    fledge_url = \"{}:8081\".format(remote_ip)\n\n    # Configure http_south python plugin on remote machine\n    conn = http.client.HTTPConnection(fledge_url)\n    data = {\"name\": \"{}\".format(remote_south_service_name), \"type\": \"South\", \"plugin\": \"{}\".format(remote_south_plugin),\n            \"enabled\": \"true\", \"config\": {\"assetNamePrefix\": {\"value\": \"\"}}}\n    conn.request(\"POST\", '/fledge/service', json.dumps(data))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    retval = json.loads(r)\n    assert remote_south_service_name == retval[\"name\"]\n\n    # Configure pi north plugin on remote machine\n    global remote_north_schedule_id\n    response = start_north_omf_as_a_service(fledge_url, pi_host, pi_port, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                            start=True, default_af_location=AF_HIERARCHY_LEVEL)\n    remote_north_schedule_id = response[\"id\"]\n\n    yield setup_remote\n\n\ndef verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries):\n    get_url = \"/fledge/ping\"\n    ping_result = utils.get_request(fledge_url, get_url)\n    assert \"dataRead\" in ping_result\n    assert \"dataSent\" in ping_result\n    assert 0 < ping_result['dataRead'], \"South data NOT seen in ping header\"\n\n    retry_count = 1\n    sent = 0\n    if not skip_verify_north_interface:\n        while retries > retry_count:\n            sent = ping_result[\"dataSent\"]\n            if sent >= 1:\n                break\n            else:\n                time.sleep(wait_time)\n\n            retry_count += 1\n            ping_result = utils.get_request(fledge_url, 
get_url)\n\n        assert 1 <= sent, \"Failed to send data\"\n    return ping_result\n\n\ndef verify_statistics_map(fledge_url, south_asset_name, north_service_name, skip_verify_north_interface):\n    get_url = \"/fledge/statistics\"\n    jdoc = utils.get_request(fledge_url, get_url)\n    actual_stats_map = utils.serialize_stats_map(jdoc)\n    assert 1 <= actual_stats_map[south_asset_name.upper()]\n    assert 1 <= actual_stats_map['READINGS']\n    if not skip_verify_north_interface:\n        assert 1 <= actual_stats_map['Readings Sent']\n        assert 1 <= actual_stats_map[north_service_name]\n\n\ndef verify_service_added(fledge_url, south_service_name, north_service_name):\n    get_url = \"/fledge/south\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"services\"])\n    assert south_service_name in [s[\"name\"] for s in result[\"services\"]]\n\n    get_url = \"/fledge/north\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result)\n    assert north_service_name in [s[\"name\"] for s in result]\n\n    get_url = \"/fledge/service\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"services\"])\n    assert south_service_name in [s[\"name\"] for s in result[\"services\"]]\n    assert north_service_name in [s[\"name\"] for s in result[\"services\"]]\n\n\ndef verify_filter_added(fledge_url, filter_name):\n    get_url = \"/fledge/filter\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"filters\"])\n    assert filter_name in [s[\"name\"] for s in result[\"filters\"]]\n    return result\n\n\ndef verify_asset(fledge_url, south_asset_name):\n    get_url = \"/fledge/asset\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result), \"No asset found\"\n    assert south_asset_name in [s[\"assetCode\"] for s in result]\n\n\ndef verify_asset_tracking_details(fledge_url, south_service_name, south_asset_name, south_plugin, north_service_name,\n           
                       north_plugin, skip_verify_north_interface):\n    tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n    assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n    tracked_item = tracking_details[\"track\"][0]\n    assert south_service_name == tracked_item[\"service\"]\n    assert south_asset_name == tracked_item[\"asset\"]\n    assert south_plugin == tracked_item[\"plugin\"]\n\n    if not skip_verify_north_interface:\n        egress_tracking_details = utils.get_asset_tracking_details(fledge_url, \"Egress\")\n        assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n        tracked_item = egress_tracking_details[\"track\"][0]\n        assert north_service_name == tracked_item[\"service\"]\n        assert south_asset_name == tracked_item[\"asset\"]\n        assert north_plugin == tracked_item[\"plugin\"]\n\n\ndef _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries, asset_name):\n    retry_count = 0\n    data_from_pi = None\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    type_id = 1\n    recorded_datapoint = \"{}\".format(asset_name)\n    # Name of asset in the PI server\n    pi_asset_name = \"{}\".format(asset_name)\n\n    while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n        data_from_pi = read_data_from_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db, af_hierarchy_level_list,\n                                                 pi_asset_name, '')\n        retry_count += 1\n        time.sleep(wait_time * 2)\n\n    # Fail only when no data was retrieved; a read that succeeds on the final retry is still a pass\n    if data_from_pi is None or data_from_pi == []:\n        assert False, \"Failed to read data from PI\"\n\n\nclass TestCNorthService:\n    def test_north_C_service_with_restart(self, clean_setup_fledge_packages, clean_install_fledge_packages_remote,\n                                               setup_local, setup_remote, skip_verify_north_interface, 
fledge_url,\n                                               wait_time, retries, remote_ip, read_data_from_pi_web_api, pi_host,\n                                               pi_admin, pi_passwd, pi_db):\n        \"\"\" Test C plugin as a North service before and after restarting fledge.\n            clean_setup_fledge_packages: Fixture to remove and install latest fledge packages\n            clean_install_fledge_packages_remote: Fixture to remove and install latest fledge packages on remote machine\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Verify on local machine\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url, local_south_asset_name)\n        verify_service_added(fledge_url, local_south_service_name, local_north_service_name)\n        verify_statistics_map(fledge_url, local_south_asset_name, local_north_service_name, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, local_south_service_name, local_south_asset_name, local_south_plugin,\n                                      local_north_service_name, local_north_plugin, 
skip_verify_north_interface)\n\n        # Verify on remote machine\n        verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url_remote, remote_south_asset_name)\n        verify_service_added(fledge_url_remote, remote_south_service_name, remote_north_service_name)\n        verify_statistics_map(fledge_url_remote, remote_south_asset_name, remote_north_service_name,\n                              skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url_remote, remote_south_service_name, remote_south_asset_name,\n                                      remote_south_plugin, remote_north_service_name, remote_north_plugin,\n                                      skip_verify_north_interface)\n\n        from conftest import restart_and_wait_for_fledge\n        restart_and_wait_for_fledge(fledge_url, wait_time)\n        restart_and_wait_for_fledge(fledge_url_remote, wait_time)\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after restart\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, 
pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_C_service_with_enable_disable(self, setup_local, setup_remote, read_data_from_pi_web_api,\n                                                      remote_ip,\n                                                      skip_verify_north_interface, fledge_url, wait_time, retries,\n                                                      pi_host, wait_fix,\n                                                      pi_admin, pi_passwd, pi_db):\n        \"\"\" Test C plugin as a North service by disabling and enabling it.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Verify on local machine\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url, local_south_asset_name)\n        verify_service_added(fledge_url, local_south_service_name, local_north_service_name)\n        verify_statistics_map(fledge_url, local_south_asset_name, local_north_service_name, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, local_south_service_name, 
local_south_asset_name, local_south_plugin,\n                                      local_north_service_name, local_north_plugin, skip_verify_north_interface)\n\n        # Verify on remote machine\n        verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url_remote, remote_south_asset_name)\n        verify_service_added(fledge_url_remote, remote_south_service_name, remote_north_service_name)\n        verify_statistics_map(fledge_url_remote, remote_south_asset_name, remote_north_service_name,\n                              skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url_remote, remote_south_service_name, remote_south_asset_name,\n                                      remote_south_plugin, remote_north_service_name, remote_north_plugin,\n                                      skip_verify_north_interface)\n\n        # Disabling local machine north service\n        data = {\"enabled\": \"false\"}\n        put_url = \"/fledge/schedule/{}\".format(north_schedule_id)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert False == resp['schedule']['enabled']\n        print(f\"Waiting for {wait_fix} seconds for delay caused by FOGL-8813 - tune pre-fetch buffers...\")\n        time.sleep(wait_fix)\n        # Enabling local machine north service\n        data = {\"enabled\": \"true\"}\n        put_url = \"/fledge/schedule/{}\".format(north_schedule_id)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert True == resp['schedule']['enabled']\n        print(f\"Waiting for {wait_fix} seconds for delay caused by FOGL-8813 - tune pre-fetch buffers...\")\n        time.sleep(wait_fix)\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # 
Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after re-enabling the service\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_C_service_with_delete_add(self, setup_local, setup_remote, read_data_from_pi_web_api, remote_ip,\n                                                  add_north, skip_verify_north_interface, fledge_url, wait_time,\n                                                  retries,\n                                                  pi_host, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test C plugin as a North service by deleting and adding it.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Delete and re-add the 
north service from local machine\n        delete_url = \"/fledge/service/{}\".format(local_north_service_name)\n        resp = utils.delete_request(fledge_url, urllib.parse.quote(delete_url))\n        assert \"Service {} deleted successfully.\".format(local_north_service_name) == resp['result']\n\n        local_north_config = {\"URL\": {\"value\": \"http://{}:6683/sensor-reading\".format(remote_ip)}}\n        add_north(fledge_url, local_north_plugin, None, installation_type='package',\n                  north_instance_name=\"{}\".format(local_north_service_name),\n                  is_task=False, config=local_north_config, enabled=True)\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after re-adding the service\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_C_service_with_reconfig(self, setup_local, setup_remote, read_data_from_pi_web_api, remote_ip,\n                                                skip_verify_north_interface, fledge_url, 
wait_time, retries, pi_host,\n                                                pi_admin, pi_passwd, pi_db):\n        \"\"\" Test C plugin as a North service by reconfiguring it.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Bad reconfiguration to check data is not sent\n        data = {\"URL\": \"http://100.1.2.3:6683/sensor-reading\"}\n        put_url = \"/fledge/category/{}\".format(local_north_service_name)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert \"http://100.1.2.3:6683/sensor-reading\" == resp[\"URL\"][\"value\"]\n\n        # Wait for all readings to be sent to remote machine from local machine\n        time.sleep(wait_time)\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n\n        # Verifies whether Read readings are increasing on local machine and not increasing on remote machine after\n        # reconfig\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert 
old_ping_result_remote['dataRead'] == new_ping_result_remote['dataRead']\n\n        # Verifies whether Sent readings are not increasing after reconfig\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] == new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] == new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_C_service_with_filter(self, setup_local, setup_remote, read_data_from_pi_web_api, remote_ip,\n                                              add_filter,\n                                              skip_verify_north_interface, fledge_url, wait_time, retries, pi_host,\n                                              pi_admin, pi_passwd, pi_db):\n        \"\"\" Test C plugin as a North service by adding filter on it.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            add_filter: Adds and configures a filter\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/filter\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        filter_cfg_scale = {\"enable\": \"true\"}\n        add_filter(\"scale\", None, filter_name, filter_cfg_scale, fledge_url, local_north_service_name,\n                   installation_type='package')\n        verify_filter_added(fledge_url, filter_name)\n\n        old_ping_result = verify_ping(fledge_url, 
skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n\n        # Verifies whether Read and Sent readings are increasing\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_C_service_with_filter_enable_disable(self, setup_local, setup_remote, read_data_from_pi_web_api,\n                                                             remote_ip, add_filter,\n                                                             skip_verify_north_interface, fledge_url, wait_time,\n                                                             retries, pi_host,\n                                                             pi_admin, pi_passwd, pi_db):\n        \"\"\" Test C plugin as a North service by enabling/disabling filter.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            add_filter: Adds and configures a filter\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag 
for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/filter\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Add filter in disabled mode\n        filter_cfg_scale = {\"enable\": \"false\"}\n        add_filter(\"scale\", None, filter_name, filter_cfg_scale, fledge_url, local_north_service_name,\n                   installation_type='package')\n        verify_filter_added(fledge_url, filter_name)\n\n        # Enable the filter\n        data = {\"enable\": \"true\"}\n        put_url = \"/fledge/category/{}_{}\".format(local_north_service_name, filter_name)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert \"true\" == resp['enable']['value']\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n\n        # Verifies whether Read and Sent readings are increasing\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, 
wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_C_service_with_filter_reconfig(self, setup_local, setup_remote, read_data_from_pi_web_api,\n                                                       remote_ip, add_filter,\n                                                       skip_verify_north_interface, fledge_url, wait_time, retries,\n                                                       pi_host,\n                                                       pi_admin, pi_passwd, pi_db):\n        \"\"\" Test C plugin as a North service by reconfiguring filter.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            add_filter: Adds and configures a filter\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/filter\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Add filter in enabled mode\n        filter_cfg_scale = {\"enable\": \"true\"}\n        add_filter(\"scale\", None, filter_name, filter_cfg_scale, fledge_url, local_north_service_name,\n                   installation_type='package')\n        verify_filter_added(fledge_url, filter_name)\n\n        # Reconfigure the filter scale factor\n        data = {\"factor\": \"50\"}\n        put_url = \"/fledge/category/{}_{}\".format(local_north_service_name, filter_name)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert \"50.0\" == resp['factor']['value']\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n 
       old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n\n        # Verifies whether Read and Sent readings are increasing\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_C_service_with_delete_add_filter(self, setup_local, setup_remote, read_data_from_pi_web_api,\n                                                         remote_ip, add_filter,\n                                                         skip_verify_north_interface, fledge_url, wait_time, retries,\n                                                         pi_host,\n                                                         pi_admin, pi_passwd, pi_db):\n        \"\"\" Test C plugin as a North service by deleting and re-adding filter on it.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            add_filter: Adds and configures a filter\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            
Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/filter\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Add filter in enabled mode\n        filter_cfg_scale = {\"enable\": \"true\"}\n        add_filter(\"scale\", None, filter_name, filter_cfg_scale, fledge_url, local_north_service_name,\n                   installation_type='package')\n        verify_filter_added(fledge_url, filter_name)\n\n        # Delete the filter\n        data = {\"pipeline\": []}\n        put_url = \"/fledge/filter/{}/pipeline?allow_duplicates=true&append_filter=false\" \\\n            .format(local_north_service_name)\n        utils.put_request(fledge_url, urllib.parse.quote(put_url, safe='?,=,&,/'), data)\n\n        delete_url = \"/fledge/filter/{}\".format(filter_name)\n        resp = utils.delete_request(fledge_url, urllib.parse.quote(delete_url))\n        assert \"Filter {} deleted successfully.\".format(filter_name) == resp['result']\n\n        # Re-add filter in enabled mode\n        filter_cfg_scale = {\"enable\": \"true\"}\n        add_filter(\"scale\", None, filter_name, filter_cfg_scale, fledge_url, local_north_service_name,\n                   installation_type='package')\n        verify_filter_added(fledge_url, filter_name)\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n\n        # Verifies whether Read and Sent readings are 
increasing\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_C_service_with_filter_reorder(self, setup_local, setup_remote, read_data_from_pi_web_api,\n                                                      remote_ip, add_filter,\n                                                      skip_verify_north_interface, fledge_url, wait_time, retries,\n                                                      pi_host,\n                                                      pi_admin, pi_passwd, pi_db):\n        \"\"\" Test C plugin as a North service by reordering filters on it.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            add_filter: Adds and configures a filter\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/filter\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Add first filter in enabled mode\n        filter_cfg_scale = {\"enable\": \"true\"}\n        add_filter(\"scale\", None, filter_name, filter_cfg_scale, fledge_url, local_north_service_name,\n   
                installation_type='package')\n        verify_filter_added(fledge_url, filter_name)\n\n        # Add second filter in enabled mode\n        filter2_name = \"MetadataFilter #1\"\n        filter2_cfg = {\"enable\": \"true\"}\n        add_filter(\"scale\", None, filter2_name, filter2_cfg, fledge_url, local_north_service_name,\n                   installation_type='package')\n        verify_filter_added(fledge_url, filter2_name)\n\n        # Verify the filter pipeline order\n        get_url = \"/fledge/filter/{}/pipeline\".format(local_north_service_name)\n        resp = utils.get_request(fledge_url, urllib.parse.quote(get_url))\n        assert filter_name == resp['result']['pipeline'][0]\n        assert filter2_name == resp['result']['pipeline'][1]\n\n        data = {\"pipeline\": [\"{}\".format(filter2_name), \"{}\".format(filter_name)]}\n        put_url = \"/fledge/filter/{}/pipeline?allow_duplicates=true&append_filter=false\" \\\n            .format(local_north_service_name)\n        utils.put_request(fledge_url, urllib.parse.quote(put_url, safe='?,=,&,/'), data)\n\n        # Verify the filter pipeline order after reordering\n        get_url = \"/fledge/filter/{}/pipeline\".format(local_north_service_name)\n        resp = utils.get_request(fledge_url, urllib.parse.quote(get_url))\n        assert filter2_name == resp['result']['pipeline'][0]\n        assert filter_name == resp['result']['pipeline'][1]\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n\n        # Verifies whether Read and Sent 
readings are increasing\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n"
  },
  {
    "path": "tests/system/python/pair/test_e2e_fledge_pair.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test end to end flow with:\n        2 fledges, One fledge use programatic south services, sinusoid, expression and playback to send data\n        via http north to send fledge\n        second fledge use PI Server (C) plugin to send data to PI\n\"\"\"\n\nimport subprocess\nimport http.client\nimport os\nimport json\nimport time\nimport pytest\nfrom collections import Counter\nimport utils\n\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nCSV_NAME = \"sample.csv\"\nCSV_HEADERS = \"ivalue\"\nCSV_DATA = \"10,20,21,40\"\n\nNORTH_TASK_NAME = \"NorthReadingsTo_PI\"\nremote_asset_name = \"fogpair_playback\"\n\n\nclass TestE2eFogPairPi:\n\n    def get_asset_list(self, fledge_url):\n        _connection = http.client.HTTPConnection(fledge_url)\n        _connection.request(\"GET\", '/fledge/asset')\n        r = _connection.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        actual_asset_codes = []\n        for itm in jdoc:\n            actual_asset_codes.append(itm[\"assetCode\"])\n        return actual_asset_codes\n\n    def get_ping_status(self, fledge_url):\n        _connection = http.client.HTTPConnection(fledge_url)\n        _connection.request(\"GET\", '/fledge/ping')\n        r = _connection.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return jdoc\n\n    def get_statistics_map(self, fledge_url):\n        _connection = http.client.HTTPConnection(fledge_url)\n        _connection.request(\"GET\", '/fledge/statistics')\n        r = _connection.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return utils.serialize_stats_map(jdoc)\n\n    
@pytest.fixture\n    def reset_and_start_fledge_remote(self, storage_plugin, remote_user, remote_ip, key_path, remote_fledge_path):\n        \"\"\"Fixture that kills fledge, resets the database and starts fledge again on a remote machine\n                storage_plugin: Fixture that defines the storage plugin to be used for tests\n                remote_user: User of remote machine\n                remote_ip: IP of remote machine\n                key_path: Path of key file used for authentication to remote machine\n                remote_fledge_path: Path where Fledge is cloned and built\n            \"\"\"\n        if remote_fledge_path is None:\n            remote_fledge_path = '/home/{}/fledge'.format(remote_user)\n        subprocess.run([\"ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i {} {}@{} \"\n                        \"'export FLEDGE_ROOT={};$FLEDGE_ROOT/scripts/fledge kill'\".format(\n            key_path, remote_user, remote_ip, remote_fledge_path)], shell=True, check=True)\n        storage_plugin_val = \"postgres\" if storage_plugin == 'postgres' else \"sqlite\"\n        # Check whether the storage.json file exists on the remote machine or not;\n        # if it does not exist then raise an assertion, otherwise update its storage plugin value.\n        ssh = subprocess.Popen([\"ssh\", \"-o\", \"UserKnownHostsFile=/dev/null\", \"-o\", \"StrictHostKeyChecking=no\",\n                                \"-i\", \"{}\".format(key_path), \"{}@{}\".format(remote_user, remote_ip),\n                                \"cat {}/data/etc/storage.json\".format(remote_fledge_path)], shell=False,\n                               stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        result = ssh.stdout.readlines()\n        assert [] != result, \"storage.json file not found on the remote machine {}\".format(remote_ip)\n        data = json.loads(result[0])\n        data['plugin']['value'] = storage_plugin_val\n        ssh = subprocess.Popen([\"ssh\", \"-o\", 
\"UserKnownHostsFile=/dev/null\", \"-o\", \"StrictHostKeyChecking=no\",\n                                \"-i\", \"{}\".format(key_path), \"{}@{}\".format(remote_user, remote_ip),\n                                \"echo '\" + json.dumps(data) + \"' > {}/data/etc/storage.json\".format(\n                                    remote_fledge_path) ], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        subprocess.run([\"ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i {} {}@{} \"\n                        \"'export FLEDGE_ROOT={};echo \\\"YES\\nYES\\\" | $FLEDGE_ROOT/scripts/fledge reset'\".format(\n            key_path, remote_user, remote_ip, remote_fledge_path)], shell=True, check=True)\n        # Authentication is optional\n        command = ('ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i {} {}@{} '\n                   '\"export FLEDGE_ROOT={} && sed -i \\\\\"s/\\'default\\': \\'mandatory\\'/\\'default\\': \\'optional\\'/g\\\\\" '\n                   '\\\\$FLEDGE_ROOT/python/fledge/services/core/server.py\"').format(\n            key_path, remote_user, remote_ip, remote_fledge_path)\n        subprocess.run(command, shell=True, check=True)\n        subprocess.run([\"ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i {} {}@{} \"\n                        \"'export FLEDGE_ROOT={};$FLEDGE_ROOT/scripts/fledge start'\".format(\n            key_path, remote_user, remote_ip, remote_fledge_path)], shell=True)\n        stat = subprocess.run([\"ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i {} {}@{} \"\n                               \"'export FLEDGE_ROOT={}; $FLEDGE_ROOT/scripts/fledge status'\".format(\n            key_path, remote_user, remote_ip, remote_fledge_path)], shell=True, stdout=subprocess.PIPE)\n        assert \"Fledge not running.\" not in stat.stdout.decode(\"utf-8\")\n\n    @pytest.fixture\n    def start_south_north_remote(self, reset_and_start_fledge_remote, use_pip_cache, 
remote_user,\n                                 key_path, remote_fledge_path, remote_ip, south_branch,\n                                 start_north_pi_server_c_web_api, pi_host, pi_port,\n                                 clear_pi_system_through_pi_web_api, pi_admin, pi_passwd, pi_db):\n        \"\"\"Fixture that starts south and north plugins on remote machine\n                reset_and_start_fledge_remote: Fixture that kills fledge, resets the database and starts fledge again on a remote machine\n                use_pip_cache: flag to tell whether to use python's pip cache for python dependencies\n                remote_user: User of remote machine\n                remote_fledge_path: Path where Fledge is cloned and built\n                remote_ip: IP of remote machine\n                south_branch: branch of fledge south plugin\n                start_north_pi_server_c_web_api: fixture that configures and starts pi plugin\n                pi_host: Host IP of PI machine\n                pi_port: Host port of PI machine\n                pi_admin: Username of PI machine\n                pi_passwd: Password of PI machine\n                pi_db: PI database\n            \"\"\"\n\n        # No need to give asset hierarchy in case of connector relay.\n        # There are two data points here. 1. ivalue 2. No data point (Asset name will be used).\n        dp_list = ['ivalue', '']\n        asset_dict = {}\n        asset_dict[remote_asset_name] = dp_list\n        # For connector relay we should not delete PI Point because\n        # when the PI point is created again (after deletion) the compressing attribute for it\n        # is always true. 
That means not all of the data is stored in the PI data archive.\n        # We lose a large proportion of the data because of the compressing attribute.\n        # This is problematic for the fixture that verifies the data stored in PI.\n        # clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n        #                                    [], asset_dict)\n\n        if remote_fledge_path is None:\n            remote_fledge_path = '/home/{}/fledge'.format(remote_user)\n        fledge_url = \"{}:8081\".format(remote_ip)\n        south_plugin = \"http\"\n        south_service = \"http_south\"\n\n        # Install http_south python plugin on remote machine\n        try:\n            subprocess.run([\n                \"scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i {} $FLEDGE_ROOT/tests/system/python/scripts/install_python_plugin {}@{}:/tmp/\".format(\n                    key_path, remote_user, remote_ip)], shell=True, check=True)\n            subprocess.run([\"ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i {} {}@{} 'export FLEDGE_ROOT={}; /tmp/install_python_plugin {} south {} {}'\".format(\n                    key_path, remote_user, remote_ip, remote_fledge_path, south_branch, south_plugin, use_pip_cache)],\n                shell=True, check=True)\n\n        except subprocess.CalledProcessError:\n            assert False, \"{} plugin installation failed\".format(south_plugin)\n        conn = http.client.HTTPConnection(fledge_url)\n\n        # Configure http_south python plugin on remote machine\n        data = {\"name\": \"{}\".format(south_service), \"type\": \"South\", \"plugin\": \"{}\".format(south_service),\n                \"enabled\": \"true\", \"config\": {\"assetNamePrefix\": {\"value\": \"\"}}}\n        conn.request(\"POST\", '/fledge/service', json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        retval = json.loads(r)\n        assert 
south_service == retval[\"name\"]\n\n        # Configure pi north plugin on remote machine\n        start_north_pi_server_c_web_api(fledge_url, pi_host, pi_port, pi_db=pi_db, pi_user=pi_admin, pi_pwd=pi_passwd, taskname=\"NorthReadingsToPI\")\n\n        yield self.start_south_north_remote\n\n    def configure_and_start_north_http(self, north_branch, fledge_url, remote_ip, task_name=\"NorthReadingsToHTTP\"):\n        \"\"\" Configure and Start north http task \"\"\"\n\n        try:\n            subprocess.run([\"$FLEDGE_ROOT/tests/system/python/scripts/install_c_plugin {} north {}\"\n                           .format(north_branch, \"http-c\")], shell=True, check=True)\n        except subprocess.CalledProcessError:\n            assert False, \"http north plugin installation failed\"\n\n        conn = http.client.HTTPConnection(fledge_url)\n        data = {\"name\": task_name,\n                \"plugin\": \"{}\".format(\"httpc\"),\n                \"type\": \"north\",\n                \"schedule_type\": 3,\n                \"schedule_day\": 0,\n                \"schedule_time\": 0,\n                \"schedule_repeat\": 30,\n                \"schedule_enabled\": \"false\",\n                \"config\": {\"URL\": {\"value\": \"http://{}:6683/sensor-reading\".format(remote_ip)}}\n                }\n\n        conn.request(\"POST\", '/fledge/scheduled/task', json.dumps(data))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        val = json.loads(r)\n        assert 2 == len(val)\n        assert task_name == val['name']\n\n    @pytest.fixture\n    def start_south_north_local(self, reset_and_start_fledge, add_south, enable_schedule, remove_directories,\n                                remove_data_file, south_branch, north_branch, fledge_url, remote_ip,\n                                add_filter, filter_branch):\n        \"\"\" This fixture clone a south and north repo and starts both south and north instance\n\n            
reset_and_start_fledge: Fixture that resets and starts fledge, no explicit invocation, called at start\n            add_south: Fixture that adds a south service with given configuration with enabled or disabled mode\n            enable_schedule: Fixture used to enable a schedule\n            remove_directories: Fixture that remove directories created during the tests\n            remove_data_file: Fixture that remove data file created during the tests\n            south_branch: south branch to pull\n            north_branch: north branch to pull\n            fledge_url: Fledge instance url for local setup (Instance 1)\n            remote_ip: IP of remote machine which will receive data from local instance\n            add_filter: Fixture that add and configures a filter\n            filter_branch: filter branch to pull\n        \"\"\"\n        # Add playback plugin\n        south_config_playbk = {\"assetName\": {\"value\": \"{}\".format(\"fogpair_playback\")},\n                               \"csvFilename\": {\"value\": \"{}\".format(CSV_NAME)},\n                               \"ingestMode\": {\"value\": \"batch\"}}\n\n        # Define the CSV data and create expected lists to be verified later\n        csv_file_path = os.path.join(os.path.expandvars('${FLEDGE_ROOT}'), 'data/{}'.format(CSV_NAME))\n        with open(csv_file_path, 'w') as f:\n            f.write(CSV_HEADERS)\n            for _items in CSV_DATA.split(\",\"):\n                f.write(\"\\n{}\".format(_items))\n\n        south_plugin_playbk = \"playback\"\n        add_south(south_plugin_playbk, south_branch, fledge_url, service_name=\"fogpair_playbk\",\n                  config=south_config_playbk, start_service=False)\n\n        # Add expression plugin\n        south_plugin_expression = \"Expression\"\n        south_config_expr = {\"expression\": {\"value\": \"cos(x)\"}, \"minimumX\": {\"value\": \"45\"},\n                             \"maximumX\": {\"value\": \"45\"}, \"stepX\": {\"value\": 
\"0\"}}\n\n        add_south(south_plugin_expression, south_branch, fledge_url, service_name=\"fogpair_expr\",\n                  config=south_config_expr, plugin_lang=\"C\", start_service=False)\n\n        # Add sinusoid plugin\n        south_plugin_sinusoid = \"sinusoid\"\n\n        add_south(south_plugin_sinusoid, south_branch, fledge_url, service_name=\"fogpair_sine\", start_service=False)\n\n        self.configure_and_start_north_http(north_branch, fledge_url, remote_ip)\n\n        # Add asset filter\n        # I/P All assets > O/P only fogpair_playback asset\n        filter_cfg_asset = {\"config\": {\"rules\": [{\"asset_name\": \"fogpair_playback\", \"action\": \"include\"}],\n                                       \"defaultAction\": \"exclude\"}, \"enable\": \"true\"}\n        add_filter(\"asset\", filter_branch, \"fasset\", filter_cfg_asset, fledge_url, \"NorthReadingsToHTTP\")\n\n        # Enable all south and north schedules\n        enable_schedule(fledge_url, \"fogpair_playbk\")\n        enable_schedule(fledge_url, \"fogpair_expr\")\n        enable_schedule(fledge_url, \"fogpair_sine\")\n        enable_schedule(fledge_url, \"NorthReadingsToHTTP\")\n\n        yield self.start_south_north_local\n\n        # Cleanup\n        remove_directories(\"/tmp/fledge-south-{}\".format(south_plugin_playbk))\n        remove_directories(\"/tmp/fledge-south-{}\".format(south_plugin_expression.lower()))\n        remove_directories(\"/tmp/fledge-south-{}\".format(south_plugin_sinusoid))\n        remove_directories(\"/tmp/fledge-north-{}\".format(\"http\"))\n        remove_data_file(csv_file_path)\n\n    def _verify_egress(self, read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                       expected_read_values):\n        \"\"\"\n        Verify that data is received in pi db by making calls to PI web api\n            read_data_from_pi_asset_server: Fixture that reads data drom pi\n            pi_host: pi host\n            
pi_admin: pi machine username\n            pi_passwd: pi machine password\n            pi_db: pi database\n            wait_time: wait time before making the next call to pi web api\n            retries: number of tries to make\n            expected_read_values: expected reading values\n        \"\"\"\n        retry_count = 0\n        data_from_pi = None\n        while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n            data_from_pi = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db, \"fogpair_playback\", [CSV_HEADERS])\n            retry_count += 1\n            time.sleep(wait_time * 2)\n\n        # Fail only when no data was read; a successful read on the final retry is still a pass\n        if data_from_pi is None or data_from_pi == []:\n            assert False, \"Failed to read data from PI\"\n\n        assert Counter(data_from_pi[CSV_HEADERS][-len(expected_read_values):]) == Counter(expected_read_values)\n\n    def test_end_to_end(self, start_south_north_remote, start_south_north_local,\n                        read_data_from_pi_asset_server, retries, pi_host, pi_admin, pi_passwd, pi_db,\n                        fledge_url, remote_ip, wait_time, skip_verify_north_interface):\n        \"\"\" Test that data is inserted in Fledge (local instance) using playback south plugin,\n            sinusoid south plugin and expression south plugin and sent to http north (filter only playback data),\n            Fledge (remote instance) receives this data via http south and sends it to PI\n            start_south_north_remote: Fixture that starts Fledge with http south service and pi north instance\n            start_south_north_local: Fixture that starts Fledge with south services and north instance with asset filter\n            read_data_from_pi_asset_server: Fixture that reads data from PI web api\n            retries: number of retries to make to fetch data from pi\n            pi_host: PI host IP\n            pi_admin: PI Machine user\n            pi_passwd: PI Machine user password\n            pi_db: PI database\n            
fledge_url: Local Fledge URL\n            remote_ip: IP address where 2 Fledge is running (Remote)\n            wait_time: time to wait in sec before making assertions\n            skip_verify_north_interface: Flag for assertion of data from Pi web API\n            Assertions:\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/asset/<asset_name> with applied data processing filter value\n                data received from PI is same as data sent\"\"\"\n\n        # Wait for data to be sent to Fledge instance 2 and then to PI\n        time.sleep(wait_time * 3)\n\n        # Fledge Instance 1 (Local) verification\n        expected_asset_list = [\"Expression\", \"fogpair_playback\", \"sinusoid\"]\n        actual_asset_list = self.get_asset_list(fledge_url)\n        assert set(expected_asset_list) == set(actual_asset_list)\n\n        ping_response = self.get_ping_status(fledge_url)\n        assert 4 <= ping_response[\"dataRead\"]\n        assert 4 == ping_response[\"dataSent\"]\n\n        actual_stats_map = self.get_statistics_map(fledge_url)\n        assert 1 < actual_stats_map['EXPRESSION']\n        assert 1 < actual_stats_map['SINUSOID']\n        assert 4 == actual_stats_map['FOGPAIR_PLAYBACK']\n        assert 4 == actual_stats_map['NorthReadingsToHTTP']\n        assert 6 <= actual_stats_map['READINGS']\n        assert 4 == actual_stats_map['Readings Sent']\n\n        # Fledge Instance 2 (Remote) verification\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n        conn_remote = http.client.HTTPConnection(fledge_url_remote)\n\n        expected_list = [\"fogpair_playback\"]\n        actual_asset_list = self.get_asset_list(fledge_url_remote)\n        assert set(expected_list) == set(actual_asset_list)\n\n        remote_ping_response = self.get_ping_status(fledge_url_remote)\n        assert 4 == remote_ping_response[\"dataRead\"]\n\n        actual_stats_map = self.get_statistics_map(fledge_url_remote)\n        assert 
'EXPRESSION' not in actual_stats_map.keys()\n        assert 'SINUSOID' not in actual_stats_map.keys()\n        assert 4 == actual_stats_map['FOGPAIR_PLAYBACK']\n        assert 4 == actual_stats_map['READINGS']\n\n        if not skip_verify_north_interface:\n            assert 4 == remote_ping_response[\"dataSent\"]\n            assert 4 == actual_stats_map['NorthReadingsToPI']\n            assert 4 == actual_stats_map['Readings Sent']\n\n        conn_remote.request(\"GET\", '/fledge/asset/{}'.format(\"fogpair_playback\"))\n        r = conn_remote.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        tmp_list = CSV_DATA.split(',')\n        tmp_list.reverse()\n        expected_read_values = [int(x) for x in tmp_list]\n        assert len(expected_read_values) == len(jdoc)\n\n        actual_read_values = []\n        for itm in jdoc:\n            actual_read_values.append(itm['reading'][CSV_HEADERS])\n        assert expected_read_values == actual_read_values\n\n        if not skip_verify_north_interface:\n            self._verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                                expected_read_values)\n"
  },
  {
    "path": "tests/system/python/pair/test_python_north_service_pair.py",
    "content": "# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"\n       A pair system test to verify north service with north and south python plugins\n\"\"\"\n\n__author__ = \"Yash Tatkondawar\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems, Inc.\"\n\nimport http.client\nimport json\nimport subprocess\nimport time\nimport urllib.parse\nfrom pathlib import Path\n\nimport pytest\nimport utils\n\n# Local machine\nlocal_south_plugin = \"sinusoid\"\nlocal_south_asset_name = \"python-north-service-pair\"\nlocal_south_service_name = \"Sine #1\"\nlocal_north_plugin = \"http-north\"\nlocal_north_service_name = \"HN #1\"\n\n# Remote machine\nremote_south_plugin = \"http_south\"\nremote_south_service_name = \"HS #1\"\nremote_south_asset_name = \"python-north-service-pair\"\nremote_north_plugin = \"OMF\"\nremote_north_service_name = \"NorthReadingsToPI_WebAPI\"\n\nnorth_schedule_id = \"\"\nfilter_name = \"ScaleFilter #1\"\n\n# This  gives the path of directory where fledge is cloned. 
test_file < packages < python < system < tests < ROOT\nPROJECT_ROOT = Path(__file__).parent.parent.parent.parent.parent\nSCRIPTS_DIR_ROOT = \"{}/tests/system/python/scripts/package/\".format(PROJECT_ROOT)\n# SSH command to make connection with the remote machine\nssh_cmd = \"ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i\"\nAF_HIERARCHY_LEVEL = \"pythonnorthservicepair/pythonnorthservicepairlvl2/pythonnorthservicepairlvl3\"\n\n\n@pytest.fixture\ndef reset_fledge_local(wait_time):\n    try:\n        subprocess.run([\"cd {} && ./reset\"\n                       .format(SCRIPTS_DIR_ROOT)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset package script failed!\"\n\n\n@pytest.fixture\ndef setup_local(reset_fledge_local, add_south, add_north, fledge_url, remote_ip):\n    local_south_config = {\"assetName\": {\"value\": local_south_asset_name}}\n    add_south(local_south_plugin, None, fledge_url, config=local_south_config,\n              service_name=\"{}\".format(local_south_service_name),\n              installation_type='package')\n    # Change name of variables such as service_name, plugin_type\n    global north_schedule_id\n    local_north_config = {\"url\": {\"value\": \"http://{}:6683/sensor-reading\".format(remote_ip)}}\n    response = add_north(fledge_url, local_north_plugin, None, installation_type='package',\n                         north_instance_name=\"{}\".format(local_north_service_name),\n                         plugin_discovery_name=\"http_north\", is_task=False,\n                         config=local_north_config, enabled=True)\n    north_schedule_id = response[\"id\"]\n\n    yield setup_local\n\n\n@pytest.fixture\ndef reset_fledge_remote(remote_user, remote_ip, key_path, remote_fledge_path):\n    \"\"\"Fixture that kills fledge, reset database and starts fledge again on a remote machine\n            remote_user: User of remote machine\n            remote_ip: IP of remote machine\n    
        key_path: Path of key file used for authentication to remote machine\n            remote_fledge_path: Path where Fledge is cloned and built\n        \"\"\"\n    if remote_fledge_path is None:\n        remote_fledge_path = '/home/{}/fledge'.format(remote_user)\n    # Reset fledge on remote machine\n    subprocess.run([\n        \"{} {} {}@{} 'cd {}/tests/system/python/scripts/package/ && ./reset'\".format(\n            ssh_cmd, key_path, remote_user,\n            remote_ip, remote_fledge_path)], shell=True, check=True)\n\n\n@pytest.fixture\ndef clean_install_fledge_packages_remote(remote_user, remote_ip, key_path, remote_fledge_path, package_build_version):\n    \"\"\"Fixture that removes all installed fledge packages and installs them again on a remote machine\n            remote_user: User of remote machine\n            remote_ip: IP of remote machine\n            key_path: Path of key file used for authentication to remote machine\n            remote_fledge_path: Path where Fledge is cloned and built\n            package_build_version: Version of fledge packages to install\n        \"\"\"\n    if remote_fledge_path is None:\n        remote_fledge_path = '/home/{}/fledge'.format(remote_user)\n    # Remove all already installed packages from remote machine\n    subprocess.run([\n        \"{} {} {}@{} 'export FLEDGE_ROOT={};cd \"\n        \"$FLEDGE_ROOT/tests/system/python/scripts/package/ && ./remove'\".format(\n            ssh_cmd, key_path, remote_user,\n            remote_ip, remote_fledge_path)], shell=True, check=True)\n    # Installs packages on remote machine based on packages version passed\n    subprocess.run([\n        \"{} {} {}@{} 'export FLEDGE_ROOT={};cd \"\n        \"$FLEDGE_ROOT/tests/system/python/scripts/package/ && ./setup {}'\".format(\n            ssh_cmd, key_path, remote_user,\n            remote_ip, remote_fledge_path, package_build_version)], shell=True, check=True)\n    # Installs http_south python plugin on remote machine\n    try:\n        subprocess.run([\n            \"{} {} {}@{} 'sudo apt install -y 
fledge-south-http-south'\".format(\n                ssh_cmd, key_path, remote_user,\n                remote_ip)], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"{} plugin installation failed\".format(remote_south_plugin)\n\n\n@pytest.fixture\ndef setup_remote(reset_fledge_remote, remote_user, remote_ip, start_north_omf_as_a_service,\n                 pi_host, pi_port, pi_admin, pi_passwd,\n                 clear_pi_system_through_pi_web_api, pi_db):\n    \"\"\"Fixture that setups remote machine\n            reset_fledge_remote: Fixture that kills fledge, reset database and starts fledge again on a remote\n                                           machine.\n            remote_user: User of remote machine\n            remote_ip: IP of remote machine\n            pi_host: Host IP of PI machine\n            pi_port: Host port of PI machine\n            pi_admin: Username of PI machine\n            pi_passwd: Password of PI machine\n        \"\"\"\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    dp_list = ['sinusoid', 'name', '']\n    # There are three data points here. 1. sinusoid  2. name\n    # 3. 
no data point (Asset name will be used in this case.)\n    asset_dict = {}\n    asset_dict[remote_south_asset_name] = dp_list\n    clear_pi_system_through_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db,\n                                       af_hierarchy_level_list, asset_dict)\n\n    fledge_url = \"{}:8081\".format(remote_ip)\n\n    # Configure http_south python plugin on remote machine\n    conn = http.client.HTTPConnection(fledge_url)\n    data = {\"name\": \"{}\".format(remote_south_service_name), \"type\": \"South\", \"plugin\": \"{}\".format(remote_south_plugin),\n            \"enabled\": \"true\", \"config\": {\"assetNamePrefix\": {\"value\": \"\"}}}\n    conn.request(\"POST\", '/fledge/service', json.dumps(data))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    retval = json.loads(r)\n    assert remote_south_service_name == retval[\"name\"]\n\n    # Configure pi north plugin on remote machine\n    global remote_north_schedule_id\n    response = start_north_omf_as_a_service(fledge_url, pi_host, pi_port, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                            start=True, default_af_location=AF_HIERARCHY_LEVEL)\n    remote_north_schedule_id = response[\"id\"]\n\n    yield setup_remote\n\n\ndef verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries):\n    get_url = \"/fledge/ping\"\n    ping_result = utils.get_request(fledge_url, get_url)\n    assert \"dataRead\" in ping_result\n    assert \"dataSent\" in ping_result\n    assert 0 < ping_result['dataRead'], \"South data NOT seen in ping header\"\n\n    retry_count = 1\n    sent = 0\n    if not skip_verify_north_interface:\n        while retries > retry_count:\n            sent = ping_result[\"dataSent\"]\n            if sent >= 1:\n                break\n            else:\n                time.sleep(wait_time)\n\n            retry_count += 1\n            ping_result = utils.get_request(fledge_url, get_url)\n\n        assert 1 <= sent, 
\"Failed to send data\"\n    return ping_result\n\n\ndef verify_statistics_map(fledge_url, south_asset_name, north_service_name, skip_verify_north_interface):\n    get_url = \"/fledge/statistics\"\n    jdoc = utils.get_request(fledge_url, get_url)\n    actual_stats_map = utils.serialize_stats_map(jdoc)\n    assert 1 <= actual_stats_map[south_asset_name.upper()]\n    assert 1 <= actual_stats_map['READINGS']\n    if not skip_verify_north_interface:\n        assert 1 <= actual_stats_map['Readings Sent']\n        assert 1 <= actual_stats_map[north_service_name]\n\n\ndef verify_service_added(fledge_url, south_service_name, north_service_name):\n    get_url = \"/fledge/south\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"services\"])\n    assert south_service_name in [s[\"name\"] for s in result[\"services\"]]\n\n    get_url = \"/fledge/north\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result)\n    assert north_service_name in [s[\"name\"] for s in result]\n\n    get_url = \"/fledge/service\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"services\"])\n    assert south_service_name in [s[\"name\"] for s in result[\"services\"]]\n    assert north_service_name in [s[\"name\"] for s in result[\"services\"]]\n\n\ndef verify_filter_added(fledge_url, filter_name):\n    get_url = \"/fledge/filter\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result[\"filters\"])\n    assert filter_name in [s[\"name\"] for s in result[\"filters\"]]\n    return result\n\n\ndef verify_asset(fledge_url, south_asset_name):\n    get_url = \"/fledge/asset\"\n    result = utils.get_request(fledge_url, get_url)\n    assert len(result), \"No asset found\"\n    assert south_asset_name in [s[\"assetCode\"] for s in result]\n\n\ndef verify_asset_tracking_details(fledge_url, south_service_name, south_asset_name, south_plugin, north_service_name,\n                                  north_plugin, 
skip_verify_north_interface):\n    tracking_details = utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n    assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n    tracked_item = tracking_details[\"track\"][0]\n    assert south_service_name == tracked_item[\"service\"]\n    assert south_asset_name == tracked_item[\"asset\"]\n    assert south_plugin == tracked_item[\"plugin\"]\n\n    if not skip_verify_north_interface:\n        egress_tracking_details = utils.get_asset_tracking_details(fledge_url, \"Egress\")\n        assert len(egress_tracking_details[\"track\"]), \"Failed to track Egress event\"\n        tracked_item = egress_tracking_details[\"track\"][0]\n        assert north_service_name == tracked_item[\"service\"]\n        assert south_asset_name == tracked_item[\"asset\"]\n        assert north_plugin == tracked_item[\"plugin\"]\n\n\ndef _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries, asset_name):\n    retry_count = 0\n    data_from_pi = None\n\n    af_hierarchy_level_list = AF_HIERARCHY_LEVEL.split(\"/\")\n    type_id = 1\n    recorded_datapoint = \"{}\".format(asset_name)\n    # Name of asset in the PI server\n    pi_asset_name = \"{}\".format(asset_name)\n\n    while (data_from_pi is None or data_from_pi == []) and retry_count < retries:\n        data_from_pi = read_data_from_pi_web_api(pi_host, pi_admin, pi_passwd, pi_db, af_hierarchy_level_list,\n                                                 pi_asset_name, '')\n        retry_count += 1\n        time.sleep(wait_time * 2)\n\n    # Fail only when no data was read; a successful read on the final retry is still a pass\n    if data_from_pi is None or data_from_pi == []:\n        assert False, \"Failed to read data from PI\"\n\n\nclass TestPythonNorthService:\n    def test_north_python_service_with_restart(self, clean_setup_fledge_packages, clean_install_fledge_packages_remote,\n                                               setup_local, setup_remote, skip_verify_north_interface, fledge_url,\n                       
                        wait_time, retries, remote_ip, read_data_from_pi_web_api, pi_host,\n                                               pi_admin, pi_passwd, pi_db):\n        \"\"\" Test python plugin as a North service before and after restarting fledge.\n            clean_setup_fledge_packages: Fixture to remove and install latest fledge packages\n            clean_install_fledge_packages_remote: Fixture to remove and install latest fledge packages on remote machine\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Verify on local machine\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url, local_south_asset_name)\n        verify_service_added(fledge_url, local_south_service_name, local_north_service_name)\n        verify_statistics_map(fledge_url, local_south_asset_name, local_north_service_name, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, local_south_service_name, local_south_asset_name, local_south_plugin,\n                                      local_north_service_name, local_north_plugin, skip_verify_north_interface)\n\n        # 
Verify on remote machine\n        verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url_remote, remote_south_asset_name)\n        verify_service_added(fledge_url_remote, remote_south_service_name, remote_north_service_name)\n        verify_statistics_map(fledge_url_remote, remote_south_asset_name, remote_north_service_name,\n                              skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url_remote, remote_south_service_name, remote_south_asset_name,\n                                      remote_south_plugin, remote_north_service_name, remote_north_plugin,\n                                      skip_verify_north_interface)\n\n        from conftest import restart_and_wait_for_fledge\n        restart_and_wait_for_fledge(fledge_url, wait_time)\n        restart_and_wait_for_fledge(fledge_url_remote, wait_time)\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after restart\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, 
retries,\n                           remote_south_asset_name)\n\n    def test_north_python_service_with_enable_disable(self, setup_local, setup_remote, read_data_from_pi_web_api,\n                                                      remote_ip,\n                                                      skip_verify_north_interface, fledge_url, wait_time, retries,\n                                                      pi_host, wait_fix,\n                                                      pi_admin, pi_passwd, pi_db):\n        \"\"\" Test python plugin as a North service by disabling and enabling it.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/south\n                on endpoint GET /fledge/north\n                on endpoint GET /fledge/service\n                on endpoint GET /fledge/statistics\n                on endpoint GET /fledge/asset\n                on endpoint GET /fledge/track\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Verify on local machine\n        verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url, local_south_asset_name)\n        verify_service_added(fledge_url, local_south_service_name, local_north_service_name)\n        verify_statistics_map(fledge_url, local_south_asset_name, local_north_service_name, skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url, local_south_service_name, local_south_asset_name, 
local_south_plugin,\n                                      local_north_service_name, local_north_plugin, skip_verify_north_interface)\n\n        # Verify on remote machine\n        verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        verify_asset(fledge_url_remote, remote_south_asset_name)\n        verify_service_added(fledge_url_remote, remote_south_service_name, remote_north_service_name)\n        verify_statistics_map(fledge_url_remote, remote_south_asset_name, remote_north_service_name,\n                              skip_verify_north_interface)\n        verify_asset_tracking_details(fledge_url_remote, remote_south_service_name, remote_south_asset_name,\n                                      remote_south_plugin, remote_north_service_name, remote_north_plugin,\n                                      skip_verify_north_interface)\n\n        # Disabling local machine north service\n        data = {\"enabled\": \"false\"}\n        put_url = \"/fledge/schedule/{}\".format(north_schedule_id)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert False == resp['schedule']['enabled']\n        print(f\"Waiting for {wait_fix} seconds for delay caused by FOGL-8813 - tune pre-fetch buffers...\")\n        time.sleep(wait_fix)\n        # Enabling local machine north service\n        data = {\"enabled\": \"true\"}\n        put_url = \"/fledge/schedule/{}\".format(north_schedule_id)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert True == resp['schedule']['enabled']\n        print(f\"Waiting for {wait_fix} seconds for delay caused by FOGL-8813 - tune pre-fetch buffers...\")\n        time.sleep(wait_fix)\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent 
readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after the service is re-enabled\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_python_service_with_delete_add(self, setup_local, setup_remote, read_data_from_pi_web_api, remote_ip,\n                                                  add_north, skip_verify_north_interface, fledge_url, wait_time,\n                                                  retries,\n                                                  pi_host, pi_admin, pi_passwd, pi_db):\n        \"\"\" Test python plugin as a North service by deleting and adding it.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Delete and re-add the north service 
from local machine\n        delete_url = \"/fledge/service/{}\".format(local_north_service_name)\n        resp = utils.delete_request(fledge_url, urllib.parse.quote(delete_url))\n        assert \"Service {} deleted successfully.\".format(local_north_service_name) == resp['result']\n\n        local_north_config = {\"url\": {\"value\": \"http://{}:6683/sensor-reading\".format(remote_ip)}}\n        add_north(fledge_url, local_north_plugin, None, installation_type='package',\n                  north_instance_name=\"{}\".format(local_north_service_name),\n                  plugin_discovery_name=\"http_north\", is_task=False,\n                  config=local_north_config, enabled=True)\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Verifies whether Read and Sent readings are increasing after re-adding the service\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_python_service_with_reconfig(self, setup_local, setup_remote, read_data_from_pi_web_api, remote_ip,\n                                                
skip_verify_north_interface, fledge_url, wait_time, retries, pi_host,\n                                                pi_admin, pi_passwd, pi_db):\n        \"\"\" Test python plugin as a North service by reconfiguring it.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Bad reconfiguration to check data is not sent\n        data = {\"url\": \"http://100.1.2.3:6683/sensor-reading\"}\n        put_url = \"/fledge/category/{}\".format(local_north_service_name)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert \"http://100.1.2.3:6683/sensor-reading\" == resp[\"url\"][\"value\"]\n\n        # Wait for all readings to be sent to remote machine from local machine\n        time.sleep(wait_time)\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n\n        # Verifies whether Read readings are increasing on local machine and not increasing on remote machine after\n        # reconfig\n        assert old_ping_result['dataRead'] < 
new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] == new_ping_result_remote['dataRead']\n\n        # Verifies whether Sent readings are not increasing after reconfig\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] == new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] == new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_python_service_with_filter(self, setup_local, setup_remote, read_data_from_pi_web_api, remote_ip,\n                                              add_filter,\n                                              skip_verify_north_interface, fledge_url, wait_time, retries, pi_host,\n                                              pi_admin, pi_passwd, pi_db):\n        \"\"\" Test python plugin as a North service by adding filter on it.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            add_filter: Adds and configures a filter\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/filter\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        filter_cfg_scale = {\"enable\": \"true\"}\n        add_filter(\"scale\", None, filter_name, filter_cfg_scale, fledge_url, local_north_service_name,\n                   installation_type='package')\n        verify_filter_added(fledge_url, filter_name)\n\n        
old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n\n        # Verifies whether Read and Sent readings are increasing\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_python_service_with_filter_enable_disable(self, setup_local, setup_remote, read_data_from_pi_web_api,\n                                                             remote_ip, add_filter,\n                                                             skip_verify_north_interface, fledge_url, wait_time,\n                                                             retries, pi_host,\n                                                             pi_admin, pi_passwd, pi_db):\n        \"\"\" Test python plugin as a North service by enabling/disabling filter.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            add_filter: Adds and configures a filter\n            read_data_from_pi_web_api: Fixture to read data from PI web 
API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/filter\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Add filter in disabled mode\n        filter_cfg_scale = {\"enable\": \"false\"}\n        add_filter(\"scale\", None, filter_name, filter_cfg_scale, fledge_url, local_north_service_name,\n                   installation_type='package')\n        verify_filter_added(fledge_url, filter_name)\n\n        # Enable the filter\n        data = {\"enable\": \"true\"}\n        put_url = \"/fledge/category/{}_{}\".format(local_north_service_name, filter_name)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert \"true\" == resp['enable']['value']\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n\n        # Verifies whether Read and Sent readings are increasing\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            
_verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_python_service_with_filter_reconfig(self, setup_local, setup_remote, read_data_from_pi_web_api,\n                                                       remote_ip, add_filter,\n                                                       skip_verify_north_interface, fledge_url, wait_time, retries,\n                                                       pi_host,\n                                                       pi_admin, pi_passwd, pi_db):\n        \"\"\" Test python plugin as a North service by reconfiguring filter.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            add_filter: Adds and configures a filter\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/filter\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Add filter in enabled mode\n        filter_cfg_scale = {\"enable\": \"true\"}\n        add_filter(\"scale\", None, filter_name, filter_cfg_scale, fledge_url, local_north_service_name,\n                   installation_type='package')\n        verify_filter_added(fledge_url, filter_name)\n\n        # Reconfigure the filter scale factor\n        data = {\"factor\": \"50\"}\n        put_url = \"/fledge/category/{}_{}\".format(local_north_service_name, filter_name)\n        resp = utils.put_request(fledge_url, urllib.parse.quote(put_url), data)\n        assert \"50.0\" == resp['factor']['value']\n\n        
old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n\n        # Verifies whether Read and Sent readings are increasing\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_python_service_with_delete_add_filter(self, setup_local, setup_remote, read_data_from_pi_web_api,\n                                                         remote_ip, add_filter,\n                                                         skip_verify_north_interface, fledge_url, wait_time, retries,\n                                                         pi_host,\n                                                         pi_admin, pi_passwd, pi_db):\n        \"\"\" Test python plugin as a North service by deleting and re-adding filter on it.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            add_filter: Adds and configures a filter\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n       
     skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/filter\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Add filter in enabled mode\n        filter_cfg_scale = {\"enable\": \"true\"}\n        add_filter(\"scale\", None, filter_name, filter_cfg_scale, fledge_url, local_north_service_name,\n                   installation_type='package')\n        verify_filter_added(fledge_url, filter_name)\n\n        # Delete the filter\n        data = {\"pipeline\": []}\n        put_url = \"/fledge/filter/{}/pipeline?allow_duplicates=true&append_filter=false\" \\\n            .format(local_north_service_name)\n        utils.put_request(fledge_url, urllib.parse.quote(put_url, safe='?,=,&,/'), data)\n\n        delete_url = \"/fledge/filter/{}\".format(filter_name)\n        resp = utils.delete_request(fledge_url, urllib.parse.quote(delete_url))\n        assert \"Filter {} deleted successfully.\".format(filter_name) == resp['result']\n\n        # Re-add filter in enabled mode\n        filter_cfg_scale = {\"enable\": \"true\"}\n        add_filter(\"scale\", None, filter_name, filter_cfg_scale, fledge_url, local_north_service_name,\n                   installation_type='package')\n        verify_filter_added(fledge_url, filter_name)\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = verify_ping(fledge_url_remote, 
skip_verify_north_interface, wait_time, retries)\n\n        # Verifies whether Read and Sent readings are increasing\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n\n    def test_north_python_service_with_filter_reorder(self, setup_local, setup_remote, read_data_from_pi_web_api,\n                                                      remote_ip, add_filter,\n                                                      skip_verify_north_interface, fledge_url, wait_time, retries,\n                                                      pi_host,\n                                                      pi_admin, pi_passwd, pi_db):\n        \"\"\" Test python plugin as a North service by reordering filters on it.\n            setup_local: Fixture to reset, add and configure plugins on local machine\n            setup_remote: Fixture to reset, add and configure plugins on remote machine\n            add_filter: Adds and configures a filter\n            read_data_from_pi_web_api: Fixture to read data from PI web API\n            skip_verify_north_interface: Flag for assertion of data using PI web API\n            Assertions:\n                on endpoint GET /fledge/ping\n                on endpoint GET /fledge/filter\"\"\"\n\n        # Wait until south and north services are created and some data is loaded\n        time.sleep(wait_time)\n\n        fledge_url_remote = \"{}:8081\".format(remote_ip)\n\n        # Add first filter in enabled mode\n        filter_cfg_scale = {\"enable\": 
\"true\"}\n        add_filter(\"scale\", None, filter_name, filter_cfg_scale, fledge_url, local_north_service_name,\n                   installation_type='package')\n        verify_filter_added(fledge_url, filter_name)\n\n        # Add second filter in enabled mode\n        filter2_name = \"MetadataFilter #1\"\n        filter2_cfg = {\"enable\": \"true\"}\n        add_filter(\"scale\", None, filter2_name, filter2_cfg, fledge_url, local_north_service_name,\n                   installation_type='package')\n        verify_filter_added(fledge_url, filter2_name)\n\n        # Verify the filter pipeline order\n        get_url = \"/fledge/filter/{}/pipeline\".format(local_north_service_name)\n        resp = utils.get_request(fledge_url, urllib.parse.quote(get_url))\n        assert filter_name == resp['result']['pipeline'][0]\n        assert filter2_name == resp['result']['pipeline'][1]\n\n        data = {\"pipeline\": [\"{}\".format(filter2_name), \"{}\".format(filter_name)]}\n        put_url = \"/fledge/filter/{}/pipeline?allow_duplicates=true&append_filter=false\" \\\n            .format(local_north_service_name)\n        utils.put_request(fledge_url, urllib.parse.quote(put_url, safe='?,=,&,/'), data)\n\n        # Verify the filter pipeline order after reordering\n        get_url = \"/fledge/filter/{}/pipeline\".format(local_north_service_name)\n        resp = utils.get_request(fledge_url, urllib.parse.quote(get_url))\n        assert filter2_name == resp['result']['pipeline'][0]\n        assert filter_name == resp['result']['pipeline'][1]\n\n        old_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        old_ping_result_remote = verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n        # Wait for read and sent readings to increase\n        time.sleep(wait_time)\n        new_ping_result = verify_ping(fledge_url, skip_verify_north_interface, wait_time, retries)\n        new_ping_result_remote = 
verify_ping(fledge_url_remote, skip_verify_north_interface, wait_time, retries)\n\n        # Verifies whether Read and Sent readings are increasing\n        assert old_ping_result['dataRead'] < new_ping_result['dataRead']\n        assert old_ping_result_remote['dataRead'] < new_ping_result_remote['dataRead']\n\n        if not skip_verify_north_interface:\n            assert old_ping_result['dataSent'] < new_ping_result['dataSent']\n            assert old_ping_result_remote['dataSent'] < new_ping_result_remote['dataSent']\n            _verify_egress(read_data_from_pi_web_api, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries,\n                           remote_south_asset_name)\n"
  },
  {
    "path": "tests/system/python/plugin_and_service.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Install plugin as per type, plugin name, language & add/start south service\"\"\"\n\nimport subprocess\nimport http.client\nimport json\n\n\ndef install(_type, plugin, branch=\"develop\", plugin_lang=\"python\", use_pip_cache=True):\n    if plugin_lang == \"python\":\n        path = \"$FLEDGE_ROOT/tests/system/python/scripts/install_python_plugin {} {} {} {}\".format(\n            branch, _type, plugin, use_pip_cache)\n    else:\n        path = \"$FLEDGE_ROOT/tests/system/python/scripts/install_c_plugin {} {} {}\".format(\n            branch, _type, plugin)\n    try:\n        subprocess.run([path], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"{} plugin installation failed\".format(plugin)\n\n    # Cleanup /tmp repos\n    if _type in (\"notify\", \"rule\"):\n        subprocess.run([\"rm -rf /tmp/fledge-service-notification\"], shell=True, check=True)\n    subprocess.run([\"rm -rf /tmp/fledge-{}-{}\".format(_type, plugin)], shell=True, check=True)\n\n\ndef reset():\n    try:\n        subprocess.run([\"$FLEDGE_ROOT/tests/system/python/scripts/reset_plugins\"], shell=True, check=True)\n    except subprocess.CalledProcessError:\n        assert False, \"reset plugin script failed\"\n\n\ndef add_south_service(south_plugin, fledge_url, service_name, config=None, start_service=True):\n    \"\"\"Add south plugin and start the service by default\"\"\"\n    _config = config if config is not None else {}\n    _enabled = \"true\" if start_service else \"false\"\n    data = {\"name\": \"{}\".format(service_name), \"type\": \"South\", \"plugin\": \"{}\".format(south_plugin),\n            \"enabled\": _enabled, \"config\": _config}\n\n    # Create south service\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"POST\", '/fledge/service', json.dumps(data))\n    r = conn.getresponse()\n    assert 
200 == r.status\n    r = r.read().decode()\n    return json.loads(r)\n\n\ndef add_north_service(north_plugin, fledge_url, service_name, config=None, start_service=True):\n    \"\"\"Add the north service and set it to start automatically\"\"\"\n    _config = config if config is not None else {}\n    _enabled = \"true\" if start_service else \"false\"\n    data = {\"name\": \"{}\".format(service_name), \"type\": \"North\", \"plugin\": \"{}\".format(north_plugin),\n            \"enabled\": _enabled, \"config\": _config}\n\n    # Create north service\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"POST\", '/fledge/service', json.dumps(data))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    return json.loads(r)\n\n\ndef create_filter(fledge_url, filter_name, plugin_name, config=None):\n    \"\"\"Create standalone filter\"\"\"\n    data = {\"name\": filter_name, \"plugin\": plugin_name}\n    if config is not None:\n        data[\"filter_config\"] = config\n    conn = http.client.HTTPConnection(fledge_url)\n    conn.request(\"POST\", '/fledge/filter', json.dumps(data))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    return json.loads(r)"
  },
  {
    "path": "tests/system/python/plugins/dummy/iprpc/filter/numpy_filter/__init__.py",
    "content": ""
  },
  {
    "path": "tests/system/python/plugins/dummy/iprpc/filter/numpy_filter/numpy_filter.py",
"content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" A dummy filter plugin to test the iprpc module in Fledge. \"\"\"\n\nimport copy\nimport datetime\nimport logging\nimport math\n\nimport numpy as np\n\nimport filter_ingest\nfrom fledge.common import logger\nfrom fledge.plugins.common import utils\n\n__author__ = \"Deepanshu Yadav\"\n__copyright__ = \"Copyright (c) 2020 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_DEFAULT_CONFIG = {\n    'plugin': {  # mandatory\n        'description': 'Filter plugin to calculate rms value from south service using numpy',\n        'type': 'string',\n        'default': 'numpy_filter',\n        'readonly': 'true'\n    },\n    'enable': {  # mandatory\n        'description': 'Enable / disable this plugin',\n        'type': 'boolean',\n        'default': 'False',\n        'displayName': 'Enable this plugin',\n        'order': '1'\n    },\n    'assetName': {\n        'description': 'Asset name from which rms values are to be calculated.',\n        'type': 'string',\n        'default': 'numpy_ingest',\n        'displayName': 'Asset Name',\n        'order': '2'\n    },\n    'dataPointName': {\n        'description': 'The data point name from the asset',\n        'type': 'string',\n        'default': 'random',\n        'displayName': 'Data Point Name',\n        'order': '3'\n    },\n    'numSamples': {\n        'description': 'Number of samples to collect before applying the rms function',\n        'type': 'string',\n        'default': '100',\n        'displayName': 'Num Of Samples',\n        'order': '4'\n    }\n}\n\n_LOGGER = logger.setup(__name__, level=logging.INFO)\nshutdown_in_progress = False\n\n\ndef rms(a):\n    \"\"\"\n    Root mean square of the numpy array.\n    Args:\n        a: Input numpy array\n    Returns:\n        root mean square of the numpy array\n    \"\"\"\n    if True:\n        s = 0\n        for ele in a:\n            s 
+= ele*ele\n        return math.sqrt(s/len(a))\n    # cannot perform simple numpy operations like np.sqrt(np.mean(np.square(a))) in this filter\n\n\ndef plugin_info():\n    \"\"\" Returns information about the plugin.\n    Args:\n    Returns:\n        dict: plugin information\n    Raises:\n    \"\"\"\n    _LOGGER.info(\"plugin_info called\")\n    return {\n        'name': 'numpy_filter',\n        'version': '1.8.2',\n        'mode': 'none',\n        'type': 'filter',\n        'interface': '1.0',\n        'config': _DEFAULT_CONFIG\n    }\n\n\ndef plugin_init(config, ingest_ref, callback):\n    \"\"\" Initialise the plugin.\n    Args:\n        config: JSON configuration document for the plugin configuration category\n    Returns:\n        data: JSON object to be used in future calls to the plugin\n    Raises:\n    \"\"\"\n    global shutdown_in_progress\n    _LOGGER.info(\"plugin_init called\")\n    _config = config  # copy.deepcopy(config)\n    _config['ingest_ref'] = ingest_ref\n    _config['callback'] = callback\n\n    _config['readings_buffer'] = list()\n    shutdown_in_progress = False\n    return _config\n\n\ndef plugin_ingest(handle, data):\n    \"\"\" plugin_ingest -- calculate rms values using numpy \"\"\"\n\n    global shutdown_in_progress\n\n    if shutdown_in_progress:\n        return\n\n    if not handle['enable']['value']:\n        # Filter not enabled, just pass data onwards\n        filter_ingest.filter_ingest_callback(handle['callback'], handle['ingest_ref'], data)\n        return\n\n    _asset_name = handle['assetName']['value']\n    _datapoint_name = handle['dataPointName']['value']\n    _num_samples_for_calculation = int(handle['numSamples']['value'])\n\n    if type(data) == dict:\n        data = [data]\n\n    if len(data) == 0:\n        _LOGGER.info(\"empty data received\")\n\n    buffer = []\n    handle['readings_buffer'].extend(data)\n\n    if len(handle['readings_buffer']) < _num_samples_for_calculation:\n        return\n\n    for reading in 
handle['readings_buffer'][:_num_samples_for_calculation]:\n        if reading['asset'] != _asset_name:\n            continue\n\n        datapoints = reading['readings']\n        if _datapoint_name not in datapoints:\n            continue\n\n        single_reading = datapoints[_datapoint_name]\n        buffer.append(single_reading)\n\n    buffer_np = buffer\n\n    # The following line of code does not work in this filter, even if the south service is a sinusoid;\n    # iprpc is needed instead.\n\n    # buffer_np = np.array(buffer, dtype=np.dtype('float64'))\n\n    # we will get the following error when we try to print the numpy array:\n\n    # 'RuntimeError('implement_array_function method already has a docstring',)\n    # _LOGGER.info(\"buffer np is {} shape is {}\".format(buffer_np, buffer_np.shape))\n\n    # rms value is calculated without using numpy.\n    rms_value = rms(buffer_np)\n\n    time_stamp = utils.local_timestamp()\n    rms_reading = {'asset': _asset_name + '_RMS', 'timestamp': time_stamp, 'readings': {}}\n    rms_reading['readings'][_datapoint_name] = rms_value\n\n    readings_buffer_to_fwd = handle['readings_buffer'][:_num_samples_for_calculation]\n\n    readings_buffer_to_fwd.append(rms_reading)\n\n    filter_ingest.filter_ingest_callback(handle['callback'], handle['ingest_ref'], readings_buffer_to_fwd)\n\n    handle['readings_buffer'] = handle['readings_buffer'][_num_samples_for_calculation:]\n\n\ndef plugin_reconfigure(handle, new_config):\n    \"\"\" Reconfigures the plugin\n    Args:\n        handle: handle returned by the plugin initialisation call\n        new_config: JSON object representing the new configuration category for the category\n    Returns:\n        new_handle: new handle to be used in the future calls\n    \"\"\"\n    _LOGGER.info(\"Old config for numpy filter plugin={} \\n new config={}\".format(handle, new_config))\n    plugin_shutdown(handle)\n    new_handle = plugin_init(new_config, handle['ingest_ref'], handle['callback'])\n    return new_handle\n\n\ndef 
plugin_shutdown(handle):\n    \"\"\" Shuts down the plugin, doing required cleanup\n    Args:\n        handle: handle returned by the plugin initialisation call\n    Returns:\n        plugin shutdown\n    \"\"\"\n    _LOGGER.info(\"Plugin shutdown\")\n    global shutdown_in_progress\n    shutdown_in_progress = True\n"
  },
  {
    "path": "tests/system/python/plugins/dummy/iprpc/filter/numpy_iprpc_filter/__init__.py",
    "content": ""
  },
  {
    "path": "tests/system/python/plugins/dummy/iprpc/filter/numpy_iprpc_filter/np_server.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Derived class of server functions for Numpy operations\n\n    take a stream of assets which are raw acceleration values, process them, and return\n    a stream of assets representing the results.\n\n\"\"\"\n\n\n__author__ = \"Deepanshu Yadav\"\n__copyright__ = \"Copyright (c) 2020 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nimport logging\n\nimport numpy as np\n\nfrom fledge.common import logger, iprpc\n\n\n_LOGGER = logger.setup(__name__, level=logging.INFO)\n\n\nclass NPServer(iprpc.InterProcessRPC):\n    \"\"\" Class for offloading numpy/scipy operations from filter plugin to a separate process\n    \"\"\"\n    def __init__(self, service_name):\n        \"\"\" Initialize numpy/scipy offloading server\n        Args:\n            service_name: name of the server\n        Returns:\n        Raises:\n        \"\"\"\n        super().__init__()  # initialize rpc server\n\n    def rms(self, input_array):\n        \"\"\"\n        Root mean square of the input array.\n        Args:\n            input_array: Input numpy array\n        Returns:\n            root mean square of the input array.\n        \"\"\"\n        input_array_np = np.array(input_array)\n        return np.sqrt(np.mean(np.square(input_array_np)))\n\n    def plugin_shutdown(self):\n        \"\"\" Shutdown numpy/scipy offloading server\n        Args:\n        Returns:\n        Raises:\n            SystemExit\n        \"\"\"\n        _LOGGER.info(\"PLUGIN SHUTDOWN\")\n        raise SystemExit\n\n\nif __name__ == \"__main__\":\n    _LOGGER.info(\"SERVING...np functions\")\n    rpc = NPServer('np service')\n    rpc.serve()\n"
  },
  {
    "path": "tests/system/python/plugins/dummy/iprpc/filter/numpy_iprpc_filter/numpy_iprpc_filter.py",
"content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" A dummy filter plugin to test the iprpc module in Fledge. \"\"\"\n\n\nimport logging\nimport os\n\nimport filter_ingest\nfrom fledge.common import logger, iprpc\nfrom fledge.plugins.common import utils\n\n__author__ = \"Deepanshu Yadav\"\n__copyright__ = \"Copyright (c) 2020 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_DEFAULT_CONFIG = {\n    'plugin': {  # mandatory\n        'description': 'Filter plugin to calculate rms value from south service using numpy',\n        'type': 'string',\n        'default': 'numpy_iprpc_filter',\n        'readonly': 'true'\n    },\n    'enable': {  # mandatory\n        'description': 'Enable / disable this plugin',\n        'type': 'boolean',\n        'default': 'False',\n        'displayName': 'Enable this plugin',\n        'order': '1'\n    },\n    'assetName': {\n        'description': 'Asset name from which rms values are to be calculated.',\n        'type': 'string',\n        'default': 'numpy_ingest',\n        'displayName': 'Asset Name',\n        'order': '2'\n    },\n    'dataPointName': {\n        'description': 'The data point name from the asset',\n        'type': 'string',\n        'default': 'random',\n        'displayName': 'Data Point Name',\n        'order': '3'\n    },\n    'numSamples': {\n        'description': 'Number of samples to collect before applying the rms function',\n        'type': 'string',\n        'default': '100',\n        'displayName': 'Num Of Samples',\n        'order': '4'\n    }\n}\n\n_LOGGER = logger.setup(__name__, level=logging.INFO)\n_module_dir = os.path.dirname(__file__)\n\nshutdown_in_progress = False\nthe_rpc = None\n\n\ndef plugin_info():\n    \"\"\" Returns information about the plugin.\n    Args:\n    Returns:\n        dict: plugin information\n    Raises:\n    \"\"\"\n    _LOGGER.info(\"plugin_info called\")\n    return {\n        'name': 
'numpy_iprpc_filter',\n        'version': '1.8.2',\n        'mode': 'none',\n        'type': 'filter',\n        'interface': '1.0',\n        'config': _DEFAULT_CONFIG\n    }\n\n\ndef plugin_init(config, ingest_ref, callback):\n    \"\"\" Initialise the plugin.\n    Args:\n        config: JSON configuration document for the plugin configuration category\n    Returns:\n        data: JSON object to be used in future calls to the plugin\n    Raises:\n    \"\"\"\n    global shutdown_in_progress, the_rpc\n    _LOGGER.info(\"plugin_init called\")\n    _config = config  # copy.deepcopy(config)\n    _config['ingest_ref'] = ingest_ref\n    _config['callback'] = callback\n\n    # start a server for the AI algorithm the user has configured\n    the_rpc = iprpc.IPCModuleClient(\"np_server\", _module_dir)\n\n    _config['readings_buffer'] = list()\n    shutdown_in_progress = False\n    return _config\n\n\ndef plugin_ingest(handle, data):\n    \"\"\" plugin_ingest -- calculate rms values using numpy \"\"\"\n\n    global shutdown_in_progress\n\n    if shutdown_in_progress:\n        return\n\n    if not handle['enable']['value']:\n        # Filter not enabled, just pass data onwards\n        filter_ingest.filter_ingest_callback(handle['callback'], handle['ingest_ref'], data)\n        return\n\n    _asset_name = handle['assetName']['value']\n    _datapoint_name = handle['dataPointName']['value']\n    _num_samples_for_calculation = int(handle['numSamples']['value'])\n\n    if type(data) == dict:\n        data = [data]\n\n    if len(data) == 0:\n        _LOGGER.info(\"empty data received\")\n\n    buffer = []\n    handle['readings_buffer'].extend(data)\n\n    if len(handle['readings_buffer']) < _num_samples_for_calculation:\n        return\n\n    for reading in handle['readings_buffer'][:_num_samples_for_calculation]:\n        if reading['asset'] != _asset_name:\n            continue\n\n        datapoints = reading['readings']\n        if _datapoint_name not in datapoints:\n            
continue\n\n        single_reading = datapoints[_datapoint_name]\n        buffer.append(single_reading)\n\n    rms_value = the_rpc.rms(buffer)\n\n    time_stamp = utils.local_timestamp()\n    rms_reading = {'asset': _asset_name + '_RMS', 'timestamp': time_stamp, 'readings': {}}\n    rms_reading['readings'][_datapoint_name] = rms_value\n\n    readings_buffer_to_fwd = handle['readings_buffer'][:_num_samples_for_calculation]\n\n    readings_buffer_to_fwd.append(rms_reading)\n\n    filter_ingest.filter_ingest_callback(handle['callback'], handle['ingest_ref'], readings_buffer_to_fwd)\n\n    handle['readings_buffer'] = handle['readings_buffer'][_num_samples_for_calculation:]\n\n\ndef plugin_reconfigure(handle, new_config):\n    \"\"\" Reconfigures the plugin\n    Args:\n        handle: handle returned by the plugin initialisation call\n        new_config: JSON object representing the new configuration category for the category\n    Returns:\n        new_handle: new handle to be used in the future calls\n    \"\"\"\n    _LOGGER.info(\"Old config for numpy iprpc filter plugin={} \\n new config={}\".format(handle, new_config))\n    plugin_shutdown(handle)\n    new_handle = plugin_init(new_config, handle['ingest_ref'], handle['callback'])\n    return new_handle\n\n\ndef plugin_shutdown(handle):\n    \"\"\" Shuts down the plugin, doing required cleanup\n    Args:\n        handle: handle returned by the plugin initialisation call\n    Returns:\n        plugin shutdown\n    \"\"\"\n    _LOGGER.info(\"Plugin shutdown\")\n    global shutdown_in_progress, the_rpc\n    shutdown_in_progress = True\n    if the_rpc is not None:\n        try:\n            the_rpc.plugin_shutdown()\n        except EOFError:\n            _LOGGER.info(\"Normal shutdown exit\")\n        the_rpc = None\n"
  },
  {
    "path": "tests/system/python/plugins/dummy/iprpc/south/numpy_south/__init__.py",
    "content": ""
  },
  {
    "path": "tests/system/python/plugins/dummy/iprpc/south/numpy_south/numpy_south.py",
"content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" A dummy south plugin to ingest random values in Fledge using numpy \"\"\"\n\nimport copy\nimport logging\n\nimport numpy as np\n\nfrom fledge.common import logger\nfrom fledge.plugins.common import utils\n\n__author__ = \"Deepanshu Yadav\"\n__copyright__ = \"Copyright (c) 2020 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n_DEFAULT_CONFIG = {\n    'plugin': {\n        'description': 'A dummy south plugin to ingest random values in Fledge using numpy',\n        'type': 'string',\n        'default': 'numpy_south',\n        'readonly': 'true'\n    },\n    'assetName': {\n        'description': 'Name of Asset',\n        'type': 'string',\n        'default': 'np_random',\n        'displayName': 'Asset name',\n        'mandatory': 'true'\n    },\n    'totalValuesArray': {\n        'description': 'The total number of values in the input array',\n        'type': 'string',\n        'default': '100',\n        'displayName': 'Total Array Values',\n        'mandatory': 'true'\n    }\n}\n\n_LOGGER = logger.setup(__name__, level=logging.INFO)\nindex = 0\nTOTAL_VALUES_IN_ARRAY = 0\nnp_array = None\n\n\ndef generate_data():\n    # np_array is module level so that successive generator instances created by\n    # plugin_poll() reuse the same random array instead of raising NameError\n    global index, TOTAL_VALUES_IN_ARRAY, np_array\n\n    while index <= TOTAL_VALUES_IN_ARRAY:\n        # index exceeded the array bounds, reset to the start\n        if index >= TOTAL_VALUES_IN_ARRAY:\n            index = 0\n        if index == 0 or np_array is None:\n            np_array = np.random.rand(TOTAL_VALUES_IN_ARRAY, 1)\n\n        yield np_array[index, 0]\n        index += 1\n\n\ndef plugin_info():\n    \"\"\" Returns information about the plugin.\n    Args:\n    Returns:\n        dict: plugin information\n    Raises:\n    \"\"\"\n    return {\n        'name': 'Numpy Poll plugin',\n        'version': '1.8.2',\n        'mode': 'poll',\n        'type': 'south',\n        'interface': '1.0',\n        'config': _DEFAULT_CONFIG\n    }\n\n\ndef plugin_init(config):\n    
\"\"\" Initialise the plugin.\n    Args:\n        config: JSON configuration document for the South plugin configuration category\n    Returns:\n        data: JSON object to be used in future calls to the plugin\n    Raises:\n    \"\"\"\n    data = copy.deepcopy(config)\n    global TOTAL_VALUES_IN_ARRAY\n    TOTAL_VALUES_IN_ARRAY = int(data['totalValuesArray']['value'])\n    return data\n\n\ndef plugin_poll(handle):\n    \"\"\" Extracts data from the sensor and returns it in a JSON document as a Python dict.\n    Available for poll mode only.\n    Args:\n        handle: handle returned by the plugin initialisation call\n    Returns:\n        returns a sensor reading in a JSON document, as a Python dict, if it is available\n        None - If no reading is available\n    Raises:\n        Exception\n    \"\"\"\n    try:\n        time_stamp = utils.local_timestamp()\n        data = {'asset': handle['assetName']['value'],\n                'timestamp': time_stamp,\n                'readings': {\"random\": next(generate_data())}}\n    except (Exception, RuntimeError) as ex:\n        _LOGGER.exception(\"Exception is {}\".format(str(ex)))\n        raise\n    else:\n        return data\n\n\ndef plugin_reconfigure(handle, new_config):\n    \"\"\" Reconfigures the plugin\n\n    Args:\n        handle: handle returned by the plugin initialisation call\n        new_config: JSON object representing the new configuration category for the category\n    Returns:\n        new_handle: new handle to be used in the future calls\n    \"\"\"\n    _LOGGER.info(\"Old config for numpy south plugin {} \\n new config {}\".format(handle, new_config))\n    global TOTAL_VALUES_IN_ARRAY\n    TOTAL_VALUES_IN_ARRAY = int(new_config['totalValuesArray']['value'])\n    new_handle = copy.deepcopy(new_config)\n    return new_handle\n\n\ndef plugin_shutdown(handle):\n    \"\"\" Shuts down the plugin, doing required cleanup; to be called prior to the South plugin service being shut down.\n\n    Args:\n        handle: handle returned by the plugin initialisation call\n    Returns:\n        plugin shutdown\n    
\"\"\"\n    _LOGGER.info('numpy south plugin shut down.')\n"
  },
  {
    "path": "tests/system/python/plugins/notificationDelivery/send/send.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: https://fledge-iot.readthedocs.io\n# FLEDGE_END\n\n\"\"\" Example Notification delivery plugin \"\"\"\n\nimport copy\nimport uuid\nimport logging\nimport json\n\nfrom fledge.common import logger\nfrom fledge.plugins.common import utils\n\n__author__ = \"Massimiliano Pinto\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n_LOGGER = logger.setup(__name__, level=logging.DEBUG)\n\n_DEFAULT_CONFIG = {\n    'plugin': {\n        'description': 'Send notification delivery plugin',\n        'type': 'string',\n        'default': 'send',\n        'readonly': 'true'\n    },\n    'enable': {\n        'description': 'Enable send plugin',\n        'type': 'boolean',\n        'default': 'false',\n        'displayName': 'Enable',\n        'order': \"1\"\n    }\n}\n\ndef plugin_info():\n    \"\"\" Returns information about the plugin.\n    Args:\n    Returns:\n        dict: plugin information\n    Raises:\n    \"\"\"\n    return {\n        'name': 'send',\n        'version': '1.7.1',\n        'mode': 'none',\n        'type': 'notificationDelivery',\n        'interface': '1.0',\n        'config': _DEFAULT_CONFIG\n    }\n\n\ndef plugin_init(config):\n    \"\"\" Initialise the plugin.\n    Args:\n        config: JSON configuration document for the South plugin configuration category\n    Returns:\n        data: JSON object to be used in future calls to the plugin\n    Raises:\n    \"\"\"\n    data = copy.deepcopy(config)\n\n    return data\n\ndef plugin_reconfigure(handle, new_config):\n    \"\"\" Reconfigures the plugin\n\n    Args:\n        handle: handle returned by the plugin initialisation call\n        new_config: JSON object representing the new configuration category for the category\n    Returns:\n        new_handle: new handle to be used in the future calls\n    \"\"\"\n    new_handle = copy.deepcopy(new_config)\n\n    new_handle = 
plugin_init(new_config)\n    return new_handle\n\n\ndef plugin_shutdown(handle):\n    \"\"\" Shuts down the plugin, doing required cleanup; to be called prior to the notification delivery service being shut down.\n\n    Args:\n        handle: handle returned by the plugin initialisation call\n    Returns:\n        plugin shutdown\n    \"\"\"\n    _LOGGER.info('plugin_shutdown() called.')\n\ndef plugin_deliver(handle, deliveryName, notificationName, triggerReason, message):\n    _LOGGER.debug(\"send delivery for \" + str(notificationName))\n    _LOGGER.debug(\"send message is \" + str(message))\n    _LOGGER.debug(\"send reason object type is \" + str(type(triggerReason)))\n    _LOGGER.debug(\"send reason \" + str(triggerReason['reason']))\n\n    # Get data that triggered the notification\n    data = triggerReason['data']\n\n    if type(data) == dict:\n        for k in data.keys():\n            _LOGGER.debug(\"=== type of key '\" + str(k) + \"' is \" + str(type(data[k])))\n            if type(data[k]) == dict:\n                for j in data[k].keys():\n                    _LOGGER.debug(\"=== type of dict key '\" + str(j) + \"' is \" + str(type(data[k][j])))\n            else:\n                _LOGGER.debug(\"=== value of key '\" + str(k) + \"' is \" + str(data[k]))\n\n    return True\n"
  },
  {
    "path": "tests/system/python/plugins/notificationRule/numpy_image/numpy_image.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: https://fledge-iot.readthedocs.io\n# FLEDGE_END\n\n\"\"\" Notification Rule plugin module to handle images with Numpy \"\"\"\n\nimport os\n\nimport copy\nimport logging\nfrom fledge.common import logger\nimport datetime\nfrom fledge.plugins.common import utils\nimport numpy\nimport json\n\n__author__ = \"Massimiliano Pinto\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n_DEFAULT_CONFIG = {\n    'plugin': {\n        'description': 'Numpy image processing rule plugin',\n        'type': 'string',\n        'default': 'numpy_image',\n        'readonly': 'true'\n    },\n    'assetName': {\n        'description': 'Name of asset',\n        'type': 'string',\n        'default': 'sinusoid',\n        'displayName': 'Asset name to monitor'\n    }\n}\n\n_LOGGER = logger.setup(__name__, level=logging.DEBUG)\n\ndef plugin_info():\n    \"\"\" Returns information about the plugin.\n    Args:\n    Returns:\n        dict: plugin information\n    Raises:\n    \"\"\"\n    _LOGGER.info(\"plugin_info() called\")\n    return {\n        'name': 'numpy_image',\n        'version': '1.7.0',\n        'mode': 'poll',\n        'type': 'notificationRule',\n        'interface': '1.0.0',\n        'config': _DEFAULT_CONFIG\n    }\n\n\ndef plugin_init(config):\n    \"\"\" Initialise the plugin.\n    Args:\n        config: JSON configuration document for the plugin configuration category\n    Returns:\n        data: JSON object to be used in future calls to the plugin\n    Raises:\n    \"\"\"\n\n    _LOGGER.info(\"plugin_init() called: config={}\".format(config))\n    try:\n        handle = copy.deepcopy(config)\n        handle['state'] = False\n    except Exception as ex:\n        _LOGGER.exception('Error in initializing plugin {}'.format(str(ex)))\n        raise\n\n    return handle\n\n\ndef plugin_triggers(handle):\n    \"\"\" Returns assets to monitor for 
numpy_image\n    Args:\n        handle: handle returned by the plugin initialisation call\n    Returns:\n        Returns the dict with asset name(s) under which numpy_image data is available.\n        This would be used to subscribe to such data from storage service\n    \"\"\"\n    d = {'triggers': [{'asset': handle['assetName']['value']}]}\n    _LOGGER.debug(\"plugin_triggers() called: triggers={}\".format(d))\n    return d\n\n\ndef plugin_eval(handle, data):\n    \"\"\" Evaluates whether the rule has triggered by counting the non-zero elements of any numpy array datapoint in the readings\n    Args:\n        handle: handle returned by the plugin initialisation call\n        data: dict with the readings, containing image data as numpy arrays\n    Returns:\n        Returns a bool indicating whether the rule has triggered\n    \"\"\"\n\n    ret_val = False\n    try:\n        if type(data) == dict:\n            for k in data.keys():\n                if type(data[k]) == dict:\n                    for j in data[k].keys():\n                        if type(data[k][j]) != str:\n                            t_type = type(data[k][j])\n                            if t_type == numpy.ndarray:\n                                #_LOGGER.debug(\"=== (2.4) numpy.ndarray here\")\n                                arr = data[k][j]\n                                #_LOGGER.debug(\"=== (2.4) numpy.ndarray check: \" + str(type(arr)))\n                                #_LOGGER.debug(\"=== (2.4.1) numpy.ndarray has dims \" + str(arr.ndim))\n                                #_LOGGER.debug(\"=== (2.4.2) numpy.ndarray has size \" + str(arr.size))\n                                #_LOGGER.debug(\"=== (2.4.3) numpy.ndarray has shape \" + str(arr.shape))\n                                #_LOGGER.debug(\"=== (2.4.4) numpy.ndarray has stored types \" + str(arr.dtype))\n                                _LOGGER.debug(\"*** (2.4.5) numpy.ndarray has numpy.count_nonzero:\" + str(numpy.count_nonzero(arr)))\n                                if 
numpy.count_nonzero(arr) > 145200:\n                                    ret_val = True\n                else:\n                    _LOGGER.debug(\"plugin_eval(): key '\" + str(k) + \"' is not a dict\")\n\n    except Exception as ex:\n        _LOGGER.exception('Error in plugin_eval(): {}'.format(str(ex)))\n\n    handle[\"state\"] = ret_val\n\n    _LOGGER.debug(\"plugin_eval() returns \" + str(ret_val))\n\n    return ret_val\n\ndef plugin_reason(handle):\n    \"\"\" Returns reason string for the last time plugin_eval returned True\n    Args:\n        handle: handle returned by the plugin initialisation call\n    Returns:\n        Returns a simple dict with 'reason' field indicating the reason for last 'plugin_eval' call returning True\n    \"\"\"\n    s = \"triggered\" if handle[\"state\"] else \"cleared\"\n    timestamp = utils.local_timestamp()\n    _LOGGER.debug(\"plugin_reason() called: reason={}\".format(s))\n    return {'reason': s, 'asset': [handle['assetName']['value']], 'timestamp': timestamp}\n\n\ndef plugin_reconfigure(handle, new_config):\n    \"\"\" Reconfigures the plugin\n    Args:\n        handle: handle returned by the plugin initialisation call\n        new_config: JSON object representing the new configuration category for the category\n    Returns:\n        new_handle: new handle to be used in the future calls\n    \"\"\"\n    _LOGGER.debug(\"plugin_reconfigure() Old config for plugin={} \\n new config={}\".format(handle, new_config))\n\n    # Shutdown plugin\n    plugin_shutdown(handle)\n\n    # Call init with new configuration\n    new_handle = plugin_init(new_config)\n\n    return new_handle\n\n\ndef plugin_shutdown(handle):\n    \"\"\" Shuts down the plugin, doing required cleanup\n    Args:\n        handle: handle returned by the plugin initialisation call\n    Returns:\n        plugin shutdown\n    \"\"\"\n\n    _LOGGER.debug(\"plugin_shutdown() called.\")\n\n    # Remove any data here\n"
  },
  {
    "path": "tests/system/python/pytest.ini",
    "content": "[pytest]\naddopts = --wait-time=6\n          --retries=4"
  },
  {
    "path": "tests/system/python/rpi/test_e2e_rpi_ephat.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test end to end flow with:\n        Ingress: ePhat south plugin\n        Egress: PI Server (C) plugin\n\"\"\"\n\nimport os\nimport http.client\nimport json\nimport time\nimport pytest\nimport utils\nfrom urllib.parse import quote\nfrom collections import Counter\n\n\n__author__ = \"Praveen Garg, Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nSOUTH_PLUGIN = \"rpienviro\"\nSVC_NAME = \"Room-1\"\n\nASSET_PREFIX = \"e_\"  # default asset prefix for rpienviro South plugin\n\nASSET_NAME_W = \"weather\"\nSENSOR_READ_KEY_W = {\"temperature\", \"altitude\", \"pressure\"}\n\nASSET_NAME_M = \"magnetometer\"\nSENSOR_READ_KEY_M = {\"x\", \"y\", \"z\"}\n\nASSET_NAME_A = \"accelerometer\"\nSENSOR_READ_KEY_A = {\"x\", \"y\", \"z\"}\n\nASSET_NAME_C = \"rgb\"\nSENSOR_READ_KEY_C = {\"r\", \"g\", \"b\"}\n\nTASK_NAME = \"North v2 PI\"\n\n\n@pytest.mark.skipif('raspberrypi' != os.uname()[1] and 'raspizero' != os.uname()[1], reason=\"RPi only (ePhat) test\")\n# sysname='Linux', nodename='raspberrypi', release='4.14.98+', version='#1200 ', machine='armv6l'\nclass TestE2eRPiEphatEgress:\n\n    def get_ping_status(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/ping')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return jdoc\n\n    def get_statistics_map(self, fledge_url):\n        conn = http.client.HTTPConnection(fledge_url)\n        conn.request(\"GET\", '/fledge/statistics')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        return utils.serialize_stats_map(jdoc)\n\n    @pytest.fixture\n    def start_south_north(self, reset_and_start_fledge, 
add_south, south_branch, disable_schedule,\n                          remove_data_file, skip_verify_north_interface, remove_directories, enable_schedule,\n                          fledge_url, start_north_pi_server_c_web_api, pi_host, pi_port, pi_db, pi_admin, pi_passwd, wait_time):\n        \"\"\" This fixture clones given south & filter plugin repo, and starts south and PI north C instance\n\n        \"\"\"\n\n        add_south(SOUTH_PLUGIN, south_branch, fledge_url, service_name=SVC_NAME)\n\n        if not skip_verify_north_interface:\n            start_north_pi_server_c_web_api(fledge_url, pi_host, pi_port, pi_db=pi_db, pi_user=pi_admin, pi_pwd=pi_passwd,\n                                            taskname=TASK_NAME, start_task=False)\n\n        # let the readings ingress\n        time.sleep(wait_time)\n        disable_schedule(fledge_url, SVC_NAME)\n\n        if not skip_verify_north_interface:\n            enable_schedule(fledge_url, TASK_NAME)\n\n        yield self.start_south_north\n\n        remove_directories(\"/tmp/fledge-south-{}\".format(SOUTH_PLUGIN))\n\n    def test_end_to_end(self, start_south_north, read_data_from_pi_asset_server, fledge_url, pi_host, pi_admin,\n                        pi_passwd, pi_db, wait_time, retries, skip_verify_north_interface):\n\n        # let the readings egress\n        time.sleep(wait_time * 2)\n        self._verify_ping_and_statistics(fledge_url, skip_verify_north_interface)\n\n        self._verify_ingest(fledge_url)\n\n        if not skip_verify_north_interface:\n            self._verify_egress(read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries)\n\n    def _verify_ping_and_statistics(self, fledge_url, skip_verify_north_interface):\n        ping_response = self.get_ping_status(fledge_url)\n        assert ping_response[\"dataRead\"]\n        if not skip_verify_north_interface:\n            assert ping_response[\"dataSent\"]\n\n        actual_stats_map = 
self.get_statistics_map(fledge_url)\n        assert actual_stats_map[\"{}{}\".format(ASSET_PREFIX.upper(), ASSET_NAME_W.upper())]\n        assert actual_stats_map[\"{}{}\".format(ASSET_PREFIX.upper(), ASSET_NAME_M.upper())]\n        assert actual_stats_map[\"{}{}\".format(ASSET_PREFIX.upper(), ASSET_NAME_A.upper())]\n        assert actual_stats_map[\"{}{}\".format(ASSET_PREFIX.upper(), ASSET_NAME_C.upper())]\n        assert actual_stats_map['READINGS']\n        if not skip_verify_north_interface:\n            assert actual_stats_map[TASK_NAME]\n            assert actual_stats_map['Readings Sent']\n\n    def _verify_ingest(self, fledge_url):\n        asset_name_with_prefix_w = \"{}{}\".format(ASSET_PREFIX, ASSET_NAME_W)\n        asset_name_with_prefix_m = \"{}{}\".format(ASSET_PREFIX, ASSET_NAME_M)\n        asset_name_with_prefix_a = \"{}{}\".format(ASSET_PREFIX, ASSET_NAME_A)\n        asset_name_with_prefix_c = \"{}{}\".format(ASSET_PREFIX, ASSET_NAME_C)\n        conn = http.client.HTTPConnection(fledge_url)\n\n        conn.request(\"GET\", '/fledge/asset')\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc = json.loads(r)\n        assert len(jdoc), \"No asset found\"\n        actual_assets = [i[\"assetCode\"] for i in jdoc]\n        assert asset_name_with_prefix_w in actual_assets\n        assert asset_name_with_prefix_m in actual_assets\n        assert asset_name_with_prefix_a in actual_assets\n        assert asset_name_with_prefix_c in actual_assets\n        assert jdoc[0][\"count\"]\n        expected_assets = Counter([asset_name_with_prefix_w, asset_name_with_prefix_m,\n                                   asset_name_with_prefix_a, asset_name_with_prefix_c])\n        assert Counter(actual_assets) == expected_assets\n\n        # fledge/asset/e%2Fweather\n        conn.request(\"GET\", '/fledge/asset/{}'.format(quote(asset_name_with_prefix_w, safe='')))\n        r = conn.getresponse()\n        assert 200 == 
r.status\n        r = r.read().decode()\n        jdoc_asset = json.loads(r)\n\n        for _sensor in SENSOR_READ_KEY_W:\n            assert len(jdoc_asset), \"No data found for asset '{}'\".format(asset_name_with_prefix_w)\n            assert jdoc_asset[0][\"reading\"][_sensor] is not None\n            conn.request(\"GET\", '/fledge/asset/{}/{}'.format(quote(asset_name_with_prefix_w, safe=''), _sensor))\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert len(jdoc), \"No data found for asset '{}' and datapoint '{}'\".format(asset_name_with_prefix_w, _sensor)\n\n        # fledge/asset/e%2Fmagnetometer\n        conn.request(\"GET\", '/fledge/asset/{}'.format(quote(asset_name_with_prefix_m, safe='')))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc_asset = json.loads(r)\n\n        for _sensor in SENSOR_READ_KEY_M:\n            assert len(jdoc_asset), \"No data found for asset '{}'\".format(asset_name_with_prefix_m)\n            assert jdoc_asset[0][\"reading\"][_sensor] is not None\n            conn.request(\"GET\", '/fledge/asset/{}/{}'.format(quote(asset_name_with_prefix_m, safe=''), _sensor))\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert len(jdoc), \"No data found for asset '{}' and datapoint '{}'\".format(asset_name_with_prefix_m,\n                                                                                       _sensor)\n\n        # fledge/asset/e%2Faccelerometer\n        conn.request(\"GET\", '/fledge/asset/{}'.format(quote(asset_name_with_prefix_a, safe='')))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc_asset = json.loads(r)\n\n        for _sensor in SENSOR_READ_KEY_A:\n            assert len(jdoc_asset), \"No 
data found for asset '{}'\".format(asset_name_with_prefix_a)\n            assert jdoc_asset[0][\"reading\"][_sensor] is not None\n            conn.request(\"GET\", '/fledge/asset/{}/{}'.format(quote(asset_name_with_prefix_a, safe=''), _sensor))\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert len(jdoc), \"No data found for asset '{}' and datapoint '{}'\".format(asset_name_with_prefix_a,\n                                                                                       _sensor)\n        # fledge/asset/e%2Frgb\n        conn.request(\"GET\", '/fledge/asset/{}'.format(quote(asset_name_with_prefix_c, safe='')))\n        r = conn.getresponse()\n        assert 200 == r.status\n        r = r.read().decode()\n        jdoc_asset = json.loads(r)\n\n        for _sensor in SENSOR_READ_KEY_C:\n            assert len(jdoc_asset), \"No data found for asset '{}'\".format(asset_name_with_prefix_c)\n            assert jdoc_asset[0][\"reading\"][_sensor] is not None\n            conn.request(\"GET\", '/fledge/asset/{}/{}'.format(quote(asset_name_with_prefix_c, safe=''), _sensor))\n            r = conn.getresponse()\n            assert 200 == r.status\n            r = r.read().decode()\n            jdoc = json.loads(r)\n            assert len(jdoc), \"No data found for asset '{}' and datapoint '{}'\".format(asset_name_with_prefix_c,\n                                                                                       _sensor)\n\n    def _verify_egress(self, read_data_from_pi_asset_server, pi_host, pi_admin, pi_passwd, pi_db, wait_time, retries):\n        retry_count = 0\n\n        data_from_pi_w = None\n        data_from_pi_m = None\n        data_from_pi_a = None\n        data_from_pi_c = None\n\n        asset_name_with_prefix_w = \"{}{}\".format(ASSET_PREFIX, ASSET_NAME_W)\n        asset_name_with_prefix_a = \"{}{}\".format(ASSET_PREFIX, ASSET_NAME_A)\n        
asset_name_with_prefix_m = \"{}{}\".format(ASSET_PREFIX, ASSET_NAME_M)\n        asset_name_with_prefix_c = \"{}{}\".format(ASSET_PREFIX, ASSET_NAME_C)\n\n        while not (data_from_pi_w and data_from_pi_m and data_from_pi_a and data_from_pi_c) and retry_count < retries:\n            data_from_pi_w = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db, asset_name_with_prefix_w,\n                                               SENSOR_READ_KEY_W)\n\n            data_from_pi_m = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db, asset_name_with_prefix_m,\n                                               SENSOR_READ_KEY_M)\n\n            data_from_pi_a = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db, asset_name_with_prefix_a,\n                                               SENSOR_READ_KEY_A)\n\n            data_from_pi_c = read_data_from_pi_asset_server(pi_host, pi_admin, pi_passwd, pi_db, asset_name_with_prefix_c,\n                                               SENSOR_READ_KEY_C)\n\n            retry_count += 1\n            time.sleep(wait_time * 2)\n\n        if not (data_from_pi_w and data_from_pi_m and data_from_pi_a and data_from_pi_c):\n            assert False, \"Failed to read data from PI\"\n\n        print(\"Data read from PI System:\\nWeather={}\\nMagnetometer={}\\nAccelerometer={}\\nrgbColor={}\\n\".format(\n            data_from_pi_w, data_from_pi_m, data_from_pi_a, data_from_pi_c))\n\n        for w in SENSOR_READ_KEY_W:\n            assert w in data_from_pi_w\n            abs_sum_w = sum([abs(n) for n in data_from_pi_w[w]])\n            print(\"Weather (sum of {} absolute values), Sensor={}\".format(len(data_from_pi_w[w]), w), abs_sum_w)\n            assert abs_sum_w, \"Sum of weather sensor absolute values is 0\"\n\n        for a in SENSOR_READ_KEY_A:\n            assert a in data_from_pi_a\n            abs_sum_a = sum([abs(n) for n in data_from_pi_a[a]])
\n            print(\"Accelerometer (sum of {} absolute values), Sensor={}\".format(len(data_from_pi_a[a]), a), abs_sum_a)\n            assert abs_sum_a, \"Sum of accelerometer sensor absolute values is 0\"\n\n        for m in SENSOR_READ_KEY_M:\n            assert m in data_from_pi_m\n            abs_sum_m = sum([abs(n) for n in data_from_pi_m[m]])\n            print(\"Magnetometer (sum of {} absolute values), Sensor={}\".format(len(data_from_pi_m[m]), m), abs_sum_m)\n            assert abs_sum_m, \"Sum of magnetometer sensor absolute values is 0\"\n\n        for c in SENSOR_READ_KEY_C:\n            assert c in data_from_pi_c\n            abs_sum_c = sum([abs(n) for n in data_from_pi_c[c]])\n            print(\"RGB colors (sum of {} absolute values), Sensor={}\".format(len(data_from_pi_c[c]), c), abs_sum_c)\n            assert abs_sum_c, \"Sum of rgb sensor absolute values is 0\"\n"
  },
  {
    "path": "tests/system/python/scripts/install_c_plugin",
    "content": "#!/usr/bin/env bash\nset -e\n\n__author__=\"Ashish Jabble\"\n__copyright__=\"Copyright (c) 2019 Dianomic Systems\"\n__license__=\"Apache 2.0\"\n__version__=\"1.0.0\"\n\n######################################################################################################\n# Usage text for this script\n# $FLEDGE_ROOT/tests/system/python/scripts/install_c_plugin {BRANCH_NAME} {PLUGIN_TYPE} {PLUGIN_NAME}\n######################################################################################################\n\nBRANCH_NAME=$1\nPLUGIN_TYPE=$2\nPLUGIN_NAME=$3\n\n[[ -z \"${BRANCH_NAME}\" ]] && echo \"Branch name not found.\" && exit 1\n[[ -z \"${PLUGIN_TYPE}\" ]] && echo \"Plugin type not found.\" && exit 1\n[[ -z \"${PLUGIN_NAME}\" ]] && echo \"Plugin name not found.\" && exit 1\n\nos_name=$(grep -o '^NAME=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')\nos_version=$(grep -o '^VERSION_ID=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')\nREPO_NAME=fledge-${PLUGIN_TYPE}-${PLUGIN_NAME,,}\nif [[ \"${PLUGIN_TYPE}\" = \"rule\" || \"${PLUGIN_TYPE}\" == \"notify\" ]]; then rm -rf /tmp/fledge-service-notification; fi\n\n\nclean () {\n   rm -rf /tmp/${REPO_NAME}\n   if [[ \"${PLUGIN_TYPE}\" = \"rule\" ]]; then rm -rf ${FLEDGE_ROOT}/plugins/notificationRule/${PLUGIN_NAME} ; elif [[ \"${PLUGIN_TYPE}\" == \"notify\" ]]; then rm -rf ${FLEDGE_ROOT}/plugins/notificationDelivery/${PLUGIN_NAME} ; fi\n   rm -rf ${FLEDGE_ROOT}/plugins/${PLUGIN_TYPE}/${PLUGIN_NAME}\n}\n\nclone_repo () {\n   git clone -b ${BRANCH_NAME} --single-branch https://github.com/fledge-iot/${REPO_NAME}.git /tmp/${REPO_NAME}\n}\n\ninstall_requirement (){\n    req_file=$(find /tmp/${REPO_NAME} -name requirement*.sh)\n    [[ ! 
-z \"${req_file}\" ]] && ${req_file} || echo \"No external dependency needed for ${PLUGIN_NAME} plugin.\"\n}\n\ninstall_binary_file () {\n   if [[ \"${PLUGIN_TYPE}\" = \"rule\" || \"${PLUGIN_TYPE}\" == \"notify\" ]]\n   then\n   \n        # fledge-service-notification repo is required to build notificationRule Plugins\n        service_repo_name='fledge-service-notification'\n        git clone -b ${BRANCH_NAME} --single-branch https://github.com/fledge-iot/${service_repo_name}.git /tmp/${service_repo_name}\n        export NOTIFICATION_SERVICE_INCLUDE_DIRS=/tmp/${service_repo_name}/C/services/notification/include\n   fi\n   \n   if [ -f /tmp/${REPO_NAME}/build.sh ]; then\n\tcd /tmp/${REPO_NAME}; ./build.sh -DFLEDGE_INSTALL=${FLEDGE_ROOT}; cd build && make install;\n   else\n        if [[ $os_name == *\"Red Hat\"* || $os_name == *\"CentOS\"* ]]; then    \n            if [[ ${os_version} -eq \"7\" ]]; then\n                set +e\n                source scl_source enable rh-postgresql13\n                source scl_source enable devtoolset-7\n                set -e\n            fi\n        fi\n        mkdir -p /tmp/${REPO_NAME}/build; cd /tmp/${REPO_NAME}/build; cmake -DFLEDGE_INSTALL=${FLEDGE_ROOT} ..; make -j4 && make install; cd -\n   fi   \n   \n}\n\nclean\nclone_repo\ninstall_requirement\ninstall_binary_file\necho \"${PLUGIN_NAME} plugin is installed.\"\n"
  },
  {
    "path": "tests/system/python/scripts/install_c_service",
    "content": "#!/usr/bin/env bash\nset -e\n\n__author__=\"Ashish Jabble\"\n__copyright__=\"Copyright (c) 2019 Dianomic Systems\"\n__license__=\"Apache 2.0\"\n__version__=\"1.0.0\"\n\n##########################################################################################\n# Usage text for this script\n# $FLEDGE_ROOT/tests/system/python/scripts/install_c_service {BRANCH_NAME} {SERVICE_NAME}\n##########################################################################################\n\nBRANCH_NAME=$1\nSERVICE_NAME=$2\n\n[[ -z \"${BRANCH_NAME}\" ]] && echo \"Branch name not found.\" && exit 1\n[[ -z \"${SERVICE_NAME}\" ]] && echo \"Service name not found.\" && exit 1\n\nos_name=$(grep -o '^NAME=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')\nos_version=$(grep -o '^VERSION_ID=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')\nREPO_NAME=fledge-service-${SERVICE_NAME}\n\nclean () {\n   rm -rf /tmp/${REPO_NAME}\n   rm -rf ${FLEDGE_ROOT}/services/fledge.services.${SERVICE_NAME}\n}\n\nclone_repo () {\n   git clone -b ${BRANCH_NAME} --single-branch https://github.com/fledge-iot/${REPO_NAME}.git /tmp/${REPO_NAME}\n}\n\ninstall_binary_file () {\n   \n   if [ -f /tmp/${REPO_NAME}/build.sh ]; then\n\tcd /tmp/${REPO_NAME}; ./build.sh -DFLEDGE_INSTALL=${FLEDGE_ROOT}; cd build && make install;\n   else\n        if [[ $os_name == *\"Red Hat\"* || $os_name == *\"CentOS\"* ]]; then    \n            if [[ ${os_version} -eq \"7\" ]]; then\n                set +e\n                source scl_source enable rh-postgresql13\n                source scl_source enable devtoolset-7\n                set -e\n            fi\n        fi\n        mkdir -p /tmp/${REPO_NAME}/build; cd /tmp/${REPO_NAME}/build; cmake -DFLEDGE_INSTALL=${FLEDGE_ROOT} ..; make -j4 && make install; cd -\n   fi   \n   \n}\n\nclean\nclone_repo\ninstall_binary_file\necho \"${SERVICE_NAME} service is installed.\"\n"
  },
  {
    "path": "tests/system/python/scripts/install_python_plugin",
    "content": "#!/usr/bin/env bash\nset -e\n\n__author__=\"Ashish Jabble\"\n__copyright__=\"Copyright (c) 2019 Dianomic Systems\"\n__license__=\"Apache 2.0\"\n__version__=\"1.0.0\"\n\n###########################################################################################################\n# Usage text for this script\n# $FLEDGE_ROOT/tests/system/python/scripts/install_python_plugin {BRANCH_NAME} {PLUGIN_TYPE} {PLUGIN_NAME}\n###########################################################################################################\n\nBRANCH_NAME=$1\nPLUGIN_TYPE=$2\nPLUGIN_NAME=$3\nUSE_PIP_CACHE=$4\n\n[[ -z \"${BRANCH_NAME}\" ]] && echo \"Branch name not found.\" && exit 1\n[[ -z \"${PLUGIN_TYPE}\" ]] && echo \"Plugin type not found.\" && exit 1\n[[ -z \"${PLUGIN_NAME}\" ]] && echo \"Plugin name not found.\" && exit 1\n\nUSE_PIP_CACHE=${USE_PIP_CACHE,,}\nif [[ \"${USE_PIP_CACHE}\" = false ]]; then USE_PIP_CACHE=\"--no-cache-dir\"; else USE_PIP_CACHE=\"\"; fi\n\n\nREPO_NAME=fledge-${PLUGIN_TYPE}-${PLUGIN_NAME}\n\nif [ \"${PLUGIN_NAME}\" == \"http_south\" ] || [ \"${PLUGIN_NAME}\" == \"http_north\" ] ; then\n   REPO_NAME=fledge-${PLUGIN_TYPE}-http\nfi\n\n\nget_pip_break_system_flag() {\n    # Get Python version from python3 --version and parse it\n    PYTHON_VERSION=$(python3 --version 2>&1 | awk '{print $2}')\n    PYTHON_MAJOR=$(echo \"$PYTHON_VERSION\" | cut -d. -f1)\n    PYTHON_MINOR=$(echo \"$PYTHON_VERSION\" | cut -d. 
-f2)\n\n    # Default to empty flag\n    FLAG=\"\"\n\n    # Set the FLAG only for Python versions 3.11 or higher\n    if [ \"$PYTHON_MAJOR\" -gt 3 ] || { [ \"$PYTHON_MAJOR\" -eq 3 ] && [ \"$PYTHON_MINOR\" -ge 11 ]; }; then\n        FLAG=\"--break-system-packages\"\n    fi\n\n    # Return the FLAG (via echo)\n    echo \"$FLAG\"\n}\n\nclean () {\n   rm -rf /tmp/${REPO_NAME}\n   rm -rf ${FLEDGE_ROOT}/python/fledge/plugins/${PLUGIN_TYPE}/fledge-${PLUGIN_TYPE}-${PLUGIN_NAME}\n}\n\nclone_repo () {\n   git clone -b ${BRANCH_NAME} --single-branch https://github.com/fledge-iot/${REPO_NAME}.git /tmp/${REPO_NAME}\n}\n\ncopy_file_and_requirement () {\n    if [ \"$PLUGIN_NAME\" = \"http\" ]; then\n        cp -r /tmp/${REPO_NAME}/python/fledge/plugins/${PLUGIN_TYPE}/${PLUGIN_NAME}_${PLUGIN_TYPE} $FLEDGE_ROOT/python/fledge/plugins/${PLUGIN_TYPE}/\n    else\n        cp -r /tmp/${REPO_NAME}/python/fledge/plugins/${PLUGIN_TYPE}/${PLUGIN_NAME} $FLEDGE_ROOT/python/fledge/plugins/${PLUGIN_TYPE}/\n    fi\n    req_file=$(find /tmp/${REPO_NAME} -name requirement*.txt)\n    BREAK_PKG_FLAG=$(get_pip_break_system_flag)\n    [ ! -z \"${req_file}\" ] && python3 -m pip install --user -Ir ${req_file} ${USE_PIP_CACHE} ${BREAK_PKG_FLAG:+$BREAK_PKG_FLAG} || echo \"No such external dependency needed for ${PLUGIN_NAME} plugin.\"\n}\n\nclean\nclone_repo\ncopy_file_and_requirement\necho \"${PLUGIN_NAME} plugin is installed.\"\n"
  },
  {
    "path": "tests/system/python/scripts/package/remove",
    "content": "#!/usr/bin/env bash\n\nOS_NAME=`(grep -o '^NAME=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')`\n\nif [[ ${OS_NAME} == *\"Red Hat\"* || ${OS_NAME} == *\"CentOS\"* ]]; then\n    time sudo yum remove -y fledge\n    # Do not remove fledge-gui if environment is Docker\n    if [ \"${FLEDGE_ENVIRONMENT}\" != \"docker\" ]; then\n        time sudo yum remove -y fledge-gui\n    fi\n    sudo rm -rf /usr/local/fledge\nelse\n    time sudo apt purge -y fledge\n    # Do not remove fledge-gui if environment is Docker\n    if [ \"${FLEDGE_ENVIRONMENT}\" != \"docker\" ]; then\n        time sudo apt purge -y fledge-gui\n    fi\n    sudo rm -rf /usr/local/fledge\nfi"
  },
  {
    "path": "tests/system/python/scripts/package/reset",
    "content": "#!/usr/bin/env bash\n\n# Check if two arguments are passed\nif [ $# -eq 2 ]; then\n    # If exactly two arguments are passed\n    source ../../../common/scripts/reset_user_authentication \"$1\" \"$2\"\nelse\n    # Default case\n    source ../../../common/scripts/reset_user_authentication \"/usr/local/fledge\"\nfi\n\nif [ \"${FLEDGE_ENVIRONMENT}\" == \"docker\" ]; then\n    /usr/local/fledge/bin/fledge kill\n    echo -e \"YES\\nYES\" | /usr/local/fledge/bin/fledge reset || exit 1\n    echo\n    /usr/local/fledge/bin/fledge start\n    echo \"Fledge Status\"\n    /usr/local/fledge/bin/fledge status\nelse\n    echo \"Stopping Fledge using systemctl ...\"\n    # FIXME: FOGL-1499 After the issue is resolved, remove the explicit 'kill' command and use 'systemctl stop' instead\n    # sudo systemctl stop fledge\n    /usr/local/fledge/bin/fledge kill\n    echo -e \"YES\\nYES\" | /usr/local/fledge/bin/fledge reset || exit 1\n    echo\n    echo \"Starting Fledge using systemctl ...\"\n    # FIXME: FOGL-1499 Once the issue is resolved, replace 'restart' with 'start'\n    sudo systemctl restart fledge\n    echo \"Fledge Status\"\n    sudo systemctl status fledge | grep \"Active\"\nfi\n"
  },
  {
    "path": "tests/system/python/scripts/package/setup",
    "content": "#!/usr/bin/env bash\nset -e\n\n__author__=\"Ashish Jabble\"\n__copyright__=\"Copyright (c) 2019 Dianomic Systems Inc.\"\n__license__=\"Apache 2.0\"\n__version__=\"1.0.0\"\n\n######################################################################################################\n# Usage text for this script\n# $FLEDGE_ROOT/tests/system/python/scripts/package/setup ${PACKAGE_BUILD_VERSION}\n######################################################################################################\n\nPACKAGE_BUILD_VERSION=$1\n\n[[ -z \"${PACKAGE_BUILD_VERSION}\" ]] && echo \"Build Version not found.\" && exit 1\n\nOS_NAME=$(grep -o '^NAME=.*' /etc/os-release | cut -f2 -d\\\" | sed 's/\"//g')\nID=$(cat /etc/os-release | grep -w ID | cut -f2 -d\"=\" | tr -d '\"')\nUNAME=$(uname -m)\nVERSION_ID=$(cat /etc/os-release | grep -w VERSION_ID | cut -f2 -d\"=\" |  tr -d '\"')\necho \"Version ID is ${VERSION_ID}\"\n\n\nif [[ ${OS_NAME} == *\"Red Hat\"* || ${OS_NAME} == *\"CentOS\"* ]]; then\n    if [[ ${VERSION_ID} -eq \"7\" ]]; then ARCHIVE_PKG_OS=\"${ID}7\"; else ARCHIVE_PKG_OS=\"${ID}-stream-9\"; fi\n\n    echo \"Build version is ${PACKAGE_BUILD_VERSION}\"\n    echo \"ID is ${ID} and Archive package OS is ${ARCHIVE_PKG_OS}\"\n    echo \"Uname is ${UNAME}\"\n\n    sudo cp -f /etc/yum.repos.d/fledge.repo /etc/yum.repos.d/fledge.repo.bak | true\n\n    # Configure Fledge repo\n    echo -e \"[fledge]\\n\\\nname=fledge Repository\\n\\\nbaseurl=http://archives.fledge-iot.org/${PACKAGE_BUILD_VERSION}/${ARCHIVE_PKG_OS}/${UNAME}/\\n\\\nenabled=1\\n\\\ngpgkey=http://archives.fledge-iot.org/RPM-GPG-KEY-fledge\\n\\\ngpgcheck=1\" | sudo tee /etc/yum.repos.d/fledge.repo\n\n    # Install prerequisites\n    if [[ ${ID} = \"centos\" ]]; then\n        if [[ ${VERSION_ID} -eq \"7\" ]]; then\n            sudo yum install -y centos-release-scl-rh\n            sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm\n        fi\n    elif [[ ${ID} = 
\"rhel\" ]]; then\n        sudo yum-config-manager --enable 'Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server from RHUI'\n    fi\n\n    sudo yum -y check-update && sudo yum -y update\n    echo \"==================== DONE check-update, update ============================\"\n\n    time sudo yum install -y fledge\n    echo \"==================== DONE INSTALLING Fledge ==================\"\n\n    if [ \"${FLEDGE_ENVIRONMENT}\" != \"docker\" ]; then\n        time sudo yum install -y fledge-gui\n        echo \"==================== DONE INSTALLING Fledge GUI ======================\"\n    fi\nelse\n    if [[ ${ID} = \"ubuntu\" ]]; then\n        VERSION_ID=$(echo \"${VERSION_ID//.}\")\n        ID=\"ubuntu${VERSION_ID}\";\n    elif [[ ${ID} = \"raspbian\" ]]; then\n        ID=$(cat /etc/os-release | grep VERSION_CODENAME | cut -f2 -d\"=\")\n        UNAME=\"armv7l\"\n    fi\n    \n    echo \"Build version is \"${PACKAGE_BUILD_VERSION}\n    echo \"ID is \"${ID}\n    echo \"uname is \"${UNAME}\n    \n    if [[ -f /etc/apt/sources.list.d/fledge.list ]]; then cd /etc/apt/sources.list.d/ && sudo cp -f fledge.list fledge.list.bak && sudo rm -f fledge.list; fi\n    sudo sed -i \"/\\b\\(archives.fledge-iot.org\\)\\b/d\" /etc/apt/sources.list\n\n    sudo apt update && sudo apt upgrade -y && sudo apt update\n    echo \"==================== DONE update, upgrade, update ============================\"\n\n    wget -q -O - http://archives.fledge-iot.org/KEY.gpg | sudo apt-key add -\n    echo \"deb http://archives.fledge-iot.org/${PACKAGE_BUILD_VERSION}/${ID}/${UNAME}/ /\" | sudo tee -a /etc/apt/sources.list.d/fledge.list\n    sudo apt update\n    \n    time sudo DEBIAN_FRONTEND=noninteractive apt install -yq fledge\n    echo \"==================== DONE INSTALLING Fledge ==================\"\n    \n    if [ \"${FLEDGE_ENVIRONMENT}\" != \"docker\" ]; then     \n        time sudo apt install -y fledge-gui\n        echo \"==================== DONE INSTALLING Fledge GUI 
======================\"\n    fi\nfi\n"
  },
  {
    "path": "tests/system/python/scripts/reset_plugins",
    "content": "#!/usr/bin/env bash\nset -e\n\n__author__=\"Ashish Jabble\"\n__copyright__=\"Copyright (c) 2019 Dianomic Systems\"\n__license__=\"Apache 2.0\"\n__version__=\"1.0.0\"\n\n#########################################################\n# Usage text for this script\n# $FLEDGE_ROOT/tests/system/python/scripts/reset_plugins\n#########################################################\n\nremove_south () {\n    rm -rf $FLEDGE_ROOT/plugins/south\n    path=\"$FLEDGE_ROOT/python/fledge/plugins/south\"\n    find ${path} -maxdepth 1 | grep -v \"^${path}$\" | egrep -v '(__init__.py)' | xargs rm -rf\n}\n\nremove_north () {\n    path=\"$FLEDGE_ROOT/plugins/north\"\n    find ${path} -maxdepth 1 | grep -v \"^${path}$\" | egrep -v '(OMF)' | xargs rm -rf\n    path=\"$FLEDGE_ROOT/python/fledge/plugins/north\"\n    find ${path} -maxdepth 1 | grep -v \"^${path}$\" | egrep -v '(common|empty|README.rst|__init__.py)' | xargs rm -rf\n}\n\nremove_filter () {\n    path=\"$FLEDGE_ROOT/plugins/filter\"\n    find ${path} -maxdepth 1 | grep -v \"^${path}$\" | egrep -v '(common)' | xargs rm -rf\n    path=\"$FLEDGE_ROOT/python/fledge/plugins/filter\"\n    find ${path} -maxdepth 1 | grep -v \"^${path}$\" | egrep -v '(__init__.py)' | xargs rm -rf\n}\n\nremove_notification_delivery () {\n    rm -rf $FLEDGE_ROOT/plugins/notificationDelivery\n    rm -rf $FLEDGE_ROOT/python/fledge/plugins/notificationDelivery\n}\n\nremove_notification_rule () {\n    rm -rf $FLEDGE_ROOT/plugins/notificationRule\n    rm -rf $FLEDGE_ROOT/python/fledge/plugins/notificationRule\n}\n\nremove_all () {\n    remove_south\n    remove_north\n    remove_filter\n    remove_notification_delivery\n    remove_notification_rule\n}\n\nremove_all\necho \"Removed all plugins...\""
  },
  {
    "path": "tests/system/python/smoke/test_smoke.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test system/python/test_smoke.py\n\n\"\"\"\nimport os\nimport subprocess\nimport http.client\nimport json\nimport time\nimport pytest\nimport utils\n\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nTEMPLATE_NAME = \"template.json\"\nSENSOR_VALUE = 10\n\n\ndef get_ping_status(fledge_url):\n    _connection = http.client.HTTPConnection(fledge_url)\n    _connection.request(\"GET\", '/fledge/ping')\n    r = _connection.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    return jdoc\n\n\ndef get_statistics_map(fledge_url):\n    _connection = http.client.HTTPConnection(fledge_url)\n    _connection.request(\"GET\", '/fledge/statistics')\n    r = _connection.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    jdoc = json.loads(r)\n    return utils.serialize_stats_map(jdoc)\n\n\n@pytest.fixture\ndef start_south_coap(reset_and_start_fledge, add_south, remove_data_file, remove_directories, south_branch,\n                     fledge_url, south_plugin=\"coap\", asset_name=\"smoke\"):\n    \"\"\" This fixture clones a south plugin repo and starts a south service instance\n        reset_and_start_fledge: Fixture that resets and starts Fledge; no explicit invocation, called at start\n        add_south: Fixture that adds a south service with the given configuration\n        remove_data_file: Fixture that removes the data file created during the tests\n        remove_directories: Fixture that removes directories created during the tests\"\"\"\n\n    # Define the template file for fogbench\n    fogbench_template_path = os.path.join(\n        os.path.expandvars('${FLEDGE_ROOT}'), 'data/{}'.format(TEMPLATE_NAME))\n    with open(fogbench_template_path, \"w\") as f:\n        f.write(\n            
'[{\"name\": \"%s\", \"sensor_values\": '\n            '[{\"name\": \"sensor\", \"type\": \"number\", \"min\": %d, \"max\": %d, \"precision\": 0}]}]' % (\n                asset_name, SENSOR_VALUE, SENSOR_VALUE))\n\n    add_south(south_plugin, south_branch, fledge_url, service_name=\"coap\")\n\n    yield start_south_coap\n\n    # Cleanup code that runs after the caller test is over\n    remove_data_file(fogbench_template_path)\n    remove_directories(\"/tmp/fledge-south-{}\".format(south_plugin))\n\n\ndef test_smoke(start_south_coap, fledge_url, wait_time, asset_name=\"smoke\"):\n    \"\"\" Test that data is inserted in Fledge\n        start_south_coap: Fixture that starts Fledge with south coap plugin\n        Assertions:\n            on endpoint GET /fledge/asset\n            on endpoint GET /fledge/asset/<asset_name>\n    \"\"\"\n\n    conn = http.client.HTTPConnection(fledge_url)\n    time.sleep(wait_time)\n    subprocess.run([\"cd $FLEDGE_ROOT/extras/python; python3 -m fogbench -t ../../data/{}; cd -\".format(TEMPLATE_NAME)],\n                   shell=True, check=True)\n    time.sleep(wait_time)\n\n    ping_response = get_ping_status(fledge_url)\n    assert 1 == ping_response[\"dataRead\"]\n    assert 0 == ping_response[\"dataSent\"]\n\n    actual_stats_map = get_statistics_map(fledge_url)\n    assert 1 == actual_stats_map[asset_name.upper()]\n    assert 1 == actual_stats_map['READINGS']\n\n    conn.request(\"GET\", '/fledge/asset')\n    r = conn.getresponse()\n\n    assert 200 == r.status\n    r = r.read().decode()\n    retval = json.loads(r)\n    assert len(retval) == 1\n    assert asset_name == retval[0][\"assetCode\"]\n    assert 1 == retval[0][\"count\"]\n\n    conn.request(\"GET\", '/fledge/asset/{}'.format(asset_name))\n    r = conn.getresponse()\n    assert 200 == r.status\n    r = r.read().decode()\n    retval = json.loads(r)\n    assert {'sensor': SENSOR_VALUE} == retval[0][\"reading\"]\n\n    tracking_details = 
utils.get_asset_tracking_details(fledge_url, \"Ingest\")\n    assert len(tracking_details[\"track\"]), \"Failed to track Ingest event\"\n    tracked_item = tracking_details[\"track\"][0]\n    assert \"coap\" == tracked_item[\"service\"]\n    assert \"smoke\" == tracked_item[\"asset\"]\n    assert \"coap\" == tracked_item[\"plugin\"]\n"
  },
  {
    "path": "tests/unit/C/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR})\nset(GCOVR_PATH \"$ENV{HOME}/.local/bin/gcovr\")\n\ninclude(CodeCoverage)\nappend_coverage_compiler_flags()\n \nset(CMAKE_CXX_FLAGS \"-std=c++11 -O0\")\n\nset(UUIDLIB -luuid)\nset(COMMONLIB -ldl)\n\nEXECUTE_PROCESS( COMMAND grep -o ^NAME=.* /etc/os-release COMMAND cut -f2 -d\\\" COMMAND sed s/\\\"//g OUTPUT_VARIABLE os_name )\nEXECUTE_PROCESS( COMMAND grep -o ^VERSION_ID=.* /etc/os-release COMMAND cut -f2 -d\\\" COMMAND sed s/\\\"//g OUTPUT_VARIABLE os_version )\n\nif ( ( ${os_name} MATCHES \"Red Hat\" OR ${os_name} MATCHES \"CentOS\") AND ( ${os_version} MATCHES \"7\" ) )\n        add_compile_options(-D RHEL_CENTOS_7)\n        message( \"System is RHEL/CentOS 7\" )\nelse()\n        message( \"System is not RHEL/CentOS 7\" )\nendif()\n\nset(BOOST_COMPONENTS system thread)\n# Late 2017 TODO: remove the following checks and always use std::regex\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        set(BOOST_COMPONENTS ${BOOST_COMPONENTS} regex)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DUSE_BOOST_REGEX\")\n    endif()\nendif()\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\n# Find python3.x dev/lib package\nfind_package(PkgConfig REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 COMPONENTS Interpreter Development NumPy)\nendif()\n\nif(Python3_VERSION VERSION_GREATER_EQUAL 3.12)\n    # Now you can use Python3_NumPy_INCLUDE_DIRS in your project\n    message(STATUS \"Using NumPy include dirs: ${Python3_NumPy_INCLUDE_DIRS}\")\n    
include_directories(${Python3_NumPy_INCLUDE_DIRS})\nendif()\n\ninclude_directories(../../../C/common/include)\ninclude_directories(../../../C/plugins/common/include)\ninclude_directories(../../../C/plugins/north/OMF/include)\ninclude_directories(../../../C/services/common/include)\ninclude_directories(../../../C/thirdparty/rapidjson/include)\ninclude_directories(../../../C/thirdparty/Simple-Web-Server)\ninclude_directories(../../../C/plugins/storage/common/include)\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS})\nendif()\n\nset(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/../lib)\n\n# Find source files\nfile(GLOB COMMON_LIB_SOURCES ../../../C/common/*.cpp)\n\n# Create shared library\nadd_library(common-lib SHARED ${COMMON_LIB_SOURCES})\ntarget_link_libraries(common-lib ${UUIDLIB})\ntarget_link_libraries(common-lib ${Boost_LIBRARIES})\ntarget_link_libraries(common-lib -lcrypto)\nset_target_properties(common-lib PROPERTIES SOVERSION 1)\n\n\n# Find source files\nfile(GLOB SERVICES_COMMON_LIB_SOURCES ../../../C/services/common/*.cpp)\n\n# Create shared library\nadd_library(services-common-lib SHARED ${SERVICES_COMMON_LIB_SOURCES})\ntarget_link_libraries(services-common-lib ${COMMONLIB})\nset_target_properties(services-common-lib PROPERTIES SOVERSION 1)\n\n\n# Find source files\nfile(GLOB PLUGINS_COMMON_LIB_SOURCES ../../../C/plugins/common/*.cpp)\n\n# Create shared library\nset(LIBCURL_LIB -lcurl)\n\nadd_library(plugins-common-lib SHARED ${PLUGINS_COMMON_LIB_SOURCES})\ntarget_link_libraries(plugins-common-lib ${Boost_LIBRARIES} common-lib services-common-lib z ssl crypto)\ntarget_link_libraries(plugins-common-lib ${LIBCURL_LIB})\n\nset_target_properties(plugins-common-lib PROPERTIES SOVERSION 1)\n\n#\n# OMF library\n#\nset(LIB_NAME OMF)\nfile(GLOB OMF_LIB_SOURCES\n        ../../../C/plugins/north/OMF/omf.cpp\n        
../../../C/plugins/north/OMF/omfbuffer.cpp\n        ../../../C/plugins/north/OMF/omfhints.cpp\n        ../../../C/plugins/north/OMF/OMFError.cpp\n\t../../../C/plugins/north/OMF/linkdata.cpp)\n\nadd_library(${LIB_NAME}  SHARED ${OMF_LIB_SOURCES})\ntarget_link_libraries(${LIB_NAME}\n                        common-lib\n                        plugins-common-lib\n                        ssl\n                        crypto)\n\nset_target_properties(${LIB_NAME}  PROPERTIES SOVERSION 1)\n\n#\n# storage-common-lib\n#\nset(LIB_NAME storage-common-lib)\nset(DLLIB -ldl)\n\n# Find source files\nfile(GLOB STORAGE_COMMON_LIB_SOURCE ../../../C/plugins/storage/common/*.cpp)\n\n# Create shared library\nadd_library(${LIB_NAME} SHARED ${STORAGE_COMMON_LIB_SOURCE})\ntarget_link_libraries(${LIB_NAME} ${DLLIB})\nset_target_properties(${LIB_NAME} PROPERTIES SOVERSION 1)\n\nadd_subdirectory(cmake_pg)\nadd_subdirectory(cmake_sqlite)\nadd_subdirectory(cmake_sqlitelb)\nadd_subdirectory(cmake_sqliteM)\n\n"
  },
  {
    "path": "tests/unit/C/CodeCoverage.cmake",
    "content": "# Copyright (c) 2012 - 2017, Lars Bilke\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without modification,\n# are permitted provided that the following conditions are met:\n#\n# 1. Redistributions of source code must retain the above copyright notice, this\n#    list of conditions and the following disclaimer.\n#\n# 2. Redistributions in binary form must reproduce the above copyright notice,\n#    this list of conditions and the following disclaimer in the documentation\n#    and/or other materials provided with the distribution.\n#\n# 3. Neither the name of the copyright holder nor the names of its contributors\n#    may be used to endorse or promote products derived from this software without\n#    specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR\n# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON\n# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n#\n# CHANGES:\n#\n# 2012-01-31, Lars Bilke\n# - Enable Code Coverage\n#\n# 2013-09-17, Joakim Söderberg\n# - Added support for Clang.\n# - Some additional usage instructions.\n#\n# 2016-02-03, Lars Bilke\n# - Refactored functions to use named parameters\n#\n# 2017-06-02, Lars Bilke\n# - Merged with modified version from github.com/ufz/ogs\n#\n# 2019-05-06, Anatolii Kurotych\n# - Remove unnecessary --coverage flag\n#\n# 2019-12-13, FeRD (Frank Dana)\n# - Deprecate COVERAGE_LCOVR_EXCLUDES and COVERAGE_GCOVR_EXCLUDES lists in favor\n#   of tool-agnostic COVERAGE_EXCLUDES variable, or EXCLUDE setup arguments.\n# - CMake 3.4+: All excludes can be specified relative to BASE_DIRECTORY\n# - All setup functions: accept BASE_DIRECTORY, EXCLUDE list\n# - Set lcov basedir with -b argument\n# - Add automatic --demangle-cpp in lcovr, if 'c++filt' is available (can be\n#   overridden with NO_DEMANGLE option in setup_target_for_coverage_lcovr().)\n# - Delete output dir, .info file on 'make clean'\n# - Remove Python detection, since version mismatches will break gcovr\n# - Minor cleanup (lowercase function names, update examples...)\n#\n# 2019-12-19, FeRD (Frank Dana)\n# - Rename Lcov outputs, make filtered file canonical, fix cleanup for targets\n#\n# 2020-01-19, Bob Apthorpe\n# - Added gfortran support\n#\n# 2020-02-17, FeRD (Frank Dana)\n# - Make all add_custom_target()s VERBATIM to auto-escape wildcard characters\n#   in EXCLUDEs, and remove manual 
escaping from gcovr targets\n#\n# 2021-01-19, Robin Mueller\n# - Add CODE_COVERAGE_VERBOSE option which will allow to print out commands which are run\n# - Added the option for users to set the GCOVR_ADDITIONAL_ARGS variable to supply additional\n#   flags to the gcovr command\n#\n# 2020-05-04, Michael Davis\n#     - Add -fprofile-abs-path to make gcno files contain absolute paths\n#     - Fix BASE_DIRECTORY not working when defined\n#     - Change BYPRODUCT from folder to index.html to stop ninja from complaining about double defines\n#\n# 2021-05-10, Martin Stump\n#     - Check if the generator is multi-config before warning about non-Debug builds\n#\n# 2022-02-22, Marko Wehle\n#     - Change gcovr output from -o <filename> to --xml <filename> and --html <filename> output respectively.\n#       This will allow for Multiple Output Formats at the same time by making use of GCOVR_ADDITIONAL_ARGS, e.g. GCOVR_ADDITIONAL_ARGS \"--txt\".\n#\n# USAGE:\n#\n# 1. Copy this file into your cmake modules path.\n#\n# 2. Add the following line to your CMakeLists.txt (best inside an if-condition\n#    using a CMake option() to enable it just optionally):\n#      include(CodeCoverage)\n#\n# 3. Append necessary compiler flags for all supported source files:\n#      append_coverage_compiler_flags()\n#    Or for specific target:\n#      append_coverage_compiler_flags_to_target(YOUR_TARGET_NAME)\n#\n# 3.a (OPTIONAL) Set appropriate optimization flags, e.g. -O0, -O1 or -Og\n#\n# 4. 
If you need to exclude additional directories from the report, specify them\n#    using full paths in the COVERAGE_EXCLUDES variable before calling\n#    setup_target_for_coverage_*().\n#    Example:\n#      set(COVERAGE_EXCLUDES\n#          '${PROJECT_SOURCE_DIR}/src/dir1/*'\n#          '/path/to/my/src/dir2/*')\n#    Or, use the EXCLUDE argument to setup_target_for_coverage_*().\n#    Example:\n#      setup_target_for_coverage_lcov(\n#          NAME coverage\n#          EXECUTABLE testrunner\n#          EXCLUDE \"${PROJECT_SOURCE_DIR}/src/dir1/*\" \"/path/to/my/src/dir2/*\")\n#\n# 4.a NOTE: With CMake 3.4+, COVERAGE_EXCLUDES or EXCLUDE can also be set\n#     relative to the BASE_DIRECTORY (default: PROJECT_SOURCE_DIR)\n#     Example:\n#       set(COVERAGE_EXCLUDES \"dir1/*\")\n#       setup_target_for_coverage_gcovr_html(\n#           NAME coverage\n#           EXECUTABLE testrunner\n#           BASE_DIRECTORY \"${PROJECT_SOURCE_DIR}/src\"\n#           EXCLUDE \"dir2/*\")\n#\n# 5. Use the functions described below to create a custom make target which\n#    runs your test executable and produces a code coverage report.\n#\n# 6. Build a Debug build:\n#      cmake -DCMAKE_BUILD_TYPE=Debug ..\n#      make\n#      make my_coverage_target\n#\n\ninclude(CMakeParseArguments)\n\noption(CODE_COVERAGE_VERBOSE \"Verbose information\" TRUE)\n\n# Check prereqs\nfind_program( GCOV_PATH gcov )\nfind_program( LCOV_PATH  NAMES lcov lcov.bat lcov.exe lcov.perl)\nfind_program( FASTCOV_PATH NAMES fastcov fastcov.py )\nfind_program( GENHTML_PATH NAMES genhtml genhtml.perl genhtml.bat )\nfind_program( GCOVR_PATH gcovr PATHS ${CMAKE_SOURCE_DIR}/scripts/test)\nfind_program( CPPFILT_PATH NAMES c++filt )\n\nif(NOT GCOV_PATH)\n    message(FATAL_ERROR \"gcov not found! 
Aborting...\")\nendif() # NOT GCOV_PATH\n\nget_property(LANGUAGES GLOBAL PROPERTY ENABLED_LANGUAGES)\nlist(GET LANGUAGES 0 LANG)\n\nif(\"${CMAKE_${LANG}_COMPILER_ID}\" MATCHES \"(Apple)?[Cc]lang\")\n    if(\"${CMAKE_${LANG}_COMPILER_VERSION}\" VERSION_LESS 3)\n        message(FATAL_ERROR \"Clang version must be 3.0.0 or greater! Aborting...\")\n    endif()\nelseif(NOT CMAKE_COMPILER_IS_GNUCXX)\n    if(\"${CMAKE_Fortran_COMPILER_ID}\" MATCHES \"[Ff]lang\")\n        # Do nothing; exit conditional without error if true\n    elseif(\"${CMAKE_Fortran_COMPILER_ID}\" MATCHES \"GNU\")\n        # Do nothing; exit conditional without error if true\n    else()\n        message(FATAL_ERROR \"Compiler is not GNU gcc! Aborting...\")\n    endif()\nendif()\n\nset(COVERAGE_COMPILER_FLAGS \"-g -fprofile-arcs -ftest-coverage\"\n    CACHE INTERNAL \"\")\nif(CMAKE_CXX_COMPILER_ID MATCHES \"(GNU|Clang)\")\n    include(CheckCXXCompilerFlag)\n    check_cxx_compiler_flag(-fprofile-abs-path HAVE_fprofile_abs_path)\n    if(HAVE_fprofile_abs_path)\n        set(COVERAGE_COMPILER_FLAGS \"${COVERAGE_COMPILER_FLAGS} -fprofile-abs-path\")\n    endif()\nendif()\n\nset(CMAKE_Fortran_FLAGS_COVERAGE\n    ${COVERAGE_COMPILER_FLAGS}\n    CACHE STRING \"Flags used by the Fortran compiler during coverage builds.\"\n    FORCE )\nset(CMAKE_CXX_FLAGS_COVERAGE\n    ${COVERAGE_COMPILER_FLAGS}\n    CACHE STRING \"Flags used by the C++ compiler during coverage builds.\"\n    FORCE )\nset(CMAKE_C_FLAGS_COVERAGE\n    ${COVERAGE_COMPILER_FLAGS}\n    CACHE STRING \"Flags used by the C compiler during coverage builds.\"\n    FORCE )\nset(CMAKE_EXE_LINKER_FLAGS_COVERAGE\n    \"\"\n    CACHE STRING \"Flags used for linking binaries during coverage builds.\"\n    FORCE )\nset(CMAKE_SHARED_LINKER_FLAGS_COVERAGE\n    \"\"\n    CACHE STRING \"Flags used by the shared libraries linker during coverage builds.\"\n    FORCE )\nmark_as_advanced(\n    CMAKE_Fortran_FLAGS_COVERAGE\n    CMAKE_CXX_FLAGS_COVERAGE\n    
CMAKE_C_FLAGS_COVERAGE\n    CMAKE_EXE_LINKER_FLAGS_COVERAGE\n    CMAKE_SHARED_LINKER_FLAGS_COVERAGE )\n\nget_property(GENERATOR_IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)\nif(NOT (CMAKE_BUILD_TYPE STREQUAL \"Debug\" OR GENERATOR_IS_MULTI_CONFIG))\n    message(WARNING \"Code coverage results with an optimised (non-Debug) build may be misleading\")\nendif() # NOT (CMAKE_BUILD_TYPE STREQUAL \"Debug\" OR GENERATOR_IS_MULTI_CONFIG)\n\nif(CMAKE_C_COMPILER_ID STREQUAL \"GNU\" OR CMAKE_Fortran_COMPILER_ID STREQUAL \"GNU\")\n    link_libraries(gcov)\nendif()\n\n# Defines a target for running and collection code coverage information\n# Builds dependencies, runs the given executable and outputs reports.\n# NOTE! The executable should always have a ZERO as exit code otherwise\n# the coverage generation will not complete.\n#\n# setup_target_for_coverage_lcov(\n#     NAME testrunner_coverage                    # New target name\n#     EXECUTABLE testrunner -j ${PROCESSOR_COUNT} # Executable in PROJECT_BINARY_DIR\n#     DEPENDENCIES testrunner                     # Dependencies to build first\n#     BASE_DIRECTORY \"../\"                        # Base directory for report\n#                                                 #  (defaults to PROJECT_SOURCE_DIR)\n#     EXCLUDE \"src/dir1/*\" \"src/dir2/*\"           # Patterns to exclude (can be relative\n#                                                 #  to BASE_DIRECTORY, with CMake 3.4+)\n#     NO_DEMANGLE                                 # Don't demangle C++ symbols\n#                                                 #  even if c++filt is found\n# )\nfunction(setup_target_for_coverage_lcov)\n\n    set(options NO_DEMANGLE)\n    set(oneValueArgs BASE_DIRECTORY NAME)\n    set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES LCOV_ARGS GENHTML_ARGS)\n    cmake_parse_arguments(Coverage \"${options}\" \"${oneValueArgs}\" \"${multiValueArgs}\" ${ARGN})\n\n    if(NOT LCOV_PATH)\n        message(FATAL_ERROR 
\"lcov not found! Aborting...\")\n    endif() # NOT LCOV_PATH\n\n    if(NOT GENHTML_PATH)\n        message(FATAL_ERROR \"genhtml not found! Aborting...\")\n    endif() # NOT GENHTML_PATH\n\n    # Set base directory (as absolute path), or default to PROJECT_SOURCE_DIR\n    if(DEFINED Coverage_BASE_DIRECTORY)\n        get_filename_component(BASEDIR ${Coverage_BASE_DIRECTORY} ABSOLUTE)\n    else()\n        set(BASEDIR ${PROJECT_SOURCE_DIR})\n    endif()\n\n    # Collect excludes (CMake 3.4+: Also compute absolute paths)\n    set(LCOV_EXCLUDES \"\")\n    foreach(EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_LCOV_EXCLUDES})\n        if(CMAKE_VERSION VERSION_GREATER 3.4)\n            get_filename_component(EXCLUDE ${EXCLUDE} ABSOLUTE BASE_DIR ${BASEDIR})\n        endif()\n        list(APPEND LCOV_EXCLUDES \"${EXCLUDE}\")\n    endforeach()\n    list(REMOVE_DUPLICATES LCOV_EXCLUDES)\n\n    # Conditional arguments\n    if(CPPFILT_PATH AND NOT ${Coverage_NO_DEMANGLE})\n      set(GENHTML_EXTRA_ARGS \"--demangle-cpp\")\n    endif()\n     \n    # Setting up commands which will be run to generate coverage data.\n    # Cleanup lcov\n    set(LCOV_CLEAN_CMD \n        ${LCOV_PATH} ${Coverage_LCOV_ARGS} --gcov-tool ${GCOV_PATH} -directory . \n        -b ${BASEDIR} --zerocounters\n    )\n    # Create baseline to make sure untouched files show up in the report\n    set(LCOV_BASELINE_CMD \n        ${LCOV_PATH} ${Coverage_LCOV_ARGS} --gcov-tool ${GCOV_PATH} -c -i -d . -b \n        ${BASEDIR} -o ${Coverage_NAME}.base\n    )\n    # Run tests\n    set(LCOV_EXEC_TESTS_CMD \n        ${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS}\n    )    \n    # Capturing lcov counters and generating report\n    set(LCOV_CAPTURE_CMD \n        ${LCOV_PATH} ${Coverage_LCOV_ARGS} --gcov-tool ${GCOV_PATH} --directory . 
-b \n        ${BASEDIR} --capture --output-file ${Coverage_NAME}.capture\n    )\n    # add baseline counters\n    set(LCOV_BASELINE_COUNT_CMD\n        ${LCOV_PATH} ${Coverage_LCOV_ARGS} --gcov-tool ${GCOV_PATH} -a ${Coverage_NAME}.base \n        -a ${Coverage_NAME}.capture --output-file ${Coverage_NAME}.total\n    ) \n    # filter collected data to final coverage report\n    set(LCOV_FILTER_CMD \n        ${LCOV_PATH} ${Coverage_LCOV_ARGS} --gcov-tool ${GCOV_PATH} --remove \n        ${Coverage_NAME}.total ${LCOV_EXCLUDES} --output-file ${Coverage_NAME}.info\n    )    \n    # Generate HTML output\n    set(LCOV_GEN_HTML_CMD\n        ${GENHTML_PATH} ${GENHTML_EXTRA_ARGS} ${Coverage_GENHTML_ARGS} -o \n        ${Coverage_NAME} ${Coverage_NAME}.info\n    )\n    \n\n    if(CODE_COVERAGE_VERBOSE)\n        message(STATUS \"Executed command report\")\n        message(STATUS \"Command to clean up lcov: \")\n        string(REPLACE \";\" \" \" LCOV_CLEAN_CMD_SPACED \"${LCOV_CLEAN_CMD}\")\n        message(STATUS \"${LCOV_CLEAN_CMD_SPACED}\")\n\n        message(STATUS \"Command to create baseline: \")\n        string(REPLACE \";\" \" \" LCOV_BASELINE_CMD_SPACED \"${LCOV_BASELINE_CMD}\")\n        message(STATUS \"${LCOV_BASELINE_CMD_SPACED}\")\n\n        message(STATUS \"Command to run the tests: \")\n        string(REPLACE \";\" \" \" LCOV_EXEC_TESTS_CMD_SPACED \"${LCOV_EXEC_TESTS_CMD}\")\n        message(STATUS \"${LCOV_EXEC_TESTS_CMD_SPACED}\")\n\n        message(STATUS \"Command to capture counters and generate report: \")\n        string(REPLACE \";\" \" \" LCOV_CAPTURE_CMD_SPACED \"${LCOV_CAPTURE_CMD}\")\n        message(STATUS \"${LCOV_CAPTURE_CMD_SPACED}\")\n\n        message(STATUS \"Command to add baseline counters: \")\n        string(REPLACE \";\" \" \" LCOV_BASELINE_COUNT_CMD_SPACED \"${LCOV_BASELINE_COUNT_CMD}\")\n        message(STATUS \"${LCOV_BASELINE_COUNT_CMD_SPACED}\")\n\n        message(STATUS \"Command to filter collected data: \")\n        string(REPLACE 
\";\" \" \" LCOV_FILTER_CMD_SPACED \"${LCOV_FILTER_CMD}\")\n        message(STATUS \"${LCOV_FILTER_CMD_SPACED}\")\n\n        message(STATUS \"Command to generate lcov HTML output: \")\n        string(REPLACE \";\" \" \" LCOV_GEN_HTML_CMD_SPACED \"${LCOV_GEN_HTML_CMD}\")\n        message(STATUS \"${LCOV_GEN_HTML_CMD_SPACED}\")\n    endif()\n\n    # Setup target\n    add_custom_target(${Coverage_NAME}\n        COMMAND ${LCOV_CLEAN_CMD}\n        COMMAND ${LCOV_BASELINE_CMD} \n        COMMAND ${LCOV_EXEC_TESTS_CMD}\n        COMMAND ${LCOV_CAPTURE_CMD}\n        COMMAND ${LCOV_BASELINE_COUNT_CMD}\n        COMMAND ${LCOV_FILTER_CMD} \n        COMMAND ${LCOV_GEN_HTML_CMD}\n\n        # Set output files as GENERATED (will be removed on 'make clean')\n        BYPRODUCTS\n            ${Coverage_NAME}.base\n            ${Coverage_NAME}.capture\n            ${Coverage_NAME}.total\n            ${Coverage_NAME}.info\n            ${Coverage_NAME}/index.html\n        WORKING_DIRECTORY ${PROJECT_BINARY_DIR}\n        DEPENDS ${Coverage_DEPENDENCIES}\n        VERBATIM # Protect arguments to commands\n        COMMENT \"Resetting code coverage counters to zero.\\nProcessing code coverage counters and generating report.\"\n    )\n\n    # Show where to find the lcov info report\n    add_custom_command(TARGET ${Coverage_NAME} POST_BUILD\n        COMMAND ;\n        COMMENT \"Lcov code coverage info report saved in ${Coverage_NAME}.info.\"\n    )\n\n    # Show info where to find the report\n    add_custom_command(TARGET ${Coverage_NAME} POST_BUILD\n        COMMAND ;\n        COMMENT \"Open ./${Coverage_NAME}/index.html in your browser to view the coverage report.\"\n    )\n\nendfunction() # setup_target_for_coverage_lcov\n\n# Defines a target for running and collection code coverage information\n# Builds dependencies, runs the given executable and outputs reports.\n# NOTE! 
The executable should always have a ZERO as exit code otherwise\n# the coverage generation will not complete.\n#\n# setup_target_for_coverage_gcovr_xml(\n#     NAME ctest_coverage                    # New target name\n#     EXECUTABLE ctest -j ${PROCESSOR_COUNT} # Executable in PROJECT_BINARY_DIR\n#     DEPENDENCIES executable_target         # Dependencies to build first\n#     BASE_DIRECTORY \"../\"                   # Base directory for report\n#                                            #  (defaults to PROJECT_SOURCE_DIR)\n#     EXCLUDE \"src/dir1/*\" \"src/dir2/*\"      # Patterns to exclude (can be relative\n#                                            #  to BASE_DIRECTORY, with CMake 3.4+)\n# )\n# The user can set the variable GCOVR_ADDITIONAL_ARGS to supply additional flags to the\n# GCOVR command.\nfunction(setup_target_for_coverage_gcovr_xml)\n\n    set(options NONE)\n    set(oneValueArgs BASE_DIRECTORY NAME)\n    set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)\n    cmake_parse_arguments(Coverage \"${options}\" \"${oneValueArgs}\" \"${multiValueArgs}\" ${ARGN})\n\n    if(NOT GCOVR_PATH)\n        message(FATAL_ERROR \"gcovr not found! 
Aborting...\")\n    endif() # NOT GCOVR_PATH\n\n    # Set base directory (as absolute path), or default to PROJECT_SOURCE_DIR\n    if(DEFINED Coverage_BASE_DIRECTORY)\n        get_filename_component(BASEDIR ${Coverage_BASE_DIRECTORY} ABSOLUTE)\n    else()\n        set(BASEDIR ${PROJECT_SOURCE_DIR})\n    endif()\n\n    # Collect excludes (CMake 3.4+: Also compute absolute paths)\n    set(GCOVR_EXCLUDES \"\")\n    foreach(EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_GCOVR_EXCLUDES})\n        if(CMAKE_VERSION VERSION_GREATER 3.4)\n            get_filename_component(EXCLUDE ${EXCLUDE} ABSOLUTE BASE_DIR ${BASEDIR})\n        endif()\n        list(APPEND GCOVR_EXCLUDES \"${EXCLUDE}\")\n    endforeach()\n    list(REMOVE_DUPLICATES GCOVR_EXCLUDES)\n\n    # Combine excludes to several -e arguments\n    set(GCOVR_EXCLUDE_ARGS \"\")\n    foreach(EXCLUDE ${GCOVR_EXCLUDES})\n        list(APPEND GCOVR_EXCLUDE_ARGS \"-e\")\n        list(APPEND GCOVR_EXCLUDE_ARGS \"${EXCLUDE}\")\n    endforeach()\n    \n    # Set up commands which will be run to generate coverage data\n    # Run tests\n    set(GCOVR_XML_EXEC_TESTS_CMD\n        ${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS}\n    )\n    # Running gcovr\n    set(GCOVR_XML_CMD\n        ${GCOVR_PATH} --xml ${Coverage_NAME}.xml -r ${BASEDIR} ${GCOVR_ADDITIONAL_ARGS}\n        ${GCOVR_EXCLUDE_ARGS} --object-directory=${PROJECT_BINARY_DIR}\n    )\n    \n    if(CODE_COVERAGE_VERBOSE)\n        message(STATUS \"Executed command report\")\n\n        message(STATUS \"Command to run tests: \")\n        string(REPLACE \";\" \" \" GCOVR_XML_EXEC_TESTS_CMD_SPACED \"${GCOVR_XML_EXEC_TESTS_CMD}\")\n        message(STATUS \"${GCOVR_XML_EXEC_TESTS_CMD_SPACED}\")\n\n        message(STATUS \"Command to generate gcovr XML coverage data: \")\n        string(REPLACE \";\" \" \" GCOVR_XML_CMD_SPACED \"${GCOVR_XML_CMD}\")\n        message(STATUS \"${GCOVR_XML_CMD_SPACED}\")\n    endif()\n\n    add_custom_target(${Coverage_NAME}\n        
COMMAND ${GCOVR_XML_EXEC_TESTS_CMD}\n        COMMAND ${GCOVR_XML_CMD}\n        \n        BYPRODUCTS ${Coverage_NAME}.xml\n        WORKING_DIRECTORY ${PROJECT_BINARY_DIR}\n        DEPENDS ${Coverage_DEPENDENCIES}\n        VERBATIM # Protect arguments to commands\n        COMMENT \"Running gcovr to produce Cobertura code coverage report.\"\n    )\n\n    # Show info where to find the report\n    add_custom_command(TARGET ${Coverage_NAME} POST_BUILD\n        COMMAND ;\n        COMMENT \"Cobertura code coverage report saved in ${Coverage_NAME}.xml.\"\n    )\nendfunction() # setup_target_for_coverage_gcovr_xml\n\n# Defines a target for running and collection code coverage information\n# Builds dependencies, runs the given executable and outputs reports.\n# NOTE! The executable should always have a ZERO as exit code otherwise\n# the coverage generation will not complete.\n#\n# setup_target_for_coverage_gcovr_html(\n#     NAME ctest_coverage                    # New target name\n#     EXECUTABLE ctest -j ${PROCESSOR_COUNT} # Executable in PROJECT_BINARY_DIR\n#     DEPENDENCIES executable_target         # Dependencies to build first\n#     BASE_DIRECTORY \"../\"                   # Base directory for report\n#                                            #  (defaults to PROJECT_SOURCE_DIR)\n#     EXCLUDE \"src/dir1/*\" \"src/dir2/*\"      # Patterns to exclude (can be relative\n#                                            #  to BASE_DIRECTORY, with CMake 3.4+)\n# )\n# The user can set the variable GCOVR_ADDITIONAL_ARGS to supply additional flags to the\n# GCOVR command.\nfunction(setup_target_for_coverage_gcovr_html)\n\n    set(options NONE)\n    set(oneValueArgs BASE_DIRECTORY NAME)\n    set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)\n    cmake_parse_arguments(Coverage \"${options}\" \"${oneValueArgs}\" \"${multiValueArgs}\" ${ARGN})\n\n    if(NOT GCOVR_PATH)\n        message(FATAL_ERROR \"gcovr not found! 
Aborting...\")\n    endif() # NOT GCOVR_PATH\n\n    # Set base directory (as absolute path), or default to PROJECT_SOURCE_DIR\n    if(DEFINED Coverage_BASE_DIRECTORY)\n        get_filename_component(BASEDIR ${Coverage_BASE_DIRECTORY} ABSOLUTE)\n    else()\n        set(BASEDIR ${PROJECT_SOURCE_DIR})\n    endif()\n\n    # Collect excludes (CMake 3.4+: Also compute absolute paths)\n    set(GCOVR_EXCLUDES \"\")\n    foreach(EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_GCOVR_EXCLUDES})\n        if(CMAKE_VERSION VERSION_GREATER 3.4)\n            get_filename_component(EXCLUDE ${EXCLUDE} ABSOLUTE BASE_DIR ${BASEDIR})\n        endif()\n        list(APPEND GCOVR_EXCLUDES \"${EXCLUDE}\")\n    endforeach()\n    list(REMOVE_DUPLICATES GCOVR_EXCLUDES)\n\n    # Combine excludes to several -e arguments\n    set(GCOVR_EXCLUDE_ARGS \"\")\n    foreach(EXCLUDE ${GCOVR_EXCLUDES})\n        list(APPEND GCOVR_EXCLUDE_ARGS \"-e\")\n        list(APPEND GCOVR_EXCLUDE_ARGS \"${EXCLUDE}\")\n    endforeach()\n\n    # Set up commands which will be run to generate coverage data\n    # Run tests\n    set(GCOVR_HTML_EXEC_TESTS_CMD\n        ${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS}\n    )\n    # Create folder\n    set(GCOVR_HTML_FOLDER_CMD\n        ${CMAKE_COMMAND} -E make_directory ${PROJECT_BINARY_DIR}/${Coverage_NAME}\n    )\n    # Running gcovr\n    set(GCOVR_HTML_CMD\n        ${GCOVR_PATH} --html ${Coverage_NAME}/index.html --html-details -r ${BASEDIR} ${GCOVR_ADDITIONAL_ARGS}\n        ${GCOVR_EXCLUDE_ARGS} --object-directory=${PROJECT_BINARY_DIR}\n    )\n\n    if(CODE_COVERAGE_VERBOSE)\n        message(STATUS \"Executed command report\")\n\n        message(STATUS \"Command to run tests: \")\n        string(REPLACE \";\" \" \" GCOVR_HTML_EXEC_TESTS_CMD_SPACED \"${GCOVR_HTML_EXEC_TESTS_CMD}\")\n        message(STATUS \"${GCOVR_HTML_EXEC_TESTS_CMD_SPACED}\")\n\n        message(STATUS \"Command to create a folder: \")\n        string(REPLACE \";\" \" \" 
GCOVR_HTML_FOLDER_CMD_SPACED \"${GCOVR_HTML_FOLDER_CMD}\")\n        message(STATUS \"${GCOVR_HTML_FOLDER_CMD_SPACED}\")\n\n        message(STATUS \"Command to generate gcovr HTML coverage data: \")\n        string(REPLACE \";\" \" \" GCOVR_HTML_CMD_SPACED \"${GCOVR_HTML_CMD}\")\n        message(STATUS \"${GCOVR_HTML_CMD_SPACED}\")\n    endif()\n\n    add_custom_target(${Coverage_NAME}\n        COMMAND ${GCOVR_HTML_EXEC_TESTS_CMD}\n        COMMAND ${GCOVR_HTML_FOLDER_CMD}\n        COMMAND ${GCOVR_HTML_CMD}\n\n        BYPRODUCTS ${PROJECT_BINARY_DIR}/${Coverage_NAME}/index.html  # report directory\n        WORKING_DIRECTORY ${PROJECT_BINARY_DIR}\n        DEPENDS ${Coverage_DEPENDENCIES}\n        VERBATIM # Protect arguments to commands\n        COMMENT \"Running gcovr to produce HTML code coverage report.\"\n    )\n\n    # Show info where to find the report\n    add_custom_command(TARGET ${Coverage_NAME} POST_BUILD\n        COMMAND ;\n        COMMENT \"Open ./${Coverage_NAME}/index.html in your browser to view the coverage report.\"\n    )\n\nendfunction() # setup_target_for_coverage_gcovr_html\n\n# Defines a target for running and collection code coverage information\n# Builds dependencies, runs the given executable and outputs reports.\n# NOTE! 
The executable should always have a ZERO as exit code otherwise\n# the coverage generation will not complete.\n#\n# setup_target_for_coverage_fastcov(\n#     NAME testrunner_coverage                    # New target name\n#     EXECUTABLE testrunner -j ${PROCESSOR_COUNT} # Executable in PROJECT_BINARY_DIR\n#     DEPENDENCIES testrunner                     # Dependencies to build first\n#     BASE_DIRECTORY \"../\"                        # Base directory for report\n#                                                 #  (defaults to PROJECT_SOURCE_DIR)\n#     EXCLUDE \"src/dir1/\" \"src/dir2/\"             # Patterns to exclude.\n#     NO_DEMANGLE                                 # Don't demangle C++ symbols\n#                                                 #  even if c++filt is found\n#     SKIP_HTML                                   # Don't create html report\n#     POST_CMD perl -i -pe s!${PROJECT_SOURCE_DIR}/!!g ctest_coverage.json  # E.g. for stripping source dir from file paths\n# )\nfunction(setup_target_for_coverage_fastcov)\n\n    set(options NO_DEMANGLE SKIP_HTML)\n    set(oneValueArgs BASE_DIRECTORY NAME)\n    set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES FASTCOV_ARGS GENHTML_ARGS POST_CMD)\n    cmake_parse_arguments(Coverage \"${options}\" \"${oneValueArgs}\" \"${multiValueArgs}\" ${ARGN})\n\n    if(NOT FASTCOV_PATH)\n        message(FATAL_ERROR \"fastcov not found! Aborting...\")\n    endif()\n\n    if(NOT Coverage_SKIP_HTML AND NOT GENHTML_PATH)\n        message(FATAL_ERROR \"genhtml not found! 
Aborting...\")\n    endif()\n\n    # Set base directory (as absolute path), or default to PROJECT_SOURCE_DIR\n    if(Coverage_BASE_DIRECTORY)\n        get_filename_component(BASEDIR ${Coverage_BASE_DIRECTORY} ABSOLUTE)\n    else()\n        set(BASEDIR ${PROJECT_SOURCE_DIR})\n    endif()\n\n    # Collect excludes (Patterns, not paths, for fastcov)\n    set(FASTCOV_EXCLUDES \"\")\n    foreach(EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_FASTCOV_EXCLUDES})\n        list(APPEND FASTCOV_EXCLUDES \"${EXCLUDE}\")\n    endforeach()\n    list(REMOVE_DUPLICATES FASTCOV_EXCLUDES)\n\n    # Conditional arguments\n    if(CPPFILT_PATH AND NOT ${Coverage_NO_DEMANGLE})\n        set(GENHTML_EXTRA_ARGS \"--demangle-cpp\")\n    endif()\n\n    # Set up commands which will be run to generate coverage data\n    set(FASTCOV_EXEC_TESTS_CMD ${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS})\n\n    set(FASTCOV_CAPTURE_CMD ${FASTCOV_PATH} ${Coverage_FASTCOV_ARGS} --gcov ${GCOV_PATH}\n        --search-directory ${BASEDIR}\n        --process-gcno\n        --output ${Coverage_NAME}.json\n        --exclude ${FASTCOV_EXCLUDES}\n    )\n    \n    set(FASTCOV_CONVERT_CMD ${FASTCOV_PATH}\n        -C ${Coverage_NAME}.json --lcov --output ${Coverage_NAME}.info\n    )\n\n    if(Coverage_SKIP_HTML)\n        set(FASTCOV_HTML_CMD \";\")\n    else()\n        set(FASTCOV_HTML_CMD ${GENHTML_PATH} ${GENHTML_EXTRA_ARGS} ${Coverage_GENHTML_ARGS}\n            -o ${Coverage_NAME} ${Coverage_NAME}.info\n        )\n    endif()\n\n    set(FASTCOV_POST_CMD \";\")\n    if(Coverage_POST_CMD)\n        set(FASTCOV_POST_CMD ${Coverage_POST_CMD})\n    endif()\n\n    if(CODE_COVERAGE_VERBOSE)\n        message(STATUS \"Code coverage commands for target ${Coverage_NAME} (fastcov):\")\n\n        message(\"   Running tests:\")\n        string(REPLACE \";\" \" \" FASTCOV_EXEC_TESTS_CMD_SPACED \"${FASTCOV_EXEC_TESTS_CMD}\")\n        message(\"     
${FASTCOV_EXEC_TESTS_CMD_SPACED}\")\n\n        message(\"   Capturing fastcov counters and generating report:\")\n        string(REPLACE \";\" \" \" FASTCOV_CAPTURE_CMD_SPACED \"${FASTCOV_CAPTURE_CMD}\")\n        message(\"     ${FASTCOV_CAPTURE_CMD_SPACED}\")\n\n        message(\"   Converting fastcov .json to lcov .info:\")\n        string(REPLACE \";\" \" \" FASTCOV_CONVERT_CMD_SPACED \"${FASTCOV_CONVERT_CMD}\")\n        message(\"     ${FASTCOV_CONVERT_CMD_SPACED}\")\n\n        if(NOT Coverage_SKIP_HTML)\n            message(\"   Generating HTML report: \")\n            string(REPLACE \";\" \" \" FASTCOV_HTML_CMD_SPACED \"${FASTCOV_HTML_CMD}\")\n            message(\"     ${FASTCOV_HTML_CMD_SPACED}\")\n        endif()\n        if(Coverage_POST_CMD)\n            message(\"   Running post command: \")\n            string(REPLACE \";\" \" \" FASTCOV_POST_CMD_SPACED \"${FASTCOV_POST_CMD}\")\n            message(\"     ${FASTCOV_POST_CMD_SPACED}\")\n        endif()\n    endif()\n\n    # Setup target\n    add_custom_target(${Coverage_NAME}\n\n        # Cleanup fastcov\n        COMMAND ${FASTCOV_PATH} ${Coverage_FASTCOV_ARGS} --gcov ${GCOV_PATH}\n            --search-directory ${BASEDIR}\n            --zerocounters\n\n        COMMAND ${FASTCOV_EXEC_TESTS_CMD}\n        COMMAND ${FASTCOV_CAPTURE_CMD}\n        COMMAND ${FASTCOV_CONVERT_CMD}\n        COMMAND ${FASTCOV_HTML_CMD}\n        COMMAND ${FASTCOV_POST_CMD}\n\n        # Set output files as GENERATED (will be removed on 'make clean')\n        BYPRODUCTS\n             ${Coverage_NAME}.info\n             ${Coverage_NAME}.json\n             ${Coverage_NAME}/index.html  # report directory\n\n        WORKING_DIRECTORY ${PROJECT_BINARY_DIR}\n        DEPENDS ${Coverage_DEPENDENCIES}\n        VERBATIM # Protect arguments to commands\n        COMMENT \"Resetting code coverage counters to zero. 
Processing code coverage counters and generating report.\"\n    )\n\n    set(INFO_MSG \"fastcov code coverage info report saved in ${Coverage_NAME}.info and ${Coverage_NAME}.json.\")\n    if(NOT Coverage_SKIP_HTML)\n        string(APPEND INFO_MSG \" Open ${PROJECT_BINARY_DIR}/${Coverage_NAME}/index.html in your browser to view the coverage report.\")\n    endif()\n    # Show where to find the fastcov info report\n    add_custom_command(TARGET ${Coverage_NAME} POST_BUILD\n        COMMAND ${CMAKE_COMMAND} -E echo ${INFO_MSG}\n    )\n\nendfunction() # setup_target_for_coverage_fastcov\n\nfunction(append_coverage_compiler_flags)\n    set(CMAKE_C_FLAGS \"${CMAKE_C_FLAGS} ${COVERAGE_COMPILER_FLAGS}\" PARENT_SCOPE)\n    set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} ${COVERAGE_COMPILER_FLAGS}\" PARENT_SCOPE)\n    set(CMAKE_Fortran_FLAGS \"${CMAKE_Fortran_FLAGS} ${COVERAGE_COMPILER_FLAGS}\" PARENT_SCOPE)\n    message(STATUS \"Appending code coverage compiler flags: ${COVERAGE_COMPILER_FLAGS}\")\nendfunction() # append_coverage_compiler_flags\n\n# Setup coverage for specific library\nfunction(append_coverage_compiler_flags_to_target name)\n    target_compile_options(${name}\n        PRIVATE ${COVERAGE_COMPILER_FLAGS})\nendfunction()\n\n"
  },
  {
    "path": "tests/unit/C/README.rst",
    "content": "*********************\nC/C++ Code Unit Tests\n*********************\n\nThis directory tree contains the unit tests for the C and C++ code.\n\nPrerequisites\n=============\n\nThese tests are written using the Google Test framework, which must be installed on your machine.\n\nTo install Google Test, run the following command:\n\n- sudo ./requirements.sh\n\nRunning Tests\n=============\n\nTo run all the unit tests, go to the directory scripts and execute the script\n\n- RunAllTests\n\nThis will run all the unit tests and place the JUnit XML files in the directory results.\n\nTo generate coverage reports, go to the directory scripts and execute the script as follows:\n\n- RunAllTests coverageHtml\n\nThis will run all the unit tests and report test coverage results in the 'build/CoverageHtml/index.html' file, relative to the path of the CMakeLists.txt files.\n\n- RunAllTests coverageXml\n\nThis will run all the unit tests and report test coverage results in the 'build/CoverageXml.xml' file, relative to the path of the CMakeLists.txt files.\n\n"
  },
  {
    "path": "tests/unit/C/cmake_pg/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nproject(postgres)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O3\")\n\nset(STORAGE_COMMON_LIB storage-common-lib)\n\n# Handle Postgres on RedHat/CentOS\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} \"${CMAKE_CURRENT_SOURCE_DIR}\")\ninclude(CheckRhPg)\n\n# postgres plugin\ninclude_directories(../../../../C/thirdparty/rapidjson/include)\ninclude_directories(../../../../C/common/include)\ninclude_directories(../../../../C/services/common/include)\ninclude_directories(../../../../C/plugins/storage/common/include)\ninclude_directories(../../../../C/plugins/storage/postgres/include)\n\n# Handle Postgres on RedHat/CentOS\nif(${RH_POSTGRES_FOUND} EQUAL 1)\n\n    include_directories(${RH_POSTGRES_INCLUDE})\n    link_directories(${RH_POSTGRES_LIB64})\nelse()\n    include_directories(/usr/include/postgresql)\nendif()\n\n# Find source files\nfile(GLOB SOURCES ../../../../C/plugins/storage/postgres/*.cpp)\n\n# Create shared library\nlink_directories(${PROJECT_BINARY_DIR}/../../lib)\n\nadd_library(${PROJECT_NAME} SHARED ${SOURCES})\ntarget_link_libraries(${PROJECT_NAME} -lpq)\ntarget_link_libraries(${PROJECT_NAME} ${STORAGE_COMMON_LIB})\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n"
  },
  {
    "path": "tests/unit/C/cmake_pg/CheckRhPg.cmake",
    "content": "# Evaluates if rh-postgresql13 is available and enabled and identifies its path\n\nexecute_process(\n        COMMAND  \"scl\" \"enable\" \"rh-postgresql13\" \"command -v pg_isready\"\n        RESULT_VARIABLE CMD_ERROR\n        OUTPUT_VARIABLE CMD_OUTPUT\n)\n\nif(${CMD_ERROR} EQUAL 0)\n    string(REGEX REPLACE \"/bin/pg_isready[\\n]\" \"\" RH_POSTGRES_PATH ${CMD_OUTPUT})\n\n    set(RH_POSTGRES_FOUND 1)\n    set(RH_POSTGRES_INCLUDE \"${RH_POSTGRES_PATH}/include\")\n    set(RH_POSTGRES_LIB64   \"${RH_POSTGRES_PATH}/lib64\")\nelse()\n    set(RH_POSTGRES_FOUND 0)\nendif()\n\nif(${RH_POSTGRES_FOUND} EQUAL 1)\n\n    MESSAGE( STATUS \"INFO: rh-postgresql13 found in the path :${RH_POSTGRES_PATH}:\")\nelse()\n    MESSAGE( STATUS \"INFO: rh-postgresql13 not found\")\nendif()\n"
  },
  {
    "path": "tests/unit/C/cmake_sqlite/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nproject(sqlite)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O3\")\n\nset(STORAGE_COMMON_LIB storage-common-lib)\n\n# Path of compiled libsqlite3.a and .h files: /tmp/sqlite3-pkg/src\nset(FLEDGE_SQLITE3_LIBS \"/tmp/sqlite3-pkg/src\" CACHE INTERNAL \"\")\n\n## sqlite plugin\ninclude_directories(../../../../C/thirdparty/rapidjson/include)\ninclude_directories(../../../../C/common/include)\ninclude_directories(../../../../C/services/common/include)\ninclude_directories(../../../../C/plugins/storage/common/include)\ninclude_directories(../../../../C/plugins/storage/sqlite/include)\ninclude_directories(../../../../C/plugins/storage/sqlite/common/include)\ninclude_directories(../../../../C/plugins/storage/sqlite/schema/include)\n\n# Check Sqlite3 required version\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} \"${CMAKE_CURRENT_SOURCE_DIR}\")\nfind_package(sqlite3)\n\n# Find source files\nfile(GLOB COMMON_SOURCES ../../../../C/plugins/storage/sqlite/common/*.cpp)\nfile(GLOB SOURCES ../../../../C/plugins/storage/sqlite/*.cpp ../../../../C/plugins/storage/sqlite/schema/*.cpp)\n\n# Create shared library\n\nlink_directories(${PROJECT_BINARY_DIR}/../../lib)\n\nadd_library(${PROJECT_NAME} SHARED ${SOURCES} ${COMMON_SOURCES})\n\nadd_definitions(-DPLUGIN_LOG_NAME=\"SQLite 3\")\n\nif(EXISTS ${FLEDGE_SQLITE3_LIBS})\n\tinclude_directories(${FLEDGE_SQLITE3_LIBS})\n\ttarget_link_libraries(${PROJECT_NAME} -L\"${FLEDGE_SQLITE3_LIBS}/.libs\" -lsqlite3)\nelse()\n\t\n\ttarget_link_libraries(${PROJECT_NAME} -lsqlite3)\nendif()\n\ntarget_link_libraries(${PROJECT_NAME} ${STORAGE_COMMON_LIB})\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n"
  },
  {
    "path": "tests/unit/C/cmake_sqlite/Findsqlite3.cmake",
    "content": "# This CMake file locates the SQLite3 development libraries\n#\n# The following variables are set:\n# SQLITE_FOUND - If the SQLite library was found\n# SQLITE_LIBRARIES - Path to the static library\n# SQLITE_INCLUDE_DIR - Path to SQLite headers\n# SQLITE_VERSION - Library version\n\nset(SQLITE_MIN_VERSION \"3.11.0\")\n# Check whether the path of the compiled libsqlite3.a and .h files exists\nif (EXISTS ${FLEDGE_SQLITE3_LIBS})\n    find_path(SQLITE_INCLUDE_DIR sqlite3.h PATHS ${FLEDGE_SQLITE3_LIBS})\n    find_library(SQLITE_LIBRARIES NAMES libsqlite3.a PATHS \"${FLEDGE_SQLITE3_LIBS}/.libs\")\nelse()\n    find_path(SQLITE_INCLUDE_DIR sqlite3.h)\n    find_library(SQLITE_LIBRARIES NAMES libsqlite3.so)\nendif()\n\nif (SQLITE_INCLUDE_DIR AND SQLITE_LIBRARIES)\n  execute_process(COMMAND grep \".*#define.*SQLITE_VERSION \" ${SQLITE_INCLUDE_DIR}/sqlite3.h\n    COMMAND sed \"s/.*\\\"\\\\(.*\\\\)\\\".*/\\\\1/\"\n    OUTPUT_VARIABLE SQLITE_VERSION\n    OUTPUT_STRIP_TRAILING_WHITESPACE)\n    if (\"${SQLITE_VERSION}\" VERSION_LESS \"${SQLITE_MIN_VERSION}\")\n        message(FATAL_ERROR \"SQLite3 version >= ${SQLITE_MIN_VERSION} required, found version ${SQLITE_VERSION}\")\n    else()\n        message(STATUS \"Found SQLite version ${SQLITE_VERSION}: ${SQLITE_LIBRARIES}\")\n        set(SQLITE_FOUND TRUE)\n    endif()\nelse()\n  message(FATAL_ERROR \"Could not find SQLite\")\nendif()\n"
  },
  {
    "path": "tests/unit/C/cmake_sqliteM/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nproject(sqlitememory)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O3\")\n\nset(STORAGE_COMMON_LIB storage-common-lib)\n\n# Path of compiled libsqlite3.a and .h files: /tmp/sqlite3-pkg/src\nset(FLEDGE_SQLITE3_LIBS \"/tmp/sqlite3-pkg/src\" CACHE INTERNAL \"\")\n\nadd_definitions(-DMEMORY_READING_PLUGIN=1)\n\n## sqlitememory plugin\ninclude_directories(../../../../C/thirdparty/rapidjson/include)\ninclude_directories(../../../../C/common/include)\ninclude_directories(../../../../C/services/common/include)\ninclude_directories(../../../../C/plugins/storage/common/include)\ninclude_directories(../../../../C/plugins/storage/sqlitelb/include)\ninclude_directories(../../../../C/plugins/storage/sqlitelb/common/include)\ninclude_directories(../../../../C/plugins/storage/sqlite/common/include)\n\n# Check Sqlite3 required version\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} \"${CMAKE_CURRENT_SOURCE_DIR}\")\nfind_package(sqlite3)\n\n# Find source files\nfile(GLOB COMMON_SOURCES ../../../../C/plugins/storage/sqlitelb/common/*.cpp)\nfile(GLOB SOURCES ../../../../C/plugins/storage/sqlitememory/*.cpp)\n\n# Create shared library\n\nlink_directories(${PROJECT_BINARY_DIR}/../../lib)\n\nadd_library(${PROJECT_NAME} SHARED ${SOURCES} ${COMMON_SOURCES})\n\nadd_definitions(-DSQLITE_SPLIT_READINGS=1)\nadd_definitions(-DPLUGIN_LOG_NAME=\"SQLite 3 in_memory\")\n\nif(EXISTS ${FLEDGE_SQLITE3_LIBS})\n\tinclude_directories(${FLEDGE_SQLITE3_LIBS})\n\ttarget_link_libraries(${PROJECT_NAME} -L\"${FLEDGE_SQLITE3_LIBS}/.libs\" -lsqlite3)\nelse()\n\t\n\ttarget_link_libraries(${PROJECT_NAME} -lsqlite3)\nendif()\n\ntarget_link_libraries(${PROJECT_NAME} ${STORAGE_COMMON_LIB})\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n"
  },
  {
    "path": "tests/unit/C/cmake_sqliteM/Findsqlite3.cmake",
    "content": "# This CMake file locates the SQLite3 development libraries\n#\n# The following variables are set:\n# SQLITE_FOUND - If the SQLite library was found\n# SQLITE_LIBRARIES - Path to the static library\n# SQLITE_INCLUDE_DIR - Path to SQLite headers\n# SQLITE_VERSION - Library version\n\nset(SQLITE_MIN_VERSION \"3.11.0\")\n# Check whether the path of the compiled libsqlite3.a and .h files exists\nif (EXISTS ${FLEDGE_SQLITE3_LIBS})\n    find_path(SQLITE_INCLUDE_DIR sqlite3.h PATHS ${FLEDGE_SQLITE3_LIBS})\n    find_library(SQLITE_LIBRARIES NAMES libsqlite3.a PATHS \"${FLEDGE_SQLITE3_LIBS}/.libs\")\nelse()\n    find_path(SQLITE_INCLUDE_DIR sqlite3.h)\n    find_library(SQLITE_LIBRARIES NAMES libsqlite3.so)\nendif()\n\nif (SQLITE_INCLUDE_DIR AND SQLITE_LIBRARIES)\n  execute_process(COMMAND grep \".*#define.*SQLITE_VERSION \" ${SQLITE_INCLUDE_DIR}/sqlite3.h\n    COMMAND sed \"s/.*\\\"\\\\(.*\\\\)\\\".*/\\\\1/\"\n    OUTPUT_VARIABLE SQLITE_VERSION\n    OUTPUT_STRIP_TRAILING_WHITESPACE)\n    if (\"${SQLITE_VERSION}\" VERSION_LESS \"${SQLITE_MIN_VERSION}\")\n        message(FATAL_ERROR \"SQLite3 version >= ${SQLITE_MIN_VERSION} required, found version ${SQLITE_VERSION}\")\n    else()\n        message(STATUS \"Found SQLite version ${SQLITE_VERSION}: ${SQLITE_LIBRARIES}\")\n        set(SQLITE_FOUND TRUE)\n    endif()\nelse()\n  message(FATAL_ERROR \"Could not find SQLite\")\nendif()\n"
  },
  {
    "path": "tests/unit/C/cmake_sqlitelb/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nproject(sqlitelb)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O3\")\n\nset(STORAGE_COMMON_LIB storage-common-lib)\n\n# Path of compiled libsqlite3.a and .h files: /tmp/sqlite3-pkg/src\nset(FLEDGE_SQLITE3_LIBS \"/tmp/sqlite3-pkg/src\" CACHE INTERNAL \"\")\n\n## sqlitelb plugin\ninclude_directories(../../../../C/thirdparty/rapidjson/include)\ninclude_directories(../../../../C/common/include)\ninclude_directories(../../../../C/services/common/include)\ninclude_directories(../../../../C/plugins/storage/common/include)\ninclude_directories(../../../../C/plugins/storage/sqlite/schema/include)\ninclude_directories(../../../../C/plugins/storage/sqlitelb/include)\ninclude_directories(../../../../C/plugins/storage/sqlitelb/common/include)\ninclude_directories(../../../../C/plugins/storage/sqlite/common/include)\n\n# Check Sqlite3 required version\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} \"${CMAKE_CURRENT_SOURCE_DIR}\")\nfind_package(sqlite3)\n\n# Find source files\nfile(GLOB COMMON_SOURCES ../../../../C/plugins/storage/sqlitelb/common/*.cpp)\nfile(GLOB SOURCES ../../../../C/plugins/storage/sqlitelb/*.cpp ../../../../C/plugins/storage/sqlite/schema/*.cpp)\n\n# Create shared library\n\nlink_directories(${PROJECT_BINARY_DIR}/../../lib)\n\nadd_library(${PROJECT_NAME} SHARED ${SOURCES} ${COMMON_SOURCES})\n\nadd_definitions(-DPLUGIN_LOG_NAME=\"SQLite 3 lb\")\n\nif(EXISTS ${FLEDGE_SQLITE3_LIBS})\n\tinclude_directories(${FLEDGE_SQLITE3_LIBS})\n\ttarget_link_libraries(${PROJECT_NAME} -L\"${FLEDGE_SQLITE3_LIBS}/.libs\" -lsqlite3)\nelse()\n\t\n\ttarget_link_libraries(${PROJECT_NAME} -lsqlite3)\nendif()\n\ntarget_link_libraries(${PROJECT_NAME} ${STORAGE_COMMON_LIB})\nset_target_properties(${PROJECT_NAME} PROPERTIES SOVERSION 1)\n"
  },
  {
    "path": "tests/unit/C/cmake_sqlitelb/Findsqlite3.cmake",
    "content": "# This CMake file locates the SQLite3 development libraries\n#\n# The following variables are set:\n# SQLITE_FOUND - If the SQLite library was found\n# SQLITE_LIBRARIES - Path to the static library\n# SQLITE_INCLUDE_DIR - Path to SQLite headers\n# SQLITE_VERSION - Library version\n\nset(SQLITE_MIN_VERSION \"3.11.0\")\n# Check whether the path of the compiled libsqlite3.a and .h files exists\nif (EXISTS ${FLEDGE_SQLITE3_LIBS})\n    find_path(SQLITE_INCLUDE_DIR sqlite3.h PATHS ${FLEDGE_SQLITE3_LIBS})\n    find_library(SQLITE_LIBRARIES NAMES libsqlite3.a PATHS \"${FLEDGE_SQLITE3_LIBS}/.libs\")\nelse()\n    find_path(SQLITE_INCLUDE_DIR sqlite3.h)\n    find_library(SQLITE_LIBRARIES NAMES libsqlite3.so)\nendif()\n\nif (SQLITE_INCLUDE_DIR AND SQLITE_LIBRARIES)\n  execute_process(COMMAND grep \".*#define.*SQLITE_VERSION \" ${SQLITE_INCLUDE_DIR}/sqlite3.h\n    COMMAND sed \"s/.*\\\"\\\\(.*\\\\)\\\".*/\\\\1/\"\n    OUTPUT_VARIABLE SQLITE_VERSION\n    OUTPUT_STRIP_TRAILING_WHITESPACE)\n    if (\"${SQLITE_VERSION}\" VERSION_LESS \"${SQLITE_MIN_VERSION}\")\n        message(FATAL_ERROR \"SQLite3 version >= ${SQLITE_MIN_VERSION} required, found version ${SQLITE_VERSION}\")\n    else()\n        message(STATUS \"Found SQLite version ${SQLITE_VERSION}: ${SQLITE_LIBRARIES}\")\n        set(SQLITE_FOUND TRUE)\n    endif()\nelse()\n  message(FATAL_ERROR \"Could not find SQLite\")\nendif()\n"
  },
  {
    "path": "tests/unit/C/common/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR}/..)\nset(GCOVR_PATH \"$ENV{HOME}/.local/bin/gcovr\")\n\nproject(RunTests)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O0\")\nset(UUIDLIB -luuid)\nset(COMMONLIB -ldl)\nset(LIBCURL_LIB -lcurl)\n\ninclude(CodeCoverage)\nappend_coverage_compiler_flags()\n\n# Locate GTest\nfind_package(GTest REQUIRED)\ninclude_directories(${GTEST_INCLUDE_DIRS})\n\nEXECUTE_PROCESS( COMMAND grep -o ^NAME=.* /etc/os-release COMMAND cut -f2 -d\\\" COMMAND sed s/\\\"//g OUTPUT_VARIABLE os_name )\nEXECUTE_PROCESS( COMMAND grep -o ^VERSION_ID=.* /etc/os-release COMMAND cut -f2 -d\\\" COMMAND sed s/\\\"//g OUTPUT_VARIABLE os_version )\n\n# Get the os name\nexecute_process(COMMAND bash -c \"cat /etc/os-release | grep -w ID | cut -f2 -d'='\"\n                                OUTPUT_VARIABLE\n                                OS_NAME\n                                OUTPUT_STRIP_TRAILING_WHITESPACE)\n\nif ( ( ${os_name} MATCHES \"Red Hat\" OR ${os_name} MATCHES \"CentOS\") AND ( ${os_version} MATCHES \"7\" ) )\n    add_definitions(-DRHEL_CENTOS_7=1)\n    message( \"System is RHEL/CentOS 7, tests on regex are skipped\" )\nendif()\n\nset(BOOST_COMPONENTS system thread)\n# Late 2017 TODO: remove the following checks and always use std::regex\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        set(BOOST_COMPONENTS ${BOOST_COMPONENTS} regex)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DUSE_BOOST_REGEX\")\n    endif()\nendif()\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM 
${Boost_INCLUDE_DIR})\n\ninclude_directories(../../../../C/common/include)\ninclude_directories(../../../../C/plugins/common/include)\ninclude_directories(../../../../C/services/common/include)\ninclude_directories(../../../../C/thirdparty/rapidjson/include)\ninclude_directories(../../../../C/thirdparty/Simple-Web-Server)\n\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\nset(PLUGINS_COMMON_LIB plugins-common-lib)\n\nfile(GLOB unittests \"*.cpp\")\n \n# Find python3.x dev/lib package\nfind_package(PkgConfig REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 REQUIRED COMPONENTS Interpreter Development NumPy)\nendif()\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS} ${Python3_NUMPY_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\nlink_directories(${PROJECT_BINARY_DIR}/../../lib)\n\n# Link runTests with what we want to test and the GTest and pthread library\nadd_executable(RunTests ${unittests})\ntarget_link_libraries(RunTests ${GTEST_LIBRARIES} pthread)\ntarget_link_libraries(RunTests  ${Boost_LIBRARIES})\ntarget_link_libraries(RunTests  ${UUIDLIB})\ntarget_link_libraries(RunTests  ${COMMONLIB})\ntarget_link_libraries(RunTests  ${LIBCURL_LIB})\ntarget_link_libraries(RunTests -lssl -lcrypto -lz)\ntarget_link_libraries(RunTests ${COMMON_LIB})\ntarget_link_libraries(RunTests ${SERVICE_COMMON_LIB})\ntarget_link_libraries(RunTests ${PLUGINS_COMMON_LIB})\n\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    target_link_libraries(RunTests ${PYTHON_LIBRARIES})\nelse()\n    target_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES} 
Python3::NumPy)\nendif()\n\nsetup_target_for_coverage_gcovr_html(\n            NAME CoverageHtml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\nsetup_target_for_coverage_gcovr_xml(\n            NAME CoverageXml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\n"
  },
  {
    "path": "tests/unit/C/common/main.cpp",
    "content": "#include <gtest/gtest.h>\n#include <resultset.h>\n#include <string.h>\n#include <string>\n\nusing namespace std;\n\nint main(int argc, char **argv) {\n    testing::InitGoogleTest(&argc, argv);\n\n    testing::GTEST_FLAG(repeat) = 200;\n    testing::GTEST_FLAG(shuffle) = true;\n    testing::GTEST_FLAG(death_test_style) = \"threadsafe\";\n\n    return RUN_ALL_TESTS();\n}\n\n"
  },
  {
    "path": "tests/unit/C/common/test_JSONPath.cpp",
    "content": "#include <gtest/gtest.h>\n#include <JSONPath.h>\n#include <rapidjson/document.h>\n#include <string.h>\n#include <string>\n\nusing namespace std;\nusing namespace rapidjson;\n\nconst char *testdoc = \"{ \\\"a\\\" : { \\\"b\\\" : \\\"x\\\" }, \" \\\n\t\t        \"\\\"c\\\" : [ \\\"d\\\", \\\"e\\\" ], \" \\\n\t\t\t\"\\\"f\\\" : [ { \\\"g\\\" : \\\"h\\\", \\\"i\\\" : \\\"j\\\" }, \" \\\n\t\t\t\"          { \\\"k\\\" : \\\"l\\\", \\\"m\\\" : \\\"n\\\" }, \" \\\n\t\t\t\"          { \\\"o\\\" : \\\"p\\\", \\\"q\\\" : \\\"r\\\" } ], \" \\\n\t\t\t\"\\\"data\\\" : { \\\"child\\\" : [ { \\\"item\\\" : 1 } ] }, \" \\\n\t\t\t\"\\\"numeric\\\" : [ { \\\"id\\\" : 1, \\\"child\\\" : { \\\"item\\\" : 1 } }, \" \\\n\t\t\t             \" { \\\"id\\\" : 2, \\\"child\\\" : { \\\"item\\\" : 2 } } ] \" \\\n\t\t\t\"}\";\n\n/**\n * Simple literal path test\n */\nTEST(LiteralJSONPathTest, JSON)\n{\n\tstring path(\"/a/b\");\n\tJSONPath jpath(path);\n\tDocument doc;\n\tdoc.Parse(testdoc);\n\tValue *v = jpath.findNode(doc);\n\tASSERT_TRUE(v->IsString());\n\tASSERT_TRUE(doc.IsObject());\n\tASSERT_TRUE(doc.HasMember(\"a\"));\n\tASSERT_TRUE(doc.HasMember(\"c\"));\n\tASSERT_TRUE(doc.HasMember(\"f\"));\n\tASSERT_TRUE(doc.HasMember(\"data\"));\n}\n\n/**\n * Simple index path test\n */\nTEST(IndexJSONPath, JSON)\n{\n\tstring path(\"/c[0]\");\n\tJSONPath jpath(path);\n\tDocument doc;\n\tdoc.Parse(testdoc);\n\tValue *v = jpath.findNode(doc);\n\tASSERT_TRUE(v->IsString());\n\tASSERT_EQ(0, strcmp(v->GetString(), \"d\"));\n}\n\n/**\n * Simple index path test with nested objects\n */\nTEST(IndexIntJSONPath, JSON)\n{\n\tstring path(\"/data/child[0]/item\");\n\tJSONPath jpath(path);\n\tDocument doc;\n\tdoc.Parse(testdoc);\n\tValue *v = jpath.findNode(doc);\n\tASSERT_TRUE(v->IsInt());\n\tASSERT_EQ(1, v->GetInt());\n}\n\n\n/**\n * Simple match path test\n */\nTEST(MatchJSONPath, JSON)\n{\n\tstring path(\"/f[k==l]\");\n\tJSONPath jpath(path);\n\tDocument doc;\n\tdoc.Parse(testdoc);\n\tValue *v 
= jpath.findNode(doc);\n\tASSERT_TRUE(v->IsObject());\n\tASSERT_TRUE(v->HasMember(\"k\"));\n\tASSERT_TRUE(v->HasMember(\"m\"));\n}\n\n\n/**\n * Simple numeric match path test\n */\nTEST(MatchNumericJSONPath, JSON)\n{\n\tstring path(\"/numeric[id==1]/child\");\n\tJSONPath jpath(path);\n\tDocument doc;\n\tdoc.Parse(testdoc);\n\tValue *v = jpath.findNode(doc);\n\tASSERT_TRUE(v->IsObject());\n\tASSERT_TRUE(v->HasMember(\"item\"));\n\tASSERT_TRUE((*v)[\"item\"].IsInt());\n\tASSERT_EQ(1, (*v)[\"item\"].GetInt());\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_circular_buffer.cpp",
    "content": "/*\n * unit tests - FOGL-8750 : ReadingSet Circular Buffer\n *\n * Copyright (c) 2024 Dianomic Systems, Inc.\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Devki Nandan Ghildiyal\n */\n\n#include <gtest/gtest.h>\n#include <readingset_circularbuffer.h>\n#include <exception>\n\nusing namespace std;\n\nTEST(TESTCircularBuffer, TestMaxLimitOfBuffer)\n{\n\tReadingSetCircularBuffer buffer;\n    //First ReadingSet\n    vector<Reading *> *readings1 = new vector<Reading *>;\n    long dpVal1 = 30;\n    DatapointValue dpv1(dpVal1);\n    readings1->emplace_back(new Reading(\"R1\", new Datapoint(\"DP1\", dpv1)));\n    ReadingSet* rs1 = new  ReadingSet(readings1);\n    buffer.insert(rs1);\n    delete readings1;\n    delete rs1;\n\n    //Second ReadingSet\n    long dpVal2 = 50;\n    DatapointValue dpv2(dpVal2);\n    vector<Reading *> *readings2 = new vector<Reading *>;\n    readings2->emplace_back(new Reading(\"R2\", new Datapoint(\"DP2\", dpv2)));\n    ReadingSet* rs2 = new  ReadingSet(readings2);\n    buffer.insert(rs2);\n    delete readings2;\n    delete rs2;\n\n    //Third ReadingSet\n    long dpVal3 = 40;\n    DatapointValue dpv3(dpVal3);\n    vector<Reading *> *readings3 = new vector<Reading *>;\n    readings3->emplace_back(new Reading(\"R3\", new Datapoint(\"DP3\", dpv3)));\n    ReadingSet* rs3 = new  ReadingSet(readings3);\n    buffer.insert(rs3);\n    delete readings3;\n    delete rs3;\n\n    //Fourth ReadingSet\n    long dpVal4 = 45;\n    DatapointValue dpv4(dpVal4);\n    vector<Reading *> *readings4 = new vector<Reading *>;\n    readings4->emplace_back(new Reading(\"R4\", new Datapoint(\"DP4\", dpv4)));\n    ReadingSet* rs4 = new  ReadingSet(readings4);\n    buffer.insert(rs4);\n    delete readings4;\n    delete rs4;\n\n    //Fifth ReadingSet\n    long dpVal5 = 86;\n    DatapointValue dpv5(dpVal5);\n    vector<Reading *> *readings5 = new vector<Reading *>;\n    readings5->emplace_back(new Reading(\"R5\", new Datapoint(\"DP5\", dpv5)));\n  
  ReadingSet* rs5 = new  ReadingSet(readings5);\n    buffer.insert(rs5);\n    delete readings5;\n    delete rs5;\n\n    //Sixth ReadingSet\n    long dpVal6 = 75;\n    DatapointValue dpv6(dpVal6);\n    vector<Reading *> *readings6 = new vector<Reading *>;\n    readings6->emplace_back(new Reading(\"R6\", new Datapoint(\"DP6\", dpv6)));\n    ReadingSet* rs6 = new  ReadingSet(readings6);\n    buffer.insert(rs6);\n    delete readings6;\n    delete rs6;\n\n    //Seventh ReadingSet\n    long dpVal7 = 49;\n    DatapointValue dpv7(dpVal7);\n    vector<Reading *> *readings7 = new vector<Reading *>;\n    readings7->emplace_back(new Reading(\"R7\", new Datapoint(\"DP7\", dpv7)));\n    ReadingSet* rs7 = new  ReadingSet(readings7);\n    buffer.insert(rs7);\n    delete readings7;\n    delete rs7;\n\n    //Eighth ReadingSet\n    long dpVal8 = 15;\n    DatapointValue dpv8(dpVal8);\n    vector<Reading *> *readings8 = new vector<Reading *>;\n    readings8->emplace_back(new Reading(\"R8\", new Datapoint(\"DP8\", dpv8)));\n    ReadingSet* rs8 = new  ReadingSet(readings8);\n    buffer.insert(rs8);\n    delete readings8;\n    delete rs8;\n\n    //Ninth ReadingSet\n    long dpVal9 = 23;\n    DatapointValue dpv9(dpVal9);\n    vector<Reading *> *readings9 = new vector<Reading *>;\n    readings9->emplace_back(new Reading(\"R9\", new Datapoint(\"DP9\", dpv9)));\n    ReadingSet* rs9 = new  ReadingSet(readings9);\n    buffer.insert(rs9);\n    delete readings9;\n    delete rs9;\n\n    //Tenth ReadingSet\n    long dpVal10 = 38;\n    DatapointValue dpv10(dpVal10);\n    vector<Reading *> *readings10 = new vector<Reading *>;\n    readings10->emplace_back(new Reading(\"R10\", new Datapoint(\"DP10\", dpv10)));\n    ReadingSet* rs10 = new  ReadingSet(readings10);\n    buffer.insert(rs10);\n    delete readings10;\n    delete rs10;\n    ASSERT_EQ(buffer.extract(false).size(),10); // MaxLimit for buffer is reached\n    \n    //Eleventh ReadingSet\n    long dpVal11 = 47;\n    DatapointValue 
dpv11(dpVal11);\n    vector<Reading *> *readings11 = new vector<Reading *>;\n    readings11->emplace_back(new Reading(\"R11\", new Datapoint(\"DP11\", dpv11)));\n    ReadingSet* rs11 = new  ReadingSet(readings11);\n    buffer.insert(rs11);\n    delete readings11;\n    delete rs11;\n    \n    ASSERT_EQ(buffer.extract(false).size(),1); // Buffer size can't exceed the default MaxLimit\n}\n\n\n\nTEST(TESTCircularBuffer, TestCustomSizeBuffer)\n{\n\tReadingSetCircularBuffer buffer(5);\n    //First ReadingSet\n    vector<Reading *> *readings1 = new vector<Reading *>;\n    long dpVal1 = 30;\n    DatapointValue dpv1(dpVal1);\n    readings1->emplace_back(new Reading(\"R1\", new Datapoint(\"DP1\", dpv1)));\n    ReadingSet* rs1 = new  ReadingSet(readings1);\n    buffer.insert(rs1);\n    delete readings1;\n    delete rs1;\n    ASSERT_EQ(buffer.extract().size(),1);\n\n    //Second ReadingSet\n    long dpVal2 = 50;\n    DatapointValue dpv2(dpVal2);\n    vector<Reading *> *readings2 = new vector<Reading *>;\n    readings2->emplace_back(new Reading(\"R2\", new Datapoint(\"DP2\", dpv2)));\n    ReadingSet* rs2 = new  ReadingSet(readings2);\n    buffer.insert(rs2);\n    delete rs2;\n    delete readings2;\n\n    //Third ReadingSet\n    long dpVal3 = 40;\n    DatapointValue dpv3(dpVal3);\n    vector<Reading *> *readings3 = new vector<Reading *>;\n    readings3->emplace_back(new Reading(\"R3\", new Datapoint(\"DP3\", dpv3)));\n    ReadingSet* rs3 = new  ReadingSet(readings3);\n    buffer.insert(rs3);\n    delete rs3;\n    delete readings3;\n\n    //Fourth ReadingSet\n    long dpVal4 = 45;\n    DatapointValue dpv4(dpVal4);\n    vector<Reading *> *readings4 = new vector<Reading *>;\n    readings4->emplace_back(new Reading(\"R4\", new Datapoint(\"DP4\", dpv4)));\n    ReadingSet* rs4 = new  ReadingSet(readings4);\n    buffer.insert(rs4);\n    delete rs4;\n    delete readings4;\n\n    //Fifth ReadingSet\n    long dpVal5 = 86;\n    DatapointValue dpv5(dpVal5);\n    vector<Reading *> *readings5 
= new vector<Reading *>;\n    readings5->emplace_back(new Reading(\"R5\", new Datapoint(\"DP5\", dpv5)));\n    ReadingSet* rs5 = new  ReadingSet(readings5);\n    buffer.insert(rs5);\n    delete readings5;\n    delete rs5;\n    ASSERT_EQ(buffer.extract(false).size(),4); // Remaining items in buffer\n\n    //Sixth ReadingSet\n    long dpVal6 = 75;\n    DatapointValue dpv6(dpVal6);\n    vector<Reading *> *readings6 = new vector<Reading *>;\n    readings6->emplace_back(new Reading(\"R6\", new Datapoint(\"DP6\", dpv6)));\n    ReadingSet* rs6 = new  ReadingSet(readings6);\n    buffer.insert(rs6);\n    delete readings6;\n    delete rs6;\n    ASSERT_EQ(buffer.extract(false).size(),1); // Buffer size can't exceed the custom MaxLimit\n}\n\n\nTEST(TESTCircularBuffer, TestExtractFromEmptyBuffer)\n{\n\tReadingSetCircularBuffer buffer;\n    ASSERT_EQ(buffer.extract(false).size(),0); // Extract from an empty buffer returns zero items\n\n}\n\nTEST(TESTCircularBuffer, TestHeadAndTailMarkerAdjustment)\n{\n\tReadingSetCircularBuffer buffer(3);\n    //First ReadingSet\n    vector<Reading *> *readings1 = new vector<Reading *>;\n    long dpVal1 = 30;\n    DatapointValue dpv1(dpVal1);\n    readings1->emplace_back(new Reading(\"R1\", new Datapoint(\"DP1\", dpv1)));\n    ReadingSet* rs1 = new  ReadingSet(readings1);\n    buffer.insert(rs1);\n    std::vector<std::shared_ptr<ReadingSet>> buff1 = buffer.extract();\n    ASSERT_EQ(buff1.size(),1);\n    ASSERT_EQ(buff1[0]->getAllReadings()[0]->getAssetName(), \"R1\");\n    ASSERT_EQ(buff1[0]->getAllReadings()[0]->getDatapointsJSON(), readings1->at(0)->getDatapointsJSON());\n    delete readings1;\n    delete rs1;\n\n    //Second ReadingSet\n    long dpVal2 = 50;\n    DatapointValue dpv2(dpVal2);\n    vector<Reading *> *readings2 = new vector<Reading *>;\n    readings2->emplace_back(new Reading(\"R2\", new Datapoint(\"DP2\", dpv2)));\n    ReadingSet* rs2 = new  ReadingSet(readings2);\n    buffer.insert(rs2);\n    \n    
std::vector<std::shared_ptr<ReadingSet>> buff2 = buffer.extract();\n    ASSERT_EQ(buff2.size(),1);\n    ASSERT_EQ(buff2[0]->getAllReadings()[0]->getAssetName(), \"R2\");\n    ASSERT_EQ(buff2[0]->getAllReadings()[0]->getDatapointsJSON(), readings2->at(0)->getDatapointsJSON());\n    delete readings2;\n    delete rs2;\n\n    //Third ReadingSet\n    long dpVal3 = 40;\n    DatapointValue dpv3(dpVal3);\n    vector<Reading *> *readings3 = new vector<Reading *>;\n    readings3->emplace_back(new Reading(\"R3\", new Datapoint(\"DP3\", dpv3)));\n    ReadingSet* rs3 = new  ReadingSet(readings3);\n    buffer.insert(rs3);\n    \n    std::vector<std::shared_ptr<ReadingSet>> buff3 = buffer.extract();\n    ASSERT_EQ(buff3.size(),1);\n    ASSERT_EQ(buff3[0]->getAllReadings()[0]->getAssetName(), \"R3\"); //Buffer is Full\n    ASSERT_EQ(buff3[0]->getAllReadings()[0]->getDatapointsJSON(), readings3->at(0)->getDatapointsJSON());\n    delete readings3;\n    delete rs3;\n\n    //Fourth ReadingSet\n    long dpVal4 = 45;\n    DatapointValue dpv4(dpVal4);\n    vector<Reading *> *readings4 = new vector<Reading *>;\n    readings4->emplace_back(new Reading(\"R4\", new Datapoint(\"DP4\", dpv4)));\n    ReadingSet* rs4 = new  ReadingSet(readings4);\n    buffer.insert(rs4);\n    \n    std::vector<std::shared_ptr<ReadingSet>> buff4 = buffer.extract();\n    ASSERT_EQ(buff4.size(),1);\n    // m_head and m_tail pointer set correctly to fetch the reading which came after buffer is full\n    ASSERT_EQ(buff4[0]->getAllReadings()[0]->getAssetName(), \"R4\"); \n    ASSERT_EQ(buff4[0]->getAllReadings()[0]->getDatapointsJSON(), readings4->at(0)->getDatapointsJSON());\n    delete readings4;\n    delete rs4;\n\n}\n\nTEST(TESTCircularBuffer, TestCustomSizeBufferLessThanOne)\n{\n    ReadingSetCircularBuffer buffer(0);\n\n    //First ReadingSet\n    vector<Reading *> *readings1 = new vector<Reading *>;\n    long dpVal1 = 30;\n    DatapointValue dpv1(dpVal1);\n    readings1->emplace_back(new Reading(\"R1\", new 
Datapoint(\"DP1\", dpv1)));\n    ReadingSet* rs1 = new  ReadingSet(readings1);\n    buffer.insert(rs1);\n\n    std::vector<std::shared_ptr<ReadingSet>> buff1 = buffer.extract();\n    ASSERT_EQ(buff1.size(),1);\n    ASSERT_EQ(buff1[0]->getAllReadings()[0]->getAssetName(), \"R1\");\n    ASSERT_EQ(buff1[0]->getAllReadings()[0]->getDatapointsJSON(), readings1->at(0)->getDatapointsJSON());\n    delete readings1;\n    delete rs1;\n\n    //Second ReadingSet\n    long dpVal2 = 50;\n    DatapointValue dpv2(dpVal2);\n    vector<Reading *> *readings2 = new vector<Reading *>;\n    readings2->emplace_back(new Reading(\"R2\", new Datapoint(\"DP2\", dpv2)));\n    ReadingSet* rs2 = new  ReadingSet(readings2);\n    buffer.insert(rs2);\n\n    std::vector<std::shared_ptr<ReadingSet>> buff2 = buffer.extract();\n    ASSERT_EQ(buff2.size(),1);\n    ASSERT_EQ(buff2[0]->getAllReadings()[0]->getAssetName(), \"R2\");\n    ASSERT_EQ(buff2[0]->getAllReadings()[0]->getDatapointsJSON(), readings2->at(0)->getDatapointsJSON());\n    delete readings2;\n    delete rs2;\n\n}\n\n"
  },
  {
    "path": "tests/unit/C/common/test_config_category.cpp",
    "content": "#include <gtest/gtest.h>\n#include <config_category.h>\n#include <string.h>\n#include <string>\n#include <rapidjson/document.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\nconst char *categories = \"{\\\"categories\\\": [\"\n\t\"{\\\"key\\\": \\\"cat1\\\", \\\"description\\\":\\\"First category\\\"},\"\n\t\"{\\\"key\\\": \\\"cat2\\\", \\\"description\\\":\\\"Second\\\"}]}\";\n\nconst char *categories_quoted = \"{\\\"categories\\\": [\"\n\t\"{\\\"key\\\": \\\"cat \\\\\\\"1\\\\\\\"\\\", \\\"description\\\":\\\"First \\\\\\\"category\\\\\\\"\\\"},\"\n\t\"{\\\"key\\\": \\\"cat \\\\\\\"2\\\\\\\"\\\", \\\"description\\\":\\\"Second\\\"}]}\";\n\nconst char *myCategory = \"{\\\"description\\\": {\"\n\t\t\"\\\"value\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"description\\\": \\\"The description of this Fledge service\\\"},\"\n\t\"\\\"name\\\": {\"\n\t\t\"\\\"value\\\": \\\"Fledge\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"Fledge\\\",\"\n\t\t\"\\\"description\\\": \\\"The name of this Fledge service\\\"},\"\n        \"\\\"complex\\\": {\" \\\n\t\t\"\\\"value\\\": { \\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" },\"\n\t\t\"\\\"type\\\": \\\"json\\\",\"\n\t\t\"\\\"default\\\": {\\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" },\"\n\t\t\"\\\"description\\\": \\\"A JSON configuration parameter\\\"}}\";\n\nconst char *myCategory_quoted = \"{\\\"description\\\": {\"\n\t\t\"\\\"value\\\": \\\"The \\\\\\\"Fledge\\\\\\\" administrative API\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"The \\\\\\\"Fledge\\\\\\\" administrative API\\\",\"\n\t\t\"\\\"description\\\": \\\"The description of this \\\\\\\"Fledge\\\\\\\" service\\\"},\"\n\t\"\\\"name\\\": {\"\n\t\t\"\\\"value\\\": \\\"\\\\\\\"Fledge\\\\\\\"\\\",\"\n\t\t\"\\\"type\\\": 
\\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"\\\\\\\"Fledge\\\\\\\"\\\",\"\n\t\t\"\\\"description\\\": \\\"The name of this \\\\\\\"Fledge\\\\\\\" service\\\"},\"\n        \"\\\"complex\\\": {\" \\\n\t\t\"\\\"value\\\": { \\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" },\"\n\t\t\"\\\"type\\\": \\\"json\\\",\"\n\t\t\"\\\"default\\\": {\\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" },\"\n\t\t\"\\\"description\\\": \\\"A JSON configuration parameter\\\"}}\";\n\nconst char *myCategory_quotedSpecial = R\"QQ({\"description\": { \"value\": \"The \\\"Fledge\\\" admini\\\\strative API\", \"type\": \"string\", \"default\": \"The \\\"Fledge\\\" administra\\tive API\", \"description\": \"The description of this \\\"Fledge\\\" service\"}, \"name\": { \"value\": \"\\\"Fledge\\\"\", \"type\": \"string\", \"default\": \"\\\"Fledge\\\"\", \"description\": \"The name of this \\\"Fledge\\\" service\"}, \"complex\": { \"value\": { \"first\" : \"Fledge\", \"second\" : \"json\" }, \"type\": \"json\", \"default\": {\"first\" : \"Fledge\", \"second\" : \"json\" }, \"description\": \"A JSON configuration parameter\"} })QQ\";\n\nconst char *myCategoryDisplayName = \"{\\\"description\\\": {\"\n\t\t\"\\\"value\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"description\\\": \\\"The description of this Fledge service\\\"},\"\n\t\"\\\"name\\\": {\"\n\t\t\"\\\"value\\\": \\\"Fledge\\\",\"\n\t\t\"\\\"displayName\\\" : \\\"My Fledge\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"Fledge\\\",\"\n\t\t\"\\\"description\\\": \\\"The name of this Fledge service\\\"},\"\n        \"\\\"complex\\\": {\" \\\n\t\t\"\\\"value\\\": { \\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" },\"\n\t\t\"\\\"type\\\": \\\"json\\\",\"\n\t\t\"\\\"default\\\": {\\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" 
},\"\n\t\t\"\\\"description\\\": \\\"A JSON configuration parameter\\\"}}\";\n\nconst char *myCategoryEnum = \"{\\\"description\\\": {\"\n\t\t\"\\\"value\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"description\\\": \\\"The description of this Fledge service\\\"},\"\n\t\"\\\"name\\\": {\"\n\t\t\"\\\"value\\\": \\\"Fledge\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"Fledge\\\",\"\n\t\t\"\\\"description\\\": \\\"The name of this Fledge service\\\"},\"\n        \"\\\"enum\\\": {\" \\\n\t\t\"\\\"value\\\": \\\"first\\\",\"\n\t\t\"\\\"type\\\": \\\"enumeration\\\",\"\n\t\t\"\\\"default\\\": \\\"first\\\",\"\n\t\t\"\\\"options\\\": [\\\"first\\\",\\\"second\\\",\\\"third\\\"], \"\n\t\t\"\\\"description\\\": \\\"An enumeration configuration parameter\\\"}}\";\n\nconst char *enum_JSON = \"{ \\\"key\\\" : \\\"test\\\", \\\"description\\\" : \\\"\\\", \\\"value\\\" : {\\\"description\\\" : { \\\"description\\\" : \\\"The description of this Fledge service\\\", \\\"type\\\" : \\\"string\\\", \\\"value\\\" : \\\"The Fledge administrative API\\\", \\\"default\\\" : \\\"The Fledge administrative API\\\" }, \\\"name\\\" : { \\\"description\\\" : \\\"The name of this Fledge service\\\", \\\"type\\\" : \\\"string\\\", \\\"value\\\" : \\\"Fledge\\\", \\\"default\\\" : \\\"Fledge\\\" }, \\\"enum\\\" : { \\\"description\\\" : \\\"An enumeration configuration parameter\\\", \\\"type\\\" : \\\"enumeration\\\", \\\"options\\\" : [ \\\"first\\\",\\\"second\\\",\\\"third\\\"], \\\"value\\\" : \\\"first\\\", \\\"default\\\" : \\\"first\\\" }} }\";\n\nconst char *myCategory_JSON_type_with_escaped_default = \"{ \"\n\t\"\\\"filter\\\": { \"\n\t\t\"\\\"type\\\": \\\"JSON\\\", \"\n\t\t\"\\\"description\\\": \\\"filter\\\", \"\n\t\t\"\\\"default\\\": \\\"{\\\\\\\"pipeline\\\\\\\":[\\\\\\\"scale\\\\\\\",\\\\\\\"exceptional\\\\\\\"]}\\\", 
\"\n\t\t\"\\\"value\\\": \\\"{}\\\" } }\";\n\nconst char *myCategoryMinMax = \"{\\\"description\\\": {\"\n\t\t\"\\\"value\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"description\\\": \\\"The description of this Fledge service\\\"},\"\n\t\"\\\"range\\\": {\"\n\t\t\"\\\"value\\\": \\\"1\\\",\"\n\t\t\"\\\"type\\\": \\\"integer\\\",\"\n\t\t\"\\\"default\\\": \\\"1\\\",\"\n\t\t\"\\\"minimum\\\": \\\"1\\\",\"\n\t\t\"\\\"maximum\\\": \\\"10\\\",\"\n\t\t\"\\\"description\\\": \\\"A constrained value\\\"},\"\n        \"\\\"complex\\\": {\" \\\n\t\t\"\\\"value\\\": { \\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" },\"\n\t\t\"\\\"type\\\": \\\"json\\\",\"\n\t\t\"\\\"default\\\": {\\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" },\"\n\t\t\"\\\"description\\\": \\\"A JSON configuration parameter\\\"}}\";\n\nconst char *myCategoryRemoveItems = \"{\" \\\n\t\t\t\"\\\"plugin\\\" : {\"\\\n\t\t\t\t\"\\\"description\\\" : \\\"Random C south plugin\\\", \"\\\n\t\t\t\t\"\\\"type\\\" : \\\"string\\\", \"\\\n\t\t\t\t\"\\\"default\\\" : \\\"Random_2\\\", \"\\\n\t\t\t\t\"\\\"value\\\" : \\\"Random_2\\\" \"\\\n\t\t\t\"}, \"\\\n\t\t\t\"\\\"asset\\\" : {\"\\\n\t\t\t\t\"\\\"description\\\" : \\\"Asset name\\\", \" \\\n\t\t\t\t\"\\\"type\\\" : \\\"category\\\", \"\\\n\t\t\t\t\"\\\"default\\\" : {\"\\\n\t\t\t\t\t\"\\\"bias\\\" : {\"\\\n\t\t\t\t\t\t\"\\\"description\\\" : \\\"Bias offset\\\", \"\\\n\t\t\t\t\t\t\"\\\"type\\\" : \\\"float\\\", \"\\\n\t\t\t\t\t\t\"\\\"default\\\" : \\\"2\\\" \"\\\n\t\t\t\t\t\"} \"\\\n\t\t\t\t\"}, \"\\\n\t\t\t\t\"\\\"value\\\" : {\"\\\n\t\t\t\t\t\"\\\"bias\\\" : {\"\\\n\t\t\t\t\t\t\"\\\"description\\\" : \\\"Bias offset\\\", \"\\\n\t\t\t\t\t\t\"\\\"type\\\" : \\\"float\\\", \"\\\n\t\t\t\t\t\t\"\\\"default\\\" : \\\"2\\\" \"\\\n\t\t\t\t\t\"} \"\\\n\t\t\t\t\"} \"\\\n\t\t\t\"} \"\\\n\t\t\"}\";\n\n\nconst char *myCategoryScript 
= \"{\\\"config\\\": {\\\"displayName\\\": \\\"Configuration\\\", \\\"order\\\": \\\"1\\\", \"\n\t\t\t\t\t\"\\\"default\\\": \\\"{}\\\", \\\"value\\\": \\\"{\\\\\\\"d\\\\\\\":76}\\\", \"\n\t\t\t\t\t\"\\\"type\\\": \\\"JSON\\\", \\\"description\\\": \\\"Python 2.7 filter configuration.\\\"}, \"\n\t\t\t\t\"\\\"plugin\\\": {\\\"readonly\\\": \\\"true\\\", \\\"default\\\": \\\"python27\\\", \"\n\t\t\t\t\t\"\\\"type\\\": \\\"string\\\", \\\"value\\\": \\\"python27\\\", \"\n\t\t\t\t\t\"\\\"description\\\": \\\"Python 2.7 filter plugin\\\"}, \"\n\t\t\t\t\"\\\"enable\\\": {\\\"displayName\\\": \\\"Enabled\\\", \\\"default\\\": \\\"false\\\", \"\n\t\t\t\t\t\"\\\"type\\\": \\\"boolean\\\", \\\"value\\\": \\\"true\\\", \"\n\t\t\t\t\t\"\\\"description\\\": \\\"A switch that can be used to enable or disable execution of the Python 2.7 filter.\\\"}, \"\n\t\t\t\t\"\\\"script\\\": {\\\"displayName\\\": \\\"Python Script\\\", \\\"order\\\": \\\"2\\\", \"\n\t\t\t\t\t\"\\\"default\\\": \\\"\\\", \"\n\t\t\t\t\t\"\\\"value\\\": \\\"\\\\\\\"\\\\\\\"\\\\\\\"\\\\nFledge filtering for readings data\\\\\\\"\\\\\\\"\\\\\\\"\\\\n\"\n\t\t\t\t\t\t\"def set_filter_config(configuration):\\\\n\"\n\t\t\t\t\t\t\"    print configuration\\\\n\"\n\t\t\t\t\t\t\"    global filter_config\\\\n\"\n\t\t\t\t\t\t\"    filter_config = json.loads(configuration['config'])\\\\n\\\\n\"\n\t\t\t\t\t\t\"    return True\\\\n\\\\n\\\", \"\n\t\t\t\t\t\"\\\"file\\\": \\\"/home/ubuntu/source/develop/Fledge/data/scripts/pumpa_powerfilter_script_file27.py\\\", \"\n\t\t\t\t\t\"\\\"type\\\": \\\"script\\\", \"\n\t\t\t\t\t\"\\\"description\\\": \\\"Python 2.7 module to load.\\\" } }\";\n\n// default has invalid (escaped) JSON object value here: a \\\\\\\" is missing for pipeline\nconst char *myCategory_JSON_type_without_escaped_default = \"{ \"\n\t\"\\\"filter\\\": { \"\n\t\t\"\\\"type\\\": \\\"JSON\\\", \"\n\t\t\"\\\"description\\\": \\\"filter\\\", \"\n\t\t\"\\\"default\\\": \\\"{\\\"pipeline\\\\\\\" : \\\\\\\"scale\\\\\\\", 
\\\\\\\"exceptional\\\\\\\"]}\\\", \"\n\t\t\"\\\"value\\\": \\\"{}\\\" } }\";\n\nconst char *myCategoryDeprecated = \"{\\\"description\\\": {\"\n\t\t\"\\\"value\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"description\\\": \\\"The description of this Fledge service\\\"},\"\n\t\"\\\"name\\\": {\"\n\t\t\"\\\"value\\\": \\\"Fledge\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"Fledge\\\",\"\n\t\t\"\\\"description\\\": \\\"The name of this Fledge service\\\"},\"\n        \"\\\"location\\\": {\" \\\n\t\t\"\\\"value\\\": \\\"remote\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"local\\\", \"\n\t\t\"\\\"deprecated\\\": \\\"true\\\", \"\n\t\t\"\\\"description\\\": \\\"A deprecated configuration parameter\\\"}}\";\n\nconst char *json_array_item = \"{\\\"pipeline\\\":[\\\"scale\\\",\\\"exceptional\\\"]}\";\n\nconst char *myCategory_number_and_boolean_items =  \"{\\\"factor\\\": {\"\n\t\t\"\\\"value\\\": \\\"112\\\",\"\n\t\t\"\\\"type\\\": \\\"integer\\\",\"\n\t\t\"\\\"default\\\": 101,\"\n\t\t\"\\\"description\\\": \\\"The factor value\\\"}, \"\n\t\"\\\"enable\\\" : {\"\n\t\"\\\"description\\\": \\\"Switch enabled\\\", \"\n\t\"\\\"default\\\" : \\\"false\\\", \"\n\t\"\\\"value\\\" : true, \"\n\t\"\\\"type\\\" : \\\"boolean\\\"}}\";\n\n\nconst char *myCategory_to_json_parameters = \"{\"\\\n\t\t\"\\\"memoryBufferSize\\\": {\"\n\t\t\t\"\\\"description\\\": \\\"Number of elements of blockSize size to be buffered in memory\\\",\"\n\t\t\t\"\\\"type\\\": \\\"integer\\\", \"\n\t\t\t\"\\\"default\\\": \\\"10\\\", \"\n\t\t\t\"\\\"order\\\": \\\"12\\\" ,\"\n\t\t\t\"\\\"readonly\\\": \\\"false\\\" \"\n\t\t\"} \"\n\t\"}\";\n\nconst char *json = \"{ \\\"key\\\" : \\\"test\\\", \\\"description\\\" : \\\"Test description\\\", \"\n    \"\\\"value\\\" : {\"\n\t\"\\\"description\\\" : { \"\n\t\t\"\\\"description\\\" : \\\"The 
description of this Fledge service\\\", \"\n\t\t\"\\\"type\\\" : \\\"string\\\", \"\n\t\t\"\\\"value\\\" : \\\"The Fledge administrative API\\\", \"\n\t\t\"\\\"default\\\" : \\\"The Fledge administrative API\\\" }, \"\n\t\"\\\"name\\\" : { \"\n\t\t\"\\\"description\\\" : \\\"The name of this Fledge service\\\", \"\n\t\t\"\\\"type\\\" : \\\"string\\\", \"\n\t\t\"\\\"value\\\" : \\\"Fledge\\\", \"\n\t\t\"\\\"default\\\" : \\\"Fledge\\\" }, \"\n\t\"\\\"complex\\\" : { \" \n\t\t\"\\\"description\\\" : \\\"A JSON configuration parameter\\\", \"\n\t\t\"\\\"type\\\" : \\\"json\\\", \"\n\t\t\"\\\"value\\\" : {\\\"first\\\":\\\"Fledge\\\",\\\"second\\\":\\\"json\\\"}, \"\n\t\t\"\\\"default\\\" : {\\\"first\\\":\\\"Fledge\\\",\\\"second\\\":\\\"json\\\"} }} }\";\n\nconst char *json_quoted = \"{ \\\"key\\\" : \\\"test \\\\\\\"a\\\\\\\"\\\", \\\"description\\\" : \\\"Test \\\\\\\"description\\\\\\\"\\\", \"\n    \"\\\"value\\\" : {\"\n\t\"\\\"description\\\" : { \"\n\t\t\"\\\"description\\\" : \\\"The description of this \\\\\\\"Fledge\\\\\\\" service\\\", \"\n\t\t\"\\\"type\\\" : \\\"string\\\", \"\n\t\t\"\\\"value\\\" : \\\"The \\\\\\\"Fledge\\\\\\\" administrative API\\\", \"\n\t\t\"\\\"default\\\" : \\\"The \\\\\\\"Fledge\\\\\\\" administrative API\\\" }, \"\n\t\"\\\"name\\\" : { \"\n\t\t\"\\\"description\\\" : \\\"The name of this \\\\\\\"Fledge\\\\\\\" service\\\", \"\n\t\t\"\\\"type\\\" : \\\"string\\\", \"\n\t\t\"\\\"value\\\" : \\\"\\\\\\\"Fledge\\\\\\\"\\\", \"\n\t\t\"\\\"default\\\" : \\\"\\\\\\\"Fledge\\\\\\\"\\\" }, \"\n\t\"\\\"complex\\\" : { \" \n\t\t\"\\\"description\\\" : \\\"A JSON configuration parameter\\\", \"\n\t\t\"\\\"type\\\" : \\\"json\\\", \"\n\t\t\"\\\"value\\\" : {\\\"first\\\":\\\"Fledge\\\",\\\"second\\\":\\\"json\\\"}, \"\n\t\t\"\\\"default\\\" : {\\\"first\\\":\\\"Fledge\\\",\\\"second\\\":\\\"json\\\"} }} }\";\n\nconst char *json_type_JSON = \"{ \\\"key\\\" : \\\"test\\\", \\\"description\\\" : \\\"Test description\\\", \"\n\t\t\"\\\"value\\\" 
: {\\\"filter\\\" : { \\\"description\\\" : \\\"filter\\\", \\\"type\\\" : \\\"JSON\\\", \"\n\t\t\"\\\"value\\\" : {}, \\\"default\\\" : {\\\"pipeline\\\":[\\\"scale\\\",\\\"exceptional\\\"]} }} }\";\n\nconst char *json_boolean_number = \"{ \\\"key\\\" : \\\"test\\\", \\\"description\\\" : \\\"Test description\\\", \"\n\t\t\t\t\"\\\"value\\\" : \"\n\t\t\"{\\\"factor\\\" : { \\\"description\\\" : \\\"The factor value\\\", \\\"type\\\" : \\\"integer\\\", \"\n\t\t\t\"\\\"value\\\" : 112, \\\"default\\\" : 101 }, \"\n\t\t\"\\\"enable\\\" : { \\\"description\\\" : \\\"Switch enabled\\\", \\\"type\\\" : \\\"boolean\\\", \"\n\t\t\t\"\\\"value\\\" : \\\"true\\\", \\\"default\\\" : \\\"false\\\" }} }\";\n\nconst char *allCategories = \"[{\\\"key\\\": \\\"cat1\\\", \\\"description\\\" : \\\"desc1\\\"}, {\\\"key\\\": \\\"cat2\\\", \\\"description\\\" : \\\"desc2\\\"}]\";\nconst char *allCategories_quoted = \"[{\\\"key\\\": \\\"cat\\\\\\\"1\\\\\\\"\\\", \\\"description\\\" : \\\"desc\\\\\\\"1\\\\\\\"\\\"}, \"\n\t\t\t\t   \"{\\\"key\\\": \\\"cat\\\\\\\"2\\\\\\\"\\\", \\\"description\\\" : \\\"desc\\\\\\\"2\\\\\\\"\\\"}]\";\n\nconst char *myCategoryEnumFull = \"{\\\"description\\\": {\"\n\t\t\"\\\"value\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\", \\\"order\\\" : \\\"1\\\", \"\n\t\t\"\\\"default\\\": \\\"The Fledge administrative API\\\", \"\n\t\t\"\\\"description\\\": \\\"The description of this Fledge service\\\"}, \"\n\t\"\\\"name\\\": {\"\n\t\t\"\\\"value\\\": \\\"Fledge\\\", \\\"readonly\\\" : \\\"false\\\", \"\n\t\t\"\\\"type\\\": \\\"string\\\", \\\"order\\\" : \\\"2\\\", \"\n\t\t\"\\\"default\\\": \\\"Fledge\\\", \\\"displayName\\\" : \\\"Fledge service\\\", \"\n\t\t\"\\\"description\\\": \\\"The name of this Fledge service\\\"}, \"\n\t\"\\\"range\\\": {\"\n\t\t\"\\\"value\\\": \\\"1\\\",\"\n\t\t\"\\\"type\\\": \\\"integer\\\",\"\n\t\t\"\\\"default\\\": \\\"1\\\",\"\n\t\t\"\\\"minimum\\\": \\\"1\\\", \"\n\t\t\"\\\"maximum\\\": 
\\\"10\\\", \\\"order\\\" : \\\"4\\\",  \\\"displayName\\\" : \\\"Fledge range parameter\\\", \"\n\t\t\"\\\"description\\\": \\\"A constrained value\\\"},\"\n        \"\\\"enum\\\": {\" \\\n\t\t\"\\\"value\\\": \\\"first\\\",\"\n\t\t\"\\\"type\\\": \\\"enumeration\\\", \\\"order\\\" : \\\"3\\\", \"\n\t\t\"\\\"default\\\": \\\"first\\\", \\\"displayName\\\" : \\\"Fledge configuration parameter\\\", \"\n\t\t\"\\\"options\\\": [\\\"first\\\",\\\"second\\\",\\\"third\\\"], \"\n\t\t\"\\\"description\\\": \\\"An enumeration configuration parameter\\\"}}\";\n\nconst char* bigCategory =\n\t\t\"{\\\"OMFMaxRetry\\\": { \" \\\n\t\t\t\"\\\"type\\\": \\\"integer\\\", \\\"displayName\\\": \\\"Maximum Retry\\\", \" \\\n\t\t\t\"\\\"value\\\": \\\"3\\\", \\\"default\\\": \\\"3\\\", \" \\\n\t\t\t\"\\\"description\\\": \\\"Max number of retries\\\", \" \\\n\t\t\t\"\\\"order\\\": \\\"10\\\"}, \"\n\t\t\"\\\"compression\\\": { \" \\\n\t\t\t\"\\\"type\\\": \\\"boolean\\\", \\\"displayName\\\": \\\"Compression\\\", \" \\\n\t\t\t\"\\\"value\\\": \\\"false\\\", \\\"default\\\": \\\"true\\\", \" \\\n\t\t\t\"\\\"description\\\": \\\"Compress data before sending\\\", \" \\\n\t\t\t\"\\\"order\\\": \\\"16\\\"}, \" \\\n\t\t\t\"\\\"enable\\\": {\\\"type\\\": \\\"boolean\\\", \\\"description\\\": \" \\\n\t\t\t\t\"\\\"A switch that can be used to enable or disable execution\\\", \" \\\n\t\t\t\"\\\"default\\\": \\\"true\\\", \\\"value\\\": \\\"true\\\", \\\"readonly\\\": \\\"true\\\"}, \" \\\n\t\t\"\\\"plugin\\\": { \" \\\n\t\t\t\"\\\"type\\\": \\\"string\\\", \" \\\n\t\t\t\"\\\"description\\\": \\\"PI Server North C Plugin\\\", \" \\\n\t\t\t\"\\\"default\\\": \\\"OMF\\\", \" \\\n\t\t\t\"\\\"value\\\": \\\"OMF\\\", \\\"readonly\\\": \\\"true\\\"}, \" \\\n\t\t\"\\\"source\\\": { \" \\\n\t\t\t\"\\\"type\\\": \\\"enumeration\\\", \" \\\n\t\t\t\"\\\"options\\\": [\\\"readings\\\", \\\"statistics\\\"], \" \\\n\t\t\t\"\\\"displayName\\\": \\\"Data Source\\\", \" \\\n\t\t\t\"\\\"value\\\": \\\"readings\\\", 
\" \\\n\t\t\t\"\\\"default\\\": \\\"readings\\\", \" \\\n\t\t\t\"\\\"description\\\": \\\"Defines\\\"} \" \\\n\t\t\"}\";\n\nconst char *optionals = \n\t\"{\\\"item1\\\" : { \"\\\n\t\t\t\"\\\"type\\\": \\\"integer\\\", \\\"displayName\\\": \\\"Item1\\\", \" \\\n\t\t\t\"\\\"value\\\": \\\"3\\\", \\\"default\\\": \\\"3\\\", \" \\\n\t\t\t\"\\\"description\\\": \\\"First Item\\\", \" \\\n\t\t\t\"\\\"group\\\" : \\\"Group1\\\", \" \\\n\t\t\t\"\\\"rule\\\" : \\\"1 = 0\\\", \" \\\n\t\t\t\"\\\"deprecated\\\" : \\\"false\\\", \" \\\n\t\t\t\"\\\"order\\\": \\\"10\\\"} \"\n\t\t\"}\";\n\nconst char *json_quotedSpecial = R\"QS({ \"key\" : \"test \\\"a\\\"\", \"description\" : \"Test \\\"description\\\"\", \"value\" : {\"description\" : { \"description\" : \"The description of this \\\"Fledge\\\" service\", \"type\" : \"string\", \"value\" : \"The \\\"Fledge\\\" admini\\\\strative API\", \"default\" : \"The \\\"Fledge\\\" administra\\tive API\" }, \"name\" : { \"description\" : \"The name of this \\\"Fledge\\\" service\", \"type\" : \"string\", \"value\" : \"\\\"Fledge\\\"\", \"default\" : \"\\\"Fledge\\\"\" }, \"complex\" : { \"description\" : \"A JSON configuration parameter\", \"type\" : \"json\", \"value\" : {\"first\":\"Fledge\",\"second\":\"json\"}, \"default\" : {\"first\":\"Fledge\",\"second\":\"json\"} }} })QS\";\n\nconst char *json_parse_error = \"{\\\"description\\\": {\"\n\t\t\"\\\"value\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"description\\\": \\\"The description of this Fledge service\\\"},\"\n\t\"\\\"name\\\": {\"\n\t\t\"\\\"value\\\": \\\"Fledge\\\",\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"default\\\": \\\"Fledge\\\",\"\n\t\t\"\\\"description\\\": \\\"The name of this Fledge service\\\"},\"\n\t\t\"error : here,\"\n        \"\\\"complex\\\": {\" \\\n\t\t\"\\\"value\\\": { \\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" 
},\"\n\t\t\"\\\"type\\\": \\\"json\\\",\"\n\t\t\"\\\"default\\\": {\\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" },\"\n\t\t\"\\\"description\\\": \\\"A JSON configuration parameter\\\"}}\";\n\nconst char *listConfig = \"{ \\\"name\\\": {\"\n                \"\\\"type\\\": \\\"list\\\",\"\n\t\t\"\\\"items\\\" : \\\"string\\\",\"\n                \"\\\"default\\\": \\\"[ \\\\\\\"Fledge\\\\\\\" ]\\\",\"\n\t\t\"\\\"value\\\" : \\\"[ \\\\\\\"one\\\\\\\", \\\\\\\"two\\\\\\\" ]\\\",\"\n                \"\\\"description\\\": \\\"A simple list\\\"} }\";\n\nconst char *kvlistConfig = \"{ \\\"name\\\": {\"\n                \"\\\"type\\\": \\\"kvlist\\\",\"\n\t\t\"\\\"items\\\" : \\\"string\\\",\"\n                \"\\\"default\\\": \\\"{ }\\\",\"\n\t\t\"\\\"value\\\" : \\\"{ \\\\\\\"a\\\\\\\" : \\\\\\\"first\\\\\\\", \\\\\\\"b\\\\\\\" : \\\\\\\"second\\\\\\\" }\\\",\"\n                \"\\\"description\\\": \\\"A simple list\\\"} }\";\n\nconst char *kvlistObjectConfig = \"{ \\\"name\\\": {\"\n                \"\\\"type\\\": \\\"kvlist\\\",\"\n\t\t\"\\\"items\\\" : \\\"object\\\",\"\n                \"\\\"default\\\": \\\"{ }\\\",\"\n\t\t\"\\\"value\\\" : \\\"{ \\\\\\\"a\\\\\\\" : { \\\\\\\"one\\\\\\\" : \\\\\\\"first\\\\\\\"}, \\\\\\\"b\\\\\\\" : { \\\\\\\"two\\\\\\\" :\\\\\\\"second\\\\\\\" } }\\\",\"\n                \"\\\"description\\\": \\\"A simple list\\\"} }\";\n\nconst std::string DefaultConfigJson = R\"({\n\t\"bool_true\":     { \"type\": \"boolean\", \"default\": \"true\", \"value\": \"true\" },\n\t\"bool_false\":    { \"type\": \"boolean\", \"default\": \"false\", \"value\": \"false\" },\n\t\"int_val\":       { \"type\": \"integer\", \"default\": \"10\", \"value\": \"42\" },\n\t\"long_val\":      { \"type\": \"integer\", \"default\": \"1000\", \"value\": \"1000000000\" },\n\t\"double_val\":    { \"type\": \"float\",   \"default\": \"3.14\", \"value\": \"2.718\" },\n\t\"invalid_bool\":  { \"type\": \"boolean\", \"default\": \"true\", \"value\": 
\"maybe\" },\n\t\"invalid_int\":   { \"type\": \"integer\", \"default\": \"5\", \"value\": \"not_a_number\" },\n\t\"invalid_double\":{ \"type\": \"float\", \"default\": \"1.0\", \"value\": \"oops\" }\n})\";\n\nTEST(CategoriesTest, GetValueWithDefault)\n{\n\tConfigCategory cat(\"test\", DefaultConfigJson);\n\tEXPECT_EQ(cat.getValue(\"non_existing\", \"TestVal\"), \"TestVal\");\n\tEXPECT_EQ(cat.getValue(\"bool_true\", \"nope\"), \"true\");\n}\n\nTEST(CategoriesTest, GetBoolValue)\n{\n\tConfigCategory cat(\"test\", DefaultConfigJson);\n\tEXPECT_TRUE(cat.getBoolValue(\"bool_true\"));\n\tEXPECT_FALSE(cat.getBoolValue(\"bool_false\"));\n\tEXPECT_TRUE(cat.getBoolValue(\"non_existing_bool\", true)); // fallback\n\tEXPECT_FALSE(cat.getBoolValue(\"invalid_bool\", false)); // wrong type fallback\n}\n\nTEST(CategoriesTest, GetIntegerValue)\n{\n\tConfigCategory cat(\"test\", DefaultConfigJson);\n\tEXPECT_EQ(cat.getIntegerValue(\"int_val\"), 42);\n\tEXPECT_EQ(cat.getIntegerValue(\"non_existing_int\", 123), 123); // fallback\n\tEXPECT_EQ(cat.getIntegerValue(\"invalid_int\", -1), -1); // wrong type fallback\n}\n\nTEST(CategoriesTest, GetLongValue)\n{\n\tConfigCategory cat(\"test\", DefaultConfigJson);\n\tEXPECT_EQ(cat.getLongValue(\"long_val\"), 1000000000L);\n\tEXPECT_EQ(cat.getLongValue(\"non_existing_long\", 55555L), 55555L); // fallback\n\tEXPECT_EQ(cat.getLongValue(\"invalid_int\", 99L), 99L); // wrong type fallback\n}\n\nTEST(CategoriesTest, GetDoubleValue)\n{\n\tConfigCategory cat(\"test\", DefaultConfigJson);\n\tEXPECT_DOUBLE_EQ(cat.getDoubleValue(\"double_val\"), 2.718);\n\tEXPECT_DOUBLE_EQ(cat.getDoubleValue(\"non_existing_double\", 1.618), 1.618); // fallback\n\tEXPECT_DOUBLE_EQ(cat.getDoubleValue(\"invalid_double\", 0.0), 0.0); // wrong type fallback\n}\n\nTEST(CategoriesTest, Count)\n{\n\tConfigCategories confCategories(categories);\n\tASSERT_EQ(2, confCategories.length());\n}\n\nTEST(CategoriesTestQuoted, CountQuoted)\n{\nEXPECT_EXIT({\n\tConfigCategories 
confCategories(categories_quoted);\n\tint num = confCategories.length();\n\tif (num != 2)\n\t{\n\t\tcerr << \"CountQuoted is not 2\" << endl;\n\t}\n\texit(!(num == 2)); }, ::testing::ExitedWithCode(0), \"\");\n}\n\nTEST(CategoriesTest, Index)\n{\n\tConfigCategories confCategories(categories);\n\tconst ConfigCategoryDescription *item = confCategories[0];\n\tASSERT_EQ(0, item->getName().compare(\"cat1\"));\n\tASSERT_EQ(0, item->getDescription().compare(\"First category\"));\n}\n\nTEST(CategoriesTest, addElements)\n{\n\tConfigCategories categories;\n\tConfigCategoryDescription *one = new ConfigCategoryDescription(string(\"cat1\"), string(\"desc1\"));\n\tConfigCategoryDescription *two = new ConfigCategoryDescription(string(\"cat2\"), string(\"desc2\"));\n\tcategories.addCategoryDescription(one);\n\tcategories.addCategoryDescription(two);\n\tASSERT_EQ(2, categories.length());\n}\n\nTEST(CategoriesTest, toJSON)\n{\n\tConfigCategories categories;\n\tConfigCategoryDescription *one = new ConfigCategoryDescription(string(\"cat1\"), string(\"desc1\"));\n\tConfigCategoryDescription *two = new ConfigCategoryDescription(string(\"cat2\"), string(\"desc2\"));\n\tcategories.addCategoryDescription(one);\n\tcategories.addCategoryDescription(two);\n\tstring result =  categories.toJSON();\n\tASSERT_EQ(2, categories.length());\n\tASSERT_EQ(0, result.compare(allCategories));\n}\n\nTEST(CategoriesTestQuoted, toJSONQuoted)\n{\n\tConfigCategories categories;\n\tConfigCategoryDescription *one = new ConfigCategoryDescription(string(\"cat\\\"1\\\"\"), string(\"desc\\\"1\\\"\"));\n\tConfigCategoryDescription *two = new ConfigCategoryDescription(string(\"cat\\\"2\\\"\"), string(\"desc\\\"2\\\"\"));\n\tcategories.addCategoryDescription(one);\n\tcategories.addCategoryDescription(two);\n\tstring result =  categories.toJSON();\n\tASSERT_EQ(2, categories.length());\n\tASSERT_EQ(0, result.compare(allCategories_quoted));\n}\n\nTEST(CategoriesTest, toJSONParameters)\n{\n\t// Arrange\n\tConfigCategory 
category(\"test_toJSONParameters\", myCategory_to_json_parameters);\n\n\t// Act\n\tstring strJSONFalse = category.toJSON();\n\tstring strJSONTrue = category.toJSON(true);\n\n\t// Assert\n\tASSERT_EQ(string::npos, strJSONFalse.find(\"order\"));\n\tASSERT_EQ(string::npos, strJSONFalse.find(\"readonly\"));\n\n\tASSERT_NE(string::npos, strJSONTrue.find(\"order\"));\n\tASSERT_NE(string::npos, strJSONTrue.find(\"readonly\"));\n}\n\n\nTEST(CategoryTest, Construct)\n{\n\tConfigCategory confCategory(\"test\", myCategory);\n\tASSERT_EQ(3, confCategory.getCount());\n}\n\nTEST(CategoryTest, ExistsTest)\n{\n\tConfigCategory confCategory(\"test\", myCategory);\n\tASSERT_EQ(true, confCategory.itemExists(\"name\"));\n\tASSERT_EQ(false, confCategory.itemExists(\"non-existance\"));\n}\n\nTEST(CategoryTest, getValue)\n{\nEXPECT_EXIT({\n\tConfigCategory confCategory(\"test\", myCategory);\n\tbool ret = confCategory.getValue(\"name\").compare(\"Fledge\") == 0;\n\tif (!ret)\n\t{\n\t\tcerr << \"getValue failed\" << endl;\n\t}\n\texit(!ret); }, ::testing::ExitedWithCode(0), \"\");\n}\n\nTEST(CategoryTest, getType)\n{\n\tConfigCategory confCategory(\"test\", myCategory);\n\tASSERT_EQ(0, confCategory.getType(\"name\").compare(\"string\"));\n}\n\nTEST(CategoryTest, getDefault)\n{\n\tConfigCategory confCategory(\"test\", myCategory);\n\tASSERT_EQ(0, confCategory.getDefault(\"name\").compare(\"Fledge\"));\n}\n\nTEST(CategoryTest, getDescription)\n{\n\tConfigCategory confCategory(\"test\", myCategory);\n\tASSERT_EQ(0, confCategory.getDescription(\"name\").compare(\"The name of this Fledge service\"));\n}\n\nTEST(CategoryTest, isString)\n{\n\tConfigCategory confCategory(\"test\", myCategory);\n\tASSERT_EQ(true, confCategory.isString(\"name\"));\n\tASSERT_EQ(false, confCategory.isString(\"complex\"));\n}\n\nTEST(CategoryTest, isJSON)\n{\n\tConfigCategory confCategory(\"test\", myCategory);\n\tASSERT_EQ(false, confCategory.isJSON(\"name\"));\n\tASSERT_EQ(true, 
confCategory.isJSON(\"complex\"));\n}\n\nTEST(CategoryTest, toJSON)\n{\n\tConfigCategory confCategory(\"test\", myCategory);\n\tconfCategory.setDescription(\"Test description\");\n\tASSERT_EQ(0, confCategory.toJSON().compare(json));\n}\n\nTEST(CategoryTestQuoted, toJSONQuoted)\n{\n\tConfigCategory confCategory(\"test \\\"a\\\"\", myCategory_quoted);\n\tconfCategory.setDescription(\"Test \\\"description\\\"\");\n\tASSERT_EQ(0, confCategory.toJSON().compare(json_quoted));\n}\n\nTEST(CategoryTest, bool_and_number_ok)\n{\n\tConfigCategory confCategory(\"test\", myCategory_number_and_boolean_items);\n\tconfCategory.setDescription(\"Test description\");\n\tASSERT_EQ(true, confCategory.isBool(\"enable\"));\n\tASSERT_EQ(true, confCategory.isNumber(\"factor\"));\n\tASSERT_EQ(0, confCategory.toJSON().compare(json_boolean_number));\n\tASSERT_EQ(0, confCategory.getValue(\"factor\").compare(\"112\"));\n}\n\nTEST(CategoryTest, handle_type_JSON_ok)\n{\n\tConfigCategory confCategory(\"test\", myCategory_JSON_type_with_escaped_default);\n\tconfCategory.setDescription(\"Test description\");\n\tASSERT_EQ(true, confCategory.isJSON(\"filter\"));\n\n\tDocument arrayItem;\n\tarrayItem.Parse(confCategory.getDefault(\"filter\").c_str());\n\tconst Value& arrayValue = arrayItem[\"pipeline\"];\n\n\tASSERT_TRUE(arrayValue.IsArray());\n\tASSERT_TRUE(arrayValue.Size() == 2);\n\tASSERT_EQ(0, confCategory.getDefault(\"filter\").compare(json_array_item));\n\tASSERT_EQ(0, confCategory.toJSON().compare(json_type_JSON));\n}\n\nTEST(CategoryTest, handle_type_JSON_fail)\n{\nEXPECT_EXIT({\n\tbool ret = false;\n\ttry\n\t{\n\t\tConfigCategory confCategory(\"test\", myCategory_JSON_type_without_escaped_default);\n\t\tconfCategory.setDescription(\"Test description\");\n\n\t\t// test fails here!\n\t\tcerr << \"setting confCategory must fail\" << endl;\n\t}\n\tcatch (exception *e)\n\t{\n\t\tret = true;\n\t\tdelete e;\n\t\t// Test ok; exception found\n\t} \n\tcatch (...)\n\t{\n\t\tret = true;\n\t\t// Test ok; 
exception found\n\t}\n\texit(!ret); }, ::testing::ExitedWithCode(0), \"\");\n}\n\nTEST(CategoryTest, enumerationTest)\n{\n\tConfigCategory confCategory(\"test\", myCategoryEnum);\n\tASSERT_EQ(true, confCategory.isEnumeration(\"enum\"));\n\tstd::vector<std::string> options = confCategory.getOptions(\"enum\");\n\tASSERT_EQ(3, options.size());\n}\n\nTEST(CategoryTest, enumerationJSONTest)\n{\n\tConfigCategory confCategory(\"test\", myCategoryEnum);\n\tASSERT_EQ(true, confCategory.isEnumeration(\"enum\"));\n\tstd::vector<std::string> options = confCategory.getOptions(\"enum\");\n\tASSERT_EQ(3, options.size());\n\tASSERT_EQ(0, confCategory.toJSON().compare(enum_JSON));\n}\n\nTEST(CategoryTest, displayName)\n{\n\tConfigCategory confCategory(\"test\", myCategoryDisplayName);\n\tASSERT_EQ(\"My Fledge\", confCategory.getDisplayName(\"name\"));\n}\n\nTEST(CategoryTest, deprecated)\n{\n\tConfigCategory confCategory(\"test\", myCategoryDeprecated);\n\tASSERT_EQ(false, confCategory.isDeprecated(\"name\"));\n\tASSERT_EQ(true, confCategory.isDeprecated(\"location\"));\n}\n\nTEST(CategoryTest, minMax)\n{\n\tConfigCategory confCategory(\"test\", myCategoryMinMax);\n\tASSERT_EQ(\"1\", confCategory.getMinimum(\"range\"));\n\tASSERT_EQ(\"10\", confCategory.getMaximum(\"range\"));\n}\n\nTEST(CategoryTest, removeItems)\n{\n\tConfigCategory confCategory(\"test\", myCategoryRemoveItems);\n\tASSERT_EQ(2, confCategory.getCount());\n\n\tconfCategory.removeItems();\n\tASSERT_EQ(0, confCategory.getCount());\n}\n\nTEST(CategoryTest, removeItemsType)\n{\n\tConfigCategory conf2Category(\"test\", myCategoryRemoveItems);\n\tASSERT_EQ(2, conf2Category.getCount());\n\n\tconf2Category.removeItemsType(ConfigCategory::ItemType::CategoryType);\n\tASSERT_EQ(1, conf2Category.getCount());\n\n}\n\n/**\n * Test \"script\" type item\n */\nTEST(CategoryTest, scriptItem)\n{\n\tstring file = \"/home/ubuntu/source/develop/Fledge/data/scripts/pumpa_powerfilter_script_file27.py\";\n\tConfigCategory 
scriptCategory(\"script\", myCategoryScript);\n\tConfigCategory newCategory(\"scriptNew\", scriptCategory.itemsToJSON(true));\n\t// Check we have file attribute in Category object\n\tASSERT_EQ(0,\n\t\t  scriptCategory.getItemAttribute(\"script\",\n\t\t\t\t\t\t  ConfigCategory::FILE_ATTR).compare(file));\n\n\t// Check we have 4 items in Category object\n\tASSERT_EQ(4, newCategory.getCount());\n}\n\n/**\n * Check a category object with order, displayName etc\n */\nTEST(CategoryTest, categoryAllFullOutput)\n{\n\tConfigCategory fullItems(\"full\", myCategoryEnumFull);\n\t// Get all hidden objects\n\tstring fullCategoryItems = fullItems.itemsToJSON(true);\n\t// Create a Category object from a JSON with all objects\n\tfullItems = ConfigCategory(\"full\", fullCategoryItems);\n\t// Get standard objects\n\tstring categoryItems = fullItems.itemsToJSON(false);\n\t// Check category has all hidden objects\n\tASSERT_EQ(0, fullCategoryItems.compare(fullItems.itemsToJSON(true)));\n\t// Check basic category object has no hidden objects\n\tASSERT_EQ(0, categoryItems.compare(fullItems.itemsToJSON(false)));\n\t// Check we have 4 items in Category object\n\tASSERT_EQ(4, fullItems.getCount());\n}\n\n/**\n * Check all return values of a category\n */\nTEST(CategoryTest, categoryValues)\n{\n        ConfigCategory complex(\"complex\", bigCategory);\n        ASSERT_EQ(true, complex.isBool(\"compression\"));\n        ASSERT_EQ(true, complex.isEnumeration(\"source\"));\n        ASSERT_EQ(true, complex.isString(\"plugin\"));\n        ASSERT_EQ(true, complex.getValue(\"plugin\").compare(\"OMF\") == 0);\n        ASSERT_EQ(true, complex.getValue(\"OMFMaxRetry\").compare(\"3\") == 0);\n}\n\n\n/**\n * Test optional attributes\n */\nTEST(CategoryTest, optionalItems)\n{\n\tConfigCategory category(\"optional\", optionals);\n\tASSERT_EQ(0, category.getItemAttribute(\"item1\", ConfigCategory::GROUP_ATTR).compare(\"Group1\"));\n\tASSERT_EQ(0, category.getItemAttribute(\"item1\", 
ConfigCategory::DEPRECATED_ATTR).compare(\"false\"));\n\tASSERT_EQ(0, category.getItemAttribute(\"item1\", ConfigCategory::RULE_ATTR).compare(\"1 = 0\"));\n\tASSERT_EQ(0, category.getItemAttribute(\"item1\", ConfigCategory::DISPLAY_NAME_ATTR).compare(\"Item1\"));\n}\n\n/**\n * Special quotes for \\\\s and \\\\t\n */\n\nTEST(CategoryTestQuoted, toJSONQuotedSpecial)\n{\n\tConfigCategory confCategory(\"test \\\"a\\\"\", myCategory_quotedSpecial);\n\tconfCategory.setDescription(\"Test \\\"description\\\"\");\n\tASSERT_EQ(0, confCategory.toJSON().compare(json_quotedSpecial));\n}\n\nTEST(CategoryTest, parseError)\n{\n\tEXPECT_THROW(ConfigCategory(\"parseTest\", json_parse_error), ConfigMalformed*);\n}\n\nTEST(CategoryTest, listItem)\n{\n\tConfigCategory category(\"list\", listConfig);\n\tASSERT_EQ(true, category.isList(\"name\"));\n\tASSERT_EQ(0, category.getItemAttribute(\"name\", ConfigCategory::ITEM_TYPE_ATTR).compare(\"string\"));\n\tstd::vector<std::string> v = category.getValueList(\"name\");\n\tASSERT_EQ(2, v.size());\n\tASSERT_EQ(0, v[0].compare(\"one\"));\n\tASSERT_EQ(0, v[1].compare(\"two\"));\n}\n\nTEST(CategoryTest, kvlistItem)\n{\n\tConfigCategory category(\"list\", kvlistConfig);\n\tASSERT_EQ(true, category.isKVList(\"name\"));\n\tASSERT_EQ(0, category.getItemAttribute(\"name\", ConfigCategory::ITEM_TYPE_ATTR).compare(\"string\"));\n\tstd::map<std::string, std::string> v = category.getValueKVList(\"name\");\n\tASSERT_EQ(2, v.size());\n\tASSERT_EQ(0, v[\"a\"].compare(\"first\"));\n\tASSERT_EQ(0, v[\"b\"].compare(\"second\"));\n}\n\nTEST(CategoryTest, kvlistObjectItem)\n{\n\tConfigCategory category(\"list\", kvlistObjectConfig);\n\tASSERT_EQ(true, category.isKVList(\"name\"));\n\tASSERT_EQ(0, category.getItemAttribute(\"name\", ConfigCategory::ITEM_TYPE_ATTR).compare(\"object\"));\n\tstd::map<std::string, std::string> v = category.getValueKVList(\"name\");\n\tASSERT_EQ(2, v.size());\n\tASSERT_EQ(0, 
v[\"a\"].compare(\"{\\\"one\\\":\\\"first\\\"}\"));\n\tASSERT_EQ(0, v[\"b\"].compare(\"{\\\"two\\\":\\\"second\\\"}\"));\n}\n\n// Config category with \\n in both default and value\nstring categoryJsonLF = R\"({\"plugin\": {\"description\": \"Sinusoid Poll Plugin which implements sine wave with data points\", \"type\": \"string\", \"default\": \"sinusoid\", \"readonly\": \"true\"}, \"assetName\": {\"description\": \"Name of Asset\", \"type\": \"string\", \"default\": \"sinusoid\", \"displayName\": \"Asset name\", \"mandatory\": \"true\"}, \"writemap\": {\"description\": \"Map of tags\", \"displayName\": \"Tags to write\", \"type\": \"JSON\", \"default\": \"{\\n\\\"tags\\\": [\\n{\\n\\\"name\\\": \\\"PLCTAG\\\",\\n\\\"type\\\": \\\"UINT32\\\",\\n\\\"program\\\": \\\"\\\"\\n}]\\n\\n}\", \"value\" : \"{\\n\\\"tags\\\": [\\n{\\n\\\"name\\\": \\\"PLCTAG\\\",\\n\\\"type\\\": \\\"UINT32\\\",\\n\\\"program\\\": \\\"\\\"\\n, \\\"asset\\\": \\\"Asset\\\",\\n\\\"datapoint\\\": \\\"Datapoint\\\"\\n}\\n]\\n}\"}})\";\nstring defaultJsonClear = R\"({\"tags\": [{\"name\": \"PLCTAG\",\"type\": \"UINT32\",\"program\": \"\"}]})\";\nstring valueJsonClear = R\"({\"tags\": [{\"name\": \"PLCTAG\",\"type\": \"UINT32\",\"program\": \"\", \"asset\": \"Asset\",\"datapoint\": \"Datapoint\"}]})\";\n\nTEST(CategoryTestLF, toJSONWithoutLF)\n{\n\tDefaultConfigCategory confCategory(\"testLF\", categoryJsonLF);\n\tconfCategory.setDescription(\"Test description\");\n\n\tASSERT_EQ(0, confCategory.getDefault(\"writemap\").compare(defaultJsonClear));\n\tASSERT_EQ(0, confCategory.getValue(\"writemap\").compare(valueJsonClear));\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_createDirectory.cpp",
    "content": "/*\n * unit tests - FOGL-9345 : Improve createDirectory utility routine\n *\n * Copyright (c) 2025 Dianomic Systems, Inc.\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Devki Nandan Ghildiyal\n */\n\n#include <gtest/gtest.h>\n#include <file_utils.h>\n#include <exception>\n#include <sys/stat.h>\n\nusing namespace std;\n\nTEST(TEST_DIRECTORY, PERMISSION_DENIED)\n{\n    std::string directoryName = \"/root/nopermission\";\n    try {\n        createDirectory(directoryName);\n        FAIL() << \"Expected std::runtime_error\";\n    }\n    catch(std::runtime_error const & err) {\n        EXPECT_EQ(err.what(), std::string(\"Unable to create directory /root/nopermission: error: -1\"));\n    }\n    catch(...) {\n        FAIL() << \"Expected std::runtime_error\";\n    }\n}\n\nTEST(TEST_DIRECTORY, PATH_NOT_DIRECTORY)\n{\n    std::string directoryName = \"/lib/systemd/systemd\";\n    try {\n        createDirectory(directoryName);\n        FAIL() << \"Expected std::runtime_error\";\n    }\n    catch(std::runtime_error const & err) {\n        EXPECT_EQ(err.what(), std::string(\"Path exists but is not a directory: /lib/systemd/systemd\"));\n    }\n    catch(...) {\n        FAIL() << \"Expected std::runtime_error\";\n    }\n}\n\nTEST(TEST_DIRECTORY, DIRECTORY_EXISTS_OR_CREATED)\n{\n    std::string directoryName = \"/tmp/testCreateDirFunc\";\n\n    createDirectory(directoryName);\n    struct stat sb;\n    // Check the path exists and really is a directory\n    if (stat(directoryName.c_str(), &sb) != 0 || !S_ISDIR(sb.st_mode))\n    {\n        FAIL() << \"Directory \" << directoryName << \" could not be created\";\n    }\n}"
  },
  {
    "path": "tests/unit/C/common/test_default_config_category.cpp",
    "content": "#include <gtest/gtest.h>\n#include <config_category.h>\n#include <string.h>\n#include <string>\n#include <rapidjson/document.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\nconst char *default_categories = \"{\\\"categories\\\": [\"\n\t\"{\\\"key\\\": \\\"cat1\\\", \\\"description\\\":\\\"First category\\\"},\"\n\t\"{\\\"key\\\": \\\"cat2\\\", \\\"description\\\":\\\"Second\\\"}]}\";\n\nconst char *default_categories_quoted = \"{\\\"categories\\\": [\"\n\t\"{\\\"key\\\": \\\"cat\\\\\\\"1\\\\\\\"\\\", \\\"description\\\":\\\"The \\\\\\\"First\\\\\\\" category\\\"},\"\n\t\"{\\\"key\\\": \\\"cat\\\\\\\"2\\\\\\\"\\\", \\\"description\\\":\\\"The \\\\\\\"Second\\\\\\\" category\\\"}]}\";\n\nconst char *default_myCategory = \"{\\\"description\\\": {\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"value\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"default\\\": \\\"The Fledge administrative API\\\",\"\n\t\t\"\\\"description\\\": \\\"The description of this Fledge service\\\"},\"\n\t\"\\\"name\\\": {\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"value\\\": \\\"Fledge\\\",\"\n\t\t\"\\\"default\\\": \\\"Fledge\\\",\"\n\t\t\"\\\"description\\\": \\\"The name of this Fledge service\\\"},\"\n        \"\\\"complex\\\": {\" \\\n\t\t\"\\\"type\\\": \\\"json\\\",\"\n\t\t\"\\\"value\\\": {\\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" },\"\n\t\t\"\\\"default\\\": {\\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" },\"\n\t\t\"\\\"description\\\": \\\"A JSON configuration parameter\\\"}}\";\n\nconst char *default_myCategory_quoted = \"{\\\"description\\\": {\"\n\t\t\"\\\"type\\\": \\\"string\\\",\"\n\t\t\"\\\"value\\\": \\\"The \\\\\\\"Fledge\\\\\\\" administrative API\\\",\"\n\t\t\"\\\"default\\\": \\\"The \\\\\\\"Fledge\\\\\\\" administrative API\\\",\"\n\t\t\"\\\"description\\\": \\\"The description of this \\\\\\\"Fledge\\\\\\\" service\\\"},\"\n\t\"\\\"name\\\": {\"\n\t\t\"\\\"type\\\": 
\\\"string\\\",\"\n\t\t\"\\\"value\\\": \\\"\\\\\\\"Fledge\\\\\\\"\\\",\"\n\t\t\"\\\"default\\\": \\\"\\\\\\\"Fledge\\\\\\\"\\\",\"\n\t\t\"\\\"description\\\": \\\"The name of this \\\\\\\"Fledge\\\\\\\" service\\\"},\"\n        \"\\\"complex\\\": {\" \\\n\t\t\"\\\"type\\\": \\\"json\\\",\"\n\t\t\"\\\"value\\\": {\\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" },\"\n\t\t\"\\\"default\\\": {\\\"first\\\" : \\\"Fledge\\\", \\\"second\\\" : \\\"json\\\" },\"\n\t\t\"\\\"description\\\": \\\"A JSON configuration parameter\\\"}}\";\n\nconst char *default_myCategory_quotedSpecial = R\"DQS({ \"description\": { \"type\": \"string\", \"value\": \"The \\\"Fledge\\\" administra\\tive API\", \"default\": \"The \\\"Fledge\\\" admini\\\\strative API\", \"description\": \"The description of this \\\"Fledge\\\" service\"}, \"name\": { \"type\": \"string\", \"value\": \"\\\"Fledge\\\"\", \"default\": \"\\\"Fledge\\\"\", \"description\": \"The name of this \\\"Fledge\\\" service\"}, \"complex\": { \"type\": \"json\", \"value\": {\"first\" : \"Fledge\", \"second\" : \"json\" }, \"default\": {\"first\" : \"Fledge\", \"second\" : \"json\" }, \"description\": \"A JSON configuration parameter\"}})DQS\";\n\n/**\n * The JSON output from DefaultConfigCategory::toJSON has \"default\" values only\n */\nconst char *default_json = \"{ \\\"key\\\" : \\\"test\\\", \\\"description\\\" : \\\"Test description\\\", \"\n    \"\\\"value\\\" : {\"\n\t\"\\\"description\\\" : { \"\n\t\t\"\\\"description\\\" : \\\"The description of this Fledge service\\\", \"\n\t\t\"\\\"type\\\" : \\\"string\\\", \"\n\t\t\"\\\"default\\\" : \\\"The Fledge administrative API\\\" }, \"\n\t\"\\\"name\\\" : { \"\n\t\t\"\\\"description\\\" : \\\"The name of this Fledge service\\\", \"\n\t\t\"\\\"type\\\" : \\\"string\\\", \"\n\t\t\"\\\"default\\\" : \\\"Fledge\\\" }, \"\n\t\"\\\"complex\\\" : { \" \n\t\t\"\\\"description\\\" : \\\"A JSON configuration parameter\\\", \"\n\t\t\"\\\"type\\\" : \\\"json\\\", 
\"\n\t\t\"\\\"default\\\" : \\\"{\\\\\\\"first\\\\\\\":\\\\\\\"Fledge\\\\\\\",\\\\\\\"second\\\\\\\":\\\\\\\"json\\\\\\\"}\\\" }} }\";\n\nconst char *default_json_quoted = \"{ \\\"key\\\" : \\\"test \\\\\\\"a\\\\\\\"\\\", \\\"description\\\" : \\\"Test \\\\\\\"description\\\\\\\"\\\", \"\n    \"\\\"value\\\" : {\"\n\t\"\\\"description\\\" : { \"\n\t\t\"\\\"description\\\" : \\\"The description of this \\\\\\\"Fledge\\\\\\\" service\\\", \"\n\t\t\"\\\"type\\\" : \\\"string\\\", \"\n\t\t\"\\\"default\\\" : \\\"The \\\\\\\"Fledge\\\\\\\" administrative API\\\" }, \"\n\t\"\\\"name\\\" : { \"\n\t\t\"\\\"description\\\" : \\\"The name of this \\\\\\\"Fledge\\\\\\\" service\\\", \"\n\t\t\"\\\"type\\\" : \\\"string\\\", \"\n\t\t\"\\\"default\\\" : \\\"\\\\\\\"Fledge\\\\\\\"\\\" }, \"\n\t\"\\\"complex\\\" : { \" \n\t\t\"\\\"description\\\" : \\\"A JSON configuration parameter\\\", \"\n\t\t\"\\\"type\\\" : \\\"json\\\", \"\n\t\t\"\\\"default\\\" : \\\"{\\\\\\\"first\\\\\\\":\\\\\\\"Fledge\\\\\\\",\\\\\\\"second\\\\\\\":\\\\\\\"json\\\\\\\"}\\\" }} }\";\n\nconst char *default_myCategory_number_and_boolean_items =  \"{\\\"factor\\\": {\"\n\t\t\"\\\"value\\\": \\\"101\\\",\"\n\t\t\"\\\"type\\\": \\\"integer\\\",\"\n\t\t\"\\\"default\\\": 100,\"\n\t\t\"\\\"description\\\": \\\"The factor value\\\"}, \"\n\t\"\\\"enable\\\" : {\"\n\t\"\\\"description\\\": \\\"Switch enabled\\\", \"\n\t\"\\\"default\\\" : \\\"false\\\", \"\n\t\"\\\"value\\\" : true, \"\n\t\"\\\"type\\\" : \\\"boolean\\\"}}\";\n\n// NOTE: toJSON() methods return escaped content for default properties \nconst char *default_json_boolean_number = \"{ \\\"key\\\" : \\\"test\\\", \\\"description\\\" : \\\"Test description\\\", \"\n\t\t\t\t\"\\\"value\\\" : \"\n\t\t\"{\\\"factor\\\" : { \\\"description\\\" : \\\"The factor value\\\", \\\"type\\\" : \\\"integer\\\", \"\n\t\t\t\"\\\"default\\\" : \\\"100\\\" }, \"\n\t\t\"\\\"enable\\\" : { \\\"description\\\" : \\\"Switch enabled\\\", \\\"type\\\" : \\\"boolean\\\", 
\"\n\t\t\t\"\\\"default\\\" : \\\"false\\\" }} }\";\n\nconst char *default_myCategory_JSON_type_with_escaped_default = \"{ \"\n        \"\\\"filter\\\": { \"\n                \"\\\"type\\\": \\\"JSON\\\", \"\n                \"\\\"description\\\": \\\"filter\\\", \"\n                \"\\\"default\\\": \\\"{\\\\\\\"pipeline\\\\\\\":[\\\\\\\"scale\\\\\\\",\\\\\\\"exceptional\\\\\\\"]}\\\", \"\n                \"\\\"value\\\": \\\"{}\\\" } }\";\n\n// NOTE: toJSON() methods return escaped content for default properties \nconst char *default_json_type_JSON = \"{ \\\"key\\\" : \\\"test\\\", \\\"description\\\" : \\\"Test description\\\", \"\n                \"\\\"value\\\" : {\\\"filter\\\" : { \\\"description\\\" : \\\"filter\\\", \\\"type\\\" : \\\"JSON\\\", \"\n                \"\\\"default\\\" : \\\"{\\\\\\\"pipeline\\\\\\\":[\\\\\\\"scale\\\\\\\",\\\\\\\"exceptional\\\\\\\"]}\\\" }} }\";\n\n// default has invalid (escaped) JSON object value here: a \\\\\\\" is missing for pipeline\nconst char *default_myCategory_JSON_type_without_escaped_default = \"{ \"\n        \"\\\"filter\\\": { \"\n                \"\\\"type\\\": \\\"JSON\\\", \"\n                \"\\\"description\\\": \\\"filter\\\", \"\n                \"\\\"default\\\": \\\"{\\\"pipeline\\\\\\\" : \\\\\\\"scale\\\\\\\", \\\\\\\"exceptional\\\\\\\"]}\\\", \"\n                \"\\\"value\\\": \\\"{}\\\" } }\";\n\n// This is the output of getValue or getDefault and the content is unescaped\nconst char *default_json_array_item = \"{\\\"pipeline\\\":[\\\"scale\\\",\\\"exceptional\\\"]}\";\n\nconst char *myDefCategoryRemoveItems = \"{\" \\\n\t\t\t\"\\\"plugin\\\" : {\"\\\n\t\t\t\t\"\\\"description\\\" : \\\"Random C south plugin\\\", \"\\\n\t\t\t\t\"\\\"type\\\" : \\\"string\\\", \"\\\n\t\t\t\t\"\\\"default\\\" : \\\"Random_2\\\" \"\\\n\t\t\t\"}, \"\\\n\t\t\t\"\\\"asset\\\" : {\"\\\n\t\t\t\t\"\\\"description\\\" : \\\"Asset name\\\", \" \\\n\t\t\t\t\"\\\"type\\\" : \\\"category\\\", \"\\\n\t\t\t\t\"\\\"default\\\" 
: {\"\\\n\t\t\t\t\t\"\\\"bias\\\" : {\"\\\n\t\t\t\t\t\t\"\\\"description\\\" : \\\"Bias offset\\\", \"\\\n\t\t\t\t\t\t\"\\\"type\\\" : \\\"float\\\", \"\\\n\t\t\t\t\t\t\"\\\"default\\\" : \\\"2\\\" \"\\\n\t\t\t\t\t\"} \"\\\n\t\t\t\t\"} \"\\\n\t\t\t\"} \"\\\n\t\t\"}\";\n\n\nconst char *default_json_quotedSpecial = R\"SDQ({ \"key\" : \"test \\\"a\\\"\", \"description\" : \"Test \\\"description\\\"\", \"value\" : {\"description\" : { \"description\" : \"The description of this \\\"Fledge\\\" service\", \"type\" : \"string\", \"default\" : \"The \\\"Fledge\\\" admini\\\\strative API\" }, \"name\" : { \"description\" : \"The name of this \\\"Fledge\\\" service\", \"type\" : \"string\", \"default\" : \"\\\"Fledge\\\"\" }, \"complex\" : { \"description\" : \"A JSON configuration parameter\", \"type\" : \"json\", \"default\" : \"{\\\"first\\\":\\\"Fledge\\\",\\\"second\\\":\\\"json\\\"}\" }} })SDQ\";\n\nTEST(DefaultCategoriesTest, Count)\n{\n\tConfigCategories confCategories(default_categories);\n\tASSERT_EQ(2, confCategories.length());\n}\n\nTEST(DefaultCategoriesTestQuoted, CountQuoted)\n{\n\tConfigCategories confCategories(default_categories_quoted);\n\tASSERT_EQ(2, confCategories.length());\n}\n\nTEST(DefaultCategoriesTest, Index)\n{\n\tConfigCategories confCategories(default_categories);\n\tconst ConfigCategoryDescription *item = confCategories[0];\n\tASSERT_EQ(0, item->getName().compare(\"cat1\"));\n\tASSERT_EQ(0, item->getDescription().compare(\"First category\"));\n}\n\nTEST(DefaultCategoryTest, Construct)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory);\n\tASSERT_EQ(3, confCategory.getCount());\n}\n\nTEST(DefaultCategoryTestQuoted, ConstructQuoted)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory_quoted);\n\tASSERT_EQ(3, confCategory.getCount());\n}\n\nTEST(DefaultCategoryTest, ExistsTest)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory);\n\tASSERT_EQ(true, 
confCategory.itemExists(\"name\"));\n\tASSERT_EQ(false, confCategory.itemExists(\"non-existance\"));\n}\n\nTEST(DefaultCategoryTestQuoted, ExistsTestQuoted)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory_quoted);\n\tASSERT_EQ(true, confCategory.itemExists(\"name\"));\n\tASSERT_EQ(false, confCategory.itemExists(\"non-existance\"));\n}\n\nTEST(DefaultCategoryTest, getValue)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory);\n\tASSERT_EQ(0, confCategory.getValue(\"name\").compare(\"Fledge\"));\n}\n\nTEST(DefaultCategoryTestQuoted, getValueQuoted)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory_quoted);\n\tASSERT_EQ(0, confCategory.getValue(\"name\").compare(\"\\\"Fledge\\\"\"));\n}\n\nTEST(DefaultCategoryTest, getType)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory);\n\tASSERT_EQ(0, confCategory.getType(\"name\").compare(\"string\"));\n}\n\nTEST(DefaultCategoryTest, getDefault)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory);\n\tASSERT_EQ(0, confCategory.getDefault(\"name\").compare(\"Fledge\"));\n}\n\nTEST(DefaultCategoryTestQuoted, getDefaultQuoted)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory_quoted);\n\tASSERT_EQ(0, confCategory.getDefault(\"name\").compare(\"\\\"Fledge\\\"\"));\n}\n\nTEST(DefaultCategoryTest, getDescription)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory);\n\tASSERT_EQ(0, confCategory.getDescription(\"name\").compare(\"The name of this Fledge service\"));\n}\n\nTEST(DefaultCategoryTestQuoted, getDescriptionQuoted)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory_quoted);\n\tASSERT_EQ(0, confCategory.getDescription(\"name\").compare(\"The name of this \\\"Fledge\\\" service\"));\n}\n\nTEST(DefaultCategoryTestQuoted, isStringQuoted)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory_quoted);\n\tASSERT_EQ(true, 
confCategory.isString(\"name\"));\n\tASSERT_EQ(false, confCategory.isString(\"complex\"));\n}\n\nTEST(DefaultCategoryTest, isJSON)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory);\n\tASSERT_EQ(false, confCategory.isJSON(\"name\"));\n\tASSERT_EQ(true, confCategory.isJSON(\"complex\"));\n}\n\nTEST(DefaultCategoryTest, toJSON)\n{\n\tDefaultConfigCategory confCategory(\"test\", default_myCategory);\n\tconfCategory.setDescription(\"Test description\");\n\t// Only \"default\" value in the output\n\tASSERT_EQ(0, confCategory.toJSON().compare(default_json));\n}\n\nTEST(DefaultCategoryTestQuoted, toJSONQuoted)\n{\n\tDefaultConfigCategory confCategory(\"test \\\"a\\\"\", default_myCategory_quoted);\n\tconfCategory.setDescription(\"Test \\\"description\\\"\");\n\t// Only \"default\" value in the output\n\tASSERT_EQ(0, confCategory.toJSON().compare(default_json_quoted));\n}\n\nTEST(DefaultCategoryTest, default_bool_and_number_ok)\n{       \n\tDefaultConfigCategory confCategory(\"test\",\n\t\t\t\t\t   default_myCategory_number_and_boolean_items);\n\tconfCategory.setDescription(\"Test description\");\n\n\t//confCategory.checkDefaultValuesOnly();\n\tASSERT_EQ(true, confCategory.isBool(\"enable\"));\n\tASSERT_EQ(true, confCategory.isNumber(\"factor\"));\n\tASSERT_EQ(0, confCategory.getValue(\"factor\").compare(\"101\"));\n\tASSERT_EQ(0, confCategory.getDefault(\"factor\").compare(\"100\"));\n\tASSERT_EQ(0, confCategory.toJSON().compare(default_json_boolean_number));\n}\n\nTEST(CategoryTest, default_handle_type_JSON_ok)\n{\n        DefaultConfigCategory confCategory(\"test\",\n\t\t\t\t\t   default_myCategory_JSON_type_with_escaped_default);\n        confCategory.setDescription(\"Test description\");\n        ASSERT_EQ(true, confCategory.isJSON(\"filter\"));\n\n        Document arrayItem;\n        arrayItem.Parse(confCategory.getDefault(\"filter\").c_str());\n        const Value& arrayValue = arrayItem[\"pipeline\"];\n\n        
ASSERT_TRUE(arrayValue.IsArray());\n        ASSERT_TRUE(arrayValue.Size() == 2);\n        ASSERT_EQ(0, confCategory.getDefault(\"filter\").compare(default_json_array_item));\n        ASSERT_EQ(0, confCategory.toJSON().compare(default_json_type_JSON));\n}\n\nTEST(CategoryTest, default_handle_type_JSON_fail)\n{\n        try\n        {\n                DefaultConfigCategory confCategory(\"test\",\n\t\t\t\t\t\t   default_myCategory_JSON_type_without_escaped_default);\n                confCategory.setDescription(\"Test description\");\n\n                // test fails here!\n                ASSERT_TRUE(false);\n        }\n\tcatch (exception *e)\n\t{\n\t\tdelete e;\n\t\t// Test ok; exception found\n\t\tASSERT_TRUE(true);\n\t}\n        catch (...)\n        {\n                // Test ok; exception found\n                ASSERT_TRUE(true);\n        }\n}\n\n\nTEST(DefaultCategoryTest, removeItemsType)\n{\n\tDefaultConfigCategory defCategory(\"test\", myDefCategoryRemoveItems);\n\tASSERT_EQ(2, defCategory.getCount());\n\n\tdefCategory.removeItemsType(ConfigCategory::ItemType::CategoryType);\n\tASSERT_EQ(1, defCategory.getCount());\n\n}\n\n/**\n * Test special quoted chars\n */\n\nTEST(DefaultCategoryTestQuoted, toJSONQuotedSpecial)\n{\n\tDefaultConfigCategory confCategory(\"test \\\"a\\\"\", default_myCategory_quotedSpecial);\n\tconfCategory.setDescription(\"Test \\\"description\\\"\");\n\n\t// Only \"default\" value in the output\n\tASSERT_EQ(0, confCategory.toJSON().compare(default_json_quotedSpecial));\n}\n\n// Default config category with \\n in default JSON\nstring jsonLF = R\"({\"plugin\": {\"description\": \"Sinusoid Poll Plugin which implements sine wave with data points\", \"type\": \"string\", \"default\": \"sinusoid\", \"readonly\": \"true\"}, \"assetName\": {\"description\": \"Name of Asset\", \"type\": \"string\", \"default\": \"sinusoid\", \"displayName\": \"Asset name\", \"mandatory\": \"true\"}, \"writemap\": {\"description\": \"Map of tags\", \"displayName\": 
\"Tags to write\", \"type\": \"JSON\", \"default\": \"{\\n  \\\"tags\\\": [\\n    {\\n      \\\"name\\\": \\\"PLCTAG\\\",\\n      \\\"type\\\": \\\"UINT32\\\",\\n      \\\"program\\\": \\\"\\\"\\n    }\\n  ]\\n}\"}})\";\nstring jsonClear = R\"({  \"tags\": [    {      \"name\": \"PLCTAG\",      \"type\": \"UINT32\",      \"program\": \"\"    }  ]})\";\n\nTEST(DefaultCategoryTestLF, toJSONWithoutLF)\n{\n\tDefaultConfigCategory confCategory(\"testLF\", jsonLF);\n\tconfCategory.setDescription(\"Test description\");\n\n\t// Only \"default\" value in the output\n\tASSERT_EQ(0, confCategory.getDefault(\"writemap\").compare(jsonClear));\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_expression.cpp",
    "content": "#include <gtest/gtest.h>\n#include <expression.h>\n#include <string.h>\n#include <string>\n\nusing namespace std;\n\nTEST(ExpressionTest, IntColumn)\n{\nstring expected(\"{ \\\"column\\\" : \\\"c1\\\", \\\"operator\\\" : \\\"+\\\", \\\"value\\\" : 10}\");\n\n\tExpression expression(\"c1\", \"+\", 10);\n\tASSERT_EQ(expected.compare(expression.toJSON()), 0);\n}\n\nTEST(ExpressionTest, FloatColumn)\n{\nstring expected(\"{ \\\"column\\\" : \\\"c1\\\", \\\"operator\\\" : \\\"+\\\", \\\"value\\\" : 10.25}\");\n\n\tExpression expression(\"c1\", \"+\", 10.25);\n\tASSERT_EQ(expected.compare(expression.toJSON()), 0);\n}\n\nTEST(ExpressionValuesTest, IntColumns)\n{\nstring expected(\"[ { \\\"column\\\" : \\\"c1\\\", \\\"operator\\\" : \\\"+\\\", \\\"value\\\" : 1}, { \\\"column\\\" : \\\"c2\\\", \\\"operator\\\" : \\\"-\\\", \\\"value\\\" : 2} ]\");\n\n\tExpressionValues expressions;\n\texpressions.push_back(Expression(\"c1\", \"+\", 1));\n\texpressions.push_back(Expression(\"c2\", \"-\", 2));\n\tASSERT_EQ(expressions.toJSON().compare(expected), 0);\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_imageencode.cpp",
    "content": "#include <gtest/gtest.h>\n#include <reading.h>\n#include <reading_set.h>\n#include <rapidjson/document.h>\n#include <string.h>\n#include <string>\n#include <logger.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\nTEST(ImageEncodingTest, ImageRoundTrip64)\n{\n\tuint16_t *data = (uint16_t *)malloc(64 * 64 * 2);\n\tuint16_t *ptr = data;\n\tfor (int i = 0; i < 64; i++)\n\t\tfor (int j = 0; j < 64; j++)\n\t\t\t*ptr++ = 0 + i + j;\n\tDPImage  *image = new DPImage(64, 64, 16, data);\n\tDatapointValue img(image);\n\tReading reading(\"test\", new Datapoint(\"image\", img));\n\tstring json = reading.toJSON();\n\tDocument doc;\n\tdoc.Parse(json.c_str());\n\tJSONReading decoded(doc);\n\tDatapoint *dp = decoded.getDatapoint(\"image\");\n\tDPImage *image2 = dp->getData().getImage();\n\tASSERT_EQ(image->getWidth(), image2->getWidth());\n\tASSERT_EQ(image->getHeight(), image2->getHeight());\n\tASSERT_EQ(image->getDepth(), image2->getDepth());\n\tuint16_t *ptr2 = (uint16_t *)image2->getData();\n\tptr = data;\n\tfor (int i = 0; i < 64; i++)\n\t\tfor (int j = 0; j < 64; j++)\n\t\t{\n\t\t\tif (*ptr != *ptr2)\n\t\t\t{\n\t\t\t\tprintf(\"Differ at %d, %d\", i, j);\n\t\t\t}\n\t\t\tASSERT_EQ(*ptr, *ptr2);\n\t\t\tptr++;\n\t\t\tptr2++;\n\t\t}\n\tfree(data);\n}\n\nTEST(ImageEncodingTest, ImageRoundTrip65)\n{\n\tuint16_t *data = (uint16_t *)malloc(64 * 65 * 2);\n\tuint16_t *ptr = data;\n\tfor (int i = 0; i < 64; i++)\n\t\tfor (int j = 0; j < 65; j++)\n\t\t\t*ptr++ = 100 + i + j;\n\tDPImage  *image = new DPImage(64, 65, 16, data);\n\tDatapointValue img(image);\n\tReading reading(\"test\", new Datapoint(\"image\", img));\n\tstring json = reading.toJSON();\n\tDocument doc;\n\tdoc.Parse(json.c_str());\n\tJSONReading decoded(doc);\n\tDatapoint *dp = decoded.getDatapoint(\"image\");\n\tDPImage *image2 = dp->getData().getImage();\n\tASSERT_EQ(image->getWidth(), image2->getWidth());\n\tASSERT_EQ(image->getHeight(), image2->getHeight());\n\tASSERT_EQ(image->getDepth(), 
image2->getDepth());\n\tuint16_t *ptr2 = (uint16_t *)image2->getData();\n\tptr = data;\n\tfor (int i = 0; i < 64; i++)\n\t\tfor (int j = 0; j < 65; j++)\n\t\t{\n\t\t\tif (*ptr != *ptr2)\n\t\t\t{\n\t\t\t\tprintf(\"Differ at %d, %d\", i, j);\n\t\t\t}\n\t\t\tASSERT_EQ(*ptr, *ptr2);\n\t\t\tptr++;\n\t\t\tptr2++;\n\t\t}\n\tfree(data);\n}\n\nTEST(ImageEncodingTest, ImageRoundTrip66)\n{\n\tuint16_t *data = (uint16_t *)malloc(64 * 66 * 2);\n\tuint16_t *ptr = data;\n\tfor (int i = 0; i < 64; i++)\n\t\tfor (int j = 0; j < 66; j++)\n\t\t\t*ptr++ = 200 + i + j;\n\tDPImage  *image = new DPImage(64, 66, 16, data);\n\tDatapointValue img(image);\n\tReading reading(\"test\", new Datapoint(\"image\", img));\n\tstring json = reading.toJSON();\n\tDocument doc;\n\tdoc.Parse(json.c_str());\n\tJSONReading decoded(doc);\n\tDatapoint *dp = decoded.getDatapoint(\"image\");\n\tDPImage *image2 = dp->getData().getImage();\n\tASSERT_EQ(image->getWidth(), image2->getWidth());\n\tASSERT_EQ(image->getHeight(), image2->getHeight());\n\tASSERT_EQ(image->getDepth(), image2->getDepth());\n\tuint16_t *ptr2 = (uint16_t *)image2->getData();\n\tptr = data;\n\tfor (int i = 0; i < 64; i++)\n\t\tfor (int j = 0; j < 66; j++)\n\t\t{\n\t\t\tif (*ptr != *ptr2)\n\t\t\t{\n\t\t\t\tprintf(\"Differ at %d, %d\", i, j);\n\t\t\t}\n\t\t\tASSERT_EQ(*ptr, *ptr2);\n\t\t\tptr++;\n\t\t\tptr2++;\n\t\t}\n\tfree(data);\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_insert_value.cpp",
    "content": "#include <gtest/gtest.h>\n#include <insert.h>\n#include <string.h>\n#include <string>\n#include <rapidjson/document.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\nTEST(InsertValueTest, IntColumn)\n{\nstring expected(\"\\\"c1\\\" : 10\");\n\n\tInsertValue value(\"c1\", 10);\n\tASSERT_EQ(expected.compare(value.toJSON()), 0);\n}\n\nTEST(InsertValueTest, LongColumn)\n{\nstring expected(\"\\\"c1\\\" : 12345678\");\n\n\tInsertValue value(\"c1\", 12345678L);\n\tASSERT_EQ(expected.compare(value.toJSON()), 0);\n}\n\nTEST(InsertValueTest, NumberColumn)\n{\nstring expected(\"\\\"c1\\\" : 123.4\");\n\n\tInsertValue value(\"c1\", 123.4);\n\tASSERT_EQ(expected.compare(value.toJSON()), 0);\n}\n\nTEST(InsertValueTest, StringColumn)\n{\nstring expected(\"\\\"c1\\\" : \\\"hello\\\"\");\n\n\tInsertValue value(\"c1\", \"hello\");\n\tASSERT_EQ(expected.compare(value.toJSON()), 0);\n}\n\nTEST(InsertValuesTest, IntColumns)\n{\nstring expected(\"{ \\\"c1\\\" : 1, \\\"c2\\\" : 2 }\");\n\n\tInsertValues values;\n\tvalues.push_back(InsertValue(\"c1\", 1));\n\tvalues.push_back(InsertValue(\"c2\", 2));\n\tASSERT_EQ(expected.compare(values.toJSON()), 0);\n}\n\nTEST(InsertValueTest, JSONColumn)\n{\nstring expected(\"\\\"c1\\\" : {\\\"hello\\\":\\\"world\\\"}\");\n\n\tDocument doc;\n\tdoc.Parse(\"{\\\"hello\\\":\\\"world\\\"}\");\n\tInsertValue value(\"c1\", doc);\n\tASSERT_EQ(expected.compare(value.toJSON()), 0);\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_json_reading.cpp",
    "content": "\n#include <gtest/gtest.h>\n#include <reading_set.h>\n#include <string.h>\n#include <string>\n#include <rapidjson/document.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\nconst char *inputData = \"{ \\\"id\\\": 17651, \\\"asset_code\\\": \\\"luxometer\\\", \"\n            \"\\\"reading\\\": { \\\"lux\\\": 76204.524 }, \"\n            \"\\\"user_ts\\\": \\\"2017-09-21 15:00:08.532958\\\", \"\n            \"\\\"ts\\\": \\\"2017-09-22 14:47:18.872708\\\" }\";\n\nTEST(JSONReadingTest, ParseReading)\n{\n\tDocument doc;\n\tdoc.Parse(inputData);\n\tJSONReading reading(doc);\n\tstring json = reading.toJSON();\n\tASSERT_NE(json.find(string(\"\\\"asset_code\\\" : \\\"luxmeter\\\"\")), 0);\n\tASSERT_NE(json.find(string(\"\\\"reading\\\" : { \\\"lux\\\" : \\\"76204.524\\\" }\")), 0);\n\tASSERT_NE(json.find(string(\"\\\"user_ts\\\" : \\\"2017-09-22 14:47:18.872708\\\"\")), 0);\n}\n\nTEST(JSONReadingTest, CopyReading)\n{\n\tDocument doc;\n\tdoc.Parse(inputData);\n\tJSONReading reading(doc);\n\t// Copy Reading\tinto a new variable\n\tReading copyReading(reading);\n\n\t// Get JSON string of both reading objects\n\tstring json = reading.toJSON();\n\tstring copyJson = copyReading.toJSON();\n\n\tASSERT_NE(json.find(string(\"\\\"asset_code\\\":\\\"luxometer\\\"\")), std::string::npos);\n\tASSERT_NE(json.find(string(\"\\\"reading\\\":{\\\"lux\\\":76204.524}\")), std::string::npos);\n\tASSERT_NE(json.find(string(\"\\\"user_ts\\\":\")), std::string::npos);\n\n\t// Check JSON object as string\n\tASSERT_EQ(json, copyJson);\n\n\t// Check reading id is the same: copy is ok\n\tASSERT_EQ(reading.getId(), copyReading.getId());\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_json_utils.cpp",
    "content": "#include <gtest/gtest.h>\n#include <string>\n#include <vector>\n#include \"json_utils.h\"\n\nusing namespace std;\n\nconst char *json_ok =  \"{\" \\\n\t\t\t\t\"\\\"errors400\\\":  \"\\\n\t\t\t\t        \"[\"\\\n\t\t\t\t                \"\\\"Redefinition of the type with the same ID is not allowed\\\", \"\\\n\t\t\t\t\t\t\"\\\"Invalid value type for the property\\\" \"\\\n\t\t\t\t        \"]\"\\\n\t\t\t\t\", \" \\\n\t\t\t\t\"\\\"type\\\": \\\"JSON\\\", \" \\\n\t\t\t\t\"\\\"order\\\": \\\"17\\\" ,\"  \\\n\t\t\t\t\"\\\"readonly\\\": \\\"true\\\" \" \\\n\t\t\t\"} \";\n\nconst char *json_bad_not_array =  \"{\" \\\n\t\t\t\t\"\\\"errors400\\\":  \\\"text\\\", \" \\\n\t\t\t\t\"\\\"type\\\": \\\"JSON\\\", \" \\\n\t\t\t\t\"\\\"order\\\": \\\"17\\\" ,\"  \\\n\t\t\t\t\"\\\"readonly\\\": \\\"true\\\" \" \\\n\t\t\t\"} \";\n\nconst char *json_bad =  \"{\" \\\n\t\t\t\t\"\\\"errors400\\\":  \"\\\n\t\t\t\t        \"[\"\\\n\t\t\t\t                \"\\\"Redefinition of the type with the same ID is not allowed\\\", \"\\\n\t\t\t\t\t\t\"\\\"Invalid value type for the property\\\" \"\\\n\t\t\t\t        \"]\"\\\n\t\t\t\t\", \" \\\n\t\t\t\t\"xxxx\";\n\nTEST(JsonToVectorString, JSONok)\n{\n\tstd::vector<std::string> stringJSON;\n\tbool result;\n\n\tresult = JSONStringToVectorString(stringJSON,json_ok,\"errors400\");\n\n\tASSERT_EQ(result, true);\n}\n\nTEST(JsonToVectorString, KeyNotExist)\n{\n\tstd::vector<std::string> stringJSON;\n\tbool result;\n\n\tresult = JSONStringToVectorString(stringJSON,json_ok,\"xxx\");\n\n\tASSERT_EQ(result, false);\n}\n\nTEST(JsonToVectorString, NotArray)\n{\n\tstd::vector<std::string> stringJSON;\n\tbool result;\n\n\tresult = JSONStringToVectorString(stringJSON,json_bad_not_array,\"errors400\");\n\n\tASSERT_EQ(result, false);\n}\n\n\nTEST(JsonToVectorString, JSONbad)\n{\n\tstd::vector<std::string> stringJSON;\n\tbool result;\n\n\tresult = JSONStringToVectorString(stringJSON,json_bad,\"errors400\");\n\n\tASSERT_EQ(result, 
false);\n}\n\nTEST(JsonStringUnescape, LeadingAndTrailingDoubleQuote)\n{\n\tstring json = R\"(\"value\")\";\n\tASSERT_EQ(\"value\", JSONunescape(json));\n}\n\nTEST(JsonStringUnescape, UnescapedDoubleQuote)\n{\n\tstring json = R\"({\\\"key\\\":\\\"value\\\"})\";\n\tASSERT_EQ(R\"({\"key\":\"value\"})\", JSONunescape(json));\n}\n\nTEST(JsonStringUnescape, TwoTimesUnescapedDoubleQuote)\n{\n\tstring json = R\"({\\\\\"key\\\\\":\\\\\"value\\\\\"})\";\n\tASSERT_EQ(R\"({\\\"key\\\":\\\"value\\\"})\", JSONunescape(json));\n}\nTEST(JsonStringUnescape, ThreeTimesUnescapedDoubleQuote)\n{\n\tstring json = R\"({\\\"key3\\\":\\\"_-_\\\\\\\"value3\\\\\\\"_-_\\\"})\";\n\tASSERT_EQ(R\"({\"key3\":\"_-_\\\"value3\\\"_-_\"})\", JSONunescape(json));\n}\nTEST(JsonStringUnescape, TwoTimesQuotedValue)\n{\n\tstring json = R\"({\\\"key4\\\":\\\"\\\\\"value4\\\\\"\\\"})\";\n\tASSERT_EQ(R\"({\"key4\":\"\\\"value4\\\"\"})\", JSONunescape(json));\n}\nTEST(JsonStringUnescape, DoubleQuotedValue)\n{\n\tstring json = R\"({\\\"city\\\":\\\"\\\\\\\"Milan\\\\\\\"\\\"})\";\n\tASSERT_EQ(R\"({\"city\":\"\\\"Milan\\\"\"})\", JSONunescape(json));\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_log_interceptor.cpp",
    "content": "/*\n * unit tests FOGL-9560 : Add log interceptor to C++ Logger class\n *\n * Copyright (c) 2025 Dianomic Systems, Inc.\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Devki Nandan Ghildiyal\n */\n#include <gtest/gtest.h>\n#include <logger.h>\n#include <mutex>\n#include <condition_variable>\nusing namespace std;\n\nLogger* log1 = Logger::getLogger();\n\n\nbool log_interceptor_executed = false;\n\nstruct LogInterceptorData {\n    std::mutex cond_mtx;\n    std::condition_variable cv1;\n    std::string intercepted_message;\n};\n\n// Callback Interceptor functions\nvoid errorInterceptor(Logger::LogLevel level, const std::string& message, void* userData)\n{\n    std::unique_lock<std::mutex> lock(((LogInterceptorData*)userData)->cond_mtx);\n    ((LogInterceptorData*)userData)->intercepted_message = \"INTERCEPTED ERROR : \" + message;\n    log_interceptor_executed = true;\n    ((LogInterceptorData*)userData)->cv1.notify_one();\n}\n\nvoid warningInterceptor(Logger::LogLevel level, const std::string& message, void* userData)\n{\n    std::unique_lock<std::mutex> lock(((LogInterceptorData*)userData)->cond_mtx);\n    ((LogInterceptorData*)userData)->intercepted_message= \"INTERCEPTED WARNING : \" + message;\n    log_interceptor_executed = true;\n    ((LogInterceptorData*)userData)->cv1.notify_one();\n}\n\nvoid infoInterceptor(Logger::LogLevel level, const std::string& message, void* userData)\n{\n    std::unique_lock<std::mutex> lock(((LogInterceptorData*)userData)->cond_mtx);\n    ((LogInterceptorData*)userData)->intercepted_message = \"INTERCEPTED INFO : \" + message;\n    log_interceptor_executed = true;\n    ((LogInterceptorData*)userData)->cv1.notify_one();\n}\n\nvoid debugInterceptor_1(Logger::LogLevel level, const std::string& message, void* userData)\n{\n   std::unique_lock<std::mutex> lock(((LogInterceptorData*)userData)->cond_mtx);\n   ((LogInterceptorData*)userData)->intercepted_message= \"INTERCEPTED DEBUG #1 : \" + message;\n   
log_interceptor_executed = true;\n   ((LogInterceptorData*)userData)->cv1.notify_one();\n}\n\nvoid debugInterceptor_2(Logger::LogLevel level, const std::string& message, void* userData)\n{\n    std::unique_lock<std::mutex> lock(((LogInterceptorData*)userData)->cond_mtx);\n    ((LogInterceptorData*)userData)->intercepted_message = \"INTERCEPTED DEBUG #2 : \" + message;\n    log_interceptor_executed = true;\n    ((LogInterceptorData*)userData)->cv1.notify_one();\n}\n\nvoid fatalInterceptor(Logger::LogLevel level, const std::string& message, void* userData)\n{\n    std::unique_lock<std::mutex> lock(((LogInterceptorData*)userData)->cond_mtx);\n    ((LogInterceptorData*)userData)->intercepted_message = \"INTERCEPTED FATAL : \" + message;\n    log_interceptor_executed = true;\n    ((LogInterceptorData*)userData)->cv1.notify_one();\n}\n\n// Test Case : Check registration and unregistration of Interceptor\nTEST(TEST_LOG_INTERCEPTOR, REGISTER_UNREGISTER)\n{\n    // LogLevel Debug\n    log1->setMinLevel(\"debug\");\n    Logger::LogLevel level1 = Logger::LogLevel::DEBUG; \n    LogInterceptorData userData;\n    \n    \n    EXPECT_TRUE(log1->registerInterceptor(level1, debugInterceptor_1, &userData));\n    {\n        std::unique_lock<std::mutex> lock1(userData.cond_mtx);\n        log1->debug(\"Testing REGISTER_UNREGISTER\");\n        userData.cv1.wait(lock1, [] { return log_interceptor_executed; });\n        ASSERT_EQ(log_interceptor_executed,true);\n        log_interceptor_executed = false; // Reset the flag for next test case\n        ASSERT_EQ(userData.intercepted_message, \"INTERCEPTED DEBUG #1 : DEBUG: Testing REGISTER_UNREGISTER\");\n    }\n    EXPECT_TRUE(log1->unregisterInterceptor(level1, debugInterceptor_1));\n}\n\n\n// Test Case : Check registration with null callback\nTEST(TEST_LOG_INTERCEPTOR, REGISTER_NULL_CALLBACK)\n{\n    // LogLevel Debug\n    log1->setMinLevel(\"debug\");\n    Logger::LogLevel level1 = Logger::LogLevel::DEBUG; \n    log1->debug(\"Register NULL 
Callback\");\n    EXPECT_FALSE(log1->registerInterceptor(level1, nullptr, nullptr)); // Interceptor is not registered with null callback\n}\n\n// Test Case: Unregister Non-Registered Interceptor\nTEST(TEST_LOG_INTERCEPTOR, UNREGISTER_NON_REGISTERED)\n{\n    // LogLevel Debug\n    log1->setMinLevel(\"debug\");\n    Logger::LogLevel level = Logger::LogLevel::DEBUG; \n    EXPECT_FALSE(log1->unregisterInterceptor(level, debugInterceptor_1));  // Trying to unregister before it's registered\n}\n\n\n// Test Case: Multiple Interceptors for the Same Log Level\nTEST(TEST_LOG_INTERCEPTOR, MULTIPLE_INTERCEPTORS_SAME_LEVEL)\n{\n    LogInterceptorData userData1;\n    LogInterceptorData userData2;\n    // LogLevel Debug\n    log1->setMinLevel(\"debug\");\n    Logger::LogLevel level = Logger::LogLevel::DEBUG; \n    EXPECT_TRUE(log1->registerInterceptor(level, debugInterceptor_1, &userData1));\n    EXPECT_TRUE(log1->registerInterceptor(level, debugInterceptor_2, &userData2));\n    {\n        std::unique_lock<std::mutex> lock1(userData1.cond_mtx);\n        std::unique_lock<std::mutex> lock2(userData2.cond_mtx);\n        log1->debug(\"Multiple interceptors test\");\n        userData1.cv1.wait(lock1, [] { return log_interceptor_executed; });\n        ASSERT_EQ(log_interceptor_executed,true);\n        log_interceptor_executed = false; // Reset the flag for next test case\n        ASSERT_EQ(userData1.intercepted_message, \"INTERCEPTED DEBUG #1 : DEBUG: Multiple interceptors test\");\n\n        userData2.cv1.wait(lock2, [] { return log_interceptor_executed; });\n        ASSERT_EQ(log_interceptor_executed,true);\n        log_interceptor_executed = false; // Reset the flag for next test case\n        ASSERT_EQ(userData2.intercepted_message, \"INTERCEPTED DEBUG #2 : DEBUG: Multiple interceptors test\");\n    }\n    EXPECT_TRUE(log1->unregisterInterceptor(level, debugInterceptor_1));\n    EXPECT_TRUE(log1->unregisterInterceptor(level, debugInterceptor_2));\n}\n\n\n// Test Case : Check 
multiple registration for same log level\nTEST(TEST_LOG_INTERCEPTOR, MULTIPLE_REGISTER)\n{\n    LogInterceptorData userData1;\n    // LogLevel Debug\n    log1->setMinLevel(\"debug\");\n    Logger::LogLevel level1 = Logger::LogLevel::DEBUG; \n    EXPECT_TRUE(log1->registerInterceptor(level1, debugInterceptor_1, &userData1));\n    {\n        std::unique_lock<std::mutex> lock1(userData1.cond_mtx);\n        log1->debug(\"Register Debug Logger\");\n        userData1.cv1.wait(lock1, [] { return log_interceptor_executed; });\n        ASSERT_EQ(log_interceptor_executed,true);\n        log_interceptor_executed = false; // Reset the flag for next test case\n        ASSERT_EQ(userData1.intercepted_message, \"INTERCEPTED DEBUG #1 : DEBUG: Register Debug Logger\");\n    }\n    EXPECT_TRUE(log1->unregisterInterceptor(level1, debugInterceptor_1));\n    LogInterceptorData userData2;\n    EXPECT_TRUE(log1->registerInterceptor(level1, debugInterceptor_2, &userData2));\n    {\n        std::unique_lock<std::mutex> lock2(userData2.cond_mtx);\n        log1->debug(\"Register Debug Logger\");\n        userData2.cv1.wait(lock2, [] { return log_interceptor_executed; });\n        ASSERT_EQ(log_interceptor_executed,true);\n        log_interceptor_executed = false; // Reset the flag for next test case\n        ASSERT_EQ(userData2.intercepted_message, \"INTERCEPTED DEBUG #2 : DEBUG: Register Debug Logger\");\n    }\n    EXPECT_TRUE(log1->unregisterInterceptor(level1, debugInterceptor_2));\n}\n\n// Test Case : Check multiple unregister \nTEST(TEST_LOG_INTERCEPTOR, MULTIPLE_UNREGISTER)\n{\n    LogInterceptorData userData1;\n    // LogLevel Debug\n    log1->setMinLevel(\"debug\");\n    Logger::LogLevel level1 = Logger::LogLevel::DEBUG; \n    EXPECT_TRUE(log1->registerInterceptor(level1, debugInterceptor_1, &userData1));\n    {\n        std::unique_lock<std::mutex> lock1(userData1.cond_mtx);\n        log1->debug(\"Testing First UNREGISTER\");\n        userData1.cv1.wait(lock1, [] { return 
log_interceptor_executed; });\n        ASSERT_EQ(log_interceptor_executed,true);\n        log_interceptor_executed = false; // Reset the flag for next test case\n        ASSERT_EQ(userData1.intercepted_message, \"INTERCEPTED DEBUG #1 : DEBUG: Testing First UNREGISTER\");\n    }\n    EXPECT_TRUE(log1->unregisterInterceptor(level1, debugInterceptor_1));\n    EXPECT_FALSE(log1->unregisterInterceptor(level1, debugInterceptor_1)); // return false because interceptor already unregistered\n\n}\n\n// Test Case : Check registration and unregistration of Interceptor for all the supported log levels\nTEST(TEST_LOG_INTERCEPTOR, ALL_LOG_LEVELS)\n{\n    LogInterceptorData userData1;\n\n    // LogLevel Error\n    log1->setMinLevel(\"error\");\n    Logger::LogLevel level1 = Logger::LogLevel::ERROR; \n    EXPECT_TRUE(log1->registerInterceptor(level1, errorInterceptor, &userData1));\n    {\n        std::unique_lock<std::mutex> lock1(userData1.cond_mtx);\n        log1->error(\"Testing error interceptor\");\n        userData1.cv1.wait(lock1, [] { return log_interceptor_executed; });\n        ASSERT_EQ(log_interceptor_executed,true);\n        log_interceptor_executed = false; // Reset the flag for next test case\n        ASSERT_EQ(userData1.intercepted_message, \"INTERCEPTED ERROR : ERROR: Testing error interceptor\");\n    }\n    EXPECT_TRUE(log1->unregisterInterceptor(level1, errorInterceptor));\n\n    // LogLevel Warning\n    LogInterceptorData userData2;\n    log1->setMinLevel(\"warning\");\n    Logger::LogLevel level2 = Logger::LogLevel::WARNING; \n    EXPECT_TRUE(log1->registerInterceptor(level2, warningInterceptor, &userData2));\n    {\n        std::unique_lock<std::mutex> lock1(userData2.cond_mtx);\n        log1->warn(\"Testing warning interceptor\");\n        userData2.cv1.wait(lock1, [] { return log_interceptor_executed; });\n        ASSERT_EQ(log_interceptor_executed,true);\n        log_interceptor_executed = false; // Reset the flag for next test case\n        
ASSERT_EQ(userData2.intercepted_message, \"INTERCEPTED WARNING : WARNING: Testing warning interceptor\");\n    }\n    EXPECT_TRUE(log1->unregisterInterceptor(level2, warningInterceptor));\n\n    // LogLevel Info\n    LogInterceptorData userData3;\n    log1->setMinLevel(\"info\");\n    Logger::LogLevel level3 = Logger::LogLevel::INFO; \n    \n    EXPECT_TRUE(log1->registerInterceptor(level3, infoInterceptor, &userData3));\n    {\n        std::unique_lock<std::mutex> lock1(userData3.cond_mtx);\n        log1->info(\"Testing info interceptor\");\n        userData3.cv1.wait(lock1, [] { return log_interceptor_executed; });\n        ASSERT_EQ(log_interceptor_executed,true);\n        log_interceptor_executed = false; // Reset the flag for next test case\n        ASSERT_EQ(userData3.intercepted_message, \"INTERCEPTED INFO : INFO: Testing info interceptor\");\n    }\n    EXPECT_TRUE(log1->unregisterInterceptor(level3, infoInterceptor));\n\n    // LogLevel Debug\n    LogInterceptorData userData4;\n    log1->setMinLevel(\"debug\");\n    Logger::LogLevel level4 = Logger::LogLevel::DEBUG; \n    EXPECT_TRUE(log1->registerInterceptor(level4, debugInterceptor_1, &userData4));\n    {\n        std::unique_lock<std::mutex> lock1(userData4.cond_mtx);\n        log1->debug(\"Testing debug interceptor\");\n        userData4.cv1.wait(lock1, [] { return log_interceptor_executed; });\n        ASSERT_EQ(log_interceptor_executed,true);\n        log_interceptor_executed = false; // Reset the flag for next test case\n        ASSERT_EQ(userData4.intercepted_message, \"INTERCEPTED DEBUG #1 : DEBUG: Testing debug interceptor\");\n    }\n    EXPECT_TRUE(log1->unregisterInterceptor(level4, debugInterceptor_1));\n\n    // LogLevel Debug takes care of FATAL error as well\n    LogInterceptorData userData5;\n    log1->setMinLevel(\"debug\");\n    Logger::LogLevel level5 = Logger::LogLevel::FATAL; \n    EXPECT_TRUE(log1->registerInterceptor(level5, fatalInterceptor, &userData5));\n    {\n        
std::unique_lock<std::mutex> lock1(userData5.cond_mtx);\n        log1->fatal(\"Testing fatal interceptor\");\n        userData5.cv1.wait(lock1, [] { return log_interceptor_executed; });\n        ASSERT_EQ(log_interceptor_executed,true);\n        log_interceptor_executed = false; // Reset the flag for next test case\n        ASSERT_EQ(userData5.intercepted_message, \"INTERCEPTED FATAL : FATAL: Testing fatal interceptor\");\n    }\n    EXPECT_TRUE(log1->unregisterInterceptor(level5, fatalInterceptor));\n}\n\n"
  },
  {
    "path": "tests/unit/C/common/test_purge_result.cpp",
    "content": "#include <gtest/gtest.h>\n#include <purge_result.h>\n#include <string>\n\nusing namespace std;\n\nTEST(PurgeResult, Values)\n{\nconst char *input = \"{ \\\"removed\\\" : 1234, \\\"unsentPurged\\\" : 0, \"\n\t\t\"\\\"unsentRetained\\\" : 100, \\\"readings\\\" : 1000 }\";\n\n\tPurgeResult purgeResult(input);\n\tASSERT_EQ(1234, purgeResult.getRemoved());\n\tASSERT_EQ(0, purgeResult.getUnsentPurged());\n\tASSERT_EQ(100, purgeResult.getUnsentRetained());\n\tASSERT_EQ(1000, purgeResult.getRemaining());\n}\n\nTEST(PurgeResult, UnsentPurged)\n{\nconst char *input = \"{ \\\"removed\\\" : 1234, \\\"unsentPurged\\\" : 100, \"\n\t\t\"\\\"unsentRetained\\\" : 0, \\\"readings\\\" : 1000 }\";\n\n\tPurgeResult purgeResult(input);\n\tASSERT_EQ(1234, purgeResult.getRemoved());\n\tASSERT_EQ(100, purgeResult.getUnsentPurged());\n\tASSERT_EQ(0, purgeResult.getUnsentRetained());\n\tASSERT_EQ(1000, purgeResult.getRemaining());\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_python_reading.cpp",
    "content": "#include <gtest/gtest.h>\n#include <pythonreading.h>\n#include <string.h>\n#include <string>\n#include <logger.h>\n#include <pyruntime.h>\n\nusing namespace std;\n\nnamespace {\n\nconst char *script = R\"(\ndef count(arg):\n    readings =  arg[\"readings\"]\n    return len(readings)\n\ndef element(arg, key):\n    readings =  arg[\"readings\"]\n    return readings[key]\n\ndef assetCode(arg):\n    return arg[\"asset\"]\n\ndef returnIt(arg):\n    return arg\n\ndef isDict(arg):\n    return isinstance(arg, dict)\n\ndef setAsset(arg, name):\n    arg[\"asset\"] = name\n    return arg\n\ndef array_element_0(arg, key):\n    readings = arg[\"readings\"]\n    arr = readings[key];\n    return arr[0]\n\ndef array_swap(arg, key):\n    readings = arg[\"readings\"]\n    arr = readings[key];\n    tmp = arr[0]\n    arr[0] = arr[1]\n    arr[1] = tmp\n    return arg\n\ndef image_swap(arg, key):\n    readings = arg[\"readings\"]\n    img = readings[key];\n    return arg\n\ndef row_swap(arg, key):\n    readings = arg[\"readings\"]\n    a2d = readings[key];\n    newlist = [a2d[1], a2d[0]]\n    readings[key] = newlist\n    return arg\n)\";\n\nclass  PythonReadingTest : public testing::Test {\n protected:\n\tvoid SetUp() override\n\t{\n\t\tm_python = PythonRuntime::getPythonRuntime();\n\t}\n\n\tvoid TearDown() override\n\t{\n\t}\n\n\n   public:\n\tPythonRuntime *m_python;\n\n\tvoid logErrorMessage(const char *name)\n\t{\n\t\tPyObject* type;\n\t\tPyObject* value;\n\t\tPyObject* traceback;\n\n\n\t\tPyErr_Fetch(&type, &value, &traceback);\n\t\tPyErr_NormalizeException(&type, &value, &traceback);\n\n\t\tPyObject* str_exc_value = PyObject_Repr(value);\n\t\tPyObject* pyExcValueStr = PyUnicode_AsEncodedString(str_exc_value, \"utf-8\", \"Error ~\");\n\t\tconst char* pErrorMessage = value ?\n\t\t\t\t\t    PyBytes_AsString(pyExcValueStr) :\n\t\t\t\t\t    \"no error description.\";\n\t\tLogger::getLogger()->fatal(\"logErrorMessage: %s: Error '%s'\", name, pErrorMessage);\n\t\t\n\t\t// 
Check for numpy/pandas import errors\n\t\tconst char *err1 = \"implement_array_function method already has a docstring\";\n\t\tconst char *err2 = \"cannot import name 'check_array_indexer' from 'pandas.core.indexers'\";\n\n\t\t\n\t\tstd::string fcn = \"\";\n\t\tfcn += \"def get_pretty_traceback(exc_type, exc_value, exc_tb):\\n\";\n\t\tfcn += \"    import sys, traceback\\n\";\n\t\tfcn += \"    lines = []\\n\"; \n\t\tfcn += \"    lines = traceback.format_exception(exc_type, exc_value, exc_tb)\\n\";\n\t\tfcn += \"    output = '\\\\n'.join(lines)\\n\";\n\t\tfcn += \"    return output\\n\";\n\n\t\tPyRun_SimpleString(fcn.c_str());\n\t\tPyObject* mod = PyImport_ImportModule(\"__main__\");\n\t\tif (mod != NULL) {\n\t\t\tPyObject* method = PyObject_GetAttrString(mod, \"get_pretty_traceback\");\n\t\t\tif (method != NULL) {\n\t\t\t\tPyObject* outStr = PyObject_CallObject(method, Py_BuildValue(\"OOO\", type, value, traceback));\n\t\t\t\tif (outStr != NULL) {\n\t\t\t\t\tPyObject* tmp = PyUnicode_AsASCIIString(outStr);\n\t\t\t\t\tif (tmp != NULL) {\n\t\t\t\t\t\tstd::string pretty = PyBytes_AsString(tmp);\n\t\t\t\t\t\tLogger::getLogger()->fatal(\"%s\", pretty.c_str());\n\t\t\t\t\t\tLogger::getLogger()->printLongString(pretty.c_str());\n\t\t\t\t\t}\n\t\t\t\t\tPy_CLEAR(tmp);\n\t\t\t\t}\n\t\t\t\tPy_CLEAR(outStr);\n\t\t\t}\n\t\t\tPy_CLEAR(method);\n\t\t}\n\n\t\t// Reset error\n\t\tPyErr_Clear();\n\n\t\t// Remove references\n\t\tPy_CLEAR(type);\n\t\tPy_CLEAR(value);\n\t\tPy_CLEAR(traceback);\n\t\tPy_CLEAR(str_exc_value);\n\t\tPy_CLEAR(pyExcValueStr);\n\t\tPy_CLEAR(mod);\n\t}\n\n\tPyObject *callPythonFunc(const char *name, PyObject *arg)\n\t{\n\t\tPyObject *rval = NULL;\n\n\t\tm_python->execute(script);\n\t\trval = m_python->call(name, \"(O)\", arg);\n\t\treturn rval;\n\t}\n\n\tPyObject *callPythonFunc2(const char *name, PyObject *arg1, PyObject *arg2)\n\t{\n\t\tPyObject *rval = NULL;\n\n\t\tm_python->execute(script);\n\t\trval = m_python->call(name, \"OO\", arg1, arg2);\n\t\treturn 
rval;\n\t}\n};\n\nTEST_F(PythonReadingTest, SimpleSizeLong)\n{\n\tlong i = 1234;\n\tDatapointValue value(i);\n\tReading reading(\"test\", new Datapoint(\"long\", value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *obj = callPythonFunc(\"count\", pyReading);\n\tlong rval = PyLong_AsLong(obj);\n\tPyGILState_Release(state);\n\tEXPECT_EQ(rval, 1);\n}\n\nTEST_F(PythonReadingTest, IsDict)\n{\n\tlong i = 1234;\n\tDatapointValue value(i);\n\tReading reading(\"test\", new Datapoint(\"long\", value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tEXPECT_EQ(PyDict_Check(pyReading), true);\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, PyIsDict)\n{\n\tlong i = 1234;\n\tDatapointValue value(i);\n\tReading reading(\"test\", new Datapoint(\"long\", value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tEXPECT_EQ(PyDict_Check(pyReading), true);\n\tPyObject *obj = callPythonFunc(\"isDict\", pyReading);\n\tif (obj)\n\t{\n\t\tint truth = PyObject_IsTrue(obj);\n\t\tEXPECT_EQ(truth, 1);\n\t}\n\telse\n\t\tEXPECT_STREQ(\"Expected object to be returned\", \"\");\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, Py2IsDict)\n{\n\tlong i = 1234;\n\tDatapointValue value(i);\n\tReading reading(\"test\", new Datapoint(\"long\", value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tEXPECT_EQ(PyDict_Check(pyReading), true);\n\tPyObject *obj = callPythonFunc(\"returnIt\", pyReading);\n\tif (obj)\n\t{\n\t\tEXPECT_EQ(PyDict_Check(obj), true);\n\t}\n\telse\n\t\tEXPECT_STREQ(\"Expected object to be returned\", \"\"); // Fail with a readable message instead of EXPECT_EQ(true, false)\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, SimpleLong)\n{\n\tlong i = 1234;\n\tDatapointValue value(i);\n\tReading reading(\"test\", new Datapoint(\"long\", 
value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tEXPECT_EQ(PyDict_Check(pyReading), true);\n\tPyObject *element = PyUnicode_FromString(\"long\");\n\tPyObject *obj = callPythonFunc2(\"element\", pyReading, element);\n\tEXPECT_EQ(PyLong_Check(obj), true);\n\tlong rval = PyLong_AsLong(obj);\n\tEXPECT_EQ(rval, 1234);\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, SimpleDouble)\n{\n\tdouble i = 1234.5;\n\tDatapointValue value(i);\n\tReading reading(\"test\", new Datapoint(\"double\", value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tEXPECT_EQ(PyDict_Check(pyReading), true);\n\tPyObject *element = PyUnicode_FromString(\"double\");\n\tPyObject *obj = callPythonFunc2(\"element\", pyReading, element);\n\tEXPECT_EQ(PyFloat_Check(obj), true);\n\tdouble rval = PyFloat_AS_DOUBLE(obj);\n\tEXPECT_EQ(rval, 1234.5);\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, DictCheck)\n{\n\tlong i = 1234;\n\tDatapointValue value(i);\n\tReading reading(\"test\", new Datapoint(\"long\", value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tEXPECT_EQ(PyDict_Check(pyReading), true);\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, SimpleSizeString)\n{\n\tDatapointValue value(\"just a string\");\n\tReading reading(\"test\", new Datapoint(\"str\", value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *obj = callPythonFunc(\"count\", pyReading);\n\tlong rval = PyLong_AsLong(obj);\n\tEXPECT_EQ(rval, 1);\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, SimpleString)\n{\n\tDatapointValue value(\"just a string\");\n\tReading reading(\"test\", new Datapoint(\"str\", value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject 
*pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *element = PyUnicode_FromString(\"str\");\n\tPyObject *obj = callPythonFunc2(\"element\", pyReading, element);\n\tif (obj)\n\t{\n\t\tEXPECT_EQ(PyUnicode_Check(obj), true);\n\t\tconst char *rval = PyUnicode_AsUTF8(obj);\n\t\tEXPECT_STREQ(rval, \"just a string\");\n\t}\n\telse\n\t{\n\t\tEXPECT_STREQ(\"Expected a string object\", \"\");\n\t}\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, AssetCode)\n{\n\tDatapointValue value(\"just a string\");\n\tReading reading(\"testAsset\", new Datapoint(\"str\", value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *obj = callPythonFunc(\"assetCode\", pyReading);\n\tif (obj)\n\t{\n\t\tEXPECT_EQ(PyUnicode_Check(obj), true);\n\t\tconst char *rval = PyUnicode_AsUTF8(obj);\n\t\tEXPECT_STREQ(rval, \"testAsset\");\n\t}\n\telse\n\t{\n\t\tEXPECT_STREQ(\"Expected a string object\", \"\");\n\t}\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, TwoDataPoints)\n{\n\tvector<Datapoint *> values;\n\tDatapointValue value(\"just a string\");\n\tvalues.push_back(new Datapoint(\"s1\", value));\n\tvalues.push_back(new Datapoint(\"s2\", value));\n\tReading reading(\"test\", values);\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *obj = callPythonFunc(\"count\", pyReading);\n\tlong rval = PyLong_AsLong(obj);\n\tPyGILState_Release(state);\n\tEXPECT_EQ(rval, 2);\n}\n\nTEST_F(PythonReadingTest, TwoDifferentDataPoints)\n{\n\tvector<Datapoint *> values;\n\tDatapointValue v1(\"just a string\");\n\tDatapointValue v2((long)12345678);\n\tvalues.push_back(new Datapoint(\"s\", v1));\n\tvalues.push_back(new Datapoint(\"l\", v2));\n\tReading reading(\"test\", values);\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *obj = 
callPythonFunc(\"count\", pyReading);\n\tlong rval = PyLong_AsLong(obj);\n\tEXPECT_EQ(rval, 2);\n\tPyObject *element = PyUnicode_FromString(\"s\");\n\tobj = callPythonFunc2(\"element\", pyReading, element);\n\tEXPECT_EQ(PyUnicode_Check(obj), true);\n\tconst char *sval = PyUnicode_AsUTF8(obj);\n\tEXPECT_STREQ(sval, \"just a string\");\n\telement = PyUnicode_FromString(\"l\");\n\tobj = callPythonFunc2(\"element\", pyReading, element);\n\trval = PyLong_AsLong(obj);\n\tPyGILState_Release(state);\n\tEXPECT_EQ(rval, 12345678);\n}\n\nTEST_F(PythonReadingTest, TwoDataPointsFetchString1)\n{\n\tvector<Datapoint *> values;\n\tDatapointValue value(\"just a string\");\n\tvalues.push_back(new Datapoint(\"s1\", value));\n\tvalues.push_back(new Datapoint(\"s2\", value));\n\tReading reading(\"test\", values);\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *element = PyUnicode_FromString(\"s1\");\n\tPyObject *obj = callPythonFunc2(\"element\", pyReading, element);\n\tif (obj)\n\t{\n\t\tEXPECT_EQ(PyUnicode_Check(obj), true);\n\t\tconst char *rval = PyUnicode_AsUTF8(obj);\n\t\tEXPECT_STREQ(rval, \"just a string\");\n\t}\n\telse\n\t{\n\t\tEXPECT_STREQ(\"Expected a string object\", \"\");\n\t}\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, TwoDataPointsFetchString2)\n{\n\tvector<Datapoint *> values;\n\tDatapointValue value(\"just a string\");\n\tvalues.push_back(new Datapoint(\"s1\", value));\n\tvalues.push_back(new Datapoint(\"s2\", value));\n\tReading reading(\"test\", values);\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *element = PyUnicode_FromString(\"s2\");\n\tPyObject *obj = callPythonFunc2(\"element\", pyReading, element);\n\tif (obj)\n\t{\n\t\tEXPECT_EQ(PyUnicode_Check(obj), true);\n\t\tconst char *rval = PyUnicode_AsUTF8(obj);\n\t\tEXPECT_STREQ(rval, \"just a 
string\");\n\t}\n\telse\n\t{\n\t\tEXPECT_STREQ(\"Expected a string object\", \"\");\n\t}\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, DoubleListDataPoint)\n{\n\tvector<double> values;\n\tvalues.push_back(1.4);\n\tvalues.push_back(3.7);\n\tDatapointValue value(values);\n\tReading reading(\"test\", new Datapoint(\"array\", value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *element = PyUnicode_FromString(\"array\");\n\tPyObject *obj = callPythonFunc2(\"element\", pyReading, element);\n\tif (obj)\n\t{\n\t\tEXPECT_EQ(PyList_Check(obj), true);\n\t}\n\telse\n\t{\n\t\tEXPECT_STREQ(\"Expected a LIST object\", \"\");\n\t}\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, DictDataPoint)\n{\n\tvector<Datapoint *> *values = new vector<Datapoint *>;\n\tDatapointValue value(\"just a string\");\n\tvalues->push_back(new Datapoint(\"s1\", value));\n\tvalues->push_back(new Datapoint(\"s2\", value));\n\tDatapointValue dict(values, true);\n\tReading reading(\"test\", new Datapoint(\"child\", dict));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *element = PyUnicode_FromString(\"child\");\n\tPyObject *obj = callPythonFunc2(\"element\", pyReading, element);\n\tif (obj)\n\t{\n\t\tEXPECT_EQ(PyDict_Check(obj), true);\n\t}\n\telse\n\t{\n\t\tEXPECT_STREQ(\"Expected a DICT object\", \"\");\n\t}\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, DataBuffer)\n{\n\tDataBuffer *buffer = new DataBuffer(sizeof(uint16_t), 10);\n\tuint16_t *ptr = (uint16_t *)buffer->getData();\n\t*ptr = 1234;\n\t*(ptr + 1) = 5678;\n\tDatapointValue buf(buffer);\n\tReading reading(\"test\", new Datapoint(\"buffer\", buf));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *element = PyUnicode_FromString(\"buffer\");\n\tPyObject *obj = 
callPythonFunc2(\"element\", pyReading, element);\n\tif (obj)\n\t{\n\t\tEXPECT_EQ(PythonReading::isArray(obj), true);\n\t}\n\telse\n\t{\n\t\tEXPECT_STREQ(\"Expected an array object\", \"\");\n\t}\n\tobj = callPythonFunc2(\"array_element_0\", pyReading, element);\n\tif (obj)\n\t{\n\t\tEXPECT_STREQ(obj->ob_type->tp_name, \"numpy.uint16\");\n\t}\n\telse\n\t{\n\t\tEXPECT_STREQ(\"Expected a long object\", \"\");\n\t}\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, SimpleLongRoundTrip)\n{\n\tlong i = 1234;\n\tDatapointValue value(i);\n\tReading reading(\"test\", new Datapoint(\"long\", value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tEXPECT_EQ(PyDict_Check(pyReading), true);\n\tPyObject *obj = callPythonFunc(\"returnIt\", pyReading);\n\tif (obj)\n\t{\n\t\tPythonReading pyr(obj);\n\t\tEXPECT_STREQ(pyr.getAssetName().c_str(), \"test\");\n\t\tEXPECT_EQ(pyr.getDatapointCount(), 1);\n\t\tDatapoint *dp = pyr.getDatapoint(\"long\");\n\t\tif (!dp)\n\t\t{\n\t\t\tEXPECT_STREQ(\"Expected datapoint missing\", \"\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tEXPECT_EQ(dp->getData().getType(), DatapointValue::dataTagType::T_INTEGER);\n\t\t\tEXPECT_EQ(dp->getData().toInt(), 1234);\n\t\t}\n\t}\n\telse\n\t{\n\t\tEXPECT_STREQ(\"Expect PythonReading object missing\", \"\");\n\t}\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, SimpleStringRoundTrip)\n{\n\tDatapointValue value(\"this is a string\");\n\tReading reading(\"test\", new Datapoint(\"str\", value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tEXPECT_EQ(PyDict_Check(pyReading), true);\n\tPyObject *obj = callPythonFunc(\"returnIt\", pyReading);\n\tif (obj)\n\t{\n\t\tPythonReading pyr(obj);\n\t\tEXPECT_STREQ(pyr.getAssetName().c_str(), \"test\");\n\t\tEXPECT_EQ(pyr.getDatapointCount(), 1);\n\t\tDatapoint *dp = pyr.getDatapoint(\"str\");\n\t\tif 
(!dp)\n\t\t{\n\t\t\tEXPECT_STREQ(\"Expected datapoint missing\", \"\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tEXPECT_EQ(dp->getData().getType(), DatapointValue::dataTagType::T_STRING);\n\t\t\tEXPECT_STREQ(dp->getData().toStringValue().c_str(),\n\t\t\t\t\t\"this is a string\");\n\t\t}\n\t}\n\telse\n\t{\n\t\tEXPECT_STREQ(\"Expect PythonReading object missing\", \"\");\n\t}\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, DataBufferSwapRoundTrip)\n{\n\tDataBuffer *buffer = new DataBuffer(sizeof(uint16_t), 10);\n\tuint16_t *ptr = (uint16_t *)buffer->getData();\n\t*ptr = 1234;\n\t*(ptr + 1) = 5678;\n\tDatapointValue buf(buffer);\n\tReading reading(\"test\", new Datapoint(\"buffer\", buf));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *element = PyUnicode_FromString(\"buffer\");\n\tPyObject *obj = callPythonFunc2(\"array_swap\", pyReading, element);\n\tPythonReading pyr(obj);\n\tEXPECT_STREQ(pyr.getAssetName().c_str(), \"test\");\n\tEXPECT_EQ(pyr.getDatapointCount(), 1);\n\tDatapoint *dp = pyr.getDatapoint(\"buffer\");\n\tif (!dp)\n\t{\n\t\tEXPECT_STREQ(\"Expected datapoint missing\", \"\");\n\t}\n\telse\n\t{\n\t\tEXPECT_EQ(dp->getData().getType(), DatapointValue::dataTagType::T_DATABUFFER);\n\t\tDataBuffer *dpbuf = dp->getData().getDataBuffer();\n\t\tptr = (uint16_t *)dpbuf->getData();\n\t\n\t\tEXPECT_EQ(*ptr, 5678);\n\t\tEXPECT_EQ(*(ptr + 1), 1234);\n\t}\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, ImageRoundTrip)\n{\n\tvoid *data = malloc(64 * 96);\n\tDPImage  *image = new DPImage(64, 96, 8, data);\n\tDatapointValue img(image);\n\tReading reading(\"test\", new Datapoint(\"image\", img));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *element = PyUnicode_FromString(\"image\");\n\tPyObject *obj = callPythonFunc2(\"image_swap\", pyReading, element);\n\tPythonReading 
pyr(obj);\n\tEXPECT_STREQ(pyr.getAssetName().c_str(), \"test\");\n\tEXPECT_EQ(pyr.getDatapointCount(), 1);\n\tDatapoint *dp = pyr.getDatapoint(\"image\");\n\tif (!dp)\n\t{\n\t\tEXPECT_STREQ(\"Expected datapoint missing\", \"\");\n\t}\n\telse\n\t{\n\t\tEXPECT_EQ(dp->getData().getType(), DatapointValue::dataTagType::T_IMAGE);\n\t\tDPImage *image2 = dp->getData().getImage();\n\t\tuint8_t *ptr = (uint8_t *)image2->getData();\n\t\tEXPECT_EQ(image2->getWidth(), image->getWidth());\n\t\tEXPECT_EQ(image2->getHeight(), image->getHeight());\n\t\tEXPECT_EQ(image2->getDepth(), image->getDepth());\n\t}\n\tPyGILState_Release(state);\n\tfree(data);\n}\n\nTEST_F(PythonReadingTest, UpdateAssetCode)\n{\n\tlong i = 1234;\n\tDatapointValue value(i);\n\tReading reading(\"test\", new Datapoint(\"long\", value));\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tEXPECT_EQ(PyDict_Check(pyReading), true);\n\tPyObject *newName = PyUnicode_FromString(\"shorter\");\n\tPyObject *obj = callPythonFunc2(\"setAsset\", pyReading, newName);\n\tif (obj)\n\t{\n\t\tPythonReading pyr(obj);\n\t\tEXPECT_STREQ(pyr.getAssetName().c_str(), \"shorter\");\n\t\tEXPECT_EQ(pyr.getDatapointCount(), 1);\n\t\tDatapoint *dp = pyr.getDatapoint(\"long\");\n\t\tif (!dp)\n\t\t{\n\t\t\tEXPECT_STREQ(\"Expected datapoint missing\", \"\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tEXPECT_EQ(dp->getData().getType(), DatapointValue::dataTagType::T_INTEGER);\n\t\t\tEXPECT_EQ(dp->getData().toInt(), 1234);\n\t\t}\n\t}\n\telse\n\t{\n\t\tEXPECT_STREQ(\"Expect PythonReading object missing\", \"\");\n\t}\n\tPyGILState_Release(state);\n}\n\nTEST_F(PythonReadingTest, Double2DArray)\n{\n\tvector<vector<double>* > array;\n\tfor (int i = 0; i < 2; i++)\n\t{\n\t\tvector<double> *row = new vector<double>;\n\t\trow->push_back(1.4 + i);\n\t\trow->push_back(3.7 + i);\n\t\tarray.emplace_back(row);\n\t}\n\n\tDatapointValue value(array);\n\tReading reading(\"test2d\", new 
Datapoint(\"array\", value));\n\n\tfor (auto& row : array)\n\t\tdelete row;\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *element = PyUnicode_FromString(\"array\");\n\tPyObject *obj = callPythonFunc2(\"row_swap\", pyReading, element);\n\tif (obj)\n\t{\n\t\tEXPECT_EQ(PyDict_Check(obj), true);\n\t\tPythonReading pyr(obj);\n\t\tEXPECT_STREQ(pyr.getAssetName().c_str(), \"test2d\");\n\t\tEXPECT_EQ(pyr.getDatapointCount(), 1);\n\t\tDatapoint *dp = pyr.getDatapoint(\"array\");\n\t\tEXPECT_EQ(dp->getData().getType(), DatapointValue::dataTagType::T_2D_FLOAT_ARRAY);\n\t\tvector<vector<double> *> *a2d = dp->getData().getDp2DArr();\n\t\tEXPECT_EQ(a2d->at(0)->at(0), 2.4);\n\t\tEXPECT_EQ(a2d->at(0)->at(1), 4.7);\n\t\tEXPECT_EQ(a2d->at(1)->at(0), 1.4);\n\t\tEXPECT_EQ(a2d->at(1)->at(1), 3.7);\n\t}\n\telse\n\t{\n\t\tEXPECT_STREQ(\"Expected a LIST object\", \"\");\n\t}\n\tPyGILState_Release(state);\n}\n\n};\n"
  },
  {
    "path": "tests/unit/C/common/test_python_reading_set.cpp",
    "content": "#include <gtest/gtest.h>\n#include <pythonreadingset.h>\n#include <string.h>\n#include <string>\n#include <logger.h>\n#include <pyruntime.h>\n\nusing namespace std;\n\nnamespace {\n\nconst char *script = R\"(\ndef count(set):\n    return len(set)\n)\";\n\nclass  PythonReadingSetTest : public testing::Test {\n protected:\n\tvoid SetUp() override\n\t{\n\t\tm_python = PythonRuntime::getPythonRuntime();\n\t}\n\n\tvoid TearDown() override\n\t{\n\t}\n\n   public:\n\tPythonRuntime\t*m_python;\n\n\tvoid logErrorMessage(const char *name)\n\t{\n\t\tPyObject* type;\n\t\tPyObject* value;\n\t\tPyObject* traceback;\n\n\n\t\tPyErr_Fetch(&type, &value, &traceback);\n\t\tPyErr_NormalizeException(&type, &value, &traceback);\n\n\t\tPyObject* str_exc_value = PyObject_Repr(value);\n\t\tPyObject* pyExcValueStr = PyUnicode_AsEncodedString(str_exc_value, \"utf-8\", \"Error ~\");\n\t\tconst char* pErrorMessage = value ?\n\t\t\t\t\t    PyBytes_AsString(pyExcValueStr) :\n\t\t\t\t\t    \"no error description.\";\n\t\tLogger::getLogger()->fatal(\"logErrorMessage: %s: Error '%s'\", name, pErrorMessage);\n\t\t\n\t\t// Check for numpy/pandas import errors\n\t\tconst char *err1 = \"implement_array_function method already has a docstring\";\n\t\tconst char *err2 = \"cannot import name 'check_array_indexer' from 'pandas.core.indexers'\";\n\n\t\t\n\t\tstd::string fcn = \"\";\n\t\tfcn += \"def get_pretty_traceback(exc_type, exc_value, exc_tb):\\n\";\n\t\tfcn += \"    import sys, traceback\\n\";\n\t\tfcn += \"    lines = []\\n\"; \n\t\tfcn += \"    lines = traceback.format_exception(exc_type, exc_value, exc_tb)\\n\";\n\t\tfcn += \"    output = '\\\\n'.join(lines)\\n\";\n\t\tfcn += \"    return output\\n\";\n\n\t\tPyRun_SimpleString(fcn.c_str());\n\t\tPyObject* mod = PyImport_ImportModule(\"__main__\");\n\t\tif (mod != NULL) {\n\t\t\tPyObject* method = PyObject_GetAttrString(mod, \"get_pretty_traceback\");\n\t\t\tif (method != NULL) {\n\t\t\t\tPyObject* outStr = 
PyObject_CallObject(method, Py_BuildValue(\"OOO\", type, value, traceback));\n\t\t\t\tif (outStr != NULL) {\n\t\t\t\t\tPyObject* tmp = PyUnicode_AsASCIIString(outStr);\n\t\t\t\t\tif (tmp != NULL) {\n\t\t\t\t\t\tstd::string pretty = PyBytes_AsString(tmp);\n\t\t\t\t\t\tLogger::getLogger()->fatal(\"%s\", pretty.c_str());\n\t\t\t\t\t\tLogger::getLogger()->printLongString(pretty.c_str());\n\t\t\t\t\t}\n\t\t\t\t\tPy_CLEAR(tmp);\n\t\t\t\t}\n\t\t\t\tPy_CLEAR(outStr);\n\t\t\t}\n\t\t\tPy_CLEAR(method);\n\t\t}\n\n\t\t// Reset error\n\t\tPyErr_Clear();\n\n\t\t// Remove references\n\t\tPy_CLEAR(type);\n\t\tPy_CLEAR(value);\n\t\tPy_CLEAR(traceback);\n\t\tPy_CLEAR(str_exc_value);\n\t\tPy_CLEAR(pyExcValueStr);\n\t\tPy_CLEAR(mod);\n\t}\n\n\tPyObject *callPythonFunc(const char *name, PyObject *arg)\n\t{\n\t\tPyObject *rval = NULL;\n\n\t\tm_python->execute(script);\n\t\trval = m_python->call(name, \"(O)\", arg);\n\t\treturn rval;\n\t}\n\n\tPyObject *callPythonFunc2(const char *name, PyObject *arg1, PyObject *arg2)\n\t{\n\t\tPyObject *rval = NULL;\n\n\t\tm_python->execute(script);\n\t\trval = m_python->call(name, \"OO\", arg1, arg2);\n\t\treturn rval;\n\t}\n\n};\n\nTEST_F(PythonReadingSetTest, SingleReading)\n{  \n\tvector<Reading *> *readings = new vector<Reading *>;\n\tlong i = 1234;\n\tDatapointValue value(i);\n\treadings->push_back(new Reading(\"test\", new Datapoint(\"long\", value)));\n\tReadingSet set(readings);\n\tdelete readings;\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pySet = ((PythonReadingSet *)(&set))->toPython();\n\tPyObject *obj = callPythonFunc(\"count\", pySet);\n\tlong rval = PyLong_AsLong(obj);\n\tPyGILState_Release(state);\n\tEXPECT_EQ(rval, 1);\n}\n\nTEST_F(PythonReadingSetTest, MultipleReadings)\n{\n\tvector<Reading *> *readings = new vector<Reading *>;\n\tlong i = 1234;\n\tDatapointValue value(i);\n\treadings->push_back(new Reading(\"test\", new Datapoint(\"long\", value)));\n\treadings->push_back(new Reading(\"test\", new 
Datapoint(\"long\", value)));\n\treadings->push_back(new Reading(\"test\", new Datapoint(\"long\", value)));\n\tReadingSet set(readings);\n\tdelete readings;\n\tPyGILState_STATE state = PyGILState_Ensure();\n\tPyObject *pySet = ((PythonReadingSet *)(&set))->toPython();\n\tPyObject *obj = callPythonFunc(\"count\", pySet);\n\tlong rval = PyLong_AsLong(obj);\n\tPyGILState_Release(state);\n\tEXPECT_EQ(rval, 3);\n}\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_python_readingnumpy.cpp",
    "content": "#include <gtest/gtest.h>\n#include <pythonreading.h>\n#include <string.h>\n#include <string>\n#include <logger.h>\n#include <pyruntime.h>\n\nusing namespace std;\n\nnamespace {\n\nconst char *script = R\"(\nimport numpy as np\n\ndef array_sort(arg, key):\n    readings = arg[\"readings\"];\n    arr = readings[key];\n    readings[key] = np.sort(arr)\n    return arg\n\ndef image_normalise(arg, key):\n    readings = arg[\"readings\"]\n    ar = readings[key]\n    mn = np.min(ar)\n    mx = np.max(ar)\n    norm = (ar - mn) * (1.0 / (mx - mn))\n    readings[key] = norm\n    return arg\n)\";\n\nclass  PythonReadingNumpyTest : public testing::Test {\n protected:\n\tvoid SetUp() override\n\t{\n\t\tm_python = PythonRuntime::getPythonRuntime();\n\t}\n\n\tvoid TearDown() override\n\t{\n\t}\n\n\tPythonRuntime *m_python;\n\n   public:\n\tvoid logErrorMessage(const char *name)\n\t{\n\t\tPyObject* type;\n\t\tPyObject* value;\n\t\tPyObject* traceback;\n\n\n\t\tPyErr_Fetch(&type, &value, &traceback);\n\t\tPyErr_NormalizeException(&type, &value, &traceback);\n\n\t\tPyObject* str_exc_value = PyObject_Repr(value);\n\t\tPyObject* pyExcValueStr = PyUnicode_AsEncodedString(str_exc_value, \"utf-8\", \"Error ~\");\n\t\tconst char* pErrorMessage = value ?\n\t\t\t\t\t    PyBytes_AsString(pyExcValueStr) :\n\t\t\t\t\t    \"no error description.\";\n\t\tLogger::getLogger()->fatal(\"logErrorMessage: %s: Error '%s'\", name, pErrorMessage);\n\t\t\n\t\t// Check for numpy/pandas import errors\n\t\tconst char *err1 = \"implement_array_function method already has a docstring\";\n\t\tconst char *err2 = \"cannot import name 'check_array_indexer' from 'pandas.core.indexers'\";\n\n\t\t\n\t\tstd::string fcn = \"\";\n\t\tfcn += \"def get_pretty_traceback(exc_type, exc_value, exc_tb):\\n\";\n\t\tfcn += \"    import sys, traceback\\n\";\n\t\tfcn += \"    lines = []\\n\"; \n\t\tfcn += \"    lines = traceback.format_exception(exc_type, exc_value, exc_tb)\\n\";\n\t\tfcn += \"    output = 
'\\\\n'.join(lines)\\n\";\n\t\tfcn += \"    return output\\n\";\n\n\t\tPyRun_SimpleString(fcn.c_str());\n\t\tPyObject* mod = PyImport_ImportModule(\"__main__\");\n\t\tif (mod != NULL) {\n\t\t\tPyObject* method = PyObject_GetAttrString(mod, \"get_pretty_traceback\");\n\t\t\tif (method != NULL) {\n\t\t\t\tPyObject* outStr = PyObject_CallObject(method, Py_BuildValue(\"OOO\", type, value, traceback));\n\t\t\t\tif (outStr != NULL) {\n\t\t\t\t\tPyObject* tmp = PyUnicode_AsASCIIString(outStr);\n\t\t\t\t\tif (tmp != NULL) {\n\t\t\t\t\t\tstd::string pretty = PyBytes_AsString(tmp);\n\t\t\t\t\t\tLogger::getLogger()->fatal(\"%s\", pretty.c_str());\n\t\t\t\t\t\tLogger::getLogger()->printLongString(pretty.c_str());\n\t\t\t\t\t}\n\t\t\t\t\tPy_CLEAR(tmp);\n\t\t\t\t}\n\t\t\t\tPy_CLEAR(outStr);\n\t\t\t}\n\t\t\tPy_CLEAR(method);\n\t\t}\n\n\t\t// Reset error\n\t\tPyErr_Clear();\n\n\t\t// Remove references\n\t\tPy_CLEAR(type);\n\t\tPy_CLEAR(value);\n\t\tPy_CLEAR(traceback);\n\t\tPy_CLEAR(str_exc_value);\n\t\tPy_CLEAR(pyExcValueStr);\n\t\tPy_CLEAR(mod);\n\t}\n\n\tPyObject *callPythonFunc(const char *name, PyObject *arg)\n\t{\n\t\tPyObject *rval = NULL;\n\n\t\tm_python->execute(script);\n\t\trval = m_python->call(name, \"(O)\", arg);\n\t\treturn rval;\n\t}\n\n\tPyObject *callPythonFunc2(const char *name, PyObject *arg1, PyObject *arg2)\n\t{\n\t\tPyObject *rval = NULL;\n\n\t\tm_python->execute(script);\n\t\trval = m_python->call(name, \"OO\", arg1, arg2);\n\t\treturn rval;\n\t}\n\n};\n\n\nTEST_F(PythonReadingNumpyTest, ArraySort)\n{\n\tDataBuffer *buffer = new DataBuffer(sizeof(uint16_t), 5);\n\tuint16_t *ptr = (uint16_t *)buffer->getData();\n\t*ptr = 5;\n\t*(ptr + 1) = 4;\n\t*(ptr + 2) = 3;\n\t*(ptr + 3) = 2;\n\t*(ptr + 4) = 1;\n\tDatapointValue buf(buffer);\n\tReading reading(\"test\", new Datapoint(\"buffer\", buf));\n\tPyGILState_STATE state = PyGILState_Ensure();  // Take GIL\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *element = 
PyUnicode_FromString(\"buffer\");\n\tPyObject *obj = callPythonFunc2(\"array_sort\", pyReading, element);\n\tPythonReading pyr(obj);\n\tPyGILState_Release(state);\n\tEXPECT_STREQ(pyr.getAssetName().c_str(), \"test\");\n\tEXPECT_EQ(pyr.getDatapointCount(), 1);\n\tDatapoint *dp = pyr.getDatapoint(\"buffer\");\n\tif (!dp)\n\t{\n\t\tEXPECT_STREQ(\"Expected datapoint missing\", \"\");\n\t}\n\telse\n\t{\n\t\tEXPECT_EQ(dp->getData().getType(), DatapointValue::dataTagType::T_DATABUFFER);\n\t\tDataBuffer *dpbuf = dp->getData().getDataBuffer();\n\t\tptr = (uint16_t *)dpbuf->getData();\n\t\n\t\tEXPECT_EQ(*ptr, 1);\n\t\tEXPECT_EQ(*(ptr + 1), 2);\n\t\tEXPECT_EQ(*(ptr + 2), 3);\n\t\tEXPECT_EQ(*(ptr + 3), 4);\n\t\tEXPECT_EQ(*(ptr + 4), 5);\n\t}\n}\n\nTEST_F(PythonReadingNumpyTest, ImageFloat)\n{\n\tvoid *data = malloc(64 * 96);\n\tuint8_t *ptr = (uint8_t *)data;\n\tfor (int i = 0; i < 64; i++)\n\t\tfor (int j = 0; j < 96; j++)\n\t\t\t*ptr++ = i;\n\tDPImage *image = new DPImage(64, 96, 8, data);\n\tDatapointValue img(image);\n\tReading reading(\"test\", new Datapoint(\"image\", img));\n\tPyGILState_STATE state = PyGILState_Ensure();  // Take GIL\n\tPyObject *pyReading = ((PythonReading *)(&reading))->toPython();\n\tPyObject *element = PyUnicode_FromString(\"image\");\n\tPyObject *obj = callPythonFunc2(\"image_normalise\", pyReading, element);\n\tEXPECT_NE(obj, (PyObject *)NULL);\n\tPythonReading pyr(obj);\n\tPyGILState_Release(state); // Release GIL\n\tEXPECT_STREQ(pyr.getAssetName().c_str(), \"test\");\n\tEXPECT_EQ(pyr.getDatapointCount(), 1);\n\tDatapoint *dp = pyr.getDatapoint(\"image\");\n\tif (!dp)\n\t{\n\t\tEXPECT_STREQ(\"Expected datapoint missing\", \"\");\n\t}\n\telse\n\t{\n\t\tEXPECT_EQ(dp->getData().getType(), DatapointValue::dataTagType::T_IMAGE);\n\t\tDPImage *image2 = dp->getData().getImage();\n\t\tuint8_t *ptr = (uint8_t *)image2->getData();\n\t\tEXPECT_EQ(image2->getWidth(), image->getWidth());\n\t\tEXPECT_EQ(image2->getHeight(), 
image->getHeight());\n\t\tEXPECT_EQ(image2->getDepth(), 64);\n\n\t\tdouble *dptr = (double *)image2->getData();\n\t\tfor (int i = 0; i < 64; i++)\n\t\t{\n\t\t\tfor (int j = 0; j < 96; j++)\n\t\t\t{\n\t\t\t\tEXPECT_TRUE(*dptr >= 0.0 && *dptr <= 1.0);\n\t\t\t\tdptr++;\n\t\t\t}\n\t\t}\n\t}\n\tfree(data);\n}\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_query.cpp",
    "content": "#include <gtest/gtest.h>\n#include <query.h>\n#include <string.h>\n#include <string>\n\nusing namespace std;\n\nTEST(QueryTest, Simple)\n{\nQuery query(new Where(\"c1\", Equals, \"10\"));\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\" } }\");\n\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, IsNull)\n{\nQuery query(new Where(\"c1\", IsNull));\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"isnull\\\" } }\");\n\n\tjson = query.toJSON();\n\tASSERT_STREQ(json.c_str(), expected.c_str());\n}\n\nTEST(QueryTest, NotNull)\n{\nQuery query(new Where(\"c1\", NotNull));\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"notnull\\\" } }\");\n\n\tjson = query.toJSON();\n\tASSERT_STREQ(json.c_str(), expected.c_str());\n}\n\nTEST(QueryTest, And)\n{\nQuery query2(new Where(\"c1\", Equals, \"10\", new Where(\"c2\", LessThan, \"15\")));\nstring json2;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\", \\\"and\\\" : { \\\"column\\\" : \\\"c2\\\", \\\"condition\\\" : \\\"<\\\", \\\"value\\\" : \\\"15\\\" } } }\");\n\n\tjson2 = query2.toJSON();\n\tASSERT_EQ(json2.compare(expected), 0);\n}\n\nTEST(QueryTest, Aggregate)\n{\nQuery query2(new Where(\"c1\", Equals, \"10\", new Where(\"c2\", LessThan, \"15\")));\nstring json2;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\", \\\"and\\\" : { \\\"column\\\" : \\\"c2\\\", \\\"condition\\\" : \\\"<\\\", \\\"value\\\" : \\\"15\\\" } }, \\\"aggregate\\\" : { \\\"column\\\" : \\\"c3\\\", \\\"operation\\\" : \\\"min\\\" } }\");\n\n\tquery2.aggregate(new Aggregate(\"min\", \"c3\"));\n\tjson2 = 
query2.toJSON();\n\n\tASSERT_EQ(json2.compare(expected), 0);\n}\n\nTEST(QueryTest, AggregateList)\n{\nQuery query2(new Where(\"c1\", Equals, \"10\", new Where(\"c2\", LessThan, \"15\")));\nstring json2;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\", \\\"and\\\" : { \\\"column\\\" : \\\"c2\\\", \\\"condition\\\" : \\\"<\\\", \\\"value\\\" : \\\"15\\\" } }, \\\"aggregate\\\" : [ { \\\"column\\\" : \\\"c3\\\", \\\"operation\\\" : \\\"min\\\" }, { \\\"column\\\" : \\\"c3\\\", \\\"operation\\\" : \\\"max\\\" } ] }\");\n\n\tquery2.aggregate(new Aggregate(\"min\", \"c3\"));\n\tquery2.aggregate(new Aggregate(\"max\", \"c3\"));\n\tjson2 = query2.toJSON();\n\n\tASSERT_EQ(json2.compare(expected), 0);\n}\n\nTEST(QueryTest, Group)\n{\nQuery query2(new Where(\"c1\", Equals, \"10\", new Where(\"c2\", LessThan, \"15\")));\nstring json2;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\", \\\"and\\\" : { \\\"column\\\" : \\\"c2\\\", \\\"condition\\\" : \\\"<\\\", \\\"value\\\" : \\\"15\\\" } }, \\\"aggregate\\\" : { \\\"column\\\" : \\\"c3\\\", \\\"operation\\\" : \\\"min\\\" }, \\\"group\\\" : \\\"c5\\\" }\");\n\n\tquery2.aggregate(new Aggregate(\"min\", \"c3\"));\n\tquery2.group(\"c5\");\n\tjson2 = query2.toJSON();\n\n\tASSERT_EQ(json2.compare(expected), 0);\n}\n\nTEST(QueryTest, Sort)\n{\nQuery query(new Where(\"c1\", Equals, \"10\"));\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\" }, \\\"sort\\\" : { \\\"column\\\" : \\\"c2\\\", \\\"direction\\\" : \\\"asc\\\" } }\");\n\n\tquery.sort(new Sort(\"c2\"));\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, Sort2)\n{\nQuery query(new Where(\"c1\", Equals, \"10\"));\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", 
\\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\" }, \\\"sort\\\" : { \\\"column\\\" : \\\"c2\\\", \\\"direction\\\" : \\\"desc\\\" } }\");\n\n\tquery.sort(new Sort(\"c2\", true));\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, Limit)\n{\nQuery query(new Where(\"c1\", Equals, \"10\"));\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\" }, \\\"limit\\\" : 10 }\");\n\n\tquery.limit(10);\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, Timebucket)\n{\nQuery query(new Where(\"c1\", Equals, \"10\"));\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\" }, \\\"timebucket\\\" : { \\\"timestamp\\\" : \\\"user_ts\\\", \\\"size\\\" : \\\"10\\\", \\\"format\\\" : \\\"DD-MM-YYYY HH:MI:SS\\\", \\\"alias\\\" : \\\"bucket\\\" } }\");\n\n\tquery.timebucket(new Timebucket(\"user_ts\", 10, \"DD-MM-YYYY HH:MI:SS\", \"bucket\"));\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, AggregateConstructor)\n{\nQuery query2(new Aggregate(\"min\", \"c3\"), new Where(\"c1\", Equals, \"10\"));\nstring json2;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\" }, \\\"aggregate\\\" : { \\\"column\\\" : \\\"c3\\\", \\\"operation\\\" : \\\"min\\\" } }\");\n\n\tjson2 = query2.toJSON();\n\tASSERT_EQ(json2.compare(expected), 0);\n}\n\nTEST(QueryTest, TimebucketConstructor)\n{\nQuery query(new Timebucket(\"user_ts\", 10, \"DD-MM-YYYY HH:MI:SS\", \"bucket\"),\n\t    new Where(\"c1\", Equals, \"10\"));\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\" }, \\\"timebucket\\\" : { \\\"timestamp\\\" : \\\"user_ts\\\", \\\"size\\\" : \\\"10\\\", 
\\\"format\\\" : \\\"DD-MM-YYYY HH:MI:SS\\\", \\\"alias\\\" : \\\"bucket\\\" } }\");\n\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, TimebucketConstructorLimit)\n{\nQuery query(new Timebucket(\"user_ts\", 10, \"DD-MM-YYYY HH:MI:SS\", \"bucket\"),\n\t    new Where(\"c1\", Equals, \"10\"), 10);\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\" }, \\\"timebucket\\\" : { \\\"timestamp\\\" : \\\"user_ts\\\", \\\"size\\\" : \\\"10\\\", \\\"format\\\" : \\\"DD-MM-YYYY HH:MI:SS\\\", \\\"alias\\\" : \\\"bucket\\\" }, \\\"limit\\\" : 10 }\");\n\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, SingleReturn)\n{\nQuery query(new Where(\"c1\", Equals, \"10\"));\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\" }, \\\"return\\\" : [ \\\"c2\\\" ] }\");\n\n\tquery.returns(new Returns(\"c2\"));\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, MultipleReturn)\n{\nQuery query(new Where(\"c1\", Equals, \"10\"));\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\" }, \\\"return\\\" : [ \\\"c1\\\", \\\"c2\\\", \\\"c3\\\" ] }\");\n\n\tquery.returns(new Returns(\"c1\"));\n\tquery.returns(new Returns(\"c2\"));\n\tquery.returns(new Returns(\"c3\"));\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, MultipleReturn2)\n{\nQuery query(new Where(\"c1\", Equals, \"10\"));\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\" }, \\\"return\\\" : [ \\\"c1\\\", { \\\"column\\\" : \\\"c2\\\", \\\"alias\\\" : \\\"Col2\\\" }, { \\\"column\\\" : \\\"c3\\\", \\\"alias\\\" : \\\"Col3\\\", 
\\\"format\\\" : \\\"DD-MM-YY HH:MI:SS\\\" } ] }\");\n\n\tquery.returns(new Returns(\"c1\"));\n\tquery.returns(new Returns(\"c2\", \"Col2\"));\n\tquery.returns(new Returns(\"c3\", \"Col3\", \"DD-MM-YY HH:MI:SS\"));\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, MultipleReturnVector)\n{\nQuery query(new Where(\"c1\", Equals, \"10\"));\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\" }, \\\"return\\\" : [ \\\"c1\\\", \\\"c2\\\", \\\"c3\\\" ] }\");\n\n\tquery.returns(vector<Returns *> {new Returns(\"c1\"),\n\t\t\t\t         new Returns(\"c2\"),\n\t\t\t\t         new Returns(\"c3\")});\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, MultipleReturnConstructor)\n{\nQuery query(vector<Returns *> {new Returns(\"c1\"),\n\t\t\t       new Returns(\"c2\"),\n\t\t\t       new Returns(\"c3\")}, \n            new Where(\"c1\", Equals, \"10\"));\nstring json;\nstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"=\\\", \\\"value\\\" : \\\"10\\\" }, \\\"return\\\" : [ \\\"c1\\\", \\\"c2\\\", \\\"c3\\\" ] }\");\n\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, fullTable)\n{\nQuery query(new Returns(\"c1\"));\nstring json;\nstring expected(\"{ \\\"return\\\" : [ \\\"c1\\\" ] }\");\n\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, distinctTable)\n{\nQuery query(new Returns(\"c1\"));\nstring json;\nstring expected(\"{ \\\"return\\\" : [ \\\"c1\\\" ], \\\"modifier\\\" : \\\"distinct\\\" }\");\n\n\tquery.distinct();\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, whereInSingle)\n{\n\t// Add one element for IN\n\tQuery query(new Where(\"c1\", In, \"10\"));\n\n\tstring json;\n\tstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\"
 : \\\"in\\\", \\\"value\\\" : [\\\"10\\\"] } }\");\n\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n\nTEST(QueryTest, whereIn)\n{\n\t// Add one element for IN\n\tWhere* two = new Where(\"c1\", In, \"10\");\n\t// Add second element\n\ttwo->addIn(\"20\");\n\tQuery query(two);\n\n\tstring json;\n\tstring expected(\"{ \\\"where\\\" : { \\\"column\\\" : \\\"c1\\\", \\\"condition\\\" : \\\"in\\\", \\\"value\\\" : [\\\"10\\\", \\\"20\\\"] } }\");\n\n\tjson = query.toJSON();\n\tASSERT_EQ(json.compare(expected), 0);\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_reading.cpp",
    "content": "#include <gtest/gtest.h>\n#include <reading.h>\n#include <string.h>\n#include <string>\n#include <vector>\n\nusing namespace std;\n\nTEST(ReadingTest, IntValue)\n{\n\tDatapointValue value((long) 10);\n\tReading reading(string(\"test1\"), new Datapoint(\"x\", value));\n\tstring json = reading.toJSON();\n\tASSERT_NE(json.find(string(\"\\\"asset_code\\\" : \\\"test1\\\"\")), 0);\n\tASSERT_NE(json.find(string(\"\\\"reading\\\" : { \\\"x\\\" : \\\"10\\\" }\")), 0);\n\tASSERT_NE(json.find(string(\"\\\"user_ts\\\" : \")), 0);\n}\n\nTEST(ReadingTest, FloatValue)\n{\n\tDatapointValue value(3.1415);\n\tReading reading(string(\"test1\"), new Datapoint(\"pi\", value));\n\tstring json = reading.toJSON();\n\tASSERT_NE(json.find(string(\"\\\"asset_code\\\" : \\\"test1\\\"\")), 0);\n\tASSERT_NE(json.find(string(\"\\\"reading\\\" : { \\\"pi\\\" : \\\"3.1415\\\" }\")), 0);\n\tASSERT_NE(json.find(string(\"\\\"user_ts\\\" : \")), 0);\n}\n\nTEST(ReadingTest, AString)\n{\n\tDatapointValue value(\"just a string\");\n\tReading reading(string(\"test3\"), new Datapoint(\"str\", value));\n\tstring json = reading.toJSON();\n\tASSERT_NE(json.find(string(\"\\\"asset_code\\\":\\\"test3\\\"\")), std::string::npos);\n\tASSERT_NE(json.find(string(\"\\\"reading\\\":{\\\"str\\\":\\\"just a string\\\"}\")), std::string::npos);\n\tASSERT_NE(json.find(string(\"\\\"user_ts\\\":\")), std::string::npos);\n}\n\nTEST(ReadingTest, FloatArray)\n{\n\tstd::vector<double> v {3.1415, -128, 0, -0.0021, 0.2345};\n\tDatapointValue value(v);\n\tReading reading(string(\"test55\"), new Datapoint(\"a\", value));\n\tstring json = reading.toJSON();\n\tASSERT_NE(json.find(string(\"\\\"asset_code\\\":\\\"test55\\\"\")), std::string::npos);\n\tASSERT_NE(json.find(string(\"\\\"reading\\\":{\\\"a\\\":[3.1415, -128, 0, -0.0021, 0.2345]}\")), std::string::npos);\n\tASSERT_NE(json.find(string(\"\\\"user_ts\\\":\")), std::string::npos);\n}\n\nTEST(ReadingTest, GMT)\n{\n\tDatapointValue value((long) 10);\n\tReading
 reading(string(\"test1\"), new Datapoint(\"x\", value));\n\treading.setUserTimestamp(\"2019-01-10 10:01:03.123456+0:00\");\n\tstring json = reading.toJSON();\n\tASSERT_NE(json.find(string(\"\\\"user_ts\\\" : \\\"2019-01-10 10:01:03.123456+0:00\\\"\")), 0);\n}\n\nTEST(ReadingTest, CET)\n{\n\tDatapointValue value((long) 10);\n\tReading reading(string(\"test1\"), new Datapoint(\"x\", value));\n\treading.setUserTimestamp(\"2019-01-10 10:01:03.123456-1:00\");\n\tstring json = reading.toJSON();\n\tASSERT_NE(json.find(string(\"\\\"user_ts\\\" : \\\"2019-01-10 11:01:03.123456+0:00\\\"\")), 0);\n}\n\nTEST(ReadingTest, PST)\n{\n\tDatapointValue value((long) 10);\n\tReading reading(string(\"test1\"), new Datapoint(\"x\", value));\n\treading.setUserTimestamp(\"2019-01-10 10:01:03.123456+8:00\");\n\tstring json = reading.toJSON();\n\tASSERT_NE(json.find(string(\"\\\"user_ts\\\" : \\\"2019-01-10 18:01:03.123456+0:00\\\"\")), 0);\n}\n\nTEST(ReadingTest, IST)\n{\n\tDatapointValue value((long) 10);\n\tReading reading(string(\"test1\"), new Datapoint(\"x\", value));\n\treading.setUserTimestamp(\"2019-01-10 10:01:03.123456-5:30\");\n\tstring json = reading.toJSON();\n\tASSERT_NE(json.find(string(\"\\\"user_ts\\\" : \\\"2019-01-10 15:31:03.123456+0:00\\\"\")), 0);\n}\n\nTEST(ReadingTest, rmDatapoint)\n{\n\tDatapointValue value((long) 10);\n\tReading reading(string(\"test1\"), new Datapoint(\"x\", value));\n\tDatapointValue val2((long) 20);\n\treading.addDatapoint(new Datapoint(\"y\", val2));\n\tASSERT_EQ(reading.getDatapointCount(), 2);\n\tDatapoint *removed = reading.removeDatapoint(\"x\");\n\tASSERT_EQ(reading.getDatapointCount(), 1);\n\tASSERT_EQ(removed->getName().compare(\"x\"), 0);\n\tdelete removed;\n\tremoved = reading.removeDatapoint(\"x\");\n\tASSERT_EQ(removed,  (Datapoint *)0);\n}\n\nTEST(ReadingTest, DictDatapoint)\n{\n\tDatapointValue dpv1(1.0);\n\tDatapointValue dpv2(1.1);\n\tvector<Datapoint *> *values = new vector<Datapoint *>;\n\tvalues->push_back(new
 Datapoint(\"first\", dpv1));\n\tvalues->push_back(new Datapoint(\"second\", dpv2));\n\t// Create a dict\n\tDatapointValue dpv(values, true);\n\n\tReading reading(string(\"test55\"), new Datapoint(\"a\", dpv));\n\tstring json = reading.toJSON();\n\n\t// Expected output: {\"a\":{\"first\":1.0, \"second\":1.1}}\n\tASSERT_NE(json.find(string(\"\\\"reading\\\":{\\\"a\\\":{\\\"first\\\":1.0, \\\"second\\\":1.1}}\")), std::string::npos);\n}\n\nTEST(ReadingTest, ArrayOfDicts)\n{\n\tDatapointValue dpv1(1.0);\n\tDatapointValue dpv2(1.1);\n\n\tvector<Datapoint *> *val1 = new vector<Datapoint *>;\n\tval1->push_back(new Datapoint(\"first\", dpv1));\n\t// Create an array of dicts, one entry\n\tDatapointValue dpv_1(val1, true); // put this into a dict of its own\n\n\tvector<Datapoint *> *val2 = new vector<Datapoint *>;\n\tval2->push_back(new Datapoint(\"second\", dpv2));\n\t// Create an array of dicts, one entry\n\tDatapointValue dpv_2(val2, true); // put this into a dict of its own\n\n\tstd::vector<Datapoint*>* dpVec = new std::vector<Datapoint *>();\n\n\t// Create datapoints with unnamed elements\n\tdpVec->emplace_back(new Datapoint(std::string(\"unnamed_list_elem#1\"), dpv_1));\n\tdpVec->emplace_back(new Datapoint(std::string(\"unnamed_list_elem#2\"), dpv_2));\n\tDatapointValue dpv(dpVec, false); // put dicts into list\n\n\t// Expected output: {\"a\":[{\"first\":1.0}, {\"second\":1.1}]}\n\tReading reading(string(\"test55\"), new Datapoint(\"a\", dpv));\n\tstring json = reading.toJSON();\n\n\tASSERT_NE(json.find(string(\"\\\"reading\\\":{\\\"a\\\":[{\\\"first\\\":1.0}, {\\\"second\\\":1.1}]}\")), std::string::npos);\n}\n\nTEST(ReadingTest, FMTDEFAULT)\n{\n\tDatapointValue value((long) 10);\n\tReading reading(string(\"test1\"), new Datapoint(\"x\", value));\n\treading.setUserTimestamp(\"2019-01-10 10:01:03.123456+0:00\");\n\tstring datetime = reading.getAssetDateUserTime(Reading::FMT_DEFAULT);\n\tASSERT_EQ(datetime.compare(\"2019-01-10 10:01:03.123456\"), 
0);\n}\n\nTEST(ReadingTest, FMTSTANDARD)\n{\n\tDatapointValue value((long) 10);\n\tReading reading(string(\"test1\"), new Datapoint(\"x\", value));\n\treading.setUserTimestamp(\"2019-01-10 10:01:03.123456+0:00\");\n\tstring datetime = reading.getAssetDateUserTime(Reading::FMT_STANDARD);\n\tASSERT_EQ(datetime.compare(\"2019-01-10T10:01:03.123456\"), 0);\n}\n\nTEST(ReadingTest, ISO8601MS)\n{\n\tDatapointValue value((long) 10);\n\tReading reading(string(\"test1\"), new Datapoint(\"x\", value));\n\treading.setUserTimestamp(\"2019-01-10 10:01:03.123456+0:00\");\n\tstring datetime = reading.getAssetDateUserTime(Reading::FMT_ISO8601MS);\n\tASSERT_EQ(datetime.compare(\"2019-01-10 10:01:03.123456 +0000\"), 0);\n}\n\nTEST(ReadingTest, SimpleSub)\n{\n\tDatapointValue value(\"a string\");\n\tReading reading(string(\"test3\"), new Datapoint(\"str\", value));\n\tstring json = reading.toJSON();\n\tstring s = \"$ASSET$ $str$\";\n\tstring res = reading.substitute(s);\n\tASSERT_STREQ(res.c_str(), \"test3 a string\");\n}\n\nTEST(ReadingTest, SubWithDefault)\n{\n\tDatapointValue value(\"a string\");\n\tReading reading(string(\"test3\"), new Datapoint(\"str\", value));\n\tDatapointValue val2(\"foobar\");\n\treading.addDatapoint(new Datapoint(\"foo\", val2));\n\tstring json = reading.toJSON();\n\tstring s = \"$ASSET$ $foo|bar$\";\n\tstring res = reading.substitute(s);\n\tASSERT_STREQ(res.c_str(), \"test3 foobar\");\n}\n\nTEST(ReadingTest, DefaultSub)\n{\n\tDatapointValue value(\"a string\");\n\tReading reading(string(\"test3\"), new Datapoint(\"str\", value));\n\tstring json = reading.toJSON();\n\tstring s = \"$ASSET$ $foo|bar$\";\n\tstring res = reading.substitute(s);\n\tASSERT_STREQ(res.c_str(), \"test3 bar\");\n}\n\nTEST(ReadingTest, NoDefaultSub)\n{\n\tDatapointValue value(\"a string\");\n\tReading reading(string(\"test3\"), new Datapoint(\"str\", value));\n\tstring json = reading.toJSON();\n\tstring s = \"$ASSET$ $foo$\";\n\tstring res = 
reading.substitute(s);\n\tASSERT_STREQ(res.c_str(), \"test3 \");\n}\n\nTEST(ReadingTest, MultipleSub)\n{\n\tDatapointValue value(\"first\");\n\tReading reading(string(\"test3\"), new Datapoint(\"str1\", value));\n\tDatapointValue val2(\"second\");\n\treading.addDatapoint(new Datapoint(\"str2\", val2));\n\tstring json = reading.toJSON();\n\tstring s = \"$ASSET$ $str1$ $str2$\";\n\tstring res = reading.substitute(s);\n\tASSERT_STREQ(res.c_str(), \"test3 first second\");\n\ts = \"$ASSET$ $str2$ $str1$\";\n\tres = reading.substitute(s);\n\tASSERT_STREQ(res.c_str(), \"test3 second first\");\n\ts = \"$ASSET$ $str1$ $str2$ $str1$\";\n\tres = reading.substitute(s);\n\tASSERT_STREQ(res.c_str(), \"test3 first second first\");\n}\n\nTEST(ReadingTest, TimestampMethods)\n{\n\tDatapointValue value((long) 10);\n\tReading reading(string(\"test1\"), new Datapoint(\"x\", value));\n\t\n\t// Set timestamp using unsigned long\n\tunsigned long ts = 1735689600; // 2025-01-01 00:00:00 UTC\n\treading.setTimestamp(ts);\n\tASSERT_EQ(reading.getTimestamp(), ts);\n\t\n\t// microseconds should be zero when setting using unsigned long\n\tstruct timeval tv_out_long;\n\treading.getTimestamp(&tv_out_long);\n\tASSERT_EQ(tv_out_long.tv_sec, 1735689600);\n\tASSERT_EQ(tv_out_long.tv_usec, 0);\n\t\n\t// Set timestamp using struct timeval\n\tstruct timeval tv;\n\ttv.tv_sec = 1735689600; // 2025-01-01 00:00:00 UTC\n\ttv.tv_usec = 123456;\n\treading.setTimestamp(tv);\n\tASSERT_EQ(reading.getTimestamp(), 1735689600);\n\t\n\t// Get timestamp using struct timeval\n\tstruct timeval tv_out;\n\treading.getTimestamp(&tv_out);\n\tASSERT_EQ(tv_out.tv_sec, 1735689600);\n\tASSERT_EQ(tv_out.tv_usec, 123456);\n}\n\nTEST(ReadingTest, UserTimestampMethods)\n{\n\tDatapointValue value((long) 10);\n\tReading reading(string(\"test1\"), new Datapoint(\"x\", value));\n\t\n\t// Set user timestamp using unsigned long\n\tunsigned long uts = 1735689600; // 2025-01-01 00:00:00 
UTC\n\treading.setUserTimestamp(uts);\n\tASSERT_EQ(reading.getUserTimestamp(), uts);\n\n\t// microseconds should be zero when setting using unsigned long\n\tstruct timeval tv_out_long;\n\treading.getUserTimestamp(&tv_out_long);\n\tASSERT_EQ(tv_out_long.tv_sec, 1735689600);\n\tASSERT_EQ(tv_out_long.tv_usec, 0);\n\t\n\t// Set user timestamp using struct timeval\n\tstruct timeval tv;\n\ttv.tv_sec = 1735689600; // 2025-01-01 00:00:00 UTC\n\ttv.tv_usec = 654321;\n\treading.setUserTimestamp(tv);\n\tASSERT_EQ(reading.getUserTimestamp(), 1735689600);\n\t\n\t// Get user timestamp using struct timeval\n\tstruct timeval tv_out;\n\treading.getUserTimestamp(&tv_out);\n\tASSERT_EQ(tv_out.tv_sec, 1735689600);\n\tASSERT_EQ(tv_out.tv_usec, 654321);\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_reading_array.cpp",
    "content": "/*\n * unit tests - FOGL-7748 : Support array data in reading json\n *\n * Copyright (c) 2023 Dianomic Systems, Inc.\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Devki Nandan Ghildiyal\n */\n\n#include <gtest/gtest.h>\n#include <datapoint.h>\n#include <reading.h>\n#include <vector>\n#include <exception>\n#include <string>\n\nusing namespace std;\n\n\nconst char *ReadingJSON = R\"(\n    {\n        \"floor1\":30.25, \"floor2\":34.28, \"floor3\":[38.25,60.89,40.28]\n    }\n)\";\n\nconst char *unsupportedReadingJSON = R\"(\n    {\n        \"floor1\":[38,\"error\",40]\n    }\n)\";\n\nconst char *NestedReadingJSON = R\"(\n{\n\t\"pressure\": {\"floor1\":30, \"floor2\":34, \"floor3\":[38,60,40] }\n}\n)\";\n\nTEST(TESTReading, TestUnspportedReadingForListType )\n{\n    try\n    {\n        vector<Reading *> readings;\n        readings.push_back(new Reading(\"test\", unsupportedReadingJSON));\n        vector<Datapoint *>&dp = readings[0]->getReadingData();\n\n        ASSERT_EQ(readings[0]->getDatapointCount(),1);\n        ASSERT_EQ(readings[0]->getAssetName(),\"test\");\n    }\n    catch(exception& ex)\n    {\n        string msg(ex.what());\n        ASSERT_EQ(msg,\"Only numeric lists are currently supported in datapoints\");\n    }\n   \n\n}\n\nTEST(TESTReading, TestReadingForListType )\n{\n    vector<Reading *> readings;\n    readings.push_back(new Reading(\"test\", ReadingJSON));\n    vector<Datapoint *>&dp = readings[0]->getReadingData();\n\n    ASSERT_EQ(readings[0]->getDatapointCount(),3);\n    ASSERT_EQ(readings[0]->getAssetName(),\"test\");\n\n    ASSERT_EQ(dp[0]->getName(),\"floor1\");\n    ASSERT_EQ(dp[0]->getData().toDouble(),30.25);\n\n    ASSERT_EQ(dp[1]->getName(),\"floor2\");\n    ASSERT_EQ(dp[1]->getData().toDouble(),34.28);\n\n    ASSERT_EQ(dp[2]->getName(),\"floor3\");\n    ASSERT_EQ(dp[2]->getData().toString(),\"[38.25, 60.89, 40.28]\");\n    delete readings[0];\n}\n\nTEST(TESTReading, TestReadingForNestedListType )\n{\n    
vector<Reading *> readings;\n    readings.push_back(new Reading(\"test\", NestedReadingJSON));\n    vector<Datapoint *>&dp = readings[0]->getReadingData();\n\n    ASSERT_EQ(readings[0]->getDatapointCount(),1);\n    ASSERT_EQ(readings[0]->getAssetName(),\"test\");\n\n    ASSERT_EQ(dp[0]->getName(),\"pressure\");\n    ASSERT_EQ(dp[0]->getData().toString(),\"{\\\"floor1\\\":30, \\\"floor2\\\":34, \\\"floor3\\\":[38, 60, 40]}\");\n    delete readings[0];\n}\n\n"
  },
  {
    "path": "tests/unit/C/common/test_reading_set.cpp",
    "content": "#include <gtest/gtest.h>\n#include <reading_set.h>\n#include <string.h>\n#include <string>\n#include <rapidjson/document.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\nconst char *input = \"{ \\\"count\\\" : 2, \\\"rows\\\" : [ \"\n\t    \"{ \\\"id\\\": 1, \\\"asset_code\\\": \\\"luxometer\\\", \"\n            \"\\\"reading\\\": { \\\"lux\\\": 76204.524 }, \"\n            \"\\\"user_ts\\\": \\\"2017-09-21 15:00:08.532958\\\", \"\n            \"\\\"ts\\\": \\\"2017-09-22 14:47:18.872708\\\" }, \"\n\t    \"{ \\\"id\\\": 2, \\\"asset_code\\\": \\\"luxometer\\\", \"\n            \"\\\"reading\\\": { \\\"lux\\\": 76834.361 }, \"\n            \"\\\"user_ts\\\": \\\"2017-09-21 15:00:09.32958\\\", \"\n            \"\\\"ts\\\": \\\"2017-09-22 14:48:18.72708\\\" }\"\n\t    \"] }\";\n\nconst char *asset_notification = \"{ \\\"readings\\\" : [ \"\n\t    \"{ \\\"id\\\": 1, \\\"asset_code\\\": \\\"luxometer\\\", \"\n            \"\\\"reading\\\": { \\\"lux\\\": 76204.524 }, \"\n            \"\\\"user_ts\\\": \\\"2017-09-21 15:00:08.532958\\\", \"\n            \"\\\"ts\\\": \\\"2017-09-22 14:47:18.872708\\\" }, \"\n\t    \"{ \\\"id\\\": 2, \\\"asset_code\\\": \\\"luxometer\\\", \"\n            \"\\\"reading\\\": { \\\"lux\\\": 76834.361 }, \"\n            \"\\\"user_ts\\\": \\\"2017-09-21 15:00:09.32958\\\", \"\n            \"\\\"ts\\\": \\\"2017-09-22 14:48:18.72708\\\" }\"\n\t    \"] }\";\nTEST(ReadingSet, Count)\n{\n\tReadingSet readingSet(input);\n\tASSERT_EQ(2, readingSet.getCount());\n}\n\nTEST(ReadingSet, Index)\n{\n\tReadingSet readingSet(input);\n\tconst Reading *reading = readingSet[0];\n\tstring json = reading->toJSON();\n\tASSERT_NE(json.find(string(\"\\\"asset_code\\\" : \\\"luxmeter\\\"\")), 0);\n\tASSERT_NE(json.find(string(\"\\\"reading\\\" : { \\\"lux\\\" : \\\"76204.524\\\" }\")), 0);\n\tASSERT_NE(json.find(string(\"\\\"user_ts\\\" : \\\"2017-09-22 14:47:18.872708\\\"\")), 0);\n}\n\nTEST(ReadingSet, NotificationCount)\n{\n\tReadingSet 
readingSet(asset_notification);\n\tASSERT_EQ(2, readingSet.getCount());\n}\n\nTEST(ReadingSet, NotificationIndex)\n{\n\tReadingSet readingSet(asset_notification);\n\tconst Reading *reading = readingSet[0];\n\tstring json = reading->toJSON();\n\tASSERT_NE(json.find(string(\"\\\"asset_code\\\" : \\\"luxometer\\\"\")), 0);\n\tASSERT_NE(json.find(string(\"\\\"reading\\\" : { \\\"lux\\\" : \\\"76204.524\\\" }\")), 0);\n\tASSERT_NE(json.find(string(\"\\\"readkey\\\" : \")), 0);\n\tASSERT_NE(json.find(string(\"\\\"user_ts\\\" : \\\"2017-09-22 14:47:18.872708\\\"\")), 0);\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_readingset_merge.cpp",
    "content": "/*\n * unit tests FOGL-9849 : Add merge function to ReadingSet\n *\n * Copyright (c) 2025 Dianomic Systems, Inc.\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Devki Nandan Ghildiyal\n */\n\n#include <gtest/gtest.h>\n#include <reading.h>\n#include <reading_set.h>\n#include <datapoint.h>\n#include <cassert>\n#include <iostream>\n\n\n// Helper to create a reading with a specific timestamp\nReading* createReading(const std::string& asset, const std::string& dpName, long value, struct timeval ts)\n{\n\tstd::vector<Datapoint *> dps;\n\tDatapointValue dp1(value);\n\tdps.push_back(new Datapoint(dpName, dp1));\n\tReading* r = new Reading(asset, dps);\n\tr->setUserTimestamp(ts);\n\treturn r;\n}\n\n// Reading to be merged with the ReadingSet which has same timestamp\nTEST(ReadingSetMerge, ReadingsWithSameTimestamp)\n{\n\tstruct timeval ts1 = { 100, 0 };  // Earliest\n\tstruct timeval ts2 = { 200, 0 };\n\tstruct timeval ts3 = { 300, 0 };\n\tstruct timeval ts4 = { 200, 0 };  // Duplicate timestamp with ts2\n\n\t// Step 1: Initial ReadingSet with ts1 and ts3\n\tReading* r1 = createReading(\"asset1\", \"dp1\", 10, ts1);\n\tReading* r3 = createReading(\"asset3\", \"dp3\", 30, ts3); \n\n\tstd::vector<Reading*> initialReadings = { r1, r3 };\n\tReadingSet set(&initialReadings);  // Initial state\n\n\tassert(set.getCount() == 2);\n\n\t// Step 2: Vector to merge\n\tstd::vector<Reading*> toMerge;\n\tReading* r2 = createReading(\"asset2\", \"dp2\", 20, ts2); \n\tReading* r4 = createReading(\"asset4\", \"dp4\", 40, ts4);\n\n\ttoMerge.push_back(r2);\n\ttoMerge.push_back(r4);\n\n\tset.merge(&toMerge);  // In-place merge\n\n\t// Step 3: Validate ordering and content\n\tassert(set.getCount() == 4);\n\tassert(toMerge.empty());\n\n\tstd::vector<Reading*> merged = set.getAllReadings();\n\n\t// Order should be: r1(ts1), r2(ts2), r4(ts4), r3(ts3)\n\tassert(merged[0]->getAssetName() == \"asset1\");\n\tassert(merged[1]->getAssetName() == 
\"asset2\");\n\tassert(merged[2]->getAssetName() == \"asset4\");\n\tassert(merged[3]->getAssetName() == \"asset3\");\n\n\t// Step 4: Validate timestamps are non-decreasing\n\tfor (size_t i = 1; i < merged.size(); ++i)\n\t{\n\t\tstruct timeval prev; merged[i - 1]->getUserTimestamp(&prev);\n\t\tstruct timeval curr; merged[i]->getUserTimestamp(&curr);\n\t\tassert(timercmp(&prev, &curr, <) || timercmp(&prev, &curr, ==));\n\t}\n}\n\n// Reading to be merged with the ReadingSet which has duplicate timestamp\nTEST(ReadingSetMerge, ReadingsWithDuplicateTimestamp)\n{\n\tstruct timeval ts1 = { 100, 0 };  // Earliest\n\tstruct timeval ts2 = { 200, 0 };\n\tstruct timeval ts3 = { 300, 0 };\n\tstruct timeval ts4 = { 200, 0 };  // Duplicate timestamp with ts2\n\n\t// Step 1: Initial ReadingSet with ts1 and ts4\n\tReading* r1 = createReading(\"asset1\", \"dp1\", 10, ts1);\n\tReading* r4 = createReading(\"asset4\", \"dp4\", 40, ts4);\n\t\n\n\tstd::vector<Reading*> initialReadings = { r1, r4 };\n\tReadingSet set(&initialReadings); // Initial state\n\n\tassert(set.getCount() == 2);\n\n\t// Step 2: Vector to merge\n\tstd::vector<Reading*> toMerge;\n\tReading* r2 = createReading(\"asset2\", \"dp2\", 20, ts2);  \n\tReading* r3 = createReading(\"asset3\", \"dp3\", 30, ts3);  \n\t\n\n\ttoMerge.push_back(r2);\n\ttoMerge.push_back(r3);\n\n\tset.merge(&toMerge);  // In-place merge\n\n\t// Step 3: Validate ordering and content\n\tassert(set.getCount() == 4);\n\tassert(toMerge.empty());\n\n\tstd::vector<Reading*> merged = set.getAllReadings();\n\n\t// Order should be: r1(ts1), r4(ts4), r2(ts2), r3(ts3)\n\tassert(merged[0]->getAssetName() == \"asset1\");\n\t// Reading in the existing ReadingSet must come before the new reading with the same timestamp\n\tassert(merged[1]->getAssetName() == \"asset4\");\n\tassert(merged[2]->getAssetName() == \"asset2\");\n\tassert(merged[3]->getAssetName() == \"asset3\");\n\n\t// Step 4: Validate timestamps are non-decreasing\n\tfor (size_t i = 1; i < 
merged.size(); ++i)\n\t{\n\t\tstruct timeval prev; merged[i - 1]->getUserTimestamp(&prev);\n\t\tstruct timeval curr; merged[i]->getUserTimestamp(&curr);\n\t\tassert(timercmp(&prev, &curr, <) || timercmp(&prev, &curr, ==));\n\t}\n}"
  },
  {
    "path": "tests/unit/C/common/test_resultset.cpp",
    "content": "#include <gtest/gtest.h>\n#include <resultset.h>\n#include <string.h>\n#include <string>\n\nusing namespace std;\n\nTEST(ResultSetTest, RowCount)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"c1\\\" : 1 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.rowCount(), 1);\n}\n\nTEST(ResultSetTest, NoRows)\n{\nstring\tjson(\"{ \\\"count\\\" : 0, \\\"rows\\\" : [  ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.rowCount(), 0);\n}\n\nTEST(ResultSetTest, ColumnCount1)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"c1\\\" : 1 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.columnCount(), 1);\n}\n\nTEST(ResultSetTest, ColumnCount2)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"c1\\\" : 1, \\\"c2\\\" : 1.45 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.columnCount(), 2);\n}\n\nTEST(ResultSetTest, ColumnName)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"c1\\\" : 1 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.columnName(0).compare(\"c1\"), 0);\n}\n\nTEST(ResultSetTest, ColumnTypeInt)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"c1\\\" : 1 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.columnType(0), INT_COLUMN);\n}\n\nTEST(ResultSetTest, ColumnTypeIntAfterJSON)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"json\\\" : {}, \\\"c1\\\" : 1 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.columnType(1), INT_COLUMN);\n\tASSERT_EQ(result.columnName(1).compare(\"c1\"), 0);\n}\n\nTEST(ResultSetTest, ColumnTypeIntByNameAfterJSON)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"json\\\" : {}, \\\"c1\\\" : 1 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.columnType(\"c1\"), INT_COLUMN);\n}\n\nTEST(ResultSetTest, GetColumnAfterJSON)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"json\\\" : {}, \\\"c1\\\" : 1 } ] }\");\n\n\tResultSet 
result(json);\n\tResultSet::RowIterator rowIter = result.firstRow();\n\tASSERT_EQ((*rowIter)->getColumn(1)->getInteger(), 1);\n}\n\nTEST(ResultSetTest, GetColumnByNameAfterJSON)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"json\\\" : {}, \\\"c1\\\" : 1 } ] }\");\n\n\tResultSet result(json);\n\tResultSet::RowIterator rowIter = result.firstRow();\n\tASSERT_EQ((*rowIter)->getColumn(\"c1\")->getInteger(), 1);\n}\n\nTEST(ResultSetTest, ColumnTypeNumber)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"c1\\\" : 1.4 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.columnType(0), NUMBER_COLUMN);\n}\n\nTEST(ResultSetTest, RowIterator)\n{\nstring\tjson(\"{ \\\"count\\\" : 2, \\\"rows\\\" : [ { \\\"c1\\\" : 1 }, { \\\"c1\\\" : 2 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.rowCount(), 2);\n\n\tint i = 1;\n\tResultSet::RowIterator rowIter = result.firstRow();\n\twhile (result.hasNextRow(rowIter))\n\t{\n\t\trowIter = result.nextRow(rowIter);\n\t\ti++;\n\t}\n\tASSERT_EQ(result.rowCount(), i);\n}\n\nTEST(ResultSetTest, RowIteratorIsLast)\n{\nstring\tjson(\"{ \\\"count\\\" : 2, \\\"rows\\\" : [ { \\\"c1\\\" : 1 }, { \\\"c1\\\" : 2 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.rowCount(), 2);\n\n\tint i = 1;\n\tResultSet::RowIterator rowIter = result.firstRow();\n\twhile (! 
result.isLastRow(rowIter))\n\t{\n\t\trowIter = result.nextRow(rowIter);\n\t\ti++;\n\t}\n\tASSERT_EQ(result.rowCount(), i);\n}\n\nTEST(ResultSetTest, RowIteratorType)\n{\nstring\tjson(\"{ \\\"count\\\" : 2, \\\"rows\\\" : [ { \\\"c1\\\" : 1 }, { \\\"c1\\\" : 2 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.rowCount(), 2);\n\n\tResultSet::RowIterator rowIter = result.firstRow();\n\tASSERT_EQ(INT_COLUMN, (*rowIter)->getType(0));\n\twhile (result.hasNextRow(rowIter))\n\t{\n\t\trowIter = result.nextRow(rowIter);\n\t\tASSERT_EQ(INT_COLUMN, (*rowIter)->getType(0));\n\t}\n}\n\nTEST(ResultSetTest, RowIteratorIntValue)\n{\nstring\tjson(\"{ \\\"count\\\" : 2, \\\"rows\\\" : [ { \\\"c1\\\" : 1 }, { \\\"c1\\\" : 2 }, { \\\"c1\\\" : 3 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.rowCount(), 2);\n\n\tResultSet::RowIterator rowIter = result.firstRow();\n\tint i = 1;\n\tconst ResultSet::ColumnValue *value = (*rowIter)->getColumn(0);\n\tASSERT_EQ(i, value->getInteger());\n\twhile (result.hasNextRow(rowIter))\n\t{\n\t\ti++;\n\t\trowIter = result.nextRow(rowIter);\n\t\tASSERT_EQ(i, (*rowIter)->getColumn(0)->getInteger());\n\t}\n}\n\nTEST(ResultSetTest, RowIteratorIntValueByName)\n{\nstring\tjson(\"{ \\\"count\\\" : 2, \\\"rows\\\" : [ { \\\"c1\\\" : 1 }, { \\\"c1\\\" : 2 }, { \\\"c1\\\" : 3 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.rowCount(), 2);\n\n\tResultSet::RowIterator rowIter = result.firstRow();\n\tint i = 1;\n\tconst ResultSet::ColumnValue *value = (*rowIter)->getColumn(\"c1\");\n\tASSERT_EQ(i, value->getInteger());\n\twhile (result.hasNextRow(rowIter))\n\t{\n\t\ti++;\n\t\trowIter = result.nextRow(rowIter);\n\t\tASSERT_EQ(i, (*rowIter)->getColumn(\"c1\")->getInteger());\n\t}\n}\n\nTEST(ResultSetTest, RowIndexIntValueByName)\n{\nstring\tjson(\"{ \\\"count\\\" : 2, \\\"rows\\\" : [ { \\\"c1\\\" : 1 }, { \\\"c1\\\" : 2 }, { \\\"c1\\\" : 3 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.rowCount(), 2);\n\n\tfor (int i = 0; i < 
result.rowCount(); i++)\n\t{\n\t\tASSERT_EQ(i + 1, result[i]->getColumn(\"c1\")->getInteger());\n\t}\n}\n\nTEST(ResultSetTest, RowIndexIntValueByIndex)\n{\nstring\tjson(\"{ \\\"count\\\" : 2, \\\"rows\\\" : [ { \\\"c1\\\" : 1 }, { \\\"c1\\\" : 2 }, { \\\"c1\\\" : 3 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.rowCount(), 2);\n\n\tfor (int i = 0; i < result.rowCount(); i++)\n\t{\n\t\tASSERT_EQ(i + 1, (*result[i])[0]->getInteger());\n\t}\n}\n\nTEST(ResultSetTest, JSONColumn)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"json\\\" : { \\\"j1\\\" : \\\"test\\\" }, \\\"c1\\\" : 1 } ] }\");\n\n\tResultSet result(json);\n\tASSERT_EQ(result.columnType(0), JSON_COLUMN);\n}\n\nTEST(ResultSetTest, JSONColumnType)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"json\\\" : { \\\"j1\\\" : \\\"test\\\" }, \\\"c1\\\" : 1 } ] }\");\n\n\tResultSet result(json);\n\tResultSet::RowIterator rowIter = result.firstRow();\n\tconst ResultSet::ColumnValue *value = (*rowIter)->getColumn(\"json\");\n\tASSERT_EQ(result.columnType(0), JSON_COLUMN);\n}\n\nTEST(ResultSetTest, JSONColumnObjectType)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"json\\\" : { \\\"j1\\\" : \\\"test\\\" }, \\\"c1\\\" : 1 } ] }\");\n\n\tResultSet result(json);\n\tResultSet::RowIterator rowIter = result.firstRow();\n\tconst ResultSet::ColumnValue *value = (*rowIter)->getColumn(\"json\");\n\tconst rapidjson::Value *v = value->getJSON();\n\tASSERT_EQ(v->IsObject(), true);\n}\n\nTEST(ResultSetTest, JSONColumnHasMember)\n{\nstring\tjson(\"{ \\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"json\\\" : { \\\"j1\\\" : \\\"test\\\" }, \\\"c1\\\" : 1 } ] }\");\n\n\tResultSet result(json);\n\tResultSet::RowIterator rowIter = result.firstRow();\n\tconst ResultSet::ColumnValue *value = (*rowIter)->getColumn(\"json\");\n\tconst rapidjson::Value *v = value->getJSON();\n\tASSERT_EQ(v->HasMember(\"j1\"), true);\n}\n\nTEST(ResultSetTest, JSONColumnGetString)\n{\nstring\tjson(\"{ 
\\\"count\\\" : 1, \\\"rows\\\" : [ { \\\"json\\\" : { \\\"j1\\\" : \\\"test\\\" }, \\\"c1\\\" : 1 } ] }\");\n\n\tResultSet result(json);\n\tResultSet::RowIterator rowIter = result.firstRow();\n\tconst ResultSet::ColumnValue *value = (*rowIter)->getColumn(\"json\");\n\tconst rapidjson::Value *v = value->getJSON();\n\tASSERT_EQ(strcmp((*v)[\"j1\"].GetString(), \"test\"), 0);\n}\n"
  },
  {
    "path": "tests/unit/C/common/test_service_record.cpp",
    "content": "#include <gtest/gtest.h>\n#include <service_record.h>\n#include <string.h>\n#include <string>\n\nusing namespace std;\n\n/**\n * Creation of service record JSON\n */\nTEST(ServiceRecordTest, JSON)\n{\nEXPECT_EXIT({\nServiceRecord serviceRecord(\"test1\", \"testType\", \"http\", \"localhost\", 1234, 4321);\nstring json;\nstring expected(\"{ \\\"name\\\" : \\\"test1\\\",\\\"type\\\" : \\\"testType\\\",\\\"protocol\\\" : \\\"http\\\",\\\"address\\\" : \\\"localhost\\\",\\\"management_port\\\" : 4321,\\\"service_port\\\" : 1234 }\");\n\n\tserviceRecord.asJSON(json);\n\n\tbool ret = json.compare(expected) == 0;\n\tif (!ret)\n\t{\n\t\tcerr << \"serviceRecord 'test1' asJSON compare failed\" << endl;\n\t}\n\texit(!(ret)); }, ::testing::ExitedWithCode(0), \"\");\n}\n\n"
  },
  {
    "path": "tests/unit/C/common/test_string_utils.cpp",
    "content": "#include <gtest/gtest.h>\n#include <iostream>\n#include <string>\n#include \"string_utils.h\"\n#include <vector>\n#include <bits/stdc++.h>\n#include <regex>\n\nusing namespace std;\n\nclass Row  {\n\tpublic:\n\t\tstring v1;\n\t\tstring v2;\n\t\tstring v3;\n\t\tstring v4;\n\n\t\tRow(const char *p1, const char *p2, const char *p3, const char *p4) {\n\t\t\tv1 = p1;\n\t\t\tv2 = p2;\n\t\t\tv3 = p3;\n\t\t\tv4 = p4;\n\t\t};\n};\n\nclass StringUtilsTestClass : public ::testing::TestWithParam<Row> {\n};\n\nTEST(StringSlashFixTestClass, goodCases)\n{\n\tvector<pair<string, string>> testCases = {\n\n\t\t// TestCase        - Expected\n\t\t{\"fledge_test1\",    \"fledge_test1\"},\n\n\t\t{\"/fledge_test1\",   \"fledge_test1\"},\n\t\t{\"//fledge_test1\",  \"fledge_test1\"},\n\t\t{\"///fledge_test1\", \"fledge_test1\"},\n\n\t\t{\"fledge_test1/\",   \"fledge_test1\"},\n\t\t{\"fledge_test1//\",  \"fledge_test1\"},\n\t\t{\"fledge_test1///\", \"fledge_test1\"},\n\n\t\t{\"/a//b/c/\",         \"a/b/c\"},\n\t\t{\"fledge/test1\",    \"fledge/test1\"},\n\t\t{\"fledge//test1\",   \"fledge/test1\"},\n\t\t{\"fledge//test//1\", \"fledge/test/1\"},\n\n\t\t{\"//fledge_test1//\",    \"fledge_test1\"},\n\t\t{\"//fledge//test//1//\", \"fledge/test/1\"}\n\t};\n\tstring result;\n\n\tfor(auto &testCase : testCases)\n\t{\n\t\tresult = StringSlashFix(testCase.first);\n\t\tASSERT_EQ(result, testCase.second);\n\t}\n}\n\n\nTEST(StringReplaceAllTestClass, goodCases)\n{\n\tvector<std::tuple<string, string, string, string>> testCases = {\n\n\t\t// TestCase               - to search - to repplace   - Expected\n\t\t{std::make_tuple(\"fledge@@test1\",        \"@@\",       \"@\",            \"fledge@test1\")},\n\t\t{std::make_tuple(\"fledge@@test@@2\",      \"@@\",       \"@\",            \"fledge@test@2\")},\n\t\t{std::make_tuple(\"@@fledge@@test@@3@@\",  \"@@\",       \"@\",            \"@fledge@test@3@\")}\n\t};\n\tstring test;\n\tstring toSearch;\n\tstring toReplace;\n\tstring 
Expected;\n\n\tfor(auto &testCase : testCases)\n\t{\n\t\ttest = std::get<0>(testCase);\n\t\ttoSearch = std::get<1>(testCase);\n\t\ttoReplace = std::get<2>(testCase);\n\t\tExpected = std::get<3>(testCase);\n\n\t\tStringReplaceAll(test, toSearch, toReplace);\n\t\tASSERT_EQ(test, Expected);\n\t}\n}\n\n// Test Code\nTEST_P(StringUtilsTestClass, StringUtilsTestCase)\n{\n\tRow const& p = GetParam();\n\n\tstring StringToManage = p.v1;\n\tstring StringToSearch = p.v2;\n\tstring StringReplacement = p.v3;\n\tstring StringReplaced = p.v4;\n\n\tStringReplace(StringToManage, StringToSearch ,StringReplacement);\n\n\tASSERT_EQ(StringToManage, StringReplaced);\n}\n\nINSTANTIATE_TEST_CASE_P(\n\tStringUtilsTestCase,\n\tStringUtilsTestClass,\n\t::testing::Values(\n\t\t//  To manage\t\tSearch\t\tReplace\t\tExpected\n\t\tRow(\"a XX XX\",\t\t\"a\",\t\t\"b\",\t\t\"b XX XX\"),\n\t\tRow(\"XX a XX\",\t\t\"a\",\t\t\"b\",\t\t\"XX b XX\"),\n\t\tRow(\"XX XX a\",\t\t\"a\",\t\t\"b\",\t\t\"XX XX b\"),\n\n\t\tRow(\"XX XX\",\t\t\"a\",\t\t\"b\",\t\t\"XX XX\"),\n\n\t\tRow(\"XX a a XX\",\t\"a\",\t\t\"b\",\t\t\"XX b a XX\")\n\t)\n);\n\n\n// Test String trim\nTEST(StringTrim, StringTrimCases)\n{\n\tASSERT_EQ(StringRTrim(\"xxx\") , \"xxx\");\n\tASSERT_EQ(StringRTrim(\"xxx \"), \"xxx\");\n\tASSERT_EQ(StringRTrim(\"xxx   \"), \"xxx\");\n\n\tASSERT_EQ(StringLTrim(\"xxx\"), \"xxx\");\n\tASSERT_EQ(StringLTrim(\" xxx\"), \"xxx\");\n\tASSERT_EQ(StringLTrim(\"  xxx\"), \"xxx\");\n\n\tASSERT_EQ(StringTrim(\"xxx\"), \"xxx\");\n\tASSERT_EQ(StringTrim(\"  xxx\"), \"xxx\");\n\tASSERT_EQ(StringTrim(\"xxx  \"), \"xxx\");\n\tASSERT_EQ(StringTrim(\"  xxx  \"), \"xxx\");\n}\n\n// Test StringStripWhiteSpacesAll\nTEST(StringStripWhiteSpacesAll, AllCases)\n{\n\tASSERT_EQ(StringStripWhiteSpacesAll(\"xxx\") , \"xxx\");\n\n\tASSERT_EQ(StringStripWhiteSpacesAll(\" xxx\") , \"xxx\");\n\tASSERT_EQ(StringStripWhiteSpacesAll(\" xxx \") , \"xxx\");\n\n\tASSERT_EQ(StringStripWhiteSpacesAll(\" x x x \") , 
\"xxx\");\n\n\tASSERT_EQ(StringStripWhiteSpacesAll(\"Messages:[  {   MessageIndex:0\") , \"Messages:[{MessageIndex:0\");\n\n\tASSERT_EQ(StringStripWhiteSpacesAll(\" x x x \") , \"xxx\");\n\n\tASSERT_EQ(StringStripWhiteSpacesAll(\" x x\\tx \") , \"xxx\");\n\tASSERT_EQ(StringStripWhiteSpacesAll(\" x x\\nx \") , \"xxx\");\n\tASSERT_EQ(StringStripWhiteSpacesAll(\" x x\\vx \") , \"xxx\");\n\tASSERT_EQ(StringStripWhiteSpacesAll(\" x x\\fx \") , \"xxx\");\n\tASSERT_EQ(StringStripWhiteSpacesAll(\" x x\\rx \") , \"xxx\");\n\n}\n\n// Test StringStripWhiteSpacesAll\nTEST(StringStripWhiteSpacesLeave1Space, AllCases)\n{\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\"xxx\") , \"xxx\");\n\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" xxx\") , \"xxx\");\n\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" xxx \") , \"xxx\");\n\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x x \") , \"x x x\");\n\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x  x   x \") , \"x x x\");\n\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\"Messages:[  {   MessageIndex:0\") , \"Messages:[ { MessageIndex:0\");\n\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x\\tx \") , \"x xx\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x\\nx \") , \"x xx\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x\\vx \") , \"x xx\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x\\fx \") , \"x xx\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x\\rx \") , \"x xx\");\n\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x\\tx \") , \"x xx\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x\\n x \") , \"x x x\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x\\v  x \") , \"x x x\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x\\f   x \") , \"x x x\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x\\r    x \") , \"x x x\");\n\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x \\tx \") , \"x x x\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x  \\n x \") , \"x x x\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x  \\v  x \") , \"x x 
x\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x  \\f   x \") , \"x x x\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(\" x x  \\r    x \") , \"x x x\");\n}\n\n// Some tests are skipped on Centos\n// Centos 7.0 has gcc 4.8.5, <regex> was implemented and released in GCC 4.9.0.\n// the version available in C7 was highly experimental.\n//\n// Test IsRegex\nTEST(TestIsRegex, AllCases)\n{\n\tASSERT_EQ(IsRegex(\"^a\") , true);\n\tASSERT_EQ(IsRegex(\".*\") , true);\n\tASSERT_EQ(IsRegex(\"\\\\s\") , true);\n\tASSERT_EQ(IsRegex(\"^.*(Code:)((?!2).)*$\") , true);\n\n\tASSERT_EQ(IsRegex(\"asset_1\") , false);\n\n\tASSERT_EQ(std::regex_match (\"sin_1_asset_1\", regex(\"^a\")), false);\n\n#ifndef RHEL_CENTOS_7\n\tASSERT_EQ(std::regex_match (\"sin_1_asset_1\", regex(\"^s.*\")), true);\n#endif\n\n\tASSERT_EQ(std::regex_match (\"sin_1_asset_1\", regex(\"a.*\")), false);\n\tASSERT_EQ(std::regex_match (\"sin_1_asset_1\", regex(\"s.*\")), true);\n}\n\n\n\nTEST(TestAround, Extract)\n{\n\tstring longString(\"not shownpreamble123This part is after the location\");\n\tstring s = StringAround(longString, 19);\n\tEXPECT_STREQ(s.c_str(), \"preamble123This part is after the locati\");\n\ts = StringAround(longString, 19, 10);\n\tEXPECT_STREQ(s.c_str(), \"preamble123This part\");\n\ts = StringAround(longString, 19, 10, 5);\n\tEXPECT_STREQ(s.c_str(), \"ble123This part\");\n\ts = StringAround(longString, 5);\n\tEXPECT_STREQ(s.c_str(), \"not shownpreamble123This part is after t\");\n}\n\n// URL Encoding/Decoding Tests\nTEST(UrlEncodeTest, BasicCases)\n{\n\tASSERT_EQ(urlEncode(\"hello world\"), \"hello%20world\");\n\tASSERT_EQ(urlEncode(\"hello+world\"), \"hello%2Bworld\");\n\tASSERT_EQ(urlEncode(\"hello/world\"), \"hello%2Fworld\");\n\tASSERT_EQ(urlEncode(\"hello&world\"), \"hello%26world\");\n\tASSERT_EQ(urlEncode(\"hello=world\"), \"hello%3Dworld\");\n\tASSERT_EQ(urlEncode(\"hello@world\"), \"hello%40world\");\n\tASSERT_EQ(urlEncode(\"hello:world\"), 
\"hello%3Aworld\");\n\tASSERT_EQ(urlEncode(\"hello;world\"), \"hello%3Bworld\");\n\tASSERT_EQ(urlEncode(\"hello,world\"), \"hello%2Cworld\");\n\tASSERT_EQ(urlEncode(\"hello?world\"), \"hello%3Fworld\");\n\tASSERT_EQ(urlEncode(\"hello#world\"), \"hello%23world\");\n\tASSERT_EQ(urlEncode(\"hello[world\"), \"hello%5Bworld\");\n\tASSERT_EQ(urlEncode(\"hello]world\"), \"hello%5Dworld\");\n\tASSERT_EQ(urlEncode(\"hello{world\"), \"hello%7Bworld\");\n\tASSERT_EQ(urlEncode(\"hello}world\"), \"hello%7Dworld\");\n\tASSERT_EQ(urlEncode(\"hello|world\"), \"hello%7Cworld\");\n\tASSERT_EQ(urlEncode(\"hello\\\\world\"), \"hello%5Cworld\");\n\tASSERT_EQ(urlEncode(\"hello^world\"), \"hello%5Eworld\");\n\tASSERT_EQ(urlEncode(\"hello`world\"), \"hello%60world\");\n\tASSERT_EQ(urlEncode(\"hello~world\"), \"hello~world\");\n\tASSERT_EQ(urlEncode(\"hello\\\"world\"), \"hello%22world\");\n\tASSERT_EQ(urlEncode(\"hello'world\"), \"hello%27world\");\n\tASSERT_EQ(urlEncode(\"hello<world\"), \"hello%3Cworld\");\n\tASSERT_EQ(urlEncode(\"hello>world\"), \"hello%3Eworld\");\n\tASSERT_EQ(urlEncode(\"hello%world\"), \"hello%25world\");\n\tASSERT_EQ(urlEncode(\"hello$world\"), \"hello%24world\");\n\tASSERT_EQ(urlEncode(\"hello!world\"), \"hello%21world\");\n\tASSERT_EQ(urlEncode(\"hello*world\"), \"hello%2Aworld\");\n\tASSERT_EQ(urlEncode(\"hello(world\"), \"hello%28world\");\n\tASSERT_EQ(urlEncode(\"hello)world\"), \"hello%29world\");\n\tASSERT_EQ(urlEncode(\"hello_world\"), \"hello_world\"); // Should remain unchanged\n\tASSERT_EQ(urlEncode(\"hello-world\"), \"hello-world\"); // Should remain unchanged\n\tASSERT_EQ(urlEncode(\"hello.world\"), \"hello.world\"); // Should remain unchanged\n\tASSERT_EQ(urlEncode(\"\"), \"\"); // Empty string\n\tASSERT_EQ(urlEncode(\"abc123\"), \"abc123\"); // Alphanumeric should remain unchanged\n}\n\nTEST(UrlDecodeTest, BasicCases)\n{\n\tASSERT_EQ(urlDecode(\"hello%20world\"), \"hello world\");\n\tASSERT_EQ(urlDecode(\"hello+world\"), \"hello 
world\");\n\tASSERT_EQ(urlDecode(\"hello%2Bworld\"), \"hello+world\");\n\tASSERT_EQ(urlDecode(\"hello%2Fworld\"), \"hello/world\");\n\tASSERT_EQ(urlDecode(\"hello%26world\"), \"hello&world\");\n\tASSERT_EQ(urlDecode(\"hello%3Dworld\"), \"hello=world\");\n\tASSERT_EQ(urlDecode(\"hello%40world\"), \"hello@world\");\n\tASSERT_EQ(urlDecode(\"hello%3Aworld\"), \"hello:world\");\n\tASSERT_EQ(urlDecode(\"hello%3Bworld\"), \"hello;world\");\n\tASSERT_EQ(urlDecode(\"hello%2Cworld\"), \"hello,world\");\n\tASSERT_EQ(urlDecode(\"hello%3Fworld\"), \"hello?world\");\n\tASSERT_EQ(urlDecode(\"hello%23world\"), \"hello#world\");\n\tASSERT_EQ(urlDecode(\"hello%5Bworld\"), \"hello[world\");\n\tASSERT_EQ(urlDecode(\"hello%5Dworld\"), \"hello]world\");\n\tASSERT_EQ(urlDecode(\"hello%7Bworld\"), \"hello{world\");\n\tASSERT_EQ(urlDecode(\"hello%7Dworld\"), \"hello}world\");\n\tASSERT_EQ(urlDecode(\"hello%7Cworld\"), \"hello|world\");\n\tASSERT_EQ(urlDecode(\"hello%5Cworld\"), \"hello\\\\world\");\n\tASSERT_EQ(urlDecode(\"hello%5Eworld\"), \"hello^world\");\n\tASSERT_EQ(urlDecode(\"hello%60world\"), \"hello`world\");\n\tASSERT_EQ(urlDecode(\"hello%7Eworld\"), \"hello~world\");\n\tASSERT_EQ(urlDecode(\"hello%22world\"), \"hello\\\"world\");\n\tASSERT_EQ(urlDecode(\"hello%27world\"), \"hello'world\");\n\tASSERT_EQ(urlDecode(\"hello%3Cworld\"), \"hello<world\");\n\tASSERT_EQ(urlDecode(\"hello%3Eworld\"), \"hello>world\");\n\tASSERT_EQ(urlDecode(\"hello%25world\"), \"hello%world\");\n\tASSERT_EQ(urlDecode(\"hello%24world\"), \"hello$world\");\n\tASSERT_EQ(urlDecode(\"hello%21world\"), \"hello!world\");\n\tASSERT_EQ(urlDecode(\"hello%2Aworld\"), \"hello*world\");\n\tASSERT_EQ(urlDecode(\"hello%28world\"), \"hello(world\");\n\tASSERT_EQ(urlDecode(\"hello%29world\"), \"hello)world\");\n\tASSERT_EQ(urlDecode(\"hello_world\"), \"hello_world\"); // Should remain unchanged\n\tASSERT_EQ(urlDecode(\"hello-world\"), \"hello-world\"); // Should remain unchanged\n\tASSERT_EQ(urlDecode(\"hello.world\"), 
\"hello.world\"); // Should remain unchanged\n\tASSERT_EQ(urlDecode(\"\"), \"\"); // Empty string\n\tASSERT_EQ(urlDecode(\"abc123\"), \"abc123\"); // Alphanumeric should remain unchanged\n}\n\nTEST(UrlEncodeDecodeTest, RoundTrip)\n{\n\tvector<string> testStrings = {\n\t\t\"hello world\",\n\t\t\"hello+world\",\n\t\t\"hello/world\",\n\t\t\"hello&world\",\n\t\t\"hello=world\",\n\t\t\"hello@world\",\n\t\t\"hello:world\",\n\t\t\"hello;world\",\n\t\t\"hello,world\",\n\t\t\"hello?world\",\n\t\t\"hello#world\",\n\t\t\"hello[world\",\n\t\t\"hello]world\",\n\t\t\"hello{world\",\n\t\t\"hello}world\",\n\t\t\"hello|world\",\n\t\t\"hello\\\\world\",\n\t\t\"hello^world\",\n\t\t\"hello`world\",\n\t\t\"hello~world\",\n\t\t\"hello\\\"world\",\n\t\t\"hello'world\",\n\t\t\"hello<world\",\n\t\t\"hello>world\",\n\t\t\"hello%world\",\n\t\t\"hello$world\",\n\t\t\"hello!world\",\n\t\t\"hello*world\",\n\t\t\"hello(world\",\n\t\t\"hello)world\",\n\t\t\"hello_world\",\n\t\t\"hello-world\",\n\t\t\"hello.world\",\n\t\t\"abc123\",\n\t\t\"\",\n\t\t\"special chars: !@#$%^&*()_+-=[]{}|;':\\\",./<>?\"\n\t};\n\t\n\tfor (const auto& original : testStrings) {\n\t\tstring encoded = urlEncode(original);\n\t\tstring decoded = urlDecode(encoded);\n\t\tASSERT_EQ(decoded, original) << \"Round trip failed for: \" << original;\n\t}\n}\n\n// Path Manipulation Tests\nTEST(EvaluateParentPathTest, BasicCases)\n{\n\tASSERT_EQ(evaluateParentPath(\"/a/b/c\", '/'), \"/a/b\");\n\tASSERT_EQ(evaluateParentPath(\"/a/b/\", '/'), \"/a/b\");\n\tASSERT_EQ(evaluateParentPath(\"/a/b\", '/'), \"/a\");\n\tASSERT_EQ(evaluateParentPath(\"/a/\", '/'), \"/a\");\n\tASSERT_EQ(evaluateParentPath(\"/\", '/'), \"/\");\n\tASSERT_EQ(evaluateParentPath(\"a/b/c\", '/'), \"a/b\");\n\tASSERT_EQ(evaluateParentPath(\"a/b/\", '/'), \"a/b\");\n\tASSERT_EQ(evaluateParentPath(\"a/b\", '/'), \"a\");\n\tASSERT_EQ(evaluateParentPath(\"a/\", '/'), \"a\");\n\tASSERT_EQ(evaluateParentPath(\"a\", '/'), \"a\");\n\tASSERT_EQ(evaluateParentPath(\"\", '/'), 
\"\");\n\tASSERT_EQ(evaluateParentPath(\"abc\", '/'), \"abc\");\n\t\n\t// Test with different separators\n\tASSERT_EQ(evaluateParentPath(\"a\\\\b\\\\c\", '\\\\'), \"a\\\\b\");\n\tASSERT_EQ(evaluateParentPath(\"a.b.c\", '.'), \"a.b\");\n}\n\nTEST(ExtractLastLevelTest, BasicCases)\n{\n\tASSERT_EQ(extractLastLevel(\"/a/b/c\", '/'), \"c\");\n\tASSERT_EQ(extractLastLevel(\"/a/b/\", '/'), \"\");\n\tASSERT_EQ(extractLastLevel(\"/a/b\", '/'), \"b\");\n\tASSERT_EQ(extractLastLevel(\"/a/\", '/'), \"\");\n\tASSERT_EQ(extractLastLevel(\"/a\", '/'), \"a\");\n\tASSERT_EQ(extractLastLevel(\"/\", '/'), \"\");\n\tASSERT_EQ(extractLastLevel(\"a/b/c\", '/'), \"c\");\n\tASSERT_EQ(extractLastLevel(\"a/b/\", '/'), \"\");\n\tASSERT_EQ(extractLastLevel(\"a/b\", '/'), \"b\");\n\tASSERT_EQ(extractLastLevel(\"a/\", '/'), \"\");\n\tASSERT_EQ(extractLastLevel(\"a\", '/'), \"a\");\n\tASSERT_EQ(extractLastLevel(\"\", '/'), \"\");\n\tASSERT_EQ(extractLastLevel(\"abc\", '/'), \"abc\");\n\t\n\t// Test with different separators\n\tASSERT_EQ(extractLastLevel(\"a\\\\b\\\\c\", '\\\\'), \"c\");\n\tASSERT_EQ(extractLastLevel(\"a.b.c\", '.'), \"c\");\n\tASSERT_EQ(extractLastLevel(\"a-b-c\", '-'), \"c\");\n}\n\n// String Stripping Tests\nTEST(StringStripCRLFTest, BasicCases)\n{\n\tstring test1 = \"hello\\r\\nworld\";\n\tStringStripCRLF(test1);\n\tASSERT_EQ(test1, \"helloworld\");\n\t\n\tstring test2 = \"hello\\rworld\";\n\tStringStripCRLF(test2);\n\tASSERT_EQ(test2, \"helloworld\");\n\t\n\tstring test3 = \"hello\\nworld\";\n\tStringStripCRLF(test3);\n\tASSERT_EQ(test3, \"helloworld\");\n\t\n\tstring test4 = \"hello\\r\\n\\r\\nworld\";\n\tStringStripCRLF(test4);\n\tASSERT_EQ(test4, \"helloworld\");\n\t\n\tstring test5 = \"hello world\";\n\tStringStripCRLF(test5);\n\tASSERT_EQ(test5, \"hello world\");\n\t\n\tstring test6 = \"\";\n\tStringStripCRLF(test6);\n\tASSERT_EQ(test6, \"\");\n}\n\nTEST(StringStripQuotesTest, BasicCases)\n{\n\tstring test1 = \"\\\"hello 
world\\\"\";\n\tStringStripQuotes(test1);\n\tASSERT_EQ(test1, \"hello world\");\n\t\n\tstring test2 = \"hello \\\"world\\\"\";\n\tStringStripQuotes(test2);\n\tASSERT_EQ(test2, \"hello world\");\n\t\n\tstring test3 = \"\\\"hello\\\" \\\"world\\\"\";\n\tStringStripQuotes(test3);\n\tASSERT_EQ(test3, \"hello world\");\n\t\n\tstring test4 = \"hello world\";\n\tStringStripQuotes(test4);\n\tASSERT_EQ(test4, \"hello world\");\n\t\n\tstring test5 = \"\";\n\tStringStripQuotes(test5);\n\tASSERT_EQ(test5, \"\");\n\t\n\tstring test6 = \"\\\"\\\"\";\n\tStringStripQuotes(test6);\n\tASSERT_EQ(test6, \"\");\n}\n\n// String Escape Tests\nTEST(StringEscapeQuotesTest, BasicCases)\n{\n\tstring test1 = \"hello \\\"world\\\"\";\n\tStringEscapeQuotes(test1);\n\tASSERT_EQ(test1, \"hello \\\\\\\"world\\\\\\\"\");\n\t\n\tstring test2 = \"\\\"hello world\\\"\";\n\tStringEscapeQuotes(test2);\n\tASSERT_EQ(test2, \"\\\\\\\"hello world\\\\\\\"\");\n\t\n\tstring test3 = \"hello world\";\n\tStringEscapeQuotes(test3);\n\tASSERT_EQ(test3, \"hello world\");\n\t\n\tstring test4 = \"hello\\\\\\\"world\";\n\tStringEscapeQuotes(test4);\n\tASSERT_EQ(test4, \"hello\\\\\\\"world\"); // Already escaped\n\t\n\tstring test5 = \"\";\n\tStringEscapeQuotes(test5);\n\tASSERT_EQ(test5, \"\");\n\t\n\tstring test6 = \"\\\"\";\n\tStringEscapeQuotes(test6);\n\tASSERT_EQ(test6, \"\\\\\\\"\");\n}\n\nTEST(EscapeTest, BasicCases)\n{\n\tASSERT_EQ(escape(\"hello world\"), \"hello world\");\n\tASSERT_EQ(escape(\"hello \\\"world\\\"\"), \"hello \\\\\\\"world\\\\\\\"\");\n\tASSERT_EQ(escape(\"hello \\\\world\"), \"hello \\\\\\\\world\");\n\tASSERT_EQ(escape(\"hello /world\"), \"hello /world\");\n\tASSERT_EQ(escape(\"hello \\\"world\\\" \\\\test /path\"), \"hello \\\\\\\"world\\\\\\\" \\\\\\\\test /path\");\n\tASSERT_EQ(escape(\"\"), \"\");\n\tASSERT_EQ(escape(\"no_special_chars\"), \"no_special_chars\");\n\tASSERT_EQ(escape(\"\\\\\"), \"\\\\\\\\\");\n\tASSERT_EQ(escape(\"\\\"\"), \"\\\\\\\"\");\n\tASSERT_EQ(escape(\"/\"), 
\"/\");\n\tASSERT_EQ(escape(\"\\\\\\\"\"), \"\\\\\\\"\"); // Already escaped\n\tASSERT_EQ(escape(\"\\\\/\"), \"\\\\/\"); // Already escaped\n}\n\n// String Replace All Ex Tests\nTEST(StringReplaceAllExTest, BasicCases)\n{\n\tstring test1 = \"hello world hello\";\n\tStringReplaceAllEx(test1, \"hello\", \"hi\");\n\tASSERT_EQ(test1, \"hi world hi\");\n\t\n\tstring test2 = \"hello hello hello\";\n\tStringReplaceAllEx(test2, \"hello\", \"hi\");\n\tASSERT_EQ(test2, \"hi hi hi\");\n\t\n\tstring test3 = \"hello world\";\n\tStringReplaceAllEx(test3, \"hello\", \"hi\");\n\tASSERT_EQ(test3, \"hi world\");\n\t\n\tstring test4 = \"hello world\";\n\tStringReplaceAllEx(test4, \"world\", \"earth\");\n\tASSERT_EQ(test4, \"hello earth\");\n\t\n\tstring test5 = \"hello world\";\n\tStringReplaceAllEx(test5, \"xyz\", \"abc\");\n\tASSERT_EQ(test5, \"hello world\"); // No change\n\t\n\tstring test6 = \"\";\n\tStringReplaceAllEx(test6, \"hello\", \"hi\");\n\tASSERT_EQ(test6, \"\");\n\t\n\tstring test7 = \"hello\";\n\tStringReplaceAllEx(test7, \"hello\", \"\");\n\tASSERT_EQ(test7, \"\");\n\t\n\tstring test8 = \"hellohello\";\n\tStringReplaceAllEx(test8, \"hello\", \"hi\");\n\tASSERT_EQ(test8, \"hihi\");\n}\n\n// Trim Function Tests (C-style)\nTEST(TrimTest, BasicCases)\n{\n\tchar test1[] = \"  hello world  \";\n\tchar* result1 = trim(test1);\n\tASSERT_STREQ(result1, \"hello world\");\n\t\n\tchar test2[] = \"hello world\";\n\tchar* result2 = trim(test2);\n\tASSERT_STREQ(result2, \"hello world\");\n\t\n\tchar test3[] = \"  hello world\";\n\tchar* result3 = trim(test3);\n\tASSERT_STREQ(result3, \"hello world\");\n\t\n\tchar test4[] = \"hello world  \";\n\tchar* result4 = trim(test4);\n\tASSERT_STREQ(result4, \"hello world\");\n\t\n\tchar test5[] = \"   \";\n\tchar* result5 = trim(test5);\n\tASSERT_STREQ(result5, \"\");\n\t\n\tchar test6[] = \"\";\n\tchar* result6 = trim(test6);\n\tASSERT_STREQ(result6, \"\");\n\t\n\tchar test7[] = \"hello\";\n\tchar* result7 = 
trim(test7);\n\tASSERT_STREQ(result7, \"hello\");\n}\n\n// Edge Cases and Error Conditions\nTEST(StringUtilsEdgeCases, EmptyAndNullCases)\n{\n\t// Test empty strings\n\tstring empty = \"\";\n\tASSERT_EQ(StringSlashFix(empty), \"\");\n\tASSERT_EQ(evaluateParentPath(empty, '/'), \"\");\n\tASSERT_EQ(extractLastLevel(empty, '/'), \"\");\n\tASSERT_EQ(StringStripWhiteSpacesAll(empty), \"\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(empty), \"\");\n\tASSERT_EQ(urlEncode(empty), \"\");\n\tASSERT_EQ(urlDecode(empty), \"\");\n\tASSERT_EQ(StringLTrim(empty), \"\");\n\tASSERT_EQ(StringRTrim(empty), \"\");\n\tASSERT_EQ(StringTrim(empty), \"\");\n\tASSERT_EQ(escape(empty), \"\");\n\t\n\t// Test single character strings\n\tASSERT_EQ(StringSlashFix(\"/\"), \"\");\n\tASSERT_EQ(StringSlashFix(\"a\"), \"a\");\n\tASSERT_EQ(evaluateParentPath(\"/\", '/'), \"/\");\n\tASSERT_EQ(evaluateParentPath(\"a\", '/'), \"a\");\n\tASSERT_EQ(extractLastLevel(\"/\", '/'), \"\");\n\tASSERT_EQ(extractLastLevel(\"a\", '/'), \"a\");\n}\n\nTEST(StringUtilsBoundaryCases, BoundaryConditions)\n{\n\t// Test very long strings\n\tstring longString(1000, 'a');\n\tASSERT_EQ(StringStripWhiteSpacesAll(longString), longString);\n\t\n\t// Test strings with only special characters\n\tASSERT_EQ(urlEncode(\"!@#$%^&*()\"), \"%21%40%23%24%25%5E%26%2A%28%29\");\n\t\n\t// Test strings with mixed content\n\tstring mixed = \"Hello World 123 !@#\";\n\tASSERT_EQ(StringStripWhiteSpacesAll(mixed), \"HelloWorld123!@#\");\n\tASSERT_EQ(StringStripWhiteSpacesExtra(mixed), \"Hello World 123 !@#\");\n}\n\n// Additional comprehensive tests for existing functions\nTEST(StringReplaceAllTest, AdditionalCases)\n{\n\tstring test1 = \"hello world hello world\";\n\tStringReplaceAll(test1, \"hello\", \"hi\");\n\tASSERT_EQ(test1, \"hi world hi world\");\n\t\n\tstring test2 = \"hello\";\n\tStringReplaceAll(test2, \"hello\", \"hi\");\n\tASSERT_EQ(test2, \"hi\");\n\t\n\tstring test3 = \"hello\";\n\tStringReplaceAll(test3, \"world\", 
\"earth\");\n\tASSERT_EQ(test3, \"hello\"); // No change\n\t\n\tstring test4 = \"hellohello\";\n\tStringReplaceAll(test4, \"hello\", \"hi\");\n\tASSERT_EQ(test4, \"hihi\");\n\t\n\tstring test5 = \"\";\n\tStringReplaceAll(test5, \"hello\", \"hi\");\n\tASSERT_EQ(test5, \"\");\n\t\n\tstring test6 = \"hello\";\n\tStringReplaceAll(test6, \"hello\", \"\");\n\tASSERT_EQ(test6, \"\");\n}\n\nTEST(StringAroundTest, AdditionalCases)\n{\n\tstring testString = \"abcdefghijklmnopqrstuvwxyz\";\n\t\n\t// Test boundary conditions\n\tASSERT_EQ(StringAround(testString, 0, 5, 0), \"abcde\");\n\tASSERT_EQ(StringAround(testString, 25, 5, 5), \"uvwxyz\");\n\tASSERT_EQ(StringAround(testString, 13, 10, 10), \"defghijklmnopqrstuvw\");\n\t\n\t// Test with position beyond string length\n\tASSERT_EQ(StringAround(testString, 30, 5, 5), \"z\");\n\t\n\t// Test with empty string\n\tASSERT_EQ(StringAround(\"\", 0, 5, 5), \"\");\n\t\n\t// Test with short string\n\tASSERT_EQ(StringAround(\"abc\", 1, 5, 5), \"abc\");\n}\n"
  },
  {
    "path": "tests/unit/C/plugins/common/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR}/../..)\nset(GCOVR_PATH \"$ENV{HOME}/.local/bin/gcovr\")\n\n# Project configuration\nproject(RunTests)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O0\")\nset(UUIDLIB -luuid)\nset(COMMONLIB -ldl)\nset(LIBCURL_LIB -lcurl)\n\ninclude(CodeCoverage)\nappend_coverage_compiler_flags()\n\nEXECUTE_PROCESS( COMMAND grep -o ^NAME=.* /etc/os-release COMMAND cut -f2 -d\\\" COMMAND sed s/\\\"//g OUTPUT_VARIABLE os_name )\nEXECUTE_PROCESS( COMMAND grep -o ^VERSION_ID=.* /etc/os-release COMMAND cut -f2 -d\\\" COMMAND sed s/\\\"//g OUTPUT_VARIABLE os_version )\n\nif ( ( ${os_name} MATCHES \"Red Hat\" OR ${os_name} MATCHES \"CentOS\") AND ( ${os_version} MATCHES \"7\" ) )\n        add_compile_options(-D RHEL_CENTOS_7)\nendif()\n\n# Locate GTest\nfind_package(GTest REQUIRED)\ninclude_directories(${GTEST_INCLUDE_DIRS})\n\nset(BOOST_COMPONENTS system thread)\n# Late 2017 TODO: remove the following checks and always use std::regex\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        set(BOOST_COMPONENTS ${BOOST_COMPONENTS} regex)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DUSE_BOOST_REGEX\")\n    endif()\nendif()\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\ninclude_directories(../../../../../C/common/include)\ninclude_directories(../../../../../C/plugins/common/include)\ninclude_directories(../../../../../C/plugins/north/OMF/include)\ninclude_directories(../../../../../C/services/common/include)\ninclude_directories(../../../../../C/thirdparty/rapidjson/include)\ninclude_directories(../../../../../C/thirdparty/Simple-Web-Server)\n\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\nset(PLUGINS_COMMON_LIB plugins-common-lib)\nset(OMF_LIB OMF)\n\nfile(GLOB unittests \"*.cpp\")\n \n# Find python3.x dev/lib 
package\nfind_package(PkgConfig REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 COMPONENTS Interpreter Development)\nendif()\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\nlink_directories(${PROJECT_BINARY_DIR}/../../../lib)\n\n# Link runTests with what we want to test and the GTest and pthread library\nadd_executable(RunTests ${unittests})\ntarget_link_libraries(RunTests ${GTEST_LIBRARIES} pthread)\ntarget_link_libraries(RunTests ${Boost_LIBRARIES})\ntarget_link_libraries(RunTests ${UUIDLIB})\ntarget_link_libraries(RunTests ${COMMONLIB})\ntarget_link_libraries(RunTests -lssl -lcrypto -lz)\ntarget_link_libraries(RunTests ${COMMON_LIB})\ntarget_link_libraries(RunTests ${SERVICE_COMMON_LIB})\ntarget_link_libraries(RunTests ${PLUGINS_COMMON_LIB})\ntarget_link_libraries(RunTests ${OMF_LIB})\ntarget_link_libraries(RunTests ${LIBCURL_LIB})\n\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    target_link_libraries(RunTests ${PYTHON_LIBRARIES})\nelse()\n    target_link_libraries(RunTests ${Python3_LIBRARIES})\nendif()\n\nsetup_target_for_coverage_gcovr_html(\n            NAME CoverageHtml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\nsetup_target_for_coverage_gcovr_xml(\n            NAME CoverageXml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\n"
  },
  {
    "path": "tests/unit/C/plugins/common/main.cpp",
    "content": "#include <gtest/gtest.h>\n#include <resultset.h>\n#include <string.h>\n#include <string>\n\n/*\n * Fledge Readings to OMF translation unit tests\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\n\nusing namespace std;\n\nint main(int argc, char **argv) {\n    testing::InitGoogleTest(&argc, argv);\n\n    testing::GTEST_FLAG(repeat) = 5000;\n    testing::GTEST_FLAG(shuffle) = true;\n\n    return RUN_ALL_TESTS();\n}\n\n"
  },
  {
    "path": "tests/unit/C/plugins/common/test_omf_translation.cpp",
    "content": "#include <gtest/gtest.h>\n#include <reading.h>\n#include <reading_set.h>\n#include <omf.h>\n#include <rapidjson/document.h>\n#include <simple_https.h>\n#include <OMFHint.h>\n#include <omfbuffer.h>\n\n/*\n * Fledge Readings to OMF translation unit tests\n *\n * Copyright (c) 2018 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Massimiliano Pinto\n */\n\nusing namespace std;\nusing namespace rapidjson;\n\n#define TYPE_ID 1234\n\n// Mock SimpleHttps class for testing\nclass MockSimpleHttps : public HttpSender\n{\npublic:\n    MockSimpleHttps(const std::string& host_port,\n                    unsigned int connect_timeout = 0,\n                    unsigned int request_timeout = 0,\n                    unsigned int retry_sleep_Time = 1,\n                    unsigned int max_retry = 4) :\n                    m_host_port(host_port),\n                    m_retry_sleep_time(retry_sleep_Time),\n                    m_max_retry(max_retry),\n                    m_http_response(\"\"),\n                    m_auth_method(\"\"),\n                    m_auth_basic_credentials(\"\"),\n                    m_ocs_namespace(\"\"),\n                    m_ocs_tenant_id(\"\"),\n                    m_ocs_client_id(\"\"),\n                    m_ocs_client_secret(\"\"),\n                    m_ocs_token(\"\")\n    {\n    }\n\n    ~MockSimpleHttps() = default;\n\n    void setProxy(const std::string& proxy) override\n    {\n        m_proxy = proxy;\n    }\n\n    int sendRequest(const std::string& method = std::string(HTTP_SENDER_DEFAULT_METHOD),\n                   const std::string& path = std::string(HTTP_SENDER_DEFAULT_PATH),\n                   const std::vector<std::pair<std::string, std::string>>& headers = {},\n                   const std::string& payload = std::string()) override\n    {\n        // Mock implementation - return success\n        m_last_method = method;\n        m_last_path = path;\n        m_last_headers = headers;\n        
m_last_payload = payload;\n        m_http_response = \"{\\\"status\\\":\\\"success\\\"}\";\n        return 200;\n    }\n\n    std::string getHostPort() override { return m_host_port; }\n    std::string getHTTPResponse() override { return m_http_response; }\n    unsigned int getMaxRetries() override { return m_max_retry; }\n\n    void setAuthMethod(std::string& authMethod) override { m_auth_method = authMethod; }\n    void setAuthBasicCredentials(std::string& authBasicCredentials) override { m_auth_basic_credentials = authBasicCredentials; }\n    void setMaxRetries(unsigned int retries) override { m_max_retry = retries; }\n\n    // OCS configurations\n    void setOCSNamespace(std::string& OCSNamespace) override { m_ocs_namespace = OCSNamespace; }\n    void setOCSTenantId(std::string& OCSTenantId) override { m_ocs_tenant_id = OCSTenantId; }\n    void setOCSClientId(std::string& OCSClientId) override { m_ocs_client_id = OCSClientId; }\n    void setOCSClientSecret(std::string& OCSClientSecret) override { m_ocs_client_secret = OCSClientSecret; }\n    void setOCSToken(std::string& OCSToken) override { m_ocs_token = OCSToken; }\n\n    // Mock-specific methods for testing\n    std::string getLastMethod() const { return m_last_method; }\n    std::string getLastPath() const { return m_last_path; }\n    std::string getLastPayload() const { return m_last_payload; }\n    std::vector<std::pair<std::string, std::string>> getLastHeaders() const { return m_last_headers; }\n    std::string getAuthMethod() const { return m_auth_method; }\n    std::string getAuthBasicCredentials() const { return m_auth_basic_credentials; }\n\nprivate:\n    std::string m_host_port;\n    unsigned int m_retry_sleep_time;\n    unsigned int m_max_retry;\n    std::string m_http_response;\n    std::string m_proxy;\n    std::string m_auth_method;\n    std::string m_auth_basic_credentials;\n    std::string m_ocs_namespace;\n    std::string m_ocs_tenant_id;\n    std::string m_ocs_client_id;\n    std::string 
m_ocs_client_secret;\n    std::string m_ocs_token;\n    \n    // Mock tracking\n    std::string m_last_method;\n    std::string m_last_path;\n    std::vector<std::pair<std::string, std::string>> m_last_headers;\n    std::string m_last_payload;\n};\n\n// Asset Framework hierarchy mapping rules JSON text\nconst char *af_hierarchy_test01 = R\"(\n{\n\t\"names\" :\n\t\t\t{\n\t\t\t\t\t\"3814_asset2\" : \"/Building1/EastWing/GroundFloor/Room4\",\n\t\t\t\t\t\"3814_asset3\" : \"Room14\",\n\t\t\t\t\t\"3814_asset4\" : \"names_asset4\"\n\t\t\t},\n\t\"metadata\" : {\n\t\t\"exist\" : {\n\t\t\t\"sinusoid4\"     : \"md_asset4\",\n\t\t\t\"sinusoid4_1\"   : \"md_asset4_1\",\n\t\t\t\"sinusoid2\"     : \"md_asset5\"\n\t\t}\n\t}\n}\n)\";\n\nconst char *af_hierarchy_test02 = R\"(\n{\n}\n)\";\n\nmap<std::string, std::string> af_hierarchy_check01 ={\n\t// Asset_name   - Asset Framework path\n\t{\"3814_asset2\",         \"/Building1/EastWing/GroundFloor/Room4\"},\n\t{\"3814_asset3\",         \"Room14\"},\n\t{\"3814_asset4\",         \"names_asset4\"}\n};\n\nmap<std::string, std::string> af_hierarchy_check02 ={\n\n\t// Asset_name   - Asset Framework path\n\t{\"sinusoid4\",         \"md_asset4\"},\n\t{\"sinusoid4_1\",       \"md_asset4_1\"},\n\t{\"sinusoid2\",         \"md_asset5\"}\n};\n\nmap<std::string, std::string> af_hierarchy_check03 ={\n\n\t// Asset_name   - Asset Framework path\n\t{\"sinusoid4\",         \"md_asset4\"}\n};\n\n\n// 2 readings JSON text\nconst char *two_readings = R\"(\n    {\n        \"count\" : 2, \"rows\" : [\n            {\n                \"id\": 1, \"asset_code\": \"luxometer\",\n                \"reading\": { \"lux\": 45204.524 },\n                \"user_ts\": \"2018-06-11 14:00:08.532958\",\n                \"ts\": \"2018-06-12 14:47:18.872708\"\n            },\n            {\n                \"id\": 2, \"asset_code\": \"luxometer\",\n                \"reading\": { \"lux\": 76834.361 },\n                \"user_ts\": \"2018-08-21 14:00:09.32958\",\n                \"ts\": \"2018-08-22 
14:48:18.72708\"\n            }\n        ]\n    }\n)\";\n\n// 2 readings JSON text\nconst char *readings_with_different_datapoints = R\"(\n    {\n        \"count\" : 2, \"rows\" : [\n            {\n                \"id\": 1, \"asset_code\": \"A\",\n                \"reading\": { \"lux\": 45204.524 },\n                \"user_ts\": \"2018-06-11 14:00:08.532958\",\n                \"ts\": \"2018-06-12 14:47:18.872708\"\n            },\n            {\n                \"id\": 2, \"asset_code\": \"A\",\n                \"reading\": { \"temp\": 23, \"label\" : \"device_1\" },\n                \"user_ts\": \"2018-08-21 14:00:09.32958\",\n                \"ts\": \"2018-08-22 14:48:18.72708\"\n            }\n        ]\n    }\n)\";\n\n// 3 readings JSON text with unsupported data types (array)\nconst char *all_readings_with_unsupported_datapoints_types = R\"(\n    {\n        \"count\" : 4, \"rows\" : [\n            {\n                \"id\": 1, \"asset_code\": \"A\",\n                \"reading\": { \"lux\": [45204.524] },\n                \"user_ts\": \"2018-06-11 14:00:08.532958\",\n                \"ts\": \"2018-06-12 14:47:18.872708\"\n            },\n            {\n                \"id\": 2, \"asset_code\": \"B\",\n                \"reading\": { \"temp\": [87], \"label\" : [1] },\n                \"user_ts\": \"2018-08-21 14:00:09.32958\",\n                \"ts\": \"2018-08-22 14:48:18.72708\"\n            },\n            {\n                \"id\": 3, \"asset_code\": \"C\",\n                \"reading\": { \"temp\": [23.2], \"label\" : [5] },\n                \"user_ts\": \"2018-08-21 14:00:09.32958\",\n                \"ts\": \"2018-08-22 15:48:18.72708\"\n\t    }\n        ]\n    }\n)\";\n\n// 5 readings JSON text with unsupported data types (array)\nconst char *readings_with_unsupported_datapoints_types = R\"(\n    {\n        \"count\" : 4, \"rows\" : [\n            {\n                \"id\": 1, \"asset_code\": \"A\",\n                \"reading\": { \"lux\": [45204.524] 
},\n                \"user_ts\": \"2018-06-11 14:00:08.532958\",\n                \"ts\": \"2018-06-12 14:47:18.872708\"\n            },\n            {\n                \"id\": 2, \"asset_code\": \"B\",\n                \"reading\": { \"temp\": 87, \"label\" : [1] },\n                \"user_ts\": \"2018-08-21 14:00:09.32958\",\n                \"ts\": \"2018-08-22 14:48:18.72708\"\n            },\n            {\n                \"id\": 3, \"asset_code\": \"C\",\n                \"reading\": { \"temp\": [23.2], \"label\" : [5] },\n                \"user_ts\": \"2018-08-21 14:00:09.32958\",\n                \"ts\": \"2018-08-22 15:48:18.72708\"\n            },\n            {\n                \"id\": 3, \"asset_code\": \"D\",\n                \"reading\": { \"temp\": 23.2, \"label\" : 5 },\n                \"user_ts\": \"2018-08-21 14:00:09.32958\",\n                \"ts\": \"2018-08-22 15:48:18.72708\"\n            },\n            {\n                \"id\": 3, \"asset_code\": \"E\",\n                \"reading\": { \"temp\": [23.2], \"label\" : [5] },\n                \"user_ts\": \"2018-08-21 14:00:09.32958\",\n                \"ts\": \"2018-08-22 15:48:18.72708\"\n            }\n        ]\n    }\n)\";\n\nconst char *OMFHint_readings_variable_handling_1 = R\"(\n    {\n        \"count\" : 1, \"rows\" : [\n            {\n                \"id\": 1, \"asset_code\": \"fogbench_luxometer\",\n                \"reading\": { \"lux\": [45204.524], \"site\":\"Suez\" ,\"OMFHint\": {\"AFLocation\":\"/Sites/Orange/${site:unknown}/ADN C1\"} },\n                \"user_ts\": \"2018-06-11 14:00:08.532958\",\n                \"ts\": \"2018-06-12 14:47:18.872708\"\n            }\n        ]\n    }\n)\";\n\nconst char *OMFHint_readings_variable_handling_2 = R\"(\n    {\n        \"count\" : 1, \"rows\" : [\n            {\n                \"id\": 1, \"asset_code\": \"fogbench_pressure\",\n                \"reading\": { \"pressure\": [951.8],\"site\":\"Trackonomy\",\"OMFHint\": 
{\"AFLocation\":\"/Sites/Orange/${site:unknown}/ADN C1\"} },\n                \"user_ts\": \"2018-06-11 14:00:08.532958\",\n                \"ts\": \"2018-06-12 14:47:18.872708\"\n            }\n        ]\n    }\n)\";\n\nconst char *OMFHint_readings_variable_handling_3 = R\"(\n    {\n        \"count\" : 1, \"rows\" : [\n            {\n                \"id\": 1, \"asset_code\": \"fogbench_accelerometer\",\n                \"reading\": { \"x\": [951.8], \"y\": [951.8] },\n                \"user_ts\": \"2018-06-11 14:00:08.532958\",\n                \"ts\": \"2018-06-12 14:47:18.872708\"\n            }\n        ]\n    }\n)\";\n\nconst char *OMFHint_readings_variable_handling_4 = R\"(\n    {\n        \"count\" : 1, \"rows\" : [\n            {\n                \"id\": 1, \"asset_code\": \"fogbench_accelerometer\",\n\t\t\t\t\"reading\": { \"lux\": [45204.524], \"site\":\"Suez\" , \"l1\":\"Sites_new\" ,\"OMFHint\": {\"AFLocation\":\"/${l1:Sites}/${l2:Orange}/${site:unknown}/ADN C1\"} },\n\n                \"user_ts\": \"2018-06-11 14:00:08.532958\",\n                \"ts\": \"2018-06-12 14:47:18.872708\"\n            }\n        ]\n    }\n)\";\n\n\n// 2 readings translated to OMF JSON text\nconst string two_translated_readings = \"[{\\\"containerid\\\": \\\"\" + to_string(TYPE_ID) + \\\n\t\t\t\t\t\"measurement_luxometer\\\", \\\"values\\\": [{\\\"lux\\\": \"\n\t\t\t\t\t\"45204.524, \\\"Time\\\": \\\"2018-06-11T14:00:08.532958Z\\\"}]}, \"\n\t\t\t\t\t\"{\\\"containerid\\\": \\\"\" + to_string(TYPE_ID) + \\\n\t\t\t\t\t\"measurement_luxometer\\\", \\\"values\\\": \"\n\t\t\t\t\t\"[{\\\"lux\\\": 76834.361, \\\"Time\\\": \\\"2018-08-21T14:00:09.329580Z\\\"}]}]\";\n\n// Compare translated readings with a provided JSON value\nTEST(OMF_transation, TwoTranslationsCompareResult)\n{\n\tstring measurementId;\n\n\t// Build a ReadingSet from JSON\n\tReadingSet readingSet(two_readings);\n\n\tOMFBuffer payload;\n        payload.append(\"[\");\n\n\tbool sep = false;\n\t// Iterate over 
Readings via readingSet.getAllReadings()\n\tfor (vector<Reading *>::const_iterator elem = readingSet.getAllReadings().begin();\n\t\t\t\t\t\t\telem != readingSet.getAllReadings().end();\n\t\t\t\t\t\t\t++elem)\n\t{\n\t\tmeasurementId = to_string(TYPE_ID) + \"measurement_luxometer\";\n\n\t\t// Add into JSON string the OMF transformed Reading data\n\t\tif (OMFData(payload, **elem, measurementId, sep).hasData())\n\t\t\tsep = true;\n\t}\n\n\tpayload.append(\"]\");\n\tconst char *data = payload.coalesce();\n\tstring json(data);\n\tdelete[] data;\n\n\t// Compare translation\n\tASSERT_EQ(0, json.compare(two_translated_readings));\n}\n\n// Create ONE reading, convert it and run checks\nTEST(OMF_transation, OneReading)\n{\n\tstring measurementId;\n\n\tstring strVal(\"printer\");\n\tDatapointValue value(strVal);\n\t// ONE reading\n\tReading lab(\"lab\", new Datapoint(\"device\", value));\n\n\t// Add another datapoint\n\tDatapointValue id((long) 3001);\n\tlab.addDatapoint(new Datapoint(\"id\", id));\n\n\tmeasurementId = \"dummy\";\n\n\tOMFBuffer payload;\n\t// Create the OMF JSON data\n\tpayload.append(\"[\");\n\tOMFData(payload, lab, measurementId, false);\n\tpayload.append(\"]\");\n\n\tconst char *data = payload.coalesce();\n\tstring json(data);\n\tdelete[] data;\n\n\t// \"values\" key is in the output\n\tASSERT_NE(json.find(string(\"\\\"values\\\" : { \")), 0);\n\n\t// Parse JSON of translated data\n\tDocument doc;\n\tdoc.Parse(json.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tFAIL() << \"JSON parse error in translated reading\";\n\t}\n\telse\n\t{\n\t\t// JSON is an array\n\t\tASSERT_TRUE(doc.IsArray());\n\t\t// Array size is 1\n\t\tASSERT_EQ(doc.Size(), 1);\n\n\t\t// Get element 0 of the array\n\t\tValue::ConstValueIterator itr = doc.Begin();\n\n\t\t// Check it's an object\n\t\tASSERT_TRUE(itr->IsObject());\n\t\t// It has \"containerid\" and \"values\"\n\t\tASSERT_TRUE(itr->HasMember(\"containerid\") && itr->HasMember(\"values\"));\n\n\t\t// \"values\" is an 
array\n\t\tASSERT_TRUE((*itr)[\"values\"].IsArray());\n\t\t// The array element [0] is an object with 3 keys\n\t\tASSERT_EQ((*itr)[\"values\"].GetArray()[0].GetObject().MemberCount(), 3);\n\t}\n}\n\n// Build a superset of datapoints across readings sharing the same asset name\nTEST(OMF_transation, SuperSet)\n{\n\tMockSimpleHttps sender(\"0.0.0.0:0\", 10, 10, 10, 1);\n\tOMF omf(\"test\", sender, \"/\", 1, \"ABC\");\n\t// Build a ReadingSet from JSON\n\tReadingSet readingSet(readings_with_different_datapoints);\n\tvector<Reading *>readings = readingSet.getAllReadings();\n\n\tstd::map<string, Reading*> superSetDataPoints;\n\n\t// Create a superset of all found datapoints for each assetName\n\t// the superset[assetName] is then passed to routines which handle\n\t// creation of OMF data types\n\tomf.setMapObjectTypes(readings, superSetDataPoints);\n\n\t// We have only 1 superset reading as the input readings\n\t// have the same assetName\n\tASSERT_EQ(1, superSetDataPoints.size());\n\tauto it = superSetDataPoints.begin();\n\t// We have 3 datapoints in total in the superset\n\tASSERT_EQ(3, (*it).second->getDatapointCount());\n\tomf.unsetMapObjectTypes(superSetDataPoints);\n\t// Superset map is empty\n\tASSERT_EQ(0, superSetDataPoints.size());\n}\n\n// All readings contain only unsupported (array) datapoint types; expect an empty payload\nTEST(OMF_transation, AllReadingsWithUnsupportedTypes)\n{\n\tstring measurementId;\n\n\t// Build a ReadingSet from JSON\n\tReadingSet readingSet(all_readings_with_unsupported_datapoints_types);\n\n\tOMFBuffer payload;\n\tpayload.append(\"[\");\n\n\tbool pendingSeparator = false;\n\t// Iterate over Readings via readingSet.getAllReadings()\n\tfor (auto elem = readingSet.getAllReadings().begin();\n\t     elem != readingSet.getAllReadings().end();\n\t     ++elem)\n\t{\n\t\tmeasurementId = \"dummy\";\n\n\t\tif (OMFData(payload, **elem, measurementId, pendingSeparator).hasData())\n\t\t\tpendingSeparator = true;\n\t\t// Add into JSON string the OMF transformed Reading 
data\n\t}\n\n\tpayload.append(\"]\");\n\n\tconst char *data = payload.coalesce();\n\tstring json(data);\n\tdelete[] data;\n\n\tDocument doc;\n\tdoc.Parse(json.c_str());\n\tif (doc.HasParseError())\n\t{\n\t\tFAIL() << \"JSON parse error in translated readings\";\n\t}\n\telse\n\t{\n\t\t// JSON is an array\n\t\tASSERT_TRUE(doc.IsArray());\n\t\t// Array size is 0: no reading has a supported datapoint type\n\t\tASSERT_EQ(doc.Size(), 0);\n\t}\n}\n\n// Readings mixing supported and unsupported (array) datapoint types\nTEST(OMF_transation, ReadingsWithUnsupportedTypes)\n{\n\tstring measurementId;\n\n\t// Build a ReadingSet from JSON\n\tReadingSet readingSet(readings_with_unsupported_datapoints_types);\n\n\tOMFBuffer payload;\n\tpayload.append(\"[\");\n\n\tbool pendingSeparator = false;\n\t// Iterate over Readings via readingSet.getAllReadings()\n\tfor (auto elem = readingSet.getAllReadings().begin();\n\t     elem != readingSet.getAllReadings().end();\n\t     ++elem)\n\t{\n\t\tmeasurementId = \"dummy\";\n\n\t\tif (OMFData(payload, **elem, measurementId, pendingSeparator).hasData())\n\t\t\tpendingSeparator = true;\n\t\t// Add into JSON string the OMF transformed Reading data\n\t}\n\n\tpayload.append(\"]\");\n\n\tconst char *data = payload.coalesce();\n\tDocument doc;\n\tdoc.Parse(data);\n\tif (doc.HasParseError())\n\t{\n\t\tcout << data << \"\\n\";\n\t\tFAIL() << \"JSON parse error in translated readings\";\n\t}\n\telse\n\t{\n\t\t// JSON is an array\n\t\tASSERT_TRUE(doc.IsArray());\n\t\t// Array size is 2: only readings containing supported datapoint types are translated\n\t\tASSERT_EQ(doc.Size(), 2);\n\t}\n\tdelete[] data;\n}\n\n// Test the Asset Framework hierarchy functionalities\nTEST(OMF_AfHierarchy, HandleAFMapNamesGood)\n{\n\tDocument JSon;\n\n\tmap<std::string, std::string> NamesRules;\n\tmap<std::string, std::string> MetadataRulesExist;\n\n\tbool AFMapEmptyNames;\n\tbool AFMapEmptyMetadata;\n\n\t// Dummy initializations\n\tMockSimpleHttps sender(\"0.0.0.0:0\", 10, 10, 10, 1);\n\tOMF omf(\"test\", sender, \"/\", 1, \"ABC\");\n\n\tomf.setAFMap(af_hierarchy_test01);\n\n\tNamesRules = omf.getNamesRules();\n\tMetadataRulesExist = 
omf.getMetadataRulesExist();\n\n\t// Test\n\tASSERT_EQ(NamesRules,         af_hierarchy_check01);\n\tASSERT_EQ(MetadataRulesExist, af_hierarchy_check02);\n\n\tAFMapEmptyNames = omf.getAFMapEmptyNames();\n\tAFMapEmptyMetadata = omf.getAFMapEmptyMetadata();\n\n\tASSERT_EQ(AFMapEmptyNames, false);\n\tASSERT_EQ(AFMapEmptyMetadata, false);\n}\n\nTEST(OMF_AfHierarchy, HandleAFMapEmpty)\n{\n\tDocument JSon;\n\n\tmap<std::string, std::string> NamesRules;\n\tmap<std::string, std::string> MetadataRulesExist;\n\n\tbool AFMapEmptyNames;\n\tbool AFMapEmptyMetadata;\n\n\t// Dummy initializations\n\tMockSimpleHttps sender(\"0.0.0.0:0\", 10, 10, 10, 1);\n\tOMF omf(\"test\", sender, \"/\", 1, \"ABC\");\n\n\t// Test\n\tomf.setAFMap(af_hierarchy_test02);\n\n\tAFMapEmptyNames = omf.getAFMapEmptyNames();\n\tAFMapEmptyMetadata = omf.getAFMapEmptyMetadata();\n\n\tASSERT_EQ(AFMapEmptyNames, true);\n\tASSERT_EQ(AFMapEmptyMetadata, true);\n}\n\nTEST(OMF_AfHierarchy, HandleAFMapNamesBad)\n{\n\tDocument JSon;\n\n\tmap<std::string, std::string> MetadataRulesExist;\n\n\t// Dummy initializations\n\tMockSimpleHttps sender(\"0.0.0.0:0\", 10, 10, 10, 1);\n\tOMF omf(\"test\", sender, \"/\", 1, \"ABC\");\n\n\tomf.setAFMap(af_hierarchy_test01);\n\tMetadataRulesExist = omf.getMetadataRulesExist();\n\n\t// Test 02\n\tASSERT_NE(MetadataRulesExist, af_hierarchy_check03);\n}\n\n// Test PI Server naming rules for invalid chars - Control characters plus: * ? 
; { } [ ] | \\ ` ' \"\nTEST(PiServer_NamingRules, NamingRulesCheck)\n{\n\tbool changed = false;\n\n\t// Dummy initializations\n\tMockSimpleHttps sender(\"0.0.0.0:0\", 10, 10, 10, 1);\n\tOMF omf(\"test\", sender, \"/\", 1, \"ABC\");\n\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_1\", &changed), \"asset_1\");\n\tASSERT_EQ(changed, false);\n\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_*1\", &changed), \"asset__1\");\n\tASSERT_EQ(changed, true);\n\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_?1\", &changed), \"asset__1\");\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_;1\", &changed), \"asset__1\");\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_{1\", &changed), \"asset__1\");\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_}1\", &changed), \"asset__1\");\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_[1\", &changed), \"asset__1\");\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_]1\", &changed), \"asset__1\");\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_|1\", &changed), \"asset__1\");\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_\\\\1\", &changed), \"asset__1\");\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_`1\", &changed), \"asset__1\");\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_'1\", &changed), \"asset__1\");\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_\\\"1\", &changed), \"asset__1\");\n\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"_asset_1\", &changed), \"_asset_1\");\n\tASSERT_EQ(changed, false);\n\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_\\t1\", &changed), \"asset__1\");\n\tASSERT_EQ(changed, true);\n\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_\\n1\", &changed), \"asset__1\");\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_\\b1\", &changed), 
\"asset__1\");\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_\\r1\", &changed), \"asset__1\");\n\tASSERT_EQ(omf.ApplyPIServerNamingRulesInvalidChars(\"asset_\\\\1\", &changed), \"asset__1\");\n}\n\nTEST(PiServer_NamingRules, Suffix)\n{\n\tstring assetName;\n\t// Dummy initializations\n\tMockSimpleHttps sender(\"0.0.0.0:0\", 10, 10, 10, 1);\n\tOMF omf(\"test\", sender, \"/\", 1, \"ABC\");\n\n\tassetName = \"asset_1\";\n\n\tomf.setNamingScheme(NAMINGSCHEME_CONCISE);\n\tASSERT_EQ(omf.generateSuffixType(assetName, 1), \"\");\n\tASSERT_EQ(omf.generateSuffixType(assetName, 2), \"-type2\");\n\tASSERT_EQ(omf.generateSuffixType(assetName, 3), \"-type3\");\n\n\tomf.setNamingScheme(NAMINGSCHEME_SUFFIX);\n\tASSERT_EQ(omf.generateSuffixType(assetName, 1), \"-type1\");\n\tASSERT_EQ(omf.generateSuffixType(assetName, 2), \"-type2\");\n\tASSERT_EQ(omf.generateSuffixType(assetName, 3), \"-type3\");\n\n\tomf.setNamingScheme(NAMINGSCHEME_HASH);\n\tASSERT_EQ(omf.generateSuffixType(assetName, 1), \"\");\n\tASSERT_EQ(omf.generateSuffixType(assetName, 2), \"-type2\");\n\tASSERT_EQ(omf.generateSuffixType(assetName, 3), \"-type3\");\n\n\tomf.setNamingScheme(NAMINGSCHEME_COMPATIBILITY);\n\tASSERT_EQ(omf.generateSuffixType(assetName, 1), \"-type1\");\n\tASSERT_EQ(omf.generateSuffixType(assetName, 2), \"-type2\");\n\tASSERT_EQ(omf.generateSuffixType(assetName, 3), \"-type3\");\n}\n\nTEST(PiServer_NamingRules, Prefix)\n{\n\tstring asset;\n\n\t// Dummy initializations\n\tMockSimpleHttps sender(\"0.0.0.0:0\", 10, 10, 10, 1);\n\tOMF omf(\"test\", sender, \"/\", 1, \"ABC\");\n\n\tasset=\"asset_1\";\n\n\t{ // ENDPOINT_PIWEB_API\n\n\t\tomf.setPIServerEndpoint(ENDPOINT_PIWEB_API);\n\t\tomf.setNamingScheme(NAMINGSCHEME_CONCISE);\n\t\tASSERT_EQ(omf.generateMeasurementId(asset), asset);\n\n\t\tomf.setNamingScheme(NAMINGSCHEME_SUFFIX);\n\t\tASSERT_EQ(omf.generateMeasurementId(asset), asset);\n\n\t\tomf.setNamingScheme(NAMINGSCHEME_HASH);\n\t\tASSERT_EQ(omf.generateMeasurementId(asset), 
\"_1measurement_\" + asset);\n\n\t\tomf.setNamingScheme(NAMINGSCHEME_COMPATIBILITY);\n\t\tASSERT_EQ(omf.generateMeasurementId(asset), \"_1measurement_\" + asset);\n\t}\n\n\t{ // ENDPOINT_EDS\n\n\t\tomf.setPIServerEndpoint(ENDPOINT_EDS);\n\t\tomf.setNamingScheme(NAMINGSCHEME_CONCISE);\n\t\tASSERT_EQ(omf.generateMeasurementId(asset), asset);\n\n\t\tomf.setNamingScheme(NAMINGSCHEME_SUFFIX);\n\t\tASSERT_EQ(omf.generateMeasurementId(asset), asset);\n\n\t\tomf.setNamingScheme(NAMINGSCHEME_HASH);\n\t\tASSERT_EQ(omf.generateMeasurementId(asset), \"1measurement_\" + asset);\n\n\t\tomf.setNamingScheme(NAMINGSCHEME_COMPATIBILITY);\n\t\tASSERT_EQ(omf.generateMeasurementId(asset), \"1measurement_\" + asset);\n\t}\n}\n\nTEST(OMF_hints, m_chksum)\n{\n\tstring asset;\n\n\t// Case - test case having unexpected result\n\tASSERT_EQ(\n\t\tOMFHints::getHintForChecksum(\"{\\\"AFLocation\\\":\\\"/Sites/Orange/Suez/ADN C1\\\"}\"),\n\t\t\"\"\n\t);\n\n\t// Case - single rule\n\tASSERT_EQ(\n\t\tOMFHints::getHintForChecksum(\"{\\\"AFLocation\\\":123}\"),\n\t\t\"\"\n\t);\n\n\tASSERT_EQ(\n\t\tOMFHints::getHintForChecksum(\"{\\\"AFLocation\\\":\\\"/Sites/Orange/Suez/ADN C1\\\"}\"),\n\t\t\"\"\n\t);\n\n\tASSERT_EQ(\n\t\tOMFHints::getHintForChecksum(\"{\\\"number\\\":\\\"float32\\\"}\"),\n\t\t\"{\\\"number\\\":\\\"float32\\\"}\"\n\t);\n\n\t// Case - multi rules\n\tASSERT_EQ(\n\t\tOMFHints::getHintForChecksum(\"{\\\"number\\\":\\\"float32\\\",\\\"AFLocation\\\":\\\"/Sites/Orange/Trackonomy/ADN C2\\\"}\"),\n\t\t\"{\\\"number\\\":\\\"float32\\\"}\"\n\t);\n\n\tASSERT_EQ(\n\t\tOMFHints::getHintForChecksum(\"{\\\"AFLocation\\\":\\\"/Sites/Orange/Trackonomy/ADN C2\\\",\\\"number\\\":\\\"float32\\\"}\"),\n\t\t\"{\\\"number\\\":\\\"float32\\\"}\"\n\t);\n\n\tASSERT_EQ(\n\t\tOMFHints::getHintForChecksum(\"{\\\"number\\\":\\\"float32\\\",\\\"AFLocation\\\":\\\"/Sites/Orange/Trackonomy/ADN 
C2\\\",\\\"number\\\":\\\"float32\\\"}\"),\n\t\t\"{\\\"number\\\":\\\"float32\\\",\\\"number\\\":\\\"float32\\\"}\"\n\t);\n\n\t// Case - variables\n\tASSERT_EQ(\n\t\tOMFHints::getHintForChecksum(\"{\\\"AFLocation\\\":\\\"/${l1:Sites}/${l2}/${site:unknown}/ADN C1\\\"}\"),\n\t\t\"\"\n\t);\n\n\tASSERT_EQ(\n\t\tOMFHints::getHintForChecksum(\"{\\\"number\\\":\\\"float32\\\",\\\"AFLocation\\\":\\\"/${l1:Sites}/${l2}/${site:unknown}/ADN C1\\\"}\"),\n\t\t\"{\\\"number\\\":\\\"float32\\\"}\"\n\t);\n\n\tASSERT_EQ(\n\t\tOMFHints::getHintForChecksum(\"{\\\"AFLocation\\\":\\\"/${l1:Sites}/${l2}/${site:unknown}/ADN C1\\\",\\\"number\\\":\\\"float32\\\"}\"),\n\t\t\"{\\\"number\\\":\\\"float32\\\"}\"\n\t);\n\n\tASSERT_EQ(\n\t\tOMFHints::getHintForChecksum(\"{\\\"number\\\":\\\"float32\\\",\\\"AFLocation\\\":\\\"/${l1:Sites}/${l2}/${site:unknown}/ADN C1\\\",\\\"number\\\":\\\"float32\\\"}\"),\n\t\t\"{\\\"number\\\":\\\"float32\\\",\\\"number\\\":\\\"float32\\\"}\"\n\t);\n}\n\nTEST(OMF_hints, variableHandling)\n{\n\tstring AFHierarchy;\n\tstring AFHierarchyNew;\n\n\t{ // Case\n\t\tReadingSet readingSet(OMFHint_readings_variable_handling_1);\n\t\tvector<Reading *> readings = readingSet.getAllReadings();\n\n\t\tAFHierarchy = \"/Sites/Orange/${site:unknown}/ADN C1\";\n\t\tvector<Reading *>::const_iterator elem = readings.begin();\n\t\tReading *reading = *elem;\n\t\tAFHierarchyNew = OMF::variableValueHandle(*reading, AFHierarchy);\n\t\tASSERT_EQ (AFHierarchyNew, \"/Sites/Orange/Suez/ADN C1\");\n\t}\n\n\t{ // Case\n\t\tReadingSet readingSet(OMFHint_readings_variable_handling_2);\n\t\tvector<Reading *> readings = readingSet.getAllReadings();\n\n\t\tAFHierarchy = \"/Sites/Orange/${site:unknown}/ADN C1\";\n\t\tvector<Reading *>::const_iterator elem = readings.begin();\n\t\tReading *reading = *elem;\n\t\tAFHierarchyNew = OMF::variableValueHandle(*reading, AFHierarchy);\n\t\tASSERT_EQ (AFHierarchyNew, \"/Sites/Orange/Trackonomy/ADN C1\");\n\t}\n\n\t{ // Case\n\t\tReadingSet 
readingSet(OMFHint_readings_variable_handling_3);\n\t\tvector<Reading *> readings = readingSet.getAllReadings();\n\n\t\tAFHierarchy = \"/Sites/Orange/${site:unknown}/ADN C1\";\n\t\tvector<Reading *>::const_iterator elem = readings.begin();\n\t\tReading *reading = *elem;\n\t\tAFHierarchyNew = OMF::variableValueHandle(*reading, AFHierarchy);\n\t\tASSERT_EQ (AFHierarchyNew, \"/Sites/Orange/unknown/ADN C1\");\n\t}\n\n\t{ // Case - multiple variables\n\t\tReadingSet readingSet(OMFHint_readings_variable_handling_4);\n\t\tvector<Reading *> readings = readingSet.getAllReadings();\n\n\t\tAFHierarchy = \"/${l1:Sites}/${l2:Orange}/${site:unknown}/ADN C1\";\n\t\tvector<Reading *>::const_iterator elem = readings.begin();\n\t\tReading *reading = *elem;\n\t\tAFHierarchyNew = OMF::variableValueHandle(*reading, AFHierarchy);\n\t\tASSERT_EQ (AFHierarchyNew, \"/Sites_new/Orange/Suez/ADN C1\");\n\t}\n\n
\t{ // Case - default not defined ${l3}\n\t\tReadingSet readingSet(OMFHint_readings_variable_handling_4);\n\t\tvector<Reading *> readings = readingSet.getAllReadings();\n\n\t\tAFHierarchy = \"/${l1:Sites}/${l3}/${site:unknown}/ADN C1\";\n\t\tvector<Reading *>::const_iterator elem = readings.begin();\n\t\tReading *reading = *elem;\n\t\tAFHierarchyNew = OMF::variableValueHandle(*reading, AFHierarchy);\n\t\tASSERT_EQ (AFHierarchyNew, \"/Sites_new/Suez/ADN C1\");\n\t}\n\n}\n\n
TEST(OMF_hints, variableExtract)\n{\n\tbool found;\n\tstring AFHierarchy, variable, value, defaultValue;\n\n\t// Case\n\tAFHierarchy = \"/Sites/Orange/${site:unknown}/ADN C1\";\n\tfound = OMF::extractVariable(AFHierarchy, variable, value, defaultValue);\n\tASSERT_EQ (found, true);\n\tASSERT_EQ (variable, \"${site:unknown}\");\n\tASSERT_EQ (value, \"site\");\n\tASSERT_EQ (defaultValue, \"unknown\");\n\n\t// Case\n\tAFHierarchy = \"/Sites/Orange/Trackonomy/ADN C1\";\n\tfound = OMF::extractVariable(AFHierarchy, variable, value, defaultValue);\n\tASSERT_EQ (found, false);\n\tASSERT_EQ (variable, \"\");\n\tASSERT_EQ (value, \"\");\n\tASSERT_EQ (defaultValue, \"\");\n\n
\t// Case\n\tAFHierarchy = \"${Trackonomy:unknown1}/Sites/Orange/ADN C1\";\n\tfound = OMF::extractVariable(AFHierarchy, variable, value, defaultValue);\n\tASSERT_EQ (found, true);\n\tASSERT_EQ (variable, \"${Trackonomy:unknown1}\");\n\tASSERT_EQ (value, \"Trackonomy\");\n\tASSERT_EQ (defaultValue, \"unknown1\");\n\n\t// Case\n\tAFHierarchy = \"/Sites/Orange/ADN C1/${Orange:unknown12}\";\n\tfound = OMF::extractVariable(AFHierarchy, variable, value, defaultValue);\n\tASSERT_EQ (found, true);\n\tASSERT_EQ (variable, \"${Orange:unknown12}\");\n\tASSERT_EQ (value, \"Orange\");\n\tASSERT_EQ (defaultValue, \"unknown12\");\n}\n\n
TEST(OMF_MockSimpleHttps, MockFunctionality)\n{\n    // Test that the mock SimpleHttps works correctly\n    MockSimpleHttps sender(\"test-host:8080\", 10, 10, 10, 1);\n    \n    // Test basic getters\n    ASSERT_EQ(sender.getHostPort(), \"test-host:8080\");\n    ASSERT_EQ(sender.getMaxRetries(), 1);\n    \n    // Test authentication methods\n    std::string auth_method = \"basic\";\n    std::string auth_creds = \"dGVzdDp0ZXN0\"; // base64 encoded \"test:test\"\n    sender.setAuthMethod(auth_method);\n    sender.setAuthBasicCredentials(auth_creds);\n    \n    ASSERT_EQ(sender.getAuthMethod(), \"basic\");\n    ASSERT_EQ(sender.getAuthBasicCredentials(), \"dGVzdDp0ZXN0\");\n    \n
    // Test OCS configuration methods\n    std::string ocs_namespace = \"test-namespace\";\n    std::string ocs_tenant_id = \"test-tenant\";\n    std::string ocs_client_id = \"test-client\";\n    std::string ocs_client_secret = \"test-secret\";\n    std::string ocs_token = \"test-token\";\n    \n    sender.setOCSNamespace(ocs_namespace);\n    sender.setOCSTenantId(ocs_tenant_id);\n    sender.setOCSClientId(ocs_client_id);\n    sender.setOCSClientSecret(ocs_client_secret);\n    sender.setOCSToken(ocs_token);\n    \n    // Test sendRequest mock functionality\n    std::vector<std::pair<std::string, std::string>> headers = {\n        {\"Content-Type\", \"application/json\"},\n        {\"Authorization\", \"Bearer test-token\"}\n    };\n    std::string payload = \"{\\\"test\\\":\\\"data\\\"}\";\n    \n    int result = sender.sendRequest(\"POST\", \"/api/test\", headers, payload);\n    \n
    // Verify the mock captured the request details\n    ASSERT_EQ(result, 200);\n    ASSERT_EQ(sender.getLastMethod(), \"POST\");\n    ASSERT_EQ(sender.getLastPath(), \"/api/test\");\n    ASSERT_EQ(sender.getLastPayload(), \"{\\\"test\\\":\\\"data\\\"}\");\n    ASSERT_EQ(sender.getLastHeaders().size(), 2);\n    ASSERT_EQ(sender.getLastHeaders()[0].first, \"Content-Type\");\n    ASSERT_EQ(sender.getLastHeaders()[0].second, \"application/json\");\n    ASSERT_EQ(sender.getLastHeaders()[1].first, \"Authorization\");\n    ASSERT_EQ(sender.getLastHeaders()[1].second, \"Bearer test-token\");\n    \n    // Test HTTP response\n    ASSERT_EQ(sender.getHTTPResponse(), \"{\\\"status\\\":\\\"success\\\"}\");\n}\n\n
TEST(OMF_MockSimpleHttps, OMFIntegration)\n{\n    // Test that OMF class works correctly with the mock\n    MockSimpleHttps sender(\"pi-server:5460\", 10, 10, 10, 1);\n    OMF omf(\"test\", sender, \"/\", 1, \"ABC\");\n    \n    // Create a test reading\n    string strVal(\"test-device\");\n    DatapointValue value(strVal);\n    Reading testReading(\"test-asset\", new Datapoint(\"device\", value));\n    \n    // Add another datapoint\n    DatapointValue id((long) 1001);\n    testReading.addDatapoint(new Datapoint(\"id\", id));\n    \n    // Test that OMF can use the mock sender\n    // This test verifies that the OMF class can work with our mock\n    // without making actual HTTP requests\n    ASSERT_EQ(sender.getHostPort(), \"pi-server:5460\");\n    ASSERT_EQ(sender.getMaxRetries(), 1);\n    \n    // The mock should not have made any requests yet\n    ASSERT_EQ(sender.getLastMethod(), \"\");\n    ASSERT_EQ(sender.getLastPath(), \"\");\n    ASSERT_EQ(sender.getLastPayload(), \"\");\n}\n"
  },
  {
    "path": "tests/unit/C/plugins/common/test_omf_translation_piwebapi.cpp",
    "content": "/*\n * unit tests - Fledge Readings to OMF translation having PI Web API as end-point\n *\n * Copyright (c) 2019 Dianomic Systems\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Stefano Simonelli\n */\n\n#include <gtest/gtest.h>\n#include <reading.h>\n#include <reading_set.h>\n#include <omf.h>\n#include <rapidjson/document.h>\n#include <simple_https.h>\n\n#include <piwebapi.h>\n\n\nusing namespace std;\nusing namespace rapidjson;\n\n#define TO_STRING(...) DEFER(TO_STRING_)(__VA_ARGS__)\n#define DEFER(x) x\n#define TO_STRING_(...) #__VA_ARGS__\n#define QUOTE(...) TO_STRING(__VA_ARGS__)\n\n#define TYPE_ID             1234\n#define AF_HIERARCHY_1LEVEL \"fledge_data_piwebapi\"\n#define CONTAINER_ID        \"fledge_data_piwebapi_1234measurement_luxometer\"\n\n// 2 readings JSON text\nconst char *pi_web_api_two_readings = R\"(\n    {\n        \"count\" : 2, \"rows\" : [\n            {\n                \"id\": 1, \"asset_code\": \"luxometer\",\n                \"reading\": { \"lux\": 45204.524 },\n                \"user_ts\": \"2018-06-11 14:00:08.532958\",\n                \"ts\": \"2018-06-12 14:47:18.872708\"\n            },\n            {\n                \"id\": 2, \"asset_code\": \"luxometer\",\n                \"reading\": { \"lux\": 76834.361 },\n                \"user_ts\": \"2018-08-21 14:00:09.32958\",\n                \"ts\": \"2018-08-22 14:48:18.72708\"\n            }\n        ]\n    }\n)\";\n\n// 2 readings translated to OMF JSON text\nconst char *pi_web_api_two_translated_readings = QUOTE(\n\t[{\"containerid\": CONTAINER_ID,\n\t\t\"values\": [{\"lux\": 45204.524, \"Time\": \"2018-06-11T14:00:08.532958Z\"}]},\n  \t {\"containerid\": CONTAINER_ID,\n\t\t\"values\": [{\"lux\": 76834.361, \"Time\": \"2018-08-21T14:00:09.329580Z\"}]}]\n);\n\n\n// Compare translated readings with a provided JSON value\nTEST(PIWEBAPI_OMF_transation, TwoTranslationsCompareResult)\n{\n\t// Build a ReadingSet from JSON\n\tReadingSet 
readingSet(pi_web_api_two_readings);\n\n\tOMFBuffer payload;\n\tpayload.append('[');\n\n\tconst OMF_ENDPOINT PI_SERVER_END_POINT = ENDPOINT_PIWEB_API;\n\tbool sep = false;\n\n\t// Iterate over Readings via readingSet.getAllReadings()\n\tfor (vector<Reading *>::const_iterator elem = readingSet.getAllReadings().begin();\n\t     elem != readingSet.getAllReadings().end();\n\t     ++elem)\n\t{\n\t\t// Add into JSON string the OMF transformed Reading data\n\t\tsep = OMFData(payload, **elem, CONTAINER_ID, sep, PI_SERVER_END_POINT, AF_HIERARCHY_1LEVEL).hasData();\n\t}\n\n\tpayload.append(']');\n\n\tconst char *buf = payload.coalesce();\n\t// Compare translation\n\tASSERT_STREQ(buf, pi_web_api_two_translated_readings);\n\tdelete[] buf;\n\n}\n\n// Tests the handling of the PI Web API error message\nTEST(PIWEBAPI_OMF_ErrorMessages, AllCases)\n{\n\tPIWebAPI piWeb;\n\tstring json;\n\n\t// Base case\n\tASSERT_EQ(piWeb.errorMessageHandler(\"x x x\"),\"x x x\");\n\n\t// Handles error message substitution\n\tASSERT_EQ(piWeb.errorMessageHandler(\"Noroutetohost\"),\"The PI Web API server is not reachable, verify the network reachability\");\n\tASSERT_EQ(piWeb.errorMessageHandler(\"Failedtosenddata:Noroutetohost\"),\"The PI Web API server is not reachable, verify the network reachability\");\n\n\t// Handles HTTP error code recognition\n\tASSERT_EQ(piWeb.errorMessageHandler(\"n, HTTP code |503| HTTP error |<!DOCTYPE HTML PUBLIC \\\"-//W3C//DTD HTML\"),\"503 Service Unavailable\");\n\n\t// Handles error in JSON format returned by the PI Web API\n\tjson = QUOTE(\n\t\t{\n\t\t\t\"OperationId\": \"939b4d00-9041-48ee-9d50-d1365711706c\",\n\t\t\t\"Messages\": [\n\t\t\t\t{\n\t\t\t\t\t\"MessageIndex\": 1,\n\t\t\t\t\t\"Events\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"EventInfo\": {\n\t\t\t\t\t\t\t\"Message\": \"Type does not have a property with the specified index.\",\n\t\t\t\t\t\t\t\"Reason\": null,\n\t\t\t\t\t\t\t\"Suggestions\": [\n\t\t\t\t\t\t\t\"Check that the correct type is being 
used.\",\n\t\t\t\t\t\t\t\"Check that no unexpected data loss has occurred.\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"EventCode\": 6021,\n\t\t\t\t\t\t\t\"Parameters\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"Name\": \"TypeId\",\n\t\t\t\t\t\t\t\t\"Value\": \"A_4273005507977094880_fledge_ihsdev_1_sin_4816_asset_1_typename_measurement\"\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"Name\": \"TypeVersion\",\n\t\t\t\t\t\t\t\t\"Value\": \"1.0.0.0\"\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"Name\": \"Property\",\n\t\t\t\t\t\t\t\t\"Value\": \"sinusoidB\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"ExceptionInfo\": null,\n\t\t\t\t\t\t\"Severity\": \"Info\",\n\t\t\t\t\t\t\"InnerEvents\": []\n\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"Status\": {\n\t\t\t\t\t\t\"Code\": 202,\n\t\t\t\t\t\t\"HighestSeverity\": \"Info\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t);\n\n\tASSERT_EQ(piWeb.errorMessageHandler(json),\"Type does not have a property with the specified index. A_4273005507977094880_fledge_ihsdev_1_sin_4816_asset_1_typename_measurement 1.0.0.0 sinusoidB\");\n\n\n\tjson = QUOTE(\n\t{\n\t\t\"OperationId\": \"4760dad2-c08b-4606-901a-4288f1ffd7da\",\n\t\t\t\"Messages\": [\n\t\t{\n\t\t\t\"MessageIndex\": 0,\n\t\t\t\t\"Events\": [\n\t\t\t{\n\t\t\t\t\"EventInfo\": {\n\t\t\t\t\t\"Message\": \"Type does not have a property with the specified index.\",\n\t\t\t\t\t\t\"Reason\": null,\n\t\t\t\t\t\t\"Suggestions\": [\n\t\t\t\t\t\"Check that the correct type is being used.\",\n\t\t\t\t\t\t\"Check that no unexpected data loss has occurred.\"\n\t\t\t\t\t],\n\t\t\t\t\t\"EventCode\": 6021,\n\t\t\t\t\t\t\"Parameters\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"Name\": \"TypeId\",\n\t\t\t\t\t\t\t\"Value\": \"A_4273005507977094880_fledge_ihsdev_1_sin_4816_asset_1_typename_measurement\"\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"Name\": \"TypeVersion\",\n\t\t\t\t\t\t\t\"Value\": \"1.0.0.0\"\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"Name\": \"Property\",\n\t\t\t\t\t\t\t\"Value\": 
\"sinusoidB\"\n\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"ExceptionInfo\": null,\n\t\t\t\t\t\"Severity\": \"Info\",\n\t\t\t\t\t\"InnerEvents\": []\n\t\t\t}\n\t\t\t],\n\t\t\t\"Status\": {\n\t\t\t\t\"Code\": 202,\n\t\t\t\t\t\"HighestSeverity\": \"Info\"\n\t\t\t}\n\t\t}\n\t\t]\n\t}\n\t);\n\n\tASSERT_EQ(piWeb.errorMessageHandler(json),\"Type does not have a property with the specified index. A_4273005507977094880_fledge_ihsdev_1_sin_4816_asset_1_typename_measurement 1.0.0.0 sinusoidB\");\n\n\tjson = QUOTE(\n\t\t{\n\t\t\t\"OperationId\": \"xcaa5120-ca94-4eda-934e-ffc7d368c6f6\",\n\t\t\t\"Messages\": [\n\t\t\t\t{\n\t\t\t\t  \"MessageIndex\": 0,\n\t\t\t\t  \"Events\": [\n\t\t\t\t\t{\n\t\t\t\t\t  \"EventInfo\": {\n\t\t\t\t\t\t\"Message\": \"Container not found.\",\n\t\t\t\t\t\t\"Reason\": null,\n\t\t\t\t\t\t\"Suggestions\": [],\n\t\t\t\t\t\t\"EventCode\": 5002,\n\t\t\t\t\t\t\"Parameters\": [\n\t\t\t\t\t\t  {\n\t\t\t\t\t\t\t\"Name\": \"ContainerId\",\n\t\t\t\t\t\t\t\"Value\": \"4273005507977094880_1measurement_sin_4816_asset_1\"\n\t\t\t\t\t\t  }\n\t\t\t\t\t\t]\n\t\t\t\t\t  },\n\t\t\t\t\t  \"ExceptionInfo\": {\n\t\t\t\t\t\t\"Type\": \"OSIsoft.OMF.Loggers.OmfLoggableException\",\n\t\t\t\t\t\t\"Message\": \"Container not found.\"\n\t\t\t\t\t  },\n\t\t\t\t\t  \"Severity\": \"Error\",\n\t\t\t\t\t  \"InnerEvents\": []\n\t\t\t\t\t}\n\t\t\t\t  ],\n\t\t\t\t  \"Status\": {\n\t\t\t\t\t\"Code\": 404,\n\t\t\t\t\t\"HighestSeverity\": \"Error\"\n\t\t\t\t  }\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t);\n\n\t// within bad characters\n\tASSERT_EQ(piWeb.errorMessageHandler(\": errorMsg  |HTTP code |404| HTTP error |\\uFEFF\" + json + \"|\"),\"Container not found. 4273005507977094880_1measurement_sin_4816_asset_1\");\n\n\tASSERT_EQ(piWeb.errorMessageHandler(\": errorMsg  |HTTP code |404| HTTP error |\" + json + \"|\"),\"Container not found. 
4273005507977094880_1measurement_sin_4816_asset_1\");\n\n\tjson = QUOTE(\n\t\t{\n\t\t\t\"OperationId\": \"xcaa5120-ca94-4eda-934e-ffc7d368c6f6\",\n\t\t\t\"Messages\": [\n\t\t\t{\n\t\t\t\t\"MessageIndex\": 1\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"MessageIndex\": 0,\n\t\t\t\t\"Events\": [\n\t\t\t\t{\n\t\t\t\t\t\"EventInfo\": {\n\t\t\t\t\t\t\"Message\": \"Container not found.\",\n\t\t\t\t\t\t\"Reason\": null,\n\t\t\t\t\t\t\"Suggestions\": [],\n\t\t\t\t\t\t\"EventCode\": 5002,\n\t\t\t\t\t\t\"Parameters\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"Name\": \"ContainerId\",\n\t\t\t\t\t\t\t\"Value\": \"4273005507977094880_1measurement_sin_4816_asset_1\"\n\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"ExceptionInfo\": {\n\t\t\t\t\t\t\"Type\": \"OSIsoft.OMF.Loggers.OmfLoggableException\",\n\t\t\t\t\t\t\"Message\": \"Container not found.\"\n\t\t\t\t\t},\n\t\t\t\t\t\"Severity\": \"Error\",\n\t\t\t\t\t\"InnerEvents\": []\n\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t\"Status\": {\n\t\t\t\t\t\"Code\": 404,\n\t\t\t\t\t\"HighestSeverity\": \"Error\"\n\t\t\t\t}\n\t\t\t}\n\t\t\t]\n\t\t}\n\t);\n\tASSERT_EQ(piWeb.errorMessageHandler(json),\"Container not found. 
4273005507977094880_1measurement_sin_4816_asset_1\");\n\n\t// Handling reason\n\tjson = QUOTE(\n\t\t{\n\t\t\t\"OperationId\": \"f48a2233-86ba-45d2-9787-48e3e48be78a\",\n\t\t\t\"Messages\": [\n\t\t\t{\n\t\t\t\t\"MessageIndex\": null,\n\t\t\t\t\"Events\": [\n\t\t\t\t{\n\t\t\t\t\t\"EventInfo\": {\n\t\t\t\t\t\t\"Message\": \"An error parsing the OMF message(s) occurred.\",\n\t\t\t\t\t\t\"Reason\": \"The OMF request body was unable to be parsed.\",\n\t\t\t\t\t\t\"Suggestions\": [\n\t\t\t\t\t\t\"Check that the OMF request body is syntactically valid.\",\n\t\t\t\t\t\t\"Check that the request only uses features available in the OMF version specified by the 'omfversion' header.\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"EventCode\": 3002,\n\t\t\t\t\t\t\"Parameters\": []\n\t\t\t\t\t},\n\t\t\t\t\t\"ExceptionInfo\": {\n\t\t\t\t\t\t\"Type\": \"OSIsoft.OMF.Loggers.OmfLoggableException\",\n\t\t\t\t\t\t\"Message\": \"An error parsing the OMF message(s) occurred.\"\n\t\t\t\t\t},\n\t\t\t\t\t\"Severity\": \"Error\",\n\t\t\t\t\t\"InnerEvents\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"EventInfo\": null,\n\t\t\t\t\t\t\"ExceptionInfo\": {\n\t\t\t\t\t\t\t\"Type\": \"Newtonsoft.Json.JsonSerializationException\",\n\t\t\t\t\t\t\t\"Message\": \"Error converting value \\\"containerid\\\" to type 'OSIsoft.OMF.Specification.Models.V1_1.DataMessageDTO'. 
Path '[1]', line 1, position 247.\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"Severity\": null,\n\t\t\t\t\t\t\"InnerEvents\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"EventInfo\": null,\n\t\t\t\t\t\t\t\"ExceptionInfo\": {\n\t\t\t\t\t\t\t\t\"Type\": \"System.ArgumentException\",\n\t\t\t\t\t\t\t\t\"Message\": \"Could not cast or convert from System.String to OSIsoft.OMF.Specification.Models.V1_1.DataMessageDTO.\"\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"Severity\": null,\n\t\t\t\t\t\t\t\"InnerEvents\": []\n\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t\"Status\": {\n\t\t\t\t\t\"Code\": 400,\n\t\t\t\t\t\"HighestSeverity\": \"Error\"\n\t\t\t\t}\n\t\t\t}\n\t\t\t]\n\t\t}\n\t);\n\n\tASSERT_EQ(piWeb.errorMessageHandler(json),\"An error parsing the OMF message(s) occurred. The OMF request body was unable to be parsed.\");\n\n}\n\n\n\n\n\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/common/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR}/../../..)\nset(GCOVR_PATH \"$ENV{HOME}/.local/bin/gcovr\")\n\n# Project configuration\nproject(RunTests)\n\n# Fledge libraries\nset(COMMON_LIB              common-lib)\nset(SERVICE_COMMON_LIB      services-common-lib)\nset(PLUGINS_COMMON_LIB      plugins-common-lib)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O0\")\n\ninclude(CodeCoverage)\nappend_coverage_compiler_flags()\n\n# Locate GTest\nfind_package(GTest REQUIRED)\ninclude_directories(${GTEST_INCLUDE_DIRS})\ninclude_directories(../../../../../../C/plugins/storage/common/include)\ninclude_directories(../../../../../../C/common/include)\n\n# Exe creation\nlink_directories(\n        ${PROJECT_BINARY_DIR}/../../../../lib\n)\n\n\n# Find python3.x dev/lib package\nfind_package(PkgConfig REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 COMPONENTS Interpreter Development)\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\nfile(GLOB test_sources \"../../../../../../C/plugins/storage/common/*.cpp\")\n\n \n# Link runTests with what we want to test and the GTest and pthread library\nadd_executable(RunTests ${test_sources} tests.cpp)\n\n#setting BOOST_COMPONENTS to use pthread library only\nset(BOOST_COMPONENTS thread)\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ntarget_link_libraries(RunTests ${COMMON_LIB})\ntarget_link_libraries(RunTests ${SERVICE_COMMON_LIB})\ntarget_link_libraries(RunTests ${PLUGINS_COMMON_LIB})\ntarget_link_libraries(RunTests ${GTEST_LIBRARIES} pthread ${COMMONLIB} )\n\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n        target_link_libraries(${PROJECT_NAME} ${PYTHON_LIBRARIES})\nelse()\n        
target_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES})\nendif()\n\nsetup_target_for_coverage_gcovr_html(\n            NAME CoverageHtml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\nsetup_target_for_coverage_gcovr_xml(\n            NAME CoverageXml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/common/README.rst",
    "content": "*****************************************************\nUnit Test for Common Components of the Storage Plugin\n*****************************************************\n\nRequire Google Unit Test framework\n\nInstall with:\n::\n    sudo apt-get install libgtest-dev\n    cd /usr/src/gtest\n    cmake CMakeLists.txt\n    sudo make\n    sudo make install\n\nTo build the unit test:\n::\n    mkdir build\n    cd build\n    cmake ..\n    make\n    ./runTests\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/common/tests.cpp",
    "content": "#include <gtest/gtest.h>\n#include <sql_buffer.h>\n#include <string.h>\n#include <string>\n\nusing namespace std;\n\nint main(int argc, char **argv) {\n    testing::InitGoogleTest(&argc, argv);\n\n    testing::GTEST_FLAG(repeat) = 2000;\n    testing::GTEST_FLAG(shuffle) = true;\n    testing::GTEST_FLAG(break_on_failure) = true;\n\n    return RUN_ALL_TESTS();\n}\n\n/**\n * Test appending characters to the buffer\n */\nTEST(SQLBufferTest, charappend) {\nSQLBuffer\tsql;\n\n\tfor (int i = 0; i < 100; i++)\n\t\tsql.append(' ');\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(100, strlen(buf));\n\tdelete[] buf;\n}\n\n/**\n * Test appending more characers than will fit in a single\n * buffer.\n */\nTEST(SQLBufferTest, charappendlarge) {\nSQLBuffer\tsql;\n\n\tfor (int i = 0; i < 10000; i++)\n\t\tsql.append(' ');\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(10000, strlen(buf));\n\tdelete[] buf;\n}\n\n/**\n * Test appending a fixed pattern - check order of appends\n */\nTEST(SQLBufferTest, charappendpattern) {\nSQLBuffer\tsql;\n\n\tsql.append('a');\n\tsql.append('b');\n\tsql.append('c');\n\tsql.append('d');\n\tsql.append('e');\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(0, strcmp(buf, \"abcde\"));\n\tdelete[] buf;\n}\n\n/**\n * Test appending a fixed pattern - check order of appends\n * across buffer boundaries.\n */\nTEST(SQLBufferTest, charappendlongpattern) {\nSQLBuffer\tsql;\nint\t\ti;\n\n\tfor (i = 0; i < 2000; i++)\n\t{\n\t\tchar ch = 'a' + (i % 26);\n\t\tsql.append(ch);\n\t}\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(2000, strlen(buf));\n\tint result = 0;\n\t/* Check the pattern matches what we put in */\n\tfor (i = 0; i < 2000; i++)\n\t{\n\t\tchar ch = 'a' + (i % 26);\n\t\tif (buf[i] != ch)\n\t\t{\n\t\t\tresult = 1;\n\t\t}\n\t}\n\tASSERT_EQ(0, result);\n\tdelete[] buf;\n}\n\n/**\n * Test appendign null terminated strings\n */\nTEST(SQLBufferTest, strappend) {\nSQLBuffer\tsql;\n\n\tfor (int i = 0; i < 100; i++)\n\t\tsql.append(\"    
\");\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(400, strlen(buf));\n\tdelete[] buf;\n}\n\n/**\n * Test appending long null terminated strings\n */\nTEST(SQLBufferTest, strappendlarge) {\nSQLBuffer\tsql;\n\n\tfor (int i = 0; i < 10000; i++)\n\t\tsql.append(\"1234567890\");\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(100000, strlen(buf));\n\tdelete[] buf;\n}\n\n/**\n * test appending an integer\n */\nTEST(SQLBufferTest, intappend) {\nSQLBuffer\tsql;\n\n\tsql.append(1234);\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(0, strcmp(buf, \"1234\"));\n\tdelete[] buf;\n}\n\n/**\n * test appending an unsigned integer\n */\nTEST(SQLBufferTest, uintappend) {\nSQLBuffer\tsql;\nunsigned int\tvalue = 4321;\n\n\tsql.append(value);\n\tsql.append(value);\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(0, strcmp(buf, \"43214321\"));\n\tdelete[] buf;\n}\n\n/**\n * test appending a long integer\n */\nTEST(SQLBufferTest, longappend) {\nSQLBuffer\tsql;\nlong\t\tvalue = 491572107;\n\n\tsql.append(value);\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(0, strcmp(buf, \"491572107\"));\n\tdelete[] buf;\n}\n\n/**\n * test appending a negative long integer\n */\nTEST(SQLBufferTest, negappend) {\nSQLBuffer\tsql;\nlong\t\tvalue = -491572107;\n\n\tsql.append(value);\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(0, strcmp(buf, \"-491572107\"));\n\tdelete[] buf;\n}\n\n/**\n * test appending a double\n */\nTEST(SQLBufferTest, doubleappend) {\nSQLBuffer\tsql;\ndouble\t\tvalue = 3.141526;\n\n\tsql.append(value);\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(3.141526, atof(buf));\n\tdelete[] buf;\n}\n\n/**\n * Test appending a C++ string class\n */\nTEST(SQLBufferTest, stringappend) {\nSQLBuffer\tsql;\nstring\t\tstr(\"A C++ String\");\n\n\tsql.append(str);\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(0, strcmp(str.c_str(), buf));\n\tdelete[] buf;\n}\n\n/**\n * Test appending a mixture of types\n */\nTEST(SQLBufferTest, mixedappend) 
{\nSQLBuffer\tsql;\n\n\tsql.append(\"Hello\");\n\tsql.append(' ');\n\tsql.append(123456);\n\tsql.append(string(\" world\"));\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(0, strcmp(buf, \"Hello 123456 world\"));\n\tdelete[] buf;\n}\n\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/postgres/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR}/../../..)\nset(GCOVR_PATH \"$ENV{HOME}/.local/bin/gcovr\")\n\n# Project configuration\nproject(RunTests)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O0\")\n\ninclude(CodeCoverage)\nappend_coverage_compiler_flags()\n\n# libraries\nset(PG_LIB     pq)\nset(LIBCURL_LIB -lcurl)\n\n# Fledge libraries\nset(COMMON_LIB         common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\nset(PLUGINS_COMMON_LIB plugins-common-lib)\nset(PLUGIN_POSTGRES    postgres)\nset(STORAGE_COMMON_LIB storage-common-lib)\n\n# Handle Postgres on RedHat/CentOS\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} \"${CMAKE_CURRENT_SOURCE_DIR}\")\ninclude(CheckRhPg)\n\n# Locate GTest\nfind_package(GTest REQUIRED)\n\n# Include files\ninclude_directories(${GTEST_INCLUDE_DIRS})\ninclude_directories(../../../../../../C/common/include)\ninclude_directories(../../../../../../C/services/common/include)\ninclude_directories(../../../../../../C/plugins/storage/common/include)\ninclude_directories(../../../../../../C/plugins/storage/postgres/include)\ninclude_directories(../../../../../../C/thirdparty/rapidjson/include)\n\n# Handle Postgres on RedHat/CentOS\nif(${RH_POSTGRES_FOUND} EQUAL 1)\n\n    include_directories(${RH_POSTGRES_INCLUDE})\n    link_directories(${RH_POSTGRES_LIB64})\nelse()\n    include_directories(/usr/include/postgresql)\nendif()\n\n# Find python3.x dev/lib package\nfind_package(PkgConfig REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 COMPONENTS Interpreter Development)\nendif()\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    
link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\n# Source files\nfile(GLOB test_sources tests.cpp)\n\n# Exe creation\nlink_directories(\n        ${PROJECT_BINARY_DIR}/../../../../lib\n)\n\nadd_executable(${PROJECT_NAME} ${test_sources})\n\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${SERVICE_COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${PLUGINS_COMMON_LIB})\n\ntarget_link_libraries(${PROJECT_NAME} ${PLUGIN_POSTGRES})\ntarget_link_libraries(${PROJECT_NAME} ${STORAGE_COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${PG_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${LIBCURL_LIB})\n\n#setting BOOST_COMPONENTS to use pthread library only\nset(BOOST_COMPONENTS thread)\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ntarget_link_libraries(${PROJECT_NAME} ${GTEST_LIBRARIES} pthread)\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n\ttarget_link_libraries(${PROJECT_NAME} ${PYTHON_LIBRARIES})\nelse()\n\ttarget_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES})\nendif()\n\nsetup_target_for_coverage_gcovr_html(\n            NAME CoverageHtml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\nsetup_target_for_coverage_gcovr_xml(\n            NAME CoverageXml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/postgres/CheckRhPg.cmake",
    "content": "# Evaluates if rh-postgresql13 is available and enabled and identifies its path\n\nexecute_process(\n        COMMAND  \"scl\" \"enable\" \"rh-postgresql13\" \"command -v pg_isready\"\n        RESULT_VARIABLE CMD_ERROR\n        OUTPUT_VARIABLE CMD_OUTPUT\n)\n\nif(${CMD_ERROR} EQUAL 0)\n    string(REGEX REPLACE \"/bin/pg_isready[\\n]\" \"\" RH_POSTGRES_PATH ${CMD_OUTPUT})\n\n    set(RH_POSTGRES_FOUND 1)\n    set(RH_POSTGRES_INCLUDE \"${RH_POSTGRES_PATH}/include\")\n    set(RH_POSTGRES_LIB64   \"${RH_POSTGRES_PATH}/lib64\")\nelse()\n    set(RH_POSTGRES_FOUND 0)\nendif()\n\nif(${RH_POSTGRES_FOUND} EQUAL 1)\n\n    MESSAGE( STATUS \"INFO: rh-postgresql13 found in the path :${RH_POSTGRES_PATH}:\")\nelse()\n    MESSAGE( STATUS \"INFO: rh-postgresql13 not found\")\nendif()\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/postgres/README.rst",
    "content": "*****************************************************\nUnit Test for Postgres Storage Plugin\n*****************************************************\n\nRequire Google Unit Test framework\n\nInstall with:\n::\n    sudo apt-get install libgtest-dev\n    cd /usr/src/gtest\n    cmake CMakeLists.txt\n    sudo make\n    sudo make install\n\nTo build the unit test:\n::\n    mkdir build\n    cd build\n    cmake ..\n    make\n    ./runTests\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/postgres/tests.cpp",
    "content": "#include <gtest/gtest.h>\n#include <connection.h>\n#include \"gtest/gtest.h\"\n#include <logger.h>\n#include <string.h>\n#include <string>\n\nusing namespace std;\n\nint main(int argc, char **argv) {\n    testing::InitGoogleTest(&argc, argv);\n\n    testing::GTEST_FLAG(repeat) = 50;\n    testing::GTEST_FLAG(shuffle) = true;\n    testing::GTEST_FLAG(death_test_style) = \"threadsafe\";\n\n    return RUN_ALL_TESTS();\n}\n\n\nclass RowFormatDate  {\n\tpublic:\n\t\tconst char *test_case;\n\t\tconst char *expected;\n\t\tbool result;\n\n\t\tRowFormatDate(const char *p1, const char *p2, bool p3) {\n\t\t\ttest_case = p1;\n\t\t\texpected = p2;\n\t\t\tresult = p3;\n\t\t};\n};\n\nclass TestFormatDate : public ::testing::TestWithParam<RowFormatDate> {\n};\n\nTEST_P(TestFormatDate, TestConversions)\n{\nEXPECT_EXIT({\n\tLogger::getLogger()->setMinLevel(\"debug\");\n\n\tRowFormatDate const& p = GetParam();\n\n\tchar formatted_date[50] = {0};\n\tmemset (formatted_date,0 , sizeof (formatted_date));\n\tbool result  = Connection::formatDate(formatted_date, sizeof(formatted_date), p.test_case);\n\n\tstring test_case = formatted_date;\n\tstring expected = p.expected;\n\n\tbool ret = test_case.compare(expected) == 0;\n\tif (!ret)\n\t{       \n\t\tcerr << \"TestConversions doesn't return expected value\" << endl;\n\t\texit(1);\n\t}\n\tret = result == p.result;\n\n\texit(!ret); }, ::testing::ExitedWithCode(0), \"\");\n}\n\nINSTANTIATE_TEST_CASE_P(\n\tTestConversions,\n\tTestFormatDate,\n\t::testing::Values(\n\t\t// Test cases                                      Expected\n\t\tRowFormatDate(\"2019-01-01 10:01:01\"              ,\"2019-01-01 10:01:01.000000+00:00\", true),\n\t\tRowFormatDate(\"2019-02-01 10:02:01.0\"            ,\"2019-02-01 10:02:01.000000+00:00\", true),\n\t\tRowFormatDate(\"2019-02-02 10:02:02.841\"          ,\"2019-02-02 10:02:02.841000+00:00\", true),\n\t\tRowFormatDate(\"2019-02-03 10:02:03.123456\"       ,\"2019-02-03 10:02:03.123456+00:00\", 
true),\n\n\t\tRowFormatDate(\"2019-03-01 10:03:01.1+00:00\"      ,\"2019-03-01 10:03:01.100000+00:00\", true),\n\t\tRowFormatDate(\"2019-03-02 10:03:02.123+00:00\"    ,\"2019-03-02 10:03:02.123000+00:00\", true),\n\n\t\tRowFormatDate(\"2019-03-03 10:03:03.123456+00:00\" ,\"2019-03-03 10:03:03.123456+00:00\", true),\n\t\tRowFormatDate(\"2019-03-04 10:03:04.123456+01:00\" ,\"2019-03-04 10:03:04.123456+01:00\", true),\n\t\tRowFormatDate(\"2019-03-05 10:03:05.123456-01:00\" ,\"2019-03-05 10:03:05.123456-01:00\", true),\n\t\tRowFormatDate(\"2019-03-04 10:03:04.123456+02:30\" ,\"2019-03-04 10:03:04.123456+02:30\", true),\n\t\tRowFormatDate(\"2019-03-05 10:03:05.123456-02:30\" ,\"2019-03-05 10:03:05.123456-02:30\", true),\n\n\t\t// Timestamp truncated\n\t\tRowFormatDate(\"2017-10-11 15:10:51.927191906\"    ,\"2017-10-11 15:10:51.927191+00:00\", true),\n\n\t\t// Bad cases\n\t\tRowFormatDate(\"xxx\",                    \"\", false),\n\t\tRowFormatDate(\"2019-50-50 10:01:01.0\",  \"\", false)\n\t)\n);\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/sqlite/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR}/../../..)\nset(GCOVR_PATH \"$ENV{HOME}/.local/bin/gcovr\")\n\n# Project configuration\nproject(RunTests)\n\ninclude(CodeCoverage)\nappend_coverage_compiler_flags()\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O0\")\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb --coverage\")\n\n# External libraries\nset(LIBCURL_LIB -lcurl)\n\n# Fledge libraries\nset(COMMON_LIB              common-lib)\nset(SERVICE_COMMON_LIB      services-common-lib)\nset(PLUGINS_COMMON_LIB      plugins-common-lib)\nset(PLUGIN_SQLITE           sqlite)\nset(STORAGE_COMMON_LIB      storage-common-lib)\n\n\n# Locate GTest\nfind_package(GTest REQUIRED)\n\n# Include files\ninclude_directories(${GTEST_INCLUDE_DIRS})\ninclude_directories(../../../../../../C/common/include)\ninclude_directories(../../../../../../C/services/common/include)\ninclude_directories(../../../../../../C/plugins/storage/common/include)\ninclude_directories(../../../../../../C/thirdparty/rapidjson/include)\ninclude_directories(../../../../../../C/plugins/storage/sqlite/include)\ninclude_directories(../../../../../../C/plugins/storage/sqlite/common/include)\ninclude_directories(../../../../../../C/plugins/storage/sqlite/schema/include)\n\n# Source files\nfile(GLOB TEST_SOURCES tests.cpp)\nfile(GLOB COMMON_SOURCES ../sqlite/*.cpp ../sqlite/schema/*.cpp)\nfile(GLOB COMMON_SOURCES ../sqlite/common/*.cpp)\n\n# Check for SQLite3 source tree in specific location\nset(FLEDGE_SQLITE3_LIBS \"/tmp/sqlite3-pkg/src\" CACHE INTERNAL \"\")\nif(EXISTS ${FLEDGE_SQLITE3_LIBS})\n\tmessage(STATUS \"Using SLITE3 source files in ${FLEDGE_SQLITE3_LIBS}\")\n\tinclude_directories(${FLEDGE_SQLITE3_LIBS})\nendif()\n\n# Find python3.x dev/lib package\nfind_package(PkgConfig REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 COMPONENTS Interpreter 
Development)\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\n\n# Exe creation\nlink_directories(\n        ${PROJECT_BINARY_DIR}/../../../../lib\n)\n\nadd_executable(${PROJECT_NAME} ${TEST_SOURCES} ${COMMON_SOURCES})\n\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${SERVICE_COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${PLUGINS_COMMON_LIB})\n\ntarget_link_libraries(${PROJECT_NAME} ${PLUGIN_SQLITE})\ntarget_link_libraries(${PROJECT_NAME} ${STORAGE_COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${LIBCURL_LIB})\n\n#setting BOOST_COMPONENTS to use pthread library only\nset(BOOST_COMPONENTS thread)\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ntarget_link_libraries(${PROJECT_NAME} ${GTEST_LIBRARIES} pthread)\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n\ttarget_link_libraries(${PROJECT_NAME} ${PYTHON_LIBRARIES})\nelse()\n\ttarget_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES})\nendif()\n\nsetup_target_for_coverage_gcovr_html(\n            NAME CoverageHtml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\nsetup_target_for_coverage_gcovr_xml(\n            NAME CoverageXml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/sqlite/README.rst",
    "content": "*****************************************************\nUnit Test for SQLite Storage Plugin\n*****************************************************\n\nRequire Google Unit Test framework\n\nInstall with:\n::\n    sudo apt-get install libgtest-dev\n    cd /usr/src/gtest\n    cmake CMakeLists.txt\n    sudo make\n    sudo make install\n\nTo build the unit test:\n::\n    mkdir build\n    cd build\n    cmake ..\n    make\n    ./RunTests\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/sqlite/tests.cpp",
    "content": "#include <gtest/gtest.h>\n#include <connection.h>\n#include <connection_manager.h>\n#include <config_category.h>\n#include <logger.h>\n#include <string.h>\n#include <string>\n#include <readings_catalogue.h>\n\nusing namespace std;\n\nint main(int argc, char **argv) {\n    testing::InitGoogleTest(&argc, argv);\n\n    testing::GTEST_FLAG(repeat) = 50;\n    testing::GTEST_FLAG(shuffle) = true;\n    testing::GTEST_FLAG(death_test_style) = \"threadsafe\";\n\n    return RUN_ALL_TESTS();\n}\n\n\nTEST(MultiReadings, extractReadingsIdFromName) {\n\n\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\n\tASSERT_EQ(readCat->extractReadingsIdFromName(\"reading_1_1\"), 1);\n\tASSERT_EQ(readCat->extractReadingsIdFromName(\"reading_2_1\"), 1);\n\tASSERT_EQ(readCat->extractReadingsIdFromName(\"reading_10_1\"), 1);\n\tASSERT_EQ(readCat->extractReadingsIdFromName(\"reading_60_1\"), 1);\n\n\tASSERT_EQ(readCat->extractReadingsIdFromName(\"reading_1_10\"), 10);\n\tASSERT_EQ(readCat->extractReadingsIdFromName(\"reading_1_100\"), 100);\n\tASSERT_EQ(readCat->extractReadingsIdFromName(\"reading_10_100\"), 100);\n\tASSERT_EQ(readCat->extractReadingsIdFromName(\"reading_60_100\"), 100);\n}\n\nTEST(MultiReadings, extractDbIdFromName) {\n\n\tReadingsCatalogue *readCat = ReadingsCatalogue::getInstance();\n\n\tASSERT_EQ(readCat->extractDbIdFromName(\"reading_1_1\"), 1);\n\tASSERT_EQ(readCat->extractDbIdFromName(\"reading_2_1\"), 2);\n\tASSERT_EQ(readCat->extractDbIdFromName(\"reading_10_1\"), 10);\n\tASSERT_EQ(readCat->extractDbIdFromName(\"reading_60_1\"), 60);\n\n\tASSERT_EQ(readCat->extractDbIdFromName(\"reading_1_10\"), 1);\n\tASSERT_EQ(readCat->extractDbIdFromName(\"reading_1_100\"), 1);\n\tASSERT_EQ(readCat->extractDbIdFromName(\"reading_10_100\"), 10);\n\tASSERT_EQ(readCat->extractDbIdFromName(\"reading_60_100\"), 60);\n}\n\nclass RowFormatDate  {\n\tpublic:\n\t\tconst char *test_case;\n\t\tconst char *expected;\n\t\tbool result;\n\n\t\tRowFormatDate(const 
char *p1, const char *p2, bool p3) {\n\t\t\ttest_case = p1;\n\t\t\texpected = p2;\n\t\t\tresult = p3;\n\t\t};\n};\n\nclass TestFormatDate : public ::testing::TestWithParam<RowFormatDate> {\n};\n\nTEST_P(TestFormatDate, TestConversions)\n{\nEXPECT_EXIT({\n\tConnectionManager *manager = ConnectionManager::getInstance();\n\tConfigCategory category;\n\n\tmanager->setConfiguration(&category);\n\tConnection a(manager);\n\tLogger::getLogger()->setMinLevel(\"debug\");\n\n\tRowFormatDate const& p = GetParam();\n\n\tchar formatted_date[50] = {0};\n\tmemset (formatted_date,0 , sizeof (formatted_date));\n\tbool result  = a.formatDate(formatted_date, sizeof(formatted_date), p.test_case);\n\n\tstring test_case = formatted_date;\n\tstring expected = p.expected;\n\n\tbool ret = test_case.compare(expected) == 0;\n\tif (!ret)\n\t{\n\t\tcerr << \"TestConversions doesn't return expected value\" << endl;\n\t\texit(1);\n\t}\n\tret = result == p.result;\n\texit(!ret); }, ::testing::ExitedWithCode(0), \"\");\n}\n\nINSTANTIATE_TEST_CASE_P(\n\tTestConversions,\n\tTestFormatDate,\n\t::testing::Values(\n\t\t// Test cases                                      Expected\n\t\tRowFormatDate(\"2019-01-01 10:01:01\"              ,\"2019-01-01 10:01:01.000000+00:00\", true),\n\t\tRowFormatDate(\"2019-02-01 10:02:01.0\"            ,\"2019-02-01 10:02:01.000000+00:00\", true),\n\t\tRowFormatDate(\"2019-02-02 10:02:02.841\"          ,\"2019-02-02 10:02:02.841000+00:00\", true),\n\t\tRowFormatDate(\"2019-02-03 10:02:03.123456\"       ,\"2019-02-03 10:02:03.123456+00:00\", true),\n\n\t\tRowFormatDate(\"2019-03-01 10:03:01.1+00:00\"      ,\"2019-03-01 10:03:01.100000+00:00\", true),\n\t\tRowFormatDate(\"2019-03-02 10:03:02.123+00:00\"    ,\"2019-03-02 10:03:02.123000+00:00\", true),\n\n\t\tRowFormatDate(\"2019-03-03 10:03:03.123456+00:00\" ,\"2019-03-03 10:03:03.123456+00:00\", true),\n\t\tRowFormatDate(\"2019-03-04 10:03:04.123456+01:00\" ,\"2019-03-04 10:03:04.123456+01:00\", 
true),\n\t\tRowFormatDate(\"2019-03-05 10:03:05.123456-01:00\" ,\"2019-03-05 10:03:05.123456-01:00\", true),\n\t\tRowFormatDate(\"2019-03-04 10:03:04.123456+02:30\" ,\"2019-03-04 10:03:04.123456+02:30\", true),\n\t\tRowFormatDate(\"2019-03-05 10:03:05.123456-02:30\" ,\"2019-03-05 10:03:05.123456-02:30\", true),\n\n\t\t// Timestamp truncated\n\t\tRowFormatDate(\"2017-10-11 15:10:51.927191906\"    ,\"2017-10-11 15:10:51.927191+00:00\", true),\n\n\t\t// Bad cases\n\t\tRowFormatDate(\"xxx\",                    \"\", false),\n\t\tRowFormatDate(\"2019-50-50 10:01:01.0\",  \"\", false)\n\t)\n);\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/sqlitelb/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR}/../../..)\nset(GCOVR_PATH \"$ENV{HOME}/.local/bin/gcovr\")\n\n# Project configuration\nproject(RunTests)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O0\")\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb\")\n\ninclude(CodeCoverage)\nappend_coverage_compiler_flags()\n\n# External libraries\nset(LIBCURL_LIB -lcurl)\n\n# Fledge libraries\nset(COMMON_LIB              common-lib)\nset(SERVICE_COMMON_LIB      services-common-lib)\nset(PLUGINS_COMMON_LIB      plugins-common-lib)\nset(PLUGIN_SQLITELB         sqlitelb)\nset(STORAGE_COMMON_LIB      storage-common-lib)\n\n# Locate GTest\nfind_package(GTest REQUIRED)\n\n# Include files\ninclude_directories(${GTEST_INCLUDE_DIRS})\ninclude_directories(../../../../../../C/common/include)\ninclude_directories(../../../../../../C/services/common/include)\ninclude_directories(../../../../../../C/plugins/storage/common/include)\ninclude_directories(../../../../../../C/thirdparty/rapidjson/include)\ninclude_directories(../../../../../../C/plugins/storage/sqlitelb/include)\ninclude_directories(../../../../../../C/plugins/storage/sqlitelb/common/include)\ninclude_directories(../../../../../../C/plugins/storage/sqlite/schema/include)\n\n# Source files\nfile(GLOB TEST_SOURCES tests.cpp)\nfile(GLOB COMMON_SOURCES ../sqlitelb/*.cpp)\nfile(GLOB COMMON_SOURCES ../sqlitelb/common/*.cpp ../sqlite/schema/*.cpp)\n\n# Check for SQLite3 source tree in specific location\nset(FLEDGE_SQLITE3_LIBS \"/tmp/sqlite3-pkg/src\" CACHE INTERNAL \"\")\nif(EXISTS ${FLEDGE_SQLITE3_LIBS})\n\tmessage(STATUS \"Using SLITE3 source files in ${FLEDGE_SQLITE3_LIBS}\")\n\tinclude_directories(${FLEDGE_SQLITE3_LIBS})\nendif()\n\n# Find python3.x dev/lib package\nfind_package(PkgConfig REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 COMPONENTS Interpreter 
Development)\nendif()\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\n# Exe creation\nlink_directories(\n        ${PROJECT_BINARY_DIR}/../../../../lib\n)\n\nadd_executable(${PROJECT_NAME} ${TEST_SOURCES} ${COMMON_SOURCES})\n\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${SERVICE_COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${PLUGINS_COMMON_LIB})\n\ntarget_link_libraries(${PROJECT_NAME} ${PLUGIN_SQLITELB})\ntarget_link_libraries(${PROJECT_NAME} ${STORAGE_COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${LIBCURL_LIB})\n\n#setting BOOST_COMPONENTS to use pthread library only\nset(BOOST_COMPONENTS thread)\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ntarget_link_libraries(${PROJECT_NAME} ${GTEST_LIBRARIES} pthread)\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n\ttarget_link_libraries(${PROJECT_NAME} ${PYTHON_LIBRARIES})\nelse()\n\ttarget_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES})\nendif()\n\nsetup_target_for_coverage_gcovr_html(\n            NAME CoverageHtml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\nsetup_target_for_coverage_gcovr_xml(\n            NAME CoverageXml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/sqlitelb/README.rst",
    "content": "*****************************************************\nUnit Test for SQLite in memory Storage Plugin\n*****************************************************\n\nRequire Google Unit Test framework\n\nInstall with:\n::\n    sudo apt-get install libgtest-dev\n    cd /usr/src/gtest\n    cmake CMakeLists.txt\n    sudo make\n    sudo make install\n\nTo build the unit test:\n::\n    mkdir build\n    cd build\n    cmake ..\n    make\n    ./RunTests\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/sqlitelb/tests.cpp",
    "content": "#include <gtest/gtest.h>\n#include <connection.h>\n#include \"gtest/gtest.h\"\n#include <logger.h>\n#include <string.h>\n#include <string>\n\nusing namespace std;\n\nint main(int argc, char **argv) {\n    testing::InitGoogleTest(&argc, argv);\n\n    testing::GTEST_FLAG(repeat) = 50;\n    testing::GTEST_FLAG(shuffle) = true;\n    testing::GTEST_FLAG(death_test_style) = \"threadsafe\";\n\n    return RUN_ALL_TESTS();\n}\n\nTEST(Sqlitelb, dummy) {\n\n\tASSERT_EQ(1, 1);\n}\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/sqlitememory/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR}/../../..)\nset(GCOVR_PATH \"$ENV{HOME}/.local/bin/gcovr\")\n\n# Project configuration\nproject(RunTests)\n\ninclude(CodeCoverage)\nappend_coverage_compiler_flags()\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O0\")\nset(CMAKE_CXX_FLAGS_DEBUG \"-O0 -ggdb --coverage\")\n\n# External libraries\nset(LIBCURL_LIB -lcurl)\n\n# Fledge libraries\nset(COMMON_LIB              common-lib)\nset(SERVICE_COMMON_LIB      services-common-lib)\nset(PLUGINS_COMMON_LIB      plugins-common-lib)\nset(PLUGIN_SQLITEMEMORY     sqlitememory)\nset(STORAGE_COMMON_LIB      storage-common-lib)\n\n# Locate GTest\nfind_package(GTest REQUIRED)\n\nadd_definitions(-DMEMORY_READING_PLUGIN=1)\n\n# Include files\ninclude_directories(${GTEST_INCLUDE_DIRS})\ninclude_directories(../../../../../../C/common/include)\ninclude_directories(../../../../../../C/services/common/include)\ninclude_directories(../../../../../../C/plugins/storage/common/include)\ninclude_directories(../../../../../../C/thirdparty/rapidjson/include)\ninclude_directories(../../../../../../C/plugins/storage/sqlitelb/include)\ninclude_directories(../../../../../../C/plugins/storage/sqlitelb/common/include)\n\n# Source files\nfile(GLOB COMMON_SOURCES ../sqlitelb/common/*.cpp)\nfile(GLOB COMMON_SOURCES ../sqlitememory/*.cpp)\nfile(GLOB test_sources tests.cpp)\n\n# Check for SQLite3 source tree in specific location\nset(FLEDGE_SQLITE3_LIBS \"/tmp/sqlite3-pkg/src\" CACHE INTERNAL \"\")\nif(EXISTS ${FLEDGE_SQLITE3_LIBS})\n\tmessage(STATUS \"Using SLITE3 source files in ${FLEDGE_SQLITE3_LIBS}\")\n\tinclude_directories(${FLEDGE_SQLITE3_LIBS})\nendif()\n\n# Find python3.x dev/lib package\nfind_package(PkgConfig REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 COMPONENTS Interpreter Development)\nendif()\n\n# Add Python 3.x header 
files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\n# Exe creation\nlink_directories(\n        ${PROJECT_BINARY_DIR}/../../../../lib\n)\n\nadd_executable(${PROJECT_NAME} ${test_sources} ${COMMON_SOURCES})\n\ntarget_link_libraries(${PROJECT_NAME} ${COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${SERVICE_COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${PLUGINS_COMMON_LIB})\n\ntarget_link_libraries(${PROJECT_NAME} ${PLUGIN_SQLITEMEMORY})\ntarget_link_libraries(${PROJECT_NAME} ${STORAGE_COMMON_LIB})\ntarget_link_libraries(${PROJECT_NAME} ${LIBCURL_LIB})\n\n#setting BOOST_COMPONENTS to use pthread library only\nset(BOOST_COMPONENTS thread)\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ntarget_link_libraries(${PROJECT_NAME} ${GTEST_LIBRARIES} pthread)\n\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n\ttarget_link_libraries(${PROJECT_NAME} ${PYTHON_LIBRARIES})\nelse()\n\ttarget_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES})\nendif()\n\nsetup_target_for_coverage_gcovr_html(\n            NAME CoverageHtml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\nsetup_target_for_coverage_gcovr_xml(\n            NAME CoverageXml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/sqlitememory/README.rst",
    "content": "*****************************************************\nUnit Test for SQLite in memory Storage Plugin\n*****************************************************\n\nRequire Google Unit Test framework\n\nInstall with:\n::\n    sudo apt-get install libgtest-dev\n    cd /usr/src/gtest\n    cmake CMakeLists.txt\n    sudo make\n    sudo make install\n\nTo build the unit test:\n::\n    mkdir build\n    cd build\n    cmake ..\n    make\n    ./RunTests\n"
  },
  {
    "path": "tests/unit/C/plugins/storage/sqlitememory/sqlmem_tests.cpp",
    "content": "#include <gtest/gtest.h>\n#include <connection.h>\n#include \"gtest/gtest.h\"\n#include <logger.h>\n#include <string.h>\n#include <string>\n\nusing namespace std;\n\nint main(int argc, char **argv) {\n    testing::InitGoogleTest(&argc, argv);\n\n    testing::GTEST_FLAG(repeat) = 50;\n    testing::GTEST_FLAG(shuffle) = true;\n    testing::GTEST_FLAG(death_test_style) = \"threadsafe\";\n\n    return RUN_ALL_TESTS();\n}\n\n\nclass RowFormatDate  {\n\tpublic:\n\t\tconst char *test_case;\n\t\tconst char *expected;\n\t\tbool result;\n\n\t\tRowFormatDate(const char *p1, const char *p2, bool p3) {\n\t\t\ttest_case = p1;\n\t\t\texpected = p2;\n\t\t\tresult = p3;\n\t\t};\n};\n\nclass TestFormatDate : public ::testing::TestWithParam<RowFormatDate> {\n};\n\nTEST_P(TestFormatDate, TestConversions)\n{\nEXPECT_EXIT({\n\tConnection a;\n\tLogger::getLogger()->setMinLevel(\"debug\");\n\n\tRowFormatDate const& p = GetParam();\n\n\tchar formatted_date[50] = {0};\n\tmemset (formatted_date,0 , sizeof (formatted_date));\n\tbool result  = a.formatDate(formatted_date, sizeof(formatted_date), p.test_case);\n\n\tstring test_case = formatted_date;\n\tstring expected = p.expected;\n\n\tbool ret = test_case.compare(expected) == 0;\n\tif (!ret)\n\t{\n\t\tcerr << \"TestConversions doesn't return expected value\" << endl;\n\t\texit(1);\n\t}\n\tret = result == p.result;\n\texit(!ret); }, ::testing::ExitedWithCode(0), \"\");\n}\n\nINSTANTIATE_TEST_CASE_P(\n\tTestConversions,\n\tTestFormatDate,\n\t::testing::Values(\n\t\t// Test cases                                      Expected\n\t\tRowFormatDate(\"2019-01-01 10:01:01\"              ,\"2019-01-01 10:01:01.000000+00:00\", true),\n\t\tRowFormatDate(\"2019-02-01 10:02:01.0\"            ,\"2019-02-01 10:02:01.000000+00:00\", true),\n\t\tRowFormatDate(\"2019-02-02 10:02:02.841\"          ,\"2019-02-02 10:02:02.841000+00:00\", true),\n\t\tRowFormatDate(\"2019-02-03 10:02:03.123456\"       ,\"2019-02-03 10:02:03.123456+00:00\", 
true),\n\n\t\tRowFormatDate(\"2019-03-01 10:03:01.1+00:00\"      ,\"2019-03-01 10:03:01.100000+00:00\", true),\n\t\tRowFormatDate(\"2019-03-02 10:03:02.123+00:00\"    ,\"2019-03-02 10:03:02.123000+00:00\", true),\n\n\t\tRowFormatDate(\"2019-03-03 10:03:03.123456+00:00\" ,\"2019-03-03 10:03:03.123456+00:00\", true),\n\t\tRowFormatDate(\"2019-03-04 10:03:04.123456+01:00\" ,\"2019-03-04 10:03:04.123456+01:00\", true),\n\t\tRowFormatDate(\"2019-03-05 10:03:05.123456-01:00\" ,\"2019-03-05 10:03:05.123456-01:00\", true),\n\t\tRowFormatDate(\"2019-03-04 10:03:04.123456+02:30\" ,\"2019-03-04 10:03:04.123456+02:30\", true),\n\t\tRowFormatDate(\"2019-03-05 10:03:05.123456-02:30\" ,\"2019-03-05 10:03:05.123456-02:30\", true),\n\n\t\t// Timestamp truncated\n\t\tRowFormatDate(\"2017-10-11 15:10:51.927191906\"    ,\"2017-10-11 15:10:51.927191+00:00\", true),\n\n\t\t// Bad cases\n\t\tRowFormatDate(\"xxx\",                    \"\", false),\n\t\tRowFormatDate(\"2019-50-50 10:01:01.0\",  \"\", false)\n\t)\n);\n"
  },
  {
    "path": "tests/unit/C/requirements.sh",
    "content": "#!/usr/bin/env bash\n\n##--------------------------------------------------------------------\n## Copyright (c) 2025 Dianomic Systems\n##\n## Licensed under the Apache License, Version 2.0 (the \"License\");\n## you may not use this file except in compliance with the License.\n## You may obtain a copy of the License at\n##\n##     http://www.apache.org/licenses/LICENSE-2.0\n##\n## Unless required by applicable law or agreed to in writing, software\n## distributed under the License is distributed on an \"AS IS\" BASIS,\n## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n## See the License for the specific language governing permissions and\n## limitations under the License.\n##--------------------------------------------------------------------\n\n##\n## Author: Ashwini Kumar Pandey, Ashish Jabble\n##\n\nset -e\n\nOS_NAME=$(grep -oP '^NAME=\"\\K[^\"]+' /etc/os-release)\nOS_VERSION=$(grep -oP '^VERSION_ID=\"\\K[^\"]+' /etc/os-release)\nOS_CODENAME=$(grep -oP '^VERSION_CODENAME=\\K.*' /etc/os-release)\n# Function to install and build Google Test from package\ninstall_gtest_from_package() {\n    echo \"Installing Google Test via package manager...\"\n\n    # Install the libgtest-dev package\n    sudo apt-get install -y libgtest-dev\n   \n    echo \"Google Test has been successfully installed.\"\n}\n\n# Function to build and install Google Test from source\ninstall_gtest_from_source() {\n    echo \"Installing Google Test via source...\"\n\n    # Define repository name and branch\n    local GTEST_REPO_NAME=\"googletest\"\n    local GTEST_BRANCH=\"${1:-release-1.11.0}\"  # Default branch is release-1.11.0\n\n    # Clean up any existing directory\n    if [ -d \"/tmp/${GTEST_REPO_NAME}\" ]; then\n        echo \"Removing existing ${GTEST_REPO_NAME} directory in /tmp...\"\n        sudo rm -rf \"/tmp/${GTEST_REPO_NAME}\"\n    fi\n\n    # Clone the Google Test repository\n    echo \"Cloning Google Test repository 
(${GTEST_BRANCH})...\"\n    git clone https://github.com/google/${GTEST_REPO_NAME}.git --branch \"${GTEST_BRANCH}\" --depth 1 -q \"/tmp/${GTEST_REPO_NAME}\"\n\n    # Build and install Google Test\n    cd \"/tmp/${GTEST_REPO_NAME}\"\n    mkdir -p build && cd build\n    echo \"Configuring and building Google Test...\"\n    cmake .. -DBUILD_GMOCK=OFF > /dev/null\n    make -j$(nproc) > /dev/null\n    sudo make install > /dev/null\n\n    echo \"Google Test installation complete for Ubuntu 24.04 or later!\"\n}\n\n# Function to install Google Test for Red Hat-based distributions\ninstall_gtest_rhel() {\n    echo \"Installing Google Test for Red Hat-based distributions...\"\n\n    # Install required packages\n    sudo yum install -y gtest gtest-devel\n\n    echo \"Google Test installation complete for Red Hat-based distributions!\"\n}\n\n# Function to detect the platform and execute the appropriate installation\ndetect_and_install_gtest() {\n    echo \"Detected Platform: ${OS_NAME}, Version: ${OS_VERSION}\"\n    # Install based on detected OS\n    if [[ ${OS_NAME,,} == \"red hat\"* ]] || [[ ${OS_NAME,,} == \"centos\"* ]]; then\n        install_gtest_rhel\n    elif [[ ${OS_NAME,,} == \"ubuntu\" ]]; then\n        if [[ $(echo \"${OS_VERSION} >= 24.04\" | bc -l) -eq 1 ]]; then\n            install_gtest_from_source\n        else\n            install_gtest_from_package\n        fi\n    else\n        install_gtest_from_package\n    fi\n}\n\n# Main execution\ndetect_and_install_gtest\necho \"Google Test installation process completed successfully!\""
  },
  {
    "path": "tests/unit/C/scripts/RunAllTests.sh",
    "content": "#!/usr/bin/env bash\n#set -e\n#\n# This is the shell script wrapper for running C unit tests\n#\njobs=\"-j4\"\nif [[ \"$1\" == -j* ]]; then\n  jobs=\"$1\"\nfi\n# echo \"Using $jobs option for parallel make jobs\"\n\nCOVERAGE_HTML=0\nCOVERAGE_XML=0\nif [ \"$1\" = \"coverageHtml\" ]; then\n  COVERAGE_HTML=1\n  target=\"CoverageHtml\"\nelif [ \"$1\" = \"coverageXml\" ]; then\n  COVERAGE_XML=1\n  target=\"CoverageXml\"\nelif [ \"$1\" = \"coverage\" ]; then\n  echo \"Use target 'CoverageHtml' or 'CoverageXml' instead\"\n  exit 1\nfi\n\nif [ \"$FLEDGE_ROOT\" = \"\" ]; then\n\techo You must set FLEDGE_ROOT before running this script\n\texit -1\nfi\nexitstate=0\n\ncd $FLEDGE_ROOT/tests/unit/C\nif [ ! -d results ] ; then\n\tmkdir results\nfi\n\nif [ -f \"./CMakeLists.txt\" ] ; then\n\techo \"Compiling libraries...\"\n\t(rm -rf build && mkdir -p build && cd build && cmake -DCMAKE_BUILD_TYPE=Debug .. && make ${jobs} && cd ..) \n\techo \"done\"\nfi\n\ncmakefile=`find . -name CMakeLists.txt | grep -v \"\\.\\/CMakeLists.txt\" `\nfor f in $cmakefile; do\t\n\techo \"-----------------> Processing $f <-----------------\"\n\tdir=`dirname $f`\n\techo Testing $dir\n\t(\n\t\tcd $dir;\n\t\trm -rf build;\n\t\tmkdir build;\n\t\tcd build;\n\t\techo Building Tests...;\n\t\tcmake -DCMAKE_BUILD_TYPE=Debug ..;\n\t\trc=$?\n\t\tif [ $rc != 0 ]; then\n\t\t\techo cmake failed for $dir;\n\t\t\texit 1\n\t\tfi\n\t\tmake ${jobs} > /dev/null;\n\t\trc=$?\n\t\tif [ $rc != 0 ]; then\n\t\t\techo make failed for $dir;\n\t\t\texit 1\n\t\tfi\n\t\tif [ $COVERAGE_HTML -eq 0 ] && [ $COVERAGE_XML -eq 0 ] ; then\n\t\t\techo Running tests...;\n\t\t\tif [ -f \"./RunTests\" ] ; then\n\t\t\t\t./RunTests --gtest_output=xml > /tmp/results;\n\t\t\t\trc=$?\n\t\t\t\tif [ $rc != 0 ]; then\n\t\t\t\t\texit $rc\n\t\t\t\tfi\n\t\t\tfi\n\t\telse\n\t\t\techo Generating coverage reports...;\n\t\t\tfile=$(basename $f)\n\t\t\t# echo \"pwd=`pwd`, f=$f, file=$file\"\n\t\t\tgrep -q ${target} ../${file}\n\t\t\t[ $? 
-eq 0 ] && (echo Running \"make ${target}\" && make ${target}) || echo \"${target} target not found, skipping...\"\n\t\tfi\n\n\t) >/dev/null\n\trc=$?\n\tif [ $rc != 0 ]; then\n\t\techo Tests for $dir failed\n\t\tcat /tmp/results\n\t\texitstate=1\n\telse\n\t\techo All tests in $dir passed\n\tfi\n\tfile=`echo $dir | sed -e 's#./##' -e 's#/#_#g'`\n\tsource_file=$dir/build/test_detail.xml\n\tif [ -f \"$source_file\" ] ; then\n\t\tmv $source_file results/${file}.xml\n\tfi\ndone\nexit $exitstate\n"
  },
  {
    "path": "tests/unit/C/services/core/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR}/../..)\nset(GCOVR_PATH \"$ENV{HOME}/.local/bin/gcovr\")\n\n# Project configuration\nproject(RunTests)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O0\")\nset(UUIDLIB -luuid)\nset(COMMONLIB -ldl)\n\ninclude(CodeCoverage)\nappend_coverage_compiler_flags()\n\n# Locate GTest\nfind_package(GTest REQUIRED)\ninclude_directories(${GTEST_INCLUDE_DIRS})\n\nset(BOOST_COMPONENTS system thread)\n# Late 2017 TODO: remove the following checks and always use std::regex\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        set(BOOST_COMPONENTS ${BOOST_COMPONENTS} regex)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DUSE_BOOST_REGEX\")\n    endif()\nendif()\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\ninclude_directories(../../../../../C/common/include)\ninclude_directories(../../../../../C/services/common/include)\ninclude_directories(../../../../../C/services/core/include)\ninclude_directories(../../../../../C/thirdparty/rapidjson/include)\ninclude_directories(../../../../../C/thirdparty/Simple-Web-Server)\n\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\n\nfile(GLOB test_sources \"../../../../../C/services/core/*.cpp\")\nfile(GLOB unittests \"*.cpp\")\n \n# Find python3.x dev/lib package\nfind_package(PkgConfig REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 COMPONENTS Interpreter Development)\nendif()\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    
link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\nlink_directories(${PROJECT_BINARY_DIR}/../../../lib)\n\n# Link runTests with what we want to test and the GTest and pthread library\nadd_executable(RunTests ${test_sources} ${unittests})\ntarget_link_libraries(RunTests ${GTEST_LIBRARIES} pthread)\ntarget_link_libraries(RunTests  ${Boost_LIBRARIES})\ntarget_link_libraries(RunTests  ${UUIDLIB})\ntarget_link_libraries(RunTests  ${COMMONLIB})\ntarget_link_libraries(RunTests -lssl -lcrypto -lz)\ntarget_link_libraries(RunTests ${COMMON_LIB})\ntarget_link_libraries(RunTests ${SERVICE_COMMON_LIB})\n\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    target_link_libraries(RunTests ${PYTHON_LIBRARIES})\nelse()\n    target_link_libraries(RunTests ${Python3_LIBRARIES})\nendif()\n\nsetup_target_for_coverage_gcovr_html(\n            NAME CoverageHtml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\nsetup_target_for_coverage_gcovr_xml(\n            NAME CoverageXml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\n"
  },
  {
    "path": "tests/unit/C/services/core/main.cpp",
    "content": "#include <gtest/gtest.h>\n#include <resultset.h>\n#include <string.h>\n#include <string>\n\nusing namespace std;\n\nint main(int argc, char **argv) {\n    testing::InitGoogleTest(&argc, argv);\n\n    testing::GTEST_FLAG(repeat) = 300;\n    testing::GTEST_FLAG(shuffle) = true;\n    testing::GTEST_FLAG(death_test_style) = \"threadsafe\";\n\n    return RUN_ALL_TESTS();\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/core/reading_set_copy.cpp",
    "content": "/*\n * unit tests - FOGL-7353 Fledge Copy ReadingSet\n *\n * Copyright (c) 2023 Dianomic Systems, Inc.\n *\n * Released under the Apache 2.0 Licence\n *\n * Author: Devki Nandan Ghildiyal\n */\n\n#include <gtest/gtest.h>\n#include <datapoint.h>\n#include <reading.h>\n#include <reading_set.h>\n\nusing namespace std;\n\nconst char *ReadingJSON = R\"(\n    {\n        \"count\" : 1, \"rows\" : [\n            {\n                \"id\": 1, \"asset_code\": \"temperature\",\n                \"reading\": { \"degrees\": 200.65 },\n                \"user_ts\": \"2023-02-06 14:00:08.532958\",\n                \"ts\": \"2023-02-06 14:47:18.872708\"\n            }\n        ]\n    }\n)\";\n\nconst char *NestedReadingJSON = R\"(\n    {\n        \"count\" : 1, \"rows\" : [\n            {\n                \"id\": 1, \"asset_code\": \"SiteStatus\",\n                \"reading\": { \"degrees\": [200.65,34.45,500.36],\"pressure\": {\"floor1\":30, \"floor2\":34, \"floor3\":36 } },\n                \"user_ts\": \"2023-02-06 14:00:08.532958\",\n                \"ts\": \"2023-02-06 14:47:18.872708\"\n            }\n        ]\n    }\n)\";\n\nTEST(READINGSET, DeepCopyCheckReadingFromNestedJSON)\n{\n    ReadingSet *readingSet1 = new ReadingSet(NestedReadingJSON);\n    ReadingSet *readingSet2 = new ReadingSet();\n    readingSet2->copy(*readingSet1);\n\n    auto r1 = readingSet1->getAllReadings();\n    auto dp1 = r1[0]->getReadingData();\n\n    // Fetch nested datapoints\n    ASSERT_EQ(dp1[0]->getName(), \"degrees\");\n    ASSERT_EQ(dp1[0]->getData().toString(), \"[200.65, 34.45, 500.36]\");\n    ASSERT_EQ(dp1[1]->getName(), \"pressure\");\n    ASSERT_EQ(dp1[1]->getData().toString(), \"{\\\"floor1\\\":30, \\\"floor2\\\":34, \\\"floor3\\\":36}\");\n\n    auto r2 = readingSet2->getAllReadings();\n    auto dp2 = r2[0]->getReadingData();\n    ASSERT_EQ(dp2[0]->getName(), \"degrees\");\n    ASSERT_EQ(dp2[0]->getData().toString(), \"[200.65, 34.45, 500.36]\");\n    
ASSERT_EQ(dp2[1]->getName(), \"pressure\");\n    ASSERT_EQ(dp2[1]->getData().toString(), \"{\\\"floor1\\\":30, \\\"floor2\\\":34, \\\"floor3\\\":36}\");\n\n    // Check the address of datapoints\n    ASSERT_NE(dp1[0], dp2[0]);\n    ASSERT_NE(dp1[1], dp2[1]);\n\n    // Confirm there is no error of double delete\n    delete readingSet1;\n    delete readingSet2;\n}\n\nTEST(READINGSET, DeepCopyCheckReadingFromJSON)\n{\n    ReadingSet *readingSet1 = new ReadingSet(ReadingJSON);\n    ReadingSet *readingSet2 = new ReadingSet();\n    readingSet2->copy(*readingSet1);\n\n    delete readingSet1;\n\n    // Fetch value after deleting readingSet1 to check readingSet2 is pointing to different memory location\n    for (auto reading : readingSet2->getAllReadings())\n    {\n        for (auto &dp : reading->getReadingData())\n        {\n            std::string dataPointName = dp->getName();\n            DatapointValue dv = dp->getData();\n            ASSERT_EQ(dataPointName, \"degrees\");\n            ASSERT_EQ(dv.toDouble(), 200.65);\n        }\n    }\n\n    // Confirm there is no error of double delete\n    delete readingSet2;\n}\n\nTEST(READINGSET, DeepCopyCheckReadingFromVector)\n{\n    vector<Reading *> *readings1 = new vector<Reading *>;\n    long integerValue = 100;\n    DatapointValue dpv(integerValue);\n    Datapoint *value = new Datapoint(\"kPa\", dpv);\n    Reading *in = new Reading(\"Pressure\", value);\n    readings1->push_back(in);\n\n    ReadingSet *readingSet1 = new ReadingSet(readings1);\n    ReadingSet *readingSet2 = new ReadingSet();\n    readingSet2->copy(*readingSet1);\n\n    delete readingSet1;\n\n    // Fetch value after deleting readingSet1 to check readingSet2 is pointing to different memory location\n    for (auto reading : readingSet2->getAllReadings())\n    {\n        for (auto &dp : reading->getReadingData())\n        {\n            std::string dataPointName = dp->getName();\n            DatapointValue dv = dp->getData();\n            
ASSERT_EQ(dataPointName, \"kPa\");\n            ASSERT_EQ(dv.toInt(), 100);\n        }\n    }\n    // Confirm there is no error of double delete\n    delete readingSet2;\n}\n\nTEST(READINGSET, DeepCopyCheckAppend)\n{\n    vector<Reading *> *readings1 = new vector<Reading *>;\n    long integerValue = 100;\n    DatapointValue dpv(integerValue);\n    Datapoint *value = new Datapoint(\"kPa\", dpv);\n    Reading *in = new Reading(\"Pressure\", value);\n    readings1->push_back(in);\n    ReadingSet *readingSet1 = new ReadingSet(readings1);\n    delete readings1;\n\n    vector<Reading *> *readings2 = new vector<Reading *>;\n    long integerValue2 = 400;\n    DatapointValue dpv2(integerValue2);\n    Datapoint *value2 = new Datapoint(\"kPa\", dpv2);\n    Reading *in2 = new Reading(\"Pressure\", value2);\n    readings2->push_back(in2);\n    ReadingSet *readingSet2 = new ReadingSet(readings2);\n    delete readings2;\n\n    readingSet2->copy(*readingSet1);\n\n    int size = readingSet2->getAllReadings().size();\n    ASSERT_EQ(size, 2);\n    delete readingSet2;\n    delete readingSet1;\n}\n\nTEST(READINGSET, DeepCopyCheckAddress)\n{\n    vector<Reading *> *readings1 = new vector<Reading *>;\n    long integerValue = 100;\n    DatapointValue dpv(integerValue);\n    Datapoint *value = new Datapoint(\"kPa\", dpv);\n    Reading *in = new Reading(\"Pressure\", value);\n    readings1->push_back(in);\n\n    ReadingSet *readingSet1 = new ReadingSet(readings1);\n    delete readings1;\n    ReadingSet *readingSet2 = new ReadingSet();\n    readingSet2->copy(*readingSet1);\n\n    auto r1 = readingSet1->getAllReadings();\n    auto dp1 = r1[0]->getReadingData();\n\n    auto r2 = readingSet2->getAllReadings();\n    auto dp2 = r2[0]->getReadingData();\n\n    ASSERT_NE(dp1, dp2);\n    delete readingSet1;\n    delete readingSet2;\n}\n"
  },
  {
    "path": "tests/unit/C/services/core/test_service_regsitery.cpp",
    "content": "#include <gtest/gtest.h>\n#include <service_registry.h>\n\nusing namespace std;\n\nTEST(ServiceRegistryTest, Creation)\n{\n\tServiceRegistry *registry = ServiceRegistry::getInstance();\n\tASSERT_NE(registry, (ServiceRegistry *)0);\n}\n\nTEST(ServiceRegistryTest, Singleton)\n{\n\tServiceRegistry *registry1 = ServiceRegistry::getInstance();\n\tServiceRegistry *registry2 = ServiceRegistry::getInstance();\n\tASSERT_EQ(registry1, registry2);\n}\n\nTEST(ServiceRegistryTest, Register)\n{\nEXPECT_EXIT({\n\tServiceRecord *record = new ServiceRecord(\"test1\", \"south\", \"http\", \"hostname\", 1234, 4321);\n\t\n\tServiceRegistry *registry = ServiceRegistry::getInstance();\n\tbool ret = registry->registerService(record);\n\tif (!ret)\n\t{\n\t\tcerr << \"registerService 'test1' returned false\" << endl;\n\t\texit(1);\n\t}\n\texit(!(ret == true)); }, ::testing::ExitedWithCode(0), \"\");\n}\n\nTEST(ServiceRegistryTest, DupRegister)\n{\nEXPECT_EXIT({\n\tServiceRecord *record = new ServiceRecord(\"test1\", \"south\", \"http\", \"hostname\", 1234, 4321);\n\tServiceRecord *dupRecord = new ServiceRecord(\"test1\", \"north\", \"http\", \"hostname\", 1234, 4321);\n\t\n\tServiceRegistry *registry = ServiceRegistry::getInstance();\n\tbool ret = registry->registerService(record);\n\tif (!ret)\n\t{\n\t\tcerr << \"Failed to register 'test1'\" << endl;\n\t\texit(1);\n\t}\n\tret = registry->registerService(dupRecord);\n\tif (ret)\n\t{\n\t\tcerr << \"Registering 'test1' twice does not return false\" << endl;\n\t}\n\texit(!(ret == false)); }, ::testing::ExitedWithCode(0), \"\");\n}\n\nTEST(ServiceRegistryTest, Overwrite)\n{\nEXPECT_EXIT({\n\tServiceRecord *record = new ServiceRecord(\"test1\", \"south\", \"http\", \"hostname\", 1234, 4321);\n\tServiceRecord *dupRecord = new ServiceRecord(\"test1\", \"south\", \"http\", \"hostname\", 1234, 666);\n\t\n\tServiceRegistry *registry = ServiceRegistry::getInstance();\n\tbool ret = registry->registerService(record);\n\tif 
(!ret)\n\t{\n\t\tcerr << \"Failed to register 'test1'\" << endl;\n\t\texit(1);\n\t}\n\tret = registry->registerService(dupRecord);\n\tif (!ret)\n\t{\n\t\tcerr << \"Failed to overwrite 'test1'\" << endl;\n\t}\n\texit(!(ret == true)); }, ::testing::ExitedWithCode(0), \"\");\n}\n\nTEST(ServiceRegistryTest, Find)\n{\nEXPECT_EXIT({\n\tServiceRecord *record = new ServiceRecord(\"findtest\", \"south\", \"http\", \"hostname\", 1234, 4321);\n\tServiceRegistry *registry = ServiceRegistry::getInstance();\n\tbool ret = registry->registerService(record);\n\tif (!ret)\n\t{\n\t\tcerr << \"Failed to register 'findtest'\" << endl;\n\t\texit(1);\n\t}\n\trecord = registry->findService(\"findtest\");\n\tif (record == (ServiceRecord *)0)\n\t{\n\t\tcerr << \"findService 'findtest' can not be NULL\" << endl;\n\t}\n\texit(!record); }, ::testing::ExitedWithCode(0), \"\");\n}\n\nTEST(ServiceRegistryTest, NotFind)\n{\nEXPECT_EXIT({\n\tServiceRegistry *registry = ServiceRegistry::getInstance();\n\tServiceRecord* record = registry->findService(\"non-existant\");\n\tif (record != (ServiceRecord *)0)\n\t{\n\t\tcerr << \"findService 'non-existant' must return NULL\" << endl;\n\t}\n\texit(!(record == (ServiceRecord *)0)); }, ::testing::ExitedWithCode(0), \"\");\n}\n\nTEST(ServiceRegistryTest, Unregister)\n{\nEXPECT_EXIT({\n\tServiceRecord *record = new ServiceRecord(\"unregisterme\", \"south\", \"http\", \"hostname\", 1234, 4321);\n\t\n\tServiceRegistry *registry = ServiceRegistry::getInstance();\n\tbool ret = registry->registerService(record);\n\tif (!ret)\n\t{\n\t\tcerr << \"registerService 'unregisterme' failed\" << endl;\n\t\texit(1);\n\t}\n\tret = registry->unRegisterService(record);\n\tif (!ret)\n\t{\n\t\tcerr << \"unRegisterService 'unregisterme' failed\" << endl;\n\t}\n\texit(!(ret == true)); }, ::testing::ExitedWithCode(0), \"\");\n}\n\nTEST(ServiceRegistryTest, UnregisterNoExistant)\n{\nEXPECT_EXIT({\n\tServiceRecord *record = new ServiceRecord(\"non-existant\", \"north\", \"http\", 
\"hostname\", 1234, 4321);\n\t\n\tServiceRegistry *registry = ServiceRegistry::getInstance();\n\tbool ret = registry->unRegisterService(record);\n\tif (ret)\n\t{\n\t\tcerr << \"unRegisterService 'non-existant' failed\" << endl;\n\t}\n\texit(!(ret == false)); }, ::testing::ExitedWithCode(0), \"\");\n}\n\nTEST(ServiceRegistryTest, uuid)\n{\nEXPECT_EXIT({\n\tServiceRecord *record = new ServiceRecord(\"testuuid\", \"south\", \"http\", \"hostname\", 1234, 4321);\n\t\n\tServiceRegistry *registry = ServiceRegistry::getInstance();\n\tbool ret = registry->registerService(record);\n\tif (!ret)\n\t{\n\t\tcerr << \"registerService 'testuuid' failed\" << endl;\n\t\texit(1);\n\t}\n\tstring uuid = registry->getUUID(record);\n\tret = registry->unRegisterService(uuid);\n\tif (!ret)\n\t{\n\t\tcerr << \"unRegisterService 'testuuid' by UUID failed\" << endl;\n\t}\n\texit(!(ret == true)); }, ::testing::ExitedWithCode(0), \"\");\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR}/../../..)\nset(GCOVR_PATH \"$ENV{HOME}/.local/bin/gcovr\")\n\n# Project configuration\nproject(RunTests)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O0\")\n\nset(COMMON_LIB common-lib)\nset(SERVICE_COMMON_LIB services-common-lib)\nset(PLUGINS_COMMON_LIB plugins-common-lib)\n\ninclude(CodeCoverage)\nappend_coverage_compiler_flags()\n\n# Locate GTest\nfind_package(GTest REQUIRED)\ninclude_directories(${GTEST_INCLUDE_DIRS})\ninclude_directories(../../../../../../C/services/storage/include)\ninclude_directories(../../../../../../C/services/common/include)\ninclude_directories(../../../../../../C/common/include)\ninclude_directories(../../../../../../C/thirdparty/rapidjson/include)\n\nfile(GLOB test_sources \"../../../../../../C/services/storage/configuration.cpp\")\n\nlink_directories(${PROJECT_BINARY_DIR}/../../../../lib)\n \n# Find python3.x dev/lib package\nfind_package(PkgConfig REQUIRED)\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    pkg_check_modules(PYTHON REQUIRED python3)\nelse()\n    find_package(Python3 REQUIRED COMPONENTS Interpreter Development NumPy)\nendif()\n\n# Add Python 3.x header files\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    include_directories(${PYTHON_INCLUDE_DIRS})\nelse()\n    include_directories(${Python3_INCLUDE_DIRS} ${Python3_NUMPY_INCLUDE_DIRS})\nendif()\n\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\") \n    link_directories(${PYTHON_LIBRARY_DIRS})\nelse()\n    link_directories(${Python3_LIBRARY_DIRS})\nendif()\n\n# Link runTests with what we want to test and the GTest and pthread library\nadd_executable(RunTests ${test_sources}  tests.cpp)\n#setting BOOST_COMPONENTS to use pthread library only\nset(BOOST_COMPONENTS thread)\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ntarget_link_libraries(RunTests ${GTEST_LIBRARIES} pthread)\n\ntarget_link_libraries(RunTests 
${COMMON_LIB})\ntarget_link_libraries(RunTests ${SERVICE_COMMON_LIB})\ntarget_link_libraries(RunTests ${PLUGINS_COMMON_LIB})\n\n# Add Python 3.x library\nif(${CMAKE_VERSION} VERSION_LESS \"3.12.0\")\n    target_link_libraries(RunTests ${PYTHON_LIBRARIES})\nelse()\n    target_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES} Python3::NumPy)\nendif()\n\nsetup_target_for_coverage_gcovr_html(\n            NAME CoverageHtml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\nsetup_target_for_coverage_gcovr_xml(\n            NAME CoverageXml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/DeleteRows.json",
    "content": "{\n        \"where\"    : {\n                           \"column\"    : \"col2\",\n                           \"condition\" : \"=\",\n                           \"value\"     : 30\n                         }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/GetTable.json",
    "content": "{\n    \"condition\" : { \n                    \"column\"    : \"col2\",\n                    \"condition\" : \"=\",\n                    \"value\"     : 20 \n                  }\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/PostStorageSchema.json",
    "content": "{\n       \"schema\"   : \"test1\",\n\t\"service\"  : \"TestService\",\n        \"version\" : 1,\n        \"tables\"  : [\n                {\n\t\t     \"name\" : \"table1\",\n                        \"columns\" : [\n\t\t\t\t\t{\n                                                \"column\" : \"col1\",\n                                                \"type\" : \"varchar\",\n                                                \"size\" : 60\n                                        },\n\t\t\t\t\t{\n                                                \"column\" : \"col2\",\n                                                \"type\" : \"integer\"\n                                        },\n\t\t\t\t\t{\n                                                \"column\" : \"col3\",\n                                                \"type\" : \"real\"\n                                        },\n\t\t\t\t\t{\n                                                \"column\" : \"col4\",\n                                                \"type\" : \"integer\"\n                                        }\n\n                                ],\n\t\t\t\t\"indexes\" : [\n                             ]\n\n                },\n\t\t{\n                     \"name\" : \"table2\",\n                        \"columns\" : [\n                                        {\n                                                \"column\" : \"col1\",\n                                                \"type\" : \"varchar\",\n                                                \"size\" : 60\n                                        },\n                                        {\n                                                \"column\" : \"col2\",\n                                                \"type\" : \"double\"\n                                        },\n\t\t\t\t\t{\n                                                \"column\" : \"col3\",\n                                                \"type\" : \"double\"\n             
                           }\n\n\n                                ],\n                                \"indexes\" : [\n\t\t\t\t\t {\n                                 \"index\" : [\n                                        \"col2\"\n                                        ]\n                               }\n\n                             ]\n\n                }\n        ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/PostTable.json",
    "content": "{\n    \"col1\"         : \"Frank\",\n    \"col2\"        : 70,\n    \"col3\"       :  23.4 ,\n    \"col4\"\t: 45\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/PutQuery.json",
    "content": "{\n\t\"where\"    : {\n\t\t\t   \"column\"    : \"col2\",\n\t\t\t   \"condition\" : \"=\",\n\t\t\t   \"value\"     : 70 \n\t\t\t },\n      \"return\"   : [ \"col1\" ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/PutTable.json",
    "content": "{\n    \"condition\" : { \n                    \"column\"    : \"col2\",\n                    \"condition\" : \"=\",\n                    \"value\"     : 70 \n                  },\n    \"values\"    : {\n                    \"col2\" : 20\n                  }\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/PutTableExpression.json",
    "content": "{\n    \"condition\"    : { \n                       \"column\"    : \"col2\",\n                       \"condition\" : \"=\",\n                       \"value\"     : 20 \n                     },\n    \"expressions\" : [\n                       {\n                          \"column\"   : \"col2\",\n                          \"operator\" : \"+\",\n                          \"value\"    : 10\n                       }\n                    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/README.rst",
    "content": "********************\nStorage Engine Tests\n********************\n\nRun tests against the Storage service and compare to expected results.\n\nEither set *FLEDGE_ROOT* to point at the installation to test or pass\nthe path of the Storage service to test in the *testRunner.sh* command line.\n\ne.g.\n\t``./testRunner.sh ../../../C/services/storafe/build/storage``\n\nor\n\t``export FLEDGE_ROOT=~/fledge; ./testRunner.sh``\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/1",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/10",
    "content": "{\"count\":1,\"rows\":[{\"max_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/100",
    "content": ""
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/101",
    "content": "{ \"removed\" : 100,  \"unsentPurged\" : 100,  \"unsentRetained\" : 0,  \"readings\" : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/102",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 11 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/103",
    "content": "{\"count\":11,\"rows\":[{\"asset_code\":\"msec_003_OK\",\"reading\":{\"value\":3},\"user_ts\":\"2019-01-01 10:01:01.000000\"},{\"asset_code\":\"msec_004_OK\",\"reading\":{\"value\":4},\"user_ts\":\"2019-01-02 10:02:01.000000\"},{\"asset_code\":\"msec_005_OK\",\"reading\":{\"value\":5},\"user_ts\":\"2019-01-03 10:02:02.841000\"},{\"asset_code\":\"msec_006_OK\",\"reading\":{\"value\":6},\"user_ts\":\"2019-01-04 10:03:05.123456\"},{\"asset_code\":\"msec_007_OK\",\"reading\":{\"value\":7},\"user_ts\":\"2019-01-04 10:03:05.100000\"},{\"asset_code\":\"msec_008_OK\",\"reading\":{\"value\":8},\"user_ts\":\"2019-01-04 10:03:05.123000\"},{\"asset_code\":\"msec_009_OK\",\"reading\":{\"value\":9},\"user_ts\":\"2019-03-03 10:03:03.123456\"},{\"asset_code\":\"msec_010_OK\",\"reading\":{\"value\":10},\"user_ts\":\"2019-03-04 09:03:04.123456\"},{\"asset_code\":\"msec_011_OK\",\"reading\":{\"value\":11},\"user_ts\":\"2019-03-05 11:03:05.123456\"},{\"asset_code\":\"msec_012_OK\",\"reading\":{\"value\":12},\"user_ts\":\"2019-03-04 07:33:04.123456\"},{\"asset_code\":\"msec_013_OK\",\"reading\":{\"value\":13},\"user_ts\":\"2019-03-05 12:33:05.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/104",
    "content": "{\"count\":1,\"rows\":[{\"reading\":{\"value\":9},\"user_ts\":\"2019-03-03 10:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/105",
    "content": "{\"count\":1,\"rows\":[{\"reading\":{\"value\":9},\"user_ts_alias\":\"2019-03-03 10:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/106",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_min\":\"2019-03-03 10:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/107",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_min\":\"2019-03-03 10:03:03.123456\",\"user_ts_max\":\"2019-03-03 10:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/108",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/109",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/11",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/110",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"UPDA\",\"data\":{\"json\":\"{\\\"providers\\\":[\\\"username123\\\",\\\"ldap_test_10\\\"]}\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/111",
    "content": "{\"created\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/112",
    "content": "{\"loaded\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/113",
    "content": "{\"deleted\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/115",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 31 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/116",
    "content": "{\"count\":2,\"rows\":[{\"asset_code\":\"Asset1\",\"time\":\"2019-10-11 15:11:00+00\",\"reading\":{\"rms\":{\"max\":99,\"min\":10,\"sum\":305,\"count\":7,\"average\":43.5714285714286},\"rate\":{\"max\":96,\"min\":2,\"sum\":812,\"count\":19,\"average\":42.7368421052632}}},{\"asset_code\":\"Asset2\",\"time\":\"2019-10-11 15:11:00+00\",\"reading\":{\"lux\":{\"max\":3456,\"min\":3456,\"sum\":3456,\"count\":1,\"average\":3456},\"temp\":{\"max\":90,\"min\":12.306,\"sum\":499.306,\"count\":10,\"average\":49.9306},\"pressure\":{\"max\":1023,\"min\":8,\"sum\":3024,\"count\":6,\"average\":504}}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/12",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/13",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"ERROR:  column \\\"nonexistant\\\" of relation \\\"test\\\" does not existLINE 1: ...ERT INTO fledge.test (id, key, description, data, Nonexistan...                                                             ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/14",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"Failed to parse JSON payload\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/15",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/16",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/17",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/18",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/19",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/2",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/20",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/21",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"column\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/22",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"condition\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/23",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"value\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/24",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" property must be a JSON object\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/25",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/26",
    "content": "{ \"entryPoint\" : \"Select sort\", \"message\" : \"Missing property \\\"column\\\"\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/27",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/28",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/29",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/3",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/30",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"UPDA\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/31",
    "content": "{ \"entryPoint\" : \"update\", \"message\" : \"Missing values or expressions object in payload\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/32",
    "content": "{\"count\":2,\"rows\":[{\"count_id\":1,\"key\":\"UPDA\"},{\"count_id\":1,\"key\":\"TEST1\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/33",
    "content": "{ \"error\" : \"Unsupported URL: /fledge/nothing\" }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/34",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  relation \\\"fledge.doesntexist\\\" does not existLINE 1: SELECT * FROM fledge.doesntexist;                      ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/35",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  column \\\"doesntexist\\\" does not existLINE 1: SELECT  * FROM fledge.test WHERE \\\"doesntexist\\\" = '9';                                         ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/37",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 3 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/38",
    "content": "{\"count\":2,\"rows\":[{\"id\":966457,\"asset_code\":\"MyAsset\",reading\":{\"rate\":18.4},\"user_ts\":\"2017-09-21 15:00:09.025655+01\",\"ts\":\"2017-10-04 11:38:39.368881+01\"},{\"id\":966458,\"asset_code\":\"MyAsset\",reading\":{\"rate\":45.1},\"user_ts\":\"2017-09-21 15:03:09.025655+01\",\"ts\":\"2017-10-04 11:38:39.368881+01\"}]}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/39",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/4",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/40",
    "content": "{ \"error\" : \"Missing query parameter count\" }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/41",
    "content": "{ \"error\" : \"Missing query parameter id\" }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/42",
    "content": "{ \"removed\" : 3,  \"unsentPurged\" : 3,  \"unsentRetained\" : 0,  \"readings\" : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/43",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/44",
    "content": "{\"count\":1,\"rows\":[{\"min_id\":1,\"max_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/45",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/46",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"MyDescription\":\"A test row\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/47",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"JSONvalue\":\"test1\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/48",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\",\"time\":\"11:14:26\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/49",
    "content": "{\"count\":5,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\",\"time\":\"11:14:26\"},{\"key\":\"TEST2\",\"description\":\"A test row\",\"time\":\"11:14:27\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"time\":\"11:14:28\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"time\":\"11:14:29\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"time\":\"11:15:00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/5",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/50",
    "content": "{\"count\":6,\"rows\":[{\"key\":\"TEST4\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/51",
    "content": "{\"count\":8,\"rows\":[{\"key\":\"TEST2\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:14:27\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:14:28\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:14:29\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:15:00\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:15:33\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:16:20\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 12:14:30\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:14:30\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/52",
    "content": "{\"count\":8,\"rows\":[{\"key\":\"TEST2\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 11:14:27 am\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 11:14:28 am\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 11:14:29 am\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 11:15:00 am\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 11:15:33 am\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 11:16:20 am\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 12:14:30 pm\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:14:30 pm\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/53",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"return object must have either a column or json property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/54",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"The json property is missing a properties property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/55",
    "content": "{\"count\":1,\"rows\":[{\"Entries\":9}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/56",
    "content": "{\"count\":1,\"rows\":[{\"sum_id\":\"43\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/57",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 100 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/58",
    "content": "{\"count\":1,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":52.55,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/59",
    "content": "{\"count\":2,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":53.7721518987342,\"asset_code\":\"MyAsset\",\"timestamp\":\"2017-10-11 15:10:52+00\"},{\"min\":2.0,\"max\":96.0,\"average\":47.9523809523809,\"asset_code\":\"MyAsset\",\"timestamp\":\"2017-10-11 15:10:50+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/6",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/60",
    "content": "{\"count\":2,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":53.7721518987342,\"asset_code\":\"MyAsset\",\"bucket\":\"11-10-20177 15:10:52\"},{\"min\":2.0,\"max\":96.0,\"average\":47.9523809523809,\"asset_code\":\"MyAsset\",\"bucket\":\"11-10-20177 15:10:50\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/61",
    "content": "{\"count\":5,\"rows\":[{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:28.622315+00\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:29.622315+00\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:00.622315+00\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:33.622315+00\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:16:20.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/62",
    "content": "{\"count\":4,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:26.622315+00\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:27.422315+00\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 12:14:30.622315+00\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:30.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/63",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:26.622315+00\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:27.422315+00\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:28.622315+00\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:29.622315+00\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:00.622315+00\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:33.622315+00\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:16:20.622315+00\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 12:14:30.622315+00\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:30.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/64",
    "content": "{ \"entryPoint\" : \"update\", \"message\" : \"No rows where updated\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/65",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:26.622315+00\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:27.422315+00\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:28.622315+00\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:29.622315+00\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:00.622315+00\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:33.622315+00\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:16:20.622315+00\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 12:14:30.622315+00\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:30.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/66",
    "content": "{\"count\":1,\"rows\":[{\"Count\":100,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/67",
    "content": "{\"count\":100,\"rows\":[{\"timestamp\":\"2017-10-11 15:10:51.927\",\"Rate\":90},{\"timestamp\":\"2017-10-11 15:10:51.930\",\"Rate\":13},{\"timestamp\":\"2017-10-11 15:10:51.933\",\"Rate\":84},{\"timestamp\":\"2017-10-11 15:10:51.936\",\"Rate\":96},{\"timestamp\":\"2017-10-11 15:10:51.939\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:51.942\",\"Rate\":54},{\"timestamp\":\"2017-10-11 15:10:51.946\",\"Rate\":28},{\"timestamp\":\"2017-10-11 15:10:51.949\",\"Rate\":3},{\"timestamp\":\"2017-10-11 15:10:51.952\",\"Rate\":77},{\"timestamp\":\"2017-10-11 15:10:51.955\",\"Rate\":38},{\"timestamp\":\"2017-10-11 15:10:51.959\",\"Rate\":26},{\"timestamp\":\"2017-10-11 15:10:51.963\",\"Rate\":86},{\"timestamp\":\"2017-10-11 15:10:51.966\",\"Rate\":39},{\"timestamp\":\"2017-10-11 15:10:51.970\",\"Rate\":57},{\"timestamp\":\"2017-10-11 15:10:51.973\",\"Rate\":73},{\"timestamp\":\"2017-10-11 15:10:51.979\",\"Rate\":22},{\"timestamp\":\"2017-10-11 15:10:51.982\",\"Rate\":34},{\"timestamp\":\"2017-10-11 15:10:51.986\",\"Rate\":78},{\"timestamp\":\"2017-10-11 15:10:51.990\",\"Rate\":20},{\"timestamp\":\"2017-10-11 15:10:51.993\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:51.996\",\"Rate\":17},{\"timestamp\":\"2017-10-11 15:10:52.000\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:52.005\",\"Rate\":18},{\"timestamp\":\"2017-10-11 15:10:52.009\",\"Rate\":52},{\"timestamp\":\"2017-10-11 15:10:52.012\",\"Rate\":62},{\"timestamp\":\"2017-10-11 15:10:52.015\",\"Rate\":47},{\"timestamp\":\"2017-10-11 15:10:52.019\",\"Rate\":73},{\"timestamp\":\"2017-10-11 15:10:52.022\",\"Rate\":9},{\"timestamp\":\"2017-10-11 15:10:52.026\",\"Rate\":66},{\"timestamp\":\"2017-10-11 15:10:52.029\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.031\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:52.034\",\"Rate\":41},{\"timestamp\":\"2017-10-11 15:10:52.037\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:52.040\",\"Rate\":69},{\"timestamp\":\"2017-10-11 
15:10:52.043\",\"Rate\":98},{\"timestamp\":\"2017-10-11 15:10:52.046\",\"Rate\":13},{\"timestamp\":\"2017-10-11 15:10:52.050\",\"Rate\":91},{\"timestamp\":\"2017-10-11 15:10:52.053\",\"Rate\":18},{\"timestamp\":\"2017-10-11 15:10:52.056\",\"Rate\":78},{\"timestamp\":\"2017-10-11 15:10:52.059\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:52.062\",\"Rate\":48},{\"timestamp\":\"2017-10-11 15:10:52.066\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.070\",\"Rate\":79},{\"timestamp\":\"2017-10-11 15:10:52.073\",\"Rate\":87},{\"timestamp\":\"2017-10-11 15:10:52.075\",\"Rate\":60},{\"timestamp\":\"2017-10-11 15:10:52.078\",\"Rate\":48},{\"timestamp\":\"2017-10-11 15:10:52.081\",\"Rate\":88},{\"timestamp\":\"2017-10-11 15:10:52.084\",\"Rate\":3},{\"timestamp\":\"2017-10-11 15:10:52.086\",\"Rate\":93},{\"timestamp\":\"2017-10-11 15:10:52.089\",\"Rate\":83},{\"timestamp\":\"2017-10-11 15:10:52.092\",\"Rate\":76},{\"timestamp\":\"2017-10-11 15:10:52.095\",\"Rate\":97},{\"timestamp\":\"2017-10-11 15:10:52.098\",\"Rate\":31},{\"timestamp\":\"2017-10-11 15:10:52.100\",\"Rate\":49},{\"timestamp\":\"2017-10-11 15:10:52.103\",\"Rate\":36},{\"timestamp\":\"2017-10-11 15:10:52.106\",\"Rate\":15},{\"timestamp\":\"2017-10-11 15:10:52.109\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.111\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.114\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.116\",\"Rate\":68},{\"timestamp\":\"2017-10-11 15:10:52.119\",\"Rate\":22},{\"timestamp\":\"2017-10-11 15:10:52.122\",\"Rate\":54},{\"timestamp\":\"2017-10-11 15:10:52.124\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.127\",\"Rate\":49},{\"timestamp\":\"2017-10-11 15:10:52.130\",\"Rate\":59},{\"timestamp\":\"2017-10-11 15:10:52.132\",\"Rate\":6},{\"timestamp\":\"2017-10-11 15:10:52.135\",\"Rate\":82},{\"timestamp\":\"2017-10-11 15:10:52.137\",\"Rate\":5},{\"timestamp\":\"2017-10-11 15:10:52.140\",\"Rate\":1},{\"timestamp\":\"2017-10-11 
15:10:52.142\",\"Rate\":53},{\"timestamp\":\"2017-10-11 15:10:52.145\",\"Rate\":69},{\"timestamp\":\"2017-10-11 15:10:52.147\",\"Rate\":97},{\"timestamp\":\"2017-10-11 15:10:52.150\",\"Rate\":58},{\"timestamp\":\"2017-10-11 15:10:52.153\",\"Rate\":76},{\"timestamp\":\"2017-10-11 15:10:52.157\",\"Rate\":81},{\"timestamp\":\"2017-10-11 15:10:52.160\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.163\",\"Rate\":4},{\"timestamp\":\"2017-10-11 15:10:52.165\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.169\",\"Rate\":5},{\"timestamp\":\"2017-10-11 15:10:52.171\",\"Rate\":72},{\"timestamp\":\"2017-10-11 15:10:52.174\",\"Rate\":20},{\"timestamp\":\"2017-10-11 15:10:52.176\",\"Rate\":58},{\"timestamp\":\"2017-10-11 15:10:52.179\",\"Rate\":75},{\"timestamp\":\"2017-10-11 15:10:52.182\",\"Rate\":74},{\"timestamp\":\"2017-10-11 15:10:52.184\",\"Rate\":60},{\"timestamp\":\"2017-10-11 15:10:52.187\",\"Rate\":96},{\"timestamp\":\"2017-10-11 15:10:52.189\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.192\",\"Rate\":40},{\"timestamp\":\"2017-10-11 15:10:52.195\",\"Rate\":33},{\"timestamp\":\"2017-10-11 15:10:52.197\",\"Rate\":87},{\"timestamp\":\"2017-10-11 15:10:52.200\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.203\",\"Rate\":40},{\"timestamp\":\"2017-10-11 15:10:52.206\",\"Rate\":44},{\"timestamp\":\"2017-10-11 15:10:52.208\",\"Rate\":7},{\"timestamp\":\"2017-10-11 15:10:52.211\",\"Rate\":52},{\"timestamp\":\"2017-10-11 15:10:52.214\",\"Rate\":93},{\"timestamp\":\"2017-10-11 15:10:52.219\",\"Rate\":43},{\"timestamp\":\"2017-10-11 15:10:52.222\",\"Rate\":66},{\"timestamp\":\"2017-10-11 15:10:52.225\",\"Rate\":8},{\"timestamp\":\"2017-10-11 15:10:52.228\",\"Rate\":79}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/68",
    "content": ""
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/69",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 4 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/7",
    "content": "{\"count\":1,\"rows\":[{\"count_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/70",
    "content": "{\"count\":4,\"rows\":[{\"id\":106,\"key\":\"TEST6\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:33.622315+00\"},{\"id\":106,\"key\":\"TEST7\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:16:20.622315+00\"},{\"id\":108,\"key\":\"TEST8\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 12:14:30.622315+00\"},{\"id\":109,\"key\":\"TEST9\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:30.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/71",
    "content": "{\"count\":2,\"rows\":[{\"description\":\"A test row\"},{\"description\":\"Updated with expression\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/72",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/73",
    "content": "{\"count\":2,\"rows\":[{\"id\":2,\"key\":\"UPDA\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}},{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"new value\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/74",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/75",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/76",
    "content": "{\"count\":1,\"rows\":[{\"id\":4,\"key\":\"Admin\",\"description\":\"URL of the admin API\",\"data\":{\"url\":{\"value\":\"new value\"}}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/77",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/78",
    "content": "{\"count\":1,\"rows\":[{\"Count\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/79",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:26.622315+00\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:27.422315+00\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:28.622315+00\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:29.622315+00\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:00.622315+00\"},{\"id\":106,\"key\":\"TEST6\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:33.622315+00\"},{\"id\":106,\"key\":\"TEST7\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:16:20.622315+00\"},{\"id\":108,\"key\":\"TEST8\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 12:14:30.622315+00\"},{\"id\":109,\"key\":\"TEST9\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:30.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/8",
    "content": "{\"count\":1,\"rows\":[{\"avg_id\":\"1.00000000000000000000\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/80",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"value\\\" of an \\\"newer\\\" condition must be an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/81",
    "content": "{\"count\":5,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:26.622315+00\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:27.422315+00\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:28.622315+00\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:29.622315+00\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:00.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/82",
    "content": "{\"count\":2,\"rows\":[{\"min\":2.0,\"max\":96.0,\"average\":47.9523809523809,\"user_ts\":\"2017-10-11 15:10:51\"},{\"min\":1.0,\"max\":98.0,\"average\":53.7721518987342,\"user_ts\":\"2017-10-11 15:10:52\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/83",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/84",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/85",
    "content": "{ \"entryPoint\" : \"appendReadings\", \"message\" : \"Payload is missing a readings array\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/86",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/87",
    "content": "{\"count\":100,\"rows\":[{\"timestamp\":\"2017-10-11 15:10:51.927\",\"Rate\":90},{\"timestamp\":\"2017-10-11 15:10:51.930\",\"Rate\":13},{\"timestamp\":\"2017-10-11 15:10:51.933\",\"Rate\":84},{\"timestamp\":\"2017-10-11 15:10:51.936\",\"Rate\":96},{\"timestamp\":\"2017-10-11 15:10:51.939\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:51.942\",\"Rate\":54},{\"timestamp\":\"2017-10-11 15:10:51.946\",\"Rate\":28},{\"timestamp\":\"2017-10-11 15:10:51.949\",\"Rate\":3},{\"timestamp\":\"2017-10-11 15:10:51.952\",\"Rate\":77},{\"timestamp\":\"2017-10-11 15:10:51.955\",\"Rate\":38},{\"timestamp\":\"2017-10-11 15:10:51.959\",\"Rate\":26},{\"timestamp\":\"2017-10-11 15:10:51.963\",\"Rate\":86},{\"timestamp\":\"2017-10-11 15:10:51.966\",\"Rate\":39},{\"timestamp\":\"2017-10-11 15:10:51.970\",\"Rate\":57},{\"timestamp\":\"2017-10-11 15:10:51.973\",\"Rate\":73},{\"timestamp\":\"2017-10-11 15:10:51.979\",\"Rate\":22},{\"timestamp\":\"2017-10-11 15:10:51.982\",\"Rate\":34},{\"timestamp\":\"2017-10-11 15:10:51.986\",\"Rate\":78},{\"timestamp\":\"2017-10-11 15:10:51.990\",\"Rate\":20},{\"timestamp\":\"2017-10-11 15:10:51.993\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:51.996\",\"Rate\":17},{\"timestamp\":\"2017-10-11 15:10:52.000\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:52.005\",\"Rate\":18},{\"timestamp\":\"2017-10-11 15:10:52.009\",\"Rate\":52},{\"timestamp\":\"2017-10-11 15:10:52.012\",\"Rate\":62},{\"timestamp\":\"2017-10-11 15:10:52.015\",\"Rate\":47},{\"timestamp\":\"2017-10-11 15:10:52.019\",\"Rate\":73},{\"timestamp\":\"2017-10-11 15:10:52.022\",\"Rate\":9},{\"timestamp\":\"2017-10-11 15:10:52.026\",\"Rate\":66},{\"timestamp\":\"2017-10-11 15:10:52.029\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.031\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:52.034\",\"Rate\":41},{\"timestamp\":\"2017-10-11 15:10:52.037\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:52.040\",\"Rate\":69},{\"timestamp\":\"2017-10-11 
15:10:52.043\",\"Rate\":98},{\"timestamp\":\"2017-10-11 15:10:52.046\",\"Rate\":13},{\"timestamp\":\"2017-10-11 15:10:52.050\",\"Rate\":91},{\"timestamp\":\"2017-10-11 15:10:52.053\",\"Rate\":18},{\"timestamp\":\"2017-10-11 15:10:52.056\",\"Rate\":78},{\"timestamp\":\"2017-10-11 15:10:52.059\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:52.062\",\"Rate\":48},{\"timestamp\":\"2017-10-11 15:10:52.066\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.070\",\"Rate\":79},{\"timestamp\":\"2017-10-11 15:10:52.073\",\"Rate\":87},{\"timestamp\":\"2017-10-11 15:10:52.075\",\"Rate\":60},{\"timestamp\":\"2017-10-11 15:10:52.078\",\"Rate\":48},{\"timestamp\":\"2017-10-11 15:10:52.081\",\"Rate\":88},{\"timestamp\":\"2017-10-11 15:10:52.084\",\"Rate\":3},{\"timestamp\":\"2017-10-11 15:10:52.086\",\"Rate\":93},{\"timestamp\":\"2017-10-11 15:10:52.089\",\"Rate\":83},{\"timestamp\":\"2017-10-11 15:10:52.092\",\"Rate\":76},{\"timestamp\":\"2017-10-11 15:10:52.095\",\"Rate\":97},{\"timestamp\":\"2017-10-11 15:10:52.098\",\"Rate\":31},{\"timestamp\":\"2017-10-11 15:10:52.100\",\"Rate\":49},{\"timestamp\":\"2017-10-11 15:10:52.103\",\"Rate\":36},{\"timestamp\":\"2017-10-11 15:10:52.106\",\"Rate\":15},{\"timestamp\":\"2017-10-11 15:10:52.109\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.111\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.114\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.116\",\"Rate\":68},{\"timestamp\":\"2017-10-11 15:10:52.119\",\"Rate\":22},{\"timestamp\":\"2017-10-11 15:10:52.122\",\"Rate\":54},{\"timestamp\":\"2017-10-11 15:10:52.124\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.127\",\"Rate\":49},{\"timestamp\":\"2017-10-11 15:10:52.130\",\"Rate\":59},{\"timestamp\":\"2017-10-11 15:10:52.132\",\"Rate\":6},{\"timestamp\":\"2017-10-11 15:10:52.135\",\"Rate\":82},{\"timestamp\":\"2017-10-11 15:10:52.137\",\"Rate\":5},{\"timestamp\":\"2017-10-11 15:10:52.140\",\"Rate\":1},{\"timestamp\":\"2017-10-11 
15:10:52.142\",\"Rate\":53},{\"timestamp\":\"2017-10-11 15:10:52.145\",\"Rate\":69},{\"timestamp\":\"2017-10-11 15:10:52.147\",\"Rate\":97},{\"timestamp\":\"2017-10-11 15:10:52.150\",\"Rate\":58},{\"timestamp\":\"2017-10-11 15:10:52.153\",\"Rate\":76},{\"timestamp\":\"2017-10-11 15:10:52.157\",\"Rate\":81},{\"timestamp\":\"2017-10-11 15:10:52.160\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.163\",\"Rate\":4},{\"timestamp\":\"2017-10-11 15:10:52.165\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.169\",\"Rate\":5},{\"timestamp\":\"2017-10-11 15:10:52.171\",\"Rate\":72},{\"timestamp\":\"2017-10-11 15:10:52.174\",\"Rate\":20},{\"timestamp\":\"2017-10-11 15:10:52.176\",\"Rate\":58},{\"timestamp\":\"2017-10-11 15:10:52.179\",\"Rate\":75},{\"timestamp\":\"2017-10-11 15:10:52.182\",\"Rate\":74},{\"timestamp\":\"2017-10-11 15:10:52.184\",\"Rate\":60},{\"timestamp\":\"2017-10-11 15:10:52.187\",\"Rate\":96},{\"timestamp\":\"2017-10-11 15:10:52.189\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.192\",\"Rate\":40},{\"timestamp\":\"2017-10-11 15:10:52.195\",\"Rate\":33},{\"timestamp\":\"2017-10-11 15:10:52.197\",\"Rate\":87},{\"timestamp\":\"2017-10-11 15:10:52.200\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.203\",\"Rate\":40},{\"timestamp\":\"2017-10-11 15:10:52.206\",\"Rate\":44},{\"timestamp\":\"2017-10-11 15:10:52.208\",\"Rate\":7},{\"timestamp\":\"2017-10-11 15:10:52.211\",\"Rate\":52},{\"timestamp\":\"2017-10-11 15:10:52.214\",\"Rate\":93},{\"timestamp\":\"2017-10-11 15:10:52.219\",\"Rate\":43},{\"timestamp\":\"2017-10-11 15:10:52.222\",\"Rate\":66},{\"timestamp\":\"2017-10-11 15:10:52.225\",\"Rate\":8},{\"timestamp\":\"2017-10-11 15:10:52.228\",\"Rate\":79}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/88",
    "content": "{ \"entryPoint\" : \"limit\", \"message\" : \"Limit must be specfied as an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/89",
    "content": "{ \"entryPoint\" : \"skip\", \"message\" : \"Skip must be specfied as an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/9",
    "content": "{\"count\":1,\"rows\":[{\"min_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/90",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  syntax error at or near \\\"\\\\\"LINE 1: ... FROM fledge.test2 WHERE \\\"id\\\" >= '2' ORDER BY \\\"id\\\\\">\\\\\"\\\" ASC;                                                               ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/91",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  time zone \\\"bad\\\" not recognized\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/92",
    "content": "{\"count\":1,\"rows\":[{\"description\":\"added'some'ch'''ars'\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/93",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/94",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/95",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/96",
    "content": "{\"count\":1,\"rows\":[{\"count_id\":10}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/97",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"value\\\" of a \\\"in\\\" condition must be an array and must not be empty.\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/98",
    "content": "{\"count\":1,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":52.55,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC/99",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/1",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/10",
    "content": "{\"count\":1,\"rows\":[{\"max_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/100",
    "content": ""
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/101",
    "content": "{ \"removed\" : 100,  \"unsentPurged\" : 100,  \"unsentRetained\" : 0,  \"readings\" : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/102",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 11 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/103",
    "content": "{\"count\":11,\"rows\":[{\"asset_code\":\"msec_003_OK\",\"reading\":{\"value\":3},\"user_ts\":\"2019-01-01 10:01:01.000000\"},{\"asset_code\":\"msec_004_OK\",\"reading\":{\"value\":4},\"user_ts\":\"2019-01-02 10:02:01.000000\"},{\"asset_code\":\"msec_005_OK\",\"reading\":{\"value\":5},\"user_ts\":\"2019-01-03 10:02:02.841000\"},{\"asset_code\":\"msec_006_OK\",\"reading\":{\"value\":6},\"user_ts\":\"2019-01-04 10:03:05.123456\"},{\"asset_code\":\"msec_007_OK\",\"reading\":{\"value\":7},\"user_ts\":\"2019-01-04 10:03:05.100000\"},{\"asset_code\":\"msec_008_OK\",\"reading\":{\"value\":8},\"user_ts\":\"2019-01-04 10:03:05.123000\"},{\"asset_code\":\"msec_009_OK\",\"reading\":{\"value\":9},\"user_ts\":\"2019-03-03 10:03:03.123456\"},{\"asset_code\":\"msec_010_OK\",\"reading\":{\"value\":10},\"user_ts\":\"2019-03-04 09:03:04.123456\"},{\"asset_code\":\"msec_011_OK\",\"reading\":{\"value\":11},\"user_ts\":\"2019-03-05 11:03:05.123456\"},{\"asset_code\":\"msec_012_OK\",\"reading\":{\"value\":12},\"user_ts\":\"2019-03-04 07:33:04.123456\"},{\"asset_code\":\"msec_013_OK\",\"reading\":{\"value\":13},\"user_ts\":\"2019-03-05 12:33:05.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/104",
    "content": "{\"count\":1,\"rows\":[{\"reading\":{\"value\":9},\"user_ts\":\"2019-03-03 10:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/105",
    "content": "{\"count\":1,\"rows\":[{\"reading\":{\"value\":9},\"user_ts_alias\":\"2019-03-03 10:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/106",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_min\":\"2019-03-03 10:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/107",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_min\":\"2019-03-03 10:03:03.123456\",\"user_ts_max\":\"2019-03-03 10:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/108",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/109",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/11",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/110",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"UPDA\",\"data\":{\"json\":\"{\\\"providers\\\":[\\\"username123\\\",\\\"ldap_test_10\\\"]}\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/111",
    "content": "{\"created\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/112",
    "content": "{\"loaded\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/113",
    "content": "{\"deleted\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/115",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 31 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/116",
    "content": "{\"count\":2,\"rows\":[{\"asset_code\":\"Asset1\",\"time\":\"2019-10-11 15:11:00+00\",\"reading\":{\"rms\":{\"max\":99,\"min\":10,\"sum\":305,\"count\":7,\"average\":43.57142857142857},\"rate\":{\"max\":96,\"min\":2,\"sum\":812,\"count\":19,\"average\":42.73684210526316}}},{\"asset_code\":\"Asset2\",\"time\":\"2019-10-11 15:11:00+00\",\"reading\":{\"lux\":{\"max\":3456,\"min\":3456,\"sum\":3456,\"count\":1,\"average\":3456},\"temp\":{\"max\":90,\"min\":12.306,\"sum\":499.306,\"count\":10,\"average\":49.9306},\"pressure\":{\"max\":1023,\"min\":8,\"sum\":3024,\"count\":6,\"average\":504}}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/12",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/13",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"ERROR:  column \\\"nonexistant\\\" of relation \\\"test\\\" does not existLINE 1: ...ERT INTO fledge.test (id, key, description, data, Nonexistan...                                                             ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/14",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"Failed to parse JSON payload\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/15",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/16",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/17",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/18",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/19",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/2",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/20",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/21",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"column\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/22",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"condition\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/23",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"value\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/24",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" property must be a JSON object\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/25",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/26",
    "content": "{ \"entryPoint\" : \"Select sort\", \"message\" : \"Missing property \\\"column\\\"\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/27",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/28",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/29",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/3",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/30",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"UPDA\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/31",
    "content": "{ \"entryPoint\" : \"update\", \"message\" : \"Missing values or expressions object in payload\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/32",
    "content": "{\"count\":2,\"rows\":[{\"count_id\":1,\"key\":\"UPDA\"},{\"count_id\":1,\"key\":\"TEST1\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/33",
    "content": "{ \"error\" : \"Unsupported URL: /fledge/nothing\" }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/34",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  relation \\\"fledge.doesntexist\\\" does not existLINE 1: SELECT * FROM fledge.doesntexist;                      ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/35",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  column \\\"doesntexist\\\" does not existLINE 1: SELECT  * FROM fledge.test WHERE \\\"doesntexist\\\" = '9';                                         ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/37",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 3 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/38",
    "content": "{\"count\":2,\"rows\":[{\"id\":966457,\"asset_code\":\"MyAsset\",\"reading\":{\"rate\":18.4},\"user_ts\":\"2017-09-21 15:00:09.025655+01\",\"ts\":\"2017-10-04 11:38:39.368881+01\"},{\"id\":966458,\"asset_code\":\"MyAsset\",\"reading\":{\"rate\":45.1},\"user_ts\":\"2017-09-21 15:03:09.025655+01\",\"ts\":\"2017-10-04 11:38:39.368881+01\"}]}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/39",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/4",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/40",
    "content": "{ \"error\" : \"Missing query parameter count\" }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/41",
    "content": "{ \"error\" : \"Missing query parameter id\" }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/42",
    "content": "{ \"removed\" : 3,  \"unsentPurged\" : 3,  \"unsentRetained\" : 0,  \"readings\" : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/43",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/44",
    "content": "{\"count\":1,\"rows\":[{\"min_id\":1,\"max_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/45",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/46",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"MyDescription\":\"A test row\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/47",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"JSONvalue\":\"test1\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/48",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\",\"time\":\"11:14:26\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/49",
    "content": "{\"count\":5,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\",\"time\":\"11:14:26\"},{\"key\":\"TEST2\",\"description\":\"A test row\",\"time\":\"11:14:27\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"time\":\"11:14:28\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"time\":\"11:14:29\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"time\":\"11:15:00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/5",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/50",
    "content": "{\"count\":6,\"rows\":[{\"key\":\"TEST4\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/51",
    "content": "{\"count\":8,\"rows\":[{\"key\":\"TEST2\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:14:27\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:14:28\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:14:29\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:15:00\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:15:33\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:16:20\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 12:14:30\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:14:30\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/52",
    "content": "{\"count\":8,\"rows\":[{\"key\":\"TEST2\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 11:14:27 am\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 11:14:28 am\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 11:14:29 am\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 11:15:00 am\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 11:15:33 am\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 11:16:20 am\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 12:14:30 pm\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:14:30 pm\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/53",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"return object must have either a column or json property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/54",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"The json property is missing a properties property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/55",
    "content": "{\"count\":1,\"rows\":[{\"Entries\":9}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/56",
    "content": "{\"count\":1,\"rows\":[{\"sum_id\":\"43\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/57",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 100 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/58",
    "content": "{\"count\":1,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":52.55,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/59",
    "content": "{\"count\":2,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":53.77215189873418,\"asset_code\":\"MyAsset\",\"timestamp\":\"2017-10-11 15:10:52+00\"},{\"min\":2.0,\"max\":96.0,\"average\":47.95238095238095,\"asset_code\":\"MyAsset\",\"timestamp\":\"2017-10-11 15:10:50+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/6",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/60",
    "content": "{\"count\":2,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":53.77215189873418,\"asset_code\":\"MyAsset\",\"bucket\":\"11-10-20177 15:10:52\"},{\"min\":2.0,\"max\":96.0,\"average\":47.95238095238095,\"asset_code\":\"MyAsset\",\"bucket\":\"11-10-20177 15:10:50\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/61",
    "content": "{\"count\":5,\"rows\":[{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:28.622315+00\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:29.622315+00\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:00.622315+00\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:33.622315+00\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:16:20.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/62",
    "content": "{\"count\":4,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:26.622315+00\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:27.422315+00\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 12:14:30.622315+00\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:30.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/63",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:26.622315+00\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:27.422315+00\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:28.622315+00\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:29.622315+00\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:00.622315+00\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:33.622315+00\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:16:20.622315+00\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 12:14:30.622315+00\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:30.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/64",
    "content": "{ \"entryPoint\" : \"update\", \"message\" : \"No rows where updated\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/65",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:26.622315+00\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:27.422315+00\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:28.622315+00\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:29.622315+00\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:00.622315+00\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:33.622315+00\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:16:20.622315+00\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 12:14:30.622315+00\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:30.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/66",
    "content": "{\"count\":1,\"rows\":[{\"Count\":100,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/67",
    "content": "{\"count\":100,\"rows\":[{\"timestamp\":\"2017-10-11 15:10:51.927\",\"Rate\":90},{\"timestamp\":\"2017-10-11 15:10:51.930\",\"Rate\":13},{\"timestamp\":\"2017-10-11 15:10:51.933\",\"Rate\":84},{\"timestamp\":\"2017-10-11 15:10:51.936\",\"Rate\":96},{\"timestamp\":\"2017-10-11 15:10:51.939\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:51.942\",\"Rate\":54},{\"timestamp\":\"2017-10-11 15:10:51.946\",\"Rate\":28},{\"timestamp\":\"2017-10-11 15:10:51.949\",\"Rate\":3},{\"timestamp\":\"2017-10-11 15:10:51.952\",\"Rate\":77},{\"timestamp\":\"2017-10-11 15:10:51.955\",\"Rate\":38},{\"timestamp\":\"2017-10-11 15:10:51.959\",\"Rate\":26},{\"timestamp\":\"2017-10-11 15:10:51.963\",\"Rate\":86},{\"timestamp\":\"2017-10-11 15:10:51.966\",\"Rate\":39},{\"timestamp\":\"2017-10-11 15:10:51.970\",\"Rate\":57},{\"timestamp\":\"2017-10-11 15:10:51.973\",\"Rate\":73},{\"timestamp\":\"2017-10-11 15:10:51.979\",\"Rate\":22},{\"timestamp\":\"2017-10-11 15:10:51.982\",\"Rate\":34},{\"timestamp\":\"2017-10-11 15:10:51.986\",\"Rate\":78},{\"timestamp\":\"2017-10-11 15:10:51.990\",\"Rate\":20},{\"timestamp\":\"2017-10-11 15:10:51.993\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:51.996\",\"Rate\":17},{\"timestamp\":\"2017-10-11 15:10:52.000\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:52.005\",\"Rate\":18},{\"timestamp\":\"2017-10-11 15:10:52.009\",\"Rate\":52},{\"timestamp\":\"2017-10-11 15:10:52.012\",\"Rate\":62},{\"timestamp\":\"2017-10-11 15:10:52.015\",\"Rate\":47},{\"timestamp\":\"2017-10-11 15:10:52.019\",\"Rate\":73},{\"timestamp\":\"2017-10-11 15:10:52.022\",\"Rate\":9},{\"timestamp\":\"2017-10-11 15:10:52.026\",\"Rate\":66},{\"timestamp\":\"2017-10-11 15:10:52.029\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.031\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:52.034\",\"Rate\":41},{\"timestamp\":\"2017-10-11 15:10:52.037\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:52.040\",\"Rate\":69},{\"timestamp\":\"2017-10-11 
15:10:52.043\",\"Rate\":98},{\"timestamp\":\"2017-10-11 15:10:52.046\",\"Rate\":13},{\"timestamp\":\"2017-10-11 15:10:52.050\",\"Rate\":91},{\"timestamp\":\"2017-10-11 15:10:52.053\",\"Rate\":18},{\"timestamp\":\"2017-10-11 15:10:52.056\",\"Rate\":78},{\"timestamp\":\"2017-10-11 15:10:52.059\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:52.062\",\"Rate\":48},{\"timestamp\":\"2017-10-11 15:10:52.066\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.070\",\"Rate\":79},{\"timestamp\":\"2017-10-11 15:10:52.073\",\"Rate\":87},{\"timestamp\":\"2017-10-11 15:10:52.075\",\"Rate\":60},{\"timestamp\":\"2017-10-11 15:10:52.078\",\"Rate\":48},{\"timestamp\":\"2017-10-11 15:10:52.081\",\"Rate\":88},{\"timestamp\":\"2017-10-11 15:10:52.084\",\"Rate\":3},{\"timestamp\":\"2017-10-11 15:10:52.086\",\"Rate\":93},{\"timestamp\":\"2017-10-11 15:10:52.089\",\"Rate\":83},{\"timestamp\":\"2017-10-11 15:10:52.092\",\"Rate\":76},{\"timestamp\":\"2017-10-11 15:10:52.095\",\"Rate\":97},{\"timestamp\":\"2017-10-11 15:10:52.098\",\"Rate\":31},{\"timestamp\":\"2017-10-11 15:10:52.100\",\"Rate\":49},{\"timestamp\":\"2017-10-11 15:10:52.103\",\"Rate\":36},{\"timestamp\":\"2017-10-11 15:10:52.106\",\"Rate\":15},{\"timestamp\":\"2017-10-11 15:10:52.109\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.111\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.114\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.116\",\"Rate\":68},{\"timestamp\":\"2017-10-11 15:10:52.119\",\"Rate\":22},{\"timestamp\":\"2017-10-11 15:10:52.122\",\"Rate\":54},{\"timestamp\":\"2017-10-11 15:10:52.124\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.127\",\"Rate\":49},{\"timestamp\":\"2017-10-11 15:10:52.130\",\"Rate\":59},{\"timestamp\":\"2017-10-11 15:10:52.132\",\"Rate\":6},{\"timestamp\":\"2017-10-11 15:10:52.135\",\"Rate\":82},{\"timestamp\":\"2017-10-11 15:10:52.137\",\"Rate\":5},{\"timestamp\":\"2017-10-11 15:10:52.140\",\"Rate\":1},{\"timestamp\":\"2017-10-11 
15:10:52.142\",\"Rate\":53},{\"timestamp\":\"2017-10-11 15:10:52.145\",\"Rate\":69},{\"timestamp\":\"2017-10-11 15:10:52.147\",\"Rate\":97},{\"timestamp\":\"2017-10-11 15:10:52.150\",\"Rate\":58},{\"timestamp\":\"2017-10-11 15:10:52.153\",\"Rate\":76},{\"timestamp\":\"2017-10-11 15:10:52.157\",\"Rate\":81},{\"timestamp\":\"2017-10-11 15:10:52.160\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.163\",\"Rate\":4},{\"timestamp\":\"2017-10-11 15:10:52.165\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.169\",\"Rate\":5},{\"timestamp\":\"2017-10-11 15:10:52.171\",\"Rate\":72},{\"timestamp\":\"2017-10-11 15:10:52.174\",\"Rate\":20},{\"timestamp\":\"2017-10-11 15:10:52.176\",\"Rate\":58},{\"timestamp\":\"2017-10-11 15:10:52.179\",\"Rate\":75},{\"timestamp\":\"2017-10-11 15:10:52.182\",\"Rate\":74},{\"timestamp\":\"2017-10-11 15:10:52.184\",\"Rate\":60},{\"timestamp\":\"2017-10-11 15:10:52.187\",\"Rate\":96},{\"timestamp\":\"2017-10-11 15:10:52.189\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.192\",\"Rate\":40},{\"timestamp\":\"2017-10-11 15:10:52.195\",\"Rate\":33},{\"timestamp\":\"2017-10-11 15:10:52.197\",\"Rate\":87},{\"timestamp\":\"2017-10-11 15:10:52.200\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.203\",\"Rate\":40},{\"timestamp\":\"2017-10-11 15:10:52.206\",\"Rate\":44},{\"timestamp\":\"2017-10-11 15:10:52.208\",\"Rate\":7},{\"timestamp\":\"2017-10-11 15:10:52.211\",\"Rate\":52},{\"timestamp\":\"2017-10-11 15:10:52.214\",\"Rate\":93},{\"timestamp\":\"2017-10-11 15:10:52.219\",\"Rate\":43},{\"timestamp\":\"2017-10-11 15:10:52.222\",\"Rate\":66},{\"timestamp\":\"2017-10-11 15:10:52.225\",\"Rate\":8},{\"timestamp\":\"2017-10-11 15:10:52.228\",\"Rate\":79}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/68",
    "content": ""
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/69",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 4 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/7",
    "content": "{\"count\":1,\"rows\":[{\"count_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/70",
    "content": "{\"count\":4,\"rows\":[{\"id\":106,\"key\":\"TEST6\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:33.622315+00\"},{\"id\":106,\"key\":\"TEST7\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:16:20.622315+00\"},{\"id\":108,\"key\":\"TEST8\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 12:14:30.622315+00\"},{\"id\":109,\"key\":\"TEST9\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:30.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/71",
    "content": "{\"count\":2,\"rows\":[{\"description\":\"A test row\"},{\"description\":\"Updated with expression\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/72",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/73",
    "content": "{\"count\":2,\"rows\":[{\"id\":2,\"key\":\"UPDA\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}},{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"new value\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/74",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/75",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/76",
    "content": "{\"count\":1,\"rows\":[{\"id\":4,\"key\":\"Admin\",\"description\":\"URL of the admin API\",\"data\":{\"url\":{\"value\":\"new value\"}}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/77",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/78",
    "content": "{\"count\":1,\"rows\":[{\"Count\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/79",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:26.622315+00\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:27.422315+00\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:28.622315+00\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:29.622315+00\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:00.622315+00\"},{\"id\":106,\"key\":\"TEST6\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:33.622315+00\"},{\"id\":106,\"key\":\"TEST7\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:16:20.622315+00\"},{\"id\":108,\"key\":\"TEST8\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 12:14:30.622315+00\"},{\"id\":109,\"key\":\"TEST9\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:30.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/8",
    "content": "{\"count\":1,\"rows\":[{\"avg_id\":\"1.00000000000000000000\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/80",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"value\\\" of an \\\"newer\\\" condition must be an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/81",
    "content": "{\"count\":5,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:26.622315+00\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:27.422315+00\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:28.622315+00\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:14:29.622315+00\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 11:15:00.622315+00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/82",
    "content": "{\"count\":2,\"rows\":[{\"min\":2.0,\"max\":96.0,\"average\":47.95238095238095,\"user_ts\":\"2017-10-11 15:10:51\"},{\"min\":1.0,\"max\":98.0,\"average\":53.77215189873418,\"user_ts\":\"2017-10-11 15:10:52\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/83",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/84",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/85",
    "content": "{ \"entryPoint\" : \"appendReadings\", \"message\" : \"Payload is missing a readings array\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/86",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/87",
    "content": "{\"count\":100,\"rows\":[{\"timestamp\":\"2017-10-11 15:10:51.927\",\"Rate\":90},{\"timestamp\":\"2017-10-11 15:10:51.930\",\"Rate\":13},{\"timestamp\":\"2017-10-11 15:10:51.933\",\"Rate\":84},{\"timestamp\":\"2017-10-11 15:10:51.936\",\"Rate\":96},{\"timestamp\":\"2017-10-11 15:10:51.939\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:51.942\",\"Rate\":54},{\"timestamp\":\"2017-10-11 15:10:51.946\",\"Rate\":28},{\"timestamp\":\"2017-10-11 15:10:51.949\",\"Rate\":3},{\"timestamp\":\"2017-10-11 15:10:51.952\",\"Rate\":77},{\"timestamp\":\"2017-10-11 15:10:51.955\",\"Rate\":38},{\"timestamp\":\"2017-10-11 15:10:51.959\",\"Rate\":26},{\"timestamp\":\"2017-10-11 15:10:51.963\",\"Rate\":86},{\"timestamp\":\"2017-10-11 15:10:51.966\",\"Rate\":39},{\"timestamp\":\"2017-10-11 15:10:51.970\",\"Rate\":57},{\"timestamp\":\"2017-10-11 15:10:51.973\",\"Rate\":73},{\"timestamp\":\"2017-10-11 15:10:51.979\",\"Rate\":22},{\"timestamp\":\"2017-10-11 15:10:51.982\",\"Rate\":34},{\"timestamp\":\"2017-10-11 15:10:51.986\",\"Rate\":78},{\"timestamp\":\"2017-10-11 15:10:51.990\",\"Rate\":20},{\"timestamp\":\"2017-10-11 15:10:51.993\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:51.996\",\"Rate\":17},{\"timestamp\":\"2017-10-11 15:10:52.000\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:52.005\",\"Rate\":18},{\"timestamp\":\"2017-10-11 15:10:52.009\",\"Rate\":52},{\"timestamp\":\"2017-10-11 15:10:52.012\",\"Rate\":62},{\"timestamp\":\"2017-10-11 15:10:52.015\",\"Rate\":47},{\"timestamp\":\"2017-10-11 15:10:52.019\",\"Rate\":73},{\"timestamp\":\"2017-10-11 15:10:52.022\",\"Rate\":9},{\"timestamp\":\"2017-10-11 15:10:52.026\",\"Rate\":66},{\"timestamp\":\"2017-10-11 15:10:52.029\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.031\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:52.034\",\"Rate\":41},{\"timestamp\":\"2017-10-11 15:10:52.037\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:52.040\",\"Rate\":69},{\"timestamp\":\"2017-10-11 
15:10:52.043\",\"Rate\":98},{\"timestamp\":\"2017-10-11 15:10:52.046\",\"Rate\":13},{\"timestamp\":\"2017-10-11 15:10:52.050\",\"Rate\":91},{\"timestamp\":\"2017-10-11 15:10:52.053\",\"Rate\":18},{\"timestamp\":\"2017-10-11 15:10:52.056\",\"Rate\":78},{\"timestamp\":\"2017-10-11 15:10:52.059\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:52.062\",\"Rate\":48},{\"timestamp\":\"2017-10-11 15:10:52.066\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.070\",\"Rate\":79},{\"timestamp\":\"2017-10-11 15:10:52.073\",\"Rate\":87},{\"timestamp\":\"2017-10-11 15:10:52.075\",\"Rate\":60},{\"timestamp\":\"2017-10-11 15:10:52.078\",\"Rate\":48},{\"timestamp\":\"2017-10-11 15:10:52.081\",\"Rate\":88},{\"timestamp\":\"2017-10-11 15:10:52.084\",\"Rate\":3},{\"timestamp\":\"2017-10-11 15:10:52.086\",\"Rate\":93},{\"timestamp\":\"2017-10-11 15:10:52.089\",\"Rate\":83},{\"timestamp\":\"2017-10-11 15:10:52.092\",\"Rate\":76},{\"timestamp\":\"2017-10-11 15:10:52.095\",\"Rate\":97},{\"timestamp\":\"2017-10-11 15:10:52.098\",\"Rate\":31},{\"timestamp\":\"2017-10-11 15:10:52.100\",\"Rate\":49},{\"timestamp\":\"2017-10-11 15:10:52.103\",\"Rate\":36},{\"timestamp\":\"2017-10-11 15:10:52.106\",\"Rate\":15},{\"timestamp\":\"2017-10-11 15:10:52.109\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.111\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.114\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.116\",\"Rate\":68},{\"timestamp\":\"2017-10-11 15:10:52.119\",\"Rate\":22},{\"timestamp\":\"2017-10-11 15:10:52.122\",\"Rate\":54},{\"timestamp\":\"2017-10-11 15:10:52.124\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.127\",\"Rate\":49},{\"timestamp\":\"2017-10-11 15:10:52.130\",\"Rate\":59},{\"timestamp\":\"2017-10-11 15:10:52.132\",\"Rate\":6},{\"timestamp\":\"2017-10-11 15:10:52.135\",\"Rate\":82},{\"timestamp\":\"2017-10-11 15:10:52.137\",\"Rate\":5},{\"timestamp\":\"2017-10-11 15:10:52.140\",\"Rate\":1},{\"timestamp\":\"2017-10-11 
15:10:52.142\",\"Rate\":53},{\"timestamp\":\"2017-10-11 15:10:52.145\",\"Rate\":69},{\"timestamp\":\"2017-10-11 15:10:52.147\",\"Rate\":97},{\"timestamp\":\"2017-10-11 15:10:52.150\",\"Rate\":58},{\"timestamp\":\"2017-10-11 15:10:52.153\",\"Rate\":76},{\"timestamp\":\"2017-10-11 15:10:52.157\",\"Rate\":81},{\"timestamp\":\"2017-10-11 15:10:52.160\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.163\",\"Rate\":4},{\"timestamp\":\"2017-10-11 15:10:52.165\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.169\",\"Rate\":5},{\"timestamp\":\"2017-10-11 15:10:52.171\",\"Rate\":72},{\"timestamp\":\"2017-10-11 15:10:52.174\",\"Rate\":20},{\"timestamp\":\"2017-10-11 15:10:52.176\",\"Rate\":58},{\"timestamp\":\"2017-10-11 15:10:52.179\",\"Rate\":75},{\"timestamp\":\"2017-10-11 15:10:52.182\",\"Rate\":74},{\"timestamp\":\"2017-10-11 15:10:52.184\",\"Rate\":60},{\"timestamp\":\"2017-10-11 15:10:52.187\",\"Rate\":96},{\"timestamp\":\"2017-10-11 15:10:52.189\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.192\",\"Rate\":40},{\"timestamp\":\"2017-10-11 15:10:52.195\",\"Rate\":33},{\"timestamp\":\"2017-10-11 15:10:52.197\",\"Rate\":87},{\"timestamp\":\"2017-10-11 15:10:52.200\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.203\",\"Rate\":40},{\"timestamp\":\"2017-10-11 15:10:52.206\",\"Rate\":44},{\"timestamp\":\"2017-10-11 15:10:52.208\",\"Rate\":7},{\"timestamp\":\"2017-10-11 15:10:52.211\",\"Rate\":52},{\"timestamp\":\"2017-10-11 15:10:52.214\",\"Rate\":93},{\"timestamp\":\"2017-10-11 15:10:52.219\",\"Rate\":43},{\"timestamp\":\"2017-10-11 15:10:52.222\",\"Rate\":66},{\"timestamp\":\"2017-10-11 15:10:52.225\",\"Rate\":8},{\"timestamp\":\"2017-10-11 15:10:52.228\",\"Rate\":79}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/88",
    "content": "{ \"entryPoint\" : \"limit\", \"message\" : \"Limit must be specfied as an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/89",
    "content": "{ \"entryPoint\" : \"skip\", \"message\" : \"Skip must be specfied as an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/9",
    "content": "{\"count\":1,\"rows\":[{\"min_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/90",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  syntax error at or near \\\"\\\\\"LINE 1: ... FROM fledge.test2 WHERE \\\"id\\\" >= '2' ORDER BY \\\"id\\\\\">\\\\\"\\\" ASC;                                                               ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/91",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  time zone \\\"bad\\\" not recognized\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/92",
    "content": "{\"count\":1,\"rows\":[{\"description\":\"added'some'ch'''ars'\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/93",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/94",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/95",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/96",
    "content": "{\"count\":1,\"rows\":[{\"count_id\":10}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/97",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"value\\\" of a \\\"in\\\" condition must be an array and must not be empty.\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/98",
    "content": "{\"count\":1,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":52.55,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_ETC_UTC_PG12/99",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/1",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/10",
    "content": "{\"count\":1,\"rows\":[{\"max_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/100",
    "content": ""
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/101",
    "content": "{ \"removed\" : 100,  \"unsentPurged\" : 100,  \"unsentRetained\" : 0,  \"readings\" : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/102",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 11 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/103",
    "content": "{\"count\":11,\"rows\":[{\"asset_code\":\"msec_003_OK\",\"reading\":{\"value\":3},\"user_ts\":\"2019-01-01 11:01:01.000000\"},{\"asset_code\":\"msec_004_OK\",\"reading\":{\"value\":4},\"user_ts\":\"2019-01-02 11:02:01.000000\"},{\"asset_code\":\"msec_005_OK\",\"reading\":{\"value\":5},\"user_ts\":\"2019-01-03 11:02:02.841000\"},{\"asset_code\":\"msec_006_OK\",\"reading\":{\"value\":6},\"user_ts\":\"2019-01-04 11:03:05.123456\"},{\"asset_code\":\"msec_007_OK\",\"reading\":{\"value\":7},\"user_ts\":\"2019-01-04 11:03:05.100000\"},{\"asset_code\":\"msec_008_OK\",\"reading\":{\"value\":8},\"user_ts\":\"2019-01-04 11:03:05.123000\"},{\"asset_code\":\"msec_009_OK\",\"reading\":{\"value\":9},\"user_ts\":\"2019-03-03 11:03:03.123456\"},{\"asset_code\":\"msec_010_OK\",\"reading\":{\"value\":10},\"user_ts\":\"2019-03-04 10:03:04.123456\"},{\"asset_code\":\"msec_011_OK\",\"reading\":{\"value\":11},\"user_ts\":\"2019-03-05 12:03:05.123456\"},{\"asset_code\":\"msec_012_OK\",\"reading\":{\"value\":12},\"user_ts\":\"2019-03-04 08:33:04.123456\"},{\"asset_code\":\"msec_013_OK\",\"reading\":{\"value\":13},\"user_ts\":\"2019-03-05 13:33:05.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/104",
    "content": "{\"count\":1,\"rows\":[{\"reading\":{\"value\":9},\"user_ts\":\"2019-03-03 11:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/105",
    "content": "{\"count\":1,\"rows\":[{\"reading\":{\"value\":9},\"user_ts_alias\":\"2019-03-03 11:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/106",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_min\":\"2019-03-03 11:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/107",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_min\":\"2019-03-03 11:03:03.123456\",\"user_ts_max\":\"2019-03-03 11:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/108",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/109",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/11",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/110",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"UPDA\",\"data\":{\"json\":\"{\\\"providers\\\":[\\\"username123\\\",\\\"ldap_test_10\\\"]}\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/111",
    "content": "{\"created\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/112",
    "content": "{\"loaded\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/113",
    "content": "{\"deleted\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/115",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 31 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/116",
    "content": "{\"count\":2,\"rows\":[{\"asset_code\":\"Asset1\",\"time\":\"2019-10-11 17:11:00+02\",\"reading\":{\"rms\":{\"max\":99,\"min\":10,\"sum\":305,\"count\":7,\"average\":43.5714285714286},\"rate\":{\"max\":96,\"min\":2,\"sum\":812,\"count\":19,\"average\":42.7368421052632}}},{\"asset_code\":\"Asset2\",\"time\":\"2019-10-11 17:11:00+02\",\"reading\":{\"lux\":{\"max\":3456,\"min\":3456,\"sum\":3456,\"count\":1,\"average\":3456},\"temp\":{\"max\":90,\"min\":12.306,\"sum\":499.306,\"count\":10,\"average\":49.9306},\"pressure\":{\"max\":1023,\"min\":8,\"sum\":3024,\"count\":6,\"average\":504}}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/12",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/13",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"ERROR:  column \\\"nonexistant\\\" of relation \\\"test\\\" does not existLINE 1: ...ERT INTO fledge.test (id, key, description, data, Nonexistan...                                                             ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/14",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"Failed to parse JSON payload\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/15",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/16",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/17",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/18",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/19",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/2",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/20",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/21",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"column\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/22",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"condition\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/23",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"value\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/24",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" property must be a JSON object\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/25",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/26",
    "content": "{ \"entryPoint\" : \"Select sort\", \"message\" : \"Missing property \\\"column\\\"\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/27",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/28",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/29",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/3",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/30",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"UPDA\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/31",
    "content": "{ \"entryPoint\" : \"update\", \"message\" : \"Missing values or expressions object in payload\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/32",
    "content": "{\"count\":2,\"rows\":[{\"count_id\":1,\"key\":\"UPDA\"},{\"count_id\":1,\"key\":\"TEST1\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/33",
    "content": "{ \"error\" : \"Unsupported URL: /fledge/nothing\" }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/34",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  relation \\\"fledge.doesntexist\\\" does not existLINE 1: SELECT * FROM fledge.doesntexist;                      ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/35",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  column \\\"doesntexist\\\" does not existLINE 1: SELECT  * FROM fledge.test WHERE \\\"doesntexist\\\" = '9';                                         ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/37",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 3 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/38",
    "content": "{\"count\":2,\"rows\":[{\"id\":966457,\"asset_code\":\"MyAsset\",\"reading\":{\"rate\":18.4},\"user_ts\":\"2017-09-21 15:00:09.025655+01\",\"ts\":\"2017-10-04 11:38:39.368881+01\"},{\"id\":966458,\"asset_code\":\"MyAsset\",\"reading\":{\"rate\":45.1},\"user_ts\":\"2017-09-21 15:03:09.025655+01\",\"ts\":\"2017-10-04 11:38:39.368881+01\"}]}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/39",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/4",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/40",
    "content": "{ \"error\" : \"Missing query parameter count\" }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/41",
    "content": "{ \"error\" : \"Missing query parameter id\" }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/42",
    "content": "{ \"removed\" : 3,  \"unsentPurged\" : 3,  \"unsentRetained\" : 0,  \"readings\" : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/43",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/44",
    "content": "{\"count\":1,\"rows\":[{\"min_id\":1,\"max_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/45",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/46",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"MyDescription\":\"A test row\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/47",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"JSONvalue\":\"test1\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/48",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\",\"time\":\"01:14:26\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/49",
    "content": "{\"count\":5,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\",\"time\":\"13:14:26\"},{\"key\":\"TEST2\",\"description\":\"A test row\",\"time\":\"13:14:27\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"time\":\"13:14:28\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"time\":\"13:14:29\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"time\":\"13:15:00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/5",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/50",
    "content": "{\"count\":6,\"rows\":[{\"key\":\"TEST4\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/51",
    "content": "{\"count\":8,\"rows\":[{\"key\":\"TEST2\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:14:27\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:14:28\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:14:29\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:15:00\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:15:33\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:16:20\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 14:14:30\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 15:14:30\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/52",
    "content": "{\"count\":8,\"rows\":[{\"key\":\"TEST2\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:14:27 pm\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:14:28 pm\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:14:29 pm\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:15:00 pm\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:15:33 pm\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:16:20 pm\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 02:14:30 pm\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 03:14:30 pm\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/53",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"return object must have either a column or json property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/54",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"The json property is missing a properties property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/55",
    "content": "{\"count\":1,\"rows\":[{\"Entries\":9}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/56",
    "content": "{\"count\":1,\"rows\":[{\"sum_id\":\"43\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/57",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 100 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/58",
    "content": "{\"count\":1,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":52.55,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/59",
    "content": "{\"count\":2,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":53.7721518987342,\"asset_code\":\"MyAsset\",\"timestamp\":\"2017-10-11 17:10:52+02\"},{\"min\":2.0,\"max\":96.0,\"average\":47.9523809523809,\"asset_code\":\"MyAsset\",\"timestamp\":\"2017-10-11 17:10:50+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/6",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/60",
    "content": "{\"count\":2,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":53.7721518987342,\"asset_code\":\"MyAsset\",\"bucket\":\"11-10-20177 17:10:52\"},{\"min\":2.0,\"max\":96.0,\"average\":47.9523809523809,\"asset_code\":\"MyAsset\",\"bucket\":\"11-10-20177 17:10:50\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/61",
    "content": "{\"count\":5,\"rows\":[{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:28.622315+02\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:29.622315+02\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:00.622315+02\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:33.622315+02\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:16:20.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/62",
    "content": "{\"count\":4,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:26.622315+02\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:27.422315+02\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 14:14:30.622315+02\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 15:14:30.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/63",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:26.622315+02\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:27.422315+02\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:28.622315+02\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:29.622315+02\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:00.622315+02\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:33.622315+02\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:16:20.622315+02\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 14:14:30.622315+02\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 15:14:30.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/64",
    "content": "{ \"entryPoint\" : \"update\", \"message\" : \"No rows where updated\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/65",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:26.622315+02\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:27.422315+02\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:28.622315+02\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:29.622315+02\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:00.622315+02\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:33.622315+02\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:16:20.622315+02\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 14:14:30.622315+02\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 15:14:30.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/66",
    "content": "{\"count\":1,\"rows\":[{\"Count\":100,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/67",
    "content": "{\"count\":100,\"rows\":[{\"timestamp\":\"2017-10-11 17:10:51.927\",\"Rate\":90},{\"timestamp\":\"2017-10-11 17:10:51.930\",\"Rate\":13},{\"timestamp\":\"2017-10-11 17:10:51.933\",\"Rate\":84},{\"timestamp\":\"2017-10-11 17:10:51.936\",\"Rate\":96},{\"timestamp\":\"2017-10-11 17:10:51.939\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:51.942\",\"Rate\":54},{\"timestamp\":\"2017-10-11 17:10:51.946\",\"Rate\":28},{\"timestamp\":\"2017-10-11 17:10:51.949\",\"Rate\":3},{\"timestamp\":\"2017-10-11 17:10:51.952\",\"Rate\":77},{\"timestamp\":\"2017-10-11 17:10:51.955\",\"Rate\":38},{\"timestamp\":\"2017-10-11 17:10:51.959\",\"Rate\":26},{\"timestamp\":\"2017-10-11 17:10:51.963\",\"Rate\":86},{\"timestamp\":\"2017-10-11 17:10:51.966\",\"Rate\":39},{\"timestamp\":\"2017-10-11 17:10:51.970\",\"Rate\":57},{\"timestamp\":\"2017-10-11 17:10:51.973\",\"Rate\":73},{\"timestamp\":\"2017-10-11 17:10:51.979\",\"Rate\":22},{\"timestamp\":\"2017-10-11 17:10:51.982\",\"Rate\":34},{\"timestamp\":\"2017-10-11 17:10:51.986\",\"Rate\":78},{\"timestamp\":\"2017-10-11 17:10:51.990\",\"Rate\":20},{\"timestamp\":\"2017-10-11 17:10:51.993\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:51.996\",\"Rate\":17},{\"timestamp\":\"2017-10-11 17:10:52.000\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:52.005\",\"Rate\":18},{\"timestamp\":\"2017-10-11 17:10:52.009\",\"Rate\":52},{\"timestamp\":\"2017-10-11 17:10:52.012\",\"Rate\":62},{\"timestamp\":\"2017-10-11 17:10:52.015\",\"Rate\":47},{\"timestamp\":\"2017-10-11 17:10:52.019\",\"Rate\":73},{\"timestamp\":\"2017-10-11 17:10:52.022\",\"Rate\":9},{\"timestamp\":\"2017-10-11 17:10:52.026\",\"Rate\":66},{\"timestamp\":\"2017-10-11 17:10:52.029\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.031\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:52.034\",\"Rate\":41},{\"timestamp\":\"2017-10-11 17:10:52.037\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:52.040\",\"Rate\":69},{\"timestamp\":\"2017-10-11 
17:10:52.043\",\"Rate\":98},{\"timestamp\":\"2017-10-11 17:10:52.046\",\"Rate\":13},{\"timestamp\":\"2017-10-11 17:10:52.050\",\"Rate\":91},{\"timestamp\":\"2017-10-11 17:10:52.053\",\"Rate\":18},{\"timestamp\":\"2017-10-11 17:10:52.056\",\"Rate\":78},{\"timestamp\":\"2017-10-11 17:10:52.059\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:52.062\",\"Rate\":48},{\"timestamp\":\"2017-10-11 17:10:52.066\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.070\",\"Rate\":79},{\"timestamp\":\"2017-10-11 17:10:52.073\",\"Rate\":87},{\"timestamp\":\"2017-10-11 17:10:52.075\",\"Rate\":60},{\"timestamp\":\"2017-10-11 17:10:52.078\",\"Rate\":48},{\"timestamp\":\"2017-10-11 17:10:52.081\",\"Rate\":88},{\"timestamp\":\"2017-10-11 17:10:52.084\",\"Rate\":3},{\"timestamp\":\"2017-10-11 17:10:52.086\",\"Rate\":93},{\"timestamp\":\"2017-10-11 17:10:52.089\",\"Rate\":83},{\"timestamp\":\"2017-10-11 17:10:52.092\",\"Rate\":76},{\"timestamp\":\"2017-10-11 17:10:52.095\",\"Rate\":97},{\"timestamp\":\"2017-10-11 17:10:52.098\",\"Rate\":31},{\"timestamp\":\"2017-10-11 17:10:52.100\",\"Rate\":49},{\"timestamp\":\"2017-10-11 17:10:52.103\",\"Rate\":36},{\"timestamp\":\"2017-10-11 17:10:52.106\",\"Rate\":15},{\"timestamp\":\"2017-10-11 17:10:52.109\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.111\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.114\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.116\",\"Rate\":68},{\"timestamp\":\"2017-10-11 17:10:52.119\",\"Rate\":22},{\"timestamp\":\"2017-10-11 17:10:52.122\",\"Rate\":54},{\"timestamp\":\"2017-10-11 17:10:52.124\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.127\",\"Rate\":49},{\"timestamp\":\"2017-10-11 17:10:52.130\",\"Rate\":59},{\"timestamp\":\"2017-10-11 17:10:52.132\",\"Rate\":6},{\"timestamp\":\"2017-10-11 17:10:52.135\",\"Rate\":82},{\"timestamp\":\"2017-10-11 17:10:52.137\",\"Rate\":5},{\"timestamp\":\"2017-10-11 17:10:52.140\",\"Rate\":1},{\"timestamp\":\"2017-10-11 
17:10:52.142\",\"Rate\":53},{\"timestamp\":\"2017-10-11 17:10:52.145\",\"Rate\":69},{\"timestamp\":\"2017-10-11 17:10:52.147\",\"Rate\":97},{\"timestamp\":\"2017-10-11 17:10:52.150\",\"Rate\":58},{\"timestamp\":\"2017-10-11 17:10:52.153\",\"Rate\":76},{\"timestamp\":\"2017-10-11 17:10:52.157\",\"Rate\":81},{\"timestamp\":\"2017-10-11 17:10:52.160\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.163\",\"Rate\":4},{\"timestamp\":\"2017-10-11 17:10:52.165\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.169\",\"Rate\":5},{\"timestamp\":\"2017-10-11 17:10:52.171\",\"Rate\":72},{\"timestamp\":\"2017-10-11 17:10:52.174\",\"Rate\":20},{\"timestamp\":\"2017-10-11 17:10:52.176\",\"Rate\":58},{\"timestamp\":\"2017-10-11 17:10:52.179\",\"Rate\":75},{\"timestamp\":\"2017-10-11 17:10:52.182\",\"Rate\":74},{\"timestamp\":\"2017-10-11 17:10:52.184\",\"Rate\":60},{\"timestamp\":\"2017-10-11 17:10:52.187\",\"Rate\":96},{\"timestamp\":\"2017-10-11 17:10:52.189\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.192\",\"Rate\":40},{\"timestamp\":\"2017-10-11 17:10:52.195\",\"Rate\":33},{\"timestamp\":\"2017-10-11 17:10:52.197\",\"Rate\":87},{\"timestamp\":\"2017-10-11 17:10:52.200\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.203\",\"Rate\":40},{\"timestamp\":\"2017-10-11 17:10:52.206\",\"Rate\":44},{\"timestamp\":\"2017-10-11 17:10:52.208\",\"Rate\":7},{\"timestamp\":\"2017-10-11 17:10:52.211\",\"Rate\":52},{\"timestamp\":\"2017-10-11 17:10:52.214\",\"Rate\":93},{\"timestamp\":\"2017-10-11 17:10:52.219\",\"Rate\":43},{\"timestamp\":\"2017-10-11 17:10:52.222\",\"Rate\":66},{\"timestamp\":\"2017-10-11 17:10:52.225\",\"Rate\":8},{\"timestamp\":\"2017-10-11 17:10:52.228\",\"Rate\":79}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/68",
    "content": ""
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/69",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 4 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/7",
    "content": "{\"count\":1,\"rows\":[{\"count_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/70",
    "content": "{\"count\":4,\"rows\":[{\"id\":106,\"key\":\"TEST6\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:33.622315+02\"},{\"id\":106,\"key\":\"TEST7\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:16:20.622315+02\"},{\"id\":108,\"key\":\"TEST8\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 14:14:30.622315+02\"},{\"id\":109,\"key\":\"TEST9\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 15:14:30.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/71",
    "content": "{\"count\":2,\"rows\":[{\"description\":\"A test row\"},{\"description\":\"Updated with expression\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/72",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/73",
    "content": "{\"count\":2,\"rows\":[{\"id\":2,\"key\":\"UPDA\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}},{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"new value\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/74",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/75",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/76",
    "content": "{\"count\":1,\"rows\":[{\"id\":4,\"key\":\"Admin\",\"description\":\"URL of the admin API\",\"data\":{\"url\":{\"value\":\"new value\"}}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/77",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/78",
    "content": "{\"count\":1,\"rows\":[{\"Count\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/79",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:26.622315+02\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:27.422315+02\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:28.622315+02\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:29.622315+02\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:00.622315+02\"},{\"id\":106,\"key\":\"TEST6\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:33.622315+02\"},{\"id\":106,\"key\":\"TEST7\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:16:20.622315+02\"},{\"id\":108,\"key\":\"TEST8\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 14:14:30.622315+02\"},{\"id\":109,\"key\":\"TEST9\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 15:14:30.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/8",
    "content": "{\"count\":1,\"rows\":[{\"avg_id\":\"1.00000000000000000000\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/80",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"value\\\" of an \\\"newer\\\" condition must be an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/81",
    "content": "{\"count\":5,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:26.622315+02\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:27.422315+02\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:28.622315+02\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:29.622315+02\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:00.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/82",
    "content": "{\"count\":2,\"rows\":[{\"min\":2.0,\"max\":96.0,\"average\":47.9523809523809,\"user_ts\":\"2017-10-11 17:10:51\"},{\"min\":1.0,\"max\":98.0,\"average\":53.7721518987342,\"user_ts\":\"2017-10-11 17:10:52\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/83",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/84",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/85",
    "content": "{ \"entryPoint\" : \"appendReadings\", \"message\" : \"Payload is missing a readings array\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/86",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/87",
    "content": "{\"count\":100,\"rows\":[{\"timestamp\":\"2017-10-11 17:10:51.927\",\"Rate\":90},{\"timestamp\":\"2017-10-11 17:10:51.930\",\"Rate\":13},{\"timestamp\":\"2017-10-11 17:10:51.933\",\"Rate\":84},{\"timestamp\":\"2017-10-11 17:10:51.936\",\"Rate\":96},{\"timestamp\":\"2017-10-11 17:10:51.939\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:51.942\",\"Rate\":54},{\"timestamp\":\"2017-10-11 17:10:51.946\",\"Rate\":28},{\"timestamp\":\"2017-10-11 17:10:51.949\",\"Rate\":3},{\"timestamp\":\"2017-10-11 17:10:51.952\",\"Rate\":77},{\"timestamp\":\"2017-10-11 17:10:51.955\",\"Rate\":38},{\"timestamp\":\"2017-10-11 17:10:51.959\",\"Rate\":26},{\"timestamp\":\"2017-10-11 17:10:51.963\",\"Rate\":86},{\"timestamp\":\"2017-10-11 17:10:51.966\",\"Rate\":39},{\"timestamp\":\"2017-10-11 17:10:51.970\",\"Rate\":57},{\"timestamp\":\"2017-10-11 17:10:51.973\",\"Rate\":73},{\"timestamp\":\"2017-10-11 17:10:51.979\",\"Rate\":22},{\"timestamp\":\"2017-10-11 17:10:51.982\",\"Rate\":34},{\"timestamp\":\"2017-10-11 17:10:51.986\",\"Rate\":78},{\"timestamp\":\"2017-10-11 17:10:51.990\",\"Rate\":20},{\"timestamp\":\"2017-10-11 17:10:51.993\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:51.996\",\"Rate\":17},{\"timestamp\":\"2017-10-11 17:10:52.000\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:52.005\",\"Rate\":18},{\"timestamp\":\"2017-10-11 17:10:52.009\",\"Rate\":52},{\"timestamp\":\"2017-10-11 17:10:52.012\",\"Rate\":62},{\"timestamp\":\"2017-10-11 17:10:52.015\",\"Rate\":47},{\"timestamp\":\"2017-10-11 17:10:52.019\",\"Rate\":73},{\"timestamp\":\"2017-10-11 17:10:52.022\",\"Rate\":9},{\"timestamp\":\"2017-10-11 17:10:52.026\",\"Rate\":66},{\"timestamp\":\"2017-10-11 17:10:52.029\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.031\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:52.034\",\"Rate\":41},{\"timestamp\":\"2017-10-11 17:10:52.037\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:52.040\",\"Rate\":69},{\"timestamp\":\"2017-10-11 
17:10:52.043\",\"Rate\":98},{\"timestamp\":\"2017-10-11 17:10:52.046\",\"Rate\":13},{\"timestamp\":\"2017-10-11 17:10:52.050\",\"Rate\":91},{\"timestamp\":\"2017-10-11 17:10:52.053\",\"Rate\":18},{\"timestamp\":\"2017-10-11 17:10:52.056\",\"Rate\":78},{\"timestamp\":\"2017-10-11 17:10:52.059\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:52.062\",\"Rate\":48},{\"timestamp\":\"2017-10-11 17:10:52.066\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.070\",\"Rate\":79},{\"timestamp\":\"2017-10-11 17:10:52.073\",\"Rate\":87},{\"timestamp\":\"2017-10-11 17:10:52.075\",\"Rate\":60},{\"timestamp\":\"2017-10-11 17:10:52.078\",\"Rate\":48},{\"timestamp\":\"2017-10-11 17:10:52.081\",\"Rate\":88},{\"timestamp\":\"2017-10-11 17:10:52.084\",\"Rate\":3},{\"timestamp\":\"2017-10-11 17:10:52.086\",\"Rate\":93},{\"timestamp\":\"2017-10-11 17:10:52.089\",\"Rate\":83},{\"timestamp\":\"2017-10-11 17:10:52.092\",\"Rate\":76},{\"timestamp\":\"2017-10-11 17:10:52.095\",\"Rate\":97},{\"timestamp\":\"2017-10-11 17:10:52.098\",\"Rate\":31},{\"timestamp\":\"2017-10-11 17:10:52.100\",\"Rate\":49},{\"timestamp\":\"2017-10-11 17:10:52.103\",\"Rate\":36},{\"timestamp\":\"2017-10-11 17:10:52.106\",\"Rate\":15},{\"timestamp\":\"2017-10-11 17:10:52.109\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.111\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.114\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.116\",\"Rate\":68},{\"timestamp\":\"2017-10-11 17:10:52.119\",\"Rate\":22},{\"timestamp\":\"2017-10-11 17:10:52.122\",\"Rate\":54},{\"timestamp\":\"2017-10-11 17:10:52.124\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.127\",\"Rate\":49},{\"timestamp\":\"2017-10-11 17:10:52.130\",\"Rate\":59},{\"timestamp\":\"2017-10-11 17:10:52.132\",\"Rate\":6},{\"timestamp\":\"2017-10-11 17:10:52.135\",\"Rate\":82},{\"timestamp\":\"2017-10-11 17:10:52.137\",\"Rate\":5},{\"timestamp\":\"2017-10-11 17:10:52.140\",\"Rate\":1},{\"timestamp\":\"2017-10-11 
17:10:52.142\",\"Rate\":53},{\"timestamp\":\"2017-10-11 17:10:52.145\",\"Rate\":69},{\"timestamp\":\"2017-10-11 17:10:52.147\",\"Rate\":97},{\"timestamp\":\"2017-10-11 17:10:52.150\",\"Rate\":58},{\"timestamp\":\"2017-10-11 17:10:52.153\",\"Rate\":76},{\"timestamp\":\"2017-10-11 17:10:52.157\",\"Rate\":81},{\"timestamp\":\"2017-10-11 17:10:52.160\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.163\",\"Rate\":4},{\"timestamp\":\"2017-10-11 17:10:52.165\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.169\",\"Rate\":5},{\"timestamp\":\"2017-10-11 17:10:52.171\",\"Rate\":72},{\"timestamp\":\"2017-10-11 17:10:52.174\",\"Rate\":20},{\"timestamp\":\"2017-10-11 17:10:52.176\",\"Rate\":58},{\"timestamp\":\"2017-10-11 17:10:52.179\",\"Rate\":75},{\"timestamp\":\"2017-10-11 17:10:52.182\",\"Rate\":74},{\"timestamp\":\"2017-10-11 17:10:52.184\",\"Rate\":60},{\"timestamp\":\"2017-10-11 17:10:52.187\",\"Rate\":96},{\"timestamp\":\"2017-10-11 17:10:52.189\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.192\",\"Rate\":40},{\"timestamp\":\"2017-10-11 17:10:52.195\",\"Rate\":33},{\"timestamp\":\"2017-10-11 17:10:52.197\",\"Rate\":87},{\"timestamp\":\"2017-10-11 17:10:52.200\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.203\",\"Rate\":40},{\"timestamp\":\"2017-10-11 17:10:52.206\",\"Rate\":44},{\"timestamp\":\"2017-10-11 17:10:52.208\",\"Rate\":7},{\"timestamp\":\"2017-10-11 17:10:52.211\",\"Rate\":52},{\"timestamp\":\"2017-10-11 17:10:52.214\",\"Rate\":93},{\"timestamp\":\"2017-10-11 17:10:52.219\",\"Rate\":43},{\"timestamp\":\"2017-10-11 17:10:52.222\",\"Rate\":66},{\"timestamp\":\"2017-10-11 17:10:52.225\",\"Rate\":8},{\"timestamp\":\"2017-10-11 17:10:52.228\",\"Rate\":79}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/88",
    "content": "{ \"entryPoint\" : \"limit\", \"message\" : \"Limit must be specfied as an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/89",
    "content": "{ \"entryPoint\" : \"skip\", \"message\" : \"Skip must be specfied as an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/9",
    "content": "{\"count\":1,\"rows\":[{\"min_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/90",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  syntax error at or near \\\"\\\\\"LINE 1: ... FROM fledge.test2 WHERE \\\"id\\\" >= '2' ORDER BY \\\"id\\\\\">\\\\\"\\\" ASC;                                                               ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/91",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  time zone \\\"bad\\\" not recognized\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/92",
    "content": "{\"count\":1,\"rows\":[{\"description\":\"added'some'ch'''ars'\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/93",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/94",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/95",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/96",
    "content": "{\"count\":1,\"rows\":[{\"count_id\":10}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/97",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"value\\\" of a \\\"in\\\" condition must be an array and must not be empty.\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/98",
    "content": "{\"count\":1,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":52.55,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME/99",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/1",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/10",
    "content": "{\"count\":1,\"rows\":[{\"max_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/100",
    "content": ""
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/101",
    "content": "{ \"removed\" : 100,  \"unsentPurged\" : 100,  \"unsentRetained\" : 0,  \"readings\" : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/102",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 11 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/103",
    "content": "{\"count\":11,\"rows\":[{\"asset_code\":\"msec_003_OK\",\"reading\":{\"value\":3},\"user_ts\":\"2019-01-01 11:01:01.000000\"},{\"asset_code\":\"msec_004_OK\",\"reading\":{\"value\":4},\"user_ts\":\"2019-01-02 11:02:01.000000\"},{\"asset_code\":\"msec_005_OK\",\"reading\":{\"value\":5},\"user_ts\":\"2019-01-03 11:02:02.841000\"},{\"asset_code\":\"msec_006_OK\",\"reading\":{\"value\":6},\"user_ts\":\"2019-01-04 11:03:05.123456\"},{\"asset_code\":\"msec_007_OK\",\"reading\":{\"value\":7},\"user_ts\":\"2019-01-04 11:03:05.100000\"},{\"asset_code\":\"msec_008_OK\",\"reading\":{\"value\":8},\"user_ts\":\"2019-01-04 11:03:05.123000\"},{\"asset_code\":\"msec_009_OK\",\"reading\":{\"value\":9},\"user_ts\":\"2019-03-03 11:03:03.123456\"},{\"asset_code\":\"msec_010_OK\",\"reading\":{\"value\":10},\"user_ts\":\"2019-03-04 10:03:04.123456\"},{\"asset_code\":\"msec_011_OK\",\"reading\":{\"value\":11},\"user_ts\":\"2019-03-05 12:03:05.123456\"},{\"asset_code\":\"msec_012_OK\",\"reading\":{\"value\":12},\"user_ts\":\"2019-03-04 08:33:04.123456\"},{\"asset_code\":\"msec_013_OK\",\"reading\":{\"value\":13},\"user_ts\":\"2019-03-05 13:33:05.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/104",
    "content": "{\"count\":1,\"rows\":[{\"reading\":{\"value\":9},\"user_ts\":\"2019-03-03 11:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/105",
    "content": "{\"count\":1,\"rows\":[{\"reading\":{\"value\":9},\"user_ts_alias\":\"2019-03-03 11:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/106",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_min\":\"2019-03-03 11:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/107",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_min\":\"2019-03-03 11:03:03.123456\",\"user_ts_max\":\"2019-03-03 11:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/108",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/109",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/11",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/110",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"UPDA\",\"data\":{\"json\":\"{\\\"providers\\\":[\\\"username123\\\",\\\"ldap_test_10\\\"]}\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/111",
    "content": "{\"created\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/112",
    "content": "{\"loaded\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/113",
    "content": "{\"deleted\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/115",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 31 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/116",
    "content": "{\"count\":2,\"rows\":[{\"asset_code\":\"Asset1\",\"time\":\"2019-10-11 17:11:00+02\",\"reading\":{\"rms\":{\"max\":99,\"min\":10,\"sum\":305,\"count\":7,\"average\":43.57142857142857},\"rate\":{\"max\":96,\"min\":2,\"sum\":812,\"count\":19,\"average\":42.73684210526316}}},{\"asset_code\":\"Asset2\",\"time\":\"2019-10-11 17:11:00+02\",\"reading\":{\"lux\":{\"max\":3456,\"min\":3456,\"sum\":3456,\"count\":1,\"average\":3456},\"temp\":{\"max\":90,\"min\":12.306,\"sum\":499.306,\"count\":10,\"average\":49.9306},\"pressure\":{\"max\":1023,\"min\":8,\"sum\":3024,\"count\":6,\"average\":504}}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/12",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/13",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"ERROR:  column \\\"nonexistant\\\" of relation \\\"test\\\" does not existLINE 1: ...ERT INTO fledge.test (id, key, description, data, Nonexistan...                                                             ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/14",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"Failed to parse JSON payload\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/15",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/16",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/17",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/18",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/19",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/2",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/20",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/21",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"column\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/22",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"condition\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/23",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"value\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/24",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" property must be a JSON object\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/25",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/26",
    "content": "{ \"entryPoint\" : \"Select sort\", \"message\" : \"Missing property \\\"column\\\"\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/27",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/28",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/29",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/3",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/30",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"UPDA\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/31",
    "content": "{ \"entryPoint\" : \"update\", \"message\" : \"Missing values or expressions object in payload\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/32",
    "content": "{\"count\":2,\"rows\":[{\"count_id\":1,\"key\":\"UPDA\"},{\"count_id\":1,\"key\":\"TEST1\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/33",
    "content": "{ \"error\" : \"Unsupported URL: /fledge/nothing\" }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/34",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  relation \\\"fledge.doesntexist\\\" does not existLINE 1: SELECT * FROM fledge.doesntexist;                      ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/35",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  column \\\"doesntexist\\\" does not existLINE 1: SELECT  * FROM fledge.test WHERE \\\"doesntexist\\\" = '9';                                         ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/37",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 3 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/38",
    "content": "{\"count\":2,\"rows\":[{\"id\":966457,\"asset_code\":\"MyAsset\",\"reading\":{\"rate\":18.4},\"user_ts\":\"2017-09-21 15:00:09.025655+01\",\"ts\":\"2017-10-04 11:38:39.368881+01\"},{\"id\":966458,\"asset_code\":\"MyAsset\",\"reading\":{\"rate\":45.1},\"user_ts\":\"2017-09-21 15:03:09.025655+01\",\"ts\":\"2017-10-04 11:38:39.368881+01\"}]}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/39",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/4",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/40",
    "content": "{ \"error\" : \"Missing query parameter count\" }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/41",
    "content": "{ \"error\" : \"Missing query parameter id\" }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/42",
    "content": "{ \"removed\" : 3,  \"unsentPurged\" : 3,  \"unsentRetained\" : 0,  \"readings\" : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/43",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/44",
    "content": "{\"count\":1,\"rows\":[{\"min_id\":1,\"max_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/45",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/46",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"MyDescription\":\"A test row\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/47",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"JSONvalue\":\"test1\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/48",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\",\"time\":\"01:14:26\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/49",
    "content": "{\"count\":5,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\",\"time\":\"13:14:26\"},{\"key\":\"TEST2\",\"description\":\"A test row\",\"time\":\"13:14:27\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"time\":\"13:14:28\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"time\":\"13:14:29\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"time\":\"13:15:00\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/5",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/50",
    "content": "{\"count\":6,\"rows\":[{\"key\":\"TEST4\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"date\":\"10 Oct 2017\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/51",
    "content": "{\"count\":8,\"rows\":[{\"key\":\"TEST2\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:14:27\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:14:28\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:14:29\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:15:00\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:15:33\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 13:16:20\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 14:14:30\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 15:14:30\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/52",
    "content": "{\"count\":8,\"rows\":[{\"key\":\"TEST2\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:14:27 pm\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:14:28 pm\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:14:29 pm\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:15:00 pm\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:15:33 pm\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 01:16:20 pm\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 02:14:30 pm\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"date\":\"10 Oct 2017 03:14:30 pm\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/53",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"return object must have either a column or json property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/54",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"The json property is missing a properties property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/55",
    "content": "{\"count\":1,\"rows\":[{\"Entries\":9}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/56",
    "content": "{\"count\":1,\"rows\":[{\"sum_id\":\"43\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/57",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 100 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/58",
    "content": "{\"count\":1,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":52.55,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/59",
    "content": "{\"count\":2,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":53.77215189873418,\"asset_code\":\"MyAsset\",\"timestamp\":\"2017-10-11 17:10:52+02\"},{\"min\":2.0,\"max\":96.0,\"average\":47.95238095238095,\"asset_code\":\"MyAsset\",\"timestamp\":\"2017-10-11 17:10:50+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/6",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/60",
    "content": "{\"count\":2,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":53.77215189873418,\"asset_code\":\"MyAsset\",\"bucket\":\"11-10-20177 17:10:52\"},{\"min\":2.0,\"max\":96.0,\"average\":47.95238095238095,\"asset_code\":\"MyAsset\",\"bucket\":\"11-10-20177 17:10:50\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/61",
    "content": "{\"count\":5,\"rows\":[{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:28.622315+02\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:29.622315+02\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:00.622315+02\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:33.622315+02\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:16:20.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/62",
    "content": "{\"count\":4,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:26.622315+02\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:27.422315+02\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 14:14:30.622315+02\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 15:14:30.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/63",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:26.622315+02\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:27.422315+02\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:28.622315+02\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:29.622315+02\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:00.622315+02\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:33.622315+02\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:16:20.622315+02\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 14:14:30.622315+02\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 15:14:30.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/64",
    "content": "{ \"entryPoint\" : \"update\", \"message\" : \"No rows where updated\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/65",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:26.622315+02\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:27.422315+02\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:28.622315+02\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:29.622315+02\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:00.622315+02\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:33.622315+02\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:16:20.622315+02\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 14:14:30.622315+02\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 15:14:30.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/66",
    "content": "{\"count\":1,\"rows\":[{\"Count\":100,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/67",
    "content": "{\"count\":100,\"rows\":[{\"timestamp\":\"2017-10-11 17:10:51.927\",\"Rate\":90},{\"timestamp\":\"2017-10-11 17:10:51.930\",\"Rate\":13},{\"timestamp\":\"2017-10-11 17:10:51.933\",\"Rate\":84},{\"timestamp\":\"2017-10-11 17:10:51.936\",\"Rate\":96},{\"timestamp\":\"2017-10-11 17:10:51.939\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:51.942\",\"Rate\":54},{\"timestamp\":\"2017-10-11 17:10:51.946\",\"Rate\":28},{\"timestamp\":\"2017-10-11 17:10:51.949\",\"Rate\":3},{\"timestamp\":\"2017-10-11 17:10:51.952\",\"Rate\":77},{\"timestamp\":\"2017-10-11 17:10:51.955\",\"Rate\":38},{\"timestamp\":\"2017-10-11 17:10:51.959\",\"Rate\":26},{\"timestamp\":\"2017-10-11 17:10:51.963\",\"Rate\":86},{\"timestamp\":\"2017-10-11 17:10:51.966\",\"Rate\":39},{\"timestamp\":\"2017-10-11 17:10:51.970\",\"Rate\":57},{\"timestamp\":\"2017-10-11 17:10:51.973\",\"Rate\":73},{\"timestamp\":\"2017-10-11 17:10:51.979\",\"Rate\":22},{\"timestamp\":\"2017-10-11 17:10:51.982\",\"Rate\":34},{\"timestamp\":\"2017-10-11 17:10:51.986\",\"Rate\":78},{\"timestamp\":\"2017-10-11 17:10:51.990\",\"Rate\":20},{\"timestamp\":\"2017-10-11 17:10:51.993\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:51.996\",\"Rate\":17},{\"timestamp\":\"2017-10-11 17:10:52.000\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:52.005\",\"Rate\":18},{\"timestamp\":\"2017-10-11 17:10:52.009\",\"Rate\":52},{\"timestamp\":\"2017-10-11 17:10:52.012\",\"Rate\":62},{\"timestamp\":\"2017-10-11 17:10:52.015\",\"Rate\":47},{\"timestamp\":\"2017-10-11 17:10:52.019\",\"Rate\":73},{\"timestamp\":\"2017-10-11 17:10:52.022\",\"Rate\":9},{\"timestamp\":\"2017-10-11 17:10:52.026\",\"Rate\":66},{\"timestamp\":\"2017-10-11 17:10:52.029\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.031\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:52.034\",\"Rate\":41},{\"timestamp\":\"2017-10-11 17:10:52.037\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:52.040\",\"Rate\":69},{\"timestamp\":\"2017-10-11 
17:10:52.043\",\"Rate\":98},{\"timestamp\":\"2017-10-11 17:10:52.046\",\"Rate\":13},{\"timestamp\":\"2017-10-11 17:10:52.050\",\"Rate\":91},{\"timestamp\":\"2017-10-11 17:10:52.053\",\"Rate\":18},{\"timestamp\":\"2017-10-11 17:10:52.056\",\"Rate\":78},{\"timestamp\":\"2017-10-11 17:10:52.059\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:52.062\",\"Rate\":48},{\"timestamp\":\"2017-10-11 17:10:52.066\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.070\",\"Rate\":79},{\"timestamp\":\"2017-10-11 17:10:52.073\",\"Rate\":87},{\"timestamp\":\"2017-10-11 17:10:52.075\",\"Rate\":60},{\"timestamp\":\"2017-10-11 17:10:52.078\",\"Rate\":48},{\"timestamp\":\"2017-10-11 17:10:52.081\",\"Rate\":88},{\"timestamp\":\"2017-10-11 17:10:52.084\",\"Rate\":3},{\"timestamp\":\"2017-10-11 17:10:52.086\",\"Rate\":93},{\"timestamp\":\"2017-10-11 17:10:52.089\",\"Rate\":83},{\"timestamp\":\"2017-10-11 17:10:52.092\",\"Rate\":76},{\"timestamp\":\"2017-10-11 17:10:52.095\",\"Rate\":97},{\"timestamp\":\"2017-10-11 17:10:52.098\",\"Rate\":31},{\"timestamp\":\"2017-10-11 17:10:52.100\",\"Rate\":49},{\"timestamp\":\"2017-10-11 17:10:52.103\",\"Rate\":36},{\"timestamp\":\"2017-10-11 17:10:52.106\",\"Rate\":15},{\"timestamp\":\"2017-10-11 17:10:52.109\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.111\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.114\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.116\",\"Rate\":68},{\"timestamp\":\"2017-10-11 17:10:52.119\",\"Rate\":22},{\"timestamp\":\"2017-10-11 17:10:52.122\",\"Rate\":54},{\"timestamp\":\"2017-10-11 17:10:52.124\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.127\",\"Rate\":49},{\"timestamp\":\"2017-10-11 17:10:52.130\",\"Rate\":59},{\"timestamp\":\"2017-10-11 17:10:52.132\",\"Rate\":6},{\"timestamp\":\"2017-10-11 17:10:52.135\",\"Rate\":82},{\"timestamp\":\"2017-10-11 17:10:52.137\",\"Rate\":5},{\"timestamp\":\"2017-10-11 17:10:52.140\",\"Rate\":1},{\"timestamp\":\"2017-10-11 
17:10:52.142\",\"Rate\":53},{\"timestamp\":\"2017-10-11 17:10:52.145\",\"Rate\":69},{\"timestamp\":\"2017-10-11 17:10:52.147\",\"Rate\":97},{\"timestamp\":\"2017-10-11 17:10:52.150\",\"Rate\":58},{\"timestamp\":\"2017-10-11 17:10:52.153\",\"Rate\":76},{\"timestamp\":\"2017-10-11 17:10:52.157\",\"Rate\":81},{\"timestamp\":\"2017-10-11 17:10:52.160\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.163\",\"Rate\":4},{\"timestamp\":\"2017-10-11 17:10:52.165\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.169\",\"Rate\":5},{\"timestamp\":\"2017-10-11 17:10:52.171\",\"Rate\":72},{\"timestamp\":\"2017-10-11 17:10:52.174\",\"Rate\":20},{\"timestamp\":\"2017-10-11 17:10:52.176\",\"Rate\":58},{\"timestamp\":\"2017-10-11 17:10:52.179\",\"Rate\":75},{\"timestamp\":\"2017-10-11 17:10:52.182\",\"Rate\":74},{\"timestamp\":\"2017-10-11 17:10:52.184\",\"Rate\":60},{\"timestamp\":\"2017-10-11 17:10:52.187\",\"Rate\":96},{\"timestamp\":\"2017-10-11 17:10:52.189\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.192\",\"Rate\":40},{\"timestamp\":\"2017-10-11 17:10:52.195\",\"Rate\":33},{\"timestamp\":\"2017-10-11 17:10:52.197\",\"Rate\":87},{\"timestamp\":\"2017-10-11 17:10:52.200\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.203\",\"Rate\":40},{\"timestamp\":\"2017-10-11 17:10:52.206\",\"Rate\":44},{\"timestamp\":\"2017-10-11 17:10:52.208\",\"Rate\":7},{\"timestamp\":\"2017-10-11 17:10:52.211\",\"Rate\":52},{\"timestamp\":\"2017-10-11 17:10:52.214\",\"Rate\":93},{\"timestamp\":\"2017-10-11 17:10:52.219\",\"Rate\":43},{\"timestamp\":\"2017-10-11 17:10:52.222\",\"Rate\":66},{\"timestamp\":\"2017-10-11 17:10:52.225\",\"Rate\":8},{\"timestamp\":\"2017-10-11 17:10:52.228\",\"Rate\":79}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/68",
    "content": ""
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/69",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 4 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/7",
    "content": "{\"count\":1,\"rows\":[{\"count_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/70",
    "content": "{\"count\":4,\"rows\":[{\"id\":106,\"key\":\"TEST6\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:33.622315+02\"},{\"id\":106,\"key\":\"TEST7\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:16:20.622315+02\"},{\"id\":108,\"key\":\"TEST8\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 14:14:30.622315+02\"},{\"id\":109,\"key\":\"TEST9\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 15:14:30.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/71",
    "content": "{\"count\":2,\"rows\":[{\"description\":\"A test row\"},{\"description\":\"Updated with expression\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/72",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/73",
    "content": "{\"count\":2,\"rows\":[{\"id\":2,\"key\":\"UPDA\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}},{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"new value\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/74",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/75",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/76",
    "content": "{\"count\":1,\"rows\":[{\"id\":4,\"key\":\"Admin\",\"description\":\"URL of the admin API\",\"data\":{\"url\":{\"value\":\"new value\"}}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/77",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/78",
    "content": "{\"count\":1,\"rows\":[{\"Count\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/79",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:26.622315+02\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:27.422315+02\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:28.622315+02\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:29.622315+02\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:00.622315+02\"},{\"id\":106,\"key\":\"TEST6\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:33.622315+02\"},{\"id\":106,\"key\":\"TEST7\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:16:20.622315+02\"},{\"id\":108,\"key\":\"TEST8\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 14:14:30.622315+02\"},{\"id\":109,\"key\":\"TEST9\",\"description\":\"Updated with expression\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 15:14:30.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/8",
    "content": "{\"count\":1,\"rows\":[{\"avg_id\":\"1.00000000000000000000\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/80",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"value\\\" of an \\\"newer\\\" condition must be an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/81",
    "content": "{\"count\":5,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:26.622315+02\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:27.422315+02\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:28.622315+02\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:14:29.622315+02\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"},\"prop1\":\"test1\"},\"ts\":\"2017-10-10 13:15:00.622315+02\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/82",
    "content": "{\"count\":2,\"rows\":[{\"min\":2.0,\"max\":96.0,\"average\":47.95238095238095,\"user_ts\":\"2017-10-11 17:10:51\"},{\"min\":1.0,\"max\":98.0,\"average\":53.77215189873418,\"user_ts\":\"2017-10-11 17:10:52\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/83",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/84",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/85",
    "content": "{ \"entryPoint\" : \"appendReadings\", \"message\" : \"Payload is missing a readings array\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/86",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/87",
    "content": "{\"count\":100,\"rows\":[{\"timestamp\":\"2017-10-11 17:10:51.927\",\"Rate\":90},{\"timestamp\":\"2017-10-11 17:10:51.930\",\"Rate\":13},{\"timestamp\":\"2017-10-11 17:10:51.933\",\"Rate\":84},{\"timestamp\":\"2017-10-11 17:10:51.936\",\"Rate\":96},{\"timestamp\":\"2017-10-11 17:10:51.939\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:51.942\",\"Rate\":54},{\"timestamp\":\"2017-10-11 17:10:51.946\",\"Rate\":28},{\"timestamp\":\"2017-10-11 17:10:51.949\",\"Rate\":3},{\"timestamp\":\"2017-10-11 17:10:51.952\",\"Rate\":77},{\"timestamp\":\"2017-10-11 17:10:51.955\",\"Rate\":38},{\"timestamp\":\"2017-10-11 17:10:51.959\",\"Rate\":26},{\"timestamp\":\"2017-10-11 17:10:51.963\",\"Rate\":86},{\"timestamp\":\"2017-10-11 17:10:51.966\",\"Rate\":39},{\"timestamp\":\"2017-10-11 17:10:51.970\",\"Rate\":57},{\"timestamp\":\"2017-10-11 17:10:51.973\",\"Rate\":73},{\"timestamp\":\"2017-10-11 17:10:51.979\",\"Rate\":22},{\"timestamp\":\"2017-10-11 17:10:51.982\",\"Rate\":34},{\"timestamp\":\"2017-10-11 17:10:51.986\",\"Rate\":78},{\"timestamp\":\"2017-10-11 17:10:51.990\",\"Rate\":20},{\"timestamp\":\"2017-10-11 17:10:51.993\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:51.996\",\"Rate\":17},{\"timestamp\":\"2017-10-11 17:10:52.000\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:52.005\",\"Rate\":18},{\"timestamp\":\"2017-10-11 17:10:52.009\",\"Rate\":52},{\"timestamp\":\"2017-10-11 17:10:52.012\",\"Rate\":62},{\"timestamp\":\"2017-10-11 17:10:52.015\",\"Rate\":47},{\"timestamp\":\"2017-10-11 17:10:52.019\",\"Rate\":73},{\"timestamp\":\"2017-10-11 17:10:52.022\",\"Rate\":9},{\"timestamp\":\"2017-10-11 17:10:52.026\",\"Rate\":66},{\"timestamp\":\"2017-10-11 17:10:52.029\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.031\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:52.034\",\"Rate\":41},{\"timestamp\":\"2017-10-11 17:10:52.037\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:52.040\",\"Rate\":69},{\"timestamp\":\"2017-10-11 17:10:52.043\",\"Rate\":98},{\"timestamp\":\"2017-10-11 17:10:52.046\",\"Rate\":13},{\"timestamp\":\"2017-10-11 17:10:52.050\",\"Rate\":91},{\"timestamp\":\"2017-10-11 17:10:52.053\",\"Rate\":18},{\"timestamp\":\"2017-10-11 17:10:52.056\",\"Rate\":78},{\"timestamp\":\"2017-10-11 17:10:52.059\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:52.062\",\"Rate\":48},{\"timestamp\":\"2017-10-11 17:10:52.066\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.070\",\"Rate\":79},{\"timestamp\":\"2017-10-11 17:10:52.073\",\"Rate\":87},{\"timestamp\":\"2017-10-11 17:10:52.075\",\"Rate\":60},{\"timestamp\":\"2017-10-11 17:10:52.078\",\"Rate\":48},{\"timestamp\":\"2017-10-11 17:10:52.081\",\"Rate\":88},{\"timestamp\":\"2017-10-11 17:10:52.084\",\"Rate\":3},{\"timestamp\":\"2017-10-11 17:10:52.086\",\"Rate\":93},{\"timestamp\":\"2017-10-11 17:10:52.089\",\"Rate\":83},{\"timestamp\":\"2017-10-11 17:10:52.092\",\"Rate\":76},{\"timestamp\":\"2017-10-11 17:10:52.095\",\"Rate\":97},{\"timestamp\":\"2017-10-11 17:10:52.098\",\"Rate\":31},{\"timestamp\":\"2017-10-11 17:10:52.100\",\"Rate\":49},{\"timestamp\":\"2017-10-11 17:10:52.103\",\"Rate\":36},{\"timestamp\":\"2017-10-11 17:10:52.106\",\"Rate\":15},{\"timestamp\":\"2017-10-11 17:10:52.109\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.111\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.114\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.116\",\"Rate\":68},{\"timestamp\":\"2017-10-11 17:10:52.119\",\"Rate\":22},{\"timestamp\":\"2017-10-11 17:10:52.122\",\"Rate\":54},{\"timestamp\":\"2017-10-11 17:10:52.124\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.127\",\"Rate\":49},{\"timestamp\":\"2017-10-11 17:10:52.130\",\"Rate\":59},{\"timestamp\":\"2017-10-11 17:10:52.132\",\"Rate\":6},{\"timestamp\":\"2017-10-11 17:10:52.135\",\"Rate\":82},{\"timestamp\":\"2017-10-11 17:10:52.137\",\"Rate\":5},{\"timestamp\":\"2017-10-11 17:10:52.140\",\"Rate\":1},{\"timestamp\":\"2017-10-11 17:10:52.142\",\"Rate\":53},{\"timestamp\":\"2017-10-11 17:10:52.145\",\"Rate\":69},{\"timestamp\":\"2017-10-11 17:10:52.147\",\"Rate\":97},{\"timestamp\":\"2017-10-11 17:10:52.150\",\"Rate\":58},{\"timestamp\":\"2017-10-11 17:10:52.153\",\"Rate\":76},{\"timestamp\":\"2017-10-11 17:10:52.157\",\"Rate\":81},{\"timestamp\":\"2017-10-11 17:10:52.160\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.163\",\"Rate\":4},{\"timestamp\":\"2017-10-11 17:10:52.165\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.169\",\"Rate\":5},{\"timestamp\":\"2017-10-11 17:10:52.171\",\"Rate\":72},{\"timestamp\":\"2017-10-11 17:10:52.174\",\"Rate\":20},{\"timestamp\":\"2017-10-11 17:10:52.176\",\"Rate\":58},{\"timestamp\":\"2017-10-11 17:10:52.179\",\"Rate\":75},{\"timestamp\":\"2017-10-11 17:10:52.182\",\"Rate\":74},{\"timestamp\":\"2017-10-11 17:10:52.184\",\"Rate\":60},{\"timestamp\":\"2017-10-11 17:10:52.187\",\"Rate\":96},{\"timestamp\":\"2017-10-11 17:10:52.189\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.192\",\"Rate\":40},{\"timestamp\":\"2017-10-11 17:10:52.195\",\"Rate\":33},{\"timestamp\":\"2017-10-11 17:10:52.197\",\"Rate\":87},{\"timestamp\":\"2017-10-11 17:10:52.200\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.203\",\"Rate\":40},{\"timestamp\":\"2017-10-11 17:10:52.206\",\"Rate\":44},{\"timestamp\":\"2017-10-11 17:10:52.208\",\"Rate\":7},{\"timestamp\":\"2017-10-11 17:10:52.211\",\"Rate\":52},{\"timestamp\":\"2017-10-11 17:10:52.214\",\"Rate\":93},{\"timestamp\":\"2017-10-11 17:10:52.219\",\"Rate\":43},{\"timestamp\":\"2017-10-11 17:10:52.222\",\"Rate\":66},{\"timestamp\":\"2017-10-11 17:10:52.225\",\"Rate\":8},{\"timestamp\":\"2017-10-11 17:10:52.228\",\"Rate\":79}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/88",
    "content": "{ \"entryPoint\" : \"limit\", \"message\" : \"Limit must be specfied as an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/89",
    "content": "{ \"entryPoint\" : \"skip\", \"message\" : \"Skip must be specfied as an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/9",
    "content": "{\"count\":1,\"rows\":[{\"min_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/90",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  syntax error at or near \\\"\\\\\"LINE 1: ... FROM fledge.test2 WHERE \\\"id\\\" >= '2' ORDER BY \\\"id\\\\\">\\\\\"\\\" ASC;                                                               ^\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/91",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"ERROR:  time zone \\\"bad\\\" not recognized\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/92",
    "content": "{\"count\":1,\"rows\":[{\"description\":\"added'some'ch'''ars'\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/93",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/94",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/95",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/96",
    "content": "{\"count\":1,\"rows\":[{\"count_id\":10}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/97",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"value\\\" of a \\\"in\\\" condition must be an array and must not be empty.\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/98",
    "content": "{\"count\":1,\"rows\":[{\"min\":1.0,\"max\":98.0,\"average\":52.55,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/expected_EUROPE_ROME_PG12/99",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/makeReadings.sh",
    "content": "#!/bin/sh\nif [ $# -eq \"1\" ] ; then\nnreadings=$1\nelse\nnreadings=100\nfi\necho \"{\"\necho \"   \\\"readings\\\" : [\"\nwhile [ $nreadings -gt 1 ]; do\nuuid=`uuidgen`\nts=`date --rfc-3339=ns | sed -e 's/\\+.*//'`\nreading=`shuf -i 1-100 -n 1`\necho \"\t\t{\"\necho \"\t\t\t\\\"asset_code\\\": \\\"MyAsset\\\",\"\necho \"\t\t\t\\\"reading\\\" : { \\\"rate\\\" : $reading },\"\necho \"\t\t\t\\\"user_ts\\\" : \\\"$ts\\\"\"\necho \"\t\t},\"\nnreadings=`expr $nreadings - 1`\ndone\n\nuuid=`uuidgen`\nts=`date --rfc-3339=ns | sed -e 's/\\+.*//'`\nreading=`shuf -i 1-100 -n 1`\necho \"\t\t{\"\necho \"\t\t\t\\\"asset_code\\\": \\\"MyAsset\\\",\"\necho \"\t\t\t\\\"reading\\\" : { \\\"rate\\\" : $reading },\"\necho \"\t\t\t\\\"user_ts\\\" : \\\"$ts\\\"\"\necho \"\t\t}\"\necho \"\t]\"\necho \"}\"\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payload1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"key\",\n\t\t\t\t\"condition\" : \"!=\",\n\t\t\t\t\"value\" : \"SENSORS\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payload2.json",
    "content": "{\n   \"key\" : \"Mark\",\n   \"history_ts\" : \"now()\",\n   \"value\" : 1\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payload3.json",
    "content": "{\n\t\"condition\" : {\n\t\t\t\t\"column\" : \"key\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"Mark\"\n\t\t\t},\n\t\"values\" : {\n\t\t\t\t\"value\" : 44444\n\t\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payload4.json",
    "content": "{\n\t\"values\" : {\n\t\t\"log\" : {\n\t\t\t\"module\" : \"user\",\n\t\t\t\"message\" : \"Testing JSON insert\",\n\t\t\t\"severity\" : 10\n\t\t\t}\n\t\t},\n\t\"condition\" : {\n\t\t\"column\" : \"code\",\n\t\t\"condition\" : \"=\",\n\t\t\"value\" : \"LOGGN\"\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payload5.json",
    "content": "{\n\"where\" : {\n\t\"column\" : \"id\",\n\t\"condition\" : \"<\",\n\t\"value\" : 4\n\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payload6.json",
    "content": "{\n\t\"readings\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\t\t\t\"reading\" : { \"rate\" : 18.4 },\n\t\t\t\t\t\t\"user_ts\" : \"2017-09-21 15:00:09.025655\"\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\t\t\t\"reading\" : { \"rate\" : 45.1 },\n\t\t\t\t\t\t\"user_ts\" : \"2017-09-21 15:03:09.025655\"\n\t\t\t\t\t}\n\t]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payload7.json",
    "content": "{\n\t\"aggregate\" : {\n\t\t\t\"column\" : \"value\",\n\t\t\t\"operation\" : \"min\"\n\t\t},\n\t\"group\" : \"key\"\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payload8.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"key\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"Mark\"\n\t\t\t},\n\t\"aggregate\" : {\n\t\t\t\"column\" : \"value\",\n\t\t\t\"operation\" : \"min\"\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payload9.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"key\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"Mark\"\n\t\t\t},\n\t\"limit\" : 4,\n\t\"skip\" : 2\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/FOGL-983.json",
    "content": "{\n    \"condition\" : { \n                    \"column\"    : \"key\",\n                    \"condition\" : \"=\",\n                    \"value\"     : \"DEVICE\"\n                  },\n    \"values\"    : {\n                    \"description\" : \"added'some'ch'''ars'\"\n                  }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/add_snapshot.json",
    "content": "{ \"id\" : \"99\" }\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/addnew.json",
    "content": "{\n\t\"id\" : 200,\n\t\"key\" : \"T2NOW\",\n\t\"description\" : \"An inserted row with the current timestamp\",\n\t\"data\" : { \"json\" : \"inserted object\" },\n\t\"ts\" : \"now()\"\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/asset.json",
    "content": "{\n\t\"readings\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\t\t\t\"reading\" : { \"rate\" : 18.4 },\n\t\t\t\t\t\t\"user_ts\" : \"2017-09-21 15:00:09.025655\"\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\t\t\t\"reading\" : { \"rate\" : 45.1 },\n\t\t\t\t\t\t\"user_ts\" : \"2017-09-21 15:03:09.025655\"\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\t\t\t\"reading\" : { \"rate\" : 60.1 },\n\t\t\t\t\t\t\"user_ts\" : \"2017-09-21 15:04:09.025655\"\n\t\t\t\t\t}\n\t]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/bad_sort_1.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"column\" : \"id\"\n\t\t},\n\t\"limit\" : 1,\n\t\"skip\" : 1\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/bad_sort_2.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"direction\" : \"asc\"\n\t\t},\n\t\"limit\" : 1,\n\t\"skip\" : 1\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/bad_update.json",
    "content": "{\n\t\"condition\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : 2\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/badreadings.json",
    "content": "{\n   \"noreadings\" : [\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 90 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.927191906\"\n\t\t}\n\t]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/count_assets.json",
    "content": "{\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"count\",\n\t\t\t\"column\" : \"*\",\n\t\t\t\"alias\" : \"Count\"\n\t\t\t},\n\t\"group\" : \"asset_code\"\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/delete.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"key\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"DEVICE\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/fogl690-error.json",
    "content": "{\"key\": \"DEVICE\", \"value\": {\"readings_insert_batch_size\": {\"type\": \"integer\", \"default\": \"100\", \"value\": \"100\", \"description\": \"The maximum number of readings in a batch of inserts\"}, \"max_concurrent_readings_inserts\": {\"type\": \"integer\", \"default\": \"5\", \"value\": \"5\", \"description\": \"The maximum number of concurrent processes that send batches of readings to storage\"}, \"readings_insert_batch_timeout_seconds\": {\"type\": \"integer\", \"default\": \"1\", \"value\": \"1\", \"description\": \"The number of seconds to wait for a readings list to reach the minimum batch size\"}, \"max_readings_insert_batch_connection_idle_seconds\": {\"type\": \"integer\", \"default\": \"60\", \"value\": \"60\", \"description\": \"Close storage connections used to insert readings when idle for this number of seconds\"}, \"readings_buffer_size\": {\"type\": \"integer\", \"default\": \"500\", \"value\": \"500\", \"description\": \"The maximum number of readings to buffer in memory\"}, \"write_statistics_frequency_seconds\": {\"type\": \"integer\", \"default\": \"5\", \"value\": \"5\", \"description\": \"The number of seconds to wait before writing readings-related statistics to storage\"}, \"max_readings_insert_batch_reconnect_wait_seconds\": {\"type\": \"integer\", \"default\": \"10\", \"value\": \"10\", \"description\": \"The maximum number of seconds to wait before reconnecting to storage when inserting readings\"}}, \"description\": \"Device server configuration\"}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/fogl690-ok.json",
    "content": "{\"key\": \"DEVICE\", \"display_name\": \"DEVICE\", \"value\": {\"readings_insert_batch_size\": {\"type\": \"integer\", \"default\": \"100\", \"value\": \"100\", \"description\": \"The maximum number of readings in a batch of inserts\"}, \"max_concurrent_readings_inserts\": {\"type\": \"integer\", \"default\": \"5\", \"value\": \"5\", \"description\": \"The maximum number of concurrent processes that send batches of readings to storage\"}, \"readings_insert_batch_timeout_seconds\": {\"type\": \"integer\", \"default\": \"1\", \"value\": \"1\", \"description\": \"The number of seconds to wait for a readings list to reach the minimum batch size\"}, \"max_readings_insert_batch_connection_idle_seconds\": {\"type\": \"integer\", \"default\": \"60\", \"value\": \"60\", \"description\": \"Close storage connections used to insert readings when idle for this number of seconds\"}, \"readings_buffer_size\": {\"type\": \"integer\", \"default\": \"500\", \"value\": \"500\", \"description\": \"The maximum number of readings to buffer in memory\"}, \"write_statistics_frequency_seconds\": {\"type\": \"integer\", \"default\": \"5\", \"value\": \"5\", \"description\": \"The number of seconds to wait before writing readings-related statistics to storage\"}, \"max_readings_insert_batch_reconnect_wait_seconds\": {\"type\": \"integer\", \"default\": \"10\", \"value\": \"10\", \"description\": \"The maximum number of seconds to wait before reconnecting to storage when inserting readings\"}}, \"description\": \"Device server configuration\"}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/get-FOGL-983.json",
    "content": "{\n    \"where\" : { \n                    \"column\"    : \"key\",\n                    \"condition\" : \"=\",\n                    \"value\"     : \"DEVICE\"\n                  },\n    \"return\" : [ \"description\" ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/get_updated_complex_JSON.json",
    "content": "{\n        \"return\": [\"key\",\"data\"],\n        \"where\": {\n            \"column\": \"key\",\n            \"condition\": \"=\",\n            \"value\": \"UPDA\"\n        }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/group.json",
    "content": "{\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"count\",\n\t\t\t\"column\" : \"id\"\n\t\t\t},\n\t\"group\" : \"key\"\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/group_time.json",
    "content": "{\n        \"aggregate\": [{\n            \"operation\": \"min\",\n            \"json\": {\n                \"column\": \"reading\",\n                \"properties\": \"rate\"\n            },\n            \"alias\": \"min\"\n        }, {\n            \"operation\": \"max\",\n            \"json\": {\n                \"column\": \"reading\",\n                \"properties\": \"rate\"\n            },\n            \"alias\": \"max\"\n        }, {\n            \"operation\": \"avg\",\n            \"json\": {\n                \"column\": \"reading\",\n                \"properties\": \"rate\"\n            },\n            \"alias\": \"average\"\n    }],\n    \"where\": {\n        \"column\": \"asset_code\",\n        \"condition\": \"=\",\n        \"value\": \"MyAsset\"\n    },\n    \"limit\": 20,\n    \"group\" : {\n        \"column\": \"user_ts\",\n        \"format\": \"YYYY-MM-DD HH24:MI:SS\"\n    },\n    \"sort\" : {\n        \"column\": \"user_ts\",\n        \"direction\": \"asc\"\n    }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/insert.json",
    "content": "{\n\t\"id\" : 2,\n\t\"key\" : \"TEST2\",\n\t\"description\" : \"An inserted row\",\n\t\"data\" : { \"json\" : \"inserted object\" }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/insert2.json",
    "content": "{\n\t\"id\" : 4,\n\t\"key\" : \"Admin\",\n\t\"description\" : \"URL of the admin API\",\n\t\"data\" : { \"url\" : { \"value\" : \"inserted object\" } }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/insert_bad.json",
    "content": "{\n\t\"id\" : 2,\n\t\"key\" : \"TEST2\",\n\t\"description\" : \"An inserted row\",\n\t\"data\" : { \"json\" : \"inserted object\" },\n\t\"Nonexistant\" : 4\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/insert_bad2.json",
    "content": ",\n{\n\t\"id\" : 2\n\t\"key\" : \"TEST2\",\n\t\"description\" : \"An inserted row\",\n\t\"data\" : { \"json\" : \"inserted object\" }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/limit.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"direction\" : \"asc\"\n\t\t},\n\t\"limit\" : 1\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/limit_max_int.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"direction\" : \"asc\"\n\t\t},\n\t\"limit\" : 999999999999\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/msec_add_readings_user_ts.json",
    "content": "{\n  \"readings\" : [\n    { \"user_ts\" : \"xxx\",                             \"asset_code\": \"msec_001_BAD\", \"reading\" : { \"value\" : 1  }},\n    { \"user_ts\" : \"2019-30-07 10:17:17.123456+00\",   \"asset_code\": \"msec_002_BAD\", \"reading\" : { \"value\" : 2  }},\n\n    { \"user_ts\" : \"2019-01-01 10:01:01\",             \"asset_code\": \"msec_003_OK\",  \"reading\" : { \"value\" : 3  }},\n    { \"user_ts\" : \"2019-01-02 10:02:01.0\",           \"asset_code\": \"msec_004_OK\",  \"reading\" : { \"value\" : 4  }},\n    { \"user_ts\" : \"2019-01-03 10:02:02.841\",         \"asset_code\": \"msec_005_OK\",  \"reading\" : { \"value\" : 5  }},\n    { \"user_ts\" : \"2019-01-04 10:03:05.123456\",      \"asset_code\": \"msec_006_OK\",  \"reading\" : { \"value\" : 6  }},\n\n    { \"user_ts\" : \"2019-01-04 10:03:05.1+00:00\",     \"asset_code\": \"msec_007_OK\",  \"reading\" : { \"value\" : 7  }},\n    { \"user_ts\" : \"2019-01-04 10:03:05.123+00:00\",   \"asset_code\": \"msec_008_OK\",  \"reading\" : { \"value\" : 8  }},\n\n    { \"user_ts\" : \"2019-03-03 10:03:03.123456+00:00\",\"asset_code\": \"msec_009_OK\",  \"reading\" : { \"value\" : 9  }},\n    { \"user_ts\" : \"2019-03-04 10:03:04.123456+01:00\",\"asset_code\": \"msec_010_OK\",  \"reading\" : { \"value\" :10  }},\n    { \"user_ts\" : \"2019-03-05 10:03:05.123456-01:00\",\"asset_code\": \"msec_011_OK\",  \"reading\" : { \"value\" :11  }},\n    { \"user_ts\" : \"2019-03-04 10:03:04.123456+02:30\",\"asset_code\": \"msec_012_OK\",  \"reading\" : { \"value\" :12  }},\n    { \"user_ts\" : \"2019-03-05 10:03:05.123456-02:30\",\"asset_code\": \"msec_013_OK\",  \"reading\" : { \"value\" :13  }}\n\n  ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/msec_query_asset_aggmin.json",
    "content": "{\n  \"aggregate\":\n  {\n    \"operation\": \"min\",\n    \"column\": \"user_ts\",\n    \"alias\": \"user_ts_min\"\n  },\n  \"where\":\n  {\n    \"column\": \"asset_code\",\n    \"condition\": \"=\",\n    \"value\": \"msec_009_OK\"\n  }\n}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/msec_query_asset_aggminarray.json",
    "content": "{\n  \"aggregate\": [\n      {\n        \"operation\": \"min\",\n        \"column\": \"user_ts\",\n        \"alias\": \"user_ts_min\"\n      },\n      {\n        \"operation\": \"max\",\n        \"column\": \"user_ts\",\n        \"alias\": \"user_ts_max\"\n      }\n    ],\n    \"where\": {\n      \"column\": \"asset_code\",\n      \"condition\": \"=\",\n      \"value\": \"msec_009_OK\"\n  }\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/msec_query_asset_alias.json",
    "content": "{\n        \"where\" : {\n                        \"column\"    : \"asset_code\",\n                        \"condition\" : \"=\",\n                        \"value\"     : \"msec_009_OK\"\n                  },\n\n        \"return\" : [\n                        \"reading\",\n                        {\n                            \"column\" : \"user_ts\",\n                            \"alias\" :  \"user_ts_alias\"\n                        }\n                    ]\n}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/msec_query_asset_noalias.json",
    "content": "{\n        \"where\" : {\n                        \"column\"    : \"asset_code\",\n                        \"condition\" : \"=\",\n                        \"value\"     : \"msec_009_OK\"\n                  },\n\n        \"return\" : [\n                        \"reading\",\n                        {\n                            \"column\" : \"user_ts\"\n                        }\n\n                    ]\n}"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/msec_query_readings.json",
    "content": "{\n  \"where\" : {\n    \"column\" : \"asset_code\",\n    \"condition\" : \"!=\",\n    \"value\" : \"MyAsset\"\n  },\n  \"return\" : [\n    \"asset_code\",\n    \"reading\",\n    \"user_ts\"\n  ],\n  \"sort\" : {\n    \"column\" : \"id\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/multi_and.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"condition\" : \">\",\n\t\t\t\"value\" : \"2\",\n\t\t\t\"and\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"<\",\n\t\t\t\t\"value\" : \"7\"\n\t\t\t}\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/multi_mixed.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"condition\" : \"<\",\n\t\t\t\"value\" : \"3\",\n\t\t\t\"or\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">\",\n\t\t\t\t\"value\" : \"7\",\n\t\t\t\t\"and\" : {\n\t\t\t\t\t\"column\" : \"description\",\n\t\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\t\"value\" : \"A test row\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/multi_or.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"condition\" : \"<\",\n\t\t\t\"value\" : \"3\",\n\t\t\t\"or\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">\",\n\t\t\t\t\"value\" : \"7\"\n\t\t\t}\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/newer.json",
    "content": "{\n\t\"where\" : {\n\t\t\"column\" : \"ts\",\n\t\t\"condition\" : \"newer\",\n\t\t\"value\" : 60\n\t\t},\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"count\",\n\t\t\t\"column\" : \"*\",\n\t\t\t\"alias\" : \"Count\"\n\t\t\t}\n\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/newerBad.json",
    "content": "{\n\t\"where\" : {\n\t\t\"column\" : \"ts\",\n\t\t\"condition\" : \"newer\",\n\t\t\"value\" : \"60\"\n\t\t},\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"count\",\n\t\t\t\"column\" : \"*\",\n\t\t\t\"alias\" : \"Count\"\n\t\t\t}\n\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/older.json",
    "content": "{\n\t\"where\" : {\n\t\t\"column\" : \"ts\",\n\t\t\"condition\" : \"older\",\n\t\t\"value\" : 600\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/put_function_in_JSON.json",
    "content": "{\n\t\"condition\" : {\n\t\t\t\t\"column\" : \"key\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"UPDA\"\n\t},\n\t\"json_properties\" : [\n\t\t\t{\n\t\t\t\t\"column\" : \"data\",\n\t\t\t\t\"path\"   : [\"json\"],\n\t\t\t\t\"value\"  : \"cos(x)\"\n\t\t\t}\n\t]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/put_json_in_JSON.json",
    "content": "{\n\t\"return\": [\"key\", \"data\"],\n\t\"json_properties\": [{\n\t\t\"column\": \"data\",\n\t\t\"path\": [\"json\"],\n\t\t\"value\": { \"providers\": [\"username123\", \"ldap_test_10\"]}\n\t}],\n\t\"where\": {\n\t\t\"column\": \"key\",\n\t\t\"condition\": \"=\",\n\t\t\"value\": \"UPDA\"\n\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/query_readings.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"MyAsset\"\n\t\t\t},\n\t\"aggregate\" : [\n\t\t\t{\n\t\t\t\t\"operation\" : \"min\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"min\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"max\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"max\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"avg\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"average\"\n\t\t\t}\n\t\t      ],\n\t\"group\" : \"asset_code\"\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/query_readings_in.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"in\",\n\t\t\t\t\"value\" : [\"MyAsset\", \"m\", \"p\"]\n\t\t\t},\n\t\"aggregate\" : [\n\t\t\t{\n\t\t\t\t\"operation\" : \"min\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"min\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"max\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"max\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"avg\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"average\"\n\t\t\t}\n\t\t      ],\n\t\"group\" : \"asset_code\"\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/query_readings_in_bad_values.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"not in\",\n\t\t\t\t\"value\" : {\"m\": \"p\"}\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/query_readings_not_in.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"not in\",\n\t\t\t\t\"value\" : [\"MyAsset\"]\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/query_readings_timebucket.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"MyAsset\"\n\t\t\t},\n\t\"aggregate\" : [\n\t\t\t{\n\t\t\t\t\"operation\" : \"min\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"min\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"max\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"max\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"avg\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"average\"\n\t\t\t}\n\t\t      ],\n\t\"group\" : \"asset_code\",\n\t\"timebucket\" :  {\n\t\t\t   \"timestamp\" : \"user_ts\",\n\t\t\t   \"size\"      : \"2\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/query_readings_timebucket1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"MyAsset\"\n\t\t\t},\n\t\"aggregate\" : [\n\t\t\t{\n\t\t\t\t\"operation\" : \"min\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"min\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"max\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"max\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"avg\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"average\"\n\t\t\t}\n\t\t      ],\n\t\"group\" : \"asset_code\",\n\t\"timebucket\" :  {\n\t\t\t   \"timestamp\" : \"user_ts\",\n\t\t\t   \"size\"      : \"2\",\n\t\t\t   \"format\"    : \"DD-MM-YYYYY HH24:MI:SS\",\n\t\t\t   \"alias\"     : \"bucket\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/query_readings_timebucket_bad.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"MyAsset\"\n\t\t\t},\n\t\"aggregate\" : [\n\t\t\t{\n\t\t\t\t\"operation\" : \"min\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"min\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"max\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate_bad\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"max\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"avg\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"average\"\n\t\t\t}\n\t\t      ],\n\t\"group\" : \"asset_code\",\n\t\"timebucket\" :  {\n\t\t\t   \"timestamp\" : \"user_ts\",\n\t\t\t   \"size\"      : \"2\",\n\t\t\t   \"format\"    : \"DD-MM-YYYYY HH24:MI:SS\",\n\t\t\t   \"alias\"     : \"bucket\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/query_timebucket_datapoints.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\"column\" : \"asset_code\",\n\t\t\t\"condition\" : \"in\",\n\t\t\t\"value\" : [\"Asset1\", \"Asset2\"]\n\t},\n\t\"aggregate\" : {\n\t\t\"operation\" : \"all\"\n\t},\n\t\"timebucket\" :  {\n\t\t   \"timestamp\" : \"user_ts\",\n\t\t   \"size\"      : \"20\",\n\t\t   \"alias\": \"time\"\n\t},\n\t\"limit\" : 40\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/read_id_1xx.json",
    "content": "{\n  \"where\" : {\n\t\t\"column\" : \"id\",\n\t\t\"condition\" : \">=\",\n\t\t\"value\" : 100\n\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/reading_property.json",
    "content": "{\n\t\"return\" : [\n\t\t\t{\n\t\t\t\t\"column\": \"user_ts\",\n\t\t\t\t\"format\": \"YYYY-MM-DD HH24:MI:SS.MS\",\n\t\t\t\t\"alias\" : \"timestamp\"},\n\t\t\t{\n\t\t\t \"json\" : {\n\t\t\t\t\t \"column\"     : \"reading\",\n\t\t\t\t\t \"properties\" : \"rate\"\n\t\t\t\t   },\n\t\t\t\t   \"alias\" : \"Rate\"\n\t\t\t\t }\n\t\t  ],\n\t\"sort\" : {\n\t\t\t\"column\" : \"user_ts\"\n\t\t }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/reading_property_array.json",
    "content": "{\n\t\"return\" : [\n\t\t\t{\n\t\t\t\t\"column\": \"user_ts\",\n\t\t\t\t\"format\": \"YYYY-MM-DD HH24:MI:SS.MS\",\n\t\t\t\t\"alias\" : \"timestamp\"},\n\t\t\t{\n\t\t\t \"json\" : {\n\t\t\t\t\t \"column\"     : \"reading\",\n\t\t\t\t\t \"properties\" : [ \"rate\" ]\n\t\t\t\t   },\n\t\t\t\t   \"alias\" : \"Rate\"\n\t\t\t\t }\n\t\t  ],\n\t\"sort\" : {\n\t\t\t\"column\" : \"user_ts\"\n\t\t }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/reading_property_bad.json",
    "content": "{\n\t\"return\" : [\n\t\t\t{\n\t\t\t\t\"column\": \"user_ts_X\",\n\t\t\t\t\"format\": \"YYYY-MM-DD HH24:MI:SS.MS\",\n\t\t\t\t\"alias\" : \"timestamp\"},\n\t\t\t{\n\t\t\t \"json\" : {\n\t\t\t\t\t \"column\"     : \"reading\",\n\t\t\t\t\t \"properties\" : \"temperature\"\n\t\t\t\t   },\n\t\t\t\t   \"alias\" : \"Temperature\"\n\t\t\t\t }\n\t\t  ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/readings.json",
    "content": "{\n   \"readings\" : [\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 90 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.927191906\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 13 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.930077316\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 84 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.933029946\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 96 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.936351551\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 2 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.939633486\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 54 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.942860303\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 28 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.946072705\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 3 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.949303775\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 77 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.952531085\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 38 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.955723213\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 26 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.959048996\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 86 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.963008671\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 39 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.966955796\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : 
{ \"rate\" : 57 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.970002731\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 73 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.973462058\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 22 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.979245019\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 34 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.982825759\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 78 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.986811733\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 20 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.990407738\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 70 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.993641575\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 17 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.996855039\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 2 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.000131125\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 18 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.005780345\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 52 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.009004973\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 62 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.012432720\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 47 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.015931562\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 73 },\n\t\t\t\"user_ts\" : \"2017-10-11 
15:10:52.019228599\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 9 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.022430891\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 66 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.026312804\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 30 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.029248210\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 70 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.031930036\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 41 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.034759931\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 2 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.037494767\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 69 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.040900408\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 98 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.043678399\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 13 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.046833403\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 91 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.050656950\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 18 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.053705004\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 78 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.056612533\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 70 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.059816818\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" 
: 48 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.062667327\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 94 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.066313145\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 79 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.070054298\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 87 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.073032300\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 60 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.075709529\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 48 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.078469594\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 88 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.081413496\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 3 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.084291705\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 93 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.086988344\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 83 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.089854394\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 76 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.092628067\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 97 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.095270252\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 31 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.098438807\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 49 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.100992475\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": 
\"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 36 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.103591495\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 15 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.106399825\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 67 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.109127938\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 67 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.111779897\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 94 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.114243636\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 68 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.116767938\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 22 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.119724561\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 54 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.122293876\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 94 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.124791131\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 49 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.127260082\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 59 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.130223611\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 6 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.132656635\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 82 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.135199400\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 5 },\n\t\t\t\"user_ts\" : \"2017-10-11 
15:10:52.137822954\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 1 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.140390135\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 53 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.142788188\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 69 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.145544112\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 97 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.147995971\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 58 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.150512948\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 76 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.153044797\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 81 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.157244547\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 30 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.160105403\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 4 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.163172158\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 67 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.165631529\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 5 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.169386767\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 72 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.171844558\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 20 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.174302648\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" 
: 58 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.176815980\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 75 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.179330413\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 74 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.182026984\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 60 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.184453875\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 96 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.187060071\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 30 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.189544659\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 40 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.192069519\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 33 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.195018406\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 87 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.197861237\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 67 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.200490704\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 40 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.203486404\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 44 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.206153509\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 7 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.208776100\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 52 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.211964714\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": 
\"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 93 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.214504566\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 43 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.219924224\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 66 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.222819931\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 8 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.225401299\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 79 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.228495495\"\n\t\t}\n\t]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/readings_timebucket.json",
    "content": "{\n   \"readings\" : [\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 90 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.927191906\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 13 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.930077316\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 84, \"pressure\": 1001 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.933029946\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 96, \"rms\": 10 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.936351551\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 2 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.939633486\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 54 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.942860303\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 28 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.946072705\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 3 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.949303775\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rms\" : 70 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.993641575\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 17, \"rms\" : 81 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.996855039\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 21 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.000131125\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"pressure\" : 918 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.005780345\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 52 },\n\t\t\t\"user_ts\" : \"2019-10-11 
15:10:52.009004973\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 62 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.012432720\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 47, \"rms\" : 19 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.015931562\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 73, \"pressure\" : 1023 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.019228599\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 9 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.022430891\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 66, \"rms\" : 15 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.026312804\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 30 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.029248210\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 70 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.031930036\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 41 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.034759931\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"pressure\" : 30, \"temp\" : 19 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:54.189544659\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 12.306 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:54.192069519\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 33, \"rms\": 99 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:54.195018406\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 87 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:54.197861237\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 67, \"rms\" : 11 },\n\t\t\t\"user_ts\" : \"2019-10-11 
15:10:54.200490704\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 40 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:53.203486404\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"pressure\" : 44, \"temp\": 90 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:53.206153509\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 7 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:53.208776100\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"pressure\" : 8, \"temp\" : 19, \"lux\": 3456 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:54.225401299\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 79 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:54.228495495\"\n\t\t}\n\t]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/skip.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"direction\" : \"asc\"\n\t\t},\n\t\"limit\" : 1,\n\t\"skip\" : 1\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/skip_max_int.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"direction\" : \"asc\"\n\t\t},\n\t\"limit\" : 1,\n\t\"skip\" : 99999999999\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/sort.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"direction\" : \"desc\"\n\t\t }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/sort2.json",
    "content": "{\n\t\"sort\" : [\n\t\t{\n\t\t\t\"column\" : \"id\",\n\t\t\t\"direction\" : \"asc\"\n\t\t},\n\t\t{\n\t\t\t\"column\" : \"key\",\n\t\t\t\"direction\" : \"asc\"\n\t\t}\n\t\t],\n\t\"limit\" : 1\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/timezone.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">=\",\n\t\t\t\t\"value\" : \"2\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"timezone\" : \"utc\"\n\t\t\t}\n\t\t    ],\n\t\"sort\" : {\n\t\t\t\"column\" : \"id\\\\\\\">\\\\\\\"\"\n\t}\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/timezone_bad.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">=\",\n\t\t\t\t\"value\" : \"2\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"timezone\" : \"bad\"\n\t\t\t}\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/update.json",
    "content": "{\n\t\"condition\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : 2\n\t\t\t},\n\t\"values\" : {\n\t\t\t\t\"description\" : \"updated description\"\n\t\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/updateKey.json",
    "content": "{\n\t\"condition\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : 2\n\t\t\t},\n\t\"values\" : {\n\t\t\t\t\"description\" : \"updated description\",\n\t\t\t\t\"key\" : \"UPDA\"\n\t\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/update_bad.json",
    "content": "{\n\t\"condition\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : 200\n\t\t\t},\n\t\"values\" : {\n\t\t\t\t\"description\" : \"updated description\"\n\t\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/update_expression.json",
    "content": "{\n    \"condition\" : { \n                    \"column\"    : \"id\",\n                    \"condition\" : \">\",\n                    \"value\"     : 5\n                  },\n    \"values\"    : {\n                    \"description\" : \"Updated with expression\"\n                  },\n    \"expressions\" : [\n\t\t\t{\n\t\t\t  \"column\"   : \"id\",\n\t\t\t  \"operator\" : \"+\",\n\t\t\t  \"value\"    : 100\n\t\t\t}\n\t   \t    ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/update_json.json",
    "content": "{\n\t\"condition\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : 1\n\t\t\t},\n\t\"json_properties\" : [\n\t\t\t\t{\n\t\t\t\t\t\"column\" : \"data\",\n\t\t\t\t\t\"path\"   : [ \"json\" ],\n\t\t\t\t\t\"value\"  : \"new value\"\n\t\t\t\t}\n\t\t\t    ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/update_json2.json",
    "content": "{\n\t\"condition\"       : {\n\t\t\t\t\"column\"    : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\"     : 4\n\t\t\t    },\n\t\"json_properties\" : [\n\t\t\t\t{\n\t\t\t\t\t\"column\" : \"data\",\n\t\t\t\t\t\"path\"   : [ \"url\", \"value\" ],\n\t\t\t\t\t\"value\"  : \"new value\"\n\t\t\t\t}\n\t\t\t    ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_avg.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"avg\",\n\t\t\t\"column\" : \"id\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_bad_1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_bad_2.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_bad_3.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_bad_4.json",
    "content": "{\n\t\"where\" : \"x = 1\"\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_bad_format1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", { \"alias\" : \"MyDescription\" } ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_bad_format2.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", {\n\t\t\t \"json\" : {\n\t\t\t\t \"column\" : \"data\"\n\t\t\t\t   },\n\t\t\t  \"alias\" : \"JSONvalue\"\n\t\t\t\t }\n\t\t  ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_count.json",
    "content": "{\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"count\",\n\t\t\t\"column\" : \"id\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_count_star.json",
    "content": "{\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"count\",\n\t\t\t\"column\" : \"*\",\n\t\t\t\"alias\" : \"Entries\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_distinct.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"<\",\n\t\t\t\t\"value\" : 1000\n\t\t\t},\n         \"modifier\" : \"distinct\",\n\t\"return\" : [ \"description\"\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_id_1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_id_1_r1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\" ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_id_1_r2.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", { \"column\" : \"description\", \"alias\" : \"MyDescription\" } ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_id_1_r3.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", {\n\t\t\t \"json\" : {\n\t\t\t\t \"column\" : \"data\",\n\t\t\t\t \"properties\" : \"json\"\n\t\t\t\t   },\n\t\t\t  \"alias\" : \"JSONvalue\"\n\t\t\t\t }\n\t\t  ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_id_2.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"2\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_id_not_1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"!=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_in.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"condition\" : \"in\",\n\t\t\t\"value\" : [ -4, 12.3, 0, 1001]\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_in_bad_values.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"condition\" : \"in\",\n\t\t\t\"value\" : {\"a\": 12}\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_like.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"description\",\n\t\t\t\t\"condition\" : \"like\",\n\t\t\t\t\"value\" : \"%row%\",\n\t\t\t\t\"and\" : {\n\t\t\t\t\t\"column\" : \"key\",\n\t\t\t\t\t\"condition\" : \"!=\",\n\t\t\t\t\t\"value\" : \"T2NOW\"\n\t\t\t\t\t}\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_max.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"max\",\n\t\t\t\"column\" : \"id\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_min.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"min\",\n\t\t\t\"column\" : \"id\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_multi_aggregatee.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"aggregate\" : [\n\t\t\t{\n\t\t\t\t\"operation\" : \"min\",\n\t\t\t\t\"column\" : \"id\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"max\",\n\t\t\t\t\"column\" : \"id\"\n\t\t\t}\n\t\t      ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_not_in.json",
    "content": "{\n\t\"aggregate\" : {\n\t\t\"operation\" : \"count\",\n\t\t\"column\" : \"id\"\n\t},\n\t\"where\" : {\n\t\t\"column\" : \"id\",\n\t\t\"condition\" : \"not in\",\n\t\t\"value\" : [ -4, 12.3, 0, 11111]\n\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_numeric_column.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\"column\" : \"-0.55\",\n\t\t\t\"condition\" : \"=\",\n\t\t\t\"value\" : \"-0.55\",\n\t\t\t\"and\" : {\n                                       \"column\" : \"id\",\n                                       \"condition\" : \"=\",\n                                       \"value\" : \"0\"\n\t\t\t\t}\n\t\t }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_sum.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"sum\",\n\t\t\t\"column\" : \"id\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_test2_d1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"format\" : \"HH12:MI:SS\",\n\t\t\t  \"alias\" : \"time\"\n\t\t\t}\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_test2_d2.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"<\",\n\t\t\t\t\"value\" : \"6\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"format\" : \"HH24:MI:SS\",\n\t\t\t  \"alias\" : \"time\"\n\t\t\t}\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_test2_d3.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">\",\n\t\t\t\t\"value\" : \"3\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"format\" : \"DD Mon YYYY\",\n\t\t\t  \"alias\" : \"date\"\n\t\t\t}\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_test2_d4.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">=\",\n\t\t\t\t\"value\" : \"2\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"format\" : \"YYYY-MM-DD HH24:MI:SS\",\n\t\t\t  \"alias\" : \"timestamp\"\n\t\t\t}\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/payloads/where_test2_d5.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"!=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"format\" : \"DD Mon YYYY HH12:MI:SS am\",\n\t\t\t  \"alias\" : \"date\"\n\t\t\t}\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/plugins/common/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nset(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR}/../../../../..)\nset(GCOVR_PATH \"$ENV{HOME}/.local/bin/gcovr\")\n\n# Project configuration\nproject(RunTests)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O0\")\n \ninclude(CodeCoverage)\nappend_coverage_compiler_flags()\n\n# Locate GTest\nfind_package(GTest REQUIRED)\ninclude_directories(${GTEST_INCLUDE_DIRS})\ninclude_directories(../../../../../../../../C/plugins/storage/common/include)\ninclude_directories(../../../../../../../../C/common/include)\n\nfile(GLOB test_sources \"../../../../../../../../C/plugins/storage/common/sql_buffer.cpp\")\nset(common_sources \"../../../../../../../../C/common/string_utils.cpp\")\n\n \n# Link runTests with what we want to test and the GTest and pthread library\nadd_executable(RunTests ${test_sources} ${common_sources} tests.cpp)\n#setting BOOST_COMPONENTS to use pthread library only\nset(BOOST_COMPONENTS thread)\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ntarget_link_libraries(RunTests ${GTEST_LIBRARIES} pthread)\n\nsetup_target_for_coverage_gcovr_html(\n            NAME CoverageHtml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\nsetup_target_for_coverage_gcovr_xml(\n            NAME CoverageXml\n            EXECUTABLE ${PROJECT_NAME}\n            DEPENDENCIES ${PROJECT_NAME}\n    )\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/plugins/common/README.rst",
    "content": "*************************************************\nUnit Test for Common Components of Storage Plugin\n*************************************************\n\nRequire Google Unit Test framework\n\nInstall with:\n::\n    sudo apt-get install libgtest-dev\n    cd /usr/src/gtest\n    cmake CMakeLists.txt\n    sudo make\n    sudo make install\n\nTo build the unit test:\n::\n    mkdir build\n    cd build\n    cmake ..\n    make\n    ./runTests\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/plugins/common/tests.cpp",
    "content": "#include <gtest/gtest.h>\n#include <sql_buffer.h>\n#include <string.h>\n#include <string>\n\nusing namespace std;\n\nint main(int argc, char **argv) {\n    testing::InitGoogleTest(&argc, argv);\n    return RUN_ALL_TESTS();\n}\n\n/**\n * Test appending characters to the buffer\n */\nTEST(SQLBufferTest, charappend) {\nSQLBuffer\tsql;\n\n\tfor (int i = 0; i < 100; i++)\n\t\tsql.append(' ');\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(100, strlen(buf));\n\tdelete[] buf;\n}\n\n/**\n * Test appending more characers than will fit in a single\n * buffer.\n */\nTEST(SQLBufferTest, charappendlarge) {\nSQLBuffer\tsql;\n\n\tfor (int i = 0; i < 10000; i++)\n\t\tsql.append(' ');\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(10000, strlen(buf));\n\tdelete[] buf;\n}\n\n/**\n * Test appending a fixed pattern - check order of appends\n */\nTEST(SQLBufferTest, charappendpattern) {\nSQLBuffer\tsql;\n\n\tsql.append('a');\n\tsql.append('b');\n\tsql.append('c');\n\tsql.append('d');\n\tsql.append('e');\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(0, strcmp(buf, \"abcde\"));\n\tdelete[] buf;\n}\n\n/**\n * Test appending a fixed pattern - check order of appends\n * across buffer boundaries.\n */\nTEST(SQLBufferTest, charappendlongpattern) {\nSQLBuffer\tsql;\nint\t\ti;\n\n\tfor (i = 0; i < 2000; i++)\n\t{\n\t\tchar ch = 'a' + (i % 26);\n\t\tsql.append(ch);\n\t}\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(2000, strlen(buf));\n\tint result = 0;\n\t/* Check the pattern matches what we put in */\n\tfor (i = 0; i < 2000; i++)\n\t{\n\t\tchar ch = 'a' + (i % 26);\n\t\tif (buf[i] != ch)\n\t\t{\n\t\t\tresult = 1;\n\t\t}\n\t}\n\tASSERT_EQ(0, result);\n\tdelete[] buf;\n}\n\n/**\n * Test appendign null terminated strings\n */\nTEST(SQLBufferTest, strappend) {\nSQLBuffer\tsql;\n\n\tfor (int i = 0; i < 100; i++)\n\t\tsql.append(\"    \");\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(400, strlen(buf));\n\tdelete[] buf;\n}\n\n/**\n * Test appending long null 
terminated strings\n */\nTEST(SQLBufferTest, strappendlarge) {\nSQLBuffer\tsql;\n\n\tfor (int i = 0; i < 10000; i++)\n\t\tsql.append(\"1234567890\");\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(100000, strlen(buf));\n\tdelete[] buf;\n}\n\n/**\n * test appending an integer\n */\nTEST(SQLBufferTest, intappend) {\nSQLBuffer\tsql;\n\n\tsql.append(1234);\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(0, strcmp(buf, \"1234\"));\n\tdelete[] buf;\n}\n\n/**\n * test appending an unsigned integer\n */\nTEST(SQLBufferTest, uintappend) {\nSQLBuffer\tsql;\nunsigned int\tvalue = 4321;\n\n\tsql.append(value);\n\tsql.append(value);\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(0, strcmp(buf, \"43214321\"));\n\tdelete[] buf;\n}\n\n/**\n * test appending a long integer\n */\nTEST(SQLBufferTest, longappend) {\nSQLBuffer\tsql;\nlong\t\tvalue = 491572107;\n\n\tsql.append(value);\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(0, strcmp(buf, \"491572107\"));\n\tdelete[] buf;\n}\n\n/**\n * test appending a negative long integer\n */\nTEST(SQLBufferTest, negappend) {\nSQLBuffer\tsql;\nlong\t\tvalue = -491572107;\n\n\tsql.append(value);\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(0, strcmp(buf, \"-491572107\"));\n\tdelete[] buf;\n}\n\n/**\n * test appending a double\n */\nTEST(SQLBufferTest, doubleappend) {\nSQLBuffer\tsql;\ndouble\t\tvalue = 3.141526;\n\n\tsql.append(value);\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(3.141526, atof(buf));\n\tdelete[] buf;\n}\n\n/**\n * Test appending a C++ string class\n */\nTEST(SQLBufferTest, stringappend) {\nSQLBuffer\tsql;\nstring\t\tstr(\"A C++ String\");\n\n\tsql.append(str);\n\tconst char *buf = sql.coalesce();\n\tASSERT_EQ(0, strcmp(str.c_str(), buf));\n\tdelete[] buf;\n}\n\n/**\n * Test appending a mixture of types\n */\nTEST(SQLBufferTest, mixedappend) {\nSQLBuffer\tsql;\n\n\tsql.append(\"Hello\");\n\tsql.append(' ');\n\tsql.append(123456);\n\tsql.append(string(\" world\"));\n\tconst char *buf = 
sql.coalesce();\n\tASSERT_EQ(0, strcmp(buf, \"Hello 123456 world\"));\n\tdelete[] buf;\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/storageSchemaTest.sh",
    "content": "#!/bin/sh\n\n port=`curl -sX GET http://localhost:8081/fledge/service | cut -d ',' -f5 | cut -d ':' -f2`\n\n curl -X POST http://localhost:\"$port\"/storage/schema -d @PostStorageSchema.json\n\n curl -X POST http://localhost:\"$port\"/storage/schema/test1/table/table1 -d @PostTable.json\n\n curl -X PUT http://localhost:\"$port\"/storage/schema/test1/table/table1/query -d @PutQuery.json\n\n curl -X PUT http://localhost:\"$port\"/storage/schema/test1/table/table1 -d @PutTable.json\n\n curl -X GET http://localhost:\"$port\"/storage/schema/test1/table/table1 -d @GetTable.json\n\n curl -X PUT http://localhost:\"$port\"/storage/schema/test1/table/table1 -d @PutTableExpression.json\n\n curl -X DELETE http://localhost:\"$port\"/storage/schema/test1/table/table1 -d @DeleteRows.json\n\n\n\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/test1.sh",
    "content": "curl -X PUT http://192.168.56.101:8080/storage/table/configuration/query -d @payload1.json\ncurl -X GET http://192.168.56.101:8080/storage/table/configuration?key=COAP_CONF\ncurl -X GET http://192.168.56.101:8080/storage/table/configuration\ncurl -X POST http://192.168.56.101:8080/storage/table/statistics_history -d @payload2.json\ncurl -X GET http://192.168.56.101:8080/storage/table/statistics_history?key=Mark\ncurl -X PUT http://192.168.56.101:8080/storage/table/statistics_history -d @payload3.json\ncurl -X GET http://192.168.56.101:8080/storage/table/log?code=LOGGN\ncurl -X PUT http://192.168.56.101:8080/storage/table/log -d @payload4.json\ncurl -X GET http://192.168.56.101:8080/storage/reading?id=1&count=1000\ncurl -X PUT http://192.168.56.101:8080/storage/reading/query -d @payload5.json\ncurl -X POST http://192.168.56.101:8080/storage/reading -d @payload6.json\ncurl -X PUT http://192.168.56.101:8080/storage/table/statistics_history/query -d @payload7.json\ncurl -X PUT http://192.168.56.101:8080/storage/table/statistics_history/query -d @payload8.json\ncurl -X PUT http://192.168.56.101:8080/storage/table/statistics_history/query -d @payload9.json\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/test2.sh",
    "content": "#!/bin/sh\nwhile true; do\nrm -f readings.json\nsh makeReadings.sh >readings.json\ncurl -X POST http://localhost:8080/storage/reading -d @readings.json\ndone\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/testCleanup.sh",
    "content": "psql -d fledge << EOF\ndelete from fledge.test;\ndrop table fledge.test;\ndelete from fledge.test2;\ndrop table fledge.test2;\nEOF\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/testRunner.sh",
    "content": "#!/usr/bin/env bash\n\n# Default values\nexport FLEDGE_DATA=.\nexport storage_exec=$FLEDGE_ROOT/services/fledge.services.storage\nexport TZ='Etc/UTC'\n\nshow_configuration () {\n\n\techo \"Fledge unit tests for the PostgreSQL plugin\"\n\n\techo \"Starting storage layer      :$storage_exec:\"\n\techo \"PostgreSQL version          :$pg_version:\"\n\techo \"PostgreSQL timezone         :$tz_exec:\"\n\techo \"OS timezone                 :$tz_os:\"\n\techo \"expected dir                :$expected_dir:\"\n\techo \"configuration               :$FLEDGE_DATA:\"\n}\n\nrestore_tz() {\n\t#Restore the initial TZ\n\tpsql -d fledge -c \"ALTER DATABASE fledge SET timezone TO '\"$tz_original\"';\" > /dev/null\n\ttz_current=`psql -qtAX -d fledge -c \"SHOW timezone ;\"`\n\techo -e \"\\nOriginal PostgreSQL timezone restored   :$tz_current:\\n\"\n}\n\n# Set UTC as TZ for the proper execution of the tests\ntz_original=`psql -qtAX -d fledge -c \"SHOW timezone ;\"`\n\ntrap restore_tz 1 2 3 6 15\n\npg_version=`psql -V | awk '{split($3,a,\".\"); print a[1] }'`\n# Handle specific a expected directory in relation to the PostgreSQL version\nif [[ $pg_version == \"12\" ]]; then\n\n    pg_version_major=\"_PG$pg_version\"\nelse\n    pg_version_major=\"\"\nfi\n\n#\n# evaluates : FLEDGE_DATA, storage_exec, TZ and expected_dir\n#\nif [[ \"$@\" != \"\" ]];\nthen\n\t# Handles input parameters\n\tSCRIPT_NAME=`basename $0`\n\toptions=`getopt -o c:s:t: --long configuration:,storage_exec:,timezone: -n \"$SCRIPT_NAME\" -- \"$@\"`\n\teval set -- \"$options\"\n\n\twhile true ; do\n\t    case \"$1\" in\n\t        -c|--configuration)\n\t            export FLEDGE_DATA=\"$2\"\n\t            shift 2\n\t            ;;\n\n\t        -s|--storage_exec)\n\t            export storage_exec=\"$2\"\n\t            shift 2\n\t            ;;\n\n\t        -t|--timezone)\n\t\t\t\texport TZ=\"$2\"\n\t            shift 2\n\t            ;;\n\t        --)\n\t            shift\n\t            break\n\t            
;;\n\t    esac\n\tdone\nfi\n\n# Set the timezone to UTC or to the requested one\npsql -d fledge -c \"ALTER DATABASE fledge SET timezone TO '\"${TZ}\"';\" > /dev/null\ntz_exec=`psql -qtAX -d fledge -c \"SHOW timezone ;\"`\n\ntz_os=`cat /etc/timezone`\n\n# Converts '/' to '_' and to upper case\nstep1=\"${TZ/\\//_}\"\nexpected_dir=\"expected_${step1^^}${pg_version_major}\"\n\nif [[ \"$storage_exec\" != \"\" ]] ; then\n\n\tshow_configuration\n\t$storage_exec\n\tsleep 1\n\nelif [[ \"${FLEDGE_ROOT}\" != \"\" ]] ; then\n\n\tshow_configuration\n\t$storage_exec\n\tsleep 1\n\nelse\n\techo Must either set FLEDGE_ROOT or provide storage service to test\n\texit 1\nfi\n\nexport IFS=\",\"\ntestNum=1\nn_failed=0\nn_passed=0\nn_unchecked=0\n./testSetup.sh > /dev/null 2>&1\nrm -f failed\nrm -rf results\nmkdir results\ncat testset | while read name method url payload optional; do\necho -n \"Test $testNum ${name}: \"\nif [ \"$payload\" = \"\" ] ; then\n\toutput=$(curl -X $method $url -o results/$testNum 2>/dev/null)\n\tcurlstate=$?\nelse\n\toutput=$(curl -X $method $url -d@payloads/$payload 2>/dev/null)\n\tcurlstate=$?\n\nfi\n\n# Forces the creation on an empty file if the output of the curl command is empty\n# it is needed for the behavior of the curl command in RHEL/CentOS\nif [ \"$output\" = \"\" ] ; then\n\n\ttouch results/$testNum\nelse\n\techo -n \"${output}\" > results/$testNum\nfi\n\nif [ \"$optional\" = \"\" ] ; then\n\tif [ ! -f ${expected_dir}/$testNum ]; then\n\t\tn_unchecked=`expr $n_unchecked + 1`\n\t\techo Missing expected results in :${expected_dir}: for test $testNum - result unchecked\n\telse\n\t\tcmp -s results/$testNum ${expected_dir}/$testNum\n\t\tif [ $? 
-ne \"0\" ]; then\n\t\t\techo Failed\n\t\t\tn_failed=`expr $n_failed + 1`\n\t\t\tif [ \"$payload\" = \"\" ]\n\t\t\tthen\n\t\t\t\techo Test $testNum  ${name} curl -X $method $url >> failed\n\t\t\telse\n\t\t\t\techo Test $testNum  ${name} curl -X $method $url -d@payloads/$payload  >> failed\n\t\t\tfi\n\t\t\t(\n\t\t\tunset IFS\n\t\t\techo \"   \" Expected: \"`cat ${expected_dir}/$testNum`\" >> failed\n\t\t\techo \"   \" Got:     \"`cat results/$testNum`\" >> failed\n\t\t\t)\n\t\t\techo >> failed\n\t\telse\n\t\t\techo Passed\n\t\t\tn_passed=`expr $n_passed + 1`\n\t\tfi\n\tfi\nelif [ \"$optional\" = \"checkstate\" ] ; then\n\tif [ $curlstate -eq 0 ] ; then\n\t\techo Passed\n\t\tn_passed=`expr $n_passed + 1`\n\telse\n\t\techo Failed\n\t\tn_failed=`expr $n_failed + 1`\n\t\tif [ \"$payload\" = \"\" ]\n\t\tthen\n\t\t\techo Test $testNum  curl -X $method $url >> failed\n\t\telse\n\t\t\techo Test $testNum  curl -X $method $url -d@payloads/$payload  >> failed\n\t\tfi\n\tfi\nfi\ntestNum=`expr $testNum + 1`\nrm -f tests.result\necho $n_failed Tests Failed \t\t>  tests.result\necho $n_passed Tests Passed \t\t>> tests.result\necho $n_unchecked Tests Unchecked\t>> tests.result\ndone\n\n#Restore the initial TZ\nrestore_tz\n\n./testCleanup.sh > /dev/null\ncat tests.result\nrm -f tests.result\nif [ -f \"failed\" ]; then\n\techo\n\techo \"Failed Tests\"\n\techo \"============\"\n\tcat failed\n\texit 1\nfi\n\nexit 0"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/testSetup.sh",
    "content": "psql -d fledge << EOF\ndrop table if exists fledge.test;\n\ncreate table fledge.test (\n\tid\tbigint,\n\tkey\tcharacter(5),\n\tdescription\tcharacter varying(255),\n\tdata\tjsonb\n);\n\ninsert into fledge.test values (1, 'TEST1',  'A test row', '{ \"json\" : \"test1\" }');\n\ndelete from fledge.readings;\n\n--\n-- test2 handling\n--\ndrop table if exists fledge.test2;\n\ncreate table fledge.test2 (\n\tid\tbigint,\n\tkey\tcharacter(5),\n\tdescription\tcharacter varying(255),\n\tdata\tjsonb,\n\tts    timestamp(6) with time zone NOT NULL DEFAULT now()\n);\n\ninsert into fledge.test2 values (1, 'TEST1',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:14:26.622315+01');\ninsert into fledge.test2 values (2, 'TEST2',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:14:27.422315+01');\ninsert into fledge.test2 values (3, 'TEST3',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:14:28.622315+01');\ninsert into fledge.test2 values (4, 'TEST4',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:14:29.622315+01');\ninsert into fledge.test2 values (5, 'TEST5',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:15:00.622315+01');\ninsert into fledge.test2 values (6, 'TEST6',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:15:33.622315+01');\ninsert into fledge.test2 values (6, 'TEST7',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:16:20.622315+01');\ninsert into fledge.test2 values (8, 'TEST8',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 13:14:30.622315+01');\ninsert into fledge.test2 values (9, 'TEST9',  'A test 
row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 14:14:30.622315+01');\ninsert into fledge.test2 values (10, 'TEST10',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-11 12:14:30.622315+01');\n\nEOF\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/tests.cpp",
    "content": "#include <gtest/gtest.h>\n#include <configuration.h>\n#include <string.h>\n#include <string>\n#include <stdlib.h>\n\nusing namespace std;\n\n\nint main(int argc, char **argv) {\n\n\t// Select the proper storage.json file for the tests\n\tstring fledge_root = getenv(\"FLEDGE_ROOT\");\n\tstring fledge_data = fledge_root + \"/tests/unit/C/services/storage/postgres\";\n\n\tsetenv(\"FLEDGE_DATA\", fledge_data.c_str(), 1 );\n\n\ttesting::InitGoogleTest(&argc, argv);\n\n\ttesting::GTEST_FLAG(repeat) = 250;\n\ttesting::GTEST_FLAG(shuffle) = true;\n\ttesting::GTEST_FLAG(death_test_style) = \"threadsafe\";\n\n\treturn RUN_ALL_TESTS();\n}\n\n/**\n * Test retrieval of port from default\n */\nTEST(ConfigurationTest, getport)\n{\nEXPECT_EXIT({\n\tStorageConfiguration\tconf;\n\n\tbool ret = strcmp(conf.getValue(string(\"port\")), \"8080\") == 0;\n\tif (!ret)\n\t{\n\t\tcerr << \"port value is not 8080\" << endl;\n\t}\n\texit(!(ret == true)); }, ::testing::ExitedWithCode(0), \"\");\n}\n\n/**\n * Test retrieval of plugin from default\n */\nTEST(ConfigurationTest, getplugin)\n{\nEXPECT_EXIT({\n\tStorageConfiguration\tconf;\n\n\tbool ret = strcmp(conf.getValue(string(\"plugin\")), \"postgres\") == 0;\n\tif (!ret)\n\t{\n\t\tcerr << \"plugin value is not 'postgres'\" << endl;\n\t}\n\texit(!(ret == true)); }, ::testing::ExitedWithCode(0), \"\");\n}\n\n/**\n * Test setting of port\n */\nTEST(ConfigurationTest, setport)\n{\nEXPECT_EXIT({\n\tStorageConfiguration\tconf;\n\n\tbool ret = conf.setValue(string(\"port\"), string(\"8188\"));\n\tif (!ret)\n\t{\n\t\tcerr << \"Failed to set port value to 8188\" << endl;\n\t\texit(1);\n\t}\n\tret = strcmp(conf.getValue(string(\"port\")), \"8188\") == 0;\n\tif (!ret)\n\t{\n\t\tcerr << \"Port value retrieved is not 8188\" << endl;\t\n\t}\n\texit(!(ret == true)); }, ::testing::ExitedWithCode(0), \"\");\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/postgres/testset",
    "content": "Common Read,GET,http://localhost:8080/storage/table/test,\nCommon Read key,GET,http://localhost:8080/storage/table/test?id=1,\nCommon Read key empty,GET,http://localhost:8080/storage/table/test?id=2,\nCommon Read complex,PUT,http://localhost:8080/storage/table/test/query,where_code_1.json\nCommon Read complex empty,PUT,http://localhost:8080/storage/table/test/query,where_id_2.json\nCommon Read complex not equal,PUT,http://localhost:8080/storage/table/test/query,where_id_not_1.json\nCommon Read complex count,PUT,http://localhost:8080/storage/table/test/query,where_count.json\nCommon Read complex avg,PUT,http://localhost:8080/storage/table/test/query,where_avg.json\nCommon Read complex min,PUT,http://localhost:8080/storage/table/test/query,where_min.json\nCommon Read complex max,PUT,http://localhost:8080/storage/table/test/query,where_max.json\nCommon Insert,POST,http://localhost:8080/storage/table/test,insert.json\nCommon Read back,GET,http://localhost:8080/storage/table/test?id=2,\nCommon Insert bad column,POST,http://localhost:8080/storage/table/test,insert_bad.json\nCommon Insert bad syntax,POST,http://localhost:8080/storage/table/test,insert_bad2.json\nCommon Delete,DELETE,http://localhost:8080/storage/table/test,where_id_2.json\nCommon Read deleted,GET,http://localhost:8080/storage/table/test?id=2,\nCommon Delete non-existant,DELETE,http://localhost:8080/storage/table/test,where_id_2.json\nCommon Insert,POST,http://localhost:8080/storage/table/test,insert.json\nCommon Read limit,PUT,http://localhost:8080/storage/table/test/query,limit.json\nCommon Read skip,PUT,http://localhost:8080/storage/table/test/query,skip.json\nCommon Read bad 1,PUT,http://localhost:8080/storage/table/test/query,where_bad_1.json\nCommon Read bad 2,PUT,http://localhost:8080/storage/table/test/query,where_bad_2.json\nCommon Read bad 3,PUT,http://localhost:8080/storage/table/test/query,where_bad_3.json\nCommon Read bad 
4,PUT,http://localhost:8080/storage/table/test/query,where_bad_4.json\nCommon Read default sort order,PUT,http://localhost:8080/storage/table/test/query,bad_sort_1.json\nCommon Read bad sort 2,PUT,http://localhost:8080/storage/table/test/query,bad_sort_2.json\nCommon Update,PUT,http://localhost:8080/storage/table/test,update.json\nCommon Read back,GET,http://localhost:8080/storage/table/test?id=2,\nCommon Update,PUT,http://localhost:8080/storage/table/test,updateKey.json\nCommon Read back,GET,http://localhost:8080/storage/table/test?key=UPDA,\nCommon Update no values,PUT,http://localhost:8080/storage/table/test,bad_update.json\nCommon Read group,PUT,http://localhost:8080/storage/table/test/query,group.json\nBad URL,GET,http://localhost:8080/fledge/nothing,\nBad table,GET,http://localhost:8080/storage/table/doesntexist,\nBad column,GET,http://localhost:8080/storage/table/test?doesntexist=9,\nPing interface,GET,http://localhost:1081/fledge/service/ping,,checkstate\nAdd Readings,POST,http://localhost:8080/storage/reading,asset.json\nFetch Readings,GET,http://localhost:8080/storage/reading?id=1&count=1000,,checkstate\nFetch Readings zero count,GET,http://localhost:8080/storage/reading?id=1&count=0,\nFetch Readings no count,GET,http://localhost:8080/storage/reading?id=1,\nFetch Readings no id,GET,http://localhost:8080/storage/reading?count=1000,\nPurge Readings,PUT,http://localhost:8080/storage/reading/purge?age=1000&sent=0&flags=purge,\nCommon Read sort array,PUT,http://localhost:8080/storage/table/test/query,sort2.json\nCommon Read multiple aggregates,PUT,http://localhost:8080/storage/table/test/query,where_multi_aggregatee.json,\nCommon Read columns,PUT,http://localhost:8080/storage/table/test/query,where_id_1_r1.json,\nCommon Read columns alias,PUT,http://localhost:8080/storage/table/test/query,where_id_1_r2.json,\nCommon Read columns json,PUT,http://localhost:8080/storage/table/test/query,where_id_1_r3.json,\nDate 
format1,PUT,http://localhost:8080/storage/table/test2/query,where_test2_d1.json\nDate format2,PUT,http://localhost:8080/storage/table/test2/query,where_test2_d2.json\nDate format3,PUT,http://localhost:8080/storage/table/test2/query,where_test2_d3.json\nDate format4,PUT,http://localhost:8080/storage/table/test2/query,where_test2_d4.json\nDate format5,PUT,http://localhost:8080/storage/table/test2/query,where_test2_d5.json\nBad format1,PUT,http://localhost:8080/storage/table/test2/query,where_bad_format1.json\nBad format2,PUT,http://localhost:8080/storage/table/test2/query,where_bad_format2.json\nCount star,PUT,http://localhost:8080/storage/table/test2/query,where_count_star.json\nsum,PUT,http://localhost:8080/storage/table/test2/query,where_sum.json\nAdd more Readings,POST,http://localhost:8080/storage/reading,readings.json\nQuery Readings,PUT,http://localhost:8080/storage/reading/query,query_readings.json\nQuery Readings Timebucket,PUT,http://localhost:8080/storage/reading/query,query_readings_timebucket.json\nQuery Readings Timebucket 1,PUT,http://localhost:8080/storage/reading/query,query_readings_timebucket1.json\nMulti And,PUT,http://localhost:8080/storage/table/test2/query,multi_and.json\nMulti Or,PUT,http://localhost:8080/storage/table/test2/query,multi_or.json\nMulti Mixed,PUT,http://localhost:8080/storage/table/test2/query,multi_mised.json\nUpdate Bad Condition,PUT,http://localhost:8080/storage/table/test2,update_bad.json\nRead back,GET,http://localhost:8080/storage/table/test2,\nCount Assets,PUT,http://localhost:8080/storage/reading/query,count_assets.json\nReading Rate,PUT,http://localhost:8080/storage/reading/query,reading_property.json\nBad JSON,PUT,http://localhost:8080/storage/reading/query,reading_property_bad.json\nUpdate expression,PUT,http://localhost:8080/storage/table/test2,update_expression.json\nRead back 
update,PUT,http://localhost:8080/storage/table/test2/query,read_id_1xx.json\nDistinct,PUT,http://localhost:8080/storage/table/test2/query,where_distinct.json\nUpdate JSON,PUT,http://localhost:8080/storage/table/test,update_json.json\nRead back update,PUT,http://localhost:8080/storage/table/test/query,sort.json\nAdd JSON,POST,http://localhost:8080/storage/table/test,insert2.json\nUpdate Complex JSON,PUT,http://localhost:8080/storage/table/test,update_json2.json\nRead back update,GET,http://localhost:8080/storage/table/test?id=4,\nAdd now,POST,http://localhost:8080/storage/table/test2,addnew.json\nNewer,PUT,http://localhost:8080/storage/table/test2/query,newer.json\nOlder,PUT,http://localhost:8080/storage/table/test2/query,older.json\nNewer Bad,PUT,http://localhost:8080/storage/table/test2/query,newerBad.json\nLike,PUT,http://localhost:8080/storage/table/test2/query,where_like.json\nGroup Time,PUT,http://localhost:8080/storage/reading/query,group_time.json\nJira FOGL-690,POST,http://localhost:8080/storage/table/configuration,fogl690-ok.json\nSet-FOGL-983,PUT,http://localhost:8080/storage/table/configuration,FOGL-983.json\nAdd bad Readings,POST,http://localhost:8080/storage/reading,badreadings.json\nQuery Readings Timebucket Bad,PUT,http://localhost:8080/storage/reading/query,query_readings_timebucket_bad.json\nReading Rate Array,PUT,http://localhost:8080/storage/reading/query,reading_property_array.json\nCommon Read limit max_int,PUT,http://localhost:8080/storage/table/test/query,limit_max_int.json\nCommon Read skip max_int,PUT,http://localhost:8080/storage/table/test/query,skip_max_int.json\nTimezone,PUT,http://localhost:8080/storage/table/test2/query,timezone.json\nBad Timezone,PUT,http://localhost:8080/storage/table/test2/query,timezone_bad.json\nGet-FOGL-983,PUT,http://localhost:8080/storage/table/configuration/query,get-FOGL-983.json\nJira FOGL-690 cleanup,DELETE,http://localhost:8080/storage/table/configuration,delete.json\nNumeric Column 
Name,PUT,http://localhost:8080/storage/table/test/query,where_numeric_column.json,\nCommon table IN operator,PUT,http://localhost:8080/storage/table/test2/query,where_in.json\nCommon table NOT IN operator,PUT,http://localhost:8080/storage/table/test2/query,where_not_in.json\nCommon table IN operator bad values,PUT,http://localhost:8080/storage/table/test2/query,where_in_bad_values.json\nQuery Readings IN operator,PUT,http://localhost:8080/storage/reading/query,query_readings_in.json\nQuery Readings NOT IN operator,PUT,http://localhost:8080/storage/reading/query,query_readings_not_in.json\nQuery Readings IN operator bad values,PUT,http://localhost:8080/storage/reading/query,query_readings_in_bad_values.json\nmicroseconds - Purge Readings,PUT,http://localhost:8080/storage/reading/purge?age=1&sent=0&flags=purge,\nmicroseconds - Add Readings,POST,http://localhost:8080/storage/reading,msec_add_readings_user_ts.json\nmicroseconds - Query Readings,PUT,http://localhost:8080/storage/reading/query,msec_query_readings.json\nmicroseconds - Query asset NO alias,PUT,http://localhost:8080/storage/reading/query,msec_query_asset_noalias.json\nmicroseconds - Query asset alias,PUT,http://localhost:8080/storage/reading/query,msec_query_asset_alias.json\nmicroseconds - Query asset aggregate min,PUT,http://localhost:8080/storage/reading/query,msec_query_asset_aggmin.json\nmicroseconds - Query asset aggregate min array,PUT,http://localhost:8080/storage/reading/query,msec_query_asset_aggminarray.json\nUpdate JSON value as function,PUT,http://localhost:8080/storage/table/test,put_function_in_JSON.json\nUpdate JSON value in JSON value,PUT,http://localhost:8080/storage/table/test,put_json_in_JSON.json\nGet updated complex JSON value,PUT,http://localhost:8080/storage/table/test/query,get_updated_complex_JSON.json\nAdd table snapshot,POST,http://localhost:8080/storage/table/test2/snapshot,add_snapshot.json\nLoad table snapshot,PUT,http://localhost:8080/storage/table/test2/snapshot/99,\nDelete 
table snapshot,DELETE,http://localhost:8080/storage/table/test2/snapshot/99,\nJira FOGL-690,POST,http://localhost:8080/storage/table/configuration,fogl690-error.json\nAdd more Readings,POST,http://localhost:8080/storage/reading,readings_timebucket.json\nQuery Readings Timebucket,PUT,http://localhost:8080/storage/reading/query,query_timebucket_datapoints.json\nShutdown,POST,http://localhost:1081/fledge/service/shutdown,,checkstate\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/README.rst",
    "content": "Before any test operation set the SQLite3 db file\nby using the env var DEFAULT_SQLITE_DB_FILE\n\nExample:\n\n# export DEFAULT_SQLITE_DB_FILE=/tmp/fledge.db\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/1",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/10",
    "content": "{\"count\":1,\"rows\":[{\"max_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/100",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/101",
    "content": ""
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/102",
    "content": "{ \"removed\" : 100,  \"unsentPurged\" : 100,  \"unsentRetained\" : 1,  \"readings\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/103",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 11 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/104",
    "content": "{\"count\":11,\"rows\":[{\"asset_code\":\"msec_003_OK\",\"reading\":{\"value\":3},\"user_ts\":\"2019-01-01 10:01:01.000000\"},{\"asset_code\":\"msec_004_OK\",\"reading\":{\"value\":4},\"user_ts\":\"2019-01-02 10:02:01.000000\"},{\"asset_code\":\"msec_005_OK\",\"reading\":{\"value\":5},\"user_ts\":\"2019-01-03 10:02:02.841000\"},{\"asset_code\":\"msec_006_OK\",\"reading\":{\"value\":6},\"user_ts\":\"2019-01-04 10:03:05.123456\"},{\"asset_code\":\"msec_007_OK\",\"reading\":{\"value\":7},\"user_ts\":\"2019-01-04 10:03:05.100000\"},{\"asset_code\":\"msec_008_OK\",\"reading\":{\"value\":8},\"user_ts\":\"2019-01-04 10:03:05.123000\"},{\"asset_code\":\"msec_009_OK\",\"reading\":{\"value\":9},\"user_ts\":\"2019-03-03 10:03:03.123456\"},{\"asset_code\":\"msec_010_OK\",\"reading\":{\"value\":10},\"user_ts\":\"2019-03-04 09:03:04.123456\"},{\"asset_code\":\"msec_011_OK\",\"reading\":{\"value\":11},\"user_ts\":\"2019-03-05 11:03:05.123456\"},{\"asset_code\":\"msec_012_OK\",\"reading\":{\"value\":12},\"user_ts\":\"2019-03-04 07:33:04.123456\"},{\"asset_code\":\"msec_013_OK\",\"reading\":{\"value\":13},\"user_ts\":\"2019-03-05 12:33:05.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/105",
    "content": "{\"count\":1,\"rows\":[{\"reading\":{\"value\":9},\"user_ts\":\"2019-03-03 10:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/106",
    "content": "{\"count\":1,\"rows\":[{\"reading\":{\"value\":9},\"user_ts_alias\":\"2019-03-03 10:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/107",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_min\":\"2019-03-03 10:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/108",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_min\":\"2019-03-03 10:03:03.123456\",\"user_ts_max\":\"2019-03-03 10:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/109",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 3 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/11",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/110",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/111",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 2 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/112",
    "content": "{\"created\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/113",
    "content": "{\"loaded\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/114",
    "content": "{\"deleted\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/115",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"NOT NULL constraint failed: configuration.display_name\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/116",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 13 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/117",
    "content": "{ \"removed\" : 11,  \"unsentPurged\" : 11,  \"unsentRetained\" : 0,  \"readings\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/118",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/119",
    "content": "{\"count\":1,\"rows\":[{\"id\":2000,\"key\":\"tz_01\",\"description\":\"test - timezone - all tables\",\"data\":{\"test\":\"timezone\"},\"ts\":\"2019-04-17 14:01:02.123\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/12",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/120",
    "content": "{\"count\":1,\"rows\":[{\"id\":2000,\"key\":\"tz_01\",\"description\":\"test - timezone - all tables\",\"data\":{\"test\":\"timezone\"},\"ts\":\"2019-04-17 14:01:02.123\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/121",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_max\":\"2019-04-17 14:01:02.123456+00:00\",\"ts_timestamp\":\"2019-04-17 14:01:02.123\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/122",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/123",
    "content": "{\"count\":1,\"rows\":[{\"id\":941,\"asset_code\":\"tz_02\",\"reading\":{\"test\":\"2\"},\"user_ts\":\"2019-04-17 14:01:02.123456\",\"ts\":\"2019-04-17 13:40:21.594\"}]}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/124",
    "content": "{\"count\":1,\"rows\":[{\"asset_code\":\"tz_02\",\"reading\":{\"test\":\"2\"},\"user_ts\":\"2019-04-17 14:01:02.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/125",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_max\":\"2019-04-17 14:01:02.123456\",\"ts_timestamp\":\"2019-04-17 14:01:02.123\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/126",
    "content": "{\"count\":1,\"rows\":[{\"test_min\":\"2\",\"ts_timestamp\":\"2019-04-17 14:01:02.123\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/127",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 31 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/128",
    "content": "{\"count\":2,\"rows\":[{\"asset_code\":\"Asset1\",\"time\":\"2019-10-11 15:11:00\",\"reading\":{\"rate\":{\"min\":2,\"max\":96,\"average\":42.7368421052632,\"count\":19,\"sum\":812},\"rms\":{\"min\":10,\"max\":99,\"average\":43.5714285714286,\"count\":7,\"sum\":305}}},{\"asset_code\":\"Asset2\",\"time\":\"2019-10-11 15:11:00\",\"reading\":{\"lux\":{\"min\":3456,\"max\":3456,\"average\":3456.0,\"count\":1,\"sum\":3456},\"pressure\":{\"min\":8,\"max\":1023,\"average\":504.0,\"count\":6,\"sum\":3024},\"temp\":{\"min\":12.306,\"max\":90,\"average\":49.9306,\"count\":10,\"sum\":499.306}}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/13",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"table fledge.test has no column named Nonexistant\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/14",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"Failed to parse JSON payload\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/15",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/16",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/17",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/18",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/19",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/2",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/20",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/21",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"column\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/22",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"condition\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/23",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"value\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/24",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" property must be a JSON object\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/25",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/26",
    "content": "{ \"entryPoint\" : \"Select sort\", \"message\" : \"Missing property \\\"column\\\"\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/27",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/28",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"The 'description' has been \\\"updated\\\"\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/29",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/3",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/30",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"UPDA\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/31",
    "content": "{ \"entryPoint\" : \"update\", \"message\" : \"Missing values or expressions object in payload\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/32",
    "content": "{\"count\":2,\"rows\":[{\"count_id\":1,\"key\":\"UPDA\"},{\"count_id\":1,\"key\":\"TEST1\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/33",
    "content": "{ \"error\" : \"Unsupported URL: /fledge/nothing\" }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/34",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"no such table: fledge.doesntexist\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/35",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"no such column: doesntexist\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/37",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 3 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/38",
    "content": "{\"count\":2,\"rows\":[{\"id\":966457,\"asset_code\":\"MyAsset\",reading\":{\"rate\":18.4},\"user_ts\":\"2017-09-21 15:00:09.025655+01\",\"ts\":\"2017-10-04 11:38:39.368881+01\"},{\"id\":966458,\"asset_code\":\"MyAsset\",reading\":{\"rate\":45.1},\"user_ts\":\"2017-09-21 15:03:09.025655+01\",\"ts\":\"2017-10-04 11:38:39.368881+01\"}]}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/39",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/4",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/40",
    "content": "{ \"error\" : \"Missing query parameter count\" }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/41",
    "content": "{ \"error\" : \"Missing query parameter id\" }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/42",
    "content": "{ \"removed\" : 3,  \"unsentPurged\" : 3,  \"unsentRetained\" : 0,  \"readings\" : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/43",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/44",
    "content": "{\"count\":1,\"rows\":[{\"min_id\":1,\"max_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/45",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/46",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"MyDescription\":\"A test row\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/47",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"JSONvalue\":\"test1\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/48",
    "content": "{\"count\":9,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\",\"time\":\"12:14:26\"},{\"key\":\"TEST2\",\"description\":\"A test row\",\"time\":\"12:14:27\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"time\":\"11:14:28\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"time\":\"11:14:29\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"time\":\"11:15:00\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"time\":\"11:15:33\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"time\":\"11:16:20\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"time\":\"05:14:30\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"time\":\"21:14:30\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/49",
    "content": "{\"count\":8,\"rows\":[{\"key\":\"TEST2\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 12:14:27\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:14:28\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:14:29\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:15:00\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:15:33\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:16:20\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 05:14:30\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 21:14:30\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/5",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/50",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"return object must have either a column or json property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/51",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"The json property is missing a properties property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/52",
    "content": "{\"count\":1,\"rows\":[{\"Entries\":9}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/53",
    "content": "{\"count\":1,\"rows\":[{\"sum_id\":43}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/54",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 100 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/55",
    "content": "{\"count\":1,\"rows\":[{\"min\":1,\"max\":98,\"average\":52.55,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/56",
    "content": "{\"count\":2,\"rows\":[{\"min\":1,\"max\":98,\"average\":53.7721518987342,\"asset_code\":\"MyAsset\",\"timestamp\":\"2017-10-11 15:10:52\"},{\"min\":2,\"max\":96,\"average\":47.9523809523809,\"asset_code\":\"MyAsset\",\"timestamp\":\"2017-10-11 15:10:50\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/57",
    "content": "{\"count\":2,\"rows\":[{\"min\":1,\"max\":98,\"average\":53.7721518987342,\"asset_code\":\"MyAsset\",\"bucket\":\"2017-10-11 15:10:52\"},{\"min\":2,\"max\":96,\"average\":47.9523809523809,\"asset_code\":\"MyAsset\",\"bucket\":\"2017-10-11 15:10:50\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/58",
    "content": "{\"count\":6,\"rows\":[{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:28.622\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:29.622\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:00.622\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:33.622\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:16:20.622\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 05:14:30.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/59",
    "content": "{\"count\":4,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:26.622\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:27.422\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 05:14:30.622\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 21:14:30.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/6",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/60",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:26.622\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:27.422\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:28.622\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:29.622\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:00.622\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:33.622\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:16:20.622\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 05:14:30.622\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 21:14:30.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/61",
    "content": "{ \"entryPoint\" : \"update\", \"message\" : \"Not all updates within transaction succeeded\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/62",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:26.622\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:27.422\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:28.622\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:29.622\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:00.622\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:33.622\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:16:20.622\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 05:14:30.622\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 21:14:30.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/63",
    "content": "{\"count\":1,\"rows\":[{\"Count\":100,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/64",
    "content": "{\"count\":100,\"rows\":[{\"timestamp\":\"2017-10-11 15:10:51.927\",\"Rate\":90},{\"timestamp\":\"2017-10-11 15:10:51.930\",\"Rate\":13},{\"timestamp\":\"2017-10-11 15:10:51.933\",\"Rate\":84},{\"timestamp\":\"2017-10-11 15:10:51.936\",\"Rate\":96},{\"timestamp\":\"2017-10-11 15:10:51.939\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:51.942\",\"Rate\":54},{\"timestamp\":\"2017-10-11 15:10:51.946\",\"Rate\":28},{\"timestamp\":\"2017-10-11 15:10:51.949\",\"Rate\":3},{\"timestamp\":\"2017-10-11 15:10:51.952\",\"Rate\":77},{\"timestamp\":\"2017-10-11 15:10:51.955\",\"Rate\":38},{\"timestamp\":\"2017-10-11 15:10:51.959\",\"Rate\":26},{\"timestamp\":\"2017-10-11 15:10:51.963\",\"Rate\":86},{\"timestamp\":\"2017-10-11 15:10:51.966\",\"Rate\":39},{\"timestamp\":\"2017-10-11 15:10:51.970\",\"Rate\":57},{\"timestamp\":\"2017-10-11 15:10:51.973\",\"Rate\":73},{\"timestamp\":\"2017-10-11 15:10:51.979\",\"Rate\":22},{\"timestamp\":\"2017-10-11 15:10:51.982\",\"Rate\":34},{\"timestamp\":\"2017-10-11 15:10:51.986\",\"Rate\":78},{\"timestamp\":\"2017-10-11 15:10:51.990\",\"Rate\":20},{\"timestamp\":\"2017-10-11 15:10:51.993\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:51.996\",\"Rate\":17},{\"timestamp\":\"2017-10-11 15:10:52.000\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:52.005\",\"Rate\":18},{\"timestamp\":\"2017-10-11 15:10:52.009\",\"Rate\":52},{\"timestamp\":\"2017-10-11 15:10:52.012\",\"Rate\":62},{\"timestamp\":\"2017-10-11 15:10:52.015\",\"Rate\":47},{\"timestamp\":\"2017-10-11 15:10:52.019\",\"Rate\":73},{\"timestamp\":\"2017-10-11 15:10:52.022\",\"Rate\":9},{\"timestamp\":\"2017-10-11 15:10:52.026\",\"Rate\":66},{\"timestamp\":\"2017-10-11 15:10:52.029\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.031\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:52.034\",\"Rate\":41},{\"timestamp\":\"2017-10-11 15:10:52.037\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:52.040\",\"Rate\":69},{\"timestamp\":\"2017-10-11 
15:10:52.043\",\"Rate\":98},{\"timestamp\":\"2017-10-11 15:10:52.046\",\"Rate\":13},{\"timestamp\":\"2017-10-11 15:10:52.050\",\"Rate\":91},{\"timestamp\":\"2017-10-11 15:10:52.053\",\"Rate\":18},{\"timestamp\":\"2017-10-11 15:10:52.056\",\"Rate\":78},{\"timestamp\":\"2017-10-11 15:10:52.059\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:52.062\",\"Rate\":48},{\"timestamp\":\"2017-10-11 15:10:52.066\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.070\",\"Rate\":79},{\"timestamp\":\"2017-10-11 15:10:52.073\",\"Rate\":87},{\"timestamp\":\"2017-10-11 15:10:52.075\",\"Rate\":60},{\"timestamp\":\"2017-10-11 15:10:52.078\",\"Rate\":48},{\"timestamp\":\"2017-10-11 15:10:52.081\",\"Rate\":88},{\"timestamp\":\"2017-10-11 15:10:52.084\",\"Rate\":3},{\"timestamp\":\"2017-10-11 15:10:52.086\",\"Rate\":93},{\"timestamp\":\"2017-10-11 15:10:52.089\",\"Rate\":83},{\"timestamp\":\"2017-10-11 15:10:52.092\",\"Rate\":76},{\"timestamp\":\"2017-10-11 15:10:52.095\",\"Rate\":97},{\"timestamp\":\"2017-10-11 15:10:52.098\",\"Rate\":31},{\"timestamp\":\"2017-10-11 15:10:52.100\",\"Rate\":49},{\"timestamp\":\"2017-10-11 15:10:52.103\",\"Rate\":36},{\"timestamp\":\"2017-10-11 15:10:52.106\",\"Rate\":15},{\"timestamp\":\"2017-10-11 15:10:52.109\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.111\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.114\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.116\",\"Rate\":68},{\"timestamp\":\"2017-10-11 15:10:52.119\",\"Rate\":22},{\"timestamp\":\"2017-10-11 15:10:52.122\",\"Rate\":54},{\"timestamp\":\"2017-10-11 15:10:52.124\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.127\",\"Rate\":49},{\"timestamp\":\"2017-10-11 15:10:52.130\",\"Rate\":59},{\"timestamp\":\"2017-10-11 15:10:52.132\",\"Rate\":6},{\"timestamp\":\"2017-10-11 15:10:52.135\",\"Rate\":82},{\"timestamp\":\"2017-10-11 15:10:52.137\",\"Rate\":5},{\"timestamp\":\"2017-10-11 15:10:52.140\",\"Rate\":1},{\"timestamp\":\"2017-10-11 
15:10:52.142\",\"Rate\":53},{\"timestamp\":\"2017-10-11 15:10:52.145\",\"Rate\":69},{\"timestamp\":\"2017-10-11 15:10:52.147\",\"Rate\":97},{\"timestamp\":\"2017-10-11 15:10:52.150\",\"Rate\":58},{\"timestamp\":\"2017-10-11 15:10:52.153\",\"Rate\":76},{\"timestamp\":\"2017-10-11 15:10:52.157\",\"Rate\":81},{\"timestamp\":\"2017-10-11 15:10:52.160\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.163\",\"Rate\":4},{\"timestamp\":\"2017-10-11 15:10:52.165\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.169\",\"Rate\":5},{\"timestamp\":\"2017-10-11 15:10:52.171\",\"Rate\":72},{\"timestamp\":\"2017-10-11 15:10:52.174\",\"Rate\":20},{\"timestamp\":\"2017-10-11 15:10:52.176\",\"Rate\":58},{\"timestamp\":\"2017-10-11 15:10:52.179\",\"Rate\":75},{\"timestamp\":\"2017-10-11 15:10:52.182\",\"Rate\":74},{\"timestamp\":\"2017-10-11 15:10:52.184\",\"Rate\":60},{\"timestamp\":\"2017-10-11 15:10:52.187\",\"Rate\":96},{\"timestamp\":\"2017-10-11 15:10:52.189\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.192\",\"Rate\":40},{\"timestamp\":\"2017-10-11 15:10:52.195\",\"Rate\":33},{\"timestamp\":\"2017-10-11 15:10:52.197\",\"Rate\":87},{\"timestamp\":\"2017-10-11 15:10:52.200\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.203\",\"Rate\":40},{\"timestamp\":\"2017-10-11 15:10:52.206\",\"Rate\":44},{\"timestamp\":\"2017-10-11 15:10:52.208\",\"Rate\":7},{\"timestamp\":\"2017-10-11 15:10:52.211\",\"Rate\":52},{\"timestamp\":\"2017-10-11 15:10:52.214\",\"Rate\":93},{\"timestamp\":\"2017-10-11 15:10:52.219\",\"Rate\":43},{\"timestamp\":\"2017-10-11 15:10:52.222\",\"Rate\":66},{\"timestamp\":\"2017-10-11 15:10:52.225\",\"Rate\":8},{\"timestamp\":\"2017-10-11 15:10:52.228\",\"Rate\":79}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/65",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 4 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/66",
    "content": "{\"count\":4,\"rows\":[{\"id\":106,\"key\":\"TEST6\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:33.622\"},{\"id\":106,\"key\":\"TEST7\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:16:20.622\"},{\"id\":108,\"key\":\"TEST8\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 05:14:30.622\"},{\"id\":109,\"key\":\"TEST9\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 21:14:30.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/67",
    "content": "{\"count\":2,\"rows\":[{\"description\":\"A test row\"},{\"description\":\"Updated with expression\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/68",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/69",
    "content": "{\"count\":2,\"rows\":[{\"id\":2,\"key\":\"UPDA\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}},{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"new value\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/7",
    "content": "{\"count\":1,\"rows\":[{\"count_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/70",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/71",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/72",
    "content": "{\"count\":1,\"rows\":[{\"id\":4,\"key\":\"Admin\",\"description\":\"URL of the admin API\",\"data\":{\"url\":{\"value\":\"new value\"}}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/73",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/74",
    "content": "{\"count\":1,\"rows\":[{\"Count\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/75",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:26.622\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:27.422\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:28.622\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:29.622\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:00.622\"},{\"id\":106,\"key\":\"TEST6\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:33.622\"},{\"id\":106,\"key\":\"TEST7\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:16:20.622\"},{\"id\":108,\"key\":\"TEST8\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 05:14:30.622\"},{\"id\":109,\"key\":\"TEST9\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 21:14:30.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/76",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"value\\\" of an \\\"newer\\\" condition must be an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/77",
    "content": "{\"count\":5,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:26.622\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:27.422\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:28.622\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:29.622\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:00.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/78",
    "content": "{\"count\":2,\"rows\":[{\"min\":2,\"max\":96,\"average\":47.9523809523809,\"user_ts\":\"2017-10-11 15:10:51\"},{\"min\":1,\"max\":98,\"average\":53.7721518987342,\"user_ts\":\"2017-10-11 15:10:52\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/79",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/8",
    "content": "{\"count\":1,\"rows\":[{\"avg_id\":1.0}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/80",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/81",
    "content": "{ \"entryPoint\" : \"appendReadings\", \"message\" : \"Payload is missing a readings array\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/82",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/83",
    "content": "{\"count\":100,\"rows\":[{\"timestamp\":\"2017-10-11 15:10:51.927\",\"Rate\":90},{\"timestamp\":\"2017-10-11 15:10:51.930\",\"Rate\":13},{\"timestamp\":\"2017-10-11 15:10:51.933\",\"Rate\":84},{\"timestamp\":\"2017-10-11 15:10:51.936\",\"Rate\":96},{\"timestamp\":\"2017-10-11 15:10:51.939\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:51.942\",\"Rate\":54},{\"timestamp\":\"2017-10-11 15:10:51.946\",\"Rate\":28},{\"timestamp\":\"2017-10-11 15:10:51.949\",\"Rate\":3},{\"timestamp\":\"2017-10-11 15:10:51.952\",\"Rate\":77},{\"timestamp\":\"2017-10-11 15:10:51.955\",\"Rate\":38},{\"timestamp\":\"2017-10-11 15:10:51.959\",\"Rate\":26},{\"timestamp\":\"2017-10-11 15:10:51.963\",\"Rate\":86},{\"timestamp\":\"2017-10-11 15:10:51.966\",\"Rate\":39},{\"timestamp\":\"2017-10-11 15:10:51.970\",\"Rate\":57},{\"timestamp\":\"2017-10-11 15:10:51.973\",\"Rate\":73},{\"timestamp\":\"2017-10-11 15:10:51.979\",\"Rate\":22},{\"timestamp\":\"2017-10-11 15:10:51.982\",\"Rate\":34},{\"timestamp\":\"2017-10-11 15:10:51.986\",\"Rate\":78},{\"timestamp\":\"2017-10-11 15:10:51.990\",\"Rate\":20},{\"timestamp\":\"2017-10-11 15:10:51.993\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:51.996\",\"Rate\":17},{\"timestamp\":\"2017-10-11 15:10:52.000\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:52.005\",\"Rate\":18},{\"timestamp\":\"2017-10-11 15:10:52.009\",\"Rate\":52},{\"timestamp\":\"2017-10-11 15:10:52.012\",\"Rate\":62},{\"timestamp\":\"2017-10-11 15:10:52.015\",\"Rate\":47},{\"timestamp\":\"2017-10-11 15:10:52.019\",\"Rate\":73},{\"timestamp\":\"2017-10-11 15:10:52.022\",\"Rate\":9},{\"timestamp\":\"2017-10-11 15:10:52.026\",\"Rate\":66},{\"timestamp\":\"2017-10-11 15:10:52.029\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.031\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:52.034\",\"Rate\":41},{\"timestamp\":\"2017-10-11 15:10:52.037\",\"Rate\":2},{\"timestamp\":\"2017-10-11 15:10:52.040\",\"Rate\":69},{\"timestamp\":\"2017-10-11 
15:10:52.043\",\"Rate\":98},{\"timestamp\":\"2017-10-11 15:10:52.046\",\"Rate\":13},{\"timestamp\":\"2017-10-11 15:10:52.050\",\"Rate\":91},{\"timestamp\":\"2017-10-11 15:10:52.053\",\"Rate\":18},{\"timestamp\":\"2017-10-11 15:10:52.056\",\"Rate\":78},{\"timestamp\":\"2017-10-11 15:10:52.059\",\"Rate\":70},{\"timestamp\":\"2017-10-11 15:10:52.062\",\"Rate\":48},{\"timestamp\":\"2017-10-11 15:10:52.066\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.070\",\"Rate\":79},{\"timestamp\":\"2017-10-11 15:10:52.073\",\"Rate\":87},{\"timestamp\":\"2017-10-11 15:10:52.075\",\"Rate\":60},{\"timestamp\":\"2017-10-11 15:10:52.078\",\"Rate\":48},{\"timestamp\":\"2017-10-11 15:10:52.081\",\"Rate\":88},{\"timestamp\":\"2017-10-11 15:10:52.084\",\"Rate\":3},{\"timestamp\":\"2017-10-11 15:10:52.086\",\"Rate\":93},{\"timestamp\":\"2017-10-11 15:10:52.089\",\"Rate\":83},{\"timestamp\":\"2017-10-11 15:10:52.092\",\"Rate\":76},{\"timestamp\":\"2017-10-11 15:10:52.095\",\"Rate\":97},{\"timestamp\":\"2017-10-11 15:10:52.098\",\"Rate\":31},{\"timestamp\":\"2017-10-11 15:10:52.100\",\"Rate\":49},{\"timestamp\":\"2017-10-11 15:10:52.103\",\"Rate\":36},{\"timestamp\":\"2017-10-11 15:10:52.106\",\"Rate\":15},{\"timestamp\":\"2017-10-11 15:10:52.109\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.111\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.114\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.116\",\"Rate\":68},{\"timestamp\":\"2017-10-11 15:10:52.119\",\"Rate\":22},{\"timestamp\":\"2017-10-11 15:10:52.122\",\"Rate\":54},{\"timestamp\":\"2017-10-11 15:10:52.124\",\"Rate\":94},{\"timestamp\":\"2017-10-11 15:10:52.127\",\"Rate\":49},{\"timestamp\":\"2017-10-11 15:10:52.130\",\"Rate\":59},{\"timestamp\":\"2017-10-11 15:10:52.132\",\"Rate\":6},{\"timestamp\":\"2017-10-11 15:10:52.135\",\"Rate\":82},{\"timestamp\":\"2017-10-11 15:10:52.137\",\"Rate\":5},{\"timestamp\":\"2017-10-11 15:10:52.140\",\"Rate\":1},{\"timestamp\":\"2017-10-11 
15:10:52.142\",\"Rate\":53},{\"timestamp\":\"2017-10-11 15:10:52.145\",\"Rate\":69},{\"timestamp\":\"2017-10-11 15:10:52.147\",\"Rate\":97},{\"timestamp\":\"2017-10-11 15:10:52.150\",\"Rate\":58},{\"timestamp\":\"2017-10-11 15:10:52.153\",\"Rate\":76},{\"timestamp\":\"2017-10-11 15:10:52.157\",\"Rate\":81},{\"timestamp\":\"2017-10-11 15:10:52.160\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.163\",\"Rate\":4},{\"timestamp\":\"2017-10-11 15:10:52.165\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.169\",\"Rate\":5},{\"timestamp\":\"2017-10-11 15:10:52.171\",\"Rate\":72},{\"timestamp\":\"2017-10-11 15:10:52.174\",\"Rate\":20},{\"timestamp\":\"2017-10-11 15:10:52.176\",\"Rate\":58},{\"timestamp\":\"2017-10-11 15:10:52.179\",\"Rate\":75},{\"timestamp\":\"2017-10-11 15:10:52.182\",\"Rate\":74},{\"timestamp\":\"2017-10-11 15:10:52.184\",\"Rate\":60},{\"timestamp\":\"2017-10-11 15:10:52.187\",\"Rate\":96},{\"timestamp\":\"2017-10-11 15:10:52.189\",\"Rate\":30},{\"timestamp\":\"2017-10-11 15:10:52.192\",\"Rate\":40},{\"timestamp\":\"2017-10-11 15:10:52.195\",\"Rate\":33},{\"timestamp\":\"2017-10-11 15:10:52.197\",\"Rate\":87},{\"timestamp\":\"2017-10-11 15:10:52.200\",\"Rate\":67},{\"timestamp\":\"2017-10-11 15:10:52.203\",\"Rate\":40},{\"timestamp\":\"2017-10-11 15:10:52.206\",\"Rate\":44},{\"timestamp\":\"2017-10-11 15:10:52.208\",\"Rate\":7},{\"timestamp\":\"2017-10-11 15:10:52.211\",\"Rate\":52},{\"timestamp\":\"2017-10-11 15:10:52.214\",\"Rate\":93},{\"timestamp\":\"2017-10-11 15:10:52.219\",\"Rate\":43},{\"timestamp\":\"2017-10-11 15:10:52.222\",\"Rate\":66},{\"timestamp\":\"2017-10-11 15:10:52.225\",\"Rate\":8},{\"timestamp\":\"2017-10-11 15:10:52.228\",\"Rate\":79}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/84",
    "content": "{ \"entryPoint\" : \"limit\", \"message\" : \"Limit must be specfied as an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/85",
    "content": "{ \"entryPoint\" : \"skip\", \"message\" : \"Skip must be specfied as an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/86",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"unrecognized token: \\\":\\\"\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/87",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"SQLite3 plugin does not support timezones in qeueries\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/88",
    "content": "{\"count\":1,\"rows\":[{\"description\":\"added'some'ch'''ars'\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/89",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/9",
    "content": "{\"count\":1,\"rows\":[{\"min_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/90",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/91",
    "content": "{\"count\":1,\"rows\":[{\"min\":1,\"max\":98,\"average\":52.55,\"timestamp\":\"2017-10-11 15:10\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/92",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/93",
    "content": "{\"count\":1,\"rows\":[{\"min\":\"\",\"max\":\"\",\"average\":\"\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/94",
    "content": "{\"count\":1,\"rows\":[{\"min\":1,\"max\":98,\"average\":52.55,\"timestamp\":\"2017-10-11 15\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/95",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/96",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/97",
    "content": "{\"count\":1,\"rows\":[{\"count_id\":10}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/98",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"value\\\" of a \\\"in\\\" condition must be an array and must not be empty.\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_ETC_UTC/99",
    "content": "{\"count\":1,\"rows\":[{\"min\":1,\"max\":98,\"average\":52.9207920792079,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/1",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/10",
    "content": "{\"count\":1,\"rows\":[{\"max_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/100",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/101",
    "content": ""
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/102",
    "content": "{ \"removed\" : 100,  \"unsentPurged\" : 100,  \"unsentRetained\" : 1,  \"readings\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/103",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 11 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/104",
    "content": "{\"count\":11,\"rows\":[{\"asset_code\":\"msec_003_OK\",\"reading\":{\"value\":3},\"user_ts\":\"2019-01-01 11:01:01.000000\"},{\"asset_code\":\"msec_004_OK\",\"reading\":{\"value\":4},\"user_ts\":\"2019-01-02 11:02:01.000000\"},{\"asset_code\":\"msec_005_OK\",\"reading\":{\"value\":5},\"user_ts\":\"2019-01-03 11:02:02.841000\"},{\"asset_code\":\"msec_006_OK\",\"reading\":{\"value\":6},\"user_ts\":\"2019-01-04 11:03:05.123456\"},{\"asset_code\":\"msec_007_OK\",\"reading\":{\"value\":7},\"user_ts\":\"2019-01-04 11:03:05.100000\"},{\"asset_code\":\"msec_008_OK\",\"reading\":{\"value\":8},\"user_ts\":\"2019-01-04 11:03:05.123000\"},{\"asset_code\":\"msec_009_OK\",\"reading\":{\"value\":9},\"user_ts\":\"2019-03-03 11:03:03.123456\"},{\"asset_code\":\"msec_010_OK\",\"reading\":{\"value\":10},\"user_ts\":\"2019-03-04 10:03:04.123456\"},{\"asset_code\":\"msec_011_OK\",\"reading\":{\"value\":11},\"user_ts\":\"2019-03-05 12:03:05.123456\"},{\"asset_code\":\"msec_012_OK\",\"reading\":{\"value\":12},\"user_ts\":\"2019-03-04 08:33:04.123456\"},{\"asset_code\":\"msec_013_OK\",\"reading\":{\"value\":13},\"user_ts\":\"2019-03-05 13:33:05.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/105",
    "content": "{\"count\":1,\"rows\":[{\"reading\":{\"value\":9},\"user_ts\":\"2019-03-03 11:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/106",
    "content": "{\"count\":1,\"rows\":[{\"reading\":{\"value\":9},\"user_ts_alias\":\"2019-03-03 11:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/107",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_min\":\"2019-03-03 11:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/108",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_min\":\"2019-03-03 11:03:03.123456\",\"user_ts_max\":\"2019-03-03 11:03:03.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/109",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 3 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/11",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/110",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/111",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 2 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/112",
    "content": "{\"created\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/113",
    "content": "{\"loaded\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/114",
    "content": "{\"deleted\": {\"id\": \"99\", \"table\": \"test2\"} }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/115",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"NOT NULL constraint failed: configuration.display_name\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/116",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 13 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/117",
    "content": "{ \"removed\" : 11,  \"unsentPurged\" : 11,  \"unsentRetained\" : 0,  \"readings\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/118",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/119",
    "content": "{\"count\":1,\"rows\":[{\"id\":2000,\"key\":\"tz_01\",\"description\":\"test - timezone - all tables\",\"data\":{\"test\":\"timezone\"},\"ts\":\"2019-04-17 14:01:02.123\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/12",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/120",
    "content": "{\"count\":1,\"rows\":[{\"id\":2000,\"key\":\"tz_01\",\"description\":\"test - timezone - all tables\",\"data\":{\"test\":\"timezone\"},\"ts\":\"2019-04-17 14:01:02.123\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/121",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_max\":\"2019-04-17 14:01:02.123456+00:00\",\"ts_timestamp\":\"2019-04-17 14:01:02.123\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/122",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/123",
    "content": "{\"count\":1,\"rows\":[{\"id\":941,\"asset_code\":\"tz_02\",\"reading\":{\"test\":\"2\"},\"user_ts\":\"2019-04-17 14:01:02.123456\",\"ts\":\"2019-04-17 13:40:21.594\"}]}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/124",
    "content": "{\"count\":1,\"rows\":[{\"asset_code\":\"tz_02\",\"reading\":{\"test\":\"2\"},\"user_ts\":\"2019-04-17 16:01:02.123456\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/125",
    "content": "{\"count\":1,\"rows\":[{\"user_ts_max\":\"2019-04-17 16:01:02.123456\",\"ts_timestamp\":\"2019-04-17 16:01:02.123\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/126",
    "content": "{\"count\":1,\"rows\":[{\"test_min\":\"2\",\"ts_timestamp\":\"2019-04-17 16:01:02.123\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/127",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 31 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/128",
    "content": "{\"count\":2,\"rows\":[{\"asset_code\":\"Asset1\",\"time\":\"2019-10-11 15:11:00\",\"reading\":{\"rate\":{\"min\":2,\"max\":96,\"average\":42.7368421052632,\"count\":19,\"sum\":812},\"rms\":{\"min\":10,\"max\":99,\"average\":43.5714285714286,\"count\":7,\"sum\":305}}},{\"asset_code\":\"Asset2\",\"time\":\"2019-10-11 15:11:00\",\"reading\":{\"lux\":{\"min\":3456,\"max\":3456,\"average\":3456.0,\"count\":1,\"sum\":3456},\"pressure\":{\"min\":8,\"max\":1023,\"average\":504.0,\"count\":6,\"sum\":3024},\"temp\":{\"min\":12.306,\"max\":90,\"average\":49.9306,\"count\":10,\"sum\":499.306}}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/13",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"table fledge.test has no column named Nonexistant\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/14",
    "content": "{ \"entryPoint\" : \"insert\", \"message\" : \"Failed to parse JSON payload\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/15",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/16",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/17",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/18",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/19",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/2",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/20",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/21",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"column\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/22",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"condition\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/23",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" object is missing a \\\"value\\\" property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/24",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"where\\\" property must be a JSON object\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/25",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"An inserted row\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/26",
    "content": "{ \"entryPoint\" : \"Select sort\", \"message\" : \"Missing property \\\"column\\\"\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/27",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/28",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"TEST2\",\"description\":\"The 'description' has been \\\"updated\\\"\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/29",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/3",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/30",
    "content": "{\"count\":1,\"rows\":[{\"id\":2,\"key\":\"UPDA\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/31",
    "content": "{ \"entryPoint\" : \"update\", \"message\" : \"Missing values or expressions object in payload\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/32",
    "content": "{\"count\":2,\"rows\":[{\"count_id\":1,\"key\":\"UPDA\"},{\"count_id\":1,\"key\":\"TEST1\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/33",
    "content": "{ \"error\" : \"Unsupported URL: /fledge/nothing\" }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/34",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"no such table: fledge.doesntexist\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/35",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"no such column: doesntexist\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/37",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 3 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/38",
    "content": "{\"count\":2,\"rows\":[{\"id\":966457,\"asset_code\":\"MyAsset\",\"reading\":{\"rate\":18.4},\"user_ts\":\"2017-09-21 15:00:09.025655+01\",\"ts\":\"2017-10-04 11:38:39.368881+01\"},{\"id\":966458,\"asset_code\":\"MyAsset\",\"reading\":{\"rate\":45.1},\"user_ts\":\"2017-09-21 15:03:09.025655+01\",\"ts\":\"2017-10-04 11:38:39.368881+01\"}]}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/39",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/4",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/40",
    "content": "{ \"error\" : \"Missing query parameter count\" }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/41",
    "content": "{ \"error\" : \"Missing query parameter id\" }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/42",
    "content": "{ \"removed\" : 3,  \"unsentPurged\" : 3,  \"unsentRetained\" : 0,  \"readings\" : 0 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/43",
    "content": "{\"count\":1,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"test1\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/44",
    "content": "{\"count\":1,\"rows\":[{\"min_id\":1,\"max_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/45",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/46",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"MyDescription\":\"A test row\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/47",
    "content": "{\"count\":1,\"rows\":[{\"key\":\"TEST1\",\"JSONvalue\":\"test1\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/48",
    "content": "{\"count\":9,\"rows\":[{\"key\":\"TEST1\",\"description\":\"A test row\",\"time\":\"12:14:26\"},{\"key\":\"TEST2\",\"description\":\"A test row\",\"time\":\"12:14:27\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"time\":\"11:14:28\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"time\":\"11:14:29\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"time\":\"11:15:00\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"time\":\"11:15:33\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"time\":\"11:16:20\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"time\":\"05:14:30\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"time\":\"21:14:30\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/49",
    "content": "{\"count\":8,\"rows\":[{\"key\":\"TEST2\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 12:14:27\"},{\"key\":\"TEST3\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:14:28\"},{\"key\":\"TEST4\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:14:29\"},{\"key\":\"TEST5\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:15:00\"},{\"key\":\"TEST6\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:15:33\"},{\"key\":\"TEST7\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 11:16:20\"},{\"key\":\"TEST8\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 05:14:30\"},{\"key\":\"TEST9\",\"description\":\"A test row\",\"timestamp\":\"2017-10-10 21:14:30\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/5",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/50",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"return object must have either a column or json property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/51",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"The json property is missing a properties property\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/52",
    "content": "{\"count\":1,\"rows\":[{\"Entries\":9}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/53",
    "content": "{\"count\":1,\"rows\":[{\"sum_id\":43}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/54",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 100 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/55",
    "content": "{\"count\":1,\"rows\":[{\"min\":1,\"max\":98,\"average\":52.55,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/56",
    "content": "{\"count\":2,\"rows\":[{\"min\":1,\"max\":98,\"average\":53.7721518987342,\"asset_code\":\"MyAsset\",\"timestamp\":\"2017-10-11 15:10:52\"},{\"min\":2,\"max\":96,\"average\":47.9523809523809,\"asset_code\":\"MyAsset\",\"timestamp\":\"2017-10-11 15:10:50\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/57",
    "content": "{\"count\":2,\"rows\":[{\"min\":1,\"max\":98,\"average\":53.7721518987342,\"asset_code\":\"MyAsset\",\"bucket\":\"2017-10-11 15:10:52\"},{\"min\":2,\"max\":96,\"average\":47.9523809523809,\"asset_code\":\"MyAsset\",\"bucket\":\"2017-10-11 15:10:50\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/58",
    "content": "{\"count\":6,\"rows\":[{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:28.622\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:29.622\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:00.622\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:33.622\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:16:20.622\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 05:14:30.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/59",
    "content": "{\"count\":4,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:26.622\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:27.422\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 05:14:30.622\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 21:14:30.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/6",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/60",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:26.622\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:27.422\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:28.622\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:29.622\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:00.622\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:33.622\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:16:20.622\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 05:14:30.622\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 21:14:30.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/61",
    "content": "{ \"entryPoint\" : \"update\", \"message\" : \"Not all updates within transaction succeeded\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/62",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:26.622\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:27.422\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:28.622\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:29.622\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:00.622\"},{\"id\":6,\"key\":\"TEST6\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:33.622\"},{\"id\":6,\"key\":\"TEST7\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:16:20.622\"},{\"id\":8,\"key\":\"TEST8\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 05:14:30.622\"},{\"id\":9,\"key\":\"TEST9\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 21:14:30.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/63",
    "content": "{\"count\":1,\"rows\":[{\"Count\":100,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/64",
    "content": "{\"count\":100,\"rows\":[{\"timestamp\":\"2017-10-11 17:10:51.927\",\"Rate\":90},{\"timestamp\":\"2017-10-11 17:10:51.930\",\"Rate\":13},{\"timestamp\":\"2017-10-11 17:10:51.933\",\"Rate\":84},{\"timestamp\":\"2017-10-11 17:10:51.936\",\"Rate\":96},{\"timestamp\":\"2017-10-11 17:10:51.939\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:51.942\",\"Rate\":54},{\"timestamp\":\"2017-10-11 17:10:51.946\",\"Rate\":28},{\"timestamp\":\"2017-10-11 17:10:51.949\",\"Rate\":3},{\"timestamp\":\"2017-10-11 17:10:51.952\",\"Rate\":77},{\"timestamp\":\"2017-10-11 17:10:51.955\",\"Rate\":38},{\"timestamp\":\"2017-10-11 17:10:51.959\",\"Rate\":26},{\"timestamp\":\"2017-10-11 17:10:51.963\",\"Rate\":86},{\"timestamp\":\"2017-10-11 17:10:51.966\",\"Rate\":39},{\"timestamp\":\"2017-10-11 17:10:51.970\",\"Rate\":57},{\"timestamp\":\"2017-10-11 17:10:51.973\",\"Rate\":73},{\"timestamp\":\"2017-10-11 17:10:51.979\",\"Rate\":22},{\"timestamp\":\"2017-10-11 17:10:51.982\",\"Rate\":34},{\"timestamp\":\"2017-10-11 17:10:51.986\",\"Rate\":78},{\"timestamp\":\"2017-10-11 17:10:51.990\",\"Rate\":20},{\"timestamp\":\"2017-10-11 17:10:51.993\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:51.996\",\"Rate\":17},{\"timestamp\":\"2017-10-11 17:10:52.000\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:52.005\",\"Rate\":18},{\"timestamp\":\"2017-10-11 17:10:52.009\",\"Rate\":52},{\"timestamp\":\"2017-10-11 17:10:52.012\",\"Rate\":62},{\"timestamp\":\"2017-10-11 17:10:52.015\",\"Rate\":47},{\"timestamp\":\"2017-10-11 17:10:52.019\",\"Rate\":73},{\"timestamp\":\"2017-10-11 17:10:52.022\",\"Rate\":9},{\"timestamp\":\"2017-10-11 17:10:52.026\",\"Rate\":66},{\"timestamp\":\"2017-10-11 17:10:52.029\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.031\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:52.034\",\"Rate\":41},{\"timestamp\":\"2017-10-11 17:10:52.037\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:52.040\",\"Rate\":69},{\"timestamp\":\"2017-10-11 
17:10:52.043\",\"Rate\":98},{\"timestamp\":\"2017-10-11 17:10:52.046\",\"Rate\":13},{\"timestamp\":\"2017-10-11 17:10:52.050\",\"Rate\":91},{\"timestamp\":\"2017-10-11 17:10:52.053\",\"Rate\":18},{\"timestamp\":\"2017-10-11 17:10:52.056\",\"Rate\":78},{\"timestamp\":\"2017-10-11 17:10:52.059\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:52.062\",\"Rate\":48},{\"timestamp\":\"2017-10-11 17:10:52.066\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.070\",\"Rate\":79},{\"timestamp\":\"2017-10-11 17:10:52.073\",\"Rate\":87},{\"timestamp\":\"2017-10-11 17:10:52.075\",\"Rate\":60},{\"timestamp\":\"2017-10-11 17:10:52.078\",\"Rate\":48},{\"timestamp\":\"2017-10-11 17:10:52.081\",\"Rate\":88},{\"timestamp\":\"2017-10-11 17:10:52.084\",\"Rate\":3},{\"timestamp\":\"2017-10-11 17:10:52.086\",\"Rate\":93},{\"timestamp\":\"2017-10-11 17:10:52.089\",\"Rate\":83},{\"timestamp\":\"2017-10-11 17:10:52.092\",\"Rate\":76},{\"timestamp\":\"2017-10-11 17:10:52.095\",\"Rate\":97},{\"timestamp\":\"2017-10-11 17:10:52.098\",\"Rate\":31},{\"timestamp\":\"2017-10-11 17:10:52.100\",\"Rate\":49},{\"timestamp\":\"2017-10-11 17:10:52.103\",\"Rate\":36},{\"timestamp\":\"2017-10-11 17:10:52.106\",\"Rate\":15},{\"timestamp\":\"2017-10-11 17:10:52.109\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.111\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.114\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.116\",\"Rate\":68},{\"timestamp\":\"2017-10-11 17:10:52.119\",\"Rate\":22},{\"timestamp\":\"2017-10-11 17:10:52.122\",\"Rate\":54},{\"timestamp\":\"2017-10-11 17:10:52.124\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.127\",\"Rate\":49},{\"timestamp\":\"2017-10-11 17:10:52.130\",\"Rate\":59},{\"timestamp\":\"2017-10-11 17:10:52.132\",\"Rate\":6},{\"timestamp\":\"2017-10-11 17:10:52.135\",\"Rate\":82},{\"timestamp\":\"2017-10-11 17:10:52.137\",\"Rate\":5},{\"timestamp\":\"2017-10-11 17:10:52.140\",\"Rate\":1},{\"timestamp\":\"2017-10-11 
17:10:52.142\",\"Rate\":53},{\"timestamp\":\"2017-10-11 17:10:52.145\",\"Rate\":69},{\"timestamp\":\"2017-10-11 17:10:52.147\",\"Rate\":97},{\"timestamp\":\"2017-10-11 17:10:52.150\",\"Rate\":58},{\"timestamp\":\"2017-10-11 17:10:52.153\",\"Rate\":76},{\"timestamp\":\"2017-10-11 17:10:52.157\",\"Rate\":81},{\"timestamp\":\"2017-10-11 17:10:52.160\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.163\",\"Rate\":4},{\"timestamp\":\"2017-10-11 17:10:52.165\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.169\",\"Rate\":5},{\"timestamp\":\"2017-10-11 17:10:52.171\",\"Rate\":72},{\"timestamp\":\"2017-10-11 17:10:52.174\",\"Rate\":20},{\"timestamp\":\"2017-10-11 17:10:52.176\",\"Rate\":58},{\"timestamp\":\"2017-10-11 17:10:52.179\",\"Rate\":75},{\"timestamp\":\"2017-10-11 17:10:52.182\",\"Rate\":74},{\"timestamp\":\"2017-10-11 17:10:52.184\",\"Rate\":60},{\"timestamp\":\"2017-10-11 17:10:52.187\",\"Rate\":96},{\"timestamp\":\"2017-10-11 17:10:52.189\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.192\",\"Rate\":40},{\"timestamp\":\"2017-10-11 17:10:52.195\",\"Rate\":33},{\"timestamp\":\"2017-10-11 17:10:52.197\",\"Rate\":87},{\"timestamp\":\"2017-10-11 17:10:52.200\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.203\",\"Rate\":40},{\"timestamp\":\"2017-10-11 17:10:52.206\",\"Rate\":44},{\"timestamp\":\"2017-10-11 17:10:52.208\",\"Rate\":7},{\"timestamp\":\"2017-10-11 17:10:52.211\",\"Rate\":52},{\"timestamp\":\"2017-10-11 17:10:52.214\",\"Rate\":93},{\"timestamp\":\"2017-10-11 17:10:52.219\",\"Rate\":43},{\"timestamp\":\"2017-10-11 17:10:52.222\",\"Rate\":66},{\"timestamp\":\"2017-10-11 17:10:52.225\",\"Rate\":8},{\"timestamp\":\"2017-10-11 17:10:52.228\",\"Rate\":79}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/65",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 4 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/66",
    "content": "{\"count\":4,\"rows\":[{\"id\":106,\"key\":\"TEST6\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:33.622\"},{\"id\":106,\"key\":\"TEST7\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:16:20.622\"},{\"id\":108,\"key\":\"TEST8\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 05:14:30.622\"},{\"id\":109,\"key\":\"TEST9\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 21:14:30.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/67",
    "content": "{\"count\":2,\"rows\":[{\"description\":\"A test row\"},{\"description\":\"Updated with expression\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/68",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/69",
    "content": "{\"count\":2,\"rows\":[{\"id\":2,\"key\":\"UPDA\",\"description\":\"updated description\",\"data\":{\"json\":\"inserted object\"}},{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"json\":\"new value\"}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/7",
    "content": "{\"count\":1,\"rows\":[{\"count_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/70",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/71",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/72",
    "content": "{\"count\":1,\"rows\":[{\"id\":4,\"key\":\"Admin\",\"description\":\"URL of the admin API\",\"data\":{\"url\":{\"value\":\"new value\"}}}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/73",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/74",
    "content": "{\"count\":1,\"rows\":[{\"Count\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/75",
    "content": "{\"count\":9,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:26.622\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:27.422\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:28.622\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:29.622\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:00.622\"},{\"id\":106,\"key\":\"TEST6\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:33.622\"},{\"id\":106,\"key\":\"TEST7\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:16:20.622\"},{\"id\":108,\"key\":\"TEST8\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 05:14:30.622\"},{\"id\":109,\"key\":\"TEST9\",\"description\":\"Updated with expression\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 21:14:30.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/76",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"value\\\" of an \\\"newer\\\" condition must be an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/77",
    "content": "{\"count\":5,\"rows\":[{\"id\":1,\"key\":\"TEST1\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:26.622\"},{\"id\":2,\"key\":\"TEST2\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 12:14:27.422\"},{\"id\":3,\"key\":\"TEST3\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:28.622\"},{\"id\":4,\"key\":\"TEST4\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:14:29.622\"},{\"id\":5,\"key\":\"TEST5\",\"description\":\"A test row\",\"data\":{\"prop1\":\"test1\",\"obj1\":{\"p1\":\"v1\",\"p2\":\"v2\"}},\"ts\":\"2017-10-10 11:15:00.622\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/78",
    "content": "{\"count\":2,\"rows\":[{\"min\":2,\"max\":96,\"average\":47.9523809523809,\"user_ts\":\"2017-10-11 17:10:51\"},{\"min\":1,\"max\":98,\"average\":53.7721518987342,\"user_ts\":\"2017-10-11 17:10:52\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/79",
    "content": "{ \"response\" : \"inserted\", \"rows_affected\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/8",
    "content": "{\"count\":1,\"rows\":[{\"avg_id\":1.0}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/80",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/81",
    "content": "{ \"entryPoint\" : \"appendReadings\", \"message\" : \"Payload is missing a readings array\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/82",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/83",
    "content": "{\"count\":100,\"rows\":[{\"timestamp\":\"2017-10-11 17:10:51.927\",\"Rate\":90},{\"timestamp\":\"2017-10-11 17:10:51.930\",\"Rate\":13},{\"timestamp\":\"2017-10-11 17:10:51.933\",\"Rate\":84},{\"timestamp\":\"2017-10-11 17:10:51.936\",\"Rate\":96},{\"timestamp\":\"2017-10-11 17:10:51.939\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:51.942\",\"Rate\":54},{\"timestamp\":\"2017-10-11 17:10:51.946\",\"Rate\":28},{\"timestamp\":\"2017-10-11 17:10:51.949\",\"Rate\":3},{\"timestamp\":\"2017-10-11 17:10:51.952\",\"Rate\":77},{\"timestamp\":\"2017-10-11 17:10:51.955\",\"Rate\":38},{\"timestamp\":\"2017-10-11 17:10:51.959\",\"Rate\":26},{\"timestamp\":\"2017-10-11 17:10:51.963\",\"Rate\":86},{\"timestamp\":\"2017-10-11 17:10:51.966\",\"Rate\":39},{\"timestamp\":\"2017-10-11 17:10:51.970\",\"Rate\":57},{\"timestamp\":\"2017-10-11 17:10:51.973\",\"Rate\":73},{\"timestamp\":\"2017-10-11 17:10:51.979\",\"Rate\":22},{\"timestamp\":\"2017-10-11 17:10:51.982\",\"Rate\":34},{\"timestamp\":\"2017-10-11 17:10:51.986\",\"Rate\":78},{\"timestamp\":\"2017-10-11 17:10:51.990\",\"Rate\":20},{\"timestamp\":\"2017-10-11 17:10:51.993\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:51.996\",\"Rate\":17},{\"timestamp\":\"2017-10-11 17:10:52.000\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:52.005\",\"Rate\":18},{\"timestamp\":\"2017-10-11 17:10:52.009\",\"Rate\":52},{\"timestamp\":\"2017-10-11 17:10:52.012\",\"Rate\":62},{\"timestamp\":\"2017-10-11 17:10:52.015\",\"Rate\":47},{\"timestamp\":\"2017-10-11 17:10:52.019\",\"Rate\":73},{\"timestamp\":\"2017-10-11 17:10:52.022\",\"Rate\":9},{\"timestamp\":\"2017-10-11 17:10:52.026\",\"Rate\":66},{\"timestamp\":\"2017-10-11 17:10:52.029\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.031\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:52.034\",\"Rate\":41},{\"timestamp\":\"2017-10-11 17:10:52.037\",\"Rate\":2},{\"timestamp\":\"2017-10-11 17:10:52.040\",\"Rate\":69},{\"timestamp\":\"2017-10-11 
17:10:52.043\",\"Rate\":98},{\"timestamp\":\"2017-10-11 17:10:52.046\",\"Rate\":13},{\"timestamp\":\"2017-10-11 17:10:52.050\",\"Rate\":91},{\"timestamp\":\"2017-10-11 17:10:52.053\",\"Rate\":18},{\"timestamp\":\"2017-10-11 17:10:52.056\",\"Rate\":78},{\"timestamp\":\"2017-10-11 17:10:52.059\",\"Rate\":70},{\"timestamp\":\"2017-10-11 17:10:52.062\",\"Rate\":48},{\"timestamp\":\"2017-10-11 17:10:52.066\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.070\",\"Rate\":79},{\"timestamp\":\"2017-10-11 17:10:52.073\",\"Rate\":87},{\"timestamp\":\"2017-10-11 17:10:52.075\",\"Rate\":60},{\"timestamp\":\"2017-10-11 17:10:52.078\",\"Rate\":48},{\"timestamp\":\"2017-10-11 17:10:52.081\",\"Rate\":88},{\"timestamp\":\"2017-10-11 17:10:52.084\",\"Rate\":3},{\"timestamp\":\"2017-10-11 17:10:52.086\",\"Rate\":93},{\"timestamp\":\"2017-10-11 17:10:52.089\",\"Rate\":83},{\"timestamp\":\"2017-10-11 17:10:52.092\",\"Rate\":76},{\"timestamp\":\"2017-10-11 17:10:52.095\",\"Rate\":97},{\"timestamp\":\"2017-10-11 17:10:52.098\",\"Rate\":31},{\"timestamp\":\"2017-10-11 17:10:52.100\",\"Rate\":49},{\"timestamp\":\"2017-10-11 17:10:52.103\",\"Rate\":36},{\"timestamp\":\"2017-10-11 17:10:52.106\",\"Rate\":15},{\"timestamp\":\"2017-10-11 17:10:52.109\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.111\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.114\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.116\",\"Rate\":68},{\"timestamp\":\"2017-10-11 17:10:52.119\",\"Rate\":22},{\"timestamp\":\"2017-10-11 17:10:52.122\",\"Rate\":54},{\"timestamp\":\"2017-10-11 17:10:52.124\",\"Rate\":94},{\"timestamp\":\"2017-10-11 17:10:52.127\",\"Rate\":49},{\"timestamp\":\"2017-10-11 17:10:52.130\",\"Rate\":59},{\"timestamp\":\"2017-10-11 17:10:52.132\",\"Rate\":6},{\"timestamp\":\"2017-10-11 17:10:52.135\",\"Rate\":82},{\"timestamp\":\"2017-10-11 17:10:52.137\",\"Rate\":5},{\"timestamp\":\"2017-10-11 17:10:52.140\",\"Rate\":1},{\"timestamp\":\"2017-10-11 
17:10:52.142\",\"Rate\":53},{\"timestamp\":\"2017-10-11 17:10:52.145\",\"Rate\":69},{\"timestamp\":\"2017-10-11 17:10:52.147\",\"Rate\":97},{\"timestamp\":\"2017-10-11 17:10:52.150\",\"Rate\":58},{\"timestamp\":\"2017-10-11 17:10:52.153\",\"Rate\":76},{\"timestamp\":\"2017-10-11 17:10:52.157\",\"Rate\":81},{\"timestamp\":\"2017-10-11 17:10:52.160\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.163\",\"Rate\":4},{\"timestamp\":\"2017-10-11 17:10:52.165\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.169\",\"Rate\":5},{\"timestamp\":\"2017-10-11 17:10:52.171\",\"Rate\":72},{\"timestamp\":\"2017-10-11 17:10:52.174\",\"Rate\":20},{\"timestamp\":\"2017-10-11 17:10:52.176\",\"Rate\":58},{\"timestamp\":\"2017-10-11 17:10:52.179\",\"Rate\":75},{\"timestamp\":\"2017-10-11 17:10:52.182\",\"Rate\":74},{\"timestamp\":\"2017-10-11 17:10:52.184\",\"Rate\":60},{\"timestamp\":\"2017-10-11 17:10:52.187\",\"Rate\":96},{\"timestamp\":\"2017-10-11 17:10:52.189\",\"Rate\":30},{\"timestamp\":\"2017-10-11 17:10:52.192\",\"Rate\":40},{\"timestamp\":\"2017-10-11 17:10:52.195\",\"Rate\":33},{\"timestamp\":\"2017-10-11 17:10:52.197\",\"Rate\":87},{\"timestamp\":\"2017-10-11 17:10:52.200\",\"Rate\":67},{\"timestamp\":\"2017-10-11 17:10:52.203\",\"Rate\":40},{\"timestamp\":\"2017-10-11 17:10:52.206\",\"Rate\":44},{\"timestamp\":\"2017-10-11 17:10:52.208\",\"Rate\":7},{\"timestamp\":\"2017-10-11 17:10:52.211\",\"Rate\":52},{\"timestamp\":\"2017-10-11 17:10:52.214\",\"Rate\":93},{\"timestamp\":\"2017-10-11 17:10:52.219\",\"Rate\":43},{\"timestamp\":\"2017-10-11 17:10:52.222\",\"Rate\":66},{\"timestamp\":\"2017-10-11 17:10:52.225\",\"Rate\":8},{\"timestamp\":\"2017-10-11 17:10:52.228\",\"Rate\":79}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/84",
    "content": "{ \"entryPoint\" : \"limit\", \"message\" : \"Limit must be specfied as an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/85",
    "content": "{ \"entryPoint\" : \"skip\", \"message\" : \"Skip must be specfied as an integer\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/86",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"unrecognized token: \\\":\\\"\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/87",
    "content": "{ \"entryPoint\" : \"retrieve\", \"message\" : \"SQLite3 plugin does not support timezones in qeueries\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/88",
    "content": "{\"count\":1,\"rows\":[{\"description\":\"added'some'ch'''ars'\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/89",
    "content": "{ \"response\" : \"deleted\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/9",
    "content": "{\"count\":1,\"rows\":[{\"min_id\":1}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/90",
    "content": "{ \"response\" : \"updated\", \"rows_affected\"  : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/91",
    "content": "{\"count\":1,\"rows\":[{\"min\":1,\"max\":98,\"average\":52.55,\"timestamp\":\"2017-10-11 17:10\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/92",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/93",
    "content": "{\"count\":1,\"rows\":[{\"min\":\"\",\"max\":\"\",\"average\":\"\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/94",
    "content": "{\"count\":1,\"rows\":[{\"min\":1,\"max\":98,\"average\":52.55,\"timestamp\":\"2017-10-11 17\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/95",
    "content": "{ \"response\" : \"appended\", \"readings_added\" : 1 }"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/96",
    "content": "{\"count\":0,\"rows\":[]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/97",
    "content": "{\"count\":1,\"rows\":[{\"count_id\":10}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/98",
    "content": "{ \"entryPoint\" : \"where clause\", \"message\" : \"The \\\"value\\\" of a \\\"in\\\" condition must be an array and must not be empty.\", \"retryable\" : false}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/expected_EUROPE_ROME/99",
    "content": "{\"count\":1,\"rows\":[{\"min\":1,\"max\":98,\"average\":52.9207920792079,\"asset_code\":\"MyAsset\"}]}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/makeReadings.sh",
    "content": "#!/bin/sh\nif [ $# -eq \"1\" ] ; then\nnreadings=$1\nelse\nnreadings=100\nfi\necho \"{\"\necho \"   \\\"readings\\\" : [\"\nwhile [ $nreadings -gt 1 ]; do\n#ts=`date --rfc-3339=ns | sed -e 's/\\+.*//'`\nts=`date --rfc-3339=ns`\nreading=`shuf -i 1-100 -n 1`\necho \"\t\t{\"\necho \"\t\t\t\\\"asset_code\\\": \\\"MyAsset\\\",\"\necho \"\t\t\t\\\"reading\\\" : { \\\"rate\\\" : $reading },\"\necho \"\t\t\t\\\"user_ts\\\" : \\\"$ts\\\"\"\necho \"\t\t},\"\nnreadings=`expr $nreadings - 1`\ndone\n\n#ts=`date --rfc-3339=ns | sed -e 's/\\+.*//'`\nts=`date --rfc-3339=ns`\nreading=`shuf -i 1-100 -n 1`\necho \"\t\t{\"\necho \"\t\t\t\\\"asset_code\\\": \\\"MyAsset\\\",\"\necho \"\t\t\t\\\"reading\\\" : { \\\"rate\\\" : $reading },\"\necho \"\t\t\t\\\"user_ts\\\" : \\\"$ts\\\"\"\necho \"\t\t}\"\necho \"\t]\"\necho \"}\"\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/FOGL-983.json",
    "content": "{\n    \"condition\" : { \n                    \"column\"    : \"key\",\n                    \"condition\" : \"=\",\n                    \"value\"     : \"DEVICE\"\n                  },\n    \"values\"    : {\n                    \"description\" : \"added'some'ch'''ars'\"\n                  }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/add_readings_now.json",
    "content": "{               \n   \"readings\" : [       \n                {       \n                        \"asset_code\": \"MyAsset\",\n                        \"reading\" : { \"rate\" : 90 },\n                        \"user_ts\" : \"now()\"\n                }]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/add_snapshot.json",
    "content": "{ \"id\" : \"99\" }\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/addnew.json",
    "content": "{\n\t\"id\" : 200,\n\t\"key\" : \"T2NOW\",\n\t\"description\" : \"An inserted row with the current timestamp\",\n\t\"data\" : { \"json\" : \"inserted object\" },\n\t\"ts\" : \"now()\"\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/asset.json",
    "content": "{\n\t\"readings\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\t\t\t\"reading\" : { \"rate\" : 18.4 },\n\t\t\t\t\t\t\"user_ts\" : \"2017-09-21 15:00:09.025655\"\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\t\t\t\"reading\" : { \"rate\" : 45.1 },\n\t\t\t\t\t\t\"user_ts\" : \"2017-09-21 15:03:09.025655\"\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\t\t\t\"reading\" : { \"rate\" : 60.1 },\n\t\t\t\t\t\t\"user_ts\" : \"2017-09-21 15:04:09.025655\"\n\t\t\t\t\t}\n\n\t]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/bad_sort_1.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"column\" : \"id\"\n\t\t},\n\t\"limit\" : 1,\n\t\"skip\" : 1\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/bad_sort_2.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"direction\" : \"asc\"\n\t\t},\n\t\"limit\" : 1,\n\t\"skip\" : 1\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/bad_update.json",
    "content": "{\n\t\"condition\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : 2\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/badreadings.json",
    "content": "{\n   \"noreadings\" : [\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 90 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.927191906\"\n\t\t}\n\t]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/count_assets.json",
    "content": "{\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"count\",\n\t\t\t\"column\" : \"*\",\n\t\t\t\"alias\" : \"Count\"\n\t\t\t},\n\t\"group\" : \"asset_code\"\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/delete.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"key\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"DEVICE\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/fogl690-error.json",
    "content": "{\"key\": \"DEVICE\", \"value\": {\"readings_insert_batch_size\": {\"type\": \"integer\", \"default\": \"100\", \"value\": \"100\", \"description\": \"The maximum number of readings in a batch of inserts\"}, \"max_concurrent_readings_inserts\": {\"type\": \"integer\", \"default\": \"5\", \"value\": \"5\", \"description\": \"The maximum number of concurrent processes that send batches of readings to storage\"}, \"readings_insert_batch_timeout_seconds\": {\"type\": \"integer\", \"default\": \"1\", \"value\": \"1\", \"description\": \"The number of seconds to wait for a readings list to reach the minimum batch size\"}, \"max_readings_insert_batch_connection_idle_seconds\": {\"type\": \"integer\", \"default\": \"60\", \"value\": \"60\", \"description\": \"Close storage connections used to insert readings when idle for this number of seconds\"}, \"readings_buffer_size\": {\"type\": \"integer\", \"default\": \"500\", \"value\": \"500\", \"description\": \"The maximum number of readings to buffer in memory\"}, \"write_statistics_frequency_seconds\": {\"type\": \"integer\", \"default\": \"5\", \"value\": \"5\", \"description\": \"The number of seconds to wait before writing readings-related statistics to storage\"}, \"max_readings_insert_batch_reconnect_wait_seconds\": {\"type\": \"integer\", \"default\": \"10\", \"value\": \"10\", \"description\": \"The maximum number of seconds to wait before reconnecting to storage when inserting readings\"}}, \"description\": \"Device server configuration\"}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/fogl690-ok.json",
    "content": "{\"key\": \"DEVICE\", \"display_name\": \"DEVICE\", \"value\": {\"readings_insert_batch_size\": {\"type\": \"integer\", \"default\": \"100\", \"value\": \"100\", \"description\": \"The maximum number of readings in a batch of inserts\"}, \"max_concurrent_readings_inserts\": {\"type\": \"integer\", \"default\": \"5\", \"value\": \"5\", \"description\": \"The maximum number of concurrent processes that send batches of readings to storage\"}, \"readings_insert_batch_timeout_seconds\": {\"type\": \"integer\", \"default\": \"1\", \"value\": \"1\", \"description\": \"The number of seconds to wait for a readings list to reach the minimum batch size\"}, \"max_readings_insert_batch_connection_idle_seconds\": {\"type\": \"integer\", \"default\": \"60\", \"value\": \"60\", \"description\": \"Close storage connections used to insert readings when idle for this number of seconds\"}, \"readings_buffer_size\": {\"type\": \"integer\", \"default\": \"500\", \"value\": \"500\", \"description\": \"The maximum number of readings to buffer in memory\"}, \"write_statistics_frequency_seconds\": {\"type\": \"integer\", \"default\": \"5\", \"value\": \"5\", \"description\": \"The number of seconds to wait before writing readings-related statistics to storage\"}, \"max_readings_insert_batch_reconnect_wait_seconds\": {\"type\": \"integer\", \"default\": \"10\", \"value\": \"10\", \"description\": \"The maximum number of seconds to wait before reconnecting to storage when inserting readings\"}}, \"description\": \"Device server configuration\"}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/get-FOGL-983.json",
    "content": "{\n    \"where\" : { \n                    \"column\"    : \"key\",\n                    \"condition\" : \"=\",\n                    \"value\"     : \"DEVICE\"\n                  },\n    \"return\" : [ \"description\" ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/group.json",
    "content": "{\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"count\",\n\t\t\t\"column\" : \"id\"\n\t\t\t},\n\t\"group\" : \"key\",\n        \"sort\" : {\n                        \"column\" : \"id\",\n                        \"direction\" : \"desc\"\n                 }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/group_time.json",
    "content": "{\n        \"aggregate\": [{\n            \"operation\": \"min\",\n            \"json\": {\n                \"column\": \"reading\",\n                \"properties\": \"rate\"\n            },\n            \"alias\": \"min\"\n        }, {\n            \"operation\": \"max\",\n            \"json\": {\n                \"column\": \"reading\",\n                \"properties\": \"rate\"\n            },\n            \"alias\": \"max\"\n        }, {\n            \"operation\": \"avg\",\n            \"json\": {\n                \"column\": \"reading\",\n                \"properties\": \"rate\"\n            },\n            \"alias\": \"average\"\n    }],\n    \"where\": {\n        \"column\": \"asset_code\",\n        \"condition\": \"=\",\n        \"value\": \"MyAsset\"\n    },\n    \"limit\": 20,\n    \"group\" : {\n        \"column\": \"user_ts\",\n        \"format\": \"YYYY-MM-DD HH24:MI:SS\"\n    },\n    \"sort\" : {\n        \"column\": \"user_ts\",\n        \"direction\": \"asc\"\n    }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/insert.json",
    "content": "{\n\t\"id\" : 2,\n\t\"key\" : \"TEST2\",\n\t\"description\" : \"An inserted row\",\n\t\"data\" : { \"json\" : \"inserted object\" }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/insert2.json",
    "content": "{\n\t\"id\" : 4,\n\t\"key\" : \"Admin\",\n\t\"description\" : \"URL of the admin API\",\n\t\"data\" : { \"url\" : { \"value\" : \"inserted object\" } }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/insert_1row.json",
    "content": "{\n  \"id\" : 1000,\n  \"key\" : \"INSERT_1_1\",\n  \"description\" : \"insert - 1 rows\",\n  \"data\" : { \"json\" : \"inserted\" }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/insert_bad.json",
    "content": "{\n\t\"id\" : 2,\n\t\"key\" : \"TEST2\",\n\t\"description\" : \"An inserted row\",\n\t\"data\" : { \"json\" : \"inserted object\" },\n\t\"Nonexistant\" : 4\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/insert_bad2.json",
    "content": ",\n{\n\t\"id\" : 2\n\t\"key\" : \"TEST2\",\n\t\"description\" : \"An inserted row\",\n\t\"data\" : { \"json\" : \"inserted object\" }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/insert_multi_rows.json",
    "content": "{\n  \"inserts\" : [\n    {\n      \"id\": 1000,\n      \"key\": \"INSERT_2_1\",\n      \"description\": \"insert - multi rows\",\n      \"data\": {\n        \"json\": \"inserted\"\n      }\n    },\n    {\n      \"id\": 1001,\n      \"key\": \"INSERT_2_2\",\n      \"description\": \"insert - multi rows\",\n      \"data\": {\n        \"json\": \"inserted\"\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/limit.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"direction\" : \"asc\"\n\t\t},\n\t\"limit\" : 1\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/limit_max_int.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"direction\" : \"asc\"\n\t\t},\n\t\"limit\" : 999999999999\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/msec_add_readings_user_ts.json",
    "content": "{\n  \"readings\" : [\n    { \"user_ts\" : \"xxx\",                             \"asset_code\": \"msec_001_BAD\", \"reading\" : { \"value\" : 1  }},\n    { \"user_ts\" : \"2019-30-07 10:17:17.123456+00\",   \"asset_code\": \"msec_002_BAD\", \"reading\" : { \"value\" : 2  }},\n\n    { \"user_ts\" : \"2019-01-01 10:01:01\",             \"asset_code\": \"msec_003_OK\",  \"reading\" : { \"value\" : 3  }},\n    { \"user_ts\" : \"2019-01-02 10:02:01.0\",           \"asset_code\": \"msec_004_OK\",  \"reading\" : { \"value\" : 4  }},\n    { \"user_ts\" : \"2019-01-03 10:02:02.841\",         \"asset_code\": \"msec_005_OK\",  \"reading\" : { \"value\" : 5  }},\n    { \"user_ts\" : \"2019-01-04 10:03:05.123456\",      \"asset_code\": \"msec_006_OK\",  \"reading\" : { \"value\" : 6  }},\n\n    { \"user_ts\" : \"2019-01-04 10:03:05.1+00:00\",     \"asset_code\": \"msec_007_OK\",  \"reading\" : { \"value\" : 7  }},\n    { \"user_ts\" : \"2019-01-04 10:03:05.123+00:00\",   \"asset_code\": \"msec_008_OK\",  \"reading\" : { \"value\" : 8  }},\n\n    { \"user_ts\" : \"2019-03-03 10:03:03.123456+00:00\",\"asset_code\": \"msec_009_OK\",  \"reading\" : { \"value\" : 9  }},\n    { \"user_ts\" : \"2019-03-04 10:03:04.123456+01:00\",\"asset_code\": \"msec_010_OK\",  \"reading\" : { \"value\" :10  }},\n    { \"user_ts\" : \"2019-03-05 10:03:05.123456-01:00\",\"asset_code\": \"msec_011_OK\",  \"reading\" : { \"value\" :11  }},\n    { \"user_ts\" : \"2019-03-04 10:03:04.123456+02:30\",\"asset_code\": \"msec_012_OK\",  \"reading\" : { \"value\" :12  }},\n    { \"user_ts\" : \"2019-03-05 10:03:05.123456-02:30\",\"asset_code\": \"msec_013_OK\",  \"reading\" : { \"value\" :13  }}\n\n  ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/msec_query_asset_aggmin.json",
    "content": "{\n  \"aggregate\":\n  {\n    \"operation\": \"min\",\n    \"column\": \"user_ts\",\n    \"alias\": \"user_ts_min\"\n  },\n  \"where\":\n  {\n    \"column\": \"asset_code\",\n    \"condition\": \"=\",\n    \"value\": \"msec_009_OK\"\n  }\n}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/msec_query_asset_aggminarray.json",
    "content": "{\n  \"aggregate\": [\n      {\n        \"operation\": \"min\",\n        \"column\": \"user_ts\",\n        \"alias\": \"user_ts_min\"\n      },\n      {\n        \"operation\": \"max\",\n        \"column\": \"user_ts\",\n        \"alias\": \"user_ts_max\"\n      }\n    ],\n    \"where\": {\n      \"column\": \"asset_code\",\n      \"condition\": \"=\",\n      \"value\": \"msec_009_OK\"\n  }\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/msec_query_asset_alias.json",
    "content": "{\n        \"where\" : {\n                        \"column\"    : \"asset_code\",\n                        \"condition\" : \"=\",\n                        \"value\"     : \"msec_009_OK\"\n                  },\n\n        \"return\" : [\n                        \"reading\",\n                        {\n                            \"column\" : \"user_ts\",\n                            \"alias\" :  \"user_ts_alias\"\n                        }\n                    ]\n}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/msec_query_asset_noalias.json",
    "content": "{\n        \"where\" : {\n                        \"column\"    : \"asset_code\",\n                        \"condition\" : \"=\",\n                        \"value\"     : \"msec_009_OK\"\n                  },\n\n        \"return\" : [\n                        \"reading\",\n                        {\n                            \"column\" : \"user_ts\"\n                        }\n\n                    ]\n}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/msec_query_readings.json",
    "content": "{\n  \"where\" : {\n    \"column\" : \"asset_code\",\n    \"condition\" : \"!=\",\n    \"value\" : \"MyAsset\"\n  },\n  \"return\" : [\n    \"asset_code\",\n    \"reading\",\n    \"user_ts\"\n  ],\n  \"sort\" : {\n    \"column\" : \"id\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/multi_and.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"condition\" : \">\",\n\t\t\t\"value\" : \"2\",\n\t\t\t\"and\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"<\",\n\t\t\t\t\"value\" : \"9\"\n\t\t\t}\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/multi_mixed.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"condition\" : \"<\",\n\t\t\t\"value\" : \"3\",\n\t\t\t\"or\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">\",\n\t\t\t\t\"value\" : \"7\",\n\t\t\t\t\"and\" : {\n\t\t\t\t\t\"column\" : \"description\",\n\t\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\t\"value\" : \"A test row\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/multi_or.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"condition\" : \"<\",\n\t\t\t\"value\" : \"3\",\n\t\t\t\"or\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">\",\n\t\t\t\t\"value\" : \"7\"\n\t\t\t}\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/newer.json",
    "content": "{\n\t\"where\" : {\n\t\t\"column\" : \"ts\",\n\t\t\"condition\" : \"newer\",\n\t\t\"value\" : 60\n\t\t},\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"count\",\n\t\t\t\"column\" : \"*\",\n\t\t\t\"alias\" : \"Count\"\n\t\t\t}\n\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/newerBad.json",
    "content": "{\n\t\"where\" : {\n\t\t\"column\" : \"ts\",\n\t\t\"condition\" : \"newer\",\n\t\t\"value\" : \"60\"\n\t\t},\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"count\",\n\t\t\t\"column\" : \"*\",\n\t\t\t\"alias\" : \"Count\"\n\t\t\t}\n\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/older.json",
    "content": "{\n\t\"where\" : {\n\t\t\"column\" : \"ts\",\n\t\t\"condition\" : \"older\",\n\t\t\"value\" : 600\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/query_readings.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"MyAsset\"\n\t\t\t},\n\t\"aggregate\" : [\n\t\t\t{\n\t\t\t\t\"operation\" : \"min\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"min\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"max\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"max\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"avg\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"average\"\n\t\t\t}\n\t\t      ],\n\t\"group\" : \"asset_code\"\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/query_readings_in.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"in\",\n\t\t\t\t\"value\" : [\"MyAsset\", \"m\", \"p\"]\n\t\t\t},\n\t\"aggregate\" : [\n\t\t\t{\n\t\t\t\t\"operation\" : \"min\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"min\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"max\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"max\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"avg\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"average\"\n\t\t\t}\n\t\t      ],\n\t\"group\" : \"asset_code\"\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/query_readings_in_bad_values.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"not in\",\n\t\t\t\t\"value\" : {\"m\": \"p\"}\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/query_readings_not_in.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"not in\",\n\t\t\t\t\"value\" : [\"MyAsset\"]\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/query_readings_timebucket.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"MyAsset\"\n\t\t\t},\n\t\"aggregate\" : [\n\t\t\t{\n\t\t\t\t\"operation\" : \"min\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"min\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"max\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"max\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"avg\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"average\"\n\t\t\t}\n\t\t      ],\n\t\"group\" : \"asset_code\",\n\t\"timebucket\" :  {\n\t\t\t   \"timestamp\" : \"user_ts\",\n\t\t       \"format\"    : \"YYYYY-MM-DD HH24:MI:SS\",\n\t\t\t   \"size\"      : \"2\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/query_readings_timebucket1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"MyAsset\"\n\t\t\t},\n\t\"aggregate\" : [\n\t\t\t{\n\t\t\t\t\"operation\" : \"min\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"min\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"max\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"max\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"avg\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"average\"\n\t\t\t}\n\t\t      ],\n\t\"group\" : \"asset_code\",\n\t\"timebucket\" :  {\n\t\t\t   \"timestamp\" : \"user_ts\",\n\t\t\t   \"size\"      : \"2\",\n\t\t\t   \"format\"    : \"DD-MM-YYYYY HH24:MI:SS\",\n\t\t\t   \"alias\"     : \"bucket\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/query_readings_timebucket_bad.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"asset_code\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"MyAsset\"\n\t\t\t},\n\t\"aggregate\" : [\n\t\t\t{\n\t\t\t\t\"operation\" : \"min\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"min\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"max\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate_bad\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"max\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"avg\",\n\t\t\t\t\"json\" : {\n\t\t\t\t\t\t\"column\" : \"reading\",\n\t\t\t\t\t\t\"properties\" : \"rate\"\n\t\t\t\t\t },\n\t\t\t\t\"alias\" : \"average\"\n\t\t\t}\n\t\t      ],\n\t\"group\" : \"asset_code\",\n\t\"timebucket\" :  {\n\t\t\t   \"timestamp\" : \"user_ts\",\n\t\t\t   \"size\"      : \"2\",\n\t\t\t   \"format\"    : \"DD-MM-YYYYY HH24:MI:SS\",\n\t\t\t   \"alias\"     : \"bucket\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/query_timebucket_datapoints.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\"column\" : \"asset_code\",\n\t\t\t\"condition\" : \"in\",\n\t\t\t\"value\" : [\"Asset1\", \"Asset2\"]\n\t},\n\t\"aggregate\" : {\n\t\t\"operation\" : \"all\"\n\t},\n\t\"timebucket\" :  {\n\t\t   \"timestamp\" : \"user_ts\",\n\t\t   \"size\"      : \"20\",\n\t\t   \"alias\": \"time\"\n\t},\n\t\"limit\" : 40\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/read_id_1xx.json",
    "content": "{\n  \"where\" : {\n\t\t\"column\" : \"id\",\n\t\t\"condition\" : \">=\",\n\t\t\"value\" : 100\n\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/reading_property.json",
    "content": "{\n\t\"return\" : [\n\t\t\t{\n\t\t\t\t\"column\": \"user_ts\",\n\t\t\t\t\"format\": \"YYYY-MM-DD HH24:MI:SS.MS\",\n\t\t\t\t\"alias\" : \"timestamp\"},\n\t\t\t{\n\t\t\t \"json\" : {\n\t\t\t\t\t \"column\"     : \"reading\",\n\t\t\t\t\t \"properties\" : \"rate\"\n\t\t\t\t   },\n\t\t\t\t   \"alias\" : \"Rate\"\n\t\t\t\t }\n\t\t  ],\n\t\"sort\" : {\n\t\t\t\"column\" : \"user_ts\"\n\t\t }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/reading_property_array.json",
    "content": "{\n\t\"return\" : [\n\t\t\t{\n\t\t\t\t\"column\": \"user_ts\",\n\t\t\t\t\"format\": \"YYYY-MM-DD HH24:MI:SS.MS\",\n\t\t\t\t\"alias\" : \"timestamp\"},\n\t\t\t{\n\t\t\t \"json\" : {\n\t\t\t\t\t \"column\"     : \"reading\",\n\t\t\t\t\t \"properties\" : [ \"rate\" ]\n\t\t\t\t   },\n\t\t\t\t   \"alias\" : \"Rate\"\n\t\t\t\t }\n\t\t  ],\n\t\"sort\" : {\n\t\t\t\"column\" : \"user_ts\"\n\t\t }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/reading_property_bad.json",
    "content": "{\n\t\"return\" : [\n\t\t\t{\n\t\t\t\t\"column\": \"user_ts\",\n\t\t\t\t\"format\": \"YYYY-MM-DD HH24:MI:SS.MS\",\n\t\t\t\t\"alias\" : \"timestamp\"},\n\t\t\t{\n\t\t\t \"json\" : {\n\t\t\t\t\t \"column\"     : \"reading\",\n\t\t\t\t\t \"properties\" : \"temperature\"\n\t\t\t\t   },\n\t\t\t\t   \"alias\" : \"Temperature\"\n\t\t\t\t }\n\t\t  ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/readings.json",
    "content": "{\n   \"readings\" : [\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 90 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.927191906\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 13 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.930077316\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 84 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.933029946\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 96 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.936351551\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 2 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.939633486\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 54 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.942860303\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 28 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.946072705\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 3 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.949303775\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 77 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.952531085\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 38 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.955723213\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 26 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.959048996\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 86 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.963008671\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 39 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.966955796\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : 
{ \"rate\" : 57 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.970002731\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 73 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.973462058\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 22 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.979245019\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 34 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.982825759\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 78 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.986811733\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 20 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.990407738\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 70 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.993641575\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 17 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:51.996855039\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 2 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.000131125\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 18 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.005780345\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 52 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.009004973\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 62 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.012432720\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 47 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.015931562\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 73 },\n\t\t\t\"user_ts\" : \"2017-10-11 
15:10:52.019228599\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 9 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.022430891\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 66 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.026312804\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 30 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.029248210\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 70 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.031930036\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 41 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.034759931\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 2 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.037494767\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 69 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.040900408\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 98 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.043678399\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 13 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.046833403\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 91 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.050656950\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 18 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.053705004\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 78 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.056612533\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 70 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.059816818\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" 
: 48 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.062667327\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 94 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.066313145\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 79 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.070054298\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 87 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.073032300\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 60 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.075709529\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 48 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.078469594\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 88 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.081413496\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 3 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.084291705\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 93 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.086988344\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 83 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.089854394\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 76 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.092628067\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 97 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.095270252\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 31 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.098438807\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 49 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.100992475\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": 
\"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 36 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.103591495\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 15 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.106399825\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 67 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.109127938\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 67 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.111779897\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 94 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.114243636\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 68 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.116767938\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 22 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.119724561\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 54 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.122293876\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 94 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.124791131\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 49 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.127260082\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 59 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.130223611\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 6 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.132656635\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 82 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.135199400\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 5 },\n\t\t\t\"user_ts\" : \"2017-10-11 
15:10:52.137822954\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 1 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.140390135\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 53 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.142788188\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 69 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.145544112\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 97 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.147995971\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 58 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.150512948\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 76 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.153044797\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 81 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.157244547\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 30 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.160105403\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 4 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.163172158\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 67 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.165631529\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 5 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.169386767\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 72 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.171844558\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 20 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.174302648\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" 
: 58 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.176815980\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 75 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.179330413\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 74 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.182026984\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 60 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.184453875\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 96 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.187060071\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 30 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.189544659\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 40 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.192069519\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 33 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.195018406\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 87 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.197861237\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 67 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.200490704\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 40 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.203486404\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 44 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.206153509\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 7 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.208776100\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 52 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.211964714\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": 
\"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 93 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.214504566\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 43 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.219924224\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 66 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.222819931\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 8 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.225401299\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"MyAsset\",\n\t\t\t\"reading\" : { \"rate\" : 79 },\n\t\t\t\"user_ts\" : \"2017-10-11 15:10:52.228495495\"\n\t\t}\n\t]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/readings_timebucket.json",
    "content": "{\n   \"readings\" : [\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 90 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.927191906\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 13 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.930077316\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 84, \"pressure\": 1001 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.933029946\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 96, \"rms\": 10 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.936351551\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 2 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.939633486\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 54 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.942860303\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 28 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.946072705\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 3 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.949303775\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rms\" : 70 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.993641575\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 17, \"rms\" : 81 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:51.996855039\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 21 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.000131125\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"pressure\" : 918 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.005780345\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 52 },\n\t\t\t\"user_ts\" : \"2019-10-11 
15:10:52.009004973\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 62 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.012432720\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 47, \"rms\" : 19 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.015931562\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 73, \"pressure\" : 1023 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.019228599\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 9 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.022430891\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 66, \"rms\" : 15 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.026312804\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 30 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.029248210\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 70 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.031930036\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 41 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:52.034759931\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"pressure\" : 30, \"temp\" : 19 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:54.189544659\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 12.306 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:54.192069519\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 33, \"rms\": 99 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:54.195018406\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 87 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:54.197861237\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 67, \"rms\" : 11 },\n\t\t\t\"user_ts\" : \"2019-10-11 
15:10:54.200490704\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"temp\" : 40 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:53.203486404\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"pressure\" : 44, \"temp\": 90 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:53.206153509\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 7 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:53.208776100\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset2\",\n\t\t\t\"reading\" : { \"pressure\" : 8, \"temp\" : 19, \"lux\": 3456 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:54.225401299\"\n\t\t},\n\t\t{\n\t\t\t\"asset_code\": \"Asset1\",\n\t\t\t\"reading\" : { \"rate\" : 79 },\n\t\t\t\"user_ts\" : \"2019-10-11 15:10:54.228495495\"\n\t\t}\n\t]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/series_group_by_hours.json",
    "content": "{\n\t\"aggregate\": [\n\t\t{\n\t\t\t\"alias\": \"min\", \"operation\": \"min\",\n\t\t\t\"json\": {\n\t\t\t\t\"column\": \"reading\",\n\t\t\t\t\"properties\": \"rate\"\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"alias\": \"max\",\n\t\t\t\"operation\": \"max\",\n\t\t\t\"json\": {\n\t\t\t\t\"column\": \"reading\",\n\t\t\t\t\"properties\": \"rate\"\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"alias\": \"average\",\n\t\t\t\"operation\": \"avg\",\n\t\t\t\"json\": {\n\t\t\t\t\"column\": \"reading\",\n\t\t\t\t\"properties\": \"rate\"\n\t\t\t}\n\t\t}\n\t],\n\t\"where\": {\n\t\t\t\"column\": \"asset_code\",\n\t\t\t\"condition\": \"=\",\n\t\t\t\"value\": \"MyAsset\"\n\t},\n\t\"group\": {\n\t\t\t\"column\": \"user_ts\",\n\t\t\t\"alias\": \"timestamp\",\n\t\t\t\"format\": \"YYYY-MM-DD HH24\"\n\t},\n\t\"limit\": 20,\n\t\"sort\": {\n\t\t\t\"column\": \"timestamp\",\n\t\t\t\"direction\": \"desc\"\n\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/series_group_by_minutes.json",
    "content": "{\n\t\"aggregate\": [\n\t\t{\n\t\t\t\"alias\": \"min\", \"operation\": \"min\",\n\t\t\t\"json\": {\n\t\t\t\t\"column\": \"reading\",\n\t\t\t\t\"properties\": \"rate\"\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"alias\": \"max\",\n\t\t\t\"operation\": \"max\",\n\t\t\t\"json\": {\n\t\t\t\t\"column\": \"reading\",\n\t\t\t\t\"properties\": \"rate\"\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"alias\": \"average\",\n\t\t\t\"operation\": \"avg\",\n\t\t\t\"json\": {\n\t\t\t\t\"column\": \"reading\",\n\t\t\t\t\"properties\": \"rate\"\n\t\t\t}\n\t\t}\n\t],\n\t\"where\": {\n\t\t\t\"column\": \"asset_code\",\n\t\t\t\"condition\": \"=\",\n\t\t\t\"value\": \"MyAsset\"\n\t},\n\t\"group\": {\n\t\t\t\"column\": \"user_ts\",\n\t\t\t\"alias\": \"timestamp\",\n\t\t\t\"format\": \"YYYY-MM-DD HH24:MI\"\n\t},\n\t\"limit\": 20,\n\t\"sort\": {\n\t\t\t\"column\": \"timestamp\",\n\t\t\t\"direction\": \"desc\"\n\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/series_seconds.json",
    "content": "{\n\t\"aggregate\": [\n\t\t{\n\t\t\t\"json\": {\n\t\t\t\t\"properties\": \"rate\",\n\t\t\t\t\"column\": \"reading\"\n\t\t\t},\n\t\t\t\"operation\": \"min\",\n\t\t\t\"alias\": \"min\"\n\t\t},\n\t\t{\n\t\t\t\"json\": {\n\t\t\t\t\"properties\": \"rate\",\n\t\t\t\t\"column\": \"reading\"\n\t\t\t},\n\t\t\t\"operation\": \"max\",\n\t\t\t\"alias\": \"max\"\n\t\t},\n\t\t{\n\t\t\t\"json\": {\n\t\t\t\t\"properties\": \"rate\",\n\t\t\t\t\"column\": \"reading\"\n\t\t\t},\n\t\t\t\"operation\": \"avg\",\n\t\t\t\"alias\": \"average\"\n\t\t}\n\t],\n\t\"where\": {\n\t\t\"column\": \"asset_code\",\n\t\t\"condition\": \"=\",\n\t\t\"value\": \"MyAsset\",\n\t\t\"and\": {\n\t\t\t\"column\": \"user_ts\",\n\t\t\t\"condition\": \"newer\",\n\t\t\t\"value\": 1\n\t\t}\n\t},\n\t\"group\": {\n\t\t\"format\": \"YYYY-MM-DD HH24:MI:SS\",\n\t\t\"column\": \"user_ts\",\n\t\t\"alias\": \"timestamp\"\n\t},\n\t\"limit\": 20,\n\t\"sort\": {\n\t\t\"column\": \"timestamp\",\n\t\t\"direction\": \"desc\"\n\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/series_summary_seconds.json",
    "content": "{\n\t\"aggregate\": [\n\t\t{\n\t\t\t\"alias\": \"min\",\n\t\t\t\"operation\": \"min\",\n\t\t\t\"json\": {\n\t\t\t\t\"column\": \"reading\",\n\t\t\t\t\"properties\": \"rate\"\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"alias\": \"max\",\n\t\t\t\"operation\": \"max\",\n\t\t\t\"json\": {\n\t\t\t\t\"column\": \"reading\",\n\t\t\t\t\"properties\": \"rate\"\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"alias\": \"average\",\n\t\t\t\"operation\": \"avg\",\n\t\t\t\"json\": {\n\t\t\t\t\"column\": \"reading\",\n\t\t\t\t\"properties\": \"rate\"\n\t\t\t}\n\t\t}\n\t],\n\t\"where\": {\n\t\t\"column\": \"asset_code\",\n\t\t\"condition\": \"=\",\n\t\t\"value\": \"MyAsset\",\n\t\t\"and\": {\n\t\t\t\"column\": \"user_ts\",\n\t\t\t\"condition\": \"newer\",\n\t\t\t\"value\": 1\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/skip.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"direction\" : \"asc\"\n\t\t},\n\t\"limit\" : 1,\n\t\"skip\" : 1\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/skip_max_int.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"direction\" : \"asc\"\n\t\t},\n\t\"limit\" : 1,\n\t\"skip\" : 99999999999\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/sort.json",
    "content": "{\n\t\"sort\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"direction\" : \"desc\"\n\t\t }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/sort2.json",
    "content": "{\n\t\"sort\" : [\n\t\t{\n\t\t\t\"column\" : \"id\",\n\t\t\t\"direction\" : \"asc\"\n\t\t},\n\t\t{\n\t\t\t\"column\" : \"key\",\n\t\t\t\"direction\" : \"asc\"\n\t\t}\n\t\t],\n\t\"limit\" : 1\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/timezone.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">=\",\n\t\t\t\t\"value\" : \"2\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"timezone\" : \"utc\"\n\t\t\t}\n\t\t    ],\n\t\"sort\" : {\n\t\t\t\"column\" : \"id:\"\n\t}\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/timezone_bad.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">=\",\n\t\t\t\t\"value\" : \"2\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"timezone\" : \"bad\"\n\t\t\t}\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/tz_all_insert.json",
    "content": "{\n  \"id\" : 2000,\n  \"key\" : \"tz_01\",\n  \"description\" : \"test - timezone - all tables\",\n  \"data\" : { \"test\" : \"timezone\" },\n  \"ts\" : \"2019-04-17 14:01:02.123456+00:00\"\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/tz_all_read_2.json",
    "content": "{\n  \"where\":{\n    \"column\":\"id\",\n    \"condition\":\"=\",\n    \"value\":2000\n  },\n  \"limit\":1,\n  \"sort\":{\n    \"column\":\"id\",\n    \"direction\":\"ASC\"\n  }\n}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/tz_all_read_3.json",
    "content": "{\n  \"aggregate\":{\n    \"operation\":\"max\",\n    \"column\":\"ts\",\n    \"alias\":\"user_ts_max\"\n  },\n  \"group\":{\n    \"column\":\"ts\",\n    \"alias\":\"ts_timestamp\",\n    \"format\":\"YYYY-MM-DD HH24:MI:SS.MS\"\n  },\n  \"where\":{\n    \"column\":\"id\",\n    \"condition\":\"=\",\n    \"value\":\"2000\"\n  }\n}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/tz_readings_insert.json",
    "content": "{\n  \"readings\" : [\n    {\n      \"asset_code\": \"tz_02\",\n      \"reading\" : { \"test\" : \"2\" },\n      \"user_ts\" : \"2019-04-17 14:01:02.123456+00:00\"\n    }\n  ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/tz_readings_read_2.json",
    "content": "{\n  \"return\" : [\n      \"asset_code\",\n      \"reading\",\n      \"user_ts\"\n  ],\n\n  \"where\":{\n    \"column\":\"asset_code\",\n    \"condition\":\"=\",\n    \"value\":\"tz_02\"\n  }\n}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/tz_readings_read_3.json",
    "content": "{\n  \"aggregate\":{\n    \"operation\":\"max\",\n    \"column\":\"user_ts\",\n    \"alias\":\"user_ts_max\"\n  },\n  \"group\":{\n    \"column\":\"user_ts\",\n    \"alias\":\"ts_timestamp\",\n    \"format\":\"YYYY-MM-DD HH24:MI:SS.MS\"\n  },\n  \"where\":{\n    \"column\":\"asset_code\",\n    \"condition\":\"=\",\n    \"value\":\"tz_02\"\n  }\n}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/tz_readings_read_4.json",
    "content": "{\n  \"aggregate\":[\n    {\n      \"operation\":\"min\",\n      \"json\":{\n        \"column\":\"reading\",\n        \"properties\":\"test\"\n      },\n      \"alias\":\"test_min\"\n    }\n  ],\n  \"group\":{\n    \"column\":\"user_ts\",\n    \"alias\":\"ts_timestamp\",\n    \"format\":\"YYYY-MM-DD HH24:MI:SS.MS\"\n  },\n  \"where\":{\n    \"column\":\"asset_code\",\n    \"condition\":\"=\",\n    \"value\":\"tz_02\"\n  }\n}"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/update.json",
    "content": "{\n\t\"condition\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : 2\n\t\t\t},\n\t\"values\" : {\n\t\t\t\t\"description\" : \"The 'description' has been \\\"updated\\\"\"\n\t\t   }\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/updateKey.json",
    "content": "{\n\t\"condition\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : 2\n\t\t\t},\n\t\"values\" : {\n\t\t\t\t\"description\" : \"updated description\",\n\t\t\t\t\"key\" : \"UPDA\"\n\t\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/update_bad.json",
    "content": "{\n\t\"condition\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : 200\n\t\t\t},\n\t\"values\" : {\n\t\t\t\t\"description\" : \"updated description\"\n\t\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/update_expression.json",
    "content": "{\n    \"condition\" : { \n                    \"column\"    : \"id\",\n                    \"condition\" : \">\",\n                    \"value\"     : 5\n                  },\n    \"values\"    : {\n                    \"description\" : \"Updated with expression\"\n                  },\n    \"expressions\" : [\n\t\t\t{\n\t\t\t  \"column\"   : \"id\",\n\t\t\t  \"operator\" : \"+\",\n\t\t\t  \"value\"    : 100\n\t\t\t}\n\t   \t    ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/update_json.json",
    "content": "{\n\t\"condition\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : 1\n\t\t\t},\n\t\"json_properties\" : [\n\t\t\t\t{\n\t\t\t\t\t\"column\" : \"data\",\n\t\t\t\t\t\"path\"   : [ \"json\" ],\n\t\t\t\t\t\"value\"  : \"new value\"\n\t\t\t\t}\n\t\t\t    ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/update_json2.json",
    "content": "{\n\t\"condition\"       : {\n\t\t\t\t\"column\"    : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\"     : 4\n\t\t\t    },\n\t\"json_properties\" : [\n\t\t\t\t{\n\t\t\t\t\t\"column\" : \"data\",\n\t\t\t\t\t\"path\"   : [ \"url\", \"value\" ],\n\t\t\t\t\t\"value\"  : \"new value\"\n\t\t\t\t}\n\t\t\t    ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/update_multi_rows.json",
    "content": "{\n  \"updates\" : [\n    {\n      \"condition\": {\n        \"column\": \"id\",\n        \"condition\": \"=\",\n        \"value\": 1\n      },\n      \"values\": {\n        \"description\": \"update multi rows - 1\"\n      }\n    },\n    {\n      \"condition\": {\n        \"column\": \"id\",\n        \"condition\": \"=\",\n        \"value\": 2\n      },\n      \"values\": {\n        \"description\": \"update multi rows - 2\"\n      }\n    },\n    {\n      \"condition\": {\n        \"column\": \"id\",\n        \"condition\": \"=\",\n        \"value\": 3\n      },\n      \"values\": {\n        \"description\": \"update multi rows - 3\"\n      }\n    }\n\n  ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/updatenow.json",
    "content": "{\n        \"where\" : {\n                \"column\" : \"id\",\n                \"condition\" : \"=\",\n                \"value\" : 200\n                },\n        \"values\"    : {\n                    \"ts\" : \"now()\"\n                  }\n\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_avg.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"avg\",\n\t\t\t\"column\" : \"id\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_bad_1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_bad_2.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_bad_3.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_bad_4.json",
    "content": "{\n\t\"where\" : \"x = 1\"\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_bad_format1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", { \"alias\" : \"MyDescription\" } ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_bad_format2.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", {\n\t\t\t \"json\" : {\n\t\t\t\t \"column\" : \"data\"\n\t\t\t\t   },\n\t\t\t  \"alias\" : \"JSONvalue\"\n\t\t\t\t }\n\t\t  ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_count.json",
    "content": "{\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"count\",\n\t\t\t\"column\" : \"id\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_count_star.json",
    "content": "{\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"count\",\n\t\t\t\"column\" : \"*\",\n\t\t\t\"alias\" : \"Entries\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_distinct.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"<\",\n\t\t\t\t\"value\" : 1000\n\t\t\t},\n         \"modifier\" : \"distinct\",\n\t\"return\" : [ \"description\"\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_id_1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_id_1_r1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\" ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_id_1_r2.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", { \"column\" : \"description\", \"alias\" : \"MyDescription\" } ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_id_1_r3.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", {\n\t\t\t \"json\" : {\n\t\t\t\t \"column\" : \"data\",\n\t\t\t\t \"properties\" : \"json\"\n\t\t\t\t   },\n\t\t\t  \"alias\" : \"JSONvalue\"\n\t\t\t\t }\n\t\t  ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_id_2.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"2\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_id_not_1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"!=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_in.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"condition\" : \"in\",\n\t\t\t\"value\" : [\"a\", -4, \"added'some'ch'''ars'\", 12.3, 0, \"One\\\"Two\", -0.01234, \"c d\"]\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_in_bad_values.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\"column\" : \"id\",\n\t\t\t\"condition\" : \"in\",\n\t\t\t\"value\" : {\"a\": 12}\n\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_like.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"description\",\n\t\t\t\t\"condition\" : \"like\",\n\t\t\t\t\"value\" : \"%row%\",\n\t\t\t\t\"and\" : {\n\t\t\t\t\t\"column\" : \"key\",\n\t\t\t\t\t\"condition\" : \"!=\",\n\t\t\t\t\t\"value\" : \"T2NOW\"\n\t\t\t\t\t}\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_max.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"max\",\n\t\t\t\"column\" : \"id\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_min.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"min\",\n\t\t\t\"column\" : \"id\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_multi_aggregatee.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"aggregate\" : [\n\t\t\t{\n\t\t\t\t\"operation\" : \"min\",\n\t\t\t\t\"column\" : \"id\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"operation\" : \"max\",\n\t\t\t\t\"column\" : \"id\"\n\t\t\t}\n\t\t      ]\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_not_in.json",
    "content": "{\n\t\"aggregate\" : {\n\t\t\"operation\" : \"count\",\n\t\t\"column\" : \"id\"\n\t},\n\t\"where\" : {\n\t\t\"column\" : \"id\",\n\t\t\"condition\" : \"not in\",\n\t\t\"value\" : [\"a\", -4, \"added'some'ch'''ars'\", 12.3, 0, \"One\\\"Two\", -0.01234, \"c d\"]\n\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_sum.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"aggregate\" : {\n\t\t\t\"operation\" : \"sum\",\n\t\t\t\"column\" : \"id\"\n\t\t\t}\n}\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_test2_d1.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"format\" : \"HH12:MI:SS\",\n\t\t\t  \"alias\" : \"time\"\n\t\t\t}\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_test2_d2.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"<\",\n\t\t\t\t\"value\" : \"10\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"format\" : \"HH24:MI:SS\",\n\t\t\t  \"alias\" : \"time\"\n\t\t\t}\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_test2_d3.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">\",\n\t\t\t\t\"value\" : \"3\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"format\" : \"DD Mon YYYY\",\n\t\t\t  \"alias\" : \"date\"\n\t\t\t}\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_test2_d4.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \">=\",\n\t\t\t\t\"value\" : \"2\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"format\" : \"YYYY-MM-DD HH24:MI:SS\",\n\t\t\t  \"alias\" : \"timestamp\"\n\t\t\t}\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/payloads/where_test2_d5.json",
    "content": "{\n\t\"where\" : {\n\t\t\t\t\"column\" : \"id\",\n\t\t\t\t\"condition\" : \"!=\",\n\t\t\t\t\"value\" : \"1\"\n\t\t\t},\n\t\"return\" : [ \"key\", \"description\", \n\t\t\t{\n\t\t\t  \"column\" : \"ts\",\n\t\t\t  \"format\" : \"DD Mon YYYY HH12:MI:SS am\",\n\t\t\t  \"alias\" : \"date\"\n\t\t\t}\n\t\t    ]\n}\n\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/testCleanup.sh",
    "content": "sqlite3 ${DEFAULT_SQLITE_DB_FILE} << EOF\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge';\ndelete from fledge.test;\ndrop table fledge.test;\ndelete from fledge.test2;\ndrop table fledge.test2;\nEOF\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/testRunner.sh",
    "content": "#!/usr/bin/env bash\n\n# Default values\nexport FLEDGE_DATA=./plugin_cfg/sqlite          # Select the persistent storage plugin\nexport storage_exec=$FLEDGE_ROOT/services/fledge.services.storage\nexport TZ='Etc/UTC'\n\nshow_configuration () {\n\n\techo \"Fledge unit tests for the SQLite plugin\"\n\n\techo \"Starting storage layer      :$storage_exec:\"\n\techo \"timezone                    :$TZ:\"\n\techo \"expected dir                :$expected_dir:\"\n\techo \"configuration               :$FLEDGE_DATA:\"\n\techo \"database file fledge        :$DEFAULT_SQLITE_DB_FILE:\"\n\techo \"database file readings      :$DEFAULT_SQLITE_DB_READINGS_FILE:\"\n}\n\n\n#\n# evaluates : FLEDGE_DATA, storage_exec, TZ, and expected_dir\n#\nif [[ \"$@\" != \"\" ]];\nthen\n\t# Handles input parameters\n\tSCRIPT_NAME=`basename $0`\n\toptions=`getopt -o c:s:t: --long configuration:,storage_exec:,timezone: -n \"$SCRIPT_NAME\" -- \"$@\"`\n\teval set -- \"$options\"\n\n\twhile true ; do\n\t    case \"$1\" in\n\t        -c|--configuration)\n\t            export FLEDGE_DATA=\"$2\"\n\t            shift 2\n\t            ;;\n\n\t        -s|--storage_exec)\n\t            export storage_exec=\"$2\"\n\t            shift 2\n\t            ;;\n\n\t        -t|--timezone)\n\t\t\t\texport TZ=\"$2\"\n\t            shift 2\n\t            ;;\n\t        --)\n\t            shift\n\t            break\n\t            ;;\n\t    esac\n\tdone\nfi\n\n# Converts '/' to '_' and to upper case\nstep1=\"${TZ/\\//_}\"\nexpected_dir=\"expected_${step1^^}\"\n\nif [[ \"$storage_exec\" != \"\" ]] ; then\n\n\tshow_configuration\n\t$storage_exec\n\tsleep 1\n\nelif [[ \"${FLEDGE_ROOT}\" != \"\" ]] ; then\n\n\tshow_configuration\n\t$storage_exec\n\tsleep 1\n\nelse\n\techo Must either set FLEDGE_ROOT or provide storage service to test\n\texit 1\nfi\n\n#\n# Main\n#\nexport IFS=\",\"\ntestNum=1\nn_failed=0\nn_passed=0\nn_unchecked=0\n\n./testSetup.sh > /dev/null 2>&1\nsleep 5\n\nrm -f failed\nrm -rf results\nmkdir 
results\ncat testset | while read name method url payload optional; do\n#sleep 0.003\necho -n \"Test $testNum ${name}: \"\nif [ \"$payload\" = \"\" ] ; then\n\toutput=$(curl -X $method $url -o results/$testNum 2>/dev/null)\n\tcurlstate=$?\nelse\n\toutput=$(curl -X $method $url -d@payloads/$payload -o results/$testNum 2>/dev/null)\n\tcurlstate=$?\nfi\n\n# Forces the creation of an empty file if the output of the curl command is empty;\n# this is needed due to the behavior of the curl command on RHEL/CentOS\nif [ \"$output\" = \"\" ] ; then\n\n\ttouch results/$testNum\nelse\n\techo -n \"${output}\" > results/$testNum\nfi\n\nif [ \"$optional\" = \"\" ] ; then\n\tif [ ! -f ${expected_dir}/$testNum ]; then\n\t\tn_unchecked=`expr $n_unchecked + 1`\n\t\techo Missing expected results in :${expected_dir}: for test $testNum - result unchecked\n\telse\n\t\tcmp -s results/$testNum ${expected_dir}/$testNum\n\t\tif [ $? -ne \"0\" ]; then\n\t\t\techo Failed\n\t\t\tn_failed=`expr $n_failed + 1`\n\t\t\tif [ \"$payload\" = \"\" ]\n\t\t\tthen\n\t\t\t\techo Test $testNum  ${name} curl -X $method $url >> failed\n\t\t\telse\n\t\t\t\techo Test $testNum  ${name} curl -X $method $url -d@payloads/$payload  >> failed\n\t\t\tfi\n\t\t\t(\n\t\t\tunset IFS\n\t\t\techo \"   \" Expected: \"`cat ${expected_dir}/$testNum`\" >> failed\n\t\t\techo \"   \" Got:     \"`cat results/$testNum`\" >> failed\n\t\t\t)\n\t\t\techo >> failed\n\t\telse\n\t\t\techo Passed\n\t\t\tn_passed=`expr $n_passed + 1`\n\t\tfi\n\tfi\nelif [ \"$optional\" = \"checkstate\" ] ; then\n\tif [ $curlstate -eq 0 ] ; then\n\t\techo Passed\n\t\tn_passed=`expr $n_passed + 1`\n\telse\n\t\techo Failed\n\t\tn_failed=`expr $n_failed + 1`\n\t\tif [ \"$payload\" = \"\" ]\n\t\tthen\n\t\t\techo Test $testNum  curl -X $method $url >> failed\n\t\telse\n\t\t\techo Test $testNum  curl -X $method $url -d@payloads/$payload  >> failed\n\t\tfi\n\tfi\nfi\n#sleep 2\ntestNum=`expr $testNum + 1`\nrm -f tests.result\necho $n_failed Tests Failed \t\t>  
tests.result\necho $n_passed Tests Passed \t\t>> tests.result\necho $n_unchecked Tests Unchecked\t>> tests.result\ndone\n./testCleanup.sh > /dev/null\ncat tests.result\nrm -f tests.result\nif [ -f \"failed\" ]; then\n\techo\n\techo \"Failed Tests\"\n\techo \"============\"\n\tcat failed\n\texit 1\nfi\nexit 0\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/testSetup.sh",
    "content": "\nsqlite3 ${DEFAULT_SQLITE_DB_READINGS_FILE} << EOF\nATTACH DATABASE '${DEFAULT_SQLITE_DB_READINGS_FILE}' AS 'readings_1';\n\n--drop table if exists readings_1.readings;\n\n-- Readings table\n-- This tables contains the readings for assets.\n-- An asset can be a south with multiple sensor, a single sensor,\n-- a software or anything that generates data that is sent to Fledge\nCREATE TABLE IF NOT EXISTS readings_1.readings_1 (\n    id         INTEGER                PRIMARY KEY AUTOINCREMENT,\n    reading    JSON                        NOT NULL DEFAULT '{}',  -- The json object received\n    user_ts    DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')), -- UTC time\n    ts         DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))  -- UTC time\n);\n\n-- CREATE INDEX fki_readings_fk1\n--    ON readings (asset_code);\n\ndelete from readings_1.readings;\n\nCREATE TABLE readings_1.asset_reading_catalogue (\n    table_id     INTEGER               PRIMARY KEY AUTOINCREMENT,\n    db_id        INTEGER               NOT NULL,\n    asset_code   character varying(50) NOT NULL\n);\n\nCREATE TABLE readings_1.configuration_readings (\n    global_id         INTEGER\n);\n\n\n\n\nEOF\n\n\nsqlite3 ${DEFAULT_SQLITE_DB_FILE} << EOF\nATTACH DATABASE '${DEFAULT_SQLITE_DB_FILE}' AS 'fledge';\n\ndrop table if exists fledge.test;\n\ncreate table fledge.test (\n\tid\tbigint,\n\tkey\tcharacter(5),\n\tdescription\tcharacter varying(255),\n\tdata    JSON\t\n);\n\ndrop table if exists fledge.test2;\n\ninsert into fledge.test values (1, 'TEST1',  'A test row', '{ \"json\" : \"test1\" }');\ndelete from fledge.configuration;\n\nCREATE TABLE IF NOT EXISTS fledge.configuration (\n       key         character varying(255)      NOT NULL, -- Primary key\n       display_name character varying(255)     NOT NULL, -- Display Name\n       description character varying(255)      NOT NULL, -- Description, in plain text\n       value       JSON                       NOT NULL DEFAULT 
'{}',          -- JSON object containing the configuration values\n       ts          DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime')),          -- Timestamp, updated at every change\n       CONSTRAINT configuration_pkey PRIMARY KEY (key) );\n\ncreate table fledge.test2 (\n\tid\tbigint,\n\tkey\tchar(5),\n\tdescription\tcharacter varying(255),\n\tdata\tjson,\n\tts\tDATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW', 'localtime'))\n);\n\ninsert into fledge.test2 values (1, 'TEST1',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:14:26.622315');\ninsert into fledge.test2 values (2, 'TEST2',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:14:27.422315+00:00');\ninsert into fledge.test2 values (3, 'TEST3',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:14:28.622315+01:00');\ninsert into fledge.test2 values (4, 'TEST4',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:14:29.622315+01:00');\ninsert into fledge.test2 values (5, 'TEST5',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:15:00.622315+01:00');\ninsert into fledge.test2 values (6, 'TEST6',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:15:33.622315+01:00');\ninsert into fledge.test2 values (7, 'TEST7',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 12:16:20.622315+01:00');\ninsert into fledge.test2 values (8, 'TEST8',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 13:14:30.622315+08:00');\ninsert into fledge.test2 values (9, 'TEST9',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-10 
14:14:30.622315-07:00');\ninsert into fledge.test2 values (10, 'TEST10',  'A test row', '{ \"prop1\" : \"test1\", \"obj1\" : { \"p1\" : \"v1\",\"p2\" : \"v2\"} }', '2017-10-11 12:14:30.622315+01');\n\nEOF\n"
  },
  {
    "path": "tests/unit/C/services/storage/sqlite/testset",
    "content": "Common Read,GET,http://localhost:8080/storage/table/test,\nCommon Read key,GET,http://localhost:8080/storage/table/test?id=1,\nCommon Read key empty,GET,http://localhost:8080/storage/table/test?id=2,\nCommon Read complex,PUT,http://localhost:8080/storage/table/test/query,where_code_1.json\nCommon Read complex empty,PUT,http://localhost:8080/storage/table/test/query,where_id_2.json\nCommon Read complex not equal,PUT,http://localhost:8080/storage/table/test/query,where_id_not_1.json\nCommon Read complex count,PUT,http://localhost:8080/storage/table/test/query,where_count.json\nCommon Read complex avg,PUT,http://localhost:8080/storage/table/test/query,where_avg.json\nCommon Read complex min,PUT,http://localhost:8080/storage/table/test/query,where_min.json\nCommon Read complex max,PUT,http://localhost:8080/storage/table/test/query,where_max.json\nCommon Insert,POST,http://localhost:8080/storage/table/test,insert.json\nCommon Read back,GET,http://localhost:8080/storage/table/test?id=2,\nCommon Insert bad column,POST,http://localhost:8080/storage/table/test,insert_bad.json\nCommon Insert bad syntax,POST,http://localhost:8080/storage/table/test,insert_bad2.json\nCommon Delete,DELETE,http://localhost:8080/storage/table/test,where_id_2.json\nCommon Read deleted,GET,http://localhost:8080/storage/table/test?id=2,\nCommon Delete non-existent,DELETE,http://localhost:8080/storage/table/test,where_id_2.json\nCommon Insert,POST,http://localhost:8080/storage/table/test,insert.json\nCommon Read limit,PUT,http://localhost:8080/storage/table/test/query,limit.json\nCommon Read skip,PUT,http://localhost:8080/storage/table/test/query,skip.json\nCommon Read bad 1,PUT,http://localhost:8080/storage/table/test/query,where_bad_1.json\nCommon Read bad 2,PUT,http://localhost:8080/storage/table/test/query,where_bad_2.json\nCommon Read bad 3,PUT,http://localhost:8080/storage/table/test/query,where_bad_3.json\nCommon Read bad 
4,PUT,http://localhost:8080/storage/table/test/query,where_bad_4.json\nCommon Read default sort order,PUT,http://localhost:8080/storage/table/test/query,bad_sort_1.json\nCommon Read bad sort 2,PUT,http://localhost:8080/storage/table/test/query,bad_sort_2.json\nCommon Update,PUT,http://localhost:8080/storage/table/test,update.json\nCommon Read back,GET,http://localhost:8080/storage/table/test?id=2,\nCommon Update,PUT,http://localhost:8080/storage/table/test,updateKey.json\nCommon Read back,GET,http://localhost:8080/storage/table/test?key=UPDA,\nCommon Update no values,PUT,http://localhost:8080/storage/table/test,bad_update.json\nCommon Read group,PUT,http://localhost:8080/storage/table/test/query,group.json\nBad URL,GET,http://localhost:8080/fledge/nothing,\nBad table,GET,http://localhost:8080/storage/table/doesntexist,\nBad column,GET,http://localhost:8080/storage/table/test?doesntexist=9,\nPing interface,GET,http://localhost:1081/fledge/service/ping,,checkstate\nAdd Readings,POST,http://localhost:8080/storage/reading,asset.json\nFetch Readings,GET,http://localhost:8080/storage/reading?id=1&count=1000,,checkstate\nFetch Readings zero count,GET,http://localhost:8080/storage/reading?id=1&count=0,\nFetch Readings no count,GET,http://localhost:8080/storage/reading?id=1,\nFetch Readings no id,GET,http://localhost:8080/storage/reading?count=1000,\nPurge Readings,PUT,http://localhost:8080/storage/reading/purge?age=1000&sent=0&flags=purge,\nCommon Read sort array,PUT,http://localhost:8080/storage/table/test/query,sort2.json\nCommon Read multiple aggregates,PUT,http://localhost:8080/storage/table/test/query,where_multi_aggregatee.json,\nCommon Read columns,PUT,http://localhost:8080/storage/table/test/query,where_id_1_r1.json,\nCommon Read columns alias,PUT,http://localhost:8080/storage/table/test/query,where_id_1_r2.json,\nCommon Read columns json,PUT,http://localhost:8080/storage/table/test/query,where_id_1_r3.json,\nDate 
format2,PUT,http://localhost:8080/storage/table/test2/query,where_test2_d2.json\nDate format4,PUT,http://localhost:8080/storage/table/test2/query,where_test2_d4.json\nBad format1,PUT,http://localhost:8080/storage/table/test2/query,where_bad_format1.json\nBad format2,PUT,http://localhost:8080/storage/table/test2/query,where_bad_format2.json\nCount star,PUT,http://localhost:8080/storage/table/test2/query,where_count_star.json\nsum,PUT,http://localhost:8080/storage/table/test2/query,where_sum.json\nAdd more Readings,POST,http://localhost:8080/storage/reading,readings.json\nQuery Readings,PUT,http://localhost:8080/storage/reading/query,query_readings.json\nQuery Readings Timebucket,PUT,http://localhost:8080/storage/reading/query,query_readings_timebucket.json\nQuery Readings Timebucket 1,PUT,http://localhost:8080/storage/reading/query,query_readings_timebucket1.json\nMulti And,PUT,http://localhost:8080/storage/table/test2/query,multi_and.json\nMulti Or,PUT,http://localhost:8080/storage/table/test2/query,multi_or.json\nMulti Mixed,PUT,http://localhost:8080/storage/table/test2/query,multi_mised.json\nupdate - Update Bad Condition,PUT,http://localhost:8080/storage/table/test2,update_bad.json\nRead back,GET,http://localhost:8080/storage/table/test2,\nCount Assets,PUT,http://localhost:8080/storage/reading/query,count_assets.json\nReading Rate,PUT,http://localhost:8080/storage/reading/query,reading_property.json\nupdate - Update expression,PUT,http://localhost:8080/storage/table/test2,update_expression.json\nRead back update,PUT,http://localhost:8080/storage/table/test2/query,read_id_1xx.json\nDistinct,PUT,http://localhost:8080/storage/table/test2/query,where_distinct.json\nUpdate JSON,PUT,http://localhost:8080/storage/table/test,update_json.json\nRead back update,PUT,http://localhost:8080/storage/table/test/query,sort.json\nAdd JSON,POST,http://localhost:8080/storage/table/test,insert2.json\nUpdate Complex 
JSON,PUT,http://localhost:8080/storage/table/test,update_json2.json\nRead back update,GET,http://localhost:8080/storage/table/test?id=4,\nAdd now,POST,http://localhost:8080/storage/table/test2,addnew.json\nNewer,PUT,http://localhost:8080/storage/table/test2/query,newer.json\nOlder,PUT,http://localhost:8080/storage/table/test2/query,older.json\nNewer Bad,PUT,http://localhost:8080/storage/table/test2/query,newerBad.json\nLike,PUT,http://localhost:8080/storage/table/test2/query,where_like.json\nGroup Time,PUT,http://localhost:8080/storage/reading/query,group_time.json\nJira FOGL-690,POST,http://localhost:8080/storage/table/configuration,fogl690-ok.json\nSet-FOGL-983,PUT,http://localhost:8080/storage/table/configuration,FOGL-983.json\nAdd bad Readings,POST,http://localhost:8080/storage/reading,badreadings.json\nQuery Readings Timebucket Bad,PUT,http://localhost:8080/storage/reading/query,query_readings_timebucket_bad.json\nReading Rate Array,PUT,http://localhost:8080/storage/reading/query,reading_property_array.json\nCommon Read limit max_int,PUT,http://localhost:8080/storage/table/test/query,limit_max_int.json\nCommon Read skip max_int,PUT,http://localhost:8080/storage/table/test/query,skip_max_int.json\nTimezone,PUT,http://localhost:8080/storage/table/test2/query,timezone.json\nBad Timezone,PUT,http://localhost:8080/storage/table/test2/query,timezone_bad.json\nGet-FOGL-983,PUT,http://localhost:8080/storage/table/configuration/query,get-FOGL-983.json\nJira FOGL-690 cleanup,DELETE,http://localhost:8080/storage/table/configuration,delete.json\nUpdate now,PUT,http://localhost:8080/storage/table/test2,updatenow.json\nGet Reading series group by minutes,PUT,http://localhost:8080/storage/reading/query,series_group_by_minutes.json\nGet Reading series (seconds),PUT,http://localhost:8080/storage/reading/query,series_seconds.json\nGet Reading series summary (seconds),PUT,http://localhost:8080/storage/reading/query,series_summary_seconds.json\nGet Reading series group by 
hours,PUT,http://localhost:8080/storage/reading/query,series_group_by_hours.json\nAdd Readings now,POST,http://localhost:8080/storage/reading,add_readings_now.json\nCommon table IN operator,PUT,http://localhost:8080/storage/table/test2/query,where_in.json\nCommon table NOT IN operator,PUT,http://localhost:8080/storage/table/test2/query,where_not_in.json\nCommon table IN operator bad values,PUT,http://localhost:8080/storage/table/test2/query,where_in_bad_values.json\nQuery Readings IN operator,PUT,http://localhost:8080/storage/reading/query,query_readings_in.json\nQuery Readings NOT IN operator,PUT,http://localhost:8080/storage/reading/query,query_readings_not_in.json\nQuery Readings IN operator bad values,PUT,http://localhost:8080/storage/reading/query,query_readings_in_bad_values.json\nmicroseconds - Purge Readings,PUT,http://localhost:8080/storage/reading/purge?age=1&sent=0&flags=purge,\nmicroseconds - Add Readings,POST,http://localhost:8080/storage/reading,msec_add_readings_user_ts.json\nmicroseconds - Query Readings,PUT,http://localhost:8080/storage/reading/query,msec_query_readings.json\nmicroseconds - Query asset NO alias,PUT,http://localhost:8080/storage/reading/query,msec_query_asset_noalias.json\nmicroseconds - Query asset alias,PUT,http://localhost:8080/storage/reading/query,msec_query_asset_alias.json\nmicroseconds - Query asset aggregate min,PUT,http://localhost:8080/storage/reading/query,msec_query_asset_aggmin.json\nmicroseconds - Query asset aggregate min array,PUT,http://localhost:8080/storage/reading/query,msec_query_asset_aggminarray.json\nupdate - Update multi rows,PUT,http://localhost:8080/storage/table/test2,update_multi_rows.json\ninsert - Insert 1 row,POST,http://localhost:8080/storage/table/test2,insert_1row.json\ninsert - Insert multi rows,POST,http://localhost:8080/storage/table/test2,insert_multi_rows.json\nAdd table snapshot,POST,http://localhost:8080/storage/table/test2/snapshot,add_snapshot.json\nLoad table 
snapshot,PUT,http://localhost:8080/storage/table/test2/snapshot/99,\nDelete table snapshot,DELETE,http://localhost:8080/storage/table/test2/snapshot/99,\nJira FOGL-690,POST,http://localhost:8080/storage/table/configuration,fogl690-error.json\ntimezone - all tables - Delete,DELETE,http://localhost:8080/storage/table/test2,\ntimezone - readings   - Delete,PUT,http://localhost:8080/storage/reading/purge?age=1&sent=0&flags=purge,\ntimezone - all tables - insert,POST,http://localhost:8080/storage/table/test2,tz_all_insert.json\ntimezone - all tables - read 1,GET,http://localhost:8080/storage/table/test2?id=2000,\ntimezone - all tables - read 2,PUT,http://localhost:8080/storage/table/test2/query,tz_all_read_2.json\ntimezone - all tables - read 3,PUT,http://localhost:8080/storage/table/test2/query,tz_all_read_3.json\ntimezone - readings   - insert,POST,http://localhost:8080/storage/reading,tz_readings_insert.json\ntimezone - readings   - read 1,GET,http://localhost:8080/storage/reading?id=1&count=2000,,checkstate\ntimezone - readings   - read 2,PUT,http://localhost:8080/storage/reading/query,tz_readings_read_2.json\ntimezone - readings   - read 3,PUT,http://localhost:8080/storage/reading/query,tz_readings_read_3.json\ntimezone - readings   - read 4,PUT,http://localhost:8080/storage/reading/query,tz_readings_read_4.json\nAdd more Readings,POST,http://localhost:8080/storage/reading,readings_timebucket.json\nQuery Readings Timebucket,PUT,http://localhost:8080/storage/reading/query,query_timebucket_datapoints.json\nShutdown,POST,http://localhost:1081/fledge/service/shutdown,,checkstate\n"
  },
  {
    "path": "tests/unit/python/.coveragerc",
    "content": "[run]\nomit =\n    # Ignore files\n    */__init__.py\n    */__template__.py\n    */setup.py\n    # Omit directory\n    */python/fledge/plugins/common/*\n    */python/fledge/plugins/filter/*\n    */python/fledge/plugins/north/*\n    */python/fledge/plugins/south/*\n    */python/fledge/plugins/notificationDelivery/*\n    */python/fledge/plugins/notificationRule/*\n    tests/*\n"
  },
  {
    "path": "tests/unit/python/.pytest.ini",
    "content": "[pytest]\nminversion = 7.0.1\nnorecursedirs=tests/unit/python/fledge/plugins/north tests/unit/python/fledge/services/south\n"
  },
  {
    "path": "tests/unit/python/README.rst",
    "content": "******************\nFledge Unit Tests\n******************\n\nUnit tests are the first category of test in Fledge. These tests ensure that every single function of the code under\ntest generates the desired output.\n\nFledge unit tests make heavy use of test doubles to replace production objects. A typical example is a code fragment that\nrequires a database connection. Instead of creating a real database connection, we create a mock object that can be used\nin its place. This keeps the unit tests independent of external systems and also helps to\nminimise their execution time. For example:\n::\n    def mock_request(data, loop):\n        payload = StreamReader(\"http\", loop=loop, limit=1024)\n        payload.feed_data(data.encode())\n        payload.feed_eof()\n\n        protocol = mock.Mock()\n        app = mock.Mock()\n        headers = CIMultiDict([('CONTENT-TYPE', 'application/json')])\n        req = make_mocked_request('POST', '/sensor-reading', headers=headers,\n                                  protocol=protocol, payload=payload, app=app)\n        return req\n\nThis code creates a mock request and can be used as follows:\n::\n    async def test_post_sensor_reading_ok(self, event_loop):\n        # event_loop is a fixture from pytest-asyncio\n        data = \"\"\"{\n            \"timestamp\": \"2017-01-02T01:02:03.23232Z-05:00\",\n            \"asset\": \"sensor1\",\n            \"key\": \"80a43623-ebe5-40d6-8d80-3f892da9b3b4\",\n            \"readings\": {\n                \"velocity\": \"500\",\n                \"temperature\": {\n                    \"value\": \"32\",\n                    \"unit\": \"kelvin\"\n                }\n            }\n        }\"\"\"\n        with patch.object(Ingest, 'add_readings', return_value=asyncio.ensure_future(asyncio.sleep(0.1))):\n            with patch.object(Ingest, 'is_available', return_value=True):\n                request = mock_request(data, event_loop)\n    
             r = await HttpSouthIngest.render_post(request)\n                retval = json.loads(r.body.decode())\n                # Assert the POST request response\n                assert 200 == retval['status']\n                assert 'success' == retval['result']\n\n\nNote the use of ``patch.object`` context managers: we patch the ``add_readings`` and ``is_available`` methods of the ``Ingest`` class,\nand we also use the mock object created above to build a POST request to the ``/sensor-reading`` endpoint.\n"
  },
  {
    "path": "tests/unit/python/__template__.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Example of docstring of test purpose\"\"\"\n\n# Package imports: utilities that will be used for running this module, e.g.:\nimport pytest\nfrom unittest import mock\nfrom unittest.mock import patch\n\n# Fledge imports\n# from fledge.common.storage_client.payload_builder import PayloadBuilder\n\n\n__author__ = \"${FULL_NAME}\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass UnitTestTemplateClass:\n    \"\"\"\n    Example of docstring of a test class. This class organises the unit tests of test_module\n    \"\"\"\n\n    @pytest.fixture(scope=\"\", params=\"\", autouse=False, ids=None, name=None)\n    def _module_fixture(self):\n        \"\"\"Test fixture that is specific to this class. It can be used by any test definition\"\"\"\n        pass\n\n    @pytest.mark.parametrize(\"input, expected\", [\n        (\"input_data1\", \"expected_result_1\"),\n        (\"input_data2\", \"expected_result_2\")\n    ])\n    def test_some_unit(self, _module_fixture, input, expected):\n        \"\"\"Purpose of the test. This test is called twice with different test inputs and expected values.\n        \"\"\"\n        # assertions to verify that the actual output after running a code block is equal to the expected output\n        # Use test doubles (like mocks and patch) to remove dependencies on the external services/code referred to in your function under test\n        mock_dependency = mock.MagicMock()\n        with patch.object(mock_dependency, 'some_method', return_value='bla'):\n            actual = None\n            # actual = code_under_test(input)\n            assert expected == actual\n\n    def test_other_unit_component(self, _module_fixture):\n        \"\"\"Purpose of the test. This test is called once.\n        \"\"\"\n        # assertions to verify that the actual 
output of a component is equal to the expected output\n        assert \"expected\" == \"actual\"\n"
  },
  {
    "path": "tests/unit/python/fledge/common/configuration_manager_callback.py",
    "content": "import asyncio\nasync def run(category_name):\n    return\n"
  },
  {
    "path": "tests/unit/python/fledge/common/configuration_manager_callback_nonasync.py",
    "content": "import asyncio\ndef run(category_name):\n    return\n"
  },
  {
    "path": "tests/unit/python/fledge/common/configuration_manager_callback_norun.py",
    "content": "import asyncio\n"
  },
  {
    "path": "tests/unit/python/fledge/common/microservice_management_client/test_microservice_management_client.py",
    "content": "# -*- coding: utf-8 -*-\n\nfrom unittest.mock import MagicMock\nfrom unittest.mock import patch\nfrom http.client import HTTPConnection, HTTPResponse\nimport json\nimport pytest\n\nfrom fledge.common.microservice_management_client import exceptions as client_exceptions\nfrom fledge.common.microservice_management_client.microservice_management_client import MicroserviceManagementClient, _logger\n\n__author__ = \"Ashwin Gopalakrishnan\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestMicroserviceManagementClient:\n    def test_constructor(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        assert hasattr(ms_mgt_client, '_management_client_conn')\n\n    def test_register_service_good_id(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        undecoded_data_mock.decode.return_value = json.dumps({'id': 'bla'})\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                ret_value = ms_mgt_client.register_service({'keys': 'vals'})\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(\n            body='{\"keys\": \"vals\"}', method='POST', url='/fledge/service')\n        assert {'id': 'bla'} == ret_value\n\n    def 
test_register_service_no_id(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        undecoded_data_mock.decode.return_value = json.dumps({'notid': 'bla'})\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"exception\") as log_exc:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.register_service({})\n                    assert excinfo.type is KeyError\n                assert 1 == log_exc.call_count\n                args = log_exc.call_args\n                assert 'Could not register the microservice, From request {}'.format('{}') == args[0][1]\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(body='{}', method='POST', url='/fledge/service')\n\n    @pytest.mark.parametrize(\"status_code, host\", [(450, 'Client'), (550, 'Server')])\n    def test_register_service_status_client_err(self, status_code, host):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        response_mock.status = status_code\n        response_mock.reason = 'this is the reason'\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) 
as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.register_service({})\n                    assert excinfo.type is client_exceptions.MicroserviceManagementClientError\n                assert 1 == log_error.call_count\n                msg = '{} error code: %d, Reason: %s'.format(host)\n                log_error.assert_called_once_with(msg, status_code, 'this is the reason')\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(body='{}', method='POST', url='/fledge/service')\n\n    def test_unregister_service_good_id(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        undecoded_data_mock.decode.return_value = json.dumps({'id': 'bla'})\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                ms_mgt_client.unregister_service('someid')\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='DELETE', url='/fledge/service/someid')\n\n    def test_unregister_service_no_id(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value 
= undecoded_data_mock\n        undecoded_data_mock.decode.return_value = json.dumps({'notid': 'bla'})\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.unregister_service('someid')\n                    assert excinfo.type is KeyError\n                assert 1 == log_error.call_count\n                args = log_error.call_args\n                assert 'Could not unregister the microservice having UUID {}'.format('someid') == args[0][1]\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='DELETE', url='/fledge/service/someid')\n\n    @pytest.mark.parametrize(\"status_code, host\", [(450, 'Client'), (550, 'Server')])\n    def test_unregister_service_client_err(self, status_code, host):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        undecoded_data_mock.decode.return_value = json.dumps({'notid': 'bla'})\n        response_mock.status = status_code\n        response_mock.reason = 'this is the reason'\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        
ms_mgt_client.unregister_service('someid')\n                    assert excinfo.type is client_exceptions.MicroserviceManagementClientError\n                assert 1 == log_error.call_count\n                msg = '{} error code: %d, Reason: %s'.format(host)\n                log_error.assert_called_once_with(msg, status_code, 'this is the reason')\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='DELETE', url='/fledge/service/someid')\n\n    def test_register_interest_good_id(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        undecoded_data_mock.decode.return_value = json.dumps({'id': 'bla'})\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                ret_value = ms_mgt_client.register_interest('cat', 'msid')\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(\n            body='{\"category\": \"cat\", \"service\": \"msid\"}', method='POST', url='/fledge/interest')\n        assert {'id': 'bla'} == ret_value\n\n    def test_register_interest_no_id(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        
undecoded_data_mock.decode.return_value = json.dumps({'notid': 'bla'})\n        response_mock.status = 200\n        payload = '{\"category\": \"cat\", \"service\": \"msid\"}'\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.register_interest('cat', 'msid')\n                    assert excinfo.type is KeyError\n                assert 1 == log_error.call_count\n                args = log_error.call_args\n                assert 'Could not register interest, for request payload {}'.format(payload) == args[0][1]\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(body=payload, method='POST', url='/fledge/interest')\n\n    @pytest.mark.parametrize(\"status_code, host\", [(450, 'Client'), (550, 'Server')])\n    def test_register_interest_status_client_err(self, status_code, host):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        response_mock.status = status_code\n        response_mock.reason = 'this is the reason'\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.register_interest('cat', 'msid')\n                    assert excinfo.type is client_exceptions.MicroserviceManagementClientError\n             
   assert 1 == log_error.call_count\n                msg = '{} error code: %d, Reason: %s'.format(host)\n                log_error.assert_called_once_with(msg, status_code, 'this is the reason')\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(body='{\"category\": \"cat\", \"service\": \"msid\"}', method='POST',\n                                              url='/fledge/interest')\n\n    def test_unregister_interest_good_id(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        undecoded_data_mock.decode.return_value = json.dumps({'id': 'bla'})\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                ms_mgt_client.unregister_interest('someid')\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='DELETE', url='/fledge/interest/someid')\n\n    def test_unregister_interest_no_id(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        undecoded_data_mock.decode.return_value = json.dumps({'notid': 'bla'})\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            
with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.unregister_interest('someid')\n                    assert excinfo.type is KeyError\n                assert 1 == log_error.call_count\n                args = log_error.call_args\n                assert 'Could not unregister interest for {}'.format('someid') == args[0][1]\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='DELETE', url='/fledge/interest/someid')\n\n    @pytest.mark.parametrize(\"status_code, host\", [(450, 'Client'), (550, 'Server')])\n    def test_unregister_interest_client_err(self, status_code, host):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        undecoded_data_mock.decode.return_value = json.dumps({'notid': 'bla'})\n        response_mock.status = status_code\n        response_mock.reason = 'this is the reason'\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.unregister_interest('someid')\n                    assert excinfo.type is client_exceptions.MicroserviceManagementClientError\n                assert 1 == log_error.call_count\n                msg = '{} error code: %d, Reason: 
%s'.format(host)\n                log_error.assert_called_once_with(msg, status_code, 'this is the reason')\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='DELETE', url='/fledge/interest/someid')\n\n    @pytest.mark.parametrize(\"name, _type, url\",\n                             [('foo', None, '/fledge/service?name=foo'),\n                              (None, 'bar', '/fledge/service?type=bar'),\n                              ('foo', 'bar', '/fledge/service?name=foo&type=bar')])\n    def test_get_services_good(self, name, _type, url):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        undecoded_data_mock.decode.return_value = json.dumps(\n            {'services': 'bla'})\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                ret_value = ms_mgt_client.get_services(name, _type)\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='GET', url=url)\n        assert {'services': 'bla'} == ret_value\n\n    def test_get_services_no_services(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        
undecoded_data_mock.decode.return_value = json.dumps(\n            {'notservices': 'bla'})\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.get_services('foo', 'bar')\n                    assert excinfo.type is KeyError\n                assert 1 == log_error.call_count\n                args = log_error.call_args\n                assert 'Could not find the microservice for requested url {}'.format(\n                    '/fledge/service?name=foo&type=bar') == args[0][1]\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='GET', url='/fledge/service?name=foo&type=bar')\n\n    @pytest.mark.parametrize(\"status_code, host\", [(450, 'Client'), (550, 'Server')])\n    def test_get_services_client_err(self, status_code, host):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        undecoded_data_mock.decode.return_value = json.dumps(\n            {'services': 'bla'})\n        response_mock.status = status_code\n        response_mock.reason = 'this is the reason'\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as 
excinfo:\n                        ms_mgt_client.get_services('foo', 'bar')\n                    assert excinfo.type is client_exceptions.MicroserviceManagementClientError\n                assert 1 == log_error.call_count\n                msg = '{} error code: %d, Reason: %s'.format(host)\n                log_error.assert_called_once_with(msg, status_code, 'this is the reason')\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='GET', url='/fledge/service?name=foo&type=bar')\n\n    def test_get_configuration_category(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        test_dict = {\n            'ping_timeout': {\n                'type': 'integer',\n                'description': 'Timeout for a response from any given microservice. (must be greater than 0)',\n                'value': '1',\n                'default': '1'},\n            'sleep_interval': {\n                'type': 'integer',\n                'description': 'The time (in seconds) to sleep between health checks. 
(must be greater than 5)',\n                'value': '5', 'default': '5'\n            }\n        }\n\n        undecoded_data_mock.decode.return_value = json.dumps(test_dict)\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                ret_value = ms_mgt_client.get_configuration_category(\"SMNTR\")\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='GET', url='/fledge/service/category/SMNTR')\n        assert test_dict == ret_value\n\n    @pytest.mark.parametrize(\"status_code, host\", [(450, 'Client'), (550, 'Server')])\n    def test_get_configuration_category_exception(self, status_code, host):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        response_mock.status = status_code\n        response_mock.reason = 'this is the reason'\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.get_configuration_category(\"SMNTR\")\n                    assert excinfo.type is client_exceptions.MicroserviceManagementClientError\n                assert 1 == log_error.call_count\n                msg = '{} error code: %d, Reason: %s'.format(host)\n                log_error.assert_called_once_with(msg, status_code, 'this is the 
reason')\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='GET', url='/fledge/service/category/SMNTR')\n\n    def test_get_configuration_item(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        test_dict = {\n                'type': 'integer',\n                'description': 'Timeout for a response from any given microservice. (must be greater than 0)',\n                'value': '1',\n                'default': '1'\n        }\n\n        undecoded_data_mock.decode.return_value = json.dumps(test_dict)\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                ret_value = ms_mgt_client.get_configuration_item(\"SMNTR\", \"ping_timeout\")\n                assert test_dict == ret_value\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='GET', url='/fledge/service/category/SMNTR/ping_timeout')\n\n    @pytest.mark.parametrize(\"status_code, host\", [(450, 'Client'), (550, 'Server')])\n    def test_get_configuration_item_exception(self, status_code, host):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        
response_mock.status = status_code\n        response_mock.reason = 'this is the reason'\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.get_configuration_item(\"SMNTR\", \"ping_timeout\")\n                    assert excinfo.type is client_exceptions.MicroserviceManagementClientError\n                assert 1 == log_error.call_count\n                msg = '{} error code: %d, Reason: %s'.format(host)\n                log_error.assert_called_once_with(msg, status_code, 'this is the reason')\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='GET', url='/fledge/service/category/SMNTR/ping_timeout')\n\n    def test_create_configuration_category(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        test_dict = json.dumps({\n            'key': 'TEST',\n            'description': 'description',\n            'value': {\n                'ping_timeout': {\n                    'type': 'integer',\n                    'description': 'Timeout for a response from any given microservice. (must be greater than 0)',\n                    'value': '1',\n                    'default': '1'},\n                'sleep_interval': {\n                    'type': 'integer',\n                    'description': 'The time (in seconds) to sleep between health checks. 
(must be greater than 5)',\n                    'value': '5',\n                    'default': '5'\n                }\n            }\n        })\n\n        undecoded_data_mock.decode.return_value = test_dict\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                ret_value = ms_mgt_client.create_configuration_category(test_dict)\n                assert json.loads(test_dict) == ret_value\n            response_patch.assert_called_once_with()\n        args, kwargs = request_patch.call_args_list[0]\n        assert 'POST' == kwargs['method']\n        assert '/fledge/service/category' == kwargs['url']\n        assert json.loads(test_dict) == json.loads(kwargs['body'])\n\n    @pytest.mark.parametrize(\"status_code, host\", [(450, 'Client'), (550, 'Server')])\n    def test_create_configuration_category_exception(self, status_code, host):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        test_dict = json.dumps({\n            'key': 'TEST',\n            'description': 'description',\n            'value': {\n                'ping_timeout': {\n                    'type': 'integer',\n                    'description': 'Timeout for a response from any given microservice. (must be greater than 0)',\n                    'value': '1',\n                    'default': '1'},\n                'sleep_interval': {\n                    'type': 'integer',\n                    'description': 'The time (in seconds) to sleep between health checks. 
(must be greater than 5)',\n                    'value': '5',\n                    'default': '5'\n                }\n            }\n        })\n\n        undecoded_data_mock.decode.return_value = test_dict\n        response_mock.read.return_value = undecoded_data_mock\n        response_mock.status = status_code\n        response_mock.reason = 'this is the reason'\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.create_configuration_category(test_dict)\n                    assert excinfo.type is client_exceptions.MicroserviceManagementClientError\n                assert 1 == log_error.call_count\n                msg = '{} error code: %d, Reason: %s'.format(host)\n                log_error.assert_called_once_with(msg, status_code, 'this is the reason')\n            response_patch.assert_called_once_with()\n        args, kwargs = request_patch.call_args_list[0]\n        assert 'POST' == kwargs['method']\n        assert '/fledge/service/category' == kwargs['url']\n        assert json.loads(test_dict) == json.loads(kwargs['body'])\n\n    def test_create_configuration_category_keep_original(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        test_dict = json.dumps({\n            'key': 'TEST',\n            'description': 'description',\n            'value': {\n                'ping_timeout': {\n                    'type': 'integer',\n    
                'description': 'Timeout for a response from any given microservice. (must be greater than 0)',\n                    'value': '1',\n                    'default': '1'},\n                'sleep_interval': {\n                    'type': 'integer',\n                    'description': 'The time (in seconds) to sleep between health checks. (must be greater than 5)',\n                    'value': '5',\n                    'default': '5'\n                }\n            },\n            'keep_original_items': True\n        })\n\n        expected_test_dict = json.dumps({\n            'key': 'TEST',\n            'description': 'description',\n            'value': {\n                'ping_timeout': {\n                    'type': 'integer',\n                    'description': 'Timeout for a response from any given microservice. (must be greater than 0)',\n                    'value': '1',\n                    'default': '1'},\n                'sleep_interval': {\n                    'type': 'integer',\n                    'description': 'The time (in seconds) to sleep between health checks. 
(must be greater than 5)',\n                    'value': '5',\n                    'default': '5'\n                }\n            }\n        })\n        undecoded_data_mock.decode.return_value = test_dict\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                ret_value = ms_mgt_client.create_configuration_category(test_dict)\n                assert json.loads(test_dict) == ret_value\n            response_patch.assert_called_once_with()\n        args, kwargs = request_patch.call_args_list[0]\n        assert 'POST' == kwargs['method']\n        assert '/fledge/service/category?keep_original_items=true' == kwargs['url']\n        assert json.loads(expected_test_dict) == json.loads(kwargs['body'])\n\n    def test_update_configuration_item(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        test_dict = {'value': '5'}\n\n        undecoded_data_mock.decode.return_value = json.dumps(test_dict)\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                ret_value = ms_mgt_client.update_configuration_item(\"TEST\", \"blah\", test_dict)\n                assert test_dict == ret_value\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='PUT', url='/fledge/service/category/TEST/blah', body=test_dict)\n\n    
@pytest.mark.parametrize(\"status_code, host\", [(450, 'Client'), (550, 'Server')])\n    def test_update_configuration_item_exception(self, status_code, host):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        test_dict = {'value': '5'}\n\n        response_mock.status = status_code\n        response_mock.reason = 'this is the reason'\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.update_configuration_item(\"TEST\", \"blah\", test_dict)\n                    assert excinfo.type is client_exceptions.MicroserviceManagementClientError\n                assert 1 == log_error.call_count\n                msg = '{} error code: %d, Reason: %s'.format(host)\n                log_error.assert_called_once_with(msg, status_code, 'this is the reason')\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(body={'value': '5'}, method='PUT',\n                                              url='/fledge/service/category/TEST/blah')\n\n    def test_delete_configuration_item(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        
response_mock.read.return_value = undecoded_data_mock\n        test_dict = {'value': ''}\n\n        undecoded_data_mock.decode.return_value = json.dumps(test_dict)\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                ret_value = ms_mgt_client.delete_configuration_item(\"TEST\", \"blah\")\n                assert test_dict == ret_value\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='DELETE', url='/fledge/service/category/TEST/blah/value')\n\n    @pytest.mark.parametrize(\"status_code, host\", [(450, 'Client'), (550, 'Server')])\n    def test_delete_configuration_item_exception(self, status_code, host):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        response_mock.status = status_code\n        response_mock.reason = 'this is the reason'\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.delete_configuration_item(\"TEST\", \"blah\")\n                    assert excinfo.type is client_exceptions.MicroserviceManagementClientError\n                assert 1 == log_error.call_count\n                msg = '{} error code: %d, Reason: %s'.format(host)\n                log_error.assert_called_once_with(msg, 
status_code, 'this is the reason')\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='DELETE', url='/fledge/service/category/TEST/blah/value')\n\n    def test_get_asset_tracker_event(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        test_dict = {\n            \"track\": [\n                {\n                    \"asset\": \"sinusoid\",\n                    \"fledge\": \"Fledge\",\n                    \"plugin\": \"sinusoid\",\n                    \"service\": \"sine\",\n                    \"timestamp\": \"2018-08-21 16:58:45.118\",\n                    \"event\": \"Ingest\"\n                }\n            ]\n        }\n\n        undecoded_data_mock.decode.return_value = json.dumps(test_dict)\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                ret_value = ms_mgt_client.get_asset_tracker_events()\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='GET', url='/fledge/track')\n        assert test_dict == ret_value\n\n    @pytest.mark.parametrize(\"status_code, host\", [(450, 'Client'), (550, 'Server')])\n    def test_get_asset_tracker_event_client_err(self, status_code, host):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = 
MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        undecoded_data_mock.decode.return_value = json.dumps(\n            {'track': []})\n        response_mock.status = status_code\n        response_mock.reason = 'this is the reason'\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.get_asset_tracker_events()\n                    assert excinfo.type is client_exceptions.MicroserviceManagementClientError\n                assert 1 == log_error.call_count\n                msg = '{} error code: %d, Reason: %s'.format(host)\n                log_error.assert_called_once_with(msg, status_code, 'this is the reason')\n            response_patch.assert_called_once_with()\n        request_patch.assert_called_once_with(method='GET', url='/fledge/track')\n\n    def test_create_asset_tracker_event(self):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        response_mock.read.return_value = undecoded_data_mock\n        test_dict = json.dumps({\n            'asset': 'AirIntake',\n            'event': 'Ingest',\n            'service': 'PT100_In1',\n            'plugin': 'PT100'\n        })\n\n        undecoded_data_mock.decode.return_value = test_dict\n        response_mock.status = 200\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', 
return_value=response_mock) as response_patch:\n                ret_value = ms_mgt_client.create_asset_tracker_event(test_dict)\n                assert json.loads(test_dict) == ret_value\n            response_patch.assert_called_once_with()\n        args, kwargs = request_patch.call_args_list[0]\n        assert 'POST' == kwargs['method']\n        assert '/fledge/track' == kwargs['url']\n        assert test_dict == json.loads(kwargs['body'])\n\n    @pytest.mark.parametrize(\"status_code, host\", [(450, 'Client'), (550, 'Server')])\n    def test_create_asset_tracker_event_exception(self, status_code, host):\n        microservice_management_host = 'host1'\n        microservice_management_port = 1\n        ms_mgt_client = MicroserviceManagementClient(\n            microservice_management_host, microservice_management_port)\n        response_mock = MagicMock(type=HTTPResponse)\n        undecoded_data_mock = MagicMock()\n        test_dict = json.dumps({\n            'asset': 'AirIntake',\n            'event': 'Ingest',\n            'service': 'PT100_In1',\n            'plugin': 'PT100'\n        })\n        undecoded_data_mock.decode.return_value = test_dict\n        response_mock.read.return_value = undecoded_data_mock\n        response_mock.status = status_code\n        response_mock.reason = 'this is the reason'\n        with patch.object(HTTPConnection, 'request') as request_patch:\n            with patch.object(HTTPConnection, 'getresponse', return_value=response_mock) as response_patch:\n                with patch.object(_logger, \"error\") as log_error:\n                    with pytest.raises(Exception) as excinfo:\n                        ms_mgt_client.create_asset_tracker_event(test_dict)\n                    assert excinfo.type is client_exceptions.MicroserviceManagementClientError\n                assert 1 == log_error.call_count\n                msg = '{} error code: %d, Reason: %s'.format(host)\n                log_error.assert_called_once_with(msg, 
status_code, 'this is the reason')\n            response_patch.assert_called_once_with()\n        args, kwargs = request_patch.call_args_list[0]\n        assert 'POST' == kwargs['method']\n        assert '/fledge/track' == kwargs['url']\n        assert test_dict == json.loads(kwargs['body'])\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_aggregate1.json",
    "content": "{\n  \"aggregate\": {\n    \"column\": \"values\",\n    \"operation\": \"min\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_aggregate1_alias.json",
    "content": "{\n  \"aggregate\": {\n    \"column\": \"values\",\n    \"operation\": \"min\",\n    \"alias\": \"min_values\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_aggregate2.json",
    "content": "{\n  \"aggregate\": {\n    \"column\": \"values\",\n    \"operation\": \"max\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_aggregate2_alias.json",
    "content": "{\n  \"aggregate\": {\n    \"column\": \"values\",\n    \"operation\": \"max\",\n    \"alias\": \"max_values\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_aggregate3.json",
    "content": "{\n  \"aggregate\": {\n    \"column\": \"values\",\n    \"operation\": \"avg\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_aggregate4.json",
    "content": "{\n  \"aggregate\": {\n    \"column\": \"values\",\n    \"operation\": \"sum\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_aggregate5.json",
    "content": "{\n  \"aggregate\": {\n    \"column\": \"values\",\n    \"operation\": \"count\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_aggregate6.json",
    "content": "{\n  \"aggregate\": [\n    {\n      \"operation\": \"min\",\n      \"column\": \"values\"\n    },\n    {\n      \"operation\": \"max\",\n      \"column\": \"values\"\n    },\n    {\n      \"operation\": \"avg\",\n      \"column\": \"values\"\n    }\n  ]\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_aggregate6_alias.json",
    "content": "{\n  \"aggregate\": [\n    {\n      \"operation\": \"min\",\n      \"column\": \"values\",\n      \"alias\": \"min_values\"\n    },\n    {\n      \"operation\": \"max\",\n      \"column\": \"values\",\n      \"alias\": \"max_values\"\n    },\n    {\n      \"operation\": \"avg\",\n      \"column\": \"values\",\n      \"alias\": \"avg_values\"\n    }\n  ]\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_aggregate7_alias.json",
    "content": "{\n  \"aggregate\": [\n    {\n      \"operation\": \"min\",\n      \"json\"      : {\n                        \"column\"     : \"values\",\n                        \"properties\" : \"rate\"\n                    },\n      \"alias\": \"Minimum\"\n    },\n    {\n      \"operation\": \"max\",\n      \"json\"      : {\n                        \"column\"     : \"values\",\n                        \"properties\" : \"rate\"\n                    },\n      \"alias\": \"Maximum\"\n    },\n    {\n      \"operation\": \"avg\",\n      \"json\"      : {\n                        \"column\"     : \"values\",\n                        \"properties\" : \"rate\"\n                    },\n      \"alias\": \"Average\"\n    }\n  ]\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_aggregate_all.json",
    "content": "{\n  \"aggregate\": {\n    \"operation\": \"all\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_aggregate_where.json",
    "content": "{\n  \"where\": {\n\t\t\"column\": \"ts\",\n\t\t\"condition\": \"newer\",\n\t\t\"value\": 60\n\t},\n\t\"aggregate\": {\n\t\t\"operation\": \"count\",\n\t\t\"column\": \"*\"\n\t}\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_and_where1.json",
    "content": "{\n  \"where\": {\n  \"and\": {\n    \"column\": \"id\",\n    \"condition\": \">\",\n    \"value\": 3\n  },\n  \"column\": \"name\",\n  \"condition\": \"=\",\n  \"value\": \"test\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_and_where2.json",
    "content": "{\n  \"where\": {\n  \"and\": {\n    \"column\": \"id\",\n    \"condition\": \">\",\n    \"value\": 3,\n    \"and\": {\n      \"column\": \"value\",\n      \"condition\": \"!=\",\n      \"value\": 0\n    }\n  },\n  \"column\": \"name\",\n  \"condition\": \"=\",\n  \"value\": \"test\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_and_where_isnull.json",
    "content": "{\n  \"where\": {\n  \"and\": {\n    \"column\": \"deprecated_ts\",\n    \"condition\": \"isnull\"\n  },\n  \"column\": \"event\",\n  \"condition\": \"=\",\n  \"value\": \"Ingest\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_and_where_notnull.json",
    "content": "{\n  \"where\": {\n  \"and\": {\n    \"column\": \"deprecated_ts\",\n    \"condition\": \"notnull\"\n  },\n  \"column\": \"plugin\",\n  \"condition\": \"=\",\n  \"value\": \"sinusoid\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_complex_select1.json",
    "content": "{\n  \"aggregate\": {\n    \"column\": \"name\",\n    \"operation\": \"count\"\n  },\n  \"return\": [\"id\", \"name\"],\n  \"group\": \"name, id\",\n  \"limit\": 5,\n  \"skip\": 1,\n  \"sort\": {\n    \"column\": \"id\",\n    \"direction\": \"desc\"\n  },\n  \"where\": {\n    \"column\": \"id\",\n    \"condition\": \"=\",\n    \"value\": 1,\n    \"and\": {\n      \"column\": \"name\",\n      \"condition\": \"=\",\n      \"value\": \"test\",\n      \"or\": {\n        \"column\": \"name\",\n        \"condition\": \"=\",\n        \"value\": \"test2\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_condition_in.json",
    "content": " {\n   \"where\": {\n     \"column\": \"plugin_type\",\n     \"condition\": \"in\",\n     \"value\": [\"north\", \"south\", \"rule\", \"delivery\", \"filter\"]\n   }\n }"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_condition_not_in.json",
    "content": " {\n   \"where\": {\n     \"column\": \"plugin_type\",\n     \"condition\": \"not in\",\n     \"value\": [\"north\", \"south\"]\n   }\n }"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_conditions1.json",
    "content": " {\n   \"where\": {\n     \"column\": \"name\",\n     \"condition\": \"=\",\n     \"value\": \"test\"\n   }\n }"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_conditions2.json",
    "content": " {\n   \"where\": {\n     \"column\": \"id\",\n     \"condition\": \">\",\n     \"value\": 1\n   }\n }"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_conditions3.json",
    "content": " {\n   \"where\": {\n     \"column\": \"id\",\n     \"condition\": \"<\",\n     \"value\": 1.5\n   }\n }"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_conditions4.json",
    "content": " {\n   \"where\": {\n     \"column\": \"id\",\n     \"condition\": \">=\",\n     \"value\": 9\n   }\n }"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_conditions5.json",
    "content": " {\n   \"where\": {\n     \"column\": \"id\",\n     \"condition\": \"<=\",\n     \"value\": 99\n   }\n }"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_conditions6.json",
    "content": " {\n   \"where\": {\n     \"column\": \"id\",\n     \"condition\": \"!=\",\n     \"value\": \"False\"\n   }\n }"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_delete_where1.json",
    "content": "{\n  \"table\": \"test_tbl\",\n  \"where\": {\n    \"column\": \"name\",\n    \"condition\": \"=\",\n    \"value\": \"test\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_distinct.json",
    "content": "{\n  \"where\": {\n    \"column\": \"id\",\n    \"condition\": \"<\",\n    \"value\": 1000\n  },\n  \"modifier\": \"distinct\",\n  \"return\": [\"description\"]\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_expr1.json",
    "content": " {\n  \t\"expressions\": [{\n  \t\t\"column\": \"value\",\n  \t\t\"operator\": \"+\",\n  \t\t\"value\": 10\n  \t}],\n  \t\"where\": {\n  \t\t\"column\": \"key\",\n  \t\t\"condition\": \"=\",\n  \t\t\"value\": \"READINGS\"\n  \t}\n  }\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_expr2.json",
    "content": " {\n\t \"expressions\": [\n\t\t{\n\t\t\t\"column\": \"value1\",\n\t\t\t\"operator\": \"+\",\n\t\t\t\"value\": 10\n\t\t},\n\t\t{\n\t\t\t\"column\": \"value2\",\n\t\t\t\"operator\": \"-\",\n\t\t\t\"value\": 5\n\t\t}\n\t],\n\t \"where\": {\n\t\t \"column\": \"key\",\n\t\t \"condition\": \"=\",\n\t\t \"value\": \"READINGS\"\n\t }\n }\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_from1.json",
    "content": " {\n  \"table\": \"test\"\n }"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_from2.json",
    "content": " {\n  \"table\": \"test, test2\"\n }"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_group_by1.json",
    "content": "{\n  \"group\": \"name\"\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_group_by1_alias.json",
    "content": "{\n  \"group\": {\"alias\": \"timestamp\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\", \"column\": \"user_ts\"}\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_group_by2.json",
    "content": "{\n  \"group\": \"name,id\"\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_group_by2_alias.json",
    "content": "{\n  \"group\": {\"alias\": \"timestamp\", \"column\": \"user_ts\"}\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_insert1.json",
    "content": "{\n  \"key\": \"x\"\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_join_with_query.json",
    "content": "{\n  \"join\": {\n    \"table\": {\n      \"name\": \"attributes\",\n      \"column\": \"parent_id\"\n    },\n    \"on\": \"id\",\n    \"query\": {\n      \"return\": [\n        \"parent_id\",\n        {\n          \"column\": \"name\",\n          \"alias\": \"attribute_name\"\n        },\n        {\n          \"column\": \"value\",\n          \"alias\": \"attribute_value\"\n        }\n      ],\n      \"where\": {\n        \"column\": \"name\",\n        \"condition\": \"=\",\n        \"value\": \"MyName\"\n      }\n    }\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_join_with_query_only_table_name.json",
    "content": "{\n  \"join\": {\n    \"table\": {\n      \"name\": \"attributes\"\n    },\n    \"on\": \"id\",\n    \"query\": {\n      \"return\": [\n        \"parent_id\",\n        {\n          \"column\": \"name\",\n          \"alias\": \"attribute_name\"\n        },\n        {\n          \"column\": \"value\",\n          \"alias\": \"attribute_value\"\n        }\n      ],\n      \"where\": {\n        \"column\": \"name\",\n        \"condition\": \"=\",\n        \"value\": \"MyName\"\n      }\n    }\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_join_without_query.json",
    "content": "{\"join\": {\"table\": {\"name\": \"table1\", \"column\": \"table1_id\"}, \"on\": \"table2_id\"}}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_join_without_query_only_table_name.json",
    "content": "{\"join\": {\"table\": {\"name\": \"table1\"}, \"on\": \"table2_id\"}}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_json_properties1.json",
    "content": "{\n\t\"json_properties\" : [\n\t\t\t\t{\n\t\t\t\t\t\"column\" : \"data\",\n\t\t\t\t\t\"path\"   : [ \"url\", \"value\" ],\n\t\t\t\t\t\"value\"  : \"new value\"\n\t\t\t\t}\n\t\t\t    ]\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_json_properties2.json",
    "content": "{\n\t\"json_properties\" : [\n\t\t\t\t{\n\t\t\t\t\t\"column\" : \"data\",\n\t\t\t\t\t\"path\"   : [ \"url\", \"value\" ],\n\t\t\t\t\t\"value\"  : \"new value\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"column\" : \"data1\",\n\t\t\t\t\t\"path\"   : [ \"url1\", \"value1\" ],\n\t\t\t\t\t\"value\"  : \"new value1\"\n\t\t\t\t}\n\t]\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_limit1.json",
    "content": "{\n  \"limit\": 3\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_limit2.json",
    "content": "{\n  \"limit\": 3.5\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_limit_offset1.json",
    "content": "{\n  \"limit\": 3,\n  \"skip\": 3\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_limit_offset2.json",
    "content": "{\n  \"limit\": 3.5,\n  \"skip\": 3.5\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_modifier_set_where.json",
    "content": "{\n  \"values\": {\n    \"value\": \"token_expiration\"\n  },\n  \"where\": {\n    \"column\": \"token\",\n    \"condition\": \"=\",\n    \"value\": \"TOKEN\"\n  },\n  \"modifier\": [\"allowzero\"]\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_multiple_and_where_with_isnull.json",
    "content": "{\n  \"where\": {\n    \"and\": {\n      \"column\": \"deprecated_ts\",\n      \"condition\": \"isnull\",\n      \"and\": {\n        \"column\": \"event\",\n        \"condition\": \"=\",\n        \"value\": \"Ingest\"\n      }\n  },\n  \"column\": \"asset\",\n  \"condition\": \"=\",\n  \"value\": \"sinusoid\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_multiple_and_where_with_notnull.json",
    "content": "{\n  \"where\": {\n    \"and\": {\n      \"column\": \"deprecated_ts\",\n      \"condition\": \"notnull\",\n      \"and\": {\n        \"column\": \"event\",\n        \"condition\": \"=\",\n        \"value\": \"Ingest\"\n      }\n  },\n  \"column\": \"plugin\",\n  \"condition\": \"=\",\n  \"value\": \"sinusoid\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_multiple_or_where_with_isnull.json",
    "content": "{\n  \"where\": {\n    \"or\": {\n      \"column\": \"deprecated_ts\",\n      \"condition\": \"isnull\",\n      \"or\": {\n        \"column\": \"event\",\n        \"condition\": \"=\",\n        \"value\": \"Egress\"\n      }\n  },\n  \"column\": \"plugin\",\n  \"condition\": \"=\",\n  \"value\": \"http_north\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_multiple_or_where_with_notnull.json",
    "content": "{\n  \"where\": {\n    \"or\": {\n      \"column\": \"deprecated_ts\",\n      \"condition\": \"notnull\",\n      \"or\": {\n        \"column\": \"event\",\n        \"condition\": \"=\",\n        \"value\": \"Egress\"\n      }\n  },\n  \"column\": \"service\",\n  \"condition\": \"=\",\n  \"value\": \"HT #1\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_nested_join.json",
    "content": "{\n  \"join\": {\n    \"table\": {\n      \"name\": \"attributes\",\n      \"column\": \"parent_id\"\n    },\n    \"on\": \"id\",\n    \"query\": {\n      \"return\": [\n        \"parent_id\",\n        {\n          \"column\": \"value\",\n          \"alias\": \"my_name\"\n        }\n      ],\n      \"where\": {\n        \"column\": \"name\",\n        \"condition\": \"=\",\n        \"value\": \"MyName\"\n      },\n      \"join\": {\n        \"table\": {\n          \"name\": \"attributes\",\n          \"column\": \"parent_id\"\n        },\n        \"on\": \"id\",\n        \"query\": {\n          \"return\": [\n            \"parent_id\",\n            {\n              \"column\": \"value\",\n              \"alias\": \"colour\"\n            }\n          ],\n          \"where\": {\n            \"column\": \"name\",\n            \"condition\": \"=\",\n            \"value\": \"colour\"\n          }\n        }\n      }\n    }\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_newer_condition.json",
    "content": "{\n\t\"where\": {\n\t\t\"column\": \"ts\",\n\t\t\"condition\": \"newer\",\n\t\t\"value\": 3600\n\t}\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_offset1.json",
    "content": "{\n  \"skip\": 3\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_offset2.json",
    "content": "{\n  \"skip\": 3.5\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_older_condition.json",
    "content": "{\n\t\"where\": {\n\t\t\"column\": \"ts\",\n\t\t\"condition\": \"older\",\n\t\t\"value\": 600\n\t}\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_or_where1.json",
    "content": "{\n  \"where\": {\n  \"or\": {\n    \"column\": \"id\",\n    \"condition\": \">\",\n    \"value\": 3\n  },\n  \"column\": \"name\",\n  \"condition\": \"=\",\n  \"value\": \"test\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_or_where2.json",
    "content": "{\n  \"where\": {\n  \"or\": {\n    \"column\": \"id\",\n    \"condition\": \">\",\n    \"value\": 3,\n    \"or\": {\n      \"column\": \"value\",\n      \"condition\": \"!=\",\n      \"value\": 0\n    }\n  },\n  \"column\": \"name\",\n  \"condition\": \"=\",\n  \"value\": \"test\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_or_where_isnull.json",
    "content": "{\n  \"where\": {\n  \"or\": {\n    \"column\": \"deprecated_ts\",\n    \"condition\": \"isnull\"\n  },\n  \"column\": \"event\",\n  \"condition\": \"=\",\n  \"value\": \"Filter\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_or_where_notnull.json",
    "content": "{\n  \"where\": {\n  \"or\": {\n    \"column\": \"deprecated_ts\",\n    \"condition\": \"notnull\"\n  },\n  \"column\": \"event\",\n  \"condition\": \"=\",\n  \"value\": \"Egress\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_order_by1.json",
    "content": "{\n  \"sort\": {\n    \"column\": \"name\",\n    \"direction\": \"asc\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_order_by2.json",
    "content": "{\n  \"sort\": {\n    \"column\": \"name\",\n    \"direction\": \"desc\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_order_by3.json",
    "content": "{\n  \"sort\": [\n    {\n      \"column\": \"name\",\n      \"direction\": \"desc\"\n    },\n    {\n      \"column\": \"id\",\n      \"direction\": \"asc\"\n    },\n    {\n      \"column\": \"ts\",\n      \"direction\": \"asc\"\n    }\n  ]\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_select1.json",
    "content": " {\n  \"return\": [\"name\"]\n }\n\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_select1_alias.json",
    "content": "{\n  \"return\": [\n              {\"alias\": \"my_name\", \"column\": \"name\"}\n            ]\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_select2.json",
    "content": " {\n  \"return\": [\"name\", \"id\"]\n }\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_select2_alias.json",
    "content": "{\n  \"return\": [\n              {\"alias\": \"my_name\", \"column\": \"name\"},\n              {\"alias\": \"my_id\", \"column\": \"id\"}\n            ]\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_select3_alias.json",
    "content": "{\n  \"return\": [\n              {\"alias\": \"my_name\", \"column\": \"name\"},\n              {\n                  \"json\" : {\n                              \"column\"     : \"id\",\n                              \"properties\" : \"reason\"\n                           },\n                  \"alias\" : \"my_id\"\n              }\n            ]\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_select_alias_with_timezone.json",
    "content": "{\n\t\"return\": [{\n\t\t\"column\": \"user_ts\",\n\t\t\"alias\": \"timestamp\",\n\t\t\"timezone\": \"utc\"\n\t}]\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_set1.json",
    "content": "{\n  \"values\": {\n    \"value\": \"test_update\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_timebucket1.json",
    "content": "{\n  \"timebucket\" : {\n                    \"timestamp\" : \"user_ts\",\n                    \"size\"      : \"5\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_timebucket2.json",
    "content": "{\n  \"timebucket\" : {\n                    \"timestamp\" : \"user_ts\",\n                    \"size\"      : \"5\",\n                    \"format\"    : \"DD-MM-YYYYY HH24:MI:SS\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_timebucket3.json",
    "content": "{\n  \"timebucket\" : {\n                    \"timestamp\" : \"user_ts\",\n                    \"size\"      : \"5\",\n                    \"format\"    : \"DD-MM-YYYYY HH24:MI:SS\",\n                    \"alias\"     : \"bucket\"\n  }\n}\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_timebucket4.json",
    "content": "{\n  \"timebucket\" : {\n                    \"timestamp\" : \"user_ts\",\n                    \"size\"      : \"1\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_update_set_where1.json",
    "content": "{\n  \"table\": \"test_tbl\",\n  \"values\": {\n    \"value\": \"test_update\"\n  },\n  \"where\": {\n    \"column\": \"name\",\n    \"condition\": \"=\",\n    \"value\": \"test\"\n  }\n}"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_where_condition_isnull.json",
    "content": " {\n   \"where\": {\n     \"column\": \"deprecated_ts\",\n     \"condition\": \"isnull\"\n   }\n }"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/data/payload_where_condition_notnull.json",
    "content": " {\n   \"where\": {\n     \"column\": \"deprecated_ts\",\n     \"condition\": \"notnull\"\n   }\n }"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/test_payload_builder.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nimport os\nimport pytest\nimport py\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\ndef _payload(test_data_file=None):\n    _dir = os.path.dirname(os.path.realpath(__file__))\n    file_path = py.path.local(_dir).join('/').join(test_data_file)\n\n    with open(str(file_path)) as data_file:\n        json_data = json.load(data_file)\n    return json_data\n\n\nclass TestPayloadBuilderRead:\n    \"\"\"\n    This class tests all SELECT (Read) data specific payload methods of payload builder\n    \"\"\"\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (\"name\", _payload(\"data/payload_select1.json\")),\n        ((\"name\", \"id\"), _payload(\"data/payload_select2.json\"))\n    ])\n    def test_select_payload(self, test_input, expected):\n        res = PayloadBuilder().SELECT(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (\"name\", _payload(\"data/payload_select1_alias.json\"))\n    ])\n    def test_select_payload_with_alias1(self, test_input, expected):\n        res = PayloadBuilder().SELECT(test_input).ALIAS('return', ('name', 'my_name')).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        ((\"reading\", \"user_ts\"),\n         {\"return\": [\"reading\", {\"format\": \"YYYY-MM-DD HH24:MI:SS.MS\", \"column\": \"user_ts\",\n                                 \"alias\": \"timestamp\", \"timezone\": \"utc\"}]})\n    ])\n    def test_select_payload_with_alias_and_format(self, test_input, expected):\n        res = PayloadBuilder().SELECT(test_input).ALIAS('return', ('user_ts', 'timestamp')).\\\n    
        FORMAT('return', ('user_ts', \"YYYY-MM-DD HH24:MI:SS.MS\")).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        ((\"name\", \"id\"), _payload(\"data/payload_select2_alias.json\"))\n    ])\n    def test_select_payload_with_alias2(self, test_input, expected):\n        res = PayloadBuilder().SELECT(test_input).ALIAS('return', ('name', 'my_name'), ('id', 'my_id')).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        ((\"name\", [\"id\", \"reason\"]), _payload(\"data/payload_select3_alias.json\"))\n    ])\n    def test_select_payload_with_alias3(self, test_input, expected):\n        res = PayloadBuilder().SELECT(test_input).ALIAS('return', ('name', 'my_name'), ('id', 'my_id')).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (\"user_ts\", _payload(\"data/payload_select_alias_with_timezone.json\"))\n    ])\n    def test_select_payload_with_alias_with_timezone(self, test_input, expected):\n        res = PayloadBuilder().SELECT(test_input).ALIAS('return', ('user_ts', 'timestamp'), ('timezone', 'utc')).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (\"test\", _payload(\"data/payload_from1.json\")),\n        (\"test, test2\", _payload(\"data/payload_from2.json\"))\n    ])\n    def test_from_payload(self, test_input, expected):\n        res = PayloadBuilder().FROM(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        ([\"name\", \"=\", \"test\"], _payload(\"data/payload_conditions1.json\")),\n        ([\"id\", \">\", 1], _payload(\"data/payload_conditions2.json\")),\n        ([\"id\", \"<\", 1.5], _payload(\"data/payload_conditions3.json\")),\n        ([\"id\", \">=\", 9], 
_payload(\"data/payload_conditions4.json\")),\n        ([\"id\", \"<=\", 99], _payload(\"data/payload_conditions5.json\")),\n        ([\"id\", \"!=\", \"False\"], _payload(\"data/payload_conditions6.json\")),\n        ([\"ts\", \"newer\", 3600], _payload(\"data/payload_newer_condition.json\")),\n        ([\"ts\", \"older\", 600], _payload(\"data/payload_older_condition.json\")),\n        ([\"plugin_type\", \"in\", ['north', 'south', 'rule', 'delivery', 'filter']], _payload(\"data/payload_condition_in.json\")),\n        ([\"plugin_type\", \"not in\", ['north', 'south']], _payload(\"data/payload_condition_not_in.json\"))\n    ])\n    def test_conditions_payload(self, test_input, expected):\n        res = PayloadBuilder().WHERE(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        ([\"name\", \"=\", \"test\"], _payload(\"data/payload_conditions1.json\")),\n        ([\"deprecated_ts\", \"isnull\"], _payload(\"data/payload_where_condition_isnull.json\")),\n        ([\"deprecated_ts\", \"notnull\"], _payload(\"data/payload_where_condition_notnull.json\"))\n    ])\n    def test_where_payload(self, test_input, expected):\n        res = PayloadBuilder().WHERE(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input_1, test_input_2, expected\", [\n        ([\"name\", \"=\", \"test\"], [\"id\", \">\", 3], _payload(\"data/payload_and_where1.json\")),\n        ([\"event\", \"=\", \"Ingest\"], [\"deprecated_ts\", \"isnull\"], _payload(\"data/payload_and_where_isnull.json\")),\n        ([\"plugin\", \"=\", \"sinusoid\"], [\"deprecated_ts\", \"notnull\"], _payload(\"data/payload_and_where_notnull.json\"))\n    ])\n    def test_and_where_payload(self, test_input_1, test_input_2, expected):\n        res = PayloadBuilder().WHERE(test_input_1).AND_WHERE(test_input_2).payload()\n        assert expected == json.loads(res)\n\n    
@pytest.mark.parametrize(\"test_input_1, test_input_2, test_input_3, expected\", [\n        ([\"name\", \"=\", \"test\"], [\"id\", \">\", 3], [\"value\", \"!=\", 0], _payload(\"data/payload_and_where2.json\")),\n        ([\"plugin\", \"=\", \"sinusoid\"], [\"deprecated_ts\", \"notnull\"], [\"event\", \"=\", \"Ingest\"],\n         _payload(\"data/payload_multiple_and_where_with_notnull.json\")),\n        ([\"asset\", \"=\", \"sinusoid\"], [\"deprecated_ts\", \"isnull\"], [\"event\", \"=\", \"Ingest\"],\n         _payload(\"data/payload_multiple_and_where_with_isnull.json\"))\n    ])\n    def test_multiple_and_where_payload(self, test_input_1, test_input_2, test_input_3, expected):\n        res = PayloadBuilder().WHERE(test_input_1).AND_WHERE(test_input_2).AND_WHERE(test_input_3).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input_1, test_input_2, expected\", [\n        ([\"name\", \"=\", \"test\"], [\"id\", \">\", 3], _payload(\"data/payload_or_where1.json\")),\n        ([\"event\", \"=\", \"Filter\"], [\"deprecated_ts\", \"isnull\"], _payload(\"data/payload_or_where_isnull.json\")),\n        ([\"event\", \"=\", \"Egress\"], [\"deprecated_ts\", \"notnull\"], _payload(\"data/payload_or_where_notnull.json\"))\n    ])\n    def test_or_where_payload(self, test_input_1, test_input_2, expected):\n        res = PayloadBuilder().WHERE(test_input_1).OR_WHERE(test_input_2).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input_1, test_input_2, test_input_3, expected\", [\n        ([\"name\", \"=\", \"test\"], [\"id\", \">\", 3], [\"value\", \"!=\", 0], _payload(\"data/payload_or_where2.json\")),\n        ([\"plugin\", \"=\", \"http_north\"], [\"deprecated_ts\", \"isnull\"], [\"event\", \"=\", \"Egress\"],\n         _payload(\"data/payload_multiple_or_where_with_isnull.json\")),\n        ([\"service\", \"=\", \"HT #1\"], [\"deprecated_ts\", \"notnull\"], [\"event\", \"=\", \"Egress\"],\n 
        _payload(\"data/payload_multiple_or_where_with_notnull.json\"))\n    ])\n    def test_multiple_or_where_payload(self, test_input_1, test_input_2, test_input_3, expected):\n        res = PayloadBuilder().WHERE(test_input_1).OR_WHERE(test_input_2).OR_WHERE(test_input_3).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (3, _payload(\"data/payload_limit1.json\")),\n        (3.5, _payload(\"data/payload_limit2.json\")),\n        (\"invalid\", {})\n    ])\n    def test_limit_payload(self, test_input, expected):\n        res = PayloadBuilder().LIMIT(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        ([\"name\", \"asc\"], _payload(\"data/payload_order_by1.json\")),\n        ([\"name\", \"desc\"], _payload(\"data/payload_order_by2.json\")),\n        (([\"name\", \"desc\"], [\"id\", \"asc\"], [\"ts\", \"asc\"]), _payload(\"data/payload_order_by3.json\")),\n        ([\"name\"], _payload(\"data/payload_order_by1.json\")),\n        ([\"name\", \"invalid\"], {})\n    ])\n    def test_order_by_payload(self, test_input, expected):\n        res = PayloadBuilder().ORDER_BY(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (\"name\", _payload(\"data/payload_group_by1.json\")),\n        (\"name,id\", _payload(\"data/payload_group_by2.json\"))\n    ])\n    def test_group_by_payload(self, test_input, expected):\n        res = PayloadBuilder().GROUP_BY(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        ('user_ts', _payload(\"data/payload_group_by2_alias.json\"))\n    ])\n    def test_group_by_payload_alias1(self, test_input, expected):\n        res = PayloadBuilder().GROUP_BY(test_input).ALIAS('group', ('user_ts', 'timestamp')).payload()\n        assert 
expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (\"user_ts\", _payload(\"data/payload_group_by1_alias.json\"))\n    ])\n    def test_group_by_payload_alias2(self, test_input, expected):\n        res = PayloadBuilder().GROUP_BY(test_input).ALIAS('group', ('user_ts', 'timestamp')).\\\n            FORMAT('group', ('user_ts', \"YYYY-MM-DD HH24:MI:SS.MS\")).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        ('{\"format\": \"YYYY-MM-DD HH24:MI:SS.MS\", \"column\": \"user_ts\"}', _payload(\"data/payload_group_by1_alias.json\"))\n    ])\n    def test_group_by_payload_alias3(self, test_input, expected):\n        res = PayloadBuilder().GROUP_BY(test_input).ALIAS('group', ('user_ts', 'timestamp')).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        ([\"min\", \"values\"], _payload(\"data/payload_aggregate1.json\")),\n        ([\"max\", \"values\"], _payload(\"data/payload_aggregate2.json\")),\n        ([\"avg\", \"values\"], _payload(\"data/payload_aggregate3.json\")),\n        ([\"sum\", \"values\"], _payload(\"data/payload_aggregate4.json\")),\n        ([\"count\", \"values\"], _payload(\"data/payload_aggregate5.json\")),\n        (([\"min\", \"values\"], [\"max\", \"values\"], [\"avg\", \"values\"]), _payload(\"data/payload_aggregate6.json\")),\n        (([\"all\"], {'all'}), _payload(\"data/payload_aggregate_all.json\")),\n        ([\"invalid\", \"values\"], {})\n    ])\n    def test_aggregate_payload(self, test_input, expected):\n        res = PayloadBuilder().AGGREGATE(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        ([\"min\", \"values\"], _payload(\"data/payload_aggregate1_alias.json\"))\n    ])\n    def test_aggregate_payload_with_alias1(self, test_input, expected):\n        res = 
PayloadBuilder().AGGREGATE(test_input).ALIAS('aggregate', ('values', 'min', 'min_values')).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        ([\"max\", \"values\"], _payload(\"data/payload_aggregate2_alias.json\"))\n    ])\n    def test_aggregate_payload_with_alias2(self, test_input, expected):\n        res = PayloadBuilder().AGGREGATE(test_input).ALIAS('aggregate', ('values', 'max', 'max_values')).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (([\"min\", \"values\"], [\"max\", \"values\"], [\"avg\", \"values\"]), _payload(\"data/payload_aggregate6_alias.json\"))\n    ])\n    def test_aggregate_payload_with_alias3(self, test_input, expected):\n        res = PayloadBuilder().AGGREGATE(test_input).ALIAS('aggregate',\n                                                           ('values', 'min', 'min_values'),\n                                                           ('values', 'max', 'max_values'),\n                                                           ('values', 'avg', 'avg_values')).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (([\"min\", [\"values\", \"rate\"]], [\"max\", [\"values\", \"rate\"]], [\"avg\", [\"values\", \"rate\"]]), _payload(\"data/payload_aggregate7_alias.json\"))\n    ])\n    def test_aggregate_payload_with_alias4(self, test_input, expected):\n        res = PayloadBuilder().AGGREGATE(test_input).ALIAS('aggregate',\n                                                           ('values', 'min', 'Minimum'),\n                                                           ('values', 'max', 'Maximum'),\n                                                           ('values', 'avg', 'Average')).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        ((\"user_ts\",), 
_payload(\"data/payload_timebucket4.json\")),\n        ((\"user_ts\", \"5\"), _payload(\"data/payload_timebucket1.json\")),\n        ((\"user_ts\", \"5\", \"DD-MM-YYYYY HH24:MI:SS\"), _payload(\"data/payload_timebucket2.json\")),\n        ((\"user_ts\", \"5\", \"DD-MM-YYYYY HH24:MI:SS\", \"bucket\"), _payload(\"data/payload_timebucket3.json\"))\n    ])\n    def test_timebucket(self, test_input, expected):\n        timestamp = test_input[0]\n        fmt = None\n        alias = None\n        if len(test_input) == 1:\n            res = PayloadBuilder().TIMEBUCKET(timestamp).payload()\n        else:\n            size = test_input[1]\n            if len(test_input) == 3:\n                fmt = test_input[2]\n            if len(test_input) == 4:\n                fmt = test_input[2]\n                alias = test_input[3]\n            res = PayloadBuilder().TIMEBUCKET(timestamp, size, fmt, alias).payload()\n        assert expected == json.loads(res)\n\n    def test_select_all_payload(self):\n        res = PayloadBuilder().SELECT().payload()\n        expected = {}\n        assert expected == json.loads(res)\n\n    def test_select_distinct_payload(self):\n        expr = PayloadBuilder().\\\n            SELECT().DISTINCT([\"description\"])\\\n            .WHERE([\"id\", \"<\", 1000]).payload()\n        assert _payload(\"data/payload_distinct.json\") == json.loads(expr)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (3, _payload(\"data/payload_offset1.json\")),\n        (3.5, _payload(\"data/payload_offset2.json\")),\n        (\"invalid\", {})\n    ])\n    def test_offset_payload(self, test_input, expected):\n        res = PayloadBuilder().OFFSET(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (3, _payload(\"data/payload_limit_offset1.json\")),\n        (3.5, _payload(\"data/payload_limit_offset2.json\")),\n        (\"invalid\", {})\n    ])\n    def 
test_limit_offset_payload(self, test_input, expected):\n        res = PayloadBuilder().LIMIT(test_input).OFFSET(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (3, _payload(\"data/payload_offset1.json\")),\n        (3.5, _payload(\"data/payload_offset2.json\")),\n        (\"invalid\", {})\n    ])\n    def test_skip_payload(self, test_input, expected):\n        res = PayloadBuilder().SKIP(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (3, _payload(\"data/payload_limit_offset1.json\")),\n        (3.5, _payload(\"data/payload_limit_offset2.json\")),\n        (\"invalid\", {})\n    ])\n    def test_limit_skip_payload(self, test_input, expected):\n        res = PayloadBuilder().LIMIT(test_input).SKIP(test_input).payload()\n        assert expected == json.loads(res)\n\n    def test_query_params_payload(self):\n        res = PayloadBuilder().WHERE([\"key\", \"=\", \"value1\"]).query_params()\n        assert \"key=value1\" == res\n\n    @pytest.mark.parametrize(\"test_input1, test_input2, expected\", [\n        ([\"key1\", \"=\", \"value1\"], [\"key2\", \"=\", 2],  \"key1=value1&key2=2\"),\n    ])\n    def test_and_query_params_payload(self, test_input1, test_input2, expected):\n        res = PayloadBuilder().WHERE(test_input1).AND_WHERE(test_input2).query_params()\n        assert expected == res\n\n    @pytest.mark.parametrize(\"test_input1, test_input2, expected\", [\n        ([\"key1\", \"=\", \"value1\"], [\"key2\", \"=\", 2],  \"key1=value1\"),\n    ])\n    def test_or_query_params_payload(self, test_input1, test_input2, expected):\n        \"\"\"Since a URL query string does not support OR, only one condition should be parsed as a query parameter\"\"\"\n        res = PayloadBuilder().WHERE(test_input1).OR_WHERE(test_input2).query_params()\n        assert expected == res\n\n    @pytest.mark.skip(reason=\"No 
support from storage layer yet\")\n    def test_having_payload(self, expected):\n        res = PayloadBuilder().HAVING().payload()\n        assert expected == json.loads(res)\n\n    def test_expr_payload(self):\n        res = PayloadBuilder().WHERE([\"key\", \"=\", \"READINGS\"]).EXPR([\"value\", \"+\", 10]).payload()\n        assert _payload(\"data/payload_expr1.json\") == json.loads(res)\n\n        exprs = ([\"value1\", \"+\", 10], [\"value2\", \"-\", 5])  # a tuple\n        res = PayloadBuilder().WHERE([\"key\", \"=\", \"READINGS\"]).EXPR(exprs).payload()\n        assert 2 == len(json.loads(res))\n        assert _payload(\"data/payload_expr2.json\") == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        ((\"data\", [\"url\", \"value\"], \"new value\"), _payload(\"data/payload_json_properties1.json\"))\n    ])\n    def test_json_properties1(self, test_input, expected):\n        res = PayloadBuilder().JSON_PROPERTY(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input1, test_input2, expected\", [\n        ((\"data\", [\"url\", \"value\"], \"new value\"), (\"data1\", [\"url1\", \"value1\"], \"new value1\"), _payload(\"data/payload_json_properties2.json\"))\n    ])\n    def test_json_properties2(self, test_input1, test_input2, expected):\n        res = PayloadBuilder().JSON_PROPERTY(test_input1, test_input2).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input1, test_input2, expected\", [\n        ((\"data\", [\"url\", \"value\"], \"new value\"), (\"data1\", [\"url1\", \"value1\"], \"new value1\"), _payload(\"data/payload_json_properties2.json\"))\n    ])\n    def test_json_properties3(self, test_input1, test_input2, expected):\n        res = PayloadBuilder().JSON_PROPERTY(test_input1).JSON_PROPERTY(test_input2).payload()\n        assert expected == json.loads(res)\n\n    def test_complex_select_payload(self):\n        res = 
PayloadBuilder() \\\n            .SELECT(\"id\", \"name\") \\\n            .WHERE([\"id\", \"=\", 1]) \\\n            .AND_WHERE([\"name\", \"=\", \"test\"]) \\\n            .OR_WHERE([\"name\", \"=\", \"test2\"]) \\\n            .LIMIT(5) \\\n            .OFFSET(1) \\\n            .GROUP_BY(\"name\", \"id\") \\\n            .ORDER_BY([\"id\", \"desc\"]) \\\n            .AGGREGATE([\"count\", \"name\"]) \\\n            .payload()\n        assert _payload(\"data/payload_complex_select1.json\") == json.loads(res)\n\n    def test_chain_payload(self):\n        res_chain = PayloadBuilder() \\\n            .SELECT(\"id\", \"name\") \\\n            .WHERE([\"id\", \"=\", 1]) \\\n            .AND_WHERE([\"name\", \"=\", \"test\"]) \\\n            .OR_WHERE([\"name\", \"=\", \"test2\"]) \\\n            .chain_payload()\n\n        res = PayloadBuilder(res_chain) \\\n            .LIMIT(5) \\\n            .OFFSET(1) \\\n            .GROUP_BY(\"name\", \"id\") \\\n            .ORDER_BY([\"id\", \"desc\"]) \\\n            .AGGREGATE([\"count\", \"name\"]) \\\n            .payload()\n\n        assert _payload(\"data/payload_complex_select1.json\") == json.loads(res)\n\n    def test_aggregate_with_where(self):\n        res = PayloadBuilder().WHERE([\"ts\", \"newer\", 60]).AGGREGATE([\"count\", \"*\"]).payload()\n        assert _payload(\"data/payload_aggregate_where.json\") == json.loads(res)\n\n    def test_join_without_query(self):\n        res = PayloadBuilder().JOIN(\"table1\", \"table1_id\").ON(\"table2_id\").payload()\n        assert _payload(\"data/payload_join_without_query.json\") == json.loads(res)\n\n    def test_join_with_only_table_name_and_without_query(self):\n        res = PayloadBuilder().JOIN(\"table1\").ON(\"table2_id\").payload()\n        assert _payload(\"data/payload_join_without_query_only_table_name.json\") == json.loads(res)\n\n    def test_join_with_query(self):\n        qp = PayloadBuilder().SELECT((\"parent_id\", \"name\", \"value\")). 
\\\n            ALIAS('return', ('name', 'attribute_name'), ('value', 'attribute_value')). \\\n            WHERE([\"name\", \"=\", \"MyName\"]). \\\n            chain_payload()\n        res = PayloadBuilder().JOIN(\"attributes\", \"parent_id\").ON(\"id\").QUERY(qp).payload()\n        assert _payload(\"data/payload_join_with_query.json\") == json.loads(res)\n\n    def test_join_with_query_and_only_table_name(self):\n        qp = PayloadBuilder().SELECT((\"parent_id\", \"name\", \"value\")). \\\n            ALIAS('return', ('name', 'attribute_name'), ('value', 'attribute_value')). \\\n            WHERE([\"name\", \"=\", \"MyName\"]). \\\n            chain_payload()\n        res = PayloadBuilder().JOIN(\"attributes\").ON(\"id\").QUERY(qp).payload()\n        assert _payload(\"data/payload_join_with_query_only_table_name.json\") == json.loads(res)\n\n    def test_nested_join(self):\n        qp1 = PayloadBuilder().SELECT((\"parent_id\", \"value\")). \\\n            ALIAS('return', ('value', 'colour')). \\\n            WHERE([\"name\", \"=\", \"colour\"]). \\\n            chain_payload()\n        join1 = PayloadBuilder().JOIN(\"attributes\", \"parent_id\").ON(\"id\").QUERY(qp1).chain_payload()\n\n        qp2 = PayloadBuilder().SELECT((\"parent_id\", \"value\")). \\\n            ALIAS('return', ('value', 'my_name')). \\\n            WHERE([\"name\", \"=\", \"MyName\"]). 
\\\n            chain_payload()\n\n        join2 = PayloadBuilder().JOIN(\"attributes\", \"parent_id\").ON(\"id\").QUERY(qp2)\n        join2.QUERY(join1)\n\n        res = join2.payload()\n        assert _payload(\"data/payload_nested_join.json\") == json.loads(res)\n\n    def test_invalid_join_clause(self):\n        with pytest.raises(Exception) as exc_info:\n            PayloadBuilder().JOIN()\n        assert exc_info.value.args[0] == \"Expected at least table name with JOIN clause.\"\n\n    def test_invalid_on_without_join(self):\n        with pytest.raises(Exception) as exc_info:\n            PayloadBuilder().ON(\"id\")\n        assert exc_info.value.args[0] == \"ON Clause used without using JOIN first.\"\n\n    def test_invalid_on_clause(self):\n        with pytest.raises(Exception) as exc_info:\n            PayloadBuilder().JOIN(\"table1\").ON()\n        assert exc_info.value.args[0] == \"Expected column name with ON clause.\"\n\n    def test_invalid_query_clause_without_join(self):\n        with pytest.raises(Exception) as exc_info:\n            PayloadBuilder().QUERY({'a': 1})\n        assert exc_info.value.args[0] == \"Query used without JOIN clause.\"\n\n    def test_invalid_query_clause_without_on(self):\n\n        with pytest.raises(Exception) as exc_info:\n            PayloadBuilder().JOIN(\"table1\").QUERY({'a': 1})\n        assert exc_info.value.args[0] == \"Query used without ON clause.\"\n\n    def test_invalid_query_clause_invalid_query_payload(self):\n\n        with pytest.raises(Exception) as exc_info:\n            PayloadBuilder().JOIN(\"table1\", \"column1\").ON(\"id\").QUERY(\"random\")\n        assert exc_info.value.args[0] == \"The query payload parameter must be an OrderedDict.\"\n\n\nclass TestPayloadBuilderCreate:\n    \"\"\"\n    This class tests all INSERT data specific payload methods of payload builder\n    \"\"\"\n    def test_insert_payload(self):\n        res = PayloadBuilder().INSERT(key=\"x\").payload()\n        assert 
_payload(\"data/payload_insert1.json\") == json.loads(res)\n\n    def test_insert_into_payload(self):\n        res = PayloadBuilder().INSERT_INTO(\"test\").payload()\n        assert _payload(\"data/payload_from1.json\") == json.loads(res)\n\n\nclass TestPayloadBuilderUpdate:\n    \"\"\"\n    This class tests all UPDATE specific payload methods of payload builder\n    \"\"\"\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (\"test\", _payload(\"data/payload_from1.json\")),\n    ])\n    def test_update_table_payload(self, test_input, expected):\n        res = PayloadBuilder().UPDATE_TABLE(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (\"test_update\", _payload(\"data/payload_set1.json\")),\n    ])\n    def test_set_payload(self, test_input, expected):\n        res = PayloadBuilder().SET(value=test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"input_set, input_where, input_table, expected\", [\n        (\"test_update\", [\"name\", \"=\", \"test\"], \"test_tbl\",\n         _payload(\"data/payload_update_set_where1.json\")),\n    ])\n    def test_update_set_where_payload(self, input_set, input_where, input_table, expected):\n        res = PayloadBuilder().SET(value=input_set).WHERE(input_where).UPDATE_TABLE(input_table).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"input_set, input_where, input_modifier, expected\", [\n        (\"token_expiration\", [\"token\", \"=\", \"TOKEN\"], [\"allowzero\"],\n         _payload(\"data/payload_modifier_set_where.json\")),\n    ])\n    def test_modifier_with_set_where_payload(self, input_set, input_where, input_modifier, expected):\n        res = PayloadBuilder().SET(value=input_set).WHERE(input_where).MODIFIER(input_modifier).payload()\n        assert expected == json.loads(res)\n\n\nclass TestPayloadBuilderDelete:\n    \"\"\"\n    This 
class tests all DELETE specific payload methods of payload builder\n    \"\"\"\n    @pytest.mark.parametrize(\"test_input, expected\", [\n        (\"test\", _payload(\"data/payload_from1.json\")),\n    ])\n    def test_delete_payload(self, test_input, expected):\n        res = PayloadBuilder().DELETE(test_input).payload()\n        assert expected == json.loads(res)\n\n    @pytest.mark.parametrize(\"input_where, input_table, expected\", [\n        ([\"name\", \"=\", \"test\"], \"test_tbl\",\n         _payload(\"data/payload_delete_where1.json\")),\n    ])\n    def test_delete_where_payload(self, input_where, input_table, expected):\n        res = PayloadBuilder().DELETE(input_table).WHERE(input_where).payload()\n        assert expected == json.loads(res)\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/test_sc_exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test fledge/common/storage_client/exceptions.py \"\"\"\n\nimport pytest\n\nfrom fledge.common.storage_client.exceptions import StorageClientException\nfrom fledge.common.storage_client.exceptions import *\n\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestStorageClientExceptions:\n\n    def test_init_StorageClientException(self):\n\n        with pytest.raises(Exception) as excinfo:\n            raise StorageClientException()\n        assert excinfo.type is TypeError\n        assert \"__init__() missing 1 required positional argument: 'code'\" in str(excinfo.value)\n\n    def test_default_init_StorageClientException(self):\n        with pytest.raises(Exception) as excinfo:\n            raise StorageClientException(40)\n        assert excinfo.type is StorageClientException\n        assert issubclass(excinfo.type, Exception)\n\n        try:\n            raise StorageClientException(40)\n        except Exception as ex:\n            assert ex.__class__ is StorageClientException\n            assert issubclass(ex.__class__, Exception)\n            assert 40 == ex.code\n            assert ex.message is None\n\n    def test_init_args_StorageClientException(self):\n        with pytest.raises(Exception) as excinfo:\n            raise StorageClientException(code=11, message=\"foo\")\n        assert excinfo.type is StorageClientException\n        assert issubclass(excinfo.type, Exception)\n\n        # code is 11\n        # pytest raises wrapper allow only type, value and traceback info\n        assert \"foo\" == str(excinfo.value)\n\n        try:\n            raise StorageClientException(code=11, message=\"foo\")\n        except Exception as ex:\n            assert ex.__class__ is StorageClientException\n            assert issubclass(ex.__class__, Exception)\n            assert 
11 == ex.code\n            assert \"foo\" == ex.message\n\n    def test_BadRequest(self):\n        with pytest.raises(Exception) as excinfo:\n            raise BadRequest()\n        assert excinfo.type is BadRequest\n        assert issubclass(excinfo.type, StorageClientException)\n\n        try:\n            raise BadRequest()\n        except Exception as ex:\n            assert ex.__class__ is BadRequest\n            assert issubclass(ex.__class__, StorageClientException)\n            assert 400 == ex.code\n            assert \"Bad request\" == ex.message\n\n    def test_StorageServiceUnavailable(self):\n        with pytest.raises(Exception) as excinfo:\n            raise StorageServiceUnavailable()\n        assert excinfo.type is StorageServiceUnavailable\n        assert issubclass(excinfo.type, StorageClientException)\n\n        try:\n            raise StorageServiceUnavailable()\n        except Exception as ex:\n            assert ex.__class__ is StorageServiceUnavailable\n            assert issubclass(ex.__class__, StorageClientException)\n            assert 503 == ex.code\n            assert \"Storage service is unavailable\" == ex.message\n\n    def test_InvalidServiceInstance(self):\n        with pytest.raises(Exception) as excinfo:\n            raise InvalidServiceInstance()\n        assert excinfo.type is InvalidServiceInstance\n        assert issubclass(excinfo.type, StorageClientException)\n\n        try:\n            raise InvalidServiceInstance()\n        except Exception as ex:\n            assert ex.__class__ is InvalidServiceInstance\n            assert issubclass(ex.__class__, StorageClientException)\n            assert 502 == ex.code\n            assert \"Storage client needs a valid *Fledge storage* micro-service instance\" == ex.message\n\n    def test_InvalidReadingsPurgeFlagParameters(self):\n        with pytest.raises(Exception) as excinfo:\n            raise InvalidReadingsPurgeFlagParameters()\n        assert excinfo.type is 
InvalidReadingsPurgeFlagParameters\n        assert issubclass(excinfo.type, BadRequest)\n\n        try:\n            raise InvalidReadingsPurgeFlagParameters()\n        except Exception as ex:\n            assert ex.__class__ is InvalidReadingsPurgeFlagParameters\n            assert issubclass(ex.__class__, BadRequest)\n            assert 400 == ex.code\n            assert \"Purge flag valid options are retain or purge only\" == ex.message\n\n    def test_PurgeOneOfAgeAssetAndSize(self):\n        with pytest.raises(Exception) as excinfo:\n            raise PurgeOneOfAgeAssetAndSize()\n        assert excinfo.type is PurgeOneOfAgeAssetAndSize\n        assert issubclass(excinfo.type, BadRequest)\n\n        try:\n            raise PurgeOneOfAgeAssetAndSize()\n        except Exception as ex:\n            assert ex.__class__ is PurgeOneOfAgeAssetAndSize\n            assert issubclass(ex.__class__, BadRequest)\n            assert 400 == ex.code\n            assert \"Purge must specify one of age, size or asset\" == ex.message\n\n    def test_PurgeOnlyOneOfAgeAndSize(self):\n        with pytest.raises(Exception) as excinfo:\n            raise PurgeOnlyOneOfAgeAndSize()\n        assert excinfo.type is PurgeOnlyOneOfAgeAndSize\n        assert issubclass(excinfo.type, BadRequest)\n\n        try:\n            raise PurgeOnlyOneOfAgeAndSize()\n        except Exception as ex:\n            assert ex.__class__ is PurgeOnlyOneOfAgeAndSize\n            assert issubclass(ex.__class__, BadRequest)\n            assert 400 == ex.code\n            assert \"Purge must specify only one of age or size\" == ex.message\n\n    def test_StorageServerError(self):\n        with pytest.raises(Exception) as excinfo:\n            raise StorageServerError(code=400, reason=\"blah\", error={\"k\": \"v\"})\n        assert excinfo.type is StorageServerError\n        assert issubclass(excinfo.type, Exception)\n\n        try:\n            raise StorageServerError(code=400, reason=\"blah\", error={\"k\": 
\"v\"})\n        except Exception as ex:\n            assert ex.__class__ is StorageServerError\n            assert issubclass(ex.__class__, Exception)\n            assert 400 == ex.code\n            assert \"blah\" == ex.reason\n            assert {\"k\": \"v\"} == ex.error\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/test_storage_client.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test fledge/common/storage_client/storage_client.py \"\"\"\nimport pytest\nimport aiohttp\nfrom unittest.mock import MagicMock, patch\nimport json\nimport asyncio\nfrom aiohttp import web\nfrom aiohttp.test_utils import unused_port\nfrom functools import partial\n\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.common.storage_client.storage_client import _LOGGER, StorageClientAsync, ReadingsStorageClientAsync\n\nfrom fledge.common.storage_client.exceptions import *\n\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nHOST = '127.0.0.1'\nPORT = unused_port()\n\n\nclass FakeFledgeStorageSrvr:\n\n    def __init__(self, *, loop):\n        self.loop = loop\n        self.app = web.Application(loop=loop)\n        self.app.router.add_routes([\n            # common table operations\n            web.post('/storage/table/{tbl_name}', self.query_with_payload_insert_into_or_update_tbl_handler),\n            web.put('/storage/table/{tbl_name}', self.query_with_payload_insert_into_or_update_tbl_handler),\n            web.delete('/storage/table/{tbl_name}', self.delete_from_tbl_handler),\n            web.get('/storage/table/{tbl_name}', self.query_tbl_handler),\n            web.put('/storage/table/{tbl_name}/query', self.query_with_payload_insert_into_or_update_tbl_handler),\n\n            # readings table\n            web.post('/storage/reading', self.readings_append),\n            web.get('/storage/reading', self.readings_fetch),\n            web.put('/storage/reading/query', self.readings_query),\n            web.put('/storage/reading/purge', self.readings_purge)\n        ])\n        self.handler = None\n        self.server = None\n\n    async def start(self):\n\n        self.handler = self.app.make_handler()\n        self.server = await 
self.loop.create_server(self.handler, HOST, PORT, ssl=None)\n\n    async def stop(self):\n        self.server.close()\n        await self.server.wait_closed()\n        await self.app.shutdown()\n        await self.handler.shutdown()\n        await self.app.cleanup()\n\n    async def query_with_payload_insert_into_or_update_tbl_handler(self, request):\n        payload = await request.json()\n\n        if payload.get(\"bad_request\", None):\n            return web.HTTPBadRequest(reason=\"bad data\", text='{\"key\": \"value\"}')\n\n        if payload.get(\"internal_server_err\", None):\n            return web.HTTPInternalServerError(reason=\"something wrong\", text='{\"key\": \"value\"}')\n\n        return web.json_response({\n           \"called\": payload\n        })\n\n    async def delete_from_tbl_handler(self, request):\n        try:\n            payload = await request.json()\n\n            if payload.get(\"bad_request\", None):\n                return web.HTTPBadRequest(reason=\"bad data\", text='{\"key\": \"value\"}')\n\n            if payload.get(\"internal_server_err\", None):\n                return web.HTTPInternalServerError(reason=\"something wrong\", text='{\"key\": \"value\"}')\n        except:\n            payload = 1\n\n        return web.json_response({\n            \"called\": payload\n        })\n\n    async def query_tbl_handler(self, request):\n\n        # add side effect based on query param foo `?foo=`\n\n        res = 1\n        if request.query.get('foo', None):\n            res = 'foo passed'\n\n        if request.query.get(\"bad_foo\", None):\n            return web.HTTPBadRequest(reason=\"bad data\", text='{\"key\": \"value\"}')\n\n        if request.query.get(\"internal_server_err_foo\", None):\n            return web.HTTPInternalServerError(reason=\"something wrong\", text='{\"key\": \"value\"}')\n\n        return web.json_response({\n            \"called\": res\n        })\n\n    async def readings_append(self, request):\n        
payload = await request.json()\n\n        if payload.get(\"readings\", None) is None:\n            return web.HTTPBadRequest(reason=\"bad data\", text='{\"key\": \"value\"}')\n\n        if payload.get(\"internal_server_err\", None):\n            return web.HTTPInternalServerError(reason=\"something wrong\", text='{\"key\": \"value\"}')\n\n        return web.json_response({\n            \"appended\": payload\n        })\n\n    async def readings_fetch(self, request):\n        if request.query.get(\"id\") == \"bad_data\":\n            return web.HTTPBadRequest(reason=\"bad data\", text='{\"key\": \"value\"}')\n\n        if request.query.get(\"id\") == \"internal_server_err\":\n            return web.HTTPInternalServerError(reason=\"something wrong\", text='{\"key\": \"value\"}')\n\n        return web.json_response({\"readings\": [],\n                                  \"start\": request.query.get('id'),\n                                  \"count\": request.query.get('count')\n                                  })\n\n    async def readings_query(self, request):\n        payload = await request.json()\n\n        if payload.get(\"bad_request\", None):\n            return web.HTTPBadRequest(reason=\"bad data\", text='{\"key\": \"value\"}')\n\n        if payload.get(\"internal_server_err\", None):\n            return web.HTTPInternalServerError(reason=\"something wrong\", text='{\"key\": \"value\"}')\n\n        return web.json_response({\n           \"called\": payload\n        })\n\n    async def readings_purge(self, request):\n\n        if request.query.get('age', None) == \"-1\":\n            return web.HTTPBadRequest(reason=\"age should not be less than 0\", text='{\"key\": \"value\"}')\n\n        if request.query.get('size', None) == \"4294967296\":\n            return web.HTTPInternalServerError(reason=\"unsigned int range\", text='{\"key\": \"value\"}')\n\n        return web.json_response({\n            \"called\": 1\n        })\n\n\nclass TestStorageClientAsync:\n\n 
   def test_init(self):\n        svc = {\"id\": 1, \"name\": \"foo\", \"address\": \"local\", \"service_port\": 1000, \"management_port\": 2000,\n               \"type\": \"Storage\", \"protocol\": \"http\"}\n        with patch.object(StorageClientAsync, '_get_storage_service', return_value=svc):\n            sc = StorageClientAsync(1, 2)\n            assert \"local:1000\" == sc.base_url\n            assert \"local:2000\" == sc.management_api_url\n\n    def test_init_with_invalid_storage_service(self):\n        svc = {\"id\": 1, \"name\": \"foo\", \"address\": \"local\", \"service_port\": 1000, \"management_port\": 2000,\n               \"type\": \"xStorage\", \"protocol\": \"http\"}\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(StorageClientAsync, '_get_storage_service', return_value=svc):\n                sc = StorageClientAsync(1, 2)\n        assert excinfo.type is InvalidServiceInstance\n\n    def test_init_with_service_record(self):\n        mockServiceRecord = MagicMock(ServiceRecord)\n        mockServiceRecord._address = \"local\"\n        mockServiceRecord._type = \"Storage\"\n        mockServiceRecord._port = 1000\n        mockServiceRecord._management_port = 2000\n\n        sc = StorageClientAsync(1, 2, mockServiceRecord)\n        assert \"local:1000\" == sc.base_url\n        assert \"local:2000\" == sc.management_api_url\n\n    def test_init_with_invalid_service_record(self):\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"warning\") as log:\n                sc = StorageClientAsync(1, 2, \"blah\")\n        log.assert_called_once_with(\"Storage should be a valid Fledge micro-service instance\")\n        assert excinfo.type is InvalidServiceInstance\n\n    def test_init_with_service_record_non_storage_type(self):\n        mockServiceRecord = MagicMock(ServiceRecord)\n        mockServiceRecord._address = \"local\"\n        mockServiceRecord._type = \"xStorage\"\n        
mockServiceRecord._port = 1000\n        mockServiceRecord._management_port = 2000\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"warning\") as log:\n                sc = StorageClientAsync(1, 2, mockServiceRecord)\n        log.assert_called_once_with(\"Storage should be a valid *Storage* micro-service instance\")\n        assert excinfo.type is InvalidServiceInstance\n\n    @pytest.mark.asyncio\n    async def test_insert_into_tbl(self, event_loop):\n        # 'POST', '/storage/table/{tbl_name}', data\n\n        fake_storage_srvr = FakeFledgeStorageSrvr(loop=event_loop)\n        await fake_storage_srvr.start()\n\n        mockServiceRecord = MagicMock(ServiceRecord)\n        mockServiceRecord._address = HOST\n        mockServiceRecord._type = \"Storage\"\n        mockServiceRecord._port = PORT\n        mockServiceRecord._management_port = 2000\n\n        sc = StorageClientAsync(1, 2, mockServiceRecord)\n        assert \"{}:{}\".format(HOST, PORT) == sc.base_url\n\n        with pytest.raises(Exception) as excinfo:\n            args = None, '{\"k\": \"v\"}'\n            await sc.insert_into_tbl(*args)\n        assert excinfo.type is ValueError\n        assert \"Table name is missing\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            args = \"aTable\", None\n            await sc.insert_into_tbl(*args)\n        assert excinfo.type is ValueError\n        assert \"Data to insert is missing\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            args = \"aTable\", {\"k\": \"v\"}\n            await sc.insert_into_tbl(*args)\n        assert excinfo.type is TypeError\n        assert \"Provided data to insert must be a valid JSON\" in str(excinfo.value)\n\n        args = \"aTable\", json.dumps({\"k\": \"v\"})\n        response = await sc.insert_into_tbl(*args)\n        assert {\"k\": \"v\"} == response[\"called\"]\n\n        with pytest.raises(Exception) as 
excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                with patch.object(_LOGGER, \"info\") as log_i:\n                    args = \"aTable\", json.dumps({\"bad_request\": \"v\"})\n                    await sc.insert_into_tbl(*args)\n            log_i.assert_called_once_with(\"POST %s, with payload: %s\", '/storage/table/aTable', '{\"bad_request\": \"v\"}')\n            log_e.assert_called_once_with('Error code: %d, reason: %s, details: %s', 400, 'bad data', {'key': 'value'})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                with patch.object(_LOGGER, \"info\") as log_i:\n                    args = \"aTable\", json.dumps({\"internal_server_err\": \"v\"})\n                    await sc.insert_into_tbl(*args)\n            log_i.assert_called_once_with(\"POST %s, with payload: %s\", '/storage/table/aTable', '{\"internal_server_err\": \"v\"}')\n            log_e.assert_called_once_with('Error code: %d, reason: %s, details: %s', 500, 'something wrong', {'key': 'value'})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        await fake_storage_srvr.stop()\n\n    @pytest.mark.asyncio\n    async def test_update_tbl(self, event_loop):\n        # PUT, '/storage/table/{tbl_name}', data\n\n        fake_storage_srvr = FakeFledgeStorageSrvr(loop=event_loop)\n        await fake_storage_srvr.start()\n\n        mockServiceRecord = MagicMock(ServiceRecord)\n        mockServiceRecord._address = HOST\n        mockServiceRecord._type = \"Storage\"\n        mockServiceRecord._port = PORT\n        mockServiceRecord._management_port = 2000\n\n        sc = StorageClientAsync(1, 2, mockServiceRecord)\n        assert \"{}:{}\".format(HOST, PORT) == sc.base_url\n\n        with pytest.raises(Exception) as excinfo:\n            args = None, json.dumps({\"k\": \"v\"})\n            
await sc.update_tbl(*args)\n        assert excinfo.type is ValueError\n        assert \"Table name is missing\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            args = \"aTable\", None\n            await sc.update_tbl(*args)\n        assert excinfo.type is ValueError\n        assert \"Data to update is missing\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            args = \"aTable\", {\"k\": \"v\"}\n            await sc.update_tbl(*args)\n        assert excinfo.type is TypeError\n        assert \"Provided data to update must be a valid JSON\" in str(excinfo.value)\n\n        args = \"aTable\", json.dumps({\"k\": \"v\"})\n        response = await sc.update_tbl(*args)\n        assert {\"k\": \"v\"} == response[\"called\"]\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                with patch.object(_LOGGER, \"info\") as log_i:\n                    args = \"aTable\", json.dumps({\"bad_request\": \"v\"})\n                    await sc.update_tbl(*args)\n            log_i.assert_called_once_with(\"PUT %s, with payload: %s\", '/storage/table/aTable', '{\"bad_request\": \"v\"}')\n            log_e.assert_called_once_with(\"Error code: %d, reason: %s, details: %s\", 400, 'bad data', {'key': 'value'})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                with patch.object(_LOGGER, \"info\") as log_i:\n                    args = \"aTable\", json.dumps({\"internal_server_err\": \"v\"})\n                    await sc.update_tbl(*args)\n            log_i.assert_called_once_with(\"PUT %s, with payload: %s\", '/storage/table/aTable',\n                                          '{\"internal_server_err\": \"v\"}')\n            log_e.assert_called_once_with(\"Error code: %d, reason: %s, details: 
%s\", 500, 'something wrong', {'key': 'value'})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        await fake_storage_srvr.stop()\n\n    @pytest.mark.asyncio\n    async def test_delete_from_tbl(self, event_loop):\n        # 'DELETE', '/storage/table/{tbl_name}', condition (optional)\n\n        fake_storage_srvr = FakeFledgeStorageSrvr(loop=event_loop)\n        await fake_storage_srvr.start()\n\n        mockServiceRecord = MagicMock(ServiceRecord)\n        mockServiceRecord._address = HOST\n        mockServiceRecord._type = \"Storage\"\n        mockServiceRecord._port = PORT\n        mockServiceRecord._management_port = 2000\n\n        sc = StorageClientAsync(1, 2, mockServiceRecord)\n        assert \"{}:{}\".format(HOST, PORT) == sc.base_url\n\n        with pytest.raises(Exception) as excinfo:\n            await sc.delete_from_tbl(None)\n        assert excinfo.type is ValueError\n        assert \"Table name is missing\" in str(excinfo.value)\n\n        args = \"aTable\", None  # delete without condition is allowed\n        response = await sc.delete_from_tbl(*args)\n        assert 1 == response[\"called\"]\n\n        with pytest.raises(Exception) as excinfo:\n            args = \"aTable\", {\"condition\": \"v\"}\n            await sc.delete_from_tbl(*args)\n        assert excinfo.type is TypeError\n        assert \"condition payload must be a valid JSON\" in str(excinfo.value)\n\n        args = \"aTable\", json.dumps({\"condition\": \"v\"})\n        response = await sc.delete_from_tbl(*args)\n        assert {\"condition\": \"v\"} == response[\"called\"]\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                with patch.object(_LOGGER, \"info\") as log_i:\n                    args = \"aTable\", json.dumps({\"bad_request\": \"v\"})\n                    await sc.delete_from_tbl(*args)\n            log_i.assert_called_once_with(\"DELETE %s, with payload: %s\", 
'/storage/table/aTable', '{\"bad_request\": \"v\"}')\n            log_e.assert_called_once_with(\"Error code: %d, reason: %s, details: %s\", 400, 'bad data', {'key': 'value'})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                with patch.object(_LOGGER, \"info\") as log_i:\n                    args = \"aTable\", json.dumps({\"internal_server_err\": \"v\"})\n                    await sc.delete_from_tbl(*args)\n            log_i.assert_called_once_with(\"DELETE %s, with payload: %s\", '/storage/table/aTable',\n                                          '{\"internal_server_err\": \"v\"}')\n            log_e.assert_called_once_with(\"Error code: %d, reason: %s, details: %s\", 500, 'something wrong', {'key': 'value'})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        await fake_storage_srvr.stop()\n\n    @pytest.mark.asyncio\n    async def test_query_tbl(self, event_loop):\n        # 'GET', '/storage/table/{tbl_name}', *allows query params\n\n        fake_storage_srvr = FakeFledgeStorageSrvr(loop=event_loop)\n        await fake_storage_srvr.start()\n\n        mockServiceRecord = MagicMock(ServiceRecord)\n        mockServiceRecord._address = HOST\n        mockServiceRecord._type = \"Storage\"\n        mockServiceRecord._port = PORT\n        mockServiceRecord._management_port = 2000\n\n        sc = StorageClientAsync(1, 2, mockServiceRecord)\n        assert \"{}:{}\".format(HOST, PORT) == sc.base_url\n\n        with pytest.raises(Exception) as excinfo:\n            await sc.query_tbl()\n        assert excinfo.type is TypeError\n        assert \"query_tbl() missing 1 required positional argument: 'tbl_name'\" in str(excinfo.value)\n\n        args = \"aTable\", None  # query_tbl without query param is == SELECT *\n        response = await sc.query_tbl(*args)\n        assert 1 == 
response[\"called\"]\n\n        args = \"aTable\", 'foo=v1&bar=v2'\n        response = await sc.query_tbl(*args)\n        assert 'foo passed' == response[\"called\"]\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                with patch.object(_LOGGER, \"info\") as log_i:\n                    args = \"aTable\", 'bad_foo=1'\n                    await sc.query_tbl(*args)\n            log_i.assert_called_once_with(\"GET %s\", '/storage/table/aTable?bad_foo=1')\n            log_e.assert_called_once_with(\"Error code: %d, reason: %s, details: %s\", 400, 'bad data', {'key': 'value'})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                with patch.object(_LOGGER, \"info\") as log_i:\n                    args = \"aTable\", 'internal_server_err_foo=1'\n                    await sc.query_tbl(*args)\n            log_i.assert_called_once_with(\"GET %s\", '/storage/table/aTable?internal_server_err_foo=1')\n            log_e.assert_called_once_with(\"Error code: %d, reason: %s, details: %s\", 500, 'something wrong', {'key': 'value'})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        await fake_storage_srvr.stop()\n\n    @pytest.mark.asyncio\n    async def test_query_tbl_with_payload(self, event_loop):\n        # 'PUT', '/storage/table/{tbl_name}/query', query_payload\n\n        fake_storage_srvr = FakeFledgeStorageSrvr(loop=event_loop)\n        await fake_storage_srvr.start()\n\n        mockServiceRecord = MagicMock(ServiceRecord)\n        mockServiceRecord._address = HOST\n        mockServiceRecord._type = \"Storage\"\n        mockServiceRecord._port = PORT\n        mockServiceRecord._management_port = 2000\n\n        sc = StorageClientAsync(1, 2, mockServiceRecord)\n        assert \"{}:{}\".format(HOST, PORT) == 
sc.base_url\n\n        with pytest.raises(Exception) as excinfo:\n            args = None, json.dumps({\"k\": \"v\"})\n            await sc.query_tbl_with_payload(*args)\n        assert excinfo.type is ValueError\n        assert \"Table name is missing\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            args = \"aTable\", None\n            await sc.query_tbl_with_payload(*args)\n        assert excinfo.type is ValueError\n        assert \"Query payload is missing\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            args = \"aTable\", {\"k\": \"v\"}\n            await sc.query_tbl_with_payload(*args)\n        assert excinfo.type is TypeError\n        assert \"Query payload must be a valid JSON\" in str(excinfo.value)\n\n        args = \"aTable\", json.dumps({\"k\": \"v\"})\n        response = await sc.query_tbl_with_payload(*args)\n        assert {\"k\": \"v\"} == response[\"called\"]\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                with patch.object(_LOGGER, \"info\") as log_i:\n                    args = \"aTable\", json.dumps({\"bad_request\": \"v\"})\n                    await sc.query_tbl_with_payload(*args)\n            log_i.assert_called_once_with(\"PUT %s, with query payload: %s\", '/storage/table/aTable/query',\n                                          '{\"bad_request\": \"v\"}')\n            log_e.assert_called_once_with(\"Error code: %d, reason: %s, details: %s\", 400, 'bad data', {'key': 'value'})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                with patch.object(_LOGGER, \"info\") as log_i:\n                    args = \"aTable\", json.dumps({\"internal_server_err\": \"v\"})\n                    await sc.query_tbl_with_payload(*args)\n           
 log_i.assert_called_once_with(\"PUT %s, with query payload: %s\", '/storage/table/aTable/query',\n                                          '{\"internal_server_err\": \"v\"}')\n            log_e.assert_called_once_with(\"Error code: %d, reason: %s, details: %s\", 500, 'something wrong', {'key': 'value'})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        await fake_storage_srvr.stop()\n\n\nclass TestReadingsStorageAsyncClient:\n\n    def test_init(self):\n        mockServiceRecord = MagicMock(ServiceRecord)\n        mockServiceRecord._address = HOST\n        mockServiceRecord._type = \"Storage\"\n        mockServiceRecord._port = PORT\n        mockServiceRecord._management_port = 2000\n\n        rsc = ReadingsStorageClientAsync(1, 2, mockServiceRecord)\n        assert \"{}:{}\".format(HOST, PORT) == rsc.base_url\n\n    @pytest.mark.asyncio\n    async def test_append(self, event_loop):\n        # 'POST', '/storage/reading', readings\n\n        fake_storage_srvr = FakeFledgeStorageSrvr(loop=event_loop)\n        await fake_storage_srvr.start()\n\n        mockServiceRecord = MagicMock(ServiceRecord)\n        mockServiceRecord._address = HOST\n        mockServiceRecord._type = \"Storage\"\n        mockServiceRecord._port = PORT\n        mockServiceRecord._management_port = 2000\n\n        rsc = ReadingsStorageClientAsync(1, 2, mockServiceRecord)\n        assert \"{}:{}\".format(HOST, PORT) == rsc.base_url\n\n        with pytest.raises(Exception) as excinfo:\n            await rsc.append(None)\n        assert excinfo.type is ValueError\n        assert \"Readings payload is missing\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            await rsc.append(\"blah\")\n        assert excinfo.type is TypeError\n        assert \"Readings payload must be a valid JSON\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n        
        readings_bad_payload = json.dumps({\"Xreadings\": []})\n                await rsc.append(readings_bad_payload)\n            log_e.assert_called_once_with(\"POST url %s with payload: %s, Error code: %d, reason: %s, details: %s\",\n                                          '/storage/reading', '{\"Xreadings\": []}', 400, 'bad data', {\"key\": \"value\"})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                r = '{\"readings\": [], \"internal_server_err\": 1}'\n                await rsc.append(r)\n            log_e.assert_called_once_with(\"POST url %s with payload: %s, Error code: %d, reason: %s, details: %s\",\n                                          '/storage/reading', '{\"readings\": [], \"internal_server_err\": 1}',\n                                          500, 'something wrong', {\"key\": \"value\"})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        readings = json.dumps({\"readings\": []})\n        response = await rsc.append(readings)\n        assert {'readings': []} == response['appended']\n\n        await fake_storage_srvr.stop()\n\n    @pytest.mark.asyncio\n    async def test_fetch(self, event_loop):\n        # GET, '/storage/reading?id={}&count={}'\n\n        fake_storage_srvr = FakeFledgeStorageSrvr(loop=event_loop)\n        await fake_storage_srvr.start()\n\n        mockServiceRecord = MagicMock(ServiceRecord)\n        mockServiceRecord._address = HOST\n        mockServiceRecord._type = \"Storage\"\n        mockServiceRecord._port = PORT\n        mockServiceRecord._management_port = 2000\n\n        rsc = ReadingsStorageClientAsync(1, 2, mockServiceRecord)\n        assert \"{}:{}\".format(HOST, PORT) == rsc.base_url\n\n        with pytest.raises(Exception) as excinfo:\n            args = None, 3\n            await rsc.fetch(*args)\n        assert excinfo.type 
is ValueError\n        assert \"first reading id to retrieve the readings block is required\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            args = 2, None\n            await rsc.fetch(*args)\n        assert excinfo.type is ValueError\n        assert \"count is required to retrieve the readings block\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            args = 2, \"1s\"\n            await rsc.fetch(*args)\n        assert excinfo.type is ValueError\n        assert \"invalid literal for int() with base 10\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                args = \"bad_data\", 3\n                await rsc.fetch(*args)\n            log_e.assert_called_once_with('GET url: %s, Error code: %d, reason: %s, details: %s',\n                                          '/storage/reading?id=bad_data&count=3', 400, 'bad data', {\"key\": \"value\"})\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                args = \"internal_server_err\", 3\n                await rsc.fetch(*args)\n            log_e.assert_called_once_with('GET url: %s, Error code: %d, reason: %s, details: %s',\n                                          '/storage/reading?id=internal_server_err&count=3', 500, 'something wrong', {\"key\": \"value\"})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        args = 2, 3\n        response = await rsc.fetch(*args)\n        assert {'readings': [], 'start': '2', 'count': '3'} == response\n\n        await fake_storage_srvr.stop()\n\n    @pytest.mark.asyncio\n    async def test_query(self, event_loop):\n        # 'PUT', '/storage/reading/query' query_payload\n\n        fake_storage_srvr = FakeFledgeStorageSrvr(loop=event_loop)\n        await fake_storage_srvr.start()\n\n        mockServiceRecord = 
MagicMock(ServiceRecord)\n        mockServiceRecord._address = HOST\n        mockServiceRecord._type = \"Storage\"\n        mockServiceRecord._port = PORT\n        mockServiceRecord._management_port = 2000\n\n        rsc = ReadingsStorageClientAsync(1, 2, mockServiceRecord)\n        assert \"{}:{}\".format(HOST, PORT) == rsc.base_url\n\n        with pytest.raises(Exception) as excinfo:\n            await rsc.query(None)\n        assert excinfo.type is ValueError\n        assert \"Query payload is missing\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            await rsc.query(\"blah\")\n        assert excinfo.type is TypeError\n        assert \"Query payload must be a valid JSON\" in str(excinfo.value)\n\n        response = await rsc.query(json.dumps({\"k\": \"v\"}))\n        assert {\"k\": \"v\"} == response[\"called\"]\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                await rsc.query(json.dumps({\"bad_request\": \"v\"}))\n            log_e.assert_called_once_with(\"PUT url %s with query payload: %s, Error code: %d, reason: %s, details: %s\",\n                                          '/storage/reading/query', '{\"bad_request\": \"v\"}', 400, 'bad data', {\"key\": \"value\"})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(_LOGGER, \"error\") as log_e:\n                await rsc.query(json.dumps({\"internal_server_err\": \"v\"}))\n            log_e.assert_called_once_with(\"PUT url %s with query payload: %s, Error code: %d, reason: %s, details: %s\",\n                                          '/storage/reading/query', '{\"internal_server_err\": \"v\"}', 500, 'something wrong', {\"key\": \"value\"})\n        assert excinfo.type is aiohttp.client_exceptions.ContentTypeError\n\n        await fake_storage_srvr.stop()\n\n    @pytest.mark.asyncio\n  
  async def test_purge(self, event_loop):\n        # 'PUT', url=put_url, /storage/reading/purge?age=&sent=&flags\n        fake_storage_srvr = FakeFledgeStorageSrvr(loop=event_loop)\n        await fake_storage_srvr.start()\n\n        mockServiceRecord = MagicMock(ServiceRecord)\n        mockServiceRecord._address = HOST\n        mockServiceRecord._type = \"Storage\"\n        mockServiceRecord._port = PORT\n        mockServiceRecord._management_port = 2000\n\n        rsc = ReadingsStorageClientAsync(1, 2, mockServiceRecord)\n        assert \"{}:{}\".format(HOST, PORT) == rsc.base_url\n        \n        RETAINALL_FLAG = \"retainall\"\n        PURGE = \"purge\"\n        with pytest.raises(Exception) as excinfo:\n            kwargs = dict(flag='blah', age=1, sent_id=0, size=None)\n            await rsc.purge(**kwargs)\n        assert excinfo.type is InvalidReadingsPurgeFlagParameters\n        assert \"Purge flag valid options are retain or purge only\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            kwargs = dict(age=1, sent_id=0, size=1, flag=RETAINALL_FLAG)\n            await rsc.purge(**kwargs)\n        assert excinfo.type is PurgeOnlyOneOfAgeAndSize\n        assert \"Purge must specify only one of age or size\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            kwargs = dict(age=None, sent_id=0, size=None, flag=RETAINALL_FLAG)\n            await rsc.purge(**kwargs)\n        assert excinfo.type is PurgeOneOfAgeAssetAndSize\n        assert \"Purge must specify one of age, size or asset\" in str(excinfo.value)\n\n        with pytest.raises(Exception) as excinfo:\n            kwargs = dict(age=0, sent_id=0, size=0, flag=RETAINALL_FLAG)\n            await rsc.purge(**kwargs)\n        assert excinfo.type is PurgeOneOfAgeAssetAndSize\n        assert \"Purge must specify one of age, size or asset\" in str(excinfo.value)\n\n        # age int\n        with pytest.raises(Exception) as excinfo:\n       
     kwargs = dict(age=\"1b\", sent_id=0, size=None, flag=RETAINALL_FLAG)\n            await rsc.purge(**kwargs)\n        assert excinfo.type is ValueError\n        assert \"invalid literal for int() with base 10\" in str(excinfo.value)\n\n        # size int\n        with pytest.raises(Exception) as excinfo:\n            kwargs = dict(age=None, sent_id=0, size=\"1b\", flag=RETAINALL_FLAG)\n            await rsc.purge(**kwargs)\n        assert excinfo.type is ValueError\n        assert \"invalid literal for int() with base 10\" in str(excinfo.value)\n\n        # sent_id int\n        with pytest.raises(Exception) as excinfo:\n            kwargs = dict(age=1, sent_id=\"1b\", size=None, flag=RETAINALL_FLAG)\n            await rsc.purge(**kwargs)\n        assert excinfo.type is ValueError\n        assert \"invalid literal for int() with base 10\" in str(excinfo.value)\n\n        with patch.object(_LOGGER, \"error\") as log_e:\n            kwargs = dict(age=-1, sent_id=1, size=None, flag=RETAINALL_FLAG)\n            result = await rsc.purge(**kwargs)\n            assert result is None\n        log_e.assert_called()\n\n        with patch.object(_LOGGER, \"error\") as log_e:\n            kwargs = dict(age=None, sent_id=1, size=4294967296, flag=RETAINALL_FLAG)\n            result = await rsc.purge(**kwargs)\n            assert result is None\n        log_e.assert_called()\n\n        kwargs = dict(age=1, sent_id=1, size=0, flag=RETAINALL_FLAG)\n        response = await rsc.purge(**kwargs)\n        assert 1 == response[\"called\"]\n\n        kwargs = dict(age=0, sent_id=1, size=1, flag=RETAINALL_FLAG)\n        response = await rsc.purge(**kwargs)\n        assert 1 == response[\"called\"]\n\n        kwargs = dict(age=1, sent_id=1, size=None, flag=RETAINALL_FLAG)\n        response = await rsc.purge(**kwargs)\n        assert 1 == response[\"called\"]\n\n        kwargs = dict(age=None, sent_id=1, size=1, flag=RETAINALL_FLAG)\n        response = await rsc.purge(**kwargs)\n        
assert 1 == response[\"called\"]\n\n        with patch.object(_LOGGER, \"error\") as log_e:\n            kwargs = dict(age=-1, sent_id=1, size=None, flag=PURGE)\n            result = await rsc.purge(**kwargs)\n            assert result is None\n        log_e.assert_called()\n\n        with patch.object(_LOGGER, \"error\") as log_e:\n            kwargs = dict(age=None, sent_id=1, size=4294967296, flag=PURGE)\n            result = await rsc.purge(**kwargs)\n            assert result is None\n        log_e.assert_called()\n\n        kwargs = dict(age=10, sent_id=1, size=None, flag=PURGE)\n        response = await rsc.purge(**kwargs)\n        assert 1 == response[\"called\"]\n\n        kwargs = dict(age=None, sent_id=1, size=100, flag=PURGE)\n        response = await rsc.purge(**kwargs)\n        assert 1 == response[\"called\"]\n\n        kwargs = dict(asset=\"sin #1\")\n        response = await rsc.purge(**kwargs)\n        assert 1 == response[\"called\"]\n\n        await fake_storage_srvr.stop()\n"
  },
  {
    "path": "tests/unit/python/fledge/common/storage_client/test_utils.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test common/storage_client/utils.py \"\"\"\n\nimport pytest\nfrom fledge.common.storage_client.utils import Utils\n\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestUtils:\n\n    @pytest.mark.parametrize(\"test_input\", ['{\"k\": \"v\"}',\n                                            '{\"k\": 1}',\n                                            '{\"k\": []}',\n                                            '{\"k\": {}}',\n                                            '[]',\n                                            '[{\"k\": {\"k1\": \"v1\"}}]',\n                                            '{}',\n                                            \"{\\\"k\\\": \\\"v\\\"}\",\n                                            \"{\\\"k\\\": 1}\",\n                                            ])\n    def test_is_json_return_true_with_valid_json(self, test_input):\n        ret_val = Utils.is_json(test_input)\n        assert ret_val is True\n\n    @pytest.mark.parametrize(\"test_input\", ['{ k\": \"v\"}',\n                                            '[\"k\": {}]',\n                                            1,\n                                            'a',\n                                            b'any',\n                                            {\"k\", \"v\"},\n                                            '{k: v}',\n                                            \"{'k': 'v'}\",\n                                            \"{\\'k\\\": \\\"v\\\"}\"\n                                            ])\n    def test_is_json_return_false_with_invalid_json(self, test_input):\n        ret_val = Utils.is_json(test_input)\n        assert ret_val is False\n"
  },
  {
    "path": "tests/unit/python/fledge/common/test_alert_manager.py",
    "content": "import json\n\nfrom unittest.mock import MagicMock, patch\nimport pytest\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.alert_manager import AlertManager\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2024 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nclass TestAlertManager:\n    \"\"\" Alert Manager \"\"\"\n    alert_manager = None\n\n    async def async_mock(self, ret_val):\n        return ret_val\n\n    def setup_method(self):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        self.alert_manager = AlertManager(storage_client=storage_client_mock)\n        self.alert_manager.storage_client = storage_client_mock\n        #self.alert_manager.alerts = []\n\n    def teardown_method(self):\n        self.alert_manager.alerts = []\n        self.alert_manager = None\n\n    async def test_urgency(self):\n        urgencies = self.alert_manager.urgency\n        assert 4 == len(urgencies)\n        assert ['Critical', 'High', 'Normal', 'Low'] == list(urgencies.keys())\n\n    @pytest.mark.parametrize(\"urgency_index, urgency\", [\n        ('1', 'UNKNOWN'),\n        ('High', 'UNKNOWN'),\n        (0, 'UNKNOWN'),\n        (1, 'Critical'),\n        (2, 'High'),\n        (3, 'Normal'),\n        (4, 'Low')\n    ])\n    async def test__urgency_name_by_value(self, urgency_index, urgency):\n        value = self.alert_manager._urgency_name_by_value(value=urgency_index)\n        assert urgency == value\n\n    @pytest.mark.parametrize(\"storage_result, response\", [\n        ({\"rows\": [], 'count': 0}, []),\n        ({\"rows\": [{\"key\": \"RW\", \"message\": \"The Service RW restarted 1 times\", \"urgency\": 3,\n                    \"timestamp\": \"2024-03-01 09:40:34.482\"}], 'count': 1}, [{\"key\": \"RW\", \"message\":\n            \"The Service RW restarted 1 times\", \"urgency\": \"Normal\", \"timestamp\": \"2024-03-01 09:40:34.482\"}])\n  
  ])\n    async def test_get_all(self, storage_result, response):\n        rv = await self.async_mock(storage_result)\n        with patch.object(self.alert_manager.storage_client, 'query_tbl_with_payload', return_value=rv\n                          ) as patch_query_tbl:\n            result = await self.alert_manager.get_all()\n            assert response == result\n        args, _ = patch_query_tbl.call_args\n        assert 'alerts' == args[0]\n        assert {\"return\": [\"key\", \"message\", \"urgency\", {\"column\": \"ts\", \"alias\": \"timestamp\",\n                                                         \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}]} == json.loads(args[1])\n\n\n    async def test_bad_get_all(self):\n        storage_result = {\"rows\": [{}], 'count': 1}\n        rv = await self.async_mock(storage_result)\n        with patch.object(self.alert_manager.storage_client, 'query_tbl_with_payload', return_value=rv\n                          ) as patch_query_tbl:\n            with pytest.raises(Exception) as ex:\n                await self.alert_manager.get_all()\n            assert \"'key'\" == str(ex.value)\n        args, _ = patch_query_tbl.call_args\n        assert 'alerts' == args[0]\n        assert {\"return\": [\"key\", \"message\", \"urgency\", {\"column\": \"ts\", \"alias\": \"timestamp\",\n                                                         \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}]} == json.loads(args[1])\n\n    async def test_get_by_key_when_in_cache(self):\n        self.alert_manager.alerts = [{\"key\": \"RW\", \"message\": \"The Service RW restarted 1 times\", \"urgency\": 3,\n                    \"timestamp\": \"2024-03-01 09:40:34.482\"}]\n        key = \"RW\"\n        result = await self.alert_manager.get_by_key(key)\n        assert self.alert_manager.alerts[0] == result\n\n    async def test_get_by_key_not_found(self):\n        key = \"Sine\"\n        storage_result = {\"rows\": [], 'count': 1}\n        rv = await 
self.async_mock(storage_result)\n        with patch.object(self.alert_manager.storage_client, 'query_tbl_with_payload', return_value=rv\n                          ) as patch_query_tbl:\n            with pytest.raises(Exception) as ex:\n                await self.alert_manager.get_by_key(key)\n            assert ex.type is KeyError\n            assert \"'{} alert not found.'\".format(key) == str(ex.value)\n        args, _ = patch_query_tbl.call_args\n        assert 'alerts' == args[0]\n        assert {\"return\": [\"key\", \"message\", \"urgency\", {\"column\": \"ts\", \"alias\": \"timestamp\",\n                                                         \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}],\n                \"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": key}} == json.loads(args[1])\n\n    async def test_get_by_key_when_not_in_cache(self):\n        key = 'update'\n        storage_result = {\"rows\": [{\"key\": \"RW\", \"message\": \"The Service RW restarted 1 times\", \"urgency\": 3,\n                    \"timestamp\": \"2024-03-01 09:40:34.482\"}], 'count': 1}\n        rv = await self.async_mock(storage_result)\n        with patch.object(self.alert_manager.storage_client, 'query_tbl_with_payload', return_value=rv\n                          ) as patch_query_tbl:\n            result = await self.alert_manager.get_by_key(key)\n            storage_result['rows'][0]['urgency'] = 'Normal'\n            assert storage_result['rows'][0] == result\n        args, _ = patch_query_tbl.call_args\n        assert 'alerts' == args[0]\n        assert {\"return\": [\"key\", \"message\", \"urgency\", {\"column\": \"ts\", \"alias\": \"timestamp\",\n                                                         \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}],\n                \"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": key}} == json.loads(args[1])\n\n    async def test_add(self):\n        params = {\"key\": \"update\", 'message': 'New version available', 
'urgency': 'High'}\n        storage_result = {'rows_affected': 1, \"response\": \"inserted\"}\n        rv = await self.async_mock(storage_result)\n        with patch.object(self.alert_manager.storage_client, 'insert_into_tbl', return_value=rv\n                          ) as insert_tbl_patch:\n            result = await self.alert_manager.add(params)\n            assert 'alert' in result\n            assert params == result['alert']\n        args, _ = insert_tbl_patch.call_args\n        assert 'alerts' == args[0]\n        assert params == json.loads(args[1])\n\n    async def test_bad_add(self):\n        params = {\"key\": \"update\", 'message': 'New version available', 'urgency': 'High'}\n        storage_result = {}\n        rv = await self.async_mock(storage_result)\n        with patch.object(self.alert_manager.storage_client, 'insert_into_tbl', return_value=rv\n                          ) as insert_tbl_patch:\n            with pytest.raises(Exception) as ex:\n                await self.alert_manager.add(params)\n            assert \"'response'\" == str(ex.value)\n        args, _ = insert_tbl_patch.call_args\n        assert 'alerts' == args[0]\n        assert params == json.loads(args[1])\n\n    async def test_delete(self):\n        storage_result = {'rows_affected': 1, \"response\": \"deleted\"}\n        rv = await self.async_mock(storage_result)\n        with patch.object(self.alert_manager.storage_client, 'delete_from_tbl', return_value=rv\n                          ) as delete_tbl_patch:\n            result = await self.alert_manager.delete()\n            assert 'alert' in result\n            assert \"Delete all alerts.\" == result\n        args, _ = delete_tbl_patch.call_args\n        assert 'alerts' == args[0]\n\n    async def test_delete_by_key(self):\n        key = \"RW\"\n        self.alert_manager.alerts = [{\"key\": key, \"message\": \"The Service RW restarted 1 times\", \"urgency\": 3,\n                                      \"timestamp\": \"2024-03-01 
09:40:34.482\"}]\n        storage_result = {'rows_affected': 1, \"response\": \"deleted\"}\n        rv = await self.async_mock(storage_result)\n        with patch.object(self.alert_manager.storage_client, 'delete_from_tbl', return_value=rv\n                          ) as delete_tbl_patch:\n            result = await self.alert_manager.delete(key)\n            assert 'alert' in result\n            assert \"{} alert is deleted.\".format(key) == result\n        args, _ = delete_tbl_patch.call_args\n        assert 'alerts' == args[0]\n        assert {\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": key}} == json.loads(args[1])\n\n    async def test_bad_delete(self):\n        with pytest.raises(Exception) as ex:\n            await self.alert_manager.delete(\"Update\")\n        assert ex.type is KeyError\n        assert \"\" == str(ex.value)\n\n"
  },
  {
    "path": "tests/unit/python/fledge/common/test_audit_logger.py",
    "content": "# -*- coding: utf-8 -*-\n\nimport asyncio\nimport pytest\nfrom unittest.mock import MagicMock\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\n\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nasync def mock_coro():\n    return None\n\n\nclass TestAuditLogger:\n\n    @pytest.mark.asyncio\n    async def test_constructor_no_storage(self):\n        \"\"\" Test that we must construct with a storage client \"\"\"\n        with pytest.raises(TypeError) as excinfo:\n            AuditLogger()\n        assert 'Must be a valid Storage object' in str(excinfo.value)\n\n    @pytest.mark.asyncio\n    async def test_singleton(self, event_loop):\n        \"\"\" Test that two audit loggers share the same state \"\"\"\n        storageMock1 = MagicMock(spec=StorageClientAsync)\n        attrs = {'insert_into_tbl.return_value': asyncio.ensure_future(mock_coro(), loop=event_loop)}\n        storageMock1.configure_mock(**attrs)\n        a1 = AuditLogger(storageMock1)\n\n        storageMock2 = MagicMock(spec=StorageClientAsync)\n        attrs = {'insert_into_tbl.return_value': asyncio.ensure_future(mock_coro(), loop=event_loop)}\n        storageMock2.configure_mock(**attrs)\n        a2 = AuditLogger(storageMock2)\n\n        assert a1._storage == a2._storage\n        a1._storage.insert_into_tbl.reset_mock()\n\n    @pytest.mark.asyncio\n    async def test_failure(self, event_loop):\n        \"\"\" Test that audit log results in a database insert \"\"\"\n        storageMock = MagicMock(spec=StorageClientAsync)\n        attrs = {'insert_into_tbl.return_value': asyncio.ensure_future(mock_coro(), loop=event_loop)}\n        storageMock.configure_mock(**attrs)\n        audit = AuditLogger(storageMock)\n        await audit.failure('AUDTCODE', {'message': 'failure'})\n        assert audit._storage.insert_into_tbl.called is True\n       
 audit._storage.insert_into_tbl.reset_mock()\n\n    @pytest.mark.asyncio\n    async def test_warning(self, event_loop):\n        \"\"\" Test that audit log results in a database insert \"\"\"\n        storageMock = MagicMock(spec=StorageClientAsync)\n        attrs = {'insert_into_tbl.return_value': asyncio.ensure_future(mock_coro(), loop=event_loop)}\n        storageMock.configure_mock(**attrs)\n        audit = AuditLogger(storageMock)\n        await audit.warning('AUDTCODE', { 'message': 'failure' })\n        assert audit._storage.insert_into_tbl.called is True\n        audit._storage.insert_into_tbl.reset_mock()\n\n    @pytest.mark.asyncio\n    async def test_information(self, event_loop):\n        \"\"\" Test that audit log results in a database insert \"\"\"\n        storageMock = MagicMock(spec=StorageClientAsync)\n        attrs = {'insert_into_tbl.return_value': asyncio.ensure_future(mock_coro(), loop=event_loop)}\n        storageMock.configure_mock(**attrs)\n        audit = AuditLogger(storageMock)\n        await audit.information('AUDTCODE', { 'message': 'failure' })\n        assert audit._storage.insert_into_tbl.called is True\n        audit._storage.insert_into_tbl.reset_mock()\n\n    @pytest.mark.asyncio\n    async def test_success(self, event_loop):\n        \"\"\" Test that audit log results in a database insert \"\"\"\n        storageMock = MagicMock(spec=StorageClientAsync)\n        attrs = {'insert_into_tbl.return_value': asyncio.ensure_future(mock_coro(), loop=event_loop)}\n        storageMock.configure_mock(**attrs)\n        audit = AuditLogger(storageMock)\n        await audit.success('AUDTCODE', { 'message': 'failure' })\n        assert audit._storage.insert_into_tbl.called is True\n        audit._storage.insert_into_tbl.reset_mock()\n\n    @pytest.mark.asyncio\n    async def test_failure_no_data(self, event_loop):\n        \"\"\" Test that audit log results in a database insert \"\"\"\n        storageMock = MagicMock(spec=StorageClientAsync)\n  
      attrs = {'insert_into_tbl.return_value': asyncio.ensure_future(mock_coro(), loop=event_loop)}\n        storageMock.configure_mock(**attrs)\n        audit = AuditLogger(storageMock)\n        await audit.failure('AUDTCODE', None)\n        assert audit._storage.insert_into_tbl.called is True\n        audit._storage.insert_into_tbl.reset_mock()\n\n    @pytest.mark.asyncio\n    async def test_warning_no_data(self, event_loop):\n        \"\"\" Test that audit log results in a database insert \"\"\"\n        storageMock = MagicMock(spec=StorageClientAsync)\n        attrs = {'insert_into_tbl.return_value': asyncio.ensure_future(mock_coro(), loop=event_loop)}\n        storageMock.configure_mock(**attrs)\n        audit = AuditLogger(storageMock)\n        await audit.warning('AUDTCODE', None)\n        assert audit._storage.insert_into_tbl.called is True\n        audit._storage.insert_into_tbl.reset_mock()\n\n    @pytest.mark.asyncio\n    async def test_information_no_data(self, event_loop):\n        \"\"\" Test that audit log results in a database insert \"\"\"\n        storageMock = MagicMock(spec=StorageClientAsync)\n        attrs = {'insert_into_tbl.return_value': asyncio.ensure_future(mock_coro(), loop=event_loop)}\n        storageMock.configure_mock(**attrs)\n        audit = AuditLogger(storageMock)\n        await audit.information('AUDTCODE', None)\n        assert audit._storage.insert_into_tbl.called is True\n        audit._storage.insert_into_tbl.reset_mock()\n\n    @pytest.mark.asyncio\n    async def test_success_no_data(self, event_loop):\n        \"\"\" Test that audit log results in a database insert \"\"\"\n        storageMock = MagicMock(spec=StorageClientAsync)\n        attrs = {'insert_into_tbl.return_value': asyncio.ensure_future(mock_coro(), loop=event_loop)}\n        storageMock.configure_mock(**attrs)\n        audit = AuditLogger(storageMock)\n        await audit.success('AUDTCODE', None)\n        assert audit._storage.insert_into_tbl.called is 
True\n        audit._storage.insert_into_tbl.reset_mock()\n"
  },
  {
    "path": "tests/unit/python/fledge/common/test_common_utils.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Unit tests for common utils \"\"\"\n\nimport pytest\nfrom fledge.common import utils as common_utils\nfrom collections import Counter\n\n\nclass TestCommonUtils:\n    @pytest.mark.parametrize(\"test_string, expected\", [\n        (\"Gabbar&Gang\", False),\n        (\"with;Sambha\", False),\n        (\"andothers,halkats\", False),\n        (\"@Rampur\", False),\n        (\"triedloot/arson\", False),\n        (\"For$Gold\", False),\n        (\"Andlot{more\", False),\n        (\"Andmore}\", False),\n        (\"Veeru+Jai\", False),\n        (\"Gaonwale,Thakur\", False),\n        (\"=resisted\", False),\n        (\"successfully:\", False),\n        (\"any attack!\", True),\n    ])\n    def test_check_reserved(self, test_string, expected):\n        actual = common_utils.check_reserved(test_string)\n        assert expected == actual\n"
  },
  {
    "path": "tests/unit/python/fledge/common/test_configuration_cache.py",
    "content": "# -*- coding: utf-8 -*-\n\nimport pytest\nfrom fledge.common.configuration_manager import ConfigurationCache\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestConfigurationCache:\n\n    @pytest.mark.parametrize(\"size\", [\n        0, 1, 10, 20, 1000\n    ])\n    def test_init(self, size):\n        cached_manager = ConfigurationCache(size)\n        assert {} == cached_manager.cache\n        assert size == cached_manager.max_cache_size\n        assert 0 == cached_manager.hit\n        assert 0 == cached_manager.miss\n\n    def test_size(self):\n        cached_manager = ConfigurationCache()\n        assert 0 == cached_manager.size\n\n    def test_contains_with_no_cache(self):\n        cached_manager = ConfigurationCache()\n        assert cached_manager.__contains__(\"Blah\") is False\n\n    def test_contains_with_cache(self):\n        cached_manager = ConfigurationCache()\n        cached_manager.cache = {\"test_cat\": {'value': {'config_item': {'default': 'woo', 'description': 'foo',\n                                                                       'type': 'string'}}}}\n        assert cached_manager.__contains__(\"test_cat\") is True\n\n    def test_update(self):\n        cached_manager = ConfigurationCache()\n        cat_name = \"test_cat\"\n        cat_desc = \"test_desc\"\n        cat_val = {'config_item': {'default': 'woo', 'description': 'foo', 'type': 'string'}}\n        cat_display_name = \"AJ\"\n        cached_manager.cache = {cat_name: {'value': {}}}\n        cached_manager.update(cat_name, cat_desc, cat_val)\n        assert 'date_accessed' in cached_manager.cache[cat_name]\n        assert cat_desc == cached_manager.cache[cat_name]['description']\n        assert cat_val == cached_manager.cache[cat_name]['value']\n        assert cat_name == cached_manager.cache[cat_name]['displayName']\n\n        cached_manager.update(cat_name, 
cat_desc, cat_val, cat_display_name)\n        assert 'date_accessed' in cached_manager.cache[cat_name]\n        assert cat_desc == cached_manager.cache[cat_name]['description']\n        assert cat_val == cached_manager.cache[cat_name]['value']\n        assert cat_display_name == cached_manager.cache[cat_name]['displayName']\n\n    @pytest.mark.parametrize(\"size, cat_names, cat_miss\", [\n        (1, ['cat10'], ['cat1', 'cat2', 'cat3', 'cat4', 'cat5', 'cat6', 'cat7', 'cat8', 'cat9']),\n        (2, ['cat9', 'cat10'], ['cat1', 'cat2', 'cat3', 'cat4', 'cat5', 'cat6', 'cat7', 'cat8']),\n        (10, ['cat1', 'cat2', 'cat3', 'cat4', 'cat5', 'cat6', 'cat7', 'cat8', 'cat9', 'cat10'], [])\n    ])\n    def test_update_with_cache_size(self, size, cat_names, cat_miss):\n        cached_manager = ConfigurationCache(size)\n        cached_manager.update(\"cat1\", \"desc1\", {'value': {}})\n        cached_manager.update(\"cat2\", \"desc2\", {'value': {}})\n        cached_manager.update(\"cat3\", \"desc3\", {'value': {}})\n        cached_manager.update(\"cat4\", \"desc4\", {'value': {}})\n        cached_manager.update(\"cat5\", \"desc5\", {'value': {}})\n        cached_manager.update(\"cat6\", \"desc6\", {'value': {}})\n        cached_manager.update(\"cat7\", \"desc7\", {'value': {}})\n        cached_manager.update(\"cat8\", \"desc8\", {'value': {}})\n        cached_manager.update(\"cat9\", \"desc9\", {'value': {}})\n        cached_manager.update(\"cat10\", \"desc10\", {'value': {}})\n        assert size == cached_manager.size\n        keys = list(cached_manager.cache.keys())\n        assert cat_names == keys\n        if set(cat_miss) & set(cat_names):\n            assert False, \"Category should not exist in cache manager\"\n\n    def test_remove_oldest(self):\n        cached_manager = ConfigurationCache()\n        cached_manager.max_cache_size = 10\n        cached_manager.update(\"cat1\", \"desc1\", {'value': {}})\n        cached_manager.update(\"cat2\", \"desc2\", {'value': 
{}})\n        cached_manager.update(\"cat3\", \"desc3\", {'value': {}})\n        cached_manager.update(\"cat4\", \"desc4\", {'value': {}})\n        cached_manager.update(\"cat5\", \"desc5\", {'value': {}})\n        cached_manager.update(\"cat6\", \"desc6\", {'value': {}})\n        cached_manager.update(\"cat7\", \"desc7\", {'value': {}})\n        cached_manager.update(\"cat8\", \"desc8\", {'value': {}})\n        cached_manager.update(\"cat9\", \"desc9\", {'value': {}})\n        cached_manager.update(\"cat10\", \"desc10\", {'value': {}})\n        assert 10 == cached_manager.size\n        cached_manager.update(\"cat11\", \"desc11\", {'value': {}})\n        assert 'cat1' not in cached_manager.cache\n        assert 'cat2' in cached_manager.cache\n        assert 'cat3' in cached_manager.cache\n        assert 'cat4' in cached_manager.cache\n        assert 'cat5' in cached_manager.cache\n        assert 'cat6' in cached_manager.cache\n        assert 'cat7' in cached_manager.cache\n        assert 'cat8' in cached_manager.cache\n        assert 'cat9' in cached_manager.cache\n        assert 'cat10' in cached_manager.cache\n        assert 'cat11' in cached_manager.cache\n        assert 10 == cached_manager.size\n\n    def test_remove(self):\n        cached_manager = ConfigurationCache()\n        cached_manager.update(\"cat1\", \"desc1\", {'value': {}})\n        cached_manager.update(\"cat2\", \"desc2\", {'value': {}})\n        cached_manager.update(\"cat3\", \"desc3\", {'value': {}})\n        cached_manager.update(\"cat4\", \"desc4\", {'value': {}})\n        assert 4 == cached_manager.size\n        cached_manager.remove(\"cat2\")\n        assert 3 == cached_manager.size\n        assert 'cat2' not in cached_manager.cache\n        assert 'cat1' in cached_manager.cache\n        assert 'cat3' in cached_manager.cache\n        assert 'cat4' in cached_manager.cache\n"
  },
  {
    "path": "tests/unit/python/fledge/common/test_configuration_manager.py",
    "content": "# -*- coding: utf-8 -*-\n\nimport asyncio\nimport json\nimport ipaddress\nfrom unittest.mock import MagicMock, patch, call\nimport pytest\nfrom fledge.common.configuration_manager import ConfigurationManager, ConfigurationManagerSingleton, \\\n    _valid_type_strings, _logger, _optional_items\nfrom fledge.common.storage_client.payload_builder import PayloadBuilder\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.common.audit_logger import AuditLogger\n\n\n__author__ = \"Ashwin Gopalakrishnan\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nCAT_NAME = 'test'\nITEM_NAME = \"test_item_name\"\n\n\nclass TestConfigurationManager:\n    async def async_mock(self, return_value):\n        return return_value\n\n    @pytest.fixture()\n    def reset_singleton(self):\n        # executed before each test\n        ConfigurationManagerSingleton._shared_state = {}\n        yield\n        ConfigurationManagerSingleton._shared_state = {}\n\n    def test_supported_validate_type_strings(self):\n        expected_types = ['IPv4', 'IPv6', 'JSON', 'URL', 'X509 certificate', 'boolean', 'code', 'enumeration',\n                          'float', 'integer', 'northTask', 'password', 'script', 'string', 'ACL', 'bucket',\n                          'list', 'kvlist']\n        assert len(expected_types) == len(_valid_type_strings)\n        assert sorted(expected_types) == _valid_type_strings\n\n    def test_supported_optional_items(self):\n        expected_types = ['deprecated', 'displayName', 'group', 'length', 'mandatory', 'maximum', 'minimum', 'order',\n                          'readonly', 'rule', 'validity', 'listSize', 'listName', 'permissions']\n        assert len(expected_types) == len(_optional_items)\n        assert sorted(expected_types) == _optional_items\n\n    def 
test_constructor_no_storage_client_defined_no_storage_client_passed(\n            self, reset_singleton):\n        # first time initializing ConfigurationManager without storage client\n        # produces error\n        with pytest.raises(TypeError) as excinfo:\n            ConfigurationManager()\n        assert 'Must be a valid Storage object' in str(excinfo.value)\n\n    def test_constructor_no_storage_client_defined_storage_client_passed(\n            self, reset_singleton):\n        # first time initializing ConfigurationManager with storage client\n        # works\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        assert hasattr(c_mgr, '_storage')\n        assert isinstance(c_mgr._storage, StorageClientAsync)\n        assert hasattr(c_mgr, '_registered_interests')\n\n    def test_constructor_storage_client_defined_storage_client_passed(\n            self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        # second time initializing ConfigurationManager with new storage client\n        # works\n        storage_client_mock2 = MagicMock(spec=StorageClientAsync)\n        c_mgr2 = ConfigurationManager(storage_client_mock2)\n        assert hasattr(c_mgr2, '_storage')\n        # ignore new storage client\n        assert isinstance(c_mgr2._storage, StorageClientAsync)\n        assert hasattr(c_mgr2, '_registered_interests')\n\n    def test_constructor_storage_client_defined_no_storage_client_passed(\n            self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        # second time initializing ConfigurationManager without storage client\n        # works\n        c_mgr2 = ConfigurationManager()\n        assert hasattr(c_mgr2, '_storage')\n        assert isinstance(c_mgr2._storage, StorageClientAsync)\n        assert hasattr(c_mgr2, 
'_registered_interests')\n        assert 0 == len(c_mgr._registered_interests)\n\n    def test_register_interest_no_category_name(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(ValueError) as excinfo:\n            c_mgr.register_interest(None, 'callback')\n        assert 'Failed to register interest. category_name cannot be None' in str(\n            excinfo.value)\n\n    def test_register_interest_no_callback(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(ValueError) as excinfo:\n            c_mgr.register_interest('name', None)\n        assert 'Failed to register interest. callback cannot be None' in str(\n            excinfo.value)\n\n    def test_register_interest(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        c_mgr.register_interest('name', 'callback')\n        assert 'callback' in c_mgr._registered_interests['name']\n        assert 1 == len(c_mgr._registered_interests)\n\n    def test_unregister_interest_no_category_name(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(ValueError) as excinfo:\n            c_mgr.unregister_interest(None, 'callback')\n        assert 'Failed to unregister interest. 
category_name cannot be None' in str(\n            excinfo.value)\n\n    def test_unregister_interest_no_callback(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(ValueError) as excinfo:\n            c_mgr.unregister_interest('name', None)\n        assert 'Failed to unregister interest. callback cannot be None' in str(\n            excinfo.value)\n\n    def test_unregister_interest(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        c_mgr.register_interest('name', 'callback')\n        assert 1 == len(c_mgr._registered_interests)\n        c_mgr.unregister_interest('name', 'callback')\n        assert len(c_mgr._registered_interests) is 0\n\n    async def test__run_callbacks(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        c_mgr.register_interest('name', 'configuration_manager_callback')\n        await c_mgr._run_callbacks('name')\n\n    async def test__run_callbacks_invalid_module(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        c_mgr.register_interest('name', 'invalid')\n        with patch.object(_logger, \"error\") as log_error:\n            with pytest.raises(Exception) as excinfo:\n                await c_mgr._run_callbacks('name')\n            import sys\n            if sys.version_info[1] >= 6:\n                assert excinfo.type is ModuleNotFoundError\n            else:\n                assert excinfo.type is ImportError\n            assert \"No module named 'invalid'\" == str(excinfo.value)\n        assert 1 == log_error.call_count\n        log_error.assert_called_once_with('Unable to import callback module %s for 
category_name %s', 'invalid',\n                                          'name', exc_info=True)\n\n    async def test__run_callbacks_norun(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        c_mgr.register_interest('name', 'configuration_manager_callback_norun')\n        with patch.object(_logger, \"error\") as log_error:\n            with pytest.raises(Exception) as excinfo:\n                await c_mgr._run_callbacks('name')\n            assert excinfo.type is AttributeError\n            assert 'Callback module configuration_manager_callback_norun does not have method run' in str(\n                excinfo.value)\n        assert 1 == log_error.call_count\n        log_error.assert_called_once_with('Callback module %s does not have method run',\n                                          'configuration_manager_callback_norun', exc_info=True)\n\n    async def test__run_callbacks_nonasync(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        c_mgr.register_interest(\n            'name', 'configuration_manager_callback_nonasync')\n        with patch.object(_logger, \"error\") as log_error:\n            with pytest.raises(Exception) as excinfo:\n                await c_mgr._run_callbacks('name')\n            assert excinfo.type is AttributeError\n            assert 'Callback module configuration_manager_callback_nonasync run method must be a coroutine function' in\\\n                   str(excinfo.value)\n        assert 1 == log_error.call_count\n        log_error.assert_called_once_with('Callback module %s run method must be a coroutine function',\n                                          'configuration_manager_callback_nonasync', exc_info=True)\n\n    async def test__validate_category_val_valid_config_use_default_val(self, reset_singleton):\n        
storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\"\n            },\n        }\n        c_return_value = await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                                            set_value_val_from_default_val=True)\n        assert isinstance(c_return_value, dict)\n        assert len(c_return_value) is 1\n        test_item_val = c_return_value.get(ITEM_NAME)\n        assert isinstance(test_item_val, dict)\n        assert len(test_item_val) is 4\n        assert test_item_val.get(\"description\") is \"test description val\"\n        assert test_item_val.get(\"type\") is \"string\"\n        assert test_item_val.get(\"default\") is \"test default val\"\n        assert test_item_val.get(\"value\") is \"test default val\"\n\n        # deep copy check to make sure test_config wasn't modified in the\n        # method call\n        assert test_config is not c_return_value\n        assert isinstance(test_config, dict)\n        assert len(test_config) is 1\n        test_item_val = test_config.get(ITEM_NAME)\n        assert isinstance(test_item_val, dict)\n        assert len(test_item_val) is 3\n        assert test_item_val.get(\"description\") is \"test description val\"\n        assert test_item_val.get(\"type\") is \"string\"\n        assert test_item_val.get(\"default\") is \"test default val\"\n\n    async def test__validate_category_val_invalid_config_use_default_val(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": 
\"IPv4\",\n                \"default\": \"test default val\",\n                \"displayName\": \"{}\"\n            },\n        }\n\n        with pytest.raises(Exception) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                               set_value_val_from_default_val=True)\n        assert excinfo.type is ValueError\n        assert \"For {} category, unrecognized value for item name {}\".format(\n            CAT_NAME, ITEM_NAME) == str(excinfo.value)\n\n    async def test__validate_category_val_valid_config_use_value_val(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\",\n                \"value\": \"test value val\"\n            },\n        }\n        c_return_value = await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                                            set_value_val_from_default_val=False)\n        assert isinstance(c_return_value, dict)\n        assert len(c_return_value) is 1\n        test_item_val = c_return_value.get(ITEM_NAME)\n        assert isinstance(test_item_val, dict)\n        assert len(test_item_val) is 4\n        assert test_item_val.get(\"description\") is \"test description val\"\n        assert test_item_val.get(\"type\") is \"string\"\n        assert test_item_val.get(\"default\") is \"test default val\"\n        assert test_item_val.get(\"value\") is \"test value val\"\n        # deep copy check to make sure test_config wasn't modified in the\n        # method call\n        assert test_config is not c_return_value\n        assert isinstance(test_config, dict)\n        assert len(test_config) is 1\n      
  test_item_val = test_config.get(ITEM_NAME)\n        assert isinstance(test_item_val, dict)\n        assert len(test_item_val) is 4\n        assert test_item_val.get(\"description\") is \"test description val\"\n        assert test_item_val.get(\"type\") is \"string\"\n        assert test_item_val.get(\"default\") is \"test default val\"\n        assert test_item_val.get(\"value\") is \"test value val\"\n\n    async def test__validate_category_optional_attributes_and_use_value(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\",\n                \"value\": \"test value val\",\n                \"readonly\": \"false\",\n                \"length\": \"100\"\n            },\n        }\n        c_return_value = await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                                            set_value_val_from_default_val=False)\n        assert isinstance(c_return_value, dict)\n        assert len(c_return_value) is 1\n        test_item_val = c_return_value.get(ITEM_NAME)\n        assert isinstance(test_item_val, dict)\n        assert 6 == len(test_item_val)\n        assert \"test description val\" == test_item_val.get(\"description\")\n        assert \"string\" == test_item_val.get(\"type\")\n        assert \"test default val\" == test_item_val.get(\"default\")\n        assert \"test value val\" == test_item_val.get(\"value\")\n        assert \"false\" == test_item_val.get(\"readonly\")\n        assert \"100\" == test_item_val.get(\"length\")\n\n        # deep copy check to make sure test_config wasn't modified in the\n        # method call\n        assert test_config is not c_return_value\n        assert 
isinstance(test_config, dict)\n        assert len(test_config) is 1\n        test_item_val = test_config.get(ITEM_NAME)\n        assert isinstance(test_item_val, dict)\n        assert 6 == len(test_item_val)\n        assert \"test description val\" == test_item_val.get(\"description\")\n        assert \"string\" == test_item_val.get(\"type\")\n        assert \"test default val\" == test_item_val.get(\"default\")\n        assert \"test value val\" == test_item_val.get(\"value\")\n        assert \"false\" == test_item_val.get(\"readonly\")\n        assert \"100\" == test_item_val.get(\"length\")\n\n    async def test__validate_category_optional_attributes_and_use_default_val(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\",\n                \"readonly\": \"false\",\n                \"length\": \"100\"\n            },\n        }\n        c_return_value = await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                                            set_value_val_from_default_val=True)\n        assert isinstance(c_return_value, dict)\n        assert 1 == len(c_return_value)\n        test_item_val = c_return_value.get(ITEM_NAME)\n        assert isinstance(test_item_val, dict)\n        assert 6 == len(test_item_val)\n        assert \"test description val\" == test_item_val.get(\"description\")\n        assert \"string\" == test_item_val.get(\"type\")\n        assert \"test default val\" == test_item_val.get(\"default\")\n        assert \"test default val\" == test_item_val.get(\"value\")\n        assert \"false\" == test_item_val.get(\"readonly\")\n        assert \"100\" == test_item_val.get(\"length\")\n\n        # deep 
copy check to make sure test_config wasn't modified in the\n        # method call\n        assert test_config is not c_return_value\n        assert isinstance(test_config, dict)\n        assert 1 == len(test_config)\n        test_item_val = test_config.get(ITEM_NAME)\n        assert isinstance(test_item_val, dict)\n        assert 5 == len(test_item_val)\n        assert \"test description val\" == test_item_val.get(\"description\")\n        assert \"string\" == test_item_val.get(\"type\")\n        assert \"test default val\" == test_item_val.get(\"default\")\n        assert \"false\" == test_item_val.get(\"readonly\")\n        assert \"100\" == test_item_val.get(\"length\")\n\n    @pytest.mark.parametrize(\"config, item_name, message\", [\n        ({\n             ITEM_NAME: {\n                 \"description\": \"test description val\",\n                 \"type\": \"string\",\n                 \"default\": \"test default val\",\n                 \"readonly\": \"unexpected\",\n             },\n         }, \"readonly\", \"boolean\"),\n        ({\n             ITEM_NAME: {\n                 \"description\": \"test description val\",\n                 \"type\": \"string\",\n                 \"default\": \"test default val\",\n                 \"order\": \"unexpected\",\n             },\n         }, \"order\", \"an integer\"),\n        ({\n             ITEM_NAME: {\n                 \"description\": \"test description val\",\n                 \"type\": \"string\",\n                 \"default\": \"test default val\",\n                 \"length\": \"unexpected\",\n             },\n         }, \"length\", \"an integer\"),\n        ({\n             ITEM_NAME: {\n                 \"description\": \"test description val\",\n                 \"type\": \"float\",\n                 \"default\": \"test default val\",\n                 \"minimum\": \"unexpected\",\n             },\n         }, \"minimum\", \"an integer or float\"),\n        ({\n             ITEM_NAME: {\n           
      \"description\": \"test description val\",\n                 \"type\": \"integer\",\n                 \"default\": \"test default val\",\n                 \"maximum\": \"unexpected\",\n             },\n         }, \"maximum\", \"an integer or float\"),\n        ({\n             ITEM_NAME: {\n                 \"description\": \"test description val\",\n                 \"type\": \"string\",\n                 \"default\": \"test default val\",\n                 \"mandatory\": \"1\",\n             },\n         }, \"mandatory\", \"boolean\")\n    ])\n    async def test__validate_category_val_optional_attributes_unrecognized_entry_name(self, config, item_name, message):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(Exception) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=config,\n                                               set_value_val_from_default_val=True)\n        assert excinfo.type is ValueError\n        assert \"For {} category, entry value must be {} for item name {}; got <class 'str'>\".format(\n            CAT_NAME, message, item_name) == str(excinfo.value)\n\n    async def test__validate_category_val_config_without_value_use_value_val(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\",\n            },\n        }\n        with pytest.raises(ValueError) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                               set_value_val_from_default_val=False)\n        assert 'For {} category, missing entry name value for item name 
{}'.format(\n            CAT_NAME, ITEM_NAME) == str(excinfo.value)\n\n    async def test__validate_category_val_config_not_dictionary(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        cat_name = 'blah'\n        test_config = ()\n        with pytest.raises(TypeError) as excinfo:\n            await c_mgr._validate_category_val(category_name=cat_name, category_val=test_config,\n                                               set_value_val_from_default_val=False)\n        assert 'For {} category, category value must be a dictionary; got {}'.format(\n            cat_name, type(test_config)) == str(excinfo.value)\n\n    async def test__validate_category_val_item_name_not_string(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        config_item = 5\n        test_config = {\n            config_item: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\",\n            },\n        }\n        with pytest.raises(TypeError) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                               set_value_val_from_default_val=False)\n        assert 'For {} category, item name {} must be a string; got {}'.format(\n            CAT_NAME, config_item, type(config_item)) == str(excinfo.value)\n\n    async def test__validate_category_val_item_value_not_dictionary(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        item_name = 'test_item_name'\n        test_config = {\n            item_name: ()\n        }\n        with pytest.raises(TypeError) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, 
category_val=test_config,\n                                               set_value_val_from_default_val=False)\n        assert 'For {} category, item value must be a dict for item name {}; got {}'.format(\n            CAT_NAME, item_name, type(test_config[item_name])) == str(excinfo.value)\n\n    async def test__validate_category_val_config_entry_name_not_string(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        entry_name = 5\n        item_name = 'test_item_name'\n        test_config = {\n            item_name: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\",\n                entry_name: \"bla\"\n            }\n        }\n        with pytest.raises(TypeError) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                               set_value_val_from_default_val=False)\n        assert 'For {} category, entry name {} must be a string for item name {}; got {}'.format(\n            CAT_NAME, entry_name, item_name, type(entry_name)) == str(excinfo.value)\n\n    async def test__validate_category_val_config_entry_val_not_string(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        entry_name = 'something'\n        entry_value = 5\n        item_name = 'test_item_name'\n        test_config = {\n            item_name: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\",\n                entry_name: entry_value\n            },\n        }\n        with pytest.raises(TypeError) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                 
              set_value_val_from_default_val=False)\n        assert 'For {} category, entry value must be a string for item name {} ' \\\n               'and entry name {}; got {}'.format(CAT_NAME, item_name, entry_name, type(entry_value)) == str(excinfo.value)\n\n    @pytest.mark.parametrize(\"config, exception_name, exception_msg\", [\n        ({\"description\": \"test description\", \"type\": \"enumeration\", \"default\": \"A\"},\n         KeyError, \"'For test category, options required for enumeration type'\"),\n        ({\"description\": \"test description\", \"type\": \"enumeration\", \"default\": \"A\", \"options\": \"\"},\n         TypeError, \"For test category, entry value must be a list for item name test_item_name and entry name options; got <class 'str'>\"),\n        ({\"description\": \"test description\", \"type\": \"enumeration\", \"default\": \"A\", \"options\": []},\n         ValueError, \"For test category, entry value cannot be empty list for item_name test_item_name and entry_name options; got []\"),\n        ({\"description\": \"test description\", \"type\": \"enumeration\", \"default\": \"C\", \"options\": [\"A\", \"B\"]},\n         ValueError, \"For test category, entry value does not exist in options list for item name test_item_name and entry_name options; got C\"),\n        ({\"description\": 1, \"type\": \"enumeration\", \"default\": \"A\", \"options\": [\"A\", \"B\"]},\n         TypeError, \"For test category, entry value must be a string for item name test_item_name and entry name description; got <class 'int'>\"),\n        ({\"description\": \"Test\", \"type\": \"enumeration\", \"default\": \"A\", \"options\": [\"A\", \"B\"], 'permissions': \"\"},\n         ValueError, \"For test category, permissions entry value must be a list of string for item name test_item_name;\"\n                     \" got <class 'str'>.\"),\n        ({\"description\": \"Test\", \"type\": \"enumeration\", \"default\": \"A\", \"options\": [\"A\", \"B\"], 
'permissions': []},\n         ValueError, \"For test category, permissions entry value must not be empty for item name test_item_name.\"),\n        ({\"description\": \"Test\", \"type\": \"enumeration\", \"default\": \"A\", \"options\": [\"A\", \"B\"], 'permissions': [\"\"]},\n         ValueError,\n         \"For test category, permissions entry values must be a string and non-empty for item name test_item_name.\"),\n        ({\"description\": \"Test\", \"type\": \"enumeration\", \"default\": \"A\", \"options\": [\"A\", \"B\"],\n          'permissions': [\"editor\", 2]}, ValueError,\n         \"For test category, permissions entry values must be a string and non-empty for item name test_item_name.\")\n    ])\n    async def test__validate_category_val_enum_type_bad(self, config, exception_name, exception_msg):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config = {ITEM_NAME: config}\n        with pytest.raises(Exception) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                               set_value_val_from_default_val=False)\n        assert excinfo.type is exception_name\n        assert exception_msg == str(excinfo.value)\n\n    @pytest.mark.parametrize(\"config\", [\n    ({ITEM_NAME: {\"description\": \"test description\", \"type\": \"bucket\",\n                  \"default\": \"{'type': 'model', 'name': 'Person', 'version': '1.0', 'hardware': 'tpu'}\", \"properties\":\n        {\"constant\": {\"type\": \"model\"}, \"key\": {\n            \"name\": {\"description\": \"TFlite model name to use for inference\", \"type\": \"string\", \"default\": \"People\",\n                     \"order\": \"1\", \"displayName\": \"TFlite model name\"}}, \"properties\": {\n            \"version\": {\"description\": \"Model version as stored in bucket\", \"type\": \"string\", \"default\": \"1.2\",\n       
                 \"order\": \"2\", \"displayName\": \"Model version\"}, \"hardware\": {\n                \"description\": \"Inference hardware (\\'tpu\\' may be chosen only if available and configured properly)\",\n                \"type\": \"enumeration\", \"default\": \"cpu\", \"options\": [\"cpu\", \"tpu\"], \"order\": \"3\",\n                \"displayName\": \"Inference hardware\"}}}}}),\n        ({ITEM_NAME: {\"description\": \"test description\", \"type\": \"bucket\",\n                      \"default\": \"{'type': 'model', 'name': 'Person', 'version': '1.0', 'hardware': 'tpu'}\",\n                      \"properties\":\n                          {\"constant\": {\"type\": \"model\"}, \"key\": {\n                              \"name\": {\"description\": \"TFlite model name to use for inference\", \"type\": \"string\",\n                                       \"default\": \"People\",\n                                       \"order\": \"1\", \"displayName\": \"TFlite model name\"}}, \"properties\": {\n                              \"version\": {\"description\": \"Model version as stored in bucket\", \"type\": \"string\",\n                                          \"default\": \"1.2\",\n                                          \"order\": \"2\", \"displayName\": \"Model version\"}, \"hardware\": {\n                                  \"description\": \"Inference hardware (\\'tpu\\' may be chosen only if available and configured properly)\",\n                                  \"type\": \"enumeration\", \"default\": \"cpu\", \"options\": [\"cpu\", \"tpu\"], \"order\": \"3\",\n                                  \"displayName\": \"Inference hardware\"}}}}}),\n    ({\"item\": {\"description\": \"test description\", \"type\": \"string\", \"default\": \"A\"},\n       ITEM_NAME: {\"description\": \"test description\", \"type\": \"bucket\", \"default\":\n           \"{'type': 'model', 'name': 'People', 'version': '1.2', 'hardware': 'cpu'}\", \"properties\":\n        
{\"constant\": {\"type\": \"model\"}, \"key\": {\n            \"name\": {\"description\": \"TFlite model name to use for inference\", \"type\": \"string\", \"default\": \"People\",\n                     \"order\": \"1\", \"displayName\": \"TFlite model name\"}}, \"properties\": {\n            \"version\": {\"description\": \"Model version as stored in bucket\", \"type\": \"string\", \"default\": \"1.2\",\n                        \"order\": \"2\", \"displayName\": \"Model version\"}, \"hardware\": {\n                \"description\": \"Inference hardware (\\'tpu\\' may be chosen only if available and configured properly)\",\n                \"type\": \"enumeration\", \"default\": \"cpu\", \"options\": [\"cpu\", \"tpu\"], \"order\": \"3\",\n                \"displayName\": \"Inference hardware\"}}}}}),\n    ({ITEM_NAME: {\"description\": \"Model Test\", \"type\": \"bucket\", \"properties\":\n        {\"key\": {\"name\": {\"description\": \"TFlite model name to use for inference\", \"type\": \"string\", \"default\":\n            \"People\"}}}, \"default\": \"A\", \"permissions\": [\"control\"]}})\n    ])\n    async def test__validate_category_val_bucket_type_good(self, config):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        c_return_value = await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=config,\n                                               set_value_val_from_default_val=True)\n        assert isinstance(c_return_value, dict)\n\n    @pytest.mark.parametrize(\"config, exc_name, reason\", [\n        ({ITEM_NAME: {\"description\": \"test description\", \"type\": \"bucket\", \"default\": \"A\"}}, KeyError,\n         \"'For {} category, properties KV pair must be required for item name {}.'\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test description\", \"type\": \"bucket\", \"default\": \"A\", \"property\": '{\"a\": 1}'}},\n         
KeyError, \"'For {} category, properties KV pair must be required for item name {}.'\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({\"item\": {\"description\": \"test description\", \"type\": \"string\", \"default\": \"A\", \"value\": \"B\"},\n          ITEM_NAME: {\"description\": \"test description\", \"type\": \"bucket\", \"default\": \"A\"}}, KeyError,\n         \"'For {} category, properties KV pair must be required for item name {}.'\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test description\", \"type\": \"bucket\", \"default\": \"A\", \"properties\": '{\"a\": 1}'}},\n         ValueError, \"For {} category, properties must be JSON object for item name {}; got <class 'str'>\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test description\", \"type\": \"bucket\", \"default\": \"A\", \"properties\": {}}},\n         ValueError, \"For {} category, properties JSON object cannot be empty for item name {}\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test description\", \"type\": \"bucket\", \"default\": \"A\", \"properties\": {\"k\": \"v\"}}},\n         ValueError, \"For {} category, key KV pair must exist in properties for item name {}\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test description\", \"type\": \"bucket\", \"default\": {}, \"properties\": {\"key\": \"v\"}}},\n         TypeError, \"For {} category, entry value must be a string for item name {} and entry name default; \"\n                    \"got <class 'dict'>\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"bucket\", \"properties\": {}, \"default\": \"A\", \"permissions\": \"\"}},\n         ValueError, \"For {} category, permissions entry value must be a list of string for item name {}; \"\n                     \"got <class 'str'>.\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: 
{\"description\": \"test\", \"type\": \"bucket\", \"properties\": {}, \"default\": \"A\", \"permissions\": [\"\"]}},\n         ValueError, \"For {} category, permissions entry values must be a string and non-empty for item name {}.\"\n                     \"\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"bucket\", \"properties\": {}, \"default\": \"A\", \"permissions\": [2]}},\n         ValueError,\"For {} category, permissions entry values must be a string and non-empty for item name {}.\"\n                    \"\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"bucket\", \"properties\": {}, \"default\": \"A\",\n                      \"permissions\": [\"user\", 5]}},\n         ValueError, \"For {} category, permissions entry values must be a string and non-empty for item name {}.\"\n                     \"\".format(CAT_NAME, ITEM_NAME))\n    ])\n    async def test__validate_category_val_bucket_type_bad(self, config, exc_name, reason):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(Exception) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=config,\n                                               set_value_val_from_default_val=False)\n        assert excinfo.type is exc_name\n        assert reason == str(excinfo.value)\n\n    @pytest.mark.parametrize(\"config, exc_name, reason\", [\n        ({ITEM_NAME: {\"description\": \"test description\", \"type\": \"list\", \"default\": \"A\"}}, KeyError,\n         \"'For {} category, items KV pair must be required for item name {}.'\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test description\", \"type\": \"list\", \"default\": \"A\", \"items\": []}}, TypeError,\n         \"For {} category, entry value must be a string for item name {} and entry name items; 
\"\n         \"got <class 'list'>\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test description\", \"type\": \"list\", \"default\": \"A\", \"items\": \"str\"}}, ValueError,\n         \"For {} category, items value should either be in string, float, integer, object or enumeration for \"\n         \"item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test description\", \"type\": \"list\", \"default\": \"A\", \"items\": \"float\"}}, TypeError,\n         \"For {} category, default value should be passed array list in string format for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"AJ\\\"]\", \"items\": \"float\"}}, ValueError,\n        \"For {} category, all elements should be of same <class 'float'> type in default value for item name {}\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"13\\\", \\\"AJ\\\"]\", \"items\": \"integer\"}},\n         ValueError, \"For {} category, all elements should be of same <class 'int'> type in default \"\n                     \"value for item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"13\\\", \\\"1.04\\\"]\", \"items\": \"integer\"}},\n         ValueError, \"For {} category, all elements should be of same <class 'int'> type in default \"\n                     \"value for item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({\"include\": {\"description\": \"multiple\", \"type\": \"list\", \"default\": \"[\\\"135\\\", \\\"1111\\\"]\", \"items\": \"integer\",\n                      \"value\": \"1\"},\n        ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"13\\\", \\\"1\\\"]\", \"items\": \"float\"}},\n         ValueError, \"For {} category, all elements should be of 
same <class 'float'> type in default \"\n                     \"value for item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[]\", \"items\": \"float\", \"listSize\": 1}},\n         TypeError, \"For {} category, listSize type must be a string for item name {}; got <class 'int'>\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[]\", \"items\": \"float\", \"listSize\": \"\"}},\n         ValueError, \"For {} category, listSize value must be an integer value for item name {}\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"10.12\\\", \\\"0.9\\\"]\", \"items\": \"float\",\n                      \"listSize\": \"1\"}}, ValueError, \"For {} category, default value array list size limit to 1 for \"\n                                                     \"item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"1\\\"]\", \"items\": \"integer\",\n                      \"listSize\": \"0\"}}, ValueError, \"For {} category, default value array list size limit to 0 \"\n                                                     \"for item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"6e7777\\\", \\\"1.79e+308\\\", \\\"1.0\\\", \\\"0.9\\\"]\",\n                      \"items\": \"float\", \"listSize\": \"3\"}}, ValueError,\n         \"For {} category, default value array list size limit to 3 for item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"1\\\", \\\"2\\\", \\\"1\\\"]\", \"items\": \"integer\",\n                      \"listSize\": \"3\"}}, ValueError, \"For {} category, default value array elements 
are not unique \"\n                                                     \"for item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"a\\\", \\\"b\\\", \\\"ab\\\", \\\"a\\\"]\",\n                      \"items\": \"string\"}}, ValueError, \"For {} category, default value array elements are not unique \"\n                                                     \"for item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"property\": {}}}, KeyError, \"'For {} category, properties KV pair must be required for item name \"\n                                                  \"{}'\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"properties\": 1}}, ValueError,\n         \"For {} category, properties must be JSON object for item name {}; got <class 'int'>\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"properties\": \"\"}}, ValueError,\n         \"For {} category, properties must be JSON object for item name {}; got <class 'str'>\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"properties\": {}}}, ValueError,\n         \"For {} category, properties JSON object cannot be empty for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"integer\\\"]\",\n                      \"items\": 
\"enumeration\"}}, KeyError,\n         \"'For {} category, options required for item name {}'\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"integer\\\"]\",\n                      \"items\": \"enumeration\", \"options\": 1}}, TypeError,\n         \"For {} category, entry value must be a list for item name {} and entry name items; got <class 'int'>\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"integer\\\"]\",\n                      \"items\": \"enumeration\", \"options\": []}}, ValueError,\n         \"For {} category, options cannot be empty list for item_name {} and entry_name items\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"integer\\\"]\",\n                      \"items\": \"enumeration\", \"options\": [\"integer\"], \"listSize\": 1}}, TypeError,\n         \"For {} category, listSize type must be a string for item name {}; got <class 'int'>\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"integer\\\"]\",\n                      \"items\": \"enumeration\", \"options\": [\"integer\"], \"listSize\": \"blah\"}}, ValueError,\n         \"For {} category, listSize value must be an integer value for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"integer\\\"]\",\n                      \"items\": \"enumeration\", \"options\": [\"int\"], \"listSize\": \"1\"}}, ValueError,\n         \"For {} category, integer value does not exist in options for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"0\\\"]\",\n                      \"items\": 
\"enumeration\", \"options\": [\"999\"], \"listSize\": \"1\"}}, ValueError,\n         \"For {} category, 0 value does not exist in options for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"0\\\"]\",\n                      \"items\": \"integer\", \"listSize\": \"1\", \"listName\": 2}}, TypeError,\n         \"For {} category, listName type must be a string for item name {}; got <class 'int'>\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"[\\\"0\\\"]\",\n                      \"items\": \"string\", \"listSize\": \"1\", \"listName\": \"\"}}, ValueError,\n         \"For {} category, listName cannot be empty for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"properties\": {\"width\": {\"description\": \"\", \"default\": \"\", \"type\": \"\"}}, \"listName\": \"\"}},\n         ValueError,\"For {} category, listName cannot be empty for item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"properties\": {\"width\": {\"description\": \"\", \"default\": \"\", \"type\": \"\"}}, \"permissions\": \"\"}},\n         ValueError, \"For {} category, permissions entry value must be a list of string for item name {}; \"\n                     \"got <class 'str'>.\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"properties\": {\"width\": {\"description\": \"\", \"default\": \"\", \"type\": \"\"}}, \"permissions\": []}},\n         
ValueError, \"For {} category, permissions entry value must not be empty for item name {}.\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"properties\": {\"width\": {\"description\": \"\", \"default\": \"\", \"type\": \"\"}}, \"permissions\": [1]}},\n         ValueError, \"For {} category, permissions entry values must be a string and non-empty for item name {}.\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"properties\": {\"width\": {\"description\": \"\", \"default\": \"\", \"type\": \"\"}}, \"permissions\": [\"a\", 2]}},\n         ValueError, \"For {} category, permissions entry values must be a string and non-empty for item name {}.\"\n                     \"\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"test\", \"type\": \"list\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"properties\": {\"width\": {\"description\": \"\", \"default\": \"\", \"type\": \"\"}}, \"permissions\": [\"\", \"A\"]}},\n         ValueError, \"For {} category, permissions entry values must be a string and non-empty for item name {}.\"\n                     \"\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"A\"}}, KeyError,\n         \"'For {} category, items KV pair must be required for item name {}.'\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"A\", \"items\": []}}, TypeError,\n         \"For {} category, entry value must be a string for item name {} and entry name items; \"\n         \"got <class 'list'>\".format(CAT_NAME, ITEM_NAME)),\n   
     ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"A\", \"items\": \"str\"}}, ValueError,\n         \"For {} category, items value should either be in string, float, integer, object or enumeration for \"\n         \"item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"A\", \"items\": \"string\"}}, TypeError,\n         \"For {} category, default value should be passed KV pair list in string format for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key\\\"}\", \"items\": \"string\"}},\n         TypeError, \"For {} category, KV pair invalid in default value for item name {}\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key\\\": \\\"1\\\"}\", \"items\": \"float\"}},\n         ValueError, \"For {} category, all elements should be of same <class 'float'> type in default value for \"\n                     \"item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key\\\": \\\"AJ\\\"}\",\n                      \"items\": \"integer\"}}, ValueError,\n         \"For {} category, all elements should be of same <class 'int'> type in default value for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key1\\\": \\\"13\\\", \\\"key2\\\": \\\"1.04\\\"}\",\n                      \"items\": \"integer\"}}, ValueError, \"For {} category, all elements should be of same <class 'int'> type in \"\n                                                \"default value for item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({\"include\": {\"description\": \"expression\", \"type\": 
\"kvlist\",\n                      \"default\": \"{\\\"key1\\\": \\\"135\\\", \\\"key2\\\": \\\"1111\\\"}\", \"items\": \"integer\", \"value\": \"1\"},\n          ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\",\n                      \"default\": \"{\\\"key1\\\": \\\"135\\\", \\\"key2\\\": \\\"1111\\\"}\", \"items\": \"float\"}}, ValueError,\n         \"For {} category, all elements should be of same <class 'float'> type in default value for item name \"\n         \"{}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"[]\", \"items\": \"float\", \"listSize\": 1}},\n         TypeError, \"For {} category, listSize type must be a string for item name {}; got <class 'int'>\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"[]\", \"items\": \"float\",\n                      \"listSize\": \"blah\"}}, ValueError, \"For {} category, listSize value must be an integer value for \"\n                                                        \"item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"[\\\"1\\\"]\", \"items\": \"float\",\n                      \"listSize\": \"1\"}}, TypeError, \"For {} category, KV pair invalid in default value for item name \"\n                                                    \"{}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"1\\\"}\", \"items\": \"float\",\n                      \"listSize\": \"1\"}}, TypeError, \"For {} category, KV pair invalid in default value for item name \"\n                                                    \"{}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key\\\": {} }\", \"items\": 
\"float\",\n                      \"listSize\": \"1\"}}, ValueError, \"For {} category, all elements should be of same <class 'float'> \"\n                                                     \"type in default value for item name {}\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\",\n                      \"default\": \"{\\\"key\\\": \\\"1.0\\\", \\\"key2\\\": \\\"val2\\\"}\", \"items\": \"float\", \"listSize\": \"1\"}},\n         ValueError, \"For {} category, default value KV pair list size limit to 1 for item name {}\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\",\n                      \"default\": \"{\\\"key\\\": \\\"1.0\\\", \\\"key\\\": \\\"val2\\\"}\", \"items\": \"float\", \"listSize\": \"2\"}},\n         ValueError, \"For category {}, duplicate KV pair found for item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\",\n                      \"default\": \"{\\\"key\\\": \\\"1.0\\\", \\\"key1\\\": \\\"val2\\\"}\", \"items\": \"float\", \"listSize\": \"2\"}},\n         ValueError, \"For {} category, all elements should be of same <class 'float'> type in default value for \"\n                     \"item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\",\n                      \"default\": \"{\\\"key\\\": \\\"1.0\\\", \\\"key1\\\": \\\"val2\\\", \\\"key3\\\": \\\"val2\\\"}\", \"items\": \"float\",\n                      \"listSize\": \"2\"}}, ValueError, \"For {} category, default value KV pair list size limit to 2 for\"\n                                                     \" item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"float\",\n                    
  \"listSize\": \"0\"}}, ValueError, \"For {} category, default value KV pair list size limit to 0 \"\n                                                     \"for item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\"\n                      }}, KeyError, \"'For {} category, properties KV pair must be required for item name {}'\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"property\": {}}}, KeyError, \"'For {} category, properties KV pair must be required for item name \"\n                                                  \"{}'\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"properties\": 1}}, ValueError,\n         \"For {} category, properties must be JSON object for item name {}; got <class 'int'>\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"properties\": \"\"}}, ValueError,\n         \"For {} category, properties must be JSON object for item name {}; got <class 'str'>\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"properties\": {}}}, ValueError,\n         \"For {} category, properties JSON object cannot be empty for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": 
\"{\\\"key\\\": \\\"1.0\\\"}\", \"items\": \"object\",\n                      \"properties\": {\"width\": 1}}}, TypeError,\n         \"For {} category, Properties must be a JSON object for width key for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"width\\\": \\\"12\\\"}\", \"items\":\n            \"object\", \"properties\": {\"width\": {}}}}, ValueError,\n         \"For {} category, width properties cannot be empty for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"width\\\": \\\"12\\\"}\", \"items\":\n            \"object\", \"properties\": {\"width\": {\"type\": \"\"}}}}, ValueError,\n         \"For {} category, width properties must have type, description, default keys for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"width\\\": \\\"12\\\"}\", \"items\":\n            \"object\", \"properties\": {\"width\": {\"description\": \"\"}}}}, ValueError,\n         \"For {} category, width properties must have type, description, default keys for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"width\\\": \\\"12\\\"}\", \"items\":\n            \"object\", \"properties\": {\"width\": {\"default\": \"\"}}}}, ValueError,\n         \"For {} category, width properties must have type, description, default keys for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"width\\\": \\\"12\\\"}\", \"items\":\n            \"object\", \"properties\": {\"width\": {\"type\": \"\", \"description\": \"\"}}}}, ValueError,\n         \"For {} category, 
width properties must have type, description, default keys for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"width\\\": \\\"12\\\"}\", \"items\":\n            \"object\", \"properties\": {\"width\": {\"type\": \"\", \"default\": \"\"}}}}, ValueError,\n         \"For {} category, width properties must have type, description, default keys for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"width\\\": \\\"12\\\"}\", \"items\":\n            \"object\", \"properties\": {\"width\": {\"description\": \"\", \"default\": \"\"}}}}, ValueError,\n         \"For {} category, width properties must have type, description, default keys for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{}\", \"items\": \"object\", \"keyName\": 1,\n                      \"properties\": {\"width\": {\"description\": \"Width\", \"default\": \"1\", \"type\": \"integer\"}}}},\n         TypeError, \"For {} category, keyName type must be a string for item name {}; got <class 'int'>\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{}\", \"items\": \"object\",\n                      \"keyDescription\": False, \"properties\": {\"width\": {\"description\": \"Width\", \"default\": \"1\", \"type\":\n                \"integer\"}}}}, TypeError, \"For {} category, keyDescription type must be a string for item name {}; \"\n                                          \"got <class 'bool'>\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{}\", \"items\": \"object\", \"keyName\":\n            \"DP\", \"keyDescription\": False, 
\"properties\": {\"width\": {\"description\": \"Width\", \"default\": \"1\", \"type\":\n                \"integer\"}}}}, TypeError, \"For {} category, keyDescription type must be a string for item name {}; \"\n                                          \"got <class 'bool'>\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{}\", \"items\": \"object\", \"keyName\":\n            4.5, \"keyDescription\": \"\", \"properties\": {\"width\": {\"description\": \"Width\", \"default\": \"1\", \"type\":\n            \"integer\"}}}}, TypeError, \"For {} category, keyName type must be a string for item name {}; \"\n                                      \"got <class 'float'>\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{}\", \"items\": \"object\", \"keyName\":\n            \"\", \"keyDescription\": \"\", \"properties\": {\"width\": {\"description\": \"Width\", \"default\": \"1\", \"type\":\n            \"integer\"}}}}, ValueError, \"For {} category, keyName cannot be empty for item name {}\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{}\", \"items\": \"object\", \"keyName\":\n            \"DP\", \"keyDescription\": \"\", \"properties\": {\"width\": {\"description\": \"Width\", \"default\": \"1\", \"type\":\n            \"integer\"}}}}, ValueError, \"For {} category, keyDescription cannot be empty for item name {}\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key1\\\": \\\"integer\\\"}\",\n                      \"items\": \"enumeration\"}}, KeyError,\n         \"'For {} category, options required for item name {}'\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": 
\"{\\\"key1\\\": \\\"integer\\\"}\",\n                      \"items\": \"enumeration\", \"options\": 1}}, TypeError,\n         \"For {} category, entry value must be a list for item name {} and entry name items; got <class 'int'>\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key1\\\": \\\"integer\\\"}\",\n                      \"items\": \"enumeration\", \"options\": []}}, ValueError,\n         \"For {} category, options cannot be empty list for item_name {} and entry_name items\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key1\\\": \\\"integer\\\"}\",\n                      \"items\": \"enumeration\", \"options\": [\"integer\"], \"listSize\": 1}}, TypeError,\n         \"For {} category, listSize type must be a string for item name {}; got <class 'int'>\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key1\\\": \\\"integer\\\"}\",\n                      \"items\": \"enumeration\", \"options\": [\"integer\"], \"listSize\": \"blah\"}}, ValueError,\n         \"For {} category, listSize value must be an integer value for item name {}\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key1\\\": \\\"int\\\"}\",\n                      \"items\": \"enumeration\", \"options\": [\"integer\"], \"listSize\": \"1\"}}, ValueError,\n         \"For {} category, int value does not exist in options for item name {} and entry_name key1\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key1\\\": \\\"1\\\"}\",\n                      \"items\": \"enumeration\", \"options\": [\"integer\", \"2\"], \"listSize\": 
\"1\"}}, ValueError,\n         \"For {} category, 1 value does not exist in options for item name {} and entry_name key1\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key1\\\": \\\"1\\\"}\",\n                      \"items\": \"enumeration\", \"options\": [\"integer\", \"2\"], \"listSize\": \"1\", \"listName\": 1}},\n         TypeError, \"For {} category, listName type must be a string for item name {}; got <class 'int'>\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"key1\\\": \\\"1\\\"}\",\n                      \"items\": \"enumeration\", \"options\": [\"integer\", \"2\"], \"listSize\": \"1\", \"listName\": \"\"}},\n         ValueError, \"For {} category, listName cannot be empty for item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\", \"default\": \"{\\\"width\\\": \\\"12\\\"}\", \"items\":\n            \"object\", \"properties\": {\"width\": {\"description\": \"\", \"default\": \"\", \"type\": \"\"}}, \"listName\": \"\"}},\n         ValueError, \"For {} category, listName cannot be empty for item name {}\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\",\n                      \"default\": \"{\\\"key\\\": \\\"1.0\\\", \\\"key\\\": \\\"val2\\\"}\", \"items\": \"float\", \"listName\": 2}},\n         TypeError, \"For {} category, listName type must be a string for item name {}; got <class 'int'>\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\",\n                      \"default\": \"{\\\"key\\\": \\\"1.0\\\", \\\"key\\\": \\\"val2\\\"}\", \"items\": \"float\", \"permissions\": \"\"}},\n         ValueError, \"For {} category, permissions entry value must be a list of string 
for item name {}; \"\n                     \"got <class 'str'>.\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\",\n                      \"default\": \"{\\\"key\\\": \\\"1.0\\\", \\\"key\\\": \\\"val2\\\"}\", \"items\": \"float\", \"permissions\": []}},\n         ValueError, \"For {} category, permissions entry value must not be empty for item name {}.\".format(\n            CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\",\n                      \"default\": \"{\\\"key\\\": \\\"1.0\\\", \\\"key\\\": \\\"val2\\\"}\", \"items\": \"float\", \"permissions\": [\"\"]}},\n         ValueError, \"For {} category, permissions entry values must be a string and non-empty for item name {}.\"\n                     \"\".format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"expression\", \"type\": \"kvlist\",\n                      \"default\": \"{\\\"key\\\": \\\"1.0\\\", \\\"key\\\": \\\"val2\\\"}\", \"items\": \"float\", \"permissions\": [2]}},\n         ValueError, \"For {} category, permissions entry values must be a string and non-empty for item name {}.\"\n                     \"\".format(CAT_NAME, ITEM_NAME)),\n    ])\n    async def test__validate_category_val_list_type_bad(self, config, exc_name, reason):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(Exception) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=config,\n                                               set_value_val_from_default_val=False)\n        assert excinfo.type is exc_name\n        assert reason == str(excinfo.value)\n\n    @pytest.mark.parametrize(\"config\", [\n        {\"include\": {\"description\": \"A list of variables to include\", \"type\": \"list\", \"items\": \"string\",\n                     \"default\": 
\"[]\"}},\n        {\"include\": {\"description\": \"A list of variables to include\", \"type\": \"list\", \"items\": \"string\",\n                     \"default\": \"[\\\"first\\\", \\\"second\\\"]\"}},\n        {\"include\": {\"description\": \"A list of variables to include\", \"type\": \"list\", \"items\": \"integer\",\n                     \"default\": \"[\\\"1\\\", \\\"0\\\"]\"}},\n        {\"include\": {\"description\": \"A list of variables to include\", \"type\": \"list\", \"items\": \"float\",\n                     \"default\": \"[\\\"0.5\\\", \\\"123.57\\\"]\"}},\n        {\"include\": {\"description\": \"A list of variables to include\", \"type\": \"list\", \"items\": \"float\",\n                     \"default\": \"[\\\".5\\\", \\\"1.79e+308\\\"]\", \"listSize\": \"2\"}},\n        {\"include\": {\"description\": \"A list of variables to include\", \"type\": \"list\", \"items\": \"string\",\n                     \"default\": \"[\\\"var1\\\", \\\"var2\\\"]\", \"listSize\": \"2\"}},\n        {\"include\": {\"description\": \"A list of variables to include\", \"type\": \"list\", \"items\": \"string\",\n                     \"default\": \"[]\", \"listSize\": \"1\"}},\n        {\"include\": {\"description\": \"A list of variables to include\", \"type\": \"list\", \"items\": \"string\",\n                     \"default\": \"[]\", \"permissions\": [\"user\", \"control\"]}},\n        {\"include\": {\"description\": \"A list of variables to include\", \"type\": \"list\", \"items\": \"integer\",\n                     \"default\": \"[\\\"10\\\", \\\"100\\\", \\\"200\\\", \\\"300\\\"]\", \"listSize\": \"4\"}},\n        {\"include\": {\"description\": \"A list of variables to include\", \"type\": \"list\", \"items\": \"object\",\n                     \"default\": \"[{\\\"datapoint\\\": \\\"voltage\\\"}]\",\n                     \"properties\": {\"datapoint\": {\"description\": \"The datapoint name to create\", \"displayName\":\n                         \"Datapoint\", 
\"type\": \"string\", \"default\": \"\"}}}},\n        {\"include\": {\"description\": \"A simple list\", \"type\": \"list\", \"default\": \"[\\\"integer\\\", \\\"float\\\"]\",\n                     \"items\": \"enumeration\", \"options\": [\"integer\", \"float\"]}},\n        {\"include\": {\"description\": \"A list of expressions and values\", \"type\": \"kvlist\", \"items\": \"string\",\n                    \"default\": \"{}\", \"order\": \"1\", \"displayName\": \"labels\"}},\n        {\"include\": {\"description\": \"A list of expressions and values\", \"type\": \"kvlist\", \"items\": \"string\",\n                     \"default\": \"{\\\"key\\\": \\\"value\\\"}\", \"order\": \"1\", \"displayName\": \"labels\"}},\n        {\"include\": {\"description\": \"A list of expressions and values\", \"type\": \"kvlist\", \"items\": \"integer\",\n                     \"default\": \"{\\\"key\\\": \\\"13\\\"}\", \"order\": \"1\", \"displayName\": \"labels\"}},\n        {\"include\": {\"description\": \"A list of expressions and values\", \"type\": \"kvlist\", \"items\": \"float\",\n                     \"default\": \"{\\\"key\\\": \\\"13.13\\\"}\", \"order\": \"1\", \"displayName\": \"labels\"}},\n        {\"include\": {\"description\": \"A list of expressions and values\", \"type\": \"kvlist\", \"items\": \"string\",\n                     \"default\": \"{\\\"key\\\": \\\"value\\\"}\", \"order\": \"1\", \"displayName\": \"labels\", \"listSize\": \"1\"}},\n        {\"include\": {\"description\": \"A list of expressions and values\", \"type\": \"kvlist\", \"items\": \"integer\",\n                     \"default\": \"{\\\"key\\\": \\\"13\\\"}\", \"order\": \"1\", \"displayName\": \"labels\", \"listSize\": \"1\"}},\n        {\"include\": {\"description\": \"A list of expressions and values\", \"type\": \"kvlist\", \"items\": \"float\",\n                     \"default\": \"{\\\"key\\\": \\\"13.13\\\"}\", \"order\": \"1\", \"displayName\": \"labels\", \"listSize\": \"1\"}},\n        
{\"include\": {\"description\": \"A list of expressions and values\", \"type\": \"kvlist\", \"items\": \"float\",\n                     \"default\": \"{}\", \"order\": \"1\", \"displayName\": \"labels\", \"listSize\": \"3\"}},\n        {\"include\": {\"description\": \"A list of expressions and values\", \"type\": \"kvlist\", \"items\": \"object\",\n                     \"keyName\": \"Register\", \"keyDescription\": \"Register to read\",\n                     \"default\": \"{\\\"register\\\": {\\\"width\\\": \\\"2\\\"}}\", \"order\": \"1\", \"displayName\": \"labels\",\n                     \"properties\": {\"width\": {\"description\": \"Number of registers to read\", \"displayName\": \"Width\",\n                                              \"type\": \"integer\", \"maximum\": \"4\", \"default\": \"1\"}}}},\n        {\"include\": {\"description\": \"A list of expressions and values \", \"type\": \"kvlist\", \"default\":\n            \"{\\\"key1\\\": \\\"integer\\\", \\\"key2\\\": \\\"float\\\"}\", \"items\": \"enumeration\", \"options\": [\"integer\", \"float\"]}},\n        {\"include\": {\"description\": \"A list of expressions and values \", \"type\": \"kvlist\", \"default\":\n            \"{\\\"key1\\\": \\\"integer\\\", \\\"key2\\\": \\\"float\\\"}\", \"items\": \"enumeration\", \"options\": [\"integer\", \"float\"],\n                     \"permissions\": [\"admin\"]}}\n    ])\n    async def test__validate_category_val_list_type_good(self, config):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        res = await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=config,\n                                                 set_value_val_from_default_val=True)\n        assert config['include']['default'] == res['include']['default']\n        assert config['include']['default'] == res['include']['value']\n\n    @pytest.mark.parametrize(\"config, exc_name, reason\", [\n     
   ({ITEM_NAME: {\"description\": \"Test JSON\", \"type\": \"json\", \"default\": \"A\"}}, ValueError,\n         'For {} category, invalid entry value for entry name \"type\" for item name {}. valid type strings are: {}'\n         ''.format(CAT_NAME, ITEM_NAME, _valid_type_strings)),\n        ({ITEM_NAME: {\"description\": \"Test JSON\", \"type\": \"JSON\", \"default\": \"A\"}}, ValueError,\n         'For {} category, missing entry name value for item name {}'.format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"Test JSON\", \"type\": \"JSON\", \"default\": \"{}\", \"schema\": \"\"}}, TypeError,\n         \"For {} category, {} item name and schema entry value must be an object; got <class 'str'>\".format(\n             CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"Test JSON\", \"type\": \"JSON\", \"default\": \"{}\", \"schema\": {}}}, ValueError,\n         'For {} category, {} item name and schema entry value can not be empty.'.format(CAT_NAME, ITEM_NAME)),\n        ({ITEM_NAME: {\"description\": \"Test JSON\", \"type\": \"JSON\", \"default\": \"{}\", \"schema\": {}}, \"info\": {\n            \"description\": \"test description val\", \"type\": \"string\", \"default\": \"test default val\"}}, ValueError,\n         'For {} category, {} item name and schema entry value can not be empty.'.format(CAT_NAME, ITEM_NAME)),\n        ({\"bool\": {\"description\": \"Test boolean\", \"type\": \"boolean\", \"default\": \"false\", \"value\": \"true\"},\n          ITEM_NAME: {\"description\": \"Test JSON\", \"type\": \"JSON\", \"default\": \"{}\", \"schema\": {}},\n          \"str\": {\"description\": \"Test simple string\", \"type\": \"string\", \"default\": \"test default val\"}}, ValueError,\n         'For {} category, {} item name and schema entry value can not be empty.'.format(CAT_NAME, ITEM_NAME))\n    ])\n    async def test__validate_category_val_JSON_type_bad(self, config, exc_name, reason):\n        storage_client_mock = 
MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(Exception) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=config,\n                                               set_value_val_from_default_val=False)\n        assert excinfo.type is exc_name\n        assert reason == str(excinfo.value)\n\n    @pytest.mark.parametrize(\"config\", [\n        ({ITEM_NAME: {\"description\": \"Test JSON\", \"type\": \"JSON\", \"default\": \"{}\"}}),\n        ({ITEM_NAME: {\"description\": \"Test JSON\", \"type\": \"JSON\", \"default\": {}}}),\n        # Object Schema\n        ({ITEM_NAME: {\"description\": \"Test JSON schema\", \"type\": \"JSON\", \"default\": {\"name\": \"AJ\"},\n                      \"schema\": {\"type\": \"object\", \"properties\": {\"name\": {\"type\": \"string\"}}}}}),\n        ({ITEM_NAME: {\"description\": \"Test JSON schema\", \"type\": \"JSON\", \"default\": {\"name\": \"AJ\", \"age\": 35},\n                      \"schema\": {\"type\": \"object\", \"properties\": {\"name\": {\"type\": \"string\"}, \"age\": {\"type\": \"integer\"}\n                                                                  }}}}),\n        ({ITEM_NAME: {\"description\": \"Test JSON schema\", \"type\": \"JSON\", \"default\": {\"name\": \"AJ\", \"age\": 35},\n                      \"schema\": {\"type\": \"object\", \"properties\": {\n                          \"name\": {\"type\": \"string\"}, \"age\": {\"type\": \"integer\"}}, \"required\": [\"name\"]}}}),\n        ({ITEM_NAME: {\"description\": \"Test JSON schema\", \"type\": \"JSON\", \"default\": {\"name\": \"AJ\"},\n                      \"schema\": {\"type\": \"object\", \"properties\": {\n                          \"name\": {\"type\": \"string\"}, \"age\": {\"type\": \"integer\"}}, \"required\": [\"name\"]}}}),\n        # Array Schema\n        ({ITEM_NAME: {\"description\": \"Test JSON schema\", \"type\": 
\"JSON\", \"default\": \"[10]\",\n                      \"schema\": {\"type\": \"array\", \"items\": {\"type\": \"integer\"}}}}),\n        ({ITEM_NAME: {\"description\": \"Test JSON schema\", \"type\": \"JSON\", \"default\": \"[10, 20, 30]\",\n                      \"schema\": {\"type\": \"array\", \"items\": {\"type\": \"integer\"}, \"minItems\": 1, \"maxItems\": 5}}}),\n        # Nested Objects with Array of Objects\n        ({ITEM_NAME: {\"description\": \"Test JSON schema\", \"type\": \"JSON\",\n                      \"default\": {\"project_name\": \"New Project\", \"tasks\": [{\"task_id\": 1, \"completed\": True},\n                                                                           {\"task_id\": 10, \"completed\": False}]},\n                      \"schema\": {\"type\": \"object\", \"properties\": {\n                          \"project_name\": {\"type\": \"string\"}, \"tasks\": {\n                              \"type\": \"array\", \"items\": {\"type\": \"object\", \"properties\": {\n                                  \"task_id\": {\"type\": \"integer\"}, \"completed\": {\"type\": \"boolean\"}},\n                                \"required\": [\"task_id\", \"completed\"]}\n                            }},\n                                 \"required\": [\"project_name\", \"tasks\"]}}}),\n        # Array of Arrays\n        ({ITEM_NAME: {\"description\": \"Test JSON schema\", \"type\": \"JSON\", \"default\": \"[[10, 20, 30], [300], [0, 1]]\",\n                      \"schema\": {\"type\": \"array\", \"items\": {\"type\": \"array\", \"items\": {\"type\": \"integer\"}}}}})\n    ])\n    async def test__validate_category_val_JSON_type_good(self, config):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        c_return_value = await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=config,\n                                                            
set_value_val_from_default_val=True)\n        assert isinstance(c_return_value, dict)\n\n    @pytest.mark.parametrize(\"_type, value, from_default_val\", [\n        (\"integer\", \" \", False),\n        (\"string\", \"\", False),\n        (\"string\", \" \", False),\n        (\"JSON\", \"\", False),\n        (\"JSON\", \" \", False),\n        (\"bucket\", \"\", False),\n        (\"bucket\", \" \", False),\n        (\"list\", \"\", False),\n        (\"list\", \" \", False),\n        (\"kvlist\", \"\", False),\n        (\"kvlist\", \" \", False),\n        (\"integer\", \" \", True),\n        (\"string\", \"\", True),\n        (\"string\", \" \", True),\n        (\"JSON\", \"\", True),\n        (\"JSON\", \" \", True),\n        (\"bucket\", \"\", True),\n        (\"bucket\", \" \", True),\n        (\"list\", \"\", True),\n        (\"list\", \" \", True),\n        (\"kvlist\", \"\", True),\n        (\"kvlist\", \" \", True)\n    ])\n    async def test__validate_category_val_with_optional_mandatory(self, _type, value, from_default_val):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config = {ITEM_NAME: {\"description\": \"test description\", \"type\": _type, \"default\": value,\n                                   \"mandatory\": \"true\"}}\n        if _type == \"bucket\":\n            test_config[ITEM_NAME]['properties'] = {\"key\": \"foo\"}\n        elif _type in (\"list\", \"kvlist\"):\n            test_config[ITEM_NAME]['items'] = \"string\"\n\n        with pytest.raises(Exception) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                               set_value_val_from_default_val=from_default_val)\n        assert excinfo.type is ValueError\n        assert (\"For {} category, A default value must be given for {}\"\n                \"\").format(CAT_NAME, ITEM_NAME) == str(excinfo.value)\n\n  
  async def test__validate_category_val_with_enum_type(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": \"enumeration\",\n                \"default\": \"A\",\n                \"options\": [\"A\", \"B\", \"C\"]\n            }\n        }\n        c_return_value = await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                                            set_value_val_from_default_val=True)\n        assert isinstance(c_return_value, dict)\n        assert 1 == len(c_return_value)\n        test_item_val = c_return_value.get(ITEM_NAME)\n        assert isinstance(test_item_val, dict)\n        assert 5 == len(test_item_val)\n        assert \"test description val\" == test_item_val.get(\"description\")\n        assert \"enumeration\" == test_item_val.get(\"type\")\n        assert \"A\" == test_item_val.get(\"default\")\n        assert \"A\" == test_item_val.get(\"value\")\n\n        # deep copy check to make sure test_config wasn't modified in the\n        # method call\n        assert test_config is not c_return_value\n        assert isinstance(test_config, dict)\n        assert 1 == len(test_config)\n        test_item_val = test_config.get(ITEM_NAME)\n        assert isinstance(test_item_val, dict)\n        assert 4 == len(test_item_val)\n        assert \"test description val\" == test_item_val.get(\"description\")\n        assert \"enumeration\" == test_item_val.get(\"type\")\n        assert \"A\" == test_item_val.get(\"default\")\n\n    @pytest.mark.parametrize(\"test_input, test_value, clean_value\", [\n        (\"boolean\", \"false\", \"false\"),\n        (\"integer\", \"123\", \"123\"),\n        (\"string\", \"blah\", \"blah\"),\n        (\"IPv4\", \"127.0.0.1\", \"127.0.0.1\"),\n  
      (\"IPv6\", \"2001:db8::\", \"2001:db8::\"),\n        (\"password\", \"not implemented\", \"not implemented\"),\n        (\"X509 certificate\", \"not implemented\", \"not implemented\"),\n        (\"JSON\", \"{\\\"foo\\\": \\\"bar\\\"}\", '{\"foo\": \"bar\"}'),\n        (\"northTask\", \"north_task_category\", \"north_task_category\")\n    ])\n    async def test__validate_category_val_valid_type(self, reset_singleton, test_input, test_value, clean_value):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": test_input,\n                \"default\": test_value,\n            },\n        }\n        c_return_value = await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                                            set_value_val_from_default_val=True)\n        assert c_return_value[ITEM_NAME][\"type\"] == test_input\n        assert c_return_value[ITEM_NAME][\"value\"] == clean_value\n\n    async def test__validate_category_val_invalid_type(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        item_name = 'test_item_name'\n        test_config = {\n            item_name: {\n                \"description\": \"test description val\",\n                \"type\": \"blablabla\",\n                \"default\": \"test default val\",\n            },\n        }\n        with pytest.raises(ValueError) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                               set_value_val_from_default_val=True)\n        assert 'For {} category, invalid entry value for entry name \"type\" for item name {}. 
valid type strings ' \\\n               'are: {}'.format(CAT_NAME, item_name, _valid_type_strings) == str(excinfo.value)\n\n    @pytest.mark.parametrize(\"test_input\", [\"type\", \"description\", \"default\"])\n    async def test__validate_category_val_missing_entry(self, reset_singleton, test_input):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\",\n            },\n        }\n        del test_config[ITEM_NAME][test_input]\n        with pytest.raises(ValueError) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                               set_value_val_from_default_val=True)\n        assert 'For {} category, missing entry name {} for item name {}'.format(\n            CAT_NAME, test_input, ITEM_NAME) == str(excinfo.value)\n\n    async def test__validate_category_val_config_without_default_notuse_value_val(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n            },\n        }\n        with pytest.raises(ValueError) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                               set_value_val_from_default_val=True)\n        assert 'For {} category, missing entry name default for item name {}'.format(\n            CAT_NAME, ITEM_NAME) == str(excinfo.value)\n\n    async def test__validate_category_val_config_with_default_andvalue_val_notuse_value_val(self, 
reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\",\n                \"value\": \"test value val\"\n            },\n        }\n        with pytest.raises(ValueError) as excinfo:\n            await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                               set_value_val_from_default_val=True)\n        assert 'Specifying value_name and value_val for item_name test_item_name is not allowed if desired behavior is to use default_val as value_val' in str(\n            excinfo.value)\n\n    async def test__merge_category_vals_same_items_different_values(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config_new = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\",\n                \"value\": \"test value val\"\n            },\n        }\n        test_config_storage = {\n            ITEM_NAME: {\n                \"description\": \"test description val storage\",\n                \"type\": \"string\",\n                \"default\": \"test default val storage\",\n                \"value\": \"test value val storage\"\n            },\n        }\n        c_return_value = await c_mgr._merge_category_vals(test_config_new, test_config_storage,\n                                                          keep_original_items=True, category_name=CAT_NAME)\n        assert isinstance(c_return_value, dict)\n        assert len(c_return_value) == 1\n        test_item_val = 
c_return_value.get(ITEM_NAME)\n        assert isinstance(test_item_val, dict)\n        assert len(test_item_val) == 4\n        assert test_item_val.get(\"description\") == \"test description val\"\n        assert test_item_val.get(\"type\") == \"string\"\n        assert test_item_val.get(\"default\") == \"test default val\"\n        # use value val from storage\n        assert test_item_val.get(\"value\") == \"test value val storage\"\n        # return new dictionary, do not modify parameters passed in\n        assert test_config_new is not c_return_value\n        assert test_config_storage is not c_return_value\n        assert test_config_new is not test_config_storage\n\n    async def test__merge_category_vals_deprecated(self, reset_singleton, mocker):\n        _rv = await self.async_mock('CONCH')\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config_new = {\n            \"test_item_name1\": {\n                \"description\": \"test description val storage1\",\n                \"type\": \"string\",\n                \"default\": \"test default val storage1\",\n                \"value\": \"test value val storage1\",\n                \"deprecated\": \"true\"\n            },\n            \"test_item_name2\": {\n                \"description\": \"test description val2\",\n                \"type\": \"string\",\n                \"default\": \"test default val2\",\n                \"value\": \"test value val2\"\n            },\n        }\n        test_config_storage = {\n            \"test_item_name1\": {\n                \"description\": \"test description val storage1\",\n                \"type\": \"string\",\n                \"default\": \"test default val storage1\",\n                \"value\": \"test value val storage1\"\n            },\n            \"test_item_name2\": {\n                \"description\": \"test description val storage2\",\n                \"type\": 
\"string\",\n                \"default\": \"test default val storage2\",\n                \"value\": \"test value val storage2\"\n            },\n        }\n        expected_new_value = {\n            \"test_item_name2\": {\n                \"description\": \"test description val2\",\n                \"type\": \"string\",\n                \"default\": \"test default val2\",\n                \"value\": \"test value val storage2\"\n            },\n        }\n        mocker.patch.object(AuditLogger, '__init__', return_value=None)\n        mocker.patch.object(AuditLogger, 'information', return_value=_rv)\n        c_return_value = await c_mgr._merge_category_vals(test_config_new, test_config_storage,\n                                                          keep_original_items=True, category_name=CAT_NAME)\n        assert expected_new_value == c_return_value\n\n    async def test__merge_category_vals_with_list_name(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config_new = {\n            \"config_item\": {\n                \"description\": \"A list of variables\",\n                \"type\": \"list\",\n                \"items\": \"string\",\n                \"listName\": \"items\",\n                \"default\": \"{\\\"items\\\": [\\\"A\\\", \\\"B\\\"]}\",\n                \"value\": \"{\\\"items\\\": [\\\"E\\\", \\\"F\\\"]}\"\n            }\n        }\n        test_config_storage = {\n            \"config_item\": {\n                \"description\": \"A list of variables\",\n                \"type\": \"list\",\n                \"items\": \"string\",\n                \"listName\": \"items\",\n                \"default\": \"{\\\"items\\\": [\\\"A\\\", \\\"B\\\"]}\",\n                \"value\": \"{\\\"items\\\": [\\\"C\\\", \\\"D\\\"]}\"\n            }\n        }\n        c_return_value = await c_mgr._merge_category_vals(test_config_new, 
test_config_storage,\n                                                          keep_original_items=False, category_name=CAT_NAME)\n        assert test_config_storage == c_return_value\n\n    async def test__merge_category_vals_no_mutual_items_ignore_original(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config_new = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\",\n                \"value\": \"test value val\"\n            },\n        }\n        test_config_storage = {\n            \"test_item_name_storage\": {\n                \"description\": \"test description val storage\",\n                \"type\": \"string\",\n                \"default\": \"test default val storage\",\n                \"value\": \"test value val storage\"\n            },\n        }\n        c_return_value = await c_mgr._merge_category_vals(test_config_new, test_config_storage,\n                                                          keep_original_items=False, category_name=CAT_NAME)\n        assert isinstance(c_return_value, dict)\n        # ignore \"test_item_name_storage\" and include ITEM_NAME\n        assert len(c_return_value) == 1\n        test_item_val = c_return_value.get(ITEM_NAME)\n        assert isinstance(test_item_val, dict)\n        assert len(test_item_val) == 4\n        assert test_item_val.get(\"description\") == \"test description val\"\n        assert test_item_val.get(\"type\") == \"string\"\n        assert test_item_val.get(\"default\") == \"test default val\"\n        assert test_item_val.get(\"value\") == \"test value val\"\n        assert test_config_new is not c_return_value\n        assert test_config_storage is not c_return_value\n        assert test_config_new is not test_config_storage\n\n    async def 
test__merge_category_vals_no_mutual_items_include_original(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_config_new = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\",\n                \"value\": \"test value val\"\n            },\n        }\n        test_config_storage = {\n            \"test_item_name_storage\": {\n                \"description\": \"test description val storage\",\n                \"type\": \"string\",\n                \"default\": \"test default val storage\",\n                \"value\": \"test value val storage\"\n            },\n        }\n        c_return_value = await c_mgr._merge_category_vals(test_config_new, test_config_storage,\n                                                          keep_original_items=True, category_name=CAT_NAME)\n        assert isinstance(c_return_value, dict)\n        # include \"test_item_name_storage\" and ITEM_NAME\n        assert len(c_return_value) == 2\n        test_item_val = c_return_value.get(ITEM_NAME)\n        assert isinstance(test_item_val, dict)\n        assert len(test_item_val) == 4\n        assert test_item_val.get(\"description\") == \"test description val\"\n        assert test_item_val.get(\"type\") == \"string\"\n        assert test_item_val.get(\"default\") == \"test default val\"\n        assert test_item_val.get(\"value\") == \"test value val\"\n        test_item_val = c_return_value.get(\"test_item_name_storage\")\n        assert isinstance(test_item_val, dict)\n        assert len(test_item_val) == 4\n        assert test_item_val.get(\"description\") == \"test description val storage\"\n        assert test_item_val.get(\"type\") == \"string\"\n        assert test_item_val.get(\"default\") == \"test default val storage\"\n        
assert test_item_val.get(\"value\") == \"test value val storage\"\n        assert test_config_new is not c_return_value\n        assert test_config_storage is not c_return_value\n        assert test_config_new is not test_config_storage\n\n    p1 = {ITEM_NAME: {\"description\": \"test description val\", \"type\": \"string\", \"default\": \"test default val\",\n                      \"value\": \"test value val\", \"readonly\": \"true\"}}\n    p2 = {ITEM_NAME: {\"description\": \"test description val\", \"type\": \"string\", \"default\": \"test default val\",\n                      \"value\": \"test value val\", \"readonly\": \"false\", \"order\": 3}}\n    p3 = {ITEM_NAME: {\"description\": \"test description val\", \"type\": \"string\", \"default\": \"test default val\",\n                      \"value\": \"test value val\", \"readonly\": \"true\", \"order\": 3, \"length\": 80}}\n    p4 = {ITEM_NAME: {\"description\": \"test description val\", \"type\": \"integer\", \"default\": \"1\", \"minimum\": 0,\n                      \"maximum\": 5}}\n    p5 = {\"test_item_name_storage\": {\"description\": \"\", \"type\": \"integer\", \"default\": \"10\", \"value\": \"100\",\n                                     \"minimum\": 10, \"maximum\": 100, \"order\": 1, \"displayName\": \"test\"}}\n    p6 = {\"test_item_name_storage\": {\"description\": \"\", \"type\": \"integer\", \"default\": \"3\", \"value\": \"100\",\n                                     \"rule\": \"value < 200\", \"order\": 1}}\n\n    @pytest.mark.parametrize(\"idx, new_config, keep_original_items\", [\n        (1, p1, False), (2, p2, False), (3, p3, False), (4, p4, False), (5, p5, False), (6, p6, False),\n        (1, p1, True), (2, p2, True), (3, p3, True), (4, p4, True), (5, p5, True), (6, p6, True),\n    ])\n    async def test__merge_category_vals_with_optional_attributes(self, reset_singleton, idx, new_config,\n                                                                 keep_original_items):\n        def 
verify_data_ignore_original_items():\n            assert len(actual_result) == 1\n            actual = list(actual_result.values())[0]\n            if idx == 1:\n                assert ITEM_NAME in actual_result\n                assert 'test_item_name_storage' not in actual_result\n                assert len(actual) == 5\n                assert actual['description'] == \"test description val\"\n                assert actual['type'] == \"string\"\n                assert actual['default'] == \"test default val\"\n                assert actual['value'] == \"test value val\"\n                assert actual[\"readonly\"] == \"true\"\n            elif idx == 2:\n                assert ITEM_NAME in actual_result\n                assert 'test_item_name_storage' not in actual_result\n                assert len(actual) == 6\n                assert actual['description'] == \"test description val\"\n                assert actual['type'] == \"string\"\n                assert actual['default'] == \"test default val\"\n                assert actual['value'] == \"test value val\"\n                assert actual[\"readonly\"] == \"false\"\n                assert actual['order'] == 3\n            elif idx == 3:\n                assert ITEM_NAME in actual_result\n                assert 'test_item_name_storage' not in actual_result\n                assert len(actual) == 7\n                assert actual['description'] == \"test description val\"\n                assert actual['type'] == \"string\"\n                assert actual['default'] == \"test default val\"\n                assert actual['value'] == \"test value val\"\n                assert actual[\"readonly\"] == \"true\"\n                assert actual['order'] == 3\n                assert actual['length'] == 80\n            elif idx == 4:\n                assert ITEM_NAME in actual_result\n                assert 'test_item_name_storage' not in actual_result\n                assert len(actual) == 6\n                assert 
actual['description'] == \"test description val\"\n                assert actual['type'] == \"integer\"\n                assert actual['default'] == \"1\"\n                assert actual['value'] == \"1\"\n                assert actual['minimum'] == 0\n                assert actual['maximum'] == 5\n            elif idx == 5:\n                assert ITEM_NAME not in actual_result\n                assert 'test_item_name_storage' in actual_result\n                assert len(actual) == 8\n                assert actual['description'] == \"\"\n                assert actual['type'] == \"integer\"\n                assert actual['default'] == \"10\"\n                assert actual['value'] == \"100\"\n                assert actual[\"minimum\"] == 10\n                assert actual['maximum'] == 100\n                assert actual['order'] == 1\n                assert actual['displayName'] == \"test\"\n            elif idx == 6:\n                assert ITEM_NAME not in actual_result\n                assert 'test_item_name_storage' in actual_result\n                assert len(actual) == 6\n                assert actual['description'] == \"\"\n                assert actual['type'] == \"integer\"\n                assert actual['default'] == \"3\"\n                assert actual['value'] == \"100\"\n                assert actual[\"order\"] == 1\n                assert actual['rule'] == \"value < 200\"\n\n        def verify_data_include_original_items():\n            assert len(actual_result) == 2\n            item_name1 = ITEM_NAME\n            item_name2 = 'test_item_name_storage'\n            assert item_name1 in actual_result\n            assert item_name2 in actual_result\n            actual_item1 = actual_result[item_name1]\n            actual_item2 = actual_result[item_name2]\n            if idx == 1:\n                assert len(actual_item1) == 5\n                assert actual_item1['description'] == \"test description val\"\n                assert actual_item1['type'] == 
\"string\"\n                assert actual_item1['default'] == \"test default val\"\n                assert actual_item1['value'] == \"test value val\"\n                assert actual_item1[\"readonly\"] == \"true\"\n                assert len(actual_item2) == 7\n                assert actual_item2['description'] == \"\"\n                assert actual_item2['type'] == \"integer\"\n                assert actual_item2['default'] == \"10\"\n                assert actual_item2['value'] == \"100\"\n                assert actual_item2[\"minimum\"] == 20\n                assert actual_item2[\"maximum\"] == 200\n                assert actual_item2[\"order\"] == 1\n            elif idx == 2:\n                assert len(actual_item1) == 6\n                assert actual_item1['description'] == \"test description val\"\n                assert actual_item1['type'] == \"string\"\n                assert actual_item1['default'] == \"test default val\"\n                assert actual_item1['value'] == \"test value val\"\n                assert actual_item1[\"readonly\"] == \"false\"\n                assert actual_item1[\"order\"] == 3\n                assert len(actual_item2) == 7\n                assert actual_item2['description'] == \"\"\n                assert actual_item2['type'] == \"integer\"\n                assert actual_item2['default'] == \"10\"\n                assert actual_item2['value'] == \"100\"\n                assert actual_item2[\"minimum\"] == 20\n                assert actual_item2[\"maximum\"] == 200\n                assert actual_item2[\"order\"] == 1\n            elif idx == 3:\n                assert len(actual_item1) == 7\n                assert actual_item1['description'] == \"test description val\"\n                assert actual_item1['type'] == \"string\"\n                assert actual_item1['default'] == \"test default val\"\n                assert actual_item1['value'] == \"test value val\"\n                assert actual_item1[\"readonly\"] == \"true\"\n 
               assert actual_item1[\"order\"] == 3\n                assert actual_item1['length'] == 80\n                assert len(actual_item2) == 7\n                assert actual_item2['description'] == \"\"\n                assert actual_item2['type'] == \"integer\"\n                assert actual_item2['default'] == \"10\"\n                assert actual_item2['value'] == \"100\"\n                assert actual_item2[\"minimum\"] == 20\n                assert actual_item2[\"maximum\"] == 200\n                assert actual_item2[\"order\"] == 1\n            elif idx == 4:\n                assert len(actual_item1) == 6\n                assert actual_item1['description'] == \"test description val\"\n                assert actual_item1['type'] == \"integer\"\n                assert actual_item1['default'] == \"1\"\n                assert actual_item1['value'] == \"1\"\n                assert actual_item1[\"minimum\"] == 0\n                assert actual_item1['maximum'] == 5\n                assert len(actual_item2) == 7\n                assert actual_item2['description'] == \"\"\n                assert actual_item2['type'] == \"integer\"\n                assert actual_item2['default'] == \"10\"\n                assert actual_item2['value'] == \"100\"\n                assert actual_item2[\"minimum\"] == 20\n                assert actual_item2[\"maximum\"] == 200\n                assert actual_item2[\"order\"] == 1\n            elif idx == 5:\n                assert len(actual_item1) == 6\n                assert actual_item1['description'] == \"test description val\"\n                assert actual_item1['type'] == \"string\"\n                assert actual_item1['default'] == \"test default val\"\n                assert actual_item1['value'] == \"test value val\"\n                assert actual_item1[\"readonly\"] == \"false\"\n                assert actual_item1[\"order\"] == 2\n                assert len(actual_item2) == 8\n                assert 
actual_item2['description'] == \"\"\n                assert actual_item2['type'] == \"integer\"\n                assert actual_item2['default'] == \"10\"\n                assert actual_item2['value'] == \"100\"\n                assert actual_item2[\"minimum\"] == 10\n                assert actual_item2[\"maximum\"] == 100\n                assert actual_item2[\"order\"] == 1\n                assert actual_item2[\"displayName\"] == \"test\"\n            elif idx == 6:\n                assert len(actual_item1) == 6\n                assert actual_item1['description'] == \"test description val\"\n                assert actual_item1['type'] == \"string\"\n                assert actual_item1['default'] == \"test default val\"\n                assert actual_item1['value'] == \"test value val\"\n                assert actual_item1[\"readonly\"] == \"false\"\n                assert actual_item1[\"order\"] == 2\n                assert len(actual_item2) == 6\n                assert actual_item2['description'] == \"\"\n                assert actual_item2['type'] == \"integer\"\n                assert actual_item2['default'] == \"3\"\n                assert actual_item2['value'] == \"100\"\n                assert actual_item2[\"order\"] == 1\n                assert actual_item2[\"rule\"] == \"value < 200\"\n\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        storage_config = {\n            \"test_item_name_storage\": {\n                \"description\": \"\",\n                \"type\": \"integer\",\n                \"default\": \"10\",\n                \"value\": \"100\",\n                \"minimum\": 20,\n                \"maximum\": 200,\n                \"order\": 1\n            },\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": \"string\",\n                \"default\": \"test default val\",\n                \"value\": \"test value 
val\",\n                \"readonly\": \"false\",\n                \"order\": 2\n            }\n        }\n        actual_result = await c_mgr._merge_category_vals(\n            new_config, storage_config, keep_original_items=keep_original_items, category_name=CAT_NAME)\n        assert isinstance(actual_result, dict)\n        if keep_original_items:\n            verify_data_include_original_items()\n        else:\n            verify_data_ignore_original_items()\n\n    @pytest.mark.parametrize(\"payload, message\", [\n        ((2, 'catvalue', 'catdesc'), \"category_name must be a string\"),\n        (('catname', 'catvalue', 3), \"category_description must be a string\")\n    ])\n    async def test_bad_create_category(self, reset_singleton, payload, message):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(Exception) as excinfo:\n            await c_mgr.create_category(category_name=payload[0], category_value=payload[1],\n                                        category_description=payload[2])\n        assert excinfo.type is TypeError\n        assert message == str(excinfo.value)\n\n    @pytest.mark.parametrize(\"rule\", [\n        'value * 3 == 6',\n        'value > 4',\n        'value % 2 == 0',\n        'value * (value + 1) == 9',\n        '(value + 1) / (value - 1) >= 3',\n        'sqrt(value) < 1',\n        'pow(value, value) != 27',\n        'value ^ value == 2',\n        'factorial(value) != 6',\n        'fabs(value) != 3.0',\n        'ceil(value) != 3',\n        'floor(value) != 3',\n        'sin(value) <= 0',\n        'degrees(value) < 171'\n    ])\n    async def test_bad_rule_create_category(self, reset_singleton, rule):\n        d = {'info': {'rule': rule, 'default': '3', 'type': 'integer', 'description': 'Test', 'value': '3'}}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = 
ConfigurationManager(storage_client_mock)\n        _se = await self.async_mock(d)\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_validate_category_val', side_effect=[_se,\n                                                                                           Exception()]) as valpatch:\n                with pytest.raises(Exception) as excinfo:\n                    await c_mgr.create_category('catname', 'catvalue', 'catdesc')\n                assert excinfo.type is ValueError\n                assert 'For catname category, The value of info is not valid, please supply a valid value' == str(excinfo.value)\n            valpatch.assert_called_once_with('catname', 'catvalue', True)\n        assert 1 == log_exc.call_count\n\n    async def test_create_category_good_newval_bad_storageval_good_update(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv = await self.async_mock({})\n        _se = await self.async_mock({})\n        _sr = await self.async_mock((False, None, None, None))\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_validate_category_val', side_effect=[_se, Exception()]) as valpatch:\n                with patch.object(ConfigurationManager, '_read_category_val', return_value=_rv) as readpatch:\n                    with patch.object(ConfigurationManager, '_merge_category_vals') as mergepatch:\n                        with patch.object(ConfigurationManager, '_run_callbacks', return_value=_rv) as callbackpatch:\n                            with patch.object(ConfigurationManager, 'search_for_ACL_recursive_from_cat_name',\n                                              return_value=_sr) as searchaclpatch:\n                                cat = await c_mgr.create_category('catname', 'catvalue', 'catdesc')\n                 
               assert cat is None\n                            searchaclpatch.assert_called_once_with('catname')\n                        callbackpatch.assert_called_once_with('catname')\n                    mergepatch.assert_not_called()\n                readpatch.assert_called_once_with('catname')\n            valpatch.assert_has_calls([call('catname', 'catvalue', True), call('catname', {}, False)])\n        assert 1 == log_exc.call_count\n        log_exc.assert_called_once_with('category_value for category_name %s from storage is corrupted; using category_value without merge', 'catname')\n\n    async def test_create_category_good_newval_bad_storageval_bad_update(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        \n        _rv = await self.async_mock({})\n        _se = await self.async_mock({})\n        _sr = await self.async_mock((False, None, None, None))\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_validate_category_val', side_effect=[_se, Exception]) as valpatch:\n                with patch.object(ConfigurationManager, '_read_category_val', return_value=_rv) as readpatch:\n                    with patch.object(ConfigurationManager, '_merge_category_vals') as mergepatch:\n                        with patch.object(ConfigurationManager, '_run_callbacks', return_value=_rv) as callbackpatch:\n                            with patch.object(ConfigurationManager, 'search_for_ACL_recursive_from_cat_name',\n                                              return_value=_sr) as searchaclpatch:\n                                await c_mgr.create_category('catname', 'catvalue', 'catdesc')\n                            searchaclpatch.assert_called_once_with('catname')\n                        callbackpatch.assert_called_once_with('catname')\n                    mergepatch.assert_not_called()\n            
    readpatch.assert_called_once_with('catname')\n            valpatch.assert_has_calls([call('catname', 'catvalue', True), call('catname', {}, False)])\n        assert 1 == log_exc.call_count\n        calls = [call('category_value for category_name %s from storage is corrupted; using category_value without merge', 'catname')]\n        log_exc.assert_has_calls(calls, any_order=True)\n\n    # (merged_value)\n    async def test_create_category_good_newval_good_storageval_nochange(self, reset_singleton):\n        all_cat_names = [('rest_api', 'Fledge Admin and User REST API', 'rest_api'), ('catname', 'catdesc', 'catname')]\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv1 = await self.async_mock({})\n        _rv2 = await self.async_mock(all_cat_names)\n        _se = await self.async_mock({})\n        with patch.object(ConfigurationManager, '_validate_category_val', side_effect=[_se, _se]) as valpatch:\n            with patch.object(ConfigurationManager, '_read_category_val', return_value=_rv1) as readpatch:\n                with patch.object(ConfigurationManager, '_read_all_category_names', return_value=_rv2) as read_all_patch:\n                    with patch.object(ConfigurationManager, '_merge_category_vals', return_value=_rv1) as mergepatch:\n                        with patch.object(ConfigurationManager, '_run_callbacks') as callbackpatch:\n                            with patch.object(ConfigurationManager, '_update_category') as updatepatch:\n                                cat = await c_mgr.create_category('catname', 'catvalue', 'catdesc')\n                                assert cat is None\n                            updatepatch.assert_not_called()\n                        callbackpatch.assert_not_called()\n                    mergepatch.assert_called_once_with({}, {}, False, 'catname')\n                read_all_patch.assert_called_once_with()\n            
readpatch.assert_called_once_with('catname')\n        valpatch.assert_has_calls([call('catname', 'catvalue', True), call('catname', {}, False)])\n\n    async def test_create_category_good_newval_good_storageval_good_update(self, reset_singleton):\n        all_cat_names = [('rest_api', 'Fledge Admin and User REST API', 'rest_api'), ('catname', 'catdesc', 'catname')]\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        \n        _rv1 = await self.async_mock({})\n        _rv2 = await self.async_mock(all_cat_names)\n        _rv3 = await self.async_mock({'bla': 'bla'})\n        _rv4 = await self.async_mock(None)\n        _se = await self.async_mock({})\n        _sr = await self.async_mock((False, None, None, None))\n        with patch.object(ConfigurationManager, '_validate_category_val', side_effect=[_se, _se]) as valpatch:\n            with patch.object(ConfigurationManager, '_read_category_val', return_value=_rv1) as readpatch:\n                with patch.object(ConfigurationManager, '_read_all_category_names',\n                                  return_value=_rv2) as read_all_patch:\n                    with patch.object(ConfigurationManager, '_merge_category_vals', return_value=_rv3) as mergepatch:\n                        with patch.object(ConfigurationManager, '_run_callbacks', return_value=_rv4) as callbackpatch:\n                            with patch.object(ConfigurationManager, '_update_category',\n                                              return_value=_rv4) as updatepatch:\n                                with patch.object(AuditLogger, '__init__', return_value=None):\n                                    with patch.object(AuditLogger, 'information', return_value=_rv4) as auditinfopatch:\n                                        with patch.object(ConfigurationManager,\n                                                          'search_for_ACL_recursive_from_cat_name',\n           
                                               return_value=_sr) as searchaclpatch:\n                                            cat = await c_mgr.create_category('catname', 'catvalue', 'catdesc')\n                                            assert cat is None\n                                        searchaclpatch.assert_called_once_with('catname')\n                                    auditinfopatch.assert_called_once_with(\n                                        'CONCH', {'category': 'catname', 'item': 'configurationChange', 'oldValue': {},\n                                                  'newValue': {'bla': 'bla'}})\n                            updatepatch.assert_called_once_with('catname', {'bla': 'bla'}, 'catdesc', 'catname')\n                        callbackpatch.assert_called_once_with('catname')\n                    mergepatch.assert_called_once_with({}, {}, False, 'catname')\n                read_all_patch.assert_called_once_with()\n            readpatch.assert_called_once_with('catname')\n        valpatch.assert_has_calls([call('catname', 'catvalue', True), call('catname', {}, False)])\n\n    async def test_create_category_good_newval_good_storageval_bad_update(self, reset_singleton):\n        all_cat_names = [('rest_api', 'Fledge Admin and User REST API', 'rest_api'), ('catname', 'catdesc', 'catname')]\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        \n        _rv1 = await self.async_mock({})\n        _rv2 = await self.async_mock(all_cat_names)\n        _rv4 = await self.async_mock(None)\n        _se = await self.async_mock({})\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_validate_category_val', side_effect=[_se, _se]) as valpatch:\n                with patch.object(ConfigurationManager, '_read_category_val', return_value=_rv1) as readpatch:\n                    with 
patch.object(ConfigurationManager, '_read_all_category_names', return_value=_rv2) as read_all_patch:\n                        with patch.object(ConfigurationManager, '_merge_category_vals', side_effect=TypeError) as mergepatch:\n                            with patch.object(ConfigurationManager, '_run_callbacks') as callbackpatch:\n                                with pytest.raises(Exception) as excinfo:\n                                    await c_mgr.create_category('catname', 'catvalue', 'catdesc')\n                                assert excinfo.type is TypeError\n                            callbackpatch.assert_not_called()\n                        mergepatch.assert_called_once_with({}, {}, False, 'catname')\n                    read_all_patch.assert_called_once_with()\n                readpatch.assert_called_once_with('catname')\n            valpatch.assert_has_calls([call('catname', 'catvalue', True), call('catname', {}, False)])\n        assert 1 == log_exc.call_count\n        log_exc.assert_called_once_with('Unable to create new category based on category_name %s and category_description %s '\n                                        'and category_json_schema %s', 'catname', 'catdesc', {})\n\n    async def test_create_category_good_newval_no_storageval_good_create(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv1 = await self.async_mock({})\n        _rv2 = await self.async_mock(None)\n        _sr = await self.async_mock((False, None, None, None))\n        with patch.object(ConfigurationManager, '_validate_category_val', return_value=_rv1) as valpatch:\n            with patch.object(ConfigurationManager, '_read_category_val', return_value=_rv2) as readpatch:\n                with patch.object(ConfigurationManager, '_create_new_category', return_value=_rv2) as createpatch:\n                    with patch.object(ConfigurationManager, 
'_run_callbacks', return_value=_rv2) as callbackpatch:\n                        with patch.object(ConfigurationManager, 'search_for_ACL_recursive_from_cat_name',\n                                          return_value=_sr) as searchaclpatch:\n                            await c_mgr.create_category('catname', 'catvalue', \"catdesc\")\n                        searchaclpatch.assert_called_once_with('catname')\n                    callbackpatch.assert_called_once_with('catname')\n                createpatch.assert_called_once_with('catname', {}, 'catdesc', None)\n            readpatch.assert_called_once_with('catname')\n        valpatch.assert_called_once_with('catname', 'catvalue', True)\n\n    async def test_create_category_good_newval_no_storageval_bad_create(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        \n        _rv1 = await self.async_mock({})\n        _rv2 = await self.async_mock(None)\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_validate_category_val', return_value=_rv1) as valpatch:\n                with patch.object(ConfigurationManager, '_read_category_val', return_value=_rv2) as readpatch:\n                    with patch.object(ConfigurationManager, '_create_new_category', side_effect=StorageServerError(None, None, None)) as createpatch:\n                        with patch.object(ConfigurationManager, '_run_callbacks') as callbackpatch:\n                            with pytest.raises(StorageServerError):\n                                await c_mgr.create_category('catname', 'catvalue', \"catdesc\")\n                        callbackpatch.assert_not_called()\n                    createpatch.assert_called_once_with('catname', {}, 'catdesc', None)\n                readpatch.assert_called_once_with('catname')\n            valpatch.assert_called_once_with('catname', 'catvalue', 
True)\n        assert 1 == log_exc.call_count\n        log_exc.assert_called_once_with('Unable to create new category based on category_name %s and category_description %s and category_json_schema %s', 'catname', 'catdesc', {})\n\n    async def test_create_category_good_newval_keyerror_bad_create(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        \n        _rv1 = await self.async_mock({})\n        _rv2 = await self.async_mock(None)\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_validate_category_val', return_value=_rv1) as valpatch:\n                with patch.object(ConfigurationManager, '_read_category_val', return_value=_rv2) as readpatch:\n                    with patch.object(ConfigurationManager, '_create_new_category', side_effect=KeyError()) as createpatch:\n                        with patch.object(ConfigurationManager, '_run_callbacks') as callbackpatch:\n                            with pytest.raises(KeyError):\n                                await c_mgr.create_category('catname', 'catvalue', \"catdesc\")\n                        callbackpatch.assert_not_called()\n                    createpatch.assert_called_once_with('catname', {}, 'catdesc', None)\n                readpatch.assert_called_once_with('catname')\n            valpatch.assert_called_once_with('catname', 'catvalue', True)\n        assert 1 == log_exc.call_count\n        log_exc.assert_called_once_with('Unable to create new category based on category_name %s and category_description %s and category_json_schema %s', 'catname', 'catdesc', {})\n\n    async def test_create_category_bad_newval(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(_logger, 'exception') as log_exc:\n            with 
patch.object(ConfigurationManager, '_validate_category_val', side_effect=Exception()) as valpatch:\n                with patch.object(ConfigurationManager, '_read_category_val') as readpatch:\n                    with patch.object(ConfigurationManager, '_create_new_category') as createpatch:\n                        with patch.object(ConfigurationManager, '_run_callbacks') as callbackpatch:\n                            with pytest.raises(Exception):\n                                await c_mgr.create_category('catname', 'catvalue', \"catdesc\")\n                        callbackpatch.assert_not_called()\n                    createpatch.assert_not_called()\n                readpatch.assert_not_called()\n            valpatch.assert_called_once_with('catname', 'catvalue', True)\n        assert 1 == log_exc.call_count\n        log_exc.assert_called_once_with('Unable to create new category based on category_name %s and category_description %s and category_json_schema %s', 'catname', 'catdesc', '')\n\n    async def test_set_category_item_value_entry_good_update(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        category_name = 'catname'\n        item_name = 'itemname'\n        new_value_entry = 'newvalentry'\n        storage_value_entry = {'value': 'test', 'description': 'Test desc', 'type': 'string', 'default': 'test'}\n        c_mgr._cacheManager.update(category_name, \"desc\", {item_name: storage_value_entry})\n\n        _rv1 = await self.async_mock(storage_value_entry)\n        _rv2 = await self.async_mock(None)\n        with patch.object(ConfigurationManager, '_read_item_val', return_value=_rv1) as readpatch:\n            with patch.object(ConfigurationManager, '_update_value_val', return_value=_rv2) as updatepatch:\n                with patch.object(ConfigurationManager, '_run_callbacks', return_value=_rv2) as callbackpatch:\n                    await 
c_mgr.set_category_item_value_entry(category_name, item_name, new_value_entry)\n                callbackpatch.assert_called_once_with(category_name)\n            updatepatch.assert_called_once_with(category_name, item_name, new_value_entry, storage_value_entry.get('type'))\n        readpatch.assert_called_once_with(category_name, item_name)\n\n    @pytest.mark.parametrize(\"new_value_entry, storage_result, exc_name, exc_msg\", [\n        ('', {'value': 'test', 'description': 'Test desc', 'type': 'string', 'default': 'test',\n              'mandatory': 'true'}, ValueError, \"A value must be given for itemname\"),\n        ('', {'value': '{}', 'description': 'Test desc', 'type': 'JSON', 'default': '{}',\n              'mandatory': 'true'}, TypeError, \"Unrecognized value name for item_name itemname\"),\n        ({}, {'value': '{}', 'description': 'Test desc', 'type': 'JSON', 'default': '{}',\n              'mandatory': 'true'}, ValueError, \"Dict cannot be set as empty. A value must be given for itemname\"),\n        (1, {'value': '{}', 'description': 'Test desc', 'type': 'JSON', 'default': '{}',\n             'mandatory': 'true'}, TypeError, \"Unrecognized value name for item_name itemname\"),\n        (' ', {'value': 'test1', 'description': 'Test desc', 'type': 'string', 'default': 'test1',\n               'mandatory': 'true'}, ValueError, \"A value must be given for itemname\"),\n        ('5', {'rule': 'value*3==9', 'default': '3', 'description': 'Test', 'value': '3',\n               'type': 'integer'}, ValueError, \"The value of itemname is not valid, please supply a valid value\"),\n        ('blah', {'value': 'woo', 'default': 'woo', 'description': 'enum types', 'type': 'enumeration',\n                  'options': ['foo', 'woo']}, ValueError, \"new value does not exist in options enum\"),\n        ('', {'value': 'woo', 'default': 'woo', 'description': 'enum types', 'type': 'enumeration',\n              'options': ['foo', 'woo']}, ValueError, \"entry_val cannot 
be empty\"),\n    ])\n    async def test_set_category_item_value_entry_bad_update(self, reset_singleton, new_value_entry, storage_result,\n                                                            exc_name, exc_msg):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        category_name = 'catname'\n        item_name = 'itemname'\n        \n        _rv = await self.async_mock(storage_result)\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_read_item_val', return_value=_rv) as readpatch:\n                with patch.object(ConfigurationManager, '_run_callbacks') as callbackpatch:\n                    with pytest.raises(Exception) as excinfo:\n                        await c_mgr.set_category_item_value_entry(category_name, item_name, new_value_entry)\n                    assert excinfo.type is exc_name\n                    assert exc_msg == str(excinfo.value)\n                callbackpatch.assert_not_called()\n            readpatch.assert_called_once_with(category_name, item_name)\n        assert 1 == log_exc.call_count\n        log_exc.assert_called_once_with('Unable to set item value entry based on category_name %s and item_name %s and value_item_entry %s', category_name, item_name, new_value_entry)\n\n    async def test_set_category_item_value_entry_bad_storage(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        category_name = 'catname'\n        item_name = 'itemname'\n        new_value_entry = 'newvalentry'\n        \n        _rv = await self.async_mock(None)\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_read_item_val', return_value=_rv) as readpatch:\n                with patch.object(ConfigurationManager, '_update_value_val') as updatepatch:\n 
                   with patch.object(ConfigurationManager, '_run_callbacks') as callbackpatch:\n                        with pytest.raises(ValueError) as excinfo:\n                            await c_mgr.set_category_item_value_entry(category_name, item_name, new_value_entry)\n                        assert 'No detail found for the category_name: {} and item_name: {}'.format(category_name, item_name) in str(excinfo.value)\n                    callbackpatch.assert_not_called()\n                updatepatch.assert_not_called()\n            readpatch.assert_called_once_with(category_name, item_name)\n        assert 1 == log_exc.call_count\n        log_exc.assert_called_once_with('Unable to set item value entry based on category_name %s and item_name %s and value_item_entry %s', 'catname', 'itemname', 'newvalentry')\n\n    async def test_set_category_item_value_entry_no_change(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        category_name = 'catname'\n        item_name = 'itemname'\n        new_value_entry = 'newvalentry'\n\n        _rv = await self.async_mock(new_value_entry)\n        with patch.object(ConfigurationManager, '_read_item_val', return_value=_rv) as readpatch:\n            with patch.object(ConfigurationManager, '_update_value_val') as updatepatch:\n                with patch.object(ConfigurationManager, '_run_callbacks') as callbackpatch:\n                    await c_mgr.set_category_item_value_entry(category_name, item_name, new_value_entry)\n                callbackpatch.assert_not_called()\n            updatepatch.assert_not_called()\n        readpatch.assert_called_once_with(category_name, item_name)\n\n    async def test_set_category_item_invalid_type_value(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        category_name = 'catname'\n   
     item_name = 'itemname'\n        new_value_entry = 'newvalentry'\n        \n        _rv = await self.async_mock({'value': 'test', 'description': 'Test desc', 'type': 'boolean', 'default': 'test'})\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_read_item_val', return_value=_rv) as readpatch:\n                with pytest.raises(Exception) as excinfo:\n                    await c_mgr.set_category_item_value_entry(category_name, item_name, new_value_entry)\n                assert excinfo.type is TypeError\n                assert 'Unrecognized value name for item_name itemname' == str(excinfo.value)\n            readpatch.assert_called_once_with(category_name, item_name)\n        assert 1 == log_exc.call_count\n\n    async def test_set_category_item_value_entry_with_enum_type(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        category_name = 'catname'\n        item_name = 'itemname'\n        new_value_entry = 'foo'\n        storage_value_entry = {\"value\": \"woo\", \"default\": \"woo\", \"description\": \"enum types\", \"type\": \"enumeration\", \"options\": [\"foo\", \"woo\"]}\n        c_mgr._cacheManager.update(category_name, \"desc\", {item_name: storage_value_entry})\n\n        _rv1 = await self.async_mock(storage_value_entry)\n        _rv2 = await self.async_mock(None)\n        with patch.object(ConfigurationManager, '_read_item_val', return_value=_rv1) as readpatch:\n            with patch.object(ConfigurationManager, '_update_value_val', return_value=_rv2) as updatepatch:\n                with patch.object(ConfigurationManager, '_run_callbacks', return_value=_rv2) as callbackpatch:\n                    await c_mgr.set_category_item_value_entry(category_name, item_name, new_value_entry)\n                callbackpatch.assert_called_once_with(category_name)\n            
updatepatch.assert_called_once_with(category_name, item_name, new_value_entry, storage_value_entry.get('type'))\n        readpatch.assert_called_once_with(category_name, item_name)\n\n    @pytest.mark.parametrize(\"new_value_entry, message\", [\n        (\"\", \"entry_val cannot be empty\"),\n        (\"blah\", \"new value does not exist in options enum\")\n    ])\n    async def test_set_category_item_value_entry_with_enum_type_exceptions(self, new_value_entry, message):\n        cat = {\"default\": \"woo\", \"description\": \"enum types\", \"type\": \"enumeration\", \"options\": [\"foo\", \"woo\"]}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        category_name = 'catname'\n        item_name = 'itemname'\n\n        _rv = await self.async_mock(cat)\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_read_item_val', return_value=_rv) as readpatch:\n                with pytest.raises(Exception) as excinfo:\n                    await c_mgr.set_category_item_value_entry(category_name, item_name, new_value_entry)\n                assert excinfo.type is ValueError\n                assert message == str(excinfo.value)\n            readpatch.assert_called_once_with(category_name, item_name)\n        assert 1 == log_exc.call_count\n\n    async def test_set_category_item_value_entry_with_rule_optional_attribute(self):\n        cat = {'rule': 'value*3==9', 'default': '3', 'description': 'Test', 'value': '3', 'type': 'integer'}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        category_name = 'catname'\n        item_name = 'info'\n        new_value_entry = '13'\n        \n        _rv = await self.async_mock(cat)\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_read_item_val', 
return_value=_rv) as readpatch:\n                with pytest.raises(Exception) as excinfo:\n                    await c_mgr.set_category_item_value_entry(category_name, item_name, new_value_entry)\n                assert excinfo.type is ValueError\n                assert 'The value of {} is not valid, please supply a valid value'.format(item_name) == str(excinfo.value)\n            readpatch.assert_called_once_with(category_name, item_name)\n        assert 1 == log_exc.call_count\n\n    async def test_set_category_item_value_entry_with_list_name(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        category_name = 'catname'\n        item_name = 'itemname'\n        new_value_entry = \"[\\\"E\\\", \\\"F\\\"]\"\n        storage_value_entry = {'type': 'list', 'description': 'A list of variables', 'listName': 'items',\n                                        'items': 'string', 'default': '{\"items\": [\"A\", \"B\"]}',\n                                        'value': '{\"items\": [\"C\", \"D\"]}'}\n        modified_new_value_entry = json.dumps({storage_value_entry['listName']: json.loads(new_value_entry)})\n        c_mgr._cacheManager.update(category_name, \"desc\", {item_name: storage_value_entry})\n        _rv1 = await self.async_mock(storage_value_entry)\n        _rv2 = await self.async_mock(None)\n        with patch.object(ConfigurationManager, '_read_item_val', return_value=_rv1) as patch_read:\n            with patch.object(ConfigurationManager, '_update_value_val', return_value=_rv2) as patch_update:\n                with patch.object(ConfigurationManager, '_run_callbacks', return_value=_rv2) as patch_callback:\n                    await c_mgr.set_category_item_value_entry(category_name, item_name, new_value_entry)\n                patch_callback.assert_called_once_with(category_name)\n            patch_update.assert_called_once_with(category_name, item_name, 
modified_new_value_entry, storage_value_entry.get('type'))\n        patch_read.assert_called_once_with(category_name, item_name)\n\n    async def test_get_all_category_names_good(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv = await self.async_mock('bla')\n        with patch.object(ConfigurationManager, '_read_all_category_names', return_value=_rv) as readpatch:\n            ret_val = await c_mgr.get_all_category_names()\n            assert 'bla' == ret_val\n        readpatch.assert_called_once_with()\n\n    @pytest.mark.parametrize(\"value\", [\n        \"True\", \"False\"\n    ])\n    async def test_get_all_category_names_with_root(self, reset_singleton, value):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv = await self.async_mock('bla')\n        with patch.object(ConfigurationManager, '_read_all_groups', return_value=_rv) as readpatch:\n            ret_val = await c_mgr.get_all_category_names(root=value)\n            assert 'bla' == ret_val\n        readpatch.assert_called_once_with(value, False)\n\n    async def test_get_all_category_names_bad(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_read_all_category_names', side_effect=Exception()) as readpatch:\n                with pytest.raises(Exception):\n                    await c_mgr.get_all_category_names()\n            readpatch.assert_called_once_with()\n        assert 1 == log_exc.call_count\n        log_exc.assert_called_once_with('Unable to read all category names')\n\n    async def test_get_category_all_items_good(self, reset_singleton):\n        category_name = 'catname'\n        
cat_value = {\"config_item\": {\"type\": \"string\", \"default\": \"blah\", \"description\": \"Des\", \"value\": \"blah\"}}\n        cat_info = {'display_name': category_name, 'key': category_name, 'description': 'Test Des', \"value\": cat_value}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv = await self.async_mock(cat_info)\n        with patch.object(ConfigurationManager, '_read_category', return_value=_rv) as readpatch:\n            ret_val = await c_mgr.get_category_all_items(category_name)\n            assert cat_value == ret_val\n        readpatch.assert_called_once_with(category_name)\n\n    async def test_get_category_all_items_bad(self, reset_singleton):\n        category_name = 'catname'\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_read_category', side_effect=Exception()) as readpatch:\n                with pytest.raises(Exception):\n                    await c_mgr.get_category_all_items(category_name)\n            readpatch.assert_called_once_with(category_name)\n        assert 1 == log_exc.call_count\n        log_exc.assert_called_once_with('Unable to get all category items of {} category.'.format(category_name))\n\n    async def test_get_category_item_good(self, reset_singleton):\n        category_name = 'catname'\n        item_name = 'item_name'\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv1 = await self.async_mock('bla')\n        _rv2 = await self.async_mock(None)\n        with patch.object(ConfigurationManager, '_read_item_val', return_value=_rv1) as read_item_patch:\n            with patch.object(ConfigurationManager, '_read_category', return_value=_rv2) as 
read_cat_patch:\n                ret_val = await c_mgr.get_category_item(category_name, item_name)\n                assert 'bla' == ret_val\n            read_cat_patch.assert_called_once_with(category_name)\n        read_item_patch.assert_called_once_with(category_name, item_name)\n\n    async def test_get_category_item_bad(self, reset_singleton):\n        category_name = 'catname'\n        item_name = 'item_name'\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_read_item_val', side_effect=Exception()) as readpatch:\n                with pytest.raises(Exception):\n                    await c_mgr.get_category_item(category_name, item_name)\n            readpatch.assert_called_once_with(category_name, item_name)\n        assert 1 == log_exc.call_count\n        log_exc.assert_called_once_with('Unable to get category item based on category_name %s and item_name %s', 'catname', 'item_name')\n\n    async def test_get_category_item_value_entry_good(self, reset_singleton):\n        category_name = 'catname'\n        item_name = 'item_name'\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv = await self.async_mock('bla')\n        with patch.object(ConfigurationManager, '_read_value_val', return_value=_rv) as readpatch:\n            ret_val = await c_mgr.get_category_item_value_entry(category_name, item_name)\n            assert 'bla' == ret_val\n        readpatch.assert_called_once_with(category_name, item_name)\n\n    async def test_get_category_item_value_entry_bad(self, reset_singleton):\n        category_name = 'catname'\n        item_name = 'item_name'\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with 
patch.object(_logger, 'exception') as log_exc:\n            with patch.object(ConfigurationManager, '_read_value_val', side_effect=Exception()) as readpatch:\n                with pytest.raises(Exception):\n                    await c_mgr.get_category_item_value_entry(category_name, item_name)\n            readpatch.assert_called_once_with(category_name, item_name)\n        assert 1 == log_exc.call_count\n        log_exc.assert_called_once_with('Unable to get the \"value\" entry based on category_name %s and item_name %s', 'catname', 'item_name')\n\n    async def test__create_new_category_good(self, reset_singleton):\n        category_name = 'catname'\n        category_val = 'catval'\n        category_description = 'catdesc'\n        category_response = {'response': [{'display_name': 'catname', 'category_name': 'catname',\n                                           'category_val': 'catval', 'description': 'catdesc'}]}\n        _rv = await self.async_mock(None)\n        _attr = await self.async_mock(category_response)\n        attrs = {\"insert_into_tbl.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)      \n\n        with patch.object(AuditLogger, '__init__', return_value=None):\n            with patch.object(AuditLogger, 'information', return_value=_rv) as auditinfopatch:\n                with patch.object(PayloadBuilder, '__init__', return_value=None):\n                    with patch.object(PayloadBuilder, 'INSERT', return_value=PayloadBuilder) as pbinsertpatch:\n                        with patch.object(PayloadBuilder, 'payload', return_value=None) as pbpayloadpatch:\n                            await c_mgr._create_new_category(category_name, category_val, category_description)\n                        pbpayloadpatch.assert_called_once_with()\n                    pbinsertpatch.assert_called_once_with(display_name=category_name, 
description=category_description,\n                                                          key=category_name, value=category_val)\n            auditinfopatch.assert_called_once_with('CONAD', {'category': category_val, 'name': category_name})\n        storage_client_mock.insert_into_tbl.assert_called_once_with(\n            'configuration', None)\n\n    async def test_create_new_category_deprecated(self, reset_singleton):\n        category_name = 'catname'\n        category_val = {\n            \"test_item_name1\": {\n                \"description\": \"test description val1\",\n                \"type\": \"string\",\n                \"default\": \"test default val1\",\n                \"deprecated\": \"true\"\n            },\n            \"test_item_name2\": {\n                \"description\": \"test description val2\",\n                \"type\": \"string\",\n                \"default\": \"test default val2\"\n            },\n\n        }\n        category_val_actual = {\n            \"test_item_name2\": {\n                \"description\": \"test description val2\",\n                \"type\": \"string\",\n                \"default\": \"test default val2\"\n            },\n        }\n\n        category_description = 'catdesc'\n        category_response = {'response': [\n            {'category_name': 'catname', 'category_val': 'catval', 'description': 'catdesc'}]}\n\n        _rv = await self.async_mock(None)\n        _attr = await self.async_mock(category_response)\n        attrs = {\"insert_into_tbl.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(AuditLogger, '__init__', return_value=None):\n            with patch.object(AuditLogger, 'information', return_value=_rv) as auditinfopatch:\n                with patch.object(PayloadBuilder, '__init__', return_value=None):\n                    with patch.object(PayloadBuilder, 'INSERT', 
return_value=PayloadBuilder) as pbinsertpatch:\n                        with patch.object(PayloadBuilder, 'payload', return_value=None) as pbpayloadpatch:\n                            await c_mgr._create_new_category(category_name, category_val, category_description)\n                        pbpayloadpatch.assert_called_once_with()\n                    pbinsertpatch.assert_called_once_with(display_name=category_name, description=category_description,\n                                                          key=category_name, value=category_val_actual)\n            auditinfopatch.assert_called_once_with('CONAD', {'category': category_val_actual, 'name': category_name})\n        storage_client_mock.insert_into_tbl.assert_called_once_with('configuration', None)\n\n    async def test_create_new_category_with_list_name(self, reset_singleton):\n        category_name = 'catname'\n        category_val = {\n            \"config_item\": {\n                \"description\": \"A list of variables\",\n                \"type\": \"list\",\n                \"items\": \"string\",\n                \"listName\": \"items\",\n                \"default\": \"{\\\"items\\\": [\\\"A\\\", \\\"B\\\"]}\",\n                \"value\": \"{\\\"items\\\": [\\\"A\\\", \\\"B\\\"]}\"\n            }\n        }\n        category_description = 'catdesc'\n        category_response = {'response': [\n            {'category_name': 'catname', 'category_val': 'catval', 'description': 'catdesc'}]}\n        _rv = await self.async_mock(None)\n        _attr = await self.async_mock(category_response)\n        attrs = {\"insert_into_tbl.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(AuditLogger, '__init__', return_value=None):\n            with patch.object(AuditLogger, 'information', return_value=_rv) as patch_audit_info:\n                with patch.object(PayloadBuilder, 
'__init__', return_value=None):\n                    with patch.object(PayloadBuilder, 'INSERT', return_value=PayloadBuilder) as patch_insert:\n                        with patch.object(PayloadBuilder, 'payload', return_value=None) as patch_payload:\n                            await c_mgr._create_new_category(category_name, category_val, category_description)\n                        patch_payload.assert_called_once_with()\n                    patch_insert.assert_called_once_with(display_name=category_name, description=category_description,\n                                                          key=category_name, value=category_val)\n            patch_audit_info.assert_called_once_with('CONAD', {'category': category_val,\n                                                               'name': category_name})\n        storage_client_mock.insert_into_tbl.assert_called_once_with('configuration', None)\n\n    async def test__read_all_category_names_1_row(self, reset_singleton):\n        rows = {'rows': [{'key': 'key1', 'description': 'description1', 'display_name': 'display key'}]}\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n\n        c_mgr = ConfigurationManager(storage_client_mock)\n        ret_val = await c_mgr._read_all_category_names()\n        args, kwargs = storage_client_mock.query_tbl_with_payload.call_args\n        assert 'configuration' == args[0]\n        p = json.loads(args[1])\n        assert {\"return\": [\"key\", \"description\", \"value\", \"display_name\", {\"column\": \"ts\", \"alias\": \"timestamp\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}]} == p\n        assert [('key1', 'description1', 'display key')] == ret_val\n\n    async def test__read_all_category_names_2_row(self, reset_singleton):\n        rows = {'rows': [{'key': 'key1', 'description': 'description1', 'display_name': 'display key1'},\n         
                {'key': 'key2', 'description': 'description2', 'display_name': 'display key2'}]}\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        ret_val = await c_mgr._read_all_category_names()\n        args, kwargs = storage_client_mock.query_tbl_with_payload.call_args\n        assert 'configuration' == args[0]\n        p = json.loads(args[1])\n        assert {\"return\": [\"key\", \"description\", \"value\", \"display_name\", {\"column\": \"ts\", \"alias\": \"timestamp\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}]} == p\n        assert [('key1', 'description1', 'display key1'), ('key2', 'description2', 'display key2')] == ret_val\n\n    async def test__read_all_category_names_0_row(self, reset_singleton):\n        rows = {'rows': []}\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr }\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        ret_val = await c_mgr._read_all_category_names()\n        args, kwargs = storage_client_mock.query_tbl_with_payload.call_args\n        assert 'configuration' == args[0]\n        p = json.loads(args[1])\n        assert {\"return\": [\"key\", \"description\", \"value\", \"display_name\", {\"column\": \"ts\", \"alias\": \"timestamp\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}]} == p\n        assert [] == ret_val\n\n    async def test__read_category_0_row(self, reset_singleton):\n        rows = {\"rows\": []}\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        ret_val = await 
c_mgr._read_category(CAT_NAME)\n        assert ret_val is None\n        args, kwargs = storage_client_mock.query_tbl_with_payload.call_args\n        assert 'configuration' == args[0]\n        p = json.loads(args[1])\n        assert {\"return\": [\"key\", \"description\", \"value\", \"display_name\",\n                           {\"column\": \"ts\", \"alias\": \"timestamp\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}],\n                \"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": CAT_NAME}, \"limit\": 1} == p\n\n    async def test__read_category_1_row(self, reset_singleton):\n        storage_result = [{'display_name': CAT_NAME, 'key': CAT_NAME, 'description': 'Test Des',\n                           'value': {'config_item': {'default': 'blah', 'value': 'blah', 'description': 'Des',\n                                                     'type': 'string'}}}]\n        rows = {\"rows\": storage_result, \"count\": 1}\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        ret_val = await c_mgr._read_category(CAT_NAME)\n        assert storage_result[0] == ret_val\n        args, kwargs = storage_client_mock.query_tbl_with_payload.call_args\n        assert 'configuration' == args[0]\n        p = json.loads(args[1])\n        assert {\"return\": [\"key\", \"description\", \"value\", \"display_name\",\n                           {\"column\": \"ts\", \"alias\": \"timestamp\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}],\n                \"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": CAT_NAME},  \"limit\": 1} == p\n\n    @pytest.mark.parametrize(\"value, expected_result\", [\n        (True, [('General', 'General', 'GEN'), ('Advanced', 'Advanced', 'ADV')]),\n        (False, [('service', 'Fledge service', 'SERV'), ('rest_api', 'User REST API', 
'API')])\n    ])\n    async def test__read_all_groups(self, reset_singleton, value, expected_result):\n        async def q_result(*args):\n            table = args[0]\n            payload = json.loads(args[1])\n            if table == \"configuration\":\n                assert {\"return\": [\"key\", \"description\", \"display_name\"]} == payload\n                return {\"rows\": [{\"key\": \"General\", \"description\": \"General\", \"display_name\": \"GEN\"}, {\"key\": \"Advanced\", \"description\": \"Advanced\", \"display_name\": \"ADV\"}, {\"key\": \"service\", \"description\": \"Fledge service\", \"display_name\": \"SERV\"}, {\"key\": \"rest_api\", \"description\": \"User REST API\", \"display_name\": \"API\"}], \"count\": 4}\n\n            if table == \"category_children\":\n                assert {\"return\": [\"child\"], \"modifier\": \"distinct\"} == payload\n                return {\"rows\": [{\"child\": \"SMNTR\"}, {\"child\": \"service\"}, {\"child\": \"rest_api\"}], \"count\": 3}\n\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result) as query_tbl_patch:\n            ret_val = await c_mgr._read_all_groups(root=value, children=False)\n            assert expected_result == ret_val\n        assert 2 == query_tbl_patch.call_count\n\n    async def test__read_category_val_1_row(self, reset_singleton):\n        rows = {'rows': [{'value': 'value1'}]}\n        category_name = 'catname'\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(PayloadBuilder, '__init__', return_value=None):\n            with patch.object(PayloadBuilder, 'SELECT', return_value=PayloadBuilder) as 
pbselectpatch:\n                with patch.object(PayloadBuilder, 'WHERE', return_value=PayloadBuilder) as pbwherepatch:\n                    with patch.object(PayloadBuilder, 'payload', return_value=None) as pbpayloadpatch:\n                        ret_val = await c_mgr._read_category_val(category_name)\n                        assert 'value1' == ret_val\n                    pbpayloadpatch.assert_called_once_with()\n                pbwherepatch.assert_called_once_with([\"key\", \"=\", category_name])\n            pbselectpatch.assert_called_once_with('value')\n        storage_client_mock.query_tbl_with_payload.assert_called_once_with(\n            'configuration', None)\n\n    async def test__read_category_val_0_row(self, reset_singleton):\n        rows = {'rows': []}\n        category_name = 'catname'\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(PayloadBuilder, '__init__', return_value=None):\n            with patch.object(PayloadBuilder, 'SELECT', return_value=PayloadBuilder) as pbselectpatch:\n                with patch.object(PayloadBuilder, 'WHERE', return_value=PayloadBuilder) as pbwherepatch:\n                    with patch.object(PayloadBuilder, 'payload', return_value=None) as pbpayloadpatch:\n                        ret_val = await c_mgr._read_category_val(category_name)\n                        assert ret_val is None\n                    pbpayloadpatch.assert_called_once_with()\n                pbwherepatch.assert_called_once_with([\"key\", \"=\", category_name])\n            pbselectpatch.assert_called_once_with('value')\n        storage_client_mock.query_tbl_with_payload.assert_called_once_with(\n            'configuration', None)\n\n    async def test__read_item_val_0_row(self, reset_singleton):\n        rows = {'rows': []}\n 
       category_name = 'catname'\n        item_name = 'itemname'\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        ret_val = await c_mgr._read_item_val(category_name, item_name)\n        assert ret_val is None\n\n    async def test__read_item_val_1_row(self, reset_singleton):\n        rows = {'rows': [{'value': 'value1'}]}\n        category_name = 'catname'\n        item_name = 'itemname'\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        ret_val = await c_mgr._read_item_val(category_name, item_name)\n        assert ret_val == 'value1'\n\n    async def test__read_value_val_0_row(self, reset_singleton):\n        rows = {'rows': []}\n        category_name = 'catname'\n        item_name = 'itemname'\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        ret_val = await c_mgr._read_value_val(category_name, item_name)\n        assert ret_val is None\n\n    async def test__read_value_val_1_row(self, reset_singleton):\n        rows = {'rows': [{'value': 'value1'}]}\n        category_name = 'catname'\n        item_name = 'itemname'\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        ret_val = await c_mgr._read_value_val(category_name, item_name)\n        assert ret_val == 
'value1'\n\n    async def test__update_value_val(self, reset_singleton):\n        rows = {\"rows\": []}\n        category_name = 'catname'\n        item_name = 'itemname'\n        new_value_val = 'newval'\n        _attr = await self.async_mock(rows)\n        _rv = await self.async_mock(None)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr, \"update_tbl.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(AuditLogger, '__init__', return_value=None):\n            with patch.object(AuditLogger, 'information', return_value=_rv) as auditinfopatch:\n                await c_mgr._update_value_val(category_name, item_name, new_value_val)\n        auditinfopatch.assert_called_once_with(\n            'CONCH', {\n                'category': category_name, 'item': item_name, 'oldValue': None, 'newValue': new_value_val})\n\n    async def test__update_value_val_storageservererror(self, reset_singleton):\n        rows = {\"rows\": []}\n        category_name = 'catname'\n        item_name = 'itemname'\n        new_value_val = 'newval'\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr, \"update_tbl.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(AuditLogger, '__init__', return_value=None):\n            with patch.object(AuditLogger, 'information', return_value=None) as auditinfopatch:\n                with patch.object(ConfigurationManager, '_update_value_val',\n                                  side_effect=StorageServerError(None, None, None)) as createpatch:\n                    with pytest.raises(StorageServerError):\n                        await c_mgr._update_value_val(category_name, item_name, new_value_val)\n                
createpatch.assert_called_once_with('catname', 'itemname', 'newval')\n\n        assert 0 == auditinfopatch.call_count\n\n    async def test__update_value_val_keyerror(self, reset_singleton):\n        rows = {\"rows\": []}\n        category_name = 'catname'\n        item_name = 'itemname'\n        new_value_val = 'newval'\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr, \"update_tbl.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(AuditLogger, '__init__', return_value=None):\n            with patch.object(AuditLogger, 'information', return_value=None) as auditinfopatch:\n                with patch.object(ConfigurationManager, '_update_value_val',\n                                  side_effect=KeyError()) as createpatch:\n                    with pytest.raises(KeyError):\n                        await c_mgr._update_value_val(category_name, item_name, new_value_val)\n                createpatch.assert_called_once_with('catname', 'itemname', 'newval')\n\n        assert 0 == auditinfopatch.call_count\n\n    async def test__update_category(self, reset_singleton):\n        response = {\"response\": \"dummy\"}\n        category_name = 'catname'\n        category_description = 'catdesc'\n        category_val = 'catval'\n        _attr = await self.async_mock(response)\n        _rv1 = await self.async_mock(category_val)\n        attrs = {\"update_tbl.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(PayloadBuilder, '__init__', return_value=None):\n            with patch.object(PayloadBuilder, 'SET', return_value=PayloadBuilder) as pbsetpatch:\n                with patch.object(PayloadBuilder, 'WHERE', return_value=PayloadBuilder) as 
pbwherepatch:\n                    with patch.object(PayloadBuilder, 'payload', return_value=None) as pbpayloadpatch:\n                        with patch.object(c_mgr, '_read_category_val', return_value=_rv1) as readpatch:\n                            await c_mgr._update_category(category_name, category_val, category_description)\n                        readpatch.assert_called_once_with(category_name)\n                    pbpayloadpatch.assert_called_once_with()\n                pbwherepatch.assert_called_once_with([\"key\", \"=\", category_name])\n            pbsetpatch.assert_called_once_with(description=category_description, value=category_val, display_name=category_name)\n        storage_client_mock.update_tbl.assert_called_once_with('configuration', None)\n\n    async def test__update_category_storageservererror(self, reset_singleton):\n        response = {\"response\": \"dummy\"}\n        category_name = 'catname'\n        category_description = 'catdesc'\n        category_val = 'catval'\n        _attr = await self.async_mock(response)\n        attrs = {\"update_tbl.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(PayloadBuilder, '__init__', return_value=None):\n            with patch.object(PayloadBuilder, 'SET', return_value=PayloadBuilder) as pbsetpatch:\n                with patch.object(PayloadBuilder, 'WHERE', return_value=PayloadBuilder) as pbwherepatch:\n                    with patch.object(PayloadBuilder, 'payload', return_value=None) as pbpayloadpatch:\n                        with patch.object(ConfigurationManager, '_update_category',\n                                          side_effect=StorageServerError(None, None, None)) as createpatch:\n                            with pytest.raises(StorageServerError):\n                                await c_mgr._update_category(category_name, category_val, 
category_description)\n                        createpatch.assert_called_once_with('catname', 'catval', 'catdesc')\n                    assert 0 == pbpayloadpatch.call_count\n                assert 0 == pbwherepatch.call_count\n            assert 0 == pbsetpatch.call_count\n        assert 0 == storage_client_mock.update_tbl.call_count\n\n    async def test__update_category_keyerror(self, reset_singleton):\n        response = {\"noresponse\": \"dummy\"}\n        category_name = 'catname'\n        category_description = 'catdesc'\n        category_val = 'catval'\n        _attr = await self.async_mock(response)\n        attrs = {\"update_tbl.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(PayloadBuilder, '__init__', return_value=None):\n            with patch.object(PayloadBuilder, 'SET', return_value=PayloadBuilder) as pbsetpatch:\n                with patch.object(PayloadBuilder, 'WHERE', return_value=PayloadBuilder) as pbwherepatch:\n                    with patch.object(PayloadBuilder, 'payload', return_value=None) as pbpayloadpatch:\n                        with pytest.raises(KeyError):\n                            await c_mgr._update_category(category_name, category_val, category_description)\n                    pbpayloadpatch.assert_called_once_with()\n                pbwherepatch.assert_called_once_with([\"key\", \"=\", category_name])\n            pbsetpatch.assert_called_once_with(description=category_description, value=category_val, display_name=category_name)\n        storage_client_mock.update_tbl.assert_called_once_with('configuration', None)\n\n    async def test_get_category_child(self):\n        category_name = 'HTTP SOUTH'\n        all_child_ret_val = [{'parent': 'south', 'child': category_name}]\n        child_info_ret_val = [{'key': category_name, 'description': 'HTTP South Plugin', 'display_name': 
category_name}]\n\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv1 = await self.async_mock('bla')\n        _rv2 = await self.async_mock(all_child_ret_val)\n        _rv3 = await self.async_mock(child_info_ret_val)\n        with patch.object(ConfigurationManager, '_read_category_val', return_value=_rv1) as patch_read_cat_val:\n            with patch.object(ConfigurationManager, '_read_all_child_category_names', return_value=_rv2) as patch_read_all_child:\n                with patch.object(ConfigurationManager, '_read_child_info', return_value=_rv3) as patch_read_child_info:\n                    ret_val = await c_mgr.get_category_child(category_name)\n                    assert [{'displayName': category_name, 'description': 'HTTP South Plugin', 'key': category_name}] == ret_val\n                patch_read_child_info.assert_called_once_with([{'child': category_name, 'parent': 'south'}])\n            patch_read_all_child.assert_called_once_with(category_name)\n        patch_read_cat_val.assert_called_once_with(category_name)\n\n    async def test_get_category_child_no_exist(self):\n        category_name = 'HTTP SOUTH'\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv = await self.async_mock(None)\n        with patch.object(ConfigurationManager, '_read_category_val', return_value=_rv) as patch_read_cat_val:\n            with pytest.raises(ValueError) as excinfo:\n                await c_mgr.get_category_child(category_name)\n            assert 'No such {} category exist'.format(category_name) == str(excinfo.value)\n        patch_read_cat_val.assert_called_once_with(category_name)\n\n    @pytest.mark.parametrize(\"cat_name, children, message\", [\n        (1, [\"coap\"], 'category_name must be a string'),\n        (\"south\", \"coap\", 'children must be a list')\n    ])\n    async def 
test_create_child_category_type_error(self, cat_name, children, message):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(TypeError) as excinfo:\n            await c_mgr.create_child_category(cat_name, children)\n        assert message == str(excinfo.value)\n\n    @pytest.mark.parametrize(\"ret_cat_name, ret_child_name, message\", [\n        (None, None, 'No such south category exist'),\n        (\"south\", None, 'No such coap child exist')\n    ])\n    async def test_create_child_category_no_exists(self, ret_cat_name, ret_child_name, message):\n        async def q_result(*args):\n            if args[0] == cat_name:\n                return ret_cat_name\n            # children is a list of child category names, while _read_category_val\n            # is called with a single name; check membership rather than equality\n            if args[0] in children:\n                return ret_child_name\n\n        cat_name = 'south'\n        children = [\"coap\"]\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(ConfigurationManager, '_read_category_val', side_effect=q_result):\n            with pytest.raises(ValueError) as excinfo:\n                await c_mgr.create_child_category(cat_name, children)\n            assert message == str(excinfo.value)\n\n    async def test_create_child_category(self, reset_singleton):\n        async def q_result(*args):\n            if args[0] == cat_name:\n                return 'blah1'\n            if args[0] == child_name:\n                return 'blah2'\n\n        cat_name = 'south'\n        child_name = \"coap\"\n        all_child_ret_val = [{'parent': cat_name, 'child': 'http'}]\n\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv1 = await self.async_mock('inserted')\n        _rv2 = await self.async_mock(all_child_ret_val)\n        _sr = await self.async_mock((False, None, None, None))\n        
with patch.object(ConfigurationManager, '_read_category_val', side_effect=q_result):\n            with patch.object(ConfigurationManager, '_read_all_child_category_names',\n                              return_value=_rv2) as patch_readall_child:\n                with patch.object(ConfigurationManager, '_create_child',\n                                  return_value=_rv1) as patch_create_child:\n                    with patch.object(ConfigurationManager, 'search_for_ACL_recursive_from_cat_name',\n                                      return_value=_sr) as searchaclpatch:\n                        ret_val = await c_mgr.create_child_category(cat_name, [child_name])\n                        assert {'children': ['http', 'coap']} == ret_val\n                    searchaclpatch.assert_has_calls([call(cat_name), call(child_name)])\n            patch_readall_child.assert_called_once_with(cat_name)\n        patch_create_child.assert_called_once_with(cat_name, child_name)\n\n    async def test_create_child_category_if_exists(self, reset_singleton):\n        async def q_result(*args):\n            if args[0] == cat_name:\n                return 'blah1'\n            if args[0] == child_name:\n                return 'blah2'\n\n        cat_name = 'south'\n        child_name = \"coap\"\n        all_child_ret_val = [{'parent': cat_name, 'child': child_name}]\n\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        \n        _rv = await self.async_mock(all_child_ret_val)\n        _sr = await self.async_mock((False, None, None, None))\n        with patch.object(ConfigurationManager, '_read_category_val', side_effect=q_result):\n            with patch.object(ConfigurationManager, '_read_all_child_category_names',\n                              return_value=_rv) as patch_readall_child:\n                with patch.object(ConfigurationManager, 'search_for_ACL_recursive_from_cat_name',\n                            
      return_value=_sr) as searchaclpatch:\n                    ret_val = await c_mgr.create_child_category(cat_name, [child_name])\n                    assert {'children': ['coap']} == ret_val\n                searchaclpatch.assert_has_calls([call(cat_name)])\n            patch_readall_child.assert_called_once_with(cat_name)\n\n    @pytest.mark.parametrize(\"cat_name, child_name, message\", [\n        (1, \"coap\", 'category_name must be a string'),\n        (\"south\", 1, 'child_category must be a string')\n    ])\n    async def test_delete_child_category_type_error(self, cat_name, child_name, message):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(TypeError) as excinfo:\n            await c_mgr.delete_child_category(cat_name, child_name)\n        assert message == str(excinfo.value)\n\n    @pytest.mark.parametrize(\"ret_cat_name, ret_child_name, message\", [\n        (None, None, 'No such south category exist'),\n        (\"south\", None, 'No such coap child exist')\n    ])\n    async def test_delete_child_category_no_exists(self, ret_cat_name, ret_child_name, message):\n        async def q_result(*args):\n            if args[0] == cat_name:\n                return await self.async_mock(ret_cat_name)\n            if args[0] == child_name:\n                return await self.async_mock(ret_child_name)\n\n        cat_name = 'south'\n        child_name = 'coap'\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(ConfigurationManager, '_read_category_val', side_effect=q_result):\n            with pytest.raises(ValueError) as excinfo:\n                await c_mgr.delete_child_category(cat_name, child_name)\n            assert message == str(excinfo.value)\n\n    async def test_delete_child_category(self, reset_singleton):\n        async def q_result(*args):\n       
     if args[0] == cat_name:\n                return await self.async_mock('blah1')\n            if args[0] == child_name:\n                return await self.async_mock('blah2')\n\n        expected_result = {\"response\": \"deleted\", \"rows_affected\": 1}\n        cat_name = 'south'\n        child_name = 'coap'\n        all_child_ret_val = [{'parent': cat_name, 'child': child_name}]\n        \n        _attr = await self.async_mock(expected_result)\n        _rv = await self.async_mock(all_child_ret_val)\n        attrs = {\"delete_from_tbl.return_value\": _attr}\n        payload = {\"where\": {\"column\": \"parent\", \"condition\": \"=\", \"value\": \"south\", \"and\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"coap\"}}}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(ConfigurationManager, '_read_category_val', side_effect=q_result):\n            with patch.object(ConfigurationManager, '_read_all_child_category_names',\n                              return_value=_rv) as patch_read_all_child:\n                ret_val = await c_mgr.delete_child_category(cat_name, child_name)\n                assert [child_name] == ret_val\n            patch_read_all_child.assert_called_once_with(cat_name)\n        del_args, del_kwargs = storage_client_mock.delete_from_tbl.call_args\n        assert 'category_children' == del_args[0]\n        assert payload == json.loads(del_args[1])\n\n    async def test_delete_child_category_key_error(self, reset_singleton):\n        async def q_result(*args):\n            if args[0] == cat_name:\n                return await self.async_mock('blah1')\n            if args[0] == child_name:\n                return await self.async_mock('blah2')\n\n        expected_result = {\"message\": \"blah\"}\n        _attr = await self.async_mock(expected_result)\n        attrs = {\"delete_from_tbl.return_value\": _attr}\n        cat_name 
= 'south'\n        child_name = 'coap'\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(ConfigurationManager, '_read_category_val', side_effect=q_result):\n            with pytest.raises(ValueError) as excinfo:\n                await c_mgr.delete_child_category(cat_name, child_name)\n            assert 'blah' == str(excinfo.value)\n\n    async def test_delete_child_category_storage_exception(self, reset_singleton):\n        async def q_result(*args):\n            if args[0] == cat_name:\n                return await self.async_mock('blah1')\n            if args[0] == child_name:\n                return await self.async_mock('blah2')\n\n        cat_name = 'south'\n        child_name = 'coap'\n        msg = {\"entryPoint\": \"delete\", \"message\": \"failed\"}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(ConfigurationManager, '_read_category_val', side_effect=q_result):\n            with patch.object(storage_client_mock, 'delete_from_tbl', side_effect=StorageServerError(code=400,\n                                                                                                     reason=\"blah\", error=msg)):\n                with pytest.raises(ValueError) as excinfo:\n                    await c_mgr.delete_child_category(cat_name, child_name)\n                assert str(msg) == str(excinfo.value)\n\n    async def test_delete_parent_category(self, reset_singleton):\n        expected_result = {\"response\": \"deleted\", \"rows_affected\": 1}\n        _attr = await self.async_mock(expected_result)\n        _rv = await self.async_mock('bla')\n        attrs = {\"delete_from_tbl.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        with 
patch.object(ConfigurationManager, '_read_category_val', return_value=_rv) as patch_read_cat_val:\n            ret_val = await c_mgr.delete_parent_category(\"south\")\n            assert expected_result == ret_val\n        patch_read_cat_val.assert_called_once_with('south')\n        storage_client_mock.delete_from_tbl.assert_called_once_with('category_children',\n                                                                    '{\"where\": {\"column\": \"parent\", \"condition\": \"=\", \"value\": \"south\"}}')\n\n    async def test_delete_parent_category_bad_cat_name(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(TypeError) as excinfo:\n            await c_mgr.delete_parent_category(1)\n        assert 'category_name must be a string' == str(excinfo.value)\n\n    async def test_delete_parent_category_no_exists(self):\n        category_name = 'blah'\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv = await self.async_mock(None)\n        with patch.object(ConfigurationManager, '_read_category_val', return_value=_rv) as patch_read_cat_val:\n            with pytest.raises(ValueError) as excinfo:\n                await c_mgr.delete_parent_category(category_name)\n            assert 'No such {} category exist'.format(category_name) == str(excinfo.value)\n        patch_read_cat_val.assert_called_once_with(category_name)\n\n    async def test_delete_parent_category_key_error(self, reset_singleton):\n        rows = {\"message\": \"blah\"}\n        _attr = await self.async_mock(rows)\n        _rv = await self.async_mock('blah')\n        attrs = {\"delete_from_tbl.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(ConfigurationManager, 
'_read_category_val', return_value=_rv) as patch_read_cat_val:\n            with pytest.raises(ValueError) as excinfo:\n                await c_mgr.delete_parent_category(\"south\")\n            assert 'blah' == str(excinfo.value)\n        patch_read_cat_val.assert_called_once_with(\"south\")\n\n    async def test_delete_parent_category_storage_exception(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        msg = {\"entryPoint\": \"delete\", \"message\": \"failed\"}\n\n        _rv = await self.async_mock('blah')\n        with patch.object(ConfigurationManager, '_read_category_val', return_value=_rv) as patch_read_cat_val:\n            with patch.object(storage_client_mock, 'delete_from_tbl', side_effect=StorageServerError(code=400, reason=\"blah\", error=msg)):\n                with pytest.raises(ValueError) as excinfo:\n                    await c_mgr.delete_parent_category(\"south\")\n                assert str(msg) == str(excinfo.value)\n        patch_read_cat_val.assert_called_once_with(\"south\")\n\n    async def test_delete_category_and_children_recursively(self, mocker, reset_singleton):\n        async def mock_coro(a, b):\n            return expected_result\n\n        async def mock_read_all_child_category_names(cat):\n            \"\"\"\n            Mimics \n                     G      I\n                      \\    /\n                       F  H\n                        \\/\n                        E    \n                       /    M\n                      /    / \n                A -- B -- C  -- D \n                \\     \\\n                 \\     N\n                  \\\n                   \\   K\n                    \\ /\n                     J\\\n                       \\\n                        L\n            :param cat: \n            :return: \n            \"\"\"\n            if cat == \"A\":\n                return [{\"parent\": 
'A', \"child\": 'B'}, {\"parent\": 'A', \"child\": 'J'}]\n            if cat == \"B\":\n                return [{\"parent\": 'B', \"child\": 'E'}, {\"parent\": 'B', \"child\": 'N'}, {\"parent\": 'B', \"child\": 'C'}]\n            if cat == \"C\":\n                return [{\"parent\": 'C', \"child\": 'M'}, {\"parent\": 'C', \"child\": 'D'}]\n            if cat == \"D\":\n                return []\n            if cat == \"E\":\n                return [{\"parent\": 'E', \"child\": 'F'}, {\"parent\": 'E', \"child\": 'H'}]\n            if cat == \"F\":\n                return [{\"parent\": 'F', \"child\": 'G'}]\n            if cat == \"G\":\n                return []\n            if cat == \"H\":\n                return [{\"parent\": 'H', \"child\": 'I'}]\n            if cat == \"I\":\n                return []\n            if cat == \"J\":\n                return [{\"parent\": 'J', \"child\": 'K'}, {\"parent\": 'J', \"child\": 'L'}]\n            if cat == \"K\":\n                return []\n            if cat == \"L\":\n                return []\n            if cat == \"M\":\n                return []\n            if cat == \"N\":\n                return []\n\n        expected_result = {\"response\": \"deleted\", \"rows_affected\": 1}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        patch_delete_from_tbl = mocker.patch.object(storage_client_mock, 'delete_from_tbl', side_effect=mock_coro)\n\n        _rv = await self.async_mock('bla')\n        _sr = await self.async_mock((False, None, None, None))\n        c_mgr = ConfigurationManager(storage_client_mock)\n        patch_read_cat_val = mocker.patch.object(ConfigurationManager, '_read_category_val',\n                                                 return_value=_rv)\n        mocker.patch.object(ConfigurationManager, '_read_all_child_category_names',\n                            side_effect=mock_read_all_child_category_names)\n        patch_fetch_descendents = 
mocker.patch.object(ConfigurationManager, '_fetch_descendents',\n                                                      return_value=_rv)\n\n        mocker.patch.object(AuditLogger, '__init__', return_value=None)\n        audit_info = mocker.patch.object(AuditLogger, 'information', return_value=_rv)\n\n        acl_search_calls = [call('G'), call('F'), call('I'), call('H'),\n                            call('E'), call('N'), call('M'), call('D'),\n                            call('C'), call('B'), call('K'), call('L'),\n                            call('J'), call('A')]\n        with patch.object(ConfigurationManager, 'search_for_ACL_single',\n                          return_value=_sr) as searchaclpatch:\n            ret_val = await c_mgr.delete_category_and_children_recursively(\"A\")\n            assert expected_result == ret_val\n        searchaclpatch.assert_has_calls(acl_search_calls)\n\n        patch_read_cat_val.assert_called_once_with('A')\n        patch_fetch_descendents.assert_called_once_with('A')\n        calls = [call('category_children', '{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"G\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"G\"}}'),\n                 call('category_children', '{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"F\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"F\"}}'),\n                 call('category_children', '{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"I\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"I\"}}'),\n                 call('category_children', '{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"H\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"H\"}}'),\n   
              call('category_children', '{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"E\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"E\"}}'),\n                 call('category_children', '{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"N\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"N\"}}'),\n                 call('category_children', '{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"M\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"M\"}}'),\n                 call('category_children', '{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"D\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"D\"}}'),\n                 call('category_children', '{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"C\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"C\"}}'),\n                 call('category_children', '{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"B\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"B\"}}'),\n                 call('category_children', '{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"K\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"K\"}}'),\n                 call('category_children', '{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"L\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"L\"}}'),\n                 call('category_children', 
'{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"J\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"J\"}}'),\n                 call('category_children', '{\"where\": {\"column\": \"child\", \"condition\": \"=\", \"value\": \"A\"}}'),\n                 call('configuration', '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"A\"}}')]\n\n        audit_calls = [call('CONCH', {'categoryDeleted': 'G'}),\n                       call('CONCH', {'categoryDeleted': 'F'}),\n                       call('CONCH', {'categoryDeleted': 'I'}),\n                       call('CONCH', {'categoryDeleted': 'H'}),\n                       call('CONCH', {'categoryDeleted': 'E'}),\n                       call('CONCH', {'categoryDeleted': 'N'}),\n                       call('CONCH', {'categoryDeleted': 'M'}),\n                       call('CONCH', {'categoryDeleted': 'D'}),\n                       call('CONCH', {'categoryDeleted': 'C'}),\n                       call('CONCH', {'categoryDeleted': 'B'}),\n                       call('CONCH', {'categoryDeleted': 'K'}),\n                       call('CONCH', {'categoryDeleted': 'L'}),\n                       call('CONCH', {'categoryDeleted': 'J'}),\n                       call('CONCH', {'categoryDeleted': 'A'})]\n\n        patch_delete_from_tbl.assert_has_calls(calls, any_order=True)\n        audit_info.assert_has_calls(audit_calls, any_order=True)\n\n    async def test_delete_category_and_children_recursively_exception(self, mocker, reset_singleton):\n        async def mock_coro(a, b):\n            return expected_result\n\n        async def mock_read_all_child_category_names(cat):\n            \"\"\"\n            Mimics \n                     G      I\n                      \\    /\n                       F  H\n                        \\/\n                        E    \n                       /    M\n                      /    / \n 
               A -- B -- C  -- D \n                \\     \\\n                 \\     N\n                  \\\n                   \\   K\n                    \\ /\n                     North\\\n                           \\\n                            L\n            :param cat: \n            :return: \n            \"\"\"\n            if cat == \"A\":\n                return [{\"parent\": 'A', \"child\": 'B'}, {\"parent\": 'A', \"child\": \"North\"}]\n            if cat == \"B\":\n                return [{\"parent\": 'B', \"child\": 'E'}, {\"parent\": 'B', \"child\": 'N'}, {\"parent\": 'B', \"child\": 'C'}]\n            if cat == \"C\":\n                return [{\"parent\": 'C', \"child\": 'M'}, {\"parent\": 'C', \"child\": 'D'}]\n            if cat == \"D\":\n                return []\n            if cat == \"E\":\n                return [{\"parent\": 'E', \"child\": 'F'}, {\"parent\": 'E', \"child\": 'H'}]\n            if cat == \"F\":\n                return [{\"parent\": 'F', \"child\": 'G'}]\n            if cat == \"G\":\n                return []\n            if cat == \"H\":\n                return [{\"parent\": 'H', \"child\": 'I'}]\n            if cat == \"I\":\n                return []\n            if cat == \"North\":\n                return [{\"parent\": \"North\", \"child\": 'K'}, {\"parent\": \"North\", \"child\": 'L'}]\n            if cat == \"K\":\n                return []\n            if cat == \"L\":\n                return []\n            if cat == \"M\":\n                return []\n            if cat == \"N\":\n                return []\n\n        expected_result = {\"response\": \"deleted\", \"rows_affected\": 1}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        mocker.patch.object(storage_client_mock, 'delete_from_tbl', side_effect=mock_coro)\n\n        _rv = await self.async_mock('bla')\n        c_mgr = ConfigurationManager(storage_client_mock)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', 
return_value=_rv)\n        mocker.patch.object(ConfigurationManager, '_read_all_child_category_names',\n                            side_effect=mock_read_all_child_category_names)\n\n        mocker.patch.object(AuditLogger, '__init__', return_value=None)\n        mocker.patch.object(AuditLogger, 'information', return_value=_rv)\n        msg = \"Reserved category found in descendents of A - ['B', 'E', 'F', 'G', 'H', 'I', 'N', 'C', 'M', 'D', \" \\\n              \"'North', 'K', 'L']\"\n\n        with pytest.raises(ValueError) as excinfo:\n            await c_mgr.delete_category_and_children_recursively(\"A\")\n        assert str(msg) == str(excinfo.value)\n\n    async def test__read_all_child_category_names(self, reset_singleton):\n        rows = {'rows': [{'parent': 'south', 'child': 'http'}], 'count': 1}\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        payload = {\"return\": [\"parent\", \"child\"], \"where\": {\"value\": \"south\", \"condition\": \"=\", \"column\": \"parent\"}, \"sort\": {\"column\": \"id\", \"direction\": \"asc\"}}\n        ret_val = await c_mgr._read_all_child_category_names('south')\n        assert [{'parent': 'south', 'child': 'http'}] == ret_val\n        args, kwargs = storage_client_mock.query_tbl_with_payload.call_args\n        assert 'category_children' == args[0]\n        assert payload == json.loads(args[1])\n\n    async def test__read_child_info(self, reset_singleton):\n        rows = {'rows': [{'description': 'HTTP South Plugin', 'key': 'HTTP SOUTH'}], 'count': 1}\n        _attr = await self.async_mock(rows)\n        attrs = {\"query_tbl_with_payload.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        child_cat_names = [{'child': 'HTTP SOUTH', 'parent': 'south'}]\n        payload 
= {\"return\": [\"key\", \"description\", \"display_name\"], \"where\": {\"column\": \"key\", \"condition\": \"=\",\n                                                                               \"value\": \"HTTP SOUTH\"}}\n        c_mgr = ConfigurationManager(storage_client_mock)\n        ret_val = await c_mgr._read_child_info(child_cat_names)\n        assert [{'description': 'HTTP South Plugin', 'key': 'HTTP SOUTH'}] == ret_val\n        args, kwargs = storage_client_mock.query_tbl_with_payload.call_args\n        assert 'configuration' == args[0]\n        assert payload == json.loads(args[1])\n\n    async def test__create_child(self):\n        response = {\"response\": \"inserted\", \"rows_affected\": 1}\n        _attr = await self.async_mock(response)\n        attrs = {\"insert_into_tbl.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        payload = {\"child\": \"http\", \"parent\": \"south\"}\n\n        ret_val = await c_mgr._create_child(\"south\", \"http\")\n        assert 'inserted' == ret_val\n\n        args, kwargs = storage_client_mock.insert_into_tbl.call_args\n        assert 'category_children' == args[0]\n        assert payload == json.loads(args[1])\n\n    async def test__create_child_key_error(self, reset_singleton):\n        rows = {\"message\": \"blah\"}\n        _attr = await self.async_mock(rows)\n        attrs = {\"insert_into_tbl.return_value\": _attr}\n        storage_client_mock = MagicMock(spec=StorageClientAsync, **attrs)\n\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(ValueError) as excinfo:\n            await c_mgr._create_child(\"south\", \"http\")\n        assert 'blah' == str(excinfo.value)\n\n    async def test__create_child_storage_exception(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = 
ConfigurationManager(storage_client_mock)\n        msg = {\"entryPoint\": \"insert\", \"message\": \"UNIQUE constraint failed\"}\n        with patch.object(storage_client_mock, 'insert_into_tbl', side_effect=StorageServerError(code=400, reason=\"blah\", error=msg)):\n            with pytest.raises(ValueError) as excinfo:\n                await c_mgr._create_child(\"south\", \"http\")\n            assert str(msg) == str(excinfo.value)\n\n    @pytest.mark.parametrize(\"item_type, item_val, result\", [\n        ('boolean', \"True\", \"true\"),\n        ('string', \"Plugin\", \"Plugin\"),\n        ({'description': 'Test', 'type': 'boolean', 'default': 'true'}, \"True\", \"true\"),\n        ({'description': 'Test', 'type': 'boolean', 'default': 'true'}, \"true\", \"true\"),\n        ({'description': 'Test', 'type': 'boolean', 'default': 'false'}, \"false\", \"false\"),\n        ({'description': 'Test', 'type': 'boolean', 'default': 'false'}, \"False\", \"false\"),\n        ({'description': 'Datapoint', 'type': 'list', 'items': 'object', 'default': '[{\"datapoint\": \"sin\"}]'},\n         '[{\"datapoint\": \"sin\"}]', '[{\"datapoint\": \"sin\"}]'),\n        ({'description': 'Datapoints', 'type': 'list', 'items': 'object', 'default': '[{\"datapoint\": \"dp\"}]'},\n         '[{\"datapoint\": \"dp\"}, {\"datapoint\": \"dp\"}]', '[{\"datapoint\": \"dp\"}]'),\n        ({'description': 'Datapoints', 'type': 'list', 'items': 'object', 'default': '[{\"datapoint\": \"dp\"}]'},\n         '[{\"datapoint\": \"dp\"}, {\"datapoint\": \"dp2\"}, {\"datapoint\": \"dp\"}]',\n         '[{\"datapoint\": \"dp\"}, {\"datapoint\": \"dp2\"}]'),\n        ({'description': 'Datapoints', 'type': 'kvlist', 'items': 'object', 'default': '{\"plc\": {\"register\": \"0\"}}'},\n         '{\"plc\": {\"register\": \"0\"}}', '{\"plc\": {\"register\": \"0\"}}'),\n        ({'description': 'Datapoints', 'type': 'kvlist', 'items': 'object', 'default': '{\"plc\": {\"register\": \"0\"}}'},\n         '{\"plc\": 
{\"register\": \"0\"}, \"plc\": {\"register\": \"0\"}}', '{\"plc\": {\"register\": \"0\"}}'),\n        ({'description': 'Datapoints', 'type': 'kvlist', 'items': 'object', 'default': '{\"plc\": {\"register\": \"0\"}}'},\n         '{\"plc\": {\"register\": \"0\"}, \"plc\": {\"type\": \"integer\"}}', '{\"plc\": {\"type\": \"integer\"}}'),\n        ({'description': 'Datapoints', 'type': 'kvlist', 'items': 'object', 'default': '{\"plc\": {\"register\": \"0\"}}'},\n         '{\"plc\": {\"register\": \"0\"}, \"plc-2\": {\"type\": \"integer\"}}',\n         '{\"plc\": {\"register\": \"0\"}, \"plc-2\": {\"type\": \"integer\"}}')\n    ])\n    async def test__clean(self, item_type, item_val, result):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        assert result == c_mgr._clean(item_type, item_val)\n\n    @pytest.mark.parametrize(\"item_type, item_val, result\", [\n        (\"boolean\", \"false\", True),\n        (\"boolean\", \"true\", True),\n        (\"integer\", \"123\", True),\n        (\"float\", \"123456\", True),\n        (\"float\", \"0\", True),\n        (\"float\", \"NaN\", True),\n        (\"float\", \"123.456\", True),\n        (\"float\", \"123.E4\", True),\n        (\"float\", \".1\", True),\n        (\"float\", \"6.523e-07\", True),\n        (\"float\", \"6e7777\", True),\n        (\"float\", \"1.79e+308\", True),\n        (\"float\", \"infinity\", True),\n        (\"float\", \"0E0\", True),\n        (\"float\", \"+1e1\", True),\n        (\"IPv4\", \"127.0.0.1\", ipaddress.IPv4Address('127.0.0.1')),\n        (\"IPv6\", \"2001:db8::\", ipaddress.IPv6Address('2001:db8::')),\n        (\"JSON\", {}, True),  # allow a dict\n        (\"JSON\", \"{}\", True),\n        (\"JSON\", \"1\", True),\n        (\"JSON\", \"[]\", True),\n        (\"JSON\", \"1.2\", True),\n        (\"JSON\", \"{\\\"age\\\": 31}\", True),\n        (\"URL\", \"http://somevalue.do\", True),\n        (\"URL\", 
\"http://www.example.com\", True),\n        (\"URL\", \"https://www.example.com\", True),\n        (\"URL\", \"http://blog.example.com\", True),\n        (\"URL\", \"http://www.example.com/product\", True),\n        (\"URL\", \"http://www.example.com/products?id=1&page=2\", True),\n        (\"URL\", \"http://255.255.255.255\", True),\n        (\"URL\", \"http://255.255.255.255:8080\", True),\n        (\"URL\", \"http://127.0.0.1:8080\", True),\n        (\"URL\", \"http://localhost\", True),\n        (\"URL\", \"http://0.0.0.0:8081\", True),\n        (\"URL\", \"http://fe80::4\", True),\n        (\"URL\", \"https://pi-server:5460/ingress/messages\", True),\n        (\"URL\", \"https://dat-a.osisoft.com/api/omf\", True),\n        (\"URL\", \"coap://host\", True),\n        (\"URL\", \"coap://host.co.in\", True),\n        (\"URL\", \"coaps://host:6683\", True),\n        (\"password\", \"not implemented\", None),\n        (\"X509 certificate\", \"not implemented\", None),\n        (\"northTask\", \"valid_north_task\", True),\n        (\"listSize\", \"5\", True),\n        (\"listSize\", \"0\", True)\n    ])\n    async def test__validate_type_value(self, item_type, item_val, result):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        assert result == c_mgr._validate_type_value(item_type, item_val)\n\n    @pytest.mark.parametrize(\"item_type, item_val\", [\n        (\"float\", \"\"),\n        (\"float\", \"nana\"),\n        (\"float\", \"1,234\"),\n        (\"float\", \"NULL\"),\n        (\"float\", \",1\"),\n        (\"float\", \"123.EE4\"),\n        (\"float\", \"12.34.56\"),\n        (\"float\", \"#12\"),\n        (\"float\", \"12%\"),\n        (\"float\", \"x86E0\"),\n        (\"float\", \"86-5\"),\n        (\"float\", \"True\"),\n        (\"float\", \"+1e1.3\"),\n        (\"float\", \"-+1\"),\n        (\"float\", \"(1)\"),\n        (\"boolean\", 
\"blah\"),\n        (\"JSON\", \"Blah\"),\n        (\"JSON\", True),\n        (\"JSON\", \"True\"),\n        (\"JSON\", []),\n        (\"JSON\", None),\n        (\"URL\", \"blah\"),\n        (\"URL\", \"example.com\"),\n        (\"URL\", \"123:80\"),\n        (\"listSize\", \"Blah\"),\n        (\"listSize\", \"None\")\n        # TODO: can not use urlopen hence we may want to check\n        # result.netloc with some regex, but limited\n        # (\"URL\", \"http://somevalue.a\"),\n        # (\"URL\", \"http://25.25.25. :80\"),\n        # (\"URL\", \"http://25.25.25.25: 80\"),\n        # (\"URL\", \"http://www.example.com | http://www.example2.com\")\n    ])\n    async def test__validate_type_value_bad_data(self, item_type, item_val):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        assert c_mgr._validate_type_value(item_type, item_val) is False\n\n    @pytest.mark.parametrize(\"cat_info, config_item_list, exc_type, exc_msg\", [\n        (None, {}, NameError, \"No such Category found for testcat\"),\n        ({'enableHttp': {'default': 'true', 'description': 'Enable HTTP', 'type': 'boolean', 'value': 'true'}},\n         {\"blah\": \"12\"}, KeyError, \"'blah config item not found'\"),\n        ({'enableHttp': {'default': 'true', 'description': 'Enable HTTP', 'type': 'boolean', 'value': 'true'}},\n         {\"enableHttp\": False}, TypeError, \"new value should be of type string\"),\n        ({'authentication': {'default': 'optional', 'options': ['mandatory', 'optional'], 'type': 'enumeration',\n                             'description': 'API Call Authentication', 'value': 'optional'}}, {\"authentication\": \"\"},\n         ValueError, \"entry_val cannot be empty\"),\n        ({'authentication': {'default': 'optional', 'options': ['mandatory', 'optional'], 'type': 'enumeration',\n                             'description': 'API Call Authentication', 'value': 'optional'}},\n         
{\"authentication\": \"false\"}, ValueError, \"new value does not exist in options enum\"),\n        ({'authProviders': {'default': '{\"providers\": [\"username\", \"ldap\"] }',\n                            'description': 'Authentication providers to use for the interface', 'type': 'JSON',\n                            'value': '{\"providers\": [\"username\", \"ldap\"] }'}},\n         {\"authProviders\": 3}, TypeError, \"new value should be a valid dict Or a string literal, in double quotes\"),\n        ({'enableHttp': {'default': 'true', 'description': 'Enable HTTP', 'type': 'boolean', 'value': 'true'}},\n         {\"enableHttp\": \"blah\"}, TypeError, \"Unrecognized value name for item_name enableHttp\"),\n        ({'asset': {'default': 'sinusoid', 'description': 'Asset Name', 'type': 'string', 'value': 'sinusoid',\n                    'mandatory': 'true'}}, {\"asset\": ''}, ValueError, \"A value must be given for asset\"),\n        ({'datapoint': {'default': 'rw', 'description': 'Datapoint Name', 'type': 'string', 'value': 'rw',\n                        'mandatory': 'true'}}, {\"datapoint\": ' '}, ValueError, \"A value must be given for datapoint\"),\n        ({'testJSON': {'default': '{\"foo\": \"bar\"}', 'description': 'Test JSON', 'type': 'JSON',\n                       'value': '{\"foo\": \"bar\"}', 'mandatory': 'true'}}, {\"testJSON\": ' '},\n         TypeError, \"Unrecognized value name for item_name testJSON\"),\n        ({'testJSON': {'default': '{\"foo\": \"bar\"}', 'description': 'Test JSON', 'type': 'JSON',\n                       'value': '{\"foo\": \"bar\"}', 'mandatory': 'true'}}, {\"testJSON\": 'blah'},\n         TypeError, \"Unrecognized value name for item_name testJSON\"),\n        ({'testJSON': {'default': '{\"foo\": \"bar\"}', 'description': 'Test JSON', 'type': 'JSON',\n                       'value': '{\"foo\": \"bar\"}', 'mandatory': 'true'}}, {\"testJSON\": {}},\n         ValueError, \"Dict cannot be set as empty. 
A value must be given for testJSON\"),\n        ({ITEM_NAME: {'default': '[\\\"foo\\\": \\\"bar\\\"]', 'description': 'Test list', 'type': 'list',\n                      \"items\": \"enumeration\", 'value': '[\\\"foo\\\": \\\"bar\\\"]', 'options': ['foo', 'bar']}},\n         {ITEM_NAME: \"\"}, TypeError, \"Malformed payload for given testcat category\"),\n        ({ITEM_NAME: {'default': '{\"key1\": \"a\",\"key2\": \"b\"}', 'description': 'Test Kvlist', 'type': 'kvlist',\n                      \"items\": \"enumeration\", 'value': '{\"key1\": \"a\",\"key2\": \"b\"}', 'options': ['b', 'a']}},\n         {ITEM_NAME: \"\"}, TypeError, \"Malformed payload for given testcat category\")\n    ])\n    async def test_update_configuration_item_bulk_exceptions(self, cat_info, config_item_list, exc_type, exc_msg,\n                                                             category_name='testcat'):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        rv1 = await self.async_mock(cat_info)\n        rv2 = await self.async_mock(\"\")\n        with patch.object(c_mgr, 'get_category_all_items', return_value=rv1) as patch_get_all_items:\n            with patch.object(c_mgr, '_check_updates_by_role', return_value=rv2):\n                with patch.object(_logger, 'exception') as patch_log_exc:\n                    with pytest.raises(Exception) as exc_info:\n                        await c_mgr.update_configuration_item_bulk(category_name, config_item_list)\n                    assert exc_type == exc_info.type\n                    assert exc_msg == str(exc_info.value)\n                assert 1 == patch_log_exc.call_count\n        patch_get_all_items.assert_called_once_with(category_name)\n\n    async def test_update_configuration_item_bulk(self, category_name='rest_api'):\n        cat_info = {'enableHttp': {'default': 'true', 'description': 'Enable HTTP', 'type': 'boolean', 'value': 'true'}}\n        
config_item_list = {\"enableHttp\": \"false\"}\n        update_result = {\"response\": \"updated\", \"rows_affected\": 1}\n        read_val = {'allowPing': {'default': 'true', 'description': 'Allow access to ping', 'value': 'true', 'type': 'boolean'},\n                    'enableHttp': {'default': 'true', 'description': 'Enable HTTP', 'value': 'false', 'type': 'boolean'}}\n        payload = {'updates': [{'json_properties': [{'path': ['enableHttp', 'value'], 'column': 'value', 'value': 'false'}],\n                                'return': ['key', 'description', {'format': 'YYYY-MM-DD HH24:MI:SS.MS', 'column': 'ts'}, 'value'],\n                                'where': {'value': 'rest_api', 'column': 'key', 'condition': '='}}]}\n        audit_details = {'items': {'enableHttp': {'oldValue': 'true', 'newValue': 'false'}}, 'category': category_name}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv1 = await self.async_mock(cat_info)\n        _rv2 = await self.async_mock(update_result)\n        _rv3 = await self.async_mock(read_val)\n        _rv4 = await self.async_mock(None)\n        with patch.object(c_mgr, 'get_category_all_items', return_value=_rv1) as patch_get_all_items:\n            with patch.object(c_mgr._storage, 'update_tbl', return_value=_rv2) as patch_update:\n                with patch.object(c_mgr, '_read_category_val', return_value=_rv3) as patch_read_val:\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=_rv4) as patch_audit:\n                            with patch.object(ConfigurationManager, '_run_callbacks', return_value=_rv4) \\\n                                    as patch_callback:\n                                await c_mgr.update_configuration_item_bulk(category_name, config_item_list)\n                            
patch_callback.assert_called_once_with(category_name)\n                        patch_audit.assert_called_once_with('CONCH', audit_details)\n                patch_read_val.assert_called_once_with(category_name)\n            args, kwargs = patch_update.call_args\n            assert 'configuration' == args[0]\n            assert payload == json.loads(args[1])\n        patch_get_all_items.assert_called_once_with(category_name)\n\n    async def test_update_configuration_item_bulk_no_change(self, category_name='rest_api'):\n        cat_info = {'enableHttp': {'default': 'true', 'description': 'Enable HTTP', 'type': 'boolean', 'value': 'true'}}\n        config_item_list = {\"enableHttp\": \"true\"}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv = await self.async_mock(cat_info)\n        with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_all_items:\n            with patch.object(c_mgr._storage, 'update_tbl') as patch_update:\n                with patch.object(AuditLogger, 'information') as patch_audit:\n                    with patch.object(ConfigurationManager, '_run_callbacks') as patch_callback:\n                        result = await c_mgr.update_configuration_item_bulk(category_name, config_item_list)\n                        assert result is None\n                    patch_callback.assert_not_called()\n                patch_audit.assert_not_called()\n            patch_update.assert_not_called()\n        patch_get_all_items.assert_called_once_with(category_name)\n\n    async def test_update_configuration_item_bulk_dict_no_change(self, category_name='rest_api'):\n        cat_info = {'providers': {'default': '{\"providers\": [\"username\", \"ldap\"] }', 'description': 'descr',\n                                  'type': 'JSON', 'value': '{\"providers\": [\"username\", \"ldap\"] }'}}\n        config_item_list = {\"providers\": {\"providers\": 
[\"username\", \"ldap\"]}}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        _rv = await self.async_mock(cat_info)\n        with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_all_items:\n            with patch.object(c_mgr._storage, 'update_tbl') as patch_update:\n                with patch.object(AuditLogger, 'information') as patch_audit:\n                    with patch.object(ConfigurationManager, '_run_callbacks') as patch_callback:\n                        result = await c_mgr.update_configuration_item_bulk(category_name, config_item_list)\n                        assert result is None\n                    patch_callback.assert_not_called()\n                patch_audit.assert_not_called()\n            patch_update.assert_not_called()\n        patch_get_all_items.assert_called_once_with(category_name)\n\n    async def test_update_configuration_item_bulk_dict_change(self, category_name='rest_api'):\n        cat_info = {'providers': {'default': '{\"providers\": [\"username\", \"ldap\"] }', 'description': 'descr',\n                                  'type': 'JSON', 'value': '{\"providers\": [\"username\", \"ldap\"] }'}}\n        config_item_list = {\"providers\": {\"providers\": [\"username\", \"ldap_new\"]}}\n\n        update_result = {\"response\": \"updated\", \"rows_affected\": 1}\n        read_val = {'allowPing': {'default': 'true', 'description': 'Allow access to ping', 'value': 'true', 'type': 'boolean'},\n                    'enableHttp': {'default': 'true', 'description': 'Enable HTTP', 'value': 'false', 'type': 'boolean'}}\n        payload = {'updates': [{'json_properties': [{'path': ['enableHttp', 'value'], 'column': 'value', 'value': 'false'}],\n                                'return': ['key', 'description', {'format': 'YYYY-MM-DD HH24:MI:SS.MS', 'column': 'ts'}, 'value'],\n                                'where': {'value': 'rest_api', 
'column': 'key', 'condition': '='}}]}\n        audit_details = {'items': {'enableHttp': {'oldValue': 'true', 'newValue': 'false'}}, 'category': category_name}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await self.async_mock(cat_info)\n        _rv2 = await self.async_mock(update_result)\n        _rv3 = await self.async_mock(read_val)\n        _rv4 = await self.async_mock(None)\n        with patch.object(c_mgr, 'get_category_all_items', return_value=_rv1) as patch_get_all_items:\n            with patch.object(c_mgr._storage, 'update_tbl', return_value=_rv2) as patch_update:\n                with patch.object(c_mgr, '_read_category_val', return_value=_rv3) as patch_read_val:\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=_rv4):\n                            with patch.object(ConfigurationManager, '_run_callbacks', return_value=_rv4):\n                                await c_mgr.update_configuration_item_bulk(category_name, config_item_list)\n                patch_read_val.assert_called_once_with(category_name)\n            assert 1 == patch_update.call_count\n        patch_get_all_items.assert_called_once_with(category_name)\n\n    @pytest.mark.parametrize(\"config_item_list\", [\n        {'info': \"2\"},\n        {'info': \"2\", \"info1\": \"9\"},\n        {'info1': \"2\", \"info\": \"9\"}\n    ])\n    async def test_update_configuration_item_bulk_with_rule_optional_attribute(self, config_item_list,\n                                                                               category_name='testcat'):\n        cat_info = {'info': {'rule': 'value*3==9', 'default': '3', 'description': 'Test', 'value': '3',\n                             'type': 'integer'}, 'info1': {'default': '3', 'description': 'Test', 'value': '3',\n                                    
                       'type': 'integer'}}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await self.async_mock(cat_info)\n        with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_all_items:\n            with patch.object(_logger, 'exception') as patch_log_exc:\n                with pytest.raises(Exception) as exc_info:\n                    await c_mgr.update_configuration_item_bulk(category_name, config_item_list)\n                assert exc_info.type is ValueError\n                assert 'The value of info is not valid, please supply a valid value' == str(exc_info.value)\n            assert 1 == patch_log_exc.call_count\n        patch_get_all_items.assert_called_once_with(category_name)\n\n    @pytest.mark.parametrize(\"list_type, payload, exc_type, exc_msg\", [\n        ('list', {ITEM_NAME: \"{}\"}, TypeError, 'New value should be passed in list'),\n        ('list', {ITEM_NAME: \"[]\"}, ValueError, 'enum value cannot be empty'),\n        ('list', {ITEM_NAME: \"[\\\"1\\\"]\"}, ValueError, 'For 1, new value does not exist in options enum'),\n        ('kvlist', {ITEM_NAME: \"[]\"}, TypeError, 'New value should be in KV pair format'),\n        ('kvlist', {ITEM_NAME: \"{\\\"key1\\\":\\\"\\\"}\"}, ValueError, 'For key1, enum value cannot be empty'),\n        ('kvlist', {ITEM_NAME: \"{\\\"key1\\\":\\\"b1\\\",\\\"key2\\\":\\\"b\\\"}\"}, ValueError,\n         'For key1, new value does not exist in options enum')\n    ])\n    async def test_bad_update_configuration_item_bulk_with_list_type(self, list_type, payload, exc_type, exc_msg):\n        category_name = 'testcat'\n        if list_type == 'kvlist':\n            cat_info = {ITEM_NAME: {'type': 'kvlist', 'default': '{\"key1\": \"a\", \"key2\": \"b\"}', 'items': 'enumeration',\n                                    'options': ['b', 'a'], 'listSize': '2', 'description': 'test desc',\n                
                     'value': '{\"key1\":\"a1\", \"key2\":\"b\"}'}}\n        else:\n            cat_info = {ITEM_NAME: {'type': 'list', 'default': '[\\\"999\\\"]', 'items': 'enumeration',\n                                    'options': ['13', '999'], 'listSize': '2', 'description': 'test desc',\n                                    'value': '[\\\"13\\\"]'}}\n\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await self.async_mock(cat_info)\n        with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_all_items:\n            with patch.object(_logger, 'exception') as patch_log_exc:\n                with pytest.raises(Exception) as exc_info:\n                    await c_mgr.update_configuration_item_bulk(category_name, payload)\n                assert exc_type == exc_info.type\n                assert exc_msg == str(exc_info.value)\n            assert 1 == patch_log_exc.call_count\n        patch_get_all_items.assert_called_once_with(category_name)\n\n    async def test_update_configuration_item_bulk_with_list_name(self, category_name='Test'):\n        cat_info = {'list': {'type': 'list', 'description': 'A list of variables', 'listName': 'items',\n                           'items': 'string', 'default': '{\"items\": [\"A\", \"B\"]}', 'value': '{\"items\": [\"A\", \"B\"]}'}}\n        config_item_list = {\"list\": \"[\\\"C\\\", \\\"D\\\"]\"}\n        update_result = {\"response\": \"updated\", \"rows_affected\": 1}\n        read_val = {'list': {'default': '{\"items\": [\"A\", \"B\"]}', 'description': 'A list of variables',\n                             'value': '{\"items\": [\"C\", \"D\"]}', 'type': 'list', 'items': 'string', 'listName': 'items'}}\n\n        payload = {'updates': [{'json_properties': [{'path': ['list', 'value'], 'column': 'value',\n                                                     'value': '{\"items\": [\"C\", \"D\"]}'}],\n               
                 'return': ['key', 'description', {'format': 'YYYY-MM-DD HH24:MI:SS.MS', 'column': 'ts'},\n                                           'value'], 'where': {'value': 'Test', 'column': 'key', 'condition': '='}}]}\n        audit_details = {'items': {'list': {'oldValue': '{\"items\": [\"A\", \"B\"]}', 'newValue': '{\"items\": [\"C\", \"D\"]}'}},\n                         'category': category_name}\n\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await self.async_mock(cat_info)\n        _rv2 = await self.async_mock(update_result)\n        _rv3 = await self.async_mock(read_val)\n        _rv4 = await self.async_mock(None)\n        with patch.object(c_mgr, 'get_category_all_items', return_value=_rv1) as patch_get_all_items:\n            with patch.object(c_mgr._storage, 'update_tbl', return_value=_rv2) as patch_update:\n                with patch.object(c_mgr, '_read_category_val', return_value=_rv3) as patch_read_val:\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=_rv4) as patch_audit:\n                            with patch.object(ConfigurationManager, '_run_callbacks',\n                                              return_value=_rv4) as patch_callback:\n                                await c_mgr.update_configuration_item_bulk(category_name, config_item_list)\n                            patch_callback.assert_called_once_with(category_name)\n                        patch_audit.assert_called_once_with('CONCH', audit_details)\n                patch_read_val.assert_called_once_with(category_name)\n            args, kwargs = patch_update.call_args\n            assert 'configuration' == args[0]\n            assert payload == json.loads(args[1])\n        patch_get_all_items.assert_called_once_with(category_name)\n\n    async def 
test_set_optional_value_entry_good_update(self, reset_singleton):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        category_name = 'catname'\n        item_name = 'itemname'\n        new_value_entry = '25'\n        optional_key_name = 'maximum'\n        storage_value_entry = {'readonly': 'true', 'type': 'string', 'order': '4', 'description': 'Test Optional', 'minimum': '2', 'value': '13', 'maximum': '20', 'default': '13'}\n        new_storage_value_entry = {'readonly': 'true', 'type': 'string', 'order': '4', 'description': 'Test Optional', 'minimum': '2', 'value': '13', 'maximum': new_value_entry, 'default': '13'}\n        payload = {\"return\": [\"key\", \"description\", {\"column\": \"ts\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, \"value\"], \"json_properties\": [{\"column\": \"value\", \"path\": [item_name, optional_key_name], \"value\": new_value_entry}], \"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": category_name}}\n        update_result = {\"response\": \"updated\", \"rows_affected\": 1}\n        _se = await self.async_mock(storage_value_entry)\n        _rv = await self.async_mock(update_result)\n        with patch.object(ConfigurationManager, '_read_item_val', side_effect=[_se, _se]) as readpatch:\n            with patch.object(c_mgr._storage, 'update_tbl', return_value=_rv) as patch_update:\n                await c_mgr.set_optional_value_entry(category_name, item_name, optional_key_name, new_value_entry)\n            args, kwargs = patch_update.call_args\n            assert 'configuration' == args[0]\n            assert payload == json.loads(args[1])\n        assert 2 == readpatch.call_count\n        calls = readpatch.call_args_list\n        args, kwargs = calls[0]\n        assert category_name == args[0]\n        assert item_name == args[1]\n        args, kwargs = calls[1]\n        assert category_name == args[0]\n        assert item_name == 
args[1]\n\n    @pytest.mark.parametrize(\"_type, optional_key_name, new_value_entry, exc_msg\", [\n        (int, 'maximum', '1', 'Maximum value should be greater than equal to Minimum value'),\n        (int, 'maximum', '00100', 'Maximum value should be greater than equal to Minimum value'),\n        (float, 'maximum', '11.2', 'Maximum value should be greater than equal to Minimum value'),\n        (int, 'minimum', '30', 'Minimum value should be less than equal to Maximum value'),\n        (float, 'minimum', '50.0', 'Minimum value should be less than equal to Maximum value'),\n        (None, 'readonly', '1',\n         \"For catname category, entry value must be boolean for optional item name readonly; got <class 'str'>\"),\n        (None, 'deprecated', '1',\n         \"For catname category, entry value must be boolean for optional item name deprecated; got <class 'str'>\"),\n        (None, 'rule', 2, \"For catname category, entry value must be string for optional item rule; got <class 'int'>\"),\n        (None, 'displayName', 123,\n         \"For catname category, entry value must be string for optional item displayName; got <class 'int'>\"),\n        (None, 'length', '1a',\n         \"For catname category, entry value must be an integer for optional item length; got <class 'str'>\"),\n        (None, 'maximum', 'blah',\n         \"For catname category, entry value must be an integer or float for optional item maximum; got <class 'str'>\"),\n        (None, 'validity', 12,\n         \"For catname category, entry value must be string for optional item validity; got <class 'int'>\"),\n        (None, 'mandatory', '1',\n         \"For catname category, entry value must be boolean for optional item name mandatory; got <class 'str'>\"),\n        (None, 'group', 5,\n         \"For catname category, entry value must be string for optional item group; got <class 'int'>\"),\n        (None, 'group', True,\n         \"For catname category, entry value must be string for optional 
item group; got <class 'bool'>\"),\n        (None, 'properties', {\"key\": \"Bot\"}, 'For catname category, optional item name properties cannot be updated.')\n    ])\n    async def test_set_optional_value_entry_bad_update(self, reset_singleton, _type, optional_key_name,\n                                                       new_value_entry, exc_msg):\n        minimum = '2'\n        maximum = '20'\n        if _type is not None:\n            if isinstance(1.1, _type):\n                minimum = '12.5'\n                maximum = '40.3'\n\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        category_name = 'catname'\n        item_name = 'itemname'\n        storage_value_entry = {'length': '255', 'displayName': category_name, 'rule': 'value * 3 == 6',\n                               'deprecated': 'false', 'readonly': 'true', 'type': 'string', 'order': '4',\n                               'description': 'Test Optional', 'minimum': minimum, 'value': '13', 'maximum': maximum,\n                               'default': '13', 'validity': 'field X is set', 'mandatory': 'false', 'group': 'Security',\n                               'properties': {\"key\": \"model\"}}\n        _rv = await self.async_mock(storage_value_entry)\n        with patch.object(_logger, \"exception\") as log_exc:\n            with patch.object(ConfigurationManager, '_read_item_val', return_value=_rv) as readpatch:\n                with pytest.raises(Exception) as excinfo:\n                    await c_mgr.set_optional_value_entry(category_name, item_name, optional_key_name, new_value_entry)\n\n                assert excinfo.type is ValueError\n                assert exc_msg == str(excinfo.value)\n            readpatch.assert_called_once_with(category_name, item_name)\n        assert 1 == log_exc.call_count\n        log_exc.assert_called_once_with('Unable to set optional %s entry based on category_name %s and item_name %s and 
value_item_entry %s', optional_key_name, 'catname', 'itemname', new_value_entry)\n\n    @pytest.mark.parametrize(\"new_value_entry, storage_value_entry, exc_msg, exc_type\", [\n        (\"Fledge\", {'default': 'FOG', 'length': '3', 'displayName': 'Length Test', 'value': 'fog', 'type': 'string',\n                    'description': 'Test value '},\n         'For config item {} you cannot set the new value, beyond the length 3', TypeError),\n        (\"0\", {'order': '4', 'default': '10', 'minimum': '10', 'maximum': '19', 'displayName': 'RangeMin Test',\n               'value': '15', 'type': 'integer', 'description': 'Test value'},\n         'For config item {} you cannot set the new value, beyond the range (10,19)', TypeError),\n        (\"20\", {'order': '4', 'default': '10', 'minimum': '10', 'maximum': '19', 'displayName': 'RangeMax Test',\n                'value': '19', 'type': 'integer', 'description': 'Test value'},\n         'For config item {} you cannot set the new value, beyond the range (10,19)', TypeError),\n        (\"1\", {'order': '5', 'default': '2', 'minimum': '2', 'displayName': 'MIN', 'value': '10', 'type': 'integer',\n               'description': 'Test value '}, 'For config item {} you cannot set the new value, below 2', TypeError),\n        (\"11\", {'default': '10', 'maximum': '10', 'displayName': 'MAX', 'value': '10', 'type': 'integer',\n                'description': 'Test value'}, 'For config item {} you cannot set the new value, above 10', TypeError),\n        (\"19.0\", {'default': '19.3', 'minimum': '19.1', 'maximum': '19.5', 'displayName': 'RangeMin Test',\n                  'value': '19.1', 'type': 'float', 'description': 'Test val'},\n         'For config item {} you cannot set the new value, beyond the range (19.1,19.5)', TypeError),\n        (\"19.6\", {'default': '19.4', 'minimum': '19.1', 'maximum': '19.5', 'displayName': 'RangeMax Test',\n                  'value': '19.5', 'type': 'float', 'description': 'Test val'},\n         'For 
config item {} you cannot set the new value, beyond the range (19.1,19.5)', TypeError),\n        (\"20\", {'order': '8', 'default': '10.1', 'maximum': '19.8', 'displayName': 'MAX Test', 'value': '10.1',\n                'type': 'float', 'description': 'Test value'},\n         'For config item {} you cannot set the new value, above 19.8', TypeError),\n        (\"0.7\", {'order': '9', 'default': '0.9', 'minimum': '0.8', 'displayName': 'MIN Test', 'value': '0.9',\n                 'type': 'float', 'description': 'Test value'},\n         'For config item {} you cannot set the new value, below 0.8', TypeError),\n        (\"\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1\\\", \\\"1\\\"]', 'order': '2',\n                'items': 'integer', 'listSize': '2', 'value': '[\\\"1\\\", \\\"2\\\"]'},\n         \"For config item {} value should be passed array list in string format\", TypeError),\n        (\"[\\\"5\\\", \\\"7\\\", \\\"9\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1\\\", \\\"3\\\"]',\n                                   'order': '2', 'items': 'integer', 'listSize': '2', 'value': '[\\\"5\\\", \\\"7\\\"]'},\n         \"For config item {} value array list size limit to 2\", TypeError),\n        (\"\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"foo\\\"]', 'order': '2',\n                'items': 'string', 'listSize': '1', 'value': '[\\\"bar\\\"]'},\n         \"For config item {} value should be passed array list in string format\", TypeError),\n        (\"[\\\"foo\\\", \\\"bar\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"foo\\\"]', 'order': '2',\n                                'items': 'string', 
'listSize': '1', 'value': '[\\\"bar\\\"]'},\n         \"For config item {} value array list size limit to 1\", TypeError),\n        (\"[\\\"1.4\\\", \\\".03\\\", \\\"50.67\\\", \\\"13.13\\\"]\",\n         {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1.4\\\", \\\".03\\\", \\\"50.67\\\"]', 'order': '2',\n          'items': 'float', 'listSize': '3', 'value': '[\\\"1.4\\\", \\\".03\\\", \\\"50.67\\\"]'},\n         \"For config item {} value array list size limit to 3\", TypeError),\n        (\"[\\\"10\\\", \\\"10\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1\\\", \\\"2\\\"]', 'order': '2',\n                'items': 'integer', 'value': '[\\\"3\\\", \\\"4\\\"]'}, \"For config item {} elements are not unique\", ValueError),\n        (\"[\\\"foo\\\", \\\"foo\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"a\\\", \\\"c\\\"]', 'order': '2',\n                              'items': 'string', 'value': '[\\\"abc\\\", \\\"def\\\"]'},\n         \"For config item {} elements are not unique\", ValueError),\n        (\"[\\\".002\\\", \\\".002\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1.2\\\", \\\"1.4\\\"]',\n                                  'order': '2', 'items': 'float', 'value': '[\\\"5.67\\\", \\\"12.0\\\"]'},\n         \"For config item {} elements are not unique\", ValueError),\n        (\"[\\\"10\\\", \\\"foo\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1\\\", \\\"2\\\"]', 'order': '2',\n                              'items': 'integer', 'value': '[\\\"3\\\", \\\"4\\\"]'},\n         \"For config item {} all elements should be of same integer type\", ValueError),\n        (\"[\\\"foo\\\", 1]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"a\\\", \\\"c\\\"]', 'order': '2',\n                                'items': 'string', 'value': '[\\\"abc\\\", \\\"def\\\"]'},\n         \"For config item {} all elements should be of same string 
type\", ValueError),\n        (\"[\\\"1\\\", \\\"2\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1.2\\\", \\\"1.4\\\"]',\n                                  'order': '2', 'items': 'float', 'value': '[\\\"5.67\\\", \\\"12.0\\\"]'},\n         \"For config item {} all elements should be of same float type\", ValueError),\n        (\"[\\\"100\\\", \\\"2\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"34\\\", \\\"48\\\"]', 'order': '2',\n                              'items': 'integer', 'listSize': '2', 'value': '[\\\"34\\\", \\\"48\\\"]', 'minimum': '20'},\n         \"For config item {} you cannot set the new value, below 20\", ValueError),\n        (\"[\\\"50\\\", \\\"49\\\", \\\"51\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"34\\\", \\\"48\\\"]',\n                                       'order': '2', 'items': 'integer', 'listSize': '3',\n                                       'value': '[\\\"34\\\", \\\"48\\\"]', 'maximum': '50'},\n         \"For config item {} you cannot set the new value, above 50\", ValueError),\n        (\"[\\\"50\\\", \\\"49\\\", \\\"46\\\"]\", {'description': 'Simple list', 'type': 'list', 'default':\n            '[\\\"50\\\", \\\"48\\\", \\\"49\\\"]', 'order': '2', 'items': 'integer', 'listSize': '3',\n                                      'value': '[\\\"47\\\", \\\"48\\\", \\\"49\\\"]', 'maximum': '50', 'minimum': '47'},\n         \"For config item {} you cannot set the new value, beyond the range (47,50)\", ValueError),\n        (\"[\\\"50\\\", \\\"49\\\", \\\"51\\\"]\", {'description': 'Simple list', 'type': 'list', 'default':\n            '[\\\"50\\\", \\\"48\\\", \\\"49\\\"]', 'order': '2', 'items': 'integer', 'listSize': '3',\n                                      'value': '[\\\"47\\\", \\\"48\\\", \\\"49\\\"]', 'maximum': '50', 'minimum': '47'},\n         \"For config item {} you cannot set the new value, beyond the range (47,50)\", ValueError),\n        
(\"[\\\"foo\\\", \\\"bars\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"a1\\\", \\\"c1\\\"]',\n                                 'order': '2', 'items': 'string', 'value': '[\\\"ab\\\", \\\"de\\\"]', 'listSize': '2',\n                                 'length': '3'},\n         \"For config item {} you cannot set the new value, beyond the length 3\", ValueError),\n        (\"[\\\"2.6\\\", \\\"1.002\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"5.2\\\", \\\"2.5\\\"]',\n                                  'order': '2', 'items': 'float', 'value': '[\\\"5.67\\\", \\\"2.5\\\"]', 'minimum': '2.5',\n                                  'listSize': '2'}, \"For config item {} you cannot set the new value, below 2.5\",\n         ValueError),\n        (\"[\\\"2.6\\\", \\\"1.002\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"2.2\\\", \\\"2.5\\\"]',\n                                  'order': '2', 'items': 'float', 'value': '[\\\"1.67\\\", \\\"2.5\\\"]', 'maximum': '2.5',\n                                  'listSize': '2'}, \"For config item {} you cannot set the new value, above 2.5\",\n         ValueError),\n        (\"[\\\"2.6\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"2.2\\\"]', 'order': '2',\n                       'items': 'float', 'value': '[\\\"2.5\\\"]', 'listSize': '1', 'minimum': '2', 'maximum': '2.5'},\n         \"For config item {} you cannot set the new value, beyond the range (2,2.5)\", ValueError),\n        (\"[\\\"1.999\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"2.2\\\"]', 'order': '2',\n                       'items': 'float', 'value': '[\\\"2.5\\\"]', 'listSize': '1', 'minimum': '2', 'maximum': '2.5'},\n         \"For config item {} you cannot set the new value, beyond the range (2,2.5)\", ValueError),\n        (\"\", {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"val\\\"}', 'order': '2',\n              
'items': 'integer', 'listSize': '1', 'value': '{\\\"key\\\": \\\"val\\\"}'},\n         \"For config item {} value should be passed KV pair list in string format\", TypeError),\n        (\"{\\\"key\\\": \\\"1\\\", \\\"key2\\\": \\\"2\\\"}\",\n         {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"1\\\"}', 'order': '2',\n          'items': 'integer', 'listSize': '1', 'value': '{\\\"key\\\": \\\"2\\\"}'},\n         \"For config item {} value KV pair list size limit to 1\", TypeError),\n        (\"\", {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"val\\\"}', 'order': '2',\n              'items': 'string', 'listSize': '1', 'value': '{\\\"key\\\": \\\"val\\\"}'},\n         \"For config item {} value should be passed KV pair list in string format\", TypeError),\n        (\"\", {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"val\\\"}', 'order': '2',\n              'items': 'string', 'listSize': '1', 'value': '[\\\"bar\\\"]'},\n         \"For config item {} value should be passed KV pair list in string format\", TypeError),\n        (\"{\\\"key\\\": \\\"val\\\", \\\"key2\\\": \\\"val2\\\"}\",\n         {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"val\\\"}', 'order': '2',\n          'items': 'string', 'listSize': '1', 'value': '{\\\"key\\\": \\\"val\\\"}'},\n         \"For config item {} value KV pair list size limit to 1\", TypeError),\n        (\"{\\\"key\\\": \\\"1.2\\\", \\\"key2\\\": \\\"0.9\\\", \\\"key3\\\": \\\"444.12\\\"}\",\n         {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"1.2\\\", \\\"key2\\\": \\\"0.9\\\"}',\n          'order': '2', 'items': 'float', 'listSize': '2', 'value': '{\\\"key\\\": \\\"1.2\\\", \\\"key2\\\": \\\"0.9\\\"}'},\n         \"For config item {} value KV pair list size limit to 2\", TypeError),\n        (\"{\\\"key\\\": \\\"1.2\\\", \\\"key\\\": \\\"1.23\\\"}\", {'description': 
'Simple list', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"11.12\\\"}',\n                                  'order': '2', 'items': 'float', 'value': '{\\\"key\\\": \\\"1.4\\\"}'},\n         \"For config item {} duplicate KV pair found\", TypeError),\n        (\"{\\\"key\\\": \\\"val\\\"}\", {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"1\\\"}',\n                               'items': 'integer', 'value': '{\\\"key\\\": \\\"13\\\"}'},\n         \"For config item {} all elements should be of same integer type\", ValueError),\n        (\"{\\\"key\\\": 1}\", {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"a\\\": \\\"c\\\"}', 'order': '2',\n                          'items': 'string', 'value': '{\\\"abc\\\": \\\"def\\\"}'},\n         \"For config item {} all elements should be of same string type\", ValueError),\n        (\"{\\\"key\\\": \\\"2\\\"}\", {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"1.4\\\"}',\n                            'order': '2', 'items': 'float', 'value': '{\\\"key\\\": \\\"12.0\\\"}'},\n         \"For config item {} all elements should be of same float type\", ValueError),\n        (\"{\\\"key\\\": \\\"2\\\"}\", {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"48\\\"}',\n                              'items': 'integer', 'listSize': '1', 'value': '{\\\"key\\\": \\\"48\\\"}', 'minimum': '20'},\n         \"For config item {} you cannot set the new value, below 20\", ValueError),\n        (\"{\\\"key\\\": \\\"100\\\"}\", {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"48\\\"}',\n                                'items': 'integer', 'listSize': '1', 'value': '{\\\"key\\\": \\\"48\\\"}', 'maximum': '50'},\n         \"For config item {} you cannot set the new value, above 50\", ValueError),\n        (\"{\\\"key\\\": \\\"46\\\"}\", {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": 
\\\"50\\\"}',\n                               'items': 'integer', 'listSize': '1', 'value': '{\\\"key\\\": \\\"48\\\"}', 'maximum': '50',\n                               'minimum': '47'}, \"For config item {} you cannot set the new value, beyond the \"\n                                                 \"range (47,50)\", ValueError),\n        (\"{\\\"key\\\": \\\"100\\\"}\", {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"48\\\"}',\n                                'items': 'integer', 'listSize': '1', 'value': '{\\\"key\\\": \\\"48\\\"}', 'maximum': '50',\n                                'minimum': '47'},\n         \"For config item {} you cannot set the new value, beyond the range (47,50)\", ValueError),\n        (\"{\\\"foo\\\": \\\"bars\\\"}\", {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"a1\\\": \\\"c1\\\"}',\n                                 'items': 'string', 'value': '[\\\"ab\\\", \\\"de\\\"]', 'listSize': '1', 'length': '3'},\n         \"For config item {} you cannot set the new value, beyond the length 3\", ValueError),\n        (\"{\\\"key\\\": \\\"1.002\\\", \\\"key2\\\": \\\"2.6\\\"}\", {'description': 'expression', 'type': 'kvlist',\n                                                     'default': '{\\\"key\\\": \\\"2.5\\\"}', 'items': 'float',\n                                                     'value': '{\\\"key\\\": \\\"2.5\\\"}', 'minimum': '2.5', 'listSize': '2'},\n         \"For config item {} you cannot set the new value, below 2.5\", ValueError),\n        (\"{\\\"key\\\": \\\"2.6\\\"}\", {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"2.5\\\"}',\n                                  'items': 'float', 'value': '{\\\"key\\\": \\\"2.5\\\"}', 'maximum': '2.5',\n                                  'listSize': '1'}, \"For config item {} you cannot set the new value, above 2.5\",\n         ValueError),\n        (\"{\\\"key\\\": \\\"2.6\\\"}\", {'description': 'expression', 
'type': 'kvlist', 'default': '{\\\"key\\\": \\\"2.2\\\"}',\n                                'items': 'float', 'value': '{\\\"key\\\": \\\"2.5\\\"}', 'listSize': '1', 'minimum': '2',\n                                'maximum': '2.5'},\n         \"For config item {} you cannot set the new value, beyond the range (2,2.5)\", ValueError),\n        (\"{\\\"key\\\": \\\"1.999\\\"}\", {'description': 'expression', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"2.2\\\"}',\n                         'items': 'float', 'value': '{\\\"key\\\": \\\"2.5\\\"}', 'listSize': '1', 'minimum': '2',\n                                  'maximum': '2.5'},\n         \"For config item {} you cannot set the new value, beyond the range (2,2.5)\", ValueError)\n    ])\n    def test_bad__validate_value_per_optional_attribute(self, new_value_entry, storage_value_entry, exc_msg, exc_type):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with pytest.raises(Exception) as exc_info:\n            c_mgr._validate_value_per_optional_attribute(ITEM_NAME, storage_value_entry, new_value_entry)\n        assert exc_info.type is exc_type\n        msg = exc_msg.format(ITEM_NAME)\n        assert msg == str(exc_info.value)\n\n    @pytest.mark.parametrize(\"new_value_entry, storage_value_entry\", [\n        (\"Fledge\", {'default': 'FOG', 'length': '7', 'displayName': 'Length Test', 'value': 'fledge',\n                    'type': 'string', 'description': 'Test value '}),\n        (\"2\", {'order': '5', 'default': '10', 'minimum': '2', 'displayName': 'MIN', 'value': '10', 'type': 'integer',\n               'description': 'Test value '}),\n        (\"19.1\", {'default': '19.4', 'minimum': '19.1', 'maximum': '19.5', 'displayName': 'RangeMin Test',\n                  'value': '19.5', 'type': 'float', 'description': 'Test val'}),\n        (\"19.5\", {'default': '19.4', 'minimum': '19.1', 'maximum': '19.5', 'displayName': 'RangeMax 
Test',\n                  'value': '19.5', 'type': 'float', 'description': 'Test val'}),\n        (\"19.2\", {'default': '19.4', 'minimum': '19.1', 'maximum': '19.5', 'displayName': 'Range Test',\n                  'value': '19.5', 'type': 'float', 'description': 'Test val'}),\n        (\"10\", {'order': '4', 'default': '10', 'minimum': '10', 'maximum': '19', 'displayName': 'RangeMin Test',\n                'value': '15', 'type': 'integer', 'description': 'Test value'}),\n        (\"19\", {'order': '4', 'default': '10', 'minimum': '10', 'maximum': '19', 'displayName': 'RangeMax Test',\n                'value': '15', 'type': 'integer', 'description': 'Test value'}),\n        (\"15\", {'order': '4', 'default': '10', 'minimum': '10', 'maximum': '19', 'displayName': 'Range Test',\n                'value': '15', 'type': 'integer', 'description': 'Test value'}),\n        (\"[]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1\\\", \\\"2\\\"]', 'order': '2',\n                'items': 'integer', 'value': '[\\\"3\\\", \\\"4\\\"]'}),\n        (\"[\\\"10\\\", \\\"20\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1\\\", \\\"2\\\"]', 'order': '2',\n                              'items': 'integer', 'value': '[\\\"3\\\", \\\"4\\\"]'}),\n        (\"[\\\"foo\\\", \\\"bar\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"a\\\", \\\"c\\\"]', 'order': '2',\n                                'items': 'string', 'value': '[\\\"abc\\\", \\\"def\\\"]'}),\n        (\"[\\\".002\\\", \\\"1.002\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1.2\\\", \\\"1.4\\\"]',\n                                  'order': '2', 'items': 'float', 'value': '[\\\"5.67\\\", \\\"12.0\\\"]'}),\n        (\"[\\\"10\\\", \\\"20\\\", \\\"30\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1\\\", \\\"2\\\"]',\n                                      'order': '2', 'items': 'integer', 'listSize': \"3\", 'value': 
'[\\\"3\\\", \\\"4\\\"]'}),\n        (\"[\\\"new string\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"a\\\", \\\"c\\\"]', 'order': '2',\n                                'items': 'string', 'listSize': \"1\", 'value': '[\\\"abc\\\", \\\"def\\\"]'}),\n        (\"[\\\"6.523e-07\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1.2\\\", \\\"1.4\\\"]',\n                                   'order': '2', 'items': 'float', 'listSize': \"1\", 'value': '[\\\"5.67\\\", \\\"12.0\\\"]'}),\n        (\"[]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1\\\", \\\"2\\\"]',\n                                      'order': '2', 'items': 'integer', 'listSize': \"0\", 'value': '[\\\"3\\\", \\\"4\\\"]'}),\n        (\"[]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"a\\\", \\\"c\\\"]', 'order': '2',\n                              'items': 'string', 'listSize': \"0\", 'value': '[\\\"abc\\\", \\\"def\\\"]'}),\n        (\"[]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"1.2\\\", \\\"1.4\\\"]',\n                             'order': '2', 'items': 'float', 'listSize': \"0\", 'value': '[\\\"5.67\\\", \\\"12.0\\\"]'}),\n        (\"[]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"a\\\", \\\"c\\\"]', 'order': '2',\n                'items': 'string', 'listSize': \"1\", 'value': '[\\\"abc\\\", \\\"def\\\"]'}),\n        (\"[\\\"100\\\", \\\"20\\\"]\", {'description': 'SL', 'type': 'list', 'default': '[\\\"34\\\", \\\"48\\\"]', 'order': '2',\n                               'items': 'integer', 'listSize': '2', 'value': '[\\\"34\\\", \\\"48\\\"]', 'minimum': '20'}),\n        (\"[\\\"50\\\", \\\"49\\\", \\\"0\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"34\\\", \\\"48\\\"]',\n                                      'order': '2', 'items': 'integer', 'listSize': '3',\n                                      'value': '[\\\"34\\\", \\\"48\\\"]', 'maximum': 
'50'}),\n        (\"[\\\"50\\\", \\\"49\\\", \\\"47\\\"]\", {'description': 'Simple list', 'type': 'list', 'default':\n            '[\\\"50\\\", \\\"48\\\", \\\"49\\\"]', 'order': '2', 'items': 'integer', 'listSize': '3',\n                                      'value': '[\\\"47\\\", \\\"48\\\", \\\"49\\\"]', 'maximum': '50', 'minimum': '47'}),\n        (\"[\\\"50\\\", \\\"49\\\", \\\"48\\\"]\", {'description': 'Simple list', 'type': 'list', 'default':\n            '[\\\"50\\\", \\\"48\\\", \\\"49\\\"]', 'order': '2', 'items': 'integer', 'listSize': '3',\n                                      'value': '[\\\"47\\\", \\\"48\\\", \\\"49\\\"]', 'maximum': '50', 'minimum': '47'}),\n        (\"[\\\"foo\\\", \\\"bar\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"a1\\\", \\\"c1\\\"]',\n                                 'order': '2', 'items': 'string', 'value': '[\\\"ab\\\", \\\"de\\\"]', 'listSize': '2',\n                                 'length': '3'}),\n        (\"[\\\"2.6\\\", \\\"13.002\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"5.2\\\", \\\"2.5\\\"]',\n                                  'order': '2', 'items': 'float', 'value': '[\\\"5.67\\\", \\\"2.5\\\"]', 'minimum': '2.5',\n                                  'listSize': '2'}),\n        (\"[\\\"2.4\\\", \\\"1.002\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"2.2\\\", \\\"2.5\\\"]',\n                                  'order': '2', 'items': 'float', 'value': '[\\\"1.67\\\", \\\"2.5\\\"]', 'maximum': '2.5',\n                                  'listSize': '2'}),\n        (\"[\\\"2.0\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"2.2\\\"]', 'order': '2',\n                       'items': 'float', 'value': '[\\\"2.5\\\"]', 'listSize': '1', 'minimum': '2', 'maximum': '2.5'}),\n        (\"[\\\"2.5\\\"]\", {'description': 'Simple list', 'type': 'list', 'default': '[\\\"2.2\\\"]', 'order': '2',\n                         'items': 
'float', 'value': '[\\\"2.5\\\"]', 'listSize': '1', 'minimum': '2', 'maximum': '2.5'}),\n        (\"{\\\"key\\\": \\\"bar\\\"}\",\n         {'description': 'A list of expressions and values', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"c\\\"}',\n          'order': '2',\n          'items': 'string', 'value': '{\\\"key\\\": \\\"def\\\"}'}),\n        (\"{\\\"key\\\": \\\"1.002\\\"}\",\n         {'description': 'A list of expressions and values', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"1.4\\\"}',\n          'order': '2', 'items': 'float', 'value': '{\\\"key\\\": \\\"12.0\\\"}'}),\n        (\"{\\\"key\\\": \\\"10\\\", \\\"key1\\\": \\\"20\\\", \\\"key2\\\": \\\"30\\\"}\",\n         {'description': 'A list of expressions and values', 'type': 'kvlist',\n          'default': '{\\\"key\\\": \\\"10\\\", \\\"key1\\\": \\\"20\\\", \\\"key2\\\": \\\"30\\\"}',\n          'order': '2', 'items': 'integer', 'listSize': \"3\",\n          'value': '{\\\"key\\\": \\\"1\\\", \\\"key1\\\": \\\"2\\\", \\\"key2\\\": \\\"3\\\"}'}),\n        (\"{\\\"key\\\": \\\"new string\\\"}\",\n         {'description': 'A list of expressions and values', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"c\\\"}',\n          'order': '2',\n          'items': 'string', 'listSize': \"1\", 'value': '{\\\"key\\\": \\\"def\\\"}'}),\n        (\"{\\\"key\\\": \\\"6.523e-07\\\"}\",\n         {'description': 'A list of expressions and values', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"1.4\\\"}',\n          'order': '2', 'items': 'float', 'listSize': \"1\", 'value': '{\\\"key\\\": \\\"12.0\\\"}'}),\n        (\"{}\", {'description': 'A list of expressions and values', 'type': 'kvlist', 'default': '{\\\"1\\\": \\\"2\\\"}',\n                'order': '2', 'items': 'integer', 'listSize': \"0\", 'value': '{\\\"3\\\": \\\"4\\\"}'}),\n        (\"{}\", {'description': 'A list of expressions and values', 'type': 'kvlist', 'default': '{\\\"a\\\": \\\"c\\\"}',\n                'order': '2',\n                
'items': 'string', 'listSize': \"0\", 'value': '{\\\"abc\\\": \\\"def\\\"}'}),\n        (\"{}\", {'description': 'A list of expressions and values', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"1.4\\\"}',\n                'order': '2', 'items': 'float', 'listSize': \"0\", 'value': '{\\\"key\\\": \\\"12.0\\\"}'}),\n        (\"{}\", {'description': 'A list of expressions and values', 'type': 'kvlist', 'default': '{\\\"1\\\": \\\"2\\\"}',\n                'order': '2', 'items': 'integer', 'listSize': \"1\", 'value': '{\\\"3\\\": \\\"4\\\"}'}),\n        (\"{\\\"key\\\": \\\"100\\\", \\\"key2\\\": \\\"20\\\"}\",\n         {'description': 'SL', 'type': 'kvlist', 'default': '{\\\"key\\\": \\\"100\\\", \\\"key2\\\": \\\"48\\\"}', 'order': '2',\n          'items': 'integer', 'listSize': '2', 'value': '{\\\"key\\\": \\\"34\\\", \\\"key2\\\": \\\"20\\\"}', 'minimum': '20'}),\n        (\"{\\\"key\\\": \\\"50\\\", \\\"key2\\\": \\\"0\\\", \\\"key3\\\": \\\"49\\\"}\",\n         {'description': 'A list of expressions and values', 'type': 'kvlist',\n          'default': '{\\\"key\\\": \\\"47\\\", \\\"key2\\\": \\\"48\\\", \\\"key3\\\": \\\"49\\\"}',\n          'order': '2', 'items': 'integer', 'listSize': '3',\n          'value': '{\\\"key\\\": \\\"47\\\", \\\"key2\\\": \\\"48\\\", \\\"key3\\\": \\\"49\\\"}', 'maximum': '50'}),\n        (\"{\\\"key\\\": \\\"50\\\", \\\"key2\\\": \\\"48\\\", \\\"key3\\\": \\\"49\\\"}\",\n         {'description': 'A list of expressions and values', 'type': 'kvlist', 'default':\n             '{\\\"key\\\": \\\"50\\\", \\\"key2\\\": \\\"48\\\", \\\"key3\\\": \\\"49\\\"}', 'order': '2', 'items': 'integer', 'listSize': '3',\n          'value': '{\\\"key\\\": \\\"47\\\", \\\"key2\\\": \\\"48\\\", \\\"key3\\\": \\\"49\\\"}', 'maximum': '50', 'minimum': '47'}),\n        (\"{\\\"key\\\": \\\"50\\\", \\\"key2\\\": \\\"48\\\", \\\"key3\\\": \\\"49\\\"}\",\n         {'description': 'A list of expressions and values', 'type': 'kvlist', 'default':\n           
  '{\\\"key\\\": \\\"47\\\", \\\"key2\\\": \\\"48\\\", \\\"key3\\\": \\\"49\\\"}', 'order': '2', 'items': 'integer', 'listSize': '3',\n          'value': '{\\\"key\\\": \\\"47\\\", \\\"key2\\\": \\\"48\\\", \\\"key3\\\": \\\"49\\\"}', 'maximum': '50', 'minimum': '47'}),\n        (\"{\\\"key\\\": \\\"foo\\\", \\\"key2\\\": \\\"bar\\\"}\", {'description': 'A list of expressions and values', 'type': 'kvlist',\n                                                   'default': '{\\\"key\\\": \\\"a1\\\", \\\"key2\\\": \\\"c1\\\"}',\n                                                   'order': '2', 'items': 'string',\n                                                   'value': '{\\\"key\\\": \\\"ab\\\", \\\"key2\\\": \\\"de\\\"}', 'listSize': '2',\n                                                   'length': '3'}),\n        (\"{\\\"key\\\": \\\"2.6\\\", \\\"key2\\\": \\\"13.002\\\"}\",\n         {'description': 'A list of expressions and values', 'type': 'kvlist',\n          'default': '{\\\"key\\\": \\\"5.2\\\", \\\"key2\\\": \\\"2.5\\\"}',\n          'order': '2', 'items': 'float', 'value': '{\\\"key\\\": \\\"5.67\\\", \\\"key2\\\": \\\"2.5\\\"}', 'minimum': '2.5',\n          'listSize': '2'}),\n        (\"{\\\"key\\\": \\\"2.4\\\", \\\"key2\\\": \\\"1.002\\\"}\",\n         {'description': 'A list of expressions and values', 'type': 'kvlist',\n          'default': '{\\\"key\\\": \\\"2.2\\\", \\\"key2\\\": \\\"2.5\\\"}', 'order': '2', 'items': 'float',\n          'value': '{\\\"key\\\": \\\"1.67\\\", \\\"key2\\\": \\\"2.5\\\"}', 'maximum': '2.5', 'listSize': '2'}),\n        (\"{\\\"key\\\": \\\"2.0\\\"}\", {'description': 'A list of expressions and values', 'type': 'kvlist',\n                                'default': '{\\\"key\\\": \\\"2.2\\\"}', 'order': '2', 'items': 'float', 'value': '{\\\"key\\\": \\\"2.5\\\"}',\n                                'listSize': '1', 'minimum': '2', 'maximum': '2.5'}),\n        (\"{\\\"key\\\": \\\"2.5\\\"}\", {'description': 'A list of expressions and 
values', 'type': 'kvlist',\n                                'default': '{\\\"key\\\": \\\"2.2\\\"}', 'order': '2', 'items': 'float', 'value': '{\\\"key\\\": \\\"2.5\\\"}',\n                                'listSize': '1', 'minimum': '2', 'maximum': '2.5'})\n    ])\n    def test_good__validate_value_per_optional_attribute(self, new_value_entry, storage_value_entry):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        raised = False\n        try:\n            c_mgr._validate_value_per_optional_attribute(ITEM_NAME, storage_value_entry, new_value_entry)\n        except Exception:\n            raised = True\n        assert raised is False\n\n    async def test__ignore_unrecognized_key_in_config_items(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        entry_name = \"test_entry\"\n        test_config = {\n            ITEM_NAME: {\n                \"description\": \"Test with entry_name\",\n                \"type\": \"string\",\n                \"default\": \"test_default_value\",\n                entry_name: \"some_value\"\n            }\n        }\n        with patch.object(_logger, 'warning') as log_warn:\n            await c_mgr._validate_category_val(CAT_NAME, test_config)\n        assert 1 == log_warn.call_count\n        log_warn.assert_called_once_with('For {} category, DISCARDING unrecognized entry name {} for item name {}'.\n                                         format(CAT_NAME, entry_name, ITEM_NAME))\n\n    async def test__ignore_unrecognized_key_in_config_items_without_set_value_val_from_default_val(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        entry_name = \"blah\"\n        test_config = {\n            ITEM_NAME: {\n                \"description\": \"test description val\",\n                \"type\": 
\"integer\",\n                \"default\": \"test default val\",\n                entry_name: \"some_value\"\n            },\n        }\n        with patch.object(_logger, 'warning') as log_warn:\n            with pytest.raises(ValueError) as excinfo:\n                await c_mgr._validate_category_val(category_name=CAT_NAME, category_val=test_config,\n                                                   set_value_val_from_default_val=False)\n            assert 'For {} category, missing entry name value for item name {}'.format(\n                CAT_NAME, ITEM_NAME) == str(excinfo.value)\n        assert 1 == log_warn.call_count\n        log_warn.assert_called_once_with('For {} category, DISCARDING unrecognized entry name {} for item name {}'.\n                                         format(CAT_NAME, entry_name, ITEM_NAME))\n\n    # Password masking tests\n    @pytest.mark.parametrize(\"item_type, value, expected\", [\n        ('password', 'secret123', '****'),\n        ('string', 'normalvalue', 'normalvalue'),\n        ('integer', '42', '42'),\n        ('boolean', 'true', 'true'),\n        ('JSON', '{\"key\": \"value\"}', '{\"key\": \"value\"}')\n    ])\n    def test__mask_password_value_individual(self, reset_singleton, item_type, value, expected):\n        \"\"\"Test password masking for individual values\"\"\"\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        assert expected == c_mgr._mask_password_value(item_type, value)\n\n    @pytest.mark.parametrize(\"category_val, expected_masked_items\", [\n        # Category with password item\n        ({\n            'username': {'type': 'string', 'value': 'admin', 'default': 'admin'},\n            'password': {'type': 'password', 'value': 'secret123', 'default': 'defaultpass'}\n        }, {'password': {'value': '****', 'default': '****'}}),\n        # Category with multiple password items\n        ({\n            'dbPassword': {'type': 
'password', 'value': 'dbsecret', 'default': 'dbdefault'},\n            'apiKey': {'type': 'password', 'value': 'apisecret', 'default': 'apidefault'},\n            'port': {'type': 'integer', 'value': '8080', 'default': '8080'}\n        }, {\n            'dbPassword': {'value': '****', 'default': '****'},\n            'apiKey': {'value': '****', 'default': '****'}\n        }),\n        # Category with no password items\n        ({\n            'host': {'type': 'string', 'value': 'localhost', 'default': 'localhost'},\n            'port': {'type': 'integer', 'value': '8080', 'default': '8080'}\n        }, {}),\n        # Empty category\n        ({}, {}),\n    ])\n    def test__mask_password_value_category(self, reset_singleton, category_val, expected_masked_items):\n        \"\"\"Test password masking for category configurations\"\"\"\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        result = c_mgr._mask_password_value(category_val)\n        # Verify that password items are masked\n        for item_name, expected_values in expected_masked_items.items():\n            assert item_name in result\n            for field, expected_value in expected_values.items():\n                assert expected_value == result[item_name][field]\n        # Verify non-password items are unchanged\n        if isinstance(category_val, dict):\n            for item_name, item_val in category_val.items():\n                if isinstance(item_val, dict) and item_val.get('type') != 'password':\n                    assert item_val == result[item_name]\n\n    def test__mask_password_value_non_dict_input(self, reset_singleton):\n        \"\"\"Test password masking with non-dict input returns unchanged\"\"\"\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        test_inputs = ['string_value', 123, ['list'], None]\n        for test_input in 
test_inputs:\n            result = c_mgr._mask_password_value(test_input)\n            assert test_input == result\n\n    async def test_create_category_with_password_masking_in_audit(self, reset_singleton):\n        \"\"\"Test that password values are masked in audit trail during category creation\"\"\"\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        category_name = 'test_auth'\n        category_val = {\n            'username': {'type': 'string', 'description': 'Username', 'default': 'admin'},\n            'password': {'type': 'password', 'description': 'Password', 'default': 'secret123'}\n        }\n        _rv = await self.async_mock({'response': 'inserted', 'rows_affected': 1})\n        _rv2 = await self.async_mock(None)\n        with patch.object(ConfigurationManager, '_read_category_val', return_value=None) as patch_read:\n            with patch.object(ConfigurationManager, '_storage') as patch_storage:\n                patch_storage.insert_into_tbl.return_value = _rv\n                with patch.object(AuditLogger, '__init__', return_value=None):\n                    with patch.object(AuditLogger, 'information', return_value=_rv2) as patch_audit:\n                        with patch.object(ConfigurationManager, 'search_for_ACL_recursive_from_cat_name', \n                                        return_value=(False, None, None, None)) as patch_acl:\n                            with patch.object(ConfigurationManager, '_run_callbacks', return_value=_rv2):\n                                await c_mgr.create_category(category_name, category_val, 'Test auth category')\n                        patch_acl.assert_called_once()\n                    patch_audit.assert_called_once()\n                    audit_call_args = patch_audit.call_args[0]\n                    assert 'CONAD' == audit_call_args[0]\n                    audit_data = audit_call_args[1]\n                    assert 
category_name == audit_data['name']\n                    # Verify password is masked in audit data\n                    assert '****' == audit_data['category']['password']['value']\n                    assert '****' == audit_data['category']['password']['default']\n                    # Verify non-password items are not masked\n                    assert 'admin' == audit_data['category']['username']['default']\n            patch_storage.insert_into_tbl.assert_not_called()\n        patch_read.assert_called_once_with(category_name)\n\n    async def test_merge_category_with_password_masking_in_audit(self, reset_singleton):\n        \"\"\"Test that password values are masked in audit trail during category merge\"\"\"\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        category_name = 'test_auth'\n        category_val_new = {\n            'username': {'type': 'string', 'description': 'Username', 'default': 'admin'},\n            'password': {'type': 'password', 'description': 'Password', 'default': 'newsecret'}\n        }\n        category_val_storage = {\n            'username': {'type': 'string', 'description': 'Username', 'default': 'admin', 'value': 'admin'},\n            'password': {'type': 'password', 'description': 'Password', 'default': 'oldsecret', 'value': 'oldsecret'}\n        }\n        _rv2 = await self.async_mock(None)\n        with patch.object(ConfigurationManager, '_read_category_val', return_value=category_val_storage):\n            with patch.object(ConfigurationManager, '_read_all_category_names', \n                              return_value=[('test_auth', 'desc', 'test_auth')]):\n                with patch.object(ConfigurationManager, '_update_category', return_value=_rv2):\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=_rv2) as patch_audit:\n       
                     with patch.object(ConfigurationManager, 'search_for_ACL_recursive_from_cat_name', \n                                              return_value=(False, None, None, None)):\n                                with patch.object(ConfigurationManager, '_run_callbacks', return_value=_rv2):\n                                    await c_mgr.create_category(category_name, category_val_new, 'Test auth category')\n                        patch_audit.assert_called_once()\n                        audit_call_args = patch_audit.call_args[0]\n                        assert 'CONCH' == audit_call_args[0]\n                        audit_data = audit_call_args[1]\n                        # Verify passwords are masked in both old and new values\n                        assert '****' == audit_data['oldValue']['password']['value']\n                        assert '****' == audit_data['oldValue']['password']['default']\n                        assert '****' == audit_data['newValue']['password']['default']\n                        # Verify non-password items are not masked\n                        assert 'admin' == audit_data['oldValue']['username']['value']\n                        assert 'admin' == audit_data['newValue']['username']['default']\n\n    async def test_deprecated_item_with_password_masking_in_audit(self, reset_singleton):\n        \"\"\"Test that password values are masked when items are deprecated\"\"\"\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        category_name = 'test_deprecated'\n        category_val_new = {\n            'oldPassword': {'type': 'password', 'description': 'Old Password', 'default': 'oldsecret', \n                            'value': 'oldsecret', 'deprecated': 'true'}\n        }\n        category_val_storage = {}\n        _rv = await self.async_mock(None)\n        with patch.object(AuditLogger, '__init__', return_value=None):\n            with 
patch.object(AuditLogger, 'information', return_value=_rv) as patch_audit:\n                result = await c_mgr._merge_category_vals(category_val_new, category_val_storage, \n                                                          keep_original_items=False, category_name=category_name)\n            # Verify audit was called with masked password for deprecated item\n            patch_audit.assert_called_once()\n            audit_call_args = patch_audit.call_args[0]\n            assert 'CONCH' == audit_call_args[0]\n            audit_data = audit_call_args[1]\n            assert category_name == audit_data['category']\n            assert 'oldPassword' == audit_data['item']\n            assert '****' == audit_data['oldValue']  # Password should be masked\n            assert 'deprecated' == audit_data['newValue']\n            # Verify deprecated item is removed from result\n            assert 'oldPassword' not in result\n\n    async def test_update_configuration_item_bulk_with_password_masking(self, reset_singleton):\n        \"\"\"Test that password values are masked in bulk update audit trail\"\"\"\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        category_name = 'test_bulk_auth'\n        config_item_list = {\n            'password': 'newsecret123',\n            'username': 'newuser'\n        }\n        cat_info = {\n            'password': {'type': 'password', 'description': 'Password', 'value': 'oldsecret', 'default': 'oldsecret'},\n            'username': {'type': 'string', 'description': 'Username', 'value': 'olduser', 'default': 'olduser'}\n        }\n        _rv = await self.async_mock({'response': 'updated', 'rows_affected': 1})\n        _rv2 = await self.async_mock(cat_info)\n        _rv3 = await self.async_mock(None)\n        with patch.object(ConfigurationManager, 'get_category_all_items', return_value=_rv2) as patch_get_all_items:\n            with 
patch.object(ConfigurationManager, '_storage') as patch_storage:\n                patch_storage.update_tbl.return_value = _rv\n                with patch.object(ConfigurationManager, '_read_category_val', return_value=cat_info) as patch_read_val:\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=_rv3) as patch_audit:\n                            with patch.object(ConfigurationManager, '_run_callbacks', return_value=_rv3):\n                                await c_mgr.update_configuration_item_bulk(category_name, config_item_list)\n                        # Verify audit was called with masked password values\n                        patch_audit.assert_called_once()\n                        audit_call_args = patch_audit.call_args[0]\n                        assert 'CONCH' == audit_call_args[0]\n                        audit_data = audit_call_args[1]\n                        assert category_name == audit_data['category']\n                        # Verify password values are masked in audit trail\n                        assert '****' == audit_data['items']['password']['oldValue']\n                        assert '****' == audit_data['items']['password']['newValue']\n                        # Verify non-password values are not masked\n                        assert 'olduser' == audit_data['items']['username']['oldValue']\n                        assert 'newuser' == audit_data['items']['username']['newValue']\n                patch_read_val.assert_called_once_with(category_name)\n            patch_storage.update_tbl.assert_not_called()\n        patch_get_all_items.assert_called_once_with(category_name)\n\n    @pytest.mark.parametrize(\"category_name, should_raise\", [\n        (\"valid_category\", False),\n        (\"valid-category\", False),\n        (\"valid_category_123\", False),\n        (\"invalid\\\\category\", True),\n        (\"invalid/category\", 
False),  # Forward slash is allowed\n        (\"\", True),\n        (None, True),\n        (\"category with spaces\", False),  # Spaces are allowed\n        (\"category.with.dots\", False),  # Dots are allowed\n    ])\n    async def test_create_category_identifier_validation(self, reset_singleton, category_name, should_raise):\n        \"\"\"Test that category names are validated for invalid characters\"\"\"\n        # GIVEN\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n\n        test_config = {\n            \"test_item\": {\n                \"description\": \"test description\",\n                \"type\": \"string\",\n                \"default\": \"test default\"\n            }\n        }\n\n        # Mock the storage operations\n        with patch.object(c_mgr._storage, 'insert_into_tbl', return_value={'response': 'OK'}):\n            with patch.object(AuditLogger, '__init__', return_value=None):\n                with patch.object(AuditLogger, 'information', return_value=None):\n                    # WHEN/THEN\n                    if should_raise:\n                        if category_name is None:\n                            with pytest.raises(TypeError) as excinfo:\n                                await c_mgr.create_category(category_name, test_config, \"test description\")\n                            assert \"category_name must be a string\" in str(excinfo.value)\n                        else:\n                            with pytest.raises(ValueError) as excinfo:\n                                await c_mgr.create_category(category_name, test_config, \"test description\")\n                            assert \"Invalid character\" in str(excinfo.value)\n                    else:\n                        # Should not raise an exception\n                        result = await c_mgr.create_category(category_name, test_config, \"test description\")\n                        assert 
result is None\n"
  },
  {
    "path": "tests/unit/python/fledge/common/test_configuration_validation_helpers.py",
    "content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nUnit tests for validation helper methods in ConfigurationManager.\n\nThis test file specifically tests the helper methods extracted from _validate_list_type\nand other validation helper methods:\n- _validate_optional_string_attribute\n- _validate_permissions_entry\n- _validate_enumeration_type\n- _validate_bucket_type\n- _validate_list_items_object\n- _validate_list_items_enumeration\n- _validate_list_default_values\n- _validate_enumeration_default_values\n- _validate_items_entry\n\"\"\"\n\nimport pytest\nfrom unittest.mock import MagicMock\nfrom fledge.common.configuration_manager import ConfigurationManager, ConfigurationManagerSingleton\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\n\n__author__ = \"Devki Nandan Ghildiyal\"\n__copyright__ = \"Copyright (c) 2025 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nCAT_NAME = 'test_category'\nITEM_NAME = \"test_item\"\n\n\nclass TestConfigurationManagerRefactoredHelpers:\n    \"\"\"Test suite for refactored helper methods extracted from _validate_list_type.\"\"\"\n\n    @pytest.fixture()\n    def reset_singleton(self):\n        \"\"\"Reset singleton state before and after each test.\"\"\"\n        ConfigurationManagerSingleton._shared_state = {}\n        yield\n        ConfigurationManagerSingleton._shared_state = {}\n\n    @pytest.fixture()\n    def config_mgr(self, reset_singleton):\n        \"\"\"Create a ConfigurationManager instance for testing.\"\"\"\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        return ConfigurationManager(storage_client_mock)\n\n    # ==================== Tests for _validate_optional_string_attribute ====================\n\n    @pytest.mark.parametrize(\"attr_name,input_value,expected\", [\n        (\"displayName\", \"Valid Display Name\", \"Valid Display Name\"),\n        (\"listName\", \"ValidListName\", \"ValidListName\"),\n        (\"keyName\", 
\"  Display Name  \", \"Display Name\"),  # whitespace trim\n        (\"keyDescription\", \"  Test  \", \"Test\"),\n        (\"rule\", \"value > 0\", \"value > 0\"),\n        (\"validity\", \"^[a-z]+$\", \"^[a-z]+$\"),\n    ])\n    def test_validate_optional_string_attribute_valid(self, config_mgr, attr_name, input_value, expected):\n        \"\"\"Test _validate_optional_string_attribute with valid inputs.\"\"\"\n        result = config_mgr._validate_optional_string_attribute(\n            CAT_NAME, attr_name, input_value, ITEM_NAME\n        )\n        assert result == expected\n\n    @pytest.mark.parametrize(\"attr_name,invalid_value,error_type,error_msg\", [\n        (\"displayName\", 123, TypeError, \"displayName type must be a string\"),\n        (\"listName\", True, TypeError, \"listName type must be a string\"),\n        (\"keyName\", [], TypeError, \"keyName type must be a string\"),\n        (\"displayName\", \"\", ValueError, \"displayName cannot be empty\"),\n        (\"listName\", \"   \", ValueError, \"listName cannot be empty\"),\n        (\"keyDescription\", \"\\t\\n\", ValueError, \"keyDescription cannot be empty\"),\n    ])\n    def test_validate_optional_string_attribute_invalid(self, config_mgr, attr_name, invalid_value, error_type, error_msg):\n        \"\"\"Test _validate_optional_string_attribute with invalid inputs.\"\"\"\n        with pytest.raises(error_type) as excinfo:\n            config_mgr._validate_optional_string_attribute(\n                CAT_NAME, attr_name, invalid_value, ITEM_NAME\n            )\n        assert error_msg in str(excinfo.value)\n\n    # ==================== Tests for _validate_permissions_entry ====================\n\n    @pytest.mark.parametrize(\"permissions\", [\n        [\"admin\"],\n        [\"admin\", \"user\"],\n        [\"admin\", \"user\", \"editor\", \"viewer\"],\n    ])\n    def test_validate_permissions_entry_valid(self, config_mgr, permissions):\n        \"\"\"Test _validate_permissions_entry with 
valid permission lists.\"\"\"\n        # Should not raise any exception\n        config_mgr._validate_permissions_entry(CAT_NAME, 'permissions', ITEM_NAME, permissions)\n\n    @pytest.mark.parametrize(\"invalid_permissions,error_msg\", [\n        (\"admin\", \"permissions entry value must be a list\"),\n        (123, \"permissions entry value must be a list\"),\n        ([], \"permissions entry value must not be empty\"),\n        ([\"admin\", 123], \"permissions entry values must be a string and non-empty\"),\n        ([\"admin\", \"\"], \"permissions entry values must be a string and non-empty\"),\n        ([\"admin\", None], \"permissions entry values must be a string and non-empty\"),\n        ([\"\", \"user\"], \"permissions entry values must be a string and non-empty\"),\n    ])\n    def test_validate_permissions_entry_invalid(self, config_mgr, invalid_permissions, error_msg):\n        \"\"\"Test _validate_permissions_entry with invalid inputs.\"\"\"\n        with pytest.raises(ValueError) as excinfo:\n            config_mgr._validate_permissions_entry(CAT_NAME, 'permissions', ITEM_NAME, invalid_permissions)\n        assert error_msg in str(excinfo.value)\n\n    # ==================== Tests for _validate_enumeration_type ====================\n\n    def test_validate_enumeration_type_valid_options(self, config_mgr):\n        \"\"\"Test _validate_enumeration_type with valid options.\"\"\"\n        item_val = {\n            \"type\": \"enumeration\",\n            \"options\": [\"option1\", \"option2\", \"option3\"],\n            \"default\": \"option1\"\n        }\n        def get_entry_val(key):\n            return item_val.get(key)\n        \n        updates = config_mgr._validate_enumeration_type(\n            CAT_NAME, ITEM_NAME, item_val, \"options\", item_val[\"options\"], get_entry_val\n        )\n        assert \"options\" in updates\n        assert updates[\"options\"] == [\"option1\", \"option2\", \"option3\"]\n\n    def 
test_validate_enumeration_type_with_permissions(self, config_mgr):\n        \"\"\"Test _validate_enumeration_type with permissions.\"\"\"\n        item_val = {\n            \"type\": \"enumeration\",\n            \"options\": [\"opt1\", \"opt2\"],\n            \"default\": \"opt1\",\n            \"permissions\": [\"admin\"]\n        }\n        def get_entry_val(key):\n            return item_val.get(key)\n        \n        # Should not raise any exception\n        config_mgr._validate_enumeration_type(\n            CAT_NAME, ITEM_NAME, item_val, \"permissions\", [\"admin\"], get_entry_val\n        )\n\n    @pytest.mark.parametrize(\"item_val,entry_name,entry_val,error_type,error_msg\", [\n        # Missing options\n        ({\"type\": \"enumeration\", \"default\": \"opt1\"}, \"default\", \"opt1\", KeyError, \"options required for enumeration type\"),\n        # Options not list\n        ({\"type\": \"enumeration\", \"options\": \"not_list\", \"default\": \"opt1\"}, \"options\", \"not_list\", TypeError, \"entry value must be a list\"),\n        # Empty options\n        ({\"type\": \"enumeration\", \"options\": [], \"default\": \"opt1\"}, \"options\", [], ValueError, \"entry value cannot be empty list\"),\n        # Default not in options\n        ({\"type\": \"enumeration\", \"options\": [\"opt1\", \"opt2\"], \"default\": \"invalid\"}, \"options\", [\"opt1\", \"opt2\"], ValueError, \"entry value does not exist in options list\"),\n        # Non-string entry value\n        ({\"type\": \"enumeration\", \"options\": [\"opt1\"], \"default\": \"opt1\"}, \"default\", 123, TypeError, \"entry value must be a string\"),\n    ])\n    def test_validate_enumeration_type_invalid(self, config_mgr, item_val, entry_name, entry_val, error_type, error_msg):\n        \"\"\"Test _validate_enumeration_type with invalid inputs.\"\"\"\n        def get_entry_val(key):\n            return item_val.get(key)\n        \n        with pytest.raises(error_type) as excinfo:\n            
config_mgr._validate_enumeration_type(\n                CAT_NAME, ITEM_NAME, item_val, entry_name, entry_val, get_entry_val\n            )\n        assert error_msg in str(excinfo.value)\n\n    # ==================== Tests for _validate_bucket_type ====================\n\n    def test_validate_bucket_type_valid_properties(self, config_mgr):\n        \"\"\"Test _validate_bucket_type with valid properties.\"\"\"\n        item_val = {\n            \"type\": \"bucket\",\n            \"properties\": {\"key\": \"bucketName\", \"description\": \"Test bucket\"},\n            \"default\": \"\"\n        }\n        def get_entry_val(key):\n            return item_val.get(key)\n        \n        updates = config_mgr._validate_bucket_type(\n            CAT_NAME, ITEM_NAME, item_val, \"properties\", item_val[\"properties\"], get_entry_val\n        )\n        assert \"properties\" in updates\n\n    def test_validate_bucket_type_with_permissions(self, config_mgr):\n        \"\"\"Test _validate_bucket_type with valid permissions.\"\"\"\n        item_val = {\n            \"type\": \"bucket\",\n            \"properties\": {\"key\": \"bucketName\"},\n            \"permissions\": [\"admin\", \"user\"],\n            \"default\": \"\"\n        }\n        def get_entry_val(key):\n            return item_val.get(key)\n        \n        # Should not raise any exception\n        config_mgr._validate_bucket_type(\n            CAT_NAME, ITEM_NAME, item_val, \"properties\", item_val[\"properties\"], get_entry_val\n        )\n\n    @pytest.mark.parametrize(\"item_val,entry_name,entry_val,error_type,error_msg\", [\n        # Missing properties\n        ({\"type\": \"bucket\", \"default\": \"\"}, \"default\", \"\", KeyError, \"properties KV pair must be required\"),\n        # Properties not dict\n        ({\"type\": \"bucket\", \"properties\": \"not_dict\", \"default\": \"\"}, \"properties\", \"not_dict\", ValueError, \"properties must be JSON object\"),\n        # Empty properties\n        
({\"type\": \"bucket\", \"properties\": {}, \"default\": \"\"}, \"properties\", {}, ValueError, \"properties JSON object cannot be empty\"),\n        # Missing key in properties\n        ({\"type\": \"bucket\", \"properties\": {\"desc\": \"test\"}, \"default\": \"\"}, \"properties\", {\"desc\": \"test\"}, ValueError, \"key KV pair must exist in properties\"),\n        # Non-string entry value\n        ({\"type\": \"bucket\", \"properties\": {\"key\": \"test\"}, \"default\": \"\"}, \"default\", 123, TypeError, \"entry value must be a string\"),\n        # Invalid permissions\n        ({\"type\": \"bucket\", \"properties\": {\"key\": \"test\"}, \"permissions\": \"not_list\", \"default\": \"\"}, \"default\", \"\", ValueError, \"permissions entry value must be a list\"),\n    ])\n    def test_validate_bucket_type_invalid(self, config_mgr, item_val, entry_name, entry_val, error_type, error_msg):\n        \"\"\"Test _validate_bucket_type with invalid inputs.\"\"\"\n        def get_entry_val(key):\n            return item_val.get(key)\n        \n        with pytest.raises(error_type) as excinfo:\n            config_mgr._validate_bucket_type(\n                CAT_NAME, ITEM_NAME, item_val, entry_name, entry_val, get_entry_val\n            )\n        assert error_msg in str(excinfo.value)\n\n    # ==================== Tests for _validate_list_items_object ====================\n\n    def test_validate_list_items_object_valid(self, config_mgr):\n        \"\"\"Test _validate_list_items_object with valid properties structure.\"\"\"\n        prop_val = {\n            \"width\": {\"description\": \"Width\", \"type\": \"integer\", \"default\": \"100\"},\n            \"height\": {\"description\": \"Height\", \"type\": \"integer\", \"default\": \"200\"}\n        }\n        # Should not raise any exception\n        config_mgr._validate_list_items_object(CAT_NAME, ITEM_NAME, prop_val)\n\n    @pytest.mark.parametrize(\"prop_val,error_type,error_msg\", [\n        # Not dict\n        
(\"not_a_dict\", ValueError, \"properties must be JSON object\"),\n        # Empty dict\n        ({}, ValueError, \"properties JSON object cannot be empty\"),\n        # Property not dict\n        ({\"width\": \"string\"}, TypeError, \"Properties must be a JSON object\"),\n        # Empty property\n        ({\"width\": {}}, ValueError, \"properties cannot be empty\"),\n        # Missing type key\n        ({\"width\": {\"description\": \"W\", \"default\": \"100\"}}, ValueError, \"must have type, description, default keys\"),\n        # Missing description key\n        ({\"width\": {\"type\": \"integer\", \"default\": \"100\"}}, ValueError, \"must have type, description, default keys\"),\n        # Missing default key\n        ({\"width\": {\"type\": \"integer\", \"description\": \"W\"}}, ValueError, \"must have type, description, default keys\"),\n    ])\n    def test_validate_list_items_object_invalid(self, config_mgr, prop_val, error_type, error_msg):\n        \"\"\"Test _validate_list_items_object with invalid inputs.\"\"\"\n        with pytest.raises(error_type) as excinfo:\n            config_mgr._validate_list_items_object(CAT_NAME, ITEM_NAME, prop_val)\n        assert error_msg in str(excinfo.value)\n\n    # ==================== Tests for _validate_list_items_enumeration ====================\n\n    def test_validate_list_items_enumeration_valid(self, config_mgr):\n        \"\"\"Test _validate_list_items_enumeration with valid options.\"\"\"\n        item_val = {\"type\": \"list\", \"items\": \"enumeration\", \"options\": [\"opt1\", \"opt2\", \"opt3\"]}\n        # Should not raise any exception\n        config_mgr._validate_list_items_enumeration(CAT_NAME, ITEM_NAME, item_val, \"items\")\n\n    @pytest.mark.parametrize(\"item_val,error_type,error_msg\", [\n        # Missing options\n        ({\"type\": \"list\", \"items\": \"enumeration\"}, KeyError, \"options required\"),\n        # Options not list\n        ({\"type\": \"list\", \"items\": \"enumeration\", 
\"options\": \"not_list\"}, TypeError, \"entry value must be a list\"),\n        # Empty options\n        ({\"type\": \"list\", \"items\": \"enumeration\", \"options\": []}, ValueError, \"options cannot be empty list\"),\n    ])\n    def test_validate_list_items_enumeration_invalid(self, config_mgr, item_val, error_type, error_msg):\n        \"\"\"Test _validate_list_items_enumeration with invalid inputs.\"\"\"\n        with pytest.raises(error_type) as excinfo:\n            config_mgr._validate_list_items_enumeration(CAT_NAME, ITEM_NAME, item_val, \"items\")\n        assert error_msg in str(excinfo.value)\n\n    # ==================== Tests for _validate_list_default_values ====================\n\n    @pytest.mark.parametrize(\"item_type,entry_val,default_val,list_size\", [\n        # List of strings\n        (\"list\", \"string\", '[\"value1\", \"value2\", \"value3\"]', -1),\n        # List of integers\n        (\"list\", \"integer\", '[\"1\", \"2\", \"3\"]', -1),\n        # List of floats\n        (\"list\", \"float\", '[\"1.5\", \"2.7\", \"3.14\"]', -1),\n        # KVList of strings\n        (\"kvlist\", \"string\", '{\"key1\": \"value1\", \"key2\": \"value2\"}', -1),\n        # List with size limit\n        (\"list\", \"string\", '[\"a\", \"b\"]', 3),\n        # KVList with size limit\n        (\"kvlist\", \"integer\", '{\"k1\": \"1\", \"k2\": \"2\"}', 5),\n    ])\n    def test_validate_list_default_values_valid(self, config_mgr, item_type, entry_val, default_val, list_size):\n        \"\"\"Test _validate_list_default_values with valid inputs.\"\"\"\n        item_val = {\"type\": item_type}\n        # Should not raise any exception\n        config_mgr._validate_list_default_values(CAT_NAME, ITEM_NAME, item_val, entry_val, default_val, list_size)\n\n    @pytest.mark.parametrize(\"item_type,entry_val,default_val,list_size,error_type,error_msg\", [\n        # List with duplicates\n        (\"list\", \"string\", '[\"val1\", \"val2\", \"val1\"]', -1, ValueError, 
\"elements are not unique\"),\n        # KVList with duplicate keys\n        (\"kvlist\", \"string\", '{\"key1\": \"v1\", \"key1\": \"v2\"}', -1, ValueError, \"duplicate KV pair found\"),\n        # Exceeds list size\n        (\"list\", \"string\", '[\"1\", \"2\", \"3\", \"4\", \"5\"]', 3, ValueError, \"list size limit to 3\"),\n        # Invalid format\n        (\"list\", \"string\", \"not_a_valid_list\", -1, TypeError, \"should be passed array list in string format\"),\n        # Type mismatch integer\n        (\"list\", \"integer\", '[\"1\", \"2\", \"not_int\"]', -1, ValueError, \"all elements should be of same\"),\n        # Type mismatch float\n        (\"list\", \"float\", '[\"1.5\", \"not_float\"]', -1, ValueError, \"all elements should be of same\"),\n        # KVList not dict\n        (\"kvlist\", \"string\", '[\"not\", \"dict\"]', -1, TypeError, \"KV pair invalid in default value\"),\n    ])\n    def test_validate_list_default_values_invalid(self, config_mgr, item_type, entry_val, default_val, list_size, error_type, error_msg):\n        \"\"\"Test _validate_list_default_values with invalid inputs.\"\"\"\n        item_val = {\"type\": item_type}\n        with pytest.raises(error_type) as excinfo:\n            config_mgr._validate_list_default_values(CAT_NAME, ITEM_NAME, item_val, entry_val, default_val, list_size)\n        assert error_msg in str(excinfo.value)\n\n    # ==================== Tests for _validate_enumeration_default_values ====================\n\n    @pytest.mark.parametrize(\"item_type,options,default_val\", [\n        # List enumeration\n        (\"list\", [\"option1\", \"option2\", \"option3\"], '[\"option1\", \"option3\"]'),\n        # KVList enumeration\n        (\"kvlist\", [\"opt1\", \"opt2\", \"opt3\"], '{\"key1\": \"opt1\", \"key2\": \"opt2\"}'),\n        # Single option\n        (\"list\", [\"only_option\"], '[\"only_option\"]'),\n        # All options\n        (\"kvlist\", [\"a\", \"b\"], '{\"k1\": \"a\", \"k2\": \"b\"}'),\n    
])\n    def test_validate_enumeration_default_values_valid(self, config_mgr, item_type, options, default_val):\n        \"\"\"Test _validate_enumeration_default_values with valid inputs.\"\"\"\n        item_val = {\"type\": item_type, \"items\": \"enumeration\", \"options\": options}\n        # Should not raise any exception\n        config_mgr._validate_enumeration_default_values(CAT_NAME, ITEM_NAME, item_val, default_val)\n\n    @pytest.mark.parametrize(\"item_type,options,default_val,error_msg\", [\n        # List with invalid option\n        (\"list\", [\"option1\", \"option2\"], '[\"option1\", \"invalid\"]', \"value does not exist in options\"),\n        # KVList with invalid option\n        (\"kvlist\", [\"opt1\", \"opt2\"], '{\"key1\": \"opt1\", \"key2\": \"invalid\"}', \"value does not exist in options\"),\n    ])\n    def test_validate_enumeration_default_values_invalid(self, config_mgr, item_type, options, default_val, error_msg):\n        \"\"\"Test _validate_enumeration_default_values with invalid inputs.\"\"\"\n        item_val = {\"type\": item_type, \"items\": \"enumeration\", \"options\": options}\n        with pytest.raises(ValueError) as excinfo:\n            config_mgr._validate_enumeration_default_values(CAT_NAME, ITEM_NAME, item_val, default_val)\n        assert error_msg in str(excinfo.value)\n\n    # ==================== Tests for _validate_items_entry ====================\n\n    @pytest.mark.parametrize(\"items_type,default_val,extra_config\", [\n        # String type\n        (\"string\", '[\"test1\", \"test2\"]', {}),\n        # Integer type\n        (\"integer\", '[\"1\", \"2\", \"3\"]', {}),\n        # Float type\n        (\"float\", '[\"1.5\", \"2.7\"]', {}),\n        # Object type\n        (\"object\", \"[]\", {\"properties\": {\"width\": {\"description\": \"W\", \"type\": \"integer\", \"default\": \"100\"}}}),\n        # Enumeration type\n        (\"enumeration\", '[\"opt1\"]', {\"options\": [\"opt1\", \"opt2\"]}),\n        # With 
listSize\n        (\"string\", '[\"a\", \"b\"]', {\"listSize\": \"2\"}),\n    ])\n    def test_validate_items_entry_valid(self, config_mgr, items_type, default_val, extra_config):\n        \"\"\"Test _validate_items_entry with valid inputs.\"\"\"\n        item_val = {\"type\": \"list\", \"items\": items_type, **extra_config}\n        def get_entry_val(key):\n            if key == \"default\":\n                return default_val\n            if key == \"properties\" and \"properties\" in item_val:\n                return item_val[\"properties\"]\n            return None\n        \n        # Should not raise any exception\n        config_mgr._validate_items_entry(CAT_NAME, ITEM_NAME, item_val, \"items\", items_type, get_entry_val)\n\n    @pytest.mark.parametrize(\"item_val,entry_val,error_type,error_msg\", [\n        # Invalid type\n        ({\"type\": \"list\", \"items\": \"invalid\"}, \"invalid\", ValueError, \"items value should either be in string, float, integer, object or enumeration\"),\n        # Object missing properties\n        ({\"type\": \"list\", \"items\": \"object\"}, \"object\", KeyError, \"properties KV pair must be required\"),\n        # listSize invalid type\n        ({\"type\": \"list\", \"items\": \"string\", \"listSize\": 2}, \"string\", TypeError, \"listSize type must be a string\"),\n        # listSize not integer\n        ({\"type\": \"list\", \"items\": \"string\", \"listSize\": \"not_num\"}, \"string\", ValueError, \"listSize value must be an integer value\"),\n    ])\n    def test_validate_items_entry_invalid(self, config_mgr, item_val, entry_val, error_type, error_msg):\n        \"\"\"Test _validate_items_entry with invalid inputs.\"\"\"\n        def get_entry_val(key):\n            return \"[]\"\n        \n        with pytest.raises(error_type) as excinfo:\n            config_mgr._validate_items_entry(CAT_NAME, ITEM_NAME, item_val, \"items\", entry_val, get_entry_val)\n        assert error_msg in str(excinfo.value)\n\n    # 
==================== Integration Tests ====================\n\n    @pytest.mark.parametrize(\"item_config,entry_val,default_val\", [\n        # Complete list workflow\n        ({\"type\": \"list\", \"items\": \"integer\", \"listSize\": \"5\"}, \"integer\", '[\"1\", \"2\", \"3\"]'),\n        # Complete kvlist workflow\n        ({\"type\": \"kvlist\", \"items\": \"string\", \"listSize\": \"3\"}, \"string\", '{\"key1\": \"val1\", \"key2\": \"val2\"}'),\n        # Object with complex properties\n        (\n            {\n                \"type\": \"list\",\n                \"items\": \"object\",\n                \"properties\": {\n                    \"name\": {\"description\": \"Name\", \"type\": \"string\", \"default\": \"\"},\n                    \"age\": {\"description\": \"Age\", \"type\": \"integer\", \"default\": \"0\"},\n                    \"score\": {\"description\": \"Score\", \"type\": \"float\", \"default\": \"0.0\"}\n                }\n            },\n            \"object\",\n            \"[]\"\n        ),\n    ])\n    def test_validate_items_entry_integration(self, config_mgr, item_config, entry_val, default_val):\n        \"\"\"Integration tests for complete validation workflows.\"\"\"\n        item_val = item_config.copy()\n        def get_entry_val(key):\n            if key == \"default\":\n                return default_val\n            if key == \"properties\" and \"properties\" in item_val:\n                return item_val[\"properties\"]\n            return None\n        \n        # Should validate successfully\n        config_mgr._validate_items_entry(CAT_NAME, ITEM_NAME, item_val, \"items\", entry_val, get_entry_val)\n\n    def test_validate_enumeration_default_values_integration(self, config_mgr):\n        \"\"\"Integration test for enumeration with all options.\"\"\"\n        item_val = {\n            \"type\": \"list\",\n            \"items\": \"enumeration\",\n            \"options\": [\"red\", \"green\", \"blue\", \"yellow\"]\n        }\n   
     default_val = '[\"red\", \"blue\", \"yellow\"]'\n        \n        # Should validate successfully\n        config_mgr._validate_enumeration_default_values(CAT_NAME, ITEM_NAME, item_val, default_val)\n"
  },
  {
    "path": "tests/unit/python/fledge/common/test_logger.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport pytest\nimport logging\n\nfrom fledge.common import logger\n\n\"\"\" Test python/fledge/common/logger.py\"\"\"\n\n__author__ = \"Ori Shadmon\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestLogger:\n    \"\"\" Logger Tests\n\n        :assert:\n          - Test the type that gets returned\n          - Test that ValueError gets returned for invalid destination\n          - Test the different logging levels\n\n        :todo:\n          - Test handler SysLogHandler or StreamHandler\n          - Test Format of log entry\n          - Test propagate is set to False\n    \"\"\"\n    def test_logger_instance(self):\n        \"\"\" Test the logger type being returned at setup\n\n        :assert:\n           Assert that setup returns instance of type logger.Logger\n           Assert instance name\n           Assert instance hasHandler\n           Assert instance default log level WARNING\n        \"\"\"\n        instance = logger.setup(__name__)\n        assert isinstance(instance, logging.Logger)\n        assert \"test_logger\" == instance.name\n        assert instance.hasHandlers()\n        assert logging.WARNING == instance.getEffectiveLevel()\n\n    def test_destination_console(self):\n        \"\"\" Test the logger type being returned when destination=1\n\n        :assert:\n            Assert that the setup returns instance of type logging.Logger\n        \"\"\"\n        instance = logger.setup(__name__, destination=1)\n        assert isinstance(instance, logging.Logger) is True\n\n    def test_logger_destination_error(self):\n        \"\"\" Test Error gets returned when destination isn't 0 or 1\n\n        :assert:\n            Assert ValueError is returned when destination=2\n        \"\"\" \n        with pytest.raises(ValueError) as error_exec:\n            
logger.setup(__name__, destination=2)\n        assert error_exec.type is ValueError\n        assert \"Invalid destination 2\" in str(error_exec.value)\n\n    def test_logger_level(self):\n        \"\"\" Test logger level gets updated\n\n        :assert:\n            Assert that unless i == 0, output.getEffectiveLevel() == i\n        \"\"\"\n        for i in range(0, 60, 10):\n            output = logger.setup(__name__, level=i)\n            if i == 0:\n                # Level NOTSET (0) so the logger inherits level WARNING (30)\n                assert logging.WARNING == output.getEffectiveLevel()\n            else:\n                assert i == output.getEffectiveLevel()\n\n    def test_compare_setup(self):\n        \"\"\"\n        Test that logger.setup() returns the same logger object as logging.getLogger() for\n          level - 10 to 50\n          propagate: True or False\n        :assert:\n            Assert logging.getLogger() and logger.setup() return the same logger instance\n        \"\"\"\n        for name in (__name__, 'aaa'):\n            log = logging.getLogger(name)\n            for level in range(10, 60, 10):\n                for propagate in (True, False):\n                    log.setLevel(level)\n                    log.propagate = propagate\n                    assert log is logger.setup(name, propagate=propagate, level=level)\n"
  },
  {
    "path": "tests/unit/python/fledge/common/test_plugin_discovery.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport asyncio\nimport os\nimport copy\nfrom unittest.mock import patch\nimport pytest\n\nfrom fledge.common.plugin_discovery import PluginDiscovery, _logger\nfrom fledge.services.core.api import utils\nfrom fledge.services.core.api.plugins import common\nfrom fledge.plugins.common import utils as api_utils\n\n\n__author__ = \"Amarendra K Sinha, Ashish Jabble \"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestPluginDiscovery:\n    mock_north_folders = [\"OMF\", \"fledge-north\"]\n    mock_south_folders = [\"modbus\", \"http\"]\n    mock_c_north_folders = [(\"ocs\", \"binary\")]\n    mock_c_south_folders = [(\"dummy\", \"binary\")]\n    mock_py_filter_folders = [(\"foo\", \"bar\")]\n    mock_c_filter_folders = [(\"scale\", \"binary\")]\n    mock_c_notify_folders = [(\"email\", \"binary\")]\n    mock_py_notify_folders = [(\"notify1\", \"notify2\")]\n    mock_c_rule_folders = [(\"OverMaxRule\", \"binary\")]\n    mock_py_rule_folders = [(\"bad_bearing\", \"engine_failure\")]\n    mock_all_folders = [\"OMF\", \"fledge-north\", \"modbus\", \"http\"]\n    mock_filter_config = [\n        {\"name\": \"foo\",\n         \"version\": \"1.0.0\",\n         \"type\": \"filter\",\n         \"description\": \"Filter foo plugin\",\n         \"config\": {\"plugin\": {\"default\": \"foo\", \"type\": \"string\", \"description\": \"Foo filter plugin\"}}\n         },\n        {\"name\": \"bar\",\n         \"version\": \"1.0.0\",\n         \"type\": \"filter\",\n         \"description\": \"Filter bar plugin\",\n         \"config\": {\"plugin\": {\"default\": \"bar\", \"type\": \"string\", \"description\": \"Bar filter plugin\"}}\n         }\n    ]\n    mock_plugins_config = [\n        {\n            \"name\": \"OMF\",\n            \"type\": \"north\",\n            \"description\": \"OMF to PI 
connector relay\",\n            \"version\": \"1.2\"\n        },\n        {\n            \"name\": \"fledge-north\",\n            \"type\": \"north\",\n            \"description\": \"Northbound Fledge aggregator\",\n            \"version\": \"1.0\"\n        },\n        {\n            \"name\": \"modbus\",\n            \"type\": \"south\",\n            \"description\": \"Modbus RTU plugin\",\n            \"version\": \"1.1\"\n        },\n        {\n            \"name\": \"http\",\n            \"type\": \"south\",\n            \"description\": \"HTTP request plugin\",\n            \"version\": \"1.4\"\n        }\n    ]\n    mock_plugins_north_config = [\n        {\n            \"name\": \"OMF\",\n            \"type\": \"north\",\n            \"description\": \"OMF to PI connector relay\",\n            \"version\": \"1.2\"\n        },\n        {\n            \"name\": \"fledge-north\",\n            \"type\": \"north\",\n            \"description\": \"Northbound Fledge aggregator\",\n            \"version\": \"1.0\"\n        }\n    ]\n    mock_c_plugins_north_config = [\n        {\"interface\": \"1.0.0\",\n         \"name\": \"OCS\",\n         \"version\": \"1.0.0\",\n         \"config\": {\n             \"plugin\": {\n                 \"default\": \"ocs\",\n                 \"type\": \"string\",\n                 \"description\": \"OCS North C Plugin\"\n             }\n         }\n         }\n    ]\n    mock_plugins_south_config = [\n        {\n            \"name\": \"modbus\",\n            \"type\": \"south\",\n            \"description\": \"Modbus RTU plugin\",\n            \"version\": \"1.1\"\n        },\n        {\n            \"name\": \"http\",\n            \"type\": \"south\",\n            \"description\": \"HTTP request plugin\",\n            \"version\": \"1.4\"\n        }\n    ]\n\n    mock_c_plugins_south_config = [\n        {\"interface\": \"1.0.0\",\n         \"version\": \"1.0.0\",\n         \"type\": \"south\",\n         \"name\": \"Dummy\",\n         
\"config\": {\"plugin\": {\"type\": \"string\", \"description\": \"Dummy C south plugin\", \"default\": \"dummy\"}}\n         }\n    ]\n\n    mock_c_filter_config = [\n        {\"name\": \"scale\",\n         \"version\": \"1.0.0\",\n         \"type\": \"filter\",\n         \"description\": \"Filter Scale plugin\",\n         \"config\": {\"plugin\": {\"default\": \"scale\", \"type\": \"string\", \"description\": \"Scale filter plugin\"}}}\n    ]\n\n    mock_c_notify_config = [\n        {\"name\": \"email\",\n         \"version\": \"1.0.0\",\n         \"type\": \"notificationDelivery\",\n         \"description\": \"Email notification plugin\",\n         \"config\": {\"plugin\": {\"type\": \"string\", \"description\": \"Email notification plugin\", \"default\": \"email\"}}}\n    ]\n\n    mock_py_notify_config = [\n        {\n          \"version\": \"1.7.0\",\n          \"description\": \"notify1 delivery plugin\",\n          \"type\": \"notificationDelivery\",\n          \"name\": \"notify1\"\n        },\n        {\n            \"version\": \"1.7.0\",\n            \"description\": \"notify2 delivery plugin\",\n            \"type\": \"notificationDelivery\",\n            \"name\": \"notify2\"\n        }\n    ]\n\n    mock_c_rule_config = [\n        {\"name\": \"OverMaxRule\",\n         \"version\": \"1.0.0\",\n         \"type\": \"notificationRule\",\n         \"description\": \"The OverMaxRule notification rule\",\n         \"config\": {\"plugin\": {\"type\": \"string\", \"description\": \"The OverMaxRule notification rule plugin\", \"default\": \"OverMaxRule\"}}}\n    ]\n\n    mock_py_rule_config = [\n        {\n          \"version\": \"1.7.0\",\n          \"description\": \"Notification rule plugin to detect bad bearing\",\n          \"type\": \"notificationRule\",\n          \"name\": \"bad_bearing\"\n        },\n        {\n          \"version\": \"1.6.2\",\n          \"description\": \"Notification rule plugin which detects imminent engine failure\",\n          
\"type\": \"notificationRule\",\n          \"name\": \"engine_failure\"\n        }\n    ]\n\n    mock_plugins_config += mock_filter_config + mock_py_notify_config + mock_py_rule_config\n\n    mock_c_plugins_config = [\n        {\"interface\": \"1.0.0\",\n         \"version\": \"1.0.0\",\n         \"type\": \"south\",\n         \"name\": \"Dummy\",\n         \"config\": {\"plugin\": {\"type\": \"string\", \"description\": \"Dummy C south plugin\", \"default\": \"dummy\"}}\n         },\n        {\"interface\": \"1.0.0\",\n         \"name\": \"OCS\",\n         \"version\": \"1.0.0\",\n         \"config\": {\n             \"plugin\": {\n                 \"default\": \"ocs\",\n                 \"type\": \"string\",\n                 \"description\": \"OMF North C Plugin\"\n             }\n         }\n         },\n        {\"name\": \"scale\", \"version\": \"1.0.0\", \"type\": \"filter\", \"description\": \"Filter Scale plugin\",\n         \"config\": {\n             \"plugin\": {\n                 \"default\": \"scale\",\n                 \"type\": \"string\",\n                 \"description\": \"Scale filter plugin\"}}},\n        {\"name\": \"email\", \"type\": \"notificationDelivery\", \"version\": \"1.0.0\", \"description\": \"Email notification plugin\",\n         \"config\": {\"plugin\": {\n             \"type\": \"string\",\n             \"description\": \"Email notification plugin\",\n             \"default\": \"email\"}}},\n        {\"name\": \"OverMaxRule\",\n         \"version\": \"1.0.0\",\n         \"type\": \"notificationRule\",\n         \"description\": \"The OverMaxRule notification rule\",\n         \"config\": {\"plugin\": {\"type\": \"string\", \"description\": \"The OverMaxRule notification rule plugin\",\n                               \"default\": \"OverMaxRule\"}}}\n    ]\n\n    def test_get_plugins_installed_type_none(self, mocker):\n        def mock_folders():\n            yield TestPluginDiscovery.mock_north_folders\n            yield 
TestPluginDiscovery.mock_south_folders\n            yield TestPluginDiscovery.mock_py_filter_folders\n            yield TestPluginDiscovery.mock_py_notify_folders\n            yield TestPluginDiscovery.mock_py_rule_folders\n\n        def mock_c_folders():\n            yield TestPluginDiscovery.mock_c_north_folders\n            yield TestPluginDiscovery.mock_c_south_folders\n            yield TestPluginDiscovery.mock_c_filter_folders\n            yield TestPluginDiscovery.mock_c_notify_folders\n            yield TestPluginDiscovery.mock_c_rule_folders\n\n        mock_get_folders = mocker.patch.object(PluginDiscovery, \"get_plugin_folders\", return_value=next(mock_folders()))\n        mock_get_c_folders = mocker.patch.object(utils, \"find_c_plugin_libs\", return_value=next(mock_c_folders()))\n        mock_get_plugin_config = mocker.patch.object(PluginDiscovery, \"get_plugin_config\",\n                                                     side_effect=TestPluginDiscovery.mock_plugins_config)\n        mock_get_c_plugin_config = mocker.patch.object(utils, \"get_plugin_info\",\n                                                       side_effect=TestPluginDiscovery.mock_c_plugins_config)\n\n        plugins = PluginDiscovery.get_plugins_installed()\n        # Deep copy so extend() does not mutate the shared class-level mock list\n        expected_plugin = copy.deepcopy(TestPluginDiscovery.mock_plugins_config)\n        expected_plugin.extend(TestPluginDiscovery.mock_c_plugins_config)\n        # FIXME: ordering issue\n        # assert expected_plugin == plugins\n        assert 5 == mock_get_folders.call_count\n        assert 10 == mock_get_plugin_config.call_count\n        assert 5 == mock_get_c_folders.call_count\n        assert 5 == mock_get_c_plugin_config.call_count\n\n    def test_get_plugins_installed_type_north(self, mocker):\n        def mock_folders():\n            yield TestPluginDiscovery.mock_north_folders\n\n        def mock_c_folders():\n            yield TestPluginDiscovery.mock_c_north_folders\n\n        mock_get_folders = 
mocker.patch.object(PluginDiscovery, \"get_plugin_folders\", return_value=next(mock_folders()))\n        mock_get_plugin_config = mocker.patch.object(PluginDiscovery, \"get_plugin_config\",\n                                                     side_effect=TestPluginDiscovery.mock_plugins_north_config)\n        mock_get_c_folders = mocker.patch.object(utils, \"find_c_plugin_libs\", return_value=next(mock_c_folders()))\n        mock_get_c_plugin_config = mocker.patch.object(utils, \"get_plugin_info\",\n                                                       side_effect=TestPluginDiscovery.mock_c_plugins_north_config)\n\n        plugins = PluginDiscovery.get_plugins_installed(\"north\")\n        expected_plugin = TestPluginDiscovery.mock_plugins_north_config\n        expected_plugin.extend(TestPluginDiscovery.mock_c_plugins_north_config)\n        # FIXME: ordering issue\n        # assert expected_plugin == plugins\n        assert 1 == mock_get_folders.call_count\n        assert 2 == mock_get_plugin_config.call_count\n        assert 1 == mock_get_c_folders.call_count\n        assert 1 == mock_get_c_plugin_config.call_count\n\n    def test_get_plugins_installed_type_south(self, mocker):\n        def mock_folders():\n            yield TestPluginDiscovery.mock_south_folders\n\n        def mock_c_folders():\n            yield TestPluginDiscovery.mock_c_south_folders\n\n        mock_get_folders = mocker.patch.object(PluginDiscovery, \"get_plugin_folders\", return_value=next(mock_folders()))\n        mock_get_plugin_config = mocker.patch.object(PluginDiscovery, \"get_plugin_config\",\n                                                     side_effect=TestPluginDiscovery.mock_plugins_south_config)\n        mock_get_c_folders = mocker.patch.object(utils, \"find_c_plugin_libs\", return_value=next(mock_c_folders()))\n        mock_get_c_plugin_config = mocker.patch.object(utils, \"get_plugin_info\",\n                                                       
side_effect=TestPluginDiscovery.mock_c_plugins_south_config)\n\n        plugins = PluginDiscovery.get_plugins_installed(\"south\")\n        expected_plugin = TestPluginDiscovery.mock_plugins_south_config\n        expected_plugin.extend(TestPluginDiscovery.mock_c_plugins_south_config)\n        # FIXME: ordering issue\n        # assert expected_plugin == plugins\n        assert 1 == mock_get_folders.call_count\n        assert 2 == mock_get_plugin_config.call_count\n        assert 1 == mock_get_c_folders.call_count\n        assert 1 == mock_get_c_plugin_config.call_count\n\n    def test_get_filter_plugins_installed(self, mocker):\n        def mock_c_filter_folders():\n            yield TestPluginDiscovery.mock_c_filter_folders\n\n        def mock_filter_folders():\n            yield TestPluginDiscovery.mock_py_filter_folders\n\n        mock_get_filter_folders = mocker.patch.object(PluginDiscovery, \"get_plugin_folders\", return_value=next(mock_filter_folders()))\n        mock_get_filter_config = mocker.patch.object(PluginDiscovery, \"get_plugin_config\", side_effect=TestPluginDiscovery.mock_filter_config)\n        mock_get_c_filter_folders = mocker.patch.object(utils, \"find_c_plugin_libs\", return_value=next(mock_c_filter_folders()))\n        mock_get_c_filter_plugin_config = mocker.patch.object(utils, \"get_plugin_info\", side_effect=TestPluginDiscovery.mock_c_filter_config)\n\n        plugins = PluginDiscovery.get_plugins_installed(\"filter\")\n        # expected_plugin = TestPluginDiscovery.mock_c_plugins_config[2]\n        # FIXME: ordering issue\n        # assert expected_plugin == plugins\n        assert 1 == mock_get_filter_folders.call_count\n        assert 1 == mock_get_filter_config.call_count\n        assert 1 == mock_get_c_filter_folders.call_count\n        assert 1 == mock_get_c_filter_plugin_config.call_count\n\n    def test_get_notify_plugins_installed(self, mocker):\n        def mock_folders():\n            yield 
TestPluginDiscovery.mock_py_notify_folders\n\n        def mock_c_folders():\n            yield TestPluginDiscovery.mock_c_notify_folders\n\n        mock_get_py_folders = mocker.patch.object(PluginDiscovery, \"get_plugin_folders\", return_value=next(mock_folders()))\n        mock_get_py_plugin_config = mocker.patch.object(PluginDiscovery, \"get_plugin_config\",\n                                                        side_effect=TestPluginDiscovery.mock_py_notify_config)\n        mock_get_c_folders = mocker.patch.object(utils, \"find_c_plugin_libs\",\n                                                        return_value=next(mock_c_folders()))\n        mock_get_c_plugin_config = mocker.patch.object(utils, \"get_plugin_info\",\n                                                              side_effect=TestPluginDiscovery.mock_c_notify_config)\n        plugins = PluginDiscovery.get_plugins_installed(\"notify\")\n        # expected_plugin = TestPluginDiscovery.mock_c_plugins_config[3]\n        # FIXME: ordering issue\n        # assert expected_plugin == plugins\n        assert 1 == mock_get_py_folders.call_count\n        assert 1 == mock_get_py_plugin_config.call_count\n        assert 1 == mock_get_c_folders.call_count\n        assert 1 == mock_get_c_plugin_config.call_count\n\n    def test_get_rules_plugins_installed(self, mocker):\n        def mock_folders():\n            yield TestPluginDiscovery.mock_py_rule_folders\n\n        def mock_c_folders():\n            yield TestPluginDiscovery.mock_c_rule_folders\n\n        mock_get_py_folders = mocker.patch.object(PluginDiscovery, \"get_plugin_folders\",\n                                                  return_value=next(mock_folders()))\n        mock_get_py_plugin_config = mocker.patch.object(PluginDiscovery, \"get_plugin_config\",\n                                                        side_effect=TestPluginDiscovery.mock_py_rule_config)\n        mock_get_c_folders = mocker.patch.object(utils, \"find_c_plugin_libs\", 
return_value=next(mock_c_folders()))\n        mock_get_c_plugin_config = mocker.patch.object(utils, \"get_plugin_info\",\n                                                            side_effect=TestPluginDiscovery.mock_c_rule_config)\n\n        plugins = PluginDiscovery.get_plugins_installed(\"rule\")\n        # expected_plugin = TestPluginDiscovery.mock_c_plugins_config[4]\n        # FIXME: ordering issue\n        # assert expected_plugin == plugins\n        assert 1 == mock_get_py_folders.call_count\n        assert 1 == mock_get_py_plugin_config.call_count\n        assert 1 == mock_get_c_folders.call_count\n        assert 1 == mock_get_c_plugin_config.call_count\n\n    def test_fetch_plugins_installed(self, mocker):\n        def mock_folders():\n            yield TestPluginDiscovery.mock_north_folders\n\n        mock_get_folders = mocker.patch.object(PluginDiscovery, \"get_plugin_folders\", return_value=next(mock_folders()))\n        mock_get_plugin_config = mocker.patch.object(PluginDiscovery, \"get_plugin_config\",\n                                                     side_effect=TestPluginDiscovery.mock_plugins_north_config)\n\n        plugins = PluginDiscovery.fetch_plugins_installed(\"north\", \"north\", False)\n        # FIXME: below line is failing when in suite\n        # assert TestPluginDiscovery.mock_plugins_north_config == plugins\n        assert 1 == mock_get_folders.call_count\n        assert 2 == mock_get_plugin_config.call_count\n\n    def test_get_plugin_folders(self, mocker):\n        def mock_folders():\n            listdir = copy.deepcopy(TestPluginDiscovery.mock_north_folders)\n            listdir.extend([\"__init__\", \"empty\", \"common\"])\n            yield listdir\n\n        mocker.patch.object(os, \"listdir\", return_value=next(mock_folders()))\n        mocker.patch.object(os.path, \"isdir\", return_value=True)\n        plugin_folders = PluginDiscovery.get_plugin_folders(\"north\")\n        actual_plugin_folders = []\n        for 
dir_name in plugin_folders:\n            actual_plugin_folders.append(dir_name.split('/')[-1])\n        assert TestPluginDiscovery.mock_north_folders == actual_plugin_folders\n\n    @pytest.mark.parametrize(\"info, expected, is_config, installed_dir_name\", [\n        ({'name': \"furnace4\", 'version': \"1.1\", 'type': \"south\", 'interface': \"1.0\",\n          'config': {'plugin': {'description': \"Modbus RTU plugin\", 'type': 'string', 'default': 'modbus'}}},\n         {'name': 'modbus', 'type': 'south', 'description': 'Modbus RTU plugin', 'version': '1.1',\n          'installedDirectory': 'south/modbus', 'packageName': 'fledge-south-modbus'}, False, 'south'),\n        ({'name': \"furnace4\", 'version': \"1.1\", 'type': \"south\", 'interface': \"1.0\",\n          'config': {'plugin': {'description': \"Modbus RTU plugin\", 'type': 'string', 'default': 'modbus'}}},\n         {'name': 'modbus', 'type': 'south', 'description': 'Modbus RTU plugin', 'version': '1.1',\n          'installedDirectory': 'south/modbus', 'packageName': 'fledge-south-modbus',\n          'config': {'plugin': {'description': 'Modbus RTU plugin', 'type': 'string', 'default': 'modbus'}}},\n         True, 'south'),\n        ({'name': \"http_north\", 'version': \"1.1\", 'type': \"north\", 'interface': \"1.0\",\n          'config': {'plugin': {'description': \"HTTP north plugin\", 'type': 'string', 'default': 'http_north'}}},\n         {'name': 'http_north', 'type': 'north', 'description': 'HTTP north plugin', 'version': '1.1',\n          'installedDirectory': 'north/http_north', 'packageName': 'fledge-north-http-north'},\n         False, 'north'),\n        ({'name': \"rms\", 'version': \"1.1\", 'type': \"filter\", 'interface': \"1.0\",\n          'config': {'plugin': {'description': \"RMS Filter plugin\", 'type': 'string', 'default': 'rms'}}},\n         {'name': 'rms', 'type': 'filter', 'description': 'RMS Filter plugin', 'version': '1.1',\n          'installedDirectory': 'filter/rms', 
'packageName': 'fledge-filter-rms'},\n         False, 'filter'),\n        ({'name': \"Average\", 'version': \"1.1\", 'type': \"notificationRule\", 'interface': \"1.0\",\n          'config': {'plugin': {'description': \"Average Rule plugin\", 'type': 'string', 'default': 'Average'}}},\n         {'name': 'Average', 'type': 'rule', 'description': 'Average Rule plugin', 'version': '1.1',\n          'installedDirectory': 'notificationRule/Average', 'packageName': 'fledge-rule-average'},\n         False, 'notificationRule'),\n        ({'name': \"asset\", 'version': \"1.1\", 'type': \"notificationDelivery\", 'interface': \"1.0\",\n          'config': {'plugin': {'description': \"Asset Delivery plugin\", 'type': 'string', 'default': 'asset'}}},\n         {'name': 'asset', 'type': 'notify', 'description': 'Asset Delivery plugin', 'version': '1.1',\n          'installedDirectory': 'notificationDelivery/asset', 'packageName': 'fledge-notify-asset'},\n         False, 'notificationDelivery')\n    ])\n    def test_get_plugin_config(self, info, expected, is_config, installed_dir_name):\n        with patch.object(common, 'load_and_fetch_python_plugin_info', side_effect=[info]):\n            actual = PluginDiscovery.get_plugin_config(info['config']['plugin']['default'], expected['type'],\n                                                       installed_dir_name, is_config)\n            assert expected == actual\n\n    @pytest.mark.parametrize(\"info, warn_count\", [\n        ({'name': \"modbus\", 'version': \"1.1\", 'type': \"south\", 'interface': \"1.0\",\n          'config': {'plugin': {'description': 'Modbus RTU plugin', 'type': 'string', 'default': 'modbus'}}}, 0),\n        ({'name': \"modbus\", 'version': \"1.1\", 'type': \"south\", 'interface': \"1.0\", 'flag': api_utils.DEPRECATED_BIT_MASK_VALUE,\n          'config': {'plugin': {'description': 'Modbus RTU plugin', 'type': 'string', 'default': 'modbus'}}}, 1),\n        ({'name': \"modbus\", 'version': \"1.1\", 'type': 
\"south\", 'interface': \"1.0\", 'flag': 0,\n          'config': {'plugin': {'description': 'Modbus RTU plugin', 'type': 'string', 'default': 'modbus'}}}, 0),\n    ])\n    def test_deprecated_python_plugins(self, info, warn_count, is_config=True):\n        with patch.object(_logger, \"warning\") as patch_log_warn:\n            with patch.object(common, 'load_and_fetch_python_plugin_info', side_effect=[info]):\n                PluginDiscovery.get_plugin_config(info['name'], info['type'], info['type'], is_config)\n        assert warn_count == patch_log_warn.call_count\n        if warn_count:\n            args, kwargs = patch_log_warn.call_args\n            assert '\"{}\" plugin is deprecated'.format(info['name']) == args[0]\n\n    def test_bad_get_plugin_config(self):\n        mock_plugin_info = {\n                'name': \"HTTP\",\n                'version': \"1.0.0\",\n                'type': \"north\",\n                'interface': \"1.0.0\",\n                'config': {\n                            'plugin': {\n                                'description': \"HTTP north plugin\",\n                                'type': 'string',\n                                'default': 'http-north'\n                            }\n                }\n        }\n        with patch.object(_logger, \"warning\") as patch_log_warn:\n            with patch.object(common, 'load_and_fetch_python_plugin_info', side_effect=[mock_plugin_info]):\n                actual = PluginDiscovery.get_plugin_config(\"http-north\", \"south\", \"http_north\", False)\n                assert actual is None\n        patch_log_warn.assert_called_once_with('Plugin http-north is discarded due to invalid type')\n\n    @pytest.mark.parametrize(\"info, dir_name\", [\n        (mock_c_plugins_config[0], \"south\"),\n        (mock_c_plugins_config[1], \"north\"),\n        (mock_c_plugins_config[2], \"filter\"),\n        (mock_c_plugins_config[3], \"notify\"),\n        (mock_c_plugins_config[4], \"rule\")\n    ])\n 
   def test_fetch_c_plugins_installed(self, info, dir_name):\n        with patch.object(utils, \"find_c_plugin_libs\", return_value=[(info['name'], \"binary\")]) as patch_plugin_lib:\n            with patch.object(utils, \"get_plugin_info\", return_value=info) as patch_plugin_info:\n                PluginDiscovery.fetch_c_plugins_installed(dir_name, True, dir_name)\n            patch_plugin_info.assert_called_once_with(info['name'], dir=dir_name)\n        patch_plugin_lib.assert_called_once_with(dir_name)\n\n    @pytest.mark.parametrize(\"info, dir_name\", [\n        (mock_c_plugins_config[0], \"south\"),\n        (mock_c_plugins_config[1], \"north\"),\n        (mock_c_plugins_config[2], \"filter\"),\n        (mock_c_plugins_config[3], \"notify\"),\n        (mock_c_plugins_config[4], \"rule\")\n    ])\n    def test_deprecated_c_plugins_installed(self, info, dir_name):\n        info['flag'] = api_utils.DEPRECATED_BIT_MASK_VALUE\n        with patch.object(_logger, \"warning\") as patch_log_warn:\n            with patch.object(utils, \"find_c_plugin_libs\", return_value=[(info['name'], \"binary\")]) as patch_plugin_lib:\n                with patch.object(utils, \"get_plugin_info\", return_value=info) as patch_plugin_info:\n                    PluginDiscovery.fetch_c_plugins_installed(dir_name, True, dir_name)\n                patch_plugin_info.assert_called_once_with(info['name'], dir=dir_name)\n            patch_plugin_lib.assert_called_once_with(dir_name)\n        assert 1 == patch_log_warn.call_count\n        args, kwargs = patch_log_warn.call_args\n        assert '\"{}\" plugin is deprecated'.format(info['name']) == args[0]\n\n    def test_fetch_c_hybrid_plugins_installed(self):\n        info = {\"version\": \"1.6.0\", \"name\": \"FlirAX8\",\n                \"config\": {\"asset\": {\"description\": \"Default asset name\", \"default\": \"flir\",\n                                     \"displayName\": \"Asset Name\", \"type\": \"string\"},\n                          
 \"plugin\": {\"description\": \"A Modbus connected Flir AX8 infrared camera\",\n                                      \"default\": \"FlirAX8\", \"readonly\": \"true\", \"type\": \"string\"}}}\n        with patch.object(utils, \"find_c_plugin_libs\", return_value=[(\"FlirAX8\", \"json\")]) as patch_plugin_lib:\n            with patch.object(common, \"load_and_fetch_c_hybrid_plugin_info\", return_value=info) as patch_hybrid_plugin_info:\n                PluginDiscovery.fetch_c_plugins_installed('south', True, 'south')\n            patch_hybrid_plugin_info.assert_called_once_with(info['name'], True)\n        patch_plugin_lib.assert_called_once_with('south')\n\n    @pytest.mark.parametrize(\"info, exc_count\", [\n        ({}, 0),\n        ({\"interface\": \"1.0.0\", \"version\": \"1.0.0\", \"type\": \"south\", \"name\": \"Random\", \"config\": \"(null)\"}, 1),\n        ({\"interface\": \"1.0.0\", \"version\": \"1.0.0\", \"type\": \"south\", \"name\": \"Random\", \"config\": {}}, 1)\n    ])\n    def test_bad_fetch_c_south_plugin_installed(self, info, exc_count):\n        with patch.object(_logger, \"exception\") as patch_log_exc:\n            with patch.object(utils, \"find_c_plugin_libs\", return_value=[(\"Random\", \"binary\")]) as patch_plugin_lib:\n                with patch.object(utils, \"get_plugin_info\",  return_value=info) as patch_plugin_info:\n                    PluginDiscovery.fetch_c_plugins_installed(\"south\", False, 'south')\n                patch_plugin_info.assert_called_once_with('Random', dir='south')\n            patch_plugin_lib.assert_called_once_with('south')\n            assert exc_count == patch_log_exc.call_count\n\n    @pytest.mark.parametrize(\"info, exc_count\", [\n        ({}, 0),\n        ({\"interface\": \"1.0.0\", \"version\": \"1.0.0\", \"type\": \"north\", \"name\": \"PI_Server\", \"config\": \"(null)\"}, 1),\n        ({\"interface\": \"1.0.0\", \"version\": \"1.0.0\", \"type\": \"north\", \"name\": \"PI_Server\", \"config\": {}}, 
1)\n    ])\n    def test_bad_fetch_c_north_plugin_installed(self, info, exc_count):\n        with patch.object(_logger, \"exception\") as patch_log_exc:\n            with patch.object(utils, \"find_c_plugin_libs\", return_value=[(\"PI_Server\", \"binary\")]) as patch_plugin_lib:\n                with patch.object(utils, \"get_plugin_info\", return_value=info) as patch_plugin_info:\n                    PluginDiscovery.fetch_c_plugins_installed(\"north\", False, 'north')\n                patch_plugin_info.assert_called_once_with('PI_Server', dir='north')\n            patch_plugin_lib.assert_called_once_with('north')\n            assert exc_count == patch_log_exc.call_count\n\n    @pytest.mark.parametrize(\"exc_name, log_exc_name, msg\", [\n        (FileNotFoundError, \"error\", 'Import problem from path \"modbus\" for modbus plugin.'),\n        (Exception, \"exception\", 'Failed to fetch config for modbus plugin.')\n    ])\n    def test_bad_get_south_plugin_config(self, exc_name, log_exc_name, msg):\n        with patch.object(_logger, log_exc_name) as patch_log_exc:\n            with patch.object(common, 'load_and_fetch_python_plugin_info', side_effect=[exc_name]):\n                PluginDiscovery.get_plugin_config(\"modbus\", \"south\", \"south\", False)\n        assert 1 == patch_log_exc.call_count\n        args = patch_log_exc.call_args\n        assert msg == args[0][1]\n\n    @pytest.mark.parametrize(\"exc_name, log_exc_name, msg\", [\n        (FileNotFoundError, \"error\", 'Import problem from path \"http\" for http plugin.'),\n        (Exception, \"exception\", 'Failed to fetch config for http plugin.')\n    ])\n    def test_bad_get_north_plugin_config(self, exc_name, log_exc_name, msg):\n        with patch.object(_logger, log_exc_name) as patch_log_exc:\n            with patch.object(common, 'load_and_fetch_python_plugin_info', side_effect=[exc_name]):\n                PluginDiscovery.get_plugin_config(\"http\", \"north\", \"north\", False)\n        assert 1 
== patch_log_exc.call_count\n        args = patch_log_exc.call_args\n        assert msg == args[0][1]\n"
  },
  {
    "path": "tests/unit/python/fledge/common/test_process.py",
    "content": "# -*- coding: utf-8 -*-\n\nimport pytest\nimport sys\n\nfrom unittest.mock import patch\n\nfrom fledge.common import process\nfrom fledge.common.storage_client.storage_client import ReadingsStorageClientAsync, StorageClientAsync\nfrom fledge.common.process import FledgeProcess, ArgumentParserError\nfrom fledge.common.microservice_management_client.microservice_management_client import MicroserviceManagementClient\n\n\n__author__ = \"Ashwin Gopalakrishnan\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestFledgeProcess:\n\n    def test_constructor_abstract_method_run(self):\n        with pytest.raises(TypeError):\n            fp = FledgeProcess()\n        with pytest.raises(TypeError):\n            class FledgeProcessImp(FledgeProcess):\n                pass\n            fp = FledgeProcessImp()\n\n    @pytest.mark.parametrize('argslist',\n                             [(['pytest']),\n                              (['pytest, ''--address', 'corehost']),\n                              (['pytest', '--address', 'corehost', '--port', '32333'])\n                              ])\n    def test_constructor_missing_args(self, argslist):\n        class FledgeProcessImp(FledgeProcess):\n            def run(self):\n                pass\n        with patch.object(sys, 'argv', argslist):\n            with pytest.raises(ArgumentParserError) as excinfo:\n                with patch.object(process._logger, \"error\") as patch_logger:\n                    fp = FledgeProcessImp()\n                assert 1 == patch_logger.call_count\n                patch_logger.assert_called_once_with()\n            assert '' in str(excinfo.value)\n\n    def test_constructor_good(self):\n        class FledgeProcessImp(FledgeProcess):\n            def run(self):\n                pass\n        with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):\n            
with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:\n                with patch.object(ReadingsStorageClientAsync, '__init__', return_value=None) as rsc_async_patch:\n                    with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:\n                        fp = FledgeProcessImp()\n        mmc_patch.assert_called_once_with('corehost', 32333)\n        rsc_async_patch.assert_called_once_with('corehost', 32333)\n        sc_async_patch.assert_called_once_with('corehost', 32333)\n        assert fp._core_management_host is 'corehost'\n        assert fp._core_management_port == 32333\n        assert fp._name is 'sname'\n        assert hasattr(fp, '_core_microservice_management_client')\n        assert hasattr(fp, '_readings_storage_async')\n        assert hasattr(fp, '_storage_async')\n        assert hasattr(fp, '_start_time')\n\n    def test_get_services_from_core(self):\n        class FledgeProcessImp(FledgeProcess):\n            def run(self):\n                pass\n        with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):\n            with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:\n                with patch.object(MicroserviceManagementClient, 'get_services', return_value=None) as get_patch:\n                    with patch.object(ReadingsStorageClientAsync, '__init__',\n                                      return_value=None) as rsc_async_patch:\n                        with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:\n                            fp = FledgeProcessImp()\n                            fp.get_services_from_core('foo', 'bar')\n        get_patch.assert_called_once_with('foo', 'bar')\n\n    def test_register_service_with_core(self):\n        class FledgeProcessImp(FledgeProcess):\n            def run(self):\n                pass\n       
 with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):\n            with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:\n                with patch.object(MicroserviceManagementClient, 'register_service', return_value=None) as register_patch:\n                    with patch.object(ReadingsStorageClientAsync, '__init__',\n                                      return_value=None) as rsc_async_patch:\n                        with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:\n                            fp = FledgeProcessImp()\n                            fp.register_service_with_core('payload')\n        register_patch.assert_called_once_with('payload')\n\n    def test_unregister_service_with_core(self):\n        class FledgeProcessImp(FledgeProcess):\n            def run(self):\n                pass\n        with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):\n            with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:\n                with patch.object(MicroserviceManagementClient, 'unregister_service', return_value=None) as unregister_patch:\n                    with patch.object(ReadingsStorageClientAsync, '__init__',\n                                      return_value=None) as rsc_async_patch:\n                        with patch.object(StorageClientAsync, '__init__', return_value=None) as sc_async_patch:\n                            fp = FledgeProcessImp()\n                            fp.unregister_service_with_core('id')\n        unregister_patch.assert_called_once_with('id')\n"
  },
  {
    "path": "tests/unit/python/fledge/common/test_service_record.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\n\"\"\" Test fledge/common/service_record.py \"\"\"\n\nimport pytest\n\nfrom fledge.common.service_record import ServiceRecord\n\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestServiceRecord:\n\n    def test_slots(self):\n        slots = ServiceRecord.__slots__\n        assert 9 == len(slots)\n        assert ['_id', '_name', '_type', '_protocol', '_address', '_port', '_management_port', '_status', '_debug'\n                ] == slots\n\n    @pytest.mark.parametrize(\"name, value\", [\n        ('Storage', 1), ('Core', 2), ('Southbound', 3), ('Notification', 4), ('Management', 5), ('Northbound', 6),\n        ('Dispatcher', 7), ('BucketStorage', 8), ('Pipeline', 9)\n    ])\n    def test_types(self, name, value):\n        assert 9 == len(ServiceRecord.Type)\n        assert name == ServiceRecord.Type(value).name\n\n    @pytest.mark.parametrize(\"name, value\", [\n        ('Running', 1), ('Shutdown', 2), ('Failed', 3), ('Unresponsive', 4), ('Restart', 5)\n    ])\n    def test_status(self, name, value):\n        assert 5 == len(ServiceRecord.Status)\n        assert name == ServiceRecord.Status(value).name\n\n    @pytest.mark.parametrize(\"s_port\", [None, 12, \"34\"])\n    def test_init(self, s_port):\n        obj = ServiceRecord(\"some id\", \"aName\", \"Storage\", \"http\", \"127.0.0.1\", s_port, 1234)\n        assert isinstance(obj._id, str), f\"Expected obj._id to be a string, but got {type(obj._id)}\"\n        assert \"some id\" == obj._id\n        assert isinstance(obj._name, str), f\"Expected obj._name to be a string, but got {type(obj._name)}\"\n        assert \"aName\" == obj._name\n        assert isinstance(obj._type, str), f\"Expected obj._type to be a string, but got {type(obj._type)}\"\n        assert \"Storage\" == obj._type\n        assert obj._port is None or 
isinstance(obj._port, int), (f\"Expected obj._port to be an integer, \"\n                                                                 f\"but got {type(obj._port)}\")\n        assert int(s_port) == obj._port if s_port else obj._port is None\n        assert isinstance(obj._management_port, int), (f\"Expected obj._management_port to be an integer, \"\n                                                       f\"but got {type(obj._management_port)}\")\n        assert 1234 == obj._management_port\n        assert isinstance(obj._status, int), f\"Expected obj._debug to be an integer, but got {type(obj._status)}\"\n        assert 1 == obj._status\n        assert isinstance(obj._debug, dict), f\"Expected obj._debug to be a dictionary, but got {type(obj._debug)}\"\n        assert {} == obj._debug , f\"Expected obj._debug to be an empty dictionary, but got {repr(obj._debug)}\"\n\n    @pytest.mark.parametrize(\"s_type\", [\n        \"Storage\", \"Core\", \"Southbound\", \"Notification\", \"Management\", \"Northbound\", \"Dispatcher\",\n        \"BucketStorage\", \"Pipeline\"])\n    def test_init_with_valid_type(self, s_type):\n        obj = ServiceRecord(\"some id\", \"aName\", s_type, \"http\", \"127.0.0.1\", None, 1234)\n        assert \"some id\" == obj._id\n        assert \"aName\" == obj._name\n        assert s_type == obj._type\n\n    @pytest.mark.parametrize(\"s_type\", [\n        \"\", None, 12, \"southbound\", \"south\", \"South\", \"North\", \"Filter\", \"External\"\n    ])\n    def test_init_with_invalid_type(self, s_type):\n        with pytest.raises(Exception) as ex:\n            ServiceRecord(\"some id\", \"aName\", s_type, \"http\", \"127.0.0.1\", None, 1234)\n        assert ex.type is ServiceRecord.InvalidServiceType\n"
  },
  {
    "path": "tests/unit/python/fledge/common/test_statistics.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport asyncio\nimport json\nfrom unittest.mock import MagicMock, patch\nimport pytest\n\nfrom fledge.common import statistics\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\n\n\n__author__ = \"Ashish Jabble, Mark Riddoch, Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestStatistics:\n\n    async def test_init_with_no_storage(self):\n        storage_client_mock = None\n        with pytest.raises(TypeError) as excinfo:\n            await statistics.create_statistics(storage_client_mock)\n        assert str(excinfo.value) == 'Must be a valid Async Storage object'\n\n    async def test_init_with_storage(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n\n        async def mock_coro():\n            await asyncio.sleep(0)\n            return \"\"\n\n        _rv = await mock_coro()\n        with patch.object(statistics.Statistics, '_load_keys', return_value=_rv):\n            s = await statistics.create_statistics(storage_client_mock)\n            assert isinstance(s, statistics.Statistics)\n            assert isinstance(s._storage, StorageClientAsync)\n        storage_client_mock.reset_mock()\n\n    async def test_singleton(self):\n        \"\"\" Test that two statistics instance share the same state \"\"\"\n\n        async def mock_coro():\n            return \"\"\n\n        _rv = await mock_coro()\n        with patch.object(statistics.Statistics, '_load_keys', return_value=_rv):\n            storageMock1 = MagicMock(spec=StorageClientAsync)\n            s1 = await statistics.create_statistics(storageMock1)\n\n            storageMock2 = MagicMock(spec=StorageClientAsync)\n            s2 = await statistics.create_statistics(storageMock2)\n            assert s1._storage == s2._storage\n\n    async def 
test_register(self):\n        \"\"\" Test that register results in a database insert \"\"\"\n        storageMock = MagicMock(spec=StorageClientAsync)\n        stats = statistics.Statistics(storageMock)\n        stats._registered_keys = []\n\n        async def mock_coro():\n            await asyncio.sleep(0)\n            return {\"response\": \"updated\", \"rows_affected\": 1}\n\n        _rv = await mock_coro()\n        with patch.object(stats, '_load_keys', return_value=_rv):\n            with patch.object(stats._storage, 'insert_into_tbl', return_value=_rv) as stat_update:\n                await stats.register('T1Stat', 'Test stat')\n            args, kwargs = stat_update.call_args\n            assert args[0] == 'statistics'\n            expected_storage_args = json.loads(args[1])\n            assert expected_storage_args['key'] == 'T1Stat'\n            assert expected_storage_args['value'] == 0\n            assert expected_storage_args['previous_value'] == 0\n            assert expected_storage_args['description'] == 'Test stat'\n        stats._storage.insert_into_tbl.reset_mock()\n\n    async def test_register_twice(self):\n        \"\"\" Test that register results in a database insert only once for same key\"\"\"\n        storageMock = MagicMock(spec=StorageClientAsync)\n        stats = statistics.Statistics(storageMock)\n        stats._registered_keys = []\n\n        async def mock_coro():\n            return {\"response\": \"updated\", \"rows_affected\": 1}\n\n        _rv = await mock_coro()\n        with patch.object(stats, '_load_keys', return_value=_rv):\n            with patch.object(stats._storage, 'insert_into_tbl', return_value=_rv) as stat_insert:\n                await stats.register('T2Stat', 'Test stat')\n                await stats.register('T2Stat', 'Test stat')\n            assert stat_insert.called\n            assert stat_insert.call_count == 1\n        stats._storage.insert_into_tbl.reset_mock()\n\n    async def 
test_register_exception(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        s = statistics.Statistics(storage_client_mock)\n        with patch.object(statistics._logger, 'exception') as logger_exception:\n            with patch.object(s._storage, 'insert_into_tbl', side_effect=Exception):\n                with pytest.raises(Exception):\n                    await s.register('T3Stat', 'Test stat')\n                args, kwargs = logger_exception.call_args\n                assert args[0] == 'Unable to create new statistic %s, error %s'\n                assert args[1] == 'T3Stat'\n
\n    async def test_load_keys(self):\n        \"\"\"Test the load key\"\"\"\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        s = statistics.Statistics(storage_client_mock)\n        s._registered_keys = []\n\n        async def mock_coro():\n            return {'rows': [{\"previous_value\": 0, \"value\": 1,\n                              \"key\": \"K1\", \"description\": \"desc1\"}]}\n\n        _rv = await mock_coro()\n        with patch.object(s._storage, 'query_tbl_with_payload', return_value=_rv) as patch_query_tbl:\n            await s._load_keys()\n            assert \"K1\" in s._registered_keys\n        patch_query_tbl.assert_called_once_with('statistics', '{\"return\": [\"key\"]}')\n
\n    async def test_load_keys_exception(self):\n        \"\"\"Test the load key exception\"\"\"\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        s = statistics.Statistics(storage_client_mock)\n        s._registered_keys = []\n\n        async def mock_coro():\n            return Exception\n\n        _rv = await mock_coro()\n        with patch.object(statistics._logger, 'exception') as logger_exception:\n            with patch.object(s._storage, 'query_tbl_with_payload', return_value=_rv):\n                await s._load_keys()\n        args = logger_exception.call_args\n        assert args[0][1] == 'Failed to retrieve statistics keys'\n
\n    async def test_update(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        s = statistics.Statistics(storage_client_mock)\n\n        async def mock_coro():\n            return {\"response\": \"updated\", \"rows_affected\": 1}\n\n        _rv = await mock_coro()\n        payload = '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"READING\"}, ' \\\n                  '\"expressions\": [{\"column\": \"value\", \"operator\": \"+\", \"value\": 5}]}'\n        expected_result = {\"response\": \"updated\", \"rows_affected\": 1}\n        with patch.object(s._storage, 'update_tbl', return_value=_rv) as stat_update:\n            await s.update('READING', 5)\n            assert expected_result['response'] == \"updated\"\n        stat_update.assert_called_once_with('statistics', payload)\n
\n    @pytest.mark.parametrize(\"key, value_increment, exception_name, exception_message\", [\n        (123456, 120, TypeError, \"key must be a string\"),\n        ('PURGED', '120', ValueError, \"value must be an integer\"),\n        (None, '120', TypeError, \"key must be a string\"),\n        ('123456', '120', ValueError, \"value must be an integer\"),\n        ('READINGS', None, ValueError, \"value must be an integer\")\n    ])\n    async def test_update_with_invalid_params(self, key, value_increment, exception_name, exception_message):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        s = statistics.Statistics(storage_client_mock)\n\n        with pytest.raises(exception_name) as excinfo:\n            await s.update(key, value_increment)\n        assert exception_message == str(excinfo.value)\n
\n    async def test_update_exception(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        s = statistics.Statistics(storage_client_mock)\n        msg = 'Unable to update statistics value based on statistics_key {} and value_increment {}'.format('BUFFERED', 5)\n        with patch.object(s._storage, 'update_tbl', side_effect=Exception()):\n            with pytest.raises(Exception):\n                with patch.object(statistics._logger, 'exception') as logger_exception:\n                    await s.update('BUFFERED', 5)\n            args = logger_exception.call_args\n            assert msg == args[0][1]\n
\n    async def test_add_update(self):\n        stat_dict = {'FOGBENCH/TEMPERATURE': 1}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        s = statistics.Statistics(storage_client_mock)\n        payload = '{\"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"FOGBENCH/TEMPERATURE\"}, ' \\\n                  '\"expressions\": [{\"column\": \"value\", \"operator\": \"+\", \"value\": 1}]}'\n\n        async def mock_coro():\n            return {\"response\": \"updated\", \"rows_affected\": 1}\n\n        _rv = await mock_coro()\n        with patch.object(s._storage, 'update_tbl', return_value=_rv) as stat_update:\n            await s.add_update(stat_dict)\n        stat_update.assert_called_once_with('statistics', payload)\n
\n    async def test_insert_when_key_error(self):\n        stat_dict = {'FOGBENCH/TEMPERATURE': 1}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        s = statistics.Statistics(storage_client_mock)\n\n        async def mock_coro():\n            return {\"response\": \"not updated\", \"rows_affected\": 0}\n\n        _rv = await mock_coro()\n        with patch.object(s._storage, 'update_tbl', return_value=_rv) as stat_update:\n            with patch.object(statistics._logger, 'exception') as logger_exception:\n                with pytest.raises(KeyError):\n                    await s.add_update(stat_dict)\n            args, kwargs = logger_exception.call_args\n            assert args[0] == 'Statistics key %s has not been registered'\n            assert args[1] == 'FOGBENCH/TEMPERATURE'\n        assert stat_update.call_count == 1\n
\n    async def test_add_update_exception(self):\n        stat_dict = {'FOGBENCH/TEMPERATURE': 1}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        s = statistics.Statistics(storage_client_mock)\n        msg = 'Unable to update statistics value based on statistics_key %s and value_increment' \\\n              ' %s, error %s', \"FOGBENCH/TEMPERATURE\", 1, ''\n        with patch.object(s._storage, 'update_tbl', side_effect=Exception()):\n            with pytest.raises(Exception):\n                with patch.object(statistics._logger, 'exception') as logger_exception:\n                    await s.add_update(stat_dict)\n            logger_exception.assert_called_once_with(*msg)\n"
  },
  {
    "path": "tests/unit/python/fledge/common/web/test_middleware.py",
    "content": "\n# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test fledge/common/web/middleware.py \"\"\"\n\nfrom aiohttp import web\nimport pytest\nimport json\n\nfrom fledge.common.web import middleware\nfrom fledge.services.core import routes\n\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestMiddleware:\n\n    @pytest.fixture\n    def client(self, loop, aiohttp_server, aiohttp_client):\n        async def handler0(request):\n            raise RuntimeError('Error text')\n\n        async def handler1(request):\n            raise web.HTTPConflict(reason=\"foo\", text='{\"key\": \"conflict\"}')\n\n        async def handler2(request):\n            raise web.HTTPNotFound(text='{\"key\": \"not found\"}')\n\n        async def handler4(request):\n            return web.json_response({\"key\": \"Okay\"})\n\n        app = web.Application(loop=loop, middlewares=[middleware.error_middleware])\n        # fill the routes table\n        routes.setup(app)\n        app.router.add_route('GET', '/test', handler0)\n        app.router.add_route('GET', '/test-web-ex1', handler1)\n        app.router.add_route('GET', '/test-web-ex2', handler2)\n        app.router.add_route('GET', '/test-okay', handler4)\n\n        server = loop.run_until_complete(aiohttp_server(app))\n        loop.run_until_complete(server.start_server(loop=loop))\n        client = loop.run_until_complete(aiohttp_client(server))\n        return client\n\n    async def test_middleware_for_unhandled_exception(self, client):\n        resp = await client.get('/test')\n\n        assert 500 == resp.status\n        txt = await resp.text()\n        assert {\"error\": {\"message\": \"[RuntimeError] Error text\"}} == json.loads(txt)\n\n    async def test_middleware_allows_exception_trace(self, client):\n        resp = await client.get('/test?trace=1')\n\n        assert 500 == resp.status\n 
       txt = await resp.text()\n        res_dict = json.loads(txt)\n        assert '[RuntimeError] Error text' == res_dict[\"error\"][\"message\"]\n        # 2 additional key value pairs\n        assert 'RuntimeError' == res_dict[\"error\"][\"exception\"]\n        assert 'RuntimeError' in res_dict[\"error\"][\"traceback\"]\n\n    async def test_http_exception(self, client):\n        resp = await client.get('/test-web-ex1')\n\n        assert 409 == resp.status\n        assert \"foo\" == resp.reason\n        txt = await resp.text()\n        assert {'key': 'conflict'} == json.loads(txt)\n\n    async def test_no_trace_for_http_exception(self, client):\n        resp = await client.get('/test-web-ex1?trace=1')\n\n        assert 409 == resp.status\n        assert \"foo\" == resp.reason\n        txt = await resp.text()\n        assert {'key': 'conflict'} == json.loads(txt)\n\n    async def test_another_http_exception(self, client):\n        resp = await client.get('/test-web-ex2')\n\n        assert 404 == resp.status\n        assert \"Not Found\" == resp.reason\n        txt = await resp.text()\n        assert {'key': 'not found'} == json.loads(txt)\n\n    async def test_http_ok(self, client):\n        resp = await client.get('/test-okay')\n\n        assert 200 == resp.status\n        assert \"OK\" == resp.reason\n        txt = await resp.text()\n        assert {'key': 'Okay'} == json.loads(txt)\n"
  },
  {
    "path": "tests/unit/python/fledge/common/web/test_ssl_wrapper.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test fledge/common/web/ssl_wrapper.py \"\"\"\n\nimport time\nimport datetime\nimport pytest\nfrom fledge.common import utils\nfrom fledge.common.web.ssl_wrapper import SSLVerifier\n\n__author__ = \"Amarendra Kumar Sinha\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestSSLVerifier:\n    @pytest.fixture\n    def user_cert(self):\n        user_cert = \"\"\"-----BEGIN CERTIFICATE-----\nMIIDeDCCAWACAQEwDQYJKoZIhvcNAQELBQAwHTELMAkGA1UEBhMCVVMxDjAMBgNV\nBAMMBU1ZLUNBMB4XDTE5MTIxOTEwMDAyOVoXDTIwMTIxODEwMDAyOVowazELMAkG\nA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExEDAOBgNVBAoMB09TSXNvZnQx\nDTALBgNVBAMMBHVzZXIxJjAkBgkqhkiG9w0BCQEWF2ZsZWRnZUBnb29nbGVncm91\ncHMuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC2vbHDyp5teGbEaLb/\nG5BnRcXcLMs9fbimyYYt7Xhb5OEVuiGPD8npwBfsd0aE12BfoJVARjn/xjkk1rib\nZj0LEocKWfQoYgRjIwzVSdR/uczF/0Xpj68UlvcRxoPsP3LzYQ7i8Smdqn5NI9R1\nP4i8GGOSY/+c+8umd44T6H/jBwIDAQABMA0GCSqGSIb3DQEBCwUAA4ICAQAU+lnr\nImUhPb8U6Tmf4diphJCTADK3zy3qYNmqndLiVutsK32Q/1Gg+My8rtxv7gpPztpF\nH+xPtSsYLfdJtcI2xM9nnx3G442/3Zf5tEGDZdsIvedzPw6kjO9coKD1lwkKtXkl\nKy9TjsnUIkHe91l5c7KwVcxu6b/Zb2ye7uS/CQEC14QVeKbitsovNzAuNZt1JgHl\ncwPAsrobjL+VgJ1O8l/PLijCh6bgeUZQlTdPqIAZN5hFusx8vPYzfRclNteUQGAh\nK020oXuZNRIb9bb8z8wL6g2JBs4c9cDz6/JgdQs226UEsMrUiTGZTyxR5PucCqc7\n09l8vVHInD+mC1HNW4n3aJNSl2qGUAWLU9dWmsKOKQYPxZ8R3UShJnNsxJY476iQ\ndIU5RZzJqVTmFiYLs62Tap+1thTQDjIqf8bKR7bZ3vL08eyiayEeMRGbkClqWIbl\nduLdJ28ZzNMDfSuPF5yk/y8L5dc2XCbYCj3puOXgzrMlmVdAPUdGX6952dLLlqxz\n87hMLe+ZB709EN/sPGt9SmifLal3rx+/dv5ZiHsiCSi/FXdVBRpim+aLh99MG2WS\nPYPBNg5UKCYfUESU1F7V8fZMwH9Go3Qi+YjfL+K9wjN+c+Il/VXBYsLxypJy7QF0\n5eCXag4hEQPihXbjPAgO+LNezaaOuNeW79upfw==\n-----END CERTIFICATE-----\n\"\"\"\n        return user_cert\n\n    @pytest.fixture\n    def err_cert(self):\n        err_cert = \"\"\"-----BEGIN 
CERTIFICATE-----\nMIICVTCCAb4CCQDhiCBFZbIH6DANBgkqhkiG9w0BAQsFADBvMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEQMA4GA1UECgwHT1NJc29mdDEQMA4GA1UE\nAwwHZm9nbGFtcDEnMCUGCSqGSIb3DQEJARYYZm9nbGFtcEBnb29nbGVncm91cHMu\nY29tMB4XDTE5MDMxMjA4NTM0MVoXDTIwMDMxMTA4NTM0MVowbzELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExEDAOBgNVBAoMB09TSXNvZnQxEDAOBgNV\nBAMMB2ZvZ2xhbXAxJzAlBgkqhkiG9w0BCQEWGGZvZ2xhbXBAZ29vZ2xlZ3JvdXBz\nLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAn1uwj9sV2q339JJHgI/N\ngQbN9She64YgcO/pKlstd+luWPKNQSa/xHQVjPvqAMppFaCcCV4diJ+ExjDteKPU\nI7PG1txD2FH/6o79oG2MC25qD79xFIZjYa4LFXZapozKIPfUdayG9StNRkgaLkAW\nDNTWZW26lTFj2YCsy39S0EMCAwEAATANBgkqhkiG9w0BAQsFAAOBgQAOZPUVTARo\nAShK4DM84LGNChzbdD6EVAl066+d9FRDuoX0KJj2/qepeevh2LC8dqG/QHcl75Ef\n6FhhGzmSosgTHT4GVpFVS3V7fe1TVRfW8QG9MToF3N4nCOHzYRzLQVA+3qXpEy9y\n49JyAKd2BhhczmXi/xtSrhn2JctKf/1MFA=\n-----END CERTIFICATE-----\n\"\"\"\n        return err_cert\n\n    def test_x509_values_with_valid_cert(self, user_cert):\n        SSLVerifier.set_user_cert(user_cert)\n        ssl_purpose = ['Certificate purposes:', 'SSL client : Yes',\n                       'SSL client CA : No', 'SSL server : Yes', 'SSL server CA : No',\n                       'Netscape SSL server : Yes', 'Netscape SSL server CA : No',\n                       'S/MIME signing : Yes', 'S/MIME signing CA : No',\n                       'S/MIME encryption : Yes', 'S/MIME encryption CA : No',\n                       'CRL signing : Yes', 'CRL signing CA : No', 'Any Purpose : Yes',\n                       'Any Purpose CA : Yes', 'OCSP helper : Yes',\n                       'OCSP helper CA : No', 'Time Stamp signing : No',\n                       'Time Stamp signing CA : No'\n                       ]\n        assert '01' == SSLVerifier.get_serial()\n        # TODO: FOGL-7302 -x509_strict check when OpenSSL version >=3.x; Currently, only Ubuntu 22 and CentOS Stream 9\n        if utils.get_open_ssl_version(version_string=False)[0] >= 3:\n            import sys\n            if 
sys.version_info < (3, 10):\n                ssl_purpose.extend(['Code signing : No', 'Code signing CA : No'])\n\n        assert ssl_purpose == SSLVerifier.get_purposes()\n        assert 'C=US, CN=MY-CA' == SSLVerifier.get_issuer_common_name()\n        assert {'email': 'fledge@googlegroups.com', 'commonName': 'user',\n                'organisation': 'OSIsoft', 'state': 'California',\n                'country': 'US'} == SSLVerifier.get_subject()\n        assert '69:5A:CC:85:3C:8E:D4:14:05:65:21:31:E3:91:9B:BA:35:74:4D:A3' == SSLVerifier.get_fingerprint()\n        assert '-----BEGIN PUBLIC KEY-----' == SSLVerifier.get_pubkey()\n        assert 'Dec 19 10:00:29 2019 GMT' == SSLVerifier.get_startdate()\n        assert 'Dec 18 10:00:29 2020 GMT' == SSLVerifier.get_enddate()\n\n        # Test is_expired(). It should return False if cert end time is yet to come.\n        dt_format = \"%b %d %X %Y %Z\"\n        cert_end_time = time.mktime(\n            datetime.datetime.strptime('Dec 18 10:00:29 2020 GMT',\n                                       dt_format).timetuple())\n        run_time = time.time()\n        expected = False if cert_end_time > run_time else True\n\n        assert expected == SSLVerifier.is_expired()\n\n    def test_x509_values_with_invalid_cert(self, err_cert):\n        SSLVerifier.set_user_cert(err_cert)\n        with pytest.raises(OSError):\n            assert '01' == SSLVerifier.get_serial()\n"
  },
  {
    "path": "tests/unit/python/fledge/plugins/common/test_plugins_common_utils.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Unit tests for utils \"\"\"\n\nimport pytest\nimport fledge.plugins.common.utils as utils\nfrom collections import Counter\n\n\nclass TestUtils:\n    @pytest.mark.parametrize(\"test_input_old, test_input_new, expected\", [\n        ({'a': 1, 'b': 2, 'c': 3}, {'a': 11, 'b': 22, 'c': 33}, ['a', 'b', 'c']),\n        ({'a': 1, 'b': 2, 'c': 3}, {'a': 11, 'b': 22, 'd': 44}, ['a', 'b', 'd']),\n        ({'a': 1, 'b': 2, 'c': 3}, {'a': 11, 'b': 22}, ['a', 'b']),\n        ({'a': 1, 'b': 2, 'c': 3}, {'d': 11, 'e': 22, 'c': 33}, ['d', 'e', 'c'])\n    ])\n    def test_get_diff(self, test_input_old, test_input_new, expected):\n        actual = utils.get_diff(test_input_old, test_input_new)\n        assert Counter(expected) == Counter(actual)\n\n    @pytest.mark.parametrize(\"number, bit_pos, expected\", [\n        (32, 5, 1),\n        (32, 4, 0),\n        (2, 0, 0),\n        (2, 1, 1),\n        (96, 5, 1)\n    ])\n    def test_bit_at_given_position_set_or_unset(self, number, bit_pos, expected):\n        actual = utils.bit_at_given_position_set_or_unset(number, bit_pos)\n        assert expected == actual\n"
  },
  {
    "path": "tests/unit/python/fledge/plugins/north/common/test_common.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Unit tests about the common code available in plugins.north.common.common \"\"\"\n\nimport pytest\nimport fledge.plugins.north.common.common as plugin_common\n\n\nclass TestPluginsNorthCommon(object):\n    \"\"\" Unit tests about the common code available in plugins.north.common.common \"\"\"\n\n    @pytest.mark.parametrize(\"value, expected\", [\n        # Good Cases\n        (\"xxx\", \"xxx\"),\n        (\"1xx\", \"1xx\"),\n        (\"x1x\", \"x1x\"),\n        (\"xx1\", \"xx1\"),\n\n        (\"26/04/2018 11:14\", \"26/04/2018 11:14\"),\n\n        (-180.2, -180.2),\n        (0.0, 0.0),\n        (180.0, 180.0),\n        (180.2, 180.2),\n        (\"-180.2\", -180.2),\n        (\"180.2\", 180.2),\n        (\"180.0\", 180.0),\n        (\"180.\", 180.0),\n\n        (-10, -10),\n        (0, 0),\n        (10, 10),\n        (\"-10\", -10),\n        (\"0\", 0),\n        (\"10\", 10),\n\n    ])\n    def test_convert_to_type_good(self, value, expected):\n        \"\"\" \"\"\"\n\n        assert plugin_common.convert_to_type(value) == expected\n\n    @pytest.mark.parametrize(\"value, expected\", [\n        # Bad Cases\n        (\"111\", \"111\"),\n\n        (\"26/04/2018 11:14\", \"26/04/2018 11:00\"),\n\n        (\"-180.2\", 180.2),\n\n        (-10, 10),\n    ])\n    def test_convert_to_type_bad(self, value, expected):\n        \"\"\" \"\"\"\n\n        assert plugin_common.convert_to_type(value) != expected\n\n    @pytest.mark.parametrize(\"value, expected\", [\n        # Cases - standard\n        (\"String 1\", \"string\"),\n        (-999, \"integer\"),\n        (-1, \"integer\"),\n        (0, \"integer\"),\n        (-999, \"integer\"),\n        (-999.0,  \"number\"),\n        (-1.2,  \"number\"),\n        (0.,  \"number\"),\n        (1.2,  \"number\"),\n        (999.0,  \"number\"),\n\n        # Cases - Number as string\n        (\"-1.2\",  
\"number\"),\n        (\"-1.0\",  \"number\"),\n        (\".0\",  \"number\"),\n        (\"1.0\",  \"number\"),\n        (\"1.2\",  \"number\"),\n\n        # Cases - Integer as string\n        (\"-1\",  \"integer\"),\n        (\"0\",  \"integer\"),\n        (\"1\",  \"integer\"),\n\n        # Cases - real cases generated using fogbench\n        (90774.998,  \"number\"),\n        (41,  \"integer\"),\n        (-2,  \"integer\"),\n        (-159,  \"integer\"),\n        (\"up\",  \"string\"),\n        (\"tock\",  \"string\")\n\n    ])\n    def test_evaluate_type(self, value, expected):\n        \"\"\" tests evaluate_type available in plugins.north.common.common \"\"\"\n\n        assert plugin_common.evaluate_type(value) == expected\n\n    @pytest.mark.parametrize(\"value, expected\", [\n        (\n            # Case 1\n            [\n                {\"asset_code\": \"temperature0\", \"reading\": 20},\n                {\"asset_code\": \"temperature1\", \"reading\": 21},\n                {\"asset_code\": \"temperature2\", \"reading\": 22}\n            ],\n            # Expected\n            [\n                {\"asset_code\": \"temperature0\", \"asset_data\": 20},\n                {\"asset_code\": \"temperature1\", \"asset_data\": 21},\n                {\"asset_code\": \"temperature2\", \"asset_data\": 22}\n            ]\n        ),\n        (\n            # Case 2\n            [\n                {\"asset_code\": \"temperature0\", \"reading\": 20},\n                {\"asset_code\": \"temperature1\", \"reading\": 21},\n                {\"asset_code\": \"temperature0\", \"reading\": 22}  # Duplicated\n            ],\n            # Expected\n            [\n                {\"asset_code\": \"temperature0\", \"asset_data\": 20},\n                {\"asset_code\": \"temperature1\", \"asset_data\": 21},\n            ]\n\n        ),\n        (\n            # Case 3\n            [\n                {\"asset_code\": \"temperature1\", \"reading\": 10},\n                
{\"asset_code\": \"temperature2\", \"reading\": 20},\n                {\"asset_code\": \"temperature1\", \"reading\": 11},  # Duplicated\n                {\"asset_code\": \"temperature2\", \"reading\": 21},  # Duplicated\n                {\"asset_code\": \"temperature3\", \"reading\": 30},\n\n            ],\n            # Expected\n            [\n                {\"asset_code\": \"temperature1\", \"asset_data\": 10},\n                {\"asset_code\": \"temperature2\", \"asset_data\": 20},\n                {\"asset_code\": \"temperature3\", \"asset_data\": 30},\n            ]\n\n        ),\n\n    ])\n    def test_identify_unique_asset_codes(self, value, expected):\n        \"\"\" \"\"\"\n\n        assert plugin_common.identify_unique_asset_codes(value) == expected\n"
  },
  {
    "path": "tests/unit/python/fledge/services/common/microservice_management/test_instance.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport uuid\nfrom unittest.mock import patch\nimport pytest\n\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry as Service\nfrom fledge.services.core.service_registry.exceptions import *\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.services.core.interest_registry.interest_registry import InterestRegistry\n\n__author__ = \"Amarendra Kumar Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestInstance:\n    def setup_method(self):\n        Service._registry = []\n\n    def teardown_method(self):\n        Service._registry = []\n\n    async def test_register(self):\n        with patch.object(Service._logger, 'info') as log_info:\n            idx = Service.register(\"StorageService1\", \"Storage\", \"127.0.0.1\", 9999, 1999)\n            assert str(uuid.UUID(idx, version=4)) == idx\n        assert 1 == log_info.call_count\n        args, kwargs = log_info.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <StorageService1, type=Storage, protocol=http, address=127.0.0.1, service port=9999,'\n                                ' management port=1999, status=1>')\n\n    async def test_duplicate_name_registration(self):\n        with patch.object(Service._logger, 'info') as log_info:\n            idx1 = Service.register(\"StorageService1\", \"Storage\", \"127.0.0.1\", 9999, 1999)\n            assert str(uuid.UUID(idx1, version=4)) == idx1\n        assert 1 == log_info.call_count\n        args, kwargs = log_info.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <StorageService1, type=Storage, protocol=http, address=127.0.0.1, service port=9999,'\n                                ' management 
port=1999, status=1>')\n\n        with pytest.raises(AlreadyExistsWithTheSameName) as excinfo:\n            Service.register(\"StorageService1\", \"Storage\", \"127.0.0.1\", 9999, 1999)\n        assert \"AlreadyExistsWithTheSameName\" in str(excinfo)\n\n    async def test_duplicate_address_port_registration(self):\n        with patch.object(Service._logger, 'info') as log_info:\n            idx1 = Service.register(\"StorageService1\", \"Storage\", \"127.0.0.1\", 9999, 1999)\n            assert str(uuid.UUID(idx1, version=4)) == idx1\n        assert 1 == log_info.call_count\n        args, kwargs = log_info.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <StorageService1, type=Storage, protocol=http, address=127.0.0.1, service port=9999,'\n                                ' management port=1999, status=1>')\n\n        with pytest.raises(AlreadyExistsWithTheSameAddressAndPort) as excinfo:\n            Service.register(\"StorageService2\", \"Storage\", \"127.0.0.1\", 9999, 1998)\n        assert \"AlreadyExistsWithTheSameAddressAndPort\" in str(excinfo)\n\n    async def test_duplicate_address_and_mgt_port_registration(self):\n        with patch.object(Service._logger, 'info') as log_info:\n            idx1 = Service.register(\"StorageService1\", \"Storage\", \"127.0.0.1\", 9999, 1999)\n            assert str(uuid.UUID(idx1, version=4)) == idx1\n        assert 1 == log_info.call_count\n        args, kwargs = log_info.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <StorageService1, type=Storage, protocol=http, address=127.0.0.1, service port=9999,'\n                                ' management port=1999, status=1>')\n\n        with pytest.raises(AlreadyExistsWithTheSameAddressAndManagementPort) as excinfo:\n            Service.register(\"StorageService2\", \"Storage\", \"127.0.0.1\", 9998, 1999)\n        assert 
\"AlreadyExistsWithTheSameAddressAndManagementPort\" in str(excinfo)\n\n    async def test_register_wrong_type(self):\n        with pytest.raises(ServiceRecord.InvalidServiceType) as excinfo:\n            Service.register(\"StorageService1\", \"WrongType\", \"127.0.0.1\", 9999, 1999)\n        assert \"InvalidServiceType\" in str(excinfo)\n\n    async def test_register_invalid_port(self):\n        with pytest.raises(NonNumericPortError) as excinfo:\n            Service.register(\"StorageService2\", \"Storage\", \"127.0.0.1\", \"808a\", 1999)\n        assert \"NonNumericPortError\" in str(excinfo)\n\n    async def test_register_invalid_mgt_port(self):\n        with pytest.raises(NonNumericPortError) as excinfo:\n            Service.register(\"StorageService2\", \"Core\", \"127.0.0.1\", 8888, \"199a\")\n        assert \"NonNumericPortError\" in str(excinfo)\n\n    async def test_unregister(self, mocker):\n        # register a service\n        with patch.object(Service._logger, 'info') as log_info:\n            idx = Service.register(\"StorageService2\", \"Storage\", \"127.0.0.1\", 8888, 1888)\n            assert str(uuid.UUID(idx, version=4)) == idx\n        assert 1 == log_info.call_count\n        arg, kwarg = log_info.call_args\n        assert arg[0].startswith('Registered service instance id=')\n        assert arg[0].endswith(': <StorageService2, type=Storage, protocol=http, address=127.0.0.1, service port=8888,'\n                               ' management port=1888, status=1>')\n\n        mocker.patch.object(InterestRegistry, '__init__', return_value=None)\n        mocker.patch.object(InterestRegistry, 'get', return_value=list())\n\n        # deregister the same\n        with patch.object(Service._logger, 'info') as log_info2:\n            t = Service.unregister(idx)\n            assert idx == t\n        assert 1 == log_info2.call_count\n        args, kwargs = log_info2.call_args\n        assert args[0].startswith('Stopped service instance id=')\n        assert 
args[0].endswith(': <StorageService2, type=Storage, protocol=http, address=127.0.0.1, '\n                                'service port=8888, management port=1888, status=2>')\n\n        s = Service.get(idx)\n        assert s[0]._status == 2  # Unregistered\n\n    async def test_get(self):\n        with patch.object(Service._logger, 'info') as log_info:\n            s = Service.register(\"StorageService\", \"Storage\", \"localhost\", 8881, 1888)\n            c = Service.register(\"CoreService\", \"Core\", \"localhost\", 7771, 1777)\n            d = Service.register(\"SouthService\", \"Southbound\", \"127.0.0.1\", 9991, 1999, \"https\")\n        assert 3 == log_info.call_count\n\n        _service = Service.get()\n        assert 3 == len(_service)\n\n        assert s == _service[0]._id\n        assert \"StorageService\" == _service[0]._name\n        assert \"Storage\" == _service[0]._type\n        assert \"localhost\" == _service[0]._address\n        assert 8881 == int(_service[0]._port)\n        assert 1888 == int(_service[0]._management_port)\n        # validates default set to HTTP\n        assert \"http\" == _service[0]._protocol\n\n        assert c == _service[1]._id\n        assert \"CoreService\" == _service[1]._name\n        assert \"Core\" == _service[1]._type\n        assert \"localhost\" == _service[1]._address\n        assert 7771 == int(_service[1]._port)\n        assert 1777 == int(_service[1]._management_port)\n        # validates default set to HTTP\n        assert \"http\" == _service[1]._protocol\n\n        assert d == _service[2]._id\n        assert \"SouthService\" == _service[2]._name\n        assert \"Southbound\" == _service[2]._type\n        assert \"127.0.0.1\" == _service[2]._address\n        assert 9991 == int(_service[2]._port)\n        assert 1999 == int(_service[2]._management_port)\n        assert \"https\" == _service[2]._protocol\n\n    async def test_get_fail(self):\n        with pytest.raises(DoesNotExist) as excinfo:\n            
with patch.object(Service._logger, 'info') as log_info:\n                Service.register(\"StorageService\", \"Storage\", \"127.0.0.1\", 8888, 9999)\n                Service.get('incorrect_id')\n            assert 1 == log_info.call_count\n            args, kwargs = log_info.call_args\n            assert args[0].startswith('Registered service instance id=')\n            assert args[0].endswith(\n                ': <StorageService1, type=Storage, protocol=http, address=127.0.0.1, service port=8888,'\n                ' management port=9999, status=1>')\n        assert \"DoesNotExist\" in str(excinfo)\n"
  },
  {
    "path": "tests/unit/python/fledge/services/common/test_microservice.py",
    "content": "# -*- coding: utf-8 -*-\n\nimport pytest\nimport time\nfrom unittest.mock import patch\nfrom aiohttp import web\nimport asyncio\nimport sys\nfrom fledge.common.storage_client.storage_client import ReadingsStorageClientAsync, StorageClientAsync\nfrom fledge.common.process import FledgeProcess\nfrom fledge.services.common.microservice import FledgeMicroservice, _logger\nfrom fledge.common.microservice_management_client.microservice_management_client import MicroserviceManagementClient\n\n\n__author__ = \"Ashwin Gopalakrishnan\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n# test abstract methods\n# test FledgeProcess class things it needs\n# test that it registers with core\n# test the microservice management api\n\n_DEFAULT_CONFIG = {\n    'plugin': {\n        'description': 'Python module name of the plugin to load',\n        'type': 'string',\n        'default': 'coap_listen',\n        'value': 'coap_listen'\n    },\n    'local_services': {\n        'description': 'Restrict microservice to localhost',\n        'type': 'boolean',\n        'default': 'false',\n        'value': 'false',\n    }\n}\n\n\nclass TestFledgeMicroservice:\n\n    def test_constructor_abstract_method_missing(self):\n        with pytest.raises(TypeError):\n            fm = FledgeMicroservice()\n        with pytest.raises(TypeError):\n            class FledgeMicroserviceImp(FledgeMicroservice):\n                pass\n            fm = FledgeMicroserviceImp()\n        with pytest.raises(TypeError):\n            class FledgeMicroserviceImp(FledgeMicroservice):\n                async def change(self):\n                    pass\n                async def shutdown(self):\n                    pass\n            fm = FledgeMicroserviceImp()\n        with pytest.raises(TypeError):\n            class FledgeMicroserviceImp(FledgeMicroservice):\n                def run(self):\n                    pass\n                
async def shutdown(self):\n                    pass\n            fm = FledgeMicroserviceImp()\n        with pytest.raises(TypeError):\n            class FledgeMicroserviceImp(FledgeMicroservice):\n                def run(self):\n                    pass\n                async def change(self):\n                    pass\n            fm = FledgeMicroserviceImp()\n\n    def test_constructor_good(self, loop):\n        class FledgeMicroserviceImp(FledgeMicroservice):\n            def __init__(self):\n                super().__init__()\n\n            def run(self):\n                pass\n\n            async def change(self):\n                pass\n\n            async def shutdown(self):\n                pass\n\n            async def get_track(self):\n                pass\n\n            async def add_track(self):\n                pass\n\n        with patch.object(asyncio, 'get_event_loop', return_value=loop):\n            with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):\n                with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:\n                    with patch.object(MicroserviceManagementClient, 'create_configuration_category', return_value=None):\n                        with patch.object(MicroserviceManagementClient, 'create_child_category',\n                                          return_value=None):\n                            with patch.object(MicroserviceManagementClient, 'get_configuration_category', return_value=_DEFAULT_CONFIG):\n                                with patch.object(ReadingsStorageClientAsync, '__init__',\n                                                  return_value=None) as rsc_async_patch:\n                                    with patch.object(StorageClientAsync, '__init__',\n                                                      return_value=None) as sc_async_patch:\n                                        with 
patch.object(FledgeMicroservice, '_make_microservice_management_app', return_value=None) as make_patch:\n                                             with patch.object(FledgeMicroservice, '_run_microservice_management_app', side_effect=None) as run_patch:\n                                                 with patch.object(FledgeProcess, 'register_service_with_core', return_value={'id':'bla'}) as reg_patch:\n                                                     with patch.object(FledgeMicroservice, '_get_service_registration_payload', return_value=None) as payload_patch:\n                                                        fm = FledgeMicroserviceImp()\n        # from FledgeProcess\n        assert fm._core_management_host == 'corehost'\n        assert fm._core_management_port == 32333\n        assert fm._name == 'sname'\n        assert hasattr(fm, '_core_microservice_management_client')\n        assert hasattr(fm, '_readings_storage_async')\n        assert hasattr(fm, '_storage_async')\n        assert hasattr(fm, '_start_time')\n        # from FledgeMicroservice\n        assert hasattr(fm, '_microservice_management_app')\n        assert hasattr(fm, '_microservice_management_handler')\n        assert hasattr(fm, '_microservice_management_server')\n        assert hasattr(fm, '_microservice_management_host')\n        assert hasattr(fm, '_microservice_management_port')\n        assert hasattr(fm, '_microservice_id')\n        assert hasattr(fm, '_type')\n        assert hasattr(fm, '_protocol')\n\n    def test_constructor_exception(self, loop):\n        class FledgeMicroserviceImp(FledgeMicroservice):\n            def __init__(self):\n                super().__init__()\n\n            def run(self):\n                pass\n\n            async def change(self):\n                pass\n\n            async def shutdown(self):\n                pass\n\n            async def get_track(self):\n                pass\n\n            async def add_track(self):\n                
pass\n\n        with patch.object(asyncio, 'get_event_loop', return_value=loop):\n            with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):\n                with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:\n                    with patch.object(MicroserviceManagementClient, 'create_configuration_category', return_value=None):\n                        with patch.object(MicroserviceManagementClient, 'create_child_category',\n                                          return_value=None):\n                            with patch.object(MicroserviceManagementClient, 'get_configuration_category', return_value=_DEFAULT_CONFIG):\n                                with patch.object(ReadingsStorageClientAsync, '__init__',\n                                                  return_value=None) as rsc_async_patch:\n                                    with patch.object(StorageClientAsync, '__init__',\n                                                      return_value=None) as sc_async_patch:\n                                        with patch.object(FledgeMicroservice, '_make_microservice_management_app', side_effect=Exception()) as make_patch:\n                                            with patch.object(_logger, 'exception') as logger_patch:\n                                                with pytest.raises(Exception) as excinfo:\n                                                    fm = FledgeMicroserviceImp()\n                                            args = logger_patch.call_args\n                                            assert 'Unable to initialize FledgeMicroservice' == args[0][1]\n\n    @pytest.mark.asyncio\n    async def test_ping(self, loop):\n        class FledgeMicroserviceImp(FledgeMicroservice):\n            def __init__(self):\n                super().__init__()\n\n            def run(self):\n                pass\n\n            async def change(self):\n        
        pass\n\n            async def shutdown(self):\n                pass\n\n            async def get_track(self):\n                pass\n\n            async def add_track(self):\n                pass\n\n        with patch.object(asyncio, 'get_event_loop', return_value=loop):\n            with patch.object(sys, 'argv', ['pytest', '--address', 'corehost', '--port', '32333', '--name', 'sname']):\n                with patch.object(MicroserviceManagementClient, '__init__', return_value=None) as mmc_patch:\n                    with patch.object(MicroserviceManagementClient, 'create_configuration_category', return_value=None):\n                        with patch.object(MicroserviceManagementClient, 'create_child_category',\n                                          return_value=None):\n                            with patch.object(MicroserviceManagementClient, 'get_configuration_category', return_value=_DEFAULT_CONFIG):\n                                with patch.object(ReadingsStorageClientAsync, '__init__',\n                                                  return_value=None) as rsc_async_patch:\n                                    with patch.object(StorageClientAsync, '__init__',\n                                                      return_value=None) as sc_async_patch:\n                                        with patch.object(FledgeMicroservice, '_make_microservice_management_app', return_value=None) as make_patch:\n                                             with patch.object(FledgeMicroservice, '_run_microservice_management_app', side_effect=None) as run_patch:\n                                                 with patch.object(FledgeProcess, 'register_service_with_core', return_value={'id':'bla'}) as reg_patch:\n                                                     with patch.object(FledgeMicroservice, '_get_service_registration_payload', return_value=None) as payload_patch:\n                                                         with patch.object(web, 
'json_response', return_value=None) as response_patch:\n                                                             # called once on FledgeProcess init for _start_time, once for ping\n                                                             with patch.object(time, 'time', return_value=1) as time_patch:\n                                                                 fm = FledgeMicroserviceImp()\n                                                                 await fm.ping(None)\n        response_patch.assert_called_once_with({'uptime': 0})\n"
  },
  {
    "path": "tests/unit/python/fledge/services/common/test_services_common_utils.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom aiohttp import web\nimport time\nfrom unittest.mock import patch\n\nfrom fledge.services.common.microservice_management import routes\nfrom fledge.services.common.microservice import FledgeMicroservice\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.services.common import utils as utils\n\n__author__ = \"Praveen Garg\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass mServiceThing(FledgeMicroservice):\n\n    async def shutdown(self, request):\n        return web.json_response({'shutdown': 1})\n\n    async def change(self, request):\n        pass\n\n    async def get_track(self, request):\n        pass\n\n    async def add_track(self, request):\n        pass\n\n    def run(self):\n        pass\n\n\nclass TestUtils:\n\n    utils._MAX_ATTEMPTS = 2\n\n    async def test_ping_service_pass(self, aiohttp_server, loop):\n        # GIVEN a service is running at a given URL\n        app = web.Application()\n        with patch.object(FledgeMicroservice, \"__init__\", return_value=None):\n            m = mServiceThing()\n            m._start_time = time.time()\n            # fill route table\n            routes.setup(app, m)\n\n            server = await aiohttp_server(app)\n            await server.start_server(loop=loop)\n\n            # WHEN the service is pinged with a valid URL\n            with patch.object(utils._logger, \"debug\") as patch_logger:\n                service = ServiceRecord(\"d\", \"test\", \"Southbound\", \"http\", server.host, 1, server.port)\n                url_ping = \"{}://{}:{}/fledge/service/ping\".format(service._protocol, service._address,\n                                                                   service._management_port)\n                log_params = 'Ping received for Service %s id %s at url %s', service._name, 
service._id, url_ping\n                resp = await utils.ping_service(service, loop=loop)\n                # THEN ping response is received\n                assert resp is True\n            patch_logger.assert_called_once_with(*log_params)\n\n    async def test_ping_service_fail_bad_url(self, aiohttp_server, loop):\n        # GIVEN a service is running at a given URL\n        app = web.Application()\n        with patch.object(FledgeMicroservice, \"__init__\", return_value=None):\n            m = mServiceThing()\n            m._start_time = time.time()\n            # fill route table\n            routes.setup(app, m)\n\n            server = await aiohttp_server(app)\n            await server.start_server(loop=loop)\n\n            # WHEN the service is pinged with a BAD URL\n            with patch.object(utils._logger, \"error\") as log:\n                service = ServiceRecord(\"d\", \"test\", \"Southbound\", \"http\", server.host+\"1\", 1, server.port)\n                url_ping = \"{}://{}:{}/fledge/service/ping\".format(service._protocol, service._address, service._management_port)\n                log_params = 'Ping not received for Service %s id %s at url %s attempt_count %s', service._name, service._id, \\\n                       url_ping, utils._MAX_ATTEMPTS+1\n                resp = await utils.ping_service(service, loop=loop)\n                # THEN ping response is NOT received\n                assert resp is False\n            log.assert_called_once_with(*log_params)\n\n    async def test_shutdown_service_pass(self, aiohttp_server, loop):\n        # GIVEN a service is running at a given URL\n        app = web.Application()\n        with patch.object(FledgeMicroservice, \"__init__\", return_value=None):\n            m = mServiceThing()\n            # fill route table\n            routes.setup(app, m)\n\n            server = await aiohttp_server(app)\n            await server.start_server(loop=loop)\n\n            # WHEN shutdown call is made at the valid 
URL\n            with patch.object(utils._logger, \"info\") as log:\n                service = ServiceRecord(\"d\", \"test\", \"Southbound\", \"http\", server.host, 1, server.port)\n                url_shutdown = \"{}://{}:{}/fledge/service/shutdown\".format(service._protocol, service._address,\n                                                                            service._management_port)\n                log_params = 'Service %s, id %s at url %s successfully shutdown', service._name, service._id, url_shutdown\n                resp = await utils.shutdown_service(service, loop=loop)\n                # THEN shutdown returns success\n                assert resp is True\n            assert 2 == log.call_count\n            log.assert_called_with(*log_params)\n\n    async def test_shutdown_service_fail_bad_url(self, aiohttp_server, loop):\n        # GIVEN a service is running at a given URL\n        app = web.Application()\n        with patch.object(FledgeMicroservice, \"__init__\", return_value=None):\n            m = mServiceThing()\n            # fill route table\n            routes.setup(app, m)\n\n            server = await aiohttp_server(app)\n            await server.start_server(loop=loop)\n\n            # WHEN shutdown call is made at the invalid URL\n            with patch.object(utils._logger, \"info\") as log1:\n                with patch.object(utils._logger, \"exception\") as log2:\n                    service = ServiceRecord(\"d\", \"test\", \"Southbound\", \"http\", server.host, 1, server.port+1)\n                    log_params1 = \"Shutting down the %s service %s ...\", service._type, service._name\n                    resp = await utils.shutdown_service(service, loop=loop)\n                    # THEN shutdown fails\n                    assert resp is False\n                assert log2.called is True\n            assert log1.called is True\n            log1.assert_called_with(*log_params1)\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/certs/fledge.pem",
    "content": "-----BEGIN CERTIFICATE-----\nMIIDZTCCAk0CFD3ingfR4r63x4YNLSZxpNpzlpFrMA0GCSqGSIb3DQEBCwUAMG8x\nCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRAwDgYDVQQKDAdPU0lz\nb2Z0MRAwDgYDVQQDDAdmb2dsYW1wMScwJQYJKoZIhvcNAQkBFhhmb2dsYW1wQGdv\nb2dsZWdyb3Vwcy5jb20wHhcNMTkwNzE5MTEwOTIyWhcNMjAwNzE4MTEwOTIyWjBv\nMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEQMA4GA1UECgwHT1NJ\nc29mdDEQMA4GA1UEAwwHZm9nbGFtcDEnMCUGCSqGSIb3DQEJARYYZm9nbGFtcEBn\nb29nbGVncm91cHMuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA\nqH8Yc4zMwFrQCDc2U9DDlZtpUjNcZCpyUtWooBIyIFLEV7JWOv/o9tnwA20X617j\nI7JdIbKKj44Pj9lzHPNzqCM4DXlhgjCl1fmoFkf9CjPgAMmMG+oYvYzEDbad3bCB\nG/xaaFCV7GWOvoOv5dUCz7wBDzPsuY8dOnHXMvbyIf5QYsa75FVnPCqJGW8kJFs/\nZ9MlZdyQmKAP5exaRxiVIHHpEPCBbo1NEhCfZlIOokx6qsEZAO4rNJb5NDqtb6+9\nb8gWQToOk1VzQklHaBHRKGUGilB0ufvoLibnSZYzERTMyNvMmXncS8J9t+06ojD0\nKzURUKQvF7dELIMOicfQuwIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQBNbtONbaGa\n51zdu9aFijO5fOtdWVY7n5ybuIkZckY3uNWPW01X/v8/WCE/7KBKD6nki3n1Wu7g\npeYY42YwYhE0QmozPkSzPOFZUVyjaJp5iBnY+0lg7/ftOH+jun/hK7OJBZLIDe/S\nSDPFAHvwsFyPgJrYXIVTdQy68JnEnPjNL/AjS+BqT1rEVZ2IuCV96NPmzni0H6pj\n86hnMK0uBDOwQqLYEdVmd0VkzPkVVuXkamGXxMkvi0hnK5LkiepSgyqHmRWYnzV9\nlNX1xIzdYDjH46/JwEhNuUrIpziSxXDMwX2teQZTG8KQgOiKXl4TaRpJNaOGRLbV\ns+c0jsUqNDzh\n-----END CERTIFICATE-----\n-----BEGIN RSA PRIVATE 
KEY-----\nMIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCofxhzjMzAWtAI\nNzZT0MOVm2lSM1xkKnJS1aigEjIgUsRXslY6/+j22fADbRfrXuMjsl0hsoqPjg+P\n2XMc83OoIzgNeWGCMKXV+agWR/0KM+AAyYwb6hi9jMQNtp3dsIEb/FpoUJXsZY6+\ng6/l1QLPvAEPM+y5jx06cdcy9vIh/lBixrvkVWc8KokZbyQkWz9n0yVl3JCYoA/l\n7FpHGJUgcekQ8IFujU0SEJ9mUg6iTHqqwRkA7is0lvk0Oq1vr71vyBZBOg6TVXNC\nSUdoEdEoZQaKUHS5++guJudJljMRFMzI28yZedxLwn237TqiMPQrNRFQpC8Xt0Qs\ngw6Jx9C7AgMBAAECggEAdtzxytHQvwFRL/qDAK2My8VOjwZcbuziqTzAL+umINdC\nWvsbiZNuLHWhs0kKTqgpY803lcX1qT92CuxDIHE9bacqq5atCsJ2unPb95vhDYl6\nxBNqG2cQ/OaIh4QD6ZfR/IQQ4vW2TYV3JT6Qn3mc+h6OQMNIg75JyCj2vqUmOoOf\nzaUhXzfBDLSToHm22kyobjkva0Ugls7LNUVw3AOlnkxZvquxdpK6itfa43jhd3tH\nsCEAt6blwTSPLvBfZ//HuD6IF0YuAi9S/8BHcEq+8lC5iXv0FOPzlQmfE/EW96/9\naGeD10JjMW9s4IIHQ3UrX0DBWcpWmrZqTjDlrx5ZwQKBgQDXAJ9bzETugbCPUTLS\nFklf+9jGi9IpM9AHZt/7mzjQ/NwVwUy9xrsmiRRbsJPl3IVD0jdJ4S/Hfw8zFmwK\nr9kxpXVagZV4o47jX5Skgc4sjhrifBDit0a+BlnEI5kfP5KgnpSN1BCvyRhe+5N4\n5/7vgZx7F4N5sNitUHhYbMIfnQKBgQDIoEbr1MTaVlTDZnCNTcuwZ8K+sElUlurd\n5zCpFaIutcJBpDgptNJG+ZQWX9PWjYmOOkGvzSx8NJwEHV3cgydkXVvmuumeFeMP\nvFM6kuH4KmTU1ojxHUlXBjHCosapuDhMi4e4VR/YuJr6uLll6SveAoYzHRxdfGo7\n7xKH3ac+NwKBgA8SOhmDPinB6ZCCTp+vdEFINC/myTqeKSz7pyPKA7eSohLcU/bR\noXjYDxdGT3fDd4wDhmClamX/oB8iqTwui3kRciKABuxH+tIxdwf5GWzCIVxS+hQK\naOkVJOG85RTtreeYdi1i+jB4Vj8CP5owGQzM9x0hztOO9AFiLK12Ij15AoGAGm8b\nyRjoswfq9S/7JnMYom6ZdzyM/OtBmOlMPQsPqm3iYXm8uKoNhrJ9s5D+vWc6t5Wv\nb/VtphPcdqJT6qkROKUgZb885spld35NzQrrYSJc1LpLotFEB4ZWahm+aUBPkq5T\nvJLitlBkgyJxsx7M29yjR/rO8PZinPD8FRC8Z3MCgYANlUMEbuLvukH4QwPzVetP\nVFoQdPBgihxN5Ji2IWYg0x+HWf1CzPv74KKKCmwzGdFNVcR8DS1zhR5sxR/7RDhv\nVY7ujCPx1L7/oPUS5Rgxt8+oGEBzDoSGfcu0rpTFV6Ud0M4wL4JORyngTRtYH0lI\nF42ikDx18xBJyQ2kgcYyzQ==\n-----END RSA PRIVATE KEY-----\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/certs/fledge.txt",
    "content": "Test file for certificate store"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/certs/json/test.json",
    "content": "{\n \"certificate\": \"-----BEGIN CERTIFICATE-----\\nMIIETDCCAjSgAwIBAgIUTrvpjzgpyt6L9AWj2E0W4ps2woswDQYJKoZIhvcNAQEL\\nBQAwKzEpMCcGA1UEAwwgc2VsZnNlcnZlX3Byb2R1Y3Rpb25fcGtpIFJvb3QgQ0Ew\\nHhcNMTcwOTI2MDQ1NzM3WhcNMTcxMTIxMDQ1ODA3WjAPMQ0wCwYDVQQDEwR0ZXN0\\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAtuWgQ5P9KjpgplOyejAE\\nj5pDgSmQ6mZkbqY6gnIIKlw1I4Vulaigmeiir37NcAHtLA9HrpqafKoQqt3RPIFq\\nMq2qb728JUNqdkmgp1QRnXdRVqrvGxT3o6XLMmxpkniwL+f3A/qFzuBgDJVltKLn\\n1e0O3conPiiGtqaZ70+1lccKkKviLoin13T+27gFFws6dT74znCxT8c/ikXGMja1\\nTDEddd+qkXlo4At104Fo7Uhx95JWorSljSTaCQkEeOjX+8SJHkARSrKeGEvkBESp\\nXD23oUY9MlxGQnldioLAI5Eu8fRo3PKQUhuFnuoxTr0pO7R60AEe8E0sVU/cE3Ut\\nswIDAQABo4GDMIGAMA4GA1UdDwEB/wQEAwIDqDAdBgNVHSUEFjAUBggrBgEFBQcD\\nAQYIKwYBBQUHAwIwHQYDVR0OBBYEFLfcxphP+aSe61Mdi8IDP7bBvGXdMB8GA1Ud\\nIwQYMBaAFA2X7xP+NiXXNXhJy3UQqocENxRTMA8GA1UdEQQIMAaCBHRlc3QwDQYJ\\nKoZIhvcNAQELBQADggIBAH3oLFPSSgubbwhXycm+oTMnEZyUwKfwAjkc2mykDZ/p\\nPPrHZKCfMuWNf8mp7mK0K8O2JjBKbUlUUJZgd/8/9d0vLqU7Hf97Xk/8d0Rxwqgd\\n2OmdujQpj49NFoAC+jAcGFXASwvGAzWg4ylTi+zvpUbVpLk0hOpYnJFvxEcXj0ab\\nul9Mq0hrjarmkPAoDhmWjUQG8EKiJEelIv5r4OuNIDl+N5B3BNU+g8nz4GWJKIbP\\n6dEb98GJh0tFqOHoxewVmrCmMnsGfJYJDqLg+CwXHSNS8xYQnuFzcJXQ4j7Kge5P\\nCeMB6fizgTiUXFexjbTv6RUk1DfOywtRu7Wus9joTpDILb/WlIUlGvRj2j395BvK\\naq5nLcgSpmO46776uobh6MN6se1kmpJ20sjUZWEtJsKODSAv7LA9jsMWhh1SGEWf\\nUuQ1hUKHZ2073hgc0InmYGGyTJAnI3mYIbL+ddprK1CpORAH2cruqn9I192sCWNw\\npZIxuMCiRUrFWitKEkFwPfmDbVhPQ/ZvxMcdAHXJ+ZQ9RxcanmcBGnlvCjidOBZa\\naLN2/Y99M26z+XcYG9rN0fx5Htf4UDENQ8kp8TITmyHdwvqVox/UXcPWzV3MD7+I\\nn0UdA2lqnM2Rv+kg2MGm0u9Y/noZz4IS4YTlfxMbGF212ROcCC9/oQYy321NqBns\\n-----END CERTIFICATE-----\",\n \"private_key\": \"-----BEGIN RSA PRIVATE 
KEY-----\\nMIIEpQIBAAKCAQEAtuWgQ5P9KjpgplOyejAEj5pDgSmQ6mZkbqY6gnIIKlw1I4Vu\\nlaigmeiir37NcAHtLA9HrpqafKoQqt3RPIFqMq2qb728JUNqdkmgp1QRnXdRVqrv\\nGxT3o6XLMmxpkniwL+f3A/qFzuBgDJVltKLn1e0O3conPiiGtqaZ70+1lccKkKvi\\nLoin13T+27gFFws6dT74znCxT8c/ikXGMja1TDEddd+qkXlo4At104Fo7Uhx95JW\\norSljSTaCQkEeOjX+8SJHkARSrKeGEvkBESpXD23oUY9MlxGQnldioLAI5Eu8fRo\\n3PKQUhuFnuoxTr0pO7R60AEe8E0sVU/cE3UtswIDAQABAoIBAB3kQ6An1K2NIvSs\\nIzRTGru5k6TNfVDB8VIgOtnM90atEUY/7YXqLG1bFxOlnr/aoL+ds7J2tB8B0H2M\\niUDhSdEEjyF6GgDhFspEWExgsgxRTuriPvfnIl4Nn7sa+tokfW8m8zkkPbBE/Y2w\\n8RFnuoo9FzvqaSWAjBvX+LqjBWN4AGHxPcBcZs/H4U7RvdO0etX2Zbpjs62K/KO3\\ni3e4MXgGZtj0Vx2LYD/AYSbqEoo1v8/U1AbGmsCTTNc2EwARhyb1zUgO7yc9yft6\\nUoAC6pZjxOFsJtwz26jpNdqXz9t1xml3XnNusqHe+hgStQlIL2mgU8qj18q5pqpu\\nkehM9LECgYEAxiU9WA7kQTp8hGKTRqrRbcGBsLTGxsYeILFQggtJBOZ5ngOH35Nd\\nUIzQ1EjKODFEzGH9qPBBfE6BNdl3naHuYgIS3Uz8FCAwsOZAW6X8tC7VU/ZrwKUA\\nF3Rc2iek+J1bdaz5o3hnR2eY/6kVuNHznxqIzK+JuZ7Dq/wEMlAL4gkCgYEA7Eyb\\n4uyQFMXfPLiZPn7opNlgmi4i5lNLbPAjJq0dagdP8HbhLBqQThMcyAnu9rJmNm6t\\n2Wu8kkKIpcZiGOVzFQvoTWOm6KGU/nIFFH1p6AAz/hvhATFA8HpLe9B7la9T6c5R\\nabbtFbUNrHyoieMsIxkrjPo1zVIThLJeIVdoUNsCgYEAwuhKyV4MpSU06rxUhsTs\\nsXwRaJLKnSiw5hPFT8ZuE0XrB8YNV52LwvphSRA46sF8HVeevxlmMTK/4wqBoSty\\nZDIKAGoD5IAtpTU4xW4nf845xhe1spAb4PZzh5xLqMqQ9tYp0eVUImcDlyjp1x2e\\n+TiOrFlXrqE/dOO39Q3MQpECgYEA5plMd4OMh/kiBcvQIOEQf+9zCoODo2od7U3b\\nv96pGdPQ+0XIMJYrxUV5jO3EuhMXFH+mQMuW1tT/LWgQS2N/j0ZziTJ6rAMjt7vl\\noT1SoQmxs4XZaqR6TzPJfibStBzJsx2Y7aWKcOijU3TDtOxxIj9p9MYowxoZ2iGH\\nItp9/okCgYEAh6lbVbf77NArp1FsocQoeZ2ZL1hsOXpmRwpNmePPA6DfjqJyttpH\\ngSh8Z0daqMvojStilhwIkEURy9ITuPYoKt2blWQY8RY//H1zFnwKg2AJR5PvlWcT\\n0JBxt4cHMYy6jW2Q8/ZTVuttPd+UVIDehTFN6oyWF6FBgKxLO5bSjzc=\\n-----END RSA PRIVATE KEY-----\",\n \"issuing_ca\": \"-----BEGIN 
CERTIFICATE-----\\nMIIFJjCCAw6gAwIBAgIUDUnfHPvwqpztM2lJh40lVUmTjV8wDQYJKoZIhvcNAQEL\\nBQAwKzEpMCcGA1UEAwwgc2VsZnNlcnZlX3Byb2R1Y3Rpb25fcGtpIFJvb3QgQ0Ew\\nHhcNMTcwODI1MDUwNTEzWhcNMjcwODIzMDUwNTQzWjArMSkwJwYDVQQDDCBzZWxm\\nc2VydmVfcHJvZHVjdGlvbl9wa2kgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEBBQAD\\nggIPADCCAgoCggIBAM5LpBH9Qyg5VjTkdMj61gt72CVIrqE5s9iD+Bpb2hlLnWdb\\n52FtcgCxIRca8kJhCYK53dNVmCP8d7LSzogxdIHyzEe5f405ukJVZIbYEYcA4BLK\\n3UU322bYJkTTToABwV+XhlHjLhaze9GLo4snCklxAzafWvqR1C0faB2dPtq5WyQi\\n/2uCvGHcpqe/ozNvZON6eYkjQpCwHftR0TwVVb435hvJb6FeeV95MgVq/C0pZFG4\\nGLgJNj4GK4BtG2wsIDVMMcaoFrSKfKDqyE+4ekvzYP4nDzbYK5XsgH7/7XB9tL7w\\nwMVj0J1mR3TbxVTBZyk509F0oXqBcNb6vvybJevhDlkXMQPgxyOmogm6GUQ3beMX\\nsRpN5uotnbWaF0MQbgo8YrgQX3BGrLmKRfk9rIMoBKabptDMRw5Df1ouu5D9Jb3b\\n3nlelkRXR5qb0R68CM0S78KqVB32NQsLixQ58YUKmcvlQcaIF9cwC28+LYm4sRq/\\nV0tCl68K19PmgZT+Qr0Apakw+vlQ8ojvT+/wTVtg+gphuG7Ovv00xRXa/dpoC3Ff\\nOktxUmu3bh4YU/IVCT3+YbwB7vyOfKGTwSmVK+s5gt4MDM65zX58xa85psJI8mqP\\nCwKGDleglrAIrHxxg2wKrIibiIriSnjJsKqCzpcm9+6V4zewwQFqdfr1R92rAgMB\\nAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\\nBBQNl+8T/jYl1zV4Sct1EKqHBDcUUzANBgkqhkiG9w0BAQsFAAOCAgEAYo+vaKzi\\nW2YTogGvuDvWnFzDtRa6zfB1UNqUTiacmr9ISqTDGJPOE7o7+5//31yS63/VuPAb\\nsskfjtbywGUcjLEoa//vqDUA5VPQSr2MGpqZItt+QQ7eIQPQEt6IaqohmIxvgyDI\\nvV35Ld06slZju9IZJdOx5GyRU49ZrhTciNeHBFJbPTzTWw7swjP1Kj13BJ9++YlU\\ndHHnJecMgRPXbbFn8cThcIUwhaTEWFhlC7zc4YUpTm8nmHaCLmG8TM7tYLaymHqd\\nypMBa3TrGr4+XIgwkWWb9h9+JnlBXc+aq2pJulErzN3raytzv+iTOwcI+YCufgee\\nAf25Zzk9t75KIHjSdqu1U/QXiPSgJgr7o2yrtZbeLT+eMHuhCfbuWduipuRgTlUk\\na8hvoiFDabCrlJABDYHNO8WMCIqX9qja0crqA1JbPXAEMiYwdtoU+p27CtNupGVE\\nQENamacyYD5VhApTnxACwwakMep0jDYQUXUYTeLz6Aj3vVUJl54/3Uqbh6fxKamh\\n8xDeb+HjhO5UKDkfAH0qe17qSGGVftMI3YMPCEqrvnnoVl8VHxpvdVjjJoHEEKoE\\ne8mrX4Jp9O3xVcGFItMQQzvWc1A47ewqIy6x+bk+0W8fL6+rKd+8U7aRIvC7LFiw\\nluvq3QIacuHULtox36A7HFmlYDQ1ozh+tLI=\\n-----END CERTIFICATE-----\",\n \"keystorepass\": \"ibmkey\",\n \"destination_host\": \"desthost\"\n}"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/certs/pem/fledge.pem",
    "content": "-----BEGIN CERTIFICATE-----\nMIIDZTCCAk0CFD3ingfR4r63x4YNLSZxpNpzlpFrMA0GCSqGSIb3DQEBCwUAMG8x\nCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRAwDgYDVQQKDAdPU0lz\nb2Z0MRAwDgYDVQQDDAdmb2dsYW1wMScwJQYJKoZIhvcNAQkBFhhmb2dsYW1wQGdv\nb2dsZWdyb3Vwcy5jb20wHhcNMTkwNzE5MTEwOTIyWhcNMjAwNzE4MTEwOTIyWjBv\nMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEQMA4GA1UECgwHT1NJ\nc29mdDEQMA4GA1UEAwwHZm9nbGFtcDEnMCUGCSqGSIb3DQEJARYYZm9nbGFtcEBn\nb29nbGVncm91cHMuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA\nqH8Yc4zMwFrQCDc2U9DDlZtpUjNcZCpyUtWooBIyIFLEV7JWOv/o9tnwA20X617j\nI7JdIbKKj44Pj9lzHPNzqCM4DXlhgjCl1fmoFkf9CjPgAMmMG+oYvYzEDbad3bCB\nG/xaaFCV7GWOvoOv5dUCz7wBDzPsuY8dOnHXMvbyIf5QYsa75FVnPCqJGW8kJFs/\nZ9MlZdyQmKAP5exaRxiVIHHpEPCBbo1NEhCfZlIOokx6qsEZAO4rNJb5NDqtb6+9\nb8gWQToOk1VzQklHaBHRKGUGilB0ufvoLibnSZYzERTMyNvMmXncS8J9t+06ojD0\nKzURUKQvF7dELIMOicfQuwIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQBNbtONbaGa\n51zdu9aFijO5fOtdWVY7n5ybuIkZckY3uNWPW01X/v8/WCE/7KBKD6nki3n1Wu7g\npeYY42YwYhE0QmozPkSzPOFZUVyjaJp5iBnY+0lg7/ftOH+jun/hK7OJBZLIDe/S\nSDPFAHvwsFyPgJrYXIVTdQy68JnEnPjNL/AjS+BqT1rEVZ2IuCV96NPmzni0H6pj\n86hnMK0uBDOwQqLYEdVmd0VkzPkVVuXkamGXxMkvi0hnK5LkiepSgyqHmRWYnzV9\nlNX1xIzdYDjH46/JwEhNuUrIpziSxXDMwX2teQZTG8KQgOiKXl4TaRpJNaOGRLbV\ns+c0jsUqNDzh\n-----END CERTIFICATE-----\n-----BEGIN RSA PRIVATE 
KEY-----\nMIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCofxhzjMzAWtAI\nNzZT0MOVm2lSM1xkKnJS1aigEjIgUsRXslY6/+j22fADbRfrXuMjsl0hsoqPjg+P\n2XMc83OoIzgNeWGCMKXV+agWR/0KM+AAyYwb6hi9jMQNtp3dsIEb/FpoUJXsZY6+\ng6/l1QLPvAEPM+y5jx06cdcy9vIh/lBixrvkVWc8KokZbyQkWz9n0yVl3JCYoA/l\n7FpHGJUgcekQ8IFujU0SEJ9mUg6iTHqqwRkA7is0lvk0Oq1vr71vyBZBOg6TVXNC\nSUdoEdEoZQaKUHS5++guJudJljMRFMzI28yZedxLwn237TqiMPQrNRFQpC8Xt0Qs\ngw6Jx9C7AgMBAAECggEAdtzxytHQvwFRL/qDAK2My8VOjwZcbuziqTzAL+umINdC\nWvsbiZNuLHWhs0kKTqgpY803lcX1qT92CuxDIHE9bacqq5atCsJ2unPb95vhDYl6\nxBNqG2cQ/OaIh4QD6ZfR/IQQ4vW2TYV3JT6Qn3mc+h6OQMNIg75JyCj2vqUmOoOf\nzaUhXzfBDLSToHm22kyobjkva0Ugls7LNUVw3AOlnkxZvquxdpK6itfa43jhd3tH\nsCEAt6blwTSPLvBfZ//HuD6IF0YuAi9S/8BHcEq+8lC5iXv0FOPzlQmfE/EW96/9\naGeD10JjMW9s4IIHQ3UrX0DBWcpWmrZqTjDlrx5ZwQKBgQDXAJ9bzETugbCPUTLS\nFklf+9jGi9IpM9AHZt/7mzjQ/NwVwUy9xrsmiRRbsJPl3IVD0jdJ4S/Hfw8zFmwK\nr9kxpXVagZV4o47jX5Skgc4sjhrifBDit0a+BlnEI5kfP5KgnpSN1BCvyRhe+5N4\n5/7vgZx7F4N5sNitUHhYbMIfnQKBgQDIoEbr1MTaVlTDZnCNTcuwZ8K+sElUlurd\n5zCpFaIutcJBpDgptNJG+ZQWX9PWjYmOOkGvzSx8NJwEHV3cgydkXVvmuumeFeMP\nvFM6kuH4KmTU1ojxHUlXBjHCosapuDhMi4e4VR/YuJr6uLll6SveAoYzHRxdfGo7\n7xKH3ac+NwKBgA8SOhmDPinB6ZCCTp+vdEFINC/myTqeKSz7pyPKA7eSohLcU/bR\noXjYDxdGT3fDd4wDhmClamX/oB8iqTwui3kRciKABuxH+tIxdwf5GWzCIVxS+hQK\naOkVJOG85RTtreeYdi1i+jB4Vj8CP5owGQzM9x0hztOO9AFiLK12Ij15AoGAGm8b\nyRjoswfq9S/7JnMYom6ZdzyM/OtBmOlMPQsPqm3iYXm8uKoNhrJ9s5D+vWc6t5Wv\nb/VtphPcdqJT6qkROKUgZb885spld35NzQrrYSJc1LpLotFEB4ZWahm+aUBPkq5T\nvJLitlBkgyJxsx7M29yjR/rO8PZinPD8FRC8Z3MCgYANlUMEbuLvukH4QwPzVetP\nVFoQdPBgihxN5Ji2IWYg0x+HWf1CzPv74KKKCmwzGdFNVcR8DS1zhR5sxR/7RDhv\nVY7ujCPx1L7/oPUS5Rgxt8+oGEBzDoSGfcu0rpTFV6Ud0M4wL4JORyngTRtYH0lI\nF42ikDx18xBJyQ2kgcYyzQ==\n-----END RSA PRIVATE KEY-----\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/control_service/test_acl_management.py",
    "content": "import json\nfrom unittest.mock import MagicMock, patch\nimport pytest\nfrom aiohttp import web\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.web import middleware\nfrom fledge.services.core import connect\nfrom fledge.services.core import routes\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nasync def mock_coro(*args, **kwargs):\n    return None if len(args) == 0 else args[0]\n\n\nclass TestACLManagement:\n    \"\"\" ACL API tests \"\"\"\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop, middlewares=[middleware.optional_auth_middleware])\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    async def test_get_all_acls(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        result = {'count': 2, 'rows': [\n            {'name': 'demoACL', 'service': [{'name': 'Fledge Storage'}, {'type': 'Southbound'}],\n             'url': [{'url': '/fledge/south/operation', 'acl': [{'type': 'Southbound'}]}]},\n            {'name': 'testACL', 'service': [], 'url': []}]}\n        payload = {\"return\": [\"name\", \"service\", \"url\"]}\n        value = await mock_coro(result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as patch_query_tbl:\n                resp = await client.get('/fledge/ACL')\n                assert 200 == resp.status\n                result = await resp.text()\n                assert 'acls' in result\n            args, _ = patch_query_tbl.call_args\n            assert 'control_acl' == args[0]\n          
  assert payload == json.loads(args[1])\n\n    async def test_bad_get_acl_by_name(self, client):\n        acl_name = 'blah'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        result = {\"count\": 0, \"rows\": []}\n        payload = {\"return\": [\"name\", \"service\", \"url\"], \"where\": {\"column\": \"name\", \"condition\": \"=\",\n                                                                   \"value\": acl_name}}\n        value = await mock_coro(result)\n        message = \"ACL with name {} is not found.\".format(acl_name)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as patch_query_tbl:\n                resp = await client.get('/fledge/ACL/{}'.format(acl_name))\n                assert 404 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            args, _ = patch_query_tbl.call_args\n            assert 'control_acl' == args[0]\n            assert payload == json.loads(args[1])\n\n    async def test_good_get_acl_by_name(self, client):\n        acl_name = 'demoACL'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        result = {'count': 1, 'rows': [\n            {'name': acl_name, 'service': [{'name': 'Fledge Storage'}, {'type': 'Southbound'}],\n             'url': [{'url': '/fledge/south/operation', 'acl': [{'type': 'Southbound'}]}]}]}\n        payload = {\"return\": [\"name\", \"service\", \"url\"], \"where\": {\"column\": \"name\", \"condition\": \"=\",\n                                                                   \"value\": acl_name}}\n        value = await mock_coro(result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with 
patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as patch_query_tbl:\n                resp = await client.get('/fledge/ACL/{}'.format(acl_name))\n                assert 200 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert acl_name == json_response['name']\n            args, _ = patch_query_tbl.call_args\n            assert 'control_acl' == args[0]\n            assert payload == json.loads(args[1])\n\n    @pytest.mark.parametrize(\"payload, message\", [\n        ({}, \"ACL name is required.\"),\n        ({\"name\": 1}, \"ACL name must be a string.\"),\n        ({\"name\": \"\"}, \"ACL name cannot be empty.\"),\n        ({\"name\": \"test\"}, \"service parameter is required.\"),\n        ({\"name\": \"test\", \"service\": 1}, \"service must be a list.\"),\n        ({\"name\": \"test\", \"service\": []}, \"service list cannot be empty.\"),\n        ({\"name\": \"test\", \"service\": [1]}, \"service elements must be an object.\"),\n        ({\"name\": \"test\", \"service\": [\"1\"]}, \"service elements must be an object.\"),\n        ({\"name\": \"test\", \"service\": [\"1\", {}]}, \"service elements must be an object.\"),\n        ({\"name\": \"test\", \"service\": [{}]}, \"service object cannot be empty.\"),\n        ({\"name\": \"test\", \"service\": [{\"foo\": \"bar\"}]}, \"Either type or name Key-Value Pair is missing for service.\"),\n        ({\"name\": \"test\", \"service\": [{\"type\": 1}]}, \"Value must be a string for service type.\"),\n        ({\"name\": \"test\", \"service\": [{\"type\": \"\"}]}, \"Value cannot be empty for service type.\"),\n        ({\"name\": \"test\", \"service\": [{\"name\": 1}]}, \"Value must be a string for service name.\"),\n        ({\"name\": \"test\", \"service\": [{\"name\": \"\"}]}, \"Value cannot be empty for service name.\"),\n        ({\"name\": \"test\", \"service\": [{\"type\": \"T1\"}]}, \"url parameter 
is required.\"),\n        ({\"name\": \"test\", \"service\": [{\"type\": \"T1\"}], \"url\": 1}, \"url must be a list.\"),\n        ({\"name\": \"test\", \"service\": [{\"type\": \"T1\"}], \"url\": [{}]}, \"url child Key-Value Pair is missing.\"),\n        ({\"name\": \"test\", \"service\": [{\"type\": \"T1\"}], \"url\": [{\"url\": []}]}, \"Value must be a string for url object.\"),\n        ({\"name\": \"test\", \"service\": [{\"type\": \"T1\"}], \"url\": [{\"url\": \"\"}]}, \"Value cannot be empty for url object.\"),\n        ({\"name\": \"test\", \"service\": [{\"type\": \"T1\"}], \"url\": [{\"acl\": \"\"}]}, \"Value must be an array for acl object.\"),\n        ({\"name\": \"test\", \"service\": [{\"type\": \"T1\"}], \"url\": [{\"acl\": [1]}]}, \"acl elements must be an object.\"),\n        ({\"name\": \"test\", \"service\": [{\"type\": \"T1\"}], \"url\": [{\"acl\": [{}]}]}, \"acl object cannot be empty.\"),\n        ({\"name\": \"test\", \"service\": [{\"type\": \"T1\"}], \"url\": [{\"acl\": [{\"type\": \"Core\"}]}]},\n         \"url child Key-Value Pair is missing.\"),\n        ({\"name\": \"test\", \"service\": [{\"type\": \"T1\"}], \"url\": [{\"url\": \"URI/write\", \"acl\": \"\"}]},\n         \"Value must be an array for acl object.\"),\n        ({\"name\": \"test\", \"service\": [{\"type\": \"T1\"}], \"url\": [{\"url\": \"URI/write\", \"acl\": []}, 1]},\n         \"url elements must be an object.\"),\n        ({\"name\": \"test\", \"service\": [{\"name\": \"S1\"}], \"url\": [{\"url\": \"URI/write\", \"acl\": []}, {\"acl\": []}]},\n         \"url child Key-Value Pair is missing.\"),\n        ({\"name\": \"test\", \"service\": [{\"name\": \"S1\"}], \"url\": [{\"url\": \"URI/write\", \"acl\": []}, {\"acl\": \"\"}]},\n         \"Value must be an array for acl object.\"),\n        ({\"name\": \"test\", \"service\": [{\"name\": \"S1\"}], \"url\": [{\"url\": \"URI/write\", \"acl\": []}, {\"acl\": [1]}]},\n         \"acl elements must be an object.\"),\n        
({\"name\": \"test\", \"service\": [{\"name\": \"S1\"}], \"url\": [{\"url\": \"URI/write\", \"acl\": []}, {\"acl\": [{}]}]},\n         \"acl object cannot be empty.\")\n    ])\n    async def test_bad_add_acl(self, client, payload, message):\n        resp = await client.post('/fledge/ACL', data=json.dumps(payload))\n        assert 400 == resp.status\n        assert message == resp.reason\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {\"message\": message} == json_response\n\n    async def test_duplicate_add_acl(self, client):\n        acl_name = \"testACL\"\n        request_payload = {\"name\": acl_name, \"service\": [{'name': 'Fledge Storage'}], \"url\": []}\n        result = {'count': 1, 'rows': [\n            {'name': acl_name, 'service': [{'name': 'Fledge Storage'}, {'type': 'Southbound'}],\n             'url': [{'url': '/fledge/south/operation', 'acl': [{'type': 'Southbound'}]}]}]}\n        value = await mock_coro(result)\n        query_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": acl_name}}\n        message = \"ACL with name {} already exists.\".format(acl_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as patch_query_tbl:\n                resp = await client.post('/fledge/ACL', data=json.dumps(request_payload))\n                assert 409 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            args, _ = patch_query_tbl.call_args\n            assert 'control_acl' == args[0]\n            assert query_payload == json.loads(args[1])\n\n    async def test_good_add_acl(self, 
client):\n        acl_name = \"testACL\"\n        request_payload = {\"name\": acl_name, \"service\": [{\"type\": \"Notification\"}], \"url\": []}\n        result = {\"count\": 0, \"rows\": []}\n        insert_result = {\"response\": \"inserted\", \"rows_affected\": 1}\n        acl_query_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": acl_name}}\n        value = await mock_coro(result)\n        insert_value = await mock_coro(insert_result)\n        _rv = await mock_coro(None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as query_tbl_patch:\n                with patch.object(storage_client_mock, 'insert_into_tbl', return_value=insert_value\n                                  ) as insert_tbl_patch:\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=_rv) as audit_info_patch:\n                            resp = await client.post('/fledge/ACL', data=json.dumps(request_payload))\n                            assert 200 == resp.status\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            assert {'name': acl_name, 'service': [{\"type\": \"Notification\"}], 'url': []} == json_response\n                        audit_info_patch.assert_called_once_with('ACLAD', request_payload)\n                args, _ = insert_tbl_patch.call_args_list[0]\n                assert 'control_acl' == args[0]\n                assert {'name': acl_name, 'service': '[{\"type\": \"Notification\"}]', 'url': '[]'} == json.loads(args[1])\n            args, _ = query_tbl_patch.call_args_list[0]\n            assert 'control_acl' == args[0]\n       
     assert acl_query_payload == json.loads(args[1])\n\n    @pytest.mark.parametrize(\"payload, message\", [\n        ({}, \"Nothing to update for the given payload.\"),\n        ({\"service\": 1}, \"service must be a list.\"),\n        ({\"url\": 1}, \"url must be a list.\"),\n        ({\"service\": []}, \"service list cannot be empty.\"),\n        ({\"service\": [1]}, \"service elements must be an object.\"),\n        ({\"service\": [\"1\"]}, \"service elements must be an object.\"),\n        ({\"service\": [\"1\", {}]}, \"service elements must be an object.\"),\n        ({\"service\": [{}]}, \"service object cannot be empty.\"),\n        ({\"service\": [{\"foo\": \"bar\"}]}, \"Either type or name Key-Value Pair is missing for service.\"),\n        ({\"service\": [{\"type\": 1}]}, \"Value must be a string for service type.\"),\n        ({\"service\": [{\"type\": \"\"}]}, \"Value cannot be empty for service type.\"),\n        ({\"service\": [{\"name\": 1}]}, \"Value must be a string for service name.\"),\n        ({\"service\": [{\"name\": \"\"}]}, \"Value cannot be empty for service name.\"),\n        ({\"url\": [{}]}, \"url child Key-Value Pair is missing.\"),\n        ({\"url\": [{\"url\": []}]}, \"Value must be a string for url object.\"),\n        ({\"url\": [{\"url\": \"\"}]}, \"Value cannot be empty for url object.\"),\n        ({\"url\": [{\"acl\": \"\"}]}, \"Value must be an array for acl object.\"),\n        ({\"url\": [{\"acl\": [1]}]}, \"acl elements must be an object.\"),\n        ({\"url\": [{\"acl\": [{}]}]}, \"acl object cannot be empty.\"),\n        ({\"url\": [{\"acl\": [{\"type\": \"Core\"}]}]}, \"url child Key-Value Pair is missing.\"),\n        ({\"url\": [{\"url\": \"URI/write\", \"acl\": \"\"}]}, \"Value must be an array for acl object.\"),\n        ({\"url\": [{\"url\": \"URI/write\", \"acl\": []}, 1]}, \"url elements must be an object.\"),\n        ({\"url\": [{\"url\": \"URI/write\", 
\"acl\": []}, {\"acl\": []}]}, \"url child Key-Value Pair is missing.\"),\n        ({\"url\": [{\"url\": \"URI/write\", \"acl\": []}, {\"acl\": \"\"}]}, \"Value must be an array for acl object.\"),\n        ({\"url\": [{\"url\": \"URI/write\", \"acl\": []}, {\"acl\": [1]}]}, \"acl elements must be an object.\"),\n        ({\"url\": [{\"url\": \"URI/write\", \"acl\": []}, {\"acl\": [{}]}]}, \"acl object cannot be empty.\"),\n        ({\"service\": [{\"foo\": \"bar\"}], \"url\": []}, \"Either type or name Key-Value Pair is missing for service.\"),\n        ({\"url\": [], \"service\": []}, \"service list cannot be empty.\"),\n        ({\"url\": [], \"service\": [{}]}, \"service object cannot be empty.\"),\n        ({\"url\": [], \"service\": [{\"type\": 1}]}, \"Value must be a string for service type.\"),\n        ({\"url\": [], \"service\": [{\"type\": \"\"}]}, \"Value cannot be empty for service type.\"),\n        ({\"url\": [], \"service\": [{\"name\": 1}]}, \"Value must be a string for service name.\"),\n        ({\"url\": [], \"service\": [{\"name\": \"\"}]}, \"Value cannot be empty for service name.\"),\n        ({\"service\": [{\"name\": \"myService\"}], \"url\": 1}, \"url must be a list.\"),\n        ({\"service\": [{}], \"url\": 1}, \"service object cannot be empty.\"),\n        ({\"service\": [{\"name\": \"SVC\"}], \"url\": [{\"url\": \"URI/write\", \"acl\": \"\"}]},\n         \"Value must be an array for acl object.\"),\n        ({\"service\": [{\"name\": \"SVC\"}], \"url\": [{\"url\": \"\", \"acl\": \"\"}]}, \"Value cannot be empty for url object.\"),\n        ({\"service\": [{\"name\": \"SVC\"}], \"url\": [{\"blah\": \"\", \"acl\": []}]}, \"url child Key-Value Pair is missing.\"),\n        ({\"service\": [{\"name\": \"SVC\"}], \"url\": [{\"url\": \"URI/write\", \"acl\": []}, {\"acl\": [{}]}]},\n         \"acl object cannot be empty.\")\n    ])\n    async def test_bad_update_acl(self, client, payload, message):\n        acl_name = \"testACL\"\n        resp 
= await client.put('/fledge/ACL/{}'.format(acl_name), data=json.dumps(payload))\n        assert 400 == resp.status\n        assert message == resp.reason\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {\"message\": message} == json_response\n\n    async def test_update_acl_not_found(self, client):\n        acl_name = \"testACL\"\n        req_payload = {\"service\": [{\"type\": \"Notification\"}]}\n        result = {\"count\": 0, \"rows\": []}\n        value = await mock_coro(result)\n        query_payload = {\"return\": [\"name\", \"service\", \"url\"], \"where\": {\n            \"column\": \"name\", \"condition\": \"=\", \"value\": acl_name}}\n        message = \"ACL with name {} is not found.\".format(acl_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as query_tbl_patch:\n                resp = await client.put('/fledge/ACL/{}'.format(acl_name), data=json.dumps(req_payload))\n                assert 404 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            args, _ = query_tbl_patch.call_args\n            assert 'control_acl' == args[0]\n            assert query_payload == json.loads(args[1])\n\n    @pytest.mark.parametrize(\"payload\", [\n        {\"service\": [{\"name\": \"Sinusoid\"}, {\"type\": \"Southbound\"}]},\n        {\"service\": [{\"name\": \"Sinusoid\"}], \"url\": []},\n        {\"service\": [{\"type\": \"Southbound\"}], \"url\": [{\"url\": \"/fledge/south/operation\",\n                                                       \"acl\": [{\"type\": \"Southbound\"}]}]},\n        {\"service\": [{\"name\": \"Sinusoid\"}, 
{\"type\": \"Southbound\"}],\n         \"url\": [{\"url\": \"/fledge/south/operation\", \"acl\": [{\"type\": \"Southbound\"}]}]}\n    ])\n    async def test_update_acl(self, client, payload):\n        acl_name = \"testACL\"\n        acl_q_result = {\"count\": 0, \"rows\": []}\n        update_result = {\"response\": \"updated\", \"rows_affected\": 1}\n        query_tbl_result = {\"count\": 1, \"rows\": [{\"name\": acl_name, \"service\": [], \"url\": []}]}\n        query_payload = {\"return\": [\"name\", \"service\", \"url\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": acl_name}}\n        arv = await mock_coro(None)\n        update_value = await mock_coro(update_result)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        acl_query_payload_service = {\"return\": [\"entity_name\"],\n                                     \"where\": {\"column\": \"entity_type\", \"condition\": \"=\", \"value\": \"service\",\n                                               \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"{}\".format(\n                                                   acl_name)}}}\n\n        async def q_result(*args):\n            table = args[0]\n            if table == 'acl_usage':\n                assert acl_query_payload_service == json.loads(args[1])\n                return acl_q_result\n            elif table == 'control_acl':\n                assert query_payload == json.loads(args[1])\n                return query_tbl_result\n            else:\n                return {}\n\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                with patch.object(storage_client_mock, 'update_tbl', return_value=update_value\n                                  ) as patch_update_tbl:\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n            
            with patch.object(AuditLogger, 'information', return_value=arv) as audit_info_patch:\n                            resp = await client.put('/fledge/ACL/{}'.format(acl_name), data=json.dumps(payload))\n                            assert 200 == resp.status\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            assert {\"message\": \"ACL {} updated successfully.\".format(acl_name)} == json_response\n                        args, _ = audit_info_patch.call_args\n                        assert 'ACLCH' == args[0]\n                        if 'url' not in payload:\n                            payload['url'] = None\n                        payload['name'] = acl_name\n                        assert {\"acl\": payload, \"old_acl\": query_tbl_result['rows'][0]} == args[1]\n                update_args, _ = patch_update_tbl.call_args\n                assert 'control_acl' == update_args[0]\n\n    async def test_delete_acl_not_found(self, client):\n        acl_name = \"test\"\n        result = {\"count\": 0, \"rows\": []}\n        value = await mock_coro(result)\n        query_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": acl_name}}\n        message = \"ACL with name {} is not found.\".format(acl_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as query_tbl_patch:\n                resp = await client.delete('/fledge/ACL/{}'.format(acl_name))\n                assert 404 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            args, _ = 
query_tbl_patch.call_args\n            assert 'control_acl' == args[0]\n            assert query_payload == json.loads(args[1])\n\n    async def test_delete_acl(self, client):\n        acl_name = 'demoACL'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        acl_q_result_svc = {\"count\": 0, \"rows\": []}\n        acl_q_result_scr = {\"count\": 0, \"rows\": []}\n        result = {\"count\": 1, \"rows\": [{\"name\": acl_name, \"service\": [], \"url\": []}]}\n        payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": acl_name}}\n        delete_payload = {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": acl_name}}\n        delete_result = {\"response\": \"deleted\", \"rows_affected\": 1}\n        message = '{} ACL deleted successfully.'.format(acl_name)\n        del_value = await mock_coro(delete_result)\n        arv = await mock_coro(None)\n        acl_query_payload_service = {\"return\": [\"entity_name\"], \"where\": {\"column\": \"entity_type\",\n                                                                          \"condition\": \"=\",\n                                                                          \"value\": \"service\",\n                                                                          \"and\":\n                                                                              {\"column\": \"name\",\n                                                                               \"condition\": \"=\",\n                                                                               \"value\": \"{}\".format(acl_name)}}}\n\n        acl_query_payload_script = {\"return\": [\"entity_name\"], \"where\": {\"column\": \"entity_type\",\n                                                                         \"condition\": \"=\",\n                                                                         \"value\": \"script\",\n                                               
                          \"and\":\n                                                                         {\"column\": \"name\",\n                                                                          \"condition\": \"=\",\n                                                                          \"value\": \"{}\".format(acl_name)}}}\n\n        async def q_result(*args):\n            table = args[0]\n            if table == 'acl_usage':\n                if acl_query_payload_service == json.loads(args[1]):\n                    return acl_q_result_svc\n                elif acl_query_payload_script == json.loads(args[1]):\n                    return acl_q_result_scr\n            elif table == 'control_acl':\n                assert payload == json.loads(args[1])\n                return result\n            else:\n                return {}\n\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                with patch.object(storage_client_mock, 'delete_from_tbl', return_value=del_value\n                                  ) as patch_delete_tbl:\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=arv) as audit_info_patch:\n                            resp = await client.delete('/fledge/ACL/{}'.format(acl_name))\n                            assert 200 == resp.status\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            assert {'message': message} == json_response\n                        audit_info_patch.assert_called_once_with('ACLDL', {'message': message, \"name\": acl_name})\n                delete_args, _ = patch_delete_tbl.call_args\n                assert 'control_acl' == delete_args[0]\n                assert 
delete_payload == json.loads(delete_args[1])\n\n    async def test_bad_service_with_acl(self, client):\n        svc_name = 'foo'\n        result = {\"count\": 0, \"rows\": []}\n        payload = {\"acl_name\": \"testACL\"}\n        value = await mock_coro(result)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        message = \"Schedule with name {} is not found.\".format(svc_name)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as query_tbl_patch:\n                resp = await client.put('/fledge/service/{}/ACL'.format(svc_name), data=json.dumps(payload))\n                assert 404 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {'message': message} == json_response\n            args, _ = query_tbl_patch.call_args\n            assert 'schedules' == args[0]\n            assert {\"where\": {\"column\": \"schedule_name\", \"condition\": \"=\", \"value\": svc_name}} == json.loads(args[1])\n\n    @pytest.mark.parametrize(\"payload, message\", [\n        ({}, 'acl_name KV pair is missing.'),\n        ({\"acl_name\": 1}, 'ACL must be a string.'),\n        ({\"acl_name\": \"\"}, 'ACL cannot be empty.'),\n    ])\n    async def test_bad_attach_acl_to_service(self, client, payload, message):\n        svc_name = 'foo'\n        result = {\"count\": 1, \"rows\": [{\"id\": \"3e84f179-874d-4a91-a524-15512172f8a2\", \"enabled\": \"true\"}]}\n        value = await mock_coro(result)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as query_tbl_patch:\n                resp = await 
client.put('/fledge/service/{}/ACL'.format(svc_name), data=json.dumps(payload))\n                assert 400 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {'message': message} == json_response\n            args, _ = query_tbl_patch.call_args\n            assert 'schedules' == args[0]\n            assert {\"where\": {\"column\": \"schedule_name\", \"condition\": \"=\", \"value\": svc_name}} == json.loads(args[1])\n\n    async def test_acl_not_found_when_attach_to_service(self, client):\n        svc_name = 'foo'\n        acl_name = \"testACL\"\n        req_payload = {\"acl_name\": acl_name}\n        acl_result = {\"count\": 0, \"rows\": []}\n        acl_query_payload = {\"return\": [\"name\", \"service\", \"url\"], \"where\": {\"column\": \"name\", \"condition\": \"=\",\n                                                                             \"value\": acl_name}}\n        sch_query_payload = {\"where\": {\"column\": \"schedule_name\", \"condition\": \"=\", \"value\": svc_name}}\n        sch_result = {\"count\": 1, \"rows\": [{\"id\": \"3e84f179-874d-4a91-a524-15512172f8a2\", \"enabled\": \"true\"}]}\n        message = \"ACL with name {} is not found.\".format(acl_name)\n\n        async def q_result(*args):\n            table = args[0]\n            if table == 'schedules':\n                assert sch_query_payload == json.loads(args[1])\n                return sch_result\n            elif table == 'control_acl':\n                assert acl_query_payload == json.loads(args[1])\n                return acl_result\n            else:\n                return {}\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                resp = 
await client.put('/fledge/service/{}/ACL'.format(svc_name), data=json.dumps(req_payload))\n                assert 404 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {'message': message} == json_response\n\n    async def test_service_already_attached_to_acl(self, client):\n        svc_name = 'foo'\n        acl_name = \"testACL\"\n        req_payload = {\"acl_name\": acl_name}\n        acl_result = {\"count\": 1, \"rows\": [{\"name\": acl_name, \"service\": [], \"url\": []}]}\n        acl_query_payload = {\"return\": [\"name\", \"service\", \"url\"], \"where\": {\"column\": \"name\", \"condition\": \"=\",\n                                                                             \"value\": acl_name}}\n        sch_query_payload = {\"where\": {\"column\": \"schedule_name\", \"condition\": \"=\", \"value\": svc_name}}\n        sch_result = {\"count\": 1, \"rows\": [{\"id\": \"3e84f179-874d-4a91-a524-15512172f8a2\", \"enabled\": \"true\"}]}\n        message = \"Service {} already has an ACL object.\".format(svc_name)\n\n        async def q_result(*args):\n            table = args[0]\n            if table == 'schedules':\n                assert sch_query_payload == json.loads(args[1])\n                return sch_result\n            elif table == 'control_acl':\n                assert acl_query_payload == json.loads(args[1])\n                return acl_result\n            else:\n                return {}\n\n        cat_info = {\n                        \"AuthenticatedCaller\": {\"description\": \"Caller authorisation is needed\", \"type\": \"boolean\",\n                                                \"default\": \"false\", \"displayName\": \"Enable caller authorisation\",\n                                                \"value\": \"false\"},\n                        \"ACL\": {\n                            \"description\": 
\"Service ACL for {}\".format(svc_name), \"type\": \"JSON\",\n                            \"displayName\": \"Service ACL\", \"default\": \"[]\", \"value\": \"[]\"\n                        }\n                }\n        cat_value = await mock_coro(cat_info)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=cat_value) as patch_get_all_items:\n                    resp = await client.put('/fledge/service/{}/ACL'.format(svc_name), data=json.dumps(req_payload))\n                    assert 400 == resp.status\n                    assert message == resp.reason\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert {'message': message} == json_response\n                patch_get_all_items.assert_called_once_with(\"{}Security\".format(svc_name))\n\n    async def test_good_attach_acl_to_service(self, client):\n        svc_name = 'foo'\n        acl_name = \"testACL\"\n        req_payload = {\"acl_name\": acl_name}\n        acl_result = {\"count\": 1, \"rows\": [{\"name\": acl_name, \"service\": [], \"url\": []}]}\n        acl_query_payload = {\"return\": [\"name\", \"service\", \"url\"], \"where\": {\"column\": \"name\", \"condition\": \"=\",\n                                                                             \"value\": acl_name}}\n        sch_query_payload = {\"where\": {\"column\": \"schedule_name\", \"condition\": \"=\", \"value\": svc_name}}\n        sch_result = {\"count\": 1, \"rows\": [{\"id\": \"3e84f179-874d-4a91-a524-15512172f8a2\", \"enabled\": \"true\"}]}\n        security_cat_name = \"{}Security\".format(svc_name)\n        cat_child_result = 
{\"children\": [security_cat_name]}\n        message = \"ACL with name {} attached to {} service successfully.\".format(acl_name, svc_name)\n\n        acl_dict = {'ACL': acl_name}\n\n        async def q_result(*args):\n            table = args[0]\n            if table == 'schedules':\n                assert sch_query_payload == json.loads(args[1])\n                return sch_result\n            elif table == 'control_acl':\n                assert acl_query_payload == json.loads(args[1])\n                return acl_result\n            else:\n                return {}\n\n        cat_value = await mock_coro(None)\n        cat_child_value = await mock_coro(cat_child_result)\n        update_bulk_value = await mock_coro(None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=cat_value) as patch_get_all_items:\n                    with patch.object(c_mgr, 'create_category', return_value=cat_value) as patch_create_cat:\n                        with patch.object(c_mgr, 'create_child_category',\n                                          return_value=cat_child_value) as patch_create_child_cat:\n                            with patch.object(c_mgr, 'update_configuration_item_bulk',\n                                              return_value=update_bulk_value) as patch_update_bulk:\n                                resp = await client.put('/fledge/service/{}/ACL'.format(svc_name),\n                                                        data=json.dumps(req_payload))\n                                assert 200 == resp.status\n                                result = await resp.text()\n                                json_response = 
json.loads(result)\n                                assert {'message': message} == json_response\n                            patch_update_bulk.assert_called_once_with(security_cat_name, acl_dict)\n                        patch_create_child_cat.assert_called_once_with(svc_name, [security_cat_name])\n                        patch_create_cat.assert_called()\n                patch_get_all_items.assert_called_once_with(security_cat_name)\n\n    async def test_bad_detach_acl_from_service(self, client):\n        svc_name = 'foo'\n        result = {\"count\": 0, \"rows\": []}\n        payload = {\"acl_name\": \"testACL\"}\n        value = await mock_coro(result)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        message = \"Schedule with name {} is not found.\".format(svc_name)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as query_tbl_patch:\n                resp = await client.delete('/fledge/service/{}/ACL'.format(svc_name), data=json.dumps(payload))\n                assert 404 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {'message': message} == json_response\n            args, _ = query_tbl_patch.call_args\n            assert 'schedules' == args[0]\n            assert {\"where\": {\"column\": \"schedule_name\", \"condition\": \"=\", \"value\": svc_name}} == json.loads(args[1])\n\n    async def test_no_acl_detach_from_service(self, client):\n        svc_name = 'foo'\n        sch_query_payload = {\"where\": {\"column\": \"schedule_name\", \"condition\": \"=\", \"value\": svc_name}}\n        sch_result = {\"count\": 1, \"rows\": [{\"id\": \"3e84f179-874d-4a91-a524-15512172f8a2\", \"enabled\": \"true\"}]}\n        message = \"Nothing to delete as there is no ACL attached with 
{} service.\".format(svc_name)\n        cat_value = await mock_coro(None)\n        sch_value = await mock_coro(sch_result)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=sch_value) as patch_query_tbl:\n                with patch.object(c_mgr, 'get_category_all_items', return_value=cat_value) as patch_get_all_items:\n                    resp = await client.delete('/fledge/service/{}/ACL'.format(svc_name))\n                    assert 400 == resp.status\n                    assert message == resp.reason\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert {'message': message} == json_response\n                patch_get_all_items.assert_called_once_with(\"{}Security\".format(svc_name))\n            args, _ = patch_query_tbl.call_args\n            assert 'schedules' == args[0]\n            assert sch_query_payload == json.loads(args[1])\n\n    async def test_good_detach_acl_from_service(self, client):\n        svc_name = 'foo'\n        expected_result = {\n                    \"AuthenticatedCaller\": {\n                        \"description\": \"Caller authorisation is needed\",\n                        \"type\": \"boolean\",\n                        \"default\": \"false\",\n                        \"displayName\": \"Enable caller authorisation\",\n                    },\n                    'ACL': {\n                        'description': 'Service ACL for {}'.format(svc_name),\n                        'type': 'ACL',\n                        'displayName': 'Service ACL',\n                        'default': ''}\n            }\n        security_cat = \"{}Security\".format(svc_name)\n        sch_query_payload = {\"where\": {\"column\": 
\"schedule_name\", \"condition\": \"=\", \"value\": svc_name}}\n        sch_result = {\"count\": 1, \"rows\": [{\"id\": \"3e84f179-874d-4a91-a524-15512172f8a2\", \"enabled\": \"true\"}]}\n        cat_result = {\"a\": 1}\n        message = \"ACL is detached from {} service successfully.\".format(svc_name)\n        acl_dict = {'ACL': ''}\n        cat_value = await mock_coro(cat_result)\n        sch_value = await mock_coro(sch_result)\n        update_bulk_value = await mock_coro(None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=sch_value) as patch_query_tbl:\n                with patch.object(c_mgr, 'get_category_all_items', return_value=cat_value) as patch_get_all_items:\n                    with patch.object(c_mgr, 'create_category', return_value=cat_value) as patch_create_cat:\n                        with patch.object(c_mgr, 'update_configuration_item_bulk',\n                                          return_value=update_bulk_value) as patch_update_bulk:\n                            resp = await client.delete('/fledge/service/{}/ACL'.format(svc_name))\n                            assert 200 == resp.status\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            assert {'message': message} == json_response\n                        patch_update_bulk.assert_called_once_with(security_cat, acl_dict)\n                    patch_create_cat.assert_called_once_with(category_description='Security category for foo service',\n                                                             category_name=security_cat,\n                                                             category_value=expected_result)\n                
patch_get_all_items.assert_called_once_with(security_cat)\n            args, _ = patch_query_tbl.call_args\n            assert 'schedules' == args[0]\n            assert sch_query_payload == json.loads(args[1])\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/control_service/test_entrypoint.py",
    "content": "import json\n\nfrom unittest.mock import MagicMock, patch\nimport pytest\nfrom aiohttp import web\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.web import middleware\nfrom fledge.services.core import connect, routes\nfrom fledge.services.core.api.control_service import entrypoint\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2023 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nasync def mock_coro(*args):\n    return None if len(args) == 0 else args[0]\n\n\nclass TestEntrypoint:\n    \"\"\" Control Flow Entrypoint API tests \"\"\"\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop, middlewares=[middleware.optional_auth_middleware])\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    async def test_get_all_entrypoints(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        storage_result = {'count': 3, 'rows': [\n            {'name': 'EP1', 'description': 'EP1', 'type': 1, 'operation_name': 'OP1', 'destination': 0,\n             'destination_arg': '', 'anonymous': 't'},\n            {'name': 'EP2', 'description': 'Ep2', 'type': 0, 'operation_name': '', 'destination': 0,\n             'destination_arg': '', 'anonymous': 'f'},\n            {'name': 'EP3', 'description': 'EP3', 'type': 1, 'operation_name': 'OP2', 'destination': 0,\n             'destination_arg': '', 'anonymous': 'f'}]}\n        expected_api_response = {\"controls\": [{\"name\": \"EP1\", \"description\": \"EP1\", \"permitted\": True},\n                                              {\"name\": \"EP2\", \"description\": \"Ep2\", \"permitted\": True},\n                                              {\"name\": \"EP3\", \"description\": \"EP3\", \"permitted\": True}]}\n        rv = await 
mock_coro(storage_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', return_value=rv) as patch_query_tbl:\n                resp = await client.get('/fledge/control/manage')\n                assert 200 == resp.status\n                json_response = json.loads(await resp.text())\n                assert 'controls' in json_response\n                assert expected_api_response == json_response\n            patch_query_tbl.assert_called_once_with('control_api')\n\n    @pytest.mark.parametrize(\"exception, message, status_code\", [\n        (ValueError, 'name should be in string.', 400),\n        (KeyError, 'EP control entrypoint not found.', 404),\n        (KeyError, '', 404),\n        (Exception, 'Internal Server error.', 500)\n    ])\n    async def test_bad_get_entrypoint_by_name(self, client, exception, message, status_code):\n        ep_name = \"EP\"\n        with patch.object(entrypoint, '_get_entrypoint', side_effect=exception(message)):\n            with patch.object(entrypoint._logger, 'error') as patch_logger:\n                resp = await client.get('/fledge/control/manage/{}'.format(ep_name))\n                assert status_code == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            if exception == Exception:\n                patch_logger.assert_called()\n\n    async def test_get_entrypoint_by_name(self, client):\n        ep_name = \"EP\"\n        storage_result = {'name': ep_name, 'description': 'EP1', 'type': 'operation', 'operation_name': 'OP1',\n                          'destination': 'broadcast', 'anonymous': True, 'constants': {'x': '640', 'y': '480'},\n                          'variables': {'rpm': '800', 'distance': '138'}, 'allow': ['admin', 'user']}\n        
rv1 = await mock_coro(storage_result)\n        rv2 = await mock_coro(True)\n        with patch.object(entrypoint, '_get_entrypoint', return_value=rv1) as patch_entrypoint:\n            with patch.object(entrypoint, '_get_permitted', return_value=rv2) as patch_permitted:\n                resp = await client.get('/fledge/control/manage/{}'.format(ep_name))\n                assert 200 == resp.status\n                json_response = json.loads(await resp.text())\n                assert 'permitted' in json_response\n                assert storage_result == json_response\n            assert 1 == patch_permitted.call_count\n        patch_entrypoint.assert_called_once_with(ep_name)\n\n    async def test_create_entrypoint_in_use(self, client):\n        ep_name = \"SetLatheSpeed\"\n        payload = {\"name\": ep_name, \"description\": \"Set the speed of the lathe\", \"type\": \"write\",\n                   \"destination\": \"asset\", \"asset\": \"lathe\", \"constants\": {\"units\": \"spin\"},\n                   \"variables\": {\"rpm\": \"100\"}, \"allow\": [], \"anonymous\": False}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        storage_result = {\"count\": 1, \"rows\": [{\"name\": ep_name}]}\n        rv = await mock_coro(storage_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', return_value=rv) as patch_query_tbl:\n                resp = await client.post('/fledge/control/manage', data=json.dumps(payload))\n                assert 400 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {'message': '{} control entrypoint is already in use.'.format(ep_name)} == json_response\n            patch_query_tbl.assert_called_once_with('control_api')\n\n    async def test_create_entrypoint(self, client):\n        ep_name = \"SetLatheSpeed\"\n        payload = 
{\"name\": ep_name, \"description\": \"Set the speed of the lathe\", \"type\": \"write\",\n                   \"destination\": \"asset\", \"asset\": \"lathe\", \"constants\": {\"units\": \"spin\"},\n                   \"variables\": {\"rpm\": \"100\"}, \"allow\": [], \"anonymous\": False}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        storage_result = {\"count\": 0, \"rows\": []}\n        insert_result = {\"response\": \"inserted\", \"rows_affected\": 1}\n\n        async def i_result(*args):\n            table = args[0]\n            insert_payload = args[1]\n            if table == 'control_api':\n                p = {'name': payload['name'], 'description': payload['description'], 'type': 0, 'operation_name': '',\n                     'destination': 2, 'destination_arg': payload['asset'],\n                     'anonymous': 'f' if payload['anonymous'] is False else 't'}\n                assert p == json.loads(insert_payload)\n            elif table == 'control_api_parameters':\n                if json.loads(insert_payload)['constant'] == 't':\n                    assert {'name': ep_name, 'parameter': 'units', 'value': 'spin', 'constant': 't'\n                            } == json.loads(insert_payload)\n                else:\n                    assert {'name': ep_name, 'parameter': 'rpm', 'value': '100', 'constant': 'f'\n                            } == json.loads(insert_payload)\n            elif table == 'control_api_acl':\n                pass\n            return insert_result\n\n        rv = await mock_coro(storage_result)\n        arv = await mock_coro(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', return_value=rv) as patch_query_tbl:\n                with patch.object(storage_client_mock, 'insert_into_tbl', side_effect=i_result\n                                  ) as patch_insert_tbl:\n                    with 
patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=arv) as audit_info_patch:\n                            resp = await client.post('/fledge/control/manage', data=json.dumps(payload))\n                            assert 200 == resp.status\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            assert {'message': '{} control entrypoint has been created successfully.'.format(ep_name)\n                                    } == json_response\n                        audit_info_patch.assert_called_once_with('CTEAD', payload)\n                assert 3 == patch_insert_tbl.call_count\n            patch_query_tbl.assert_called_once_with('control_api')\n\n    async def test_update_entrypoint_not_found(self, client):\n        ep_name = \"EP\"\n        message = '{} control entrypoint not found.'.format(ep_name)\n        payload = {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": ep_name}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        storage_result = {\"count\": 0, \"rows\": []}\n        rv = await mock_coro(storage_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=rv\n                              ) as patch_query_tbl:\n                resp = await client.put('/fledge/control/manage/{}'.format(ep_name))\n                assert 404 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            args, kwargs = patch_query_tbl.call_args\n            assert 'control_api' == args[0]\n            assert payload == json.loads(args[1])\n\n    async def 
test_update_entrypoint(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        ep_name = \"SetLatheSpeed\"\n        payload = {\"description\": \"Updated\"}\n        query_payload = '{\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"SetLatheSpeed\"}}'\n        storage_result = {\"count\": 1, \"rows\": [{\"name\": ep_name}]}\n        ep_info = {'name': ep_name, 'description': 'Perform speed of lathe', 'type': 'operation',\n                   'operation_name': 'Speed', 'destination': 'broadcast', 'anonymous': False,\n                   'constants': {'x': '640', 'y': '480'}, 'variables': {'rpm': '800', 'distance': '138'}, 'allow': []}\n        new_ep_info = {'name': ep_name, 'description': payload['description'], 'type': 'operation',\n                       'operation_name': 'Speed', 'destination': 'broadcast', 'anonymous': False,\n                       'constants': {'x': '640', 'y': '480'}, 'variables': {'rpm': '800', 'distance': '138'},\n                       'allow': []}\n\n        update_payload = ('{\"values\": {\"description\": \"Updated\"}, '\n                          '\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"SetLatheSpeed\"}}')\n        update_result = {\"response\": \"updated\", \"rows_affected\": 1}\n        rv1 = await mock_coro(storage_result)\n        rv2 = await mock_coro(ep_info)\n        rv3 = await mock_coro(new_ep_info)\n        rv4 = await mock_coro(update_result)\n        arv = await mock_coro(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=rv1\n                              ) as patch_query_tbl:\n                with patch.object(entrypoint, '_get_entrypoint', side_effect=[rv2, rv3]) as patch_entrypoint:\n                    with patch.object(storage_client_mock, 'update_tbl', return_value=rv4) as patch_update_tbl:\n               
         with patch.object(AuditLogger, '__init__', return_value=None):\n                            with patch.object(AuditLogger, 'information', return_value=arv\n                                              ) as audit_info_patch:\n                                resp = await client.put('/fledge/control/manage/{}'.format(ep_name),\n                                                        data=json.dumps(payload))\n                                assert 200 == resp.status\n                                result = await resp.text()\n                                json_response = json.loads(result)\n                                assert {'message': '{} control entrypoint has been updated successfully.'.format(\n                                    ep_name)} == json_response\n                            audit_info_patch.assert_called_once_with(\n                                'CTECH', {\"entrypoint\": new_ep_info, \"old_entrypoint\": ep_info})\n                    patch_update_tbl.assert_called_once_with('control_api', update_payload)\n                assert 2 == patch_entrypoint.call_count\n            patch_query_tbl.assert_called_once_with('control_api', query_payload)\n\n    async def test_delete_entrypoint_not_found(self, client):\n        ep_name = \"EP\"\n        message = '{} control entrypoint not found.'.format(ep_name)\n        payload = {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": ep_name}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        storage_result = {\"count\": 0, \"rows\": []}\n        rv = await mock_coro(storage_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=rv\n                              ) as patch_query_tbl:\n                resp = await client.delete('/fledge/control/manage/{}'.format(ep_name))\n                assert 404 == resp.status\n                
assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            args, kwargs = patch_query_tbl.call_args\n            assert 'control_api' == args[0]\n            assert payload == json.loads(args[1])\n\n    async def test_delete_entrypoint(self, client):\n        ep_name = \"EP\"\n        payload = {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": ep_name}}\n        storage_result = {\"count\": 0, \"rows\": [\n            {'name': ep_name, 'description': 'EP1', 'type': 'operation', 'operation_name': 'OP1',\n             'destination': 'broadcast', 'anonymous': True, 'constants': {'x': '640', 'y': '480'},\n             'variables': {'rpm': '800', 'distance': '138'}, 'allow': ['admin', 'user']}]}\n        message = \"{} control entrypoint has been deleted successfully.\".format(ep_name)\n        rv1 = await mock_coro(storage_result)\n        rv2 = await mock_coro(None)\n        arv = await mock_coro(None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        del_payload = {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": ep_name}}\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=rv1\n                              ) as patch_query_tbl:\n                with patch.object(storage_client_mock, 'delete_from_tbl', return_value=rv2\n                                  ) as patch_delete_tbl:\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=arv) as audit_info_patch:\n                            resp = await client.delete('/fledge/control/manage/{}'.format(ep_name))\n                            assert 200 == resp.status\n                      
      result = await resp.text()\n                            json_response = json.loads(result)\n                            assert {\"message\": message} == json_response\n                        audit_info_patch.assert_called_once_with('CTEDL', {'message': message, \"name\": ep_name})\n                assert 3 == patch_delete_tbl.call_count\n                del_args = patch_delete_tbl.call_args_list\n                args1, _ = del_args[0]\n                assert 'control_api_acl' == args1[0]\n                assert del_payload == json.loads(args1[1])\n                args2, _ = del_args[1]\n                assert 'control_api_parameters' == args2[0]\n                assert del_payload == json.loads(args2[1])\n                args3, _ = del_args[2]\n                assert 'control_api' == args3[0]\n                assert del_payload == json.loads(args3[1])\n            args, kwargs = patch_query_tbl.call_args\n            assert 'control_api' == args[0]\n            assert payload == json.loads(args[1])\n\n    @pytest.mark.parametrize(\"ep_type\", [\"operation\", \"write\"])\n    async def test_update_request_entrypoint(self, client, ep_type):\n        from fledge.services.core.service_registry.service_registry import ServiceRegistry\n        from fledge.common.service_record import ServiceRecord\n\n        ServiceRegistry._registry = []\n\n        with patch.object(ServiceRegistry._logger, 'info'):\n            ServiceRegistry.register('Fledge Storage', 'Storage', '127.0.0.1', 1, 1, 'http')\n            ServiceRegistry.register('Dispatcher Service', 'Dispatcher', '127.0.0.1', 8, 8, 'http')\n\n        ep_name = \"SetLatheSpeed\"\n        if ep_type == \"operation\":\n            storage_result = {'name': ep_name, 'description': 'Perform speed of lathe', 'type': 'operation',\n                              'operation_name': 'Speed', 'destination': 'broadcast', 'anonymous': False,\n                              'constants': {'x': '640', 'y': '480'}, 'variables': 
{'rpm': '800', 'distance': '138'},\n                              'allow': []}\n            dispatch_payload = {'destination': 'broadcast', 'source': 'API', 'source_name': 'Anonymous',\n                                'operation': {'Speed': {'x': '420', 'y': '480', 'rpm': '800', 'distance': '200'}}}\n            payload = {\"x\": \"420\", \"distance\": \"200\"}\n            dispatch_endpoint = 'dispatch/operation'\n        else:\n            storage_result = {'name': ep_name, 'description': 'Perform speed of lathe', 'type': 'write',\n                              'destination': 'broadcast', 'anonymous': False, 'constants': {'x': '640', 'y': '480'},\n                              'variables': {'rpm': '800', 'distance': '138'}, 'allow': ['admin', 'user']}\n            payload = {\"rpm\": \"1200\"}\n            dispatch_endpoint = 'dispatch/write'\n            dispatch_payload = {'destination': 'broadcast', 'source': 'API', 'source_name': 'Anonymous',\n                                'write': {'x': '640', 'y': '480', 'rpm': '1200', 'distance': '138'}}\n\n        svc_info = (ServiceRecord(\"d607c5be-792f-4993-96b7-b513674e7d3b\",\n                                  ep_name, \"Dispatcher\", \"http\", \"127.0.0.1\", \"8118\", \"8118\"), \"Token\")\n\n        rv1 = await mock_coro(storage_result)\n        rv2 = await mock_coro(svc_info)\n        rv3 = await mock_coro(None)\n        with patch.object(entrypoint, '_get_entrypoint', return_value=rv1):\n            with patch.object(entrypoint, '_get_service_record_info_along_with_bearer_token',\n                              return_value=rv2) as patch_service:\n                with patch.object(entrypoint, '_call_dispatcher_service_api',\n                                  return_value=rv3) as patch_call_service:\n                    resp = await client.put('/fledge/control/request/{}'.format(ep_name), data=json.dumps(payload))\n                    assert 200 == resp.status\n                    result = await resp.text()\n     
               json_response = json.loads(result)\n                    assert {'message': '{} control entrypoint URL called.'.format(ep_name)} == json_response\n                    if ep_type == \"operation\":\n                        op = dispatch_payload['operation']['Speed']\n                        assert storage_result['constants']['x'] != op['x']\n                        assert storage_result['constants']['y'] == op['y']\n                        assert storage_result['variables']['distance'] != op['distance']\n                        assert storage_result['variables']['rpm'] == op['rpm']\n                    else:\n                        write = dispatch_payload['write']\n                        assert storage_result['constants']['x'] == write['x']\n                        assert storage_result['constants']['y'] == write['y']\n                        assert storage_result['variables']['distance'] == write['distance']\n                        assert storage_result['variables']['rpm'] != write['rpm']\n                patch_call_service.assert_called_once_with('http', '127.0.0.1', 8118, dispatch_endpoint,\n                                                           svc_info[1], dispatch_payload)\n            patch_service.assert_called_once_with()\n\n    @pytest.mark.parametrize(\"identifier, identifier_value\", [\n        (0, 'write'),\n        (1, 'operation'),\n        ('write', 0),\n        ('operation', 1)\n    ])\n    async def test__get_type(self, identifier, identifier_value):\n        assert identifier_value == await entrypoint._get_type(identifier)\n\n    @pytest.mark.parametrize(\"identifier, identifier_value\", [\n        (0, 'broadcast'),\n        (1, 'service'),\n        (2, 'asset'),\n        (3, 'script'),\n        ('broadcast', 0),\n        ('service', 1),\n        ('asset', 2),\n        ('script', 3)\n    ])\n    async def test__get_destination(self, identifier, identifier_value):\n        assert identifier_value == await 
entrypoint._get_destination(identifier)\n\n    async def test__update_params(self):\n        ep_name = \"SetLatheSpeed\"\n        old = {'x': '640', 'y': '480'}\n        new = {'x': '180', 'z': '90'}\n        is_constant = 't'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        rows_affected = {\"response\": \"updated\", \"rows_affected\": 1}\n        rv = await mock_coro(rows_affected)\n        tbl_name = 'control_api_parameters'\n        delete_payload = {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": ep_name,\n                                    \"and\": {\"column\": \"constant\", \"condition\": \"=\", \"value\": \"t\",\n                                            \"and\": {\"column\": \"parameter\", \"condition\": \"=\", \"value\": list(old)[1]}}}}\n        insert_payload = {'name': ep_name, 'parameter': 'z', 'value': new['z'], 'constant': 't'}\n        update_payload = {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": ep_name,\n                                    \"and\": {\"column\": \"constant\", \"condition\": \"=\", \"value\": \"t\",\n                                            \"and\": {\"column\": \"parameter\", \"condition\": \"=\", \"value\": \"x\"}}},\n                          \"values\": {\"value\": new['x']}}\n        with patch.object(storage_client_mock, 'update_tbl', return_value=rv) as patch_update_tbl:\n            with patch.object(storage_client_mock, 'delete_from_tbl', return_value=rv) as patch_delete_tbl:\n                with patch.object(storage_client_mock, 'insert_into_tbl', return_value=rv) as patch_insert_tbl:\n                    await entrypoint._update_params(ep_name, old, new, is_constant, storage_client_mock)\n                args, _ = patch_insert_tbl.call_args\n                assert tbl_name == args[0]\n                assert insert_payload == json.loads(args[1])\n            args, _ = patch_delete_tbl.call_args\n            assert tbl_name == args[0]\n            assert 
delete_payload == json.loads(args[1])\n        args, _ = patch_update_tbl.call_args\n        assert tbl_name == args[0]\n        assert update_payload == json.loads(args[1])\n\n    async def test__get_entrypoint(self):\n        ep_name = \"SetLatheSpeed\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        payload = {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": ep_name}}\n        storage_result1 = {\"count\": 1, \"rows\": [\n            {'name': ep_name, 'description': 'Perform lathe Speed', 'type': 'operation', 'operation_name': 'Speed',\n             'destination': 'broadcast', 'destination_arg': '', 'anonymous': True,\n             'constants': {}, 'variables': {},\n             'allow': []}]}\n        storage_result2 = {\"count\": 0, \"rows\": []}\n        rv1 = await mock_coro(storage_result1)\n        rv2 = await mock_coro(storage_result2)\n        rv3 = await mock_coro(storage_result2)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=[rv1, rv2, rv3]\n                              ) as patch_query_tbl:\n                await entrypoint._get_entrypoint(ep_name)\n            assert 3 == patch_query_tbl.call_count\n            args1 = patch_query_tbl.call_args_list[0]\n            assert 'control_api' == args1[0][0]\n            assert payload == json.loads(args1[0][1])\n            args2 = patch_query_tbl.call_args_list[1]\n            assert 'control_api_parameters' == args2[0][0]\n            assert payload == json.loads(args2[0][1])\n            args3 = patch_query_tbl.call_args_list[2]\n            assert 'control_api_acl' == args3[0][0]\n            assert payload == json.loads(args3[0][1])\n\n    @pytest.mark.parametrize(\"payload\", [\n        {'name': 'FocusCamera', 'description': 'Perform focus on camera', 'type': 'operation',\n         'operation_name': 'OP', 'destination': 
'script', 'script': 'S1', 'anonymous': False},\n        {'name': 'FocusCamera', 'description': 'Perform focus on camera', 'type': 'write',\n         'destination': 'script', 'script': 'S1', 'constants': {'unit': 'cm'}, 'variables': {'aperture': 'f/11'}},\n        {'name': 'FocusCamera', 'description': 'Perform focus on camera', 'type': 'operation',\n         'operation_name': 'OP', 'destination': 'asset', 'asset': 'AS'},\n        {'name': 'FocusCamera', 'description': 'Perform focus on camera', 'type': 'write',\n         'destination': 'asset', 'asset': 'AS', 'constants': {'unit': 'cm'}, 'variables': {'aperture': 'f/11'}},\n        {'name': 'FocusCamera', 'description': 'Perform focus on camera', 'type': 'operation',\n         'operation_name': 'OP', 'destination': 'broadcast'},\n        {'name': 'FocusCamera', 'description': 'Perform focus on camera', 'type': 'write',\n         'destination': 'broadcast', 'constants': {'unit': 'cm'}, 'variables': {'aperture': 'f/11'}, 'anonymous': True},\n        {'name': 'FocusCamera', 'description': 'Perform focus on camera', 'type': 'operation',\n         'operation_name': 'OP', 'destination': 'service', 'service': 'Camera'},\n        {'name': 'FocusCamera', 'description': 'Perform focus on camera', 'type': 'write',\n         'destination': 'service', 'service': 'Camera', 'constants': {'unit': 'cm'}, 'variables': {'aperture': 'f/11'}},\n        {'name': 'FocusCamera', 'description': 'Perform focus on camera', 'type': 'operation',\n         'operation_name': 'OP', 'destination': 'script', 'script': 'S1', 'anonymous': False,\n         'constants': {'unit': 'cm'}, 'variables': {'aperture': 'f/16'}},\n        {'name': 'EP1', 'description': 'Entry Point', 'type': 'write', 'destination': 'broadcast',\n         'constants': {'seed': '100'}, 'anonymous': True},\n        {'name': 'EP2', 'description': 'Entry Point', 'type': 'write', 'destination': 'broadcast',\n         'variables': {'seed': '100'}, 'anonymous': True},\n        {'name': 
'EP3', 'description': 'Entry Point', 'type': 'write', 'destination': 'broadcast',\n         'constants': {'seed': '100', 'param2': \"foo\"}, 'anonymous': False, 'allow': []},\n        {'name': 'EP4', 'description': 'Entry Point', 'type': 'write', 'destination': 'broadcast',\n         'variables': {'seed': '100', 'param2': \"foo\"}, 'anonymous': False, 'allow': []},\n        {'name': 'EP #5', 'description': 'Entry Point', 'type': 'write', 'destination': 'asset', \"asset\": \"Random\",\n         'variables': {'seed': '100', 'param2': \"foo\"}, 'anonymous': True},\n        {'name': 'EP-123', 'description': 'Entry Point', 'type': 'write', 'destination': 'service', \"service\": \"S1\",\n         'variables': {'seed': '100', 'param2': \"foo\"}, 'anonymous': False, 'allow': []}\n    ])\n    async def test__check_parameters(self, payload):\n        cols = await entrypoint._check_parameters(payload)\n        assert isinstance(cols, dict)\n\n    @pytest.mark.parametrize(\"payload\", [\n        {'name': 'FocusCamera', 'description': 'Perform focus on camera', 'type': 'operation',\n         'operation_name': 'OP', 'destination': 'script', 'script': 'S1', 'anonymous': False},\n        {'name': 'FocusCamera', 'description': 'Perform focus on camera', 'type': 'write',\n         'destination': 'script', 'script': 'S1', 'constants': {'unit': 'cm'}, 'variables': {'aperture': 'f/11'}},\n        {'anonymous': True},\n        {'description': 'updated'},\n        {'type': 'operation', 'operation_name': 'Distance'},\n        {'type': 'operation', 'operation_name': 'Test', 'constants': {'unit': 'cm'}, 'variables': {'aperture': 'f/11'}},\n        {'type': 'write', 'constants': {'unit': 'cm'}, 'variables': {'aperture': 'f/11'}},\n        {'destination': 'asset', 'asset': 'AS'},\n        {'constants': {'unit': 'cm'}},\n        {'variables': {'aperture': 'f/11'}}\n    ])\n    async def test__check_parameters_without_required_keys(self, payload):\n        cols = await 
entrypoint._check_parameters(payload, skip_required=True)\n        assert isinstance(cols, dict)\n\n    @pytest.mark.parametrize(\"payload, exception_name, error_msg\", [\n        # ({\"a\": 1}, KeyError,\n        #  \"{'name', 'type', 'destination', 'description'} required keys are missing in request payload.\")\n        ({\"name\": 1}, ValueError, \"Control entrypoint name should be in string.\"),\n        ({\"name\": \"\"}, ValueError, \"Control entrypoint name cannot be empty.\"),\n        ({\"description\": 1}, ValueError, \"Control entrypoint description should be in string.\"),\n        ({\"description\": \"\"}, ValueError, \"Control entrypoint description cannot be empty.\"),\n        ({\"type\": 1}, ValueError, \"Control entrypoint type should be in string.\"),\n        ({\"type\": \"\"}, ValueError, \"Control entrypoint type cannot be empty.\"),\n        ({\"type\": \"Blah\"}, ValueError, \"Possible types are: ['write', 'operation'].\"),\n        ({\"type\": \"operation\"}, KeyError, \"operation_name KV pair is missing.\"),\n        ({\"type\": \"operation\", \"operation_name\": \"\"}, ValueError, \"Control entrypoint operation name cannot be empty.\"),\n        ({\"type\": \"operation\", \"operation_name\": 1}, ValueError,\n         \"Control entrypoint operation name should be in string.\"),\n        ({\"destination\": \"\"}, ValueError, \"Control entrypoint destination cannot be empty.\"),\n        ({\"destination\": 1}, ValueError, \"Control entrypoint destination should be in string.\"),\n        ({\"destination\": \"Blah\"}, ValueError,\n         \"Possible destination values are: ['broadcast', 'service', 'asset', 'script'].\"),\n        ({\"destination\": \"script\", \"destination_arg\": \"\"}, KeyError, \"script destination argument is missing.\"),\n        ({\"destination\": \"script\", \"script\": 1}, ValueError,\n         \"Control entrypoint destination argument should be in string.\"),\n        ({\"destination\": \"script\", \"script\": 
\"\"}, ValueError,\n         \"Control entrypoint destination argument cannot be empty.\"),\n        ({\"anonymous\": \"t\"}, ValueError, \"anonymous should be a bool.\"),\n        ({\"constants\": \"t\"}, ValueError, \"constants should be a dictionary.\"),\n        ({\"variables\": \"t\"}, ValueError, \"variables should be a dictionary.\"),\n        ({\"type\": \"write\"}, ValueError, \"For write type either variables or constants should not be empty.\"),\n        ({\"type\": \"write\", \"constants\": {}}, ValueError,\n         \"For write type either variables or constants should not be empty.\"),\n        ({\"type\": \"write\", \"variables\": {}}, ValueError,\n         \"For write type either variables or constants should not be empty.\"),\n        ({\"type\": \"write\", \"constants\": None}, ValueError,\n         \"For write type either variables or constants should not be empty.\"),\n        ({\"type\": \"write\", \"variables\": None}, ValueError,\n         \"For write type either variables or constants should not be empty.\"),\n        ({\"allow\": \"user\"}, ValueError, \"allow should be an array of list of users.\")\n    ])\n    async def test_bad__check_parameters(self, payload, exception_name, error_msg):\n        with pytest.raises(Exception) as exc_info:\n            await entrypoint._check_parameters(payload, skip_required=True)\n        assert exc_info.type is exception_name\n        assert exc_info.value.args[0] == error_msg\n\n    # TODO: add more tests\n    \"\"\"\n        a) authentication based\n        b) allow\n        c) exception handling tests\n    \"\"\"\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/control_service/test_pipeline.py",
    "content": "import json\n\nfrom datetime import timedelta\nfrom unittest.mock import MagicMock, patch\n\nimport pytest\nfrom aiohttp import web\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.web import middleware\nfrom fledge.services.core import connect\nfrom fledge.services.core import routes\nfrom fledge.services.core import server\nfrom fledge.services.core.api.control_service import pipeline\nfrom fledge.services.core.scheduler.entities import StartUpSchedule\nfrom fledge.services.core.scheduler.scheduler import Scheduler\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2024 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nasync def mock_coro(*args, **kwargs):\n    return None if len(args) == 0 else args[0]\n\nSOURCE_LOOKUP = [{'cpsid': 1, 'name': 'Any', 'description': 'Any source.'},\n                 {'cpsid': 2, 'name': 'Service', 'description': 'A named service in source of the control pipeline.'},\n                 {'cpsid': 3, 'name': 'API', 'description': 'The control pipeline source is the REST API.'},\n                 {'cpsid': 4, 'name': 'Notification', 'description': 'The control pipeline originated from a '\n                                                                     'notification.'},\n                 {'cpsid': 5, 'name': 'Schedule', 'description': 'The control request was triggered by a schedule.'},\n                 {'cpsid': 6, 'name': 'Script', 'description': 'The control request has come from the named script.'}]\n\nDESTINATION_LOOKUP = [{'cpdid': 1, 'name': 'Any', 'description': 'Any destination.'},\n                      {'cpdid': 2, 'name': 'Service', 'description': 'A name of service that is being controlled.'},\n                      {'cpdid': 3, 'name': 'Asset', 'description': 'A name of asset 
that is being controlled.'},\n                      {'cpdid': 4, 'name': 'Script', 'description': 'A name of script that will be executed.'},\n                      {'cpdid': 5, 'name': 'Broadcast', 'description': 'No name is applied and pipeline will be '\n                                                                       'considered for any control writes or operations'\n                                                                       ' to broadcast destinations.'}]\n\n\nclass TestPipeline:\n    \"\"\" Pipeline API tests \"\"\"\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop, middlewares=[middleware.optional_auth_middleware])\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    @pytest.mark.parametrize(\"request_param\", [\n        '', '?type=source', '?type=destination', '?type=blah'\n    ])\n    async def test_get_lookup(self, client, request_param):\n        storage_result = {\"controlLookup\": {\"source\": [], \"destination\": []}}\n        rv = await mock_coro(storage_result)\n        with patch.object(pipeline, '_get_all_lookups', return_value=rv) as patch_lookup:\n            resp = await client.get('/fledge/control/lookup{}'.format(request_param))\n            assert 200 == resp.status\n            json_response = json.loads(await resp.text())\n            assert 'controlLookup' in json_response\n        if request_param.endswith('source'):\n            patch_lookup.assert_called_once_with(\"control_source\")\n        elif request_param.endswith('destination'):\n            patch_lookup.assert_called_once_with(\"control_destination\")\n        else:\n            patch_lookup.assert_called_once_with()\n\n    async def test_bad_get_lookup(self, client):\n        with patch.object(pipeline, '_get_all_lookups', side_effect=Exception) as patch_lookup:\n            with patch.object(pipeline._logger, 'error') as patch_logger:\n                resp = await 
client.get('/fledge/control/lookup')\n                assert 500 == resp.status\n                assert '' == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": \"\"} == json_response\n            patch_logger.assert_called()\n        patch_lookup.assert_called_once_with()\n\n    async def test_get_all_when_empty(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        storage_result = {'count': 0, 'rows': []}\n        expected_api_response = {\"pipelines\": []}\n        rv = await mock_coro(storage_result)\n        source_lookup = await mock_coro([])\n        dest_lookup = await mock_coro([])\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', return_value=rv) as patch_query_tbl:\n                with patch.object(pipeline, '_get_all_lookups', side_effect=[source_lookup, dest_lookup]):\n                    resp = await client.get('/fledge/control/pipeline')\n                    assert 200 == resp.status\n                    json_response = json.loads(await resp.text())\n                    assert 'pipelines' in json_response\n                    assert expected_api_response == json_response\n            patch_query_tbl.assert_called_once_with('control_pipelines')\n\n    async def test_get_all(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        storage_result = {'count': 2, 'rows': [\n            {'cpid': 1, 'name': 'Cp1', 'stype': 1, 'sname': '', 'dtype': 5, 'dname': '', 'enabled': 't',\n             'execution': 'Shared'}, {'cpid': 2, 'name': 'cp2', 'stype': 3, 'sname': 'anonymous', 'dtype': 1,\n                                      'dname': '', 'enabled': 't', 'execution': 'Exclusive'}]}\n        expected_api_response = {'pipelines': [{'id': 1, 'name': 'Cp1', 'source': {'type': 'Any', 'name': ''},\n        
                                        'destination': {'type': 'Broadcast', 'name': ''}, 'enabled': True,\n                                                'execution': 'Shared', 'filters': []},\n                                               {'id': 2, 'name': 'cp2', 'source': {'type': 'API', 'name': 'anonymous'},\n                                                'destination': {'type': 'Any', 'name': ''}, 'enabled': True,\n                                                'execution': 'Exclusive', 'filters': []}]}\n\n        filters_storage_result = {'count': 0, 'rows': []}\n        rv = await mock_coro(storage_result)\n        source_lookup = await mock_coro(SOURCE_LOOKUP)\n        dest_lookup = await mock_coro(DESTINATION_LOOKUP)\n        filters = await mock_coro(filters_storage_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', return_value=rv) as patch_query_tbl:\n                with patch.object(pipeline, '_get_all_lookups', side_effect=[source_lookup, dest_lookup]):\n                    with patch.object(pipeline, '_get_table_column_by_value', return_value=filters\n                                      ) as patch_filters:\n                        resp = await client.get('/fledge/control/pipeline')\n                        assert 200 == resp.status\n                        json_response = json.loads(await resp.text())\n                        assert 'pipelines' in json_response\n                        assert expected_api_response == json_response\n                    assert 2 == patch_filters.call_count\n                    args = patch_filters.call_args_list\n                    args1, _ = args[0]\n                    assert ('control_filters', 'cpid', 1) == args1\n                    args2, _ = args[1]\n                    assert ('control_filters', 'cpid', 2) == args2\n            patch_query_tbl.assert_called_once_with('control_pipelines')\n\n 
   async def test_bad_get_all(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', side_effect=Exception) as patch_query_tbl:\n                with patch.object(pipeline._logger, 'error') as patch_logger:\n                    resp = await client.get('/fledge/control/pipeline')\n                    assert 500 == resp.status\n                    assert \"\" == resp.reason\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert {\"message\": \"\"} == json_response\n                patch_logger.assert_called()\n                args, _ = patch_logger.call_args_list[0]\n                assert 'Failed to get all pipelines.' == args[1]\n            patch_query_tbl.assert_called_once_with('control_pipelines')\n\n    async def test_get_by_id(self, client):\n        cpid = 1\n        storage_result = {'id': cpid, 'name': 'Cp1', 'source': {'type': 'Any', 'name': ''}, 'destination': {\n            'type': 'Broadcast', 'name': ''}, 'enabled': True, 'execution': 'Shared', 'filters': []}\n        rv = await mock_coro(storage_result)\n        with patch.object(pipeline, '_get_pipeline', return_value=rv) as patch_pipeline:\n            resp = await client.get('/fledge/control/pipeline/{}'.format(cpid))\n            assert 200 == resp.status\n            json_response = json.loads(await resp.text())\n            assert storage_result == json_response\n            assert isinstance(json_response['id'], int)\n        patch_pipeline.assert_called_once_with(str(cpid))\n\n    @pytest.mark.parametrize(\"exception_name, status_code\", [\n        (ValueError, 400),\n        (KeyError, 404),\n        (Exception, 500),\n    ])\n    async def test_bad_get_by_id(self, client, exception_name, status_code):\n        cpid = 1\n        with 
patch.object(pipeline, '_get_pipeline', side_effect=exception_name) as patch_pipeline:\n            with patch.object(pipeline._logger, 'error') as patch_logger:\n                resp = await client.get('/fledge/control/pipeline/{}'.format(cpid))\n                assert status_code == resp.status\n                assert '' == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": \"\"} == json_response\n            if exception_name == Exception:\n                patch_logger.assert_called()\n                args, _ = patch_logger.call_args_list[0]\n                assert 'Failed to fetch details of pipeline having ID: <{}>.'.format(cpid) == args[1]\n        patch_pipeline.assert_called_once_with(str(cpid))\n\n    async def test_create(self, client):\n        data = {\"name\": \"wildcard\", \"enabled\": True, \"execution\": \"shared\", \"source\": {\"type\": 1},\n                \"destination\": {\"type\": 1}}\n        columns = {'name': 'wildcard', 'enabled': 't', 'execution': 'shared', 'stype': 1, 'sname': '',\n                   'dtype': 1, 'dname': ''}\n        insert_column = ('{\"name\": \"wildcard\", \"enabled\": \"t\", \"execution\": \"shared\", \"stype\": 1, \"sname\": \"\", '\n                         '\"dtype\": 1, \"dname\": \"\"}')\n        insert_result = {'response': 'inserted', 'rows_affected': 1}\n        in_use = {'name': 'wildcard', 'stype': 1, 'sname': '', 'dtype': 1, 'dname': '', 'enabled': 't',\n                  'execution': 'shared', 'id': 4}\n        expected_pipeline = {'name': 'wildcard', 'enabled': True, 'execution': 'shared', 'id': 4,\n                             'source': {'type': 'Any', 'name': ''}, 'destination': {'type': 'Any', 'name': ''},\n                             'filters': []}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        rv = await mock_coro(columns)\n        rv2 = await mock_coro(insert_result)\n        rv3 = 
await mock_coro(in_use)\n        rv4 = await mock_coro(\"Any\")\n        rv5 = await mock_coro(\"Any\")\n        rv6 = await mock_coro(None)\n        with patch.object(pipeline, '_check_parameters', return_value=rv) as patch_params:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(storage_client_mock, 'insert_into_tbl', return_value=rv2) as patch_insert_tbl:\n                    with patch.object(pipeline, '_pipeline_in_use', return_value=rv3) as patch_in_use:\n                        with patch.object(pipeline, '_get_lookup_value', side_effect=[rv4, rv5]):\n                            with patch.object(AuditLogger, '__init__', return_value=None):\n                                with patch.object(AuditLogger, 'information', return_value=rv6) as patch_audit:\n                                    resp = await client.post('/fledge/control/pipeline', data=json.dumps(data))\n                                    assert 200 == resp.status\n                                    result = await resp.text()\n                                    json_response = json.loads(result)\n                                    assert expected_pipeline == json_response\n                                patch_audit.assert_called_once_with('CTPAD', expected_pipeline)\n                    patch_in_use.assert_called_once_with(data['name'],\n                                                         {'type': data['source']['type'], 'name': ''},\n                                                         {'type': data['destination']['type'], 'name': ''}, info=True)\n                patch_insert_tbl.assert_called_once_with('control_pipelines', insert_column)\n        args, _ = patch_params.call_args_list[0]\n        assert data == args[0]\n\n    @pytest.mark.parametrize(\"exception_name, status_code\", [\n        (ValueError, 400),\n        (KeyError, 404),\n        (Exception, 500),\n    ])\n    async def 
test_bad_create(self, client, exception_name, status_code):\n        payload = {\"name\": \"Cp\"}\n        with patch.object(pipeline, '_check_parameters', side_effect=exception_name):\n            with patch.object(pipeline._logger, 'error') as patch_logger:\n                resp = await client.post('/fledge/control/pipeline', data=json.dumps(payload))\n                assert status_code == resp.status\n                assert '' == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": \"\"} == json_response\n            if exception_name == Exception:\n                patch_logger.assert_called()\n                args, _ = patch_logger.call_args_list[0]\n                assert 'Failed to create pipeline: {}.'.format(payload['name']) == args[1]\n\n    async def test_update(self, client):\n        cpid = 1\n        storage_result = {'id': cpid, 'name': 'Cp1', 'source': {'type': 'Any', 'name': ''}, 'destination': {\n            'type': 'Broadcast', 'name': ''}, 'enabled': True, 'execution': 'Shared', 'filters': []}\n        column_payload = ('{\"values\": {\"enabled\": \"t\", \"execution\": \"Shared\", \"stype\": 3, \"sname\": \"anonymous\", '\n                          '\"dtype\": 1, \"dname\": \"\"}, \"where\": {\"column\": \"cpid\", \"condition\": \"=\", \"value\": 1}}')\n        payload = {'execution': 'Shared', 'source': {'type': 3, 'name': None}, 'destination': {'type': 1, 'name': None},\n                   'filters': [], 'enabled': True}\n        columns = {'enabled': 't', 'execution': 'Shared', 'stype': 3, 'sname': 'anonymous', 'dtype': 1, 'dname': ''}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        rows_affected = {\"response\": \"updated\", \"rows_affected\": 1}\n        update_pipeline = {'id': cpid, 'name': 'Cp1', 'source': {'type': 'API', 'name': ''}, 'destination': {\n            'type': 'Any', 'name': ''}, 'enabled': True, 'execution': 
'Shared', 'filters': []}\n        rv = await mock_coro(storage_result)\n        rv2 = await mock_coro(columns)\n        rv3 = await mock_coro(rows_affected)\n        rv4 = await mock_coro(None)\n        rv5 = await mock_coro(update_pipeline)\n        with patch.object(pipeline, '_get_pipeline', side_effect=[rv, rv5]) as patch_pipeline:\n            with patch.object(pipeline, '_check_parameters', return_value=rv2) as patch_params:\n                with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                    with patch.object(storage_client_mock, 'update_tbl', return_value=rv3) as patch_update_tbl:\n                        with patch.object(AuditLogger, '__init__', return_value=None):\n                            with patch.object(AuditLogger, 'information', return_value=rv4) as patch_audit:\n                                resp = await client.put('/fledge/control/pipeline/{}'.format(cpid),\n                                                        data=json.dumps(payload))\n                                assert 200 == resp.status\n                                result = await resp.text()\n                                json_response = json.loads(result)\n                                assert {\"message\": 'Control Pipeline with ID:<{}> has been updated successfully.'\n                                                   ''.format(cpid)} == json_response\n                            patch_audit.assert_called_once_with('CTPCH', {\"pipeline\": update_pipeline,\n                                                                          \"old_pipeline\": storage_result})\n                        patch_update_tbl.assert_called_once_with('control_pipelines', column_payload)\n            args, _ = patch_params.call_args_list[0]\n            payload['old_pipeline_name'] = storage_result['name']\n            assert payload == args[0]\n        assert 2 == patch_pipeline.call_count\n\n    @pytest.mark.parametrize(\"exception_name, 
status_code\", [\n            (ValueError, 400),\n            (KeyError, 404),\n            (Exception, 500),\n    ])\n    async def test_bad_update(self, client, exception_name, status_code):\n        cpid = 1\n        with patch.object(pipeline, '_get_pipeline', side_effect=exception_name) as patch_pipeline:\n            with patch.object(pipeline._logger, 'error') as patch_logger:\n                resp = await client.put('/fledge/control/pipeline/{}'.format(cpid))\n                assert status_code == resp.status\n                assert '' == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": \"\"} == json_response\n            if exception_name == Exception:\n                patch_logger.assert_called()\n                args, _ = patch_logger.call_args_list[0]\n                assert 'Failed to update pipeline having ID: <{}>.'.format(cpid) == args[1]\n        patch_pipeline.assert_called_once_with(str(cpid))\n\n    async def test_delete(self, client):\n        cpid = 1\n        storage_client_mock = MagicMock(StorageClientAsync)\n        del_payload = '{\"where\": {\"column\": \"cpid\", \"condition\": \"=\", \"value\": 1}}'\n        storage_result = {'id': cpid, 'name': 'Cp1', 'source': {'type': 'Any', 'name': ''}, 'destination': {\n            'type': 'Broadcast', 'name': ''}, 'enabled': True, 'execution': 'Shared', 'filters': []}\n        rows_affected = {\"response\": \"deleted\", \"rows_affected\": 1}\n        message = {'message': 'Control Pipeline with ID:<{}> has been deleted successfully.'.format(cpid),\n                   'name': storage_result['name']}\n        rv = await mock_coro(storage_result)\n        rv2 = await mock_coro(None)\n        rv3 = await mock_coro(rows_affected)\n        with patch.object(pipeline, '_get_pipeline', return_value=rv) as patch_pipeline:\n            with patch.object(pipeline, '_remove_filters', return_value=rv2) as 
patch_filters:\n                with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                    with patch.object(storage_client_mock, 'delete_from_tbl', return_value=rv3\n                                      ) as patch_delete_tbl:\n                        with patch.object(AuditLogger, '__init__', return_value=None):\n                            with patch.object(AuditLogger, 'information', return_value=rv2) as patch_audit:\n                                resp = await client.delete('/fledge/control/pipeline/{}'.format(cpid))\n                                assert 200 == resp.status\n                                json_response = json.loads(await resp.text())\n                                assert message == json_response\n                            patch_audit.assert_called_once_with('CTPDL', message)\n                    patch_delete_tbl.assert_called_once_with('control_pipelines', del_payload)\n            patch_filters.assert_called_once_with(storage_client_mock, [], cpid, storage_result['name'])\n        patch_pipeline.assert_called_once_with(str(cpid))\n\n    @pytest.mark.parametrize(\"exception_name, status_code\", [\n        (ValueError, 400),\n        (KeyError, 404),\n        (Exception, 500),\n    ])\n    async def test_bad_delete(self, client, exception_name, status_code):\n        cpid = 1\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(pipeline, '_get_pipeline', side_effect=exception_name) as patch_pipeline:\n                with patch.object(pipeline._logger, 'error') as patch_logger:\n                    resp = await client.delete('/fledge/control/pipeline/{}'.format(cpid))\n                    assert status_code == resp.status\n                    assert '' == resp.reason\n                    result = await resp.text()\n                    json_response = 
json.loads(result)\n                    assert {\"message\": \"\"} == json_response\n                if exception_name == Exception:\n                    patch_logger.assert_called()\n                    args, _ = patch_logger.call_args_list[0]\n                    assert 'Failed to delete pipeline having ID: <{}>.'.format(cpid) == args[1]\n            patch_pipeline.assert_called_once_with(str(cpid))\n\n    async def test__get_all_lookups(self):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        source_lookup = await mock_coro({\"rows\": SOURCE_LOOKUP})\n        dest_lookup = await mock_coro({\"rows\": DESTINATION_LOOKUP})\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl',\n                              side_effect=[source_lookup, dest_lookup]) as patch_query:\n                res = await pipeline._get_all_lookups()\n                assert 'source' in res\n                assert 'destination' in res\n            assert 2 == patch_query.call_count\n            args = patch_query.call_args_list\n            tbl1, _ = args[0]\n            assert 'control_source' == tbl1[0]\n            tbl2, _ = args[1]\n            assert 'control_destination' == tbl2[0]\n\n    @pytest.mark.parametrize(\"name\", [\n        \"control_source\", \"control_destination\"\n    ])\n    async def test__get_all_lookups_by_table(self, name):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        storage_result = {\"rows\": SOURCE_LOOKUP} if name == \"control_source\" else {\"rows\": DESTINATION_LOOKUP}\n        lookup = await mock_coro(storage_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', return_value=lookup) as patch_query:\n                res = await pipeline._get_all_lookups(name)\n                assert isinstance(res, list)\n        
        assert len(res)\n            patch_query.assert_called_once_with(name)\n\n    @pytest.mark.parametrize(\"tbl_name, column_name, column_value, limit\", [\n        (\"control_filters\", \"cpid\", 2, None),\n        (\"control_pipelines\", \"name\", \"CP\", None),\n        (\"control_source\", \"name\", \"Any\", None),\n        (\"control_destination\", \"name\", \"Broadcast\", None),\n        (\"control_pipelines\", \"name\", \"CP\", 1),\n    ])\n    async def test__get_table_column_by_value(self, tbl_name, column_name, column_value, limit):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        lookup = await mock_coro({\"rows\": []})\n        payload = {\"where\": {\"column\": column_name, \"condition\": \"=\", \"value\": column_value}}\n        if tbl_name == \"control_filters\":\n            payload[\"sort\"] = {\"column\": \"forder\", \"direction\": \"asc\"}\n        if limit is not None:\n            payload[\"limit\"] = limit\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=lookup\n                              ) as patch_query_tbl:\n                res = await pipeline._get_table_column_by_value(tbl_name, column_name, column_value, limit)\n                assert 'rows' in res\n                assert not len(res['rows'])\n            assert patch_query_tbl.called\n            args = patch_query_tbl.call_args_list\n            arg, _ = args[0]\n            assert tbl_name == arg[0]\n            assert payload == json.loads(arg[1])\n\n    async def test_bad__get_pipeline(self):\n        cpid = 3\n        rv = await mock_coro({\"rows\": []})\n        with pytest.raises(Exception) as exc_info:\n            with patch.object(pipeline, '_get_table_column_by_value', return_value=rv):\n                await pipeline._get_pipeline(cpid, False)\n        assert exc_info.type is KeyError\n        assert 'Pipeline 
having ID: {} not found.'.format(cpid) == exc_info.value.args[0]\n\n    async def test__get_pipeline(self):\n        cpid = 3\n        result = {\"rows\": [{\"cpid\": cpid, \"name\": \"CP-3\", \"stype\": 1, \"sname\": \"\", \"dtype\": 5, \"dname\": \"\",\n                            \"enabled\": \"t\", \"execution\": \"Shared\"}]}\n        rv = await mock_coro(result)\n        rv2 = await mock_coro(\"Any\")\n        with patch.object(pipeline, '_get_table_column_by_value', return_value=rv) as patch_tbl:\n            with patch.object(pipeline, '_get_lookup_value', return_value=rv2) as patch_lookup:\n                res = await pipeline._get_pipeline(cpid, False)\n                expected_rows = result[\"rows\"][0]\n                assert expected_rows['cpid'] == res['id']\n                assert expected_rows['name'] == res['name']\n                assert isinstance(res['source']['type'], str)\n                assert expected_rows['sname'] == res['source']['name']\n                assert isinstance(res['destination']['type'], str)\n                assert expected_rows['dname'] == res['destination']['name']\n                assert res['enabled'] is True\n                assert expected_rows['execution'] == res['execution']\n            assert 2 == patch_lookup.call_count\n            args = patch_lookup.call_args_list\n            arg, _ = args[0]\n            assert ('source', result[\"rows\"][0]['stype']) == arg\n            arg, _ = args[1]\n            assert ('destination', result[\"rows\"][0]['dtype']) == arg\n        patch_tbl.assert_called_once_with('control_pipelines', 'cpid', cpid)\n\n    @pytest.mark.parametrize(\"source, dest, matched, info, info_output\", [\n        (\"s\", \"d\", False, False, None),\n        ({'type': 1, 'name': ''}, {'type': 5, 'name': ''}, True, False, None),\n        ({'type': 1, 'name': ''}, {'type': 5, 'name': ''}, True, True, True),\n        ({'type': 2, 'name': ''}, {'type': 5, 'name': ''}, False, False, None),\n        
({'type': 2, 'name': ''}, {'type': 5, 'name': ''}, False, True, None),\n        ({'type': 5, 'name': ''}, {'type': 5, 'name': ''}, False, False, None),\n        ({'type': 5, 'name': ''}, {'type': 1, 'name': ''}, False, False, None)\n    ])\n    async def test__pipeline_in_use(self, source, dest, matched, info, info_output):\n        name = \"Modbus\"\n        result = {\"rows\": [{\"cpid\": 2, \"name\": name, \"stype\": 1, \"sname\": \"\", \"dtype\": 5, \"dname\": \"\",\n                            \"enabled\": \"t\", \"execution\": \"Shared\"}]}\n        rv = await mock_coro(result)\n        with patch.object(pipeline, '_get_table_column_by_value', return_value=rv) as patch_tbl:\n            res = await pipeline._pipeline_in_use(name, source, dest, info)\n            if info_output:\n                assert res == result['rows'][0]\n            elif info:\n                assert res == info_output\n            else:\n                assert res is matched\n        patch_tbl.assert_called_once_with('control_pipelines', 'name', name)\n\n    @pytest.mark.parametrize(\"_type, value, name\", [\n        (\"source\", 3, \"API\"), (\"source\", 2, \"Service\"), (\"source\", 1, \"Any\"),\n        (\"source\", 4, \"Notification\"), (\"source\", 5, \"Schedule\"), (\"source\", 6, \"Script\"),\n        (\"destination\", 1, \"Any\"), (\"destination\", 2, \"Service\"), (\"destination\", 3, \"Asset\"),\n        (\"destination\", 4, \"Script\"), (\"destination\", 5, \"Broadcast\")\n    ])\n    async def test__get_lookup_value(self, _type, value, name):\n        storage_result = SOURCE_LOOKUP if _type == \"source\" else DESTINATION_LOOKUP\n        lookup = await mock_coro(storage_result)\n        with patch.object(pipeline, '_get_all_lookups', return_value=lookup) as patch_lookup:\n            res = await pipeline._get_lookup_value(_type, value)\n            assert name == res\n        patch_lookup.assert_called_once_with('control_{}'.format(_type))\n\n    @pytest.mark.parametrize(\"payload, exception_name, error_msg\", [\n        ({\"name\": 1}, ValueError, \"Pipeline name 
should be in string.\"),\n        ({\"name\": \"\"}, ValueError, \"Pipeline name cannot be empty.\"),\n        ({\"name\": \"Cp\", \"enabled\": 1}, ValueError, \"Enabled should be a bool.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": 1}, ValueError, \"Execution should be in string.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"\"}, ValueError, \"Execution value cannot be empty.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"inclusive\"}, ValueError,\n         \"Execution model value either shared or exclusive.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": 1}, ValueError,\n         \"Source should be passed with type and name.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": \"1\"}}, ValueError,\n         \"Source type should be an integer value.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 5}}, ValueError,\n         \"Invalid source type found.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 5}}, ValueError,\n         \"Source name is missing.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 5, \"name\": 1}}, ValueError,\n         \"Source name should be a string value.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 5, \"name\": \"\"}}, ValueError,\n         \"Source name cannot be empty.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"name\": \"Abra\"}}, ValueError,\n         \"Source type is missing.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 1}, \"destination\": 1}, ValueError,\n         \"Destination should be passed with type and name.\"),\n        
({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 1}, \"destination\": {\"type\": \"1\"}},\n         ValueError, \"Destination type should be an integer value.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"destination\": {\"type\": 1}},\n         ValueError, \"Invalid destination type found.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 1}, \"destination\":\n            {\"type\": 2, \"name\": 1}}, ValueError, \"Destination name should be a string value.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 1}, \"destination\":\n            {\"type\": 2, \"name\": \"\"}}, ValueError, \"Destination name cannot be empty.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 1}, \"destination\":\n            {\"name\": \"foo\"}}, ValueError, \"Destination type is missing.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 1}, \"destination\": {\"type\": 2}},\n         ValueError, \"Destination name is missing.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 6, \"name\": \"Script\"},\n          \"destination\": {\"type\": 4, \"name\": \"Script\"}}, ValueError,\n         \"Pipeline is not allowed with same type of source and destination.\"),\n        ({\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 5, \"name\": \"Sch\"},\n          \"destination\": {\"type\": 4, \"name\": \"Script\"}, \"filters\": \"[]\"}, ValueError,\n         \"Pipeline filters should be passed in list.\")\n    ])\n    async def test_bad__check_parameters(self, payload, exception_name, error_msg):\n        req_mock = MagicMock(web.Request)\n        storage_result = {\"count\": 0, \"rows\": []}\n        res = \"\" 
if error_msg.endswith(\"type found.\") else \"Any\"\n        rv = await mock_coro(storage_result)\n        rv2 = await mock_coro(res)\n        with pytest.raises(Exception) as exc_info:\n            with patch.object(pipeline, '_check_unique_pipeline', return_value=rv) as patch_unique_pipeline:\n                with patch.object(pipeline, '_get_lookup_value', return_value=rv2) as patch_lookup_value:\n                    with patch.object(pipeline, '_validate_lookup_name', return_value=rv2\n                                      ) as patch_lookup_name:\n                        await pipeline._check_parameters(payload, req_mock)\n                    patch_lookup_name.assert_called_once_with()\n                patch_lookup_value.assert_called_once_with()\n            patch_unique_pipeline.assert_called_once_with()\n        assert exc_info.type is exception_name\n        assert exc_info.value.args[0] == error_msg\n\n    @pytest.mark.parametrize(\"service_name, payload, exception_name, error_msg\", [\n        (\"Sine\", {\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 2, \"name\": \"Sine\"},\n                  \"destination\": {\"type\": 2, \"name\": \"OMF\"}}, ValueError,\n         \"South services can not be the source for control pipelines.\"),\n        (\"RW\", {\"name\": \"Cp\", \"enabled\": True, \"execution\": \"exclusive\", \"source\": {\"type\": 2, \"name\": \"Sine\"},\n                \"destination\": {\"type\": 2, \"name\": \"OMF\"}}, ValueError,\n         \"North services can not be the destination for control pipelines.\")\n    ])\n    async def test_bad__check_params(self, service_name, payload, exception_name, error_msg):\n        async def mock_schedule(name):\n            schedules = []\n            # South\n            schedule = StartUpSchedule()\n            schedule.schedule_id = \"1\"\n            schedule.exclusive = True\n            schedule.enabled = True\n            schedule.name = name\n            
schedule.process_name = \"south_c\"\n            schedule.repeat = timedelta(seconds=30)\n            schedule.time = None\n            schedule.day = None\n            schedules.append(schedule)\n            # North\n            north_schedule = StartUpSchedule()\n            north_schedule.schedule_id = \"2\"\n            north_schedule.exclusive = True\n            north_schedule.enabled = True\n            north_schedule.name = \"OMF\"\n            north_schedule.process_name = \"north_C\"\n            north_schedule.repeat = timedelta(seconds=30)\n            north_schedule.time = None\n            north_schedule.day = None\n            schedules.append(north_schedule)\n            return schedules\n\n        server.Server.scheduler = Scheduler(None, None)\n        req_mock = MagicMock(web.Request)\n        storage_result = {\"count\": 0, \"rows\": []}\n        rv = await mock_coro(storage_result)\n        rv2 = await mock_coro(\"service\")\n        rv3 = await mock_schedule(service_name)\n        with pytest.raises(Exception) as exc_info:\n            with patch.object(pipeline, '_check_unique_pipeline', return_value=rv) as patch_unique_pipeline:\n                with patch.object(pipeline, '_get_lookup_value', return_value=rv2) as patch_lookup_value:\n                    with patch.object(pipeline, '_validate_lookup_name',\n                                      return_value=rv2) as patch_lookup_name:\n                        with patch.object(server.Server.scheduler, 'get_schedules',\n                                          return_value=rv3) as patch_get_schedules:\n                            await pipeline._check_parameters(payload, req_mock)\n                        patch_get_schedules.assert_called_once_with()\n                    patch_lookup_name.assert_called_once_with()\n                patch_lookup_value.assert_called_once_with()\n            patch_unique_pipeline.assert_called_once_with()\n        assert exc_info.type is exception_name\n        
assert exc_info.value.args[0] == error_msg\n\n    @pytest.mark.parametrize(\"lookup, _type, value\", [\n        (\"source\", 6, \"Sc\"), (\"destination\", 4, \"foo\")\n    ])\n    async def test__validate_lookup_name_script(self, lookup, _type, value):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        storage_result = {\"rows\": [{\"name\": \"S1\"}, {\"name\": \"S2\"}]}\n        rv = await mock_coro(storage_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=rv) as patch_query:\n                with pytest.raises(Exception) as exc_info:\n                    await pipeline._validate_lookup_name(lookup, _type, value)\n                assert exc_info.type is ValueError\n                assert \"'{}' not a valid script name.\".format(value) == exc_info.value.args[0]\n            patch_query.assert_called_once_with('control_script', '{\"return\": [\"name\"]}')\n\n    async def test__validate_lookup_name_asset(self, lookup=\"destination\", _type=3, value=\"sinusoid\"):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        storage_result = {\"rows\": [{\"asset\": \"S1\", \"event\": \"Ingest\"}, {\"asset\": \"S2\", \"event\": \"Egress\"}]}\n        rv = await mock_coro(storage_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=rv) as patch_query:\n                with pytest.raises(Exception) as exc_info:\n                    await pipeline._validate_lookup_name(lookup, _type, value)\n                assert exc_info.type is ValueError\n                assert \"'{}' not a valid asset name.\".format(value) == exc_info.value.args[0]\n            patch_query.assert_called_once_with('asset_tracker', '{\"modifier\": \"distinct\", \"return\": [\"asset\"]}')\n\n    async 
def test__validate_lookup_name_notifications(self, lookup=\"source\", _type=4, value=\"N1\"):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        storage_result = [{\"child\": [\"N\"]}]\n        rv = await mock_coro(storage_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, '_read_all_child_category_names', return_value=rv) as patch_get_all_items:\n                with pytest.raises(Exception) as exc_info:\n                    await pipeline._validate_lookup_name(lookup, _type, value)\n                # assert exc_info.type is ValueError\n                assert \"'{}' not a valid notification instance name.\".format(value) == exc_info.value.args[0]\n            patch_get_all_items.assert_called_once_with('Notifications')\n\n    @pytest.mark.parametrize(\"lookup, _type, value, error_msg, sch_name, sch_process_name\", [\n        (\"source\", 2, \"sine\", \"South services can not be the source for control pipelines.\", \"sine\", \"south_c\"),\n        (\"destination\", 2, \"mod\", \"North services can not be the destination for control pipelines.\", \"mod\", \"north_C\"),\n        (\"source\", 5, \"ninja\", \"'ninja' not a valid schedule name.\", \"sine\", \"bar\"),\n        (\"source\", 2, \"RW\", \"'RW' not a valid service.\", \"sine\", \"south_c\"),\n        (\"destination\", 2, \"OMF\", \"'OMF' not a valid service.\", \"mod\", \"north_C\")\n    ])\n    async def test__validate_lookup_name_schedule(self, lookup, _type, value, error_msg, sch_name, sch_process_name):\n        async def mock_schedule():\n            schedules = []\n            schedule = StartUpSchedule()\n            schedule.schedule_id = \"e7d02d5f-02f4-4b9a-8a77-b924db1a2e7c\"\n            schedule.exclusive = True\n            schedule.enabled = True\n            schedule.name = sch_name\n            schedule.process_name = 
sch_process_name\n            schedule.repeat = timedelta(seconds=30)\n            schedule.time = None\n            schedule.day = None\n            schedules.append(schedule)\n            return schedules\n\n        server.Server.scheduler = Scheduler(None, None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        get_sch = await mock_schedule()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(server.Server.scheduler, 'get_schedules', return_value=get_sch\n                              ) as patch_get_schedules:\n                with pytest.raises(Exception) as exc_info:\n                    await pipeline._validate_lookup_name(lookup, _type, value)\n                    server.Server.scheduler = None\n                assert exc_info.type is ValueError\n                assert error_msg == exc_info.value.args[0]\n            patch_get_schedules.assert_called_once_with()\n\n    async def test__check_unique_pipeline(self):\n        name = \"Cp\"\n        rv = await mock_coro({\"rows\": [1]})\n        with patch.object(pipeline, '_get_table_column_by_value', return_value=rv) as patch_tbl_col:\n            with pytest.raises(Exception) as exc_info:\n                await pipeline._check_unique_pipeline(name)\n            assert exc_info.type is ValueError\n            assert \"{} pipeline already exists with the same name.\".format(name) == exc_info.value.args[0]\n        patch_tbl_col.assert_called_once_with('control_pipelines', 'name', name, limit=1)\n\n\nclass TestPipelineFilters:\n    \"\"\" Pipeline Filters API tests \"\"\"\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop, middlewares=[middleware.optional_auth_middleware])\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    async def test_create(self, client):\n        data = {\"name\": \"wildcard\", \"enabled\": True, 
\"execution\": \"shared\", \"source\": {\"type\": 1},\n                \"destination\": {\"type\": 1}, \"filters\": [\"Filter1\"]}\n        columns = {'name': 'wildcard', 'enabled': 't', 'execution': 'shared', 'stype': 1, 'sname': '',\n                   'dtype': 1, 'dname': ''}\n        insert_column = ('{\"name\": \"wildcard\", \"enabled\": \"t\", \"execution\": \"shared\", \"stype\": 1, \"sname\": \"\", '\n                         '\"dtype\": 1, \"dname\": \"\"}')\n        insert_result = {'response': 'inserted', 'rows_affected': 1}\n        in_use = {'name': 'wildcard', 'stype': 1, 'sname': '', 'dtype': 1, 'dname': '', 'enabled': 't',\n                  'execution': 'shared', 'id': 4}\n        expected_pipeline = {'name': 'wildcard', 'enabled': True, 'execution': 'shared', 'id': 4,\n                             'source': {'type': 'Any', 'name': ''}, 'destination': {'type': 'Any', 'name': ''},\n                             'filters': [\"Filter1\"]}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        rv = await mock_coro(columns)\n        rv2 = await mock_coro(insert_result)\n        rv3 = await mock_coro(in_use)\n        rv4 = await mock_coro(\"Any\")\n        rv5 = await mock_coro(\"Any\")\n        rv6 = await mock_coro(None)\n        rv7 = await mock_coro(True)\n        rv8 = await mock_coro([\"Filter1\"])\n        with patch.object(pipeline, '_check_parameters', return_value=rv) as patch_params:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(storage_client_mock, 'insert_into_tbl', return_value=rv2) as patch_insert_tbl:\n                    with patch.object(pipeline, '_pipeline_in_use', return_value=rv3) as patch_in_use:\n                        with patch.object(pipeline, '_get_lookup_value', side_effect=[rv4, rv5]):\n                            with patch.object(pipeline, '_check_filters', return_value=rv7\n                                              ) as 
patch_check_filter:\n                                with patch.object(pipeline, '_update_filters', return_value=rv8\n                                                  ) as patch_update_filter:\n                                    with patch.object(AuditLogger, '__init__', return_value=None):\n                                        with patch.object(AuditLogger, 'information', return_value=rv6\n                                                          ) as patch_audit:\n                                            resp = await client.post('/fledge/control/pipeline', data=json.dumps(data))\n                                            assert 200 == resp.status\n                                            result = await resp.text()\n                                            json_response = json.loads(result)\n                                            assert expected_pipeline == json_response\n                                        patch_audit.assert_called_once_with('CTPAD', expected_pipeline)\n                                patch_update_filter.assert_called_once_with(storage_client_mock, in_use['id'],\n                                                                            data['name'], data['filters'])\n                            patch_check_filter.assert_called_once_with(storage_client_mock, [\"Filter1\"])\n                    patch_in_use.assert_called_once_with(data['name'],\n                                                         {'type': data['source']['type'], 'name': ''},\n                                                         {'type': data['destination']['type'], 'name': ''}, info=True)\n                patch_insert_tbl.assert_called_once_with('control_pipelines', insert_column)\n        args, _ = patch_params.call_args_list[0]\n        assert data == args[0]\n\n    async def test_update(self, client):\n        cpid = 1\n        storage_result = {'id': cpid, 'name': 'Cp1', 'source': {'type': 'Any', 'name': ''}, 'destination': {\n            
'type': 'Broadcast', 'name': ''}, 'enabled': True, 'execution': 'Shared', 'filters': []}\n        column_payload = ('{\"values\": {\"enabled\": \"t\", \"execution\": \"Shared\", \"stype\": 3, \"sname\": \"anonymous\", '\n                          '\"dtype\": 1, \"dname\": \"\"}, \"where\": {\"column\": \"cpid\", \"condition\": \"=\", \"value\": 1}}')\n        payload = {'execution': 'Shared', 'source': {'type': 3, 'name': None}, 'destination': {'type': 1, 'name': None},\n                   'filters': [\"Filter1\"], 'enabled': True}\n        columns = {'enabled': 't', 'execution': 'Shared', 'stype': 3, 'sname': 'anonymous', 'dtype': 1, 'dname': ''}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        rows_affected = {\"response\": \"updated\", \"rows_affected\": 1}\n        update_pipeline = {'id': cpid, 'name': 'Cp1', 'source': {'type': 'API', 'name': ''}, 'destination': {\n            'type': 'Any', 'name': ''}, 'enabled': True, 'execution': 'Shared', 'filters': []}\n        filters = {\"rows\": [{\"fname\": \"Filter1\"}]}\n        rv = await mock_coro(storage_result)\n        rv2 = await mock_coro(columns)\n        rv3 = await mock_coro(rows_affected)\n        rv4 = await mock_coro(None)\n        rv5 = await mock_coro(update_pipeline)\n        rv6 = await mock_coro(True)\n        rv7 = await mock_coro(filters)\n        with patch.object(pipeline, '_get_pipeline', side_effect=[rv, rv5]) as patch_pipeline:\n            with patch.object(pipeline, '_check_parameters', return_value=rv2) as patch_params:\n                with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                    with patch.object(storage_client_mock, 'update_tbl', return_value=rv3) as patch_update_tbl:\n                        with patch.object(pipeline, '_check_filters', return_value=rv6) as patch_check_filter:\n                            with patch.object(pipeline, '_get_table_column_by_value', return_value=rv7\n                          
                    ) as patch_tbl_column:\n                                with patch.object(pipeline, '_update_filters', return_value=rv4\n                                                  ) as patch_update_filter:\n                                    with patch.object(AuditLogger, '__init__', return_value=None):\n                                        with patch.object(AuditLogger, 'information', return_value=rv4\n                                                          ) as patch_audit:\n                                            resp = await client.put('/fledge/control/pipeline/{}'.format(cpid),\n                                                                    data=json.dumps(payload))\n                                            assert 200 == resp.status\n                                            result = await resp.text()\n                                            json_response = json.loads(result)\n                                            assert {\"message\": 'Control Pipeline with ID:<{}> has been '\n                                                               'updated successfully.'.format(cpid)} == json_response\n                                        patch_audit.assert_called_once_with(\n                                            'CTPCH', {\"pipeline\": update_pipeline, \"old_pipeline\": storage_result})\n                                patch_update_filter.assert_called_once_with(storage_client_mock, cpid, 'Cp1',\n                                                                            ['Filter1'], ['Filter1'])\n                            patch_tbl_column.assert_called_once_with('control_filters', 'cpid', str(cpid))\n                        patch_check_filter.assert_called_once_with(storage_client_mock, [\"Filter1\"])\n                    patch_update_tbl.assert_called_once_with('control_pipelines', column_payload)\n            args, _ = patch_params.call_args_list[0]\n            payload['old_pipeline_name'] = 
storage_result['name']\n            assert payload == args[0]\n        assert 2 == patch_pipeline.call_count\n\n    async def test_bad_update(self, client):\n        cpid = 1\n        storage_result = {'id': cpid, 'name': 'Cp1', 'source': {'type': 'Any', 'name': ''}, 'destination': {\n            'type': 'Broadcast', 'name': ''}, 'enabled': True, 'execution': 'Shared', 'filters': []}\n        column_payload = ('{\"values\": {\"enabled\": \"t\", \"execution\": \"Shared\", \"stype\": 3, \"sname\": \"anonymous\", '\n                          '\"dtype\": 1, \"dname\": \"\"}, \"where\": {\"column\": \"cpid\", \"condition\": \"=\", \"value\": 1}}')\n        payload = {'execution': 'Shared', 'source': {'type': 3, 'name': None}, 'destination': {'type': 1, 'name': None},\n                   'filters': [\"Filter1\"], 'enabled': True}\n        columns = {'enabled': 't', 'execution': 'Shared', 'stype': 3, 'sname': 'anonymous', 'dtype': 1, 'dname': ''}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        rows_affected = {\"response\": \"updated\", \"rows_affected\": 1}\n        error_message = \"Filters do not exist as per the given list ['Filter1']\"\n        rv = await mock_coro(storage_result)\n        rv2 = await mock_coro(columns)\n        rv3 = await mock_coro(rows_affected)\n        rv4 = await mock_coro(False)\n        with patch.object(pipeline, '_get_pipeline', return_value=rv) as patch_pipeline:\n            with patch.object(pipeline, '_check_parameters', return_value=rv2) as patch_params:\n                with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                    with patch.object(storage_client_mock, 'update_tbl', return_value=rv3) as patch_update_tbl:\n                        with patch.object(pipeline, '_check_filters', return_value=rv4) as patch_check_filter:\n                            resp = await client.put('/fledge/control/pipeline/{}'.format(cpid),\n                                              
      data=json.dumps(payload))\n                            assert 400 == resp.status\n                            assert error_message == resp.reason\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            assert {\"message\": error_message} == json_response\n                        patch_check_filter.assert_called_once_with(storage_client_mock, [\"Filter1\"])\n                    patch_update_tbl.assert_called_once_with('control_pipelines', column_payload)\n            args, _ = patch_params.call_args_list[0]\n            payload['old_pipeline_name'] = storage_result['name']\n            assert payload == args[0]\n        patch_pipeline.assert_called_once_with(str(cpid))\n\n    async def test__get_pipeline(self):\n        cpid = 3\n        result = {\"rows\": [{\"cpid\": cpid, \"name\": \"CP-3\", \"stype\": 1, \"sname\": \"\", \"dtype\": 5, \"dname\": \"\",\n                            \"enabled\": \"t\", \"execution\": \"Shared\", \"filters\": [\"Filter1\"]}]}\n        rv = await mock_coro(result)\n        rv2 = await mock_coro(\"Any\")\n        rv3 = await mock_coro({\"rows\": [{\"fname\": \"Filter1\"}]})\n        with patch.object(pipeline, '_get_table_column_by_value', side_effect=[rv, rv3]):\n            with patch.object(pipeline, '_get_lookup_value', return_value=rv2) as patch_lookup:\n                res = await pipeline._get_pipeline(cpid, True)\n                expected_rows = result[\"rows\"][0]\n                assert expected_rows['cpid'] == res['id']\n                assert expected_rows['name'] == res['name']\n                assert isinstance(res['source']['type'], str)\n                assert expected_rows['sname'] == res['source']['name']\n                assert isinstance(res['destination']['type'], str)\n                assert expected_rows['dname'] == res['destination']['name']\n                assert res['enabled'] is True\n                assert 
expected_rows['execution'] == res['execution']\n                assert expected_rows['filters'] == res['filters']\n            assert 2 == patch_lookup.call_count\n            args = patch_lookup.call_args_list\n            arg, _ = args[0]\n            assert ('source', result[\"rows\"][0]['stype']) == arg\n            arg, _ = args[1]\n            assert ('destination', result[\"rows\"][0]['dtype']) == arg\n\n    async def test__remove_filters(self):\n        filters = [\"ctrl_cp_Scale\"]\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        delete_result = {'response': 'deleted', 'rows_affected': 1}\n        rv = await mock_coro(delete_result)\n        with patch.object(storage_client_mock, 'delete_from_tbl', return_value=rv) as patch_delete_tbl:\n            with patch.object(c_mgr, 'delete_category_and_children_recursively', return_value=rv) as patch_mgr:\n                await pipeline._remove_filters(storage_client_mock, filters, 1)\n            assert len(filters) * 2 == patch_mgr.call_count\n            args = patch_mgr.call_args_list\n            args1, _ = args[0]\n            assert ('ctrl_cp_Scale',) == args1\n            args2, _ = args[1]\n            assert ('ctrl_cp_Scale',) == args2\n        assert len(filters) * 2 == patch_delete_tbl.call_count\n        args = patch_delete_tbl.call_args_list\n        args1, _ = args[0]\n        assert ('control_filters', '{\"where\": {\"column\": \"cpid\", \"condition\": \"=\", \"value\": 1, '\n                                   '\"and\": {\"column\": \"fname\", \"condition\": \"=\", \"value\": \"ctrl_cp_Scale\"}}}') == args1\n        args2, _ = args[1]\n        assert ('filters', '{\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"ctrl_cp_Scale\"}}') == args2\n\n    @pytest.mark.parametrize(\"filters, is_exists\", [\n        ([\"REN1\", \"Scale\"], True),\n        ([\"Meta\"], False)\n    ])\n    async def 
test__check_filters(self, filters, is_exists):\n        res = {\"rows\": [{\"name\": \"Scale\"}, {\"name\": \"REN1\"}]}\n        rv = await mock_coro(res)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(storage_client_mock, 'query_tbl', return_value=rv) as patch_query_tbl:\n            with patch.object(pipeline._logger, 'warning') as patch_logger:\n                res = await pipeline._check_filters(storage_client_mock, filters)\n                assert res is is_exists\n            if not is_exists:\n                patch_logger.assert_called_once_with(\"Filters do not exist as per the given {} payload.\".format(\n                    filters))\n            else:\n                assert not patch_logger.called\n        patch_query_tbl.assert_called_once_with(\"filters\")\n\n    async def test_insert_case_in_update_filters(self):\n        filter_name = \"Filter1\"\n        name = \"Cp\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        cat_info = {'write': {'default': '[{\"order\": 0, \"service\": \"mod\", \"values\": {\"humidity\": \"12\"}}]',\n                              'description': 'Dispatcher write operation using automation script', 'type': 'string',\n                              'value': '[{\"order\": 0, \"service\": \"mod\", \"values\": {\"humidity\": \"12\"}}]'}}\n        insert_result = {\"rows_affected\": 1, \"response\": \"inserted\"}\n        payload = {\"cpid\": 1, \"forder\": 1, \"fname\": \"ctrl_{}_{}\".format(name, filter_name)}\n        rv = await mock_coro(cat_info)\n        rv2 = await mock_coro(insert_result)\n        with patch.object(c_mgr, 'get_category_all_items', side_effect=[rv, rv]):\n            with patch.object(c_mgr, 'create_category', return_value=rv) as patch_create_cat:\n                with patch.object(storage_client_mock, 'insert_into_tbl', return_value=rv2) as patch_tbl:\n                    with 
patch.object(c_mgr, 'create_child_category', return_value=rv) as patch_child_cat:\n                        await pipeline._update_filters(storage_client_mock, 1, name, [filter_name])\n                    patch_child_cat.assert_called_once_with(\"dispatcher\",\n                                                            [\"ctrl_{}_{}\".format(name, filter_name), filter_name])\n                args = patch_tbl.call_args_list\n                arg, _ = args[0]\n                assert 'control_filters' == arg[0]\n                assert payload == json.loads(arg[1])\n            assert 1 == patch_create_cat.call_count\n\n    async def test_update_case_in_update_filters(self):\n        filter1 = \"Filter1\"\n        filter2 = \"Filter2\"\n        name = \"Cp\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        payload = {\"values\": {\"forder\": 1}, \"where\": {\"column\": \"fname\", \"condition\": \"=\",\n                                                      \"value\": \"ctrl_{}_{}\".format(name, filter2),\n                                                      \"and\": {\"column\": \"cpid\", \"condition\": \"=\", \"value\": 1}}}\n        rv = await mock_coro(None)\n        with patch.object(storage_client_mock, 'update_tbl', return_value=rv) as patch_tbl:\n            with patch.object(pipeline, '_remove_filters', return_value=rv) as patch_filters:\n                await pipeline._update_filters(storage_client_mock, 1, name, [filter2], [filter1, filter2])\n            patch_filters.assert_called_once_with(storage_client_mock, ['ctrl_{}_{}'.format(name, filter1)],\n                                                  1, name)\n        args = patch_tbl.call_args_list\n        arg, _ = args[0]\n        assert 'control_filters' == arg[0]\n        assert payload == json.loads(arg[1])\n\n    async def test_remove_case_in_update_filters(self):\n        filter1 = \"Filter1\"\n        filter2 = \"Filter2\"\n        name = \"Cp\"\n        storage_client_mock = 
MagicMock(StorageClientAsync)\n        rv = await mock_coro(None)\n        with patch.object(pipeline, '_remove_filters', return_value=rv) as patch_filters:\n            await pipeline._update_filters(storage_client_mock, 1, name, [], [filter1, filter2])\n        patch_filters.assert_called_once_with(\n            storage_client_mock, ['ctrl_{}_{}'.format(name, filter1), 'ctrl_{}_{}'.format(\n                name, filter2)], 1, name)\n\n    @pytest.mark.parametrize(\"cf_data, cp_data, error_msg, func_call_count\", [\n        ({\"rows\": [1]}, {\"rows\": [1]}, \"Filters are attached. Pipeline name cannot be changed.\", 1),\n        ({\"rows\": []}, {\"rows\": [1]}, \"Cp pipeline already exists, name cannot be changed.\", 2)\n    ])\n    async def test__check_unique_pipeline(self, cf_data, cp_data, error_msg, func_call_count):\n        name = \"Cp\"\n        cpid = 1\n        rv = await mock_coro(cf_data)\n        rv2 = await mock_coro(cp_data)\n        with patch.object(pipeline, '_get_table_column_by_value', side_effect=[rv, rv2]) as patch_tbl_col:\n            with pytest.raises(Exception) as exc_info:\n                await pipeline._check_unique_pipeline(name, cpid=cpid)\n            assert exc_info.type is ValueError\n            assert error_msg == exc_info.value.args[0]\n        assert func_call_count == patch_tbl_col.call_count\n\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/control_service/test_script_management.py",
    "content": "import copy\nimport json\nimport uuid\n\nfrom unittest.mock import MagicMock, patch\nimport pytest\nfrom aiohttp import web\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.web import middleware\nfrom fledge.services.core import connect\nfrom fledge.services.core import routes\nfrom fledge.services.core import server\nfrom fledge.services.core.scheduler.entities import ManualSchedule\nfrom fledge.services.core.scheduler.scheduler import Scheduler\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2022 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nasync def mock_coro(*args, **kwargs):\n    return None if len(args) == 0 else args[0]\n\n\nasync def mock_schedule(name):\n    schedules = []\n    schedule = ManualSchedule()\n    schedule.repeat = None\n    schedule.time = None\n    schedule.day = None\n    schedule.schedule_id = \"0c6fbbfd-8b36-4d6d-8fcb-5389436aa0fe\"\n    schedule.exclusive = True\n    schedule.enabled = True\n    schedule.name = name\n    schedule.process_name = \"automation_script\"\n    schedules.append(schedule)\n    return schedules\n\n\nclass TestScriptManagement:\n    \"\"\" Automation script API tests\n    \"\"\"\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop, middlewares=[middleware.optional_auth_middleware])\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    async def test_get_all_scripts(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        server.Server.scheduler = Scheduler(None, None)\n        acl_name = \"testACL\"\n        script_name = \"demoScript\"\n        cat_name = 
\"{}-automation-script\".format(script_name)\n        result = {\"count\": 1, \"rows\": [\n            {\"name\": script_name, \"steps\": [{\"write\": {\"order\": 0, \"service\": \"mod\", \"values\": {\"humidity\": \"12\"}}}],\n             \"acl\": acl_name, \"configuration\": {}, \"schedule\": {}}]}\n        payload = {\"return\": [\"name\", \"steps\", \"acl\"]}\n        cat_info = {'write': {'default': '[{\"order\": 0, \"service\": \"mod\", \"values\": {\"humidity\": \"12\"}}]',\n                              'description': 'Dispatcher write operation using automation script', 'type': 'string',\n                              'value': '[{\"order\": 0, \"service\": \"mod\", \"values\": {\"humidity\": \"12\"}}]'}}\n        value = await mock_coro(result)\n        get_sch = await mock_schedule(script_name)\n        get_cat = await mock_coro(cat_info)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as patch_query_tbl:\n                with patch.object(server.Server.scheduler, 'get_schedules',\n                                  return_value=get_sch) as patch_get_schedules:\n                    with patch.object(c_mgr, 'get_category_all_items', return_value=get_cat) as patch_get_all_items:\n                        resp = await client.get('/fledge/control/script')\n                        assert 200 == resp.status\n                        server.Server.scheduler = None\n                        result = await resp.text()\n                        json_response = json.loads(result)\n                        assert 'scripts' in json_response\n                        assert 'acl' in json_response['scripts'][0]\n                        assert acl_name == json_response['scripts'][0]['acl']\n                        assert 'configuration' in json_response['scripts'][0]\n                        assert 
len(json_response['scripts'][0]['configuration'])\n                        assert cat_name == json_response['scripts'][0]['configuration']['categoryName']\n                        assert 'schedule' in json_response['scripts'][0]\n                        assert len(json_response['scripts'][0]['schedule'])\n                        assert script_name == json_response['scripts'][0]['schedule']['name']\n                        assert \"automation_script\" == json_response['scripts'][0]['schedule']['processName']\n                    patch_get_all_items.assert_called_once_with(cat_name)\n                patch_get_schedules.assert_called_once_with()\n            args, _ = patch_query_tbl.call_args\n            assert 'control_script' == args[0]\n            assert payload == json.loads(args[1])\n\n    async def test_bad_get_script_by_name(self, client):\n        script_name = 'blah'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        result = {\"count\": 0, \"rows\": []}\n        payload = {\"return\": [\"name\", \"steps\", \"acl\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"blah\"}}\n        value = await mock_coro(result)\n        message = \"Script with name {} is not found.\".format(script_name)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as patch_query_tbl:\n                resp = await client.get('/fledge/control/script/{}'.format(script_name))\n                assert 404 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            args, _ = patch_query_tbl.call_args\n            assert 'control_script' == args[0]\n            assert payload == json.loads(args[1])\n\n    async def 
test_good_get_script_by_name(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        server.Server.scheduler = Scheduler(None, None)\n        script_name = 'demoScript'\n        cat_name = \"{}-automation-script\".format(script_name)\n        result = {\"count\": 1, \"rows\": [{\"name\": script_name, \"steps\": [\n            {\"write\": {\"order\": 0, \"service\": \"mod\", \"values\": {\"humidity\": \"12\"}}}], \"acl\": \"\"}]}\n        payload = {\"return\": [\"name\", \"steps\", \"acl\"], \"where\": {\"column\": \"name\", \"condition\": \"=\",\n                                                                 \"value\": script_name}}\n        cat_info = {'write': {'default': '[{\"order\": 0, \"service\": \"mod\", \"values\": {\"humidity\": \"12\"}}]',\n                              'description': 'Dispatcher write operation using automation script', 'type': 'string',\n                              'value': '[{\"order\": 0, \"service\": \"mod\", \"values\": {\"humidity\": \"12\"}}]'}}\n\n        async def mock_manual_schedule(name):\n            schedule = ManualSchedule()\n            schedule.repeat = None\n            schedule.time = None\n            schedule.day = None\n            schedule.schedule_id = \"0c6fbbfd-8b36-4d6d-8fcb-5389436aa0fe\"\n            schedule.exclusive = True\n            schedule.enabled = True\n            schedule.name = name\n            schedule.process_name = \"automation_script\"\n            return schedule\n\n        value = await mock_coro(result)\n        get_cat = await mock_coro(cat_info)\n        get_sch = await mock_manual_schedule(script_name)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as patch_query_tbl:\n                with patch.object(c_mgr, 'get_category_all_items', 
return_value=get_cat) as patch_get_all_items:\n                    with patch.object(server.Server.scheduler, 'get_schedule_by_name',\n                                      return_value=get_sch) as patch_get_schedule_by_name:\n                        resp = await client.get('/fledge/control/script/{}'.format(script_name))\n                        server.Server.scheduler = None\n                        assert 200 == resp.status\n                        result = await resp.text()\n                        json_response = json.loads(result)\n                        assert script_name == json_response['name']\n                        assert 'acl' in json_response\n                        assert \"\" == json_response['acl']\n                        assert 'configuration' in json_response\n                        assert len(json_response['configuration'])\n                        assert cat_name == json_response['configuration']['categoryName']\n                        assert 'schedule' in json_response\n                        assert len(json_response['schedule'])\n                        assert script_name == json_response['schedule']['name']\n                        assert \"automation_script\" == json_response['schedule']['processName']\n                    patch_get_schedule_by_name.assert_called_once_with(script_name)\n                patch_get_all_items.assert_called_once_with(cat_name)\n            args, _ = patch_query_tbl.call_args\n            assert 'control_script' == args[0]\n            assert payload == json.loads(args[1])\n\n    @pytest.mark.parametrize(\"payload, message\", [\n        ({}, \"Script name is required.\"),\n        ({\"name\": 1}, \"Script name must be a string.\"),\n        ({\"name\": \"\"}, \"Script name cannot be empty.\"),\n        ({\"name\": \"test\"}, \"steps parameter is required.\"),\n        ({\"name\": \"test\", \"steps\": 1}, \"steps must be a list.\"),\n        ({\"name\": \"test\", \"steps\": [], \"acl\": 1}, \"ACL name must 
be a string.\"),\n        ({\"name\": \"test\", \"steps\": [{\"a\": 1}]}, \"a is an invalid step. Supported step types are ['configure', 'delay', \"\n                                                \"'operation', 'script', 'write'] with case-sensitive.\"),\n        ({\"name\": \"test\", \"steps\": [1, 2]}, \"Steps should be in list of dictionaries.\"),\n        ({\"name\": \"test\", \"steps\": [{\"delay\": 1}]}, \"For delay step nested elements should be in dictionary.\"),\n        ({\"name\": \"test\", \"steps\": [{\"delay\": {}}]}, \"order key is missing for delay step.\"),\n        ({\"name\": \"test\", \"steps\": [{\"delay\": {\"order\": \"1\"}}]}, \"order should be an integer for delay step.\"),\n        ({\"name\": \"test\", \"steps\": [{\"delay\": {\"order\": 1}, \"write\": {}}]}, \"order key is missing for write step.\"),\n        ({\"name\": \"test\", \"steps\": [{\"delay\": {\"order\": 1}, \"write\": {\"order\": \"1\"}}]},\n         \"order should be an integer for write step.\"),\n        ({\"name\": \"test\", \"steps\": [{\"delay\": {\"order\": 1}, \"write\": {\"order\": 1}}]},\n         \"order with value 1 is also found in write. 
It should be unique for each step item.\")\n    ])\n    async def test_bad_add_script(self, client, payload, message):\n        resp = await client.post('/fledge/control/script', data=json.dumps(payload))\n        assert 400 == resp.status\n        assert message == resp.reason\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {\"message\": message} == json_response\n\n    async def test_duplicate_add_script(self, client):\n        script_name = \"test\"\n        request_payload = {\"name\": script_name, \"steps\": []}\n        result = {\"count\": 1, \"rows\": [{\"name\": script_name, \"steps\": [{\"write\": {\"order\": 1, \"speed\": 420}}]}]}\n        value = await mock_coro(result)\n        query_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": script_name}}\n        message = \"Script with name {} already exists.\".format(script_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as patch_query_tbl:\n                resp = await client.post('/fledge/control/script', data=json.dumps(request_payload))\n                assert 409 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            args, _ = patch_query_tbl.call_args\n            assert 'control_script' == args[0]\n            assert query_payload == json.loads(args[1])\n\n    async def test_bad_add_script_with_acl(self, client):\n        script_name = \"test\"\n        acl_name = \"blah\"\n        request_payload = {\"name\": script_name, \"steps\": [], \"acl\": acl_name}\n        script_result = {\"count\": 0, \"rows\": []}\n        
acl_result = {\"count\": 0, \"rows\": []}\n        script_query_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": script_name}}\n        acl_query_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": acl_name}}\n\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n            if table == 'control_acl':\n                assert acl_query_payload == json.loads(payload)\n                return acl_result\n            elif table == 'control_script':\n                assert script_query_payload == json.loads(payload)\n                return script_result\n            else:\n                return {}\n\n        message = \"ACL with name {} is not found.\".format(acl_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                resp = await client.post('/fledge/control/script', data=json.dumps(request_payload))\n                assert 404 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n\n    async def test_good_add_script(self, client):\n        script_name = \"test\"\n        request_payload = {\"name\": script_name, \"steps\": []}\n        result = {\"count\": 0, \"rows\": []}\n        insert_result = {\"response\": \"inserted\", \"rows_affected\": 1}\n        script_query_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": script_name}}\n        value = await mock_coro(result)\n        insert_value = await mock_coro(insert_result)\n        arv = await mock_coro(None)\n        storage_client_mock 
= MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value\n                              ) as query_tbl_patch:\n                with patch.object(storage_client_mock, 'insert_into_tbl', return_value=insert_value\n                                  ) as insert_tbl_patch:\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=arv) as audit_info_patch:\n                            resp = await client.post('/fledge/control/script', data=json.dumps(request_payload))\n                            assert 200 == resp.status\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            assert {'name': script_name, 'steps': []} == json_response\n                        audit_info_patch.assert_called_once_with('CTSAD', request_payload)\n                args, _ = insert_tbl_patch.call_args_list[0]\n                assert 'control_script' == args[0]\n                expected = json.loads(args[1])\n                assert {'name': script_name, 'steps': '[]'} == expected\n            args, _ = query_tbl_patch.call_args_list[0]\n            assert 'control_script' == args[0]\n            expected = json.loads(args[1])\n            assert script_query_payload == expected\n\n    async def test_good_add_script_with_acl(self, client):\n        script_name = \"test\"\n        acl_name = \"blah\"\n        request_payload = {\"name\": script_name, \"steps\": [], \"acl\": acl_name}\n        script_result = {\"count\": 0, \"rows\": []}\n        acl_result = {\"count\": 1, \"rows\": [{\"name\": acl_name, \"service\": [], \"url\": []}]}\n        insert_result = {\"response\": \"inserted\", \"rows_affected\": 1}\n        
script_query_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": script_name}}\n        acl_query_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": acl_name}}\n        arv = await mock_coro(None)\n\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n            if table == 'control_acl':\n                assert acl_query_payload == json.loads(payload)\n                return acl_result\n            elif table == 'control_script':\n                assert script_query_payload == json.loads(payload)\n                return script_result\n            elif table == \"acl_usage\":\n                return {\"count\": 0, \"rows\": []}\n            else:\n                return {}\n\n        async def i_result(*args):\n            table = args[0]\n            payload = args[1]\n            if table == 'control_script':\n                assert {'name': script_name, 'steps': '[]', 'acl': acl_name} == json.loads(payload)\n            elif table == \"acl_usage\":\n                assert {'name': acl_name, 'entity_type': 'script', 'entity_name': script_name} == json.loads(payload)\n            return insert_result\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                with patch.object(storage_client_mock, 'insert_into_tbl', side_effect=i_result):\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=arv) as audit_info_patch:\n                            resp = await client.post('/fledge/control/script', data=json.dumps(request_payload))\n                            assert 200 == resp.status\n                 
           result = await resp.text()\n                            json_response = json.loads(result)\n                            assert request_payload == json_response\n                        audit_info_patch.assert_called_once_with('CTSAD', request_payload)\n\n    @pytest.mark.parametrize(\"payload, message\", [\n        ({}, \"Nothing to update for the given payload.\"),\n        ({\"steps\": 1}, \"steps must be a list.\"),\n        ({\"acl\": 1}, \"ACL must be a string.\"),\n        ({\"steps\": [{\"a\": 1}]}, \"a is an invalid step. Supported step types are \"\n                                \"['configure', 'delay', 'operation', 'script', 'write'] with case-sensitive.\"),\n        ({\"steps\": [1, 2]}, \"Steps should be in list of dictionaries.\"),\n        ({\"steps\": [{\"delay\": 1}]}, \"For delay step nested elements should be in dictionary.\"),\n        ({\"steps\": [{\"delay\": {}}]}, \"order key is missing for delay step.\"),\n        ({\"steps\": [{\"delay\": {\"order\": \"1\"}}]}, \"order should be an integer for delay step.\"),\n        ({\"steps\": [{\"delay\": {\"order\": 1}, \"write\": {}}]}, \"order key is missing for write step.\"),\n        ({\"steps\": [{\"delay\": {\"order\": 1}, \"write\": {\"order\": \"1\"}}]},\n         \"order should be an integer for write step.\"),\n        ({\"steps\": [{\"delay\": {\"order\": 1}, \"write\": {\"order\": 1}}]},\n         \"order with value 1 is also found in write. 
It should be unique for each step item.\")\n    ])\n    async def test_bad_update_script(self, client, payload, message):\n        script_name = \"testScript\"\n        resp = await client.put('/fledge/control/script/{}'.format(script_name), data=json.dumps(payload))\n        assert 400 == resp.status\n        assert message == resp.reason\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {\"message\": message} == json_response\n\n    async def test_update_script_not_found(self, client):\n        script_name = \"test\"\n        req_payload = {\"steps\": []}\n        result = {\"count\": 0, \"rows\": []}\n        value = await mock_coro(result)\n        query_payload = {\"return\": [\"name\", \"steps\", \"acl\"],\n                         \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": script_name}}\n        message = \"No such {} script found.\".format(script_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as query_tbl_patch:\n                resp = await client.put('/fledge/control/script/{}'.format(script_name), data=json.dumps(req_payload))\n                assert 404 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            args, _ = query_tbl_patch.call_args\n            assert 'control_script' == args[0]\n            assert query_payload == json.loads(args[1])\n\n    async def test_update_script_when_acl_not_found(self, client):\n        script_name = \"test\"\n        acl_name = \"blah\"\n        payload = {\"steps\": [{\"write\": {\"order\": 1, \"speed\": 420}}], \"acl\": acl_name}\n        script_result = 
{\"count\": 1, \"rows\": [{\"name\": script_name, \"steps\": [{\"write\": {\"order\": 1, \"speed\": 420}}]}]}\n        script_query_payload = {\"return\": [\"name\", \"steps\", \"acl\"],\n                                \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": script_name}}\n        acl_query_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": acl_name}}\n        acl_result = {\"count\": 0, \"rows\": []}\n\n        async def q_result(*args):\n            table = args[0]\n            if table == 'control_acl':\n                assert acl_query_payload == json.loads(args[1])\n                return acl_result\n            elif table == 'control_script':\n                assert script_query_payload == json.loads(args[1])\n                return script_result\n            else:\n                return {}\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                resp = await client.put('/fledge/control/script/{}'.format(script_name), data=json.dumps(payload))\n                assert 404 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": \"ACL with name {} is not found.\".format(acl_name)} == json_response\n\n    @pytest.mark.parametrize(\"payload\", [\n        {\"steps\": []},\n        {\"steps\": [], \"acl\": \"\"},\n        {\"steps\": [], \"acl\": \"testACL\"}\n    ])\n    async def test_update_script(self, client, payload):\n        script_name = \"test\"\n        acl_name = \"testACL\"\n        new_script = copy.deepcopy(payload)\n        new_script['name'] = script_name\n        script_result = {\"count\": 1, \"rows\": [{\"steps\": [{\"write\": {\"order\": 1, \"speed\": 420}}]}]}\n  
      update_result = {\"response\": \"updated\", \"rows_affected\": 1}\n        steps_payload = payload[\"steps\"]\n        update_value = await mock_coro(update_result)\n        script_query_payload = {\"return\": [\"name\", \"steps\", \"acl\"],\n                                \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": script_name}}\n        acl_query_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": acl_name}}\n        acl_result = {\"count\": 1, \"rows\": [{\"name\": acl_name, \"service\": [], \"url\": []}]}\n        insert_result = {\"response\": \"inserted\", \"rows_affected\": 1}\n\n        async def q_result(*args):\n            table = args[0]\n            if table == 'control_acl':\n                assert acl_query_payload == json.loads(args[1])\n                return acl_result\n            elif table == 'control_script':\n                assert script_query_payload == json.loads(args[1])\n                return script_result\n            elif table == \"acl_usage\":\n                return {\"count\": 0, \"rows\": []}\n            else:\n                return {}\n\n        async def i_result(*args):\n            table = args[0]\n            payload_ins = args[1]\n            if table == \"acl_usage\":\n                assert {'name': acl_name, 'entity_type': 'script',\n                        'entity_name': script_name} == json.loads(payload_ins)\n                return insert_result\n\n        arv = await mock_coro(None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                with patch.object(storage_client_mock, 'update_tbl', return_value=update_value) as patch_update_tbl:\n                    with patch.object(storage_client_mock, 'insert_into_tbl', 
side_effect=i_result):\n                        with patch.object(AuditLogger, '__init__', return_value=None):\n                            with patch.object(AuditLogger, 'information', return_value=arv) as audit_info_patch:\n                                resp = await client.put('/fledge/control/script/{}'.format(script_name),\n                                                        data=json.dumps(payload))\n                                assert 200 == resp.status\n                                result = await resp.text()\n                                json_response = json.loads(result)\n                                assert {\"message\": \"Control script {} updated successfully.\".format(script_name)}\\\n                                       == json_response\n                        args, _ = audit_info_patch.call_args\n                        audit_info_patch.assert_called_once_with(\n                            'CTSCH', {\"script\": new_script, \"old_script\": script_result['rows'][0]})\n                update_args, _ = patch_update_tbl.call_args\n                assert 'control_script' == update_args[0]\n                update_payload = {\"values\": payload,\n                                  \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": script_name}}\n                update_payload[\"values\"][\"steps\"] = str(steps_payload)\n                assert update_payload == json.loads(update_args[1])\n\n    async def test_delete_script_not_found(self, client):\n        script_name = \"test\"\n        req_payload = {\"steps\": []}\n        result = {\"count\": 0, \"rows\": []}\n        value = await mock_coro(result)\n        query_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": script_name}}\n        message = \"No such {} script found.\".format(script_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', 
return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as query_tbl_patch:\n                resp = await client.delete('/fledge/control/script/{}'.format(script_name),\n                                           data=json.dumps(req_payload))\n                assert 404 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            args, _ = query_tbl_patch.call_args\n            assert 'control_script' == args[0]\n            assert query_payload == json.loads(args[1])\n\n    async def test_delete_script_along_with_category_and_schedule(self, client):\n        script_name = 'demoScript'\n        schedule_cat_name = \"{}-automation-script\".format(script_name)\n        schedule_id = \"0c6fbbfd-8b36-4d6d-8fcb-5389436aa0fe\"\n        server.Server.scheduler = Scheduler(None, None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        q_result = {\"count\": 1, \"rows\": [\n            {\"name\": script_name, \"steps\": [{\"delay\": {\"order\": 0, \"duration\": 9003}}], \"acl\": \"\"}]}\n        q_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": script_name}}\n        q_tbl_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"entity_type\", \"condition\": \"=\", \"value\": \"script\",\n                                                       \"and\": {\"column\": \"entity_name\", \"condition\": \"=\",\n                                                               \"value\": script_name}}}\n        delete_payload = {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": script_name}}\n        delete_result = {\"response\": \"deleted\", \"rows_affected\": 1}\n        
disable_sch_result = (True, \"Schedule successfully disabled\")\n        delete_sch_result = (True, 'Schedule deleted successfully.')\n        message = '{} script deleted successfully.'.format(script_name)\n\n        async def query_schedule(*args):\n            table = args[0]\n            payload = args[1]\n            if table == 'control_script':\n                assert q_payload == json.loads(payload)\n                return q_result\n            elif table == \"acl_usage\":\n                return {\"count\": 0, \"rows\": []}\n\n        async def d_schedule(*args):\n            table = args[0]\n            payload = args[1]\n            if table == 'control_script':\n                assert delete_payload == json.loads(payload)\n                return delete_result\n            elif table == \"acl_usage\":\n                return delete_result\n\n        del_cat_and_child = await mock_coro(delete_result)\n        get_sch = await mock_schedule(script_name)\n        disable_sch = await mock_coro(disable_sch_result)\n        delete_sch = await mock_coro(delete_sch_result)\n        arv = await mock_coro(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'delete_category_and_children_recursively',\n                              return_value=del_cat_and_child) as patch_delete_cat_and_child:\n                with patch.object(server.Server.scheduler, 'get_schedules',\n                                  return_value=get_sch) as patch_get_schedules:\n                    with patch.object(server.Server.scheduler, 'disable_schedule',\n                                      return_value=disable_sch) as patch_disable_sch:\n                        with patch.object(server.Server.scheduler, 'delete_schedule',\n                                          return_value=delete_sch) as patch_delete_sch:\n                            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n   
                                           side_effect=query_schedule) as patch_query_tbl:\n                                with patch.object(storage_client_mock, 'delete_from_tbl',\n                                                  side_effect=d_schedule) as patch_delete_tbl:\n                                    with patch.object(AuditLogger, '__init__', return_value=None):\n                                        with patch.object(AuditLogger, 'information',\n                                                          return_value=arv) as audit_info_patch:\n                                            resp = await client.delete('/fledge/control/script/{}'.format(script_name))\n                                            server.Server.scheduler = None\n                                            assert 200 == resp.status\n                                            result = await resp.text()\n                                            json_response = json.loads(result)\n                                            assert {'message': message} == json_response\n                                        audit_info_patch.assert_called_once_with(\n                                            'CTSDL', {'message': message, \"name\": script_name})\n                                patch_delete_tbl.assert_called_once_with(\n                                    'control_script', json.dumps(delete_payload))\n                            args, _ = patch_query_tbl.call_args\n                            assert 'acl_usage' == args[0]\n                            assert json.dumps(q_tbl_payload) == args[1]\n                        patch_delete_sch.assert_called_once_with(uuid.UUID(schedule_id))\n                    patch_disable_sch.assert_called_once_with(uuid.UUID(schedule_id))\n                patch_get_schedules.assert_called_once_with()\n            patch_delete_cat_and_child.assert_called_once_with(schedule_cat_name)\n\n    async def test_delete_script_acl_not_attached(self, 
client):\n        script_name = 'demoScript'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        q_result = {\"count\": 1, \"rows\": [\n            {\"name\": script_name, \"steps\": [{\"delay\": {\"order\": 0, \"duration\": 9003}}], \"acl\": \"\"}]}\n        q_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": script_name}}\n        delete_payload = {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": script_name}}\n        delete_result = {\"response\": \"deleted\", \"rows_affected\": 1}\n        message = '{} script deleted successfully.'.format(script_name)\n        q_tbl_payload = {\"return\": [\"name\"], \"where\": {\"column\": \"entity_type\", \"condition\": \"=\", \"value\": \"script\",\n                                                       \"and\": {\"column\": \"entity_name\", \"condition\": \"=\",\n                                                               \"value\": script_name}}}\n\n        async def query_result(*args):\n            table = args[0]\n            payload = args[1]\n            if table == 'control_script':\n                assert q_payload == json.loads(payload)\n                return q_result\n            elif table == \"acl_usage\":\n                return {\"count\": 0, \"rows\": []}\n\n        async def d_result(*args):\n            table = args[0]\n            payload = args[1]\n            if table == 'control_script':\n                assert delete_payload == json.loads(payload)\n                return delete_result\n            elif table == \"acl_usage\":\n                return delete_result\n\n        arv = await mock_coro(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'delete_category_and_children_recursively', side_effect=Exception):\n                with patch.object(storage_client_mock, 
'query_tbl_with_payload',\n                                  side_effect=query_result) as patch_query_tbl:\n                    with patch.object(storage_client_mock, 'delete_from_tbl',\n                                      side_effect=d_result) as patch_delete_tbl:\n                        with patch.object(AuditLogger, '__init__', return_value=None):\n                            with patch.object(AuditLogger, 'information',\n                                              return_value=arv) as audit_info_patch:\n                                resp = await client.delete('/fledge/control/script/{}'.format(script_name))\n                                assert 200 == resp.status\n                                result = await resp.text()\n                                json_response = json.loads(result)\n                                assert {'message': message} == json_response\n                            audit_info_patch.assert_called_once_with(\n                                'CTSDL', {'message': message, \"name\": script_name})\n                            patch_delete_tbl.assert_called_once_with('control_script', json.dumps(delete_payload))\n                args, _ = patch_query_tbl.call_args\n                assert 'acl_usage' == args[0]\n                assert json.dumps(q_tbl_payload) == args[1]\n\n    @pytest.mark.parametrize(\"payload, message\", [\n        ({}, \"parameters field is required.\"),\n        ({\"parameters\": 1}, \"parameters must be a dictionary.\"),\n        ({\"parameters\": {}}, \"parameters cannot be an empty.\"),\n    ])\n    async def test_bad_schedule_script_with_parameters(self, client, payload, message):\n        resp = await client.post('/fledge/control/script/test/schedule', data=json.dumps(payload))\n        assert 400 == resp.status\n        assert message == resp.reason\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {\"message\": message} == json_response\n\n    
@pytest.mark.parametrize(\"code, message, get_script_result, payload\", [\n        (404, \"Script with name test is not found.\", {\"count\": 0, \"rows\": []}, None),\n        (400, \"write steps KV pair is missing for test script.\", {\"count\": 1, \"rows\": [\n            {\"name\": \"test\", \"steps\": [{\"delay\": {\"order\": 0, \"duration\": 9003}}]}]}, {\"parameters\": {\"foobar\": 1}}),\n        (404, \"foo param is not found in write steps for test script.\", {\"count\": 1, \"rows\": [\n            {\"name\": \"test\", \"steps\": [{\"write\": {\"order\": 0, \"service\": \"rand\",\n                                                  \"values\": {\"random\": \"49\", \"sine\": \"$foobar$\"}}}]}]},\n         {\"parameters\": {\"foo\": 1}}),\n        (404, \"foo param is not found in write steps for test script.\", {\"count\": 1, \"rows\": [\n            {\"name\": \"test\", \"steps\": [{\"delay\": {\"order\": 0, \"duration\": 9003}},\n                                       {\"write\": {\"order\": 0, \"service\": \"rand\", \"values\": {\n                                           \"random\": \"49\", \"sine\": \"$foobar$\"}}}]}]},\n         {\"parameters\": {\"foo\": 1}}),\n        (404, \"bar param is not found in write steps for test script.\", {\"count\": 1, \"rows\": [\n            {\"name\": \"test\", \"steps\": [{\"delay\": {\"order\": 0, \"duration\": 9003}},\n                                       {\"write\": {\"order\": 0, \"service\": \"rand\", \"values\": {\n                                           \"random\": \"$foo$\", \"sine\": \"$foobar$\"}}}]}]},\n         {\"parameters\": {\"foo\": \"1\", \"bar\": \"blah\"}}),\n        (400, \"Value should be in string for foo param.\", {\"count\": 1, \"rows\": [\n            {\"name\": \"test\", \"steps\": [{\"delay\": {\"order\": 0, \"duration\": 9003}},\n                                       {\"write\": {\"order\": 0, \"service\": \"rand\", \"values\": {\n                                           \"random\": 
\"$foo$\", \"sine\": \"$foobar$\"}}}]}]},\n         {\"parameters\": {\"foo\": 1}}),\n        (400, \"Value should be in string for foobar param.\", {\"count\": 1, \"rows\": [\n            {\"name\": \"test\", \"steps\": [{\"delay\": {\"order\": 0, \"duration\": 9003}},\n                                       {\"write\": {\"order\": 0, \"service\": \"rand\", \"values\": {\n                                           \"random\": \"$foo$\", \"sine\": \"$foobar$\"}}}]}]},\n         {\"parameters\": {\"foo\": \"1\", \"foobar\": 13}})\n    ])\n    async def test_schedule_script_not_found(self, client, code, message, get_script_result, payload):\n        script_name = \"test\"\n        value = await mock_coro(get_script_result)\n        query_payload = {\"return\": [\"name\", \"steps\", \"acl\"], \"where\": {\"column\": \"name\", \"condition\": \"=\",\n                                                                       \"value\": script_name}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as query_tbl_patch:\n                resp = await client.post('/fledge/control/script/{}/schedule'.format(script_name),\n                                         data=json.dumps(payload))\n                assert code == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            args, _ = query_tbl_patch.call_args\n            assert 'control_script' == args[0]\n            assert query_payload == json.loads(args[1])\n\n    async def test_schedule_found_for_configuration_script(self, client):\n        script_name = 'demoScript'\n        result = {\"count\": 1, \"rows\": [{\"name\": script_name, \"steps\": [\n       
     {\"write\": {\"order\": 0, \"service\": \"sine\", \"values\": {\"sinusoid\": \"1.2\"}}}], \"acl\": \"\"}]}\n        server.Server.scheduler = Scheduler(None, None)\n        value = await mock_coro(result)\n        get_sch = await mock_schedule(script_name)\n        query_payload = {\"return\": [\"name\", \"steps\", \"acl\"], \"where\": {\"column\": \"name\", \"condition\": \"=\",\n                                                                       \"value\": script_name}}\n        message = \"{} schedule already exists.\".format(script_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as patch_query_tbl:\n                with patch.object(server.Server.scheduler, 'get_schedules',\n                                  return_value=get_sch) as patch_get_schedules:\n                    resp = await client.post('/fledge/control/script/{}/schedule'.format(script_name))\n                    server.Server.scheduler = None\n                    assert 400 == resp.status\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert {\"message\": message} == json_response\n                patch_get_schedules.assert_called_once_with()\n            args, _ = patch_query_tbl.call_args\n            assert 'control_script' == args[0]\n            assert query_payload == json.loads(args[1])\n\n    async def test_schedule_configuration_for_script(self, client):\n        script_name = \"demoScript\"\n        schedule_cat_name = \"{}-automation-script\".format(script_name)\n        sch_name = \"foo\"\n        result = {\"count\": 1, \"rows\": [{\"name\": script_name, \"steps\": [\n            {\"write\": {\"order\": 0, \"service\": \"sine\", \"values\": {\"sinusoid\": \"1.2\"}}}], \"acl\": \"\"}]}\n        
cat_child_result = {'children': ['dispatcherAdvanced', schedule_cat_name]}\n        server.Server.scheduler = Scheduler(None, None)\n        value = await mock_coro(result)\n        sch = await mock_coro(\"\")\n        queue = await mock_coro(True)\n        cat = await mock_coro(None)\n        child = await mock_coro(cat_child_result)\n        get_sch = await mock_schedule(sch_name)\n        query_payload = {\"return\": [\"name\", \"steps\", \"acl\"], \"where\": {\"column\": \"name\", \"condition\": \"=\",\n                                                                       \"value\": script_name}}\n        message = \"Schedule and configuration is created for control script {}\".format(script_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=value) as patch_query_tbl:\n                with patch.object(server.Server.scheduler, 'get_schedules',\n                                  return_value=get_sch) as patch_get_schedules:\n                    with patch.object(c_mgr, 'create_category', return_value=cat) as patch_create_cat:\n                        with patch.object(c_mgr, 'create_child_category',\n                                          return_value=child) as patch_create_child_cat:\n                            with patch.object(server.Server.scheduler, 'save_schedule',\n                                              return_value=sch) as patch_save_schedule:\n                                with patch.object(server.Server.scheduler, 'queue_task',\n                                                  return_value=queue) as patch_queue_task:\n                                    resp = await client.post('/fledge/control/script/{}/schedule'.format(script_name))\n                                    
server.Server.scheduler = None\n                                    assert 200 == resp.status\n                                    result = await resp.text()\n                                    json_response = json.loads(result)\n                                    assert {\"message\": message} == json_response\n                                patch_queue_task.assert_called_once_with(None)\n                            patch_save_schedule.assert_called_once()\n                        patch_create_child_cat.assert_called_once_with('dispatcher', [schedule_cat_name])\n                    assert 1 == patch_create_cat.call_count\n                patch_get_schedules.assert_called_once_with()\n            args, _ = patch_query_tbl.call_args\n            assert 'control_script' == args[0]\n            assert query_payload == json.loads(args[1])\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/plugins/test_config_validator.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nimport pytest\nfrom unittest.mock import patch, AsyncMock\nfrom aiohttp import web\nfrom fledge.services.core import routes\nfrom fledge.services.core.api.plugins.config_validator import ConfigurationValidator\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2025 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestConfigurationValidator:\n\n    @pytest.fixture\n    def client(self, aiohttp_client, loop):\n        app = web.Application()\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(aiohttp_client(app))\n\n    \n    async def test_bad_json_payload(self, client):\n        config = 1\n        resp = await client.put('/fledge/plugin/validate', data=json.dumps(config))\n        assert 400 == resp.status\n        assert \"Configuration data must be a JSON object\" == resp.reason\n\n\n    @pytest.mark.parametrize(\"config\", [\n        {},\n        {\"plugin\": {\"description\": \"Dummy Plugin\", \"type\": \"string\", \"default\": \"dummy\", \"readonly\": \"true\"}}\n    ])\n    async def test_no_config_check_performed(self, client, config):\n        resp = await client.put('/fledge/plugin/validate', data=json.dumps(config))\n        assert 204 == resp.status\n        assert \"No Content\" == resp.reason\n\n    @pytest.mark.parametrize(\"config, ping_result, listening_result, expected_host_result, expected_listening_result\", [\n        # Host reachable, port listening (both pass)\n        ({\"plugin\": {\"description\": \"MQTT Plugin\", \"type\": \"string\", \"default\": \"mqtt\", \"readonly\": \"true\"}, \n          \"brokerHost\": {\"type\": \"string\", \"default\": \"mqtt.example.com\", \"value\": \"mqtt.example.com\"}, \n          \"brokerPort\": {\"type\": \"integer\", \"default\": \"1883\", \"value\": \"1883\"}},\n       
   (True, \"Host 'mqtt.example.com' is reachable\"),\n          (True, \"Service is listening on port 1883\"),\n          \"pass\", \"pass\"),\n        \n        # Host unreachable, port not listening (both fail)\n        ({\"plugin\": {\"description\": \"Modbus Plugin\", \"type\": \"string\", \"default\": \"modbus\", \"readonly\": \"true\"}, \n          \"address\": {\"description\": \"Modbus server address\", \"type\": \"string\", \"default\": \"unreachable.host.com\"}, \n          \"port\": {\"description\": \"Modbus port\", \"type\": \"integer\", \"default\": \"502\"}},\n          (False, \"Host 'unreachable.host.com' appears to be unreachable - no response on common ports (80, 443, 22, 53)\"),\n          (False, \"Connection to unreachable.host.com:502 timed out after 5 seconds\"),\n          \"fail\", \"fail\"),\n        \n        # Host reachable, port not listening (mixed result)\n        ({\"plugin\": {\"description\": \"HTTP Plugin\", \"type\": \"string\", \"default\": \"http\", \"readonly\": \"true\"}, \n          \"ServerHostname\": {\"type\": \"string\", \"default\": \"example.com\"}, \n          \"ServerPort\": {\"type\": \"integer\", \"default\": \"8080\"}},\n          (True, \"Host 'example.com' is reachable\"),\n          (False, \"No service is listening on example.com:8080\"),\n          \"pass\", \"fail\"),\n        \n        # DNS resolution failure\n        ({\"plugin\": {\"description\": \"OPC UA Plugin\", \"type\": \"string\", \"default\": \"opcua\", \"readonly\": \"true\"}, \n          \"url\": {\"type\": \"string\", \"default\": \"opc.tcp://invalid.hostname.xyz:4840/server\"}},\n          (False, \"Cannot resolve hostname 'invalid.hostname.xyz' - please check the hostname is correct\"),\n          (False, \"DNS lookup failed for 'invalid.hostname.xyz'\"),\n          \"fail\", \"fail\"),\n        \n        # Connection timeout scenario\n        ({\"plugin\": {\"description\": \"Database Plugin\", \"type\": \"string\", \"default\": 
\"database\", \"readonly\": \"true\"}, \n          \"host\": {\"type\": \"string\", \"default\": \"slow.database.com\"}, \n          \"port\": {\"type\": \"integer\", \"default\": \"5432\"}},\n          (False, \"Connection test to 'slow.database.com' timed out - host may be unreachable\"),\n          (False, \"Connection to slow.database.com:5432 timed out after 5 seconds\"),\n          \"fail\", \"fail\"),\n        \n        # Broker URL scenario\n        ({\"plugin\": {\"description\": \"MQTT Sparkplug Plugin\", \"type\": \"string\", \"default\": \"mqtt-sparkplug\", \"readonly\": \"true\"}, \n          \"broker\": {\"type\": \"string\", \"default\": \"tcp://broker.hivemq.com:1883\"}},\n          (True, \"Host 'broker.hivemq.com' is reachable\"),\n          (True, \"Service is listening on port 1883\"),\n          \"pass\", \"pass\")\n    ])\n    async def test_validate_configuration(self, client, config, ping_result, listening_result, expected_host_result, expected_listening_result):\n        with patch.object(ConfigurationValidator, 'ping_host', new_callable=AsyncMock) as mock_ping, \\\n             patch.object(ConfigurationValidator, 'check_port_listening', new_callable=AsyncMock) as mock_listening:\n            \n            # Configure mock returns\n            mock_ping.return_value = ping_result\n            mock_listening.return_value = listening_result\n            \n            # Make the API call\n            resp = await client.put('/fledge/plugin/validate', data=json.dumps(config))\n            assert 200 == resp.status\n            \n            result = await resp.text()\n            json_response = json.loads(result)\n            \n            # Verify HostReachable test results\n            assert 'HostReachable' in json_response\n            assert json_response['HostReachable']['description'] == 'Host Reachability'\n            assert json_response['HostReachable']['result'] == expected_host_result\n            \n            # Check for detail 
on failed host reachable\n            if expected_host_result == \"fail\":\n                assert 'detail' in json_response['HostReachable']\n                assert json_response['HostReachable']['detail']['reason'] == ping_result[1]\n            \n            # Verify Listening test results\n            assert 'Listening' in json_response\n            assert json_response['Listening']['description'] == 'Listening'\n            assert json_response['Listening']['result'] == expected_listening_result\n            \n            # Check for detail on failed listening\n            if expected_listening_result == \"fail\":\n                assert 'detail' in json_response['Listening']\n                assert json_response['Listening']['detail']['reason'] == listening_result[1]\n            \n            # Verify mocks were called\n            assert mock_ping.called\n            if any(field in str(config).lower() for field in ['port', 'url', 'address']):  # Address-only configs get default ports\n                assert mock_listening.called\n\n    @pytest.mark.parametrize(\"config, ping_result, listening_result, expected_host_result, expected_listening_result, config_type\", [\n        # a) Default KV pair only - uses default value\n        ({\"plugin\": {\"description\": \"Default Only Plugin\", \"type\": \"string\", \"default\": \"default-only\", \"readonly\": \"true\"}, \n          \"address\": {\"description\": \"Server address\", \"type\": \"string\", \"default\": \"default.example.com\"}, \n          \"port\": {\"description\": \"Server port\", \"type\": \"integer\", \"default\": \"8080\"}},\n          (True, \"Host 'default.example.com' is reachable\"),\n          (False, \"No service is listening on default.example.com:8080\"),\n          \"pass\", \"fail\", \"default_only\"),\n        \n        # b) Value KV pair only - uses value (no default)\n        ({\"plugin\": {\"description\": \"Value Only 
Plugin\", \"type\": \"string\", \"default\": \"value-only\", \"readonly\": \"true\"}, \n          \"hostname\": {\"description\": \"Server hostname\", \"type\": \"string\", \"value\": \"value.example.com\"}, \n          \"port\": {\"description\": \"Server port\", \"type\": \"integer\", \"value\": \"9090\"}},\n          (True, \"Host 'value.example.com' is reachable\"),\n          (True, \"Service is listening on port 9090\"),\n          \"pass\", \"pass\", \"value_only\"),\n        \n        # c) Both default and value KV pairs - value takes precedence over default\n        ({\"plugin\": {\"description\": \"Both Keys Plugin\", \"type\": \"string\", \"default\": \"both-keys\", \"readonly\": \"true\"}, \n          \"ServerHostname\": {\"description\": \"Server hostname\", \"type\": \"string\", \"default\": \"default.server.com\", \"value\": \"override.server.com\"}, \n          \"ServerPort\": {\"description\": \"Server port\", \"type\": \"integer\", \"default\": \"3000\", \"value\": \"4000\"}},\n          (False, \"Host 'override.server.com' appears to be unreachable - no response on common ports (80, 443, 22, 53)\"),\n          (False, \"Connection to override.server.com:4000 timed out after 5 seconds\"),\n          \"fail\", \"fail\", \"both_keys_value_precedence\"),\n        \n        # d) Default and value with URL field - value takes precedence\n        ({\"plugin\": {\"description\": \"URL Override Plugin\", \"type\": \"string\", \"default\": \"url-override\", \"readonly\": \"true\"}, \n          \"url\": {\"description\": \"Service URL\", \"type\": \"string\", \"default\": \"http://default.service.com:8080\", \"value\": \"https://custom.service.com:8443\"}},\n          (True, \"Host 'custom.service.com' is reachable\"),\n          (True, \"Service is listening on port 8443\"),\n          \"pass\", \"pass\", \"url_value_precedence\"),\n        \n        # e) Broker field with default only\n        ({\"plugin\": {\"description\": \"Broker Default Plugin\", 
\"type\": \"string\", \"default\": \"broker-default\", \"readonly\": \"true\"}, \n          \"broker\": {\"description\": \"MQTT Broker\", \"type\": \"string\", \"default\": \"tcp://default.broker.com:1883\"}},\n          (False, \"Cannot resolve hostname 'default.broker.com' - please check the hostname is correct\"),\n          (False, \"DNS lookup failed for 'default.broker.com'\"),\n          \"fail\", \"fail\", \"broker_default_only\"),\n        \n        # f) Mixed fields - some with default, some with value\n        ({\"plugin\": {\"description\": \"Mixed Fields Plugin\", \"type\": \"string\", \"default\": \"mixed\", \"readonly\": \"true\"}, \n          \"brokerHost\": {\"description\": \"Broker hostname\", \"type\": \"string\", \"value\": \"mixed.broker.com\"}, \n          \"brokerPort\": {\"description\": \"Broker port\", \"type\": \"integer\", \"default\": \"1883\"}},\n          (True, \"Host 'mixed.broker.com' is reachable\"),\n          (False, \"No service is listening on mixed.broker.com:1883\"),\n          \"pass\", \"fail\", \"mixed_default_value\"),\n        \n    ])\n    async def test_configuration_key_value_patterns(self, client, config, ping_result, listening_result, \n                                                   expected_host_result, expected_listening_result, config_type):\n        \"\"\"Test different configuration key-value patterns: default only, value only, and both default+value.\"\"\"\n        \n        with patch.object(ConfigurationValidator, 'ping_host', new_callable=AsyncMock) as mock_ping, \\\n             patch.object(ConfigurationValidator, 'check_port_listening', new_callable=AsyncMock) as mock_listening:\n            \n            # Configure mock returns\n            mock_ping.return_value = ping_result\n            mock_listening.return_value = listening_result\n            \n            # Make the API call\n            resp = await client.put('/fledge/plugin/validate', data=json.dumps(config))\n            assert 200 == 
resp.status\n            \n            result = await resp.text()\n            json_response = json.loads(result)\n            \n            # Verify HostReachable test results\n            assert 'HostReachable' in json_response\n            assert json_response['HostReachable']['result'] == expected_host_result\n            \n            # Verify Listening test results  \n            assert 'Listening' in json_response\n            assert json_response['Listening']['result'] == expected_listening_result\n            \n            # Verify the correct values are being used based on config type\n            host_values = json_response['HostReachable']['values'][0]\n            listening_values = json_response['Listening']['values'][0]\n            \n            if config_type == \"default_only\":\n                # Should use default values\n                assert 'address' in host_values\n                assert 'address' in listening_values and 'port' in listening_values\n                \n            elif config_type == \"value_only\":\n                # Should use value fields\n                assert 'hostname' in host_values\n                assert 'hostname' in listening_values and 'port' in listening_values\n                \n            elif config_type == \"both_keys_value_precedence\":\n                # Value should take precedence over default\n                assert 'ServerHostname' in host_values\n                assert 'ServerHostname' in listening_values and 'ServerPort' in listening_values\n                # Verify the value field was used, not default\n                mock_ping.assert_called_with('override.server.com')\n                \n            elif config_type == \"url_value_precedence\":\n                # URL value should take precedence\n                assert 'url' in host_values\n                mock_ping.assert_called_with('custom.service.com')\n                \n            elif config_type == \"broker_default_only\":\n                
# Broker default should be used\n                assert 'broker' in host_values\n                mock_ping.assert_called_with('default.broker.com')\n                \n            elif config_type == \"mixed_default_value\":\n                # Mixed: brokerHost uses value, brokerPort uses default\n                assert 'brokerHost' in listening_values and 'brokerPort' in listening_values\n                mock_ping.assert_called_with('mixed.broker.com')\n            \n            # Verify error details for failed tests\n            if expected_host_result == \"fail\":\n                assert 'detail' in json_response['HostReachable']\n                assert json_response['HostReachable']['detail']['reason'] == ping_result[1]\n                \n            if expected_listening_result == \"fail\":\n                assert 'detail' in json_response['Listening']\n                assert json_response['Listening']['detail']['reason'] == listening_result[1]\n\n    @pytest.mark.parametrize(\"config, expected_error_message\", [\n        # Test invalid URL format\n        ({\"plugin\": {\"description\": \"Invalid URL Plugin\", \"type\": \"string\", \"default\": \"invalid\", \"readonly\": \"true\"}, \n          \"url\": {\"type\": \"string\", \"default\": \"not-a-valid-url\"}},\n          \"Invalid URL format 'not-a-valid-url' - please check the URL is correct\"),\n        \n        # Test missing value and default\n        ({\"plugin\": {\"description\": \"Missing Value Plugin\", \"type\": \"string\", \"default\": \"missing\", \"readonly\": \"true\"}, \n          \"address\": {\"type\": \"string\", \"description\": \"Server address\"}},\n          \"Configuration item 'address' must have either 'value' or 'default' key\")\n    ])\n    
async def test_validate_configuration_error_cases(self, client, config, expected_error_message):\n        \"\"\"Test configuration validation error handling without network calls.\"\"\"\n        \n        resp = await client.put('/fledge/plugin/validate', data=json.dumps(config))\n        \n        if \"Invalid URL format\" in expected_error_message:\n            # URL format errors return 200 with fail result\n            assert 200 == resp.status\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert json_response['HostReachable']['result'] == 'fail'\n            assert expected_error_message in json_response['HostReachable']['detail']['reason']\n        else:\n            # Missing value/default errors return 400\n            assert 400 == resp.status\n            assert expected_error_message in resp.reason\n\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/plugins/test_discovery.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nfrom unittest.mock import patch\nimport pytest\nfrom aiohttp import web\nfrom fledge.services.core import routes\nfrom fledge.common.plugin_discovery import PluginDiscovery\nfrom fledge.services.core.api.plugins import common\nfrom fledge.services.core.api.plugins.exceptions import *\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestPluginDiscoveryApi:\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    @pytest.mark.parametrize(\"method, result, is_config\", [\n        (\"/fledge/plugins/installed\", {\"name\": \"sinusoid\", \"version\": \"1.0\", \"type\": \"south\",\n                                       \"description\": \"sinusoid plugin\"}, False),\n        (\"/fledge/plugins/installed?config=true\",\n         {\"name\": \"sinusoid\", \"version\": \"1.0\", \"type\": \"south\", \"description\": \"sinusoid plugin\", \"config\": {\n             \"plugin\": {\"description\": \"sinusoid plugin\", \"type\": \"string\", \"default\": \"sinusoid\",\n                        \"readonly\": \"true\"}}}, True),\n        (\"/fledge/plugins/installed?config=false\", {\"name\": \"sinusoid\", \"version\": \"1.0\", \"type\": \"south\",\n                                                    \"description\": \"sinusoid plugin\"}, False)\n    ])\n    async def test_get_plugins_installed(self, client, method, result, is_config):\n        with patch.object(PluginDiscovery, 'get_plugins_installed', return_value=result) as patch_get_plugin_installed:\n            resp = await client.get('{}'.format(method))\n            assert 200 == resp.status\n            r = await 
resp.text()\n            json_response = json.loads(r)\n            assert {'plugins': result} == json_response\n        patch_get_plugin_installed.assert_called_once_with(None, is_config)\n\n    @pytest.mark.parametrize(\"param\", [\n        \"north\", \"North\", \"NORTH\", \"nOrTH\", \"south\", \"South\", \"SOUTH\", \"filter\", \"Filter\", \"FILTER\", \"notify\", \"Notify\",\n        \"rule\", \"RULE\", \"RuLe\"\n    ])\n    async def test_get_plugins_installed_by_params(self, client, param):\n        with patch.object(PluginDiscovery, 'get_plugins_installed', return_value={}) as patch_get_plugin_installed:\n            resp = await client.get('/fledge/plugins/installed?type={}'.format(param))\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert {'plugins': {}} == json_response\n        patch_get_plugin_installed.assert_called_once_with(param.lower(), False)\n\n    @pytest.mark.parametrize(\"param, _type, result, is_config\", [\n        (\"?type=north&config=false\", \"north\",\n         {\"name\": \"http\", \"version\": \"1.0.0\", \"type\": \"north\", \"description\": \"HTTP North-C plugin\"}, False),\n        (\"?type=south&config=false\", \"south\",\n         {\"name\": \"sinusoid\", \"version\": \"1.0\", \"type\": \"south\", \"description\": \"sinusoid plugin\"}, False),\n        (\"?type=filter&config=false\", \"filter\",\n         {\"name\": \"scale\", \"version\": \"1.0.0\", \"type\": \"filter\", \"description\": \"Filter Scale plugin\"}, False),\n        (\"?type=notify&config=false\", \"notify\",\n         {\"name\": \"email\", \"version\": \"1.0.0\", \"type\": \"notify\", \"description\": \"Email notification plugin\"}, False),\n        (\"?type=rule&config=false\", \"rule\",\n         {\"name\": \"OverMaxRule\", \"version\": \"1.0.0\", \"type\": \"rule\", \"description\": \"The OverMaxRule plugin\"}, False),\n        (\"?type=north&config=true\", \"north\",\n         
{\"name\": \"http\", \"version\": \"1.0.0\", \"type\": \"north\", \"description\": \"HTTP North-C plugin\",\n          \"config\": {\"plugin\": {\"description\": \"HTTP North-C plugin\", \"type\": \"string\", \"default\": \"http-north\"}}},\n         True),\n        (\"?type=south&config=true\", \"south\",\n         {\"name\": \"sinusoid\", \"version\": \"1.0\", \"type\": \"south\", \"description\": \"sinusoid plugin\",\n          \"config\": {\"plugin\": {\"description\": \"sinusoid plugin\", \"type\": \"string\", \"default\": \"sinusoid\",\n                                \"readonly\": \"true\"}}}, True),\n        (\"?type=filter&config=true\", \"filter\",\n         {\"name\": \"scale\", \"version\": \"1.0.0\", \"type\": \"filter\", \"description\": \"Filter Scale plugin\",\n          \"config\": {\"offset\": {\"default\": \"0.0\", \"type\": \"float\", \"description\": \"A constant offset\"},\n                     \"factor\": {\"default\": \"100.0\", \"type\": \"float\", \"description\": \"Scale factor for a reading.\"},\n                     \"plugin\": {\"default\": \"scale\", \"type\": \"string\", \"description\": \"Scale filter plugin\"},\n                     \"enable\": {\"default\": \"false\", \"type\": \"boolean\",\n                                \"description\": \"A switch that can be used to enable or disable.\"}}}, True),\n        (\"?type=notify&config=true\", \"notify\",\n         {\"name\": \"email\", \"version\": \"1.0.0\", \"type\": \"notify\", \"description\": \"Email notification plugin\",\n          \"config\": {\"plugin\": {\"type\": \"string\", \"description\": \"Email notification plugin\", \"default\": \"email\"}}},\n         True),\n        (\"?type=rule&config=true\", \"rule\",\n         {\"name\": \"OverMaxRule\", \"version\": \"1.0.0\", \"type\": \"rule\", \"description\": \"The OverMaxRule plugin\",\n          \"config\": {\"plugin\": {\"type\": \"string\", \"description\": \"The OverMaxRule notification rule plugin\",\n               
                 \"default\": \"OverMaxRule\"}}}, True)\n    ])\n    async def test_get_plugins_installed_by_type_and_config(self, client, param, _type, result, is_config):\n        with patch.object(PluginDiscovery, 'get_plugins_installed', return_value=result) as patch_get_plugin_installed:\n            resp = await client.get('/fledge/plugins/installed{}'.format(param))\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert {'plugins': result} == json_response\n        patch_get_plugin_installed.assert_called_once_with(_type, is_config)\n\n    @pytest.mark.parametrize(\"param, message\", [\n        (\"?type=blah\", \"Invalid plugin type. Must be 'north' or 'south' or 'filter' or 'notify' or 'rule'.\"),\n        (\"?type=notifyDelivery\", \"Invalid plugin type. Must be 'north' or 'south' or 'filter' or 'notify' or 'rule'.\"),\n        (\"?type=notifyRule\", \"Invalid plugin type. Must be 'north' or 'south' or 'filter' or 'notify' or 'rule'.\"),\n        (\"?config=blah\", 'Only \"true\", \"false\", true, false are allowed for value of config.'),\n        (\"?config=False\", 'Only \"true\", \"false\", true, false are allowed for value of config.'),\n        (\"?config=True\", 'Only \"true\", \"false\", true, false are allowed for value of config.'),\n        (\"?config=f\", 'Only \"true\", \"false\", true, false are allowed for value of config.'),\n        (\"?config=t\", 'Only \"true\", \"false\", true, false are allowed for value of config.'),\n        (\"?config=1\", 'Only \"true\", \"false\", true, false are allowed for value of config.'),\n        (\"?config=Y\", 'Only \"true\", \"false\", true, false are allowed for value of config.'),\n        (\"?config=Yes\", 'Only \"true\", \"false\", true, false are allowed for value of config.'),\n        (\"?config=No&type=north\", 'Only \"true\", \"false\", true, false are allowed for value of config.'),\n        
(\"?config=TRUE&type=south\", 'Only \"true\", \"false\", true, false are allowed for value of config.'),\n        (\"?type=south&config=0\", 'Only \"true\", \"false\", true, false are allowed for value of config.')\n    ])\n    async def test_bad_get_plugins_installed(self, client, param, message):\n        resp = await client.get('/fledge/plugins/installed{}'.format(param))\n        assert 400 == resp.status\n        assert message == resp.reason\n\n    @pytest.mark.parametrize(\"param, output, result\", [\n        (\"\", ['fledge-dev', 'fledge-south-sinusoid', 'fledge-service-notification', 'fledge-gui',\n              'fledge-service-new', 'fledge-quickstart', 'fledge-north-http', 'fledge-filter-asset',\n              'fledge-notify-slack', 'fledge-rule-average'], ['fledge-south-sinusoid', 'fledge-north-http',\n                                                                'fledge-filter-asset', 'fledge-notify-slack',\n                                                                'fledge-rule-average']),\n        (\"?type=south\", ['fledge-south-random'], ['fledge-south-random']),\n        (\"?type=north\", ['fledge-north-http'], ['fledge-north-http']),\n        (\"?type=filter\", ['fledge-filter-asset'], ['fledge-filter-asset']),\n        (\"?type=notify\", ['fledge-notify-slack'], ['fledge-notify-slack']),\n        (\"?type=rule\", ['fledge-rule-average'], ['fledge-rule-average'])\n    ])\n    async def test_get_plugins_available(self, client, param, output, result):\n        async def async_mock(return_value):\n            return return_value\n\n        log_path = 'log/190801-12-01-05.log'\n        _rv = await async_mock((output, log_path))\n        with patch.object(common, 'fetch_available_packages', return_value=(_rv)) as patch_fetch_available_package:\n            resp = await client.get('/fledge/plugins/available{}'.format(param))\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = json.loads(r)\n     
       assert result == json_response['plugins']\n            assert log_path == json_response['link']\n        if param:\n            patch_fetch_available_package.assert_called_once_with(param.split(\"=\")[1])\n\n    async def test_bad_type_get_plugins_available(self, client):\n        resp = await client.get('/fledge/plugins/available?type=blah')\n        assert 400 == resp.status\n        assert \"Invalid package type. Must be 'north' or 'south' or 'filter' or 'notify' or 'rule'.\" == resp.reason\n\n    async def test_bad_get_plugins_available(self, client):\n        log_path = \"log/190801-12-01-05.log\"\n        msg = \"Fetch available plugins package request failed.\"\n        with patch.object(common, 'fetch_available_packages', side_effect=PackageError(log_path)) as patch_fetch_available_package:\n            resp = await client.get('/fledge/plugins/available')\n            assert 400 == resp.status\n            assert msg == resp.reason\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert log_path == json_response['link']\n            assert msg == json_response['message']\n        patch_fetch_available_package.assert_called_once_with('')\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/plugins/test_install.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nfrom unittest.mock import patch, MagicMock\nimport pytest\nfrom aiohttp import web\n\nfrom fledge.services.core import routes\nfrom fledge.services.core.api.plugins import install as plugins_install\nfrom fledge.services.core.api.plugins import common\nfrom fledge.services.core.api.plugins.exceptions import *\nfrom fledge.services.core import connect\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.plugin_discovery import PluginDiscovery\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestPluginInstall:\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    @pytest.mark.parametrize(\"param, message\", [\n        ({\"create\": \"blah\"}, \"file format param is required\"),\n        ({\"format\": \"deb\", \"url\": \"http://blah.co.in\"}, \"URL, checksum params are required\"),\n        ({\"format\": \"tar\"}, \"URL, checksum params are required\"),\n        ({\"format\": \"deb\", \"compressed\": \"false\"}, \"URL, checksum params are required\"),\n        ({\"format\": \"tar\", \"type\": \"north\"}, \"URL, checksum params are required\"),\n        ({\"format\": \"tar\", \"checksum\": \"4015c2dea1cc71dbf70a23f6a203eeb6\"},\n         \"URL, checksum params are required\"),\n        ({\"format\": \"tar\", \"checksum\": \"4015c2dea1cc71dbf70a23f6a203eeb6\"},\n         \"URL, checksum params are required\"),\n        ({\"format\": \"tar\", \"compressed\": \"false\", \"checksum\": \"4015c2dea1cc71dbf70a23f6a203eeb6\"},\n         \"URL, checksum params are required\"),\n        ({\"format\": \"deb\", \"type\": 
\"north\", \"checksum\": \"4015c2dea1cc71dbf70a23f6a203eeb6\"},\n         \"URL, checksum params are required\"),\n        ({\"url\": \"http://blah.co.in\", \"format\": \"tar\", \"checksum\": \"4015c2dea1cc71dbf70a23f6a203eeb6\"},\n         \"Plugin type param is required\"),\n        ({\"url\": \"http://blah.co.in\", \"format\": \"tar\", \"type\": \"blah\", \"checksum\": \"4015c2dea1cc71dbf70a23f6a203eeb6\"},\n         \"Invalid plugin type. Must be 'north' or 'south' or 'filter' or 'notify' or 'rule'\"),\n        ({\"url\": \"http://blah.co.in\", \"format\": \"blah\", \"type\": \"filter\", \"checksum\": \"4015c2dea1cc71dbf70a23f6a203ee\"},\n         \"Invalid format. Must be 'tar' or 'deb' or 'rpm' or 'repository'\"),\n        ({\"url\": \"http://blah.co.in\", \"format\": \"tar\", \"type\": \"south\", \"checksum\": \"4015c2dea1cc71dbf70a23f6a203eeb6\",\n          \"compressed\": \"blah\"}, 'Only \"true\", \"false\", true, false are allowed for value of compressed.'),\n        ({\"format\": \"repository\"}, \"name param is required\"),\n        ({\"format\": \"repository\", \"version\": \"1.6\"}, \"name param is required\"),\n        ({\"format\": \"repository\", \"name\": \"sinusoid\"}, 'name should start with \"fledge-\" prefix'),\n        ({\"format\": \"repository\", \"name\": \"fledge-south-sinusoid\", \"version\": \"1.6\"},\n         \"Invalid version; it should be empty or a valid semantic version X.Y.Z i.e. 
major.minor.patch to install \"\n         \"as per the configured repository\"),\n    ])\n    async def test_bad_post_plugins_install(self, client, param, message):\n        resp = await client.post('/fledge/plugins', data=json.dumps(param))\n        assert 400 == resp.status\n        assert message == resp.reason\n\n    async def test_bad_checksum_post_plugins_install(self, client):\n        async def async_mock():\n            return [tar_file_name]\n\n        tar_file_name = 'Benchmark.tar'\n        checksum_value = \"4015c2dea1cc71dbf70a23f6a203eeb6\"\n        url_value = \"http://10.2.5.26:5000//download/c/{}\".format(tar_file_name)\n        param = {\"url\": url_value, \"format\": \"tar\", \"type\": \"south\", \"checksum\": checksum_value, \"compressed\": \"true\"}\n        _rv = await async_mock()\n        with patch.object(plugins_install, 'download', return_value=_rv) as download_patch:\n            with patch.object(plugins_install, 'validate_checksum', return_value=False) as checksum_patch:\n                resp = await client.post('/fledge/plugins', data=json.dumps(param))\n                assert 400 == resp.status\n                assert 'Checksum is failed.' 
== resp.reason\n            checksum_patch.assert_called_once_with(checksum_value, tar_file_name)\n        download_patch.assert_called_once_with([url_value])\n\n    async def test_bad_post_plugins_install_with_tar(self, client):\n        async def async_mock(ret_val):\n            return ret_val\n\n        def sync_mock(ret_val):\n            return ret_val\n\n        plugin_name = 'mqtt_sparkplug'\n        sub_dir = 'sparkplug_b'\n        tar_file_name = 'fledge-south-mqtt_sparkplug-1.5.2.tar'\n        files = [plugin_name, '{}/__init__.py'.format(plugin_name), '{}/README.rst'.format(plugin_name),\n                 '{}/{}.py'.format(plugin_name, plugin_name), '{}/requirements.txt'.format(plugin_name),\n                 '{}/{}/__init__.py'.format(plugin_name, sub_dir), '{}/{}/{}.py'.format(plugin_name, sub_dir, sub_dir),\n                 '{}/{}/{}_pb2.py'.format(plugin_name, sub_dir, sub_dir)]\n        checksum_value = \"77b74584e09fc28467599636e47f3fc5\"\n        url_value = \"http://10.2.5.26:5000/download/{}\".format(tar_file_name)\n        msg = 'Could not find a version that satisfies the requirement pt==1.4.0'\n        param = {\"url\": url_value, \"format\": \"tar\", \"type\": \"south\", \"checksum\": checksum_value}\n        _rv = await async_mock([tar_file_name])\n        with patch.object(plugins_install, 'download', return_value=_rv) as download_patch:\n            with patch.object(plugins_install, 'validate_checksum', return_value=True) as checksum_patch:\n                with patch.object(plugins_install, 'extract_file', return_value=sync_mock(files)) as extract_patch:\n                    with patch.object(plugins_install, 'copy_file_install_requirement',\n                                      return_value=(1, msg)) as copy_file_install_requirement_patch:\n                        resp = await client.post('/fledge/plugins', data=json.dumps(param))\n                        assert 400 == resp.status\n                        assert msg == resp.reason\n 
                   assert copy_file_install_requirement_patch.called\n                extract_patch.assert_called_once_with(tar_file_name, False)\n            checksum_patch.assert_called_once_with(checksum_value, tar_file_name)\n        download_patch.assert_called_once_with([url_value])\n\n    async def test_post_plugins_install_with_tar(self, client):\n        async def async_mock(ret_val):\n            return ret_val\n\n        def sync_mock(ret_val):\n            return ret_val\n\n        plugin_name = 'coap'\n        tar_file_name = 'fledge-south-coap-1.5.2.tar'\n        files = [plugin_name, '{}/__init__.py'.format(plugin_name), '{}/README.rst'.format(plugin_name),\n                 '{}/{}.py'.format(plugin_name, plugin_name), '{}/requirements.txt'.format(plugin_name)]\n        checksum_value = \"4015c2dea1cc71dbf70a23f6a203eeb6\"\n        url_value = \"http://10.2.5.26:5000/download/{}\".format(tar_file_name)\n        param = {\"url\": url_value, \"format\": \"tar\", \"type\": \"south\", \"checksum\": checksum_value}\n        _rv = await async_mock([tar_file_name])\n        with patch.object(plugins_install, 'download', return_value=_rv) as download_patch:\n            with patch.object(plugins_install, 'validate_checksum', return_value=True) as checksum_patch:\n                with patch.object(plugins_install, 'extract_file', return_value=sync_mock(files)) as extract_patch:\n                    with patch.object(plugins_install, 'copy_file_install_requirement', return_value=(0, 'Success')) \\\n                            as copy_file_install_requirement_patch:\n                        resp = await client.post('/fledge/plugins', data=json.dumps(param))\n                        assert 200 == resp.status\n                        r = await resp.text()\n                        output = json.loads(r)\n                        assert '{} is successfully downloaded and installed'.format(tar_file_name) == output['message']\n                    assert 
copy_file_install_requirement_patch.called\n                extract_patch.assert_called_once_with(tar_file_name, False)\n            checksum_patch.assert_called_once_with(checksum_value, tar_file_name)\n        download_patch.assert_called_once_with([url_value])\n\n    async def test_post_plugins_install_with_compressed_tar(self, client):\n        async def async_mock(ret_val):\n            return ret_val\n\n        def sync_mock(ret_val):\n            return ret_val\n\n        plugin_name = 'rms'\n        tar_file_name = 'fledge-filter-rms-1.5.2.tar.gz'\n        files = [plugin_name, '{}/lib{}.so.1'.format(plugin_name, plugin_name),\n                 '{}/lib{}.so'.format(plugin_name, plugin_name)]\n        checksum_value = \"2019c2dea1cc71dbf70a23f6a203fdgh\"\n        url_value = \"http://10.2.5.26:5000/filter/download/{}\".format(tar_file_name)\n        param = {\"url\": url_value, \"format\": \"tar\", \"type\": \"filter\", \"checksum\": checksum_value, \"compressed\": \"true\"}\n        _rv = await async_mock([tar_file_name])\n        with patch.object(plugins_install, 'download', return_value=_rv) as download_patch:\n            with patch.object(plugins_install, 'validate_checksum', return_value=True) as checksum_patch:\n                with patch.object(plugins_install, 'extract_file', return_value=sync_mock(files)) as extract_patch:\n                    with patch.object(plugins_install, 'copy_file_install_requirement', return_value=(0, 'Success')) \\\n                            as copy_file_install_requirement_patch:\n                        resp = await client.post('/fledge/plugins', data=json.dumps(param))\n                        assert 200 == resp.status\n                        r = await resp.text()\n                        output = json.loads(r)\n                        assert '{} is successfully downloaded and installed'.format(tar_file_name) == output['message']\n                    assert copy_file_install_requirement_patch.called\n               
 extract_patch.assert_called_once_with(tar_file_name, True)\n            checksum_patch.assert_called_once_with(checksum_value, tar_file_name)\n        download_patch.assert_called_once_with([url_value])\n\n    @pytest.mark.parametrize(\"plugin_name, checksum, file_format\", [\n        ('coap', '4015c2dea1cc71dbf70a23f6a203eeb6', 'deb'),\n        ('sinusoid', '04bd924371488d981d9a60bcb093bedc', 'rpm')\n    ])\n    async def test_post_plugins_install_package(self, client, plugin_name, checksum, file_format):\n        async def async_mock():\n            return [plugin_name, '{}/__init__.py'.format(plugin_name), '{}/README.rst'.format(plugin_name),\n                    '{}/{}.py'.format(plugin_name, plugin_name), '{}/requirements.txt'.format(plugin_name)]\n\n        url_value = \"http://10.2.5.26:5000/download/fledge-south-{}-1.6.0.{}\".format(plugin_name, file_format)\n        param = {\"url\": url_value, \"format\": file_format, \"type\": \"south\", \"checksum\": checksum}\n        pkg_mgt = 'yum' if file_format == 'rpm' else 'apt'\n        _rv = await async_mock()\n        with patch.object(plugins_install, 'download', return_value=_rv) as download_patch:\n            with patch.object(plugins_install, 'validate_checksum', return_value=True) as checksum_patch:\n                with patch.object(plugins_install, 'install_package', return_value=(0, 'Success')) \\\n                        as install_package_patch:\n                    resp = await client.post('/fledge/plugins', data=json.dumps(param))\n                    assert 200 == resp.status\n                    result = await resp.text()\n                    response = json.loads(result)\n                    assert {\"message\": \"{} is successfully downloaded and installed\".format(plugin_name)} == response\n                install_package_patch.assert_called_once_with(plugin_name, pkg_mgt)\n            checksum_patch.assert_called_once_with(checksum, plugin_name)\n        
download_patch.assert_called_once_with([url_value])\n\n    @pytest.mark.parametrize(\"plugin_name, checksum, file_format\", [\n        ('coap', '4015c2dea1cc71dbf70a23f6a203eeb6', 'deb'),\n        ('sinusoid', '04bd924371488d981d9a60bcb093bedc', 'rpm')\n    ])\n    async def test_bad_post_plugins_install_package(self, client, plugin_name, checksum, file_format):\n        async def async_mock():\n            return [plugin_name, '{}/__init__.py'.format(plugin_name), '{}/README.rst'.format(plugin_name),\n                    '{}/{}.py'.format(plugin_name, plugin_name), '{}/requirements.sh'.format(plugin_name)]\n\n        url_value = \"http://10.2.5.26:5000/download/fledge-south-{}-1.6.0.{}\".format(plugin_name, file_format)\n        param = {\"url\": url_value, \"format\": file_format, \"type\": \"south\", \"checksum\": checksum}\n        if file_format == 'rpm':\n            pkg_mgt = 'yum'\n            msg = '400: Loaded plugins: amazon-id, rhui-lb, search-disabled-reposExamining ' \\\n                  '/usr/local/fledge/data/plugins/fledge-south-sinusoid-1.6.0.rpm: ' \\\n                  'fledge-south-sinusoid-1.6.0-1.x86_64Marking ' \\\n                  '/usr/local/fledge/data/plugins/fledge-south-sinusoid-1.6.0-1.x86_64.rpm to be installed' \\\n                  'Resolving Dependencies--> Running transaction check---> Package ' \\\n                  'fledge-south-sinusoid.x86_64 0:1.6.0-1 will be installed--> ' \\\n                  'Processing Dependency: fledge >= 1.6 for package: fledge-south-sinusoid-1.6.0-1.x86_64' \\\n                  'You could try using --skip-broken to work around the problem ' \\\n                  'You could try running: rpm -Va --nofiles --nodigest'\n        else:\n            pkg_mgt = 'apt'\n            msg = 'The following packages have unmet dependencies: fledge-south-coap:armhf : Depends: ' \\\n                  'fledge:armhf (>= 1.6) but it is not installableE: Unable to correct problems, ' \\\n                  'you have 
held broken packages.'\n\n        _rv = await async_mock()\n        with patch.object(plugins_install, 'download', return_value=_rv) as download_patch:\n            with patch.object(plugins_install, 'validate_checksum', return_value=True) as checksum_patch:\n                with patch.object(plugins_install, 'install_package', return_value=(256, msg)) as install_package_patch:\n                    resp = await client.post('/fledge/plugins', data=json.dumps(param))\n                    assert 400 == resp.status\n                    assert msg == resp.reason\n                install_package_patch.assert_called_once_with(plugin_name, pkg_mgt)\n            checksum_patch.assert_called_once_with(checksum, plugin_name)\n        download_patch.assert_called_once_with([url_value])\n\n    async def test_post_bad_plugin_install_package_from_repo(self, client):\n        async def async_mock(return_value):\n            return return_value\n\n        plugin = \"fledge-south-sinusoid\"\n        param = {\"format\": \"repository\", \"name\": plugin}\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"install\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": plugin}}}\n        plugin_list = [{'name': 'randomwalk', 'type': 'south', 'description': 'Generate random walk data points',\n                        'version': '1.8.2', 'installedDirectory': 'south/randomwalk',\n                        'packageName': 'fledge-south-randomwalk'}]\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await async_mock({'count': 0, 'rows': []})\n        _rv2 = await async_mock(([], 'log/190801-12-41-13.log'))\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              return_value=_rv1) as 
query_tbl_patch:\n                with patch.object(PluginDiscovery, 'get_plugins_installed', return_value=plugin_list\n                                  ) as plugin_installed_patch:\n                    with patch.object(common, 'fetch_available_packages', return_value=(_rv2)) as patch_fetch_available_package:\n                        resp = await client.post('/fledge/plugins', data=json.dumps(param))\n                        assert 404 == resp.status\n                        assert \"'{} plugin is not available for the configured repository'\".format(plugin) == resp.reason\n                    patch_fetch_available_package.assert_called_once_with()\n                plugin_installed_patch.assert_called_once_with('south', False)\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n\n    @pytest.mark.parametrize(\"plugin_name\", [\n        'fledge-south-modbusc',\n        'fledge-north-kafka',\n        'fledge-filter-rms',\n        'fledge-notify-email',\n        'fledge-rule-outofbound'\n    ])\n    async def test_post_plugins_install_package_from_repo(self, client, plugin_name, loop):\n        async def async_mock(return_value):\n            return return_value\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        param = {\"format\": \"repository\", \"name\": plugin_name}\n        insert_row_resp = {'count': 1, 'rows': [{\n                \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n                \"name\": plugin_name,\n                \"action\": \"install\",\n                \"status\": -1,\n                \"log_file_uri\": \"\"\n            }]}\n        query_tbl_payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"install\",\n                                                             \"and\": {\"column\": \"name\", \"condition\": \"=\",\n                                       
                              \"value\": plugin_name}}}\n        \n        _rv1 = await async_mock(([plugin_name, \"fledge-north-http\", \"fledge-service-notification\"],\n                                    'log/190801-12-41-13.log'))\n        _rv2 = await async_mock({\"response\": \"inserted\", \"rows_affected\": 1})\n        _se1 = await async_mock({'count': 0, 'rows': []})\n        _se2 = await async_mock(insert_row_resp)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              side_effect=[_se1, _se2]) as query_tbl_patch:\n                with patch.object(common, 'fetch_available_packages', return_value=(_rv1)) as patch_fetch_available_package:\n                    with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv2) as insert_tbl_patch:\n                        with patch.object(plugins_install._LOGGER, \"info\") as log_info:\n                            with patch('multiprocessing.Process'):\n                                resp = await client.post('/fledge/plugins', data=json.dumps(param))\n                                assert 200 == resp.status\n                                result = await resp.text()\n                                response = json.loads(result)\n                                assert 'id' in response\n                                assert 'Plugin installation started.' 
== response['message']\n                                assert response['statusLink'].startswith('fledge/package/install/status?id=')\n                        assert 1 == log_info.call_count\n                        log_info.assert_called_once_with('{} plugin install started...'.format(plugin_name))\n                    args, kwargs = insert_tbl_patch.call_args_list[0]\n                    assert 'packages' == args[0]\n                    actual = json.loads(args[1])\n                    assert 'id' in actual\n                    assert plugin_name == actual['name']\n                    assert 'install' == actual['action']\n                    assert -1 == actual['status']\n                    assert '' == actual['log_file_uri']\n                patch_fetch_available_package.assert_called_once_with()\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            actual = json.loads(args[1])\n            assert query_tbl_payload == actual\n\n    async def test_plugin_install_package_from_repo_already_in_progress(self, client):\n        async def async_mock(return_value):\n            return return_value\n\n        plugin_name = \"fledge-north-http-north\"\n        param = {\"format\": \"repository\", \"name\": plugin_name}\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"install\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": plugin_name}}}\n        select_row_resp = {'count': 1, 'rows': [{\n            \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n            \"name\": plugin_name,\n            \"action\": \"install\",\n            \"status\": -1,\n            \"log_file_uri\": \"\"\n        }]}\n        msg = '{} package installation already in progress'.format(plugin_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await 
async_mock(select_row_resp)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              return_value=_rv) as query_tbl_patch:\n                resp = await client.post('/fledge/plugins', data=json.dumps(param))\n                assert 429 == resp.status\n                assert msg == resp.reason\n                r = await resp.text()\n                actual = json.loads(r)\n                assert {'message': msg} == actual\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n\n    async def test_plugin_install_package_from_repo_already_installed(self, client):\n        async def async_mock(return_value):\n            return return_value\n\n        plugin_name = \"fledge-south-modbusc\"\n        param = {\"format\": \"repository\", \"name\": plugin_name}\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"install\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": plugin_name}}}\n        plugin_list = [{\"name\": \"ModbusC\", \"type\": \"south\", \"description\": \"Modbus TCP and RTU C south plugin\",\n                        \"version\": \"1.8.1\", \"installedDirectory\": \"south/ModbusC\", \"packageName\": plugin_name}]\n        msg = '{} package is already installed'.format(plugin_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await async_mock({'count': 0, 'rows': []})\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              return_value=_rv) as query_tbl_patch:\n                with patch.object(PluginDiscovery, 
'get_plugins_installed', return_value=plugin_list\n                                  ) as plugin_list_patch:\n                    resp = await client.post('/fledge/plugins', data=json.dumps(param))\n                    assert 400 == resp.status\n                    assert msg == resp.reason\n                plugin_list_patch.assert_called_once_with('south', False)\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/plugins/test_remove.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nfrom unittest.mock import patch, MagicMock\nimport pytest\nfrom aiohttp import web\n\nfrom fledge.common.plugin_discovery import PluginDiscovery\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.services.core import connect, routes\nfrom fledge.services.core.api import common\nfrom fledge.services.core.api.plugins import remove as plugins_remove\nfrom fledge.services.core.api.plugins.exceptions import *\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2020 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestPluginRemove:\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    RUN_TESTS_BEFORE_210_VERSION = False if common.get_version() <= \"2.1.0\" else True\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    @pytest.mark.parametrize(\"_type\", [\"blah\", 1, \"notificationDelivery\", \"notificationRule\"])\n    async def test_bad_type_plugin(self, client, _type):\n        resp = await client.delete('/fledge/plugins/{}/name'.format(_type), data=None)\n        assert 400 == resp.status\n        assert \"Invalid plugin type. 
Please provide valid type: ['north', 'south', 'filter', 'notify', 'rule']\" == \\\n               resp.reason\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    @pytest.mark.parametrize(\"name\", [\"OMF\", \"omf\", \"Omf\"])\n    async def test_bad_update_of_inbuilt_plugin(self, client, name):\n        resp = await client.delete('/fledge/plugins/north/{}'.format(name), data=None)\n        assert 400 == resp.status\n        assert \"Cannot delete an inbuilt OMF plugin.\" == resp.reason\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    @pytest.mark.parametrize(\"name\", [\"http-south\", \"random\"])\n    async def test_bad_name_plugin(self, client, name):\n        plugin_installed = [{\"name\": \"sinusoid\", \"type\": \"south\", \"description\": \"Sinusoid Poll Plugin\",\n                             \"version\": \"1.8.1\", \"installedDirectory\": \"south/sinusoid\",\n                             \"packageName\": \"fledge-south-sinusoid\"},\n                            {\"name\": \"http_south\", \"type\": \"south\", \"description\": \"HTTP Listener South Plugin\",\n                             \"version\": \"1.8.1\", \"installedDirectory\": \"south/http_south\",\n                             \"packageName\": \"fledge-south-http-south\"},\n                            {\"name\": \"Random\", \"type\": \"south\", \"description\": \"Random data generation plugin\",\n                             \"version\": \"1.8.1\", \"installedDirectory\": \"south/Random\",\n                             \"packageName\": \"fledge-south-random\"}\n                            ]\n        with patch.object(PluginDiscovery, 'get_plugins_installed', return_value=plugin_installed\n                          ) as plugin_installed_patch:\n            resp = await client.delete('/fledge/plugins/south/{}'.format(name), data=None)\n            assert 404 == 
resp.status\n            expected_msg = \"'Invalid plugin name {} or plugin is not installed.'\".format(name)\n            assert expected_msg == resp.reason\n            result = await resp.text()\n            response = json.loads(result)\n            assert {'message': expected_msg} == response\n        plugin_installed_patch.assert_called_once_with('south', False)\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    async def test_plugin_in_use(self, client):\n        async def async_mock(return_value):\n            return return_value\n\n        name = \"sinusoid\"\n        _type = \"south\"\n        svc_list = ['Sine1', 'S2']\n        plugin_installed = [{\"name\": name, \"type\": _type, \"description\": \"Sinusoid Poll Plugin\",\n                             \"version\": \"1.8.1\", \"installedDirectory\": \"{}/{}\".format(_type, name),\n                             \"packageName\": \"fledge-{}-{}\".format(_type, name)},\n                            {\"name\": \"http_south\", \"type\": _type, \"description\": \"HTTP Listener South Plugin\",\n                             \"version\": \"1.8.1\", \"installedDirectory\": \"{}/http_south\".format(_type),\n                             \"packageName\": \"fledge-{}-http-south\".format(_type)}\n                            ]\n        \n        _rv = await async_mock([{'service_list': svc_list}])\n        with patch.object(PluginDiscovery, 'get_plugins_installed', return_value=plugin_installed\n                          ) as plugin_installed_patch:\n            with patch.object(plugins_remove, '_check_plugin_usage', return_value=_rv) as plugin_usage_patch:\n                with patch.object(plugins_remove._logger, \"warning\") as patch_logger:\n                    resp = await client.delete('/fledge/plugins/{}/{}'.format(_type, name), data=None)\n                    assert 400 == resp.status\n                    expected_msg = \"{} cannot be removed. 
This is being used by {} instances.\".format(name, svc_list)\n                    assert expected_msg == resp.reason\n                    result = await resp.text()\n                    response = json.loads(result)\n                    assert {'message': expected_msg} == response\n                assert 1 == patch_logger.call_count\n                patch_logger.assert_called_once_with(expected_msg)\n            plugin_usage_patch.assert_called_once_with(_type, name)\n        plugin_installed_patch.assert_called_once_with(_type, False)\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    async def test_notify_plugin_in_use(self, client):\n        async def async_mock(return_value):\n            return return_value\n\n        notify_instances_list = ['N1', 'My Notify']\n        plugin_type = \"rule\"\n        plugin_type_installed_dir = \"notificationRule\"\n        plugin_installed_dirname = \"OutOfBound\"\n        pkg_name = \"fledge-{}-outofbound\".format(plugin_type)\n        plugin_installed = [{\"name\": plugin_installed_dirname, \"type\": plugin_type,\n                             \"description\": \"Generate a notification if the values exceeds a configured value\",\n                             \"version\": \"1.8.1\", \"installedDirectory\": \"{}/{}\".format(plugin_type_installed_dir,\n                                                                                      plugin_installed_dirname),\n                             \"packageName\": pkg_name}]\n        _rv = await async_mock(notify_instances_list)\n        with patch.object(PluginDiscovery, 'get_plugins_installed', return_value=plugin_installed\n                          ) as plugin_installed_patch:\n            with patch.object(plugins_remove, '_check_plugin_usage_in_notification_instances', return_value=_rv) as plugin_usage_patch:\n                with patch.object(plugins_remove._logger, \"warning\") as patch_logger:\n          
          resp = await client.delete('/fledge/plugins/{}/{}'.format(plugin_type, plugin_installed_dirname),\n                                               data=None)\n                    assert 400 == resp.status\n                    expected_msg = \"{} cannot be removed. This is being used by {} instances.\".format(\n                        plugin_installed_dirname, notify_instances_list)\n                    assert expected_msg == resp.reason\n                    result = await resp.text()\n                    response = json.loads(result)\n                    assert {'message': expected_msg} == response\n                assert 1 == patch_logger.call_count\n                patch_logger.assert_called_once_with(expected_msg)\n            plugin_usage_patch.assert_called_once_with(plugin_installed_dirname)\n        plugin_installed_patch.assert_called_once_with(plugin_type, False)\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    async def test_package_already_in_progress(self, client):\n        async def async_mock(return_value):\n            return return_value\n\n        _type = \"south\"\n        name = 'http_south'\n        pkg_name = \"fledge-south-http\"\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"purge\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        select_row_resp = {'count': 1, 'rows': [{\n            \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n            \"name\": pkg_name,\n            \"action\": \"purge\",\n            \"status\": -1,\n            \"log_file_uri\": \"\"\n        }]}\n        expected_msg = '{} package purge already in progress.'.format(pkg_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        plugin_installed = [{\"name\": \"sinusoid\", \"type\": _type, \"description\": 
\"Sinusoid Poll Plugin\",\n                             \"version\": \"1.8.1\", \"installedDirectory\": \"{}/{}\".format(_type, name),\n                             \"packageName\": \"fledge-{}-sinusoid\".format(_type)},\n                            {\"name\": name, \"type\": _type, \"description\": \"HTTP Listener South Plugin\",\n                             \"version\": \"1.8.1\", \"installedDirectory\": \"{}/{}\".format(_type, name),\n                             \"packageName\": pkg_name}\n                            ]\n        \n        _rv1 = await async_mock([])\n        _rv2 = await async_mock(select_row_resp)\n        with patch.object(PluginDiscovery, 'get_plugins_installed', return_value=plugin_installed\n                          ) as plugin_installed_patch:\n            with patch.object(plugins_remove, '_check_plugin_usage', return_value=_rv1) as plugin_usage_patch:\n                with patch.object(plugins_remove._logger, \"info\") as log_info_patch:\n                    with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                        with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                                          return_value=_rv2) as query_tbl_patch:\n                            resp = await client.delete('/fledge/plugins/{}/{}'.format(_type, name), data=None)\n                            assert 429 == resp.status\n                            assert expected_msg == resp.reason\n                            r = await resp.text()\n                            actual = json.loads(r)\n                            assert {'message': expected_msg} == actual\n                        args, kwargs = query_tbl_patch.call_args_list[0]\n                        assert 'packages' == args[0]\n                        assert payload == json.loads(args[1])\n                assert 1 == log_info_patch.call_count\n                log_info_patch.assert_called_once_with(\n                    'No entry 
found for http_south plugin in asset tracker; '\n                    'or {} plugin may have been added in disabled state & never used.'.format(name))\n            plugin_usage_patch.assert_called_once_with(_type, name)\n        plugin_installed_patch.assert_called_once_with(_type, False)\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    async def test_package_when_not_in_use(self, client):\n        \n        async def async_mock(return_value):\n            return return_value\n        \n        _type = \"south\"\n        name = 'http_south'\n        pkg_name = \"fledge-south-http\"\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"purge\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        select_row_resp = {'count': 1, 'rows': [{\n            \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n            \"name\": pkg_name,\n            \"action\": \"purge\",\n            \"status\": 127,\n            \"log_file_uri\": \"\"\n        }]}\n        insert = {\"response\": \"inserted\", \"rows_affected\": 1}\n        insert_row = {'count': 1, 'rows': [{\n                \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n                \"name\": pkg_name,\n                \"action\": \"purge\",\n                \"status\": -1,\n                \"log_file_uri\": \"\"\n            }]}\n        delete = {\"response\": \"deleted\", \"rows_affected\": 1}\n        delete_payload = {\"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"purge\",\n                                    \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        plugin_installed = [{\"name\": \"sinusoid\", \"type\": _type, \"description\": \"Sinusoid Poll Plugin\",\n         
                    \"version\": \"1.8.1\", \"installedDirectory\": \"{}/{}\".format(_type, name),\n                             \"packageName\": \"fledge-{}-sinusoid\".format(_type)},\n                            {\"name\": name, \"type\": _type, \"description\": \"HTTP Listener South Plugin\",\n                             \"version\": \"1.8.1\", \"installedDirectory\": \"{}/{}\".format(_type, name),\n                             \"packageName\": pkg_name}\n                            ]\n        \n        _rv1 = await async_mock([])\n        _rv2 = await async_mock(delete)\n        _rv3 = await async_mock(insert)\n        _se1 = await async_mock(select_row_resp)\n        _se2 = await async_mock(insert_row)\n        with patch.object(PluginDiscovery, 'get_plugins_installed', return_value=plugin_installed\n                          ) as plugin_installed_patch:\n            with patch.object(plugins_remove, '_check_plugin_usage', return_value=_rv1) as plugin_usage_patch:\n                with patch.object(plugins_remove._logger, \"info\") as log_info_patch:\n                    with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                        with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                                          side_effect=[_se1, _se2]) as query_tbl_patch:\n                            with patch.object(storage_client_mock, 'delete_from_tbl',\n                                              return_value=_rv2) as delete_tbl_patch:\n                                with patch.object(storage_client_mock, 'insert_into_tbl',\n                                                  return_value=_rv3) as insert_tbl_patch:\n                                    with patch('multiprocessing.Process'):\n                                        resp = await client.delete('/fledge/plugins/{}/{}'.format(_type, name),\n                                                                   data=None)\n                   
                     assert 200 == resp.status\n                                        result = await resp.text()\n                                        response = json.loads(result)\n                                        assert 'id' in response\n                                        assert '{} plugin remove started.'.format(name) == response['message']\n                                        assert response['statusLink'].startswith('fledge/package/purge/status?id=')\n                                args, kwargs = insert_tbl_patch.call_args_list[0]\n                                assert 'packages' == args[0]\n                                actual = json.loads(args[1])\n                                assert 'id' in actual\n                                assert pkg_name == actual['name']\n                                assert 'purge' == actual['action']\n                                assert -1 == actual['status']\n                                assert '' == actual['log_file_uri']\n                            args, kwargs = delete_tbl_patch.call_args_list[0]\n                            assert 'packages' == args[0]\n                            assert delete_payload == json.loads(args[1])\n                        args, kwargs = query_tbl_patch.call_args_list[0]\n                        assert 'packages' == args[0]\n                        assert payload == json.loads(args[1])\n                assert 1 == log_info_patch.call_count\n                log_info_patch.assert_called_once_with(\n                    'No entry found for http_south plugin in asset tracker; '\n                    'or {} plugin may have been added in disabled state & never used.'.format(name))\n            plugin_usage_patch.assert_called_once_with(_type, name)\n        plugin_installed_patch.assert_called_once_with(_type, False)\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/plugins/test_update.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nimport uuid\nfrom unittest.mock import patch, MagicMock\nimport pytest\nfrom aiohttp import web\n\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.plugin_discovery import PluginDiscovery\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.services.core import connect, routes, server\nfrom fledge.services.core.api import common\nfrom fledge.services.core.api.plugins import update as plugins_update\nfrom fledge.services.core.api.plugins.exceptions import *\nfrom fledge.services.core.scheduler.scheduler import Scheduler\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2020 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n\nclass TestPluginUpdate:\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    RUN_TESTS_BEFORE_210_VERSION = False if common.get_version() <= \"2.1.0\" else True\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    @pytest.mark.parametrize(\"param\", [\"blah\", 1, \"notificationDelivery\", \"notificationRule\"])\n    async def test_bad_type_plugin(self, client, param):\n        resp = await client.put('/fledge/plugins/{}/name/update'.format(param), data=None)\n        assert 400 == resp.status\n        assert \"Invalid plugin type. 
Must be one of 'south' , north', 'filter', 'notify' or 'rule'\" == resp.reason\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    @pytest.mark.parametrize(\"name\", [\"OMF\", \"omf\", \"Omf\"])\n    async def test_bad_update_of_inbuilt_plugin(self, client, name):\n        resp = await client.put('/fledge/plugins/north/{}/update'.format(name), data=None)\n        assert 400 == resp.status\n        assert \"Cannot update an inbuilt OMF plugin.\" == resp.reason\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    @pytest.mark.parametrize(\"_type, plugin_installed_dirname\", [\n        ('south', 'Random'),\n        ('north', 'http_north')\n    ])\n    async def test_package_already_in_progress(self, client, _type, plugin_installed_dirname):\n        async def async_mock(return_value):\n            return return_value\n\n        pkg_name = \"fledge-{}-{}\".format(_type, plugin_installed_dirname.lower().replace(\"_\", \"-\"))\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n\n        select_row_resp = {'count': 1, 'rows': [{\n            \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n            \"name\": pkg_name,\n            \"action\": \"update\",\n            \"status\": -1,\n            \"log_file_uri\": \"\"\n        }]}\n        msg = '{} package update already in progress.'.format(pkg_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await async_mock(select_row_resp)\n        plugin_installed = [{\"name\": plugin_installed_dirname, \"type\": _type, \"description\": \"{} plugin\".format(_type),\n                             \"version\": \"2.1.0\", \"installedDirectory\": 
\"{}/{}\".format(_type, plugin_installed_dirname),\n                             \"packageName\": pkg_name}]\n        with patch.object(PluginDiscovery, 'get_plugins_installed',\n                          return_value=plugin_installed) as plugin_installed_patch:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                                  return_value=_rv) as query_tbl_patch:\n                    resp = await client.put('/fledge/plugins/{}/{}/update'.format(_type, plugin_installed_dirname),\n                                            data=None)\n                    assert 429 == resp.status\n                    assert msg == resp.reason\n                    r = await resp.text()\n                    actual = json.loads(r)\n                    assert {'message': msg} == actual\n                args, kwargs = query_tbl_patch.call_args_list[0]\n                assert 'packages' == args[0]\n                assert payload == json.loads(args[1])\n        plugin_installed_patch.assert_called_once_with(_type, False)\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    @pytest.mark.parametrize(\"_type, plugin_installed_dirname\", [\n        ('south', 'Random'),\n        ('north', 'http_north')\n    ])\n    async def test_plugin_not_found(self, client, _type, plugin_installed_dirname):\n        plugin_name = 'sinusoid'\n        pkg_name = \"fledge-{}-{}\".format(_type, plugin_installed_dirname.lower().replace(\"_\", \"-\"))\n        plugin_installed = [{\"name\": plugin_name, \"type\": _type, \"description\": \"{} plugin\".format(_type),\n                             \"version\": \"1.8.1\", \"installedDirectory\": \"{}/{}\".format(_type, plugin_name),\n                             \"packageName\": pkg_name}]\n        with patch.object(PluginDiscovery, 
'get_plugins_installed',\n                          return_value=plugin_installed) as plugin_installed_patch:\n            resp = await client.put('/fledge/plugins/{}/{}/update'.format(_type, plugin_installed_dirname), data=None)\n            assert 404 == resp.status\n            assert \"'{} plugin is not yet installed. So update is not possible.'\".format(\n                plugin_installed_dirname) == resp.reason\n        plugin_installed_patch.assert_called_once_with(_type, False)\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    @pytest.mark.parametrize(\"_type, plugin_installed_dirname\", [\n        ('south', 'Random'),\n        ('north', 'http_north')\n    ])\n    async def test_plugin_update_when_not_in_use(self, client, _type, plugin_installed_dirname):\n        async def async_mock(return_value):\n            return return_value\n\n        pkg_name = \"fledge-{}-{}\".format(_type, plugin_installed_dirname.lower().replace(\"_\", \"-\"))\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        plugin_installed = [{\"name\": plugin_installed_dirname, \"type\": _type,\n                             \"description\": \"{} plugin\".format(_type), \"version\": \"1.8.1\",\n                             \"installedDirectory\": \"{}/{}\".format(_type, plugin_installed_dirname),\n                             \"packageName\": pkg_name}]\n        insert = {\"response\": \"inserted\", \"rows_affected\": 1}\n        insert_row = {'count': 1, 'rows': [{\n                \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n                \"name\": plugin_installed_dirname,\n                \"action\": \"update\",\n                \"status\": -1,\n                \"log_file_uri\": \"\"\n            
}]}\n        svc_name = 'R1'\n        tracked_plugins = [{'plugin': 'sinusoid', 'service': 'S1'}, {'plugin': 'Random', 'service': svc_name},\n                           {'plugin': 'http_north', 'service': svc_name}]\n        sch_info = [{'id': '6637c9ff-7090-4774-abca-07dee59a0610', 'enabled': 'f'}]\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await async_mock(tracked_plugins)\n        _rv2 = await async_mock(sch_info)\n        _rv3 = await async_mock(insert)\n        _se1 = await async_mock({'count': 0, 'rows': []})\n        _se2 = await async_mock(insert_row)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              side_effect=[_se1, _se2]) as query_tbl_patch:\n                with patch.object(PluginDiscovery, 'get_plugins_installed', return_value=plugin_installed\n                                  ) as plugin_installed_patch:\n                    with patch.object(plugins_update, '_get_plugin_and_sch_name_from_asset_tracker',\n                                      return_value=_rv1) as plugin_tracked_patch:\n                        with patch.object(plugins_update, '_get_sch_id_and_enabled_by_name',\n                                          return_value=_rv2) as schedule_patch:\n                            with patch.object(storage_client_mock, 'insert_into_tbl',\n                                              return_value=_rv3) as insert_tbl_patch:\n                                with patch('multiprocessing.Process'):\n                                    resp = await client.put('/fledge/plugins/{}/{}/update'.format(\n                                        _type, plugin_installed_dirname), data=None)\n                                    assert 200 == resp.status\n                                    result = await resp.text()\n                                    response = 
json.loads(result)\n                                    assert 'id' in response\n                                    assert '{} update started.'.format(pkg_name) == response['message']\n                                    assert response['statusLink'].startswith('fledge/package/update/status?id=')\n                            args, kwargs = insert_tbl_patch.call_args_list[0]\n                            assert 'packages' == args[0]\n                            actual = json.loads(args[1])\n                            assert 'id' in actual\n                            assert pkg_name == actual['name']\n                            assert 'update' == actual['action']\n                            assert -1 == actual['status']\n                            assert '' == actual['log_file_uri']\n                        schedule_patch.assert_called_once_with(svc_name)\n                    plugin_tracked_patch.assert_called_once_with(_type)\n                plugin_installed_patch.assert_called_once_with(_type, False)\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    @pytest.mark.parametrize(\"_type, plugin_installed_dirname\", [\n        ('south', 'Random'),\n        ('north', 'http_north')\n    ])\n    async def test_plugin_update_when_in_use(self, client, _type, plugin_installed_dirname):\n        async def async_mock(return_value):\n            return return_value\n\n        pkg_name = \"fledge-{}-{}\".format(_type, plugin_installed_dirname.lower().replace(\"_\", \"-\"))\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        select_row_resp = 
{'count': 1, 'rows': [{\n            \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n            \"name\": pkg_name,\n            \"action\": \"purge\",\n            \"status\": 0,\n            \"log_file_uri\": \"\"\n        }]}\n        delete = {\"response\": \"deleted\", \"rows_affected\": 1}\n        delete_payload = {\"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                    \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        plugin_installed = [{\"name\": plugin_installed_dirname, \"type\": _type,\n                             \"description\": \"{} plugin\".format(_type), \"version\": \"1.8.1\",\n                             \"installedDirectory\": \"{}/{}\".format(_type, plugin_installed_dirname),\n                             \"packageName\": pkg_name}]\n        insert = {\"response\": \"inserted\", \"rows_affected\": 1}\n        insert_row = {'count': 1, 'rows': [\n            {\"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\", \"name\": plugin_installed_dirname, \"action\": \"update\",\n             \"status\": -1, \"log_file_uri\": \"\"}]}\n        svc_name = 'R1'\n        tracked_plugins = [{'plugin': 'sinusoid', 'service': 'S1'}, {'plugin': 'Random', 'service': svc_name},\n                           {'plugin': 'http_north', 'service': svc_name}]\n        sch_info = [{'id': '6637c9ff-7090-4774-abca-07dee59a0610', 'enabled': 't'}]\n        server.Server.scheduler = Scheduler(None, None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await async_mock(tracked_plugins)\n        _rv2 = await async_mock(sch_info)\n        _rv3 = await async_mock(insert)\n        _rv4 = await async_mock(delete)\n        _rv5 = await async_mock((True, \"Schedule successfully disabled\"))\n        _se1 = await async_mock(select_row_resp)\n        _se2 = await async_mock(insert_row)\n        with patch.object(connect, 'get_storage_async', 
return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              side_effect=[_se1, _se2]) as query_tbl_patch:\n                with patch.object(storage_client_mock, 'delete_from_tbl',\n                                  return_value=_rv4) as delete_tbl_patch:\n                    with patch.object(PluginDiscovery, 'get_plugins_installed', return_value=plugin_installed\n                                      ) as plugin_installed_patch:\n                        with patch.object(plugins_update, '_get_plugin_and_sch_name_from_asset_tracker',\n                                          return_value=_rv1) as plugin_tracked_patch:\n                            with patch.object(plugins_update, '_get_sch_id_and_enabled_by_name',\n                                              return_value=_rv2) as schedule_patch:\n                                with patch.object(server.Server.scheduler, 'disable_schedule', return_value=_rv5) as disable_sch_patch:\n                                    with patch.object(plugins_update._logger, \"warning\") as log_warn_patch:\n                                        with patch.object(storage_client_mock, 'insert_into_tbl',\n                                                          return_value=_rv3) as insert_tbl_patch:\n                                            with patch('multiprocessing.Process'):\n                                                resp = await client.put('/fledge/plugins/{}/{}/update'.format(\n                                                    _type, plugin_installed_dirname), data=None)\n                                                server.Server.scheduler = None\n                                                assert 200 == resp.status\n                                                result = await resp.text()\n                                                response = json.loads(result)\n                                                
assert 'id' in response\n                                                assert '{} update started.'.format(pkg_name) == response['message']\n                                                assert response['statusLink'].startswith(\n                                                    'fledge/package/update/status?id=')\n                                        args, kwargs = insert_tbl_patch.call_args_list[0]\n                                        assert 'packages' == args[0]\n                                        actual = json.loads(args[1])\n                                        assert 'id' in actual\n                                        assert pkg_name == actual['name']\n                                        assert 'update' == actual['action']\n                                        assert -1 == actual['status']\n                                        assert '' == actual['log_file_uri']\n                                    assert 1 == log_warn_patch.call_count\n                                    log_warn_patch.assert_called_once_with(\n                                        'Disabling {} {} instance, as {} plugin is being updated...'.format(\n                                            svc_name, _type, plugin_installed_dirname))\n                                disable_sch_patch.assert_called_once_with(uuid.UUID(sch_info[0]['id']))\n                            schedule_patch.assert_called_once_with(svc_name)\n                        plugin_tracked_patch.assert_called_once_with(_type)\n                    plugin_installed_patch.assert_called_once_with(_type, False)\n                args, kwargs = delete_tbl_patch.call_args_list[0]\n                assert 'packages' == args[0]\n                assert delete_payload == json.loads(args[1])\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, 
reason=\"requires lesser or equal to core 2.1.0 version\")\n    async def test_filter_plugin_update_when_not_in_use(self, client, _type='filter', plugin_installed_dirname='delta'):\n        async def async_mock(return_value):\n            return return_value\n\n        pkg_name = \"fledge-{}-{}\".format(_type, plugin_installed_dirname.lower())\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        plugin_installed = [{\"name\": plugin_installed_dirname, \"type\": _type,\n                             \"description\": \"{} plugin\".format(_type), \"version\": \"1.8.1\",\n                             \"installedDirectory\": \"{}/{}\".format(_type, plugin_installed_dirname),\n                             \"packageName\": pkg_name}]\n        filter_row = {'count': 1, 'rows': [{'name': plugin_installed_dirname}]}\n        insert = {\"response\": \"inserted\", \"rows_affected\": 1}\n        insert_row = {'count': 1, 'rows': [{\n                \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n                \"name\": plugin_installed_dirname,\n                \"action\": \"update\",\n                \"status\": -1,\n                \"log_file_uri\": \"\"\n            }]}\n        svc_name = 'R1'\n        tracked_plugins = [{'plugin': 'sinusoid', 'service': 'S1'}, {'plugin': 'Random', 'service': svc_name},\n                           {'plugin': 'http_north', 'service': svc_name}, {'plugin': plugin_installed_dirname,\n                                                                           'service': svc_name}]\n        sch_info = [{'id': '6637c9ff-7090-4774-abca-07dee59a0610', 'enabled': 'f'}]\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await async_mock(tracked_plugins)\n        _rv2 = await async_mock(sch_info)\n        _rv3 = 
await async_mock(insert)\n        _se1 = await async_mock({'count': 0, 'rows': []})\n        _se2 = await async_mock(insert_row)\n        _se3 = await async_mock(filter_row)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              side_effect=[_se1, _se3, _se2]) as query_tbl_patch:\n                with patch.object(PluginDiscovery, 'get_plugins_installed', return_value=plugin_installed\n                                  ) as plugin_installed_patch:\n                    with patch.object(plugins_update, '_get_plugin_and_sch_name_from_asset_tracker',\n                                      return_value=_rv1) as plugin_tracked_patch:\n                        with patch.object(plugins_update, '_get_sch_id_and_enabled_by_name',\n                                          return_value=_rv2) as schedule_patch:\n                            with patch.object(storage_client_mock, 'insert_into_tbl',\n                                              return_value=_rv3) as insert_tbl_patch:\n                                with patch('multiprocessing.Process'):\n                                    resp = await client.put('/fledge/plugins/{}/{}/update'.format(\n                                        _type, plugin_installed_dirname), data=None)\n                                    assert 200 == resp.status\n                                    result = await resp.text()\n                                    response = json.loads(result)\n                                    assert 'id' in response\n                                    assert '{} update started.'.format(pkg_name) == response['message']\n                                    assert response['statusLink'].startswith('fledge/package/update/status?id=')\n                            args, kwargs = insert_tbl_patch.call_args_list[0]\n                            assert 'packages' == 
args[0]\n                            actual = json.loads(args[1])\n                            assert 'id' in actual\n                            assert pkg_name == actual['name']\n                            assert 'update' == actual['action']\n                            assert -1 == actual['status']\n                            assert '' == actual['log_file_uri']\n                        schedule_patch.assert_called_once_with(svc_name)\n                    plugin_tracked_patch.assert_called_once_with(_type)\n                plugin_installed_patch.assert_called_once_with(_type, False)\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    async def test_filter_update_when_in_use(self, client, _type='filter', plugin_installed_dirname='delta'):\n        async def async_mock(return_value):\n            return return_value\n\n        pkg_name = \"fledge-{}-{}\".format(_type, plugin_installed_dirname.lower())\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        select_row_resp = {'count': 1, 'rows': [{\n            \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n            \"name\": pkg_name,\n            \"action\": \"purge\",\n            \"status\": 0,\n            \"log_file_uri\": \"\"\n        }]}\n        filter_row = {'count': 1, 'rows': [{'name': plugin_installed_dirname}]}\n        delete = {\"response\": \"deleted\", \"rows_affected\": 1}\n        delete_payload = {\"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                    \"and\": {\"column\": \"name\", \"condition\": 
\"=\", \"value\": pkg_name}}}\n        plugin_installed = [{\"name\": plugin_installed_dirname, \"type\": _type,\n                             \"description\": \"{} plugin\".format(_type), \"version\": \"1.8.1\",\n                             \"installedDirectory\": \"{}/{}\".format(_type, plugin_installed_dirname),\n                             \"packageName\": pkg_name}]\n        insert = {\"response\": \"inserted\", \"rows_affected\": 1}\n        insert_row = {'count': 1, 'rows': [\n            {\"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\", \"name\": plugin_installed_dirname, \"action\": \"update\",\n             \"status\": -1, \"log_file_uri\": \"\"}]}\n        svc_name = 'R1'\n        tracked_plugins = [{'plugin': 'sinusoid', 'service': 'S1'}, {'plugin': 'Random', 'service': svc_name},\n                           {'plugin': 'http_north', 'service': svc_name}, {'plugin': plugin_installed_dirname,\n                                                                           'service': svc_name}]\n        sch_info = [{'id': '6637c9ff-7090-4774-abca-07dee59a0610', 'enabled': 't'}]\n        server.Server.scheduler = Scheduler(None, None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await async_mock(tracked_plugins)\n        _rv2 = await async_mock(sch_info)\n        _rv3 = await async_mock(insert)\n        _rv4 = await async_mock(delete)\n        _rv5 = await async_mock((True, \"Schedule successfully disabled\"))\n        _se1 = await async_mock(select_row_resp)\n        _se2 = await async_mock(insert_row)\n        _se3 = await async_mock(filter_row)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              side_effect=[_se1, _se3, _se2]) as query_tbl_patch:\n                with patch.object(storage_client_mock, 'delete_from_tbl',\n                                  return_value=_rv4) as 
delete_tbl_patch:\n                    with patch.object(PluginDiscovery, 'get_plugins_installed', return_value=plugin_installed\n                                      ) as plugin_installed_patch:\n                        with patch.object(plugins_update, '_get_plugin_and_sch_name_from_asset_tracker',\n                                          return_value=_rv1) as plugin_tracked_patch:\n                            with patch.object(plugins_update, '_get_sch_id_and_enabled_by_name',\n                                              return_value=_rv2) as schedule_patch:\n                                with patch.object(server.Server.scheduler, 'disable_schedule', return_value=_rv5) as disable_sch_patch:\n                                    with patch.object(plugins_update._logger, \"warning\") as log_warn_patch:\n                                        with patch.object(storage_client_mock, 'insert_into_tbl',\n                                                          return_value=_rv3) as insert_tbl_patch:\n                                            with patch('multiprocessing.Process'):\n                                                resp = await client.put('/fledge/plugins/{}/{}/update'.format(\n                                                    _type, plugin_installed_dirname), data=None)\n                                                server.Server.scheduler = None\n                                                assert 200 == resp.status\n                                                result = await resp.text()\n                                                response = json.loads(result)\n                                                assert 'id' in response\n                                                assert '{} update started.'.format(pkg_name) == response['message']\n                                                assert response['statusLink'].startswith(\n                                                    'fledge/package/update/status?id=')\n      
                                  args, kwargs = insert_tbl_patch.call_args_list[0]\n                                        assert 'packages' == args[0]\n                                        actual = json.loads(args[1])\n                                        assert 'id' in actual\n                                        assert pkg_name == actual['name']\n                                        assert 'update' == actual['action']\n                                        assert -1 == actual['status']\n                                        assert '' == actual['log_file_uri']\n                                    assert 1 == log_warn_patch.call_count\n                                    log_warn_patch.assert_called_once_with(\n                                        'Disabling {} {} instance, as {} plugin is being updated...'.format(\n                                            svc_name, _type, plugin_installed_dirname))\n                                disable_sch_patch.assert_called_once_with(uuid.UUID(sch_info[0]['id']))\n                            schedule_patch.assert_called_once_with(svc_name)\n                        plugin_tracked_patch.assert_called_once_with(_type)\n                    plugin_installed_patch.assert_called_once_with(_type, False)\n                args, kwargs = delete_tbl_patch.call_args_list[0]\n                assert 'packages' == args[0]\n                assert delete_payload == json.loads(args[1])\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    @pytest.mark.parametrize(\"_type, plugin_installed_dirname\", [\n        ('notify', 'Telegram'),\n        ('rule', 'OutOfBound')\n    ])\n    async def test_notify_plugin_update_when_not_in_use(self, client, _type, plugin_installed_dirname):\n        async def 
async_mock(return_value):\n            return return_value\n\n        plugin_type_installed_dir = \"notificationRule\" if _type == 'rule' else \"notificationDelivery\"\n        pkg_name = \"fledge-{}-{}\".format(_type, plugin_installed_dirname.lower())\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        plugin_installed = [{\"name\": plugin_installed_dirname, \"type\": _type,\n                             \"description\": \"{} C plugin\".format(plugin_type_installed_dir), \"version\": \"1.8.1\",\n                             \"installedDirectory\": \"{}/{}\".format(plugin_type_installed_dir, plugin_installed_dirname),\n                             \"packageName\": pkg_name}]\n        insert = {\"response\": \"inserted\", \"rows_affected\": 1}\n        insert_row = {'count': 1, 'rows': [{\n                \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n                \"name\": plugin_installed_dirname,\n                \"action\": \"update\",\n                \"status\": -1,\n                \"log_file_uri\": \"\"\n            }]}\n        sch_info = {'count': 1, 'rows': [{'id': '6637c9ff-7090-4774-abca-07dee59a0610', 'enabled': 'f'}]}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await async_mock(insert)\n        _se1 = await async_mock({'count': 0, 'rows': []})\n        _se2 = await async_mock(insert_row)\n        _se3 = await async_mock(sch_info)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              side_effect=[_se1, _se3, _se2]) as query_tbl_patch:\n                with patch.object(PluginDiscovery, 'get_plugins_installed', 
return_value=plugin_installed\n                                  ) as plugin_installed_patch:\n                    with patch.object(storage_client_mock, 'insert_into_tbl',\n                                      return_value=_rv1) as insert_tbl_patch:\n                        with patch('multiprocessing.Process'):\n                            resp = await client.put('/fledge/plugins/{}/{}/update'.format(\n                                _type, plugin_installed_dirname), data=None)\n                            assert 200 == resp.status\n                            result = await resp.text()\n                            response = json.loads(result)\n                            assert 'id' in response\n                            assert '{} update started.'.format(pkg_name) == response['message']\n                            assert response['statusLink'].startswith('fledge/package/update/status?id=')\n                    args, kwargs = insert_tbl_patch.call_args_list[0]\n                    assert 'packages' == args[0]\n                    actual = json.loads(args[1])\n                    assert 'id' in actual\n                    assert pkg_name == actual['name']\n                    assert 'update' == actual['action']\n                    assert -1 == actual['status']\n                    assert '' == actual['log_file_uri']\n                plugin_installed_patch.assert_called_once_with(_type, False)\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n\n    @pytest.mark.skipif(RUN_TESTS_BEFORE_210_VERSION, reason=\"requires lesser or equal to core 2.1.0 version\")\n    @pytest.mark.parametrize(\"_type, plugin_installed_dirname\", [\n        ('notify', 'alexa'),\n        ('rule', 'OutOfBound')\n    ])\n    async def test_notify_plugin_update_when_in_use(self, client, _type, plugin_installed_dirname):\n        async def async_mock(return_value):\n            return 
return_value\n\n        plugin_type_installed_dir = \"notificationRule\" if _type == 'rule' else \"notificationDelivery\"\n        pkg_name = \"fledge-{}-{}\".format(_type, plugin_installed_dirname.lower())\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        plugin_installed = [{\"name\": plugin_installed_dirname, \"type\": _type,\n                             \"description\": \"Generate a notification if the values exceeds a configured value\",\n                             \"version\": \"1.8.1\", \"installedDirectory\": \"{}/{}\".format(plugin_type_installed_dir,\n                                                                                      plugin_installed_dirname),\n                             \"packageName\": pkg_name}]\n        insert = {\"response\": \"inserted\", \"rows_affected\": 1}\n        insert_row = {'count': 1, 'rows': [{\n                \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n                \"name\": plugin_installed_dirname,\n                \"action\": \"update\",\n                \"status\": -1,\n                \"log_file_uri\": \"\"\n            }]}\n        notification_name = \"Test Notification\"\n        parent_name = \"Notifications\"\n        sch_info = {'count': 1, 'rows': [{'id': '6637c9ff-7090-4774-abca-07dee59a0610', 'enabled': 't'}]}\n        read_all_child_category_names = [{\"parent\": parent_name, \"child\": notification_name}]\n        read_cat_val = {\"name\": {\"description\": \"The name of this notification\", \"type\": \"string\",\n                                 \"default\": notification_name, \"value\": notification_name},\n                        \"description\": {\"description\": \"Description of this notification\", \"type\": \"string\",\n                                        
\"default\": \"description\", \"value\": \"description\"},\n                        \"rule\": {\"description\": \"Rule to evaluate\", \"type\": \"string\",\n                                 \"default\": plugin_installed_dirname, \"value\": plugin_installed_dirname},\n                        \"channel\": {\"description\": \"Channel to send alert on\", \"type\": \"string\",\n                                    \"default\": \"email\", \"value\": \"email\"},\n                        \"notification_type\": {\"description\": \"Type of notification\", \"type\": \"enumeration\",\n                                              \"options\": [\"one shot\", \"retriggered\", \"toggled\"], \"default\": \"one shot\",\n                                              \"value\": \"one shot\"}, \"enable\": {\"description\": \"Enabled\",\n                                                                               \"type\": \"boolean\", \"default\": \"true\",\n                                                                               \"value\": \"true\"}}\n        disable_notification = {\"description\": \"Enabled\", \"type\": \"boolean\", \"default\": \"true\", \"value\": \"false\"}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await async_mock(read_all_child_category_names)\n        _rv2 = await async_mock(read_cat_val)\n        _rv3 = await async_mock(disable_notification)\n        _rv4 = await async_mock(insert)\n        _se1 = await async_mock({'count': 0, 'rows': []})\n        _se2 = await async_mock(insert_row)\n        _se3 = await async_mock(sch_info)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              side_effect=[_se1, _se3, _se2]) as query_tbl_patch:\n                with patch.object(PluginDiscovery, 'get_plugins_installed', return_value=plugin_installed\n                                  ) 
as plugin_installed_patch:\n                    with patch.object(ConfigurationManager, '_read_all_child_category_names',\n                                      return_value=_rv1) as child_cat_patch:\n                        with patch.object(ConfigurationManager, '_read_category_val',\n                                          return_value=_rv2) as cat_value_patch:\n                            with patch.object(plugins_update._logger, \"warning\") as log_warn_patch:\n                                with patch.object(ConfigurationManager, 'set_category_item_value_entry',\n                                                  return_value=_rv3) as set_cat_value_patch:\n                                    with patch.object(storage_client_mock, 'insert_into_tbl',\n                                                      return_value=_rv4) as insert_tbl_patch:\n                                        with patch('multiprocessing.Process'):\n                                            resp = await client.put('/fledge/plugins/{}/{}/update'.format(\n                                                _type, plugin_installed_dirname), data=None)\n                                            assert 200 == resp.status\n                                            result = await resp.text()\n                                            response = json.loads(result)\n                                            assert 'id' in response\n                                            assert '{} update started.'.format(pkg_name) == response['message']\n                                            assert response['statusLink'].startswith('fledge/package/update/status?id=')\n                                    args, kwargs = insert_tbl_patch.call_args_list[0]\n                                    assert 'packages' == args[0]\n                                    actual = json.loads(args[1])\n                                    assert 'id' in actual\n                                    assert pkg_name == 
actual['name']\n                                    assert 'update' == actual['action']\n                                    assert -1 == actual['status']\n                                    assert '' == actual['log_file_uri']\n                                set_cat_value_patch.assert_called_once_with(notification_name, 'enable', 'false')\n                            assert 1 == log_warn_patch.call_count\n                            log_warn_patch.assert_called_once_with(\n                                'Disabling {} notification instance, as {} {} plugin is being updated...'.format(\n                                    notification_name, plugin_installed_dirname, _type))\n                        cat_value_patch.assert_called_once_with(notification_name)\n                    child_cat_patch.assert_called_once_with(parent_name)\n                plugin_installed_patch.assert_called_once_with(_type, False)\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/support/.gitkeep",
    "content": ""
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_alerts.py",
    "content": "import json\nfrom unittest.mock import MagicMock, patch\nimport pytest\nfrom aiohttp import web\n\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.alert_manager import AlertManager\nfrom fledge.services.core import connect, routes, server\nfrom fledge.services.core.api import alerts\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2024 Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nclass TestAlerts:\n    \"\"\" Alerts API \"\"\"\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    async def async_mock(self, return_value):\n        return return_value\n\n    def setup_method(self):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        server.Server._alert_manager = AlertManager(storage_client_mock)\n\n    def teardown_method(self):\n        server.Server._alert_manager = None\n\n    async def test_get_all(self, client):\n        rv = await self.async_mock([])\n        with patch.object(server.Server._alert_manager, 'get_all', return_value=rv):\n            resp = await client.get('/fledge/alert')\n            assert 200 == resp.status\n            json_response = json.loads(await resp.text())\n            assert 'alerts' in json_response\n            assert [] == json_response['alerts']\n\n    async def test_bad_get_all(self, client):\n        with patch.object(server.Server._alert_manager, 'get_all', side_effect=Exception):\n            with patch.object(alerts._LOGGER, 'error') as patch_logger:\n                resp = await client.get('/fledge/alert')\n                assert 500 == resp.status\n                assert '' == resp.reason\n                json_response = json.loads(await resp.text())\n                assert 'message' in json_response\n                assert '' == 
json_response['message']\n            assert 1 == patch_logger.call_count\n\n    async def test_delete(self, client):\n        rv = await self.async_mock(\"Nothing to delete.\")\n        with patch.object(server.Server._alert_manager, 'delete', return_value=rv):\n            resp = await client.delete('/fledge/alert')\n            assert 200 == resp.status\n            json_response = json.loads(await resp.text())\n            assert 'message' in json_response\n            assert \"Nothing to delete.\" == json_response['message']\n\n    @pytest.mark.parametrize(\"url, msg, exception, status_code, log_count\", [\n        ('/fledge/alert', '', Exception, 500, 1),\n        ('/fledge/alert/blah', 'blah alert not found.', KeyError, 404, 0)\n    ])\n    async def test_bad_delete(self, client, url, msg, exception, status_code, log_count):\n        with patch.object(server.Server._alert_manager, 'delete', side_effect=exception):\n            with patch.object(alerts._LOGGER, 'error') as patch_logger:\n                resp = await client.delete(url)\n                assert status_code == resp.status\n                assert msg == resp.reason\n                json_response = json.loads(await resp.text())\n                assert 'message' in json_response\n                assert msg == json_response['message']\n            assert log_count == patch_logger.call_count\n\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_api_utils.py",
"content": "import subprocess\n\nfrom unittest.mock import MagicMock, patch\nimport pytest\n\nfrom fledge.services.core.api import utils\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestUtils:\n\n    @pytest.mark.parametrize(\"direction\", ['south', 'north', 'filter', 'notificationDelivery', 'notificationRule'])\n    def test_find_c_plugin_libs_if_empty(self, direction):\n        with patch('os.listdir') as mockwalk:\n            mockwalk.return_value = []\n            assert [] == utils.find_c_plugin_libs(direction)\n\n    @pytest.mark.parametrize(\"direction, plugin_name, plugin_type, libs\", [\n        ('south', ['Random'], 'binary', ['libRandom.so', 'libRandom.so.1']),\n        ('south', ['FlirAX8'], 'json', ['FlirAX8.json']),\n        ('north', ['HttpNorthC'], 'binary', ['libHttpNorthC.so', 'libHttpNorthC.so.1'])\n    ])\n    def test_find_c_plugin_libs(self, direction, plugin_name, plugin_type, libs):\n        with patch('os.walk') as mockwalk:\n            mockwalk.return_value = [('', plugin_name, []),\n                                     ('', [], libs)]\n\n            # find_c_plugin_libs returns a list of (name, type) tuples\n            assert [(plugin_name[0], plugin_type)] == utils.find_c_plugin_libs(direction)\n\n    def test_get_plugin_info_value_error(self):\n        plugin_name = 'Random'\n        with patch.object(utils, '_find_c_lib', return_value=None) as patch_lib:\n            with patch.object(utils._logger, 'error') as patch_logger:\n                assert {} == utils.get_plugin_info(plugin_name, dir='south')\n            assert 1 == patch_logger.call_count\n            args = patch_logger.call_args\n            assert '{} C plugin get info failed.'.format(plugin_name) == args[0][1]\n        patch_lib.assert_called_once_with(plugin_name, 'south')\n\n    @pytest.mark.parametrize(\"exc_name\", [Exception, OSError, subprocess.CalledProcessError])\n    def test_get_plugin_info_exception(self, exc_name):\n        plugin_name = 'OMF'\n        plugin_lib_path = 'fledge/plugins/north/{}/lib{}'.format(plugin_name, plugin_name)\n        with patch.object(utils, '_find_c_lib', return_value=plugin_lib_path) as patch_lib:\n            with patch.object(utils.subprocess, \"Popen\", side_effect=exc_name):\n                with patch.object(utils._logger, 'error') as patch_logger:\n                    assert {} == utils.get_plugin_info(plugin_name, dir='south')\n                assert 1 == patch_logger.call_count\n                args = patch_logger.call_args\n                assert '{} C plugin get info failed.'.format(plugin_name) == args[0][1]\n        patch_lib.assert_called_once_with(plugin_name, 'south')\n\n    @patch('subprocess.Popen')\n    def test_get_plugin_info(self, mock_subproc_popen):\n        with patch.object(utils, '_find_c_lib', return_value='fledge/plugins/south/Random/libRandom') as patch_lib:\n            process_mock = MagicMock()\n            attrs = {'communicate.return_value': (b'{\"name\": \"Random\", \"version\": \"1.0.0\", \"type\": \"south\", '\n                                                  b'\"interface\": \"1.0.0\", \"config\": {\"plugin\" : '\n                                                  b'{ \"description\" : \"Random C south plugin\", \"type\" : \"string\", '\n                                                  b'\"default\" : \"Random\" }, \"asset\" : { \"description\" : '\n                                                  b'\"Asset name\", \"type\" : \"string\", '\n                                                  b'\"default\" : \"Random\" } } }\\n', 'error')}\n            process_mock.configure_mock(**attrs)\n            mock_subproc_popen.return_value = process_mock\n            j = utils.get_plugin_info('Random', dir='south')\n            assert {'name': 'Random', 'type': 'south', 'version': '1.0.0', 'interface': '1.0.0',\n                    'config': {'plugin': {'description': 'Random C south plugin', 'type': 'string',\n                               'default': 'Random'},\n                               'asset': {'description': 'Asset name', 'type': 'string', 'default': 'Random'}}} == j\n        patch_lib.assert_called_once_with('Random', 'south')\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_asset_tracker_api.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport asyncio\nimport json\nfrom unittest.mock import MagicMock, patch\nfrom aiohttp import web\nimport pytest\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.services.core import routes, connect\nfrom fledge.services.core.api.asset_tracker import _logger, common_utils\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nasync def mock_coro(return_value):\n    return return_value\n\n\nclass TestAssetTracker:\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    async def test_get_asset_track(self, client, loop):\n        async def async_mock():\n            await asyncio.sleep(0)\n            return {\"rows\": rows, 'count': 1}\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        rows = [{'asset': 'AirIntake', 'event': 'Ingest', 'fledge': 'Booth1', 'service': 'PT100_In1',\n                 'plugin': 'PT100', \"timestamp\": \"2018-08-13 15:39:48.796263\", \"deprecatedTimestamp\": \"\", 'data': '{}'},\n                {'asset': 'AirIntake', 'event': 'Egress', 'fledge': 'Booth1', 'service': 'Display',\n                 'plugin': 'ShopFloorDisplay', \"timestamp\": \"2018-08-13 16:00:00.134563\", \"deprecatedTimestamp\": \"\",\n                 'data': '{}'}]\n        payload = {'where': {'condition': '=', 'value': 1, 'column': '1'},\n                   'return': ['asset', 'event', 'service', 'fledge', 'plugin',\n                              {'alias': 'timestamp', 'column': 'ts', 'format': 'YYYY-MM-DD HH24:MI:SS.MS'},\n                              {'alias': 
'deprecatedTimestamp', 'column': 'deprecated_ts'}, 'data'\n                              ]\n                   }\n        _rv = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv) as patch_query_payload:\n                resp = await client.get('/fledge/track')\n                assert 200 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {'track': rows} == json_response\n            args, kwargs = patch_query_payload.call_args\n            assert 'asset_tracker' == args[0]\n            assert payload == json.loads(args[1])\n\n    @pytest.mark.skip(\"Once initial code version approve, will add more tests\")\n    @pytest.mark.parametrize(\"request_params, payload\", [\n        (\"asset\", {}),\n        (\"event\", {}),\n        (\"service\", {})\n    ])\n    async def test_get_asset_track_with_params(self, client, request_params, payload, loop):\n        pass\n\n    async def test_bad_deprecate_entry(self, client):\n        result = {\"message\": \"failed\"}\n        _rv = await mock_coro(result)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv):\n                with patch.object(_logger, 'error') as patch_logger:\n                    resp = await client.put('/fledge/track/service/XXX/asset/XXX/event/XXXX')\n                    assert 500 == resp.status\n                assert 1 == patch_logger.call_count\n\n    async def test_deprecate_entry_not_found(self, client):\n        result = {\"count\": 0, \"rows\": []}\n        _rv = await mock_coro(result)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        asset = 
\"blah\"\n        service = \"Test\"\n        event = \"Ingest\"\n        message = \"No record found in asset tracker for given service: {} asset: {} event: {}\".format(\n            service, asset, event)\n        query_payload = {\"return\": [\"deprecated_ts\"],\n                         \"where\": {\"column\": \"service\", \"condition\": \"=\", \"value\": service,\n                                   \"and\": {\"column\": \"asset\", \"condition\": \"=\", \"value\": asset,\n                                           \"and\": {\"column\": \"event\", \"condition\": \"=\", \"value\": event}}}}\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv) as patch_query_tbl:\n                resp = await client.put('/fledge/track/service/{}/asset/{}/event/{}'.format(service, asset, event))\n                assert 404 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            args, _ = patch_query_tbl.call_args\n            assert 'asset_tracker' == args[0]\n            assert query_payload == json.loads(args[1])\n\n    async def test_already_deprecated_entry(self, client):\n        result = {'count': 1, 'rows': [{'deprecated_ts': '2022-11-18 06:11:13.657'}]}\n        _rv = await mock_coro(result)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        asset = \"Airtake\"\n        service = \"Sparkplug\"\n        event = \"Ingest\"\n        message = \"'{} asset record already deprecated.'\".format(asset)\n        query_payload = {\"return\": [\"deprecated_ts\"],\n                         \"where\": {\"column\": \"service\", \"condition\": \"=\", \"value\": service,\n                                   \"and\": {\"column\": \"asset\", \"condition\": \"=\", 
\"value\": asset,\n                                           \"and\": {\"column\": \"event\", \"condition\": \"=\", \"value\": event}}}}\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv) as patch_query_tbl:\n                resp = await client.put('/fledge/track/service/{}/asset/{}/event/{}'.format(service, asset, event))\n                assert 400 == resp.status\n                assert message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": message} == json_response\n            args, _ = patch_query_tbl.call_args\n            assert 'asset_tracker' == args[0]\n            assert query_payload == json.loads(args[1])\n\n    @pytest.mark.parametrize(\"event, operator, event_list, audit_event\", [\n        (\"Ingest\", \"in\", [\"Ingest\", \"store\"], \"Ingest & store\"),\n        (\"store\", \"in\", [\"Ingest\", \"store\"], \"Ingest & store\"),\n        (\"Filter\", \"=\", \"Filter\", \"Filter\"),\n        (\"Egress\", \"=\", \"Egress\", \"Egress\")\n    ])\n    async def test_deprecate_entry(self, client, event, operator, event_list, audit_event):\n        asset = \"Airtake\"\n        service = \"Sparkplug\"\n        ts = \"2022-11-18 14:27:25.396383+05:30\"\n        query_payload = {\"return\": [\"deprecated_ts\"],\n                         \"where\": {\"column\": \"service\", \"condition\": \"=\", \"value\": service,\n                                   \"and\": {\"column\": \"asset\", \"condition\": \"=\", \"value\": asset,\n                                           \"and\": {\"column\": \"event\", \"condition\": \"=\", \"value\": event}}}}\n        update_payload = {\"values\": {\"deprecated_ts\": ts},\n                          \"where\": {\"column\": \"service\", \"condition\": \"=\", \"value\": service,\n                  
                  \"and\": {\"column\": \"asset\", \"condition\": \"=\", \"value\": asset,\n                                            \"and\": {\"column\": \"event\", \"condition\": operator,\n                                                    \"value\": event_list,\n                                                    \"and\": {\"column\": \"deprecated_ts\", \"condition\": \"isnull\"}}}}}\n        query_result = {'count': 1, 'rows': [{'deprecated_ts': ''}]}\n        update_result = {\"response\": \"updated\", \"rows_affected\": 1}\n        message = \"For {} event, {} asset record entry has been deprecated.\".format(event, asset)\n        _rv = await mock_coro(query_result)\n        _rv2 = await mock_coro(update_result)\n        _rv3 = await mock_coro(None)\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv) as patch_query_tbl:\n                with patch.object(common_utils, 'local_timestamp', return_value=ts):\n                    with patch.object(storage_client_mock, 'update_tbl', return_value=_rv2) as patch_update_tbl:\n                        with patch.object(AuditLogger, '__init__', return_value=None):\n                            with patch.object(AuditLogger, 'information', return_value=_rv3) as patch_audit:\n                                with patch.object(_logger, \"info\") as log_info:\n                                    resp = await client.put('/fledge/track/service/{}/asset/{}/event/{}'.format(\n                                        service, asset, event))\n                                    assert 200 == resp.status\n                                    result = await resp.text()\n                                    json_response = json.loads(result)\n                                    assert {'success': message} == json_response\n                
                assert 1 == log_info.call_count\n                                log_info.assert_called_once_with(message)\n                            patch_audit.assert_called_once_with(\n                                'ASTDP', {'asset': asset, 'event': audit_event, 'service': service})\n                    args, _ = patch_update_tbl.call_args\n                    assert 'asset_tracker' == args[0]\n                    assert update_payload == json.loads(args[1])\n            args1, _ = patch_query_tbl.call_args\n            assert 'asset_tracker' == args1[0]\n            assert query_payload == json.loads(args1[1])\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_audit.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\nimport json\nfrom unittest.mock import MagicMock, patch\nfrom collections import Counter\nfrom aiohttp import web\nimport pytest\nfrom fledge.services.core import routes\nfrom fledge.services.core import connect\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.services.core.api import audit\nfrom fledge.common.audit_logger import AuditLogger\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestAudit:\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    @pytest.fixture()\n    def get_log_codes(self):\n        return {\"rows\": [{\"code\": \"PURGE\", \"description\": \"Data Purging Process\"},\n                         {\"code\": \"LOGGN\", \"description\": \"Logging Process\"},\n                         {\"code\": \"STRMN\", \"description\": \"Streaming Process\"},\n                         {\"code\": \"SYPRG\", \"description\": \"System Purge\"},\n                         {\"code\": \"START\", \"description\": \"System Startup\"},\n                         {\"code\": \"FSTOP\", \"description\": \"System Shutdown\"},\n                         {\"code\": \"CONCH\", \"description\": \"Configuration Change\"},\n                         {\"code\": \"CONAD\", \"description\": \"Configuration Addition\"},\n                         {\"code\": \"SCHCH\", \"description\": \"Schedule Change\"},\n                         {\"code\": \"SCHAD\", \"description\": \"Schedule Addition\"},\n                         {\"code\": \"SRVRG\", \"description\": \"Service Registered\"},\n                         {\"code\": \"SRVUN\", \"description\": 
\"Service Unregistered\"},\n                         {\"code\": \"SRVFL\", \"description\": \"Service Fail\"},\n                         {\"code\": \"NHCOM\", \"description\": \"North Process Complete\"},\n                         {\"code\": \"NHDWN\", \"description\": \"North Destination Unavailable\"},\n                         {\"code\": \"NHAVL\", \"description\": \"North Destination Available\"},\n                         {\"code\": \"UPEXC\", \"description\": \"Update Complete\"},\n                         {\"code\": \"BKEXC\", \"description\": \"Backup Complete\"}\n                         ]}\n\n    async def test_get_severity(self, client):\n        resp = await client.get('/fledge/audit/severity')\n        assert 200 == resp.status\n        result = await resp.text()\n        json_response = json.loads(result)\n        log_severity = json_response['logSeverity']\n\n        # verify the severity count\n        assert 4 == len(log_severity)\n\n        # verify the name and value of severity\n        for i in range(len(log_severity)):\n            if log_severity[i]['index'] == 0:\n                assert 'SUCCESS' == log_severity[i]['name']\n            elif log_severity[i]['index'] == 1:\n                assert 'FAILURE' == log_severity[i]['name']\n            elif log_severity[i]['index'] == 2:\n                assert 'WARNING' == log_severity[i]['name']\n            elif log_severity[i]['index'] == 4:\n                assert 'INFORMATION' == log_severity[i]['name']\n\n    async def test_audit_log_codes(self, client, get_log_codes):\n        async def get_log_codes_async():\n            return get_log_codes\n        _rv = await get_log_codes_async()\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', return_value=_rv) as log_code_patch:\n                resp = await 
client.get('/fledge/audit/logcode')\n                assert 200 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                codes = [key['code'] for key in json_response['logCode']]\n                expected_code_list = [key['code'] for key in get_log_codes['rows']]\n\n                # verify the default log_codes with their values which are defined in init.sql\n                assert 18 == len(codes)\n                assert Counter(expected_code_list) == Counter(codes)\n            log_code_patch.assert_called_once_with('log_codes')\n\n    @pytest.mark.parametrize(\"request_params, payload\", [\n        ('', {\"return\": [\"code\", \"level\", \"log\", {\"column\": \"ts\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\", \"alias\": \"timestamp\"}], \"where\": {\"column\": \"1\", \"condition\": \"=\", \"value\": 1}, \"sort\": {\"column\": \"ts\", \"direction\": \"desc\"}, \"limit\": 20}),\n        ('?source=PURGE', {'return': ['code', 'level', 'log', {'format': 'YYYY-MM-DD HH24:MI:SS.MS', 'column': 'ts', 'alias': 'timestamp'}], 'where': {'value': 1, 'and': {'value': 'PURGE', 'column': 'code', 'condition': '='}, 'column': '1', 'condition': '='}, 'sort': {'direction': 'desc', 'column': 'ts'}, 'limit': 20}),\n        ('?source=PURGE,START,CONAD', {'return': ['code', 'level', 'log', {'format': 'YYYY-MM-DD HH24:MI:SS.MS', 'column': 'ts', 'alias': 'timestamp'}], 'where': {'value': ['PURGE', 'START', 'CONAD'], 'column': 'code', 'condition': 'in'}, 'sort': {'direction': 'desc', 'column': 'ts'}, 'limit': 20}),\n        ('?skip=1', {'where': {'value': 1, 'column': '1', 'condition': '='}, 'limit': 20, 'return': ['code', 'level', 'log', {'column': 'ts', 'format': 'YYYY-MM-DD HH24:MI:SS.MS', 'alias': 'timestamp'}], 'skip': 1, 'sort': {'direction': 'desc', 'column': 'ts'}}),\n        ('?severity=failure', {'where': {'and': {'value': 1, 'column': 'level', 'condition': '='}, 'value': 1, 'column': '1', 'condition': '='}, 
'limit': 20, 'return': ['code', 'level', 'log', {'column': 'ts', 'format': 'YYYY-MM-DD HH24:MI:SS.MS', 'alias': 'timestamp'}], 'sort': {'direction': 'desc', 'column': 'ts'}}),\n        ('?severity=FAILURE&limit=1', {'limit': 1, 'sort': {'direction': 'desc', 'column': 'ts'}, 'return': ['code', 'level', 'log', {'column': 'ts', 'format': 'YYYY-MM-DD HH24:MI:SS.MS', 'alias': 'timestamp'}], 'where': {'value': 1, 'condition': '=', 'and': {'value': 1, 'condition': '=', 'column': 'level'}, 'column': '1'}}),\n        ('?severity=INFORMATION&limit=1&skip=1', {'limit': 1, 'sort': {'direction': 'desc', 'column': 'ts'}, 'return': ['code', 'level', 'log', {'column': 'ts', 'format': 'YYYY-MM-DD HH24:MI:SS.MS', 'alias': 'timestamp'}], 'skip': 1, 'where': {'value': 1, 'condition': '=', 'and': {'value': 4, 'condition': '=', 'column': 'level'}, 'column': '1'}}),\n        ('?source=&severity=&limit=&skip=', {'limit': 20, 'sort': {'direction': 'desc', 'column': 'ts'}, 'return': ['code', 'level', 'log', {'column': 'ts', 'format': 'YYYY-MM-DD HH24:MI:SS.MS', 'alias': 'timestamp'}], 'where': {'value': 1, 'condition': '=', 'column': '1'}})\n    ])\n    async def test_get_audit_with_params(self, client, request_params, payload, get_log_codes, loop):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {\"rows\": [{\"log\": {\"end_time\": \"2018-01-30 18:39:48.1517317788\", \"rowsRemaining\": 0,\n                                      \"start_time\": \"2018-01-30 18:39:48.1517317788\", \"rowsRemoved\": 0,\n                                      \"unsentRowsRemoved\": 0, \"rowsRetained\": 0},\n                              \"code\": \"PURGE\", \"level\": \"4\", \"id\": 2,\n                              \"timestamp\": \"2018-01-30 18:39:48.796263\", 'count': 1}]}\n        \n        async def async_mock():\n            return response\n\n        async def async_mock_log():\n            return get_log_codes\n\n        _rv1 = await async_mock_log()\n        _rv2 = await 
async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', return_value=_rv1):\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv2) as log_code_patch:\n                    resp = await client.get('/fledge/audit{}'.format(request_params))\n                    assert 200 == resp.status\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert 1 == json_response['totalCount']\n                    assert 1 == len(json_response['audit'])\n                args, kwargs = log_code_patch.call_args\n                assert 'log' == args[0]\n                p = json.loads(args[1])\n                assert payload == p\n\n    @pytest.mark.parametrize(\"request_params, response_code, response_message\", [\n        ('?source=BLA', 400, \"BLA is not a valid source\"),\n        ('?source=1234', 400, \"1234 is not a valid source\"),\n        ('?source=PURGE,NTF', 400, \"NTF is not a valid source\"),\n        ('?limit=invalid', 400, \"Limit must be a positive integer\"),\n        ('?limit=-1', 400, \"Limit must be a positive integer\"),\n        ('?skip=invalid', 400, \"Skip/Offset must be a positive integer\"),\n        ('?skip=-1', 400, \"Skip/Offset must be a positive integer\"),\n        ('?severity=BLA', 400, \"'BLA' is not a valid severity\")\n    ])\n    async def test_source_param_with_bad_data(self, client, request_params, response_code, response_message, get_log_codes, loop):\n        async def async_mock_log():\n            return get_log_codes\n\n        _rv = await async_mock_log()\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', return_value=_rv):\n                resp = await 
client.get('/fledge/audit{}'.format(request_params))\n                assert response_code == resp.status\n                assert response_message == resp.reason\n\n    async def test_get_audit_http_exception(self, client):\n        msg = 'Internal Server Error'\n        with patch.object(connect, 'get_storage_async', side_effect=Exception(msg)):\n            with patch.object(audit._logger, 'error') as patch_logger:\n                resp = await client.get('/fledge/audit')\n                assert 500 == resp.status\n                assert msg == resp.reason\n            assert 1 == patch_logger.call_count\n\n    async def test_create_audit_entry(self, client, loop):\n        request_data = {\"source\": \"LMTR\", \"severity\": \"warning\", \"details\": {\"message\": \"Engine oil pressure low\"}}\n        response = {'details': {'message': 'Engine oil pressure low'}, 'source': 'LMTR',\n                    'timestamp': '2018-03-05 07:36:52.823', 'severity': 'warning'}\n\n        async def async_mock():\n            return response\n\n        _rv = await async_mock()\n        storage_mock = MagicMock(spec=StorageClientAsync)\n        AuditLogger(storage_mock)\n        with patch.object(storage_mock, 'insert_into_tbl', return_value=_rv) as insert_tbl_patch:\n            resp = await client.post('/fledge/audit', data=json.dumps(request_data))\n            assert 200 == resp.status\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert response['details'] == json_response['details']\n            assert response['source'] == json_response['source']\n            assert response['severity'] == json_response['severity']\n            assert 'timestamp' in json_response\n\n    @pytest.mark.parametrize(\"request_data, expected_response\", [\n        ({\"source\": \"LMTR\", \"severity\": \"\", \"details\": {\"message\": \"Engine oil pressure low\"}}, \"Missing required parameter severity\"),\n        ({\"source\": \"LMTR\", 
\"severity\": None, \"details\": {\"message\": \"Engine oil pressure low\"}}, \"Missing required parameter severity\"),\n        ({\"source\": \"\", \"severity\": \"WARNING\", \"details\": {\"message\": \"Engine oil pressure low\"}}, \"Missing required parameter source\"),\n        ({\"source\": None, \"severity\": \"WARNING\", \"details\": {\"message\": \"Engine oil pressure low\"}}, \"Missing required parameter source\"),\n        ({\"source\": \"LMTR\", \"severity\": \"WARNING\", \"details\": None}, \"Missing required parameter details\"),\n        ({\"source\": \"LMTR\", \"severity\": \"WARNING\", \"details\": \"\"}, \"Details should be a valid json object\"),\n    ])\n    async def test_create_audit_entry_with_bad_data(self, client, request_data, expected_response):\n        resp = await client.post('/fledge/audit', data=json.dumps(request_data))\n        assert 400 == resp.status\n        assert expected_response == resp.reason\n\n    async def test_create_audit_entry_with_attribute_error(self, client):\n        request_data = {\"source\": \"LMTR\", \"severity\": \"blah\", \"details\": {\"message\": \"Engine oil pressure low\"}}\n        with patch.object(AuditLogger, \"__init__\", return_value=None):\n            resp = await client.post('/fledge/audit', data=json.dumps(request_data))\n            assert 404 == resp.status\n            assert 'severity type blah is not supported' == resp.reason\n\n    async def test_create_audit_entry_with_exception(self, client):\n        request_data = {\"source\": \"LMTR\", \"severity\": \"blah\", \"details\": {\"message\": \"Engine oil pressure low\"}}\n        with patch.object(AuditLogger, \"__init__\", return_value=\"\"):\n            with patch.object(audit._logger, 'error') as patch_logger:\n                resp = await client.post('/fledge/audit', data=json.dumps(request_data))\n                assert 500 == resp.status\n                assert \"__init__() should return None, not 'str'\" == resp.reason\n            
assert 1 == patch_logger.call_count\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_auth_mandatory.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport asyncio\nimport json\nfrom unittest.mock import MagicMock, patch\nfrom aiohttp import web\nimport pytest\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.web import middleware\nfrom fledge.common.web.ssl_wrapper import SSLVerifier\nfrom fledge.services.core import connect, routes, server\nfrom fledge.services.core.api import auth\nfrom fledge.services.core.user_model import User\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nADMIN_USER_HEADER = {'content-type': 'application/json', 'Authorization': 'admin_user_token'}\nNORMAL_USER_HEADER = {'content-type': 'application/json', 'Authorization': 'normal_user_token'}\nPASSWORD_MIN_LENGTH_ERROR_MSG = \"Password should have minimum 6 characters.\"\n\nasync def mock_coro(*args, **kwargs):\n    return None if len(args) == 0 else args[0]\n\n\nclass TestAuthMandatory:\n\n    @pytest.fixture\n    def client(self, loop, aiohttp_server, aiohttp_client):\n        app = web.Application(loop=loop,  middlewares=[middleware.auth_middleware])\n        # fill the routes table\n        routes.setup(app)\n        server = loop.run_until_complete(aiohttp_server(app))\n        loop.run_until_complete(server.start_server(loop=loop))\n        client = loop.run_until_complete(aiohttp_client(server))\n        return client\n\n    async def auth_token_fixture(self, mocker, is_admin=True):\n        user = {'id': 1, 'uname': 'admin', 'role_id': '1'} if is_admin else {'id': 2, 'uname': 'user', 'role_id': '2'}\n        _rv1 = await mock_coro(user['id'])\n        _rv2 = await mock_coro(None)\n        _rv3 = await mock_coro(user)\n        patch_logger_debug = 
mocker.patch.object(middleware._logger, 'debug')\n        patch_validate_token = mocker.patch.object(User.Objects, 'validate_token', return_value=_rv1)\n        patch_refresh_token = mocker.patch.object(User.Objects, 'refresh_token_expiry', return_value=_rv2)\n        patch_user_get = mocker.patch.object(User.Objects, 'get', return_value=_rv3)\n        return patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get\n\n    @pytest.mark.parametrize(\"payload, msg\", [\n        ({}, \"Username is required to create user.\"),\n        ({\"username\": 1}, \"Values should be passed in string.\"),\n        ({\"username\": \"bla\"}, \"Username should be of minimum 4 characters.\"),\n        ({\"username\": \"  b\"}, \"Username should be of minimum 4 characters.\"),\n        ({\"username\": \"b  \"}, \"Username should be of minimum 4 characters.\"),\n        ({\"username\": \"  b la\"}, \"Username should be of minimum 4 characters.\"),\n        ({\"username\": \"b l A  \"}, \"Username should be of minimum 4 characters.\"),\n        ({\"username\": \"Bla\"}, \"Username should be of minimum 4 characters.\"),\n        ({\"username\": \"BLA\"}, \"Username should be of minimum 4 characters.\"),\n        ({\"username\": \"aj!aj\"}, \"Dot, hyphen, underscore special characters are allowed for username.\"),\n        ({\"username\": \"aj.aj\", \"access_method\": \"PEM\"}, \"Invalid access method. 
Must be 'any' or 'cert' or 'pwd'.\"),\n        ({\"username\": \"aj.aj\", \"access_method\": 1}, \"Values should be passed in string.\"),\n        ({\"username\": \"aj.aj\", \"access_method\": 'pwd'}, \"Password should not be an empty.\"),\n        ({\"username\": \"aj_123!\"}, \"Dot, hyphen, underscore special characters are allowed for username.\"),\n        ({\"username\": \"aj_123\", \"password\": 1}, PASSWORD_MIN_LENGTH_ERROR_MSG),\n        ({\"username\": \"12-aj\", \"password\": \"blah\"}, PASSWORD_MIN_LENGTH_ERROR_MSG),\n        ({\"username\": \"12-aj\", \"password\": \"12B l\"}, PASSWORD_MIN_LENGTH_ERROR_MSG),\n        ({\"username\": \"aj.123\", \"password\": \"a!23\"}, PASSWORD_MIN_LENGTH_ERROR_MSG),\n        ({\"username\": \"aj.123\", \"password\": \"A!23\"}, PASSWORD_MIN_LENGTH_ERROR_MSG),\n        ({\"username\": \"aj.aj\", \"access_method\": \"any\", \"password\": \"blah\"}, PASSWORD_MIN_LENGTH_ERROR_MSG),\n        ({\"username\": \"aj.aj\", \"access_method\": \"pwd\", \"password\": \"blah\"}, PASSWORD_MIN_LENGTH_ERROR_MSG)\n    ])\n    async def test_create_bad_user(self, client, mocker, payload, msg):\n        ret_val = [{'id': '1'}]\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        rv1 = await mock_coro(msg)\n        rv2 = await mock_coro(ret_val)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=rv2) as patch_role_id:\n            with patch.object(auth, 'validate_password', return_value=rv1):\n                resp = await client.post('/fledge/admin/user', data=json.dumps(payload), headers=ADMIN_USER_HEADER)\n                assert 400 == resp.status\n                assert msg == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": msg} == json_response\n        patch_role_id.assert_called_once_with('admin')\n        
patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/admin/user')\n\n    @pytest.mark.parametrize(\"request_data\", [\n        {\"username\": \"AdMin\", \"password\": \"F0gl@mp\", \"role_id\": -3},\n        {\"username\": \"aj.aj\", \"password\": \"F0gl@mp\", \"role_id\": \"blah\"}\n    ])\n    async def test_create_user_with_bad_role(self, client, mocker, request_data):\n        msg = \"Invalid role ID.\"\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv1 = await mock_coro([{'id': '1'}])\n        _rv2 = await mock_coro(False)\n        _rv3 = await mock_coro(\"\")\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv1) as patch_role_id:\n            with patch.object(auth, 'validate_password', return_value=_rv3):\n                with patch.object(auth, 'is_valid_role', return_value=_rv2) as patch_role:\n                    resp = await client.post('/fledge/admin/user', data=json.dumps(request_data), headers=ADMIN_USER_HEADER)\n                    assert 400 == resp.status\n                    assert msg == resp.reason\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert {\"message\": msg} == json_response\n                patch_role.assert_called_once_with(request_data['role_id'])\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        
patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/admin/user')\n\n    async def test_create_dupe_user_name(self, client):\n        msg = \"Username already exists.\"\n        request_data = {\"username\": \"dviewer\", \"password\": \"F0gl@mp\"}\n        valid_user = {'id': 1, 'uname': 'admin', 'role_id': '1'}\n        users = [{'id': 1, 'uname': 'admin', 'real_name': 'Admin user', 'role_id': 1, 'description': 'admin user',\n                  'enabled': 't', 'access_method': 'any'},\n                 {'id': 2, 'uname': 'user', 'real_name': 'Normal user', 'role_id': 2, 'description': 'normal user',\n                  'enabled': 'f', 'access_method': 'any'},\n                 {'id': 3, 'uname': 'dviewer', 'real_name': 'Data Viewer', 'role_id': 4, 'description': 'Test',\n                  'enabled': 'f', 'access_method': 'any'}]\n        _rv1 = await mock_coro(valid_user['id'])\n        _rv2 = await mock_coro(None)\n        _rv3 = await mock_coro([{'id': '1'}])\n        _rv4 = await mock_coro(True)\n        _rv5 = await mock_coro(valid_user)\n        _rv6 = await mock_coro(users)\n        _rv7 = await mock_coro(\"\")\n        with patch.object(middleware._logger, 'debug') as patch_logger_debug:\n            with patch.object(User.Objects, 'validate_token', return_value=_rv1) as patch_validate_token:\n                with patch.object(User.Objects, 'refresh_token_expiry', return_value=_rv2\n                                  ) as patch_refresh_token:\n                    with patch.object(User.Objects, 'get', return_value=_rv5) as patch_user_get:\n                        with patch.object(User.Objects, 'all', return_value=_rv6) as patch_user_all:\n                            with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv3\n                                              ) as patch_role_id:\n                                with patch.object(auth, 'validate_password', return_value=_rv7):\n                       
             with patch.object(auth, 'is_valid_role', return_value=_rv4) as patch_role:\n                                        with patch.object(auth._logger, 'warning') as patch_logger_warning:\n                                            resp = await client.post('/fledge/admin/user', data=json.dumps(\n                                                request_data), headers=ADMIN_USER_HEADER)\n                                            assert 409 == resp.status\n                                            assert msg == resp.reason\n                                            result = await resp.text()\n                                            json_response = json.loads(result)\n                                            assert {\"message\": msg} == json_response\n                                        patch_logger_warning.assert_called_once_with(msg)\n                                    patch_role.assert_called_once_with(2)\n                            patch_role_id.assert_called_once_with('admin')\n                        patch_user_all.assert_called_once_with()\n                    patch_user_get.assert_called_once_with(uid=valid_user['id'])\n                patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n            patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/admin/user')\n\n    async def test_create_user(self, client):\n        request_data = {\"username\": \"aj123\", \"password\": \"F0gl@mp\"}\n        data = {'id': '3', 'uname': request_data['username'], 'role_id': '2', 'access_method': 'any',\n                'real_name': '', 'description': ''}\n        expected = {}\n        expected.update(data)\n        users = [{'id': 1, 'uname': 'admin', 'real_name': 'Admin user', 'role_id': 1, 'description': 'admin user',\n                  'enabled': 't', 'access_method': 'any'},\n               
  {'id': 2, 'uname': 'user', 'real_name': 'Normal user', 'role_id': 2, 'description': 'normal user',\n                  'enabled': 'f', 'access_method': 'any'},\n                 {'id': 3, 'uname': 'dviewer', 'real_name': 'Data Viewer', 'role_id': 4, 'description': 'Test',\n                  'enabled': 'f', 'access_method': 'any'}]\n        ret_val = {\"response\": \"inserted\", \"rows_affected\": 1}\n        msg = '{} user has been created successfully.'.format(request_data['username'])\n        valid_user = {'id': 1, 'uname': 'admin', 'role_id': '1'}\n        _rv1 = await mock_coro(valid_user['id'])\n        _rv2 = await mock_coro(None)\n        _rv3 = await mock_coro([{'id': '1'}])\n        _rv4 = await mock_coro(True)\n        _rv5 = await mock_coro(ret_val)\n        _rv6 = await mock_coro(users)\n        _rv7 = await mock_coro(\"\")\n        _se1 = await mock_coro(valid_user)\n        _se2 = await mock_coro(data)\n        with patch.object(middleware._logger, 'debug') as patch_logger_debug:\n            with patch.object(User.Objects, 'validate_token', return_value=_rv1) as patch_validate_token:\n                with patch.object(User.Objects, 'refresh_token_expiry', return_value=_rv2) as patch_refresh_token:\n                    with patch.object(User.Objects, 'get', side_effect=[_se1, _se2]) as patch_user_get:\n                        with patch.object(User.Objects, 'all', return_value=_rv6) as patch_user_all:\n                            with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv3\n                                              ) as patch_role_id:\n                                with patch.object(auth, 'validate_password', return_value=_rv7):\n                                    with patch.object(auth, 'is_valid_role', return_value=_rv4) as patch_role:\n                                        with patch.object(User.Objects, 'create', return_value=_rv5\n                                                          ) as 
patch_create_user:\n                                            with patch.object(auth._logger, 'info') as patch_auth_logger_info:\n                                                resp = await client.post('/fledge/admin/user',\n                                                                         data=json.dumps(request_data),\n                                                                         headers=ADMIN_USER_HEADER)\n                                                assert 200 == resp.status\n                                                r = await resp.text()\n                                                actual = json.loads(r)\n                                                assert msg == actual['message']\n                                                assert expected['id'] == actual['user']['userId']\n                                                assert expected['uname'] == actual['user']['userName']\n                                                assert expected['role_id'] == actual['user']['roleId']\n                                            patch_auth_logger_info.assert_called_once_with(msg)\n                                        patch_create_user.assert_called_once_with(\n                                            request_data['username'], request_data['password'],\n                                            int(expected['role_id']), 'any', '', '')\n                                    patch_role.assert_called_once_with(int(expected['role_id']))\n                            patch_role_id.assert_called_once_with('admin')\n                        patch_user_all.assert_called_once_with()\n                    assert 2 == patch_user_get.call_count\n                    args, kwargs = patch_user_get.call_args_list[0]\n                    assert {'uid': valid_user['id']} == kwargs\n                    args, kwargs = patch_user_get.call_args_list[1]\n                    assert {'username': expected['uname']} == kwargs\n                
patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n            patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/admin/user')\n\n    async def test_create_user_unknown_exception(self, client):\n        request_data = {\"username\": \"ajtest\", \"password\": \"F0gl@mp\"}\n        exc_msg = \"Internal Server Error\"\n        valid_user = {'id': 1, 'uname': 'admin', 'role_id': '1'}\n        users = [{'id': 1, 'uname': 'admin', 'real_name': 'Admin user', 'role_id': 1, 'description': 'admin user',\n                  'enabled': 't', 'access_method': 'any'}]\n        _rv1 = await mock_coro(valid_user['id'])\n        _rv2 = await mock_coro(None)\n        _rv3 = await mock_coro([{'id': '1'}])\n        _rv4 = await mock_coro(True)\n        _rv5 = await mock_coro(valid_user)\n        _rv6 = await mock_coro(users)\n        _rv7 = await mock_coro(\"\")\n        with patch.object(middleware._logger, 'debug') as patch_logger_debug:\n            with patch.object(User.Objects, 'validate_token', return_value=_rv1) as patch_validate_token:\n                with patch.object(User.Objects, 'refresh_token_expiry', return_value=_rv2\n                                  ) as patch_refresh_token:\n                    with patch.object(User.Objects, 'get', return_value=_rv5) as patch_user_get:\n                        with patch.object(User.Objects, 'all', return_value=_rv6) as patch_user_all:\n                            with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv3\n                                              ) as patch_role_id:\n                                with patch.object(auth, 'validate_password', return_value=_rv7):\n                                    with patch.object(auth, 'is_valid_role', return_value=_rv4) as patch_role:\n                                        with 
patch.object(User.Objects, 'create', side_effect=Exception(\n                                                exc_msg)) as patch_create_user:\n                                            with patch.object(auth._logger, 'error') as patch_logger:\n                                                resp = await client.post('/fledge/admin/user',\n                                                                         data=json.dumps(request_data),\n                                                                         headers=ADMIN_USER_HEADER)\n                                                assert 500 == resp.status\n                                                assert exc_msg == resp.reason\n                                                result = await resp.text()\n                                                json_response = json.loads(result)\n                                                assert {\"message\": exc_msg} == json_response\n                                            args = patch_logger.call_args\n                                            assert 'Failed to create user.' 
== args[0][1]\n                                        patch_create_user.assert_called_once_with(\n                                            request_data['username'], request_data['password'], 2, 'any', '', '')\n                                    patch_role.assert_called_once_with(2)\n                            patch_role_id.assert_called_once_with('admin')\n                        patch_user_all.assert_called_once_with()\n                    patch_user_get.assert_called_once_with(uid=valid_user['id'])\n                patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n            patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/admin/user')\n\n    async def test_create_user_value_error(self, client):\n        valid_user = {'id': 1, 'uname': 'admin', 'role_id': '1'}\n        request_data = {\"username\": \"ajtest\", \"password\": \"F0gl@mp\"}\n        exc_msg = \"Value Error occurred\"\n        users = [{'id': 1, 'uname': 'admin', 'real_name': 'Admin user', 'role_id': 1, 'description': 'admin user',\n                  'enabled': 't', 'access_method': 'any'}]\n        _rv1 = await mock_coro(valid_user['id'])\n        _rv2 = await mock_coro(None)\n        _rv3 = await mock_coro([{'id': '1'}])\n        _rv4 = await mock_coro(True)\n        _rv5 = await mock_coro(valid_user)\n        _rv6 = await mock_coro(users)\n        _rv7 = await mock_coro(\"\")\n        with patch.object(middleware._logger, 'debug') as patch_logger_debug:\n            with patch.object(User.Objects, 'validate_token', return_value=_rv1) as patch_validate_token:\n                with patch.object(User.Objects, 'refresh_token_expiry', return_value=_rv2\n                                  ) as patch_refresh_token:\n                    with patch.object(User.Objects, 'get', return_value=_rv5) as patch_user_get:\n                        with 
patch.object(User.Objects, 'all', return_value=_rv6) as patch_user_all:\n                            with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv3\n                                              ) as patch_role_id:\n                                with patch.object(auth, 'validate_password', return_value=_rv7):\n                                    with patch.object(auth, 'is_valid_role', return_value=_rv4) as patch_role:\n                                        with patch.object(User.Objects, 'create', side_effect=ValueError(\n                                                exc_msg)) as patch_create_user:\n                                            resp = await client.post('/fledge/admin/user', data=json.dumps(\n                                                request_data), headers=ADMIN_USER_HEADER)\n                                            assert 400 == resp.status\n                                            assert exc_msg == resp.reason\n                                            result = await resp.text()\n                                            json_response = json.loads(result)\n                                            assert {\"message\": exc_msg} == json_response\n                                        patch_create_user.assert_called_once_with(\n                                            request_data['username'], request_data['password'], 2, 'any', '', '')\n                                    patch_role.assert_called_once_with(2)\n                            patch_role_id.assert_called_once_with('admin')\n                        patch_user_all.assert_called_once_with()\n                    patch_user_get.assert_called_once_with(uid=valid_user['id'])\n                patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n            patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 
'POST', '/fledge/admin/user')\n\n    @pytest.mark.parametrize(\"payload, status_reason\", [\n        ({\"realname\": \"dd\"}, 'Nothing to update.'),\n        ({\"real_name\": \"\"}, 'Real Name should not be empty.'),\n        ({\"real_name\": \"   \"}, 'Real Name should not be empty.')\n    ])\n    async def test_bad_update_me(self, client, mocker, payload, status_reason):\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        user_info = {'role_id': '1', 'id': '2', 'uname': 'user', 'access_method': 'any',\n                     'real_name': 'Sat', 'description': 'Normal User'}\n        rv = await mock_coro(user_info)\n        with patch.object(User.Objects, 'get', return_value=rv) as patch_get_user:\n            resp = await client.put('/fledge/user', data=json.dumps(payload), headers=ADMIN_USER_HEADER)\n            assert 400 == resp.status\n            assert status_reason == resp.reason\n            r = await resp.text()\n            assert {\"message\": status_reason} == json.loads(r)\n        patch_get_user.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/user')\n\n    @pytest.mark.parametrize(\"payload\", [\n        {\"real_name\": \"AJ\"}, {\"real_name\": \"  AJ \"}, {\"real_name\": \"AJ \"}, {\"real_name\": \"  AJ\"}\n    ])\n    async def test_update_me(self, client, mocker, payload):\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        user_info = {'role_id': '1', 'id': '2', 'uname': 'user', 'access_method': 'any',\n                     'real_name': 'AJ', 'description': 'Normal User'}\n        user_record = 
{'rows': [{'user_id': 2}], 'count': 1}\n        update_result = {\"rows_affected\": 1, \"response\": \"updated\"}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        rv1 = await mock_coro(user_info)\n        rv2 = await mock_coro(user_record)\n        rv3 = await mock_coro(update_result)\n        with patch.object(User.Objects, 'get', return_value=rv1) as patch_get_user:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=rv2) as q_tbl_patch:\n                    with patch.object(storage_client_mock, 'update_tbl', return_value=rv3) as update_tbl_patch:\n                        resp = await client.put('/fledge/user', data=json.dumps(payload), headers=ADMIN_USER_HEADER)\n                        assert 200 == resp.status\n                        r = await resp.text()\n                        assert {\"message\": \"Real name has been updated successfully!\"} == json.loads(r)\n                    update_tbl_patch.assert_called_once_with(\"users\", '{\"values\": {\"real_name\": \"AJ\"}, '\n                                                                      '\"where\": {\"column\": \"id\", \"condition\": \"=\", '\n                                                                      '\"value\": 2}}')\n                    q_tbl_patch.assert_called_once_with('user_logins', '{\"return\": [\"user_id\"], '\n                                                                       '\"where\": {\"column\": \"token\", \"condition\": \"=\", '\n                                                                       '\"value\": \"admin_user_token\"}}')\n        patch_get_user.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        
patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/user')\n\n    @pytest.mark.parametrize(\"payload, status_reason\", [\n        ({\"realname\": \"dd\"}, 'Nothing to update.'),\n        ({\"real_name\": \"\"}, 'Real Name should not be empty.'),\n        ({\"real_name\": \"   \"}, 'Real Name should not be empty.'),\n        ({\"access_method\": \"\"}, 'Access method should not be empty.'),\n        ({\"access_method\": \"blah\"}, \"Accepted access method values are ('any', 'pwd', 'cert').\")\n    ])\n    async def test_bad_update_user(self, client, mocker, payload, status_reason):\n        uid = 2\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        user_info = {'role_id': '1', 'id': str(uid), 'uname': 'user', 'access_method': 'any',\n                     'real_name': 'Sat', 'description': 'Normal User'}\n        ret_val = [{'id': '1'}]\n        _rv = await mock_coro(ret_val)\n        _se = await mock_coro(user_info)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv) as patch_role_id:\n            with patch.object(User.Objects, 'get', return_value=_se) as patch_get_user:\n                resp = await client.put('/fledge/admin/{}'.format(uid), data=json.dumps(payload),\n                                        headers=ADMIN_USER_HEADER)\n                assert 400 == resp.status\n                assert status_reason == resp.reason\n                r = await resp.text()\n                assert {\"message\": status_reason} == json.loads(r)\n            patch_get_user.assert_called_once_with(uid=1)\n        patch_role_id.assert_called_once_with('admin')\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for 
%s', 'PUT', '/fledge/admin/{}'.format(uid))\n\n    @pytest.mark.parametrize(\"payload, exp_result\", [\n        ({\"real_name\": \"Sat\"}, {'role_id': '2', 'id': '2', 'uname': 'user', 'access_method': 'any',\n                                'real_name': 'Sat', 'description': 'Normal User'}),\n        ({\"description\": \"test desc\"}, {'role_id': '2', 'id': '2', 'uname': 'user', 'access_method': 'any',\n                                        'real_name': 'Normal', 'description': 'test desc'}),\n        ({\"real_name\": \"Yamraj\", \"description\": \"test desc\"}, {'role_id': '2', 'id': '2', 'uname': 'user',\n                                                               'access_method': 'any', 'real_name': 'Yamraj',\n                                                               'description': 'test desc'}),\n        ({\"access_method\": 'pwd', \"real_name\": \"Yamraj\", \"description\": \"test desc\"},\n         {'role_id': '2', 'id': '2', 'uname': 'user', 'access_method': 'pwd', 'real_name': 'Yamraj',\n          'description': 'test desc'})\n    ])\n    async def test_update_user(self, client, mocker, payload, exp_result):\n        uid = 2\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv1 = await mock_coro([{'id': '2'}])\n        _rv2 = await mock_coro(True)\n        _se = await mock_coro(exp_result)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv1) as patch_role_id:\n            with patch.object(User.Objects, 'update', return_value=_rv2) as patch_update:\n                with patch.object(User.Objects, 'get', return_value=_se):\n                    resp = await client.put('/fledge/admin/{}'.format(uid), data=json.dumps(payload),\n                                            headers=ADMIN_USER_HEADER)\n                    assert 200 == resp.status\n                    r = await resp.text()\n                    assert 
{\"user_info\": exp_result} == json.loads(r)\n            patch_update.assert_called_once_with(str(uid), payload)\n        patch_role_id.assert_called_once_with('admin')\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/admin/{}'.format(\n            uid))\n\n    @pytest.mark.parametrize(\"request_data, msg\", [\n        ({}, \"Current or new password is missing.\"),\n        ({\"invalid\": 1}, \"Current or new password is missing.\"),\n        ({\"current_password\": 1}, \"Current or new password is missing.\"),\n        ({\"current_password\": \"fledge\"}, \"Current or new password is missing.\"),\n        ({\"new_password\": 1}, \"Current or new password is missing.\"),\n        ({\"new_password\": \"fledge\"}, \"Current or new password is missing.\"),\n        ({\"current_pwd\": \"fledge\", \"new_pwd\": \"fledge1\"}, \"Current or new password is missing.\"),\n        ({\"current_password\": \"F0gl@mp\", \"new_password\": \"F0gl@mp\"},\n         \"New password should not be the same as current password.\"),\n        ({\"current_password\": \"F0gl@mp\", \"new_password\": \"FL\"}, PASSWORD_MIN_LENGTH_ERROR_MSG),\n        ({\"current_password\": \"F0gl@mp\", \"new_password\": 1}, \"New password should be a valid string.\")\n    ])\n    async def test_update_password_with_bad_data(self, client, mocker, request_data, msg):\n        uid = 2\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker, is_admin=False)\n        rv1 = await mock_coro(msg)\n        with patch.object(auth, 'validate_password', return_value=rv1):\n            resp = await client.put('/fledge/user/{}/password'.format(uid), data=json.dumps(request_data),\n                                    
headers=NORMAL_USER_HEADER)\n            assert 400 == resp.status\n            assert msg == resp.reason\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT',\n                                                   '/fledge/user/{}/password'.format(uid))\n        patch_validate_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_refresh_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_user_get.assert_called_once_with(uid=uid)\n\n    async def test_update_password_with_other_user(self, client, mocker):\n        uid = 1\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker, is_admin=False)\n        resp = await client.put('/fledge/user/{}/password'.format(uid),\n                                data=json.dumps({\"current_password\": \"fledge\", \"new_password\": \"newfledge\"}),\n                                headers=NORMAL_USER_HEADER)\n        assert 401 == resp.status\n        assert \"Insufficient privileges to update the password for the given user.\" == resp.reason\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT',\n                                                   '/fledge/user/{}/password'.format(uid))\n        patch_validate_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_refresh_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_user_get.assert_called_once_with(uid=2)\n\n    async def test_update_password_with_invalid_current_password(self, client, mocker):\n        request_data = {\"current_password\": \"blah\", \"new_password\": \"F0gl@mp\"}\n        uid = 2\n        msg = 'Invalid current password.'\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker, is_admin=False)\n        rv1 = await 
mock_coro(\"\")\n        rv2 = await mock_coro(None)\n        with patch.object(auth, 'validate_password', return_value=rv1):\n            with patch.object(User.Objects, 'is_user_exists', return_value=rv2) as patch_user_exists:\n                resp = await client.put('/fledge/user/{}/password'.format(uid), data=json.dumps(request_data),\n                                        headers=NORMAL_USER_HEADER)\n                assert 404 == resp.status\n                assert msg == resp.reason\n            patch_user_exists.assert_called_once_with(str(uid), request_data['current_password'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s',\n                                                   'PUT', '/fledge/user/{}/password'.format(uid))\n        patch_validate_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_refresh_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_user_get.assert_called_once_with(uid=uid)\n\n    @pytest.mark.parametrize(\"exception_name, status_code, msg\", [\n        (ValueError, 400, 'None'),\n        (User.DoesNotExist, 404, 'User with ID:<2> does not exist.'),\n        (User.PasswordAlreadyUsed, 400, 'The new password should be different from previous 3 used.')\n    ])\n    async def test_update_password_exceptions(self, client, mocker, exception_name, status_code, msg):\n        request_data = {\"current_password\": \"fledge\", \"new_password\": \"F0gl@mp\"}\n        uid = 2\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker, is_admin=False)\n        rv1 = await mock_coro(\"\")\n        rv2 = await mock_coro(uid)\n        with patch.object(auth, 'validate_password', return_value=rv1):\n            with patch.object(User.Objects, 'is_user_exists', return_value=rv2) as patch_user_exists:\n                with patch.object(User.Objects, 'update', 
side_effect=exception_name(msg)) as patch_update:\n                    resp = await client.put('/fledge/user/{}/password'.format(uid), data=json.dumps(request_data),\n                                            headers=NORMAL_USER_HEADER)\n                    assert status_code == resp.status\n                    assert msg == resp.reason\n                patch_update.assert_called_once_with(2, {'password': request_data['new_password']})\n            patch_user_exists.assert_called_once_with(str(uid), request_data['current_password'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s',\n                                                   'PUT', '/fledge/user/{}/password'.format(uid))\n        patch_validate_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_refresh_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_user_get.assert_called_once_with(uid=uid)\n\n    async def test_update_password_unknown_exception(self, client, mocker):\n        request_data = {\"current_password\": \"fledge\", \"new_password\": \"F0gl@mp\"}\n        uid = 2\n        msg = 'Something went wrong'\n        logger_msg = 'Failed to update the user ID:<{}>.'.format(uid)\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker, is_admin=False)\n        rv1 = await mock_coro(\"\")\n        rv2 = await mock_coro(uid)\n        with patch.object(auth, 'validate_password', return_value=rv1):\n            with patch.object(User.Objects, 'is_user_exists', return_value=rv2) as patch_user_exists:\n                with patch.object(User.Objects, 'update', side_effect=Exception(msg)) as patch_update:\n                    with patch.object(auth._logger, 'error') as patch_logger:\n                        resp = await client.put('/fledge/user/{}/password'.format(uid), data=json.dumps(request_data),\n                                   
             headers=NORMAL_USER_HEADER)\n                        assert 500 == resp.status\n                        assert msg == resp.reason\n                    args = patch_logger.call_args\n                    assert logger_msg == args[0][1]\n                patch_update.assert_called_once_with(2, {'password': request_data['new_password']})\n            patch_user_exists.assert_called_once_with(str(uid), request_data['current_password'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s',\n                                                   'PUT', '/fledge/user/{}/password'.format(uid))\n        patch_validate_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_refresh_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_user_get.assert_called_once_with(uid=uid)\n\n    async def test_update_password(self, client, mocker):\n        request_data = {\"current_password\": \"fledge\", \"new_password\": \"F0gl@mp\"}\n        ret_val = {'response': 'updated', 'rows_affected': 1}\n        user_id = 2\n        msg = \"Password has been updated successfully for user ID:<{}>.\".format(user_id)\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker, is_admin=False)\n        rv1 = await mock_coro(\"\")\n        rv2 = await mock_coro(user_id)\n        rv3 = await mock_coro(ret_val)\n        with patch.object(auth, 'validate_password', return_value=rv1):\n            with patch.object(User.Objects, 'is_user_exists', return_value=rv2) as patch_user_exists:\n                with patch.object(User.Objects, 'update', return_value=rv3) as patch_update:\n                    with patch.object(auth._logger, 'info') as patch_auth_logger_info:\n                        resp = await client.put('/fledge/user/{}/password'.format(user_id),\n                                                data=json.dumps(request_data), 
headers=NORMAL_USER_HEADER)\n                        assert 200 == resp.status\n                        r = await resp.text()\n                        assert {'message': msg} == json.loads(r)\n                    patch_auth_logger_info.assert_called_once_with(msg)\n                patch_update.assert_called_once_with(user_id, {'password': request_data['new_password']})\n            patch_user_exists.assert_called_once_with(str(user_id), request_data['current_password'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s',\n                                                   'PUT', '/fledge/user/{}/password'.format(user_id))\n        patch_validate_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_refresh_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_user_get.assert_called_once_with(uid=user_id)\n\n    @pytest.mark.parametrize(\"request_data\", ['blah', '123blah'])\n    async def test_delete_bad_user(self, client, mocker, request_data):\n        msg = \"invalid literal for int() with base 10: '{}'\".format(request_data)\n        ret_val = [{'id': '1'}]\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv = await mock_coro(ret_val)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv) as patch_role_id:\n            resp = await client.delete('/fledge/admin/{}/delete'.format(request_data), headers=ADMIN_USER_HEADER)\n            assert 400 == resp.status\n            assert msg == resp.reason\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s 
request for %s',\n                                                   'DELETE', '/fledge/admin/{}/delete'.format(request_data))\n\n    async def test_delete_admin_user(self, client, mocker):\n        msg = \"Super admin user can not be deleted.\"\n        ret_val = [{'id': '1'}]\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv = await mock_coro(ret_val)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv) as patch_role_id:\n            with patch.object(auth._logger, 'warning') as patch_auth_logger_warn:\n                resp = await client.delete('/fledge/admin/1/delete', headers=ADMIN_USER_HEADER)\n                assert 403 == resp.status\n                assert msg == resp.reason\n            patch_auth_logger_warn.assert_called_once_with(msg)\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'DELETE', '/fledge/admin/1/delete')\n\n    async def test_delete_own_account(self, client, mocker):\n        msg = \"You can not delete your own account.\"\n        ret_val = [{'id': '2'}]\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker, is_admin=False)\n        _rv = await mock_coro(ret_val)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv) as patch_role_id:\n            with patch.object(auth._logger, 'warning') as patch_auth_logger_warn:\n                resp = await client.delete('/fledge/admin/2/delete', headers=NORMAL_USER_HEADER)\n                assert 400 == resp.status\n                assert 
msg == resp.reason\n            patch_auth_logger_warn.assert_called_once_with(msg)\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=2)\n        patch_refresh_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(NORMAL_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'DELETE', '/fledge/admin/2/delete')\n\n    async def test_delete_invalid_user(self, client, mocker):\n        ret_val = {\"response\": \"deleted\", \"rows_affected\": 0}\n        msg = 'User with ID:<2> does not exist.'\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv1 = await mock_coro([{'id': '1'}])\n        _rv2 = await mock_coro(ret_val)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv1) as patch_role_id:\n            with patch.object(User.Objects, 'delete', return_value=_rv2) as patch_user_delete:\n                resp = await client.delete('/fledge/admin/2/delete', headers=ADMIN_USER_HEADER)\n                assert 404 == resp.status\n                assert msg == resp.reason\n            patch_user_delete.assert_called_once_with(2)\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'DELETE', '/fledge/admin/2/delete')\n\n    async def test_delete_user(self, client, mocker):\n        ret_val = {\"response\": \"deleted\", \"rows_affected\": 1}\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await 
self.auth_token_fixture(\n            mocker)\n        _rv1 = await mock_coro([{'id': '1'}])\n        _rv2 = await mock_coro(ret_val)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv1) as patch_role_id:\n            with patch.object(auth._logger, 'info') as patch_auth_logger_info:\n                with patch.object(User.Objects, 'delete', return_value=_rv2) as patch_user_delete:\n                    resp = await client.delete('/fledge/admin/2/delete', headers=ADMIN_USER_HEADER)\n                    assert 200 == resp.status\n                    r = await resp.text()\n                    assert {'message': 'User has been deleted successfully.'} == json.loads(r)\n                patch_user_delete.assert_called_once_with(2)\n            patch_auth_logger_info.assert_called_once_with('User with ID:<2> has been deleted successfully.')\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'DELETE', '/fledge/admin/2/delete')\n\n    @pytest.mark.parametrize(\"exception_name, code, msg\", [\n        (ValueError, 400, 'None'),\n        (User.DoesNotExist, 404, 'User with ID:<2> does not exist.')\n    ])\n    async def test_delete_user_exceptions(self, client, mocker, exception_name, code, msg):\n        ret_val = [{'id': '1'}]\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv = await mock_coro(ret_val)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv) as patch_role_id:\n            with patch.object(User.Objects, 'delete', side_effect=exception_name(msg)) as patch_user_delete:\n                
resp = await client.delete('/fledge/admin/2/delete', headers=ADMIN_USER_HEADER)\n                assert code == resp.status\n                assert msg == resp.reason\n            patch_user_delete.assert_called_once_with(2)\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'DELETE', '/fledge/admin/2/delete')\n\n    async def test_delete_user_unknown_exception(self, client, mocker):\n        msg = 'Something went wrong'\n        ret_val = [{'id': '1'}]\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv = await mock_coro(ret_val)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv) as patch_role_id:\n            with patch.object(auth._logger, 'error') as patch_logger:\n                with patch.object(User.Objects, 'delete', side_effect=Exception(msg)) as patch_user_delete:\n                    resp = await client.delete('/fledge/admin/2/delete', headers=ADMIN_USER_HEADER)\n                    assert 500 == resp.status\n                    assert msg == resp.reason\n                patch_user_delete.assert_called_once_with(2)\n            args = patch_logger.call_args\n            assert 'Failed to delete the user ID:<2>.' 
== args[0][1]\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'DELETE', '/fledge/admin/2/delete')\n\n    async def test_logout(self, client, mocker):\n        ret_val = {'response': 'deleted', 'rows_affected': 1}\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv = await mock_coro(ret_val)\n        with patch.object(auth._logger, 'info') as patch_auth_logger_info:\n            with patch.object(User.Objects, 'delete_user_tokens', return_value=_rv) as patch_delete_user_token:\n                resp = await client.put('/fledge/2/logout', headers=ADMIN_USER_HEADER)\n                assert 200 == resp.status\n                r = await resp.text()\n                assert {'logout': True} == json.loads(r)\n            patch_delete_user_token.assert_called_once_with(\"2\")\n        patch_auth_logger_info.assert_called_once_with('User with ID:<2> has been logged out successfully.')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/2/logout')\n\n    async def test_logout_with_bad_user(self, client, mocker):\n        ret_val = {'response': 'deleted', 'rows_affected': 0}\n        user_id = 111\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv = await mock_coro(ret_val)\n        
with patch.object(User.Objects, 'delete_user_tokens', return_value=_rv) as patch_delete_user_token:\n            resp = await client.put('/fledge/{}/logout'.format(user_id), headers=ADMIN_USER_HEADER)\n            assert 404 == resp.status\n            assert 'Not Found' == resp.reason\n        patch_delete_user_token.assert_called_once_with(str(user_id))\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n\n    async def test_logout_me(self, client, mocker):\n        ret_val = {'response': 'deleted', 'rows_affected': 1}\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv = await mock_coro(ret_val)\n        with patch.object(auth._logger, 'info') as patch_auth_logger_info:\n            with patch.object(User.Objects, 'delete_token', return_value=_rv) as patch_delete_token:\n                resp = await client.put('/fledge/logout', headers=ADMIN_USER_HEADER)\n                assert 200 == resp.status\n                r = await resp.text()\n                assert {'logout': True} == json.loads(r)\n            patch_delete_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_auth_logger_info.assert_called_once_with('User has been logged out successfully.')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/logout')\n\n    async def test_logout_me_with_bad_token(self, client, mocker):\n        ret_val = {'response': 'deleted', 'rows_affected': 0}\n        patch_logger_debug, 
patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv = await mock_coro(ret_val)\n        with patch.object(auth._logger, 'error') as patch_auth_logger:\n            with patch.object(User.Objects, 'delete_token', return_value=_rv) as patch_delete_token:\n                resp = await client.put('/fledge/logout', headers=ADMIN_USER_HEADER)\n                assert 404 == resp.status\n            patch_delete_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/logout')\n\n    async def test_enable_with_super_admin_user(self, client, mocker):\n        msg = 'Restricted for Super Admin user.'\n        ret_val = [{'id': '1'}]\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv = await mock_coro(ret_val)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv) as patch_role_id:\n            with patch.object(auth._logger, 'warning') as patch_logger_warning:\n                resp = await client.put('/fledge/admin/1/enable', data=json.dumps({'role_id': 2}),\n                                        headers=ADMIN_USER_HEADER)\n                assert 403 == resp.status\n                assert msg == resp.reason\n                r = await resp.text()\n                assert {'message': msg} == json.loads(r)\n            patch_logger_warning.assert_called_once_with(msg)\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        
patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/admin/1/enable')\n\n    @pytest.mark.parametrize(\"request_data, msg\", [\n        ({}, \"Nothing to enable user update.\"),\n        ({\"enable\": 1}, \"Nothing to enable user update.\"),\n        ({\"enabled\": 1}, \"Accepted values are True/False only.\"),\n    ])\n    async def test_enable_with_bad_data(self, client, mocker, request_data, msg):\n        ret_val = [{'id': '1'}]\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv = await mock_coro(ret_val)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv) as patch_role_id:\n            resp = await client.put('/fledge/admin/2/enable', data=json.dumps(request_data),\n                                    headers=ADMIN_USER_HEADER)\n            assert 400 == resp.status\n            assert msg == resp.reason\n            r = await resp.text()\n            assert {'message': msg} == json.loads(r)\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/admin/2/enable')\n\n    @pytest.mark.parametrize(\"request_data\", [\n        {\"enabled\": 'true'}, {\"enabled\": 'True'}, {\"enabled\": 'TRUE'}, {\"enabled\": 'tRUe'},\n        {\"enabled\": 'false'}, {\"enabled\": 'False'}, {\"enabled\": 'FALSE'}, {\"enabled\": 'fAlSe'}\n    ])\n    async def test_enable_user(self, client, mocker, request_data):\n  
      uid = 2\n        if request_data['enabled'].lower() == 'true':\n            _modified_enabled_val = 't'\n            _text = 'enabled'\n            _payload = '{\"values\": {\"enabled\": \"t\"}, \"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": \"2\"}}'\n        else:\n            _modified_enabled_val = 'f'\n            _text = 'disabled'\n            _payload = '{\"values\": {\"enabled\": \"f\"}, \"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": \"2\"}}'\n\n        user_record = {'rows': [{'id': uid, 'role_id': '1', 'uname': 'AJ', 'enabled': 't'}], 'count': 1}\n        update_user_record = {'rows': [{'id': uid, 'role_id': '1', 'uname': 'AJ',\n                                        'enabled': _modified_enabled_val}], 'count': 1}\n        update_result = {\"rows_affected\": 1, \"response\": \"updated\"}\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        audit_details = {'user_id': uid, 'old_value': {'enabled': 't'},\n                         'new_value': {'enabled': _modified_enabled_val},\n                         'message': \"'AJ' user has been {}.\".format(_text)}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await mock_coro([{'id': '1'}])\n        _rv2 = await mock_coro(update_result)\n        _rv3 = await mock_coro(None)\n        _se1 = await mock_coro(user_record)\n        _se2 = await mock_coro(update_user_record)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv1) as patch_role_id:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                                  side_effect=[_se1, _se2]) as q_tbl_patch:\n                    with patch.object(storage_client_mock, 'update_tbl',\n                                      return_value=_rv2) 
as update_tbl_patch:\n                        with patch.object(AuditLogger, '__init__', return_value=None):\n                            with patch.object(AuditLogger, 'information', return_value=_rv3) as patch_audit:\n                                resp = await client.put('/fledge/admin/{}/enable'.format(uid), data=json.dumps(\n                                    request_data), headers=ADMIN_USER_HEADER)\n                                assert 200 == resp.status\n                                r = await resp.text()\n                                assert {\"message\": \"User with ID:<2> has been {} successfully.\".format(_text)\n                                        } == json.loads(r)\n                            patch_audit.assert_called_once_with('USRCH', audit_details)\n                    update_tbl_patch.assert_called_once_with('users', _payload)\n                assert 2 == q_tbl_patch.call_count\n                args, kwargs = q_tbl_patch.call_args_list[0]\n                assert ('users', '{\"return\": [\"id\", \"uname\", \"role_id\", \"enabled\"], '\n                                 '\"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": \"2\"}}') == args\n                args, kwargs = q_tbl_patch.call_args_list[1]\n                assert ('users', '{\"return\": [\"id\", \"uname\", \"role_id\", \"enabled\"], '\n                                 '\"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": \"2\"}}') == args\n            patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/admin/2/enable')\n\n    async def test_reset_super_admin(self, client, mocker):\n        msg = 'Restricted for Super Admin user.'\n 
       ret_val = [{'id': '1'}]\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv = await mock_coro(ret_val)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv) as patch_role_id:\n            with patch.object(auth._logger, 'warning') as patch_logger_warning:\n                resp = await client.put('/fledge/admin/1/reset', data=json.dumps({'role_id': 2}),\n                                        headers=ADMIN_USER_HEADER)\n                assert 403 == resp.status\n                assert msg == resp.reason\n            patch_logger_warning.assert_called_once_with(msg)\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/admin/1/reset')\n\n    @pytest.mark.parametrize(\"request_data, msg\", [\n        ({}, \"Nothing to update the user.\"),\n        ({\"invalid\": 1}, \"Nothing to update the user.\"),\n        ({\"password\": \"FL\"}, PASSWORD_MIN_LENGTH_ERROR_MSG),\n        ({\"password\": 1}, \"New password should be in string format.\")\n    ])\n    async def test_reset_with_bad_data(self, client, mocker, request_data, msg):\n        ret_val = [{'id': '1'}]\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        rv1 = await mock_coro(ret_val)\n        rv2 = await mock_coro(msg)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=rv1) as patch_role_id:\n            with patch.object(auth, 'validate_password', return_value=rv2):\n                resp = await 
client.put('/fledge/admin/2/reset', data=json.dumps(request_data),\n                                        headers=ADMIN_USER_HEADER)\n                assert 400 == resp.status\n                assert msg == resp.reason\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/admin/2/reset')\n\n    async def test_reset_with_bad_role(self, client, mocker):\n        request_data = {\"role_id\": \"blah\"}\n        msg = \"Invalid or bad role id.\"\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        _rv1 = await mock_coro([{'id': '1'}])\n        _rv2 = await mock_coro(False)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv1) as patch_role_id:\n            with patch.object(auth, 'is_valid_role', return_value=_rv2) as patch_role:\n                resp = await client.put('/fledge/admin/2/reset', data=json.dumps(request_data),\n                                        headers=ADMIN_USER_HEADER)\n                assert 400 == resp.status\n                assert msg == resp.reason\n            patch_role.assert_called_once_with(request_data['role_id'])\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/admin/2/reset')\n\n    @pytest.mark.parametrize(\"exception_name, status_code, msg\", [\n  
      (ValueError, 400, 'None'),\n        (User.DoesNotExist, 404, 'User with ID:<2> does not exist.'),\n        (User.PasswordAlreadyUsed, 400, 'The new password should be different from previous 3 used.')\n    ])\n    async def test_reset_exceptions(self, client, mocker, exception_name, status_code, msg):\n        request_data = {'role_id': '2'}\n        user_id = 2\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        rv1 = await mock_coro([{'id': '1'}])\n        rv2 = await mock_coro(True)\n        rv3 = await mock_coro(\"\")\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=rv1) as patch_role_id:\n            with patch.object(auth, 'is_valid_role', return_value=rv2) as patch_role:\n                with patch.object(auth, 'validate_password', return_value=rv3):\n                    with patch.object(User.Objects, 'update', side_effect=exception_name(msg)) as patch_update:\n                        with patch.object(auth._logger, 'warning') as patch_logger:\n                            resp = await client.put('/fledge/admin/{}/reset'.format(user_id),\n                                                    data=json.dumps(request_data), headers=ADMIN_USER_HEADER)\n                            assert status_code == resp.status\n                            assert msg == resp.reason\n                        if exception_name == User.PasswordAlreadyUsed:\n                            patch_logger.assert_called_once_with(msg)\n                    patch_update.assert_called_once_with(str(user_id), request_data)\n            patch_role.assert_called_once_with(request_data['role_id'])\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        
patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/admin/2/reset')\n\n    async def test_reset_unknown_exception(self, client, mocker):\n        request_data = {'role_id': '2'}\n        user_id = 2\n        msg = 'Something went wrong'\n        logger_msg = 'Failed to reset the user ID:<{}>.'.format(user_id)\n\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        rv1 = await mock_coro([{'id': '1'}])\n        rv2 = await mock_coro(True)\n        rv3 = await mock_coro(\"\")\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=rv1) as patch_role_id:\n            with patch.object(auth, 'is_valid_role', return_value=rv2) as patch_role:\n                with patch.object(auth, 'validate_password', return_value=rv3):\n                    with patch.object(User.Objects, 'update', side_effect=Exception(msg)) as patch_update:\n                        with patch.object(auth._logger, 'error') as patch_logger:\n                            resp = await client.put('/fledge/admin/{}/reset'.format(user_id), data=json.dumps(request_data),\n                                                    headers=ADMIN_USER_HEADER)\n                            assert 500 == resp.status\n                            assert msg == resp.reason\n                        args = patch_logger.call_args\n                        assert logger_msg == args[0][1]\n                    patch_update.assert_called_once_with(str(user_id), request_data)\n            patch_role.assert_called_once_with(request_data['role_id'])\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        
patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/admin/2/reset')\n\n    async def test_reset_role_and_password(self, client, mocker):\n        request_data = {'role_id': '2', 'password': 'Test@123'}\n        user_id = 2\n        msg = 'User with ID:<{}> has been updated successfully.'.format(user_id)\n        ret_val = {'response': 'updated', 'rows_affected': 1}\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        \n        rv1 = await mock_coro([{'id': '1'}])\n        rv2 = await mock_coro(True)\n        rv3 = await mock_coro(ret_val)\n        rv4 = await mock_coro(\"\")\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=rv1) as patch_role_id:\n            with patch.object(auth, 'is_valid_role', return_value=rv2) as patch_role:\n                with patch.object(auth, 'validate_password', return_value=rv4):\n                    with patch.object(User.Objects, 'update', return_value=rv3) as patch_update:\n                        with patch.object(auth._logger, 'info') as patch_auth_logger_info:\n                            resp = await client.put('/fledge/admin/{}/reset'.format(user_id),\n                                                    data=json.dumps(request_data), headers=ADMIN_USER_HEADER)\n                            assert 200 == resp.status\n                            r = await resp.text()\n                            assert {'message': msg} == json.loads(r)\n                        patch_auth_logger_info.assert_called_once_with(msg)\n                    patch_update.assert_called_once_with(str(user_id), request_data)\n            patch_role.assert_called_once_with(request_data['role_id'])\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n      
  patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/admin/2/reset')\n\n    @pytest.mark.parametrize(\"request_data, ret_val\", [\n        ({\"username\": \"admin\", \"password\": \"fledge\"}, (1, \"token1\", True)),\n        ({\"username\": \"user\", \"password\": \"fledge\"}, (2, \"token2\", False))\n    ])\n    async def test_login_auth_password(self, client, request_data, ret_val):\n        async def async_mock():\n            return ret_val\n\n        _rv = await async_mock()\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(User.Objects, 'login', return_value=_rv) as patch_user_login:\n                with patch.object(auth._logger, 'info') as patch_auth_logger:\n                    resp = await client.post('/fledge/login', data=json.dumps(request_data))\n                    assert 200 == resp.status\n                    r = await resp.text()\n                    actual = json.loads(r)\n                    assert ret_val[0] == actual['uid']\n                    assert ret_val[1] == actual['token']\n                    assert ret_val[2] == actual['admin']\n                patch_auth_logger.assert_called_once_with('User with username:<{}> logged in successfully.'.format(\n                    request_data['username']))\n            # TODO: host arg patch transport.request.extra_info\n            args, kwargs = patch_user_login.call_args\n            assert request_data['username'] == args[0]\n            assert request_data['password'] == args[1]\n            # patch_user_login.assert_called_once_with()\n        patch_logger.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/login')\n\n    @pytest.mark.parametrize(\"exception_name, status_code, msg\", [\n        
(User.PasswordNotSetError, 400, 'Password is not set for this user.'),\n        (User.DoesNotExist, 404, 'User does not exist'),\n        (User.PasswordDoesNotMatch, 404, 'Username or Password do not match'),\n        (Exception, 500, 'Internal Server Error')\n    ])\n    async def test_login_fails_when_password_auth_used_but_password_not_set(self, client, exception_name,\n                                                                            status_code, msg):\n        request_data_payload = {\"username\": \"ranveer\", \"password\": \"Singh@123\"}\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(User.Objects, 'login', side_effect=exception_name(msg)):\n                with patch.object(auth._logger, 'error') as patch_auth_logger:\n                    resp = await client.post('/fledge/login', data=json.dumps(request_data_payload))\n                    assert status_code == resp.status\n                    assert msg == resp.reason\n                    r = await resp.text()\n                    actual = json.loads(r)\n                    assert {'message': msg} == actual\n                # Use an explicit if/else; a conditional expression should not be used for side effects\n                if status_code == 500:\n                    patch_auth_logger.assert_called()\n                else:\n                    patch_auth_logger.assert_not_called()\n        patch_logger.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/login')\n\n    @pytest.mark.parametrize(\"auth_method, request_data, ret_val\", [\n        (\"certificate\", \"-----BEGIN CERTIFICATE----- Test -----END CERTIFICATE-----\", (2, \"token2\", False))\n    ])\n    async def test_login_auth_certificate(self, client, auth_method, request_data, ret_val):\n        hdr = {'content-type': 'text/plain'}\n\n        async def async_mock():\n            return ret_val\n\n        async def async_get_user():\n            return {'role_id': '2', 'id': '2', 'uname': 'user'}\n\n        _rv1 = await asyncio.sleep(.1)\n        _rv2 = await async_mock()\n        _rv3 = await async_get_user()\n        with 
patch.object(middleware._logger, 'info'):\n            with patch.object(server.Server, \"auth_method\", auth_method):\n                with patch.object(SSLVerifier, 'get_subject', return_value={\"commonName\": \"user\"}):\n                    with patch.object(User.Objects, 'verify_certificate', return_value=_rv1):\n                        with patch.object(User.Objects, 'certificate_login', return_value=_rv2):\n                            with patch.object(User.Objects, 'get', return_value=_rv3):\n                                with patch.object(auth._logger, 'info'):\n                                    req_data = request_data\n                                    resp = await client.post('/fledge/login', data=req_data, headers=hdr)\n                                    assert 200 == resp.status\n                                    r = await resp.text()\n                                    actual = json.loads(r)\n                                    assert ret_val[0] == actual['uid']\n                                    assert ret_val[1] == actual['token']\n                                    assert ret_val[2] == actual['admin']\n\n    @pytest.mark.skip(reason=\"Request mock required\")\n    @pytest.mark.parametrize(\"auth_method, request_data, ret_val, expected\", [\n        (\"certificate\", {\"username\": \"admin\", \"password\": \"fledge\"}, (1, \"token1\", True),\n         \"Invalid authentication method, use certificate instead.\"),\n    ])\n    async def test_login_auth_exception1(self, client, auth_method, request_data, ret_val, expected):\n        async def async_mock():\n            return ret_val\n        with patch.object(middleware._logger, 'info') as patch_logger_debug:\n            with patch.object(server.Server, \"auth_method\", auth_method) as patch_auth_method:\n                req_data = json.dumps(request_data) if isinstance(request_data, dict) else request_data\n                resp = await client.post('/fledge/login', data=req_data)\n       
         assert 401 == resp.status\n                actual = await resp.text()\n                assert \"401: {}\".format(expected) == actual\n\n    @pytest.mark.skip(reason=\"Request mock required\")\n    @pytest.mark.parametrize(\"auth_method, request_data, ret_val, expected\", [\n        (\"password\", \"-----BEGIN CERTIFICATE----- Test -----END CERTIFICATE-----\",\n         (2, \"token2\", False), \"Invalid authentication method, use password instead.\")\n    ])\n    async def test_login_auth_exception2(self, client, auth_method, request_data, ret_val, expected):\n        TEXT_HEADER = {'content-type': 'text/plain'}\n\n        async def async_mock():\n            return ret_val\n        with patch.object(middleware._logger, 'info') as patch_logger_debug:\n            with patch.object(server.Server, \"auth_method\", auth_method) as patch_auth_method:\n                req_data = request_data\n                resp = await client.post('/fledge/login', data=req_data, headers=TEXT_HEADER)\n                assert 401 == resp.status\n                actual = await resp.text()\n                assert \"401: {}\".format(expected) == actual\n\n    @pytest.mark.parametrize(\"pwd, error_msg, policy\", [\n        (\"pass\", \"Password should have minimum 6 characters.\", \"Any characters\"),\n        (\"passwords\", \"Password should have maximum 8 characters.\", \"Any characters\"),\n        (\"password\", \"Password must contain upper and lower case letters.\", \"Mixed case Alphabetic\"),\n        (\"password\", \"Password must contain upper, lower case, uppercase and numeric values.\", \"Mixed case and numeric\"),\n        (\"password\", \"Password must contain atleast one upper and lower case letter, numeric and special characters.\", \"Mixed case, numeric and special characters\"),\n    ])\n    async def test_bad_validate_password(self, pwd, error_msg, policy):\n        async def mock_cat():\n            return {\n            \"policy\": {\n                
\"description\": \"Password policy\",\n                \"type\": \"enumeration\",\n                \"options\": [\n                    \"Any characters\",\n                    \"Mixed case Alphabetic\",\n                    \"Mixed case and numeric\",\n                    \"Mixed case, numeric and special characters\"\n                ],\n                \"default\": \"Any characters\",\n                \"displayName\": \"Policy\",\n                \"order\": \"1\",\n                \"value\": policy\n            },\n            \"length\": {\n                \"description\": \"Minimum password length\",\n                \"type\": \"integer\",\n                \"default\": \"6\",\n                \"displayName\": \"Minimum Length\",\n                \"minimum\": \"6\",\n                \"maximum\": \"8\",\n                \"order\": \"2\",\n                \"value\": \"6\"\n            },\n            \"expiration\": {\n                \"description\": \"Number of days after which passwords must be changed\",\n                \"type\": \"integer\",\n                \"default\": \"0\",\n                \"displayName\": \"Expiry (in Days)\",\n                \"order\": \"3\",\n                \"value\": \"0\"\n            }\n        }\n        rv = await mock_cat() \n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(ConfigurationManager, \"get_category_all_items\", return_value=rv) as patch_get_cat:\n                msg = await auth.validate_password(pwd)\n                assert error_msg == msg\n            patch_get_cat.assert_called_once_with('password')\n\n    @pytest.mark.parametrize(\"pwd, policy\", [\n        (\"password\", \"Any characters\"), (\"Password\", \"Any characters\"), (\"Passw0rd\", \"Any characters\"),\n        (\"passw0rd\", \"Any characters\"), (\"PaSsw0#d\", \"Any characters\"), (\"paSsw0#1\", \"Any 
characters\"),\n        (\"Password\", \"Mixed case Alphabetic\"), (\"PassworD\", \"Mixed case Alphabetic\"),\n        (\"Pass123\", \"Mixed case Alphabetic\"), (\"Pass!23\", \"Mixed case Alphabetic\"),\n        (\"Passw0rd\", \"Mixed case and numeric\"), (\"paSSw0rd\", \"Mixed case and numeric\"),\n        (\"1ass0Rd\", \"Mixed case and numeric\"), (\"PASSw0rD\", \"Mixed case and numeric\"),\n        (\"pAss@!1\", \"Mixed case, numeric and special characters\"),\n        (\"@Aswe12\", \"Mixed case, numeric and special characters\"),\n        (\"(Aswe1)\", \"Mixed case, numeric and special characters\"),\n        (\"s!@#$%G2\", \"Mixed case, numeric and special characters\"),\n        (\"Fl@3737\", \"Mixed case, numeric and special characters\")\n    ])\n    async def test_good_validate_password(self, pwd, policy):\n        async def mock_cat():\n            return {\n            \"policy\": {\n                \"description\": \"Password policy\",\n                \"type\": \"enumeration\",\n                \"options\": [\n                    \"Any characters\",\n                    \"Mixed case Alphabetic\",\n                    \"Mixed case and numeric\",\n                    \"Mixed case, numeric and special characters\"\n                ],\n                \"default\": \"Any characters\",\n                \"displayName\": \"Policy\",\n                \"order\": \"1\",\n                \"value\": policy\n            },\n            \"length\": {\n                \"description\": \"Minimum password length\",\n                \"type\": \"integer\",\n                \"default\": \"6\",\n                \"displayName\": \"Minimum Length\",\n                \"minimum\": \"6\",\n                \"maximum\": \"8\",\n                \"order\": \"2\",\n                \"value\": \"6\"\n            },\n            \"expiration\": {\n                \"description\": \"Number of days after which passwords must be changed\",\n                \"type\": \"integer\",\n          
      \"default\": \"0\",\n                \"displayName\": \"Expiry (in Days)\",\n                \"order\": \"3\",\n                \"value\": \"0\"\n            }\n        }\n        rv = await mock_cat() \n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(ConfigurationManager, \"get_category_all_items\", return_value=rv) as patch_get_cat:\n                msg = await auth.validate_password(pwd)\n                assert \"\" == msg\n            patch_get_cat.assert_called_once_with('password')\n\n\n    @pytest.mark.parametrize(\"data, error_message\", [\n        (\"blah\", \"expiration_days must be an integer.\"),\n        (0, \"expiration_days must be between 1 and 365.\"),\n        (366, \"expiration_days must be between 1 and 365.\"),\n        (-1, \"expiration_days must be between 1 and 365.\")\n    ])\n    async def test_bad_certificate(self, client, data, error_message):\n        payload = {\"expiration_days\": data}\n        valid_user = {'id': 1, 'uname': 'admin', 'role_id': '1'}\n        _rv1 = await mock_coro(valid_user['id'])\n        _rv2 = await mock_coro(None)\n        _rv3 = await mock_coro(valid_user)\n        _rv4 = await mock_coro([{'id': '1'}])\n        with patch.object(middleware._logger, 'debug') as patch_logger_debug:\n            with patch.object(User.Objects, 'validate_token', return_value=_rv1) as patch_validate_token:\n                with patch.object(User.Objects, 'refresh_token_expiry', return_value=_rv2\n                                  ) as patch_refresh_token:\n                    with patch.object(User.Objects, 'get', return_value=_rv3):\n                            with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv4\n                                              ) as patch_role_id:\n                                resp = await client.post('/fledge/admin/3/authcertificate', 
data=json.dumps(payload),\n                                                         headers=ADMIN_USER_HEADER)\n                                assert 400 == resp.status\n                                assert error_message == resp.reason\n                                actual = json.loads(await resp.text())\n                                assert error_message == actual\n                            patch_role_id.assert_called_once_with('admin')\n                patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n            patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST',\n                                                   '/fledge/admin/3/authcertificate')\n\n    async def test_certificate(self, client):\n        valid_user = {'id': 1, 'uname': 'admin', 'role_id': '1'}\n        get_user = {'id': 3, 'uname': 'dviewer', 'real_name': 'Data Viewer', 'role_id': 4, 'description': 'Test',\n                    'enabled': 'f', 'access_method': 'any'}\n        msg = \"An Authentication certificate has been created for user '{}'.\".format(get_user['uname'])\n        _rv1 = await mock_coro(valid_user['id'])\n        _rv2 = await mock_coro(None)\n        _rv3 = await mock_coro([{'id': '1'}])\n        _se1 = await mock_coro(valid_user)\n        _se2 = await mock_coro(get_user)\n        with patch.object(middleware._logger, 'debug') as patch_logger_debug:\n            with patch.object(User.Objects, 'validate_token', return_value=_rv1) as patch_validate_token:\n                with patch.object(User.Objects, 'refresh_token_expiry', return_value=_rv2\n                                  ) as patch_refresh_token:\n                    with patch.object(User.Objects, 'get', side_effect=[_se1, _se2]):\n                            with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv3\n                                  
            ) as patch_role_id:\n                                resp = await client.post('/fledge/admin/3/authcertificate', data=None,\n                                                         headers=ADMIN_USER_HEADER)\n                                assert 200 == resp.status\n                                assert \"OK\" == resp.reason\n                                cert = await resp.text()\n                                assert cert.startswith(\"-----BEGIN CERTIFICATE-----\")\n                                assert cert.endswith(\"\\n-----END CERTIFICATE-----\\n\")\n                            patch_role_id.assert_called_once_with('admin')\n                patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n            patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST',\n                                                   '/fledge/admin/3/authcertificate')\n\n    async def test_certificate_verification_value_error(self, client):\n        with patch.object(User.Objects, 'verify_certificate', side_effect=ValueError(\"Invalid certificate format\")):\n            cert_data = \"-----BEGIN CERTIFICATE-----\\ntest certificate data\\n-----END CERTIFICATE-----\"\n            resp = await client.post('/fledge/login', data=cert_data)\n            assert 401 == resp.status\n            assert \"Authentication failed: Invalid certificate format\" == resp.reason\n\n    @pytest.mark.parametrize(\"invalid_data\", [\n        \"just some text\", \"\", \"   \", \"{\", \"{}\"\n    ])\n    async def test_various_invalid_data_formats(self, client, invalid_data):\n        resp = await client.post('/fledge/login', data=invalid_data)\n        assert 400 == resp.status\n        assert \"Invalid or untrusted certificate or missing credentials in payload.\" == resp.reason\n\n    @pytest.mark.parametrize(\"exception_class\", [\n        
SSLVerifier.VerificationError, User.DoesNotExist, OSError\n    ])\n    async def test_certificate_verification_improved_error_message(self, client, exception_class):\n        with patch.object(User.Objects, 'verify_certificate', side_effect=exception_class(\"Verification failed\")):\n            cert_data = \"-----BEGIN CERTIFICATE-----\\ntest certificate data\\n-----END CERTIFICATE-----\"\n            resp = await client.post('/fledge/login', data=cert_data)\n            assert 401 == resp.status\n            assert \"Authentication failed: invalid or untrusted certificate.\" == resp.reason\n\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_auth_optional.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nfrom unittest.mock import patch\nfrom aiohttp import web\nimport pytest\n\nfrom fledge.common.web import middleware\nfrom fledge.services.core import routes\nfrom fledge.services.core.user_model import User\nfrom fledge.services.core.api import auth\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nFORBIDDEN = 'Forbidden'\nWARN_MSG = 'Resource you were trying to reach is absolutely forbidden for some reason'\n\n\nasync def mock_coro(*args, **kwargs):\n    return None if len(args) == 0 else args[0]\n\n\nclass TestAuthOptional:\n\n    @pytest.fixture\n    def client(self, loop, aiohttp_server, aiohttp_client):\n        app = web.Application(loop=loop,  middlewares=[middleware.optional_auth_middleware])\n        # fill the routes table\n        routes.setup(app)\n        server = loop.run_until_complete(aiohttp_server(app))\n        loop.run_until_complete(server.start_server(loop=loop))\n        client = loop.run_until_complete(aiohttp_client(server))\n        return client\n\n    async def test_get_roles(self, client):\n        _rv = await mock_coro([])\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(User.Objects, 'get_roles', return_value=_rv) as patch_user_obj:\n                resp = await client.get('/fledge/user/role')\n                assert 200 == resp.status\n                r = await resp.text()\n                assert {'roles': []} == json.loads(r)\n            patch_user_obj.assert_called_once_with()\n        patch_logger.assert_called_once_with('Received %s request for %s', 'GET', '/fledge/user/role')\n\n    @pytest.mark.parametrize(\"ret_val, exp_result\", [\n        ([], []),\n        ([{'uname': 'admin', 'role_id': '1', 'access_method': 'any', 'id': '1', 'real_name': 
'Admin',\n           'description': 'Admin user', 'enabled': 't', 'failed_attempts': 0, 'block_until': ''},\n          {'uname': 'user', 'role_id': '2', 'access_method': 'any', 'id': '2', 'real_name': 'Non-admin',\n           'description': 'Normal user', 'enabled': 't', 'failed_attempts': 0, 'block_until': ''},\n          {'uname': 'dviewer', 'role_id': '3', 'access_method': 'any', 'id': '3', 'real_name': 'Data-Viewer',\n           'description': 'Data user', 'enabled': 'f', 'failed_attempts': 0, 'block_until': ''}\n          ],\n         [{\"userId\": \"1\", \"userName\": \"admin\", \"roleId\": \"1\", \"accessMethod\": \"any\", \"realName\": \"Admin\",\n           \"description\": \"Admin user\"},\n          {\"userId\": \"2\", \"userName\": \"user\", \"roleId\": \"2\", \"accessMethod\": \"any\", \"realName\": \"Non-admin\",\n           \"description\": \"Normal user\"}])\n    ])\n    async def test_get_all_users(self, client, ret_val, exp_result):\n        _rv = await mock_coro(ret_val)\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(User.Objects, 'all', return_value=_rv) as patch_user_obj:\n                resp = await client.get('/fledge/user')\n                assert 200 == resp.status\n                r = await resp.text()\n                assert {'users': exp_result} == json.loads(r)\n            patch_user_obj.assert_called_once_with()\n        patch_logger.assert_called_once_with('Received %s request for %s', 'GET', '/fledge/user')\n\n    @pytest.mark.parametrize(\"request_params, exp_result, arg1, arg2\", [\n        ('?id=1', {'uname': 'admin', 'role_id': '1', 'id': '1', 'access_method': 'any', 'real_name': 'Admin', 'description': 'Admin user','failed_attempts': 0, 'block_until': ''}, 1, None),\n        ('?username=admin', {'uname': 'admin', 'role_id': '1', 'id': '1', 'access_method': 'any', 'real_name': 'Admin', 'description': 'Admin user', 'failed_attempts': 0, 'block_until': ''},  None, 
'admin'),\n        ('?id=1&username=admin', {'uname': 'admin', 'role_id': '1', 'id': '1', 'access_method': 'any', 'real_name': 'Admin', 'description': 'Admin user', 'failed_attempts': 0, 'block_until': ''}, 1, 'admin'),\n        ('?id=1&user=admin', {'uname': 'admin', 'role_id': '1', 'id': '1', 'access_method': 'any', 'real_name': 'Admin', 'description': 'Admin user', 'failed_attempts': 0, 'block_until': ''}, 1, None),\n        ('?uid=1&username=admin', {'uname': 'admin', 'role_id': '1', 'id': '1', 'access_method': 'any', 'real_name': 'Admin', 'description': 'Admin user', 'failed_attempts': 0, 'block_until': ''}, None, 'admin'),\n    ])\n    async def test_get_user_by_param(self, client, request_params, exp_result, arg1, arg2):\n        result = {}\n        result.update(exp_result)\n        _rv = await mock_coro(result)\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(User.Objects, 'get', return_value=_rv) as patch_user_obj:\n                resp = await client.get('/fledge/user{}'.format(request_params))\n                assert 200 == resp.status\n                r = await resp.text()\n                actual = json.loads(r)\n                assert actual['userId'] == exp_result['id']\n                assert actual['roleId'] == exp_result['role_id']\n                assert actual['userName'] == exp_result['uname']\n                assert actual['accessMethod'] == exp_result['access_method']\n                assert actual['realName'] == exp_result['real_name']\n                assert actual['description'] == exp_result['description']\n            patch_user_obj.assert_called_once_with(arg1, arg2)\n        patch_logger.assert_called_once_with('Received %s request for %s', 'GET', '/fledge/user')\n\n    @pytest.mark.parametrize(\"request_params, error_msg, arg1, arg2\", [\n        ('?id=10', 'User with id:<10> does not exist', 10, None),\n        ('?username=blah', 'User with name:<blah> does not exist', None, 
'blah')\n    ])\n    async def test_get_user_exception_by_param(self, client, request_params, error_msg, arg1, arg2):\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(User.Objects, 'get', side_effect=User.DoesNotExist(error_msg)) as patch_user_get:\n                resp = await client.get('/fledge/user{}'.format(request_params))\n                assert 404 == resp.status\n                assert error_msg == resp.reason\n            patch_user_get.assert_called_once_with(arg1, arg2)\n        patch_logger.assert_called_once_with('Received %s request for %s', 'GET', '/fledge/user')\n\n    @pytest.mark.parametrize(\"request_params\", ['?id=0', '?id=blah', '?id=-1'])\n    async def test_get_bad_user_id_param_exception(self, client, request_params):\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            resp = await client.get('/fledge/user{}'.format(request_params))\n            assert 400 == resp.status\n            assert 'Bad user ID' == resp.reason\n        patch_logger.assert_called_once_with('Received %s request for %s', 'GET', '/fledge/user')\n\n    @pytest.mark.parametrize(\"request_data, error_msg\", [\n        ({}, \"Invalid or untrusted certificate or missing credentials in payload.\"),\n        ({\"username\": 12}, \"Username or password is missing\"),\n        ({\"password\": 12}, \"Username or password is missing\"),\n        ({\"username\": \"blah\"}, \"Username or password is missing\"),\n        ({\"password\": \"blah\"}, \"Username or password is missing\"),\n        ({\"invalid\": \"blah\"}, \"Username or password is missing\"),\n        ({\"username\": \"blah\", \"pwd\": \"blah\"}, \"Username or password is missing\"),\n        ({\"uname\": \"blah\", \"password\": \"blah\"}, \"Username or password is missing\"),\n    ])\n    async def test_bad_login(self, client, request_data, error_msg):\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n     
       resp = await client.post('/fledge/login', data=json.dumps(request_data))\n            assert 400 == resp.status\n            assert error_msg == resp.reason\n        patch_logger.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/login')\n\n    @pytest.mark.parametrize(\"request_data, status_code, exception_name, msg\", [\n        ({\"username\": \"blah\", \"password\": \"blah\"}, 404, User.DoesNotExist, 'User does not exist'),\n        ({\"username\": \"admin\", \"password\": \"blah\"}, 404, User.PasswordDoesNotMatch, 'Username or Password do not match'),\n        ({\"username\": \"admin\", \"password\": 123}, 404, User.PasswordDoesNotMatch, 'Username or Password do not match'),\n        ({\"username\": 1, \"password\": 1}, 404, ValueError, 'Username should be a valid string'),\n        ({\"username\": \"user\", \"password\": \"fledge\"}, 401, User.PasswordExpired,\n         'Your password has been expired. Please set your password again.'),\n        ({\"username\": \"user1\", \"password\": \"blah\"}, 400, User.PasswordNotSetError,\n         'Password is not set for this user.')\n\n    ])\n    async def test_login_exception(self, client, request_data, status_code, exception_name, msg):\n        _rv = await mock_coro([])\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(User.Objects, 'login', side_effect=exception_name(msg)) as patch_user_login:\n                with patch.object(User.Objects, 'delete_user_tokens', return_value=_rv) as patch_delete_token:\n                    with patch.object(auth._logger, 'warning') as patch_auth_logger:\n                        resp = await client.post('/fledge/login', data=json.dumps(request_data))\n                        assert status_code == resp.status\n                        assert msg == resp.reason\n                    if status_code == 401:\n                        patch_auth_logger.assert_called_once_with(msg)\n                if 
status_code == 401:\n                    patch_delete_token.assert_called_once_with(msg)\n            # TODO: host arg patch transport.request.extra_info\n            args, kwargs = patch_user_login.call_args\n            assert str(request_data['username']) == args[0]\n            assert request_data['password'] == args[1]\n            # patch_user_login.assert_called_once_with()\n        patch_logger.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/login')\n\n    @pytest.mark.parametrize(\"request_data, ret_val\", [\n        ({\"username\": \"admin\", \"password\": \"fledge\"}, (1, \"token1\", True)),\n        ({\"username\": \"user\", \"password\": \"fledge\"}, (2, \"token2\", False))\n    ])\n    async def test_login(self, client, request_data, ret_val):\n        async def async_mock():\n            return ret_val\n\n        _rv = await async_mock()\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(User.Objects, 'login', return_value=_rv) as patch_user_login:\n                with patch.object(auth._logger, 'info') as patch_auth_logger:\n                    resp = await client.post('/fledge/login', data=json.dumps(request_data))\n                    assert 200 == resp.status\n                    r = await resp.text()\n                    actual = json.loads(r)\n                    assert ret_val[0] == actual['uid']\n                    assert ret_val[1] == actual['token']\n                    assert ret_val[2] == actual['admin']\n                patch_auth_logger.assert_called_once_with('User with username:<{}> logged in successfully.'.format(\n                    request_data['username']))\n            # TODO: host arg patch transport.request.extra_info\n            args, kwargs = patch_user_login.call_args\n            assert request_data['username'] == args[0]\n            assert request_data['password'] == args[1]\n            # patch_user_login.assert_called_once_with()\n        
patch_logger.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/login')\n\n    async def test_logout(self, client):\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(auth._logger, 'warning') as patch_logger_warning:\n                resp = await client.put('/fledge/2/logout')\n                assert 403 == resp.status\n                assert FORBIDDEN == resp.reason\n            patch_logger_warning.assert_called_once_with(WARN_MSG)\n        patch_logger.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/2/logout')\n\n    async def test_update_password(self, client):\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(auth._logger, 'warning') as patch_logger_warning:\n                resp = await client.put('/fledge/user/1/password')\n                assert 403 == resp.status\n                assert FORBIDDEN == resp.reason\n            patch_logger_warning.assert_called_once_with(WARN_MSG)\n        patch_logger.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/user/1/password')\n\n    async def test_update_me(self, client):\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(auth._logger, 'warning') as patch_logger_warning:\n                resp = await client.put('/fledge/user')\n                assert 403 == resp.status\n                assert FORBIDDEN == resp.reason\n            patch_logger_warning.assert_called_once_with(WARN_MSG)\n        patch_logger.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/user')\n\n    async def test_update_user(self, client):\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(auth._logger, 'warning') as patch_logger_warning:\n                resp = await client.put('/fledge/admin/1')\n                assert 403 == resp.status\n                
assert FORBIDDEN == resp.reason\n            patch_logger_warning.assert_called_once_with(WARN_MSG)\n        patch_logger.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/admin/1')\n\n    async def test_delete_user(self, client):\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(auth._logger, 'warning') as patch_auth_logger_warn:\n                resp = await client.delete('/fledge/admin/1/delete')\n                assert 403 == resp.status\n                assert FORBIDDEN == resp.reason\n            patch_auth_logger_warn.assert_called_once_with(WARN_MSG)\n        patch_logger.assert_called_once_with('Received %s request for %s', 'DELETE', '/fledge/admin/1/delete')\n\n    async def test_create_user(self, client):\n        request_data = {\"username\": \"ajtest\", \"password\": \"F0gl@mp\"}\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(auth._logger, 'warning') as patch_logger_warning:\n                resp = await client.post('/fledge/admin/user', data=json.dumps(request_data))\n                assert 403 == resp.status\n                assert FORBIDDEN == resp.reason\n            patch_logger_warning.assert_called_once_with(WARN_MSG)\n        patch_logger.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/admin/user')\n\n    async def test_enable_user(self, client):\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(auth._logger, 'warning') as patch_logger_warning:\n                resp = await client.put('/fledge/admin/2/enable')\n                assert 403 == resp.status\n                assert FORBIDDEN == resp.reason\n            patch_logger_warning.assert_called_once_with(WARN_MSG)\n        patch_logger.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/admin/2/enable')\n\n    async def test_reset(self, client):\n        with 
patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(auth._logger, 'warning') as patch_logger_warning:\n                resp = await client.put('/fledge/admin/2/reset')\n                assert 403 == resp.status\n                assert FORBIDDEN == resp.reason\n            patch_logger_warning.assert_called_once_with(WARN_MSG)\n        patch_logger.assert_called_once_with('Received %s request for %s', 'PUT', '/fledge/admin/2/reset')\n\n    @pytest.mark.parametrize(\"role_id, expected\", [\n        (1, True),\n        (2, True),\n        (3, False)\n    ])\n    async def test_valid_role(self, role_id, expected):\n        ret_val = [{\"id\": \"1\", \"description\": \"for the users having all CRUD privileges including other admin users\", \"name\": \"admin\"}, {\"id\": \"2\", \"description\": \"all CRUD operations and self profile management\", \"name\": \"user\"}]\n        _rv = await mock_coro(ret_val)\n        with patch.object(User.Objects, 'get_roles', return_value=_rv) as patch_get_roles:\n            actual = await auth.is_valid_role(role_id)\n            assert expected is actual\n        patch_get_roles.assert_called_once_with()\n\n    async def test_certificate(self, client):\n        with patch.object(middleware._logger, 'debug') as patch_logger:\n            with patch.object(auth._logger, 'warning') as patch_logger_warning:\n                resp = await client.post('/fledge/admin/2/authcertificate')\n                assert 403 == resp.status\n                assert FORBIDDEN == resp.reason\n            patch_logger_warning.assert_called_once_with(WARN_MSG)\n        patch_logger.assert_called_once_with('Received %s request for %s', 'POST',\n                                             '/fledge/admin/2/authcertificate')\n\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_backup_restore.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport os\nimport json\nfrom unittest.mock import MagicMock, patch\nfrom collections import Counter\nfrom aiohttp import web\nimport pytest\nfrom fledge.common.web import middleware\nfrom fledge.services.core import routes\nfrom fledge.services.core import connect\n\nfrom fledge.plugins.storage.common.backup import Backup\nfrom fledge.plugins.storage.common.restore import Restore\nfrom fledge.plugins.storage.common import exceptions\n\nfrom fledge.services.core.api import backup_restore\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\n\n__author__ = \"Vaibhav Singhal, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nasync def mock_coro(*args, **kwargs):\n    if len(args) > 0:\n        return args[0]\n    else:\n        return \"\"\n\n\nclass TestBackup:\n    \"\"\"Unit test the Backup functionality\n    \"\"\"\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop, middlewares=[middleware.optional_auth_middleware])\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    @pytest.mark.parametrize(\"input_data, expected\", [\n        (1, \"RUNNING\"),\n        (2, \"COMPLETED\"),\n        (3, \"CANCELED\"),\n        (4, \"INTERRUPTED\"),\n        (5, \"FAILED\"),\n        (6, \"RESTORED\"),\n        (7, \"UNKNOWN\")\n    ])\n    def test_get_status(self, input_data, expected):\n        assert expected == backup_restore._get_status(input_data)\n\n    @pytest.mark.parametrize(\"request_params, key_args\", [\n        ('', {'limit': 20, 'skip': 0, 'status': None}),\n        ('?limit=1', {'limit': 1, 'skip': 0, 'status': None}),\n        ('?skip=1', {'limit': 20, 'skip': 1, 'status': None}),\n        ('?status=completed', {'limit': 
20, 'skip': 0, 'status': 2}),\n        ('?status=failed', {'limit': 20, 'skip': 0, 'status': 5}),\n        ('?status=restored&skip=10', {'limit': 20, 'skip': 10, 'status': 6}),\n        ('?status=running&limit=1', {'limit': 1, 'skip': 0, 'status': 1}),\n        ('?status=canceled&limit=10&skip=0', {'limit': 10, 'skip': 0, 'status': 3}),\n        ('?status=interrupted&limit=&skip=', {'limit': 20, 'skip': 0, 'status': 4}),\n        ('?status=&limit=&skip=', {'limit': 20, 'skip': 0, 'status': None})\n    ])\n    async def test_get_backups(self, client, request_params, key_args):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = [{'file_name': '1.dump',\n                     'id': 1, 'type': '1', 'status': '2',\n                     'ts': '2018-02-15 15:18:41.821978+05:30',\n                     'exit_code': '0'}]\n        \n        _rv = await mock_coro(response)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(Backup, 'get_all_backups', return_value=_rv) as patch_get_all_backups:\n                resp = await client.get('/fledge/backup{}'.format(request_params))\n                assert 200 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert 1 == len(json_response['backups'])\n                assert Counter({\"id\", \"date\", \"status\"}) == Counter(json_response['backups'][0].keys())\n            args, kwargs = patch_get_all_backups.call_args\n            assert key_args == kwargs\n\n    @pytest.mark.parametrize(\"request_params, response_code, response_message\", [\n        ('?limit=invalid', 400, \"Limit must be a positive integer\"),\n        ('?limit=-1', 400, \"Limit must be a positive integer\"),\n        ('?skip=invalid', 400, \"Skip/Offset must be a positive integer\"),\n        ('?skip=-1', 400, \"Skip/Offset must be a positive integer\"),\n        ('?status=BLA', 400, 
\"'BLA' is not a valid status\")\n    ])\n    async def test_get_backups_bad_data(self, client, request_params, response_code, response_message):\n        resp = await client.get('/fledge/backup{}'.format(request_params))\n        assert response_code == resp.status\n        assert response_message == resp.reason\n\n    async def test_get_backups_exceptions(self, client):\n        msg = \"Internal Server Error\"\n        with patch.object(connect, 'get_storage_async', side_effect=Exception(msg)):\n            with patch.object(backup_restore._logger, 'error') as patch_logger:\n                resp = await client.get('/fledge/backup')\n                assert 500 == resp.status\n                assert msg == resp.reason\n            assert 1 == patch_logger.call_count\n\n    async def test_create_backup(self, client):\n        async def mock_create():\n            return \"running_or_failed\"\n\n        _rv = await mock_create()\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(Backup, 'create_backup', return_value=_rv):\n                resp = await client.post('/fledge/backup')\n                assert 200 == resp.status\n                assert '{\"status\": \"running_or_failed\"}' == await resp.text()\n\n    async def test_create_backup_exception(self, client):\n        msg = \"Internal Server Error\"\n        with patch.object(connect, 'get_storage_async', side_effect=Exception(msg)):\n            with patch.object(backup_restore._logger, 'error') as patch_logger:\n                resp = await client.post('/fledge/backup')\n                assert 500 == resp.status\n                assert msg == resp.reason\n            assert 1 == patch_logger.call_count\n\n    async def test_get_backup_details(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'id': 1, 'file_name': '1.dump', 'ts': 
'2018-02-15 15:18:41.821978+05:30',\n                    'status': '2', 'type': '1', 'exit_code': '0'}\n        _rv = await mock_coro(response)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(Backup, 'get_backup_details', return_value=_rv):\n                resp = await client.get('/fledge/backup/{}'.format(1))\n                assert 200 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert 3 == len(json_response)\n                assert Counter({\"id\", \"date\", \"status\"}) == Counter(json_response.keys())\n\n    @pytest.mark.parametrize(\"input_exception, response_code, response_message\", [\n        (exceptions.DoesNotExist, 404, \"Backup id 8 does not exist\"),\n        (Exception(\"Internal Server Error\"), 500, \"Internal Server Error\")\n    ])\n    async def test_get_backup_details_exceptions(self, client, input_exception, response_code, response_message):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(Backup, 'get_backup_details', side_effect=input_exception):\n                with patch.object(backup_restore._logger, 'error') as patch_logger:\n                    resp = await client.get('/fledge/backup/{}'.format(8))\n                    assert response_code == resp.status\n                    assert response_message == resp.reason\n                if response_code == 500:\n                    assert 1 == patch_logger.call_count\n\n    async def test_get_backup_details_bad_data(self, client):\n        resp = await client.get('/fledge/backup/{}'.format('BLA'))\n        assert 400 == resp.status\n        assert \"Invalid backup id\" == resp.reason\n\n    async def test_delete_backup(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        
_rv = await mock_coro(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(Backup, 'delete_backup', return_value=_rv):\n                resp = await client.delete('/fledge/backup/{}'.format(1))\n                assert 200 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {'message': 'Backup deleted successfully'} == json_response\n\n    @pytest.mark.parametrize(\"input_exception, response_code, response_message\", [\n        (exceptions.DoesNotExist, 404, \"Backup id 8 does not exist\"),\n        (Exception(\"Internal Server Error\"), 500, \"Internal Server Error\")\n    ])\n    async def test_delete_backup_exceptions(self, client, input_exception, response_code, response_message):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(Backup, 'delete_backup', side_effect=input_exception):\n                with patch.object(backup_restore._logger, 'error') as patch_logger:\n                    resp = await client.delete('/fledge/backup/{}'.format(8))\n                    assert response_code == resp.status\n                    assert response_message == resp.reason\n                if response_code == 500:\n                    assert 1 == patch_logger.call_count\n\n    async def test_delete_backup_bad_data(self, client):\n        resp = await client.delete('/fledge/backup/{}'.format('BLA'))\n        assert 400 == resp.status\n        assert \"Invalid backup id\" == resp.reason\n\n    async def test_get_backup_status(self, client):\n        resp = await client.get('/fledge/backup/status')\n        assert 200 == resp.status\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {'backupStatus': [{'index': 1, 'name': 'RUNNING'},\n              
                   {'index': 2, 'name': 'COMPLETED'},\n                                 {'index': 3, 'name': 'CANCELED'},\n                                 {'index': 4, 'name': 'INTERRUPTED'},\n                                 {'index': 5, 'name': 'FAILED'},\n                                 {'index': 6, 'name': 'RESTORED'}]} == json_response\n\n    @pytest.mark.parametrize(\"input_exception, response_code, response_message\", [\n        (ValueError, 400, \"Invalid backup id\"),\n        (exceptions.DoesNotExist, 404, \"Backup id 8 does not exist\"),\n        (Exception(\"Internal Server Error\"), 500, \"Internal Server Error\"),\n        (FileNotFoundError(\"fledge_backup_2021_10_04_11_12_11.db backup file does not exist in \"\n                           \"/usr/local/fledge/data/backup directory\"), 404,\n         \"fledge_backup_2021_10_04_11_12_11.db backup file does not exist in /usr/local/fledge/data/backup directory\")\n    ])\n    async def test_get_backup_download_exceptions(self, client, input_exception, response_code, response_message):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(Backup, 'get_backup_details', side_effect=input_exception):\n                with patch('os.path.isfile', return_value=False):\n                    with patch.object(backup_restore._logger, 'error') as patch_logger:\n                        resp = await client.get('/fledge/backup/{}/download'.format(8))\n                        assert response_code == resp.status\n                        assert response_message == resp.reason\n                        result = await resp.text()\n                        json_response = json.loads(result)\n                        assert {\"message\": response_message} == json_response\n                    if response_code == 500:\n                        assert 1 == patch_logger.call_count\n\n    async def 
test_get_backup_download(self, client):\n        # FIXME: py3.9 fails to recognise this in default installed mimetypes known-file\n        import mimetypes\n        mimetypes.add_type('text/plain', '.tar.gz')\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'id': 1, 'file_name': '/usr/local/fledge/data/backup/fledge.db', 'ts': '2018-02-15 15:18:41',\n                    'status': '2', 'type': '1'}\n        _rv = await mock_coro(response)\n        with patch(\"aiohttp.web.FileResponse\", return_value=web.FileResponse(path=os.path.realpath(__file__))) as file_res:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(Backup, 'get_backup_details', return_value=_rv) as patch_backup_detail:\n                    with patch('os.path.isfile', return_value=True):\n                        with patch('tarfile.open'):\n                            resp = await client.get('/fledge/backup/{}/download'.format(1))\n                            assert 200 == resp.status\n                            assert 'OK' == resp.reason\n                patch_backup_detail.assert_called_once_with(1)\n        assert 1 == file_res.call_count\n\n\nclass TestRestore:\n    \"\"\"Unit test the Restore functionality\"\"\"\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop, middlewares=[middleware.optional_auth_middleware])\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    async def test_restore_backup(self, client):\n        async def mock_restore():\n            return \"running\"\n\n        _rv = await mock_restore()\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(Restore, 'restore_backup', return_value=_rv):\n                
resp = await client.put('/fledge/backup/{}/restore'.format(1))\n                assert 200 == resp.status\n                r = await resp.text()\n                assert {'status': 'running'} == json.loads(r)\n\n    @pytest.mark.parametrize(\"backup_id, input_exception, code, message\", [\n        (8, exceptions.DoesNotExist, 404, \"Backup with 8 does not exist\"),\n        (2, Exception(\"Internal Server Error\"), 500, \"Internal Server Error\"),\n        ('blah', ValueError, 400, 'Invalid backup id')\n    ])\n    async def test_restore_backup_exceptions(self, client, backup_id, input_exception, code, message):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(Restore, 'restore_backup', side_effect=input_exception):\n                with patch.object(backup_restore._logger, 'error') as patch_logger:\n                    resp = await client.put('/fledge/backup/{}/restore'.format(backup_id))\n                    assert code == resp.status\n                    assert message == resp.reason\n                if code == 500:\n                    assert 1 == patch_logger.call_count\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_browser_assets.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\nimport json\nfrom unittest.mock import MagicMock, patch\n\nfrom aiohttp import web\nfrom aiohttp.web_urldispatcher import PlainResource, DynamicResource\nimport pytest\n\nfrom fledge.services.core.api import browser\nfrom fledge.services.core import connect\nfrom fledge.common.storage_client.storage_client import ReadingsStorageClientAsync\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nURLS = ['fledge/asset',\n        '/fledge/asset/fogbench%2fhumidity',\n        '/fledge/asset/fogbench%2fhumidity/temperature',\n        '/fledge/asset/fogbench%2fhumidity/temperature/series']\n\nPAYLOADS = ['{\"aggregate\": {\"column\": \"*\", \"alias\": \"count\", \"operation\": \"count\"}, \"group\": \"asset_code\"}',\n            '{\"return\": [\"reading\", {\"column\": \"user_ts\", \"alias\": \"timestamp\", \"timezone\": \"utc\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"}, \"limit\": 20, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}',\n            '{\"return\": [{\"column\": \"user_ts\", \"alias\": \"timestamp\", \"timezone\": \"utc\"}, {\"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": \"temperature\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"}, \"limit\": 20, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}',\n            '{\"aggregate\": [{\"operation\": \"min\", \"alias\": \"min\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}}, {\"operation\": \"max\", \"alias\": \"max\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}}, {\"operation\": \"avg\", \"alias\": \"average\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}}], 
\"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"}, \"group\": {\"format\": \"YYYY-MM-DD HH24:MI:SS\", \"column\": \"user_ts\", \"alias\": \"timestamp\"}, \"limit\": 20, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'\n            ]\nRESULTS = [{'rows': [{'count': 10, 'asset_code': 'TI sensorTag/luxometer'}], 'count': 1},\n           {'rows': [{'reading': {'temperature': 26, 'humidity': 93}, 'timestamp': '2018-02-16 15:08:51.026'}], 'count': 1},\n           {'rows': [{'temperature': 26, 'timestamp': '2018-02-16 15:08:51.026'}], 'count': 1},\n           {'rows': [{'average': '26', 'timestamp': '2018-02-16 15:08:51', 'max': '26', 'min': '26'}], 'count': 1}\n           ]\n\nFILTERING_IMAGE_URLS = [\n    'fledge/asset/testcard',\n    'fledge/asset/testcard/summary',\n    'fledge/asset/testcard/testcard',\n    'fledge/asset/testcard/testcard/summary',\n    'fledge/asset/testcard/testcard/series'\n    ]\n\nFILTERING_IMAGE_PAYLOADS = [\n    '{\"return\": [\"reading\", {\"column\": \"user_ts\", \"alias\": \"timestamp\", \"timezone\": \"utc\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"testcard\"}, \"limit\": 20, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}',\n    '{\"return\": [\"reading\"], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"testcard\"}, \"limit\": 20, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}',\n    '{\"return\": [{\"column\": \"user_ts\", \"alias\": \"timestamp\", \"timezone\": \"utc\"}, {\"json\": {\"column\": \"reading\", \"properties\": \"testcard\"}, \"alias\": \"testcard\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"testcard\"}, \"limit\": 20, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}',\n    '{\"aggregate\": [{\"operation\": \"min\", \"json\": {\"column\": \"reading\", \"properties\": \"testcard\"}, \"alias\": \"min\"}, {\"operation\": \"max\", 
\"json\": {\"column\": \"reading\", \"properties\": \"testcard\"}, \"alias\": \"max\"}, {\"operation\": \"avg\", \"json\": {\"column\": \"reading\", \"properties\": \"testcard\"}, \"alias\": \"average\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"testcard\"}}',\n    '{\"aggregate\": [{\"operation\": \"min\", \"json\": {\"column\": \"reading\", \"properties\": \"testcard\"}, \"alias\": \"min\"}, {\"operation\": \"max\", \"json\": {\"column\": \"reading\", \"properties\": \"testcard\"}, \"alias\": \"max\"}, {\"operation\": \"avg\", \"json\": {\"column\": \"reading\", \"properties\": \"testcard\"}, \"alias\": \"average\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"testcard\"}, \"limit\": 20, \"group\": {\"column\": \"user_ts\", \"alias\": \"timestamp\", \"format\": \"YYYY-MM-DD HH24:MI:SS\"}, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'\n]\nFILTERING_IMAGE_RESULTS = [\n    {'count': 1, 'rows': [{'reading': {'testcard': '__DPIMAGE:256,256,8_AA'}}]},\n    {'count': 1, 'rows': [{'min': '__DPIMAGE:256,256,8_A', 'max': '__DPIMAGE:256,256,8_AA', 'average': 0.0}]},\n    {'count': 1, 'rows': [{'timestamp': '2022-02-11 16:08:59.617317', 'testcard': '__DPIMAGE:256,256,8_AA'}]},\n    {'count': 1, 'rows': [{'min': '__DPIMAGE:256,256,8_AA', 'max': '__DPIMAGE:256,256,8_AA', 'average': 0.0}]},\n    {'count': 1, 'rows': [{'min': '__DPIMAGE:256,256,8_AA', 'max': '__DPIMAGE:256,256,8_AA', 'average': 0.0}]}\n]\n\nFIXTURE_1 = [(url, payload, result) for url, payload, result in zip(URLS, PAYLOADS, RESULTS)]\nFIXTURE_2 = [(url, 400, payload) for url, payload in zip(URLS, PAYLOADS)]\nFIXTURE_3 = [(url, payload, result) for url, payload, result in\n             zip(FILTERING_IMAGE_URLS, FILTERING_IMAGE_PAYLOADS, FILTERING_IMAGE_RESULTS)]\n\n\nasync def mock_coro(*args, **kwargs):\n    if len(args) > 0:\n        return args[0]\n    else:\n        return \"\"\n\n\nclass TestBrowserAssets:\n    \"\"\"Browser 
Assets\"\"\"\n\n    @pytest.fixture\n    async def app(self):\n        app = web.Application()\n        browser.setup(app)\n        return app\n\n    @pytest.fixture\n    def client(self, app, loop, test_client):\n        return loop.run_until_complete(test_client(app))\n\n    def test_routes_count(self, app):\n        assert 14 == len(app.router.resources())\n\n    def test_routes_info(self, app):\n        for index, route in enumerate(app.router.routes()):\n            res_info = route.resource.get_info()\n            if index == 0:\n                assert \"GET\" == route.method\n                assert type(route.resource) is PlainResource\n                assert \"/fledge/asset\" == res_info[\"path\"]\n                assert str(route.handler).startswith(\"<function asset_counts\")\n            elif index == 1:\n                assert \"GET\" == route.method\n                assert type(route.resource) is PlainResource\n                assert \"/fledge/asset/timespan\" == res_info[\"path\"]\n                assert str(route.handler).startswith(\"<function asset_timespan\")\n            elif index == 2:\n                assert \"GET\" == route.method\n                assert type(route.resource) is DynamicResource\n                assert \"/fledge/asset/{asset_code}\" == res_info[\"formatter\"]\n                assert str(route.handler).startswith(\"<function asset\")\n            elif index == 3:\n                assert \"GET\" == route.method\n                assert type(route.resource) is DynamicResource\n                assert \"/fledge/asset/{asset_code}/latest\" == res_info[\"formatter\"]\n                assert str(route.handler).startswith(\"<function asset_latest\")\n            elif index == 4:\n                assert \"GET\" == route.method\n                assert type(route.resource) is DynamicResource\n                assert \"/fledge/asset/{asset_code}/summary\" == res_info[\"formatter\"]\n                assert 
str(route.handler).startswith(\"<function asset_all_readings_summary\")\n            elif index == 5:\n                assert \"GET\" == route.method\n                assert type(route.resource) is DynamicResource\n                assert \"/fledge/asset/{asset_code}/timespan\" == res_info[\"formatter\"]\n                assert str(route.handler).startswith(\"<function asset_reading_timespan\")\n            elif index == 6:\n                assert \"GET\" == route.method\n                assert type(route.resource) is DynamicResource\n                assert \"/fledge/asset/{asset_code}/{reading}\" == res_info[\"formatter\"]\n                assert str(route.handler).startswith(\"<function asset_reading\")\n            elif index == 7:\n                assert \"GET\" == route.method\n                assert type(route.resource) is DynamicResource\n                assert \"/fledge/asset/{asset_code}/{reading}/summary\" == res_info[\"formatter\"]\n                assert str(route.handler).startswith(\"<function asset_summary\")\n            elif index == 8:\n                assert \"GET\" == route.method\n                assert type(route.resource) is DynamicResource\n                assert \"/fledge/asset/{asset_code}/{reading}/series\" == res_info[\"formatter\"]\n                assert str(route.handler).startswith(\"<function asset_averages\")\n            elif index == 9:\n                assert \"GET\" == route.method\n                assert type(route.resource) is DynamicResource\n                assert \"/fledge/asset/{asset_code}/bucket/{bucket_size}\" == res_info[\"formatter\"]\n                assert str(route.handler).startswith(\"<function asset_datapoints_with_bucket_size\")\n            elif index == 10:\n                assert \"GET\" == route.method\n                assert type(route.resource) is DynamicResource\n                assert \"/fledge/asset/{asset_code}/{reading}/bucket/{bucket_size}\" == res_info[\"formatter\"]\n                assert 
str(route.handler).startswith(\"<function asset_readings_with_bucket_size\")\n\n    @pytest.mark.parametrize(\"request_url, payload, result\", FIXTURE_1)\n    async def test_end_points(self, client, request_url, payload, result):\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        _rv = await mock_coro(result)\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', return_value=_rv) as query_patch:\n                resp = await client.get(request_url)\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                if str(request_url).endswith(\"summary\"):\n                    assert {'temperature': result['rows'][0]} == json_response\n                elif str(request_url) == 'fledge/asset':\n                    result['rows'][0]['assetCode'] = result['rows'][0].pop('asset_code')\n                    assert result['rows'] == json_response\n                else:\n                    assert result['rows'] == json_response\n            args, kwargs = query_patch.call_args\n            assert json.loads(payload) == json.loads(args[0])\n            query_patch.assert_called_once_with(args[0])\n\n    @pytest.mark.parametrize(\"request_url, response_code, payload\", FIXTURE_2)\n    async def test_bad_request(self, client, request_url, response_code, payload):\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync) \n        result = {'message': 'ERROR: something went wrong', 'retryable': False, 'entryPoint': 'retrieve'}\n        _rv = await mock_coro(result)\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', return_value=_rv) as query_patch:\n                resp = await client.get(request_url)\n        
        assert response_code == resp.status\n                assert result['message'] == resp.reason\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert {\"message\": result['message']} == json_response\n            args, kwargs = query_patch.call_args\n            assert json.loads(payload) == json.loads(args[0])\n            query_patch.assert_called_once_with(args[0])\n\n    @pytest.mark.parametrize(\"request_url\", URLS)\n    async def test_http_exception(self, client, request_url):\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        result = {}\n        value = await mock_coro(result)\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', return_value=value):\n                resp = await client.get(request_url)\n                assert 500 == resp.status\n                assert 'Internal Server Error' == resp.reason\n\n    @pytest.mark.parametrize(\"status_code, message, storage_result, payload\", [\n        (400, \"ERROR: something went wrong\", {'message': 'ERROR: something went wrong', 'retryable': False,\n                                             'entryPoint': 'retrieve'},\n         '{\"where\": {\"value\": \"fogbench/humidity\", \"column\": \"asset_code\", \"condition\": \"=\"}, \"return\": [\"reading\"], '\n         '\"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}, \"limit\": 1}'),\n        (404, \"fogbench/humidity asset code not found\", {\"rows\": [], \"count\": 0},\n         '{\"where\": {\"value\": \"fogbench/humidity\", \"column\": \"asset_code\", \"condition\": \"=\"}, \"return\": [\"reading\"], '\n         '\"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}, \"limit\": 1}'),\n        (404, \"temperature reading key is not found\", {\"count\": 1, \"rows\": [{\"reading\": {\"temp\": 286.8, \"visibility\": 10000,\n    
                                                                             \"pressure\": 1000, \"humidity\": 93}}]},\n         '{\"where\": {\"value\": \"fogbench/humidity\", \"column\": \"asset_code\", \"condition\": \"=\"}, \"return\": [\"reading\"], '\n         '\"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}, \"limit\": 1}')\n    ])\n    async def test_bad_summary(self, client, status_code, message, storage_result, payload):\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        _rv = await mock_coro(storage_result)\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', return_value=_rv) \\\n                    as query_patch:\n                resp = await client.get('/fledge/asset/fogbench%2fhumidity/temperature/summary')\n                assert status_code == resp.status\n                assert message == resp.reason\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert {\"message\": message} == json_response\n            args, kwargs = query_patch.call_args\n            assert json.loads(payload) == json.loads(args[0])\n            query_patch.assert_called_once_with(args[0])\n\n    async def test_good_summary(self, client):\n        result1 = {\"count\": 1, \"rows\": [{\"reading\": {\"temperature\": 286.8, \"visibility\": 10000, \"pressure\": 1000,\n                                                     \"humidity\": 93}}]}\n        result2 = {'rows': [{'max': '9', 'min': '9', 'average': '9'}], 'count': 1}\n        payload = '{\"aggregate\": [{\"operation\": \"min\", \"alias\": \"min\", \"json\": {\"properties\": \"temperature\", \"column\": ' \\\n                  '\"reading\"}}, {\"operation\": \"max\", \"alias\": \"max\", \"json\": {\"properties\": \"temperature\", \"column\": ' \\\n                  '\"reading\"}}, {\"operation\": \"avg\", 
\"alias\": \"average\", \"json\": {\"properties\": \"temperature\", ' \\\n                  '\"column\": \"reading\"}}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", ' \\\n                  '\"value\": \"fogbench/humidity\"}}'\n        asset_payload = '{\"return\": [\"reading\"], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", ' \\\n            '\"value\": \"fogbench/humidity\"}, \"limit\": 1, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        _se1 = await mock_coro(result1)\n        _se2 = await mock_coro(result2)\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', side_effect=[_se1, _se2]) as query_patch:\n                resp = await client.get('/fledge/asset/fogbench%2fhumidity/temperature/summary')\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert {'temperature': result2['rows'][0]} == json_response\n            args, kwargs = query_patch.call_args\n            assert json.loads(payload) == json.loads(args[0])\n            assert 2 == query_patch.call_count\n            args0, kwargs0 = query_patch.call_args_list[0]\n            args1, kwargs1 = query_patch.call_args_list[1]\n            assert json.loads(asset_payload) == json.loads(args0[0])\n            assert json.loads(payload) == json.loads(args1[0])\n\n    @pytest.mark.parametrize(\"group_name, payload, result\", [\n        ('seconds', '{\"aggregate\": [{\"alias\": \"min\", \"operation\": \"min\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}}, {\"alias\": \"max\", \"operation\": \"max\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}}, {\"alias\": \"average\", \"operation\": \"avg\", \"json\": {\"properties\": 
\"temperature\", \"column\": \"reading\"}}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"}, \"group\": {\"alias\": \"timestamp\", \"format\": \"YYYY-MM-DD HH24:MI:SS\", \"column\": \"user_ts\"}, \"limit\": 20, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}',\n         {'count': 1, 'rows': [{'min': '9', 'average': '9', 'max': '9', 'timestamp': '2018-02-19 17:35:25'}]}),\n        ('minutes', '{\"aggregate\": [{\"alias\": \"min\", \"operation\": \"min\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}}, {\"alias\": \"max\", \"operation\": \"max\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}}, {\"alias\": \"average\", \"operation\": \"avg\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"}, \"group\": {\"alias\": \"timestamp\", \"format\": \"YYYY-MM-DD HH24:MI\", \"column\": \"user_ts\"}, \"limit\": 20, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}',\n         {'count': 1, 'rows': [{'min': '9', 'average': '9', 'max': '9', 'timestamp': '2018-02-19 17:35'}]}),\n        ('hours', '{\"aggregate\": [{\"alias\": \"min\", \"operation\": \"min\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}}, {\"alias\": \"max\", \"operation\": \"max\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}}, {\"alias\": \"average\", \"operation\": \"avg\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"}, \"group\": {\"alias\": \"timestamp\", \"format\": \"YYYY-MM-DD HH24\", \"column\": \"user_ts\"}, \"limit\": 20, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}',\n         {'count': 1, 'rows': [{'min': '9', 'average': '9', 'max': '9', 'timestamp': '2018-02-19 17'}]})\n    ])\n    
async def test_asset_averages_with_valid_group_name(self, client, group_name, payload, result):\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        _rv = await mock_coro(result)\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', return_value=_rv) as query_patch:\n                resp = await client.get('fledge/asset/fogbench%2Fhumidity/temperature/series?group={}'\n                                        .format(group_name))\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert result['rows'] == json_response\n            args, kwargs = query_patch.call_args\n            assert json.loads(payload) == json.loads(args[0])\n            query_patch.assert_called_once_with(args[0])\n\n    @pytest.mark.parametrize(\"request_param, response_message\", [\n        ('?group=BLA', \"BLA is not a valid group\"),\n        ('?group=0', \"0 is not a valid group\"),\n        ('?limit=invalid', \"Limit must be a positive integer\"),\n        ('?limit=-1', \"Limit must be a positive integer\"),\n        ('?skip=invalid', \"Skip/Offset must be a positive integer\"),\n        ('?skip=-1', \"Skip/Offset must be a positive integer\"),\n        ('?seconds=invalid', \"Time must be a positive integer\"),\n        ('?seconds=-1', \"Time must be a positive integer\"),\n        ('?minutes=invalid', \"Time must be a positive integer\"),\n        ('?minutes=-1', \"Time must be a positive integer\"),\n        ('?hours=invalid', \"Time must be a positive integer\"),\n        ('?hours=-1', \"Time must be a positive integer\")\n    ])\n    async def test_request_params_with_bad_data(self, client, request_param, response_message):\n        resp = await client.get('fledge/asset/fogbench%2Fhumidity/temperature/series{}'.format(request_param))\n        
assert 400 == resp.status\n        assert response_message == resp.reason\n\n    @pytest.mark.parametrize(\"request_params, payload\", [\n        ('?limit=5', '{\"return\": [{\"alias\": \"timestamp\", \"column\": \"user_ts\", \"timezone\": \"utc\"}, {\"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": \"temperature\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"}, \"limit\": 5, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'),\n        ('?skip=1', '{\"return\": [{\"alias\": \"timestamp\", \"column\": \"user_ts\", \"timezone\": \"utc\"}, {\"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": \"temperature\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"}, \"limit\": 20, \"skip\": 1, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'),\n        ('?limit=5&skip=1', '{\"return\": [{\"alias\": \"timestamp\", \"column\": \"user_ts\", \"timezone\": \"utc\"}, {\"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": \"temperature\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"}, \"limit\": 5, \"skip\": 1, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'),\n        ('?seconds=3600', '{\"return\": [{\"alias\": \"timestamp\", \"column\": \"user_ts\", \"timezone\": \"utc\"}, {\"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": \"temperature\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\", \"and\": {\"column\": \"user_ts\", \"condition\": \"newer\", \"value\": 3600}}, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'),\n        ('?minutes=20', '{\"return\": [{\"alias\": \"timestamp\", \"column\": \"user_ts\", \"timezone\": \"utc\"}, {\"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": 
\"temperature\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\", \"and\": {\"column\": \"user_ts\", \"condition\": \"newer\", \"value\": 1200}}, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'),\n        ('?hours=3', '{\"return\": [{\"alias\": \"timestamp\", \"column\": \"user_ts\", \"timezone\": \"utc\"}, {\"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": \"temperature\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\", \"and\": {\"column\": \"user_ts\", \"condition\": \"newer\", \"value\": 10800}}, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'),\n        ('?seconds=60&minutes=10', '{\"return\": [{\"alias\": \"timestamp\", \"column\": \"user_ts\", \"timezone\": \"utc\"}, {\"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": \"temperature\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\", \"and\": {\"column\": \"user_ts\", \"condition\": \"newer\", \"value\": 60}}, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'),\n        ('?seconds=600&hours=1', '{\"return\": [{\"alias\": \"timestamp\", \"column\": \"user_ts\", \"timezone\": \"utc\"}, {\"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": \"temperature\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\", \"and\": {\"column\": \"user_ts\", \"condition\": \"newer\", \"value\": 600}}, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'),\n        ('?minutes=20&hours=1', '{\"return\": [{\"alias\": \"timestamp\", \"column\": \"user_ts\", \"timezone\": \"utc\"}, {\"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": \"temperature\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\", \"and\": {\"column\": 
\"user_ts\", \"condition\": \"newer\", \"value\": 1200}}, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'),\n        ('?seconds=10&minutes=10&hours=1', '{\"return\": [{\"alias\": \"timestamp\", \"column\": \"user_ts\", \"timezone\": \"utc\"}, {\"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": \"temperature\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\", \"and\": {\"column\": \"user_ts\", \"condition\": \"newer\", \"value\": 10}}, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}')\n    ])\n    async def test_limit_skip_time_units_payload(self, client, request_params, payload):\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        _rv = await mock_coro({'count': 0, 'rows': []})\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', return_value=_rv) \\\n                    as query_patch:\n                resp = await client.get('fledge/asset/fogbench%2Fhumidity/temperature{}'.format(request_params))\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert [] == json_response\n            args, kwargs = query_patch.call_args\n            assert json.loads(payload) == json.loads(args[0])\n            query_patch.assert_called_once_with(args[0])\n\n# TODO This test is looking for the query that caused the error in FOGL-2365 it should be replaced\n# by the right test\n#\n#    async def test_asset_all_readings_summary_when_no_asset_code_found(self, client):\n#        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n#        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n#            with patch.object(readings_storage_client_mock, 'query', 
return_value=mock_coro({'count': 0, 'rows': []})) as query_patch:\n#                resp = await client.get('fledge/asset/fogbench_humidity/summary')\n#                assert 404 == resp.status\n#                assert 'fogbench_humidity asset_code not found' == resp.reason\n#            query_patch.assert_called_once_with('{\"return\": [\"reading\"], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench_humidity\"}}')\n\n    async def test_asset_all_readings_summary(self, client):\n        async def q_result(*args):\n            if payload1 == args[0]:\n                return {'rows': [{'reading': {'humidity': 20}}], 'count': 1}\n            if payload2 == args[0]:\n                return {'count': 1, 'rows': [{'min': 13.0, 'max': 83.0, 'average': 33.5}]}\n\n        payload1 = {\"return\": [\"reading\"],\n                    \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench_humidity\"}}\n        payload2 = {\n            \"aggregate\": [{\"operation\": \"min\", \"json\": {\"properties\": \"humidity\", \"column\": \"reading\"}, \"alias\": \"min\"},\n                          {\"operation\": \"max\", \"json\": {\"properties\": \"humidity\", \"column\": \"reading\"}, \"alias\": \"max\"},\n                          {\"operation\": \"avg\", \"json\": {\"properties\": \"humidity\", \"column\": \"reading\"},\n                           \"alias\": \"average\"}],\n            \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench_humidity\"}, \"limit\": 20}\n\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        _se1 = await q_result(payload1)\n        _se2 = await q_result(payload2)\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', side_effect=[_se1, _se2]) as patch_query:\n                resp = await 
client.get('fledge/asset/fogbench_humidity/summary')\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert [{'humidity': {'average': 33.5, 'max': 83.0, 'min': 13.0}}] == json_response\n            assert 2 == patch_query.call_count\n            args0, kwargs0 = patch_query.call_args_list[0]\n            args1, kwargs1 = patch_query.call_args_list[1]\n            # assert '{\"return\": [\"reading\"], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench_humidity\"}}' in args0\n            # FIXME: ordering issue and add tests for datetimeunits request param\n            # assert '{\"aggregate\": [{\"operation\": \"min\", \"json\": {\"column\": \"reading\", \"properties\": \"humidity\"}, \"alias\": \"min\"}, {\"operation\": \"max\", \"json\": {\"column\": \"reading\", \"properties\": \"humidity\"}, \"alias\": \"max\"}, {\"operation\": \"avg\", \"json\": {\"column\": \"reading\", \"properties\": \"humidity\"}, \"alias\": \"average\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench_humidity\"}, \"limit\": 20}' in args1\n\n    @pytest.mark.skip(reason='TODO: FOGL-3541 rewrite tests')\n    @pytest.mark.parametrize(\"asset_code\", [\n        \"fogbench%2fhumidity\",\n        \"fogbench%2fhumidity, fogbench%2ftemperature\"\n    ])\n    async def test_asset_datapoints_with_bucket_size(self, asset_code, client):\n        payload2 = {\"aggregate\": {\"operation\": \"all\"}, \"where\": {\"and\": {\"column\": \"user_ts\", \"value\": \"1572851627.341446\", \"condition\": \">=\"}, \"column\": \"asset_code\", \"value\": [\"fogbench/humidity\"], \"condition\": \"in\"}, \"timebucket\": {\"timestamp\": \"user_ts\", \"size\": \"60\", \"alias\": \"timestamp\", \"format\": \"YYYY-MM-DD HH24:MI:SS\"}, \"limit\": 1}\n        result2 = {'rows': [{\"min\": 15082, \"average\": 15083, \"timestamp\": \"2019-10-11 06:22:30\", 
\"max\": 15086}], 'count': 1}\n        payload1 = '{\"sort\": {\"direction\": \"desc\", \"column\": \"user_ts\"}, \"where\": {\"column\": \"asset_code\", \"value\": [\"fogbench/humidity\"], \"condition\": \"in\"}, \"limit\": 1}'\n        result1 = {'rows': [{'reading': {'temperature': 70}}], 'count': 1}\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        res = [mock_coro(result1), mock_coro(result2)]\n        patch_count = 2\n        if len(asset_code.split(\",\")) >= 2:\n            res = [mock_coro(result1), mock_coro(result1), mock_coro(result2)]\n            patch_count = 3\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', side_effect=res) as query_patch:\n                resp = await client.get('fledge/asset/{}/bucket/60'.format(asset_code))\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert result2['rows'] == json_response\n            assert patch_count == query_patch.call_count\n            args, kwargs = query_patch.call_args_list[0]\n            assert json.loads(payload1) == json.loads(args[0])\n            # TODO: After datetime patch assert full payload\n            # assert payload == json.loads(args[0])\n\n    @pytest.mark.skip(reason='TODO: FOGL-3541 rewrite tests')\n    async def test_asset_readings_with_bucket_size(self, client):\n        payload2 = {\"aggregate\": [{\"operation\": \"min\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": \"min\"}, {\"operation\": \"max\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": \"max\"}, {\"operation\": \"avg\", \"json\": {\"properties\": \"temperature\", \"column\": \"reading\"}, \"alias\": \"average\"}], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": 
\"fogbench/humidity\", \"and\": {\"column\": \"user_ts\", \"condition\": \">=\", \"value\": \"1570732140.0\"}}, \"timebucket\": {\"timestamp\": \"user_ts\", \"size\": \"60\", \"format\": \"YYYY-MM-DD HH24:MI:SS\", \"alias\": \"timestamp\"}, \"limit\": 1}\n        result2 = {'rows': [{\"min\": 15082, \"average\": 15083, \"timestamp\": \"2019-10-11 06:22:30\", \"max\": 15086}], 'count': 1}\n        payload1 = '{\"return\": [\"reading\"], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"}, \"limit\": 1, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'\n        result1 = {'rows': [{'reading': {'temperature': 70}}], 'count': 1}\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        # FIXME: datetime.now() patch\n        # import datetime\n        # target = datetime.datetime(2019, 10, 11)\n        # with patch.object(datetime, 'datetime.now', MagicMock(wraps=datetime.datetime)) as dt_patch:\n        #     dt_patch.now.return_value = target\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', side_effect=[mock_coro(result1),\n                                                                                  mock_coro(result2)]) as query_patch:\n                resp = await client.get('fledge/asset/fogbench%2fhumidity/temperature/bucket/60')\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert result2['rows'] == json_response\n            assert 2 == query_patch.call_count\n            args, kwargs = query_patch.call_args_list[0]\n            assert json.loads(payload1) == json.loads(args[0])\n            args, kwargs = query_patch.call_args_list[1]\n            assert payload2.keys() == json.loads(args[0]).keys()\n            # TODO: After datetime patch assert full 
payload\n            # assert payload == json.loads(args[0])\n\n    @pytest.mark.skip(reason='TODO: FOGL-3541 rewrite tests')\n    @pytest.mark.parametrize(\"storage_result, message\", [\n        ({'rows': [], 'count': 0}, \"'fogbench/humidity asset code not found'\"),\n        ({'count': 1, 'rows': [{'reading': {'temp': 70}}]}, \"'temperature reading key is not found for fogbench/humidity asset code'\")\n    ])\n    async def test_bad_asset_readings_with_bucket_size(self, client, storage_result, message):\n        payload = '{\"return\": [\"reading\"], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"}, \"limit\": 1, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', return_value=mock_coro(storage_result)) as query_patch:\n                resp = await client.get('fledge/asset/fogbench%2fhumidity/temperature/bucket/60')\n                assert 404 == resp.status\n                assert message == resp.reason\n        assert 1 == query_patch.call_count\n        args, kwargs = query_patch.call_args_list[0]\n        assert json.loads(payload) == json.loads(args[0])\n\n    @pytest.mark.skip(reason='TODO: FOGL-3541 rewrite tests')\n    @pytest.mark.parametrize(\"url, code, storage_result, message, request_params, with_readings\", [\n        ('fledge/asset/fogbench%2ftemp/bucket/10', 404, {'rows': [], 'count': 0},\n         \"'fogbench/temp asset code not found'\", \"\", False),\n        ('fledge/asset/fogbench%2ftemp,sinusoid/bucket/10', 404, {'rows': [], 'count': 0},\n         \"'fogbench/temp asset code not found'\", \"\", False),\n        ('fledge/asset/fogbench%2ftemp/bucket/10', 400,\n         {'rows': [{'reading': {'temp': 13.45}}], 'count': 1}, \"length must be 
a positive integer\",\n         \"?length=-10\", False),\n        ('fledge/asset/fogbench%2ftemp/bucket/10', 400,\n         {'rows': [{'reading': {'temp': 13.45}}], 'count': 1}, \"Invalid value for start. Error: \",\n         \"?start=1491613677888\", False),\n        ('fledge/asset/fogbench%2ftemp/bucket/10', 400,\n         {'rows': [{'reading': {'temp': 13.45}}], 'count': 1},\n         \"Invalid value for start. Error: \",\n         \"?start=567199223456346457\", False),\n        ('fledge/asset/fogbench%2ftemp/temperature/bucket/60', 404, {'rows': [], 'count': 0},\n         \"'fogbench/temp asset code not found'\", \"\", True),\n        ('fledge/asset/fogbench%2ftemp/temperature/bucket/60', 400,\n         {'rows': [{'reading': {'temperature': 13.45}}], 'count': 1}, \"length must be a positive integer\",\n         \"?length=-10\", True),\n        ('fledge/asset/fogbench%2ftemp/temperature/bucket/60', 400,\n         {'rows': [{'reading': {'temperature': 13.45}}], 'count': 1},\n         \"Invalid value for start. 
Error: \", \"?start=149199235346457788234\", True)\n    ])\n    async def test_bad_asset_bucket_size_and_optional_params(self, client, url, code, storage_result, message,\n                                                             request_params, with_readings):\n        if request_params:\n            url += request_params\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        if with_readings:\n            payload = '{\"return\": [\"reading\"], \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/temp\"}, \"limit\": 1, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'\n        else:\n            payload = '{\"where\": {\"column\": \"asset_code\", \"condition\": \"in\", \"value\": [\"fogbench/temp\"]}, \"limit\": 1, \"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', side_effect=[mock_coro(storage_result), mock_coro(storage_result)]) as query_patch:\n                resp = await client.get(url)\n                assert code == resp.status\n                assert message in resp.reason\n        assert 1 == query_patch.call_count\n        args, kwargs = query_patch.call_args\n        query_patch.assert_called_once_with(payload)\n\n    @pytest.mark.parametrize(\"request_params, payload\", [\n        ('?limit=5&skip=1&order=asc',\n         '{\"return\": [\"reading\", {\"column\": \"user_ts\", \"alias\": \"timestamp\", \"timezone\": \"utc\"}],'\n         ' \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"},'\n         ' \"skip\": 1, \"limit\": 5, '\n         '\"sort\": {\"column\": \"user_ts\", \"direction\": \"asc\"}}'\n         ),\n        ('?limit=5&skip=1&order=desc',\n         '{\"return\": [\"reading\", {\"column\": \"user_ts\", \"alias\": \"timestamp\", 
\"timezone\": \"utc\"}],'\n         ' \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"},'\n         ' \"skip\": 1,\"limit\": 5, '\n         '\"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'\n         ),\n        ('?limit=5&skip=1',\n         '{\"return\": [\"reading\", {\"column\": \"user_ts\", \"alias\": \"timestamp\", \"timezone\": \"utc\"}],'\n         ' \"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"fogbench/humidity\"},'\n         ' \"skip\": 1,\"limit\": 5, '\n         '\"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}'\n         )\n    ])\n    async def test_order_payload_good(self, client, request_params, payload):\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        _rv = await mock_coro({'count': 0, 'rows': []})\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', return_value=_rv) \\\n                    as query_patch:\n                resp = await client.get('fledge/asset/fogbench%2Fhumidity{}'.format(request_params))\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert [] == json_response\n            args, kwargs = query_patch.call_args\n            assert json.loads(payload) == json.loads(args[0])\n            query_patch.assert_called_once_with(args[0])\n\n    @pytest.mark.parametrize(\"request_params, response_message\", [\n        ('?limit=5&skip=1&order=blah', 'order must be asc or desc')\n    ])\n    async def test_order_payload_bad(self, client, request_params, response_message):\n        resp = await client.get('fledge/asset/fogbench%2Fhumidity{}'.format(request_params))\n        assert 400 == resp.status\n        assert response_message == resp.reason\n        r = await resp.text()\n        
json_response = json.loads(r)\n        assert {\"message\": response_message} == json_response\n\n    @pytest.mark.parametrize(\"request_url, payload, result\", FIXTURE_3)\n    async def test_filtering_image_data(self, client, request_url, payload, result):\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        result_for_reading = {'rows': [{'reading': {'testcard': '__DPIMAGE:256,256,8_A'}}], 'count': 1}\n        if request_url.endswith('summary'):\n            _se1 = await mock_coro(result_for_reading)\n            _se2 = await mock_coro(result)\n            with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n                with patch.object(readings_storage_client_mock, 'query', side_effect=[_se1, _se2]):\n                    resp = await client.get(request_url)\n                    assert 200 == resp.status\n                    r = await resp.text()\n                    json_response = json.loads(r)\n                    expected_result = {'testcard': result['rows'][0]} if isinstance(json_response, dict) else \\\n                        [{'testcard': result['rows'][0]}]\n                    assert expected_result == json_response\n        else:\n            _rv = await mock_coro(result)\n            with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n                with patch.object(readings_storage_client_mock, 'query', return_value=_rv) as query_patch:\n                    resp = await client.get(request_url)\n                    assert 200 == resp.status\n                    r = await resp.text()\n                    json_response = json.loads(r)\n                    assert result['rows'] == json_response\n                args, kwargs = query_patch.call_args\n                assert json.loads(payload) == json.loads(args[0])\n                query_patch.assert_called_once_with(args[0])\n\n    @pytest.mark.parametrize(\"request_url, payload, 
is_image_excluded\", [\n        ('fledge/asset/testcard?images=include',\n         '{\"return\": [\"reading\", {\"column\": \"user_ts\", \"alias\": \"timestamp\", \"timezone\": \"utc\"}], '\n         '\"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"testcard\"}, \"limit\": 20, '\n         '\"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}',\n         True),\n        ('fledge/asset/testcard?images=exclude',\n         '{\"return\": [\"reading\", {\"column\": \"user_ts\", \"alias\": \"timestamp\", \"timezone\": \"utc\"}], '\n         '\"where\": {\"column\": \"asset_code\", \"condition\": \"=\", \"value\": \"testcard\"}, \"limit\": 20, '\n         '\"sort\": {\"column\": \"user_ts\", \"direction\": \"desc\"}}',\n         False)\n    ])\n    async def test_data_with_images_request_param(self, client, request_url, payload, is_image_excluded):\n        storage_result = {'count': 1, 'rows': [{'reading': {'testcard': '__DPIMAGE:256,256,8_AA'}}]}\n        image_placeholder_result = [{'reading': {'testcard': 'Data removed for brevity'}}]\n        readings_storage_client_mock = MagicMock(ReadingsStorageClientAsync)\n        _rv = await mock_coro(storage_result)\n        with patch.object(connect, 'get_readings_async', return_value=readings_storage_client_mock):\n            with patch.object(readings_storage_client_mock, 'query', return_value=_rv) as query_patch:\n                resp = await client.get(request_url)\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                expected_result = storage_result['rows'] if is_image_excluded else image_placeholder_result\n                assert expected_result == json_response\n            args, _ = query_patch.call_args\n            assert json.loads(payload) == json.loads(args[0])\n            query_patch.assert_called_once_with(args[0])\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_certificate_store.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\nimport json\nimport pathlib\n\nfrom unittest.mock import MagicMock, patch\nfrom collections import Counter\nfrom aiohttp import web\nimport pytest\n\nfrom fledge.services.core import connect\nfrom fledge.common.web import middleware\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.services.core import routes\nfrom fledge.services.core.api import certificate_store\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.services.core.user_model import User\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nasync def mock_coro(*args, **kwargs):\n    return None if len(args) == 0 else args[0]\n\n\n@pytest.fixture\ndef certs_path():\n    return pathlib.Path(__file__).parent\n\n\nADMIN_USER_HEADER = {'content-type': 'application/json', 'Authorization': 'admin_user_token'}\nREST_API_CAT_INFO = {'certificateName': {'value': 'fledge'}, 'authCertificateName': {'value': 'ca'}}\n\n\nclass TestCertificateStore:\n    @pytest.fixture\n    def client(self, loop, aiohttp_server, aiohttp_client):\n        app = web.Application(loop=loop, middlewares=[middleware.optional_auth_middleware])\n        # fill the routes table\n        routes.setup(app)\n        server = loop.run_until_complete(aiohttp_server(app))\n        loop.run_until_complete(server.start_server(loop=loop))\n        client = loop.run_until_complete(aiohttp_client(server))\n        return client\n\n    async def test_get_certs(self, client, certs_path):\n        response_content = {'keys': ['fledge.key', 'rsa_private.pem'],\n                            'certs': ['fledge.cert', 'test.cer', 'test.crt', 'test.json', 'fledge.pem']}\n        with patch.object(certificate_store, '_get_certs_dir', side_effect=[certs_path / 'certs',\n          
                                                                  certs_path / 'json', certs_path / 'pem']):\n            with patch('os.walk') as mockwalk:\n                mockwalk.return_value = [(str(certs_path / 'certs'), [], ['fledge.cert', 'test.cer', 'test.crt']),\n                                         (str(certs_path / 'certs/pem'), [], ['fledge.pem']),\n                                         (str(certs_path / 'certs/json'), [], ['test.json']),\n                                         (str(certs_path / 'certs'), [], ['fledge.key', 'rsa_private.pem'])\n                                         ]\n                with patch('os.listdir') as mocked_listdir:\n                    mocked_listdir.return_value = ['test.json', 'fledge.pem']\n                    resp = await client.get('/fledge/certificate')\n                    assert 200 == resp.status\n                    res = await resp.text()\n                    jdict = json.loads(res)\n                    cert = jdict[\"certs\"]\n                    assert 5 == len(cert)\n                    assert Counter(response_content['certs']) == Counter(cert)\n                    key = jdict[\"keys\"]\n                    assert 2 == len(key)\n                    assert Counter(response_content['keys']) == Counter(key)\n                assert 2 == mocked_listdir.call_count\n            mockwalk.assert_called_once_with(certs_path / 'certs')\n\n    @pytest.mark.parametrize(\"files\", [\n        [], ['fledge.txt']\n    ])\n    async def test_get_bad_certs(self, client, certs_path, files):\n        with patch.object(certificate_store, '_get_certs_dir', side_effect=[certs_path / 'certs',\n                                                                            certs_path / 'json', certs_path / 'pem']):\n\n            with patch('os.walk') as mockwalk:\n                mockwalk.return_value = [(str(certs_path / 'certs'), [], files),\n                                         (str(certs_path / 'certs/pem'), [], 
files),\n                                         (str(certs_path / 'certs/json'), [], files),\n                                         (str(certs_path / 'certs'), [], files)\n                                         ]\n                with patch('os.listdir') as mocked_listdir:\n                    mocked_listdir.return_value = []\n                    resp = await client.get('/fledge/certificate')\n                    assert 200 == resp.status\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert 0 == len(json_response['certs'])\n                    assert 0 == len(json_response['keys'])\n                    assert {'certs': [], 'keys': []} == json_response\n                assert 2 == mocked_listdir.call_count\n            mockwalk.assert_called_once_with(certs_path / 'certs')\n\n    async def test_bad_cert_file_upload(self, client, certs_path):\n        files = {'bad_cert': open(str(certs_path / 'certs/fledge.cert'), 'rb'),\n                 'key': open(str(certs_path / 'certs/fledge.key'), 'rb')}\n        resp = await client.post('/fledge/certificate', data=files)\n        assert 400 == resp.status\n        assert 'Cert file is missing' == resp.reason\n\n    async def test_bad_extension_cert_file_upload(self, client, certs_path):\n        cert_valid_extensions = ('.cert', '.cer', '.csr', '.crl', '.crt', '.der', '.json', '.pem', '.p12', '.pfx')\n        files = {'cert': open(str(certs_path / 'certs/fledge.txt'), 'rb'),\n                 'key': open(str(certs_path / 'certs/fledge.key'), 'rb')}\n        resp = await client.post('/fledge/certificate', data=files)\n        assert 400 == resp.status\n        assert 'Accepted file extensions are {} for cert file'.format(cert_valid_extensions) == resp.reason\n\n    @pytest.mark.parametrize(\"overwrite\", ['blah', '2'])\n    async def test_bad_overwrite_file_upload(self, client, certs_path, overwrite):\n        files = {'cert': 
open(str(certs_path / 'certs/fledge.cert'), 'rb'),\n                 'key': open(str(certs_path / 'certs/fledge.key'), 'rb'),\n                 'overwrite': overwrite}\n        resp = await client.post('/fledge/certificate', data=files)\n        assert 400 == resp.status\n        assert 'Accepted value for overwrite is 0 or 1' == resp.reason\n\n    async def test_exception(self, client, certs_path):\n        with pytest.raises(Exception) as excinfo:\n            files = {'cert': open(str(certs_path / 'certs/{}'.format(\"bla.key\")), 'rb')}\n            resp = await client.post('/fledge/certificate', data=files)\n            assert 500 == resp.status\n            assert 'Internal Server Error' == resp.reason\n        assert excinfo.type is FileNotFoundError\n        assert \"No such file or directory\" in str(excinfo)\n\n    @pytest.mark.parametrize(\"cert_name, actual_code, actual_reason\", [\n        ('root.pem', 404, \"Certificate with name root.pem does not exist\"),\n        ('rsa_private.key', 404, \"Certificate with name rsa_private.key does not exist\")\n    ])\n    async def test_bad_delete_cert_with_invalid_filename(self, client, cert_name, actual_code, actual_reason):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        \n        _rv = await mock_coro(REST_API_CAT_INFO)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_cat_all_items:\n                resp = await client.delete('/fledge/certificate/{}'.format(cert_name))\n                assert actual_code == resp.status\n                assert actual_reason == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": actual_reason} == json_response\n            
patch_get_cat_all_items.assert_called_once_with(category_name='rest_api')\n\n    @pytest.mark.parametrize(\"cert_name, actual_code, actual_reason\", [\n        ('root.txt', 400, \"Accepted file extensions are \"\n                          \"('.cert', '.cer', '.csr', '.crl', '.crt', '.der', '.json', '.key', '.pem', '.p12', '.pfx')\")\n    ])\n    async def test_bad_delete_cert(self, client, cert_name, actual_code, actual_reason):\n        resp = await client.delete('/fledge/certificate/{}'.format(cert_name))\n        assert actual_code == resp.status\n        assert actual_reason == resp.reason\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {\"message\": actual_reason} == json_response\n\n    async def test_bad_delete_cert_if_in_use(self, client):\n        cert_name = 'fledge.cert'\n        with patch.object(certificate_store._logger, 'warning') as patch_logger:\n            resp = await client.delete('/fledge/certificate/{}'.format(cert_name))\n            assert 403 == resp.status\n            assert certificate_store.FORBIDDEN_MSG == resp.reason\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert {\"message\": certificate_store.FORBIDDEN_MSG} == json_response\n        patch_logger.assert_called_once_with(certificate_store.FORBIDDEN_MSG)\n\n    async def test_bad_type_delete_cert(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        msg = 'Only cert and key are allowed for the value of type param'\n        _rv = await mock_coro(REST_API_CAT_INFO)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_cat_all_items:\n                resp = await client.delete('/fledge/certificate/server.cert?type=pem')\n                assert 400 == 
resp.status\n                assert msg == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {\"message\": msg} == json_response\n            patch_get_cat_all_items.assert_called_once_with(category_name='rest_api')\n\n    @pytest.mark.parametrize(\"cert_name, param\", [\n        ('fledge.json', '?type=cert'),\n        ('fledge.pem', '?type=cert'),\n        ('test.cer', '?type=cert'),\n        ('test.crt', '?type=cert'),\n        ('rsa_private.pem', '?type=key'),\n    ])\n    async def test_delete_cert_with_type(self, client, cert_name, param):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        \n        cat_info = {'certificateName': {'value': 'foo'}, 'authCertificateName': {'value': 'ca'}}\n        _rv = await mock_coro(cat_info)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_cat_all_items:\n                with patch('os.path.isfile', return_value=True):\n                    with patch('os.remove', return_value=True) as patch_remove:\n                        resp = await client.delete('/fledge/certificate/{}{}'.format(cert_name, param))\n                        assert 200 == resp.status\n                        result = await resp.text()\n                        json_response = json.loads(result)\n                        assert '{} has been deleted successfully'.format(cert_name) == json_response['result']\n                    assert 1 == patch_remove.call_count\n            patch_get_cat_all_items.assert_called_once_with(category_name='rest_api')\n\n    async def test_delete_cert(self, client, certs_path, cert_name='server.cert'):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        \n        
_rv = await mock_coro(REST_API_CAT_INFO)\n        with patch.object(certificate_store, '_get_certs_dir', return_value=str(certs_path / 'certs') + '/'):\n            with patch('os.walk') as mockwalk:\n                mockwalk.return_value = [(str(certs_path / 'certs'), [], [cert_name])]\n                with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                    with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_cat_all_items:\n                        with patch('os.remove', return_value=True) as patch_remove:\n                            resp = await client.delete('/fledge/certificate/{}'.format(cert_name))\n                            assert 200 == resp.status\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            assert '{} has been deleted successfully'.format(cert_name) == json_response['result']\n                        assert 1 == patch_remove.call_count\n                    patch_get_cat_all_items.assert_called_once_with(category_name='rest_api')\n\n    async def test_default_upload(self, client, certs_path):\n        files = {'key': open(str(certs_path / 'certs/fledge.key'), 'rb'),\n                 'cert': open(str(certs_path / 'certs/fledge.cert'), 'rb')}\n        with patch.object(certificate_store._logger, 'warning') as patch_logger:\n            resp = await client.post('/fledge/certificate', data=files)\n            assert 403 == resp.status\n            assert certificate_store.FORBIDDEN_MSG == resp.reason\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert {\"message\": certificate_store.FORBIDDEN_MSG} == json_response\n        patch_logger.assert_called_once_with(certificate_store.FORBIDDEN_MSG)\n\n\nclass TestUploadCertStoreIfAuthenticationIsMandatory:\n    AUTH_HEADER = {'Authorization': 'admin_user_token'}\n\n    
@pytest.fixture\n    def client(self, loop, aiohttp_server, aiohttp_client):\n        app = web.Application(loop=loop, middlewares=[middleware.auth_middleware])\n        # fill the routes table\n        routes.setup(app)\n        server = loop.run_until_complete(aiohttp_server(app))\n        loop.run_until_complete(server.start_server(loop=loop))\n        client = loop.run_until_complete(aiohttp_client(server))\n        return client\n\n    async def auth_token_fixture(self, mocker, is_admin=True):\n        user = {'id': 1, 'uname': 'admin', 'role_id': '1'} if is_admin else {'id': 2, 'uname': 'user', 'role_id': '2'}\n        \n        _rv1 = await mock_coro(user['id'])\n        _rv2 = await mock_coro(None)\n        _rv3 = await mock_coro(user)\n        patch_logger_debug = mocker.patch.object(middleware._logger, 'debug')\n        patch_validate_token = mocker.patch.object(User.Objects, 'validate_token', return_value=_rv1)\n        patch_refresh_token = mocker.patch.object(User.Objects, 'refresh_token_expiry', return_value=_rv2)\n        patch_user_get = mocker.patch.object(User.Objects, 'get', return_value=_rv3)\n        return patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get\n\n    async def test_bad_upload_when_admin_role_is_required(self, client, certs_path, mocker):\n        files = {'key': open(str(certs_path / 'certs/fledge.key'), 'rb'),\n                 'cert': open(str(certs_path / 'certs/fledge.cert'), 'rb')}\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker, is_admin=False)\n        msg = 'admin role permissions required to overwrite the default installed auth/TLS certificates.'\n        with patch.object(certificate_store._logger, 'warning') as patch_logger:\n            resp = await client.post('/fledge/certificate', data=files, headers=self.AUTH_HEADER)\n            assert 403 == resp.status\n            assert msg == resp.reason\n    
        result = await resp.text()\n            json_response = json.loads(result)\n            assert {\"message\": msg} == json_response\n        patch_logger.assert_called_once_with(msg)\n        patch_user_get.assert_called_once_with(uid=2)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/certificate')\n\n    async def test_bad_upload_when_cert_in_use_and_with_non_admin_role(self, client, certs_path, mocker):\n        files = {'cert': open(str(certs_path / 'certs/test.cer'), 'rb')}\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker, is_admin=False)\n        msg = 'Certificate with name test.cer is configured to be used, ' \\\n              'An `admin` role permissions required to add/overwrite.'\n        \n        cat_info = {'certificateName':  {'value': 'test'},  'authCertificateName':  {'value': 'foo'}}\n        _rv = await mock_coro(cat_info)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        \n        with patch.object(certificate_store._logger, 'warning') as patch_logger:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_cat_all_items:\n                    resp = await client.post('/fledge/certificate', data=files, headers=self.AUTH_HEADER)\n                    assert 403 == resp.status\n                    assert msg == resp.reason\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert {\"message\": msg} == json_response\n                
patch_get_cat_all_items.assert_called_once_with(category_name='rest_api')\n        patch_logger.assert_called_once_with(msg)\n        patch_user_get.assert_called_once_with(uid=2)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/certificate')\n\n    async def test_upload_as_admin(self, client, certs_path, mocker):\n        files = {'key': open(str(certs_path / 'certs/fledge.key'), 'rb'),\n                 'cert': open(str(certs_path / 'certs/fledge.cert'), 'rb')}\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        with patch.object(certificate_store, '_get_certs_dir', return_value=certs_path / 'certs'):\n            with patch.object(certificate_store, '_find_file', return_value=[]) as patch_find_file:\n                resp = await client.post('/fledge/certificate', data=files, headers=self.AUTH_HEADER)\n                assert 200 == resp.status\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert 'fledge.key and fledge.cert have been uploaded successfully' == json_response['result']\n            assert 2 == patch_find_file.call_count\n            args, kwargs = patch_find_file.call_args_list[0]\n            assert ('fledge.cert', certificate_store._get_certs_dir('/certs/')) == args\n            args, kwargs = patch_find_file.call_args_list[1]\n            assert ('fledge.key', certificate_store._get_certs_dir('/certs/')) == args\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        
patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/certificate')\n\n    @pytest.mark.parametrize(\"filename\", [\"fledge.pem\", \"fledge.cert\", \"test.cer\", \"test.crt\"])\n    async def test_upload_with_cert_only(self, client, certs_path, mocker, filename):\n        files = {'cert': open(str(certs_path / 'certs/{}'.format(filename)), 'rb')}\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        with patch.object(certificate_store, '_get_certs_dir', return_value=certs_path / 'certs/pem'):\n            with patch.object(certificate_store, '_find_file', return_value=[]) as patch_find_file:\n                resp = await client.post('/fledge/certificate', data=files, headers=self.AUTH_HEADER)\n                assert 200 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert '{} has been uploaded successfully'.format(filename) == json_response['result']\n            assert 1 == patch_find_file.call_count\n            args, kwargs = patch_find_file.call_args\n            assert (filename, certificate_store._get_certs_dir('/certs/pem')) == args\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/certificate')\n\n    async def test_file_upload_with_overwrite(self, client, certs_path, mocker):\n        files = {'key': open(str(certs_path / 'certs/fledge.key'), 'rb'),\n                 'cert': open(str(certs_path / 'certs/fledge.cert'), 'rb'),\n                 'overwrite': '1'}\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await 
self.auth_token_fixture(\n            mocker)\n        with patch.object(certificate_store, '_get_certs_dir', return_value=certs_path / 'certs'):\n            with patch.object(certificate_store, '_find_file', return_value=[]) as patch_find_file:\n                resp = await client.post('/fledge/certificate', data=files, headers=self.AUTH_HEADER)\n                assert 200 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert 'fledge.key and fledge.cert have been uploaded successfully' == json_response['result']\n            assert 2 == patch_find_file.call_count\n            args, kwargs = patch_find_file.call_args_list[0]\n            assert ('fledge.cert', certificate_store._get_certs_dir('/certs/')) == args\n            args, kwargs = patch_find_file.call_args_list[1]\n            assert ('fledge.key', certificate_store._get_certs_dir('/certs/')) == args\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/certificate')\n\n    async def test_bad_extension_key_file_upload(self, client, certs_path, mocker):\n        key_valid_extensions = ('.key', '.pem')\n        files = {'cert': open(str(certs_path / 'certs/fledge.cert'), 'rb'),\n                 'key': open(str(certs_path / 'certs/fledge.txt'), 'rb')}\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        resp = await client.post('/fledge/certificate', data=files, headers=self.AUTH_HEADER)\n        assert 400 == resp.status\n        assert 'Accepted file extensions are {} for key file'.format(key_valid_extensions) == resp.reason\n        
patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/certificate')\n\n    async def test_upload_with_existing_and_no_overwrite(self, client, certs_path, mocker):\n        files = {'key': open(str(certs_path / 'certs/fledge.key'), 'rb'),\n                 'cert': open(str(certs_path / 'certs/fledge.cert'), 'rb')}\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        with patch.object(certificate_store, '_get_certs_dir', return_value=certs_path / 'certs'):\n            with patch.object(certificate_store, '_find_file', return_value=[\"v\"]) as patch_file:\n                resp = await client.post('/fledge/certificate', data=files, headers=self.AUTH_HEADER)\n                assert 400 == resp.status\n                assert 'Certificate with the same name already exists! 
To overwrite, set the ' \\\n                       'overwrite flag' == resp.reason\n            assert 1 == patch_file.call_count\n            args, kwargs = patch_file.call_args\n            assert ('fledge.cert', certificate_store._get_certs_dir('/certs')) == args\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'POST', '/fledge/certificate')\n\n\nclass TestDeleteCertStoreIfAuthenticationIsMandatory:\n    @pytest.fixture\n    def client(self, loop, aiohttp_server, aiohttp_client):\n        app = web.Application(loop=loop, middlewares=[middleware.auth_middleware])\n        # fill the routes table\n        routes.setup(app)\n        server = loop.run_until_complete(aiohttp_server(app))\n        loop.run_until_complete(server.start_server(loop=loop))\n        client = loop.run_until_complete(aiohttp_client(server))\n        return client\n\n    async def auth_token_fixture(self, mocker, is_admin=True):\n        user = {'id': 1, 'uname': 'admin', 'role_id': '1'} if is_admin else {'id': 2, 'uname': 'user', 'role_id': '2'}\n        \n        _rv1 = await mock_coro(user['id'])\n        _rv2 = await mock_coro(None)\n        _rv3 = await mock_coro(user)\n        patch_logger_debug = mocker.patch.object(middleware._logger, 'debug')\n        patch_validate_token = mocker.patch.object(User.Objects, 'validate_token', return_value=_rv1)\n        patch_refresh_token = mocker.patch.object(User.Objects, 'refresh_token_expiry', return_value=_rv2)\n        patch_user_get = mocker.patch.object(User.Objects, 'get', return_value=_rv3)\n        return patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get\n\n    @pytest.mark.parametrize(\"cert_name, actual_code, actual_reason\", [\n        
('root.pem', 404, \"Certificate with name root.pem does not exist\"),\n        ('rsa_private.key', 404, \"Certificate with name rsa_private.key does not exist\")\n    ])\n    async def test_bad_delete_cert_with_invalid_filename(self, client, mocker, cert_name, actual_code, actual_reason):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        \n        _rv1 = await mock_coro([{'id': '1'}])\n        _rv2 = await mock_coro(REST_API_CAT_INFO)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv1) as patch_role_id:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv2) as patch_get_cat_all_items:\n                    resp = await client.delete('/fledge/certificate/{}'.format(cert_name), headers=ADMIN_USER_HEADER)\n                    assert actual_code == resp.status\n                    assert actual_reason == resp.reason\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert {\"message\": actual_reason} == json_response\n                patch_get_cat_all_items.assert_called_once_with(category_name='rest_api')\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'DELETE',\n                                                  '/fledge/certificate/{}'.format(cert_name))\n\n    @pytest.mark.parametrize(\"cert_name, 
actual_code, actual_reason\", [\n        ('root.txt', 400, \"Accepted file extensions are \"\n                          \"('.cert', '.cer', '.csr', '.crl', '.crt', '.der', '.json', '.key', '.pem', '.p12', '.pfx')\")\n    ])\n    async def test_bad_delete_cert(self, client, mocker, cert_name, actual_code, actual_reason):\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        \n        _payload = [{'id': '1'}]\n        _rv = await mock_coro(_payload)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv):\n            resp = await client.delete('/fledge/certificate/{}'.format(cert_name), headers=ADMIN_USER_HEADER)\n            assert actual_code == resp.status\n            assert actual_reason == resp.reason\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert {\"message\": actual_reason} == json_response\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'DELETE',\n                                                  '/fledge/certificate/{}'.format(cert_name))\n\n    async def test_delete_cert_if_configured_to_use(self, client, mocker):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        cert_name = 'fledge.cert'\n        msg = 'Certificate with name {} is configured for use, you can not delete but overwrite if required.'.format(\n            cert_name)\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        \n        _rv1 = await mock_coro([{'id': '1'}])\n        
_rv2 = await mock_coro(REST_API_CAT_INFO)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv1) as patch_role_id:\n            with patch('os.path.isfile', return_value=True):\n                with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                    with patch.object(c_mgr, 'get_category_all_items', return_value=_rv2) as patch_get_cat_all_items:\n                        resp = await client.delete('/fledge/certificate/{}'.format(cert_name),\n                                                   headers=ADMIN_USER_HEADER)\n                        assert 409 == resp.status\n                        assert msg == resp.reason\n                        result = await resp.text()\n                        json_response = json.loads(result)\n                        assert {\"message\": msg} == json_response\n                    patch_get_cat_all_items.assert_called_once_with(category_name='rest_api')\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'DELETE',\n                                                  '/fledge/certificate/{}'.format(cert_name))\n\n    async def test_bad_type_delete_cert(self, client, mocker):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        cert_name = 'server.cert'\n        msg = 'Only cert and key are allowed for the value of type param'\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        \n        _rv1 = await mock_coro([{'id': '1'}])\n        _rv2 = await mock_coro(REST_API_CAT_INFO)\n 
       with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv1) as patch_role_id:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv2) as patch_get_cat_all_items:\n                    resp = await client.delete('/fledge/certificate/{}?type=pem'.format(cert_name),\n                                               headers=ADMIN_USER_HEADER)\n                    assert 400 == resp.status\n                    assert msg == resp.reason\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert {\"message\": msg} == json_response\n                patch_get_cat_all_items.assert_called_once_with(category_name='rest_api')\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'DELETE',\n                                                  '/fledge/certificate/{}'.format(cert_name))\n\n    @pytest.mark.parametrize(\"cert_name, param\", [\n        ('fledge.json', '?type=cert'),\n        ('fledge.pem', '?type=cert'),\n        ('test.cer', '?type=cert'),\n        ('test.crt', '?type=cert'),\n        ('rsa_private.pem', '?type=key'),\n    ])\n    async def test_delete_cert_with_type(self, client, mocker, cert_name, param):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        patch_logger_debug, patch_validate_token, patch_refresh_token, patch_user_get = await self.auth_token_fixture(\n            mocker)\n        \n        cat_info = {'certificateName':  {'value': 'foo'},  
'authCertificateName':  {'value': 'ca'}}\n        _rv1 = await mock_coro([{'id': '1'}])\n        _rv2 = await mock_coro(cat_info)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv1) as patch_role_id:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv2) as patch_get_cat_all_items:\n                    with patch('os.path.isfile', return_value=True):\n                        with patch('os.remove', return_value=True) as patch_remove:\n                            resp = await client.delete('/fledge/certificate/{}{}'.format(cert_name, param),\n                                                       headers=ADMIN_USER_HEADER)\n                            assert 200 == resp.status\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            assert '{} has been deleted successfully'.format(cert_name) == json_response['result']\n                        assert 1 == patch_remove.call_count\n                patch_get_cat_all_items.assert_called_once_with(category_name='rest_api')\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'DELETE',\n                                                  '/fledge/certificate/{}'.format(cert_name))\n\n    async def test_delete_cert(self, client, mocker, certs_path, cert_name='server.cert'):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        patch_logger_debug, patch_validate_token, patch_refresh_token, 
patch_user_get = await self.auth_token_fixture(\n            mocker)\n        \n        _rv1 = await mock_coro([{'id': '1'}])\n        _rv2 = await mock_coro(REST_API_CAT_INFO)\n        with patch.object(User.Objects, 'get_role_id_by_name', return_value=_rv1) as patch_role_id:\n            with patch.object(certificate_store, '_get_certs_dir', return_value=str(certs_path / 'certs') + '/'):\n                with patch('os.walk') as mockwalk:\n                    mockwalk.return_value = [(str(certs_path / 'certs'), [], [cert_name])]\n                    with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                        with patch.object(c_mgr, 'get_category_all_items',\n                                          return_value=_rv2) as patch_get_cat_all_items:\n                            with patch('os.remove', return_value=True) as patch_remove:\n                                resp = await client.delete('/fledge/certificate/{}'.format(cert_name),\n                                                           headers=ADMIN_USER_HEADER)\n                                assert 200 == resp.status\n                                result = await resp.text()\n                                json_response = json.loads(result)\n                                assert '{} has been deleted successfully'.format(cert_name) == json_response['result']\n                            assert 1 == patch_remove.call_count\n                        patch_get_cat_all_items.assert_called_once_with(category_name='rest_api')\n        patch_role_id.assert_called_once_with('admin')\n        patch_user_get.assert_called_once_with(uid=1)\n        patch_refresh_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_validate_token.assert_called_once_with(ADMIN_USER_HEADER['Authorization'])\n        patch_logger_debug.assert_called_once_with('Received %s request for %s', 'DELETE',\n                                                  
'/fledge/certificate/{}'.format(cert_name))\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_common_ping.py",
"content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test REST server API for python/fledge/services/core/api/common.py\n\nThese two functions are tested via python/fledge/services/core/server.py:\n    - rest_api_config\n    - get_certificates\nThis test file assumes those two units are already tested\n\"\"\"\n\nimport re\nimport json\nimport ssl\nimport socket\nimport subprocess\nimport pathlib\nimport time\nfrom unittest.mock import MagicMock, patch\nimport pytest\nimport aiohttp\nfrom aiohttp import web\n\nfrom fledge.services.core import connect, routes, server as core_server\nfrom fledge.services.core.api.common import _logger\nfrom fledge.common.alert_manager import AlertManager\nfrom fledge.common.web import middleware\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.configuration_manager import ConfigurationManager\n\n# Raw string so that \\d is a regex token, not an invalid Python escape sequence\nSEMANTIC_VERSIONING_REGEX = r\"^(?P<major>0|[1-9]\\d*)\\.(?P<minor>0|[1-9]\\d*)\\.(?P<patch>0|[1-9]\\d*)$\"\n\n\n@pytest.fixture\ndef certs_path():\n    return pathlib.Path(__file__).parent\n\n\n@pytest.fixture\ndef ssl_ctx(certs_path):\n    ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)\n    ssl_ctx.load_cert_chain(\n        str(certs_path / 'certs/fledge.cert'),\n        str(certs_path / 'certs/fledge.key'))\n    return ssl_ctx\n\n\n@pytest.fixture\ndef get_machine_detail():\n    host_name = socket.gethostname()\n    # all addresses for the host\n    all_ip_addresses_cmd_res = subprocess.run(['hostname', '-I'], stdout=subprocess.PIPE)\n    ip_addresses = all_ip_addresses_cmd_res.stdout.decode('utf-8').replace(\"\\n\", \"\").strip().split(\" \")\n    return host_name, ip_addresses\n\n\nasync def test_ping_http_allow_ping_true(aiohttp_server, aiohttp_client, loop, get_machine_detail):\n    payload = '{\"return\": [\"key\", \"description\", \"value\"], \"sort\": {\"column\": \"key\", \"direction\": \"asc\"}}'\n    result = 
{\"rows\": [\n        {\"value\": 1, \"key\": \"PURGED\", \"description\": \"blah6\"},\n        {\"value\": 2, \"key\": \"READINGS\", \"description\": \"blah1\"},\n        {\"value\": 3, \"key\": \"North Readings to PI\", \"description\": \"blah2\"},\n        {\"value\": 4, \"key\": \"North Statistics to PI\", \"description\": \"blah3\"},\n        {\"value\": 10, \"key\": \"North Statistics to OCS\", \"description\": \"blah5\"},\n        {\"value\": 100, \"key\": \"Readings Sent\", \"description\": \"Readings Sent North\"},\n    ]}\n\n    async def mock_coro(*args, **kwargs):\n        return result\n    \n    _rv = await mock_coro()\n    host_name, ip_addresses = get_machine_detail\n    attrs = {\"query_tbl_with_payload.return_value\": await mock_coro()}\n    mock_storage_client_async = MagicMock(spec=StorageClientAsync, **attrs)\n    core_server.Server._alert_manager = AlertManager(mock_storage_client_async)\n    core_server.Server._alert_manager.alerts = []\n    with patch.object(middleware._logger, 'debug') as logger_info:\n        with patch.object(connect, 'get_storage_async', return_value=mock_storage_client_async):\n            with patch.object(mock_storage_client_async, 'query_tbl_with_payload', return_value=_rv) as query_patch:\n                    app = web.Application(loop=loop, middlewares=[middleware.optional_auth_middleware])\n                    # fill route table\n                    routes.setup(app)\n                    server = await aiohttp_server(app)\n                    await server.start_server(loop=loop)\n\n                    client = await aiohttp_client(server)\n                    # note: If the parameter is app aiohttp.web.Application\n                    # the tool creates TestServer implicitly for serving the application.\n                    time.sleep(1)\n                    resp = await client.get('/fledge/ping', headers={'authorization': \"token\"})\n                    assert 200 == resp.status\n                    content = 
await resp.text()\n                    content_dict = json.loads(content)\n                    assert isinstance(content_dict[\"uptime\"], int)\n                    assert 1 <= content_dict[\"uptime\"]\n                    assert 2 == content_dict[\"dataRead\"]\n                    assert 100 == content_dict[\"dataSent\"]\n                    assert 1 == content_dict[\"dataPurged\"]\n                    assert content_dict[\"authenticationOptional\"] is True\n                    assert content_dict['serviceName'] == \"Fledge\"\n                    assert content_dict['hostName'] == host_name\n                    assert content_dict['ipAddresses'] == ip_addresses\n                    assert content_dict['health'] == \"green\"\n                    assert content_dict['safeMode'] is False\n                    assert re.match(SEMANTIC_VERSIONING_REGEX, content_dict['version']) is not None\n                    assert content_dict['alerts'] == 0\n            query_patch.assert_called_once_with('statistics', payload)\n        log_params = 'Received %s request for %s', 'GET', '/fledge/ping'\n        logger_info.assert_called_once_with(*log_params)\n\n\nasync def test_ping_http_allow_ping_false(aiohttp_server, aiohttp_client, loop, get_machine_detail):\n    payload = '{\"return\": [\"key\", \"description\", \"value\"], \"sort\": {\"column\": \"key\", \"direction\": \"asc\"}}'\n\n    async def mock_coro(*args, **kwargs):\n        result = {\"rows\": [\n            {\"value\": 1, \"key\": \"PURGED\", \"description\": \"blah6\"},\n            {\"value\": 2, \"key\": \"READINGS\", \"description\": \"blah1\"},\n            {\"value\": 3, \"key\": \"North Readings to PI\", \"description\": \"blah2\"},\n            {\"value\": 4, \"key\": \"North Statistics to PI\", \"description\": \"blah3\"},\n            {\"value\": 10, \"key\": \"North Statistics to OCS\", \"description\": \"blah5\"},\n            {\"value\": 100, \"key\": \"Readings Sent\", \"description\": \"Readings Sent 
North\"},\n        ]}\n        return result\n\n    _rv = await mock_coro()\n    host_name, ip_addresses = get_machine_detail\n    mock_storage_client_async = MagicMock(StorageClientAsync)\n    with patch.object(middleware._logger, 'debug') as logger_info:\n        with patch.object(connect, 'get_storage_async', return_value=mock_storage_client_async):\n            with patch.object(mock_storage_client_async, 'query_tbl_with_payload', return_value=_rv) as query_patch:\n                    app = web.Application(loop=loop, middlewares=[middleware.optional_auth_middleware])\n                    # fill route table\n                    routes.setup(app)\n\n                    server = await aiohttp_server(app)\n                    await server.start_server(loop=loop)\n\n                    client = await aiohttp_client(server)\n                    # note: If the parameter is app aiohttp.web.Application\n                    # the tool creates TestServer implicitly for serving the application.\n                    resp = await client.get('/fledge/ping', headers={'authorization': \"token\"})\n                    assert 200 == resp.status\n                    content = await resp.text()\n                    content_dict = json.loads(content)\n                    assert 0 <= content_dict[\"uptime\"]\n                    assert 2 == content_dict[\"dataRead\"]\n                    assert 100 == content_dict[\"dataSent\"]\n                    assert 1 == content_dict[\"dataPurged\"]\n                    assert content_dict[\"authenticationOptional\"] is True\n                    assert content_dict['serviceName'] == \"Fledge\"\n                    assert content_dict['hostName'] == host_name\n                    assert content_dict['ipAddresses'] == ip_addresses\n                    assert content_dict['health'] == \"green\"\n                    assert content_dict['safeMode'] is False\n                    assert re.search(SEMANTIC_VERSIONING_REGEX, content_dict['version']) is 
not None\n                    assert content_dict['alerts'] == 0\n            query_patch.assert_called_once_with('statistics', payload)\n        log_params = 'Received %s request for %s', 'GET', '/fledge/ping'\n        logger_info.assert_called_once_with(*log_params)\n\n\nasync def test_ping_http_auth_required_allow_ping_true(aiohttp_server, aiohttp_client, loop, get_machine_detail):\n    payload = '{\"return\": [\"key\", \"description\", \"value\"], \"sort\": {\"column\": \"key\", \"direction\": \"asc\"}}'\n    result = {\"rows\": [\n                {\"value\": 1, \"key\": \"PURGED\", \"description\": \"blah6\"},\n                {\"value\": 2, \"key\": \"READINGS\", \"description\": \"blah1\"},\n                {\"value\": 3, \"key\": \"North Readings to PI\", \"description\": \"blah2\"},\n                {\"value\": 4, \"key\": \"North Statistics to PI\", \"description\": \"blah3\"},\n                {\"value\": 10, \"key\": \"North Statistics to OCS\", \"description\": \"blah5\"},\n                {\"value\": 100, \"key\": \"Readings Sent\", \"description\": \"Readings Sent North\"},\n               ]}\n\n    async def mock_coro(*args, **kwargs):\n        return result\n\n    async def mock_get_category_item():\n        return {\"value\": \"true\"}\n\n    _rv1 = await mock_coro()\n    _rv2 = await mock_get_category_item()\n    host_name, ip_addresses = get_machine_detail\n    mock_storage_client_async = MagicMock(StorageClientAsync)\n    with patch.object(middleware._logger, 'debug') as logger_info:\n        with patch.object(connect, 'get_storage_async', return_value=mock_storage_client_async):\n            with patch.object(mock_storage_client_async, 'query_tbl_with_payload', return_value=_rv1) as query_patch:\n                with patch.object(ConfigurationManager, \"get_category_item\", return_value=_rv2) as mock_get_cat:\n                    app = web.Application(loop=loop, middlewares=[middleware.auth_middleware])\n                    # fill route 
table\n                    routes.setup(app)\n\n                    server = await aiohttp_server(app)\n                    await server.start_server(loop=loop)\n\n                    client = await aiohttp_client(server)\n                    # note: If the parameter is app aiohttp.web.Application\n                    # the tool creates TestServer implicitly for serving the application.\n                    resp = await client.get('/fledge/ping')\n                    assert 200 == resp.status\n                    content = await resp.text()\n                    content_dict = json.loads(content)\n                    assert 0 <= content_dict[\"uptime\"]\n                    assert 2 == content_dict[\"dataRead\"]\n                    assert 100 == content_dict[\"dataSent\"]\n                    assert 1 == content_dict[\"dataPurged\"]\n                    assert content_dict[\"authenticationOptional\"] is False\n                    assert content_dict['serviceName'] == \"Fledge\"\n                    assert content_dict['hostName'] == host_name\n                    assert content_dict['ipAddresses'] == ip_addresses\n                    assert content_dict['health'] == \"green\"\n                    assert content_dict['safeMode'] is False\n                    assert re.match(SEMANTIC_VERSIONING_REGEX, content_dict['version']) is not None\n                    assert content_dict['alerts'] == 0\n                mock_get_cat.assert_called_once_with('rest_api', 'allowPing')\n            query_patch.assert_called_once_with('statistics', payload)\n        log_params = 'Received %s request for %s', 'GET', '/fledge/ping'\n        logger_info.assert_called_once_with(*log_params)\n\n\nasync def test_ping_http_auth_required_allow_ping_false(aiohttp_server, aiohttp_client, loop, get_machine_detail):\n    result = {\"rows\": [\n        {\"value\": 1, \"key\": \"PURGED\", \"description\": \"blah6\"},\n        {\"value\": 2, \"key\": \"READINGS\", \"description\": \"blah1\"},\n     
   {\"value\": 3, \"key\": \"North Readings to PI\", \"description\": \"blah2\"},\n        {\"value\": 4, \"key\": \"North Statistics to PI\", \"description\": \"blah3\"},\n        {\"value\": 5, \"key\": \"North Statistics to OCS\", \"description\": \"blah5\"},\n        {\"value\": 100, \"key\": \"Readings Sent\", \"description\": \"Readings Sent North\"},\n    ]}\n\n    async def mock_coro(*args, **kwargs):\n        return result\n\n    async def mock_get_category_item():\n        return {\"value\": \"false\"}\n\n    _rv1 = await mock_coro()\n    _rv2 = await mock_get_category_item()\n    mock_storage_client_async = MagicMock(StorageClientAsync)\n    with patch.object(middleware._logger, 'debug') as logger_info:\n        with patch.object(connect, 'get_storage_async', return_value=mock_storage_client_async):\n            with patch.object(mock_storage_client_async, 'query_tbl_with_payload', return_value=_rv1) as query_patch:\n                with patch.object(ConfigurationManager, \"get_category_item\", return_value=_rv2) as mock_get_cat:\n                    with patch.object(_logger, 'warning') as logger_warn:\n                        app = web.Application(loop=loop, middlewares=[middleware.auth_middleware])\n                        # fill route table\n                        routes.setup(app)\n\n                        server = await aiohttp_server(app)\n                        await server.start_server(loop=loop)\n\n                        client = await aiohttp_client(server)\n                        # note: If the parameter is app aiohttp.web.Application\n                        # the tool creates TestServer implicitly for serving the application.\n                        resp = await client.get('/fledge/ping')\n                        assert 401 == resp.status\n                    logger_warn.assert_called_once_with('A valid token required to ping; as auth is mandatory & allow ping is set to false.')\n                
mock_get_cat.assert_called_once_with('rest_api', 'allowPing')\n            assert 0 == query_patch.call_count\n    log_params = 'Received %s request for %s', 'GET', '/fledge/ping'\n    logger_info.assert_called_once_with(*log_params)\n\n\nasync def test_ping_https_allow_ping_true(aiohttp_server, ssl_ctx, aiohttp_client, loop, get_machine_detail):\n    payload = '{\"return\": [\"key\", \"description\", \"value\"], \"sort\": {\"column\": \"key\", \"direction\": \"asc\"}}'\n    result = {\"rows\": [\n                {\"value\": 1, \"key\": \"PURGED\", \"description\": \"blah6\"},\n                {\"value\": 2, \"key\": \"READINGS\", \"description\": \"blah1\"},\n                {\"value\": 3, \"key\": \"North Readings to PI\", \"description\": \"blah2\"},\n                {\"value\": 4, \"key\": \"North Statistics to PI\", \"description\": \"blah3\"},\n                {\"value\": 10, \"key\": \"North Statistics to OCS\", \"description\": \"blah5\"},\n                {\"value\": 100, \"key\": \"Readings Sent\", \"description\": \"Readings Sent North\"},\n               ]}\n\n    async def mock_coro(*args, **kwargs):\n        return result\n\n    _rv = await mock_coro()\n    host_name, ip_addresses = get_machine_detail\n    mock_storage_client_async = MagicMock(StorageClientAsync)\n    with patch.object(middleware._logger, 'debug') as logger_info:\n        with patch.object(connect, 'get_storage_async', return_value=mock_storage_client_async):\n            with patch.object(mock_storage_client_async, 'query_tbl_with_payload', return_value=_rv) as query_patch:\n                    app = web.Application(loop=loop, middlewares=[middleware.optional_auth_middleware])\n                    # fill route table\n                    routes.setup(app)\n\n                    server = await aiohttp_server(app, ssl=ssl_ctx)\n                    await server.start_server(loop=loop)\n\n                    with pytest.raises(Exception) as error_exec:\n                        client = 
await aiohttp_client(server)\n                        await client.get('/fledge/ping')\n                    assert \"[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed\" in str(error_exec)\n\n                    with pytest.raises(Exception) as error_exec:\n                        # self signed certificate,\n                        # and we are not using SSL context here for client as verifier\n                        connector = aiohttp.TCPConnector(verify_ssl=True, loop=loop)\n                        client = await aiohttp_client(server, connector=connector)\n                        await client.get('/fledge/ping')\n                    assert \"[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed\" in str(error_exec)\n\n                    connector = aiohttp.TCPConnector(verify_ssl=False, loop=loop)\n                    client = await aiohttp_client(server, connector=connector)\n                    resp = await client.get('/fledge/ping')\n                    s = resp.request_info.url.human_repr()\n                    assert \"https\" == s[:5]\n                    assert 200 == resp.status\n                    content = await resp.text()\n                    content_dict = json.loads(content)\n                    assert 0 <= content_dict[\"uptime\"]\n                    assert 2 == content_dict[\"dataRead\"]\n                    assert 100 == content_dict[\"dataSent\"]\n                    assert 1 == content_dict[\"dataPurged\"]\n                    assert content_dict[\"authenticationOptional\"] is True\n                    assert content_dict['serviceName'] == \"Fledge\"\n                    assert content_dict['hostName'] == host_name\n                    assert content_dict['ipAddresses'] == ip_addresses\n                    assert content_dict['health'] == \"green\"\n                    assert content_dict['safeMode'] is False\n                    assert re.match(SEMANTIC_VERSIONING_REGEX, content_dict['version']) is not None\n                 
   assert content_dict['alerts'] == 0\n            query_patch.assert_called_once_with('statistics', payload)\n        logger_info.assert_called_once_with('Received %s request for %s', 'GET', '/fledge/ping')\n\n\nasync def test_ping_https_allow_ping_false(aiohttp_server, ssl_ctx, aiohttp_client, loop, get_machine_detail):\n    payload = '{\"return\": [\"key\", \"description\", \"value\"], \"sort\": {\"column\": \"key\", \"direction\": \"asc\"}}'\n    result = {\"rows\": [\n        {\"value\": 1, \"key\": \"PURGED\", \"description\": \"blah6\"},\n        {\"value\": 2, \"key\": \"READINGS\", \"description\": \"blah1\"},\n        {\"value\": 3, \"key\": \"North Readings to PI\", \"description\": \"blah2\"},\n        {\"value\": 4, \"key\": \"North Statistics to PI\", \"description\": \"blah3\"},\n        {\"value\": 6, \"key\": \"North Statistics to OCS\", \"description\": \"blah5\"},\n        {\"value\": 100, \"key\": \"Readings Sent\", \"description\": \"Readings Sent North\"},\n    ]}\n\n    async def mock_coro(*args, **kwargs):\n        return result\n\n    _rv = await mock_coro()\n    host_name, ip_addresses = get_machine_detail\n    mock_storage_client_async = MagicMock(StorageClientAsync)\n    with patch.object(middleware._logger, 'debug') as logger_info:\n        with patch.object(connect, 'get_storage_async', return_value=mock_storage_client_async):\n            with patch.object(mock_storage_client_async, 'query_tbl_with_payload', return_value=_rv) as query_patch:\n                    app = web.Application(loop=loop, middlewares=[middleware.optional_auth_middleware])\n                    # fill route table\n                    routes.setup(app)\n\n                    server = await aiohttp_server(app, ssl=ssl_ctx)\n                    await server.start_server(loop=loop)\n\n                    with pytest.raises(Exception) as error_exec:\n                        client = await aiohttp_client(server)\n                        await 
client.get('/fledge/ping')\n                    assert \"[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed\" in str(error_exec)\n\n                    with pytest.raises(Exception) as error_exec:\n                        # self signed certificate,\n                        # and we are not using SSL context here for client as verifier\n                        connector = aiohttp.TCPConnector(verify_ssl=True, loop=loop)\n                        client = await aiohttp_client(server, connector=connector)\n                        await client.get('/fledge/ping')\n                    assert \"[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed\" in str(error_exec)\n\n                    connector = aiohttp.TCPConnector(verify_ssl=False, loop=loop)\n                    client = await aiohttp_client(server, connector=connector)\n                    resp = await client.get('/fledge/ping')\n                    s = resp.request_info.url.human_repr()\n                    assert \"https\" == s[:5]\n                    assert 200 == resp.status\n                    content = await resp.text()\n                    content_dict = json.loads(content)\n                    assert content_dict['serviceName'] == \"Fledge\"\n                    assert content_dict['hostName'] == host_name\n                    assert content_dict['ipAddresses'] == ip_addresses\n                    assert content_dict['health'] == \"green\"\n                    assert re.match(SEMANTIC_VERSIONING_REGEX, content_dict['version']) is not None\n                    assert content_dict['alerts'] == 0\n            query_patch.assert_called_once_with('statistics', payload)\n        logger_info.assert_called_once_with('Received %s request for %s', 'GET', '/fledge/ping')\n\n\nasync def test_ping_https_auth_required_allow_ping_true(aiohttp_server, ssl_ctx, aiohttp_client, loop, get_machine_detail):\n    payload = '{\"return\": [\"key\", \"description\", \"value\"], \"sort\": {\"column\": \"key\", 
\"direction\": \"asc\"}}'\n    result = {\"rows\": [\n                {\"value\": 1, \"key\": \"PURGED\", \"description\": \"blah6\"},\n                {\"value\": 2, \"key\": \"READINGS\", \"description\": \"blah1\"},\n                {\"value\": 3, \"key\": \"North Readings to PI\", \"description\": \"blah2\"},\n                {\"value\": 4, \"key\": \"North Statistics to PI\", \"description\": \"blah3\"},\n                {\"value\": 10, \"key\": \"North Statistics to OCS\", \"description\": \"blah5\"},\n                {\"value\": 100, \"key\": \"Readings Sent\", \"description\": \"Readings Sent North\"},\n               ]}\n\n    async def mock_coro(*args, **kwargs):\n        return result\n\n    async def mock_get_category_item():\n        return {\"value\": \"true\"}\n\n    _rv1 = await mock_coro()\n    _rv2 = await mock_get_category_item()\n    host_name, ip_addresses = get_machine_detail\n    mock_storage_client_async = MagicMock(StorageClientAsync)\n    with patch.object(middleware._logger, 'debug') as logger_info:\n        with patch.object(connect, 'get_storage_async', return_value=mock_storage_client_async):\n            with patch.object(mock_storage_client_async, 'query_tbl_with_payload', return_value=_rv1) as query_patch:\n                with patch.object(ConfigurationManager, \"get_category_item\", return_value=_rv2) as mock_get_cat:\n                    app = web.Application(loop=loop, middlewares=[middleware.auth_middleware])\n                    # fill route table\n                    routes.setup(app)\n\n                    server = await aiohttp_server(app, ssl=ssl_ctx)\n                    await server.start_server(loop=loop)\n\n                    with pytest.raises(Exception) as error_exec:\n                        client = await aiohttp_client(server)\n                        await client.get('/fledge/ping')\n                    assert \"[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed\" in str(error_exec)\n\n                   
 with pytest.raises(Exception) as error_exec:\n                        # self signed certificate,\n                        # and we are not using SSL context here for client as verifier\n                        connector = aiohttp.TCPConnector(verify_ssl=True, loop=loop)\n                        client = await aiohttp_client(server, connector=connector)\n                        await client.get('/fledge/ping')\n                    assert \"[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed\" in str(error_exec)\n\n                    connector = aiohttp.TCPConnector(verify_ssl=False, loop=loop)\n                    client = await aiohttp_client(server, connector=connector)\n                    resp = await client.get('/fledge/ping')\n                    s = resp.request_info.url.human_repr()\n                    assert \"https\" == s[:5]\n                    assert 200 == resp.status\n                    content = await resp.text()\n                    content_dict = json.loads(content)\n                    assert 0 <= content_dict[\"uptime\"]\n                    assert 2 == content_dict[\"dataRead\"]\n                    assert 100 == content_dict[\"dataSent\"]\n                    assert 1 == content_dict[\"dataPurged\"]\n                    assert content_dict[\"authenticationOptional\"] is False\n                    assert content_dict['serviceName'] == \"Fledge\"\n                    assert content_dict['hostName'] == host_name\n                    assert content_dict['ipAddresses'] == ip_addresses\n                    assert content_dict['health'] == \"green\"\n                    assert content_dict['safeMode'] is False\n                    assert re.match(SEMANTIC_VERSIONING_REGEX, content_dict['version']) is not None\n                    assert content_dict['alerts'] == 0\n                    mock_get_cat.assert_called_once_with('rest_api', 'allowPing')\n                query_patch.assert_called_once_with('statistics', payload)\n            
logger_info.assert_called_once_with('Received %s request for %s', 'GET', '/fledge/ping')\n\n\nasync def test_ping_https_auth_required_allow_ping_false(aiohttp_server, ssl_ctx, aiohttp_client, loop, get_machine_detail):\n    async def mock_coro(*args, **kwargs):\n        result = {\"rows\": [\n            {\"value\": 1, \"key\": \"PURGED\", \"description\": \"blah6\"},\n            {\"value\": 2, \"key\": \"READINGS\", \"description\": \"blah1\"},\n            {\"value\": 3, \"key\": \"North Readings to PI\", \"description\": \"blah2\"},\n            {\"value\": 4, \"key\": \"North Statistics to PI\", \"description\": \"blah3\"},\n            {\"value\": 6, \"key\": \"North Statistics to OCS\", \"description\": \"blah5\"},\n            {\"value\": 100, \"key\": \"Readings Sent\", \"description\": \"Readings Sent North\"},\n        ]}\n        return result\n\n    async def mock_get_category_item():\n        return {\"value\": \"false\"}\n\n    _rv1 = await mock_coro()\n    _rv2 = await mock_get_category_item()\n    mock_storage_client_async = MagicMock(StorageClientAsync)\n    with patch.object(middleware._logger, 'debug') as logger_info:\n        with patch.object(connect, 'get_storage_async', return_value=mock_storage_client_async):\n            with patch.object(mock_storage_client_async, 'query_tbl_with_payload', return_value=_rv1) as query_patch:\n                with patch.object(ConfigurationManager, \"get_category_item\", return_value=_rv2) as mock_get_cat:\n                    with patch.object(_logger, 'warning') as logger_warn:\n                        app = web.Application(loop=loop, middlewares=[middleware.auth_middleware])\n                        # fill route table\n                        routes.setup(app)\n\n                        server = await aiohttp_server(app, ssl=ssl_ctx)\n                        await server.start_server(loop=loop)\n\n                        with pytest.raises(Exception) as error_exec:\n                            client = 
await aiohttp_client(server)\n                            await client.get('/fledge/ping')\n                        assert \"[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed\" in str(error_exec)\n\n                        with pytest.raises(Exception) as error_exec:\n                            # self signed certificate,\n                            # and we are not using SSL context here for client as verifier\n                            connector = aiohttp.TCPConnector(verify_ssl=True, loop=loop)\n                            client = await aiohttp_client(server, connector=connector)\n                            await client.get('/fledge/ping')\n                        assert \"[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed\" in str(error_exec)\n\n                        connector = aiohttp.TCPConnector(verify_ssl=False, loop=loop)\n                        client = await aiohttp_client(server, connector=connector)\n                        resp = await client.get('/fledge/ping')\n                        s = resp.request_info.url.human_repr()\n                        assert \"https\" == s[:5]\n                        assert 401 == resp.status\n                    logger_warn.assert_called_once_with('A valid token required to ping; as auth is mandatory & allow ping is set to false.')\n                mock_get_cat.assert_called_once_with('rest_api', 'allowPing')\n            assert 0 == query_patch.call_count\n        logger_info.assert_called_once_with('Received %s request for %s', 'GET', '/fledge/ping')\n\n\nasync def test_shutdown_http(aiohttp_server, aiohttp_client, loop):\n    app = web.Application()\n    # fill route table\n    routes.setup(app)\n\n    server = await aiohttp_server(app)\n    await server.start_server(loop=loop)\n\n    client = await aiohttp_client(server)\n    resp = await client.put('/fledge/shutdown', data=None)\n    assert 200 == resp.status\n    content = await resp.text()\n    content_dict = json.loads(content)\n    
assert \"Fledge shutdown has been scheduled. Wait for few seconds for process cleanup.\" == content_dict[\"message\"]\n\n\nasync def test_restart_http(aiohttp_server, aiohttp_client, loop):\n    app = web.Application()\n    # fill route table\n    routes.setup(app)\n\n    server = await aiohttp_server(app)\n    await server.start_server(loop=loop)\n\n    with patch.object(_logger, 'info') as logger_info:\n        client = await aiohttp_client(server)\n        resp = await client.put('/fledge/restart', data=None)\n        assert 200 == resp.status\n        content = await resp.text()\n        content_dict = json.loads(content)\n        assert \"Fledge restart has been scheduled.\" == content_dict[\"message\"]\n    logger_info.assert_called_once_with('Executing controlled shutdown and start')\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_configuration.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport copy\nimport asyncio\nimport json\nfrom unittest.mock import MagicMock, patch\nfrom aiohttp import web\nimport pytest\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager, ConfigurationManagerSingleton, _logger\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.web import middleware\nfrom fledge.services.core import connect, routes\nfrom fledge.services.core.api import configuration\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestConfiguration:\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop,  middlewares=[middleware.optional_auth_middleware])\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    @pytest.fixture()\n    def reset_singleton(self):\n        # executed before each test\n        ConfigurationManagerSingleton._shared_state = {}\n        yield\n        ConfigurationManagerSingleton._shared_state = {}\n\n    async def test_get_categories(self, client):\n        async def async_mock():\n            return [('rest_api', 'User REST API', 'API'), ('service', 'Service configuration', 'SERV')]\n\n        result = {'categories': [{'key': 'rest_api', 'description': 'User REST API', 'displayName': 'API'},\n                                 {'key': 'service', 'description': 'Service configuration', 'displayName': 'SERV'}]}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with 
patch.object(c_mgr, 'get_all_category_names', return_value=_rv) as patch_get_all_items:\n                resp = await client.get('/fledge/category')\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert result == json_response\n            patch_get_all_items.assert_called_once_with()\n\n    @pytest.mark.parametrize(\"value\", [\n        \"True\", \"true\", \"trUe\", \"TRUE\"\n    ])\n    async def test_get_categories_with_root_true(self, client, value):\n        async def async_mock():\n            return [('General', 'General', 'GEN'), ('Advanced', 'Advanced', 'ADV')]\n\n        result = {'categories': [{'key': 'General', 'description': 'General', 'displayName': 'GEN'},\n                                 {'key': 'Advanced', 'description': 'Advanced', 'displayName': 'ADV'}]}\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_all_category_names', return_value=_rv) as patch_get_all_items:\n                resp = await client.get('/fledge/category?root={}'.format(value))\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert result == json_response\n            patch_get_all_items.assert_called_once_with(root=True)\n\n    @pytest.mark.parametrize(\"value\", [\n        \"False\", \"false\", \"fAlsE\", \"FALSE\"\n    ])\n    async def test_get_categories_with_root_false(self, client, value):\n        async def async_mock():\n            return [('service', 'Fledge Service', 'SERV'), ('rest_api', 'Fledge Admin and User REST API', 'API'),\n                    ('SMNTR', 'Service Monitor', 'Monitor'), ('SCHEDULER', 'Scheduler configuration', 
'SCH')]\n\n        result = {'categories': [{'key': 'service', 'description': 'Fledge Service', 'displayName': 'SERV'},\n                                 {'key': 'rest_api', 'description': 'Fledge Admin and User REST API', 'displayName': 'API'},\n                                 {'key': 'SMNTR', 'description': 'Service Monitor', 'displayName': 'Monitor'},\n                                 {'key': 'SCHEDULER', 'description': 'Scheduler configuration', 'displayName': 'SCH'}]}\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_all_category_names', return_value=_rv) as patch_get_all_items:\n                resp = await client.get('/fledge/category?root={}'.format(value))\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert result == json_response\n            patch_get_all_items.assert_called_once_with(root=False)\n\n    async def test_get_categories_with_root_and_children(self, client):\n        d = [{'children': [{'key': 'rest_api',\n                            'description': 'REST API'},\n                           {'key': 'Security',\n                            'description': 'Microservices Security'},\n                           {'key': 'service', 'description': 'Fledge Service'}],\n              'key': 'General', 'description': 'General'},\n             {'children': [], 'key': 'test', 'description': 'test'}]\n\n        async def async_mock():\n            return d\n\n        result = {'categories': d}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()\n        with patch.object(connect, 
'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_all_category_names', return_value=_rv) as patch_get_all_items:\n                resp = await client.get('/fledge/category?root=true&children=true')\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert result == json_response\n            patch_get_all_items.assert_called_once_with(root=True, children=True)\n\n    async def test_get_categories_with_non_root_and_children(self, client):\n        d = [{'children': [], 'key': 'General', 'description': 'General'},\n             {'children': [], 'key': 'test', 'description': 'test'}]\n\n        async def async_mock():\n            return d\n\n        result = {'categories': d}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_all_category_names', return_value=_rv) as patch_get_all_items:\n                resp = await client.get('/fledge/category?root=false&children=true')\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert result == json_response\n            patch_get_all_items.assert_called_once_with(root=False, children=True)\n\n    async def test_get_category_not_found(self, client, category_name='blah'):\n        async def async_mock():\n            return None\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as 
patch_get_all_items:\n                resp = await client.get('/fledge/category/{}'.format(category_name))\n                assert 404 == resp.status\n                assert 'No such Category found for {}'.format(category_name) == resp.reason\n            patch_get_all_items.assert_called_once_with(category_name)\n\n    @pytest.mark.parametrize(\"expected_result, hide_password\", [\n        ({'httpPort': {'default': '8081', 'value': '8081', 'type': 'integer', 'description': 'The port to accept HTTP'},\n          'certificateName': {'default': 'fledge', 'value': 'fledge', 'type': 'string', \n                              'description': 'Certificate file name'}}, False),\n        ({\"p2\": {\"type\": \"password\", \"description\": \"Test Password\", \"default\": \"fledge\", \"value\": \"Fledge\"}}, True)\n    ])\n    async def test_get_category(self, client, expected_result, hide_password, category_name='rest_api'):\n        async def async_mock():\n            return expected_result\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_all_items:\n                resp = await client.get('/fledge/category/{}'.format(category_name))\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                if hide_password:\n                    expected_result[list(expected_result.keys())[0]]['value'] = \"****\"\n                assert expected_result == json_response\n            patch_get_all_items.assert_called_once_with(category_name)\n\n    async def test_get_category_item_not_found(self, client, category_name='rest_api', item_name='blah'):\n        async def async_mock():\n            return None\n\n        
storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_item', return_value=_rv) as patch_get_cat_item:\n                resp = await client.get('/fledge/category/{}/{}'.format(category_name, item_name))\n                assert 404 == resp.status\n                assert 'No such Category item found for {}'.format(item_name) == resp.reason\n            patch_get_cat_item.assert_called_once_with(category_name, item_name)\n\n    @pytest.mark.parametrize(\"expected_result, hide_password\", [\n        ({'value': '8081', 'type': 'integer', 'default': '8081', 'description': 'Port to accept HTTP conn on'}, False),\n        ({'type': 'password', 'description': 'Test Password', 'default': 'fledge', 'value': 'Fledge'}, True)\n    ])\n    async def test_get_category_item(self, client, expected_result, hide_password, category_name='rest_api',\n                                     item_name='http_port'):\n        async def async_mock():\n            return expected_result\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_item', return_value=_rv) as patch_get_cat_item:\n                resp = await client.get('/fledge/category/{}/{}'.format(category_name, item_name))\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                if hide_password:\n                    expected_result['value'] = \"****\"\n                assert expected_result == json_response\n            
patch_get_cat_item.assert_called_once_with(category_name, item_name)\n\n    @pytest.mark.parametrize(\"expected_result, hide_password\", [\n        ({'value': '8082', 'type': 'integer', 'default': '8081', 'description': 'Port to accept HTTP conn on'}, False),\n        ({'type': 'password', 'description': 'Test Password', 'default': 'fledge', 'value': 'Fledge'}, True)\n    ])\n    async def test_set_config_item(self, client, expected_result, hide_password, category_name='rest_api',\n                                   item_name='http_port'):\n        async def async_mock(return_value):\n            return return_value\n        payload = {\"value\": expected_result['value']}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock(None)\n        _se = await async_mock(expected_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'set_category_item_value_entry', return_value=_rv) as patch_set_entry:\n                with patch.object(c_mgr, 'get_category_item', side_effect=[_se, _se]) as patch_get_cat_item:\n                    resp = await client.put('/fledge/category/{}/{}'.format(category_name, item_name),\n                                            data=json.dumps(payload))\n                    assert 200 == resp.status\n                    r = await resp.text()\n                    json_response = json.loads(r)\n                    if hide_password:\n                        expected_result['value'] = \"****\"\n                    assert expected_result == json_response\n                assert 2 == patch_get_cat_item.call_count\n                calls = patch_get_cat_item.call_args_list\n                args, kwargs = calls[0]\n                assert category_name == args[0]\n                assert item_name == args[1]\n                args, kwargs = calls[1]\n                assert 
category_name == args[0]\n                assert item_name == args[1]\n            assert 1 == patch_set_entry.call_count\n            calls = patch_set_entry.call_args_list\n            args, _ = calls[0]\n            assert 3 == len(args)\n            assert category_name == args[0]\n            assert item_name == args[1]\n            assert payload['value'] == args[2]\n\n    @pytest.mark.parametrize(\"payload, message\", [\n        ({\"valu\": '8082'}, \"Missing required value for http_port\"),\n        ({\"valu\": 8082}, \"Missing required value for http_port\"),\n        ({\"value\": 8082}, \"8082 should be a string literal, in double quotes\")\n    ])\n    async def test_set_config_item_bad_request(self, client, payload, message, category_name='rest_api', item_name='http_port'):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            resp = await client.put('/fledge/category/{}/{}'.format(category_name, item_name),\n                                    data=json.dumps(payload))\n            assert 400 == resp.status\n            assert message == resp.reason\n\n    async def test_set_config_item_not_found(self, client, category_name='rest_api', item_name='http_port'):\n        async def async_mock(return_value):\n            return return_value\n\n        payload = {\"value\": '8082'}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        storage_value_entry = {'value': '8082', 'type': 'integer', 'default': '8081',\n                               'description': 'The port to accept HTTP connections on'}\n        _rv = await async_mock(None)\n        _se1 = await async_mock(storage_value_entry)\n        _se2 = await async_mock(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 
'set_category_item_value_entry', return_value=_rv) as patch_set_entry:\n                with patch.object(c_mgr, 'get_category_item', side_effect=[_se1, _se2]) as patch_get_cat_item:\n                    resp = await client.put('/fledge/category/{}/{}'.format(category_name, item_name),\n                                            data=json.dumps(payload))\n                    assert 404 == resp.status\n                    assert \"No detail found for the category_name: {} and config_item: {}\".format(\n                        category_name, item_name) == resp.reason\n                assert 2 == patch_get_cat_item.call_count\n                calls = patch_get_cat_item.call_args_list\n                args, kwargs = calls[0]\n                assert category_name == args[0]\n                assert item_name == args[1]\n                args, kwargs = calls[1]\n                assert category_name == args[0]\n                assert item_name == args[1]\n            assert 1 == patch_set_entry.call_count\n            calls = patch_set_entry.call_args_list\n            args, _ = calls[0]\n            assert 3 == len(args)\n            assert category_name == args[0]\n            assert item_name == args[1]\n            assert payload['value'] == args[2]\n\n    @pytest.mark.parametrize(\"payload, optional_item, message\", [\n        ({\"value\": '8082'}, \"readonly\", \"Update not allowed for {} item_name as it has readonly attribute set\")\n    ])\n    async def test_set_config_item_not_allowed(self, client, payload, message, optional_item, category_name='rest_api', item_name='http_port'):\n        async def async_mock(return_value):\n            return return_value\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        storage_value_entry = {'value': '8082', 'type': 'integer', 'default': '8081',\n                               'description': 'The port to accept HTTP connections on', optional_item: 
'true'}\n        _rv = await async_mock(storage_value_entry)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_item', return_value=_rv) as patch_get_cat:\n                resp = await client.put('/fledge/category/{}/{}'.format(category_name, item_name), data=json.dumps(payload))\n                assert 400 == resp.status\n                assert message.format(item_name) == resp.reason\n            patch_get_cat.assert_called_once_with(category_name, item_name)\n\n    @pytest.mark.parametrize(\"value, optional_key\", [\n        ('', 'readonly'), ('false', 'readonly'), ('true', 'readonly'), ('Security', 'group'), ('', 'group')\n    ])\n    async def test_set_optional_in_config_item(self, client, value, optional_key,\n                                               category_name='rest_api', item_name='http_port'):\n        async def async_mock(return_value):\n            return return_value\n\n        payload = {optional_key: value}\n        result = {optional_key: value, 'value': '8082', 'type': 'integer', 'default': '8081',\n                  'description': 'The port to accept HTTP connections on'}\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await async_mock(None)\n        _rv2 = await async_mock(result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'set_optional_value_entry', return_value=_rv1) as patch_set_entry:\n                with patch.object(c_mgr, 'get_category_item', return_value=_rv2) as patch_get_cat_item:\n                    resp = await client.put('/fledge/category/{}/{}'.format(category_name, item_name),\n                                            data=json.dumps(payload))\n                    assert 200 == resp.status\n                    r = await resp.text()\n                    
json_response = json.loads(r)\n                    assert result == json_response\n                patch_get_cat_item.assert_called_once_with(category_name, item_name)\n            patch_set_entry.assert_called_once_with(category_name, item_name, optional_key, payload[optional_key])\n\n    async def test_set_optional_in_config_item_exception(self, client, category_name='rest_api', item_name='http_port'):\n        optional_key = 'readonly'\n        payload = {optional_key: '8082'}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'set_optional_value_entry', side_effect=ValueError) as patch_set_entry:\n                resp = await client.put('/fledge/category/{}/{}'.format(category_name, item_name), data=json.dumps(payload))\n                assert 400 == resp.status\n                # compare strings by equality, not identity\n                assert resp.reason == ''\n            patch_set_entry.assert_called_once_with(category_name, item_name, optional_key, payload[optional_key])\n\n    @pytest.mark.parametrize(\"expected_result, hide_password\", [\n        ({'value': '8082', 'type': 'integer', 'default': '8081', 'description': 'Port to accept HTTP conn on'}, False),\n        ({'type': 'password', 'description': 'Test Password', 'default': 'fledge', 'value': 'Fledge'}, True)\n    ])\n    async def test_delete_config_item(self, client, expected_result, hide_password, category_name='rest_api',\n                                      item_name='http_port'):\n        async def async_mock(return_value):\n            return return_value\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock(None)\n        _se = await async_mock(expected_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n   
         with patch.object(c_mgr, 'get_category_item', side_effect=[_se, _se]) as patch_get_cat_item:\n                with patch.object(c_mgr, 'set_category_item_value_entry', return_value=_rv) as patch_set_entry:\n                    resp = await client.delete('/fledge/category/{}/{}/value'.format(category_name, item_name))\n                    assert 200 == resp.status\n                    r = await resp.text()\n                    json_response = json.loads(r)\n                    if hide_password:\n                        expected_result['value'] = \"****\"\n                    assert expected_result == json_response\n                patch_set_entry.assert_called_once_with(category_name, item_name, expected_result['default'])\n            assert 2 == patch_get_cat_item.call_count\n            args, kwargs = patch_get_cat_item.call_args\n            assert category_name == args[0]\n            assert item_name == args[1]\n\n    async def test_delete_config_item_not_allowed(self, client, category_name='rest_api', item_name='http_port'):\n        result = {'value': '8081', 'type': 'integer', 'default': '8081',\n                  'description': 'The port to accept HTTP connections on', 'readonly': 'true'}\n\n        async def async_mock():\n            return result\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _se = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_item', side_effect=[_se]) as patch_get_cat_item:\n                resp = await client.delete('/fledge/category/{}/{}/value'.format(category_name, item_name))\n                assert 400 == resp.status\n                assert 'Delete not allowed for {} item_name as it has readonly attribute set'.format(item_name) == resp.reason\n            assert 1 == patch_get_cat_item.call_count\n            args, kwargs = 
patch_get_cat_item.call_args\n            assert category_name == args[0]\n            assert item_name == args[1]\n\n    async def test_delete_config_item_not_found_before_set_config(self, client, category_name='rest_api', item_name='http_port'):\n        async def async_mock():\n            return None\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_item', return_value=_rv) as patch_get_cat_item:\n                resp = await client.delete('/fledge/category/{}/{}/value'.format(category_name, item_name))\n                assert 404 == resp.status\n                assert \"No detail found for the category_name: {} and config_item: {}\".format(category_name, item_name) == resp.reason\n            assert 1 == patch_get_cat_item.call_count\n            args, kwargs = patch_get_cat_item.call_args\n            assert category_name == args[0]\n            assert item_name == args[1]\n\n    async def test_delete_config_not_found_after_set_config(self, client, category_name='rest_api', item_name='http_port'):\n        result = {'value': '8081', 'type': 'integer', 'default': '8081',\n                  'description': 'The port to accept HTTP connections on'}\n\n        async def async_mock(return_value):\n            return return_value\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock(None)\n        _se1 = await async_mock(result)\n        _se2 = await async_mock(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_item', side_effect=[_se1, _se2]) as patch_get_cat_item:\n                with patch.object(c_mgr, 
'set_category_item_value_entry', return_value=_rv) as patch_set_entry:\n                    resp = await client.delete('/fledge/category/{}/{}/value'.format(category_name, item_name))\n                    assert 404 == resp.status\n                    assert \"No detail found for the category_name: {} and config_item: {}\".format(category_name, item_name) == resp.reason\n                patch_set_entry.assert_called_once_with(category_name, item_name, result['default'])\n            assert 2 == patch_get_cat_item.call_count\n            args, kwargs = patch_get_cat_item.call_args\n            assert category_name == args[0]\n            assert item_name == args[1]\n\n    @pytest.mark.parametrize(\"payload, message\", [\n        (\"blah\", \"Data payload must be a dictionary\"),\n        ({}, \"\\\"'key' param required to create a category\\\"\"),\n        ({\"key\": \"test\"}, \"\\\"'description' param required to create a category\\\"\"),\n        ({\"description\": \"test\"}, \"\\\"'key' param required to create a category\\\"\"),\n        ({\"value\": \"test\"}, \"\\\"'key' param required to create a category\\\"\"),\n        ({\"key\": \"test\", \"description\": \"des\"}, \"\\\"'value' param required to create a category\\\"\"),\n        ({\"key\": \"test\", \"value\": \"val\"}, \"\\\"'description' param required to create a category\\\"\"),\n        ({\"description\": \"desc\", \"value\": \"val\"}, \"\\\"'key' param required to create a category\\\"\"),\n        ({\"key\": \"test\", \"description\": \"test\", \"value\":\n            {\"test1\": {\"type\": \"string\", \"description\": \"d\", \"default\": \"\", \"mandatory\": \"true\"}}},\n         \"For test category, A default value must be given for test1\"),\n        ({\"key\": \"\", \"description\": \"test\", \"value\": \"val\"}, \"Key should not be empty\"),\n        ({\"key\": \" \", \"description\": \"test\", \"value\": \"val\"}, \"Key should not be empty\")\n    ])\n    async def 
test_create_category_bad_request(self, client, payload, message):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(_logger, 'error'):\n                resp = await client.post('/fledge/category', data=json.dumps(payload))\n                assert 400 == resp.status\n                assert message == resp.reason\n\n    @pytest.mark.parametrize(\"payload, hide_password\", [\n        ({\"key\": \"T1\", \"description\": \"Test\"}, False),\n        ({\"key\": \"T2\", \"description\": \"Test 2\", \"display_name\": \"Test Display\"}, False),\n        ({\"key\": \"T3\", \"description\": \"Test 3\"}, True),\n        ({\"key\": \"T4\", \"description\": \"Test 4\", \"display_name\": \"\"}, False),\n        ({\"key\": \"T5\", \"description\": \"Test 5\", \"display_name\": \" \"}, True)\n    ])\n    async def test_create_category(self, client, reset_singleton, payload, hide_password):\n        info = {'p1': {'type': 'password', 'description': 'P1', 'default': 'P1', 'value': 'P1'}} if hide_password else {'info': {'type': 'boolean', 'value': 'False', 'description': 'Test', 'default': 'False'}}\n        new_info = copy.deepcopy(info)\n        payload[\"value\"] = new_info\n\n        async def async_mock(return_value):\n            return return_value\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        if 'display_name' in payload:\n            if not len(payload['display_name'].strip()):\n                payload.pop('display_name')\n                payload['displayName'] = payload['key']\n            else:\n                payload['displayName'] = payload.pop('display_name')\n        else:\n            payload.update({'displayName': payload['key']})\n\n        c_mgr._cacheManager.update(payload['key'], payload['description'], new_info, payload['displayName'])\n        
_rv1 = await async_mock(None)\n        _rv2 = await async_mock(new_info)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'create_category', return_value=_rv1) as patch_create_cat:\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv2) as patch_cat_all_item:\n                    resp = await client.post('/fledge/category', data=json.dumps(payload))\n                    assert 200 == resp.status\n                    r = await resp.text()\n                    json_response = json.loads(r)\n                    if hide_password:\n                        payload['value'][list(payload['value'].keys())[0]]['value'] = \"****\"\n                    assert payload == json_response\n                patch_cat_all_item.assert_called_once_with(category_name=payload['key'])\n            patch_create_cat.assert_called_once_with(category_name=payload['key'], category_description=payload['description'],\n                                                     category_value=info, keep_original_items=False, display_name=None)\n\n    async def test_create_category_invalid_key(self, client, name=\"test_cat\", desc=\"Test desc\"):\n        info = {'info': {'type': 'boolean', 'value': 'False', 'description': 'Test', 'default': 'False'}}\n        payload = {\"key\": name, \"description\": desc, \"value\": info, \"keep_original_items\": \"bla\"}\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        ConfigurationManager(storage_client_mock)\n        with patch.object(_logger, 'exception') as log_exc:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                resp = await client.post('/fledge/category', data=json.dumps(payload))\n                assert 400 == resp.status\n                assert \"Specifying value_name and value_val for item_name info is not allowed if desired behavior is to use default_val as 
value_val\" == resp.reason\n        assert 1 == log_exc.call_count\n        log_exc.assert_called_once_with('Unable to create new category based on category_name %s and category_description %s and category_json_schema %s', 'test_cat', 'Test desc', '')\n\n    async def test_create_category_invalid_category(self, client, name=\"test_cat\", desc=\"Test desc\"):\n        info = {'info': {'type': 'boolean', 'value': 'False', 'description': 'Test', 'default': 'False'}}\n        payload = {\"key\": name, \"description\": desc, \"value\": info}\n\n        async def async_mock_create_cat():\n            return None\n\n        async def async_mock():\n            return None\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await async_mock_create_cat()\n        _rv2 = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'create_category', return_value=_rv1) as patch_create_cat:\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv2) as patch_cat_all_item:\n                    resp = await client.post('/fledge/category', data=json.dumps(payload))\n                    assert 404 == resp.status\n                    assert 'No such test_cat found' == resp.reason\n                patch_cat_all_item.assert_called_once_with(category_name=name)\n            patch_create_cat.assert_called_once_with(category_name=name, category_description=desc,\n                                                     category_value=info, keep_original_items=False, display_name=None)\n\n    async def test_create_category_http_exception(self, client, name=\"test_cat\", desc=\"Test desc\"):\n        info = {'info': {'type': 'boolean', 'value': 'False', 'description': 'Test', 'default': 'False'}}\n        payload = {\"key\": name, \"description\": desc, \"value\": info}\n        msg = 'Something 
went wrong'\n        with patch.object(connect, 'get_storage_async', side_effect=Exception(msg)):\n            with patch.object(configuration._logger, 'error') as patch_logger:\n                resp = await client.post('/fledge/category', data=json.dumps(payload))\n                assert 500 == resp.status\n                assert msg == resp.reason\n            assert 1 == patch_logger.call_count\n\n    @pytest.mark.parametrize(\"payload, message\", [\n        # FIXME: keys order mismatch assertion\n        # ({\"default\": \"1\"}, \"Missing entry_name\"),\n        # ({\"value\": \"0\"}, \"Missing entry_name\"),\n        # ({\"description\": \"1\", \"type\": \"Integer\"}, \"Invalid entry_val for entry_name \\\"type\\\" for item_name info. valid: ['IPv4', 'IPv6', 'JSON', 'X509 certificate', 'boolean', 'integer', 'password', 'string']\")\n        # (\"blah\", \"Data payload must be a dictionary\"),\n        ({}, \"entry value must be a string for item name info and entry name value; got <class 'NoneType'>\"),\n        ({\"description\": \"Test desc\"}, \"entry value must be a string for item name info and entry name value; got <class 'NoneType'>\"),\n        ({\"type\": \"integer\"}, \"entry value must be a string for item name info and entry name value; got <class 'NoneType'>\"),\n        ({\"default\": \"1\", \"description\": \"Test desc\"}, \"missing entry name type for item name info\"),\n        ({\"default\": \"1\", \"type\": \"integer\"}, \"missing entry name description for item name info\"),\n        ({\"description\": \"1\", \"type\": \"integer\"}, \"entry value must be a string for item name info and entry name value; got <class 'NoneType'>\")\n    ])\n    async def test_validate_data_for_add_config_item(self, client, payload, message, loop):\n        async def async_mock():\n            return message\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', 
return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              return_value=(await async_mock())) as log_code_patch:\n                resp = await client.post('/fledge/category/{}/{}'.format(\"cat\", \"info\"), data=json.dumps(payload))\n                assert 400 == resp.status\n                assert \"For cat category, {}\".format(message) == resp.reason\n\n    async def test_invalid_cat_for_add_config_item(self, client):\n        async def async_mock():\n            return None\n\n        category_name = 'blah'\n        payload = {\"default\": \"1\", \"description\": \"Test description\", \"type\": \"integer\"}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_all_items:\n                resp = await client.post('/fledge/category/{}/{}'.format(category_name, \"info\"), data=json.dumps(payload))\n                assert 404 == resp.status\n                assert 'No such Category found for {}'.format(category_name) == resp.reason\n            patch_get_all_items.assert_called_once_with(category_name)\n\n    async def test_config_item_in_use_for_add_config_item(self, client):\n        async def async_mock():\n            return {\"info\": {\"default\": \"1\", \"description\": \"Test description\", \"type\": \"integer\"}}\n\n        category_name = 'cat'\n        payload = {\"default\": \"1\", \"description\": \"Test description\", \"type\": \"integer\"}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n    
        with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_all_items:\n                resp = await client.post('/fledge/category/{}/{}'.format(category_name, \"info\"), data=json.dumps(payload))\n                assert 400 == resp.status\n                assert \"'Config item is already in use for {}'\".format(category_name) == resp.reason\n            patch_get_all_items.assert_called_once_with(category_name)\n\n    @pytest.mark.parametrize(\"data, payload\", [\n        ({\"default\": \"true\", \"description\": \"Test description\", \"type\": \"boolean\"}, '{\"values\": {\"value\": {\"info\": {\"default\": \"1\", \"type\": \"integer\", \"description\": \"Test description\"}, \"info1\": {\"value\": \"true\", \"default\": \"true\", \"type\": \"boolean\", \"description\": \"Test description\"}}}, \"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"cat\"}}'),\n        ({\"default\": \"true\", \"description\": \"Test description\", \"type\": \"boolean\", \"value\": \"false\"}, '{\"values\": {\"value\": {\"info\": {\"default\": \"1\", \"type\": \"integer\", \"description\": \"Test description\"}, \"info1\": {\"value\": \"false\", \"default\": \"true\", \"type\": \"boolean\", \"description\": \"Test description\"}}}, \"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"cat\"}}')\n    ])\n    async def test_add_config_item(self, client, data, payload, loop):\n        async def async_mock():\n            return {\"info\": {\"default\": \"1\", \"description\": \"Test description\", \"type\": \"integer\"}}\n\n        async def async_audit_mock(return_value):\n            return return_value\n\n        async def async_mock_expected():\n            expected = {'rows_affected': 1, \"response\": \"updated\"}\n            return expected\n\n        category_name = 'cat'\n        new_config_item = 'info1'\n        result = {'message': '{} config item has been saved for {} category'.format(new_config_item, 
category_name)}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)        \n        _rv1 = await async_mock()\n        _rv2 = await async_mock_expected()\n        _rv3 = await async_audit_mock(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_all_items', return_value=_rv1) as patch_get_all_items:\n                with patch.object(storage_client_mock, 'update_tbl', return_value=_rv2) as update_tbl_patch:\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=_rv3) as audit_info_patch:\n                            resp = await client.post('/fledge/category/{}/{}'.format(category_name, new_config_item), data=json.dumps(data))\n                            assert 200 == resp.status\n                            r = await resp.text()\n                            json_response = json.loads(r)\n                            assert result == json_response\n\n                    if 'value' not in data:\n                        data.update({'value': data.get('default')})\n                    val = {new_config_item: data}\n                    audit_details = {'category': category_name, 'item': new_config_item, 'value': val}\n                    args, kwargs = audit_info_patch.call_args\n                    assert 'CONAD' == args[0]\n                    assert audit_details == args[1]\n                args1, kwargs1 = update_tbl_patch.call_args\n                assert 'configuration' == args1[0]\n                assert json.loads(payload) == json.loads(args1[1])\n            patch_get_all_items.assert_called_once_with(category_name)\n\n    async def test_unknown_exception_for_add_config_item(self, client):\n        data = {\"default\": \"d\", \"description\": \"Test description\", \"type\": 
\"boolean\"}\n        msg = 'Internal Server Error'\n        with patch.object(connect, 'get_storage_async', side_effect=Exception(msg)):\n            with patch.object(configuration._logger, 'error') as patch_logger:\n                resp = await client.post('/fledge/category/{}/{}'.format(\"blah\", \"blah\"), data=json.dumps(data))\n                assert 500 == resp.status\n                assert msg == resp.reason\n            assert 1 == patch_logger.call_count\n\n    async def test_get_child_category(self, client):\n        async def async_mock():\n            return []\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_child', return_value=_rv) as patch_get_child_cat:\n                resp = await client.get('/fledge/category/south/children')\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert {\"categories\": []} == json_response\n        patch_get_child_cat.assert_called_once_with('south')\n\n    async def test_create_child_category(self, client):\n        data = {\"children\": [\"coap\", \"http\", \"sinusoid\"]}\n        result = {\"children\": data[\"children\"]}\n\n        async def async_mock():\n            return result\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()        \n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'create_child_category', return_value=_rv) as patch_create_child_cat:\n                resp = await client.post('/fledge/category/{}/children'.format(\"south\"), data=json.dumps(data))\n                
assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert result == json_response\n            patch_create_child_cat.assert_called_once_with('south', data['children'])\n\n    async def test_delete_child_category(self, client):\n        async def async_mock():\n            return [\"http\", \"sinusoid\"]\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()       \n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'delete_child_category', return_value=_rv) as patch_delete_child_cat:\n                resp = await client.delete('/fledge/category/{}/children/{}'.format(\"south\", \"coap\"))\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert {\"children\": [\"http\", \"sinusoid\"]} == json_response\n            patch_delete_child_cat.assert_called_once_with('south', 'coap')\n\n    async def test_delete_parent_category(self, client):\n        async def async_mock():\n            return None\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock()       \n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'delete_parent_category', return_value=_rv) as patch_delete_parent_cat:\n                resp = await client.delete('/fledge/category/{}/parent'.format(\"south\"))\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert {'message': 'Parent-child relationship for the parent-south is deleted'} == json_response\n            
patch_delete_parent_cat.assert_called_once_with('south')\n\n    async def test_create_category_with_children(self, client, reset_singleton, name=\"test_cat\", desc=\"Test desc\"):\n        info = {'info': {'type': 'boolean', 'value': 'False', 'description': 'Test', 'default': 'False'}}\n        children = [\"child1\", \"child2\"]\n        payload = {\"key\": name, \"description\": desc, \"value\": info, \"children\": children}\n\n        async def async_mock_create_cat():\n            return None\n\n        async def async_mock():\n            return info\n\n        async def async_mock2():\n            return {\"children\": children}\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        c_mgr._cacheManager.update(name, desc, info, name)\n        _rv1 = await async_mock()\n        _rv2 = await async_mock2()\n        _rv3 = await async_mock_create_cat()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'create_category', return_value=_rv3) as patch_create_cat:\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv1) as patch_cat_all_item:\n                    with patch.object(c_mgr, 'create_child_category', return_value=_rv2) as patch_create_child:\n                        resp = await client.post('/fledge/category', data=json.dumps(payload))\n                        assert 200 == resp.status\n                        r = await resp.text()\n                        json_response = json.loads(r)\n                        payload.update({'displayName': name})\n                        assert payload == json_response\n                    patch_create_child.assert_called_once_with(name, payload[\"children\"])\n                patch_cat_all_item.assert_called_once_with(category_name=name)\n            patch_create_cat.assert_called_once_with(category_name=name, category_description=desc,\n  
                                                   category_value=info, keep_original_items=False, display_name=None)\n\n    async def test_update_bulk_config_bad_request(self, client, category_name='rest_api'):\n        resp = await client.put('/fledge/category/{}'.format(category_name), data=json.dumps({}))\n        assert 400 == resp.status\n        assert 'Nothing to update' == resp.reason\n\n    @pytest.mark.parametrize(\"code, exception_name\", [\n        (404, [NameError, KeyError]),\n        (400, [ValueError, TypeError]),\n        (500, Exception)\n    ])\n    async def test_update_bulk_config_exception(self, client, code, exception_name, category_name='rest_api'):\n        config_item_name = \"authentication\"\n        payload = {config_item_name: \"required\"}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_item', side_effect=exception_name) as patch_get_cat_item:\n                with patch.object(configuration._logger, 'error') as patch_logger:\n                    resp = await client.put('/fledge/category/{}'.format(category_name), data=json.dumps(payload))\n                    assert code == resp.status\n                    assert resp.reason == ''\n                if code == 500:\n                    assert 1 == patch_logger.call_count\n            patch_get_cat_item.assert_called_once_with(category_name, config_item_name)\n\n    async def test_update_bulk_config_item_not_found(self, client, category_name='rest_api'):\n        async def async_mock(return_value):\n            return return_value\n\n        config_item_name = \"https\"\n        payload = {config_item_name: \"8082\"}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await 
async_mock(None)       \n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_item', return_value=_rv) as patch_get_cat_item:\n                resp = await client.put('/fledge/category/{}'.format(category_name), data=json.dumps(payload))\n                assert 404 == resp.status\n                assert \"'{} config item not found'\".format(config_item_name) == resp.reason\n            patch_get_cat_item.assert_called_once_with(category_name, config_item_name)\n\n    @pytest.mark.parametrize(\"payload, optional_item, message\", [\n        ({\"http_port\": '8082'}, \"readonly\", \"Bulk update not allowed for {} item_name as it has readonly attribute set\")\n    ])\n    async def test_update_bulk_config_not_allowed(self, client, payload, optional_item, message,\n                                                  category_name='rest_api'):\n        async def async_mock(return_value):\n            return return_value\n\n        config_item_name = list(payload)[0]\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        storage_value_entry = {'description': 'Port to accept HTTP connections on', 'displayName': 'HTTP Port',\n                               'value': '8081', 'default': '8081', 'order': '2', 'type': 'integer', optional_item: 'true'}\n        _rv = await async_mock(storage_value_entry)     \n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_item', return_value=_rv) as patch_get_cat_item:\n                resp = await client.put('/fledge/category/{}'.format(category_name), data=json.dumps(payload))\n                assert 400 == resp.status\n                assert message.format(config_item_name) == resp.reason\n            patch_get_cat_item.assert_called_once_with(category_name, config_item_name)\n\n    
@pytest.mark.parametrize(\"category_name\", [\n        \"rest_api\", \"Rest $API\"\n    ])\n    async def test_update_bulk_config(self, client, category_name):\n        async def async_mock(return_value):\n            return return_value\n\n        response = {\"response\": \"updated\", \"rows_affected\": 1}\n        result = {'authentication': {'options': ['mandatory', 'optional'], 'description': 'API Call Authentication', 'displayName': 'Authentication', 'value': 'mandatory', 'default': 'optional', 'order': '5', 'type': 'enumeration'},\n                  'enableHttp': {'description': 'Enable HTTP (disable to use HTTPS)', 'displayName': 'Enable HTTP', 'value': 'true', 'default': 'true', 'order': '1', 'type': 'boolean'},\n                  'httpPort': {'description': 'Port to accept HTTP connections on', 'displayName': 'HTTP Port', 'value': '8082', 'default': '8081', 'order': '2', 'type': 'integer'}}\n\n        payload = {\"http_port\": \"8082\", \"authentication\": \"mandatory\"}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        storage_value_entry1 = {'description': 'Port to accept HTTP connections on', 'displayName': 'HTTP Port', 'value': '8081', 'default': '8081', 'order': '2', 'type': 'integer'}\n        storage_value_entry2 = {'options': ['mandatory', 'optional'], 'description': 'API Call Authentication', 'displayName': 'Authentication', 'value': 'optional', 'default': 'optional', 'order': '5', 'type': 'enumeration'}\n        _rv1 = await async_mock(response)\n        _rv2 = await async_mock(result)\n        _se1 = await async_mock(storage_value_entry1)\n        _se2 = await async_mock(storage_value_entry2)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_item', side_effect=[_se1, _se2]) as patch_get_cat_item:\n                with patch.object(c_mgr, 
'update_configuration_item_bulk', return_value=_rv1) as patch_update_bulk:\n                    with patch.object(c_mgr, 'get_category_all_items', return_value=_rv2) as patch_get_all_items:\n                        resp = await client.put('/fledge/category/{}'.format(category_name), data=json.dumps(payload))\n                        assert 200 == resp.status\n                        r = await resp.text()\n                        json_response = json.loads(r)\n                        assert result == json_response\n                    patch_get_all_items.assert_called_once_with(category_name)\n                assert 1 == patch_update_bulk.call_count\n                calls = patch_update_bulk.call_args_list\n                args, _ = calls[0]\n                assert 3 == len(args)\n                assert category_name == args[0]\n                assert payload == args[1]\n                assert args[2] is not None\n            assert 2 == patch_get_cat_item.call_count\n\n    async def test_delete_configuration(self, client, category_name='rest_api'):\n        result = {'result': 'Category {} deleted successfully.'.format(category_name)}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await asyncio.sleep(.1)       \n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'delete_category_and_children_recursively', return_value=_rv) as patch_delete_cat:\n                resp = await client.delete('/fledge/category/{}'.format(category_name))\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert result == json_response\n            assert 1 == patch_delete_cat.call_count\n            args, kwargs = patch_delete_cat.call_args\n            assert category_name == args[0]\n\n    @pytest.mark.parametrize(\"value\", [\n   
     \"-1\", \"0\", \"1001\"\n    ])\n    async def test_bad_update_configuration_cache_size(self, client, value,\n                                                   category_name='CONFIGURATION', config_item='cacheSize'):\n        async def async_mock(return_value):\n            return return_value\n\n        payload = {config_item: value}\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        storage_value_entry = {'description': 'To control the caching size of Core Configuration Manager',\n                               'type': 'integer', 'displayName': 'Cache Size', 'default': '30', 'order': '1',\n                               'minimum': '1', 'maximum': '1000', 'value': '30'}\n\n        cat_items = {config_item: storage_value_entry}\n        rv = await async_mock(storage_value_entry)\n        rv2 = await async_mock(cat_items)\n        expected_message = \"For config item cacheSize you cannot set the new value, beyond the range (1,1000)\"\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(c_mgr, 'get_category_item', return_value=rv) as patch_get_cat_item:\n                with patch.object(c_mgr, 'get_category_all_items', return_value=rv2) as patch_get_cat_items:\n                    with patch.object(_logger, 'error') as patch_logger:\n                        resp = await client.put('/fledge/category/{}'.format(category_name), data=json.dumps(payload))\n                        assert 400 == resp.status\n                        assert expected_message == resp.reason\n                    assert patch_logger.called\n                patch_get_cat_items.assert_called_once_with(category_name)\n            patch_get_cat_item.assert_called_once_with(category_name, config_item)\n\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_filters.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport asyncio\nimport json\nfrom unittest.mock import MagicMock, patch, call\nfrom aiohttp import web\nimport pytest\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.services.core import connect, routes\nfrom fledge.services.core.api import filters, utils as apiutils\nfrom fledge.services.core.api.filters import _LOGGER\nfrom fledge.services.core.api.plugins import common\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestFilters:\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    async def async_mock(self, return_val):\n        return return_val\n\n    async def test_get_filters(self, client):\n        async def get_filters():\n            return {\"rows\": []}\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await get_filters()        \n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', return_value=_rv) as query_tbl_patch:\n                resp = await client.get('/fledge/filter')\n                assert 200 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {'filters': []} == json_response\n            query_tbl_patch.assert_called_once_with('filters')\n\n    async def test_get_filters_storage_exception(self, client):\n        storage_client_mock = 
MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', side_effect=StorageServerError(\n                    None, None, error='something went wrong')) as query_tbl_patch:\n                with patch.object(_LOGGER, 'error') as patch_logger:\n                    resp = await client.get('/fledge/filter')\n                    assert 500 == resp.status\n                    assert \"something went wrong\" == resp.reason\n                assert 1 == patch_logger.call_count\n                args, kwargs = patch_logger.call_args\n                assert 'Get all filters, caught storage exception: {}'.format('something went wrong') in args[0]\n            query_tbl_patch.assert_called_once_with('filters')\n\n    async def test_get_filters_exception(self, client):\n        async def get_filters():\n            return {\"count\": 1}\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await get_filters()        \n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', return_value=_rv) as query_tbl_patch:\n                with patch.object(_LOGGER, 'error') as patch_logger:\n                    resp = await client.get('/fledge/filter')\n                    assert 500 == resp.status\n                    assert \"'rows'\" == resp.reason\n                assert 1 == patch_logger.call_count\n            query_tbl_patch.assert_called_once_with('filters')\n\n    async def test_get_filter_by_name(self, client):\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n\n            if table == 'filters':\n                assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": filter_name}} == json.loads(payload)\n                return {'count': 1, 'rows': [{'name': 
filter_name, 'plugin': 'python35'}]}\n\n            if table == 'filter_users':\n                assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": filter_name}} == json.loads(payload)\n                return {'count': 0, 'rows': []}\n\n        filter_name = \"AssetFilter\"\n        cat_info = {'plugin': {'description': 'Python 3.5 filter plugin', 'type': 'string', 'default': 'python35', 'value': 'python35'},\n                    'config': {'description': 'Python 3.5 filter configuration.', 'type': 'JSON', 'default': '{}', 'value': '{}'},\n                    'script': {'description': 'Python 3.5 module to load.', 'type': 'script', 'default': '', 'value': ''}}\n\n        result = {\"filter\": {\"config\": cat_info, \"name\": filter_name, \"plugin\": \"python35\", \"users\": []}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await self.async_mock(cat_info)       \n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv) as get_cat_info_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                    resp = await client.get('/fledge/filter/{}'.format(filter_name))\n                    assert 200 == resp.status\n                    r = await resp.text()\n                    json_response = json.loads(r)\n                    assert result == json_response\n            get_cat_info_patch.assert_called_once_with(filter_name)\n\n    async def test_get_filter_by_name_not_found(self, client):\n        filter_name = \"AssetFilter\"\n        cat_info = {'plugin': {'description': 'Python 3.5 filter plugin', 'type': 'string', 'default': 'python35', 'value': 'python35'},\n                    'config': {'description': 'Python 3.5 filter configuration.', 'type': 'JSON', 'default': '{}', 
'value': '{}'},\n                    'script': {'description': 'Python 3.5 module to load.', 'type': 'script', 'default': '', 'value': ''}}\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        filter_result = {'count': 0, 'rows': []}\n        _rv1 = await self.async_mock(cat_info)\n        _rv2 = await self.async_mock(filter_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv1) as get_cat_info_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv2) as query_tbl_with_payload_patch:\n                    resp = await client.get('/fledge/filter/{}'.format(filter_name))\n                    assert 404 == resp.status\n                    assert \"No such filter '{}' found.\".format(filter_name) == resp.reason\n                query_tbl_with_payload_patch.assert_called_once_with('filters', '{\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"AssetFilter\"}}')\n            get_cat_info_patch.assert_called_once_with(filter_name)\n\n    async def test_get_filter_by_name_value_error(self, client):\n        filter_name = \"AssetFilter\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await self.async_mock(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv) as get_cat_info_patch:\n                resp = await client.get('/fledge/filter/{}'.format(filter_name))\n                assert 404 == resp.status\n                assert \"No such 'AssetFilter' category found.\" == resp.reason\n            get_cat_info_patch.assert_called_once_with(filter_name)\n\n    async def 
test_get_filter_by_name_storage_error(self, client):\n        filter_name = \"AssetFilter\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', side_effect=StorageServerError(\n                    None, None, error='something went wrong')) as get_cat_info_patch:\n                with patch.object(_LOGGER, 'error') as patch_logger:\n                    resp = await client.get('/fledge/filter/{}'.format(filter_name))\n                    assert 500 == resp.status\n                    assert \"something went wrong\" == resp.reason\n                assert 1 == patch_logger.call_count\n                args, kwargs = patch_logger.call_args\n                assert 'Failed to get filter name: {}. Storage error occurred: {}'.format(\n                    filter_name, 'something went wrong') in args[0]\n            get_cat_info_patch.assert_called_once_with(filter_name)\n\n    async def test_get_filter_by_name_type_error(self, client):\n        # What causes a TypeError: https://github.com/fledge-iot/fledge/blob/develop/python/fledge/services/core/api/filters.py#L319\n        storage_client_mock = MagicMock(StorageClientAsync)\n        filter_name = \"AssetFilter\"\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', side_effect=TypeError) as get_cat_info_patch:\n                resp = await client.get('/fledge/filter/{}'.format(filter_name))\n                assert 400 == resp.status\n                # assert \"?\" == resp.reason\n            get_cat_info_patch.assert_called_once_with(filter_name)\n\n    async def test_get_filter_by_name_exception(self, client):\n        
filter_name = \"AssetFilter\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', side_effect=Exception) as get_cat_info_patch:\n                with patch.object(_LOGGER, 'error') as patch_logger:\n                    resp = await client.get('/fledge/filter/{}'.format(filter_name))\n                    assert 500 == resp.status\n                    assert '' == resp.reason\n                assert 1 == patch_logger.call_count\n            get_cat_info_patch.assert_called_once_with(filter_name)\n\n    @pytest.mark.parametrize(\"data\", [\n        {},\n        {\"name\": \"test\"},\n        {\"plugin\": \"benchmark\"},\n        {\"blah\": \"blah\"}\n    ])\n    async def test_bad_create_filter(self, client, data):\n        msg = \"Filter name, plugin name are mandatory.\"\n        resp = await client.post('/fledge/filter'.format(\"bench\"), data=json.dumps(data))\n        assert 400 == resp.status\n        assert msg == resp.reason\n\n    async def test_create_filter_value_error_1(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await self.async_mock({\"result\": \"test\"})\n        msg = \"This 'test' filter already exists\"\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv) as get_cat_info_patch:\n                resp = await client.post('/fledge/filter'.format(\"bench\"), data=json.dumps(\n                    {\"name\": \"test\", \"plugin\": \"benchmark\"}))\n                assert 404 == resp.status\n                assert msg == resp.reason\n            
get_cat_info_patch.assert_called_once_with(category_name='test')\n\n    async def test_create_filter_value_error_2(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        plugin_name = 'benchmark'\n        _rv = await self.async_mock(None)\n        msg = \"Can not get 'plugin_info' detail from plugin '{}'\".format(plugin_name)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv) as get_cat_info_patch:\n                with patch.object(apiutils, 'get_plugin_info', return_value=None) as api_utils_patch:\n                    with patch.object(common._logger, 'warning') as patch_logger:\n                        resp = await client.post('/fledge/filter'.format(\"bench\"), data=json.dumps(\n                            {\"name\": \"test\", \"plugin\": plugin_name}))\n                        assert 404 == resp.status\n                        assert msg == resp.reason\n                    assert 2 == patch_logger.call_count\n                api_utils_patch.assert_called_once_with(plugin_name, dir='filter')\n            get_cat_info_patch.assert_called_once_with(category_name='test')\n\n    async def test_create_filter_value_error_3(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        plugin_name = 'benchmark'\n        _rv = await self.async_mock(None)\n        msg = \"Loaded plugin 'python35', type 'south', doesn't match the specified one '{}', type 'filter'\".format(\n            plugin_name)\n        ret_val = {\"config\": {'plugin': {'description': 'Python 3.5 filter plugin', 'type': 'string',\n                                         'default': 'python35'}}, \"type\": \"south\"}\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n  
          with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv) as get_cat_info_patch:\n                with patch.object(apiutils, 'get_plugin_info', return_value=ret_val) as api_utils_patch:\n                    with patch.object(common._logger, 'warning') as patch_logger:\n                        resp = await client.post('/fledge/filter'.format(\"bench\"), data=json.dumps(\n                            {\"name\": \"test\", \"plugin\": plugin_name}))\n                        assert 404 == resp.status\n                        assert msg == resp.reason\n                    assert 2 == patch_logger.call_count\n                api_utils_patch.assert_called_once_with(plugin_name, dir='filter')\n            get_cat_info_patch.assert_called_once_with(category_name='test')\n\n    async def test_create_filter_value_error_4(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        plugin_name = 'filter'\n        _rv1 = await self.async_mock(None)\n        _rv2 = await self.async_mock({'count': 0, 'rows': []})\n        msg = \"filter_config must be a JSON object\"\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv1) as get_cat_info_patch:\n                with patch.object(apiutils, 'get_plugin_info', return_value={\"config\": {'plugin': {'description': 'Python 3.5 filter plugin', 'type': 'string', 'default': 'filter'}}, \"type\": \"filter\"}) as api_utils_patch:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv2) as query_tbl_patch:\n                        with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv1) as insert_tbl_patch:\n                            with patch.object(cf_mgr, 'create_category', return_value=_rv1) as create_cat_patch:\n                                
with patch.object(common._logger, 'warning') as patch_logger:\n                                    resp = await client.post('/fledge/filter'.format(\"bench\"), data=json.dumps(\n                                        {\"name\": \"test\", \"plugin\": plugin_name, \"filter_config\": \"blah\"}))\n                                    assert 404 == resp.status\n                                    assert msg == resp.reason\n                                assert 2 == patch_logger.call_count\n                            create_cat_patch.assert_called_once_with(\n                                category_description=\"Configuration of 'test' filter for plugin 'filter'\",\n                                category_name='test', category_value=\n                                {'plugin': {'description': 'Python 3.5 filter plugin', 'type': 'string',\n                                            'default': 'filter'}}, keep_original_items=True)\n                        args, kwargs = insert_tbl_patch.call_args_list[0]\n                        assert 'filters' == args[0]\n                        assert {\"name\": \"test\", \"plugin\": \"filter\"} == json.loads(args[1])\n                        # insert_tbl_patch.assert_called_once_with('filters', '{\"name\": \"test\", \"plugin\": \"filter\"}')\n                    args, kwargs = query_tbl_patch.call_args_list[0]\n                    assert 'filters' == args[0]\n                    assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"test\"}} == json.loads(args[1])\n                    # query_tbl_patch.assert_called_once_with('filters', '{\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"test\"}}')\n                api_utils_patch.assert_called_once_with(plugin_name, dir='filter')\n            get_cat_info_patch.assert_called_once_with(category_name='test')\n\n    async def test_create_filter_storage_error(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        
cf_mgr = ConfigurationManager(storage_client_mock)\n        plugin_name = 'filter'\n        name = 'test'\n        _rv = await self.async_mock(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv) as get_cat_info_patch:\n                with patch.object(apiutils, 'get_plugin_info', return_value={\n                    \"config\": {'plugin': {'description': 'Python 3.5 filter plugin', 'type': 'string',\n                                          'default': 'filter'}}, \"type\": \"filter\"}) as api_utils_patch:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=StorageServerError(\n                            None, None, error='something went wrong')):\n                        with patch.object(filters, '_delete_configuration_category', return_value=_rv) as _delete_cfg_patch:\n                            with patch.object(_LOGGER, 'error') as patch_logger:\n                                with patch.object(common._logger, 'warning') as patch_logger2:\n                                    resp = await client.post('/fledge/filter'.format(\"bench\"), data=json.dumps(\n                                        {\"name\": name, \"plugin\": plugin_name}))\n                                    assert 500 == resp.status\n                                    assert 'something went wrong' == resp.reason\n                                assert 2 == patch_logger2.call_count\n                            assert 1 == patch_logger.call_count\n                        args, kwargs = _delete_cfg_patch.call_args\n                        assert name == args[1]\n                api_utils_patch.assert_called_once_with(plugin_name, dir='filter')\n            get_cat_info_patch.assert_called_once_with(category_name=name)\n\n    async def test_create_filter_exception(self, client):\n        storage_client_mock = 
MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        plugin_name = 'filter'\n        name = 'test'\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', side_effect=Exception) as get_cat_info_patch:\n                with patch.object(_LOGGER, 'error') as patch_logger:\n                    resp = await client.post('/fledge/filter'.format(\"bench\"), data=json.dumps(\n                        {\"name\": name, \"plugin\": plugin_name}))\n                    assert 500 == resp.status\n                    assert '' == resp.reason\n                assert 1 == patch_logger.call_count\n                args = patch_logger.call_args\n                assert 'Add filter failed.' == args[0][1]\n            get_cat_info_patch.assert_called_once_with(category_name=name)\n\n    async def test_create_filter(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        plugin_name = 'filter'\n        name = 'test'\n        _rv1 = await self.async_mock(None)\n        _rv2 = await self.async_mock({'count': 0, 'rows': []})\n        _se1 = await self.async_mock(None)\n        _se2 = await self.async_mock({})\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', side_effect=[_se1, _se2]) as get_cat_info_patch:\n                with patch.object(apiutils, 'get_plugin_info', return_value={\"config\": {'plugin': {'description': 'Python 3.5 filter plugin', 'type': 'string', 'default': 'filter'}}, \"type\": \"filter\"}) as api_utils_patch:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv2) as query_tbl_patch:\n                        with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv1) as 
insert_tbl_patch:\n                            with patch.object(cf_mgr, 'create_category', return_value=_rv1) as create_cat_patch:\n                                with patch.object(cf_mgr, 'update_configuration_item_bulk', return_value=_rv1) as update_cfg_bulk_patch:\n                                    with patch.object(common._logger, 'warning') as patch_logger2:\n                                        resp = await client.post('/fledge/filter'.format(\"bench\"), data=json.dumps(\n                                            {\"name\": name, \"plugin\": plugin_name, \"filter_config\": {}}))\n                                        assert 200 == resp.status\n                                        r = await resp.text()\n                                        json_response = json.loads(r)\n                                        assert {'filter': name, 'description': \"Configuration of 'test' filter for plugin 'filter'\", 'value': {}} == json_response\n                                    assert 2 == patch_logger2.call_count\n                                update_cfg_bulk_patch.assert_called_once_with(name, {})\n                            create_cat_patch.assert_called_once_with(category_description=\"Configuration of 'test' filter for plugin 'filter'\", category_name='test', category_value={'plugin': {'description': 'Python 3.5 filter plugin', 'type': 'string', 'default': 'filter'}}, keep_original_items=True)\n                        args, kwargs = insert_tbl_patch.call_args_list[0]\n                        assert 'filters' == args[0]\n                        assert {\"name\": \"test\", \"plugin\": \"filter\"} == json.loads(args[1])\n                        # insert_tbl_patch.assert_called_once_with('filters', '{\"name\": \"test\", \"plugin\": \"filter\"}')\n                    args, kwargs = query_tbl_patch.call_args_list[0]\n                    assert 'filters' == args[0]\n                    assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", 
\"value\": \"test\"}} == json.loads(args[1])\n                    # query_tbl_patch.assert_called_once_with('filters', '{\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"test\"}}')\n                api_utils_patch.assert_called_once_with(plugin_name, dir='filter')\n            assert 2 == get_cat_info_patch.call_count\n\n    async def test_delete_filter(self, client):\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n\n            if table == 'filters':\n                assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": filter_name}} == json.loads(payload)\n                return {'count': 1, 'rows': [{'name': filter_name, 'plugin': 'python35'}]}\n\n            if table == 'filter_users':\n                assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": filter_name}} == json.loads(payload)\n                return {'count': 0, 'rows': []}\n\n            if table == 'asset_tracker':\n                assert {\"return\": [\"deprecated_ts\"],\n                        \"where\": {\"column\": \"plugin\", \"condition\": \"=\", \"value\": filter_name}} == json.loads(payload)\n                return {'count': 1, 'rows': [{'deprecated_ts': ''}]}\n\n        filter_name = \"AssetFilter\"\n        delete_result = {'response': 'deleted', 'rows_affected': 1}\n        update_result = {'rows_affected': 1, \"response\": \"updated\"}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await self.async_mock(None)\n        _rv2 = await self.async_mock(delete_result)\n        _rv3 = await self.async_mock(update_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                with patch.object(storage_client_mock, 'delete_from_tbl', return_value=_rv2) as delete_tbl_patch:\n                    with 
patch.object(filters, '_delete_configuration_category', return_value=_rv1) as delete_cfg_patch:\n                        with patch.object(storage_client_mock, 'update_tbl', return_value=_rv3) as update_tbl_patch:\n                            resp = await client.delete('/fledge/filter/{}'.format(filter_name))\n                            assert 200 == resp.status\n                            r = await resp.text()\n                            json_response = json.loads(r)\n                            assert {'result': 'Filter AssetFilter deleted successfully.'} == json_response\n                        args, kwargs = update_tbl_patch.call_args\n                        assert 'asset_tracker' == args[0]\n                    args, kwargs = delete_cfg_patch.call_args\n                    assert filter_name == args[1]\n                delete_tbl_patch.assert_called_once_with(\n                    'filters', '{\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"AssetFilter\"}}')\n\n    async def test_delete_filter_value_error(self, client):\n        filter_name = \"AssetFilter\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        message = \"No such filter '{}' found\".format(filter_name)\n        _rv = await self.async_mock({'count': 0, 'rows': []})\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv):\n                resp = await client.delete('/fledge/filter/{}'.format(filter_name))\n                assert 404 == resp.status\n                assert message == resp.reason\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert message == json_response['message']\n\n    async def test_delete_filter_type_error(self, client):\n        filter_name = \"AssetFilter\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        message = 'string 
indices must be integers'\n        _rv = await self.async_mock(\"blah\")\n\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv):\n                resp = await client.delete('/fledge/filter/{}'.format(filter_name))\n                assert 400 == resp.status\n                assert message in resp.reason\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert message in json_response['message']\n\n    async def test_delete_filter_conflict_error(self, client):\n        filter_name = \"AssetFilter\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        message = (\"The filter '{}' is currently being used within a pipeline. \"\n                   \"To delete the filter, you must first remove it from the pipeline.\").format(filter_name)\n        _se1 = await self.async_mock({'count': 1, 'rows': [{'name': filter_name, 'plugin': 'python35'}]})\n        _se2 = await self.async_mock({'count': 0, 'rows': [\"Random\"]})\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              side_effect=[_se1, _se2]):\n                resp = await client.delete('/fledge/filter/{}'.format(filter_name))\n                assert 409 == resp.status\n                assert message == resp.reason\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert message == json_response['message']\n\n    async def test_delete_filter_storage_error(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        filter_name = \"AssetFilter\"\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 
'query_tbl_with_payload', side_effect=StorageServerError(\n                    None, None, error='something went wrong')) as get_cat_info_patch:\n                with patch.object(_LOGGER, 'exception') as patch_logger:\n                    resp = await client.delete('/fledge/filter/{}'.format(filter_name))\n                    assert 500 == resp.status\n                    assert \"something went wrong\" == resp.reason\n                assert 1 == patch_logger.call_count\n                patch_logger.assert_called_once_with('Delete {} filter, caught storage exception: {}'.format(\n                    filter_name, 'something went wrong'))\n            get_cat_info_patch.assert_called_once_with(\n                'filters', '{\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"AssetFilter\"}}')\n\n    @pytest.mark.parametrize(\"payload, message\", [\n        ({}, \"'pipeline key-value pair is required in the payload.'\"),\n        ({\"foo\": \"bar\"}, \"'pipeline key-value pair is required in the payload.'\"),\n        ({\"Pipeline\": []}, \"'pipeline key-value pair is required in the payload.'\"),\n        ({\"pipeline\": 1}, \"pipeline must be either a list of filters or an empty list.\"),\n        ({\"pipeline\": False}, \"pipeline must be either a list of filters or an empty list.\"),\n        ({\"pipeline\": \"\"}, \"pipeline must be either a list of filters or an empty list.\"),\n        ({\"pipeline\": \"AssetFilter\"}, \"pipeline must be either a list of filters or an empty list.\"),\n        ({\"pipeline\": {}}, \"pipeline must be either a list of filters or an empty list.\"),\n        ({\"pipeline\": [\"F1\", \"F1\"]}, \"The filter name 'F1' cannot be duplicated in the pipeline.\"),\n        ({\"pipeline\": [\"F1\", \"f1\", \"F2\", \"F2\"]}, \"The filter name 'F2' cannot be duplicated in the pipeline.\"),\n        ({\"pipeline\": [\"F1\", \"f1\", [\"F2\"], \"F2\"]}, \"The filter name 'F2' cannot be duplicated in the pipeline.\"),\n        
({\"pipeline\": [\"F1\", [\"f1\"], [\"f1\", \"F3\"]]}, \"The filter name 'f1' cannot be duplicated in the pipeline.\"),\n        ({\"pipeline\": [[\"F1\", \"f1\"], [\"f1\", \"F3\"]]}, \"The filter name 'f1' cannot be duplicated in the pipeline.\")\n    ])\n    async def test_bad_update_filter_pipeline(self, client, payload, message):\n        resp = await client.put('/fledge/filter/{}/pipeline'.format(\"bench\"), data=json.dumps(payload))\n        assert 400 == resp.status\n        assert message == resp.reason\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {\"message\": message} == json_response\n\n    @pytest.mark.parametrize(\"request_param, param, val\", [\n        ('?append_filter=T', 'append_filter', 't'),\n        ('?append_filter=1', 'append_filter', '1'),\n        ('?append_filter=t', 'append_filter', 't'),\n        ('?append_filter=F', 'append_filter', 'f'),\n        ('?append_filter=0', 'append_filter', '0'),\n        ('?append_filter=f', 'append_filter', 'f'),\n        ('?allow_duplicates=T', 'allow_duplicates', 't'),\n        ('?allow_duplicates=1', 'allow_duplicates', '1'),\n        ('?allow_duplicates=t', 'allow_duplicates', 't'),\n        ('?allow_duplicates=F', 'allow_duplicates', 'f'),\n        ('?allow_duplicates=0', 'allow_duplicates', '0'),\n        ('?allow_duplicates=f', 'allow_duplicates', 'f')\n    ])\n    async def test_add_filter_pipeline_bad_request_param_val(self, client, request_param, param, val):\n        user = \"bench\"\n        resp = await client.put('/fledge/filter/{}/pipeline{}'.format(user, request_param), data=json.dumps(\n            {\"pipeline\": [\"AssetFilter\"]}))\n        assert 404 == resp.status\n        assert \"Only 'true' and 'false' are allowed for {}. 
{} given.\".format(param, val) == resp.reason\n\n    async def test_add_filter_pipeline_value_error_1(self, client):\n        user = \"bench\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await self.async_mock(None)\n        msg = \"No such '{}' category found.\".format(user)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv) as get_cat_info_patch:\n                resp = await client.put('/fledge/filter/{}/pipeline'.format(user), data=json.dumps(\n                    {\"pipeline\": [\"AssetFilter\"]}))\n                assert 404 == resp.status\n                assert msg == resp.reason\n            get_cat_info_patch.assert_called_once_with(category_name=user)\n\n    async def test_add_filter_pipeline_value_error_2(self, client):\n        user = \"bench\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await self.async_mock(None)\n        msg = \"No such '{}' category found.\".format(user)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv) as get_cat_info_patch:\n                resp = await client.put('/fledge/filter/{}/pipeline'.format(user), data=json.dumps(\n                    {\"pipeline\": [\"AssetFilter\"]}))\n                assert 404 == resp.status\n                assert msg == resp.reason\n            get_cat_info_patch.assert_called_once_with(category_name=user)\n\n    async def test_add_filter_pipeline_value_error_3(self, client):\n        cat_info = {'filter': {'description': 'Filter pipeline', 'type': 'JSON', 'default': '{\"pipeline\": []}', 'value': '{\"pipeline\":[]}'},\n                    'plugin': {'description': 
'Benchmark C south plugin', 'type': 'string', 'default': 'Benchmark', 'value': 'Benchmark'},\n                    'asset': {'description': 'Asset name prefix', 'type': 'string', 'default': 'Random', 'value': 'Random'}}\n        user = \"bench\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await self.async_mock(cat_info)\n        _rv2 = await self.async_mock({'count': 1, 'rows': []})\n        msg = \"No such 'AssetFilter' filter found in filters table.\"\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv1) as get_cat_info_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv2) as query_tbl_patch:\n                    resp = await client.put('/fledge/filter/{}/pipeline'.format(user), data=json.dumps(\n                        {\"pipeline\": [\"AssetFilter\"]}))\n                    assert 404 == resp.status\n                    assert msg == resp.reason\n                query_tbl_patch.assert_called_once_with(\n                    'filters', '{\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"AssetFilter\"}}')\n            get_cat_info_patch.assert_called_once_with(category_name=user)\n\n    async def test_add_filter_pipeline_value_error_4(self, client):\n        async def query_result(*f_args):\n            table = f_args[0]\n            payload = f_args[1]\n            assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": filter_name}} == json.loads(payload)\n            if table == 'filters':\n                return {'count': 1, 'rows': [{'name': filter_name, 'plugin': 'python35'}]}\n            return {'count': 0, 'rows': []}\n\n        cat_info = {'filter': {'description': 'Filter pipeline', 'type': 'JSON', 'default': '{\"pipeline\": []}',\n                      
         'value': '{\"pipeline\":[]}'},\n                    'plugin': {'description': 'Benchmark C south plugin', 'type': 'string', 'default': 'Benchmark',\n                               'value': 'Benchmark'},\n                    'asset': {'description': 'Asset name prefix', 'type': 'string', 'default': 'Random',\n                              'value': 'Random'}}\n        user = \"bench\"\n        filter_name = \"AssetFilter\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await self.async_mock(cat_info)\n        _rv3 = await self.async_mock(None)\n        msg = 'No detail found for user: {} and filter: filter'.format(user)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items',\n                              return_value=_rv1) as get_cat_info_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=query_result):\n                    with patch.object(cf_mgr, 'set_category_item_value_entry',\n                                      return_value=_rv3) as set_cat_item_patch:\n                        with patch.object(filters, '_delete_child_filters',\n                                          return_value=_rv3) as _delete_child_patch:\n                            with patch.object(filters, '_add_child_filters',\n                                              return_value=_rv3) as _add_child_patch:\n                                with patch.object(cf_mgr, 'get_category_item', return_value=_rv3\n                                                  ) as get_cat_item_patch:\n                                    resp = await client.put('/fledge/filter/{}/pipeline'.format(user),\n                                                            data=json.dumps({\"pipeline\": [filter_name]}))\n                                    assert 404 == 
resp.status\n                                    assert msg == resp.reason\n                                    r = await resp.text()\n                                    json_response = json.loads(r)\n                                    assert {'message': msg} == json_response\n                                get_cat_item_patch.assert_called_once_with(user, 'filter')\n                            args, kwargs = _add_child_patch.call_args\n                            assert user == args[2]\n                            assert [filter_name] == args[3]\n                            assert {'old_list': []} == kwargs\n                        args, kwargs = _delete_child_patch.call_args\n                        assert user == args[2]\n                        assert [filter_name] == args[3]\n                        assert {'old_list': []} == kwargs\n                    set_cat_item_patch.assert_called_once_with(user, 'filter', {'pipeline': [filter_name]})\n            get_cat_info_patch.assert_called_once_with(category_name=user)\n\n    async def test_add_filter_pipeline_conflict_error(self, client):\n        async def query_result(*f_args):\n            table = f_args[0]\n            payload = f_args[1]\n            assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": filter_name}} == json.loads(payload)\n            if table == 'filters':\n                return {'count': 1, 'rows': [{'name': filter_name, 'plugin': 'python35'}]}\n            return {'count': 0, 'rows': [{'name': filter_name, 'user': 'S1'}]}\n\n        cat_info = {'filter': {'description': 'Filter pipeline', 'type': 'JSON', 'default': '{\"pipeline\": []}',\n                               'value': '{\"pipeline\":[]}'},\n                    'plugin': {'description': 'Benchmark C south plugin', 'type': 'string', 'default': 'Benchmark',\n                               'value': 'Benchmark'},\n                    'asset': {'description': 'Asset name prefix', 'type': 'string', 'default': 
'Random',\n                              'value': 'Random'}}\n        user = \"bench\"\n        filter_name = \"AssetFilter\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await self.async_mock(cat_info)\n        msg = (\"The filter '{}' is currently in use. To update the filter pipeline, \"\n               \"you must first remove it from the '{}' instance.\").format(filter_name, 'S1')\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items',\n                              return_value=_rv1) as get_cat_info_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=query_result):\n                    resp = await client.put('/fledge/filter/{}/pipeline'.format(user),\n                                            data=json.dumps({\"pipeline\": [filter_name]}))\n                    assert 409 == resp.status\n                    assert msg == resp.reason\n                    r = await resp.text()\n                    json_response = json.loads(r)\n                    assert {'message': msg} == json_response\n\n    async def test_add_filter_pipeline_storage_error(self, client):\n        cat_info = {'filter': {'description': 'Filter pipeline', 'type': 'JSON', 'default': '{\"pipeline\": []}', 'value': '{\"pipeline\":[]}'},\n                    'plugin': {'description': 'Benchmark C south plugin', 'type': 'string', 'default': 'Benchmark', 'value': 'Benchmark'},\n                    'asset': {'description': 'Asset name prefix', 'type': 'string', 'default': 'Random', 'value': 'Random'}}\n        user = \"bench\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await self.async_mock(cat_info)\n        with patch.object(connect, 'get_storage_async', 
return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv) as get_cat_info_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload',  side_effect=StorageServerError(None, None, error='something went wrong')):\n                    with patch.object(_LOGGER, 'error') as patch_logger:\n                        resp = await client.put('/fledge/filter/{}/pipeline'.format(user), data=json.dumps({\"pipeline\": [\"AssetFilter\"]}))\n                        assert 500 == resp.status\n                        assert \"something went wrong\" == resp.reason\n                    assert 1 == patch_logger.call_count\n            get_cat_info_patch.assert_called_once_with(category_name=user)\n\n    async def test_add_filter_pipeline(self, client):\n        async def query_result(*f_args):\n            table = f_args[0]\n            payload = f_args[1]\n            assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": filter_name}} == json.loads(payload)\n            if table == 'filters':\n                return {'count': 1, 'rows': [{'name': filter_name, 'plugin': 'python35'}]}\n            return {'count': 0, 'rows': []}\n\n        cat_info = {'filter': {\n            'description': 'Filter pipeline', 'type': 'JSON', 'default': '{\"pipeline\": []}',\n            'value': '{\"pipeline\":[]}'}, 'plugin': {\n            'description': 'Benchmark C south plugin', 'type': 'string', 'default': 'Benchmark', 'value': 'Benchmark'},\n            'asset': {'description': 'Asset name prefix', 'type': 'string', 'default': 'Random', 'value': 'Random'}}\n        update_filter_val = cat_info\n        update_filter_val['filter']['value'] = '{\"pipeline\": [\"AssetFilter\"]}'\n        user = \"bench\"\n        filter_name = \"AssetFilter\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        cat_child = 
{'children': ['Benchmark Filters', 'BenchmarkAdvanced', 'Benchmark_{}'.format(filter_name),\n                                  filter_name]}\n        _rv1 = await self.async_mock(cat_info)\n        _rv3 = await self.async_mock(None)\n        _rv4 = await self.async_mock(update_filter_val['filter'])\n        _rv5 = await self.async_mock(cat_child)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv1) as get_cat_info_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=query_result):\n                    with patch.object(filters, '_delete_child_filters', return_value=_rv3\n                                      ) as _delete_child_patch:\n                        with patch.object(filters, '_add_child_filters', return_value=_rv3\n                                          ) as _add_child_patch:\n                            with patch.object(cf_mgr, 'set_category_item_value_entry',\n                                              return_value=_rv3) as set_cat_item_patch:\n                                with patch.object(cf_mgr, 'get_category_item', return_value=_rv4\n                                                  ) as get_cat_item_patch:\n                                    with patch.object(cf_mgr, 'create_child_category',\n                                                      return_value=_rv5) as create_child_patch:\n                                        resp = await client.put('/fledge/filter/{}/pipeline'.format(user),\n                                                                data=json.dumps({\"pipeline\": [filter_name]}))\n                                        assert 200 == resp.status\n                                        r = await resp.text()\n                                        json_response = json.loads(r)\n                                        message = \"Filter pipeline 
{'pipeline': ['AssetFilter']} updated successfully\"\n                                        assert {'result': message} == json_response\n                                    create_child_patch.assert_called_once_with(user, [filter_name])\n                                get_cat_item_patch.assert_called_once_with(user, 'filter')\n                            set_cat_item_patch.assert_called_once_with(user, 'filter', {'pipeline': [filter_name]})\n                        args, kwargs = _add_child_patch.call_args\n                        assert user == args[2]\n                        assert [filter_name] == args[3]\n                        assert {'old_list': [filter_name]} == kwargs\n                    args, kwargs = _delete_child_patch.call_args\n                    assert user == args[2]\n                    assert [filter_name] == args[3]\n                    assert {'old_list': [filter_name]} == kwargs\n            get_cat_info_patch.assert_called_once_with(category_name=user)\n\n    async def test_add_filter_pipeline_without_filter_config(self, client):\n        async def query_result(*f_args):\n            table = f_args[0]\n            payload = f_args[1]\n            assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": filter_name}} == json.loads(payload)\n            if table == 'filters':\n                return {'count': 1, 'rows': [{'name': filter_name, 'plugin': 'python35'}]}\n            return {'count': 0, 'rows': [{'name': filter_name, 'user': user}]}\n\n        cat_info = {'plugin': {\n            'description': 'Benchmark C south plugin', 'type': 'string', 'default': 'Benchmark', 'value': 'Benchmark'},\n            'asset': {'description': 'Asset name prefix', 'type': 'string', 'default': 'Random', 'value': 'Random'}}\n        user = \"bench\"\n        filter_name = \"AssetFilter\"\n        new_item_val = {'filter': {'description': 'Filter pipeline', 'type': 'JSON',\n                                   'default': 
'{\"pipeline\": []}', 'value': '{\"pipeline\":[]}'}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        cat_child = {'children': ['Benchmark Filters', 'BenchmarkAdvanced', 'Benchmark_{}'.format(filter_name),\n                                  filter_name]}\n        _rv1 = await self.async_mock(cat_info)\n        _rv3 = await self.async_mock(None)\n        _rv4 = await self.async_mock(new_item_val['filter'])\n        _rv5 = await self.async_mock(cat_child)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv1) as get_cat_info_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=query_result):\n                    with patch.object(cf_mgr, 'create_category', return_value=_rv3) as create_cat_patch:\n                        with patch.object(filters, '_add_child_filters', return_value=_rv3) as _add_child_patch:\n                            with patch.object(cf_mgr, 'get_category_item', return_value=_rv4) as get_cat_item_patch:\n                                with patch.object(cf_mgr, 'create_child_category',\n                                                  return_value=_rv5) as create_child_patch:\n                                    resp = await client.put('/fledge/filter/{}/pipeline'.format(user),\n                                                            data=json.dumps({\"pipeline\": [filter_name]}))\n                                    assert 200 == resp.status\n                                    r = await resp.text()\n                                    json_response = json.loads(r)\n                                    message = \"Filter pipeline {'pipeline': []} updated successfully\"\n                                    assert {'result': message} == json_response\n                                
create_child_patch.assert_called_once_with(user, [filter_name])\n                            get_cat_item_patch.assert_called_once_with(user, 'filter')\n                        args, kwargs = _add_child_patch.call_args\n                        assert user == args[2]\n                        assert [filter_name] == args[3]\n                    create_cat_patch.assert_called_once_with(category_name='bench', category_value={\n                        'filter': {'description': 'Filter pipeline', 'readonly': 'true', 'type': 'JSON',\n                                   'default': f'{{\"pipeline\": [\"{filter_name}\"]}}'}}, keep_original_items=True)\n            get_cat_info_patch.assert_called_once_with(category_name=user)\n\n    async def test_get_filter_pipeline(self, client):\n        d = {'filter': {'description': 'Filter pipeline', 'type': 'JSON', 'default': '{\"pipeline\": [\"AssetFilter\"]}', 'value': '{\"pipeline\": [\"AssetFilter\"]}'},\n             'plugin': {'description': 'CoAP Listener South Plugin', 'type': 'string', 'default': 'coap', 'value': 'coap'},\n             'port': {'description': 'Port to listen on', 'type': 'integer', 'default': '5683', 'value': '5683'},\n             'uri': {'description': 'URI to accept data on', 'type': 'string', 'default': 'sensor-values', 'value': 'sensor-values'},\n             'management_host': {'description': 'Management host', 'type': 'string', 'default': '127.0.0.1', 'value': '127.0.0.1'}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        user = \"Coap\"\n        _rv = await self.async_mock(d)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv) as get_cat_info_patch:\n                resp = await client.get('/fledge/filter/{}/pipeline'.format(user))\n                assert 200 == resp.status\n                r = await 
resp.text()\n                json_response = json.loads(r)\n                assert {'result': {'pipeline': ['AssetFilter']}} == json_response\n            get_cat_info_patch.assert_called_once_with(category_name=user)\n\n    async def test_get_filter_pipeline_value_error(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        user = \"Blah\"\n        _rv = await self.async_mock(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv) as get_cat_info_patch:\n                resp = await client.get('/fledge/filter/{}/pipeline'.format(user))\n                assert 404 == resp.status\n                assert \"No such '{}' category found.\".format(user) == resp.reason\n            get_cat_info_patch.assert_called_once_with(category_name=user)\n\n    async def test_get_filter_pipeline_key_error(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        user = \"Blah\"\n        _rv = await self.async_mock({})\n        msg = \"No filter pipeline exists for {}.\".format(user)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', return_value=_rv) as get_cat_info_patch:\n                resp = await client.get('/fledge/filter/{}/pipeline'.format(user))\n                assert 404 == resp.status\n                assert msg == resp.reason\n            get_cat_info_patch.assert_called_once_with(category_name=user)\n\n    async def test_get_filter_pipeline_storage_error(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        user = \"Random\"\n        with patch.object(connect, 
'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', side_effect=StorageServerError(\n                    None, None, error='something went wrong')) as get_cat_info_patch:\n                with patch.object(_LOGGER, 'error') as patch_logger:\n                    resp = await client.get('/fledge/filter/{}/pipeline'.format(user))\n                    assert 500 == resp.status\n                    assert \"something went wrong\" == resp.reason\n                assert 1 == patch_logger.call_count\n                patch_logger.assert_called_once_with(\n                    'Failed to delete filter pipeline {}. Storage error occurred: {}'.format(\n                        user, 'something went wrong'), exc_info=True)\n            get_cat_info_patch.assert_called_once_with(category_name=user)\n\n    async def test_get_filter_pipeline_exception(self, client):\n        user = \"Random\"\n        storage_client_mock = MagicMock(StorageClientAsync)\n        cf_mgr = ConfigurationManager(storage_client_mock)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(cf_mgr, 'get_category_all_items', side_effect=Exception) as get_cat_info_patch:\n                with patch.object(_LOGGER, 'error') as patch_logger:\n                    resp = await client.get('/fledge/filter/{}/pipeline'.format(user))\n                    assert 500 == resp.status\n                    assert '' == resp.reason\n                assert 1 == patch_logger.call_count\n            get_cat_info_patch.assert_called_once_with(category_name=user)\n\n    @pytest.mark.skip(reason='Incomplete')\n    async def test_delete_filter_pipeline(self, client):\n        user = \"Random\"\n        resp = await client.delete('/fledge/filter/{}/pipeline'.format(user))\n        assert 500 == resp.status\n\n    async def test_delete_configuration_category(self, mocker):\n        # GIVEN\n        
mock_payload = {\n               'where': {\n                 'column': 'key',\n                 'condition': '=',\n                 'value': 'test'\n               }\n        }\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = MagicMock(ConfigurationManager)\n        _rv = await asyncio.sleep(.1)\n        mock_connect = mocker.patch.object(connect, 'get_storage_async', return_value=storage_client_mock)\n        delete_tbl_patch = mocker.patch.object(storage_client_mock, 'delete_from_tbl', return_value=_rv)\n        cache_manager = mocker.patch.object(c_mgr, '_cacheManager')\n        cache_remove = mocker.patch.object(cache_manager, 'remove', return_value=MagicMock())\n\n        # WHEN\n        await filters._delete_configuration_category(storage_client_mock, \"test\")\n\n        # THEN\n        args, kwargs = delete_tbl_patch.call_args\n        assert 'configuration' == args[0]\n        p = json.loads(args[1])\n        assert mock_payload == p\n        # TODO: cache_remove.assert_called_once_with(\"test\")\n\n    def test_diff(self):\n        in_list1 = ['a', 'b', 'c']\n        in_list2 = ['x', 'y', 'z']\n        out_list = ['x', 'y', 'z']\n        assert out_list == filters._diff(in_list1, in_list2)\n\n    def test_delete_keys_from_dict(self):\n        in_dict_del = {\n            \"assetName\": {\n                \"order\": \"1\",\n                \"description\": \"Name of Asset\",\n                \"type\": \"string\",\n                \"value\": \"sinusoid1\",\n                \"default\": \"sinusoid\",\n                \"displayName\": \"Asset name\"\n            },\n            \"plugin\": {\n                \"description\": \"Sinusoid Plugin\",\n                \"type\": \"string\",\n                \"readonly\": \"true\",\n                \"default\": \"sinusoid\",\n                \"value\": \"sinusoid1\"\n            },\n            \"filter\": {\n                \"description\": \"Filter pipeline\",\n                
\"type\": \"JSON\",\n                \"default\": \"{\\\"pipeline\\\": [\\\"S1\\\"]}\",\n                \"value\": \"{\\\"pipeline\\\": [\\\"S11\\\"]}\"\n            },\n            \"dataPointsPerSec\": {\n                \"order\": \"2\",\n                \"description\": \"Data points per second\",\n                \"type\": \"integer\",\n                \"value\": \"11\",\n                \"default\": \"1\",\n                \"displayName\": \"Data points per second\"\n            }\n        }\n        out_dict_del = {\n            \"assetName\": {\n                \"order\": \"1\",\n                \"description\": \"Name of Asset\",\n                \"type\": \"string\",\n                \"default\": \"sinusoid\",\n                \"displayName\": \"Asset name\"\n            },\n            \"plugin\": {\n                \"description\": \"Sinusoid Plugin\",\n                \"type\": \"string\",\n                \"readonly\": \"true\",\n                \"default\": \"sinusoid\",\n            },\n            \"filter\": {\n                \"description\": \"Filter pipeline\",\n                \"type\": \"JSON\",\n                \"default\": \"{\\\"pipeline\\\": [\\\"S1\\\"]}\",\n            },\n            \"dataPointsPerSec\": {\n                \"order\": \"2\",\n                \"description\": \"Data points per second\",\n                \"type\": \"integer\",\n                \"default\": \"1\",\n                \"displayName\": \"Data points per second\"\n            }\n        }\n        lst_keys = ['value']\n        deleted_values = {\n            'plugin': 'sinusoid1',\n            'assetName': 'sinusoid1',\n            'filter': {\n                'pipeline': ['S11']\n            },\n            'dataPointsPerSec': '11'\n        }\n        a, b = filters._delete_keys_from_dict(in_dict_del, lst_keys, deleted_values={}, parent=None)\n        assert out_dict_del, deleted_values == (a, b)\n\n    @pytest.mark.parametrize(\"list1, list2, diff\", [\n     
   (['Rename'], ['Exp #1', 'Rename'], ['Exp #1']),\n        (['Meta_1'], ['Meta_1'], []),\n        (['Meta_1', 'Exp#1'], ['Meta_1'], []),\n        (['Exp'], ['Exp #1', 'Rename'], ['Exp #1', 'Rename']),\n        (['Exp', 'Exp #1', 'Rename'], ['Exp #1', 'Rename'], []),\n        ([['Rename #1'], 'Scale', 'Meta Data'], ['Asset'], ['Asset']),\n        ([['RE2'], 'RE3', 'PY35'], [['RE2', 'RE3'], 'PY35'], []),\n        ([['Py 35', 'Py#1 35'], 'Py35'], [['Py25', 'Py']], ['Py25', 'Py']),\n        ([['Rms #1'], 'Scale', 'Meta Data'], [['Scale'], 'Meta Data'], [])\n    ])\n    async def test__diff(self, list1, list2, diff):\n        assert diff == filters._diff(list1, list2)\n\n\n    async def test_delete_child_filters(self, mocker):\n        # GIVEN\n        user_name_mock = 'random1'\n        new_list_mock = ['scale2', 'python35b', 'meta2']\n        old_list_mock = ['scale1', 'python35a', 'meta1']\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr_mock = ConfigurationManager(storage_client_mock)\n        _rv = await self.async_mock(None)\n        _rv2 = await self.async_mock({'count': 0, 'rows': []})\n        mocker.patch.object(connect, 'get_storage_async', return_value=storage_client_mock)\n        delete_child_category_mock = mocker.patch.object(c_mgr_mock, 'delete_child_category',\n                                                         return_value=(await self.async_mock(None)))\n        delete_tbl_patch = mocker.patch.object(storage_client_mock, 'delete_from_tbl', return_value=_rv)\n        delete_configuration_category_mock = mocker.patch.object(filters, '_delete_configuration_category',\n                                                                 return_value=_rv)\n\n        get_filters_mock = mocker.patch.object(storage_client_mock, 'query_tbl_with_payload',\n                                               return_value=_rv2)\n\n        # WHEN\n        await filters._delete_child_filters(storage_client_mock, c_mgr_mock, 
user_name_mock, new_list_mock,\n                                            old_list_mock)\n\n        # THEN\n        calls = get_filters_mock.call_args_list\n        args, kwargs = calls[0]\n        assert 'filters' == args[0]\n        p = json.loads(args[1])\n        assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"scale1\"}} == p\n\n        calls = delete_configuration_category_mock.call_args_list\n        args, kwargs = calls[0]\n        assert 'random1_scale1' == args[1]\n\n        calls = delete_tbl_patch.call_args_list\n        args, kwargs = calls[0]\n        assert 'filter_users' == args[0]\n        p = json.loads(args[1])\n        assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"scale1\",\n                          \"and\": {\"column\": \"user\", \"condition\": \"=\", \"value\": \"random1\"}}} == p\n\n        args, kwargs = calls[1]\n        assert 'filter_users' == args[0]\n        p = json.loads(args[1])\n        assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"python35a\",\n                          \"and\": {\"column\": \"user\", \"condition\": \"=\", \"value\": \"random1\"}}} == p\n\n        args, kwargs = calls[2]\n        assert 'filter_users' == args[0]\n        p = json.loads(args[1])\n        assert {\"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"meta1\",\n                          \"and\": {\"column\": \"user\", \"condition\": \"=\", \"value\": \"random1\"}}} == p\n\n        calls_child = [call('random1', 'random1_scale1'),\n                       call('random1', 'random1_python35a'),\n                       call('random1', 'random1_meta1')]\n        delete_child_category_mock.assert_has_calls(calls_child, any_order=True)\n\n    async def test_add_child_filters(self, mocker):\n        # GIVEN\n        user_name_mock = 'random1'\n        new_list_mock = ['scale1', 'meta2']\n        old_list_mock = ['scale1', 'python35a']\n        mock_cat = 
{\n            \"assetName\": {\n                \"order\": \"1\",\n                \"description\": \"Name of Asset\",\n                \"type\": \"string\",\n                \"value\": \"test1\",\n                \"default\": \"test\",\n                \"displayName\": \"Asset name\"\n            },\n        }\n\n        async def get_cat(category_name):\n            category = category_name\n            if category == \"random1_scale1\":\n                return mock_cat\n            if category == 'random1_meta2':\n                return None\n            if category == 'meta2':\n                return mock_cat\n\n        async def mock():\n            return {}\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr_mock = MagicMock(ConfigurationManager)\n        _rv = await asyncio.sleep(.1)\n        _rv2 = await mock()\n        connect_mock = mocker.patch.object(connect, 'get_storage_async', return_value=storage_client_mock)\n        get_category_mock = mocker.patch.object(c_mgr_mock, 'get_category_all_items', side_effect=get_cat)\n        create_category_mock = mocker.patch.object(c_mgr_mock, 'create_category', return_value=_rv2)\n        create_child_category_mock = mocker.patch.object(c_mgr_mock, 'create_child_category', return_value=_rv2)\n        update_config_bulk_mock = mocker.patch.object(c_mgr_mock, 'update_configuration_item_bulk', return_value=_rv)\n        cache_manager = mocker.patch.object(c_mgr_mock, '_cacheManager')\n        cache_remove = mocker.patch.object(cache_manager, 'remove', return_value=MagicMock())\n        insert_tbl_patch = mocker.patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv)\n\n        # WHEN\n        await filters._add_child_filters(storage_client_mock, c_mgr_mock, user_name_mock, new_list_mock, old_list_mock)\n\n        # THEN\n        calls_get_cat = [call(category_name='random1_scale1'),\n                         call(category_name='random1_meta2'),\n                         
call(category_name='meta2')]\n        get_category_mock.assert_has_calls(calls_get_cat, any_order=True)\n\n        calls_create_cat = [\n            call(category_description='Configuration of meta2 filter for user random1', category_name='random1_meta2',\n                 category_value={\n                     'assetName': {'description': 'Name of Asset', 'type': 'string', 'order': '1', 'default': 'test',\n                                   'displayName': 'Asset name'}}, keep_original_items=True)]\n        create_category_mock.assert_has_calls(calls_create_cat, any_order=True)\n\n        calls_create_child = [call(category_name='random1', children=['random1_scale1', 'random1_meta2'])]\n        create_child_category_mock.assert_has_calls(calls_create_child, any_order=True)\n\n        calls_update = [call('random1_meta2', {'assetName': 'test1'})]\n        update_config_bulk_mock.assert_has_calls(calls_update, any_order=True)\n\n        args, kwargs = insert_tbl_patch.call_args\n        assert 'filter_users' == args[0]\n        p = json.loads(args[1])\n        assert {\"user\": \"random1\", \"name\": \"meta2\"} == p\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_notification.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\nimport asyncio\nimport uuid\n\nimport aiohttp.web_exceptions\nimport pytest\nimport json\nfrom aiohttp import web\nfrom unittest.mock import call, patch\n\nfrom fledge.services.core import routes\nfrom fledge.services.core import connect\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core.service_registry import exceptions as service_registry_exceptions\nfrom fledge.services.core.api import notification\nfrom fledge.common.audit_logger import AuditLogger\n\n__author__ = \"Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 Dianomic Systems\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\nmock_registry = [ServiceRecord(uuid.uuid4(), \"Notifications\", \"Notification\", \"http\", \"localhost\", \"8118\", \"8118\")]\nrule_config = [\n        {\n            \"name\": \"threshold\",\n            \"version\": \"1.0.0\",\n            \"type\": \"notificationRule\",\n            \"interface\": \"1.0\",\n            \"config\": {\n                \"plugin\": {\n                    \"description\" : \"The accepted tolerance\",\n                    \"default\"     : \"threshold\",\n                    \"type\"        : \"string\"\n                },\n                \"builtin\": {\n                    \"description\" : \"Is this a builtin plugin?\",\n                    \"default\"     : \"false\",\n                    \"type\"        : \"boolean\"\n                },\n                \"tolerance\": {\n                    \"description\": \"The accepted tolerance\",\n                    \"default\": \"4\",\n                    \"type\": \"integer\"\n                },\n                \"window\": {\n                    \"description\" : \"The window to 
perform rule evaluation over in minutes\",\n                    \"default\"     : \"60\",\n                    \"type\"        : \"integer\"\n                },\n                \"trigger\": {\n                    \"description\" : \"Temparature threshold value\",\n                    \"default\"     : \"40\",\n                    \"type\"        : \"integer\"\n                },\n                \"asset\": {\n                    \"description\" : \"The asset the notification is defined against\",\n                    \"default\"     : \"temperature\",\n                    \"type\"        : \"string\"\n                },\n            },\n        },\n        {\n            \"name\": \"ruleNotificationTwo\",\n            \"version\": \"1.0.0\",\n            \"type\": \"notificationRule\",\n            \"interface\": \"1.0\",\n            \"config\": {\n                \"plugin\": {\n                    \"description\": \"Builtin\",\n                    \"default\": \"Builtin\",\n                    \"type\": \"string\"\n                },\n                \"builtin\": {\n                    \"description\": \"Is this a builtin plugin?\",\n                    \"default\": \"true\",\n                    \"type\": \"boolean\"\n                },\n            },\n        },\n        {\n            \"name\": \"rulePlantAPump\",\n            \"version\": \"1.0.0\",\n            \"type\": \"notificationRule\",\n            \"interface\": \"1.0\",\n            \"config\": {\n                \"plugin\": {\n                    \"description\": \"Builtin\",\n                    \"default\": \"Builtin\",\n                    \"type\": \"string\"\n                },\n                \"builtin\": {\n                    \"description\": \"Is this a builtin plugin?\",\n                    \"default\": \"true\",\n                    \"type\": \"boolean\"\n                },\n            },\n        },\n]\n\ndelivery_config = [{\n                \"name\": \"email\",\n                
\"version\": \"1.0.0\",\n                \"type\": \"notificationDelivery\",\n                \"interface\": \"1.0\",\n                \"config\": {\n                    \"plugin\": {\n                        \"description\": \"Email\",\n                        \"default\": \"email\",\n                        \"type\": \"string\"\n                    },\n                    \"server\": {\n                        \"description\" : \"The smtp server\",\n                        \"default\"     : \"smtp\",\n                        \"type\"        : \"string\"\n                        },\n                    \"from\" : {\n                        \"description\" : \"The from address to use in the email\",\n                        \"default\"     : \"fledge\",\n                        \"type\"        : \"string\"\n                    },\n                    \"to\" : {\n                        \"description\" : \"The address to send the notification to\",\n                        \"default\"     : \"test\",\n                        \"type\"        : \"string\"\n                    },\n                },\n            },\n            {\n                \"name\": \"sms\",\n                \"version\": \"1.0.0\",\n                \"type\": \"notificationDelivery\",\n                \"interface\": \"1.0\",\n                \"config\": {\n                    \"plugin\": {\n                        \"description\": \"SMS\",\n                        \"default\": \"sms\",\n                        \"type\": \"string\"\n                    },\n                    \"number\": {\n                        \"description\": \"The phone number to call\",\n                        \"type\": \"string\",\n                        \"default\": \"01111 222333\"\n                    },\n                },\n            },\n            {\n                \"name\": \"deliveryPlantAPump\",\n                \"version\": \"1.0.0\",\n                \"type\": \"notificationDelivery\",\n                
\"interface\": \"1.0\",\n                \"config\": {\n                    \"plugin\": {\n                        \"description\": \"SMS\",\n                        \"default\": \"sms\",\n                        \"type\": \"string\"\n                    },\n                    \"number\": {\n                        \"description\": \"The phone number to call\",\n                        \"type\": \"string\",\n                        \"default\": \"01234 567890\"\n                    },\n                },\n            },\n]\n\nNOTIFICATION_TYPE = notification.NOTIFICATION_TYPE\nnotification_config = {\n    \"name\": {\n        \"description\": \"The name of this notification\",\n        \"type\": \"string\",\n        \"default\": \"Test Notification\",\n        \"value\": \"Test Notification\",\n    },\n    \"description\": {\n        \"description\": \"Description of this notification\",\n        \"type\": \"string\",\n        \"default\": \"description\",\n        \"value\": \"description\",\n    },\n    \"rule\": {\n        \"description\": \"Rule to evaluate\",\n        \"type\": \"string\",\n        \"default\": \"threshold\",\n        \"value\": \"threshold\",\n    },\n    \"channel\": {\n        \"description\": \"Channel to send alert on\",\n        \"type\": \"string\",\n        \"default\": \"email\",\n        \"value\": \"email\",\n    },\n    \"notification_type\": {\n        \"description\": \"Type of notification\",\n        \"type\": \"enumeration\",\n        \"options\": NOTIFICATION_TYPE,\n        \"default\": \"one shot\",\n        \"value\": \"one shot\",\n    },\n    \"enable\": {\n        \"description\": \"Enabled\",\n        \"type\": \"boolean\",\n        \"default\": \"true\",\n        \"value\": \"true\",\n    },\n    \"retrigger_time\": {\n        \"description\": \"Retrigger time in seconds for sending a new notification.\",\n        \"type\": \"integer\",\n        \"default\": \"60\",\n        \"value\": \"60\",\n    
}\n}\n\ndelivery_channel_config = {\n    \"action\": {\n      \"description\": \"Perform a control action to turn pump\",\n      \"type\": \"boolean\",\n      \"default\": \"false\"\n    },\n    \"plugin\": {\n      \"description\": \"Telegram notification plugin\",\n      \"type\": \"string\",\n      \"readonly\": \"true\",\n      \"default\": \"Telegram\"\n    }\n}\n\n\nasync def mock_get_url(get_url):\n    if get_url.endswith(\"/notification/rules\"):\n        return json.dumps(rule_config)\n    if get_url.endswith(\"/notification/delivery\"):\n        return json.dumps(delivery_config)\n    if get_url.endswith(\"/plugin\"):\n        return json.dumps({'rules': rule_config, 'delivery': delivery_config})\n    if get_url.endswith(\"/notification\"):\n        return json.dumps(notification_config)\n    if get_url.endswith(\"/fledge/notification/Test Notification\"):\n        r = list(filter(lambda rules: rules['name'] == notification_config['rule']['value'], rule_config))\n        c = list(filter(lambda channels: channels['name'] == notification_config['channel']['value'], delivery_config))\n        if len(r) == 0 or len(c) == 0: raise KeyError\n        rule_plugin_config = r[0]['config']\n        delivery_plugin_config = c[0]['config']\n\n        notif = {\n            \"name\": notification_config['name']['value'],\n            \"description\": notification_config['description']['value'],\n            \"rule\": notification_config['rule']['value'],\n            \"ruleConfig\": rule_plugin_config,\n            \"channel\": notification_config['channel']['value'],\n            \"deliveryConfig\": delivery_plugin_config,\n            \"notificationType\": notification_config['notification_type']['value'],\n            \"retriggerTime\": notification_config['retrigger_time']['value'],\n            \"enable\": notification_config['enable']['value'],\n        }\n        return json.dumps(notif)\n\n\nasync def mock_post_url(post_url):\n    if 
post_url.endswith(\"/notification/Test Notification\"):\n        return json.dumps({\"result\": \"OK\"})\n    if post_url.endswith(\"/notification/Test Notification/rule/threshold\"):\n        return json.dumps({\"result\": \"OK\"})\n    if post_url.endswith(\"/notification/Test Notification/delivery/email\"):\n        return json.dumps({\"result\": \"OK\"})\n\n\nasync def mock_delete_url(delete_url):\n    return json.dumps({\"result\": \"OK\"})\n\n\nasync def mock_read_category_val(key):\n    r = list(filter(lambda rules: rules['name'] == notification_config['rule']['value'], rule_config))\n    c = list(filter(lambda channels: channels['name'] == notification_config['channel']['value'], delivery_config))\n    if len(r) == 0 or len(c) == 0: raise KeyError\n    rule_plugin_config = r[0]['config']\n    delivery_plugin_config = c[0]['config']\n    if key.endswith(\"ruleTest Notification\"):\n        return rule_plugin_config\n    if key.endswith(\"deliveryTest Notification\"):\n        return delivery_plugin_config\n    if key.endswith(\"Test Notification\"):\n        return notification_config\n    if key.endswith(\"overspeed\"):\n        return delivery_channel_config\n    if key.endswith(\"foo\"):\n        return {}\n    if key.endswith(\"bar\"):\n        return []\n    if key.endswith(\"coolant\"):\n        return [\"coolant\"]\n\n\nasync def mock_read_all_child_category_names():\n    return [{\n        \"parent\": \"Notifications\",\n        \"child\": \"Test Notification\",\n    }]\n\n\nasync def mock_create_category():\n    return \"\"\n\n\nasync def mock_update_category():\n    return \"\"\n\n\nasync def mock_create_child_category():\n    return \"\"\n\n\nasync def mock_check_category(val=None):\n    return val\n\n\nclass TestNotification:\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    
@pytest.mark.parametrize(\"side_effect_exception, expected_http_exception, expected_message, expected_log_calls\", [\n        (ValueError, aiohttp.web_exceptions.HTTPNotFound, \"\", 0),\n        (notification.PluginFetchError, aiohttp.web_exceptions.HTTPInternalServerError,\n         \"Failed to fetch notification plugins.\", 1),\n        (Exception, aiohttp.web_exceptions.HTTPInternalServerError, \"\", 1)\n    ])\n    async def test_bad_get_plugin(self, mocker, side_effect_exception, expected_http_exception, expected_message,\n                                  expected_log_calls):\n        with patch.object(notification, 'fetch_plugins', side_effect=side_effect_exception):\n            with patch.object(notification._logger, 'error') as patch_logger:\n                with pytest.raises(Exception) as exc_info:\n                    await notification.get_plugin(mocker)\n                assert isinstance(exc_info.value, expected_http_exception)\n                assert expected_message == str(exc_info.value)\n            assert expected_log_calls == patch_logger.call_count\n            if expected_log_calls > 0:\n                args, kwargs = patch_logger.call_args\n                assert 'Failed to get notification plugin list.' 
== args[1]\n\n    async def test_get_plugin(self, mocker, client):\n        rules_and_delivery = {'rules': rule_config, 'delivery': delivery_config}\n        _se1 = await mock_get_url(\"/notification/rules\")\n        _se2 = await mock_get_url(\"/notification/delivery\")\n        mocker.patch.object(ServiceRegistry, 'get', return_value=mock_registry)\n        mocker.patch.object(notification, '_hit_get_url', side_effect=[_se1, _se2])\n\n        resp = await client.get('/fledge/notification/plugin')\n        assert 200 == resp.status\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert rules_and_delivery == json_response\n\n    async def test_get_type(self, client):\n        notification_type = {'notification_type': NOTIFICATION_TYPE}\n        resp = await client.get('/fledge/notification/type')\n        assert 200 == resp.status\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert notification_type == json_response\n\n    async def test_get_notification(self, mocker, client):\n        async def mock_get_channel_type():\n            return []\n        r = list(filter(lambda rules: rules['name'] == notification_config['rule']['value'], rule_config))\n        c = list(filter(lambda channels: channels['name'] == notification_config['channel']['value'], delivery_config))\n        if len(r) == 0 or len(c) == 0: raise KeyError\n        rule_plugin_config = r[0]['config']\n        delivery_plugin_config = c[0]['config']\n        notif = {\n            \"name\": notification_config['name']['value'],\n            \"description\": notification_config['description']['value'],\n            \"rule\": notification_config['rule']['value'],\n            \"ruleConfig\": rule_plugin_config,\n            \"channel\": notification_config['channel']['value'],\n            \"deliveryConfig\": delivery_plugin_config,\n            \"notificationType\": notification_config['notification_type']['value'],\n  
          \"retriggerTime\": notification_config['retrigger_time']['value'],\n            \"enable\": notification_config['enable']['value'],\n        }\n        _se1 = await mock_read_category_val(\"Test Notification\")\n        _se2 = await mock_read_category_val(\"ruleTest Notification\")\n        _se3 = await mock_read_category_val(\"deliveryTest Notification\")\n        mocker.patch.object(notification, '_get_channels_type', return_value=await mock_get_channel_type())\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', side_effect=[_se1, _se2, _se3])\n\n        resp = await client.get('/fledge/notification/Test Notification')\n        assert 200 == resp.status\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert notif == json_response[\"notification\"]\n\n    async def test_get_notifications(self, mocker, client):\n        async def mock_get_channel_type():\n            return []\n        notifications = [{\n            \"name\": notification_config['name']['value'],\n            \"rule\": notification_config['rule']['value'],\n            \"channel\": notification_config['channel']['value'],\n            \"notificationType\": notification_config['notification_type']['value'],\n            \"retriggerTime\": notification_config['retrigger_time']['value'],\n            \"enable\": notification_config['enable']['value'],\n        }]\n        _rv1 = await mock_read_all_child_category_names()\n        _rv2 = await mock_read_category_val(\"Test Notification\")\n        mocker.patch.object(notification, '_get_channels_type', return_value=await mock_get_channel_type())\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(ConfigurationManager, 
'_read_all_child_category_names',\n                            return_value=_rv1)\n        mocker.patch.object(ConfigurationManager, '_read_category_val',\n                            return_value=_rv2)\n\n        resp = await client.get('/fledge/notification')\n        assert 200 == resp.status\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert notifications == json_response[\"notifications\"]\n\n    async def test_post_notification(self, mocker, client):\n        _rv1 = await mock_get_url(\"/fledge/notification/plugin\")\n        _rv2 = await mock_create_category()\n        _rv3 = await mock_read_category_val(\"\")\n        _rv4 = await mock_check_category()\n        _rv5 = await asyncio.sleep(.1)\n        _se1 = await mock_post_url(\"/notification/Test Notification\")\n        _se2 = await mock_post_url(\"/notification/Test Notification/rule/threshold\")\n        _se3 = await mock_post_url(\"/notification/Test Notification/delivery/email\")\n        mocker.patch.object(ServiceRegistry, 'get', return_value=mock_registry)\n        mocker.patch.object(notification, '_hit_get_url', return_value=_rv1)\n        mocker.patch.object(notification, '_hit_post_url',\n                            side_effect=[_se1, _se2, _se3])\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        update_configuration_item_bulk = mocker.patch.object(ConfigurationManager, 'update_configuration_item_bulk',\n                                              return_value=_rv2)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', return_value=_rv3)\n        mocker.patch.object(ConfigurationManager, 'get_category_all_items', return_value=_rv4)\n        mocker.patch.object(AuditLogger, \"__init__\", return_value=None)\n        audit_logger = mocker.patch.object(AuditLogger, \"information\", return_value=_rv5)\n        mock_payload = '{\"name\": 
\"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", ' \\\n                       '\"channel\": \"email\", \"notification_type\": \"one shot\", \"enabled\": false}'\n\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 200 == resp.status\n        result = await resp.json()\n        assert result['result'].endswith(\"Notification {} created successfully\".format(\"Test Notification\"))\n        update_configuration_item_bulk_calls = [call('Test Notification', {'enable': 'false', 'rule': 'threshold', 'description': 'Test Notification', 'channel': 'email', 'notification_type': 'one shot'})]\n        update_configuration_item_bulk.assert_has_calls(update_configuration_item_bulk_calls, any_order=True)\n\n    async def test_post_notification_duplicate_name(self, mocker, client):\n        _rv = await mock_check_category(True)\n        mocker.patch.object(ServiceRegistry, 'get', return_value=mock_registry)\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(ConfigurationManager, 'get_category_all_items', return_value=_rv)\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", ' \\\n                       '\"channel\": \"email\", \"notification_type\": \"one shot\", \"enabled\": false}'\n\n        # Check duplicate name\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert \"400: A Category with name Test Notification already exists.\" == result\n\n    async def test_post_notification2(self, mocker, client):\n        _rv1 = await mock_get_url(\"/fledge/notification/plugin\")\n        _rv2 = await mock_create_category()\n        _rv3 = await mock_read_category_val(\"\")\n        _rv4 = await 
mock_check_category()\n        _rv5 = await asyncio.sleep(.1)\n        _se1 = await mock_post_url(\"/notification/Test Notification\")\n        _se2 = await mock_post_url(\"/notification/Test Notification/rule/threshold\")\n        _se3 = await mock_post_url(\"/notification/Test Notification/delivery/email\")\n        mocker.patch.object(ServiceRegistry, 'get', return_value=mock_registry)\n        mocker.patch.object(notification, '_hit_get_url', return_value=_rv1)\n        mocker.patch.object(notification, '_hit_post_url',\n                            side_effect=[_se1, _se2, _se3])\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(AuditLogger, \"__init__\", return_value=None)\n        audit_logger = mocker.patch.object(AuditLogger, \"information\", return_value=_rv5)\n        update_configuration_item_bulk = mocker.patch.object(ConfigurationManager, 'update_configuration_item_bulk',\n                                              return_value=_rv2)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', return_value=_rv3)\n        mocker.patch.object(ConfigurationManager, 'get_category_all_items', return_value=_rv4)\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", ' \\\n                       '\"channel\": \"email\", \"notification_type\": \"one shot\", \"enabled\": false, \"rule_config\":{\"window\": \"100\"}, \"delivery_config\": {\"server\": \"pop\"}}'\n\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 200 == resp.status\n        result = await resp.json()\n        assert result['result'].endswith(\"Notification {} created successfully\".format(\"Test Notification\"))\n        update_configuration_item_bulk_calls = [call('Test Notification', {'description': 'Test Notification', 'rule': 'threshold', 'channel': 
'email',\n                                        'notification_type': 'one shot', 'enable': 'false'}),\n             call('ruleTest Notification', {'window': '100'}),\n             call('deliveryTest Notification', {'server': 'pop'})]\n        update_configuration_item_bulk.assert_has_calls(update_configuration_item_bulk_calls, any_order=True)\n\n    async def test_post_notification_exception(self, mocker, client):\n        _rv1 = await mock_get_url(\"/fledge/notification/plugin\")\n        _rv2 = await mock_create_category()\n        _rv3 = await mock_read_category_val(\"\")\n        _rv4 = await mock_check_category()\n        _rv5 = await asyncio.sleep(.1)\n        _rv6 = await mock_create_child_category()\n        _se1 = await mock_post_url(\"/notification/Test Notification\")\n        _se2 = await mock_post_url(\"/notification/Test Notification/rule/threshold\")\n        _se3 = await mock_post_url(\"/notification/Test Notification/delivery/email\")\n        mocker.patch.object(ServiceRegistry, 'get', return_value=mock_registry)\n        mocker.patch.object(notification, '_hit_get_url', return_value=_rv1)\n        mocker.patch.object(notification, '_hit_post_url',\n                            side_effect=[_se1, _se2, _se3])\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        create_category = mocker.patch.object(ConfigurationManager, 'create_category',\n                                              return_value=_rv2)\n        create_child_category = mocker.patch.object(ConfigurationManager, 'create_child_category',\n                                                    return_value=_rv6)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', return_value=_rv3)\n        mocker.patch.object(ConfigurationManager, 'get_category_all_items', return_value=_rv4)\n\n        mocker.patch.object(AuditLogger, \"__init__\", return_value=None)\n        
audit_logger = mocker.patch.object(AuditLogger, \"information\", return_value=_rv5)\n        mock_payload = '{\"description\":\"Test Notification\", \"rule\": \"threshold\", \"channel\": \"email\", ' \\\n                       '\"notification_type\": \"one shot\", \"enabled\": false}'\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Missing name property in payload.')\n\n        mock_payload = '{\"name\": \"\", \"description\":\"Test Notification\", \"rule\": \"threshold\", \"channel\": \"email\", ' \\\n                       '\"notification_type\": \"one shot\", \"enabled\": false}'\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Missing name property in payload.')\n\n        mock_payload = '{\"name\": \"Test Notification\", \"rule\": \"threshold\", \"channel\": \"email\", ' \\\n                       '\"notification_type\": \"one shot\", \"enabled\": false}'\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Missing description property in payload.')\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"channel\": \"email\", ' \\\n                       '\"notification_type\": \"one shot\", \"enabled\": false}'\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Missing rule property in payload.')\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", ' \\\n                       '\"notification_type\": \"one shot\", 
\"enabled\": false}'\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Missing channel property in payload.')\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", ' \\\n                       '\"channel\": \"email\", \"enabled\": false}'\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Missing notification_type property in payload.')\n\n        mock_payload = '{\"name\": \";\", \"description\":\"Test Notification\", \"rule\": \"threshold\", \"channel\": \"email\", ' \\\n                       '\"notification_type\": \"one shot\", \"enabled\": false}'\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Invalid name property in payload.')\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \";\", ' \\\n                       '\"channel\": \"email\", \"notification_type\": \"one shot\", \"enabled\": false}'\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Invalid rule property in payload.')\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", ' \\\n                       '\"channel\": \";\", \"notification_type\": \"one shot\", \"enabled\": false}'\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Invalid channel 
property in payload.')\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", ' \\\n                       '\"channel\": \"email\", \"notification_type\": \";\", \"enabled\": false}'\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Invalid notification_type property in payload.')\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", ' \\\n                       '\"channel\": \"email\", \"notification_type\": \"one shot\", \"enabled\": fals}'\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Expecting value: line 1 column 151 (char 150)')\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshol\", ' \\\n                       '\"channel\": \"emai\", \"notification_type\": \"one shot\", \"enabled\": false}'\n        resp = await client.post(\"/fledge/notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith(\n            \"Invalid rule plugin {} and/or delivery plugin {} supplied.\".format(\"threshol\", \"emai\"))\n\n    async def test_post_notification_plugin_fetch_error(self, mocker, client):\n        expected_message = \"Failed to fetch notification plugins.\"\n        _rv = await mock_check_category()\n        mocker.patch.object(ServiceRegistry, 'get', return_value=mock_registry)\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(ConfigurationManager, 'get_category_all_items', return_value=_rv)\n        
with patch.object(notification, 'fetch_plugins', side_effect=notification.PluginFetchError):\n            with patch.object(notification._logger, 'error') as patch_logger:\n                mock_payload = ('{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", '\n                                '\"channel\": \"email\", \"notification_type\": \"one shot\", \"enabled\": false}')\n                resp = await client.post(\"/fledge/notification\", data=mock_payload)\n                assert 500 == resp.status\n                assert expected_message == resp.reason\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert {\"message\": expected_message} == json_response\n            assert 1 == patch_logger.call_count\n            args, kwargs = patch_logger.call_args\n            assert 'Failed to create notification instance.' == args[1]\n\n    async def test_put_notification(self, mocker, client):\n        _rv1 = await mock_get_url(\"/fledge/notification/plugin\")\n        _rv2 = await mock_create_category()\n        _rv3 = await mock_read_category_val(\"Test Notification\")\n        mocker.patch.object(ServiceRegistry, 'get', return_value=mock_registry)\n        mocker.patch.object(notification, '_hit_get_url', return_value=_rv1)\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        update_configuration_item_bulk = mocker.patch.object(ConfigurationManager, 'update_configuration_item_bulk',\n                                              return_value=_rv2)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', return_value=_rv3)\n        create_category = mocker.patch.object(ConfigurationManager, 'create_category',\n                                              return_value=_rv2)\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test 
Notification\", \"rule\": \"threshold\", ' \\\n                       '\"channel\": \"sms\", \"notification_type\": \"one shot\", \"enabled\": false, ' \\\n                       '\"rule_config\": {\"asset\": \"temperature\", \"trigger\": \"70\", \"window\": \"60\", \"tolerance\": \"4\"}, ' \\\n                       '\"delivery_config\": {\"number\": \"07812 343830\"} }'\n\n        resp = await client.put(\"/fledge/notification/Test Notification\", data=mock_payload)\n        assert 200 == resp.status\n        result = await resp.json()\n        assert result['result'].endswith(\"Notification {} updated successfully\".format(\"Test Notification\"))\n        update_configuration_item_bulk_calls = [call('Test Notification', {'description': 'Test Notification', 'notification_type': 'one shot', 'enable': 'false', 'rule': 'threshold', 'channel': 'sms'}),\n                                                call('ruleTest Notification', {'asset': 'temperature', 'trigger': '70', 'tolerance': '4', 'window': '60'}),\n                                                call('deliveryTest Notification', {'number': '07812 343830'})]\n        update_configuration_item_bulk.assert_has_calls(update_configuration_item_bulk_calls, any_order=True)\n\n    async def test_put_notification_exception(self, mocker, client):\n        _rv1 = await mock_get_url(\"/fledge/notification/plugin\")\n        _rv2 = await mock_create_category()\n        _rv3 = await mock_read_category_val(\"Test Notification\")\n        mocker.patch.object(ServiceRegistry, 'get', return_value=mock_registry)\n        mocker.patch.object(notification, '_hit_get_url', return_value=_rv1)\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        update_configuration_item_bulk = mocker.patch.object(ConfigurationManager, 'update_configuration_item_bulk',\n                                              return_value=_rv2)\n        
create_category = mocker.patch.object(ConfigurationManager, 'create_category',\n                                              return_value=_rv2)\n        mocker.patch.object(ConfigurationManager, '_read_category_val',\n                            return_value=_rv3)\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", ' \\\n                       '\"channel\": \"sms\", \"notification_type\": \"one shot\", \"enabled\": false, ' \\\n                       '\"rule_config\": {\"asset\": \"temperature\", \"trigger\": \"70\", \"window\": \"60\", \"tolerance\": \"4\"}, ' \\\n                       '\"delivery_config\": {\"number\": \"07812 343830\"} }'\n        resp = await client.put(\"/fledge/notification\", data=mock_payload)\n        assert 405 == resp.status\n        result = await resp.text()\n        assert result.endswith(\" Method Not Allowed\")\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \";\", ' \\\n                       '\"channel\": \"sms\", \"notification_type\": \"one shot\", \"enabled\": false, ' \\\n                       '\"rule_config\": {\"asset\": \"temperature\", \"trigger\": \"70\", \"window\": \"60\", \"tolerance\": \"4\"}, ' \\\n                       '\"delivery_config\": {\"number\": \"07812 343830\"} }'\n        resp = await client.put(\"/fledge/notification/Test Notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Invalid rule property in payload.')\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", ' \\\n                       '\"channel\": \";\", \"notification_type\": \"one shot\", \"enabled\": false, ' \\\n                       '\"rule_config\": {\"asset\": \"temperature\", \"trigger\": \"70\", \"window\": \"60\", \"tolerance\": \"4\"}, ' \\\n    
                   '\"delivery_config\": {\"number\": \"07812 343830\"} }'\n        resp = await client.put(\"/fledge/notification/Test Notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Invalid channel property in payload.')\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", ' \\\n                       '\"channel\": \"sms\", \"notification_type\": \";\", \"enabled\": false, ' \\\n                       '\"rule_config\": {\"asset\": \"temperature\", \"trigger\": \"70\", \"window\": \"60\", \"tolerance\": \"4\"}, ' \\\n                       '\"delivery_config\": {\"number\": \"07812 343830\"} }'\n        resp = await client.put(\"/fledge/notification/Test Notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Invalid notification_type property in payload.')\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", ' \\\n                       '\"channel\": \"sms\", \"notification_type\": \"one shot\", \"enabled\": fals, ' \\\n                       '\"rule_config\": {\"asset\": \"temperature\", \"trigger\": \"70\", \"window\": \"60\", \"tolerance\": \"4\"}, ' \\\n                       '\"delivery_config\": {\"number\": \"07812 343830\"} }'\n        resp = await client.put(\"/fledge/notification/Test Notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith('Expecting value: line 1 column 149 (char 148)')\n\n        mock_payload = '{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshol\", ' \\\n                       '\"channel\": \"sm\", \"notification_type\": \"one shot\", \"enabled\": false, ' \\\n                       '\"rule_config\": 
{\"asset\": \"temperature\", \"trigger\": \"70\", \"window\": \"60\", \"tolerance\": \"4\"}, ' \\\n                       '\"delivery_config\": {\"number\": \"07812 343830\"} }'\n        resp = await client.put(\"/fledge/notification/Test Notification\", data=mock_payload)\n        assert 400 == resp.status\n        result = await resp.text()\n        assert result.endswith(\n            \"Invalid rule plugin:{} and/or delivery plugin:{} supplied.\".format(\"threshol\", \"sm\"))\n\n    async def test_put_notification_plugin_fetch_error(self, mocker, client):\n        name = \"Test Notification\"\n        expected_message = \"Failed to fetch notification plugins.\"\n        _rv2 = await mock_read_category_val(name)\n        mocker.patch.object(ServiceRegistry, 'get', return_value=mock_registry)\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', return_value=_rv2)\n        with patch.object(notification, 'fetch_plugins', side_effect=notification.PluginFetchError):\n            with patch.object(notification._logger, 'error') as patch_logger:\n                mock_payload = ('{\"name\": \"Test Notification\", \"description\":\"Test Notification\", \"rule\": \"threshold\", '\n                                '\"channel\": \"sm\", \"notification_type\": \"one shot\", \"enabled\": false, '\n                                '\"rule_config\": {\"asset\": \"temperature\", \"trigger\": \"70\", \"window\": \"60\", '\n                                '\"tolerance\": \"4\"}, \"delivery_config\": {\"number\": \"07812 343830\"} }')\n                resp = await client.put(\"/fledge/notification/{}\".format(name), data=mock_payload)\n                assert 500 == resp.status\n                assert expected_message == resp.reason\n                r = await resp.text()\n                json_response = json.loads(r)\n         
       assert {\"message\": expected_message} == json_response\n            assert 1 == patch_logger.call_count\n            args, kwargs = patch_logger.call_args\n            assert 'Failed to update {} notification instance.'.format(name) == args[1]\n\n    async def test_delete_notification(self, mocker, client):\n        _rv1 = await mock_get_url(\"/fledge/notification/plugin\")\n        _rv2 = await asyncio.sleep(.1)\n        _rv3 = await mock_read_category_val(\"Test Notification\")\n        _se = await mock_delete_url(\"/notification/Test Notification\")\n        mocker.patch.object(ServiceRegistry, 'get', return_value=mock_registry)\n        mocker.patch.object(notification, '_hit_get_url', return_value=_rv1)\n        storage_client_mock = mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(ConfigurationManager, '_read_category_val',\n                            return_value=_rv3)\n        mocker.patch.object(AuditLogger, \"__init__\", return_value=None)\n        audit_logger = mocker.patch.object(AuditLogger, \"information\", return_value=_rv2)\n\n        c_mgr = ConfigurationManager(storage_client_mock)\n        delete_configuration = mocker.patch.object(ConfigurationManager, \"delete_category_and_children_recursively\", return_value=_rv2)\n\n        mocker.patch.object(notification, '_hit_delete_url', side_effect=[_se])\n\n        resp = await client.delete(\"/fledge/notification/Test Notification\")\n        assert 200 == resp.status\n        result = await resp.json()\n        assert result['result'].endswith(\"Notification {} deleted successfully.\".format(\"Test Notification\"))\n        args, kwargs = delete_configuration.call_args_list[0]\n        assert \"Test Notification\" in args\n\n        assert 1 == audit_logger.call_count\n        
audit_logger_calls = [call('NTFDL', {'name': 'Test Notification'})]\n        audit_logger.assert_has_calls(audit_logger_calls, any_order=True)\n\n    async def test_delete_notification_exception(self, mocker, client):\n        mocker.patch.object(ServiceRegistry, 'get', return_value=mock_registry)\n        mocker.patch.object(notification, '_hit_get_url', return_value=(await mock_get_url(\"/fledge/notification/plugin\")))\n        storage_client_mock = mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(ConfigurationManager, '_read_category_val',\n                            return_value=(await mock_read_category_val(\"Test Notification\")))\n        mocker.patch.object(AuditLogger, \"__init__\", return_value=None)\n        audit_logger = mocker.patch.object(AuditLogger, \"information\", return_value=(await asyncio.sleep(.1)))\n\n        c_mgr = ConfigurationManager(storage_client_mock)\n        delete_configuration = mocker.patch.object(ConfigurationManager, \"delete_category_and_children_recursively\", return_value=(await asyncio.sleep(.1)))\n\n        resp = await client.delete(\"/fledge/notification\")\n        assert 405 == resp.status\n        result = await resp.text()\n        assert result.endswith(\" Method Not Allowed\")\n\n    async def test_registry_exception(self, mocker, client):\n        mocker.patch.object(ServiceRegistry, 'get', side_effect=service_registry_exceptions.DoesNotExist)\n        resp = await client.delete(\"/fledge/notification/Test Notification\")\n        assert 404 == resp.status\n        result = await resp.text()\n        assert result.endswith(\"No Notification service available.\")\n\n    @pytest.mark.parametrize(\"payload, message\", [\n        ({}, \"Missing name property in payload\"),\n        ({\"name\": \"\"}, \"Name should not be empty\"),\n        ({\"name\": \" \"}, \"Name should not be empty\"),\n        
({\"name\": \"Test@123\"}, \"name should not use reserved words\"),\n        ({\"name\": \"Test123\", \"config\": \"\"}, \"config must be a valid JSON\")\n    ])\n    async def test_bad_post_delivery_channel(self, client, payload, message):\n        resp = await client.post(\"/fledge/notification/overspeed/delivery\", data=json.dumps(payload))\n        assert 400 == resp.status\n        assert message == resp.reason\n        r = await resp.text()\n        json_response = json.loads(r)\n        assert {\"message\": message} == json_response\n\n    @pytest.mark.parametrize(\"name, config, description\", [\n        (\"coolant\", delivery_channel_config, None),\n        (\" coolant2\", delivery_channel_config, ''),\n        (\" coolant3\", delivery_channel_config, 'Test coolant'),\n    ])\n    async def test_good_post_delivery_channel(self, mocker, client, name, config, description):\n        notification_instance_name = \"overspeed\"\n        _se = await mock_read_category_val(notification_instance_name)\n        _rv1 = await mock_create_category()\n        _rv2 = await mock_check_category(delivery_channel_config)\n        _rv3 = await mock_create_child_category()\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', side_effect=[_se])\n        mocker.patch.object(ConfigurationManager, 'create_category', return_value=_rv1)\n        mocker.patch.object(ConfigurationManager, 'get_category_all_items', return_value=_rv2)\n        mocker.patch.object(ConfigurationManager, 'create_child_category', return_value=_rv3)\n        payload = {\"name\": name, \"config\": config}\n        expected_description = \"{} delivery channel\".format(name.strip())\n        if description is not None:\n            payload['description'] = description\n            expected_description = description\n        resp = await 
client.post(\"/fledge/notification/{}/delivery\".format(notification_instance_name), data=json.dumps(payload))\n        assert 200 == resp.status\n        result = await resp.text()\n        json_response = json.loads(result)\n        expected_cat_name = \"{}_channel_{}\".format(notification_instance_name, name.strip())\n        assert expected_cat_name == json_response['category']\n        assert config == json_response['config']\n        assert expected_description == json_response['description']\n\n    async def test_bad_get_delivery_channel(self, mocker, client):\n        notification_instance_name = \"blah\"\n        message = \"{} notification instance does not exist\".format(notification_instance_name)\n        _se = await mock_read_category_val(notification_instance_name)\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', side_effect=[_se])\n\n        resp = await client.get('/fledge/notification/{}/delivery'.format(notification_instance_name))\n        assert 404 == resp.status\n        assert message == resp.reason\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {\"message\": message} == json_response\n\n    @pytest.mark.parametrize(\"notification_instance_name, categories, exp_channel, plugin_type\", [\n        (\"overspeed\", [], [], {}),\n        (\"overspeed\", [(\"overspeed_channel_coolant\", 1), (\"Pump_channel_coolant2\", 2)], ['coolant'], {}),\n        (\"overspeed\", [(\"overspeed_channel_coolant\", 1), ('overspeed_channel_coolant2', 2)], ['coolant', 'coolant2'], {}),\n        (\"overspeed\", [(\"deliveryoverspeed\", 1), (\"overspeed_channel_coolant2\", 2)], ['deliveryoverspeed', 'coolant2'], {}),\n        (\"overspeed\", [(\"deliveryoverspeed\", 1), (\"overspeed_channel_coolant2\", 2)], [{'name': 'mqtt', 'category': 'deliveryoverspeed'}, 
{'name':'coolant2', 'category' : 'overspeed_channel_coolant2'}], {\"value\": {\"plugin\": {\"value\": \"mqtt\" }}})\n    ])\n    async def test_good_get_delivery_channel(self, mocker, client, notification_instance_name, categories, exp_channel, plugin_type):\n        async def async_mock(cat):\n            return cat\n        _se = await mock_read_category_val(notification_instance_name)\n        _rv = await async_mock(categories)\n        _rv2 = await async_mock(plugin_type)\n        mocker.patch.object(notification, '_get_all_delivery_channels', return_value=await async_mock(exp_channel))\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', side_effect=[_se])\n        mocker.patch.object(ConfigurationManager, 'get_all_category_names', return_value=_rv)\n        mocker.patch.object(ConfigurationManager, '_read_category', return_value=_rv2)\n        resp = await client.get('/fledge/notification/{}/delivery'.format(notification_instance_name))\n        assert 200 == resp.status\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert exp_channel == json_response['channels']\n\n    @pytest.mark.parametrize(\"notification_instance_name, channel_name, message\", [\n        (\"foo\", \"bar\", \"foo notification instance does not exist\"),\n        (\"Test Notification\", \"bar\", \"bar channel does not exist\")\n    ])\n    async def test_bad_get_delivery_channel_configuration(self, mocker, client, notification_instance_name,\n                                                          channel_name, message):\n        _se1 = await mock_read_category_val(notification_instance_name)\n        _se2 = await mock_read_category_val(channel_name)\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', 
return_value=None)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', side_effect=[_se1])\n        mocker.patch.object(notification, '_get_channels', side_effect=[_se2])\n        resp = await client.get('/fledge/notification/{}/delivery/{}'.format(notification_instance_name, channel_name))\n        assert 404 == resp.status\n        assert message == resp.reason\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {\"message\": message} == json_response\n\n    async def test_good_get_delivery_channel_configuration(self, mocker, client):\n        notification_instance_name = \"overspeed\"\n        channel_name = \"coolant\"\n        _se1 = await mock_read_category_val(notification_instance_name)\n        _se2 = await mock_read_category_val(channel_name)\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', side_effect=[_se1, _se1])\n        mocker.patch.object(notification, '_get_channels', side_effect=[_se2])\n        resp = await client.get('/fledge/notification/{}/delivery/{}'.format(notification_instance_name, channel_name))\n        assert 200 == resp.status\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert 'config' in json_response\n        assert delivery_channel_config == json_response['config']\n\n    @pytest.mark.parametrize(\"notification_instance_name, channel_name, message\", [\n        (\"foo\", \"bar\", \"No Notification service available.\"),\n        (\"Test Notification\", \"bar\", \"No Notification service available.\")\n    ])\n    async def test_bad_delete_delivery_channel(self, mocker, client, notification_instance_name, channel_name, message): \n        _se1 = await mock_read_category_val(notification_instance_name)\n        _se2 = await mock_read_category_val(channel_name)\n   
     mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', side_effect=[_se1])\n        mocker.patch.object(notification, '_get_channels', side_effect=[_se2])\n        resp = await client.delete('/fledge/notification/{}/delivery/{}'.format(notification_instance_name,\n                                                                                channel_name))\n        assert 404 == resp.status\n        assert message == resp.reason\n        result = await resp.text()\n        # FIXME:\n        # json_response = json.loads(result)\n        # assert {\"message\": message} == json_response\n\n    async def test_good_delete_delivery_channel(self, mocker, client):\n        notification_instance_name = \"overspeed\"\n        channel_name = \"coolant\"\n        _se1 = await mock_read_category_val(notification_instance_name)\n        _se2 = await mock_read_category_val(channel_name)\n        _se3 = await mock_read_category_val(\"bar\")\n        _rv = await asyncio.sleep(.1)\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(ConfigurationManager, '__init__', return_value=None)\n        mocker.patch.object(ConfigurationManager, '_read_category_val', side_effect=[_se1])\n        mocker.patch.object(notification, '_get_channels', side_effect=[_se2, _se3])\n        mocker.patch.object(ConfigurationManager, 'delete_category_and_children_recursively', return_value=_rv)\n        resp = await client.delete('/fledge/notification/{}/delivery/{}'.format(notification_instance_name,\n                                                                                channel_name))\n        # FIXME:\n        # assert 200 == resp.status\n        # result = await resp.text()\n        # json_response = json.loads(result)\n        # assert [] == json_response['channels']\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_package_log.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport os\nimport json\nimport pathlib\nfrom pathlib import PosixPath\nfrom unittest.mock import Mock, MagicMock, patch, mock_open\nfrom aiohttp import web\n\nimport pytest\nfrom fledge.services.core import routes\nfrom fledge.services.core.api import package_log\nfrom fledge.services.core import connect\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\n\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2019, Dianomic Systems Inc.\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestPackageLog:\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    @pytest.fixture\n    def logs_path(self):\n        return \"{}/logs\".format(pathlib.Path(__file__).parent)\n\n    async def test_get_logs(self, client, logs_path):\n        files = [\"190801-13-21-56.log\",\n                 \"190801-13-18-02-fledge-north-httpc-install.log\",\n                 \"190801-14-55-25-fledge-south-sinusoid-install.log\",\n                 \"191024-04-21-56-list.log\",\n                 \"230619-10-20-31-fledge-south-http-south-remove.log\",\n                 \"230619-10-17-36-fledge-south-s2opcua-update.log\",\n                 \"trace.log\",\n                 \"20230609_093006_Trace_00000.log\",\n                 \"trace.txt\",\n                 \"syslog\"\n                 ]\n        with patch.object(package_log, '_get_logs_dir', side_effect=[logs_path]):\n            with patch('os.walk') as mockwalk:\n                mockwalk.return_value = [(str(logs_path), [], files)]\n                resp = await client.get('/fledge/package/log')\n                assert 200 == resp.status\n                res = await resp.text()\n                
jdict = json.loads(res)\n                logs = jdict[\"logs\"]\n                assert len(files) - 2 == len(logs)\n                obj = logs[0]\n                assert files[0] == obj['filename']\n                assert \"2019-08-01 13:21:56\" == obj['timestamp']\n                assert \"190801-13-21-56\" == obj['name']\n                obj = logs[1]\n                assert files[1] == obj['filename']\n                assert \"2019-08-01 13:18:02\" == obj['timestamp']\n                assert \"fledge-north-httpc-install\" == obj['name']\n                obj = logs[2]\n                assert files[2] == obj['filename']\n                assert \"2019-08-01 14:55:25\" == obj['timestamp']\n                assert \"fledge-south-sinusoid-install\" == obj['name']\n                obj = logs[3]\n                assert files[3] == obj['filename']\n                assert \"2019-10-24 04:21:56\" == obj['timestamp']\n                assert \"list\" == obj['name']\n                obj = logs[4]\n                assert files[4] == obj['filename']\n                assert \"2023-06-19 10:20:31\" == obj['timestamp']\n                assert \"fledge-south-http-south-remove\" == obj['name']\n                obj = logs[5]\n                assert files[5] == obj['filename']\n                assert \"2023-06-19 10:17:36\" == obj['timestamp']\n                assert \"fledge-south-s2opcua-update\" == obj['name']\n                obj = logs[6]\n                assert files[6] == obj['filename']\n                assert len(obj['timestamp']) > 0\n                assert \"trace\" == obj['name']\n                obj = logs[7]\n                assert files[7] == obj['filename']\n                assert len(obj['timestamp']) > 0\n                assert \"20230609_093006_Trace_00000\" == obj['name']\n            mockwalk.assert_called_once_with(logs_path, topdown=True)\n\n    async def test_get_log_by_name_with_invalid_extension(self, client):\n        resp = await 
client.get('/fledge/package/log/blah.txt')\n        assert 400 == resp.status\n        assert \"Accepted file extension is .log\" == resp.reason\n\n    async def test_get_log_by_name_when_it_doesnot_exist(self, client, logs_path):\n        files = [\"190801-13-18-02-fledge-north-httpc.log\"]\n        with patch.object(package_log, '_get_logs_dir', side_effect=[logs_path]):\n            with patch('os.walk') as mockwalk:\n                mockwalk.return_value = [(str(logs_path), [], files)]\n                resp = await client.get('/fledge/package/log/190801-13-21-56.log')\n                assert 404 == resp.status\n                assert \"190801-13-21-56.log file not found\" == resp.reason\n\n    async def test_get_log_by_name(self, client, logs_path):\n        log_filepath = Mock()\n        log_filepath.open = mock_open()\n        log_filepath.is_file.return_value = True\n        log_filepath.stat.return_value = MagicMock()\n        log_filepath.stat.st_size = 1024\n\n        filepath = Mock()\n        filepath.name = '190801-13-21-56.log'\n        filepath.open = mock_open()\n        filepath.with_name.return_value = log_filepath\n        with patch.object(package_log, '_get_logs_dir', return_value=logs_path):\n            with patch('os.walk'):\n                with patch(\"aiohttp.web.FileResponse\",\n                           return_value=web.FileResponse(path=os.path.realpath(__file__))) as f_res:\n                    resp = await client.get('/fledge/package/log/{}'.format(str(filepath.name)))\n                    assert 200 == resp.status\n                    assert 'OK' == resp.reason\n                args, kwargs = f_res.call_args\n                assert {'path': PosixPath(pathlib.Path(\"{}/{}\".format(logs_path, filepath.name)))} == kwargs\n                assert 1 == f_res.call_count\n\n    @pytest.mark.parametrize(\"action\", [\n        'upgrade',\n        'blah',\n        1\n    ])\n    async def test_get_package_status_with_bad_action(self, client, 
action):\n        msg = \"Accepted package actions are ('list', 'install', 'purge', 'update')\"\n        resp = await client.get('/fledge/package/{}/status'.format(action))\n        assert 400 == resp.status\n        assert msg == resp.reason\n        r = await resp.text()\n        actual = json.loads(r)\n        assert {\"message\": msg} == actual\n\n    @pytest.mark.parametrize(\"action, status\", [\n        ('list', -1),\n        ('install', 0),\n        ('purge', -1),\n        ('update', 127),\n        ('Install', 1),\n        ('PURGE', 0)\n    ])\n    async def test_get_package_status_with_no_record(self, client, action, status):\n        payload = {\"return\": [\"id\", \"name\", \"action\", \"status\", \"log_file_uri\"],\n                   \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": action.lower()}}\n\n        async def mock_coro():\n            return {\"rows\": []}\n\n        _rv = await mock_coro()\n        storage_client_mock = MagicMock(StorageClientAsync)\n        msg = \"'No record found'\"\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv) as tbl_patch:\n                resp = await client.get('/fledge/package/{}/status'.format(action))\n                assert 404 == resp.status\n                assert msg == resp.reason\n                r = await resp.text()\n                actual = json.loads(r)\n                assert {'message': msg} == actual\n            args, kwargs = tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n\n    @pytest.mark.parametrize(\"action, status\", [\n        ('list', -1),\n        ('install', 0),\n        ('purge', -1),\n        ('update', 127),\n        ('Install', 1),\n        ('PURGE', 0)\n    ])\n    async def test_get_package_status(self, client, action, status):\n        payload = 
{\"return\": [\"id\", \"name\", \"action\", \"status\", \"log_file_uri\"],\n                   \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": action.lower()}}\n\n        async def mock_coro():\n            return {\"rows\": [{'id': 'b57fd5c5-8079-49ff-b6a1-9515cbd259e4', 'name': 'fledge-south-modbus',\n                              'action': action, 'status': status,\n                              'log_file_uri': 'log/201006-17-02-53-fledge-south-modbus-{}.log'.format(action.lower())}]}\n\n        def convert_status_and_log_file_uri(old):\n            new = old\n            if old['status'] == 0:\n                new['status'] = 'success'\n            elif old['status'] == -1:\n                new['status'] = 'in-progress'\n            else:\n                new['status'] = 'failed'\n            new['logFileURI'] = old['log_file_uri']\n            del old['log_file_uri']\n            return new\n\n        _rv = await mock_coro()\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv) as tbl_patch:\n                resp = await client.get('/fledge/package/{}/status'.format(action))\n                assert 200 == resp.status\n                r = await resp.text()\n                actual = json.loads(r)\n                res = await mock_coro()\n                expected = convert_status_and_log_file_uri(res['rows'][0])\n                assert expected == actual['packageStatus'][0]\n            args, kwargs = tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n\n    @pytest.mark.parametrize(\"uid, action, status, index\", [\n        ('b57fd5c5-8079-49ff-b6a1-9515cbd259e4', 'install', -1, 0),\n        ('1cd38675-fea8-4783-b3b5-463ed6c8cbe8', 'install', 0, 1),\n        
('5c7be038-ce36-449e-8a09-580108ee25ca', 'PURGE', -1, 3)\n    ])\n    async def test_get_package_status_by_uid(self, client, uid, action, status, index):\n        payload = {\"return\": [\"id\", \"name\", \"action\", \"status\", \"log_file_uri\"],\n                   \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": action.lower(),\n                             \"and\": {\"column\": \"id\", \"condition\": \"=\", \"value\": uid}}}\n\n        async def mock_coro():\n            return {\"rows\": [{'id': 'b57fd5c5-8079-49ff-b6a1-9515cbd259e4', 'name': 'fledge-south-random',\n                              'action': \"install\", 'status': -1,\n                              'log_file_uri': 'log/201006-17-02-53-fledge-south-random-install.log'},\n                             {'id': '1cd38675-fea8-4783-b3b5-463ed6c8cbe8', 'name': 'fledge-north-kafka',\n                              'action': \"install\", 'status': 0,\n                              'log_file_uri': 'log/201007-01-02-53-fledge-north-kafka-install.log'},\n                             {'id': '63f3c84b-0cbf-4c76-b9bf-848779fbcc6f', 'name': 'fledge-filter-fft',\n                              'action': \"update\", 'status': 127,\n                              'log_file_uri': 'log/201006-12-02-12-fledge-filter-fft-update.log'},\n                             {'id': '5c7be038-ce36-449e-8a09-580108ee25ca', 'name': 'fledge-south-modbus',\n                              'action': \"purge\", 'status': -1,\n                              'log_file_uri': 'log/201006-17-02-53-fledge-south-modbus-purge.log'}\n                             ]}\n\n        def convert_status_and_log_file_uri(old):\n            new = old\n            if old['status'] == 0:\n                new['status'] = 'success'\n            elif old['status'] == -1:\n                new['status'] = 'in-progress'\n            else:\n                new['status'] = 'failed'\n            new['logFileURI'] = old['log_file_uri']\n            del 
old['log_file_uri']\n            return new\n\n        _rv = await mock_coro()\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv) as tbl_patch:\n                resp = await client.get('/fledge/package/{}/status?id={}'.format(action, uid))\n                assert 200 == resp.status\n                r = await resp.text()\n                actual = json.loads(r)\n                res = await mock_coro()\n                expected = convert_status_and_log_file_uri(res['rows'][index])\n                assert expected == actual['packageStatus'][index]\n            args, kwargs = tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_scheduler_api.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nfrom datetime import timedelta, datetime\nfrom unittest.mock import MagicMock, patch, call\nfrom uuid import UUID\n\nimport uuid\nimport pytest\n\nfrom aiohttp import web\nfrom fledge.services.core import routes\nfrom fledge.services.core import connect\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.services.core import server\nfrom fledge.services.core.scheduler.scheduler import Scheduler\nfrom fledge.services.core.scheduler.entities import ScheduledProcess, Task, IntervalSchedule, TimedSchedule, StartUpSchedule, ManualSchedule\nfrom fledge.services.core.scheduler.exceptions import *\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nasync def mock_coro_response(*args, **kwargs):\n    if len(args) > 0:\n        return args[0]\n    else:\n        return \"\"\n\n\nclass TestScheduledProcesses:\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    def setup_method(self):\n        server.Server.scheduler = Scheduler(None, None)\n\n    def teardown_method(self):\n        server.Server.scheduler = None\n\n    async def test_get_scheduled_processes(self, client):\n        async def mock_coro():\n            processes = []\n            process = ScheduledProcess()\n            process.name = \"foo\"\n            process.script = \"bar\"\n            processes.append(process)\n            return processes\n\n        _rv = await mock_coro()\n        with patch.object(server.Server.scheduler, 'get_scheduled_processes', return_value=_rv):\n            resp = await client.get('/fledge/schedule/process')\n            assert 200 == 
resp.status\n            result = await resp.text()\n            json_response = json.loads(result)\n        assert {'processes': ['foo']} == json_response\n\n    async def test_get_scheduled_process(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        payload = '{\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"in\", \"value\": [\"purge\"]}}'\n        response = {'rows': [{'name': 'purge'}], 'count': 1}\n        _rv = await mock_coro_response(response)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              return_value=_rv) as mock_storage_call:\n                resp = await client.get('/fledge/schedule/process/purge')\n                assert 200 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert 'purge' == json_response\n            mock_storage_call.assert_called_with('scheduled_processes', payload)\n\n    async def test_get_scheduled_process_bad_data(self, client):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'rows': [], 'count': 0}\n        _rv = await mock_coro_response(response)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              return_value=_rv):\n                resp = await client.get('/fledge/schedule/process/bla')\n                assert 404 == resp.status\n                assert \"No such Scheduled Process: ['bla'].\" == resp.reason\n\n    async def test_post_scheduled_process(self, client):\n        payload = {'process_name': 'manage', \"script\": '[\"tasks/manage\"]'}\n        storage_client_mock = MagicMock(StorageClientAsync)\n     
   response = {'rows': [], 'count': 0}\n        ret_val = {\"response\": \"inserted\", \"rows_affected\": 1}\n        _rv1 = await mock_coro_response(response)\n        _rv2 = await mock_coro_response(ret_val)\n        _rv3 = await mock_coro_response(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=(_rv1)\n                              ) as query_tbl_patch:\n                with patch.object(storage_client_mock, 'insert_into_tbl', return_value=(_rv2)\n                                  ) as insert_tbl_patch:\n                    with patch.object(server.Server.scheduler, '_get_process_scripts',\n                                      return_value=_rv3) as get_process_script_patch:\n                        resp = await client.post('/fledge/schedule/process', data=json.dumps(payload))\n                        assert 200 == resp.status\n                        result = await resp.text()\n                        json_response = json.loads(result)\n                        assert {'message': '{} process name created successfully.'.format(\n                            payload['process_name'])} == json_response\n                    get_process_script_patch.assert_called_once_with()\n                assert insert_tbl_patch.called\n                args, kwargs = insert_tbl_patch.call_args_list[0]\n                assert 'scheduled_processes' == args[0]\n                assert {'name': 'manage', 'script': '[\"tasks/manage\"]'} == json.loads(args[1])\n            assert query_tbl_patch.called\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'scheduled_processes' == args[0]\n            assert {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"manage\"}\n                    } == json.loads(args[1])\n\n    @pytest.mark.parametrize(\"request_data, response_code, 
error_message\", [\n        ({}, 400, \"Missing process_name property in payload.\"),\n        ({\"process_name\": \"\"}, 400, \"Missing script property in payload.\"),\n        ({\"script\": \"\"}, 400, \"Missing process_name property in payload.\"),\n        ({\"processName\": \"\", \"script\": \"\"}, 400, \"Missing process_name property in payload.\"),\n        ({\"process_name\": \"\", \"script\": '[\"tasks/statistics\"]'}, 400, \"Process name cannot be empty.\"),\n        ({\"process_name\": \"new\", \"script\": \"\"}, 400, \"Script cannot be empty.\"),\n        ({\"process_name\": \" \", \"script\": '[\"tasks/statistics\"]'}, 400, \"Process name cannot be empty.\"),\n        ({\"process_name\": \" new\", \"script\": \" \"}, 400, \"Script cannot be empty.\"),\n        ({\"process_name\": \"purge\", \"script\": '[\"tasks/purge\"]'}, 400, \"purge process name already exists.\")\n    ])\n    async def test_post_scheduled_process_bad_data(self, client, request_data, response_code, error_message):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'rows': [{\"name\": \"purge\"}], 'count': 1}\n        _rv = await mock_coro_response(response)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv):\n                resp = await client.post('/fledge/schedule/process', data=json.dumps(request_data))\n                assert response_code == resp.status\n                assert error_message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {'message': error_message} == json_response\n\n\nclass TestSchedules:\n    _random_uuid = uuid.uuid4()\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return 
loop.run_until_complete(test_client(app))\n\n    def setup_method(self):\n        server.Server.scheduler = Scheduler(None, None)\n\n    def teardown_method(self):\n        server.Server.scheduler = None\n\n    async def test_get_schedules(self, client):\n        async def mock_coro():\n            schedules = []\n            schedule = StartUpSchedule()\n            schedule.schedule_id = \"1\"\n            schedule.exclusive = True\n            schedule.enabled = True\n            schedule.name = \"foo\"\n            schedule.process_name = \"bar\"\n            schedule.repeat = timedelta(seconds=30)\n            schedule.time = None\n            schedule.day = None\n            schedules.append(schedule)\n            return schedules\n        _rv = await mock_coro()\n        with patch.object(server.Server.scheduler, 'get_schedules', return_value=_rv):\n            resp = await client.get('/fledge/schedule')\n            assert 200 == resp.status\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert {'schedules': [\n                {'name': 'foo', 'day': None, 'type': 'STARTUP', 'processName': 'bar',\n                 'time': 0, 'id': '1', 'exclusive': True, 'enabled': True, 'repeat': 30.0}\n            ]} == json_response\n\n    async def test_get_schedule(self, client):\n        async def mock_coro():\n            schedule = StartUpSchedule()\n            schedule.schedule_id = self._random_uuid\n            schedule.exclusive = True\n            schedule.enabled = True\n            schedule.name = \"foo\"\n            schedule.process_name = \"bar\"\n            schedule.repeat = timedelta(seconds=30)\n            schedule.time = None\n            schedule.day = None\n            return schedule\n\n        \n        _rv = await mock_coro()\n        with patch.object(server.Server.scheduler, 'get_schedule', return_value=_rv):\n            resp = await 
client.get('/fledge/schedule/{}'.format(self._random_uuid))\n            assert 200 == resp.status\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert {'id': str(self._random_uuid),\n                    'name': 'foo', 'repeat': 30.0, 'enabled': True,\n                    'processName': 'bar', 'type': 'STARTUP', 'day': None,\n                    'time': 0, 'exclusive': True} == json_response\n\n    async def test_get_schedule_bad_data(self, client):\n        resp = await client.get('/fledge/schedule/{}'.format(\"bla\"))\n        assert 404 == resp.status\n        assert 'Invalid Schedule ID bla' == resp.reason\n\n    @pytest.mark.parametrize(\"exception_name, response_code, response_message\", [\n        (ScheduleNotFoundError(_random_uuid), 404, 'Schedule not found: {}'.format(_random_uuid)),\n        (ValueError, 404, ''),\n    ])\n    async def test_get_schedule_exceptions(self, client, exception_name, response_code, response_message):\n        with patch.object(server.Server.scheduler, 'get_schedule', side_effect=exception_name):\n            resp = await client.get('/fledge/schedule/{}'.format(self._random_uuid))\n            assert response_code == resp.status\n            assert response_message == resp.reason\n\n    async def test_enable_schedule(self, client):\n        async def mock_coro():\n            return True, \"Schedule successfully enabled\"\n\n        _rv = await mock_coro()\n        with patch.object(server.Server.scheduler, 'enable_schedule', return_value=_rv):\n            resp = await client.put('/fledge/schedule/{}/enable'.format(self._random_uuid))\n            assert 200 == resp.status\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert {'status': True, 'message': 'Schedule successfully enabled',\n                    'scheduleId': '{}'.format(self._random_uuid)} == json_response\n\n    async def 
test_enable_schedule_bad_data(self, client):\n        resp = await client.put('/fledge/schedule/{}/enable'.format(\"bla\"))\n        assert 404 == resp.status\n        assert 'Invalid Schedule ID bla' == resp.reason\n\n    @pytest.mark.parametrize(\"exception_name, response_code, response_message\", [\n        (ScheduleNotFoundError(_random_uuid), 404, 'Schedule not found: {}'.format(_random_uuid)),\n        (ValueError, 404, ''),\n    ])\n    async def test_enable_schedule_exceptions(self, client, exception_name, response_code, response_message):\n        with patch.object(server.Server.scheduler, 'enable_schedule', side_effect=exception_name):\n            resp = await client.put('/fledge/schedule/{}/enable'.format(self._random_uuid))\n            assert response_code == resp.status\n            assert response_message == resp.reason\n\n    async def test_disable_schedule(self, client):\n        async def mock_coro():\n            return True, \"Schedule successfully disabled\"\n\n        _rv = await mock_coro()\n        with patch.object(server.Server.scheduler, 'disable_schedule', return_value=_rv):\n            resp = await client.put('/fledge/schedule/{}/disable'.format(self._random_uuid))\n            assert 200 == resp.status\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert {'status': True, 'message': 'Schedule successfully disabled',\n                    'scheduleId': '{}'.format(self._random_uuid)} == json_response\n\n    async def test_disable_schedule_bad_data(self, client):\n        resp = await client.put('/fledge/schedule/{}/disable'.format(\"bla\"))\n        assert 404 == resp.status\n        assert 'Invalid Schedule ID bla' == resp.reason\n\n    @pytest.mark.parametrize(\"exception_name, response_code, response_message\", [\n        (ScheduleNotFoundError(_random_uuid), 404, 'Schedule not found: {}'.format(_random_uuid)),\n        (ValueError, 404, ''),\n   
 ])\n    async def test_disable_schedule_exceptions(self, client, exception_name, response_code, response_message):\n        with patch.object(server.Server.scheduler, 'disable_schedule', side_effect=exception_name):\n            resp = await client.put('/fledge/schedule/{}/disable'.format(self._random_uuid))\n            assert response_code == resp.status\n            assert response_message == resp.reason\n\n    @pytest.mark.parametrize(\"return_queue_task, expected_response\", [\n        (True, {'message': 'Schedule started successfully', 'id': '{}'.format(_random_uuid)}),\n        (False, {'message': 'Schedule could not be started', 'id': '{}'.format(_random_uuid)}),\n    ])\n    async def test_start_schedule(self, client, return_queue_task, expected_response):\n        async def mock_coro():\n            return \"\"\n\n        async def patch_queue_task(_resp):\n            return _resp\n\n        _rv1 = await mock_coro()\n        _rv2 = await patch_queue_task(return_queue_task)\n        with patch.object(server.Server.scheduler, 'get_schedule', return_value=_rv1) as mock_get_schedule:\n            with patch.object(server.Server.scheduler, 'queue_task', return_value=_rv2) \\\n                    as mock_queue_task:\n                resp = await client.post('/fledge/schedule/start/{}'.format(self._random_uuid))\n                assert 200 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert expected_response == json_response\n                mock_queue_task.assert_called_once_with(uuid.UUID('{}'.format(self._random_uuid)))\n            mock_get_schedule.assert_called_once_with(uuid.UUID('{}'.format(self._random_uuid)))\n\n    async def test_start_schedule_bad_data(self, client):\n        resp = await client.post('/fledge/schedule/start/{}'.format(\"bla\"))\n        assert 404 == resp.status\n        assert 'Invalid Schedule ID bla' == resp.reason\n\n    
@pytest.mark.parametrize(\"exception_name, response_code, response_message\", [\n        (ScheduleNotFoundError(_random_uuid), 404, 'Schedule not found: {}'.format(_random_uuid)),\n        (NotReadyError(), 404, ''),\n        (ValueError, 404, ''),\n    ])\n    async def test_start_schedule_exceptions(self, client, exception_name, response_code, response_message):\n        with patch.object(server.Server.scheduler, 'get_schedule', side_effect=exception_name):\n            resp = await client.post('/fledge/schedule/start/{}'.format(self._random_uuid))\n            assert response_code == resp.status\n            assert response_message == resp.reason\n\n    @pytest.mark.parametrize(\"request_data, expected_response\", [\n        ({\"type\": 1, \"name\": \"foo\", \"process_name\": \"bar\"},\n         {'schedule': {'type': 'STARTUP', 'day': None, 'name': 'foo', 'exclusive': True, 'enabled': True,\n                       'id': '{}'.format(_random_uuid), 'processName': 'bar', 'time': 0, 'repeat': 0}}),\n        ({\"type\": 2, \"day\": 1, \"time\": 10, \"name\": \"foo\", \"process_name\": \"bar\"},\n         {'schedule': {'name': 'foo', 'processName': 'bar', 'time': 10, 'enabled': True,\n                       'id': '{}'.format(_random_uuid), 'repeat': 0, 'exclusive': True, 'day': 1,\n                       'type': 'TIMED'}}),\n        ({\"type\": 3, \"repeat\": 15, \"name\": \"foo\", \"process_name\": \"bar\"},\n         {'schedule': {'day': None, 'type': 'INTERVAL', 'exclusive': True, 'enabled': True, 'time': 0, 'repeat': 15.0,\n                       'name': 'foo', 'id': '{}'.format(_random_uuid), 'processName': 'bar'}}),\n        ({\"type\": 4, \"name\": \"foo\", \"process_name\": \"bar\"},\n         {'schedule': {'day': None, 'enabled': True, 'repeat': 0, 'id': '{}'.format(_random_uuid),\n                       'type': 'MANUAL', 'name': 'foo', 'exclusive': True, 'processName': 'bar', 'time': 0}}),\n        ])\n    async def test_post_schedule(self, client, 
request_data, expected_response):\n        async def mock_coro():\n            return \"\"\n\n        async def mock_schedules():\n            schedule1 = ManualSchedule()\n            schedule1.schedule_id = self._random_uuid\n            schedule1.exclusive = True\n            schedule1.enabled = True\n            schedule1.name = \"bar\"\n            schedule1.process_name = \"foo\"\n\n            schedule2 = IntervalSchedule()\n            schedule2.schedule_id = self._random_uuid\n            schedule2.repeat = timedelta(seconds=15)\n            schedule2.exclusive = True\n            schedule2.enabled = True\n            schedule2.name = \"stats collection\"\n            schedule2.process_name = \"stats collector\"\n            return [schedule1, schedule2]\n\n        async def mock_schedule(_type):\n            if _type == 1:\n                schedule = StartUpSchedule()\n                schedule.repeat = None\n                schedule.time = None\n                schedule.day = None\n            elif _type == 2:\n                schedule = TimedSchedule()\n                schedule.repeat = None\n                schedule.time = datetime(1, 1, 1, 0, 0, 10)\n                schedule.day = 1\n            elif _type == 3:\n                schedule = IntervalSchedule()\n                schedule.repeat = timedelta(seconds=15)\n                schedule.time = None\n                schedule.day = None\n            else:\n                schedule = ManualSchedule()\n                schedule.repeat = None\n                schedule.time = None\n                schedule.day = None\n            schedule.schedule_id = self._random_uuid\n            schedule.exclusive = True\n            schedule.enabled = True\n            schedule.name = \"foo\"\n            schedule.process_name = \"bar\"\n            return schedule\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'rows': [{'name': 'p1'}], 'count': 1}\n        _rv1 = await 
mock_coro_response(response)\n        _rv2 = await mock_schedules()\n        _rv3 = await mock_schedule(request_data[\"type\"])\n        _rv4 = await mock_coro()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv1):\n                with patch.object(server.Server.scheduler, 'get_schedules',\n                                  return_value=_rv2) as patch_get_schedules:\n                    with patch.object(server.Server.scheduler, 'get_schedule',\n                                      return_value=_rv3) as patch_get_schedule:\n                        with patch.object(server.Server.scheduler, 'save_schedule',\n                                          return_value=_rv4) as patch_save_schedule:\n                            resp = await client.post('/fledge/schedule', data=json.dumps(request_data))\n                            assert 200 == resp.status\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            assert expected_response == json_response\n                        assert 1 == patch_save_schedule.call_count\n                    patch_get_schedule.assert_called_once_with(None)\n                patch_get_schedules.assert_called_once_with()\n\n    async def test_post_schedule_bad_param(self, client):\n        resp = await client.post('/fledge/schedule', data=json.dumps({'schedule_id': 'bla'}))\n        assert 400 == resp.status\n        assert 'Schedule ID not needed for new Schedule.' 
== resp.reason\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {'message': 'Schedule ID not needed for new Schedule.'} == json_response\n\n    @pytest.mark.parametrize(\"request_data, response_code, error_message, storage_return\", [\n        ({\"type\": 'bla'}, 400, \"Error in type: bla\", {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"day\": 'bla'}, 400, \"Error in day: bla\", {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"time\": 'bla'}, 400, \"Error in time: bla\", {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"repeat\": 'bla'}, 400, \"Error in repeat: bla\", {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"type\": 2, \"name\": \"sch1\", \"process_name\": \"p1\"}, 400,\n         \"Errors in request: Schedule time cannot be empty for TIMED schedule. 1\",\n         {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"type\": 2, \"day\": 9, \"time\": 1, \"name\": \"sch1\", \"process_name\": \"p1\"}, 400,\n         \"Errors in request: Day must either be None or must be an integer and in range 1-7. 1\",\n         {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"type\": 2, \"day\": 5, \"time\": -1, \"name\": \"sch1\", \"process_name\": \"p1\"}, 400,\n         \"Errors in request: Time must be an integer and in range 0-86399. 1\",\n         {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"type\": 200}, 400,\n         \"Errors in request: Schedule type error: 200,Schedule name and Process name cannot be empty. 
2\",\n         {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"type\": 1, \"name\": \"sch1\", \"process_name\": \"p1\"}, 404,\n         \"No such Scheduled Process name: p1\",\n         {'rows': [], 'count': 0}),\n    ])\n    async def test_post_schedule_bad_data(self, client, request_data, response_code, error_message, storage_return):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = storage_return\n        _rv = await mock_coro_response(response)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv):\n                resp = await client.post('/fledge/schedule', data=json.dumps(request_data))\n                assert response_code == resp.status\n                assert error_message == resp.reason\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {'message': error_message} == json_response\n\n    @pytest.mark.parametrize(\"request_data\", [\n        {\"type\": 4, \"name\": \"purge\", \"process_name\": \"purge\", \"repeat\": \"45\"},\n        {\"type\": 1, \"name\": \"Sine\", \"process_name\": \"south_c\",  \"repeat\": 0}\n    ])\n    async def test_duplicate_post_schedule(self, client, request_data):\n        async def mock_schedules():\n            schedule1 = ManualSchedule()\n            schedule1.schedule_id = self._random_uuid\n            schedule1.exclusive = True\n            schedule1.enabled = True\n            schedule1.name = \"purge\"\n            schedule1.process_name = \"purge\"\n\n            schedule2 = StartUpSchedule()\n            schedule2.schedule_id = self._random_uuid\n            schedule2.exclusive = True\n            schedule2.enabled = True\n            schedule2.name = \"Sine\"\n            schedule2.process_name = \"south_c\"\n\n            schedule3 = IntervalSchedule()\n            
schedule3.schedule_id = self._random_uuid\n            schedule3.repeat = timedelta(seconds=15)\n            schedule3.exclusive = True\n            schedule3.enabled = True\n            schedule3.name = \"stats collection\"\n            schedule3.process_name = \"stats collector\"\n\n            return [schedule1, schedule2, schedule3]\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'rows': [{'name': 'purge'}, {'name': 'south_c'}, {'name': 'stats collector'}], 'count': 3}\n        _rv1 = await mock_coro_response(response)\n        _rv2 = await mock_schedules()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv1):\n                with patch.object(server.Server.scheduler, 'get_schedules',\n                                  return_value=_rv2) as patch_get_schedules:\n                    resp = await client.post('/fledge/schedule', data=json.dumps(request_data))\n                    assert 409 == resp.status\n                    assert \"Duplicate schedule name entry found\" == resp.reason\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert {'message': 'Duplicate schedule name entry found'} == json_response\n                patch_get_schedules.assert_called_once_with()\n\n    @pytest.mark.parametrize(\"request_data, expected_response\", [\n        ({\"name\": \"new\"},\n         {'schedule': {'id': '{}'.format(_random_uuid), 'time': 0, 'processName': 'bar', 'repeat': 30.0,\n                       'exclusive': True, 'enabled': True, 'type': 'INTERVAL', 'day': None, 'name': 'new'}})\n        ])\n    async def test_update_schedule(self, client, request_data, expected_response):\n        async def mock_coro():\n            return \"\"\n\n        async def mock_schedules():\n            schedule1 = ManualSchedule()\n        
    schedule1.schedule_id = self._random_uuid\n            schedule1.exclusive = True\n            schedule1.enabled = True\n            schedule1.name = \"purge\"\n            schedule1.process_name = \"purge\"\n\n            schedule2 = StartUpSchedule()\n            schedule2.schedule_id = self._random_uuid\n            schedule2.exclusive = True\n            schedule2.enabled = True\n            schedule2.name = \"Sine\"\n            schedule2.process_name = \"south_c\"\n\n            schedule3 = IntervalSchedule()\n            schedule3.schedule_id = self._random_uuid\n            schedule3.repeat = timedelta(seconds=15)\n            schedule3.exclusive = True\n            schedule3.enabled = True\n            schedule3.name = \"stats collection\"\n            schedule3.process_name = \"stats collector\"\n\n            return [schedule1, schedule2, schedule3]\n\n        async def mock_schedule(*args):\n            schedule = IntervalSchedule()\n            schedule.schedule_id = self._random_uuid\n            schedule.exclusive = True\n            schedule.enabled = True\n            schedule.process_name = \"bar\"\n            schedule.repeat = timedelta(seconds=30)\n            schedule.time = 0\n            schedule.day = None\n            schedule.name = \"foo\" if args[0] == 1 else \"new\"\n            return schedule\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'rows': [{'name': 'p1'}], 'count': 1}\n        _rv1 = await mock_coro_response(response)\n        _rv2 = await mock_schedules()\n        _rv3 = await mock_coro()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv1):\n                with patch.object(server.Server.scheduler, 'get_schedules',\n                                  return_value=_rv2) as patch_get_schedules:\n                    with 
patch.object(server.Server.scheduler, 'save_schedule',\n                                      return_value=_rv3) as patch_save_schedule:\n                        with patch.object(server.Server.scheduler, 'get_schedule',\n                                          side_effect=mock_schedule) as patch_get_schedule:\n                            resp = await client.put('/fledge/schedule/{}'.format(self._random_uuid),\n                                                    data=json.dumps(request_data))\n                            assert 200 == resp.status\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            assert expected_response == json_response\n                        assert 2 == patch_get_schedule.call_count\n                        assert call(uuid.UUID(str(self._random_uuid))) == patch_get_schedule.call_args\n                    arguments, kwargs = patch_save_schedule.call_args\n                    assert isinstance(arguments[0], IntervalSchedule)\n                patch_get_schedules.assert_called_once_with()\n\n    async def test_update_schedule_bad_param(self, client):\n        error_msg = 'Invalid Schedule ID bla'\n        resp = await client.put('/fledge/schedule/{}'.format(\"bla\"), data=json.dumps({\"a\": 1}))\n        assert 400 == resp.status\n        assert error_msg == resp.reason\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {'message': error_msg} == json_response\n\n    @pytest.mark.parametrize(\"payload, status_code, message\", [\n        ({\"name\": \"Updated\"}, 400, \"South Service is a STARTUP schedule type and cannot be renamed.\"),\n        ({\"type\": 3}, 400, \"South Service is a STARTUP schedule type and cannot be changed its type.\"),\n        ({\"name\": \"Updated\", \"type\": 3}, 400, \"South Service is a STARTUP schedule type and cannot be renamed.\"),\n        ({\"name\": \"Updated\", 
\"enabled\": False}, 400, \"South Service is a STARTUP schedule type and cannot be renamed.\"),\n        ({\"type\": 4, \"enabled\": False}, 400, \"South Service is a STARTUP schedule type and cannot be changed its type.\")\n    ])\n    async def test_bad_update_startup_schedule(self, client, payload, status_code, message):\n        uuid = \"5affb5d1-96bb-4334-96ea-f91904cacc9b\"\n\n        async def mock_schedule():\n            schedule = StartUpSchedule()\n            schedule.schedule_id = self._random_uuid\n            schedule.exclusive = True\n            schedule.enabled = True\n            schedule.process_name = \"south_c\"\n            schedule.repeat = 0\n            schedule.time = 0\n            schedule.day = None\n            schedule.name = \"South Service\"\n            return schedule\n\n        _rv1 = await mock_schedule()\n        with patch.object(server.Server.scheduler, 'get_schedule', return_value=_rv1) as patch_get_schedule:\n            resp = await client.put('/fledge/schedule/{}'.format(uuid), data=json.dumps(payload))\n            assert status_code == resp.status\n            assert message == resp.reason\n        patch_get_schedule.assert_called_once_with(UUID(uuid))\n\n    @pytest.mark.parametrize(\"payload\", [\n        ({\"enabled\": True}),\n        ({\"enabled\": False}),\n        ({\"exclusive\": False}),\n        ({\"exclusive\": True}),\n        ({\"exclusive\": True, \"enabled\": False}),\n        ({\"exclusive\": False, \"enabled\": True}),\n        ({\"exclusive\": False, \"enabled\": False}),\n        ({\"exclusive\": True, \"enabled\": True}),\n    ])\n    async def test_good_update_startup_schedule(self, client, payload):\n        startup_uuid = \"5affb5d1-96bb-4334-96ea-f91904cacc9b\"\n\n        async def mock_coro():\n            return \"\"\n\n        async def mock_schedules():\n            schedule1 = ManualSchedule()\n            schedule1.schedule_id = self._random_uuid\n            schedule1.exclusive = True\n   
         schedule1.enabled = True\n            schedule1.name = \"purge\"\n            schedule1.process_name = \"purge\"\n\n            schedule2 = StartUpSchedule()\n            schedule2.schedule_id = startup_uuid\n            schedule2.exclusive = True\n            schedule2.enabled = True\n            schedule2.name = \"South Service\"\n            schedule2.process_name = \"south_c\"\n            schedule2.repeat = 0\n            schedule2.time = 0\n            schedule2.day = None\n\n            schedule3 = IntervalSchedule()\n            schedule3.schedule_id = self._random_uuid\n            schedule3.repeat = timedelta(seconds=15)\n            schedule3.exclusive = True\n            schedule3.enabled = True\n            schedule3.name = \"stats collection\"\n            schedule3.process_name = \"stats collector\"\n\n            return [schedule1, schedule2, schedule3]\n\n        async def mock_schedule():\n            sch = await mock_schedules()\n            return sch[1]\n\n        async def final_schedule():\n            schedule = StartUpSchedule()\n            schedule.schedule_id = startup_uuid\n            schedule.exclusive = payload['exclusive'] if 'exclusive' in payload else True\n            schedule.enabled = payload['enabled'] if 'enabled' in payload else True\n            schedule.name = \"South Service\"\n            schedule.process_name = \"south_c\"\n            schedule.repeat = 0\n            schedule.time = 0\n            schedule.day = None\n            return schedule\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'rows': [{'name': 'SCH'}], 'count': 1}\n        _rv0 = await mock_coro_response(response)\n        _rv1 = await mock_schedule()\n        _rv11 = await final_schedule()\n        _rv2 = await mock_coro()\n        _rv3 = await mock_schedules()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 
'query_tbl_with_payload', return_value=_rv0):\n                with patch.object(server.Server.scheduler, 'get_schedule', side_effect=[_rv1, _rv11]):\n                    with patch.object(server.Server.scheduler, 'save_schedule', return_value=_rv2\n                                      ) as patch_save_schedule:\n                        with patch.object(server.Server.scheduler, 'get_schedules', return_value=_rv3\n                                          ) as patch_get_schedules:\n                            resp = await client.put('/fledge/schedule/{}'.format(startup_uuid), data=json.dumps(payload))\n                            assert 200 == resp.status\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            if 'exclusive' in payload:\n                                assert payload['exclusive'] == json_response['schedule']['exclusive']\n                            if 'enabled' in payload:\n                                assert payload['enabled'] == json_response['schedule']['enabled']\n                        assert 1 == patch_get_schedules.call_count\n                    arguments, kwargs = patch_save_schedule.call_args\n                    assert isinstance(arguments[0], StartUpSchedule)\n\n    async def test_update_schedule_data_not_exist(self, client):\n        async def mock_coro():\n            return \"\"\n        _rv = await mock_coro()\n        with patch.object(server.Server.scheduler, 'get_schedule',\n                          return_value=_rv) as patch_get_schedule:\n            error_message = 'Schedule not found: {}'.format(self._random_uuid)\n            resp = await client.put('/fledge/schedule/{}'.format(self._random_uuid), data=json.dumps({\"a\": 1}))\n            assert 404 == resp.status\n            assert error_message == resp.reason\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert 
{'message': error_message} == json_response\n        patch_get_schedule.assert_called_once_with(uuid.UUID('{}'.format(self._random_uuid)))\n\n    @pytest.mark.parametrize(\"request_data, response_code, error_message, storage_return\", [\n        ({\"type\": 'bla'}, 400, \"Error in type: bla\", {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"day\": 'bla'}, 400, \"Error in day: bla\", {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"time\": 'bla'}, 400, \"Error in time: bla\", {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"repeat\": 'bla'}, 400, \"Error in repeat: bla\", {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"type\": 2, \"name\": \"sch1\", \"process_name\": \"p1\"}, 400,\n         \"Errors in request: Schedule time cannot be empty for TIMED schedule.\",\n         {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"type\": 2, \"day\": 9, \"time\": 1, \"name\": \"sch1\", \"process_name\": \"p1\"}, 400,\n         \"Errors in request: Day must either be None or must be an integer and in range 1-7.\",\n         {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"type\": 2, \"day\": 5, \"time\": -1, \"name\": \"sch1\", \"process_name\": \"p1\"}, 400,\n         \"Errors in request: Time must be an integer and in range 0-86399.\",\n         {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"type\": 200}, 400,\n         \"Errors in request: Schedule type error: 200\",\n         {'rows': [{'name': 'bla'}], 'count': 1}),\n        ({\"type\": 1, \"name\": \"sch1\", \"process_name\": \"p1\"}, 404,\n         \"No such Scheduled Process name: p1\",\n         {'rows': [], 'count': 0}),\n    ])\n    async def test_update_schedule_bad_data(self, client, request_data, response_code, error_message, storage_return):\n        async def mock_coro():\n            schedule = IntervalSchedule()\n            schedule.schedule_id = self._random_uuid\n            schedule.exclusive = True\n            schedule.enabled = True\n            
schedule.name = \"foo\"\n            schedule.process_name = \"bar\"\n            schedule.repeat = timedelta(seconds=30)\n            schedule.time = 0\n            schedule.day = None\n            return schedule\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = storage_return\n        _rv1 = await mock_coro_response(response)\n        _rv2 = await mock_coro()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv1):\n                with patch.object(server.Server.scheduler, 'get_schedule',\n                                  return_value=_rv2) as patch_get_schedule:\n                    resp = await client.put('/fledge/schedule/{}'.format(self._random_uuid),\n                                            data=json.dumps(request_data))\n                    assert response_code == resp.status\n                    assert error_message == resp.reason\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert {'message': error_message} == json_response\n                patch_get_schedule.assert_called_once_with(uuid.UUID(str(self._random_uuid)))\n\n    @pytest.mark.parametrize(\"payload\", [\n        {'name': 'purge'},\n        {'name': \"purge\", 'type': 3, 'repeat': 15, 'exclusive': 'true', 'enabled': 'true'},\n        {'name': \"purge\", 'enabled': 'false'},\n        {'name': \"purge\", 'enabled': 'true', 'repeat': 15},\n    ])\n    async def test_duplicate_name_update_schedule(self, client, payload):\n        async def mock_schedules():\n            schedule1 = ManualSchedule()\n            schedule1.schedule_id = \"2176eb68-7303-11e7-8cf7-a6006ad3dba0\"\n            schedule1.exclusive = True\n            schedule1.enabled = True\n            schedule1.name = \"purge\"\n            schedule1.process_name = \"purge\"\n\n            
schedule2 = StartUpSchedule()\n            schedule2.schedule_id = self._random_uuid\n            schedule2.repeat = timedelta(seconds=15)\n            schedule2.exclusive = True\n            schedule2.enabled = True\n            schedule2.name = \"foo\"\n            schedule2.process_name = \"bar\"\n            return [schedule1, schedule2]\n\n        async def mock_schedule(*args):\n            schedule = ManualSchedule()\n            schedule.schedule_id = self._random_uuid\n            schedule.exclusive = True\n            schedule.enabled = True\n            schedule.process_name = \"bar\"\n            schedule.repeat = 0\n            schedule.time = 0\n            schedule.day = None\n            schedule.name = \"foo\" if args[0] == 1 else \"new\"\n            return schedule\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'rows': [{'name': 'purge'}], 'count': 1}\n        _rv1 = await mock_coro_response(response)\n        _rv2 = await mock_schedules()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv1):\n                with patch.object(server.Server.scheduler, 'get_schedule',\n                                  side_effect=mock_schedule) as patch_get_schedule:\n                    with patch.object(server.Server.scheduler, 'get_schedules',\n                                      return_value=_rv2) as patch_get_schedules:\n                        resp = await client.put('/fledge/schedule/{}'.format(self._random_uuid),\n                                                data=json.dumps(payload))\n                        assert 409 == resp.status\n                        result = await resp.text()\n                        json_response = json.loads(result)\n                        assert {'message': 'Duplicate schedule name entry found'} == json_response\n                    
patch_get_schedules.assert_called_once_with()\n                patch_get_schedule.assert_called_once_with(uuid.UUID(str(self._random_uuid)))\n\n    async def test_delete_schedule(self, client):\n        async def mock_coro():\n            return True, \"Schedule deleted successfully.\"\n\n        _rv = await mock_coro()\n        with patch.object(server.Server.scheduler, 'delete_schedule', return_value=_rv):\n            resp = await client.delete('/fledge/schedule/{}'.format(self._random_uuid))\n            assert 200 == resp.status\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert {'id': '{}'.format(self._random_uuid),\n                    'message': 'Schedule deleted successfully.'} == json_response\n\n    async def test_delete_schedule_bad_data(self, client):\n        resp = await client.delete('/fledge/schedule/{}'.format(\"bla\"))\n        assert 404 == resp.status\n        assert 'Invalid Schedule ID bla' == resp.reason\n\n    @pytest.mark.parametrize(\"exception_name, response_code, response_message\", [\n        (ScheduleNotFoundError(_random_uuid), 404, 'Schedule not found: {}'.format(_random_uuid)),\n        (NotReadyError(), 404, ''),\n        (ValueError, 404, ''),\n        (RuntimeWarning, 409, \"Enabled Schedule {} cannot be deleted.\".format(str(_random_uuid))),\n    ])\n    async def test_delete_schedule_exceptions(self, client, exception_name, response_code, response_message):\n        with patch.object(server.Server.scheduler, 'delete_schedule', side_effect=exception_name):\n            resp = await client.delete('/fledge/schedule/{}'.format(self._random_uuid))\n            assert response_code == resp.status\n            assert response_message == resp.reason\n\n    async def test_get_schedule_type(self, client):\n        resp = await client.get('/fledge/schedule/type')\n        assert 200 == resp.status\n        result = await resp.text()\n        json_response = 
json.loads(result)\n        assert {'scheduleType': [{'name': 'STARTUP', 'index': 1},\n                                 {'name': 'TIMED', 'index': 2},\n                                 {'name': 'INTERVAL', 'index': 3},\n                                 {'name': 'MANUAL', 'index': 4}]} == json_response\n\n\nclass TestTasks:\n    _random_uuid = uuid.uuid4()\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    def setup_method(self):\n        server.Server.scheduler = Scheduler(None, None)\n\n    def teardown_method(self):\n        server.Server.scheduler = None\n\n    async def test_get_task(self, client):\n        async def mock_coro():\n            task = Task()\n            task.task_id = self._random_uuid\n            task.state = Task.State.RUNNING\n            task.start_time = None\n            task.schedule_name = \"bar\"\n            task.process_name = \"bar\"\n            task.end_time = None\n            task.exit_code = 0\n            task.reason = None\n            return task\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'count': 1, 'rows': [{'process_name': 'bla'}]}\n\n        _rv1 = await mock_coro_response(response)\n        _rv2 = await mock_coro()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv1):\n                with patch.object(server.Server.scheduler, 'get_task', return_value=_rv2):\n                    resp = await client.get('/fledge/task/{}'.format(self._random_uuid))\n                    assert 200 == resp.status\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert {'startTime': 'None', 'reason': 
None,\n                            'endTime': 'None', 'state': 'Running',\n                            'name': 'bar', 'processName': 'bar', 'exitCode': 0,\n                            'id': '{}'.format(self._random_uuid)} == json_response\n\n    async def test_get_task_bad_data(self, client):\n        resp = await client.get('/fledge/task/{}'.format(\"bla\"))\n        assert 404 == resp.status\n        assert 'Invalid Task ID bla' == resp.reason\n\n    @pytest.mark.parametrize(\"exception_name, response_code, response_message\", [\n        (TaskNotFoundError(_random_uuid), 404, 'Task not found: {}'.format(_random_uuid)),\n        (ValueError, 404, ''),\n    ])\n    async def test_get_task_exceptions(self, client, exception_name, response_code, response_message):\n        with patch.object(server.Server.scheduler, 'get_task', side_effect=exception_name):\n            resp = await client.get('/fledge/task/{}'.format(self._random_uuid))\n            assert response_code == resp.status\n            assert response_message == resp.reason\n\n    @pytest.mark.parametrize(\"request_params\", [\n        '',\n        '?limit=1',\n        '?name=bla',\n        '?state=running',\n        '?limit=1&name=bla&state=running',\n    ])\n    async def test_get_tasks(self, client, request_params):\n        async def patch_get_tasks():\n            tasks = []\n            task = Task()\n            task.task_id = self._random_uuid\n            task.state = Task.State.RUNNING\n            task.start_time = None\n            task.schedule_name = \"bla\"\n            task.process_name = \"bla\"\n            task.end_time = None\n            task.exit_code = 0\n            task.reason = None\n            tasks.append(task)\n            return tasks\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'count': 1, 'rows': [{'process_name': 'bla'}]}\n        _rv1 = await mock_coro_response(response)\n        _rv2 = await patch_get_tasks()\n        
with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv1):\n                with patch.object(server.Server.scheduler, 'get_tasks', return_value=_rv2):\n                    resp = await client.get('/fledge/task{}'.format(request_params))\n                    assert 200 == resp.status\n                    result = await resp.text()\n                    json_response = json.loads(result)\n                    assert {'tasks': [{'state': 'Running', 'id': '{}'.format(self._random_uuid),\n                                       'endTime': 'None', 'exitCode': 0,\n                                       'startTime': 'None', 'reason': None, 'name': 'bla', 'processName': 'bla'}]} == json_response\n\n    @pytest.mark.parametrize(\"request_params, response_code, response_message\", [\n        ('?limit=invalid', 400, \"Limit must be a positive integer\"),\n        ('?limit=-1', 400, \"Limit must be a positive integer\"),\n        ('?state=BLA', 400, \"This state value 'BLA' not permitted.\"),\n    ])\n    async def test_get_tasks_exceptions(self, client, request_params, response_code, response_message):\n        resp = await client.get('/fledge/task{}'.format(request_params))\n        assert response_code == resp.status\n        assert response_message == resp.reason\n\n    async def test_get_tasks_no_task_exception(self, client):\n        async def patch_get_tasks():\n            tasks = []\n            return tasks\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'count': 0, 'rows': []}\n        _rv1 = await mock_coro_response(response)\n        _rv2 = await patch_get_tasks()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv1):\n                with patch.object(server.Server.scheduler, 
'get_tasks', return_value=_rv2):\n                    resp = await client.get('/fledge/task{}'.format('?name=bla&state=running'))\n                    assert 404 == resp.status\n                    assert \"No Tasks found\" == resp.reason\n\n    @pytest.mark.parametrize(\"request_params\", ['', '?name=bla'])\n    async def test_get_tasks_latest(self, client, request_params):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'count': 2, 'rows': [\n            {'pid': '1', 'reason': '', 'exit_code': '0', 'id': '1',\n             'process_name': 'bla', 'schedule_name': 'bla', 'end_time': '2018', 'start_time': '2018', 'state': '2'}]}\n        _rv1 = await mock_coro_response(response)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv1):\n                resp = await client.get('/fledge/task/latest{}'.format(request_params))\n                assert 200 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {'tasks': [{'reason': '', 'name': 'bla', 'processName': 'bla',\n                                   'state': 'Complete', 'exitCode': '0', 'endTime': '2018',\n                                   'pid': '1', 'startTime': '2018', 'id': '1'}]} == json_response\n\n    @pytest.mark.parametrize(\"request_params\", ['', '?name=not_exist'])\n    async def test_get_tasks_latest_no_task_exception(self, client, request_params):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        response = {'count': 0, 'rows': []}\n        _rv1 = await mock_coro_response(response)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv1):\n                resp = await 
client.get('/fledge/task/latest{}'.format(request_params))\n                assert 404 == resp.status\n                assert \"No Tasks found\" == resp.reason\n\n    async def test_cancel_task(self, client):\n        async def mock_coro():\n            return \"some valid values\"\n\n        _rv = await mock_coro()\n        with patch.object(server.Server.scheduler, 'get_task', return_value=_rv):\n            with patch.object(server.Server.scheduler, 'cancel_task', return_value=_rv):\n                resp = await client.put('/fledge/task/{}/cancel'.format(self._random_uuid))\n                assert 200 == resp.status\n                result = await resp.text()\n                json_response = json.loads(result)\n                assert {'id': '{}'.format(self._random_uuid),\n                        'message': 'Task cancelled successfully'} == json_response\n\n    async def test_cancel_task_bad_data(self, client):\n        resp = await client.put('/fledge/task/{}/cancel'.format(\"bla\"))\n        assert 404 == resp.status\n        assert 'Invalid Task ID {}'.format(\"bla\") == resp.reason\n\n    @pytest.mark.parametrize(\"exception_name, response_code, response_message\", [\n        (TaskNotFoundError(_random_uuid), 404, 'Task not found: {}'.format(_random_uuid)),\n        (TaskNotRunningError(_random_uuid), 404, 'Task is not running: {}'.format(_random_uuid)),\n        (ValueError, 404, ''),\n    ])\n    async def test_cancel_task_exceptions(self, client, exception_name, response_code, response_message):\n        async def mock_coro():\n            return \"\"\n        _rv = await mock_coro()\n        with patch.object(server.Server.scheduler, 'get_task', return_value=_rv):\n            with patch.object(server.Server.scheduler, 'cancel_task', side_effect=exception_name):\n                resp = await client.put('/fledge/task/{}/cancel'.format(self._random_uuid))\n                assert response_code == resp.status\n                assert 
response_message == resp.reason\n\n    async def test_get_task_state(self, client):\n        resp = await client.get('/fledge/task/state')\n        assert 200 == resp.status\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {'taskState': [\n            {'name': 'Running', 'index': 1},\n            {'name': 'Complete', 'index': 2},\n            {'name': 'Canceled', 'index': 3},\n            {'name': 'Interrupted', 'index': 4}]} == json_response\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_service.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport asyncio\nimport json\nfrom uuid import UUID\nfrom unittest.mock import MagicMock, patch, call\nimport pytest\nfrom aiohttp import web\nfrom fledge.services.core import routes\nfrom fledge.services.core import connect\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.services.core.interest_registry.interest_registry import InterestRegistry\nfrom fledge.services.core import server\nfrom fledge.services.core.scheduler.scheduler import Scheduler\nfrom fledge.services.core.scheduler.entities import StartUpSchedule\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.services.core.api import service\nfrom fledge.services.core.api.plugins import common\nfrom fledge.services.core.api.plugins.exceptions import *\n\n\n__author__ = \"Ashwin Gopalakrishnan, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestService:\n    def setup_method(self):\n        ServiceRegistry._registry = list()\n\n    def teardown_method(self):\n        ServiceRegistry._registry = list()\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    async def async_mock(self, return_value):\n        return return_value\n\n    async def test_get_health(self, mocker, client):\n        # empty service registry\n        resp = await client.get('/fledge/service')\n        assert 200 == resp.status\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {'services': []} == 
json_response\n\n        mocker.patch.object(InterestRegistry, \"__init__\", return_value=None)\n        mocker.patch.object(InterestRegistry, \"get\", return_value=list())\n\n        with patch.object(ServiceRegistry._logger, 'info') as log_patch_info:\n            # populated service registry\n            ServiceRegistry.register('name1', 'Storage', 'address1', 1, 1, 'protocol1')\n            ServiceRegistry.register('name2', 'Southbound', 'address2', 2, 2, 'protocol2')\n            s_id_3 = ServiceRegistry.register('name3', 'Southbound', 'address3', 3, 3, 'protocol3')\n            s_id_4 = ServiceRegistry.register('name4', 'Notification', 'address4', 4, 4, 'protocol4')\n            ServiceRegistry.register('name5', 'Management', 'address5', 5, 5, 'http')\n            ServiceRegistry.register('name6', 'Northbound', 'address6', 6, 6, 'http')\n            ServiceRegistry.register('name7', 'Dispatcher', 'address7', 7, 7, 'http')\n            ServiceRegistry.register('name8', 'BucketStorage', 'address8', 8, 8, 'http')\n            ServiceRegistry.register('name9', 'Northbound', 'address9', 9, 9, 'http')\n            ServiceRegistry.unregister(s_id_3)\n            ServiceRegistry.mark_as_failed(s_id_4)\n            default_debugger = {\"debugger\": \"Detached\"}\n            svc_3_debugger = {\"debugger\": \"Attached\", \"ingress\": \"Suspended\", \"egress\": \"Storage\"}\n            svc_9_debugger = {\"debugger\": \"Attached\", \"ingress\": \"Running\", \"egress\": \"Storage\"}\n            for service_record in ServiceRegistry.all():\n                if service_record._type in ('Southbound', 'Northbound'):\n                    if service_record._name == 'name3':\n                        service_record._debug = svc_3_debugger\n                    elif service_record._name == 'name9':\n                        service_record._debug = svc_9_debugger\n                    else:\n                        service_record._debug = default_debugger\n            resp = await 
client.get('/fledge/service')\n            assert 200 == resp.status\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert json_response == {\n                'services': [\n                    {\n                        'type': 'Storage',\n                        'service_port': 1,\n                        'address': 'address1',\n                        'protocol': 'protocol1',\n                        'status': 'running',\n                        'name': 'name1',\n                        'management_port': 1\n                    },\n                    {\n                        'type': 'Southbound',\n                        'service_port': 2,\n                        'address': 'address2',\n                        'protocol': 'protocol2',\n                        'status': 'running',\n                        'name': 'name2',\n                        'management_port': 2,\n                        'debug': default_debugger\n                    },\n                    {\n                        'type': 'Southbound',\n                        'service_port': 3,\n                        'address': 'address3',\n                        'protocol': 'protocol3',\n                        'status': 'shutdown',\n                        'name': 'name3',\n                        'management_port': 3,\n                        'debug': svc_3_debugger\n                    },\n                    {\n                        'type': 'Notification',\n                        'service_port': 4,\n                        'address': 'address4',\n                        'protocol': 'protocol4',\n                        'status': 'failed',\n                        'name': 'name4',\n                        'management_port': 4\n                    },\n                    {\n                        'type': 'Management',\n                        'service_port': 5,\n                        'address': 'address5',\n                        
'protocol': 'http',\n                        'status': 'running',\n                        'name': 'name5',\n                        'management_port': 5\n                    },\n                    {\n                        'type': 'Northbound',\n                        'service_port': 6,\n                        'address': 'address6',\n                        'protocol': 'http',\n                        'status': 'running',\n                        'name': 'name6',\n                        'management_port': 6,\n                        'debug': default_debugger\n                    },\n                    {\n                        'type': 'Dispatcher',\n                        'service_port': 7,\n                        'address': 'address7',\n                        'protocol': 'http',\n                        'status': 'running',\n                        'name': 'name7',\n                        'management_port': 7\n                    },\n                    {\n                        'type': 'BucketStorage',\n                        'service_port': 8,\n                        'address': 'address8',\n                        'protocol': 'http',\n                        'status': 'running',\n                        'name': 'name8',\n                        'management_port': 8\n                    },\n                    {\n                        'type': 'Northbound',\n                        'service_port': 9,\n                        'address': 'address9',\n                        'protocol': 'http',\n                        'status': 'running',\n                        'name': 'name9',\n                        'management_port': 9,\n                        'debug': svc_9_debugger\n                    }\n                ]\n            }\n        assert 11 == log_patch_info.call_count\n\n    @pytest.mark.parametrize(\"_type\", [\"blah\", 1, \"storage\"])\n    async def test_bad_get_service_with_type(self, client, _type):\n        svc_type_members = 
ServiceRecord.Type._member_names_\n        expected_msg = \"{} is not a valid service type. Supported types are {}\".format(_type, svc_type_members)\n        resp = await client.get('/fledge/service?type={}'.format(_type))\n        assert 400 == resp.status\n        assert expected_msg == resp.reason\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {\"message\": expected_msg} == json_response\n\n    async def test_get_service_with_type_not_found(self, client, _type=\"Notification\"):\n        expected_msg = \"No record found for {} service type\".format(_type)\n        resp = await client.get('/fledge/service?type={}'.format(_type))\n        assert 404 == resp.status\n        assert expected_msg == resp.reason\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert {\"message\": expected_msg} == json_response\n\n    @pytest.mark.parametrize(\"_type, svc_record_count\", [\n        (\"Storage\", 1),\n        (\"Southbound\", 2),\n        (\"Northbound\", 1),\n        (\"Notification\", 1),\n        (\"Management\", 1),\n        (\"Dispatcher\", 1),\n        (\"BucketStorage\", 1)\n    ])\n    async def test_get_service_with_type(self, mocker, client, _type, svc_record_count):\n        mocker.patch.object(InterestRegistry, \"__init__\", return_value=None)\n        mocker.patch.object(InterestRegistry, \"get\", return_value=list())\n        with patch.object(ServiceRegistry._logger, 'info') as log_patch_info:\n            # populated service registry\n            ServiceRegistry.register('name1', 'Storage', 'address1', 1, 1, 'protocol1')\n            ServiceRegistry.register('name2', 'Southbound', 'address2', 2, 2, 'protocol2')\n            ServiceRegistry.register('name3', 'Northbound', 'address3', 3, 3, 'protocol3')\n            ServiceRegistry.register('name4', 'Southbound', 'address4', 4, 4, 'protocol4')\n            ServiceRegistry.register('name5', 'Notification', 'address5', 
5, 5, 'http')\n            ServiceRegistry.register('name6', 'Management', 'address6', 6, 6, 'http')\n            ServiceRegistry.register('name7', 'Dispatcher', 'address7', 7, 7, 'http')\n            ServiceRegistry.register('name8', 'BucketStorage', 'address8', 8, 8, 'http')\n        assert 8 == log_patch_info.call_count\n        resp = await client.get('/fledge/service?type={}'.format(_type))\n        assert 200 == resp.status\n        result = await resp.text()\n        json_response = json.loads(result)\n        assert svc_record_count == len(json_response['services'])\n\n    @pytest.mark.parametrize(\"payload, code, message\", [\n        ('\"blah\"', 400, \"Data payload must be a valid JSON\"''),\n        ('{}', 400, \"Missing name property in payload.\"),\n        ('{\"name\": \"test\"}', 400, \"Missing type property in payload.\"),\n        ('{\"name\": \"a;b\", \"plugin\": \"dht11\", \"type\": \"south\"}', 400, \"Invalid name property in payload.\"),\n        ('{\"name\": \"test\", \"plugin\": \"dht@11\", \"type\": \"south\"}', 400, \"Invalid plugin property in payload.\"),\n        ('{\"name\": \"test\", \"plugin\": \"dht11\", \"type\": \"south\", \"enabled\": \"blah\"}', 400,\n         'Only \"true\", \"false\", true, false are allowed for value of enabled.'),\n        ('{\"name\": \"test\", \"plugin\": \"dht11\", \"type\": \"south\", \"enabled\": \"t\"}', 400,\n         'Only \"true\", \"false\", true, false are allowed for value of enabled.'),\n        ('{\"name\": \"test\", \"plugin\": \"dht11\", \"type\": \"south\", \"enabled\": \"True\"}', 400,\n         'Only \"true\", \"false\", true, false are allowed for value of enabled.'),\n        ('{\"name\": \"test\", \"plugin\": \"dht11\", \"type\": \"south\", \"enabled\": \"False\"}', 400,\n         'Only \"true\", \"false\", true, false are allowed for value of enabled.'),\n        ('{\"name\": \"test\", \"plugin\": \"dht11\", \"type\": \"south\", \"enabled\": \"1\"}', 400,\n         'Only \"true\", 
\"false\", true, false are allowed for value of enabled.'),\n        ('{\"name\": \"test\", \"plugin\": \"dht11\", \"type\": \"south\", \"enabled\": \"0\"}', 400,\n         'Only \"true\", \"false\", true, false are allowed for value of enabled.'),\n        ('{\"name\": \"test\", \"plugin\": \"dht11\"}', 400, \"Missing type property in payload.\"),\n        ('{\"name\": \"test\", \"plugin\": \"dht11\", \"type\": \"blah\"}', 400,\n         \"Only south, north, notification, management, dispatcher, bucketstorage and pipeline types are supported.\"),\n        ('{\"name\": \"test\", \"type\": \"south\"}', 400, \"Missing plugin property for type south in payload.\")\n    ])\n    async def test_add_service_with_bad_params(self, client, code, payload, message):\n        resp = await client.post('/fledge/service', data=payload)\n        assert code == resp.status\n        assert message == resp.reason\n\n    async def test_insert_scheduled_process_exception_add_service(self, client):\n        data = {\"name\": \"furnace4\", \"type\": \"south\", \"plugin\": \"dht11\"}\n\n        async def q_result(*arg):\n            return {'count': 0, 'rows': []}\n\n        mock_plugin_info = {\n            'name': \"furnace4\",\n            'version': \"1.1\",\n            'type': \"south\",\n            'interface': \"1.0\",\n            'mode': \"async\",\n            'config': {\n                'plugin': {\n                    'description': \"DHT11\",\n                    'type': 'string',\n                    'default': 'dht11'\n                }\n            }\n        }\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await self.async_mock(None)\n        with patch.object(common, 'load_and_fetch_python_plugin_info', side_effect=[mock_plugin_info]):\n            with patch.object(service._logger, 'error') as patch_logger:\n                with patch.object(connect, 'get_storage_async', 
return_value=storage_client_mock):\n                    with patch.object(c_mgr, 'get_category_all_items',\n                                      return_value=_rv) as patch_get_cat_info:\n                        with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result) \\\n                                as query_table_patch:\n                            with patch.object(storage_client_mock, 'insert_into_tbl', side_effect=Exception()):\n                                resp = await client.post('/fledge/service', data=json.dumps(data))\n                                assert 500 == resp.status\n                                assert 'Failed to create service.' == resp.reason\n                        args1, kwargs1 = query_table_patch.call_args\n                        assert 'scheduled_processes' == args1[0]\n                        p2 = json.loads(args1[1])\n                        assert {'return': ['name'], 'where': {'column': 'name', 'condition': '=', 'value': 'south_c',\n                                                              'and': {'column': 'script', 'condition': '=',\n                                                                      'value': '[\\\"services/south_c\\\"]'}}\n                                } == p2\n                    patch_get_cat_info.assert_called_once_with(category_name=data['name'])\n            assert 1 == patch_logger.call_count\n\n    async def test_dupe_category_name_add_service(self, client):\n        mock_plugin_info = {\n            'name': \"furnace4\",\n            'version': \"1.1\",\n            'type': \"south\",\n            'interface': \"1.0\",\n            'mode': \"async\",\n            'config': {\n                'plugin': {\n                    'description': \"DHT11\",\n                    'type': 'string',\n                    'default': 'dht11'\n                }\n            }\n        }\n        data = {\"name\": \"furnace4\", \"type\": \"south\", \"plugin\": \"dht11\"}\n    
    storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await self.async_mock(mock_plugin_info)\n        with patch.object(common, 'load_and_fetch_python_plugin_info', side_effect=[mock_plugin_info]):\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_cat_info:\n                    resp = await client.post('/fledge/service', data=json.dumps(data))\n                    assert 400 == resp.status\n                    assert \"The '{}' category already exists\".format(data['name']) == resp.reason\n                patch_get_cat_info.assert_called_once_with(category_name=data['name'])\n\n    async def test_dupe_schedule_name_add_service(self, client):\n        async def q_result(*arg):\n            table = arg[0]\n            payload = arg[1]\n\n            if table == 'schedules':\n                assert {'return': ['schedule_name'], 'where': {'column': 'schedule_name', 'condition': '=',\n                                                               'value': 'furnace4'}} == json.loads(payload)\n                return {'count': 1, 'rows': [{'schedule_name': 'schedule_name'}]}\n\n        mock_plugin_info = {\n            'name': \"furnace4\",\n            'version': \"1.1\",\n            'type': \"south\",\n            'interface': \"1.0\",\n            'mode': \"async\",\n            'config': {\n                'plugin': {\n                    'description': \"DHT11\",\n                    'type': 'string',\n                    'default': 'dht11'\n                }\n            }\n        }\n        data = {\"name\": \"furnace4\", \"type\": \"south\", \"plugin\": \"dht11\"}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        msg = \"A service with {} name already 
exists.\".format(data['name'])\n        _rv = await self.async_mock(None)\n        with patch.object(common, 'load_and_fetch_python_plugin_info', side_effect=[mock_plugin_info]):\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_cat_info:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                        resp = await client.post('/fledge/service', data=json.dumps(data))\n                        assert 400 == resp.status\n                        assert msg == resp.reason\n                        result = await resp.text()\n                        json_response = json.loads(result)\n                        assert {\"message\": msg} == json_response\n                patch_get_cat_info.assert_called_once_with(category_name=data['name'])\n\n    p1 = '{\"name\": \"furnace4\", \"type\": \"south\", \"plugin\": \"dht11\"}'\n    p2 = '{\"name\": \"furnace4\", \"type\": \"south\", \"plugin\": \"dht11\", \"enabled\": false}'\n    p3 = '{\"name\": \"furnace4\", \"type\": \"south\", \"plugin\": \"dht11\", \"enabled\": true}'\n    p4 = '{\"name\": \"furnace4\", \"type\": \"south\", \"plugin\": \"dht11\", \"enabled\": \"true\"}'\n    p5 = '{\"name\": \"furnace4\", \"type\": \"south\", \"plugin\": \"dht11\", \"enabled\": \"false\"}'\n\n    @pytest.mark.parametrize(\"payload\", [p1, p2, p3, p4, p5])\n    async def test_add_service(self, client, payload):\n        data = json.loads(payload)\n\n        async def async_mock_get_schedule():\n            schedule = StartUpSchedule()\n            schedule.schedule_id = '2129cc95-c841-441a-ad39-6469a87dbc8b'\n            return schedule\n\n        async def q_result(*arg):\n            table = arg[0]\n            _payload = arg[1]\n            if table == 'scheduled_processes':\n                assert {'return': ['name'], 'where': 
{'column': 'name', 'condition': '=', 'value': 'south_c',\n                                                      'and': {'column': 'script', 'condition': '=',\n                                                              'value': '[\\\"services/south_c\\\"]'}}\n                        } == json.loads(_payload)\n                return {'count': 0, 'rows': []}\n            if table == 'schedules':\n                assert {'return': ['schedule_name'], 'where': {'column': 'schedule_name', 'condition': '=',\n                                                               'value': 'furnace4'}} == json.loads(_payload)\n                return {'count': 0, 'rows': []}\n\n        expected_insert_resp = {'rows_affected': 1, \"response\": \"inserted\"}\n        mock_plugin_info = {\n            'name': \"furnace4\",\n            'version': \"1.1\",\n            'type': \"south\",\n            'interface': \"1.0\",\n            'mode': \"async\",\n            'config': {\n                'plugin': {\n                    'description': \"DHT11 plugin\",\n                    'type': 'string',\n                    'default': 'dht11'\n                }\n            }\n        }\n        server.Server.scheduler = Scheduler(None, None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await self.async_mock(None)\n        _rv2 = await self.async_mock(expected_insert_resp)\n        _rv3 = await self.async_mock(\"\")\n        _rv4 = await async_mock_get_schedule()\n        with patch.object(common, 'load_and_fetch_python_plugin_info', side_effect=[mock_plugin_info]):\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv1) as patch_get_cat_info:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                        
with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv2) \\\n                                as insert_table_patch:\n                            with patch.object(c_mgr, 'create_category', return_value=_rv2) as patch_create_cat:\n                                with patch.object(c_mgr, 'create_child_category', return_value=_rv2) \\\n                                        as patch_create_child_cat:\n                                    with patch.object(server.Server.scheduler, 'save_schedule',\n                                                      return_value=_rv3) as patch_save_schedule:\n                                        with patch.object(server.Server.scheduler, 'get_schedule_by_name',\n                                                          return_value=_rv4) as patch_get_schedule:\n                                            resp = await client.post('/fledge/service', data=payload)\n                                            server.Server.scheduler = None\n                                            assert 200 == resp.status\n                                            result = await resp.text()\n                                            json_response = json.loads(result)\n                                            assert {'id': '2129cc95-c841-441a-ad39-6469a87dbc8b',\n                                                    'name': 'furnace4'} == json_response\n                                        patch_get_schedule.assert_called_once_with(data['name'])\n                                    patch_save_schedule.assert_called_once()\n                                patch_create_child_cat.assert_called_once_with('South', ['furnace4'])\n                            assert 2 == patch_create_cat.call_count\n                            patch_create_cat.assert_called_with('South', {}, 'South microservices', True)\n                        args, kwargs = insert_table_patch.call_args\n                        assert 'scheduled_processes' == 
args[0]\n                        p = json.loads(args[1])\n                        assert {'name': 'south_c', 'script': '[\"services/south_c\"]'} == p\n                patch_get_cat_info.assert_called_once_with(category_name='furnace4')\n\n    p1 = '{\"name\": \"DispatcherServer\", \"type\": \"dispatcher\"}'\n    p2 = '{\"name\": \"NotificationServer\", \"type\": \"notification\"}'\n    p3 = '{\"name\": \"ManagementServer\", \"type\": \"management\"}'\n    p4 = '{\"name\": \"BucketServer\", \"type\": \"bucketstorage\"}'\n\n    @pytest.mark.parametrize(\"payload\", [p1, p2, p3, p4])\n    async def test_bad_external_service(self, client, payload):\n        data = json.loads(payload)\n        with patch('os.path.exists', return_value=False):\n            resp = await client.post('/fledge/service', data=payload)\n            assert 404 == resp.status\n            msg = '{} service is not installed correctly.'.format(data['type'].capitalize())\n            assert msg == resp.reason\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert {\"message\": msg} == json_response\n\n    p1 = '{\"name\": \"NotificationServer\", \"type\": \"notification\"}'\n    p2 = '{\"name\": \"NotificationServer\", \"type\": \"notification\", \"enabled\": false}'\n    p3 = '{\"name\": \"NotificationServer\", \"type\": \"notification\", \"enabled\": true}'\n    p4 = '{\"name\": \"DispatcherServer\", \"type\": \"dispatcher\"}'\n    p5 = '{\"name\": \"DispatcherServer\", \"type\": \"dispatcher\", \"enabled\": false}'\n    p6 = '{\"name\": \"DispatcherServer\", \"type\": \"dispatcher\", \"enabled\": true}'\n    p7 = '{\"name\": \"DispatcherServer\", \"type\": \"bucketstorage\"}'\n    p8 = '{\"name\": \"DispatcherServer\", \"type\": \"bucketstorage\", \"enabled\": false}'\n    p9 = '{\"name\": \"DispatcherServer\", \"type\": \"bucketstorage\", \"enabled\": true}'\n\n    @pytest.mark.parametrize(\"payload\", [p1, p2, p3, p4, p5, p6, p7, p8, p9])\n  
  async def test_add_external_service(self, client, payload):\n        data = json.loads(payload)\n        sch_id = '45876056-e04c-4cde-8a82-1d8dbbbe6d72'\n\n        async def async_mock_get_schedule():\n            schedule = StartUpSchedule()\n            schedule.schedule_id = sch_id\n            return schedule\n\n        async def q_result(*arg):\n            table = arg[0]\n            _payload = json.loads(arg[1])\n            if table == 'schedules':\n                if _payload['return'][0] == 'process_name':\n                    assert {\"return\": [\"process_name\"]} == _payload\n                    return {'rows': [{'process_name': 'purge'}, {'process_name': 'stats collector'}], 'count': 2}\n                else:\n                    assert {\"return\": [\"schedule_name\"], \"where\": {\"column\": \"schedule_name\", \"condition\": \"=\",\n                                                                   \"value\": data['name']}} == _payload\n\n                    return {'count': 0, 'rows': []}\n            if table == 'scheduled_processes':\n                sch_ps = data['type'] if data['type'] != \"bucketstorage\" else \"bucket_storage\"\n                assert {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\",\n                                                      \"value\": \"{}_c\".format(sch_ps),\n                                                      \"and\": {\"column\": \"script\", \"condition\": \"=\",\n                                                              \"value\": \"[\\\"services/{}_c\\\"]\".format(\n                                                                  sch_ps)}}} == _payload\n                return {'count': 0, 'rows': []}\n\n        expected_insert_resp = {'rows_affected': 1, \"response\": \"inserted\"}\n        server.Server.scheduler = Scheduler(None, None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        
_rv1 = await self.async_mock(None)\n        _rv2 = await self.async_mock(expected_insert_resp)\n        _rv3 = await self.async_mock(\"\")\n        _rv4 = await async_mock_get_schedule()\n        with patch('os.path.exists', return_value=True):\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv1) as patch_get_cat_info:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                        with patch.object(storage_client_mock, 'insert_into_tbl',\n                                          return_value=_rv2) as insert_table_patch:\n                            with patch.object(server.Server.scheduler, 'save_schedule',\n                                              return_value=_rv3) as patch_save_schedule:\n                                with patch.object(server.Server.scheduler, 'get_schedule_by_name',\n                                                  return_value=_rv4) as patch_get_schedule:\n                                    resp = await client.post('/fledge/service', data=payload)\n                                    server.Server.scheduler = None\n                                    assert 200 == resp.status\n                                    result = await resp.text()\n                                    json_response = json.loads(result)\n                                    assert {'id': sch_id, 'name': data['name']} == json_response\n                                patch_get_schedule.assert_called_once_with(data['name'])\n                            patch_save_schedule.assert_called_once()\n                        args, kwargs = insert_table_patch.call_args\n                        assert 'scheduled_processes' == args[0]\n                        ps = data['type'] if data['type'] != \"bucketstorage\" else \"bucket_storage\"\n                        assert {'name': 
'{}_c'.format(ps), 'script': '[\"services/{}_c\"]'.format(\n                            ps)} == json.loads(args[1])\n                patch_get_cat_info.assert_called_once_with(category_name=data['name'])\n\n    @pytest.mark.parametrize(\"payload, svc_type\", [\n        ('{\"name\": \"NotificationServer\", \"type\": \"notification\"}', \"notification\"),\n        ('{\"name\": \"DispatcherServer\", \"type\": \"dispatcher\"}', \"dispatcher\"),\n        ('{\"name\": \"BucketServer\", \"type\": \"bucketstorage\"}', \"bucketstorage\")\n    ])\n    async def test_dupe_external_service_schedule(self, client, payload, svc_type):\n        data = json.loads(payload)\n\n        async def q_result(*arg):\n            table = arg[0]\n            _payload = json.loads(arg[1])\n            sch_ps = svc_type if svc_type != \"bucketstorage\" else \"bucket_storage\"\n            if table == 'schedules':\n                if _payload['return'][0] == 'process_name':\n                    assert {\"return\": [\"process_name\"]} == _payload\n                    return {'rows': [{'process_name': 'stats collector'}, {'process_name': '{}_c'.format(sch_ps)}],\n                            'count': 2}\n                else:\n                    assert {\"return\": [\"schedule_name\"], \"where\": {\"column\": \"schedule_name\", \"condition\": \"=\",\n                                                                   \"value\": data['name']}} == _payload\n\n                    return {'count': 0, 'rows': []}\n            if table == 'scheduled_processes':\n                assert {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\",\n                                                      \"value\": \"{}_c\".format(sch_ps),\n                                                      \"and\": {\"column\": \"script\", \"condition\": \"=\",\n                                                              \"value\": \"[\\\"services/{}_c\\\"]\".format(\n                                 
                                 sch_ps)}}} == _payload\n                return {'count': 0, 'rows': []}\n\n        expected_insert_resp = {'rows_affected': 1, \"response\": \"inserted\"}\n        server.Server.scheduler = Scheduler(None, None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await self.async_mock(None)\n        _rv2 = await self.async_mock(expected_insert_resp)\n        with patch('os.path.exists', return_value=True):\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv1) as patch_get_cat_info:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                        with patch.object(storage_client_mock, 'insert_into_tbl',\n                                          return_value=_rv2) as insert_table_patch:\n                            resp = await client.post('/fledge/service', data=payload)\n                            server.Server.scheduler = None\n                            assert 400 == resp.status\n                            svc_record = svc_type.capitalize() if svc_type != \"bucketstorage\" else \"BucketStorage\"\n                            msg = \"A {} service type schedule already exists.\".format(svc_record)\n                            assert msg == resp.reason\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            assert {\"message\": msg} == json_response\n                        args, kwargs = insert_table_patch.call_args\n                        assert 'scheduled_processes' == args[0]\n                        p = json.loads(args[1])\n                        ps = svc_type if svc_type != \"bucketstorage\" else \"bucket_storage\"\n                        assert {'name': 
'{}_c'.format(ps), 'script': '[\"services/{}_c\"]'.format(ps)} == p\n                patch_get_cat_info.assert_called_once_with(category_name=data['name'])\n\n    async def test_add_service_with_config(self, client):\n        payload = '{\"name\": \"Sine\", \"type\": \"south\", \"plugin\": \"sinusoid\", \"enabled\": \"false\",' \\\n                  ' \"config\": {\"dataPointsPerSec\": {\"value\": \"10\"}}}'\n        data = json.loads(payload)\n\n        async def async_mock_get_schedule():\n            schedule = StartUpSchedule()\n            schedule.schedule_id = '2129cc95-c841-441a-ad39-6469a87dbc8b'\n            return schedule\n\n        async def q_result(*arg):\n            table = arg[0]\n            _payload = arg[1]\n            if table == 'scheduled_processes':\n                assert {'return': ['name'], 'where': {'column': 'name', 'condition': '=', 'value': 'south_c',\n                                                      'and': {'column': 'script', 'condition': '=',\n                                                              'value': '[\\\"services/south_c\\\"]'}}\n                        } == json.loads(_payload)\n                return {'count': 0, 'rows': []}\n            if table == 'schedules':\n                assert {'return': ['schedule_name'],\n                        'where': {'column': 'schedule_name', 'condition': '=',\n                                  'value': data['name']}} == json.loads(_payload)\n                return {'count': 0, 'rows': []}\n\n        expected_insert_resp = {'rows_affected': 1, \"response\": \"inserted\"}\n        mock_plugin_info = {\n            'name': data['name'],\n            'version': \"1.1\",\n            'type': \"south\",\n            'interface': \"1.0\",\n            'mode': \"async\",\n            'config': {\n                'plugin': {\n                    'description': \"Sinusoid Plugin\",\n                    'type': 'string',\n                    'default': 'sinusoid'\n                },\n 
               'dataPointsPerSec': {\n                    'description': 'Data points per second',\n                    'type': 'integer',\n                    'default': '1',\n                    'order': '2'\n                }\n            }\n        }\n        server.Server.scheduler = Scheduler(None, None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await self.async_mock(None)\n        _rv2 = await self.async_mock(expected_insert_resp)\n        _rv3 = await self.async_mock(\"\")\n        _rv4 = await async_mock_get_schedule()\n        with patch.object(common, 'load_and_fetch_python_plugin_info', side_effect=[mock_plugin_info]):\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv1) as patch_get_cat_info:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                        with patch.object(storage_client_mock, 'insert_into_tbl',\n                                          return_value=_rv2) as insert_table_patch:\n                            with patch.object(c_mgr, 'create_category', return_value=_rv1) as patch_create_cat:\n                                with patch.object(c_mgr, 'create_child_category',\n                                                  return_value=_rv1) as patch_create_child_cat:\n                                    with patch.object(c_mgr, 'set_category_item_value_entry',\n                                                      return_value=_rv1) as patch_set_entry:\n                                        with patch.object(server.Server.scheduler, 'save_schedule',\n                                                          return_value=_rv3) as patch_save_schedule:\n                                            with patch.object(server.Server.scheduler, 
'get_schedule_by_name',\n                                                              return_value=_rv4) as patch_get_schedule:\n                                                resp = await client.post('/fledge/service', data=payload)\n                                                server.Server.scheduler = None\n                                                assert 200 == resp.status\n                                                result = await resp.text()\n                                                json_response = json.loads(result)\n                                                assert {'id': '2129cc95-c841-441a-ad39-6469a87dbc8b',\n                                                        'name': data['name']} == json_response\n                                            patch_get_schedule.assert_called_once_with(data['name'])\n                                        patch_save_schedule.assert_called_once()\n                                    patch_set_entry.assert_called_once_with(data['name'], 'dataPointsPerSec', '10')\n                                patch_create_child_cat.assert_called_once_with('South', ['Sine'])\n                            assert 2 == patch_create_cat.call_count\n                            patch_create_cat.assert_called_with('South', {}, 'South microservices', True)\n                        args, kwargs = insert_table_patch.call_args\n                        assert 'scheduled_processes' == args[0]\n                        p = json.loads(args[1])\n                        assert {'name': 'south_c', 'script': '[\"services/south_c\"]'} == p\n                patch_get_cat_info.assert_called_once_with(category_name=data['name'])\n\n    async def test_delete_service(self, mocker, client):\n        sch_id = '0178f7b6-d55c-4427-9106-245513e46416'\n        reg_id = 'd607c5be-792f-4993-96b7-b513674e7d3b'\n        name = \"Test\"\n        sch_name = \"Test Service\"\n        mock_registry = [ServiceRecord(reg_id, name, \"Southbound\", 
\"http\", \"localhost\", \"8118\", \"8118\")]\n\n        async def mock_result():\n            return {\n                        \"count\": 1,\n                        \"rows\": [\n                            {\n                                \"id\": sch_id,\n                                \"process_name\": name,\n                                \"schedule_name\": sch_name,\n                                \"schedule_type\": \"1\",\n                                \"schedule_interval\": \"0\",\n                                \"schedule_time\": \"0\",\n                                \"schedule_day\": \"0\",\n                                \"exclusive\": \"t\",\n                                \"enabled\": \"t\"\n                            },\n                        ]\n            }\n\n        delete_result = {'response': 'deleted', 'rows_affected': 1}\n        update_result = {'rows_affected': 1, \"response\": \"updated\"}\n        query_result = [{'rows': [{'name': 'Delta #123'}], 'count': 1}]\n        _rv1 = await mock_result()\n        _rv2 = asyncio.ensure_future(asyncio.sleep(.1))\n        _rv3 = await self.async_mock(delete_result)\n        _rv4 = await self.async_mock(update_result)\n        _rv5 = await self.async_mock(query_result)\n        mocker.patch.object(connect, 'get_storage_async')\n        get_schedule = mocker.patch.object(service, \"get_schedule\", return_value=_rv1)\n        scheduler = mocker.patch.object(server.Server, \"scheduler\", MagicMock())\n        delete_schedule = mocker.patch.object(scheduler, \"delete_schedule\", return_value=_rv2)\n        disable_schedule = mocker.patch.object(scheduler, \"disable_schedule\", return_value=_rv2)\n        delete_configuration = mocker.patch.object(ConfigurationManager, \"delete_category_and_children_recursively\",\n                                                   return_value=_rv2)\n        get_registry = mocker.patch.object(ServiceRegistry, 'get', return_value=mock_registry)\n        
remove_registry = mocker.patch.object(ServiceRegistry, 'remove_from_registry')\n        delete_streams = mocker.patch.object(service, \"delete_streams\", return_value=_rv3)\n        delete_plugin_data = mocker.patch.object(service, \"delete_plugin_data\", return_value=_rv3)\n        delete_filters = mocker.patch.object(service, \"delete_filters\", return_value=_rv5)\n        update_deprecated_ts_in_asset_tracker = mocker.patch.object(service, \"update_deprecated_ts_in_asset_tracker\",\n                                                                    return_value=_rv4)\n\n        mock_registry[0]._status = ServiceRecord.Status.Shutdown\n\n        resp = await client.delete(\"/fledge/service/{}\".format(sch_name))\n        assert 200 == resp.status\n        result = await resp.json()\n        assert \"Service {} deleted successfully.\".format(sch_name) == result['result']\n\n        assert 1 == get_schedule.call_count\n        args, kwargs = get_schedule.call_args_list[0]\n        assert sch_name in args\n\n        assert 1 == delete_schedule.call_count\n        delete_schedule_calls = [call(UUID('0178f7b6-d55c-4427-9106-245513e46416'))]\n        delete_schedule.assert_has_calls(delete_schedule_calls, any_order=True)\n\n        assert 1 == disable_schedule.call_count\n        disable_schedule_calls = [call(UUID('0178f7b6-d55c-4427-9106-245513e46416'))]\n        disable_schedule.assert_has_calls(disable_schedule_calls, any_order=True)\n\n        assert 1 == delete_configuration.call_count\n        args, kwargs = delete_configuration.call_args_list[0]\n        assert sch_name in args\n\n        assert 1 == get_registry.call_count\n        get_registry_calls = [call(name=sch_name)]\n        get_registry.assert_has_calls(get_registry_calls, any_order=True)\n\n        assert 1 == remove_registry.call_count\n        remove_registry_calls = [call('d607c5be-792f-4993-96b7-b513674e7d3b')]\n        remove_registry.assert_has_calls(remove_registry_calls, any_order=True)\n\n  
      assert 1 == delete_streams.call_count\n        args, kwargs = delete_streams.call_args_list[0]\n        assert sch_name in args\n\n        assert 1 == delete_plugin_data.call_count\n        args, kwargs = delete_plugin_data.call_args_list[0]\n        assert sch_name in args\n\n        assert 1 == delete_filters.call_count\n        args, kwargs = delete_filters.call_args_list[0]\n        assert sch_name in args\n\n        assert 1 == update_deprecated_ts_in_asset_tracker.call_count\n        args, kwargs = update_deprecated_ts_in_asset_tracker.call_args_list[0]\n        assert sch_name in args\n\n    async def test_delete_service_exception(self, mocker, client):\n        resp = await client.delete(\"/fledge/service\")\n        assert 405 == resp.status\n        assert 'Method Not Allowed' == resp.reason\n\n        reg_id = 'd607c5be-792f-4993-96b7-b513674e7d3b'\n        name = 'Test'\n        mock_registry = [ServiceRecord(reg_id, name, \"Southbound\", \"http\", \"localhost\", \"8118\", \"8118\")]\n\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(service, \"get_schedule\", side_effect=Exception)\n        resp = await client.delete(\"/fledge/service/{}\".format(name))\n        assert 500 == resp.status\n        assert '' == resp.reason\n\n        async def mock_bad_result():\n            return {\"count\": 0, \"rows\": []}\n\n        _rv = await mock_bad_result()\n        mock_registry[0]._status = ServiceRecord.Status.Shutdown\n        mocker.patch.object(service, \"get_schedule\", return_value=_rv)\n\n        resp = await client.delete(\"/fledge/service/{}\".format(name))\n        assert 404 == resp.status\n        assert '{} service does not exist.'.format(name) == resp.reason\n\n    async def test_post_install_package_from_repo_already_in_progress(self, client):\n        async def async_mock(return_value):\n            return return_value\n\n        pkg_name = 'fledge-service-notification'\n        param = 
{\"format\": \"repository\", \"name\": pkg_name}\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"install\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        select_row_resp = {'count': 1, 'rows': [{\n            \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n            \"name\": pkg_name,\n            \"action\": \"install\",\n            \"status\": -1,\n            \"log_file_uri\": \"\"\n        }]}\n        msg = '{} package installation already in progress'.format(pkg_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await async_mock(select_row_resp)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              return_value=_rv) as query_tbl_patch:\n                resp = await client.post('/fledge/service?action=install', data=json.dumps(param))\n                assert 429 == resp.status\n                assert msg == resp.reason\n                r = await resp.text()\n                actual = json.loads(r)\n                assert {'message': msg} == actual\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n\n    async def test_post_install_package_from_repo_already_installed(self, client):\n        async def async_mock(return_value):\n            return return_value\n\n        pkg_name = 'fledge-service-notification'\n        param = {\"format\": \"repository\", \"name\": pkg_name}\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"install\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": 
pkg_name}}}\n        svc_list = [\"storage\", \"south\", \"notification\"]\n        msg = '{} package is already installed'.format(pkg_name)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await async_mock({'count': 0, 'rows': []})\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              return_value=_rv) as query_tbl_patch:\n                with patch.object(service, 'get_service_installed', return_value=svc_list\n                                  ) as svc_list_patch:\n                    resp = await client.post('/fledge/service?action=install', data=json.dumps(param))\n                    assert 400 == resp.status\n                    assert msg == resp.reason\n                    r = await resp.text()\n                    actual = json.loads(r)\n                    assert {'message': msg} == actual\n                svc_list_patch.assert_called_once_with()\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n\n    async def test_post_service_package_from_repo(self, client, loop):\n        async def async_mock(return_value):\n            return return_value\n\n        pkg_name = 'fledge-service-notification'\n        param = {\"format\": \"repository\", \"name\": pkg_name}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        insert_row_resp = {'count': 1, 'rows': [{\n            \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n            \"name\": pkg_name,\n            \"action\": \"install\",\n            \"status\": 0,\n            \"log_file_uri\": \"\"\n        }]}\n        query_tbl_payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"install\",\n                                                             
\"and\": {\"column\": \"name\", \"condition\": \"=\",\n                                                                     \"value\": pkg_name}}}\n        svc_list = [\"storage\", \"south\"]\n        _rv1 = await async_mock({'count': 0, 'rows': []})\n        _rv2 = await async_mock(([pkg_name, \"fledge-north-http\", \"fledge-south-sinusoid\"], 'log/190801-12-41-13.log'))\n        _rv3 = await async_mock({\"response\": \"inserted\", \"rows_affected\": 1})\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv1\n                              ) as query_tbl_patch:\n                with patch.object(service, 'get_service_installed', return_value=svc_list) as svc_list_patch:\n                    with patch.object(common, 'fetch_available_packages', return_value=_rv2\n                                      ) as patch_fetch_available_package:\n                        with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv3\n                                          ) as insert_tbl_patch:\n                            with patch.object(service._logger, \"info\") as log_info:\n                                with patch('multiprocessing.Process'):\n                                    resp = await client.post('/fledge/service?action=install', data=json.dumps(param))\n                                    assert 200 == resp.status\n                                    result = await resp.text()\n                                    response = json.loads(result)\n                                    assert 'id' in response\n                                    assert '{} service installation started'.format(pkg_name) == response['message']\n                                    assert response['statusLink'].startswith('fledge/package/install/status?id=')\n                            assert 1 == log_info.call_count\n                        
    log_info.assert_called_once_with('{} service installation started...'.format(pkg_name))\n                        args, kwargs = insert_tbl_patch.call_args_list[0]\n                        assert 'packages' == args[0]\n                        actual = json.loads(args[1])\n                        assert 'id' in actual\n                        assert pkg_name == actual['name']\n                        assert 'install' == actual['action']\n                        assert -1 == actual['status']\n                        assert '' == actual['log_file_uri']\n                    patch_fetch_available_package.assert_called_once_with()\n                svc_list_patch.assert_called_once_with()\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            actual = json.loads(args[1])\n            assert query_tbl_payload == actual\n\n    @pytest.mark.parametrize(\"req_param, post_param, message\", [\n        (\"?action=install\", {\"name\": \"blah\"}, \"format param is required\"),\n        (\"?action=install\", {\"format\": \"repository\"}, \"Missing name property in payload.\"),\n        (\"?action=install\", {\"format\": \"blah\", \"name\": \"blah\"}, \"Invalid format. 
Must be 'repository'\"),\n        (\"?action=blah\", {\"format\": \"blah\", \"name\": \"blah\"}, \"blah is not a valid action\"),\n        (\"?action=install\", {\"format\": \"repository\", \"name\": \"fledge-service-notification\", \"version\": \"1.6\"},\n         \"Service semantic version is incorrect; it should be like X.Y.Z\"),\n        (\"?action=install\", {\"format\": \"repository\", \"name\": \"blah\"},\n         \"name should start with \\\"fledge-service-\\\" prefix\")\n    ])\n    async def test_bad_post_service_package_from_repo(self, client, req_param, post_param, message):\n        resp = await client.post('/fledge/service{}'.format(req_param), data=json.dumps(post_param))\n        assert 400 == resp.status\n        assert message == resp.reason\n\n    async def test_post_service_package_from_repo_is_not_available(self, client):\n        async def async_mock(return_value):\n            return return_value\n\n        pkg_name = \"fledge-service-notification\"\n        param = {\"format\": \"repository\", \"name\": pkg_name}\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"install\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        svc_list = [\"storage\", \"south\"]\n        _rv1 = await async_mock({'count': 0, 'rows': []})\n        _rv2 = await async_mock(([], 'log/190801-12-19-24'))\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                              return_value=_rv1) as query_tbl_patch:\n                with patch.object(service, 'get_service_installed', return_value=svc_list) as svc_list_patch:\n                    with patch.object(common, 'fetch_available_packages', return_value=_rv2\n                 
                     ) as patch_fetch_available_package:\n                        resp = await client.post('/fledge/service?action=install', data=json.dumps(param))\n                        assert 404 == resp.status\n                        assert \"'{} service is not available for the given repository'\".format(pkg_name) == resp.reason\n                    patch_fetch_available_package.assert_called_once_with()\n                svc_list_patch.assert_called_once_with()\n            args, kwargs = query_tbl_patch.call_args_list[0]\n            assert 'packages' == args[0]\n            assert payload == json.loads(args[1])\n\n    async def test_get_service_available(self, client):\n        async def async_mock(return_value):\n            return return_value\n\n        _rv = await async_mock(([], 'log/190801-12-19-24'))\n        with patch.object(common, 'fetch_available_packages', return_value=_rv) as patch_fetch_available_package:\n            resp = await client.get('/fledge/service/available')\n            assert 200 == resp.status\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert {'services': [], 'link': 'log/190801-12-19-24'} == json_response\n        patch_fetch_available_package.assert_called_once_with('service')\n\n    async def test_bad_get_service_available(self, client):\n        log_path = \"log/190801-12-19-24\"\n        msg = \"Fetch available service package request failed\"\n        with patch.object(common, 'fetch_available_packages', side_effect=PackageError(log_path)) as patch_fetch_available_package:\n            resp = await client.get('/fledge/service/available')\n            assert 400 == resp.status\n            assert msg == resp.reason\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert log_path == json_response['link']\n            assert msg == json_response['message']\n        
patch_fetch_available_package.assert_called_once_with('service')\n\n    @pytest.mark.parametrize(\"mock_value1, mock_value2, exp_result\", [\n        ([(['/usr/local/fledge/services'], [], [])], [(['/usr/local/fledge/python/fledge/services/management'],\n                                                       [], [])], []),\n        ([(['/usr/local/fledge/services'], [], ['fledge.services.south', 'fledge.services.storage'])], [],\n         [\"south\", \"storage\"]),\n        ([(['/usr/local/fledge/services'], [],\n           ['fledge.services.south', 'fledge.services.storage', 'fledge.services.notification'])], [],\n         [\"south\", \"storage\", \"notification\"]),\n        ([(['/usr/local/fledge/services'], [], ['fledge.services.south', 'fledge.services.storage'])],\n         [(['/usr/local/fledge/python/fledge/services/management'], [], [])], [\"south\", \"storage\"]),\n        ([(['/usr/local/fledge/services'], [], ['fledge.services.south', 'fledge.services.storage'])],\n         [(['/usr/local/fledge/python/fledge/services/management'], [], ['__main__.py'])],\n         [\"south\", \"storage\", \"management\"]),\n        ([(['/usr/local/fledge/services'], [],\n           ['fledge.services.south', 'fledge.services.storage', 'fledge.services.notification'])],\n         [(['/usr/local/fledge/python/fledge/services/management'], [], ['__main__.py'])],\n         [\"south\", \"storage\", \"notification\", \"management\"]),\n        ([(['/usr/local/fledge/services'], [],\n           ['fledge.services.south', 'fledge.services.storage', 'fledge.services.north'])], [],\n         [\"south\", \"storage\", \"north\"]),\n        ([(['/usr/local/fledge/services'], [],\n           ['fledge.services.south', 'fledge.services.storage', 'fledge.services.dispatcher'])], [],\n         [\"south\", \"storage\", \"dispatcher\"]),\n        ([(['/usr/local/fledge/services'], [],\n           ['fledge.services.south', 'fledge.services.storage', 'fledge.services.north',\n            
'fledge.services.notification', 'fledge.services.dispatcher', 'fledge.services.bucket'])], [],\n         [\"south\", \"storage\", \"north\", \"notification\", \"dispatcher\", \"bucket\"])\n    ])\n    async def test_get_service_installed(self, client, mock_value1, mock_value2, exp_result):\n        with patch('os.walk', side_effect=(mock_value1, mock_value2)) as mockwalk:\n            resp = await client.get('/fledge/service/installed')\n            assert 200 == resp.status\n            result = await resp.text()\n            json_response = json.loads(result)\n            assert json_response == {'services': exp_result}\n        assert 2 == mockwalk.call_count\n\n    p1 = '{\"name\": \"FL Agent\", \"type\": \"management\"}'\n    p2 = '{\"name\": \"FL #1\", \"type\": \"management\", \"enabled\": false}'\n    p3 = '{\"name\": \"FL_MGT\", \"type\": \"management\", \"enabled\": true}'\n\n    @pytest.mark.parametrize(\"payload\", [p1, p2, p3])\n    async def test_add_management_service(self, client, payload):\n        data = json.loads(payload)\n        sch_id = '4624d3e4-c295-4bfd-848b-8a843cc90c3f'\n\n        async def async_mock_get_schedule():\n            schedule = StartUpSchedule()\n            schedule.schedule_id = sch_id\n            return schedule\n\n        async def q_result(*arg):\n            table = arg[0]\n            _payload = json.loads(arg[1])\n            if table == 'schedules':\n                if _payload['return'][0] == 'process_name':\n                    assert {\"return\": [\"process_name\"]} == _payload\n                    return {'rows': [{'process_name': 'purge'}, {'process_name': 'stats collector'}], 'count': 2}\n                else:\n                    assert {\"return\": [\"schedule_name\"], \"where\": {\"column\": \"schedule_name\", \"condition\": \"=\",\n                                                                   \"value\": data['name']}} == _payload\n\n                    return {'count': 0, 'rows': []}\n            if 
table == 'scheduled_processes':\n                assert {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"management\",\n                                                      \"and\": {\"column\": \"script\", \"condition\": \"=\",\n                                                              \"value\": \"[\\\"services/management\\\"]\"}}\n                        } == _payload\n                return {'count': 0, 'rows': []}\n\n        expected_insert_resp = {'rows_affected': 1, \"response\": \"inserted\"}\n        server.Server.scheduler = Scheduler(None, None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await self.async_mock(None)\n        _rv2 = await self.async_mock(expected_insert_resp)\n        _rv3 = await self.async_mock(\"\")\n        _rv4 = await async_mock_get_schedule()\n        with patch('os.path.exists', return_value=True):\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv1) as patch_get_cat_info:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                        with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv2) as insert_table_patch:\n                            with patch.object(server.Server.scheduler, 'save_schedule', return_value=_rv3) as patch_save_schedule:\n                                with patch.object(server.Server.scheduler, 'get_schedule_by_name', return_value=_rv4) as patch_get_schedule:\n                                    resp = await client.post('/fledge/service', data=payload)\n                                    server.Server.scheduler = None\n                                    assert 200 == resp.status\n                                    result = await resp.text()\n        
                            json_response = json.loads(result)\n                                    assert {'id': sch_id, 'name': data['name']} == json_response\n                                patch_get_schedule.assert_called_once_with(data['name'])\n                            patch_save_schedule.assert_called_once()\n                        args, kwargs = insert_table_patch.call_args\n                        assert 'scheduled_processes' == args[0]\n                        p = json.loads(args[1])\n                        assert {'name': 'management', 'priority': 300, 'script': '[\"services/management\"]'} == p\n                patch_get_cat_info.assert_called_once_with(category_name=data['name'])\n\n    async def test_dupe_management_service_schedule(self, client):\n        payload = '{\"name\": \"FL Agent\", \"type\": \"management\"}'\n        data = json.loads(payload)\n\n        async def q_result(*arg):\n            table = arg[0]\n            _payload = json.loads(arg[1])\n            if table == 'schedules':\n                if _payload['return'][0] == 'process_name':\n                    assert {\"return\": [\"process_name\"]} == _payload\n                    return {'rows': [{'process_name': 'stats collector'}, {'process_name': 'management'}],\n                            'count': 2}\n                else:\n                    assert {\"return\": [\"schedule_name\"], \"where\": {\"column\": \"schedule_name\", \"condition\": \"=\",\n                                                                   \"value\": data['name']}} == _payload\n\n                    return {'count': 0, 'rows': []}\n            if table == 'scheduled_processes':\n                assert {\"return\": [\"name\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"management\",\n                                                      \"and\": {\"column\": \"script\", \"condition\": \"=\",\n                                                              \"value\": 
\"[\\\"services/management\\\"]\"}}\n                        } == _payload\n                return {'count': 0, 'rows': []}\n\n        expected_insert_resp = {'rows_affected': 1, \"response\": \"inserted\"}\n        msg = \"A Management service type schedule already exists.\"\n        server.Server.scheduler = Scheduler(None, None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await self.async_mock(None)\n        _rv2 = await self.async_mock(expected_insert_resp)\n        with patch('os.path.exists', return_value=True):\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items',\n                                  return_value=_rv1) as patch_get_cat_info:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                        with patch.object(storage_client_mock, 'insert_into_tbl',\n                                          return_value=_rv2) as insert_table_patch:\n                            resp = await client.post('/fledge/service', data=payload)\n                            server.Server.scheduler = None\n                            assert 400 == resp.status\n                            assert msg == resp.reason\n                            result = await resp.text()\n                            json_response = json.loads(result)\n                            assert {\"message\": msg} == json_response\n                        args, kwargs = insert_table_patch.call_args\n                        assert 'scheduled_processes' == args[0]\n                        p = json.loads(args[1])\n                        assert {'name': 'management', 'priority': 300, 'script': '[\"services/management\"]'} == p\n                patch_get_cat_info.assert_called_once_with(category_name=data['name'])\n\n    
@pytest.mark.parametrize(\"param\", [\n        \"blah\",\n        1,\n        \"storage\",\n        \"south\"\n    ])\n    async def test_bad_type_update_package(self, client, param):\n        resp = await client.put('/fledge/service/{}/name/update'.format(param), data=None)\n        assert 400 == resp.status\n        assert \"Invalid service type.\" == resp.reason\n\n    async def test_bad_update_package(self, client, _type=\"notification\", name=\"notification\"):\n        svc_list = [\"storage\", \"south\"]\n        with patch.object(service, 'get_service_installed', return_value=svc_list) as svc_list_patch:\n            resp = await client.put('/fledge/service/{}/{}/update'.format(_type, name), data=None)\n            assert 404 == resp.status\n            assert \"'{} service is not installed yet. Hence update is not possible.'\".format(name) == resp.reason\n        svc_list_patch.assert_called_once_with()\n\n    async def test_package_update_already_in_progress(self, client, _type=\"notification\", name=\"notification\"):\n        async def async_mock(return_value):\n            return return_value\n\n        pkg_name = \"fledge-service-{}\".format(name)\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n\n        select_row_resp = {'count': 1, 'rows': [{\n            \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n            \"name\": pkg_name,\n            \"action\": \"update\",\n            \"status\": -1,\n            \"log_file_uri\": \"\"\n        }]}\n        msg = '{} package update already in progress'.format(pkg_name)\n        svc_list = [\"south\", \"storage\", name]\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await async_mock(select_row_resp)\n        with patch.object(service, 'get_service_installed', 
return_value=svc_list) as svc_list_patch:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                                  return_value=_rv) as query_tbl_patch:\n                    resp = await client.put('/fledge/service/{}/{}/update'.format(_type, name),\n                                            data=None)\n                    assert 429 == resp.status\n                    assert msg == resp.reason\n                    r = await resp.text()\n                    actual = json.loads(r)\n                    assert {'message': msg} == actual\n                args, kwargs = query_tbl_patch.call_args_list[0]\n                assert 'packages' == args[0]\n                assert payload == json.loads(args[1])\n        svc_list_patch.assert_called_once_with()\n\n    async def test_package_update_when_in_use(self, client, _type=\"notification\", name=\"notification\"):\n        async def async_mock(return_value):\n            return return_value\n\n        pkg_name = \"fledge-service-{}\".format(name)\n        svc_name = \"NF #1\"\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        select_row_resp = {'count': 1, 'rows': [{\n            \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n            \"name\": pkg_name,\n            \"action\": \"update\",\n            \"status\": 0,\n            \"log_file_uri\": \"\"\n        }]}\n        delete = {\"response\": \"deleted\", \"rows_affected\": 1}\n        delete_payload = {\"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                    \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n\n        
sch_info = {'count': 1, 'rows': [\n            {'id': '6637c9ff-7090-4774-abca-07dee59a0610', 'schedule_name': svc_name, 'enabled': 't'}]}\n        insert = {\"response\": \"inserted\", \"rows_affected\": 1}\n        server.Server.scheduler = Scheduler(None, None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        svc_list = [\"south\", \"storage\", name]\n        _rv1 = await async_mock(delete)\n        _rv2 = await async_mock((True, \"Schedule successfully disabled\"))\n        _rv3 = await async_mock(insert)\n        _se1 = await async_mock(select_row_resp)\n        _se2 = await async_mock(sch_info)\n        with patch.object(service, 'get_service_installed', return_value=svc_list\n                          ) as svc_list_patch:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                                  side_effect=[_se1, _se2]) as query_tbl_patch:\n                    with patch.object(storage_client_mock, 'delete_from_tbl',\n                                      return_value=_rv1) as delete_tbl_patch:\n                        with patch.object(server.Server.scheduler, 'disable_schedule', return_value=_rv2) as disable_sch_patch:\n                            with patch.object(service._logger, \"warning\") as log_warn_patch:\n                                with patch.object(storage_client_mock, 'insert_into_tbl',\n                                                  return_value=_rv3) as insert_tbl_patch:\n                                    with patch('multiprocessing.Process'):\n                                        resp = await client.put('/fledge/service/{}/{}/update'.format(_type, name),\n                                                                data=None)\n                                        server.Server.scheduler = None\n                                        assert 200 == resp.status\n       
                                 result = await resp.text()\n                                        response = json.loads(result)\n                                        assert 'id' in response\n                                        assert '{} update started'.format(pkg_name) == response['message']\n                                        assert response['statusLink'].startswith('fledge/package/update/status?id=')\n                                args, kwargs = insert_tbl_patch.call_args_list[0]\n                                assert 'packages' == args[0]\n                                actual = json.loads(args[1])\n                                assert 'id' in actual\n                                assert pkg_name == actual['name']\n                                assert 'update' == actual['action']\n                                assert -1 == actual['status']\n                                assert '' == actual['log_file_uri']\n                            assert 1 == log_warn_patch.call_count\n                            log_warn_patch.assert_called_once_with(\n                                'Schedule is disabled for {}, as {} service of type {} is being updated...'.format(\n                                    sch_info['rows'][0]['schedule_name'], name, _type))\n                        disable_sch_patch.assert_called_once_with(UUID(sch_info['rows'][0]['id']))\n                    args, kwargs = delete_tbl_patch.call_args_list[0]\n                    assert 'packages' == args[0]\n                    assert delete_payload == json.loads(args[1])\n                args, kwargs = query_tbl_patch.call_args_list[0]\n                assert 'packages' == args[0]\n                assert payload == json.loads(args[1])\n        svc_list_patch.assert_called_once_with()\n\n    async def test_package_update_when_not_in_use(self, client, _type=\"notification\", name=\"notification\"):\n        async def async_mock(return_value):\n            return return_value\n\n      
  pkg_name = \"fledge-service-{}\".format(name)\n        svc_name = \"NF #1\"\n        payload = {\"return\": [\"status\"], \"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                                   \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n        select_row_resp = {'count': 1, 'rows': [{\n            \"id\": \"c5648940-31ec-4f78-a7a5-b1707e8fe578\",\n            \"name\": pkg_name,\n            \"action\": \"update\",\n            \"status\": 0,\n            \"log_file_uri\": \"\"\n        }]}\n        delete = {\"response\": \"deleted\", \"rows_affected\": 1}\n        delete_payload = {\"where\": {\"column\": \"action\", \"condition\": \"=\", \"value\": \"update\",\n                                    \"and\": {\"column\": \"name\", \"condition\": \"=\", \"value\": pkg_name}}}\n\n        sch_info = {'count': 1, 'rows': [\n            {'id': '6637c9ff-7090-4774-abca-07dee59a0610', 'schedule_name': svc_name, 'enabled': 'f'}]}\n        insert = {\"response\": \"inserted\", \"rows_affected\": 1}\n        server.Server.scheduler = Scheduler(None, None)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        svc_list = [\"south\", \"storage\", name]\n        _rv1 = await async_mock(delete)\n        _rv2 = await async_mock(insert)\n        _se1 = await async_mock(select_row_resp)\n        _se2 = await async_mock(sch_info)\n        with patch.object(service, 'get_service_installed', return_value=svc_list\n                          ) as svc_list_patch:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                                  side_effect=[_se1, _se2]) as query_tbl_patch:\n                    with patch.object(storage_client_mock, 'delete_from_tbl',\n                                      return_value=_rv1) as delete_tbl_patch:\n   
                     with patch.object(storage_client_mock, 'insert_into_tbl',\n                                          return_value=_rv2) as insert_tbl_patch:\n                            with patch('multiprocessing.Process'):\n                                resp = await client.put('/fledge/service/{}/{}/update'.format(_type, name), data=None)\n                                server.Server.scheduler = None\n                                assert 200 == resp.status\n                                result = await resp.text()\n                                response = json.loads(result)\n                                assert 'id' in response\n                                assert '{} update started'.format(pkg_name) == response['message']\n                                assert response['statusLink'].startswith('fledge/package/update/status?id=')\n                        args, kwargs = insert_tbl_patch.call_args_list[0]\n                        assert 'packages' == args[0]\n                        actual = json.loads(args[1])\n                        assert 'id' in actual\n                        assert pkg_name == actual['name']\n                        assert 'update' == actual['action']\n                        assert -1 == actual['status']\n                        assert '' == actual['log_file_uri']\n                    args, kwargs = delete_tbl_patch.call_args_list[0]\n                    assert 'packages' == args[0]\n                    assert delete_payload == json.loads(args[1])\n                args, kwargs = query_tbl_patch.call_args_list[0]\n                assert 'packages' == args[0]\n                assert payload == json.loads(args[1])\n        svc_list_patch.assert_called_once_with()\n\n    # TODO:  add negative tests and C type plugin add service tests\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_statistics_api.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test fledge/services/core/api/statistics.py \"\"\"\n\nimport json\nfrom unittest.mock import MagicMock, patch\nfrom aiohttp import web\nimport pytest\n\nfrom fledge.services.core import routes\nfrom fledge.services.core import connect\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\n\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestStatistics:\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    async def test_get_stats(self, client):\n        payload = {\"return\": [\"key\", \"description\", \"value\"], \"sort\": {\"column\": \"key\", \"direction\": \"asc\"}}\n        result = {\"rows\": [{\"value\": 0, \"key\": \"BUFFERED\", \"description\": \"blah1\"},\n                           {\"value\": 1, \"key\": \"READINGS\", \"description\": \"blah2\"}]\n                  }\n\n        async def mock_coro():\n            return result\n\n        _rv = await mock_coro()\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n            with patch.object(mock_async_storage_client, 'query_tbl_with_payload', return_value=_rv) as query_patch:\n                resp = await client.get(\"/fledge/statistics\")\n                assert 200 == resp.status\n                r = await resp.text()\n                assert result[\"rows\"] == json.loads(r)\n\n        args, kwargs = query_patch.call_args\n        assert json.loads(args[1]) == payload\n        query_patch.assert_called_once_with('statistics', args[1])\n\n    async def test_get_stats_exception(self, client):\n     
   result = {\"message\": \"error\"}\n\n        async def mock_coro():\n            return result\n\n        _rv = await mock_coro()\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n            with patch.object(mock_async_storage_client, 'query_tbl_with_payload', return_value=_rv):\n                resp = await client.get(\"/fledge/statistics\")\n                assert 500 == resp.status\n                assert \"Internal Server Error\" == resp.reason\n\n    @pytest.mark.parametrize(\"interval, schedule_interval\", [\n        (60, \"00:01:00\"),\n        (100, \"0:01:40\"),\n        (3660, \"1:01:00\"),\n        (3660, \"01:01:00\"),\n        (86400, \"1 day\"),\n        (86500, \"1 day, 00:01:40\"),\n        (86500, \"1 day 00:01:40\"),\n        (86400, \"1 day 0:00:00\"),\n        (86400, \"1 day, 0:00:00\"),\n        (172800, \"2 days\"),\n        (172900, \"2 days, 0:01:40\"),\n        (179999, \"2 days, 01:59:59\"),\n        (179940, \"2 days 01:59:00\"),\n        (176459, \"2 days 1:00:59\"),\n        (0, \"0 days\"),\n        (0, \"0 days, 00:00:00\"),\n        (0, \"0 days 00:00:00\"),\n        (3601, \"0 days, 1:00:01\"),\n        (100, \"0 days 0:01:40\"),\n        (864000, \"10 days\"),\n        (867600, \"10 days, 01:00:00\"),\n        (864000, \"10 days 00:00:00\"),\n        (864000, \"10 days, 0:00:00\"),\n        (867600, \"10 days 1:00:00\")\n    ])\n    async def test_get_statistics_history(self, client, interval, schedule_interval):\n        output = {\"interval\": interval, 'statistics': [{\"READINGS\": 1, \"BUFFERED\": 10, \"history_ts\": \"2018-02-20 13:16:24.321589\"},\n                                                       {\"READINGS\": 0, \"BUFFERED\": 10, \"history_ts\": \"2018-02-20 13:16:09.321589\"}]}\n        p1 = {\"return\": [{\"column\": \"history_ts\", \"alias\": \"history_ts\", \"format\": \"YYYY-MM-DD 
HH24:MI:SS.MS\"}, \"key\", \"value\"],\n              \"sort\": {\"column\": \"history_ts\", \"direction\": \"desc\"},\n              \"where\": {\"column\": \"1\", \"condition\": \"=\", \"value\": 1}}\n        p2 = {\"return\": [\"schedule_interval\"],\n              \"where\": {\"column\": \"process_name\", \"condition\": \"=\", \"value\": \"stats collector\"}}\n\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n\n            if table == 'statistics_history':\n                assert p1 == json.loads(payload)\n                return {\"rows\": [{\"key\": \"READINGS\", \"value\": 1, \"history_ts\": \"2018-02-20 13:16:24.321589\"},\n                                 {\"key\": \"BUFFERED\", \"value\": 10, \"history_ts\": \"2018-02-20 13:16:24.321589\"},\n                                 {\"key\": \"READINGS\", \"value\": 0, \"history_ts\": \"2018-02-20 13:16:09.321589\"},\n                                 {\"key\": \"BUFFERED\", \"value\": 10, \"history_ts\": \"2018-02-20 13:16:09.321589\"}]}\n\n            if table == 'schedules':\n                assert p2 == json.loads(payload)\n                return {\"rows\": [{\"schedule_interval\": schedule_interval}]}\n\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n            with patch.object(mock_async_storage_client, 'query_tbl_with_payload', side_effect=q_result) as query_patch:\n                resp = await client.get(\"/fledge/statistics/history\")\n            assert 200 == resp.status\n            r = await resp.text()\n            assert output == json.loads(r)\n        assert query_patch.called\n        assert 2 == query_patch.call_count\n\n    @pytest.mark.parametrize(\"param, time_unit_payload\", [\n        (\"?minutes=30\", {\"return\": [{\"column\": \"history_ts\", \"alias\": \"history_ts\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, \"key\", 
\"value\"],\n                         \"sort\": {\"column\": \"history_ts\", \"direction\": \"desc\"},\n                         \"where\": {\"column\": \"1\", \"condition\": \"=\", \"value\": 1, \"and\": {\"column\": \"history_ts\", \"condition\": \"newer\", \"value\": 1800}}}),\n        (\"?hours=1\", {\"return\": [{\"column\": \"history_ts\", \"alias\": \"history_ts\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, \"key\", \"value\"],\n                      \"sort\": {\"column\": \"history_ts\", \"direction\": \"desc\"},\n                      \"where\": {\"column\": \"1\", \"condition\": \"=\", \"value\": 1, \"and\": {\"column\": \"history_ts\", \"condition\": \"newer\", \"value\": 3600}}}),\n        (\"?days=1\", {\"return\": [{\"column\": \"history_ts\", \"alias\": \"history_ts\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, \"key\", \"value\"],\n                     \"sort\": {\"column\": \"history_ts\", \"direction\": \"desc\"},\n                     \"where\": {\"column\": \"1\", \"condition\": \"=\", \"value\": 1, \"and\": {\"column\": \"history_ts\", \"condition\": \"newer\", \"value\": 86400}}}),\n        (\"?minutes=10&hours=1\", {\"return\": [{\"column\": \"history_ts\", \"alias\": \"history_ts\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, \"key\", \"value\"],\n                                 \"sort\": {\"column\": \"history_ts\", \"direction\": \"desc\"},\n                                 \"where\": {\"column\": \"1\", \"condition\": \"=\", \"value\": 1, \"and\": {\"column\": \"history_ts\", \"condition\": \"newer\", \"value\": 600}}}),\n        (\"?hours=1&days=2\", {\"return\": [{\"column\": \"history_ts\", \"alias\": \"history_ts\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, \"key\", \"value\"],\n                             \"sort\": {\"column\": \"history_ts\", \"direction\": \"desc\"},\n                             \"where\": {\"column\": \"1\", \"condition\": \"=\", \"value\": 1, \"and\": {\"column\": \"history_ts\", \"condition\": \"newer\", 
\"value\": 3600}}}),\n        (\"?minutes=15&days=1\", {\"return\": [{\"column\": \"history_ts\", \"alias\": \"history_ts\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, \"key\", \"value\"],\n                                \"sort\": {\"column\": \"history_ts\", \"direction\": \"desc\"},\n                                \"where\": {\"column\": \"1\", \"condition\": \"=\", \"value\": 1, \"and\": {\"column\": \"history_ts\", \"condition\": \"newer\", \"value\": 900}}})\n    ])\n    async def test_get_statistics_history_with_time_unit(self, client, param, time_unit_payload):\n        output = {\"interval\": 60, 'statistics': [{\"READINGS\": 1, \"BUFFERED\": 10, \"history_ts\": \"2018-02-20 13:16:24.321589\"},\n                                                 {\"READINGS\": 0, \"BUFFERED\": 10, \"history_ts\": \"2018-02-20 13:16:09.321589\"}]}\n\n        p1 = {\"aggregate\": {\"operation\": \"count\", \"column\": \"*\"}}\n        p3 = {\"return\": [\"schedule_interval\"],\n              \"where\": {\"column\": \"process_name\", \"condition\": \"=\", \"value\": \"stats collector\"}}\n\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n\n            if table == 'statistics':\n                assert p1 == json.loads(payload)\n                return {\"rows\": [{\"count_*\": 2}]}\n\n            if table == 'statistics_history':\n                assert time_unit_payload == json.loads(payload)\n                return {\"rows\": [{\"key\": \"READINGS\", \"value\": 1, \"history_ts\": \"2018-02-20 13:16:24.321589\"},\n                                 {\"key\": \"BUFFERED\", \"value\": 10, \"history_ts\": \"2018-02-20 13:16:24.321589\"},\n                                 {\"key\": \"READINGS\", \"value\": 0, \"history_ts\": \"2018-02-20 13:16:09.321589\"},\n                                 {\"key\": \"BUFFERED\", \"value\": 10, \"history_ts\": \"2018-02-20 13:16:09.321589\"}]}\n\n            if table == 'schedules':\n                
assert p3 == json.loads(payload)\n                return {\"rows\": [{\"schedule_interval\": \"00:01:00\"}]}\n\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n            with patch.object(mock_async_storage_client, 'query_tbl_with_payload', side_effect=q_result) as query_patch:\n                resp = await client.get(\"/fledge/statistics/history{}\".format(param))\n            assert 200 == resp.status\n            r = await resp.text()\n            assert output == json.loads(r)\n        assert query_patch.called\n        assert 2 == query_patch.call_count\n\n    @pytest.mark.parametrize(\"param\", [\n        \"?minutes=-1\"\n    ])\n    async def test_get_statistics_history_with_time_unit_exception(self, client, param):\n        p1 = {\"return\": [\"schedule_interval\"],\n              \"where\": {\"column\": \"process_name\", \"condition\": \"=\", \"value\": \"stats collector\"}}\n\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n\n            if table == 'schedules':\n                assert p1 == json.loads(payload)\n                return {\"rows\": [{\"schedule_interval\": \"00:01:00\"}]}\n\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n            with patch.object(mock_async_storage_client, 'query_tbl_with_payload', side_effect=q_result) as query_patch:\n                resp = await client.get(\"/fledge/statistics/history{}\".format(param))\n            assert 400 == resp.status\n            assert 'Time unit must be a positive integer' == resp.reason\n        assert query_patch.called\n        assert 1 == query_patch.call_count\n\n    async def test_get_statistics_history_limit(self, client):\n        output = {\"interval\": 60, 'statistics': [{\"READINGS\": 1, \"BUFFERED\": 
10, \"history_ts\": \"2018-02-20 13:16:24.321589\"},\n                                                 {\"READINGS\": 0, \"BUFFERED\": 10, \"history_ts\": \"2018-02-20 13:16:09.321589\"}]}\n\n        p1 = {\"aggregate\": {\"operation\": \"count\", \"column\": \"*\"}}\n        # payload limit will be request limit*2 i.e. via p1 query\n        p2 = {\"return\": [{\"column\": \"history_ts\", \"alias\": \"history_ts\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, \"key\", \"value\"],\n              \"sort\": {\"column\": \"history_ts\", \"direction\": \"desc\"},\n              \"where\": {\"column\": \"1\", \"condition\": \"=\", \"value\": 1}, \"limit\": 2}\n        p3 = {\"return\": [\"schedule_interval\"],\n              \"where\": {\"column\": \"process_name\", \"condition\": \"=\", \"value\": \"stats collector\"}}\n\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n\n            if table == 'statistics':\n                assert p1 == json.loads(payload)\n                return {\"rows\": [{\"count_*\": 2}]}\n\n            if table == 'statistics_history':\n                assert p2 == json.loads(payload)\n                return {\"rows\": [{\"key\": \"READINGS\", \"value\": 1, \"history_ts\": \"2018-02-20 13:16:24.321589\"},\n                                 {\"key\": \"BUFFERED\", \"value\": 10, \"history_ts\": \"2018-02-20 13:16:24.321589\"},\n                                 {\"key\": \"READINGS\", \"value\": 0, \"history_ts\": \"2018-02-20 13:16:09.321589\"},\n                                 {\"key\": \"BUFFERED\", \"value\": 10, \"history_ts\": \"2018-02-20 13:16:09.321589\"}]}\n\n            if table == 'schedules':\n                assert p3 == json.loads(payload)\n                return {\"rows\": [{\"schedule_interval\": \"00:01:00\"}]}\n\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n          
  with patch.object(mock_async_storage_client, 'query_tbl_with_payload', side_effect=q_result) as query_patch:\n                resp = await client.get(\"/fledge/statistics/history?limit=1\")\n            assert 200 == resp.status\n            r = await resp.text()\n            assert output == json.loads(r)\n        assert query_patch.called\n        assert 3 == query_patch.call_count\n\n    @pytest.mark.parametrize(\"request_limit\", [-1, 'blah'])\n    async def test_get_statistics_history_bad_limit(self, client, request_limit):\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n        result = {\"rows\": [{\"schedule_interval\": \"00:01:00\"}]}\n\n        async def mock_coro():\n            return result\n\n        _rv = await mock_coro()\n        with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n            with patch.object(mock_async_storage_client, 'query_tbl_with_payload', return_value=_rv):\n                resp = await client.get(\"/fledge/statistics/history?limit={}\".format(request_limit))\n            assert 400 == resp.status\n            assert \"Limit must be a positive integer\" == resp.reason\n\n    async def test_get_statistics_history_no_stats_collector(self, client):\n        p1 = {\"return\": [\"schedule_interval\"],\n              \"where\": {\"column\": \"process_name\", \"condition\": \"=\", \"value\": \"stats collector\"}}\n\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n\n            if table == 'schedules':\n                assert p1 == json.loads(payload)\n                return {\"rows\": []}\n\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n            with patch.object(mock_async_storage_client, 'query_tbl_with_payload', side_effect=q_result) as query_patch:\n                resp = await 
client.get(\"/fledge/statistics/history\")\n            assert 404 == resp.status\n            assert 'No stats collector schedule found' == resp.reason\n\n        assert query_patch.called\n        assert 1 == query_patch.call_count\n\n    async def test_get_statistics_history_server_exception(self, client):\n        p1 = {\"return\": [\"history_ts\", \"key\", \"value\"]}\n        p2 = {\"return\": [\"schedule_interval\"],\n              \"where\": {\"column\": \"process_name\", \"condition\": \"=\", \"value\": \"stats collector\"}}\n\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n\n            if table == 'statistics_history':\n                assert p1 == json.loads(payload)\n                # no rows\n                return {\"message\": \"error\"}\n            if table == 'schedules':\n                assert p2 == json.loads(payload)\n                return {\"rows\": [{\"schedule_interval\": \"00:01:00\"}]}\n\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n            with patch.object(mock_async_storage_client, 'query_tbl_with_payload', side_effect=q_result) as query_patch:\n                resp = await client.get(\"/fledge/statistics/history\")\n            assert 500 == resp.status\n            assert \"Internal Server Error\" == resp.reason\n\n        assert query_patch.called\n        assert 2 == query_patch.call_count\n\n    async def test_get_statistics_history_by_key(self, client):\n        output = {\"interval\": 15, 'statistics': [{\"READINGS\": 1, \"history_ts\": \"2018-02-20 13:16:24.321589\"}, {\"READINGS\": 0, \"history_ts\": \"2018-02-20 13:16:09.321589\"}]}\n        p1 = {'where': {'value': 'stats collector', 'condition': '=', 'column': 'process_name'}, 'return': ['schedule_interval']}\n        p2 = {'aggregate': {'column': '*', 'operation': 'count'}}\n        p3 = {\"return\": 
[{\"column\": \"history_ts\", \"alias\": \"history_ts\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, \"key\", \"value\"], \"sort\": {\"column\": \"history_ts\", \"direction\": \"desc\"}, \"where\": {\"column\": \"1\", \"condition\": \"=\", \"value\": 1, \"and\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"READINGS\"}}}\n\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n\n            if table == 'schedules':\n                assert p1 == json.loads(payload)\n                return {\"rows\": [{\"schedule_interval\": \"00:00:15\"}]}\n\n            if table == 'statistics':\n                assert p2 == json.loads(payload)\n                return {\"rows\": [{\"count_*\": 2}]}\n\n            if table == 'statistics_history':\n                assert p3 == json.loads(payload)\n                return {\"rows\": [{\"key\": \"READINGS\", \"value\": 1, \"history_ts\": \"2018-02-20 13:16:24.321589\"},\n                                 {\"key\": \"READINGS\", \"value\": 0, \"history_ts\": \"2018-02-20 13:16:09.321589\"}]}\n\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n            with patch.object(mock_async_storage_client, 'query_tbl_with_payload', side_effect=q_result) as query_patch:\n                resp = await client.get(\"/fledge/statistics/history?key=READINGS\")\n            assert 200 == resp.status\n            r = await resp.text()\n            assert output == json.loads(r)\n        assert query_patch.called\n        assert 2 == query_patch.call_count\n\n    async def test_get_statistics_history_with_multiple_keys(self, client):\n        output = {\"interval\": 15, 'statistics': [{\"READINGS\": 1, \"PURGED\": 0, \"UNSENT\": 0, \"history_ts\": \"2018-02-20 13:16:24.321589\"}, {\"READINGS\": 0, \"PURGED\": 0, \"UNSENT\": 0, \"history_ts\": \"2018-02-20 13:16:09.321589\"}]}\n        p1 = 
{'where': {'value': 'stats collector', 'condition': '=', 'column': 'process_name'}, 'return': ['schedule_interval']}\n        p2 = {'aggregate': {'column': '*', 'operation': 'count'}}\n        p3 = {\"where\": {\"and\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"READINGS\", \"or\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"PURGED\", \"or\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"UNSENT\"}}}, \"column\": \"1\", \"condition\": \"=\", \"value\": 1}, \"return\": [{\"column\": \"history_ts\", \"alias\": \"history_ts\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, \"key\", \"value\"], \"sort\": {\"direction\": \"desc\", \"column\": \"history_ts\"}}\n\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n\n            if table == 'schedules':\n                assert p1 == json.loads(payload)\n                return {\"rows\": [{\"schedule_interval\": \"00:00:15\"}]}\n\n            if table == 'statistics':\n                assert p2 == json.loads(payload)\n                return {\"rows\": [{\"count_*\": 2}]}\n\n            if table == 'statistics_history':\n                assert p3 == json.loads(payload)\n                return {\"rows\": [{\"key\": \"READINGS\", \"value\": 1, \"history_ts\": \"2018-02-20 13:16:24.321589\"},\n                                 {\"key\": \"PURGED\", \"value\": 0, \"history_ts\": \"2018-02-20 13:16:24.321589\"},\n                                 {\"key\": \"UNSENT\", \"value\": 0, \"history_ts\": \"2018-02-20 13:16:24.321589\"},\n                                 {\"key\": \"READINGS\", \"value\": 0, \"history_ts\": \"2018-02-20 13:16:09.321589\"},\n                                 {\"key\": \"PURGED\", \"value\": 0, \"history_ts\": \"2018-02-20 13:16:09.321589\"},\n                                 {\"key\": \"UNSENT\", \"value\": 0, \"history_ts\": \"2018-02-20 13:16:09.321589\"}]}\n\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n    
    with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n            with patch.object(mock_async_storage_client, 'query_tbl_with_payload', side_effect=q_result) as query_patch:\n                resp = await client.get(\"/fledge/statistics/history?key=READINGS,PURGED,UNSENT\")\n            assert 200 == resp.status\n            r = await resp.text()\n            assert output == json.loads(r)\n        assert query_patch.called\n        assert 2 == query_patch.call_count\n\n    async def test_get_statistics_history_by_key_with_limit(self, client):\n        output = {\"interval\": 15, 'statistics': [{\"READINGS\": 1, \"history_ts\": \"2018-02-20 13:16:24.321589\"}]}\n        p1 = {'where': {'value': 'stats collector', 'condition': '=', 'column': 'process_name'}, 'return': ['schedule_interval']}\n        p3 = {\"return\": [{\"column\": \"history_ts\", \"alias\": \"history_ts\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, \"key\", \"value\"], \"sort\": {\"column\": \"history_ts\", \"direction\": \"desc\"}, \"where\": {\"column\": \"1\", \"condition\": \"=\", \"value\": 1, \"and\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"READINGS\"}}, \"limit\": 1}\n\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n\n            if table == 'schedules':\n                assert p1 == json.loads(payload)\n                return {\"rows\": [{\"schedule_interval\": \"00:00:15\"}]}\n\n            if table == 'statistics_history':\n                assert p3 == json.loads(payload)\n                return {\"rows\": [{\"key\": \"READINGS\", \"value\": 1, \"history_ts\": \"2018-02-20 13:16:24.321589\"}]}\n\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n            with patch.object(mock_async_storage_client, 'query_tbl_with_payload', side_effect=q_result) as query_patch:\n        
        resp = await client.get(\"/fledge/statistics/history?key=READINGS&limit=1\")\n            assert 200 == resp.status\n            r = await resp.text()\n            assert output == json.loads(r)\n        assert query_patch.called\n        assert 2 == query_patch.call_count\n\n    async def test_get_statistics_history_with_bad_key(self, client):\n        output = {\"interval\": 15, 'statistics': [{}]}\n        p1 = {'where': {'value': 'stats collector', 'condition': '=', 'column': 'process_name'}, 'return': ['schedule_interval']}\n        p2 = {'aggregate': {'column': '*', 'operation': 'count'}}\n        p3 = {\"return\": [{\"column\": \"history_ts\", \"alias\": \"history_ts\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, \"key\", \"value\"],\n              \"sort\": {\"column\": \"history_ts\", \"direction\": \"desc\"},\n              \"where\": {\"column\": \"1\", \"condition\": \"=\", \"value\": 1,\n                        \"and\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"blah\"}}}\n\n        async def q_result(*args):\n            table = args[0]\n            payload = args[1]\n\n            if table == 'schedules':\n                assert p1 == json.loads(payload)\n                return {\"rows\": [{\"schedule_interval\": \"00:00:15\"}]}\n\n            if table == 'statistics':\n                assert p2 == json.loads(payload)\n                return {\"rows\": [{\"count_*\": 0}]}\n\n            if table == 'statistics_history':\n                assert p3 == json.loads(payload)\n                return {\"rows\": []}\n\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n            with patch.object(mock_async_storage_client, 'query_tbl_with_payload', side_effect=q_result) as query_patch:\n                resp = await client.get(\"/fledge/statistics/history?key=blah\")\n            assert 200 == resp.status\n            r = 
await resp.text()\n            assert output == json.loads(r)\n        assert query_patch.called\n        assert 2 == query_patch.call_count\n\n    @pytest.mark.parametrize(\"params, msg\", [\n        (\"\", \"periods request parameter is required\"),\n        (\"?period\", \"periods request parameter is required\"),\n        (\"?periods\", \"statistics request parameter is required\"),\n        (\"?statistics\", \"periods request parameter is required\"),\n        (\"?periods=&statistics=\", \"periods cannot be an empty. Also comma separated list of values required \"\n                                  \"in case of multiple periods of time\"),\n        (\"?periods=1&statistics=\", \"statistics cannot be an empty. Also comma separated list of statistics values \"\n                                   \"required in case of multiple assets\"),\n        (\"?periods=&statistics=readings\", \"periods cannot be an empty. Also comma separated list of values \"\n                                          \"required in case of multiple periods of time\"),\n        (\"?periods=1,blah&statistics=READINGS\", \"periods should contain numbers\"),\n        (\"?periods=1,,blah&statistics=READINGS\", \"periods should contain numbers\"),\n        (\"?periods=,1,10801&statistics=1234,READINGS,\", \"The maximum allowed value for a period is 10080 minutes\")\n    ])\n    async def test_bad_get_statistics_rate(self, client, params, msg):\n        resp = await client.get(\"/fledge/statistics/rate{}\".format(params))\n        assert 400 == resp.status\n        assert msg == resp.reason\n\n    async def test_get_statistics_rate(self, client, params='?periods=1,5&statistics=READINGS'):\n        output = {'rates': {'READINGS': {'1': 45.0, '5': 9.0}}}\n        p1 = ({\"where\": {\"value\": \"stats collector\", \"condition\": \"=\", \"column\": \"process_name\"},\n               \"return\": [\"schedule_interval\"]})\n        p2 = {\"return\": [\"value\"], \"where\": {\"column\": \"key\", 
\"condition\": \"=\", \"value\": \"READINGS\"},\n              \"sort\": {\"column\": \"history_ts\", \"direction\": \"desc\"}, \"limit\": 4}\n        p3 = {\"return\": [\"value\"], \"where\": {\"column\": \"key\", \"condition\": \"=\", \"value\": \"READINGS\"},\n              \"sort\": {\"column\": \"history_ts\", \"direction\": \"desc\"}, \"limit\": 20}\n\n        async def async_mock(return_value):\n            return return_value\n\n        storage_rows = {\"rows\": [{\"value\": 15}, {\"value\": 10}, {\"value\": 5}, {\"value\": 15}], \"count\": 4}\n        _rv1 = await async_mock({\"rows\": [{\"schedule_interval\": \"00:00:15\"}]})\n        _rv2 = await async_mock(storage_rows)\n        mock_async_storage_client = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=mock_async_storage_client):\n            with patch.object(mock_async_storage_client, 'query_tbl_with_payload',\n                              side_effect=[_rv1, _rv2, _rv2]) as query_patch:\n                resp = await client.get(\"/fledge/statistics/rate{}\".format(params))\n                assert 200 == resp.status\n                r = await resp.text()\n                assert output == json.loads(r)\n            assert query_patch.called\n            assert 3 == query_patch.call_count\n            args, _ = query_patch.call_args_list[0]\n            assert 'schedules' == args[0]\n            assert p1 == json.loads(args[1])\n            args, _ = query_patch.call_args_list[1]\n            assert 'statistics_history' == args[0]\n            assert p2 == json.loads(args[1])\n            args, _ = query_patch.call_args_list[2]\n            assert 'statistics_history' == args[0]\n            assert p3 == json.loads(args[1])\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_support.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json, os, pathlib, subprocess\nfrom pathlib import PosixPath\n\nfrom unittest.mock import patch, mock_open, Mock, MagicMock\n\nfrom aiohttp import web\nimport pytest\n\nfrom fledge.common.web import middleware\nfrom fledge.services.core import routes\nfrom fledge.services.core.api import support\nfrom fledge.services.core.support import *\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestBundleSupport:\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop, middlewares=[middleware.optional_auth_middleware])\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n\n    @pytest.fixture\n    def support_bundles_dir_path(self):\n        return pathlib.Path(__file__).parent\n\n    @pytest.mark.parametrize(\"data, expected_content, expected_count\", [\n        (['support-180301-13-35-23.tar.gz', 'support-180301-13-13-13.tar.gz'], {'bundles': ['support-180301-13-35-23.tar.gz', 'support-180301-13-13-13.tar.gz']}, 2),\n        (['support-180301-15-25-02.tar.gz', 'fledge.txt'], {'bundles': ['support-180301-15-25-02.tar.gz']}, 1),\n        (['fledge.txt'], {'bundles': []}, 0),\n        ([], {'bundles': []}, 0)\n    ])\n    async def test_get_support_bundle(self, client, support_bundles_dir_path, data, expected_content, expected_count):\n        path = support_bundles_dir_path / 'support'\n        with patch.object(support, '_get_support_dir', return_value=path):\n            with patch('os.walk') as mockwalk:\n                mockwalk.return_value = [(path, [], data)]\n                resp = await client.get('/fledge/support')\n                assert 200 == resp.status\n                res = await resp.text()\n                
jdict = json.loads(res)\n                assert expected_count == len(jdict['bundles'])\n                assert expected_content == jdict\n            mockwalk.assert_called_once_with(path)\n\n    async def test_get_support_bundle_by_name(self, client, support_bundles_dir_path):\n        gz_filepath = Mock()\n        gz_filepath.open = mock_open()\n        gz_filepath.is_file.return_value = True\n        gz_filepath.stat.return_value = MagicMock()\n        gz_filepath.stat.st_size = 1024\n        bundle_name = 'support-180301-13-35-23.tar.gz'\n        filepath = Mock()\n        filepath.name = bundle_name\n        filepath.open = mock_open()\n        filepath.with_name.return_value = gz_filepath\n        with patch(\"aiohttp.web.FileResponse\",\n                   return_value=web.FileResponse(path=os.path.realpath(__file__))) as f_res:\n            path = support_bundles_dir_path / 'support'\n            with patch.object(support, '_get_support_dir', return_value=path):\n                with patch('os.path.isdir', return_value=True):\n                    with patch('os.walk') as mockwalk:\n                        mockwalk.return_value = [(path, [], [bundle_name])]\n                        resp = await client.get('/fledge/support/{}'.format(bundle_name))\n                        assert 200 == resp.status\n                        assert 'OK' == resp.reason\n                mockwalk.assert_called_once_with(path)\n                args, kwargs = f_res.call_args\n                assert {'path': PosixPath(pathlib.Path(path) / str(bundle_name))} == kwargs\n                assert 1 == f_res.call_count\n\n    @pytest.mark.parametrize(\"data, request_bundle_name\", [\n        (['support-180301-13-35-23.tar.gz'], 'xsupport-180301-01-15-13.tar.gz'),\n        ([], 'support-180301-13-13-13.tar.gz')\n    ])\n    async def test_get_support_bundle_by_name_not_found(self, client, support_bundles_dir_path, data, request_bundle_name):\n        path = support_bundles_dir_path / 
'support'\n        with patch.object(support, '_get_support_dir', return_value=path):\n            with patch('os.path.isdir', return_value=True):\n                with patch('os.walk') as mockwalk:\n                    mockwalk.return_value = [(path, [], data)]\n                    resp = await client.get('/fledge/support/{}'.format(request_bundle_name))\n                    assert 404 == resp.status\n                    assert '{} not found'.format(request_bundle_name) == resp.reason\n            mockwalk.assert_called_once_with(path)\n\n    async def test_get_support_bundle_by_name_bad_request(self, client):\n        resp = await client.get('/fledge/support/support-180301-13-35-23.tar')\n        assert 400 == resp.status\n        assert 'Bundle file extension is invalid' == resp.reason\n\n    async def test_get_support_bundle_by_name_no_dir(self, client, support_bundles_dir_path):\n        path = support_bundles_dir_path / 'invalid'\n        with patch.object(support, '_get_support_dir', return_value=path):\n            with patch('os.path.isdir', return_value=False) as mockisdir:\n                resp = await client.get('/fledge/support/bla.tar.gz')\n                assert 404 == resp.status\n                assert 'Support bundle directory does not exist' == resp.reason\n            mockisdir.assert_called_once_with(path)\n\n    async def test_create_support_bundle(self, client):\n        async def mock_build():\n            return 'support-180301-13-35-23.tar.gz'\n\n        _rv = await mock_build()    \n        mock_config = {\n            \"support_bundle_retain_count\": {\n                \"value\": \"3\",\n                \"description\": \"Number of support bundles to retain (minimum 1)\",\n                \"type\": \"integer\",\n                \"default\": \"3\",\n                \"minimum\": \"1\",\n                \"displayName\": \"Bundles To Retain\"\n            }\n        }\n        with patch.object(support, 'get_support_bundle_config', 
return_value=mock_config):\n            with patch.object(SupportBuilder, \"__init__\", return_value=None):\n                with patch.object(SupportBuilder, \"build\", return_value=_rv):\n                    resp = await client.post('/fledge/support')\n                    res = await resp.text()\n                    jdict = json.loads(res)\n                    assert 200 == resp.status\n                    assert {\"bundle created\": \"support-180301-13-35-23.tar.gz\"} == jdict\n\n    async def test_create_support_bundle_exception(self, client):\n        msg = \"Failed to create support bundle.\"\n        mock_config = {\n            \"support_bundle_retain_count\": {\n                \"value\": \"3\",\n                \"description\": \"Number of support bundles to retain (minimum 1)\",\n                \"type\": \"integer\",\n                \"default\": \"3\",\n                \"minimum\": \"1\",\n                \"displayName\": \"Bundles To Retain\"\n            }\n        }\n        with patch.object(support, 'get_support_bundle_config', return_value=mock_config):\n            with patch.object(SupportBuilder, \"__init__\", return_value=None):\n                with patch.object(SupportBuilder, \"build\", side_effect=RuntimeError(\"blah\")):\n                    with patch.object(support._logger, \"error\") as patch_logger:\n                        resp = await client.post('/fledge/support')\n                        assert 500 == resp.status\n                        assert msg == resp.reason\n                    assert 1 == patch_logger.call_count\n                    args = patch_logger.call_args\n                    assert msg == args[0][1]\n\n    async def test_get_syslog_entries_all_ok(self, client):\n        def mock_syslog():\n            return \"\"\"\n        echo \"Mar 19 14:00:53 nerd51-ThinkPad Fledge[18809] INFO: server: fledge.services.core.server: start core\n        Mar 19 14:00:53 nerd51-ThinkPad Fledge[18809] INFO: server: 
fledge.services.core.server: Management API started on http://0.0.0.0:38311\n        Mar 19 14:00:53 nerd51-ThinkPad Fledge[18809] INFO: server: fledge.services.core.server: start storage, from directory /home/asinha/Development/Fledge/scripts\n        Mar 19 14:00:54 nerd51-ThinkPad Fledge[18809] INFO: service_registry: fledge.services.core.service_registry.service_registry: Registered service instance id=479a90ec-0d1d-4845-b2c5-f1d9ce72ac8e: <Fledge Storage, type=Storage, protocol=http, address=localhost, service port=33395, management port=45952, status=1>\n        Mar 19 14:00:58 nerd51-ThinkPad Fledge[18809] INFO: server: fledge.services.core.server: start scheduler\n        Mar 19 14:00:58 nerd51-ThinkPad Fledge Storage[18809]: Registered configuration category STORAGE, registration id 3db674a7-9569-4950-a328-1204834fba7e\n        Mar 19 14:00:58 nerd51-ThinkPad Fledge[18809] INFO: scheduler: fledge.services.core.scheduler.scheduler: Starting Scheduler: Management port received is 38311\n        Mar 19 14:00:58 nerd51-ThinkPad Fledge[18809] INFO: scheduler: fledge.services.core.scheduler.scheduler: Scheduled task for schedule 'purge' to start at 2018-03-19 15:00:58.912532\n        Mar 19 14:00:58 nerd51-ThinkPad Fledge[18809] INFO: scheduler: fledge.services.core.scheduler.scheduler: Scheduled task for schedule 'stats collection' to start at 2018-03-19 14:01:13.912532\n        Mar 19 14:00:58 nerd51-ThinkPad Fledge[18809] INFO: scheduler: fledge.services.core.scheduler.scheduler: Scheduled task for schedule 'certificate checker' to start at 2018-03-19 15:05:00\n        Apr 22 13:59:39 aj Fledge S1[28584] INFO: sinusoid: module.name: Sinusoid plugin_init called\n        Apr 22 13:57:08 aj Fledge S2[26398] INFO: sinusoid: module.name: Sinusoid plugin_reconfigure called\n        Apr 22 14:04:59 aj Fledge HT[7080] INFO: sending_process: sending_process_HT: Started\"\n        \"\"\"\n\n        with patch.object(support, \"__GET_SYSLOG_CMD_TEMPLATE\", 
mock_syslog()):\n            with patch.object(support, \"__GET_SYSLOG_TOTAL_MATCHED_LINES\", \"\"\"echo \"13\" \"\"\"):\n                resp = await client.get('/fledge/syslog')\n                res = await resp.text()\n                jdict = json.loads(res)\n                assert 200 == resp.status\n                assert 13 == jdict['count']\n                assert 'INFO' in jdict['logs'][0]\n                assert 'Fledge' in jdict['logs'][0]\n                assert 'Fledge Storage' in jdict['logs'][5]\n\n    async def test_get_syslog_entries_all_with_level_error(self, client):\n        def mock_syslog():\n            return \"\"\"\n            echo \"Sep 12 13:31:41 nerd-034 Fledge PI[9241] ERROR: sending_process: sending_process_PI: cannot complete the sending operation\n            Dec 18 15:15:10 aj-ub Fledge OMF[12145]: FATAL: Signal 11 (Segmentation fault) trapped:\n            Dec 18 15:15:10 aj-ub Fledge OMF[12145]: INFO: Signal 11 (Segmentation fault) trapped:\"\n            \"\"\"\n\n        with patch.object(support, \"__GET_SYSLOG_CMD_WITH_ERROR_TEMPLATE\", mock_syslog()):\n            with patch.object(support, \"__GET_SYSLOG_ERROR_MATCHED_LINES\", \"\"\"echo \"2\" \"\"\"):\n                resp = await client.get('/fledge/syslog?level=error')\n                res = await resp.text()\n                jdict = json.loads(res)\n                assert 200 == resp.status\n                assert 2 == jdict['count']\n                assert 'ERROR' in jdict['logs'][0]\n\n    async def test_get_syslog_entries_all_with_level_warning(self, client):\n        def mock_syslog():\n            return \"\"\"\n            echo \"Sep 12 14:31:36 nerd-034 Fledge Storage[8683]: SQLite3 storage plugin raising error: UNIQUE constraint failed: readings.read_key\n            Sep 12 17:42:23 nerd-034 Fledge[16637] WARNING: server: fledge.services.core.server: A Fledge PID file has been found: [/home/fledge/Development/Fledge/data/var/run/fledge.core.pid] found, ignoring 
it.\"\n            \"\"\"\n        with patch.object(support, \"__GET_SYSLOG_CMD_WITH_WARNING_TEMPLATE\", mock_syslog()):\n            with patch.object(support, \"__GET_SYSLOG_WARNING_MATCHED_LINES\", \"\"\"echo \"2\" \"\"\"):\n                resp = await client.get('/fledge/syslog?level=warning')\n                res = await resp.text()\n                jdict = json.loads(res)\n                assert 200 == resp.status\n                assert 2 == jdict['count']\n                assert 'error' in jdict['logs'][0]\n                assert 'WARNING' in jdict['logs'][1]\n\n    async def test_get_syslog_entries_from_storage(self, client):\n        def mock_syslog():\n            return \"\"\"\n            echo \"Sep 12 14:31:41 nerd-034 Fledge Storage[8874]: Starting service...\n            Sep 12 14:46:36 nerd-034 Fledge Storage[8683]: SQLite3 storage plugin raising error: UNIQUE constraint failed: readings.read_key\n            Sep 12 14:56:41 nerd-034 Fledge Storage[8979]: warning No directory found\"\n            \"\"\"\n        with patch.object(support, \"__GET_SYSLOG_CMD_TEMPLATE\", mock_syslog()):\n            with patch.object(support, \"__GET_SYSLOG_TOTAL_MATCHED_LINES\", \"\"\"echo \"3\" \"\"\"):\n                resp = await client.get('/fledge/syslog?source=Storage')\n                res = await resp.text()\n                jdict = json.loads(res)\n                assert 200 == resp.status\n                assert 3 == jdict['count']\n                assert 'Fledge Storage' in jdict['logs'][0]\n                assert 'error' in jdict['logs'][1]\n                assert 'warning' in jdict['logs'][2]\n\n    async def test_get_syslog_entries_from_storage_with_level_warning(self, client):\n        def mock_syslog():\n            return \"\"\"\n            echo \"Sep 12 14:31:36 nerd-034 Fledge Storage[8683]: SQLite3 storage plugin raising error: UNIQUE constraint failed: readings.read_key\n            Sep 12 14:46:41 nerd-034 Fledge Storage[8979]: warning No 
directory found\"\n            \"\"\"\n        with patch.object(support, \"__GET_SYSLOG_CMD_WITH_WARNING_TEMPLATE\", mock_syslog()):\n            with patch.object(support, \"__GET_SYSLOG_WARNING_MATCHED_LINES\", \"\"\"echo \"3\" \"\"\"):\n                resp = await client.get('/fledge/syslog?source=storage&level=warning')\n                res = await resp.text()\n                jdict = json.loads(res)\n                assert 200 == resp.status\n                assert 3 == jdict['count']\n                assert 'Fledge Storage' in jdict['logs'][0]\n                assert 'error' in jdict['logs'][0]\n                assert 'warning' in jdict['logs'][1]\n\n    @pytest.mark.parametrize(\"param, message\", [\n        ('limit=-1', \"Limit must be a positive integer.\"),\n        ('offset=-1', \"Offset must be a positive integer OR Zero.\"),\n        ('limit=1&offset=-1', \"Offset must be a positive integer OR Zero.\"),\n        ('limit=-1&offset=0', \"Limit must be a positive integer.\"),\n    ])\n    async def test_bad_limit_and_offset_in_get_syslog_entries(self, client, param, message):\n        resp = await client.get('/fledge/syslog?{}'.format(param))\n        assert 400 == resp.status\n        assert message == resp.reason\n        res = await resp.text()\n        jdict = json.loads(res)\n        assert {\"message\": message} == jdict\n\n    async def test_get_syslog_entries_cmd_exception(self, client):\n        msg = 'Internal Server Error'\n        with patch.object(subprocess, \"Popen\", side_effect=Exception(msg)):\n            with patch.object(support._logger, \"error\") as patch_logger:\n                resp = await client.get('/fledge/syslog')\n                assert 500 == resp.status\n                assert msg == resp.reason\n                res = await resp.text()\n                jdict = json.loads(res)\n                assert {\"message\": msg} == jdict\n            assert 1 == patch_logger.call_count\n\n    async def 
test_get_syslog_entries_from_name(self, client):\n        def mock_syslog():\n            return \"\"\"echo \"Apr 23 18:30:21 aj Fledge Sine 1[21288] ERROR: sinusoid: module.name: Sinusoid plugin_init\" \n            \"\"\"\n        with patch.object(support, \"__GET_SYSLOG_CMD_TEMPLATE\", mock_syslog()):\n            with patch.object(support, \"__GET_SYSLOG_TOTAL_MATCHED_LINES\", \"\"\"echo \"1\" \"\"\"):\n                resp = await client.get('/fledge/syslog?source=Sine 1')\n                assert 200 == resp.status\n                res = await resp.text()\n                jdict = json.loads(res)\n                assert 1 == jdict['count']\n                assert 'Fledge Sine 1' in jdict['logs'][0]\n\n    @pytest.mark.parametrize(\"template_name, matched_lines, level, actual_count\", [\n        ('__GET_SYSLOG_CMD_WITH_ERROR_TEMPLATE', '__GET_SYSLOG_ERROR_MATCHED_LINES', 'error', 0),\n        ('__GET_SYSLOG_CMD_WITH_INFO_TEMPLATE', '__GET_SYSLOG_INFO_MATCHED_LINES', 'info', 3)\n    ])\n    async def test_get_syslog_entries_from_name_with_level(self, client, template_name, matched_lines, level,\n                                                           actual_count):\n        def mock_syslog(_level):\n            if _level == 'info':\n                return \"\"\"echo \"Apr 23 18:30:21 aj Fledge HT[31901] INFO: sending_process: sending_process_HT: Started\n                 Apr 23 18:48:52 aj Fledge HT[31901] INFO: sending_process: sending_process_HT: Stopped\n                 Apr 23 18:48:52 aj Fledge HT[31901] INFO: sending_process: sending_process_HT: Execution completed\" \"\"\"\n            else:\n                return \"\"\"echo \"\" \"\"\"\n        with patch.object(support, template_name, mock_syslog(level)):\n            with patch.object(support, matched_lines, \"\"\"echo \"{}\" \"\"\".format(actual_count)):\n                resp = await client.get('/fledge/syslog?source=HT&level={}'.format(level))\n                assert 200 == resp.status\n          
      res = await resp.text()\n                jdict = json.loads(res)\n                assert actual_count == jdict['count']\n\n    @pytest.mark.parametrize(\"level\", [\n        1,\n        \"blah\",\n        \"panic\",\n        \"emerg\",\n        \"alert\",\n        \"crit\",\n        \"notice\"\n    ])\n    async def test_bad_level_in_syslog_entry(self, client, level):\n        resp = await client.get('/fledge/syslog?level={}'.format(level))\n        msg = \"{} is invalid level. Supported levels are ['info', 'warning', 'error', 'debug']\".format(level)\n        assert 400 == resp.status\n        assert msg == resp.reason\n        res = await resp.text()\n        jdict = json.loads(res)\n        assert msg == jdict['message']\n\n    @pytest.mark.parametrize(\"template_name, matched_lines, level, actual_count\", [\n        ('__GET_SYSLOG_CMD_WITH_INFO_TEMPLATE', '__GET_SYSLOG_INFO_MATCHED_LINES', 'info', 7),\n        ('__GET_SYSLOG_CMD_WITH_ERROR_TEMPLATE', '__GET_SYSLOG_ERROR_MATCHED_LINES', 'error', 4),\n        ('__GET_SYSLOG_CMD_WITH_WARNING_TEMPLATE', '__GET_SYSLOG_WARNING_MATCHED_LINES', 'warning', 5),\n        ('__GET_SYSLOG_CMD_TEMPLATE', '__GET_SYSLOG_TOTAL_MATCHED_LINES', 'debug', 8)\n    ])\n    async def test_get_syslog_entries_with_level(self, client, template_name, matched_lines, level, actual_count):\n        def mock_syslog(_level):\n            if _level == 'info':\n                return \"\"\"echo \"Dec 21 10:20:03 aj-ub Fledge[14623] WARNING: server: fledge.services.core.server: A Fledge PID file has been found.\n                Dec 21 12:20:03 aj-ub Fledge[14623] ERROR: change_callback: fledge.services.core.interest_registry.change_callback: Unable to notify microservice with uuid dc2b2f3a-0310-426f-8d1c-8bd3853fcf2f due to exception\n                Dec 12 13:31:41 aj-ub Fledge PI[9241] ERROR: sending_process: sending_process_PI: cannot complete the sending operation\n                Dec 21 15:15:10 aj-ub Fledge OMF[12145]: FATAL: Signal 11 
(Segmentation fault) trapped:\n                Dec 21 16:52:48 aj-ub Fledge[24953] INFO: scheduler: fledge.services.core.scheduler.scheduler: Service HTC records successfully removed\n                Dec 21 16:52:54 aj-ub Fledge[24953] INFO: service_registry: fledge.services.core.service_registry.service_registry\n                Dec 21 25:15:10 aj-ub Fledge OMF[12145]: FATAL: (0) 00x55ac77b9d1b9 handler(int) + 73---------\"\n                \"\"\"\n            elif _level == 'warning':\n                return \"\"\"echo \"Dec 21 10:20:03 aj-ub Fledge[14623] WARNING: server: fledge.services.core.server: A Fledge PID file has been found.\n                Dec 21 12:20:03 aj-ub Fledge[14623] ERROR: change_callback: fledge.services.core.interest_registry.change_callback: Unable to notify microservice with uuid dc2b2f3a-0310-426f-8d1c-8bd3853fcf2f due to exception\n                Dec 12 13:31:41 aj-ub Fledge PI[9241] ERROR: sending_process: sending_process_PI: cannot complete the sending operation\n                Dec 21 15:15:10 aj-ub Fledge OMF[12145]: FATAL: Signal 11 (Segmentation fault) trapped:\n                Dec 21 25:15:10 aj-ub Fledge OMF[12145]: FATAL: (0) 00x55ac77b9d1b9 handler(int) + 73---------\" \"\"\"\n            elif _level == 'error':\n                return \"\"\"echo \"Dec 21 12:20:03 aj-ub Fledge[14623] ERROR: change_callback: fledge.services.core.interest_registry.change_callback: Unable to notify microservice with uuid dc2b2f3a-0310-426f-8d1c-8bd3853fcf2f due to exception\n                Dec 12 13:31:41 aj-ub Fledge PI[9241] ERROR: sending_process: sending_process_PI: cannot complete the sending operation\n                Dec 21 15:15:10 aj-ub Fledge OMF[12145]: FATAL: Signal 11 (Segmentation fault) trapped:\n                Dec 21 25:15:10 aj-ub Fledge OMF[12145]: FATAL: (0) 00x55ac77b9d1b9 handler(int) + 73---------\" \"\"\"\n            else:\n                return \"\"\"echo \"Dec 21 10:20:03 aj-ub Fledge[14623] WARNING: server: 
fledge.services.core.server: A Fledge PID file has been found.\n                Dec 21 12:20:03 aj-ub Fledge[14623] ERROR: change_callback: fledge.services.core.interest_registry.change_callback: Unable to notify microservice with uuid dc2b2f3a-0310-426f-8d1c-8bd3853fcf2f due to exception\n                Dec 12 13:31:41 aj-ub Fledge PI[9241] ERROR: sending_process: sending_process_PI: cannot complete the sending operation\n                Dec 21 15:15:10 aj-ub Fledge OMF[12145]: FATAL: Signal 11 (Segmentation fault) trapped:\n                Dec 21 16:52:48 aj-ub Fledge[24953] INFO: scheduler: fledge.services.core.scheduler.scheduler: Service HTC records successfully removed\n                Dec 21 16:52:54 aj-ub Fledge[24953] INFO: service_registry: fledge.services.core.service_registry.service_registry\n                Dec 21 25:15:10 aj-ub Fledge OMF[12145]: FATAL: (0) 00x55ac77b9d1b9 handler(int) + 73---------\n                Dec 21 25:15:10 aj-ub Fledge sin[11011]: DEBUG: 'sinusoid' plugin reconfigure called\" \"\"\"\n\n        with patch.object(support, template_name, mock_syslog(level)):\n            with patch.object(support, matched_lines, \"\"\"echo \"{}\" \"\"\".format(actual_count)):\n                resp = await client.get('/fledge/syslog?level={}'.format(level))\n                assert 200 == resp.status\n                res = await resp.text()\n                jdict = json.loads(res)\n                assert actual_count == jdict['count']\n\n\nclass TestGetSupportBundleConfig:\n    \"\"\"Test class for the get_support_bundle_config function\"\"\"\n\n    @pytest.mark.asyncio\n    async def test_get_support_bundle_config_success(self):\n        \"\"\"Test successful retrieval of support bundle configuration\"\"\"\n        # Mock configuration data that would be returned by ConfigurationManager\n        mock_config = {\n            \"auto_support_bundle\": {\n                \"value\": \"true\",\n                \"description\": \"Automatically create 
support bundle when service fails\",\n                \"type\": \"boolean\",\n                \"default\": \"false\",\n                \"displayName\": \"Auto Support Bundle\"\n            },\n            \"support_bundle_retain_count\": {\n                \"value\": \"3\",\n                \"description\": \"Number of support bundles to retain (minimum 1)\",\n                \"type\": \"integer\",\n                \"default\": \"3\",\n                \"minimum\": \"1\",\n                \"displayName\": \"Bundles To Retain\"\n            }\n        }\n\n        # Mock storage client\n        mock_storage_client = Mock()\n\n        # Mock configuration manager with async method\n        mock_cfg_manager = Mock()\n\n        # Track calls to get_category_all_items\n        category_calls = []\n\n        async def mock_get_category_all_items(category):\n            category_calls.append(category)\n            return mock_config\n\n        mock_cfg_manager.get_category_all_items = mock_get_category_all_items\n\n        with patch('fledge.services.core.connect.get_storage_async', return_value=mock_storage_client):\n            with patch('fledge.common.configuration_manager.ConfigurationManager', return_value=mock_cfg_manager) as mock_cfg_class:\n                result = await support.get_support_bundle_config()\n\n                # Verify the function returns the expected configuration\n                assert result == mock_config\n\n                # Verify ConfigurationManager was called with the storage client\n                mock_cfg_class.assert_called_once_with(mock_storage_client)\n\n                # Verify get_category_all_items was called with the correct category\n                assert len(category_calls) == 1\n                assert category_calls[0] == 'SUPPORT_BUNDLE'\n\n    @pytest.mark.asyncio\n    async def test_get_support_bundle_config_exception(self):\n        \"\"\"Test exception handling when configuration retrieval fails\"\"\"\n        # Mock storage client\n        mock_storage_client = Mock()\n\n        # Mock configuration manager to raise an exception\n        mock_cfg_manager = Mock()\n\n        # Track calls and raise exception\n        category_calls = []\n\n        async def mock_get_category_all_items_error(category):\n            category_calls.append(category)\n            raise Exception(\"Configuration error\")\n\n        mock_cfg_manager.get_category_all_items = mock_get_category_all_items_error\n\n        with patch('fledge.services.core.connect.get_storage_async', return_value=mock_storage_client):\n            with patch('fledge.common.configuration_manager.ConfigurationManager', return_value=mock_cfg_manager):\n                # Verify that the exception is properly propagated\n                with pytest.raises(Exception) as exc_info:\n                    await support.get_support_bundle_config()\n\n                assert str(exc_info.value) == \"Configuration error\"\n\n                # Verify the configuration manager was still called\n                assert len(category_calls) == 1\n                assert category_calls[0] == 'SUPPORT_BUNDLE'\n\n    @pytest.mark.asyncio\n    async def test_get_support_bundle_config_storage_client_exception(self):\n        \"\"\"Test exception handling when storage client creation fails\"\"\"\n        # Mock connect.get_storage_async to raise an exception\n        with patch('fledge.services.core.connect.get_storage_async', side_effect=Exception(\"Storage connection error\")):\n            # Verify that the exception is properly propagated\n            with pytest.raises(Exception) as exc_info:\n                await support.get_support_bundle_config()\n\n            assert str(exc_info.value) == \"Storage connection error\"\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/api/test_task.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport functools\nimport asyncio\nimport json\nfrom uuid import UUID\nfrom aiohttp import web\nimport pytest\nfrom unittest.mock import MagicMock, patch, call\nfrom fledge.services.core import routes\nfrom fledge.services.core import connect\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core import server\nfrom fledge.services.core.scheduler.scheduler import Scheduler\nfrom fledge.services.core.scheduler.entities import TimedSchedule\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.services.core.api import task\nfrom fledge.services.core.api.plugins import common\nfrom fledge.services.core.api.task import _logger\n\n__author__ = \"Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestTask:\n    def setup_method(self):\n        ServiceRegistry._registry = list()\n\n    def teardown_method(self):\n        ServiceRegistry._registry = list()\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(loop=loop)\n        # fill the routes table\n        routes.setup(app)\n        return loop.run_until_complete(test_client(app))\n\n    async def async_mock(self, return_value):\n        return return_value\n\n    @pytest.mark.parametrize(\"payload, code, message\", [\n        (\"blah\", 400, \"Data payload must be a valid JSON\"),\n        ({}, 400, 'Missing name property in payload.'),\n        ({\"name\": \"test\"}, 400, \"Missing plugin property in payload.\"),\n        ({\"name\": \"test\", \"plugin\": \"omf\"}, 400, 'Missing type property in payload.'),\n        ({\"name\": \"test\", \"plugin\": \"omf\", \"type\": \"north\", \"schedule_type\": 3}, 400, 
'schedule_repeat None is required for INTERVAL schedule_type.'),\n        ({\"name\": \"test\", \"plugin\": \"omf\", \"type\": \"north\", \"schedule_type\": 1}, 400, 'schedule_type cannot be STARTUP: 1')\n    ])\n    async def test_add_task_with_bad_params(self, client, code, payload, message):\n        resp = await client.post('/fledge/scheduled/task', data=json.dumps(payload))\n        assert code == resp.status\n        assert message == resp.reason\n\n    async def test_insert_scheduled_process_exception_add_task(self, client):\n        data = {\"name\": \"north bound\", \"type\": \"north\", \"schedule_type\": 3, \"plugin\": \"omf\", \"schedule_repeat\": 30}\n\n        async def q_result(*arg):\n            table = arg[0]\n            payload = arg[1]\n\n            if table == 'scheduled_processes':\n                assert {'return': ['name'],\n                        'where': {'column': 'name', 'condition': '=', 'value': 'north_c'}} == json.loads(payload)\n                return {'count': 0, 'rows': []}\n            if table == 'schedules':\n                assert {'return': ['schedule_name'],\n                        'where': {'column': 'schedule_name', 'condition': '=',\n                                  'value': 'north bound'}} == json.loads(payload)\n                return {'count': 0, 'rows': []}\n            if table == 'tasks':\n                return {'count': 0, 'rows': []}\n\n        mock_plugin_info = {\n            'name': \"north bound\",\n            'version': \"1.1\",\n            'type': \"north\",\n            'interface': \"1.0\",\n            'config': {\n                'plugin': {\n                    'description': \"North OMF plugin\",\n                    'type': 'string',\n                    'default': 'omf'\n                }\n            }\n        }\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await self.async_mock(None)\n        with 
patch.object(common, 'load_and_fetch_python_plugin_info', side_effect=[mock_plugin_info]):\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(_logger, 'error') as patch_logger:\n                    with patch.object(c_mgr, 'get_category_all_items',\n                                      return_value=_rv) as patch_get_cat_info:\n                        with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                            with patch.object(storage_client_mock, 'insert_into_tbl', side_effect=Exception()):\n                                resp = await client.post('/fledge/scheduled/task', data=json.dumps(data))\n                                assert 500 == resp.status\n                                assert 'Failed to create north instance.' == resp.reason\n                    patch_get_cat_info.assert_called_once_with(category_name=data['name'])\n                assert 1 == patch_logger.call_count\n\n    async def test_dupe_category_name_add_task(self, client):\n\n        async def q_result(*arg):\n            table = arg[0]\n\n            if table == 'tasks':\n                return {'count': 0, 'rows': []}\n\n        mock_plugin_info = {\n            'name': \"north bound\",\n            'version': \"1.1\",\n            'type': \"north\",\n            'interface': \"1.0\",\n            'config': {\n                'plugin': {\n                    'description': \"North OMF plugin\",\n                    'type': 'string',\n                    'default': 'omf'\n                }\n            }\n        }\n        data = {\"name\": \"north bound\", \"plugin\": \"omf\", \"type\": \"north\", \"schedule_type\": 3, \"schedule_repeat\": 30}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await self.async_mock(mock_plugin_info)\n        with patch.object(common, 
'load_and_fetch_python_plugin_info', side_effect=[mock_plugin_info]):\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_cat_info:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                        resp = await client.post('/fledge/scheduled/task', data=json.dumps(data))\n                        assert 400 == resp.status\n                        assert \"The '{}' category already exists\".format(data['name']) == resp.reason\n                patch_get_cat_info.assert_called_once_with(category_name=data['name'])\n\n    async def test_dupe_schedule_name_add_task(self, client):\n        async def q_result(*arg):\n            table = arg[0]\n            payload = arg[1]\n\n            if table == 'schedules':\n                assert {'return': ['schedule_name'], 'where': {'column': 'schedule_name', 'condition': '=',\n                                                               'value': 'north bound'}} == json.loads(payload)\n                return {'count': 1, 'rows': [{'schedule_name': 'schedule_name'}]}\n\n            if table == 'tasks':\n                return {'count': 0, 'rows': []}\n\n        mock_plugin_info = {\n            'name': \"north bound\",\n            'version': \"1.1\",\n            'type': \"north\",\n            'interface': \"1.0\",\n            'config': {\n                'plugin': {\n                    'description': \"North OMF plugin\",\n                    'type': 'string',\n                    'default': 'omf'\n                }\n            }\n        }\n        data = {\"name\": \"north bound\", \"plugin\": \"omf\", \"type\": \"north\", \"schedule_type\": 3, \"schedule_repeat\": 30}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv = await 
self.async_mock(None)\n        with patch.object(common, 'load_and_fetch_python_plugin_info', side_effect=[mock_plugin_info]):\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv) as patch_get_cat_info:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                        resp = await client.post('/fledge/scheduled/task', data=json.dumps(data))\n                        assert 400 == resp.status\n                        assert 'A north instance with this name already exists' == resp.reason\n                patch_get_cat_info.assert_called_once_with(category_name=data['name'])\n\n    async def test_add_task(self, client):\n        async def async_mock_get_schedule():\n            schedule = TimedSchedule()\n            schedule.schedule_id = '2129cc95-c841-441a-ad39-6469a87dbc8b'\n            return schedule\n\n        async def q_result(*arg):\n            table = arg[0]\n            payload = arg[1]\n\n            if table == 'scheduled_processes':\n                assert {'return': ['name'], 'where': {'column': 'name', 'condition': '=',\n                                                      'value': 'north_c'}} == json.loads(payload)\n                return {'count': 0, 'rows': []}\n            if table == 'schedules':\n                assert {'return': ['schedule_name'], 'where': {'column': 'schedule_name', 'condition': '=',\n                                                               'value': 'north bound'}} == json.loads(payload)\n                return {'count': 0, 'rows': []}\n\n            if table == 'tasks':\n                return {'count': 0, 'rows': []}\n\n        expected_insert_resp = {'rows_affected': 1, \"response\": \"inserted\"}\n        mock_plugin_info = {\n            'name': \"north bound\",\n            'version': \"1.1\",\n            'type': 
\"north\",\n            'interface': \"1.0\",\n            'config': {\n                'plugin': {\n                    'description': \"North OMF plugin\",\n                    'type': 'string',\n                    'default': 'omf'\n                }\n            }\n        }\n        server.Server.scheduler = Scheduler(None, None)\n        data = {\n            \"name\": \"north bound\",\n            \"plugin\": \"omf\",\n            \"type\": \"north\",\n            \"schedule_type\": 3,\n            \"schedule_day\": 0,\n            \"schedule_time\": 0,\n            \"schedule_repeat\": 30,\n            \"schedule_enabled\": True\n        }\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await self.async_mock(None)\n        _rv2 = await self.async_mock(expected_insert_resp)\n        _rv3 = await self.async_mock(\"\")\n        _rv4 = await async_mock_get_schedule()\n        with patch.object(common, 'load_and_fetch_python_plugin_info', return_value=mock_plugin_info):\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv1) as patch_get_cat_info:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                        with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv2) \\\n                                as insert_table_patch:\n                            with patch.object(c_mgr, 'create_category', return_value=_rv1) as patch_create_cat:\n                                with patch.object(c_mgr, 'create_child_category', return_value=_rv1) \\\n                                        as patch_create_child_cat:\n                                    with patch.object(server.Server.scheduler, 'save_schedule',\n                                                      
return_value=_rv3) as patch_save_schedule:\n                                        with patch.object(server.Server.scheduler, 'get_schedule_by_name',\n                                                          return_value=_rv4) as patch_get_schedule:\n                                            resp = await client.post('/fledge/scheduled/task', data=json.dumps(data))\n                                            server.Server.scheduler = None\n                                            assert 200 == resp.status\n                                            result = await resp.text()\n                                            json_response = json.loads(result)\n                                            assert {'id': '2129cc95-c841-441a-ad39-6469a87dbc8b',\n                                                    'name': 'north bound'} == json_response\n                                        patch_get_schedule.assert_called_once_with(data['name'])\n                                    patch_save_schedule.assert_called_once()\n                                patch_create_child_cat.assert_called_once_with('North', ['north bound'])\n                            calls = [call(category_description='North OMF plugin', category_name='north bound',\n                                          category_value={\n                                              'plugin': {'description': 'North OMF plugin', 'default': 'omf',\n                                                         'type': 'string'}}, keep_original_items=True),\n                                     call('North', {}, 'North tasks', True)]\n                            patch_create_cat.assert_has_calls(calls)\n                        args, kwargs = insert_table_patch.call_args\n                        assert 'scheduled_processes' == args[0]\n                        p = json.loads(args[1])\n                        assert p['name'] == 'north_c'\n                        assert p['script'] == '[\"tasks/north_c\"]'\n           
     patch_get_cat_info.assert_called_once_with(category_name=data['name'])\n\n    @pytest.mark.parametrize(\"expected_count, expected_http_code, expected_message\", [\n        (1, 400, '400: Unable to reuse name north bound, already used by a previous task.'),\n        (10, 400, '400: Unable to reuse name north bound, already used by a previous task.')\n    ])\n    async def test_add_task_twice(self, client, expected_count, expected_http_code, expected_message):\n        async def q_result(*arg):\n            table = arg[0]\n\n            if table == 'tasks':\n                return {'count': expected_count, 'rows': []}\n\n        mock_plugin_info = {\n            'name': \"north bound\",\n            'version': \"1.1\",\n            'type': \"north\",\n            'interface': \"1.0\",\n            'config': {\n                'plugin': {\n                    'description': \"North OMF plugin\",\n                    'type': 'string',\n                    'default': 'omf'\n                }\n            }\n        }\n        data = {\n            \"name\": \"north bound\",\n            \"plugin\": \"omf\",\n            \"type\": \"north\",\n            \"schedule_type\": 3,\n            \"schedule_day\": 0,\n            \"schedule_time\": 0,\n            \"schedule_repeat\": 30,\n            \"schedule_enabled\": True\n        }\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(_logger, 'warning') as patch_logger:\n            with patch.object(common, 'load_and_fetch_python_plugin_info', side_effect=[mock_plugin_info]):\n                with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                        resp = await client.post('/fledge/scheduled/task', data=json.dumps(data))\n                        result = await resp.text()\n                        assert resp.status == 
expected_http_code\n                        assert result == expected_message\n        assert 1 == patch_logger.call_count\n\n    async def test_add_task_with_config(self, client):\n        async def async_mock_get_schedule():\n            schedule = TimedSchedule()\n            schedule.schedule_id = '2129cc95-c841-441a-ad39-6469a87dbc8b'\n            return schedule\n\n        async def q_result(*arg):\n            table = arg[0]\n            payload = arg[1]\n            if table == 'scheduled_processes':\n                assert {'return': ['name'], 'where': {'column': 'name', 'condition': '=',\n                                                      'value': 'north_c'}} == json.loads(payload)\n                return {'count': 0, 'rows': []}\n            if table == 'schedules':\n                assert {'return': ['schedule_name'], 'where': {'column': 'schedule_name', 'condition': '=',\n                                                               'value': 'north bound'}} == json.loads(payload)\n                return {'count': 0, 'rows': []}\n\n            if table == 'tasks':\n                return {'count': 0, 'rows': []}\n\n        expected_insert_resp = {'rows_affected': 1, \"response\": \"inserted\"}\n        mock_plugin_info = {\n            'name': \"PI server\",\n            'version': \"1.1\",\n            'type': \"north\",\n            'interface': \"1.0\",\n            'config': {\n                'plugin': {\n                    'description': \"North PI 
plugin\",\n                    'type': 'string',\n                    'default': 'omf'\n                },\n                'producerToken': {\n                    'description': 'Producer token for this Fledge stream',\n                    'type': 'string',\n                    'default': 'pi_server_north_0001',\n                    'order': '2'\n                }\n            }\n        }\n        server.Server.scheduler = Scheduler(None, None)\n        data = {\n            \"name\": \"north bound\",\n            \"plugin\": \"omf\",\n            \"type\": \"north\",\n            \"schedule_type\": 3,\n            \"schedule_day\": 0,\n            \"schedule_time\": 0,\n            \"schedule_repeat\": 30,\n            \"schedule_enabled\": True,\n            \"config\": {\n                \"producerToken\": {\"value\": \"uid=180905062754237&sig=kx5l+\"}}\n        }\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        c_mgr = ConfigurationManager(storage_client_mock)\n        _rv1 = await self.async_mock(None)\n        _rv2 = await self.async_mock(expected_insert_resp)\n        _rv3 = await self.async_mock(\"\")\n        _rv4 = await async_mock_get_schedule()\n        with patch.object(common, 'load_and_fetch_python_plugin_info', return_value=mock_plugin_info):\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(c_mgr, 'get_category_all_items', return_value=_rv1) as patch_get_cat_info:\n                    with patch.object(storage_client_mock, 'query_tbl_with_payload', side_effect=q_result):\n                        with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv2) \\\n                                as insert_table_patch:\n                            with patch.object(c_mgr, 'create_category', return_value=_rv1) as patch_create_cat:\n                                with patch.object(c_mgr, 'create_child_category', return_value=_rv1) \\\n 
                                       as patch_create_child_cat:\n                                    with patch.object(c_mgr, 'set_category_item_value_entry',\n                                                      return_value=_rv1) as patch_set_entry:\n                                        with patch.object(server.Server.scheduler, 'save_schedule',\n                                                          return_value=_rv3) as patch_save_schedule:\n                                            with patch.object(server.Server.scheduler, 'get_schedule_by_name',\n                                                              return_value=_rv4) as patch_get_schedule:\n                                                resp = await client.post('/fledge/scheduled/task',\n                                                                         data=json.dumps(data))\n                                                server.Server.scheduler = None\n                                                assert 200 == resp.status\n                                                result = await resp.text()\n                                                json_response = json.loads(result)\n                                                assert {'id': '2129cc95-c841-441a-ad39-6469a87dbc8b',\n                                                        'name': 'north bound'} == json_response\n                                            patch_get_schedule.assert_called_once_with(data['name'])\n                                        patch_save_schedule.assert_called_once()\n                                    patch_set_entry.assert_called_once_with(data['name'], 'producerToken',\n                                                                            'uid=180905062754237&sig=kx5l+')\n                                patch_create_child_cat.assert_called_once_with('North', ['north bound'])\n                            assert 2 == patch_create_cat.call_count\n                            
patch_create_cat.assert_called_with('North', {}, 'North tasks', True)\n                        args, kwargs = insert_table_patch.call_args\n                        assert 'scheduled_processes' == args[0]\n                        p = json.loads(args[1])\n                        assert p['name'] == 'north_c'\n                        assert p['script'] == '[\"tasks/north_c\"]'\n                patch_get_cat_info.assert_called_once_with(category_name=data['name'])\n\n    async def test_delete_task(self, mocker, client):\n        sch_id = '0178f7b6-d55c-4427-9106-245513e46416'\n        sch_name = \"Test Task\"\n\n        async def mock_result():\n            return {\n                \"count\": 1,\n                \"rows\": [\n                    {\n                        \"id\": sch_id,\n                        \"process_name\": \"Test\",\n                        \"schedule_name\": sch_name,\n                        \"schedule_type\": \"3\",\n                        \"schedule_interval\": \"30\",\n                        \"schedule_time\": \"0\",\n                        \"schedule_day\": \"0\",\n                        \"exclusive\": \"t\",\n                        \"enabled\": \"t\"\n                    },\n                ]\n            }\n\n        delete_result = {'response': 'deleted', 'rows_affected': 1}\n        update_result = {'rows_affected': 1, \"response\": \"updated\"}\n        _rv1 = await mock_result()\n        _rv2 = asyncio.ensure_future(asyncio.sleep(.1))\n        _rv3 = await self.async_mock(delete_result)\n        _rv4 = await self.async_mock(update_result)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        mocker.patch.object(connect, 'get_storage_async', storage_client_mock)\n        get_schedule = mocker.patch.object(task, \"get_schedule\", return_value=_rv1)\n        scheduler = mocker.patch.object(server.Server, \"scheduler\", MagicMock())\n        delete_schedule = mocker.patch.object(scheduler, \"delete_schedule\", 
return_value=_rv2)\n        disable_schedule = mocker.patch.object(scheduler, \"disable_schedule\", return_value=_rv2)\n        delete_task_entry_with_schedule_id = mocker.patch.object(task, \"delete_task_entry_with_schedule_id\",\n                                                                 return_value=_rv2)\n        delete_configuration = mocker.patch.object(ConfigurationManager, \"delete_category_and_children_recursively\",\n                                                   return_value=_rv2)\n        delete_statistics_key = mocker.patch.object(task, \"delete_statistics_key\", return_value=_rv3)\n        delete_streams = mocker.patch.object(task, \"delete_streams\", return_value=_rv3)\n        delete_plugin_data = mocker.patch.object(task, \"delete_plugin_data\", return_value=_rv3)\n        update_deprecated_ts_in_asset_tracker = mocker.patch.object(task, \"update_deprecated_ts_in_asset_tracker\",\n                                                                    return_value=_rv4)\n\n        resp = await client.delete(\"/fledge/scheduled/task/{}\".format(sch_name))\n        assert 200 == resp.status\n        result = await resp.json()\n        assert 'North instance {} deleted successfully.'.format(sch_name) == result['result']\n\n        assert 1 == get_schedule.call_count\n        args, kwargs = get_schedule.call_args_list[0]\n        assert sch_name in args\n\n        assert 1 == delete_schedule.call_count\n        delete_schedule_calls = [call(UUID(sch_id))]\n        delete_schedule.assert_has_calls(delete_schedule_calls, any_order=True)\n\n        assert 1 == disable_schedule.call_count\n        disable_schedule_calls = [call(UUID(sch_id))]\n        disable_schedule.assert_has_calls(disable_schedule_calls, any_order=True)\n\n        assert 1 == delete_task_entry_with_schedule_id.call_count\n        args, kwargs = delete_task_entry_with_schedule_id.call_args_list[0]\n        assert UUID(sch_id) in args\n\n        assert 1 == 
delete_configuration.call_count\n        args, kwargs = delete_configuration.call_args_list[0]\n        assert sch_name in args\n\n        assert 1 == delete_statistics_key.call_count\n        args, kwargs = delete_statistics_key.call_args_list[0]\n        assert sch_name in args\n\n        assert 1 == delete_streams.call_count\n        args, kwargs = delete_streams.call_args_list[0]\n        assert sch_name in args\n\n        assert 1 == delete_plugin_data.call_count\n        args, kwargs = delete_plugin_data.call_args_list[0]\n        assert sch_name in args\n\n        assert 1 == update_deprecated_ts_in_asset_tracker.call_count\n        args, kwargs = update_deprecated_ts_in_asset_tracker.call_args_list[0]\n        assert sch_name in args\n\n    async def test_delete_task_exception(self, mocker, client):\n        resp = await client.delete(\"/fledge/scheduled/task\")\n        assert 405 == resp.status\n        assert 'Method Not Allowed' == resp.reason\n\n        mocker.patch.object(connect, 'get_storage_async')\n        mocker.patch.object(task, \"get_schedule\", side_effect=Exception)\n        with patch.object(_logger, 'error') as patch_logger:\n            resp = await client.delete(\"/fledge/scheduled/task/Test\")\n            assert 500 == resp.status\n            assert '' == resp.reason\n        assert 1 == patch_logger.call_count\n        args = patch_logger.call_args\n        assert 'Failed to delete Test north task.' == args[0][1]\n\n        async def mock_bad_result():\n            return {\"count\": 0, \"rows\": []}\n\n        _rv = await mock_bad_result()\n        mocker.patch.object(task, \"get_schedule\", return_value=_rv)\n        resp = await client.delete(\"/fledge/scheduled/task/Test\")\n        assert 404 == resp.status\n        assert 'Test north instance does not exist.' == resp.reason\n\n# TODO: Add test for negative scenarios"
  },
  {
    "path": "tests/unit/python/fledge/services/core/asset_tracker/test_asset_tracker.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nfrom unittest.mock import MagicMock, patch\nimport pytest\nfrom fledge.services.core.asset_tracker.asset_tracker import AssetTracker\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.configuration_manager import ConfigurationManager\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestAssetTracker:\n\n    async def test_init_with_no_storage(self):\n        storage_client_mock = None\n        with pytest.raises(TypeError) as excinfo:\n            AssetTracker(storage_client_mock)\n        assert 'Must be a valid Async Storage object' == str(excinfo.value)\n\n    @pytest.mark.parametrize(\"result, asset_list\", [\n        ({'rows': [], 'count': 0}, []),\n        ({'rows': [{'event': 'Ingest', 'service': 'sine', 'plugin': 'sinusoid', 'asset': 'sinusoid', 'data': '{}'}], 'count': 1}, [{'event': 'Ingest', 'service': 'sine', 'asset': 'sinusoid', 'plugin': 'sinusoid', 'data':'{}'}])\n    ])\n    async def test_load_asset_records(self, result, asset_list):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        asset_tracker = AssetTracker(storage_client_mock)\n        asset_tracker._registered_asset_records = []\n\n        async def mock_coro():\n            return result\n\n        _rv = await mock_coro()\n        with patch.object(asset_tracker._storage, 'query_tbl_with_payload', return_value=_rv) as patch_query_tbl:\n            await asset_tracker.load_asset_records()\n            assert asset_list == asset_tracker._registered_asset_records\n        patch_query_tbl.assert_called_once_with('asset_tracker', '{\"return\": [\"asset\", \"event\", \"service\", \"plugin\", \"data\"]}')\n\n    async def test_add_asset_record(self):\n        storage_client_mock = 
MagicMock(spec=StorageClientAsync)\n        asset_tracker = AssetTracker(storage_client_mock)\n        cfg_manager = ConfigurationManager(storage_client_mock)\n        asset_tracker._registered_asset_records = []\n        payload = {\"plugin\": \"sinusoid\", \"asset\": \"sinusoid\", \"event\": \"Ingest\", \"fledge\": \"Fledge\", \"service\": \"sine\", \"data\":\"{}\"}\n\n        async def mock_coro():\n            return {\"default\": \"Fledge\", \"value\": \"Fledge\", \"type\": \"string\", \"description\": \"Name of this Fledge service\"}\n\n        async def mock_coro2():\n            return {\"response\": \"inserted\", \"rows_affected\": 1}\n\n        _rv1 = await mock_coro()\n        _rv2 = await mock_coro2()\n        with patch.object(cfg_manager, 'get_category_item', return_value=_rv1) as patch_get_cat_item:\n            with patch.object(asset_tracker._storage, 'insert_into_tbl', return_value=_rv2) as patch_insert_tbl:\n                result = await asset_tracker.add_asset_record(asset='sinusoid', event='Ingest', service='sine', plugin='sinusoid', jsondata='{}')\n                assert payload == result\n            args, kwargs = patch_insert_tbl.call_args\n            assert 'asset_tracker' == args[0]\n            assert payload == json.loads(args[1])\n        patch_get_cat_item.assert_called_once_with(category_name='service', item_name='name')\n\n    # TODO: will add -ve tests later\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/interest_registry/test_change_callback.py",
    "content": "from unittest.mock import MagicMock, patch, Mock, call\nimport pytest\nimport aiohttp\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.services.core.interest_registry.interest_registry import InterestRegistry, InterestRegistrySingleton\nimport fledge.services.core.interest_registry.change_callback as cb\n\n\n__author__ = \"Ashwin Gopalakrishnan\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestChangeCallback:\n\n    def setup_method(self):\n        InterestRegistrySingleton._shared_state = {}\n        ServiceRegistry._registry = []\n\n    def teardown_method(self):\n        InterestRegistrySingleton._shared_state = {}\n        ServiceRegistry._registry = []\n\n    @pytest.mark.asyncio\n    async def test_run_good(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        cfg_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(ServiceRegistry._logger, 'info') as log_info:\n            s_id_1 = ServiceRegistry.register(\n                'sname1', 'Storage', 'saddress1', 1, 1, 'http')\n            s_id_2 = ServiceRegistry.register(\n                'sname2', 'Southbound', 'saddress2', 2, 2, 'http')\n            s_id_3 = ServiceRegistry.register(\n                'sname3', 'Southbound', 'saddress3', 3, 3, 'http')\n        assert 3 == log_info.call_count\n        i_reg = InterestRegistry(cfg_mgr)\n        i_reg.register(s_id_1, 'catname1')\n        i_reg.register(s_id_1, 'catname2')\n        i_reg.register(s_id_2, 'catname1')\n        i_reg.register(s_id_2, 'catname2')\n        i_reg.register(s_id_3, 'catname3')\n\n        # used to mock client session context manager\n        async def async_mock(return_value):\n            return 
return_value\n\n        class AsyncSessionContextManagerMock(MagicMock):\n            def __init__(self, *args, **kwargs):\n                super().__init__(*args, **kwargs)\n\n            async def __aenter__(self):\n                client_response_mock = MagicMock(spec=aiohttp.ClientResponse)\n                client_response_mock.text.side_effect = [async_mock(None)]\n                status_mock = Mock()\n                status_mock.side_effect = [200]\n                client_response_mock.status = status_mock()\n                return client_response_mock\n\n            async def __aexit__(self, *args):\n                return None\n\n        _rv = await async_mock(None)\n        with patch.object(ConfigurationManager, 'get_category_all_items', return_value=_rv) as cm_get_patch:\n            with patch.object(aiohttp.ClientSession, 'post', return_value=AsyncSessionContextManagerMock()) as post_patch:\n                await cb.run('catname1')\n            post_patch.assert_has_calls([call('http://saddress1:1/fledge/change', data='{\"category\": \"catname1\", \"items\": null}', headers={'content-type': 'application/json'}),\n                                         call('http://saddress2:2/fledge/change', data='{\"category\": \"catname1\", \"items\": null}', headers={'content-type': 'application/json'})])\n        cm_get_patch.assert_called_once_with('catname1')\n\n        with patch.object(ConfigurationManager, 'get_category_all_items', return_value=_rv) as cm_get_patch:\n            with patch.object(aiohttp.ClientSession, 'post', return_value=AsyncSessionContextManagerMock()) as post_patch:\n                await cb.run('catname2')\n            post_patch.assert_has_calls([call('http://saddress1:1/fledge/change', data='{\"category\": \"catname2\", \"items\": null}', headers={'content-type': 'application/json'}),\n                                         call('http://saddress2:2/fledge/change', data='{\"category\": \"catname2\", \"items\": null}', 
headers={'content-type': 'application/json'})])\n        cm_get_patch.assert_called_once_with('catname2')\n\n        with patch.object(ConfigurationManager, 'get_category_all_items', return_value=_rv) as cm_get_patch:\n            with patch.object(aiohttp.ClientSession, 'post', return_value=AsyncSessionContextManagerMock()) as post_patch:\n                await cb.run('catname3')\n            post_patch.assert_called_once_with('http://saddress3:3/fledge/change', data='{\"category\": \"catname3\", \"items\": null}', headers={'content-type': 'application/json'})\n        cm_get_patch.assert_called_once_with('catname3')\n\n    @pytest.mark.asyncio\n    async def test_run_empty_interests(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        cfg_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(ServiceRegistry._logger, 'info') as log_info:\n            ServiceRegistry.register('sname1', 'Storage', 'saddress1', 1, 1, 'http')\n            ServiceRegistry.register('sname2', 'Southbound', 'saddress2', 2, 2, 'http')\n            ServiceRegistry.register('sname3', 'Southbound', 'saddress3', 3, 3, 'http')\n        assert 3 == log_info.call_count\n        InterestRegistry(cfg_mgr)\n\n        # used to mock client session context manager\n        async def async_mock(return_value):\n            return return_value\n\n        class AsyncSessionContextManagerMock(MagicMock):\n            def __init__(self, *args, **kwargs):\n                super().__init__(*args, **kwargs)\n\n            async def __aenter__(self):\n                client_response_mock = MagicMock(spec=aiohttp.ClientResponse)\n                client_response_mock.text.side_effect = [async_mock(None)]\n                status_mock = Mock()\n                status_mock.side_effect = [200]\n                client_response_mock.status = status_mock()\n                return client_response_mock\n\n            async def __aexit__(self, *args):\n                
return None\n\n        with patch.object(ConfigurationManager, 'get_category_all_items') as cm_get_patch:\n            with patch.object(aiohttp.ClientSession, 'post') as post_patch:\n                await cb.run('catname1')\n            post_patch.assert_not_called()\n        cm_get_patch.assert_not_called()\n\n    @pytest.mark.asyncio\n    async def test_run_no_interests_in_cat(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        cfg_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(ServiceRegistry._logger, 'info') as log_info:\n            s_id_1 = ServiceRegistry.register(\n                'sname1', 'Storage', 'saddress1', 1, 1, 'http')\n            s_id_2 = ServiceRegistry.register(\n                'sname2', 'Southbound', 'saddress2', 2, 2, 'http')\n            s_id_3 = ServiceRegistry.register(\n                'sname3', 'Southbound', 'saddress3', 3, 3, 'http')\n        assert 3 == log_info.call_count\n        i_reg = InterestRegistry(cfg_mgr)\n        i_reg.register(s_id_1, 'catname2')\n        i_reg.register(s_id_2, 'catname2')\n        i_reg.register(s_id_3, 'catname3')\n\n        # used to mock client session context manager\n        async def async_mock(return_value):\n            return return_value\n\n        class AsyncSessionContextManagerMock(MagicMock):\n            def __init__(self, *args, **kwargs):\n                super().__init__(*args, **kwargs)\n\n            async def __aenter__(self):\n                client_response_mock = MagicMock(spec=aiohttp.ClientResponse)\n                client_response_mock.text.side_effect = [async_mock(None)]\n                status_mock = Mock()\n                status_mock.side_effect = [200]\n                client_response_mock.status = status_mock()\n                return client_response_mock\n\n            async def __aexit__(self, *args):\n                return None\n        with patch.object(ConfigurationManager, 'get_category_all_items') as 
cm_get_patch:\n            with patch.object(aiohttp.ClientSession, 'post') as post_patch:\n                await cb.run('catname1')\n            post_patch.assert_not_called()\n        cm_get_patch.assert_not_called()\n\n    @pytest.mark.asyncio\n    async def test_run_missing_service_record(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        cfg_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(ServiceRegistry._logger, 'info') as log_info:\n            s_id_1 = ServiceRegistry.register(\n                'sname1', 'Storage', 'saddress1', 1, 1, 'http')\n            s_id_2 = ServiceRegistry.register(\n                'sname2', 'Southbound', 'saddress2', 2, 2, 'http')\n            s_id_3 = ServiceRegistry.register(\n                'sname3', 'Southbound', 'saddress3', 3, 3, 'http')\n        assert 3 == log_info.call_count\n\n        i_reg = InterestRegistry(cfg_mgr)\n        i_reg.register('fakeid', 'catname1')\n        i_reg.register(s_id_1, 'catname1')\n        i_reg.register(s_id_1, 'catname2')\n        i_reg.register(s_id_2, 'catname1')\n        i_reg.register(s_id_2, 'catname2')\n        i_reg.register(s_id_3, 'catname3')\n\n        # used to mock client session context manager\n        async def async_mock(return_value):\n            return return_value\n\n        class AsyncSessionContextManagerMock(MagicMock):\n            def __init__(self, *args, **kwargs):\n                super().__init__(*args, **kwargs)\n\n            async def __aenter__(self):\n                client_response_mock = MagicMock(spec=aiohttp.ClientResponse)\n                client_response_mock.text.side_effect = [async_mock(None)]\n                status_mock = Mock()\n                status_mock.side_effect = [200]\n                client_response_mock.status = status_mock()\n                return client_response_mock\n\n            async def __aexit__(self, *args):\n                return None\n\n        _rv = await 
async_mock(None)\n        with patch.object(ConfigurationManager, 'get_category_all_items', return_value=_rv) as cm_get_patch:\n            with patch.object(aiohttp.ClientSession, 'post', return_value=AsyncSessionContextManagerMock()) as post_patch:\n                with patch.object(cb._LOGGER, 'exception') as exception_patch:\n                    await cb.run('catname1')\n                exception_patch.assert_called_once_with(\n                    'Unable to notify microservice with uuid %s as it is not found in the service registry', 'fakeid')\n            post_patch.assert_has_calls([call('http://saddress1:1/fledge/change', data='{\"category\": \"catname1\", \"items\": null}', headers={'content-type': 'application/json'}),\n                                         call('http://saddress2:2/fledge/change', data='{\"category\": \"catname1\", \"items\": null}', headers={'content-type': 'application/json'})])\n        cm_get_patch.assert_called_once_with('catname1')\n\n    @pytest.mark.asyncio\n    async def test_run_general_exception(self):\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        cfg_mgr = ConfigurationManager(storage_client_mock)\n\n        with patch.object(ServiceRegistry._logger, 'info') as log_info:\n            s_id_1 = ServiceRegistry.register('sname1', 'Storage', 'saddress1', 1, 1, 'http')\n        assert 1 == log_info.call_count\n        args, kwargs = log_info.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <sname1, type=Storage, protocol=http, address=saddress1, service port=1, management port=1, status=1>')\n\n        i_reg = InterestRegistry(cfg_mgr)\n        i_reg.register(s_id_1, 'catname1')\n\n        # used to mock client session context manager\n        async def async_mock(return_value):\n            return return_value\n\n        _rv = await async_mock(None)\n        with patch.object(ConfigurationManager, 'get_category_all_items', 
return_value=_rv) as cm_get_patch:\n            with patch.object(aiohttp.ClientSession, 'post', side_effect=Exception) as post_patch:\n                with patch.object(cb._LOGGER, 'exception') as patch_logger:\n                    await cb.run('catname1')\n                args = patch_logger.call_args\n                assert 'Unable to notify microservice with uuid {}'.format(s_id_1) == args[0][1]\n            post_patch.assert_has_calls(\n                [call('http://saddress1:1/fledge/change', data='{\"category\": \"catname1\", \"items\": null}',\n                      headers={'content-type': 'application/json'})])\n        cm_get_patch.assert_called_once_with('catname1')\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/interest_registry/test_interest_registry.py",
    "content": "# -*- coding: utf-8 -*-\n\nimport pytest\nfrom unittest.mock import MagicMock\nfrom unittest.mock import patch\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.services.core.interest_registry.interest_registry import InterestRegistry\nfrom fledge.services.core.interest_registry.interest_registry import InterestRegistrySingleton\nfrom fledge.services.core.interest_registry.interest_record import InterestRecord\nfrom fledge.services.core.interest_registry import exceptions as interest_registry_exceptions\n\n__author__ = \"Ashwin Gopalakrishnan\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestInterestRegistry:\n    @pytest.fixture()\n    def reset_singleton(self):\n        # executed before each test\n        InterestRegistrySingleton._shared_state = {}\n        yield\n        InterestRegistrySingleton._shared_state = {}\n\n    def test_constructor_no_configuration_manager_defined_no_configuration_manager_passed(\n            self, reset_singleton):\n        # first time initializing InterestRegistry without configuration manager\n        # produces error\n        with pytest.raises(TypeError) as excinfo:\n            InterestRegistry()\n        assert 'Must be a valid ConfigurationManager object' in str(\n            excinfo.value)\n\n    def test_constructor_no_configuration_manager_defined_configuration_manager_passed(\n            self, reset_singleton):\n        # first time initializing InterestRegistry with configuration manager\n        # works\n        configuration_manager_mock = MagicMock(spec=ConfigurationManager)\n        i_reg = InterestRegistry(configuration_manager_mock)\n        assert hasattr(i_reg, '_configuration_manager')\n        assert isinstance(i_reg._configuration_manager, ConfigurationManager)\n        assert hasattr(i_reg, '_registered_interests')\n\n    def 
test_constructor_configuration_manager_defined_configuration_manager_passed(\n            self, reset_singleton):\n        configuration_manager_mock = MagicMock(spec=ConfigurationManager)\n        # second time initializing InterestRegistry with new configuration manager\n        # works\n        configuration_manager_mock2 = MagicMock(spec=ConfigurationManager)\n        i_reg = InterestRegistry(configuration_manager_mock)\n        i_reg2 = InterestRegistry(configuration_manager_mock2)\n        assert hasattr(i_reg2, '_configuration_manager')\n        # ignore new configuration manager\n        assert isinstance(i_reg2._configuration_manager, ConfigurationManager)\n        assert hasattr(i_reg2, '_registered_interests')\n\n    def test_constructor_configuration_manager_defined_no_configuration_manager_passed(\n            self, reset_singleton):\n        configuration_manager_mock = MagicMock(spec=ConfigurationManager)\n        i_reg = InterestRegistry(configuration_manager_mock)\n        # second time initializing InterestRegistry without configuration manager\n        i_reg2 = InterestRegistry()\n        assert hasattr(i_reg2, '_configuration_manager')\n        assert isinstance(i_reg2._configuration_manager, ConfigurationManager)\n        assert hasattr(i_reg2, '_registered_interests')\n        assert len(i_reg._registered_interests) == 0\n\n    def test_register(self, reset_singleton):\n        configuration_manager_mock = MagicMock(spec=ConfigurationManager)\n        i_reg = InterestRegistry(configuration_manager_mock)\n        # register the first interest\n        microservice_uuid = 'muuid'\n        category_name = 'catname'\n        ret_val = i_reg.register(microservice_uuid, category_name)\n        assert ret_val is not None\n        assert len(i_reg._registered_interests) == 1\n        assert isinstance(i_reg._registered_interests[0], InterestRecord)\n        assert i_reg._registered_interests[0]._registration_id == ret_val\n        assert 
i_reg._registered_interests[0]._microservice_uuid == microservice_uuid\n        assert i_reg._registered_interests[0]._category_name == category_name\n        str_val = 'interest registration id={}: <microservice uuid={}, category_name={}>'.format(\n            ret_val, microservice_uuid, category_name)\n        assert str(i_reg._registered_interests[0]) == str_val\n\n        # register an existing interest\n        with pytest.raises(interest_registry_exceptions.ErrorInterestRegistrationAlreadyExists) as excinfo:\n            ret_val = i_reg.register(microservice_uuid, category_name)\n        assert ret_val is not None\n        assert len(i_reg._registered_interests) == 1\n        assert isinstance(i_reg._registered_interests[0], InterestRecord)\n        assert i_reg._registered_interests[0]._registration_id == ret_val\n        assert i_reg._registered_interests[0]._microservice_uuid == microservice_uuid\n        assert i_reg._registered_interests[0]._category_name == category_name\n        str_val = 'interest registration id={}: <microservice uuid={}, category_name={}>'.format(\n            ret_val, microservice_uuid, category_name)\n        assert str(i_reg._registered_interests[0]) == str_val\n\n        # register a second interest\n        category_name2 = 'catname2'\n        ret_val = i_reg.register(microservice_uuid, category_name2)\n        assert ret_val is not None\n        assert len(i_reg._registered_interests) == 2\n        assert isinstance(i_reg._registered_interests[1], InterestRecord)\n        assert i_reg._registered_interests[1]._registration_id == ret_val\n        assert i_reg._registered_interests[1]._microservice_uuid == microservice_uuid\n        assert i_reg._registered_interests[1]._category_name == category_name2\n        str_val = 'interest registration id={}: <microservice uuid={}, category_name={}>'.format(\n            ret_val, microservice_uuid, category_name2)\n        assert str(i_reg._registered_interests[1]) == str_val\n\n    def 
test_unregister(self, reset_singleton):\n        configuration_manager_mock = MagicMock(spec=ConfigurationManager)\n        i_reg = InterestRegistry(configuration_manager_mock)\n        # unregister when no items exist\n        fake_uuid = 'bla'\n        with pytest.raises(interest_registry_exceptions.DoesNotExist) as excinfo:\n            ret_val = i_reg.unregister(fake_uuid)\n\n        # register 2 interests, then unregister 1\n        id_1_1 = i_reg.register('muuid1', 'catname1')\n        id_1_2 = i_reg.register('muuid1', 'catname2')\n        ret_val = i_reg.unregister(id_1_1)\n        assert ret_val == id_1_1\n        assert len(i_reg._registered_interests) == 1\n        assert isinstance(i_reg._registered_interests[0], InterestRecord)\n        assert i_reg._registered_interests[0]._registration_id == id_1_2\n        assert i_reg._registered_interests[0]._microservice_uuid == 'muuid1'\n        assert i_reg._registered_interests[0]._category_name == 'catname2'\n\n        # unregister the second one\n        ret_val = i_reg.unregister(id_1_2)\n        assert ret_val == id_1_2\n        assert len(i_reg._registered_interests) == 0\n\n    def test_get(self, reset_singleton):\n        configuration_manager_mock = MagicMock(spec=ConfigurationManager)\n        i_reg = InterestRegistry(configuration_manager_mock)\n\n        # get when empty\n        microservice_uuid = 'muuid'\n        category_name = 'catname'\n        with pytest.raises(interest_registry_exceptions.DoesNotExist) as excinfo:\n            i_reg.get(microservice_uuid=microservice_uuid,\n                      category_name=category_name)\n\n        # get when there is a result (patch 'and_filter')\n        with patch.object(InterestRegistry, 'and_filter', return_value=[1]):\n            ret_val = i_reg.get(\n                microservice_uuid=microservice_uuid, category_name=category_name)\n        assert ret_val is not None\n        assert ret_val == [1]\n\n    def test_get_with_and_filter(self, 
reset_singleton):\n        configuration_manager_mock = MagicMock(spec=ConfigurationManager)\n        i_reg = InterestRegistry(configuration_manager_mock)\n        # register some interests\n        id_1_1 = i_reg.register('muuid1', 'catname1')\n        id_1_2 = i_reg.register('muuid1', 'catname2')\n        id_2_1 = i_reg.register('muuid2', 'catname1')\n        id_2_2 = i_reg.register('muuid2', 'catname2')\n        id_3_3 = i_reg.register('muuid3', 'catname3')\n\n        ret_val = i_reg.get(microservice_uuid='muuid1')\n        assert len(ret_val) == 2\n        for i in ret_val:\n            assert isinstance(i, InterestRecord)\n        assert ret_val[0]._registration_id == id_1_1\n        assert ret_val[0]._microservice_uuid == 'muuid1'\n        assert ret_val[0]._category_name == 'catname1'\n        assert ret_val[1]._registration_id == id_1_2\n        assert ret_val[1]._microservice_uuid == 'muuid1'\n        assert ret_val[1]._category_name == 'catname2'\n\n        ret_val = i_reg.get(category_name='catname2')\n        assert len(ret_val) == 2\n        for i in ret_val:\n            assert isinstance(i, InterestRecord)\n        assert ret_val[0]._registration_id == id_1_2\n        assert ret_val[0]._microservice_uuid == 'muuid1'\n        assert ret_val[0]._category_name == 'catname2'\n        assert ret_val[1]._registration_id == id_2_2\n        assert ret_val[1]._microservice_uuid == 'muuid2'\n        assert ret_val[1]._category_name == 'catname2'\n\n        ret_val = i_reg.get(category_name='catname2',\n                            microservice_uuid='muuid2')\n        assert len(ret_val) == 1\n        for i in ret_val:\n            assert isinstance(i, InterestRecord)\n        assert ret_val[0]._registration_id == id_2_2\n        assert ret_val[0]._microservice_uuid == 'muuid2'\n        assert ret_val[0]._category_name == 'catname2'\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/scheduler/test_scheduler.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport asyncio\nimport datetime\nimport logging\nimport uuid\nimport time\nimport json\nfrom unittest.mock import MagicMock, call\nimport copy\nimport pytest\nfrom fledge.services.core.scheduler.scheduler import Scheduler, AuditLogger, ConfigurationManager\nfrom fledge.services.core.scheduler.entities import *\nfrom fledge.services.core.scheduler.exceptions import *\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\n\n__author__ = \"Amarendra K Sinha, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nasync def mock():\n    return \"\"\n\n\nasync def mock_process():\n    m = MagicMock()\n    m.pid = 9999\n    m.terminate = lambda: True\n    return m\n\n\nclass TestScheduler:\n\n    async def scheduler_fixture(self, mocker):\n        _rv = await mock_process()\n        scheduler = Scheduler()\n        scheduler._logger.level = logging.INFO\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        mocker.patch.object(scheduler, '_schedule_first_task')\n        mocker.patch.object(scheduler, '_ready', True)\n        mocker.patch.object(scheduler, '_paused', False)\n        mocker.patch.object(scheduler, '_process_scripts', return_value=\"North Readings to PI\")\n        mocker.patch.object(scheduler, '_wait_for_task_completion', return_value=asyncio.ensure_future(mock()))\n        mocker.patch.object(scheduler, '_terminate_child_processes')\n        mocker.patch.object(asyncio, 'create_subprocess_exec', return_value=_rv)\n\n        await scheduler._get_schedules()\n\n        schedule = scheduler._ScheduleRow(\n            id=uuid.UUID(\"2b614d26-760f-11e7-b5a5-be2e44b06b34\"),\n            
process_name=\"North Readings to PI\",\n            name=\"OMF to PI north\",\n            type=Schedule.Type.INTERVAL,\n            repeat=datetime.timedelta(seconds=30),\n            repeat_seconds=30,\n            time=None,\n            day=None,\n            exclusive=True,\n            enabled=True)\n\n        log_exception = mocker.patch.object(scheduler._logger, \"exception\")\n        log_error = mocker.patch.object(scheduler._logger, \"error\")\n        log_debug = mocker.patch.object(scheduler._logger, \"debug\")\n        log_info = mocker.patch.object(scheduler._logger, \"info\")\n\n        return scheduler, schedule, log_info, log_exception, log_error, log_debug\n\n    @pytest.mark.asyncio\n    async def test__resume_check_schedules(self, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n\n        # WHEN\n        # Check IF part\n        mocker.patch.object(scheduler, '_scheduler_loop_sleep_task', asyncio.Task(asyncio.sleep(5)))\n        scheduler._resume_check_schedules()\n\n        # THEN\n        assert scheduler._check_processes_pending is False\n\n        # WHEN\n        # Check ELSE part\n        mocker.patch.object(scheduler, '_scheduler_loop_sleep_task', None)\n        scheduler._resume_check_schedules()\n\n        # THEN\n        assert scheduler._check_processes_pending is True\n\n    @pytest.mark.asyncio\n    async def test__wait_for_task_completion(self, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        log_info = mocker.patch.object(scheduler._logger, \"info\")\n\n        mock_schedules = dict()\n        
mock_schedule = scheduler._ScheduleRow(\n            id=uuid.UUID(\"2b614d26-760f-11e7-b5a5-be2e44b06b34\"),\n            process_name=\"North Readings to PI\",\n            name=\"OMF to PI north\",\n            type=Schedule.Type.INTERVAL,\n            repeat=datetime.timedelta(seconds=30),\n            repeat_seconds=30,\n            time=None,\n            day=None,\n            exclusive=True,\n            enabled=True)\n        mock_schedules[mock_schedule.id] = mock_schedule\n\n        mock_task_process = scheduler._TaskProcess()\n        mock_task_processes = dict()\n        mock_task_process.process = await asyncio.create_subprocess_exec(\"sleep\", \".1\")\n        mock_task_process.schedule = mock_schedule\n        mock_task_id = uuid.uuid4()\n        mock_task_process.task_id = mock_task_id\n        mock_task_processes[mock_task_process.task_id] = mock_task_process\n\n        mock_schedule_executions = dict()\n        mock_schedule_execution = scheduler._ScheduleExecution()\n        mock_schedule_executions[mock_schedule.id] = mock_schedule_execution\n        mock_schedule_executions[mock_schedule.id].task_processes[mock_task_id] = mock_task_process\n\n        mocker.patch.object(scheduler, '_resume_check_schedules')\n        mocker.patch.object(scheduler, '_schedule_next_task')\n        mocker.patch.multiple(scheduler, _schedules=mock_schedules,\n                              _task_processes=mock_task_processes,\n                              _schedule_executions=mock_schedule_executions)\n        mocker.patch.object(scheduler, '_process_scripts', return_value=\"North Readings to PI\")\n\n        # WHEN\n        await scheduler._wait_for_task_completion(mock_task_process)\n\n        # THEN\n        # After task completion, sleep above, no task processes should be left pending\n        assert 0 == len(scheduler._task_processes)\n        assert 0 == len(scheduler._schedule_executions[mock_schedule.id].task_processes)\n        args, kwargs = 
log_info.call_args_list[0]\n        assert 'OMF to PI north' in args\n        assert 'North Readings to PI' in args\n\n    @pytest.mark.asyncio\n    async def test__start_task(self, mocker):\n        # TODO: Mandatory - Add negative tests for full code coverage\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        log_info = mocker.patch.object(scheduler._logger, \"info\")\n        mocker.patch.object(scheduler, '_schedule_first_task')\n        await scheduler._get_schedules()\n\n        schedule = scheduler._ScheduleRow(\n            id=uuid.UUID(\"2b614d26-760f-11e7-b5a5-be2e44b06b34\"),\n            process_name=\"North Readings to PI\",\n            name=\"OMF to PI north\",\n            type=Schedule.Type.INTERVAL,\n            repeat=datetime.timedelta(seconds=30),\n            repeat_seconds=30,\n            time=None,\n            day=None,\n            exclusive=True,\n            enabled=True)\n\n        mocker.patch.object(scheduler, '_ready', True)\n        mocker.patch.object(scheduler, '_resume_check_schedules')\n\n        # Assert that there is no task queued for mock_schedule\n        with pytest.raises(KeyError) as excinfo:\n            assert scheduler._schedule_executions[schedule.id] is True\n        # Now queue task and assert that the task has been queued\n        await scheduler.queue_task(schedule.id)\n        assert isinstance(scheduler._schedule_executions[schedule.id], scheduler._ScheduleExecution)\n        _rv = await mock_process()\n        mocker.patch.object(asyncio, 'create_subprocess_exec', return_value=_rv)\n        mocker.patch.object(asyncio, 'ensure_future', return_value=asyncio.ensure_future(mock()))\n        mocker.patch.object(scheduler, '_resume_check_schedules')\n        mocker.patch.object(scheduler, 
'_process_scripts', return_value=\"North Readings to PI\")\n        mocker.patch.object(scheduler, '_wait_for_task_completion')\n\n        # Confirm that task has not started yet\n        assert 0 == len(scheduler._schedule_executions[schedule.id].task_processes)\n\n        # WHEN\n        await scheduler._start_task(schedule)\n\n        # THEN\n        # Confirm that task has started\n        assert 1 == len(scheduler._schedule_executions[schedule.id].task_processes)\n        assert 1 == log_info.call_count\n        # assert call(\"Queued schedule '%s' for execution\", 'OMF to PI north') == log_info.call_args_list[0]\n        args, kwargs = log_info.call_args_list[0]\n        assert \"Process started: Schedule '%s' process '%s' task %s pid %s, %s running tasks\\n%s\" in args\n        assert 'OMF to PI north' in args\n        assert 'North Readings to PI' in args\n\n    @pytest.mark.asyncio\n    async def test_purge_tasks(self, mocker):\n        # TODO: Mandatory - Add negative tests for full code coverage\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        mocker.patch.multiple(scheduler, _ready=True, _paused=False)\n        mocker.patch.object(scheduler, '_max_completed_task_age', datetime.datetime.now())\n\n        # WHEN\n        await scheduler.purge_tasks()\n\n        # THEN\n        assert scheduler._purge_tasks_task is None\n        assert scheduler._last_task_purge_time is not None\n\n    @pytest.mark.asyncio\n    async def test__check_purge_tasks(self, mocker):\n        # TODO: Mandatory - Add negative tests for full code coverage\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = 
MockStorageAsync(core_management_host=None, core_management_port=None)\n        mocker.patch.multiple(scheduler, _purge_tasks_task=None,\n                              _last_task_purge_time=None)\n        mocker.patch.object(scheduler, 'purge_tasks', return_value=asyncio.ensure_future(mock()))\n\n        # WHEN\n        scheduler._check_purge_tasks()\n\n        # THEN\n        assert scheduler._purge_tasks_task is not None\n\n    @pytest.mark.asyncio\n    async def test__check_schedules(self, mocker):\n        # TODO: Mandatory - Add negative tests for full code coverage\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._logger.level = logging.INFO\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        log_info = mocker.patch.object(scheduler._logger, \"info\")\n\n        current_time = time.time()\n        mocker.patch.multiple(scheduler, _max_running_tasks=10,\n                              _start_time=current_time)\n        await scheduler._get_schedules()\n        mocker.patch.object(scheduler, '_start_task', return_value=asyncio.ensure_future(mock()))\n\n        # WHEN\n        earliest_start_time = await scheduler._check_schedules()\n\n        # THEN\n        assert earliest_start_time is not None\n        assert 3 == log_info.call_count\n        args0, kwargs0 = log_info.call_args_list[0]\n        args1, kwargs1 = log_info.call_args_list[1]\n        args2, kwargs2 = log_info.call_args_list[2]\n        assert 'stats collection' in args0\n        assert 'COAP listener south' in args1\n        assert 'OMF to PI north' in args2\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(\"_scheduler_loop() not suitable for unit testing. 
Will be tested during System tests.\")\n    async def test__scheduler_loop(self, mocker):\n        pass\n\n    @pytest.mark.asyncio\n    async def test__schedule_next_timed_task(self, mocker):\n        # TODO: Mandatory - Add negative tests for full code coverage\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        log_info = mocker.patch.object(scheduler._logger, \"info\")\n\n        current_time = time.time()\n        mocker.patch.multiple(scheduler, _max_running_tasks=10,\n                              _start_time=current_time)\n        await scheduler._get_schedules()\n\n        sch_id = uuid.UUID(\"2176eb68-7303-11e7-8cf7-a6006ad3dba0\")  # stat collector\n        sch = scheduler._schedules[sch_id]\n        sch_execution = scheduler._schedule_executions[sch_id]\n        time_before_call = sch_execution.next_start_time\n\n        # WHEN\n        next_dt = datetime.datetime.fromtimestamp(sch_execution.next_start_time)\n        next_dt += datetime.timedelta(seconds=sch.repeat_seconds)\n        scheduler._schedule_next_timed_task(sch, sch_execution, next_dt)\n        time_after_call = sch_execution.next_start_time\n\n        # THEN\n        assert time_after_call > time_before_call\n        assert 3 == log_info.call_count\n        args0, kwargs0 = log_info.call_args_list[0]\n        args1, kwargs1 = log_info.call_args_list[1]\n        args2, kwargs2 = log_info.call_args_list[2]\n        assert 'stats collection' in args0\n        assert 'COAP listener south' in args1\n        assert 'OMF to PI north' in args2\n\n    @pytest.mark.asyncio\n    async def test__schedule_next_task(self, mocker):\n        # TODO: Mandatory - Add negative tests for full code coverage\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = 
MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        log_info = mocker.patch.object(scheduler._logger, \"info\")\n\n        current_time = time.time()\n        mocker.patch.multiple(scheduler, _max_running_tasks=10,\n                              _start_time=current_time-3600)\n        await scheduler._get_schedules()\n\n        sch_id = uuid.UUID(\"2176eb68-7303-11e7-8cf7-a6006ad3dba0\")  # stat collector\n        sch = scheduler._schedules[sch_id]\n        sch_execution = scheduler._schedule_executions[sch_id]\n        time_before_call = sch_execution.next_start_time\n\n        # WHEN\n        scheduler._schedule_next_task(sch)\n        time_after_call = sch_execution.next_start_time\n\n        # THEN\n        assert time_after_call > time_before_call\n        assert 4 == log_info.call_count\n        args0, kwargs0 = log_info.call_args_list[0]\n        args1, kwargs1 = log_info.call_args_list[1]\n        args2, kwargs2 = log_info.call_args_list[2]\n        args3, kwargs3 = log_info.call_args_list[3]\n        assert 'stats collection' in args0\n        assert 'COAP listener south' in args1\n        assert 'OMF to PI north' in args2\n        # As part of scheduler._get_schedules(), scheduler._schedule_first_task() also gets executed, hence\n        # \"stat collector\" appears twice in this list.\n        assert 'stats collection' in args3\n\n    @pytest.mark.asyncio\n    async def test__schedule_first_task(self, mocker):\n        # TODO: Mandatory - Add negative tests for full code coverage\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        log_info = mocker.patch.object(scheduler._logger, \"info\")\n\n        current_time = 
time.time()\n        curr_time = datetime.datetime.fromtimestamp(current_time)\n        mocker.patch.multiple(scheduler, _max_running_tasks=10,\n                              _start_time=current_time)\n        await scheduler._get_schedules()\n\n        sch_id = uuid.UUID(\"2176eb68-7303-11e7-8cf7-a6006ad3dba0\")  # stat collector\n        sch = scheduler._schedules[sch_id]\n        sch_execution = scheduler._schedule_executions[sch_id]\n\n        # WHEN\n        scheduler._schedule_first_task(sch, current_time)\n        time_after_call = sch_execution.next_start_time\n\n        # THEN\n        assert time_after_call > time.mktime(curr_time.timetuple())\n        assert 4 == log_info.call_count\n        args0, kwargs0 = log_info.call_args_list[0]\n        args1, kwargs1 = log_info.call_args_list[1]\n        args2, kwargs2 = log_info.call_args_list[2]\n        args3, kwargs3 = log_info.call_args_list[3]\n        assert 'stats collection' in args0\n        assert 'COAP listener south' in args1\n        assert 'OMF to PI north' in args2\n        # As part of scheduler._get_schedules(), scheduler._schedule_first_task() also gets executed, hence\n        # \"stat collector\" appears twice in this list.\n        assert 'stats collection' in args3\n\n    @pytest.mark.asyncio\n    async def test__get_process_scripts(self, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n\n        # WHEN\n        await scheduler._get_process_scripts()\n\n        # THEN\n        assert len(scheduler._storage_async.scheduled_processes) == len(scheduler._process_scripts)\n\n    @pytest.mark.asyncio\n    async def test__get_process_scripts_exception(self, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, 
core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        log_debug = mocker.patch.object(scheduler._logger, \"debug\", side_effect=Exception())\n        log_exception = mocker.patch.object(scheduler._logger, \"exception\")\n\n        # WHEN\n        # THEN\n        with pytest.raises(Exception):\n            await scheduler._get_process_scripts()\n\n        log_args = 'Query failed: %s', 'scheduled_processes'\n        log_exception.assert_called_once_with(*log_args)\n\n    @pytest.mark.asyncio\n    @pytest.mark.parametrize(\"test_interval, is_exception\", [\n        ('\"Blah\" 0 days', True),\n        ('12:30:11', False),\n        ('0 day 12:30:11', False),\n        ('1 day 12:40:11', False),\n        ('2 days', True),\n        ('2 days 00:00:59', False),\n        ('00:25:61', True)\n    ])\n    async def test__get_schedules(self, test_interval, is_exception, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        mocker.patch.object(scheduler, '_schedule_first_task')\n        log_exception = mocker.patch.object(scheduler._logger, \"exception\")\n\n        new_schedules = copy.deepcopy(MockStorageAsync.schedules)\n        new_schedules[5]['schedule_interval'] = test_interval\n        mocker.patch.object(MockStorageAsync, 'schedules', new_schedules)\n\n        # WHEN\n        # THEN\n        if is_exception is True:\n            with pytest.raises(Exception):\n                await scheduler._get_schedules()\n                assert 1 == log_exception.call_count\n        else:\n            await scheduler._get_schedules()\n            assert len(scheduler._storage_async.schedules) == len(scheduler._schedules)\n\n    @pytest.mark.asyncio\n    async def 
test__get_schedules_exception(self, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        log_debug = mocker.patch.object(scheduler._logger, \"debug\", side_effect=Exception())\n        log_exception = mocker.patch.object(scheduler._logger, \"exception\")\n        mocker.patch.object(scheduler, '_schedule_first_task', side_effect=Exception())\n\n        # WHEN\n        # THEN\n        with pytest.raises(Exception):\n            await scheduler._get_schedules()\n\n        log_args = 'Query failed: %s', 'schedules'\n        log_exception.assert_called_once_with(*log_args)\n\n    @pytest.mark.asyncio\n    async def test__read_storage(self, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        mocker.patch.object(scheduler, '_schedule_first_task')\n\n        # WHEN\n        await scheduler._read_storage()\n\n        # THEN\n        assert len(scheduler._storage_async.scheduled_processes) == len(scheduler._process_scripts)\n        assert len(scheduler._storage_async.schedules) == len(scheduler._schedules)\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(\"_mark_tasks_interrupted() not implemented in main Scheduler class.\")\n    async def test__mark_tasks_interrupted(self, mocker):\n        pass\n\n    @pytest.mark.asyncio\n    async def test__read_config(self, mocker):\n        async def get_cat():\n            return {\n                    \"max_running_tasks\": {\n                        \"description\": \"The maximum number of tasks that can be running at any given time\",\n                        \"type\": \"integer\",\n                 
       \"default\": str(Scheduler._DEFAULT_MAX_RUNNING_TASKS),\n                        \"value\": str(Scheduler._DEFAULT_MAX_RUNNING_TASKS)\n                    },\n                    \"max_completed_task_age_days\": {\n                        \"description\": \"The maximum age, in days (based on the start time), for rows \"\n                                       \"in the tasks table that do not have a status of running\",\n                        \"type\": \"integer\",\n                        \"default\": str(Scheduler._DEFAULT_MAX_COMPLETED_TASK_AGE_DAYS),\n                        \"value\": str(Scheduler._DEFAULT_MAX_COMPLETED_TASK_AGE_DAYS)\n                    },\n            }\n\n        _rv = await get_cat()\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        cr_cat = mocker.patch.object(ConfigurationManager, \"create_category\", return_value=asyncio.ensure_future(mock()))\n        get_all_items = mocker.patch.object(ConfigurationManager, \"get_category_all_items\", return_value=_rv)\n\n        # WHEN\n        assert scheduler._max_running_tasks is None\n        assert scheduler._max_completed_task_age is None\n        await scheduler._read_config()\n\n        # THEN\n        assert 1 == cr_cat.call_count\n        assert 1 == get_all_items.call_count\n        assert scheduler._max_running_tasks is not None\n        assert scheduler._max_completed_task_age is not None\n\n    @pytest.mark.asyncio\n    async def test_start(self, mocker):\n        # TODO: Mandatory - Add negative tests for full code coverage\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, 
core_management_port=None)\n        log_debug = mocker.patch.object(scheduler._logger, \"debug\")\n        log_info = mocker.patch.object(scheduler._logger, \"info\")\n\n        current_time = time.time()\n        mocker.patch.object(scheduler, '_schedule_first_task')\n        mocker.patch.object(scheduler, '_scheduler_loop', return_value=asyncio.ensure_future(mock()))\n        mocker.patch.multiple(scheduler, _core_management_port=9999,\n                              _core_management_host=\"0.0.0.0\",\n                              current_time=current_time - 3600)\n\n        # TODO: Remove after implementation of above test test__read_config()\n        mocker.patch.object(scheduler, '_read_config', return_value=asyncio.ensure_future(mock()))\n\n        assert scheduler._ready is False\n\n        # WHEN\n        await scheduler.start()\n\n        # THEN\n        assert scheduler._ready is True\n        assert len(scheduler._storage_async.scheduled_processes) == len(scheduler._process_scripts)\n        assert len(scheduler._storage_async.schedules) == len(scheduler._schedules)\n        calls = [call('Starting'),\n                 call('Starting Scheduler: Management port received is %d', 9999)]\n        log_info.assert_has_calls(calls, any_order=True)\n        calls = [call('Database command: %s', 'scheduled_processes'),\n                 call('Database command: %s', 'schedules')]\n        log_debug.assert_has_calls(calls, any_order=True)\n\n    @pytest.mark.asyncio\n    async def test_stop(self, mocker):\n        # TODO: Mandatory - Add negative tests for full code coverage\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        log_info = mocker.patch.object(scheduler._logger, \"info\")\n        log_exception = mocker.patch.object(scheduler._logger, 
\"exception\")\n\n        mocker.patch.object(scheduler, '_scheduler_loop', return_value=asyncio.ensure_future(mock()))\n        mocker.patch.object(scheduler, '_resume_check_schedules', return_value=asyncio.ensure_future(mock()))\n        mocker.patch.object(scheduler, '_purge_tasks_task', return_value=asyncio.ensure_future(asyncio.sleep(.1)))\n        mocker.patch.object(scheduler, '_scheduler_loop_task', return_value=asyncio.ensure_future(asyncio.sleep(.1)))\n        current_time = time.time()\n        mocker.patch.multiple(scheduler, _core_management_port=9999,\n                              _core_management_host=\"0.0.0.0\",\n                              _start_time=current_time - 3600,\n                              _paused=False,\n                              _task_processes={})\n\n        # WHEN\n        retval = await scheduler.stop()\n\n        # THEN\n        assert retval is True\n        assert scheduler._schedule_executions is None\n        assert scheduler._task_processes is None\n        assert scheduler._schedules is None\n        assert scheduler._process_scripts is None\n        assert scheduler._ready is False\n        assert scheduler._paused is False\n        assert scheduler._start_time is None\n        calls = [call('Processing stop request'), call('Stopped')]\n        log_info.assert_has_calls(calls, any_order=True)\n\n        # FIXME: Find why exception is being raised despite mocking _scheduler_loop_task\n        args = log_exception.call_args\n        assert 'An exception was raised by Scheduler._scheduler_loop' == args[0][1]\n\n    @pytest.mark.asyncio\n    async def test_get_scheduled_processes(self, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        await scheduler._get_process_scripts()\n        
mocker.patch.object(scheduler, '_ready', True)\n\n        # WHEN\n        processes = await scheduler.get_scheduled_processes()\n\n        # THEN\n        assert len(scheduler._storage_async.scheduled_processes) == len(processes)\n\n    @pytest.mark.asyncio\n    async def test_schedule_row_to_schedule(self, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        schedule_id = uuid.uuid4()\n        schedule_row = scheduler._ScheduleRow(\n            id=schedule_id,\n            name='Test Schedule',\n            type=Schedule.Type.INTERVAL,\n            day=0,\n            time=0,\n            repeat=10,\n            repeat_seconds=10,\n            exclusive=False,\n            enabled=True,\n            process_name='TestProcess')\n\n        # WHEN\n        schedule = scheduler._schedule_row_to_schedule(schedule_id, schedule_row)\n\n        # THEN\n        assert isinstance(schedule, Schedule)\n        assert schedule.schedule_id == schedule_row[0]\n        assert schedule.name == schedule_row[1]\n        assert schedule.schedule_type == schedule_row[2]\n        assert schedule_row[3] == 0  # 0 for Interval Schedule\n        assert schedule_row[4] == 0  # 0 for Interval Schedule\n        assert schedule.repeat == schedule_row[5]\n        assert schedule.exclusive == schedule_row[7]\n        assert schedule.enabled == schedule_row[8]\n        assert schedule.process_name == schedule_row[9]\n\n    @pytest.mark.asyncio\n    async def test_get_schedules(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n\n        # WHEN\n        schedules = await scheduler.get_schedules()\n\n        # THEN\n        assert len(scheduler._storage_async.schedules) == len(schedules)\n\n    @pytest.mark.asyncio\n    async def test_get_schedule(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await 
self.scheduler_fixture(mocker)\n        schedule_id = uuid.UUID(\"cea17db8-6ccc-11e7-907b-a6006ad3dba0\")  # purge schedule\n\n        # WHEN\n        schedule = await scheduler.get_schedule(schedule_id)\n\n        # THEN\n        assert isinstance(schedule, Schedule)\n        assert schedule.schedule_id == schedule_id\n        assert schedule.name == \"purge\"\n        assert schedule.schedule_type == Schedule.Type.MANUAL\n        assert schedule.repeat == datetime.timedelta(0, 3600)\n        assert schedule.exclusive is True\n        assert schedule.enabled is True\n        assert schedule.process_name == \"purge\"\n\n    @pytest.mark.asyncio\n    async def test_get_schedule_exception(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        schedule_id = uuid.uuid4()\n\n        # WHEN\n        # THEN\n        with pytest.raises(ScheduleNotFoundError):\n            schedule = await scheduler.get_schedule(schedule_id)\n\n    @pytest.mark.asyncio\n    async def test_save_schedule_new(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        audit_logger = mocker.patch.object(AuditLogger, 'information', return_value=asyncio.ensure_future(mock()))\n        first_task = mocker.patch.object(scheduler, '_schedule_first_task')\n        resume_sch = mocker.patch.object(scheduler, '_resume_check_schedules')\n        log_info = mocker.patch.object(scheduler._logger, \"info\")\n\n        enable_schedule = mocker.patch.object(scheduler, \"enable_schedule\",\n                                              return_value=asyncio.ensure_future(mock()))\n        disable_schedule = mocker.patch.object(scheduler, \"disable_schedule\",\n                                               return_value=asyncio.ensure_future(mock()))\n\n        schedule_id = uuid.uuid4()\n        schedule_row = 
scheduler._ScheduleRow(\n            id=schedule_id,\n            name='Test Schedule',\n            type=Schedule.Type.INTERVAL,\n            day=0,\n            time=0,\n            repeat=datetime.timedelta(seconds=30),\n            repeat_seconds=30,\n            exclusive=False,\n            enabled=True,\n            process_name='TestProcess')\n        schedule = scheduler._schedule_row_to_schedule(schedule_id, schedule_row)\n\n        # WHEN\n        await scheduler.save_schedule(schedule)\n\n        # THEN\n        assert len(scheduler._storage_async.schedules) + 1 == len(scheduler._schedules)\n        assert 1 == audit_logger.call_count\n        calls = [call('SCHAD', {'schedule': {'name': 'Test Schedule', 'processName': 'TestProcess',\n                                            'type': Schedule.Type.INTERVAL, 'repeat': 30.0, 'enabled': True,\n                                            'exclusive': False}})]\n        audit_logger.assert_has_calls(calls, any_order=True)\n        assert 1 == first_task.call_count\n        assert 1 == resume_sch.call_count\n        assert 0 == enable_schedule.call_count\n        assert 0 == disable_schedule.call_count\n\n    @pytest.mark.asyncio\n    async def test_save_schedule_new_with_enable_modified(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        audit_logger = mocker.patch.object(AuditLogger, 'information', return_value=asyncio.ensure_future(mock()))\n        first_task = mocker.patch.object(scheduler, '_schedule_first_task')\n        resume_sch = mocker.patch.object(scheduler, '_resume_check_schedules')\n\n        enable_schedule = mocker.patch.object(scheduler, \"enable_schedule\",\n                                              return_value=asyncio.ensure_future(mock()))\n        disable_schedule = mocker.patch.object(scheduler, \"disable_schedule\",\n                                               
return_value=asyncio.ensure_future(mock()))\n\n        schedule_id = uuid.uuid4()\n        schedule_row = scheduler._ScheduleRow(\n            id=schedule_id,\n            name='Test Schedule',\n            type=Schedule.Type.INTERVAL,\n            day=0,\n            time=0,\n            repeat=datetime.timedelta(seconds=30),\n            repeat_seconds=30,\n            exclusive=False,\n            enabled=True,\n            process_name='TestProcess')\n        schedule = scheduler._schedule_row_to_schedule(schedule_id, schedule_row)\n\n        # WHEN\n        await scheduler.save_schedule(schedule, is_enabled_modified=True)\n\n        # THEN\n        assert len(scheduler._storage_async.schedules) + 1 == len(scheduler._schedules)\n        assert 1 == audit_logger.call_count\n        calls = [call('SCHAD', {'schedule': {'name': 'Test Schedule', 'processName': 'TestProcess',\n                                            'type': Schedule.Type.INTERVAL, 'repeat': 30.0, 'enabled': True,\n                                            'exclusive': False}})]\n        audit_logger.assert_has_calls(calls, any_order=True)\n        assert 1 == first_task.call_count\n        assert 1 == resume_sch.call_count\n        assert 1 == enable_schedule.call_count\n        assert 0 == disable_schedule.call_count\n\n        # WHEN\n        await scheduler.save_schedule(schedule, is_enabled_modified=False)\n        # THEN\n        assert 1 == disable_schedule.call_count\n\n    @pytest.mark.asyncio\n    async def test_save_schedule_update(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        audit_logger = mocker.patch.object(AuditLogger, 'information', return_value=asyncio.ensure_future(mock()))\n        first_task = mocker.patch.object(scheduler, '_schedule_first_task')\n        resume_sch = mocker.patch.object(scheduler, '_resume_check_schedules')\n        schedule_id = 
uuid.UUID(\"2b614d26-760f-11e7-b5a5-be2e44b06b34\")  # OMF to PI North\n        schedule_row = scheduler._ScheduleRow(\n            id=schedule_id,\n            name='Test Schedule',\n            type=Schedule.Type.TIMED,\n            day=1,\n            time=datetime.time(),\n            repeat=datetime.timedelta(seconds=30),\n            repeat_seconds=30,\n            exclusive=False,\n            enabled=True,\n            process_name='TestProcess')\n        schedule = scheduler._schedule_row_to_schedule(schedule_id, schedule_row)\n\n        enable_schedule = mocker.patch.object(scheduler, \"enable_schedule\",\n                                              return_value=asyncio.ensure_future(mock()))\n        disable_schedule = mocker.patch.object(scheduler, \"disable_schedule\",\n                                               return_value=asyncio.ensure_future(mock()))\n\n        # WHEN\n        await scheduler.save_schedule(schedule)\n\n        # THEN\n        assert len(scheduler._storage_async.schedules) == len(scheduler._schedules)\n        assert 1 == audit_logger.call_count\n\n        new = {'schedule': {'name': 'Test Schedule', 'enabled': True, 'repeat': 30.0, 'exclusive': False, 'day': 1,\n                            'time': '0:0:0', 'processName': 'TestProcess', 'type': Schedule.Type.TIMED}}\n        old = {'old_schedule': {'enabled': True, 'exclusive': True, 'name': 'OMF to PI north',\n                                'processName': 'North Readings to PI', 'repeat': 30.0, 'type': Schedule.Type.INTERVAL}}\n        calls = [call('SCHCH', {**new, **old})]\n        audit_logger.assert_has_calls(calls, any_order=True)\n        assert 1 == first_task.call_count\n        assert 1 == resume_sch.call_count\n        assert 0 == enable_schedule.call_count\n        assert 0 == disable_schedule.call_count\n\n    @pytest.mark.asyncio\n    async def test_save_schedule_update_with_enable_modified(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, 
log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        audit_logger = mocker.patch.object(AuditLogger, 'information', return_value=asyncio.ensure_future(mock()))\n        first_task = mocker.patch.object(scheduler, '_schedule_first_task')\n        resume_sch = mocker.patch.object(scheduler, '_resume_check_schedules')\n        schedule_id = uuid.UUID(\"2b614d26-760f-11e7-b5a5-be2e44b06b34\")  # OMF to PI North\n        schedule_row = scheduler._ScheduleRow(\n            id=schedule_id,\n            name='Test Schedule',\n            type=Schedule.Type.TIMED,\n            day=1,\n            time=datetime.time(),\n            repeat=datetime.timedelta(seconds=30),\n            repeat_seconds=30,\n            exclusive=False,\n            enabled=True,\n            process_name='TestProcess')\n        schedule = scheduler._schedule_row_to_schedule(schedule_id, schedule_row)\n\n        enable_schedule = mocker.patch.object(scheduler, \"enable_schedule\",\n                                              return_value=asyncio.ensure_future(mock()))\n        disable_schedule = mocker.patch.object(scheduler, \"disable_schedule\",\n                                               return_value=asyncio.ensure_future(mock()))\n\n        # WHEN\n        await scheduler.save_schedule(schedule, is_enabled_modified=True)\n\n        # THEN\n        assert len(scheduler._storage_async.schedules) == len(scheduler._schedules)\n        assert 1 == audit_logger.call_count\n        new = {'schedule': {'name': 'Test Schedule', 'enabled': True, 'repeat': 30.0, 'exclusive': False, 'day': 1,\n                            'time': '0:0:0', 'processName': 'TestProcess', 'type': Schedule.Type.TIMED}}\n        old = {'old_schedule': {'enabled': True, 'exclusive': True, 'name': 'OMF to PI north',\n                                'processName': 'North Readings to PI', 'repeat': 30.0, 'type': Schedule.Type.INTERVAL}}\n        calls = [call('SCHCH', {**new, **old})]\n        
audit_logger.assert_has_calls(calls, any_order=True)\n        assert 1 == first_task.call_count\n        assert 1 == resume_sch.call_count\n        assert 1 == enable_schedule.call_count\n        assert 0 == disable_schedule.call_count\n\n        # WHEN\n        await scheduler.save_schedule(schedule, is_enabled_modified=False)\n        # THEN\n        assert 1 == disable_schedule.call_count\n\n    @pytest.mark.asyncio\n    async def test_save_schedule_exception(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        schedule_id = uuid.uuid4()\n        schedule_row = scheduler._ScheduleRow(\n            id=schedule_id,\n            name='Test Schedule',\n            type=Schedule.Type.TIMED,\n            day=0,\n            time=0,\n            repeat=datetime.timedelta(seconds=30),\n            repeat_seconds=30,\n            exclusive=False,\n            enabled=True,\n            process_name='TestProcess')\n\n        # WHEN\n        # THEN\n        with pytest.raises(ValueError) as ex:\n            temp_schedule = scheduler._schedule_row_to_schedule(schedule_id, schedule_row)\n            temp_schedule.name = None\n            await scheduler.save_schedule(temp_schedule)\n            del temp_schedule\n        assert \"name can not be empty\" in str(ex)\n\n        with pytest.raises(ValueError) as ex:\n            temp_schedule = scheduler._schedule_row_to_schedule(schedule_id, schedule_row)\n            temp_schedule.name = \"\"\n            await scheduler.save_schedule(temp_schedule)\n            del temp_schedule\n        assert \"name can not be empty\" in str(ex)\n\n        with pytest.raises(ValueError) as ex:\n            temp_schedule = scheduler._schedule_row_to_schedule(schedule_id, schedule_row)\n            temp_schedule.repeat = 1234\n            await scheduler.save_schedule(temp_schedule)\n            del temp_schedule\n        assert \"repeat 
must be of type datetime.timedelta\" in str(ex)\n\n        with pytest.raises(ValueError) as ex:\n            temp_schedule = scheduler._schedule_row_to_schedule(schedule_id, schedule_row)\n            temp_schedule.exclusive = None\n            await scheduler.save_schedule(temp_schedule)\n            del temp_schedule\n        assert \"exclusive can not be None\" in str(ex)\n\n        with pytest.raises(ValueError) as ex:\n            temp_schedule = scheduler._schedule_row_to_schedule(schedule_id, schedule_row)\n            temp_schedule.time = 1234\n            await scheduler.save_schedule(temp_schedule)\n            del temp_schedule\n        assert \"time must be of type datetime.time\" in str(ex)\n\n        with pytest.raises(ValueError) as ex:\n            temp_schedule = scheduler._schedule_row_to_schedule(schedule_id, schedule_row)\n            temp_schedule.day = 0\n            temp_schedule.time = datetime.time()\n            await scheduler.save_schedule(temp_schedule)\n            del temp_schedule\n        assert \"day must be between 1 and 7\" in str(ex)\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be done\")\n    async def test_remove_service_from_task_processes(self):\n        pass\n\n    @pytest.mark.asyncio\n    async def test_disable_schedule(self, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        mocker.patch.object(scheduler, '_schedule_first_task')\n        await scheduler._get_schedules()\n        mocker.patch.object(scheduler, '_ready', True)\n        mocker.patch.object(scheduler, '_task_processes')\n        audit_logger = mocker.patch.object(AuditLogger, 'information', return_value=asyncio.ensure_future(mock()))\n        log_info = mocker.patch.object(scheduler._logger, \"info\")\n        sch_id = 
uuid.UUID(\"2b614d26-760f-11e7-b5a5-be2e44b06b34\")  # OMF to PI North\n\n        # WHEN\n        status, message = await scheduler.disable_schedule(sch_id)\n\n        # THEN\n        assert status is True\n        assert message == \"Schedule successfully disabled\"\n        assert (scheduler._schedules[sch_id]).id == sch_id\n        assert (scheduler._schedules[sch_id]).enabled is False\n        assert 2 == log_info.call_count\n        calls = [call('No Task running for Schedule %s', '2b614d26-760f-11e7-b5a5-be2e44b06b34'),\n                 call(\"Disabled Schedule '%s/%s' process '%s'\\n\", 'OMF to PI north',\n                      '2b614d26-760f-11e7-b5a5-be2e44b06b34', 'North Readings to PI')]\n        log_info.assert_has_calls(calls)\n        assert 1 == audit_logger.call_count\n        new = {'schedule': {'name': 'OMF to PI north', 'repeat': 30.0, 'enabled': False,\n                                             'type': Schedule.Type.INTERVAL, 'exclusive': True,\n                                             'processName': 'North Readings to PI'}}\n        old = {'old_schedule': {'name': 'OMF to PI north', 'repeat': 30.0, 'enabled': True,\n                                             'type': Schedule.Type.INTERVAL, 'exclusive': True,\n                                             'processName': 'North Readings to PI'}}\n        calls = [call('SCHCH', {**new, **old})]\n        audit_logger.assert_has_calls(calls, any_order=True)\n\n    @pytest.mark.asyncio\n    async def test_disable_schedule_wrong_schedule_id(self, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        mocker.patch.object(scheduler, '_schedule_first_task')\n        await scheduler._get_schedules()\n        mocker.patch.object(scheduler, '_ready', True)\n        
mocker.patch.object(scheduler, '_task_processes')\n        log_exception = mocker.patch.object(scheduler._logger, \"exception\")\n        random_schedule_id = uuid.uuid4()\n\n        # WHEN\n        await scheduler.disable_schedule(random_schedule_id)\n\n        # THEN\n        log_params = \"No such Schedule %s\", str(random_schedule_id)\n        log_exception.assert_called_with(*log_params)\n\n    @pytest.mark.asyncio\n    async def test_disable_schedule_already_disabled(self, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        mocker.patch.object(scheduler, '_schedule_first_task')\n        await scheduler._get_schedules()\n        mocker.patch.object(scheduler, '_ready', True)\n        mocker.patch.object(scheduler, '_task_processes')\n        log_info = mocker.patch.object(scheduler._logger, \"info\")\n        sch_id = uuid.UUID(\"d1631422-9ec6-11e7-abc4-cec278b6b50a\")  # backup\n\n        # WHEN\n        status, message = await scheduler.disable_schedule(sch_id)\n\n        # THEN\n        assert status is True\n        assert message == \"Schedule {} already disabled\".format(str(sch_id))\n        assert (scheduler._schedules[sch_id]).id == sch_id\n        assert (scheduler._schedules[sch_id]).enabled is False\n        log_params = \"Schedule %s already disabled\", str(sch_id)\n        log_info.assert_called_with(*log_params)\n\n    @pytest.mark.asyncio\n    async def test_enable_schedule(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        sch_id = uuid.UUID(\"d1631422-9ec6-11e7-abc4-cec278b6b50a\")  # backup\n        queue_task = mocker.patch.object(scheduler, 'queue_task', return_value=asyncio.ensure_future(mock()))\n        audit_logger = 
mocker.patch.object(AuditLogger, 'information', return_value=asyncio.ensure_future(mock()))\n\n        # WHEN\n        status, message = await scheduler.enable_schedule(sch_id)\n\n        # THEN\n        assert status is True\n        assert message == \"Schedule successfully enabled\"\n        assert (scheduler._schedules[sch_id]).id == sch_id\n        assert (scheduler._schedules[sch_id]).enabled is True\n        assert 1 == queue_task.call_count\n        calls = [call(\"Enabled Schedule '%s/%s' process '%s'\\n\", 'backup hourly', 'd1631422-9ec6-11e7-abc4-cec278b6b50a', 'backup')]\n        log_info.assert_has_calls(calls, any_order=True)\n        assert 1 == audit_logger.call_count\n        new = {'schedule': {'name': 'backup hourly', 'type': Schedule.Type.INTERVAL, 'processName': 'backup',\n                            'exclusive': True, 'repeat': 3600.0, 'enabled': True}}\n        old = {'old_schedule': {'name': 'backup hourly', 'type': Schedule.Type.INTERVAL, 'processName': 'backup',\n                            'exclusive': True, 'repeat': 3600.0, 'enabled': False}}\n        calls = [call('SCHCH', {**new, **old})]\n        audit_logger.assert_has_calls(calls, any_order=True)\n\n    @pytest.mark.asyncio\n    async def test_enable_schedule_already_enabled(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        sch_id = uuid.UUID(\"ada12840-68d3-11e7-907b-a6006ad3dba0\")  #Coap\n        mocker.patch.object(scheduler, 'queue_task', return_value=asyncio.ensure_future(mock()))\n\n        # WHEN\n        status, message = await scheduler.enable_schedule(sch_id)\n\n        # THEN\n        assert status is True\n        assert message == \"Schedule is already enabled\"\n        assert (scheduler._schedules[sch_id]).id == sch_id\n        assert (scheduler._schedules[sch_id]).enabled is True\n        log_params = \"Schedule %s already enabled\", str(sch_id)\n        
log_info.assert_called_with(*log_params)\n\n    @pytest.mark.asyncio\n    async def test_enable_schedule_wrong_schedule_id(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        random_schedule_id = uuid.uuid4()\n\n        # WHEN\n        await scheduler.enable_schedule(random_schedule_id)\n\n        # THEN\n        log_params = \"No such Schedule %s\", str(random_schedule_id)\n        log_exception.assert_called_with(*log_params)\n\n    @pytest.mark.asyncio\n    async def test_queue_task(self, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        mocker.patch.object(scheduler, '_schedule_first_task')\n        await scheduler._get_schedules()\n        sch_id = uuid.UUID(\"cea17db8-6ccc-11e7-907b-a6006ad3dba0\")  # purge\n\n        mocker.patch.object(scheduler, '_ready', True)\n        mocker.patch.object(scheduler, '_resume_check_schedules')\n\n        # Assert that there is no task queued for this schedule at first\n        with pytest.raises(KeyError) as excinfo:\n            assert scheduler._schedule_executions[sch_id] is True\n\n        # WHEN\n        await scheduler.queue_task(sch_id)\n\n        # THEN\n        assert isinstance(scheduler._schedule_executions[sch_id], scheduler._ScheduleExecution)\n        # log_params = \"Queued schedule '%s' for execution\", 'purge'\n        # log_info.assert_called_with(*log_params)\n\n    @pytest.mark.asyncio\n    async def test_queue_task_schedule_not_found(self, mocker):\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, 
core_management_port=None)\n        mocker.patch.object(scheduler, '_schedule_first_task')\n        mocker.patch.object(scheduler, '_ready', True)\n        mocker.patch.object(scheduler, '_resume_check_schedules')\n\n        # WHEN\n        # THEN\n        with pytest.raises(ScheduleNotFoundError) as excinfo:\n            await scheduler.queue_task(uuid.uuid4())\n\n    @pytest.mark.asyncio\n    async def test_delete_schedule(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        sch_id = uuid.UUID(\"d1631422-9ec6-11e7-abc4-cec278b6b50a\")  # backup\n        await scheduler._get_schedules()\n\n        # Confirm no. of schedules\n        assert len(scheduler._storage_async.schedules) == len(scheduler._schedules)\n\n        mocker.patch.object(scheduler, '_ready', True)\n\n        # WHEN\n        # Now delete schedule\n        await scheduler.delete_schedule(sch_id)\n\n        # THEN\n        # Now confirm there is one schedule less\n        assert len(scheduler._storage_async.schedules) - 1 == len(scheduler._schedules)\n\n    @pytest.mark.asyncio\n    async def test_delete_schedule_enabled_schedule(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        sch_id = uuid.UUID(\"ada12840-68d3-11e7-907b-a6006ad3dba0\")  #Coap\n        await scheduler._get_schedules()\n        mocker.patch.object(scheduler, '_ready', True)\n\n        # Confirm there are 14 schedules\n        assert len(scheduler._storage_async.schedules) == len(scheduler._schedules)\n\n        # WHEN\n        # Now delete schedule\n        with pytest.raises(RuntimeWarning):\n            await scheduler.delete_schedule(sch_id)\n\n        # THEN\n        # Now confirm no schedule is deleted\n        assert len(scheduler._storage_async.schedules) == len(scheduler._schedules)\n        assert 1 == 
log_exception.call_count\n        log_params = 'Attempt to delete an enabled Schedule %s. Not deleted.', str(sch_id)\n        log_exception.assert_called_with(*log_params)\n\n    @pytest.mark.asyncio\n    async def test_delete_schedule_exception(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        log_debug = mocker.patch.object(scheduler._logger, 'debug', side_effect=Exception())\n        sch_id = uuid.UUID(\"d1631422-9ec6-11e7-abc4-cec278b6b50a\")  # backup\n\n        # WHEN\n        # THEN\n        with pytest.raises(ScheduleNotFoundError) as excinfo:\n            await scheduler.delete_schedule(uuid.uuid4())\n\n    @pytest.mark.asyncio\n    async def test_delete_schedule_not_found(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n\n        # WHEN\n        # THEN\n        with pytest.raises(ScheduleNotFoundError) as excinfo:\n            await scheduler.delete_schedule(uuid.uuid4())\n\n    @pytest.mark.asyncio\n    async def test_get_running_tasks(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n\n        # Assert that there is no task queued for schedule\n        with pytest.raises(KeyError) as excinfo:\n            assert scheduler._schedule_executions[schedule.id] is True\n\n        # Now queue task and assert that the task has been queued\n        await scheduler.queue_task(schedule.id)\n        assert isinstance(scheduler._schedule_executions[schedule.id], scheduler._ScheduleExecution)\n\n        # Confirm that no task has started yet\n        assert 0 == len(scheduler._schedule_executions[schedule.id].task_processes)\n\n        await scheduler._start_task(schedule)\n\n        # Confirm that task has started\n        assert 1 == 
len(scheduler._schedule_executions[schedule.id].task_processes)\n\n        # WHEN\n        tasks = await scheduler.get_running_tasks()\n\n        # THEN\n        assert 1 == len(tasks)\n        assert schedule.process_name == tasks[0].process_name\n        assert tasks[0].reason is None\n        assert tasks[0].state == Task.State.RUNNING\n        assert tasks[0].cancel_requested is None\n        assert tasks[0].start_time is not None\n        assert tasks[0].end_time is None\n        assert tasks[0].exit_code is None\n\n    @pytest.mark.asyncio\n    async def test_get_task(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n\n        # Assert that there is no North task queued for schedule\n        with pytest.raises(KeyError) as excinfo:\n            assert scheduler._schedule_executions[schedule.id] is True\n\n        # Now queue task and assert that the North task has been queued\n        await scheduler.queue_task(schedule.id)\n        assert isinstance(scheduler._schedule_executions[schedule.id], scheduler._ScheduleExecution)\n\n        # Confirm that no task has started yet\n        assert 0 == len(scheduler._schedule_executions[schedule.id].task_processes)\n\n        await scheduler._start_task(schedule)\n\n        # Confirm that task has started\n        assert 1 == len(scheduler._schedule_executions[schedule.id].task_processes)\n        task_id = list(scheduler._schedule_executions[schedule.id].task_processes.keys())[0]\n\n        # WHEN\n        task = await scheduler.get_task(task_id)\n\n        # THEN\n        # Use == for literal comparisons; 'is' with str literals relies on interning\n        assert schedule.process_name == task.process_name\n        assert task.reason == ''\n        assert task.state is not None\n        assert task.cancel_requested is None\n        assert task.start_time is not None\n        assert task.end_time is not None\n        assert task.exit_code == '0'\n\n    @pytest.mark.skip(\"Need a suitable fixture\")\n    
@pytest.mark.asyncio\n    async def test_get_task_not_found(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n\n        # WHEN\n        # THEN\n        with pytest.raises(TaskNotFoundError) as excinfo:\n            tasks = await scheduler.get_task(uuid.uuid4())\n\n    @pytest.mark.asyncio\n    async def test_get_task_exception(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        log_debug = mocker.patch.object(scheduler._logger, 'debug', side_effect=Exception())\n\n        # WHEN\n        # THEN\n        task_id = uuid.uuid4()\n        with pytest.raises(Exception) as excinfo:\n            await scheduler.get_task(task_id)\n\n        # THEN\n        payload = {\"return\": [\"id\", \"process_name\", \"schedule_name\", \"state\", {\"alias\": \"start_time\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\", \"column\": \"start_time\"}, {\"alias\": \"end_time\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\", \"column\": \"end_time\"}, \"reason\", \"exit_code\"], \"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": str(task_id)}}\n        args, kwargs = log_exception.call_args\n        assert 'Query failed: %s' == args[0]\n        p = json.loads(args[1])\n        assert payload == p\n\n    @pytest.mark.asyncio\n    async def test_get_tasks(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n\n        # Assert that there is no North task queued for schedule\n        with pytest.raises(KeyError) as excinfo:\n            assert scheduler._schedule_executions[schedule.id] is True\n\n        # Now queue task and assert that the North task has been queued\n        await scheduler.queue_task(schedule.id)\n        assert isinstance(scheduler._schedule_executions[schedule.id], 
scheduler._ScheduleExecution)\n\n        # Confirm that no task has started yet\n        assert 0 == len(scheduler._schedule_executions[schedule.id].task_processes)\n\n        await scheduler._start_task(schedule)\n\n        # Confirm that task has started\n        assert 1 == len(scheduler._schedule_executions[schedule.id].task_processes)\n        task_id = list(scheduler._schedule_executions[schedule.id].task_processes.keys())[0]\n\n        # WHEN\n        tasks = await scheduler.get_tasks()\n\n        # THEN\n        assert schedule.process_name == tasks[0].process_name\n        assert tasks[0].reason == ''\n        assert tasks[0].state is not None\n        assert tasks[0].cancel_requested is None\n        assert tasks[0].start_time is not None\n        assert tasks[0].end_time is not None\n        assert tasks[0].exit_code == '0'\n\n    @pytest.mark.asyncio\n    async def test_get_tasks_exception(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        log_debug = mocker.patch.object(scheduler._logger, 'debug', side_effect=Exception())\n\n        # WHEN\n        with pytest.raises(Exception) as excinfo:\n            tasks = await scheduler.get_tasks()\n\n        # THEN\n        payload = {\"return\": [\"id\", \"process_name\", \"schedule_name\", \"state\", {\"alias\": \"start_time\", \"column\": \"start_time\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, {\"alias\": \"end_time\", \"column\": \"end_time\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\"}, \"reason\", \"exit_code\"], \"limit\": 100}\n        args, kwargs = log_exception.call_args\n        assert 'Query failed: %s' == args[0]\n        p = json.loads(args[1])\n        assert payload == p\n\n    @pytest.mark.asyncio\n    async def test_cancel_task_all_ok(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n\n        
# Assert that there is no task queued for schedule\n        with pytest.raises(KeyError) as excinfo:\n            assert scheduler._schedule_executions[schedule.id] is True\n\n        # Now queue task and assert that the task has been queued\n        await scheduler.queue_task(schedule.id)\n        assert isinstance(scheduler._schedule_executions[schedule.id], scheduler._ScheduleExecution)\n\n        # Confirm that no task has started yet\n        assert 0 == len(scheduler._schedule_executions[schedule.id].task_processes)\n        await scheduler._start_task(schedule)\n\n        # Confirm that task has started\n        assert 1 == len(scheduler._schedule_executions[schedule.id].task_processes)\n        task_id = list(scheduler._schedule_executions[schedule.id].task_processes.keys())[0]\n\n        # Confirm that cancel request has not been made\n        assert scheduler._schedule_executions[schedule.id].task_processes[task_id].cancel_requested is None\n\n        # WHEN\n        await scheduler.cancel_task(task_id)\n\n        # THEN\n        assert scheduler._schedule_executions[schedule.id].task_processes[task_id].cancel_requested is not None\n        assert 2 == log_info.call_count\n        # args, kwargs = log_info.call_args_list[0]\n        # assert (\"Queued schedule '%s' for execution\", 'OMF to PI north') == args\n        args, kwargs = log_info.call_args_list[0]\n        assert \"Process started: Schedule '%s' process '%s' task %s pid %s, %s running tasks\\n%s\" in args\n        assert 'OMF to PI north' in args\n        assert 'North Readings to PI' in args\n        args, kwargs = log_info.call_args_list[1]\n        assert \"Stopping process: Schedule '%s' process '%s' task %s pid %s\\n%s\" in args\n        assert 'OMF to PI north' in args\n        assert 'North Readings to PI' in args\n\n    @pytest.mark.asyncio\n    async def test_cancel_task_exception(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, 
log_debug = await self.scheduler_fixture(mocker)\n\n        # WHEN\n        # THEN\n        with pytest.raises(TaskNotRunningError) as excinfo:\n            await scheduler.cancel_task(uuid.uuid4())\n\n    @pytest.mark.asyncio\n    async def test_not_ready_and_paused(self, mocker):\n        # GIVEN\n        scheduler, schedule, log_info, log_exception, log_error, log_debug = await self.scheduler_fixture(mocker)\n        mocker.patch.object(scheduler, '_ready', False)\n        mocker.patch.object(scheduler, '_paused', True)\n\n        # WHEN\n        # THEN\n        with pytest.raises(NotReadyError) as excinfo:\n            await scheduler.start()\n\n        with pytest.raises(NotReadyError) as excinfo:\n            await scheduler.get_scheduled_processes()\n\n        with pytest.raises(NotReadyError) as excinfo:\n            await scheduler.get_schedules()\n\n        with pytest.raises(NotReadyError) as excinfo:\n            await scheduler.get_schedule(uuid.uuid4())\n\n        with pytest.raises(NotReadyError) as excinfo:\n            await scheduler.save_schedule(Schedule(Schedule.Type.INTERVAL))\n\n        with pytest.raises(NotReadyError) as excinfo:\n            await scheduler.disable_schedule(uuid.uuid4())\n\n        with pytest.raises(NotReadyError) as excinfo:\n            await scheduler.enable_schedule(uuid.uuid4())\n\n        with pytest.raises(NotReadyError) as excinfo:\n            await scheduler.queue_task(uuid.uuid4())\n\n        with pytest.raises(NotReadyError) as excinfo:\n            await scheduler.delete_schedule(uuid.uuid4())\n\n        with pytest.raises(NotReadyError) as excinfo:\n            await scheduler.get_running_tasks()\n\n        with pytest.raises(NotReadyError) as excinfo:\n            await scheduler.cancel_task(uuid.uuid4())\n\n    @pytest.mark.skip(\"_terminate_child_processes() not fit for unit test.\")\n    @pytest.mark.asyncio\n    async def test__terminate_child_processes(self, mocker):\n        pass\n\n    
@pytest.mark.asyncio\n    async def test_cleanup(self):\n        scheduler = Scheduler()\n        scheduler._logger.level = logging.WARNING\n\n    @pytest.mark.parametrize(\"schedule_name, should_raise, expected_exception_type, expected_message\", [\n        (\"valid_schedule\", False, None, None),\n        (\"valid-schedule\", False, None, None),\n        (\"valid_schedule_123\", False, None, None),\n        (\"invalid\\\\schedule\", True, ValueError, \"Invalid character\"),\n        (\"invalid/schedule\", False, None, None),  # Forward slash is allowed\n        (\"\", True, ValueError, \"name can not be empty\"),\n        (None, True, ValueError, \"name can not be empty\"),\n        (\"schedule with spaces\", False, None, None),  # Spaces are allowed\n        (\"schedule.with.dots\", False, None, None),  # Dots are allowed\n        (\"UPPERCASE_SCHEDULE\", False, None, None),\n        (\"MixedCase123\", False, None, None),\n    ])\n    @pytest.mark.asyncio\n    async def test_save_schedule_identifier_validation(self, mocker, schedule_name, should_raise, expected_exception_type, expected_message):\n        \"\"\"Test that schedule names are validated for invalid characters\"\"\"\n        # GIVEN\n        scheduler = Scheduler()\n        scheduler._storage = MockStorage(core_management_host=None, core_management_port=None)\n        scheduler._storage_async = MockStorageAsync(core_management_host=None, core_management_port=None)\n        scheduler._ready = True\n        scheduler._paused = False\n\n        # Create a schedule with the test name\n        schedule = ManualSchedule()\n        schedule.name = schedule_name\n        schedule.schedule_id = uuid.uuid4()\n        schedule.process_name = \"test_process\"\n        schedule.exclusive = True\n        schedule.enabled = True\n\n        # WHEN/THEN\n        if should_raise:\n            with pytest.raises(expected_exception_type) as excinfo:\n                await scheduler.save_schedule(schedule)\n            
assert expected_message in str(excinfo.value)\n        else:\n            # Mock the storage operations for valid cases\n            mocker.patch.object(scheduler._storage_async, 'insert_into_tbl', return_value={'response': 'OK'})\n            mocker.patch.object(scheduler, '_schedule_first_task')\n            mocker.patch.object(scheduler, '_resume_check_schedules')\n\n            # Should not raise an exception\n            result = await scheduler.save_schedule(schedule)\n            assert result is None\n\n\nclass MockStorage(StorageClientAsync):\n    def __init__(self, core_management_host=None, core_management_port=None):\n        super().__init__(core_management_host, core_management_port)\n\n    def _get_storage_service(self, host, port):\n        return {\n                \"id\": uuid.uuid4(),\n                \"name\": \"Fledge Storage\",\n                \"type\": \"Storage\",\n                \"service_port\": 9999,\n                \"management_port\": 9999,\n                \"address\": \"0.0.0.0\",\n                \"protocol\": \"http\"\n        }\n\n\nclass MockStorageAsync(StorageClientAsync):\n    schedules = [\n        {\n            \"id\": \"cea17db8-6ccc-11e7-907b-a6006ad3dba0\",\n            \"process_name\": \"purge\",\n            \"schedule_name\": \"purge\",\n            \"schedule_type\": 4,\n            \"schedule_interval\": \"01:00:00\",\n            \"schedule_time\": \"\",\n            \"schedule_day\": 0,\n            \"exclusive\": \"t\",\n            \"enabled\": \"t\"\n        },\n        {\n            \"id\": \"2176eb68-7303-11e7-8cf7-a6006ad3dba0\",\n            \"process_name\": \"stats collector\",\n            \"schedule_name\": \"stats collection\",\n            \"schedule_type\": 2,\n            \"schedule_interval\": \"00:00:15\",\n            \"schedule_time\": \"00:00:15\",\n            \"schedule_day\": 3,\n            \"exclusive\": \"f\",\n            \"enabled\": \"t\"\n        },\n        {\n            \"id\": 
\"d1631422-9ec6-11e7-abc4-cec278b6b50a\",\n            \"process_name\": \"backup\",\n            \"schedule_name\": \"backup hourly\",\n            \"schedule_type\": 3,\n            \"schedule_interval\": \"01:00:00\",\n            \"schedule_time\": \"\",\n            \"schedule_day\": 0,\n            \"exclusive\": \"t\",\n            \"enabled\": \"f\"\n        },\n        {\n            \"id\": \"ada12840-68d3-11e7-907b-a6006ad3dba0\",\n            \"process_name\": \"COAP\",\n            \"schedule_name\": \"COAP listener south\",\n            \"schedule_type\": 1,\n            \"schedule_interval\": \"00:00:00\",\n            \"schedule_time\": \"\",\n            \"schedule_day\": 0,\n            \"exclusive\": \"t\",\n            \"enabled\": \"t\"\n        },\n        {\n            \"id\": \"2b614d26-760f-11e7-b5a5-be2e44b06b34\",\n            \"process_name\": \"North Readings to PI\",\n            \"schedule_name\": \"OMF to PI north\",\n            \"schedule_type\": 3,\n            \"schedule_interval\": \"00:00:30\",\n            \"schedule_time\": \"\",\n            \"schedule_day\": 0,\n            \"exclusive\": \"t\",\n            \"enabled\": \"t\"\n        },\n        {\n            \"id\": \"5d7fed92-fb9a-11e7-8c3f-9a214cf093ae\",\n            \"process_name\": \"North Readings to OCS\",\n            \"schedule_name\": \"OMF to OCS north\",\n            \"schedule_type\": 3,\n            \"schedule_interval\": \"1 day 00:00:40\",\n            \"schedule_time\": \"\",\n            \"schedule_day\": 0,\n            \"exclusive\": \"t\",\n            \"enabled\": \"f\"\n        },\n    ]\n\n    scheduled_processes = [\n        {\n            \"name\": \"purge\",\n            \"script\": [\n                \"tasks/purge\"\n            ]\n        },\n        {\n            \"name\": \"stats collector\",\n            \"script\": [\n                \"tasks/statistics\"\n            ]\n        },\n        {\n            \"name\": \"backup\",\n        
    \"script\": [\n                \"tasks/backup_postgres\"\n            ]\n        },\n        {\n            \"name\": \"COAP\",\n            \"script\": [\n                \"services/south\"\n            ]\n        },\n        {\n            \"name\": \"North Readings to PI\",\n            \"script\": [\n                \"tasks/north\",\n                \"--stream_id\",\n                \"1\",\n                \"--debug_level\",\n                \"1\"\n            ]\n        },\n        {\n            \"name\": \"North Readings to OCS\",\n            \"script\": [\n                \"tasks/north\",\n                \"--stream_id\",\n                \"4\",\n                \"--debug_level\",\n                \"1\"\n            ]\n        },\n    ]\n\n    tasks = [\n        {\n            \"id\": \"259b8570-65c1-4b92-8c62-e9642631a600\",\n            \"process_name\": \"North Readings to PI\",\n            \"state\": 1,\n            \"start_time\": \"2018-02-06 13:28:14.477868\",\n            \"end_time\": \"2018-02-06 13:28:14.856375\",\n            \"exit_code\": \"0\",\n            \"reason\": \"\"\n        }\n    ]\n\n    def __init__(self, core_management_host=None, core_management_port=None):\n        super().__init__(core_management_host, core_management_port)\n\n    def _get_storage_service(self, host, port):\n        return {\n                \"id\": uuid.uuid4(),\n                \"name\": \"Fledge Storage\",\n                \"type\": \"Storage\",\n                \"service_port\": 9999,\n                \"management_port\": 9999,\n                \"address\": \"0.0.0.0\",\n                \"protocol\": \"http\"\n        }\n\n    @classmethod\n    async def insert_into_tbl(cls, table_name, payload):\n        pass\n\n    @classmethod\n    async def update_tbl(cls, table_name, payload):\n        # Only valid for test_save_schedule_update\n        if table_name == \"schedules\":\n            return {\"count\": 1}\n\n    @classmethod\n    async def 
delete_from_tbl(cls, table_name, condition=None):\n        pass\n\n    @classmethod\n    async def query_tbl_with_payload(cls, table_name, query_payload):\n        if table_name == 'tasks':\n            return {\n                \"count\": len(MockStorageAsync.tasks),\n                \"rows\": MockStorageAsync.tasks\n            }\n\n    @classmethod\n    async def query_tbl(cls, table_name, query=None):\n        if table_name == 'schedules':\n            return {\n                \"count\": len(MockStorageAsync.schedules),\n                \"rows\": MockStorageAsync.schedules\n            }\n\n        if table_name == 'scheduled_processes':\n            return {\n                \"count\": len(MockStorageAsync.scheduled_processes),\n                \"rows\": MockStorageAsync.scheduled_processes\n            }\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/scheduler/test_scheduler_entities.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test fledge/services/core/scheduler/entities.py \"\"\"\n\nimport pytest\nimport datetime\nfrom enum import IntEnum\n\nfrom fledge.services.core.scheduler.entities import *\n\n__author__ = \"Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestSchedulerEntities:\n    def test_scheduled_process(self):\n        scheduled_process = ScheduledProcess()\n        assert scheduled_process.name is None\n        assert scheduled_process.script is None\n\n    def test_schedule(self):\n        assert isinstance(Schedule.Type(2), IntEnum)\n        schedule = Schedule(Schedule.Type.STARTUP)\n        assert schedule.schedule_id is None\n        assert schedule.name is None\n        assert schedule.exclusive is True\n        assert schedule.enabled is False\n        assert schedule.repeat is None\n        assert schedule.process_name is None\n        assert schedule.schedule_type == 1\n\n    def test_schedule_todict(self):\n        schedule = Schedule(Schedule.Type.STARTUP)\n        schedule.name = 'test'\n        schedule.process_name = 'test'\n        schedule.repeat = datetime.timedelta(seconds=30)\n        schedule.enabled = True\n        schedule.exclusive = False\n        schedule_json = {\n            \"name\": \"test\",\n            \"type\": 1,\n            \"processName\": \"test\",\n            \"repeat\": 30,\n            \"enabled\": True,\n            \"exclusive\": False\n        }\n        assert schedule_json == schedule.toDict()\n\n    def test_startup_schedule(self):\n        startup_schedule = StartUpSchedule()\n        assert startup_schedule.schedule_id is None\n        assert startup_schedule.name is None\n        assert startup_schedule.exclusive is True\n        assert startup_schedule.enabled is False\n        assert startup_schedule.repeat 
is None\n        assert startup_schedule.process_name is None\n        assert startup_schedule.schedule_type == 1\n        with pytest.raises(AttributeError):\n            assert startup_schedule.day is None\n            assert startup_schedule.time is None\n\n    def test_timed_schedule(self):\n        timed_schedule = TimedSchedule()\n        assert timed_schedule.schedule_id is None\n        assert timed_schedule.name is None\n        assert timed_schedule.exclusive is True\n        assert timed_schedule.enabled is False\n        assert timed_schedule.repeat is None\n        assert timed_schedule.process_name is None\n        assert timed_schedule.schedule_type == 2\n        assert timed_schedule.day is None\n        assert timed_schedule.time is None\n\n    def test_timed_schedule_todict(self):\n        schedule = TimedSchedule()\n        schedule.name = 'test'\n        schedule.process_name = 'test'\n        schedule.repeat = datetime.timedelta(seconds=30)\n        schedule.enabled = True\n        schedule.exclusive = False\n        schedule.day = 3\n        schedule.time = datetime.time(hour=5, minute=22, second=25)\n        schedule_json = {\n            \"name\": \"test\",\n            \"type\": 2,\n            \"processName\": \"test\",\n            \"repeat\": 30,\n            \"day\": 3,\n            \"time\": \"5:22:25\",\n            \"enabled\": True,\n            \"exclusive\": False\n        }\n        assert schedule_json == schedule.toDict()\n\n    def test_interval_schedule(self):\n        interval_schedule = IntervalSchedule()\n        assert interval_schedule.schedule_id is None\n        assert interval_schedule.name is None\n        assert interval_schedule.exclusive is True\n        assert interval_schedule.enabled is False\n        assert interval_schedule.repeat is None\n        assert interval_schedule.process_name is None\n        assert interval_schedule.schedule_type == 3\n        with pytest.raises(AttributeError):\n            assert 
interval_schedule.day is None\n            assert interval_schedule.time is None\n\n    def test_manual_schedule(self):\n        manual_schedule = ManualSchedule()\n        assert manual_schedule.schedule_id is None\n        assert manual_schedule.name is None\n        assert manual_schedule.exclusive is True\n        assert manual_schedule.enabled is False\n        assert manual_schedule.repeat is None\n        assert manual_schedule.process_name is None\n        assert manual_schedule.schedule_type == 4\n        with pytest.raises(AttributeError):\n            assert manual_schedule.day is None\n            assert manual_schedule.time is None\n\n    def test_task(self):\n        assert isinstance(Task.State(2), IntEnum)\n        task = Task()\n        assert task.task_id is None\n        assert task.process_name is None\n        assert task.reason is None\n        assert task.state is None\n        assert task.cancel_requested is None\n        assert task.start_time is None\n        assert task.end_time is None\n        assert task.exit_code is None\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/scheduler/test_scheduler_exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test fledge/services/core/scheduler/exceptions.py \"\"\"\n\nimport uuid\nimport pytest\nfrom fledge.services.core.scheduler.exceptions import *\n\n__author__ = \"Amarendra K Sinha\"\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestSchedulerExceptions:\n    def test_NotReadyError(self):\n        with pytest.raises(NotReadyError) as excinfo:\n            raise NotReadyError()\n        assert excinfo.type is NotReadyError\n        assert issubclass(excinfo.type, RuntimeError)\n\n    def test_DuplicateRequestError(self):\n        with pytest.raises(DuplicateRequestError) as excinfo:\n            raise DuplicateRequestError()\n        assert excinfo.type is DuplicateRequestError\n        assert issubclass(excinfo.type, RuntimeError)\n\n    def test_TaskNotRunningError(self):\n        task_id = uuid.uuid4()\n        with pytest.raises(TaskNotRunningError) as excinfo:\n            raise TaskNotRunningError(task_id)\n        assert excinfo.type is TaskNotRunningError\n        assert issubclass(excinfo.type, RuntimeError)\n        assert \"Task is not running: {}\".format(task_id) in str(excinfo)\n\n    def test_TaskNotFoundError(self):\n        task_id = uuid.uuid4()\n        with pytest.raises(TaskNotFoundError) as excinfo:\n            raise TaskNotFoundError(task_id)\n        assert excinfo.type is TaskNotFoundError\n        assert issubclass(excinfo.type, ValueError)\n        assert \"Task not found: {}\".format(task_id) in str(excinfo)\n\n    def test_ScheduleNotFoundError(self):\n        schedule_id = uuid.uuid4()\n        with pytest.raises(ScheduleNotFoundError) as excinfo:\n            raise ScheduleNotFoundError(schedule_id)\n        assert excinfo.type is ScheduleNotFoundError\n        assert issubclass(excinfo.type, ValueError)\n        assert \"Schedule not 
found: {}\".format(schedule_id) in str(excinfo)\n\n    def test_ScheduleProcessNameNotFound(self):\n        with pytest.raises(ScheduleProcessNameNotFoundError) as excinfo:\n            raise ScheduleProcessNameNotFoundError()\n        assert excinfo.type is ScheduleProcessNameNotFoundError\n        assert issubclass(excinfo.type, ValueError)\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/service_registry/test_exceptions.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test fledge/services/core/service_registry/exceptions.py \"\"\"\n\nimport pytest\n\nfrom fledge.services.core.service_registry.exceptions import *\n\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestServiceRegistryExceptions:\n\n    def test_DoesNotExist(self):\n        with pytest.raises(Exception) as excinfo:\n            raise DoesNotExist()\n        assert excinfo.type is DoesNotExist\n        assert issubclass(excinfo.type, Exception)\n        assert \"\" == str(excinfo.value)\n\n    def test_AlreadyExistsWithTheSameName(self):\n        with pytest.raises(Exception) as excinfo:\n            raise AlreadyExistsWithTheSameName()\n        assert excinfo.type is AlreadyExistsWithTheSameName\n        assert issubclass(excinfo.type, Exception)\n        assert \"\" == str(excinfo.value)\n\n    def test_AlreadyExistsWithTheSameAddressAndPort(self):\n        with pytest.raises(Exception) as excinfo:\n            raise AlreadyExistsWithTheSameAddressAndPort()\n        assert excinfo.type is AlreadyExistsWithTheSameAddressAndPort\n        assert issubclass(excinfo.type, Exception)\n        assert \"\" == str(excinfo.value)\n\n    def test_AlreadyExistsWithTheSameAddressAndManagementPort(self):\n        with pytest.raises(Exception) as excinfo:\n            raise AlreadyExistsWithTheSameAddressAndManagementPort()\n        assert excinfo.type is AlreadyExistsWithTheSameAddressAndManagementPort\n        assert issubclass(excinfo.type, Exception)\n        assert \"\" == str(excinfo.value)\n\n    def test_NonNumericPortError(self):\n        with pytest.raises(Exception) as excinfo:\n            raise NonNumericPortError()\n        assert excinfo.type is NonNumericPortError\n        assert issubclass(excinfo.type, TypeError)\n        assert \"\" == str(excinfo.value)\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/service_registry/test_monitor.py",
    "content": "# -*- coding: utf-8 -*-\nimport asyncio\nfrom unittest.mock import MagicMock\nfrom unittest.mock import patch\nimport pytest\nimport sys\n\nimport aiohttp\nfrom fledge.services.core.service_registry.monitor import Monitor, MonitorRegistry\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.services.core import connect\n\n\n__author__ = \"Ashwin Gopalakrishnan\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestMonitor:\n\n    def setup_method(self):\n        ServiceRegistry._registry = []\n\n    def teardown_method(self):\n        ServiceRegistry._registry = []\n\n    @pytest.mark.asyncio\n    async def test__monitor_good_uptime(self):\n        async def async_mock(return_value):\n            return return_value\n        # used to mock client session context manager\n\n        class AsyncSessionContextManagerMock(MagicMock):\n            def __init__(self, *args, **kwargs):\n                super().__init__(*args, **kwargs)\n\n            async def __aenter__(self):\n                _rv = await async_mock('{\"uptime\": \"bla\"}')\n\n                client_response_mock = MagicMock(spec=aiohttp.ClientResponse)\n                # mock response (good)\n                client_response_mock.text.side_effect = [_rv]\n                return client_response_mock\n\n            async def __aexit__(self, *args):\n                return None\n        # as the monitor loop is an infinite loop, this exception is thrown when we need to exit it\n\n        class TestMonitorException(Exception):\n            pass\n        # register a service\n        with patch.object(ServiceRegistry._logger, 'info') as log_info:\n            s_id_1 = 
ServiceRegistry.register(\n                'sname1', 'Storage', 'saddress1', 1, 1, 'protocol1')\n        assert 1 == log_info.call_count\n        args, kwargs = log_info.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <sname1, type=Storage, protocol=protocol1, address=saddress1, service port=1, '\n                                'management port=1, status=1>')\n        monitor = Monitor()\n        monitor._sleep_interval = Monitor._DEFAULT_SLEEP_INTERVAL\n        monitor._max_attempts = Monitor._DEFAULT_MAX_ATTEMPTS\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n\n        # throw the TestMonitorException when sleep is called (end of infinite loop)\n        with patch.object(Monitor, '_sleep', side_effect=TestMonitorException()):\n            with patch.object(aiohttp.ClientSession, 'get', return_value=AsyncSessionContextManagerMock()):\n                with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                    with pytest.raises(Exception) as excinfo:\n                        await monitor._monitor_loop()\n                    assert excinfo.type is TestMonitorException\n        # service is good, so it should remain in the service registry\n        assert len(ServiceRegistry.get(idx=s_id_1)) == 1\n        # TODO: Investigate in py3.8 ServiceRecord.Status is Unresponsive on exception\n        \"\"\" =============================== warnings summary ===============================\n        tests/unit/python/fledge/services/core/service_registry/test_monitor.py::TestMonitor::()::test__monitor_good_uptime\n        /usr/lib/python3.8/unittest/mock.py:2076: RuntimeWarning: coroutine 'AsyncMockMixin._execute_mock_call' was never awaited\n        See: https://bugs.python.org/issue40406\n        \"\"\"\n        print(ServiceRegistry.get(idx=s_id_1)[0]._status)\n\n    @pytest.mark.asyncio\n    async def test__monitor_exceed_attempts(self, 
mocker):\n        class AsyncSessionContextManagerMock(MagicMock):\n            def __init__(self, *args, **kwargs):\n                super().__init__(*args, **kwargs)\n\n            async def __aenter__(self):\n                # mock response (error - exception)\n                raise Exception(\"test\")\n\n            async def __aexit__(self, *args):\n                return None\n        # as the monitor loop is an infinite loop, this exception is raised when we need to exit the loop\n\n        class TestMonitorException(Exception):\n            pass\n\n        # register a service\n        with patch.object(ServiceRegistry._logger, 'info') as log_info:\n            s_id_1 = ServiceRegistry.register(\n                'sname1', 'Storage', 'saddress1', 1, 1, 'protocol1')\n        assert 1 == log_info.call_count\n        args, kwargs = log_info.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <sname1, type=Storage, protocol=protocol1, address=saddress1, '\n                                'service port=1, management port=1, status=1>')\n        monitor = Monitor()\n        monitor._sleep_interval = Monitor._DEFAULT_SLEEP_INTERVAL\n        monitor._max_attempts = Monitor._DEFAULT_MAX_ATTEMPTS\n        _rv = await asyncio.sleep(0.1)  # asyncio.sleep() returns None; reused below as a benign side-effect value\n        sleep_side_effect_list = list()\n        # _MAX_ATTEMPTS is 15\n        # raise the exception on the 16th time sleep is called - the first 15 sleeps are used during retries\n        for i in range(0, 15):\n            sleep_side_effect_list.append(_rv)\n        sleep_side_effect_list.append(TestMonitorException())\n        with patch.object(Monitor, '_sleep', side_effect=sleep_side_effect_list):\n            with patch.object(aiohttp.ClientSession, 'get', return_value=AsyncSessionContextManagerMock()):\n                with pytest.raises(Exception) as excinfo:\n                    await monitor._monitor_loop()\n                assert excinfo.type in [TestMonitorException, 
TypeError]\n\n        assert ServiceRegistry.get(idx=s_id_1)[0]._status is ServiceRecord.Status.Failed\n\n    @pytest.mark.asyncio\n    async def test_monitor_support_bundle_and_alert_creation(self):\n        \"\"\"Test that support bundle and alert are created when service fails with auto_support_bundle enabled\"\"\"\n        \n        # Register a service\n        with patch.object(ServiceRegistry._logger, 'info') as log_info:\n            s_id_1 = ServiceRegistry.register(\n                'test_service', 'Southbound', 'localhost', 1234, 1235, 'http')\n        \n        monitor = Monitor()\n        monitor._max_attempts = 3\n        # Enable auto support bundle creation\n        monitor._support_bundle_config = {\n            'auto_support_bundle': {'value': 'true'},\n            'support_bundle_retain_count': {'value': '3'}\n        }\n\n        # Track method calls\n        support_bundle_calls = []\n        \n        # Mock the create_automated_support_bundle method\n        async def mock_create_support_bundle(service_name):\n            support_bundle_calls.append(service_name)\n            return f'support-{service_name}-123.tar.gz'\n        \n        with patch.object(monitor, 'create_automated_support_bundle', side_effect=mock_create_support_bundle):\n            # Track asyncio.create_task calls\n            created_tasks = []\n            original_create_task = asyncio.create_task\n            \n            def mock_create_task(coro):\n                task = original_create_task(coro)\n                created_tasks.append(task)\n                return task\n            \n            # Mock InterestRegistry to avoid ConfigurationManager issues\n            with patch('fledge.services.core.service_registry.service_registry.InterestRegistry') as mock_interest_registry:\n                mock_interest_instance = MagicMock()\n                mock_interest_registry.return_value = mock_interest_instance\n                mock_interest_instance.get.return_value 
= []  # Return empty list\n                mock_interest_instance.unregister.return_value = None\n                \n                with patch('asyncio.create_task', side_effect=mock_create_task):\n                    with patch.object(ServiceRegistry._logger, 'info') as log_info_mark_failed:\n                        # Simulate the logic from monitor loop when service fails\n                        service_record = ServiceRegistry.get(idx=s_id_1)[0]\n                        check_count = {service_record._id: monitor._max_attempts + 1}  # Exceed max attempts\n                        \n                        # This is the logic from the monitor loop when max attempts are exceeded\n                        if check_count[service_record._id] > monitor._max_attempts:\n                            ServiceRegistry.mark_as_failed(service_record._id)\n                            check_count[service_record._id] = 0\n                            auto_support_bundle = monitor._support_bundle_config['auto_support_bundle']['value'] == 'true'\n                            if auto_support_bundle:\n                                asyncio.create_task(monitor.create_automated_support_bundle(service_record._name))\n            \n            # Wait for any created tasks to complete\n            if created_tasks:\n                await asyncio.gather(*created_tasks, return_exceptions=True)\n\n        # Verify service is marked as failed\n        assert ServiceRegistry.get(idx=s_id_1)[0]._status is ServiceRecord.Status.Failed\n        \n        # Verify support bundle creation method was called\n        assert len(support_bundle_calls) == 1\n        assert support_bundle_calls[0] == 'test_service'\n        \n        # Verify a task was created\n        assert len(created_tasks) == 1\n\n    @pytest.mark.asyncio\n    async def test_monitor_no_support_bundle_when_disabled(self):\n        \"\"\"Test that support bundle is not created when auto_support_bundle is disabled\"\"\"\n        \n        
# Register a service\n        with patch.object(ServiceRegistry._logger, 'info') as log_info:\n            s_id_1 = ServiceRegistry.register(\n                'test_service_2', 'Northbound', 'localhost', 1236, 1237, 'http')\n        \n        monitor = Monitor()\n        monitor._max_attempts = 3\n        # Disable auto support bundle creation\n        monitor._support_bundle_config = {\n            'auto_support_bundle': {'value': 'false'}\n        }\n\n        # Track method calls\n        support_bundle_calls = []\n        \n        # Mock the create_automated_support_bundle method\n        async def mock_create_support_bundle(service_name):\n            support_bundle_calls.append(service_name)\n            return f'support-{service_name}-123.tar.gz'\n        \n        with patch.object(monitor, 'create_automated_support_bundle', side_effect=mock_create_support_bundle):\n            # Track asyncio.create_task calls\n            created_tasks = []\n            original_create_task = asyncio.create_task\n            \n            def mock_create_task(coro):\n                task = original_create_task(coro)\n                created_tasks.append(task)\n                return task\n            \n            # Mock InterestRegistry to avoid ConfigurationManager issues\n            with patch('fledge.services.core.service_registry.service_registry.InterestRegistry') as mock_interest_registry:\n                mock_interest_instance = MagicMock()\n                mock_interest_registry.return_value = mock_interest_instance\n                mock_interest_instance.get.return_value = []  # Return empty list\n                mock_interest_instance.unregister.return_value = None\n                \n                with patch('asyncio.create_task', side_effect=mock_create_task):\n                    with patch.object(ServiceRegistry._logger, 'info') as log_info_mark_failed:\n                        # Simulate the logic from monitor loop when service fails\n                  
      service_record = ServiceRegistry.get(idx=s_id_1)[0]\n                        check_count = {service_record._id: monitor._max_attempts + 1}  # Exceed max attempts\n                        \n                        # This is the logic from the monitor loop when max attempts are exceeded\n                        if check_count[service_record._id] > monitor._max_attempts:\n                            ServiceRegistry.mark_as_failed(service_record._id)\n                            check_count[service_record._id] = 0\n                            auto_support_bundle = monitor._support_bundle_config['auto_support_bundle']['value'] == 'true'\n                            if auto_support_bundle:\n                                asyncio.create_task(monitor.create_automated_support_bundle(service_record._name))\n\n        # Verify service is marked as failed\n        assert ServiceRegistry.get(idx=s_id_1)[0]._status is ServiceRecord.Status.Failed\n        \n        # Verify support bundle creation method was NOT called\n        assert len(support_bundle_calls) == 0\n        \n        # Verify no tasks were created\n        assert len(created_tasks) == 0\n\n\nclass TestMonitorRegistry:\n    \"\"\"Test cases for MonitorRegistry functionality\"\"\"\n\n    def setup_method(self):\n        \"\"\"Clear the registry before each test\"\"\"\n        MonitorRegistry._monitors = {}\n\n    def teardown_method(self):\n        \"\"\"Clean up registry after each test\"\"\"\n        MonitorRegistry._monitors = {}\n\n    def test_register_monitor(self):\n        \"\"\"Test registering a monitor instance\"\"\"\n        monitor = Monitor()\n        \n        # Register monitor\n        MonitorRegistry.register('test_monitor', monitor)\n        \n        # Verify it was registered\n        assert MonitorRegistry.get('test_monitor') is monitor\n        assert len(MonitorRegistry.get_all()) == 1\n\n\n    def test_get_default_monitor(self):\n        \"\"\"Test getting monitor with default 
ID\"\"\"\n        monitor = Monitor()\n        MonitorRegistry.register('default', monitor)\n        \n        # Should return the same monitor for default ID\n        assert MonitorRegistry.get() is monitor\n        assert MonitorRegistry.get('default') is monitor\n\n    def test_unregister_monitor(self):\n        \"\"\"Test unregistering a monitor instance\"\"\"\n        monitor = Monitor()\n        MonitorRegistry.register('test_monitor', monitor)\n        \n        # Verify it's registered\n        assert MonitorRegistry.get('test_monitor') is monitor\n        \n        # Unregister it\n        result = MonitorRegistry.unregister('test_monitor')\n        \n        # Verify it was returned and removed\n        assert result is monitor\n        assert MonitorRegistry.get('test_monitor') is None\n        assert len(MonitorRegistry.get_all()) == 0\n \n\nclass TestMonitorWithRegistry:\n    \"\"\"Test Monitor class integration with MonitorRegistry\"\"\"\n\n    def setup_method(self):\n        \"\"\"Clean up before each test\"\"\"\n        MonitorRegistry._monitors = {}\n        ServiceRegistry._registry = []\n\n    def teardown_method(self):\n        \"\"\"Clean up after each test\"\"\"\n        MonitorRegistry._monitors = {}\n        ServiceRegistry._registry = []\n\n    @pytest.mark.asyncio\n    async def test_monitor_registers_itself_during_read_config(self):\n        \"\"\"Test that Monitor registers itself during _read_config\"\"\"\n        monitor = Monitor()\n        \n        # Mock dependencies\n        mock_storage = MagicMock(spec=StorageClientAsync)\n        mock_config = {\n            'sleep_interval': {'value': '5'},\n            'ping_timeout': {'value': '1'},\n            'max_attempts': {'value': '15'},\n            'restart_failed': {'value': 'auto'}\n        }\n        mock_support_config = {\n            'auto_support_bundle': {'value': 'false'}\n        }\n        \n        with patch.object(connect, 'get_storage_async', 
return_value=mock_storage):\n            with patch('fledge.common.configuration_manager.ConfigurationManager') as mock_cfg_mgr_class:\n                mock_cfg_mgr = MagicMock()\n                mock_cfg_mgr_class.return_value = mock_cfg_mgr\n                mock_cfg_mgr.create_category = MagicMock()\n                mock_cfg_mgr.get_category_all_items.side_effect = [mock_config, mock_support_config]\n                mock_cfg_mgr.register_interest = MagicMock()\n                \n                # Call _read_config\n                await monitor._read_config()\n        \n        # Verify monitor registered itself\n        assert MonitorRegistry.get('default') is monitor\n\n    @pytest.mark.asyncio\n    async def test_monitor_unregisters_itself_during_stop(self):\n        \"\"\"Test that Monitor unregisters itself during stop\"\"\"\n        monitor = Monitor()\n        \n        # Manually register monitor first\n        MonitorRegistry.register('default', monitor)\n        assert MonitorRegistry.get('default') is monitor\n        \n        # Mock the configuration manager\n        mock_cfg_mgr = MagicMock()\n        monitor._cfg_manager = mock_cfg_mgr\n        \n        # Mock the monitor loop task\n        mock_task = MagicMock()\n        monitor._monitor_loop_task = mock_task\n        \n        # Call stop\n        await monitor.stop()\n        \n        # Verify monitor unregistered itself\n        assert MonitorRegistry.get('default') is None\n\n\nclass TestMonitorConfigCallback:\n    \"\"\"Test module-level callback function with MonitorRegistry\"\"\"\n\n    def setup_method(self):\n        \"\"\"Clean up before each test\"\"\"\n        MonitorRegistry._monitors = {}\n\n    def teardown_method(self):\n        \"\"\"Clean up after each test\"\"\"\n        MonitorRegistry._monitors = {}\n\n    @pytest.mark.asyncio\n    async def test_run_callback_with_registered_monitor(self):\n        \"\"\"Test run callback when monitor is registered\"\"\"\n        monitor = 
Monitor()\n        # Mock handle_config_change as async method\n        async def mock_handle_config_change(category_name):\n            pass  # Successful call\n        monitor._handle_config_change = mock_handle_config_change\n        # Track if the method was called\n        call_tracker = {'called': False, 'category': None}\n        async def tracked_mock_handle_config_change(category_name):\n            call_tracker['called'] = True\n            call_tracker['category'] = category_name\n        monitor._handle_config_change = tracked_mock_handle_config_change\n        # Register monitor\n        MonitorRegistry.register('default', monitor)\n        # Import and call the run function\n        from fledge.services.core.service_registry.monitor import run\n        await run('SMNTR')\n        # Verify the monitor's handle_config_change was called\n        assert call_tracker['called'] is True\n        assert call_tracker['category'] == 'SMNTR'\n\n    @pytest.mark.asyncio\n    async def test_run_callback_with_no_registered_monitor(self):\n        \"\"\"Test run callback when no monitor is registered\"\"\"\n        # Ensure no monitor is registered\n        assert MonitorRegistry.get('default') is None\n        # Mock logger setup to capture warning\n        with patch('fledge.services.core.service_registry.monitor.logger.setup') as mock_logger_setup:\n            mock_logger = MagicMock()\n            mock_logger_setup.return_value = mock_logger\n            # Import and call the run function\n            from fledge.services.core.service_registry.monitor import run\n            await run('SMNTR')\n        # Verify warning was logged\n        mock_logger.warning.assert_called_once_with(\"Monitor instance not available for config change callback\")\n\n    @pytest.mark.asyncio\n    async def test_run_callback_handles_exception(self):\n        \"\"\"Test run callback handles exceptions in monitor's handle_config_change\"\"\"\n        monitor = Monitor()\n        # 
Mock handle_config_change to raise an exception - make it async\n        async def mock_handle_config_change(category_name):\n            raise Exception(\"Test exception\")\n        monitor._handle_config_change = mock_handle_config_change\n        # Mock logger for error logging - patch the logger to avoid MagicMock async issues\n        with patch.object(monitor, '_logger') as mock_logger:\n            # Register monitor\n            MonitorRegistry.register('default', monitor)\n            # Import and call the run function\n            from fledge.services.core.service_registry.monitor import run\n            # Should not raise exception, should handle it gracefully\n            await run('SMNTR')\n        # Verify error was logged\n        mock_logger.error.assert_called_once_with(\n            \"Error in configuration change callback for {}: {}\".format('SMNTR', 'Test exception'))\n\n    @pytest.mark.asyncio\n    async def test_run_callback_with_support_bundle_category(self):\n        \"\"\"Test run callback with SUPPORT_BUNDLE category\"\"\"\n        monitor = Monitor()\n        # Track if the method was called\n        call_tracker = {'called': False, 'category': None}\n        async def tracked_mock_handle_config_change(category_name):\n            call_tracker['called'] = True\n            call_tracker['category'] = category_name\n        monitor._handle_config_change = tracked_mock_handle_config_change\n        # Register monitor\n        MonitorRegistry.register('default', monitor)\n        # Import and call the run function\n        from fledge.services.core.service_registry.monitor import run\n        await run('SUPPORT_BUNDLE')\n        # Verify the monitor's handle_config_change was called with correct category\n        assert call_tracker['called'] is True\n        assert call_tracker['category'] == 'SUPPORT_BUNDLE'\n\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/service_registry/test_service_registry.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom unittest.mock import patch\nimport pytest\n\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core.service_registry.exceptions import *\nfrom fledge.services.core.interest_registry.interest_registry import InterestRegistry\n\n__copyright__ = \"Copyright (c) 2018 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestServiceRegistry:\n\n    def setup_method(self):\n        ServiceRegistry._registry = list()\n\n    def teardown_method(self):\n        ServiceRegistry._registry = list()\n\n    def test_register(self):\n        with patch.object(ServiceRegistry._logger, 'info') as log_info:\n            s_id = ServiceRegistry.register(\"A name\", \"Storage\", \"127.0.0.1\", 1234, 4321, 'http')\n            assert 36 == len(s_id)  # uuid version 4 len\n            assert 1 == len(ServiceRegistry._registry)\n        assert 1 == log_info.call_count\n        args, kwargs = log_info.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <A name, type=Storage, protocol=http, address=127.0.0.1, service port=1234,'\n                                ' management port=4321, status=1>')\n\n    def test_register_with_service_port_none(self):\n        with patch.object(ServiceRegistry._logger, 'info') as log_info:\n            s_id = ServiceRegistry.register(\"A name\", \"Southbound\", \"127.0.0.1\", None, 4321, 'http')\n            assert 36 == len(s_id)  # uuid version 4 len\n            assert 1 == len(ServiceRegistry._registry)\n        assert 1 == log_info.call_count\n        args, kwargs = log_info.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <A name, type=Southbound, protocol=http, address=127.0.0.1, service port=None,'\n                  
              ' management port=4321, status=1>')\n\n    def test_register_with_same_name(self):\n        \"\"\"raise AlreadyExistsWithTheSameName\"\"\"\n        with patch.object(ServiceRegistry._logger, 'info') as log_info1:\n            ServiceRegistry.register(\"A name\", \"Storage\", \"127.0.0.1\", 1, 2, 'http')\n            assert 1 == len(ServiceRegistry._registry)\n        assert 1 == log_info1.call_count\n        args, kwargs = log_info1.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <A name, type=Storage, protocol=http, address=127.0.0.1, service port=1,'\n                                ' management port=2, status=1>')\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(ServiceRegistry._logger, 'info') as log_info2:\n                ServiceRegistry.register(\"A name\", \"Storage\", \"127.0.0.2\", 3, 4, 'http')\n                assert 1 == len(ServiceRegistry._registry)\n            assert 0 == log_info2.call_count\n        assert excinfo.type is AlreadyExistsWithTheSameName\n\n    def test_register_with_same_address_and_port(self):\n        \"\"\"raise AlreadyExistsWithTheSameAddressAndPort\"\"\"\n        with patch.object(ServiceRegistry._logger, 'info') as log_info1:\n            ServiceRegistry.register(\"A name\", \"Storage\", \"127.0.0.1\", 1234, 1, 'http')\n            assert 1 == len(ServiceRegistry._registry)\n        assert 1 == log_info1.call_count\n        args, kwargs = log_info1.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <A name, type=Storage, protocol=http, address=127.0.0.1, service port=1234,'\n                                ' management port=1, status=1>')\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(ServiceRegistry._logger, 'info') as log_info2:\n                ServiceRegistry.register(\"B name\", \"Storage\", \"127.0.0.1\", 
1234, 2, 'http')\n                assert 1 == len(ServiceRegistry._registry)\n            assert 0 == log_info2.call_count\n        assert excinfo.type is AlreadyExistsWithTheSameAddressAndPort\n\n    def test_register_with_same_address_and_mgt_port(self):\n        \"\"\"raise AlreadyExistsWithTheSameAddressAndManagementPort\"\"\"\n        with patch.object(ServiceRegistry._logger, 'info') as log_info1:\n            ServiceRegistry.register(\"A name\", \"Storage\", \"127.0.0.1\", 1, 1234, 'http')\n            assert 1 == len(ServiceRegistry._registry)\n        assert 1 == log_info1.call_count\n        args, kwargs = log_info1.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <A name, type=Storage, protocol=http, address=127.0.0.1, service port=1,'\n                                ' management port=1234, status=1>')\n\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(ServiceRegistry._logger, 'info') as log_info2:\n                ServiceRegistry.register(\"B name\", \"Storage\", \"127.0.0.1\", 2, 1234, 'http')\n                assert 1 == len(ServiceRegistry._registry)\n            assert 0 == log_info2.call_count\n        assert excinfo.type is AlreadyExistsWithTheSameAddressAndManagementPort\n\n    def test_register_with_bad_service_port(self):\n        \"\"\"raise NonNumericPortError\"\"\"\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(ServiceRegistry._logger, 'info') as log_info:\n                ServiceRegistry.register(\"B name\", \"Storage\", \"127.0.0.1\", \"s01\", 1234, 'http')\n                assert 1 == len(ServiceRegistry._registry)\n            assert 0 == log_info.call_count\n        assert excinfo.type is NonNumericPortError\n\n    def test_register_with_bad_management_port(self):\n        \"\"\"raise NonNumericPortError\"\"\"\n        with pytest.raises(Exception) as excinfo:\n            with 
patch.object(ServiceRegistry._logger, 'info') as log_info:\n                ServiceRegistry.register(\"B name\", \"Storage\", \"127.0.0.1\", 1234, \"m01\", 'http')\n                assert 0 == len(ServiceRegistry._registry)\n            assert 0 == log_info.call_count\n        assert excinfo.type is NonNumericPortError\n\n    def test_unregister(self, mocker):\n        mocker.patch.object(InterestRegistry, '__init__', return_value=None)\n        mocker.patch.object(InterestRegistry, 'get', return_value=list())\n\n        with patch.object(ServiceRegistry._logger, 'info') as log_info1:\n            reg_id = ServiceRegistry.register(\"A name\", \"Storage\", \"127.0.0.1\", 1234, 4321, 'http')\n            assert 1 == len(ServiceRegistry._registry)\n        assert 1 == log_info1.call_count\n        arg, kwarg = log_info1.call_args\n        assert arg[0].startswith('Registered service instance id=')\n        assert arg[0].endswith(': <A name, type=Storage, protocol=http, address=127.0.0.1, service port=1234,'\n                               ' management port=4321, status=1>')\n\n        with patch.object(ServiceRegistry._logger, 'info') as log_info2:\n            s_id = ServiceRegistry.unregister(reg_id)\n            assert 36 == len(s_id)  # uuid version 4 len\n            assert 1 == len(ServiceRegistry._registry)\n            s = ServiceRegistry.get(idx=s_id)\n            assert s[0]._status == 2\n        assert 1 == log_info2.call_count\n        args, kwargs = log_info2.call_args\n        assert args[0].startswith('Stopped service instance id=')\n        assert args[0].endswith(': <A name, type=Storage, protocol=http, address=127.0.0.1, service port=1234,'\n                                ' management port=4321, status=2>')\n\n    def test_unregister_non_existing_service_record(self):\n        \"\"\"raise DoesNotExist\"\"\"\n        with pytest.raises(Exception) as excinfo:\n            with patch.object(ServiceRegistry._logger, 'info') as log_info:\n                
ServiceRegistry.unregister(\"blah\")\n                assert 0 == len(ServiceRegistry._registry)\n            assert 0 == log_info.call_count\n        assert excinfo.type is DoesNotExist\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/test_connect.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nfrom unittest.mock import patch\nimport pytest\n\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.services.core.service_registry.exceptions import DoesNotExist\nfrom fledge.services.core import connect\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nclass TestConnect:\n    \"\"\" Storage connection\"\"\"\n    def setup_method(self):\n        ServiceRegistry._registry = []\n\n    def teardown_method(self):\n        ServiceRegistry._registry = []\n\n    def test_get_storage(self):\n        with patch.object(ServiceRegistry._logger, 'info') as log_info:\n            ServiceRegistry.register(\"Fledge Storage\", \"Storage\", \"127.0.0.1\", 37449, 37843)\n            storage_client = connect.get_storage_async()\n            assert isinstance(storage_client, StorageClientAsync)\n        assert 1 == log_info.call_count\n        args, kwargs = log_info.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <Fledge Storage, type=Storage, protocol=http, address=127.0.0.1, service port=37449,'\n                                ' management port=37843, status=1>')\n\n    @patch('fledge.services.core.connect._logger')\n    def test_exception_when_no_storage(self, mock_logger):\n        with pytest.raises(DoesNotExist) as excinfo:\n            connect.get_storage_async()\n        assert \"DoesNotExist\" in str(excinfo)\n        assert 1 == mock_logger.error.call_count\n\n    @patch('fledge.services.core.connect._logger')\n    def test_exception_when_non_fledge_storage(self, mock_logger):\n        with patch.object(ServiceRegistry._logger, 'info') as log_info:\n            
ServiceRegistry.register(\"foo\", \"Storage\", \"127.0.0.1\", 1, 2)\n        assert 1 == log_info.call_count\n        args, kwargs = log_info.call_args\n        assert args[0].startswith('Registered service instance id=')\n        assert args[0].endswith(': <foo, type=Storage, protocol=http, address=127.0.0.1, service port=1, '\n                                'management port=2, status=1>')\n        with pytest.raises(DoesNotExist) as excinfo:\n            connect.get_storage_async()\n        assert \"DoesNotExist\" in str(excinfo)\n        assert 1 == mock_logger.error.call_count\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/test_main.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test server.Server.__main__ entry point\n\n\"\"\"\n\nimport pytest\nfrom unittest.mock import patch\n\nfrom fledge.services import core\n\n\nasync def test_main():\n    with patch('fledge.services.core', return_value=None) as mockedMain:\n        srvr = mockedMain.Server\n        srvr.start.return_value = None\n\n        srvr.start()\n\n        srvr.start.assert_called_once_with()  # assert_called_once() is python3.6 onwards :]\n\n        # Okay, let's verify once more! :P\n        assert 1 == srvr.start.call_count\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/test_server.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test fledge.services.core server \"\"\"\n\nimport asyncio\nimport json\nfrom unittest import mock\nfrom unittest.mock import MagicMock, patch\nfrom aiohttp import web\nfrom aiohttp.test_utils import make_mocked_request\nfrom aiohttp.streams import StreamReader\nfrom multidict import CIMultiDict\nimport pytest\n\nfrom fledge.services.common.microservice_management import routes as management_routes\nfrom fledge.services.core import server\nfrom fledge.services.core.server import Server\nfrom fledge.common.web import middleware\nfrom fledge.services.core.interest_registry.interest_registry import InterestRegistry\nfrom fledge.services.core.interest_registry.interest_record import InterestRecord\nfrom fledge.services.core.interest_registry import exceptions as interest_registry_exceptions\nfrom fledge.services.core.service_registry.service_registry import ServiceRegistry\nfrom fledge.common.service_record import ServiceRecord\nfrom fledge.services.core.service_registry import exceptions as service_registry_exceptions\nfrom fledge.services.core.api import configuration as conf_api\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.services.core.user_model import User\n\n\n__author__ = \"Vaibhav Singhal, Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\ndef mock_request(data, loop):\n    payload = StreamReader(\"http\", loop=loop, limit=1024)\n    payload.feed_data(data.encode())\n    payload.feed_eof()\n\n    protocol = mock.Mock()\n    app = mock.Mock()\n    headers = CIMultiDict([('CONTENT-TYPE', 'application/json')])\n    req = make_mocked_request('POST', '/sensor-reading', headers=headers,\n              
                protocol=protocol, payload=payload, app=app, loop=loop)\n    return req\n\n\nclass TestServer:\n\n    @pytest.fixture\n    def client(self, loop, test_client):\n        app = web.Application(middlewares=[middleware.error_middleware])\n        management_routes.setup(app, Server, True)\n        return loop.run_until_complete(test_client(app))\n\n    ############################\n    # start stop\n    ############################\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test_get_certificates(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__rest_api_config(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test_service_config(self):\n        pass\n\n    async def test__installation_config(self):\n        async def async_mock(return_value):\n            return return_value\n\n        storage_client_mock = MagicMock(spec=StorageClientAsync)\n        Server._configuration_manager = ConfigurationManager(storage_client_mock)\n        _rv = await async_mock([])\n        with patch.object(Server._configuration_manager, 'create_category',\n                          return_value=_rv) as patch_create_cat:\n            with patch.object(Server._configuration_manager, 'get_category_all_items',\n                              return_value=_rv) as patch_get_all_cat:\n                await Server.installation_config()\n            patch_get_all_cat.assert_called_once_with('Installation')\n        patch_create_cat.assert_called_once_with('Installation', Server._INSTALLATION_DEFAULT_CONFIG, 'Installation',\n                                                 True, display_name='Installation')\n\n    async def test__setup_config_manager(self):\n        async def async_mock(return_value):\n            return return_value\n\n        storage_client_mock = 
MagicMock(spec=StorageClientAsync)\n        Server._configuration_manager = ConfigurationManager(storage_client_mock)\n        value = {'cacheSize': {'description': 'To control the caching size of Core Configuration Manager',\n                               'type': 'integer', 'displayName': 'Cache Size', 'default': '30', 'value': '30',\n                               'order': '1', 'minimum': '1', 'maximum': '1000'}}\n        rv = await async_mock(value)\n        with patch.object(Server._configuration_manager, 'create_category',\n                          return_value=rv) as patch_create_cat:\n            with patch.object(Server._configuration_manager, 'get_category_all_items',\n                              return_value=rv) as patch_get_all_cat:\n                await Server.setup_config_manager()\n            patch_get_all_cat.assert_called_once_with('CONFIGURATION')\n        patch_create_cat.assert_called_once_with('CONFIGURATION', Server._CONFIGURATION_DEFAULT_CONFIG,\n                                                 'Core Configuration Manager', True,\n                                                 display_name='Configuration Manager')\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__make_app(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__make_core_app(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__start_service_monitor(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test_stop_service_monitor(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test___start_scheduler(self):\n        pass\n\n 
   @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__start_storage(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__get_storage_client(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__start_app(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test_pid_filename(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__pidfile_exists(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__remove_pid(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__write_pid(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__start_core(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__register_core(self):\n        pass\n\n    @pytest.mark.asyncio\n    async def test_start(self):\n        with patch.object(Server, \"_start_core\", return_value=None) as patched_start_core:\n            Server.start()\n        args, kwargs = patched_start_core.call_args\n        assert 1 == patched_start_core.call_count\n        assert isinstance(kwargs['loop'], asyncio.unix_events._UnixSelectorEventLoop)\n\n    @pytest.mark.asyncio\n    async def test__stop(self, mocker):\n        mocked__stop_scheduler = mocker.patch.object(Server, \"_stop_scheduler\")\n        mocked_stop_microservices = mocker.patch.object(Server, \"stop_microservices\")\n        mocked_stop_service_monitor = mocker.patch.object(Server, \"stop_service_monitor\")\n        mocked_stop_rest_server = mocker.patch.object(Server, 
\"stop_rest_server\")\n        mocked_stop_storage = mocker.patch.object(Server, \"stop_storage\")\n        mocked__remove_pid = mocker.patch.object(Server, \"_remove_pid\")\n\n        async def return_async_value(val):\n            return val\n\n        _rv1 = await return_async_value(None)\n        _rv2 = await return_async_value('stopping scheduler..')\n        _rv3 = await return_async_value('stopping msvc..')\n        _rv4 = await return_async_value('stopping svc monitor..')\n        _rv5 = await return_async_value('stopping REST server..')\n        _rv6 = await return_async_value('stopping storage..')\n        mocked__stop_scheduler.return_value = _rv2\n        mocked_stop_microservices.return_value = _rv3\n        mocked_stop_service_monitor.return_value = _rv4\n        mocked_stop_rest_server.return_value = _rv5\n        mocked_stop_storage.return_value = _rv6\n\n        mocked__remove_pid.return_value = 'removing PID..'        \n\n        with patch.object(AuditLogger, '__init__', return_value=None):\n            with patch.object(AuditLogger, 'information', return_value=_rv1) as audit_info_patch:\n                await Server._stop()\n            # Must write the audit log entry before we stop the storage service\n            args, kwargs = audit_info_patch.call_args\n            assert 'FSTOP' == args[0]\n            assert None is args[1]\n\n        assert 1 == mocked__stop_scheduler.call_count\n        assert 1 == mocked_stop_microservices.call_count\n        assert 1 == mocked_stop_service_monitor.call_count\n        assert 1 == mocked_stop_rest_server.call_count\n        assert 1 == mocked_stop_storage.call_count\n        assert 1 == mocked__remove_pid.call_count\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test_stop_rest_server(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test_stop_storage(self):\n        pass\n\n    
@pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test_stop_microservices(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__request_microservice_shutdown(self):\n        pass\n\n    @pytest.mark.asyncio\n    @pytest.mark.skip(reason=\"To be implemented\")\n    async def test__stop_scheduler(self):\n        pass\n\n    ############################\n    # Configuration Management\n    ############################\n\n    \"\"\" Tests the calls to configuration manager via core management api\n    \n    No negative tests added since these are already covered in fledge/services/core/api/test_configuration.py\n    \"\"\"\n    async def test_get_configuration_categories(self, client):\n        async def async_mock():\n            return web.json_response({'categories': \"test\"})\n\n        _rv = await async_mock()\n        result = {'categories': \"test\"}\n        with patch.object(conf_api, 'get_categories', return_value=_rv) as patch_get_all_categories:\n            resp = await client.get('/fledge/service/category')\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert result == json_response\n        assert 1 == patch_get_all_categories.call_count\n\n    async def test_get_configuration_category(self, client):\n        async def async_mock():\n            return web.json_response(\"test\")\n\n        _rv = await async_mock()\n        result = \"test\"\n        with patch.object(conf_api, 'get_category', return_value=_rv) as patch_category:\n            resp = await client.get('/fledge/service/category/{}'.format(\"test_category\"))\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert result == json_response\n        assert 1 == patch_category.call_count\n\n    async def 
test_create_configuration_category(self, client):\n        async def async_mock():\n            return web.json_response({\"key\": \"test_name\",\n                                      \"description\": \"test_category_desc\",\n                                      \"value\": \"test_category_info\"})\n\n        _rv = await async_mock()\n        result = {\"key\": \"test_name\", \"description\": \"test_category_desc\", \"value\": \"test_category_info\"}\n        with patch.object(conf_api, 'create_category', return_value=_rv) as patch_create_category:\n            resp = await client.post('/fledge/service/category')\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert result == json_response\n        assert 1 == patch_create_category.call_count\n\n    async def test_get_configuration_item(self, client):\n        async def async_mock():\n            return web.json_response(\"test\")\n\n        _rv = await async_mock()\n        result = \"test\"\n        with patch.object(conf_api, 'get_category_item', return_value=_rv) as patch_category_item:\n            resp = await client.get('/fledge/service/category/{}/{}'.format(\"test_category\", \"test_item\"))\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert result == json_response\n        assert 1 == patch_category_item.call_count\n\n    async def test_update_configuration_item(self, client):\n        async def async_mock():\n            return web.json_response(\"test\")\n\n        _rv = await async_mock()\n        result = \"test\"\n        with patch.object(conf_api, 'set_configuration_item', return_value=_rv) as patch_update_category_item:\n            resp = await client.put('/fledge/service/category/{}/{}'.format(\"test_category\", \"test_item\"))\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = 
json.loads(r)\n            assert result == json_response\n        assert 1 == patch_update_category_item.call_count\n\n    async def test_delete_configuration_item(self, client):\n        async def async_mock():\n            return web.json_response(\"ok\")\n\n        _rv = await async_mock()\n        result = \"ok\"\n        with patch.object(conf_api, 'delete_configuration_item_value', return_value=_rv) as patch_del_category_item:\n            resp = await client.delete('/fledge/service/category/{}/{}/value'.format(\"test_category\", \"test_item\"))\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert result == json_response\n        assert 1 == patch_del_category_item.call_count\n\n    ############################\n    # Register Interest\n    ############################\n    async def test_bad_uuid_get_interest(self, client):\n        resp = await client.get('/fledge/interest?microserviceid=X')\n        assert 400 == resp.status\n        assert 'Invalid microservice id X' == resp.reason\n\n    @pytest.mark.parametrize(\"params, expected_kwargs\", [\n        (\"\", {}),\n        (\"?category=Y\", {'category_name': 'Y'}),\n        (\"?microserviceid=c6bbf3c8-f43c-4b0f-ac48-f597f510da0b\", {'microservice_uuid': 'c6bbf3c8-f43c-4b0f-ac48-f597f510da0b'}),\n        (\"?category=Y&microserviceid=0c501cd3-c45a-439a-bec6-fc08d13f9699\",  {'microservice_uuid': '0c501cd3-c45a-439a-bec6-fc08d13f9699', 'category_name': 'Y'})\n    ])\n    async def test_get_interest_with_filter(self, client, params, expected_kwargs):\n        Server._storage_client = MagicMock(StorageClientAsync)\n        Server._configuration_manager = ConfigurationManager(Server._storage_client)\n        Server._interest_registry = InterestRegistry(Server._configuration_manager)\n        with patch.object(Server._interest_registry, 'get', return_value=[]) as patch_get_interest_reg:\n            resp = await 
client.get('/fledge/interest{}'.format(params))\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert {'interests': []} == json_response\n        args, kwargs = patch_get_interest_reg.call_args\n        assert expected_kwargs == kwargs\n\n    @pytest.mark.parametrize(\"params, expected_kwargs, message\", [\n        (\"\", {}, \"No interest registered\"),\n        (\"?category=Y\", {'category_name': 'Y'}, \"No interest registered for category Y\"),\n        (\"?microserviceid=c6bbf3c8-f43c-4b0f-ac48-f597f510da0b\",\n         {'microservice_uuid': 'c6bbf3c8-f43c-4b0f-ac48-f597f510da0b'}, \"No interest registered microservice id c6bbf3c8-f43c-4b0f-ac48-f597f510da0b\"),\n        (\"?category=Y&microserviceid=0c501cd3-c45a-439a-bec6-fc08d13f9699\",\n         {'microservice_uuid': '0c501cd3-c45a-439a-bec6-fc08d13f9699', 'category_name': 'Y'}, \"No interest registered for category Y and microservice id 0c501cd3-c45a-439a-bec6-fc08d13f9699\")\n    ])\n    async def test_get_interest_exception(self, client, params, message, expected_kwargs):\n        Server._storage_client = MagicMock(StorageClientAsync)\n        Server._configuration_manager = ConfigurationManager(Server._storage_client)\n        Server._interest_registry = InterestRegistry(Server._configuration_manager)\n        with patch.object(Server._interest_registry, 'get', side_effect=interest_registry_exceptions.DoesNotExist) as patch_get_interest_reg:\n            resp = await client.get('/fledge/interest{}'.format(params))\n            assert 404 == resp.status\n            assert message == resp.reason\n        args, kwargs = patch_get_interest_reg.call_args\n        assert expected_kwargs == kwargs\n\n    async def test_get_interest(self, client):\n        Server._storage_client = MagicMock(StorageClientAsync)\n        Server._configuration_manager = ConfigurationManager(Server._storage_client)\n        Server._interest_registry 
= InterestRegistry(Server._configuration_manager)\n\n        data = []\n        category_name = 'test_Cat'\n        muuid = '0c501cd3-c45a-439a-bec6-fc08d13f9699'\n        reg_id = 'c6bbf3c8-f43c-4b0f-ac48-f597f510da0b'\n        record = InterestRecord(reg_id, muuid, category_name)\n        data.append(record)\n\n        with patch.object(Server._interest_registry, 'get', return_value=data) as patch_get_interest_reg:\n            resp = await client.get('/fledge/interest')\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert {'interests': [{'category': category_name, 'microserviceId': muuid, 'registrationId': reg_id}]} == json_response\n        args, kwargs = patch_get_interest_reg.call_args\n        assert {} == kwargs\n\n    async def test_bad_uuid_register_interest(self, client):\n        request_data = {\"category\": \"COAP\", \"service\": \"X\"}\n        resp = await client.post('/fledge/interest', data=json.dumps(request_data))\n        assert 400 == resp.status\n        assert 'Invalid microservice id X' == resp.reason\n\n    async def test_bad_register_interest(self, client):\n        Server._storage_client = MagicMock(StorageClientAsync)\n        Server._configuration_manager = ConfigurationManager(Server._storage_client)\n        Server._interest_registry = InterestRegistry(Server._configuration_manager)\n\n        request_data = {\"category\": \"COAP\", \"service\": \"c6bbf3c8-f43c-4b0f-ac48-f597f510da0b\"}\n        with patch.object(Server._interest_registry, 'register', return_value=None) as patch_reg_interest_reg:\n            resp = await client.post('/fledge/interest', data=json.dumps(request_data))\n            assert 400 == resp.status\n            assert 'Interest by microservice_uuid {} for category_name {} could not be registered'.format(request_data['service'], request_data['category']) == resp.reason\n        args, kwargs = patch_reg_interest_reg.call_args\n   
     assert (request_data['service'], request_data['category']) == args\n\n    async def test_register_interest_exceptions(self, client):\n        Server._storage_client = MagicMock(StorageClientAsync)\n        Server._configuration_manager = ConfigurationManager(Server._storage_client)\n        Server._interest_registry = InterestRegistry(Server._configuration_manager)\n\n        request_data = {\"category\": \"COAP\", \"service\": \"c6bbf3c8-f43c-4b0f-ac48-f597f510da0b\"}\n        with patch.object(Server._interest_registry, 'register', side_effect=interest_registry_exceptions.ErrorInterestRegistrationAlreadyExists) as patch_reg_interest_reg:\n            resp = await client.post('/fledge/interest', data=json.dumps(request_data))\n            assert 400 == resp.status\n            assert 'An InterestRecord already exists by microservice_uuid {} for category_name {}'.format(request_data['service'], request_data['category']) == resp.reason\n        args, kwargs = patch_reg_interest_reg.call_args\n        assert (request_data['service'], request_data['category']) == args\n\n    async def test_register_interest(self, client):\n        Server._storage_client = MagicMock(StorageClientAsync)\n        Server._configuration_manager = ConfigurationManager(Server._storage_client)\n        Server._interest_registry = InterestRegistry(Server._configuration_manager)\n\n        request_data = {\"category\": \"COAP\", \"service\": \"c6bbf3c8-f43c-4b0f-ac48-f597f510da0b\"}\n        reg_id = 'a404852d-d91c-47bd-8860-d4ff81b6e8cb'\n        with patch.object(Server._interest_registry, 'register', return_value=reg_id) as patch_reg_interest_reg:\n            resp = await client.post('/fledge/interest', data=json.dumps(request_data))\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert {'id': reg_id, 'message': 'Interest registered successfully'} == json_response\n        args, kwargs = 
patch_reg_interest_reg.call_args\n        assert (request_data['service'], request_data['category']) == args\n\n    async def test_bad_uuid_unregister_interest(self, client):\n        resp = await client.delete('/fledge/interest/blah')\n        assert 400 == resp.status\n        assert 'Invalid registration id blah' == resp.reason\n\n    async def test_unregister_interest_exception(self, client):\n        Server._storage_client = MagicMock(StorageClientAsync)\n        Server._configuration_manager = ConfigurationManager(Server._storage_client)\n        Server._interest_registry = InterestRegistry(Server._configuration_manager)\n\n        reg_id = 'c6bbf3c8-f43c-4b0f-ac48-f597f510da0b'\n        with patch.object(Server._interest_registry, 'get', side_effect=interest_registry_exceptions.DoesNotExist) as patch_get_interest_reg:\n            resp = await client.delete('/fledge/interest/{}'.format(reg_id))\n            assert 404 == resp.status\n            assert 'InterestRecord with registration_id {} does not exist'.format(reg_id) == resp.reason\n        args, kwargs = patch_get_interest_reg.call_args\n        assert {'registration_id': reg_id} == kwargs\n\n    async def test_unregister_interest(self, client):\n        Server._storage_client = MagicMock(StorageClientAsync)\n        Server._configuration_manager = ConfigurationManager(Server._storage_client)\n        Server._interest_registry = InterestRegistry(Server._configuration_manager)\n\n        data = []\n        category_name = 'test_Cat'\n        muuid = '0c501cd3-c45a-439a-bec6-fc08d13f9699'\n        reg_id = 'c6bbf3c8-f43c-4b0f-ac48-f597f510da0b'\n        record = InterestRecord(reg_id, muuid, category_name)\n        data.append(record)\n\n        with patch.object(Server._interest_registry, 'get', return_value=data) as patch_get_interest_reg:\n            with patch.object(Server._interest_registry, 'unregister', return_value=[]) as patch_unregister_interest:\n                resp = await 
client.delete('/fledge/interest/{}'.format(reg_id))\n                assert 200 == resp.status\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert {'id': reg_id, 'message': 'Interest unregistered'} == json_response\n            args, kwargs = patch_unregister_interest.call_args\n            assert (reg_id,) == args\n        args1, kwargs1 = patch_get_interest_reg.call_args\n        assert {'registration_id': reg_id} == kwargs1\n\n    ############################\n    # Register Service\n    ############################\n    @pytest.mark.parametrize(\"params, obj, expected_kwargs\", [\n        (\"\", \"all\", {}),\n        (\"?name=Y\", \"get\", {'name': 'Y'}),\n        (\"?type=Storage\", \"get\", {'s_type': 'Storage'}),\n        (\"?name=Y&type=Storage\", \"filter_by_name_and_type\", {'name': 'Y', 's_type': 'Storage'})\n    ])\n    async def test_get_service(self, client, params, obj, expected_kwargs):\n        with patch.object(ServiceRegistry, obj, return_value=[]) as patch_get_service_reg:\n            resp = await client.get('/fledge/service{}'.format(params))\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert {'services': []} == json_response\n        args, kwargs = patch_get_service_reg.call_args\n        assert expected_kwargs == kwargs\n\n    @pytest.mark.parametrize(\"params, obj, expected_kwargs, message\", [\n        (\"\", \"all\", {}, \"No service found\"),\n        (\"?name=Y\", \"get\", {'name': 'Y'}, \"Service with name Y does not exist\"),\n        (\"?type=Storage\", \"get\", {'s_type': 'Storage'}, \"Service with type Storage does not exist\"),\n        (\"?name=Y&type=Storage\", \"filter_by_name_and_type\", {'name': 'Y', 's_type': 'Storage'}, \"Service with name Y and type Storage does not exist\")\n    ])\n    async def test_get_service_exception(self, client, params, obj, expected_kwargs, message):\n   
     with patch.object(ServiceRegistry, obj, side_effect=service_registry_exceptions.DoesNotExist) as patch_service_reg:\n            resp = await client.get('/fledge/service{}'.format(params))\n            assert 404 == resp.status\n            assert message == resp.reason\n        args, kwargs = patch_service_reg.call_args\n        assert expected_kwargs == kwargs\n\n    async def test_get_services(self, client):\n        sid = \"c6bbf3c8-f43c-4b0f-ac48-f597f510da0b\"\n        sname = \"name\"\n        stype = \"Southbound\"\n        sprotocol = \"http\"\n        saddress = \"localhost\"\n        sport = 1234\n        smgtport = 4321\n        data = []\n        record = ServiceRecord(sid, sname, stype, sprotocol, saddress, sport, smgtport)\n        data.append(record)\n\n        with patch.object(ServiceRegistry, 'all', return_value=data) as patch_get_all_service_reg:\n            resp = await client.get('/fledge/service')\n            assert 200 == resp.status\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert {'services': [{'id': sid, 'management_port': smgtport, 'address': saddress, 'name': sname, 'type': stype, 'protocol': sprotocol, 'status': 'running', 'service_port': sport}]} == json_response\n        args, kwargs = patch_get_all_service_reg.call_args\n        assert {} == kwargs\n\n    @pytest.mark.parametrize(\"request_data, message\", [\n        ({\"type\": \"Storage\", \"name\": \"Storage Services\", \"address\": \"127.0.0.1\", \"service_port\": \"8090\", \"management_port\": 1090}, \"Service's service port can be a positive integer only\"),\n        ({\"type\": \"Storage\", \"name\": \"Storage Services\", \"address\": \"127.0.0.1\", \"service_port\": 8090, \"management_port\": \"1090\"}, \"Service management port can be a positive integer only\"),\n        ({\"type\": \"Storage\", \"name\": \"Storage Services\", \"address\": \"127.0.0.1\", \"service_port\": \"8090\", \"management_port\": \"1090\"}, 
\"Service's service port can be a positive integer only\")\n    ])\n    async def test_bad_register_service(self, client, request_data, message):\n        resp = await client.post('/fledge/service', data=json.dumps(request_data))\n        assert 400 == resp.status\n        assert message == resp.reason\n\n    @pytest.mark.parametrize(\"exception_name, message\", [\n        (service_registry_exceptions.AlreadyExistsWithTheSameName, \"A Service with the same name already exists\"),\n        (service_registry_exceptions.AlreadyExistsWithTheSameAddressAndPort, \"A Service is already registered on the same address: 127.0.0.1 and service port: 8090\"),\n        (service_registry_exceptions.AlreadyExistsWithTheSameAddressAndManagementPort, \"A Service is already registered on the same address: 127.0.0.1 and management port: 1090\")\n    ])\n    async def test_register_service_exceptions(self, client, exception_name, message):\n        request_data = {\"type\": \"Storage\", \"name\": \"Storage Services\", \"address\": \"127.0.0.1\", \"service_port\": 8090, \"management_port\": 1090}\n        with patch.object(ServiceRegistry, 'getStartupToken', return_value=None):\n            with patch.object(ServiceRegistry, 'register', side_effect=exception_name):\n                resp = await client.post('/fledge/service', data=json.dumps(request_data))\n                assert 400 == resp.status\n                assert message == resp.reason\n\n    async def test_service_not_registered(self, client):\n        request_data = {\"type\": \"Storage\", \"name\": \"Storage Services\", \"address\": \"127.0.0.1\", \"service_port\": 8090, \"management_port\": 1090}\n        with patch.object(ServiceRegistry, 'getStartupToken', return_value=None):\n            with patch.object(ServiceRegistry, 'register', return_value=None) as patch_register:\n                resp = await client.post('/fledge/service', data=json.dumps(request_data))\n                assert 400 == resp.status\n                
assert 'Service {} could not be registered'.format(request_data['name']) == resp.reason\n            args, _ = patch_register.call_args\n            assert (request_data['name'], request_data['type'], request_data['address'],  request_data['service_port'], request_data['management_port'], 'http', None) == args\n\n    async def test_register_service(self, client):\n        async def async_mock():\n            return \"\"\n\n        Server._storage_client = MagicMock(StorageClientAsync)\n        Server._storage_client_async = MagicMock(StorageClientAsync)\n        request_data = {\"type\": \"Storage\", \"name\": \"Storage Services\", \"address\": \"127.0.0.1\", \"service_port\": 8090,\n                        \"management_port\": 1090}\n        _rv = await async_mock()\n        with patch.object(ServiceRegistry, 'getStartupToken', return_value=None):\n            with patch.object(ServiceRegistry, 'register', return_value='1') as patch_register:\n                with patch.object(AuditLogger, '__init__', return_value=None):\n                    with patch.object(AuditLogger, 'information', return_value=_rv) as audit_info_patch:\n                        resp = await client.post('/fledge/service', data=json.dumps(request_data))\n                        assert 200 == resp.status\n                        r = await resp.text()\n                        json_response = json.loads(r)\n                        assert {'message': 'Service registered successfully', 'id': '1', 'bearer_token': ''} == json_response\n                    args, kwargs = audit_info_patch.call_args\n                    assert 'SRVRG' == args[0]\n                    assert {'name': request_data['name']} == args[1]\n            args, _ = patch_register.call_args\n            assert (request_data['name'], request_data['type'], request_data['address'],\n                    request_data['service_port'], request_data['management_port'], 'http', None) == args\n\n    async def 
test_service_not_found_when_unregister(self, client):\n        with patch.object(ServiceRegistry, 'get', side_effect=service_registry_exceptions.DoesNotExist) as patch_unregister:\n            resp = await client.delete('/fledge/service/blah')\n            assert 404 == resp.status\n            assert 'Service with blah does not exist' == resp.reason\n        args, kwargs = patch_unregister.call_args\n        assert {'idx': 'blah'} == kwargs\n\n    async def test_unregister_service(self, client):\n        async def async_mock():\n            return \"\"\n\n        service_id = \"c6bbf3c8-f43c-4b0f-ac48-f597f510da0b\"\n        sname = \"name\"\n        stype = \"Southbound\"\n        sprotocol = \"http\"\n        saddress = \"localhost\"\n        sport = 1234\n        smgtport = 4321\n        data = []\n        record = ServiceRecord(service_id, sname, stype, sprotocol, saddress, sport, smgtport)\n        data.append(record)\n        Server._storage_client = MagicMock(StorageClientAsync)\n        Server._storage_client_async = MagicMock(StorageClientAsync)\n        _rv = await async_mock()\n        with patch.object(ServiceRegistry, 'get', return_value=data) as patch_get_unregister:\n            with patch.object(ServiceRegistry, 'unregister') as patch_unregister:\n                with patch.object(AuditLogger, '__init__', return_value=None):\n                    with patch.object(AuditLogger, 'information', return_value=_rv) as audit_info_patch:\n                        resp = await client.delete('/fledge/service/{}'.format(service_id))\n                        assert 200 == resp.status\n                        r = await resp.text()\n                        json_response = json.loads(r)\n                        assert {'id': service_id, 'message': 'Service unregistered'} == json_response\n                    args, kwargs = audit_info_patch.call_args\n                    assert 'SRVUN' == args[0]\n                    assert {'name': sname} == args[1]\n            
args1, kwargs1 = patch_unregister.call_args\n            assert (service_id,) == args1\n        args2, kwargs2 = patch_get_unregister.call_args\n        assert {'idx': service_id} == kwargs2\n\n    ############################\n    # Common\n    ############################\n    async def test_ping(self, client):\n        resp = await client.get('/fledge/service/ping')\n        assert 200 == resp.status\n        r = await resp.text()\n        json_response = json.loads(r)\n        assert 'uptime' in json_response\n        assert 0.0 < json_response[\"uptime\"]\n\n    @pytest.mark.asyncio\n    async def test_shutdown(self, mocker):\n        async def return_async_value(val):\n            return val\n\n        _rv = await return_async_value('stopping...')\n        mocked__stop = mocker.patch.object(Server, \"_stop\")\n        mocked__stop.return_value = _rv\n        mocked_log_info = mocker.patch.object(server._logger, \"info\")\n\n        request = mock_request(data=\"\", loop=asyncio.get_event_loop())\n        resp = await Server.shutdown(request)\n\n        assert 1 == mocked__stop.call_count\n        assert 200 == resp.status\n\n        json_response = json.loads(resp.body.decode())\n\n        assert 1 == mocked_log_info.call_count\n        args, kwargs = mocked_log_info.call_args\n        assert 'Stopping the Fledge Core event loop. Good Bye!' == args[0]\n        assert 'message' in json_response\n        assert 'Fledge stopped successfully. Wait for few seconds for process cleanup.' 
== json_response[\"message\"]\n\n    ######################\n    # Service Login Tests\n    ######################\n\n    @pytest.mark.parametrize(\"data, expected_status, expected_message\", [\n        (None, 400, \"valid JSON\"),\n        (\"not a dict\", 400, \"valid JSON object\"),\n        ({}, 400, \"Username field is required\"),\n        ({\"username\": 123}, 400, \"Username must be a string\"),\n        ({\"username\": \"  \"}, 400, \"cannot be empty or contain only whitespace\"),\n        ({\"username\": \"\"}, 400, \"cannot be empty\")\n    ])\n    async def test_service_login_input_validation(self, client, data, expected_status, expected_message):\n        resp = await client.post('/fledge/service/login', data=json.dumps(data))\n        assert expected_status == resp.status\n        assert expected_message in resp.reason\n        r = await resp.text()\n        json_response = json.loads(r)\n        assert 'message' in json_response\n        assert expected_message in json_response['message']\n\n    @pytest.mark.parametrize(\"headers, expected_status, expected_message\", [\n        ({}, 401, \"Authorization header is missing\"),\n        ({\"Authorization\": \"InvalidFormat token\"}, 401, \"must start with 'Bearer '\"),\n        ({\"Authorization\": \"Bearer \"}, 401, \"Authorization header must start with 'Bearer ' followed by a token\"),\n        ({\"Authorization\": f\"Bearer {'x' * 2049}\"}, 401, \"exceeds maximum length\")\n    ])\n    async def test_service_login_bearer_token_validation(self, client, headers, expected_status, expected_message):\n        data = {\"username\": \"testuser\"}\n        resp = await client.post('/fledge/service/login', data=json.dumps(data), headers=headers)\n        assert expected_status == resp.status\n        assert expected_message in resp.reason\n        r = await resp.text()\n        json_response = json.loads(r)\n        assert 'message' in json_response\n        assert expected_message in 
json_response['message']\n\n    @pytest.mark.parametrize(\"token_error, expected_message\", [\n        (\"Invalid token\", \"Bearer token is invalid\"),\n        (\"Token has expired\", \"Bearer token has expired\"),\n        (\"Signature verification failed\", \"Bearer token signature is invalid\")\n    ])\n    async def test_service_login_jwt_validation_errors(self, client, token_error, expected_message):\n        data = {\"username\": \"testuser\"}\n        headers = {\"Authorization\": \"Bearer invalid.jwt.token\"}\n        with patch.object(Server, 'validate_token', return_value={'error': token_error}) as mock_validate_token:\n            resp = await client.post('/fledge/service/login', data=json.dumps(data), headers=headers)\n            assert 401 == resp.status\n            assert expected_message in resp.reason\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert 'message' in json_response\n            assert expected_message in json_response['message']\n        mock_validate_token.assert_called_once_with('invalid.jwt.token')\n\n    async def test_service_login_missing_sub_claim(self, client):\n        data = {\"username\": \"testuser\"}\n        headers = {\"Authorization\": \"Bearer valid.jwt.token\"}\n        with patch.object(Server, 'validate_token', return_value={}) as mock_validate_token:\n            resp = await client.post('/fledge/service/login', data=json.dumps(data), headers=headers)\n            assert 401 == resp.status\n            assert \"missing service name claim\" in resp.reason\n            r = await resp.text()\n            json_response = json.loads(r)\n            assert 'message' in json_response\n            assert \"missing service name claim\" in json_response['message']\n        mock_validate_token.assert_called_once_with('valid.jwt.token')\n\n    async def test_service_login_unregistered_service(self, client):\n        data = {\"username\": \"testuser\"}\n        headers = 
{\"Authorization\": \"Bearer valid.jwt.token\"}\n        with patch.object(Server, 'validate_token', return_value={'sub': 'unregistered-service'}) as mock_validate_token:\n            with patch.object(ServiceRegistry, 'getBearerToken', return_value=None) as mock_get_bearer_token:\n                resp = await client.post('/fledge/service/login', data=json.dumps(data), headers=headers)\n                assert 404 == resp.status\n                assert \"is not registered\" in resp.reason\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert 'message' in json_response\n                assert \"is not registered\" in json_response['message']\n            mock_get_bearer_token.assert_called_once_with('unregistered-service')\n        mock_validate_token.assert_called_once_with('valid.jwt.token')\n\n    async def test_service_login_mismatched_bearer_token(self, client):\n        bearer_token = \"valid.jwt.token\"\n        data = {\"username\": \"testuser\"}\n        headers = {\"Authorization\": f\"Bearer {bearer_token}\"}\n        with patch.object(Server, 'validate_token', return_value={'sub': 'test-service'}) as mock_validate_token:\n            with patch.object(ServiceRegistry, 'getBearerToken', return_value='different-token') as mock_get_bearer_token:\n                resp = await client.post('/fledge/service/login', data=json.dumps(data), headers=headers)\n                assert 401 == resp.status\n                assert \"does not match registered\" in resp.reason\n                r = await resp.text()\n                json_response = json.loads(r)\n                assert 'message' in json_response\n                assert \"does not match registered\" in json_response['message']\n            mock_get_bearer_token.assert_called_once_with('test-service')\n        mock_validate_token.assert_called_once_with(bearer_token)\n\n    @pytest.mark.parametrize(\"user_data, expected_status, expected_message\", [\n      
  ([], 401, \"User not found or not enabled\"),  # No users\n        ([{'id': 1, 'uname': 'testuser', 'enabled': 'f', 'role_id': 1}], 401, \"User not found or not enabled\"),\n        ([{'id': 1, 'uname': 'testuser', 'enabled': 't', 'role_id': 3}], 403, \"not authorized to access services\"),\n        ([{'id': 1, 'uname': 'testuser', 'enabled': 't', 'role_id': 4}], 403, \"not authorized to access services\")\n    ])\n    async def test_service_login_user_authorization(self, client, user_data, expected_status, expected_message):\n        bearer_token = \"valid.jwt.token\"\n        data = {\"username\": \"testuser\"}\n        headers = {\"Authorization\": f\"Bearer {bearer_token}\"}\n        with patch.object(Server, 'validate_token', return_value={'sub': 'test-service'}) as mock_validate_token:\n            with patch.object(ServiceRegistry, 'getBearerToken', return_value=bearer_token) as mock_get_bearer_token:\n                with patch.object(User.Objects, 'all', return_value=user_data) as mock_user_all:\n                    resp = await client.post('/fledge/service/login', data=json.dumps(data), headers=headers)\n                    assert expected_status == resp.status\n                    assert expected_message in resp.reason\n                    r = await resp.text()\n                    json_response = json.loads(r)\n                    assert 'message' in json_response\n                    assert expected_message in json_response['message']\n                mock_user_all.assert_called_once_with()\n            mock_get_bearer_token.assert_called_once_with('test-service')\n        mock_validate_token.assert_called_once_with(bearer_token)\n\n    async def test_service_login_missing_password(self, client):\n        bearer_token = \"valid.jwt.token\"\n        user_data = [{'id': 1, 'uname': 'testuser', 'enabled': 't', 'role_id': 1, 'access_method': 'any'}]\n        data = {\"username\": \"testuser\"}\n        headers = {\"Authorization\": f\"Bearer 
{bearer_token}\"}\n        with patch.object(Server, 'validate_token', return_value={'sub': 'test-service'}) as mock_validate_token:\n            with patch.object(ServiceRegistry, 'getBearerToken', return_value=bearer_token) as mock_get_bearer_token:\n                with patch.object(User.Objects, 'all', return_value=user_data) as mock_user_all:\n                    with patch.object(User.Objects, 'delete_user_tokens', return_value=None) as mock_delete_tokens:\n                        resp = await client.post('/fledge/service/login', data=json.dumps(data), headers=headers)\n                        assert 401 == resp.status\n                        assert \"password is not configured\" in resp.reason\n                        r = await resp.text()\n                        json_response = json.loads(r)\n                        assert 'message' in json_response\n                        assert \"password is not configured\" in json_response['message']\n                    mock_delete_tokens.assert_called_once_with(1)\n                mock_user_all.assert_called_once_with()\n            mock_get_bearer_token.assert_called_once_with('test-service')\n        mock_validate_token.assert_called_once_with(bearer_token)\n\n    @pytest.mark.parametrize(\"access_method, login_method, login_args\", [\n        ('cert', 'certificate_login', ('testuser', '127.0.0.1')),\n        ('any', 'login', ('testuser', 'hashedpassword', '127.0.0.1')),\n        ('password', 'login', ('testuser', 'hashedpassword', '127.0.0.1'))\n    ])\n    async def test_service_login_authentication_success(self, client, access_method, login_method, login_args):\n        bearer_token = \"valid.jwt.token\"\n        user_data = [{\n            'id': 1, 'uname': 'testuser', 'enabled': 't', 'role_id': 1,\n            'access_method': access_method, 'pwd': 'hashedpassword'\n        }]\n        data = {\"username\": \"testuser\"}\n        headers = {\"Authorization\": f\"Bearer {bearer_token}\"}\n        with 
patch.object(server._logger, 'info') as mock_logger_info:\n            with patch.object(Server, 'validate_token', return_value={'sub': 'test-service'}) as mock_validate_token:\n                with patch.object(ServiceRegistry, 'getBearerToken', return_value=bearer_token) as mock_get_bearer_token:\n                    with patch.object(User.Objects, 'all', return_value=user_data) as mock_user_all:\n                        with patch.object(User.Objects, 'delete_user_tokens', return_value=None) as mock_delete_tokens:\n                            with patch.object(User.Objects, login_method, return_value=(1, 'mock-token', True)) as mock_login:\n                                resp = await client.post('/fledge/service/login', data=json.dumps(data), headers=headers)\n                                assert 200 == resp.status\n                                r = await resp.text()\n                                response_data = json.loads(r)\n                                assert response_data['token'] == 'mock-token'\n                                assert response_data['uid'] == 1\n                                assert response_data['admin'] is True\n                                assert \"logged in successfully\" in response_data['message']\n                            mock_login.assert_called_once_with(*login_args)\n                        mock_delete_tokens.assert_called_once_with(1)\n                    mock_user_all.assert_called_once_with()\n                mock_get_bearer_token.assert_called_once_with('test-service')\n            mock_validate_token.assert_called_once_with(bearer_token)\n        mock_logger_info.assert_called_once()\n        info_call_args = mock_logger_info.call_args[0]\n        expected_method = 'certificate' if access_method == 'cert' else 'password'\n        assert f\"Successful {expected_method} login for user 'testuser' via service 'test-service'\" in info_call_args[0]\n\n    @pytest.mark.parametrize(\"exception_type, expected_status, 
expected_message\", [\n        (User.PasswordNotSetError, 400, \"Password is not set\"),\n        (User.DoesNotExist, 401, \"User authentication failed\"),\n        (Exception(\"Authentication failed\"), 401, \"Authentication failed\"),\n    ])\n    async def test_service_login_authentication_errors(self, client, exception_type, expected_status, expected_message):\n        bearer_token = \"valid.jwt.token\"\n        user_data = [{'id': 1, 'uname': 'testuser', 'enabled': 't', 'role_id': 1, 'access_method': 'any', 'pwd': 'hashedpassword'}]\n        data = {\"username\": \"testuser\"}\n        headers = {\"Authorization\": f\"Bearer {bearer_token}\"}\n        with patch.object(Server, 'validate_token', return_value={'sub': 'test-service'}) as mock_validate_token:\n            with patch.object(ServiceRegistry, 'getBearerToken', return_value=bearer_token) as mock_get_bearer_token:\n                with patch.object(User.Objects, 'all', return_value=user_data) as mock_user_all:\n                    with patch.object(User.Objects, 'delete_user_tokens', return_value=None) as mock_delete_tokens:\n                        with patch.object(User.Objects, 'login', side_effect=exception_type) as mock_login:\n                            resp = await client.post('/fledge/service/login', data=json.dumps(data), headers=headers)\n                            assert expected_status == resp.status\n                            assert expected_message in resp.reason\n                            r = await resp.text()\n                            json_response = json.loads(r)\n                            assert 'message' in json_response\n                            assert expected_message in json_response['message']\n                        mock_login.assert_called_once_with('testuser', 'hashedpassword', '127.0.0.1') \n                    mock_delete_tokens.assert_called_once_with(1)\n                mock_user_all.assert_called_once_with()\n            
mock_get_bearer_token.assert_called_once_with('test-service')\n        mock_validate_token.assert_called_once_with(bearer_token)\n\n    async def test_service_login_token_cleanup_failure_continues(self, client):\n        bearer_token = \"valid.jwt.token\"\n        user_data = [{'id': 1, 'uname': 'testuser', 'enabled': 't', 'role_id': 1, 'access_method': 'any', 'pwd': 'hashedpassword'}]\n        data = {\"username\": \"testuser\"}\n        headers = {\"Authorization\": f\"Bearer {bearer_token}\"}\n        with patch.object(server._logger, 'warning') as mock_logger_warning:\n            with patch.object(server._logger, 'info') as mock_logger_info:\n                with patch.object(Server, 'validate_token', return_value={'sub': 'test-service'}) as mock_validate_token:\n                    with patch.object(ServiceRegistry, 'getBearerToken', return_value=bearer_token) as mock_get_bearer_token:\n                        with patch.object(User.Objects, 'all', return_value=user_data) as mock_user_all:\n                            with patch.object(User.Objects, 'delete_user_tokens', side_effect=Exception(\"Cleanup failed\")) as mock_delete_tokens:\n                                with patch.object(User.Objects, 'login', return_value=(1, 'mock-token', True)) as mock_login:\n                                    resp = await client.post('/fledge/service/login', data=json.dumps(data), headers=headers)\n                                    assert 200 == resp.status\n                                    r = await resp.text()\n                                    response_data = json.loads(r)\n                                    assert response_data['token'] == 'mock-token'\n                                mock_login.assert_called_once_with('testuser', 'hashedpassword', '127.0.0.1')\n                            mock_delete_tokens.assert_called_once_with(1)\n                        mock_user_all.assert_called_once_with()\n                    
mock_get_bearer_token.assert_called_once_with('test-service')\n                mock_validate_token.assert_called_once_with(bearer_token)\n            mock_logger_info.assert_called_once()\n            info_call_args = mock_logger_info.call_args[0]\n            assert \"Successful password login for user 'testuser' via service 'test-service'\" in info_call_args[0]\n        mock_logger_warning.assert_called_once()\n        warning_call_args = mock_logger_warning.call_args[0]\n        assert \"Failed to delete existing tokens for user 'testuser'\" in warning_call_args[0]\n\n    async def test_service_login_no_peername_uses_default_host(self, client):\n        bearer_token = \"valid.jwt.token\"\n        user_data = [{'id': 1, 'uname': 'testuser', 'enabled': 't', 'role_id': 1, 'access_method': 'any', 'pwd': 'hashedpassword'}]\n        data = {\"username\": \"testuser\"}\n        headers = {\"Authorization\": f\"Bearer {bearer_token}\"}\n        with patch.object(server._logger, 'info') as mock_logger_info:\n            with patch.object(Server, 'validate_token', return_value={'sub': 'test-service'}) as mock_validate_token:\n                with patch.object(ServiceRegistry, 'getBearerToken', return_value=bearer_token) as mock_get_bearer_token:\n                    with patch.object(User.Objects, 'all', return_value=user_data) as mock_user_all:\n                        with patch.object(User.Objects, 'delete_user_tokens', return_value=None) as mock_delete_tokens:\n                            with patch.object(User.Objects, 'login', return_value=(1, 'mock-token', True)) as mock_login:\n                                resp = await client.post('/fledge/service/login', data=json.dumps(data), headers=headers)\n                                assert 200 == resp.status\n                                r = await resp.text()\n                                response_data = json.loads(r)\n                                assert response_data['token'] == 'mock-token'\n               
             mock_login.assert_called_once()\n                            args, kwargs = mock_login.call_args\n                            assert len(args) == 3\n                            assert args[0] == 'testuser'\n                            assert args[1] == 'hashedpassword'\n                            # args[2] is the host IP - could be 127.0.0.1 or similar\n                            assert kwargs == {}\n                        mock_delete_tokens.assert_called_once_with(1)\n                    mock_user_all.assert_called_once_with()\n                mock_get_bearer_token.assert_called_once_with('test-service')\n            mock_validate_token.assert_called_once_with(bearer_token)\n        mock_logger_info.assert_called_once()\n\n    async def test_service_login_error_handling_invalid_json(self, client):\n        resp = await client.post('/fledge/service/login', data='{\"invalid\": json}')\n        assert 400 == resp.status\n        assert \"valid JSON\" in resp.reason\n        r = await resp.text()\n        json_response = json.loads(r)\n        assert 'message' in json_response\n        assert \"valid JSON\" in json_response['message']\n\n    async def test_service_login_uses_constant_time_comparison(self, client):\n        bearer_token = \"valid.jwt.token\"\n        user_data = [{'id': 1, 'uname': 'testuser', 'enabled': 't', 'role_id': 1, 'access_method': 'any', 'pwd': 'hashedpassword'}]\n        data = {\"username\": \"testuser\"}\n        headers = {\"Authorization\": f\"Bearer {bearer_token}\"}\n        with patch.object(server._logger, 'info') as mock_logger_info:\n            with patch.object(Server, 'validate_token', return_value={'sub': 'test-service'}):\n                with patch.object(ServiceRegistry, 'getBearerToken', return_value=bearer_token):\n                    with patch('hmac.compare_digest', return_value=True) as mock_compare:\n                        with patch.object(User.Objects, 'all', return_value=user_data):\n                 
           with patch.object(User.Objects, 'delete_user_tokens', return_value=None):\n                                with patch.object(User.Objects, 'login', return_value=(1, 'mock-token', True)):\n                                    resp = await client.post('/fledge/service/login', data=json.dumps(data), headers=headers)\n                                    assert 200 == resp.status\n                                    r = await resp.text()\n                                    response_data = json.loads(r)\n                                    assert response_data['token'] == 'mock-token'\n                    assert mock_compare.call_count >= 2\n                    for call in mock_compare.call_args_list:\n                        args, kwargs = call\n                        assert len(args) == 2\n                        assert isinstance(args[0], str)\n                        assert isinstance(args[1], str)\n                        assert kwargs == {}\n        mock_logger_info.assert_called_once()\n\n    async def test_service_login_trims_whitespace_username(self, client):\n        bearer_token = \"valid.jwt.token\"\n        user_data = [{'id': 1, 'uname': 'testuser', 'enabled': 't', 'role_id': 1, 'access_method': 'any', 'pwd': 'hashedpassword'}]\n        data = {\"username\": \"  testuser  \"}\n        headers = {\"Authorization\": f\"Bearer {bearer_token}\"}\n        with patch.object(server._logger, 'info') as mock_logger_info:\n            with patch.object(Server, 'validate_token', return_value={'sub': 'test-service'}) as mock_validate_token:\n                with patch.object(ServiceRegistry, 'getBearerToken', return_value=bearer_token) as mock_get_bearer_token:\n                    with patch.object(User.Objects, 'all', return_value=user_data) as mock_user_all:\n                        with patch.object(User.Objects, 'delete_user_tokens', return_value=None) as mock_delete_tokens:\n                            with patch.object(User.Objects, 'login', 
return_value=(1, 'mock-token', True)) as mock_login:\n                                resp = await client.post('/fledge/service/login', data=json.dumps(data), headers=headers)\n                                assert 200 == resp.status\n                                r = await resp.text()\n                                response_data = json.loads(r)\n                                assert response_data['token'] == 'mock-token'\n                            mock_login.assert_called_once()\n                            args, kwargs = mock_login.call_args\n                            assert len(args) == 3\n                            assert args[0] == 'testuser'\n                            assert args[1] == 'hashedpassword'\n                            # args[2] is the host IP\n                            assert kwargs == {}\n                        mock_delete_tokens.assert_called_once_with(1)\n                    mock_user_all.assert_called_once_with()\n                mock_get_bearer_token.assert_called_once_with('test-service')\n            mock_validate_token.assert_called_once_with(bearer_token)\n        mock_logger_info.assert_called_once()\n\n    @pytest.mark.asyncio\n    async def test_change(self):\n        pass\n"
  },
  {
    "path": "tests/unit/python/fledge/services/core/test_user_model.py",
    "content": "# -*- coding: utf-8 -*-\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport copy\nimport json\nimport asyncio\nfrom unittest.mock import MagicMock, patch\nimport pytest\nfrom datetime import datetime\n\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.common.storage_client.exceptions import StorageServerError\nfrom fledge.services.core import connect\nfrom fledge.services.core.user_model import User\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\npytestmark = pytest.mark.asyncio\n\n\nasync def mock_coro(*args, **kwargs):\n    return None if len(args) == 0 else args[0]\n\n\nclass TestUserModel:\n\n    async def test_initial_value(self):\n        obj = User(1, 'admin', 'fledge')\n        assert obj.uid == 1\n        assert obj.username == 'admin'\n        assert obj.password == 'fledge'\n        assert obj.is_admin is False\n\n    async def test_value_with_is_admin(self):\n        obj = User(1, 'admin', 'fledge', True)\n        assert obj.uid == 1\n        assert obj.username == 'admin'\n        assert obj.password == 'fledge'\n        assert obj.is_admin is True\n\n    async def test_no_value(self):\n        with pytest.raises(Exception) as excinfo:\n            User()\n        assert excinfo.type is TypeError\n        assert \"__init__() missing 3 required positional arguments: 'uid', 'username', and 'password'\" in str(\n            excinfo.value)\n\n    async def test_get_roles(self):\n        expected = {'rows': [], 'count': 0}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await mock_coro(expected)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with 
patch.object(storage_client_mock, 'query_tbl', return_value=_rv) as query_tbl_patch:\n                actual = await User.Objects.get_roles()\n                assert actual == expected['rows']\n            query_tbl_patch.assert_called_once_with('roles', )\n\n    async def test_get_role_id_by_name(self):\n        expected = {'rows': [{'id': '1'}], 'count': 1}\n        payload = '{\"return\": [\"id\"], \"where\": {\"column\": \"name\", \"condition\": \"=\", \"value\": \"admin\"}}'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await mock_coro(expected)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv) as query_tbl_patch:\n                actual = await User.Objects.get_role_id_by_name(\"admin\")\n                assert actual == expected['rows']\n            query_tbl_patch.assert_called_once_with('roles', payload)\n\n    async def test_get_role_name_by_id(self):\n        expected = {'rows': [{'name': 'user'}], 'count': 1}\n        payload = '{\"return\": [\"name\"], \"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": 2}, \"limit\": 1}'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await mock_coro(expected)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv\n                              ) as query_tbl_patch:\n                actual = await User.Objects.get_role_name_by_id(2)\n                assert actual == expected['rows'][0]['name']\n            query_tbl_patch.assert_called_once_with('roles', payload)\n\n    async def test_get_all(self):\n        expected = {'rows': [], 'count': 0}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await mock_coro(expected)\n        with patch.object(connect, 
'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl', return_value=_rv) as query_tbl_patch:\n                actual = await User.Objects.all()\n                assert actual == expected['rows']\n            query_tbl_patch.assert_called_once_with('users')\n\n    @pytest.mark.parametrize(\"kwargs, payload\", [\n        ({'username': None, 'uid': None}, '{\"return\": [\"id\", \"uname\", \"role_id\", \"access_method\", \"real_name\", \"description\", \"hash_algorithm\", \"block_until\", \"failed_attempts\"], \"where\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\"}}'),\n        ({'username': None, 'uid': 1}, '{\"return\": [\"id\", \"uname\", \"role_id\", \"access_method\", \"real_name\", \"description\", \"hash_algorithm\", \"block_until\", \"failed_attempts\"], \"where\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\", \"and\": {\"column\": \"id\", \"condition\": \"=\", \"value\": 1}}}'),\n        ({'username': 'aj', 'uid': None}, '{\"return\": [\"id\", \"uname\", \"role_id\", \"access_method\", \"real_name\", \"description\", \"hash_algorithm\", \"block_until\", \"failed_attempts\"], \"where\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\", \"and\": {\"column\": \"uname\", \"condition\": \"=\", \"value\": \"aj\"}}}'),\n        ({'username': 'aj', 'uid': 1}, '{\"return\": [\"id\", \"uname\", \"role_id\", \"access_method\", \"real_name\", \"description\", \"hash_algorithm\", \"block_until\", \"failed_attempts\"], \"where\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\", \"and\": {\"column\": \"id\", \"condition\": \"=\", \"value\": 1, \"and\": {\"column\": \"uname\", \"condition\": \"=\", \"value\": \"aj\"}}}}')\n    ])\n    async def test_get_filter(self, kwargs, payload):\n        expected = {'rows': [], 'count': 0}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await mock_coro(expected)\n        
with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv) as query_tbl_patch:\n                actual = await User.Objects.filter(**kwargs)\n                assert actual == expected['rows']\n        query_tbl_patch.assert_called_once_with('users', payload)\n\n    @pytest.mark.parametrize(\"exp_kwargs, error_msg\", [\n        ({'username': None, 'uid': None}, ''),\n        ({'username': None, 'uid': 1}, 'User with id:<1> does not exist'),\n        ({'username': 'aj', 'uid': None}, 'User with name:<aj> does not exist'),\n        ({'username': 'aj', 'uid': 1}, 'User with id:<1> and name:<aj> does not exist')\n    ])\n    async def test_get_exception(self, exp_kwargs, error_msg):\n        _rv = await mock_coro([])\n        with patch.object(User.Objects, 'filter', return_value=_rv) as filter_patch:\n            with pytest.raises(Exception) as excinfo:\n                await User.Objects.get(uid=exp_kwargs['uid'], username=exp_kwargs['username'])\n            assert str(excinfo.value) == error_msg\n            assert excinfo.type is User.DoesNotExist\n            assert issubclass(excinfo.type, Exception)\n        args, kwargs = filter_patch.call_args\n        assert kwargs == exp_kwargs\n\n    async def test_get(self):\n        expected = [{'role_id': '1', 'id': '1', 'uname': 'admin'}]\n        exp_kwargs = {'uid': 1, 'username': 'admin'}\n        _rv = await mock_coro(expected)\n        with patch.object(User.Objects, 'filter', return_value=_rv) as filter_patch:\n            actual = await User.Objects.get(uid=exp_kwargs['uid'], username=exp_kwargs['username'])\n            assert actual == expected[0]\n        args, kwargs = filter_patch.call_args\n        assert kwargs == exp_kwargs\n\n    @pytest.mark.parametrize(\"expected, user_pwd\", [(True, 'fledge'), (False, 'invalid')])\n    async def test_hash_password_check(self, expected, user_pwd):\n  
      password = User.Objects.hash_password(\"fledge\", \"SHA512\")\n        actual = User.Objects.check_password(password, user_pwd, \"SHA512\")\n        assert actual == expected\n\n    async def test_create_user(self):\n        hashed_password = \"dd7171406eaf4baa8bc805857f719bca\"\n        expected = {'rows_affected': 1, \"response\": \"inserted\"}\n        payload = {\"pwd\": \"dd7171406eaf4baa8bc805857f719bca\", \"role_id\": 1, \"uname\": \"aj\", 'access_method': 'any',\n                   'description': '', 'real_name': ''}\n        audit_details = copy.deepcopy(payload)\n        audit_details.pop('pwd', None)\n        audit_details['message'] = \"'{}' username created for '{}' user.\".format(payload['uname'],\n                                                                                 payload['real_name'])\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await mock_coro(expected)\n        _rv2 = await mock_coro(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(User.Objects, 'hash_password', return_value=hashed_password) as hash_pwd_patch:\n                with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv) as insert_tbl_patch:\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=_rv2) as patch_audit:\n                            actual = await User.Objects.create(\"aj\", \"fledge\", 1)\n                            assert actual == expected\n                        patch_audit.assert_called_once_with('USRAD', audit_details)\n                assert 1 == insert_tbl_patch.call_count\n                assert insert_tbl_patch.called is True\n                args, kwargs = insert_tbl_patch.call_args\n                assert 'users' == args[0]\n                p = json.loads(args[1])\n                assert payload == p\n 
           hash_pwd_patch.assert_called_once_with('fledge', 'SHA512')\n\n    async def test_create_user_exception(self):\n        hashed_password = \"dd7171406eaf4baa8bc805857f719bca\"\n        expected = {'message': 'Something went wrong', 'retryable': False, 'entryPoint': 'insert'}\n        payload = {\"pwd\": \"dd7171406eaf4baa8bc805857f719bca\", \"role_id\": 1, \"uname\": \"aj\", 'access_method': 'any',\n                   'description': '', 'real_name': ''}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await mock_coro()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(User.Objects, 'hash_password', return_value=hashed_password) as hash_pwd_patch:\n                with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv, side_effect=StorageServerError(code=400, reason=\"blah\", error=expected)) as insert_tbl_patch:\n                    with pytest.raises(ValueError) as excinfo:\n                        await User.Objects.create(\"aj\", \"fledge\", 1)\n                    assert str(excinfo.value) == expected['message']\n                args, kwargs = insert_tbl_patch.call_args\n                assert 'users' == args[0]\n                p = json.loads(args[1])\n                assert payload == p\n            hash_pwd_patch.assert_called_once_with('fledge', 'SHA512')\n\n    async def test_delete_user(self):\n        p1 = '{\"where\": {\"column\": \"user_id\", \"condition\": \"=\", \"value\": 2}}'\n        p2 = '{\"values\": {\"enabled\": \"f\"}, \"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": 2, \"and\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\"}}}'\n        r1 = {'response': 'deleted', 'rows_affected': 1}\n        r2 = {'response': 'updated', 'rows_affected': 1}\n\n        storage_client_mock = MagicMock(StorageClientAsync)\n        user_id = 2\n        audit_details = {\"user_id\": user_id, 
\"message\": \"User ID: <{}> has been disabled.\".format(user_id)}\n        _rv1 = await mock_coro(r1)\n        _rv2 = await mock_coro(r2)\n        _rv3 = await mock_coro(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'delete_from_tbl', return_value=_rv1) as delete_tbl_patch:\n                with patch.object(storage_client_mock, 'update_tbl', return_value=_rv2) as update_tbl_patch:\n                    with patch.object(AuditLogger, '__init__', return_value=None):\n                        with patch.object(AuditLogger, 'information', return_value=_rv3) as patch_audit:\n                            actual = await User.Objects.delete(user_id)\n                            assert r2 == actual\n                        patch_audit.assert_called_once_with('USRDL', audit_details)\n                update_tbl_patch.assert_called_once_with('users', p2)\n            delete_tbl_patch.assert_called_once_with('user_logins', p1)\n\n    async def test_delete_admin_user(self):\n        with pytest.raises(ValueError) as excinfo:\n            await User.Objects.delete(1)\n        assert str(excinfo.value) == 'Super admin user can not be deleted'\n\n    async def test_delete_user_exception(self):\n        expected = {'message': 'Something went wrong', 'retryable': False, 'entryPoint': 'delete'}\n        payload = '{\"values\": {\"enabled\": \"f\"}, \"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": 2, \"and\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\"}}}'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await mock_coro()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'delete_from_tbl', return_value=_rv) as delete_tbl_patch:\n                with patch.object(storage_client_mock, 'update_tbl', 
side_effect=StorageServerError(code=400, reason=\"blah\",\n                                                                                                    error=expected)) as update_tbl_patch:\n                    with pytest.raises(ValueError) as excinfo:\n                        await User.Objects.delete(2)\n                    assert str(excinfo.value) == expected['message']\n            update_tbl_patch.assert_called_once_with('users', payload)\n\n    async def test_update_user_access_method_fails_without_password(self):\n        user_id = 3\n        access_method = 'pwd'\n        user_info = {'id': user_id, 'uname': 'dianomic', 'role_id': 4, 'access_method': 'cert', 'real_name': 'D System',\n         'description': 'Company', 'hash_algorithm': 'SHA512', 'block_until': '', 'failed_attempts': 0}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv0 = await mock_coro(user_info)\n        _rv1 = await mock_coro({'rows': [{'pwd': ''}], 'count': 1})\n        with patch.object(User.Objects, 'get', return_value=_rv0) as patch_get:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv1\n                                  ) as query_tbl_patch:\n                    with pytest.raises(Exception) as excinfo:\n                        await User.Objects.update(user_id, {'access_method': access_method})\n                    assert excinfo.type is ValueError\n                    assert ('No password has been set for this user. 
Please create one before switching the '\n                            'authentication method to \"Password\".') == str(excinfo.value)\n                args, kwargs = query_tbl_patch.call_args\n                assert 'users' == args[0]\n                p = json.loads(args[1])\n                assert {'return': ['pwd'], 'where': {'and': {'column': 'enabled', 'condition': '=', 'value': 't'},\n                                                     'column': 'id', 'condition': '=', 'value': user_id}} == p\n        patch_get.assert_called_once_with(uid=user_id)\n\n    @pytest.mark.parametrize(\"user_data, payload\", [\n        ({'role_id': 2}, {\"values\": {\"role_id\": 2}, \"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": 2, \"and\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\"}}}),\n        ({'role_id': '2'}, {\"values\": {\"role_id\": \"2\"}, \"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": 2, \"and\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\"}}})\n    ])\n    async def test_update_user_role(self, user_data, payload):\n        expected = {'response': 'updated', 'rows_affected': 1}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        user_id = 2\n        user_info = {'id': user_id, 'uname': 'dianomic', 'role_id': 4, 'access_method': 'cert', 'real_name': 'D System',\n                     'description': ''}\n        audit_details = {'user_id': user_id, 'old_value': {'role_id': 4},\n                         'message': \"'dianomic' user has been changed.\", 'new_value': user_data}\n        _rv0 = await mock_coro(user_info)\n        _rv1 = await mock_coro()\n        _rv2 = await mock_coro(expected)\n        _rv3 = await mock_coro(None)\n        with patch.object(User.Objects, 'get', return_value=_rv0) as patch_get:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(storage_client_mock, 'update_tbl', 
return_value=_rv2) as update_tbl_patch:\n                    with patch.object(User.Objects, 'delete_user_tokens', return_value=_rv1) as delete_token_patch:\n                        with patch.object(AuditLogger, '__init__', return_value=None):\n                            with patch.object(AuditLogger, 'information', return_value=_rv3) as patch_audit:\n                                actual = await User.Objects.update(user_id, user_data)\n                                assert actual is True\n                            patch_audit.assert_called_once_with('USRCH', audit_details)\n                    delete_token_patch.assert_called_once_with(user_id)\n                args, kwargs = update_tbl_patch.call_args\n                assert 'users' == args[0]\n                p = json.loads(args[1])\n                assert payload == p\n        patch_get.assert_called_once_with(uid=user_id)\n\n    @pytest.mark.parametrize(\"user_data, payload\", [\n        ({'password': \"Test@123\"}, {\"values\": {\"pwd\": \"HASHED_PASSWORD\"},\n                                    \"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": 2}})\n    ])\n    async def test_update_user_password(self, user_data, payload):\n        expected = {'response': 'updated', 'rows_affected': 1}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        user_id = 2\n        user_info = {'id': user_id, 'uname': 'dianomic', 'role_id': 4, 'access_method': 'cert', 'real_name': 'D System',\n                     'description': '', 'hash_algorithm': 'SHA512'}\n        audit_details = {'user_id': user_id, 'old_value': {'pwd': '****'},\n                         'new_value': {'pwd': 'Password has been updated.'},\n                         'message': \"'dianomic' user has been changed.\"}\n\n        _rv0 = await mock_coro(user_info)\n        _rv1 = await mock_coro()\n        _rv2 = await mock_coro(expected)\n        _rv3 = await mock_coro(['HASHED_PWD'])\n        _rv4 = await mock_coro(None)\n     
   with patch.object(User.Objects, 'get', return_value=_rv0) as patch_get:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(User.Objects, 'hash_password', return_value='HASHED_PWD') as hash_pwd_patch:\n                    with patch.object(User.Objects, '_get_password_history', return_value=_rv3\n                                      ) as pwd_list_patch:\n                        with patch.object(storage_client_mock, 'update_tbl', return_value=_rv2\n                                          ) as update_tbl_patch:\n                            with patch.object(User.Objects, 'delete_user_tokens', return_value=_rv1\n                                              ) as delete_token_patch:\n                                with patch.object(User.Objects,\n                                                  '_insert_pwd_history_with_oldest_pwd_deletion_if_count_exceeds',\n                                                  return_value=_rv1) as pwd_history_patch:\n                                    with patch.object(AuditLogger, '__init__', return_value=None):\n                                        with patch.object(AuditLogger, 'information', return_value=_rv4\n                                                          ) as patch_audit:\n                                            actual = await User.Objects.update(user_id, user_data)\n                                            assert actual is True\n                                        patch_audit.assert_called_once_with('USRCH', audit_details)\n                                pwd_history_patch.assert_called_once_with(\n                                    storage_client_mock, user_id, 'HASHED_PWD', ['HASHED_PWD'])\n                            delete_token_patch.assert_called_once_with(user_id)\n                        args, kwargs = update_tbl_patch.call_args\n                        assert 'users' == args[0]\n                        # 
FIXME: payload ordering issue after datetime patch\n                        # update_tbl_patch.assert_called_once_with('users', payload)\n                    pwd_list_patch.assert_called_once_with(storage_client_mock, user_id, user_data, 'SHA512')\n                hash_pwd_patch.assert_called_once_with(user_data['password'], 'SHA512')\n        patch_get.assert_called_once_with(uid=user_id)\n\n    async def test_update_user_storage_exception(self):\n        expected = {'message': 'Something went wrong', 'retryable': False, 'entryPoint': 'update'}\n        payload = '{\"values\": {\"role_id\": 2}, \"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": 2, ' \\\n                  '\"and\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\"}}}'\n        user_id = 2\n        user_info = {'id': user_id, 'uname': 'dianomic', 'role_id': 4, 'access_method': 'cert', 'real_name': 'D System',\n                     'description': ''}\n        _rv0 = await mock_coro(user_info)\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(User.Objects, 'get', return_value=_rv0) as patch_get:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(storage_client_mock, 'update_tbl', side_effect=StorageServerError(\n                        code=400, reason=\"blah\", error=expected)) as update_tbl_patch:\n                    with pytest.raises(ValueError) as excinfo:\n                        await User.Objects.update(user_id, {'role_id': 2})\n                    assert str(excinfo.value) == expected['message']\n            update_tbl_patch.assert_called_once_with('users', payload)\n        patch_get.assert_called_once_with(uid=user_id)\n\n    async def test_update_user_exception(self):\n        payload = '{\"values\": {\"role_id\": \"blah\"}, \"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": 2, ' \\\n                  '\"and\": {\"column\": 
\"enabled\", \"condition\": \"=\", \"value\": \"t\"}}}'\n        msg = 'Bad role id'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        user_id = 2\n        user_info = {'id': user_id, 'uname': 'dianomic', 'role_id': 4, 'access_method': 'cert', 'real_name': 'D System',\n                     'description': ''}\n        _rv0 = await mock_coro(user_info)\n        with patch.object(User.Objects, 'get', return_value=_rv0) as patch_get:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(storage_client_mock, 'update_tbl', side_effect=ValueError(msg)) as update_tbl_patch:\n                    with pytest.raises(Exception) as excinfo:\n                        await User.Objects.update(user_id, {'role_id': 'blah'})\n                    assert excinfo.type is ValueError\n                    assert str(excinfo.value) == msg\n                update_tbl_patch.assert_called_once_with('users', payload)\n        patch_get.assert_called_once_with(uid=user_id)\n\n    @pytest.mark.parametrize(\"user_data\", [\n        {'real_name': 'MSD'},\n        {'description': 'Captain Cool'},\n        {'real_name': 'MSD', 'description': 'Captain Cool'},\n        {'access_method': 'pwd'}\n    ])\n    async def test_update_user_other_fields(self, user_data):\n        expected = {'response': 'updated', 'rows_affected': 1}\n        expected_payload = {'where': {'column': 'id', 'condition': '=', 'value': 2,\n                                      'and': {'column': 'enabled', 'condition': '=', 'value': 't'}}}\n        expected_payload.update({'values': user_data})\n        storage_client_mock = MagicMock(StorageClientAsync)\n        user_id = 2\n        user_info = {'id': user_id, 'uname': 'dianomic', 'role_id': 4, 'access_method': 'cert', 'real_name': 'D System',\n                     'description': ''}\n\n        audit_details = {'user_id': user_id, 'new_value': user_data, 'message': \"'dianomic' user 
has been changed.\"}\n        temp = {}\n        for u in user_data.keys():\n            temp[u] = user_info[u]\n        audit_details['old_value'] = temp\n        _rv0 = await mock_coro(user_info)\n        _rv1 = await mock_coro()\n        _rv2 = await mock_coro(expected)\n        _rv3 = await mock_coro(None)\n        _rv4 = await mock_coro({'rows': [{'pwd': '2Xc34'}], 'count': 1})\n        with patch.object(User.Objects, 'get', return_value=_rv0) as patch_get:\n            with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv4\n                                  ) as query_tbl_patch:\n                    with patch.object(storage_client_mock, 'update_tbl', return_value=_rv2) as update_tbl_patch:\n                        with patch.object(User.Objects, 'delete_user_tokens', return_value=_rv1\n                                          ) as delete_token_patch:\n                            with patch.object(AuditLogger, '__init__', return_value=None):\n                                with patch.object(AuditLogger, 'information', return_value=_rv3) as patch_audit:\n                                    actual = await User.Objects.update(user_id, user_data)\n                                    assert actual is True\n                                patch_audit.assert_called_once_with('USRCH', audit_details)\n                        delete_token_patch.assert_not_called()\n                    args, kwargs = update_tbl_patch.call_args\n                    assert 'users' == args[0]\n                    p = json.loads(args[1])\n                    assert expected_payload == p\n                if 'access_method' in user_data and user_data['access_method'] != 'cert':\n                    args, kwargs = query_tbl_patch.call_args\n                    assert 'users' == args[0]\n                    p = json.loads(args[1])\n                    assert 
{'return': ['pwd'], 'where': {'and': {'column': 'enabled', 'condition': '=', 'value': 't'},\n                                                         'column': 'id', 'condition': '=', 'value': user_id}} == p\n                else:\n                    query_tbl_patch.assert_not_called()\n        patch_get.assert_called_once_with(uid=user_id)\n\n    async def test_login_if_no_user_exists(self):\n        async def mock_get_category_item():\n            return {\"value\": \"0\"}\n\n        payload = {\"return\": [\"pwd\", \"id\", \"role_id\", \"access_method\",\n                              {\"column\": \"pwd_last_changed\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\",\n                               \"alias\": \"pwd_last_changed\"}, \"real_name\", \"description\", \"hash_algorithm\",\"block_until\",\"failed_attempts\"],\n                   \"where\": {\"column\": \"uname\", \"condition\": \"=\", \"value\": \"admin\",\n                             \"and\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\"}}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await mock_get_category_item()\n        _rv2 = await mock_coro({'rows': [], 'count': 0})\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(ConfigurationManager, \"get_category_item\", return_value=_rv1\n                              ) as mock_get_cat_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv2\n                                  ) as query_tbl_patch:\n                    with pytest.raises(Exception) as excinfo:\n                        await User.Objects.login('admin', 'blah', '0.0.0.0')\n                    assert str(excinfo.value) == 'User does not exist'\n                    assert excinfo.type is User.DoesNotExist\n                    assert issubclass(excinfo.type, Exception)\n                args, kwargs = query_tbl_patch.call_args\n   
             assert 'users' == args[0]\n                p = json.loads(args[1])\n                assert payload == p\n            mock_get_cat_patch.assert_called_once_with('password', 'expiration')\n\n    async def test_login_if_invalid_password(self):\n        async def mock_get_category_item():\n            return {\"value\": \"0\"}\n\n        pwd_result = {'count': 1, 'rows': [{'role_id': '2', 'pwd': '3759bf3302f5481e8c9cc9472c6088ac', 'id': '2',\n                                            'pwd_last_changed': '2018-03-30 12:32:08.216159',\n                                            'hash_algorithm': 'SHA256', 'block_until': '', 'failed_attempts': 0}]}\n        payload = {\"return\": [\"pwd\", \"id\", \"role_id\", \"access_method\",\n                              {\"column\": \"pwd_last_changed\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\", \"alias\":\n                                  \"pwd_last_changed\"}, \"real_name\", \"description\", \"hash_algorithm\", \"block_until\", \"failed_attempts\"],\n                   \"where\": {\"column\": \"uname\", \"condition\": \"=\", \"value\": \"user\",\n                             \"and\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\"}}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        found_user = pwd_result['rows'][0]\n        _rv1 = await mock_get_category_item()\n        _rv2 = await mock_coro(pwd_result)\n        _rv3 = await mock_coro(None)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(ConfigurationManager, \"get_category_item\", return_value=_rv1\n                              ) as mock_get_cat_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv2\n                                  ) as query_tbl_patch:\n                    with patch.object(User.Objects, 'check_password', return_value=False) as check_pwd_patch:\n                        with 
patch.object(User.Objects, 'update', return_value=_rv3) as update_patch:\n                            with pytest.raises(Exception) as excinfo:\n                                await User.Objects.login('user', 'blah', '0.0.0.0')\n                            assert str(excinfo.value) == 'Username or Password do not match'\n                            assert excinfo.type is User.PasswordDoesNotMatch\n                            assert issubclass(excinfo.type, Exception)\n                        update_patch.assert_called_once_with(found_user['id'], {\"failed_attempts\": found_user['failed_attempts'] + 1})\n                    check_pwd_patch.assert_called_once_with('3759bf3302f5481e8c9cc9472c6088ac', 'blah', algorithm='SHA256')\n                args, kwargs = query_tbl_patch.call_args\n                assert 'users' == args[0]\n                p = json.loads(args[1])\n                assert payload == p\n            mock_get_cat_patch.assert_called_once_with('password', 'expiration')\n\n    async def test_login_with_empty_password(self):\n        async def mock_get_category_item():\n            return {\"value\": \"0\"}\n\n        pwd_result = {'count': 1, 'rows': [{'pwd': '', 'id': 3, 'role_id': 2, 'access_method': 'cert',\n                                            'pwd_last_changed': '', 'real_name': 'AJ', 'description': '',\n                                            'hash_algorithm': 'SHA512', 'block_until': '', 'failed_attempts': 0}]}\n        payload = {\"return\": [\"pwd\", \"id\", \"role_id\", \"access_method\",\n                              {\"column\": \"pwd_last_changed\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\", \"alias\":\n                                  \"pwd_last_changed\"}, \"real_name\", \"description\", \"hash_algorithm\", \"block_until\",\n                              \"failed_attempts\"],\n                   \"where\": {\"column\": \"uname\", \"condition\": \"=\", \"value\": \"user\",\n                             \"and\": {\"column\": 
\"enabled\", \"condition\": \"=\", \"value\": \"t\"}}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await mock_get_category_item()\n        _rv2 = await mock_coro(pwd_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(ConfigurationManager, \"get_category_item\",\n                              return_value=_rv1) as mock_get_cat_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload',\n                                  return_value=_rv2) as query_tbl_patch:\n                    with pytest.raises(Exception) as excinfo:\n                        await User.Objects.login('user', 'blah', '0.0.0.0')\n                    assert str(excinfo.value) == 'Password is not set for this user.'\n                    assert excinfo.type is User.PasswordNotSetError\n                    assert issubclass(excinfo.type, Exception)\n                args, kwargs = query_tbl_patch.call_args\n                assert 'users' == args[0]\n                p = json.loads(args[1])\n                assert payload == p\n            mock_get_cat_patch.assert_called_once_with('password', 'expiration')\n\n    async def test_login_age_pwd_expiration(self):\n        async def mock_get_category_item():\n            return {\"value\": \"30\"}\n\n        pwd_result = {'count': 1, 'rows': [{'role_id': '2', 'pwd': '3759bf3302f5481e8c9cc9472c6088ac', 'id': '2',\n                                            'pwd_last_changed': '2018-01-30 12:32:08.216159'}]}\n        payload = {\"return\": [\"pwd\", \"id\", \"role_id\", \"access_method\",\n                              {\"column\": \"pwd_last_changed\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\",\n                               \"alias\": \"pwd_last_changed\"}, \"real_name\", \"description\", \"hash_algorithm\", \"block_until\", \"failed_attempts\"],\n                   \"where\": {\"column\": \"uname\", \"condition\": 
\"=\", \"value\": \"user\", \"and\":\n                       {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\"}}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await mock_get_category_item()\n        _rv2 = await mock_coro(pwd_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(ConfigurationManager, \"get_category_item\", return_value=_rv1\n                              ) as mock_get_cat_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv2\n                                  ) as query_tbl_patch:\n                    with pytest.raises(Exception) as excinfo:\n                        await User.Objects.login('user', 'fledge', '0.0.0.0')\n                    assert pwd_result['rows'][0]['id'] == str(excinfo.value)\n                    assert excinfo.type is User.PasswordExpired\n                    assert issubclass(excinfo.type, Exception)\n                args, kwargs = query_tbl_patch.call_args\n                assert 'users' == args[0]\n                p = json.loads(args[1])\n                assert payload == p\n            mock_get_cat_patch.assert_called_once_with('password', 'expiration')\n\n    @pytest.mark.parametrize(\"user_data\", [\n        ({'count': 1, 'rows': [{'role_id': '1', 'pwd': '3759bf3302f5481e8c9cc9472c6088ac', 'id': '1', 'is_admin': True,\n                                'pwd_last_changed': '2018-03-30 12:32:08.216159', 'hash_algorithm': 'SHA256', 'block_until': '2018-03-30 12:32:08.216159', 'failed_attempts': 0}]}),\n        ({'count': 1, 'rows': [{'role_id': '2', 'pwd': '3759bf3302f5481e8c9cc9472c6088ac', 'id': '2', 'is_admin': False,\n                                'pwd_last_changed': '2018-03-29 05:05:08.216159', 'hash_algorithm': 'SHA256', 'block_until': '2018-03-30 12:32:08.216159', 'failed_attempts': 0}]})\n    ])\n    async def test_login(self, user_data):\n  
      async def mock_get_category_item():\n            return {\"value\": \"0\"}\n\n        payload = {\"return\": [\"pwd\", \"id\", \"role_id\", \"access_method\",\n                              {\"column\": \"pwd_last_changed\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\",\n                               \"alias\": \"pwd_last_changed\"}, \"real_name\", \"description\", \"hash_algorithm\", \"block_until\", \"failed_attempts\"],\n                   \"where\": {\"column\": \"uname\", \"condition\": \"=\", \"value\": \"user\", \"and\":\n                       {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\"}}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await mock_get_category_item()\n        _rv2 = await mock_coro(user_data)\n        _rv3 = await mock_coro(True)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(ConfigurationManager, \"get_category_item\", return_value=_rv1\n                              ) as mock_get_cat_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv2\n                                  ) as query_tbl_patch:\n                    with patch.object(User.Objects, 'check_password', return_value=True) as check_pwd_patch:\n                        with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv3\n                                          ) as insert_tbl_patch:\n                            uid, jwt_token, is_admin = await User.Objects.login('user', 'fledge', '0.0.0.0')\n                            expected = user_data['rows'][0]\n                            assert uid == expected['id']\n                            assert is_admin == expected['is_admin']\n                            # FIXME: token patch\n                            # assert jwt_token\n\n                        # FIXME: datetime.now() patch and then payload assertion\n                        
args, kwargs = insert_tbl_patch.call_args\n                        assert 'user_logins' == args[0]\n                    check_pwd_patch.assert_called_once_with('3759bf3302f5481e8c9cc9472c6088ac',\n                                                            'fledge', algorithm=\"SHA256\")\n                args1, kwargs1 = query_tbl_patch.call_args\n                assert 'users' == args1[0]\n                p = json.loads(args1[1])\n                assert payload == p\n            mock_get_cat_patch.assert_called_once_with('password', 'expiration')\n\n    async def test_login_exception(self):\n        async def mock_get_category_item():\n            return {\"value\": \"0\"}\n\n        pwd_result = {'count': 1, 'rows': [{'role_id': '1', 'pwd': '3759bf3302f5481e8c9cc9472c6088ac', 'id': '1',\n                                            'pwd_last_changed': '2018-03-30 12:32:08.216159',\n                                            'hash_algorithm': 'SHA256', 'block_until': '', 'failed_attempts': 0}]}\n        expected = {'message': 'Something went wrong', 'retryable': False, 'entryPoint': 'delete'}\n        payload = {\"return\": [\"pwd\", \"id\", \"role_id\", \"access_method\",\n                              {\"column\": \"pwd_last_changed\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\",\n                               \"alias\": \"pwd_last_changed\"}, \"real_name\", \"description\", \"hash_algorithm\",\n                              \"block_until\", \"failed_attempts\"],\n                   \"where\": {\"column\": \"uname\", \"condition\": \"=\", \"value\": \"user\",\n                             \"and\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\"}}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv1 = await mock_get_category_item()\n        _rv2 = await mock_coro(pwd_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(ConfigurationManager, 
\"get_category_item\", return_value=_rv1\n                              ) as mock_get_cat_patch:\n                with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv2\n                                  ) as query_tbl_patch:\n                    with patch.object(User.Objects, 'check_password', return_value=True) as check_pwd_patch:\n                        with patch.object(storage_client_mock, 'insert_into_tbl',\n                                          side_effect=StorageServerError(code=400, reason=\"blah\", error=expected)):\n                            with pytest.raises(Exception) as excinfo:\n                                await User.Objects.login('user', 'fledge', '0.0.0.0')\n                            assert excinfo.type is ValueError\n                            assert str(excinfo.value) == expected['message']\n                    check_pwd_patch.assert_called_once_with('3759bf3302f5481e8c9cc9472c6088ac', 'fledge',\n                                                            algorithm='SHA256')\n                args, kwargs = query_tbl_patch.call_args\n                assert 'users' == args[0]\n                p = json.loads(args[1])\n                assert payload == p\n            mock_get_cat_patch.assert_called_once_with('password', 'expiration')\n\n    async def test_delete_user_tokens(self):\n        expected = {'response': 'deleted', 'rows_affected': 1}\n        payload = '{\"where\": {\"column\": \"user_id\", \"condition\": \"=\", \"value\": 2}}'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await mock_coro(expected)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'delete_from_tbl', return_value=_rv) as delete_tbl_patch:\n                actual = await User.Objects.delete_user_tokens(2)\n                assert actual == expected\n            
delete_tbl_patch.assert_called_once_with('user_logins', payload)\n\n    async def test_delete_user_tokens_exception(self):\n        expected = {'message': 'Something went wrong', 'retryable': False, 'entryPoint': 'delete'}\n        payload = '{\"where\": {\"column\": \"user_id\", \"condition\": \"=\", \"value\": 2}}'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'delete_from_tbl', side_effect=StorageServerError(code=400, reason=\"blah\", error=expected)) as delete_tbl_patch:\n                with pytest.raises(ValueError) as excinfo:\n                    await User.Objects.delete_user_tokens(2)\n                assert str(excinfo.value) == expected['message']\n        delete_tbl_patch.assert_called_once_with('user_logins', payload)\n\n    async def test_refresh_token_expiry(self):\n        expected = {'rows_affected': 1, \"response\": \"updated\"}\n        token = \"RDSlaEtgXuxbYHlDgJURbEeBua2ccwvHeB7MVDeIHq4\"\n        payload = {\"values\": {\"token_expiration\": \"2018-03-13 15:33:25.959408\"}, \"where\": {\"column\": \"token\", \"condition\": \"=\", \"value\": \"RDSlaEtgXuxbYHlDgJURbEeBua2ccwvHeB7MVDeIHq4\"}}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await mock_coro(expected)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'update_tbl', return_value=_rv) as update_tbl_patch:\n                await User.Objects.refresh_token_expiry(token)\n            # FIXME: datetime.now() patch and then payload assertion\n            args, kwargs = update_tbl_patch.call_args\n            assert 'user_logins' == args[0]\n\n    async def test_invalid_token(self):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        payload = {\"return\": [{\"column\": \"token_expiration\", 
\"format\": \"YYYY-MM-DD HH24:MI:SS.MS\", \"alias\": \"token_expiration\"}], \"where\": {\"column\": \"token\", \"condition\": \"=\", \"value\": \"blah\"}}\n        _rv = await mock_coro({'rows': [], 'count': 0})\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv) as query_tbl_patch:\n                with pytest.raises(Exception) as excinfo:\n                    await User.Objects.validate_token('blah')\n                assert str(excinfo.value) == 'Token appears to be invalid'\n                assert excinfo.type is User.InvalidToken\n                assert issubclass(excinfo.type, Exception)\n            args, kwargs = query_tbl_patch.call_args\n            assert 'user_logins' == args[0]\n            p = json.loads(args[1])\n            assert payload == p\n\n    async def test_validate_token(self):\n        token = (\"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJ1aWQiOjIsImV4cCI6MTcxNzQxNzAwMH0.\"\n                 \"J9y-y_ssMTQJm5vzZiBIj8OjcoreIPRDUskl3_X0HRibX5ck5f_J8Ii-_WXngeIFdOdEWGz6KG5mB6QQiPQYcg\")\n        valid_token_result = {'rows': [{\"token_expiration\": \"2017-03-14 15:09:19.800648\"}], 'count': 1}\n        storage_client_mock = MagicMock(StorageClientAsync)\n        payload = {\"return\": [{\"column\": \"token_expiration\", \"format\": \"YYYY-MM-DD HH24:MI:SS.MS\", \"alias\":\n            \"token_expiration\"}], \"where\": {\"column\": \"token\", \"condition\": \"=\", \"value\": token}}\n        _rv = await mock_coro(valid_token_result)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv\n                              ) as query_tbl_patch:\n                # FIXME: jwt.decode patch\n                uid = await User.Objects.validate_token(token)\n                assert 2 == uid\n        
    # FIXME: datetime.now() patch\n            args, kwargs = query_tbl_patch.call_args\n            assert 'user_logins' == args[0]\n            p = json.loads(args[1])\n            assert payload == p\n\n    @pytest.mark.skip(reason=\"Need to patch jwt token and datetime\")\n    async def test_token_expiration(self):\n        pass\n\n    async def test_delete_token(self):\n        expected = {'response': 'deleted', 'rows_affected': 1}\n        payload = '{\"where\": {\"column\": \"token\", \"condition\": \"=\", \"value\": \"eyz\"}}'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await mock_coro(expected)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'delete_from_tbl', return_value=_rv) as delete_tbl_patch:\n                actual = await User.Objects.delete_token(\"eyz\")\n                assert actual == expected\n            delete_tbl_patch.assert_called_once_with('user_logins', payload)\n\n    async def test_delete_token_exception(self):\n        expected = {'message': 'Something went wrong', 'retryable': False, 'entryPoint': 'delete'}\n        payload = '{\"where\": {\"column\": \"token\", \"condition\": \"=\", \"value\": \"eyx\"}}'\n        storage_client_mock = MagicMock(StorageClientAsync)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'delete_from_tbl', side_effect=StorageServerError(code=400, reason=\"blah\", error=expected)) as delete_tbl_patch:\n                with pytest.raises(ValueError) as excinfo:\n                    await User.Objects.delete_token(\"eyx\")\n                assert str(excinfo.value) == expected['message']\n        delete_tbl_patch.assert_called_once_with('user_logins', payload)\n\n    async def test_delete_all_user_tokens(self):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = 
await mock_coro()\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'delete_from_tbl', return_value=_rv) as delete_tbl_patch:\n                await User.Objects.delete_all_user_tokens()\n        delete_tbl_patch.assert_called_once_with('user_logins')\n\n    async def test_no_user_exists(self):\n        expected_payload = ('{\"return\": [\"uname\", \"pwd\", \"hash_algorithm\"], '\n                            '\"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": 2, '\n                            '\"and\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\"}}}')\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await mock_coro({'rows': []})\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv\n                              ) as query_tbl_patch:\n                result = await User.Objects.is_user_exists(2, 'blah')\n                assert result is None\n            query_tbl_patch.assert_called_once_with('users', expected_payload)\n\n    @pytest.mark.parametrize(\"ret_val_check_pwd, expected\", [(True, 1), (False, None)])\n    async def test_user_exists(self, ret_val_check_pwd, expected):\n        expected_payload = ('{\"return\": [\"uname\", \"pwd\", \"hash_algorithm\"], '\n                            '\"where\": {\"column\": \"id\", \"condition\": \"=\", \"value\": 1, \"and\": {\"column\": \"enabled\", '\n                            '\"condition\": \"=\", \"value\": \"t\"}}}')\n        storage_client_mock = MagicMock(StorageClientAsync)\n        ret_val = {'rows': [{'id': 1, 'pwd': 'HASHED_PWD', \"hash_algorithm\": \"SHA512\"}]}\n        _rv = await mock_coro(ret_val)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with 
patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv\n                              ) as query_tbl_patch:\n                with patch.object(User.Objects, 'check_password', return_value=ret_val_check_pwd\n                                  ) as check_pwd_patch:\n                    actual = await User.Objects.is_user_exists(1, 'admin')\n                    assert expected == actual\n                check_pwd_patch.assert_called_once_with(ret_val['rows'][0]['pwd'],\n                                                        'admin', ret_val['rows'][0]['hash_algorithm'])\n            query_tbl_patch.assert_called_once_with('users', expected_payload)\n\n    async def test__get_password_history(self):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        user_data = {'password': 'HASHED_PWD'}\n        ret_val = {'rows': [{'id': 1, 'user_id': 2, 'pwd': 'HASHED_PWD', \"hash_algorithm\": \"SHA512\"}]}\n        row = ret_val['rows'][0]\n        _rv = await mock_coro(ret_val)\n        with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv) as query_tbl_patch:\n            with patch.object(User.Objects, 'check_password', return_value=False) as check_pwd_patch:\n                result = await User.Objects._get_password_history(storage_client_mock, row['user_id'], user_data,\n                                                                  row['hash_algorithm'])\n                assert [user_data['password']] == result\n            check_pwd_patch.assert_called_once_with(row['pwd'], user_data['password'], row['hash_algorithm'])\n        query_tbl_patch.assert_called_once_with('user_pwd_history',\n                                                '{\"where\": {\"column\": \"user_id\", \"condition\": \"=\", \"value\": 2}}')\n\n    async def test__get_password_history_exception(self):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        user_data = {'password': 'HASHED_PWD'}\n        ret_val = 
{'rows': [{'id': 1, 'user_id': 2, 'pwd': 'HASHED_PWD', \"hash_algorithm\": \"SHA256\"}]}\n        row = ret_val['rows'][0]\n        _rv = await mock_coro(ret_val)\n        with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv) as query_tbl_patch:\n            with patch.object(User.Objects, 'check_password', return_value=True) as check_pwd_patch:\n                with pytest.raises(Exception) as excinfo:\n                    await User.Objects._get_password_history(storage_client_mock, row['user_id'], user_data,\n                                                             row['hash_algorithm'])\n                # assert str(excinfo.value) == ''\n                assert excinfo.type is User.PasswordAlreadyUsed\n                assert issubclass(excinfo.type, Exception)\n            check_pwd_patch.assert_called_once_with(row['pwd'], user_data['password'], row['hash_algorithm'])\n        query_tbl_patch.assert_called_once_with('user_pwd_history',\n                                                '{\"where\": {\"column\": \"user_id\", \"condition\": \"=\", \"value\": 2}}')\n\n    @pytest.mark.parametrize(\"hashed_pwd, pwd_history_list, payload\", [\n        ('HASHED_PWD_1', ['HASHED_PWD_1'], {\"pwd\": \"HASHED_PWD_1\", \"user_id\": 2}),\n        ('HASHED_PWD_2', ['HASHED_PWD_2', 'HASHED_PWD_1'], {\"pwd\": \"HASHED_PWD_2\", \"user_id\": 2})\n    ])\n    async def test__insert_pwd_history(self, hashed_pwd, pwd_history_list, payload):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        _rv = await mock_coro()\n        with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv) as insert_tbl_patch:\n            await User.Objects._insert_pwd_history_with_oldest_pwd_deletion_if_count_exceeds(storage_client_mock, 2, hashed_pwd, pwd_history_list)\n        args, kwargs = insert_tbl_patch.call_args\n        assert 'user_pwd_history' == args[0]\n        p = json.loads(args[1])\n        assert payload == p\n\n    
@pytest.mark.parametrize(\"hashed_pwd, pwd_history_list\", [\n        ('HASHED_PWD_4', ['HASHED_PWD_3', 'HASHED_PWD_2', 'HASHED_PWD_1'])\n    ])\n    async def test__insert_pwd_history_and_delete_oldest_pwd_if_count_exceeds(self, hashed_pwd, pwd_history_list):\n        storage_client_mock = MagicMock(StorageClientAsync)\n        payload = {\"where\": {\"column\": \"user_id\", \"condition\": \"=\", \"value\": 2, \"and\": {\"column\": \"pwd\", \"condition\": \"=\", \"value\": \"HASHED_PWD_1\"}}}\n        _rv = await mock_coro()\n        with patch.object(storage_client_mock, 'delete_from_tbl', return_value=_rv) as delete_tbl_patch:\n            with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv) as insert_tbl_patch:\n                await User.Objects._insert_pwd_history_with_oldest_pwd_deletion_if_count_exceeds(storage_client_mock, 2, hashed_pwd, pwd_history_list)\n            args, kwargs = insert_tbl_patch.call_args\n            assert 'user_pwd_history' == args[0]\n            p = json.loads(args[1])\n            assert {\"pwd\": \"HASHED_PWD_4\", \"user_id\": 2} == p\n        args1, kwargs1 = delete_tbl_patch.call_args\n        assert 'user_pwd_history' == args1[0]\n        p = json.loads(args1[1])\n        assert payload == p\n\n    @pytest.mark.parametrize(\"user_data\", [\n        ({'count': 1, 'rows': [{'role_id': '1', 'pwd': '3759bf3302f5481e8c9cc9472c6088ac', 'id': '1', 'is_admin': True, 'pwd_last_changed': '2018-03-30 12:32:08.216159'}]}),\n        ({'count': 1, 'rows': [{'role_id': '2', 'pwd': '3759bf3302f5481e8c9cc9472c6088ac', 'id': '2', 'is_admin': False, 'pwd_last_changed': '2018-03-29 05:05:08.216159'}]})\n    ])\n    async def test_certificate_login(self, user_data):\n        payload = {\"return\": [\"id\", \"role_id\"], \"where\": {\"column\": \"uname\", \"condition\": \"=\", \"value\": \"user\", \"and\": {\"column\": \"enabled\", \"condition\": \"=\", \"value\": \"t\"}}}\n        storage_client_mock = 
MagicMock(StorageClientAsync)\n        _rv1 = await mock_coro(user_data)\n        _rv2 = await mock_coro(True)\n        with patch.object(connect, 'get_storage_async', return_value=storage_client_mock):\n            with patch.object(storage_client_mock, 'query_tbl_with_payload', return_value=_rv1) as query_tbl_patch:\n                with patch.object(storage_client_mock, 'insert_into_tbl', return_value=_rv2) as insert_tbl_patch:\n                    uid, jwt_token, is_admin = await User.Objects.certificate_login('user', '0.0.0.0')\n                    expected = user_data['rows'][0]\n                    assert uid == expected['id']\n                    assert is_admin == expected['is_admin']\n                    # FIXME: token patch\n                    # assert jwt_token\n\n                # FIXME: datetime.now() patch and then payload assertion\n                args, kwargs = insert_tbl_patch.call_args\n                assert 'user_logins' == args[0]\n            args1, kwargs1 = query_tbl_patch.call_args\n            assert 'users' == args1[0]\n            p = json.loads(args1[1])\n            assert payload == p\n"
  },
  {
    "path": "tests/unit/python/fledge/tasks/purge/test_purge.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\nimport json\nimport pytest\nimport asyncio\nfrom unittest.mock import patch, call, MagicMock\nfrom fledge.common.audit_logger import AuditLogger\nfrom fledge.common.configuration_manager import ConfigurationManager\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.process import FledgeProcess\nfrom fledge.common.storage_client.exceptions import *\nfrom fledge.common.storage_client.storage_client import StorageClientAsync, ReadingsStorageClientAsync\nfrom fledge.common.statistics import Statistics\nfrom fledge.tasks.purge.purge import Purge\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\nasync def q_result(*args):\n    table = args[0]\n    if table == 'streams':\n        rows = {\"rows\": [{\"min_last_object\": 0}], \"count\": 1}\n        if len(args) == 2 and args[1] == 'any':\n            rows = {\"rows\": [{\"max_last_object\": 0}], \"count\": 1}\n        return rows\n\n\nasync def mock_value(val):\n    return val\n\n\nclass TestPurge:\n    \"\"\"Test the units of purge.py\"\"\"\n\n    def test_init(self):\n        \"\"\"Test that creating an instance of Purge calls init of FledgeProcess and creates loggers\"\"\"\n        mock_storage_client_async = MagicMock(spec=StorageClientAsync)\n        mock_audit_logger = AuditLogger(mock_storage_client_async)\n        with patch.object(FledgeProcess, \"__init__\") as mock_process:\n            with patch.object(FLCoreLogger, \"get_logger\") as log:\n                with patch.object(mock_audit_logger, \"__init__\", return_value=None):\n                    p = Purge()\n                    assert isinstance(p, Purge)\n                    assert isinstance(p._audit, AuditLogger)\n            log.assert_called_once_with(\"Data Purge\")\n        mock_process.assert_called_once_with()\n\n    
async def test_write_statistics(self):\n        \"\"\"Test that write_statistics calls update statistics with defined keys and value increments\"\"\"\n\n        mock_storage_client_async = MagicMock(spec=StorageClientAsync)\n        mock_audit_logger = AuditLogger(mock_storage_client_async)\n        _rv = await mock_value(\"\")\n        with patch.object(FledgeProcess, '__init__'):\n            with patch.object(Statistics, '_load_keys', return_value=_rv):\n                with patch.object(Statistics, 'update', return_value=_rv) as mock_stats_update:\n                    with patch.object(mock_audit_logger, \"__init__\", return_value=None):\n                        p = Purge()\n                        p._storage_async = mock_storage_client_async\n                        await p.write_statistics(1, 2)\n                mock_stats_update.assert_has_calls([call('PURGED', 1), call('UNSNPURGED', 2)])\n\n    async def test_set_configuration(self):\n        \"\"\"Test that purge's set_configuration returns configuration item with key 'PURGE_READ' \"\"\"\n        mock_storage_client_async = MagicMock(spec=StorageClientAsync)\n        mock_audit_logger = AuditLogger(mock_storage_client_async)\n        _rv = await mock_value(\"\")\n        with patch.object(FledgeProcess, '__init__'):\n            with patch.object(mock_audit_logger, \"__init__\", return_value=None):\n                p = Purge()\n                p._storage = MagicMock(spec=StorageClientAsync)\n                mock_cm = ConfigurationManager(p._storage)\n                with patch.object(mock_cm, 'create_category', return_value=_rv) as mock_create_cat:\n                    with patch.object(mock_cm, 'create_child_category', return_value=_rv) as mock_create_child_cat:\n                        with patch.object(mock_cm, 'get_category_all_items', return_value=_rv) as mock_get_cat:\n                            await p.set_configuration()\n                        mock_get_cat.assert_called_once_with('PURGE_READ')\n 
                   mock_create_child_cat.assert_called_once_with('Utilities', ['PURGE_READ'])\n                args, _ = mock_create_cat.call_args\n                assert 4 == len(args)\n                assert 5 == len(args[1].keys())\n                assert 'PURGE_READ' == args[0]\n                assert 'Purge the readings, log, statistics history table' == args[2]\n                assert args[3] is True\n\n    async def store_purge(self, **kwargs):\n        if kwargs.get('age') == '-1' or kwargs.get('size') == '-1':\n            raise StorageServerError(400, \"Bla\", \"Some Error\")\n        return {\"readings\": 10, \"removed\": 1, \"unsentPurged\": 2, \"unsentRetained\": 7, \"duration\": 100, \"method\": \"mock\"}\n\n    config = {\n        \"purgeAgeSize\": {\"retainUnsent\": {\"value\": \"purge unsent\"}, \"age\": {\"value\": \"72\"}, \"size\": {\"value\": \"20\"}},\n        \"purgeAge\": {\"retainUnsent\": {\"value\": \"purge unsent\"}, \"age\": {\"value\": \"72\"}, \"size\": {\"value\": \"0\"}},\n        \"purgeSize\": {\"retainUnsent\": {\"value\": \"purge unsent\"}, \"age\": {\"value\": \"0\"}, \"size\": {\"value\": \"100\"}},\n        \"retainAgeSize\": {\"retainUnsent\": {\"value\": \"retain unsent to all destinations\"}, \"age\": {\"value\": \"72\"},\n                          \"size\": {\"value\": \"20\"}},\n        \"retainAge\": {\"retainUnsent\": {\"value\": \"retain unsent to all destinations\"}, \"age\": {\"value\": \"72\"},\n                      \"size\": {\"value\": \"0\"}},\n        \"retainSize\": {\"retainUnsent\": {\"value\": \"retain unsent to all destinations\"}, \"age\": {\"value\": \"0\"},\n                       \"size\": {\"value\": \"100\"}},\n        \"retainSizeAny\": {\"retainUnsent\": {\"value\": \"retain unsent to any destination\"}, \"age\": {\"value\": \"0\"},\n                          \"size\": {\"value\": \"100\"}}\n        }\n\n    @pytest.mark.parametrize(\"conf, expected_return, expected_calls\", [\n        
(config[\"purgeAge\"], (1, 2), {'sent_id': 0, 'age': '72', 'flag': 'purge'}),\n        (config[\"purgeSize\"], (1, 2), {'sent_id': 0, 'size': '100', 'flag': 'purge'}),\n        (config[\"retainAge\"], (1, 2), {'sent_id': 0, 'age': '72', 'flag': 'retainall'}),\n        (config[\"retainSize\"], (1, 2), {'sent_id': 0, 'size': '100', 'flag': 'retainall'}),\n        (config[\"retainSizeAny\"], (1, 2), {'sent_id': 0, 'size': '100', 'flag': 'retainany'})\n    ])\n    async def test_purge_data(self, conf, expected_return, expected_calls):\n        \"\"\"Test that purge_data calls Storage's purge with defined configuration\"\"\"\n        mock_storage_client_async = MagicMock(spec=StorageClientAsync)\n        mock_audit_logger = AuditLogger(mock_storage_client_async)\n        payload = {\"aggregate\": {\"operation\": \"min\", \"column\": \"last_object\"}}\n        if expected_calls[\"flag\"] == \"retainany\":\n            payload = {\"aggregate\": {\"operation\": \"max\", \"column\": \"last_object\"}}\n            _rv1 = await q_result('streams', 'any')\n        else:\n            _rv1 = await q_result('streams')\n        _rv2 = await mock_value(\"\")\n        _rv3 = await self.store_purge()\n        with patch.object(FledgeProcess, '__init__'):\n            with patch.object(mock_audit_logger, \"__init__\", return_value=None):\n                p = Purge()\n                p._logger = FLCoreLogger\n                p._logger.info = MagicMock()\n                p._logger.error = MagicMock()\n                p._logger.debug = MagicMock()\n                p._storage_async = MagicMock(spec=StorageClientAsync)\n                p._readings_storage_async = MagicMock(spec=ReadingsStorageClientAsync)\n                audit = p._audit\n                with patch.object(p._storage_async, \"query_tbl_with_payload\", return_value=_rv1) as patch_storage:\n                    with patch.object(p._readings_storage_async, 'purge',\n                                      return_value=_rv3) as 
mock_storage_purge:\n                        with patch.object(audit, 'information', return_value=_rv2) as audit_info:\n                            # Test the positive case when all if conditions in purge_data pass\n                            assert expected_return == await p.purge_data(conf)\n                        assert audit_info.called\n                    _, kwargs = mock_storage_purge.call_args\n                    assert kwargs == expected_calls\n                assert patch_storage.called\n                assert 2 == patch_storage.call_count\n                args, _ = patch_storage.call_args\n                assert 'streams' == args[0]\n                assert payload == json.loads(args[1])\n\n    @pytest.mark.parametrize(\"conf, expected_return, expected_calls\", [\n        (config[\"purgeAgeSize\"], (2, 4), [{'sent_id': 0, 'size': '20', 'flag': 'purge'},\n                                          {'sent_id': 0, 'age': '72', 'flag': 'purge'}]),\n        (config[\"retainAgeSize\"], (2, 4), [{'sent_id': 0, 'size': '20', 'flag': 'retainall'},\n                                           {'sent_id': 0, 'age': '72', 'flag': 'retainall'}])\n    ])\n    async def test_data_with_age_and_size(self, conf, expected_return, expected_calls):\n        mock_storage_client_async = MagicMock(spec=StorageClientAsync)\n        mock_audit_logger = AuditLogger(mock_storage_client_async)\n        payload = {\"aggregate\": {\"operation\": \"min\", \"column\": \"last_object\"}}\n        _rv1 = await q_result('streams')\n        _rv2 = await mock_value(\"\")\n        _rv3 = await self.store_purge()\n        with patch.object(FledgeProcess, '__init__'):\n            with patch.object(mock_audit_logger, \"__init__\", return_value=None):\n                p = Purge()\n                p._logger = FLCoreLogger\n                p._logger.info = MagicMock()\n                p._logger.error = MagicMock()\n                p._logger.debug = MagicMock()\n                p._storage_async = 
MagicMock(spec=StorageClientAsync)\n                p._readings_storage_async = MagicMock(spec=ReadingsStorageClientAsync)\n                audit = p._audit\n                with patch.object(p._storage_async, \"query_tbl_with_payload\", return_value=_rv1\n                                  ) as patch_storage:\n                    with patch.object(p._readings_storage_async, 'purge',\n                                      return_value=_rv3) as mock_storage_purge:\n                        with patch.object(audit, 'information', return_value=_rv2) as audit_info:\n                            assert expected_return == await p.purge_data(conf)\n                        assert audit_info.called\n                    assert 2 == mock_storage_purge.call_count\n                    args, kwargs = mock_storage_purge.call_args_list[0]\n                    assert expected_calls[0] == kwargs\n                    args, kwargs = mock_storage_purge.call_args_list[1]\n                    assert expected_calls[1] == kwargs\n                assert patch_storage.called\n                assert 2 == patch_storage.call_count\n                args, _ = patch_storage.call_args\n                assert 'streams' == args[0]\n                assert payload == json.loads(args[1])\n\n    @pytest.mark.parametrize(\"conf, expected_return\", [\n        ({\"retainUnsent\": {\"value\": \"purge unsent\"}, \"age\": {\"value\": \"0\"}, \"size\": {\"value\": \"0\"}}, (0, 0)),\n        ({\"retainUnsent\": {\"value\": \"retain unsent to all destinations\"}, \"age\": {\"value\": \"0\"},\n          \"size\": {\"value\": \"0\"}}, (0, 0))\n    ])\n    async def test_purge_data_no_data_purged(self, conf, expected_return):\n        \"\"\"Test that purge_data logs message when no data was purged\"\"\"\n        mock_storage_client_async = MagicMock(spec=StorageClientAsync)\n        mock_audit_logger = AuditLogger(mock_storage_client_async)\n        _rv1 = await q_result('streams')\n        _rv2 = await 
mock_value(\"\")\n        _rv3 = await self.store_purge()\n        with patch.object(FledgeProcess, '__init__'):\n            with patch.object(mock_audit_logger, \"__init__\", return_value=None):\n                p = Purge()\n                p._logger = FLCoreLogger\n                p._logger.info = MagicMock()\n                p._logger.error = MagicMock()\n                p._logger.debug = MagicMock()\n                p._storage_async = MagicMock(spec=StorageClientAsync)\n                p._readings_storage_async = MagicMock(spec=ReadingsStorageClientAsync)\n                audit = p._audit\n                with patch.object(p._storage_async, \"query_tbl_with_payload\",\n                                  return_value=_rv1) as patch_storage:\n                    with patch.object(p._readings_storage_async, 'purge', return_value=_rv3) as patch_purge:\n                        with patch.object(audit, 'information', return_value=_rv2):\n                            assert expected_return == await p.purge_data(conf)\n                            p._logger.info.assert_called_once_with(\"No rows purged\")\n                    patch_purge.assert_not_called()\n                assert patch_storage.called\n                assert 2 == patch_storage.call_count\n\n    @pytest.mark.parametrize(\"conf, expected_return\", [\n        ({\"retainUnsent\": {\"value\": \"retain unsent to all destinations\"}, \"age\": {\"value\": \"-1\"},\n          \"size\": {\"value\": \"-1\"}}, (2, 4))\n    ])\n    async def test_purge_error_storage_response(self, conf, expected_return):\n        \"\"\"Test that purge_data logs error when storage purge returns an error response\"\"\"\n        mock_storage_client_async = MagicMock(spec=StorageClientAsync)\n        mock_audit_logger = AuditLogger(mock_storage_client_async)\n        _rv1 = await q_result('streams')\n        _rv2 = await mock_value(\"\")\n        _rv3 = await self.store_purge()\n        with patch.object(FledgeProcess, '__init__'):\n     
       with patch.object(mock_audit_logger, \"__init__\", return_value=None):\n                p = Purge()\n                p._logger = FLCoreLogger\n                p._logger.info = MagicMock()\n                p._logger.error = MagicMock()\n                p._logger.debug = MagicMock()\n                p._storage_async = MagicMock(spec=StorageClientAsync)\n                p._readings_storage_async = MagicMock(spec=ReadingsStorageClientAsync)\n                audit = p._audit\n                with patch.object(p._storage_async, \"query_tbl_with_payload\",\n                                  return_value=_rv1) as patch_storage:\n                    with patch.object(p._readings_storage_async, 'purge', return_value=_rv3) as patch_purge:\n                        with patch.object(audit, 'information', return_value=_rv2):\n                            assert expected_return == await p.purge_data(conf)\n                    assert 2 == patch_purge.call_count\n                assert patch_storage.called\n                assert 2 == patch_storage.call_count\n\n    @pytest.mark.parametrize(\"conf, expected_error_key\", [\n        ({\"retainUnsent\": {\"value\": \"retain unsent to all destinations\"}, \"age\": {\"value\": \"bla\"},\n          \"size\": {\"value\": \"0\"}}, \"age\"),\n        ({\"retainUnsent\": {\"value\": \"retain unsent to all destinations\"}, \"age\": {\"value\": \"0\"},\n          \"size\": {\"value\": \"bla\"}}, \"size\")])\n    async def test_purge_data_invalid_conf(self, conf, expected_error_key):\n        \"\"\"Test that purge_data raises exception when called with invalid configuration\"\"\"\n        mock_storage_client_async = MagicMock(spec=StorageClientAsync)\n        mock_audit_logger = AuditLogger(mock_storage_client_async)\n        expected_error_message = 'purge_data - Configuration item {} bla should be integer!'.format(expected_error_key)\n        _rv1 = await q_result('streams')\n        _rv2 = await mock_value(\"\")\n        with 
patch.object(FledgeProcess, '__init__'):\n            with patch.object(mock_audit_logger, \"__init__\", return_value=None):\n                p = Purge()\n                p._logger = FLCoreLogger\n                p._logger.info = MagicMock()\n                p._logger.error = MagicMock()\n                p._storage_async = MagicMock(spec=StorageClientAsync)\n                p._readings_storage_async = MagicMock(spec=ReadingsStorageClientAsync)\n                audit = p._audit\n                with patch.object(p._storage_async, \"query_tbl_with_payload\", return_value=_rv1) as patch_storage:\n                    with patch.object(p._readings_storage_async, 'purge', side_effect=self.store_purge):\n                        with patch.object(audit, 'information', return_value=_rv2):\n                            # Test the code block when purge failed because of invalid configuration\n                            await p.purge_data(conf)\n                            p._logger.error.assert_called_with(expected_error_message)\n                assert patch_storage.called\n                assert 2 == patch_storage.call_count\n\n    async def test_run(self):\n        \"\"\"Test that run calls all units of purge process\"\"\"\n        mock_storage_client_async = MagicMock(spec=StorageClientAsync)\n        mock_audit_logger = AuditLogger(mock_storage_client_async)\n        _rv1 = await mock_value(\"Some config\")\n        _rv2 = await mock_value((1, 2))\n        _rv3 = await mock_value(None)\n        with patch.object(FledgeProcess, '__init__'):\n            with patch.object(mock_audit_logger, \"__init__\", return_value=None):\n                p = Purge()\n                p._logger.exception = MagicMock()\n                with patch.object(p, 'set_configuration', return_value=_rv1) as mock_set_config:\n                    with patch.object(p, 'purge_data', return_value=_rv2) as mock_purge_data:\n                        with patch.object(p, 'write_statistics', 
return_value=_rv3) as mock_write_stats:\n                            with patch.object(p, 'purge_stats_history', return_value=_rv3) as mock_purge_stats_history:\n                                with patch.object(p, 'purge_audit_trail_log', return_value=_rv3) as mock_purge_audit:\n                                    await p.run()\n                                    # Test the positive case when no error in try block\n                                mock_purge_audit.assert_called_once_with(\"Some config\")\n                            mock_purge_stats_history.assert_called_once_with(\"Some config\")\n                        mock_write_stats.assert_called_once_with(1, 2)\n                    mock_purge_data.assert_called_once_with(\"Some config\")\n                mock_set_config.assert_called_once_with()\n\n    async def test_run_exception(self, event_loop):\n        \"\"\"Test that run calls all units of purge process and checks the exception handling\"\"\"\n\n        async def mock_purge(x):\n            await asyncio.sleep(0.1)\n            raise Exception(\"\")\n\n        mock_storage_client_async = MagicMock(spec=StorageClientAsync)\n        mock_audit_logger = AuditLogger(mock_storage_client_async)\n        _rv = await mock_value(\"Some config\")\n        with patch.object(FledgeProcess, '__init__'):\n            with patch.object(mock_audit_logger, \"__init__\", return_value=None):\n                p = Purge()\n                p._logger.exception = MagicMock()\n                with patch.object(p, 'set_configuration', return_value=_rv):\n                    with patch.object(p, 'purge_data', side_effect=mock_purge):\n                        with patch.object(p, 'write_statistics'):\n                            await p.run()\n                # Test the negative case when function purge_data raise some exception\n                assert 1 == p._logger.exception.call_count\n\n"
  },
  {
    "path": "tests/unit/python/fledge/tasks/purge/test_purge_main.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\" Test tasks/purge/__main__.py entry point\n\n\"\"\"\nimport pytest\nfrom unittest.mock import patch, MagicMock\n\nfrom fledge.tasks import purge\n\nfrom fledge.common import logger\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.tasks.purge.purge import Purge\nfrom fledge.common.process import FledgeProcess\nfrom fledge.common.audit_logger import AuditLogger\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n@pytest.fixture\nasync def _purge_instance():\n    mockStorageClientAsync = MagicMock(spec=StorageClientAsync)\n    mockAuditLogger = AuditLogger(mockStorageClientAsync)\n    with patch.object(FledgeProcess, \"__init__\"):\n        with patch.object(logger, \"setup\"):\n            with patch.object(mockAuditLogger, \"__init__\", return_value=None):\n                p = Purge()\n    return p\n\n\nasync def test_main(_purge_instance):\n    async def mock_coro():\n        return None\n    \n    with patch.object(purge, \"__name__\", \"__main__\"):\n        purge.purge_process = _purge_instance\n        _rv = await mock_coro()\n        with patch.object(Purge, 'run', return_value=_rv):\n            await purge.purge_process.run()\n            purge.purge_process.run.assert_called_once_with()\n"
  },
  {
    "path": "tests/unit/python/fledge/tasks/statistics/test_statistics_history.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Test tasks/statistics/statistics_history.py\"\"\"\n\nfrom unittest.mock import patch, MagicMock\nimport pytest\nimport ast\nfrom fledge.common.logger import FLCoreLogger\nfrom fledge.common.process import FledgeProcess\nfrom fledge.tasks.statistics.statistics_history import StatisticsHistory\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\n\n__author__ = \"Vaibhav Singhal\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\npytestmark = pytest.mark.asyncio\n\n\nasync def mock_coro(*args, **kwargs):\n    if len(args) > 0:\n        return args[0]\n    else:\n        return \"\"\n\n\nclass TestStatisticsHistory:\n    \"\"\"Test the units of statistics_history.py\n    \"\"\"\n\n    async def test_init(self):\n        \"\"\"Test that creating an instance of StatisticsHistory calls init of FledgeProcess and creates loggers\"\"\"\n        with patch.object(FledgeProcess, \"__init__\") as mock_process:\n            with patch.object(FLCoreLogger, \"get_logger\") as log:\n                sh = StatisticsHistory()\n                assert isinstance(sh, StatisticsHistory)\n            log.assert_called_once_with(\"StatisticsHistory\")\n        mock_process.assert_called_once_with()\n\n    async def test_update_previous_value(self):\n        _rv = await mock_coro(None)\n        with patch.object(FledgeProcess, '__init__'):\n            with patch.object(FLCoreLogger, \"get_logger\"):\n                sh = StatisticsHistory()\n                sh._storage_async = MagicMock(spec=StorageClientAsync)\n                payload = {'updates': [{'where': {'value': 'Bla', 'condition': '=', 'column': 'key'}, 'values': {'previous_value': 1}}]}\n                with patch.object(sh._storage_async, \"update_tbl\", return_value=_rv) as patch_storage:\n                    await 
sh._bulk_update_previous_value(payload)\n                args, kwargs = patch_storage.call_args\n                assert \"statistics\" == args[0]\n                payload = ast.literal_eval(args[1])\n                assert \"Bla\" == payload[\"updates\"][0][\"where\"][\"value\"]\n                assert 1 == payload[\"updates\"][0][\"values\"][\"previous_value\"]\n\n    async def test_run(self):\n        with patch.object(FledgeProcess, '__init__'):\n            with patch.object(FLCoreLogger, \"get_logger\"):\n                sh = StatisticsHistory()\n                sh._storage_async = MagicMock(spec=StorageClientAsync)\n                retval = {'count': 2,\n                          'rows': [{'description': 'Readings removed from the buffer by the purge process',\n                                    'value': 0, 'key': 'PURGED', 'previous_value': 0,\n                                    'ts': '2018-08-31 17:03:17.597055+05:30'},\n                                   {'description': 'Readings received by Fledge',\n                                    'value': 0, 'key': 'READINGS', 'previous_value': 0,\n                                    'ts': '2018-08-31 17:03:17.597055+05:30'\n                                    }]\n                          }\n                _rv1 = await mock_coro(retval)\n                _rv2 = await mock_coro(None)\n                with patch.object(sh._storage_async, \"query_tbl\", return_value=_rv1) as mock_keys:\n                    with patch.object(sh, \"_bulk_update_previous_value\", return_value=_rv2) as mock_update:\n                        with patch.object(sh._storage_async, \"insert_into_tbl\", return_value=_rv2) as mock_bulk_insert:\n                            await sh.run()\n                    assert 1 == mock_bulk_insert.call_count\n                    assert 1 == mock_update.call_count\n                mock_keys.assert_called_once_with('statistics')\n"
  },
  {
    "path": "tests/unit/python/fledge/tasks/statistics/test_statistics_main.py",
    "content": "# -*- coding: utf-8 -*-\n\n# FLEDGE_BEGIN\n# See: http://fledge-iot.readthedocs.io/\n# FLEDGE_END\n\n\"\"\"Test tasks/statistics/__main__.py entry point\"\"\"\n\nfrom unittest.mock import patch, MagicMock\nimport pytest\n\nfrom fledge.tasks.statistics import statistics_history\nfrom fledge.common import logger\nfrom fledge.common.storage_client.storage_client import StorageClientAsync\nfrom fledge.tasks.statistics.statistics_history import StatisticsHistory\nfrom fledge.common.process import FledgeProcess\nfrom fledge.common.audit_logger import AuditLogger\n\n__author__ = \"Ashish Jabble\"\n__copyright__ = \"Copyright (c) 2017 OSIsoft, LLC\"\n__license__ = \"Apache 2.0\"\n__version__ = \"${VERSION}\"\n\n\n@pytest.fixture\nasync def _stats_history_instance():\n    mock_storage_client = MagicMock(spec=StorageClientAsync)\n    mock_audit_logger = AuditLogger(mock_storage_client)\n    with patch.object(FledgeProcess, \"__init__\"):\n        with patch.object(logger, \"setup\"):\n            with patch.object(mock_audit_logger, \"__init__\", return_value=None):\n                stats = StatisticsHistory()\n    return stats\n\n\nasync def test_main(_stats_history_instance):\n    async def mock_coro():\n        return None\n    with patch.object(statistics_history, \"__name__\", \"__main__\"):\n        _rv = await mock_coro()\n        with patch.object(StatisticsHistory, 'run', return_value=_rv):\n            assert isinstance(_stats_history_instance, StatisticsHistory)\n            await _stats_history_instance.run()\n            _stats_history_instance.run.assert_called_once_with()\n"
  },
  {
    "path": "tests-manual/C/services/core/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nset(CMAKE_CXX_FLAGS \"-std=c++11 -O3\")\nset(UUIDLIB -luuid)\nset(COMMONLIB -ldl)\n \n# Locate GTest\nfind_package(GTest REQUIRED)\ninclude_directories(${GTEST_INCLUDE_DIRS})\n\nset(BOOST_COMPONENTS system thread)\n# Late 2017 TODO: remove the following checks and always use std::regex\nif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)\n        set(BOOST_COMPONENTS ${BOOST_COMPONENTS} regex)\n        set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -DUSE_BOOST_REGEX\")\n    endif()\nendif()\nfind_package(Boost 1.53.0 COMPONENTS ${BOOST_COMPONENTS} REQUIRED)\ninclude_directories(SYSTEM ${Boost_INCLUDE_DIR})\n\ninclude_directories(../../../../C/common/include)\ninclude_directories(../../../../C/services/common/include)\ninclude_directories(../../../../C/services/core/include)\ninclude_directories(../../../../C/thirdparty/rapidjson/include)\ninclude_directories(../../../../C/thirdparty/Simple-Web-Server)\n\nfile(GLOB core_services \"../../../../C/services/core/*.cpp\")\nfile(GLOB common_services \"../../../../C/services/common/*.cpp\")\nfile(GLOB common_sources \"../../../../C/common/*.cpp\")\nfile(GLOB unittests \"test_*.cpp\")\n \n# Link runTests with what we want to test and the GTest and pthread library\nadd_executable(RunTests ${core_services} ${common_services} ${common_sources} \"main.cpp\" ${unittests})\ntarget_link_libraries(RunTests ${GTEST_LIBRARIES} pthread)\ntarget_link_libraries(RunTests  ${Boost_LIBRARIES})\ntarget_link_libraries(RunTests  ${UUIDLIB})\ntarget_link_libraries(RunTests  ${COMMONLIB})\n\n# Create C++ Fledge Core executable\nadd_executable(fledge-core core_server.cpp ${core_services} ${common_services} ${common_sources})\ntarget_link_libraries(fledge-core ${Boost_LIBRARIES})\ntarget_link_libraries(fledge-core ${CMAKE_THREAD_LIBS_INIT})\ntarget_link_libraries(fledge-core ${UUIDLIB})\ntarget_link_libraries(fledge-core 
${COMMONLIB})\ntarget_link_libraries(fledge-core -lssl -lcrypto)\n"
  },
  {
    "path": "tests-manual/C/services/core/README",
    "content": "=================================\nConfigurationManager class tests.\n=================================\n\nSteps:\n\n1) Make sure the Fledge storage layer is running on port 8080\n\n\tif not, set FLEDGE_DATA to \".\" and start it.\n\t# export FLEDGE_DATA=.\n\n\tSet FLEDGE_ROOT if needed\n\t# export FLEDGE_ROOT=/some/path\n\n\tMake sure we have a Fledge SQLite3 database:\n\t# export DEFAULT_SQLITE_DB_FILE=/some_path/fledge.db\n\n\tStart the storage service\n\t# $FLEDGE_ROOT/services/storage\n\n2) Delete category \"testcategory\" and its child categories\n\n\t# curl -X DELETE -d '{\"where\":{\"column\":\"key\",\"condition\":\"=\",\"value\":\"testcategory\"}}' 'http://127.0.0.1:8080/storage/table/configuration'\n\t# curl -X DELETE -d '{\"where\":{\"column\":\"parent\",\"condition\":\"=\",\"value\":\"testcategory\"}}' 'http://127.0.0.1:8080/storage/table/category_children'\n\tCheck\n\t# curl -X GET 'http://127.0.0.1:8080/storage/table/configuration?key=testcategory'\n\t# curl -X GET 'http://127.0.0.1:8080/storage/table/category_children?parent=testcategory'\n\n3) Make / Run tests\n\n\t# mkdir build\n\t# cd build\n\t# cmake ..\n\t# make\n\t# ./RunTests\n\n=====================================================================\nIntegration tests for classes:\n - Fledge Core C++\n - ConfigurationManager C++ (which needs a running Storage Service)\n=====================================================================\n\n\nSteps:\n\n1) Set FLEDGE_ROOT\n2) ./testRunner.sh\n3) Manually kill the Fledge Core and Storage Service processes (this is required for the time being)\n\n\n"
  },
  {
    "path": "tests-manual/C/services/core/core_server.cpp",
    "content": "#include <core_management_api.h>\n#include <configuration_manager.h>\n#include <rapidjson/document.h>\n#include <service_registry.h>\n\nint main(int argc, char** argv)\n{\n\tunsigned short port = 9393;\n\tif (argc == 2 && argv[1])\n\t{\n\t\tport = (unsigned short)atoi(argv[1]);\n\t}\n\n\t// Instantiate CoreManagementApi class\n\tCoreManagementApi coreServer(\"test_core\", port);\n\n\t// Start the core server\n\tCoreManagementApi::getInstance()->startServer();\n}\n"
  },
  {
    "path": "tests-manual/C/services/core/expected/1",
    "content": "{ \"categories\" : [] }"
  },
  {
    "path": "tests-manual/C/services/core/expected/10",
    "content": "{}"
  },
  {
    "path": "tests-manual/C/services/core/expected/11",
    "content": "{ \"key\" : \"CAT_A\", \"description\" : \"category_A\", \"value\" : {\"a\" : { \"description\" : \"a\", \"type\" : \"integer\", \"value\" : \"20\", \"default\" : \"20\" }} }"
  },
  {
    "path": "tests-manual/C/services/core/expected/12",
    "content": "{ \"key\" : \"CAT_B\", \"description\" : \"category_b\", \"value\" : {\"b\" : { \"description\" : \"b\", \"type\" : \"integer\", \"value\" : \"87\", \"default\" : \"87\" }} }"
  },
  {
    "path": "tests-manual/C/services/core/expected/13",
    "content": "{ \"key\" : \"CAT_B\", \"description\" : \"category_b_UPD\", \"value\" : {\"new_b\" : { \"description\" : \"new_b\", \"type\" : \"integer\", \"value\" : \"1001\", \"default\" : \"1001\" }, \"b\" : { \"description\" : \"b\", \"type\" : \"integer\", \"value\" : \"87\", \"default\" : \"87\" }} }"
  },
  {
    "path": "tests-manual/C/services/core/expected/14",
    "content": "{ \"key\" : \"CAT_B\", \"description\" : \"category_b\", \"value\" : {\"b\" : { \"description\" : \"b\", \"type\" : \"integer\", \"value\" : \"87\", \"default\" : \"87\" }} }"
  },
  {
    "path": "tests-manual/C/services/core/expected/15",
    "content": "{ \"children\" : [ \"CAT_A\", \"CAT_B\" ] }"
  },
  {
    "path": "tests-manual/C/services/core/expected/16",
    "content": "{ \"message\" : \"Requested child categories are already set for the given parent category\", \"entryPoint\" : \"add child category\" }"
  },
  {
    "path": "tests-manual/C/services/core/expected/17",
    "content": "{ \"categories\" : [{\"key\": \"CAT_A\", \"description\" : \"category_A\"}, {\"key\": \"CAT_B\", \"description\" : \"category_b\"}] }"
  },
  {
    "path": "tests-manual/C/services/core/expected/18",
    "content": "{ \"children\" : [ \"CAT_B\" ] }"
  },
  {
    "path": "tests-manual/C/services/core/expected/19",
    "content": "{ \"categories\" : [{\"key\": \"CAT_B\", \"description\" : \"category_b\"}] }"
  },
  {
    "path": "tests-manual/C/services/core/expected/2",
    "content": "{\"retainUnsent\" : { \"description\" : \"Retain data that has not been sent to any historian yet.\", \"type\" : \"boolean\", \"value\" : \"False\", \"default\" : \"False\" }, \"size\" : { \"description\" : \"Maximum size of data to be retained (in Kbytes). Oldest data will be removed to keep below this size, unless retained.\", \"type\" : \"integer\", \"value\" : \"1000000\", \"default\" : \"1000000\" }, \"age\" : { \"description\" : \"Age of data to be retained (in hours). All data older than this value will be removed,unless retained.\", \"type\" : \"integer\", \"value\" : \"72\", \"default\" : \"72\" }}"
  },
  {
    "path": "tests-manual/C/services/core/expected/20",
    "content": "{ \"children\" : [ \"CAT_B\" ] }"
  },
  {
    "path": "tests-manual/C/services/core/expected/21",
    "content": "{ \"categories\" : [{\"key\": \"North Readings to PI\", \"description\" : \"Configuration of the Sending Process\"}, {\"key\": \"North Statistics to PI\", \"description\" : \"Configuration of the Sending Process\"}, {\"key\": \"North Readings to OCS\", \"description\" : \"OCS North Plugin\"}, {\"key\": \"SCHEDULER\", \"description\" : \"Scheduler configuration\"}, {\"key\": \"SMNTR\", \"description\" : \"Service Monitor\"}, {\"key\": \"rest_api\", \"description\" : \"Fledge Admin and User REST API\"}, {\"key\": \"service\", \"description\" : \"Fledge Service\"}, {\"key\": \"OMF_TYPES\", \"description\" : \"\"}, {\"key\": \"PURGE_READ\", \"description\" : \"Purge the readings table\"}, {\"key\": \"COAP\", \"description\" : \"COAP South plugin\"}, {\"key\": \"sinusoid\", \"description\" : \"sinusoid South plugin\"}, {\"key\": \"South\", \"description\" : \"South Service configuration\"}, {\"key\": \"North_Readings_to_PI\", \"description\" : \"Configuration of the Sending Process\"}, {\"key\": \"North_Statistics_to_PI\", \"description\" : \"Configuration of the Sending Process\"}, {\"key\": \"testcategory\", \"description\" : \"category_description\"}, {\"key\": \"CAT_B\", \"description\" : \"category_b\"}] }"
  },
  {
    "path": "tests-manual/C/services/core/expected/22",
    "content": "{ \"categories\" : [{\"key\": \"North Readings to PI\", \"description\" : \"Configuration of the Sending Process\"}, {\"key\": \"North Statistics to PI\", \"description\" : \"Configuration of the Sending Process\"}, {\"key\": \"North Readings to OCS\", \"description\" : \"OCS North Plugin\"}, {\"key\": \"SCHEDULER\", \"description\" : \"Scheduler configuration\"}, {\"key\": \"SMNTR\", \"description\" : \"Service Monitor\"}, {\"key\": \"rest_api\", \"description\" : \"Fledge Admin and User REST API\"}, {\"key\": \"service\", \"description\" : \"Fledge Service\"}, {\"key\": \"OMF_TYPES\", \"description\" : \"\"}, {\"key\": \"PURGE_READ\", \"description\" : \"Purge the readings table\"}, {\"key\": \"COAP\", \"description\" : \"COAP South plugin\"}, {\"key\": \"sinusoid\", \"description\" : \"sinusoid South plugin\"}, {\"key\": \"South\", \"description\" : \"South Service configuration\"}, {\"key\": \"North_Readings_to_PI\", \"description\" : \"Configuration of the Sending Process\"}, {\"key\": \"North_Statistics_to_PI\", \"description\" : \"Configuration of the Sending Process\"}, {\"key\": \"testcategory\", \"description\" : \"category_description\"}] }"
  },
  {
    "path": "tests-manual/C/services/core/expected/23",
    "content": "{ \"categories\" : [{\"key\": \"North Readings to PI\", \"description\" : \"Configuration of the Sending Process\"}, {\"key\": \"North Statistics to PI\", \"description\" : \"Configuration of the Sending Process\"}, {\"key\": \"North Readings to OCS\", \"description\" : \"OCS North Plugin\"}, {\"key\": \"SCHEDULER\", \"description\" : \"Scheduler configuration\"}, {\"key\": \"SMNTR\", \"description\" : \"Service Monitor\"}, {\"key\": \"rest_api\", \"description\" : \"Fledge Admin and User REST API\"}, {\"key\": \"service\", \"description\" : \"Fledge Service\"}, {\"key\": \"OMF_TYPES\", \"description\" : \"\"}, {\"key\": \"PURGE_READ\", \"description\" : \"Purge the readings table\"}, {\"key\": \"COAP\", \"description\" : \"COAP South plugin\"}, {\"key\": \"sinusoid\", \"description\" : \"sinusoid South plugin\"}, {\"key\": \"South\", \"description\" : \"South Service configuration\"}, {\"key\": \"North_Readings_to_PI\", \"description\" : \"Configuration of the Sending Process\"}, {\"key\": \"North_Statistics_to_PI\", \"description\" : \"Configuration of the Sending Process\"}] }"
  },
  {
    "path": "tests-manual/C/services/core/expected/3",
    "content": "{ \"message\" : \"Config category does not exist\", \"entryPoint\" : \"get category\" }"
  },
  {
    "path": "tests-manual/C/services/core/expected/4",
    "content": "{ \"message\" : \"Config category does not exist\", \"entryPoint\" : \"get category item\" }"
  },
  {
    "path": "tests-manual/C/services/core/expected/5",
    "content": "{\"age\" : { \"description\" : \"Age of data to be retained (in hours). All data older than this value will be removed,unless retained.\", \"type\" : \"integer\", \"value\" : \"\", \"default\" : \"72\" }}"
  },
  {
    "path": "tests-manual/C/services/core/expected/6",
    "content": "{\"age\" : { \"description\" : \"Age of data to be retained (in hours). All data older than this value will be removed,unless retained.\", \"type\" : \"integer\", \"value\" : \"72\", \"default\" : \"72\" }}"
  },
  {
    "path": "tests-manual/C/services/core/expected/7",
    "content": "{ \"message\" : \"The config category being inserted/updated has both default and value properties for items\", \"entryPoint\" : \"create category\" }"
  },
  {
    "path": "tests-manual/C/services/core/expected/8",
    "content": "{ \"key\" : \"testcategory\", \"description\" : \"category_description\", \"value\" : {\"info\" : { \"description\" : \"Test\", \"type\" : \"string\", \"value\" : \"ONE\", \"default\" : \"ONE\" }, \"detail\" : { \"description\" : \"detail\", \"type\" : \"integer\", \"value\" : \"99\", \"default\" : \"99\" }} }"
  },
  {
    "path": "tests-manual/C/services/core/expected/9",
    "content": "{\"info\" : { \"description\" : \"Test\", \"type\" : \"string\", \"value\" : \"ONE\", \"default\" : \"ONE\" }}"
  },
  {
    "path": "tests-manual/C/services/core/main.cpp",
    "content": "#include <gtest/gtest.h>\n#include <resultset.h>\n#include <string.h>\n#include <string>\n\nusing namespace std;\n\nint main(int argc, char **argv) {\n    testing::InitGoogleTest(&argc, argv);\n    return RUN_ALL_TESTS();\n}\n\n"
  },
  {
    "path": "tests-manual/C/services/core/payloads/add_child_categories.json",
    "content": "{ \"children\" : [\"CAT_A\", \"CAT_B\"] }\n"
  },
  {
    "path": "tests-manual/C/services/core/payloads/create_category.json",
    "content": "{\n    \"key\" : \"testcategory\",\n    \"description\" : \"category_description\",\n    \"value\" :\n        {\n           \"info\": {\"description\": \"Test\", \"type\": \"string\", \"default\": \"ONE\"},\n           \"detail\": {\"description\": \"detail\", \"type\": \"integer\", \"default\" : \"99\"}\n        }\n}\n"
  },
  {
    "path": "tests-manual/C/services/core/payloads/create_category_a.json",
    "content": "{\n    \"key\" : \"CAT_A\",\n    \"description\" : \"category_A\",\n    \"value\" :\n        {\n           \"a\": {\"description\": \"a\", \"type\": \"integer\", \"default\" : \"20\"}\n        }\n}\n"
  },
  {
    "path": "tests-manual/C/services/core/payloads/create_category_b.json",
    "content": "{\n    \"key\" : \"CAT_B\",\n    \"description\" : \"category_b\",\n    \"value\" :\n        {\n           \"b\": {\"description\": \"b\", \"type\": \"integer\", \"default\" : \"87\"}\n        }\n}\n"
  },
  {
    "path": "tests-manual/C/services/core/payloads/create_category_update_b.json",
    "content": "{\n    \"key\" : \"CAT_B\",\n    \"description\" : \"category_b_UPD\",\n    \"value\" :\n        {\n           \"new_b\": {\"description\": \"new_b\", \"type\": \"integer\", \"default\" : \"1001\"}\n        }\n}\n"
  },
  {
    "path": "tests-manual/C/services/core/payloads/create_category_with_values.json",
    "content": "{\n    \"key\" : \"testcategory\",\n    \"description\" : \"create_category_with_value_and_default_properties\",\n    \"value\" :\n        {\n           \"info\": {\"description\": \"Test\", \"type\": \"string\", \"default\": \"ONE\", \"value\" : \"ONE\"},\n           \"detail\": {\"description\": \"detail\", \"type\": \"integer\", \"default\" : \"99\"}\n        }\n}\n"
  },
  {
    "path": "tests-manual/C/services/core/payloads/setvalue.json",
    "content": "{ \"value\" : \"72\" }\n"
  },
  {
    "path": "tests-manual/C/services/core/testRunner.sh",
    "content": "#!/bin/sh\n\nif [ \"${FLEDGE_ROOT}\" = \"\" ] ; then\n        echo \"Must set FLEDGE_ROOT variable\"\n        exit 1\nfi\n\nexport DEFAULT_SQLITE_DB_FILE=${FLEDGE_ROOT}/data/fledge.db\n\ntestNum=1\nn_failed=0\nn_passed=0\nn_unchecked=0\n\nexport fledge_core_port=9393\n\n# Start Fledge Core and storage service\n./testSetup.sh\n\nrm -f failed\nrm -rf results\nmkdir results\n\ncat testset | while read name method url payload optional; do\n\t# Add Fledge core port\n\turl=`echo ${url} | sed -e \"s/_CORE_PORT_/${fledge_core_port}/\"`\n\n\techo -n \"Test [$testNum] ${name}: \"\n\tif [ \"$payload\" = \"\" ] ; then\n\t\tcurl -X $method $url -o results/$testNum >/dev/null 2>&1\n\t\tcurlstate=$?\n\telse\n\t\tcurl -X $method $url -d@payloads/$payload -o results/$testNum >/dev/null 2>&1\n\t\tcurlstate=$?\n\tfi\n\n\tif [ ! -f expected/$testNum ]; then\n\t\tn_unchecked=`expr $n_unchecked + 1`\n\t\techo Missing expected results for test $testNum - result unchecked\n\telse\n\t\tcmp -s results/$testNum expected/$testNum\n\t\tif [ $? -ne \"0\" ]; then\n\t\t\techo Failed\n\t\t\tn_failed=`expr $n_failed + 1`\n\t\t\tif [ \"$payload\" = \"\" ]\n\t\t\t\tthen\n\t\t\t\techo Test $testNum  ${name} curl -X $method $url >> failed\n\t\t\telse\n\t\t\t\techo Test $testNum  ${name} curl -X $method $url -d@payloads/$payload  >> failed\n\t\t\tfi\n\t\t\t(\n\t\t\techo \"   \" Expected: \"`cat expected/$testNum`\" >> failed\n\t\t\techo \"   \" Got:     \"`cat results/$testNum`\" >> failed\n\t\t\t)\n\t\t\techo >> failed\n\t\telse\n\t\t\techo Passed\n\t\t\tn_passed=`expr $n_passed + 1`\n\t\tfi\n\tfi\n\n\ttestNum=`expr $testNum + 1`\n\trm -f tests.result\n\techo $n_failed Tests Failed             >  tests.result\n\techo $n_passed Tests Passed             >> tests.result\n\techo $n_unchecked Tests Unchecked       >> tests.result\ndone\n\ncat tests.result\nrm -f tests.result\n\nif [ -f \"failed\" ]; then\n        echo\n        echo \"Failed Tests\"\n        echo \"============\"\n        cat failed\n        exit 1\nfi\n\n####\n# Add as last test shutdown of core and storage.\n# Core shutdown not implemented yet\n# Storage can be done this way\n#\n#  storageServiceURL=\"http://127.0.0.1:${fledge_core_port}/fledge/service?name=Fledge%20Storage\"\n#  storageInfo=`curl -s ${storageServiceURL}`\n#  storageManagementPort=`echo ${storageInfo} | grep -o '\"management_port\".*:.*,' | awk -F':' '{print $2}' | tr -d ', '`\n#  storageShutdownURL=\"http://127.0.0.1:${storageManagementPort}/fledge/service/shutdown\"\n#\n#  curl -s -X POST ${storageShutdownURL}\n\n"
  },
  {
    "path": "tests-manual/C/services/core/testSetup.sh",
    "content": "#!/bin/sh\n\necho \"Starting Fledge Core on port ${fledge_core_port} ...\"\n./build/fledge-core $fledge_core_port &\n\nsleep 2\n\necho \"Starting Fledge Storage Service, registering to Fledge Core\"\n$FLEDGE_ROOT/services/storage --address=127.0.0.1 --port=$fledge_core_port\n\nsleep 2\n\n"
  },
  {
    "path": "tests-manual/C/services/core/test_configuration_manager.cpp",
    "content": "#include <gtest/gtest.h>\n#include <configuration_manager.h>\n#include <rapidjson/document.h>\n\nusing namespace std;\nusing namespace rapidjson;\n\n// Get all found category names and description\nTEST(ConfigurationManagerTest, getAllCategoryNames)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\n\tConfigCategories allCats = cfgManager->getAllCategoryNames();\n\n\tstring result = \"{\\\"categories\\\": \" + allCats.toJSON() + \"}\";\n\n\tDocument doc;\n\tdoc.Parse(result.c_str());\n\n\tif (doc.HasParseError() || !doc.HasMember(\"categories\"))\n\t{\n\t\tASSERT_FALSE(1);\n\t}\n\n\tValue& categories = doc[\"categories\"];\n\n\tASSERT_TRUE(categories.IsArray());\n\n\tConfigCategories confCategories(result);\n\n\tASSERT_EQ(categories.Size(), confCategories.length());\n}\n\n// Get all items of \"service\" category\nTEST(ConfigurationManagerTest, getCategoryItems)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\tConfigCategory category = cfgManager->getCategoryAllItems(\"service\");\n\tASSERT_EQ(0, category.getDescription().compare(\"Fledge Service\"));\n}\n\n// Test check we cannot create a category with both value and default for one item\nTEST(ConfigurationManagerTest, addCategoryWithValueAndDefaultForOneItem)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\tConfigCategory category = cfgManager->createCategory(\"testcategory\", \"category_description\", \"{\\\"info\\\": {\\\"description\\\": \\\"Test\\\", \\\"type\\\": \\\"string\\\", \\\"default\\\": \\\"ONE\\\", \\\"value\\\" : \\\"ONE\\\"}, \\\"detail\\\": 
{\\\"description\\\": \\\"detail\\\", \\\"type\\\": \\\"integer\\\", \\\"default\\\" : \\\"99\\\"}}\");\n\n\t\t// Test failure\n\t\tASSERT_TRUE(false);\n\t}\n\tcatch (ConfigCategoryDefaultWithValue& e)\n\t{\n\t\t// Test success only for found value and default\n\t\tASSERT_FALSE(false);\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_TRUE(false);\n\t}\n}\n\n// Create a category\nTEST(ConfigurationManagerTest, addCategoryWithDefaultValues)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\t// Don't force keep_original_items, just use the default (false)\n\t\tConfigCategory category = cfgManager->createCategory(\"testcategory\", \"category_description\", \"{\\\"item_1\\\": {\\\"description\\\": \\\"Test\\\", \\\"type\\\": \\\"string\\\", \\\"default\\\": \\\"ONE\\\"}, \\\"item_2\\\": {\\\"description\\\": \\\"test_2\\\", \\\"type\\\": \\\"string\\\", \\\"default\\\": \\\"____\\\"}}\");\n\n\t\t// Test success\n\t\tASSERT_EQ(2, category.getCount());\n\t\tASSERT_EQ(0, category.getDescription().compare(\"category_description\"));\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Update category and keep original items\nTEST(ConfigurationManagerTest, updateCategoryKeepItems)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\t// Force keep_original_items = true\n\t\tConfigCategory category = cfgManager->createCategory(\"testcategory\",\n\t\t\t\t\t\t\t\t     \"category_description_merge\",\n\t\t\t\t\t\t\t\t     \"{\\\"item_99\\\": {\\\"description\\\": \\\"TestMerge\\\", \"\n\t\t\t\t\t\t\t\t     \"\\\"type\\\": \\\"string\\\", \\\"default\\\": \\\"99\\\"}}\",\n\t\t\t\t\t\t\t\t     true);\n\n\t\t// Test 
success\n\t\tASSERT_EQ(3, category.getCount());\n\t\tASSERT_EQ(0, category.getDescription().compare(\"category_description_merge\"));\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Update a category\nTEST(ConfigurationManagerTest, UpdateCategory)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\t// Force keep_original_items = false (which is the default)\n\t\tConfigCategory category = cfgManager->createCategory(\"testcategory\", \"category_description\", \"{\\\"item_1\\\": {\\\"description\\\": \\\"run\\\", \\\"type\\\": \\\"string\\\", \\\"default\\\": \\\"TWO\\\"}, \\\"item_3\\\": {\\\"description\\\": \\\"test_3\\\", \\\"type\\\": \\\"string\\\", \\\"default\\\": \\\"_3_\\\"}, \\\"item_4\\\": {\\\"description\\\": \\\"the operation\\\", \\\"type\\\": \\\"integer\\\", \\\"default\\\": \\\"101\\\"}}\", false);\n\n\t\t// item_1 gets updated\n\t\t// item_2 is removed\n\t\t// item_3 is added\n\t\t// item_4 is added\n\n\t\t// Test success\n\t\tASSERT_EQ(3, category.getCount());\n\t\tASSERT_EQ(0, category.getDescription().compare(\"category_description\"));\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Get a non-existent category item\nTEST(ConfigurationManagerTest, GetNoExistentCategoryItem)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\t// Category item \"item_2\" doesn't exist\n\t\tstring item = cfgManager->getCategoryItem(\"testcategory\", \"item_2\");\n\n\t\tASSERT_EQ(0, item.compare(\"{}\"));\n\t\n\t\t// Test success\n\t\tASSERT_TRUE(true);\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Get all details of existing category 
item\nTEST(ConfigurationManagerTest, GetCategoryItem)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\t// item_4 exists\n\t\tstring item = cfgManager->getCategoryItem(\"testcategory\", \"item_4\");\n\n\t\t// Test success\n\t\tASSERT_TRUE(item.compare(\"{}\") != 0);\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Get existing value of a category item\nTEST(ConfigurationManagerTest, GetCategoryItemValue)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\tstring item = cfgManager->getCategoryItemValue(\"testcategory\", \"item_4\");\n\n\t\t// Test success\n\t\tASSERT_EQ(0, item.compare(\"101\"));\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Set category item value of an existing item\nTEST(ConfigurationManagerTest, SetCategoryItemValue)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\tbool item = cfgManager->setCategoryItemValue(\"testcategory\", \"item_4\", \"frog\");\n\t\n\t\t// Test success\n\t\tASSERT_TRUE(item);\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Set category item value of a non-existent item\nTEST(ConfigurationManagerTest, SetCategoryNotExistingItemValue)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\tbool item = cfgManager->setCategoryItemValue(\"testcategory\", \"item_xyz\", \"frog\");\n\t\n\t\t// Test 
failure\n\t\tASSERT_TRUE(false);\n\t}\n\tcatch (NoSuchCategoryItem& e)\n\t{\n\t\t// Test success\n\t\tASSERT_TRUE(true);\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n\n// Create category A\nTEST(ConfigurationManagerTest, addCategoryA)\n{       \n        // Before the test start the storage layer with FLEDGE_DATA=.\n        // TCP port will be 8080\n        ConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n        try\n        {       \n                // Don't force keep_original_items, just use the default (false)\n                ConfigCategory category = cfgManager->createCategory(\"CAT_A\", \"category_A\", \"{\\\"item_1\\\": {\\\"description\\\": \\\"CAT_A\\\", \\\"type\\\": \\\"string\\\", \\\"default\\\": \\\"a\\\"}}\");\n                \n                // Test success\n                ASSERT_EQ(1, category.getCount());\n                ASSERT_EQ(0, category.getDescription().compare(\"category_A\"));\n        }\n        catch (...)\n        {       \n                // Test failure\n                ASSERT_FALSE(true);\n        }\n}\n\n// Create category B\nTEST(ConfigurationManagerTest, addCategoryB)\n{       \n        // Before the test start the storage layer with FLEDGE_DATA=.\n        // TCP port will be 8080\n        ConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n        try\n        {       \n                // Don't force keep_original_items, just use the default (false)\n                ConfigCategory category = cfgManager->createCategory(\"CAT_B\", \"category_B\", \"{\\\"item_1\\\": {\\\"description\\\": \\\"CAT_B\\\", \\\"type\\\": \\\"string\\\", \\\"default\\\": \\\"b\\\"}}\");\n                \n                // Test success\n                ASSERT_EQ(1, category.getCount());\n                ASSERT_EQ(0, category.getDescription().compare(\"category_B\"));\n        }\n        catch (...)\n        {       \n               
 // Test failure\n                ASSERT_FALSE(true);\n        }\n}\n\n// Add child categories\n// Success when adding 1 or 2 child categories\nTEST(ConfigurationManagerTest, AddChildCategories)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\tstring childCategories = cfgManager->addChildCategory(\"testcategory\", \"{\\\"children\\\": [\\\"CAT_A\\\", \\\"CAT_B\\\"]}\");\n\t\n\t\t// Test success\n\t\tASSERT_TRUE(true);\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Add child categories\n// if both child categories are already set\n// ExistingChildCategories is raised\n// If we catch ExistingChildCategories test is successful\nTEST(ConfigurationManagerTest, AddExistingChildCategories)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\tstring childCategories = cfgManager->addChildCategory(\"testcategory\", \"{\\\"children\\\": [\\\"CAT_A\\\", \\\"CAT_B\\\"]}\");\n\t\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n\tcatch (ExistingChildCategories& e)\n\t{\n\t\t// Test success\n\t\tASSERT_TRUE(true);\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Add child categories to a non-existent parent\nTEST(ConfigurationManagerTest, AddChildCategoriesToNotExistentParent)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\tstring childCategories = cfgManager->addChildCategory(\"not_existent\", \"{\\\"children\\\": [\\\"COAP\\\", \\\"HTTP_SOUTH\\\"]}\");\n\t\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n\tcatch (NoSuchCategory& 
e)\n\t{\n\t\t// Test success\n\t\tASSERT_TRUE(true);\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Get the child categories\nTEST(ConfigurationManagerTest, GetChildCategories)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\tConfigCategories childCategories = cfgManager->getChildCategories(\"testcategory\");\n\n\t\t// Test success\n\t\tASSERT_TRUE(true);\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Delete a non-existent child category\nTEST(ConfigurationManagerTest, DeleteNotExistentChildCategory)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\tConfigCategories beginChildCategories = cfgManager->getChildCategories(\"testcategory\");\n\t\tint numChildCategories = beginChildCategories.length();\n\n\t\tstring childCategories = cfgManager->deleteChildCategory(\"testcategory\", \"DCOAP\");\n\n\t\tConfigCategories endChildCategories = cfgManager->getChildCategories(\"testcategory\");\n\t\tint finalChildCategories = endChildCategories.length();\n\n\t\t// Test success\n\t\tASSERT_EQ(numChildCategories, finalChildCategories);\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Delete a category item value (set to \"\")\nTEST(ConfigurationManagerTest, DeleteCategoryItemValue)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\tstring modifiedCategory = cfgManager->deleteCategoryItemValue(\"testcategory\", \"item_4\");\n\n\t\t// Test 
success\n\t\tASSERT_EQ(cfgManager->getCategoryItemValue(\"testcategory\", \"item_4\").compare(\"\"), 0);\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Delete a child category\nTEST(ConfigurationManagerTest, DeleteChildCategory)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\tConfigCategories beginChildCategories = cfgManager->getChildCategories(\"testcategory\");\n\t\tint numChildCategories = beginChildCategories.length();\n\n\t\tstring childCategories = cfgManager->deleteChildCategory(\"testcategory\", \"CAT_B\");\n\n\t\tConfigCategories endChildCategories = cfgManager->getChildCategories(\"testcategory\");\n\t\tint finalChildCategories = endChildCategories.length();\n\n\t\t// Test success\n\t\tASSERT_EQ(numChildCategories - 1, finalChildCategories);\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n\n// Delete a category\nTEST(ConfigurationManagerTest, DeleteCategory)\n{\n\t// Before the test start the storage layer with FLEDGE_DATA=.\n\t// TCP port will be 8080\n\tConfigurationManager *cfgManager = ConfigurationManager::getInstance(\"127.0.0.1\", 8080);\n\ttry\n\t{\n\t\tConfigCategories currentCategories = cfgManager->getAllCategoryNames();\n\t\tConfigCategories modifiedCategories = cfgManager->deleteCategory(\"testcategory\");\n\n\t\t// Test success\n\t\tASSERT_EQ(currentCategories.length() - 1, modifiedCategories.length());\n\t}\n\tcatch (...)\n\t{\n\t\t// Test failure\n\t\tASSERT_FALSE(true);\n\t}\n}\n"
  },
  {
    "path": "tests-manual/C/services/core/testset",
    "content": "EmptyChildCategories GET http://localhost:_CORE_PORT_/fledge/service/category/test_cat/children\nGetCategoryName GET http://localhost:_CORE_PORT_/fledge/service/category/PURGE_READ\nGetNonExistentCategoryName GET http://localhost:_CORE_PORT_/fledge/service/category/NOT_EXISTENT\nGetNonExistentCategoryItemName GET http://localhost:_CORE_PORT_/fledge/service/category/NOT_EXISTENT/item_456\nDeleteCategoryItemValue DELETE http://localhost:_CORE_PORT_/fledge/service/category/PURGE_READ/age/value\nSetCategoryItemValue PUT http://localhost:_CORE_PORT_/fledge/service/category/PURGE_READ/age setvalue.json\nFailureCreateCategory POST http://localhost:_CORE_PORT_/fledge/service/category create_category_with_values.json\nCreateCategory POST http://localhost:_CORE_PORT_/fledge/service/category create_category.json\nGetCategoryItemName GET http://localhost:_CORE_PORT_/fledge/service/category/testcategory/info\nGetCategoryNonExistingItemName GET http://localhost:_CORE_PORT_/fledge/service/category/testcategory/foobar\nCreateCategory_a POST http://localhost:_CORE_PORT_/fledge/service/category create_category_a.json\nCreateCategory_b POST http://localhost:_CORE_PORT_/fledge/service/category create_category_b.json\nUpdateCategoryKeepItems_b POST http://localhost:_CORE_PORT_/fledge/service/category?keep_original_items=true create_category_update_b.json\nUpdateCreateCategory_b POST http://localhost:_CORE_PORT_/fledge/service/category?keep_original_items=false create_category_b.json\nAddChildCategories POST http://localhost:_CORE_PORT_/fledge/service/category/testcategory/children add_child_categories.json\nAddSameChildCategories POST http://localhost:_CORE_PORT_/fledge/service/category/testcategory/children add_child_categories.json\nGetChildCategories GET http://localhost:_CORE_PORT_/fledge/service/category/testcategory/children\nDeleteChildCategory_a DELETE http://localhost:_CORE_PORT_/fledge/service/category/testcategory/children/CAT_A\nGetRemainingChildCategories 
GET http://localhost:_CORE_PORT_/fledge/service/category/testcategory/children\nDeleteNonExistentChildCategory DELETE http://localhost:_CORE_PORT_/fledge/service/category/testcategory/children/CAT_XYZ\nDeleteCategoryName_a DELETE http://localhost:_CORE_PORT_/fledge/service/category/CAT_A\nDeleteCategoryName_b DELETE http://localhost:_CORE_PORT_/fledge/service/category/CAT_B\nDeleteCategoryName DELETE http://localhost:_CORE_PORT_/fledge/service/category/testcategory\n"
  },
  {
    "path": "tests-manual/debugger/.debugrc",
    "content": "    PS1='\\[\\033[01;32m\\]Debug\\[\\033[00m\\]:\\[\\033[01;34m\\] ${SERVICE}\\[\\033[00m\\]\\$ '\n"
  },
  {
    "path": "tests-manual/debugger/README.rst",
    "content": "Debugger Command Line Test Programs\n===================================\n\nThis directory contains a set of utilities used to manually test the pipeline debugger features. These tests should be run against a live system built from this branch.\n\nTo start testing:\n\n  - Change directory to the debugger directory\n\n  - Type the command\n\n    .. code-block:: console\n\n        ./debug <ServiceName>\n\n    Where *<ServiceName>* is the name of a south or north service. You will need to quote the name if it contains whitespace or wildcard characters that have meaning to the shell.\n\n  - Attach the debugger to the pipeline by using the *attach* command\n\n    .. code-block:: console\n\n        attach\n\nThe debugger is now attached to the pipeline and collecting one reading at each point in the pipeline.\n\nTo see the list of commands available, type the *commands* command\n\n.. code-block:: console\n\n    % commands\n    attach:\t\tAttach the pipeline debugger\n    buffer:\t\tReturn the contents of the buffers at every pipeline element\n    detach:\t\tDetach the debugger from the pipeline\n    isolate:\t        Isolate the pipeline from the destination\n    replay:\t\tReplay the buffered data through the pipeline\n    resumeIngest:\tResume the flow of data into the pipeline\n    setBuffer:\t        Set the number of readings to hold in each buffer, passing an integer argument\n    state:\t\tReturn the state of the debugger\n    step:\t\tAllow readings to flow into the pipeline. Pass an optional number of readings to ingest; default to 1 if omitted\n    store:\t\tAllow data to flow out of the pipeline into storage\n    suspendIngest:\tSuspend the ingestion of data into the pipeline\n\nThe data output from the *buffer* command shows the readings at the input to the named filter in the pipeline. The item named *Writer* is the end of the pipeline that writes data to storage, in south plugins. 
The item named *Branch* is a branch point in the pipeline.\n\nTo exit the debugger, simply type *exit* or <Control>-D\n"
  },
  {
    "path": "tests-manual/debugger/attach",
    "content": "#!/bin/bash\n#help \t\tAttach the pipeline debugger\ncurl -s -X PUT http://localhost:${DEBUG_SERVICE}/fledge/${DEBUG_TYPE}/debug/attach|jq\n"
  },
  {
    "path": "tests-manual/debugger/buffer",
    "content": "#!/bin/bash\n#help \t\tReturn the contents of the buffers at every pipeline element\ncurl -s http://localhost:${DEBUG_SERVICE}/fledge/${DEBUG_TYPE}/debug/buffer|jq\n"
  },
  {
    "path": "tests-manual/debugger/commands",
    "content": "#!/bin/bash\n#help \tList the commands that can be run\ncmds=`ls |grep -v debug |grep -v commands`\ngrep '#help' $cmds | sed -e 's/#help //'\n"
  },
  {
    "path": "tests-manual/debugger/debug",
    "content": "#!/bin/bash\nif [ $# -gt 0 ]; then\n\tservice=$1\nelse\n\techo You must pass a service name\n\texit 1\nfi\n\n\nfledge_authenticate() {\n\n    if [[ -f ~/.fledge_token ]]; then\n\tcat ~/.fledge_token\n\texit 0\n    fi\n    fd=0\n    if [[ -t \"$fd\" ]]; then\n\t# We have an interactive shell\n\t    read -p \"Username: \" USERNAME\n\t    read -s -p \"Password: \" PASSWORD\n\t    /bin/echo >/dev/tty\n    fi\n\n    # Get/Updates the rest API URL\n    payload='{ \"username\" : \"'${USERNAME}'\", \"password\" : \"'${PASSWORD}'\" }'\n    result=`curl -X POST -k -s http://localhost:8081/fledge/login -d\"$payload\" || true`\n    if [[ ! \"$result\" =~ \"Logged in successfully\" ]]; then\n\techo \"failed\"\n    else\n    \ttoken=`echo ${result} | tr -d ' ' | grep -o '\"token\".*' | cut -d\":\" -f2 | cut -d\",\" -f1 | sed -e 's/\"//g' -e 's/}//'`\n\techo $token >~/.fledge_token\n\techo $token\n    fi\n}\n\nresult=`curl -k -s http://localhost:8081/fledge/service || true`\nif [[ \"${result}\" == \"401\"* ]]; then\n\ttoken=`fledge_authenticate`\n\tif [[ \"${token}\" =~ \"failed\" ]]; then\n\t\techo \"Authentication failed.\"\n\t\texit -1\n\tfi\n\tresult=`curl -H \"authorization: $token\" -k -s ${REST_API_URL}/fledge/ping || true`\n\texport DEBUG_SERVICE=`curl -H \"authorization: $token\" -k -s http://localhost:8081/fledge/service|jq '.services[] | select (.name == \"'$service'\") | .service_port'`\n\ttype=`curl -H \"authorization: $token\" -k -s http://localhost:8081/fledge/service|jq '.services[] | select (.name == \"'$service'\") | .type' | sed -e 's/\"//g'`\nelse\n\texport DEBUG_SERVICE=`curl -s http://localhost:8081/fledge/service|jq '.services[] | select (.name == \"'$service'\") | .service_port'`\n\ttype=`curl -s http://localhost:8081/fledge/service|jq '.services[] | select (.name == \"'$service'\") | .type' | sed -e 's/\"//g'`\nfi\nif [ \"$type\" = \"Southbound\" ] ; then\n\texport DEBUG_TYPE=\"south\"\nelif [ \"$type\" = \"Northbound\" ]; then\n\texport 
DEBUG_TYPE=\"north\"\nelse\n\techo Only South or North services are currently supported\n\texit 1\nfi\nexport PATH=.:$PATH\nexport SERVICE=${service}\nif [ \"$DEBUG_SERVICE\" = \"\" ]; then\n\techo $service is not the name of a south or north service\nelse\n\tbash --rcfile .debugrc\nfi\n"
  },
  {
    "path": "tests-manual/debugger/detach",
    "content": "#!/bin/bash\n#help \t\tDetach the debugger from the pipeline\ncurl -s -X PUT http://localhost:${DEBUG_SERVICE}/fledge/${DEBUG_TYPE}/debug/detach|jq\n"
  },
  {
    "path": "tests-manual/debugger/isolate",
    "content": "#!/bin/bash\n#help \tIsolate the pipeline from the destination\ncurl -s -X PUT -d'{\"state\":\"discard\"}' http://localhost:${DEBUG_SERVICE}/fledge/${DEBUG_TYPE}/debug/isolate|jq\n"
  },
  {
    "path": "tests-manual/debugger/replay",
    "content": "#!/bin/bash\n#help \t\tReplay the buffered data through the pipeline\ncurl -s -X PUT http://localhost:${DEBUG_SERVICE}/fledge/${DEBUG_TYPE}/debug/replay|jq\n"
  },
  {
    "path": "tests-manual/debugger/resumeIngest",
    "content": "#!/bin/bash\n#help \tResume the flow of data into the pipeline\ncurl -s -X PUT -d'{\"state\":\"resume\"}' http://localhost:${DEBUG_SERVICE}/fledge/${DEBUG_TYPE}/debug/suspend|jq\n"
  },
  {
    "path": "tests-manual/debugger/setBuffer",
    "content": "#!/bin/bash\n#help \tSet the number of readings to hold in each buffer, passing an integer argument\nsize=1\nif [ $# -gt 0 ]; then\n\tsize=$1\nfi\npayload='{\"size\":'$size'}'\ncurl -s -X POST  -d$payload http://localhost:${DEBUG_SERVICE}/fledge/${DEBUG_TYPE}/debug/buffer|jq\n"
  },
  {
    "path": "tests-manual/debugger/state",
    "content": "#!/bin/bash\n#help \t\tReturn the state of the debugger\ncurl -s http://localhost:${DEBUG_SERVICE}/fledge/${DEBUG_TYPE}/debug/state|jq\n"
  },
  {
    "path": "tests-manual/debugger/step",
    "content": "#!/bin/bash\n#help \t\tAllow readings to flow into the pipeline. Pass an optional number of readings to ingest; default to 1 if omitted\nsteps=1\nif [ $# -gt 0 ]; then\n\tsteps=$1\nfi\npayload='{\"steps\":'$steps'}'\ncurl -s -X PUT  -d$payload http://localhost:${DEBUG_SERVICE}/fledge/${DEBUG_TYPE}/debug/step|jq\n"
  },
  {
    "path": "tests-manual/debugger/store",
    "content": "#!/bin/bash\n#help \t\tAllow data to flow out of the pipeline into storage\ncurl -s -X PUT -d'{\"state\":\"store\"}' http://localhost:${DEBUG_SERVICE}/fledge/${DEBUG_TYPE}/debug/isolate|jq\n"
  },
  {
    "path": "tests-manual/debugger/suspendIngest",
    "content": "#!/bin/bash\n#help \tSuspend the ingestion of data into the pipeline\ncurl -s -X PUT -d'{\"state\":\"suspend\"}' http://localhost:${DEBUG_SERVICE}/fledge/${DEBUG_TYPE}/debug/suspend|jq\n"
  }
]